Sample records for large measurement errors

  1. Volumetric error modeling, identification and compensation based on screw theory for a large multi-axis propeller-measuring machine

    NASA Astrophysics Data System (ADS)

    Zhong, Xuemin; Liu, Hongqi; Mao, Xinyong; Li, Bin; He, Songping; Peng, Fangyu

    2018-05-01

    Large multi-axis propeller-measuring machines have two types of geometric error, position-independent geometric errors (PIGEs) and position-dependent geometric errors (PDGEs), which both have significant effects on the volumetric error of the measuring tool relative to the worktable. This paper focuses on modeling, identifying and compensating for the volumetric error of the measuring machine. A volumetric error model in the base coordinate system is established based on screw theory considering all the geometric errors. In order to fully identify all the geometric error parameters, a new method for systematic measurement and identification is proposed. All the PIGEs of adjacent axes and the six PDGEs of the linear axes are identified with a laser tracker using the proposed model. Finally, a volumetric error compensation strategy is presented and an inverse kinematic solution for compensation is proposed. The final measuring and compensation experiments have further verified the efficiency and effectiveness of the measuring and identification method, indicating that the method can be used in volumetric error compensation for large machine tools.

  2. Analysis on the dynamic error for optoelectronic scanning coordinate measurement network

    NASA Astrophysics Data System (ADS)

    Shi, Shendong; Yang, Linghui; Lin, Jiarui; Guo, Siyang; Ren, Yongjie

    2018-01-01

    Large-scale dynamic three-dimensional coordinate measurement is in strong demand in equipment manufacturing. Noted for its high accuracy, scale expandability and multitask parallel measurement, the optoelectronic scanning measurement network has attracted close attention. It is widely used in large-component joining, spacecraft rendezvous and docking simulation, digital shipbuilding and automated-guided-vehicle navigation. At present, most research on optoelectronic scanning measurement networks focuses on static measurement capability; research on dynamic accuracy is insufficient. Limited by the measurement principle, the dynamic error is non-negligible and restricts application. The workshop measurement and positioning system is a representative system that can, in theory, perform dynamic measurement. In this paper we investigate the sources of dynamic error in depth and divide them into two parts: phase error and synchronization error. A dynamic error model is constructed. Based on this model, a simulation of the dynamic error is carried out. The dynamic error is quantified, and its volatility and periodicity are characterized in detail. These results lay a foundation for further accuracy improvement.

  3. A Dual Frequency Carrier Phase Error Difference Checking Algorithm for the GNSS Compass.

    PubMed

    Liu, Shuo; Zhang, Lei; Li, Jian

    2016-11-24

    The performance of the Global Navigation Satellite System (GNSS) compass is related to the quality of carrier phase measurement. Processing the carrier phase error properly is important for improving GNSS compass accuracy. In this work, we propose a dual frequency carrier phase error difference checking algorithm for the GNSS compass. The algorithm aims at eliminating large carrier phase errors in dual frequency double-differenced carrier phase measurements according to the error difference between the two frequencies. The advantage of the proposed algorithm is that it does not need additional environment information and performs well against multiple large errors compared with previous research. The core of the proposed algorithm is removing the geometrical distance from the dual frequency carrier phase measurement, so that the carrier phase error is separated and detectable. We generate the Double-Differenced Geometry-Free (DDGF) measurement according to the characteristic that the carrier phase measurements at different frequencies contain the same geometrical distance. Then, we propose the DDGF detection to detect a large carrier phase error difference between the two frequencies. The theoretical performance of the proposed DDGF detection is analyzed. An open sky test, a man-made multipath test and an urban vehicle test were carried out to evaluate the performance of the proposed algorithm. The results show that the proposed DDGF detection is able to detect large errors in dual frequency carrier phase measurement by checking the error difference between the two frequencies. After the DDGF detection, the accuracy of the baseline vector in the GNSS compass is improved.
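
    The geometry-free check at the heart of the algorithm is easy to sketch numerically. In the toy example below (variable names, noise levels and the robust threshold are all illustrative, not from the paper), the L1 and L2 double-differenced phases, expressed in meters, share the same geometric range, so subtracting them cancels the geometry and exposes the injected large errors:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative double-differenced (DD) carrier phases in meters: each
    # frequency sees the same geometric range plus its own small error.
    geometry = rng.uniform(-20.0, 20.0, 100)        # common geometric term (m)
    dd_l1 = geometry + rng.normal(0, 0.003, 100)    # L1 measurement noise (m)
    dd_l2 = geometry + rng.normal(0, 0.003, 100)    # L2 measurement noise (m)
    dd_l1[[10, 42]] += 0.19                         # inject two large errors

    # Double-Differenced Geometry-Free combination: the geometry cancels,
    # leaving only the error difference between the two frequencies.
    ddgf = dd_l1 - dd_l2

    # Robust (MAD-based) detection of outlying error differences.
    med = np.median(ddgf)
    mad = 1.4826 * np.median(np.abs(ddgf - med))
    flagged = np.where(np.abs(ddgf - med) > 4 * mad)[0]
    print("flagged epochs:", flagged)               # -> [10 42]
    ```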

  4. Design and analysis of a sub-aperture scanning machine for the transmittance measurements of large-aperture optical system

    NASA Astrophysics Data System (ADS)

    He, Yingwei; Li, Ping; Feng, Guojin; Cheng, Li; Wang, Yu; Wu, Houping; Liu, Zilong; Zheng, Chundi; Sha, Dingguo

    2010-11-01

    For measuring the transmittance of large-aperture optical systems, a novel sub-aperture scanning machine with double rotating arms (SSMDA) was designed to produce a sub-aperture beam spot. Full-aperture transmittance measurements can be achieved by applying sub-aperture beam-spot scanning. A mathematical model of the SSMDA based on homogeneous coordinate transformation matrices is established to develop a detailed methodology for analyzing the beam-spot scanning errors. The error analysis considers two fundamental sources of scanning error, namely (1) systematic length errors and (2) systematic rotational errors. With the systematic errors of the parameters given beforehand, the computed scanning errors lie between -0.007 and 0.028 mm for scanning radii no larger than 400.000 mm. The results offer a theoretical and data basis for research on the transmission characteristics of large optical systems.
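
    A minimal sketch of this kind of error analysis, assuming a simple two-revolute-arm chain with hypothetical nominal values (the actual SSMDA geometry and tolerances are not reproduced here): the beam-spot position is propagated through homogeneous transformation matrices, the lengths and angles are then perturbed by small systematic errors, and the spot displacement is read off.

    ```python
    import numpy as np

    def rot_z(theta):
        """Homogeneous rotation about the z axis."""
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, 0, 0], [s, c, 0, 0],
                         [0, 0, 1, 0], [0, 0, 0, 1]])

    def trans_x(L):
        """Homogeneous translation along the x axis."""
        T = np.eye(4)
        T[0, 3] = L
        return T

    def spot(theta1, theta2, L1, L2):
        """Beam-spot position of a double-rotating-arm scanner (toy model)."""
        T = rot_z(theta1) @ trans_x(L1) @ rot_z(theta2) @ trans_x(L2)
        return (T @ np.array([0.0, 0.0, 0.0, 1.0]))[:3]

    # Hypothetical arm lengths (mm) and one scan pose.
    L1, L2 = 250.0, 150.0
    t1, t2 = np.radians(30.0), np.radians(80.0)

    nominal = spot(t1, t2, L1, L2)
    # Inject systematic errors: 5 um per length, 2 arcsec per rotation.
    d_ang = np.radians(2.0 / 3600.0)
    perturbed = spot(t1 + d_ang, t2 + d_ang, L1 + 0.005, L2 + 0.005)
    print("scanning error vector (mm):", perturbed - nominal)
    ```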

  5. Error analysis and correction of lever-type stylus profilometer based on Nelder-Mead Simplex method

    NASA Astrophysics Data System (ADS)

    Hu, Chunbing; Chang, Suping; Li, Bo; Wang, Junwei; Zhang, Zhongyu

    2017-10-01

    Due to its high measurement accuracy and wide range of applications, lever-type stylus profilometry is commonly used in industrial research. However, the error caused by the lever structure has a great influence on profile measurement, so this paper analyzes the error of high-precision, large-range lever-type stylus profilometry. The errors are corrected by the Nelder-Mead simplex method, and the results are verified by calibration against a spherical surface. The results show that this method can effectively reduce the measurement error and improve the accuracy of stylus profilometry in large-scale measurement.
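
    As a hedged illustration of the correction step, the sketch below fits the parameters of an assumed two-parameter lever distortion model (lever length L and gain k, both hypothetical, as is the sphere-calibration setup) using SciPy's Nelder-Mead simplex optimizer:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Simulated calibration: a sphere of known radius scanned by a lever stylus.
    R_SPHERE = 10.0                        # calibration sphere radius (mm)
    x = np.linspace(-4.0, 4.0, 201)
    z_true = np.sqrt(R_SPHERE**2 - x**2)   # true sphere profile

    # Assumed distortion: the lever angle is read out as if it were height.
    L_TRUE, K_TRUE = 60.0, 1.002           # lever length (mm) and gain error
    phi = np.arcsin(z_true / L_TRUE)
    z_meas = K_TRUE * L_TRUE * phi

    def residual(params):
        """Sum of squared errors after applying the candidate correction."""
        L, k = params
        z_corr = L * np.sin(z_meas / (k * L))   # invert the assumed distortion
        return np.sum((z_corr - z_true) ** 2)

    fit = minimize(residual, x0=[50.0, 1.0], method="Nelder-Mead")
    print("estimated L, k:", fit.x, "residual:", fit.fun)
    ```

    With noiseless synthetic data the fit recovers the assumed parameters; a real calibration would add probe noise and a richer distortion model.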

  6. Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan

    2013-01-01

    The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as non-constant variance resulting from systematic errors leaking into random errors, and the lack of prediction capability. Therefore, the multiplicative error model is a better choice.
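
    The two candidate models can be written as Y = X + ε (additive) and ln Y = ln X + ε (multiplicative). A small simulation (all parameters illustrative) reproduces the non-constant-variance symptom: additive residuals grow with rain rate on multiplicative-error data, while log-space residuals stay roughly homoscedastic.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    truth = rng.gamma(shape=0.8, scale=8.0, size=5000)  # daily-precip-like truth (mm)

    # Purely multiplicative measurement error:  Y = X * exp(eps).
    measured = truth * np.exp(rng.normal(0.05, 0.3, truth.size))

    add_resid = measured - truth              # additive view:       Y = X + e
    mul_resid = np.log(measured / truth)      # multiplicative view: lnY = lnX + e

    light = truth < np.median(truth)
    print("additive residual std, light vs heavy rain:",
          add_resid[light].std(), add_resid[~light].std())   # grows with rate
    print("multiplicative residual std, light vs heavy rain:",
          mul_resid[light].std(), mul_resid[~light].std())   # roughly constant
    ```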

  7. Aliasing errors in measurements of beam position and ellipticity

    NASA Astrophysics Data System (ADS)

    Ekdahl, Carl

    2005-09-01

    Beam position monitors (BPMs) are used in accelerators and ion experiments to measure currents, position, and azimuthal asymmetry. These usually consist of discrete arrays of electromagnetic field detectors, with detectors located at several equally spaced azimuthal positions at the beam tube wall. The discrete nature of these arrays introduces systematic errors into the data, independent of uncertainties resulting from signal noise, lack of recording dynamic range, etc. Computer simulations were used to understand and quantify these aliasing errors. If required, aliasing errors can be significantly reduced by employing more than the usual four detectors in the BPMs. These simulations show that the error in measurements of the centroid position of a large beam is indistinguishable from the error in the position of a filament. The simulations also show that aliasing errors in the measurement of beam ellipticity are very large unless the beam is accurately centered. The simulations were used to quantify the aliasing errors in beam parameter measurements during early experiments on the DARHT-II accelerator, demonstrating that they affected the measurements only slightly, if at all.
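
    The filament case is straightforward to reproduce. The sketch below uses the standard 2D image-current distribution for a line beam in a round pipe (detector counts and beam offsets are illustrative) and estimates position from the first discrete Fourier moment of N wall signals; the aliasing error shrinks rapidly as N grows beyond four.

    ```python
    import numpy as np

    def wall_signal(phi, x, y, a=1.0):
        """Image-current density on a pipe of radius a for a line beam at (x, y)."""
        r, th = np.hypot(x, y), np.arctan2(y, x)
        return (a**2 - r**2) / (a**2 + r**2 - 2 * a * r * np.cos(phi - th))

    def bpm_estimate(x, y, n_det, a=1.0):
        """Moment-based position estimate from n_det equally spaced detectors."""
        phi = 2 * np.pi * np.arange(n_det) / n_det
        s = wall_signal(phi, x, y, a)
        return (a * np.sum(s * np.cos(phi)) / np.sum(s),
                a * np.sum(s * np.sin(phi)) / np.sum(s))

    # Aliasing error vs detector count for a beam offset 30% of the pipe radius.
    x_true, y_true = 0.3, 0.1
    for n in (4, 8, 16):
        xh, yh = bpm_estimate(x_true, y_true, n)
        print(f"N={n:2d}: position error = ({xh - x_true:+.5f}, {yh - y_true:+.5f})")
    ```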

  8. Implementation and verification of a four-probe motion error measurement system for a large-scale roll lathe used in hybrid manufacturing

    NASA Astrophysics Data System (ADS)

    Chen, Yuan-Liu; Niu, Zengyuan; Matsuura, Daiki; Lee, Jung Chul; Shimizu, Yuki; Gao, Wei; Oh, Jeong Seok; Park, Chun Hong

    2017-10-01

    In this paper, a four-probe measurement system is implemented and verified for the carriage slide motion error measurement of a large-scale roll lathe used in hybrid manufacturing where a laser machining probe and a diamond cutting tool are placed on two sides of a roll workpiece for manufacturing. The motion error of the carriage slide of the roll lathe is composed of two straightness motion error components and two parallelism motion error components in the vertical and horizontal planes. Four displacement measurement probes, which are mounted on the carriage slide with respect to four opposing sides of the roll workpiece, are employed for the measurement. Firstly, based on the reversal technique, the four probes are moved by the carriage slide to scan the roll workpiece before and after a 180-degree rotation of the roll workpiece. Taking into consideration the fact that the machining accuracy of the lathe is influenced by not only the carriage slide motion error but also the gravity deformation of the large-scale roll workpiece due to its heavy weight, the vertical motion error is thus characterized relating to the deformed axis of the roll workpiece. The horizontal straightness motion error can also be synchronously obtained based on the reversal technique. In addition, based on an error separation algorithm, the vertical and horizontal parallelism motion error components are identified by scanning the rotating roll workpiece at the start and the end positions of the carriage slide, respectively. The feasibility and reliability of the proposed motion error measurement system are demonstrated by the experimental results and the measurement uncertainty analysis.
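
    The reversal step itself fits in a few lines. Assuming an idealized single-probe sign convention (readings are straightness plus roll form before the 180-degree reversal and straightness minus roll form after it; profiles and noise levels are invented), the half-sum and half-difference separate the two error sources:

    ```python
    import numpy as np

    z = np.linspace(0.0, 2.0, 400)               # carriage travel (m)

    # Hypothetical ground truth.
    slide = 3e-6 * np.sin(2 * np.pi * z / 2.0)   # slide straightness error (m)
    roll = 1e-6 * np.sin(2 * np.pi * z / 0.5)    # roll workpiece form error (m)

    # Probe readings before/after rotating the roll by 180 degrees.
    rng = np.random.default_rng(2)
    m1 = slide + roll + rng.normal(0, 5e-8, z.size)
    m2 = slide - roll + rng.normal(0, 5e-8, z.size)

    slide_est = (m1 + m2) / 2                    # roll form cancels
    roll_est = (m1 - m2) / 2                     # slide error cancels
    print("max straightness recovery error (m):",
          np.max(np.abs(slide_est - slide)))
    ```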

  9. Measurement Error and Equating Error in Power Analysis

    ERIC Educational Resources Information Center

    Phillips, Gary W.; Jiang, Tao

    2016-01-01

    Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…

  10. A NEW METHOD TO QUANTIFY AND REDUCE THE NET PROJECTION ERROR IN WHOLE-SOLAR-ACTIVE-REGION PARAMETERS MEASURED FROM VECTOR MAGNETOGRAMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Falconer, David A.; Tiwari, Sanjiv K.; Moore, Ronald L.

    Projection errors limit the use of vector magnetograms of active regions (ARs) far from the disk center. In this Letter, for ARs observed up to 60° from the disk center, we demonstrate a method for measuring and reducing the projection error in the magnitude of any whole-AR parameter that is derived from a vector magnetogram that has been deprojected to the disk center. The method assumes that the center-to-limb curve of the average of the parameter’s absolute values, measured from the disk passage of a large number of ARs and normalized to each AR’s absolute value of the parameter at central meridian, gives the average fractional projection error at each radial distance from the disk center. To demonstrate the method, we use a large set of large-flux ARs and apply the method to a whole-AR parameter that is among the simplest to measure: whole-AR magnetic flux. We measure 30,845 SDO/Helioseismic and Magnetic Imager vector magnetograms covering the disk passage of 272 large-flux ARs, each having whole-AR flux >10²² Mx. We obtain the center-to-limb radial-distance run of the average projection error in measured whole-AR flux from a Chebyshev fit to the radial-distance plot of the 30,845 normalized measured values. The average projection error in the measured whole-AR flux of an AR at a given radial distance is removed by multiplying the measured flux by the correction factor given by the fit. The correction is important for both the study of the evolution of ARs and for improving the accuracy of forecasts of an AR’s major flare/coronal mass ejection productivity.
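
    The fit-and-correct step translates directly into code. The sketch below uses synthetic stand-in data (the quadratic inflation curve, the sample size and the polynomial degree are all assumptions): a Chebyshev series is fitted to normalized flux versus radial distance, and a measurement is divided by the fitted factor, which is equivalent to multiplying by the paper's correction factor.

    ```python
    import numpy as np
    from numpy.polynomial import chebyshev as C

    rng = np.random.default_rng(3)

    # Stand-in for the normalized whole-AR flux measurements: the apparent
    # flux inflates with radial distance r (units of solar radius).
    r = rng.uniform(0.0, 0.87, 3000)          # out to ~60 deg from disk center
    true_factor = 1.0 + 0.35 * r**2           # hypothetical projection-error curve
    norm_flux = true_factor * rng.normal(1.0, 0.08, r.size)

    coeffs = C.chebfit(r, norm_flux, deg=4)   # center-to-limb Chebyshev fit

    def corrected_flux(measured_flux, radial_distance):
        """Remove the average projection error at this radial distance."""
        return measured_flux / C.chebval(radial_distance, coeffs)

    print("fitted inflation factor at r = 0.7:", C.chebval(0.7, coeffs))
    ```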

  11. Application of round grating angle measurement composite error amendment in the online measurement accuracy improvement of large diameter

    NASA Astrophysics Data System (ADS)

    Wang, Biao; Yu, Xiaofen; Li, Qinzhao; Zheng, Yu

    2008-10-01

    Aiming at the influences of round-grating dividing error and of rolling-wheel eccentricity and surface shape errors, this paper provides an amendment method, based on the rolling wheel, that builds a composite error model including all of the influence factors above and then corrects the non-circular angle measurement error of the rolling wheel. Software simulation and experiments were carried out; the results indicate that the composite error amendment method can improve the accuracy of rolling-wheel-based diameter measurement. It has wide application prospects for measurements requiring accuracy better than 5 μm/m.

  12. Influence of ECG measurement accuracy on ECG diagnostic statements.

    PubMed

    Zywietz, C; Celikag, D; Joseph, G

    1996-01-01

    Computer analysis of electrocardiograms (ECGs) provides a large amount of ECG measurement data, which may be used for diagnostic classification and storage in ECG databases. Until now, neither error limits for ECG measurements have been specified nor has their influence on diagnostic statements been systematically investigated. An analytical method is presented to estimate the influence of measurement errors on the accuracy of diagnostic ECG statements. Systematic (offset) errors will usually result in an increase of false positive or false negative statements, since they cause a shift of the working point on the receiver operating characteristic curve. Measurement error dispersion broadens the distribution function of discriminative measurement parameters and therefore usually increases the overlap between discriminative parameters. This results in a flattening of the receiver operating characteristic curve and an increase of false positive and false negative classifications. The method developed has been applied to ECG conduction defect diagnoses using the proposed International Electrotechnical Commission interval measurement tolerance limits. These limits appear too large, because more than 30% false positive atrial conduction defect statements and 10-18% false intraventricular conduction defect statements could be expected due to tolerated measurement errors. To assure the long-term usability of ECG measurement databases, it is recommended that systems provide their error tolerance limits obtained on a defined test set.

  13. Error Analysis and Validation for Insar Height Measurement Induced by Slant Range

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Li, T.; Fan, W.; Geng, X.

    2018-04-01

    InSAR is an important technique for large-area DEM extraction. Several factors significantly influence the accuracy of height measurement. In this research, the effect of slant-range measurement error on InSAR height measurement was analyzed and discussed. Based on the theory of InSAR height measurement, an error propagation model was derived assuming no coupling among the different factors, directly characterising the relationship between slant-range error and height measurement error. A theory-based analysis in combination with TanDEM-X parameters was then implemented to quantitatively evaluate the influence of slant-range error on height measurement. In addition, a simulation validation of the InSAR error model induced by slant range was performed on the basis of the SRTM DEM and TanDEM-X parameters. The spatial distribution characteristics and error propagation rule of InSAR height measurement were further discussed and evaluated.
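
    As a rough worked example of such propagation, assume the simplified flat-geometry relation h = H − R·cos θ (this is a sketch, not the paper's full model; the look angles are only TanDEM-X-like guesses). Then ∂h/∂R = −cos θ, so the height error per metre of slant-range error is just the cosine of the look angle:

    ```python
    import numpy as np

    # Simplified flat-geometry InSAR relation: h = H - R*cos(look),
    # hence sigma_h = |cos(look)| * sigma_R.
    look = np.radians(np.linspace(20.0, 50.0, 7))   # assumed look angles
    sigma_R = 1.0                                   # assumed slant-range error (m)

    for th, sh in zip(np.degrees(look), np.abs(np.cos(look)) * sigma_R):
        print(f"look angle {th:4.1f} deg -> {sh:.3f} m height error per 1 m range error")
    ```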

  14. Image Reconstruction for Interferometric Imaging of Geosynchronous Satellites

    NASA Astrophysics Data System (ADS)

    DeSantis, Zachary J.

    Imaging distant objects at high resolution has always been challenging due to the diffraction limit. Larger apertures improve the resolution, but at some point the cost of engineering, building, and correcting phase aberrations of large apertures becomes prohibitive. Interferometric imaging uses the Van Cittert-Zernike theorem to form an image from measurements of spatial coherence. This effectively allows the synthesis of a large aperture from two or more smaller telescopes to improve the resolution. We apply this method to imaging geosynchronous satellites with a ground-based system. Imaging a dim object from the ground presents unique challenges. The atmosphere introduces errors into the phase measurements. The measurements are taken simultaneously across a large bandwidth of light; the atmospheric piston error therefore manifests as a linear phase error across the spectral measurements. Because the objects are faint, many of the measurements are expected to have a poor signal-to-noise ratio (SNR). This rules out commonly used techniques such as closure phase, a standard technique in astronomical interferometric imaging for making partial phase measurements in the presence of atmospheric error. The bulk of our work has been focused on forming an image, using sub-Nyquist sampled data, in the presence of these linear phase errors without relying on closure phase techniques. We present an image reconstruction algorithm that successfully forms an image in the presence of these linear phase errors. We demonstrate the algorithm's success in both simulation and laboratory experiments.

  15. Precision of Four Acoustic Bone Measurement Devices

    NASA Technical Reports Server (NTRS)

    Miller, Christopher; Feiveson, Alan H.; Shackelford, Linda; Rianon, Nahida; LeBlanc, Adrian

    2000-01-01

    Though many studies have quantified the precision of various acoustic bone measurement devices, it is difficult to directly compare the results among the studies, because they used disparate subject pools, did not specify the estimation methodology, or did not use consistent definitions for various precision characteristics. In this study, we used a repeated measures design protocol to directly determine the precision characteristics of four acoustic bone measurement devices: the Mechanical Response Tissue Analyzer (MRTA), the UBA-575+, the SoundScan 2000 (S2000), and the Sahara Ultrasound Bone Analyzer. Ten men and ten women were scanned on all four devices by two different operators at five discrete time points: Week 1, Week 2, Week 3, Month 3 and Month 6. The percent coefficient of variation (%CV) and standardized coefficient of variation were computed for the following precision characteristics: interoperator effect, operator-subject interaction, short-term error variance, and long-term drift. The MRTA had high interoperator errors for its ulnar and tibial stiffness measures and a large long-term drift in its tibial stiffness measurement. The UBA-575+ exhibited large short-term error variances and long-term drift for all three of its measurements. The S2000's tibial speed of sound measurement showed a high short-term error variance and a significant operator-subject interaction but very good values (< 1%) for the other precision characteristics. The Sahara seemed to have the best overall performance, but was hampered by a large %CV for short-term error variance in its broadband ultrasound attenuation measure.
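
    For reference, a minimal sketch of the short-term %CV computation used in precision studies of this kind (synthetic numbers; the study's mixed-model estimators for operator effects and drift are not reproduced): a per-subject %CV across repeat scans is pooled as an RMS over subjects.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # 20 subjects, 5 repeat scans each; device noise is hypothetical.
    true_values = rng.normal(100.0, 10.0, 20)
    scans = true_values[:, None] + rng.normal(0.0, 1.5, (20, 5))

    cv_percent = 100.0 * scans.std(axis=1, ddof=1) / scans.mean(axis=1)
    print(f"device %CV (RMS over subjects): {np.sqrt(np.mean(cv_percent**2)):.2f}%")
    ```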

  16. Precision of Four Acoustic Bone Measurement Devices

    NASA Technical Reports Server (NTRS)

    Miller, Christopher; Rianon, Nahid; Feiveson, Alan; Shackelford, Linda; LeBlanc, Adrian

    2000-01-01

    Though many studies have quantified the precision of various acoustic bone measurement devices, it is difficult to directly compare the results among the studies, because they used disparate subject pools, did not specify the estimation methodology, or did not use consistent definitions for various precision characteristics. In this study, we used a repeated measures design protocol to directly determine the precision characteristics of four acoustic bone measurement devices: the Mechanical Response Tissue Analyzer (MRTA), the UBA-575+, the SoundScan 2000 (S2000), and the Sahara Ultrasound Bone Analyzer. Ten men and ten women were scanned on all four devices by two different operators at five discrete time points: Week 1, Week 2, Week 3, Month 3 and Month 6. The percent coefficient of variation (%CV) and standardized coefficient of variation were computed for the following precision characteristics: interoperator effect, operator-subject interaction, short-term error variance, and long-term drift. The MRTA had high interoperator errors for its ulnar and tibial stiffness measures and a large long-term drift in its tibial stiffness measurement. The UBA-575+ exhibited large short-term error variances and long-term drift for all three of its measurements. The S2000's tibial speed of sound measurement showed a high short-term error variance and a significant operator-subject interaction but very good values (less than 1%) for the other precision characteristics. The Sahara seemed to have the best overall performance, but was hampered by a large %CV for short-term error variance in its broadband ultrasound attenuation measure.

  17. Subaperture test of wavefront error of large telescopes: error sources and stitching performance simulations

    NASA Astrophysics Data System (ADS)

    Chen, Shanyong; Li, Shengyi; Wang, Guilin

    2014-11-01

    The wavefront error of large telescopes needs to be measured to check the system quality and also to estimate the misalignment of the telescope optics, including the primary, the secondary and so on. It is usually measured with a focal-plane interferometer and an autocollimator flat (ACF) of the same aperture as the telescope. However, this is challenging for meter-class telescopes due to the high cost and technological difficulty of producing the large ACF. Subaperture testing with a smaller ACF is hence proposed, in combination with advanced stitching algorithms. Major error sources include the surface error of the ACF, misalignment of the ACF and measurement noise. Different error sources have different impacts on the wavefront error. Basically, the surface error of the ACF behaves like a systematic error, and the astigmatism will be accumulated and enlarged if the azimuth of the subapertures remains fixed. It is difficult to accurately calibrate the ACF because it suffers considerable deformation induced by gravity or mechanical clamping force. Therefore a self-calibrated stitching algorithm is employed to separate the ACF surface error from the subaperture wavefront error. We suggest the ACF be rotated around the optical axis of the telescope during the subaperture test. The algorithm is also able to correct the subaperture tip-tilt based on the overlapping consistency. Since all subaperture measurements are obtained in the same imaging plane, the lateral shift of the subapertures is always known and the real overlapping points can be recognized in this plane. Therefore the lateral positioning error of subapertures has no impact on the stitched wavefront. In contrast, the angular positioning error changes the azimuth of the ACF and hence the systematic error. We propose an angularly uneven layout of subapertures to minimize the stitching error, which runs counter to common intuition. Finally, measurement noise can never be corrected, but it can be suppressed by averaging and environmental control. We simulate the performance of the stitching algorithm in dealing with the surface error and misalignment of the ACF, and with noise suppression, which provides guidelines for the optomechanical design of the stitching test system.

  18. Center-to-Limb Variation of Deprojection Errors in SDO/HMI Vector Magnetograms

    NASA Astrophysics Data System (ADS)

    Falconer, David; Moore, Ronald; Barghouty, Nasser; Tiwari, Sanjiv K.; Khazanov, Igor

    2015-04-01

    For use in investigating the magnetic causes of coronal heating in active regions and for use in forecasting an active region’s productivity of major CME/flare eruptions, we have evaluated various sunspot-active-region magnetic measures (e.g., total magnetic flux, free-magnetic-energy proxies, magnetic twist measures) from HMI Active Region Patches (HARPs) after the HARP has been deprojected to disk center. From a few tens of thousand HARP vector magnetograms (of a few hundred sunspot active regions) that have been deprojected to disk center, we have determined that the errors in the whole-HARP magnetic measures from deprojection are negligibly small for HARPs deprojected from distances out to 45 heliocentric degrees. For some purposes the errors from deprojection are tolerable out to 60 degrees. We obtained this result by the following process. For each whole-HARP magnetic measure: 1) for each HARP disk passage, normalize the measured values by the measured value for that HARP at central meridian; 2) then for each 0.05 Rs annulus, average the values from all the HARPs in the annulus. This results in an average normalized value as a function of radius for each measure. Assuming no deprojection errors and that, among a large set of HARPs, the measure is as likely to decrease as to increase with HARP distance from disk center, the average of each annulus is expected to be unity, and, for a statistically large sample, the amount of deviation of the average from unity estimates the error from deprojection effects. The deprojection errors arise from 1) errors in the transverse field being deprojected into the vertical field for HARPs observed at large distances from disk center, 2) increasingly larger foreshortening at larger distances from disk center, and 3) possible errors in transverse-field-direction ambiguity resolution. From the compiled set of measured values of whole-HARP magnetic nonpotentiality parameters from deprojected HARPs, we have examined the relation between each nonpotentiality parameter and the speed of CMEs from the measured active regions. For several different nonpotentiality parameters we find there is an upper limit to the CME speed, the limit increasing as the value of the parameter increases.

  19. CME Velocity and Acceleration Error Estimates Using the Bootstrap Method

    NASA Technical Reports Server (NTRS)

    Michalek, Grzegorz; Gopalswamy, Nat; Yashiro, Seiji

    2017-01-01

    The bootstrap method is used to determine errors of basic attributes of coronal mass ejections (CMEs) visually identified in images obtained by the Solar and Heliospheric Observatory (SOHO) mission's Large Angle and Spectrometric Coronagraph (LASCO) instruments. The basic parameters of CMEs are stored, among others, in a database known as the SOHO/LASCO CME catalog and are widely employed for many research studies. The basic attributes of CMEs (e.g. velocity and acceleration) are obtained from manually generated height-time plots. The subjective nature of manual measurements introduces random errors that are difficult to quantify. In many studies the impact of such measurement errors is overlooked. In this study we present a new possibility to estimate measurement errors in the basic attributes of CMEs. This approach is a computer-intensive method because it requires repeating the original data analysis procedure several times using replicate datasets; it is commonly called the bootstrap method in the literature. We show that the bootstrap approach can be used to estimate the errors of the basic attributes of CMEs having moderately large numbers of height-time measurements. The velocity errors are in the vast majority of cases small and depend mostly on the number of height-time points measured for a particular event. In the case of acceleration, the errors are significant, and for more than half of all CMEs, they are larger than the acceleration itself.
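
    The bootstrap recipe described here is short to write down. In the sketch below (synthetic height-time points for a single hypothetical CME; the number of replicates and the noise level are arbitrary choices), velocity comes from a linear fit, acceleration from a quadratic fit, and the spread of refits over resampled points gives the error estimates:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Synthetic height-time points: t in hours, h in solar radii,
    # true speed ~450 km/s (~2.33 Rs/hour) plus measurement scatter.
    t = np.linspace(0.0, 3.0, 12)
    h = 3.0 + (450.0 * 3600.0 / 6.96e5) * t + rng.normal(0, 0.15, t.size)

    def fit_kinematics(tt, hh):
        """Velocity and acceleration from polynomial fits to height-time data."""
        v = np.polyfit(tt, hh, 1)[0]          # linear-fit slope -> velocity (Rs/h)
        a = 2.0 * np.polyfit(tt, hh, 2)[0]    # 2x quadratic coeff -> acceleration
        return v, a

    # Bootstrap: refit on points resampled with replacement; spread = error bar.
    reps = []
    for _ in range(2000):
        idx = rng.integers(0, t.size, t.size)
        reps.append(fit_kinematics(t[idx], h[idx]))
    v_err, a_err = np.std(reps, axis=0)
    print(f"velocity error ~ {v_err:.3f} Rs/h, acceleration error ~ {a_err:.3f} Rs/h^2")
    ```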

  20. Beyond alpha: an empirical examination of the effects of different sources of measurement error on reliability estimates for measures of individual differences constructs.

    PubMed

    Schmidt, Frank L; Le, Huy; Ilies, Remus

    2003-06-01

    On the basis of an empirical study of measures of constructs from the cognitive domain, the personality domain, and the domain of affective traits, the authors of this study examine the implications of transient measurement error for the measurement of frequently studied individual differences variables. The authors clarify relevant reliability concepts as they relate to transient error and present a procedure for estimating the coefficient of equivalence and stability (L. J. Cronbach, 1947), the only classical reliability coefficient that assesses all 3 major sources of measurement error (random response, transient, and specific factor errors). The authors conclude that transient error exists in all 3 trait domains and is especially large in the domain of affective traits. Their findings indicate that the nearly universal use of the coefficient of equivalence (Cronbach's alpha; L. J. Cronbach, 1951), which fails to assess transient error, leads to overestimates of reliability and undercorrections for biases due to measurement error.

  1. Statistical analysis of modeling error in structural dynamic systems

    NASA Technical Reports Server (NTRS)

    Hasselman, T. K.; Chrostowski, J. D.

    1990-01-01

    The paper presents a generic statistical model of the (total) modeling error for conventional space structures in their launch configuration. Modeling error is defined as the difference between analytical prediction and experimental measurement. It is represented by the differences between predicted and measured real eigenvalues and eigenvectors. Comparisons are made between pre-test and post-test models. Total modeling error is then subdivided into measurement error, experimental error and 'pure' modeling error, and comparisons made between measurement error and total modeling error. The generic statistical model presented in this paper is based on the first four global (primary structure) modes of four different structures belonging to the generic category of Conventional Space Structures (specifically excluding large truss-type space structures). As such, it may be used to evaluate the uncertainty of predicted mode shapes and frequencies, sinusoidal response, or the transient response of other structures belonging to the same generic category.

  2. Errors in Measuring Water Potentials of Small Samples Resulting from Water Adsorption by Thermocouple Psychrometer Chambers 1

    PubMed Central

    Bennett, Jerry M.; Cortes, Peter M.

    1985-01-01

    The adsorption of water by thermocouple psychrometer assemblies is known to cause errors in the determination of water potential. Experiments were conducted to evaluate the effect of sample size and psychrometer chamber volume on measured water potentials of leaf discs, leaf segments, and sodium chloride solutions. Reasonable agreement was found between soybean (Glycine max L. Merr.) leaf water potentials measured on 5-millimeter radius leaf discs and large leaf segments. Results indicated that while errors due to adsorption may be significant when using small volumes of tissue, if sufficient tissue is used the errors are negligible. Because of the relationship between water potential and volume in plant tissue, the errors due to adsorption were larger with turgid tissue. Large psychrometers which were sealed into the sample chamber with latex tubing appeared to adsorb more water than those sealed with flexible plastic tubing. Estimates are provided of the amounts of water adsorbed by two different psychrometer assemblies and the amount of tissue sufficient for accurate measurements of leaf water potential with these assemblies. It is also demonstrated that water adsorption problems may have generated low water potential values which in prior studies have been attributed to large cut surface area to volume ratios. PMID:16664367

  3. Errors in measuring water potentials of small samples resulting from water adsorption by thermocouple psychrometer chambers.

    PubMed

    Bennett, J M; Cortes, P M

    1985-09-01

    The adsorption of water by thermocouple psychrometer assemblies is known to cause errors in the determination of water potential. Experiments were conducted to evaluate the effect of sample size and psychrometer chamber volume on measured water potentials of leaf discs, leaf segments, and sodium chloride solutions. Reasonable agreement was found between soybean (Glycine max L. Merr.) leaf water potentials measured on 5-millimeter radius leaf discs and large leaf segments. Results indicated that while errors due to adsorption may be significant when using small volumes of tissue, if sufficient tissue is used the errors are negligible. Because of the relationship between water potential and volume in plant tissue, the errors due to adsorption were larger with turgid tissue. Large psychrometers which were sealed into the sample chamber with latex tubing appeared to adsorb more water than those sealed with flexible plastic tubing. Estimates are provided of the amounts of water adsorbed by two different psychrometer assemblies and the amount of tissue sufficient for accurate measurements of leaf water potential with these assemblies. It is also demonstrated that water adsorption problems may have generated low water potential values which in prior studies have been attributed to large cut surface area to volume ratios.

  4. Fluorescence errors in integrating sphere measurements of remote phosphor type LED light sources

    NASA Astrophysics Data System (ADS)

    Keppens, A.; Zong, Y.; Podobedov, V. B.; Nadal, M. E.; Hanselaer, P.; Ohno, Y.

    2011-05-01

    The relative spectral radiant flux error caused by phosphor fluorescence during integrating sphere measurements is investigated both theoretically and experimentally. Integrating sphere and goniophotometer measurements are compared and used for model validation, while a case study provides additional clarification. Criteria for reducing fluorescence errors to a degree of negligibility as well as a fluorescence error correction method based on simple matrix algebra are presented. Only remote phosphor type LED light sources are studied because of their large phosphor surfaces and high application potential in general lighting.

  5. A cautionary note on the use of some mass flow controllers

    NASA Astrophysics Data System (ADS)

    Weinheimer, Andrew J.; Ridley, Brian A.

    1990-06-01

    Commercial mass flow controllers are widely used in atmospheric research where precise and constant gas flows are required. We have determined, however, that some commonly used controllers are far more sensitive to ambient pressure than is acknowledged in the literature of the manufacturers. Since a flow error can lead directly to a measurement error of the same magnitude, this is a matter of great concern. Indeed, in our particular application, were we not aware of this problem, our measurements would be subject to a systematic error that increased with altitude (i.e., a drift), up to a factor of 2 at the highest altitudes (~37 km). In this note we present laboratory measurements of the errors of two brands of flow controllers when operated at pressures down to a few millibars. The errors are as large as a factor of 2 to 3 and depend not simply on the ambient pressure at a given time, but also on the pressure history. In addition there is a large dependence on flow setting. In light of these flow errors, some past measurements of chemical species in the stratosphere will need to be revised.

  6. Determinants of Wealth Fluctuation: Changes in Hard-To-Measure Economic Variables in a Panel Study

    PubMed Central

    Pfeffer, Fabian T.; Griffin, Jamie

    2017-01-01

    Measuring fluctuation in families’ economic conditions is the raison d’être of household panel studies. Accordingly, a particularly challenging critique is that extreme fluctuation in measured economic characteristics might indicate compounding measurement error rather than actual changes in families’ economic wellbeing. In this article, we address this claim by moving beyond the assumption that particularly large fluctuation in economic conditions might be too large to be realistic. Instead, we examine predictors of large fluctuation, capturing sources related to actual socio-economic changes as well as potential sources of measurement error. Using the Panel Study of Income Dynamics, we study between-wave changes in a dimension of economic wellbeing that is especially hard to measure, namely, net worth as an indicator of total family wealth. Our results demonstrate that even very large between-wave changes in net worth can be attributed to actual socio-economic and demographic processes. We do, however, also identify a potential source of measurement error that contributes to large wealth fluctuation, namely, the treatment of incomplete information, presenting a pervasive challenge for any longitudinal survey that includes questions on economic assets. Our results point to ways for improving wealth variables both in the data collection process (e.g., by measuring active savings) and in data processing (e.g., by improving imputation algorithms). PMID:28316752

  7. Big Data and Large Sample Size: A Cautionary Note on the Potential for Bias

    PubMed Central

    Chambers, David A.; Glasgow, Russell E.

    2014-01-01

    A number of commentaries have suggested that large studies are more reliable than smaller studies, and there is a growing interest in the analysis of “big data” that integrates information from many thousands of persons and/or different data sources. We consider a variety of biases that are likely in the era of big data, including sampling error, measurement error, multiple comparisons errors, aggregation error, and errors associated with the systematic exclusion of information. Using examples from epidemiology, health services research, studies on determinants of health, and clinical trials, we conclude that it is necessary to exercise greater caution to be sure that big sample size does not lead to big inferential errors. Despite the advantages of big studies, large sample size can magnify the bias associated with error resulting from sampling or study design. Clin Trans Sci 2014; Volume #: 1–5 PMID:25043853

  8. Assessment of Systematic Measurement Errors for Acoustic Travel-Time Tomography of the Atmosphere

    DTIC Science & Technology

    2013-01-01

    measurements include assessment of the time delays in electronic circuits and mechanical hardware (e.g., drivers and microphones) of a tomography array ... hardware and electronic circuits of the tomography array and errors in synchronization of the transmitted and recorded signals. For example, if ... coordinates can be as large as 30 cm. These errors are equivalent to systematic errors in the travel times of 0.9 ms. Third, loudspeakers which are used ...
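
    The quoted figures are consistent with a one-line calculation (assuming a nominal sound speed of roughly 340 m/s, which the snippet does not state):

    ```python
    # A 30 cm transducer-coordinate error maps directly into travel time.
    c = 340.0     # assumed nominal speed of sound in air (m/s)
    dx = 0.30     # transducer position error from the report (m)
    print(f"equivalent travel-time error ~ {dx / c * 1e3:.1f} ms")   # ~0.9 ms
    ```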

  9. Techniques for Down-Sampling a Measured Surface Height Map for Model Validation

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2012-01-01

    This software allows one to down-sample a measured surface height map for model validation, not only without introducing any re-sampling errors, but also while eliminating existing measurement noise and measurement errors. These two new techniques can be used in all optical model validation processes involving large space optical surfaces.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newman, Jennifer F.; Clifton, Andrew

    Currently, cup anemometers on meteorological towers are used to measure wind speeds and turbulence intensity to make decisions about wind turbine class and site suitability; however, as modern turbine hub heights increase and wind energy expands to complex and remote sites, it becomes more difficult and costly to install meteorological towers at potential sites. As a result, remote-sensing devices (e.g., lidars) are now commonly used by wind farm managers and researchers to estimate the flow field at heights spanned by a turbine. Although lidars can accurately estimate mean wind speeds and wind directions, there is still a large amount of uncertainty surrounding the measurement of turbulence using these devices. Errors in lidar turbulence estimates are caused by a variety of factors, including instrument noise, volume averaging, and variance contamination, in which the magnitude of these factors is highly dependent on measurement height and atmospheric stability. As turbulence has a large impact on wind power production, errors in turbulence measurements will translate into errors in wind power prediction. The impact of using lidars rather than cup anemometers for wind power prediction must be understood if lidars are to be considered a viable alternative to cup anemometers. In this poster, the sensitivity of power prediction error to typical lidar turbulence measurement errors is assessed. Turbulence estimates from a vertically profiling WINDCUBE v2 lidar are compared to high-resolution sonic anemometer measurements at field sites in Oklahoma and Colorado to determine the degree of lidar turbulence error that can be expected under different atmospheric conditions. These errors are then incorporated into a power prediction model to estimate the sensitivity of power prediction error to turbulence measurement error. Power prediction models, including the standard binning method and a random forest method, were developed using data from the aeroelastic simulator FAST for a 1.5 MW turbine. The impact of lidar turbulence error on the predicted power from these different models is examined to determine the degree of turbulence measurement accuracy needed for accurate power prediction.

  11. The influence of non-rigid anatomy and patient positioning on endoscopy-CT image registration in the head and neck.

    PubMed

    Ingram, W Scott; Yang, Jinzhong; Wendt, Richard; Beadle, Beth M; Rao, Arvind; Wang, Xin A; Court, Laurence E

    2017-08-01

    To assess the influence of non-rigid anatomy and differences in patient positioning between CT acquisition and endoscopic examination on endoscopy-CT image registration in the head and neck. Radiotherapy planning CTs and 31-35 daily treatment-room CTs were acquired for nineteen patients. Diagnostic CTs were acquired for thirteen of the patients. The surfaces of the airways were segmented on all scans and triangular meshes were created to render virtual endoscopic images with a calibrated pinhole model of an endoscope. The virtual images were used to take projective measurements throughout the meshes, with reference measurements defined as those taken on the planning CTs and test measurements defined as those taken on the daily or diagnostic CTs. The influence of non-rigid anatomy was quantified by 3D distance errors between reference and test measurements on the daily CTs, and the influence of patient positioning was quantified by 3D distance errors between reference and test measurements on the diagnostic CTs. The daily CT measurements were also used to investigate the influences of camera-to-surface distance, surface angle, and the interval of time between scans. Average errors in the daily CTs were 0.36 ± 0.61 cm in the nasal cavity, 0.58 ± 0.83 cm in the naso- and oropharynx, and 0.47 ± 0.73 cm in the hypopharynx and larynx. Average errors in the diagnostic CTs in those regions were 0.52 ± 0.69 cm, 0.65 ± 0.84 cm, and 0.69 ± 0.90 cm, respectively. All CTs had errors heavily skewed towards 0, albeit with large outliers. Large camera-to-surface distances were found to increase the errors, but the angle at which the camera viewed the surface had no effect. The errors in the Day 1 and Day 15 CTs were found to be significantly smaller than those in the Day 30 CTs (P < 0.05). Inconsistencies of patient positioning have a larger influence than non-rigid anatomy on projective measurement errors. In general, these errors are largest when the camera is in the superior pharynx, where it sees large distances and a lot of muscle motion. The errors are larger when the interval of time between CT acquisitions is longer, which suggests that the interval of time between the CT acquisition and the endoscopic examination should be kept short. The median errors found in this study are comparable to acceptable levels of uncertainty in deformable CT registration. Large errors are possible even when image alignment is very good, indicating that projective measurements must be made carefully to avoid these outliers. © 2017 American Association of Physicists in Medicine.

  12. Use of micro-lightguide spectrophotometry for evaluation of microcirculation in the small and large intestines of horses without gastrointestinal disease.

    PubMed

    Reichert, Christof; Kästner, Sabine B R; Hopster, Klaus; Rohn, Karl; Rötting, Anna K

    2014-11-01

    To evaluate the use of a micro-lightguide tissue spectrophotometer for measurement of tissue oxygenation and blood flow in the small and large intestines of horses under anesthesia. 13 adult horses without gastrointestinal disease. Horses were anesthetized and placed in dorsal recumbency. Ventral midline laparotomy was performed. Intestinal segments were exteriorized to obtain measurements. Spectrophotometric measurements of tissue oxygenation and regional blood flow of the jejunum and pelvic flexure were obtained under various conditions that were considered to have a potential effect on measurement accuracy. In addition, arterial oxygen saturation at the measuring sites was determined by use of pulse oximetry. 12,791 single measurements of oxygen saturation, relative amount of hemoglobin, and blood flow were obtained. Errors occurred in 381 of 12,791 (2.98%) measurements. Most measurement errors occurred when surgical lights were directed at the measuring site; covering the probe with the surgeon's hand did not eliminate this error source. No measurement errors were observed when the probe was positioned on the intestinal wall with room light, at the mesenteric side, or between the mesenteric and antimesenteric side. Values for blood flow had higher variability, and this was most likely caused by motion artifacts of the intestines. The micro-lightguide spectrophotometry system was easy to use on the small and large intestines of horses and provided rapid evaluation of the microcirculation. Results indicated that measurements should be performed with room light only and intestinal motion should be minimized.

  13. Correcting For Seed-Particle Lag In LV Measurements

    NASA Technical Reports Server (NTRS)

    Jones, Gregory S.; Gartrell, Luther R.; Kamemoto, Derek Y.

    1994-01-01

    Two experiments were conducted to evaluate the effects of seed-particle size on errors in LV measurements of mean flows. Both theoretical and conventional experimental methods were used to evaluate the errors. The first experiment focused on measurement along the decelerating stagnation streamline of low-speed flow around a circular cylinder with a two-dimensional afterbody. The second was performed in transonic flow and involved measurement along the decelerating stagnation streamline of a hemisphere with a cylindrical afterbody. It was concluded that mean-quantity LV measurements are subject to large errors directly attributable to particle size. Predictions of particle-response theory showed good agreement with experimental results, indicating that the velocity-error-correction technique used in the study is viable for increasing the accuracy of laser velocimetry measurements. The technique is simple and useful in any research facility in which flow velocities are measured.
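
    The particle-response theory invoked here is typically the first-order Stokes-drag lag equation dv_p/dt = (v_f − v_p)/τ with τ = ρ_p d²/(18 μ). The sketch below integrates it along the stagnation streamline of potential flow past a cylinder (all flow and seed parameters are illustrative, not the experiments' values) and shows the velocity error growing with particle diameter:

    ```python
    import numpy as np

    rho_p, mu = 1000.0, 1.8e-5        # seed density (kg/m^3), air viscosity (Pa s)

    def lag_error(d, u0=50.0, rcyl=0.05):
        """Particle-minus-fluid velocity near a cylinder stagnation point (m/s)."""
        tau = rho_p * d**2 / (18.0 * mu)       # Stokes response time
        x, dt = 5.0 * rcyl, 1e-6
        vp = u0 * (1.0 - (rcyl / x) ** 2)      # start in equilibrium upstream
        while x > 1.05 * rcyl:
            vf = u0 * (1.0 - (rcyl / x) ** 2)  # axial potential-flow speed
            vp = vf + (vp - vf) * np.exp(-dt / tau)  # exact first-order update
            x -= vp * dt
        return vp - vf                # particle overshoots the decelerating fluid

    for d in (0.5e-6, 2e-6, 10e-6):
        print(f"d = {d * 1e6:4.1f} um: velocity error ~ {lag_error(d):6.3f} m/s")
    ```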

  14. Error Modeling and Experimental Study of a Flexible Joint 6-UPUR Parallel Six-Axis Force Sensor.

    PubMed

    Zhao, Yanzhi; Cao, Yachao; Zhang, Caifeng; Zhang, Dan; Zhang, Jie

    2017-09-29

    By combining a parallel mechanism with integrated flexible joints, a sensor with a large measurement range and high accuracy is realized. However, the main errors of the sensor involve not only assembly errors but also deformation errors of its flexible legs. Based on a flexible-joint 6-UPUR (a mechanism configuration where U = universal joint, P = prismatic joint, R = revolute joint) parallel six-axis force sensor developed in earlier work, this paper models and analyzes the assembly and deformation errors of the resulting large-range, high-accuracy sensors. First, an assembly error model is established based on the imaginary kinematic joint method and the Denavit-Hartenberg (D-H) method. Next, a stiffness model is built to solve the stiffness matrix, from which the deformation error model of the sensor is obtained. Then, the first-order kinematic influence coefficient matrix is solved with the synthetic error taken into account. Finally, measurement and calibration experiments are performed on the sensor, composed of its hardware and software systems. Forced deformation of the force-measuring platform is detected by laser interferometry and analyzed to verify the correctness of the synthetic error model. In addition, the first-order kinematic influence coefficient matrix under actual conditions is calculated. By comparing the condition numbers and square norms of the coefficient matrices, it is concluded that taking the synthetic error into account is very important at the design stage and helps improve the sensor's performance to meet the needs of actual working environments.

  15. Error Modeling and Experimental Study of a Flexible Joint 6-UPUR Parallel Six-Axis Force Sensor

    PubMed Central

    Zhao, Yanzhi; Cao, Yachao; Zhang, Caifeng; Zhang, Dan; Zhang, Jie

    2017-01-01

    By combining a parallel mechanism with integrated flexible joints, a sensor with a large measurement range and high accuracy is realized. However, the main errors of the sensor involve not only assembly errors but also deformation errors of its flexible legs. Based on a flexible-joint 6-UPUR (a mechanism configuration where U = universal joint, P = prismatic joint, R = revolute joint) parallel six-axis force sensor developed in earlier work, this paper models and analyzes the assembly and deformation errors of the resulting large-range, high-accuracy sensors. First, an assembly error model is established based on the imaginary kinematic joint method and the Denavit-Hartenberg (D-H) method. Next, a stiffness model is built to solve the stiffness matrix, from which the deformation error model of the sensor is obtained. Then, the first-order kinematic influence coefficient matrix is solved with the synthetic error taken into account. Finally, measurement and calibration experiments are performed on the sensor, composed of its hardware and software systems. Forced deformation of the force-measuring platform is detected by laser interferometry and analyzed to verify the correctness of the synthetic error model. In addition, the first-order kinematic influence coefficient matrix under actual conditions is calculated. By comparing the condition numbers and square norms of the coefficient matrices, it is concluded that taking the synthetic error into account is very important at the design stage and helps improve the sensor's performance to meet the needs of actual working environments. PMID:28961209

  16. Procrustes-based geometric morphometrics on MRI images: An example of inter-operator bias in 3D landmarks and its impact on big datasets.

    PubMed

    Daboul, Amro; Ivanovska, Tatyana; Bülow, Robin; Biffar, Reiner; Cardini, Andrea

    2018-01-01

    Using 3D anatomical landmarks from adult human head MRIs, we assessed the magnitude of inter-operator differences in Procrustes-based geometric morphometric analyses. An in-depth analysis of both absolute and relative error was performed in a subsample of individuals with replicated digitization by three different operators. The effect of inter-operator differences was also explored in a large sample of more than 900 individuals. Although absolute error was not unusual for MRI measurements, including bone landmarks, shape was particularly affected by differences among operators, with up to more than 30% of sample variation accounted for by this type of error. The magnitude of the bias was such that it dominated the main pattern of bone and total (all landmarks included) shape variation, largely surpassing the effect of sex differences between hundreds of men and women. In contrast, however, we found higher reproducibility in soft-tissue nasal landmarks, despite relatively larger errors in estimates of nasal size. Our study exemplifies the assessment of measurement error using geometric morphometrics on landmarks from MRIs and stresses the importance of relating it to total sample variance within the specific methodological framework being used. In summary, precise landmarks may not necessarily imply negligible errors, especially in shape data; indeed, size and shape may be differentially impacted by measurement error and different types of landmarks may have relatively larger or smaller errors. Importantly, and consistently with other recent studies using geometric morphometrics on digital images (which, however, were not specific to MRI data), this study showed that inter-operator biases can be a major source of error in the analysis of large samples, such as those that are becoming increasingly common in the 'era of big data'.

  17. Procrustes-based geometric morphometrics on MRI images: An example of inter-operator bias in 3D landmarks and its impact on big datasets

    PubMed Central

    Ivanovska, Tatyana; Bülow, Robin; Biffar, Reiner; Cardini, Andrea

    2018-01-01

    Using 3D anatomical landmarks from adult human head MRIs, we assessed the magnitude of inter-operator differences in Procrustes-based geometric morphometric analyses. An in-depth analysis of both absolute and relative error was performed in a subsample of individuals with replicated digitization by three different operators. The effect of inter-operator differences was also explored in a large sample of more than 900 individuals. Although absolute error was not unusual for MRI measurements, including bone landmarks, shape was particularly affected by differences among operators, with up to more than 30% of sample variation accounted for by this type of error. The magnitude of the bias was such that it dominated the main pattern of bone and total (all landmarks included) shape variation, largely surpassing the effect of sex differences between hundreds of men and women. In contrast, however, we found higher reproducibility in soft-tissue nasal landmarks, despite relatively larger errors in estimates of nasal size. Our study exemplifies the assessment of measurement error using geometric morphometrics on landmarks from MRIs and stresses the importance of relating it to total sample variance within the specific methodological framework being used. In summary, precise landmarks may not necessarily imply negligible errors, especially in shape data; indeed, size and shape may be differentially impacted by measurement error and different types of landmarks may have relatively larger or smaller errors. Importantly, and consistently with other recent studies using geometric morphometrics on digital images (which, however, were not specific to MRI data), this study showed that inter-operator biases can be a major source of error in the analysis of large samples, such as those that are becoming increasingly common in the 'era of big data'. PMID:29787586

  18. Assessment of Systematic Chromatic Errors that Impact Sub-1% Photometric Precision in Large-Area Sky Surveys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, T. S.

    Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is stable in time and uniform over the sky to 1% precision or better. Past surveys have achieved photometric precision of 1-2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors using photometry from the Dark Energy Survey (DES) as an example. We define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes, when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the systematic chromatic errors caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane, can be up to 2% in some bandpasses. We compare the calculated systematic chromatic errors with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput. The residual after correction is less than 0.3%. We also find that the errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.
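
    To make the notion of a systematic chromatic error concrete, the hedged sketch below compares synthetic magnitudes of two stars of different spectral slope through a reference passband and a slightly tilted one. The passbands, spectra and numbers are invented for illustration and are not DES quantities.

      import numpy as np

      def trapz(y, x):
          # simple trapezoidal integration, avoiding version-specific numpy APIs
          return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

      def synth_mag(wl, flux, throughput):
          # synthetic magnitude (arbitrary zero point), photon-counting convention
          return -2.5 * np.log10(trapz(flux * throughput * wl, wl)
                                 / trapz(throughput * wl, wl))

      wl = np.linspace(4000.0, 5500.0, 1501)              # wavelength, Angstrom
      ref = ((wl > 4200) & (wl < 5300)).astype(float)     # reference passband
      tilted = ref * (1.0 + 2e-4 * (wl - wl.mean()))      # chromatic perturbation

      for name, sed in [("blue star", wl ** -2.0), ("red star", wl ** 1.0)]:
          dm = synth_mag(wl, sed, tilted) - synth_mag(wl, sed, ref)
          print(f"{name}: offset = {1000.0 * dm:+.2f} mmag")

    Because the offset differs between the blue and red spectra, a survey calibrated only with gray zeropoints would carry a color-dependent (chromatic) error of this kind.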

  19. Previous Estimates of Mitochondrial DNA Mutation Level Variance Did Not Account for Sampling Error: Comparing the mtDNA Genetic Bottleneck in Mice and Humans

    PubMed Central

    Wonnapinij, Passorn; Chinnery, Patrick F.; Samuels, David C.

    2010-01-01

    In cases of inherited pathogenic mitochondrial DNA (mtDNA) mutations, a mother and her offspring generally have large and seemingly random differences in the amount of mutated mtDNA that they carry. Comparisons of measured mtDNA mutation level variance values have become an important issue in determining the mechanisms that cause these large random shifts in mutation level. These variance measurements have been made with samples of quite modest size, which should be a source of concern because higher-order statistics, such as variance, are poorly estimated from small sample sizes. We have developed an analysis of the standard error of variance from a sample of size n, and we have defined error bars for variance measurements based on this standard error. We calculate variance error bars for several published sets of measurements of mtDNA mutation level variance and show how the addition of the error bars alters the interpretation of these experimental results. We compare variance measurements from human clinical data and from mouse models and show that the mutation level variance is clearly higher in the human data than it is in the mouse models at both the primary oocyte and offspring stages of inheritance. We discuss how the standard error of variance can be used in the design of experiments measuring mtDNA mutation level variance. Our results show that variance measurements based on fewer than 20 measurements are generally unreliable and ideally more than 50 measurements are required to reliably compare variances with less than a 2-fold difference. PMID:20362273
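
    The flavor of such error bars can be reproduced with the textbook normal-theory result Var(s^2) = 2*sigma^4/(n-1); the paper's analysis is more general, so the sketch below (with made-up numbers) is only the special case under normality.

      import numpy as np

      def variance_se_normal(sample_var, n):
          # normal-theory standard error of a sample variance:
          # Var(s^2) = 2 sigma^4 / (n - 1), estimated by plugging in s^2
          return sample_var * np.sqrt(2.0 / (n - 1))

      # e.g. a mutation-level variance of 0.040 measured from n offspring
      for n in (15, 50, 200):
          se = variance_se_normal(0.04, n)
          print(f"n = {n:3d}: s^2 = 0.040 +/- {se:.3f}")

    At n = 15 the error bar is comparable to the variance estimate itself, echoing the authors' warning about comparisons based on fewer than 20 measurements.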

  20. Research on the method of improving the accuracy of CMM (coordinate measuring machine) testing aspheric surface

    NASA Astrophysics Data System (ADS)

    Cong, Wang; Xu, Lingdi; Li, Ang

    2017-10-01

    Large aspheric surfaces, which deviate from a spherical form, are widely used in optical systems. Compared with spherical surfaces, they offer many advantages, such as improved image quality, aberration correction, a wider field of view, a longer effective distance and a more compact, lightweight optical system. With the rapid development of space optics in particular, space sensors demand higher resolution and larger viewing angles, making aspheric surfaces essential components of the optical system. After coarse grinding, the surface profile error of an aspheric surface is on the order of tens of microns [1]. To reach the final surface-accuracy requirement, the surface must be corrected rapidly, and high-precision testing is the basis of rapid convergence of the surface error. There are many methods for aspheric surface testing [2], including geometric ray testing, Hartmann testing, the Ronchi test, the knife-edge method, direct profilometry and interferometry, but each has its disadvantages [6]. In recent years, measurement has become one of the main factors restricting progress in aspheric surface processing. A two-meter-aperture industrial coordinate measuring machine (CMM) is available, but it has drawbacks such as large detection errors and low repeatability in the measurement of coarsely ground aspheric surfaces, which seriously affects convergence efficiency during aspheric mirror processing. To solve these problems, this paper presents an effective error control, calibration and removal method based on real-time monitoring of the calibration mirror position, probe correction, and selection of the measurement mode and sampling-point distribution. Verified on real engineering examples, the method improves the nominal measurement accuracy of the industrial-grade CMM from a PV value of 7 microns to 4 microns, which effectively improves the grinding efficiency of aspheric mirrors and confirms the correctness of the method. This paper also investigates the error detection and operation control method, the error calibration of the CMM and the calibration of the CMM's random errors.

  1. Flux Sampling Errors for Aircraft and Towers

    NASA Technical Reports Server (NTRS)

    Mahrt, Larry

    1998-01-01

    Various errors and influences leading to differences between tower- and aircraft-measured fluxes are surveyed. This survey is motivated by reports in the literature that aircraft fluxes are sometimes smaller than tower-measured fluxes. Both tower and aircraft flux errors are larger with surface heterogeneity due to several independent effects. Surface heterogeneity may cause tower flux errors to increase with decreasing wind speed. Techniques to assess flux sampling error are reviewed. Such error estimates suffer various degrees of inapplicability in real geophysical time series due to nonstationarity of tower time series (or inhomogeneity of aircraft data). A new measure for nonstationarity is developed that eliminates assumptions on the form of the nonstationarity inherent in previous methods. When this nonstationarity measure becomes large, the surface energy imbalance increases sharply. Finally, strategies for obtaining adequate flux sampling using repeated aircraft passes and grid patterns are outlined.

  2. Fault-tolerant quantum error detection.

    PubMed

    Linke, Norbert M; Gutierrez, Mauricio; Landsman, Kevin A; Figgatt, Caroline; Debnath, Shantanu; Brown, Kenneth R; Monroe, Christopher

    2017-10-01

    Quantum computers will eventually reach a size at which quantum error correction becomes imperative. Quantum information can be protected from qubit imperfections and flawed control operations by encoding a single logical qubit in multiple physical qubits. This redundancy allows the extraction of error syndromes and the subsequent detection or correction of errors without destroying the logical state itself through direct measurement. We show the encoding and syndrome measurement of a fault-tolerantly prepared logical qubit via an error detection protocol on four physical qubits, represented by trapped atomic ions. This demonstrates the robustness of a logical qubit to imperfections in the very operations used to encode it. The advantage persists in the face of large added error rates and experimental calibration errors.

  3. Error simulation of paired-comparison-based scaling methods

    NASA Astrophysics Data System (ADS)

    Cui, Chengwu

    2000-12-01

    Subjective image quality measurement usually resorts to psychophysical scaling. However, it is difficult to evaluate the inherent precision of these scaling methods, and without knowing the potential errors of the measurement, subsequent use of the data can be misleading. In this paper, the errors in scaled values derived from paired-comparison-based scaling methods are simulated by randomly introducing a proportion of choice errors that follows the binomial distribution. Simulation results are given for various combinations of the number of stimuli and the sampling size. The errors are presented as the average standard deviation of the scaled values and can be fitted reasonably well with an empirical equation that can be used for scaling-error estimation and measurement design. The simulation shows that paired-comparison-based scaling methods can have large errors in the derived scaled values when the sampling size and the number of stimuli are small. Examples are also given to show the potential errors in actual scaled values of color image prints as measured by the method of paired comparison.
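
    A minimal version of such a simulation, assuming a Thurstone Case V model with unit variance of the discriminal difference; the scale values, sample sizes and replication count are arbitrary choices, not those of the paper.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(1)
      true = np.array([0.0, 0.3, 0.6, 1.0, 1.5])   # true scale values
      n_stim, n_obs, n_rep = len(true), 20, 500    # small sample on purpose

      recovered = np.empty((n_rep, n_stim))
      for r in range(n_rep):
          z = np.zeros((n_stim, n_stim))
          for i in range(n_stim):
              for j in range(n_stim):
                  if i == j:
                      continue
                  p_true = norm.cdf(true[i] - true[j])       # Case V model
                  p_obs = rng.binomial(n_obs, p_true) / n_obs
                  p_obs = np.clip(p_obs, 0.5 / n_obs, 1 - 0.5 / n_obs)
                  z[i, j] = norm.ppf(p_obs)
          recovered[r] = z.mean(axis=1)        # scale value of each stimulus

      print("mean SD of scaled values:", recovered.std(axis=0).mean())

    Increasing n_obs or n_stim shrinks the spread, which is the dependence the paper fits with an empirical equation.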

  4. Estimating Aboveground Biomass in Tropical Forests: Field Methods and Error Analysis for the Calibration of Remote Sensing Observations

    DOE PAGES

    Gonçalves, Fabio; Treuhaft, Robert; Law, Beverly; ...

    2017-01-07

    Mapping and monitoring of forest carbon stocks across large areas in the tropics will necessarily rely on remote sensing approaches, which in turn depend on field estimates of biomass for calibration and validation purposes. Here, we used field plot data collected in a tropical moist forest in the central Amazon to gain a better understanding of the uncertainty associated with plot-level biomass estimates obtained specifically for the calibration of remote sensing measurements. In addition to accounting for sources of error that would be normally expected in conventional biomass estimates (e.g., measurement and allometric errors), we examined two sources of uncertainty that are specific to the calibration process and should be taken into account in most remote sensing studies: the error resulting from spatial disagreement between field and remote sensing measurements (i.e., co-location error), and the error introduced when accounting for temporal differences in data acquisition. We found that the overall uncertainty in the field biomass was typically 25% for both secondary and primary forests, but ranged from 16 to 53%. Co-location and temporal errors accounted for a large fraction of the total variance (>65%) and were identified as important targets for reducing uncertainty in studies relating tropical forest biomass to remotely sensed data. Although measurement and allometric errors were relatively unimportant when considered alone, combined they accounted for roughly 30% of the total variance on average and should not be ignored. Lastly, our results suggest that a thorough understanding of the sources of error associated with field-measured plot-level biomass estimates in tropical forests is critical to determine confidence in remote sensing estimates of carbon stocks and fluxes, and to develop strategies for reducing the overall uncertainty of remote sensing approaches.
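
    The arithmetic behind such an error budget is simple if the sources are treated as independent: relative standard errors add in quadrature. The numbers below are placeholders, not the paper's values.

      import numpy as np

      # Illustrative plot-level error budget (fractions of the biomass estimate)
      components = {
          "measurement": 0.05,
          "allometric":  0.10,
          "co-location": 0.15,
          "temporal":    0.12,
      }

      total = np.sqrt(sum(v ** 2 for v in components.values()))
      print(f"total relative uncertainty: {100 * total:.1f}%")
      for name, v in components.items():
          print(f"  {name:12s} share of variance: {100 * v**2 / total**2:.0f}%")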

  5. A new approach to the form and position error measurement of the auto frame surface based on laser

    NASA Astrophysics Data System (ADS)

    Wang, Hua; Li, Wei

    2013-03-01

    An auto frame is a very large workpiece, up to 12 meters long and 2 meters wide, so measuring it by independent manual operation is inconvenient and cannot be automated. In this paper we propose a new approach to reconstructing the 3D model of a large workpiece, especially an auto truck frame, based on multiple pulsed lasers, for the purpose of measuring form and position errors. In each region of interest, it requires only one high-speed camera and two lasers. The approach is fast, high-precision and economical.

  6. Quantitative evaluation of statistical errors in small-angle X-ray scattering measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sedlak, Steffen M.; Bruetzel, Linda K.; Lipfert, Jan

    A new model is proposed for the measurement errors incurred in typical small-angle X-ray scattering (SAXS) experiments, which takes into account the setup geometry and physics of the measurement process. The model accurately captures the experimentally determined errors from a large range of synchrotron and in-house anode-based measurements. Its most general formulation gives for the variance of the buffer-subtracted SAXS intensity σ²(q) = [I(q) + const.]/(kq), where I(q) is the scattering intensity as a function of the momentum transfer q; k and const. are fitting parameters that are characteristic of the experimental setup. The model gives a concrete procedure for calculating realistic measurement errors for simulated SAXS profiles. In addition, the results provide guidelines for optimizing SAXS measurements, which are in line with established procedures for SAXS experiments, and enable a quantitative evaluation of measurement errors.
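
    The stated error model translates directly into a recipe for simulating realistic noise on a model SAXS curve. In this sketch the profile and the parameters k and const. are invented, since the real values are setup-specific fitting parameters.

      import numpy as np

      def saxs_sigma(q, intensity, k=1.0e4, const=5.0):
          # sigma^2(q) = (I(q) + const) / (k q)  ->  sigma(q)
          return np.sqrt((intensity + const) / (k * q))

      q = np.linspace(0.01, 0.5, 200)                    # momentum transfer
      profile = 100.0 * np.exp(-(30.0 * q) ** 2 / 3.0)   # toy Guinier-like curve
      rng = np.random.default_rng(2)
      noisy = profile + rng.normal(0.0, saxs_sigma(q, profile))
      print("median simulated error:", np.median(saxs_sigma(q, profile)))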

  7. Error mechanism analyses of an ultra-precision stage for high speed scan motion over a large stroke

    NASA Astrophysics Data System (ADS)

    Wang, Shaokai; Tan, Jiubin; Cui, Jiwen

    2015-02-01

    The reticle stage (RS) is designed to perform high-speed scan motion at nanometer-scale accuracy over a large stroke. Compared with the allowable scan accuracy of a few nanometers, errors caused by any internal or external disturbance are critical and must not be ignored. In this paper, the RS is first introduced in terms of its mechanical structure, forms of motion and control method. On that basis, the mechanisms by which disturbances transfer into the final servo-related error in the scan direction are analyzed, including feedforward error, coupling between the large-stroke stage (LS) and the short-stroke stage (SS), and movement of the measurement reference. In particular, the different forms of coupling between the SS and LS are discussed in detail. Following this theoretical analysis, the contributions of these disturbances to the final error are simulated numerically: the residual positioning error caused by feedforward error in the acceleration process is about 2 nm after the settling time, that caused by coupling between the SS and LS about 2.19 nm, and that caused by movement of the measurement reference about 0.6 nm.

  8. Linear error analysis of slope-area discharge determinations

    USGS Publications Warehouse

    Kirby, W.H.

    1987-01-01

    The slope-area method can be used to calculate peak flood discharges when current-meter measurements are not possible. This calculation depends on several quantities, such as water-surface fall, that are subject to large measurement errors. Other critical quantities, such as Manning's n, are not even amenable to direct measurement but can only be estimated. Finally, scour and fill may cause gross discrepancies between the observed condition of the channel and the hydraulic conditions during the flood peak. The effects of these potential errors on the accuracy of the computed discharge have been estimated by statistical error analysis using a Taylor-series approximation of the discharge formula and the well-known formula for the variance of a sum of correlated random variates. The resultant error variance of the computed discharge is a weighted sum of covariances of the various observational errors. The weights depend on the hydraulic and geometric configuration of the channel. The mathematical analysis confirms the rule of thumb that relative errors in computed discharge increase rapidly when velocity heads exceed the water-surface fall, when the flow field is expanding and when lateral velocity variation (alpha) is large. It also confirms the extreme importance of accurately assessing the presence of scour or fill. © 1987.
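
    The core of the analysis is first-order (Taylor-series) propagation through the discharge formula. The sketch below assumes independent errors and Manning's equation; the paper additionally carries the covariances between correlated errors, and all numbers here are invented.

      import numpy as np

      def discharge(n, A, R, S):
          # Manning's equation (SI units), the core of slope-area estimates
          return (1.0 / n) * A * R ** (2.0 / 3.0) * np.sqrt(S)

      def relative_error(params, rel_sigmas, f=discharge, eps=1e-6):
          # first-order propagation, assuming independent errors
          p = np.array(params, dtype=float)
          q0 = f(*p)
          var = 0.0
          for i, rs in enumerate(rel_sigmas):
              dp = p.copy()
              dp[i] *= 1 + eps
              dqdx = (f(*dp) - q0) / (p[i] * eps)   # numeric partial derivative
              var += (dqdx * rs * p[i]) ** 2
          return np.sqrt(var) / q0

      # n, A (m^2), R (m), S (fall/length); relative errors are illustrative
      print(relative_error([0.035, 250.0, 3.0, 0.0008], [0.20, 0.05, 0.05, 0.15]))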

  9. Measurement-based reliability/performability models

    NASA Technical Reports Server (NTRS)

    Hsueh, Mei-Chen

    1987-01-01

    Measurement-based models based on real error-data collected on a multiprocessor system are described. Model development from the raw error-data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.

  10. Large Sample Confidence Limits for Goodman and Kruskal's Proportional Prediction Measure TAU-b

    ERIC Educational Resources Information Center

    Berry, Kenneth J.; Mielke, Paul W.

    1976-01-01

    A Fortran Extended program which computes Goodman and Kruskal's Tau-b, its asymmetrical counterpart, Tau-a, and three sets of confidence limits for each coefficient under full multinomial and proportional stratified sampling is presented. A correction of an error in the calculation of the large sample standard error of Tau-b is discussed.…

  11. Fault-tolerant quantum error detection

    PubMed Central

    Linke, Norbert M.; Gutierrez, Mauricio; Landsman, Kevin A.; Figgatt, Caroline; Debnath, Shantanu; Brown, Kenneth R.; Monroe, Christopher

    2017-01-01

    Quantum computers will eventually reach a size at which quantum error correction becomes imperative. Quantum information can be protected from qubit imperfections and flawed control operations by encoding a single logical qubit in multiple physical qubits. This redundancy allows the extraction of error syndromes and the subsequent detection or correction of errors without destroying the logical state itself through direct measurement. We show the encoding and syndrome measurement of a fault-tolerantly prepared logical qubit via an error detection protocol on four physical qubits, represented by trapped atomic ions. This demonstrates the robustness of a logical qubit to imperfections in the very operations used to encode it. The advantage persists in the face of large added error rates and experimental calibration errors. PMID:29062889

  12. Can we obtain the coefficient of restitution from the sound of a bouncing ball?

    NASA Astrophysics Data System (ADS)

    Heckel, Michael; Glielmo, Aldo; Gunkelmann, Nina; Pöschel, Thorsten

    2016-03-01

    The coefficient of restitution may be determined from the sound signal emitted by a sphere bouncing repeatedly off the ground. Although there is a large number of publications exploiting this method, so far, there is no quantitative discussion of the error related to this type of measurement. Analyzing the main error sources, we find that even tiny deviations of the shape from the perfect sphere may lead to substantial errors that dominate the overall error of the measurement. Therefore, we come to the conclusion that the well-established method to measure the coefficient of restitution through the emitted sound is applicable only for the case of nearly perfect spheres. For larger falling height, air drag may lead to considerable error, too.
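
    The measurement principle itself is compact: between bounces the ball is in free flight, so the interval between successive impact sounds is proportional to the rebound speed, and the coefficient of restitution is the ratio of consecutive flight times (valid only when air drag is negligible, as the abstract cautions). The impact times below are invented.

      import numpy as np

      # Hypothetical arrival times (s) of successive bounce sounds
      impacts = np.array([0.000, 0.702, 1.193, 1.537, 1.778])
      flights = np.diff(impacts)               # free-flight durations
      eps = flights[1:] / flights[:-1]         # e = t_{n+1} / t_n, no drag
      print("restitution per bounce:", np.round(eps, 3))
      print("mean coefficient of restitution:", eps.mean().round(3))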

  13. Influence of OPD in wavelength-shifting interferometry

    NASA Astrophysics Data System (ADS)

    Wang, Hongjun; Tian, Ailing; Liu, Bingcai; Dang, Juanjuan

    2009-12-01

    Phase-shifting interferometry is a powerful tool for high-accuracy optical measurement. It operates by changing the optical path length in the reference or test arm, which is usually done by moving an optical device; this becomes problematic when the device is very large and heavy. Wavelength-shifting interferometry was put forward to solve this problem. In wavelength-shifting interferometry, the phase-shifting angle is achieved by changing the wavelength of the optical source, and it is determined by the wavelength and by the OPD (optical path difference) between the test and reference wavefronts. The OPD is therefore an important factor in the measurement results. In practice, because positional and profile errors of the optical element under test exist, the phase-shifting angle differs from test point to test point during wavelength scanning; this introduces phase-shifting-angle errors and hence errors in the measured optical surface. To analyze the influence of the OPD on the surface error, the relation between surface error and OPD was studied. The relation between phase-shifting error and OPD was established by simulation, and an error compensation method was put forward based on this analysis. After error compensation, the measurement results can be improved to a great extent.
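
    The dependence described here can be stated compactly. Writing the interference phase in terms of the OPD, a small source-wavelength shift Δλ produces, to first order,

      \varphi = \frac{2\pi\,\mathrm{OPD}}{\lambda},
      \qquad
      \Delta\varphi \approx -\frac{2\pi\,\mathrm{OPD}}{\lambda^{2}}\,\Delta\lambda ,

    so the phase step is proportional to the local OPD. Points whose OPD deviates from the nominal value, through positional or profile errors of the element under test, therefore receive a different phase-shifting angle; this is the error source the paper analyzes.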

  14. Influence of OPD in wavelength-shifting interferometry

    NASA Astrophysics Data System (ADS)

    Wang, Hongjun; Tian, Ailing; Liu, Bingcai; Dang, Juanjuan

    2010-03-01

    Phase-shifting interferometry is a powerful tool for high-accuracy optical measurement. It operates by changing the optical path length in the reference or test arm, which is usually done by moving an optical device; this becomes problematic when the device is very large and heavy. Wavelength-shifting interferometry was put forward to solve this problem. In wavelength-shifting interferometry, the phase-shifting angle is achieved by changing the wavelength of the optical source, and it is determined by the wavelength and by the OPD (optical path difference) between the test and reference wavefronts. The OPD is therefore an important factor in the measurement results. In practice, because positional and profile errors of the optical element under test exist, the phase-shifting angle differs from test point to test point during wavelength scanning; this introduces phase-shifting-angle errors and hence errors in the measured optical surface. To analyze the influence of the OPD on the surface error, the relation between surface error and OPD was studied. The relation between phase-shifting error and OPD was established by simulation, and an error compensation method was put forward based on this analysis. After error compensation, the measurement results can be improved to a great extent.

  15. Estimating Aboveground Biomass in Tropical Forests: Field Methods and Error Analysis for the Calibration of Remote Sensing Observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonçalves, Fabio; Treuhaft, Robert; Law, Beverly

    Mapping and monitoring of forest carbon stocks across large areas in the tropics will necessarily rely on remote sensing approaches, which in turn depend on field estimates of biomass for calibration and validation purposes. Here, we used field plot data collected in a tropical moist forest in the central Amazon to gain a better understanding of the uncertainty associated with plot-level biomass estimates obtained specifically for the calibration of remote sensing measurements. In addition to accounting for sources of error that would be normally expected in conventional biomass estimates (e.g., measurement and allometric errors), we examined two sources of uncertainty that are specific to the calibration process and should be taken into account in most remote sensing studies: the error resulting from spatial disagreement between field and remote sensing measurements (i.e., co-location error), and the error introduced when accounting for temporal differences in data acquisition. We found that the overall uncertainty in the field biomass was typically 25% for both secondary and primary forests, but ranged from 16 to 53%. Co-location and temporal errors accounted for a large fraction of the total variance (>65%) and were identified as important targets for reducing uncertainty in studies relating tropical forest biomass to remotely sensed data. Although measurement and allometric errors were relatively unimportant when considered alone, combined they accounted for roughly 30% of the total variance on average and should not be ignored. Lastly, our results suggest that a thorough understanding of the sources of error associated with field-measured plot-level biomass estimates in tropical forests is critical to determine confidence in remote sensing estimates of carbon stocks and fluxes, and to develop strategies for reducing the overall uncertainty of remote sensing approaches.

  16. Measurement-device-independent quantum key distribution with correlated source-light-intensity errors

    NASA Astrophysics Data System (ADS)

    Jiang, Cong; Yu, Zong-Wen; Wang, Xiang-Bin

    2018-04-01

    We present an analysis of measurement-device-independent quantum key distribution with correlated source-light-intensity errors. Numerical results show that, when the intensity fluctuations of different sources are correlated, our results can greatly improve the key rate compared with prior results, especially under large intensity fluctuations and channel attenuation.

  17. Experimental investigation of strain errors in stereo-digital image correlation due to camera calibration

    NASA Astrophysics Data System (ADS)

    Shao, Xinxing; Zhu, Feipeng; Su, Zhilong; Dai, Xiangjun; Chen, Zhenning; He, Xiaoyuan

    2018-03-01

    The strain errors in stereo-digital image correlation (DIC) due to camera calibration were investigated using precisely controlled numerical experiments and real experiments. Three-dimensional rigid body motion tests were conducted to examine the effects of camera calibration on the measured results. For a fully accurate calibration, rigid body motion causes negligible strain errors. However, for inaccurately calibrated camera parameters and a short working distance, rigid body motion will lead to more than 50-μɛ strain errors, which significantly affects the measurement. In practical measurements, it is impossible to obtain a fully accurate calibration; therefore, considerable attention should be focused on attempting to avoid these types of errors, especially for high-accuracy strain measurements. It is necessary to avoid large rigid body motions in both two-dimensional DIC and stereo-DIC.

  18. Impact of Tropospheric Aerosol Absorption on Ozone Retrieval from buv Measurements

    NASA Technical Reports Server (NTRS)

    Torres, O.; Bhartia, P. K.

    1998-01-01

    The impact of tropospheric aerosols on the retrieval of column ozone amounts using spaceborne measurements of backscattered ultraviolet radiation is examined. Using radiative transfer calculations, we show that uv-absorbing desert dust may introduce errors as large as 10% in ozone column amount, depending on the aerosol layer height and optical depth. Smaller errors are produced by carbonaceous aerosols that result from biomass burning. Though the error is produced by complex interactions between ozone absorption (both stratospheric and tropospheric), aerosol scattering, and aerosol absorption, a surprisingly simple correction procedure reduces the error to about 1%, for a variety of aerosols and for a wide range of aerosol loading. Comparison of the corrected TOMS data with operational data indicates that though the zonal mean total ozone derived from TOMS is not significantly affected by these errors, localized effects in the tropics can be large enough to seriously affect the studies of tropospheric ozone currently under way using TOMS data.

  19. A variational regularization of Abel transform for GPS radio occultation

    NASA Astrophysics Data System (ADS)

    Wee, Tae-Kwon

    2018-04-01

    In the Global Positioning System (GPS) radio occultation (RO) technique, the inverse Abel transform of the measured bending angle (Abel inversion, hereafter AI) is the standard means of deriving the refractivity. While concise and straightforward to apply, the AI accumulates and propagates the measurement error downward. The measurement error propagation is detrimental to the refractivity at lower altitudes. In particular, it builds up a negative refractivity bias in the tropical lower troposphere. An alternative to AI is the numerical inversion of the forward Abel transform, which does not integrate the error-bearing measurement and thus precludes the error propagation. The variational regularization (VR) proposed in this study approximates the inversion of the forward Abel transform by an optimization problem in which the regularized solution describes the measurement as closely as possible within the measurement's considered accuracy. The optimization problem is then solved iteratively by means of the adjoint technique. VR is formulated with error covariance matrices, which permit a rigorous incorporation of prior information on measurement error characteristics and the solution's desired behavior into the regularization. VR holds the control variable in the measurement space to take advantage of the posterior height determination and to negate the measurement error due to the mismodeling of the refractional radius. The advantages of having the solution and the measurement in the same space are elaborated using a purposely corrupted synthetic sounding with a known true solution. The performance of VR relative to AI is validated with a large number of actual RO soundings. The comparison to nearby radiosonde observations shows that VR attains considerably smaller random and systematic errors compared to AI. A noteworthy finding is that at heights and in areas where the measurement bias is supposedly small, VR follows AI very closely in the mean refractivity, departing from the first guess. In the lowest few kilometers, where AI produces a large negative refractivity bias, VR reduces the refractivity bias substantially with the aid of the background, which in this study is the operational forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF). It is concluded, based on the results presented in this study, that VR offers a definite advantage over AI in the quality of refractivity.
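
    For reference, the direct Abel inversion that VR replaces can be sketched numerically. The standard RO inversion recovers ln n from the bending angle alpha via ln n(x0) = (1/pi) * integral from x0 to infinity of alpha(x)/sqrt(x^2 - x0^2) dx; the quadrature below is deliberately crude (real processing treats the integrable singularity and the upper limit with far more care), and the profile is a toy.

      import numpy as np

      def abel_invert(x, alpha):
          # ln n(x0) = (1/pi) * int_{x0}^{inf} alpha(x) / sqrt(x^2 - x0^2) dx
          # trapezoidal rule; the singular node at x = x0 is simply skipped
          ln_n = np.zeros_like(x)
          for i, x0 in enumerate(x[:-1]):
              xs, al = x[i + 1:], alpha[i + 1:]
              g = al / np.sqrt(xs ** 2 - x0 ** 2)
              ln_n[i] = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(xs)) / np.pi
          return np.exp(ln_n)

      # Toy bending-angle profile on an impact-parameter grid
      x = np.linspace(6378.0, 6478.0, 400)           # refractional radius, km
      alpha = 0.02 * np.exp(-(x - x[0]) / 8.0)       # decaying bending angle, rad
      n = abel_invert(x, alpha)
      print("surface refractivity N =", (n[0] - 1.0) * 1e6)

    Because each ln n(x0) integrates the bending angle at all higher levels, any measurement error aloft is carried downward, which is exactly the propagation the paper seeks to avoid.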

  20. Evaluation of the depth-integration method of measuring water discharge in large rivers

    USGS Publications Warehouse

    Moody, J.A.; Troutman, B.M.

    1992-01-01

    The depth-integration method for measuring water discharge makes a continuous measurement of the water velocity from the water surface to the bottom at 20 to 40 locations or verticals across a river. It is especially practical for large rivers where river traffic makes it impractical to use boats attached to taglines strung across the river or to use current meters suspended from bridges. This method has the additional advantage over the standard two- and eight-tenths method in that a discharge-weighted suspended-sediment sample can be collected at the same time. When this method is used in large rivers such as the Missouri, Mississippi and Ohio, a microwave navigation system is used to determine the ship's position at each vertical sampling location across the river, and to make accurate velocity corrections to compensate for ship drift. An essential feature is a hydraulic winch that can lower and raise the current meter at a constant transit velocity so that the velocities at all depths are measured for equal lengths of time. Field calibration measurements show that: (1) the mean velocity measured on the upcast (bottom to surface) is within 1% of the standard mean velocity determined by 9-11 point measurements; (2) if the transit velocity is less than 25% of the mean velocity, then the average error in the mean velocity is 4% or less. The major source of bias error is a result of mounting the current meter above a sounding weight and sometimes above a suspended-sediment sampling bottle, which prevents measurement of the velocity all the way to the bottom. The measured mean velocity is slightly larger than the true mean velocity. This bias error in the discharge is largest in shallow water (approximately 8% for the Missouri River at Hermann, MO, where the mean depth was 4.3 m) and smallest in deeper water (approximately 3% for the Mississippi River at Vicksburg, MS, where the mean depth was 14.5 m). The major source of random error in the discharge is the natural variability of river velocities, which we assumed to be independent and random at each vertical. The standard error of the estimated mean velocity, at an individual vertical sampling location, may be as large as 9% for large sand-bed alluvial rivers. The computed discharge, however, is a weighted mean of these random velocities. Consequently, the standard error of the computed discharge is divided by the square root of the number of verticals, producing typical values between 1 and 2%. The discharges measured by the depth-integration method agreed within ±5% with those measured simultaneously by the standard two- and eight-tenths, six-tenths and moving-boat methods. © 1992.
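
    The discharge arithmetic and the square-root-of-N error reduction described here can be sketched as follows; the verticals and the 9% per-vertical error are illustrative placeholders (a full measurement with 20-40 verticals reaches the 1-2% the authors report).

      import numpy as np

      # Hypothetical verticals: station across the river (m), depth (m),
      # depth-averaged velocity (m/s)
      station = np.array([10.0, 35.0, 60.0, 85.0, 110.0])
      depth = np.array([2.1, 4.0, 5.2, 4.4, 2.3])
      vmean = np.array([0.55, 0.90, 1.10, 0.95, 0.60])

      # Midsection method: each vertical represents the strip reaching
      # halfway to its neighbours
      edges = np.concatenate(([station[0]],
                              (station[:-1] + station[1:]) / 2.0,
                              [station[-1]]))
      width = np.diff(edges)
      Q = float(np.sum(vmean * depth * width))
      print(f"Q = {Q:.1f} m^3/s")

      # With ~9% independent random velocity error per vertical, the relative
      # standard error of Q shrinks roughly as 1/sqrt(number of verticals)
      print(f"relative SE of Q ~ {9.0 / np.sqrt(len(station)):.1f}%")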

  1. Measurements of stem diameter: implications for individual- and stand-level errors.

    PubMed

    Paul, Keryn I; Larmour, John S; Roxburgh, Stephen H; England, Jacqueline R; Davies, Micah J; Luck, Hamish D

    2017-08-01

    Stem diameter is one of the most common measurements made to assess the growth of woody vegetation, and the commercial and environmental benefits that it provides (e.g. wood or biomass products, carbon sequestration, landscape remediation). Yet inconsistency in its measurement is a continuing source of error in estimates of stand-scale measures such as basal area, biomass, and volume. Here we assessed errors in stem diameter measurement through repeated measurements of individual trees and shrubs of varying size and form (i.e. single- and multi-stemmed) across a range of contrasting stands, from complex mixed-species plantings to commercial single-species plantations. We compared a standard diameter tape with a Stepped Diameter Gauge (SDG) for time efficiency and measurement error. Measurement errors in diameter were slightly (but significantly) influenced by size and form of the tree or shrub, and stem height at which the measurement was made. Compared to standard tape measurement, the mean systematic error with SDG measurement was only -0.17 cm, but varied between -0.10 and -0.52 cm. Similarly, random error was relatively small, with standard deviations (and percentage coefficients of variation) averaging only 0.36 cm (and 3.8%), but varying between 0.14 and 0.61 cm (and 1.9 and 7.1%). However, at the stand scale, sampling errors (i.e. how well individual trees or shrubs selected for measurement of diameter represented the true stand population in terms of the average and distribution of diameter) generally had at least a tenfold greater influence on random errors in basal area estimates than errors in diameter measurements. This supports the use of diameter measurement tools that have high efficiency, such as the SDG. Use of the SDG almost halved the time required for measurements compared to the diameter tape. Based on these findings, recommendations include the following: (i) use of a tape to maximise accuracy when developing allometric models, or when monitoring relatively small changes in permanent sample plots (e.g. National Forest Inventories), noting that care is required in irregular-shaped, large-single-stemmed individuals, and (ii) use of a SDG to maximise efficiency when using inventory methods to assess basal area, and hence biomass or wood volume, at the stand scale (i.e. in studies of impacts of management or site quality) where there are budgetary constraints, noting the importance of sufficient sample sizes to ensure that the population sampled represents the true population.

  2. SMOS: a satellite mission to measure ocean surface salinity

    NASA Astrophysics Data System (ADS)

    Font, Jordi; Kerr, Yann H.; Srokosz, Meric A.; Etcheto, Jacqueline; Lagerloef, Gary S.; Camps, Adriano; Waldteufel, Philippe

    2001-01-01

    The ESA's SMOS (Soil Moisture and Ocean Salinity) Earth Explorer Opportunity Mission will be launched by 2005. Its baseline payload is a microwave L-band (21 cm, 1.4 GHz) 2D interferometric radiometer, Y shaped, with three arms 4.5 m long. This frequency allows the measurement of brightness temperature (Tb) under the best conditions to retrieve soil moisture and sea surface salinity (SSS). Unlike other oceanographic variables, until now it has not been possible to measure salinity from space. However, large ocean areas lack significant salinity measurements. The 2D interferometer will measure Tb at large and different incidence angles, for two polarizations. It is possible to obtain SSS from L-band passive microwave measurements if the other factors influencing Tb (SST, surface roughness, foam, sun glint, rain, ionospheric effects and galactic/cosmic background radiation) can be accounted for. Since the radiometric sensitivity is low, SSS cannot be recovered to the required accuracy from a single measurement, as the error is about 1-2 psu. If the errors contributing to the uncertainty in Tb are random, averaging the independent data and views along the track over a 200 km square allows the error to be reduced to 0.1-0.2 psu, assuming all ancillary errors are budgeted.

  3. Influence of video compression on the measurement error of the television system

    NASA Astrophysics Data System (ADS)

    Sotnik, A. V.; Yarishev, S. N.; Korotaev, V. V.

    2015-05-01

    Video data require a very large memory capacity, and the optimal ratio of quality to encoded volume is a pressing problem given the urgent need to transfer large amounts of video over various networks. Digital TV signal compression reduces the amount of data used to represent a video stream, effectively reducing the bit rate required for transmission and storage. When television measuring systems are used, however, it is important to take into account the uncertainties introduced by compression of the video signal. There are many digital compression methods, and the aim of the proposed work is to study the influence of video compression on the measurement error of television systems. Measurement error of an object parameter is the main characteristic of a television measuring system: accuracy characterizes the difference between the measured value and the actual parameter value. Sources of error include the optical system and the method used to process the received video signal. With compression at a constant stream rate, errors lead to large distortions; at constant quality, errors increase the amount of data required to transmit or record an image frame. The purpose of intra-coding is to reduce the spatial redundancy within a frame (or field) of a television image, a redundancy caused by the strong correlation between image elements. If a suitable orthogonal transformation can be found, an array of image samples can be converted into a matrix of mutually uncorrelated coefficients, and entropy coding can then be applied to these coefficients to reduce the digital stream. For typical images, a transformation can be chosen such that most of the matrix coefficients are almost zero, and excluding these near-zero coefficients reduces the stream further. The discrete cosine transformation is the most widely used such orthogonal transformation. This paper analyzes the errors of television measuring systems and data compression protocols: the main characteristics of measuring systems are given, the sources of their errors are identified, and the most effective video compression methods are determined. The influence of video compression error on television measuring systems was investigated; the obtained results will increase the accuracy of such systems. In a television measuring system, image quality is degraded both by distortions identical to those in analog systems and by distortions specific to encoding/decoding the digital video signal and to errors in the transmission channel. Distortions associated with encoding/decoding include quantization noise, reduced resolution, the mosaic effect, the 'mosquito' effect, edging on sharp brightness transitions, color blur, false patterns, the 'dirty window' effect and other defects. The video compression algorithms used in television measuring systems are based on image encoding with intra- and inter-frame prediction of individual fragments. The encoding/decoding process is nonlinear in space and time, because the playback quality at the receiver depends randomly on the pre- and post-history of the preceding and succeeding frames, which can lead to inadequate distortion of a sub-picture and of the corresponding measuring signal.
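
    The decorrelate-then-discard idea behind intra-coding can be demonstrated in a few lines. The toy below applies an orthonormal 2D DCT to a smooth 8x8 block and zeroes the near-zero coefficients, a crude stand-in for a real quantizer; the block contents and threshold are arbitrary.

      import numpy as np
      from scipy.fft import dctn, idctn

      rng = np.random.default_rng(3)
      # Toy 8x8 block with strong spatial correlation (smooth gradient + noise)
      x, y = np.meshgrid(np.arange(8), np.arange(8))
      block = 100 + 5 * x + 3 * y + rng.normal(0, 2, (8, 8))

      coeffs = dctn(block, norm="ortho")       # decorrelating transform
      small = np.abs(coeffs) < 10.0            # quantization stand-in:
      coeffs[small] = 0.0                      # drop near-zero coefficients
      recon = idctn(coeffs, norm="ortho")

      print("coefficients kept:", int((~small).sum()), "of 64")
      print("RMS reconstruction error:", np.sqrt(np.mean((block - recon) ** 2)))

    The handful of retained coefficients carries almost all of the block's energy, which is why the DCT underpins the intra-coding the abstract describes, and why the discarded remainder reappears as measurement-relevant distortion.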

  4. Achievable flatness in a large microwave power transmitting antenna

    NASA Technical Reports Server (NTRS)

    Ried, R. C.

    1980-01-01

    A dual reference SPS system with pseudoisotropic graphite composite as a representative dimensionally stable composite was studied. The loads, accelerations, thermal environments, temperatures and distortions were calculated for a variety of operational SPS conditions along with statistical considerations of material properties, manufacturing tolerances, measurement accuracy and the resulting line-of-sight (LOS) and local slope distributions. A LOS error and a subarray rms slope error of two arc minutes can be achieved with a passive system. Results show that existing materials measurement, manufacturing, assembly and alignment techniques can be used to build the microwave power transmission system antenna structure. Manufacturing tolerance can be critical to rms slope error. The slope error budget can be met with a passive system. Structural joints without free play are essential in the assembly of the large truss structure. Variation in material properties, particularly in the coefficient of thermal expansion from part to part, is more significant than the actual value.

  5. Airplane wing vibrations due to atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Pastel, R. L.; Caruthers, J. E.; Frost, W.

    1981-01-01

    The magnitude of the error introduced by wing vibration when measuring atmospheric turbulence with a wind probe mounted at the wing tip was studied. It was also determined whether accelerometers mounted on the wing tip are needed to correct this error. A spectrum-analysis approach is used to determine the error. Estimates of the B-57 wing characteristics are used to simulate the airplane wing, and von Karman's cross spectrum function is used to simulate atmospheric turbulence. It was found that wing vibration introduces large errors into the measured spectra of turbulence in the frequency range close to the natural frequencies of the wing.

  6. Correcting systematic errors in high-sensitivity deuteron polarization measurements

    NASA Astrophysics Data System (ADS)

    Brantjes, N. P. M.; Dzordzhadze, V.; Gebel, R.; Gonnella, F.; Gray, F. E.; van der Hoek, D. J.; Imig, A.; Kruithof, W. L.; Lazarus, D. M.; Lehrach, A.; Lorentz, B.; Messi, R.; Moricciani, D.; Morse, W. M.; Noid, G. A.; Onderwater, C. J. G.; Özben, C. S.; Prasuhn, D.; Levi Sandri, P.; Semertzidis, Y. K.; da Silva e Silva, M.; Stephenson, E. J.; Stockhorst, H.; Venanzoni, G.; Versolato, O. O.

    2012-02-01

    This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY at Jülich. By measuring with very high statistical precision, the contributions that are second-order in the systematic errors become apparent. By calibrating the sensitivity of the polarimeter to such errors, it becomes possible to obtain information from the raw count rate values on the size of the errors and to use this information to correct the polarization measurements. During the experiment, it was possible to demonstrate that corrections were satisfactory at the level of 10^-5 for deliberately large errors. This may facilitate the real-time observation of vector polarization changes smaller than 10^-6 in a search for an electric dipole moment using a storage ring.

  7. Performance of bias-correction methods for exposure measurement error using repeated measurements with and without missing data.

    PubMed

    Batistatou, Evridiki; McNamee, Roseanne

    2012-12-10

    It is known that measurement error leads to bias in assessing exposure effects, which can, however, be corrected if independent replicates are available. For expensive replicates, two-stage (2S) studies that produce data 'missing by design' may be preferred over a single-stage (1S) study, because in the second stage, measurement of replicates is restricted to a sample of first-stage subjects. Motivated by an occupational study on the acute effect of carbon black exposure on respiratory morbidity, we compare the performance of several bias-correction methods for both designs in a simulation study: an instrumental variable method (EVROS IV) based on grouping strategies, which had been recommended especially when measurement error is large, the regression calibration and the simulation extrapolation methods. For the 2S design, either the problem of 'missing' data was ignored or the 'missing' data were imputed using multiple imputations. In both 1S and 2S designs, in the case of small or moderate measurement error, regression calibration was shown to be the preferred approach in terms of root mean square error. For 2S designs, regression calibration as implemented by Stata software is not recommended, in contrast to our implementation of this method; the 'problematic' implementation of regression calibration was, however, substantially improved with the use of multiple imputations. The EVROS IV method, under a good/fairly good grouping, outperforms the regression calibration approach in both design scenarios when exposure mismeasurement is severe. In both 1S and 2S designs with moderate or large measurement error, simulation extrapolation severely failed to correct for bias. Copyright © 2012 John Wiley & Sons, Ltd.
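
    A minimal single-stage (1S) sketch of the regression-calibration idea with two replicates and no missing data; the effect size and error variances are invented, and the paper's two-stage and multiple-imputation machinery is beyond this sketch. The reliability ratio is estimated from the replicates and used to undo the attenuation of the naive coefficient.

      import numpy as np

      rng = np.random.default_rng(4)
      n = 2000
      x = rng.normal(0, 1, n)                   # true exposure
      w1 = x + rng.normal(0, 1, n)              # error-prone measurement
      w2 = x + rng.normal(0, 1, n)              # independent replicate
      y = 2.0 * x + rng.normal(0, 1, n)         # outcome; true effect = 2

      beta_naive = np.cov(w1, y)[0, 1] / np.var(w1, ddof=1)

      # Reliability ratio: Cov(W1, W2) estimates Var(X), so
      # lam = Var(X) / Var(W) is the attenuation factor
      lam = np.cov(w1, w2)[0, 1] / np.var(w1, ddof=1)
      beta_rc = beta_naive / lam

      print(f"naive: {beta_naive:.2f}, corrected: {beta_rc:.2f} (truth 2.00)")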

  8. An image registration-based technique for noninvasive vascular elastography

    NASA Astrophysics Data System (ADS)

    Valizadeh, Sina; Makkiabadi, Bahador; Mirbagheri, Alireza; Soozande, Mehdi; Manwar, Rayyan; Mozaffarzadeh, Moein; Nasiriavanaki, Mohammadreza

    2018-02-01

    Non-invasive vascular elastography is an emerging technique in vascular tissue imaging. During the past decades, several techniques have been suggested to estimate tissue elasticity by measuring the displacement of the carotid vessel wall. Cross correlation-based methods are the most prevalent approaches to measuring the strain exerted on the vessel wall by the blood pressure. In the case of a low pressure, the displacement is too small to be apparent in ultrasound imaging, especially in the regions far from the center of the vessel, causing a high displacement-measurement error. On the other hand, increasing the compression leads to a relatively large displacement in the regions near the center, which reduces the performance of the cross correlation-based methods. In this study, a non-rigid image registration-based technique is proposed to measure the tissue displacement for a relatively large compression. The results show that the error of the displacement measurement obtained by the proposed method is reduced by increasing the amount of compression, while the error of the cross correlation-based method rises for a relatively large compression. We also used the synthetic aperture imaging method, benefiting from the directivity diagram, to improve the image quality, especially in the superficial regions. The best relative root-mean-square errors (RMSE) of the proposed method and the adaptive cross correlation method were 4.5% and 6%, respectively. Consequently, the proposed algorithm outperforms the conventional method and reduces the relative RMSE by 25%.

  9. COMPLEX VARIABLE BOUNDARY ELEMENT METHOD: APPLICATIONS.

    USGS Publications Warehouse

    Hromadka, T.V.; Yen, C.C.; Guymon, G.L.

    1985-01-01

    The complex variable boundary element method (CVBEM) is used to approximate several potential problems where analytical solutions are known. A modeling result produced from the CVBEM is a measure of relative error in matching the known boundary condition values of the problem. A CVBEM error-reduction algorithm is used to reduce the relative error of the approximation by adding nodal points in boundary regions where error is large. From the test problems, overall error is reduced significantly by utilizing the adaptive integration algorithm.

  10. Satellite measurements of large-scale air pollution - Methods

    NASA Technical Reports Server (NTRS)

    Kaufman, Yoram J.; Ferrare, Richard A.; Fraser, Robert S.

    1990-01-01

    A technique for deriving large-scale pollution parameters from NIR and visible satellite remote-sensing images obtained over land or water is described and demonstrated on AVHRR images. The method is based on comparison of the upward radiances on clear and hazy days and permits simultaneous determination of aerosol optical thickness with error Δτ_a = 0.08-0.15, particle size with error ±100-200 nm, and single-scattering albedo with error ±0.03 (for albedos near 1), all assuming accurate and stable satellite calibration and stable surface reflectance between the clear and hazy days. In the analysis of AVHRR images of smoke from a forest fire, good agreement was obtained between satellite and ground-based (sun-photometer) measurements of aerosol optical thickness, but the satellite particle sizes were systematically greater than those measured from the ground. The AVHRR single-scattering albedo agreed well with a Landsat albedo for the same smoke.

  11. SU-G-BRB-03: Assessing the Sensitivity and False Positive Rate of the Integrated Quality Monitor (IQM) Large Area Ion Chamber to MLC Positioning Errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boehnke, E McKenzie; DeMarco, J; Steers, J

    2016-06-15

    Purpose: To examine both the IQM's sensitivity and false positive rate to varying MLC errors. By balancing these two characteristics, an optimal tolerance value can be derived. Methods: An un-modified SBRT Liver IMRT plan containing 7 fields was randomly selected as a representative clinical case. The active MLC positions for all fields were perturbed randomly from a square distribution of varying width (±1mm to ±5mm). These unmodified and modified plans were measured multiple times each by the IQM (a large area ion chamber mounted to a TrueBeam linac head). Measurements were analyzed relative to the initial, unmodified measurement. IQM readings are analyzed as a function of control points. In order to examine sensitivity to errors along a field's delivery, each measured field was divided into 5 groups of control points, and the maximum error in each group was recorded. Since the plans have known errors, we compared how well the IQM is able to differentiate between unmodified and error plans. ROC curves and logistic regression were used to analyze this, independent of thresholds. Results: A likelihood-ratio Chi-square test showed that the IQM could significantly predict whether a plan had MLC errors, with the exception of the beginning and ending control points. Upon further examination, we determined there was ramp-up occurring at the beginning of delivery. Once the linac AFC was tuned, the subsequent measurements (relative to a new baseline) showed significant (p <0.005) abilities to predict MLC errors. Using the area under the curve, we show the IQM's ability to detect errors increases with increasing MLC error (Spearman's Rho=0.8056, p<0.0001). The optimal IQM count thresholds from the ROC curves are ±3%, ±2%, and ±7% for the beginning, middle 3, and end segments, respectively. Conclusion: The IQM has proven to be able to detect not only MLC errors, but also differences in beam tuning (ramp-up). Partially supported by the Susan Scott Foundation.

  12. A new multiple air beam approach for in-process form error optical measurement

    NASA Astrophysics Data System (ADS)

    Gao, Y.; Li, R.

    2018-07-01

    In-process measurement can provide feedback for the control of workpiece precision in terms of size, roughness and, in particular, mid-spatial-frequency form error. Optical measurement methods are of the non-contact type and possess the high precision required for in-process form error measurement. In precision machining, coolant is commonly used to reduce heat generation and thermal deformation on the workpiece surface. However, the use of coolant creates an opaque coolant barrier if optical measurement methods are used. In this paper, a new multiple air beam approach is proposed. The new approach permits the displacement of coolant arriving from any direction and with a large thickness, i.e. with a large amount of coolant. The model, the working principle, and the key features of the new approach are presented. Based on the proposed new approach, a new in-process form error optical measurement system is developed. The coolant removal capability and the performance of this new multiple air beam approach are assessed. The experimental results show that the workpiece surface y(x, z) can be measured successfully with a standard deviation of up to 0.3011 µm even under a large amount of coolant (a coolant thickness of 15 mm), which means a relative uncertainty of 2σ up to 4.35% with the workpiece surface deeply immersed in the opaque coolant. The results also show that, in terms of coolant removal capability, air supply and air velocity, the proposed new approach improves on the previous single air beam approach by factors of 3.3, 1.3 and 5.3, respectively. The results demonstrate the significant improvements brought by the new multiple air beam method together with the developed measurement system.

  13. Comparison of MLC error sensitivity of various commercial devices for VMAT pre-treatment quality assurance.

    PubMed

    Saito, Masahide; Sano, Naoki; Shibata, Yuki; Kuriyama, Kengo; Komiyama, Takafumi; Marino, Kan; Aoki, Shinichi; Ashizawa, Kazunari; Yoshizawa, Kazuya; Onishi, Hiroshi

    2018-05-01

    The purpose of this study was to compare the MLC error sensitivity of various measurement devices for VMAT pre-treatment quality assurance (QA). This study used four QA devices (Scandidos Delta4, PTW 2D-array, iRT systems IQM, and PTW Farmer chamber). Nine retrospective VMAT plans were used and nine MLC error plans were generated for all nine original VMAT plans. The IQM and Farmer chamber were evaluated using the cumulative signal difference between the baseline and error-induced measurements. In addition, to investigate the sensitivity of the Delta4 device and the 2D-array, global gamma analysis (1%/1, 2%/2, and 3%/3 mm) and dose differences (1%, 2%, and 3%) were used between the baseline and error-induced measurements. Some deviations of the MLC error sensitivity for the evaluation metrics and MLC error ranges were observed. For the two ionization devices, the sensitivity of the IQM was significantly better than that of the Farmer chamber (P < 0.01), while both devices showed good linear correlation between the cumulative signal difference and the magnitude of MLC errors. The pass rates decreased as the magnitude of the MLC error increased for both Delta4 and 2D-array. However, small MLC errors for small aperture sizes, such as for lung SBRT, could not be detected using the loosest gamma criteria (3%/3 mm). Our results indicate that DD could be more useful than gamma analysis for daily MLC QA, and that a large-area ionization chamber has a greater advantage for detecting systematic MLC error because of the large sensitive volume, while the other devices could not detect this error for some cases with a small range of MLC error. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  14. Error model of geomagnetic-field measurement and extended Kalman-filter based compensation method

    PubMed Central

    Ge, Zhilei; Liu, Suyun; Li, Guopeng; Huang, Yan; Wang, Yanni

    2017-01-01

    Real-time, accurate measurement of the geomagnetic field is the foundation of high-precision geomagnetic navigation. Existing geomagnetic-field measurement models are essentially simplified models that cannot accurately describe the sources of measurement error. On the basis of a systematic analysis of the sources of geomagnetic-field measurement error, this paper builds a complete measurement model that incorporates the previously unconsidered geomagnetic daily variation field. The paper proposes an extended Kalman-filter based compensation method, which allows a large amount of measurement data to be used in estimating the parameters, yielding the statistically optimal solution. The experimental results showed that the compensated strength of the geomagnetic field remained close to the real value and that the measurement error was essentially kept within 5 nT. In addition, this compensation method has broad applicability owing to its easy data collection and its removal of the dependence on a high-precision measurement instrument. PMID:28445508
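
    As a rough illustration of the estimation step, the sketch below runs a linear Kalman filter that treats a constant sensor bias as the state and refines it from repeated vector measurements against a known reference field. This is a simplified stand-in for the paper's EKF over a fuller error model; the reference field, noise levels, and bias-only state are assumptions made for illustration.

```python
import numpy as np

def estimate_bias(B_meas, B_ref, r=25.0):
    """Kalman-filter estimate of a constant bias from samples z = B_ref + bias + noise."""
    x = np.zeros(3)                      # bias estimate (nT)
    P = np.eye(3) * 1e4                  # initial state uncertainty
    R = np.eye(3) * r                    # measurement noise covariance
    for z in B_meas:                     # measurement model is linear: H = I
        y = (z - B_ref) - x              # innovation
        K = P @ np.linalg.inv(P + R)     # Kalman gain
        x = x + K @ y
        P = (np.eye(3) - K) @ P
    return x

rng = np.random.default_rng(1)
B_ref = np.array([20000.0, 0.0, 45000.0])       # assumed local reference field (nT)
true_bias = np.array([120.0, -80.0, 60.0])
samples = B_ref + true_bias + rng.normal(0, 5, (200, 3))
print(estimate_bias(samples, B_ref))             # converges toward true_bias
```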

  15. The cost of adherence mismeasurement in serious mental illness: a claims-based analysis.

    PubMed

    Shafrin, Jason; Forma, Felicia; Scherer, Ethan; Hatch, Ainslie; Vytlacil, Edward; Lakdawalla, Darius

    2017-05-01

    To quantify how adherence mismeasurement affects the estimated impact of adherence on inpatient costs among patients with serious mental illness (SMI). Proportion of days covered (PDC) is a common claims-based measure of medication adherence. Because PDC does not measure medication ingestion, however, it may inaccurately measure adherence. We derived a formula to correct the bias that occurs in adherence-utilization studies resulting from errors in claims-based measures of adherence. We conducted a literature review to identify the correlation between gold-standard and claims-based adherence measures. We derived a bias-correction methodology to address claims-based medication adherence measurement error. We then applied this methodology to a case study of patients with SMI who initiated atypical antipsychotics in 2 large claims databases. Our literature review identified 6 studies of interest. The 4 most relevant ones measured correlations between 0.38 and 0.91. Our preferred estimate implies that the effect of adherence on inpatient spending estimated from claims data would understate the true effect by a factor of 5.3, if there were no other sources of bias. Although our procedure corrects for measurement error, such error also may amplify or mitigate other potential biases. For instance, if adherent patients are healthier than nonadherent ones, measurement error makes the resulting bias worse. On the other hand, if adherent patients are sicker, measurement error mitigates the other bias. Measurement error due to claims-based adherence measures is worth addressing, alongside other more widely emphasized sources of bias in inference.
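
    For intuition, the understatement factor follows from the classical errors-in-variables result: if the claims-based measure equals true adherence plus independent noise, a regression slope is attenuated by the reliability ratio, which equals the squared correlation between the claims-based and gold-standard measures. A minimal sketch (the rho values are drawn from the range quoted above; the 5.3x factor corresponds to rho ≈ 0.43):

```python
# Classical attenuation correction under the textbook errors-in-variables model.
def corrected_slope(observed_slope: float, rho: float) -> float:
    """Scale a slope estimated from a noisy regressor back to the true scale."""
    reliability = rho ** 2        # fraction of regressor variance that is signal
    return observed_slope / reliability

for rho in (0.38, 0.43, 0.91):
    print(f"rho={rho:.2f}: understatement factor = {1 / rho**2:.1f}x")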

  16. Theoretical and experimental errors for in situ measurements of plant water potential.

    PubMed

    Shackel, K A

    1984-07-01

    Errors in psychrometrically determined values of leaf water potential caused by tissue resistance to water vapor exchange and by lack of thermal equilibrium were evaluated using commercial in situ psychrometers (Wescor Inc., Logan, UT) on leaves of Tradescantia virginiana (L.). Theoretical errors in the dewpoint method of operation for these sensors were demonstrated. After correction for these errors, in situ measurements of leaf water potential indicated substantial errors caused by tissue resistance to water vapor exchange (4 to 6% reduction in apparent water potential per second of cooling time used) resulting from humidity depletions in the psychrometer chamber during the Peltier condensation process. These errors were avoided by use of a modified procedure for dewpoint measurement. Large changes in apparent water potential were caused by leaf and psychrometer exposure to moderate levels of irradiance. These changes were correlated with relatively small shifts in psychrometer zero offsets (-0.6 to -1.0 megapascals per microvolt), indicating substantial errors caused by nonisothermal conditions between the leaf and the psychrometer. Explicit correction for these errors is not possible with the current psychrometer design.

  17. Theoretical and Experimental Errors for In Situ Measurements of Plant Water Potential 1

    PubMed Central

    Shackel, Kenneth A.

    1984-01-01

    Errors in psychrometrically determined values of leaf water potential caused by tissue resistance to water vapor exchange and by lack of thermal equilibrium were evaluated using commercial in situ psychrometers (Wescor Inc., Logan, UT) on leaves of Tradescantia virginiana (L.). Theoretical errors in the dewpoint method of operation for these sensors were demonstrated. After correction for these errors, in situ measurements of leaf water potential indicated substantial errors caused by tissue resistance to water vapor exchange (4 to 6% reduction in apparent water potential per second of cooling time used) resulting from humidity depletions in the psychrometer chamber during the Peltier condensation process. These errors were avoided by use of a modified procedure for dewpoint measurement. Large changes in apparent water potential were caused by leaf and psychrometer exposure to moderate levels of irradiance. These changes were correlated with relatively small shifts in psychrometer zero offsets (−0.6 to −1.0 megapascals per microvolt), indicating substantial errors caused by nonisothermal conditions between the leaf and the psychrometer. Explicit correction for these errors is not possible with the current psychrometer design. PMID:16663701

  18. Error correcting coding-theory for structured light illumination systems

    NASA Astrophysics Data System (ADS)

    Porras-Aguilar, Rosario; Falaggis, Konstantinos; Ramos-Garcia, Ruben

    2017-06-01

    Intensity-discrete structured light illumination systems project a series of patterns for the estimation of the absolute fringe order using only the temporal grey-level sequence at each pixel. This work proposes the use of error-correcting codes for pixel-wise correction of measurement errors. The use of an error-correcting code is advantageous in many ways: it reduces the effect of random intensity noise, it corrects outliers near the borders of fringes that commonly occur when using intensity-discrete patterns, and it provides robustness in the case of severe measurement errors (even burst errors where whole frames are lost). The latter aspect is particularly interesting in environments with varying ambient light as well as in safety-critical applications, e.g. the monitoring of deformations of components in nuclear power plants, where high reliability is ensured even in the case of short measurement disruptions. A special form of burst error is so-called salt-and-pepper noise, which can largely be removed with error-correcting codes using only the information of a given pixel. The performance of this technique is evaluated using both simulations and experiments.
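
    As a concrete toy example of the principle (the specific code and matrices below are illustrative, not taken from the paper), a Hamming(7,4) code can protect a 4-bit fringe-order label so that any single flipped bit in the 7-bit temporal sequence at a pixel is corrected:

```python
import numpy as np

G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])     # generator matrix: codeword = m @ G (mod 2)
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])     # parity-check matrix: H @ c = 0 for codewords

def encode(msg4):
    return (np.array(msg4) @ G) % 2

def decode(recv7):
    r = np.array(recv7).copy()
    syndrome = (H @ r) % 2
    if syndrome.any():                          # nonzero syndrome -> locate error
        err_pos = next(j for j in range(7)
                       if np.array_equal(H[:, j], syndrome))
        r[err_pos] ^= 1                         # flip the corrupted bit
    return r[:4]                                # systematic: data in first 4 bits

c = encode([1, 0, 1, 1])
c[5] ^= 1                                       # corrupt one temporal sample
assert list(decode(c)) == [1, 0, 1, 1]          # fringe-order label recovered
```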

  19. Comparison of survey and photogrammetry methods to position gravity data, Yucca Mountain, Nevada

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ponce, D.A.; Wu, S.S.C.; Spielman, J.B.

    1985-12-31

    Locations of gravity stations at Yucca Mountain, Nevada, were determined by a survey using an electronic distance-measuring device and by a photogrammetric method. The data from both methods were compared to determine whether horizontal and vertical coordinates developed from photogrammetry are sufficiently accurate to position gravity data at the site. The results show that elevations from the photogrammetric data have a mean difference of 0.57 ± 0.70 m when compared with those of the surveyed data. Comparison of the horizontal control shows that the two methods agreed to within 0.01 minute. At a latitude of 45°, an error of 0.01 minute (18 m) corresponds to a gravity anomaly error of 0.015 mGal. Bouguer gravity anomalies are most sensitive to errors in elevation; thus elevation is the determining factor in choosing between photogrammetric and survey methods to position gravity data. Because gravity station positions are difficult to locate on aerial photographs, photogrammetric positions are not always exactly at the gravity station; therefore, large disagreements may appear when comparing electronic and photogrammetric measurements. A mean photogrammetric elevation error of 0.57 m corresponds to a gravity anomaly error of 0.11 mGal. Errors of 0.11 mGal are too large for high-precision or detailed gravity measurements but acceptable for regional work.
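
    The quoted magnitudes can be reproduced with the standard gravity gradients; a quick consistency check (the gradient values are textbook constants, not taken from the report):

```python
import math

elev_err_m = 0.57
# Bouguer-corrected elevation gradient: free-air (0.3086 mGal/m) minus the
# Bouguer slab term for crustal density 2.67 g/cm^3.
bouguer_gradient = 0.3086 - 0.0419 * 2.67
print(f"elevation term: {elev_err_m * bouguer_gradient:.2f} mGal")   # ~0.11 mGal

lat_err_km = 0.018                                  # 0.01 arcmin of latitude ~ 18 m
dg_dlat = 0.812 * math.sin(math.radians(2 * 45))    # normal-gravity gradient, mGal/km
print(f"latitude term: {lat_err_km * dg_dlat:.3f} mGal")             # ~0.015 mGal
```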

  20. Clinical vision characteristics of the congenital achromatopsias. I. Visual acuity, refractive error, and binocular status.

    PubMed

    Haegerstrom-Portnoy, G; Schneck, M E; Verdon, W A; Hewlett, S E

    1996-07-01

    Visual acuity, refractive error, and binocular status were determined in 43 autosomal recessive (AR) and 15 X-linked (XL) congenital achromats. The achromats were classified by color matching and spectral sensitivity data. Large interindividual variation in refractive error and visual acuity was present within each achromat group (complete AR, incomplete AR, and XL). However, the number of individuals with significant interocular acuity differences was very small. Most XLs are myopic; ARs show a wide range of refractive errors, from high myopia to high hyperopia. Acuity of the AR and XL groups was very similar. Large with-the-rule astigmatism is very common in achromats, particularly ARs. There is a close association between strabismus and interocular acuity differences in the ARs, with the fixating eye having better than average acuity. The large overlap in acuity and refractive error between XL and AR achromats suggests that these measures are less useful for differential diagnosis than generally indicated by the clinical literature.

  1. High Precision Metrology on the Ultra-Lightweight W 50.8 cm f/1.25 Parabolic SHARPI Primary Mirror using a CGH Null Lens

    NASA Technical Reports Server (NTRS)

    Antonille, Scott

    2004-01-01

    For potential use on the SHARPI mission, Eastman Kodak has delivered a 50.8 cm CA f/1.25 ultra-lightweight UV parabolic mirror with a surface figure error requirement of 6 nm RMS. We address the challenges involved in verifying and mapping the surface error of this large lightweight mirror to ±3 nm using a diffractive CGH null lens. Of main concern is the removal of large systematic errors resulting from surface deflections of the mirror due to gravity, as well as smaller contributions from system misalignment and reference optic errors. We present our efforts to characterize these errors and remove their wavefront error contribution in post-processing, as well as to minimize the uncertainty these calculations introduce. Data from Kodak and preliminary measurements from NASA Goddard are included.

  2. Chapter 11: Sample Design Cross-Cutting Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurnik, Charles W; Khawaja, M. Sami; Rushton, Josh

    Evaluating an energy efficiency program requires assessing the total energy and demand saved through all of the energy efficiency measures provided by the program. For large programs, the direct assessment of savings for each participant would be cost-prohibitive. Even if a program is small enough that a full census could be managed, such an undertaking would almost always be an inefficient use of evaluation resources. The bulk of this chapter describes methods for minimizing and quantifying sampling error. Measurement error and regression error are discussed in various contexts in other chapters.
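
    To make the sampling-error planning concrete, here is a minimal sketch of the standard sample-size formula used to hit a target confidence/relative-precision level (such as the common 90/10 criterion in program evaluation); the coefficient of variation cv is an assumed planning input, not a value from this chapter:

```python
import math
from scipy.stats import norm

def sample_size(cv, rel_precision=0.10, confidence=0.90, population=None):
    """Sample size for estimating mean savings to a relative precision target."""
    z = norm.ppf(1 - (1 - confidence) / 2)    # two-sided z-score (1.645 for 90%)
    n0 = (z * cv / rel_precision) ** 2        # infinite-population size
    if population is not None:                # finite-population correction
        n0 = n0 / (1 + n0 / population)
    return math.ceil(n0)

print(sample_size(cv=0.5))                    # classic 90/10 with cv=0.5 -> 68
```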

  3. Certification of ICI 1012 optical data storage tape

    NASA Technical Reports Server (NTRS)

    Howell, J. M.

    1993-01-01

    ICI has developed a unique and novel method of certifying a terabyte optical tape. The tape quality is guaranteed as a statistical upper limit on the probability of uncorrectable errors, called the Corrected Byte Error Rate (CBER). We developed this probabilistic method because the error rate cannot be measured directly, for two reasons. First, written data are indelible, so one cannot employ the write/read tests used for magnetic tape. Second, the anticipated error rates would need impractically large samples to measure accurately; for example, a rate of 1E-12 implies only one byte in error per tape. The archivability of ICI 1012 Data Storage Tape in general is well characterized and understood. Nevertheless, customers expect performance guarantees to be supported by test results on individual tapes. In particular, they need assurance that data are retrievable after decades in archive. This paper describes the mathematical basis, measurement apparatus and applicability of the certification method.
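
    The flavor of such a statistical upper limit can be sketched with the exact binomial bound (the "rule of three" at 95% confidence): observing zero uncorrectable errors in n decoded bytes still bounds the error rate. The sample size below is illustrative, not ICI's actual test size.

```python
def cber_upper_bound(n_bytes_tested: int, alpha: float = 0.05) -> float:
    """Upper confidence bound on the byte error rate given zero observed errors.

    Solves P(0 errors in n trials) = (1 - p)**n = alpha for p.
    """
    return 1.0 - alpha ** (1.0 / n_bytes_tested)

n = 3_000_000_000                       # e.g. a 3 GB sample read back from the tape
print(f"95% upper bound: {cber_upper_bound(n):.2e}")   # ~3/n ~ 1e-9
```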

  4. Assessment of Systematic Chromatic Errors that Impact Sub-1% Photometric Precision in Large-Area Sky Surveys

    DOE PAGES

    Li, T. S.; DePoy, D. L.; Marshall, J. L.; ...

    2016-06-01

    Here, we report that meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%–2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. In conclusion, the residual after correction is less than 0.3%. Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.

  5. Collaborated measurement of three-dimensional position and orientation errors of assembled miniature devices with two vision systems

    NASA Astrophysics Data System (ADS)

    Wang, Xiaodong; Zhang, Wei; Luo, Yi; Yang, Weimin; Chen, Liang

    2013-01-01

    In the assembly of miniature devices, the position and orientation of the parts to be assembled must be guaranteed during or after assembly. In some cases, the relative position or orientation errors among the parts cannot be measured from a single direction using a vision-based method, because of visual occlusion or because the features of the parts are distributed three-dimensionally. An automatic assembly system for precise miniature devices is introduced. In this modular assembly system, two machine vision systems were employed for measurement of the three-dimensionally distributed assembly errors. High-resolution CCD cameras and precision stages with high positioning repeatability were integrated to realize high-precision measurement in a large workspace. The two cameras worked in collaboration during the measurement procedure to eliminate the influence of movement errors of the rotational and translational stages. A set of templates was designed for calibration of the vision systems and evaluation of the system's measurement accuracy.

  6. Precise Positioning Method for Logistics Tracking Systems Using Personal Handy-Phone System Based on Mahalanobis Distance

    NASA Astrophysics Data System (ADS)

    Yokoi, Naoaki; Kawahara, Yasuhiro; Hosaka, Hiroshi; Sakata, Kenji

    Focusing on the Personal Handy-phone System (PHS) positioning service used in physical distribution logistics, an error offset method for improving positioning accuracy is developed. A disadvantage of PHS positioning is that measurement errors caused by the fluctuation of radio waves due to buildings around the terminal are large, ranging from several tens to several hundreds of meters. In this study, an error offset method is developed which learns patterns of positioning results (latitude and longitude) containing errors, together with the highest signal strength, at major logistics points in advance, and matches them with new data measured in actual distribution processes according to the Mahalanobis distance. The matching resolution is thereby improved to 1/40 that of the conventional error offset method.
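
    A minimal sketch of such Mahalanobis-distance matching, assuming each logistics point has been fingerprinted in advance with samples of (latitude, longitude, signal strength); the class and variable names are illustrative, not from the paper:

```python
import numpy as np

class PointMatcher:
    def __init__(self, fingerprints):
        """fingerprints: {name: (n_samples, 3) array of (lat, lon, signal)}."""
        self.stats = {}
        for name, X in fingerprints.items():
            mu = X.mean(axis=0)
            cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
            self.stats[name] = (mu, cov_inv)

    def match(self, x):
        def d2(item):
            mu, cov_inv = item[1]
            diff = x - mu
            return diff @ cov_inv @ diff       # squared Mahalanobis distance
        return min(self.stats.items(), key=d2)[0]

rng = np.random.default_rng(0)
fp = {"depot_A": rng.normal([35.60, 139.70, -70.0], [1e-3, 1e-3, 3.0], (50, 3)),
      "depot_B": rng.normal([35.62, 139.75, -80.0], [1e-3, 1e-3, 3.0], (50, 3))}
matcher = PointMatcher(fp)
print(matcher.match(np.array([35.6201, 139.7498, -78.0])))   # -> "depot_B"
```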

  7. Compensation for positioning error of industrial robot for flexible vision measuring system

    NASA Astrophysics Data System (ADS)

    Guo, Lei; Liang, Yajun; Song, Jincheng; Sun, Zengyu; Zhu, Jigui

    2013-01-01

    The positioning error of the robot is a main factor in the accuracy of a flexible coordinate measuring system consisting of a universal industrial robot and a visual sensor. Existing compensation methods for positioning error based on the kinematic model of the robot have a significant limitation: they are not effective over the whole measuring space. A new compensation method for the positioning error of the robot based on vision measuring techniques is presented. One approach is to set global control points in the measured field and attach an orientation camera to the vision sensor. The global control points are then measured by the orientation camera to calculate the transformation from the current position of the sensor system to the global coordinate system, and the positioning error of the robot is compensated. Another approach is to set control points on the vision sensor and two large-field cameras behind the sensor. The three-dimensional coordinates of the control points are then measured, and the pose and position of the sensor are calculated in real time. Experimental results show that the RMS spatial positioning error is 3.422 mm with the single camera and 0.031 mm with dual cameras. It is concluded that the single-camera algorithm needs improvement for higher accuracy, whereas the accuracy of the dual-camera method is suitable for application.

  8. Zonal average earth radiation budget measurements from satellites for climate studies

    NASA Technical Reports Server (NTRS)

    Ellis, J. S.; Haar, T. H. V.

    1976-01-01

    Data from 29 months of satellite radiation budget measurements, taken intermittently over the period 1964 through 1971, are composited into mean month, season and annual zonally averaged meridional profiles. Individual months, which comprise the 29 month set, were selected as representing the best available total flux data for compositing into large scale statistics for climate studies. A discussion of spatial resolution of the measurements along with an error analysis, including both the uncertainty and standard error of the mean, are presented.

  9. Groundwater flow in the transition zone between freshwater and saltwater: a field-based study and analysis of measurement errors

    NASA Astrophysics Data System (ADS)

    Post, Vincent E. A.; Banks, Eddie; Brunke, Miriam

    2018-02-01

    The quantification of groundwater flow near the freshwater-saltwater transition zone at the coast is difficult because of variable-density effects and tidal dynamics. Head measurements were collected along a transect perpendicular to the shoreline at a site south of the city of Adelaide, South Australia, to determine the transient flow pattern. This paper presents a detailed overview of the measurement procedure, data post-processing methods and uncertainty analysis in order to assess how measurement errors affect the accuracy of the inferred flow patterns. A particular difficulty encountered was that some of the piezometers were leaky, which necessitated regular measurements of the electrical conductivity and temperature of the water inside the wells to correct for density effects. Other difficulties included failure of pressure transducers, data logger clock drift and operator error. The data obtained were sufficiently accurate to show that there is net seaward horizontal flow of freshwater in the top part of the aquifer, and a net landward flow of saltwater in the lower part. The vertical flow direction alternated with the tide, but due to the large uncertainty of the head gradients and density terms, no net flow could be established with any degree of confidence. While the measurement problems were amplified under the prevailing conditions at the site, similar errors can lead to large uncertainties everywhere. The methodology outlined acknowledges the inherent uncertainty involved in measuring groundwater flow. It can also assist to establish the accuracy requirements of the experimental setup.
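
    As a pointer to the kind of density correction involved, the standard conversion of a point-water head measured in saline water to an equivalent freshwater head is sketched below; this is the textbook relation, and the variable names are illustrative rather than the paper's exact procedure:

```python
def freshwater_head(point_head_m: float, screen_elev_m: float,
                    rho_borehole: float, rho_fresh: float = 1000.0) -> float:
    """Equivalent freshwater head: h_f = (rho_i/rho_f)*h_i - ((rho_i-rho_f)/rho_f)*z_i.

    point_head_m  : measured point-water head (m)
    screen_elev_m : elevation of the piezometer screen (m, same datum)
    rho_borehole  : density of water in the borehole (kg/m^3), from EC and T
    """
    return (rho_borehole / rho_fresh) * point_head_m \
        - ((rho_borehole - rho_fresh) / rho_fresh) * screen_elev_m

print(freshwater_head(2.0, -30.0, 1025.0))    # 2.05 + 0.75 = 2.80 m
```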

  10. Resolution-enhancement and sampling error correction based on molecular absorption line in frequency scanning interferometry

    NASA Astrophysics Data System (ADS)

    Pan, Hao; Qu, Xinghua; Shi, Chunzhao; Zhang, Fumin; Li, Yating

    2018-06-01

    The non-uniform interval resampling method has been widely used in frequency-modulated continuous-wave (FMCW) laser ranging. In large-bandwidth, long-distance measurements, the range peak deteriorates owing to fiber dispersion mismatch. In this study, we analyze the frequency-sampling error caused by the mismatch and measure it using the spectroscopy of a molecular absorption line serving as a frequency reference. Using adjacent-point replacement and spline interpolation, the sampling errors are eliminated. The results demonstrate that the proposed method is suitable for resolution enhancement and high-precision measurement. Moreover, using the proposed method, we achieved an absolute-distance precision better than 45 μm over an 8 m range.
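
    A conceptual sketch of the resampling step: a beat signal sampled uniformly in time is re-sampled at equal increments of the optical frequency, after which it becomes a pure tone and yields a sharp FFT range peak. The sweep model and arrays below are synthetic stand-ins, not the paper's data.

```python
import numpy as np
from scipy.interpolate import CubicSpline

t = np.linspace(0.0, 1.0, 4000)
nu = t + 0.05 * t**2                      # nonlinear optical-frequency sweep (a.u.)
signal = np.cos(2 * np.pi * 60 * nu)      # beat signal: pure tone in nu, chirped in t

# Re-sample onto a uniform frequency grid (nu must be strictly increasing).
nu_uniform = np.linspace(nu[0], nu[-1], t.size)
resampled = CubicSpline(nu, signal)(nu_uniform)   # pure tone again -> sharp peak
```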

  11. MO-FG-202-05: Identifying Treatment Planning System Errors in IROC-H Phantom Irradiations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerns, J; Followill, D; Howell, R

    Purpose: Treatment Planning System (TPS) errors can affect large numbers of cancer patients receiving radiation therapy. Using an independent recalculation system, the Imaging and Radiation Oncology Core-Houston (IROC-H) can identify institutions that have not sufficiently modelled their linear accelerators in their TPS model. Methods: Linear accelerator point measurement data from IROC-H’s site visits was aggregated and analyzed from over 30 linear accelerator models. Dosimetrically similar models were combined to create “classes”. The class data was used to construct customized beam models in an independent treatment dose verification system (TVS). Approximately 200 head and neck phantom plans from 2012 to 2015 were recalculated using this TVS. Comparison of plan accuracy was evaluated by comparing the measured dose to the institution’s TPS dose as well as the TVS dose. In cases where the TVS was more accurate than the institution by an average of >2%, the institution was identified as having a non-negligible TPS error. Results: Of the ∼200 recalculated plans, the average improvement using the TVS was ∼0.1%; i.e. the recalculation, on average, slightly outperformed the institution’s TPS. Of all the recalculated phantoms, 20% were identified as having a non-negligible TPS error. Fourteen plans failed current IROC-H criteria; the average TVS improvement of the failing plans was ∼3% and 57% were found to have non-negligible TPS errors. Conclusion: IROC-H has developed an independent recalculation system to identify institutions that have considerable TPS errors. A large number of institutions were found to have non-negligible TPS errors. Even institutions that passed IROC-H criteria could be identified as having a TPS error. Resolution of such errors would improve dose delivery for a large number of IROC-H phantoms and, ultimately, patients.

  12. Mars approach navigation using Doppler and range measurements to surface beacons and orbiting spacecraft

    NASA Technical Reports Server (NTRS)

    Thurman, Sam W.; Estefan, Jeffrey A.

    1991-01-01

    Approximate analytical models are developed and used to construct an error covariance analysis for investigating the range of orbit determination accuracies that might be achieved for typical Mars approach trajectories. The sensitivity of orbit determination accuracy to beacon/orbiter position errors and to small spacecraft force modeling errors is also investigated. The results indicate that the orbit determination performance obtained from both Doppler and range data is a strong function of the inclination of the approach trajectory to the Martian equator, for surface beacons, and, for orbiters, the inclination relative to the orbital plane. Large variations in performance were also observed for different approach velocity magnitudes; Doppler data in particular were found to perform poorly in determining the downtrack (along the direction of flight) component of spacecraft position. In addition, it was found that small spacecraft acceleration modeling errors can induce large errors in the Doppler-derived downtrack position estimate.

  13. Overview of the TOPEX/Poseidon Platform Harvest Verification Experiment

    NASA Technical Reports Server (NTRS)

    Morris, Charles S.; DiNardo, Steven J.; Christensen, Edward J.

    1995-01-01

    An overview is given of the in situ measurement system installed on Texaco's Platform Harvest for verification of the sea level measurement from the TOPEX/Poseidon satellite. The prelaunch error budget suggested that the total root mean square (RMS) error due to measurements made at this verification site would be less than 4 cm. The actual error budget for the verification site is within these original specifications. However, evaluation of the sea level data from three measurement systems at the platform has resulted in unexpectedly large differences between the systems. Comparison of the sea level measurements from the different tide gauge systems has led to a better understanding of the problems of measuring sea level in relatively deep ocean. As of May 1994, the Platform Harvest verification site has successfully supported 60 TOPEX/Poseidon overflights.

  14. The impact of reflectivity correction and conversion methods to improve precipitation estimation by weather radar for an extreme low-land Mesoscale Convective System

    NASA Astrophysics Data System (ADS)

    Hazenberg, Pieter; Leijnse, Hidde; Uijlenhoet, Remko

    2014-05-01

    Between 25 and 27 August 2010, a long-duration mesoscale convective system was observed above the Netherlands. For most of the country this led to over 15 hours of near-continuous precipitation, which resulted in total event accumulations exceeding 150 mm in the eastern part of the Netherlands. Such accumulations are among the largest sums ever recorded in this country and gave rise to local flooding. Measuring precipitation by weather radar within such mesoscale convective systems is known to be a challenge, since measurements are affected by multiple sources of error. For the current event, the operational weather radar rainfall product estimated only about 30% of the actual amount of precipitation as measured by rain gauges. In the current presentation we try to identify what gave rise to such large underestimations. In general, weather radar measurement errors can be subdivided into two groups: 1) errors affecting the volumetric reflectivity measurements, and 2) errors related to the conversion of reflectivity values into rainfall intensity and attenuation estimates. To correct for the first group of errors, the quality of the weather radar reflectivity data was improved by successively correcting for 1) clutter and anomalous propagation, 2) radar calibration, 3) wet radome attenuation, 4) signal attenuation and 5) the vertical profile of reflectivity. Such consistent corrections are generally not performed by operational meteorological services. The results show a large improvement in the quality of the precipitation data; however, still only ~65% of the actually observed accumulations was estimated. To further improve the quality of the precipitation estimates, the second group of errors was corrected by making use of disdrometer measurements taken in close vicinity of the radar. Based on these data, the parameters of a normalized drop size distribution were estimated for the total event as well as for each precipitation type separately (convective, stratiform and undefined). These were then used to obtain coherent parameter sets for the radar reflectivity-rainfall rate (Z-R) and radar reflectivity-attenuation (Z-k) relationships, specifically applicable to this event. By applying a single parameter set to correct for both sources of error, the quality of the rainfall product improves further, recovering >80% of the observed accumulations. However, differentiating between precipitation types gives no better results than using the operational relationships. This leads to the question: how representative are local disdrometer observations for correcting large-scale weather radar measurements? To tackle this question, a Monte Carlo approach was used to generate >10000 sets of normalized drop size distribution parameters and to assess their impact on the estimated precipitation amounts. The results show that a large number of parameter sets yield improved precipitation estimates from the weather radar that closely resemble the observations. However, these optimal sets vary considerably compared to those obtained from the local disdrometer measurements.
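
    The reflectivity-to-rain-rate step discussed above can be sketched as follows, using the generic Marshall-Palmer coefficients (a=200, b=1.6) as a stand-in for the event-specific Z-R parameters derived from the disdrometer data:

```python
import numpy as np

def rain_rate(dbz: np.ndarray, a: float = 200.0, b: float = 1.6) -> np.ndarray:
    """Invert Z = a * R**b to get rain rate R (mm/h) from reflectivity in dBZ."""
    z_linear = 10.0 ** (dbz / 10.0)        # dBZ -> linear Z (mm^6 m^-3)
    return (z_linear / a) ** (1.0 / b)

print(rain_rate(np.array([20.0, 35.0, 50.0])))   # light, moderate, heavy rain
```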

  15. First measurements of error fields on W7-X using flux surface mapping

    DOE PAGES

    Lazerson, Samuel A.; Otte, Matthias; Bozhenkov, Sergey; ...

    2016-08-03

    Error fields have been detected and quantified using the flux surface mapping diagnostic system on Wendelstein 7-X (W7-X). A low-field ῑ = 1/2 magnetic configuration (where ῑ = ι/2π), sensitive to error fields, was developed in order to detect their presence using the flux surface mapping diagnostic. In this configuration, a vacuum flux surface with rotational transform of n/m = 1/2 is created at the mid-radius of the vacuum flux surfaces. If no error fields were present, only a vanishingly small n/m = 5/10 island chain would appear. Modeling indicates that if an n = 1 perturbing field is applied by the trim coils, a large n/m = 1/2 island chain will be opened. This island chain is used to create a perturbation large enough to be imaged by the diagnostic. Phase and amplitude scans of the applied field allow the measurement of a small ~0.04 m intrinsic island chain with a 130° phase relative to the first module of the W7-X experiment. Lastly, these error fields are determined to be small and easily correctable by the trim coil system.

  16. Research on precision grinding technology of large scale and ultra thin optics

    NASA Astrophysics Data System (ADS)

    Zhou, Lian; Wei, Qiancai; Li, Jie; Chen, Xianhua; Zhang, Qinghua

    2018-03-01

    The flatness and parallelism errors of large-scale, ultra-thin optics have an important influence on subsequent polishing efficiency and accuracy. In order to realize high-precision grinding of these ductile elements, a low-deformation vacuum chuck was designed first, which clamps the optics with high supporting rigidity over the full aperture. The optics was then plane-ground under vacuum adsorption. After machining, the vacuum system was turned off, and the form error of the optics was measured on-machine with a displacement sensor after elastic restitution. The flatness was then converged to high accuracy by compensation machining, whose trajectories were generated from the measurement result. To obtain high parallelism, the optics was turned over and compensation-ground using the form error of the vacuum chuck. Finally, a grinding experiment on large-scale, ultra-thin fused silica optics with an aperture of 430 mm × 430 mm × 10 mm was performed. The best P-V flatness of the optics was below 3 μm, and the parallelism was below 3″. This machining technique has been applied in batch grinding of large-scale, ultra-thin optics.

  17. Improved motor control method with measurements of fiber optics gyro (FOG) for dual-axis rotational inertial navigation system (RINS).

    PubMed

    Song, Tianxiao; Wang, Xueyun; Liang, Wenwei; Xing, Li

    2018-05-14

    Benefiting from its frame structure, a RINS can improve navigation accuracy by modulating the inertial sensor errors with a proper rotation scheme. In the traditional motor control method, the measurements of the photoelectric encoder are adopted to drive the inertial measurement unit (IMU) to rotate. However, when the carrier conducts heading motion, the inertial sensor errors may no longer be zero-mean in the navigation coordinate frame. Meanwhile, some high-speed carriers, such as aircraft, need to roll a certain angle to balance the centrifugal force during heading motion, which may result in non-negligible coupling errors caused by the FOG installation errors and scale factor errors. Moreover, the error parameters of the FOG are susceptible to temperature and magnetic field, and pre-calibration is a time-consuming process that cannot completely suppress the FOG-related errors. In this paper, an improved motor control method using the measurements of the FOG is proposed to address these problems, with which the outer frame can insulate the carrier's roll motion while the inner frame simultaneously achieves rotary modulation on the basis of insulating the heading motion. The results of turntable experiments indicate that the navigation performance of the dual-axis RINS is significantly improved over the traditional method and can be maintained even with large FOG installation errors and scale factor errors, proving that the proposed method relaxes the requirements on the accuracy of the FOG-related errors.

  18. Determination of Barometric Altimeter Errors for the Orion Exploration Flight Test-1 Entry

    NASA Technical Reports Server (NTRS)

    Brown, Denise L.; Bunoz, Jean-Philippe; Gay, Robert

    2012-01-01

    The Exploration Flight Test 1 (EFT-1) mission is the unmanned flight test for the upcoming Multi-Purpose Crew Vehicle (MPCV). During entry, the EFT-1 vehicle will trigger several Landing and Recovery System (LRS) events, such as parachute deployment, based on on-board altitude information. The primary altitude source is the filtered navigation solution updated with GPS measurement data. The vehicle also has three barometric altimeters that will be used to measure atmospheric pressure during entry. In the event that GPS data are not available during entry, the altitude derived from the barometric altimeter pressure will be used to trigger deployment of the drogue and main parachutes. It is therefore important to understand the impact of error sources on the pressure measured by the barometric altimeters and on the altitude derived from that pressure. The error sources for the barometric altimeters are not independent, and many error sources result in bias in a specific direction, so conventional error budget methods could not be applied. Instead, a high-fidelity Monte Carlo simulation was performed, and error bounds were determined from the results of this analysis. Aerodynamic errors were the largest single contributor to the error budget for the barometric altimeters. The large errors drove a change to the altitude trigger setpoint for forward bay cover (FBC) jettison deployment.
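
    For reference, the pressure-to-altitude conversion underlying a barometric backup trigger can be sketched with the standard-atmosphere troposphere model; this is the generic ISA relation, not the actual Orion flight-software formulation:

```python
# ISA troposphere constants (SI units).
P0, T0 = 101325.0, 288.15      # sea-level pressure (Pa) and temperature (K)
L, g, R = 0.0065, 9.80665, 287.053   # lapse rate, gravity, gas constant for air

def pressure_altitude(p_pa: float) -> float:
    """Altitude (m) from static pressure via the ISA lapse-rate relation."""
    return (T0 / L) * (1.0 - (p_pa / P0) ** (R * L / g))

print(f"{pressure_altitude(26500.0):.0f} m")   # ~10 km for 26.5 kPa
```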

  19. Measuring Diameters Of Large Vessels

    NASA Technical Reports Server (NTRS)

    Currie, James R.; Kissel, Ralph R.; Oliver, Charles E.; Smith, Earnest C.; Redmon, John W., Sr.; Wallace, Charles C.; Swanson, Charles P.

    1990-01-01

    Computerized apparatus produces accurate results quickly. Apparatus measures diameter of tank or other large cylindrical vessel, without prior knowledge of exact location of cylindrical axis. Produces plot of inner circumference, estimate of true center of vessel, data on radius, diameter of best-fit circle, and negative and positive deviations of radius from circle at closely spaced points on circumference. Eliminates need for time-consuming and error-prone manual measurements.
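
    A least-squares best-fit circle of the kind this apparatus reports can be computed with the classic Kasa linearization; a minimal sketch with synthetic data and illustrative dimensions:

```python
import numpy as np

def fit_circle(x, y):
    """Kasa fit: linearize (x-cx)^2 + (y-cy)^2 = r^2 and solve by least squares."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, r

# Synthetic circumference points: off-center circle plus measurement noise (cm).
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
x = 5.0 + 420.0 * np.cos(theta) + np.random.normal(0, 0.05, theta.size)
y = -3.0 + 420.0 * np.sin(theta) + np.random.normal(0, 0.05, theta.size)

cx, cy, r = fit_circle(x, y)
deviations = np.hypot(x - cx, y - cy) - r   # signed radial deviation at each point
print(cx, cy, r, deviations.std())
```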

  20. Accounting for optical errors in microtensiometry.

    PubMed

    Hinton, Zachary R; Alvarez, Nicolas J

    2018-09-15

    Drop shape analysis (DSA) techniques measure interfacial tension subject to error in image analysis and the optical system. While considerable efforts have been made to minimize image analysis errors, very little work has treated optical errors. There are two main sources of error when considering the optical system: the angle of misalignment and the choice of focal plane. Due to the convoluted nature of these sources, small angles of misalignment can lead to large errors in measured curvature. We demonstrate, using microtensiometry, the contributions of these sources to measured errors in radius and, more importantly, deconvolute the effects of misalignment and focal plane. Our findings are expected to have broad implications for all optical techniques measuring interfacial curvature. A geometric model is developed to analytically determine the contributions of misalignment angle and choice of focal plane to measurement error for spherical cap interfaces. This work utilizes a microtensiometer to validate the geometric model and to quantify the effect of both sources of error. For the case of a microtensiometer, an empirical calibration is demonstrated that corrects for optical errors and drastically simplifies implementation. The combination of geometric modeling and experimental results reveals a convoluted relationship between the true and measured interfacial radius as a function of the misalignment angle and choice of focal plane. The validated geometric model produces a full operating window that is strongly dependent on the capillary radius and spherical cap height. In all cases, the contribution of optical errors is minimized when the height of the spherical cap is equal to the capillary radius, i.e. a hemispherical interface. Understanding these errors allows for the correct measurement of interfacial curvature and interfacial tension regardless of experimental setup. For the case of microtensiometry, this greatly decreases the time for experimental setup and increases experimental accuracy. In a broad sense, this work outlines the importance of optical errors in all DSA techniques. More specifically, these results have important implications for all microscale and microfluidic measurements of interface curvature. Copyright © 2018 Elsevier Inc. All rights reserved.

  1. IMRT QA: Selecting gamma criteria based on error detection sensitivity.

    PubMed

    Steers, Jennifer M; Fraass, Benedick A

    2016-04-01

    The gamma comparison is widely used to evaluate the agreement between measurements and treatment planning system calculations in patient-specific intensity modulated radiation therapy (IMRT) quality assurance (QA). However, recent publications have raised concerns about the lack of sensitivity when employing commonly used gamma criteria. Understanding the actual sensitivity of a wide range of different gamma criteria may allow the definition of more meaningful gamma criteria and tolerance limits in IMRT QA. We present a method that allows the quantitative determination of gamma criteria sensitivity to induced errors which can be applied to any unique combination of device, delivery technique, and software utilized in a specific clinic. A total of 21 DMLC IMRT QA measurements (ArcCHECK®, Sun Nuclear) were compared to QA plan calculations with induced errors. Three scenarios were studied: MU errors, multi-leaf collimator (MLC) errors, and the sensitivity of the gamma comparison to changes in penumbra width. Gamma comparisons were performed between measurements and error-induced calculations using a wide range of gamma criteria, resulting in a total of over 20 000 gamma comparisons. Gamma passing rates for each error class and case were graphed against error magnitude to create error curves in order to represent the range of missed errors in routine IMRT QA using 36 different gamma criteria. This study demonstrates that systematic errors and case-specific errors can be detected by the error curve analysis. Depending on the location of the error curve peak (e.g., not centered about zero), 3%/3 mm threshold = 10% at 90% pixels passing may miss errors as large as 15% MU errors and ±1 cm random MLC errors for some cases. As the dose threshold parameter was increased for a given %Diff/distance-to-agreement (DTA) setting, error sensitivity was increased by up to a factor of two for select cases. This increased sensitivity with increasing dose threshold was consistent across all studied combinations of %Diff/DTA. Criteria such as 2%/3 mm and 3%/2 mm with a 50% threshold at 90% pixels passing are shown to be more appropriately sensitive without being overly strict. However, a broadening of the penumbra by as much as 5 mm in the beam configuration was difficult to detect with commonly used criteria, as well as with the previously mentioned criteria utilizing a threshold of 50%. We have introduced the error curve method, an analysis technique which allows the quantitative determination of gamma criteria sensitivity to induced errors. The application of the error curve method using DMLC IMRT plans measured on the ArcCHECK® device demonstrated that large errors can potentially be missed in IMRT QA with commonly used gamma criteria (e.g., 3%/3 mm, threshold = 10%, 90% pixels passing). Additionally, increasing the dose threshold value can offer dramatic increases in error sensitivity. This approach may allow the selection of more meaningful gamma criteria for IMRT QA and is straightforward to apply to other combinations of devices and treatment techniques.
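
    To make the criteria concrete, here is a toy one-dimensional global-gamma computation (brute force; real QA software operates on 2D/3D dose grids with search optimizations, and the profiles below are synthetic):

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, x, dose_crit=0.03, dta_mm=3.0,
                    threshold=0.10):
    """Global 1D gamma: %Diff normalized to max reference dose, DTA in mm."""
    norm = dose_ref.max()
    gammas = []
    for i, d_r in enumerate(dose_ref):
        if d_r < threshold * norm:           # skip points below the dose threshold
            continue
        dist2 = ((x - x[i]) / dta_mm) ** 2
        dose2 = ((dose_eval - d_r) / (dose_crit * norm)) ** 2
        gammas.append(np.sqrt((dist2 + dose2).min()))
    return (np.array(gammas) <= 1.0).mean()

x = np.arange(0.0, 100.0, 1.0)                     # positions in mm
ref = np.exp(-((x - 50) / 20) ** 2)                # reference profile
meas = 1.02 * np.exp(-((x - 51) / 20) ** 2)        # 2% hotter, shifted 1 mm
print(f"3%/3mm pass rate: {gamma_pass_rate(ref, meas, x):.3f}")
```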

  2. Recursive Construction of Noiseless Subsystem for Qudits

    NASA Astrophysics Data System (ADS)

    Güngördü, Utkan; Li, Chi-Kwong; Nakahara, Mikio; Poon, Yiu-Tung; Sze, Nung-Sing

    2014-03-01

    When the environmental noise acting on a system has certain symmetries, a subsystem of the total system can avoid errors. Encoding information into such a subsystem is advantageous since it does not require any error syndrome measurements, which may themselves introduce further errors to the system. However, utilizing such a subsystem becomes impractical for large systems as the number of qudits increases. A recursive scheme offers a solution to this problem. Here, we review the recursive construction introduced in earlier work, which can asymptotically protect 1/d of the qudits in the system against collective errors.

  3. Measurement error in environmental epidemiology and the shape of exposure-response curves.

    PubMed

    Rhomberg, Lorenz R; Chandalia, Juhi K; Long, Christopher M; Goodman, Julie E

    2011-09-01

    Both classical and Berkson exposure measurement errors as encountered in environmental epidemiology data can result in biases in fitted exposure-response relationships that are large enough to affect the interpretation and use of the apparent exposure-response shapes in risk assessment applications. A variety of sources of potential measurement error exist in the process of estimating individual exposures to environmental contaminants, and the authors review the evaluation in the literature of the magnitudes and patterns of exposure measurement errors that prevail in actual practice. It is well known among statisticians that random errors in the values of independent variables (such as exposure in exposure-response curves) may tend to bias regression results. For increasing curves, this effect tends to flatten and apparently linearize what is in truth a steeper and perhaps more curvilinear or even threshold-bearing relationship. The degree of bias is tied to the magnitude of the measurement error in the independent variables. It has been shown that the degree of bias known to apply to actual studies is sufficient to produce a false linear result, and that although nonparametric smoothing and other error-mitigating techniques may assist in identifying a threshold, they do not guarantee detection of a threshold. The consequences of this could be great, as it could lead to a misallocation of resources towards regulations that do not offer any benefit to public health.
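
    A quick simulation (illustrative, not from the paper) shows the flattening effect: classical noise added to the exposure makes a true threshold-shaped exposure-response look nearly linear when the response is averaged within bins of the measured exposure:

```python
import numpy as np

rng = np.random.default_rng(0)
true_exposure = rng.uniform(0, 10, 100_000)
response = np.clip(true_exposure - 5.0, 0.0, None)       # true threshold at x = 5
measured = true_exposure + rng.normal(0.0, 2.0, true_exposure.size)  # classical error

bins = np.linspace(0, 10, 21)
idx = np.digitize(measured, bins)
smoothed = [response[idx == k].mean() for k in range(1, bins.size)
            if (idx == k).any()]
# Plotting `smoothed` against the bin centers shows a near-linear ramp starting
# well below the true threshold - the attenuation/linearization bias described above.
```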

  4. Spectral estimates of net radiation and soil heat flux

    USGS Publications Warehouse

    Daughtry, C.S.T.; Kustas, William P.; Moran, M.S.; Pinter, P. J.; Jackson, R. D.; Brown, P.W.; Nichols, W.D.; Gay, L.W.

    1990-01-01

    Conventional methods of measuring surface energy balance are point measurements and represent only a small area. Remote sensing offers a potential means of measuring outgoing fluxes over large areas at the spatial resolution of the sensor. The objective of this study was to estimate net radiation (Rn) and soil heat flux (G) using remotely sensed multispectral data acquired from an aircraft over large agricultural fields. Ground-based instruments measured Rn and G at nine locations along the flight lines. Incoming fluxes were also measured by ground-based instruments. Outgoing fluxes were estimated using remotely sensed data. Remote Rn, estimated as the algebraic sum of incoming and outgoing fluxes, slightly underestimated Rn measured by the ground-based net radiometers. The mean absolute errors for remote Rn minus measured Rn were less than 7%. Remote G, estimated as a function of a spectral vegetation index and remote Rn, slightly overestimated measured G; however, the mean absolute error for remote G was 13%. Some of the differences between measured and remote values of Rn and G are associated with differences in instrument designs and measurement techniques. The root mean square error for available energy (Rn - G) was 12%. Thus, methods using both ground-based and remotely sensed data can provide reliable estimates of the available energy which can be partitioned into sensible and latent heat under nonadvective conditions. © 1990.

  5. An adaptive filter method for spacecraft using gravity assist

    NASA Astrophysics Data System (ADS)

    Ning, Xiaolin; Huang, Panpan; Fang, Jiancheng; Liu, Gang; Ge, Shuzhi Sam

    2015-04-01

    Celestial navigation (CeleNav) has been successfully used during gravity assist (GA) flyby for orbit determination in many deep space missions. Due to spacecraft attitude errors, ephemeris errors, the camera center-finding bias, and the frequency of the images before and after the GA flyby, the statistics of measurement noise cannot be accurately determined, and yet have time-varying characteristics, which may introduce large estimation error and even cause filter divergence. In this paper, an unscented Kalman filter (UKF) with adaptive measurement noise covariance, called ARUKF, is proposed to deal with this problem. ARUKF scales the measurement noise covariance according to the changes in innovation and residual sequences. Simulations demonstrate that ARUKF is robust to the inaccurate initial measurement noise covariance matrix and time-varying measurement noise. The impact factors in the ARUKF are also investigated.
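
    A minimal sketch of the general innovation-based idea behind such adaptation (the paper's ARUKF scaling rule may differ; the function, window handling, and diagonal floor here are illustrative assumptions):

```python
import numpy as np

def adapt_R(innovations, H, P, floor=1e-9):
    """Estimate the measurement-noise covariance R from a window of innovations.

    innovations : (N, m) array of recent nu_k = z_k - h(x_k) vectors
    H, P        : measurement Jacobian and state covariance at the current step
    """
    nu = np.atleast_2d(innovations)
    C = (nu.T @ nu) / nu.shape[0]        # sample innovation covariance
    R_hat = C - H @ P @ H.T              # subtract the state-predicted part
    d = np.maximum(np.diag(R_hat).copy(), floor)
    np.fill_diagonal(R_hat, d)           # guard against non-positive diagonals
    return R_hat
```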

  6. Study on the three-station typical network deployments of workspace Measurement and Positioning System

    NASA Astrophysics Data System (ADS)

    Xiong, Zhi; Zhu, J. G.; Xue, B.; Ye, Sh. H.; Xiong, Y.

    2013-10-01

    As a novel network coordinate measurement system based on multi-directional positioning, the workspace Measurement and Positioning System (wMPS) has the outstanding advantages of good parallelism, wide measurement range and high measurement accuracy, which make it a research hotspot and an important development direction in the field of large-scale measurement. Since station deployment has a significant impact on measurement range and accuracy, and also affects the cost of use, this paper studies methods for optimizing station deployment. First, a positioning error model was established. Then, focusing on the small network consisting of three stations, the typical deployments and their error distribution characteristics were studied. Finally, a simulated fuselage was measured using the typical deployments at an industrial site, and the results were compared with a laser tracker. The comparison shows that, under existing prototype conditions, the I_3 deployment, in which the three stations are distributed along a straight line, has an average error of 0.30 mm and a maximum error of 0.50 mm over a range of 12 m. Meanwhile, the C_3 deployment, in which the three stations are uniformly distributed along a semicircle, has an average error of 0.17 mm and a maximum error of 0.28 mm. Clearly, the C_3 deployment controls precision better than the I_3 deployment. This research provides effective theoretical support for global measurement network optimization in future work.

  7. Problems with small area surveys: lensing covariance of supernova distance measurements.

    PubMed

    Cooray, Asantha; Huterer, Dragan; Holz, Daniel E

    2006-01-20

    While luminosity distances from type Ia supernovae (SNe) are a powerful probe of cosmology, the accuracy with which these distances can be measured is limited by cosmic magnification due to gravitational lensing by the intervening large-scale structure. Spatial clustering of foreground mass leads to correlated errors in SNe distances. By including the full covariance matrix of SNe, we show that future wide-field surveys will remain largely unaffected by lensing correlations. However, "pencil beam" surveys, and those with narrow (but possibly long) fields of view, can be strongly affected. For a survey with 30 arcmin mean separation between SNe, lensing covariance leads to a approximately 45% increase in the expected errors in dark energy parameters.

  8. A video multitracking system for quantification of individual behavior in a large fish shoal: advantages and limits.

    PubMed

    Delcourt, Johann; Becco, Christophe; Vandewalle, Nicolas; Poncin, Pascal

    2009-02-01

    The capability of a new multitracking system to track a large number of unmarked fish (up to 100) is evaluated. This system extrapolates a trajectory for each individual and analyzes recorded sequences that are several minutes long. The system is very efficient in statistical individual tracking, where the individual's identity matters only for a period that is short compared with the duration of the track. Individual identification is typically greater than 99%. Identification is highly reliable (more than 99%) when a fish's image does not cross the image of a neighboring fish. When the images of two fish merge (occlusion), the spot on the screen is considered to have a double identity. Consequently, there are no identification errors during occlusions, even though the measured positions of each individual are imprecise. When the images of these two merged fish separate (separation), individual identification errors are more frequent, but their effect is very small in statistical individual tracking. On the other hand, in complete individual tracking, where individual fish identity is important for the entire trajectory, each identification error invalidates the results. In such cases, the experimenter must check whether the program assigns the correct identification and, when an error is made, must edit the results. This work is not too costly in time because it is limited to the separation events, which account for fewer than 0.1% of individual identifications. Consequently, in both statistical and rigorous individual tracking, this system allows the experimenter to save time by measuring individual positions automatically. It can also analyze the structural and dynamic properties of an animal group with a very large sample, with a precision and sampling rate that are impossible to obtain with manual measures.

  9. Active Mirror Predictive and Requirements Verification Software (AMP-ReVS)

    NASA Technical Reports Server (NTRS)

    Basinger, Scott A.

    2012-01-01

    This software is designed to predict large active mirror performance at various stages in the fabrication lifecycle of the mirror. It was developed for 1-meter class powered mirrors for astronomical purposes, but is extensible to other geometries. The package accepts finite element model (FEM) inputs and laboratory measured data for large optical-quality mirrors with active figure control. It computes phenomenological contributions to the surface figure error using several built-in optimization techniques. These phenomena include stresses induced in the mirror by the manufacturing process and the support structure, the test procedure, high spatial frequency errors introduced by the polishing process, and other process-dependent deleterious effects due to light-weighting of the mirror. Then, depending on the maturity of the mirror, it either predicts the best surface figure error that the mirror will attain, or it verifies that the requirements for the error sources have been met once the best surface figure error has been measured. The unique feature of this software is that it ties together physical phenomenology with wavefront sensing and control techniques and various optimization methods including convex optimization, Kalman filtering, and quadratic programming to both generate predictive models and to do requirements verification. This software combines three distinct disciplines: wavefront control, predictive models based on FEM, and requirements verification using measured data in a robust, reusable code that is applicable to any large optics for ground and space telescopes. The software also includes state-of-the-art wavefront control algorithms that allow closed-loop performance to be computed. It allows for quantitative trade studies to be performed for optical systems engineering, including computing the best surface figure error under various testing and operating conditions. After the mirror manufacturing process and testing have been completed, the software package can be used to verify that the underlying requirements have been met.

  10. Particle Filter with Novel Nonlinear Error Model for Miniature Gyroscope-Based Measurement While Drilling Navigation

    PubMed Central

    Li, Tao; Yuan, Gannan; Li, Wang

    2016-01-01

    The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD) system is based on the assumption that the errors of attitude are small enough so that the direction cosine matrix (DCM) can be approximated or simplified by the errors of small-angle attitude. However, the simplification of the DCM would introduce errors to the navigation solutions of the MGWD system if the initial alignment cannot provide precise attitude, especially for the low-cost microelectromechanical system (MEMS) sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM) by the introduction of the error of DCM, and the NNEM can reduce the propagated errors under large-angle attitude error conditions. The zero velocity and zero position are the reference points and the innovations in the states estimation of particle filter (PF) and Kalman filter (KF). The experimental results illustrate that the performance of PF is better than KF and the PF with NNEM can effectively restrain the errors of system states, especially for the azimuth, velocity, and height in the quasi-stationary condition. PMID:26999130

  11. A Systematic Error Correction Method for TOVS Radiances

    NASA Technical Reports Server (NTRS)

    Joiner, Joanna; Rokke, Laurie; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Treatment of systematic errors is crucial for the successful use of satellite data in a data assimilation system. Systematic errors in TOVS radiance measurements and radiative transfer calculations can be as large or larger than random instrument errors. The usual assumption in data assimilation is that observational errors are unbiased. If biases are not effectively removed prior to assimilation, the impact of satellite data will be lessened and can even be detrimental. Treatment of systematic errors is important for short-term forecast skill as well as the creation of climate data sets. A systematic error correction algorithm has been developed as part of a 1D radiance assimilation. This scheme corrects for spectroscopic errors, errors in the instrument response function, and other biases in the forward radiance calculation for TOVS. Such algorithms are often referred to as tuning of the radiances. The scheme is able to account for the complex, air-mass dependent biases that are seen in the differences between TOVS radiance observations and forward model calculations. We will show results of systematic error correction applied to the NOAA 15 Advanced TOVS as well as its predecessors. We will also discuss the ramifications of inter-instrument bias with a focus on stratospheric measurements.
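
    As a rough illustration of this kind of radiance tuning, the hypothetical sketch below fits an air-mass dependent bias as a linear function of invented predictors (a layer-thickness proxy and the secant of the scan angle; the operational scheme's predictors differ) and removes it from simulated observation-minus-calculation departures:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Hypothetical air-mass predictors; real schemes use instrument-specific ones.
thickness = rng.normal(0.0, 1.0, n)
sec_scan = 1.0 + rng.uniform(0.0, 0.5, n)

# Simulated obs-minus-forward-model departures: an air-mass dependent bias
# plus random instrument noise.
departure = 0.8 + 0.3 * thickness - 0.5 * sec_scan + rng.normal(0.0, 0.4, n)

# Fit the bias as a linear function of the predictors (least squares),
# then remove it before assimilation.
X = np.column_stack([np.ones(n), thickness, sec_scan])
beta, *_ = np.linalg.lstsq(X, departure, rcond=None)
corrected = departure - X @ beta
print("fitted bias coefficients:", np.round(beta, 2))
print("departure mean before/after: %.3f / %.3f"
      % (departure.mean(), corrected.mean()))
```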

  12. Particle Filter with Novel Nonlinear Error Model for Miniature Gyroscope-Based Measurement While Drilling Navigation.

    PubMed

    Li, Tao; Yuan, Gannan; Li, Wang

    2016-03-15

    The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD) system is based on the assumption that the errors of attitude are small enough so that the direction cosine matrix (DCM) can be approximated or simplified by the errors of small-angle attitude. However, the simplification of the DCM would introduce errors to the navigation solutions of the MGWD system if the initial alignment cannot provide precise attitude, especially for the low-cost microelectromechanical system (MEMS) sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM) by the introduction of the error of DCM, and the NNEM can reduce the propagated errors under large-angle attitude error conditions. The zero velocity and zero position are the reference points and the innovations in the states estimation of particle filter (PF) and Kalman filter (KF). The experimental results illustrate that the performance of PF is better than KF and the PF with NNEM can effectively restrain the errors of system states, especially for the azimuth, velocity, and height in the quasi-stationary condition.

  13. Study of Current Measurement Method Based on Circular Magnetic Field Sensing Array

    PubMed Central

    Li, Zhenhua; Zhang, Siqiu; Wu, Zhengtian; Tao, Yuan

    2018-01-01

    Classic core-based instrument transformers are prone to magnetic saturation. This affects the measurement accuracy of such transformers and limits their applications in measuring large direct current (DC). Moreover, protection and control systems may exhibit malfunctions due to such measurement errors. This paper presents a more accurate method for current measurement based on a circular magnetic field sensing array. The proposed measurement approach utilizes multiple Hall sensors that are evenly distributed on a circle. The average value of all Hall sensor readings is regarded as the final measurement. The calculation model is established for the case of magnetic field interference from a parallel wire, and the simulation results show that the error decreases significantly when the number of Hall sensors n is greater than 8. The measurement error is less than 0.06% when the wire spacing is greater than 2.5 times the radius of the sensor array. A simulation study on an off-center primary conductor is conducted, and a Hall sensor compensation method is adopted to improve the accuracy. The simulation and test results indicate that the measurement error of the system is less than 0.1%. PMID:29734742

  14. Study of Current Measurement Method Based on Circular Magnetic Field Sensing Array.

    PubMed

    Li, Zhenhua; Zhang, Siqiu; Wu, Zhengtian; Abu-Siada, Ahmed; Tao, Yuan

    2018-05-05

    Classic core-based instrument transformers are prone to magnetic saturation. This affects the measurement accuracy of such transformers and limits their applications in measuring large direct current (DC). Moreover, protection and control systems may exhibit malfunctions due to such measurement errors. This paper presents a more accurate method for current measurement based on a circular magnetic field sensing array. The proposed measurement approach utilizes multiple Hall sensors that are evenly distributed on a circle. The average value of all Hall sensor readings is regarded as the final measurement. The calculation model is established for the case of magnetic field interference from a parallel wire, and the simulation results show that the error decreases significantly when the number of Hall sensors n is greater than 8. The measurement error is less than 0.06% when the wire spacing is greater than 2.5 times the radius of the sensor array. A simulation study on an off-center primary conductor is conducted, and a Hall sensor compensation method is adopted to improve the accuracy. The simulation and test results indicate that the measurement error of the system is less than 0.1%.
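
    The interference-rejection claim can be checked with a small Biot-Savart simulation. This sketch (array radius and currents are arbitrary choices, not values from the paper) averages the tangential field over n evenly spaced sensors, applies Ampère's law, and shows the error from a parallel crosstalk wire at 2.5 radii shrinking as n grows:

```python
import numpy as np

MU0 = 4e-7 * np.pi

def tangential_field(sensor_xy, wire_xy, current):
    """Tangential component (w.r.t. the array circle) of the field of an
    infinite straight wire at wire_xy carrying `current` (Biot-Savart)."""
    r = sensor_xy - wire_xy
    d2 = np.sum(r**2, axis=1)
    # Field magnitude mu0*I/(2*pi*d), direction perpendicular to r.
    b = MU0 * current / (2 * np.pi * d2[:, None]) \
        * np.column_stack([-r[:, 1], r[:, 0]])
    tangents = np.column_stack([-sensor_xy[:, 1], sensor_xy[:, 0]])
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    return np.sum(b * tangents, axis=1)

R, I_meas, I_noise = 0.05, 100.0, 100.0   # array radius [m], currents [A]
for n in (4, 8, 16):
    theta = 2 * np.pi * np.arange(n) / n
    sensors = R * np.column_stack([np.cos(theta), np.sin(theta)])
    b = tangential_field(sensors, np.zeros(2), I_meas)                 # centered wire
    b += tangential_field(sensors, np.array([2.5 * R, 0.0]), I_noise)  # crosstalk
    i_hat = b.mean() * 2 * np.pi * R / MU0   # Ampere's-law estimate
    print(f"n={n:2d}: estimated current {i_hat:8.3f} A, "
          f"error {100 * (i_hat - I_meas) / I_meas:+.3f} %")
```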

  15. Causal Inference for fMRI Time Series Data with Systematic Errors of Measurement in a Balanced On/Off Study of Social Evaluative Threat.

    PubMed

    Sobel, Michael E; Lindquist, Martin A

    2014-07-01

    Functional magnetic resonance imaging (fMRI) has facilitated major advances in understanding human brain function. Neuroscientists are interested in using fMRI to study the effects of external stimuli on brain activity and causal relationships among brain regions, but have not stated what is meant by causation or defined the effects they purport to estimate. Building on Rubin's causal model, we construct a framework for causal inference using blood oxygenation level dependent (BOLD) fMRI time series data. In the usual statistical literature on causal inference, potential outcomes, assumed to be measured without systematic error, are used to define unit and average causal effects. However, in general the potential BOLD responses are measured with stimulus-dependent systematic error. Thus we define unit and average causal effects that are free of systematic error. In contrast to the usual case of a randomized experiment, where adjustment for intermediate outcomes leads to biased estimates of treatment effects (Rosenbaum, 1984), here the failure to adjust for task-dependent systematic error leads to biased estimates. We therefore adjust for systematic error using measured "noise covariates", using a linear mixed model to estimate the effects and the systematic error. Our results are important for neuroscientists, who typically do not adjust for systematic error. They should also prove useful to researchers in other areas where responses are measured with error and in fields where large amounts of data are collected on relatively few subjects. To illustrate our approach, we re-analyze data from a social evaluative threat task, comparing the findings with results that ignore systematic error.

  16. Impact of food and fluid intake on technical and biological measurement error in body composition assessment methods in athletes.

    PubMed

    Kerr, Ava; Slater, Gary J; Byrne, Nuala

    2017-02-01

    Two-, three- and four-compartment (2C, 3C and 4C) models of body composition are popular methods to measure fat mass (FM) and fat-free mass (FFM) in athletes. However, the impact of food and fluid intake on measurement error has not been established. The purpose of this study was to evaluate the effect of standardised (overnight fasted, rested and hydrated) v. non-standardised (afternoon and non-fasted) presentation on technical and biological error in surface anthropometry (SA) and 2C, 3C and 4C models. In thirty-two athletic males, measures of SA, dual-energy X-ray absorptiometry (DXA), bioelectrical impedance spectroscopy (BIS) and air displacement plethysmography (BOD POD) were taken to establish 2C, 3C and 4C models. Tests were conducted after an overnight fast (duplicate), about 7 h later after ad libitum food and fluid intake, and repeated 24 h later before and after ingestion of a specified meal. Magnitudes of changes in the mean and typical errors of measurement were determined. Mean change scores for non-standardised presentation and post-meal tests for FM were substantial in BIS, SA, 3C and 4C models. For FFM, mean change scores for non-standardised conditions produced large changes for BIS, 3C and 4C models, small changes for DXA, and trivial changes for BOD POD and SA. Models that included a total body water (TBW) value from BIS (3C and 4C) were more sensitive to TBW changes in non-standardised conditions than 2C models. Biological error is minimised in all models with standardised presentation, but DXA and BOD POD are acceptable if acute food and fluid intake remains below 500 g.

  17. A procedure for removing the effect of response bias errors from waterfowl hunter questionnaire responses

    USGS Publications Warehouse

    Atwood, E.L.

    1958-01-01

    Response bias errors are studied by comparing questionnaire responses from waterfowl hunters using four large public hunting areas with actual hunting data from these areas during two hunting seasons. To the extent that the data permit, the sources of the error in the responses were studied and the contribution of each type to the total error was measured. Response bias errors, including both prestige and memory bias, were found to be very large compared with non-response and sampling errors. The seasonal kill distribution from the actual hunting data was fitted well by the negative binomial distribution, and the distribution of total season hunting activity was fitted well by a semi-logarithmic curve. A comparison of the actual seasonal distributions with the questionnaire response distributions revealed that the prestige and memory bias errors are both positive. The comparisons also revealed the tendency for memory bias errors to occur at digit frequencies divisible by five and for prestige bias errors to occur at frequencies which are multiples of the legal daily bag limit. A graphical adjustment of the response distributions was carried out by developing a smooth curve from those frequency classes not included in the predictably biased frequency classes referred to above. Group averages were used in constructing the curve, as suggested by Ezekiel [1950]. The efficiency of the technique described for reducing response bias errors in hunter questionnaire responses on seasonal waterfowl kill is high in large samples. The graphical method is less efficient in removing response bias errors from responses on seasonal hunting activity, where an average of 60 percent of the bias was removed.

  18. Specificity of Atmospheric Correction of Satellite Data on Ocean Color in the Far East

    NASA Astrophysics Data System (ADS)

    Aleksanin, A. I.; Kachur, V. A.

    2017-12-01

    Calculation errors in ocean-brightness coefficients in the Far Eastern seas are analyzed for two atmospheric correction algorithms (NIR and MUMM). The daylight measurements in different water types show that the main error component is systematic and has a simple dependence on the magnitudes of the coefficients. The causes of this error behavior are considered. The most probable explanation for the large errors in ocean-color parameters in the Far East is a high concentration of light-absorbing continental aerosol. A comparison between satellite and in situ measurements at AERONET stations in the United States and South Korea has been made. It is shown that the errors in these two regions differ by up to a factor of 10 for similar water turbidity and comparably high aerosol optical-depth computation precision when the NIR atmospheric correction is used.

  19. Filtering Methods for Error Reduction in Spacecraft Attitude Estimation Using Quaternion Star Trackers

    NASA Technical Reports Server (NTRS)

    Calhoun, Philip C.; Sedlak, Joseph E.; Superfin, Emil

    2011-01-01

    Precision attitude determination for recent and planned space missions typically includes quaternion star trackers (ST) and a three-axis inertial reference unit (IRU). Sensor selection is based on estimates of knowledge accuracy attainable from a Kalman filter (KF), which provides the optimal solution for the case of linear dynamics with measurement and process errors characterized by random Gaussian noise with white spectrum. Non-Gaussian systematic errors in quaternion STs are often quite large and have an unpredictable time-varying nature, particularly when used in non-inertial pointing applications. Two filtering methods are proposed to reduce the attitude estimation error resulting from ST systematic errors: 1) an extended Kalman filter (EKF) augmented with Markov states, and 2) an unscented Kalman filter (UKF) with a periodic measurement model. Realistic assessments of the attitude estimation performance gains are demonstrated with both simulation and flight telemetry data from the Lunar Reconnaissance Orbiter.
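
    A minimal single-axis sketch of the first idea, assuming a linear filter and a first-order Gauss-Markov model for the star tracker systematic error (all noise values invented), appends the bias to the state so the filter can separate it from the attitude:

```python
import numpy as np

rng = np.random.default_rng(2)
dt, tau, q_bias, r_st = 1.0, 300.0, 1e-6, (5e-5) ** 2

# Augmented state: [attitude error; ST Markov bias]. The bias follows a
# first-order Gauss-Markov process, so the filter tracks the slowly varying
# systematic error instead of treating it as white noise.
F = np.array([[1.0, 0.0],
              [0.0, np.exp(-dt / tau)]])
Q = np.diag([1e-8, q_bias * (1 - np.exp(-2 * dt / tau))])
H = np.array([[1.0, 1.0]])        # ST measures attitude plus its bias
P = np.diag([1e-4, 1e-6])
x = np.zeros(2)

truth_bias = 0.0
for k in range(1000):
    truth_bias = np.exp(-dt / tau) * truth_bias + rng.normal(0, np.sqrt(Q[1, 1]))
    z = 0.0 + truth_bias + rng.normal(0, np.sqrt(r_st))   # true attitude = 0
    x, P = F @ x, F @ P @ F.T + Q                         # propagate
    S = H @ P @ H.T + r_st                                # innovation covariance
    K = (P @ H.T) / S                                     # Kalman gain
    x = x + (K * (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
print("estimated ST bias: %.2e, true: %.2e" % (x[1], truth_bias))
```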

  20. Conditional Entropy and Location Error in Indoor Localization Using Probabilistic Wi-Fi Fingerprinting.

    PubMed

    Berkvens, Rafael; Peremans, Herbert; Weyn, Maarten

    2016-10-02

    Localization systems are increasingly valuable, but their location estimates are only useful when the uncertainty of the estimate is known. This uncertainty is currently calculated as the location error given a ground truth, which is then used as a static measure in sometimes very different environments. In contrast, we propose the use of the conditional entropy of a posterior probability distribution as a complementary measure of uncertainty. This measure has the advantage of being dynamic, i.e., it can be calculated during localization based on individual sensor measurements, does not require a ground truth, and can be applied to discrete localization algorithms. Furthermore, for every consistent location estimation algorithm, both the location error and the conditional entropy measures must be related, i.e., a low entropy should always correspond with a small location error, while a high entropy can correspond with either a small or large location error. We validate this relationship experimentally by calculating both measures of uncertainty in three publicly available datasets using probabilistic Wi-Fi fingerprinting with eight different implementations of the sensor model. We show that the discrepancy between these measures, i.e., many location estimates having a high location error while simultaneously having a low conditional entropy, is largest for the least realistic implementations of the probabilistic sensor model. Based on the results presented in this paper, we conclude that conditional entropy, being dynamic, complementary to location error, and applicable to both continuous and discrete localization, provides an important extra means of characterizing a localization method.

  1. Conditional Entropy and Location Error in Indoor Localization Using Probabilistic Wi-Fi Fingerprinting

    PubMed Central

    Berkvens, Rafael; Peremans, Herbert; Weyn, Maarten

    2016-01-01

    Localization systems are increasingly valuable, but their location estimates are only useful when the uncertainty of the estimate is known. This uncertainty is currently calculated as the location error given a ground truth, which is then used as a static measure in sometimes very different environments. In contrast, we propose the use of the conditional entropy of a posterior probability distribution as a complementary measure of uncertainty. This measure has the advantage of being dynamic, i.e., it can be calculated during localization based on individual sensor measurements, does not require a ground truth, and can be applied to discrete localization algorithms. Furthermore, for every consistent location estimation algorithm, both the location error and the conditional entropy measures must be related, i.e., a low entropy should always correspond with a small location error, while a high entropy can correspond with either a small or large location error. We validate this relationship experimentally by calculating both measures of uncertainty in three publicly available datasets using probabilistic Wi-Fi fingerprinting with eight different implementations of the sensor model. We show that the discrepancy between these measures, i.e., many location estimates having a high location error while simultaneously having a low conditional entropy, is largest for the least realistic implementations of the probabilistic sensor model. Based on the results presented in this paper, we conclude that conditional entropy, being dynamic, complementary to location error, and applicable to both continuous and discrete localization, provides an important extra means of characterizing a localization method. PMID:27706099
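
    Computing the proposed uncertainty measure is straightforward once a discrete posterior over candidate locations is available. A small sketch (grid size and probabilities invented) shows why it complements location error: a peaked and a flat posterior can yield the same point estimate but very different entropies.

```python
import numpy as np

def conditional_entropy(posterior, eps=1e-12):
    """Entropy (bits) of a discrete posterior over candidate locations."""
    p = np.asarray(posterior, dtype=float)
    p = p / p.sum()
    return float(-np.sum(p * np.log2(p + eps)))

# A peaked posterior (confident fix) vs. a flat one (ambiguous fix) over
# 100 candidate grid cells; location error alone cannot distinguish these.
peaked = np.full(100, 0.001); peaked[42] = 0.901
flat = np.full(100, 0.01)
print("peaked posterior entropy: %.2f bits" % conditional_entropy(peaked))
print("flat posterior entropy:   %.2f bits" % conditional_entropy(flat))
```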

  2. Type I Error Inflation for Detecting DIF in the Presence of Impact

    ERIC Educational Resources Information Center

    DeMars, Christine E.

    2010-01-01

    In this brief explication, two challenges for using differential item functioning (DIF) measures when there are large group differences in true proficiency are illustrated. Each of these difficulties may lead to inflated Type I error rates, for very different reasons. One problem is that groups matched on observed score are not necessarily well…

  3. Correction of stream quality trends for the effects of laboratory measurement bias

    USGS Publications Warehouse

    Alexander, Richard B.; Smith, Richard A.; Schwarz, Gregory E.

    1993-01-01

    We present a statistical model relating measurements of water quality to associated errors in laboratory methods. Estimation of the model allows us to correct trends in water quality for long-term and short-term variations in laboratory measurement errors. An illustration of the bias correction method for a large national set of stream water quality and quality assurance data shows that reductions in the bias of estimates of water quality trend slopes are achieved at the expense of increases in the variance of these estimates. Slight improvements occur in the precision of estimates of trend in bias by using correlative information on bias and water quality to estimate random variations in measurement bias. The results of this investigation stress the need for reliable, long-term quality assurance data and efficient statistical methods to assess the effects of measurement errors on the detection of water quality trends.

  4. Comparison of three rf plasma impedance monitors on a high phase angle planar inductively coupled plasma source

    NASA Astrophysics Data System (ADS)

    Uchiyama, H.; Watanabe, M.; Shaw, D. M.; Bahia, J. E.; Collins, G. J.

    1999-10-01

    Accurate measurement of plasma source impedance is important for verification of plasma circuit models, as well as for plasma process characterization and endpoint detection. Most impedance measurement techniques depend in some manner on the cosine of the phase angle to determine the impedance of the plasma load. Inductively coupled plasmas are generally highly inductive, with the phase angle between the applied rf voltage and the rf current in the range of 88 to near 90 degrees. A small measurement error in this phase angle range results in a large error in the calculated cosine of the angle, introducing large impedance measurement variations. In this work, we have compared the measured impedance of a planar inductively coupled plasma using three commercial plasma impedance monitors (ENI V/I probe, Advanced Energy RFZ60 and Advanced Energy Z-Scan). The plasma impedance is independently verified using a specially designed match network and a calibrated load, representing the plasma, to provide a measurement standard.
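
    The cosine sensitivity is easy to quantify: for a phase error dθ, the relative error in cos θ grows roughly as tan θ · dθ. A short sketch with an assumed 0.2 degree phase-measurement error:

```python
import numpy as np

# Power-factor sensitivity: a fixed 0.2 degree phase-measurement error
# produces a vastly larger relative error in cos(theta) near 90 degrees.
dtheta = np.radians(0.2)
for deg in (45.0, 80.0, 88.0, 89.0, 89.5):
    theta = np.radians(deg)
    rel = abs(np.cos(theta + dtheta) - np.cos(theta)) / np.cos(theta)
    print(f"theta = {deg:4.1f} deg: relative error in cos(theta) = {100*rel:6.2f} %")
```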

  5. Violation of the Sphericity Assumption and Its Effect on Type-I Error Rates in Repeated Measures ANOVA and Multi-Level Linear Models (MLM).

    PubMed

    Haverkamp, Nicolas; Beauducel, André

    2017-01-01

    We investigated the effects of violations of the sphericity assumption on Type I error rates for different methodological approaches to repeated measures analysis using a simulation approach. In contrast to previous simulation studies on this topic, up to nine measurement occasions were considered. Effects of the level of inter-correlations between measurement occasions on Type I error rates were considered for the first time. Two populations with non-violation of the sphericity assumption, one with uncorrelated measurement occasions and one with moderately correlated measurement occasions, were generated. One population with violation of the sphericity assumption combines uncorrelated with highly correlated measurement occasions. A second population with violation of the sphericity assumption combines moderately correlated and highly correlated measurement occasions. From these four populations without any between-group effect or within-subject effect, 5,000 random samples were drawn. Finally, the mean Type I error rates for multilevel linear models (MLM) with an unstructured covariance matrix (MLM-UN), MLM with compound symmetry (MLM-CS) and for repeated measures analysis of variance (rANOVA) models (without correction, with Greenhouse-Geisser correction, and with Huynh-Feldt correction) were computed. To examine the effect of both the sample size and the number of measurement occasions, sample sizes of n = 20, 40, 60, 80, and 100 were considered, as well as measurement occasions of m = 3, 6, and 9. With respect to rANOVA, the results plead for the use of rANOVA with Huynh-Feldt correction, especially when the sphericity assumption is violated, the sample size is rather small and the number of measurement occasions is large. For MLM-UN, the results illustrate a massive progressive bias for small sample sizes (n = 20) and m = 6 or more measurement occasions. This effect could not be found in previous simulation studies with a smaller number of measurement occasions. The proportionality of bias and number of measurement occasions should be considered when MLM-UN is used. The good news is that this proportionality can be compensated by means of large sample sizes. Accordingly, MLM-UN can be recommended even for small sample sizes for about three measurement occasions and for large sample sizes for about nine measurement occasions.
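
    For readers who want to quantify a violation, the Greenhouse-Geisser epsilon can be computed directly from the covariance matrix of the measurement occasions. A sketch (the mixed-correlation matrix below is an invented stand-in, not one of the simulated populations from the study):

```python
import numpy as np

def greenhouse_geisser_epsilon(S):
    """GG epsilon from the m x m covariance matrix of measurement occasions;
    epsilon = 1 under sphericity, 1/(m-1) at the worst violation."""
    m = S.shape[0]
    # Double-center the covariance matrix, then apply the standard formula.
    C = S - S.mean(axis=0) - S.mean(axis=1)[:, None] + S.mean()
    return np.trace(C) ** 2 / ((m - 1) * np.sum(C * C))

m = 6
spherical = np.eye(m)                               # uncorrelated occasions
mixed = np.full((m, m), 0.9); np.fill_diagonal(mixed, 1.0)
mixed[:3, 3:] = 0.1; mixed[3:, :3] = 0.1            # mixed correlations
print("epsilon, spherical:", round(greenhouse_geisser_epsilon(spherical), 3))
print("epsilon, violated: ", round(greenhouse_geisser_epsilon(mixed), 3))
```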

  6. Evaluation of probe-induced flow distortion of Campbell CSAT3 sonic anemometers by numerical simulation

    NASA Astrophysics Data System (ADS)

    Mauder, M.; Huq, S.; De Roo, F.; Foken, T.; Manhart, M.; Schmid, H. P. E.

    2017-12-01

    The Campbell CSAT3 sonic anemometer is one of the most widely used instruments for eddy-covariance measurement. However, conflicting estimates for the probe-induced flow distortion error of this instrument have been reported recently, and those error estimates range between 3% and 14% for the measurement of vertical velocity fluctuations. This large discrepancy between the different studies can probably be attributed to the different experimental approaches applied. In order to overcome the limitations of both field intercomparison experiments and wind tunnel experiments, we propose a new approach that relies on virtual measurements in a large-eddy simulation (LES) environment. In our experimental set-up, we generate horizontal and vertical velocity fluctuations at frequencies that typically dominate the turbulence spectra of the surface layer. The probe-induced flow distortion error of a CSAT3 is then quantified by this numerical wind tunnel approach while the statistics of the prescribed inflow signal are taken as reference or etalon. The resulting relative error is found to range from 3% to 7% and from 1% to 3% for the standard deviation of the vertical and the horizontal velocity component, respectively, depending on the orientation of the CSAT3 in the flow field. We further demonstrate that these errors are independent of the frequency of fluctuations at the inflow of the simulation. The analytical corrections proposed by Kaimal et al. (Proc Dyn Flow Conf, 551-565, 1978) and Horst et al. (Boundary-Layer Meteorol, 155, 371-395, 2015) are compared against our simulated results, and we find that they indeed reduce the error by up to three percentage points. However, these corrections fail to reproduce the azimuth-dependence of the error that we observe. Moreover, we investigate the general Reynolds number dependence of the flow distortion error by more detailed idealized simulations.

  7. IMRT QA: Selecting gamma criteria based on error detection sensitivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steers, Jennifer M.; Fraass, Benedick A., E-mail: benedick.fraass@cshs.org

    Purpose: The gamma comparison is widely used to evaluate the agreement between measurements and treatment planning system calculations in patient-specific intensity modulated radiation therapy (IMRT) quality assurance (QA). However, recent publications have raised concerns about the lack of sensitivity when employing commonly used gamma criteria. Understanding the actual sensitivity of a wide range of different gamma criteria may allow the definition of more meaningful gamma criteria and tolerance limits in IMRT QA. We present a method that allows the quantitative determination of gamma criteria sensitivity to induced errors which can be applied to any unique combination of device, delivery technique, and software utilized in a specific clinic. Methods: A total of 21 DMLC IMRT QA measurements (ArcCHECK®, Sun Nuclear) were compared to QA plan calculations with induced errors. Three scenarios were studied: MU errors, multi-leaf collimator (MLC) errors, and the sensitivity of the gamma comparison to changes in penumbra width. Gamma comparisons were performed between measurements and error-induced calculations using a wide range of gamma criteria, resulting in a total of over 20 000 gamma comparisons. Gamma passing rates for each error class and case were graphed against error magnitude to create error curves in order to represent the range of missed errors in routine IMRT QA using 36 different gamma criteria. Results: This study demonstrates that systematic errors and case-specific errors can be detected by the error curve analysis. Depending on the location of the error curve peak (e.g., not centered about zero), 3%/3 mm threshold = 10% at 90% pixels passing may miss errors as large as 15% MU errors and ±1 cm random MLC errors for some cases. As the dose threshold parameter was increased for a given %Diff/distance-to-agreement (DTA) setting, error sensitivity was increased by up to a factor of two for select cases. This increased sensitivity with increasing dose threshold was consistent across all studied combinations of %Diff/DTA. Criteria such as 2%/3 mm and 3%/2 mm with a 50% threshold at 90% pixels passing are shown to be more appropriately sensitive without being overly strict. However, a broadening of the penumbra by as much as 5 mm in the beam configuration was difficult to detect with commonly used criteria, as well as with the previously mentioned criteria utilizing a threshold of 50%. Conclusions: We have introduced the error curve method, an analysis technique which allows the quantitative determination of gamma criteria sensitivity to induced errors. The application of the error curve method using DMLC IMRT plans measured on the ArcCHECK® device demonstrated that large errors can potentially be missed in IMRT QA with commonly used gamma criteria (e.g., 3%/3 mm, threshold = 10%, 90% pixels passing). Additionally, increasing the dose threshold value can offer dramatic increases in error sensitivity. This approach may allow the selection of more meaningful gamma criteria for IMRT QA and is straightforward to apply to other combinations of devices and treatment techniques.
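
    For orientation, a bare-bones 1D global gamma computation is sketched below (clinical tools such as the ArcCHECK® software operate on 2D/3D dose grids; the profile shapes and the induced 3%-plus-1-mm error are invented):

```python
import numpy as np

def gamma_pass_rate(ref, ev, dx_mm, dose_pct=3.0, dta_mm=3.0, thresh_pct=10.0):
    """1D global gamma passing rate -- a simplified sketch of the comparison."""
    x = np.arange(len(ref)) * dx_mm
    dd = dose_pct / 100.0 * ref.max()              # global dose-difference criterion
    mask = ref >= thresh_pct / 100.0 * ref.max()   # low-dose threshold
    gammas = []
    for i in np.where(mask)[0]:
        g2 = ((ev - ref[i]) / dd) ** 2 + ((x - x[i]) / dta_mm) ** 2
        gammas.append(np.sqrt(g2.min()))
    return 100.0 * np.mean(np.asarray(gammas) <= 1.0)

pos = np.linspace(-30.0, 30.0, 121)                 # 0.5 mm grid, mm
ref = np.exp(-pos**2 / 200.0)                       # reference profile
meas = 1.03 * np.exp(-(pos - 1.0) ** 2 / 200.0)     # 3% output error + 1 mm shift
print("pass rate (3%%/3 mm, 10%% threshold): %.1f %%"
      % gamma_pass_rate(ref, meas, dx_mm=0.5))
```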

  8. Identifying model error in metabolic flux analysis - a generalized least squares approach.

    PubMed

    Sokolenko, Stanislav; Quattrociocchi, Marco; Aucoin, Marc G

    2016-09-13

    The estimation of intracellular flux through traditional metabolic flux analysis (MFA) using an overdetermined system of equations is a well established practice in metabolic engineering. Despite the continued evolution of the methodology since its introduction, there has been little focus on validation and identification of poor model fit outside of identifying "gross measurement error". The growing complexity of metabolic models, which are increasingly generated from genome-level data, has necessitated robust validation that can directly assess model fit. In this work, MFA calculation is framed as a generalized least squares (GLS) problem, highlighting the applicability of the common t-test for model validation. To differentiate between measurement and model error, we simulate ideal flux profiles directly from the model, perturb them with estimated measurement error, and compare their validation to real data. Application of this strategy to an established Chinese Hamster Ovary (CHO) cell model shows how fluxes validated by traditional means may be largely non-significant due to a lack of model fit. With further simulation, we explore how t-test significance relates to calculation error and show that fluxes found to be non-significant have 2-4 fold larger error (if measurement uncertainty is in the 5-10 % range). The proposed validation method goes beyond traditional detection of "gross measurement error" to identify lack of fit between model and data. Although the focus of this work is on t-test validation and traditional MFA, the presented framework is readily applicable to other regression analysis methods and MFA formulations.
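
    A toy version of the GLS framing, with an invented 4-measurement, 2-flux stoichiometry rather than the paper's CHO model, shows how flux t-statistics fall out of the weighted fit:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Toy overdetermined MFA system: measured rates m relate to unknown fluxes v
# through A v = m. Measurement covariance (diagonal here) weights the GLS fit.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]])
v_true = np.array([2.0, 1.0])
sigma = np.array([0.05, 0.05, 0.10, 0.10])
m = A @ v_true + rng.normal(0.0, sigma)

W = np.diag(1.0 / sigma**2)                    # inverse covariance
cov_v = np.linalg.inv(A.T @ W @ A)
v_hat = cov_v @ A.T @ W @ m

# t-statistics for each flux; non-significant fluxes flag lack of fit
# rather than "gross measurement error".
se = np.sqrt(np.diag(cov_v))
t = v_hat / se
p = 2 * stats.t.sf(np.abs(t), df=A.shape[0] - A.shape[1])
for i in range(len(v_hat)):
    print(f"flux {i}: {v_hat[i]:.3f} +/- {se[i]:.3f}, t = {t[i]:.1f}, p = {p[i]:.3g}")
```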

  9. Measurement of large steel plates based on linear scan structured light scanning

    NASA Astrophysics Data System (ADS)

    Xiao, Zhitao; Li, Yaru; Lei, Geng; Xi, Jiangtao

    2018-01-01

    A measuring method based on linear structured light scanning is proposed to achieve accurate measurement of the complex internal shape of large steel plates. Firstly, using a calibration plate with round marks, an improved line-scanning calibration method is designed, and the internal and external parameters of the camera are determined through this calibration. Secondly, the images of the steel plates are acquired by a line scan camera. The Canny edge detection method is then used to extract approximate contours of the steel plate images, and a Gaussian fitting algorithm is used to extract the sub-pixel edges of the steel plate contours. Thirdly, to address the problem of inaccurate restoration of contour size, the horizontal and vertical error curves of the images are obtained by measuring the distances between adjacent points in a grid of known dimensions. Finally, these horizontal and vertical error curves are used to correct the contours of the steel plates, and, combined with the internal and external calibration parameters, the size of these contours is calculated. The experimental results demonstrate that the proposed method achieves an error of 1 mm/m in a 1.2 m × 2.6 m field of view, which satisfies the demands of industrial measurement.

  10. Adaptive Error Estimation in Linearized Ocean General Circulation Models

    NASA Technical Reports Server (NTRS)

    Chechelnitsky, Michael Y.

    1999-01-01

    Data assimilation methods are routinely used in oceanography. The statistics of the model and measurement errors need to be specified a priori. This study addresses the problem of estimating model and measurement error statistics from observations. We start by applying innovation-based methods of adaptive error estimation with low-dimensional models in the North Pacific (5-60 deg N, 132-252 deg E) to TOPEX/POSEIDON (T/P) sea level anomaly data, acoustic tomography data from the ATOC project, and the MIT General Circulation Model (GCM). A reduced-state linear model that describes large-scale internal (baroclinic) error dynamics is used. The methods are shown to be sensitive to the initial guess for the error statistics and the type of observations. A new off-line approach is developed, the covariance matching approach (CMA), where covariance matrices of model-data residuals are "matched" to their theoretical expectations using familiar least squares methods. This method uses observations directly instead of the innovations sequence and is shown to be related to the MT method and the method of Fu et al. (1993). Twin experiments using the same linearized MIT GCM suggest that altimetric data are ill-suited to the estimation of internal GCM errors, but that such estimates can in theory be obtained using acoustic data. The CMA is then applied to T/P sea level anomaly data and a linearization of a global GFDL GCM which uses two vertical modes. We show that the CMA method can be used with a global model and a global data set, and that the estimates of the error statistics are robust. We show that the fraction of the GCM-T/P residual variance explained by the model error is larger than that derived in Fukumori et al. (1999) with the method of Fu et al. (1993). Most of the model error is explained by the barotropic mode. However, we find that the impact of the change in the error statistics on the data assimilation estimates is very small. This is explained by the large representation error, i.e., the dominance of the mesoscale eddies in the T/P signal, which are not part of the 2° by 1° GCM. Therefore, the impact of the observations on the assimilation is very small even after the adjustment of the error statistics. This work demonstrates that simultaneous estimation of the model and measurement error statistics for data assimilation with global ocean data sets and linearized GCMs is possible. However, the error covariance estimation problem is in general highly underdetermined, much more so than the state estimation problem. In other words, there exists a very large number of statistical models that can be made consistent with the available data. Therefore, methods for obtaining quantitative error estimates, powerful though they may be, cannot replace physical insight. Used in the right context, as a tool for guiding the choice of a small number of model error parameters, covariance matching can be a useful addition to the repertory of tools available to oceanographers.

  11. A Reassessment of the Precision of Carbonate Clumped Isotope Measurements: Implications for Calibrations and Paleoclimate Reconstructions

    NASA Astrophysics Data System (ADS)

    Fernandez, Alvaro; Müller, Inigo A.; Rodríguez-Sanz, Laura; van Dijk, Joep; Looser, Nathan; Bernasconi, Stefano M.

    2017-12-01

    Carbonate clumped isotopes offer a potentially transformational tool to interpret Earth's history, but the proxy is still limited by poor interlaboratory reproducibility. Here, we focus on the uncertainties that result from the analysis of only a few replicate measurements to understand the extent to which unconstrained errors affect calibration relationships and paleoclimate reconstructions. We find that highly precise data can be routinely obtained with multiple replicate analyses, but this is not always done in many laboratories. For instance, using published estimates of external reproducibilities we find that typical clumped isotope measurements (three replicate analyses) have margins of error at the 95% confidence level (CL) that are too large for many applications. These errors, however, can be systematically reduced with more replicate measurements. Second, using a Monte Carlo-type simulation we demonstrate that the degree of disagreement on published calibration slopes is about what we should expect considering the precision of Δ47 data, the number of samples and replicate analyses, and the temperature range covered in published calibrations. Finally, we show that the way errors are typically reported in clumped isotope data can be problematic and lead to the impression that data are more precise than warranted. We recommend that uncertainties in Δ47 data should no longer be reported as the standard error of a few replicate measurements. Instead, uncertainties should be reported as margins of error at a specified confidence level (e.g., 68% or 95% CL). These error bars are a more realistic indication of the reliability of a measurement.
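
    The replicate-count argument reduces to the familiar t-based margin of error. A sketch assuming an external reproducibility of 0.030 permil (an illustrative value, not one taken from the paper):

```python
import numpy as np
from scipy import stats

# 95% CL margin of error for the mean of n replicate Delta47 analyses,
# given an assumed external (long-term) reproducibility of 0.030 permil (1 SD).
sd = 0.030
for n in (3, 5, 10, 20):
    t = stats.t.ppf(0.975, df=n - 1)
    moe = t * sd / np.sqrt(n)
    print(f"n = {n:2d} replicates: 95% CL margin of error = {moe:.4f} permil")
```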

  12. System Error Compensation Methodology Based on a Neural Network for a Micromachined Inertial Measurement Unit

    PubMed Central

    Liu, Shi Qiang; Zhu, Rong

    2016-01-01

    Error compensation of micromachined inertial measurement units (MIMU) is essential in practical applications. This paper presents a new compensation method using neural-network-based identification for the MIMU, which capably solves the universal problems of cross-coupling, misalignment, eccentricity, and other deterministic errors existing in a three-dimensional integrated system. Using a neural network to model a complex multivariate and nonlinear coupling system, the errors can be readily compensated through a comprehensive calibration. In this paper, we also present a thermal-gas MIMU based on thermal expansion, which measures three-axis angular rates and three-axis accelerations using only three thermal-gas inertial sensors, each of which capably measures one-axis angular rate and one-axis acceleration simultaneously in one chip. The developed MIMU (100 × 100 × 100 mm³) possesses the advantages of simple structure, high shock resistance, and large measuring ranges (three-axis angular rates of ±4000°/s and three-axis accelerations of ±10 g) compared with conventional MIMUs, due to using a gas medium instead of a mechanical proof mass as the key moving and sensing element. However, the gas MIMU suffers from cross-coupling effects, which corrupt the system accuracy. The proposed compensation method is, therefore, applied to compensate the system errors of the MIMU. Experiments validate the effectiveness of the compensation, and the measurement errors of three-axis angular rates and three-axis accelerations are reduced to less than 1% and 3% of uncompensated errors in the rotation range of ±600°/s and the acceleration range of ±1 g, respectively. PMID:26840314
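
    A minimal stand-in for the neural-network calibration, using scikit-learn's MLPRegressor and simulated cross-coupling (the network size, mixing matrix and noise levels are invented; the paper's training setup is not reproduced here):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)

# Hypothetical calibration set: 6 raw sensor channels -> 6 reference inputs
# (3 angular rates, 3 accelerations). Cross-coupling and misalignment are
# simulated by a non-orthogonal mixing matrix plus a mild nonlinearity.
n = 4000
truth = rng.uniform(-1.0, 1.0, (n, 6))
M = np.eye(6) + 0.05 * rng.standard_normal((6, 6))   # coupling/misalignment
raw = truth @ M.T + 0.02 * truth**2 + 0.005 * rng.standard_normal((n, 6))

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
net.fit(raw[:3000], truth[:3000])                    # comprehensive calibration
pred = net.predict(raw[3000:])
rms_before = np.sqrt(np.mean((raw[3000:] - truth[3000:]) ** 2))
rms_after = np.sqrt(np.mean((pred - truth[3000:]) ** 2))
print(f"RMS error before/after compensation: {rms_before:.4f} / {rms_after:.4f}")
```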

  13. How accurate are Lexile text measures?

    PubMed

    Stenner, A Jackson; Burdick, Hal; Sanford, Eleanor E; Burdick, Donald S

    2006-01-01

    The Lexile Framework for Reading models comprehension as the difference between a reader measure and a text measure. Uncertainty in comprehension rates results from unreliability in reader measures and inaccuracy in text readability measures. Whole-text processing eliminates sampling error in text measures. However, Lexile text measures are imperfect due to misspecification of the Lexile theory. The standard deviation component associated with theory misspecification is estimated at 64L for a standard-length passage (approximately 125 words). A consequence is that standard errors for longer texts (2,500 to 150,000 words) are measured on the Lexile scale with uncertainties in the single digits. Uncertainties in expected comprehension rates are largely due to imprecision in reader ability and not inaccuracies in text readabilities.

  14. Swing arm profilometer: analytical solutions of misalignment errors for testing axisymmetric optics

    NASA Astrophysics Data System (ADS)

    Xiong, Ling; Luo, Xiao; Liu, Zhenyu; Wang, Xiaokun; Hu, Haixiang; Zhang, Feng; Zheng, Ligong; Zhang, Xuejun

    2016-07-01

    The swing arm profilometer (SAP) plays a very important role in testing large aspheric optics. As one of the most significant error sources affecting test accuracy, misalignment leads to low-order errors such as aspherical aberrations and coma, apart from power. In order to analyze the effect of misalignment errors, the relation between alignment parameters and test results for axisymmetric optics is presented. Analytical solutions for SAP system errors arising from tested-mirror misalignment, arm-length L deviation, tilt-angle θ deviation, air-table spin error, and air-table misalignment are derived, and a misalignment tolerance is given to guide surface measurement. In addition, experiments on a 2-m diameter parabolic mirror are presented to verify the model; according to the error budget, we achieve SAP testing of low-order errors, excluding power, with an accuracy of 0.1 μm root-mean-square.

  15. Failure analysis and modeling of a multicomputer system. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Subramani, Sujatha Srinivasan

    1990-01-01

    This thesis describes the results of an extensive measurement-based analysis of real error data collected from a 7-machine DEC VaxCluster multicomputer system. In addition to evaluating basic system error and failure characteristics, we develop reward models to analyze the impact of failures and errors on the system. The results show that, although 98 percent of errors in the shared resources recover, they result in 48 percent of all system failures. The analysis of rewards shows that the expected reward rate for the VaxCluster decreases to 0.5 in 100 days for a 3-out-of-7 model, which is well over 100 times that for a 7-out-of-7 model. A comparison of the reward rates for a range of k-out-of-n models indicates that the maximum increase in reward rate (0.25) occurs in going from the 6-out-of-7 model to the 5-out-of-7 model. The analysis also shows that software errors have the lowest reward (0.2 vs. 0.91 for network errors). The large loss in reward rate for software errors is due to the fact that a large proportion (94 percent) of software errors lead to failure. In comparison, the high reward rate for network errors is due to fast recovery from a majority of these errors (median recovery duration is 0 seconds).

  16. Statistically Controlling for Confounding Constructs Is Harder than You Think

    PubMed Central

    Westfall, Jacob; Yarkoni, Tal

    2016-01-01

    Social scientists often seek to demonstrate that a construct has incremental validity over and above other related constructs. However, these claims are typically supported by measurement-level models that fail to consider the effects of measurement (un)reliability. We use intuitive examples, Monte Carlo simulations, and a novel analytical framework to demonstrate that common strategies for establishing incremental construct validity using multiple regression analysis exhibit extremely high Type I error rates under parameter regimes common in many psychological domains. Counterintuitively, we find that error rates are highest—in some cases approaching 100%—when sample sizes are large and reliability is moderate. Our findings suggest that a potentially large proportion of incremental validity claims made in the literature are spurious. We present a web application (http://jakewestfall.org/ivy/) that readers can use to explore the statistical properties of these and other incremental validity arguments. We conclude by reviewing SEM-based statistical approaches that appropriately control the Type I error rate when attempting to establish incremental validity. PMID:27031707
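
    The core mechanism, residual confounding from an unreliably measured covariate, is easy to reproduce. In this invented simulation (reliability and sample size chosen arbitrarily), the predictor has no incremental validity, yet the nominal 5% test rejects far more often:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n, reps, alpha = 500, 2000, 0.05
false_pos = 0
for _ in range(reps):
    c = rng.standard_normal(n)            # true confounding construct
    x = c + rng.standard_normal(n)        # predictor: no incremental validity
    y = c + rng.standard_normal(n)        # outcome driven only by the construct
    c_obs = c + rng.standard_normal(n)    # unreliable measure of the confounder
    # Regress y on x controlling for c_obs; test the coefficient on x.
    X = np.column_stack([np.ones(n), x, c_obs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = np.sum(resid**2) / (n - 3)
    cov = s2 * np.linalg.inv(X.T @ X)
    t = beta[1] / np.sqrt(cov[1, 1])
    false_pos += 2 * stats.t.sf(abs(t), n - 3) < alpha
print("Type I error rate for 'incremental validity' of x: %.2f"
      % (false_pos / reps))
```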

  17. Traceability of On-Machine Tool Measurement: A Review.

    PubMed

    Mutilba, Unai; Gomez-Acedo, Eneko; Kortaberria, Gorka; Olarra, Aitor; Yagüe-Fabra, Jose A

    2017-07-11

    Nowadays, errors during the manufacturing process of high-value components are not acceptable in driving industries such as energy and transportation. Sectors such as aerospace, automotive, shipbuilding, nuclear power, large science facilities or wind power need complex and accurate components that demand close measurements and fast feedback into their manufacturing processes. New measuring technologies are already available in machine tools, including integrated touch probes and fast interface capabilities. They provide the possibility to measure the workpiece in-machine during or after its manufacture, maintaining the original setup of the workpiece and avoiding interruption of the manufacturing process to transport the workpiece to a measuring position. However, the traceability of the measurement process on a machine tool is not yet ensured, and measurement data are still not reliable enough for process control or product validation. The scientific objective is to determine the uncertainty of an on-machine measurement and thereby convert it into a machine-integrated traceable measuring process. For that purpose, an error budget should consider error sources such as the machine tools, the components under measurement and the interactions between them. This paper reviews all of these uncertainty sources, focusing mainly on those related to the machine tool, whether in the process of geometric error assessment of the machine or in the technology employed to probe the measurand.

  18. Transmitted wavefront error of a volume phase holographic grating at cryogenic temperature.

    PubMed

    Lee, David; Taylor, Gordon D; Baillie, Thomas E C; Montgomery, David

    2012-06-01

    This paper describes the results of transmitted wavefront error (WFE) measurements on a volume phase holographic (VPH) grating operating at a temperature of 120 K. The VPH grating was mounted in a cryogenically compatible optical mount and tested in situ in a cryostat. The nominal root mean square (RMS) wavefront error at room temperature was 19 nm measured over a 50 mm diameter test aperture. The WFE remained at 18 nm RMS when the grating was cooled. This important result demonstrates that excellent WFE performance can be obtained with cooled VPH gratings, as required for use in future cryogenic infrared astronomical spectrometers planned for the European Extremely Large Telescope.

  19. Error and uncertainty in Raman thermal conductivity measurements

    DOE PAGES

    Thomas Edwin Beechem; Yates, Luke; Graham, Samuel

    2015-04-22

    We investigated error and uncertainty in Raman thermal conductivity measurements via finite element based numerical simulation of two geometries often employed -- Joule-heating of a wire and laser-heating of a suspended wafer. Using this methodology, the accuracy and precision of the Raman-derived thermal conductivity are shown to depend on (1) assumptions within the analytical model used in the deduction of thermal conductivity, (2) uncertainty in the quantification of heat flux and temperature, and (3) the evolution of thermomechanical stress during testing. Apart from the influence of stress, errors of 5% coupled with uncertainties of ±15% are achievable for most materials under conditions typical of Raman thermometry experiments. Error can increase to >20%, however, for materials having highly temperature dependent thermal conductivities or, in some materials, when thermomechanical stress develops concurrent with the heating. A dimensionless parameter -- termed the Raman stress factor -- is derived to identify when stress effects will induce large levels of error. Together, the results compare the utility of Raman based conductivity measurements relative to more established techniques while at the same time identifying situations where its use is most efficacious.

  20. Why Is Rainfall Error Analysis Requisite for Data Assimilation and Climate Modeling?

    NASA Technical Reports Server (NTRS)

    Hou, Arthur Y.; Zhang, Sara Q.

    2004-01-01

    Given the large temporal and spatial variability of precipitation processes, errors in rainfall observations are difficult to quantify yet crucial to making effective use of rainfall data for improving atmospheric analysis, weather forecasting, and climate modeling. We highlight the need for developing a quantitative understanding of systematic and random errors in precipitation observations by examining explicit examples of how each type of errors can affect forecasts and analyses in global data assimilation. We characterize the error information needed from the precipitation measurement community and how it may be used to improve data usage within the general framework of analysis techniques, as well as accuracy requirements from the perspective of climate modeling and global data assimilation.

  1. Geometrical correction factors for heat flux meters

    NASA Technical Reports Server (NTRS)

    Baumeister, K. J.; Papell, S. S.

    1974-01-01

    General formulas are derived for determining gage averaging errors of strip-type heat flux meters used in the measurement of one-dimensional heat flux distributions. The local averaging error e(x) is defined as the difference between the measured value of the heat flux and the local value which occurs at the center of the gage. In terms of e(x), a correction procedure is presented which allows a better estimate for the true value of the local heat flux. For many practical problems, it is possible to use relatively large gages to obtain acceptable heat flux measurements.
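
    The quantities involved can be written compactly. Assuming a strip gage of width w that reports the mean flux over its face, the standard Taylor-series picture (consistent with, though not copied from, the paper's derivation) gives:

```latex
% A strip gage of width w centered at x reports the mean flux over its face,
% not the local value at x:
\[
  q_{\mathrm{meas}}(x) \;=\; \frac{1}{w}\int_{x-w/2}^{x+w/2} q(\xi)\,d\xi ,
  \qquad
  e(x) \;=\; q_{\mathrm{meas}}(x) - q(x).
\]
% Expanding q in a Taylor series about x gives the leading-order averaging
% error, which motivates correcting the reading by subtracting (w^2/24) q'':
\[
  e(x) \;=\; \frac{w^{2}}{24}\, q''(x) \;+\; O\!\left(w^{4}\right).
\]
```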

  2. Analysis and discussion on the experimental data of electrolyte analyzer

    NASA Astrophysics Data System (ADS)

    Dong, XinYu; Jiang, JunJie; Liu, MengJun; Li, Weiwei

    2018-06-01

    In the subsequent verification of electrolyte analyzers, we found that an instrument can achieve good repeatability and stability over repeated measurements within a short period of time, meeting the verification regulation requirements for linearity error and cross-contamination rate. However, large indication errors are very common, and the measurement results of instruments from different manufacturers differ greatly. In order to identify and solve this problem, help enterprises improve product quality, and obtain accurate and reliable measurement data, we carried out an experimental evaluation of electrolyte analyzers and analyzed the data statistically.

  3. Analysis of Covariance: Is It the Appropriate Model to Study Change?

    ERIC Educational Resources Information Center

    Marston, Paul T.; Borich, Gary D.

    The four main approaches to measuring treatment effects in schools (raw gain, residual gain, covariance, and true scores) were compared. A simulation study showed that true score analysis produced a large number of Type-I errors. When corrected for this error, the method showed the least power of the four. This outcome was clearly the result of the…

  4. Evaluating Procedures for Reducing Measurement Error in Math Curriculum-Based Measurement Probes

    ERIC Educational Resources Information Center

    Methe, Scott A.; Briesch, Amy M.; Hulac, David

    2015-01-01

    At present, it is unclear whether math curriculum-based measurement (M-CBM) procedures provide a dependable measure of student progress in math computation because support for their technical properties is based largely upon a body of correlational research. Recent investigations into the dependability of M-CBM scores have found that evaluating…

  5. Circular Array of Magnetic Sensors for Current Measurement: Analysis for Error Caused by Position of Conductor.

    PubMed

    Yu, Hao; Qian, Zheng; Liu, Huayi; Qu, Jiaqi

    2018-02-14

    This paper analyzes the measurement error, caused by the position of the current-carrying conductor, of a circular array of magnetic sensors for current measurement. The circular array of magnetic sensors is an effective approach for AC or DC non-contact measurement, as it is low-cost and light-weight and offers a large linear range, wide bandwidth, and low noise. In particular, it has been claimed that such a structure has an excellent ability to reduce errors caused by the position of the current-carrying conductor, crosstalk current interference, the shape of the conductor cross-section, and the Earth's magnetic field. However, the position of the current-carrying conductor, including un-centeredness and un-perpendicularity, has not been analyzed in detail until now. In this paper, for the purpose of minimizing the measurement error, a theoretical analysis based on vector inner and exterior products is proposed. In the presented mathematical model of relative error, the un-centered offset distance, the un-perpendicular angle, the radius of the circle, and the number of magnetic sensors are expressed in one equation. A comparison of the relative error caused by the position of the current-carrying conductor between four and eight sensors is conducted. Tunnel magnetoresistance (TMR) sensors are used in the experimental prototype to verify the mathematical model. The analysis results can serve as a reference for designing the details of a circular array of magnetic sensors for current measurement in practical situations.
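
    The un-centeredness case can be simulated directly. In the sketch below (radius, current, and offsets are arbitrary choices, not values from the paper), an interior conductor is displaced from the array center and the Ampère's-law estimate is compared with the true current for different sensor counts:

```python
import numpy as np

MU0 = 4e-7 * np.pi

def estimate_current(n, offset_frac, radius=0.05, current=100.0):
    """Ampere's-law current estimate from n tangential field samples on a
    circle of `radius`, with the conductor offset from the center."""
    theta = 2 * np.pi * np.arange(n) / n
    sensors = radius * np.column_stack([np.cos(theta), np.sin(theta)])
    r = sensors - np.array([offset_frac * radius, 0.0])
    d2 = np.sum(r**2, axis=1)
    # Biot-Savart field of an infinite straight wire, projected on tangents.
    b = MU0 * current / (2 * np.pi * d2[:, None]) \
        * np.column_stack([-r[:, 1], r[:, 0]])
    tang = np.column_stack([-sensors[:, 1], sensors[:, 0]]) / radius
    bt = np.sum(b * tang, axis=1)
    return bt.mean() * 2 * np.pi * radius / MU0

for n in (4, 8):
    for off in (0.2, 0.5):
        e = (estimate_current(n, off) - 100.0) / 100.0
        print(f"n={n}, offset={off:.1f} R: relative error = {100 * e:+.4f} %")
```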

  6. Experimental measurement of structural power flow on an aircraft fuselage

    NASA Technical Reports Server (NTRS)

    Cuschieri, J. M.

    1989-01-01

    An experimental technique is used to measure the structural power flow through an aircraft fuselage with the excitation near the wing attachment location. Because of the large number of measurements required to analyze the whole of an aircraft fuselage, it is necessary that a balance be achieved between the number of measurement transducers, the mounting of these transducers, and the accuracy of the measurements. Using four transducers mounted on a Bakelite platform, the structural intensity vectors at locations distributed throughout the fuselage are measured. To minimize the errors associated with using a four-transducer technique, the measurement positions are selected away from bulkheads and stiffeners. Because four separate transducers are used, each with its own drive and conditioning amplifiers, phase errors are introduced in the measurements that can be much greater than the phase differences associated with the measurements. To minimize these phase errors, two sets of measurements are taken for each position, with the orientation of the transducers rotated by 180 deg between the two, and an average is taken between the two sets of measurements. Results are presented and discussed.

  7. Mathematical Model and Calibration Experiment of a Large Measurement Range Flexible Joints 6-UPUR Six-Axis Force Sensor

    PubMed Central

    Zhao, Yanzhi; Zhang, Caifeng; Zhang, Dan; Shi, Zhongpan; Zhao, Tieshi

    2016-01-01

    Nowadays, improving the accuracy and enlarging the measuring range of six-axis force sensors for wider applications in aircraft landing, rocket thrust, and spacecraft docking testing has become an urgent objective. However, it is still difficult to achieve high accuracy and a large measuring range with traditional parallel six-axis force sensors due to the influence of joint gaps and friction. Therefore, to overcome these limitations, this paper proposes a 6-Universal-Prismatic-Universal-Revolute (UPUR) parallel mechanism with flexible joints for developing a large-measurement-range six-axis force sensor. The structural characteristics of the sensor are analyzed in comparison with a traditional parallel sensor based on the Stewart platform. The force transfer relation of the sensor is deduced, and the force Jacobian matrix is obtained using screw theory for two cases: the ideal state, and the state in which the flexibility of each flexible joint is considered. The prototype and loading calibration system are designed and developed. The K value method and the least squares method are used to process the experimental data, and class I and class II linearity errors are obtained. The experimental results show that the calibration error of the K value method is more than 13.4%, while the calibration error of the least squares method is 2.67%. The experimental results prove the feasibility of the sensor and the correctness of the theoretical analysis, which are expected to be adopted in practical applications. PMID:27529244

  8. Cross-Spectrum PM Noise Measurement, Thermal Energy, and Metamaterial Filters.

    PubMed

    Gruson, Yannick; Giordano, Vincent; Rohde, Ulrich L; Poddar, Ajay K; Rubiola, Enrico

    2017-03-01

    Virtually all commercial instruments for the measurement of oscillator PM noise make use of the cross-spectrum method (arXiv:1004.5539 [physics.ins-det], 2010). High sensitivity is achieved by correlation and averaging on two equal channels, which measure the same input and reject the background of the instrument. We show that a systematic error is always present if the thermal energy of the input power splitter is not accounted for. Such an error can result in noise underestimation of up to a few decibels in the lowest-noise quartz oscillators, and in an invalid measurement in the case of cryogenic oscillators. As another alarming fact, the presence of metamaterial components in the oscillator results in unpredictable behavior and large errors, even under well-controlled experimental conditions. We observed a spread of 40 dB in the phase noise spectra of an oscillator simply by replacing the output filter.
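
    A toy sketch of the plain cross-spectrum averaging that such instruments rely on (it deliberately omits the thermal-energy term and the metamaterial effects discussed above): two channels observe the same device noise plus independent instrument backgrounds, and averaging the cross-spectra over many acquisitions suppresses the uncorrelated backgrounds, whereas a single-channel spectrum does not.

```python
# Sketch of cross-spectrum background rejection (illustrative; not a PM-noise instrument model).
import numpy as np

rng = np.random.default_rng(1)
n, n_avg = 4096, 200
acc_cross, acc_auto = 0.0, 0.0

for _ in range(n_avg):
    common = rng.standard_normal(n)                  # noise common to both channels (the DUT)
    ch1 = common + 3.0 * rng.standard_normal(n)      # channel 1 adds its own background
    ch2 = common + 3.0 * rng.standard_normal(n)      # channel 2 adds an independent background
    f1, f2 = np.fft.rfft(ch1), np.fft.rfft(ch2)
    acc_cross = acc_cross + f1 * np.conj(f2)         # cross-spectrum keeps only the common part
    acc_auto = acc_auto + np.abs(f1) ** 2            # single-channel spectrum keeps the background

scale = 1.0 / (n_avg * n)
print("averaged cross-spectrum level:", np.mean(np.real(acc_cross)) * scale)   # ~ var(common) = 1
print("single-channel spectrum level:", np.mean(acc_auto) * scale)             # ~ 1 + 9 = 10
```
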

  9. Research on precise pneumatic-electric displacement sensor with large measurement range

    NASA Astrophysics Data System (ADS)

    Yin, Zhehao; Yuan, Yibao; Liu, Baoshuai

    2017-10-01

    This research focuses on a precise pneumatic-electric displacement sensor with a large measurement range. By expanding the measurement range while retaining high precision, the need for both high precision and large range in machining inspection can be satisfied. The work starts from an analysis of the pneumatic measuring principle. A gas-circuit measuring system based on differential pressure was then designed, with two aims: first, to convert the displacement signal into a gas signal; second, to reduce the measurement error caused by pressure and environmental disturbances. Furthermore, in view of the demanding requirements for linearity, sensitivity and stability, a pneumatic-electric transducer built around an SCX-series pressure sensor was studied, whose main purpose is to convert the gas signal into a suitable electrical signal. Lastly, a broken-line (piecewise linear) subsection linearization circuit was designed, which corrects the nonlinearity of the output characteristic curve and thereby enlarges the linear measurement range. The final result can be summarized as follows: with a measuring error of less than 1 μm, the measurement range can be extended to approximately 200 μm, which is much larger than the range of a traditional pneumatic measuring instrument. At the same time, the sensor achieves good interchangeability and stability, making it more suitable for engineering applications.
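
    In software, the broken-line (piecewise linear) correction described above amounts to a table lookup between calibration breakpoints; the sketch below shows the idea with made-up breakpoint values (the actual instrument implements this with an analog circuit).

```python
# Hedged sketch of a piecewise-linear ("broken line") output correction; breakpoints are invented.
import numpy as np

raw_volt = np.array([0.00, 0.45, 0.95, 1.60, 2.40, 3.10])    # transducer output at known gauges (V)
disp_um = np.array([0.0, 40.0, 80.0, 120.0, 160.0, 200.0])   # corresponding true displacements (um)

def linearized_displacement(voltage):
    """Piecewise-linear correction of the sensor's output characteristic curve."""
    return np.interp(voltage, raw_volt, disp_um)

print(linearized_displacement(1.2))   # displacement estimate between two breakpoints
```
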

  10. Seeing in the Dark: Weak Lensing from the Sloan Digital Sky Survey

    NASA Astrophysics Data System (ADS)

    Huff, Eric Michael

    Statistical weak lensing by large-scale structure (cosmic shear) is a promising cosmological tool, which has motivated the design of several large upcoming astronomical surveys. This Thesis presents a measurement of cosmic shear using coadded Sloan Digital Sky Survey (SDSS) imaging in 168 square degrees of the equatorial region, with r < 23.5 and i < 22.5, a source number density of 2.2 per arcmin^2 and median redshift of zmed = 0.52. These coadds were generated using a new rounding kernel method that was intended to minimize systematic errors in the lensing measurement due to coherent PSF anisotropies that are otherwise prevalent in the SDSS imaging data. Measurements of cosmic shear out to angular separations of 2 degrees are presented, along with systematics tests of the catalog generation and shear measurement steps that demonstrate that these results are dominated by statistical rather than systematic errors. Assuming a cosmological model corresponding to WMAP7 (Komatsu et al., 2011) and allowing only the amplitude of matter fluctuations sigma8 to vary, the best-fit value of the amplitude of matter fluctuations is sigma8 = 0.636 +0.109/-0.154 (1 sigma); without systematic errors this would be sigma8 = 0.636 +0.099/-0.137 (1 sigma). Assuming a flat ΛCDM model, the combined constraints with WMAP7 are sigma8 = 0.784 +0.028/-0.026 (1 sigma). The 2 sigma error range is 14 percent smaller than that from WMAP7 alone. Aside from the intrinsic value of such cosmological constraints from the growth of structure, some important lessons are identified for upcoming surveys that may face similar issues when combining multi-epoch data to measure cosmic shear. Motivated by the challenges faced in the cosmic shear measurement, two new lensing probes are suggested for increasing the available weak lensing signal. Both use galaxy scaling relations to control for scatter in lensing observables. The first employs a version of the well-known fundamental plane relation for early type galaxies. This modified "photometric fundamental plane" replaces velocity dispersions with photometric galaxy properties, thus obviating the need for spectroscopic data. We present the first detection of magnification using this method by applying it to photometric catalogs from the Sloan Digital Sky Survey. This analysis shows that the derived magnification signal is comparable to that available from conventional methods using gravitational shear. We suppress the dominant sources of systematic error and discuss modest improvements that may allow this method to equal or even surpass the signal-to-noise achievable with shear. Moreover, some of the dominant sources of systematic error are substantially different from those of shear-based techniques. The second outlines an idea for using the optical Tully-Fisher relation to dramatically improve the signal-to-noise and systematic error control for shear measurements. The expected error properties and potential advantages of such a measurement are proposed, and a pilot study is suggested in order to test the viability of Tully-Fisher weak lensing in the context of the forthcoming generation of large spectroscopic surveys.

  11. Simultaneous Control of Error Rates in fMRI Data Analysis

    PubMed Central

    Kang, Hakmook; Blume, Jeffrey; Ombao, Hernando; Badre, David

    2015-01-01

    The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate, and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulated (global) Type I error rate is also small. This solution is achieved by employing the Likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations that show the likelihood approach is viable, leading to ‘cleaner’ looking brain maps and operational superiority (lower average error rates). Finally, we include a case study on cognitive control related activation in the prefrontal cortex of the human brain. PMID:26272730

  12. Use of autocorrelation scanning in DNA copy number analysis.

    PubMed

    Zhang, Liangcai; Zhang, Li

    2013-11-01

    Data quality is a critical issue in the analyses of DNA copy number alterations obtained from microarrays. It is commonly assumed that copy number alteration data can be modeled as piecewise constant and the measurement errors of different probes are independent. However, these assumptions do not always hold in practice. In some published datasets, we find that measurement errors are highly correlated between probes that interrogate nearby genomic loci, and the piecewise-constant model does not fit the data well. The correlated errors cause problems in downstream analysis, leading to a large number of DNA segments falsely identified as having copy number gains and losses. We developed a simple tool, called autocorrelation scanning profile, to assess the dependence of measurement error between neighboring probes. Autocorrelation scanning profile can be used to check data quality and refine the analysis of DNA copy number data, which we demonstrate in some typical datasets.
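
    A minimal sketch of a scanning lag-1 autocorrelation profile of the kind described (window size and data are illustrative; no segmentation is performed): the profile simply flags regions where neighboring probes carry correlated measurement errors rather than independent noise.

```python
# Sketch of an autocorrelation scanning profile over probe-level values (illustrative only).
import numpy as np

def autocorrelation_scan(values, window=100):
    """Lag-1 autocorrelation of probe values computed in sliding windows along the genome."""
    half = window // 2
    profile = np.full(values.size, np.nan)
    for i in range(half, values.size - half):
        seg = values[i - half:i + half]
        profile[i] = np.corrcoef(seg[:-1], seg[1:])[0, 1]
    return profile

# Toy comparison: independent probe noise vs. noise smoothed across neighbouring probes.
rng = np.random.default_rng(2)
independent = rng.standard_normal(2000)
correlated = np.convolve(rng.standard_normal(2000), np.ones(5) / 5, mode="same")
print(np.nanmean(autocorrelation_scan(independent)), np.nanmean(autocorrelation_scan(correlated)))
```
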

  13. Experimental measurement of structural power flow on an aircraft fuselage

    NASA Technical Reports Server (NTRS)

    Cuschieri, J. M.

    1991-01-01

    An experimental technique is used to measure structural intensity through an aircraft fuselage with an excitation load applied near one of the wing attachment locations. The fuselage is a relatively large structure, requiring a large number of measurement locations to analyze the whole of the structure. For the measurement of structural intensity, multiple point measurements are necessary at every location of interest. A tradeoff is therefore required between the number of measurement transducers, the mounting of these transducers, and the accuracy of the measurements. Using four transducers mounted on a bakelite platform, structural intensity vectors are measured at locations distributed throughout the fuselage. To minimize the errors associated with using the four-transducer technique, the measurement locations are selected to be away from bulkheads and stiffeners. Furthermore, to eliminate phase errors between the four transducer channels, two sets of data are collected for each position, with the orientation of the platform carrying the four transducers rotated by 180 degrees and an average taken between the two sets of data. The results of these measurements, together with a discussion of the suitability of the approach for measuring structural intensity on a real structure, are presented.

  14. Robust Huber-based iterated divided difference filtering with application to cooperative localization of autonomous underwater vehicles.

    PubMed

    Gao, Wei; Liu, Yalong; Xu, Bo

    2014-12-19

    A new algorithm called Huber-based iterated divided difference filtering (HIDDF) is derived and applied to cooperative localization of autonomous underwater vehicles (AUVs) supported by a single surface leader. The position states are estimated using acoustic range measurements relative to the leader, in which some disadvantages such as weak observability, large initial error and measurements contaminated with outliers are inherent. By integrating the merits of both iterated divided difference filtering (IDDF) and Huber's M-estimation methodology, the new filtering method can not only achieve more accurate estimation and faster convergence than standard divided difference filtering (DDF) under conditions of weak observability and large initial error, but also exhibit robustness with respect to outlier measurements, for which the standard IDDF would suffer severe degradation in estimation accuracy. The correctness as well as the validity of the algorithm is demonstrated through experimental results.
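
    The Huber reweighting at the core of such robust updates can be sketched in a few lines (this is the generic M-estimator weighting, not the full HIDDF): residuals below a threshold keep their quadratic, Kalman-like weight, while larger residuals are down-weighted, limiting the influence of outlier range measurements. The threshold and values are illustrative.

```python
# Sketch of Huber M-estimator weights applied to range-measurement residuals (illustrative).
import numpy as np

def huber_weights(residuals, sigma, k=1.345):
    """Per-measurement weights: quadratic cost below k*sigma, linear cost above."""
    z = np.abs(residuals) / sigma
    w = np.ones_like(z)
    large = z > k
    w[large] = k / z[large]
    return w

range_residuals = np.array([0.3, -0.5, 0.1, 8.0, -0.2])   # metres; one contaminated measurement
print(huber_weights(range_residuals, sigma=0.5))           # the outlier receives a small weight
```
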

  15. A dynamic measure of controllability and observability for the placement of actuators and sensors on large space structures

    NASA Technical Reports Server (NTRS)

    Vandervelde, W. E.; Carignan, C. R.

    1982-01-01

    The degree of controllability of a large space structure is found by a four-step procedure: (1) finding the minimum control energy for driving the system from a given initial state to the origin in the prescribed time; (2) finding the region of initial states which can be driven to the origin with constrained control energy and time using an optimal control strategy; (3) scaling the axes so that a unit displacement in every direction is equally important to control; and (4) finding the linear measure of the weighted "volume" of the ellipsoid in the equicontrol space. For observability, the error covariance must be reduced toward zero using measurements optimally, and the criterion must be standardized by the magnitude of tolerable errors. The results obtained using these methods are applied to the vibration modes of a free-free beam.
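
    One common way to turn such a procedure into numbers is through the finite-horizon controllability Gramian, whose ellipsoid describes the states reachable with unit control energy and whose determinant yields a volume-like scalar measure after the states are scaled. The sketch below illustrates that generic Gramian-based measure only; it is not the authors' exact metric, and the system matrices, horizon and scaling are invented.

```python
# Hedged sketch of a Gramian-based "degree of controllability" measure (illustrative only).
import numpy as np
from scipy.linalg import expm

def controllability_volume(A, B, T=5.0, n_steps=500, scale=None):
    """Finite-horizon Gramian W = int_0^T e^{At} B B^T e^{A^T t} dt and sqrt(det) of the scaled W."""
    n = A.shape[0]
    S = np.diag(scale) if scale is not None else np.eye(n)   # state scaling (step 3 above)
    dt = T / n_steps
    W = np.zeros((n, n))
    for k in range(n_steps):                                  # midpoint Riemann sum for the integral
        E = expm(A * (k + 0.5) * dt)
        W += E @ B @ B.T @ E.T * dt
    W_scaled = S @ W @ S.T
    return W_scaled, np.sqrt(np.linalg.det(W_scaled))         # sqrt(det) ~ reachable-ellipsoid volume

# Toy example: a single lightly damped vibration mode driven by one force actuator.
omega, zeta = 2.0, 0.02
A = np.array([[0.0, 1.0], [-omega**2, -2 * zeta * omega]])
B = np.array([[0.0], [1.0]])
W, vol = controllability_volume(A, B)
print(vol)
```
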

  16. Calibration method of microgrid polarimeters with image interpolation.

    PubMed

    Chen, Zhenyue; Wang, Xia; Liang, Rongguang

    2015-02-10

    Microgrid polarimeters have large advantages over conventional polarimeters because of their snapshot nature and because they have no moving parts. However, they also suffer from several error sources, such as fixed pattern noise (FPN), photon response nonuniformity (PRNU), pixel cross talk, and instantaneous field-of-view (IFOV) error. A characterization method is proposed to improve the measurement accuracy in the visible waveband. We first calibrate the camera with uniform illumination so that the response of the sensor is uniform over the entire field of view without IFOV error. Then a spline interpolation method is implemented to minimize IFOV error. Experimental results show the proposed method can effectively minimize the FPN and PRNU.

  17. Kinematic Analysis of Speech Sound Sequencing Errors Induced by Delayed Auditory Feedback.

    PubMed

    Cler, Gabriel J; Lee, Jackson C; Mittelman, Talia; Stepp, Cara E; Bohland, Jason W

    2017-06-22

    Delayed auditory feedback (DAF) causes speakers to become disfluent and make phonological errors. Methods for assessing the kinematics of speech errors are lacking, with most DAF studies relying on auditory perceptual analyses, which may be problematic, as errors judged to be categorical may actually represent blends of sounds or articulatory errors. Eight typical speakers produced nonsense syllable sequences under normal and DAF (200 ms). Lip and tongue kinematics were captured with electromagnetic articulography. Time-locked acoustic recordings were transcribed, and the kinematics of utterances with and without perceived errors were analyzed with existing and novel quantitative methods. New multivariate measures showed that for 5 participants, kinematic variability for productions perceived to be error free was significantly increased under delay; these results were validated by using the spatiotemporal index measure. Analysis of error trials revealed both typical productions of a nontarget syllable and productions with articulatory kinematics that incorporated aspects of both the target and the perceived utterance. This study is among the first to characterize articulatory changes under DAF and provides evidence for different classes of speech errors, which may not be perceptually salient. New methods were developed that may aid visualization and analysis of large kinematic data sets. https://doi.org/10.23641/asha.5103067.

  18. Kinematic Analysis of Speech Sound Sequencing Errors Induced by Delayed Auditory Feedback

    PubMed Central

    Lee, Jackson C.; Mittelman, Talia; Stepp, Cara E.; Bohland, Jason W.

    2017-01-01

    Purpose Delayed auditory feedback (DAF) causes speakers to become disfluent and make phonological errors. Methods for assessing the kinematics of speech errors are lacking, with most DAF studies relying on auditory perceptual analyses, which may be problematic, as errors judged to be categorical may actually represent blends of sounds or articulatory errors. Method Eight typical speakers produced nonsense syllable sequences under normal and DAF (200 ms). Lip and tongue kinematics were captured with electromagnetic articulography. Time-locked acoustic recordings were transcribed, and the kinematics of utterances with and without perceived errors were analyzed with existing and novel quantitative methods. Results New multivariate measures showed that for 5 participants, kinematic variability for productions perceived to be error free was significantly increased under delay; these results were validated by using the spatiotemporal index measure. Analysis of error trials revealed both typical productions of a nontarget syllable and productions with articulatory kinematics that incorporated aspects of both the target and the perceived utterance. Conclusions This study is among the first to characterize articulatory changes under DAF and provides evidence for different classes of speech errors, which may not be perceptually salient. New methods were developed that may aid visualization and analysis of large kinematic data sets. Supplemental Material https://doi.org/10.23641/asha.5103067 PMID:28655038

  19. Minimizing Artifacts and Biases in Chamber-Based Measurements of Soil Respiration

    NASA Astrophysics Data System (ADS)

    Davidson, E. A.; Savage, K.

    2001-05-01

    Soil respiration is one of the largest and most important fluxes of carbon in terrestrial ecosystems. The objectives of this paper are to review concerns about uncertainties of chamber-based measurements of CO2 emissions from soils, to evaluate the direction and magnitude of these potential errors, and to explain procedures that minimize these errors and biases. Disturbance of diffusion gradients causes fluxes to be underestimated by less than 15% in most cases, and can be partially corrected for with curve fitting and/or minimized by using brief measurement periods. Under-pressurization or over-pressurization of the chamber caused by flow restrictions in air-circulating designs can cause large errors, but can also be avoided with properly sized chamber vents and unrestricted flows. Somewhat larger pressure differentials are observed under windy conditions, and the accuracy of measurements made under such conditions needs more research. Spatial and temporal heterogeneity can be addressed with appropriate chamber sizes, numbers, and frequency of sampling. For example, means of 8 randomly chosen flux measurements from a population of 36 measurements made with 300 cm2 chambers in tropical forests and pastures were within 25% of the full population mean 98% of the time and within 10% of the full population mean 70% of the time. Comparisons of chamber-based measurements with tower-based measurements of total ecosystem respiration require analysis of the scale of variation within the purported tower footprint. In a forest at Howland, Maine, the differences in soil respiration rates between very poorly drained and well-drained soils were large, but they were mostly fortuitously cancelled when evaluated for purported tower footprints of 600-2100 m length. While all of these potential sources of measurement error and sampling bias must be carefully considered, properly designed and deployed chambers provide a reliable means of accurately measuring soil respiration in terrestrial ecosystems.

  20. Configuration Analysis of the ERS Points in Large-Volume Metrology System

    PubMed Central

    Jin, Zhangjun; Yu, Cijun; Li, Jiangxiong; Ke, Yinglin

    2015-01-01

    In aircraft assembly, multiple laser trackers are used simultaneously to measure large-scale aircraft components. To combine the independent measurements, the transformation matrices between the laser trackers’ coordinate systems and the assembly coordinate system are calculated by measuring the enhanced reference system (ERS) points. This article aims to understand how the configuration of the ERS points affects the transformation matrix errors, and then to optimize the deployment of the ERS points to reduce those errors. To optimize the deployment of the ERS points, an explicit model is derived to estimate the transformation matrix errors. The estimation model is verified by an experiment carried out on the factory floor. Based on the proposed model, a group of sensitivity coefficients are derived to evaluate the quality of the configuration of the ERS points, and several typical configurations of the ERS points are then analyzed in detail with the sensitivity coefficients. Finally, general guidance is established to instruct the deployment of the ERS points with respect to the layout, the volume size and the number of the ERS points, as well as the position and orientation of the assembly coordinate system. PMID:26402685
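
    A minimal sketch of the underlying fitting step (the article's error-estimation model itself is not reproduced): the transformation from a laser tracker frame to the assembly frame is the rigid motion that best fits the ERS points measured in both frames, computed here with the standard SVD (Kabsch) solution. The point coordinates are invented for illustration.

```python
# Hedged sketch: least-squares rigid transformation from ERS points measured in two frames.
import numpy as np

def fit_rigid_transform(p_tracker, p_assembly):
    """Rotation R and translation t minimising ||R @ p_tracker + t - p_assembly||."""
    ct, ca = p_tracker.mean(axis=0), p_assembly.mean(axis=0)
    H = (p_tracker - ct).T @ (p_assembly - ca)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, ca - R @ ct

# ERS point coordinates (m): known in the assembly frame, simulated as measured by one tracker.
ers_assembly = np.array([[0, 0, 0], [5, 0, 0], [5, 3, 0], [0, 3, 2.0], [2.5, 1.5, 1.0]])
angle = np.deg2rad(20)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0], [np.sin(angle), np.cos(angle), 0], [0, 0, 1]])
ers_tracker = (ers_assembly - np.array([1.0, 2.0, 0.5])) @ R_true
R, t = fit_rigid_transform(ers_tracker, ers_assembly)
print(np.allclose(R @ ers_tracker.T + t[:, None], ers_assembly.T, atol=1e-9))
```
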

  1. Modulating Retro-Reflectors for Space, Tracking, Acquisition and Ranging using Multiple Quantum Well Technology (Preprint)

    DTIC Science & Technology

    2002-01-01

    feedback signals were derived from the motion of the platform rather than directly measured, though an actual spacecraft would likely utilize ... large position error spikes due to target motion reversal. Of course, these tracking errors are highly dependent on the feedback gains chosen for the ...
    Key Words: MQW Retromodulators, Modulating Retroreflector(s), Inter-spacecraft communications and navigation, space control

  2. Low relative error in consumer-grade GPS units make them ideal for measuring small-scale animal movement patterns

    PubMed Central

    Severns, Paul M.

    2015-01-01

    Consumer-grade GPS units are a staple of modern field ecology, but the relatively large error radii reported by manufacturers (up to 10 m) ostensibly preclude their utility in measuring fine-scale movement of small animals such as insects. Here we demonstrate that for data collected at fine spatio-temporal scales, these devices can produce exceptionally accurate data on step-length and movement patterns of small animals. With an understanding of the properties of GPS error and how it arises, it is possible, using a simple field protocol, to use consumer-grade GPS units to collect step-length data for the movement of small animals that introduces a median error as small as 11 cm. These small error rates were measured in controlled observations of real butterfly movement. Similar conclusions were reached using a ground-truth test track prepared with a field tape and compass and subsequently measured 20 times using the same methodology as the butterfly tracking. Median error in the ground-truth track was slightly higher than in the field data, mostly between 20 and 30 cm, but even for the smallest ground-truth step (70 cm), this is still a signal-to-noise ratio of 3:1, and for steps of 3 m or more, the ratio is greater than 10:1. Such small errors relative to the movements being measured make these inexpensive units useful for measuring insect and other small animal movements on small to intermediate scales with budgets orders of magnitude lower than the survey-grade units used in past studies. As an additional advantage, these units are simpler to operate, and insect or other small animal trackways can be collected more quickly than with either survey-grade units or more traditional ruler/grid approaches. PMID:26312190

  3. Low relative error in consumer-grade GPS units make them ideal for measuring small-scale animal movement patterns.

    PubMed

    Breed, Greg A; Severns, Paul M

    2015-01-01

    Consumer-grade GPS units are a staple of modern field ecology, but the relatively large error radii reported by manufacturers (up to 10 m) ostensibly preclude their utility in measuring fine-scale movement of small animals such as insects. Here we demonstrate that for data collected at fine spatio-temporal scales, these devices can produce exceptionally accurate data on step-length and movement patterns of small animals. With an understanding of the properties of GPS error and how it arises, it is possible, using a simple field protocol, to use consumer-grade GPS units to collect step-length data for the movement of small animals that introduces a median error as small as 11 cm. These small error rates were measured in controlled observations of real butterfly movement. Similar conclusions were reached using a ground-truth test track prepared with a field tape and compass and subsequently measured 20 times using the same methodology as the butterfly tracking. Median error in the ground-truth track was slightly higher than in the field data, mostly between 20 and 30 cm, but even for the smallest ground-truth step (70 cm), this is still a signal-to-noise ratio of 3:1, and for steps of 3 m or more, the ratio is greater than 10:1. Such small errors relative to the movements being measured make these inexpensive units useful for measuring insect and other small animal movements on small to intermediate scales with budgets orders of magnitude lower than the survey-grade units used in past studies. As an additional advantage, these units are simpler to operate, and insect or other small animal trackways can be collected more quickly than with either survey-grade units or more traditional ruler/grid approaches.

  4. Forward scattering in two-beam laser interferometry

    NASA Astrophysics Data System (ADS)

    Mana, G.; Massa, E.; Sasso, C. P.

    2018-04-01

    A fractional error as large as 25 pm mm^-1 at the zero optical-path difference has been observed in an optical interferometer measuring the displacement of an x-ray interferometer used to determine the lattice parameter of silicon. Detailed investigations have brought to light that the error was caused by light forward-scattered from the beam feeding the interferometer. This paper reports on the impact of forward-scattered light on the accuracy of two-beam optical interferometry applied to length metrology, and supplies a model capable of explaining the observed error.

  5. Specificity of Atmosphere Correction of Satellite Ocean Color Data in Far-Eastern Region

    NASA Astrophysics Data System (ADS)

    Trusenkova, O.; Kachur, V.; Aleksanin, A. I.

    2016-02-01

    An error analysis of the satellite reflectance coefficients (Rrs) from MODIS/AQUA ocean color data was carried out for two atmospheric correction algorithms (NIR, MUMM) in the Far-Eastern region. Several sets of unique matched in situ and satellite measurements were analysed; each set contains measurements made with an ASD spectroradiometer for each satellite pass. The measurement locations were selected so that the Chlorophyll-a concentration has high variability. Analysis of an arbitrary set demonstrated that the main error component is systematic and that it depends on the Rrs values in a simple way. The reasons for such error behavior are considered. The most probable explanation of the large errors in the ocean color parameters in the Far-Eastern region is the possibility of high concentrations of continental aerosol. A comparison of satellite and in situ measurements at AERONET stations in the USA and South Korea regions was made. It was shown that, for NIR correction of the atmospheric influence, the error values in these two regions differ by up to a factor of 10 for almost the same water turbidity and relatively good accuracy of the computed aerosol optical thickness. The study was supported by grant Russian Scientific Foundation No. 14-50-00034, by grant of Russian Foundation of Basic Research No. 15-35-21032-mol-a-ved, and by Program of Basic Research "Far East" of Far Eastern Branch of Russian Academy of Sciences.

  6. Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers

    PubMed Central

    Sun, Ting; Xing, Fei; You, Zheng

    2013-01-01

    The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Up to now, research in this field has lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker that avoids complicated theoretical derivation. This approach can determine the error propagation relationships of the star tracker and can be used to build an error model intuitively and systematically. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method are proved to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers. PMID:23567527

  7. Uncertainty in eddy covariance measurements and its application to physiological models

    Treesearch

    D.Y. Hollinger; A.D. Richardson; A.D. Richardson

    2005-01-01

    Flux data are noisy, and this uncertainty is largely due to random measurement error. Knowledge of uncertainty is essential for the statistical evaluation of modeled and measured fluxes, for comparison of parameters derived by fitting models to measured fluxes and in formal data-assimilation efforts. We used the difference between simultaneous measurements from two...

  8. Effects of Exposure Measurement Error in the Analysis of Health Effects from Traffic-Related Air Pollution

    EPA Science Inventory

    In large epidemiological studies, many researchers use surrogates of air pollution exposure such as geographic information system (GIS)-based characterizations of traffic or simple housing characteristics. It is important to validate these surrogates against measured pollutant co...

  9. Ultrahigh Error Threshold for Surface Codes with Biased Noise

    NASA Astrophysics Data System (ADS)

    Tuckett, David K.; Bartlett, Stephen D.; Flammia, Steven T.

    2018-02-01

    We show that a simple modification of the surface code can exhibit an enormous gain in the error correction threshold for a noise model in which Pauli Z errors occur more frequently than X or Y errors. Such biased noise, where dephasing dominates, is ubiquitous in many quantum architectures. In the limit of pure dephasing noise we find a threshold of 43.7(1)% using a tensor network decoder proposed by Bravyi, Suchara, and Vargo. The threshold remains surprisingly large in the regime of realistic noise bias ratios, for example 28.2(2)% at a bias of 10. The performance is, in fact, at or near the hashing bound for all values of the bias. The modified surface code still uses only weight-4 stabilizers on a square lattice, but merely requires measuring products of Y instead of Z around the faces, as this doubles the number of useful syndrome bits associated with the dominant Z errors. Our results demonstrate that large efficiency gains can be found by appropriately tailoring codes and decoders to realistic noise models, even under the locality constraints of topological codes.

  10. Efficient Solar Scene Wavefront Estimation with Reduced Systematic and RMS Errors: Summary

    NASA Astrophysics Data System (ADS)

    Anugu, N.; Garcia, P.

    2016-04-01

    Wavefront sensing for solar telescopes is commonly implemented with Shack-Hartmann sensors. Correlation algorithms are usually used to estimate the extended-scene Shack-Hartmann sub-aperture image shifts or slopes. The image shift is computed by correlating a reference sub-aperture image with the target distorted sub-aperture image. The pixel position where the maximum correlation is located gives the image shift in integer pixel coordinates. Sub-pixel precision image shifts are computed by applying a peak-finding algorithm to the correlation peak (Poyneer 2003; Löfdahl 2010). However, the peak-finding results are usually biased towards the integer pixels; these errors are called systematic bias errors (Sjödahl 1994). They are caused by the low pixel sampling of the images, and their amplitude depends on the type of correlation algorithm and the type of peak-finding algorithm being used. To study the systematic errors in detail, solar sub-aperture synthetic images are constructed using a Swedish Solar Telescope solar granulation image. The performance of the cross-correlation algorithm in combination with different peak-finding algorithms is investigated. The studied peak-finding algorithms are: parabola (Poyneer 2003); quadratic polynomial (Löfdahl 2010); threshold center of gravity (Bailey 2003); Gaussian (Nobach & Honkanen 2005); and pyramid (Bailey 2003). The systematic error study reveals that the pyramid fit is the most robust to pixel-locking effects. The RMS error analysis reveals that the threshold center of gravity behaves better at low SNR, although its systematic errors are large. No single algorithm is best for reducing both the systematic and the RMS error. To overcome this problem, a new solution is proposed in which the image sampling is increased prior to the actual correlation matching. The method is realized in two steps to improve its computational efficiency. In the first step, the cross-correlation is performed on the original image spatial resolution grid (1 pixel). In the second step, the cross-correlation is performed on a sub-pixel grid, limiting the field of search to 4 × 4 pixels centered on the initial position delivered by the first step. The sub-pixel-grid region-of-interest images are generated with bi-cubic interpolation. Correlation matching on a sub-pixel grid was previously reported in electronic speckle photography (Sjödahl 1994) and is applied here to solar wavefront sensing. A large dynamic range and better measurement accuracy are achieved by combining the original-pixel-grid correlation matching over a large field of view with the sub-pixel interpolated-grid correlation matching within a small field of view. The results reveal that the proposed method outperforms all the peak-finding algorithms studied in the first approach. It reduces both the systematic error and the RMS error by a factor of 5 (i.e., 75% systematic error reduction) when 5-times-improved image sampling is used, at the expense of twice the computational cost. With the 5-times-improved image sampling, the wavefront accuracy is increased by a factor of 5. The proposed solution is strongly recommended for wavefront sensing in solar telescopes, particularly for measuring the large dynamic image shifts involved in open-loop adaptive optics. Also, by choosing an appropriate increase of image sampling as a trade-off between the computational speed limitation and the targeted sub-pixel image-shift accuracy, it can be employed in closed-loop adaptive optics. The study is extended to three other classes of sub-aperture images (a point source, a laser guide star, and a Galactic Center extended scene). The results are planned to be submitted to the Optics Express journal.
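
    The two-step matching can be sketched as follows (an illustration under simplifying assumptions, not the authors' implementation): step 1 scans integer shifts of the target sub-aperture image against the reference; step 2 bicubically upsamples both images and rescans a small window around the coarse estimate on the finer grid. The returned value is the shift, in original pixels, that best registers the target onto the reference; the brute-force search and all parameters are illustrative.

```python
# Hedged sketch of two-step (integer, then sub-pixel) correlation matching for an extended scene.
import numpy as np
from scipy.ndimage import shift as nd_shift, zoom

def best_shift(ref, img, shifts):
    """Return the (dy, dx) from `shifts` that maximises the correlation sum with the reference."""
    scores = [np.sum(ref * nd_shift(img, s, order=1, mode="nearest")) for s in shifts]
    return shifts[int(np.argmax(scores))]

def two_step_shift(ref, img, max_shift=8, upsample=5, window=2):
    # Step 1: integer-pixel search on the original grid.
    coarse = best_shift(ref, img,
                        [(dy, dx) for dy in range(-max_shift, max_shift + 1)
                                  for dx in range(-max_shift, max_shift + 1)])
    # Step 2: sub-pixel search on a bicubically upsampled grid near the coarse estimate.
    ref_u, img_u = zoom(ref, upsample, order=3), zoom(img, upsample, order=3)
    steps = range(-window * upsample, window * upsample + 1)
    fine = best_shift(ref_u, img_u,
                      [(coarse[0] * upsample + dy, coarse[1] * upsample + dx)
                       for dy in steps for dx in steps])
    return fine[0] / upsample, fine[1] / upsample

# Toy check: a Gaussian blob displaced by (1.4, -0.6) pixels is registered by about (-1.4, 0.6).
y, x = np.mgrid[0:32, 0:32]
ref = np.exp(-((y - 16.0) ** 2 + (x - 16.0) ** 2) / 20.0)
img = nd_shift(ref, (1.4, -0.6), order=3)
print(two_step_shift(ref, img))
```
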

  11. Fusing Satellite-Derived Irradiance and Point Measurements through Optimal Interpolation

    NASA Astrophysics Data System (ADS)

    Lorenzo, A.; Morzfeld, M.; Holmgren, W.; Cronin, A.

    2016-12-01

    Satellite-derived irradiance is widely used throughout the design and operation of a solar power plant. While satellite-derived estimates cover a large area, they also have large errors compared to point measurements from sensors on the ground. We describe an optimal interpolation routine that fuses the broad spatial coverage of satellite-derived irradiance with the high accuracy of point measurements. The routine can be applied to any satellite-derived irradiance and point measurement datasets. Unique aspects of this work include the fact that information is spread using cloud location and thickness and that a number of point measurements are collected from rooftop PV systems. The routine is sensitive to errors in the satellite image geolocation, so care must be taken to adjust the cloud locations based on the solar and satellite geometries. Analysis of the optimal interpolation routine over Tucson, AZ, with 20 point measurements shows a significant improvement in the irradiance estimate for two distinct satellite image to irradiance algorithms. Improved irradiance estimates can be used for resource assessment, distributed generation production estimates, and irradiance forecasts.
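
    A minimal sketch of an optimal interpolation (OI) analysis step of the kind described (domain, covariances, and station values are all illustrative): the satellite-derived field is corrected toward the point measurements, and the correction is spread to nearby pixels through an assumed spatial covariance.

```python
# Hedged sketch of an OI / BLUE update fusing a satellite field with two ground sensors.
import numpy as np

def optimal_interpolation(background, H, R, B, obs):
    """x_a = x_b + B H^T (H B H^T + R)^-1 (y - H x_b)."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return background + K @ (obs - H @ background)

n = 20                                                       # toy 1-D strip of 20 pixels
x_b = np.full(n, 600.0)                                      # satellite-derived irradiance (W/m^2)
coords = np.arange(n, dtype=float)
B = 80.0**2 * np.exp(-((coords[:, None] - coords[None, :]) / 3.0) ** 2)   # background covariance
H = np.zeros((2, n)); H[0, 5] = 1.0; H[1, 14] = 1.0          # two ground sensors at pixels 5 and 14
R = np.diag([20.0**2, 20.0**2])                              # point-measurement error covariance
y = np.array([680.0, 540.0])                                 # sensor readings

x_a = optimal_interpolation(x_b, H, R, B, y)
print(x_a[[4, 5, 6, 14]])                                    # analysis pulled toward the sensors
```
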

  12. A Track Initiation Method for the Underwater Target Tracking Environment

    NASA Astrophysics Data System (ADS)

    Li, Dong-dong; Lin, Yang; Zhang, Yao

    2018-04-01

    A novel, efficient track initiation method is proposed for the harsh underwater target tracking environment (heavy clutter and large measurement errors): the track splitting, evaluating, pruning and merging method (TSEPM). Track initiation demands that the method determine the existence and initial state of a target quickly and correctly. Heavy clutter and large measurement errors pose additional difficulties and challenges, which deteriorate and complicate track initiation in the harsh underwater tracking environment. Current track initiation methods have three primary shortcomings: (a) they cannot effectively suppress the disturbances caused by clutter; (b) they may yield a high false alarm probability and a low detection probability for a track; and (c) they cannot correctly estimate the initial state of a newly confirmed track. Based on the multiple hypotheses tracking principle and a modified logic-based track initiation method, track splitting creates a large number of tracks, which include the true track originating from the target, in order to increase the detection probability of a track; and, in order to decrease the false alarm probability, track pruning and track merging based on an evaluation mechanism are proposed to reduce the false tracks. The TSEPM method can deal with the track initiation problems arising from heavy clutter and large measurement errors, determine the target's existence, and estimate its initial state with the least squares method. Moreover, the method is fully automatic and does not require any manual input for initialization or parameter tuning. Simulation results indicate that the new method significantly improves the performance of track initiation in the harsh underwater target tracking environment.
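
    The least-squares initial-state step mentioned at the end can be sketched as follows (the splitting, evaluating, pruning and merging logic is not reproduced): assuming a nearly constant-velocity target, position and velocity are fitted to the few measurements collected along a confirmed candidate track. All values are illustrative.

```python
# Sketch of least-squares initial state estimation for a confirmed track (illustrative values).
import numpy as np

def initial_state_least_squares(times, positions):
    """Fit x(t) = x0 + v t to noisy position fixes; returns (x0, v), one column per coordinate."""
    A = np.column_stack((np.ones_like(times), times))
    coeffs, *_ = np.linalg.lstsq(A, positions, rcond=None)
    return coeffs[0], coeffs[1]

t = np.array([0.0, 1.0, 2.0, 3.0])                                    # scan times (s)
z = np.array([[10.2, 5.1], [12.1, 4.1], [13.8, 2.8], [16.1, 2.2]])    # noisy (x, y) fixes (m)
x0, v = initial_state_least_squares(t, z)
print("initial position:", x0, " velocity:", v)
```
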

  13. Experimental power spectral density analysis for mid- to high-spatial frequency surface error control.

    PubMed

    Hoyo, Javier Del; Choi, Heejoo; Burge, James H; Kim, Geon-Hee; Kim, Dae Wook

    2017-06-20

    The control of surface errors as a function of spatial frequency is critical during the fabrication of modern optical systems. Large-scale surface figure error is controlled by a guided removal process, such as computer-controlled optical surfacing, while smaller-scale surface errors are controlled by the polishing process parameters. Surface errors with spatial periods of only a few millimeters may degrade the performance of an optical system, causing background noise from scattered light and reducing imaging contrast for large optical systems. Conventionally, micro-surface roughness is often specified as the root mean square over a high spatial frequency range, evaluated on a 0.5 × 0.5 mm local surface map of 500 × 500 pixels. This surface specification is not adequate to fully describe the characteristics required for advanced optical systems. The process for controlling and minimizing mid- to high-spatial-frequency surface errors with periods of up to ∼2-3 mm was investigated for many optical fabrication conditions using the measured surface power spectral density (PSD) of a finished Zerodur optical surface. The surface PSD was then systematically related to various fabrication process parameters, such as the grinding methods, polishing interface materials, and polishing compounds. The retraceable experimental polishing conditions and processes used to produce an optimal optical surface PSD are presented.
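
    A minimal sketch of the PSD computation such a specification relies on (the profile, sampling and band limits are illustrative): a 1-D line profile of surface height is converted to a power spectral density over spatial frequency, from which the error in a chosen mid-/high-frequency band can be summarized as a band-limited RMS.

```python
# Hedged sketch of a surface-profile PSD and a band-limited RMS (all numbers illustrative).
import numpy as np
from scipy.signal import periodogram

dx = 0.001                                   # sampling step along the profile (mm)
x = np.arange(0, 10.0, dx)                   # 10 mm long line profile
rng = np.random.default_rng(3)
height = 5.0 * np.sin(2 * np.pi * x / 0.5) + 1.0 * rng.standard_normal(x.size)   # ripple + roughness (nm)

freq, psd = periodogram(height, fs=1.0 / dx)          # freq in cycles/mm, psd in nm^2 * mm
band = (freq >= 0.5) & (freq <= 10.0)                 # periods between 2 mm and 0.1 mm
rms_band = np.sqrt(np.trapz(psd[band], freq[band]))   # integrate the PSD over the band
print(f"band-limited RMS: {rms_band:.2f} nm")
```
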

  14. Simple, Fast and Effective Correction for Irradiance Spatial Nonuniformity in Measurement of IVs of Large Area Cells at NREL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moriarty, Tom

    The NREL cell measurement lab measures the IV parameters of cells of multiple sizes and configurations. A large contributing factor to errors and uncertainty in Jsc, Imax, Pmax and efficiency can be the irradiance spatial nonuniformity. Correcting for this nonuniformity through its precise and frequent measurement can be very time consuming. This paper explains a simple, fast and effective method, based on bicubic interpolation, for determining and correcting for spatial nonuniformity, and verifies the method's efficacy.
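
    A hedged sketch of the interpolation step (NREL's actual grid, measurement procedure and correction definition are not reproduced here): a coarse map of relative irradiance is bicubically interpolated across the cell footprint, and the mean of the interpolated map gives a nonuniformity correction factor. The grid, values and correction definition below are invented.

```python
# Illustrative bicubic interpolation of a coarse irradiance nonuniformity map.
import numpy as np
from scipy.interpolate import RectBivariateSpline

xc = yc = np.linspace(0.0, 10.0, 5)                       # coarse 5x5 measurement grid (cm)
coarse = 1.0 + 0.02 * np.outer(np.sin(xc / 10 * np.pi), np.cos(yc / 10 * np.pi))   # relative irradiance

spline = RectBivariateSpline(xc, yc, coarse, kx=3, ky=3)  # bicubic surface through the coarse map

xf = np.linspace(2.0, 8.0, 121)                           # dense grid over an assumed 6x6 cm cell
yf = np.linspace(2.0, 8.0, 121)
fine = spline(xf, yf)
correction = 1.0 / fine.mean()                            # illustrative correction to Isc/Pmax
print(f"nonuniformity correction factor: {correction:.4f}")
```
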

  15. SU-E-T-598: The Effects of Arm Speed for Quality Assurance and Commissioning Measurements in Rectangular and Cylindrical Scanners

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bakhtiari, M; Schmitt, J

    2014-06-01

    Purpose: Cylindrical and rectangular scanning water tanks are examined with different scanning speeds to investigate the TG-106 criteria and the errors induced in the measurements. Methods: Beam profiles were measured in a depth of R50 for a low-energy electron beam (6 MeV) using rectangular and cylindrical tanks. The speeds of the measurements (arm movement) were varied in different profile measurements. Each profile was measured with a certain speed to obtain the average and standard deviation as a parameter for investigating the reproducibility and errors. Results: At arm speeds of ∼0.8 mm/s the errors were as large as 2% and 1% with rectangular and cylindrical tanks, respectively. The errors for electron beams and for photon beams in other depths were within the TG-106 criteria of 1% for both tank shapes. Conclusion: The measurements of low-energy electron beams in a depth of R50, as an extreme case scenario, are sensitive to the speed of the measurement arms for both rectangular and cylindrical tanks. The measurements in other depths, for electron beams and photon beams, with arm speeds of less than 1 cm/s are within the TG-106 criteria. An arm speed of 5 mm/s appeared to be optimal for fast and accurate measurements for both cylindrical and rectangular tanks.

  16. Traceability of On-Machine Tool Measurement: A Review

    PubMed Central

    Gomez-Acedo, Eneko; Kortaberria, Gorka; Olarra, Aitor

    2017-01-01

    Nowadays, errors during the manufacturing process of high value components are not acceptable in driving industries such as energy and transportation. Sectors such as aerospace, automotive, shipbuilding, nuclear power, large science facilities or wind power need complex and accurate components that demand close measurements and fast feedback into their manufacturing processes. New measuring technologies are already available in machine tools, including integrated touch probes and fast interface capabilities. They provide the possibility to measure the workpiece in-machine during or after its manufacture, maintaining the original setup of the workpiece and avoiding the manufacturing process from being interrupted to transport the workpiece to a measuring position. However, the traceability of the measurement process on a machine tool is not ensured yet and measurement data is still not fully reliable enough for process control or product validation. The scientific objective is to determine the uncertainty on a machine tool measurement and, therefore, convert it into a machine integrated traceable measuring process. For that purpose, an error budget should consider error sources such as the machine tools, components under measurement and the interactions between both of them. This paper reviews all those uncertainty sources, being mainly focused on those related to the machine tool, either on the process of geometric error assessment of the machine or on the technology employed to probe the measurand. PMID:28696358

  17. On the Reliability of Photovoltaic Short-Circuit Current Temperature Coefficient Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osterwald, Carl R.; Campanelli, Mark; Kelly, George J.

    2015-06-14

    The changes in short-circuit current of photovoltaic (PV) cells and modules with temperature are routinely modeled through a single parameter, the temperature coefficient (TC). This parameter is vital for the translation equations used in system sizing, yet in practice is very difficult to measure. In this paper, we discuss these inherent problems and demonstrate how they can introduce unacceptably large errors in PV ratings. A method for quantifying the spectral dependence of TCs is derived, and then used to demonstrate that databases of module parameters commonly contain values that are physically unreasonable. Possible ways to reduce measurement errors are also discussed.

  18. Evaluation and statistical inference for human connectomes.

    PubMed

    Pestilli, Franco; Yeatman, Jason D; Rokem, Ariel; Kay, Kendrick N; Wandell, Brian A

    2014-10-01

    Diffusion-weighted imaging coupled with tractography is currently the only method for in vivo mapping of human white-matter fascicles. Tractography takes diffusion measurements as input and produces the connectome, a large collection of white-matter fascicles, as output. We introduce a method to evaluate the evidence supporting connectomes. Linear fascicle evaluation (LiFE) takes any connectome as input and predicts diffusion measurements as output, using the difference between the measured and predicted diffusion signals to quantify the prediction error. We use the prediction error to evaluate the evidence that supports the properties of the connectome, to compare tractography algorithms and to test hypotheses about tracts and connections.

  19. SU-E-T-257: Output Constancy: Reducing Measurement Variations in a Large Practice Group

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hedrick, K; Fitzgerald, T; Miller, R

    2014-06-01

    Purpose: To standardize output constancy check procedures in a large medical physics practice group covering multiple sites, in order to identify and reduce small systematic errors caused by differences in equipment and the procedures of multiple physicists. Methods: A standardized machine output constancy check for both photons and electrons was instituted within the practice group in 2010. After conducting annual TG-51 measurements in water and adjusting the linac to deliver 1.00 cGy/MU at Dmax, an acrylic phantom (comparable at all sites) and PTW Farmer ion chamber are used to obtain monthly output constancy reference readings. From the collected charge reading, measurements of air pressure and temperature, and chamber Ndw and Pelec, a value we call the Kacrylic factor is determined, relating the chamber reading in acrylic to the dose in water with standard set-up conditions. This procedure easily allows for multiple equipment combinations to be used at any site. The Kacrylic factors and output results from all sites and machines are logged monthly in a central database and used to monitor trends in calibration and output. Results: The practice group consists of 19 sites, currently with 34 Varian and 8 Elekta linacs (24 Varian and 5 Elekta linacs in 2010). Over the past three years, the standard deviation of Kacrylic factors measured on all machines decreased by 20% for photons and high energy electrons as systematic errors were found and reduced. Low energy electrons showed very little change in the distribution of Kacrylic values. Small errors in linac beam data were found by investigating outlier Kacrylic values. Conclusion: While the use of acrylic phantoms introduces an additional source of error through small differences in depth and effective depth, the new standardized procedure eliminates potential sources of error from using many different phantoms and results in more consistent output constancy measurements.

  20. Seasonal variability of stratospheric methane: implications for constraining tropospheric methane budgets using total column observations

    NASA Astrophysics Data System (ADS)

    Saad, Katherine M.; Wunch, Debra; Deutscher, Nicholas M.; Griffith, David W. T.; Hase, Frank; De Mazière, Martine; Notholt, Justus; Pollard, David F.; Roehl, Coleen M.; Schneider, Matthias; Sussmann, Ralf; Warneke, Thorsten; Wennberg, Paul O.

    2016-11-01

    Global and regional methane budgets are markedly uncertain. Conventionally, estimates of methane sources are derived by bridging emissions inventories with atmospheric observations employing chemical transport models. The accuracy of this approach requires correctly simulating advection and chemical loss such that modeled methane concentrations scale with surface fluxes. When total column measurements are assimilated into this framework, modeled stratospheric methane introduces additional potential for error. To evaluate the impact of such errors, we compare Total Carbon Column Observing Network (TCCON) and GEOS-Chem total and tropospheric column-averaged dry-air mole fractions of methane. We find that the model's stratospheric contribution to the total column is insensitive to perturbations to the seasonality or distribution of tropospheric emissions or loss. In the Northern Hemisphere, we identify disagreement between the measured and modeled stratospheric contribution, which increases as the tropopause altitude decreases, and a temporal phase lag in the model's tropospheric seasonality driven by transport errors. Within the context of GEOS-Chem, we find that the errors in tropospheric advection partially compensate for the stratospheric methane errors, masking inconsistencies between the modeled and measured tropospheric methane. These seasonally varying errors alias into source attributions resulting from model inversions. In particular, we suggest that the tropospheric phase lag error leads to large misdiagnoses of wetland emissions in the high latitudes of the Northern Hemisphere.

  1. Nano-level instrumentation for analyzing the dynamic accuracy of a rolling element bearing.

    PubMed

    Yang, Z; Hong, J; Zhang, J; Wang, M Y; Zhu, Y

    2013-12-01

    The rotational performance of high-precision rolling bearings is fundamental to the overall accuracy of complex mechanical systems. A nano-level instrument to analyze the rotational accuracy of high-precision machine tool bearings under working conditions was developed. In this instrument, a high-precision (error motion < 0.15 μm) and high-stiffness (2600 N axial loading capacity) aerostatic spindle was applied to spin the test bearing. Operating conditions could be simulated effectively because of the large axial loading capacity. An air cylinder, controlled by a proportional pressure regulator, was applied to drive an air bearing so that precise axial forces are applied without contact. Apart from the axial loading and the rotation constraint, the five remaining degrees of freedom were completely unconstrained and uninfluenced by the instrument's structure. Dual capacitive displacement sensors with 10 nm resolution were applied to measure the error motion of the spindle using a double-probe error separation method. This enabled the separation of the spindle's error motion from the measurement results of the test bearing, which were obtained using two orthogonal laser displacement sensors with 5 nm resolution. Finally, a Lissajous figure was used to evaluate the non-repetitive run-out (NRRO) of the bearing at different axial forces and speeds. The measurement results at various axial loadings and speeds showed that the standard deviations of the measurements' repeatability and accuracy were less than 1% and 2%. Future studies will analyze the relationship between geometrical errors and NRRO, such as ball diameter differences and the geometrical errors in the grooves of the rings.

  2. Ensemble Kalman filters for dynamical systems with unresolved turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grooms, Ian, E-mail: grooms@cims.nyu.edu; Lee, Yoonsang; Majda, Andrew J.

    Ensemble Kalman filters are developed for turbulent dynamical systems where the forecast model does not resolve all the active scales of motion. Coarse-resolution models are intended to predict the large-scale part of the true dynamics, but observations invariably include contributions from both the resolved large scales and the unresolved small scales. The error due to the contribution of unresolved scales to the observations, called ‘representation’ or ‘representativeness’ error, is often included as part of the observation error, in addition to the raw measurement error, when estimating the large-scale part of the system. It is here shown how stochastic superparameterization (a multiscale method for subgridscale parameterization) can be used to provide estimates of the statistics of the unresolved scales. In addition, a new framework is developed wherein small-scale statistics can be used to estimate both the resolved and unresolved components of the solution. The one-dimensional test problem from dispersive wave turbulence used here is computationally tractable yet is particularly difficult for filtering because of the non-Gaussian extreme event statistics and substantial small scale turbulence: a shallow energy spectrum proportional to k^(-5/6) (where k is the wavenumber) results in two-thirds of the climatological variance being carried by the unresolved small scales. Because the unresolved scales contain so much energy, filters that ignore the representation error fail utterly to provide meaningful estimates of the system state. Inclusion of a time-independent climatological estimate of the representation error in a standard framework leads to inaccurate estimates of the large-scale part of the signal; accurate estimates of the large scales are only achieved by using stochastic superparameterization to provide evolving, large-scale dependent predictions of the small-scale statistics. Again, because the unresolved scales contain so much energy, even an accurate estimate of the large-scale part of the system does not provide an accurate estimate of the true state. By providing simultaneous estimates of both the large- and small-scale parts of the solution, the new framework is able to provide accurate estimates of the true system state.
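
    The point that representation error must enter the observation-error covariance can be sketched with a generic stochastic ensemble Kalman filter update (this is not the stochastic-superparameterization framework of the paper, and all numbers are invented): the observation error variance used in the analysis is the sum of the raw instrument variance and an estimate of the unresolved-scale variance.

```python
# Hedged sketch: stochastic EnKF analysis step with observation error = instrument + representation.
import numpy as np

def enkf_update(ensemble, obs, H, r_instrument, r_representation, rng):
    """Perturbed-observation EnKF update with an inflated observation-error variance."""
    r_total = r_instrument + r_representation            # add the unresolved-scale variance to R
    n_ens = ensemble.shape[1]
    X = ensemble - ensemble.mean(axis=1, keepdims=True)
    Y = H @ X
    P_xy = X @ Y.T / (n_ens - 1)
    P_yy = Y @ Y.T / (n_ens - 1) + r_total * np.eye(len(obs))
    K = P_xy @ np.linalg.inv(P_yy)
    perturbed = obs[:, None] + np.sqrt(r_total) * rng.standard_normal((len(obs), n_ens))
    return ensemble + K @ (perturbed - H @ ensemble)

rng = np.random.default_rng(4)
ens = 1.0 + 0.5 * rng.standard_normal((3, 40))   # 3 resolved state variables, 40 members
H = np.array([[1.0, 0.0, 0.0]])                  # the observation mixes resolved and unresolved scales
obs = np.array([1.8])
print(enkf_update(ens, obs, H, r_instrument=0.1, r_representation=0.3, rng=rng).mean(axis=1))
```
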

  3. A method on error analysis for large-aperture optical telescope control system

    NASA Astrophysics Data System (ADS)

    Su, Yanrui; Wang, Qiang; Yan, Fabao; Liu, Xiang; Huang, Yongmei

    2016-10-01

    For a large-aperture optical telescope, in contrast to the azimuth axis of the control system, arc-second-level jitters exist in the elevation axis under different working speeds, especially the low-speed mode used during acquisition, tracking and pointing. The jitters are closely related to the working speed of the elevation axis and reduce the accuracy and low-speed stability of the telescope. By collecting a large amount of measured data from the elevation axis, we analyze the jitters in the time domain, frequency domain and space domain. The relation between the jitter points, the commanded elevation speed and the corresponding space angle shows that the jitters behave as a periodic disturbance in the space domain, with a period of approximately 79.1″ in the corresponding space angle. We then simulated, analyzed and compared the influence on the elevation axis of possible disturbance sources, such as PWM power-stage output disturbance, torque (acceleration) disturbance, speed-feedback disturbance and position-feedback disturbance, and found that the spatially periodic disturbance persists. This led us to infer that the problem may lie in the angle measurement unit. The telescope employs a 24-bit photoelectric encoder, whose grating angular period can be calculated as 79.1016″, which corresponds to one period of the subdivision signal in the encoder system. This value is approximately equal to the spatial period of the jitters. Therefore, the elevation axis of the telescope is affected by subdivision errors, and the period of the subdivision error is identical to the encoder grating angular period. Through comprehensive consideration and mathematical analysis, it is determined that the DC component of the subdivision error causes the jitters, which is verified in practical engineering. The approach of analyzing error sources in the time domain, frequency domain and space domain respectively provides useful guidance for finding disturbance sources in large-aperture optical telescopes.

  4. Cosmological measurements with general relativistic galaxy correlations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raccanelli, Alvise; Montanari, Francesco; Durrer, Ruth

    We investigate the cosmological dependence and the constraining power of large-scale galaxy correlations, including all redshift distortions, wide-angle, lensing and gravitational potential effects on linear scales. We analyze the cosmological information present in the lensing convergence and in the gravitational potential terms describing the so-called "relativistic effects", and we find that, while smaller than the information contained in intrinsic galaxy clustering, it is not negligible. We investigate how neglecting them biases cosmological measurements performed by future spectroscopic and photometric large-scale surveys such as SKA and Euclid. We perform a Fisher analysis using the CLASS code, modified to include scale-dependent galaxy bias and redshift-dependent magnification and evolution bias. Our results show that neglecting relativistic terms, especially lensing convergence, introduces an error in the forecasted precision in measuring cosmological parameters of the order of a few tens of percent, in particular when measuring the matter content of the Universe and primordial non-Gaussianity parameters. The analysis suggests a possible substantial systematic error in cosmological parameter constraints. Therefore, we argue that radial correlations and integrated relativistic terms need to be taken into account when forecasting the constraining power of future large-scale number counts of galaxy surveys.
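
    The forecasting machinery referred to above is a standard Fisher analysis. A minimal, generic sketch of the final step is given below; it is not the authors' CLASS-based pipeline, and the derivative vectors and covariance are placeholders to be supplied by the survey model.

        import numpy as np

        def fisher_matrix(derivatives, cov):
            """derivatives: list of d(observable)/d(theta_i) vectors, one per parameter;
            cov: covariance matrix of the observables."""
            cinv = np.linalg.inv(cov)
            n = len(derivatives)
            F = np.zeros((n, n))
            for i in range(n):
                for j in range(n):
                    F[i, j] = derivatives[i] @ cinv @ derivatives[j]
            return F

        # marginalized 1-sigma forecast error on parameter i:
        # sigma_i = np.sqrt(np.linalg.inv(F)[i, i])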

  5. Hybrid Reynolds-Averaged/Large Eddy Simulation of a Cavity Flameholder; Assessment of Modeling Sensitivities

    NASA Technical Reports Server (NTRS)

    Baurle, R. A.

    2015-01-01

    Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. The cases simulated corresponded to those used to examine this flowfield experimentally using particle image velocimetry. A variety of turbulence models were used for the steady-state Reynolds-averaged simulations which included both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged / large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken to formally assess the performance of the hybrid Reynolds-averaged / large eddy simulation modeling approach in a flowfield of interest to the scramjet research community. The numerical errors were quantified for both the steady-state and scale-resolving simulations prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results showed a high degree of variability when comparing the predictions obtained from each turbulence model, with the non-linear eddy viscosity model (an explicit algebraic stress model) providing the most accurate prediction of the measured values. The hybrid Reynolds-averaged/large eddy simulation results were carefully scrutinized to ensure that even the coarsest grid had an acceptable level of resolution for large eddy simulation, and that the time-averaged statistics were acceptably accurate. The autocorrelation and its Fourier transform were the primary tools used for this assessment. The statistics extracted from the hybrid simulation strategy proved to be more accurate than the Reynolds-averaged results obtained using the linear eddy viscosity models. However, there was no predictive improvement noted over the results obtained from the explicit Reynolds stress model. Fortunately, the numerical error assessment at most of the axial stations used to compare with measurements clearly indicated that the scale-resolving simulations were improving (i.e. approaching the measured values) as the grid was refined. Hence, unlike a Reynolds-averaged simulation, the hybrid approach provides a mechanism to the end-user for reducing model-form errors.
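
    The convergence check described above rests on the autocorrelation of the resolved signal. A minimal sketch of estimating an integral time scale from a probe time series is shown below; it illustrates the general idea only and is not the diagnostics actually implemented in the flow solver.

        import numpy as np

        def integral_time_scale(u, dt):
            """Integral time scale of a statistically stationary signal u sampled at interval dt."""
            u = u - u.mean()
            acf = np.correlate(u, u, mode="full")[u.size - 1:]   # one-sided autocorrelation
            acf = acf / acf[0]                                   # normalize so rho(0) = 1
            # integrate up to the first zero crossing (a simple, common convention)
            zero = np.argmax(acf <= 0) if np.any(acf <= 0) else acf.size
            return dt * acf[:zero].sum()

        # a time-averaging window T contains roughly T / (2 * integral_time_scale)
        # independent samples, which bounds the statistical error of the mean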

  6. Reflective properties of randomly rough surfaces under large incidence angles.

    PubMed

    Qiu, J; Zhang, W J; Liu, L H; Hsu, P-f; Liu, L J

    2014-06-01

    The reflective properties of randomly rough surfaces at large incidence angles have attracted attention because of their potential applications in several areas of radiative heat transfer research. The main purpose of this work is to investigate the formation mechanism of the specular reflection peak of rough surfaces at large incidence angles. The bidirectional reflectance distribution function (BRDF) of rough aluminum surfaces with different roughnesses at different incidence angles is measured by a three-axis automated scatterometer. A validated and accurate computational model, the rigorous coupled-wave analysis (RCWA) method, is used to compare with and analyze the measured BRDF results. The RCWA results show the same specular-peak trend as the measurements. This paper mainly focuses on relative roughness in the range 0.16 < σ/λ < 5.35. As the relative roughness decreases, the specular peak enhancement increases dramatically and the scattering region shrinks significantly, especially at large incidence angles. Comparison of the RCWA and Rayleigh-criterion results shows that the relative error of the total integrated scatter increases with surface roughness at large incidence angles. In addition, the zero-order diffractive power calculated by RCWA and the reflectance calculated by the Fresnel equations are compared; the relative error declines sharply when the incidence angle is large and the roughness is small.
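
    For context, the total integrated scatter mentioned above is often approximated, for surfaces that satisfy the smooth-surface (Rayleigh) criterion, by the classical exponential relation below. This is a generic textbook estimate offered for illustration, not the RCWA computation used in the paper.

        import numpy as np

        def tis_estimate(sigma, wavelength, theta_i_deg):
            """Classical total-integrated-scatter estimate for a slightly rough surface."""
            theta = np.radians(theta_i_deg)
            return 1.0 - np.exp(-(4.0 * np.pi * sigma * np.cos(theta) / wavelength) ** 2)

        # increasing the incidence angle reduces the effective roughness sigma*cos(theta),
        # so more of the reflected power stays in the specular peak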

  7. Improving the twilight model for polar cap absorption nowcasts

    NASA Astrophysics Data System (ADS)

    Rogers, N. C.; Kero, A.; Honary, F.; Verronen, P. T.; Warrington, E. M.; Danskin, D. W.

    2016-11-01

    During solar proton events (SPE), energetic protons ionize the polar mesosphere causing HF radio wave attenuation, more strongly on the dayside where the effective recombination coefficient, αeff, is low. Polar cap absorption models predict the 30 MHz cosmic noise absorption, A, measured by riometers, based on real-time measurements of the integrated proton flux-energy spectrum, J. However, empirical models in common use cannot account for regional and day-to-day variations in the daytime and nighttime profiles of αeff(z) or the related sensitivity parameter, m = A/√J. Large prediction errors occur during twilight when m changes rapidly, and due to errors locating the rigidity cutoff latitude. Modeling the twilight change in m as a linear or Gauss error-function transition over a range of solar-zenith angles (χ_l < χ < χ_u) provides a better fit to measurements than selecting day or night αeff profiles based on the Earth-shadow height. Optimal model parameters were determined for several polar cap riometers for large SPEs in 1998-2005. The optimal χ_l parameter was found to be most variable, with smaller values (as low as 60°) post-sunrise compared with pre-sunset and with positive correlation between riometers over a wide area. Day and night values of m exhibited higher correlation for closely spaced riometers. A nowcast simulation is presented in which the rigidity boundary latitude and twilight model parameters are optimized by assimilating age-weighted measurements from 25 riometers. The technique reduces model bias, and root-mean-square errors are reduced by up to 30% compared with a model employing no riometer data assimilation.
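
    A minimal sketch of the error-function twilight transition described above is given below; the parameter names (m_day, m_night, chi_l, chi_u) are illustrative and would be fitted to riometer data as in the paper.

        import numpy as np
        from scipy.special import erf

        def sensitivity_m(chi_deg, m_day, m_night, chi_l, chi_u):
            """Smooth day-to-night transition of m = A / sqrt(J) with solar-zenith angle chi."""
            chi0 = 0.5 * (chi_l + chi_u)          # transition midpoint
            width = 0.5 * (chi_u - chi_l)         # transition half-width
            f = 0.5 * (1.0 + erf((chi_deg - chi0) / width))   # 0 on the dayside, 1 at night
            return m_day + (m_night - m_day) * f

        # predicted 30 MHz absorption for an integrated proton flux J (example parameters):
        # A = sensitivity_m(chi, m_day, m_night, 80.0, 100.0) * np.sqrt(J)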

  8. A novel validation and calibration method for motion capture systems based on micro-triangulation.

    PubMed

    Nagymáté, Gergely; Tuchband, Tamás; Kiss, Rita M

    2018-06-06

    Motion capture systems are widely used to measure human kinematics. Nevertheless, users must consider system errors when evaluating their results. Most validation techniques for these systems are based on relative distance and displacement measurements. In contrast, our study aimed to analyse the absolute volume accuracy of optical motion capture systems by means of engineering surveying reference measurement of the marker coordinates (uncertainty: 0.75 mm). The method is exemplified on an 18 camera OptiTrack Flex13 motion capture system. The absolute accuracy was defined by the root mean square error (RMSE) between the coordinates measured by the camera system and by engineering surveying (micro-triangulation). The original RMSE of 1.82 mm, dominated by scaling error, was reduced to 0.77 mm, while the correlation of the errors with their distance from the origin decreased from 0.855 to 0.209. A simpler but less accurate compensation method using tape measurements over large distances was also tested; it yielded a scaling compensation similar to that of the surveying method and of direct wand-size compensation by a high-precision 3D scanner. The presented validation methods can be less precise in some respects than previous techniques, but they address an error type that has not been and cannot be studied with the previous validation methods. Copyright © 2018 Elsevier Ltd. All rights reserved.

  9. Standard error of estimated average timber volume per acre under point sampling when trees are measured for volume on a subsample of all points.

    Treesearch

    Floyd A. Johnson

    1961-01-01

    This report assumes a knowledge of the principles of point sampling as described by Grosenbaugh, Bell and Alexander, and others. Whenever trees are counted at every point in a sample of points (large sample) and measured for volume at a portion (small sample) of these points, the sampling design could be called ratio double sampling. If the large...

  10. Errors introduced by dose scaling for relative dosimetry

    PubMed Central

    Watanabe, Yoichi; Hayashi, Naoki

    2012-01-01

    Some dosimeters require a relationship between detector signal and delivered dose. The relationship (characteristic curve or calibration equation) usually depends on the environment under which the dosimeters are manufactured or stored. To compensate for the difference in radiation response among different batches of dosimeters, the measured dose can be scaled by normalizing the measured dose to a specific dose. Such a procedure, often called “relative dosimetry”, allows us to skip the time‐consuming production of a calibration curve for each irradiation. In this study, the magnitudes of errors due to the dose scaling procedure were evaluated by using the characteristic curves of BANG3 polymer gel dosimeter, radiographic EDR2 films, and GAFCHROMIC EBT2 films. Several sets of calibration data were obtained for each type of dosimeters, and a calibration equation of one set of data was used to estimate doses of the other dosimeters from different batches. The scaled doses were then compared with expected doses, which were obtained by using the true calibration equation specific to each batch. In general, the magnitude of errors increased with increasing deviation of the dose scaling factor from unity. Also, the errors strongly depended on the difference in the shape of the true and reference calibration curves. For example, for the BANG3 polymer gel, of which the characteristic curve can be approximated with a linear equation, the error for a batch requiring a dose scaling factor of 0.87 was larger than the errors for other batches requiring smaller magnitudes of dose scaling, or scaling factors of 0.93 or 1.02. The characteristic curves of EDR2 and EBT2 films required nonlinear equations. With those dosimeters, errors larger than 5% were commonly observed in the dose ranges of below 50% and above 150% of the normalization dose. In conclusion, the dose scaling for relative dosimetry introduces large errors in the measured doses when a large dose scaling is applied, and this procedure should be applied with special care. PACS numbers: 87.56.Da, 06.20.Dk, 06.20.fb PMID:22955658

  11. Quantifying Data Quality for Clinical Trials Using Electronic Data Capture

    PubMed Central

    Nahm, Meredith L.; Pieper, Carl F.; Cunningham, Maureen M.

    2008-01-01

    Background Historically, only partial assessments of data quality have been performed in clinical trials, for which the most common method of measuring database error rates has been to compare the case report form (CRF) to database entries and count discrepancies. Importantly, errors arising from medical record abstraction and transcription are rarely evaluated as part of such quality assessments. Electronic Data Capture (EDC) technology has had a further impact, as paper CRFs typically leveraged for quality measurement are not used in EDC processes. Methods and Principal Findings The National Institute on Drug Abuse Treatment Clinical Trials Network has developed, implemented, and evaluated methodology for holistically assessing data quality on EDC trials. We characterize the average source-to-database error rate (14.3 errors per 10,000 fields) for the first year of use of the new evaluation method. This error rate was significantly lower than the average of published error rates for source-to-database audits, and was similar to CRF-to-database error rates reported in the published literature. We attribute this largely to an absence of medical record abstraction on the trials we examined, and to an outpatient setting characterized by less acute patient conditions. Conclusions Historically, medical record abstraction is the most significant source of error by an order of magnitude, and should be measured and managed during the course of clinical trials. Source-to-database error rates are highly dependent on the amount of structured data collection in the clinical setting and on the complexity of the medical record, dependencies that should be considered when developing data quality benchmarks. PMID:18725958

  12. Insights from Synthetic Star-forming Regions. II. Verifying Dust Surface Density, Dust Temperature, and Gas Mass Measurements With Modified Blackbody Fitting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koepferl, Christine M.; Robitaille, Thomas P.; Dale, James E., E-mail: koepferl@usm.lmu.de

    We use a large data set of realistic synthetic observations (produced in Paper I of this series) to assess how observational techniques affect the measurement of physical properties of star-forming regions. In this part of the series (Paper II), we explore the reliability of the measured total gas mass, dust surface density and dust temperature maps derived from modified blackbody fitting of synthetic Herschel observations. Our pixel-by-pixel analysis of the measured dust surface density and dust temperature reveals a worrisome error spread, especially close to star formation sites and in low-density regions, where for those “contaminated” pixels the surface densities can be under- or overestimated by up to three orders of magnitude. In light of this, we recommend treating the pixel-based results from this technique with caution in regions with active star formation. In regions of high background, typical of the inner Galactic plane, we are not able to recover reliable surface density maps of individual synthetic regions, since low-mass regions are lost in the far-infrared background. When measuring the total gas mass of regions in moderate background, we find that modified blackbody fitting works well (absolute error: +9%; −13%) up to 10 kpc distance (errors increase with distance). Commonly, the initial images are convolved to the largest common beam size, which smears contaminated pixels over large areas. The resulting information loss makes this commonly used technique less verifiable, as χ² values can no longer be used as a quality indicator of a fitted pixel. Our control measurements of the total gas mass (without the step of convolution to the largest common beam size) produce similar results (absolute error: +20%; −7%) while having much lower median errors, especially for the high-mass stellar feedback phase. In upcoming papers (Paper III; Paper IV) of this series we test the reliability of measured star formation rates with direct and indirect techniques.
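
    As background, a minimal sketch of the single-temperature, optically thin modified blackbody used in this kind of pixel-by-pixel fitting is given below. The opacity law, reference frequency and parameter values are illustrative assumptions, not the values adopted in the paper.

        import numpy as np
        from scipy.optimize import curve_fit

        h, c, k_B = 6.626e-34, 2.998e8, 1.381e-23      # SI constants

        def planck(nu, T):
            return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * T))

        def modified_blackbody(nu, log_sigma, T, beta=2.0, kappa0=0.1, nu0=1e12):
            """Optically thin dust emission: I_nu = kappa_nu * Sigma * B_nu(T)."""
            kappa = kappa0 * (nu / nu0) ** beta        # assumed power-law dust opacity
            return kappa * 10.0**log_sigma * planck(nu, T)

        # per-pixel fit to band fluxes (nu_bands, flux, flux_err assumed given):
        # popt, pcov = curve_fit(modified_blackbody, nu_bands, flux, sigma=flux_err, p0=[0.0, 20.0])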

  13. Procedures for experimental measurement and theoretical analysis of large plastic deformations

    NASA Technical Reports Server (NTRS)

    Morris, R. E.

    1974-01-01

    Theoretical equations are derived and analytical procedures are presented for the interpretation of experimental measurements of large plastic strains in the surface of a plate. Orthogonal gage lengths established on the metal surface are measured before and after deformation. The change in orthogonality after deformation is also measured. Equations yield the principal strains, deviatoric stresses in the absence of surface friction forces, true stresses if the stress normal to the surface is known, and the orientation angle between the deformed gage line and the principal stress-strain axes. Errors in the measurement of nominal strains greater than 3 percent are within engineering accuracy. Applications suggested for this strain measurement system include the large-strain-stress analysis of impact test models, burst tests of spherical or cylindrical pressure vessels, and to augment small-strain instrumentation tests where large strains are anticipated.

  14. The velocity and vorticity fields of the turbulent near wake of a circular cylinder

    NASA Technical Reports Server (NTRS)

    Wallace, James; Ong, Lawrence; Moin, Parviz

    1995-01-01

    The purpose of this research is to provide a detailed experimental database of velocity and vorticity statistics in the very near wake (x/d less than 10) of a circular cylinder at a Reynolds number of 3900. This study has determined that estimations of the streamwise velocity component in flow fields with large nonzero cross-stream components are not accurate. Similarly, X-wire measurements of the u and v velocity components in flows containing large w are also subject to errors due to binormal cooling. Using the look-up table (LUT) technique, and by calibrating the X-wire probe used here to include the range of expected angles of attack (+/- 40 deg), accurate X-wire measurements of the instantaneous u and v velocity components in the very near wake region of a circular cylinder have been accomplished. The approximate two-dimensionality of the present flow field was verified with four-wire probe measurements, and to some extent by the spanwise correlation measurements with the multisensor rake. Hence, binormal cooling errors in the present X-wire measurements are small.

  15. Large field distributed aperture laser semiactive angle measurement system design with imaging fiber bundles.

    PubMed

    Xu, Chunyun; Cheng, Haobo; Feng, Yunpeng; Jing, Xiaoli

    2016-09-01

    A type of laser semiactive angle measurement system is designed for target detecting and tracking. Only one detector is used to detect target location from four distributed aperture optical systems through a 4×1 imaging fiber bundle. A telecentric optical system in image space is designed to increase the efficiency of imaging fiber bundles. According to the working principle of a four-quadrant (4Q) detector, fiber diamond alignment is adopted between an optical system and a 4Q detector. The structure of the laser semiactive angle measurement system is, we believe, novel. Tolerance analysis is carried out to determine tolerance limits of manufacture and installation errors of the optical system. The performance of the proposed method is identified by computer simulations and experiments. It is demonstrated that the linear region of the system is ±12°, with measurement error of better than 0.2°. In general, this new system can be used with large field of view and high accuracy, providing an efficient, stable, and fast method for angle measurement in practical situations.

  16. New algorithms for motion error detection of numerical control machine tool by laser tracking measurement on the basis of GPS principle.

    PubMed

    Wang, Jindong; Chen, Peng; Deng, Yufen; Guo, Junjie

    2018-01-01

    As a three-dimensional measuring instrument, the laser tracker is widely used in industrial measurement. To avoid the influence of angle measurement error on the overall measurement accuracy, multi-station and time-sharing measurement with a laser tracker is introduced on the basis of the global positioning system (GPS) principle in this paper. For the proposed method, how to accurately determine the coordinates of each measuring point from a large amount of measured data is a critical issue. Taking the detection of the motion error of a numerical control machine tool as an example, the corresponding measurement algorithms are investigated thoroughly. By establishing the mathematical model of detecting the motion error of a machine tool with this method, an analytical algorithm for base station calibration and measuring point determination is deduced that does not require an initial iterative value. However, when the motion area of the machine tool lies in a 2D plane, the coefficient matrix of the base station calibration is singular, which produces a distorted result. In order to overcome the limitation of the original algorithm, an improved analytical algorithm is also derived. Meanwhile, the calibration accuracy of the base station with the improved algorithm is compared with that of the original analytical algorithm and of iterative algorithms such as the Gauss-Newton algorithm and the Levenberg-Marquardt algorithm. The experiment further verifies the feasibility and effectiveness of the improved algorithm. In addition, the different motion areas of the machine tool have a certain influence on the calibration accuracy of the base station, and the corresponding influence of measurement error on the calibration result of the base station is analyzed in terms of the condition number of the coefficient matrix.
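
    A minimal sketch of the measuring-point determination step, posed as a trilateration least-squares problem over the calibrated base stations, is given below. It illustrates the geometry only and is not the paper's analytical (non-iterative) algorithm; scipy's iterative solver stands in for it here.

        import numpy as np
        from scipy.optimize import least_squares

        def locate_point(stations, distances, x0=None):
            """stations: (n, 3) calibrated base-station coordinates;
            distances: (n,) absolute distances from each station to the measuring point."""
            if x0 is None:
                x0 = stations.mean(axis=0)
            def residuals(p):
                return np.linalg.norm(stations - p, axis=1) - distances
            return least_squares(residuals, x0).x

        # four or more non-coplanar stations give a well-conditioned problem; coplanar
        # stations make it singular, mirroring the 2D-motion degeneracy discussed above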

  17. New algorithms for motion error detection of numerical control machine tool by laser tracking measurement on the basis of GPS principle

    NASA Astrophysics Data System (ADS)

    Wang, Jindong; Chen, Peng; Deng, Yufen; Guo, Junjie

    2018-01-01

    As a three-dimensional measuring instrument, the laser tracker is widely used in industrial measurement. To avoid the influence of angle measurement error on the overall measurement accuracy, multi-station and time-sharing measurement with a laser tracker is introduced on the basis of the global positioning system (GPS) principle in this paper. For the proposed method, how to accurately determine the coordinates of each measuring point from a large amount of measured data is a critical issue. Taking the detection of the motion error of a numerical control machine tool as an example, the corresponding measurement algorithms are investigated thoroughly. By establishing the mathematical model of detecting the motion error of a machine tool with this method, an analytical algorithm for base station calibration and measuring point determination is deduced that does not require an initial iterative value. However, when the motion area of the machine tool lies in a 2D plane, the coefficient matrix of the base station calibration is singular, which produces a distorted result. In order to overcome the limitation of the original algorithm, an improved analytical algorithm is also derived. Meanwhile, the calibration accuracy of the base station with the improved algorithm is compared with that of the original analytical algorithm and of iterative algorithms such as the Gauss-Newton algorithm and the Levenberg-Marquardt algorithm. The experiment further verifies the feasibility and effectiveness of the improved algorithm. In addition, the different motion areas of the machine tool have a certain influence on the calibration accuracy of the base station, and the corresponding influence of measurement error on the calibration result of the base station is analyzed in terms of the condition number of the coefficient matrix.

  18. Using sediment 'fingerprints' to assess sediment-budget errors, north Halawa Valley, Oahu, Hawaii, 1991-92

    USGS Publications Warehouse

    Hill, B.R.; DeCarlo, E.H.; Fuller, C.C.; Wong, M.F.

    1998-01-01

    Reliable estimates of sediment-budget errors are important for interpreting sediment-budget results. Sediment-budget errors are commonly considered equal to sediment-budget imbalances, which may underestimate actual sediment-budget errors if they include compensating positive and negative errors. We modified the sediment 'fingerprinting' approach to qualitatively evaluate compensating errors in an annual (1991) fine (<63 μm) sediment budget for the North Halawa Valley, a mountainous, forested drainage basin on the island of Oahu, Hawaii, during construction of a major highway. We measured concentrations of aeolian quartz and 137Cs in sediment sources and fluvial sediments, and combined concentrations of these aerosols with the sediment budget to construct aerosol budgets. Aerosol concentrations were independent of the sediment budget, hence aerosol budgets were less likely than sediment budgets to include compensating errors. Differences between sediment-budget and aerosol-budget imbalances therefore provide a measure of compensating errors in the sediment budget. The sediment-budget imbalance equalled 25% of the fluvial fine-sediment load. Aerosol-budget imbalances were equal to 19% of the fluvial 137Cs load and 34% of the fluvial quartz load. The reasonably close agreement between sediment- and aerosol-budget imbalances indicates that compensating errors in the sediment budget were not large and that the sediment-budget imbalance is a reliable measure of sediment-budget error. We attribute at least one-third of the 1991 fluvial fine-sediment load to highway construction. Continued monitoring indicated that highway construction produced 90% of the fluvial fine-sediment load during 1992. Erosion of channel margins and attrition of coarse particles provided most of the fine sediment produced by natural processes. Hillslope processes contributed relatively minor amounts of sediment.

  19. Scoping a field experiment: error diagnostics of TRMM precipitation radar estimates in complex terrain as a basis for IPHEx2014

    NASA Astrophysics Data System (ADS)

    Duan, Y.; Wilson, A. M.; Barros, A. P.

    2014-10-01

    A diagnostic analysis of the space-time structure of error in Quantitative Precipitation Estimates (QPE) from the Precipitation Radar (PR) on the Tropical Rainfall Measurement Mission (TRMM) satellite is presented here in preparation for the Integrated Precipitation and Hydrology Experiment (IPHEx) in 2014. IPHEx is the first NASA ground-validation field campaign after the launch of the Global Precipitation Measurement (GPM) satellite. In anticipation of GPM, a science-grade high-density raingauge network was deployed at mid to high elevations in the Southern Appalachian Mountains, USA since 2007. This network allows for direct comparison between ground-based measurements from raingauges and satellite-based QPE (specifically, PR 2A25 V7 using 5 years of data 2008-2013). Case studies were conducted to characterize the vertical profiles of reflectivity and rain rate retrievals associated with large discrepancies with respect to ground measurements. The spatial and temporal distribution of detection errors (false alarm, FA, and missed detection, MD) and magnitude errors (underestimation, UND, and overestimation, OVR) for stratiform and convective precipitation are examined in detail toward elucidating the physical basis of retrieval error. The diagnostic error analysis reveals that detection errors are linked to persistent stratiform light rainfall in the Southern Appalachians, which explains the high occurrence of FAs throughout the year, as well as the diurnal MD maximum at midday in the cold season (fall and winter), and especially in the inner region. Although UND dominates the magnitude error budget, underestimation of heavy rainfall conditions accounts for less than 20% of the total consistent with regional hydrometeorology. The 2A25 V7 product underestimates low level orographic enhancement of rainfall associated with fog, cap clouds and cloud to cloud feeder-seeder interactions over ridges, and overestimates light rainfall in the valleys by large amounts, though this behavior is strongly conditioned by the coarse spatial resolution (5 km) of the terrain topography mask used to remove ground clutter effects. Precipitation associated with small-scale systems (< 25 km2) and isolated deep convection tends to be underestimated, which we attribute to non-uniform beam-filling effects due to spatial averaging of reflectivity at the PR resolution. Mixed precipitation events (i.e., cold fronts and snow showers) fall into OVR or FA categories, but these are also the types of events for which observations from standard ground-based raingauge networks are more likely subject to measurement uncertainty, that is raingauge underestimation errors due to under-catch and precipitation phase. Overall, the space-time structure of the errors shows strong links among precipitation, envelope orography, landform (ridge-valley contrasts), and local hydrometeorological regime that is strongly modulated by the diurnal cycle, pointing to three major error causes that are inter-related: (1) representation of concurrent vertically and horizontally varying microphysics; (2) non uniform beam filling (NUBF) effects and ambiguity in the detection of bright band position; and (3) spatial resolution and ground clutter correction.

  20. Scoping a field experiment: error diagnostics of TRMM precipitation radar estimates in complex terrain as a basis for IPHEx2014

    NASA Astrophysics Data System (ADS)

    Duan, Y.; Wilson, A. M.; Barros, A. P.

    2015-03-01

    A diagnostic analysis of the space-time structure of error in quantitative precipitation estimates (QPEs) from the precipitation radar (PR) on the Tropical Rainfall Measurement Mission (TRMM) satellite is presented here in preparation for the Integrated Precipitation and Hydrology Experiment (IPHEx) in 2014. IPHEx is the first NASA ground-validation field campaign after the launch of the Global Precipitation Measurement (GPM) satellite. In anticipation of GPM, a science-grade high-density raingauge network was deployed at mid to high elevations in the southern Appalachian Mountains, USA, since 2007. This network allows for direct comparison between ground-based measurements from raingauges and satellite-based QPE (specifically, PR 2A25 Version 7 using 5 years of data 2008-2013). Case studies were conducted to characterize the vertical profiles of reflectivity and rain rate retrievals associated with large discrepancies with respect to ground measurements. The spatial and temporal distribution of detection errors (false alarm, FA; missed detection, MD) and magnitude errors (underestimation, UND; overestimation, OVR) for stratiform and convective precipitation are examined in detail toward elucidating the physical basis of retrieval error. The diagnostic error analysis reveals that detection errors are linked to persistent stratiform light rainfall in the southern Appalachians, which explains the high occurrence of FAs throughout the year, as well as the diurnal MD maximum at midday in the cold season (fall and winter) and especially in the inner region. Although UND dominates the error budget, underestimation of heavy rainfall conditions accounts for less than 20% of the total, consistent with regional hydrometeorology. The 2A25 V7 product underestimates low-level orographic enhancement of rainfall associated with fog, cap clouds and cloud to cloud feeder-seeder interactions over ridges, and overestimates light rainfall in the valleys by large amounts, though this behavior is strongly conditioned by the coarse spatial resolution (5 km) of the topography mask used to remove ground-clutter effects. Precipitation associated with small-scale systems (< 25 km2) and isolated deep convection tends to be underestimated, which we attribute to non-uniform beam-filling effects due to spatial averaging of reflectivity at the PR resolution. Mixed precipitation events (i.e., cold fronts and snow showers) fall into OVR or FA categories, but these are also the types of events for which observations from standard ground-based raingauge networks are more likely subject to measurement uncertainty, that is raingauge underestimation errors due to undercatch and precipitation phase. Overall, the space-time structure of the errors shows strong links among precipitation, envelope orography, landform (ridge-valley contrasts), and a local hydrometeorological regime that is strongly modulated by the diurnal cycle, pointing to three major error causes that are inter-related: (1) representation of concurrent vertically and horizontally varying microphysics; (2) non-uniform beam filling (NUBF) effects and ambiguity in the detection of bright band position; and (3) spatial resolution and ground-clutter correction.

  1. Numerical analysis of the blade tip-timing signal of a fiber bundle sensor probe

    NASA Astrophysics Data System (ADS)

    Guo, Haotian; Duan, Fajie; Cheng, Zhonghai

    2015-03-01

    Blade tip-timing is the most effective method for online blade vibration measurement of large rotating machines like turbine engines. Fiber bundle sensors are utilized in tip-timing system to measure the arrival time of the blade. The model of the tip-timing signal of the fiber bundle sensor is established. Experiments are conducted and the results are in concordance with the model established. The rising speed of the tip-timing signal is analyzed. To minimize the tip-timing error, the effects of the clearance change between the sensor and the blade and the deflection of the tip surface are analyzed. Simulation results indicate that the variable gain amplifier, which amplifies the signals to a similar level, can eliminate the measurement error caused by the variation of the clearance between the sensor and blade. Increasing the clearance between the sensor and blade can reduce the measurement error introduced by deflection of the tip surface.

  2. Nano-level instrumentation for analyzing the dynamic accuracy of a rolling element bearing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Z.; Hong, J.; Zhang, J.

    2013-12-15

    The rotational performance of high-precision rolling bearings is fundamental to the overall accuracy of complex mechanical systems. A nano-level instrument to analyze the rotational accuracy of high-precision bearings of machine tools under working conditions was developed. In this instrument, a high-precision (error motion < 0.15 μm) and high-stiffness (2600 N axial loading capacity) aerostatic spindle was applied to spin the test bearing. Operating conditions could be simulated effectively because of the large axial loading capacity. An air cylinder, controlled by a proportional pressure regulator, was applied to drive an air bearing so that non-contact, precisely controlled axial forces could be applied. With the axial load applied and the rotation constrained, the five remaining degrees of freedom were completely unconstrained and uninfluenced by the instrument's structure. Dual capacitive displacement sensors with 10 nm resolution were applied to measure the error motion of the spindle using a double-probe error separation method. This enabled the separation of the spindle's error motion from the measurement results of the test bearing, which were measured using two orthogonal laser displacement sensors with 5 nm resolution. Finally, a Lissajous figure was used to evaluate the non-repetitive run-out (NRRO) of the bearing at different axial forces and speeds. The measurement results at various axial loadings and speeds showed that the standard deviations of the measurements’ repeatability and accuracy were less than 1% and 2%, respectively. Future studies will analyze the relationship between geometrical errors and NRRO, such as the ball diameter differences and the geometrical errors in the grooves of the rings.

  3. Shared and unshared exposure measurement error in occupational cohort studies and their effects on statistical inference in proportional hazards models.

    PubMed

    Hoffmann, Sabine; Laurier, Dominique; Rage, Estelle; Guihenneuc, Chantal; Ancelet, Sophie

    2018-01-01

    Exposure measurement error represents one of the most important sources of uncertainty in epidemiology. When exposure uncertainty is not or only poorly accounted for, it can lead to biased risk estimates and a distortion of the shape of the exposure-response relationship. In occupational cohort studies, the time-dependent nature of exposure and changes in the method of exposure assessment may create complex error structures. When a method of group-level exposure assessment is used, individual worker practices and the imprecision of the instrument used to measure the average exposure for a group of workers may give rise to errors that are shared between workers, within workers or both. In contrast to unshared measurement error, the effects of shared errors remain largely unknown. Moreover, exposure uncertainty and magnitude of exposure are typically highest for the earliest years of exposure. We conduct a simulation study based on exposure data of the French cohort of uranium miners to compare the effects of shared and unshared exposure uncertainty on risk estimation and on the shape of the exposure-response curve in proportional hazards models. Our results indicate that uncertainty components shared within workers cause more bias in risk estimation and a more severe attenuation of the exposure-response relationship than unshared exposure uncertainty or exposure uncertainty shared between individuals. These findings underline the importance of careful characterisation and modeling of exposure uncertainty in observational studies.
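
    To make the error structures concrete, the sketch below generates unshared, within-worker-shared and between-worker-shared multiplicative errors; it is purely illustrative and is not the authors' dosimetry-based simulation (error magnitudes are arbitrary).

        import numpy as np

        rng = np.random.default_rng(0)
        n_workers, n_years = 1000, 10
        true_exposure = rng.lognormal(0.0, 1.0, (n_workers, n_years))

        # unshared (classical) error: an independent factor for every worker-year
        unshared = true_exposure * rng.lognormal(0.0, 0.5, (n_workers, n_years))
        # error shared within workers: one factor per worker, applied to all of that worker's years
        shared_within = true_exposure * rng.lognormal(0.0, 0.5, (n_workers, 1))
        # error shared between workers: one factor per year, applied to every worker
        shared_between = true_exposure * rng.lognormal(0.0, 0.5, (1, n_years))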

  4. Shared and unshared exposure measurement error in occupational cohort studies and their effects on statistical inference in proportional hazards models

    PubMed Central

    Laurier, Dominique; Rage, Estelle

    2018-01-01

    Exposure measurement error represents one of the most important sources of uncertainty in epidemiology. When exposure uncertainty is not or only poorly accounted for, it can lead to biased risk estimates and a distortion of the shape of the exposure-response relationship. In occupational cohort studies, the time-dependent nature of exposure and changes in the method of exposure assessment may create complex error structures. When a method of group-level exposure assessment is used, individual worker practices and the imprecision of the instrument used to measure the average exposure for a group of workers may give rise to errors that are shared between workers, within workers or both. In contrast to unshared measurement error, the effects of shared errors remain largely unknown. Moreover, exposure uncertainty and magnitude of exposure are typically highest for the earliest years of exposure. We conduct a simulation study based on exposure data of the French cohort of uranium miners to compare the effects of shared and unshared exposure uncertainty on risk estimation and on the shape of the exposure-response curve in proportional hazards models. Our results indicate that uncertainty components shared within workers cause more bias in risk estimation and a more severe attenuation of the exposure-response relationship than unshared exposure uncertainty or exposure uncertainty shared between individuals. These findings underline the importance of careful characterisation and modeling of exposure uncertainty in observational studies. PMID:29408862

  5. An automated inner dimensional measurement system based on a laser displacement sensor for long-stepped pipes.

    PubMed

    Zhang, Fumin; Qu, Xinghua; Ouyang, Jianfei

    2012-01-01

    A novel measurement prototype based on a mobile vehicle that carries a laser scanning sensor is proposed. The prototype is intended for the automated measurement of the interior 3D geometry of large-diameter long-stepped pipes. The laser displacement sensor, which has a small measurement range, is mounted on an extended arm of known length. It is scanned to improve the measurement accuracy for large-sized pipes. A fixing mechanism based on two sections is designed to ensure that the stepped pipe is concentric with the axis of rotation of the system. Data are acquired in a cylindrical coordinate system and fitted to a circle to determine the diameter. Systematic errors, covering arm-length, tilt, and offset errors, are analyzed and calibrated. The proposed system is applied to sample parts and the results are discussed to verify its effectiveness. This technique measures a diameter of 600 mm with an uncertainty of 0.02 mm at a 95% confidence probability. A repeatability test is performed to examine precision, which is 1.1 μm. A laser tracker is used to verify the measurement accuracy of the system, which is evaluated as 9 μm within a diameter of 600 mm.
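
    A minimal sketch of fitting the scanned profile points to a circle to obtain the diameter is given below; an algebraic (Kasa-style) least-squares fit is assumed for illustration, since the paper does not specify the fitting algorithm.

        import numpy as np

        def fit_circle(x, y):
            """Algebraic least-squares circle fit; returns the center (a, b) and the diameter."""
            A = np.column_stack([2.0 * x, 2.0 * y, np.ones_like(x)])
            rhs = x**2 + y**2
            (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
            r = np.sqrt(c + a**2 + b**2)
            return (a, b), 2.0 * r

        # points acquired in cylindrical coordinates (theta, rho) are converted first:
        # x = rho * np.cos(theta); y = rho * np.sin(theta)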

  6. Observation and analysis of high-speed human motion with frequent occlusion in a large area

    NASA Astrophysics Data System (ADS)

    Wang, Yuru; Liu, Jiafeng; Liu, Guojun; Tang, Xianglong; Liu, Peng

    2009-12-01

    The use of computer vision technology in collecting and analyzing statistics during sports matches or training sessions is expected to provide valuable information for tactics improvement. However, the measurements published in the literature so far are either unreliably documented to be used in training planning due to their limitations or unsuitable for studying high-speed motion in large area with frequent occlusions. A sports annotation system is introduced in this paper for tracking high-speed non-rigid human motion over a large playing area with the aid of motion camera, taking short track speed skating competitions as an example. The proposed system is composed of two sub-systems: precise camera motion compensation and accurate motion acquisition. In the video registration step, a distinctive invariant point feature detector (probability density grads detector) and a global parallax based matching points filter are used, to provide reliable and robust matching across a large range of affine distortion and illumination change. In the motion acquisition step, a two regions' relationship constrained joint color model and Markov chain Monte Carlo based joint particle filter are emphasized, by dividing the human body into two relative key regions. Several field tests are performed to assess measurement errors, including comparison to popular algorithms. With the help of the system presented, the system obtains position data on a 30 m × 60 m large rink with root-mean-square error better than 0.3975 m, velocity and acceleration data with absolute error better than 1.2579 m s-1 and 0.1494 m s-2, respectively.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, T. S.; DePoy, D. L.; Marshall, J. L.

    Here, we report that meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%–2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. In conclusion, the residual after correction is less than 0.3%. Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, T. S.; DePoy, D. L.; Marshall, J. L.

    Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%–2% by calibrating the survey’s stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. The residual after correction is less than 0.3%. Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.
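
    The chromatic error can be pictured as a synthetic-photometry exercise: the magnitude of a source computed through the per-exposure passband is compared with the magnitude computed through the natural-system passband, relative to a flat-spectrum calibrator. The sketch below is a generic illustration of this idea, not the DES calibration code.

        import numpy as np

        def _integrate(y, x):
            # simple trapezoidal integration
            return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

        def synthetic_mag(wl, flux, throughput):
            # broadband magnitude with photon-counting weighting (flux * throughput * wavelength);
            # the arbitrary zero point cancels in the differences taken below
            return -2.5 * np.log10(_integrate(flux * throughput * wl, wl))

        def chromatic_offset(wl, sed, S_natural, S_exposure):
            """Source-color-dependent magnitude shift of the per-exposure passband relative
            to the natural system, referenced to a flat-spectrum calibration source."""
            flat = np.ones_like(wl)
            d_src = synthetic_mag(wl, sed, S_exposure) - synthetic_mag(wl, sed, S_natural)
            d_flat = synthetic_mag(wl, flat, S_exposure) - synthetic_mag(wl, flat, S_natural)
            return d_src - d_flat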

  9. Feasibility, strategy, methodology, and analysis of probe measurements in plasma under high gas pressure

    NASA Astrophysics Data System (ADS)

    Demidov, V. I.; Koepke, M. E.; Kurlyandskaya, I. P.; Malkov, M. A.

    2018-02-01

    This paper reviews existing theories for interpreting probe measurements of electron distribution functions (EDF) at high gas pressure when collisions of electrons with atoms and/or molecules near the probe are pervasive. An explanation of whether or not the measurements are realizable and reliable, an enumeration of the most common sources of measurement error, and an outline of proper probe-experiment design elements that inherently limit or avoid error is presented. Additionally, we describe recent expanded plasma-condition compatibility for EDF measurement, including in applications of large wall probe plasma diagnostics. This summary of the authors’ experiences gained over decades of practicing and developing probe diagnostics is intended to inform, guide, suggest, and detail the advantages and disadvantages of probe application in plasma research.

  10. Metrics for Business Process Models

    NASA Astrophysics Data System (ADS)

    Mendling, Jan

    Up until now, there has been little research on why people introduce errors in real-world business process models. In a more general context, Simon [404] points to the limitations of cognitive capabilities and concludes that humans act rationally only to a certain extent. Concerning modeling errors, this argument would imply that human modelers lose track of the interrelations of large and complex models due to their limited cognitive capabilities and introduce errors that they would not insert in a small model. A recent study by Mendling et al. [275] explores to what extent certain complexity metrics of business process models have the potential to serve as error determinants. The authors conclude that complexity indeed appears to have an impact on error probability. Before we can test such a hypothesis in a more general setting, we have to establish an understanding of how we can define determinants that drive error probability and how we can measure them.

  11. Characterization of in Band Stray Light in SBUV-2 Instruments

    NASA Technical Reports Server (NTRS)

    Huang, L. K.; DeLand, M. T.; Taylor, S. L.; Flynn, L. E.

    2014-01-01

    Significant in-band stray light (IBSL) error at solar zenith angle (SZA) values larger than 77deg near sunset in 4 SBUV/2 (Solar Backscattered Ultraviolet) instruments, on board the NOAA-14, 17, 18 and 19 satellites, has been characterized. The IBSL error is caused by large surface reflection and scattering of the air-gapped depolarizer in front of the instrument's monochromator aperture. The source of the IBSL error is direct solar illumination of instrument components near the aperture rather than from earth shine. The IBSL contamination at 273 nm can reach 40% of earth radiance near sunset, which results in as much as a 50% error in the retrieved ozone from the upper stratosphere. We have analyzed SBUV/2 albedo measurements on both the dayside and nightside to develop an empirical model for the IBSL error. This error has been corrected in the V8.6 SBUV/2 ozone retrieval.

  12. Comparison of geodetic and glaciological mass-balance techniques, Gulkana Glacier, Alaska, U.S.A

    USGS Publications Warehouse

    Cox, L.H.; March, R.S.

    2004-01-01

    The net mass balance on Gulkana Glacier, Alaska, U.S.A., has been measured since 1966 by the glaciological method, in which seasonal balances are measured at three index sites and extrapolated over large areas of the glacier. Systematic errors can accumulate linearly with time in this method. Therefore, the geodetic balance, in which errors are less time-dependent, was calculated for comparison with the glaciological method. Digital elevation models of the glacier in 1974, 1993 and 1999 were prepared using aerial photographs, and geodetic balances were computed, giving −6.0±0.7 m w.e. from 1974 to 1993 and −11.8±0.7 m w.e. from 1974 to 1999. These balances are compared with the glaciological balances over the same intervals, which were −5.8±0.9 and −11.2±1.0 m w.e. respectively; both balances show that the thinning rate tripled in the 1990s. These cumulative balances differ by <6%. For this close agreement, the glaciologically measured mass balance of Gulkana Glacier must be largely free of systematic errors and be based on a time-variable area-altitude distribution, and the photography used in the geodetic method must have enough contrast to enable accurate photogrammetry.

  13. Utilizing photon number parity measurements to demonstrate quantum computation with cat-states in a cavity

    NASA Astrophysics Data System (ADS)

    Petrenko, A.; Ofek, N.; Vlastakis, B.; Sun, L.; Leghtas, Z.; Heeres, R.; Sliwa, K. M.; Mirrahimi, M.; Jiang, L.; Devoret, M. H.; Schoelkopf, R. J.

    2015-03-01

    Realizing a working quantum computer requires overcoming the many challenges that come with coupling large numbers of qubits to perform logical operations. These include improving coherence times, achieving high gate fidelities, and correcting for the inevitable errors that will occur throughout the duration of an algorithm. While impressive progress has been made in all of these areas, the difficulty of combining these ingredients to demonstrate an error-protected logical qubit, comprised of many physical qubits, still remains formidable. With its large Hilbert space, superior coherence properties, and single dominant error channel (single photon loss), a superconducting 3D resonator acting as a resource for a quantum memory offers a hardware-efficient alternative to multi-qubit codes [Leghtas et al., PRL 2013]. Here we build upon recent work on cat-state encoding [Vlastakis et al., Science 2013] and photon-parity jumps [Sun et al., 2014] by exploring the effects of sequential measurements on a cavity state. Employing a transmon qubit dispersively coupled to two superconducting resonators in a cQED architecture, we explore further the application of parity measurements to characterizing such a hybrid qubit/cat state architecture. In so doing, we demonstrate the promise of integrating cat states as central constituents of future quantum codes.

  14. Regression dilution bias: tools for correction methods and sample size calculation.

    PubMed

    Berglund, Lars

    2012-08-01

    Random errors in measurement of a risk factor will introduce downward bias of an estimated association to a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies with emphasis on selection of individuals for a repeated measurement, assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is a continuous variable. Also, we describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. Also, we supply programs for estimation of the number of individuals needed in the reliability study and for choice of its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
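
    A minimal sketch of the basic slope correction from a reliability study with two repeated measurements per subject is given below; the article's software tools handle more general designs, so this is an illustration only.

        import numpy as np

        def corrected_slope(x_main, y_main, x_rep1, x_rep2):
            """Correct a simple linear-regression slope for regression dilution bias."""
            # naive (attenuated) slope from the main study
            beta_obs = np.cov(x_main, y_main)[0, 1] / np.var(x_main, ddof=1)
            # reliability ratio estimated from repeated measurements of the risk factor
            between = np.cov(x_rep1, x_rep2)[0, 1]                            # approximates the true-value variance
            total = 0.5 * (np.var(x_rep1, ddof=1) + np.var(x_rep2, ddof=1))   # true + error variance
            reliability = between / total
            return beta_obs / reliability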

  15. Design Optimization for the Measurement Accuracy Improvement of a Large Range Nanopositioning Stage

    PubMed Central

    Torralba, Marta; Yagüe-Fabra, José Antonio; Albajez, José Antonio; Aguilar, Juan José

    2016-01-01

    Both an accurate machine design and an adequate metrology loop definition are critical factors when precision positioning is a key issue for the final system performance. This article discusses the error budget methodology as an advantageous technique to improve the measurement accuracy of a 2D long-range stage during its design phase. The nanopositioning platform NanoPla is presented here. Its specifications (e.g., an XY travel range of 50 mm × 50 mm and sub-micrometric accuracy) and some novel design solutions (e.g., a three-layer, two-stage architecture) are described. Once the prototype is defined, an error analysis is performed to propose design improvements. Then, the metrology loop of the system is mathematically modelled to define the propagation of the different error sources. Several simplifications and design hypotheses are justified and validated, including the assumption of rigid-body behavior, which is demonstrated by a finite element analysis verification. The different error sources and their estimated contributions are enumerated in order to conclude with the final error values obtained from the error budget. The measurement deviations obtained demonstrate the important influence of the working environmental conditions, the flatness error of the plane mirror reflectors and the accurate manufacture and assembly of the components forming the metrological loop. Thus, a temperature control of ±0.1 °C results in an acceptable maximum positioning error for the developed NanoPla stage, i.e., 41 nm, 36 nm and 48 nm in the X-, Y- and Z-axes, respectively. PMID:26761014

  16. The accuracy of self-reported pregnancy-related weight: a systematic review.

    PubMed

    Headen, I; Cohen, A K; Mujahid, M; Abrams, B

    2017-03-01

    Self-reported maternal weight is error-prone, and the context of pregnancy may impact error distributions. This systematic review summarizes error in self-reported weight across pregnancy and assesses implications for bias in associations between pregnancy-related weight and birth outcomes. We searched PubMed and Google Scholar through November 2015 for peer-reviewed articles reporting accuracy of self-reported, pregnancy-related weight at four time points: prepregnancy, delivery, over gestation and postpartum. Included studies compared maternal self-report to anthropometric measurement or medical report of weights. Sixty-two studies met inclusion criteria. We extracted data on magnitude of error and misclassification. We assessed impact of reporting error on bias in associations between pregnancy-related weight and birth outcomes. Women underreported prepregnancy (PPW: -2.94 to -0.29 kg) and delivery weight (DW: -1.28 to 0.07 kg), and over-reported gestational weight gain (GWG: 0.33 to 3 kg). Magnitude of error was small, ranged widely, and varied by prepregnancy weight class and race/ethnicity. Misclassification was moderate (PPW: 0-48.3%; DW: 39.0-49.0%; GWG: 16.7-59.1%), and overestimated some estimates of population prevalence. However, reporting error did not largely bias associations between pregnancy-related weight and birth outcomes. Although measured weight is preferable, self-report is a cost-effective and practical measurement approach. Future researchers should develop bias correction techniques for self-reported pregnancy-related weight. © 2017 World Obesity Federation.

  17. Evaluation of resolution and periodic errors of a flatbed scanner used for digitizing spectroscopic photographic plates

    PubMed Central

    Wyatt, Madison; Nave, Gillian

    2017-01-01

    We evaluated the use of a commercial flatbed scanner for digitizing photographic plates used for spectroscopy. The scanner has a bed size of 420 mm by 310 mm and a pixel size of about 0.0106 mm. Our tests show that the closest line pairs that can be resolved with the scanner are 0.024 mm apart, only slightly larger than the Nyquist resolution of 0.021 mm expected by the 0.0106 mm pixel size. We measured periodic errors in the scanner using both a calibrated length scale and a photographic plate. We find no noticeable periodic errors in the direction parallel to the linear detector in the scanner, but errors with an amplitude of 0.03 mm to 0.05 mm in the direction perpendicular to the detector. We conclude that large periodic errors in measurements of spectroscopic plates using flatbed scanners can be eliminated by scanning the plates with the dispersion direction parallel to the linear detector by placing the plate along the short side of the scanner. PMID:28463262

  18. An improved procedure for the validation of satellite-based precipitation estimates

    NASA Astrophysics Data System (ADS)

    Tang, Ling; Tian, Yudong; Yan, Fang; Habib, Emad

    2015-09-01

    The objective of this study is to propose and test a new procedure to improve the validation of remote-sensing, high-resolution precipitation estimates. Our recent studies show that many conventional validation measures do not accurately capture the unique error characteristics of precipitation estimates, and thus do not adequately inform either data producers or users. The proposed new validation procedure has two steps: 1) an error decomposition approach to separate the total retrieval error into three independent components: hit error, false precipitation and missed precipitation; and 2) further analysis of the hit error based on a multiplicative error model. In the multiplicative error model, the error features are captured by three model parameters. In this way, the multiplicative error model separates systematic and random errors, leading to more accurate quantification of the uncertainties. The proposed procedure is used to quantitatively evaluate the two recent versions (Versions 6 and 7) of TRMM's Multi-satellite Precipitation Analysis (TMPA) real-time and research product suite (3B42 and 3B42RT) for seven years (2005-2011) over the continental United States (CONUS). The gauge-based National Centers for Environmental Prediction (NCEP) Climate Prediction Center (CPC) near-real-time daily precipitation analysis is used as the reference. In addition, the radar-based NCEP Stage IV precipitation data are also model-fitted to verify the effectiveness of the multiplicative error model. The results show that the winter total bias is dominated by missed precipitation over the west coastal areas and the Rocky Mountains, and by false precipitation over large areas of the Midwest. The summer total bias comes largely from the hit bias in the central US. Meanwhile, the new version (V7) tends to produce more rainfall at the higher rain rates, which moderates the significant underestimation exhibited in the previous V6 products. Moreover, the error analysis from the multiplicative error model provides a clear and concise picture of the systematic and random errors, with both versions of 3B42RT having higher errors, to varying degrees, than their research (post-real-time) counterparts. The new V7 algorithm shows clear improvements in reducing random errors in both winter and summer seasons compared to its predecessor V6. Stage IV, as expected, surpasses the satellite-based datasets in all the metrics over CONUS. Based on the results, we recommend that the new procedure be adopted for routine validation of satellite-based precipitation datasets, and we expect the procedure to work effectively for the higher resolution data to be produced in the Global Precipitation Measurement (GPM) era.
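
    A minimal sketch of the two-step idea, assuming paired satellite/reference daily rain fields (this is an illustration, not the authors' code): totals are split into hit, missed and false components, and the hit error is fitted with a log-linear multiplicative model whose three parameters are alpha, beta and the residual spread.

        import numpy as np

        def decompose(sat, ref, thresh=0.0):
            """Split the total error into hit, missed and false components."""
            sat, ref = np.asarray(sat, float), np.asarray(ref, float)
            hit = (sat > thresh) & (ref > thresh)
            missed = (sat <= thresh) & (ref > thresh)
            false = (sat > thresh) & (ref <= thresh)
            return {"hit_error": np.sum(sat[hit] - ref[hit]),
                    "missed": -np.sum(ref[missed]),
                    "false": np.sum(sat[false])}

        def fit_multiplicative(sat, ref, thresh=0.0):
            """Fit ln(sat) = ln(alpha) + beta*ln(ref) + eps over hit pixels."""
            sat, ref = np.asarray(sat, float), np.asarray(ref, float)
            hit = (sat > thresh) & (ref > thresh)
            beta, ln_alpha = np.polyfit(np.log(ref[hit]), np.log(sat[hit]), 1)
            resid = np.log(sat[hit]) - (ln_alpha + beta * np.log(ref[hit]))
            return np.exp(ln_alpha), beta, resid.std()   # alpha, beta, sigma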

  19. State estimation for autopilot control of small unmanned aerial vehicles in windy conditions

    NASA Astrophysics Data System (ADS)

    Poorman, David Paul

    The use of small unmanned aerial vehicles (UAVs) in both the military and civil realms is growing. This is largely due to the proliferation of inexpensive sensors and the increase in capability of small computers that has stemmed from the personal electronic device market. Methods for performing accurate state estimation for large-scale aircraft have been well known and understood for decades, and they usually involve a complex array of expensive, high-accuracy sensors. Performing accurate state estimation for small unmanned aircraft is a newer area of study and often involves adapting known state estimation methods to small UAVs. State estimation for small UAVs can be more difficult than for larger UAVs because small UAVs employ limited sensor suites to keep costs down and because they are more susceptible to wind than large aircraft. The purpose of this research is to evaluate the ability of existing state estimation methods for small UAVs to accurately capture the states of the aircraft that are necessary for autopilot control of the aircraft in a Dryden wind field. The research begins by showing which aircraft states are necessary for autopilot control in Dryden wind. Then two state estimation methods that employ only accelerometer, gyro, and GPS measurements are introduced. The first method uses assumptions on aircraft motion to directly solve for attitude information and smooth GPS data, while the second method integrates sensor data to propagate estimates between GPS measurements and then corrects those estimates with GPS information. The performance of both methods is analyzed with and without Dryden wind, in straight and level flight, in a coordinated turn, and in a wings-level ascent. It is shown that in zero wind, the first method produces significant steady-state attitude errors in both a coordinated turn and a wings-level ascent. In Dryden wind, it produces large noise on the estimates of its attitude states, and has a non-zero mean error that increases when gyro bias is increased. The second method is shown not to exhibit, in the tested scenarios, any steady-state error inherent to its design. The second method can correct for attitude errors that arise from both integration error and gyro bias states, but it suffers from a lack of attitude-error observability. The attitude errors are shown to be more observable in wind, but the increased integration error in wind outweighs the increase in attitude corrections that such increased observability brings, resulting in larger attitude errors in wind. Overall, this work highlights many technical deficiencies of both of these state estimation methods that could be improved upon in the future to enhance state estimation for small UAVs in windy conditions.

  20. An audit strategy for time-to-event outcomes measured with error: application to five randomized controlled trials in oncology.

    PubMed

    Dodd, Lori E; Korn, Edward L; Freidlin, Boris; Gu, Wenjuan; Abrams, Jeffrey S; Bushnell, William D; Canetta, Renzo; Doroshow, James H; Gray, Robert J; Sridhara, Rajeshwari

    2013-10-01

    Measurement error in time-to-event end points complicates interpretation of treatment effects in clinical trials. Non-differential measurement error is unlikely to produce large bias [1]. When error depends on treatment arm, bias is of greater concern. Blinded-independent central review (BICR) of all images from a trial is commonly undertaken to mitigate differential measurement-error bias that may be present in hazard ratios (HRs) based on local evaluations. Similar BICR and local evaluation HRs may provide reassurance about the treatment effect, but BICR adds considerable time and expense to trials. We describe a BICR audit strategy [2] and apply it to five randomized controlled trials to evaluate its use and to provide practical guidelines. The strategy requires BICR on a subset of study subjects, rather than a complete-case BICR, and makes use of an auxiliary-variable estimator. When the effect size is relatively large, the method provides a substantial reduction in the size of the BICRs. In a trial with 722 participants and a HR of 0.48, an average audit of 28% of the data was needed and always confirmed the treatment effect as assessed by local evaluations. More moderate effect sizes and/or smaller trial sizes required larger proportions of audited images, ranging from 57% to 100% for HRs ranging from 0.55 to 0.77 and sample sizes between 209 and 737. The method is developed for a simple random sample of study subjects. In studies with low event rates, more efficient estimation may result from sampling individuals with events at a higher rate. The proposed strategy can greatly decrease the costs and time associated with BICR, by reducing the number of images undergoing review. The savings will depend on the underlying treatment effect and trial size, with larger treatment effects and larger trials requiring smaller proportions of audited data.

  1. A map overlay error model based on boundary geometry

    USGS Publications Warehouse

    Gaeuman, D.; Symanzik, J.; Schmidt, J.C.

    2005-01-01

    An error model for quantifying the magnitudes and variability of errors generated in the areas of polygons during spatial overlay of vector geographic information system layers is presented. Numerical simulation of polygon boundary displacements was used to propagate coordinate errors to spatial overlays. The model departs from most previous error models in that it incorporates spatial dependence of coordinate errors at the scale of the boundary segment. It can be readily adapted to match the scale of error-boundary interactions responsible for error generation on a given overlay. The area of error generated by overlay depends on the sinuosity of polygon boundaries, as well as the magnitude of the coordinate errors on the input layers. Asymmetry in boundary shape has relatively little effect on error generation. Overlay errors are affected by real differences in boundary positions on the input layers, as well as errors in the boundary positions. Real differences between input layers tend to compensate for much of the error generated by coordinate errors. Thus, the area of change measured on an overlay layer produced by the XOR overlay operation will be more accurate if the area of real change depicted on the overlay is large. The model presented here considers these interactions, making it especially useful for estimating errors in studies of landscape change over time. © 2005 The Ohio State University.
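
    The boundary-displacement simulation can be illustrated with a small Monte Carlo sketch (an assumed implementation using the shapely package, not the authors' model): polygon vertices are perturbed with Gaussian coordinate errors, and the area of spurious change is read from the XOR (symmetric difference) of the original and perturbed polygons.

        import numpy as np
        from shapely.geometry import Polygon

        def mean_overlay_error(coords, sigma, n_trials=1000, seed=0):
            """Average XOR area generated purely by coordinate errors."""
            rng = np.random.default_rng(seed)
            base = Polygon(coords)
            areas = []
            for _ in range(n_trials):
                noisy = [(x + rng.normal(0, sigma), y + rng.normal(0, sigma))
                         for x, y in coords]
                areas.append(base.symmetric_difference(Polygon(noisy)).area)
            return np.mean(areas)

        # Example: unit square with coordinate error of 1% of a side length.
        square = [(0, 0), (1, 0), (1, 1), (0, 1)]
        print(mean_overlay_error(square, sigma=0.01))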

  2. Reproducibility of 3D kinematics and surface electromyography measurements of mastication.

    PubMed

    Remijn, Lianne; Groen, Brenda E; Speyer, Renée; van Limbeek, Jacques; Nijhuis-van der Sanden, Maria W G

    2016-03-01

    The aim of this study was to determine the measurement reproducibility for a procedure evaluating the mastication process and to estimate the smallest detectable differences of 3D kinematic and surface electromyography (sEMG) variables. Kinematics of mandible movements and sEMG activity of the masticatory muscles were obtained over two sessions with four conditions: two food textures (biscuit and bread) of two sizes (small and large). Twelve healthy adults (mean age 29.1 years) completed the study. The second to fifth chewing cycles of 5 bites were used for analyses. The reproducibility per outcome variable was calculated with an intraclass correlation coefficient (ICC), and a Bland-Altman analysis was applied to determine the standard error of measurement, relative error of measurement, and smallest detectable differences of all variables. ICCs ranged from 0.71 to 0.98 for all outcome variables. The outcome variables consisted of four bite and fourteen chewing cycle variables. The relative standard error of measurement of the bite variables was up to 17.3% for 'time-to-swallow', 'time-to-transport' and 'number of chewing cycles', but ranged from 31.5% to 57.0% for 'change of chewing side'. The relative standard error of measurement ranged from 4.1% to 24.7% for chewing cycle variables and was smaller for kinematic variables than for sEMG variables. In general, measurements obtained with 3D kinematics and sEMG are reproducible techniques to assess the mastication process. The duration of the chewing cycle and the frequency of chewing were the most reproducible measurements. Change of chewing side could not be reproduced. The published measurement error and smallest detectable differences will aid the interpretation of the results of future clinical studies using the same study variables. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Effects of free convection and friction on heat-pulse flowmeter measurement

    NASA Astrophysics Data System (ADS)

    Lee, Tsai-Ping; Chia, Yeeping; Chen, Jiun-Szu; Chen, Hongey; Liu, Chen-Wuing

    2012-03-01

    The heat-pulse flowmeter can be used to measure low flow velocities in a borehole; however, bias in the results due to measurement error is often encountered. A carefully designed water circulation system was established in the laboratory to evaluate the accuracy and precision of flow velocity measured by the heat-pulse flowmeter in various conditions. Test results indicated that the coefficient of variation for repeated measurements, ranging from 0.4% to 5.8%, tends to increase with flow velocity. The measurement error increases from 4.6% to 94.4% as the average flow velocity decreases from 1.37 cm/s to 0.18 cm/s. We found that the error resulted primarily from free convection and frictional loss. Free convection plays an important role in heat transport at low flow velocities. The frictional effect varies with the position of measurement and the geometric shape of the inlet and flow-through cell of the flowmeter. Based on the laboratory test data, a calibration equation for the measured flow velocity was derived by least-squares regression analysis. When the flowmeter is used with a diverter, the range of measurable flow velocity can be extended, but the measurement error and the coefficient of variation due to friction increase significantly. At higher velocities under turbulent flow conditions, the measurement error is greater than 100%. Our laboratory experimental results suggest that, to avoid large errors, heat-pulse flowmeter measurements are better conducted in laminar flow and the effect of free convection should be eliminated at all flow velocities. Field measurement of the vertical flow velocity using the heat-pulse flowmeter was tested in a monitoring well. The calibration of measured velocities not only improved the contrast in hydraulic conductivity between permeable and less permeable layers, but also corrected the inconsistency between the pumping rate and the measured flow rate. We identified two highly permeable sections where the horizontal hydraulic conductivity is 3.7-6.4 times the equivalent hydraulic conductivity obtained from the pumping test. The field test results indicated that, with proper calibration, the flowmeter measurement is capable of characterizing the vertical distribution of preferential flow or hydraulic conductivity.
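
    The calibration step can be sketched as an ordinary least-squares fit relating measured to reference velocities; the paired values and variable names below are illustrative, not the paper's actual calibration equation.

        import numpy as np

        # Hypothetical paired velocities from the circulation system (cm/s).
        v_measured = np.array([0.25, 0.52, 0.80, 1.10, 1.40])
        v_reference = np.array([0.18, 0.46, 0.77, 1.08, 1.37])

        # Least-squares calibration: v_true ~ a * v_measured + b
        a, b = np.polyfit(v_measured, v_reference, 1)
        print(f"calibrated velocity = {a:.3f} * measured + {b:.3f}")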

  4. Probabilistic parameter estimation in a 2-step chemical kinetics model for n-dodecane jet autoignition

    NASA Astrophysics Data System (ADS)

    Hakim, Layal; Lacaze, Guilhem; Khalil, Mohammad; Sargsyan, Khachik; Najm, Habib; Oefelein, Joseph

    2018-05-01

    This paper demonstrates the development of a simple chemical kinetics model designed for autoignition of n-dodecane in air using Bayesian inference with a model-error representation. The model error, i.e. intrinsic discrepancy from a high-fidelity benchmark model, is represented by allowing additional variability in selected parameters. Subsequently, we quantify predictive uncertainties in the results of autoignition simulations of homogeneous reactors at realistic diesel engine conditions. We demonstrate that these predictive error bars capture model error as well. The uncertainty propagation is performed using non-intrusive spectral projection that can also be used in principle with larger scale computations, such as large eddy simulation. While the present calibration is performed to match a skeletal mechanism, it can be done with equal success using experimental data only (e.g. shock-tube measurements). Since our method captures the error associated with structural model simplifications, we believe that the optimised model could then lead to better qualified predictions of autoignition delay time in high-fidelity large eddy simulations than the existing detailed mechanisms. This methodology provides a way to reduce the cost of reaction kinetics in simulations systematically, while quantifying the accuracy of predictions of important target quantities.

  5. Distance error correction for time-of-flight cameras

    NASA Astrophysics Data System (ADS)

    Fuersattel, Peter; Schaller, Christian; Maier, Andreas; Riess, Christian

    2017-06-01

    The measurement accuracy of time-of-flight cameras is limited by properties of the scene and by systematic errors. These errors can accumulate to multiple centimeters, which may limit the applicability of these range sensors. In the past, different approaches have been proposed for improving the accuracy of these cameras. In this work, we propose a new method that improves two important aspects of the range calibration. First, we propose a new checkerboard which is augmented by a gray-level gradient. With this addition it becomes possible to capture the calibration features for intrinsic and distance calibration at the same time. The gradient strip makes it possible to acquire a large number of distance measurements for different surface reflectivities, which results in more meaningful training data. Second, we present multiple new features which are used as input to a random forest regressor. By using random regression forests, we circumvent the problem of finding an accurate model for the measurement error. During application, a correction value for each individual pixel is estimated with the trained forest based on a specifically tailored feature vector. With our approach the measurement error can be reduced by more than 40% for the Mesa SR4000 and by more than 30% for the Microsoft Kinect V2. In our evaluation we also investigate the impact of the individual forest parameters and illustrate the importance of the individual features.
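
    A minimal sketch of the regression-forest correction idea, assuming a simplified feature set and scikit-learn's RandomForestRegressor (not the authors' implementation or feature vector): per-pixel features are mapped to a distance-correction value that is subtracted from the raw measurement.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        # Hypothetical training data: one row of features per pixel sample,
        # e.g. [raw_distance_m, amplitude, radial_offset], and the known error.
        X_train = np.array([[1.20, 310.0, 0.10],
                            [2.45, 150.0, 0.35],
                            [3.10,  90.0, 0.50],
                            [0.80, 420.0, 0.05]])
        y_error = np.array([0.031, 0.054, 0.072, 0.018])   # metres

        forest = RandomForestRegressor(n_estimators=100, random_state=0)
        forest.fit(X_train, y_error)

        # At run time, subtract the predicted error from each raw distance.
        raw_distance = 1.50
        corrected = raw_distance - forest.predict([[raw_distance, 280.0, 0.20]])[0]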

  6. Estimations of ABL fluxes and other turbulence parameters from Doppler lidar data

    NASA Technical Reports Server (NTRS)

    Gal-Chen, Tzvi; Xu, Mei; Eberhard, Wynn

    1989-01-01

    Techniques for extracting boundary layer parameters from measurements of a short-pulse CO2 Doppler lidar are described. The measurements are those collected during the First International Satellite Land Surface Climatology Project (ISLSCP) Field Experiment (FIFE). By continuously operating the lidar for about an hour, stable statistics of the radial velocities can be extracted. Assuming that the turbulence is horizontally homogeneous, the mean wind, its standard deviations, and the momentum fluxes were estimated. Spectral analysis of the radial velocities is also performed, from which, by examining the amplitude of the power spectrum in the inertial range, the kinetic energy dissipation was deduced. Finally, using the statistical form of the Navier-Stokes equations, the surface heat flux is derived as the residual balance between the vertical gradient of the third moment of the vertical velocity and the kinetic energy dissipation. Combining many measurements would normally reduce the error, provided that it is unbiased and uncorrelated. The nature of some of the algorithms, however, is such that biased and correlated errors may be generated even though the raw measurements are not. Data processing procedures were developed that eliminate bias and minimize error correlation. Once bias and error correlations are accounted for, the large sample size is shown to reduce the errors substantially. The principal features of the derived turbulence statistics for the two cases studied are presented.

  7. Active wavefront control challenges of the NASA Large Deployable Reflector (LDR)

    NASA Technical Reports Server (NTRS)

    Meinel, Aden B.; Meinel, Marjorie P.; Manhart, Paul K.; Hochberg, Eric B.

    1989-01-01

    The 20-m Large Deployable Reflector will have a segmented primary mirror. Achieving diffraction-limited performance at 50 microns requires correction for the errors of tilt and piston of the primary mirror. This correction can be obtained in two ways, the use of an active primary or a correction at a demagnified pupil of the primary. A critical requirement is the means for measurement of the wavefront error and maintaining phasing during the observation of objects that may be too faint for determining the error. Absolute phasing can only be determined using a cooperative source. Maintenance of phasing can be done with an on-board source. A number of options are being explored as discussed below. The many issues concerning the assessment and control of an active segmented mirror will be addressed with an early construction of the Precision Segmented Reflector testbed.

  8. Active wavefront control challenges of the NASA Large Deployable Reflector (LDR)

    NASA Astrophysics Data System (ADS)

    Meinel, Aden B.; Meinel, Marjorie P.; Manhart, Paul K.; Hochberg, Eric B.

    1989-09-01

    The 20-m Large Deployable Reflector will have a segmented primary mirror. Achieving diffraction-limited performance at 50 microns requires correction for the errors of tilt and piston of the primary mirror. This correction can be obtained in two ways, the use of an active primary or a correction at a demagnified pupil of the primary. A critical requirement is the means for measurement of the wavefront error and maintaining phasing during the observation of objects that may be too faint for determining the error. Absolute phasing can only be determined using a cooperative source. Maintenance of phasing can be done with an on-board source. A number of options are being explored as discussed below. The many issues concerning the assessment and control of an active segmented mirror will be addressed with an early construction of the Precision Segmented Reflector testbed.

  9. Skin Friction at Very High Reynolds Numbers in the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Watson, Ralph D.; Anders, John B.; Hall, Robert M.

    2006-01-01

    Skin friction coefficients were derived from measurements using standard measurement technologies on an axisymmetric cylinder in the NASA Langley National Transonic Facility (NTF) at Mach numbers from 0.2 to 0.85. The pressure gradient was nominally zero, the wall temperature was nominally adiabatic, and the ratio of boundary layer thickness to model diameter within the measurement region was 0.10 to 0.14, varying with distance along the model. Reynolds numbers based on momentum thicknesses ranged from 37,000 to 605,000. The measurements approximately doubled the range of available data for flat plate skin friction coefficients. Three different techniques were used to measure surface shear. The maximum error of Preston tube measurements was estimated to be 2.5 percent, while that of Clauser derived measurements was estimated to be approximately 5 percent. Direct measurements by skin friction balance proved to be subject to large errors and were not considered reliable.

  10. Measuring discharge with ADCPs: Inferences from synthetic velocity profiles

    USGS Publications Warehouse

    Rehmann, C.R.; Mueller, D.S.; Oberg, K.A.

    2009-01-01

    Synthetic velocity profiles are used to determine guidelines for sampling discharge with acoustic Doppler current profilers (ADCPs). The analysis allows the effects of instrument characteristics, sampling parameters, and properties of the flow to be studied systematically. For mid-section measurements, the averaging time required for a single profile measurement always exceeded the 40 s usually recommended for velocity measurements, and it increased with increasing sample interval and increasing time scale of the large eddies. Similarly, simulations of transect measurements show that discharge error decreases as the number of large eddies sampled increases. The simulations allow sampling criteria that account for the physics of the flow to be developed. © 2009 ASCE.

  11. Elevation correction factor for absolute pressure measurements

    NASA Technical Reports Server (NTRS)

    Panek, Joseph W.; Sorrells, Mark R.

    1996-01-01

    With the arrival of highly accurate multi-port pressure measurement systems, conditions that previously did not affect overall system accuracy must now be scrutinized closely. Errors caused by elevation differences between pressure sensing elements and model pressure taps can be quantified and corrected. With multi-port pressure measurement systems, the sensing elements are connected to pressure taps that may be many feet away. The measurement system may be at a different elevation than the pressure taps due to laboratory space or test article constraints. This difference produces a pressure gradient that is inversely proportional to height within the interface tube. The pressure at the bottom of the tube will be higher than the pressure at the top due to the weight of the tube's column of air. Tubes with higher pressures will exhibit larger absolute errors due to the higher air density. The above effect is well documented but has generally been taken into account with large elevations only. With error analysis techniques, the loss in accuracy from elevation can be easily quantified. Correction factors can be applied to maintain the high accuracies of new pressure measurement systems.
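
    The correction can be illustrated with the hydrostatic relation ΔP = ρ g Δh, using the ideal-gas law for the air density in the interface tube; the numbers below are illustrative, not values from the report.

        # Hydrostatic elevation correction for an air-filled interface tube.
        R_AIR = 287.05      # J/(kg*K), specific gas constant for dry air
        G = 9.80665         # m/s^2

        def elevation_correction_pa(p_pa, t_kelvin, delta_h_m):
            """Pressure difference across a height delta_h of tube air.
            Positive delta_h (sensor below the tap) means the sensor reads high."""
            rho = p_pa / (R_AIR * t_kelvin)      # ideal-gas air density
            return rho * G * delta_h_m

        # Example: ~100 kPa line pressure, 295 K lab, 3 m elevation difference.
        print(elevation_correction_pa(100_000.0, 295.0, 3.0))   # about 35 Pa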

  12. Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase

    DOE PAGES

    McInerney, Peter; Adams, Paul; Hadi, Masood Z.

    2014-01-01

    As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Among the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition.

  13. Contextualizing individual differences in error monitoring: Links with impulsivity, negative affect, and conscientiousness.

    PubMed

    Hill, Kaylin E; Samuel, Douglas B; Foti, Dan

    2016-08-01

    The error-related negativity (ERN) is a neural measure of error processing that has been implicated as a neurobehavioral trait and has transdiagnostic links with psychopathology. Few studies, however, have contextualized this traitlike component with regard to dimensions of personality that, as intermediate constructs, may aid in contextualizing links with psychopathology. Accordingly, the aim of this study was to examine the interrelationships between error monitoring and dimensions of personality within a large adult sample (N = 208). Building on previous research, we found that the ERN relates to a combination of negative affect, impulsivity, and conscientiousness. At low levels of conscientiousness, negative urgency (i.e., impulsivity in the context of negative affect) predicted an increased ERN; at high levels of conscientiousness, the effect of negative urgency was not significant. This relationship was driven specifically by the conscientiousness facets of competence, order, and deliberation. Links between personality measures and error positivity amplitude were weaker and nonsignificant. Post-error slowing was also related to conscientiousness, as well as a different facet of impulsivity: lack of perseverance. These findings suggest that, in the general population, error processing is modulated by the joint combination of negative affect, impulsivity, and conscientiousness (i.e., the profile across traits), perhaps more so than any one dimension alone. This work may inform future research concerning aberrant error processing in clinical populations. © 2016 Society for Psychophysiological Research.

  14. Aspheric and freeform surfaces metrology with software configurable optical test system: a computerized reverse Hartmann test

    NASA Astrophysics Data System (ADS)

    Su, Peng; Khreishi, Manal A. H.; Su, Tianquan; Huang, Run; Dominguez, Margaret Z.; Maldonado, Alejandro; Butel, Guillaume; Wang, Yuhao; Parks, Robert E.; Burge, James H.

    2014-03-01

    A software configurable optical test system (SCOTS) based on deflectometry was developed at the University of Arizona for rapidly, robustly, and accurately measuring precision aspheric and freeform surfaces. SCOTS uses a camera with an external stop to realize a Hartmann test in reverse. With the external camera stop as the reference, a coordinate measuring machine can be used to calibrate the SCOTS test geometry to a high accuracy. Systematic errors from the camera are carefully investigated and controlled. Camera pupil imaging aberration is removed with the external aperture stop. Imaging aberration and other inherent errors are suppressed with an N-rotation test. The performance of the SCOTS test is demonstrated with the measurement results from a 5-m-diameter Large Synoptic Survey Telescope tertiary mirror and an 8.4-m diameter Giant Magellan Telescope primary mirror. The results show that SCOTS can be used as a large-dynamic-range, high-precision, and non-null test method for precision aspheric and freeform surfaces. The SCOTS test can achieve measurement accuracy comparable to traditional interferometric tests.

  15. Photodissociation in the atmosphere of Mars - Impact of high resolution, temperature-dependent CO2 cross-section measurements

    NASA Technical Reports Server (NTRS)

    Anbar, A. D.; Allen, M.; Nair, H. A.

    1993-01-01

    We have investigated the impact of high resolution, temperature-dependent CO2 cross-section measurements, reported by Lewis and Carver (1983), on calculations of photodissociation rate coefficients in the Martian atmosphere. We find that the adoption of 50 A intervals for the purpose of computational efficiency results in errors in the calculated values for photodissociation of CO2, H2O, and O2 which are generally not above 10 percent, but as large as 20 percent in some instances. These are acceptably small errors, especially considering the uncertainties introduced by the large temperature dependence of the CO2 cross section. The inclusion of temperature-dependent CO2 cross sections is shown to lead to a decrease in the diurnally averaged rate of CO2 photodissociation as large as 33 percent at some altitudes, and increases of as much as 950 percent and 80 percent in the photodissociation rate coefficients of H2O and O2, respectively. The actual magnitude of the changes depends on the assumptions used to model the CO2 absorption spectrum at temperatures lower than the available measurements, and at wavelengths longward of 1970 A.

  16. Community assessment of tropical tree biomass: challenges and opportunities for REDD.

    PubMed

    Theilade, Ida; Rutishauser, Ervan; Poulsen, Michael K

    2015-12-01

    REDD+ programs rely on accurate forest carbon monitoring. Several REDD+ projects have recently shown that local communities can monitor above ground biomass as well as external professionals, but at lower costs. However, the precision and accuracy of carbon monitoring conducted by local communities have rarely been assessed in the tropics. The aim of this study was to investigate different sources of error in tree biomass measurements conducted by community monitors and determine the effect on biomass estimates. Furthermore, we explored the potential of local ecological knowledge to assess wood density and botanical identification of trees. Community monitors were able to measure tree DBH accurately, but some large errors were found in girth measurements of large and odd-shaped trees. Monitors with experience from the logging industry performed better than monitors without previous experience. Indeed, only experienced monitors were able to discriminate trees with low wood densities. Local ecological knowledge did not allow consistent tree identification across monitors. Future REDD+ programmes may benefit from the systematic training of local monitors in tree DBH measurement, with special attention given to large and odd-shaped trees. A better understanding of traditional classification systems and concepts is required for local tree identifications and wood density estimates to become useful in monitoring of biomass and tree diversity.

  17. Building a kinetic Monte Carlo model with a chosen accuracy.

    PubMed

    Bhute, Vijesh J; Chatterjee, Abhijit

    2013-06-28

    The kinetic Monte Carlo (KMC) method is a popular modeling approach for reaching large materials length and time scales. The KMC dynamics is erroneous when atomic processes that are relevant to the dynamics are missing from the KMC model. Recently, we developed the first error measure for KMC in Bhute and Chatterjee [J. Chem. Phys. 138, 084103 (2013)]. The error measure, which is given in terms of the probability that a missing process will be selected in the correct dynamics, requires estimation of the missing rate. In this work, we present an improved procedure for estimating the missing rate. The estimate found using the new procedure is within an order of magnitude of the correct missing rate, unlike our previous approach where the estimate was larger by orders of magnitude. This enables one to find the error in the KMC model more accurately. In addition, we find the time for which the KMC model can be used before a maximum error in the dynamics has been reached.

  18. Optimal estimation of large structure model errors. [in Space Shuttle controller design

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1979-01-01

    In-flight estimation of large structure model errors is usually required as a means of detecting inevitable deficiencies in large structure controller/estimator models. The present paper deals with a least-squares formulation which seeks to minimize a quadratic functional of the model errors. The properties of these error estimates are analyzed. It is shown that an arbitrary model error can be decomposed as the sum of two components that are orthogonal in a suitably defined function space. Relations between true and estimated errors are defined. The estimates are found to be approximations that retain many of the significant dynamics of the true model errors. Current efforts are directed toward application of the analytical results to a reference large structure model.

  19. Validation of SenseWear Armband in children, adolescents, and adults.

    PubMed

    Lopez, G A; Brønd, J C; Andersen, L B; Dencker, M; Arvidsson, D

    2018-02-01

    SenseWear Armband (SW) is a multisensor monitor used to assess physical activity and energy expenditure. Its prediction algorithms have been updated periodically. The aim was to validate SW in children, adolescents, and adults. The most recent SW algorithm 5.2 (SW5.2) and the previous version 2.2 (SW2.2) were evaluated for estimation of energy expenditure during semi-structured activities in 35 children, 31 adolescents, and 36 adults, with indirect calorimetry as reference. Energy expenditure estimated from waist-worn ActiGraph GT3X+ data (AG) was used for comparison. Improvements in measurement errors were demonstrated with SW5.2 compared to SW2.2, especially in children and for biking. The overall mean absolute percent error with SW5.2 was 24% in children, 23% in adolescents, and 20% in adults. The error was larger for sitting and standing (23%-32%) and for basketball and biking (19%-35%) than for walking and running (8%-20%). The overall mean absolute error with AG was 28% in children, 22% in adolescents, and 28% in adults. The absolute percent error for biking was 32%-74% with AG. In general, SW and AG underestimated energy expenditure. However, both methods demonstrated a proportional bias, with increasing underestimation for increasing energy expenditure level, in addition to the large individual error. With the improvements in the updated algorithms, SW provides measures of energy expenditure with similar accuracy in children, adolescents, and adults. Although SW captures biking better than AG, these methods share remaining measurement errors that require further improvement for accurate measures of physical activity and energy expenditure in clinical and epidemiological research. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
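
    The headline accuracy metric can be reproduced in a few lines; the arrays below are illustrative placeholders, not study data.

        import numpy as np

        def mape(estimated, reference):
            """Mean absolute percent error, in percent."""
            estimated = np.asarray(estimated, float)
            reference = np.asarray(reference, float)
            return 100.0 * np.mean(np.abs(estimated - reference) / reference)

        # Hypothetical energy-expenditure values (kcal) for one activity.
        device = [210, 305, 180, 400]
        calorimetry = [250, 340, 230, 520]
        print(f"MAPE = {mape(device, calorimetry):.1f}%")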

  20. SU-F-T-471: Simulated External Beam Delivery Errors Detection with a Large Area Ion Chamber Transmission Detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoffman, D; Dyer, B; Kumaran Nair, C

    Purpose: The Integral Quality Monitor (IQM), developed by iRT Systems GmbH (Koblenz, Germany), is a large-area, linac-mounted ion chamber used to monitor photon fluence during patient treatment. Our previous work evaluated the change in the ion chamber's response to deviations from static 1×1 cm2 and 10×10 cm2 photon beams and other characteristics integral to its use in external beam error detection. The aim of this work is to simulate two external beam radiation delivery errors, quantify the detection of the simulated errors and evaluate the reduction in patient harm resulting from detection. Methods: Two well-documented radiation oncology delivery errors were selected for simulation. The first error was recreated by modifying a wedged whole breast treatment, removing the physical wedge and calculating the planned dose with Pinnacle TPS (Philips Radiation Oncology Systems, Fitchburg, WI). The second error was recreated by modifying a static-gantry IMRT pharyngeal tonsil plan to be delivered in 3 unmodulated fractions. A radiation oncologist evaluated the dose for the simulated errors and predicted morbidity and mortality commensurate with the originally reported toxicity, indicating that the reported errors were approximately simulated. The ion chamber signal of the unmodified treatments was compared to the simulated error signal and evaluated in Pinnacle TPS, again with radiation oncologist prediction of simulated patient harm. Results: Previous work established that transmission detector system measurements are stable within a 0.5% standard deviation (SD). Errors causing a signal change greater than 20 SD (10%) were considered detected. The whole breast and pharyngeal tonsil IMRT simulated errors increased the signal by 215% and 969%, respectively, indicating error detection after the first fraction and first IMRT segment, respectively. Conclusion: The transmission detector system demonstrated utility in detecting clinically significant errors and reducing patient toxicity/harm in simulated external beam delivery. Future work will evaluate detection of other smaller-magnitude delivery errors.

  1. On the formulation of gravitational potential difference between the GRACE satellites based on energy integral in Earth fixed frame

    NASA Astrophysics Data System (ADS)

    Zeng, Y. Y.; Guo, J. Y.; Shang, K.; Shum, C. K.; Yu, J. H.

    2015-09-01

    Two methods for computing the gravitational potential difference (GPD) between the GRACE satellites using orbit data have been formulated based on the energy integral; one in the geocentric inertial frame (GIF) and another in the Earth fixed frame (EFF). Here we present a rigorous theoretical formulation in the EFF with particular emphasis on the necessary approximations, provide a computational approach to mitigate the approximations to a negligible level, and verify our approach using simulations. We conclude that a term neglected or ignored in all former work without verification should be retained. In our simulations, 2 cycle per revolution (CPR) errors are present in the GPD computed using our formulation, and empirical removal of the 2 CPR and lower frequency errors can improve the precision of Stokes coefficients (SCs) of degree 3 and above by 1-2 orders of magnitude. This is despite the fact that the result without removing these errors is already accurate enough. Furthermore, the relation between data errors and their influences on the GPD is analysed, and a formal examination is made of the possible precision that real GRACE data may attain. The result of removing 2 CPR errors may imply that, if not taken care of properly, the values of SCs computed by means of the energy integral method using real GRACE data may be seriously corrupted by aliasing errors from possibly very large 2 CPR errors, based on two facts: (1) errors of C̄_{2,0} manifest as 2 CPR errors in the GPD, and (2) errors of C̄_{2,0} in GRACE data (the differences between the CSR monthly values of C̄_{2,0} independently determined using GRACE and SLR are a reasonable measure of their magnitude) are very large. Our simulations show that, if 2 CPR errors in the GPD vary from day to day as much as those corresponding to errors of C̄_{2,0} vary from month to month, the aliasing errors of degree 15 and above SCs computed using a month's GPD data may attain a level comparable to the magnitude of the gravitational potential variation signal that GRACE was designed to recover. Consequently, we conclude that aliasing errors from 2 CPR errors in real GRACE data may be very large if not properly handled; we therefore propose an approach to reduce aliasing errors from 2 CPR and lower frequency errors when computing SCs above degree 2.
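
    The empirical removal of 2 CPR and lower frequency errors can be sketched as a least-squares fit of a low-order polynomial plus once- and twice-per-revolution sinusoids to the GPD time series; this is an illustrative approach consistent with the description, not the authors' code, and the function and parameter names are assumptions.

        import numpy as np

        def remove_cpr_errors(t, gpd, period, max_cpr=2, poly_order=1):
            """Fit and subtract polynomial + 1..max_cpr cycle-per-revolution
            terms from a gravitational potential difference time series."""
            t, gpd = np.asarray(t, float), np.asarray(gpd, float)
            cols = [t ** k for k in range(poly_order + 1)]
            for n in range(1, max_cpr + 1):
                w = 2.0 * np.pi * n / period
                cols += [np.cos(w * t), np.sin(w * t)]
            A = np.column_stack(cols)
            coeffs, *_ = np.linalg.lstsq(A, gpd, rcond=None)
            return gpd - A @ coeffs          # empirically error-reduced GPD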

  2. Ordinary least squares regression is indicated for studies of allometry.

    PubMed

    Kilmer, J T; Rodríguez, R L

    2017-01-01

    When it comes to fitting simple allometric slopes through measurement data, evolutionary biologists have been torn between regression methods. On the one hand, there is the ordinary least squares (OLS) regression, which is commonly used across many disciplines of biology to fit lines through data, but which has a reputation for underestimating slopes when measurement error is present. On the other hand, there is the reduced major axis (RMA) regression, which is often recommended as a substitute for OLS regression in studies of allometry, but which has several weaknesses of its own. Here, we review statistical theory as it applies to evolutionary biology and studies of allometry. We point out that the concerns that arise from measurement error for OLS regression are small and straightforward to deal with, whereas RMA has several key properties that make it unfit for use in the field of allometry. The recommended approach for researchers interested in allometry is to use OLS regression on measurements taken with low (but realistically achievable) measurement error. If measurement error is unavoidable and relatively large, it is preferable to correct for slope attenuation rather than to turn to RMA regression, or to take the expected amount of attenuation into account when interpreting the data. © 2016 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2016 European Society For Evolutionary Biology.
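
    A small simulation (illustrative only, not from the paper) shows the attenuation being corrected directly rather than by switching to RMA: the OLS slope is divided by the reliability ratio of the predictor, while the RMA slope is simply sd(y)/sd(x) up to the sign of the correlation.

        import numpy as np

        rng = np.random.default_rng(1)
        true_slope, n = 0.75, 2000
        x_true = rng.normal(0.0, 1.0, n)
        y = true_slope * x_true + rng.normal(0.0, 0.3, n)
        x_obs = x_true + rng.normal(0.0, 0.5, n)       # measurement error

        ols = np.polyfit(x_obs, y, 1)[0]               # attenuated slope
        rma = np.std(y, ddof=1) / np.std(x_obs, ddof=1)
        reliability = np.var(x_true) / np.var(x_obs)   # known here by design
        print(ols, ols / reliability, rma)             # corrected OLS ~ 0.75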

  3. Large-Area Visually Augmented Navigation for Autonomous Underwater Vehicles

    DTIC Science & Technology

    2005-06-01

    ...constrain position drift. Corrections of errors in position and orientation are made each time the mosaic is updated, which occurs every Lth video frame. They ... are the greatest strength of a VAN methodology. It is these measurements which help to correct dead-reckoned drift error and enforce recovery of a ... systems. [Extracted table fragment: INSTRUMENT | VARIABLE | INTERNAL? | UPDATE RATE | PRECISION | RANGE | DRIFT; row: Acoustic Altimeter | Z - Altitude | yes | varies: 0.1-10 Hz | 0.01-1.0 m | varies]

  4. The accuracy of tomographic particle image velocimetry for measurements of a turbulent boundary layer

    NASA Astrophysics Data System (ADS)

    Atkinson, Callum; Coudert, Sebastien; Foucaut, Jean-Marc; Stanislas, Michel; Soria, Julio

    2011-04-01

    To investigate the accuracy of tomographic particle image velocimetry (Tomo-PIV) for turbulent boundary layer measurements, a series of synthetic image-based simulations and practical experiments are performed on a high Reynolds number turbulent boundary layer at Reθ = 7,800. Two different approaches to Tomo-PIV are examined using a full-volume slab measurement and a thin-volume "fat" light sheet approach. Tomographic reconstruction is performed using both the standard MART technique and the more efficient MLOS-SMART approach, showing a 10-time increase in processing speed. Random and bias errors are quantified under the influence of the near-wall velocity gradient, reconstruction method, ghost particles, seeding density and volume thickness, using synthetic images. Experimental Tomo-PIV results are compared with hot-wire measurements and errors are examined in terms of the measured mean and fluctuating profiles, probability density functions of the fluctuations, distributions of fluctuating divergence through the volume and velocity power spectra. Velocity gradients have a large effect on errors near the wall and also increase the errors associated with ghost particles, which convect at mean velocities through the volume thickness. Tomo-PIV provides accurate experimental measurements at low wave numbers; however, reconstruction introduces high noise levels that reduces the effective spatial resolution. A thinner volume is shown to provide a higher measurement accuracy at the expense of the measurement domain, albeit still at a lower effective spatial resolution than planar and Stereo-PIV.

  5. Statistical and systematic errors in the measurement of weak-lensing Minkowski functionals: Application to the Canada-France-Hawaii Lensing Survey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shirasaki, Masato; Yoshida, Naoki, E-mail: masato.shirasaki@utap.phys.s.u-tokyo.ac.jp

    2014-05-01

    The measurement of cosmic shear using weak gravitational lensing is a challenging task that involves a number of complicated procedures. We study in detail the systematic errors in the measurement of weak-lensing Minkowski Functionals (MFs). Specifically, we focus on systematics associated with galaxy shape measurements, photometric redshift errors, and shear calibration correction. We first generate mock weak-lensing catalogs that directly incorporate the actual observational characteristics of the Canada-France-Hawaii Lensing Survey (CFHTLenS). We then perform a Fisher analysis using the large set of mock catalogs for various cosmological models. We find that the statistical error associated with the observational effects degrades the cosmological parameter constraints by a factor of a few. The Subaru Hyper Suprime-Cam (HSC) survey, with a sky coverage of ∼1400 deg², will constrain the dark energy equation of state parameter with an error of Δw_0 ∼ 0.25 by the lensing MFs alone, but biases induced by the systematics can be comparable to the 1σ error. We conclude that the lensing MFs are powerful statistics beyond the two-point statistics only if well-calibrated measurement of both the redshifts and the shapes of source galaxies is performed. Finally, we analyze the CFHTLenS data to explore the ability of the MFs to break degeneracies between a few cosmological parameters. Using a combined analysis of the MFs and the shear correlation function, we derive the matter density Ω_{m0} = 0.256^{+0.054}_{-0.046}.

  6. Stratospheric wind errors, initial states and forecast skill in the GLAS general circulation model

    NASA Technical Reports Server (NTRS)

    Tenenbaum, J.

    1983-01-01

    Relations between stratospheric wind errors, initial states and 500 mb skill are investigated using the GLAS general circulation model initialized with FGGE data. Erroneous stratospheric winds are seen in all current general circulation models, appearing also as weak shear above the subtropical jet and as cold polar stratospheres. In this study it is shown that the more anticyclonic large-scale flows are correlated with large forecast stratospheric winds. In addition, it is found that for North America the resulting errors are correlated with initial state jet stream accelerations while for East Asia the forecast winds are correlated with initial state jet strength. Using 500 mb skill scores over Europe at day 5 to measure forecast performance, it is found that both poor forecast skill and excessive stratospheric winds are correlated with more anticyclonic large-scale flows over North America. It is hypothesized that the resulting erroneous kinetic energy contributes to the poor forecast skill, and that the problem is caused by a failure in the modeling of the stratospheric energy cycle in current general circulation models independent of vertical resolution.

  7. Exploration of multiphoton entangled states by using weak nonlinearities

    PubMed Central

    He, Ying-Qiu; Ding, Dong; Yan, Feng-Li; Gao, Ting

    2016-01-01

    We propose a fruitful scheme for exploring multiphoton entangled states based on linear optics and weak nonlinearities. Compared with the previous schemes the present method is more feasible because there are only small phase shifts instead of a series of related functions of photon numbers in the process of interaction with Kerr nonlinearities. In the absence of decoherence we analyze the error probabilities induced by homodyne measurement and show that the maximal error probability can be made small enough even when the number of photons is large. This implies that the present scheme is quite tractable and it is possible to produce entangled states involving a large number of photons. PMID:26751044

  8. Accuracy in Wrist-Worn, Sensor-Based Measurements of Heart Rate and Energy Expenditure in a Diverse Cohort

    PubMed Central

    Shcherbina, Anna; Mattsson, C. Mikael; Waggott, Daryl; Salisbury, Heidi; Christle, Jeffrey W.; Hastie, Trevor; Wheeler, Matthew T.; Ashley, Euan A.

    2017-01-01

    The ability to measure physical activity through wrist-worn devices provides an opportunity for cardiovascular medicine. However, the accuracy of commercial devices is largely unknown. The aim of this work is to assess the accuracy of seven commercially available wrist-worn devices in estimating heart rate (HR) and energy expenditure (EE) and to propose a wearable sensor evaluation framework. We evaluated the Apple Watch, Basis Peak, Fitbit Surge, Microsoft Band, Mio Alpha 2, PulseOn, and Samsung Gear S2. Participants wore devices while being simultaneously assessed with continuous telemetry and indirect calorimetry while sitting, walking, running, and cycling. Sixty volunteers (29 male, 31 female, age 38 ± 11 years) of diverse age, height, weight, skin tone, and fitness level were selected. Error in HR and EE was computed for each subject/device/activity combination. Devices reported the lowest error for cycling and the highest for walking. Device error was higher for males, greater body mass index, darker skin tone, and walking. Six of the devices achieved a median error for HR below 5% during cycling. No device achieved an error in EE below 20 percent. The Apple Watch achieved the lowest overall error in both HR and EE, while the Samsung Gear S2 reported the highest. In conclusion, most wrist-worn devices adequately measure HR in laboratory-based activities, but poorly estimate EE, suggesting caution in the use of EE measurements as part of health improvement programs. We propose reference standards for the validation of consumer health devices (http://precision.stanford.edu/). PMID:28538708

  9. Accuracy in Wrist-Worn, Sensor-Based Measurements of Heart Rate and Energy Expenditure in a Diverse Cohort.

    PubMed

    Shcherbina, Anna; Mattsson, C Mikael; Waggott, Daryl; Salisbury, Heidi; Christle, Jeffrey W; Hastie, Trevor; Wheeler, Matthew T; Ashley, Euan A

    2017-05-24

    The ability to measure physical activity through wrist-worn devices provides an opportunity for cardiovascular medicine. However, the accuracy of commercial devices is largely unknown. The aim of this work is to assess the accuracy of seven commercially available wrist-worn devices in estimating heart rate (HR) and energy expenditure (EE) and to propose a wearable sensor evaluation framework. We evaluated the Apple Watch, Basis Peak, Fitbit Surge, Microsoft Band, Mio Alpha 2, PulseOn, and Samsung Gear S2. Participants wore devices while being simultaneously assessed with continuous telemetry and indirect calorimetry while sitting, walking, running, and cycling. Sixty volunteers (29 male, 31 female, age 38 ± 11 years) of diverse age, height, weight, skin tone, and fitness level were selected. Error in HR and EE was computed for each subject/device/activity combination. Devices reported the lowest error for cycling and the highest for walking. Device error was higher for males, greater body mass index, darker skin tone, and walking. Six of the devices achieved a median error for HR below 5% during cycling. No device achieved an error in EE below 20 percent. The Apple Watch achieved the lowest overall error in both HR and EE, while the Samsung Gear S2 reported the highest. In conclusion, most wrist-worn devices adequately measure HR in laboratory-based activities, but poorly estimate EE, suggesting caution in the use of EE measurements as part of health improvement programs. We propose reference standards for the validation of consumer health devices (http://precision.stanford.edu/).

  10. Analysis of potential errors in real-time streamflow data and methods of data verification by digital computer

    USGS Publications Warehouse

    Lystrom, David J.

    1972-01-01

    Various methods of verifying real-time streamflow data are outlined in part II. Relatively large errors (those greater than 20-30 percent) can be detected readily by use of well-designed verification programs for a digital computer, and smaller errors can be detected only by discharge measurements and field observations. The capability to substitute a simulated discharge value for missing or erroneous data is incorporated in some of the verification routines described. The routines represent concepts ranging from basic statistical comparisons to complex watershed modeling and provide a selection from which real-time data users can choose a suitable level of verification.
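
    A minimal sketch of the kind of verification routine described above, assuming a hypothetical verify_streamflow helper that flags readings deviating from a watershed-model estimate by more than a set tolerance and substitutes the simulated value (thresholds and data are illustrative):

        import numpy as np

        def verify_streamflow(observed, simulated, tolerance=0.25):
            """Flag observations deviating from the simulated discharge by more
            than `tolerance` (fractional) and substitute the simulated value."""
            observed = np.asarray(observed, dtype=float)
            simulated = np.asarray(simulated, dtype=float)
            rel_dev = np.abs(observed - simulated) / simulated
            flagged = rel_dev > tolerance
            verified = np.where(flagged, simulated, observed)
            return verified, flagged

        obs = [102.0, 98.0, 250.0, 101.0]   # 250 is a suspect gauge reading
        sim = [100.0, 99.0, 103.0, 102.0]   # watershed-model estimate
        verified, flagged = verify_streamflow(obs, sim)
        print(verified, flagged)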

  11. The Influence of Radiosonde 'Age' on TRMM Field Campaign Soundings Humidity Correction

    NASA Technical Reports Server (NTRS)

    Roy, Biswadev; Halverson, Jeffrey B.; Wang, Jun-Hong

    2002-01-01

    Hundreds of Vaisala sondes with the RS80-H Humicap thin-film capacitor humidity sensor were launched during the Tropical Rainfall Measuring Mission (TRMM) field campaigns: the Large Scale Biosphere-Atmosphere experiment (LBA) held in Brazil and the Kwajalein Experiment (KWAJEX) held in the Republic of the Marshall Islands. Using six humidity error correction algorithms by Wang et al., these sondes were corrected for the significant dry bias in the RS80-H data. It is further shown that the sonde surface temperature error must be corrected for a better representation of the relative humidity. This error becomes prominent due to sensor arm-heating in the first 50 s of data.

  12. Measurement of the integrated luminosities of the data taken by BESIII at √s = 3.650 and 3.773 GeV

    NASA Astrophysics Data System (ADS)

    Ablikim, M.; Achasov, M. N.; et al. (BESIII Collaboration)

    2013-12-01

    Data sets were collected with the BESIII detector at the BEPCII collider at center-of-mass energies of √s = 3.650 GeV during May 2009 and √s = 3.773 GeV from January 2010 to May 2011. By analyzing large-angle Bhabha scattering events, the integrated luminosities of the two data sets are measured to be (44.49±0.02±0.44) pb⁻¹ and (2916.94±0.18±29.17) pb⁻¹, respectively, where the first error is statistical and the second is systematic.
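
    As a worked example, the two quoted uncertainties can be combined in quadrature, a standard convention (the record itself quotes them separately):

        import math

        # Combining the quoted statistical and systematic uncertainties in
        # quadrature for the 3.773 GeV sample; clearly systematic-dominated.
        stat, sys = 0.18, 29.17                        # pb^-1
        total = math.sqrt(stat**2 + sys**2)
        print(f"total uncertainty: {total:.2f} pb^-1")  # ~29.17 pb^-1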

  13. Studies of atmospheric refraction effects on laser data

    NASA Technical Reports Server (NTRS)

    Dunn, P. J.; Pearce, W. A.; Johnson, T. S.

    1982-01-01

    The refraction effect was considered from three perspectives. The first priority was an analysis of the axioms on which the accepted correction algorithms are based. The integrity of the meteorological measurements on which the correction model is based was also considered, and a large quantity of laser observations was processed in an effort to detect any serious anomalies in them. The third element of study focused on the effect of refraction errors on geodetic parameters estimated from laser data using the most recent analysis procedures. The results concentrate on refraction errors, which were found to be critical in the eventual use of the data for measurements of crustal dynamics.

  14. CAUSES: Attribution of Surface Radiation Biases in NWP and Climate Models near the U.S. Southern Great Plains

    DOE PAGES

    Van Weverberg, K.; Morcrette, C. J.; Petch, J.; ...

    2018-02-28

    Many Numerical Weather Prediction (NWP) and climate models exhibit too warm lower tropospheres near the midlatitude continents. The warm bias has been shown to coincide with important surface radiation biases that likely play a critical role in the inception or the growth of the warm bias. This paper presents an attribution study on the net radiation biases in nine model simulations, performed in the framework of the CAUSES project (Clouds Above the United States and Errors at the Surface). Contributions from deficiencies in the surface properties, clouds, water vapor, and aerosols are quantified, using an array of radiation measurement stations near the Atmospheric Radiation Measurement Southern Great Plains site. Furthermore, an in-depth analysis is shown to attribute the radiation errors to specific cloud regimes. The net surface shortwave radiation is overestimated in all models throughout most of the simulation period. Cloud errors are shown to contribute most to this overestimation, although nonnegligible contributions from the surface albedo exist in most models. Missing deep cloud events and/or simulating deep clouds with too weak cloud radiative effects dominate in the cloud-related radiation errors. Some models have compensating errors, simulating deep cloud too frequently but largely underestimating its radiative effect, while other models miss deep cloud events altogether. Surprisingly, even the latter models tend to produce too much and too frequent afternoon surface precipitation. This suggests that rather than issues with the triggering of deep convection, cloud radiative deficiencies are related to too weak convective cloud detrainment and too large precipitation efficiencies.

  15. CAUSES: Attribution of Surface Radiation Biases in NWP and Climate Models near the U.S. Southern Great Plains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Weverberg, K.; Morcrette, C. J.; Petch, J.

    Many Numerical Weather Prediction (NWP) and climate models exhibit too warm lower tropospheres near the midlatitude continents. The warm bias has been shown to coincide with important surface radiation biases that likely play a critical role in the inception or the growth of the warm bias. This paper presents an attribution study on the net radiation biases in nine model simulations, performed in the framework of the CAUSES project (Clouds Above the United States and Errors at the Surface). Contributions from deficiencies in the surface properties, clouds, water vapor, and aerosols are quantified, using an array of radiation measurement stations near the Atmospheric Radiation Measurement Southern Great Plains site. Furthermore, an in-depth analysis is shown to attribute the radiation errors to specific cloud regimes. The net surface shortwave radiation is overestimated in all models throughout most of the simulation period. Cloud errors are shown to contribute most to this overestimation, although nonnegligible contributions from the surface albedo exist in most models. Missing deep cloud events and/or simulating deep clouds with too weak cloud radiative effects dominate in the cloud-related radiation errors. Some models have compensating errors, simulating deep cloud too frequently but largely underestimating its radiative effect, while other models miss deep cloud events altogether. Surprisingly, even the latter models tend to produce too much and too frequent afternoon surface precipitation. This suggests that rather than issues with the triggering of deep convection, cloud radiative deficiencies are related to too weak convective cloud detrainment and too large precipitation efficiencies.

  16. CAUSES: Attribution of Surface Radiation Biases in NWP and Climate Models near the U.S. Southern Great Plains

    NASA Astrophysics Data System (ADS)

    Van Weverberg, K.; Morcrette, C. J.; Petch, J.; Klein, S. A.; Ma, H.-Y.; Zhang, C.; Xie, S.; Tang, Q.; Gustafson, W. I.; Qian, Y.; Berg, L. K.; Liu, Y.; Huang, M.; Ahlgrimm, M.; Forbes, R.; Bazile, E.; Roehrig, R.; Cole, J.; Merryfield, W.; Lee, W.-S.; Cheruy, F.; Mellul, L.; Wang, Y.-C.; Johnson, K.; Thieman, M. M.

    2018-04-01

    Many Numerical Weather Prediction (NWP) and climate models exhibit too warm lower tropospheres near the midlatitude continents. The warm bias has been shown to coincide with important surface radiation biases that likely play a critical role in the inception or the growth of the warm bias. This paper presents an attribution study on the net radiation biases in nine model simulations, performed in the framework of the CAUSES project (Clouds Above the United States and Errors at the Surface). Contributions from deficiencies in the surface properties, clouds, water vapor, and aerosols are quantified, using an array of radiation measurement stations near the Atmospheric Radiation Measurement Southern Great Plains site. Furthermore, an in-depth analysis is shown to attribute the radiation errors to specific cloud regimes. The net surface shortwave radiation is overestimated in all models throughout most of the simulation period. Cloud errors are shown to contribute most to this overestimation, although nonnegligible contributions from the surface albedo exist in most models. Missing deep cloud events and/or simulating deep clouds with too weak cloud radiative effects dominate in the cloud-related radiation errors. Some models have compensating errors, simulating deep cloud too frequently but largely underestimating its radiative effect, while other models miss deep cloud events altogether. Surprisingly, even the latter models tend to produce too much and too frequent afternoon surface precipitation. This suggests that rather than issues with the triggering of deep convection, cloud radiative deficiencies are related to too weak convective cloud detrainment and too large precipitation efficiencies.

  17. Two Topics in Seasonal Streamflow Forecasting: Soil Moisture Initialization Error and Precipitation Downscaling

    NASA Technical Reports Server (NTRS)

    Koster, Randal; Walker, Greg; Mahanama, Sarith; Reichle, Rolf

    2012-01-01

    Continental-scale offline simulations with a land surface model are used to address two important issues in the forecasting of large-scale seasonal streamflow: (i) the extent to which errors in soil moisture initialization degrade streamflow forecasts, and (ii) the extent to which the downscaling of seasonal precipitation forecasts, if it could be done accurately, would improve streamflow forecasts. The reduction in streamflow forecast skill (with forecasted streamflow measured against observations) associated with adding noise to a soil moisture field is found to be, to first order, proportional to the average reduction in the accuracy of the soil moisture field itself. This result has implications for streamflow forecast improvement under satellite-based soil moisture measurement programs. In the second and more idealized ("perfect model") analysis, precipitation downscaling is found to have an impact on large-scale streamflow forecasts only if two conditions are met: (i) evaporation variance is significant relative to the precipitation variance, and (ii) the subgrid spatial variance of precipitation is adequately large. In the large-scale continental region studied (the conterminous United States), these two conditions are met in only a somewhat limited area.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elliott, C.J.; McVey, B.; Quimby, D.C.

    The level of field errors in an FEL is an important determinant of its performance. We have computed the 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code, which now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  19. Estimating error statistics for Chambon-la-Forêt observatory definitive data

    NASA Astrophysics Data System (ADS)

    Lesur, Vincent; Heumez, Benoît; Telali, Abdelkader; Lalanne, Xavier; Soloviev, Anatoly

    2017-08-01

    We propose a new algorithm for calibrating definitive observatory data with the goal of providing users with estimates of the data error standard deviations (SDs). The algorithm has been implemented and tested using Chambon-la-Forêt observatory (CLF) data. The calibration process uses all available data. It is set as a large, weakly non-linear, inverse problem that ultimately provides estimates of baseline values in three orthogonal directions, together with their expected standard deviations. For this inverse problem, absolute data error statistics are estimated from two series of absolute measurements made within a day. Similarly, variometer data error statistics are derived by comparing variometer data time series between different pairs of instruments over a few years. The comparisons of these time series led us to use an autoregressive process of order 1 (AR1 process) as a prior for the baselines. Therefore the obtained baselines do not vary smoothly in time. They have relatively small SDs, well below 300 pT when absolute data are recorded twice a week, i.e. within the daily to weekly measures recommended by INTERMAGNET. The algorithm was tested against the process traditionally used to derive baselines at CLF observatory, suggesting that statistics are less favourable when this latter process is used. Finally, two sets of definitive data were calibrated using the new algorithm. Their comparison shows that the definitive data SDs are less than 400 pT and may be slightly overestimated by our process: an indication that more work is required to obtain proper estimates of absolute data error statistics. For magnetic field modelling, the results show that even at isolated sites like CLF observatory, there are very localised signals, over a large span of temporal frequencies, that can be as large as 1 nT. The SDs reported here encompass signals with spatial scales of a few hundred metres and periods of less than a day.
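
    A minimal sketch of the AR(1) baseline prior mentioned above; the coefficient and innovation scale are illustrative assumptions, since the record does not quote them:

        import numpy as np

        def simulate_ar1(n, phi=0.98, sigma_innov=50.0, seed=0):
            """Draw one realisation of an AR(1) process,
            x[t] = phi * x[t-1] + eps[t], of the kind used here as a baseline
            prior (phi and sigma are illustrative assumptions). Units: pT."""
            rng = np.random.default_rng(seed)
            x = np.zeros(n)
            for t in range(1, n):
                x[t] = phi * x[t - 1] + rng.normal(0.0, sigma_innov)
            return x

        baseline = simulate_ar1(365)   # one year of daily baseline values
        print(baseline.std())          # spread of the prior realisation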

  20. Changes in Rod and Frame Test Scores Recorded in Schoolchildren during Development – A Longitudinal Study

    PubMed Central

    Bagust, Jeff; Docherty, Sharon; Haynes, Wayne; Telford, Richard; Isableu, Brice

    2013-01-01

    The Rod and Frame Test has been used to assess the degree to which subjects rely on the visual frame of reference to perceive vertical (visual field dependence-independence perceptual style). Early investigations found that children exhibited a wide range of alignment errors, which reduced as they matured. These studies used a mechanical Rod and Frame system and presented only mean values of grouped data. The current study also considered changes in individual performance. Changes in rod alignment accuracy in 419 school children were measured using a computer-based Rod and Frame test. Each child was tested at school Grade 2 and retested in Grades 4 and 6. The results confirmed that children displayed a wide range of alignment errors, which decreased with age but did not reach the expected adult values. Although most children showed a decrease in frame dependency over the 4 years of the study, almost 20% had increased alignment errors, suggesting that they were becoming more frame-dependent. Plots of individual variation (SD) against mean error allowed the sample to be divided into four groups: the majority, with small errors and SDs; a group with small SDs but alignments clustering around the frame angle of 18°; a group showing large errors in the opposite direction to the frame tilt; and a small number with large SDs whose alignment appeared to be random. The errors in the last three groups could largely be explained by alignment of the rod to different aspects of the frame. At corresponding ages, females exhibited larger alignment errors than males, although this did not reach statistical significance. This study confirms that children rely more heavily on the visual frame of reference for processing spatial orientation cues. Most become less frame-dependent as they mature, but there are considerable individual differences. PMID:23724139

  1. Uncertainties in extracted parameters of a Gaussian emission line profile with continuum background.

    PubMed

    Minin, Serge; Kamalabadi, Farzad

    2009-12-20

    We derive analytical equations for uncertainties in parameters extracted by nonlinear least-squares fitting of a Gaussian emission function with an unknown continuum background component in the presence of additive white Gaussian noise. The derivation is based on the inversion of the full curvature matrix (equivalent to the Fisher information matrix) of the least-squares error, χ², in a four-variable fitting parameter space. The derived uncertainty formulas (equivalent to Cramér-Rao error bounds) are found to be in good agreement with the numerically computed uncertainties from a large ensemble of simulated measurements. The derived formulas can be used for estimating minimum achievable errors for a given signal-to-noise ratio and for investigating some aspects of measurement setup trade-offs and optimization. While the intended application is Fabry-Perot spectroscopy for wind and temperature measurements in the upper atmosphere, the derivation is generic and applicable to other spectroscopy problems with a Gaussian line shape.
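
    A minimal numerical counterpart to the derivation described above: for additive white Gaussian noise, the Fisher matrix is J^T J / sigma^2, with J the Jacobian of the four-parameter model, and the Cramér-Rao bounds follow from its inverse (parameter values are illustrative):

        import numpy as np

        def model(x, amp, center, width, bg):
            """Gaussian emission line on a flat continuum background."""
            return amp * np.exp(-0.5 * ((x - center) / width) ** 2) + bg

        def crlb(x, params, noise_sd, h=1e-6):
            """Cramér-Rao lower bounds via a numerical Jacobian: for additive
            white Gaussian noise, Fisher = J^T J / noise_sd^2."""
            params = np.asarray(params, dtype=float)
            J = np.empty((x.size, params.size))
            for i in range(params.size):
                p_hi, p_lo = params.copy(), params.copy()
                p_hi[i] += h
                p_lo[i] -= h
                J[:, i] = (model(x, *p_hi) - model(x, *p_lo)) / (2 * h)
            fisher = J.T @ J / noise_sd**2
            return np.sqrt(np.diag(np.linalg.inv(fisher)))  # 1-sigma bounds

        x = np.linspace(-10, 10, 200)
        print(crlb(x, [1.0, 0.0, 1.5, 0.2], noise_sd=0.05))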

  2. Estimation of genetic connectedness diagnostics based on prediction errors without the prediction error variance-covariance matrix.

    PubMed

    Holmes, John B; Dodds, Ken G; Lee, Michael A

    2017-03-02

    An important issue in genetic evaluation is the comparability of random effects (breeding values), particularly between pairs of animals in different contemporary groups. This is usually referred to as genetic connectedness. While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix. However, obtaining the prediction error variance-covariance matrix is computationally demanding for large-scale genetic evaluations. Many alternative statistics have been proposed that avoid the computational cost of obtaining the prediction error variance-covariance matrix, such as counts of genetic links between contemporary groups, gene flow matrices, and functions of the variance-covariance matrix of estimated contemporary group fixed effects. In this paper, we show that a correction to the variance-covariance matrix of estimated contemporary group fixed effects will produce the exact prediction error variance-covariance matrix averaged by contemporary group for univariate models in the presence of single or multiple fixed effects and one random effect. We demonstrate the correction for a series of models and show that approximations to the prediction error matrix based solely on the variance-covariance matrix of estimated contemporary group fixed effects are inappropriate in certain circumstances. Our method allows for the calculation of a connectedness measure based on the prediction error variance-covariance matrix by calculating only the variance-covariance matrix of estimated fixed effects. Since the number of fixed effects in genetic evaluation is usually orders of magnitude smaller than the number of random effect levels, the computational requirements for our method should be reduced.
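
    For orientation, a tiny sketch of the quantity the paper's shortcut avoids computing directly: the PEV block extracted from the inverse of the mixed-model coefficient matrix (all matrices are illustrative stand-ins, not the paper's method):

        import numpy as np

        # Tiny illustrative mixed-model setup: y = Xb + Zu + e,
        # with var(u) = G and var(e) = R.
        X = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])  # 2 contemporary groups
        Z = np.eye(4)                                           # 4 animals
        G_inv = np.eye(4) * 0.5                                 # inverse genetic (co)variance
        R_inv = np.eye(4)                                       # inverse residual variance

        # Coefficient matrix of the mixed-model equations; the PEV of the
        # breeding values is the lower-right block of its inverse.
        C = np.block([[X.T @ R_inv @ X, X.T @ R_inv @ Z],
                      [Z.T @ R_inv @ X, Z.T @ R_inv @ Z + G_inv]])
        pev = np.linalg.inv(C)[X.shape[1]:, X.shape[1]:]
        print(pev)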

  3. Use of a control test to aid pH assessment of chemical eye injuries.

    PubMed

    Connor, A J; Severn, P

    2009-11-01

    Chemical burns of the eye represent 7.0%-9.9% of all ocular trauma. Initial management of ocular chemical injuries is irrigation of the eye and conjunctival sac until neutralisation of the tear surface pH is achieved. We present a case of alkali injury in which the raised tear film pH seemed to be unresponsive to irrigation treatment. Suspicion was raised about the accuracy of the litmus paper used to test the tear film pH. The error was confirmed by use of a control litmus pH test on the examining doctor's eyes. Errors in litmus paper pH measurement can occur because of difficulty in matching the paper with scale colours and drying of the paper, which produces a darker colour. A small tear film sample can also create difficulty in colour matching, whereas too large a sample can wash away pigment from the litmus paper. Samples measured too quickly after irrigation can result in a falsely neutral pH measurement. Use of faulty or inappropriate materials can also result in errors. We advocate the use of a control litmus pH test in all patients. This would highlight errors in pH measurements and aid in the detection of the end point of irrigation.

  4. A Simulation Study of Categorizing Continuous Exposure Variables Measured with Error in Autism Research: Small Changes with Large Effects.

    PubMed

    Heavner, Karyn; Burstyn, Igor

    2015-08-24

    Variation in the odds ratio (OR) resulting from selection of cutoffs for categorizing continuous variables is rarely discussed. We present results for the effect of varying cutoffs used to categorize a mismeasured exposure in a simulated population in the context of autism spectrum disorders research. Simulated cohorts were created with three distinct exposure-outcome curves and three measurement error variances for the exposure. ORs were calculated using logistic regression for 61 cutoffs (mean ± 3 standard deviations) used to dichotomize the observed exposure. ORs were calculated for five categories with a wide range for the cutoffs. For each scenario and cutoff, the OR, sensitivity, and specificity were calculated. The three exposure-outcome relationships had distinctly shaped OR (versus cutoff) curves, but increasing measurement error obscured the shape. At extreme cutoffs, there was non-monotonic oscillation in the ORs that cannot be attributed to "small numbers." Exposure misclassification following categorization of the mismeasured exposure was differential, as predicted by theory. Sensitivity was higher among cases and specificity among controls. Cutoffs chosen for categorizing continuous variables can have profound effects on study results. When measurement error is not too great, the shape of the OR curve may provide insight into the true shape of the exposure-disease relationship.
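
    A minimal sketch of the simulation design described above, assuming a logistic exposure-outcome curve and classical measurement error; the OR from the 2x2 table visibly moves with the chosen cutoff (all parameters are illustrative):

        import numpy as np

        rng = np.random.default_rng(42)
        n = 50_000
        true_x = rng.normal(0.0, 1.0, n)                  # true continuous exposure
        obs_x = true_x + rng.normal(0.0, 0.5, n)          # mismeasured exposure
        p = 1.0 / (1.0 + np.exp(-(-2.0 + 1.0 * true_x)))  # logistic exposure-outcome curve
        y = rng.random(n) < p

        def odds_ratio(exposed, outcome):
            """OR from the 2x2 table of a dichotomized exposure."""
            a = np.sum(exposed & outcome)
            b = np.sum(exposed & ~outcome)
            c = np.sum(~exposed & outcome)
            d = np.sum(~exposed & ~outcome)
            return (a * d) / (b * c)

        for cutoff in (-1.0, 0.0, 1.0):                   # three of many possible cutoffs
            print(cutoff, odds_ratio(obs_x > cutoff, y))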

  5. CAUSES: On the Role of Surface Energy Budget Errors to the Warm Surface Air Temperature Error Over the Central United States

    DOE PAGES

    Ma, H. -Y.; Klein, S. A.; Xie, S.; ...

    2018-02-27

    Many weather forecast and climate models simulate warm surface air temperature (T2m) biases over midlatitude continents during the summertime, especially over the Great Plains. We present here one of a series of papers from a multimodel intercomparison project (CAUSES: Cloud Above the United States and Errors at the Surface), which aims to evaluate the role of cloud, radiation, and precipitation biases in contributing to the T2m bias using a short-term hindcast approach during the spring and summer of 2011. Observations are mainly from the Atmospheric Radiation Measurement Southern Great Plains sites. The present study examines the contributions of surface energy budget errors. All participating models simulate too much net shortwave and longwave fluxes at the surface but with no consistent mean bias sign in turbulent fluxes over the Central United States and Southern Great Plains. Nevertheless, biases in the net shortwave and downward longwave fluxes as well as surface evaporative fraction (EF) are contributors to T2m bias. Radiation biases are largely affected by cloud simulations, while EF bias is largely affected by soil moisture modulated by seasonal accumulated precipitation and evaporation. An approximate equation based upon the surface energy budget is derived to further quantify the magnitudes of radiation and EF contributions to T2m bias. Our analysis indicates that a large EF underestimate is the dominant source of error in all models with a large positive temperature bias, whereas an EF overestimate compensates for an excess of absorbed shortwave radiation in nearly all the models with the smallest temperature bias.

  6. CAUSES: On the Role of Surface Energy Budget Errors to the Warm Surface Air Temperature Error Over the Central United States

    NASA Astrophysics Data System (ADS)

    Ma, H.-Y.; Klein, S. A.; Xie, S.; Zhang, C.; Tang, S.; Tang, Q.; Morcrette, C. J.; Van Weverberg, K.; Petch, J.; Ahlgrimm, M.; Berg, L. K.; Cheruy, F.; Cole, J.; Forbes, R.; Gustafson, W. I.; Huang, M.; Liu, Y.; Merryfield, W.; Qian, Y.; Roehrig, R.; Wang, Y.-C.

    2018-03-01

    Many weather forecast and climate models simulate warm surface air temperature (T2m) biases over midlatitude continents during the summertime, especially over the Great Plains. We present here one of a series of papers from a multimodel intercomparison project (CAUSES: Cloud Above the United States and Errors at the Surface), which aims to evaluate the role of cloud, radiation, and precipitation biases in contributing to the T2m bias using a short-term hindcast approach during the spring and summer of 2011. Observations are mainly from the Atmospheric Radiation Measurement Southern Great Plains sites. The present study examines the contributions of surface energy budget errors. All participating models simulate too much net shortwave and longwave fluxes at the surface but with no consistent mean bias sign in turbulent fluxes over the Central United States and Southern Great Plains. Nevertheless, biases in the net shortwave and downward longwave fluxes as well as surface evaporative fraction (EF) are contributors to T2m bias. Radiation biases are largely affected by cloud simulations, while EF bias is largely affected by soil moisture modulated by seasonal accumulated precipitation and evaporation. An approximate equation based upon the surface energy budget is derived to further quantify the magnitudes of radiation and EF contributions to T2m bias. Our analysis indicates that a large EF underestimate is the dominant source of error in all models with a large positive temperature bias, whereas an EF overestimate compensates for an excess of absorbed shortwave radiation in nearly all the models with the smallest temperature bias.

  7. CAUSES: On the Role of Surface Energy Budget Errors to the Warm Surface Air Temperature Error Over the Central United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, H. -Y.; Klein, S. A.; Xie, S.

    Many weather forecast and climate models simulate warm surface air temperature (T2m) biases over midlatitude continents during the summertime, especially over the Great Plains. We present here one of a series of papers from a multimodel intercomparison project (CAUSES: Cloud Above the United States and Errors at the Surface), which aims to evaluate the role of cloud, radiation, and precipitation biases in contributing to the T2m bias using a short-term hindcast approach during the spring and summer of 2011. Observations are mainly from the Atmospheric Radiation Measurement Southern Great Plains sites. The present study examines the contributions of surface energy budget errors. All participating models simulate too much net shortwave and longwave fluxes at the surface but with no consistent mean bias sign in turbulent fluxes over the Central United States and Southern Great Plains. Nevertheless, biases in the net shortwave and downward longwave fluxes as well as surface evaporative fraction (EF) are contributors to T2m bias. Radiation biases are largely affected by cloud simulations, while EF bias is largely affected by soil moisture modulated by seasonal accumulated precipitation and evaporation. An approximate equation based upon the surface energy budget is derived to further quantify the magnitudes of radiation and EF contributions to T2m bias. Our analysis indicates that a large EF underestimate is the dominant source of error in all models with a large positive temperature bias, whereas an EF overestimate compensates for an excess of absorbed shortwave radiation in nearly all the models with the smallest temperature bias.

  8. An Automated Inner Dimensional Measurement System Based on a Laser Displacement Sensor for Long-Stepped Pipes

    PubMed Central

    Zhang, Fumin; Qu, Xinghua; Ouyang, Jianfei

    2012-01-01

    A novel measurement prototype based on a mobile vehicle that carries a laser scanning sensor is proposed. The prototype is intended for the automated measurement of the interior 3D geometry of large-diameter long-stepped pipes. The laser displacement sensor, which has a small measurement range, is mounted on an extended arm of known length. It is scanned to improve the measurement accuracy for large-sized pipes. A fixing mechanism based on two sections is designed to ensure that the stepped pipe is concentric with the axis of rotation of the system. Data are acquired in a cylindrical coordinate system and fitted to a circle to determine the diameter. Systematic errors covering arm length, tilt, and offset errors are analyzed and calibrated. The proposed system is applied to sample parts and the results are discussed to verify its effectiveness. This technique measures a diameter of 600 mm with an uncertainty of 0.02 mm at 95% confidence. A repeatability test is performed to examine precision, which is 1.1 μm. A laser tracker is used to verify the measurement accuracy of the system, which is evaluated as 9 μm within a diameter of 600 mm. PMID:22778615
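
    A minimal sketch of the circle-fitting step described above, using the algebraic (Kasa) least-squares fit; the noise level is an illustrative assumption, not the prototype's specification:

        import numpy as np

        def fit_circle(x, y):
            """Algebraic (Kasa) least-squares circle fit: solve
            x^2 + y^2 = 2*a*x + 2*b*y + c for centre (a, b) and radius."""
            A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
            rhs = x**2 + y**2
            (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
            radius = np.sqrt(c + a**2 + b**2)
            return a, b, radius

        theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
        r_meas = 300.0 + np.random.default_rng(1).normal(0, 0.005, theta.size)  # mm
        x, y = r_meas * np.cos(theta), r_meas * np.sin(theta)
        a, b, r = fit_circle(x, y)
        print(f"diameter = {2 * r:.3f} mm")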

  9. Sensitivity to prediction error in reach adaptation

    PubMed Central

    Haith, Adrian M.; Harran, Michelle D.; Shadmehr, Reza

    2012-01-01

    It has been proposed that the brain predicts the sensory consequences of a movement and compares them to the actual sensory feedback. When the two differ, an error signal is formed, driving adaptation. How does an error in one trial alter performance in the subsequent trial? Here we show that the sensitivity to error is not constant but declines as a function of error magnitude. That is, one learns relatively less from large errors compared with small errors. We performed an experiment in which humans made reaching movements and randomly experienced an error in both their visual and proprioceptive feedback. Proprioceptive errors were created with force fields, and visual errors were formed by perturbing the cursor trajectory to create a visual error that was smaller, the same size, or larger than the proprioceptive error. We measured single-trial adaptation and calculated sensitivity to error, i.e., the ratio of the trial-to-trial change in motor commands to error size. We found that for both sensory modalities sensitivity decreased with increasing error size. A reanalysis of a number of previously published psychophysical results also exhibited this feature. Finally, we asked how the brain might encode sensitivity to error. We reanalyzed previously published probabilities of cerebellar complex spikes (CSs) and found that this probability declined with increasing error size. From this we posit that a CS may be representative of the sensitivity to error, and not error itself, a hypothesis that may explain conflicting reports about CSs and their relationship to error. PMID:22773782
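
    A minimal sketch of the sensitivity measure defined above, the ratio of the trial-to-trial change in motor commands to error size (numbers are illustrative, not the study's data):

        import numpy as np

        def error_sensitivity(delta_command, error):
            """Trial-to-trial sensitivity: change in motor command divided by
            the size of the error experienced on the previous trial."""
            return delta_command / error

        # Hypothetical single-trial data: adaptation grows with error size,
        # but sublinearly, so the sensitivity (the ratio) declines.
        errors = np.array([1.0, 2.0, 4.0, 8.0])      # perturbation size (deg)
        delta = np.array([0.30, 0.50, 0.75, 1.00])   # next-trial correction (deg)
        print(error_sensitivity(delta, errors))      # decreasing with error size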

  10. Mass measurement errors of Fourier-transform mass spectrometry (FTMS): distribution, recalibration, and application.

    PubMed

    Zhang, Jiyang; Ma, Jie; Dou, Lei; Wu, Songfeng; Qian, Xiaohong; Xie, Hongwei; Zhu, Yunping; He, Fuchu

    2009-02-01

    The hybrid linear trap quadrupole Fourier-transform (LTQ-FT) ion cyclotron resonance mass spectrometer, an instrument with high accuracy and resolution, is widely used in the identification and quantification of peptides and proteins. However, time-dependent errors in the system may lead to deterioration of the accuracy of these instruments, negatively influencing the determination of the mass error tolerance (MET) in database searches. Here, a comprehensive discussion of LTQ/FT precursor ion mass error is provided. On the basis of an investigation of the mass error distribution, we propose an improved recalibration formula and introduce a new tool, FTDR (Fourier-transform data recalibration), that employs a graphic user interface (GUI) for automatic calibration. It was found that the calibration could adjust the mass error distribution to more closely approximate a normal distribution and reduce the standard deviation (SD). Consequently, we present a new strategy, LDSF (Large MET database search and small MET filtration), for database search MET specification and validation of database search results. As the name implies, a large-MET database search is conducted and the search results are then filtered using the statistical MET estimated from high-confidence results. By applying this strategy to a standard protein data set and a complex data set, we demonstrate the LDSF can significantly improve the sensitivity of the result validation procedure.
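
    A minimal sketch of the two steps described above, assuming hypothetical helpers recalibrate_ppm and ldsf_met; the linear fit is a simplified stand-in for the FTDR formula, which the abstract does not reproduce:

        import numpy as np

        def recalibrate_ppm(measured_mz, theoretical_mz):
            """Remove a systematic mass error via a least-squares linear fit of
            the ppm error against m/z (simplified stand-in for FTDR)."""
            ppm = 1e6 * (measured_mz - theoretical_mz) / theoretical_mz
            slope, intercept = np.polyfit(measured_mz, ppm, 1)
            return measured_mz / (1 + (slope * measured_mz + intercept) / 1e6)

        def ldsf_met(high_conf_ppm, k=3.0):
            """LDSF filtration step: estimate a small MET from high-confidence
            hits as mean +/- k standard deviations of their ppm errors."""
            mu, sd = np.mean(high_conf_ppm), np.std(high_conf_ppm)
            return mu - k * sd, mu + k * sd

        mz_theo = np.array([500.0, 800.0, 1200.0, 2000.0])
        mz_meas = mz_theo * (1 + (2.0 + 0.001 * mz_theo) / 1e6)  # drifting ppm bias
        print(recalibrate_ppm(mz_meas, mz_theo) - mz_theo)       # residuals near zero
        print(ldsf_met(np.array([0.5, -0.2, 0.8, 0.1, -0.4])))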

  11. A variable acceleration calibration system

    NASA Astrophysics Data System (ADS)

    Johnson, Thomas H.

    2011-12-01

    A variable acceleration calibration system that applies loads using gravitational and centripetal acceleration serves as an alternative, efficient and cost-effective method for calibrating internal wind tunnel force balances. Two proof-of-concept variable acceleration calibration systems are designed, fabricated and tested. The NASA UT-36 force balance served as the test balance for the calibration experiments. The variable acceleration calibration systems are shown to be capable of performing three-component calibration experiments with an approximate applied load error on the order of 1% of the full-scale calibration loads. Sources of error are identified using experimental design methods and a propagation of uncertainty analysis. Three types of uncertainty are identified for the systems and are attributed to prediction error, calibration error and pure error. Angular velocity uncertainty is shown to be the largest identified source of prediction error. The calibration uncertainties using a production variable acceleration based system are shown to be potentially equivalent to current methods. The production quality system can be realized using lighter materials and more precise instrumentation. Further research is needed to account for balance deflection, forcing effects due to vibration, and large tare loads. A gyroscope measurement technique is shown to be capable of resolving the balance deflection angle. Long term research objectives include a demonstration of a six degree of freedom calibration, and a large capacity balance calibration.
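
    A minimal sketch of the centripetal loading principle and the angular-velocity error propagation identified above as the largest prediction-error source (values are illustrative):

        import numpy as np

        def centripetal_load(mass_kg, radius_m, omega_rad_s):
            """Calibration load applied by spinning a mass at angular velocity
            omega: F = m * r * omega^2."""
            return mass_kg * radius_m * omega_rad_s**2

        def load_uncertainty(mass_kg, radius_m, omega_rad_s, sigma_omega):
            """First-order propagation of the angular-velocity uncertainty:
            dF/domega = 2 * m * r * omega."""
            return 2.0 * mass_kg * radius_m * omega_rad_s * sigma_omega

        F = centripetal_load(2.0, 0.5, 10.0)          # 100 N
        sF = load_uncertainty(2.0, 0.5, 10.0, 0.05)   # 1 N, i.e. ~1% of the load
        print(F, sF)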

  12. The Calibration of the Slotted Section for Precision Microwave Measurements

    DTIC Science & Technology

    1952-03-01

    Calibration Curve for Lossless Structures; B. The Correction Relations for Dissipative Structures; C. The Effect of an Error in the Variable Short... A discussion of probe effects and a method of correction for large insertion depths are given in the literature. This report is concerned solely with error source (c). ... The presence of the slot in the slotted section introduces effects: (a) the slot loads the waveguide ...

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xing, Y; Macq, B; Bondar, L

    Purpose: To quantify the accuracy in predicting the Bragg peak position using simulated in-room measurements of prompt gamma (PG) emissions for realistic treatment error scenarios that combine several sources of errors. Methods: Prompt gamma measurements by a knife-edge slit camera were simulated using an experimentally validated analytical simulation tool. Simulations were performed, for 143 treatment error scenarios, on an anthropomorphic phantom and a pencil beam scanning plan for the nasal cavity. Three types of errors were considered: translation along each axis, rotation around each axis, and CT-calibration errors, with magnitudes ranging, respectively, between −3 and 3 mm, −5 and 5 degrees, and −5 and +5%. We investigated the correlation between the Bragg peak (BP) shift and the horizontal shift of PG profiles. The shifts were calculated between the planned (reference) position and the position given by the error scenario. The prediction error for one spot was calculated as the absolute difference between the PG profile shift and the BP shift. Results: The PG shift was significantly and strongly correlated with the BP shift for 92% of the cases (p<0.0001, Pearson correlation coefficient R>0.8). Moderate but significant correlations were obtained for all cases that considered only CT-calibration errors and for 1 case that combined translation and CT-errors (p<0.0001, R ranged between 0.61 and 0.8). The average prediction errors for the simulated scenarios ranged between 0.08±0.07 and 1.67±1.3 mm (grand mean 0.66±0.76 mm). The prediction error was moderately correlated with the value of the BP shift (p=0, R=0.64). For the simulated scenarios the average BP shift ranged between −8±6.5 mm and 3±1.1 mm. Scenarios that considered combinations of the largest treatment errors were associated with large BP shifts. Conclusion: Simulations of in-room measurements demonstrate that prompt gamma profiles provide reliable estimation of the Bragg peak position for complex error scenarios. Yafei Xing and Luiza Bondar are funded by BEWARE grants from the Walloon Region. The work presents simulation results for a prompt gamma camera prototype developed by IBA.
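
    A minimal sketch of the per-spot prediction error defined above (numbers are illustrative, within the reported range):

        def bp_prediction_error_mm(pg_shift_mm, bp_shift_mm):
            """Per-spot prediction error: absolute difference between the
            horizontal shift of the prompt-gamma profile and the Bragg peak
            shift, both taken between the planned position and the position
            under the error scenario."""
            return abs(pg_shift_mm - bp_shift_mm)

        print(bp_prediction_error_mm(-7.2, -8.0))   # 0.8 mm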

  14. Adjoints and Low-rank Covariance Representation

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; Cohn, Stephen E.

    2000-01-01

    Quantitative measures of the uncertainty of Earth System estimates can be as important as the estimates themselves. Second moments of estimation errors are described by the covariance matrix, whose direct calculation is impractical when the number of degrees of freedom of the system state is large. Ensemble and reduced-state approaches to prediction and data assimilation replace full estimation error covariance matrices by low-rank approximations. The appropriateness of such approximations depends on the spectrum of the full error covariance matrix, whose calculation is also often impractical. Here we examine the situation where the error covariance is a linear transformation of a forcing error covariance. We use operator norms and adjoints to relate the appropriateness of low-rank representations to the conditioning of this transformation. The analysis is used to investigate low-rank representations of the steady-state response to random forcing of an idealized discrete-time dynamical system.
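
    A minimal sketch of a low-rank covariance representation, assuming the standard truncated eigendecomposition; how much variance a given rank captures depends on the spectrum, which is the paper's point:

        import numpy as np

        def low_rank_approx(cov, k):
            """Best rank-k approximation of a covariance matrix: keep the
            leading k eigenpairs. Its adequacy depends on the eigenvalue
            spectrum of the full matrix."""
            vals, vecs = np.linalg.eigh(cov)          # ascending eigenvalues
            vals, vecs = vals[::-1], vecs[:, ::-1]    # reorder to descending
            approx = vecs[:, :k] @ np.diag(vals[:k]) @ vecs[:, :k].T
            captured = vals[:k].sum() / vals.sum()    # fraction of total variance
            return approx, captured

        rng = np.random.default_rng(0)
        A = rng.normal(size=(100, 100))
        cov = A @ A.T / 100                           # synthetic full-rank covariance
        approx, captured = low_rank_approx(cov, 10)
        print(f"rank 10 captures {captured:.1%} of the variance")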

  15. Derivation of an analytic expression for the error associated with the noise reduction rating

    NASA Astrophysics Data System (ADS)

    Murphy, William J.

    2005-04-01

    Hearing protection devices are assessed using the Real Ear Attenuation at Threshold (REAT) measurement procedure for the purpose of estimating the amount of noise reduction provided when worn by a subject. The rating number provided on the protector label is a function of the mean and standard deviation of the REAT results achieved by the test subjects. If a group of subjects has a large variance, then it follows that the certainty of the rating should be correspondingly lower. No estimate of the error of a protector's rating is given by existing standards or regulations. Propagation of errors was applied to the Noise Reduction Rating to develop an analytic expression for the hearing protector rating error term. Comparison of the analytic expression for the error to the standard deviation estimated from Monte Carlo simulation of subject attenuations yielded a linear relationship across several protector types and assumptions for the variance of the attenuations.
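
    A minimal sketch of the propagation-of-errors idea, applied to a simplified mean-minus-two-SD rating as a stand-in for the full NRR formula (which the abstract does not reproduce):

        import numpy as np

        def rating_error(sd, n):
            """Standard error of a mean-minus-2-SD style rating by propagation
            of errors: se(mean) = sd/sqrt(n), se(sd) ~= sd/sqrt(2*(n-1)), so
            se(rating) = sqrt(sd^2/n + 4*sd^2/(2*(n-1)))."""
            return np.sqrt(sd**2 / n + 4.0 * sd**2 / (2.0 * (n - 1)))

        # dB; a larger subject variance yields a less certain rating.
        print(rating_error(sd=6.0, n=10))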

  16. RECONSTRUCTING REDSHIFT DISTRIBUTIONS WITH CROSS-CORRELATIONS: TESTS AND AN OPTIMIZED RECIPE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthews, Daniel J.; Newman, Jeffrey A., E-mail: djm70@pitt.ed, E-mail: janewman@pitt.ed

    2010-09-20

    Many of the cosmological tests to be performed by planned dark energy experiments will require extremely well-characterized photometric redshift measurements. Current estimates for cosmic shear are that the true mean redshift of the objects in each photo-z bin must be known to better than 0.002(1 + z), and the width of the bin must be known to ~0.003(1 + z), if errors in cosmological measurements are not to be degraded significantly. A conventional approach is to calibrate these photometric redshifts with large sets of spectroscopic redshifts. However, at the depths probed by Stage III surveys (such as DES), let alone Stage IV (LSST, JDEM, and Euclid), existing large redshift samples have all been highly (25%-60%) incomplete, with a strong dependence of success rate on both redshift and galaxy properties. A powerful alternative approach is to exploit the clustering of galaxies to perform photometric redshift calibrations. Measuring the two-point angular cross-correlation between objects in some photometric redshift bin and objects with known spectroscopic redshift, as a function of the spectroscopic z, allows the true redshift distribution of a photometric sample to be reconstructed in detail, even if it includes objects too faint for spectroscopy or if spectroscopic samples are highly incomplete. We test this technique using mock DEEP2 Galaxy Redshift survey light cones constructed from the Millennium Simulation semi-analytic galaxy catalogs. From this realistic test, which incorporates the effects of galaxy bias evolution and cosmic variance, we find that the true redshift distribution of a photometric sample can, in fact, be determined accurately with cross-correlation techniques. We also compare the empirical error in the reconstruction of redshift distributions to previous analytic predictions, finding that additional components must be included in error budgets to match the simulation results. This extra error contribution is small for surveys that sample large areas of sky (>~10°-100°), but dominant for ~1 deg² fields. We conclude by presenting a step-by-step, optimized recipe for reconstructing redshift distributions from cross-correlation information using standard correlation measurements.

  17. SU-F-T-320: Assessing Placement Error of Optically Stimulated Luminescent in Vivo Dosimeters Using Cone-Beam Computed Tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riegel, A; Klein, E; Tariq, M

    Purpose: Optically-stimulated luminescent dosimeters (OSLDs) are increasingly utilized for in vivo dosimetry of complex radiation delivery techniques such as intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT). Evaluation of clinical uncertainties such as placement error has not been performed. This work retrospectively investigates the magnitude of placement error using cone-beam computed tomography (CBCT) and its effect on measured/planned dose agreement. Methods: Each OSLD was placed at a physicist-designated location on the patient surface on a weekly basis. The location was given in terms of a gantry angle and a two-dimensional offset from central axis. The OSLDs were placed before daily image guidance. We identified 77 CBCTs from 25 head-and-neck patients who received IMRT or VMAT, where OSLDs were visible on the CT image. Grossly misplaced OSLDs were excluded (e.g. wrong laterality). CBCTs were registered with the treatment plan and the distance between the planned and actual OSLD location was calculated in two dimensions in the beam's eye view. Distances were correlated with measured/planned dose percent differences. Results: OSLDs were grossly misplaced for 5 CBCTs (6.4%). For the remaining 72 CBCTs, average placement error was 7.0±6.0 mm. These errors were not correlated with measured/planned dose percent differences (R² = 0.0153). Generalizing the dosimetric effect of placement errors may therefore be unreliable. Conclusion: Correct placement of OSLDs for IMRT and VMAT treatments is critical to accurate and precise in vivo dosimetry. Small placement errors could produce large disagreement between measured and planned dose. Further work includes expansion to other treatment sites, examination of planned dose at the actual point of OSLD placement, and the influence of image-guided shifts on measured/planned dose agreement.
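
    A minimal sketch of the placement-error distance and correlation computation described above (coordinates and dose differences are illustrative, not patient data):

        import numpy as np

        def placement_error_mm(planned_xy, actual_xy):
            """2D distance in the beam's eye view between planned and
            CBCT-derived OSLD positions (coordinates in mm)."""
            return np.linalg.norm(np.asarray(actual_xy) - np.asarray(planned_xy),
                                  axis=-1)

        # Hypothetical per-fraction data: placement errors vs measured/planned
        # dose percent differences; near-zero R^2 mirrors the reported result.
        dist = placement_error_mm([[0, 0]] * 4, [[3, 4], [1, 1], [10, 2], [6, 8]])
        dose_diff = np.array([2.1, -1.5, 0.8, 3.0])
        r = np.corrcoef(dist, dose_diff)[0, 1]
        print(dist, r**2)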

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McInerney, Peter; Adams, Paul; Hadi, Masood Z.

    As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Of the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition.
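
    A minimal sketch of a standard per-base error-rate calculation of the kind such comparisons rest on; the counts and the number of doublings are illustrative assumptions:

        def per_base_error_rate(mutations, bases_sequenced, doublings):
            """Error rate per base per template doubling: observed mutations
            accumulate over `doublings` rounds of copying, so divide them out."""
            return mutations / (bases_sequenced * doublings)

        # Hypothetical counts: 25 mutations across 1.0e6 cloned-product bases
        # after 20 PCR doublings.
        print(per_base_error_rate(25, 1.0e6, 20))   # 1.25e-06 errors/base/doubling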

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fuangrod, T; Simpson, J; Greer, P

    Purpose: A real-time patient treatment delivery verification system using EPID (Watchdog) has been developed as an advanced patient safety tool. In a pilot study, data were acquired for 119 prostate and head and neck (HN) IMRT patient deliveries to generate body-site-specific action limits using statistical process control. The purpose of this study is to determine the sensitivity of Watchdog to detect clinically significant errors during treatment delivery. Methods: Watchdog utilizes a physics-based model to generate a series of predicted transit cine EPID images as a reference data set, and compares these in real-time to measured transit cine EPID images acquired during treatment using chi comparison (4%, 4 mm criteria) after the initial 2 s of treatment to allow for dose ramp-up. Four study cases were used: dosimetric (monitor unit) errors in prostate (7 fields) and HN (9 fields) IMRT treatments of (5%, 7%, 10%) and positioning (systematic displacement) errors in the same treatments of (5 mm, 7 mm, 10 mm). These errors were introduced by modifying the patient CT scan and re-calculating the predicted EPID data set. The error-embedded predicted EPID data sets were compared to the measured EPID data acquired during patient treatment. The treatment delivery percentage (measured from 2 s) at which Watchdog detected the error was determined. Results: Watchdog detected all simulated errors for all fields during delivery. The dosimetric errors were detected at average treatment delivery percentages of (4%, 0%, 0%) and (7%, 0%, 0%) for prostate and HN, respectively. For patient positional errors, the average treatment delivery percentages were (52%, 43%, 25%) and (39%, 16%, 6%). Conclusion: These results suggest that Watchdog can detect significant dosimetric and positioning errors in prostate and HN IMRT treatments in real-time, allowing for treatment interruption. Displacements of the patient take longer to detect; however, incorrect body site or very large geographic misses will be detected rapidly.
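
    A minimal sketch of a chi comparison of the kind named above, normalising the dose difference by the quadrature sum of the dose criterion and the local gradient times the distance criterion (the global 4%-of-max normalisation is one convention, assumed here; the images are toys):

        import numpy as np

        def chi_map(measured, predicted, pixel_mm, dose_crit=0.04, dist_crit_mm=4.0):
            """Chi comparison: dose difference normalised by the quadrature sum
            of the dose criterion and the local predicted-dose gradient times
            the distance criterion; |chi| <= 1 passes. Criteria follow the
            abstract's 4%, 4 mm."""
            gy, gx = np.gradient(predicted, pixel_mm)     # dose gradient per mm
            grad_mag = np.sqrt(gx**2 + gy**2)
            denom = np.sqrt((dose_crit * predicted.max())**2
                            + (grad_mag * dist_crit_mm)**2)
            return (measured - predicted) / denom

        pred = np.outer(np.hanning(64), np.hanning(64))   # toy transit image
        meas = pred * 1.05                                # 5% delivery error
        fail_fraction = np.mean(np.abs(chi_map(meas, pred, pixel_mm=0.4)) > 1.0)
        print(fail_fraction)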

  20. Can standard cosmological models explain the observed Abell cluster bulk flow?

    NASA Technical Reports Server (NTRS)

    Strauss, Michael A.; Cen, Renyue; Ostriker, Jeremiah P.; Lauer, Tod R.; Postman, Marc

    1995-01-01

    Lauer and Postman (LP) observed that all Abell clusters with redshifts less than 15,000 km/s appear to be participating in a bulk flow of 689 km/s with respect to the cosmic microwave background. We find this result difficult to reconcile with all popular models for large-scale structure formation that assume Gaussian initial conditions. This conclusion is based on Monte Carlo realizations of the LP data, drawn from large particle-mesh N-body simulations for six different models of the initial power spectrum (standard, tilted, and Ω₀ = 0.3 cold dark matter, and two variants of the primordial baryon isocurvature model). We have taken special care to treat properly the longest-wavelength components of the power spectra. The simulations are sampled, 'observed,' and analyzed as identically as possible to the LP cluster sample. Large-scale bulk flows as measured from clusters in the simulations are in excellent agreement with those measured from the grid: the clusters do not exhibit any strong velocity bias on large scales. Bulk flows with amplitude as large as that reported by LP are not uncommon in the Monte Carlo data sets; the distribution of measured bulk flows before error bias subtraction is roughly Maxwellian, with a peak around 400 km/s. However, the chi-squared of the observed bulk flow, taking into account the anisotropy of the error ellipsoid, is much more difficult to match in the simulations. The models examined are ruled out at confidence levels between 94% and 98%.
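
    A minimal sketch of the chi-squared statistic described above, measuring a bulk-flow vector against an anisotropic error ellipsoid (the covariance values are illustrative, not LP's):

        import numpy as np

        def bulk_flow_chi2(u, cov):
            """Chi-squared of a measured bulk-flow vector u against the
            anisotropic error ellipsoid described by covariance `cov`:
            chi^2 = u^T C^{-1} u."""
            u = np.asarray(u, dtype=float)
            return u @ np.linalg.solve(cov, u)

        # Hypothetical numbers: a 689 km/s flow along x, with unequal axis errors.
        u = np.array([689.0, 0.0, 0.0])
        cov = np.diag([180.0**2, 250.0**2, 320.0**2])   # km^2/s^2, illustrative
        print(bulk_flow_chi2(u, cov))                   # compare to chi2, 3 dof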

  1. Estimation of health effects of prenatal methylmercury exposure using structural equation models.

    PubMed

    Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe; Weihe, Pal

    2002-10-14

    Observational studies in epidemiology always involve concerns regarding validity, especially measurement error, confounding, missing data, and other problems that may affect the study outcomes. Widely used standard statistical techniques, such as multiple regression analysis, may to some extent adjust for these shortcomings. However, structural equations may incorporate most of these considerations, thereby providing overall adjusted estimations of associations. This approach was used in a large epidemiological data set from a prospective study of developmental methylmercury toxicity. Structural equation models were developed for assessment of the association between biomarkers of prenatal mercury exposure and neuropsychological test scores in 7-year-old children. Eleven neurobehavioral outcomes were grouped into motor function and verbally mediated function. Adjustment for local dependence and item bias was necessary for a satisfactory fit of the model, but had little impact on the estimated mercury effects. The mercury effect on the two latent neurobehavioral functions was similar to the strongest effects seen for individual test scores of motor function and verbal skills. Adjustment for contaminant exposure to polychlorinated biphenyls (PCBs) changed the estimates only marginally, but the mercury effect could be reduced to non-significance by assuming a large measurement error for the PCB biomarker. The structural equation analysis allows correction for measurement error in exposure variables, incorporation of multiple outcomes, and inclusion of incomplete cases. This approach therefore deserves to be applied more frequently in the analysis of complex epidemiological data sets.
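
    For intuition, a minimal sketch of the classical measurement-error attenuation that such models can correct for: with a noisy biomarker, the naive regression slope shrinks by the reliability ratio (all numbers are illustrative):

        import numpy as np

        rng = np.random.default_rng(7)
        n = 10_000
        true_exposure = rng.normal(0, 1, n)                 # latent exposure
        biomarker = true_exposure + rng.normal(0, 0.8, n)   # biomarker with error
        outcome = -0.5 * true_exposure + rng.normal(0, 1, n)

        beta_obs = np.polyfit(biomarker, outcome, 1)[0]     # attenuated slope
        reliability = 1.0 / (1.0 + 0.8**2)                  # var(true)/var(observed)
        print(beta_obs, beta_obs / reliability)             # corrected ~ -0.5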

  2. Interferometric phase measurement techniques for coherent beam combining

    NASA Astrophysics Data System (ADS)

    Antier, Marie; Bourderionnet, Jérôme; Larat, Christian; Lallier, Eric; Primot, Jérôme; Brignon, Arnaud

    2015-03-01

    Coherent beam combining of fiber amplifiers provides an attractive means of reaching high-power lasers. In an interferometric phase measurement, the beams issued from each combined fiber are imaged onto a sensor and interfere with a reference plane wave. The registration of interference patterns on a camera allows the measurement of the exact phase error of each fiber beam in a single shot. Therefore, this method is a promising candidate toward a very large number of combined fibers. Based on this technique, several architectures can be proposed to coherently combine a high number of fibers. The first one, based on digital holography, transfers directly the image of the camera to a spatial light modulator (SLM). The generated hologram is used to compensate the phase errors induced by the amplifiers. This architecture therefore has a collective phase measurement and correction. Unlike previous digital holography techniques, the probe beams measuring the phase errors between the fibers are co-propagating with the phase-locked signal beams. This architecture is compatible with the use of multi-stage isolated amplifying fibers. In that case, only 20 pixels per fiber on the SLM are needed to obtain a residual phase shift error below λ/10 rms. The second proposed architecture calculates the correction applied to each fiber channel by tracking the relative position of the interference fringes. In this case, a phase modulator is placed on each channel. In that configuration, only 8 pixels per fiber on the camera are required for stable closed-loop operation with a residual phase error of λ/20 rms, which demonstrates the scalability of this concept.

  3. Improved estimation of heavy rainfall by weather radar after reflectivity correction and accounting for raindrop size distribution variability

    NASA Astrophysics Data System (ADS)

    Hazenberg, Pieter; Leijnse, Hidde; Uijlenhoet, Remko

    2015-04-01

    Between 25 and 27 August 2010 a long-duration mesoscale convective system was observed above the Netherlands, locally giving rise to rainfall accumulations exceeding 150 mm. Correctly measuring the amount of precipitation during such an extreme event is important, both from a hydrological and meteorological perspective. Unfortunately, the operational weather radar measurements were affected by multiple sources of error and only 30% of the precipitation observed by rain gauges was estimated. Such an underestimation of heavy rainfall, albeit generally less strong than in this extreme case, is typical for operational weather radar in The Netherlands. In general weather radar measurement errors can be subdivided into two groups: (1) errors affecting the volumetric reflectivity measurements (e.g. ground clutter, radar calibration, vertical profile of reflectivity) and (2) errors resulting from variations in the raindrop size distribution that in turn result in incorrect rainfall intensity and attenuation estimates from observed reflectivity measurements. A stepwise procedure to correct for the first group of errors leads to large improvements in the quality of the estimated precipitation, increasing the radar rainfall accumulations to about 65% of those observed by gauges. To correct for the second group of errors, a coherent method is presented linking the parameters of the radar reflectivity-rain rate (Z-R) and radar reflectivity-specific attenuation (Z-k) relationships to the normalized drop size distribution (DSD). Two different procedures were applied. First, normalized DSD parameters for the whole event and for each precipitation type separately (convective, stratiform and undefined) were obtained using local disdrometer observations. Second, 10,000 randomly generated plausible normalized drop size distributions were used for rainfall estimation, to evaluate whether this Monte Carlo method would improve the quality of weather radar rainfall products. Using the disdrometer information, the best results were obtained in case no differentiation between precipitation type (convective, stratiform and undefined) was made, increasing the event accumulations to more than 80% of those observed by gauges. For the randomly optimized procedure, radar precipitation estimates further improve and closely resemble observations in case one differentiates between precipitation type. However, the optimal parameter sets are very different from those derived from disdrometer observations. It is therefore questionable if single disdrometer observations are suitable for large-scale quantitative precipitation estimation, especially if the disdrometer is located relatively far away from the main rain event, which was the case in this study. In conclusion, this study shows the benefit of applying detailed error correction methods to improve the quality of the weather radar product, but also confirms the need to be cautious using locally obtained disdrometer measurements.
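
    The second error group above is handled by tying the Z-R and Z-k parameters to the normalized drop size distribution; the minimal sketch below illustrates only the generic idea of propagating DSD variability by inverting Z = aR^b for many plausible (a, b) pairs. The reflectivity values and parameter ranges are assumptions, not the coefficients derived in the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical reflectivity field (dBZ) after clutter/VPR/calibration correction.
dbz = np.array([25.0, 38.0, 45.0, 52.0])
z_linear = 10.0 ** (dbz / 10.0)              # mm^6 m^-3

# Z = a * R^b; sample plausible (a, b) pairs to mimic DSD variability.
a_samples = rng.uniform(100.0, 400.0, size=10_000)
b_samples = rng.uniform(1.2, 1.8, size=10_000)

# Invert Z-R for every sampled parameter set: R = (Z / a)^(1/b), in mm/h.
rain_rates = (z_linear[None, :] / a_samples[:, None]) ** (1.0 / b_samples[:, None])

print("median rain rate (mm/h):", np.round(np.median(rain_rates, axis=0), 2))
print("interquartile range    :", np.round(np.percentile(rain_rates, 75, axis=0)
                                           - np.percentile(rain_rates, 25, axis=0), 2))
```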

  4. The impact of reflectivity correction and accounting for raindrop size distribution variability to improve precipitation estimation by weather radar for an extreme low-land mesoscale convective system

    NASA Astrophysics Data System (ADS)

    Hazenberg, Pieter; Leijnse, Hidde; Uijlenhoet, Remko

    2014-11-01

    Between 25 and 27 August 2010 a long-duration mesoscale convective system was observed above the Netherlands, locally giving rise to rainfall accumulations exceeding 150 mm. Correctly measuring the amount of precipitation during such an extreme event is important, both from a hydrological and meteorological perspective. Unfortunately, the operational weather radar measurements were affected by multiple sources of error and only 30% of the precipitation observed by rain gauges was estimated. Such an underestimation of heavy rainfall, albeit generally less strong than in this extreme case, is typical for operational weather radar in The Netherlands. In general weather radar measurement errors can be subdivided into two groups: (1) errors affecting the volumetric reflectivity measurements (e.g. ground clutter, radar calibration, vertical profile of reflectivity) and (2) errors resulting from variations in the raindrop size distribution that in turn result in incorrect rainfall intensity and attenuation estimates from observed reflectivity measurements. A stepwise procedure to correct for the first group of errors leads to large improvements in the quality of the estimated precipitation, increasing the radar rainfall accumulations to about 65% of those observed by gauges. To correct for the second group of errors, a coherent method is presented linking the parameters of the radar reflectivity-rain rate (Z - R) and radar reflectivity-specific attenuation (Z - k) relationships to the normalized drop size distribution (DSD). Two different procedures were applied. First, normalized DSD parameters for the whole event and for each precipitation type separately (convective, stratiform and undefined) were obtained using local disdrometer observations. Second, 10,000 randomly generated plausible normalized drop size distributions were used for rainfall estimation, to evaluate whether this Monte Carlo method would improve the quality of weather radar rainfall products. Using the disdrometer information, the best results were obtained in case no differentiation between precipitation type (convective, stratiform and undefined) was made, increasing the event accumulations to more than 80% of those observed by gauges. For the randomly optimized procedure, radar precipitation estimates further improve and closely resemble observations in case one differentiates between precipitation type. However, the optimal parameter sets are very different from those derived from disdrometer observations. It is therefore questionable if single disdrometer observations are suitable for large-scale quantitative precipitation estimation, especially if the disdrometer is located relatively far away from the main rain event, which was the case in this study. In conclusion, this study shows the benefit of applying detailed error correction methods to improve the quality of the weather radar product, but also confirms the need to be cautious using locally obtained disdrometer measurements.

  5. Precision measurements from very-large scale aerial digital imagery.

    PubMed

    Booth, D Terrance; Cox, Samuel E; Berryman, Robert D

    2006-01-01

    Managers need measurements: resource managers need the length/width of a variety of items including animals, logs, streams, plant canopies, man-made objects, riparian habitat, vegetation patches and other things important in resource monitoring and land inspection. These types of measurements can now be easily and accurately obtained from very large scale aerial (VLSA) imagery having spatial resolutions as fine as 1 millimeter per pixel by using the three new software programs described here. VLSA images have small fields of view and are used for intermittent sampling across extensive landscapes. Pixel-coverage among images is influenced by small changes in airplane altitude above ground level (AGL) and orientation relative to the ground, as well as by changes in topography. These factors affect the object-to-camera distance used for image-resolution calculations. 'ImageMeasurement' offers a user-friendly interface for accounting for pixel-coverage variation among images by utilizing a database. 'LaserLOG' records and displays airplane altitude AGL measured from a high frequency laser rangefinder, and displays the vertical velocity. 'Merge' sorts through large amounts of data generated by LaserLOG and matches precise airplane altitudes with camera trigger times for input to the ImageMeasurement database. We discuss application of these tools, including error estimates. We found that measurements from aerial images (collection resolution: 5-26 mm/pixel as projected on the ground) using ImageMeasurement, LaserLOG, and Merge were accurate to centimeters, with an error of less than 10%. We recommend these software packages as a means for expanding the utility of aerial image data.
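
    The pixel-coverage calculation that such software must perform reduces, for a nadir-pointing camera, to a simple pinhole-model ground sample distance. The sketch below illustrates this; the focal length, pixel pitch, and example numbers are hypothetical, not the parameters used by ImageMeasurement.

```python
# Minimal sketch of a pixel-coverage (ground sample distance) calculation;
# camera parameters and example values are assumptions.
def ground_sample_distance(altitude_agl_m: float,
                           focal_length_mm: float = 100.0,
                           pixel_pitch_um: float = 6.4) -> float:
    """Return the ground footprint of one pixel in millimetres."""
    # Simple pinhole model: GSD = altitude * pixel pitch / focal length.
    gsd_m = altitude_agl_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)
    return gsd_m * 1000.0  # mm per pixel


def object_length_mm(pixel_count: int, altitude_agl_m: float) -> float:
    """Convert a length measured in pixels to millimetres on the ground."""
    return pixel_count * ground_sample_distance(altitude_agl_m)


if __name__ == "__main__":
    # e.g. a log spanning 830 pixels in an image taken from 100 m AGL
    print(f"GSD: {ground_sample_distance(100.0):.2f} mm/pixel")
    print(f"log length: {object_length_mm(830, 100.0) / 1000.0:.2f} m")
```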

  6. Improving Empirical Magnetic Field Models by Fitting to In Situ Data Using an Optimized Parameter Approach

    DOE PAGES

    Brito, Thiago V.; Morley, Steven K.

    2017-10-25

    A method for comparing and optimizing the accuracy of empirical magnetic field models using in situ magnetic field measurements is presented in this paper. The optimization method minimizes a cost function—τ—that explicitly includes both a magnitude and an angular term. A time span of 21 days, including periods of mild and intense geomagnetic activity, was used for this analysis. A comparison between five magnetic field models (T96, T01S, T02, TS04, and TS07) widely used by the community demonstrated that the T02 model was, on average, the most accurate when driven by the standard model input parameters. The optimization procedure, performed in all models except TS07, generally improved the results when compared to unoptimized versions of the models. Additionally, using more satellites in the optimization procedure produces more accurate results. This procedure reduces the number of large errors in the model, that is, it reduces the number of outliers in the error distribution. The TS04 model shows the most accurate results after the optimization in terms of both the magnitude and direction, when using at least six satellites in the fitting. It gave a smaller error than its unoptimized counterpart 57.3% of the time and outperformed the best unoptimized model (T02) 56.2% of the time. Its median percentage error in |B| was reduced from 4.54% to 3.84%. Finally, the difference among the models analyzed, when compared in terms of the median of the error distributions, is not very large. However, the unoptimized models can have very large errors, which are much reduced after the optimization.
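
    The exact form of the cost function τ is not reproduced in this abstract, so the sketch below only illustrates one plausible way to combine a relative-magnitude term and an angular term when comparing modelled and observed field vectors; the weighting and the function name are assumptions.

```python
import numpy as np

def cost_tau(b_model: np.ndarray, b_obs: np.ndarray, w_angle: float = 1.0) -> float:
    """Illustrative cost combining a relative-magnitude term and an angular term.

    b_model, b_obs: arrays of shape (n_samples, 3) with field vectors in nT.
    The exact weighting used in the paper is not reproduced here.
    """
    mag_model = np.linalg.norm(b_model, axis=1)
    mag_obs = np.linalg.norm(b_obs, axis=1)

    # Relative error in |B|.
    mag_term = np.abs(mag_model - mag_obs) / mag_obs

    # Angle between modelled and observed vectors (radians).
    cos_angle = np.sum(b_model * b_obs, axis=1) / (mag_model * mag_obs)
    angle_term = np.arccos(np.clip(cos_angle, -1.0, 1.0))

    return float(np.mean(mag_term + w_angle * angle_term))

# Such a cost could then be minimized over the model input parameters,
# e.g. with scipy.optimize.minimize, for each choice of fitting satellites.
```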

  7. Improving Empirical Magnetic Field Models by Fitting to In Situ Data Using an Optimized Parameter Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brito, Thiago V.; Morley, Steven K.

    A method for comparing and optimizing the accuracy of empirical magnetic field models using in situ magnetic field measurements is presented in this paper. The optimization method minimizes a cost function—τ—that explicitly includes both a magnitude and an angular term. A time span of 21 days, including periods of mild and intense geomagnetic activity, was used for this analysis. A comparison between five magnetic field models (T96, T01S, T02, TS04, and TS07) widely used by the community demonstrated that the T02 model was, on average, the most accurate when driven by the standard model input parameters. The optimization procedure, performed in all models except TS07, generally improved the results when compared to unoptimized versions of the models. Additionally, using more satellites in the optimization procedure produces more accurate results. This procedure reduces the number of large errors in the model, that is, it reduces the number of outliers in the error distribution. The TS04 model shows the most accurate results after the optimization in terms of both the magnitude and direction, when using at least six satellites in the fitting. It gave a smaller error than its unoptimized counterpart 57.3% of the time and outperformed the best unoptimized model (T02) 56.2% of the time. Its median percentage error in |B| was reduced from 4.54% to 3.84%. Finally, the difference among the models analyzed, when compared in terms of the median of the error distributions, is not very large. However, the unoptimized models can have very large errors, which are much reduced after the optimization.

  8. Columnar aerosol properties over oceans by combining surface and aircraft measurements: sensitivity analysis.

    PubMed

    Zhang, T; Gordon, H R

    1997-04-20

    We report a sensitivity analysis for the algorithm presented by Gordon and Zhang [Appl. Opt. 34, 5552 (1995)] for inverting the radiance exiting the top and bottom of the atmosphere to yield the aerosol-scattering phase function [P(θ)] and single-scattering albedo (ω0). The study of the algorithm's sensitivity to radiometric calibration errors, mean-zero instrument noise, sea-surface roughness, the curvature of the Earth's atmosphere, the polarization of the light field, and incorrect assumptions regarding the vertical structure of the atmosphere, indicates that the retrieved ω0 has excellent stability even for very large values (~2) of the aerosol optical thickness; however, the error in the retrieved P(θ) strongly depends on the measurement error and on the assumptions made in the retrieval algorithm. The retrieved phase functions in the blue are usually poor compared with those in the near infrared.

  9. An interlaboratory comparison of dosimetry for a multi-institutional radiobiological research project: Observations, problems, solutions and lessons learned.

    PubMed

    Seed, Thomas M; Xiao, Shiyun; Manley, Nancy; Nikolich-Zugich, Janko; Pugh, Jason; Van den Brink, Marcel; Hirabayashi, Yoko; Yasutomo, Koji; Iwama, Atsushi; Koyasu, Shigeo; Shterev, Ivo; Sempowski, Gregory; Macchiarini, Francesca; Nakachi, Kei; Kunugi, Keith C; Hammer, Clifford G; Dewerd, Lawrence A

    2016-01-01

    An interlaboratory comparison of radiation dosimetry was conducted to determine the accuracy of doses being used experimentally for animal exposures within a large multi-institutional research project. The background and approach to this effort are described and discussed in terms of basic findings, problems and solutions. Dosimetry tests were carried out utilizing optically stimulated luminescence (OSL) dosimeters embedded midline into mouse carcasses and thermal luminescence dosimeters (TLD) embedded midline into acrylic phantoms. The effort demonstrated that the majority (4/7) of the laboratories was able to deliver sufficiently accurate exposures having maximum dosing errors of ≤5%. Comparable rates of 'dosimetric compliance' were noted between OSL- and TLD-based tests. Data analysis showed a highly linear relationship between 'measured' and 'target' doses, with errors falling largely between 0 and 20%. Outliers were most notable for OSL-based tests, while multiple tests by 'non-compliant' laboratories using orthovoltage X-rays contributed heavily to the wide variation in dosing errors. For the dosimetrically non-compliant laboratories, the relatively high rates of dosing errors were problematic, potentially compromising the quality of ongoing radiobiological research. This dosimetry effort proved to be instructive in establishing rigorous reviews of basic dosimetry protocols ensuring that dosing errors were minimized.

  10. Linking Errors in Trend Estimation in Large-Scale Surveys: A Case Study. Research Report. ETS RR-10-10

    ERIC Educational Resources Information Center

    Xu, Xueli; von Davier, Matthias

    2010-01-01

    One of the major objectives of large-scale educational surveys is reporting trends in academic achievement. For this purpose, a substantial number of items are carried from one assessment cycle to the next. The linking process that places academic abilities measured in different assessments on a common scale is usually based on a concurrent…

  11. Relay discovery and selection for large-scale P2P streaming

    PubMed Central

    Zhang, Chengwei; Wang, Angela Yunxian

    2017-01-01

    In peer-to-peer networks, application relays have been commonly used to provide various networking services. The service performance often improves significantly if a relay is selected appropriately based on its network location. In this paper, we studied the location-aware relay discovery and selection problem for large-scale P2P streaming networks. In these large-scale and dynamic overlays, it incurs significant communication and computation cost to discover a sufficiently large relay candidate set and further to select one relay with good performance. The network location can be measured directly or indirectly with the tradeoffs between timeliness, overhead and accuracy. Based on a measurement study and the associated error analysis, we demonstrate that indirect measurements, such as King and Internet Coordinate Systems (ICS), can only achieve a coarse estimation of peers’ network location and those methods based on pure indirect measurements cannot lead to a good relay selection. We also demonstrate that there exists significant error amplification of the commonly used “best-out-of-K” selection methodology using three RTT data sets publicly available. We propose a two-phase approach to achieve efficient relay discovery and accurate relay selection. Indirect measurements are used to narrow down a small number of high-quality relay candidates and the final relay selection is refined based on direct probing. This two-phase approach enjoys an efficient implementation using the Distributed-Hash-Table (DHT). When the DHT is constructed, the node keys carry the location information and they are generated scalably using indirect measurements, such as the ICS coordinates. The relay discovery is achieved efficiently utilizing the DHT-based search. We evaluated various aspects of this DHT-based approach, including the DHT indexing procedure, key generation under peer churn and message costs. PMID:28410384

  12. Relay discovery and selection for large-scale P2P streaming.

    PubMed

    Zhang, Chengwei; Wang, Angela Yunxian; Hei, Xiaojun

    2017-01-01

    In peer-to-peer networks, application relays have been commonly used to provide various networking services. The service performance often improves significantly if a relay is selected appropriately based on its network location. In this paper, we studied the location-aware relay discovery and selection problem for large-scale P2P streaming networks. In these large-scale and dynamic overlays, it incurs significant communication and computation cost to discover a sufficiently large relay candidate set and further to select one relay with good performance. The network location can be measured directly or indirectly with the tradeoffs between timeliness, overhead and accuracy. Based on a measurement study and the associated error analysis, we demonstrate that indirect measurements, such as King and Internet Coordinate Systems (ICS), can only achieve a coarse estimation of peers' network location and those methods based on pure indirect measurements cannot lead to a good relay selection. We also demonstrate that there exists significant error amplification of the commonly used "best-out-of-K" selection methodology using three RTT data sets publicly available. We propose a two-phase approach to achieve efficient relay discovery and accurate relay selection. Indirect measurements are used to narrow down a small number of high-quality relay candidates and the final relay selection is refined based on direct probing. This two-phase approach enjoys an efficient implementation using the Distributed-Hash-Table (DHT). When the DHT is constructed, the node keys carry the location information and they are generated scalably using indirect measurements, such as the ICS coordinates. The relay discovery is achieved efficiently utilizing the DHT-based search. We evaluated various aspects of this DHT-based approach, including the DHT indexing procedure, key generation under peer churn and message costs.

  13. A 'Quad-Disc' static pressure probe for measurement in adverse atmospheres - With a comparative review of static pressure probe designs

    NASA Astrophysics Data System (ADS)

    Nishiyama, Randall T.; Bedard, Alfred J., Jr.

    1991-09-01

    There are many areas of need for accurate measurements of atmospheric static pressure. These include observations of surface meteorology, airport altimeter settings, pressure distributions around buildings, moving measurement platforms, as well as basic measurements of fluctuating pressures in turbulence. Most of these observations require long-term observations in adverse environments (e.g., rain, dust, or snow). Currently, many pressure measurements are made, of necessity, within buildings, thus involving potential errors of several millibars in mean pressure during moderate winds, accompanied by large fluctuating pressures induced by the structure. In response to these needs, a 'Quad-Disk' pressure probe for continuous, outdoor monitoring purposes was designed which is inherently weather-protected. This Quad-Disk probe has the desirable features of omnidirectional response and small error in pitch. A review of past static pressure probes contrasts design approaches and capabilities.

  14. Exploration and extension of an improved Riemann track fitting algorithm

    NASA Astrophysics Data System (ADS)

    Strandlie, A.; Frühwirth, R.

    2017-09-01

    Recently, a new Riemann track fit which operates on translated and scaled measurements has been proposed. This study shows that the new Riemann fit is virtually as precise as popular approaches such as the Kalman filter or an iterative non-linear track fitting procedure, and significantly more precise than other, non-iterative circular track fitting approaches over a large range of measurement uncertainties. The fit is then extended in two directions: first, the measurements are allowed to lie on plane sensors of arbitrary orientation; second, the full error propagation from the measurements to the estimated circle parameters is computed. The covariance matrix of the estimated track parameters can therefore be computed without recourse to asymptotic properties, and is consequently valid for any number of observations. It does, however, assume normally distributed measurement errors. The calculations are validated on a simulated track sample and show excellent agreement with the theoretical expectations.
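
    For orientation, the sketch below implements a basic Riemann-type circle fit (points mapped to the paraboloid w = x^2 + y^2 and a plane fitted by total least squares); the translated/scaled formulation and the full error propagation described in the paper are not reproduced here.

```python
import numpy as np

def riemann_circle_fit(x: np.ndarray, y: np.ndarray):
    """Fit a circle by mapping points onto the paraboloid w = x^2 + y^2
    and fitting a plane with total least squares (basic Riemann fit)."""
    w = x**2 + y**2
    pts = np.column_stack([x, y, w])
    centroid = pts.mean(axis=0)
    # Plane normal = right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(pts - centroid)
    n1, n2, n3 = vt[-1]
    c = -np.dot(vt[-1], centroid)
    # Convert the plane back to circle parameters in the (x, y) plane:
    # x^2 + y^2 + (n1/n3)x + (n2/n3)y + c/n3 = 0.
    cx, cy = -n1 / (2 * n3), -n2 / (2 * n3)
    radius = np.sqrt(cx**2 + cy**2 - c / n3)
    return cx, cy, radius

# Quick check on noisy points from a known circle (centre (3, -2), radius 10).
rng = np.random.default_rng(1)
phi = rng.uniform(0, np.pi, 50)
x = 3.0 + 10.0 * np.cos(phi) + rng.normal(0, 0.05, 50)
y = -2.0 + 10.0 * np.sin(phi) + rng.normal(0, 0.05, 50)
print(riemann_circle_fit(x, y))  # ~ (3.0, -2.0, 10.0)
```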

  15. Field and laboratory determination of water-surface elevation and velocity using noncontact measurements

    USGS Publications Warehouse

    Nelson, Jonathan M.; Kinzel, Paul J.; Schmeeckle, Mark Walter; McDonald, Richard R.; Minear, Justin T.

    2016-01-01

    Noncontact methods for measuring water-surface elevation and velocity in laboratory flumes and rivers are presented with examples. Water-surface elevations are measured using an array of acoustic transducers in the laboratory and using laser scanning in field situations. Water-surface velocities are based on using particle image velocimetry or other machine vision techniques on infrared video of the water surface. Using spatial and temporal averaging, results from these methods provide information that can be used to develop estimates of discharge for flows over known bathymetry. Making such estimates requires relating water-surface velocities to vertically averaged velocities; the methods here use standard relations. To examine where these relations break down, laboratory data for flows over simple bumps of three amplitudes are evaluated. As anticipated, discharges determined from surface information can have large errors where nonhydrostatic effects are large. In addition to investigating and characterizing this potential error in estimating discharge, a simple method for correction of the issue is presented. With a simple correction based on bed gradient along the flow direction, remotely sensed estimates of discharge appear to be viable.
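
    Converting surface velocities to discharge amounts to multiplying depth-averaged velocities by the cross-sectional area of each vertical; the sketch below uses a conventional surface-to-depth-averaged velocity index of about 0.85 rather than the bed-gradient correction developed in the paper, and all numbers are illustrative.

```python
import numpy as np

def discharge_from_surface(widths_m, depths_m, surface_vel_ms, velocity_index=0.85):
    """Estimate discharge (m^3/s) from remotely sensed surface velocities.

    Each argument is a per-vertical array across the cross-section; the
    velocity index converting surface to depth-averaged velocity (~0.85)
    is a conventional value, not the correction derived in the paper.
    """
    widths = np.asarray(widths_m)
    depths = np.asarray(depths_m)
    v_mean = velocity_index * np.asarray(surface_vel_ms)  # depth-averaged velocity
    return float(np.sum(widths * depths * v_mean))

# Illustrative cross-section of four verticals.
print(discharge_from_surface([2.0, 2.0, 2.0, 2.0],
                             [0.4, 0.9, 1.1, 0.5],
                             [0.8, 1.2, 1.3, 0.7]))
```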

  16. Thermoelectric property measurements with computer controlled systems

    NASA Technical Reports Server (NTRS)

    Chmielewski, A. B.; Wood, C.

    1984-01-01

    A joint JPL-NASA program to develop an automated system to measure the thermoelectric properties of newly developed materials is described. Consideration is given to the difficulties created by signal drift in measurements of Hall voltage and the Large Delta T Seebeck coefficient. The benefits of a computerized system were examined with respect to error reduction and time savings for human operators. It is shown that the time required to measure Hall voltage can be reduced by a factor of 10 when a computer is used to fit a curve to the ratio of the measured signal and its standard deviation. The accuracy of measurements of the Large Delta T Seebeck coefficient and thermal diffusivity was also enhanced by the use of computers.

  17. Large Uncertainty in Estimating pCO2 From Carbonate Equilibria in Lakes

    NASA Astrophysics Data System (ADS)

    Golub, Malgorzata; Desai, Ankur R.; McKinley, Galen A.; Remucal, Christina K.; Stanley, Emily H.

    2017-11-01

    Most estimates of carbon dioxide (CO2) evasion from freshwaters rely on calculating partial pressure of aquatic CO2 (pCO2) from two out of three CO2-related parameters using carbonate equilibria. However, the pCO2 uncertainty has not been systematically evaluated across multiple lake types and equilibria. We quantified random errors in pH, dissolved inorganic carbon, alkalinity, and temperature from the North Temperate Lakes Long-Term Ecological Research site in four lake groups across a broad gradient of chemical composition. These errors were propagated onto pCO2 calculated from three carbonate equilibria, and for overlapping observations, compared against uncertainties in directly measured pCO2. The empirical random errors in CO2-related parameters were mostly below 2% of their median values. Resulting random pCO2 errors ranged from ±3.7% to ±31.5% of the median depending on alkalinity group and choice of input parameter pairs. Temperature uncertainty had a negligible effect on pCO2. When compared with direct pCO2 measurements, all parameter combinations produced biased pCO2 estimates with less than one third of total uncertainty explained by random pCO2 errors, indicating that systematic uncertainty dominates over random error. Multidecadal trend of pCO2 was difficult to reconstruct from uncertain historical observations of CO2-related parameters. Given poor precision and accuracy of pCO2 estimates derived from virtually any combination of two CO2-related parameters, we recommend direct pCO2 measurements where possible. To achieve consistently robust estimates of CO2 emissions from freshwater components of terrestrial carbon balances, future efforts should focus on improving accuracy and precision of CO2-related parameters (including direct pCO2) measurements and associated pCO2 calculations.
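
    A minimal sketch of the kind of random-error propagation described above is given below: pCO2 is computed from pH and DIC through carbonate equilibria and the input errors are propagated by Monte Carlo. The equilibrium constants, input values, and error magnitudes are generic freshwater assumptions, not those of the North Temperate Lakes data.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative equilibrium constants for fresh water near 25 C (assumptions;
# the study's constants depend on temperature and ionic strength).
K0 = 3.4e-2       # Henry's constant, mol L^-1 atm^-1
K1 = 10**-6.35    # first dissociation constant of carbonic acid
K2 = 10**-10.33   # second dissociation constant

def pco2_from_ph_dic(ph, dic_mol_per_l):
    """pCO2 (micro-atm) from pH and DIC using carbonate equilibria."""
    h = 10.0 ** (-ph)
    co2_star = dic_mol_per_l * h**2 / (h**2 + K1 * h + K1 * K2)
    return co2_star / K0 * 1e6

# Propagate hypothetical random errors (pH +/- 0.02, DIC +/- 1%) by Monte Carlo.
n = 50_000
ph_samples = rng.normal(7.80, 0.02, n)
dic_samples = rng.normal(1.0e-3, 1.0e-5, n)   # mol/L
pco2 = pco2_from_ph_dic(ph_samples, dic_samples)
print(f"median pCO2: {np.median(pco2):.0f} micro-atm, "
      f"2.5-97.5% range: {np.percentile(pco2, 2.5):.0f}-{np.percentile(pco2, 97.5):.0f}")
```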

  18. Design and test of voltage and current probes for EAST ICRF antenna impedance measurement

    NASA Astrophysics Data System (ADS)

    Jianhua, WANG; Gen, CHEN; Yanping, ZHAO; Yuzhou, MAO; Shuai, YUAN; Xinjun, ZHANG; Hua, YANG; Chengming, QIN; Yan, CHENG; Yuqing, YANG; Guillaume, URBANCZYK; Lunan, LIU; Jian, CHENG

    2018-04-01

    On the experimental advanced superconducting tokamak (EAST), a pair of voltage and current probes (V/I probes) is installed on the ion cyclotron radio frequency transmission lines to measure the antenna input impedance and supplement the conventional measurement technique based on voltage probe arrays. The coupling coefficients of the V/I probes are sensitive to their sizes and installation locations, so they should be determined properly to match the measurement range of the data acquisition card. The V/I probes were tested on a test platform at low power with various artificial loads. The testing results show that the deviation of the coupling resistance is small for loads R_L > 2.5 Ω, while the resistance deviations appear large for loads R_L < 1.5 Ω, which implies that the power loss cannot be neglected at high VSWR. Of the factors that give rise to deviations in the coupling resistance calculation, the phase measurement error is more significant and leads to more deleterious results than the amplitude measurement error. To exclude possible sources of phase measurement error, the phase detector can be calibrated in a steady L-mode scenario, and the calibrated data can then be used for calculations under H-mode cases in EAST experiments.

  19. Manufacture of ultra high precision aerostatic bearings based on glass guide

    NASA Astrophysics Data System (ADS)

    Guo, Meng; Dai, Yifan; Peng, Xiaoqiang; Tie, Guipeng; Lai, Tao

    2017-10-01

    The aerostatic guides in traditional three-coordinate measuring machines and profilometers generally use metal or ceramic materials. Limited by the guide processing precision, the measurement accuracy of these traditional instruments is around the micrometer level. By selecting optical materials as the guide material, optical processing methods and laser interference measurement can be introduced to the traditional aerostatic bearing manufacturing field. By using large-aperture wave-front interference measuring equipment, the shape and position errors of the glass guide can be obtained with high accuracy, and the guide can then be processed to 0.1 μm or even better with the aid of Magnetorheological Finishing (MRF), Computer Controlled Optical Surfacing (CCOS) and other modern optical processing methods, so the accuracy of aerostatic bearings can be fundamentally improved and ultra-high-precision coordinate measuring can be achieved. This paper introduces the fabrication and measurement process of a K9 glass guide with a 300 mm measuring range; its working surface accuracy is up to 0.1 μm PV, the verticality and parallelism errors between the two guide rail faces are better than 2 μm, and the straightness of the aerostatic bearings based on this K9 glass guide is up to 40 nm after error compensation.

  20. Implementation of smart phone video plethysmography and dependence on lighting parameters.

    PubMed

    Fletcher, Richard Ribón; Chamberlain, Daniel; Paggi, Nicholas; Deng, Xinyue

    2015-08-01

    The remote measurement of heart rate (HR) and heart rate variability (HRV) via a digital camera (video plethysmography) has emerged as an area of great interest for biomedical and health applications. While a few implementations of video plethysmography have been demonstrated on smart phones under controlled lighting conditions, it has been challenging to create a general scalable solution due to the large variability in smart phone hardware performance, software architecture, and the variable response to lighting parameters. In this context, we present a self-contained smart phone implementation of video plethysmography for Android OS, which employs both stochastic and deterministic algorithms, and we use this to study the effect of lighting parameters (illuminance, color spectrum) on the accuracy of the remote HR measurement. Using two different phone models, we present the median HR error for five different video plethysmography algorithms under three different types of lighting (natural sunlight, compact fluorescent, and halogen incandescent) and variations in brightness. For most algorithms, we found the optimum light brightness to be in the range 1000-4000 lux and the optimum lighting types to be compact fluorescent and natural light. Moderate errors were found for most algorithms with some devices under conditions of low brightness (<500 lux) and high brightness (>4000 lux). Our analysis also identified camera frame rate jitter as a major source of variability and error across different phone models, but this can be largely corrected through non-linear resampling. Based on testing with six human subjects, our real-time Android implementation successfully predicted the measured HR with a median error of -0.31 bpm and an inter-quartile range of 2.1 bpm.
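
    The frame-rate-jitter correction mentioned above can be illustrated by resampling the frame-wise photoplethysmographic signal onto a uniform time grid before spectral HR estimation. The sketch below is a generic single-channel example, not the stochastic/deterministic algorithms of the paper.

```python
import numpy as np
from scipy.interpolate import interp1d

def heart_rate_bpm(timestamps_s, green_mean, fs_uniform=30.0):
    """Estimate heart rate from a frame-wise mean green-channel signal.

    Resampling onto a uniform grid mitigates camera frame-rate jitter;
    the spectral-peak estimator here is one of many possible choices.
    """
    t = np.asarray(timestamps_s)
    x = np.asarray(green_mean, dtype=float)
    # Resample to a uniform rate (the jitter-correction step).
    t_uniform = np.arange(t[0], t[-1], 1.0 / fs_uniform)
    x_uniform = interp1d(t, x, kind="cubic")(t_uniform)
    x_uniform -= x_uniform.mean()
    # Locate the dominant spectral peak within a plausible heart-rate band.
    freqs = np.fft.rfftfreq(x_uniform.size, d=1.0 / fs_uniform)
    power = np.abs(np.fft.rfft(x_uniform)) ** 2
    band = (freqs >= 0.7) & (freqs <= 3.5)  # 42-210 bpm
    return 60.0 * freqs[band][np.argmax(power[band])]

# Synthetic 20 s recording at ~30 fps with timing jitter and a 72 bpm pulse.
rng = np.random.default_rng(3)
t = np.cumsum(rng.normal(1 / 30.0, 0.003, 600)); t -= t[0]
ppg = np.sin(2 * np.pi * 1.2 * t) + rng.normal(0, 0.3, t.size)
print(f"estimated HR: {heart_rate_bpm(t, ppg):.1f} bpm")
```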

  1. Error sources in the retrieval of aerosol information over bright surfaces from satellite measurements in the oxygen A band

    NASA Astrophysics Data System (ADS)

    Nanda, Swadhin; de Graaf, Martin; Sneep, Maarten; de Haan, Johan F.; Stammes, Piet; Sanders, Abram F. J.; Tuinder, Olaf; Pepijn Veefkind, J.; Levelt, Pieternel F.

    2018-01-01

    Retrieving aerosol optical thickness and aerosol layer height over a bright surface from measured top-of-atmosphere reflectance spectrum in the oxygen A band is known to be challenging, often resulting in large errors. In certain atmospheric conditions and viewing geometries, a loss of sensitivity to aerosol optical thickness has been reported in the literature. This loss of sensitivity has been attributed to a phenomenon known as critical surface albedo regime, which is a range of surface albedos for which the top-of-atmosphere reflectance has minimal sensitivity to aerosol optical thickness. This paper extends the concept of critical surface albedo for aerosol layer height retrievals in the oxygen A band, and discusses its implications. The underlying physics are introduced by analysing the top-of-atmosphere reflectance spectrum as a sum of atmospheric path contribution and surface contribution, obtained using a radiative transfer model. Furthermore, error analysis of an aerosol layer height retrieval algorithm is conducted over dark and bright surfaces to show the dependence on surface reflectance. The analysis shows that the derivative with respect to aerosol layer height of the atmospheric path contribution to the top-of-atmosphere reflectance is opposite in sign to that of the surface contribution - an increase in surface brightness results in a decrease in information content. In the case of aerosol optical thickness, these derivatives are anti-correlated, leading to large retrieval errors in high surface albedo regimes. The consequence of this anti-correlation is demonstrated with measured spectra in the oxygen A band from the GOME-2 instrument on board the Metop-A satellite over the 2010 Russian wildfires incident.

  2. Optimizing correlation techniques for improved earthquake location

    USGS Publications Warehouse

    Schaff, D.P.; Bokelmann, G.H.R.; Ellsworth, W.L.; Zanzerkia, E.; Waldhauser, F.; Beroza, G.C.

    2004-01-01

    Earthquake location using relative arrival time measurements can lead to dramatically reduced location errors and a view of fault-zone processes with unprecedented detail. There are two principal reasons why this approach reduces location errors. The first is that the use of differenced arrival times to solve for the vector separation of earthquakes removes from the earthquake location problem much of the error due to unmodeled velocity structure. The second reason, on which we focus in this article, is that waveform cross correlation can substantially reduce measurement error. While cross correlation has long been used to determine relative arrival times with subsample precision, we extend correlation measurements to less similar waveforms, and we introduce a general quantitative means to assess when correlation data provide an improvement over catalog phase picks. We apply the technique to local earthquake data from the Calaveras Fault in northern California. Tests for an example streak of 243 earthquakes demonstrate that relative arrival times with normalized cross correlation coefficients as low as ~70%, interevent separation distances as large as 2 km, and magnitudes up to 3.5 as recorded on the Northern California Seismic Network are more precise than relative arrival times determined from catalog phase data. Also discussed are improvements made to the correlation technique itself. We find that for large time offsets, our implementation of time-domain cross correlation is often more robust and that it recovers more observations than the cross spectral approach. Longer time windows give better results than shorter ones. Finally, we explain how thresholds and empirical weighting functions may be derived to optimize the location procedure for any given region of interest, taking advantage of the respective strengths of diverse correlation and catalog phase data on different length scales.
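
    The subsample-precision measurement described above can be illustrated with a plain time-domain normalized cross correlation refined by three-point parabolic interpolation around the peak; the thresholds and empirical weighting functions discussed in the paper are omitted, and the waveforms are synthetic.

```python
import numpy as np

def relative_arrival_time(wave_a, wave_b, dt):
    """Relative time offset between two waveforms with subsample precision.

    Uses normalized time-domain cross correlation plus parabolic interpolation
    around the peak. With numpy's correlation convention, a negative lag here
    indicates that wave_b arrives later than wave_a.
    """
    a = (wave_a - wave_a.mean()) / wave_a.std()
    b = (wave_b - wave_b.mean()) / wave_b.std()
    cc = np.correlate(a, b, mode="full") / a.size
    k = np.argmax(cc)
    # Parabolic (three-point) interpolation for a subsample peak position.
    if 0 < k < cc.size - 1:
        y0, y1, y2 = cc[k - 1], cc[k], cc[k + 1]
        k = k + 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    lag_samples = k - (b.size - 1)
    return lag_samples * dt, float(np.max(cc))

# Two synthetic waveforms offset by 0.37 samples (dt = 0.01 s).
t = np.arange(0, 2, 0.01)
w1 = np.exp(-((t - 1.0) / 0.05) ** 2) * np.sin(2 * np.pi * 10 * t)
w2 = np.exp(-((t - 1.0037) / 0.05) ** 2) * np.sin(2 * np.pi * 10 * (t - 0.0037))
print(relative_arrival_time(w1, w2, 0.01))
```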

  3. Proton upsets in LSI memories in space

    NASA Technical Reports Server (NTRS)

    Mcnulty, P. J.; Wyatt, R. C.; Filz, R. C.; Rothwell, P. L.; Farrell, G. E.

    1980-01-01

    Two types of large scale integrated dynamic random access memory devices were tested and found to be subject to soft errors when exposed to protons incident at energies between 18 and 130 MeV. These errors are shown to differ significantly from those induced in the same devices by alphas from an Am-241 source. There is considerable variation among devices in their sensitivity to proton-induced soft errors, even among devices of the same type. For protons incident at 130 MeV, the soft error cross sections measured in these experiments varied from 10^-8 to 10^-6 sq cm/proton. For individual devices, however, the soft error cross section consistently increased with beam energy from 18 to 130 MeV. Analysis indicates that the soft errors induced by energetic protons result from spallation interactions between the incident protons and the nuclei of the atoms comprising the device. Because energetic protons are the most numerous of both the galactic and solar cosmic rays and form the inner radiation belt, proton-induced soft errors have potentially serious implications for many electronic systems flown in space.

  4. Effect of Anisotropy on Shape Measurement Accuracy of Silicon Wafer Using Three-Point-Support Inverting Method

    NASA Astrophysics Data System (ADS)

    Ito, Yukihiro; Natsu, Wataru; Kunieda, Masanori

    This paper describes the influences of anisotropy found in the elastic modulus of monocrystalline silicon wafers on the measurement accuracy of the three-point-support inverting method which can measure the warp and thickness of thin large panels simultaneously. Deflection due to gravity depends on the crystal orientation relative to the positions of the three-point-supports. Thus the deviation of actual crystal orientation from the direction indicated by the notch fabricated on the wafer causes measurement errors. Numerical analysis of the deflection confirmed that the uncertainty of thickness measurement increases from 0.168µm to 0.524µm due to this measurement error. In addition, experimental results showed that the rotation of crystal orientation relative to the three-point-supports is effective for preventing wafer vibration excited by disturbance vibration because the resonance frequency of wafers can be changed. Thus, surface shape measurement accuracy was improved by preventing resonant vibration during measurement.

  5. Standardization of Broadband UV Measurements for 365 nm LED Sources

    PubMed Central

    Eppeldauer, George P.

    2012-01-01

    Broadband UV measurements are evaluated when UV-A irradiance meters measure optical radiation from 365 nm UV sources. The CIE standardized rectangular-shape UV-A function can be realized only with large spectral mismatch errors. The spectral power-distribution of the 365 nm excitation source is not standardized. Accordingly, the readings made with different types of UV meters, even if they measure the same UV source, can be very different. Available UV detectors and UV meters were measured and evaluated for spectral responsivity. The spectral product of the source-distribution and the meter’s spectral-responsivity were calculated for different combinations to estimate broad-band signal-measurement errors. Standardization of both the UV source-distribution and the meter spectral-responsivity is recommended here to perform uniform broad-band measurements with low uncertainty. It is shown what spectral responsivity function(s) is needed for new and existing UV irradiance meters to perform low-uncertainty broadband 365 nm measurements. PMID:26900516

  6. Evaluation of real-time data obtained from gravimetric preparation of antineoplastic agents shows medication errors with possible critical therapeutic impact: Results of a large-scale, multicentre, multinational, retrospective study.

    PubMed

    Terkola, R; Czejka, M; Bérubé, J

    2017-08-01

    Medication errors are a significant cause of morbidity and mortality especially with antineoplastic drugs, owing to their narrow therapeutic index. Gravimetric workflow software systems have the potential to reduce volumetric errors during intravenous antineoplastic drug preparation which may occur when verification is reliant on visual inspection. Our aim was to detect medication errors with possible critical therapeutic impact as determined by the rate of prevented medication errors in chemotherapy compounding after implementation of gravimetric measurement. A large-scale, retrospective analysis of data was carried out, related to medication errors identified during preparation of antineoplastic drugs in 10 pharmacy services ("centres") in five European countries following the introduction of an intravenous workflow software gravimetric system. Errors were defined as errors in dose volumes outside tolerance levels, identified during weighing stages of preparation of chemotherapy solutions which would not otherwise have been detected by conventional visual inspection. The gravimetric system detected that 7.89% of the 759 060 doses of antineoplastic drugs prepared at participating centres between July 2011 and October 2015 had error levels outside the accepted tolerance range set by individual centres, and prevented these doses from reaching patients. The proportion of antineoplastic preparations with deviations >10% ranged from 0.49% to 5.04% across sites, with a mean of 2.25%. The proportion of preparations with deviations >20% ranged from 0.21% to 1.27% across sites, with a mean of 0.71%. There was considerable variation in error levels for different antineoplastic agents. Introduction of a gravimetric preparation system for antineoplastic agents detected and prevented dosing errors which would not have been recognized with traditional methods and could have resulted in toxicity or suboptimal therapeutic outcomes for patients undergoing anticancer treatment. © 2017 The Authors. Journal of Clinical Pharmacy and Therapeutics Published by John Wiley & Sons Ltd.

  7. Machine Learning for Discriminating Quantum Measurement Trajectories and Improving Readout.

    PubMed

    Magesan, Easwar; Gambetta, Jay M; Córcoles, A D; Chow, Jerry M

    2015-05-22

    Current methods for classifying measurement trajectories in superconducting qubit systems produce fidelities systematically lower than those predicted by experimental parameters. Here, we place current classification methods within the framework of machine learning (ML) algorithms and improve on them by investigating more sophisticated ML approaches. We find that nonlinear algorithms and clustering methods produce significantly higher assignment fidelities that help close the gap to the fidelity possible under ideal noise conditions. Clustering methods group trajectories into natural subsets within the data, which allows for the diagnosis of systematic errors. We find large clusters in the data associated with T1 processes and show these are the main source of discrepancy between our experimental and ideal fidelities. These error diagnosis techniques help provide a path forward to improve qubit measurements.
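
    As an illustration of the clustering idea, the sketch below fits a Gaussian mixture to synthetic integrated (I, Q) readout records containing a small extra population that mimics mid-measurement T1 decay; the cluster locations, spreads, and component count are assumptions, not experimental parameters.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(11)

# Synthetic integrated (I, Q) measurement records for a qubit readout:
# ground state, excited state, and a smaller cluster mimicking mid-measurement
# T1 decay. Cluster positions and spreads are purely illustrative.
n = 2000
ground = rng.normal([1.0, 0.0], 0.35, size=(n, 2))
excited = rng.normal([-1.0, 0.0], 0.35, size=(n, 2))
decayed = rng.normal([0.0, -0.6], 0.35, size=(n // 10, 2))
records = np.vstack([ground, excited, decayed])

# Unsupervised clustering exposes the extra population that a simple linear
# boundary between the two computational states would silently misassign.
gmm = GaussianMixture(n_components=3, random_state=0).fit(records)
labels = gmm.predict(records)
for k in range(3):
    print(f"cluster {k}: centre {np.round(gmm.means_[k], 2)}, "
          f"fraction {np.mean(labels == k):.3f}")
```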

  8. Investigation of measurement method of saturation magnetization of iron core material using electromagnet

    NASA Astrophysics Data System (ADS)

    Shibataki, Takuya; Takahashi, Yasuhito; Fujiwara, Koji

    2018-04-01

    This paper discusses a measurement method for the saturation magnetization of iron core materials using an electromagnet, which can apply an extremely large magnetic field strength to a specimen. Electrical steel sheets are considered to be completely saturated at such large magnetic field strengths, above about 100 kA/m. The saturation magnetization can be obtained by assuming that the completely saturated specimen shows a linear change of the flux density with the magnetic field strength, because the saturation magnetization is constant. In order to accurately evaluate the flux density in the specimen, the air flux between the specimen and the winding of the B-coil for detecting the flux density is compensated by utilizing the ideal condition that the incremental permeability of the saturated specimen is equal to the permeability of vacuum. An error in the magnetic field strength caused by the placement of the sensor does not affect the measurement accuracy of the saturation magnetization; the error is conveniently cancelled because the saturation magnetization is a function of the ratio of the magnetic field strength to its increment. It may be concluded that the saturation magnetization can be easily measured with high accuracy by using the proposed method.
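
    The extraction step described above amounts to a straight-line fit of B against H in the fully saturated region, whose intercept is the saturation polarization and whose slope should equal the permeability of vacuum after air-flux compensation. The sketch below illustrates this on synthetic data; the field range and noise level are assumptions.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # permeability of vacuum, H/m

def saturation_magnetization(h_field, b_field, h_min=100e3):
    """Estimate the saturation polarization Js (tesla) from high-field (H, B) data.

    Assumes the specimen is fully saturated above h_min so that
    B = mu0 * H + Js, i.e. the incremental permeability equals mu0
    (the air-flux compensation condition described in the abstract).
    """
    mask = h_field >= h_min
    slope, intercept = np.polyfit(h_field[mask], b_field[mask], 1)
    print(f"fitted incremental permeability / mu0 = {slope / MU0:.3f}")
    return intercept  # Js in tesla

# Synthetic saturated-region data for a material with Js = 2.0 T.
h = np.linspace(100e3, 300e3, 50)
b = MU0 * h + 2.0 + np.random.default_rng(5).normal(0, 1e-3, h.size)
print(f"Js approx {saturation_magnetization(h, b):.3f} T")
```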

  9. Impact of transport and modelling errors on the estimation of methane sources and sinks by inverse modelling

    NASA Astrophysics Data System (ADS)

    Locatelli, Robin; Bousquet, Philippe; Chevallier, Frédéric

    2013-04-01

    Since the nineties, inverse modelling by assimilating atmospheric measurements into a chemical transport model (CTM) has been used to derive sources and sinks of atmospheric trace gases. More recently, the high global warming potential of methane (CH4) and unexplained variations of its atmospheric mixing ratio caught the attention of several research groups. Indeed, the diversity and the variability of methane sources induce high uncertainty on the present and the future evolution of the CH4 budget. With the increase of available measurement data to constrain inversions (satellite data, high frequency surface and tall tower observations, FTIR spectrometry, ...), the main limiting factor is about to become the representation of atmospheric transport in CTMs. Indeed, errors in transport modelling directly convert into flux changes when assuming perfect transport in atmospheric inversions. Hence, we propose an inter-model comparison in order to quantify the impact of transport and modelling errors on the CH4 fluxes estimated in a variational inversion framework. Several inversion experiments are conducted using the same set-up (prior emissions, measurement and prior errors, OH field, initial conditions) of the variational system PYVAR, developed at LSCE (Laboratoire des Sciences du Climat et de l'Environnement, France). Nine different models (ACTM, IFS, IMPACT, IMPACT1x1, MOZART, PCTM, TM5, TM51x1 and TOMCAT) used in the TRANSCOM-CH4 experiment (Patra et al., 2011) provide synthetic measurement data at up to 280 surface sites to constrain the inversions performed using the PYVAR system. Only the CTM (and the meteorological drivers which drive them) used to create the pseudo-observations varies among inversions. Consequently, the comparison of the nine inverted methane fluxes obtained for 2005 gives a good order of magnitude of the impact of transport and modelling errors on the estimated fluxes with current and future networks. It is shown that transport and modelling errors lead to a discrepancy of 27 TgCH4 per year at the global scale, representing 5% of the total methane emissions for 2005. At the continental scale, transport and modelling errors have bigger impacts in proportion to the area of the regions, ranging from 36 TgCH4 in North America to 7 TgCH4 in Boreal Eurasia, with a percentage range from 23% to 48%. Thus, the contribution of transport and modelling errors to the mismatch between measurements and simulated methane concentrations is large considering the present questions on the methane budget. Moreover, diagnostics of the error statistics included in our inversions have been computed. They show that the errors contained in the measurement error covariance matrix are under-estimated in current inversions, suggesting that transport and modelling errors should be included more properly in future inversions.

  10. Reliability and measurement error of sagittal spinal motion parameters in 220 patients with chronic low back pain using a three-dimensional measurement device.

    PubMed

    Mieritz, Rune M; Bronfort, Gert; Jakobsen, Markus D; Aagaard, Per; Hartvigsen, Jan

    2014-09-01

    A basic premise for any instrument measuring spinal motion is that reliable outcomes can be obtained on a relevant sample under standardized conditions. The purpose of this study was to assess the overall reliability and measurement error of regional spinal sagittal plane motion in patients with chronic low back pain (LBP), and then to evaluate the influence of body mass index, examiner, gender, stability of pain, and pain distribution on reliability and measurement error. This study comprises a test-retest design separated by 7 to 14 days. The patient cohort consisted of 220 individuals with chronic LBP. Kinematics of the lumbar spine were sampled during standardized spinal extension-flexion testing using a 6-df instrumented spatial linkage system. Test-retest reliability and measurement error were evaluated using intraclass correlation coefficients (ICC(1,1)) and Bland-Altman limits of agreement (LOAs). The overall test-retest reliability (ICC(1,1)) for various motion parameters ranged from 0.51 to 0.70, and relatively wide LOAs were observed for all parameters. Reliability measures in patient subgroups (ICC(1,1)) ranged between 0.34 and 0.77. In general, greater ICC(1,1) coefficients and smaller LOAs were found in subgroups with patients examined by the same examiner, patients with a stable pain level, patients with a body mass index below 30 kg/m(2), patients who were men, and patients in Quebec Task Force classification Group 1. This study shows that sagittal plane kinematic data from patients with chronic LBP may be sufficiently reliable for measurements of groups of patients. However, because of the large LOAs, this test procedure appears unusable at the individual patient level. Furthermore, reliability and measurement error vary substantially among subgroups of patients. Copyright © 2014 Elsevier Inc. All rights reserved.
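
    For reference, the two statistics named above can be computed as in the sketch below: a one-way random-effects ICC(1,1) and Bland-Altman 95% limits of agreement for test-retest data. The synthetic data and noise levels are illustrative, not the study's measurements.

```python
import numpy as np

def icc_1_1(data):
    """One-way random-effects ICC(1,1) for an (n_subjects, k_measurements) array."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand_mean = data.mean()
    subject_means = data.mean(axis=1)
    ms_between = k * np.sum((subject_means - grand_mean) ** 2) / (n - 1)
    ms_within = np.sum((data - subject_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

def bland_altman_loa(test, retest):
    """Bias and 95% limits of agreement for paired test-retest measurements."""
    diff = np.asarray(test) - np.asarray(retest)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# Synthetic test-retest data for 220 subjects (degrees of sagittal motion);
# the noise level is illustrative only.
rng = np.random.default_rng(9)
true_rom = rng.normal(60, 12, 220)
test = true_rom + rng.normal(0, 6, 220)
retest = true_rom + rng.normal(0, 6, 220)
print("ICC(1,1):", round(icc_1_1(np.column_stack([test, retest])), 2))
print("bias, lower LOA, upper LOA:", np.round(bland_altman_loa(test, retest), 1))
```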

  11. The effect of sediment loading in Fennoscandia and the Barents Sea during the last glacial cycle on glacial isostatic adjustment observations

    NASA Astrophysics Data System (ADS)

    van der Wal, Wouter; IJpelaar, Thijs

    2017-09-01

    Models for glacial isostatic adjustment (GIA) routinely include the effects of meltwater redistribution and changes in topography and coastlines. Since the sediment transport related to the dynamics of ice sheets may be comparable to that of sea level rise in terms of surface pressure, the loading effect of sediment deposition could cause measurable ongoing viscous readjustment. Here, we study the loading effect of glacially induced sediment redistribution (GISR) related to the Weichselian ice sheet in Fennoscandia and the Barents Sea. The surface loading effect and its effect on the gravitational potential are modeled by including changes in sediment thickness in the sea level equation following the method of Dalca et al. (2013). Sediment displacement is estimated in two different ways: (i) from a compilation of studies on local features (trough mouth fans, large-scale failures, and basin flux) and (ii) from the output of a coupled ice-sediment model. To account for uncertainty in Earth's rheology, three viscosity profiles are used. It is found that sediment transport can lead to changes in relative sea level of up to 2 m in the last 6000 years, with larger effects occurring earlier in the deglaciation. This magnitude is below the error level of most of the relative sea level data because those data are sparse and errors increase with length of time before present. The effect on present-day uplift rates reaches a few tenths of millimeters per year in large parts of Norway and Sweden, which is around the measurement error of long-term GNSS (global navigation satellite system) monitoring networks. The maximum effect on present-day gravity rates as measured by the GRACE (Gravity Recovery and Climate Experiment) satellite mission is up to tenths of a microgal per year, which is larger than the measurement error but below other error sources. Since GISR causes systematic uplift in most of mainland Scandinavia, including GISR in GIA models would improve the interpretation of GNSS and GRACE observations there.

  12. A Piezoelectric Unimorph Deformable Mirror Concept by Wafer Transfer for Ultra Large Space Telescopes

    NASA Technical Reports Server (NTRS)

    Yang, Eui-Hyeok; Shcheglov, Kirill

    2002-01-01

    Future concepts of ultra large space telescopes include segmented silicon mirrors and inflatable polymer mirrors. Primary mirrors for these systems cannot meet optical surface figure requirements and are likely to generate over several microns of wavefront errors. In order to correct for these large wavefront errors, high-stroke optical quality deformable mirrors are required. JPL has recently developed a new technology for transferring an entire wafer-level mirror membrane from one substrate to another. A thin membrane, 100 mm in diameter, has been successfully transferred without using adhesives or polymers. The measured peak-to-valley surface error of a transferred and patterned membrane (1 mm x 1 mm x 0.016 mm) is only 9 nm. The mirror element actuation principle is based on a piezoelectric unimorph. A voltage applied to the piezoelectric layer induces stress in the longitudinal direction, causing the film to deform and pull on the mirror connected to it. The advantage of this approach is that the small longitudinal strains obtainable from a piezoelectric material at modest voltages are thus translated into large vertical displacements. Modeling is performed for a unimorph membrane consisting of a clamped rectangular membrane with a PZT layer of variable dimensions. The membrane transfer technology is combined with the piezoelectric bimorph actuator concept to constitute a compact deformable mirror device with large-stroke actuation of a continuous mirror membrane, resulting in compact AO systems for use in ultra large space telescopes.

  13. Hospital-based transfusion error tracking from 2005 to 2010: identifying the key errors threatening patient transfusion safety.

    PubMed

    Maskens, Carolyn; Downie, Helen; Wendt, Alison; Lima, Ana; Merkley, Lisa; Lin, Yulia; Callum, Jeannie

    2014-01-01

    This report provides a comprehensive analysis of transfusion errors occurring at a large teaching hospital and aims to determine key errors that are threatening transfusion safety, despite implementation of safety measures. Errors were prospectively identified from 2005 to 2010. Error data were coded on a secure online database called the Transfusion Error Surveillance System. Errors were defined as any deviation from established standard operating procedures. Errors were identified by clinical and laboratory staff. Denominator data for volume of activity were used to calculate rates. A total of 15,134 errors were reported with a median number of 215 errors per month (range, 85-334). Overall, 9083 (60%) errors occurred on the transfusion service and 6051 (40%) on the clinical services. In total, 23 errors resulted in patient harm: 21 of these errors occurred on the clinical services and two in the transfusion service. Of the 23 harm events, 21 involved inappropriate use of blood. Errors with no harm were 657 times more common than events that caused harm. The most common high-severity clinical errors were sample labeling (37.5%) and inappropriate ordering of blood (28.8%). The most common high-severity error in the transfusion service was sample accepted despite not meeting acceptance criteria (18.3%). The cost of product and component loss due to errors was $593,337. Errors occurred at every point in the transfusion process, with the greatest potential risk of patient harm resulting from inappropriate ordering of blood products and errors in sample labeling. © 2013 American Association of Blood Banks (CME).

  14. Quantifying the physical demands of a musical performance and their effects on performance quality.

    PubMed

    Drinkwater, Eric J; Klopper, Christopher J

    2010-06-01

    This study investigated the effects of fatigue induced by a prolonged musical performance on performance quality. Ten participants prepared 10 min of repertoire for their chosen wind instrument, which they played three times consecutively. Prior to the performance and within short breaks between performances, researchers collected heart rate, respiratory rate, blood pressure, blood lactate concentration, rating of perceived exertion (RPE), and rating of anxiety. All performances were audio recorded and later analysed for performance errors. Reliability in assessing performance errors was quantified by the typical error of measurement (TEM) from 15 repeat performances. Results indicate that all markers of physical stress significantly increased by a moderate to large amount (4.6 to 62.2%; d = 0.50 to 1.54) once the performance began, while heart rate, respirations, and RPE continued to rise by a small to large amount (4.9 to 23.5%; d = 0.28 to 0.93) with each performance. Observed changes in performance between performances were well in excess of the TEM of 7.4%. There was a significant small (21%, d = 0.43) decrease in errors after the first performance; after the second performance, there was a significant large increase (70.4%, d = 1.14). The initial increase in physiological stress with a corresponding decrease in errors after the first performance likely indicates "warming up," while the continued increase in markers of physical stress with a dramatic decrement in performance quality likely indicates fatigue. Musicians may consider the relevance of physical fitness to maintaining performance quality over the duration of a performance.
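
    The reliability statistic used here can be illustrated with a short calculation. The sketch below is not the authors' analysis; it assumes the common definition of the typical error of measurement (SD of trial-to-trial differences divided by √2, expressed as a percentage of the mean) and Cohen's d with a pooled SD, and it uses purely hypothetical error counts.

```python
import numpy as np

def typical_error(trial1, trial2, as_percent=True):
    """Typical error of measurement: SD of trial-to-trial differences / sqrt(2),
    optionally expressed as a percentage of the grand mean."""
    trial1, trial2 = np.asarray(trial1, float), np.asarray(trial2, float)
    tem = (trial2 - trial1).std(ddof=1) / np.sqrt(2)
    if as_percent:
        return 100.0 * tem / np.concatenate([trial1, trial2]).mean()
    return tem

def cohens_d(before, after):
    """Cohen's d between two conditions using the pooled standard deviation."""
    before, after = np.asarray(before, float), np.asarray(after, float)
    pooled_sd = np.sqrt((before.var(ddof=1) + after.var(ddof=1)) / 2)
    return (after.mean() - before.mean()) / pooled_sd

# Hypothetical error counts for 15 repeat performances (illustration only).
rng = np.random.default_rng(0)
first = rng.poisson(20, 15).astype(float)
second = first * rng.normal(1.0, 0.07, 15)
print(f"TEM = {typical_error(first, second):.1f}%, d = {cohens_d(first, second):.2f}")
```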

  15. Estimating Rain Rates from Tipping-Bucket Rain Gauge Measurements

    NASA Technical Reports Server (NTRS)

    Wang, Jianxin; Fisher, Brad L.; Wolff, David B.

    2007-01-01

    This paper describes the cubic spline based operational system for the generation of the TRMM one-minute rain rate product 2A-56 from Tipping Bucket (TB) gauge measurements. Methodological issues associated with applying the cubic spline to the TB gauge rain rate estimation are closely examined. A simulated TB gauge from a Joss-Waldvogel (JW) disdrometer is employed to evaluate the effects of time scales and rain event definitions on the errors of the rain rate estimation. The comparison between rain rates measured with the JW disdrometer and those estimated from the simulated TB gauge shows good overall agreement; however, the TB gauge suffers from sampling problems, resulting in errors in the rain rate estimation. These errors are very sensitive to the time scale of the rain rates. One-minute rain rates suffer substantial errors, especially at low rain rates. When one-minute rain rates are averaged to 4-7 minute or longer time scales, the errors decrease dramatically. The rain event duration is very sensitive to the event definition, but the event rain total is rather insensitive, provided that events with less than 1 millimeter rain totals are excluded. Estimated lower rain rates are sensitive to the event definition whereas the higher rates are not. The median relative absolute errors are about 22% and 32% for 1-minute TB rain rates higher and lower than 3 mm per hour, respectively. These errors decrease to 5% and 14% when TB rain rates are used at the 7-minute scale. The radar reflectivity-rain rate (Ze-R) distributions drawn from a large amount of 7-minute TB rain rates and radar reflectivity data are mostly insensitive to the event definition.
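
    The cubic-spline approach described here can be sketched in a few lines: fit a spline to the cumulative rainfall recorded at each bucket tip, differentiate it to obtain rain rates on a one-minute grid, then average to coarser scales. This is an illustrative sketch only, not the operational 2A-56 code; the tip times and the 0.254 mm-per-tip bucket resolution are assumptions.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical tip times (s) for a 0.254 mm-per-tip bucket (illustration only).
tip_times = np.array([0.0, 95.0, 170.0, 230.0, 290.0, 360.0, 450.0, 560.0, 700.0])
cum_rain = 0.254 * np.arange(len(tip_times))      # cumulative depth (mm) at each tip

# Cubic spline through the cumulative record; its first derivative is the rain rate.
spline = CubicSpline(tip_times, cum_rain)
minutes = np.arange(0.0, tip_times[-1], 60.0)
rate_1min = spline(minutes, 1) * 3600.0           # mm/s -> mm/h on a 1-min grid

# Averaging the 1-min rates to a 7-min scale damps the sampling error, as noted above.
n = 7 * (len(rate_1min) // 7)
rate_7min = rate_1min[:n].reshape(-1, 7).mean(axis=1)
print(np.round(rate_1min, 2), np.round(rate_7min, 2))
```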

  16. Error detection capability of a novel transmission detector: a validation study for online VMAT monitoring.

    PubMed

    Pasler, Marlies; Michel, Kilian; Marrazzo, Livia; Obenland, Michael; Pallotta, Stefania; Björnsgard, Mari; Lutterbach, Johannes

    2017-09-01

    The purpose of this study was to characterize a new single large-area ionization chamber, the integral quality monitor system (iRT, Germany), for online and real-time beam monitoring. Signal stability, monitor unit (MU) linearity and dose rate dependence were investigated for static and arc deliveries and compared to independent ionization chamber measurements. The dose verification capability of the transmission detector system was evaluated by comparing calculated and measured detector signals for 15 volumetric modulated arc therapy plans. The error detection sensitivity was tested by introducing MLC position and linac output errors. Deviations in dose distributions between the original and error-induced plans were compared in terms of detector signal deviation, dose-volume histogram (DVH) metrics and 2D γ-evaluation (2%/2 mm and 3%/3 mm). The detector signal is linearly dependent on linac output and shows negligible (<0.4%) dose rate dependence up to 460 MU min-1. Signal stability is within 1% for cumulative detector output; substantial variations were observed for the segment-by-segment signal. Calculated versus measured cumulative signal deviations ranged from -0.16% to 2.25%. DVH, mean 2D γ-value and detector signal evaluations showed increasing deviations with regard to the respective reference with growing MLC and dose output errors; good correlation between DVH metrics and detector signal deviation was found (e.g. PTV Dmean: R2 = 0.97). Positional MLC errors of 1 mm and errors in linac output of 2% were identified with the transmission detector system. The extensive tests performed in this investigation show that the new transmission detector provides a stable and sensitive cumulative signal output and is suitable for beam monitoring during patient treatment.

  17. The Nature, Significance, and Evaluation of the Schwarzschild-Villiger (SV) Effect in Photometric Procedures

    PubMed Central

    Howling, D. H.; Fitzgerald, P. J.

    1959-01-01

    The Schwarzschild-Villiger effect has been experimentally demonstrated with the optical system used in this laboratory. Using a photographic mosaic specimen as a model, it has been shown that the conclusions of Naora are substantiated and that the SV effect, in large or small magnitude, is always present in optical systems. The theoretical transmission error arising from the presence of the SV effect has been derived for various optical conditions of measurement. The results have been experimentally confirmed. The SV contribution of the substage optics of microspectrophotometers has also been considered. A simple method of evaluating a flare function f(A) is advanced which provides a measure of the SV error present in a system. It is demonstrated that measurements of specimens of optical density less than unity can be made with less than 1 per cent error, when using illuminating beam diameter/specimen diameter ratios of unity and uncoated optical surfaces. For denser specimens it is shown that care must be taken to reduce the illuminating beam/specimen diameter ratio to a value dictated by the magnitude of a flare function f(A), evaluated for a particular optical system, in order to avoid excessive transmission error. It is emphasized that observed densities (transmissions) are not necessarily true densities (transmissions) because of the possibility of SV error. The ambiguity associated with an estimation of stray-light error by means of an opaque object has also been demonstrated. The errors illustrated are not necessarily restricted to microspectrophotometry but may possibly be found in such fields as spectral analysis, the interpretation of x-ray diffraction patterns, the determination of ionizing particle tracks and particle densities in photographic emulsions, and in many other types of photometric analysis. PMID:14403512

  18. Error detection capability of a novel transmission detector: a validation study for online VMAT monitoring

    NASA Astrophysics Data System (ADS)

    Pasler, Marlies; Michel, Kilian; Marrazzo, Livia; Obenland, Michael; Pallotta, Stefania; Björnsgard, Mari; Lutterbach, Johannes

    2017-09-01

    The purpose of this study was to characterize a new single large-area ionization chamber, the integral quality monitor system (iRT, Germany), for online and real-time beam monitoring. Signal stability, monitor unit (MU) linearity and dose rate dependence were investigated for static and arc deliveries and compared to independent ionization chamber measurements. The dose verification capability of the transmission detector system was evaluated by comparing calculated and measured detector signals for 15 volumetric modulated arc therapy plans. The error detection sensitivity was tested by introducing MLC position and linac output errors. Deviations in dose distributions between the original and error-induced plans were compared in terms of detector signal deviation, dose-volume histogram (DVH) metrics and 2D γ-evaluation (2%/2 mm and 3%/3 mm). The detector signal is linearly dependent on linac output and shows negligible (<0.4%) dose rate dependence up to 460 MU min-1. Signal stability is within 1% for cumulative detector output; substantial variations were observed for the segment-by-segment signal. Calculated versus measured cumulative signal deviations ranged from -0.16% to 2.25%. DVH, mean 2D γ-value and detector signal evaluations showed increasing deviations with regard to the respective reference with growing MLC and dose output errors; good correlation between DVH metrics and detector signal deviation was found (e.g. PTV Dmean: R2 = 0.97). Positional MLC errors of 1 mm and errors in linac output of 2% were identified with the transmission detector system. The extensive tests performed in this investigation show that the new transmission detector provides a stable and sensitive cumulative signal output and is suitable for beam monitoring during patient treatment.

  19. Demonstrating the robustness of population surveillance data: implications of error rates on demographic and mortality estimates.

    PubMed

    Fottrell, Edward; Byass, Peter; Berhane, Yemane

    2008-03-25

    As in any measurement process, a certain amount of error may be expected in routine population surveillance operations such as those in demographic surveillance sites (DSSs). Vital events are likely to be missed and errors made no matter what method of data capture is used or what quality control procedures are in place. The extent to which random errors in large, longitudinal datasets affect overall health and demographic profiles has important implications for the role of DSSs as platforms for public health research and clinical trials. Such knowledge is also of particular importance if the outputs of DSSs are to be extrapolated and aggregated with realistic margins of error and validity. This study uses the first 10-year dataset from the Butajira Rural Health Project (BRHP) DSS, Ethiopia, covering approximately 336,000 person-years of data. Simple programmes were written to introduce random errors and omissions into new versions of the definitive 10-year Butajira dataset. Key parameters of sex, age, death, literacy and roof material (an indicator of poverty) were selected for the introduction of errors based on their obvious importance in demographic and health surveillance and their established significant associations with mortality. Defining the original 10-year dataset as the 'gold standard' for the purposes of this investigation, population, age and sex compositions and Poisson regression models of mortality rate ratios were compared between each of the intentionally erroneous datasets and the original 'gold standard' 10-year data. The composition of the Butajira population was well represented despite introducing random errors, and differences between population pyramids based on the derived datasets were subtle. Regression analyses of well-established mortality risk factors were largely unaffected even by relatively high levels of random errors in the data. The low sensitivity of parameter estimates and regression analyses to significant amounts of randomly introduced errors indicates a high level of robustness of the dataset. This apparent inertia of population parameter estimates to simulated errors is largely due to the size of the dataset. Tolerable margins of random error in DSS data may exceed 20%. While this is not an argument in favour of poor quality data, reducing the time and valuable resources spent on detecting and correcting random errors in routine DSS operations may be justifiable as the returns from such procedures diminish with increasing overall accuracy. The money and effort currently spent on endlessly correcting DSS datasets would perhaps be better spent on increasing the surveillance population size and geographic spread of DSSs and analysing and disseminating research findings.
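
    The error-injection idea is straightforward to prototype. The sketch below uses entirely synthetic records rather than the Butajira data; the field names, error model (random flips of binary fields, random replacement of age), and crude-rate comparison are illustrative assumptions, not the study's actual simulation programmes.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Synthetic surveillance records (illustration only, not BRHP data).
n = 50_000
df = pd.DataFrame({
    "sex": rng.integers(0, 2, n),
    "age": rng.integers(0, 85, n),
    "literate": rng.integers(0, 2, n),
    "died": rng.random(n) < 0.01,
    "person_years": rng.uniform(0.5, 1.0, n),
})

def inject_errors(data, fields, error_rate):
    """Randomly corrupt a fraction of the values in the chosen fields."""
    noisy = data.copy()
    for col in fields:
        flip = rng.random(len(noisy)) < error_rate
        if noisy[col].dtype == bool:
            noisy.loc[flip, col] = ~noisy.loc[flip, col]
        elif set(noisy[col].unique()) <= {0, 1}:
            noisy.loc[flip, col] = 1 - noisy.loc[flip, col]
        else:
            noisy.loc[flip, col] = rng.integers(0, 85, int(flip.sum()))
    return noisy

def crude_rate(data):
    """Crude mortality rate per 1000 person-years."""
    return 1000 * data["died"].sum() / data["person_years"].sum()

for rate in (0.05, 0.10, 0.20):
    noisy = inject_errors(df, ["sex", "age", "literate", "died"], rate)
    print(f"error rate {rate:.0%}: crude mortality {crude_rate(df):.1f} "
          f"vs {crude_rate(noisy):.1f} per 1000 person-years")
```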

  20. Extreme Universe Space Observatory (EUSO) Optics Module

    NASA Technical Reports Server (NTRS)

    Young, Roy; Christl, Mark

    2008-01-01

    A demonstration part will be manufactured in Japan on one of the large Toshiba machines with a diameter of 2.5 meters. This will be a flat PMMA disk that is cut between 0.5 and 1.25 meters radius. The cut should demonstrate manufacturing the most difficult parts of the 2.5 meter Fresnel pattern and the blazed grating on the diffractive surface. Optical simulations, validated with the subscale prototype, will be used to determine the limits on manufacturing errors (tolerances) that will result in optics that meet EUSO's requirements. There will be limits on surface roughness (or errors at high spatial frequency); radial and azimuthal slope errors (at lower spatial frequencies); and plunge cut depth errors in the blazed grating. The demonstration part will be measured to determine whether it was made within the allowable tolerances.

  1. Sensitivity of thermal inertia calculations to variations in environmental factors. [in mapping of Earth's surface by remote sensing

    NASA Technical Reports Server (NTRS)

    Kahle, A. B.; Alley, R. E.; Schieldge, J. P.

    1984-01-01

    The sensitivity of thermal inertia (TI) calculations to errors in the measurement or parameterization of a number of environmental factors is considered here. The factors include effects of radiative transfer in the atmosphere, surface albedo and emissivity, variations in surface turbulent heat flux density, cloud cover, vegetative cover, and topography. The error analysis is based upon data from the Heat Capacity Mapping Mission (HCMM) satellite for July 1978 at three separate test sites in the deserts of the western United States. Results show that typical errors in atmospheric radiative transfer, cloud cover, and vegetative cover can individually cause root-mean-square (RMS) errors of about 10 percent (with atmospheric effects sometimes as large as 30-40 percent) in HCMM-derived thermal inertia images of 20,000-200,000 pixels.

  2. An approximate Kalman filter for ocean data assimilation: An example with an idealized Gulf Stream model

    NASA Technical Reports Server (NTRS)

    Fukumori, Ichiro; Malanotte-Rizzoli, Paola

    1995-01-01

    A practical method of data assimilation for use with large, nonlinear, ocean general circulation models is explored. A Kalman filter based on approximation of the state error covariance matrix is presented, employing a reduction of the effective model dimension, the error's asymptotic steady state limit, and a time-invariant linearization of the dynamic model for the error integration. The approximations lead to dramatic computational savings in applying estimation theory to large complex systems. We examine the utility of the approximate filter in assimilating different measurement types using a twin experiment of an idealized Gulf Stream. A nonlinear primitive equation model of an unstable east-west jet is studied with a state dimension exceeding 170,000 elements. Assimilation of various pseudomeasurements is examined, including velocity, density, and volume transport at localized arrays and realistic distributions of satellite altimetry and acoustic tomography observations. Results are compared in terms of their effects on the accuracies of the estimation. The approximate filter is shown to outperform an empirical nudging scheme used in a previous study. The examples demonstrate that useful approximate estimation errors can be computed in a practical manner for general circulation models.
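
    The three approximations named here (reduced effective dimension, asymptotic steady-state error limit, time-invariant linearized dynamics) can be illustrated with a toy filter. The sketch below is not the authors' implementation; the reduction basis, dynamics, observation operator and noise covariances are all placeholder assumptions, and it only shows why a single precomputed gain makes assimilation cheap for a large state.

```python
import numpy as np

def steady_state_gain(A, H, Q, R, n_iter=500):
    """Iterate the discrete Riccati recursion to its asymptotic limit and
    return the corresponding time-invariant Kalman gain."""
    P = Q.copy()
    for _ in range(n_iter):
        P = A @ P @ A.T + Q                      # forecast error covariance
        S = H @ P @ H.T + R                      # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)
        P = (np.eye(len(P)) - K @ H) @ P         # analysis error covariance
    return K

# Reduced-dimension toy problem: the full state is projected onto a small basis B
# (random here, purely for illustration) so the Riccati iteration stays cheap.
rng = np.random.default_rng(1)
n_full, n_red, n_obs = 2000, 20, 5
B = np.linalg.qr(rng.normal(size=(n_full, n_red)))[0]   # orthonormal reduction basis
A_red = 0.95 * np.eye(n_red)                            # assumed linearized reduced dynamics
H_full = rng.normal(size=(n_obs, n_full))               # assumed observation operator
H_red = H_full @ B
Q_red = 0.01 * np.eye(n_red)
R = 0.1 * np.eye(n_obs)

K_red = steady_state_gain(A_red, H_red, Q_red, R)       # computed once, reused every step

def assimilate(x_full, y_obs):
    """One analysis step: correct the full state along the reduced basis only."""
    innovation = y_obs - H_full @ x_full
    return x_full + B @ (K_red @ innovation)
```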

  3. An approximate Kalman filter for ocean data assimilation: An example with an idealized Gulf Stream model

    NASA Astrophysics Data System (ADS)

    Fukumori, Ichiro; Malanotte-Rizzoli, Paola

    1995-04-01

    A practical method of data assimilation for use with large, nonlinear, ocean general circulation models is explored. A Kalman filter based on approximations of the state error covariance matrix is presented, employing a reduction of the effective model dimension, the error's asymptotic steady state limit, and a time-invariant linearization of the dynamic model for the error integration. The approximations lead to dramatic computational savings in applying estimation theory to large complex systems. We examine the utility of the approximate filter in assimilating different measurement types using a twin experiment of an idealized Gulf Stream. A nonlinear primitive equation model of an unstable east-west jet is studied with a state dimension exceeding 170,000 elements. Assimilation of various pseudomeasurements is examined, including velocity, density, and volume transport at localized arrays and realistic distributions of satellite altimetry and acoustic tomography observations. Results are compared in terms of their effects on the accuracies of the estimation. The approximate filter is shown to outperform an empirical nudging scheme used in a previous study. The examples demonstrate that useful approximate estimation errors can be computed in a practical manner for general circulation models.

  4. Effect of ephemeris errors on the accuracy of the computation of the tangent point altitude of a solar scanning ray as measured by the SAGE 1 and 2 instruments

    NASA Technical Reports Server (NTRS)

    Buglia, James J.

    1989-01-01

    An analysis was made of the error in the minimum altitude of a geometric ray from an orbiting spacecraft to the Sun. The sunrise and sunset errors are highly correlated and are opposite in sign. With the ephemeris generated for the SAGE 1 instrument data reduction, these errors can be as large as 200 to 350 meters (1 sigma) after 7 days of orbit propagation. The bulk of this error results from errors in the position of the orbiting spacecraft rather than errors in computing the position of the Sun. These errors, in turn, arise from the discontinuities in the ephemeris tapes produced by the orbital determination process. Data taken from the end of the definitive ephemeris tape are used to generate the predicted data for the time interval covered by the next arc of the orbit determination process. The predicted data are then updated by using the tracking data. The growth of these errors is very nearly linear, with a slight nonlinearity caused by the beta angle. An approximate analytic method is given, which predicts the magnitude of the errors and their growth in time with reasonable fidelity.

  5. Non-linear matter power spectrum covariance matrix errors and cosmological parameter uncertainties

    NASA Astrophysics Data System (ADS)

    Blot, L.; Corasaniti, P. S.; Amendola, L.; Kitching, T. D.

    2016-06-01

    The covariance of the matter power spectrum is a key element of the analysis of galaxy clustering data. Independent realizations of observational measurements can be used to sample the covariance; nevertheless, statistical sampling errors will propagate into the cosmological parameter inference, potentially limiting the capabilities of the upcoming generation of galaxy surveys. The impact of these errors as a function of the number of realizations has been previously evaluated for Gaussian distributed data. However, non-linearities in the late-time clustering of matter cause departures from Gaussian statistics. Here, we address the impact of non-Gaussian errors on the sample covariance and precision matrix errors using a large ensemble of N-body simulations. In the range of modes where finite volume effects are negligible (0.1 ≲ k [h Mpc-1] ≲ 1.2), we find deviations of the variance of the sample covariance with respect to Gaussian predictions above ˜10 per cent at k > 0.3 h Mpc-1. Over the entire range these reduce to about ˜5 per cent for the precision matrix. Finally, we perform a Fisher analysis to estimate the effect of covariance errors on the cosmological parameter constraints. In particular, assuming Euclid-like survey characteristics we find that a number of independent realizations larger than 5000 is necessary to reduce the contribution of sampling errors to the cosmological parameter uncertainties to the subpercent level. We also show that restricting the analysis to large scales k ≲ 0.2 h Mpc-1 results in a considerable loss of constraining power, while using the linear covariance to include smaller scales leads to an underestimation of the errors on the cosmological parameters.
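
    The sampling noise discussed here is easy to reproduce for the Gaussian baseline case. The sketch below draws synthetic Gaussian realizations (not N-body measurements), forms the sample covariance and precision matrix, and applies the Hartlap debiasing factor that is exact only for Gaussian data; the non-Gaussian excess studied in the paper is precisely what such a Gaussian calculation misses.

```python
import numpy as np

def sample_precision(realizations, hartlap=True):
    """Sample covariance and (optionally debiased) precision matrix from
    independent realizations of a binned power spectrum."""
    spectra = np.asarray(realizations, float)     # shape (N_s, N_bins)
    n_s, n_b = spectra.shape
    cov = np.cov(spectra, rowvar=False)           # unbiased sample covariance
    prec = np.linalg.inv(cov)
    if hartlap:
        # Hartlap et al. (2007) debiasing factor, exact only for Gaussian data;
        # non-Gaussian clustering adds errors beyond this correction.
        prec *= (n_s - n_b - 2) / (n_s - 1)
    return cov, prec

# Illustration with synthetic Gaussian realizations (not N-body measurements).
rng = np.random.default_rng(3)
n_bins = 50
for n_real in (200, 1000, 5000):
    draws = rng.normal(0.0, 1.0, size=(n_real, n_bins))
    cov, prec = sample_precision(draws)
    # Relative scatter of sample-covariance elements scales as sqrt(2/(N_s-1)).
    print(f"N_s={n_real:5d}: scatter of diagonal {np.std(np.diag(cov)):.4f} "
          f"(Gaussian expectation {np.sqrt(2.0 / (n_real - 1)):.4f})")
```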

  6. Effects of Cloud on Goddard Lidar Observatory for Wind (GLOW) Performance and Analysis of Associated Errors

    NASA Astrophysics Data System (ADS)

    Bacha, Tulu

    The Goddard Lidar Observatory for Wind (GLOW), a mobile direct-detection Doppler lidar based on molecular backscattering for measuring wind in the troposphere and lower stratosphere, was operated and its errors characterized. It was operated at the Howard University Beltsville Center for Climate Observation System (BCCOS) side by side with other instruments: the NASA/Langley Research Center Validation Lidar (VALIDAR), a Leosphere WLS70, and other standard wind-sensing instruments. The performance of GLOW is presented for various cloud optical thicknesses and compared to VALIDAR under both clear- and cloudy-sky conditions. The performance degradation due to the presence of cirrus clouds is quantified by comparing the wind speed error to the cloud thickness, which is expressed in terms of aerosol backscatter ratio (ASR) and cloud optical depth (COD). ASR and COD are determined from the Howard University Raman Lidar (HURL) operating at the same station as GLOW. The wind speed error of GLOW was correlated with COD and ASR, yielding weak linear relationships. Finally, the wind speed measurements of GLOW were corrected using these quantitative relations. Using ASR reduced the GLOW wind error from 19% to 8% in a thin cirrus cloud and from 58% to 28% in a relatively thick cloud. After correcting for cloud-induced error, the remaining error is due to shot noise and atmospheric variability. Shot-noise error, the statistical random error of the backscattered photons detected by the photomultiplier tube (PMT), can only be minimized by averaging a large number of recorded data. The atmospheric backscatter measured by GLOW along its line-of-sight direction is also used to analyze the error due to atmospheric variability within the measurement volume. GLOW scans in five different directions (vertical and at elevation angles of 45° to the north, south, east, and west) to generate wind profiles, so non-uniformity of the atmosphere across the scanning directions contributes to the measurement error. This variability leads to differences in the intensity of the backscattered signals between scanning directions. Taking the ratio of the north (east) to south (west) signals and comparing the statistical differences leads to a weak linear relation between atmospheric variability and line-of-sight wind speed differences. This relation was used to make a correction which reduced the corresponding error by about 50%.

  7. Measurement equivalence of seven selected items of posttraumatic growth between black and white adult survivors of Hurricane Katrina.

    PubMed

    Rhodes, Alison M; Tran, Thanh V

    2013-02-01

    This study examined the equivalence or comparability of the measurement properties of seven selected items measuring posttraumatic growth among self-identified Black (n = 270) and White (n = 707) adult survivors of Hurricane Katrina, using data from the Baseline Survey of the Hurricane Katrina Community Advisory Group Study. Internal consistency reliability was equally good for both groups (Cronbach's alphas = .79), as were correlations between individual scale items and their respective overall scale. Confirmatory factor analysis of a congeneric measurement model of seven selected items of posttraumatic growth showed adequate measures of fit for both groups. The results showed only small variation in magnitude of factor loadings and measurement errors between the two samples. Tests of measurement invariance showed mixed results, but overall indicated that factor loading, error variance, and factor variance were similar between the two samples. These seven selected items can be useful for future large-scale surveys of posttraumatic growth.

  8. Quantifying the Contributions of Environmental Parameters to Ceres Surface Net Radiation Error in China

    NASA Astrophysics Data System (ADS)

    Pan, X.; Yang, Y.; Liu, Y.; Fan, X.; Shan, L.; Zhang, X.

    2018-04-01

    Error source analyses are critical for the satellite-retrieved surface net radiation (Rn) products. In this study, we evaluate the Rn error sources in the Clouds and the Earth's Radiant Energy System (CERES) project at 43 sites from July 2007 to December 2007 in China. The results show that cloud fraction (CF), land surface temperature (LST), atmospheric temperature (AT) and algorithm error dominate the Rn error, with error contributions of -20, 15, 10 and 10 W/m2 (net shortwave (NSW)/longwave (NLW) radiation), respectively. For NSW, the dominant error source is algorithm error (more than 10 W/m2), particularly in spring and summer with abundant cloud. For NLW, because of the high algorithm sensitivity and the large LST/CF errors, LST and CF are the largest error sources, especially in northern China. The AT strongly influences the NLW error in southern China because of the large AT error there. The total precipitable water has a weak influence on the Rn error even though the algorithm is highly sensitive to it. In order to improve Rn quality, the CF and LST (AT) errors in northern (southern) China should be decreased.

  9. Temporally-stable active precision mount for large optics.

    PubMed

    Reinlein, Claudia; Damm, Christoph; Lange, Nicolas; Kamm, Andreas; Mohaupt, Matthias; Brady, Aoife; Goy, Matthias; Leonhard, Nina; Eberhardt, Ramona; Zeitner, Uwe; Tünnermann, Andreas

    2016-06-13

    We present a temporally-stable active mount to compensate for manufacturing-induced deformations of reflective optical components. In this paper, we introduce the design of the active mount and its evaluation results for two sample mirrors: a quarter mirror of 115 × 105 × 9 mm3, and a full mirror of 228 × 210 × 9 mm3. The quarter mirror with 20 actuators shows a best wavefront error rms of 10 nm. Deformations that depend on its installation position were addressed by long-term measurements over 14 weeks, which indicated no significant dependence on orientation. Size-induced differences of the mount are studied with a full mirror carrying 80 manual actuators arranged in the same actuator pattern as the quarter mirror. This sample shows a wavefront error rms of (27±2) nm over a measurement period of 46 days. We conclude that the developed mount is suitable to compensate for manufacturing-induced deformations of large reflective optics and can likely be included in the overall system alignment procedure.

  10. Residual volume on land and when immersed in water: effect on percent body fat.

    PubMed

    Demura, Shinichi; Yamaji, Shunsuke; Kitabayashi, Tamotsu

    2006-08-01

    There is a large residual volume (RV) error when assessing percent body fat by means of hydrostatic weighing. It has generally been measured before hydrostatic weighing. However, an individual's maximal exhalations on land and in the water may not be identical. The aims of this study were to compare residual volumes and vital capacities on land and when immersed to the neck in water, and to examine the influence of the measurement error on percent body fat. The participants were 20 healthy Japanese males and 20 healthy Japanese females. To assess the influence of the RV error on percent body fat in both conditions and to evaluate the cross-validity of the prediction equation, another 20 males and 20 females were measured using hydrostatic weighing. Residual volume was measured on land and in the water using a nitrogen wash-out technique based on an open-circuit approach. In water, residual volume was measured with the participant sitting on a chair while the whole body, except the head, was submerged. The trial-to-trial reliabilities of residual volume in both conditions were very good (intraclass correlation coefficient > 0.98). Although residual volume measured under the two conditions did not agree completely, they showed a high correlation (males: 0.880; females: 0.853; P < 0.05). The limits of agreement for residual volumes in both conditions using Bland-Altman plots were -0.430 to 0.508 litres. This range was larger than the trial-to-trial error of residual volume on land (-0.260 to 0.304 litres). Moreover, the relationship between percent body fat computed using residual volume measured in both conditions was very good for both sexes (males: r = 0.902; females: r = 0.869, P < 0.0001), and the errors were approximately -6 to 4% (limits of agreement for percent body fat: -3.4 to 2.2% for males; -6.3 to 4.4% for females). We conclude that if these errors are of no importance, residual volume measured on land can be used when assessing body composition.
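
    The limits-of-agreement figures quoted here follow the standard Bland-Altman recipe (mean difference ± 1.96 SD of the differences). The sketch below is illustrative only, with hypothetical residual-volume data rather than the study's measurements.

```python
import numpy as np

def bland_altman_limits(method_a, method_b):
    """Bias and 95% limits of agreement between two measurement methods."""
    diffs = np.asarray(method_a, float) - np.asarray(method_b, float)
    bias = diffs.mean()
    sd = diffs.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical residual volumes (litres) on land vs immersed (illustration only).
rng = np.random.default_rng(7)
rv_land = rng.normal(1.3, 0.25, 40)
rv_water = rv_land + rng.normal(0.04, 0.24, 40)
bias, (lo, hi) = bland_altman_limits(rv_water, rv_land)
print(f"bias {bias:+.3f} L, limits of agreement {lo:+.3f} to {hi:+.3f} L")
```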

  11. Testing Accuracy of Long-Range Ultrasonic Sensors for Olive Tree Canopy Measurements

    PubMed Central

    Gamarra-Diezma, Juan Luis; Miranda-Fuentes, Antonio; Llorens, Jordi; Cuenca, Andrés; Blanco-Roldán, Gregorio L.; Rodríguez-Lizana, Antonio

    2015-01-01

    Ultrasonic sensors are often used to adjust spray volume by allowing the calculation of the crown volume of tree crops. The special conditions of the olive tree require the use of long-range sensors, which are less accurate and faster than the most commonly used sensors. The main objectives of the study were to determine the suitability of the sensor in terms of sound cone determination, angle errors, crosstalk errors and field measurements. Different laboratory tests were performed to check the suitability of a commercial long-range ultrasonic sensor: experimental determination of the sound cone diameter at several distances for several target materials, determination of the influence of the angle of incidence of the sound wave on the target and of the distance on measurement accuracy for several materials, and determination of the importance of errors due to interference between sensors for different sensor spacings and distances for two different materials. Furthermore, sensor accuracy was tested under real field conditions. The results show that the studied sensor is appropriate for olive trees because the sound cone is narrower for an olive tree than for the other studied materials, the olive tree canopy does not have a large influence on the sensor accuracy with respect to distance and angle, the interference errors are insignificant for high sensor spacings and the sensor's field distance measurements were deemed sufficiently accurate. PMID:25635414

  12. Improving Lidar Turbulence Estimates for Wind Energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newman, Jennifer F.; Clifton, Andrew; Churchfield, Matthew J.

    2016-10-06

    Remote sensing devices (e.g., lidars) are quickly becoming a cost-effective and reliable alternative to meteorological towers for wind energy applications. Although lidars can measure mean wind speeds accurately, these devices measure different values of turbulence intensity (TI) than an instrument on a tower. In response to these issues, a lidar TI error reduction model was recently developed for commercially available lidars. The TI error model first applies physics-based corrections to the lidar measurements, then uses machine-learning techniques to further reduce errors in lidar TI estimates. The model was tested at two sites in the Southern Plains where vertically profiling lidars were collocated with meteorological towers. This presentation primarily focuses on the physics-based corrections, which include corrections for instrument noise, volume averaging, and variance contamination. As different factors affect TI under different stability conditions, the combination of physical corrections applied in L-TERRA changes depending on the atmospheric stability during each 10-minute time period. This stability-dependent version of L-TERRA performed well at both sites, reducing TI error and bringing lidar TI estimates closer to estimates from instruments on towers. However, there is still scatter evident in the lidar TI estimates, indicating that there are physics that are not being captured in the current version of L-TERRA. Two options are discussed for modeling the remainder of the TI error physics in L-TERRA: machine learning and lidar simulations. Lidar simulations appear to be a better approach, as they can help improve understanding of atmospheric effects on TI error and do not require a large training data set.
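
    One of the physics-based corrections mentioned, removing instrument-noise variance from the measured wind-speed variance before forming TI, can be sketched as follows. This is not the L-TERRA code, and the 1 Hz sampling, noise level and wind statistics are assumptions for illustration.

```python
import numpy as np

def turbulence_intensity(wind_speeds, noise_variance=0.0):
    """Turbulence intensity over an averaging period, optionally after removing
    instrument-noise variance from the measured wind-speed variance."""
    u = np.asarray(wind_speeds, float)
    var_corrected = max(u.var(ddof=1) - noise_variance, 0.0)
    return np.sqrt(var_corrected) / u.mean()

# Synthetic 10-minute record at 1 Hz; the 0.05 m^2/s^2 noise variance is assumed.
rng = np.random.default_rng(11)
true_u = 8.0 + rng.normal(0.0, 0.8, 600)
measured_u = true_u + rng.normal(0.0, np.sqrt(0.05), 600)
print(f"raw TI       {turbulence_intensity(measured_u):.3f}")
print(f"corrected TI {turbulence_intensity(measured_u, noise_variance=0.05):.3f}")
print(f"true TI      {turbulence_intensity(true_u):.3f}")
```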

  13. Bandpass mismatch error for satellite CMB experiments I: estimating the spurious signal

    NASA Astrophysics Data System (ADS)

    Thuong Hoang, Duc; Patanchon, Guillaume; Bucher, Martin; Matsumura, Tomotake; Banerji, Ranajoy; Ishino, Hirokazu; Hazumi, Masashi; Delabrouille, Jacques

    2017-12-01

    Future Cosmic Microwave Background (CMB) satellite missions aim to use the B mode polarization to measure the tensor-to-scalar ratio r with a sensitivity σr ≲ 10^-3. Achieving this goal will not only require sufficient detector array sensitivity but also unprecedented control of all systematic errors inherent in CMB polarization measurements. Since polarization measurements derive from differences between observations at different times and from different sensors, detector response mismatches introduce leakages from intensity to polarization and thus lead to a spurious B mode signal. Because the expected primordial B mode polarization signal is dwarfed by the known unpolarized intensity signal, such leakages could contribute substantially to the final error budget for measuring r. Using simulations we estimate the magnitude and angular spectrum of the spurious B mode signal resulting from bandpass mismatch between different detectors. It is assumed here that the detectors are calibrated, for example using the CMB dipole, so that their sensitivity to the primordial CMB signal has been perfectly matched. Consequently the mismatch in the frequency bandpass shape between detectors introduces differences in the relative calibration of galactic emission components. We simulate this effect using a range of scanning patterns being considered for future satellite missions. We find that the spurious contribution to r from the reionization bump on large angular scales (l < 10) is ≈ 10^-3 assuming large detector arrays and 20 percent of the sky masked. We show how the amplitude of the leakage depends on the nonuniformity of the angular coverage in each pixel that results from the scan pattern.

  14. Estimation of height changes of GNSS stations from the solutions of short vectors and PSI measurements

    NASA Astrophysics Data System (ADS)

    Krynski, Jan; Zak, Lukasz; Ziolkowski, Dariusz; Cisak, Jan; Lagiewska, Magdalena

    2017-06-01

    Time series of weekly and daily solutions for coordinates of permanent GNSS stations may indicate local deformations in Earth's crust or local seasonal changes in the atmosphere and hydrosphere. The errors of the determined changes are relatively large, frequently at the level of the signal. Satellite radar interferometry, and especially Persistent Scatterer Interferometry (PSI), is a method of very high accuracy. Its weakness is the relative nature of its measurements as well as the accumulation of errors which may occur in the case of PSI processing of large areas. It is thus beneficial to confront the results of PSI measurements with those from other techniques, such as GNSS and precise levelling. PSI and GNSS results were jointly processed, recreating the history of surface deformation of the Warsaw metropolitan area with the use of radar images from the Envisat and Cosmo-SkyMed satellites. GNSS data from the Borowa Gora and Jozefoslaw observatories as well as from the WAT1 and CBKA permanent GNSS stations were used to validate the obtained results. Observations from 2000-2015 were processed with the Bernese v.5.0 software. Relative height changes between the GNSS stations were determined from GNSS data, and relative height changes between the persistent scatterers located on the objects with GNSS stations were determined from the interferometric results. The consistency of the results of the two methods was 3 to 4 times better than the theoretical accuracy of each. The joint use of both methods makes it possible to extract very small height changes, below the level of the measurement error.

  15. Aniseikonia quantification: error rate of rule of thumb estimation.

    PubMed

    Lubkin, V; Shippman, S; Bennett, G; Meininger, D; Kramer, P; Poppinga, P

    1999-01-01

    To find the error rate in quantifying aniseikonia by using "Rule of Thumb" estimation in comparison with proven space eikonometry. Study 1: 24 adult pseudophakic individuals were measured for anisometropia and astigmatic interocular difference. Rule of Thumb quantification for prescription was calculated and compared with aniseikonia measurement by the classical Essilor Projection Space Eikonometer. Study 2: a parallel analysis was performed on 62 consecutive phakic patients from our strabismus clinic group. Frequency of error: For Group 1 (24 cases): 5 (or 21%) were equal (i.e., 1% or less difference); 16 (or 67%) were greater (more than 1% different); and 3 (13%) were less by Rule of Thumb calculation in comparison to aniseikonia determined on the Essilor eikonometer. For Group 2 (62 cases): 45 (or 73%) were equal (1% or less); 10 (or 16%) were greater; and 7 (or 11%) were lower in the Rule of Thumb calculations in comparison to Essilor eikonometry. Magnitude of error: In Group 1, in 10/24 (29%) aniseikonia by Rule of Thumb estimation was 100% or more greater than by space eikonometry, and in 6 of those ten by 200% or more. In Group 2, in 4/62 (6%) aniseikonia by Rule of Thumb estimation was 200% or more greater than by space eikonometry. The frequency and magnitude of apparent clinical errors of Rule of Thumb estimation are disturbingly large. This problem is greatly magnified by the time, effort, and cost of prescribing and executing an aniseikonic correction for a patient. The higher the refractive error, the greater the anisometropia, and the worse the errors in Rule of Thumb estimation of aniseikonia. Accurate eikonometric methods and devices should be employed in all cases where such measurements can be made. Rule of Thumb estimations should be limited to cases where such subjective testing and measurement cannot be performed, as in infants after unilateral cataract surgery.

  16. Reduction in Chemotherapy Mixing Errors Using Six Sigma: Illinois CancerCare Experience.

    PubMed

    Heard, Bridgette; Miller, Laura; Kumar, Pankaj

    2012-03-01

    Chemotherapy mixing errors (CTMRs), although rare, have serious consequences. Illinois CancerCare is a large practice with multiple satellite offices. The goal of this study was to reduce the number of CTMRs using Six Sigma methods. A Six Sigma team consisting of five participants (registered nurses and pharmacy technicians [PTs]) was formed. The team had 10 hours of Six Sigma training in the DMAIC (ie, Define, Measure, Analyze, Improve, Control) process. Measurement of errors started from the time the CT order was verified by the PT to the time of CT administration by the nurse. Data collection included retrospective error tracking software, system audits, and staff surveys. Root causes of CTMRs included inadequate knowledge of CT mixing protocol, inconsistencies in checking methods, and frequent changes in staffing of clinics. Initial CTMRs (n = 33,259) constituted 0.050%, with 77% of these errors affecting patients. The action plan included checklists, education, and competency testing. The postimplementation error rate (n = 33,376, annualized) over a 3-month period was reduced to 0.019%, with only 15% of errors affecting patients. Initial Sigma was calculated at 4.2; this process resulted in the improvement of Sigma to 5.2, representing a 100-fold reduction. Financial analysis demonstrated a reduction in annualized loss of revenue (administration charges and drug wastage) from $11,537.95 (Medicare Average Sales Price) before the start of the project to $1,262.40. The Six Sigma process is a powerful technique in the reduction of CTMRs.
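
    The sigma levels reported here come from converting an observed defect rate to defects per million opportunities (DPMO) and then to a sigma value. The sketch below shows only the conventional conversion (with the customary 1.5-sigma long-term shift); the study's own opportunity counting may differ, so the printed values will not exactly reproduce the reported 4.2 and 5.2.

```python
from scipy.stats import norm

def sigma_level(defects, opportunities, shift=1.5):
    """Convert an observed defect rate to DPMO and a process sigma level,
    using the customary 1.5-sigma long-term shift."""
    dpmo = 1e6 * defects / opportunities
    return norm.ppf(1.0 - dpmo / 1e6) + shift, dpmo

# Reported error proportions; opportunity counting here is an assumption.
for label, proportion in (("before", 0.00050), ("after", 0.00019)):
    level, dpmo = sigma_level(proportion * 1e6, 1e6)
    print(f"{label}: {dpmo:.0f} DPMO -> sigma level {level:.2f}")
```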

  17. Point-of-care blood glucose measurement errors overestimate hypoglycaemia rates in critically ill patients.

    PubMed

    Nya-Ngatchou, Jean-Jacques; Corl, Dawn; Onstad, Susan; Yin, Tom; Tylee, Tracy; Suhr, Louise; Thompson, Rachel E; Wisse, Brent E

    2015-02-01

    Hypoglycaemia is associated with morbidity and mortality in critically ill patients, and many hospitals have programmes to minimize hypoglycaemia rates. Recent studies have established the hypoglycaemic patient-day as a key metric and have published benchmark inpatient hypoglycaemia rates on the basis of point-of-care blood glucose data even though these values are prone to measurement errors. A retrospective, cohort study including all patients admitted to Harborview Medical Center Intensive Care Units (ICUs) during 2010 and 2011 was conducted to evaluate a quality improvement programme to reduce inappropriate documentation of point-of-care blood glucose measurement errors. Laboratory Medicine point-of-care blood glucose data and patient charts were reviewed to evaluate all episodes of hypoglycaemia. A quality improvement intervention decreased measurement errors from 31% of hypoglycaemic (<70 mg/dL) patient-days in 2010 to 14% in 2011 (p < 0.001) and decreased the observed hypoglycaemia rate from 4.3% of ICU patient-days to 3.4% (p < 0.001). Hypoglycaemic events were frequently recurrent or prolonged (~40%), and these events are not identified by the hypoglycaemic patient-day metric, which also may be confounded by a large number of very low risk or minimally monitored patient-days. Documentation of point-of-care blood glucose measurement errors likely overestimates ICU hypoglycaemia rates and can be reduced by a quality improvement effort. The currently used hypoglycaemic patient-day metric does not evaluate recurrent or prolonged events that may be more likely to cause patient harm. The monitored patient-day as currently defined may not be the optimal denominator to determine inpatient hypoglycaemic risk. Copyright © 2014 John Wiley & Sons, Ltd.

  18. Error and bias in size estimates of whale sharks: implications for understanding demography.

    PubMed

    Sequeira, Ana M M; Thums, Michele; Brooks, Kim; Meekan, Mark G

    2016-03-01

    Body size and age at maturity are indicative of the vulnerability of a species to extinction. However, they are both difficult to estimate for large animals that cannot be restrained for measurement. For very large species such as whale sharks, body size is commonly estimated visually, potentially resulting in the addition of errors and bias. Here, we investigate the errors and bias associated with total lengths of whale sharks estimated visually by comparing them with measurements collected using a stereo-video camera system at Ningaloo Reef, Western Australia. Using linear mixed-effects models, we found that visual lengths were biased towards underestimation with increasing size of the shark. When using the stereo-video camera, the number of larger individuals that were possibly mature (or close to maturity) that were detected increased by approximately 10%. Mean lengths calculated by each method were, however, comparable (5.002 ± 1.194 and 6.128 ± 1.609 m, s.d.), confirming that the population at Ningaloo is mostly composed of immature sharks based on published lengths at maturity. We then collated data sets of total lengths sampled from aggregations of whale sharks worldwide between 1995 and 2013. Except for locations in the East Pacific where large females have been reported, these aggregations also largely consisted of juveniles (mean lengths less than 7 m). Sightings of the largest individuals were limited and occurred mostly prior to 2006. This result highlights the urgent need to locate and quantify the numbers of mature male and female whale sharks in order to ascertain the conservation status and ensure persistence of the species.

  19. Beam localization in HIFU temperature measurements using thermocouples, with application to cooling by large blood vessels.

    PubMed

    Dasgupta, Subhashish; Banerjee, Rupak K; Hariharan, Prasanna; Myers, Matthew R

    2011-02-01

    Experimental studies of thermal effects in high-intensity focused ultrasound (HIFU) procedures are often performed with the aid of fine wire thermocouples positioned within tissue phantoms. Thermocouple measurements are subject to several types of error which must be accounted for before reliable inferences can be made on the basis of the measurements. Thermocouple artifact due to viscous heating is one source of error. A second is the uncertainty regarding the position of the beam relative to the target location or the thermocouple junction, due to the error in positioning the beam at the junction. This paper presents a method for determining the location of the beam relative to a fixed pair of thermocouples. The localization technique reduces the uncertainty introduced by positioning errors associated with very narrow HIFU beams. The technique is presented in the context of an investigation into the effect of blood flow through large vessels on the efficacy of HIFU procedures targeted near the vessel. Application of the beam localization method allowed conclusions regarding the effects of blood flow to be drawn from previously inconclusive (because of localization uncertainties) data. Comparison of the position-adjusted transient temperature profiles for flow rates of 0 and 400 ml/min showed that blood flow can reduce temperature elevations by more than 10% when the HIFU focus is within a 2 mm distance from the vessel wall. At acoustic power levels of 17.3 and 24.8 W there is a 20- to 70-fold decrease in thermal dose due to the convective cooling effect of blood flow, implying a shrinkage in lesion size. The beam-localization technique also revealed the level of thermocouple artifact as a function of sonication time, providing investigators with an indication of the quality of thermocouple data for a given exposure time. The maximum artifact was found to be double the measured temperature rise during the initial few seconds of sonication. Copyright © 2010 Elsevier B.V. All rights reserved.

  20. Effects of holding time and measurement error on culturing Legionella in environmental water samples.

    PubMed

    Flanders, W Dana; Kirkland, Kimberly H; Shelton, Brian G

    2014-10-01

    Outbreaks of Legionnaires' disease require environmental testing of water samples from potentially implicated building water systems to identify the source of exposure. A previous study reports a large impact on Legionella sample results due to shipping and delays in sample processing. Specifically, this same study, without accounting for measurement error, reports more than half of shipped samples tested had Legionella levels that arbitrarily changed up or down by one or more logs, and the authors attribute this result to shipping time. Accordingly, we conducted a study to determine the effects of sample holding/shipping time on Legionella sample results while taking into account measurement error, which has previously not been addressed. We analyzed 159 samples, each split into 16 aliquots, of which one-half (8) were processed promptly after collection. The remaining half (8) were processed the following day to assess impact of holding/shipping time. A total of 2544 samples were analyzed including replicates. After accounting for inherent measurement error, we found that the effect of holding time on observed Legionella counts was small and should have no practical impact on interpretation of results. Holding samples increased the root mean squared error by only about 3-8%. Notably, for only one of 159 samples, did the average of the 8 replicate counts change by 1 log. Thus, our findings do not support the hypothesis of frequent, significant (≥1 log10 unit) Legionella colony count changes due to holding. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  1. FAMA: Fast Automatic MOOG Analysis

    NASA Astrophysics Data System (ADS)

    Magrini, Laura; Randich, Sofia; Friel, Eileen; Spina, Lorenzo; Jacobson, Heather; Cantat-Gaudin, Tristan; Donati, Paolo; Baglioni, Roberto; Maiorca, Enrico; Bragaglia, Angela; Sordo, Rosanna; Vallenari, Antonella

    2014-02-01

    FAMA (Fast Automatic MOOG Analysis), written in Perl, computes the atmospheric parameters and abundances of a large number of stars using measurements of equivalent widths (EWs) automatically and independently of any subjective approach. Based on the widely-used MOOG code, it simultaneously searches for three equilibria, excitation equilibrium, ionization balance, and the relationship between logn(FeI) and the reduced EWs. FAMA also evaluates the statistical errors on individual element abundances and errors due to the uncertainties in the stellar parameters. Convergence criteria are not fixed "a priori" but instead are based on the quality of the spectra.

  2. The Experimental Probe of Inflationary Cosmology: A Mission Concept Study for NASA's Einstein Inflation Probe

    NASA Technical Reports Server (NTRS)

    2008-01-01

    When we began our study we sought to answer five fundamental implementation questions: 1) can foregrounds be measured and subtracted to a sufficiently low level?; 2) can systematic errors be controlled?; 3) can we develop optics with sufficiently large throughput, low polarization, and frequency coverage from 30 to 300 GHz?; 4) is there a technical path to realizing the sensitivity and systematic error requirements?; and 5) what are the specific mission architecture parameters, including cost? Detailed answers to these questions are contained in this report.

  3. Measurement errors when estimating the vertical jump height with flight time using photocell devices: the example of Optojump.

    PubMed

    Attia, A; Dhahbi, W; Chaouachi, A; Padulo, J; Wong, D P; Chamari, K

    2017-03-01

    Common methods to estimate vertical jump height (VJH) are based on the measurements of flight time (FT) or vertical reaction force. This study aimed to assess the measurement errors when estimating the VJH with flight time using photocell devices in comparison with the gold standard jump height measured by a force plate (FP). The second purpose was to determine the intrinsic reliability of the Optojump photoelectric cells in estimating VJH. For this aim, 20 subjects (age: 22.50±1.24 years) performed maximal vertical jumps in three modalities in randomized order: the squat jump (SJ), counter-movement jump (CMJ), and CMJ with arm swing (CMJarm). Each trial was simultaneously recorded by the FP and Optojump devices. High intra-class correlation coefficients (ICCs) for validity (0.98-0.99) and low limits of agreement (less than 1.4 cm) were found, even though a systematic difference in jump height was consistently observed between the FT and double-integration-of-force methods (-31% to -27%; p<0.001), with a large effect size (Cohen's d > 1.2). Intra-session reliability of Optojump was excellent, with ICCs ranging from 0.98 to 0.99, low coefficients of variation (3.98%), and low standard errors of measurement (0.8 cm). It was concluded that there was a high correlation between the two methods to estimate the vertical jump height, but the FT method cannot replace the gold standard, due to the large systematic bias. Based on our results, equations for each of the three jump modalities are presented in order to obtain a better estimate of the jump height.
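
    The flight-time method evaluated here rests on a simple kinematic identity: if the body leaves and lands in the same configuration, jump height equals g times the squared flight time divided by eight. The sketch below shows that relation; the assumption of identical take-off and landing postures is exactly what breaks down in practice and produces the systematic bias reported above.

```python
G = 9.81  # gravitational acceleration, m/s^2

def jump_height_from_flight_time(flight_time_s):
    """Jump height (m) from flight time, assuming identical body configuration
    at take-off and landing: h = g * t^2 / 8."""
    return G * flight_time_s ** 2 / 8.0

# Example: a 0.55 s flight time corresponds to roughly 37 cm.
print(f"{100 * jump_height_from_flight_time(0.55):.1f} cm")
```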

  4. Measurement errors when estimating the vertical jump height with flight time using photocell devices: the example of Optojump

    PubMed Central

    Attia, A; Chaouachi, A; Padulo, J; Wong, DP; Chamari, K

    2016-01-01

    Common methods to estimate vertical jump height (VJH) are based on the measurements of flight time (FT) or vertical reaction force. This study aimed to assess the measurement errors when estimating the VJH with flight time using photocell devices in comparison with the gold standard jump height measured by a force plate (FP). The second purpose was to determine the intrinsic reliability of the Optojump photoelectric cells in estimating VJH. For this aim, 20 subjects (age: 22.50±1.24 years) performed maximal vertical jumps in three modalities in randomized order: the squat jump (SJ), counter-movement jump (CMJ), and CMJ with arm swing (CMJarm). Each trial was simultaneously recorded by the FP and Optojump devices. High intra-class correlation coefficients (ICCs) for validity (0.98-0.99) and low limits of agreement (less than 1.4 cm) were found, even though a systematic difference in jump height was consistently observed between the FT and double-integration-of-force methods (-31% to -27%; p<0.001), with a large effect size (Cohen's d > 1.2). Intra-session reliability of Optojump was excellent, with ICCs ranging from 0.98 to 0.99, low coefficients of variation (3.98%), and low standard errors of measurement (0.8 cm). It was concluded that there was a high correlation between the two methods to estimate the vertical jump height, but the FT method cannot replace the gold standard, due to the large systematic bias. Based on our results, equations for each of the three jump modalities are presented in order to obtain a better estimate of the jump height. PMID:28416900

  5. Effect of cephalometer misalignment on calculations of facial asymmetry.

    PubMed

    Lee, Ki-Heon; Hwang, Hyeon-Shik; Curry, Sean; Boyd, Robert L; Norris, Kevin; Baumrind, Sheldon

    2007-07-01

    In this study, we evaluated errors introduced into the interpretation of facial asymmetry on posteroanterior (PA) cephalograms due to malpositioning of the x-ray emitter focal spot. We tested the hypothesis that horizontal displacements of the emitter from its ideal position would produce systematic displacements of skull landmarks that could be fully accounted for by the rules of projective geometry alone. A representative dry skull with 22 metal markers was used to generate a series of PA images from different emitter positions by using a fully calibrated stereo cephalometer. Empirical measurements of the resulting cephalograms were compared with mathematical predictions based solely on geometric rules. The empirical measurements matched the mathematical predictions within the limits of measurement error (x = 0.23 mm), thus supporting the hypothesis. Based upon this finding, we generated a completely symmetrical mathematical skull and computed the expected errors for focal-spot displacements of several different magnitudes. Misalignment of the x-ray emitter focal spot introduces systematic errors into the interpretation of facial asymmetry on PA cephalograms. For misalignments of less than 20 mm, the effect is small in individual cases. However, misalignments as small as 10 mm can introduce spurious statistical findings of significant asymmetry when mean values for large groups of PA images are evaluated.
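
    The projective-geometry argument can be reproduced with a few lines of central projection: a landmark is mapped to the film plane along the ray from the displaced focal spot, and a horizontally shifted emitter makes a perfectly symmetric landmark pair project asymmetrically. The distances and landmark coordinates in the sketch below are illustrative assumptions, not the calibrated geometry used in the study.

```python
import numpy as np

def project_onto_film(landmark, emitter, film_z=0.0):
    """Central projection of a 3-D landmark onto the film plane z = film_z
    from a point x-ray source located at `emitter`."""
    landmark, emitter = np.asarray(landmark, float), np.asarray(emitter, float)
    t = (film_z - emitter[2]) / (landmark[2] - emitter[2])
    return emitter[:2] + t * (landmark[:2] - emitter[:2])

# Perfectly symmetric landmark pair (mm); emitter nominally 1500 mm from the film,
# skull midplane 150 mm above the film (all distances are illustrative assumptions).
left, right = np.array([-40.0, 0.0, 150.0]), np.array([40.0, 0.0, 150.0])
for shift in (0.0, 10.0, 20.0):                     # horizontal focal-spot displacement
    emitter = np.array([shift, 0.0, 1500.0])
    dl = project_onto_film(left, emitter)
    dr = project_onto_film(right, emitter)
    asym = abs(abs(dl[0]) - abs(dr[0]))             # spurious left-right asymmetry
    print(f"emitter shift {shift:4.1f} mm -> apparent asymmetry {asym:.2f} mm")
```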

  6. 3D bubble reconstruction using multiple cameras and space carving method

    NASA Astrophysics Data System (ADS)

    Fu, Yucheng; Liu, Yang

    2018-07-01

    An accurate measurement of bubble shape and size is of significant value in understanding the behavior of bubbles in many engineering applications. Past studies usually use one or two cameras to estimate bubble volume, surface area, and other parameters; the 3D bubble shape and rotation angle are generally not available in these studies. To overcome this challenge and obtain more detailed information on individual bubbles, a 3D imaging system consisting of four high-speed cameras is developed in this paper, and the space carving method is used to reconstruct the 3D bubble shape from the high-speed images recorded from different view angles. The proposed method can reconstruct the bubble surface with minimal assumptions. A benchmarking test is performed in a 3 cm × 1 cm rectangular channel with stagnant water. The results show that the newly proposed method can measure the bubble volume with an error of less than 2% compared with the syringe reading, whereas the conventional two-camera system has an error of around 10% and the one-camera system an error greater than 25%. The visualization of a rising 3D bubble demonstrates the wall's influence on bubble rotation angle and aspect ratio, which also explains the large error in the single-camera measurement.
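
    The core of space carving is simple: a voxel survives only if its projection falls inside the bubble silhouette in every view. A minimal sketch under simplifying assumptions (orthographic views and precomputed binary silhouettes; the actual system uses four calibrated perspective cameras):

    import numpy as np

    def carve(voxels, silhouettes, projectors):
        # Keep voxels whose projection lands inside every binary silhouette.
        # silhouettes[i] is a 2D boolean mask; projectors[i] maps (N, 3) voxel
        # centres to integer pixel indices (N, 2) for view i.
        keep = np.ones(len(voxels), dtype=bool)
        for mask, project in zip(silhouettes, projectors):
            uv = project(voxels)
            inside = ((uv[:, 0] >= 0) & (uv[:, 0] < mask.shape[0]) &
                      (uv[:, 1] >= 0) & (uv[:, 1] < mask.shape[1]))
            keep &= inside
            keep[inside] &= mask[uv[inside, 0], uv[inside, 1]]
        return voxels[keep]

    # Toy example: two orthographic views of a circular silhouette carve a
    # 64^3 grid down to roughly a cylinder intersection; bubble volume then
    # follows from the surviving voxel count times the voxel volume.
    grid = np.stack(np.meshgrid(*[np.arange(64)] * 3, indexing="ij"), -1).reshape(-1, 3)
    disk = np.fromfunction(lambda i, j: (i - 32) ** 2 + (j - 32) ** 2 < 20 ** 2, (64, 64))
    views = [lambda p: p[:, :2].astype(int), lambda p: p[:, 1:].astype(int)]
    print(len(carve(grid, [disk, disk], views)), "voxels survive")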

  7. Error trends in SASS winds as functions of atmospheric stability and sea surface temperature

    NASA Technical Reports Server (NTRS)

    Liu, W. T.

    1983-01-01

    Wind speed measurements obtained with the scatterometer instrument aboard the Seasat satellite are compared with equivalent neutral wind measurements obtained from ship reports in the western N. Atlantic and eastern N. Pacific, where the concentration of ship reports is high and the ranges of atmospheric stability and sea surface temperature are large. It is found that at low wind speeds the difference between satellite measurements and surface reports depends on sea surface temperature; at wind speeds higher than 8 m/s the dependence is greatly reduced. The removal of systematic errors due to fluctuations in atmospheric stability reduced the r.m.s. difference from 1.7 m/s to 0.8 m/s. It is suggested that further clarification of the effects of fluctuations in atmospheric stability on Seasat wind speed measurements should increase their reliability in the future.

  8. Time dependent wind fields

    NASA Technical Reports Server (NTRS)

    Chelton, D. B.

    1986-01-01

    Two tasks were performed: (1) determination of the accuracy of Seasat scatterometer, altimeter, and scanning multichannel microwave radiometer measurements of wind speed; and (2) application of Seasat altimeter measurements of sea level to study the spatial and temporal variability of geostrophic flow in the Antarctic Circumpolar Current. The results of the first task have identified systematic errors in wind speeds estimated by all three satellite sensors. However, in all cases the errors are correctable, and corrected wind speeds agree between the three sensors to better than 1 m s⁻¹ in 96-day, 2 deg. latitude by 6 deg. longitude averages. The second task has resulted in development of a new technique for using altimeter sea level measurements to study the temporal variability of large-scale sea level variations. Application of the technique to the Antarctic Circumpolar Current yielded new information about the ocean circulation in this region, which is poorly sampled by conventional ship-based measurements.

  9. Quantifying uncertainty in carbon and nutrient pools of coarse woody debris

    NASA Astrophysics Data System (ADS)

    See, C. R.; Campbell, J. L.; Fraver, S.; Domke, G. M.; Harmon, M. E.; Knoepp, J. D.; Woodall, C. W.

    2016-12-01

    Woody detritus constitutes a major pool of both carbon and nutrients in forested ecosystems. Estimating coarse wood stocks relies on many assumptions, even when full surveys are conducted. Researchers rarely report error in coarse wood pool estimates, despite its importance to ecosystem budgets and modelling efforts. To date, no study has attempted a comprehensive assessment of error rates and uncertainty inherent in the estimation of this pool. Here, we use Monte Carlo analysis to propagate the error associated with the major sources of uncertainty present in the calculation of coarse wood carbon and nutrient (i.e., N, P, K, Ca, Mg, Na) pools. We also evaluate individual sources of error to identify the importance of each source of uncertainty in our estimates. We quantify sampling error by comparing the three most common field methods used to survey coarse wood (two transect methods and a whole-plot survey). We quantify the measurement error associated with length and diameter measurement, and technician error in species identification and decay class using plots surveyed by multiple technicians. We use previously published values of model error for the four most common methods of volume estimation: Smalian's, conical frustum, conic paraboloid, and average-of-ends. We also use previously published values for error in the collapse ratio (cross-sectional height/width) of decayed logs that serves as a surrogate for the volume remaining. We consider sampling error in chemical concentration and density for all decay classes, using distributions from both published and unpublished studies. Analytical uncertainty is calculated using standard reference plant material from the National Institute of Standards and Technology. Our results suggest that technician error in decay classification can have a large effect on uncertainty, since many of the error distributions included in the calculation (e.g. density, chemical concentration, volume-model selection, collapse ratio) are decay-class specific.
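
    As an illustration of the propagation approach, the sketch below draws each input of a simple pool calculation (volume × density × concentration) from an assumed error distribution and reads the pool uncertainty off the simulated ensemble. All parameter values are invented for illustration, not the study's fitted distributions:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000  # Monte Carlo draws

    # Assumed (illustrative) error distributions for one decay class:
    volume = rng.normal(12.0, 1.5, n)    # m^3/ha: sampling + volume-model error
    density = rng.normal(0.35, 0.05, n)  # Mg/m^3: decay-class-specific density
    n_conc = rng.normal(4.5, 0.9, n)     # g N per kg wood: chemistry + analysis

    pool = volume * density * n_conc     # kg N per ha (Mg * g/kg = kg)
    lo, hi = np.percentile(pool, [2.5, 97.5])
    print(f"N pool: {pool.mean():.1f} kg/ha (95% interval {lo:.1f}-{hi:.1f})")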

  10. Quantification of Greenhouse Gas Emission Rates from strong Point Sources by Space-borne IPDA Lidar Measurements: Results from a Sensitivity Analysis Study

    NASA Astrophysics Data System (ADS)

    Ehret, G.; Kiemle, C.; Rapp, M.

    2017-12-01

    The practical implementation of the Paris Agreement (COP21) would profit greatly from an independent, reliable and global measurement system for greenhouse gas emissions, in particular of CO2, to complement and cross-check national efforts. Most fossil-fuel CO2 emissions emanate from large sources such as cities and power plants. These emissions increase the local CO2 abundance in the atmosphere by 1-10 parts per million (ppm), a signal significantly larger than the variability from natural sources and sinks over the local source domain. Despite these large signals, they are only sparsely sampled by the ground-based network, which calls for satellite measurements. However, none of the existing and forthcoming passive satellite instruments, operating in the NIR spectral domain, can measure CO2 emissions at night, in low-sunlight conditions, or in high-latitude regions in winter. The resulting sparse coverage of passive spectrometers is a serious limitation, particularly for the Northern Hemisphere, since these regions exhibit substantial emissions during the winter as well as at other times of the year. In contrast, CO2 measurements by an Integrated Path Differential Absorption (IPDA) Lidar are largely immune to these limitations, and initial results from airborne application look promising. In this study, we discuss the implications for a space-borne IPDA Lidar system. A Gaussian plume model is used to simulate the CO2 distribution downstream of large power plants. The space-borne measurements are simulated by applying a simple forward model based on a Gaussian error distribution. Besides the sampling frequency, the sampling geometry (e.g., measurement distance to the emitting source) and the error of the measurement itself strongly affect the flux-inversion performance. We discuss the results by incorporating Gaussian plume and mass-budget approaches to quantify the emission rates.
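
    For concreteness, the downstream concentration field of such a point source is commonly written as a Gaussian plume with reflecting ground. A hedged sketch (the dispersion growth laws and source parameters below are placeholders, not the study's configuration):

    import numpy as np

    def plume_concentration(x, y, z, Q=40.0, u=5.0, H=200.0):
        # Gaussian plume (kg/m^3) at downwind x, crosswind y, height z (m)
        # for emission rate Q (kg/s), wind speed u (m/s), stack height H (m).
        sigma_y = 0.08 * x / np.sqrt(1.0 + 1e-4 * x)    # assumed growth laws
        sigma_z = 0.06 * x / np.sqrt(1.0 + 1.5e-3 * x)
        lateral = np.exp(-y ** 2 / (2.0 * sigma_y ** 2))
        vertical = (np.exp(-(z - H) ** 2 / (2.0 * sigma_z ** 2))
                    + np.exp(-(z + H) ** 2 / (2.0 * sigma_z ** 2)))  # ground reflection
        return Q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

    # Centre-line concentration 5 km downwind of a 40 kg/s source.
    print(plume_concentration(5000.0, 0.0, 200.0))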

  11. Use of a portable, automated, open-circuit gas quantification system and the sulfur hexafluoride tracer technique for measuring enteric methane emissions in Holstein cows fed ad libitum or restricted

    USDA-ARS's Scientific Manuscript database

    The sulfur hexafluoride tracer technique (SF6) is a commonly used method for measuring CH4 enteric emissions in ruminants. Studies using SF6 have shown large variation in CH4 emissions data, inconsistencies in CH4 emissions across studies, and potential methodological errors. Therefore, th...

  12. Spectral risk measures: the risk quadrangle and optimal approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kouri, Drew P.

    We develop a general risk quadrangle that gives rise to a large class of spectral risk measures. The statistic of this new risk quadrangle is the average value-at-risk at a specific confidence level. As such, this risk quadrangle generates a continuum of error measures that can be used for superquantile regression. For risk-averse optimization, we introduce an optimal approximation of spectral risk measures using quadrature. Lastly, we prove the consistency of this approximation and demonstrate our results through numerical examples.

  13. Spectral risk measures: the risk quadrangle and optimal approximation

    DOE PAGES

    Kouri, Drew P.

    2018-05-24

    We develop a general risk quadrangle that gives rise to a large class of spectral risk measures. The statistic of this new risk quadrangle is the average value-at-risk at a specific confidence level. As such, this risk quadrangle generates a continuum of error measures that can be used for superquantile regression. For risk-averse optimization, we introduce an optimal approximation of spectral risk measures using quadrature. Lastly, we prove the consistency of this approximation and demonstrate our results through numerical examples.
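
    The statistic named here, the average value-at-risk (also called CVaR or the superquantile) at confidence level α, has a standard empirical estimator: the α-quantile plus the scaled mean excess beyond it. A minimal sketch of that plain sample estimator (not the paper's quadrature-based approximation):

    import numpy as np

    def avar(losses, alpha):
        # Average value-at-risk at confidence level alpha: VaR plus the mean
        # excess over VaR, scaled by the tail probability (Rockafellar-Uryasev).
        var = np.quantile(losses, alpha)
        return var + np.mean(np.maximum(losses - var, 0.0)) / (1.0 - alpha)

    rng = np.random.default_rng(1)
    x = rng.standard_normal(1_000_000)
    print(avar(x, 0.95))  # ~2.06 for a standard normal loss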

  14. Geometrical Characterisation of a 2D Laser System and Calibration of a Cross-Grid Encoder by Means of a Self-Calibration Methodology

    PubMed Central

    Torralba, Marta; Díaz-Pérez, Lucía C.

    2017-01-01

    This article presents a self-calibration procedure and the experimental results for the geometrical characterisation of a 2D laser system operating along a large working range (50 mm × 50 mm) with submicrometre uncertainty. Its purpose is to correct the geometric errors of the 2D laser system setup generated when positioning the two laser heads and the plane mirrors used as reflectors. The non-calibrated artefact used in this procedure is a commercial grid encoder that is also a measuring instrument. Therefore, the self-calibration procedure also allows the determination of the geometrical errors of the grid encoder, including its squareness error. The precision of the proposed algorithm is tested using virtual data. Actual measurements are subsequently registered, and the algorithm is applied. Once the laser system is characterised, the error of the grid encoder is calculated along the working range, resulting in an expanded submicrometre calibration uncertainty (k = 2) for the X and Y axes. The results of the grid encoder calibration are comparable to the errors provided by the calibration certificate for its main central axes. It is, therefore, possible to confirm the suitability of the self-calibration methodology proposed in this article. PMID:28858239

  15. Calibration of low-temperature ac susceptometers with a copper cylinder standard

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, D.-X.; Skumryev, V.

    2010-02-15

    A high-quality low-temperature ac susceptometer is calibrated by comparing the measured ac susceptibility of a copper cylinder with its eddy-current ac susceptibility accurately calculated. Different from conventional calibration techniques, which compare the measured results with the known property of a standard sample at certain fixed temperature T, field amplitude H_m, and frequency f to get a magnitude correction factor, here the electromagnetic properties of the copper cylinder are unknown and are determined during the calibration of the ac susceptometer in the entire T, H_m, and f range. It is shown that the maximum magnitude error and the maximum phase error of the susceptometer are less than 0.7% and 0.3 deg., respectively, in the region T = 5-300 K and f = 111-1111 Hz at H_m = 800 A/m, after a magnitude correction by a constant factor as done in a conventional calibration. However, the magnitude and phase errors can reach 2% and 4.3 deg. at 10 000 and 11 Hz, respectively. Since the errors are reproducible, a large portion of them may be further corrected after a calibration, the procedure for which is given. Conceptual discussions concerning the error sources, comparison with other calibration methods, and applications of ac susceptibility techniques are presented.

  16. Real-time and on-demand buoy observation system for tsunami and crustal displacement

    NASA Astrophysics Data System (ADS)

    Takahashi, N.; Imai, K.; Ishihara, Y.; Fukuda, T.; Ochi, H.; Suzuki, K.; Kido, M.; Ohta, Y.; Imano, M.; Hino, R.

    2017-12-01

    We develop a real-time, on-demand buoy observation system for tsunami and crustal displacement. Observation of crustal displacement is indispensable for understanding changes in the stress field related to future large earthquakes. At present, such observations are carried out from a vessel a few times per year. When a large earthquake occurs, however, dense or on-demand observation of the crustal displacement is needed to grasp the nature of the slow slip after the rupture. We therefore constructed a buoy system with a buoy station, a wire-end station, a seafloor unit and acoustic transponders for crustal displacement, and installed a pressure sensor on the seafloor unit and a GNSS system on the buoy, in addition to measuring the distance between the buoy and the seafloor acoustic transponders. Tsunami is evaluated using the GNSS data and the pressure data sent from the seafloor. The observation error of the GNSS is about 10 cm. The crustal displacement is estimated using the pressure sensor for the vertical component and acoustic measurement for the horizontal component. With the current slack ratio of 1.58, the observation error for the measurement of crustal displacement is about 10 cm. We carried out three sea trials and confirmed data acquisition of high quality and mooring without dragging the anchor in a strong sea current with a speed of 5.5 knots. Current issues to be resolved are noise in the acoustic data transmission, data transmission between the buoy and wire-end stations, electrical consumption of the buoy station, and the large observation error in crustal displacement due to the large slack ratio. We are considering changing the acoustic transmission of pressure data, replacing a GNSS data logger that has large electrical consumption, reducing the slack ratio, and finding methods to reduce the drag of the buoy in the sea water. In this presentation, we introduce the current status of the technical development and show tsunami waveforms recorded on our seafloor unit during recent tsunami-generating earthquakes as a data-quality check.

  17. The Quantum Socket: Wiring for Superconducting Qubits - Part 3

    NASA Astrophysics Data System (ADS)

    Mariantoni, M.; Bejianin, J. H.; McConkey, T. G.; Rinehart, J. R.; Bateman, J. D.; Earnest, C. T.; McRae, C. H.; Rohanizadegan, Y.; Shiri, D.; Penava, B.; Breul, P.; Royak, S.; Zapatka, M.; Fowler, A. G.

    The implementation of a quantum computer requires quantum error correction codes, which allow errors occurring on physical quantum bits (qubits) to be corrected. Ensembles of physical qubits are grouped to form a logical qubit with a lower error rate. Reaching low error rates will necessitate a large number of physical qubits, so a scalable qubit architecture must be developed. Superconducting qubits have been used to realize error correction; however, a truly scalable qubit architecture has yet to be demonstrated. A critical step towards scalability is the realization of a wiring method that allows qubits to be addressed densely and accurately. A quantum socket that serves this purpose has been designed and tested at microwave frequencies. In this talk, we show results where the socket is used at millikelvin temperatures to measure an on-chip superconducting resonator. The control electronics are another fundamental element for scalability. We will present a proposal based on the quantum socket to interconnect classical control hardware with superconducting qubit hardware, where both are operated at millikelvin temperatures.

  18. Modeling error in experimental assays using the bootstrap principle: Understanding discrepancies between assays using different dispensing technologies

    PubMed Central

    Hanson, Sonya M.; Ekins, Sean; Chodera, John D.

    2015-01-01

    All experimental assay data contains error, but the magnitude, type, and primary origin of this error is often not obvious. Here, we describe a simple set of assay modeling techniques based on the bootstrap principle that allow sources of error and bias to be simulated and propagated into assay results. We demonstrate how deceptively simple operations—such as the creation of a dilution series with a robotic liquid handler—can significantly amplify imprecision and even contribute substantially to bias. To illustrate these techniques, we review an example of how the choice of dispensing technology can impact assay measurements, and show how large contributions to discrepancies between assays can be easily understood and potentially corrected for. These simple modeling techniques—illustrated with an accompanying IPython notebook—can allow modelers to understand the expected error and bias in experimental datasets, and even help experimentalists design assays to more effectively reach accuracy and imprecision goals. PMID:26678597
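
    A flavour of the approach: simulate a robotic two-fold dilution series many times with an assumed pipetting imprecision (CV) and a small systematic bias, and watch both compound down the series. The sketch below is in the spirit of the paper's accompanying notebook but uses invented parameter values:

    import numpy as np

    rng = np.random.default_rng(42)

    def simulate_dilution_series(n_steps=8, cv=0.02, bias=0.01, n_rep=10_000):
        # Each transfer is nominally 1:2 but carries a relative bias and a
        # random error with coefficient of variation cv; both compound.
        conc = np.ones(n_rep)
        history = [conc.copy()]
        for _ in range(n_steps):
            transfer = 0.5 * (1.0 + bias) * (1.0 + cv * rng.standard_normal(n_rep))
            conc = conc * transfer
            history.append(conc.copy())
        return np.array(history)

    series = simulate_dilution_series()
    nominal = 0.5 ** np.arange(series.shape[0])
    print("relative bias per step:", series.mean(axis=1) / nominal - 1.0)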

  19. A Criterion to Control Nonlinear Error in the Mixed-Mode Bending Test

    NASA Technical Reports Server (NTRS)

    Reeder, James R.

    2002-01-01

    The mixed-mode bending (MMB) test has been widely used to measure delamination toughness and was recently standardized by ASTM as Standard Test Method D6671-01. This simple test is a combination of the standard Mode I (opening) test and a Mode II (sliding) test. The test uses a unidirectional composite specimen with an artificial delamination, subjected to bending loads, to characterize when a delamination will extend. When the displacements become large, the linear theory used to analyze the results of the test yields errors in the calculated toughness values. The current standard places no limit on the specimen loading, and therefore test data can be created using the standard that are significantly in error. A method of limiting the error that can be incurred in the calculated toughness values is needed. In this paper, nonlinear models of the MMB test are refined. One of the nonlinear models is then used to develop a simple criterion for prescribing conditions where the nonlinear error will remain below 5%.

  20. Carbon dioxide emission tallies for 210 U.S. coal-fired power plants: a comparison of two accounting methods.

    PubMed

    Quick, Jeffrey C

    2014-01-01

    Annual CO2 emission tallies for 210 coal-fired power plants during 2009 were more accurately calculated from fuel consumption records reported by the U.S. Energy Information Administration (EIA) than from measurements by Continuous Emissions Monitoring Systems (CEMS) reported by the U.S. Environmental Protection Agency. Results from these accounting methods for individual plants vary by ±10.8%. Although the differences vary systematically with the method used to certify flue-gas flow instruments in CEMS, additional sources of CEMS measurement error remain to be identified. Limitations of the EIA fuel consumption data are also discussed. Consideration of weighing, sample collection, laboratory analysis, emission factor, and stock adjustment errors showed that the minimum error for CO2 emissions calculated from the fuel consumption data ranged from ±1.3% to ±7.2%, with a plant average of ±1.6%. This error might be reduced by 50% if the carbon content of coal delivered to U.S. power plants were reported. Potentially, this study might inform efforts to regulate CO2 emissions (such as CO2 performance standards or taxes) and, more immediately, the U.S. Greenhouse Gas Reporting Rule, where large coal-fired power plants currently use CEMS to measure CO2 emissions. Moreover, if, as suggested here, the flue-gas flow measurement limits the accuracy of CO2 emission tallies from CEMS, then the accuracy of other emission tallies from CEMS (such as SO2, NOx, and Hg) would be similarly affected. Consequently, improved flue-gas flow measurements are needed to increase the reliability of emission measurements from CEMS.
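
    The fuel-based tally itself is elementary: carbon burned times the CO2-to-carbon mass ratio of 44/12. A hedged sketch with invented plant numbers (EIA records supply tonnage and heat content; the carbon fraction would come from coal quality data):

    CO2_PER_C = 44.0 / 12.0  # mass ratio of CO2 to carbon

    def co2_tonnes(coal_tonnes, carbon_fraction, unburned_fraction=0.005):
        # CO2 emitted from coal burned, less the small carbon share that
        # leaves unoxidised in ash (value assumed here for illustration).
        return coal_tonnes * carbon_fraction * (1.0 - unburned_fraction) * CO2_PER_C

    # A plant burning 3 Mt of 65%-carbon coal emits roughly 7.1 Mt of CO2.
    print(f"{co2_tonnes(3.0e6, 0.65):,.0f} t CO2")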

  1. The mean sea surface height and geoid along the Geosat subtrack from Bermuda to Cape Cod

    NASA Astrophysics Data System (ADS)

    Kelly, Kathryn A.; Joyce, Terrence M.; Schubert, David M.; Caruso, Michael J.

    1991-07-01

    Measurements of near-surface velocity and concurrent sea level along an ascending Geosat subtrack were used to estimate the mean sea surface height and the Earth's gravitational geoid. Velocity measurements were made on three traverses of a Geosat subtrack within 10 days, using an acoustic Doppler current profiler (ADCP). A small bias in the ADCP velocity was removed by considering a mass balance for two pairs of triangles for which expendable bathythermograph measurements were also made. Because of the large curvature of the Gulf Stream, the gradient wind balance was used to estimate the cross-track component of geostrophic velocity from the ADCP vectors; this component was then integrated to obtain the sea surface height profile. The mean sea surface height was estimated as the difference between the instantaneous sea surface height from ADCP and the Geosat residual sea level, with mesoscale errors reduced by low-pass filtering. The error estimates were divided into a bias, tilt, and mesoscale residual; the bias was ignored because profiles were only determined within a constant of integration. The calculated mean sea surface height estimate agreed with an independent estimate of the mean sea surface height from Geosat, obtained by modeling the Gulf Stream as a Gaussian jet, within the expected errors in the estimates: the tilt error was 0.10 m, and the mesoscale error was 0.044 m. To minimize mesoscale errors in the estimate, the along-track geoid estimate was computed as the difference between the mean sea level from the Geosat Exact Repeat Mission and an estimate of the mean sea surface height, rather than as the difference between instantaneous profiles of sea level and sea surface height. In the critical region near the Gulf Stream the estimated error reduction using this method was about 0.07 m. Differences between the geoid estimate and a gravimetric geoid were not within the expected errors: the rms mesoscale difference was 0.24 m.

  2. MEASURING LENSING MAGNIFICATION OF QUASARS BY LARGE SCALE STRUCTURE USING THE VARIABILITY-LUMINOSITY RELATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauer, Anne H.; Seitz, Stella; Jerke, Jonathan

    2011-05-10

    We introduce a technique to measure gravitational lensing magnification using the variability of type I quasars. Quasars' variability amplitudes and luminosities are tightly correlated, on average. Magnification due to gravitational lensing increases the quasars' apparent luminosity while leaving the variability amplitude unchanged. Therefore, the mean magnification of an ensemble of quasars can be measured through the mean shift in the variability-luminosity relation. As a proof of principle, we use this technique to measure the magnification of quasars spectroscopically identified in the Sloan Digital Sky Survey (SDSS) due to gravitational lensing by galaxy clusters in the SDSS MaxBCG catalog. The Palomar-QUEST Variability Survey, reduced using the DeepSky pipeline, provides variability data for the sources. We measure the average quasar magnification as a function of scaled distance (r/R_200) from the nearest cluster; our measurements are consistent with expectations assuming Navarro-Frenk-White cluster profiles, particularly after accounting for the known uncertainty in the clusters' centers. Variability-based lensing measurements are a valuable complement to shape-based techniques because their systematic errors are very different, and also because the variability measurements are amenable to photometric errors of a few percent and to depths seen in current wide-field surveys. Given the volume of data expected from current and upcoming surveys, this new technique has the potential to be competitive with weak lensing shear measurements of large-scale structure.

  3. Field evaluation of distance-estimation error during wetland-dependent bird surveys

    USGS Publications Warehouse

    Nadeau, Christopher P.; Conway, Courtney J.

    2012-01-01

    Context: The most common methods to estimate detection probability during avian point-count surveys involve recording a distance between the survey point and individual birds detected during the survey period. Accurately measuring or estimating distance is an important assumption of these methods; however, this assumption is rarely tested in the context of aural avian point-count surveys. Aims: We expand on recent bird-simulation studies to document the error associated with estimating distance to calling birds in a wetland ecosystem. Methods: We used two approaches to estimate the error associated with five surveyors' distance estimates between the survey point and calling birds, and to determine the factors that affect a surveyor's ability to estimate distance. Key results: We observed biased and imprecise distance estimates when estimating distance to simulated birds in a point-count scenario (x̄_error = -9 m, s.d._error = 47 m) and when estimating distances to real birds during field trials (x̄_error = 39 m, s.d._error = 79 m). The amount of bias and precision in distance estimates differed among surveyors; surveyors with more training and experience were less biased and more precise when estimating distance to both real and simulated birds. Three environmental factors were important in explaining the error associated with distance estimates: the measured distance from the bird to the surveyor, the volume of the call, and the species of bird. Surveyors tended to make large overestimations for birds close to the survey point, which is an especially serious error in distance sampling. Conclusions: Our results suggest that distance-estimation error is prevalent, but surveyor training may be the easiest way to reduce it. Implications: The present study has demonstrated how relatively simple field trials can be used to estimate the error associated with distance estimates used to estimate detection probability during avian point-count surveys. Evaluating distance-estimation errors will allow investigators to better evaluate the accuracy of avian density and trend estimates. Moreover, investigators who evaluate distance-estimation errors could employ recently developed models to incorporate distance-estimation error into analyses. We encourage further development of such models, including their inclusion in distance-analysis software.

  4. Distance Measurement Error in Time-of-Flight Sensors Due to Shot Noise

    PubMed Central

    Illade-Quinteiro, Julio; Brea, Víctor M.; López, Paula; Cabello, Diego; Doménech-Asensi, Gines

    2015-01-01

    Unlike other noise sources, which can be reduced or eliminated by different signal processing techniques, shot noise is an ever-present noise component in any imaging system. In this paper, we present an in-depth study of the impact of shot noise on time-of-flight sensors in terms of the error introduced in the distance estimation. The paper addresses the effect of parameters such as the size of the photosensor, the background and signal power, or the integration time, and the resulting design trade-offs. The study is demonstrated with different numerical examples, which show that, in general, the phase-shift determination technique with two background measurements is the most suitable approach for pixel arrays of large resolution. PMID:25723141
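
    To make the setting concrete, the sketch below simulates the widely used four-bucket phase-shift estimator with Poisson shot noise on each correlation sample; the signal and background electron counts are assumed values, not the paper's pixel model:

    import numpy as np

    C = 3.0e8       # speed of light, m/s
    F_MOD = 20.0e6  # modulation frequency, Hz (unambiguous range c/2f = 7.5 m)

    def tof_distance(signal_e=2_000, background_e=10_000, true_dist=5.0,
                     rng=np.random.default_rng(3)):
        phi = 4.0 * np.pi * F_MOD * true_dist / C          # true phase shift
        offsets = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])
        ideal = background_e + signal_e * (1.0 + np.cos(phi - offsets))
        a0, a1, a2, a3 = rng.poisson(ideal)                # shot noise on each bucket
        phi_hat = np.arctan2(a1 - a3, a0 - a2)             # background cancels out
        return (C / (4.0 * np.pi * F_MOD)) * (phi_hat % (2.0 * np.pi))

    est = np.array([tof_distance() for _ in range(2_000)])
    print(f"distance spread from shot noise alone: {est.std() * 100:.1f} cm")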

  5. Design, calibration and validation of a novel 3D printed instrumented spatial linkage that measures changes in the rotational axes of the tibiofemoral joint.

    PubMed

    Bonny, Daniel P; Hull, M L; Howell, S M

    2014-01-01

    An accurate axis-finding technique is required to measure any changes from normal, caused by total knee arthroplasty, in the flexion-extension (F-E) and longitudinal rotation (LR) axes of the tibiofemoral joint. In a previous paper, we computationally determined how best to design and use an instrumented spatial linkage (ISL) to locate the F-E and LR axes such that rotational and translational errors were minimized. However, the ISL was not built and consequently was not calibrated; thus the errors in locating these axes were not quantified on an actual ISL. Moreover, previous methods to calibrate an ISL used calibration devices with accuracies that were either undocumented or insufficient for the device to serve as a gold standard. Accordingly, the objectives were to (1) construct an ISL using the previously established guidelines, (2) calibrate the ISL using an improved method, and (3) quantify the error in measuring changes in the F-E and LR axes. A 3D-printed ISL was constructed and calibrated using a coordinate measuring machine, which served as a gold standard. Validation was performed using a fixture that represented the tibiofemoral joint with an adjustable F-E axis, and the errors in measuring changes to the positions and orientations of the F-E and LR axes were quantified. The resulting root mean squared errors (RMSEs) of the calibration residuals using the new calibration method were 0.24, 0.33, and 0.15 mm for the anterior-posterior, medial-lateral, and proximal-distal positions, respectively, and 0.11, 0.10, and 0.09 deg for the varus-valgus, flexion-extension, and internal-external orientations, respectively. All RMSEs were below 0.29% of the respective full-scale range. When measuring changes to the F-E or LR axes, each orientation error was below 0.5 deg; when measuring changes in the F-E axis, each position error was below 1.0 mm. The largest position RMSE was when measuring a medial-lateral change in the LR axis (1.2 mm). Despite the large size of the ISL, these calibration residuals were better than those for previously published ISLs, particularly when measuring orientations, indicating that using a more accurate gold standard was beneficial in limiting the calibration residuals. The validation method demonstrated that this ISL is capable of accurately measuring clinically important changes (i.e. 1 mm and 1 deg) in the F-E and LR axes.

  6. Evaluating the accuracy and large inaccuracy of two continuous glucose monitoring systems.

    PubMed

    Leelarathna, Lalantha; Nodale, Marianna; Allen, Janet M; Elleri, Daniela; Kumareswaran, Kavita; Haidar, Ahmad; Caldwell, Karen; Wilinska, Malgorzata E; Acerini, Carlo L; Evans, Mark L; Murphy, Helen R; Dunger, David B; Hovorka, Roman

    2013-02-01

    This study evaluated the accuracy and large inaccuracy of the Freestyle Navigator (FSN) (Abbott Diabetes Care, Alameda, CA) and Dexcom SEVEN PLUS (DSP) (Dexcom, Inc., San Diego, CA) continuous glucose monitoring (CGM) systems during closed-loop studies. Paired CGM and plasma glucose values (7,182 data pairs) were collected, every 15-60 min, from 32 adults (36.2±9.3 years) and 20 adolescents (15.3±1.5 years) with type 1 diabetes who participated in closed-loop studies. Levels 1, 2, and 3 of large sensor error with increasing severity were defined according to absolute relative deviation greater than or equal to ±40%, ±50%, and ±60% at a reference glucose level of ≥6 mmol/L or absolute deviation greater than or equal to ±2.4 mmol/L,±3.0 mmol/L, and ±3.6 mmol/L at a reference glucose level of <6 mmol/L. Median absolute relative deviation was 9.9% for FSN and 12.6% for DSP. Proportions of data points in Zones A and B of Clarke error grid analysis were similar (96.4% for FSN vs. 97.8% for DSP). Large sensor over-reading, which increases risk of insulin over-delivery and hypoglycemia, occurred two- to threefold more frequently with DSP than FSN (once every 2.5, 4.6, and 10.7 days of FSN use vs. 1.2, 2.0, and 3.7 days of DSP use for Level 1-3 errors, respectively). At levels 2 and 3, large sensor errors lasting 1 h or longer were absent with FSN but persisted with DSP. FSN and DSP differ substantially in the frequency and duration of large inaccuracy despite only modest differences in conventional measures of numerical and clinical accuracy. Further evaluations are required to confirm that FSN is more suitable for integration into closed-loop delivery systems.

  7. Software Manages Documentation in a Large Test Facility

    NASA Technical Reports Server (NTRS)

    Gurneck, Joseph M.

    2001-01-01

    The 3MCS computer program assists an instrumentation engineer in performing the three essential functions of design, documentation, and configuration management of measurement and control systems in a large test facility. Services provided by 3MCS are acceptance of input from multiple engineers and technicians working at multiple locations; standardization of drawings; automated cross-referencing; identification of errors; listing of components and resources; downloading of test settings; and provision of information to customers.

  8. Measurement of Thunderstorm Cloud-Top Parameters Using High-Frequency Satellite Imagery

    DTIC Science & Technology

    1978-01-01

    short wave was present well to the south of this system, approximately 2000 km west of Baja California. Two distinct flow patterns were present, one...view can be observed in near real time whereas radar observations, although excellent for local purposes, involve substantial errors when composited...on a large scale. The time delay in such large-scale compositing is critical when attempting to monitor convective cloud systems for a potential

  9. Two-Dimensional Micro-/Nanoradian Angle Generator with High Resolution and Repeatability Based on Piezo-Driven Double-Axis Flexure Hinge and Three Capacitive Sensors.

    PubMed

    Tan, Xinran; Zhu, Fan; Wang, Chao; Yu, Yang; Shi, Jian; Qi, Xue; Yuan, Feng; Tan, Jiubin

    2017-11-19

    This study presents a two-dimensional micro-/nanoradian angle generator (2D-MNAG) that achieves high angular displacement resolution and repeatability using a piezo-driven flexure hinge for two-dimensional deflections and three capacitive sensors for output angle monitoring and feedback control. The principal error of the capacitive sensor for precision microangle measurement is analyzed and compensated for so as to achieve a high angle output resolution of 10 nrad (0.002 arcsec) and positioning repeatability of 120 nrad (0.024 arcsec) over a large angular range of ±4363 μrad (±900 arcsec) for the 2D-MNAG. The impact of each error component, together with the synthetic error of the 2D-MNAG after principal error compensation, is determined using Monte Carlo simulation for further improvement of the 2D-MNAG.

  10. Two-Dimensional Micro-/Nanoradian Angle Generator with High Resolution and Repeatability Based on Piezo-Driven Double-Axis Flexure Hinge and Three Capacitive Sensors

    PubMed Central

    Tan, Xinran; Zhu, Fan; Wang, Chao; Yu, Yang; Shi, Jian; Qi, Xue; Yuan, Feng; Tan, Jiubin

    2017-01-01

    This study presents a two-dimensional micro-/nanoradian angle generator (2D-MNAG) that achieves high angular displacement resolution and repeatability using a piezo-driven flexure hinge for two-dimensional deflections and three capacitive sensors for output angle monitoring and feedback control. The principal error of the capacitive sensor for precision microangle measurement is analyzed and compensated for so as to achieve a high angle output resolution of 10 nrad (0.002 arcsec) and positioning repeatability of 120 nrad (0.024 arcsec) over a large angular range of ±4363 μrad (±900 arcsec) for the 2D-MNAG. The impact of each error component, together with the synthetic error of the 2D-MNAG after principal error compensation, is determined using Monte Carlo simulation for further improvement of the 2D-MNAG. PMID:29156595

  11. Applying large datasets to developing a better understanding of air leakage measurement in homes

    DOE PAGES

    Walker, I. S.; Sherman, M. H.; Joh, J.; ...

    2013-03-01

    Air tightness is an important property of building envelopes. It is a key factor in determining infiltration and related wall-performance properties such as indoor air quality, maintainability and moisture balance. Air leakage in U.S. houses consumes roughly 1/3 of the HVAC energy but provides most of the ventilation used to control IAQ. There are several methods for measuring air tightness that may result in different values and sometimes quite different uncertainties. The two main approaches trade off bias and precision errors and thus result in different outcomes for accuracy and repeatability. To interpret results from the two approaches, various questions need to be addressed, such as the need to measure the flow exponent, the need to make both pressurization and depressurization measurements, and the role of wind in determining the accuracy and precision of the results. This article uses two large datasets of blower door measurements to reach the following conclusions. For most tests the pressure exponent should be measured, but for wind speeds greater than 6 m/s a fixed pressure exponent reduces experimental error. The variability in reported pressure exponents is mostly due to changes in envelope leakage characteristics. Finally, it is preferable to test in both pressurization and depressurization modes due to significant differences between the results in these two modes.
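
    Both approaches fit the standard envelope power law Q = C·ΔPⁿ to the blower door data; measuring versus fixing the exponent n is the trade-off discussed above. A hedged sketch of both fits on invented data points:

    import numpy as np

    dp = np.array([12.5, 25.0, 37.5, 50.0, 62.5])      # pressure difference, Pa
    q = np.array([310.0, 490.0, 640.0, 760.0, 880.0])  # measured flow, m^3/h

    # Fit ln Q = ln C + n ln(dP) by least squares (exponent measured).
    A = np.column_stack([np.ones_like(dp), np.log(dp)])
    (ln_c, n), *_ = np.linalg.lstsq(A, np.log(q), rcond=None)
    print(f"measured exponent: C = {np.exp(ln_c):.1f}, n = {n:.3f}")

    # Alternative for windy conditions: fix n (0.65 is a typical assumed value).
    n_fixed = 0.65
    c_fixed = np.exp(np.mean(np.log(q) - n_fixed * np.log(dp)))
    print(f"fixed-exponent fit: C = {c_fixed:.1f}")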

  12. Recovery of Large Angular Scale CMB Polarization for Instruments Employing Variable-Delay Polarization Modulators

    NASA Technical Reports Server (NTRS)

    Miller, N. J.; Chuss, D. T.; Marriage, T. A.; Wollack, E. J.; Appel, J. W.; Bennett, C. L.; Eimer, J.; Essinger-Hileman, T.; Fixsen, D. J.; Harrington, K.; hide

    2016-01-01

    Variable-delay Polarization Modulators (VPMs) are currently being implemented in experiments designed to measure the polarization of the cosmic microwave background on large angular scales because of their capability for providing rapid, front-end polarization modulation and control over systematic errors. Despite the advantages provided by the VPM, it is important to identify and mitigate any time-varying effects that leak into the synchronously modulated component of the signal. In this paper, the effect of emission from a 300 K VPM on the system performance is considered and addressed. Though instrument design can greatly reduce the influence of modulated VPM emission, some residual modulated signal is expected. VPM emission is treated in the presence of rotational misalignments and temperature variation. Simulations of time-ordered data are used to evaluate the effect of these residual errors on the power spectrum. The analysis and modeling in this paper guides experimentalists on the critical aspects of observations using VPMs as front-end modulators. By implementing the characterizations and controls as described, front-end VPM modulation can be very powerful for mitigating 1/f noise in large angular scale polarimetric surveys. None of the systematic errors studied fundamentally limit the detection and characterization of B-modes on large scales for a tensor-to-scalar ratio of r = 0.01. Indeed, r less than 0.01 is achievable with commensurately improved characterizations and controls.

  13. Recovery of Bennu's orientation for the OSIRIS-REx mission: implications for the spin state accuracy and geolocation errors

    NASA Astrophysics Data System (ADS)

    Mazarico, Erwan; Rowlands, David D.; Sabaka, Terence J.; Getzandanner, Kenneth M.; Rubincam, David P.; Nicholas, Joseph B.; Moreau, Michael C.

    2017-10-01

    The goal of the OSIRIS-REx mission is to return a sample of asteroid material from near-Earth asteroid (101955) Bennu. The role of the navigation and flight dynamics team is critical for the spacecraft to execute a precisely planned sampling maneuver over a specifically selected landing site. In particular, the orientation of Bennu needs to be recovered with good accuracy during orbital operations to contribute as small an error as possible to the landing error budget. Although Bennu is well characterized from Earth-based radar observations, its orientation dynamics are not sufficiently known to exclude the presence of a small wobble. To better understand this contingency and evaluate how well the orientation can be recovered in the presence of a large 1° wobble, we conduct a comprehensive simulation with the NASA GSFC GEODYN orbit determination and geodetic parameter estimation software. We describe the dynamic orientation modeling implemented in GEODYN in support of OSIRIS-REx operations and show how both altimetry and imagery data can be used as either undifferenced (landmark, direct altimetry) or differenced (image crossover, altimetry crossover) measurements. We find that these two different types of data contribute differently to the recovery of instrument pointing or planetary orientation. When upweighted, the absolute measurements help reduce the geolocation errors, despite poorer astrometric (inertial) performance. We find that with no wobble present, all the geolocation requirements are met. While the presence of a large wobble is detrimental, the recovery is still reliable thanks to the combined use of altimetry and imagery data.

  14. Appropriate Objective Functions for Quantifying Iris Mechanical Properties Using Inverse Finite Element Modeling.

    PubMed

    Pant, Anup D; Dorairaj, Syril K; Amini, Rouzbeh

    2018-07-01

    Quantifying the mechanical properties of the iris is important, as it provides insight into the pathophysiology of glaucoma. Recent ex vivo studies have shown that the mechanical properties of the iris are different in glaucomatous eyes as compared to normal ones. Notwithstanding the importance of the ex vivo studies, such measurements are severely limited for diagnosis and preclude development of treatment strategies. With the advent of detailed imaging modalities, it is possible to determine the in vivo mechanical properties using inverse finite element (FE) modeling. An inverse modeling approach requires an appropriate objective function for reliable estimation of parameters. In the case of the iris, numerous measurements such as iris chord length (CL) and iris concavity (CV) are made routinely in clinical practice. In this study, we have evaluated five different objective functions chosen based on the iris biometrics (in the presence and absence of clinical measurement errors) to determine the appropriate criterion for inverse modeling. Our results showed that in the absence of experimental measurement error, a combination of iris CL and CV can be used as the objective function. However, with the addition of measurement errors, the objective functions that employ a large number of local displacement values provide more reliable outcomes.

  15. Measurement error in the Liebowitz Social Anxiety Scale: results from a general adult population in Japan.

    PubMed

    Takada, Koki; Takahashi, Kana; Hirao, Kazuki

    2018-01-17

    Although the self-report version of the Liebowitz Social Anxiety Scale (LSAS) is frequently used to measure social anxiety, data are lacking on the smallest detectable change (SDC), an important index of measurement error. We therefore aimed to determine the SDC of the LSAS. Japanese adults aged 20-69 years were invited from a panel managed by a nationwide internet research agency. We then conducted a test-retest internet survey with a two-week interval to estimate the SDC at the individual (SDC_ind) and group (SDC_group) levels. The analysis included 1300 participants. The SDC_ind and SDC_group for the total fear subscale (scoring range: 0-72) were 23.52 points (32.7%) and 0.65 points (0.9%), respectively. The SDC_ind and SDC_group for the total avoidance subscale (scoring range: 0-72) were 32.43 points (45.0%) and 0.90 points (1.2%), respectively. The SDC_ind and SDC_group for the overall total score (scoring range: 0-144) were 45.90 points (31.9%) and 1.27 points (0.9%), respectively. The measurement error is large and indicates the potential for major problems when attempting to use the LSAS to detect changes at the individual level. These results should be considered when using the LSAS as a measure of treatment change.
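
    For readers unfamiliar with the index: the SDC is derived from the standard error of measurement (SEM) of the test-retest data, with SDC_ind = 1.96·√2·SEM and SDC_group = SDC_ind/√n. A minimal sketch with assumed inputs (the paper computes these from its own retest statistics):

    import math

    def sdc(sd, icc, n):
        # SEM from test-retest reliability, then the smallest detectable
        # change at the individual and group level (95% confidence).
        sem = sd * math.sqrt(1.0 - icc)
        sdc_ind = 1.96 * math.sqrt(2.0) * sem
        return sdc_ind, sdc_ind / math.sqrt(n)

    ind, grp = sdc(sd=15.0, icc=0.68, n=1300)  # illustrative inputs
    print(f"SDC_ind = {ind:.1f} points, SDC_group = {grp:.2f} points")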

  16. Correction of broadband snow albedo measurements affected by unknown slope and sensor tilts

    NASA Astrophysics Data System (ADS)

    Weiser, Ursula; Olefs, Marc; Schöner, Wolfgang; Weyss, Gernot; Hynek, Bernhard

    2016-04-01

    Geometric effects induced by the underlying terrain slope or by tilt errors of the radiation sensors lead to an erroneous measurement of snow or ice albedo. Consequently, artificial diurnal albedo variations in the order of 1-20% are observed. The present paper proposes a general method to correct tilt errors of albedo measurements in cases where tilts of both the sensors and the slopes are not accurately measured or known. We demonstrate that atmospheric parameters for this correction model can either be taken from a nearby well-maintained and horizontally levelled measurement of global radiation or, alternatively, from a solar radiation model. In a next step the model is fitted to the measured data to determine the tilts and directions of the sensors and the underlying terrain slope. This then allows us to correct the measured albedo, the radiative balance and the energy balance. Depending on the direction of the slope and the sensors, a comparison between measured and corrected albedo values reveals obvious over- or underestimations of albedo. It is also demonstrated that differences between measured and corrected albedo are generally highest for large solar zenith angles.

  17. Genetic and Environmental Contributions to Educational Attainment in Australia.

    ERIC Educational Resources Information Center

    Miller, Paul; Mulvey, Charles; Martin, Nick

    2001-01-01

    Data from a large sample of Australian twins indicate that 50 to 65 percent of variance in educational attainments can be attributed to genetic endowments. Only about 25 to 40 percent may be due to environmental factors, depending on adjustments for measurement error and assortative mating. (Contains 51 references.) (MLH)

  18. Descriptive Statistical Attributes of Special Education Data Sets

    ERIC Educational Resources Information Center

    Felder, Valerie

    2013-01-01

    Micceri (1989) examined the distributional characteristics of 440 large-sample achievement and psychometric measures. All the distributions were found to be nonnormal at alpha = 0.01. Micceri indicated three factors that might contribute to a non-Gaussian error distribution in the population. The first factor is subpopulations within a target…

  19. Mental Representation of Circuit Diagrams: Individual Differences in Procedural Knowledge.

    DTIC Science & Technology

    1983-12-01

    operation. One may know, for example, that a transformer serves to change the voltage of an AC supply, that a particular combination of transistors acts as a...and error measures with respect to overall performance. Even if a large sample could provide statistically significant differences between skill

  20. Using a Simple Optical Rangefinder To Teach Similar Triangles.

    ERIC Educational Resources Information Center

    Cuicchi, Paul M.; Hutchison, Paul S.

    2003-01-01

    Describes how the concept of similar triangles was taught using an optical method of estimating large distances as a corresponding activity. Includes the derivation of a formula to calculate one source of measurement error and is a nice exercise in the use of the properties of similar triangles. (Author/NB)
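
    The underlying estimate is a single proportion from similar triangles: a target of known size h that forms an image of size s through a lens of focal length f lies at distance D ≈ f·h/s. A minimal sketch with assumed numbers:

    def range_estimate(target_height_m, image_height_m, focal_length_m):
        # Similar triangles: D / h = f / s  =>  D = f * h / s.
        return focal_length_m * target_height_m / image_height_m

    # A 2 m target imaged at 0.4 mm through a 100 mm lens is ~500 m away.
    print(f"{range_estimate(2.0, 0.0004, 0.1):.0f} m")

    The measurement-error exercise mentioned above follows naturally: a small error in reading the image size s propagates into the distance estimate roughly in proportion to D/s.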

  1. Modeling Noisy Data with Differential Equations Using Observed and Expected Matrices

    ERIC Educational Resources Information Center

    Deboeck, Pascal R.; Boker, Steven M.

    2010-01-01

    Complex intraindividual variability observed in psychology may be well described using differential equations. It is difficult, however, to apply differential equation models in psychological contexts, as time series are frequently short, poorly sampled, and have large proportions of measurement and dynamic error. Furthermore, current methods for…

  2. MODIS-derived spatiotemporal water clarity patterns in optically shallow Florida Keys waters: A new approach to remove bottom contamination

    EPA Science Inventory

    Retrievals of water quality parameters from satellite measurements over optically shallow waters have been problematic due to bottom contamination of the signals. As a result, large errors are associated with derived water column properties. These deficiencies greatly reduce the ...

  3. Landmark-Based Navigation of an Unmanned Ground Vehicle (UGV)

    DTIC Science & Technology

    2009-03-01

    against large measurement errors. ...achieved as numerous low-cost gyroscopes in the market meet this requirement. ...Sensitivity to Vehicle Speed: In this subsection

  4. New Insights into Handling Missing Values in Environmental Epidemiological Studies

    PubMed Central

    Roda, Célina; Nicolis, Ioannis; Momas, Isabelle; Guihenneuc, Chantal

    2014-01-01

    Missing data are unavoidable in environmental epidemiologic surveys. The aim of this study was to compare methods for handling large amounts of missing values: omission of missing values, single and multiple imputation (through linear regression or partial least squares regression), and a fully Bayesian approach. These methods were applied to the PARIS birth cohort, where indoor domestic pollutant measurements were performed in a random sample of babies' dwellings. A simulation study was conducted to assess the performance of the different approaches with a high proportion of missing values (from 50% to 95%). Different simulation scenarios were carried out, controlling the true value of the association (odds ratio of 1.0, 1.2, and 1.4) and varying the health outcome prevalence. When a large amount of data is missing, omitting these missing data reduced statistical power and inflated standard errors, which affected the significance of the association. Single imputation underestimated the variability and considerably increased the risk of type I error. All approaches were conservative, except the Bayesian joint model. In the case of a common health outcome, the fully Bayesian approach is the most efficient (low root mean square error, reasonable type I error, and high statistical power); nevertheless, for a less prevalent event, the type I error is increased and the statistical power reduced. The estimated posterior distribution of the OR is useful to refine the conclusion. Among the methods for handling missing values, no approach is absolutely best, but when the usual approaches (e.g., single imputation) are not sufficient, jointly modelling the missingness process and the health association is more efficient when large amounts of data are missing. PMID:25226278

  5. The design and application of large area intensive lens array focal spots measurement system

    NASA Astrophysics Data System (ADS)

    Chen, Bingzhen; Yao, Shun; Yang, Guanghui; Dai, Mingchong; Wang, Zhiyong

    2014-12-01

    Concentrating photovoltaic (CPV) modules are nowadays getting thinner and using smaller cells. Correspondingly, large-area intensive lens arrays with smaller unit dimensions and shorter focal lengths are wanted. However, the size and power center of lens-array focal spots usually differ from the design values and are hard to measure, especially over a large area, because the machining error and material deformation of the lens array are hard to simulate in the optical design process. Thus the alignment error between solar cells and focal spots in the module assembly process is hard to control, and under these conditions the efficiency of a CPV module with a thinner body and smaller cells is much lower than expected. In this paper, a design for a large-area lens-array focal-spot automatic measurement system is presented, together with results from its prototype application. In this system, a four-channel parallel light path and its corresponding image capture and processing modules are designed. These modules simulate focal spots under sunlight and capture and process the spot images using charge-coupled devices and a gray-level algorithm, exporting the important focal-spot information such as spot size and location. A motion control module based on a grating-scale signal and an interval measurement method are also employed in this system in order to obtain test results with high speed and high precision on large-area lens arrays no less than 1 m × 0.8 m. The repeatability of the system prototype measurement is ±10 μm at a rate of 90 spots/min. Compared with the original module assembled using coordinates from the optical design, modules assembled using data exported from the prototype are 18% higher in output power, reaching a conversion efficiency of over 31%. This system and its design can be used for focal-spot measurement of plano-convex lens arrays and Fresnel lens arrays, as well as other large-area lens-array applications with small focal spots.

  6. Skylab S-193 radar altimeter experiment analyses and results

    NASA Technical Reports Server (NTRS)

    Brown, G. S. (Editor)

    1977-01-01

    The design of optimum filtering procedures for geoid recovery is discussed. Statistical error bounds are obtained for pointing angle estimates using average waveform data. A correlation of tracking loop bandwidth with magnitude of pointing error is established. The impact of ocean currents and precipitation on the received power is shown to be a measurable effect. For large sea state conditions, measurements of σ0 indicate a distinct saturation level of about 8 dB. Near-nadir (less than 15 deg) values of σ0 are also presented and compared with theoretical models. Examination of Great Salt Lake Desert scattering data leads to rejection of a previously hypothesized specularly reflecting surface. Pulse-to-pulse correlation results are in agreement with quasi-monochromatic optics theoretical predictions and indicate a means for estimating the direction of pointing error. Pulse compression techniques for estimating significant waveheight from waveform data are presented, and the results are shown to be in good agreement with surface truth data. A number of results pertaining to system performance are presented.

  7. Simple scheme for encoding and decoding a qubit in unknown state for various topological codes

    PubMed Central

    Łodyga, Justyna; Mazurek, Paweł; Grudka, Andrzej; Horodecki, Michał

    2015-01-01

    We present a scheme for encoding and decoding an unknown state for CSS codes, based on syndrome measurements. We illustrate our method by means of the Kitaev toric code, the defected-lattice code, the topological subsystem code and the 3D Haah code. The protocol is local whenever, in a given code, the crossings between the logical operators consist of next-neighbour pairs, which holds for the above codes. For the subsystem code we also present a scheme for the noisy case, where we allow for bit- and phase-flip errors on qubits as well as state preparation and syndrome measurement errors. Similar schemes can be built for the two other codes. We show that the fidelity of the protected qubit in the noisy scenario in the large-code-size limit is of , where p is the probability of error on a single qubit per time step. Regarding the Haah code we provide a noiseless scheme, leaving the noisy case as an open problem. PMID:25754905

  8. The observed clustering of damaging extratropical cyclones in Europe

    NASA Astrophysics Data System (ADS)

    Cusack, Stephen

    2016-04-01

    The clustering of severe European windstorms on annual timescales has substantial impacts on the (re-)insurance industry. Our knowledge of the risk is limited by large uncertainties in estimates of clustering from typical historical storm data sets covering the past few decades. Eight storm data sets are gathered for analysis in this study in order to reduce these uncertainties. Six of the data sets contain more than 100 years of severe storm information to reduce sampling errors, and observational errors are reduced by the diversity of information sources and analysis methods between storm data sets. All storm severity measures used in this study reflect damage, to suit (re-)insurance applications. The shortest storm data set of 42 years provides indications of stronger clustering with severity, particularly for regions off the main storm track in central Europe and France. However, clustering estimates have very large sampling and observational errors, exemplified by large changes in estimates in central Europe upon removal of one stormy season, 1989/1990. The extended storm records place 1989/1990 into a much longer historical context to produce more robust estimates of clustering. All the extended storm data sets show increased clustering between more severe storms from return periods (RPs) of 0.5 years to the longest measured RPs of about 20 years. Further, they contain signs of stronger clustering off the main storm track, and weaker clustering for smaller-sized areas, though these signals are more uncertain as they are drawn from smaller data samples. These new ultra-long storm data sets provide new information on clustering to improve our management of this risk.

  9. Application of 3D Laser Scanning Technology in Inspection and Dynamic Reserves Detection of Open-Pit Mine

    NASA Astrophysics Data System (ADS)

    Hu, Zhumin; Wei, Shiyu; Jiang, Jun

    2017-10-01

    Traditional open-pit mine mining-rights verification and dynamic reserve detection rely on a total station and RTK to collect the turning-point coordinates of mining surface contours. This yields results of low precision and large error, because the approach is limited by the accuracy of traditional measurement equipment and measurement methods. Three-dimensional laser scanning technology, by contrast, can acquire three-dimensional coordinate data of the surface of the measured object over a large area at high resolution. This paper describes the typical application of 3D scanning technology in the inspection of open-pit mining rights and in dynamic reserve detection.

  10. TU-C-BRE-05: Clinical Implications of AAA Commissioning Errors and Ability of Common Commissioning and Credentialing Procedures to Detect Them

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McVicker, A; Oldham, M; Yin, F

    2014-06-15

    Purpose: To test the ability of the TG-119 commissioning process and RPC credentialing to detect errors in the commissioning process for a commercial treatment planning system (TPS). Methods: We introduced commissioning errors into the commissioning process for the Anisotropic Analytical Algorithm (AAA) within the Eclipse TPS. The errors involved the dosimetric leaf gap (DLG), electron contamination, flattening-filter material, and beam-profile measurement with an inappropriately large Farmer chamber (simulated by sliding-window smoothing of the profiles). We then evaluated the clinical impact of these errors on clinical intensity-modulated radiation therapy (IMRT) plans (head and neck, low- and intermediate-risk prostate, mesothelioma, and scalp) by examining PTV D99 and mean and maximum OAR dose. Finally, for errors with substantial clinical impact, we determined the sensitivity of the RPC IMRT film analysis at the midpoint between PTV and OAR using a 4 mm distance-to-agreement metric, and of a 7% TLD dose comparison. We also determined the sensitivity of the three dose planes of the TG-119 C-shape IMRT phantom using gamma criteria of 3%/3 mm. Results: The largest clinical impact came from large changes in the DLG, with a change of 1 mm resulting in up to a 5% change in the primary PTV D99. This produced discrepancies in the RPC TLDs in the PTVs and OARs of 7.1% and 13.6%, respectively, which would have resulted in detection. While use of an incorrect flattening filter caused only subtle errors (<1%) in clinical plans, the effect was most pronounced for the RPC TLDs in the OARs (>6%). Conclusion: The AAA commissioning process within the Eclipse TPS is surprisingly robust to user error. When errors do occur, the RPC and TG-119 commissioning and credentialing criteria are effective at detecting them; however, the OAR TLDs are the most sensitive, despite the RPC currently excluding them from analysis.

  11. Impact and quantification of the sources of error in DNA pooling designs.

    PubMed

    Jawaid, A; Sham, P

    2009-01-01

    The analysis of genome-wide variation offers the possibility of unravelling the genes involved in the pathogenesis of disease. Genome-wide association studies are also particularly useful for identifying and validating targets for therapeutic intervention, as well as for detecting markers for drug efficacy and side effects. The cost of such large-scale genetic association studies may be reduced substantially by the analysis of pooled DNA from multiple individuals. However, experimental errors inherent in pooling studies lead to a potential increase in the false positive rate and a loss of power compared to individual genotyping. Here we quantify various sources of experimental error, using empirical data from typical pooling experiments and corresponding individual genotyping counts, with two statistical methods. We provide analytical formulas for calculating these different errors in the absence of complete information, such as replicate pool formation, and for adjusting for the errors in the statistical analysis. We demonstrate that DNA pooling has the potential to estimate allele frequencies accurately, and that adjusting the pooled allele frequency estimates for differential allelic amplification considerably improves accuracy. Estimates of the components of error show that differential allelic amplification is the most important contributor to the error variance in absolute allele frequency estimation, followed by allele frequency measurement and pool formation errors. Our results emphasise the importance of minimising experimental errors and obtaining correct error estimates in genetic association studies.
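
    As one concrete illustration of adjusting pooled estimates for differential allelic amplification, the sketch below applies the widely used k-correction, in which the amplification ratio k is estimated from known heterozygous individuals. This is a standard technique from the pooling literature shown for orientation, not necessarily the exact procedure of this paper.

    ```python
    def k_corrected_frequency(height_a, height_b, k):
        """Pooled allele-frequency estimate corrected for differential
        allelic amplification (standard k-correction; illustrative).

        height_a, height_b : signal intensities (e.g., peak heights) of
                             alleles A and B measured on the pooled DNA
        k : amplification ratio of allele A relative to allele B,
            typically estimated as mean(height_a / height_b) over known
            heterozygous individuals, whose true ratio is 1
        """
        return height_a / (height_a + k * height_b)

    # Example: raw heights 60:40 suggest p = 0.6, but if allele A
    # amplifies 1.2x more strongly, the corrected estimate is lower.
    p_naive = k_corrected_frequency(60.0, 40.0, k=1.0)   # 0.600
    p_corr = k_corrected_frequency(60.0, 40.0, k=1.2)    # ~0.556
    print(p_naive, p_corr)
    ```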

  12. A review of uncertainty in in situ measurements and data sets of sea surface temperature

    NASA Astrophysics Data System (ADS)

    Kennedy, John J.

    2014-03-01

    Archives of in situ sea surface temperature (SST) measurements extend back more than 160 years. The quality of the measurements is variable, and the area of the oceans they sample is limited, especially early in the record and during the two world wars. Measurements of SST and the gridded data sets based on them are used in many applications, so understanding and estimating the uncertainties is vital. The aim of this review is to give an overview of the various components that contribute to the overall uncertainty of SST measurements made in situ and of the data sets derived from them. In doing so, it also aims to identify current gaps in understanding. Uncertainties arise at the level of individual measurements, with both systematic and random effects; although these have been extensively studied, refinement of the error models continues. Recent improvements have been made in the understanding of the pervasive systematic errors that affect the assessment of long-term trends and variability. However, the adjustments applied to minimize these systematic errors are themselves uncertain, and these uncertainties are higher before the 1970s and particularly large in the period surrounding the Second World War owing to a lack of reliable metadata. The uncertainties associated with the choice of statistical methods used to create globally complete SST data sets have been explored using different analysis techniques, but they do not incorporate the latest understanding of measurement errors, and they lack a fair benchmark against which their skill can be objectively assessed. These problems can be addressed by the creation of new end-to-end SST analyses and by the recovery and digitization of data and metadata from ship log books and other contemporary literature.

  13. A measurement system for large, complex software programs

    NASA Technical Reports Server (NTRS)

    Rone, Kyle Y.; Olson, Kitty M.; Davis, Nathan E.

    1994-01-01

    This paper describes measurement systems required to forecast, measure, and control activities for large, complex software development and support programs. Initial software cost and quality analysis provides the foundation for meaningful management decisions as a project evolves. In modeling the cost and quality of software systems, the relationship between the functionality, quality, cost, and schedule of the product must be considered. This explicit relationship is dictated by the criticality of the software being developed. This balance between cost and quality is a viable software engineering trade-off throughout the life cycle. Therefore, the ability to accurately estimate the cost and quality of software systems is essential to providing reliable software on time and within budget. Software cost models relate the product error rate to the percent of the project labor that is required for independent verification and validation. The criticality of the software determines which cost model is used to estimate the labor required to develop the software. Software quality models yield an expected error discovery rate based on the software size, criticality, software development environment, and the level of competence of the project and developers with respect to the processes being employed.

  14. Evaluation of Light Detection and Ranging (LIDAR) for measuring river corridor topography

    USGS Publications Warehouse

    Bowen, Z.H.; Waltermire, R.G.

    2002-01-01

    LIDAR is relatively new in the commercial market for remote sensing of topography, and it is difficult to find objective reporting on the accuracy of LIDAR measurements in an applied context. Accuracy specifications for LIDAR data in published evaluations range from 1 to 2 m root mean square error (RMSE_x,y) and 15 to 20 cm RMSE_z. Most of these estimates are based on measurements over relatively flat, homogeneous terrain. This study evaluated the accuracy of one LIDAR data set over a range of terrain types in a western river corridor. Elevation errors based on measurements over all terrain types were larger (RMSE_z = 43 cm) than values typically reported. This result is largely attributable to horizontal positioning limitations (1 to 2 m RMSE_x,y) in areas with variable terrain and large topographic relief. Cross-sectional profiles indicated that algorithms effective at removing vegetation in relatively flat terrain were less effective near the active channel, where dense vegetation was found in a narrow band along a low terrace. LIDAR provides relatively accurate data at densities (50,000 to 100,000 points per km²) not feasible with other survey technologies. Other options for projects requiring higher accuracy include low-altitude aerial photography and intensive ground surveying.

  15. Large radius of curvature measurement based on the evaluation of interferogram-quality metric in non-null interferometry

    NASA Astrophysics Data System (ADS)

    Yang, Zhongming; Dou, Jiantai; Du, Jinyu; Gao, Zhishan

    2018-03-01

    Non-null interferometry can be used to measure the radius of curvature (ROC); we previously presented a virtual quadratic Newton-rings phase-shifting moiré-fringes method for large-ROC measurement (Yang et al., 2016). In this paper, we propose a large-ROC measurement method based on evaluating an interferogram-quality metric in a non-null interferometer. Using a multi-configuration model of the non-null interferometric system in ZEMAX, the retrace errors and the phase introduced by the test surface are reconstructed. The interferogram-quality metric is obtained from the normalized phase-shifted testing Newton rings with a spherical-surface model of the non-null interferometric system. The radius of curvature of the test spherical surface is found when the minimum of the interferogram-quality metric is reached. Simulations and experimental results verify the feasibility of the proposed method. For a spherical mirror with a ROC of 41,400 mm, the measurement accuracy is better than 0.13%.
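
    Finding the ROC that minimizes the metric can be organized as a one-dimensional bounded minimization. The sketch below shows one plausible arrangement using SciPy's bounded scalar minimizer; the metric here is a purely synthetic stand-in, since the abstract does not define the metric's functional form (in practice it would be computed from the ray-trace model and the measured fringes).

    ```python
    from scipy.optimize import minimize_scalar

    # Synthetic stand-in for the model-based metric: in practice this
    # would compare modeled phase-shifted Newton rings against measured
    # ones for a candidate ROC (smaller value = better match).
    def quality_metric(roc_mm, roc_true=41_400.0):
        return ((roc_mm - roc_true) / 1_000.0) ** 2

    def estimate_roc(metric, roc_min_mm, roc_max_mm):
        # Bounded 1-D search for the ROC at which the metric is minimal.
        res = minimize_scalar(metric, bounds=(roc_min_mm, roc_max_mm),
                              method="bounded", options={"xatol": 1.0})
        return res.x

    print(estimate_roc(quality_metric, 40_000, 43_000))  # ~41400 mm
    ```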

  16. Fault-tolerant, high-level quantum circuits: form, compilation and description

    NASA Astrophysics Data System (ADS)

    Paler, Alexandru; Polian, Ilia; Nemoto, Kae; Devitt, Simon J.

    2017-06-01

    Fault-tolerant quantum error correction is a necessity for any quantum architecture destined to tackle interesting, large-scale problems. Its theoretical formalism has been well founded for nearly two decades. However, we still do not have an appropriate compiler to produce a fault-tolerant, error-corrected description from a higher-level quantum circuit for state-of-the-art hardware models. There are many technical hurdles, including the dynamic circuit constructions that occur when constructing fault-tolerant circuits with commonly used error-correcting codes. We introduce a package that converts high-level quantum circuits consisting of commonly used gates into a form employing all the decompositions and ancillary protocols needed for fault-tolerant error correction. We call this form the (I)nitialisation, (C)NOT, (M)easurement (ICM) form; it consists of an initialisation layer of qubits in one of four distinct states, a massive, deterministic array of CNOT operations, and a series of time-ordered X- or Z-basis measurements. The form allows a more flexible approach to circuit optimisation. At the same time, the package outputs a standard circuit or a canonical geometric description, which is a necessity for operating current state-of-the-art hardware architectures using topological quantum codes.

  17. Final acceptance testing of the LSST monolithic primary/tertiary mirror

    NASA Astrophysics Data System (ADS)

    Tuell, Michael T.; Burge, James H.; Cuerden, Brian; Gressler, William; Martin, Hubert M.; West, Steven C.; Zhao, Chunyu

    2014-07-01

    The Large Synoptic Survey Telescope (LSST) is a three-mirror wide-field survey telescope with the primary and tertiary mirrors on one monolithic substrate. This substrate is made of Ohara E6 borosilicate glass in a honeycomb sandwich, spin cast at the Steward Observatory Mirror Lab at The University of Arizona. Each surface is aspheric, with the specification in terms of conic constant error, maximum active bending forces, and finally a structure-function specification on the residual errors. There are high-order deformation terms, but with no tolerance; any error is considered a surface error and is included in the structure function. The radii of curvature are very different, requiring two independent test stations, each with an instantaneous phase-shifting interferometer and a null corrector. The primary null corrector is a standard two-element Offner null lens. The tertiary null corrector is a phase-etched computer-generated hologram (CGH). This paper details the two optical systems and their tolerances, showing that the uncertainty in measuring the figure is a small fraction of the structure-function specification. Additional metrology includes the radii of curvature, optical axis locations, and relative surface tilts. The methods for measuring these are also described along with their tolerances.

  18. Emotion perception, non-social cognition and symptoms as predictors of theory of mind in schizophrenia.

    PubMed

    Vaskinn, Anja; Andersson, Stein; Østefjells, Tiril; Andreassen, Ole A; Sundet, Kjetil

    2018-06-05

    Theory of mind (ToM) can be divided into cognitive and affective ToM, and a distinction can be made between overmentalizing and undermentalizing errors. Research has shown that ToM in schizophrenia is associated with non-social and social cognition, and with clinical symptoms. In this study, we investigate cognitive and clinical predictors of different ToM processes. Ninety-one individuals with schizophrenia participated. ToM was measured with the Movie for the Assessment of Social Cognition (MASC), yielding six scores (total ToM, cognitive ToM, affective ToM, overmentalizing errors, undermentalizing errors and no mentalizing errors). Neurocognition was indexed by a composite score based on the non-social cognitive tests in the MATRICS Consensus Cognitive Battery (MCCB). Emotion perception was measured with Emotion in Biological Motion (EmoBio), a point-light walker task. Clinical symptoms were assessed with the Positive and Negative Syndrome Scale (PANSS). Seventy-one healthy control (HC) participants completed the MASC. Individuals with schizophrenia showed large impairments compared to HC for all MASC scores, except overmentalizing errors. Hierarchical regression analyses with the six different MASC scores as dependent variables revealed that MCCB was a significant predictor of all MASC scores, explaining 8-18% of the variance. EmoBio increased the explained variance significantly, to 17-28%, except for overmentalizing errors. PANSS excited symptoms increased explained variance for total ToM, affective ToM and no mentalizing errors. Both social and non-social cognition were significant predictors of ToM. Overmentalizing was only predicted by non-social cognition. Excited symptoms contributed to overall and affective ToM, and to no mentalizing errors.

  19. Assessing the Impact of Pre-GPM Microwave Precipitation Observations in the Goddard WRF Ensemble Data Assimilation System

    NASA Technical Reports Server (NTRS)

    Chambon, Philippe; Zhang, Sara Q.; Hou, Arthur Y.; Zupanski, Milija; Cheung, Samson

    2013-01-01

    The forthcoming Global Precipitation Measurement (GPM) Mission will provide next generation precipitation observations from a constellation of satellites. Since precipitation by nature has large variability and low predictability at cloud-resolving scales, the impact of precipitation data on the skills of mesoscale numerical weather prediction (NWP) is largely affected by the characterization of background and observation errors and the representation of nonlinear cloud/precipitation physics in an NWP data assimilation system. We present a data impact study on the assimilation of precipitation-affected microwave (MW) radiances from a pre-GPM satellite constellation using the Goddard WRF Ensemble Data Assimilation System (Goddard WRF-EDAS). A series of assimilation experiments are carried out in a Weather Research Forecast (WRF) model domain of 9 km resolution in western Europe. Sensitivities to observation error specifications, background error covariance estimated from ensemble forecasts with different ensemble sizes, and MW channel selections are examined through single-observation assimilation experiments. An empirical bias correction for precipitation-affected MW radiances is developed based on the statistics of radiance innovations in rainy areas. The data impact is assessed by full data assimilation cycling experiments for a storm event that occurred in France in September 2010. Results show that the assimilation of MW precipitation observations from a satellite constellation mimicking GPM has a positive impact on the accumulated rain forecasts verified with surface radar rain estimates. The case-study on a convective storm also reveals that the accuracy of ensemble-based background error covariance is limited by sampling errors and model errors such as precipitation displacement and unresolved convective scale instability.

  20. Effects of stinger axial dynamics and mass compensation methods on experimental modal analysis

    NASA Astrophysics Data System (ADS)

    Hu, Ximing

    1992-06-01

    A longitudinal bar model that includes both the stinger's elastic and inertia properties is used to analyze the stinger's axial dynamics, as well as the mass compensation that is required to obtain accurate input forces when a stinger is installed between the excitation source, force transducer, and the structure under test. Stinger motion transmissibility and force transmissibility, axial resonance, and excitation energy transfer problems are discussed in detail. Stinger mass compensation problems occur when the force transducer is mounted on the exciter end of the stinger. These problems are studied theoretically, numerically, and experimentally. It is found that the measured frequency response function (FRF) can be underestimated if mass compensation is based on the stinger's exciter-end acceleration, and can be overestimated if the mass compensation is based on the structure-end acceleration, owing to the stinger's compliance. A new mass compensation method based on the two accelerations is introduced and is seen to improve the accuracy considerably. The effects of the force transducer's compliance on the mass compensation are also discussed. A theoretical model is developed that describes the measurement system's FRF around a test structure's resonance. The model shows that very large measurement errors occur when there is a small relative phase shift between the force and acceleration measurements: the errors can reach hundreds of percent for a phase error on the order of one or two degrees. The physical reasons for this unexpected error pattern are explained; the error was previously unrecognized in the experimental modal analysis community. Two sample structures, consisting of a rigid mass and a double cantilever beam, are used in the numerical calculations and experiments.

  1. High-Threshold Low-Overhead Fault-Tolerant Classical Computation and the Replacement of Measurements with Unitary Quantum Gates.

    PubMed

    Cruikshank, Benjamin; Jacobs, Kurt

    2017-07-21

    von Neumann's classic "multiplexing" method is unique in achieving high-threshold fault-tolerant classical computation (FTCC), but has several significant barriers to implementation: (i) the extremely complex circuits required by randomized connections, (ii) the difficulty of calculating its performance in practical regimes of both code size and logical error rate, and (iii) the (perceived) need for large code sizes. Here we present numerical results indicating that the third assertion is false, and introduce a novel scheme that eliminates the two remaining problems while retaining a threshold very close to von Neumann's ideal of 1/6. We present a simple, highly ordered wiring structure that vastly reduces the circuit complexity, demonstrates that randomization is unnecessary, and provides a feasible method to calculate the performance. This in turn allows us to show that the scheme requires only moderate code sizes, vastly outperforms concatenation schemes, and under a standard error model a unitary implementation realizes universal FTCC with an accuracy threshold of p<5.5%, in which p is the error probability for 3-qubit gates. FTCC is a key component in realizing measurement-free protocols for quantum information processing. In view of this, we use our scheme to show that all-unitary quantum circuits can reproduce any measurement-based feedback process in which the asymptotic error probabilities for the measurement and feedback are (32/63)p≈0.51p and 1.51p, respectively.

  2. Array coding for large data memories

    NASA Technical Reports Server (NTRS)

    Tranter, W. H.

    1982-01-01

    It is pointed out that an array code is a convenient method for storing large quantities of data. In a typical application, the array consists of N data words having M symbols in each word. The probability of undetected error is considered, taking into account the three symbol error probabilities of interest, and a formula for determining the probability of undetected error is given. Attention is given to the possibility of reading data into the array using a digital communication system with symbol error probability p; two different schemes are found to be of interest. The analysis of array coding shows that the probability of undetected error is very small, even for relatively large arrays.
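
    For intuition, the sketch below computes the probability of undetected error for one illustrative scheme: a word of M symbols protected by a single parity check, where an error pattern goes undetected when an even, nonzero number of symbols are corrupted. This is an assumed example; the paper's exact code construction is not given in the abstract.

    ```python
    from math import comb

    def p_undetected_word(M, p):
        """Probability that a single-parity word of M symbols suffers an
        undetected error: an even, nonzero number of symbol errors."""
        return sum(comb(M, k) * p**k * (1 - p)**(M - k)
                   for k in range(2, M + 1, 2))

    def p_undetected_array(N, M, p):
        """Probability that at least one of N independent words in the
        array holds an undetected error."""
        return 1 - (1 - p_undetected_word(M, p))**N

    # Even for a large array, the probability stays small when p is small:
    print(p_undetected_array(N=10_000, M=32, p=1e-6))  # ~5e-6
    ```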

  3. Prediction of final error level in learning and repetitive control

    NASA Astrophysics Data System (ADS)

    Levoci, Peter A.

    Repetitive control (RC) creates controllers that eliminate the effects of periodic disturbances on a feedback control system. The methods have applications in spacecraft problems, for example isolating fine-pointing equipment from periodic vibration disturbances such as slight imbalances in momentum wheels or cryogenic pumps. A closely related field is iterative learning control (ILC), which aims to eliminate tracking error in a task that repeats, each time starting from the same initial condition. Experiments on a robot at NASA Langley Research Center showed that the final error levels produced by different candidate repetitive and learning controllers can be very different, even when each controller is analytically proven to converge to zero error in the deterministic case. Real-world plant and measurement noise and quantization noise (from analog-to-digital and digital-to-analog converters) are acted on by these control methods as if they were error sources that will repeat and should be cancelled, which implies that the algorithms amplify such errors. Methods are developed that predict the final error levels of general first-order ILC, of higher-order ILC including current-cycle learning, and of general RC, in the presence of noise, using frequency response methods. The approach involves much less computation than the corresponding time-domain approach with its large matrices. The time-domain approach was previously developed for ILC and handles a certain class of ILC methods; here, methods are created to include zero-phase filtering, which is very important in practical designs. Time-domain methods are also developed for higher-order ILC and for repetitive control. Since RC and ILC must be implemented digitally, all of these methods predict final error levels at the sample times. It is shown here that RC can easily converge to small error levels between sample times, but that ILC in most applications will have large and diverging intersample error if zero error is in fact reached at the sample times. This is independent of the ILC law used and is purely a property of the physical system. Methods are developed to address this issue.
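
    The noise-amplification effect is easy to reproduce in a toy simulation. The sketch below runs a first-order ILC update u_{j+1} = u_j + gamma * e_j on a static scalar plant with measurement noise (all parameters illustrative) and shows that the RMS error converges not to zero but to a noise-determined floor.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    g, gamma, sigma = 2.0, 0.4, 0.01   # plant gain, learning gain, noise std
    n, iters = 100, 50                 # samples per trial, learning iterations

    y_des = np.sin(np.linspace(0, 2 * np.pi, n))   # desired trajectory
    u = np.zeros(n)

    for j in range(iters):
        y_meas = g * u + sigma * rng.standard_normal(n)  # noisy measurement
        e = y_des - y_meas                               # measured error
        u = u + gamma * e                                # first-order ILC update
        if j % 10 == 0:
            print(j, np.sqrt(np.mean(e**2)))             # RMS error per trial

    # The RMS error drops quickly, then flattens at a floor set by the
    # measurement noise, which the learning law keeps re-injecting.
    ```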

  4. Hydraulic head estimation at unobserved locations: Approximating the distribution of the absolute error based on geologic interpretations

    NASA Astrophysics Data System (ADS)

    Langousis, Andreas; Kaleris, Vassilios; Xeygeni, Vagia; Magkou, Foteini

    2017-04-01

    Assessing the availability of groundwater reserves at a regional level requires accurate and robust hydraulic head estimation at multiple locations of an aquifer. To that extent, one needs groundwater observation networks that can provide sufficient information to estimate the hydraulic head at unobserved locations. The density of such networks is largely influenced by the spatial distribution of the hydraulic conductivity in the aquifer, and it is usually determined through trial-and-error, by solving the groundwater flow based on a properly selected set of alternative but physically plausible geologic structures. In this work, we use a) dimensional analysis, and b) a pulse-based stochastic model for simulation of synthetic aquifer structures, to calculate the distribution of the absolute error in hydraulic head estimation as a function of the standardized distance from the nearest measuring locations. The resulting distributions are proved to encompass all possible small-scale structural dependencies, exhibiting characteristics (bounds, multi-modal features, etc.) that can be explained using simple geometric arguments. The obtained results are promising, pointing toward the establishment of design criteria based on large-scale geologic maps.

  5. Soil Moisture Initialization Error and Subgrid Variability of Precipitation in Seasonal Streamflow Forecasting

    NASA Technical Reports Server (NTRS)

    Koster, Randal D.; Walker, Gregory K.; Mahanama, Sarith P.; Reichle, Rolf H.

    2013-01-01

    Offline simulations over the conterminous United States (CONUS) with a land surface model are used to address two issues relevant to the forecasting of large-scale seasonal streamflow: (i) the extent to which errors in soil moisture initialization degrade streamflow forecasts, and (ii) the extent to which a realistic increase in the spatial resolution of forecasted precipitation would improve streamflow forecasts. The addition of error to a soil moisture initialization field is found to lead to a nearly proportional reduction in streamflow forecast skill. The linearity of the response allows the determination of a lower bound for the increase in streamflow forecast skill achievable through improved soil moisture estimation, e.g., through satellite-based soil moisture measurements. An increase in the resolution of precipitation is found to have an impact on large-scale streamflow forecasts only when evaporation variance is significant relative to the precipitation variance. This condition is met only in the western half of the CONUS domain. Taken together, the two studies demonstrate the utility of a continental-scale land surface modeling system as a tool for addressing the science of hydrological prediction.

  6. Methods for multiple-telescope beam imaging and guiding in the near-infrared

    NASA Astrophysics Data System (ADS)

    Anugu, N.; Amorim, A.; Gordo, P.; Eisenhauer, F.; Pfuhl, O.; Haug, M.; Wieprecht, E.; Wiezorrek, E.; Lima, J.; Perrin, G.; Brandner, W.; Straubmeier, C.; Le Bouquin, J.-B.; Garcia, P. J. V.

    2018-05-01

    Atmospheric turbulence and precise measurement of the astrometric baseline vector between any two telescopes are two major challenges in implementing phase-referenced interferometric astrometry and imaging. They limit the performance of a fibre-fed interferometer by degrading the instrument sensitivity and the precision of astrometric measurements and by introducing image reconstruction errors due to inaccurate phases. A multiple-beam acquisition and guiding camera was built to meet these challenges for a recently commissioned four-beam combiner instrument, GRAVITY, at the European Southern Observatory Very Large Telescope Interferometer. For each telescope beam, it measures (a) field tip-tilts by imaging stars in the sky, (b) telescope pupil shifts by imaging pupil reference laser beacons installed on each telescope using a 2 × 2 lenslet and (c) higher-order aberrations using a 9 × 9 Shack-Hartmann. The telescope pupils are imaged to provide visual monitoring while observing. These measurements enable active field and pupil guiding by actuating a train of tip-tilt mirrors placed in the pupil and field planes, respectively. The Shack-Hartmann measured quasi-static aberrations are used to focus the auxiliary telescopes and allow the possibility of correcting the non-common path errors between the adaptive optics systems of the unit telescopes and GRAVITY. The guiding stabilizes the light injection into single-mode fibres, increasing sensitivity and reducing the astrometric and image reconstruction errors. The beam guiding enables us to achieve an astrometric error of less than 50 μas. Here, we report on the data reduction methods and laboratory tests of the multiple-beam acquisition and guiding camera and its performance on-sky.

  7. Comparison of Measurements from Pressure-recording Inverted Echo Sounders and Satellite Altimetry in the North Equatorial Current Region of the Western Pacific

    NASA Astrophysics Data System (ADS)

    Jeon, Chanhyung; Park, Jae-Hun; Kim, Dong Guk; Kim, Eung; Jeon, Dongchull

    2018-04-01

    An array of five pressure-recording inverted echo sounders (PIESs) was deployed along the Jason-2 214 ground track in the North Equatorial Current (NEC) region of the western Pacific Ocean for about 2 years from June 2012. Round-trip acoustic travel time from the bottom to the sea surface and bottom pressure measurements from the PIESs were converted to sea level anomaly (SLA). AVISO along-track mono-mission SLA (Mono-SLA), reference mapped SLA (Ref-MSLA), and up-to-date mapped SLA (Upd-MSLA) products were used for comparison with the PIES-derived SLA (η_tot). Comparisons of η_tot with Mono-SLA revealed that hump artifact errors significantly contaminate the Mono-SLA. Differences of η_tot from both Ref-MSLA and Upd-MSLA decreased as the hump errors were reduced in the mapped SLA products. Comparisons of Mono-SLA measurements at crossover points of ground tracks near the observation sites revealed large differences even though the time differences of their measurements were only 1.53 and 4.58 days. Comparisons between Mono-SLA and mapped SLA suggested that mapped SLA smooths out the hump artifact errors by taking values between the two discrepant Mono-SLA measurements at the crossover points. Consequently, mapped SLA showed better agreement with η_tot at our observation sites. AVISO mapped sea surface height (SSH) products are the preferable dataset for studying SSH variability in the NEC region of the western Pacific, though some portion of the hump artifact errors seems to still remain in them.

  8. Quantifying Uncertainties in Land-Surface Microwave Emissivity Retrievals

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Prigent, Catherine; Norouzi, Hamidreza; Aires, Filipe; Boukabara, Sid-Ahmed; Furuzawa, Fumie A.; Masunaga, Hirohiko

    2013-01-01

    Uncertainties in the retrievals of microwave land-surface emissivities are quantified over two types of land surfaces: desert and tropical rainforest. Retrievals from satellite-based microwave imagers, including the Special Sensor Microwave Imager, the Tropical Rainfall Measuring Mission Microwave Imager, and the Advanced Microwave Scanning Radiometer for Earth Observing System, are studied. Our results show that there are considerable differences between the retrievals from different sensors and from different groups over these two land-surface types. In addition, the mean emissivity values show different spectral behavior across the frequencies. With the true emissivity assumed largely constant over both sites throughout the study period, the differences are largely attributed to the systematic and random errors in the retrievals. Generally, these retrievals tend to agree better at lower frequencies than at higher ones, with systematic differences ranging from 1%-4% (3-12 K) over desert and 1%-7% (3-20 K) over rainforest. The random errors within each retrieval dataset are in the range of 0.5%-2% (2-6 K). In particular, at 85.5/89.0 GHz, there are very large differences between the different retrieval datasets, and within each retrieval dataset itself. Further investigation reveals that these differences are most likely caused by rain/cloud contamination, which can lead to random errors of up to 10-17 K under the most severe conditions.

  9. Measurement System Characterization in the Presence of Measurement Errors

    NASA Technical Reports Server (NTRS)

    Commo, Sean A.

    2012-01-01

    In the calibration of a measurement system, data are collected in order to estimate a mathematical model between one or more factors of interest and a response. Ordinary least squares is a method employed to estimate the regression coefficients in the model. The method assumes that the factors are known without error; yet, it is implicitly known that the factors contain some uncertainty. In the literature, this uncertainty is known as measurement error. The measurement error affects both the estimates of the model coefficients and the prediction, or residual, errors. There are some methods, such as orthogonal least squares, that are employed in situations where measurement errors exist, but these methods do not directly incorporate the magnitude of the measurement errors. This research proposes a new method, known as modified least squares, that combines the principles of least squares with knowledge about the measurement errors. This knowledge is expressed in terms of the variance ratio - the ratio of response error variance to measurement error variance.
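
    The idea of folding a known variance ratio into the fit can be illustrated with Deming regression, a standard errors-in-variables method parameterized by exactly such a ratio. It is shown here only as a point of reference: the paper's "modified least squares" is its own method, and its formulas are not given in the abstract.

    ```python
    import numpy as np

    def deming_fit(x, y, delta=1.0):
        """Deming regression: straight-line fit when both x and y carry
        error, with delta = var(y-error) / var(x-error).

        Standard closed-form estimator, shown as a reference point for
        methods that exploit a known variance ratio.
        """
        x, y = np.asarray(x, float), np.asarray(y, float)
        xbar, ybar = x.mean(), y.mean()
        sxx = np.mean((x - xbar) ** 2)
        syy = np.mean((y - ybar) ** 2)
        sxy = np.mean((x - xbar) * (y - ybar))
        slope = ((syy - delta * sxx
                  + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2))
                 / (2 * sxy))
        intercept = ybar - slope * xbar
        return slope, intercept

    # With noisy x, OLS attenuates the slope; Deming recovers it on average.
    rng = np.random.default_rng(1)
    x_true = np.linspace(0, 10, 200)
    x = x_true + rng.normal(0, 0.5, x_true.size)   # measurement error in x
    y = 2.0 * x_true + 1.0 + rng.normal(0, 0.5, x_true.size)
    print(deming_fit(x, y, delta=1.0))             # slope ≈ 2
    ```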

  10. Wavefront error sensing

    NASA Technical Reports Server (NTRS)

    Tubbs, Eldred F.

    1986-01-01

    A two-step approach to wavefront sensing for the Large Deployable Reflector (LDR) was examined as part of an effort to define wavefront-sensing requirements and to determine particular areas for more detailed study. A Hartmann test for coarse alignment, particularly segment tilt, seems feasible if LDR can operate at 5 microns or less. The direct measurement of the point spread function in the diffraction limited region may be a way to determine piston error, but this can only be answered by a detailed software model of the optical system. The question of suitable astronomical sources for either test must also be addressed.

  11. Online absolute pose compensation and steering control of industrial robot based on six degrees of freedom laser measurement

    NASA Astrophysics Data System (ADS)

    Yang, Juqing; Wang, Dayong; Fan, Baixing; Dong, Dengfeng; Zhou, Weihu

    2017-03-01

    In-situ intelligent manufacturing of large-volume equipment requires industrial robots with high-accuracy absolute positioning and orientation steering control. Conventional robots mainly employ offline calibration to identify and compensate key robotic parameters. However, the dynamic and static parameters of a robot change nonlinearly, so offline calibration cannot acquire the robot's actual parameters or control its absolute pose with high accuracy in real time within a large workspace. This study proposes a real-time online absolute pose steering control method for an industrial robot based on six-degrees-of-freedom laser tracking measurement, which adopts comprehensive compensation and correction of differential movement variables. First, the pose steering control system and robot kinematics error model are constructed; the pose error compensation mechanism and algorithm are then introduced in detail. By accurately measuring the position and orientation of the robot end-tool, computing the Jacobian matrix of the joint variables, and correcting the joint variables, real-time online absolute pose compensation for an industrial robot is implemented in simulations and experimental tests. The average positioning error is 0.048 mm and the orientation accuracy is better than 0.01 deg. The results demonstrate that the proposed method is feasible and that the online absolute accuracy of a robot is substantially enhanced.
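
    The core update, mapping a measured six-DOF pose error back to joint corrections through the Jacobian, can be sketched as a damped least-squares step. This is a generic formulation for illustration; the paper's exact compensation algorithm and robot model are not reproduced here.

    ```python
    import numpy as np

    def joint_correction(J, pose_error, damping=1e-3):
        """One damped least-squares correction step.

        J          : 6 x n manipulator Jacobian at the current joints
        pose_error : 6-vector [dx, dy, dz, drx, dry, drz] between the
                     desired and laser-tracker-measured end-tool pose
        returns    : n-vector of joint corrections dq
        """
        JT = J.T
        # dq = J^T (J J^T + lambda^2 I)^-1 e -- robust near singularities
        dq = JT @ np.linalg.solve(J @ JT + damping**2 * np.eye(6), pose_error)
        return dq

    # In a control loop: measure pose error, correct joints, repeat until
    # the residual is below tolerance (e.g., 0.05 mm / 0.01 deg).
    ```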

  12. Research on automatic Hartmann test of membrane mirror

    NASA Astrophysics Data System (ADS)

    Zhong, Xing; Jin, Guang; Liu, Chunyu; Zhang, Peng

    2010-10-01

    Electrostatic membrane mirrors are ultra-lightweight and can reach large diameters more easily than traditional optical elements, so their development and use is the trend for future large mirrors. In order to research the control method of a statically stretched membrane mirror, its surface figure must be tested. However, the membrane mirror's shape is continually changed by the variable voltages on the electrodes, and the optical properties of the membrane materials used in our experiment are poor, so the mirror is difficult to test by interferometry or the null-compensator method. To solve this problem, an automatic optical test procedure for the membrane mirror is designed based on the Hartmann screen method. The optical path includes a point light source, a CCD camera, a splitter, and a diffuse transmittance screen. The spot positions on the diffuse transmittance screen are imaged by the CCD camera connected to a computer, and image segmentation and centroid solving are processed automatically. The distortion of the CCD camera's lens is measured, and correction coefficients are applied to eliminate the spot-position recording error caused by lens distortion. To process the low-sampling Hartmann test results, Zernike polynomial fitting is applied to smooth the wavefront, so that the low-frequency error of the membrane mirror can then be measured. Errors affecting the test accuracy are also analyzed in this paper. The method proposed here provides a reference for surface-shape detection in membrane mirror research.
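
    Smoothing a sparsely sampled wavefront with Zernike polynomials reduces to a linear least-squares problem. The sketch below fits a few low-order terms to sampled wavefront values on the unit disk; the term set is an illustrative subset, not the mode set used in the paper.

    ```python
    import numpy as np

    def zernike_basis(rho, theta):
        """A few low-order Zernike terms on the unit disk (illustrative):
        piston, tilts, defocus, and the two primary astigmatisms."""
        return np.column_stack([
            np.ones_like(rho),              # piston
            rho * np.cos(theta),            # x-tilt
            rho * np.sin(theta),            # y-tilt
            2 * rho**2 - 1,                 # defocus
            rho**2 * np.cos(2 * theta),     # astigmatism 0/90
            rho**2 * np.sin(2 * theta),     # astigmatism 45
        ])

    def fit_zernike(rho, theta, w):
        """Least-squares Zernike coefficients from sampled wavefront w."""
        A = zernike_basis(rho, theta)
        coeffs, *_ = np.linalg.lstsq(A, w, rcond=None)
        return coeffs

    # Smoothed (low-frequency) wavefront at the sample points:
    # w_smooth = zernike_basis(rho, theta) @ fit_zernike(rho, theta, w)
    ```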

  13. On the use of unshielded cables in ionization chamber dosimetry for total-skin electron therapy.

    PubMed

    Chen, Z; Agostinelli, A; Nath, R

    1998-03-01

    The dosimetry of total-skin electron therapy (TSET) usually requires ionization chamber measurements in a large electron beam (up to 120 cm x 200 cm). Exposing the chamber's electric cable, its connector, and part of the extension cable to the large electron beam introduces unwanted electronic signals that may lead to inaccurate dosimetry results. While the best strategy for minimizing the cable-induced electronic signal is to shield the cables and connector from the primary electrons, as recommended by the AAPM Task Group Report 23 on TSET, cables without additional shielding are often used in TSET dosimetry measurements for logistic reasons, for example when automatic scanning dosimetry is used. This paper systematically investigates the consequences and the acceptability of using an unshielded cable in ionization chamber dosimetry in a large TSET electron beam. We separate cable-induced signals into two types: the type-I signal includes all induced charges that do not change sign upon switching the chamber polarity, and type II includes all those that do. The type-I signal is easily cancelled by the polarity-averaging method. The type-II cable-induced signal is independent of the depth of the chamber in a phantom, and its magnitude relative to the true signal determines the acceptability of a cable for use under unshielded conditions. Three different cables were evaluated in two different TSET beams in this investigation. For dosimetry near the depth of maximum buildup, the cable-induced dosimetry error was found to be less than 0.2% when the two-polarity averaging technique was applied. At greater depths, the relative dosimetry error was found to increase at a rate approximately equal to the inverse of the electron depth dose. Since the application of the two-polarity averaging technique requires a constant-irradiation condition, it was demonstrated that an additional error of up to 4% could be introduced if the unshielded cable's spatial configuration were altered during the two-polarity measurements. This suggests that automatic scanning systems with unshielded cables should not be used in TSET ionization chamber dosimetry. However, the data did show that an unshielded cable may be used in TSET ionization chamber dosimetry if the size of the cable-induced error in a given TSET beam is pre-evaluated and the measurement is carefully conducted. When such an evaluation has not been performed, additional shielding should be applied to the cable being used, even though this makes measurements at multiple points difficult.
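
    The two-polarity averaging that cancels the type-I cable signal is a one-line computation; the sketch below makes the cancellation explicit with illustrative variable names and arbitrary charge units.

    ```python
    def polarity_average(m_pos, m_neg):
        """Two-polarity averaging for ionization chamber readings.

        m_pos : reading at positive chamber polarity = +S + C1 + C2
        m_neg : reading at negative chamber polarity = -S + C1 - C2
        where S is the true chamber signal, C1 the type-I cable signal
        (does not change sign with polarity) and C2 the type-II signal
        (changes sign). The half-difference removes C1 exactly; the
        type-II signal C2 survives and sets the residual error.
        """
        return (m_pos - m_neg) / 2.0  # = S + C2

    # Example with S = 100, C1 = 5, C2 = 0.2 (arbitrary charge units):
    print(polarity_average(100 + 5 + 0.2, -100 + 5 - 0.2))  # 100.2
    ```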

  14. Instrumental variables vs. grouping approach for reducing bias due to measurement error.

    PubMed

    Batistatou, Evridiki; McNamee, Roseanne

    2008-01-01

    Attenuation of the exposure-response relationship due to exposure measurement error is often encountered in epidemiology. Given that error cannot be totally eliminated, bias-correction methods of analysis are needed. Many methods require more than one exposure measurement per person, but the 'group mean OLS method', in which subjects are grouped into several a priori defined groups followed by ordinary least squares (OLS) regression on the group means, can be applied with one measurement. An alternative approach is to use an instrumental variable (IV) method in which both the single error-prone measure and an IV are used in IV analysis. In this paper we show that the 'group mean OLS' estimator is equal to an IV estimator with the group mean used as the IV, but that the variance estimators for the two methods are different. We derive a simple expression for the bias in the common estimator as a simple function of group size, reliability, and contrast of exposure between groups, and show that the bias can be very small when group size is large. We compare this method with a new proposal (the group mean ranking method), also applicable with a single exposure measurement, in which the IV is the rank of the group means. When there are two independent exposure measurements per subject, we propose a new IV method (EVROS IV) and compare it with Carroll and Stefanski's (CS IV) proposal in which the second measure is used as an IV; the new IV estimator combines aspects of the 'group mean' and 'CS' strategies. All methods are evaluated in terms of bias, precision, and root mean square error via simulations and a dataset from occupational epidemiology. The 'group mean ranking method' does not offer much improvement over the 'group mean method'. Compared with the 'CS' method, the 'EVROS' method is less affected by low reliability of exposure. We conclude that the group IV methods we propose may provide a useful way to handle mismeasured exposures in epidemiology, with or without replicate measurements. Our findings may also have implications for the use of aggregate variables in epidemiology to control for unmeasured confounding.
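
    The equivalence highlighted above is easy to state computationally: the IV slope is cov(z, y)/cov(z, x) with the instrument z set to each subject's group mean of the error-prone exposure. The sketch below is a generic IV computation with hypothetical array inputs, not the paper's variance machinery.

    ```python
    import numpy as np

    def group_mean_iv(x, y, groups):
        """IV slope estimate using group means of the error-prone
        exposure x as the instrument z.

        x, y   : 1-D arrays of exposure measurements and responses
        groups : 1-D integer array assigning each subject to an a
                 priori defined group
        """
        x, y, groups = map(np.asarray, (x, y, groups))
        # instrument: each subject's own group mean of x
        z = np.array([x[groups == g].mean() for g in groups])
        return np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

    # Usage: group_mean_iv(x, y, groups) reproduces the 'group mean OLS'
    # slope, per the equivalence shown in the paper.
    ```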

  15. Power Spectral Density Specification and Analysis of Large Optical Surfaces

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2009-01-01

    The 2-dimensional power spectral density (PSD) can be used to characterize the mid- and high-spatial-frequency components of the surface height errors of an optical surface. We found it necessary to have a complete, easy-to-use approach for specifying and evaluating the PSD characteristics of large optical surfaces: an approach that allows one to specify the surface quality of a large optical surface based on simulated results using a PSD function, and to evaluate the measured surface profile data of the same optic against the simulation predictions during the specification-derivation process. This paper provides a complete mathematical description of PSD error and proposes a new approach in which a 2-dimensional (2D) PSD is converted into a 1-dimensional (1D) one by azimuthally averaging the 2D PSD. The 1D PSD calculated this way has the same units and the same profile as the original PSD function, thus allowing one to compare the two directly.
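
    Azimuthal averaging of a 2D PSD into a 1D profile amounts to binning PSD samples by radial spatial frequency and averaging within each annulus. A minimal NumPy sketch follows, assuming a square, uniformly sampled surface-height map and one common PSD normalization; the paper's exact normalization is not reproduced here.

    ```python
    import numpy as np

    def azimuthally_averaged_psd(height, dx):
        """2D PSD of a square surface-height map, azimuthally averaged
        into a 1D radial profile.

        height : 2D array of surface heights (n x n), units of length
        dx     : sample spacing, same length units
        returns (f_radial, psd_1d)
        """
        n = height.shape[0]
        # 2D PSD (one common normalization choice)
        H = np.fft.fftshift(np.fft.fft2(height))
        psd2d = (np.abs(H) ** 2) * (dx * dx) / (n * n)
        # radial spatial-frequency coordinate of every PSD sample
        f = np.fft.fftshift(np.fft.fftfreq(n, d=dx))
        fx, fy = np.meshgrid(f, f)
        fr = np.hypot(fx, fy)
        # average PSD samples inside annular frequency bins
        nbins = n // 2
        edges = np.linspace(0.0, f.max(), nbins + 1)
        idx = np.digitize(fr.ravel(), edges) - 1
        counts = np.bincount(idx, minlength=nbins + 1)[:nbins]
        sums = np.bincount(idx, weights=psd2d.ravel(),
                           minlength=nbins + 1)[:nbins]
        psd1d = sums / np.maximum(counts, 1)
        f_radial = 0.5 * (edges[:-1] + edges[1:])
        return f_radial, psd1d
    ```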

  16. Role-modeling and medical error disclosure: a national survey of trainees.

    PubMed

    Martinez, William; Hickson, Gerald B; Miller, Bonnie M; Doukas, David J; Buckley, John D; Song, John; Sehgal, Niraj L; Deitz, Jennifer; Braddock, Clarence H; Lehmann, Lisa Soleymani

    2014-03-01

    To measure trainees' exposure to negative and positive role-modeling for responding to medical errors and to examine the association between that exposure and trainees' attitudes and behaviors regarding error disclosure. Between May 2011 and June 2012, 435 residents at two large academic medical centers and 1,187 medical students from seven U.S. medical schools received anonymous, electronic questionnaires. The questionnaire asked respondents about (1) experiences with errors, (2) training for responding to errors, (3) behaviors related to error disclosure, (4) exposure to role-modeling for responding to errors, and (5) attitudes regarding disclosure. Using multivariate regression, the authors analyzed whether frequency of exposure to negative and positive role-modeling independently predicted two primary outcomes: (1) attitudes regarding disclosure and (2) nontransparent behavior in response to a harmful error. The response rate was 55% (884/1,622). Training on how to respond to errors had the largest independent, positive effect on attitudes (standardized effect estimate, 0.32, P < .001); negative role-modeling had the largest independent, negative effect (standardized effect estimate, -0.26, P < .001). Positive role-modeling had a positive effect on attitudes (standardized effect estimate, 0.26, P < .001). Exposure to negative role-modeling was independently associated with an increased likelihood of trainees' nontransparent behavior in response to an error (OR 1.37, 95% CI 1.15-1.64; P < .001). Exposure to role-modeling predicts trainees' attitudes and behavior regarding the disclosure of harmful errors. Negative role models may be a significant impediment to disclosure among trainees.

  17. In Vivo Evaluation of Wearable Head Impact Sensors.

    PubMed

    Wu, Lyndia C; Nangia, Vaibhav; Bui, Kevin; Hammoor, Bradley; Kurt, Mehmet; Hernandez, Fidel; Kuo, Calvin; Camarillo, David B

    2016-04-01

    Inertial sensors are commonly used to measure human head motion. Some sensors have been tested with dummy or cadaver experiments with mixed results, and methods to evaluate sensors in vivo are lacking. Here we present an in vivo method using high speed video to test teeth-mounted (mouthguard), soft tissue-mounted (skin patch), and headgear-mounted (skull cap) sensors during 6-13 g sagittal soccer head impacts. Sensor coupling to the skull was quantified by displacement from an ear-canal reference. Mouthguard displacements were within video measurement error (<1 mm), while the skin patch and skull cap displaced up to 4 and 13 mm from the ear-canal reference, respectively. We used the mouthguard, which had the least displacement from skull, as the reference to assess 6-degree-of-freedom skin patch and skull cap measurements. Linear and rotational acceleration magnitudes were over-predicted by both the skin patch (with 120% NRMS error for a(mag), 290% for α(mag)) and the skull cap (320% NRMS error for a(mag), 500% for α(mag)). Such over-predictions were largely due to out-of-plane motion. To model sensor error, we found that in-plane skin patch linear acceleration in the anterior-posterior direction could be modeled by an underdamped viscoelastic system. In summary, the mouthguard showed tighter skull coupling than the other sensor mounting approaches. Furthermore, the in vivo methods presented are valuable for investigating skull acceleration sensor technologies.

  18. Photomask CD and LER characterization using Mueller matrix spectroscopic ellipsometry

    NASA Astrophysics Data System (ADS)

    Heinrich, A.; Dirnstorfer, I.; Bischoff, J.; Meiner, K.; Ketelsen, H.; Richter, U.; Mikolajick, T.

    2014-10-01

    Critical dimension and line edge roughness on photomask arrays are determined with Mueller matrix spectroscopic ellipsometry. Arrays with large sinusoidal perturbations are measured at different azimuth angles and compared with simulations based on rigorous coupled-wave analysis. Experiment and simulation show that line edge roughness leads to characteristic changes in the different Mueller matrix elements; the influence of line edge roughness is interpreted as an increase in the isotropic character of the sample. The changes in the Mueller matrix elements are very similar when the arrays are statistically perturbed with rms roughness values in the nanometer range, suggesting that the results on the sinusoidal test structures are also relevant for "real" mask errors. Critical dimension errors and line edge roughness have a similar impact on the Mueller matrix SE measurement. To distinguish between the two deviations, a strategy based on calculating sensitivities and correlation coefficients for all Mueller matrix elements is shown. The elements M13/M31 and M34/M43 are the most suitable due to their high sensitivities to critical dimension errors and line edge roughness and, at the same time, a low correlation coefficient between the two influences. From the simulated sensitivities, it is estimated that the measurement accuracy has to be on the order of 0.01 and 0.001 for the detection of 1 nm critical dimension error and 1 nm line edge roughness, respectively.

  19. A simple differential steady-state method to measure the thermal conductivity of solid bulk materials with high accuracy.

    PubMed

    Kraemer, D; Chen, G

    2014-02-01

    Accurate measurements of thermal conductivity are of great importance for materials research and development. Steady-state methods determine thermal conductivity directly from the proportionality between heat flow and an applied temperature difference (Fourier Law). Although theoretically simple, in practice, achieving high accuracies with steady-state methods is challenging and requires rather complex experimental setups due to temperature sensor uncertainties and parasitic heat loss. We developed a simple differential steady-state method in which the sample is mounted between an electric heater and a temperature-controlled heat sink. Our method calibrates for parasitic heat losses from the electric heater during the measurement by maintaining a constant heater temperature close to the environmental temperature while varying the heat sink temperature. This enables a large signal-to-noise ratio which permits accurate measurements of samples with small thermal conductance values without an additional heater calibration measurement or sophisticated heater guards to eliminate parasitic heater losses. Additionally, the differential nature of the method largely eliminates the uncertainties of the temperature sensors, permitting measurements with small temperature differences, which is advantageous for samples with high thermal conductance values and/or with strongly temperature-dependent thermal conductivities. In order to accelerate measurements of more than one sample, the proposed method allows for measuring several samples consecutively at each temperature measurement point without adding significant error. We demonstrate the method by performing thermal conductivity measurements on commercial bulk thermoelectric Bi2Te3 samples in the temperature range of 30-150 °C with an error below 3%.
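
    In a steady-state measurement of this kind, the conductivity follows directly from Fourier's law once the parasitic heater loss has been calibrated out. The sketch below shows the arithmetic with hypothetical variable names; it summarizes the principle rather than reproducing the paper's calibration procedure.

    ```python
    def thermal_conductivity(q_heater_w, q_loss_w, dT_k, thickness_m, area_m2):
        """Fourier-law evaluation for a steady-state measurement.

        q_heater_w : electrical power supplied to the heater [W]
        q_loss_w   : parasitic heater loss at this heater temperature [W],
                     from the calibration in which the sink temperature is
                     varied while the heater temperature is held constant
        dT_k       : temperature drop across the sample [K]
        thickness_m, area_m2 : sample geometry
        """
        q_sample = q_heater_w - q_loss_w      # heat actually crossing sample
        return q_sample * thickness_m / (area_m2 * dT_k)

    # Example: 1.00 W supplied, 0.05 W lost, 5 K drop across a
    # 2 mm thick, 1 cm^2 sample -> k ≈ 3.8 W/(m K)
    print(thermal_conductivity(1.00, 0.05, 5.0, 2e-3, 1e-4))
    ```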

  20. Estimating Relative Positions of Outer-Space Structures

    NASA Technical Reports Server (NTRS)

    Balian, Harry; Breckenridge, William; Brugarolas, Paul

    2009-01-01

    A computer program estimates the relative position and orientation of two structures from measurements, made by use of electronic cameras and laser range finders on one structure, of distances and angular positions of fiducial objects on the other structure. The program was written specifically for use in determining errors in the alignment of large structures deployed in outer space from a space shuttle. The program is based partly on equations for transformations among the various coordinate systems involved in the measurements and on equations that account for errors in the transformation operators. It computes a least-squares estimate of the relative position and orientation. Sequential least-squares estimates, acquired at a measurement rate of 4 Hz, are averaged by passing them through a fourth-order Butterworth filter. The program is executed in a computer aboard the space shuttle, and its position and orientation estimates are displayed to astronauts on a graphical user interface.
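
    Smoothing a 4 Hz stream of pose estimates with a fourth-order Butterworth filter takes a few lines with SciPy; the cutoff frequency below is an illustrative assumption, since the abstract does not state one.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 4.0   # pose estimates per second, as in the abstract
    fc = 0.2   # cutoff in Hz -- illustrative assumption

    b, a = butter(N=4, Wn=fc / (fs / 2))   # 4th-order low-pass Butterworth

    def smooth_estimates(pose_series):
        """Low-pass filter a sequence of pose estimates.

        pose_series : array of shape (n_samples, 6), e.g. xyz plus three
                      angles, sampled at 4 Hz. filtfilt gives zero-phase
                      smoothing offline; a real-time system would filter
                      causally instead.
        """
        return filtfilt(b, a, np.asarray(pose_series), axis=0)
    ```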

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gamblin, T; de Supinski, B R; Schulz, M

    Good load balance is crucial on very large parallel systems, but the most sophisticated algorithms introduce dynamic imbalances through adaptation in domain decomposition or use of adaptive solvers. To observe and diagnose imbalance, developers need system-wide, temporally-ordered measurements from full-scale runs. This potentially requires data collection from multiple code regions on all processors over the entire execution. Doing this instrumentation naively can, in combination with the application itself, exceed available I/O bandwidth and storage capacity, and can induce severe behavioral perturbations. We present and evaluate a novel technique for scalable, low-error load balance measurement. This uses a parallel wavelet transform and other parallel encoding methods. We show that our technique collects and reconstructs system-wide measurements with low error. Compression time scales sublinearly with system size and data volume is several orders of magnitude smaller than the raw data. The overhead is low enough for online use in a production environment.
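
    The compression idea can be illustrated with a discrete wavelet transform followed by thresholding of small coefficients. The sketch below uses PyWavelets on a single load trace as a sequential illustration of the principle; the paper's transform is parallel and its encoder more sophisticated.

    ```python
    import numpy as np
    import pywt

    def compress_load_trace(trace, wavelet="haar", keep=0.05):
        """Sparsify a load-balance time series by keeping only the
        largest `keep` fraction of wavelet coefficients."""
        coeffs = pywt.wavedec(trace, wavelet)
        flat, slices = pywt.coeffs_to_array(coeffs)
        k = max(1, int(keep * flat.size))
        cutoff = np.sort(np.abs(flat))[-k]        # magnitude threshold
        flat[np.abs(flat) < cutoff] = 0.0         # drop small coefficients
        return flat, slices, wavelet

    def reconstruct_load_trace(flat, slices, wavelet):
        coeffs = pywt.array_to_coeffs(flat, slices, output_format="wavedec")
        return pywt.waverec(coeffs, wavelet)

    # Smooth imbalance signals reconstruct with low error from ~5% of
    # the coefficients -- the effect the paper exploits at scale.
    t = np.linspace(0, 1, 1024)
    rng = np.random.default_rng(0)
    trace = np.sin(2 * np.pi * 3 * t) + 0.1 * rng.standard_normal(t.size)
    rec = reconstruct_load_trace(*compress_load_trace(trace))
    print(np.sqrt(np.mean((trace - rec) ** 2)))   # small reconstruction RMSE
    ```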

  2. Mixtures of Berkson and classical covariate measurement error in the linear mixed model: Bias analysis and application to a study on ultrafine particles.

    PubMed

    Deffner, Veronika; Küchenhoff, Helmut; Breitner, Susanne; Schneider, Alexandra; Cyrys, Josef; Peters, Annette

    2018-05-01

    The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements of mobile devices show classical, possibly individual-specific, measurement error; Berkson-type error, which may also vary individually, occurs if measurements of fixed monitoring stations are used. The combination of fixed-site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show that autocorrelation may severely change the attenuation of the effect estimates. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partially yielded better results than using incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
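    For the classical-error component, the familiar attenuation and its method-of-moments correction can be illustrated with a deliberately simplified single-level regression (the paper itself works with linear mixed models, autocorrelation, and a Berkson admixture); the error variance is assumed known here.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x_true = rng.normal(0.0, 1.0, n)                 # true exposure
sigma_u = 0.8                                    # assumed classical error SD
x_obs = x_true + rng.normal(0.0, sigma_u, n)     # error-prone measurement
y = 0.5 * x_true + rng.normal(0.0, 1.0, n)       # outcome, true slope 0.5

beta_naive = np.cov(x_obs, y, ddof=1)[0, 1] / np.var(x_obs, ddof=1)
# Method-of-moments correction: divide by the reliability ratio lambda.
lam = (np.var(x_obs, ddof=1) - sigma_u**2) / np.var(x_obs, ddof=1)
beta_corrected = beta_naive / lam                # ~0.5, the attenuation is undone
```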

  3. Steady-state low thermal resistance characterization apparatus: The bulk thermal tester

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burg, Brian R.; Kolly, Manuel; Blasakis, Nicolas

    The reliability of microelectronic devices is largely dependent on electronic packaging, which includes heat removal. Appropriate packaging design therefore necessitates precise knowledge of the relevant material properties, including thermal resistance and thermal conductivity. Thin materials and high-conductivity layers make their thermal characterization challenging. A steady-state measurement technique is presented and evaluated with the purpose of characterizing samples with a thermal resistance below 100 mm² K/W. It is based on the heat flow meter bar approach, made up of two copper blocks, and relies exclusively on temperature measurements from thermocouples. The importance of thermocouple calibration is emphasized in order to obtain accurate temperature readings. An in-depth error analysis, based on Gaussian error propagation, is carried out. An error sensitivity analysis highlights the importance of precise knowledge of the thermal interface materials required for the measurements. Reference measurements on Mo samples reveal a measurement uncertainty in the range of 5%, and the most accurate measurements are obtained at high heat fluxes. Measurement techniques for homogeneous bulk samples, layered materials, and protruding cavity samples are discussed. Ultimately, a comprehensive overview of a steady-state thermal characterization technique is provided, evaluating the accuracy of sample measurements with thermal resistances well below state-of-the-art setups. Accurate characterization of materials used in heat removal applications, such as electronic packaging, will enable more efficient designs and ultimately contribute to energy savings.
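    A minimal sketch of the Gaussian error propagation mentioned in the abstract, applied to an area-specific thermal resistance R = ΔT/q; the numerical values and the 2% heat-flux uncertainty are assumptions for illustration only.

```python
import numpy as np

q = 5.0e5          # W/m^2, heat flux through the meter bars (assumed)
dq = 0.02 * q      # assumed 2% relative uncertainty on the heat flux
dT = 5.0           # K, temperature drop across the sample (assumed)
ddT = 0.05         # K, thermocouple uncertainty after calibration (assumed)

R = dT / q                                           # area-specific thermal resistance
# Gaussian propagation for R = dT/q with independent uncertainties
dR = R * np.sqrt((ddT / dT) ** 2 + (dq / q) ** 2)
print(f"R = {R*1e6:.1f} +/- {dR*1e6:.1f} mm^2 K/W")   # ~10 mm^2 K/W with ~2.2% uncertainty
```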

  4. Paediatric in-patient prescribing errors in Malaysia: a cross-sectional multicentre study.

    PubMed

    Khoo, Teik Beng; Tan, Jing Wen; Ng, Hoong Phak; Choo, Chong Ming; Bt Abdul Shukor, Intan Nor Chahaya; Teh, Siao Hean

    2017-06-01

    Background There is a lack of large comprehensive studies in developing countries on paediatric in-patient prescribing errors in different settings. Objectives To determine the characteristics of in-patient prescribing errors among paediatric patients. Setting General paediatric wards, neonatal intensive care units and paediatric intensive care units in government hospitals in Malaysia. Methods This is a cross-sectional multicentre study involving 17 participating hospitals. Drug charts were reviewed in each ward to identify the prescribing errors. All prescribing errors identified were further assessed for their potential clinical consequences, likely causes and contributing factors. Main outcome measures Incidence, types, potential clinical consequences, causes and contributing factors of the prescribing errors. Results The overall prescribing error rate was 9.2% out of 17,889 prescribed medications. There was no significant difference in the prescribing error rates between different types of hospitals or wards. Electronic prescribing had a higher prescribing error rate than manual prescribing (16.9 vs 8.2%, p < 0.05). Twenty-eight (1.7%) prescribing errors were deemed to have serious potential clinical consequences and 2 (0.1%) were judged to be potentially fatal. Most of the errors were attributed to human factors, i.e. performance or knowledge deficit. The most common contributing factors were lack of supervision and lack of knowledge. Conclusions Although electronic prescribing may potentially improve safety, it may conversely cause prescribing errors due to suboptimal interfaces and cumbersome work processes. Junior doctors need specific training in paediatric prescribing and close supervision to reduce prescribing errors in paediatric in-patients.

  5. Accuracy of Jump-Mat Systems for Measuring Jump Height.

    PubMed

    Pueo, Basilio; Lipinska, Patrycja; Jiménez-Olmedo, José M; Zmijewski, Piotr; Hopkins, Will G

    2017-08-01

    Vertical-jump tests are commonly used to evaluate lower-limb power of athletes and nonathletes. Several types of equipment are available for this purpose. To compare the error of measurement of 2 jump-mat systems (Chronojump-Boscosystem and Globus Ergo Tester) with that of a motion-capture system as a criterion and to determine the modifying effect of foot length on jump height. Thirty-one young adult men alternated 4 countermovement jumps with 4 squat jumps. Mean jump height and standard deviations representing technical error of measurement arising from each device and variability arising from the subjects themselves were estimated with a novel mixed model and evaluated via standardization and magnitude-based inference. The jump-mat systems produced nearly identical measures of jump height (differences in means and in technical errors of measurement ≤1 mm). Countermovement and squat-jump height were both 13.6 cm higher with motion capture (90% confidence limits ±0.3 cm), but this very large difference was reduced to small unclear differences when adjusted to a foot length of zero. Variability in countermovement and squat-jump height arising from the subjects was small (1.1 and 1.5 cm, respectively, 90% confidence limits ±0.3 cm); technical error of motion capture was similar in magnitude (1.7 and 1.6 cm, ±0.3 and ±0.4 cm), and that of the jump mats was similar or smaller (1.2 and 0.3 cm, ±0.5 and ±0.9 cm). The jump-mat systems provide trustworthy measurements for monitoring changes in jump height. Foot length can explain the substantially higher jump height observed with motion capture.

  6. CORRECTING FOR MEASUREMENT ERROR IN LATENT VARIABLES USED AS PREDICTORS*

    PubMed Central

    Schofield, Lynne Steuerle

    2015-01-01

    This paper represents a methodological-substantive synergy. A new model, the Mixed Effects Structural Equations (MESE) model which combines structural equations modeling and item response theory is introduced to attend to measurement error bias when using several latent variables as predictors in generalized linear models. The paper investigates racial and gender disparities in STEM retention in higher education. Using the MESE model with 1997 National Longitudinal Survey of Youth data, I find prior mathematics proficiency and personality have been previously underestimated in the STEM retention literature. Pre-college mathematics proficiency and personality explain large portions of the racial and gender gaps. The findings have implications for those who design interventions aimed at increasing the rates of STEM persistence among women and under-represented minorities. PMID:26977218

  7. Correcting Measurement Error in Latent Regression Covariates via the MC-SIMEX Method

    ERIC Educational Resources Information Center

    Rutkowski, Leslie; Zhou, Yan

    2015-01-01

    Given the importance of large-scale assessments to educational policy conversations, it is critical that subpopulation achievement is estimated reliably and with sufficient precision. Despite this importance, biased subpopulation estimates have been found to occur when variables in the conditioning model side of a latent regression model contain…

  8. Head Start, 4 years After Completing the Program

    ERIC Educational Resources Information Center

    Kim, Young-Joo

    2013-01-01

    This paper studies the effect of the Head Start program on children's achievements in reading and math tests during their first 4 years of schooling after completing the program. Using nationally representative data from the Early Childhood Longitudinal Study, I found large measurement error in the parental reports of Head Start attendance, which…

  9. Estimating True Student Growth Percentile Distributions Using Latent Regression Multidimensional IRT Models

    ERIC Educational Resources Information Center

    Lockwood, J. R.; Castellano, Katherine E.

    2017-01-01

    Student Growth Percentiles (SGPs) increasingly are being used in the United States for inferences about student achievement growth and educator effectiveness. Emerging research has indicated that SGPs estimated from observed test scores have large measurement errors. As such, little is known about "true" SGPs, which are defined in terms…

  10. Bayesian Analysis of Silica Exposure and Lung Cancer Using Human and Animal Studies.

    PubMed

    Bartell, Scott M; Hamra, Ghassan Badri; Steenland, Kyle

    2017-03-01

    Bayesian methods can be used to incorporate external information into epidemiologic exposure-response analyses of silica and lung cancer. We used data from a pooled mortality analysis of silica and lung cancer (n = 65,980), using untransformed and log-transformed cumulative exposure. Animal data came from chronic silica inhalation studies using rats. We conducted Bayesian analyses with informative priors based on the animal data and different cross-species extrapolation factors. We also conducted analyses with exposure measurement error corrections in the absence of a gold standard, assuming Berkson-type error that increased with increasing exposure. The pooled animal data exposure-response coefficient was markedly higher (log exposure) or lower (untransformed exposure) than the coefficient for the pooled human data. With 10-fold uncertainty, the animal prior had little effect on results for pooled analyses and only modest effects in some individual studies. One-fold uncertainty produced markedly different results for both pooled and individual studies. Measurement error correction had little effect in pooled analyses using log exposure. Using untransformed exposure, measurement error correction caused a 5% decrease in the exposure-response coefficient for the pooled analysis and marked changes in some individual studies. The animal prior had more impact for smaller human studies and for one-fold versus three- or 10-fold uncertainty. Adjustment for Berkson error using Bayesian methods had little effect on the exposure-response coefficient when exposure was log transformed or when the sample size was large. See video abstract at, http://links.lww.com/EDE/B160.

  11. High-Threshold Fault-Tolerant Quantum Computation with Analog Quantum Error Correction

    NASA Astrophysics Data System (ADS)

    Fukui, Kosuke; Tomita, Akihisa; Okamoto, Atsushi; Fujii, Keisuke

    2018-04-01

    To implement fault-tolerant quantum computation with continuous variables, the Gottesman-Kitaev-Preskill (GKP) qubit has been recognized as an important technological element. However, it is still challenging to experimentally generate the GKP qubit with the required squeezing level, 14.8 dB, of the existing fault-tolerant quantum computation. To reduce this requirement, we propose a high-threshold fault-tolerant quantum computation with GKP qubits using topologically protected measurement-based quantum computation with the surface code. By harnessing analog information contained in the GKP qubits, we apply analog quantum error correction to the surface code. Furthermore, we develop a method to prevent the squeezing level from decreasing during the construction of the large-scale cluster states for the topologically protected, measurement-based, quantum computation. We numerically show that the required squeezing level can be relaxed to less than 10 dB, which is within the reach of the current experimental technology. Hence, this work can considerably alleviate this experimental requirement and take a step closer to the realization of large-scale quantum computation.

  12. Error analysis of high-rate GNSS precise point positioning for seismic wave measurement

    NASA Astrophysics Data System (ADS)

    Shu, Yuanming; Shi, Yun; Xu, Peiliang; Niu, Xiaoji; Liu, Jingnan

    2017-06-01

    High-rate GNSS precise point positioning (PPP) has been playing an increasingly important role in providing precise positioning information in fast time-varying environments. Although kinematic PPP is commonly known to have a precision of a few centimeters, the precision of high-rate PPP within a short period of time has recently been reported, on the basis of experiments, to reach a few millimeters in the horizontal components and sub-centimeters in the vertical component when measuring seismic motion, which is several times better than conventional kinematic PPP practice. To fully understand the mechanism behind this surprisingly good short-term performance of high-rate PPP, we have carried out a theoretical error analysis of PPP and conducted the corresponding simulations over a short period of time. The theoretical analysis clearly indicates that high-rate PPP errors consist of two types: the residual systematic errors at the starting epoch, which affect high-rate PPP through the change of satellite geometry, and the time-varying systematic errors between the starting epoch and the current epoch. Both the theoretical error analysis and the simulated results are fully consistent with, and thus unambiguously confirm, the reported high precision of high-rate PPP, which is further affirmed here by real-data experiments, indicating that high-rate PPP can indeed achieve millimeter-level precision in the horizontal components and sub-centimeter-level precision in the vertical component when measuring motion within a short period of time. The simulation results clearly show that the random noise of carrier phases and higher-order ionospheric errors are the two major factors affecting the precision of high-rate PPP within a short period of time. The experiments with real data also indicate that the precision of PPP solutions can degrade to the cm level in both the horizontal and vertical components if the geometry of satellites is rather poor, with a large DOP value.

  13. Atmospheric Dispersion Effects in Weak Lensing Measurements

    DOE PAGES

    Plazas, Andrés Alejandro; Bernstein, Gary

    2012-10-01

    The wavelength dependence of atmospheric refraction causes elongation of finite-bandwidth images along the elevation vector, which produces spurious signals in weak gravitational lensing shear measurements unless this atmospheric dispersion is calibrated and removed to high precision. Because astrometric solutions and PSF characteristics are typically calibrated from stellar images, differences between the reference stars' spectra and the galaxies' spectra will leave residual errors in both the astrometric positions (dr) and in the second moment (width) of the wavelength-averaged PSF (dv) for galaxies. We estimate the level of dv that will induce spurious weak lensing signals in PSF-corrected galaxy shapes that exceed the statistical errors of the DES and the LSST cosmic-shear experiments. We also estimate the dr signals that will produce unacceptable spurious distortions after stacking of exposures taken at different airmasses and hour angles. We also calculate the errors in the griz bands, and find that dispersion systematics, uncorrected, are up to 6 and 2 times larger in the g and r bands, respectively, than the requirements for the DES error budget, but can be safely ignored in the i and z bands. For the LSST requirements, the factors are about 30, 10, and 3 in the g, r, and i bands, respectively. We find that a simple correction linear in galaxy color is accurate enough to reduce dispersion shear systematics to insignificant levels in the r band for DES and the i band for LSST, but is still as much as 5 times larger than the requirement for LSST r-band observations. More complex corrections will likely be able to reduce the systematic cosmic-shear errors below statistical errors for the LSST r band. But g-band effects remain large enough that it seems likely that induced systematics will dominate the statistical errors of both surveys, and cosmic-shear measurements should rely on the redder bands.

  14. How large are the consequences of covariate imbalance in cluster randomized trials: a simulation study with a continuous outcome and a binary covariate at the cluster level.

    PubMed

    Moerbeek, Mirjam; van Schie, Sander

    2016-07-11

    The number of clusters in a cluster randomized trial is often low. It is therefore likely random assignment of clusters to treatment conditions results in covariate imbalance. There are no studies that quantify the consequences of covariate imbalance in cluster randomized trials on parameter and standard error bias and on power to detect treatment effects. The consequences of covariate imbalance in unadjusted and adjusted linear mixed models are investigated by means of a simulation study. The factors in this study are the degree of imbalance, the covariate effect size, the cluster size and the intraclass correlation coefficient. The covariate is binary and measured at the cluster level; the outcome is continuous and measured at the individual level. The results show covariate imbalance results in negligible parameter bias and small standard error bias in adjusted linear mixed models. Ignoring the possibility of covariate imbalance while calculating the sample size at the cluster level may result in a loss in power of at most 25 % in the adjusted linear mixed model. The results are more severe for the unadjusted linear mixed model: parameter biases up to 100 % and standard error biases up to 200 % may be observed. Power levels based on the unadjusted linear mixed model are often too low. The consequences are most severe for large clusters and/or small intraclass correlation coefficients since then the required number of clusters to achieve a desired power level is smallest. The possibility of covariate imbalance should be taken into account while calculating the sample size of a cluster randomized trial. Otherwise more sophisticated methods to randomize clusters to treatments should be used, such as stratification or balance algorithms. All relevant covariates should be carefully identified, be actually measured and included in the statistical model to avoid severe levels of parameter and standard error bias and insufficient power levels.

  15. Evaluation of the predicted error of the soil moisture retrieval from C-band SAR by comparison against modelled soil moisture estimates over Australia

    PubMed Central

    Doubková, Marcela; Van Dijk, Albert I.J.M.; Sabel, Daniel; Wagner, Wolfgang; Blöschl, Günter

    2012-01-01

    The Sentinel-1 will carry onboard a C-band radar instrument that will map the European continent once every four days and the global land surface at least once every twelve days with finest 5 × 20 m spatial resolution. The high temporal sampling rate and operational configuration make Sentinel-1 of interest for operational soil moisture monitoring. Currently, updated soil moisture data are made available at 1 km spatial resolution as a demonstration service using Global Mode (GM) measurements from the Advanced Synthetic Aperture Radar (ASAR) onboard ENVISAT. The service demonstrates the potential of the C-band observations to monitor variations in soil moisture. Importantly, a retrieval error estimate is also available; these are needed to assimilate observations into models. The retrieval error is estimated by propagating sensor errors through the retrieval model. In this work, the existing ASAR GM retrieval error product is evaluated using independent top soil moisture estimates produced by the grid-based landscape hydrological model (AWRA-L) developed within the Australian Water Resources Assessment system (AWRA). The ASAR GM retrieval error estimate, an assumed prior AWRA-L error estimate and the variance in the respective datasets were used to spatially predict the root mean square error (RMSE) and the Pearson's correlation coefficient R between the two datasets. These were compared with the RMSE calculated directly from the two datasets. The predicted and computed RMSE showed a very high level of agreement in spatial patterns as well as good quantitative agreement; the RMSE was predicted within accuracy of 4% of saturated soil moisture over 89% of the Australian land mass. Predicted and calculated R maps corresponded within accuracy of 10% over 61% of the continent. The strong correspondence between the predicted and calculated RMSE and R builds confidence in the retrieval error model and derived ASAR GM error estimates. The ASAR GM and Sentinel-1 have the same basic physical measurement characteristics, and therefore very similar retrieval error estimation method can be applied. Because of the expected improvements in radiometric resolution of the Sentinel-1 backscatter measurements, soil moisture estimation errors can be expected to be an order of magnitude less than those for ASAR GM. This opens the possibility for operationally available medium resolution soil moisture estimates with very well-specified errors that can be assimilated into hydrological or crop yield models, with potentially large benefits for land-atmosphere fluxes, crop growth, and water balance monitoring and modelling. PMID:23483015
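    The spatial prediction of RMSE and R from the two independent error estimates can be written down in a few lines; under the usual assumption that the retrieval error, the model error, and the common soil-moisture signal are mutually uncorrelated, the expressions below apply (the values are illustrative, not from the study).

```python
import numpy as np

sigma_sar = 0.04        # ASAR GM retrieval error estimate (m^3/m^3, assumed)
sigma_mod = 0.03        # assumed prior AWRA-L model error (m^3/m^3)
var_signal = 0.05**2    # variance of the common soil-moisture signal (assumed)

# Predicted RMSE between the two error-prone estimates of the same signal
rmse_pred = np.sqrt(sigma_sar**2 + sigma_mod**2)

# Predicted Pearson correlation between the two datasets
r_pred = var_signal / np.sqrt((var_signal + sigma_sar**2) * (var_signal + sigma_mod**2))
```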

  16. The solvability of quantum k-pair network in a measurement-based way.

    PubMed

    Li, Jing; Xu, Gang; Chen, Xiu-Bo; Qu, Zhiguo; Niu, Xin-Xin; Yang, Yi-Xian

    2017-12-01

    Network coding is an effective means to enhance communication efficiency. The characterization of network solvability is one of the most important topics in this field. However, for general networks, the solvability conditions remain a challenge. In this paper, we consider the solvability of the general quantum k-pair network in a measurement-based framework. For the first time, a detailed account of measurement-based quantum network coding (MB-QNC) is specified systematically. Differing from existing coding schemes, single-qubit measurements on a pre-shared graph state are the only allowed coding operations. Since no controlled operations are involved, MB-QNC schemes are more feasible. Further, sufficient conditions formulated by eigenvalue equations and the stabilizer matrix are presented, which build an unambiguous relation between solvability and the general network. This result can also be used to analyze the feasibility of the task of sharing k EPR pairs in large-scale networks. Finally, in the presence of noise, we analyze the advantage of MB-QNC in contrast to the gate-based approach. Using an instance network [Formula: see text], we show that MB-QNC allows higher error thresholds. In particular, for X errors, the error threshold is about 30%, higher than the 10% of the gate-based approach. In addition, specific expressions for the fidelity subject to some constraint conditions are given.

  17. NBC Contamination Survivability, Large Item Exteriors

    DTIC Science & Technology

    1998-04-17

    environment. Ability to control temperature, relative humidity (RH), and wind speed is required. The facility must be designed to ensure safe and... 2.2 Instrumentation. Measuring devices and permissible errors of measurement: air temperature, ±0.5 °C; relative humidity (RH), ±5%; wind speed, ±0.1 m/sec. Still... process, excluding monitoring, should last no longer than 75 minutes. (3) The item surface temperature is 30 °C and exterior wind speed is no greater

  18. Physical and Mathematical Questions on Signal Processing in Multibase Phase Direction Finders

    NASA Astrophysics Data System (ADS)

    Denisov, V. P.; Dubinin, D. V.; Meshcheryakov, A. A.

    2018-02-01

    Questions on improving the accuracy of multiple-base phase direction finders by rejecting anomalously large errors in the process of resolving the measurement ambiguities are considered. A physical basis is derived and calculated relationships characterizing the efficiency of the proposed solutions are obtained. Results of a computer simulation of a three-base direction finder are analyzed, along with field measurements of a three-base direction finder along near-ground paths.

  19. Statistical inference of seabed sound-speed structure in the Gulf of Oman Basin.

    PubMed

    Sagers, Jason D; Knobles, David P

    2014-06-01

    Addressed is the statistical inference of the sound-speed depth profile of a thick soft seabed from broadband sound propagation data recorded in the Gulf of Oman Basin in 1977. The acoustic data are in the form of time series signals recorded on a sparse vertical line array and generated by explosive sources deployed along a 280 km track. The acoustic data offer a unique opportunity to study a deep-water bottom-limited thickly sedimented environment because of the large number of time series measurements, very low seabed attenuation, and auxiliary measurements. A maximum entropy method is employed to obtain a conditional posterior probability distribution (PPD) for the sound-speed ratio and the near-surface sound-speed gradient. The multiple data samples allow for a determination of the average error constraint value required to uniquely specify the PPD for each data sample. Two complicating features of the statistical inference study are addressed: (1) the need to develop an error function that can both utilize the measured multipath arrival structure and mitigate the effects of data errors and (2) the effect of small bathymetric slopes on the structure of the bottom interacting arrivals.

  20. Towards System Calibration of Panoramic Laser Scanners from a Single Station

    PubMed Central

    Medić, Tomislav; Holst, Christoph; Kuhlmann, Heiner

    2017-01-01

    Terrestrial laser scanner measurements suffer from systematic errors due to internal misalignments. The magnitude of the resulting errors in the point cloud in many cases exceeds the magnitude of random errors. Hence, the task of calibrating a laser scanner is important for applications with high accuracy demands. This paper primarily addresses the case of panoramic terrestrial laser scanners. Herein, it is proven that most of the calibration parameters can be estimated from a single scanner station without a need for any reference information. This hypothesis is confirmed through an empirical experiment, which was conducted in a large machine hall using a Leica Scan Station P20 panoramic laser scanner. The calibration approach is based on the widely used target-based self-calibration approach, with small modifications. A new angular parameterization is used in order to implicitly introduce measurements in two faces of the instrument and for the implementation of calibration parameters describing genuine mechanical misalignments. Additionally, a computationally preferable calibration algorithm based on the two-face measurements is introduced. In the end, the calibration results are discussed, highlighting all necessary prerequisites for the scanner calibration from a single scanner station. PMID:28513548

  1. Effect of grid resolution on large eddy simulation of wall-bounded turbulence

    NASA Astrophysics Data System (ADS)

    Rezaeiravesh, S.; Liefvendahl, M.

    2018-05-01

    The effect of grid resolution on a large eddy simulation (LES) of a wall-bounded turbulent flow is investigated. A channel flow simulation campaign involving a systematic variation of the streamwise (Δx) and spanwise (Δz) grid resolution is used for this purpose. The main friction-velocity-based Reynolds number investigated is 300. Near the walls, the grid cell size is determined by the frictional scaling, Δx+ and Δz+, with strongly anisotropic cells and a first cell height of Δy+ ~ 1, thus aiming for wall-resolving LES. Results are compared to direct numerical simulations, and several quality measures are investigated, including the error in the predicted mean friction velocity and the error in cross-channel profiles of flow statistics. To reduce the total number of channel flow simulations, techniques from the framework of uncertainty quantification are employed. In particular, a generalized polynomial chaos expansion (gPCE) is used to create metamodels for the errors over the allowed parameter ranges. The differing behavior of the different quality measures is demonstrated and analyzed. It is shown that the friction velocity and the profiles of the velocity and Reynolds stress tensor are most sensitive to Δz+, while the error in the turbulent kinetic energy is mostly influenced by Δx+. Recommendations for grid resolution requirements are given, together with quantification of the resulting predictive accuracy. The sensitivity of the results to the subgrid-scale (SGS) model and varying Reynolds number is also investigated. All simulations are carried out with the second-order accurate finite-volume-based solver OpenFOAM. It is shown that the choice of numerical scheme for the convective term significantly influences the error portraits. It is emphasized that the proposed methodology, involving the gPCE, can be applied to other modeling approaches, i.e., other numerical methods and the choice of SGS model.
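    As a sketch of the non-intrusive metamodeling step, a regression-based tensor-product Legendre expansion over (Δx+, Δz+) stands in for the gPCE used in the paper; the sample points and error values are invented, and only the mechanics of fitting and evaluating the expansion are shown.

```python
import numpy as np
from numpy.polynomial import legendre

# Hypothetical LES error samples e(dx+, dz+) from a small set of channel-flow runs
dxp = np.array([20, 40, 60, 20, 40, 60, 20, 40, 60], float)
dzp = np.array([10, 10, 10, 20, 20, 20, 30, 30, 30], float)
err = np.array([1.2, 2.0, 3.1, 1.8, 2.9, 4.4, 2.6, 4.0, 5.9])   # % error (made up)

# Map the parameters to [-1, 1] and fit a degree-2 tensor-product Legendre basis.
xi1 = 2 * (dxp - dxp.min()) / (dxp.max() - dxp.min()) - 1
xi2 = 2 * (dzp - dzp.min()) / (dzp.max() - dzp.min()) - 1
V = legendre.legvander2d(xi1, xi2, [2, 2])                  # 9 basis functions
coef, *_ = np.linalg.lstsq(V, err, rcond=None)              # PCE-style coefficients

# The metamodel can now predict the error at an untried resolution, e.g. (30, 15):
x1 = 2 * (30 - dxp.min()) / (dxp.max() - dxp.min()) - 1
x2 = 2 * (15 - dzp.min()) / (dzp.max() - dzp.min()) - 1
e_pred = legendre.legvander2d(np.array([x1]), np.array([x2]), [2, 2]) @ coef
```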

  2. SU-E-T-429: Uncertainties of Cell Surviving Fractions Derived From Tumor-Volume Variation Curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chvetsov, A

    2014-06-01

    Purpose: To evaluate uncertainties of the cell surviving fraction reconstructed from tumor-volume variation curves during radiation therapy, using sensitivity analysis based on linear perturbation theory. Methods: The time-dependent tumor-volume functions V(t) were calculated using a two-level cell population model based on the separation of the entire tumor cell population into two subpopulations: oxygenated viable cells and lethally damaged cells. The sensitivity function is defined as S(t) = [δV(t)/V(t)] / [δx/x], where δV(t)/V(t) is the time-dependent relative variation of the volume V(t) and δx/x is the relative variation of the radiobiological parameter x. The sensitivity analysis was performed using the direct perturbation method, where the radiobiological parameter x was changed by a certain error and the tumor volume was recalculated to evaluate the corresponding tumor-volume variation. Tumor-volume variation curves and sensitivity functions were computed for different values of the cell surviving fraction from the practically important interval S2 = 0.1-0.7 using the two-level cell population model. Results: The sensitivity function of tumor volume to the cell surviving fraction reached a relatively large value of 2.7 for S2 = 0.7 and approached zero as S2 approached zero. Assuming a systematic error of 3-4%, we obtain that the relative error in S2 is less than 20% in the range S2 = 0.4-0.7. This result is important because large values of S2, which are associated with poor treatment outcome, should be measured with relatively small uncertainties. For very small values of S2 < 0.3, the relative error can be larger than 20%; however, the absolute error does not increase significantly. Conclusion: Tumor-volume curves measured during radiotherapy can be used for evaluation of the cell surviving fractions usually observed in radiation therapy with conventional fractionation.
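    A toy version of the direct perturbation method defined by S(t) = [δV(t)/V(t)] / [δx/x]; the exponential-decay volume model below is only a stand-in for the two-level cell-population model of the abstract.

```python
import numpy as np

def tumor_volume(t, s2, v0=1.0):
    # Toy stand-in for the two-level cell-population model: the volume of
    # surviving cells decays by the factor s2 per daily fraction delivered.
    return v0 * s2 ** t

def sensitivity(t, s2, rel_step=0.01):
    # Direct perturbation: perturb the parameter by a small relative step and
    # evaluate S(t) = [dV/V] / [dx/x] from the recomputed volume curve.
    v = tumor_volume(t, s2)
    v_pert = tumor_volume(t, s2 * (1.0 + rel_step))
    return ((v_pert - v) / v) / rel_step

t = np.arange(1, 31)              # treatment days
S = sensitivity(t, s2=0.5)        # grows in magnitude with t, as expected
```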

  3. Computational fluid dynamics analysis and experimental study of a low measurement error temperature sensor used in climate observation.

    PubMed

    Yang, Jie; Liu, Qingquan; Dai, Wei

    2017-02-01

    To improve the air temperature observation accuracy, a low measurement error temperature sensor is proposed. A computational fluid dynamics (CFD) method is implemented to obtain temperature errors under various environmental conditions. Then, a temperature error correction equation is obtained by fitting the CFD results using a genetic algorithm method. The low measurement error temperature sensor, a naturally ventilated radiation shield, a thermometer screen, and an aspirated temperature measurement platform are characterized in the same environment to conduct the intercomparison. The aspirated platform served as an air temperature reference. The mean temperature errors of the naturally ventilated radiation shield and the thermometer screen are 0.74 °C and 0.37 °C, respectively. In contrast, the mean temperature error of the low measurement error temperature sensor is 0.11 °C. The mean absolute error and the root mean square error between the corrected results and the measured results are 0.008 °C and 0.01 °C, respectively. The correction equation allows the temperature error of the low measurement error temperature sensor to be reduced by approximately 93.8%. The low measurement error temperature sensor proposed in this research may be helpful to provide a relatively accurate air temperature result.
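    A sketch of the correction-equation fitting step; differential_evolution is used here as a generic global optimizer in place of the genetic algorithm named in the abstract, and the CFD-derived error values, the functional form, and the parameter bounds are all assumptions for illustration.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical CFD-derived radiation-induced temperature errors (K) versus
# solar irradiance S (W/m^2) and wind speed u (m/s); values are illustrative only.
S = np.array([200, 400, 600, 800, 1000, 200, 400, 600, 800, 1000], float)
u = np.array([1, 1, 1, 1, 1, 4, 4, 4, 4, 4], float)
err = np.array([0.05, 0.11, 0.18, 0.25, 0.33, 0.02, 0.05, 0.08, 0.11, 0.15])

def model(p, S, u):
    a, b, c = p
    return a * S**b / (1.0 + c * u)          # assumed form of the correction equation

def cost(p):
    return np.mean((model(p, S, u) - err) ** 2)

result = differential_evolution(cost, bounds=[(0, 1e-2), (0.5, 2.0), (0.0, 5.0)], seed=1)
residual = err - model(result.x, S, u)        # error remaining after the correction
```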

  4. Optimized universal color palette design for error diffusion

    NASA Astrophysics Data System (ADS)

    Kolpatzik, Bernd W.; Bouman, Charles A.

    1995-04-01

    Currently, many low-cost computers can only simultaneously display a palette of 256 colors. However, this palette is usually selectable from a very large gamut of available colors. For many applications, this limited palette size imposes a significant constraint on the achievable image quality. We propose a method for designing an optimized universal color palette for use with halftoning methods such as error diffusion. The advantage of a universal color palette is that it is fixed and therefore allows multiple images to be displayed simultaneously. To design the palette, we employ a new vector quantization method known as sequential scalar quantization (SSQ) to allocate the colors in a visually uniform color space. The SSQ method achieves near-optimal allocation, but may be efficiently implemented using a series of lookup tables. When used with error diffusion, SSQ adds little computational overhead and may be used to minimize the visual error in an opponent color coordinate system. We compare the performance of the optimized algorithm to standard error diffusion by evaluating a visually weighted mean-squared-error measure. Our metric is based on the color difference in CIE L*a*b*, but also accounts for the lowpass characteristic of human contrast sensitivity.
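    A compact sketch of the error-diffusion half of the pipeline: Floyd-Steinberg diffusion onto a fixed palette, using a brute-force nearest-color search in RGB instead of the opponent-space lookup tables of the SSQ design; purely illustrative, with a random image and palette as stand-ins.

```python
import numpy as np

def error_diffuse(img, palette):
    # Floyd-Steinberg error diffusion of an RGB float image onto a fixed palette.
    out = np.zeros_like(img)
    work = img.astype(float).copy()
    h, w, _ = img.shape
    for y in range(h):
        for x in range(w):
            old = work[y, x]
            idx = int(np.argmin(((palette - old) ** 2).sum(axis=1)))  # nearest color
            new = palette[idx]
            out[y, x] = new
            e = old - new                      # quantization error to be diffused
            if x + 1 < w:
                work[y, x + 1] += e * 7 / 16
            if y + 1 < h and x > 0:
                work[y + 1, x - 1] += e * 3 / 16
            if y + 1 < h:
                work[y + 1, x] += e * 5 / 16
            if y + 1 < h and x + 1 < w:
                work[y + 1, x + 1] += e * 1 / 16
    return out

# Usage with hypothetical inputs
rng = np.random.default_rng(0)
image = rng.random((64, 64, 3))
palette = rng.random((256, 3))
halftoned = error_diffuse(image, palette)
```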

  5. Holistic approach for overlay and edge placement error to meet the 5nm technology node requirements

    NASA Astrophysics Data System (ADS)

    Mulkens, Jan; Slachter, Bram; Kubis, Michael; Tel, Wim; Hinnen, Paul; Maslow, Mark; Dillen, Harm; Ma, Eric; Chou, Kevin; Liu, Xuedong; Ren, Weiming; Hu, Xuerang; Wang, Fei; Liu, Kevin

    2018-03-01

    In this paper, we discuss the metrology methods and error budget that describe the edge placement error (EPE). EPE quantifies the pattern fidelity of a device structure made in a multi-patterning scheme. Here the pattern is the result of a sequence of lithography and etching steps, and consequently the contour of the final pattern contains error sources from the different process steps. EPE is computed by combining optical and e-beam metrology data. We show that a high-NA optical scatterometer can be used to densely measure in-device CD and overlay errors. A large-field e-beam system enables massive CD metrology, which is used to characterize the local CD error. The local CD distribution needs to be characterized beyond 6 sigma, which requires a high-throughput e-beam system. We present in this paper the first images of a multi-beam e-beam inspection system. We discuss our holistic patterning optimization approach to understand and minimize the EPE of the final pattern. As a use case, we evaluated a 5-nm logic patterning process based on Self-Aligned Quadruple Patterning (SAQP) using ArF lithography, combined with line-cut exposures using EUV lithography.

  6. Comparison of structural and least-squares lines for estimating geologic relations

    USGS Publications Warehouse

    Williams, G.P.; Troutman, B.M.

    1990-01-01

    Two different goals in fitting straight lines to data are to estimate a "true" linear relation (physical law) and to predict values of the dependent variable with the smallest possible error. Regarding the first goal, a Monte Carlo study indicated that the structural-analysis (SA) method of fitting straight lines to data is superior to the ordinary least-squares (OLS) method for estimating "true" straight-line relations. Number of data points, slope and intercept of the true relation, and variances of the errors associated with the independent (X) and dependent (Y) variables influence the degree of agreement. For example, differences between the two line-fitting methods decrease as error in X becomes small relative to error in Y. Regarding the second goal-predicting the dependent variable-OLS is better than SA. Again, the difference diminishes as X takes on less error relative to Y. With respect to estimation of slope and intercept and prediction of Y, agreement between Monte Carlo results and large-sample theory was very good for sample sizes of 100, and fair to good for sample sizes of 20. The procedures and error measures are illustrated with two geologic examples. © 1990 International Association for Mathematical Geology.
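    A brief numerical illustration of the two fitting goals, assuming equal error variances in X and Y (error-variance ratio λ = 1): the OLS slope is attenuated by error in X, while a structural (Deming-type) fit recovers the underlying relation; the data and parameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
x_true = rng.uniform(0.0, 10.0, n)
y_true = 2.0 + 0.8 * x_true                 # the "true" straight-line relation
x = x_true + rng.normal(0.0, 1.0, n)        # error in the independent variable
y = y_true + rng.normal(0.0, 1.0, n)        # error in the dependent variable

sxx = np.var(x, ddof=1)
syy = np.var(y, ddof=1)
sxy = np.cov(x, y, ddof=1)[0, 1]

b_ols = sxy / sxx                           # attenuated toward zero by error in X

lam = 1.0                                   # ratio of Y to X error variances (assumed known)
b_sa = (syy - lam * sxx + np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy**2)) / (2 * sxy)
```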

  7. Asymmetric vestibular stimulation reveals persistent disruption of motion perception in unilateral vestibular lesions.

    PubMed

    Panichi, R; Faralli, M; Bruni, R; Kiriakarely, A; Occhigrossi, C; Ferraresi, A; Bronstein, A M; Pettorossi, V E

    2017-11-01

    Self-motion perception was studied in patients with unilateral vestibular lesions (UVL) due to acute vestibular neuritis at 1 wk and 4, 8, and 12 mo after the acute episode. We assessed vestibularly mediated self-motion perception by measuring the error in reproducing the position of a remembered visual target at the end of four cycles of asymmetric whole-body rotation. The oscillatory stimulus consists of a slow (0.09 Hz) and a fast (0.38 Hz) half cycle. A large error was present in UVL patients when the slow half cycle was delivered toward the lesion side, but minimal toward the healthy side. This asymmetry diminished over time, but it remained abnormally large at 12 mo. In contrast, vestibulo-ocular reflex responses showed a large direction-dependent error only initially, then they normalized. Normalization also occurred for conventional reflex vestibular measures (caloric tests, subjective visual vertical, and head shaking nystagmus) and for perceptual function during symmetric rotation. Vestibular-related handicap, measured with the Dizziness Handicap Inventory (DHI) at 12 mo correlated with self-motion perception asymmetry but not with abnormalities in vestibulo-ocular function. We conclude that 1) a persistent self-motion perceptual bias is revealed by asymmetric rotation in UVLs despite vestibulo-ocular function becoming symmetric over time, 2) this dissociation is caused by differential perceptual-reflex adaptation to high- and low-frequency rotations when these are combined as with our asymmetric stimulus, 3) the findings imply differential central compensation for vestibuloperceptual and vestibulo-ocular reflex functions, and 4) self-motion perception disruption may mediate long-term vestibular-related handicap in UVL patients. NEW & NOTEWORTHY A novel vestibular stimulus, combining asymmetric slow and fast sinusoidal half cycles, revealed persistent vestibuloperceptual dysfunction in unilateral vestibular lesion (UVL) patients. The compensation of motion perception after UVL was slower than that of vestibulo-ocular reflex. Perceptual but not vestibulo-ocular reflex deficits correlated with dizziness-related handicap. Copyright © 2017 the American Physiological Society.

  8. Challenges and Opportunities of Long-Term Continuous Stream Metabolism Measurements at the National Ecological Observatory Network

    NASA Astrophysics Data System (ADS)

    Goodman, K. J.; Lunch, C. K.; Baxter, C.; Hall, R.; Holtgrieve, G. W.; Roberts, B. J.; Marcarelli, A. M.; Tank, J. L.

    2013-12-01

    Recent advances in dissolved oxygen sensing and modeling have made continuous measurements of whole-stream metabolism relatively easy to make, allowing ecologists to quantify and evaluate stream ecosystem health at expanded temporal and spatial scales. Long-term monitoring of continuous stream metabolism will enable a better understanding of the integrated and complex effects of anthropogenic change (e.g., land-use, climate, atmospheric deposition, invasive species, etc.) on stream ecosystem function. In addition to their value in the particular streams measured, information derived from long-term data will improve the ability to extrapolate from shorter-term data. With the need to better understand drivers and responses of whole-stream metabolism come difficulties in interpreting the results. Long-term trends will encompass physical changes in stream morphology and flow regime (e.g., variable flow conditions and changes in channel structure) combined with changes in biota. Additionally long-term data sets will require an organized database structure, careful quantification of errors and uncertainties, as well as propagation of error as a result of the calculation of metabolism metrics. Parsing of continuous data and the choice of modeling approaches can also have a large influence on results and on error estimation. The two main modeling challenges include 1) obtaining unbiased, low-error daily estimates of gross primary production (GPP) and ecosystem respiration (ER), and 2) interpreting GPP and ER measurements over extended time periods. The National Ecological Observatory Network (NEON), in partnership with academic and government scientists, has begun to tackle several of these challenges as it prepares for the collection and calculation of 30 years of continuous whole-stream metabolism data. NEON is a national-scale research platform that will use consistent procedures and protocols to standardize measurements across the United States, providing long-term, high-quality, open-access data from a connected network to address large-scale change. NEON infrastructure will support 36 aquatic sites across 19 ecoclimatic domains. Sites include core sites, which remain for 30 years, and relocatable sites, which move to capture regional gradients. NEON will measure continuous whole-stream metabolism in conjunction with aquatic, terrestrial and airborne observations, allowing researchers to link stream ecosystem function with landscape and climatic drivers encompassing short to long time periods (i.e., decades).

  9. History of the Shack Hartmann wavefront sensor and its impact in ophthalmic optics

    NASA Astrophysics Data System (ADS)

    Schwiegerling, Jim

    2014-09-01

    The Shack Hartmann wavefront sensor is a technology that was developed at the Optical Sciences Center at the University of Arizona in the late 1960s. It is a robust technique for measuring wavefront error that was originally developed for large telescopes to measure errors induced by atmospheric turbulence. The Shack Hartmann sensor has evolved to become a relatively common non-interferometric metrology tool in a variety of fields. Its broadest impact has been in the area of ophthalmic optics where it is used to measure ocular aberrations. The data the Shack Hartmann sensor provides enables custom LASIK treatments, often enhancing visual acuity beyond normal levels. In addition, the Shack Hartmann data coupled with adaptive optics systems enables unprecedented views of the retina. This paper traces the evolution of the technology from the early use of screen-type tests, to the incorporation of lenslet arrays and finally to one of its modern applications, measuring the human eye.

  10. Non-contact method for characterization of small size thermoelectric modules.

    PubMed

    Manno, Michael; Yang, Bao; Bar-Cohen, Avram

    2015-08-01

    Conventional techniques for characterization of thermoelectric performance require bringing measurement equipment into direct contact with the thermoelectric device, which is increasingly error prone as device size decreases. Therefore, the novel work presented here describes a non-contact technique capable of accurately measuring the maximum ΔT and maximum heat pumping of mini to micro sized thin film thermoelectric coolers. The non-contact characterization method eliminates the measurement errors associated with using thermocouples and traditional heat flux sensors to test small samples and large heat fluxes. Using the non-contact approach, an infrared camera, rather than thermocouples, measures the temperature of the hot and cold sides of the device to determine the device ΔT, and a laser is used to heat the cold side of the thermoelectric module to characterize its heat pumping capacity. As a demonstration of the general applicability of the non-contact characterization technique, testing of a thin film thermoelectric module is presented and the results agree well with those published in the literature.

  11. Measurement of the inertial properties of the Helios F-1 spacecraft

    NASA Technical Reports Server (NTRS)

    Gayman, W. H.

    1975-01-01

    A gravity pendulum method of measuring the lateral moments of inertia of large structures with an error of less than 1% is outlined. The method is based on the fact that, for a physical pendulum with a knife-edge support, the period of oscillation is minimized when the distance from the axis of rotation to the system center of gravity equals the system centroidal radius of gyration. The method is applied to the results of a test procedure in which the Helios F-1 spacecraft was placed in a roll fixture with crossed flexure pivots as elastic constraints, and system oscillation measurements were made with each of a set of added moment-of-inertia increments. Equations of motion are derived with allowance for the effect of the finite pivot radius, and an error analysis is carried out to find the criterion for maximum accuracy in determining the square of the centroidal radius of gyration. The test procedure allows all measurements to be made with the specimen in the upright position.
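    For clarity, the standard compound-pendulum relation that underlies the method can be written out (this restatement follows from elementary mechanics, not from additional detail in the abstract): for a structure of mass m with centroidal radius of gyration k, pivoted a distance d from its center of gravity, the small-oscillation period is

```latex
T(d) = 2\pi \sqrt{\frac{k^{2} + d^{2}}{g\, d}}, \qquad
\frac{\mathrm{d}T}{\mathrm{d}d} = 0 \;\Longrightarrow\; d = k .
```

    Locating the minimum of the measured period as a function of pivot distance therefore gives k directly, and the lateral moment of inertia about the center of gravity follows as I = m k².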

  12. The DiskMass Survey. II. Error Budget

    NASA Astrophysics Data System (ADS)

    Bershady, Matthew A.; Verheijen, Marc A. W.; Westfall, Kyle B.; Andersen, David R.; Swaters, Rob A.; Martinsson, Thomas

    2010-06-01

    We present a performance analysis of the DiskMass Survey. The survey uses collisionless tracers in the form of disk stars to measure the surface density of spiral disks, to provide an absolute calibration of the stellar mass-to-light ratio (Υ*), and to yield robust estimates of the dark-matter halo density profile in the inner regions of galaxies. We find that a disk inclination range of 25°-35° is optimal for our measurements, consistent with our survey design to select nearly face-on galaxies. Uncertainties in disk scale heights are significant, but can be estimated from radial scale lengths to 25% now, and more precisely in the future. We detail the spectroscopic analysis used to derive line-of-sight velocity dispersions, precise at low surface brightness, and accurate in the presence of composite stellar populations. Our methods take full advantage of large-grasp integral-field spectroscopy and an extensive library of observed stars. We show that the baryon-to-total mass fraction (F_bar) is not a well-defined observational quantity because it is coupled to the halo mass model. This remains true even when the disk mass is known and spatially extended rotation curves are available. In contrast, the fraction of the rotation speed supplied by the disk at 2.2 scale lengths (disk maximality) is a robust observational indicator of the baryonic disk contribution to the potential. We construct the error budget for the key quantities: dynamical disk mass surface density (Σ_dyn), disk stellar mass-to-light ratio (Υ*^disk), and disk maximality (F*,max^disk ≡ V*,max^disk / V_c). Random and systematic errors in these quantities for individual galaxies will be ~25%, while survey precision for sample quartiles is reduced to 10%, largely devoid of systematic errors outside of distance uncertainties.

  13. GOSAT CO2 retrieval results using TANSO-CAI aerosol information over East Asia

    NASA Astrophysics Data System (ADS)

    KIM, M.; Kim, W.; Jung, Y.; Lee, S.; Kim, J.; Lee, H.; Boesch, H.; Goo, T. Y.

    2015-12-01

    In the satellite remote sensing of CO2, incorrect aerosol information can induce large errors, as previous studies have suggested. Many factors, such as aerosol type, the wavelength dependence of AOD, aerosol polarization effects, etc., have been the main error sources. Due to these aerosol effects, a large number of retrieved data are screened out in quality control, or retrieval errors tend to increase if they are not screened out, especially in East Asia where aerosol concentrations are fairly high. To reduce these aerosol-induced errors, a CO2 retrieval algorithm using simultaneous TANSO-CAI aerosol information is developed. This algorithm adopts AOD and aerosol type information as a priori information from the CAI aerosol retrieval algorithm. The CO2 retrieval algorithm is based on the optimal estimation method and VLIDORT, a vector discrete ordinate radiative transfer model. The CO2 algorithm, developed with various state vectors to find accurate CO2 concentrations, shows reasonable results when compared with other datasets. This study concentrates on the validation of retrieved results against ground-based TCCON measurements in East Asia and the comparison with previous retrievals from ACOS, NIES, and UoL. Although the retrieved CO2 concentration is lower than previous results by a few ppm, it shows a similar trend and high correlation with previous results. Retrieved data and TCCON measurements are compared at three stations, Tsukuba, Saga and Anmyeondo, in East Asia, with collocation criteria of ±2° in latitude/longitude and ±1 hour of GOSAT passing time. The compared results also show a similar trend with good correlation. Based on the TCCON comparison results, a bias correction equation is calculated and applied to the East Asia data.

  14. Phase-slope and phase measurements of tunable CW-THz radiation with terahertz comb for wide-dynamic-range, high-resolution, distance measurement of optically rough object.

    PubMed

    Yasui, Takeshi; Fujio, Makoto; Yokoyama, Shuko; Araki, Tsutomu

    2014-07-14

    Phase measurement of continuous-wave terahertz (CW-THz) radiation is a potential tool for direct distance and imaging measurement of optically rough objects due to its high robustness to optically rough surfaces. However, the 2π phase ambiguity in the phase measurement of single-frequency CW-THz radiation limits the dynamic range of the measured distance to the order of the wavelength used. In this article, phase-slope measurement of tunable CW-THz radiation with a THz frequency comb was effectively used to extend the dynamic range up to 1.834 m while maintaining an error of a few tens of µm in the distance measurement of an optically rough object. Furthermore, a combination of phase-slope measurement of tunable CW-THz radiation and phase measurement of single-frequency CW-THz radiation reduced the distance error to a few µm within the dynamic range of 1.834 m without any influence from the 2π phase ambiguity. The proposed method will be a powerful tool for the construction and maintenance of large-scale structures covered with optically rough surfaces.
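    A minimal sketch of the phase-slope idea for a round-trip (reflection) geometry: the unwrapped phase grows linearly with frequency, φ(ν) = 4πνL/c, so the slope of phase versus frequency yields the distance without the single-frequency 2π ambiguity. The frequency step, target distance, and noise-free phases below are assumptions chosen so that ordinary unwrapping succeeds; they are not the parameters of the paper.

```python
import numpy as np

c = 299_792_458.0                                        # m/s

true_L = 1.234                                           # m, assumed target distance
freqs = 0.30e12 + 25e6 * np.arange(8)                    # Hz, assumed comb-referenced frequencies
phase = (4 * np.pi * freqs * true_L / c) % (2 * np.pi)   # wrapped measured phases

# Phase-slope distance: fit dphi/dnu after unwrapping (step chosen so increments < pi).
slope = np.polyfit(freqs - freqs[0], np.unwrap(phase), 1)[0]
L_est = slope * c / (4 * np.pi)                          # recovers ~1.234 m
```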

  15. The Pearson-Readhead Survey of Compact Extragalactic Radio Sources from Space. I. The Images

    NASA Astrophysics Data System (ADS)

    Lister, M. L.; Tingay, S. J.; Murphy, D. W.; Piner, B. G.; Jones, D. L.; Preston, R. A.

    2001-06-01

    We present images from a space-VLBI survey using the facilities of the VLBI Space Observatory Programme (VSOP), drawing our sample from the well-studied Pearson-Readhead survey of extragalactic radio sources. Our survey has taken advantage of long space-VLBI baselines and large arrays of ground antennas, such as the Very Long Baseline Array and European VLBI Network, to obtain high-resolution images of 27 active galactic nuclei and to measure the core brightness temperatures of these sources more accurately than is possible from the ground. A detailed analysis of the source properties is given in accompanying papers. We have also performed an extensive series of simulations to investigate the errors in VSOP images caused by the relatively large holes in the (u,v)-plane when sources are observed near the orbit normal direction. We find that while the nominal dynamic range (defined as the ratio of map peak to off-source error) often exceeds 1000:1, the true dynamic range (map peak to on-source error) is only about 30:1 for relatively complex core-jet sources. For sources dominated by a strong point source, this value rises to approximately 100:1. We find the true dynamic range to be a relatively weak function of the difference in position angle (P.A.) between the jet P.A. and u-v coverage major axis P.A. For regions with low signal-to-noise ratios, typically located down the jet away from the core, large errors can occur, causing spurious features in VSOP images that should be interpreted with caution.

  16. Stochastic characterization of phase detection algorithms in phase-shifting interferometry

    DOE PAGES

    Munteanu, Florin

    2016-11-01

    Phase-shifting interferometry (PSI) is the preferred non-contact method for profiling sub-nanometer surfaces. Based on monochromatic light interference, the method computes the surface profile from a set of interferograms collected at separate stepping positions. Errors in the estimated profile are introduced when these positions are not located correctly. In order to cope with this problem, various algorithms that minimize the effects of certain types of stepping errors (linear, sinusoidal, etc.) have been developed. Despite the relatively large number of algorithms suggested in the literature, there is no unified way of characterizing their performance when additional unaccounted random errors are present. Here, we suggest a procedure for quantifying the expected behavior of each algorithm in the presence of independent and identically distributed (i.i.d.) random stepping errors, which can occur in addition to the systematic errors for which the algorithm has been designed. The usefulness of this method derives from the fact that it can guide the selection of the best algorithm for specific measurement situations.
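    The kind of stochastic characterization the paper describes can be mimicked with a short Monte Carlo: take a standard estimator (the classic 4-step algorithm is used here purely as a stand-in for whichever algorithm is being evaluated), add i.i.d. random errors to the nominal phase steps, and record the resulting phase-error statistics; the fringe parameters and error magnitude are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def four_step_phase(i1, i2, i3, i4):
    # Classic 4-step PSI estimator (nominal steps 0, pi/2, pi, 3*pi/2):
    # phi = atan2(I4 - I2, I1 - I3)
    return np.arctan2(i4 - i2, i1 - i3)

def step_error_stats(true_phi=0.7, step_sigma=0.02, n_trials=2000):
    # Monte Carlo over i.i.d. random stepping errors (radians).
    nominal = np.array([0.0, 0.5, 1.0, 1.5]) * np.pi
    errors = np.empty(n_trials)
    for k in range(n_trials):
        steps = nominal + rng.normal(0.0, step_sigma, 4)
        I = 1.0 + 0.8 * np.cos(true_phi + steps)      # fringe intensities
        errors[k] = four_step_phase(*I) - true_phi
    return errors.mean(), errors.std()

bias, spread = step_error_stats()
```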

  17. Association between workarounds and medication administration errors in bar-code-assisted medication administration in hospitals.

    PubMed

    van der Veen, Willem; van den Bemt, Patricia M L A; Wouters, Hans; Bates, David W; Twisk, Jos W R; de Gier, Johan J; Taxis, Katja; Duyvendak, Michiel; Luttikhuis, Karen Oude; Ros, Johannes J W; Vasbinder, Erwin C; Atrafi, Maryam; Brasse, Bjorn; Mangelaars, Iris

    2018-04-01

    To study the association of workarounds with medication administration errors using barcode-assisted medication administration (BCMA), and to determine the frequency and types of workarounds and medication administration errors. A prospective observational study in Dutch hospitals using BCMA to administer medication. Direct observation was used to collect data. The primary outcome measure was the proportion of medication administrations with one or more medication administration errors. The secondary outcome was the frequency and types of workarounds and medication administration errors. Univariate and multivariate multilevel logistic regression analyses were used to assess the association between workarounds and medication administration errors. Descriptive statistics were used for the secondary outcomes. We included 5793 medication administrations for 1230 inpatients. Workarounds were associated with medication administration errors (adjusted odds ratio 3.06 [95% CI: 2.49-3.78]). Most commonly, procedural workarounds were observed, such as not scanning at all (36%), not scanning patients because they did not wear a wristband (28%), incorrect medication scanning, multiple medication scanning, and ignoring alert signals (11%). Common types of medication administration errors were omissions (78%), administration of non-ordered drugs (8.0%), and wrong doses given (6.0%). Workarounds are associated with medication administration errors in hospitals using BCMA. These data suggest that BCMA needs more post-implementation evaluation if it is to achieve the intended benefits for medication safety. In hospitals using barcode-assisted medication administration, workarounds occurred in 66% of medication administrations and were associated with large numbers of medication administration errors.
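
    For orientation only: a crude (unadjusted) odds ratio behind associations like the one reported above comes from a 2x2 table. The paper's adjusted estimate uses multilevel logistic regression, which this sketch does not reproduce, and the counts below are hypothetical.

        import math

        def odds_ratio_ci(a, b, c, d, z=1.96):
            """OR and Wald 95% CI from a 2x2 table:
            a = errors with workaround,    b = no error with workaround,
            c = errors without workaround, d = no error without workaround."""
            or_ = (a * d) / (b * c)
            se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
            lo = math.exp(math.log(or_) - z * se)
            hi = math.exp(math.log(or_) + z * se)
            return or_, lo, hi

        print(odds_ratio_ci(120, 1800, 60, 3800))   # hypothetical counts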

  18. Advancing the research agenda for diagnostic error reduction.

    PubMed

    Zwaan, Laura; Schiff, Gordon D; Singh, Hardeep

    2013-10-01

    Diagnostic errors remain an underemphasised and understudied area of patient safety research. We briefly summarise the methods that have been used to conduct research on epidemiology, contributing factors and interventions related to diagnostic error and outline directions for future research. Research methods that have studied epidemiology of diagnostic error provide some estimate on diagnostic error rates. However, there appears to be a large variability in the reported rates due to the heterogeneity of definitions and study methods used. Thus, future methods should focus on obtaining more precise estimates in different settings of care. This would lay the foundation for measuring error rates over time to evaluate improvements. Research methods have studied contributing factors for diagnostic error in both naturalistic and experimental settings. Both approaches have revealed important and complementary information. Newer conceptual models from outside healthcare are needed to advance the depth and rigour of analysis of systems and cognitive insights of causes of error. While the literature has suggested many potentially fruitful interventions for reducing diagnostic errors, most have not been systematically evaluated and/or widely implemented in practice. Research is needed to study promising intervention areas such as enhanced patient involvement in diagnosis, improving diagnosis through the use of electronic tools and identification and reduction of specific diagnostic process 'pitfalls' (eg, failure to conduct appropriate diagnostic evaluation of a breast lump after a 'normal' mammogram). The last decade of research on diagnostic error has made promising steps and laid a foundation for more rigorous methods to advance the field.

  19. Verifiable fault tolerance in measurement-based quantum computation

    NASA Astrophysics Data System (ADS)

    Fujii, Keisuke; Hayashi, Masahito

    2017-09-01

    Quantum systems, in general, cannot be simulated efficiently by a classical computer, and hence are useful for solving certain mathematical problems and simulating quantum many-body systems. This also implies, unfortunately, that verification of the output of a quantum system is not trivial, since predicting the output is exponentially hard. A further problem is that quantum systems are very sensitive to noise and thus need error correction. Here, we propose a framework for verification of the output of fault-tolerant quantum computation in a measurement-based model. In contrast to existing analyses of fault tolerance, we do not assume any noise model on the resource state; instead, an arbitrary resource state is tested using only single-qubit measurements to verify whether or not the output of measurement-based quantum computation on it is correct. Verifiability is provided by a constant-time repetition of the original measurement-based quantum computation in appropriate measurement bases. Since full characterization of quantum noise is exponentially hard for large-scale quantum computing systems, our framework provides an efficient way to practically verify experimental quantum error correction.

  20. RECOVERY OF LARGE ANGULAR SCALE CMB POLARIZATION FOR INSTRUMENTS EMPLOYING VARIABLE-DELAY POLARIZATION MODULATORS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, N. J.; Marriage, T. A.; Appel, J. W.

    2016-02-20

    Variable-delay Polarization Modulators (VPMs) are currently being implemented in experiments designed to measure the polarization of the cosmic microwave background on large angular scales because of their capability for providing rapid, front-end polarization modulation and control over systematic errors. Despite the advantages provided by the VPM, it is important to identify and mitigate any time-varying effects that leak into the synchronously modulated component of the signal. In this paper, the effect of emission from a 300 K VPM on the system performance is considered and addressed. Though instrument design can greatly reduce the influence of modulated VPM emission, some residual modulated signal is expected. VPM emission is treated in the presence of rotational misalignments and temperature variation. Simulations of time-ordered data are used to evaluate the effect of these residual errors on the power spectrum. The analysis and modeling in this paper guides experimentalists on the critical aspects of observations using VPMs as front-end modulators. By implementing the characterizations and controls as described, front-end VPM modulation can be very powerful for mitigating 1/f noise in large angular scale polarimetric surveys. None of the systematic errors studied fundamentally limit the detection and characterization of B-modes on large scales for a tensor-to-scalar ratio of r = 0.01. Indeed, r < 0.01 is achievable with commensurately improved characterizations and controls.

  1. Long-term continuous acoustical suspended-sediment measurements in rivers - Theory, application, bias, and error

    USGS Publications Warehouse

    Topping, David J.; Wright, Scott A.

    2016-05-04

    It is commonly recognized that suspended-sediment concentrations in rivers can change rapidly in time and independently of water discharge during important sediment-transporting events (for example, during floods); thus, suspended-sediment measurements at closely spaced time intervals are necessary to characterize suspended-sediment loads. Because the manual collection of sufficient numbers of suspended-sediment samples required to characterize this variability is often time and cost prohibitive, several “surrogate” techniques have been developed for in situ measurements of properties related to suspended-sediment characteristics (for example, turbidity, laser-diffraction, acoustics). Herein, we present a new physically based method for the simultaneous measurement of suspended-silt-and-clay concentration, suspended-sand concentration, and suspended-sand median grain size in rivers, using multi-frequency arrays of single-frequency side-looking acoustic-Doppler profilers. The method is strongly grounded in the extensive scientific literature on the incoherent scattering of sound by random suspensions of small particles. In particular, the method takes advantage of theory that relates acoustic frequency, acoustic attenuation, acoustic backscatter, suspended-sediment concentration, and suspended-sediment grain-size distribution. We develop the theory and methods, and demonstrate the application of the method at six study sites on the Colorado River and Rio Grande, where large numbers of suspended-sediment samples have been collected concurrently with acoustic attenuation and backscatter measurements over many years. The method produces acoustical measurements of suspended-silt-and-clay and suspended-sand concentration (in units of mg/L), and acoustical measurements of suspended-sand median grain size (in units of mm) that are generally in good to excellent agreement with concurrent physical measurements of these quantities in the river cross sections at these sites. In addition, detailed, step-by-step procedures are presented for the general river application of the method. Quantification of errors in sediment-transport measurements made using this acoustical method is essential if the measurements are to be used effectively, for example, to evaluate uncertainty in long-term sediment loads and budgets. Several types of error analyses are presented to evaluate (1) the stability of acoustical calibrations over time, (2) the effect of neglecting backscatter from silt and clay, (3) the bias arising from changes in sand grain size, (4) the time-varying error in the method, and (5) the influence of nonrandom processes on error. Results indicate that (1) acoustical calibrations can be stable for long durations (multiple years), (2) neglecting backscatter from silt and clay can result in unacceptably high bias, (3) two frequencies are likely required to obtain sand-concentration measurements that are unbiased by changes in grain size, depending on site-specific conditions and acoustic frequency, (4) relative errors in silt-and-clay- and sand-concentration measurements decrease substantially as concentration increases, and (5) nonrandom errors may arise from slow changes in the spatial structure of suspended sediment that affect the relations between concentration in the acoustically ensonified part of the cross section and concentration in the entire river cross section.
Taken together, the error analyses indicate that the two-frequency method produces unbiased measurements of suspended-silt-and-clay and sand concentration, with errors that are similar to, or larger than, those associated with conventional sampling methods.
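
    The inversion itself reduces, in outline, to two calibrated relations: sediment attenuation is nearly proportional to silt-and-clay concentration, and backscatter (after attenuation correction) is log-linear in sand concentration. A schematic sketch with invented calibration constants (the real ones are site- and frequency-specific):

        K_ATTEN = 0.0076       # dB/m per mg/L of silt and clay (hypothetical)
        B0, B1 = 58.0, 10.0    # backscatter(dB) = B0 + B1*log10(C_sand) (hypothetical)

        def silt_clay_from_attenuation(alpha_db_per_m):
            """Invert sediment attenuation (dB/m) to silt-and-clay conc. (mg/L)."""
            return alpha_db_per_m / K_ATTEN

        def sand_from_backscatter(backscatter_db):
            """Invert the log-linear backscatter calibration to sand conc. (mg/L)."""
            return 10 ** ((backscatter_db - B0) / B1)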

  2. Intervertebral anticollision constraints improve out-of-plane translation accuracy of a single-plane fluoroscopy-to-CT registration method for measuring spinal motion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Cheng-Chung; Tsai, Tsung-Yuan; Hsu, Shih-Jung

    2013-03-15

    Purpose: The study aimed to propose a new single-plane fluoroscopy-to-CT registration method integrated with intervertebral anticollision constraints for measuring three-dimensional (3D) intervertebral kinematics of the spine; and to evaluate the performance of the method without anticollision and with three variations of the anticollision constraints via an in vitro experiment. Methods: The proposed fluoroscopy-to-CT registration approach, called the weighted edge-matching with anticollision (WEMAC) method, was based on the integration of geometrical anticollision constraints for adjacent vertebrae and the weighted edge-matching score (WEMS) method that matched the digitally reconstructed radiographs of the CT models of the vertebrae and the measured single-plane fluoroscopy images. Three variations of the anticollision constraints, namely, T-DOF, R-DOF, and A-DOF methods, were proposed. An in vitro experiment using four porcine cervical spines in different postures was performed to evaluate the performance of the WEMS and the WEMAC methods. Results: The WEMS method gave high precision and small bias in all components for both vertebral pose and intervertebral pose measurements, except for relatively large errors for the out-of-plane translation component. The WEMAC method successfully reduced the out-of-plane translation errors for intervertebral kinematic measurements while keeping the measurement accuracies for the other five degrees of freedom (DOF) more or less unaltered. The means (standard deviations) of the out-of-plane translational errors were less than -0.5 (0.6) and -0.3 (0.8) mm for the T-DOF method and the R-DOF method, respectively. Conclusions: The proposed single-plane fluoroscopy-to-CT registration method reduced the out-of-plane translation errors for intervertebral kinematic measurements while keeping the measurement accuracies for the other five DOF more or less unaltered. With the submillimeter and subdegree accuracy, the WEMAC method was considered accurate for measuring 3D intervertebral kinematics during various functional activities for research and clinical applications.

  3. Evaluation of Collision Cross Section Calibrants for Structural Analysis of Lipids by Traveling Wave Ion Mobility-Mass Spectrometry

    PubMed Central

    2016-01-01

    Collision cross section (CCS) measurement of lipids using traveling wave ion mobility-mass spectrometry (TWIM-MS) is of high interest to the lipidomics field. However, currently available calibrants for CCS measurement using TWIM are predominantly peptides that display quite different physical properties and gas-phase conformations from lipids, which could lead to large CCS calibration errors for lipids. Here we report the direct CCS measurement of a series of phosphatidylcholines (PCs) and phosphatidylethanolamines (PEs) in nitrogen using a drift tube ion mobility (DTIM) instrument and an evaluation of the accuracy and reproducibility of PCs and PEs as CCS calibrants for phospholipids against different classes of calibrants, including polyalanine (PolyAla), tetraalkylammonium salts (TAA), and hexakis(fluoroalkoxy)phosphazines (HFAP), in both positive and negative modes in TWIM-MS analysis. We demonstrate that structurally mismatched calibrants lead to larger errors in calibrated CCS values while the structurally matched calibrants, PCs and PEs, gave highly accurate and reproducible CCS values at different traveling wave parameters. Using the lipid calibrants, the majority of the CCS values of several classes of phospholipids measured by TWIM are within 2% error of the CCS values measured by DTIM. The development of phospholipid CCS calibrants will enable high-accuracy structural studies of lipids and add an additional level of validation in the assignment of identifications in untargeted lipidomics experiments. PMID:27321977
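
    The TWIM calibration the authors evaluate is commonly implemented as a power-law fit in charge- and reduced-mass-normalized coordinates; the sketch below follows that common convention, not necessarily the exact workflow of this paper:

        import numpy as np

        M_GAS = 28.0134   # nitrogen drift gas mass (Da)

        def reduced_ccs(ccs, mz, z):
            """CCS' = CCS * sqrt(mu) / z, with mu the ion-gas reduced mass."""
            m_ion = mz * z
            mu = m_ion * M_GAS / (m_ion + M_GAS)
            return ccs * np.sqrt(mu) / z

        def fit_calibration(t_drift, ccs, mz, z):
            """Fit ln(CCS') = ln(A) + B*ln(t_d) to calibrant data; returns (A, B)."""
            y = np.log(reduced_ccs(np.asarray(ccs), np.asarray(mz), np.asarray(z)))
            B, lnA = np.polyfit(np.log(t_drift), y, 1)
            return np.exp(lnA), B

        def predict_ccs(t_drift, mz, z, A, B):
            """Apply the calibration to an unknown: CCS = A * t_d**B * z / sqrt(mu)."""
            m_ion = mz * z
            mu = m_ion * M_GAS / (m_ion + M_GAS)
            return A * t_drift ** B * z / np.sqrt(mu)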

  4. Gauging Through the Crowd: A Crowd-Sourcing Approach to Urban Rainfall Measurement and Storm Water Modeling Implications

    NASA Astrophysics Data System (ADS)

    Yang, Pan; Ng, Tze Ling

    2017-11-01

    Accurate rainfall measurement at high spatial and temporal resolutions is critical for the modeling and management of urban storm water. In this study, we conduct computer simulation experiments to test the potential of a crowd-sourcing approach, where smartphones, surveillance cameras, and other devices act as precipitation sensors, as an alternative to the traditional approach of using rain gauges to monitor urban rainfall. The crowd-sourcing approach is promising as it has the potential to provide high-density measurements, albeit with relatively large individual errors. We explore the potential of this approach for urban rainfall monitoring and the subsequent implications for storm water modeling through a series of simulation experiments involving synthetically generated crowd-sourced rainfall data and a storm water model. The results show that even under conservative assumptions, crowd-sourced rainfall data lead to more accurate modeling of storm water flows as compared to rain gauge data. We observe the relative superiority of the crowd-sourcing approach to vary depending on crowd participation rate, measurement accuracy, drainage area, choice of performance statistic, and crowd-sourced observation type. A possible reason for our findings is the differences between the error structures of crowd-sourced and rain gauge rainfall fields resulting from the differences between the errors and densities of the raw measurement data underlying the two field types.
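
    The trade-off at the heart of the study (many noisy observations versus few accurate ones) can be illustrated with a toy Monte-Carlo experiment; all numbers here are arbitrary:

        import numpy as np

        rng = np.random.default_rng(1)

        # Synthetic "true" rainfall field on a 100 x 100 grid (mm/h)
        x, y = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))
        true = 10.0 * np.exp(-((x - 0.6) ** 2 + (y - 0.4) ** 2) / 0.05)

        def areal_rmse(n_sensors, rel_err, n_trials=500):
            """RMS error of the areal-mean estimate from n_sensors random
            point sensors with multiplicative error of std rel_err."""
            errs = []
            for _ in range(n_trials):
                i = rng.integers(0, 100, n_sensors)
                j = rng.integers(0, 100, n_sensors)
                obs = true[i, j] * (1 + rng.normal(0, rel_err, n_sensors))
                errs.append(obs.mean() - true.mean())
            return float(np.sqrt(np.mean(np.square(errs))))

        print("2 accurate gauges:  ", areal_rmse(2, 0.05))
        print("200 noisy crowd obs:", areal_rmse(200, 0.50))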

  5. Research on Strain Measurements of Core Positions for the Chinese Space Station.

    PubMed

    Shen, Jingshi; Zeng, Xiaodong; Luo, Yuxiang; Cao, Changqing; Wang, Ting

    2018-06-05

    The Chinese space station is designed to carry out manned spaceflight, space science research, and so on. In service, it is a common operation to inject gas into the hull, which produces strain in the bulkhead. Accurate measurement of bulkhead strain is therefore one of the key tasks in evaluating the health condition of the space station. This is the first work to perform strain detection on the Chinese space station bulkhead using fiber Bragg gratings. During the measurements, a resistance strain gauge is used as the strain standard. The measurement error of the fiber optic sensor in the circumferential direction is very small, less than 4.52 με. However, the error in the axial direction is much larger, up to 28.93 με. Because the measurement error of bare fiber in the axial direction is very small, the transverse effect of the substrate of the fiber optic sensor likely plays a role. The theoretical and experimental values of the transverse effect coefficient are fairly consistent, at 0.0271 and 0.0287, respectively. After the transverse effect is compensated, the strain deviation in the axial direction is smaller than 2.04 με. These results are of great significance for real-time health assessment of the space station bulkhead.
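
    A plausible first-order form of the compensation (the abstract reports the coefficient but not the exact formula, so the expression below is an assumption):

        K_T = 0.0287   # experimentally determined transverse-effect coefficient

        def compensate_axial(eps_axial_meas, eps_transverse, k=K_T):
            """Remove transverse-strain pickup from the packaged FBG's
            axial reading (all strains in microstrain)."""
            return eps_axial_meas - k * eps_transverse

        # e.g. a large circumferential strain leaking into the axial channel
        print(compensate_axial(150.0, 980.0))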

  6. HMI Measured Doppler Velocity Contamination from the SDO Orbit Velocity

    NASA Astrophysics Data System (ADS)

    Scherrer, Phil; HMI Team

    2016-10-01

    The Problem: The SDO satellite is in an inclined geosynchronous orbit which allows uninterrupted views of the Sun nearly 98% of the time. This orbit has a velocity of about 3,500 m/s, with the solar line-of-sight component varying with time of day and time of year. Due to remaining calibration errors in the wavelength filters, the orbit velocity leaks into the line-of-sight solar velocity and magnetic field measurements. Since the same model of the filter is used in the Milne-Eddington inversions used to generate the vector magnetic field data, the orbit velocity also contaminates the vector magnetic products. These errors contribute 12-hour and 24-hour variations to most HMI data products and are known as the 24-hour problem. Early in the mission we made a patch to the calibration that corrected the disk mean velocity. The resulting LOS velocity has been used for helioseismology with no apparent problems. The velocity signal has about a 1% scale error that varies with time of day and with velocity, i.e., it is non-linear for large velocities. This causes leakage into the LOS field (which is simply the difference between the velocities measured in LCP and RCP, rescaled for the Zeeman splitting). This poster reviews the measurement process, shows examples of the problem, and describes recent work at resolving the issues. Since the errors are in the filter characterization, it makes most sense to work first on the LOS data products, since they, unlike the vector products, are directly and simply related to the filter profile without assumptions on the solar atmosphere, filling factors, etc. The poster is therefore limited to better understanding the filter profiles, which vary across the field, with time of day, and over the year, producing velocity errors of up to a percent and LOS field estimates with errors of up to a few percent (relative to the standard LOS magnetograph method based on measuring the difference in wavelength of the line centroids in LCP and RCP light). We expect that when better filter profiles are available it will be possible to generate improved vector field data products as well.

  7. Theoretical evaluation of errors in aerosol optical depth retrievals from ground-based direct-sun measurements due to circumsolar and related effects

    NASA Astrophysics Data System (ADS)

    Kocifaj, Miroslav; Gueymard, Christian A.

    2011-02-01

    Aerosol optical depth (AOD) has a crucial importance for estimating the optical properties of the atmosphere, and is constantly present in optical models of aerosol systems. Any error in aerosol optical depth (∂AOD) has direct and indirect consequences. On the one hand, such errors affect the accuracy of radiative transfer models (thus implying, e.g., potential errors in the evaluation of radiative forcing by aerosols). Additionally, any error in determining AOD is reflected in the retrieved microphysical properties of aerosol particles, which might therefore be inaccurate. Three distinct effects (circumsolar radiation, optical mass, and solar disk's brightness distribution) affecting ∂AOD are qualified and quantified in the present study. The contribution of circumsolar (CS) radiation to the measured flux density of direct solar radiation has received more attention than the two other effects in the literature. It varies rapidly with meteorological conditions and size distribution of the aerosol particles, but also with instrument field of view. Numerical simulations of the three effects just mentioned were conducted, assuming otherwise "perfect" experimental conditions. The results show that CS is responsible for the largest error in AOD, while the effect of brightness distribution (BD) has only a negligible impact. The optical mass (OM) effect yields negligible errors in AOD generally, but noticeable errors for low sun (within 10° of the horizon). In general, the OM and BD effects result in negative errors in AOD (i.e. the true AOD is smaller than that of the experimental determination), conversely to CS. Although the rapid increase in optical mass at large zenith angles can change the sign of ∂AOD, the CS contribution frequently plays the leading role in ∂AOD. To maximize the accuracy in AOD retrievals, the CS effect should not be ignored. In practice, however, this effect can be difficult to evaluate correctly unless the instantaneous aerosols size distribution is known from, e.g., inversion techniques.
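
    The sign and size of the circumsolar error follow directly from the Beer-Lambert retrieval: if a fraction f_cs of circumsolar light contaminates the direct-beam signal, the retrieved AOD is biased low by ln(1 + f_cs)/m. A minimal sketch (other optical-depth components assumed already removed):

        import numpy as np

        def aod_from_direct_sun(V, V0, m):
            """Beer-Lambert retrieval: tau = ln(V0/V) / m for signal V,
            calibration constant V0 and optical air mass m."""
            return np.log(V0 / V) / m

        def circumsolar_aod_error(f_cs, m):
            """Retrieved minus true AOD when circumsolar light adds a
            fraction f_cs to the direct beam: -ln(1 + f_cs) / m."""
            return -np.log1p(f_cs) / m

        print(circumsolar_aod_error(0.02, 2.0))   # ~ -0.0099: retrieval too low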

  8. Nonlinear unbiased minimum-variance filter for Mars entry autonomous navigation under large uncertainties and unknown measurement bias.

    PubMed

    Xiao, Mengli; Zhang, Yongbo; Fu, Huimin; Wang, Zhihua

    2018-05-01

    A high-precision navigation algorithm is essential for the future Mars pinpoint landing mission. The unknown inputs caused by large uncertainties in atmospheric density and aerodynamic coefficients, as well as unknown measurement biases, may cause large estimation errors in conventional Kalman filters. This paper proposes a derivative-free version of the nonlinear unbiased minimum-variance filter for Mars entry navigation. The filter solves this problem by estimating the state and the unknown measurement biases simultaneously, in a derivative-free manner, yielding a high-precision algorithm for Mars entry navigation. IMU/radio-beacon integrated navigation is introduced in the simulation, and the results show that, with or without radio blackout, the proposed filter achieves accurate state estimation, much better than the conventional unscented Kalman filter, demonstrating its suitability for high-precision Mars entry navigation.

  9. Making electronic prescribing alerts more effective: scenario-based experimental study in junior doctors

    PubMed Central

    Shah, Priya; Wyatt, Jeremy C; Makubate, Boikanyo; Cross, Frank W

    2011-01-01

    Objective Expert authorities recommend clinical decision support systems to reduce prescribing error rates, yet large numbers of insignificant on-screen alerts presented in modal dialog boxes persistently interrupt clinicians, limiting the effectiveness of these systems. This study compared the impact of modal and non-modal electronic (e-) prescribing alerts on prescribing error rates, to help inform the design of clinical decision support systems. Design A randomized study of 24 junior doctors each performing 30 simulated prescribing tasks in random order with a prototype e-prescribing system. Using a within-participant design, doctors were randomized to be shown one of three types of e-prescribing alert (modal, non-modal, no alert) during each prescribing task. Measurements The main outcome measure was prescribing error rate. Structured interviews were performed to elicit participants' preferences for the prescribing alerts and their views on clinical decision support systems. Results Participants exposed to modal alerts were 11.6 times less likely to make a prescribing error than those not shown an alert (OR 11.56, 95% CI 6.00 to 22.26). Those shown a non-modal alert were 3.2 times less likely to make a prescribing error (OR 3.18, 95% CI 1.91 to 5.30) than those not shown an alert. The error rate with non-modal alerts was 3.6 times higher than with modal alerts (95% CI 1.88 to 7.04). Conclusions Both kinds of e-prescribing alerts significantly reduced prescribing error rates, but modal alerts were over three times more effective than non-modal alerts. This study provides new evidence about the relative effects of modal and non-modal alerts on prescribing outcomes. PMID:21836158

  10. Fast Measurement and Reconstruction of Large Workpieces with Freeform Surfaces by Combining Local Scanning and Global Position Data

    PubMed Central

    Chen, Zhe; Zhang, Fumin; Qu, Xinghua; Liang, Baoqiu

    2015-01-01

    In this paper, we propose a new approach for the measurement and reconstruction of large workpieces with freeform surfaces. The system consists of a handheld laser scanning sensor and a position sensor. The laser scanning sensor is used to acquire the surface and geometry information, and the position sensor is utilized to unify the scanning sensor data into a global coordinate system. The measurement process includes data collection, multi-sensor data fusion and surface reconstruction. With the multi-sensor data fusion, errors accumulated during the image alignment and registration process are minimized, and the measuring precision is significantly improved. After the dense, accurate acquisition of the three-dimensional (3-D) coordinates, the surface is reconstructed using a commercial software package, based on the Non-Uniform Rational B-Splines (NURBS) surface. The system has been evaluated, both qualitatively and quantitatively, using reference measurements provided by a commercial laser scanning sensor. The method has been applied to the reconstruction of a large gear rim, with an accuracy of up to 0.0963 mm. The results prove that this new combined method is promising for measuring and reconstructing large-scale objects with complex surface geometry. Compared with reported methods of large-scale shape measurement, it offers high freedom of motion, high precision and high measurement speed over a wide measurement range. PMID:26091396
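
    The key fusion step, expressing each local scan in the global frame defined by the position sensor, is a rigid transformation; a minimal sketch with an assumed pose convention:

        import numpy as np

        def scan_to_global(points_local, R, t):
            """Map an (N, 3) array of scanner-frame points into the global
            frame, given the scan pose (R: 3x3 rotation, t: translation)
            reported by the position sensor."""
            return points_local @ R.T + t

        # Merging several scans is then a simple concatenation:
        # cloud = np.vstack([scan_to_global(p, R, t) for p, R, t in scans])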

  11. Slope Error Measurement Tool for Solar Parabolic Trough Collectors: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stynes, J. K.; Ihas, B.

    2012-04-01

    The National Renewable Energy Laboratory (NREL) has developed an optical measurement tool for parabolic solar collectors that measures the combined errors due to absorber misalignment and reflector slope error. The combined absorber alignment and reflector slope errors are measured using a digital camera to photograph the reflected image of the absorber in the collector. Previous work using the image of the reflection of the absorber finds the reflector slope errors from the reflection of the absorber and an independent measurement of the absorber location. The accuracy of the reflector slope error measurement is thus dependent on the accuracy of the absorber location measurement. By measuring the combined reflector-absorber errors, the uncertainty in the absorber location measurement is eliminated. The related performance merit, the intercept factor, depends on the combined effects of the absorber alignment and reflector slope errors. Measuring the combined effect provides a simpler measurement and a more accurate input to the intercept factor estimate. The minimal equipment and setup required for this measurement technique make it ideal for field measurements.

  12. High Sensitivity Gravity Measurements in the Adverse Environment of Oil Wells

    NASA Astrophysics Data System (ADS)

    Pfutzner, Harold

    2014-03-01

    Bulk density is a primary measurement within oil and gas reservoirs and is the basis of most reserves calculations by oil companies. The measurement is performed with a gamma-ray source and two scintillation gamma-ray detectors from within newly drilled exploration and production wells. This nuclear density measurement, while very precise is also very shallow and is therefore susceptible to errors due to any alteration of the formation and fluids in the vicinity of the borehole caused by the drilling process. Measuring acceleration due to gravity along a well provides a direct measure of bulk density with a very large depth of investigation that makes it practically immune to errors from near-borehole effects. Advances in gravity sensors and associated mechanics and electronics provide an opportunity for routine borehole gravity measurements with comparable density precision to the nuclear density measurement and with sufficient ruggedness to survive the rough handling and high temperatures experienced in oil well logging. We will describe a borehole gravity meter and its use under very realistic conditions in an oil well in Saudi Arabia. The density measurements will be presented. Alberto Marsala (2), Paul Wanjau (1), Olivier Moyal (1), and Justin Mlcak (1); (1) Schlumberger, (2) Saudi Aramco.
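
    The density interpretation rests on the standard borehole-gravity relation rho = (F - dg/dz) / (4*pi*G), with F the free-air gradient; a worked sketch:

        import math

        G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
        F_FREE_AIR = 0.3086e-5   # free-air gradient, (m/s^2) per m

        def interval_density(delta_g_mgal, delta_z_m):
            """Apparent bulk density (kg/m^3) from a gravity difference
            (mGal) measured over a vertical station spacing (m)."""
            dgdz = delta_g_mgal * 1e-5 / delta_z_m   # mGal/m -> (m/s^2)/m
            return (F_FREE_AIR - dgdz) / (4 * math.pi * G)

        print(interval_density(0.99, 10.0))   # ~ 2500 kg/m^3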

  13. Efficient preparation of large-block-code ancilla states for fault-tolerant quantum computation

    NASA Astrophysics Data System (ADS)

    Zheng, Yi-Cong; Lai, Ching-Yi; Brun, Todd A.

    2018-03-01

    Fault-tolerant quantum computation (FTQC) schemes that use multiqubit large block codes can potentially reduce the resource overhead to a great extent. A major obstacle is the requirement for a large number of clean ancilla states of different types without correlated errors inside each block. These ancilla states are usually logical stabilizer states of the data-code blocks, which are generally difficult to prepare if the code size is large. Previously, we proposed an ancilla distillation protocol for Calderbank-Shor-Steane (CSS) codes based on classical error-correcting codes. It was assumed that the quantum gates in the distillation circuit were perfect; however, in reality, noisy quantum gates may introduce correlated errors that are not treatable by the protocol. In this paper, we show that additional postselection by another classical error-detecting code can be applied to remove almost all correlated errors. Consequently, the revised protocol is fully fault tolerant and capable of preparing a large set of stabilizer states sufficient for FTQC using large block codes. At the same time, the yield rate can be boosted from O(t^-2) to O(1) in practice for an [[n, k, d = 2t+1]] code.

  14. Mismeasurement and the resonance of strong confounders: correlated errors.

    PubMed

    Marshall, J R; Hastrup, J L; Ross, J S

    1999-07-01

    Confounding in epidemiology, and the limits of standard methods of control for an imperfectly measured confounder, have been understood for some time. However, most treatments of this problem are based on the assumption that errors of measurement in confounding and confounded variables are independent. This paper considers the situation in which a strong risk factor (confounder) and an inconsequential but suspected risk factor (confounded) are each measured with errors that are correlated; the situation appears especially likely to occur in the field of nutritional epidemiology. Error correlation appears to add little to measurement error as a source of bias in estimating the impact of a strong risk factor: it can add to, diminish, or reverse the bias induced by measurement error in estimating the impact of the inconsequential risk factor. Correlation of measurement errors can add to the difficulty involved in evaluating structures in which confounding and measurement error are present. In its presence, observed correlations among risk factors can be greater than, less than, or even opposite to the true correlations. Interpretation of multivariate epidemiologic structures in which confounding is likely requires evaluation of measurement error structures, including correlations among measurement errors.
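
    The core phenomenon is easy to reproduce numerically. In the sketch below (all coefficients invented), the outcome depends only on the strong factor, yet a shared error component between the two observed covariates gives the inconsequential factor a spurious, here even sign-reversed, regression coefficient:

        import numpy as np

        rng = np.random.default_rng(2)
        n = 100_000

        strong = rng.normal(size=n)               # true strong risk factor
        weak = 0.1 * strong + rng.normal(size=n)  # correlated, inconsequential
        outcome = strong + rng.normal(size=n)     # only 'strong' matters

        shared = rng.normal(size=n)               # common error source
        strong_obs = strong + 0.8 * rng.normal(size=n) + 0.6 * shared
        weak_obs = weak + 0.8 * rng.normal(size=n) + 0.6 * shared

        X = np.column_stack([np.ones(n), strong_obs, weak_obs])
        beta = np.linalg.lstsq(X, outcome, rcond=None)[0]
        print(beta)   # strong coefficient attenuated; weak coefficient nonzero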

  15. Gaussian pre-filtering for uncertainty minimization in digital image correlation using numerically-designed speckle patterns

    NASA Astrophysics Data System (ADS)

    Mazzoleni, Paolo; Matta, Fabio; Zappa, Emanuele; Sutton, Michael A.; Cigada, Alfredo

    2015-03-01

    This paper discusses the effect of pre-processing image blurring on the uncertainty of two-dimensional digital image correlation (DIC) measurements for the specific case of numerically-designed speckle patterns having particles with well-defined and consistent shape, size and spacing. Such patterns are more suitable for large measurement surfaces on large-scale specimens than traditional spray-painted random patterns without well-defined particles. The methodology consists of numerical simulations where Gaussian digital filters with varying standard deviation are applied to a reference speckle pattern. To simplify the pattern application process for large areas and increase contrast to reduce measurement uncertainty, the speckle shape, mean size and on-center spacing were selected to be representative of numerically-designed patterns that can be applied on large surfaces through different techniques (e.g., spray-painting through stencils). Such 'designer patterns' are characterized by well-defined regions of non-zero frequency content and non-zero peaks, and are fundamentally different from typical spray-painted patterns whose frequency content exhibits near-zero peaks. The effect of blurring filters is examined for constant, linear, quadratic and cubic displacement fields. Maximum strains between ±250 and ±20,000 με are simulated, thus covering a relevant range for structural materials subjected to service and ultimate stresses. The robustness of the simulation procedure is verified experimentally using a physical speckle pattern subjected to constant displacements. The stability of the relation between standard deviation of the Gaussian filter and measurement uncertainty is assessed for linear displacement fields at varying image noise levels, subset size, and frequency content of the speckle pattern. It is shown that bias error as well as measurement uncertainty are minimized through Gaussian pre-filtering. This finding does not apply to typical spray-painted patterns without well-defined particles, for which image blurring is only beneficial in reducing bias errors.
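
    The pre-processing step itself is a single Gaussian low-pass applied identically to both images before correlation; the filter width is then swept to locate the uncertainty minimum. A sketch using SciPy:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def prefilter_pair(ref_img, def_img, sigma):
            """Apply the same Gaussian blur (sigma in pixels) to the
            reference and deformed images before DIC correlation."""
            blur = lambda im: gaussian_filter(im.astype(float), sigma=sigma)
            return blur(ref_img), blur(def_img)

        # Typical usage: sweep sigma and pick the value that minimizes the
        # displacement uncertainty of the correlation, e.g.
        # for sigma in (0.5, 1.0, 1.5, 2.0): ...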

  16. Impact of exposure measurement error in air pollution epidemiology: effect of error type in time-series studies.

    PubMed

    Goldman, Gretchen T; Mulholland, James A; Russell, Armistead G; Strickland, Matthew J; Klein, Mitchel; Waller, Lance A; Tolbert, Paige E

    2011-06-22

    Two distinctly different types of measurement error are Berkson and classical. Impacts of measurement error in epidemiologic studies of ambient air pollution are expected to depend on error type. We characterize measurement error due to instrument imprecision and spatial variability as multiplicative (i.e. additive on the log scale) and model it over a range of error types to assess impacts on risk ratio estimates both on a per measurement unit basis and on a per interquartile range (IQR) basis in a time-series study in Atlanta. Daily measures of twelve ambient air pollutants were analyzed: NO2, NOx, O3, SO2, CO, PM10 mass, PM2.5 mass, and the PM2.5 components sulfate, nitrate, ammonium, elemental carbon and organic carbon. Semivariogram analysis was applied to assess spatial variability. Error due to this spatial variability was added to a reference pollutant time-series on the log scale using Monte Carlo simulations. Each of these time-series was exponentiated and introduced to a Poisson generalized linear model of cardiovascular disease emergency department visits. Measurement error resulted in reduced statistical significance for the risk ratio estimates for all amounts (corresponding to different pollutants) and types of error. When modelled as classical-type error, risk ratios were attenuated, particularly for primary air pollutants, with average attenuation in risk ratios on a per unit of measurement basis ranging from 18% to 92% and on an IQR basis ranging from 18% to 86%. When modelled as Berkson-type error, risk ratios per unit of measurement were biased away from the null hypothesis by 2% to 31%, whereas risk ratios per IQR were attenuated (i.e. biased toward the null) by 5% to 34%. For the modelled CO error amount, a range of error types was simulated and the effects on risk ratio bias and significance were observed. For multiplicative error, both the amount and type of measurement error impact health effect estimates in air pollution epidemiology. By modelling instrument imprecision and spatial variability as different error types, we estimate the direction and magnitude of the effects of error over a range of error types.
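
    The contrast between the two error types can be reproduced with a toy linear analogue of the study design (the paper itself uses multiplicative error and a Poisson model; this simplified sketch only shows why classical error attenuates the per-unit slope while Berkson error does not):

        import numpy as np

        rng = np.random.default_rng(3)
        n, beta, sig = 200_000, 1.0, 0.5

        # Classical: observed z = x + u  ->  slope attenuated by
        # var(x) / (var(x) + var(u)) = 1 / (1 + sig**2) = 0.8
        x = rng.normal(0, 1, n)
        y = beta * x + rng.normal(0, 1, n)
        z = x + rng.normal(0, sig, n)
        print("classical slope:", np.polyfit(z, y, 1)[0])

        # Berkson: true x = z + u with u independent of z  ->  unbiased
        z = rng.normal(0, 1, n)
        x = z + rng.normal(0, sig, n)
        y = beta * x + rng.normal(0, 1, n)
        print("berkson slope:  ", np.polyfit(z, y, 1)[0])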

  17. Quantifying point source emissions with atmospheric inversions and aircraft measurements: the Aliso Canyon natural gas leak as a tracer experiment

    NASA Astrophysics Data System (ADS)

    Gourdji, S.; Yadav, V.; Karion, A.; Mueller, K. L.; Kort, E. A.; Conley, S.; Ryerson, T. B.; Nehrkorn, T.

    2017-12-01

    The ability of atmospheric inverse models to detect, spatially locate and quantify emissions from large point sources in urban domains needs improvement before inversions can be used reliably as carbon monitoring tools. In this study, we use the Aliso Canyon natural gas leak from October 2015 to February 2016 (near Los Angeles, CA) as a natural tracer experiment to assess inversion quality by comparison with published estimates of leak rates calculated using a mass balance approach (Conley et al., 2016). Fourteen dedicated flights were flown in horizontal transects downwind and throughout the duration of the leak to sample CH4 mole fractions and collect meteorological information for use in the mass-balance estimates. The same CH4 observational data were then used here in geostatistical inverse models with no prior assumptions about the leak location or emission rate, and with flux sensitivity matrices generated using the WRF-STILT atmospheric transport model. Transport model errors were assessed by comparing WRF-STILT wind speeds, wind direction and planetary boundary layer (PBL) height to those observed on the plane; the impact of these errors on the inversions, and the optimal inversion setup for reducing their influence, were also explored. WRF-STILT provides a reasonable simulation of true atmospheric conditions on most flight dates, given the complex terrain and known difficulties in simulating atmospheric transport under such conditions. Moreover, even large (>120°) errors in wind direction were found to be tolerable in terms of spatially locating the leak within a 5-km radius of the actual site. Errors in the WRF-STILT wind speed (>50%) and PBL height have more negative impacts on the inversions, with too-high wind speeds (typically corresponding with too-low PBL heights) resulting in overestimated leak rates, and vice versa. Coarser data averaging intervals and the use of observed wind speed errors in the model-data mismatch covariance matrix are shown to help reduce the influence of transport model errors, by averaging out compensating errors and de-weighting the influence of problematic observations. This study helps to enable the integration of aircraft measurements with other tower-based data in larger inverse models that can reliably detect, locate and quantify point source emissions in urban areas.

  18. An examination of the usefulness of repeat testing practices in a large hospital clinical chemistry laboratory.

    PubMed

    Deetz, Carl O; Nolan, Debra K; Scott, Mitchell G

    2012-01-01

    A long-standing practice in clinical laboratories has been to automatically repeat laboratory tests when values trigger automated "repeat rules" in the laboratory information system such as a critical test result. We examined 25,553 repeated laboratory values for 30 common chemistry tests from December 1, 2010, to February 28, 2011, to determine whether this practice is necessary and whether it may be possible to reduce repeat testing to improve efficiency and turnaround time for reporting critical values. An "error" was defined to occur when the difference between the initial and verified values exceeded the College of American Pathologists/Clinical Laboratory Improvement Amendments allowable error limit. The initial values from 2.6% of all repeated tests (668) were errors. Of these 668 errors, only 102 occurred for values within the analytic measurement range. Median delays in reporting critical values owing to repeated testing ranged from 5 (blood gases) to 17 (glucose) minutes.

  19. Systematic Biases in Parameter Estimation of Binary Black-Hole Mergers

    NASA Technical Reports Server (NTRS)

    Littenberg, Tyson B.; Baker, John G.; Buonanno, Alessandra; Kelly, Bernard J.

    2012-01-01

    Parameter estimation of binary-black-hole merger events in gravitational-wave data relies on matched filtering techniques, which, in turn, depend on accurate model waveforms. Here we characterize the systematic biases introduced in measuring astrophysical parameters of binary black holes by applying the currently most accurate effective-one-body templates to simulated data containing non-spinning numerical-relativity waveforms. For advanced ground-based detectors, we find that the systematic biases are well within the statistical error for realistic signal-to-noise ratios (SNR). These biases grow to be comparable to the statistical errors at high signal-to-noise ratios for ground-based instruments (SNR approximately 50) but never dominate the error budget. At the much larger signal-to-noise ratios expected for space-based detectors, these biases will become large compared to the statistical errors but are small enough (at most a few percent in the black-hole masses) that we expect they should not affect broad astrophysical conclusions that may be drawn from the data.

  20. A Monte-Carlo Bayesian framework for urban rainfall error modelling

    NASA Astrophysics Data System (ADS)

    Ochoa Rodriguez, Susana; Wang, Li-Pen; Willems, Patrick; Onof, Christian

    2016-04-01

    Rainfall estimates of the highest possible accuracy and resolution are required for urban hydrological applications, given the small size and fast response which characterise urban catchments. While significant progress has been made in recent years towards meeting rainfall input requirements for urban hydrology (including increasing use of high spatial resolution radar rainfall estimates in combination with point rain gauge records), rainfall estimates will never be perfect and the true rainfall field is, by definition, unknown [1]. Quantifying the residual errors in rainfall estimates is crucial in order to understand their reliability, as well as the impact that their uncertainty may have on subsequent runoff estimates. The quantification of errors in rainfall estimates has been an active topic of research for decades. However, existing rainfall error models have several shortcomings, including the fact that they are limited to describing errors associated with a single data source (i.e. errors associated with rain gauge measurements or radar QPEs alone) and with a single representative error source (e.g. radar-rain gauge differences, spatial and temporal resolution). Moreover, rainfall error models have mostly been developed for and tested at large scales. Studies at urban scales are mostly limited to analyses of the propagation of errors in rain gauge records alone through urban drainage models, and to tests of model sensitivity to uncertainty arising from unmeasured rainfall variability. Only a few radar rainfall error models (originally developed for large scales) have been tested at urban scales [2], and these have been shown to fail to capture well the small-scale storm dynamics, including storm peaks, which are of utmost importance for urban runoff simulations. In this work a Monte-Carlo Bayesian framework for rainfall error modelling at urban scales is introduced, which explicitly accounts for relevant errors (arising from insufficient accuracy and/or resolution) in multiple data sources (in this case radar and rain gauge estimates typically available at present), while at the same time enabling dynamic combination of these data sources (thus not only quantifying uncertainty, but also reducing it). This model generates an ensemble of merged rainfall estimates, which can then be used as input to urban drainage models in order to examine how uncertainties in rainfall estimates propagate to urban runoff estimates. The proposed model is tested using as case study a detailed rainfall and flow dataset, and a carefully verified urban drainage model of a small (~9 km2) pilot catchment in North-East London. The model has been shown to characterise well the residual errors in rainfall data at urban scales (which remain after the merging), leading to improved runoff estimates. In fact, the majority of measured flow peaks are bounded within the uncertainty area produced by the runoff ensembles generated with the ensemble rainfall inputs. REFERENCES: [1] Ciach, G. J. & Krajewski, W. F. (1999). On the estimation of radar rainfall error variance. Advances in Water Resources, 22 (6), 585-595. [2] Rico-Ramirez, M. A., Liguori, S. & Schellart, A. N. A. (2015). Quantifying radar-rainfall uncertainties in urban drainage flow modelling. Journal of Hydrology, 528, 17-28.

  1. Effect of number of probes and their orientation on the calculation of several compressor face distortion descriptors

    NASA Technical Reports Server (NTRS)

    Stoll, F.; Tremback, J. W.; Arnaiz, H. H.

    1979-01-01

    A study was performed to determine the effects of the number and position of total pressure probes on the calculation of five compressor face distortion descriptors. This study used three sets of 320 steady state total pressure measurements that were obtained with a special rotating rake apparatus in wind tunnel tests of a mixed-compression inlet. The inlet was a one third scale model of the inlet on a YF-12 airplane, and it was tested in the wind tunnel at representative flight conditions at Mach numbers above 2.0. The study shows that large errors resulted in the calculation of the distortion descriptors even with a number of probes that were considered adequate in the past. There were errors as large as 30 and -50 percent in several distortion descriptors for a configuration consisting of eight rakes with five equal-area-weighted probes on each rake.

  2. Topographic Map of Pathfinder Landing Site

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Topographic map of the landing site, to a distance of 60 meters from the lander in the LSC coordinate system. The lander is shown schematically in the center; a 2.5-meter-radius circle (black) centered on the camera was not mapped. Gentle relief [root mean square (rms) elevation variation 0.5 m; rms adirectional slope 4°] and the organization of topography into northwest- and northeast-trending ridges about 20 meters apart are apparent. Roughly 30% of the illustrated area is hidden from the camera behind these ridges. Contours (0.2 m interval) and color coding of elevations were generated from a digital terrain model, which was interpolated by kriging from approximately 700 measured points. Angular and parallax point coordinates were measured manually on a large (5 m length) anaglyphic uncontrolled mosaic and used to calculate Cartesian (LSC) coordinates. Errors in azimuth on the order of 1° are therefore likely; elevation errors were minimized by referencing elevations to the local horizon. The uncertainty in range measurements increases quadratically with range. Given a measurement error of 1/2 pixel, the expected precision in range is 0.3 meter at 10 meter range, and 10 meters at 60 meter range. Repeated measurements were made, compared, and edited for consistency to improve the range precision. Systematic errors undoubtedly remain and will be corrected in future maps compiled digitally from geometrically controlled images. Cartographic processing by U.S. Geological Survey.
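
    The quadratic growth of the range error quoted above is easy to check: scaling the 0.3 m precision at 10 m by (r/10)^2 reproduces the roughly 10 m figure at 60 m.

        # Stereo range error grows quadratically with range:
        sigma_10 = 0.3                       # quoted precision at 10 m
        for r in (10, 30, 60):
            print(r, "m ->", round(sigma_10 * (r / 10.0) ** 2, 1), "m")
        # 60 m -> 10.8 m, consistent with the caption's ~10 m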

    NOTE: original caption as published in Science Magazine

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is a division of the California Institute of Technology (Caltech).

  3. Adjustment of QT dispersion assessed from 12 lead electrocardiograms for different numbers of analysed electrocardiographic leads: comparison of stability of different methods.

    PubMed Central

    Hnatkova, K; Malik, M; Kautzner, J; Gang, Y; Camm, A J

    1994-01-01

    OBJECTIVE--Normal electrocardiographic recordings were analysed to establish the influence of measurement of different numbers of electrocardiographic leads on the results of different formulas expressing QT dispersion and the effects of adjustment of QT dispersion obtained from a subset of an electrocardiogram to approximate to the true QT dispersion obtained from a complete electrocardiogram. SUBJECTS AND METHODS--Resting 12 lead electrocardiograms of 27 healthy people were investigated. In each lead, the QT interval was measured with a digitising board and QT dispersion was evaluated by three formulas: (A) the difference between the longest and the shortest QT interval among all leads; (B) the difference between the second longest and the second shortest QT interval; (C) SD of QT intervals in different leads. For each formula, the "true" dispersion was assessed from all measurable leads and then different combinations of leads were omitted. The mean relative differences between the QT dispersion with a given number of omitted leads and the "true" QT dispersion (mean relative errors) and the coefficients of variance of the results of QT dispersion obtained when omitting combinations of leads were compared for the different formulas. The procedure was repeated with an adjustment of each formula dividing its results by the square root of the number of measured leads. The same approach was used for the measurement of QT dispersion from the chest leads including a fourth formula (D) the SD of interlead differences weighted according to the distances between leads. For different formulas, the mean relative errors caused by omitting individual electrocardiographic leads were also assessed and the importance of individual leads for correct measurement of QT dispersion was investigated. RESULTS--The study found important differences between different formulas for assessment of QT dispersion with respect to compensation for missing measurements of QT interval. The standard max-min formula (A) performed poorly (mean relative errors of 6.1% to 18.5% for missing one to four leads) but was appropriately adjusted with the factor of 1/square root of n (n = number of measured leads). In a population of healthy people such an adjustment removed the systematic bias introduced by missing leads of the 12 lead electrocardiogram and significantly reduced the mean relative errors caused by the omission of several leads. The unadjusted SD was the optimum formula (C) for the analysis of 12 lead electrocardiograms, and the weighted standard deviation (D) was the optimum for the analysis of six lead chest electrocardiograms. The coefficients of variance of measurements of QT dispersion with different missing leads were very large (about 3 to 7 for one to four missing leads). Independently of the formula for measurement of QT dispersion, omission of different leads produced substantially different relative errors. In 12 lead electrocardiograms the largest relative errors (> 10%) were caused by omitting lead aVL or lead V1. CONCLUSIONS--Because of the large coefficients of variance, the concept of adjusting the QT dispersion for different numbers of electrocardiographic leads used in its assessment is difficult if not impossible to fulfil. Thus it is likely to be more appropriate to assess QT dispersion from standardised constant sets of electrocardiographic leads. PMID:7833200
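
    A sketch of the three per-lead dispersion formulas and the 1/sqrt(n) adjustment discussed above (the exact SD convention is an assumption, since the original does not specify it):

        import numpy as np

        def qt_dispersion(qt_ms, method="A", adjust=False):
            """QT dispersion from per-lead QT intervals (ms); NaN marks an
            unmeasurable lead. A: max - min; B: 2nd max - 2nd min (needs
            n >= 4); C: standard deviation across leads."""
            q = np.sort(np.asarray(qt_ms, dtype=float))
            q = q[~np.isnan(q)]
            n = q.size
            if method == "A":
                d = q[-1] - q[0]
            elif method == "B":
                d = q[-2] - q[1]
            else:  # "C"
                d = q.std(ddof=1)
            return d / np.sqrt(n) if adjust else d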

  4. Relationships of Measurement Error and Prediction Error in Observed-Score Regression

    ERIC Educational Resources Information Center

    Moses, Tim

    2012-01-01

    The focus of this paper is assessing the impact of measurement errors on the prediction error of an observed-score regression. Measures are presented and described for decomposing the linear regression's prediction error variance into parts attributable to the true score variance and the error variances of the dependent variable and the predictor…

  5. Assessment of Computational Fluid Dynamics (CFD) Models for Shock Boundary-Layer Interaction

    NASA Technical Reports Server (NTRS)

    DeBonis, James R.; Oberkampf, William L.; Wolf, Richard T.; Orkwis, Paul D.; Turner, Mark G.; Babinsky, Holger

    2011-01-01

    A workshop on the computational fluid dynamics (CFD) prediction of shock boundary-layer interactions (SBLIs) was held at the 48th AIAA Aerospace Sciences Meeting. As part of the workshop numerous CFD analysts submitted solutions to four experimentally measured SBLIs. This paper describes the assessment of the CFD predictions. The assessment includes an uncertainty analysis of the experimental data, the definition of an error metric and the application of that metric to the CFD solutions. The CFD solutions provided very similar levels of error and in general it was difficult to discern clear trends in the data. For the Reynolds Averaged Navier-Stokes methods the choice of turbulence model appeared to be the largest factor in solution accuracy. Large-eddy simulation methods produced error levels similar to RANS methods but provided superior predictions of normal stresses.

  6. Three-Signal Method for Accurate Measurements of Depolarization Ratio with Lidar

    NASA Technical Reports Server (NTRS)

    Reichardt, Jens; Baumgart, Rudolf; McGee, Thomas J.

    2003-01-01

    A method is presented that permits the determination of atmospheric depolarization-ratio profiles from three elastic-backscatter lidar signals with different sensitivity to the state of polarization of the backscattered light. The three-signal method is insensitive to experimental errors and does not require calibration of the measurement, which could cause large systematic uncertainties of the results, as is the case in the lidar technique conventionally used for the observation of depolarization ratios.

  7. Diamond fly cutting of aluminum thermal infrared flat mirrors for the OSIRIS-REx Thermal Emission Spectrometer (OTES) instrument

    NASA Astrophysics Data System (ADS)

    Groppi, Christopher E.; Underhill, Matthew; Farkas, Zoltan; Pelham, Daniel

    2016-07-01

    We present the fabrication and measurement of monolithic aluminum flat mirrors designed to operate in the thermal infrared for the OSIRIS-Rex Thermal Emission Spectrometer (OTES) space instrument. The mirrors were cut using a conventional fly cutter with a large radius diamond cutting tool on a high precision Kern Evo 3-axis CNC milling machine. The mirrors were measured to have less than 150 angstroms RMS surface error.

  8. Measuring in-use ship emissions with international and U.S. federal methods.

    PubMed

    Khan, M Yusuf; Ranganathan, Sindhuja; Agrawal, Harshit; Welch, William A; Laroo, Christopher; Miller, J Wayne; Cocker, David R

    2013-03-01

    Regulatory agencies have shifted their emphasis from measuring emissions during certification cycles to measuring emissions during actual use. Emission measurements in this research were made from two different large ships at sea to compare the Simplified Measurement Method (SMM) compliant with the International Maritime Organization (IMO) NOx Technical Code to the Portable Emission Measurement Systems (PEMS) compliant with the U.S. Environmental Protection Agency (EPA) 40 Code of Federal Regulations (CFR) Part 1065 for on-road emission testing. Emissions of nitrogen oxides (NOx), carbon dioxide (CO2), and carbon monoxide (CO) were measured at load points specified by the International Organization for Standardization (ISO) to compare the two measurement methods. The average percentage errors calculated for PEMS measurements were 6.5%, 0.6%, and 357% for NOx, CO2, and CO, respectively. The NOx percentage error of 6.5% corresponds to a 0.22 to 1.11 g/kW-hr error in moving from Tier III (3.4 g/kW-hr) to Tier I (17.0 g/kW-hr) emission limits. Emission factors (EFs) of NOx and CO2 measured via SMM were comparable to other studies and regulatory agencies' estimates. However, EF(PM2.5) for this study was up to 26% higher than that currently used by regulatory agencies. The PM2.5 was composed predominantly of hydrated sulfate (70-95%), followed by organic carbon (11-14%), ash (6-11%), and elemental carbon (0.4-0.8%). This research provides direct comparison between the International Maritime Organization and U.S. Environmental Protection Agency reference methods for quantifying in-use emissions from ships. This research provides correlations for NOx, CO2, and CO measured by a PEMS unit (certified by U.S. EPA for on-road testing) against IMO's Simplified Measurement Method for on-board certification. It substantiates the measurements of NOx by PEMS and quantifies measurement error. This study also provides in-use modal and overall weighted emission factors of gaseous (NOx, CO, CO2, total hydrocarbons [THC], and SO2) and particulate pollutants from the main engine of a container ship, which are helpful in the development of emission inventory.
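
    For illustration, a small Python sketch of the two calculations mentioned above: the percentage error of PEMS relative to the reference method, and a load-weighted emission factor. All modal values and weighting factors are hypothetical placeholders rather than data from the study, and the weighted average shown is a simplification of the full ISO procedure:

    ```python
    import numpy as np

    # Hypothetical modal NOx emission factors (g/kW-hr) at the ISO load points.
    loads   = np.array([25, 50, 75, 100])           # % of rated engine power
    ef_smm  = np.array([17.8, 16.9, 16.2, 15.8])    # reference (SMM) values
    ef_pems = np.array([18.9, 18.1, 17.1, 16.9])    # PEMS values

    # Per-mode and average percentage error of PEMS relative to the reference.
    pct_err = 100.0 * (ef_pems - ef_smm) / ef_smm
    print("per-mode % error:", np.round(pct_err, 1), "average:", round(pct_err.mean(), 1))

    # Cycle-weighted emission factor; the weights are assumed placeholders, and
    # weighting the EFs directly is a simplification of the ISO mass-rate formula.
    weights = np.array([0.15, 0.15, 0.50, 0.20])
    print("weighted EF (SMM): ", round(np.sum(weights * ef_smm), 2), "g/kW-hr")
    print("weighted EF (PEMS):", round(np.sum(weights * ef_pems), 2), "g/kW-hr")
    ```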

  9. A Short History of Probability Theory and Its Applications

    ERIC Educational Resources Information Center

    Debnath, Lokenath; Basu, Kanadpriya

    2015-01-01

    This paper deals with a brief history of probability theory and its applications to Jacob Bernoulli's famous law of large numbers and theory of errors in observations or measurements. Included are the major contributions of Jacob Bernoulli and Laplace. It is written to pay the tricentennial tribute to Jacob Bernoulli, since the year 2013…

  10. A fiber Bragg grating sensor system for estimating the large deflection of a lightweight flexible beam

    NASA Astrophysics Data System (ADS)

    Peng, Te; Yang, Yangyang; Ma, Lina; Yang, Huayong

    2016-10-01

    A sensor system based on fiber Bragg gratings (FBGs) is presented to estimate the deflection of a lightweight flexible beam, including the tip position and the tip rotation angle. In this paper, the classical problem of the deflection of a lightweight flexible beam of linear elastic material is analysed. We present the differential equation governing the behavior of the physical system and show that this equation, although straightforward in appearance, is in fact rather difficult to solve due to the presence of a non-linear term. We used epoxy glue to attach the FBG sensors to specific locations on the upper and lower surfaces of the beam in order to obtain local strain measurements. A quasi-distributed FBG static strain sensor network is designed and established. The estimation results from the FBG sensors are compared with reference displacements from ANSYS simulations and with experimental results obtained in the laboratory in the static case. The errors of the estimation by the FBG sensors are analysed for further error correction and design optimization. When the load weight is 20 g, the precision is highest: the position errors ex and ey are 0.19% and 0.14% respectively, and the rotation error eθ is 1.23%.
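
    A generic reconstruction sketch, not the authors' estimation scheme: paired surface strains give the bending curvature, the slope follows by integrating the curvature along the arc, and the tip position follows by integrating cos/sin of the slope, which remains valid for large deflections. Sensor locations, beam thickness, and strain values below are hypothetical:

    ```python
    import numpy as np

    def tip_pose_from_strains(eps_top, eps_bottom, h, s):
        """Tip position and rotation of a thin beam from paired surface strains.

        eps_top/eps_bottom : strains at FBG locations on the upper/lower surface
        h                  : beam thickness, s : arc-length coordinates of the FBGs
        """
        eps_top, eps_bottom, s = (np.asarray(a, float) for a in (eps_top, eps_bottom, s))
        kappa = (eps_top - eps_bottom) / h                 # bending curvature at each station
        ds = np.diff(s)
        theta = np.concatenate(([0.0], np.cumsum(0.5 * (kappa[1:] + kappa[:-1]) * ds)))
        x = np.sum(0.5 * (np.cos(theta[1:]) + np.cos(theta[:-1])) * ds)
        y = np.sum(0.5 * (np.sin(theta[1:]) + np.sin(theta[:-1])) * ds)
        return x, y, theta[-1]

    # Synthetic check: constant curvature (radius R = 2 m), beam length 1 m, thickness 2 mm.
    R, L, h = 2.0, 1.0, 2e-3
    s = np.linspace(0.0, L, 9)                       # nine hypothetical FBG stations
    eps = (h / 2.0) / R                              # surface strain for this curvature
    x, y, th = tip_pose_from_strains(np.full_like(s, eps), np.full_like(s, -eps), h, s)
    print(round(x, 4), round(y, 4), round(np.degrees(th), 2))  # vs R*sin(L/R), R*(1-cos(L/R))
    ```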

  11. Application of principal component analysis to distinguish patients with schizophrenia from healthy controls based on fractional anisotropy measurements.

    PubMed

    Caprihan, A; Pearlson, G D; Calhoun, V D

    2008-08-15

    Principal component analysis (PCA) is often used to reduce the dimension of data before applying more sophisticated data analysis methods such as non-linear classification algorithms or independent component analysis. This practice is based on selecting components corresponding to the largest eigenvalues. If the ultimate goal is separation of the data into two groups, then this set of components need not have the most discriminatory power. We measured the distance between two such populations using the Mahalanobis distance and chose the eigenvectors to maximize it, a modified PCA method which we call discriminant PCA (DPCA). DPCA was applied to diffusion tensor-based fractional anisotropy images to distinguish age-matched schizophrenia subjects from healthy controls. The performance of the proposed method was evaluated by the leave-one-out method. We show that for this fractional anisotropy data set, the classification error with 60 components was close to the minimum error and that the Mahalanobis distance was twice as large with DPCA as with PCA. Finally, by masking the discriminant function with the white matter tracts of the Johns Hopkins University atlas, we identified the left superior longitudinal fasciculus as the tract which gave the least classification error. In addition, with six optimally chosen tracts the classification error was zero.
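
    A simplified sketch of the idea described above (not the authors' exact procedure): compute ordinary PCA on the pooled data, then rank components by their contribution to a diagonal-covariance Mahalanobis distance between the two groups instead of by eigenvalue. The toy data are constructed so that the discriminating direction carries little variance:

    ```python
    import numpy as np

    def dpca_components(x1, x2, n_keep):
        """Rank principal components by two-group separation rather than by eigenvalue."""
        pooled = np.vstack([x1, x2])
        pooled = pooled - pooled.mean(axis=0)
        _, _, vt = np.linalg.svd(pooled, full_matrices=False)   # rows of vt = eigenvectors
        p1, p2 = x1 @ vt.T, x2 @ vt.T                           # project both groups
        dmu = p1.mean(axis=0) - p2.mean(axis=0)
        pooled_var = 0.5 * (p1.var(axis=0, ddof=1) + p2.var(axis=0, ddof=1))
        separation = dmu**2 / pooled_var                        # per-component contribution
        order = np.argsort(separation)[::-1]
        return vt[order[:n_keep]], separation[order[:n_keep]]

    # Toy data: groups differ along a low-variance direction that plain PCA would rank last.
    rng = np.random.default_rng(1)
    base = rng.normal(size=(200, 5)) * np.array([5.0, 4.0, 3.0, 2.0, 0.5])
    x1, x2 = base[:100].copy(), base[100:].copy()
    x2[:, 4] += 2.0                                             # shift along the weak direction
    vecs, score = dpca_components(x1, x2, n_keep=2)
    print(np.round(score, 2))
    ```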

  12. Measurement error is often neglected in medical literature: a systematic review.

    PubMed

    Brakenhoff, Timo B; Mitroiu, Marian; Keogh, Ruth H; Moons, Karel G M; Groenwold, Rolf H H; van Smeden, Maarten

    2018-06-01

    In medical research, covariates (e.g., exposure and confounder variables) are often measured with error. While it is well accepted that this introduces bias and imprecision in exposure-outcome relations, it is unclear to what extent such issues are currently considered in research practice. The objective was to study common practices regarding covariate measurement error via a systematic review of general medicine and epidemiology literature. Original research published in 2016 in 12 high impact journals was full-text searched for phrases relating to measurement error. Reporting of measurement error and methods to investigate or correct for it were quantified and characterized. Two hundred and forty-seven (44%) of the 565 original research publications reported on the presence of measurement error. 83% of these 247 did so with respect to the exposure and/or confounder variables. Only 18 publications (7% of 247) used methods to investigate or correct for measurement error. Consequently, it is difficult for readers to judge the robustness of presented results to the existence of measurement error in the majority of publications in high impact journals. Our systematic review highlights the need for increased awareness about the possible impact of covariate measurement error. Additionally, guidance on the use of measurement error correction methods is necessary. Copyright © 2018 Elsevier Inc. All rights reserved.

  13. Assessing the quality of humidity measurements from global operational radiosonde sensors

    NASA Astrophysics Data System (ADS)

    Moradi, Isaac; Soden, Brian; Ferraro, Ralph; Arkin, Phillip; Vömel, Holger

    2013-07-01

    The quality of humidity measurements from global operational radiosonde sensors in the upper, middle, and lower troposphere for the period 2000-2011 was investigated using satellite observations from three microwave water vapor channels operating at 183.31±1, 183.31±3, and 183.31±7 GHz. The radiosonde data were partitioned into 19 classes based on sensor type. The satellite brightness temperatures (Tb) were simulated using radiosonde profiles and a radiative transfer model, and the radiosonde-simulated Tb's were then compared with the observed Tb's from the satellites. Surface-affected Tb's were excluded from the comparison due to the lack of reliable surface emissivity data at the microwave frequencies. Daytime and nighttime data were examined separately to assess the possible effect of daytime radiation bias on the sonde data. The error characteristics among different radiosondes vary significantly, which largely reflects the differences in sensor type. These differences are more evident in the mid-upper troposphere than in the lower troposphere, mainly because some of the sensors stop responding to tropospheric humidity somewhere in the upper or even in the middle troposphere. In the upper troposphere, most sensors have a dry bias, but Russian sensors and a few other sensors, including GZZ2, VZB2, and RS80H, have a wet bias. In the middle troposphere, Russian sensors still have a wet bias but all other sensors have a dry bias. All sensors, including Russian sensors, have a dry bias in the lower troposphere. The systematic and random errors generally decrease from the upper to the lower troposphere. Sensors from China, India, Russia, and the U.S. have a large random error in the upper troposphere, which indicates that these sensors are not suitable for upper-tropospheric studies as they fail to respond to humidity changes in the upper and even middle troposphere. Overall, Vaisala sensors perform better than other sensors throughout the troposphere, exhibiting the smallest systematic and random errors. Because of the large differences between radiosonde humidity sensors, it is important for long-term trend studies to use only data measured with a single type of sensor at any given station. If multiple sensor types are used, then it is necessary to consider the bias between sensor types and its possible dependence on humidity and temperature.
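
    A minimal sketch of the error bookkeeping described above: for collocated pairs, the radiosonde-simulated minus satellite-observed brightness temperature is grouped by sensor class, and the mean and spread of the differences give the systematic and random errors. Sensor names and numbers below are placeholders:

    ```python
    import numpy as np

    records = [                       # (sensor_class, Tb_simulated_K, Tb_observed_K)
        ("Vaisala-RS92", 245.1, 244.8), ("Vaisala-RS92", 239.7, 239.9),
        ("MRZ (Russia)", 251.3, 248.6), ("MRZ (Russia)", 247.0, 245.1),
        ("GZZ2",         243.2, 244.5), ("GZZ2",         240.8, 242.6),
    ]

    by_sensor = {}
    for sensor, tb_sim, tb_obs in records:
        by_sensor.setdefault(sensor, []).append(tb_sim - tb_obs)

    for sensor, diffs in by_sensor.items():
        d = np.asarray(diffs)
        print(f"{sensor:14s} systematic {d.mean():+5.2f} K   random {d.std(ddof=1):4.2f} K")
    ```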

  14. Image grating metrology using phase-stepping interferometry in scanning beam interference lithography

    NASA Astrophysics Data System (ADS)

    Li, Minkang; Zhou, Changhe; Wei, Chunlong; Jia, Wei; Lu, Yancong; Xiang, Changcheng; Xiang, XianSong

    2016-10-01

    Large-sized gratings are essential optical elements in laser fusion and space astronomy facilities. Scanning beam interference lithography is an effective method for fabricating large-sized gratings. To minimize the nonlinear phase written into the photoresist, the image grating must be measured so that the left and right beams can be adjusted to interfere at their waists. In this paper, we propose a new method for wavefront metrology based on phase-stepping interferometry. First, a transmission grating is used to combine the two beams to form an interferogram, which is recorded by a charge-coupled device (CCD). Phase steps are introduced by moving the grating with a linear stage monitored by a laser interferometer, and a series of interferograms are recorded as the displacement is measured by the laser interferometer. Second, to eliminate the tilt and piston error during the phase stepping, the iterative least-squares phase-shift method is implemented to obtain the wrapped phase. Third, we use the discrete cosine transform least-squares method to unwrap the phase map. Experimental results indicate that the measured wavefront has a nonlinear phase of around 0.05λ at 404.7 nm. Finally, once the image grating is acquired, we simulate the print error written into the photoresist.
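
    The iterative least-squares algorithm used in the paper is not reproduced here; the sketch below shows only the standard non-iterative least-squares estimator of the wrapped phase for known phase steps, which is the starting point of such methods:

    ```python
    import numpy as np

    def phase_from_steps(frames, deltas):
        """Least-squares wrapped phase from phase-stepped interferograms.

        frames : array (n_steps, H, W) of recorded intensities
        deltas : array (n_steps,) of known phase steps in radians
        Model  I_k = a + b*cos(phi + delta_k); solve pixel-wise for a, b*cos(phi),
        b*sin(phi) and return the wrapped phase phi.
        """
        frames = np.asarray(frames, float)
        deltas = np.asarray(deltas, float)
        A = np.column_stack([np.ones_like(deltas), np.cos(deltas), -np.sin(deltas)])
        coef, *_ = np.linalg.lstsq(A, frames.reshape(len(deltas), -1), rcond=None)
        return np.arctan2(coef[2], coef[1]).reshape(frames.shape[1:])

    # Synthetic check: a tilted fringe pattern sampled with five pi/2 steps.
    h, w = 64, 64
    yy, xx = np.mgrid[0:h, 0:w]
    true_phi = 0.15 * xx + 0.05 * yy
    steps = np.arange(5) * np.pi / 2
    frames = 100 + 50 * np.cos(true_phi[None] + steps[:, None, None])
    est = phase_from_steps(frames, steps)
    print(np.allclose(np.angle(np.exp(1j * (est - true_phi))), 0.0, atol=1e-6))
    ```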

  15. Impact of Spatial Soil and Climate Input Data Aggregation on Regional Yield Simulations

    PubMed Central

    Hoffmann, Holger; Zhao, Gang; Asseng, Senthold; Bindi, Marco; Biernath, Christian; Constantin, Julie; Coucheney, Elsa; Dechow, Rene; Doro, Luca; Eckersten, Henrik; Gaiser, Thomas; Grosz, Balázs; Heinlein, Florian; Kassie, Belay T.; Kersebaum, Kurt-Christian; Klein, Christian; Kuhnert, Matthias; Lewan, Elisabet; Moriondo, Marco; Nendel, Claas; Priesack, Eckart; Raynal, Helene; Roggero, Pier P.; Rötter, Reimund P.; Siebert, Stefan; Specka, Xenia; Tao, Fulu; Teixeira, Edmar; Trombi, Giacomo; Wallach, Daniel; Weihermüller, Lutz; Yeluripati, Jagadeesh; Ewert, Frank

    2016-01-01

    We show the error in water-limited yields simulated by crop models which is associated with spatially aggregated soil and climate input data. Crop simulations at large scales (regional, national, continental) frequently use input data of low resolution. Therefore, climate and soil data are often generated via averaging and sampling by area majority. This may bias simulated yields at large scales, varying largely across models. Thus, we evaluated the error associated with spatially aggregated soil and climate data for 14 crop models. Yields of winter wheat and silage maize were simulated under water-limited production conditions. We calculated this error from crop yields simulated at spatial resolutions from 1 to 100 km for the state of North Rhine-Westphalia, Germany. Most models showed yields biased by <15% when aggregating only soil data. The relative mean absolute error (rMAE) of most models using aggregated soil data was in the range or larger than the inter-annual or inter-model variability in yields. This error increased further when both climate and soil data were aggregated. Distinct error patterns indicate that the rMAE may be estimated from few soil variables. Illustrating the range of these aggregation effects across models, this study is a first step towards an ex-ante assessment of aggregation errors in large-scale simulations. PMID:27055028
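
    For illustration, a relative mean absolute error in the spirit of the rMAE above, taking the high-resolution simulation as the reference; the exact definition used in the study may differ, and the yield values are invented:

    ```python
    import numpy as np

    def relative_mae(yield_highres, yield_aggregated):
        """Relative mean absolute error of yields simulated with aggregated inputs,
        taking the high-resolution simulation as the reference."""
        ref = np.asarray(yield_highres, float)
        agg = np.asarray(yield_aggregated, float)
        return np.mean(np.abs(agg - ref)) / np.mean(ref)

    # Hypothetical regional winter-wheat yields (t/ha) simulated at 1 km (reference)
    # and with inputs aggregated to 100 km; numbers are illustrative only.
    y_1km   = [7.9, 8.4, 6.7, 7.2, 8.8, 7.5]
    y_100km = [8.6, 8.9, 7.8, 7.9, 9.1, 8.5]
    print(f"rMAE = {100 * relative_mae(y_1km, y_100km):.1f}%")
    ```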

  16. Impact of Spatial Soil and Climate Input Data Aggregation on Regional Yield Simulations.

    PubMed

    Hoffmann, Holger; Zhao, Gang; Asseng, Senthold; Bindi, Marco; Biernath, Christian; Constantin, Julie; Coucheney, Elsa; Dechow, Rene; Doro, Luca; Eckersten, Henrik; Gaiser, Thomas; Grosz, Balázs; Heinlein, Florian; Kassie, Belay T; Kersebaum, Kurt-Christian; Klein, Christian; Kuhnert, Matthias; Lewan, Elisabet; Moriondo, Marco; Nendel, Claas; Priesack, Eckart; Raynal, Helene; Roggero, Pier P; Rötter, Reimund P; Siebert, Stefan; Specka, Xenia; Tao, Fulu; Teixeira, Edmar; Trombi, Giacomo; Wallach, Daniel; Weihermüller, Lutz; Yeluripati, Jagadeesh; Ewert, Frank

    2016-01-01

    We show the error in water-limited yields simulated by crop models which is associated with spatially aggregated soil and climate input data. Crop simulations at large scales (regional, national, continental) frequently use input data of low resolution. Therefore, climate and soil data are often generated via averaging and sampling by area majority. This may bias simulated yields at large scales, varying largely across models. Thus, we evaluated the error associated with spatially aggregated soil and climate data for 14 crop models. Yields of winter wheat and silage maize were simulated under water-limited production conditions. We calculated this error from crop yields simulated at spatial resolutions from 1 to 100 km for the state of North Rhine-Westphalia, Germany. Most models showed yields biased by <15% when aggregating only soil data. The relative mean absolute error (rMAE) of most models using aggregated soil data was in the range or larger than the inter-annual or inter-model variability in yields. This error increased further when both climate and soil data were aggregated. Distinct error patterns indicate that the rMAE may be estimated from few soil variables. Illustrating the range of these aggregation effects across models, this study is a first step towards an ex-ante assessment of aggregation errors in large-scale simulations.

  17. Evaluation of mean velocity and turbulence measurements with ADCPs

    USGS Publications Warehouse

    Nystrom, E.A.; Rehmann, C.R.; Oberg, K.A.

    2007-01-01

    To test the ability of acoustic Doppler current profilers (ADCPs) to measure turbulence, profiles measured with two pulse-to-pulse coherent ADCPs in a laboratory flume were compared to profiles measured with an acoustic Doppler velocimeter, and time series measured in the acoustic beam of the ADCPs were examined. A four-beam ADCP was used at a downstream station, while a three-beam ADCP was used at a downstream station and an upstream station. At the downstream station, where the turbulence intensity was low, both ADCPs reproduced the mean velocity profile well away from the flume boundaries; errors near the boundaries were due to transducer ringing, flow disturbance, and sidelobe interference. At the upstream station, where the turbulence intensity was higher, errors in the mean velocity were large. The four-beam ADCP measured the Reynolds stress profile accurately away from the bottom boundary, and these measurements can be used to estimate shear velocity. Estimates of Reynolds stress with a three-beam ADCP and turbulent kinetic energy with both ADCPs cannot be computed without further assumptions, and they are affected by flow inhomogeneity. Neither ADCP measured integral time scales to within 60%. © 2007 ASCE.
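
    A hedged sketch of the variance method commonly used to derive Reynolds stress from an opposing pair of ADCP beams (the abstract does not state the exact processing used in the study); the beam geometry and turbulence statistics below are synthetic:

    ```python
    import numpy as np

    def reynolds_stress_variance_method(beam1, beam2, theta_deg=20.0):
        """Reynolds stress from an opposing pair of ADCP beams (variance method).

        beam1, beam2 : along-beam velocity time series at one depth bin (m/s)
        theta_deg    : beam angle from the vertical
        Returns an estimate of <u'w'> for the geometry assumed below; the sign
        flips if the beams are swapped, so the instrument orientation must be known.
        """
        th = np.radians(theta_deg)
        var1 = np.var(np.asarray(beam1, float), ddof=1)
        var2 = np.var(np.asarray(beam2, float), ddof=1)
        return (var1 - var2) / (4.0 * np.sin(th) * np.cos(th))

    # Synthetic beam velocities built from correlated u', w' fluctuations.
    rng = np.random.default_rng(2)
    n, th = 20_000, np.radians(20.0)
    w = 0.01 * rng.normal(size=n)                     # vertical fluctuations
    u = -0.5 * w + 0.02 * rng.normal(size=n)          # horizontal fluctuations, correlated with w
    b1 = u * np.sin(th) + w * np.cos(th)              # beam tilted toward +x
    b2 = -u * np.sin(th) + w * np.cos(th)             # opposing beam
    print(round(reynolds_stress_variance_method(b1, b2), 6),
          round(float(np.mean(u * w)), 6))            # both approximate <u'w'>
    ```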

  18. A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.

    PubMed

    Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing

    2018-01-15

    Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and support vector machines (SVMs) are often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were set manually, which cannot ensure the model's performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to improve its ability to avoid local optima. To verify the performance of NAPSO-SVM, three algorithms are used to optimize the SVM's parameters: particle swarm optimization (PSO), the improved PSO algorithm (NAPSO), and glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are used as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performance. The experimental results show that among the three tested algorithms the NAPSO-SVM method has better prediction precision and smaller prediction errors, and is an effective method for predicting the dynamic measurement errors of sensors.
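
    A simplified sketch of the approach: support vector regression with hyperparameters tuned by a plain particle swarm. The natural-selection and simulated-annealing additions of NAPSO are omitted, and the 'dynamic error' series is synthetic, so this illustrates the general technique rather than the paper's method:

    ```python
    import numpy as np
    from sklearn.svm import SVR

    def rmse(model, x, y):
        return float(np.sqrt(np.mean((model.predict(x) - y) ** 2)))

    def pso_tuned_svr(x_train, y_train, x_val, y_val, n_particles=12, n_iter=25, seed=0):
        """Tune SVR hyperparameters (C, gamma) with a plain particle swarm.

        Particle positions are log10(C) and log10(gamma); fitness is validation RMSE.
        """
        rng = np.random.default_rng(seed)
        pos = rng.uniform([-1, -3], [3, 1], size=(n_particles, 2))
        vel = np.zeros_like(pos)
        def fitness(p):
            return rmse(SVR(C=10 ** p[0], gamma=10 ** p[1]).fit(x_train, y_train), x_val, y_val)
        best_p = pos.copy()
        best_f = np.array([fitness(p) for p in pos])
        g = best_p[best_f.argmin()].copy()
        for _ in range(n_iter):
            r1, r2 = rng.random((2, n_particles, 1))
            vel = 0.7 * vel + 1.5 * r1 * (best_p - pos) + 1.5 * r2 * (g - pos)
            pos = np.clip(pos + vel, [-1, -3], [3, 1])
            f = np.array([fitness(p) for p in pos])
            improved = f < best_f
            best_p[improved], best_f[improved] = pos[improved], f[improved]
            g = best_p[best_f.argmin()].copy()
        return SVR(C=10 ** g[0], gamma=10 ** g[1]).fit(x_train, y_train), g

    # Toy 'dynamic measurement error' series: predict the next error from the last three.
    rng = np.random.default_rng(1)
    t = np.arange(600) / 50.0
    err = 0.4 * np.sin(2.0 * t) + 0.1 * rng.normal(size=t.size)
    X = np.column_stack([err[i:i - 3] for i in range(3)])   # lagged features
    y = err[3:]
    model, g = pso_tuned_svr(X[:400], y[:400], X[400:500], y[400:500])
    print("C=10^%.2f gamma=10^%.2f  test RMSE=%.3f" % (g[0], g[1], rmse(model, X[500:], y[500:])))
    ```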

  19. Method for revealing biases in precision mass measurements

    NASA Astrophysics Data System (ADS)

    Vabson, V.; Vendt, R.; Kübarsepp, T.; Noorma, M.

    2013-02-01

    A practical method for the quantification of systematic errors of large-scale automatic comparators is presented. This method is based on a comparison of the performance of two different comparators. First, the differences of 16 equal partial loads of 1 kg are measured with a high-resolution mass comparator featuring insignificant bias and 1 kg maximum load. At the second stage, a large-scale comparator is tested by using combined loads with known mass differences. Comparing the different results, the biases of any comparator can be easily revealed. These large-scale comparator biases are determined over a 16-month period, and for the 1 kg loads, a typical pattern of biases in the range of ±0.4 mg is observed. The temperature differences recorded inside the comparator concurrently with mass measurements are found to remain within a range of ±30 mK, which obviously has a minor effect on the detected biases. Seasonal variations imply that the biases likely arise mainly due to the functioning of the environmental control at the measurement location.
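
    The arithmetic of the check can be sketched as follows: once the partial loads have been intercompared on the low-bias comparator, the mass difference of any two combined loads is known, and the reading of the comparator under test minus that known difference is its bias. All values below are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    partial_dev_mg = rng.normal(0.0, 0.15, 16)       # deviations of the 16 loads from nominal

    def known_difference_mg(group_a, group_b):
        """Known mass difference (mg) between two groups of calibrated partial loads."""
        return partial_dev_mg[list(group_a)].sum() - partial_dev_mg[list(group_b)].sum()

    # A simulated comparison of two 4 kg combined loads on the comparator under test.
    group_a, group_b = (0, 1, 2, 3), (4, 5, 6, 7)
    reading_mg = known_difference_mg(group_a, group_b) + 0.32   # 0.32 mg simulated bias
    bias_mg = reading_mg - known_difference_mg(group_a, group_b)
    print(f"estimated comparator bias: {bias_mg:+.2f} mg")
    ```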

  20. Comparing Measurement Error between Two Different Methods of Measurement of Various Magnitudes

    ERIC Educational Resources Information Center

    Zavorsky, Gerald S.

    2010-01-01

    Measurement error is a common problem in several fields of research such as medicine, physiology, and exercise science. The standard deviation of repeated measurements on the same person is the measurement error. One way of presenting measurement error is called the repeatability, which is 2.77 multiplied by the within subject standard deviation.…
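
    A minimal sketch of the quantities defined above, computing the within-subject standard deviation from repeated measurements and the repeatability as 2.77 times that value; the data are hypothetical:

    ```python
    import numpy as np

    def within_subject_sd(measurements):
        """Within-subject SD; rows = subjects, columns = repeated measurements."""
        m = np.asarray(measurements, float)
        return np.sqrt(np.mean(np.var(m, axis=1, ddof=1)))

    # Hypothetical duplicate measurements on five subjects.
    data = [[28.1, 27.4], [31.0, 31.9], [25.6, 25.2], [29.8, 30.5], [27.0, 26.3]]
    sw = within_subject_sd(data)
    print(f"measurement error (within-subject SD): {sw:.2f}")
    print(f"repeatability (2.77 * Sw): {2.77 * sw:.2f}")
    ```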
