Sample records for improved calibration method

  1. Improvement of Gaofen-3 Absolute Positioning Accuracy Based on Cross-Calibration

    PubMed Central

    Deng, Mingjun; Li, Jiansong

    2017-01-01

    The Chinese Gaofen-3 (GF-3) mission was launched in August 2016, equipped with a full polarimetric synthetic aperture radar (SAR) sensor in the C-band, with a resolution of up to 1 m. The absolute positioning accuracy of GF-3 is of great importance, and in-orbit geometric calibration is a key technology for improving it. Conventional geometric calibration accurately estimates the image's geometric calibration parameters (internal delay and azimuth shifts) using high-precision ground control data; it is therefore highly dependent on the control data of the calibration field, and monitoring changes in GF-3's geometric calibration parameters remains costly and labor-intensive. Based on the positioning-consistency constraint of conjugate points, this study presents a geometric cross-calibration method for the rapid and accurate calibration of GF-3. The proposed method can accurately calibrate the geometric calibration parameters without corner reflectors or high-precision digital elevation models, thus improving the absolute positioning accuracy of GF-3 imagery. GF-3 images from multiple regions were collected to verify the absolute positioning accuracy after cross-calibration. The results show that this method can achieve a calibration accuracy as high as that of the conventional field calibration method. PMID:29240675

  2. SU-E-I-38: Improved Metal Artifact Correction Using Adaptive Dual Energy Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, X; Elder, E; Roper, J

    2015-06-15

    Purpose: The empirical dual energy calibration (EDEC) method corrects for beam-hardening artifacts but shows limited performance on metal artifact correction. In this work, we propose an adaptive dual energy calibration (ADEC) method to correct for metal artifacts. Results: Highly attenuating copper rods cause severe streaking artifacts on standard CT images. EDEC improves the image quality but cannot eliminate the streaking artifacts. Compared to EDEC, the proposed ADEC method further reduces the streaking resulting from metallic inserts and beam-hardening effects and obtains material decomposition images with significantly improved accuracy. Conclusion: We propose an adaptive dual energy calibration method to correct for metal artifacts. ADEC is evaluated with the Shepp-Logan phantom and shows superior metal artifact correction performance. In the future, we will further evaluate the performance of the proposed method with phantom and patient data.

  3. Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units.

    PubMed

    Cai, Qingzhong; Yang, Gongliu; Song, Ningfang; Liu, Yiliang

    2016-06-22

    An inertial navigation system (INS) has been widely used in challenging GPS environments. With the rapid development of modern physics, atomic gyroscopes will come into use in the near future, with a predicted accuracy of 5 × 10⁻⁶ °/h or better. However, existing calibration methods and devices cannot satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors and lever-arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can estimate all the parameters using a common dual-axis turntable. Laboratory and sailing tests prove that the position accuracy over a five-day inertial navigation run can be improved by about 8% with the proposed calibration method, and by at least 20% when the position accuracy of the atomic gyro INS reaches a level of 0.1 nautical miles/5 d. Compared with existing calibration methods, the proposed method, which calibrates more error sources and higher-order small error parameters for ultra-high accuracy inertial measurement units (IMUs) using common turntables, has great application potential in future atomic gyro INSs.
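    The record above uses a 51-state Kalman filter over a full IMU error model; as a minimal sketch of the underlying idea, the toy below estimates just one gyro's scale-factor error and bias with a two-state linear Kalman filter from commanded turntable rates. The rates, noise levels and parameter values are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(6)

s_true, b_true = 2e-4, 0.01                     # gyro scale-factor error and bias (deg/s)
rates = np.tile([-100., -10., 10., 100.], 50)   # commanded turntable rates (deg/s)
meas = (1 + s_true) * rates + b_true + rng.normal(0, 1e-3, rates.size)

# linear Kalman filter on the constant state [scale error, bias];
# each measurement gives z - w = w*s + b + noise
x = np.zeros(2)
P = np.diag([1e-4, 1e-2])   # loose prior on both parameters
R = 1e-6                    # measurement noise variance (std 1e-3 deg/s)
for w, z in zip(rates, meas):
    H = np.array([[w, 1.0]])
    y = (z - w) - H @ x          # innovation
    S = H @ P @ H.T + R
    K = (P @ H.T) / S            # Kalman gain
    x = x + K[:, 0] * y[0]
    P = (np.eye(2) - K @ H) @ P
```

With rates spanning ±100 deg/s the scale factor and bias are well separated; the filter is equivalent to recursive least squares here because the state is constant.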

  4. Improving the Ar I and II branching ratio calibration method: Monte Carlo simulations of effects from photon scattering/reflecting in hollow cathodes

    NASA Astrophysics Data System (ADS)

    Lawler, J. E.; Den Hartog, E. A.

    2018-03-01

    The Ar I and II branching ratio calibration method is discussed with the goal of improving the technique. This method of establishing a relative radiometric calibration is important in ongoing research to improve atomic transition probabilities for quantitative spectroscopy in astrophysics and other fields. Specific suggestions are presented along with Monte Carlo simulations of wavelength dependent effects from scattering/reflecting of photons in a hollow cathode.

  5. Improvement of Accuracy in Environmental Dosimetry by TLD Cards Using Three-dimensional Calibration Method.

    PubMed

    HosseiniAliabadi, S J; Hosseini Pooya, S M; Afarideh, H; Mianji, F

    2015-06-01

    The angular dependency of TLD card response may cause the results of environmental dosimetry to deviate from their true values, since TLDs may be exposed to radiation at different angles of incidence from the surrounding area. A 3D arrangement of TLD cards was calibrated isotropically in a standard radiation field to evaluate the improvement in measurement accuracy for environmental dosimetry. Three personal TLD cards were placed orthogonally in a cylindrical holder and calibrated using both 1D and 3D calibration methods. The dosimeter was then used simultaneously with a reference instrument in a real radiation field, measuring the accumulated dose over a time interval. The results show that the accuracy of measurement improved by 6.5% using the 3D calibration factor compared with the normal 1D calibration method. This system can be utilized in large-scale environmental monitoring with higher accuracy.
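    To see numerically why an isotropic (3D) calibration factor differs from a normal-incidence (1D) one, the sketch below averages a hypothetical angular response curve. The response values and reference dose are invented for illustration, not taken from the paper.

```python
import numpy as np

# hypothetical relative TLD-card response versus angle of incidence (degrees)
angles = np.array([0, 30, 60, 90, 120, 150, 180])
response = np.array([1.00, 0.97, 0.90, 0.72, 0.90, 0.97, 1.00])

ref_dose = 1.0  # dose delivered in the standard calibration field (mSv)

# 1D calibration: card irradiated at normal incidence only
cf_1d = ref_dose / response[0]

# 3D calibration: card irradiated isotropically, so the factor is
# referenced to the mean response over all incidence angles
cf_3d = ref_dose / response.mean()
```

Because the card under-responds at grazing angles, the 3D factor comes out larger than the 1D one, which is exactly the correction an isotropic environmental field needs.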

  6. Improvement of Accuracy in Environmental Dosimetry by TLD Cards Using Three-dimensional Calibration Method

    PubMed Central

    HosseiniAliabadi, S. J.; Hosseini Pooya, S. M.; Afarideh, H.; Mianji, F.

    2015-01-01

    Introduction: The angular dependency of TLD card response may cause the results of environmental dosimetry to deviate from their true values, since TLDs may be exposed to radiation at different angles of incidence from the surrounding area. Objective: A 3D arrangement of TLD cards was calibrated isotropically in a standard radiation field to evaluate the improvement in measurement accuracy for environmental dosimetry. Method: Three personal TLD cards were placed orthogonally in a cylindrical holder and calibrated using both 1D and 3D calibration methods. The dosimeter was then used simultaneously with a reference instrument in a real radiation field, measuring the accumulated dose over a time interval. Result: The results show that the accuracy of measurement improved by 6.5% using the 3D calibration factor compared with the normal 1D calibration method. Conclusion: This system can be utilized in large-scale environmental monitoring with higher accuracy. PMID:26157729

  7. Wavelength calibration of dispersive near-infrared spectrometer using relative k-space distribution with low coherence interferometer

    NASA Astrophysics Data System (ADS)

    Kim, Ji-hyun; Han, Jae-Ho; Jeong, Jichai

    2016-05-01

    The calibration methods commonly employed for laboratory-made spectrometers have several disadvantages, including poor calibration when the number of characteristic spectral peaks is low. We therefore present a wavelength calibration method that uses the relative k-space distribution obtained with a low-coherence interferometer. The proposed method utilizes an interferogram with a perfect sinusoidal pattern in k-space for calibration. Zero-crossing detection extracts the k-space distribution of a spectrometer from the interferogram in the wavelength domain, and a calibration lamp provides information about absolute wavenumbers. To assign wavenumbers, wavelength-to-k-space conversion is required for the characteristic spectrum of the calibration lamp with the extracted k-space distribution. The wavelength calibration is then completed by inverse conversion from k-space to the wavelength domain. The calibration performance of the proposed method was demonstrated under two experimental conditions, with four and with eight characteristic spectral peaks. The proposed method yielded reliable calibration results in both cases, whereas the conventional method of third-order polynomial curve fitting failed to determine wavelengths in the case of four characteristic peaks. Moreover, for optical coherence tomography imaging, the proposed method could improve axial resolution owing to stronger suppression of sidelobes in the point spread function than the conventional method. We believe that our findings can improve not only wavelength calibration accuracy but also resolution for optical coherence tomography.
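    The zero-crossing step can be sketched as follows: a signal that is perfectly sinusoidal in k is sampled on a nonuniform pixel grid, its zero crossings are located by sign changes, and, since cosine zeros are exactly equally spaced in k, the crossings define the relative pixel-to-k map. The pixel-to-k polynomial and modulation frequency below are invented for illustration.

```python
import numpy as np

# pixels sample k nonuniformly (this mapping is unknown to the calibration)
pix = np.arange(2048)
k_true = 6.0 + 0.5 * (pix / 2047) + 0.05 * (pix / 2047) ** 2   # arbitrary units
interferogram = np.cos(2 * np.pi * 40.0 * k_true)              # sinusoidal in k

# zero-crossing detection: sign changes, refined by linear interpolation
s = np.sign(interferogram)
idx = np.where(np.diff(s) != 0)[0]
frac = interferogram[idx] / (interferogram[idx] - interferogram[idx + 1])
zc_pix = idx + frac                      # sub-pixel crossing positions

# crossings are half a period apart in k, so assign them uniform relative k
k_rel = np.arange(zc_pix.size)

# relative pixel-to-k map by interpolation between the crossings
k_of_pix = np.interp(pix, zc_pix, k_rel)
```

Between the first and last crossing, `k_of_pix` should be an affine function of the true k, which is exactly what the method exploits before the calibration lamp pins down absolute wavenumbers.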

  8. Improving the spectral measurement accuracy based on temperature distribution and spectra-temperature relationship

    NASA Astrophysics Data System (ADS)

    Li, Zhe; Feng, Jinchao; Liu, Pengyu; Sun, Zhonghua; Li, Gang; Jia, Kebin

    2018-05-01

    Temperature is usually treated as a nuisance fluctuation in near-infrared spectral measurement, and chemometric methods have been extensively studied to correct for the effect of temperature variations. However, temperature can also be treated as a constructive parameter that provides detailed chemical information when systematically changed during the measurement. Our group has previously studied the relationship between temperature-induced spectral variation (TSVC) and normalized squared temperature. In this study, we focused on the influence of the temperature distribution in the calibration set. A multi-temperature calibration set selection (MTCS) method is proposed to improve prediction accuracy by considering the temperature distribution of the calibration samples. Furthermore, a double-temperature calibration set selection (DTCS) method is proposed based on MTCS and the relationship between TSVC and normalized squared temperature. We compared the prediction performance of PLS models based on random sampling and on the proposed methods. The results of experimental studies showed that prediction performance was improved by the proposed methods. MTCS and DTCS are therefore alternative methods for improving prediction accuracy in near-infrared spectral measurement.
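    The abstract does not spell out the selection rule, so the sketch below is an assumed, bin-stratified version of the idea: split the observed temperature range into bins and draw calibration samples from every bin, so the calibration set covers the whole temperature distribution instead of clustering wherever random sampling happens to land.

```python
import numpy as np

rng = np.random.default_rng(4)
temps = rng.uniform(20, 40, 100)   # sample temperatures in the candidate pool (degC)

def mtcs_select(temps, n_cal, n_bins=5):
    """Stratified draw: split the temperature range into bins, sample each."""
    edges = np.linspace(temps.min(), temps.max(), n_bins + 1)
    per_bin = n_cal // n_bins
    chosen = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        idx = np.where((temps >= lo) & (temps <= hi))[0]
        chosen.extend(rng.choice(idx, min(per_bin, idx.size), replace=False))
    return np.array(chosen)

sel = mtcs_select(temps, 20)
```

The selected subset is guaranteed to contain samples from both ends of the temperature range, which is the property the paper argues improves PLS prediction.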

  9. Generator Dynamic Model Validation and Parameter Calibration Using Phasor Measurements at the Point of Connection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Zhenyu; Du, Pengwei; Kosterev, Dmitry

    2013-05-01

    Disturbance data recorded by phasor measurement units (PMUs) offer opportunities to improve the integrity of dynamic models. However, manually tuning parameters through play-back events demands significant effort and engineering experience. In this paper, a calibration method using the extended Kalman filter (EKF) technique is proposed. The formulation of the EKF with parameter calibration is discussed, and case studies are presented to demonstrate its validity. The proposed calibration method is cost-effective and complementary to traditional equipment testing for improving dynamic model quality.
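    As a minimal, hypothetical sketch of EKF-based parameter calibration (a scalar toy system, not the paper's generator model), the code below augments the state with its unknown transition parameter and lets the filter identify the parameter from noisy measurements alone.

```python
import numpy as np

rng = np.random.default_rng(0)

a_true = 0.95          # unknown transition parameter to be calibrated
u = 0.05               # known constant input keeping the state away from zero
Q, R = 1e-4, 1e-2      # process / measurement noise variances

# simulate noisy measurements of the state (standing in for recorded data)
x, zs = 1.0, []
for _ in range(400):
    x = a_true * x + u + rng.normal(0, np.sqrt(Q))
    zs.append(x + rng.normal(0, np.sqrt(R)))

# EKF with augmented state s = [x, a]
s = np.array([0.0, 0.5])            # deliberately poor initial parameter guess
P = np.diag([1.0, 1.0])
for z in zs:
    F = np.array([[s[1], s[0]],     # Jacobian of [a*x + u, a] w.r.t. [x, a]
                  [0.0, 1.0]])
    s = np.array([s[1] * s[0] + u, s[1]])        # predict
    P = F @ P @ F.T + np.diag([Q, 1e-6])         # tiny noise keeps a adaptable
    H = np.array([[1.0, 0.0]])                   # we measure x directly
    y = z - s[0]                                 # innovation
    S = H @ P @ H.T + R
    K = (P @ H.T) / S
    s = s + K[:, 0] * y                          # update
    P = (np.eye(2) - K @ H) @ P
```

The cross-covariance built up by the Jacobian is what routes measurement innovations into the parameter estimate; the same augmentation idea scales to multi-parameter dynamic models.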

  10. An Improved Fast Self-Calibration Method for Hybrid Inertial Navigation System under Stationary Condition

    PubMed Central

    Liu, Bingqi; Wei, Shihui; Su, Guohua; Wang, Jiping; Lu, Jiazhen

    2018-01-01

    The navigation accuracy of an inertial navigation system (INS) can be greatly improved when inertial measurement unit (IMU) errors, such as gyro drifts and accelerometer biases, are effectively calibrated and compensated. To reduce the turntable-precision requirement of the classical calibration method, a continuous dynamic self-calibration method based on a three-axis rotating frame is presented for the hybrid inertial navigation system. First, by selecting a suitable IMU frame, the error models of the accelerometers and gyros are established. Then, by taking the navigation errors during rolling as the observations, all twenty-one error parameters of the hybrid inertial navigation system (HINS) are identified based on the calculation of an intermediate parameter. Experiments verify that the method can identify all error parameters of the HINS and that it achieves accuracy equivalent to classical calibration on a high-precision turntable. In addition, the method is rapid, simple and feasible. PMID:29695041

  11. An Improved Fast Self-Calibration Method for Hybrid Inertial Navigation System under Stationary Condition.

    PubMed

    Liu, Bingqi; Wei, Shihui; Su, Guohua; Wang, Jiping; Lu, Jiazhen

    2018-04-24

    The navigation accuracy of an inertial navigation system (INS) can be greatly improved when inertial measurement unit (IMU) errors, such as gyro drifts and accelerometer biases, are effectively calibrated and compensated. To reduce the turntable-precision requirement of the classical calibration method, a continuous dynamic self-calibration method based on a three-axis rotating frame is presented for the hybrid inertial navigation system. First, by selecting a suitable IMU frame, the error models of the accelerometers and gyros are established. Then, by taking the navigation errors during rolling as the observations, all twenty-one error parameters of the hybrid inertial navigation system (HINS) are identified based on the calculation of an intermediate parameter. Experiments verify that the method can identify all error parameters of the HINS and that it achieves accuracy equivalent to classical calibration on a high-precision turntable. In addition, the method is rapid, simple and feasible.

  12. High-accuracy self-calibration method for dual-axis rotation-modulating RLG-INS

    NASA Astrophysics Data System (ADS)

    Wei, Guo; Gao, Chunfeng; Wang, Qi; Wang, Qun; Long, Xingwu

    2017-05-01

    The inertial navigation system is a core component of both military and civil navigation systems. Dual-axis rotation modulation can completely eliminate the constant errors of the inertial elements on all three axes, improving system accuracy, but it cannot eliminate the errors caused by misalignment angles and scale factor error. Moreover, discrete calibration methods cannot meet the requirements of high-accuracy calibration for a mechanically dithered ring laser gyroscope navigation system with shock absorbers. This paper analyzes the effect of calibration error over one modulation period and presents a new systematic self-calibration method for dual-axis rotation-modulating RLG-INS, together with a procedure for carrying it out. Self-calibration simulation experiments show that the scheme can estimate all the errors in the calibration error model, with the calibrated scale factor error of the inertial sensors below 1 ppm and the misalignment below 5″. These results validate the systematic self-calibration method and demonstrate its importance for improving the accuracy of dual-axis rotation inertial navigation systems with mechanically dithered ring laser gyroscopes.

  13. A New Online Calibration Method Based on Lord's Bias-Correction.

    PubMed

    He, Yinhong; Chen, Ping; Li, Yong; Zhang, Shumei

    2017-09-01

    Online calibration techniques are widely employed to calibrate new items because of their practical advantages. Method A is the simplest online calibration method and has recently attracted much attention from researchers. However, a key assumption of Method A is that it treats the person-parameter estimates θ̂ (obtained by maximum likelihood estimation [MLE]) as their true values θ, so a nonignorable deviation of the estimated θ̂ from the true values can yield inaccurate item calibration. To improve the performance of Method A, a new method, MLE-LBCI-Method A, is proposed. The new method combines a modified Lord's bias-correction method (maximum likelihood estimation-Lord's bias-correction with iteration [MLE-LBCI]) with the original Method A in an effort to correct the deviation of θ̂, which may adversely affect item calibration precision. Two simulation studies were carried out to explore the performance of both MLE-LBCI and MLE-LBCI-Method A under several scenarios. Simulation results showed that MLE-LBCI significantly improves the ML ability estimates, and MLE-LBCI-Method A outperformed Method A in almost all experimental conditions.
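    Method A itself is simple to state: estimate each examinee's θ by MLE on the operational items, then calibrate the new item treating those θ̂ values as known. The Rasch-model sketch below (invented data; the MLE-LBCI bias-correction step is deliberately omitted) shows the basic two-step procedure.

```python
import numpy as np

rng = np.random.default_rng(5)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Rasch model: P(correct) = sigmoid(theta - b)
N, J = 2000, 30
theta = rng.normal(0, 1, N)                 # true abilities
b_oper = rng.normal(0, 1, J)                # operational item difficulties (known)
y_oper = (rng.uniform(size=(N, J)) < sigmoid(theta[:, None] - b_oper)).astype(float)

# step 1: MLE of each examinee's ability from the operational items (Newton steps,
# clipped to avoid divergence for perfect/zero scores)
theta_hat = np.zeros(N)
for _ in range(25):
    p = sigmoid(theta_hat[:, None] - b_oper)
    grad = (y_oper - p).sum(axis=1)
    info = (p * (1 - p)).sum(axis=1)
    theta_hat = np.clip(theta_hat + grad / info, -4, 4)

# step 2 (Method A): treat theta_hat as known, MLE the new item's difficulty
b_new = 0.5                                 # true difficulty of the new item
y_new = (rng.uniform(size=N) < sigmoid(theta - b_new)).astype(float)
b_hat = 0.0
for _ in range(25):
    p = sigmoid(theta_hat - b_hat)
    b_hat += (p - y_new).sum() / (p * (1 - p)).sum()
```

Any noise or bias in θ̂ propagates directly into b̂ in step 2, which is precisely the weakness the paper's bias-corrected variant targets.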

  14. IMU-based online kinematic calibration of robot manipulator.

    PubMed

    Du, Guanglong; Zhang, Ping

    2013-01-01

    Robot calibration is a useful diagnostic method for improving positioning accuracy in robot production and maintenance. An online robot self-calibration method based on an inertial measurement unit (IMU) is presented in this paper. The method requires the IMU to be rigidly attached to the robot manipulator, which makes it possible to obtain the orientation of the manipulator from the orientation of the IMU in real time. This paper proposes an efficient approach that incorporates the Factored Quaternion Algorithm (FQA) and a Kalman filter (KF) to estimate the orientation of the IMU. An extended Kalman filter (EKF) is then used to estimate the kinematic parameter errors. The proposed orientation estimation method results in improved reliability and accuracy in determining the orientation of the manipulator. Compared with existing vision-based self-calibration methods, the great advantage of this method is that it does not need complex steps such as camera calibration, image capture and corner detection, which makes the robot calibration procedure more autonomous in a dynamic manufacturing environment. Experimental studies on a GOOGOL GRB3016 robot show that this method has better accuracy, convenience and effectiveness than vision-based methods.

  15. A calibration method of infrared LVF based spectroradiometer

    NASA Astrophysics Data System (ADS)

    Liu, Jiaqing; Han, Shunli; Liu, Lei; Hu, Dexin

    2017-10-01

    In this paper, a calibration method for an LVF-based spectroradiometer is summarized, covering both spectral calibration and radiometric calibration. The spectral calibration process is as follows: first, the relationship between the stepping motor's step number and the transmission wavelength is derived by theoretical calculation, including a non-linearity correction of the LVF; second, a line-to-line method is used to correct the theoretical wavelength; finally, 3.39 μm and 10.69 μm lasers are used to validate the spectral calibration, showing that the sought accuracy of 0.1% or better is achieved. A new sub-region, multi-point calibration method is used for the radiometric calibration to improve accuracy; results show that the sought accuracy of 1% or better is achieved.

  16. A holistic calibration method with iterative distortion compensation for stereo deflectometry

    NASA Astrophysics Data System (ADS)

    Xu, Yongjia; Gao, Feng; Zhang, Zonghua; Jiang, Xiangqian

    2018-07-01

    This paper presents a novel holistic calibration method for a stereo deflectometry system to improve its measurement accuracy. The reconstruction result of stereo deflectometry is integrated with the calculated normal data of the measured surface, and the accuracy of the normal data is strongly influenced by the calibration accuracy of the geometrical relationship of the stereo deflectometry system. Conventional calibration approaches introduce form error into the system because of inaccurate imaging models and distortion elimination. The proposed calibration method compensates for system distortion with an iterative algorithm instead of the conventional mathematical distortion model. The initial values of the system parameters are calculated from fringe patterns displayed on the system's LCD screen via reflection off a markerless flat mirror. An iterative algorithm is proposed to compensate for system distortion and to optimize the camera imaging parameters and the system geometrical relation parameters based on a cost function. Both simulation work and experimental results show that the proposed calibration method can significantly improve the calibration and measurement accuracy of a stereo deflectometry system: the PV (peak value) of the measurement error of a flat mirror can be reduced from 282 nm with the conventional calibration approach to 69.7 nm with the proposed method.

  17. An improved method for determining force balance calibration accuracy

    NASA Technical Reports Server (NTRS)

    Ferris, Alice T.

    1993-01-01

    The results of an improved statistical method used at Langley Research Center for determining and stating the accuracy of a force balance calibration are presented. The application of the method for initial loads, initial load determination, auxiliary loads, primary loads, and proof loads is described. The data analysis is briefly addressed.

  18. A Novel Error Model of Optical Systems and an On-Orbit Calibration Method for Star Sensors.

    PubMed

    Wang, Shuang; Geng, Yunhai; Jin, Rongyu

    2015-12-12

    To improve the on-orbit measurement accuracy of star sensors, this paper studies the effects of image-plane rotary error, image-plane tilt error and optical-system distortions resulting from the on-orbit thermal environment. Since these issues affect the precision of star image point positions, a novel measurement error model based on the traditional error model is explored. Because of the orthonormal characteristics of the image-plane rotary-tilt errors and the strong nonlinearity among these error parameters, it is difficult to calibrate all the parameters simultaneously. To overcome this difficulty, a modified two-step calibration method for the new error model, based on the extended Kalman filter (EKF) and the least squares method (LSM), is presented. The former calibrates the principal point drift, focal length error and optical-system distortions, while the latter estimates the image-plane rotary-tilt errors. With this calibration method, the precision of the star image point positions influenced by the above errors is greatly improved, from 15.42% to 1.389%. Finally, simulation results demonstrate that the presented measurement error model for star sensors has higher precision, and that the proposed two-step method can effectively calibrate the model error parameters, clearly improving the calibration precision of on-orbit star sensors.

  19. IMU-Based Online Kinematic Calibration of Robot Manipulator

    PubMed Central

    2013-01-01

    Robot calibration is a useful diagnostic method for improving positioning accuracy in robot production and maintenance. An online robot self-calibration method based on an inertial measurement unit (IMU) is presented in this paper. The method requires the IMU to be rigidly attached to the robot manipulator, which makes it possible to obtain the orientation of the manipulator from the orientation of the IMU in real time. This paper proposes an efficient approach that incorporates the Factored Quaternion Algorithm (FQA) and a Kalman filter (KF) to estimate the orientation of the IMU. An extended Kalman filter (EKF) is then used to estimate the kinematic parameter errors. The proposed orientation estimation method results in improved reliability and accuracy in determining the orientation of the manipulator. Compared with existing vision-based self-calibration methods, the great advantage of this method is that it does not need complex steps such as camera calibration, image capture and corner detection, which makes the robot calibration procedure more autonomous in a dynamic manufacturing environment. Experimental studies on a GOOGOL GRB3016 robot show that this method has better accuracy, convenience and effectiveness than vision-based methods. PMID:24302854

  20. Augmented classical least squares multivariate spectral analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2004-02-03

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.
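    A minimal numerical sketch of the ACLS idea, using synthetic Gaussian "spectra" rather than anything from the patent: fit the pure-component spectra by classical least squares, take the dominant SVD direction of the calibration residuals as an extra augmentation vector, and predict with the augmented matrix so the unmodeled interferent no longer biases the concentration estimates.

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic spectra: two modeled analytes plus one unmodeled interferent
wav = np.linspace(0, 1, 200)
pure = np.vstack([np.exp(-(wav - 0.3) ** 2 / 0.005),
                  np.exp(-(wav - 0.6) ** 2 / 0.005)])
interf = np.exp(-(wav - 0.7) ** 2 / 0.01)        # overlaps the second analyte

C = rng.uniform(0.2, 1.0, (30, 2))               # known concentrations
c_int = rng.uniform(0.0, 0.5, (30, 1))           # unknown interferent amounts
A = C @ pure + c_int @ interf[None, :] + rng.normal(0, 1e-3, (30, 200))

# classical least squares: estimate pure spectra from known concentrations
K = np.linalg.lstsq(C, A, rcond=None)[0]

# ACLS-style augmentation: dominant residual shape becomes an extra "component"
resid = A - C @ K
_, _, Vt = np.linalg.svd(resid, full_matrices=False)
K_aug = np.vstack([K, Vt[0]])

# predict concentrations for a new mixture containing the interferent
c_new = np.array([0.7, 0.4])
a_new = c_new @ pure + 0.3 * interf
c_cls = np.linalg.lstsq(K.T, a_new, rcond=None)[0]
c_acls = np.linalg.lstsq(K_aug.T, a_new, rcond=None)[0][:2]
```

Plain CLS attributes part of the interferent to the overlapping analyte; the augmented model absorbs it in the extra column, which is the qualitative behavior the patent describes.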

  21. Augmented Classical Least Squares Multivariate Spectral Analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2005-07-26

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.

  22. Augmented Classical Least Squares Multivariate Spectral Analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2005-01-11

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.

  23. DEVELOPMENT OF GUIDELINES FOR CALIBRATING, VALIDATING, AND EVALUATING HYDROLOGIC AND WATER QUALITY MODELS: ASABE ENGINEERING PRACTICE 621

    USDA-ARS?s Scientific Manuscript database

    Information to support application of hydrologic and water quality (H/WQ) models abounds, yet modelers commonly use arbitrary, ad hoc methods to conduct, document, and report model calibration, validation, and evaluation. Consistent methods are needed to improve model calibration, validation, and evaluation.

  24. Application of composite small calibration objects in traffic accident scene photogrammetry.

    PubMed

    Chen, Qiang; Xu, Hongguo; Tan, Lidong

    2015-01-01

    In order to address the difficulty of arranging large calibration objects and the low measurement accuracy of small calibration objects in traffic accident scene photogrammetry, a photogrammetric method based on a composite of small calibration objects is proposed. Several small calibration objects are placed around the traffic accident scene, and the coordinate system of the composite calibration object is given based on one of them. By maintaining the relative position and coplanar relationship of the small calibration objects, the local coordinate system of each small calibration object is transformed into the coordinate system of the composite calibration object. The two-dimensional direct linear transformation method is improved based on minimizing the reprojection error of the calibration points of all objects. A rectified image is obtained using the nonlinear optimization method. The increased accuracy of traffic accident scene photogrammetry using a composite small calibration object is demonstrated through the analysis of field experiments and case studies.
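    The two-dimensional direct linear transformation at the core of the method estimates a homography from control-point correspondences via SVD; the sketch below (hypothetical coordinates and camera) shows the standard construction together with the reprojection-error check the paper minimizes.

```python
import numpy as np

def dlt_homography(src, dst):
    """2D DLT: build the 2n x 9 system and take the SVD null vector."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pts):
    p = np.c_[pts, np.ones(len(pts))] @ H.T
    return p[:, :2] / p[:, 2:3]

# hypothetical ground-truth homography and calibration points
H_true = np.array([[1.1, 0.02, 5.0],
                   [0.01, 0.95, -3.0],
                   [1e-4, 2e-4, 1.0]])
src = np.array([[0, 0], [4, 0], [4, 3], [0, 3], [2, 1], [1, 2]], float)
dst = project(H_true, src)

H_est = dlt_homography(src, dst)
rpe = np.abs(project(H_est, src) - dst).max()   # reprojection error
```

With a composite calibration object, the same system is simply stacked over the calibration points of all the small objects, so the solution minimizes the joint reprojection error.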

  25. Model Robust Calibration: Method and Application to Electronically-Scanned Pressure Transducers

    NASA Technical Reports Server (NTRS)

    Walker, Eric L.; Starnes, B. Alden; Birch, Jeffery B.; Mays, James E.

    2010-01-01

    This article presents the application of a recently developed statistical regression method to the controlled instrument calibration problem. The statistical method of Model Robust Regression (MRR), developed by Mays, Birch, and Starnes, is shown to improve instrument calibration by reducing the reliance of the calibration on a predetermined parametric (e.g. polynomial, exponential, logarithmic) model. This is accomplished by allowing fits from the predetermined parametric model to be augmented by a certain portion of a fit to the residuals from the initial regression using a nonparametric (locally parametric) regression technique. The method is demonstrated for the absolute scale calibration of silicon-based pressure transducers.
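    A toy version of the MRR blend, with invented data: fit the parametric model (here a straight line), smooth its residuals nonparametrically, and add back a portion λ of that residual fit. The bandwidth and λ below are illustrative fixed choices; MRR selects the mixing portion from the data.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 80)
y_true = 2 * x + 0.3 * np.sin(6 * np.pi * x)   # truth deviates from the linear model
y = y_true + rng.normal(0, 0.02, x.size)

# step 1: parametric fit (a straight line)
coef = np.polyfit(x, y, 1)
y_par = np.polyval(coef, x)

# step 2: nonparametric (Nadaraya-Watson) fit to the residuals
def kernel_smooth(x, r, h):
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    return (w @ r) / w.sum(axis=1)

resid_fit = kernel_smooth(x, y - y_par, h=0.03)

# step 3: blend -- lam controls how much of the residual fit is added back
lam = 0.9
y_mrr = y_par + lam * resid_fit

rmse_par = np.sqrt(np.mean((y_par - y_true) ** 2))
rmse_mrr = np.sqrt(np.mean((y_mrr - y_true) ** 2))
```

The parametric line alone cannot follow the oscillation, while the blended fit recovers it without abandoning the parametric backbone, which is the point of the approach.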

  6. Effect of the improved accelerometer calibration method on AIUB's GRACE monthly gravity field solution

    NASA Astrophysics Data System (ADS)

    Jean, Yoomin; Meyer, Ulrich; Arnold, Daniel; Bentel, Katrin; Jäggi, Adrian

    2017-04-01

    The monthly global gravity field solutions derived using the measurements from the GRACE (Gravity Recovery and Climate Experiment) satellites have been continuously improved by the processing centers. One of the improvements in the processing method is a more detailed calibration of the on-board accelerometers in the GRACE satellites. The accelerometer data calibration is usually restricted to the scale factors and biases. It has been assumed that the three different axes are perfectly orthogonal in the GRACE science reference frame. Recently, it was shown by Klinger and Mayer-Gürr (2016) that a fully-populated scale matrix considering the non-orthogonality of the axes and the misalignment of the GRACE science reference frame and the GRACE accelerometer frame improves the quality of the C20 coefficient in the GRACE monthly gravity field solutions. We investigate the effect of the more detailed calibration of the GRACE accelerometer data on the C20 coefficient in the case of the AIUB (Astronomical Institute of the University of Bern) processing method using the Celestial Mechanics Approach. We also investigate the effect of the new calibration parameters on the stochastic parameters in the Celestial Mechanics Approach.
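    The difference between the usual diagonal calibration (scale factors and biases only) and a fully populated scale matrix can be sketched with synthetic data. All numbers below are invented, and real GRACE accelerometer processing is far more involved; the sketch only shows the model calibrated = S·raw + b, with off-diagonal terms of S capturing non-orthogonality and frame misalignment.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic truth: a fully populated scale matrix with small
# off-diagonal terms, plus a bias per axis (made-up values).
S_true = np.array([[0.98,  0.004, -0.002],
                   [0.003, 1.01,   0.005],
                   [-0.001, 0.002, 0.99]])
bias = np.array([1e-7, 2e-7, -1e-7])

a_true = rng.normal(0, 1e-6, (1000, 3))            # reference accelerations
a_meas = (a_true - bias) @ np.linalg.inv(S_true).T  # raw sensor output

# Fully populated calibration: solve a_true ~ a_meas @ S.T + b.
X = np.column_stack([a_meas, np.ones(len(a_meas))])
coef, *_ = np.linalg.lstsq(X, a_true, rcond=None)
S_est = coef[:3].T
res_full = a_true - X @ coef

# Diagonal-only calibration for comparison: per-axis scale and bias.
res_diag = np.empty_like(a_true)
for k in range(3):
    Xk = np.column_stack([a_meas[:, k], np.ones(len(a_meas))])
    ck, *_ = np.linalg.lstsq(Xk, a_true[:, k], rcond=None)
    res_diag[:, k] = a_true[:, k] - Xk @ ck

print(np.abs(res_full).max(), np.sqrt(np.mean(res_diag ** 2)))
```

    With cross-axis coupling present, the diagonal model leaves a systematic residual that the fully populated matrix removes.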

  7. Self-calibration of cone-beam CT geometry using 3D–2D image registration

    PubMed Central

    Ouadah, S; Stayman, J W; Gang, G J; Ehtiati, T; Siewerdsen, J H

    2016-01-01

    Robotic C-arms are capable of complex orbits that can increase field of view, reduce artifacts, improve image quality, and/or reduce dose; however, it can be challenging to obtain accurate, reproducible geometric calibration required for image reconstruction for such complex orbits. This work presents a method for geometric calibration for an arbitrary source-detector orbit by registering 2D projection data to a previously acquired 3D image. It also yields a method by which calibration of simple circular orbits can be improved. The registration uses a normalized gradient information similarity metric and the covariance matrix adaptation-evolution strategy optimizer for robustness against local minima and changes in image content. The resulting transformation provides a ‘self-calibration’ of system geometry. The algorithm was tested in phantom studies using both a cone-beam CT (CBCT) test-bench and a robotic C-arm (Artis Zeego, Siemens Healthcare) for circular and non-circular orbits. Self-calibration performance was evaluated in terms of the full-width at half-maximum (FWHM) of the point spread function in CBCT reconstructions, the reprojection error (RPE) of steel ball bearings placed on each phantom, and the overall quality and presence of artifacts in CBCT images. In all cases, self-calibration improved the FWHM—e.g. on the CBCT bench, FWHM = 0.86 mm for conventional calibration compared to 0.65 mm for self-calibration (p < 0.001). Similar improvements were measured in RPE—e.g. on the robotic C-arm, RPE = 0.73 mm for conventional calibration compared to 0.55 mm for self-calibration (p < 0.001). Visible improvement was evident in CBCT reconstructions using self-calibration, particularly about high-contrast, high-frequency objects (e.g. temporal bone air cells and a surgical needle). 
The results indicate that self-calibration can improve even upon systems with presumably accurate geometric calibration and is applicable to situations where conventional calibration is not feasible, such as complex non-circular CBCT orbits and systems with irreproducible source-detector trajectory. PMID:26961687

  8. Self-calibration of cone-beam CT geometry using 3D-2D image registration

    NASA Astrophysics Data System (ADS)

    Ouadah, S.; Stayman, J. W.; Gang, G. J.; Ehtiati, T.; Siewerdsen, J. H.

    2016-04-01

    Robotic C-arms are capable of complex orbits that can increase field of view, reduce artifacts, improve image quality, and/or reduce dose; however, it can be challenging to obtain accurate, reproducible geometric calibration required for image reconstruction for such complex orbits. This work presents a method for geometric calibration for an arbitrary source-detector orbit by registering 2D projection data to a previously acquired 3D image. It also yields a method by which calibration of simple circular orbits can be improved. The registration uses a normalized gradient information similarity metric and the covariance matrix adaptation-evolution strategy optimizer for robustness against local minima and changes in image content. The resulting transformation provides a ‘self-calibration’ of system geometry. The algorithm was tested in phantom studies using both a cone-beam CT (CBCT) test-bench and a robotic C-arm (Artis Zeego, Siemens Healthcare) for circular and non-circular orbits. Self-calibration performance was evaluated in terms of the full-width at half-maximum (FWHM) of the point spread function in CBCT reconstructions, the reprojection error (RPE) of steel ball bearings placed on each phantom, and the overall quality and presence of artifacts in CBCT images. In all cases, self-calibration improved the FWHM—e.g. on the CBCT bench, FWHM  =  0.86 mm for conventional calibration compared to 0.65 mm for self-calibration (p  <  0.001). Similar improvements were measured in RPE—e.g. on the robotic C-arm, RPE  =  0.73 mm for conventional calibration compared to 0.55 mm for self-calibration (p  <  0.001). Visible improvement was evident in CBCT reconstructions using self-calibration, particularly about high-contrast, high-frequency objects (e.g. temporal bone air cells and a surgical needle). 
The results indicate that self-calibration can improve even upon systems with presumably accurate geometric calibration and is applicable to situations where conventional calibration is not feasible, such as complex non-circular CBCT orbits and systems with irreproducible source-detector trajectory.

  9. Optics-Only Calibration of a Neural-Net Based Optical NDE Method for Structural Health Monitoring

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.

    2004-01-01

    A calibration process is presented that uses optical measurements alone to calibrate a neural-net based NDE method. The method itself detects small changes in the vibration mode shapes of structures. The optics-only calibration process confirms previous work showing that the sensitivity to vibration-amplitude changes can be as small as 10 nanometers; a more practical value in an NDE service laboratory is shown to be 50 nanometers. Both model-generated and experimental calibrations are demonstrated using two implementations of the calibration technique. The implementations are based on previously published demonstrations of the NDE method and an alternative calibration procedure that depends on comparing neural-net and point-sensor measurements. The optics-only calibration method, unlike the alternative method, does not require modifying the structure being tested or creating calibration objects. The calibration process can be used to test improvements in the NDE process and to develop a vibration-mode independence of damage-detection sensitivity. The calibration effort was intended to support NASA's objective to promote safety in the operation of ground test facilities, and aviation safety in general, by allowing detection of the gradual onset of structural changes and damage.

  10. Improved dewpoint-probe calibration

    NASA Technical Reports Server (NTRS)

    Stephenson, J. G.; Theodore, E. A.

    1978-01-01

    Relatively simple pressure-control apparatus calibrates dewpoint probes considerably faster than conventional methods, with no loss of accuracy. Technique requires only a pressure measurement at each calibration point and a single absolute-humidity measurement at the beginning of a run. Several probes can be calibrated simultaneously, and points can be checked above room temperature.
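    The record does not give formulas, but the underlying psychrometrics can be sketched under a standard reading: if the absolute humidity (mixing ratio) in the chamber is held fixed, the water-vapor partial pressure scales with total pressure, and each pressure setting then implies a computable dewpoint via the Magnus formula. The constants and starting values below are illustrative assumptions, not values from the record.

```python
import math

# Magnus-formula constants over water (one common parameterization;
# exact coefficients vary between references).
A, B, C = 6.112, 17.62, 243.12  # hPa, dimensionless, degC

def saturation_vapor_pressure(t_c):
    """Saturation vapor pressure (hPa) at temperature t_c (degC)."""
    return A * math.exp(B * t_c / (C + t_c))

def dewpoint_from_vapor_pressure(e_hpa):
    """Invert the Magnus formula to get the dewpoint (degC)."""
    ln = math.log(e_hpa / A)
    return C * ln / (B - ln)

def dewpoint_at_pressure(e0_hpa, p0_hpa, p_hpa):
    """With the mixing ratio fixed, vapor partial pressure scales with
    total pressure, so each chamber pressure implies a known dewpoint."""
    return dewpoint_from_vapor_pressure(e0_hpa * p_hpa / p0_hpa)

# One absolute-humidity measurement at the start of the run:
e0, p0 = 12.0, 1013.25  # vapor pressure (hPa) at ambient pressure
for p in (1013.25, 800.0, 600.0):
    print(p, round(dewpoint_at_pressure(e0, p0, p), 2))
```

    Lowering the chamber pressure lowers the implied dewpoint, which is what lets a pressure measurement alone define each calibration point.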

  11. Optimized star sensors laboratory calibration method using a regularization neural network.

    PubMed

    Zhang, Chengfen; Niu, Yanxiong; Zhang, Hao; Lu, Jiazhen

    2018-02-10

    High-precision ground calibration is essential to ensure the performance of star sensors. However, complex distortion and coupled error sources make traditional calibration methods difficult to apply, especially for star sensors with a large field of view (FOV). Although increasing the complexity of the model is an effective way to improve calibration accuracy, it significantly increases the amount of calibration data required. To achieve high-precision calibration of star sensors with large FOV, a novel laboratory calibration method based on a regularization neural network is proposed. A multi-layer neural network is designed to directly represent the mapping between the star vector and the corresponding star-point coordinate. To ensure the generalization performance of the network, regularization strategies are incorporated into both the network structure and the training algorithm. Simulation and experiment results demonstrate that the proposed method achieves high precision with less calibration data and without any other a priori information. Compared with traditional methods, the calibration error of the star sensor decreased by about 30%. The proposed method can satisfy the precision requirements of large-FOV star sensors.
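    A minimal numpy sketch of the idea, on synthetic data: a small tanh network maps star vectors to focal-plane coordinates, with L2 weight decay standing in for the paper's regularization strategies. The pinhole-plus-distortion data, network size, and training settings below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for calibration data: unit star vectors mapped to
# focal-plane coordinates by a pinhole model plus mild radial distortion.
v = rng.normal(size=(500, 3))
v[:, 2] = np.abs(v[:, 2]) + 2.0
v /= np.linalg.norm(v, axis=1, keepdims=True)
t = v[:, :2] / v[:, 2:3]
t *= 1 + 0.05 * (t ** 2).sum(axis=1, keepdims=True)  # distortion

# One hidden layer of tanh units; L2 weight decay regularizes;
# plain full-batch gradient descent trains the network.
H, lam, lr = 32, 1e-4, 0.05
W1 = rng.normal(0, 0.3, (3, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.3, (H, 2)); b2 = np.zeros(2)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(v)
rmse0 = np.sqrt(np.mean((pred0 - t) ** 2))

for _ in range(4000):
    h, pred = forward(v)
    err = (pred - t) / len(v)
    dh = (err @ W2.T) * (1 - h ** 2)          # backprop through tanh
    W2 -= lr * (h.T @ err + lam * W2); b2 -= lr * err.sum(axis=0)
    W1 -= lr * (v.T @ dh + lam * W1); b1 -= lr * dh.sum(axis=0)

_, pred = forward(v)
rmse = np.sqrt(np.mean((pred - t) ** 2))
print(rmse0, rmse)
```

    The network learns the star-vector-to-coordinate mapping directly, without an explicit distortion model, which is the core of the approach described above.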

  12. Spectroradiometric considerations for advanced land observing systems

    NASA Technical Reports Server (NTRS)

    Slater, P. N.

    1986-01-01

    Research aimed at improving the inflight absolute radiometric calibration of advanced land observing systems was initiated. Emphasis was on the satellite sensor calibration program at White Sands. Topics addressed include: absolute radiometric calibration of advanced remote sensing; atmospheric effects on reflected radiation; inflight radiometric calibration; field radiometric methods for reflectance and atmospheric measurement; and calibration of field reflectance radiometers.

  13. Radiometer calibration methods and resulting irradiance differences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, Aron; Sengupta, Manajit; Andreas, Afshin

    Accurate solar radiation measured by radiometers depends on instrument performance specifications, installation method, calibration procedure, measurement conditions, maintenance practices, location, and environmental conditions. This study addresses the effect of different calibration methodologies and resulting differences provided by radiometric calibration service providers such as the National Renewable Energy Laboratory (NREL) and manufacturers of radiometers. Some of these methods calibrate radiometers indoors and some outdoors. To establish or understand the differences in calibration methodologies, we processed and analyzed field-measured data from radiometers deployed for 10 months at NREL's Solar Radiation Research Laboratory. These different methods of calibration resulted in a difference of ±1% to ±2% in solar irradiance measurements. Analyzing these differences will ultimately assist in determining the uncertainties of the field radiometer data and will help develop a consensus on a standard for calibration. Further advancing procedures for precisely calibrating radiometers to world reference standards that reduce measurement uncertainties will help the accurate prediction of the output of planned solar conversion projects and improve the bankability of financing solar projects.

  14. VIIRS reflective solar bands on-orbit calibration five-year update: extension and improvements

    NASA Astrophysics Data System (ADS)

    Sun, Junqiang; Wang, Menghua

    2016-09-01

    The Suomi National Polar-orbiting Partnership (SNPP) Visible Infrared Imaging Radiometer Suite (VIIRS) has been on orbit for almost five years. VIIRS has 22 spectral bands, fourteen of which are reflective solar bands (RSB) covering a spectral range from 0.410 to 2.25 μm. The SNPP VIIRS RSB have performed very well since launch, and their radiometric calibration has reached a mature stage. Numerous improvements have been made in the standard RSB calibration methodology. Additionally, a hybrid calibration method, which combines the advantages of solar diffuser calibration and lunar calibration while avoiding the drawbacks of each, completes a highly accurate calibration for the VIIRS RSB. The calibrated RSB data record significantly benefits the ocean color products, whose stringent requirements are especially sensitive to calibration accuracy, and has helped them reach maturity and high quality. Nevertheless, many challenging issues remain to be investigated for further improvement of the VIIRS sensor data records (SDR). In this presentation, we report the robust results of the RSB calibrations and the resulting ocean product performance. The reprocessed SDR is now undergoing additional science tests, beyond the ocean science tests completed one year ago, in preparation for becoming the mission-long operational SDR.

  15. Improved Regression Analysis of Temperature-Dependent Strain-Gage Balance Calibration Data

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2015-01-01

    An improved approach is discussed that may be used to directly include first and second order temperature effects in the load prediction algorithm of a wind tunnel strain-gage balance. The improved approach was designed for the Iterative Method that fits strain-gage outputs as a function of calibration loads and uses a load iteration scheme during the wind tunnel test to predict loads from measured gage outputs. The improved approach assumes that the strain-gage balance is at a constant uniform temperature when it is calibrated and used. First, the method introduces a new independent variable for the regression analysis of the balance calibration data. The new variable is designed as the difference between the uniform temperature of the balance and a global reference temperature. This reference temperature should be the primary calibration temperature of the balance so that, if needed, a tare load iteration can be performed. Then, two temperature-dependent terms are included in the regression models of the gage outputs. They are the temperature difference itself and the square of the temperature difference. Simulated temperature-dependent data obtained from Triumph Aerospace's 2013 calibration of NASA's ARC-30K five component semi-span balance is used to illustrate the application of the improved approach.
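    The regression structure described above is easy to sketch: augment the load regressors with dT = T − T_ref and dT², then solve by least squares. The single-gage simulated data below is invented for illustration; a real balance calibration involves many gages and combined load cases.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated single-gage calibration: output depends on load plus first-
# and second-order effects of the temperature difference dT = T - T_ref.
n = 200
load = rng.uniform(-100, 100, n)
dT = rng.uniform(-20, 20, n)
out = 5.0 * load + 0.8 * dT + 0.02 * dT**2 + rng.normal(0, 0.1, n)

# Regression models of the gage output: with and without the two
# temperature-dependent terms (intercept included in both).
X_temp = np.column_stack([np.ones(n), load, dT, dT**2])
X_plain = np.column_stack([np.ones(n), load])

c_temp, *_ = np.linalg.lstsq(X_temp, out, rcond=None)
c_plain, *_ = np.linalg.lstsq(X_plain, out, rcond=None)

rms_temp = np.sqrt(np.mean((X_temp @ c_temp - out) ** 2))
rms_plain = np.sqrt(np.mean((X_plain @ c_plain - out) ** 2))
print(rms_plain, rms_temp)
```

    Ignoring the temperature terms leaves a large systematic residual; including dT and dT² recovers the load sensitivity essentially exactly.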

  16. Standing on the shoulders of giants: improving medical image segmentation via bias correction.

    PubMed

    Wang, Hongzhi; Das, Sandhitsu; Pluta, John; Craige, Caryne; Altinay, Murat; Avants, Brian; Weiner, Michael; Mueller, Susanne; Yushkevich, Paul

    2010-01-01

    We propose a simple strategy to improve automatic medical image segmentation. The key idea is that without deep understanding of a segmentation method, we can still improve its performance by directly calibrating its results with respect to manual segmentation. We formulate the calibration process as a bias correction problem, which is addressed by machine learning using training data. We apply this methodology on three segmentation problems/methods and show significant improvements for all of them.
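    A deliberately minimal sketch of the calibration idea, on made-up data: treat the automatic segmenter as a black box and learn, from (automatic, manual) training pairs, a corrected decision rule. The paper's actual bias correction is a richer machine-learning model; this stand-in only illustrates the "calibrate the output, not the method" principle with a learned threshold.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: a black-box "automatic" segmenter emits a per-voxel score
# that is systematically biased relative to the manual labels.
n = 5000
label = rng.integers(0, 2, n)                        # manual segmentation
score = 0.3 * label + 0.45 + rng.normal(0, 0.08, n)  # biased auto score

# Bias correction as the simplest possible learned calibration: choose
# the decision threshold that best matches the manual labels on
# training data, rather than trusting the segmenter's default 0.5.
train, test = slice(0, 2500), slice(2500, None)
cands = np.linspace(0.0, 1.0, 201)
accs = [np.mean((score[train] > c) == label[train]) for c in cands]
best = cands[int(np.argmax(accs))]

acc_naive = np.mean((score[test] > 0.5) == label[test])
acc_corr = np.mean((score[test] > best) == label[test])
print(round(best, 3), round(acc_naive, 3), round(acc_corr, 3))
```

    No knowledge of the segmenter's internals is used; only its outputs and the manual reference are needed, which is the point of the strategy.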

  17. Medical color displays and their color calibration: investigations of various calibration methods, tools, and potential improvement in color difference ΔE

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Hashmi, Syed F.; Dallas, William J.; Krupinski, Elizabeth A.; Rehm, Kelly; Fan, Jiahua

    2010-08-01

    Our laboratory has investigated the efficacy of a suite of color calibration and monitor profiling packages which employ a variety of color measurement sensors. Each of the methods computes gamma correction tables for the red, green and blue color channels of a monitor that attempt to: a) match a desired luminance range and tone reproduction curve; and b) maintain a target neutral point across the range of grey values. All of the methods examined here produce International Color Consortium (ICC) profiles that describe the color rendering capabilities of the monitor after calibration. Color profiles incorporate a transfer matrix that establishes the relationship between RGB driving levels and the International Commission on Illumination (CIE) XYZ (tristimulus) values of the resulting on-screen color; the matrix is developed by displaying color patches of known RGB values on the monitor and measuring the tristimulus values with a sensor. The number and chromatic distribution of color patches varies across methods and is usually not under user control. In this work we examine the effect of employing differing calibration and profiling methods on rendition of color images. A series of color patches encoded in sRGB color space were presented on the monitor using color-management software that utilized the ICC profile produced by each method. The patches were displayed on the calibrated monitor and measured with a Minolta CS200 colorimeter. Differences in intended and achieved luminance and chromaticity were computed using the CIE DE2000 color-difference metric, in which a value of ΔE = 1 is generally considered to be approximately one just noticeable difference (JND) in color. We observed between one and 17 JND's for individual colors, depending on calibration method and target. 
As an extension of this fundamental work, we further improved our calibration method by defining concrete calibration parameters for the display, using the NEC wide-gamut puck, and verifying that those calibration parameters conformed with the help of a state-of-the-art spectroradiometer (PR670). As a result of adding the PR670, together with an in-house method of profiling and characterization, the color difference ΔE improved substantially.

  18. Multimodal Spatial Calibration for Accurately Registering EEG Sensor Positions

    PubMed Central

    Chen, Shengyong; Xiao, Gang; Li, Xiaoli

    2014-01-01

    This paper proposes a fast and accurate calibration method for multiple multimodal sensors using a novel photogrammetry system for fast localization of EEG sensors. The EEG sensors are placed on the human head, and multimodal sensors are installed around the head to simultaneously obtain all EEG sensor positions. A multiple-view calibration process is implemented to obtain the transformations between views. We first develop an efficient local repair algorithm to improve the depth map, and then design a special calibration body. Based on these, accurate and robust calibration results can be achieved. We evaluate the proposed method using the corners of a chessboard calibration plate. Experimental results demonstrate that the proposed method achieves good performance and can be further applied to EEG source localization on the human brain. PMID:24803954

  19. Application of Composite Small Calibration Objects in Traffic Accident Scene Photogrammetry

    PubMed Central

    Chen, Qiang; Xu, Hongguo; Tan, Lidong

    2015-01-01

    In order to address the difficulty of arranging large calibration objects and the low measurement accuracy of small calibration objects in traffic accident scene photogrammetry, a photogrammetric method based on a composite of small calibration objects is proposed. Several small calibration objects are placed around the traffic accident scene, and the coordinate system of the composite calibration object is given based on one of them. By maintaining the relative position and coplanar relationship of the small calibration objects, the local coordinate system of each small calibration object is transformed into the coordinate system of the composite calibration object. The two-dimensional direct linear transformation method is improved based on minimizing the reprojection error of the calibration points of all objects. A rectified image is obtained using the nonlinear optimization method. The increased accuracy of traffic accident scene photogrammetry using a composite small calibration object is demonstrated through the analysis of field experiments and case studies. PMID:26011052
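    The 2D direct linear transformation step can be sketched as standard homography estimation: stack two linear constraints per calibration point (drawn from all small objects, expressed in the composite coordinate system) and take the SVD null vector; the reprojection error computed at the end is the quantity a nonlinear refinement such as the paper's would minimize. The object layouts and "true" homography below are synthetic.

```python
import numpy as np

def dlt_homography(src, dst):
    """2D direct linear transformation: estimate the 3x3 homography H
    mapping src -> dst from >= 4 point correspondences (SVD solution)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def reproject(H, pts):
    p = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return p[:, :2] / p[:, 2:3]

# Ground-plane corner points of two hypothetical small calibration
# objects, expressed in one composite coordinate system (metres).
obj1 = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
obj2 = obj1 + [3.0, 0.5]
world = np.vstack([obj1, obj2])

# Synthetic image observations generated from a known homography.
H_true = np.array([[200.0, -30.0, 320.0],
                   [10.0, 180.0, 240.0],
                   [0.01, 0.02, 1.0]])
image = reproject(H_true, world)

# Using the calibration points of all objects jointly constrains H; the
# per-point reprojection error is the refinement objective.
H_est = dlt_homography(world, image)
err = np.linalg.norm(reproject(H_est, world) - image, axis=1)
print(err.max())
```

    With noisy real measurements the DLT solution is only a starting point, which is why the paper refines it by nonlinear minimization of this reprojection error.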

  20. [A plane-based hand-eye calibration method for surgical robots].

    PubMed

    Zeng, Bowei; Meng, Fanle; Ding, Hui; Liu, Wenbo; Wu, Di; Wang, Guangzhi

    2017-04-01

    In order to calibrate the hand-eye transformation of a surgical robot and laser range finder (LRF), a calibration algorithm based on a planar template was designed. A mathematical model of the planar template was given and the approach to solving its equations was derived. To address measurement error in a practical system, we proposed a new algorithm for selecting coplanar data. This algorithm can effectively eliminate data with considerable measurement error and thereby improve the calibration accuracy. Furthermore, three orthogonal planes were used to improve the calibration accuracy, with a nonlinear optimization applied to the hand-eye calibration. To verify the calibration precision, we used the LRF to measure fixed points from different directions and the surfaces of a cuboid. Experimental results indicated that the precision of the single-planar-template method was (1.37±0.24) mm, and that of the three-orthogonal-planes method was (0.37±0.05) mm. Moreover, the mean fiducial registration error (FRE) of three-dimensional (3D) points was 0.24 mm and the mean target registration error (TRE) was 0.26 mm. The maximum angle measurement error was 0.4 degrees. These results show that the presented method is effective, achieves high accuracy, and can meet the precise localization requirements of surgical robots.

  1. Non-matrix Matched Glass Disk Calibration Standards Improve XRF Micronutrient Analysis of Wheat Grain across Five Laboratories in India

    PubMed Central

    Guild, Georgia E.; Stangoulis, James C. R.

    2016-01-01

    Within the HarvestPlus program there are many collaborators currently using X-Ray Fluorescence (XRF) spectroscopy to measure Fe and Zn in their target crops. In India, five HarvestPlus wheat collaborators have laboratories that conduct this analysis and their throughput has increased significantly. The benefits of using XRF are its ease of use, minimal sample preparation and high throughput analysis. The lack of commercially available calibration standards has led to a need for alternative calibration arrangements for many of the instruments. Consequently, the majority of instruments have either been installed with an electronic transfer of an original grain calibration set developed by a preferred lab, or with a locally supplied calibration. Unfortunately, neither of these methods has been entirely successful. The electronic transfer cannot account for small variations between the instruments, whereas a locally provided calibration set relies heavily on the accuracy of the reference analysis method, which is particularly difficult to achieve when analyzing low micronutrient levels. We therefore developed a calibration method that uses non-matrix matched glass disks. Here we present the validation of this method and show that this calibration approach can improve the reproducibility and accuracy of whole-grain wheat analysis on 5 different XRF instruments across the HarvestPlus breeding program. PMID:27375644

  2. Convert a low-cost sensor to a colorimeter using an improved regression method

    NASA Astrophysics Data System (ADS)

    Wu, Yifeng

    2008-01-01

    Closed-loop color calibration is a process to maintain consistent color reproduction for color printers. To perform closed-loop color calibration, a pre-designed color target is printed and automatically measured by a color measuring instrument. A low-cost sensor has been embedded in the printer to perform the color measurement, and a series of sensor calibration and color conversion methods have been developed. The purpose is to obtain accurate colorimetric measurements from the data measured by the low-cost sensor. To achieve this, we must carefully calibrate the sensor and minimize all possible errors during the color conversion. After comparing several classical color conversion methods, a regression-based color conversion method was selected. Regression is a powerful method for estimating color conversion functions, but the main difficulty in using it is finding an appropriate function to describe the relationship between the input and output data. In this paper, we propose using 1D pre-linearization tables to improve the linearity between the input sensor measurements and the output colorimetric data. This increases the accuracy of the regression, and thereby the accuracy of the color conversion.
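    A sketch of the pre-linearization idea with a toy sensor model (the gamma curve, mixing matrix, and gray-ramp LUT below are invented for illustration, not the paper's characterization): 1D tables built from a neutral ramp linearize each channel, after which a simple linear regression to the colorimetric values fits far better than regression on the raw readings.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy model: the low-cost sensor reads a gamma-distorted linear mix of
# the true XYZ stimulus. Rows of M sum to 1 so grays map to grays.
M = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.7, 0.1],
              [0.1, 0.2, 0.7]])

def sensor(xyz):
    return (xyz @ M.T) ** 0.45          # channel nonlinearity

# 1D pre-linearization tables built from a measured gray ramp.
gray = np.linspace(0.01, 1.0, 32)
ramp_rgb = sensor(np.column_stack([gray] * 3))

def prelin(rgb):
    return np.column_stack([np.interp(rgb[:, k], ramp_rgb[:, k], gray)
                            for k in range(3)])

# Calibration patches with known XYZ values.
xyz = rng.uniform(0.05, 1.0, (200, 3))
rgb = sensor(xyz)

def fit_and_rmse(features):
    X = np.column_stack([features, np.ones(len(features))])
    coef, *_ = np.linalg.lstsq(X, xyz, rcond=None)
    return np.sqrt(np.mean((X @ coef - xyz) ** 2))

rmse_raw = fit_and_rmse(rgb)           # regression on raw sensor RGB
rmse_lin = fit_and_rmse(prelin(rgb))   # regression after 1D LUTs
print(rmse_raw, rmse_lin)
```

    The LUTs carry the per-channel nonlinearity, so the remaining RGB-to-XYZ relationship is nearly linear and well within reach of the regression.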

  3. Techniques for precise energy calibration of particle pixel detectors

    NASA Astrophysics Data System (ADS)

    Kroupa, M.; Campbell-Ricketts, T.; Bahadori, A.; Empl, A.

    2017-03-01

    We demonstrate techniques to improve the accuracy of the energy calibration of Timepix pixel detectors, used for the measurement of energetic particles. The typical signal from such particles spreads among many pixels due to charge sharing effects. As a consequence, the deposited energy in each pixel cannot be reconstructed unless the detector is calibrated, limiting the usability of such signals for calibration. To avoid this shortcoming, we calibrate using low energy X-rays. However, charge sharing effects still occur, resulting in part of the energy being deposited in adjacent pixels and possibly lost. This systematic error in the calibration process results in an error of about 5% in the energy measurements of calibrated devices. We use FLUKA simulations to assess the magnitude of charge sharing effects, allowing a corrected energy calibration to be performed on several Timepix pixel detectors and resulting in substantial improvement in energy deposition measurements. Next, we address shortcomings in calibration associated with the huge range (from kiloelectron-volts to megaelectron-volts) of energy deposited per pixel which result in a nonlinear energy response over the full range. We introduce a new method to characterize the non-linear response of the Timepix detectors at high input energies. We demonstrate improvement using a broad range of particle types and energies, showing that the new method reduces the energy measurement errors, in some cases by more than 90%.
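    For context, per-pixel Timepix energy calibration is commonly modeled with the surrogate function f(E) = aE + b − c/(E − t), which is linear at high energy and rolls off near the threshold; calibrating means fitting a, b, c, t from known X-ray lines, and applying the calibration means inverting the function. The sketch below shows that inversion with made-up pixel parameters; the paper's additional high-energy nonlinearity correction goes beyond this simple form.

```python
import numpy as np

def tot_response(E, a, b, c, t):
    """Common Timepix per-pixel surrogate: time-over-threshold as a
    function of deposited energy, linear at high E with a roll-off
    near the threshold parameter t."""
    return a * E + b - c / (E - t)

def energy_from_tot(tot, a, b, c, t):
    """Invert the surrogate by solving
    a*E^2 + (b - tot - a*t)*E + (t*(tot - b) - c) = 0 for E > t."""
    A, B, C = a, b - tot - a * t, t * (tot - b) - c
    return (-B + np.sqrt(B * B - 4 * A * C)) / (2 * A)

# Made-up pixel parameters (energies in keV, TOT in counts).
a, b, c, t = 2.0, 50.0, 150.0, 3.0
E = np.linspace(6.0, 60.0, 12)
tot = tot_response(E, a, b, c, t)
E_back = energy_from_tot(tot, a, b, c, t)
print(np.max(np.abs(E_back - E)))
```

    The larger quadratic root is the physical one (E > t); the round trip is exact, so any calibration error comes from the fitted parameters, not the inversion.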

  4. Techniques for precise energy calibration of particle pixel detectors.

    PubMed

    Kroupa, M; Campbell-Ricketts, T; Bahadori, A; Empl, A

    2017-03-01

    We demonstrate techniques to improve the accuracy of the energy calibration of Timepix pixel detectors, used for the measurement of energetic particles. The typical signal from such particles spreads among many pixels due to charge sharing effects. As a consequence, the deposited energy in each pixel cannot be reconstructed unless the detector is calibrated, limiting the usability of such signals for calibration. To avoid this shortcoming, we calibrate using low energy X-rays. However, charge sharing effects still occur, resulting in part of the energy being deposited in adjacent pixels and possibly lost. This systematic error in the calibration process results in an error of about 5% in the energy measurements of calibrated devices. We use FLUKA simulations to assess the magnitude of charge sharing effects, allowing a corrected energy calibration to be performed on several Timepix pixel detectors and resulting in substantial improvement in energy deposition measurements. Next, we address shortcomings in calibration associated with the huge range (from kiloelectron-volts to megaelectron-volts) of energy deposited per pixel which result in a nonlinear energy response over the full range. We introduce a new method to characterize the non-linear response of the Timepix detectors at high input energies. We demonstrate improvement using a broad range of particle types and energies, showing that the new method reduces the energy measurement errors, in some cases by more than 90%.

  5. Multiplexed fluctuation-dissipation-theorem calibration of optical tweezers inside living cells

    NASA Astrophysics Data System (ADS)

    Yan, Hao; Johnston, Jessica F.; Cahn, Sidney B.; King, Megan C.; Mochrie, Simon G. J.

    2017-11-01

    In order to apply optical tweezers-based force measurements within an uncharacterized viscoelastic medium such as the cytoplasm of a living cell, a quantitative calibration method that may be applied in this complex environment is needed. We describe an improved version of the fluctuation-dissipation-theorem calibration method, which has been developed to perform in situ calibration in viscoelastic media without prior knowledge of the trapped object. Using this calibration procedure, it is possible to extract values of the medium's viscoelastic moduli as well as the force constant describing the optical trap. To demonstrate our method, we calibrate an optical trap in water, in polyethylene oxide solutions of different concentrations, and inside living fission yeast (S. pombe).

  6. Simulation of temperature field for temperature-controlled radio frequency ablation using a hyperbolic bioheat equation and temperature-varied voltage calibration: a liver-mimicking phantom study.

    PubMed

    Zhang, Man; Zhou, Zhuhuang; Wu, Shuicai; Lin, Lan; Gao, Hongjian; Feng, Yusheng

    2015-12-21

    This study aims to improve the accuracy of temperature simulation for temperature-controlled radio frequency ablation (RFA). We proposed a new voltage-calibration method in the simulation and investigated the feasibility of a hyperbolic bioheat equation (HBE) in RFA simulations with longer durations and higher power. A total of 40 RFA experiments were conducted in a liver-mimicking phantom. Four mathematical models with multipolar electrodes were developed by the finite element method in COMSOL software: HBE with/without voltage calibration, and the Pennes bioheat equation (PBE) with/without voltage calibration. The temperature-varied voltage calibration used in the simulation was calculated from an experimental power output and the temperature-dependent resistance of liver tissue. We employed the HBE in simulation with a delay time τ of 16 s. First, for simulations by each kind of bioheat equation (PBE or HBE), we compared the differences between the temperature-varied voltage-calibration and the fixed-voltage values used in the simulations. Then, comparisons were conducted between the PBE and the HBE in simulations with temperature-varied voltage calibration. We verified the simulation results by experimental temperature measurements at nine specific points of the tissue phantom. The results showed that: (1) the proposed voltage-calibration method improved the simulation accuracy of temperature-controlled RFA for both the PBE and the HBE, and (2) for temperature-controlled RFA simulation with the temperature-varied voltage calibration, the HBE method was 0.55 °C more accurate than the PBE method. The proposed temperature-varied voltage calibration may be useful in temperature field simulations of temperature-controlled RFA. Besides, the HBE may be used as an alternative in the simulation of long-duration high-power RFA.
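    The voltage calibration described above can be sketched from the power balance P = V²/R(T): given the measured power output and a temperature-dependent tissue resistance, the voltage boundary condition applied in the simulation follows directly. The linear resistance model below is a made-up stand-in for the experimentally derived R(T).

```python
import numpy as np

def resistance(T_c, r0=80.0, alpha=-0.004, T_ref=37.0):
    """Illustrative temperature-dependent tissue resistance (Ohms);
    resistance falls mildly as the tissue heats. Made-up coefficients."""
    return r0 * (1 + alpha * (T_c - T_ref))

def calibrated_voltage(power_w, T_c):
    """P = V^2 / R, so the voltage applied in the simulation tracks the
    measured power and the current tissue temperature: V = sqrt(P*R(T))."""
    return np.sqrt(power_w * resistance(T_c))

for T in (37.0, 60.0, 90.0):
    print(T, round(calibrated_voltage(20.0, T), 2))
```

    A fixed-voltage simulation ignores this temperature feedback, which is the discrepancy the temperature-varied calibration is designed to remove.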

  7. Brightness checkerboard lattice method for the calibration of the coaxial reverse Hartmann test

    NASA Astrophysics Data System (ADS)

    Li, Xinji; Hui, Mei; Li, Ning; Hu, Shinan; Liu, Ming; Kong, Lingqin; Dong, Liquan; Zhao, Yuejin

    2018-01-01

    The coaxial reverse Hartmann test (RHT) is widely used in the measurement of large aspheric surfaces as an auxiliary to interferometric measurement because of its large dynamic range, flexibility in testing low-frequency surface errors, and low cost. The accuracy of the coaxial RHT depends on its calibration. However, the calibration process remains inefficient, and the signal-to-noise ratio limits the accuracy of the calibration. In this paper, brightness checkerboard lattices are used to replace the traditional dot matrix. The brightness checkerboard method reduces the number of dot-matrix projections in the calibration process, thus improving efficiency. An LCD screen displays a brightness checkerboard lattice in which brighter and darker checkerboards are alternately arranged. From the image on the detector, the relationship between rays at certain angles and the photosensitive positions in the detector coordinates can be obtained, and a differential de-noising method effectively reduces the impact of noise on the measurement results. Simulation and experiment proved the feasibility of the method. Theoretical analysis and experimental results show that the efficiency of the brightness checkerboard lattices is about four times that of the traditional dot matrix, and that the signal-to-noise ratio of the calibration is significantly improved.

  8. Impacts of Cross-Platform Vicarious Calibration on the Deep Blue Aerosol Retrievals for Moderate Resolution Imaging Spectroradiometer Aboard Terra

    NASA Technical Reports Server (NTRS)

    Jeong, Myeong-Jae; Hsu, N. Christina; Kwiatkowska, Ewa J.; Franz, Bryan A.; Meister, Gerhard; Salustro, Clare E.

    2012-01-01

    The retrieval of aerosol properties from spaceborne sensors requires highly accurate and precise radiometric measurements, thus placing stringent requirements on sensor calibration and characterization. For the Terra/Moderate Resolution Imaging Spectroradiometer (MODIS), the characteristics of the detectors of certain bands, particularly band 8 [(B8); 412 nm], have changed significantly over time, leading to increased calibration uncertainty. In this paper, we explore the possibility of utilizing a cross-calibration method, developed for characterizing the Terra/MODIS detectors in the ocean bands by the National Aeronautics and Space Administration Ocean Biology Processing Group, to improve aerosol retrieval over bright land surfaces. We found that the Terra/MODIS B8 reflectance corrected using the cross-calibration method resulted in significant improvements in the retrieved aerosol optical thickness when compared with that from the Multi-angle Imaging Spectroradiometer, Aqua/MODIS, and the Aerosol Robotic Network. The method reported in this paper is implemented in the operational processing of the Terra/MODIS Deep Blue aerosol products.

  9. Integrated calibration sphere and calibration step fixture for improved coordinate measurement machine calibration

    DOEpatents

    Clifford, Harry J [Los Alamos, NM

    2011-03-22

    A method and apparatus for mounting a calibration sphere to a calibration fixture for Coordinate Measurement Machine (CMM) calibration and qualification is described; it decreases the time required for such qualification, allowing the CMM to be used more productively. A number of embodiments are disclosed that allow new and retrofit manufacture to perform as integrated calibration sphere and calibration fixture devices. This invention renders unnecessary the removal of a calibration sphere prior to CMM measurement of calibration features on calibration fixtures, thereby greatly reducing the time spent qualifying a CMM.

  10. A novel dual-camera calibration method for 3D optical measurement

    NASA Astrophysics Data System (ADS)

    Gai, Shaoyan; Da, Feipeng; Dai, Xianqiang

    2018-05-01

    A novel dual-camera calibration method is presented. In classic methods, the camera parameters are calculated and optimized from the reprojection error. However, for a system designed for 3D optical measurement, this error does not reflect the quality of the 3D reconstruction. In the presented method, a planar calibration plate is used. First, images of the calibration plate are captured from several orientations within the measurement range, and the initial parameters of the two cameras are obtained from these images. Then, the rotation and translation matrices that link the frames of the two cameras are calculated using the Centroid Distance Increment Matrix method, which reduces the degree of coupling between the parameters. Next, the 3D coordinates of the calibration points are reconstructed by the space intersection method, and the reconstruction error is calculated and minimized to optimize the calibration parameters. This error directly indicates the quality of 3D reconstruction and is therefore better suited to assessing a dual-camera calibration. The experiments show that the proposed method is convenient and accurate: there is no strict requirement on the position of the calibration plate during calibration, and the accuracy is significantly improved.
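    The central idea, scoring the calibration by 3D reconstruction error rather than reprojection error, can be sketched as follows. The space intersection step here is a standard linear (DLT) triangulation; the Centroid Distance Increment Matrix step is omitted, and the projection matrices are assumed known:

    ```python
    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """Linear (DLT) space intersection of one point seen by two cameras.
        P1, P2: 3x4 projection matrices; x1, x2: pixel coordinates (u, v)."""
        A = np.vstack([x1[0] * P1[2] - P1[0], x1[1] * P1[2] - P1[1],
                       x2[0] * P2[2] - P2[0], x2[1] * P2[2] - P2[1]])
        _, _, Vt = np.linalg.svd(A)           # null vector of A = homogeneous point
        X = Vt[-1]
        return X[:3] / X[3]

    def reconstruction_error(P1, P2, pts1, pts2, pts3d):
        """RMS distance between triangulated and known 3D calibration points;
        this is the quantity the calibration would minimize."""
        rec = np.array([triangulate(P1, P2, a, b) for a, b in zip(pts1, pts2)])
        return np.sqrt(np.mean(np.sum((rec - pts3d) ** 2, axis=1)))
    ```

    Wrapping `reconstruction_error` in a nonlinear optimizer over the camera parameters gives the error-minimization step the abstract describes.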

  11. Technique for Radiometer and Antenna Array Calibration with Two Antenna Noise Diodes

    NASA Technical Reports Server (NTRS)

    Srinivasan, Karthik; Limaye, Ashutosh; Laymon, Charles; Meyer, Paul

    2011-01-01

    This paper presents a new technique to calibrate a microwave radiometer and phased array antenna system. The technique uses a radiated noise source in addition to an injected noise source. The plane of reference for this calibration is the face of the antenna, so the technique can effectively calibrate out gain fluctuations in the active phased array antenna. The paper gives the mathematical formulation of the technique and discusses the improvements it brings over existing calibration techniques.
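    Two noise references of known equivalent temperature enable a standard two-point radiometric calibration; the generic sketch below (not the paper's exact formulation) solves the linear transfer function for gain and offset:

    ```python
    def two_point_calibration(v_hot, v_cold, t_hot, t_cold):
        """Gain (counts per kelvin) and offset from measurements of two
        noise references of known equivalent temperature."""
        gain = (v_hot - v_cold) / (t_hot - t_cold)
        offset = v_cold - gain * t_cold
        return gain, offset

    def counts_to_temperature(v, gain, offset):
        """Invert the linear radiometer transfer function v = gain*T + offset."""
        return (v - offset) / gain
    ```

    Referencing both noise sources to the antenna face, as in the paper, makes the recovered gain track the full signal path including the array.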

  12. The on-orbit calibration of geometric parameters of the Tian-Hui 1 (TH-1) satellite

    NASA Astrophysics Data System (ADS)

    Wang, Jianrong; Wang, Renxiang; Hu, Xin; Su, Zhongbo

    2017-02-01

    The on-orbit calibration of geometric parameters is a key step in improving the location accuracy of satellite images without using Ground Control Points (GCPs). Most on-orbit calibration methods are based on self-calibration with additional parameters; with this approach, different numbers of additional parameters may lead to different results. Triangulation bundle adjustment is another way to calibrate the geometric parameters of a camera and can describe the changes in each geometric parameter. When the triangulation bundle adjustment method is applied to calibrate geometric parameters, a prerequisite is that the strip model avoid systematic deformation caused by the rate of attitude change. For a stereo camera, the influence of the intersection angle should also be considered during calibration. The Equivalent Frame Photo (EFP) bundle adjustment based on Line-Matrix CCD (LMCCD) imagery can resolve the systematic distortion of the strip model and obtain high location accuracy without GCPs. In this paper, triangulation bundle adjustment is used to calibrate the geometric parameters of the TH-1 satellite cameras based on LMCCD imagery. During the bundle adjustment, the three-line array cameras are reconstructed by adopting the principle of inverse triangulation. Finally, the geometric accuracy is validated before and after on-orbit calibration using 5 test fields. After on-orbit calibration, the 3D geometric accuracy improves from 170 m to 11.8 m. The results show that the location accuracy of TH-1 without GCPs is significantly improved by the on-orbit calibration of the geometric parameters.

  13. A New Calibration Method for Commercial RGB-D Sensors.

    PubMed

    Darwish, Walid; Tang, Shenjun; Li, Wenbin; Chen, Wu

    2017-05-24

    Commercial RGB-D sensors such as Kinect and Structure Sensors have been widely used in the game industry, where geometric fidelity is not of utmost importance. For applications in which high-quality 3D is required, e.g., 3D building models of centimeter-level accuracy, accurate and reliable calibration of these sensors is required. This paper presents a new model for calibrating the depth measurements of RGB-D sensors based on the structured light concept. Additionally, a new automatic method is proposed for the calibration of all RGB-D parameters, including the internal calibration parameters of all cameras, the baseline between the infrared and RGB cameras, and the depth error model. When compared with traditional calibration methods, this new model shows a significant improvement in depth precision for both near and far ranges.

  14. Model Calibration with Censored Data

    DOE PAGES

    Cao, Fang; Ba, Shan; Brenneman, William A.; ...

    2017-06-28

    Here, the purpose of model calibration is to make the model predictions closer to reality. The classical Kennedy-O'Hagan approach is widely used for model calibration; it can account for the inadequacy of the computer model while simultaneously estimating the unknown calibration parameters. In many applications, censoring occurs when the exact outcome of the physical experiment is not observed but is only known to fall within a certain region. In such cases, the Kennedy-O'Hagan approach cannot be used directly, and we propose a method to incorporate the censoring information when performing model calibration. The method is applied to study the compression phenomenon of liquid inside a bottle. The results show significant improvement over traditional calibration methods, especially when the number of censored observations is large.

  15. A new systematic calibration method of ring laser gyroscope inertial navigation system

    NASA Astrophysics Data System (ADS)

    Wei, Guo; Gao, Chunfeng; Wang, Qi; Wang, Qun; Xiong, Zhenyu; Long, Xingwu

    2016-10-01

    The inertial navigation system (INS) is a core component of both military and civil navigation systems. Before an INS is put into application, it must be calibrated in the laboratory to compensate for repeatability errors caused by manufacturing. Discrete calibration methods cannot fulfill the high-accuracy calibration requirements of a mechanically dithered ring laser gyroscope navigation system with shock absorbers. This paper analyzes the theory of error excitation and separation in detail and presents a new systematic calibration method for ring laser gyroscope inertial navigation systems. Error models and equations of the calibrated Inertial Measurement Unit are given. Appropriate rotation sequences are then designed to establish linear relationships between the changes in velocity errors and the calibrated parameter errors. Experiments were set up to compare the systematic errors computed from the filtering calibration results with those obtained from discrete calibration results. The largest position and velocity errors of the filtering calibration result are only 0.18 miles and 0.26 m/s, compared with 2 miles and 1.46 m/s for the discrete calibration result. These results validate the new systematic calibration method and demonstrate its importance for the optimal design and accuracy improvement of calibration for mechanically dithered ring laser gyroscope inertial navigation systems.

  16. Standardization of gamma-glutamyltransferase assays by intermethod calibration. Effect on determining common reference limits.

    PubMed

    Steinmetz, Josiane; Schiele, Françoise; Gueguen, René; Férard, Georges; Henny, Joseph

    2007-01-01

    The improvement in the consistency of gamma-glutamyltransferase (GGT) activity results among different assays after calibration with a common material was estimated, and we evaluated whether this harmonization could lead to reference limits common to different routine methods. Seven laboratories measured GGT activity using their own routine analytical systems, both according to the manufacturer's recommendations and after calibration with a multi-enzyme calibrator [value assigned by the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) reference procedure]. All samples were re-measured using the IFCC reference procedure. Two groups of subjects were selected in each laboratory: a group of healthy men aged 18-25 years without long-term medication and with alcohol consumption of less than 44 g/day, and a group of subjects with elevated GGT activity. The day-to-day coefficients of variation were less than 2.9% in each laboratory. The means obtained in the group of healthy subjects without common calibration (range of means 16-23 U/L) were significantly different from those obtained by the IFCC procedure in five laboratories. After calibration, the means remained significantly different from the IFCC procedure results in only one laboratory. For three calibrated methods, the slopes of linear regression vs. the IFCC procedure were not different from 1. The results obtained with these three methods for healthy subjects (n=117) were pooled and reference limits were calculated: 11-49 U/L (2.5th-97.5th percentiles). The calibration also improved the consistency of elevated results when compared to the IFCC procedure. In summary, the common calibration improved the level of consistency between different routine methods and permitted the definition of common reference limits that are quite similar to those proposed by the IFCC. This approach should lead to a real benefit in terms of prevention, screening, diagnosis, therapeutic monitoring, and epidemiological studies.
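    Numerically, calibration against a common material reduces to a one-point rescaling by the calibrator's assigned-to-measured ratio; a minimal sketch with illustrative values:

    ```python
    def recalibrate_ggt(activity, calibrator_measured, calibrator_assigned):
        """Rescale a routine GGT result (U/L) by the ratio of the assigned
        calibrator value (set by the IFCC reference procedure) to the value
        the routine method measures for the same calibrator."""
        return activity * (calibrator_assigned / calibrator_measured)
    ```

    A method that reads the calibrator 10% high is thereby pulled down by the same factor, which is what aligns the laboratory means with the reference procedure.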

  17. A High Precision Approach to Calibrate a Structured Light Vision Sensor in a Robot-Based Three-Dimensional Measurement System.

    PubMed

    Wu, Defeng; Chen, Tianfei; Li, Aiguo

    2016-08-30

    A robot-based three-dimensional (3D) measurement system is presented, in which a structured light vision sensor is mounted on the arm of an industrial robot. Measurement accuracy is one of the most important aspects of any 3D measurement system, so a novel sensor calibration approach is proposed to improve the calibration accuracy of the structured light vision sensor. The approach is based on a number of fixed concentric circles manufactured into a calibration target; the concentric circles are employed to determine the real projected centers of the circles. A calibration-point generation procedure is then used with the help of the calibrated robot. When enough calibration points are available, the radial alignment constraint (RAC) method is adopted to calibrate the camera model. A multilayer perceptron neural network (MLPNN) is then employed to model the calibration residuals left by the RAC method, so that the hybrid of the pinhole model and the MLPNN represents the real camera model. Using a standard ball to validate the effectiveness of the presented technique, the experimental results demonstrate that the proposed calibration approach achieves a highly accurate model of the structured light vision sensor.

  18. Corner detection and sorting method based on improved Harris algorithm in camera calibration

    NASA Astrophysics Data System (ADS)

    Xiao, Ying; Wang, Yonghong; Dan, Xizuo; Huang, Anqi; Hu, Yue; Yang, Lianxiang

    2016-11-01

    In the traditional Harris corner detection algorithm, the threshold used to eliminate false corners is selected manually. In order to detect corners automatically, an improved algorithm that combines the Harris detector with the circular boundary theory of corners is proposed in this paper. After accurate corner coordinates are detected using the Harris and Forstner algorithms, false corners within the chessboard pattern of the calibration plate are eliminated automatically using circular boundary theory. Moreover, a corner sorting method based on an improved calibration plate is proposed to eliminate false background corners and sort the remaining corners in order. Experimental results show that the proposed algorithms can eliminate all false corners and sort the remaining corners correctly and automatically.
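    The circular-boundary check can be sketched as follows: intensities sampled on a small circle around a true chessboard corner alternate between dark and bright in four sectors, so exactly four transitions are expected, while false corners fail the test. The sampling radius and threshold below are illustrative, not the paper's values:

    ```python
    import numpy as np

    def is_chessboard_corner(img, y, x, radius=5, n_samples=32):
        """Circular-boundary test (assumed form): sample intensities on a
        circle around a candidate corner; a true chessboard interior corner
        shows exactly four dark/bright transitions around the circle."""
        t = np.linspace(0, 2 * np.pi, n_samples, endpoint=False)
        ys = (y + radius * np.sin(t)).round().astype(int)
        xs = (x + radius * np.cos(t)).round().astype(int)
        samples = img[ys, xs]
        binary = samples > samples.mean()            # adaptive dark/bright split
        transitions = np.count_nonzero(binary != np.roll(binary, 1))
        return transitions == 4
    ```

    Applying this test to every Harris candidate removes the manual threshold: the corner either exhibits the four-sector pattern or it is discarded.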

  19. Self-calibration method for rotating laser positioning system using interscanning technology and ultrasonic ranging.

    PubMed

    Wu, Jun; Yu, Zhijing; Zhuge, Jingchang

    2016-04-01

    A rotating laser positioning system (RLPS) is an efficient measurement method for large-scale metrology. Because multiple transmitter stations constitute the measurement network, the position relationship of these stations must first be calibrated. However, with auxiliary devices such as a laser tracker or scale bar and a complex calibration process, traditional calibration methods greatly reduce measurement efficiency. This paper proposes a self-calibration method for the RLPS that can automatically obtain the position relationship. The method is implemented through interscanning technology, using a calibration bar mounted on each transmitter station. Each bar is composed of three RLPS receivers and one ultrasonic sensor whose coordinates are known in advance. The calibration algorithm is mainly based on multi-plane and distance constraints and is introduced in detail through a two-station mathematical model. Repeated experiments demonstrate that the coordinate measurement uncertainty of spatial points using this method is about 0.1 mm, and accuracy experiments show that the average coordinate measurement deviation is about 0.3 mm compared with a laser tracker. This accuracy meets the requirements of most applications, while the calibration efficiency is significantly improved.

  20. Calibration of mass spectrometric peptide mass fingerprint data without specific external or internal calibrants

    PubMed Central

    Wolski, Witold E; Lalowski, Maciej; Jungblut, Peter; Reinert, Knut

    2005-01-01

    Background: Peptide Mass Fingerprinting (PMF) is a widely used mass spectrometry (MS) method for the analysis of proteins and peptides. It relies on the comparison between experimentally determined and theoretical mass spectra. The PMF process requires calibration, usually performed with external or internal calibrants of known molecular mass. Results: We have introduced two novel MS calibration methods. The first utilizes the local similarity of peptide maps generated after separation of complex protein samples by two-dimensional gel electrophoresis; it computes a multiple peak-list alignment of the data set using a modified Minimum Spanning Tree (MST) algorithm. The second exploits the fact that hundreds of MS samples are measured in parallel on one sample support; it improves the calibration coefficients by applying a two-dimensional Thin Plate Splines (TPS) smoothing algorithm. We studied the novel calibration methods using data generated by three different MALDI-TOF-MS instruments, and demonstrate that a PMF data set can be calibrated without resorting to external calibrants or relying on widely occurring internal calibrants. The methods developed here were implemented in R and are part of the BioConductor package mscalib available from . Conclusion: The MST calibration algorithm is well suited to calibrating MS spectra of protein samples resulting from two-dimensional gel electrophoretic separation. The TPS-based calibration algorithm can be used to correct the systematic mass measurement errors observed for large MS sample supports. Compared to other methods, our combined MS spectra calibration strategy increases the peptide/protein identification rate by an additional 5-15%. PMID:16102175
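    The TPS smoothing step can be sketched with SciPy's `RBFInterpolator`, which supports a thin-plate-spline kernel with a smoothing term; the spot layout and error surface below are synthetic stand-ins for a real sample support:

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    # Positions of MS spots on the sample support (x, y) and the observed
    # relative mass error at each spot (illustrative synthetic data: a smooth
    # spatial trend across the plate plus spot-to-spot noise).
    rng = np.random.default_rng(0)
    pos = rng.uniform(0, 10, size=(200, 2))
    err = 5e-5 * pos[:, 0] + 2e-5 * pos[:, 1] + rng.normal(0, 1e-6, 200)

    # Thin-plate-spline surface of the systematic mass error across the plate;
    # smoothing > 0 suppresses spot-to-spot noise while keeping the trend.
    tps = RBFInterpolator(pos, err, kernel="thin_plate_spline", smoothing=1.0)
    correction = tps(np.array([[5.0, 5.0]]))[0]   # systematic error at (5, 5)
    ```

    Subtracting the fitted surface from each spot's calibration coefficients is the correction the abstract describes for large sample supports.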

  1. Alignment of the measurement scale mark during immersion hydrometer calibration using an image processing system.

    PubMed

    Peña-Perez, Luis Manuel; Pedraza-Ortega, Jesus Carlos; Ramos-Arreguin, Juan Manuel; Arriaga, Saul Tovar; Fernandez, Marco Antonio Aceves; Becerra, Luis Omar; Hurtado, Efren Gorrostieta; Vargas-Soto, Jose Emilio

    2013-10-24

    The present work presents an improved method to align the measurement scale mark in the immersion hydrometer calibration system of CENAM, the National Metrology Institute (NMI) of Mexico. The proposed method uses a vision system to align the scale mark of the hydrometer with the surface of the liquid in which it is immersed by implementing image processing algorithms. This approach reduces the variability in the apparent mass determination during hydrostatic weighing in the calibration process, thereby decreasing the relative uncertainty of calibration.

  2. Alignment of the Measurement Scale Mark during Immersion Hydrometer Calibration Using an Image Processing System

    PubMed Central

    Peña-Perez, Luis Manuel; Pedraza-Ortega, Jesus Carlos; Ramos-Arreguin, Juan Manuel; Arriaga, Saul Tovar; Fernandez, Marco Antonio Aceves; Becerra, Luis Omar; Hurtado, Efren Gorrostieta; Vargas-Soto, Jose Emilio

    2013-01-01

    The present work presents an improved method to align the measurement scale mark in the immersion hydrometer calibration system of CENAM, the National Metrology Institute (NMI) of Mexico. The proposed method uses a vision system to align the scale mark of the hydrometer with the surface of the liquid in which it is immersed by implementing image processing algorithms. This approach reduces the variability in the apparent mass determination during hydrostatic weighing in the calibration process, thereby decreasing the relative uncertainty of calibration. PMID:24284770

  3. Phase Calibration for the Block 1 VLBI System

    NASA Technical Reports Server (NTRS)

    Roth, M. G.; Runge, T. F.

    1983-01-01

    Very Long Baseline Interferometry (VLBI) in the DSN provides support for spacecraft navigation, Earth orientation measurements, and synchronization of network time and frequency standards. An improved method for calibrating instrumental phase shifts has recently been implemented as a computer program in the Block 1 system. The new calibration program, called PRECAL, performs calibrations over intervals as small as 0.4 seconds and greatly reduces the amount of computer processing required to perform phase calibration.

  4. Improved method for calibration of exchange flows for a physical transport box model of Tampa Bay, FL USA

    EPA Science Inventory

    Results for both sequential and simultaneous calibration of exchange flows between segments of a 10-box, one-dimensional, well-mixed, bifurcated tidal mixing model for Tampa Bay are reported. Calibrations were conducted for three model options with different mathematical expressi...

  5. A New Calibration Method for Commercial RGB-D Sensors

    PubMed Central

    Darwish, Walid; Tang, Shenjun; Li, Wenbin; Chen, Wu

    2017-01-01

    Commercial RGB-D sensors such as Kinect and Structure Sensors have been widely used in the game industry, where geometric fidelity is not of utmost importance. For applications in which high-quality 3D is required, e.g., 3D building models of centimeter-level accuracy, accurate and reliable calibration of these sensors is required. This paper presents a new model for calibrating the depth measurements of RGB-D sensors based on the structured light concept. Additionally, a new automatic method is proposed for the calibration of all RGB-D parameters, including the internal calibration parameters of all cameras, the baseline between the infrared and RGB cameras, and the depth error model. When compared with traditional calibration methods, this new model shows a significant improvement in depth precision for both near and far ranges. PMID:28538695

  6. Automatic Calibration Method for Driver’s Head Orientation in Natural Driving Environment

    PubMed Central

    Fu, Xianping; Guan, Xiao; Peli, Eli; Liu, Hongbo; Luo, Gang

    2013-01-01

    Gaze tracking is crucial for studying driver attention, detecting fatigue, and improving driver assistance systems, but it is difficult in natural driving environments due to nonuniform and highly variable illumination and large head movements. Traditional calibrations that require subjects to follow calibrators are very cumbersome to implement in daily driving situations. A new automatic calibration method is presented in this paper, based on a single camera for determining head orientation, which utilizes the side mirrors, the rear-view mirror, the instrument board, and different zones of the windshield as calibration points. Supported by a self-learning algorithm, the system tracks the head and categorizes the head pose into 12 gaze zones based on facial features. A particle filter is used to estimate the head pose and obtain an accurate gaze zone by updating the calibration parameters. Experimental results show that, after several hours of driving, the automatic calibration method can achieve the same accuracy as a manual calibration method without the driver's cooperation. The mean error of estimated eye gaze was less than 5° in both day and night driving. PMID:24639620

  7. Novel crystal timing calibration method based on total variation

    NASA Astrophysics Data System (ADS)

    Yu, Xingjian; Isobe, Takashi; Watanabe, Mitsuo; Liu, Huafeng

    2016-11-01

    A novel crystal timing calibration method based on total variation (TV), abbreviated as 'TV merge', has been developed for a high-resolution positron emission tomography (PET) system. The proposed method was developed for a system with a large number of crystals and can provide timing calibration at the crystal level. In the proposed method, the timing calibration process is formulated as a linear problem, and a TV constraint is added to the linear equation to robustly optimize the timing resolution. Moreover, to solve the computer-memory problem associated with calculating timing calibration factors for systems with a large number of crystals, a merge component is used to obtain the crystal-level timing calibration values. In contrast to conventional methods, data measured from a standard cylindrical phantom filled with a radioisotope solution are sufficient for performing a high-precision crystal-level timing calibration. In this paper, both simulation and experimental studies are performed to demonstrate the effectiveness and robustness of the TV merge method. We compare the timing resolutions of a 22Na point source, located in the field of view (FOV) of the brain PET system, under various calibration techniques. After implementing the TV merge method, the timing resolution improved from 3.34 ns full width at half maximum (FWHM) to 2.31 ns FWHM.
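    The linear-problem-plus-TV formulation can be sketched at toy scale: each coincidence between crystals i and j measures a time difference d ≈ t_i − t_j, and the per-crystal offsets t are found by penalized least squares. This is an illustrative formulation with a smoothed absolute value, not the authors' exact solver:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def solve_offsets(pairs, diffs, n, lam=0.1):
        """Crystal timing offsets t from pairwise coincidence differences
        d_k ≈ t_i - t_j, with a (smoothed) total-variation penalty over
        neighboring crystal indices. Toy formulation for illustration."""
        i, j = pairs[:, 0], pairs[:, 1]

        def cost(t):
            resid = t[i] - t[j] - diffs
            tv = np.sum(np.sqrt((t[1:] - t[:-1]) ** 2 + 1e-9))  # smoothed |.|
            return np.dot(resid, resid) + lam * tv

        res = minimize(cost, np.zeros(n), method="L-BFGS-B")
        t = res.x
        return t - t.mean()   # offsets are defined only up to a global shift
    ```

    The TV term rewards piecewise-constant offset profiles, which stabilizes the solve when individual crystal pairs contribute few, noisy coincidences.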

  8. Calibration of groundwater vulnerability mapping using the generalized reduced gradient method.

    PubMed

    Elçi, Alper

    2017-12-01

    Groundwater vulnerability assessment studies are essential in water resources management. Overlay-and-index methods such as DRASTIC are widely used for mapping groundwater vulnerability; however, these methods suffer from a subjective selection of model parameters. The objective of this study is to introduce a calibration procedure that results in a more accurate assessment of groundwater vulnerability. The improvement of the assessment is formulated as a parameter optimization problem, with an objective function based on the correlation between actual groundwater contamination and vulnerability index values. The non-linear optimization problem is solved with the generalized-reduced-gradient (GRG) method, a numerical optimization algorithm. To demonstrate the applicability of the procedure, a vulnerability map for the Tahtali stream basin is calibrated using nitrate concentration data. The calibration procedure is easy to implement and aims to maximize the correlation between observed pollutant concentrations and groundwater vulnerability index values. The influence of each vulnerability parameter on the calculated vulnerability index is assessed by a single-parameter sensitivity analysis, whose results show that all factors affect the final vulnerability index. Calibration of the vulnerability map improves the correlation between index values and measured nitrate concentrations by 19%; the regression coefficient increases from 0.280 to 0.485. It is evident that the spatial distribution and the proportions of the vulnerability class areas are significantly altered by the calibration process. Although the calibration method is demonstrated on the DRASTIC model, the approach is not specific to a certain model and can easily be applied to other overlay-and-index methods. Copyright © 2017 Elsevier B.V. All rights reserved.
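    The calibration reduces to maximizing the correlation between the weighted index and observed nitrate concentrations. A minimal sketch with synthetic ratings, using SciPy's SLSQP as a stand-in for the GRG solver (the function names, bounds, and data are illustrative, not the study's setup):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def calibrate_weights(ratings, nitrate, w0):
        """Adjust overlay-and-index weights so the vulnerability index
        V = ratings @ w correlates maximally with observed nitrate.
        SLSQP is used here in place of a GRG solver."""
        def neg_corr(w):
            v = ratings @ w
            return -np.corrcoef(v, nitrate)[0, 1]   # maximize corr = minimize -corr

        bounds = [(0.5, 5.0)] * len(w0)             # keep weights in a plausible range
        res = minimize(neg_corr, w0, method="SLSQP", bounds=bounds)
        return res.x, -res.fun
    ```

    Recomputing the vulnerability map with the calibrated weights is then a single weighted overlay, which is why the procedure transfers directly to other overlay-and-index models.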

  9. Calibration of groundwater vulnerability mapping using the generalized reduced gradient method

    NASA Astrophysics Data System (ADS)

    Elçi, Alper

    2017-12-01

    Groundwater vulnerability assessment studies are essential in water resources management. Overlay-and-index methods such as DRASTIC are widely used for mapping groundwater vulnerability; however, these methods suffer from a subjective selection of model parameters. The objective of this study is to introduce a calibration procedure that results in a more accurate assessment of groundwater vulnerability. The improvement of the assessment is formulated as a parameter optimization problem, with an objective function based on the correlation between actual groundwater contamination and vulnerability index values. The non-linear optimization problem is solved with the generalized-reduced-gradient (GRG) method, a numerical optimization algorithm. To demonstrate the applicability of the procedure, a vulnerability map for the Tahtali stream basin is calibrated using nitrate concentration data. The calibration procedure is easy to implement and aims to maximize the correlation between observed pollutant concentrations and groundwater vulnerability index values. The influence of each vulnerability parameter on the calculated vulnerability index is assessed by a single-parameter sensitivity analysis, whose results show that all factors affect the final vulnerability index. Calibration of the vulnerability map improves the correlation between index values and measured nitrate concentrations by 19%; the regression coefficient increases from 0.280 to 0.485. It is evident that the spatial distribution and the proportions of the vulnerability class areas are significantly altered by the calibration process. Although the calibration method is demonstrated on the DRASTIC model, the approach is not specific to a certain model and can easily be applied to other overlay-and-index methods.

  10. Calibration of DEM parameters on shear test experiments using Kriging method

    NASA Astrophysics Data System (ADS)

    Bednarek, Xavier; Martin, Sylvain; Ndiaye, Abibatou; Peres, Véronique; Bonnefoy, Olivier

    2017-06-01

    Calibration of powder-mixing simulations using the Discrete Element Method (DEM) is still an issue: achieving good agreement with experimental results is difficult because time-efficient use of DEM involves strong assumptions. This work presents a methodology to calibrate DEM parameters using the Efficient Global Optimization (EGO) algorithm, which is based on the Kriging interpolation method. Classical shear test experiments are used as calibration experiments, and the calibration is performed on two parameters: the Young modulus and the friction coefficient. Determining the minimal number of grains to simulate is a critical step: simulating too few grains would not represent the realistic behavior of the powder, while simulating a huge number of grains would be strongly time-consuming. The optimization goal is the minimization of the objective function, defined as the distance between simulated and measured behaviors. The EGO algorithm maximizes the Expected Improvement criterion to find the next point to simulate. This stochastic criterion draws on the two quantities provided by the Kriging method, the prediction of the objective function and the estimate of its error, and thus quantifies the improvement in the minimization that new simulations at specified DEM parameters would bring.
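    The Expected Improvement criterion at a candidate DEM parameter set uses exactly the two Kriging outputs, the prediction μ and its error estimate σ; for a minimization problem with current best objective value f_best it has the closed form sketched below:

    ```python
    import numpy as np
    from scipy.stats import norm

    def expected_improvement(mu, sigma, f_best):
        """EI criterion used by EGO (minimization form): expected amount by
        which a candidate point improves on the best observed value, given
        the Kriging prediction mu and its estimated error sigma."""
        sigma = np.maximum(sigma, 1e-12)            # guard against sigma = 0
        z = (f_best - mu) / sigma
        return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    ```

    EI is large where the surrogate either predicts a low objective or is very uncertain, which is how EGO balances exploiting good regions against exploring unsampled ones before launching the next expensive DEM run.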

  11. Research on orbit prediction for solar-based calibration proper satellite

    NASA Astrophysics Data System (ADS)

    Chen, Xuan; Qi, Wenwen; Xu, Peng

    2018-03-01

    Using the mathematical model of orbit mechanics, orbit prediction forecasts a space target's orbit at a given time from the orbit at an initial moment. The proper satellite radiometric calibration and calibration orbit prediction process are introduced briefly. On the basis of research on the calibration space position design method and the radiative transfer model, an orbit prediction method for proper satellite radiometric calibration is proposed to select the appropriate calibration arc for the remote sensor and to predict the orbit information of the proper satellite and the remote sensor. By analyzing the orbit constraints of proper satellite calibration, the GF-1 Sun-synchronous orbit is chosen as the proper satellite orbit in order to simulate the calibration visibility duration for different satellites to be calibrated. The results of simulation and analysis provide a basis for improving the radiometric calibration accuracy of satellite remote sensors, laying the foundation for high precision and high frequency radiometric calibration.

  12. Design and calibration of field deployable ground-viewing radiometers.

    PubMed

    Anderson, Nikolaus; Czapla-Myers, Jeffrey; Leisso, Nathan; Biggar, Stuart; Burkhart, Charles; Kingston, Rob; Thome, Kurtis

    2013-01-10

    Three improved ground-viewing radiometers were built to support the Radiometric Calibration Test Site (RadCaTS) developed by the Remote Sensing Group (RSG) at the University of Arizona. Improved over previous light-emitting diode based versions, these filter-based radiometers employ seven silicon detectors and one InGaAs detector covering a wavelength range of 400-1550 nm. They are temperature controlled and designed for greater stability and lower noise. The radiometer systems show signal-to-noise ratios of greater than 1000 for all eight channels at typical field calibration signal levels. Predeployment laboratory radiance calibrations using a 1 m spherical integrating source compare well with in situ field calibrations using the solar radiation based calibration method; all bands are within ±2.7% for the case tested.

  13. SU-C-204-02: Improved Patient-Specific Optimization of the Stopping Power Calibration for Proton Therapy Planning Using a Single Proton Radiography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rinaldi, I; Ludwig Maximilian University, Garching, DE; Heidelberg University Hospital, Heidelberg, DE

    2015-06-15

    Purpose: We present an improved method to calculate patient-specific calibration curves to convert X-ray computed tomography (CT) Hounsfield Units (HU) to relative stopping powers (RSP) for proton therapy treatment planning. Methods: By optimizing the HU-RSP calibration curve, the difference between a proton radiographic image and a digitally reconstructed X-ray radiography (DRR) is minimized. The feasibility of this approach has previously been demonstrated. This scenario assumes that all discrepancies between proton radiography and DRR originate from uncertainties in the HU-RSP curve. In reality, external factors cause imperfections in the proton radiography, such as misalignment relative to the DRR and unfaithful representation of geometric structures (“blurring”). We analyze these effects based on synthetic datasets of anthropomorphic phantoms and suggest an extended optimization scheme which explicitly accounts for them. Performance of the method has been tested for various simulated irradiation parameters. The ultimate purpose of the optimization is to minimize uncertainties in the HU-RSP calibration curve. We therefore suggest and perform a thorough statistical treatment to quantify the accuracy of the optimized HU-RSP curve. Results: We demonstrate that without extending the optimization scheme, spatial blurring (equivalent to convolution with FWHM = 3 mm) in the proton radiographies can cause up to 10% deviation between the optimized and the ground truth HU-RSP calibration curve. In contrast, results obtained with our extended method reach 1% or better correspondence. We have further calculated gamma index maps for different acceptance levels. With DTA = 0.5 mm and RD = 0.5%, a passing ratio of 100% is obtained with the extended method, while an optimization neglecting the effects of spatial blurring only reaches ∼90%.
    Conclusion: Our contribution underlines the potential of a single proton radiography to generate a patient-specific calibration curve and to improve dose delivery by optimizing the HU-RSP calibration curve, as long as all sources of systematic incongruence are properly modeled.

  14. An "In Situ" Calibration Correction Procedure (KCICLO) Based on AOD Diurnal Cycle: Application to AERONET-El Arenosillo (Spain) AOD Data Series

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cachorro, V. E.; Toledano, C.; Berjon, A.

    Aerosol optical depth (AOD) very often shows a distinct diurnal cycle pattern, which seems to be an artifact resulting from an incorrect calibration (or an equivalent effect, such as filter degradation). The shape of this fictitious AOD diurnal cycle varies as the inverse of the solar air mass (m), and the magnitude of the effect is greatest at midday. Observing this effect is not easy at many field stations; only stations with good weather conditions permit easy detection and the possibility of correction. Taking advantage of this dependence on air mass, we propose an improved “in situ” correction-calibration procedure for measured AOD data series. The method is named KCICLO after the determination of a constant K and the cyclic behavior of the AOD (“ciclo” is Spanish for cycle). We estimate it has an accuracy of 0.2–0.5% for the calibration ratio constant K, or 0.002–0.005 in AOD, at field stations. Although KCICLO is an “in situ” calibration method, we recommend that it be used as an AOD correction method for field stations. At high-altitude sites, it may be used independently of the classical Langley method (CLM); however, we also recommend it as a complement to CLM, improving it considerably. The application of this calibration correction method to the nearly 5 year AOD data series at the El Arenosillo (Huelva, southwestern Spain) station, belonging to the Aerosol Robotic Network (AERONET)-PHOTONS, shows that 8 (50%) of the 16 filters of the four analyzed Sun photometers were outside the 0.02 uncertainty of the AERONET specification. The largest departures reached values of 0.06. The results show the efficiency of the method and a significant improvement over other “in situ” methods, with no information required beyond the AOD data themselves.
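    The 1/m signature described above suggests how such a correction can work: a multiplicative calibration error K in the extraterrestrial signal V0 adds a fictitious ln(K)/m term to the retrieved AOD. A toy sketch of estimating K from that shape (synthetic, noise-free data; not the authors' implementation):

```python
# Toy KCICLO-style correction: fit the 1/m shape of the diurnal cycle to
# recover the calibration ratio K, then remove the fictitious term.
import numpy as np

m = np.linspace(1.2, 5.0, 40)            # solar air mass over a half day
tau_true = 0.15                          # assume a stable true AOD
K_true = 1.03                            # hypothetical 3% calibration error in V0
tau_meas = tau_true + np.log(K_true) / m # measured AOD with the artifact

# Least-squares fit of tau_meas = a + b/m; then K = exp(b).
A = np.column_stack([np.ones_like(m), 1.0 / m])
(a, b), *_ = np.linalg.lstsq(A, tau_meas, rcond=None)
K_est = np.exp(b)
tau_corrected = tau_meas - np.log(K_est) / m
print(f"estimated K = {K_est:.4f}")
```

With real data the fit is done over a day with stable aerosol load, which is why the method works best under good weather conditions.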

  15. Low Frequency Error Analysis and Calibration for High-Resolution Optical Satellite's Uncontrolled Geometric Positioning

    NASA Astrophysics Data System (ADS)

    Wang, Mi; Fang, Chengcheng; Yang, Bo; Cheng, Yufeng

    2016-06-01

    The low frequency error is a key factor affecting the uncontrolled geometric processing accuracy of high-resolution optical imagery. To guarantee the geometric quality of the imagery, this paper presents an on-orbit calibration method for the low frequency error based on a geometric calibration field. Firstly, we introduce the overall flow of low frequency error on-orbit analysis and calibration, which includes optical axis angle variation detection for the star sensor, relative calibration among star sensors, multi-star-sensor information fusion, and low frequency error model construction and verification. Secondly, we use the optical axis angle change detection method to analyze how the low frequency error varies. Thirdly, we use relative calibration and information fusion among star sensors to unify the datum and produce high precision attitude output. Finally, we construct the low frequency error model and optimally estimate its parameters based on the DEM/DOM of a geometric calibration field. To evaluate the performance of the proposed calibration method, real data from a certain satellite type are used. Test results demonstrate that the calibration model describes the low frequency error variation well. The uncontrolled geometric positioning accuracy of the high-resolution optical imagery in the WGS-84 coordinate system is markedly improved after the step-wise calibration.

  16. Calibration Method of an Ultrasonic System for Temperature Measurement

    PubMed Central

    Zhou, Chao; Wang, Yueke; Qiao, Chunjie; Dai, Weihua

    2016-01-01

    System calibration is fundamental to the overall accuracy of ultrasonic temperature measurement, and it essentially consists of accurately measuring the path length and the system latency of the ultrasonic system. This paper proposes a high accuracy system calibration method. By estimating the time delay between the transmitted signal and the received signal at several different temperatures, the calibration equations are constructed, and the calibrated results are determined using the least squares algorithm. Formulas are derived for calculating the calibration uncertainties, and the possible influential factors are analyzed. Experimental results in distilled water show that the calibrated path length and system latency can achieve uncertainties of 0.058 mm and 0.038 μs, respectively, and that temperature accuracy is significantly improved by using the calibrated results. The temperature error remains within ±0.04°C consistently, and the percentage error is less than 0.15%. PMID:27788252
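    The calibration equations described above are linear in the unknowns once the speed of sound c(T) is known, since the measured delay is t = L/c(T) + τ. A minimal least-squares sketch with synthetic values (the c(T) polynomial is a rough approximation for water, not the paper's model):

```python
# Recovering path length L and system latency tau from delays measured at
# several known temperatures, via linear least squares.
import numpy as np

def c_water(T):
    # Simplified speed-of-sound model for distilled water (m/s).
    return 1402.4 + 5.01 * T - 0.055 * T**2

L_true, tau_true = 0.200, 25e-6          # 200 mm path, 25 us latency (hypothetical)
T = np.array([10.0, 20.0, 30.0, 40.0])   # calibration temperatures (deg C)
t_meas = L_true / c_water(T) + tau_true  # measured time delays (s)

# Least squares on t = L * (1/c) + tau.
A = np.column_stack([1.0 / c_water(T), np.ones_like(T)])
(L_est, tau_est), *_ = np.linalg.lstsq(A, t_meas, rcond=None)
print(f"L = {L_est*1e3:.3f} mm, latency = {tau_est*1e6:.3f} us")
```

With noisy delays, the same fit yields the uncertainty estimates via the usual least-squares covariance.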

  17. Recent Goddard Space Flight Center (GSFC) experience with on-orbit calibration of attitude sensors

    NASA Technical Reports Server (NTRS)

    Davis, W.; Hashmall, J.; Harman, R.

    1992-01-01

    The results of on-orbit calibration for several satellites by the Flight Dynamics Facility (FDF) at GSFC are reviewed. The examples discussed include attitude calibrations for sensors including fixed-head star trackers, fine Sun sensors, three-axis magnetometers, and inertial reference units, taken from recent experience with the Compton Gamma Ray Observatory, the Upper Atmosphere Research Satellite, and the Extreme Ultraviolet Explorer. The methods used and the results of calibration are discussed, as are the improvements attained from in-flight calibration.

  18. Note: Improved calibration of atomic force microscope cantilevers using multiple reference cantilevers.

    PubMed

    Sader, John E; Friend, James R

    2015-05-01

    Overall precision of the simplified calibration method in J. E. Sader et al., Rev. Sci. Instrum. 83, 103705 (2012), Sec. III D, is dominated by the spring constant of the reference cantilever. The question arises: How does one take measurements from multiple reference cantilevers, and combine these results, to reduce the uncertainty of the reference spring constant and hence improve the overall precision of the method? This question is addressed in this note. Its answer enables manufacturers to specify a single set of data for the spring constant, resonant frequency, and quality factor from measurements on multiple reference cantilevers. With this data set, users can trivially calibrate cantilevers of the same type.
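    One standard way to pool independent spring-constant measurements — shown here purely as an illustration, not as the note's specific combination rule — is an inverse-variance weighted mean:

```python
# Pooling spring-constant measurements from several reference cantilevers.
import numpy as np

k = np.array([0.512, 0.498, 0.505])        # spring constants (N/m), hypothetical
u = np.array([0.010, 0.008, 0.012])        # one-sigma uncertainties (N/m)

w = 1.0 / u**2                             # inverse-variance weights
k_comb = np.sum(w * k) / np.sum(w)         # combined estimate
u_comb = 1.0 / np.sqrt(np.sum(w))          # combined uncertainty
print(f"k = {k_comb:.4f} +/- {u_comb:.4f} N/m")
```

The combined uncertainty is always smaller than the best individual one, which is the point of using multiple references.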

  19. A novel calibration method for non-orthogonal shaft laser theodolite measurement system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Bin, E-mail: wubin@tju.edu.cn, E-mail: xueting@tju.edu.cn; Yang, Fengting; Ding, Wen

    2016-03-15

    The non-orthogonal shaft laser theodolite (N-theodolite) is a new kind of large-scale metrological instrument made up of two rotary tables and one collimated laser. An N-theodolite has three axes: following the naming conventions of the traditional theodolite, the rotary axes of the two rotary tables are called the horizontal axis and the vertical axis, respectively, and the collimated laser beam is called the sight axis. The difference between the N-theodolite and the traditional theodolite is significant, since the former has no orthogonality or intersection accuracy requirements. The calibration method for the traditional theodolite is therefore no longer suitable for the N-theodolite, while the calibration method currently applied is quite complicated. This paper thus introduces a novel calibration method for the non-orthogonal shaft laser theodolite measurement system that simplifies the procedure and improves the calibration accuracy. The novel method proposes a simple two-step process: calibration of intrinsic parameters and calibration of extrinsic parameters. Experiments have shown its efficiency and accuracy.

  20. Accommodating subject and instrument variations in spectroscopic determinations

    DOEpatents

    Haas, Michael J [Albuquerque, NM; Rowe, Robert K [Corrales, NM; Thomas, Edward V [Albuquerque, NM

    2006-08-29

    A method and apparatus for measuring a biological attribute, such as the concentration of an analyte, particularly a blood analyte in tissue such as glucose. The method utilizes spectrographic techniques in conjunction with an improved instrument-tailored or subject-tailored calibration model. In a calibration phase, calibration model data is modified to reduce or eliminate instrument-specific attributes, resulting in a calibration data set modeling intra-instrument or intra-subject variation. In a prediction phase, the prediction process is tailored for each target instrument separately using a minimal number of spectral measurements from each instrument or subject.

  1. Fast calibration of high-order adaptive optics systems.

    PubMed

    Kasper, Markus; Fedrigo, Enrico; Looze, Douglas P; Bonnet, Henri; Ivanescu, Liviu; Oberti, Sylvain

    2004-06-01

    We present a new method of calibrating adaptive optics systems that greatly reduces the required calibration time or, equivalently, improves the signal-to-noise ratio. The method uses an optimized actuation scheme with Hadamard patterns and does not scale with the number of actuators for a given noise level in the wavefront sensor channels. It is therefore highly desirable for high-order systems and/or adaptive secondary systems on a telescope without a Gregorian focal plane. In the latter case, the measurement noise is increased by the effects of the turbulent atmosphere when one is calibrating on a natural guide star.
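    The actuation scheme can be sketched as follows: driving all actuators simultaneously with rows of a Hadamard matrix H, then inverting with H⁻¹ = Hᵀ/n, averages the wavefront-sensor noise over all actuators instead of concentrating it in one poke at a time. Toy linear system below, not the ESO implementation:

```python
# Hadamard-pattern calibration of a linear sensor/actuator system.
import numpy as np
from scipy.linalg import hadamard

n = 16                                       # actuators (power of two)
rng = np.random.default_rng(1)
D_true = rng.normal(size=(2 * n, n))         # true interaction matrix (sensor x actuator)

H = hadamard(n).astype(float)                # actuation patterns, entries +/-1
noise = rng.normal(scale=0.01, size=(2 * n, n))
S = D_true @ H + noise                       # measured WFS response to each pattern

D_est = S @ H.T / n                          # recover via H^{-1} = H^T / n
err = np.abs(D_est - D_true).max()
print("max recovery error:", err)
```

Because every pattern drives all n actuators at full stroke, the per-element noise in D_est is reduced by a factor of sqrt(n) compared to single-actuator pokes at the same stroke and measurement count.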

  2. Development and Characterization of a Low-Pressure Calibration System for Hypersonic Wind Tunnels

    NASA Technical Reports Server (NTRS)

    Green, Del L.; Everhart, Joel L.; Rhode, Matthew N.

    2004-01-01

    Minimization of uncertainty is essential for accurate ESP measurements at the very low free-stream static pressures found in hypersonic wind tunnels. Statistical characterization of environmental error sources requires a well-defined and controlled calibration method. A calibration system has been constructed, and environmental control software has been developed to control experimentation and eliminate human-induced error sources. The initial stability study of the calibration system shows a high degree of measurement accuracy and precision in temperature and pressure control. Control manometer drift and reference pressure instabilities introduce uncertainty into the repeatability of voltage responses measured from the PSI System 8400 between calibrations. Repeatability can be improved through software programming and further experimentation.

  3. On-orbit calibration for star sensors without priori information.

    PubMed

    Zhang, Hao; Niu, Yanxiong; Lu, Jiazhen; Zhang, Chengfen; Yang, Yanqiang

    2017-07-24

    The star sensor is a prerequisite navigation device for a spacecraft, and on-orbit calibration is an essential guarantee of its operational performance. However, traditional calibration methods rely on ground information and are invalid without a priori information. Uncertain on-orbit parameters will eventually degrade the performance of the guidance, navigation, and control system. In this paper, a novel calibration method without a priori information for on-orbit star sensors is proposed. Firstly, a simplified back-propagation neural network is designed for focal length and principal point estimation along with system property evaluation, called coarse calibration. Then the unscented Kalman filter is adopted for the precise calibration of all parameters, including the focal length, principal point, and distortion. The proposed method benefits from self-initialization: no attitude or preinstalled sensor parameters are required. Precise star sensor parameter estimation can be achieved without a priori information, which is a significant improvement for on-orbit devices. Simulation and experiment results demonstrate that the calibration is easy to operate and achieves high accuracy and robustness. The proposed method can satisfy the stringent requirements of most star sensors.

  4. Method for Accurately Calibrating a Spectrometer Using Broadband Light

    NASA Technical Reports Server (NTRS)

    Simmons, Stephen; Youngquist, Robert

    2011-01-01

    A novel method has been developed for performing very fine calibration of a spectrometer. This process is particularly useful for modern miniature charge-coupled device (CCD) spectrometers, where a typical factory wavelength calibration has been performed and a finer, more accurate calibration is desired. Typically, the factory calibration is done with a spectral line source that generates light at known wavelengths, allowing specific pixels in the CCD array to be assigned wavelength values. This method is good to about 1 nm across the spectrometer's wavelength range. The new method appears to be accurate to about 0.1 nm, a factor of ten improvement. White light is passed through an unbalanced Michelson interferometer, producing an optical signal with significant spectral variation. A simple theory can be developed to describe this spectral pattern, so by comparing the actual spectrometer output against the predicted pattern, errors in the wavelength assignment made by the spectrometer can be determined.
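    The idea can be sketched numerically: an unbalanced Michelson with optical path difference d imprints fringes I(λ) ≈ ½(1 + cos(2πd/λ)) on the white light, and fitting the measured fringes against this model exposes small wavelength-assignment errors. A toy example with a synthetic 0.1 nm calibration offset (the path imbalance and wavelength grid are illustrative, not the paper's values):

```python
# Recovering a small wavelength-calibration offset from Michelson fringes.
import numpy as np
from scipy.optimize import minimize_scalar

d = 20e-6                                    # 20 um optical path imbalance
lam_true = np.linspace(500e-9, 700e-9, 2000) # true wavelengths of the pixels
measured = 0.5 * (1 + np.cos(2 * np.pi * d / lam_true))

lam_reported = lam_true + 0.1e-9             # factory calibration is off by 0.1 nm

def mismatch(shift):
    # Squared difference between the fringe model and the measured spectrum
    # after shifting the reported wavelength axis.
    model = 0.5 * (1 + np.cos(2 * np.pi * d / (lam_reported - shift)))
    return np.sum((model - measured) ** 2)

res = minimize_scalar(mismatch, bounds=(-0.5e-9, 0.5e-9),
                      method="bounded", options={"xatol": 1e-15})
print(f"recovered offset: {res.x*1e9:.3f} nm")
```

In practice the correction is fitted per pixel (or as a low-order polynomial in pixel index) rather than as a single constant shift.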

  5. New Method of Calibrating IRT Models.

    ERIC Educational Resources Information Center

    Jiang, Hai; Tang, K. Linda

    This discussion of new methods for calibrating item response theory (IRT) models looks into new optimization procedures, such as the Genetic Algorithm (GA), to improve on the use of the Newton-Raphson procedure. The advantage of using a global optimization procedure like the GA is that this kind of procedure is not easily affected by local optima and…

  6. Effects of light refraction on the accuracy of camera calibration and reconstruction in underwater motion analysis.

    PubMed

    Kwon, Young-Hoo; Casebolt, Jeffrey B

    2006-01-01

    One of the most serious obstacles to accurate quantification of the underwater motion of a swimmer's body is image deformation caused by refraction. Refraction occurs at the water-air interface plane (glass) owing to the density difference. Camera calibration-reconstruction algorithms commonly used in aquatic research do not have the capability to correct this refraction-induced nonlinear image deformation and produce large reconstruction errors. The aim of this paper is to provide a thorough review of: the nature of the refraction-induced image deformation and its behaviour in underwater object-space plane reconstruction; the intrinsic shortcomings of the Direct Linear Transformation (DLT) method in underwater motion analysis; experimental conditions that interact with refraction; and alternative algorithms and strategies that can be used to improve the calibration-reconstruction accuracy. Although it is impossible to remove the refraction error completely in conventional camera calibration-reconstruction methods, it is possible to improve the accuracy to some extent by manipulating experimental conditions or calibration frame characteristics. Alternative algorithms, such as the localized DLT and the double-plane method, are also available for error reduction. The ultimate solution for the refraction problem is to develop underwater camera calibration and reconstruction algorithms that have the capability to correct refraction.
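    The displacement caused by refraction can be illustrated with a flat-interface toy model: solve Snell's law for the actual ray path, then reconstruct the point with a straight-line (refraction-ignorant) model, as a conventional DLT calibration effectively does. The geometry below is purely illustrative:

```python
# Toy flat-interface refraction error: camera in air, point under water.
import numpy as np
from scipy.optimize import brentq

n_w = 1.333                    # refractive index of water (air = 1.0)
h, d, X = 1.5, 1.0, 0.8        # camera height, point depth, true horizontal offset (m)

def snell_residual(r):
    # r = horizontal position where the ray crosses the interface.
    sin_air = r / np.hypot(r, h)
    sin_wat = (X - r) / np.hypot(X - r, d)
    return n_w * sin_wat - sin_air          # zero when Snell's law holds

r = brentq(snell_residual, 1e-9, X - 1e-9)  # interface crossing point
x_apparent = r * (h + d) / h                # straight-line extension to depth d
print(f"true x = {X:.3f} m, straight-line reconstruction = {x_apparent:.3f} m")
```

Because the air-side angle exceeds the water-side angle, the straight-line model always displaces the point outward, and the error grows nonlinearly toward the edge of the view field — which is why a single global DLT fit cannot absorb it.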

  7. Effects of light refraction on the accuracy of camera calibration and reconstruction in underwater motion analysis.

    PubMed

    Kwon, Young-Hoo; Casebolt, Jeffrey B

    2006-07-01

    One of the most serious obstacles to accurate quantification of the underwater motion of a swimmer's body is image deformation caused by refraction. Refraction occurs at the water-air interface plane (glass) owing to the density difference. Camera calibration-reconstruction algorithms commonly used in aquatic research do not have the capability to correct this refraction-induced nonlinear image deformation and produce large reconstruction errors. The aim of this paper is to provide a thorough review of: the nature of the refraction-induced image deformation and its behaviour in underwater object-space plane reconstruction; the intrinsic shortcomings of the Direct Linear Transformation (DLT) method in underwater motion analysis; experimental conditions that interact with refraction; and alternative algorithms and strategies that can be used to improve the calibration-reconstruction accuracy. Although it is impossible to remove the refraction error completely in conventional camera calibration-reconstruction methods, it is possible to improve the accuracy to some extent by manipulating experimental conditions or calibration frame characteristics. Alternative algorithms, such as the localized DLT and the double-plane method are also available for error reduction. The ultimate solution for the refraction problem is to develop underwater camera calibration and reconstruction algorithms that have the capability to correct refraction.

  8. Improved CRDS δ13C Stability Through New Calibration Application For CO2 and CH4

    NASA Astrophysics Data System (ADS)

    Arata, C.; Rella, C.

    2014-12-01

    Stable carbon isotope ratio measurements of CO2 and CH4 provide valuable insight into global and regional sources and sinks of the two most important greenhouse gases. Methodologies based on Cavity Ring-Down Spectroscopy (CRDS) have been developed that are capable of delivering δ13C measurements with a precision better than 0.12 permil for CO2 and 0.4 permil for CH4 (1 hour window, 5 minute average). Here we present a method to further improve the stability of this measurement. We have developed a two-point calibration method which corrects for δ13C drift due to a dependence on carbon species concentration; it calibrates for both carbon species concentration and δ13C. We go on to show that this added stability is especially valuable when using carbon isotope data in linear regression models such as Keeling plots, where even small amounts of error can be magnified to give inconclusive results. The method is demonstrated in both laboratory and ambient atmospheric conditions, and we show how to select the calibration frequency.
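    A two-point concentration-dependence correction of this general kind can be sketched as follows, assuming (purely for illustration) that the raw δ13C reading drifts linearly with concentration; the numbers are synthetic and this is not the instrument's actual algorithm:

```python
# Two reference gases of known delta13C, measured at two concentrations,
# pin down a linear concentration-dependence correction.
import numpy as np

slope_true, offset_true = 0.004, -1.2     # hypothetical instrument artifact

def instrument(delta_true, conc):
    # Synthetic analyzer: true delta plus a concentration-dependent bias.
    return delta_true + slope_true * conc + offset_true

conc = np.array([400.0, 800.0])           # ppm
ref_a, ref_b = -8.0, -40.0                # known delta13C of two standards (permil)
meas_a, meas_b = instrument(ref_a, conc), instrument(ref_b, conc)

# Fit measured-minus-true residuals against concentration.
resid = np.concatenate([meas_a - ref_a, meas_b - ref_b])
c_all = np.concatenate([conc, conc])
slope, offset = np.polyfit(c_all, resid, 1)

sample = instrument(-12.5, 612.0)          # unknown sample
corrected = sample - (slope * 612.0 + offset)
print(f"corrected delta13C: {corrected:.3f} permil")
```

Removing this concentration-dependent bias before a Keeling regression matters because the Keeling intercept is an extrapolation, so small correlated errors are strongly amplified.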

  9. A New Calibration Method Using Low Cost MEM IMUs to Verify the Performance of UAV-Borne MMS Payloads

    PubMed Central

    Chiang, Kai-Wei; Tsai, Meng-Lun; Naser, El-Sheimy; Habib, Ayman; Chu, Chien-Hsun

    2015-01-01

    Spatial information plays a critical role in remote sensing and mapping applications such as environment surveying and disaster monitoring. An Unmanned Aerial Vehicle (UAV)-borne mobile mapping system (MMS) can accomplish rapid spatial information acquisition under limited sky conditions with better mobility and flexibility than other means. This study proposes a long endurance Direct Geo-referencing (DG)-based fixed-wing UAV photogrammetric platform and two DG modules that each use different commercial Micro-Electro Mechanical Systems’ (MEMS) tactical grade Inertial Measurement Units (IMUs). Furthermore, this study develops a novel kinematic calibration method which includes lever arms, boresight angles and camera shutter delay to improve positioning accuracy. The new calibration method is then compared with the traditional calibration approach. The results show that the accuracy of the DG can be significantly improved by flying at a lower altitude using the new higher specification hardware. The new proposed method improves the accuracy of DG by about 20%. The preliminary results show that two-dimensional (2D) horizontal DG positioning accuracy is around 5.8 m at a flight height of 300 m using the newly designed tactical grade integrated Positioning and Orientation System (POS). The positioning accuracy in three-dimensions (3D) is less than 8 m. PMID:25808764

  10. New calibration method using low cost MEM IMUs to verify the performance of UAV-borne MMS payloads.

    PubMed

    Chiang, Kai-Wei; Tsai, Meng-Lun; Naser, El-Sheimy; Habib, Ayman; Chu, Chien-Hsun

    2015-03-19

    Spatial information plays a critical role in remote sensing and mapping applications such as environment surveying and disaster monitoring. An Unmanned Aerial Vehicle (UAV)-borne mobile mapping system (MMS) can accomplish rapid spatial information acquisition under limited sky conditions with better mobility and flexibility than other means. This study proposes a long endurance Direct Geo-referencing (DG)-based fixed-wing UAV photogrammetric platform and two DG modules that each use different commercial Micro-Electro Mechanical Systems' (MEMS) tactical grade Inertial Measurement Units (IMUs). Furthermore, this study develops a novel kinematic calibration method which includes lever arms, boresight angles and camera shutter delay to improve positioning accuracy. The new calibration method is then compared with the traditional calibration approach. The results show that the accuracy of the DG can be significantly improved by flying at a lower altitude using the new higher specification hardware. The new proposed method improves the accuracy of DG by about 20%. The preliminary results show that two-dimensional (2D) horizontal DG positioning accuracy is around 5.8 m at a flight height of 300 m using the newly designed tactical grade integrated Positioning and Orientation System (POS). The positioning accuracy in three-dimensions (3D) is less than 8 m.

  11. Radiometric Cross-Calibration of the HJ-1B IRS in the Thermal Infrared Spectral Band

    NASA Astrophysics Data System (ADS)

    Sun, K.

    2012-12-01

    Natural calamities occur continually, and environmental pollution and destruction remain severe on the Earth at present, which restricts social and economic development. Satellite remote sensing technology plays an important role in improving the ability to monitor environmental pollution and natural calamities. Radiometric calibration is a precondition of quantitative remote sensing; its accuracy determines the quality of the retrieved parameters. Since the China Environment Satellite (HJ-1A/B) was launched successfully on September 6th, 2008, it has played an important role in the economic development of China. The satellite has four infrared bands, one of which is thermal infrared. Given the application fields of quantitative remote sensing in China, finding an appropriate calibration method becomes more and more important. Many independent methods can be used for absolute radiometric calibration. In this paper, according to the characteristics of the thermal infrared channel of the HJ-1B thermal infrared multi-spectral camera, the thermal infrared spectral band of the HJ-1B IRS was calibrated using cross-calibration methods based on MODIS data. Firstly, the corresponding bands of the two sensors were identified. Secondly, MODTRAN was run to analyze the influence of differences in spectral response, satellite view zenith angle, atmospheric condition, and temperature on the match factor. Finally, the band match factor was calculated at different temperatures, taking into account the dissimilar band responses of the matched bands. Seven images of Lake Qinghai acquired at different times were chosen as the calibration data. On the basis of the MODIS radiance and the match factor, the IRS radiance was calculated, and the calibration coefficients were then obtained by linearly regressing the radiance against the DN values.
    We compared the result of this cross-calibration with that of the onboard blackbody calibration, and the consistency was good: the maximum difference in brightness temperature between HJ-1B IRS band 4 and MODIS band 31 is less than 1 K. Cross-calibration is therefore a rapid and economical way to obtain calibration coefficients for HJ-1B; however, the match factor calculation method needs further research in order to further improve the cross-calibration precision.
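    The final regression step described above — deriving calibration coefficients by linearly regressing matched radiance against DN — can be sketched with synthetic values (not the HJ-1B data):

```python
# Gain/offset from a linear fit of transferred radiance vs. sensor DN.
import numpy as np

dn = np.array([120.0, 305.0, 480.0, 660.0, 810.0, 950.0, 1100.0])
gain_true, offset_true = 0.0123, 1.85        # hypothetical coefficients
radiance = gain_true * dn + offset_true      # matched MODIS-derived radiance

gain, offset = np.polyfit(dn, radiance, 1)   # calibration coefficients
print(f"gain = {gain:.5f}, offset = {offset:.3f}")
```

In the real procedure the radiance values come from MODIS band 31 scaled by the band match factor, and the fit quality depends on how well that factor accounts for the spectral response differences.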

  12. Improving near-infrared prediction model robustness with support vector machine regression: a pharmaceutical tablet assay example.

    PubMed

    Igne, Benoît; Drennen, James K; Anderson, Carl A

    2014-01-01

    Changes in raw materials and process wear and tear can have significant effects on the prediction error of near-infrared calibration models. When the variability present during routine manufacturing is not included in the calibration, test, and validation sets, the long-term performance and robustness of the model will be limited. Nonlinearity is a major source of interference; in near-infrared spectroscopy it can arise from light path-length differences caused by differences in particle size or density. The usefulness of support vector machine (SVM) regression for handling nonlinearity and improving the robustness of calibration models was evaluated in scenarios where the calibration set did not include all the variability present in the test set. Compared to partial least squares (PLS) regression, SVM regression was less affected by physical (particle size) and chemical (moisture) differences, and the linearity of the SVM-predicted values was improved. Nevertheless, although visualization and interpretation tools have been developed to enhance the usability of SVM-based methods, work remains to be done to provide chemometricians in the pharmaceutical industry with a regression method that can supplement PLS-based methods.
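    The contrast between a purely linear calibration and a kernel SVM regression on mildly nonlinear data can be sketched with scikit-learn; the synthetic one-variable data stand in for tablet spectra, and the SVR settings are illustrative rather than the authors' configuration:

```python
# Linear regression vs. RBF-kernel epsilon-SVR on data with a mild nonlinearity.
import numpy as np
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, (200, 1))
y = 2.0 * x[:, 0] + 0.8 * x[:, 0] ** 2       # nonlinear response (e.g. path-length effect)

lin = LinearRegression().fit(x[:100], y[:100])
svr = SVR(kernel="rbf", C=100.0, epsilon=0.001).fit(x[:100], y[:100])

err_lin = np.abs(lin.predict(x[100:]) - y[100:]).mean()
err_svr = np.abs(svr.predict(x[100:]) - y[100:]).mean()
print(f"linear MAE {err_lin:.4f}  SVR MAE {err_svr:.4f}")
```

The linear model is stuck with the bias of the quadratic term, whereas the kernel regression absorbs it; the same mechanism underlies SVM's robustness to path-length-driven nonlinearity in spectra.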

  13. Performance assessment of FY-3C/MERSI on early orbit

    NASA Astrophysics Data System (ADS)

    Hu, Xiuqing; Xu, Na; Wu, Ronghua; Chen, Lin; Min, Min; Wang, Ling; Xu, Hanlie; Sun, Ling; Yang, Zhongdong; Zhang, Peng

    2014-11-01

    FY-3C/MERSI includes some remarkable improvements over previous MERSI instruments, including better spectral response function (SRF) consistency among the detectors within one band, an increased capability for lunar observation through the space view (SV), and improved radiometric response stability of the solar bands. During the in-orbit verification (IOV) commissioning phase, early results indicating representative MERSI performance were derived, including the signal-to-noise ratio (SNR), dynamic range, MTF, band-to-band registration, calibration bias, and instrument stability. The SNRs of the solar bands (Bands 1-4 and 6-20) were largely beyond the specifications, except for two NIR bands. In-flight calibration and verification of these bands also rely heavily on vicarious techniques such as the China Radiometric Calibration Sites (CRCS), cross-calibration, lunar calibration, DCC calibration, stability monitoring using Pseudo-Invariant Calibration Sites (PICS), and multi-site radiance simulation. This paper gives the results of these calibration methods and of monitoring the instrument degradation during the early on-orbit period.

  14. Four years of Landsat-7 on-orbit geometric calibration and performance

    USGS Publications Warehouse

    Lee, D.S.; Storey, James C.; Choate, M.J.; Hayes, R.W.

    2004-01-01

    Unlike its predecessors, Landsat-7 has undergone regular geometric and radiometric performance monitoring and calibration since its launch in April 1999. This ongoing activity, which includes issuing quarterly updates to calibration parameters, has generated a wealth of geometric performance data over the four-year on-orbit period of operations. A suite of geometric characterization methods (measurement and evaluation procedures) and calibration methods (procedures to derive improved estimates of instrument parameters) is employed by the Landsat-7 Image Assessment System to maintain the geometric calibration and to track specific aspects of geometric performance, including geodetic accuracy, band-to-band registration accuracy, and image-to-image registration accuracy. These characterization and calibration activities maintain image product geometric accuracy at a high level: performance is monitored to determine when calibration is necessary, new calibration parameters are generated, and the new parameters are verified to achieve the desired improvements in accuracy. Landsat-7 continues to meet and exceed all geometric accuracy requirements, although aging components have begun to affect performance.

  15. Hybrid least squares multivariate spectral analysis methods

    DOEpatents

    Haaland, David M.

    2004-03-23

    A set of hybrid least squares multivariate spectral analysis methods in which spectral shapes of components or effects not present in the original calibration step are added in a following prediction or calibration step to improve the accuracy of the estimation of the amount of the original components in the sampled mixture. The hybrid method herein means a combination of an initial calibration step with subsequent analysis by an inverse multivariate analysis method. A spectral shape herein means normally the spectral shape of a non-calibrated chemical component in the sample mixture but can also mean the spectral shapes of other sources of spectral variation, including temperature drift, shifts between spectrometers, spectrometer drift, etc. The shape can be continuous, discontinuous, or even discrete points illustrative of the particular effect.

  16. Hybrid least squares multivariate spectral analysis methods

    DOEpatents

    Haaland, David M.

    2002-01-01

    A set of hybrid least squares multivariate spectral analysis methods in which spectral shapes of components or effects not present in the original calibration step are added in a following estimation or calibration step to improve the accuracy of the estimation of the amount of the original components in the sampled mixture. The "hybrid" method herein means a combination of an initial classical least squares analysis calibration step with subsequent analysis by an inverse multivariate analysis method. A "spectral shape" herein means normally the spectral shape of a non-calibrated chemical component in the sample mixture but can also mean the spectral shapes of other sources of spectral variation, including temperature drift, shifts between spectrometers, spectrometer drift, etc. The "shape" can be continuous, discontinuous, or even discrete points illustrative of the particular effect.
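
    A minimal numerical sketch of the hybrid idea described in these patent abstracts: a classical least squares (CLS) prediction is repeated with the known pure-component spectra augmented by the spectral shape of an effect absent from the original calibration. The spectra and the drift shape below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
wav = np.linspace(0, 1, 100)
S = np.vstack([np.exp(-(wav - 0.3) ** 2 / 0.01),   # pure spectrum, component A
               np.exp(-(wav - 0.6) ** 2 / 0.02)])  # pure spectrum, component B
drift = wav                                        # "spectral shape" of a drift effect

c_true = np.array([0.7, 0.2])
y = c_true @ S + 0.15 * drift + 0.002 * rng.normal(size=wav.size)

# CLS prediction without the extra shape: biased by the un-modeled drift
c_plain, *_ = np.linalg.lstsq(S.T, y, rcond=None)

# hybrid step: augment the model with the drift shape before solving
S_aug = np.vstack([S, drift])
c_aug, *_ = np.linalg.lstsq(S_aug.T, y, rcond=None)

print("plain CLS:", c_plain.round(3))
print("augmented:", c_aug[:2].round(3), " drift amplitude:", c_aug[2].round(3))
```

    Adding the drift shape to the design matrix lets the fit absorb the non-calibrated variation, so the estimated amounts of the original components are no longer distorted by it.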

  17. Objective Measurement of Erythema in Psoriasis using Digital Color Photography with Color Calibration

    PubMed Central

    Raina, Abhay; Hennessy, Ricky; Rains, Michael; Allred, James; Hirshburg, Jason M; Diven, Dayna; Markey, Mia K.

    2016-01-01

    Background Traditional metrics for evaluating the severity of psoriasis are subjective, which complicates efforts to measure effective treatments in clinical trials. Methods We collected images of psoriasis plaques and calibrated the coloration of the images according to an included color card. Features were extracted from the images and used to train a linear discriminant analysis classifier with cross-validation to automatically classify the degree of erythema. The results were tested against numerical scores obtained by a panel of dermatologists using a standard rating system. Results Quantitative measures of erythema based on the digital color images showed good agreement with subjective assessment of erythema severity (κ = 0.4203). The color calibration process improved the agreement from κ = 0.2364 to κ = 0.4203. Conclusions We propose a method for the objective measurement of the psoriasis severity parameter of erythema and show that the calibration process improved the results. PMID:26517973
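
    The classification-and-agreement pipeline described above can be sketched as follows; the features, grades, and their relationship are entirely synthetic stand-ins for the study's calibrated color features and panel ratings:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(2)
n = 120
grade = rng.integers(0, 4, n)   # panel erythema grades 0..3 (hypothetical)
# fake color features loosely correlated with grade (e.g., a calibrated redness channel)
X = np.column_stack([grade + rng.normal(0, 0.8, n),
                     0.5 * grade + rng.normal(0, 1.0, n)])

# cross-validated LDA predictions, scored against the panel with Cohen's kappa
pred = cross_val_predict(LinearDiscriminantAnalysis(), X, grade, cv=5)
kappa = cohen_kappa_score(grade, pred)
print(f"kappa: {kappa:.3f}")
```

    Cohen's kappa is the agreement statistic quoted in the abstract (κ = 0.4203 with calibration versus 0.2364 without); it corrects raw accuracy for chance agreement.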

  18. Improved uncertainty quantification in nondestructive assay for nonproliferation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burr, Tom; Croft, Stephen; Jarman, Ken

    2016-12-01

    This paper illustrates methods to improve uncertainty quantification (UQ) for non-destructive assay (NDA) measurements used in nuclear nonproliferation. First, it is shown that current bottom-up UQ applied to calibration data is not always adequate, for three main reasons: (1) Because there are errors in both the predictors and the response, calibration involves a ratio of random quantities, and calibration data sets in NDA usually consist of only a modest number of samples (3–10); therefore, asymptotic approximations involving quantities needed for UQ such as means and variances are often not sufficiently accurate; (2) Common practice overlooks that calibration implies a partitioning of total error into random and systematic error; and (3) In many NDA applications, test items exhibit non-negligible departures in physical properties from calibration items, so model-based adjustments are used, but item-specific bias remains in some data. Therefore, improved bottom-up UQ using calibration data should predict the typical magnitude of item-specific bias, and the suggestion is to do so by including sources of item-specific bias in synthetic calibration data generated using a combination of modeling and real calibration data. Second, for measurements of the same nuclear material item by both the facility operator and international inspectors, current empirical (top-down) UQ is described for estimating operator and inspector systematic and random error variance components. A Bayesian alternative is introduced that easily accommodates constraints on variance components and is more robust than current top-down methods to the underlying measurement error distributions.
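
    A toy simulation of the first point above: with errors in both the predictor and the response and only five calibration points, the ordinary least-squares calibration slope is both attenuated below its true value and widely spread, so large-sample (asymptotic) variance formulas are a poor guide. All numbers here are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(6)
n, trials = 5, 20000
x_true = np.linspace(1.0, 5.0, n)             # true calibration standards

slopes = np.empty(trials)
for t in range(trials):
    x = x_true + rng.normal(0, 0.2, n)        # error in the predictor as well
    y = 2.0 * x_true + rng.normal(0, 0.2, n)  # response error; true slope is 2
    slopes[t] = np.polyfit(x, y, 1)[0]        # ordinary least-squares slope

print(f"mean slope: {slopes.mean():.3f}  sd: {slopes.std():.3f}")
```

    The mean slope falls below 2 (classical errors-in-variables attenuation), and the slope distribution over trials is wide, both consequences of estimating a ratio of random quantities from a handful of points.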

  19. A method for soil moisture probes calibration and validation of satellite estimates.

    PubMed

    Holzman, Mauro; Rivas, Raúl; Carmona, Facundo; Niclòs, Raquel

    2017-01-01

    Optimization of field techniques is crucial to ensure high-quality soil moisture data. The aim of this work is to present a sampling method for undisturbed soil and soil water content used to calibrate soil moisture probes, in the context of validating the SMOS (Soil Moisture and Ocean Salinity) mission MIRAS Level 2 soil moisture product in the Pampean Region of Argentina. The method avoids soil alteration and is recommended for calibrating probes by soil type under a free drying process at ambient temperature. A detailed explanation of the field and laboratory procedures used to obtain reference soil moisture is given. The calibration results reflected accurate operation of the Delta-T ThetaProbe ML2x probes in most of the analyzed cases (RMSE and bias ≤ 0.05 m³/m³). Post-calibration results indicated that accuracy improves significantly when the soil-type-based calibration adjustments are applied (RMSE ≤ 0.022 m³/m³, bias ≤ -0.010 m³/m³).
    • A sampling method that provides high-quality soil water content data for probe calibration is described.
    • Calibration based on soil types is important.
    • A single calibration process for similar soil types could be suitable in practical terms, depending on the required accuracy level.
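
    The accuracy figures quoted above are the usual bias and RMSE between probe readings and the gravimetric reference; a minimal sketch with hypothetical values (in m³/m³):

```python
import numpy as np

theta_ref   = np.array([0.12, 0.18, 0.24, 0.30, 0.36])  # reference (oven-dry) values
theta_probe = np.array([0.13, 0.17, 0.25, 0.29, 0.37])  # probe readings

bias = float(np.mean(theta_probe - theta_ref))          # mean signed error
rmse = float(np.sqrt(np.mean((theta_probe - theta_ref) ** 2)))
print(f"bias = {bias:+.3f} m3/m3, RMSE = {rmse:.3f} m3/m3")  # bias = +0.002, RMSE = 0.010
```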

  20. The research on calibration methods of dual-CCD laser three-dimensional human face scanning system

    NASA Astrophysics Data System (ADS)

    Wang, Jinjiang; Chang, Tianyu; Ge, Baozhen; Tian, Qingguo; Yang, Fengting; Shi, Shendong

    2013-09-01

    In this paper, building on the performance advantages of the two-step method, we combine the stereo matching of binocular stereo vision with active laser scanning to calibrate the system. First, we select one camera's coordinate system as the reference world coordinate system and unify the coordinates of the two CCD cameras. We then obtain the new perspective projection matrix (PPM) of each camera after epipolar rectification, from which the corresponding epipolar equations of the two cameras can be defined. Using the trigonometric parallax method, we can then measure the position of a space point after distortion correction and achieve stereo matching calibration between the two image points. Experiments verify that this method improves accuracy while ensuring system stability. The stereo matching calibration is a simple, low-cost process that simplifies regular maintenance: it acquires 3D coordinates using only a planar checkerboard, with no need to design a specific standard target or to use an electronic theodolite. During the experiments, it was found that two-step calibration error and lens distortion lead to stratification of the point cloud data. The proposed calibration method, which combines active line laser scanning and binocular stereo vision, has the advantages of both and offers more flexible applicability. Theoretical analysis and experiments show that the method is sound.
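
    The triangulation step described above can be sketched as follows: once each camera has a perspective projection matrix (PPM), a space point is recovered from a matched pair of image points by linear (DLT) triangulation. The toy rectified rig below (intrinsics, baseline, test point) is invented for illustration:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear DLT triangulation of one space point from two views."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                       # null vector of A, in homogeneous coordinates
    return X[:3] / X[3]

# toy rectified stereo rig: identical intrinsics, 0.1 m baseline along x
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0], [0]])])

Xw = np.array([0.05, -0.02, 1.0])   # ground-truth space point (meters)
project = lambda P, X: (P @ np.append(X, 1))[:2] / (P @ np.append(X, 1))[2]
print(triangulate(P1, P2, project(P1, Xw), project(P2, Xw)).round(4))
```

    With rectified images the matched points share a scanline, so the search for correspondences reduces to one dimension, which is what the epipolar rectification in the abstract buys.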

  1. Application of six sigma and AHP in analysis of variable lead time calibration process instrumentation

    NASA Astrophysics Data System (ADS)

    Rimantho, Dino; Rahman, Tomy Abdul; Cahyadi, Bambang; Tina Hernawati, S.

    2017-02-01

    Calibration of instrumentation in the pharmaceutical industry is an important activity for determining the true value of a measurement. Preliminary studies indicated that long calibration lead times disrupted production and laboratory activities. This study aimed to analyze the causes of calibration lead time. Several methods were used: Six Sigma, to determine the capability of the equipment calibration process; brainstorming, Pareto diagrams, and fishbone diagrams, to identify and analyze the problems; and the Analytic Hierarchy Process (AHP), to create a hierarchical structure and prioritize the problems. The results showed a DPMO value of about 40,769.23, equivalent to a sigma level of approximately 3.24σ for the calibration process, indicating the need for improvement. Problem-solving strategies for calibration lead time were then determined, such as shortening the preventive maintenance schedule, increasing the number of calibrator instruments, and training personnel. Consistency tests on all pairwise comparison matrices showed consistency ratios (CR) below 0.1.
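
    The sigma level quoted above follows from the DPMO via the normal quantile plus the conventional 1.5σ long-term shift:

```python
from scipy.stats import norm

dpmo = 40769.23
# sigma level = z-score of the defect-free fraction, plus the 1.5-sigma shift
sigma_level = norm.ppf(1 - dpmo / 1e6) + 1.5
print(f"sigma level ~ {sigma_level:.2f}")   # ~3.24, matching the abstract
```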

  2. Input variable selection and calibration data selection for storm water quality regression models.

    PubMed

    Sun, Siao; Bertrand-Krajewski, Jean-Luc

    2013-01-01

    Storm water quality models are useful tools in storm water management. Interest has been growing in analyzing existing data to develop models for urban storm water quality evaluation. It is important to select appropriate model inputs when many candidate explanatory variables are available, and model calibration and verification are essential steps in any storm water quality modeling. This study investigates input variable selection and calibration data selection in storm water quality regression models. The two selection problems interact, so a procedure is developed to address them in sequence. The procedure first selects model input variables using a cross-validation method; an appropriate number of variables is identified as model inputs to ensure that a model is neither overfitted nor underfitted. Based on the input selection results, calibration data selection is then studied. The uncertainty of model performance due to calibration data selection is investigated with a random selection method, and an approach using a clustering method is applied to improve model calibration based on the principle of selecting representative data for calibration. The comparison between results from the cluster selection method and random selection shows that the former can significantly improve the performance of calibrated models. It is found that the information content of the calibration data is important in addition to its size.
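
    One common way to realize the cluster-based selection idea (not necessarily the authors' exact procedure) is to cluster the candidate events and take the event nearest each cluster center as calibration data, so the calibration set spans the observed variability instead of being drawn at random. A sketch with invented event features:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 4))         # 60 candidate storm events, 4 features each

k = 10                               # desired calibration set size
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)

# take the event nearest each cluster centre as calibration data
calib_idx = [int(np.argmin(np.linalg.norm(X - c, axis=1))) for c in km.cluster_centers_]
verif_idx = [i for i in range(len(X)) if i not in calib_idx]
print(f"{len(calib_idx)} calibration events, {len(verif_idx)} verification events")
```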

  3. Note: Improved calibration of atomic force microscope cantilevers using multiple reference cantilevers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sader, John E., E-mail: jsader@unimelb.edu.au; Friend, James R.

    2015-05-15

    Overall precision of the simplified calibration method in J. E. Sader et al., Rev. Sci. Instrum. 83, 103705 (2012), Sec. III D, is dominated by the spring constant of the reference cantilever. The question arises: How does one take measurements from multiple reference cantilevers, and combine these results, to improve the uncertainty of the reference cantilever's spring constant and hence the overall precision of the method? This question is addressed in this note. Its answer enables manufacturers to specify a single set of data for the spring constant, resonant frequency, and quality factor from measurements on multiple reference cantilevers. With this data set, users can trivially calibrate cantilevers of the same type.
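
    The note itself should be consulted for the exact prescription; a standard way to combine several independent spring-constant measurements, sketched here with hypothetical values, is the inverse-variance weighted mean:

```python
import numpy as np

k = np.array([0.95, 1.02, 0.98])   # spring constants from 3 reference levers (N/m)
u = np.array([0.05, 0.04, 0.06])   # their standard uncertainties (N/m)

w = 1.0 / u ** 2                   # inverse-variance weights
k_combined = float(np.sum(w * k) / np.sum(w))
u_combined = float(1.0 / np.sqrt(np.sum(w)))   # combined uncertainty < any single one
print(f"k = {k_combined:.3f} +/- {u_combined:.3f} N/m")
```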

  4. The influence of temperature calibration on the OC-EC results from a dual-optics thermal carbon analyzer

    NASA Astrophysics Data System (ADS)

    Pavlovic, J.; Kinsey, J. S.; Hays, M. D.

    2014-09-01

    Thermal-optical analysis (TOA) is a widely used technique that fractionates carbonaceous aerosol particles into organic and elemental carbon (OC and EC), or carbonate. Thermal sub-fractions of evolved OC and EC are also used for source identification and apportionment; thus, oven temperature accuracy during TOA analysis is essential. Evidence now indicates that the "actual" sample (filter) temperature and the temperature measured by the built-in oven thermocouple (or set-point temperature) can differ by as much as 50 °C. This difference can affect the OC-EC split point selection and consequently the OC and EC fraction and sub-fraction concentrations being reported, depending on the sample composition and in-use TOA method and instrument. The present study systematically investigates the influence of an oven temperature calibration procedure for TOA. A dual-optical carbon analyzer that simultaneously measures transmission and reflectance (TOT and TOR) is used, functioning under the conditions of both the National Institute of Occupational Safety and Health Method 5040 (NIOSH) and Interagency Monitoring of Protected Visual Environment (IMPROVE) protocols. The application of the oven calibration procedure to our dual-optics instrument significantly changed NIOSH 5040 carbon fractions (OC and EC) and the IMPROVE OC fraction. In addition, the well-known OC-EC split difference between NIOSH and IMPROVE methods is even further perturbed following the instrument calibration. Further study is needed to determine if the widespread application of this oven temperature calibration procedure will indeed improve accuracy and our ability to compare among carbonaceous aerosol studies that use TOA.

  5. Improving integrity of on-line grammage measurement with traceable basic calibration.

    PubMed

    Kangasrääsiö, Juha

    2010-07-01

    The automatic control of grammage (basis weight) in paper and board production is based upon on-line grammage measurement. Furthermore, the automatic control of other quality variables such as moisture, ash content and coat weight, may rely on the grammage measurement. The integrity of Kr-85 based on-line grammage measurement systems was studied, by performing basic calibrations with traceably calibrated plastic reference standards. The calibrations were performed according to the EN ISO/IEC 17025 standard, which is a requirement for calibration laboratories. The observed relative measurement errors were 3.3% in the first time calibrations at the 95% confidence level. With the traceable basic calibration method, however, these errors can be reduced to under 0.5%, thus improving the integrity of on-line grammage measurements. Also a standardised algorithm, based on the experience from the performed calibrations, is proposed to ease the adjustment of the different grammage measurement systems. The calibration technique can basically be applied to all beta-radiation based grammage measurements. 2010 ISA. Published by Elsevier Ltd. All rights reserved.

  6. Goddard Space Flight Center (GSFC) Flight Dynamics Facility (FDF) calibration of the Upper Atmosphere Research Satellite (UARS) sensors

    NASA Technical Reports Server (NTRS)

    Hashmall, J.; Garrick, J.

    1993-01-01

    Flight Dynamics Facility (FDF) responsibilities for calibration of Upper Atmosphere Research Satellite (UARS) sensors included alignment calibration of the fixed-head star trackers (FHST's) and the fine Sun sensor (FSS), determination of misalignments and scale factors for the inertial reference units (IRU's), determination of biases for the three-axis magnetometers (TAM's) and Earth sensor assemblies (ESA's), determination of gimbal misalignments of the Solar/Stellar Pointing Platform (SSPP), and field-of-view calibration for the FSS's mounted both on the Modular Attitude Control System (MACS) and on the SSPP. The calibrations, which used a combination of new and established algorithms, gave excellent results. Alignment calibration results markedly improved the accuracy of both ground and onboard Computer (OBC) attitude determination. SSPP calibration results allowed UARS to identify stars in the period immediately after yaw maneuvers, removing the delay required for the OBC to reacquire its fine pointing attitude mode. SSPP calibration considerably improved the pointing accuracy of the attached science instrument package. This paper presents a summary of the methods used and the results of all FDF UARS sensor calibration.

  7. Residual mode correction in calibrating nonlinear damper for vibration control of flexible structures

    NASA Astrophysics Data System (ADS)

    Sun, Limin; Chen, Lin

    2017-10-01

    Residual mode correction is found crucial in calibrating linear resonant absorbers for flexible structures. The classic modal representation augmented with stiffness and inertia correction terms accounting for non-resonant modes improves the calibration accuracy and meanwhile avoids complex modal analysis of the full system. This paper explores the augmented modal representation in calibrating control devices with nonlinearity, by studying a taut cable attached with a general viscous damper and its Equivalent Dynamic Systems (EDSs), i.e. the augmented modal representations connected to the same damper. As nonlinearity is concerned, Frequency Response Functions (FRFs) of the EDSs are investigated in detail for parameter calibration, using the harmonic balance method in combination with numerical continuation. The FRFs of the EDSs and corresponding calibration results are then compared with those of the full system documented in the literature for varied structural modes, damper locations and nonlinearity. General agreement is found and in particular the EDS with both stiffness and inertia corrections (quasi-dynamic correction) performs best among available approximate methods. This indicates that the augmented modal representation although derived from linear cases is applicable to a relatively wide range of damper nonlinearity. Calibration of nonlinear devices by this means still requires numerical analysis while the efficiency is largely improved owing to the system order reduction.

  8. [Optimization of end-tool parameters based on robot hand-eye calibration].

    PubMed

    Zhang, Lilong; Cao, Tong; Liu, Da

    2017-04-01

    A new one-time registration method was developed in this research for hand-eye calibration of a surgical robot, to simplify the operation process and reduce preparation time. A practical method is also introduced to optimize the end-tool parameters of the surgical robot, based on an analysis of the error sources in this registration method. In the one-time registration process, a marker on the end-tool of the robot is first recognized by a fixed binocular camera, and the orientation and position of the marker are calculated from the joint parameters of the robot. The relationship between the camera coordinate system and the robot base coordinate system can then be established to complete the hand-eye calibration. Because of manufacturing and assembly errors in the robot end-tool, an error equation was established with the transformation matrix between the robot end coordinate system and the robot end-tool coordinate system as the variable, and numerical optimization was employed to optimize the end-tool parameters. The experimental results showed that the one-time registration method significantly improves the efficiency of robot hand-eye calibration compared with existing methods, and that the parameter optimization significantly improves its absolute positioning accuracy, which meets the requirements of clinical surgery.
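
    A hedged sketch of the numerical-optimization idea: treat the end-tool transform as the unknown and minimize marker-position residuals over recorded poses. The parameterization (translation plus three small rotations), the poses, and the "measured" data below are all hypothetical:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(4)

def transform(params, pts):
    """Apply a rotation (rx, ry, rz Euler angles) and translation (tx, ty, tz)."""
    t, r = params[:3], params[3:]
    Rx = np.array([[1, 0, 0], [0, np.cos(r[0]), -np.sin(r[0])], [0, np.sin(r[0]), np.cos(r[0])]])
    Ry = np.array([[np.cos(r[1]), 0, np.sin(r[1])], [0, 1, 0], [-np.sin(r[1]), 0, np.cos(r[1])]])
    Rz = np.array([[np.cos(r[2]), -np.sin(r[2]), 0], [np.sin(r[2]), np.cos(r[2]), 0], [0, 0, 1]])
    return pts @ (Rz @ Ry @ Rx).T + t

true = np.array([0.010, -0.005, 0.020, 0.01, -0.02, 0.005])  # "manufacturing error"
nominal = rng.uniform(-0.5, 0.5, size=(30, 3))   # tool points in the robot-end frame
measured = transform(true, nominal)              # camera-observed positions (noiseless toy)

# solve for the end-tool correction that best explains the observations
res = least_squares(lambda p: (transform(p, nominal) - measured).ravel(), np.zeros(6))
print(res.x.round(4))
```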

  9. Shortwave Radiometer Calibration Methods Comparison and Resulting Solar Irradiance Measurement Differences: A User Perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, Aron; Sengupta, Manajit; Andreas, Afshin

    Banks financing solar energy projects require assurance that these systems will produce the energy predicted. Furthermore, utility planners and grid system operators need to understand the impact of the variable solar resource on solar energy conversion system performance. Accurate solar radiation data sets reduce the expense associated with mitigating performance risk and assist in understanding the impacts of solar resource variability. The accuracy of solar radiation measured by radiometers depends on the instrument performance specification, installation method, calibration procedure, measurement conditions, maintenance practices, location, and environmental conditions. This study addresses the effect of different calibration methods provided by radiometric calibration service providers, such as NREL and manufacturers of radiometers, on the resulting calibration responsivity. Some of these radiometers are calibrated indoors and some outdoors. To establish or understand the differences in calibration methodology, we processed and analyzed field-measured data from these radiometers. This study investigates calibration responsivities provided by NREL's broadband outdoor radiometer calibration (BORCAL) and a few prominent manufacturers. The BORCAL method provides the outdoor calibration responsivity of pyranometers and pyrheliometers at a 45 degree solar zenith angle, and as a function of solar zenith angle determined by clear-sky comparisons with reference irradiance. The BORCAL method also employs a thermal offset correction to the calibration responsivity of single-black thermopile detectors used in pyranometers. Indoor calibrations of radiometers by their manufacturers are performed using a stable artificial light source in a side-by-side comparison between the test radiometer under calibration and a reference radiometer of the same type. In both methods, the reference radiometer calibrations are traceable to the World Radiometric Reference (WRR).
    These different methods of calibration demonstrated +1% to +2% differences in solar irradiance measurement. Analyzing these differences will ultimately help determine the uncertainty of the field radiometer data and guide the development of a consensus standard for calibration. Further advancing procedures for precisely calibrating radiometers to world reference standards that reduce measurement uncertainty will allow more accurate prediction of solar output and improve the bankability of solar projects.

  10. Calibration of the degree of linear polarization measurements of the polarized Sun-sky radiometer based on the POLBOX system.

    PubMed

    Li, Zhengqiang; Li, Kaitao; Li, Li; Xu, Hua; Xie, Yisong; Ma, Yan; Li, Donghui; Goloub, Philippe; Yuan, Yinlin; Zheng, Xiaobing

    2018-02-10

    Polarization observation of sky radiation is a frontier approach for improving the remote sensing of atmospheric components such as aerosols and clouds. Polarization calibration of the ground-based Sun-sky radiometer is the basis for obtaining accurate degree of linear polarization (DOLP) measurements. In this paper, a DOLP calibration method based on a laboratory polarized light source (POLBOX) is introduced in detail. Combined with the CE318-DP Sun-sky polarized radiometer, a calibration scheme for DOLP measurement is established for the spectral range of 440-1640 nm. Based on the calibration results from the Sun-sky radiometer observation network, the polarization calibration coefficient and the DOLP calibration residual are analyzed statistically. The results show that the DOLP residual of the calibration scheme is about 0.0012, from which the final DOLP calibration accuracy of this method is estimated to be about 0.005. Finally, the accuracy of the calibration results is verified to agree with expectations by comparison with DOLP values simulated by vector radiative transfer calculations.

  11. Integrated calibration between digital camera and laser scanner from mobile mapping system for land vehicles

    NASA Astrophysics Data System (ADS)

    Zhao, Guihua; Chen, Hong; Li, Xingquan; Zou, Xiaoliang

    The paper presents the concepts of lever arm and boresight angle, the design requirements for calibration sites, and an integrated method for calibrating the boresight angles of a digital camera and a laser scanner. Taking test data collected by Applanix's LandMark system as an example, the camera calibration method stacks three consecutive stereo images and applies an OTF-calibration method using ground control points. Boresight-angle calibration of the laser scanner uses both manual and automatic methods with ground control points. Integrated calibration between the digital camera and laser scanner is introduced to improve the systematic precision of the two sensors. Analysis of the measured differences between ground control points and their corresponding image points in sequential images shows that object positions between camera and images agree to within about 15 cm in relative error and 20 cm in absolute error; comparison between ground control points and the corresponding laser point clouds shows errors of less than 20 cm. These experimental results indicate that the mobile mapping system is an efficient and reliable system for rapidly generating high-accuracy, high-density road spatial data.

  12. Accuracy of rapid radiographic film calibration for intensity‐modulated radiation therapy verification

    PubMed Central

    Kulasekere, Ravi; Moran, Jean M.; Fraass, Benedick A.; Roberson, Peter L.

    2006-01-01

    A single calibration film method was evaluated for use with intensity‐modulated radiation therapy film quality assurance measurements. The single‐film method has the potential advantages of exposure simplicity, less media consumption, and improved processor quality control. Potential disadvantages include cross contamination of film exposure, implementation effort to document delivered dose, and added complication of film response analysis. Film response differences were measured between standard and single‐film calibration methods. Additional measurements were performed to help trace causes for the observed discrepancies. Kodak X‐OmatV (XV) film was found to have greater response variability than extended dose range (EDR) film. We found it advisable for XV film to relate the film response calibration for the single‐film method to a user‐defined optimal calibration geometry. Using a single calibration film exposed at the time of experiment, the total uncertainty of film response was estimated to be <2% (1%) for XV (EDR) film at 50 (100) cGy and higher, respectively. PACS numbers: 87.53.‐j, 87.53.Dq PMID:17533325

  13. Improved method to fully compensate the spatial phase nonuniformity of LCoS devices with a Fizeau interferometer.

    PubMed

    Lu, Qiang; Sheng, Lei; Zeng, Fei; Gao, Shijie; Qiao, Yanfeng

    2016-10-01

    Liquid crystal on silicon (LCoS) devices usually show spatial phase nonuniformity (SPNU) in phase modulation applications, comprising phase retardance nonuniformity (PRNU) as a function of the applied voltage and the inherent wavefront distortion (WFD) introduced by the device itself. We propose a multipoint calibration method utilizing a Fizeau interferometer to compensate the SPNU of the device. PRNU calibration is realized by defining a grid of 3×6 cells over the aperture and calculating the phase retardance of each cell versus a gradient gray pattern. By designing an adjusted gray pattern calculated from the calibrated multipoint phase retardance function, the inherent WFD is compensated. The peak-to-valley (PV) value of the residual WFD compensated by the multipoint calibration method is significantly reduced from 2.5λ to 0.140λ, whereas the PV value of the residual WFD after global calibration is only reduced to 0.364λ. Experimental results of generated finite-energy 2D Airy beams in Fourier space demonstrate the effectiveness of the multipoint calibration method.

  14. Improved phase-ellipse method for in-situ geophone calibration.

    USGS Publications Warehouse

    Liu, Huaibao P.; Peselnick, L.

    1986-01-01

    For amplitude and phase response calibration of moving-coil electromagnetic geophones, two parameters are needed: the geophone natural frequency, fo, and the geophone upper resonance frequency, fu. The phase-ellipse method is commonly used for the in situ determination of these parameters. For a given signal-to-noise ratio, the precision of the measurements of fo and fu depends on the phase sensitivity, dΦ/df. For some commercial geophones, the phase sensitivity at fu can be an order of magnitude less than that at fo. This paper presents an improved phase-ellipse method with increased precision. Compared with measurements made with the existing phase-ellipse methods, the improved method shows 6-fold and 3-fold improvements in precision for measurements of fo and fu, respectively, on a commercial geophone.

  15. The multi-channel infrared sea truth radiometric calibrator (MISTRC)

    USGS Publications Warehouse

    Suarez, M.J.; Emery, W. J.; Wick, G.A.

    1997-01-01

    A new multichannel infrared sea truth radiometer has been designed and built to improve validation of satellite-determined sea surface temperature. Horizontal grid polarized filters installed on the shortwave channels are very effective in reducing reflected solar radiation and in improving the noise characteristics. The system uses a continuous (every other cycle) seawater calibration technique. An analysis of the data from its first deployment is presented and recommendations are made for further improving the experimental method.

  16. An Improved Calibration Method for a Rotating 2D LIDAR System.

    PubMed

    Zeng, Yadan; Yu, Heng; Dai, Houde; Song, Shuang; Lin, Mingqiang; Sun, Bo; Jiang, Wei; Meng, Max Q-H

    2018-02-07

    This paper presents an improved calibration method for a rotating two-dimensional light detection and ranging (R2D-LIDAR) system, which can obtain a 3D scanning map of its surroundings. The R2D-LIDAR system, composed of a 2D LIDAR and a rotating unit, is widely used in the field of robotics owing to its low cost and dense scanning data. Nevertheless, the R2D-LIDAR system must be calibrated before building the geometric model because there are assembly deviations and wear between the 2D LIDAR and the rotating unit. Hence, the calibration procedure should cover both the alignment between the two devices and the bias of the 2D LIDAR itself. The main purpose of this work is to resolve the 2D LIDAR bias issue using a flat plane, based on the Levenberg-Marquardt (LM) algorithm. Experimental results for the calibration of the R2D-LIDAR system prove the reliability of this strategy, which accurately estimates sensor offsets within an error range of -15 mm to 15 mm for captured scans.
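
A minimal sketch of the flat-plane idea, assuming a pure range bias as the only LIDAR error (the paper's full model also covers the rotating-unit geometry): a hand-rolled Levenberg-Marquardt loop recovers the bias from points scanned on a plane. All geometry and noise values are synthetic.

```python
import numpy as np

def residuals(p, angles, ranges):
    """Distance of bias-corrected scan points from the line y = c0 + c1*x.
    p = [bias, c0, c1]; `bias` is the range offset being calibrated."""
    bias, c0, c1 = p
    r = ranges - bias
    x, y = r * np.cos(angles), r * np.sin(angles)
    return y - (c0 + c1 * x)

def lm_fit(p0, angles, ranges, iters=100, lam=1e-3):
    """Minimal Levenberg-Marquardt with a numerical Jacobian."""
    p = np.asarray(p0, float)
    for _ in range(iters):
        res = residuals(p, angles, ranges)
        J = np.empty((res.size, p.size))
        for j in range(p.size):                      # forward-difference Jacobian
            dp = np.zeros_like(p); dp[j] = 1e-7
            J[:, j] = (residuals(p + dp, angles, ranges) - res) / 1e-7
        step = np.linalg.solve(J.T @ J + lam * np.eye(p.size), -J.T @ res)
        if np.sum(residuals(p + step, angles, ranges) ** 2) < np.sum(res ** 2):
            p, lam = p + step, lam * 0.3             # accept step, trust model more
        else:
            lam *= 10.0                              # reject step, damp harder
    return p

rng = np.random.default_rng(1)
angles = np.linspace(np.radians(30), np.radians(150), 200)
true_bias = 0.012                                    # 12 mm range offset (synthetic)
ranges = 1.0 / np.sin(angles) + true_bias + 0.001 * rng.standard_normal(angles.size)
est = lm_fit([0.0, 0.9, 0.0], angles, ranges)        # est[0] is the recovered bias
```

The plane here is the horizontal line y = 1 m; with 1 mm range noise the 12 mm bias is recovered to well under a millimetre.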

  18. Improved Absolute Radiometric Calibration of a UHF Airborne Radar

    NASA Technical Reports Server (NTRS)

    Chapin, Elaine; Hawkins, Brian P.; Harcke, Leif; Hensley, Scott; Lou, Yunling; Michel, Thierry R.; Moreira, Laila; Muellerschoen, Ronald J.; Shimada, Joanne G.; Tham, Kean W.

    2015-01-01

    The AirMOSS airborne SAR operates at UHF and produces fully polarimetric imagery. The AirMOSS radar data are used to produce Root Zone Soil Moisture (RZSM) depth profiles. The absolute radiometric accuracy of the imagery, ideally of better than 0.5 dB, is key to retrieving RZSM, especially in wet soils where the backscatter as a function of soil moisture tends to flatten out. In this paper we assess the absolute radiometric uncertainty in previously delivered data, describe a method to utilize Built In Test (BIT) data to improve the radiometric calibration, and evaluate the improvement from applying the method.

  19. Bayesian Treed Calibration: An Application to Carbon Capture With AX Sorbent

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konomi, Bledar A.; Karagiannis, Georgios; Lai, Kevin

    2017-01-02

    In cases where field or experimental measurements are not available, computer models can simulate real physical or engineering systems to reproduce their outcomes. They are usually calibrated in light of experimental data to create a better representation of the real system. Statistical methods based on Gaussian processes for calibration and prediction have been especially important when the computer models are expensive and experimental data limited. In this paper, we develop Bayesian treed calibration (BTC) as an extension of standard Gaussian process calibration methods to deal with non-stationary computer models and/or their discrepancy from the field (or experimental) data. Our proposed method partitions both the calibration and observable input space, based on a binary tree partitioning, into sub-regions where existing model calibration methods can be applied to connect a computer model with the real system. The estimation of the parameters in the proposed model is carried out using Markov chain Monte Carlo (MCMC) computational techniques. Different strategies have been applied to improve mixing. We illustrate our method in two artificial examples and a real application that concerns the capture of carbon dioxide with AX amine-based sorbents. The source code and the examples analyzed in this paper are available as part of the supplementary materials.

  20. Improved Calibration Of Acoustic Plethysmographic Sensors

    NASA Technical Reports Server (NTRS)

    Zuckerwar, Allan J.; Davis, David C.

    1993-01-01

    Improved method of calibration of acoustic plethysmographic sensors involves acoustic-impedance test conditions like those encountered in use. Clamped aluminum tube holds source of sound (hydrophone) inside balloon. Test and reference sensors attached to outside of balloon. Sensors used to measure blood flow, blood pressure, heart rate, breathing sounds, and other vital signs from surfaces of human bodies. Attached to torsos or limbs by straps or adhesives.

  1. Improved CRDS δ13C Stability Through New Calibration Application For CO2 And CH4

    NASA Astrophysics Data System (ADS)

    Rella, Chris; Arata, Caleb; Saad, Nabil; Leggett, Graham; Miles, Natasha; Richardson, Scott; Davis, Ken

    2015-04-01

    Stable carbon isotope ratio measurements of CO2 and CH4 provide valuable insight into global and regional sources and sinks of the two most important greenhouse gases. Methodologies based on Cavity Ring-Down Spectroscopy (CRDS) have been developed and are capable of delivering δ13C measurements with a precision better than 0.12 permil for CO2 and 0.4 permil for CH4 (1 hour window, 5 minute average). Here we present a method to further improve this measurement stability. We have developed a two-point calibration method which corrects for δ13C drift due to a dependence on carbon species concentration. This method calibrates for both carbon species concentration as well as δ13C. In addition, we further demonstrate that this added stability is especially valuable when using carbon isotope data in linear regression models such as Keeling plots, where even small amounts of error can be magnified to give inconclusive results. Furthermore, we show how this method is used to validate multiple instruments simultaneously and can be used to create the standard samples needed for field calibrations.
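
The concentration-dependence correction can be sketched as a linear two-point fit: two reference gases of known δ13C at different concentrations determine a slope and intercept that are then removed from raw readings. The reference values and error coefficients below are illustrative, not the instrument's actual calibration:

```python
def fit_two_point(conc, delta_meas, delta_true):
    """Fit delta_meas - delta_true = a*conc + b from two reference gases."""
    (c1, c2) = conc
    e1, e2 = delta_meas[0] - delta_true[0], delta_meas[1] - delta_true[1]
    a = (e2 - e1) / (c2 - c1)
    return a, e1 - a * c1

def correct(delta_meas, conc, a, b):
    """Remove the concentration-dependent drift from a raw delta reading."""
    return delta_meas - (a * conc + b)

# Illustrative CO2 references: -8.5 and -30.0 permil at 400 and 2000 ppm,
# measured with a synthetic drift of 0.002 permil/ppm plus a 0.1 permil offset.
a, b = fit_two_point((400.0, 2000.0),
                     delta_meas=(-8.5 + 0.9, -30.0 + 4.1),
                     delta_true=(-8.5, -30.0))
```

A sample at 1000 ppm whose true δ13C is -20.0 permil would read -17.9 permil raw; `correct(-17.9, 1000.0, a, b)` restores the true value.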

  2. A Robust Bayesian Random Effects Model for Nonlinear Calibration Problems

    PubMed Central

    Fong, Y.; Wakefield, J.; De Rosa, S.; Frahm, N.

    2013-01-01

    Summary In the context of a bioassay or an immunoassay, calibration means fitting a curve, usually nonlinear, through the observations collected on a set of samples containing known concentrations of a target substance, and then using the fitted curve and observations collected on samples of interest to predict the concentrations of the target substance in these samples. Recent technological advances have greatly improved our ability to quantify minute amounts of substance from a tiny volume of biological sample. This has in turn led to a need to improve statistical methods for calibration. In this paper, we focus on developing calibration methods robust to dependent outliers. We introduce a novel normal mixture model with dependent error terms to model the experimental noise. In addition, we propose a re-parameterization of the five parameter logistic nonlinear regression model that allows us to better incorporate prior information. We examine the performance of our methods with simulation studies and show that they lead to a substantial increase in performance measured in terms of mean squared error of estimation and a measure of the average prediction accuracy. A real data example from the HIV Vaccine Trials Network Laboratory is used to illustrate the methods. PMID:22551415
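
The five-parameter logistic (5PL) curve the paper re-parameterizes has a closed-form inverse, which is what turns a fitted calibration curve into concentration predictions. A sketch with illustrative parameter values (the robust mixture-error model itself is not reproduced here):

```python
def logistic5(x, a, b, c, d, g):
    """Five-parameter logistic: response at concentration x."""
    return d + (a - d) / (1.0 + (x / c) ** b) ** g

def inverse5(y, a, b, c, d, g):
    """Invert the fitted 5PL curve to predict concentration from a response."""
    return c * (((a - d) / (y - d)) ** (1.0 / g) - 1.0) ** (1.0 / b)

params = dict(a=0.1, b=1.2, c=100.0, d=2.0, g=0.8)   # illustrative fitted values
y50 = logistic5(50.0, **params)                       # calibration-curve response
x50 = inverse5(y50, **params)                         # predicted concentration
```

The inverse is only defined for responses strictly between the two asymptotes a and d, which in practice bounds the assay's reportable range.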

  3. Photometric calibration of the COMBO-17 survey with the Softassign Procrustes Matching method

    NASA Astrophysics Data System (ADS)

    Sheikhbahaee, Z.; Nakajima, R.; Erben, T.; Schneider, P.; Hildebrandt, H.; Becker, A. C.

    2017-11-01

    Accurate photometric calibration of optical data is crucial for photometric redshift estimation. We present the Softassign Procrustes Matching (SPM) method to improve the colour calibration upon the commonly used Stellar Locus Regression (SLR) method for the COMBO-17 survey. Our colour calibration approach can be categorised as a point-set matching method, which is frequently used in medical imaging and pattern recognition. We attain a photometric redshift precision Δz/(1 + zs) of better than 2 per cent. Our method is based on aligning the stellar locus of the uncalibrated stars to that of a spectroscopic sample of the Sloan Digital Sky Survey standard stars. We achieve our goal by finding a correspondence matrix between the two point-sets and applying the matrix to estimate the appropriate translations in multidimensional colour space. The SPM method is able to find the translation between two point-sets, despite the existence of noise and incompleteness of the common structures in the sets, as long as there is a distinct structure in at least one of the colour-colour pairs. We demonstrate the precision of our colour calibration method with a mock catalogue. The SPM colour calibration code is publicly available at https://neuronphysics@bitbucket.org/neuronphysics/spm.git.
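
A heavily simplified sketch of the core idea, not the full Softassign Procrustes algorithm: the translation between two point sets with unknown correspondences can be estimated by annealing soft Gaussian match weights from coarse to fine. The 2D point sets and annealing schedule are synthetic assumptions:

```python
import numpy as np

def soft_translation(A, B, sigmas=(2.0, 0.5, 0.1, 0.05), iters=5):
    """Estimate t such that A + t aligns with B, without known correspondences.
    Soft Gaussian correspondence weights are annealed from coarse to fine."""
    t = np.zeros(A.shape[1])
    for sigma in sigmas:
        for _ in range(iters):
            d2 = ((A[:, None, :] + t - B[None, :, :]) ** 2).sum(-1)
            W = np.exp(-d2 / (2 * sigma ** 2))
            W /= W.sum(axis=1, keepdims=True)        # soft match per point in A
            t = ((W @ B) - A).mean(axis=0)           # weighted mean shift
    return t

rng = np.random.default_rng(2)
A = rng.uniform(0, 1, size=(25, 2))                  # "uncalibrated" locus points
true_t = np.array([0.30, -0.20])                     # colour-space offset (synthetic)
B = (A + true_t)[rng.permutation(25)]                # reference locus, shuffled
t_est = soft_translation(A, B)
```

With a distinct structure in the point sets, the recovered translation matches the injected offset; the real method additionally handles outliers and incomplete overlap via slack terms.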

  4. SCALA: In situ calibration for integral field spectrographs

    NASA Astrophysics Data System (ADS)

    Lombardo, S.; Küsters, D.; Kowalski, M.; Aldering, G.; Antilogus, P.; Bailey, S.; Baltay, C.; Barbary, K.; Baugh, D.; Bongard, S.; Boone, K.; Buton, C.; Chen, J.; Chotard, N.; Copin, Y.; Dixon, S.; Fagrelius, P.; Feindt, U.; Fouchez, D.; Gangler, E.; Hayden, B.; Hillebrandt, W.; Hoffmann, A.; Kim, A. G.; Leget, P.-F.; McKay, L.; Nordin, J.; Pain, R.; Pécontal, E.; Pereira, R.; Perlmutter, S.; Rabinowitz, D.; Reif, K.; Rigault, M.; Rubin, D.; Runge, K.; Saunders, C.; Smadja, G.; Suzuki, N.; Taubenberger, S.; Tao, C.; Thomas, R. C.; Nearby Supernova Factory

    2017-11-01

    Aims: The scientific yield of current and future optical surveys is increasingly limited by systematic uncertainties in the flux calibration. This is the case for type Ia supernova (SN Ia) cosmology programs, where an improved calibration directly translates into improved cosmological constraints. Current methodology rests on models of stars. Here we aim to obtain flux calibration that is traceable to state-of-the-art detector-based calibration. Methods: We present the SNIFS Calibration Apparatus (SCALA), a color (relative) flux calibration system developed for the SuperNova integral field spectrograph (SNIFS), operating at the University of Hawaii 2.2 m (UH 88) telescope. Results: By comparing the color trend of the illumination generated by SCALA during two commissioning runs, and to previous laboratory measurements, we show that we can determine the light emitted by SCALA with a long-term repeatability better than 1%. We describe the calibration procedure necessary to control for system aging. We present measurements of the SNIFS throughput as estimated by SCALA observations. Conclusions: The SCALA calibration unit is now fully deployed at the UH 88 telescope, and with it color-calibration between 4000 Å and 9000 Å is stable at the percent level over a one-year baseline.

  5. The Value of Hydrograph Partitioning Curves for Calibrating Hydrological Models in Glacierized Basins

    NASA Astrophysics Data System (ADS)

    He, Zhihua; Vorogushyn, Sergiy; Unger-Shayesteh, Katy; Gafurov, Abror; Kalashnikova, Olga; Omorova, Elvira; Merz, Bruno

    2018-03-01

    This study refines the method for calibrating a glacio-hydrological model based on Hydrograph Partitioning Curves (HPCs), and evaluates its value in comparison to multidata set optimization approaches which use glacier mass balance, satellite snow cover images, and discharge. The HPCs are extracted from the observed flow hydrograph using catchment precipitation and temperature gradients. They indicate the periods when the various runoff processes, such as glacier melt or snow melt, dominate the basin hydrograph. The annual cumulative curve of the difference between average daily temperature and melt threshold temperature over the basin, as well as the annual cumulative curve of average daily snowfall on the glacierized areas are used to identify the starting and end dates of snow and glacier ablation periods. Model parameters characterizing different runoff processes are calibrated on different HPCs in a stepwise and iterative way. Results show that the HPC-based method (1) delivers model-internal consistency comparably to the tri-data set calibration method; (2) improves the stability of calibrated parameter values across various calibration periods; and (3) estimates the contributions of runoff components similarly to the tri-data set calibration method. Our findings indicate the potential of the HPC-based approach as an alternative for hydrological model calibration in glacierized basins where other calibration data sets than discharge are often not available or very costly to obtain.
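
The dating of ablation periods can be illustrated with a simplified stand-in for the HPC construction: find the longest contiguous run of days with mean temperature above the melt threshold in a synthetic annual temperature series (the paper instead uses cumulative curves of temperature difference and snowfall):

```python
import numpy as np

def ablation_period(temps, t_melt=0.0):
    """Start and end day of the longest contiguous run with temps > t_melt.
    A simplified stand-in for the hydrograph-partitioning date estimates."""
    above = temps > t_melt
    best, start = (0, -1), None
    for day, flag in enumerate(above):
        if flag and start is None:
            start = day
        if (not flag or day == len(above) - 1) and start is not None:
            end = day if flag else day - 1
            if end - start > best[1] - best[0]:
                best = (start, end)
            start = None
    return best

days = np.arange(365)
temps = -5.0 + 15.0 * np.sin(2 * np.pi * days / 365)   # synthetic daily means
start, end = ablation_period(temps)
```

For this synthetic sinusoidal climate the detected melt season runs from day 20 to day 162; model parameters governing snow and glacier melt would then be calibrated only against discharge inside that window.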

  6. Comparison Between One-Point Calibration and Two-Point Calibration Approaches in a Continuous Glucose Monitoring Algorithm

    PubMed Central

    Mahmoudi, Zeinab; Johansen, Mette Dencker; Christiansen, Jens Sandahl

    2014-01-01

    Background: The purpose of this study was to investigate the effect of using a 1-point calibration approach instead of a 2-point calibration approach on the accuracy of a continuous glucose monitoring (CGM) algorithm. Method: A previously published real-time CGM algorithm was compared with its updated version, which used a 1-point calibration instead of a 2-point calibration. In addition, the contribution of the corrective intercept (CI) to the calibration performance was assessed. Finally, the sensor background current was estimated in real time and retrospectively. The study was performed on 132 type 1 diabetes patients. Results: Replacing the 2-point calibration with the 1-point calibration improved the CGM accuracy, with the greatest improvement achieved in hypoglycemia (18.4% median absolute relative differences [MARD] in hypoglycemia for the 2-point calibration, and 12.1% MARD in hypoglycemia for the 1-point calibration). Using 1-point calibration increased the percentage of sensor readings in zone A+B of the Clarke error grid analysis (EGA) in the full glycemic range, and also enhanced hypoglycemia sensitivity. Exclusion of CI from calibration reduced hypoglycemia accuracy, while slightly increasing euglycemia accuracy. Both real-time and retrospective estimation of the sensor background current suggest that the background current can be considered zero in the calibration of the SCGM1 sensor. Conclusions: The sensor readings calibrated with the 1-point calibration approach were shown to have higher accuracy than those calibrated with the 2-point calibration approach. PMID:24876420
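
The two ingredients compared in the study can be sketched as follows: a one-point calibration that takes the background current as zero (as the study found for the SCGM1 sensor), and the MARD metric used to score accuracy. All numbers are synthetic:

```python
def one_point_cal(cal_current, cal_glucose, background=0.0):
    """Sensor sensitivity from a single paired sample; the background current
    is assumed zero, per the study's finding for the SCGM1 sensor."""
    return (cal_current - background) / cal_glucose

def to_glucose(current, sensitivity, background=0.0):
    """Convert a raw sensor current to a glucose estimate."""
    return (current - background) / sensitivity

def mard(estimates, references):
    """Median absolute relative difference, in percent."""
    diffs = sorted(abs(e - r) / r for e, r in zip(estimates, references))
    n, mid = len(diffs), len(diffs) // 2
    return 100.0 * (diffs[mid] if n % 2 else 0.5 * (diffs[mid - 1] + diffs[mid]))

# Synthetic example: calibrate once at 80 mg/dL with 4.0 nA, then score
# three paired sensor/reference readings.
sens = one_point_cal(4.0, 80.0)
est = [to_glucose(c, sens) for c in (3.3, 5.2, 8.6)]
m = mard(est, (60.0, 100.0, 180.0))
```

A two-point variant would additionally fit an intercept from a second paired sample; the study's result is that dropping that second point (with zero background) helps rather than hurts.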

  7. Improved calibration-based non-uniformity correction method for uncooled infrared camera

    NASA Astrophysics Data System (ADS)

    Liu, Chengwei; Sui, Xiubao

    2017-08-01

    With the latest improvements of microbolometer focal plane arrays (FPA), uncooled infrared (IR) cameras are becoming the most widely used devices in thermography, especially in handheld devices. However, the influences of changing ambient conditions and the non-uniform response of the sensors make it more difficult to correct the nonuniformity of an uncooled infrared camera. In this paper, based on the infrared radiation characteristics of the TEC-less uncooled infrared camera, a novel model was proposed for calibration-based non-uniformity correction (NUC). In this model, we introduce the FPA temperature, together with the responses of the microbolometer under different ambient temperatures, to calculate the correction parameters. Based on the proposed model, we can work out the correction parameters from calibration measurements under controlled ambient conditions with a uniform blackbody. All correction parameters can be determined after the calibration process and then be used to correct the non-uniformity of the infrared camera in real time. This paper presents the details of the compensation procedure and the performance of the proposed calibration-based non-uniformity correction method. Our method was evaluated on realistic IR images obtained by a 384×288 pixel uncooled long-wave infrared (LWIR) camera operated under changing ambient conditions. The results show that our method can exclude the influence caused by changing ambient conditions, and ensures that the infrared camera has a stable performance.
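
As a baseline for the model described above, classic two-point NUC computes a per-pixel gain and offset from two uniform blackbody views; the paper's contribution is to extend this with FPA/ambient-temperature dependence, which is not reproduced in this synthetic sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
shape = (4, 6)                                       # tiny stand-in for a 384x288 FPA
pix_gain = 1.0 + 0.1 * rng.standard_normal(shape)    # per-pixel responsivity
pix_off = 50.0 * rng.standard_normal(shape)          # per-pixel offset (counts)

def observe(t_scene):
    """Synthetic FPA response to a uniform blackbody at temperature t_scene."""
    return pix_gain * t_scene + pix_off

# Two-point calibration against cold and hot uniform blackbodies.
t_cold, t_hot = 20.0, 40.0
r_cold, r_hot = observe(t_cold), observe(t_hot)
gain = (t_hot - t_cold) / (r_hot - r_cold)           # per-pixel correction gain
offset = t_cold - gain * r_cold

corrected = gain * observe(30.0) + offset            # should be uniform at 30.0
```

After correction, a uniform 30-degree scene maps to a flat image; in the paper's model, `gain` and `offset` would additionally be functions of the FPA temperature.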

  8. Automation is an Effective Way to Improve Quality of Verification (Calibration) of Measuring Instruments

    NASA Astrophysics Data System (ADS)

    Golobokov, M.; Danilevich, S.

    2018-04-01

    In order to assess calibration reliability and automate such assessment, procedures for data collection and a simulation study of a thermal imager calibration procedure have been elaborated. The existing calibration techniques do not always provide high reliability. A new method for analyzing the existing calibration techniques and developing new efficient ones has been suggested and tested. A type of software has been studied that automatically generates instrument calibration reports, monitors their proper configuration, processes measurement results, and assesses instrument validity. The use of such software reduces the man-hours spent on finalizing calibration data by a factor of 2 to 5 and eliminates a whole set of typical operator errors.

  9. Innovative self-calibration method for accelerometer scale factor of the missile-borne RINS with fiber optic gyro.

    PubMed

    Zhang, Qian; Wang, Lei; Liu, Zengjun; Zhang, Yiming

    2016-09-19

    The calibration of an inertial measurement unit (IMU) is a key technique for improving the precision of a missile's inertial navigation system (INS), especially the calibration of the accelerometer scale factor. The traditional calibration method is generally based on a high-accuracy turntable; however, it is costly, and the calibration results do not reflect the actual operating environment. With the development of multi-axis rotational INS (RINS) with optical inertial sensors, self-calibration has become an effective way to calibrate the IMU on a missile, and the calibration results are more accurate in practical application. However, the introduction of multi-axis RINS causes additional calibration errors, including non-orthogonality errors from mechanical processing and non-horizontal errors of the operating environment, meaning that the multi-axis gimbals cannot be regarded as a high-accuracy turntable. For application on missiles, this paper analyzes the relationship between the calibration error of the accelerometer scale factor and the non-orthogonality and non-horizontal angles, and proposes an innovative calibration procedure using the signals of a fiber optic gyro and a photoelectric encoder. Laboratory and vehicle experiment results validate the theory and prove that the proposed method relaxes the orthogonality requirement on the rotation axes and eliminates the strict application conditions of the system.

  10. Panorama parking assistant system with improved particle swarm optimization method

    NASA Astrophysics Data System (ADS)

    Cheng, Ruzhong; Zhao, Yong; Li, Zhichao; Jiang, Weigang; Wang, Xin'an; Xu, Yong

    2013-10-01

    A panorama parking assistant system (PPAS) for the automotive aftermarket together with a practical improved particle swarm optimization method (IPSO) are proposed in this paper. In the PPAS system, four fisheye cameras are installed in the vehicle with different views, and four channels of video frames captured by the cameras are processed as a 360-deg top-view image around the vehicle. Besides the embedded design of PPAS, the key problem for image distortion correction and mosaicking is the efficiency of parameter optimization in the process of camera calibration. In order to address this problem, an IPSO method is proposed. Compared with other parameter optimization methods, the proposed method allows a certain range of dynamic change for the intrinsic and extrinsic parameters, and can exploit only one reference image to complete all of the optimization; therefore, the efficiency of the whole camera calibration is increased. The PPAS is commercially available, and the IPSO method is a highly practical way to increase the efficiency of the installation and the calibration of PPAS in automobile 4S shops.
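
A minimal global-best PSO loop of the kind the IPSO builds on, shown here minimizing a stand-in cost within box bounds (the dynamic parameter ranges and single-reference-image cost of the actual camera calibration are not modeled):

```python
import numpy as np

def pso(cost, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=4):
    """Global-best particle swarm optimization within box bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                   # keep parameters in range
        val = np.array([cost(p) for p in x])
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Stand-in for a reprojection-error cost over three camera parameters.
sphere = lambda p: float(np.sum((p - 0.5) ** 2))
best, best_val = pso(sphere, bounds=([-5, -5, -5], [5, 5, 5]))
```

In the calibration setting, `cost` would be the reprojection error of the reference image under candidate intrinsic/extrinsic parameters, and `bounds` the allowed dynamic range around the nominal values.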

  11. Differential computation method used to calibrate the angle-centroid relationship in coaxial reverse Hartmann test

    NASA Astrophysics Data System (ADS)

    Li, Xinji; Hui, Mei; Zhao, Zhu; Liu, Ming; Dong, Liquan; Kong, Lingqin; Zhao, Yuejin

    2018-05-01

    A differential computation method is presented to improve the precision of calibration for the coaxial reverse Hartmann test (RHT). In the calibration, the accuracy of the distance measurement greatly influences the surface shape test, as demonstrated in the mathematical analyses. However, high-precision absolute distance measurement is difficult in the calibration. Thus, a differential computation method that only requires the relative distance was developed. In the proposed method, a liquid crystal display screen successively displayed two regular dot matrix patterns with different dot spacing. In a special case, images on the detector exhibited similar centroid distributions during the reflector translation. Thus, the critical value of the relative displacement distance and the centroid distributions of the dots on the detector were utilized to establish the relationship between the rays at certain angles and the detector coordinates. Experiments revealed the approximately linear behavior of the centroid variation with the relative displacement distance. With the differential computation method, we improved the precision of the traditional calibration to 10^-5 rad root mean square. The precision of the RHT was improved by approximately 100 nm.

  12. ACCOUNTING FOR CALIBRATION UNCERTAINTIES IN X-RAY ANALYSIS: EFFECTIVE AREAS IN SPECTRAL FITTING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Hyunsook; Kashyap, Vinay L.; Drake, Jeremy J.

    2011-04-20

    While considerable advances have been made to account for statistical uncertainties in astronomical analyses, systematic instrumental uncertainties have been generally ignored. This can be crucial to a proper interpretation of analysis results because instrumental calibration uncertainty is a form of systematic uncertainty. Ignoring it can underestimate error bars and introduce bias into the fitted values of model parameters. Accounting for such uncertainties currently requires extensive case-specific simulations if using existing analysis packages. Here, we present general statistical methods that incorporate calibration uncertainties into spectral analysis of high-energy data. We first present a method based on multiple imputation that can be applied with any fitting method, but is necessarily approximate. We then describe a more exact Bayesian approach that works in conjunction with a Markov chain Monte Carlo based fitting. We explore methods for improving computational efficiency, and in particular detail a method of summarizing calibration uncertainties with a principal component analysis of samples of plausible calibration files. This method is implemented using recently codified Chandra effective area uncertainties for low-resolution spectral analysis and is verified using both simulated and actual Chandra data. Our procedure for incorporating effective area uncertainty is easily generalized to other types of calibration uncertainties.
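
The summarization step can be sketched directly: run a PCA (via SVD) over an ensemble of plausible calibration curves and keep the mean plus the leading components. The synthetic "effective area" curves below are built from two systematic modes, so two components capture essentially all the variance:

```python
import numpy as np

rng = np.random.default_rng(5)
energies = np.linspace(0.3, 8.0, 120)                # keV grid (illustrative)
nominal = 600.0 * np.exp(-0.5 * ((energies - 1.5) / 1.2) ** 2)  # fake effective area

# Two systematic modes generate an ensemble of plausible calibration curves.
mode1 = 0.03 * nominal                               # overall normalization wiggle
mode2 = 0.02 * nominal * np.sin(energies)            # energy-dependent distortion
samples = nominal + (rng.standard_normal((50, 1)) * mode1
                     + rng.standard_normal((50, 1)) * mode2)

mean = samples.mean(axis=0)
U, s, Vt = np.linalg.svd(samples - mean, full_matrices=False)
explained = np.cumsum(s ** 2) / np.sum(s ** 2)
# Any plausible curve is then approximated as mean + sum_k e_k * Vt[k],
# so the fit only needs a handful of coefficients e_k instead of full files.
```

This is the compression that makes the MCMC-based approach tractable: the calibration-file ensemble is replaced by a low-dimensional basis.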

  13. Calibration Issues and Operating System Requirements for Electron-Probe Microanalysis

    NASA Technical Reports Server (NTRS)

    Carpenter, P.

    2006-01-01

    Instrument purchase requirements and dialogue with manufacturers have established hardware parameters for alignment, stability, and reproducibility, which have helped improve the precision and accuracy of electron microprobe analysis (EPMA). The development of correction algorithms and the accurate solution to quantitative analysis problems requires the minimization of systematic errors and relies on internally consistent data sets. Improved hardware and computer systems have resulted in better automation of vacuum systems, stage and wavelength-dispersive spectrometer (WDS) mechanisms, and x-ray detector systems which have improved instrument stability and precision. Improved software now allows extended automated runs involving diverse setups and better integrates digital imaging and quantitative analysis. However, instrumental performance is not regularly maintained, as WDS are aligned and calibrated during installation but few laboratories appear to check and maintain this calibration. In particular, detector deadtime (DT) data is typically assumed rather than measured, due primarily to the difficulty and inconvenience of the measurement process. This is a source of fundamental systematic error in many microprobe laboratories and is unknown to the analyst, as the magnitude of DT correction is not listed in output by microprobe operating systems. The analyst must remain vigilant to deviations in instrumental alignment and calibration, and microprobe system software must conveniently verify the necessary parameters. Microanalysis of mission critical materials requires an ongoing demonstration of instrumental calibration. Possible approaches to improvements in instrument calibration, quality control, and accuracy will be discussed. Development of a set of core requirements based on discussions with users, researchers, and manufacturers can yield documents that improve and unify the methods by which instruments can be calibrated. 
These results can be used to continue improvements of EPMA.
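
The deadtime correction discussed above can be made concrete. For a non-paralyzable detector the true count rate is N = n/(1 - n*tau), so an assumed rather than measured tau propagates directly into a systematic intensity error; the rates and tau values below are illustrative:

```python
def deadtime_correct(observed_rate, tau):
    """Non-paralyzable deadtime model: true = observed / (1 - observed * tau)."""
    return observed_rate / (1.0 - observed_rate * tau)

# Magnitude of the systematic error when tau is assumed rather than measured.
observed = 50000.0                        # counts/s, a high but plausible WDS rate
tau_true, tau_assumed = 2.0e-6, 1.5e-6    # seconds (illustrative values)
true_rate = deadtime_correct(observed, tau_true)
biased_rate = deadtime_correct(observed, tau_assumed)
bias_pct = 100.0 * (biased_rate - true_rate) / true_rate
```

At this rate a 0.5 microsecond error in the assumed deadtime biases the corrected intensity by several percent, which is exactly the kind of silent systematic the passage warns about.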

  14. 3D artifact for calibrating kinematic parameters of articulated arm coordinate measuring machines

    NASA Astrophysics Data System (ADS)

    Zhao, Huining; Yu, Liandong; Xia, Haojie; Li, Weishi; Jiang, Yizhou; Jia, Huakun

    2018-06-01

    In this paper, a 3D artifact is proposed to calibrate the kinematic parameters of articulated arm coordinate measuring machines (AACMMs). The artifact is composed of 14 reference points with three different heights, which provides 91 different reference lengths, and a method is proposed to calibrate the artifact with laser tracker multi-stations. Therefore, the kinematic parameters of an AACMM can be calibrated in one setup of the proposed artifact, instead of having to adjust the 1D or 2D artifacts to different positions and orientations in the existing methods. As a result, it saves time to calibrate the AACMM with the proposed artifact in comparison with the traditional 1D or 2D artifacts. The performance of the AACMM calibrated with the proposed artifact is verified with a 600.003 mm gauge block. The result shows that the measurement accuracy of the AACMM is improved effectively through calibration with the proposed artifact.

  15. Exploring the calibration of a wind forecast ensemble for energy applications

    NASA Astrophysics Data System (ADS)

    Heppelmann, Tobias; Ben Bouallegue, Zied; Theis, Susanne

    2015-04-01

    In the German research project EWeLiNE, Deutscher Wetterdienst (DWD) and the Fraunhofer Institute for Wind Energy and Energy System Technology (IWES) are collaborating with three German Transmission System Operators (TSO) in order to provide the TSOs with improved probabilistic power forecasts. Probabilistic power forecasts are derived from probabilistic weather forecasts, themselves derived from ensemble prediction systems (EPS). Since the considered raw ensemble wind forecasts suffer from underdispersiveness and bias, calibration methods are developed for the correction of the model bias and the ensemble spread bias. The overall aim is to improve the ensemble forecasts such that the uncertainty of the possible weather development is depicted by the ensemble spread from the first forecast hours. Additionally, the ensemble members after calibration should remain physically consistent scenarios. We focus on probabilistic hourly wind forecasts with a horizon of 21 h delivered by the convection-permitting high-resolution ensemble system COSMO-DE-EPS, which became operational in 2012 at DWD. The ensemble consists of 20 ensemble members driven by four different global models. The model area includes all of Germany and parts of Central Europe with a horizontal resolution of 2.8 km and a vertical resolution of 50 model levels. For verification we use wind mast measurements at around 100 m height, which corresponds to the hub height of wind energy plants that belong to wind farms within the model area. Calibration of the ensemble forecasts can be performed by different statistical methods applied to the raw ensemble output. Here, we explore local bivariate Ensemble Model Output Statistics at individual sites and quantile regression with different predictors. Applying different methods, we already show an improvement of ensemble wind forecasts from COSMO-DE-EPS for energy applications. 
In addition, an ensemble copula coupling approach transfers the time-dependencies of the raw ensemble to the calibrated ensemble. The calibrated wind forecasts are evaluated first with univariate probabilistic scores and additionally with diagnostics of wind ramps in order to assess the time-consistency of the calibrated ensemble members.
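
A univariate shift-and-stretch sketch of spread calibration (much simpler than the bivariate EMOS and quantile regression explored in the study): remove the mean bias and inflate the spread so that, on training data, the mean ensemble variance matches the squared error of the ensemble mean. Applying the same affine map to every member preserves their rank order, in the spirit of ensemble copula coupling. All data are synthetic:

```python
import numpy as np

def calibrate_ensemble(train_ens, train_obs, ens):
    """Bias-correct and rescale ensemble members (cases x members arrays).
    Shift by the mean error of the ensemble mean; stretch so mean ensemble
    variance matches the mean squared error of the debiased ensemble mean."""
    bias = (train_ens.mean(axis=1) - train_obs).mean()
    mse = np.mean((train_ens.mean(axis=1) - bias - train_obs) ** 2)
    s = np.sqrt(mse / np.mean(train_ens.var(axis=1)))   # spread inflation factor
    m = ens.mean(axis=1, keepdims=True)
    return m - bias + s * (ens - m)                     # members keep rank order

rng = np.random.default_rng(6)
truth = 8.0 + 2.0 * rng.standard_normal(500)            # "observed" 100 m wind speed
err = rng.standard_normal(500)                          # common forecast error
# Raw ensemble: +1 m/s bias, spread 0.5 m/s vs. ~1 m/s actual error (underdispersive).
raw = truth[:, None] + 1.0 + err[:, None] + 0.5 * rng.standard_normal((500, 20))
cal = calibrate_ensemble(raw, truth, raw)
```

After calibration the ensemble is unbiased and its spread matches the error of the mean on the training sample, i.e. the spread-skill relation is restored.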

  16. Calibration of High Heat Flux Sensors at NIST

    PubMed Central

    Murthy, A. V.; Tsai, B. K.; Gibson, C. E.

    1997-01-01

    An ongoing program at the National Institute of Standards and Technology (NIST) is aimed at improving and standardizing heat-flux sensor calibration methods. The current calibration needs of U.S. science and industry exceed the current NIST capability of 40 kW/m2 irradiance. In achieving this goal, as well as meeting lower-level non-radiative heat flux calibration needs of science and industry, three different types of calibration facilities currently are under development at NIST: convection, conduction, and radiation. This paper describes the research activities associated with the NIST Radiation Calibration Facility. Two different techniques, transfer and absolute, are presented. The transfer calibration technique employs a transfer standard calibrated with reference to a radiometric standard for calibrating the sensors using a graphite tube blackbody. Plans for an absolute calibration facility include the use of a spherical blackbody and a cooled aperture and sensor-housing assembly to calibrate the sensors in a low convective environment. PMID:27805156

  17. Calibration Matters: Advances in Strapdown Airborne Gravimetry

    NASA Astrophysics Data System (ADS)

    Becker, D.

    2015-12-01

    Using a commercial navigation-grade strapdown inertial measurement unit (IMU) for airborne gravimetry can be advantageous in terms of cost, handling, and space consumption compared to the classical stable-platform spring gravimeters. Up to now, however, large sensor errors made it impossible to reach the mGal level using IMUs of this type, as they are not designed or optimized for this kind of application. Apart from proper error modeling in the filtering process, specific calibration methods tailored to the application of aerogravity may help to bridge this gap and to improve their performance. Based on simulations, a quantitative analysis is presented on how much IMU sensor errors, such as biases, scale factors, cross-couplings, and thermal drifts, distort the determination of gravity and the deflection of the vertical (DOV). Several lab and in-field calibration methods are briefly discussed, and calibration results are shown for an iMAR RQH unit. In particular, a thermal lab calibration of its QA2000 accelerometers greatly improved the long-term drift behavior. The latest results from four recent airborne gravimetry campaigns confirm the effectiveness of the calibrations applied, with cross-over accuracies reaching 1.0 mGal (0.6 mGal after cross-over adjustment) and DOV accuracies reaching 1.1 arc seconds after cross-over adjustment.

  18. Aircraft electric field measurements: Calibration and ambient field retrieval

    NASA Technical Reports Server (NTRS)

    Koshak, William J.; Bailey, Jeff; Christian, Hugh J.; Mach, Douglas M.

    1994-01-01

    An aircraft locally distorts the ambient thundercloud electric field. In order to determine the field in the absence of the aircraft, an aircraft calibration is required. In this work a matrix inversion method is introduced for calibrating an aircraft equipped with four or more electric field sensors and a high-voltage corona point that is capable of charging the aircraft. An analytic, closed-form solution for the estimate of a (3 x 3) aircraft calibration matrix is derived, and an absolute calibration experiment is used to improve the relative magnitudes of the elements of this matrix. To demonstrate the calibration procedure, we analyze actual calibration data derived from a Learjet 28/29 that was equipped with five shutter-type field mill sensors (each with a sensitivity of better than 1 V/m) located at the top, bottom, port, starboard, and aft positions. As a test of the calibration method, we analyze computer-simulated calibration data (derived from known aircraft and ambient fields) and explicitly determine the errors involved in deriving the variety of calibration matrices. We extend our formalism to arrive at an analytic solution for the ambient field, and again carry all errors explicitly.
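    The least-squares idea behind such a matrix calibration can be sketched as follows. This toy assumes a purely linear mill response m = C·E with five mills and noiseless data, and it neglects the aircraft-charge term; the matrix and field values are invented for illustration (the paper itself derives a 3 x 3 calibration matrix, but the same least-squares principle applies):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear model: during calibration maneuver k, the five
# mill readings are m_k = C @ E_k for a known ambient field E_k.
C_true = np.array([[ 1.2,  0.1,  0.0],
                   [-0.3,  0.9,  0.2],
                   [ 0.0,  0.1,  1.1],
                   [ 0.4, -0.2,  0.8],
                   [ 0.1,  0.5, -0.6]])      # 5 mills x 3 field axes
E_known = rng.normal(size=(20, 3))           # 20 known calibration fields
M = E_known @ C_true.T                       # simulated mill outputs

# Least-squares estimate of the mill-response matrix from the maneuvers
C_est = np.linalg.lstsq(E_known, M, rcond=None)[0].T

# Ambient-field retrieval from a new set of mill readings: with more
# mills than field components, use the pseudo-inverse.
E_ambient = np.linalg.pinv(C_est) @ M[0]
```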

  19. Precision alignment and calibration of optical systems using computer generated holograms

    NASA Astrophysics Data System (ADS)

    Coyle, Laura Elizabeth

    As techniques for manufacturing and metrology advance, optical systems are being designed with more complexity than ever before. Given these prescriptions, alignment and calibration can be a limiting factor in their final performance. Computer generated holograms (CGHs) have several unique properties that make them powerful tools for meeting these demanding tolerances. This work will present three novel methods for alignment and calibration of optical systems using computer generated holograms. Alignment methods using CGHs require that the optical wavefront created by the CGH be related to a mechanical datum to locate it in space. An overview of existing methods is provided as background, then two new alignment methods are discussed in detail. In the first method, the CGH contact Ball Alignment Tool (CBAT) is used to align a ball or sphere-mounted retroreflector (SMR) to a Fresnel zone plate pattern with micron-level accuracy. The ball is bonded directly onto the CGH substrate and provides permanent, accurate registration between the optical wavefront and a mechanical reference to locate the CGH in space. A prototype CBAT was built and used to align and bond an SMR to a CGH. In the second method, CGH references are used to align axisymmetric optics in four degrees of freedom with low uncertainty and real-time feedback. The CGHs create simultaneous 3D optical references where the zero-order reflection sets tilt and the first diffracted order sets centration. The flexibility of the CGH design can be used to accommodate a wide variety of optical systems and maximize sensitivity to misalignments. A 2-CGH prototype system was aligned multiple times, and the alignment uncertainty was quantified and compared to an error model. Finally, an enhanced calibration method is presented. It uses multiple perturbed measurements of a master sphere to improve the calibration of CGH-based Fizeau interferometers ultimately measuring aspheric test surfaces.
The improvement in the calibration is a function of the interferometer error and the aspheric departure of the desired test surface. This calibration is most effective at reducing coma and trefoil from figure error or misalignments of the interferometer components. The enhanced calibration can reduce overall measurement uncertainty or allow the budgeted error contribution from another source to be increased. A single set of sphere measurements can be used to calculate calibration maps for closely related aspheres, including segmented primary mirrors for telescopes. A parametric model is developed and compared to the simulated calibration of a case study interferometer.

  20. Investigating the Effects of Variable Water Type for VIIRS Calibration

    NASA Astrophysics Data System (ADS)

    Bowers, J.; Ladner, S.; Martinolich, P.; Arnone, R.; Lawson, A.; Crout, R. L.; Vandermeulen, R. A.

    2016-02-01

    The Naval Research Laboratory - Stennis Space Center (NRL-SSC) currently provides calibration and validation support for the Visible Infrared Imaging Radiometer Suite (VIIRS) satellite ocean color products. NRL-SSC utilizes the NASA Ocean Biology Processing Group (OBPG) methodology for on-orbit vicarious calibration with in situ data collected in blue ocean water by the Marine Optical Buoy (MOBY). An acceptable calibration consists of 20-40 satellite to in situ matchups that establish the radiance correlation at specific points within the operating range of the VIIRS instrument. While the current method improves VIIRS performance, the MOBY data alone do not represent the full range of radiance values seen in the coastal oceans. By utilizing data from the AERONET-OC coastal sites, however, we expand our calibration matchups to cover a more realistic range of values, particularly in the green and red spectral regions of the sensor. Improved calibration will provide more accurate data to support daily operations and enable construction of a valid climatology for future reference.

  1. Design of a Two-Step Calibration Method of Kinematic Parameters for Serial Robots

    NASA Astrophysics Data System (ADS)

    WANG, Wei; WANG, Lei; YUN, Chao

    2017-03-01

    Serial robots are used to handle workpieces with large dimensions, and calibrating kinematic parameters is one of the most efficient ways to improve their accuracy. Many models have been set up to investigate how many kinematic parameters can be identified to satisfy the minimality principle, but the base frame and the kinematic parameters are usually calibrated together in a single step. A two-step method of calibrating kinematic parameters is proposed to improve the accuracy of both the robot's base frame and its kinematic parameters. The forward kinematics, described with respect to the measuring coordinate frame, are established based on the product-of-exponentials (POE) formula. In the first step, the robot's base coordinate frame is calibrated using the unit-quaternion form. The errors of both the robot's reference configuration and the base coordinate frame's pose are equivalently transformed into zero-position errors of the robot's joints. The simplified model of the robot's positioning error is established in explicit second-order expressions. The identification model is then solved by the least-squares method, requiring only measured position coordinates. The complete subtask of calibrating the robot's 39 kinematic parameters is finished in the second step. A group of calibration experiments shows that the proposed two-step method improves the average absolute positioning accuracy of an industrial robot to 0.23 mm. This paper demonstrates that the robot's base frame should be calibrated before its kinematic parameters in order to improve its absolute positioning accuracy.
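    The idea of identifying joint zero-position errors by least squares from position measurements alone can be illustrated on a toy planar 2R arm. This is not the paper's 39-parameter POE model; link lengths, joint configurations, and offsets below are invented, and Gauss-Newton with a numerical Jacobian stands in for the paper's identification procedure:

```python
import numpy as np

LINK1, LINK2 = 0.4, 0.3    # assumed link lengths (m)

def fk(q, d):
    """Planar 2R forward kinematics with joint zero-position offsets d."""
    a1 = q[:, 0] + d[0]
    a2 = a1 + q[:, 1] + d[1]
    return np.stack([LINK1 * np.cos(a1) + LINK2 * np.cos(a2),
                     LINK1 * np.sin(a1) + LINK2 * np.sin(a2)], axis=1)

rng = np.random.default_rng(2)
q = rng.uniform(-1.5, 1.5, size=(25, 2))    # commanded joint configurations
d_true = np.array([0.01, -0.02])            # unknown zero offsets (rad)
p_meas = fk(q, d_true)                      # "measured" tool positions

# Gauss-Newton identification of the zero-position errors from
# position coordinates only.
d = np.zeros(2)
eps = 1e-6
for _ in range(10):
    r = (p_meas - fk(q, d)).ravel()         # position residuals
    J = np.empty((r.size, 2))
    for j in range(2):                      # numerical Jacobian
        dd = d.copy()
        dd[j] += eps
        J[:, j] = (fk(q, dd) - fk(q, d)).ravel() / eps
    d += np.linalg.lstsq(J, r, rcond=None)[0]
```

With noiseless synthetic data the iteration recovers the offsets essentially exactly; with real measurements the same structure yields a least-squares estimate.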

  2. Requirements for Calibration in Noninvasive Glucose Monitoring by Raman Spectroscopy

    PubMed Central

    Lipson, Jan; Bernhardt, Jeff; Block, Ueyn; Freeman, William R.; Hofmeister, Rudy; Hristakeva, Maya; Lenosky, Thomas; McNamara, Robert; Petrasek, Danny; Veltkamp, David; Waydo, Stephen

    2009-01-01

    Background In the development of noninvasive glucose monitoring technology, it is highly desirable to derive a calibration that relies on neither person-dependent calibration information nor supplementary calibration points furnished by an existing invasive measurement technique (universal calibration). Method By appropriate experimental design and associated analytical methods, we establish the sufficiency of multiple factors required to permit such a calibration. Factors considered are the discrimination of the measurement technique, stabilization of the experimental apparatus, physics–physiology-based measurement techniques for normalization, the sufficiency of the size of the data set, and appropriate exit criteria to establish the predictive value of the algorithm. Results For noninvasive glucose measurements, using Raman spectroscopy, the sufficiency of the scale of data was demonstrated by adding new data into an existing calibration algorithm and requiring that (a) the prediction error should be preserved or improved without significant re-optimization, (b) the complexity of the model for optimum estimation not rise with the addition of subjects, and (c) the estimation for persons whose data were removed entirely from the training set should be no worse than the estimates on the remainder of the population. Using these criteria, we established guidelines empirically for the number of subjects (30) and skin sites (387) for a preliminary universal calibration. We obtained a median absolute relative difference for our entire data set of 30 mg/dl, with 92% of the data in the Clarke A and B ranges. Conclusions Because Raman spectroscopy has high discrimination for glucose, a data set of practical dimensions appears to be sufficient for universal calibration. 
Improvements based on reducing the variance of blood perfusion are expected to reduce the prediction errors substantially, and the inclusion of supplementary calibration points for the wearable device under development will be permissible and beneficial. PMID:20144354
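    The median absolute relative difference metric quoted in the Results can be sketched as follows; note the abstract reports its value in mg/dl, while the common definition below normalizes by the reference reading (values are invented):

```python
import numpy as np

def median_abs_relative_diff(pred, ref):
    """Median of |prediction - reference| / reference, often quoted
    as a percentage."""
    pred, ref = np.asarray(pred, float), np.asarray(ref, float)
    return np.median(np.abs(pred - ref) / ref)

# Hypothetical paired noninvasive and reference readings (mg/dl)
pred = [110.0, 95.0, 140.0, 100.0]
ref  = [100.0, 100.0, 140.0, 125.0]
mard = median_abs_relative_diff(pred, ref)   # -> 0.075, i.e. 7.5%
```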

  3. Calibration of a portable HPGe detector using MCNP code for the determination of 137Cs in soils.

    PubMed

    Gutiérrez-Villanueva, J L; Martín-Martín, A; Peña, V; Iniguez, M P; de Celis, B; de la Fuente, R

    2008-10-01

    In situ gamma spectrometry provides a fast method to determine (137)Cs inventories in soils. To improve the accuracy of the estimates, one can use not only the information on the photopeak count rates but also on the peak to forward-scatter ratios. Before applying this procedure to field measurements, a calibration including several experimental simulations must be carried out in the laboratory. In this paper it is shown that Monte Carlo methods are a valuable tool to minimize the number of experimental measurements needed for the calibration.

  4. An intelligent space for mobile robot localization using a multi-camera system.

    PubMed

    Rampinelli, Mariana; Covre, Vitor Buback; de Queiroz, Felippe Mendonça; Vassallo, Raquel Frizera; Bastos-Filho, Teodiano Freire; Mazo, Manuel

    2014-08-15

    This paper describes an intelligent space, whose objective is to localize and control robots or robotic wheelchairs to help people. Such an intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capturing is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Once the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization.

  5. An Intelligent Space for Mobile Robot Localization Using a Multi-Camera System

    PubMed Central

    Rampinelli, Mariana.; Covre, Vitor Buback.; de Queiroz, Felippe Mendonça.; Vassallo, Raquel Frizera.; Bastos-Filho, Teodiano Freire.; Mazo, Manuel.

    2014-01-01

    This paper describes an intelligent space, whose objective is to localize and control robots or robotic wheelchairs to help people. Such an intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capturing is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Once the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization. PMID:25196009

  6. Simultaneous auto-calibration and gradient delays estimation (SAGE) in non-Cartesian parallel MRI using low-rank constraints.

    PubMed

    Jiang, Wenwen; Larson, Peder E Z; Lustig, Michael

    2018-03-09

    To correct gradient timing delays in non-Cartesian MRI while simultaneously recovering corruption-free auto-calibration data for parallel imaging, without additional calibration scans. The calibration matrix constructed from multi-channel k-space data should be inherently low-rank; this property is used to construct reconstruction kernels or sensitivity maps. Delays between the gradient hardware across different axes and the RF receive chain, which are relatively benign in Cartesian MRI (excluding EPI), lead to trajectory deviations and hence data inconsistencies for non-Cartesian trajectories. These in turn lead to a higher-rank, corrupted calibration matrix, which hampers the reconstruction. Here, a method named Simultaneous Auto-calibration and Gradient delays Estimation (SAGE) is proposed that estimates the actual k-space trajectory while simultaneously recovering the uncorrupted auto-calibration data. This is done by estimating the gradient delays that result in the lowest rank of the calibration matrix; the Gauss-Newton method is used to solve the non-linear problem. The method is validated in simulations using center-out radial, projection reconstruction, and spiral trajectories. Feasibility is demonstrated on phantom and in vivo scans with center-out radial and projection reconstruction trajectories. SAGE is able to estimate gradient timing delays with high accuracy at a signal-to-noise ratio as low as 5. The method effectively removes artifacts resulting from gradient timing delays and restores image quality in center-out radial, projection reconstruction, and spiral trajectories. The low-rank based method introduced here simultaneously estimates gradient timing delays and provides accurate auto-calibration data for improved image quality, without any additional calibration scans. © 2018 International Society for Magnetic Resonance in Medicine.
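    The core idea, choosing the delay that minimizes the rank of a calibration matrix, can be illustrated with a 1-D toy: a grid search on two synthetic channels rather than the paper's Gauss-Newton on multi-channel k-space. All signals below are invented; the cost is the smallest singular value, which vanishes exactly when the two rows become scalar multiples of each other:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 400)
pulse = lambda d: np.exp(-((t - 0.5 - d) ** 2) / 0.005)

true_delay = 0.03                   # unknown timing delay on channel B
ch_a = pulse(0.0)                   # reference channel
ch_b = 0.7 * pulse(true_delay)      # delayed, scaled copy of the signal

def rank_cost(d):
    """Smallest singular value of the 2-row 'calibration matrix' after
    undoing a candidate delay d on channel B; near zero only when the
    rows align (rank 1)."""
    corrected = np.interp(t + d, t, ch_b)      # shift channel B back by d
    X = np.vstack([ch_a, corrected])
    return np.linalg.svd(X, compute_uv=False)[-1]

grid = np.linspace(0.0, 0.06, 121)
d_est = grid[np.argmin([rank_cost(d) for d in grid])]   # ~0.03
```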

  7. Research on the method of improving the accuracy of CMM (coordinate measuring machine) testing aspheric surface

    NASA Astrophysics Data System (ADS)

    Cong, Wang; Xu, Lingdi; Li, Ang

    2017-10-01

    Large aspheric surfaces, which deviate from a spherical shape, are widely used in a variety of optical systems. Compared with spherical surfaces, large aspheric surfaces have many advantages, such as improving image quality, correcting aberrations, expanding the field of view, increasing the effective distance, and making the optical system compact and lightweight. In particular, with the rapid development of space optics, space sensors require higher resolution and larger viewing angles, so aspheric surfaces have become essential components of such optical systems. After coarse grinding of an aspheric surface, the surface profile error is about tens of microns[1]. To achieve the final surface-accuracy requirement, the aspheric surface must be modified quickly, and high-precision testing is the basis for rapid convergence of the surface error. There are many methods for aspheric surface testing[2], including geometric ray testing, Hartmann testing, the Ronchi test, the knife-edge method, direct profilometry, and interferometry, but each has its disadvantages[6]. In recent years, measurement of aspheric surfaces has become one of the important factors restricting aspheric surface fabrication. A two-meter-aperture industrial CMM (coordinate measuring machine) is available, but it suffers from large detection errors and low repeatability when measuring aspheric surfaces during coarse grinding, which seriously affects convergence efficiency during aspheric mirror processing. To solve these problems, this paper presents an effective error control, calibration, and removal method based on real-time monitoring of the calibration mirror position, probe correction, selection of the measurement mode, and development of a measurement-point distribution program. 
    Verified on real engineering examples, the method improves the nominal measurement accuracy of the industrial-grade CMM from a PV value of 7 microns to 4 microns, which effectively improves the grinding efficiency of aspheric mirrors and confirms the correctness of the method. This paper also investigates the error detection and operation control method, the error calibration of the CMM, and the random error calibration of the CMM.

  8. Evaluation of Factors Affecting CGMS Calibration

    PubMed Central

    2006-01-01

    Background The optimal number/timing of calibrations entered into the Continuous Glucose Monitoring System (“CGMS”; Medtronic MiniMed, Northridge, CA) have not been previously described. Methods Fifty subjects with T1DM (10–18 y) were hospitalized in a clinical research center for ~24 h on two separate days. CGMS and OneTouch® Ultra® Meter (“Ultra”; LifeScan, Milpitas, CA) data were obtained. The CGMS was retrospectively recalibrated using the Ultra data, varying the number and timing of calibrations. Resulting CGMS values were compared against laboratory reference values. Results There was a modest improvement in accuracy with increasing number of calibrations. The median relative absolute deviation (RAD) was 14%, 15%, 13% and 13% when using 3, 4, 5 and 7 calibration values, respectively (p<0.001). Corresponding percentages of CGMS-reference pairs meeting the ISO criteria were 66%, 67%, 71% and 72% (p<0.001). Nighttime accuracy improved when daytime calibrations (pre-lunch and pre-dinner) were removed, leaving only two calibrations at 9 p.m. and 6 a.m. (median difference: −2 vs. −9 mg/dL, p<0.001; median RAD: 12% vs. 15%, p=0.001). Accuracy was better on visits where the average absolute rate of glucose change at the times of calibration was lower. On visits with average absolute rates <0.5, 0.5–<1.0, 1.0–<1.5 and ≥1.5 mg/dL/min, median RAD values were 13% vs. 14% vs. 17% vs. 19%, respectively (p=0.05). Conclusions Although accuracy is slightly improved with more calibrations, the timing of the calibrations appears more important. Modifying the algorithm to put less weight on daytime calibrations for nighttime values and calibrating during times of relative glucose stability may have greater impact on accuracy. PMID:16800753
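    The CGMS's actual calibration algorithm is proprietary, but the notion of recalibrating a sensor trace from a handful of entered meter values can be sketched with a generic linear least-squares fit; all sensor currents and glucose values below are invented:

```python
import numpy as np

def recalibrate(isig_cal, bg_cal, isig_series):
    """Fit glucose = slope * isig + intercept by least squares on the
    entered calibration pairs, then apply the fit to the whole trace."""
    A = np.vstack([isig_cal, np.ones_like(isig_cal)]).T
    slope, intercept = np.linalg.lstsq(A, bg_cal, rcond=None)[0]
    return slope * isig_series + intercept

# Toy noiseless sensor: glucose is exactly linear in sensor current,
# so calibrating with the first 3 of 5 entries recovers the full trace.
isig = np.array([10.0, 15.0, 20.0, 25.0, 30.0])   # sensor current (nA)
bg = 5.0 * isig + 20.0                             # meter glucose (mg/dL)
trace = recalibrate(isig[:3], bg[:3], isig)
```

In practice, adding noise to `bg` and varying how many pairs go into `recalibrate` reproduces the kind of number-of-calibrations comparison the study performs.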

  9. Hand-eye calibration for rigid laparoscopes using an invariant point.

    PubMed

    Thompson, Stephen; Stoyanov, Danail; Schneider, Crispin; Gurusamy, Kurinchi; Ourselin, Sébastien; Davidson, Brian; Hawkes, David; Clarkson, Matthew J

    2016-06-01

    Laparoscopic liver resection has significant advantages over open surgery due to less patient trauma and faster recovery times, yet it can be difficult due to the restricted field of view and lack of haptic feedback. Image guidance provides a potential solution but one current challenge is in accurate "hand-eye" calibration, which determines the position and orientation of the laparoscope camera relative to the tracking markers. In this paper, we propose a simple and clinically feasible calibration method based on a single invariant point. The method requires no additional hardware, can be constructed by theatre staff during surgical setup, requires minimal image processing and can be visualised in real time. Real-time visualisation allows the surgical team to assess the calibration accuracy before use in surgery. In addition, in the laboratory, we have developed a laparoscope with an electromagnetic tracking sensor attached to the camera end and an optical tracking marker attached to the distal end. This enables a comparison of tracking performance. We have evaluated our method in the laboratory and compared it to two widely used methods, "Tsai's method" and "direct" calibration. The new method is of comparable accuracy to existing methods, and we show RMS projected error due to calibration of 1.95 mm for optical tracking and 0.85 mm for EM tracking, versus 4.13 and 1.00 mm respectively, using existing methods. The new method has also been shown to be workable under sterile conditions in the operating room. We have proposed a new method of hand-eye calibration, based on a single invariant point. Initial experience has shown that the method provides visual feedback, satisfactory accuracy and can be performed during surgery. We also show that an EM sensor placed near the camera would provide significantly improved image overlay accuracy.
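    A closely related invariant-point formulation (in the style of pivot calibration, not necessarily the authors' exact algorithm) is linear: every tracked pose (R_i, t_i) must map one fixed point in the tracked frame to one fixed point in the world frame, R_i·p_local + t_i = p_world. A noiseless sketch with invented poses:

```python
import numpy as np

def rodrigues(axis, angle):
    """Rotation matrix from an axis-angle pair (Rodrigues' formula)."""
    k = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

rng = np.random.default_rng(3)
p_local_true = np.array([0.01, 0.02, 0.15])   # invariant point, tracked frame
p_world_true = np.array([0.50, 0.40, 0.30])   # same point, world frame

# Simulated tracked poses, all consistent with the one invariant point
Rs = [rodrigues(rng.normal(size=3), rng.uniform(0.2, 1.0)) for _ in range(10)]
ts = [p_world_true - R @ p_local_true for R in Rs]

# Stack R_i @ p_local - p_world = -t_i and solve by least squares
A = np.vstack([np.hstack([R, -np.eye(3)]) for R in Rs])
b = -np.concatenate(ts)
sol = np.linalg.lstsq(A, b, rcond=None)[0]
p_local, p_world = sol[:3], sol[3:]
```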

  10. Dynamic Calibration and Verification Device of Measurement System for Dynamic Characteristic Coefficients of Sliding Bearing

    PubMed Central

    Chen, Runlin; Wei, Yangyang; Shi, Zhaoyang; Yuan, Xiaoyang

    2016-01-01

    The identification accuracy of dynamic characteristic coefficients is difficult to guarantee because of the errors of the measurement system itself. A novel dynamic calibration method for such a measurement system is proposed in this paper to eliminate these errors. This calibration method differs from the method based on a suspended mass in that the verification device is a spring-mass system, which can simulate the dynamic characteristics of a sliding bearing. The verification device was built, and the calibration experiment was implemented over a wide frequency range, with the bearing stiffness simulated by disc springs. The experimental results show that the amplitude errors of this measurement system are small in the frequency range of 10 Hz–100 Hz, and that the phase errors increase with frequency. A simulated experiment of dynamic characteristic coefficient identification in the frequency range of 10 Hz–30 Hz preliminarily verifies that the calibration data in this range can well support the dynamic characteristics test of a sliding bearing. Bearing experiments over greater frequency ranges will require higher manufacturing and installation precision of the calibration device. Besides, the processes of the calibration experiments should be improved. PMID:27483283

  11. A Rapid Coordinate Transformation Method Applied in Industrial Robot Calibration Based on Characteristic Line Coincidence.

    PubMed

    Liu, Bailing; Zhang, Fumin; Qu, Xinghua; Shi, Xiaojia

    2016-02-18

    Coordinate transformation plays an indispensable role in industrial measurements, including photogrammetry, geodesy, laser 3-D measurement and robotics. The widely applied methods of coordinate transformation are generally based on solving the equations of point clouds. Despite the high accuracy, this might result in no solution due to the use of ill-conditioned matrices. In this paper, a novel coordinate transformation method is proposed, based not on solving equations but on geometric transformation. We construct characteristic lines to represent the coordinate systems. According to the spatial geometric relations, the characteristic lines are made to coincide by a series of rotations and translations. The transformation matrix can be obtained using matrix transformation theory. Experiments are designed to compare the proposed method with other methods. The results show that the proposed method has the same high accuracy, but the operation is more convenient and flexible. A multi-sensor combined measurement system is also presented to improve the position accuracy of a robot via calibration of the robot kinematic parameters. Experimental verification shows that the position accuracy of the robot manipulator is improved by 45.8% with the proposed method and robot calibration.
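    The conventional point-cloud formulation that the abstract compares against can be sketched as the SVD-based (Kabsch) best-fit rigid transform; the rotation, translation, and points below are invented:

```python
import numpy as np

def rigid_transform(P, Q):
    """Best-fit rotation R and translation t with Q ~ P @ R.T + t
    (Kabsch / SVD solution, the conventional point-cloud approach)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

rng = np.random.default_rng(4)
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
P = rng.normal(size=(10, 3))          # points in the source frame
Q = P @ R_true.T + t_true             # same points in the target frame
R, t = rigid_transform(P, Q)
```

With noisy point pairs the same code returns the least-squares rotation and translation; the ill-conditioning the abstract mentions arises when the point configuration is degenerate (e.g. nearly collinear).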

  12. A Rapid Coordinate Transformation Method Applied in Industrial Robot Calibration Based on Characteristic Line Coincidence

    PubMed Central

    Liu, Bailing; Zhang, Fumin; Qu, Xinghua; Shi, Xiaojia

    2016-01-01

    Coordinate transformation plays an indispensable role in industrial measurements, including photogrammetry, geodesy, laser 3-D measurement and robotics. The widely applied methods of coordinate transformation are generally based on solving the equations of point clouds. Despite the high accuracy, this might result in no solution due to the use of ill-conditioned matrices. In this paper, a novel coordinate transformation method is proposed, based not on solving equations but on geometric transformation. We construct characteristic lines to represent the coordinate systems. According to the spatial geometric relations, the characteristic lines are made to coincide by a series of rotations and translations. The transformation matrix can be obtained using matrix transformation theory. Experiments are designed to compare the proposed method with other methods. The results show that the proposed method has the same high accuracy, but the operation is more convenient and flexible. A multi-sensor combined measurement system is also presented to improve the position accuracy of a robot via calibration of the robot kinematic parameters. Experimental verification shows that the position accuracy of the robot manipulator is improved by 45.8% with the proposed method and robot calibration. PMID:26901203

  13. Fast calibration of electromagnetically tracked oblique-viewing rigid endoscopes.

    PubMed

    Liu, Xinyang; Rice, Christina E; Shekhar, Raj

    2017-10-01

    The oblique-viewing (i.e., angled) rigid endoscope is a commonly used tool in conventional endoscopic surgeries. The relative rotation between its two moveable parts, the telescope and the camera head, creates a rotation offset between an object and its projection in the camera image. A calibration method tailored to compensate for such an offset is needed. We developed a fast calibration method for oblique-viewing rigid endoscopes suitable for clinical use. In contrast to prior approaches based on optical tracking, we used electromagnetic (EM) tracking as the external tracking hardware to improve compactness and practicality. Two EM sensors were mounted on the telescope and the camera head, respectively, with considerations to minimize EM tracking errors. Single-image calibration was incorporated into the method, and a sterilizable plate, laser-marked with the calibration pattern, was also developed. Furthermore, we proposed a general algorithm to estimate the rotation center in the camera image, and derived formulas for updating the camera matrix for clockwise and counterclockwise rotations. The proposed calibration method was validated using a conventional [Formula: see text], 5-mm laparoscope. Freehand calibrations were performed using the proposed method, and the calibration time averaged 2 min and 8 s. The calibration accuracy was evaluated in a simulated clinical setting with several surgical tools present in the magnetic field of EM tracking. The root-mean-square re-projection error averaged 4.9 pixels (range 2.4-8.5 pixels, with image resolution of [Formula: see text]) for rotation angles ranging from [Formula: see text] to [Formula: see text]. We developed a method for fast and accurate calibration of oblique-viewing rigid endoscopes. The method was also designed to be performed in the operating room and will therefore support clinical translation of many emerging endoscopic computer-assisted surgical systems.

  14. Standardization of Laser Methods and Techniques for Vibration Measurements and Calibrations

    NASA Astrophysics Data System (ADS)

    von Martens, Hans-Jürgen

    2010-05-01

    The realization and dissemination of the SI units of motion quantities (vibration and shock) have been based on laser interferometer methods specified in international documentary standards. New and refined laser methods and techniques developed by national metrology institutes and by leading manufacturers in the past two decades have been swiftly specified as standard methods for inclusion in the ISO 16063 series of international documentary standards. A survey of ISO standards for the calibration of vibration and shock transducers demonstrates the extended ranges and improved accuracy (measurement uncertainty) of laser methods and techniques for vibration and shock measurements and calibrations. The first standard for the calibration of laser vibrometers by laser interferometry, or by a reference accelerometer calibrated by laser interferometry (ISO 16063-41), is at the Draft International Standard (DIS) stage and may be issued by the end of 2010. The standard methods with refined techniques have proved to achieve wider measurement ranges and smaller measurement uncertainties than those specified in the ISO standards. The applicability of different standardized interferometer methods to vibrations at high frequencies was recently demonstrated up to 347 kHz (acceleration amplitudes up to 350 km/s2). The relative deviations between the amplitude measurement results of the different interferometer methods, applied simultaneously, were less than 1% in all cases.

  15. Efficient material decomposition method for dual-energy X-ray cargo inspection system

    NASA Astrophysics Data System (ADS)

    Lee, Donghyeon; Lee, Jiseoc; Min, Jonghwan; Lee, Byungcheol; Lee, Byeongno; Oh, Kyungmin; Kim, Jaehyun; Cho, Seungryong

    2018-03-01

    Dual-energy X-ray inspection systems are widely used today because they provide both X-ray attenuation contrast of the imaged object and its material information. Material decomposition capability allows a higher detection sensitivity for potential targets, for example purposely loaded impurities in agricultural product inspections and threats in security scans. Dual-energy X-ray transmission data can be transformed into two basis-material thickness data sets, and the accuracy of this transformation relies heavily on the calibration of the material decomposition process. The calibration process in general can be laborious and time consuming. Moreover, a conventional calibration method is often challenged by the nonuniform spectral characteristics of the X-ray beam across the entire field-of-view (FOV). In this work, we developed an efficient material decomposition calibration process for a linear accelerator (LINAC) based high-energy X-ray cargo inspection system. We also proposed a multi-spot calibration method to improve the decomposition performance throughout the entire FOV. Experimental validation of the proposed method has been demonstrated by use of a cargo inspection system that supports 6 MV and 9 MV dual-energy imaging.
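    A common way to calibrate such a decomposition is to fit polynomials that map the two log-attenuation measurements to basis-material thicknesses over a calibration grid. A toy sketch with invented attenuation coefficients, a purely linear forward model (no beam hardening), and a quadratic calibration basis:

```python
import numpy as np

# Toy forward model: log-attenuations at (low, high) energies for
# thicknesses (t1, t2) of two basis materials.
mu = np.array([[0.20, 0.15],     # material 1: (low, high) per cm
               [0.50, 0.30]])    # material 2: (low, high) per cm

rng = np.random.default_rng(1)
T = rng.uniform(0.0, 10.0, size=(100, 2))   # calibration thickness grid
L = T @ mu                                   # simulated (low, high) data

def design(L):
    """Quadratic polynomial basis in the two log-attenuations."""
    l1, l2 = L[:, 0], L[:, 1]
    ones = np.ones_like(l1)
    return np.stack([ones, l1, l2, l1 * l1, l1 * l2, l2 * l2], axis=1)

# Calibration: least-squares fit of thicknesses as polynomials in (l1, l2)
coef = np.linalg.lstsq(design(L), T, rcond=None)[0]

# Decompose a new dual-energy measurement into basis thicknesses
L_new = np.array([[2.0, 1.4]])
t_est = design(L_new) @ coef
```

The multi-spot idea in the abstract amounts to repeating this fit at several detector positions so that each region of the FOV gets its own coefficient set.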

  16. New Temperature Calibrations and Validation Tests of 5- and 6-Methyl brGDGTs in Lake Sediment

    NASA Astrophysics Data System (ADS)

    Russell, J. M.; Williams, J. W.; Jackson, S. T.; S Sinninghe Damsté, J.; Watson, B. I.

    2017-12-01

    Branched glycerol dialkyl glycerol tetraethers (brGDGTs) are increasingly used to reconstruct changes in temperature and other environmental variables. There are now multiple methods to measure brGDGTs, many different brGDGT calibrations for different environments, and many applications of the brGDGT proxy, yet brGDGT-based temperature reconstructions have rarely been tested against independent paleoclimate data to evaluate and validate the proxy. We present new temperature calibrations of brGDGTs preserved in 65 lake sediment samples determined using new, improved chromatographic methods that separate 5- and 6-methyl brGDGT isomers. We test these new calibrations, as well as calibrations using older methods that do not separate brGDGT isomers, in a sediment core spanning the last deglaciation from a classic North American site (Silver Lake, USA) against independent pollen-derived temperature estimates. The distributions of, and environmental controls on, 5- and 6-methyl brGDGTs differ significantly in lake sediments versus soils, suggesting different controls on bacterial membrane lipid compositions in the two environments. This results in different calibrations in soils and lake sediments; however, as in soils, separation of 5- and 6-methyl isomers significantly improves the error statistics of some brGDGT-temperature calibrations, with calibration errors of 2-2.5 °C. Applying these calibrations to sediments from Silver Lake, we observe warming from the last glacial maximum to the Holocene of 10.5 °C, as well as clear Bølling-Allerød and Younger Dryas responses. The amplitude and structure of temperature changes inferred from brGDGTs match well with estimates from pollen, with correlations (r2) as high as 0.88, indicating that GDGTs can provide accurate temperature reconstructions. We further observe relationships between brGDGT- and pollen-inferred temperature estimates that suggest GDGT proxies can provide information on vegetation responses to climate changes in the past.
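
    The calibration step itself is ordinary least-squares regression of observed temperature on a lipid index, with the calibration error reported as the regression RMSE. A self-contained sketch on synthetic data (the index values, slope, and noise level are invented for illustration, not the paper's calibration):

```python
import math
import random

def ols_fit(x, y):
    """Ordinary least-squares slope and intercept for y ~ a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    return a, my - a * mx

def rmse(x, y, a, b):
    """Root-mean-square calibration error of the regression."""
    return math.sqrt(sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y)) / len(x))

# Synthetic 65-sample calibration set: a hypothetical methylation index
# (0..1) responding linearly to temperature, with ~2 degC scatter.
random.seed(0)
index = [i / 64 for i in range(65)]
temp = [30.0 * v - 2.0 + random.gauss(0.0, 2.0) for v in index]
a, b = ols_fit(index, temp)
err = rmse(index, temp, a, b)
```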

  17. Existing methods for improving the accuracy of digital-to-analog converters

    NASA Astrophysics Data System (ADS)

    Eielsen, Arnfinn A.; Fleming, Andrew J.

    2017-09-01

    The performance of digital-to-analog converters is principally limited by errors in the output voltage levels. Such errors are known as element mismatch and are quantified by the integral non-linearity. Element mismatch limits the achievable accuracy and resolution in high-precision applications as it causes gain and offset errors, as well as harmonic distortion. In this article, five existing methods for mitigating the effects of element mismatch are compared: physical level calibration, dynamic element matching, noise-shaping with digital calibration, large periodic high-frequency dithering, and large stochastic high-pass dithering. These methods are suitable for improving accuracy when using digital-to-analog converters that use multiple discrete output levels to reconstruct time-varying signals. The methods improve linearity and therefore reduce harmonic distortion and can be retrofitted to existing systems with minor hardware variations. The performance of each method is compared theoretically and confirmed by simulations and experiments. Experimental results demonstrate that three of the five methods provide significant improvements in the resolution and accuracy when applied to a general-purpose digital-to-analog converter. As such, these methods can directly improve performance in a wide range of applications including nanopositioning, metrology, and optics.
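
    Element mismatch is quantified by the integral non-linearity (INL): the deviation of each measured output level from a straight line through the levels, expressed in LSB. A minimal sketch using an endpoint-fit line and hypothetical measured levels:

```python
def integral_nonlinearity(levels):
    """INL per code: deviation of each measured output level from the
    endpoint-fit straight line, in units of one LSB."""
    n = len(levels)
    lsb = (levels[-1] - levels[0]) / (n - 1)
    ideal = [levels[0] + k * lsb for k in range(n)]
    return [(m - i) / lsb for m, i in zip(levels, ideal)]

# Hypothetical 3-bit DAC with mismatched elements (output in volts).
measured = [0.000, 0.130, 0.248, 0.371, 0.502, 0.621, 0.752, 0.875]
inl = integral_nonlinearity(measured)
worst = max(abs(v) for v in inl)  # worst-case INL in LSB
```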

  18. Improved cross-calibration of Thomson scattering and electron cyclotron emission with ECH on DIII-D

    DOE PAGES

    Brookman, M. W.; Austin, M. E.; McLean, A. G.; ...

    2016-08-08

    Thomson scattering (TS) produces n_e profiles from measurement of scattered laser beam intensity. In the case of Rayleigh scattering, it provides a first calibration of the relation n_e/I_TS, which depends on many factors (e.g. laser alignment and power, optics, and measurement systems). On DIII-D, the n_e calibration is adjusted for each laser and optic path against an absolute n_e measurement from a density-driven cutoff on the 48-channel 2nd harmonic X-mode electron cyclotron emission (ECE) system. This method has been used to calibrate Thomson densities from the edge to near the core (r/a > 0.15). Application of core electron cyclotron heating (ECH) improves the quality of the cutoff and the depth of its penetration into the core. ECH also changes underlying MHD activity. Furthermore, on the removal of ECH power, the cutoff penetrates in from the edge to the core and channels fall successively and smoothly into cutoff. This improves the quality of the TS n_e calibration while minimizing wall loading.

  19. Calibration of HST wide field camera for quantitative analysis of faint galaxy images

    NASA Technical Reports Server (NTRS)

    Ratnatunga, Kavan U.; Griffiths, Richard E.; Casertano, Stefano; Neuschaefer, Lyman W.; Wyckoff, Eric W.

    1994-01-01

    We present the methods adopted to optimize the calibration of images obtained with the Hubble Space Telescope (HST) Wide Field Camera (WFC) (1991-1993). Our main goal is to improve quantitative measurement of faint images, with special emphasis on the faint (I approximately 20-24 mag) stars and galaxies observed as a part of the Medium-Deep Survey. Several modifications to the standard calibration procedures have been introduced, including improved bias and dark images, and a new supersky flatfield obtained by combining a large number of relatively object-free Medium-Deep Survey exposures of random fields. The supersky flat has a pixel-to-pixel rms error of about 2.0% in F555W and of 2.4% in F785LP; large-scale variations are smaller than 1% rms. Overall, our modifications improve the quality of faint images with respect to the standard calibration by about a factor of five in photometric accuracy and about 0.3 mag in sensitivity, corresponding to about a factor of two in observing time. The relevant calibration images have been made available to the scientific community.

  20. True logarithmic amplification of frequency clock in SS-OCT for calibration

    PubMed Central

    Liu, Bin; Azimi, Ehsan; Brezinski, Mark E.

    2011-01-01

    With swept source optical coherence tomography (SS-OCT), imprecise signal calibration prevents optimal imaging of biological tissues such as the coronary artery. This work demonstrates an approach using a true logarithmic amplifier to precondition the clock signal, in an effort to minimize noise and phase errors for optimal calibration. The method was validated and tested with a high-speed SS-OCT system. The experimental results demonstrate its superior ability to optimize calibration and improve imaging performance. In particular, this hardware-based approach is suitable for real-time calibration in a high-speed system where computation time is constrained. PMID:21698036

  1. Hybrid PSO-ASVR-based method for data fitting in the calibration of infrared radiometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Sen; Li, Chengwei, E-mail: heikuanghit@163.com

    2016-06-15

    The present paper describes a hybrid particle swarm optimization-adaptive support vector regression (PSO-ASVR)-based method for data fitting in the calibration of an infrared radiometer. The proposed hybrid PSO-ASVR-based method combines PSO with adaptive processing and support vector regression (SVR). The optimization technique involves setting parameters in the ASVR fitting procedure, which significantly improves the fitting accuracy. However, its use in the calibration of infrared radiometers has not yet been widely explored. Bearing this in mind, the PSO-ASVR-based method, which is based on statistical learning theory, is successfully used here to obtain the relationship between the radiation of a standard source and the response of an infrared radiometer. The main advantages of this method are the flexible adjustment mechanism in data processing and the optimization mechanism in the kernel parameter setting of the SVR. Numerical examples and applications to the calibration of an infrared radiometer are performed to verify the performance of the PSO-ASVR-based method compared to conventional data fitting methods.
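
    The PSO component can be illustrated independently of the SVR details: a swarm of candidate parameter vectors is driven toward the settings that minimize the fitting error. The sketch below applies a minimal PSO to a hypothetical radiometer response model; the model form, bounds, and constants are assumptions for illustration, not the paper's:

```python
import math
import random

def pso_minimize(objective, bounds, n_particles=20, n_iter=60, seed=1):
    """Minimal particle swarm optimizer: each particle remembers its
    personal best, the swarm shares a global best, and velocities blend
    inertia, cognitive, and social terms (w=0.7, c1=c2=1.5)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            f = objective(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Hypothetical saturating radiometer response: resp = a*(1 - exp(-b*L)).
L = [0.5 * k for k in range(1, 11)]
resp = [2.0 * (1.0 - math.exp(-0.8 * x)) for x in L]
sse = lambda p: sum((r - p[0] * (1.0 - math.exp(-p[1] * x))) ** 2
                    for x, r in zip(L, resp))
params, best = pso_minimize(sse, [(0.1, 5.0), (0.1, 5.0)])
```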

  2. NASA Astrophysics Data System (ADS)

    2017-11-01

    To deal with these problems investigators usually rely on a calibration method that makes use of a substance with an accurately known set of interatomic distances. The procedure consists of carrying out a diffraction experiment on the chosen calibrating substance, determining the value of the distances with use of the nominal (meter) value of the voltage, and then correcting the nominal voltage by an amount that produces the distances in the calibration substance. Examples of gases that have been used for calibration are carbon dioxide, carbon tetrachloride, carbon disulfide, and benzene; solids such as zinc oxide smoke (powder) deposited on a screen or slit have also been used. The question implied by the use of any standard molecule is, how accurate are the interatomic distance values assigned to the standard? For example, a solid calibrant is subject to heating by the electron beam, possibly producing unknown changes in the lattice constants, and polyatomic gaseous molecules require corrections for vibrational averaging ("shrinkage") effects that are uncertain at best. It has lately been necessary for us to investigate this matter in connection with on-going studies of several molecules in which size is the most important issue. These studies indicated that our usual method for retrieval of data captured on film needed improvement. The following is an account of these two issues - the accuracy of the distances assigned to the chosen standard molecule, and the improvements in our methods of retrieving the scattered intensity data.

  3. Study of continuous blood pressure estimation based on pulse transit time, heart rate and photoplethysmography-derived hemodynamic covariates.

    PubMed

    Feng, Jingjie; Huang, Zhongyi; Zhou, Congcong; Ye, Xuesong

    2018-06-01

    It is widely recognized that pulse transit time (PTT) can track blood pressure (BP) over short periods of time, and hemodynamic covariates such as heart rate and stiffness index may also contribute to BP monitoring. In this paper, we derived a proportional relationship between BP and PTT^-2 and proposed an improved method adopting hemodynamic covariates in addition to PTT for continuous BP estimation. We divided 28 subjects from the Multi-parameter Intelligent Monitoring for Intensive Care database into two groups (with/without cardiovascular diseases) and utilized a machine learning strategy based on regularized linear regression (RLR) to construct BP models with different covariates for the corresponding groups. RLR was performed for individuals as the initial calibration, while a recursive least squares algorithm was employed for the re-calibration. The results showed that errors of BP estimation by our method stayed within the Association for the Advancement of Medical Instrumentation limits (-0.98 ± 6.00 mmHg @ SBP, 0.02 ± 4.98 mmHg @ DBP) when the calibration interval extended to 1200-beat cardiac cycles. In comparison with two other representative studies, Chen's method remained accurate (0.32 ± 6.74 mmHg @ SBP, 0.94 ± 5.37 mmHg @ DBP) using a 400-beat calibration interval, while Poon's failed (-1.97 ± 10.59 mmHg @ SBP, 0.70 ± 4.10 mmHg @ DBP) when using a 200-beat calibration interval. With additional hemodynamic covariates utilized, our method improved the accuracy of PTT-based BP estimation, decreased the calibration frequency, and has the potential for better continuous BP estimation.
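
    The core of the derived relationship can be exercised with a one-covariate calibration: regress BP against PTT^-2 for a subject, then predict from new transit times. The numbers below are fabricated for illustration only:

```python
def fit_line(x, y):
    """Least-squares fit y ~ a*x + b (the per-subject initial calibration)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

# Hypothetical subject: systolic BP (mmHg) tracking 1/PTT^2 (PTT in s).
ptt = [0.20, 0.21, 0.22, 0.23, 0.24, 0.25]
sbp = [135.0, 128.0, 121.5, 116.0, 111.0, 107.0]
x = [1.0 / t ** 2 for t in ptt]        # the PTT^-2 regressor
a, b = fit_line(x, sbp)
predict = lambda t: a / t ** 2 + b     # BP estimate from a new PTT
```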

  4. Quasi-Static Calibration Method of a High-g Accelerometer

    PubMed Central

    Wang, Yan; Fan, Jinbiao; Zu, Jing; Xu, Peng

    2017-01-01

    To solve the problem of resonance during quasi-static calibration of high-g accelerometers, we deduce the relationship between the minimum excitation pulse width and the resonant frequency of the calibrated accelerometer according to the second-order mathematical model of the accelerometer, and improve the quasi-static calibration theory. We establish a quasi-static calibration testing system, which uses a gas gun to generate high-g acceleration signals, and apply a laser interferometer to reproduce the impact acceleration. These signals are used to drive the calibrated accelerometer. By comparing the excitation acceleration signal and the output responses of the calibrated accelerometer to the excitation signals, the impact sensitivity of the calibrated accelerometer is obtained. As indicated by the calibration test results, this calibration system produces excitation acceleration signals with a pulse width of less than 1000 μs and realizes the quasi-static calibration of high-g accelerometers with a resonant frequency above 20 kHz, with a calibration error of 3%. PMID:28230743
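
    The quasi-static criterion can be checked numerically with the second-order accelerometer model: when the excitation pulse is much longer than the resonant period the output tracks the input, while a very short pulse is attenuated and rings. A sketch under assumed values (damping ratio and pulse widths are illustrative, not the paper's):

```python
import math

def peak_response_ratio(f_n, zeta, pulse_width, dt=1e-7):
    """Integrate the second-order accelerometer model
        y'' + 2*zeta*w_n*y' + w_n^2 * y = w_n^2 * u(t)
    driven by a unit half-sine pulse of the given width, and return
    (peak output)/(peak input); a value near 1 indicates quasi-static
    behaviour. Semi-implicit Euler keeps the oscillator stable."""
    w_n = 2.0 * math.pi * f_n
    y, v, peak, t = 0.0, 0.0, 0.0, 0.0
    t_end = pulse_width + 2.0 / f_n  # pulse plus two resonant periods of ringing
    while t < t_end:
        u = math.sin(math.pi * t / pulse_width) if t < pulse_width else 0.0
        acc = w_n * w_n * (u - y) - 2.0 * zeta * w_n * v
        v += acc * dt
        y += v * dt
        peak = max(peak, abs(y))
        t += dt
    return peak

# 20 kHz resonance, light damping: a 1000 us pulse is quasi-static,
# a 5 us pulse is far too short and the response is attenuated.
wide = peak_response_ratio(20e3, 0.05, 1000e-6)
narrow = peak_response_ratio(20e3, 0.05, 5e-6)
```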

  5. Availability of High Quality TRMM Ground Validation Data from Kwajalein, RMI: A Practical Application of the Relative Calibration Adjustment Technique

    NASA Technical Reports Server (NTRS)

    Marks, David A.; Wolff, David B.; Silberstein, David S.; Tokay, Ali; Pippitt, Jason L.; Wang, Jianxin

    2008-01-01

    Since the Tropical Rainfall Measuring Mission (TRMM) satellite launch in November 1997, the TRMM Satellite Validation Office (TSVO) at NASA Goddard Space Flight Center (GSFC) has been performing quality control and estimating rainfall from the KPOL S-band radar at Kwajalein, Republic of the Marshall Islands. Over this period, KPOL has incurred many episodes of calibration and antenna pointing angle uncertainty. To address these issues, the TSVO has applied the Relative Calibration Adjustment (RCA) technique to eight years of KPOL radar data to produce Ground Validation (GV) Version 7 products. This application has significantly improved stability in KPOL reflectivity distributions needed for Probability Matching Method (PMM) rain rate estimation and for comparisons to the TRMM Precipitation Radar (PR). In years with significant calibration and angle corrections, the statistical improvement in PMM distributions is dramatic. The intent of this paper is to show improved stability in corrected KPOL reflectivity distributions by using the PR as a stable reference. Inter-month fluctuations in mean reflectivity differences between the PR and corrected KPOL are on the order of 1-2 dB, and inter-year mean reflectivity differences fluctuate by approximately 1 dB. This represents a marked improvement in stability with confidence comparable to the established calibration and uncertainty boundaries of the PR. The practical application of the RCA method has salvaged eight years of radar data that would have otherwise been unusable, and has made possible a high-quality database of tropical ocean-based reflectivity measurements and precipitation estimates for the research community.
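
    The essence of the RCA technique is that reflectivity statistics over stable ground clutter should not change with time, so any shift relative to a baseline period is attributed to calibration drift and removed as a dB offset. A much-simplified sketch (the operational RCA uses a clutter-area CDF statistic rather than a plain mean, and the numbers here are invented):

```python
def rca_offset(daily_clutter_dbz, baseline_dbz):
    """Relative Calibration Adjustment, simplified: the change in a
    ground-clutter reflectivity statistic (here a plain mean) relative
    to a baseline period is interpreted as calibration drift and
    returned as the dB correction to apply."""
    daily_mean = sum(daily_clutter_dbz) / len(daily_clutter_dbz)
    return baseline_dbz - daily_mean

# Hypothetical: baseline clutter mean 42.0 dBZ; today's clutter reads
# ~1.5 dB hot, so every measurement is adjusted down by 1.5 dB.
offset = rca_offset([43.2, 43.6, 43.7, 43.5], 42.0)
corrected = [z + offset for z in [25.0, 31.4]]
```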

  6. Self-calibration method of the inner lever-arm parameters for a tri-axis RINS

    NASA Astrophysics Data System (ADS)

    Song, Tianxiao; Li, Kui; Sui, Jie; Liu, Zengjun; Liu, Juncheng

    2017-11-01

    A rotational inertial navigation system (RINS) could improve navigation performance by modulating the inertial sensor errors with rotatable gimbals. When an inertial measurement unit (IMU) rotates, the deviations between the accelerometer-sensitive points and the IMU center will lead to an inner lever-arm effect. In this paper, a self-calibration method of the inner lever-arm parameters for a tri-axis RINS is proposed. A novel rotation scheme with variable angular rate rotation is designed to motivate the velocity errors caused by the inner lever-arm effect. By extending all inner lever-arm parameters as filter states, a Kalman filter with velocity errors as measurement is established to achieve the calibration. The accuracy and feasibility of the proposed method are illustrated by both simulations and experiments. The final results indicate that the inner lever-arm effect is significantly restrained after compensation by the calibration results.
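
    The inner lever-arm effect itself is the extra specific force sensed by an accelerometer displaced by r from the rotation centre, w x (w x r) + alpha x r; once r is calibrated, that term can be subtracted. A minimal sketch with hypothetical rotation values:

```python
def cross(a, b):
    """3-D vector cross product."""
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def lever_arm_acceleration(omega, alpha, r):
    """Extra specific force at an accelerometer displaced by r from the
    rotation centre: centripetal w x (w x r) plus tangential alpha x r."""
    cent = cross(omega, cross(omega, r))
    tang = cross(alpha, r)
    return [c + t for c, t in zip(cent, tang)]

def compensate(measured, omega, alpha, r_est):
    """Subtract the inner lever-arm effect using the calibrated r_est."""
    extra = lever_arm_acceleration(omega, alpha, r_est)
    return [m - e for m, e in zip(measured, extra)]

# Hypothetical gimbal rotation: 0.5 rad/s about z with a small angular
# acceleration, accelerometer offset 2 cm along x.
omega, alpha, r = [0.0, 0.0, 0.5], [0.0, 0.0, 0.1], [0.02, 0.0, 0.0]
true_sf = [0.0, 0.0, 9.8]
measured = [t + e for t, e in zip(true_sf,
                                  lever_arm_acceleration(omega, alpha, r))]
recovered = compensate(measured, omega, alpha, r)
```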

  7. Nonlinear bias analysis and correction of microwave temperature sounder observations for FY-3C meteorological satellite

    NASA Astrophysics Data System (ADS)

    Hu, Taiyang; Lv, Rongchuan; Jin, Xu; Li, Hao; Chen, Wenxin

    2018-01-01

    The nonlinear bias analysis and correction of receiving channels in the Chinese FY-3C meteorological satellite Microwave Temperature Sounder (MWTS) is a key technology of data assimilation for satellite radiance data. The thermal-vacuum chamber calibration data acquired from the MWTS can be analyzed to evaluate the instrument performance, including radiometric temperature sensitivity, channel nonlinearity, and calibration accuracy. In particular, the nonlinearity parameters due to imperfect square-law detectors are calculated from calibration data and further used to correct the nonlinear bias contributions of the microwave receiving channels. Based upon the operational principles and thermal-vacuum chamber calibration procedures of the MWTS, this paper focuses on the nonlinear bias analysis and correction methods for improving the calibration accuracy of this instrument onboard the FY-3C meteorological satellite, from the perspective of theoretical and experimental studies. Furthermore, a series of original results are presented to demonstrate the feasibility and significance of the methods.
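
    An imperfect square-law detector makes the counts-to-radiance transfer slightly quadratic rather than linear, so a third calibration point (beyond the usual two-point hot/cold calibration) pins down the nonlinearity term. A sketch with invented calibration points:

```python
def fit_quadratic_radiometer(counts, radiances):
    """Solve R = a0 + a1*C + a2*C^2 exactly through three calibration
    points (cold target, warm target, and one variable-target point);
    a2 captures the imperfect-square-law-detector nonlinearity.
    Uses the Lagrange form of the interpolating quadratic."""
    (c0, c1, c2), (r0, r1, r2) = counts, radiances
    d0 = r0 / ((c0 - c1) * (c0 - c2))
    d1 = r1 / ((c1 - c0) * (c1 - c2))
    d2 = r2 / ((c2 - c0) * (c2 - c1))
    a2 = d0 + d1 + d2
    a1 = -(d0 * (c1 + c2) + d1 * (c0 + c2) + d2 * (c0 + c1))
    a0 = d0 * c1 * c2 + d1 * c0 * c2 + d2 * c0 * c1
    return a0, a1, a2

# Hypothetical thermal-vacuum calibration points (counts -> radiance).
a0, a1, a2 = fit_quadratic_radiometer((1000.0, 5000.0, 9000.0),
                                      (10.0, 55.0, 95.0))
radiance = lambda c: a0 + a1 * c + a2 * c * c
```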

  8. Highly parameterized model calibration with cloud computing: an example of regional flow model calibration in northeast Alberta, Canada

    NASA Astrophysics Data System (ADS)

    Hayley, Kevin; Schumacher, J.; MacMillan, G. J.; Boutin, L. C.

    2014-05-01

    Expanding groundwater datasets collected by automated sensors, and improved groundwater databases, have caused a rapid increase in the calibration data available for groundwater modeling projects. Improved methods of subsurface characterization have increased the need for model complexity to represent geological and hydrogeological interpretations. The larger calibration datasets and the need for meaningful predictive uncertainty analysis have both increased the degree of parameterization necessary during model calibration. Due to these competing demands, modern groundwater modeling efforts require a massive degree of parallelization in order to remain computationally tractable. A methodology for the calibration of highly parameterized, computationally expensive models using the Amazon EC2 cloud computing service is presented. The calibration of a regional-scale model of groundwater flow in Alberta, Canada, is provided as an example. The model covers a 30,865-km2 domain and includes 28 hydrostratigraphic units. Aquifer properties were calibrated to more than 1,500 static hydraulic head measurements and 10 years of measurements during industrial groundwater use. Three regionally extensive aquifers were parameterized (with spatially variable hydraulic conductivity fields), as was the areal recharge boundary condition, leading to 450 adjustable parameters in total. The PEST-based model calibration was parallelized on up to 250 computing nodes located on Amazon's EC2 servers.

  9. Radiometric Calibration of the Earth Observing System's Imaging Sensors

    NASA Technical Reports Server (NTRS)

    Slater, Philip N. (Principal Investigator)

    1997-01-01

    The work on the grant was mainly directed towards developing new, accurate, redundant methods for the in-flight, absolute radiometric calibration of satellite multispectral imaging systems and refining the accuracy of methods already in use. Initially the work was in preparation for the calibration of MODIS and HIRIS (before the development of that sensor was canceled), with the realization that it would be applicable to most imaging multi- or hyper-spectral sensors provided their spatial or spectral resolutions were not too coarse. The work on the grant involved three different ground-based, in-flight calibration methods: reflectance-based, radiance-based, and the diffuse-to-global irradiance ratio method used with the reflectance-based method. This continuing research had the dual advantage of: (1) developing several independent methods to create the redundancy that is essential for the identification, and hopefully the elimination, of systematic errors; and (2) refining the measurement techniques and algorithms that can be used not only for improving calibration accuracy but also for the reverse process of retrieving ground reflectances from calibrated remote-sensing data. The grant also provided the support necessary for us to embark on other projects, such as the ratioing radiometer approach to on-board calibration (further developed by SBRS as the 'solar diffuser stability monitor' and incorporated into the most important on-board calibration system for MODIS). Another example of work spun off from the grant funding was a study of solar diffuser materials. Journal citations, titles, and abstracts of publications authored by faculty, staff, and students are also attached.

  10. An Improved Interferometric Calibration Method Based on Independent Parameter Decomposition

    NASA Astrophysics Data System (ADS)

    Fan, J.; Zuo, X.; Li, T.; Chen, Q.; Geng, X.

    2018-04-01

    Interferometric SAR is sensitive to earth surface undulation. The accuracy of the interferometric parameters plays a significant role in producing a precise digital elevation model (DEM). Interferometric calibration aims to obtain a high-precision global DEM by calculating the interferometric parameters using ground control points (GCPs). However, interferometric parameters are always calculated jointly, making them difficult to decompose precisely. In this paper, we propose an interferometric calibration method based on independent parameter decomposition (IPD). Firstly, the parameters related to the interferometric SAR measurement are determined based on the three-dimensional reconstruction model. Secondly, the sensitivity of the interferometric parameters is quantitatively analyzed after the geometric parameters are completely decomposed. Finally, each interferometric parameter is calculated based on IPD and an interferometric calibration model is established. We take Weinan of Shanxi province as an example and choose 4 TerraDEM-X image pairs to carry out an interferometric calibration experiment. The results show that the elevation accuracy of all SAR images is better than 2.54 m after interferometric calibration. Furthermore, the proposed method can produce DEM products with an accuracy better than 2.43 m in the flat area and 6.97 m in the mountainous area, demonstrating the correctness and effectiveness of the proposed IPD-based interferometric calibration method. The results provide a technical basis for topographic mapping at 1:50,000 and even larger scales in flat and mountainous areas.

  11. Surrogate Based Uni/Multi-Objective Optimization and Distribution Estimation Methods

    NASA Astrophysics Data System (ADS)

    Gong, W.; Duan, Q.; Huo, X.

    2017-12-01

    Parameter calibration has been demonstrated as an effective way to improve the performance of dynamic models, such as hydrological models, land surface models, and weather and climate models. Traditional optimization algorithms usually require a huge number of model evaluations, making dynamic model calibration very difficult, or even computationally prohibitive. With the help of a series of recently developed adaptive surrogate-modelling based optimization methods (the uni-objective optimization method ASMO, the multi-objective optimization method MO-ASMO, and the probability distribution estimation method ASMO-PODE), the number of model evaluations can be significantly reduced to several hundred, making it possible to calibrate very expensive dynamic models, such as regional high-resolution land surface models, weather forecast models such as WRF, and intermediate complexity earth system models such as LOVECLIM. This presentation provides a brief introduction to the common framework of the adaptive surrogate-based optimization algorithms ASMO, MO-ASMO and ASMO-PODE, a case study of Common Land Model (CoLM) calibration in the Heihe river basin in Northwest China, and an outlook on potential applications of surrogate-based optimization methods.
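
    The common idea behind these adaptive surrogate methods can be shown in one dimension: sample the expensive model a few times, fit a cheap surrogate, move to the surrogate's optimum, evaluate the real model there, and refit. The sketch below uses a quadratic surrogate through the three best points; it illustrates the adaptive loop only, not the published ASMO algorithm:

```python
def expensive_model(x):
    """Stand-in for a costly dynamic-model run (one scalar parameter)."""
    return (x - 1.7) ** 2 + 0.5

def adaptive_surrogate_minimize(f, lo, hi, n_init=3, budget=12):
    """Adaptive surrogate loop: fit a quadratic surrogate through the
    three best samples, jump to its minimum (successive parabolic
    interpolation), evaluate the real model there, and repeat."""
    xs = [lo + (hi - lo) * i / (n_init - 1) for i in range(n_init)]
    ys = [f(x) for x in xs]
    for _ in range(budget - n_init):
        (f1, x1), (f2, x2), (f3, x3) = sorted(zip(ys, xs))[:3]
        num = (x1 - x2) ** 2 * (f1 - f3) - (x1 - x3) ** 2 * (f1 - f2)
        den = (x1 - x2) * (f1 - f3) - (x1 - x3) * (f1 - f2)
        if abs(den) < 1e-12:
            break  # surrogate degenerate; stop refining
        x_new = min(max(x1 - 0.5 * num / den, lo), hi)
        xs.append(x_new)
        ys.append(f(x_new))
    i = min(range(len(ys)), key=ys.__getitem__)
    return xs[i], ys[i]

x_best, f_best = adaptive_surrogate_minimize(expensive_model, 0.0, 4.0)
```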

  12. An efficient surrogate-based simulation-optimization method for calibrating a regional MODFLOW model

    NASA Astrophysics Data System (ADS)

    Chen, Mingjie; Izady, Azizallah; Abdalla, Osman A.

    2017-01-01

    The simulation-optimization method entails a large number of model simulations, which is computationally intensive or even prohibitive if each model simulation is extremely time-consuming. Statistical models have been examined as surrogates of the high-fidelity physical model during the simulation-optimization process to tackle this problem. Among them, Multivariate Adaptive Regression Splines (MARS), a non-parametric adaptive regression method, is superior in overcoming problems of high dimensionality and discontinuities in the data. Furthermore, the stability and accuracy of the MARS model can be improved by bootstrap aggregating, namely bagging. In this paper, the Bagging MARS (BMARS) method is integrated into a surrogate-based simulation-optimization framework to calibrate a three-dimensional MODFLOW model, which is developed to simulate groundwater flow in an arid hardrock-alluvium region in northwestern Oman. The physical MODFLOW model is surrogated by the statistical model developed using the BMARS algorithm. The surrogate model, which is fitted and validated using a training dataset generated by the physical model, can approximate solutions rapidly. An efficient Sobol' method is employed to calculate global sensitivities of head outputs to input parameters, which are used to analyze their importance for the model outputs spatiotemporally. Only sensitive parameters are included in the calibration process to further improve computational efficiency. The normalized root mean square error (NRMSE) between measured and simulated heads at observation wells is used as the objective function to be minimized during optimization. The reasonable history match between the simulated and observed heads demonstrates the feasibility of this highly efficient calibration framework.
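
    The objective function is straightforward to state precisely. One common definition of NRMSE, assuming normalization by the observed range (the abstract does not spell out its normalizer), is:

```python
import math

def nrmse(observed, simulated):
    """Normalized RMSE calibration objective: RMSE between observed and
    simulated heads, divided by the range of the observations."""
    n = len(observed)
    rmse = math.sqrt(sum((o - s) ** 2 for o, s in zip(observed, simulated)) / n)
    return rmse / (max(observed) - min(observed))

# Hypothetical heads (m) at five observation wells.
obs = [102.0, 98.5, 95.2, 101.1, 99.8]
sim = [101.5, 99.0, 95.0, 100.6, 100.1]
score = nrmse(obs, sim)
```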

  13. Improved pressure measurement system for calibration of the NASA LeRC 10x10 supersonic wind tunnel

    NASA Technical Reports Server (NTRS)

    Blumenthal, Philip Z.; Helland, Stephen M.

    1994-01-01

    This paper discusses a method used to provide a significant improvement in the accuracy of the Electronically Scanned Pressure (ESP) Measurement System by means of a fully automatic floating pressure generating system for the ESP calibration and reference pressures. This system was used to obtain test section Mach number and flow angularity measurements over the full envelope of test conditions for the 10 x 10 Supersonic Wind Tunnel. The uncertainty analysis and actual test data demonstrated that, for most test conditions, this method could reduce errors to about one-third to one-half that obtained with the standard system.

  14. New calibration method for I-scan sensors to enable the precise measurement of pressures delivered by 'pressure garments'.

    PubMed

    Macintyre, Lisa

    2011-11-01

    Accurate measurement of the pressure delivered by medical compression products is highly desirable both in monitoring treatment and in developing new pressure inducing garments or products. There are several complications in measuring pressure at the garment/body interface and at present no ideal pressure measurement tool exists for this purpose. This paper summarises a thorough evaluation of the accuracy and reproducibility of measurements taken following both of Tekscan Inc.'s recommended calibration procedures for I-scan sensors; and presents an improved method for calibrating and using I-scan pressure sensors. The proposed calibration method enables accurate (±2.1 mmHg) measurement of pressures delivered by pressure garments to body parts with a circumference ≥30 cm. This method is too cumbersome for routine clinical use but is very useful, accurate and reproducible for product development or clinical evaluation purposes. Copyright © 2011 Elsevier Ltd and ISBI. All rights reserved.
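
    Operationally, a calibration of this kind reduces to building a monotone curve from paired (raw sensor output, known applied pressure) points and interpolating new readings through it. A sketch with invented calibration pairs:

```python
from bisect import bisect_right

def make_calibration(raw_readings, applied_pressures):
    """Build a piecewise-linear calibration curve from paired
    (raw sensor output, known applied pressure) points, as produced by
    loading the sensor against a reference. raw_readings must be sorted."""
    def to_pressure(raw):
        if raw <= raw_readings[0]:
            return applied_pressures[0]
        if raw >= raw_readings[-1]:
            return applied_pressures[-1]
        i = bisect_right(raw_readings, raw) - 1
        frac = (raw - raw_readings[i]) / (raw_readings[i + 1] - raw_readings[i])
        return (applied_pressures[i]
                + frac * (applied_pressures[i + 1] - applied_pressures[i]))
    return to_pressure

# Hypothetical calibration points (raw sensor units -> mmHg).
cal = make_calibration([50.0, 120.0, 210.0, 320.0], [5.0, 12.0, 22.0, 35.0])
p = cal(165.0)
```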

  15. An investigation of hydraulic conductivity estimation in a ground-water flow study of Northern Long Valley, New Jersey

    USGS Publications Warehouse

    Hill, Mary C.

    1985-01-01

    The purpose of this study was to develop a methodology for investigating the aquifer characteristics and water-supply potential of an aquifer system. In particular, the geohydrology of northern Long Valley, New Jersey, was investigated. Geohydrologic data were collected and analyzed to characterize the site. Analysis was accomplished by interpreting the available data and by using a numerical simulation of the water-table aquifer. Special attention was given to the estimation of hydraulic conductivity values and hydraulic conductivity structure, which together define the hydraulic conductivity of the modeled aquifer. Hydraulic conductivity and all other aspects of the system were first estimated using the trial-and-error method of calibration. The estimation of hydraulic conductivity was then improved using a least squares method to estimate hydraulic conductivity values and by improvements in the parameter structure. These efforts improved the calibration of the model far more than a preceding period of similar effort using the trial-and-error method of calibration. In addition, the proposed method provides statistical information on the reliability of estimated hydraulic conductivity values, calculated heads, and calculated flows. The methodology developed and applied in this work proved to be of substantial value in the evaluation of the aquifer considered.

  16. Magnetic nanoparticle thermometry independent of Brownian relaxation

    NASA Astrophysics Data System (ADS)

    Zhong, Jing; Schilling, Meinhard; Ludwig, Frank

    2018-01-01

    An improved method of magnetic nanoparticle (MNP) thermometry is proposed. The phase lag φ of the fundamental f_0 harmonic is measured to eliminate the influence of Brownian relaxation on the ratio of the 3f_0 to f_0 harmonic amplitudes by applying a phenomenological model, thus allowing measurements in high-frequency ac magnetic fields. The model is verified by simulations of the Fokker-Planck equation. An MNP spectrometer is calibrated for the measurements of the phase lag φ and the amplitudes of the 3f_0 and f_0 harmonics. Calibration curves of the harmonic ratio and tan φ are measured by varying the frequency (from 10 Hz to 1840 Hz) of ac magnetic fields with different amplitudes (from 3.60 mT to 4.00 mT) at a known temperature. A phenomenological model is employed to fit the calibration curves. Afterwards, the improved method iteratively compensates the measured harmonic ratio with tan φ and calculates temperature by applying the static Langevin function. Experimental results on SHP-25 MNPs show that the proposed method reduces the maximum systematic error to 2 K, with a relative accuracy of about 0.63%. This demonstrates the feasibility of the proposed method for MNP thermometry with SHP-25 MNPs even if the MNP signal is affected by Brownian relaxation.
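
    The final step, temperature from the harmonic ratio via the static Langevin function, can be sketched numerically: compute the ratio A(3f_0)/A(f_0) predicted by the Langevin model at a trial temperature and invert the monotone ratio(T) curve by bisection. The particle moment and field amplitude below are order-of-magnitude assumptions, not the SHP-25 values:

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def harmonic_ratio(temp, m_moment, b_amp, n=2048):
    """Static-Langevin harmonic ratio A(3f0)/A(f0) for M(t) = L(xi*sin(wt)),
    xi = m*B/(kB*T), computed by numerical Fourier projection."""
    def langevin(x):
        return 1.0 / math.tanh(x) - 1.0 / x if abs(x) > 1e-6 else x / 3.0
    a1 = a3 = 0.0
    for k in range(n):
        th = 2.0 * math.pi * (k + 0.5) / n
        mag = langevin(m_moment * b_amp * math.sin(th) / (KB * temp))
        a1 += mag * math.sin(th)
        a3 += mag * math.sin(3.0 * th)
    return abs(a3) / abs(a1)

def temperature_from_ratio(ratio, m_moment, b_amp, lo=250.0, hi=370.0):
    """Invert the monotone ratio(T) curve by bisection: a hotter particle
    ensemble is less saturated, so the ratio decreases with temperature."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if harmonic_ratio(mid, m_moment, b_amp) > ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical magnetite-like particle (moment ~1e-18 A m^2) in a 3.8 mT field.
m, B = 1.0e-18, 3.8e-3
r = harmonic_ratio(310.0, m, B)
t_est = temperature_from_ratio(r, m, B)
```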

  17. Inter-printer color calibration using constrained printer gamut

    NASA Astrophysics Data System (ADS)

    Zeng, Huanzhao; Humet, Jacint

    2005-01-01

    Due to drop size variation of the print heads in inkjet printers, consistent color reproduction becomes a challenge for high-quality color printing. To improve color consistency, we developed a method and system to characterize a pair of printers using a colorimeter or a color scanner. Unlike previously known approaches, which simply try to match the colors of one printer to the other without considering gamut differences, we first constructed an overlapped gamut in which colors can be produced by both printers, and then characterized both printers using a pair of 3-D or 4-D lookup tables (LUTs) to produce the same colors, limited to the overlapped gamut. Each LUT converts nominal device color values into engine-dependent device color values limited to the overlapped gamut. Compared to traditional approaches, the color calibration accuracy is significantly improved. The method extends simply to the calibration of more than two engines. In a color imaging system that includes a scanner and more than one print engine, this method improves color consistency very effectively without increasing hardware costs. A few examples of applying this method are: 1) one-pass bi-directional inkjet printing; 2) a printer with two or more sets of pens for printing; and 3) a system embedded with a pair of printers (the number of printers can easily be increased).
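    A minimal sketch of the LUT machinery involved: populate a 3-D LUT on a regular grid from a hypothetical engine characterization that clips output to a shared gamut, then evaluate the LUT by trilinear interpolation (the transfer curves and gamut limits below are stand-ins, not from the paper):

    ```python
    import numpy as np

    N = 17  # LUT nodes per axis, a common LUT size

    def engine_response(rgb):
        # Stand-in for one engine's characterization: a mild per-channel
        # gamma plus clipping to an assumed overlapped gamut [0.05, 0.95]
        out = rgb ** np.array([1.1, 0.9, 1.0])
        return np.clip(out, 0.05, 0.95)

    # Populate the LUT on a regular grid of nominal RGB values
    grid = np.linspace(0.0, 1.0, N)
    r, g, b = np.meshgrid(grid, grid, grid, indexing="ij")
    nodes = np.stack([r, g, b], axis=-1)   # (N, N, N, 3)
    lut = engine_response(nodes)           # (N, N, N, 3)

    def apply_lut(rgb):
        # Trilinear interpolation of the LUT at an arbitrary RGB triple
        pos = np.clip(np.asarray(rgb, dtype=float), 0.0, 1.0) * (N - 1)
        i0 = np.minimum(pos.astype(int), N - 2)
        f = pos - i0
        out = np.zeros(3)
        for dr in (0, 1):
            for dg in (0, 1):
                for db in (0, 1):
                    w = ((f[0] if dr else 1 - f[0]) *
                         (f[1] if dg else 1 - f[1]) *
                         (f[2] if db else 1 - f[2]))
                    out += w * lut[i0[0] + dr, i0[1] + dg, i0[2] + db]
        return out

    mapped = apply_lut([0.5, 0.5, 0.5])
    ```

    In the paper's setup, each engine gets its own LUT built against the common overlapped gamut, so both engines reproduce the same in-gamut colors.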

  18. Fluorescence calibration method for single-particle aerosol fluorescence instruments

    NASA Astrophysics Data System (ADS)

    Shipley Robinson, Ellis; Gao, Ru-Shan; Schwarz, Joshua P.; Fahey, David W.; Perring, Anne E.

    2017-05-01

    Real-time, single-particle fluorescence instruments used to detect atmospheric bioaerosol particles are increasingly common, yet no standard fluorescence calibration method exists for this technique. This gap limits the utility of these instruments as quantitative tools and complicates comparisons between different measurement campaigns. To address this need, we have developed a method to produce size-selected particles with a known mass of fluorophore, which we use to calibrate the fluorescence detection of a Wideband Integrated Bioaerosol Sensor (WIBS-4A). We use mixed tryptophan-ammonium sulfate particles to calibrate one detector (FL1; excitation = 280 nm, emission = 310-400 nm) and pure quinine particles to calibrate the other (FL2; excitation = 280 nm, emission = 420-650 nm). The relationship between fluorescence and mass for the mixed tryptophan-ammonium sulfate particles is linear, while that for the pure quinine particles is nonlinear, likely indicating that not all of the quinine mass contributes to the observed fluorescence. Nonetheless, both materials produce a repeatable response between observed fluorescence and particle mass. This procedure allows users to set the detector gains to achieve a known absolute response, calculate the limits of detection for a given instrument, improve the repeatability of the instrumental setup, and facilitate intercomparisons between different instruments. We recommend calibration of single-particle fluorescence instruments using these methods.

  19. Analysis of characteristics of Si in blast furnace pig iron and calibration methods in the detection by laser-induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Mei, Yaguang; Cheng, Yuxin; Cheng, Shusen; Hao, Zhongqi; Guo, Lianbo; Li, Xiangyou; Zeng, Xiaoyan

    2017-10-01

    During the iron-making process in a blast furnace, the Si content in liquid pig iron is usually used to evaluate the quality of the liquid iron and the thermal state of the blast furnace. No effective method had been available for rapidly detecting the Si concentration of liquid iron. Laser-induced breakdown spectroscopy (LIBS) is an atomic emission spectrometry technology based on laser ablation. Its obvious advantage is that it enables rapid, in-situ, online analysis of element concentrations in open air without sample pretreatment. The characteristics of Si in liquid iron were analyzed from the standpoint of thermodynamic theory and metallurgical technology. The relationships between Si and C, Mn, S, P and other alloying elements were revealed based on thermodynamic calculations. Subsequently, LIBS was applied to the rapid detection of Si in pig iron in this work. During the LIBS detection process, several groups of standard pig iron samples were employed to calibrate the Si content in pig iron. Calibration methods including linear, quadratic and cubic internal standard calibration, multivariate linear calibration and partial least squares (PLS) were compared with each other. The comparison revealed that PLS improved by normalization was the best calibration method for Si detection by LIBS.
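    The winning strategy, total-intensity normalization followed by PLS regression, can be sketched on synthetic spectra. The Gaussian line shapes, concentrations, and noise levels below are hypothetical, and the PLS1/NIPALS implementation is a generic textbook one, not the authors' code:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic LIBS-like data: 50 spectral channels, Si and Fe
    # emission profiles modeled as Gaussians (hypothetical shapes)
    chan = np.arange(50.0)
    s_si = np.exp(-0.5 * ((chan - 15.0) / 2.0) ** 2)
    s_fe = np.exp(-0.5 * ((chan - 35.0) / 4.0) ** 2)

    c_si = np.linspace(0.2, 1.2, 12)          # Si content, wt %
    c_fe = 100.0 - c_si                        # matrix balance, wt %
    scale = rng.uniform(0.5, 1.5, size=12)     # pulse-to-pulse fluctuation

    raw = scale[:, None] * (np.outer(c_si, s_si) + np.outer(c_fe, s_fe))
    X = raw / raw.sum(axis=1, keepdims=True)   # total-intensity normalization
    X += rng.normal(0.0, 1e-6, X.shape)        # detector noise
    y = c_si

    def pls1(X, y, ncomp):
        # PLS1 regression via the NIPALS deflation scheme
        xm, ym = X.mean(axis=0), y.mean()
        Xc, yc = X - xm, y - ym
        W, P, Q = [], [], []
        for _ in range(ncomp):
            w = Xc.T @ yc
            w /= np.linalg.norm(w)
            t = Xc @ w
            tt = t @ t
            p = Xc.T @ t / tt
            q = yc @ t / tt
            Xc = Xc - np.outer(t, p)
            yc = yc - q * t
            W.append(w); P.append(p); Q.append(q)
        W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
        B = W @ np.linalg.solve(P.T @ W, Q)
        return xm, ym, B

    xm, ym, B = pls1(X, y, ncomp=2)
    y_pred = ym + (X - xm) @ B
    ```

    The normalization removes the multiplicative shot-to-shot scale before the regression sees the data, which is the "improved by normalization" part of the comparison.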

  20. SU-F-T-274: Modified Dose Calibration Methods for IMRT QA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, W; Westlund, S

    2016-06-15

    Purpose: To investigate IMRT QA uncertainties caused by dose calibration and modify widely used dose calibration procedures to improve IMRT QA accuracy and passing rate. Methods: IMRT QA dose measurement is calibrated using a calibration factor (CF), the ratio between the measured value and the expected value for reference fields delivered on a phantom. Two IMRT QA phantoms were used for this study: a 30×30×30 cm³ solid water cube phantom (Cube) and the PTW Octavius phantom. CFs were obtained by delivering 100 MUs to the phantoms with reference fields ranging from 3×3 cm² to 20×20 cm². For Cube, CFs were obtained using the following beam arrangements: 2 AP fields with the chamber at dmax, 2 AP fields with the chamber at isocenter, a 4-beam box with the chamber at isocenter, and 8 equally spaced fields with the chamber at isocenter. The same plans were delivered on Octavius, and CFs were derived for the dose at the isocenter using the above beam arrangements. The Octavius plans were evaluated with PTW-VeriSoft (gamma criteria of 3%/3 mm). Results: Four head and neck IMRT plans were included in this study. For point dose measurement with Cube, the CFs with 4-Field gave the best agreement between measurement and calculation, within 4% for large-field plans. All measurement results agreed within 2% for a small-field plan. Among the calibration field sizes, 5×5 to 15×15 cm² were more accurate than the others. For Octavius, 4-Field calibration increased the passing rate by up to 10% compared to AP calibration. The passing rate also increased by up to 4% as the calibration field size increased from 3×3 to 20×20 cm². Conclusion: IMRT QA results are correlated with the calibration method used. Dose calibration using a 4-beam box with field sizes from 5×5 to 20×20 cm² can improve IMRT QA accuracy and passing rate.

  1. Calibration and compensation method of three-axis geomagnetic sensor based on pre-processing total least square iteration

    NASA Astrophysics Data System (ADS)

    Zhou, Y.; Zhang, X.; Xiao, W.

    2018-04-01

    As the geomagnetic sensor is susceptible to interference, a pre-processing total least squares iteration method is proposed for calibration and compensation. First, the error model of the geomagnetic sensor is analyzed and a correction model is proposed; the characteristics of the model are then analyzed and converted into nine parameters. The geomagnetic data are processed by the Hilbert-Huang transform (HHT) to improve the signal-to-noise ratio, and the nine parameters are calculated using a combination of the Newton iteration method and least squares estimation. The sifter algorithm is used to filter the initial value of the iteration to ensure that the initial error is as small as possible. The experimental results show that this method needs no additional equipment or devices, can continuously update the calibration parameters, and compensates the geomagnetic sensor error better than the two-step estimation method.
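    The nine-parameter correction model (a symmetric scale/misalignment matrix plus an offset vector) is commonly estimated by fitting an ellipsoid to raw readings collected while the sensor is rotated through many orientations. The sketch below uses a hypothetical error matrix and offset and plain linear least squares rather than the paper's total least squares iteration:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    H = 50.0  # local geomagnetic field magnitude (uT), assumed known

    # Hypothetical sensor errors: soft-iron/scale matrix A and offset b
    A = np.array([[1.10, 0.02, 0.01],
                  [0.02, 0.95, 0.03],
                  [0.01, 0.03, 1.05]])
    b = np.array([5.0, -3.0, 2.0])

    # Raw readings m = A h + b for many true field directions h, |h| = H
    h = rng.normal(size=(200, 3))
    h = H * h / np.linalg.norm(h, axis=1, keepdims=True)
    m = h @ A.T + b

    # Nine-parameter ellipsoid fit: solve D @ theta = 1 in least squares
    x, y, z = m.T
    D = np.column_stack([x*x, y*y, z*z, 2*x*y, 2*x*z, 2*y*z, x, y, z])
    theta, *_ = np.linalg.lstsq(D, np.ones(len(m)), rcond=None)
    Q = np.array([[theta[0], theta[3], theta[4]],
                  [theta[3], theta[1], theta[5]],
                  [theta[4], theta[5], theta[2]]])
    u = theta[6:9]
    c = -0.5 * np.linalg.solve(Q, u)          # estimated offset
    rho = 1.0 + c @ Q @ c
    w, V = np.linalg.eigh(Q / rho)
    M = V @ np.diag(np.sqrt(w)) @ V.T         # whitening (inverse soft-iron)

    h_cal = H * (m - c) @ M.T                 # compensated field vectors
    mags = np.linalg.norm(h_cal, axis=1)      # should all equal H
    ```

    After compensation every reading has the same magnitude H, which is exactly the constraint the calibration exploits.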

  2. Classical least squares multivariate spectral analysis

    DOEpatents

    Haaland, David M.

    2002-01-01

    An improved classical least squares (CLS) multivariate spectral analysis method adds spectral shapes describing non-calibrated components and system effects (other than baseline corrections) present in the analyzed mixture to the prediction phase of the method. These improvements decrease or eliminate many of the restrictions of CLS-type methods and greatly extend their capabilities, accuracy, and precision. One new application of prediction-augmented CLS (PACLS) is the ability to accurately predict unknown sample concentrations when new, unmodeled spectral components are present in the unknown samples. Other applications of PACLS include the incorporation of spectrometer drift into the quantitative multivariate model and the maintenance of a calibration on a drifting spectrometer. Finally, the ability of PACLS to transfer a multivariate model between spectrometers is demonstrated.
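    The core idea, augmenting the prediction-phase matrix with an extra spectral shape, can be illustrated with synthetic Gaussian spectra (shapes and concentrations are hypothetical, for illustration only):

    ```python
    import numpy as np

    # Synthetic spectra on 100 channels
    chan = np.arange(100.0)
    gauss = lambda mu, sig: np.exp(-0.5 * ((chan - mu) / sig) ** 2)
    k_a, k_b = gauss(30, 5), gauss(60, 5)  # calibrated pure-component spectra
    k_drift = gauss(80, 10)                # unmodeled interferent shape

    # Calibration phase produced pure spectra for analytes A and B
    K = np.vstack([k_a, k_b])              # (2, nchan)

    # Unknown sample contains A and B plus the unmodeled interferent
    c_true = np.array([0.7, 0.3])
    x = c_true @ K + 0.5 * k_drift

    # Plain CLS prediction: biased, because the interferent is unmodeled
    c_cls = np.linalg.lstsq(K.T, x, rcond=None)[0]

    # PACLS: add the interferent shape to the prediction-phase matrix only
    K_aug = np.vstack([K, k_drift])
    c_pacls = np.linalg.lstsq(K_aug.T, x, rcond=None)[0][:2]
    ```

    The augmented fit recovers the analyte concentrations exactly because the interferent's variance is absorbed by the added shape instead of leaking into the analyte estimates.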

  3. Demonstration of emulator-based Bayesian calibration of safety analysis codes: Theory and formulation

    DOE PAGES

    Yurko, Joseph P.; Buongiorno, Jacopo; Youngblood, Robert

    2015-05-28

    System codes for simulation of safety performance of nuclear plants may contain parameters whose values are not known very accurately. New information from tests or operating experience is incorporated into safety codes by a process known as calibration, which reduces uncertainty in the output of the code and thereby improves its support for decision-making. The work reported here implements several improvements on classic calibration techniques afforded by modern analysis techniques. The key innovation has come from development of code surrogate model (or code emulator) construction and prediction algorithms. Use of a fast emulator makes the calibration processes used here with Markov Chain Monte Carlo (MCMC) sampling feasible. This study uses Gaussian Process (GP) based emulators, which have been used previously to emulate computer codes in the nuclear field. The present work describes the formulation of an emulator that incorporates GPs into a factor analysis-type or pattern recognition-type model. This “function factorization” Gaussian Process (FFGP) model allows overcoming limitations present in standard GP emulators, thereby improving both accuracy and speed of the emulator-based calibration process. Calibration of a friction-factor example using a Method of Manufactured Solution is performed to illustrate key properties of the FFGP based process.
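    The emulator-plus-MCMC workflow can be illustrated with a deliberately cheap stand-in surrogate, here a Blasius-type friction-factor formula rather than the paper's FFGP emulator, and a plain Metropolis sampler (all numbers hypothetical):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Stand-in "emulator": a closed-form surrogate of a friction-factor
    # code, f = C / Re**0.25 (hypothetical, for illustration only)
    def emulator(C, Re):
        return C / Re ** 0.25

    Re = np.array([1e4, 3e4, 1e5, 3e5])
    C_true = 0.316
    sigma = 0.0002                     # assumed observation noise
    y_obs = emulator(C_true, Re) + rng.normal(0.0, sigma, Re.size)

    def log_post(C):
        # Flat prior on [0.1, 0.6]; Gaussian likelihood
        if not (0.1 < C < 0.6):
            return -np.inf
        r = y_obs - emulator(C, Re)
        return -0.5 * np.sum((r / sigma) ** 2)

    # Metropolis sampling: feasible only because the emulator is fast
    C, lp = 0.4, log_post(0.4)
    samples = []
    for _ in range(20000):
        Cp = C + rng.normal(0.0, 0.005)
        lpp = log_post(Cp)
        if np.log(rng.uniform()) < lpp - lp:
            C, lp = Cp, lpp
        samples.append(C)
    post = np.array(samples[5000:])    # discard burn-in
    C_hat = post.mean()
    ```

    In the paper the expensive system code is replaced by the trained FFGP emulator inside `log_post`; the sampling loop is otherwise the same shape.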

  4. Improvement in QEPAS system utilizing a second harmonic based wavelength calibration technique

    NASA Astrophysics Data System (ADS)

    Zhang, Qinduan; Chang, Jun; Wang, Fupeng; Wang, Zongliang; Xie, Yulei; Gong, Weihua

    2018-05-01

    A simple laser wavelength calibration technique based on the second harmonic signal is demonstrated in this paper to improve the performance of a quartz-enhanced photoacoustic spectroscopy (QEPAS) gas sensing system, e.g., its signal-to-noise ratio (SNR), detection limit and long-term stability. A constant current corresponding to the gas absorption line, combined with an f/2-frequency sinusoidal signal, is used to drive the laser (constant driving mode), and a software-based real-time wavelength calibration technique is developed to eliminate the wavelength drift due to ambient fluctuations. Compared to conventional wavelength modulation spectroscopy (WMS), this method allows a lower filtering bandwidth and an averaging algorithm to be applied to the QEPAS system, improving the SNR and detection limit. In addition, the real-time wavelength calibration technique guarantees that the laser output is modulated steadily at the gas absorption line. Water vapor was chosen as the target gas to evaluate the performance of the new system against the constant driving mode and a conventional WMS system. The water vapor sensor was made insensitive to incoherent external acoustic noise by the numerical averaging technique. As a result, the SNR of the wavelength-calibration-based system is 12.87 times that of the conventional WMS system. The new system achieved a better linear response (R² = 0.9995) over the concentration range from 300 to 2000 ppmv and a minimum detection limit (MDL) of 630 ppbv.

  5. Improved Detection System Description and New Method for Accurate Calibration of Micro-Channel Plate Based Instruments and Its Use in the Fast Plasma Investigation on NASA's Magnetospheric MultiScale Mission

    NASA Technical Reports Server (NTRS)

    Gliese, U.; Avanov, L. A.; Barrie, A. C.; Kujawski, J. T.; Mariano, A. J.; Tucker, C. J.; Chornay, D. J.; Cao, N. T.; Gershman, D. J.; Dorelli, J. C.; hide

    2015-01-01

    The Fast Plasma Investigation (FPI) on NASA's Magnetospheric MultiScale (MMS) mission employs 16 Dual Electron Spectrometers (DESs) and 16 Dual Ion Spectrometers (DISs) with 4 of each type on each of 4 spacecraft to enable fast (30 ms for electrons; 150 ms for ions) and spatially differentiated measurements of the full 3D particle velocity distributions. This approach presents a new and challenging aspect to the calibration and operation of these instruments on the ground and in flight. The response uniformity, the reliability of their calibration and the approach to handling any temporal evolution of these calibrated characteristics all assume enhanced importance in this application, where we attempt to understand the meaning of particle distributions within the ion and electron diffusion regions of magnetically reconnecting plasmas. Traditionally, the micro-channel plate (MCP) based detection systems for electrostatic particle spectrometers have been calibrated using the plateau curve technique. In this, a fixed detection threshold is set. The detection system count rate is then measured as a function of MCP voltage to determine the MCP voltage that ensures the count rate has reached a constant value independent of further variation in the MCP voltage. This is achieved when most of the MCP pulse height distribution (PHD) is located at higher values (larger pulses) than the detection system discrimination threshold. This method is adequate in single-channel detection systems and in multi-channel detection systems with very low crosstalk between channels. However, in dense multi-channel systems, it can be inadequate. Furthermore, it fails to fully describe the behavior of the detection system and individually characterize each of its fundamental parameters. To improve this situation, we have developed a detailed phenomenological description of the detection system, its behavior and its signal, crosstalk and noise sources.
Based on this, we have devised a new detection system calibration method that enables accurate and repeatable measurement and calibration of MCP gain, MCP efficiency, signal loss due to variation in gain and efficiency, crosstalk from effects both above and below the MCP, noise margin, and stability margin in one single measurement. More precise calibration is highly desirable as the instruments will produce higher quality raw data that will require less post-acquisition data correction using results from in-flight pitch angle distribution measurements and ground calibration measurements. The detection system description and the fundamental concepts of this new calibration method, named threshold scan, will be presented. It will be shown how to derive all the individual detection system parameters and how to choose the optimum detection system operating point. This new method has been successfully applied to achieve a highly accurate calibration of the DESs and DISs of the MMS mission. The practical application of the method will be presented together with the achieved calibration results and their significance. Finally, it will be shown that, with further detailed modeling, this method can be extended for use in flight to achieve and maintain a highly accurate detection system calibration across a large number of instruments during the mission.

  6. A critical comparison of systematic calibration protocols for activated sludge models: a SWOT analysis.

    PubMed

    Sin, Gürkan; Van Hulle, Stijn W H; De Pauw, Dirk J W; van Griensven, Ann; Vanrolleghem, Peter A

    2005-07-01

    Modelling activated sludge systems has gained increasing momentum since the introduction of activated sludge models (ASMs) in 1987. Application of dynamic models to full-scale systems essentially requires calibration of the chosen ASM to the case under study. Numerous full-scale model applications have been performed so far, mostly based on ad hoc approaches and expert knowledge. Further, each modelling study has followed a different calibration approach: e.g. different influent wastewater characterization methods, different kinetic parameter estimation methods, different selection of parameters to be calibrated, different priorities within the calibration steps, etc. In short, there was no standard approach to performing the calibration study, which makes it difficult, if not impossible, to (1) compare different calibrations of ASMs with each other and (2) perform internal quality checks for each calibration study. To address these concerns, systematic calibration protocols have recently been proposed to bring guidance to the modelling of activated sludge systems and in particular to the calibration of full-scale models. In this contribution four existing calibration approaches (BIOMATH, HSG, STOWA and WERF) will be critically discussed using a SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis. It will also be assessed in what way these approaches can be further developed to improve the quality of ASM calibration. In this respect, the potential of automating some steps of the calibration procedure by use of mathematical algorithms is highlighted.

  7. Laser Calibration of an Impact Disdrometer

    NASA Technical Reports Server (NTRS)

    Lane, John E.; Kasparis, Takis; Metzger, Philip T.; Jones, W. Linwood

    2014-01-01

    A practical approach to developing an operational low-cost disdrometer hinges on implementing an effective in situ adaptive calibration strategy. This calibration strategy lowers the cost of the device and provides a method to guarantee continued automatic calibration. In previous work, a collocated tipping bucket rain gauge was utilized to provide a calibration signal to the disdrometer's digital signal processing software. Rainfall rate is proportional to the 11/3 moment of the drop size distribution (DSD); a 7/2 moment can also be assumed, depending on the choice of terminal velocity relationship. In that case, the disdrometer calibration was characterized and weighted to the 11/3 moment of the DSD. Optical extinction by rainfall is proportional to the 2nd moment of the DSD. Using visible laser light as a means to focus and generate an auxiliary calibration signal, the adaptive calibration processing is significantly improved.
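    The DSD moments involved are straightforward to compute. The sketch below uses a Marshall-Palmer-type exponential DSD with hypothetical parameters and checks the numerical 2nd moment against the closed-form gamma-function result:

    ```python
    import numpy as np
    from math import gamma

    # Marshall-Palmer-type exponential DSD, N(D) = N0 * exp(-lam * D)
    # (N0 and lam are hypothetical; D in mm)
    N0, lam = 8000.0, 2.0

    D = np.linspace(0.01, 10.0, 20000)
    N = N0 * np.exp(-lam * D)

    def moment(n):
        # n-th moment of the DSD by trapezoidal integration
        y = N * D ** n
        return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(D)))

    m2 = moment(2.0)            # proportional to optical extinction
    m11_3 = moment(11.0 / 3.0)  # proportional to rainfall rate

    # Analytic check: integral of N0 * D^n * e^(-lam D) over D >= 0
    # equals N0 * Gamma(n + 1) / lam^(n + 1)
    m2_exact = N0 * gamma(3.0) / lam ** 3
    ```

    A laser-extinction channel constrains the 2nd moment while the rain gauge constrains the 11/3 moment, which is why combining the two signals tightens the adaptive calibration.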

  8. Improved cross-calibration of Thomson scattering and electron cyclotron emission with ECH on DIII-D.

    PubMed

    Brookman, M W; Austin, M E; McLean, A G; Carlstrom, T N; Hyatt, A W; Lohr, J

    2016-11-01

    Thomson scattering produces n_e profiles from measurement of scattered laser beam intensity. Rayleigh scattering provides a first calibration of the relation n_e ∝ I_TS, which depends on many factors (e.g., laser alignment and power, optics, and measurement systems). On DIII-D, the n_e calibration is adjusted against an absolute n_e from the density-driven cutoff of the 48-channel 2nd-harmonic X-mode electron cyclotron emission system. This method has been used to calibrate the Thomson n_e from the edge to near the core (r/a > 0.15). Application of core electron cyclotron heating improves the quality of the cutoff and the depth of its penetration into the core, and also changes the underlying MHD activity, minimizing crashes which confound calibration. Less fueling is needed, as "ECH pump-out" generates a plasma ready to take up gas. On removal of gyrotron power, the cutoff penetrates into the core as channels fall successively and smoothly into cutoff.

  9. Recent Surface Reflectance Measurement Campaigns with Emphasis on Best Practices, SI Traceability and Uncertainty Estimation

    NASA Technical Reports Server (NTRS)

    Helder, Dennis; Thome, Kurtis John; Aaron, Dave; Leigh, Larry; Czapla-Myers, Jeff; Leisso, Nathan; Biggar, Stuart; Anderson, Nik

    2012-01-01

    A significant problem facing the optical satellite calibration community is limited knowledge of the uncertainties associated with fundamental measurements, such as surface reflectance, used to derive satellite radiometric calibration estimates. In addition, it is difficult to compare the capabilities of calibration teams around the globe, which leads to differences in the estimated calibration of optical satellite sensors. This paper reports on two recent field campaigns that were designed to isolate common uncertainties within and across calibration groups, particularly with respect to ground-based surface reflectance measurements. Initial results from these efforts suggest the uncertainties can be as low as 1.5% to 2.5%. In addition, methods for improving the cross-comparison of calibration teams are suggested that can potentially reduce the differences in the calibration estimates of optical satellite sensors.

  10. Updated radiometric calibration for the Landsat-5 thematic mapper reflective bands

    USGS Publications Warehouse

    Helder, D.L.; Markham, B.L.; Thome, K.J.; Barsi, J.A.; Chander, G.; Malla, R.

    2008-01-01

    The Landsat-5 Thematic Mapper (TM) has been the workhorse of the Landsat system. Launched in 1984, it continues collecting data through the time frame of this paper. Thus, it provides an invaluable link to the past history of the land features of the Earth's surface, and it becomes imperative to provide an accurate radiometric calibration of the reflective bands to the user community. Previous calibration has been based on information obtained from prelaunch, the onboard calibrator, vicarious calibration attempts, and cross-calibration with Landsat-7. Currently, additional data sources are available to improve this calibration. Specifically, improvements in vicarious calibration methods and development of the use of pseudoinvariant sites for trending provide two additional independent calibration sources. The use of these additional estimates has resulted in a consistent calibration approach that ties together all of the available calibration data sources. Results from this analysis indicate a simple exponential or a constant model may be used for all bands throughout the lifetime of Landsat-5 TM. Where previously time constants for the exponential models were approximately one year, the updated model has significantly longer time constants in bands 1-3. In contrast, bands 4, 5, and 7 are shown to be best modeled by a constant. The models proposed in this paper indicate calibration knowledge of 5% or better early in life, decreasing to nearly 2% later in life. These models have been implemented at the U.S. Geological Survey Earth Resources Observation and Science (EROS) and are the default calibration used for all Landsat TM data now distributed through EROS. © 2008 IEEE.
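    A trend model of the kind described, an exponential decay toward a constant, can be fitted without a dedicated nonlinear solver by scanning the time constant and solving the remaining linear parameters by least squares. The gain numbers below are hypothetical, not Landsat-5 values:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Hypothetical gain trend: exponential decay toward a plateau,
    # g(t) = a + b * exp(-t / tau), with t in years since launch
    t = np.linspace(0.0, 20.0, 40)
    a_true, b_true, tau_true = 0.90, 0.10, 4.0
    g = a_true + b_true * np.exp(-t / tau_true) + rng.normal(0.0, 1e-4, t.size)

    # Scan tau on a grid; for each tau, (a, b) is a linear least squares fit
    best_params, best_sse = None, np.inf
    for tau in np.linspace(0.5, 10.0, 400):
        A = np.column_stack([np.ones_like(t), np.exp(-t / tau)])
        coef, *_ = np.linalg.lstsq(A, g, rcond=None)
        sse = float(np.sum((A @ coef - g) ** 2))
        if sse < best_sse:
            best_params, best_sse = (coef[0], coef[1], tau), sse
    a_fit, b_fit, tau_fit = best_params
    ```

    Comparing the fitted model's SSE against that of a constant-only fit is one simple way to decide, band by band, between the exponential and constant models the abstract describes.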

  11. Improving evapotranspiration processes in distributed hydrological models using Remote Sensing derived ET products.

    NASA Astrophysics Data System (ADS)

    Abitew, T. A.; van Griensven, A.; Bauwens, W.

    2015-12-01

    Evapotranspiration (ET) is the main process in hydrology (on average around 60%), yet it has not received much attention in the evaluation and calibration of hydrological models. In this study, Remote Sensing (RS) derived ET is used to improve the spatially distributed representation of ET in SWAT model applications in the upper Mara basin (Kenya) and the Blue Nile basin (Ethiopia). The RS-derived ET data are obtained from recently compiled global datasets (continuous monthly data at 1 km resolution from the MOD16NBI, SSEBop, ALEXI and CMRSET models) and from regionally applied energy balance models (for several cloud-free days). The RS-ET data are used in three ways: Method 1) to evaluate spatially distributed evapotranspiration model results; Method 2) to calibrate the evapotranspiration processes in the hydrological model; and Method 3) to bias-correct the evapotranspiration in the hydrological model during simulation after changing the SWAT code. An inter-comparison of the RS-ET products shows that at present there is a significant bias between products, but at the same time agreement on the spatial variability of ET. The ensemble mean of the different ET products appears to be the most realistic estimate and was used further in this study. The results show that: Method 1) the spatially mapped evapotranspiration of hydrological models differs clearly from RS-derived evapotranspiration (low correlations); evapotranspiration in forested areas in particular is strongly underestimated compared to other land covers. Method 2) Calibration improves the correlations between the RS and hydrological model results to some extent. Method 3) Bias corrections are efficient in producing (seasonal or annual) evapotranspiration maps from hydrological models that are very similar to the patterns obtained from RS data. Although the bias correction is very efficient, it is advisable to improve the model results by better representing the ET processes through improved plant/crop computations, improved agricultural management practices, or improved meteorological data.

  12. Measurement correction method for force sensor used in dynamic pressure calibration based on artificial neural network optimized by genetic algorithm

    NASA Astrophysics Data System (ADS)

    Gu, Tingwei; Kong, Deren; Shang, Fei; Chen, Jing

    2017-12-01

    We present an optimization algorithm to obtain low-uncertainty dynamic pressure measurements from a force-transducer-based device. In this paper, the advantages and disadvantages of the methods that are commonly used to measure the propellant powder gas pressure, the applicable scope of dynamic pressure calibration devices, and the shortcomings of the traditional comparison calibration method based on the drop-weight device are first analysed in detail. Then, a dynamic calibration method for measuring pressure using a force sensor based on a drop-weight device is introduced. This method can effectively save time when many pressure sensors are calibrated simultaneously and extend the life of expensive reference sensors. However, the force sensor is installed between the drop-weight and the hammerhead by transition pieces through bolt fastening, which causes adverse effects such as additional pretightening and inertia forces. To mitigate these effects, the influence mechanisms of the pretightening force, the inertia force and other factors on the force measurement are theoretically analysed. A measurement correction method for the force measurement is then proposed, based on an artificial neural network optimized by a genetic algorithm. The training and testing data sets are obtained from calibration tests, and the selection criteria for the key parameters of the correction model are discussed. The evaluation results for the test data show that the correction model can effectively improve the force measurement accuracy of the force sensor. Compared with the traditional high-accuracy comparison calibration method, the percentage difference of the impact-force-based measurement is less than 0.6% and the relative uncertainty of the corrected force value is 1.95%, which meets the requirements of engineering applications.

  13. A Calibration of the MeteoSwiss RAman Lidar for Meteorological Observations (RALMO) Water Vapour Mixing Ratio Measurements using a Radiosonde Trajectory Method

    NASA Astrophysics Data System (ADS)

    Hicks-Jalali, Shannon; Sica, R. J.; Haefele, Alexander; Martucci, Giovanni

    2018-04-01

    With only 50% downtime from 2007-2016, the RALMO lidar in Payerne, Switzerland, has one of the largest continuous lidar data sets available. These measurements will be used to produce an extensive lidar water vapour climatology using the Optimal Estimation Method introduced by Sica and Haefele (2016). We will compare our improved technique for external calibration using radiosonde trajectories with the standard external methods, and present the evolution of the lidar constant from 2007 to 2016.

  14. Accuracy of Subcutaneous Continuous Glucose Monitoring in Critically Ill Adults: Improved Sensor Performance with Enhanced Calibrations

    PubMed Central

    Leelarathna, Lalantha; English, Shane W.; Thabit, Hood; Caldwell, Karen; Allen, Janet M.; Kumareswaran, Kavita; Wilinska, Malgorzata E.; Nodale, Marianna; Haidar, Ahmad; Evans, Mark L.; Burnstein, Rowan

    2014-01-01

    Objective: Accurate real-time continuous glucose measurements may improve glucose control in the critical care unit. We evaluated the accuracy of the FreeStyle® Navigator® (Abbott Diabetes Care, Alameda, CA) subcutaneous continuous glucose monitoring (CGM) device in critically ill adults using two methods of calibration. Subjects and Methods: In a randomized trial, paired CGM and reference glucose (hourly arterial blood glucose [ABG]) values were collected over a 48-h period from 24 adults with critical illness (mean±SD age, 60±14 years; mean±SD body mass index, 29.6±9.3 kg/m2; mean±SD Acute Physiology and Chronic Health Evaluation score, 12±4 [range, 6–19]) and hyperglycemia. In 12 subjects, the CGM device was calibrated at variable intervals of 1–6 h using ABG. In the other 12 subjects, the sensor was calibrated according to the manufacturer's instructions (at 1, 2, 10, and 24 h) using arterial blood and the built-in point-of-care glucometer. Results: In total, 1,060 CGM–ABG pairs were analyzed over the glucose range from 4.3 to 18.8 mmol/L. With enhanced calibration at a median (interquartile range) interval of 169 (122–213) min, the absolute relative deviation was lower (7.0% [3.5, 13.0] vs. 12.8% [6.3, 21.8], P<0.001), and the percentage of points in Clarke error grid Zone A was higher (87.8% vs. 70.2%). Conclusions: Accuracy of the Navigator CGM device during critical illness was comparable to that observed in non–critical care settings. Further significant improvements in accuracy may be obtained by frequent calibrations with ABG measurements. PMID:24180327
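    The accuracy metric used above, absolute relative deviation of CGM against reference ABG, is simple to compute. The paired readings below are hypothetical, and the 20%-error test is only a proxy for Clarke Zone A membership at glucose levels in this range:

    ```python
    import numpy as np

    # Hypothetical paired CGM and reference (ABG) glucose values, mmol/L
    abg = np.array([5.2, 7.8, 10.1, 12.4, 6.3, 9.0])
    cgm = np.array([5.6, 7.2, 10.9, 11.8, 6.8, 9.4])

    ard = 100.0 * np.abs(cgm - abg) / abg   # absolute relative deviation, %
    median_ard = float(np.median(ard))

    # Share of points within 20% of reference (a proxy for Clarke Zone A
    # at these glucose levels, where Zone A means |error| <= 20%)
    zone_a_like = float(np.mean(ard <= 20.0) * 100.0)
    ```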

  15. Solid matrix transformation and tracer addition using molten ammonium bifluoride salt as a sample preparation method for laser ablation inductively coupled plasma mass spectrometry.

    PubMed

    Grate, Jay W; Gonzalez, Jhanis J; O'Hara, Matthew J; Kellogg, Cynthia M; Morrison, Samuel S; Koppenaal, David W; Chan, George C-Y; Mao, Xianglei; Zorba, Vassilia; Russo, Richard E

    2017-09-08

    Solid sampling and analysis methods, such as laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS), are challenged by matrix effects and calibration difficulties. Matrix-matched standards for external calibration are seldom available and it is difficult to distribute spikes evenly into a solid matrix as internal standards. While isotopic ratios of the same element can be measured to high precision, matrix-dependent effects in the sampling and analysis process frustrate accurate quantification and elemental ratio determinations. Here we introduce a potentially general solid matrix transformation approach entailing chemical reactions in molten ammonium bifluoride (ABF) salt that enables the introduction of spikes as tracers or internal standards. Proof of principle experiments show that the decomposition of uranium ore in sealed PFA fluoropolymer vials at 230 °C yields, after cooling, new solids suitable for direct solid sampling by LA. When spikes are included in the molten salt reaction, subsequent LA-ICP-MS sampling at several spots indicate that the spikes are evenly distributed, and that U-235 tracer dramatically improves reproducibility in U-238 analysis. Precisions improved from 17% relative standard deviation for U-238 signals to 0.1% for the ratio of sample U-238 to spiked U-235, a factor of over two orders of magnitude. These results introduce the concept of solid matrix transformation (SMT) using ABF, and provide proof of principle for a new method of incorporating internal standards into a solid for LA-ICP-MS. This new approach, SMT-LA-ICP-MS, provides opportunities to improve calibration and quantification in solids based analysis. Looking forward, tracer addition to transformed solids opens up LA-based methods to analytical methodologies such as standard addition, isotope dilution, preparation of matrix-matched solid standards, external calibration, and monitoring instrument drift against external calibration standards.
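    The headline precision gain, raw isotope signal versus the ratio to an evenly distributed spike, comes from the fact that shot-to-shot ablation yield multiplies both isotopes and therefore cancels in the ratio. A toy simulation (all counts and spreads hypothetical, not the paper's data):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Simulated LA-ICP-MS spot signals: each laser spot has a different
    # ablation yield, which multiplies both isotope signals equally
    yield_ = rng.uniform(0.7, 1.3, size=50)              # ablation variation
    u238 = yield_ * 1.0e6 * rng.normal(1.0, 0.001, 50)   # sample isotope counts
    u235 = yield_ * 2.0e5 * rng.normal(1.0, 0.001, 50)   # added tracer counts

    def rsd(v):
        # Relative standard deviation in percent
        return 100.0 * v.std() / v.mean()

    rsd_raw = rsd(u238)           # dominated by the ablation-yield spread
    rsd_ratio = rsd(u238 / u235)  # yield cancels; only counting noise remains
    ```

    The two-orders-of-magnitude improvement reported in the abstract is the real-world analogue of `rsd_raw` collapsing to `rsd_ratio`.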

  16. Steps towards Improving GNSS Systematic Errors and Biases

    NASA Astrophysics Data System (ADS)

    Herring, T.; Moore, M.

    2017-12-01

    Four general areas of analysis method improvements, three related to data analysis models and the fourth to calibration methods, have been recommended at the recent unified analysis workshop (UAW), and we discuss aspects of these areas for improvement. The gravity fields used in the GNSS orbit integrations should be updated to match modern fields to make them consistent with the fields being used by the other IAG services. The update would include the static part of the field and a time-variable component. The force models associated with radiation forces are the most uncertain, and modeling of these forces can be made more consistent with the exchange of attitude information. The International GNSS Service (IGS) will develop an attitude format and make attitude information available so that analysis centers can validate their models. The IGS has noted the appearance of the GPS draconitic period and harmonics of this period in time series of various geodetic products (e.g., positions and Earth orientation parameters). An updated short-period (diurnal and semidiurnal) model is needed, along with a method for determining the best model. The final area, not directly related to analysis models, is the recommendation that site-dependent calibration of GNSS antennas is needed, since these calibrations have a direct effect on the ITRF realization and on position offsets when antennas are changed. The effects of using antenna-specific phase center models will be investigated for those sites where these values are available without disturbing an existing antenna installation. Potential development of an in-situ antenna calibration system is strongly encouraged. In-situ calibration would be deployed at core sites where GNSS sites are tied to other geodetic systems.
With the recent expansion of the number of GPS satellites transmitting unencrypted codes on the GPS L2 frequency and the availability of software GNSS receivers, in-situ calibration between an existing installation and a movable directional antenna is now more likely to generate accurate results than earlier analog switching systems. With all of these improvements, there is the expectation of better agreement between the space geodetic methods, allowing more definitive assessment and modeling of the Earth's time-variable shape and gravity field.

  17. Calibration of sea ice dynamic parameters in an ocean-sea ice model using an ensemble Kalman filter

    NASA Astrophysics Data System (ADS)

    Massonnet, F.; Goosse, H.; Fichefet, T.; Counillon, F.

    2014-07-01

    The choice of parameter values is crucial in the course of sea ice model development, since parameters largely affect the modeled mean sea ice state. Manual tuning of parameters will soon become impractical, as sea ice models will likely include more parameters to calibrate, leading to an exponential increase in the number of possible combinations to test. Objective and automatic methods for parameter calibration are thus progressively called on to replace the traditional heuristic, "trial-and-error" recipes. Here a method for calibration of parameters based on the ensemble Kalman filter is implemented, tested and validated in the ocean-sea ice model NEMO-LIM3. Three dynamic parameters are calibrated: the ice strength parameter P*, the ocean-sea ice drag parameter Cw, and the atmosphere-sea ice drag parameter Ca. In twin, perfect-model experiments, the default parameter values are retrieved within 1 year of simulation. Using 2007-2012 real sea ice drift data, the calibration of the ice strength parameter P* and the oceanic drag parameter Cw clearly improves the Arctic sea ice drift properties. It is found that the estimation of the atmospheric drag Ca is not necessary if P* and Cw are already estimated. The large reduction in the sea ice speed bias with calibrated parameters comes with a slight overestimation of the winter sea ice areal export through Fram Strait and a slight improvement in the sea ice thickness distribution. Overall, the estimation of parameters with the ensemble Kalman filter represents an encouraging alternative to manual tuning for ocean-sea ice models.
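
    The idea behind EnKF parameter estimation can be sketched in a toy twin experiment far simpler than NEMO-LIM3: a scalar drag-like parameter is recovered from noisy observations of a linear "drift = parameter × wind" model. All names, sizes and noise levels here are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Truth and observation error for the synthetic "drift = p * wind" system
p_true, obs_err = 2.0, 0.1
ens = rng.normal(1.0, 0.5, size=100)       # prior parameter ensemble

for _ in range(20):                        # 20 assimilation cycles
    wind = rng.uniform(0.5, 1.5)
    obs = p_true * wind + rng.normal(0.0, obs_err)
    pred = ens * wind                      # each member's predicted drift
    # Scalar Kalman gain from ensemble statistics
    gain = np.cov(ens, pred)[0, 1] / (np.var(pred, ddof=1) + obs_err**2)
    # Perturbed-observation EnKF update of the parameter ensemble
    ens += gain * (obs + rng.normal(0.0, obs_err, ens.size) - pred)

print(f"estimated parameter: {ens.mean():.2f} (truth: {p_true})")
```

    As in the twin experiments above, the ensemble mean converges on the true parameter after a modest number of assimilation cycles.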

  18. Evaluating Statistical Process Control (SPC) techniques and computing the uncertainty of force calibrations

    NASA Technical Reports Server (NTRS)

    Navard, Sharon E.

    1989-01-01

    In recent years there has been a push within NASA to use statistical techniques to improve the quality of production. Two areas where statistics are used are in establishing product and process quality control of flight hardware and in evaluating the uncertainty of calibration of instruments. The Flight Systems Quality Engineering branch is responsible for developing and assuring the quality of all flight hardware; the statistical process control methods employed are reviewed and evaluated. The Measurement Standards and Calibration Laboratory performs the calibration of all instruments used on-site at JSC as well as those used by all off-site contractors. These calibrations must be performed in such a way as to be traceable to national standards maintained by the National Institute of Standards and Technology, and they must meet a four-to-one ratio of the instrument specifications to calibrating standard uncertainty. In some instances this ratio is not met, and in these cases it is desirable to compute the exact uncertainty of the calibration and determine ways of reducing it. A particular example where this problem is encountered is with a machine which does automatic calibrations of force. The process of force calibration using the United Force Machine is described in detail. The sources of error are identified and quantified when possible. Suggestions for improvement are made.
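
    The four-to-one requirement mentioned above amounts to a simple test uncertainty ratio check; a minimal sketch, with a hypothetical function name and invented numbers:

```python
def meets_tur(instrument_tolerance, standard_uncertainty, required_ratio=4.0):
    """True if the test uncertainty ratio (TUR) meets the required ratio."""
    return instrument_tolerance / standard_uncertainty >= required_ratio

# e.g. a force gauge with a ±0.5 N tolerance:
print(meets_tur(0.5, 0.1))   # prints True  (5:1 beats 4:1)
print(meets_tur(0.5, 0.2))   # prints False (2.5:1 falls short)
```

    When the ratio is not met, as described above, the exact calibration uncertainty must be computed and reduced where possible.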

  19. Analysis of regional rainfall-runoff parameters for the Lake Michigan Diversion hydrological modeling

    USGS Publications Warehouse

    Soong, David T.; Over, Thomas M.

    2015-01-01

    Recalibration of the HSPF parameters to the updated inputs and land covers was completed on two representative watershed models selected from the nine by using a manual method (HSPEXP) and an automatic method (PEST). The objective of the recalibration was to develop a regional parameter set that improves the accuracy of runoff volume prediction for the nine study watersheds. Knowledge of flow and watershed characteristics plays a vital role in validating the calibration in both manual and automatic methods. The best-performing parameter set was determined by the automatic calibration method on a two-watershed model. Applying this newly determined parameter set to the nine watersheds for runoff volume simulation resulted in "very good" ratings in five watersheds, an improvement over the "very good" ratings achieved for three watersheds by the North Branch parameter set.

  20. Calibration of Reduced Dynamic Models of Power Systems using Phasor Measurement Unit (PMU) Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ning; Lu, Shuai; Singh, Ruchi

    2011-09-23

    Accuracy of a power system dynamic model is essential to the secure and efficient operation of the system. Lower confidence on model accuracy usually leads to conservative operation and lowers asset usage. To improve model accuracy, identification algorithms have been developed to calibrate parameters of individual components using measurement data from staged tests. To facilitate online dynamic studies for large power system interconnections, this paper proposes a model reduction and calibration approach using phasor measurement unit (PMU) data. First, a model reduction method is used to reduce the number of dynamic components. Then, a calibration algorithm is developed to estimate parameters of the reduced model. This approach will help to maintain an accurate dynamic model suitable for online dynamic studies. The performance of the proposed method is verified through simulation studies.

  1. Auto-calibrated scanning-angle prism-type total internal reflection microscopy for nanometer-precision axial position determination and optional variable-illumination-depth pseudo total internal reflection microscopy

    DOEpatents

    Fang, Ning; Sun, Wei

    2015-04-21

    A method, apparatus, and system for improved VA-TIRFM microscopy. The method comprises automatically controlled calibration of one or more laser sources by precise control of the presentation of each laser relative to a sample, in small incremental changes of incident angle over a range of critical TIR angles. The calibration then allows precise scanning of the sample at any of those calibrated angles for higher and more accurate resolution, and better super-resolution reconstruction of the sample from the scans. Optionally, the system can be controlled for incident angles of the excitation laser at sub-critical angles for pseudo TIRFM. Optionally, both above-critical-angle and sub-critical-angle measurements can be accomplished with the same system.

  2. Poster — Thur Eve — 14: Improving Tissue Segmentation for Monte Carlo Dose Calculation using DECT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di Salvio, A.; Bedwani, S.; Carrier, J-F.

    2014-08-15

    Purpose: To improve Monte Carlo dose calculation accuracy through a new tissue segmentation technique with dual energy CT (DECT). Methods: Electron density (ED) and effective atomic number (EAN) can be extracted directly from DECT data with a stoichiometric calibration method. Images are acquired with Monte Carlo CT projections using the user code egs-cbct and reconstructed using an FDK backprojection algorithm. Calibration is performed using projections of a numerical RMI phantom. A weighted parameter algorithm then uses both EAN and ED to assign materials to voxels from DECT simulated images. This new method is compared to a standard tissue characterization from single energy CT (SECT) data using a segmented calibrated Hounsfield unit (HU) to ED curve. Both methods are compared to the reference numerical head phantom. Monte Carlo simulations on uniform phantoms of different tissues using dosxyz-nrc show discrepancies in depth-dose distributions. Results: Both SECT and DECT segmentation methods show similar performance assigning soft tissues. Performance is however improved with DECT in regions with higher density, such as bones, where it assigns materials correctly 8% more often than segmentation with SECT, considering the same set of tissues and simulated clinical CT images, i.e. including noise and reconstruction artifacts. Furthermore, Monte Carlo results indicate that kV photon beam depth-dose distributions can double between two tissues of density higher than muscle. Conclusions: A direct acquisition of ED and the added information of EAN with DECT data improves tissue segmentation and increases the accuracy of Monte Carlo dose calculation in kV photon beams.

  3. Effects of calibration methods on quantitative material decomposition in photon-counting spectral computed tomography using a maximum a posteriori estimator.

    PubMed

    Curtis, Tyler E; Roeder, Ryan K

    2017-10-01

    Advances in photon-counting detectors have enabled quantitative material decomposition using multi-energy or spectral computed tomography (CT). Supervised methods for material decomposition utilize an estimated attenuation for each material of interest at each photon energy level, which must be calibrated based upon calculated or measured values for known compositions. Measurements using a calibration phantom can advantageously account for system-specific noise, but the effect of calibration methods on the material basis matrix and subsequent quantitative material decomposition has not been experimentally investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on the accuracy of quantitative material decomposition in the image domain. Gadolinium was chosen as a model contrast agent in imaging phantoms, which also contained bone tissue and water as negative controls. The maximum gadolinium concentration (30, 60, and 90 mM) and total number of concentrations (2, 4, and 7) were independently varied to systematically investigate effects of the material basis matrix and scaling factor calibration on the quantitative (root mean squared error, RMSE) and spatial (sensitivity and specificity) accuracy of material decomposition. Images of calibration and sample phantoms were acquired using a commercially available photon-counting spectral micro-CT system with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material decomposition of gadolinium, calcium, and water was performed for each calibration method using a maximum a posteriori estimator. 
Both the quantitative and spatial accuracy of material decomposition were most improved by using an increased maximum gadolinium concentration (range) in the basis matrix calibration; the effects of using a greater number of concentrations were relatively small in magnitude by comparison. The material basis matrix calibration was more sensitive to changes in the calibration methods than the scaling factor calibration. The material basis matrix calibration significantly influenced both the quantitative and spatial accuracy of material decomposition, while the scaling factor calibration influenced quantitative but not spatial accuracy. Importantly, the median RMSE of material decomposition was as low as ~1.5 mM (~0.24 mg/mL gadolinium), which was similar in magnitude to that measured by optical spectroscopy on the same samples. The accuracy of quantitative material decomposition in photon-counting spectral CT was significantly influenced by calibration methods which must therefore be carefully considered for the intended diagnostic imaging application. © 2017 American Association of Physicists in Medicine.
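
    As a rough sketch of the underlying decomposition step, basis-material concentrations can be recovered per voxel from multi-bin attenuation by linear least squares. The basis matrix values below are invented for illustration, and the study itself used a maximum a posteriori estimator rather than plain least squares.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 5-bin x 3-material basis matrix (gadolinium, calcium, water).
# In practice each column is fit from the calibration phantom by linear
# regression; the third bin sits above the Gd k-edge.
A = np.array([[8.0, 2.4, 1.00],
              [6.5, 1.9, 0.90],
              [9.5, 1.5, 0.85],
              [7.0, 1.2, 0.82],
              [5.0, 1.0, 0.80]])

c_true = np.array([0.03, 0.10, 1.00])        # per-voxel "concentrations"
mu = A @ c_true + rng.normal(0, 0.01, 5)     # noisy multi-bin measurement

# Least-squares material decomposition of the single voxel
c_hat, *_ = np.linalg.lstsq(A, mu, rcond=None)
print("recovered concentrations:", np.round(c_hat, 3))
```

    Errors in the calibrated basis matrix propagate directly into the recovered concentrations, which is why the calibration range matters as reported above.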

  4. Temporal Analysis and Automatic Calibration of the Velodyne HDL-32E LiDAR System

    NASA Astrophysics Data System (ADS)

    Chan, T. O.; Lichti, D. D.; Belton, D.

    2013-10-01

    At the end of the first quarter of 2012, more than 600 Velodyne LiDAR systems had been sold worldwide for various robotic and high-accuracy survey applications. The ultra-compact Velodyne HDL-32E LiDAR has become a predominant sensor for many applications that require lower sensor size/weight and cost. For high accuracy applications, cost-effective calibration methods with minimal manual intervention are always desired by users. However, the calibrations are complicated by the Velodyne LiDAR's narrow vertical field of view and the very highly time-variant nature of its measurements. In the paper, the temporal stability of the HDL-32E is first analysed as the motivation for developing a new, automated calibration method. This is followed by a detailed description of the calibration method, which is driven by a novel segmentation method for extracting vertical cylindrical features from the Velodyne point clouds. The proposed segmentation method utilizes the Velodyne point cloud's slice-like nature and first decomposes the point clouds into 2D layers. The layers are then treated as 2D images and processed with the Generalized Hough Transform, which extracts the points distributed in circular patterns from the point cloud layers. Subsequently, the vertical cylindrical features can be readily extracted from the whole point clouds based on the previously extracted points. The points are passed to the calibration, which estimates the cylinder parameters and the LiDAR's additional parameters simultaneously by constraining the segmented points to fit the cylindrical geometric model in such a way that the weighted sum of the adjustment residuals is minimized. The proposed calibration is highly automatic, allowing end users to obtain the time-variant additional parameters instantly and frequently whenever vertical cylindrical features are present in the scene.
The methods were verified with two different real datasets, and the results suggest that up to 78.43% accuracy improvement for the HDL-32E can be achieved using the proposed calibration method.
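
    A stripped-down illustration of the layer-wise circular-feature extraction (not the authors' implementation): Hough-style voting over a grid of candidate centres recovers the centre of a circular point pattern in one 2-D layer, with the cylinder radius assumed known. All geometry and noise values are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic 2-D layer: points on a circle (one slice of a vertical cylinder)
radius = 0.5
centre_true = np.array([2.0, 3.0])
ang = rng.uniform(0, 2 * np.pi, 200)
pts = centre_true + radius * np.c_[np.cos(ang), np.sin(ang)]
pts += rng.normal(0, 0.01, pts.shape)          # ranging noise

# Vote: every point supports candidate centres lying one radius away
xs = np.linspace(1.5, 2.5, 101)
ys = np.linspace(2.5, 3.5, 101)
acc = np.zeros((xs.size, ys.size))
for p in pts:
    d = np.hypot(xs[:, None] - p[0], ys[None, :] - p[1])
    acc += np.abs(d - radius) < 0.02           # tolerance band around radius
i, j = np.unravel_index(acc.argmax(), acc.shape)
print("estimated centre:", xs[i], ys[j])
```

    Stacking such per-layer detections across slices is what allows the whole vertical cylinder to be segmented and then fed to the calibration adjustment.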

  5. Multichannel-Hadamard calibration of high-order adaptive optics systems.

    PubMed

    Guo, Youming; Rao, Changhui; Bao, Hua; Zhang, Ang; Zhang, Xuejun; Wei, Kai

    2014-06-02

    We present a novel technique for calibrating the interaction matrix of high-order adaptive optics systems, called the multichannel-Hadamard method. In this method, the deformable mirror actuators are first divided into a series of channels according to their coupling relationship, and the voltage-oriented Hadamard method is then applied to these channels. Taking the 595-element adaptive optics system as an example, the procedure is described in detail. The optimal channel dividing is discussed and tested by numerical simulation. The proposed method is also compared with the voltage-oriented Hadamard-only method and the multichannel-only method by experiments. Results show that the multichannel-Hadamard method significantly improves interaction matrix measurement.
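
    The benefit of Hadamard-patterned pokes over single-actuator pokes can be sketched on a toy system (the channel grouping of the paper is omitted; all sizes and noise levels are made up): pushing all actuators at once with ±1 patterns and demultiplexing yields the same interaction matrix as poking one actuator at a time, but averages measurement noise over all patterns.

```python
import numpy as np

def sylvester_hadamard(n):
    """Sylvester-construction Hadamard matrix; n must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

rng = np.random.default_rng(4)
n_act, n_meas = 16, 40
D_true = rng.normal(size=(n_meas, n_act))     # unknown interaction matrix

H = sylvester_hadamard(n_act)                  # columns are poke patterns
noise = rng.normal(0, 0.05, size=(n_meas, n_act))
S = D_true @ H + noise                         # measured slopes per pattern
D_est = S @ H.T / n_act                        # demultiplex: H @ H.T = n*I

print("max element error:", np.abs(D_est - D_true).max())
```

    The demultiplexed estimate's noise is reduced by a factor of sqrt(n) relative to a one-actuator-at-a-time poke sequence at the same per-measurement noise.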

  6. Calibration, reconstruction, and rendering of cylindrical millimeter-wave image data

    NASA Astrophysics Data System (ADS)

    Sheen, David M.; Hall, Thomas E.

    2011-05-01

    Cylindrical millimeter-wave imaging systems and technology have been under development at the Pacific Northwest National Laboratory (PNNL) for several years. This technology has been commercialized, and systems are currently being deployed widely across the United States and internationally. These systems are effective at screening for concealed items of all types; however, new sensor designs, image reconstruction techniques, and image rendering algorithms could potentially improve performance. At PNNL, a number of specific techniques have been developed recently to improve cylindrical imaging methods including wideband techniques, combining data from full 360-degree scans, polarimetric imaging techniques, calibration methods, and 3-D data visualization techniques. Many of these techniques exploit the three-dimensionality of the cylindrical imaging technique by optimizing the depth resolution of the system and using this information to enhance detection. Other techniques, such as polarimetric methods, exploit scattering physics of the millimeter-wave interaction with concealed targets on the body. In this paper, calibration, reconstruction, and three-dimensional rendering techniques will be described that optimize the depth information in these images and the display of the images to the operator.

  7. A Bayesian modelling method for post-processing daily sub-seasonal to seasonal rainfall forecasts from global climate models and evaluation for 12 Australian catchments

    NASA Astrophysics Data System (ADS)

    Schepen, Andrew; Zhao, Tongtiegang; Wang, Quan J.; Robertson, David E.

    2018-03-01

    Rainfall forecasts are an integral part of hydrological forecasting systems at sub-seasonal to seasonal timescales. In seasonal forecasting, global climate models (GCMs) are now the go-to source for rainfall forecasts. For hydrological applications however, GCM forecasts are often biased and unreliable in uncertainty spread, and calibration is therefore required before use. There are sophisticated statistical techniques for calibrating monthly and seasonal aggregations of the forecasts. However, calibration of seasonal forecasts at the daily time step typically uses very simple statistical methods or climate analogue methods. These methods generally lack the sophistication to achieve unbiased, reliable and coherent forecasts of daily amounts and seasonal accumulated totals. In this study, we propose and evaluate a Rainfall Post-Processing method for Seasonal forecasts (RPP-S), which is based on the Bayesian joint probability modelling approach for calibrating daily forecasts and the Schaake Shuffle for connecting the daily ensemble members of different lead times. We apply the method to post-process ACCESS-S forecasts for 12 perennial and ephemeral catchments across Australia and for 12 initialisation dates. RPP-S significantly reduces bias in raw forecasts and improves both skill and reliability. RPP-S forecasts are also more skilful and reliable than forecasts derived from ACCESS-S forecasts that have been post-processed using quantile mapping, especially for monthly and seasonal accumulations. Several opportunities to improve the robustness and skill of RPP-S are identified. The new RPP-S post-processed forecasts will be used in ensemble sub-seasonal to seasonal streamflow applications.
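
    The Schaake Shuffle step can be sketched for a single site with synthetic data: the calibrated ensemble at each lead time is reordered to follow the rank structure of a set of historical trajectories, restoring realistic day-to-day correlation across lead times while leaving the calibrated marginal distributions untouched.

```python
import numpy as np

def schaake_shuffle(ensemble, historical):
    """Reorder `ensemble` (members x lead_times) so that, at each lead
    time, member ranks match those of the `historical` trajectories."""
    shuffled = np.empty_like(ensemble)
    for t in range(ensemble.shape[1]):
        ranks = np.argsort(np.argsort(historical[:, t]))
        shuffled[:, t] = np.sort(ensemble[:, t])[ranks]
    return shuffled

rng = np.random.default_rng(5)
hist = rng.gamma(2.0, 2.0, size=(10, 5))   # historical daily rainfall, mm
ens = rng.gamma(2.0, 3.0, size=(10, 5))    # calibrated daily forecasts, mm
out = schaake_shuffle(ens, hist)

# Marginal distributions at each lead time are unchanged; only the pairing
# of values across lead times (the temporal structure) is altered.
print(np.allclose(np.sort(out, axis=0), np.sort(ens, axis=0)))  # prints True
```

    This is what lets daily ensemble members accumulate into coherent monthly and seasonal totals after calibration.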

  8. A simultaneous calibration approach for installation and attitude errors of an INS/GPS/LDS target tracker.

    PubMed

    Cheng, Jianhua; Chen, Daidai; Sun, Xiangyu; Wang, Tongda

    2015-02-04

    Obtaining the absolute position of a target is one of the basic problems in non-cooperative target tracking. In this paper, we present a simultaneous calibration method for a target positioning approach based on an inertial navigation system (INS)/Global Positioning System (GPS)/laser distance scanner (LDS) integrated system. The INS/GPS integrated system provides the attitude and position of the observer, and the LDS offers the distance between the observer and the target. The two most significant errors are jointly considered and analyzed: (1) the attitude measurement error of the INS/GPS; and (2) the installation error between the INS/GPS and LDS subsystems. Consequently, an INS/GPS/LDS-based target positioning approach considering these two errors is proposed. To improve the performance of this approach, a novel calibration method is designed to simultaneously estimate and compensate for these two main errors. Finally, simulations are conducted to assess the performance of the proposed target positioning approach and the designed calibration method.

  9. Calibration of an Outdoor Distributed Camera Network with a 3D Point Cloud

    PubMed Central

    Ortega, Agustín; Silva, Manuel; Teniente, Ernesto H.; Ferreira, Ricardo; Bernardino, Alexandre; Gaspar, José; Andrade-Cetto, Juan

    2014-01-01

    Outdoor camera networks are becoming ubiquitous in critical urban areas of the largest cities around the world. Although current applications of camera networks are mostly tailored to video surveillance, recent research projects are exploiting their use to aid robotic systems in people-assisting tasks. Such systems require precise calibration of the internal and external parameters of the distributed camera network. Despite the fact that camera calibration has been an extensively studied topic, the development of practical methods for user-assisted calibration that minimize user intervention time and maximize precision still poses significant challenges. These camera systems have non-overlapping fields of view, are subject to environmental stress, and are likely to suffer frequent recalibration. In this paper, we propose the use of a 3D map covering the area to support the calibration process and develop an automated method that allows quick and precise calibration of a large camera network. We present two case studies of the proposed calibration method: one is the calibration of the Barcelona Robot Lab camera network, which also includes direct mappings (homographies) between image coordinates and world points in the ground plane (walking areas) to support person and robot detection and localization algorithms. The second case consists of improving the GPS positioning of geo-tagged images taken with a mobile device in the Facultat de Matemàtiques i Estadística (FME) patio at the Universitat Politècnica de Catalunya (UPC). PMID:25076221

  10. Calibration of an outdoor distributed camera network with a 3D point cloud.

    PubMed

    Ortega, Agustín; Silva, Manuel; Teniente, Ernesto H; Ferreira, Ricardo; Bernardino, Alexandre; Gaspar, José; Andrade-Cetto, Juan

    2014-07-29

    Outdoor camera networks are becoming ubiquitous in critical urban areas of the largest cities around the world. Although current applications of camera networks are mostly tailored to video surveillance, recent research projects are exploiting their use to aid robotic systems in people-assisting tasks. Such systems require precise calibration of the internal and external parameters of the distributed camera network. Despite the fact that camera calibration has been an extensively studied topic, the development of practical methods for user-assisted calibration that minimize user intervention time and maximize precision still poses significant challenges. These camera systems have non-overlapping fields of view, are subject to environmental stress, and are likely to suffer frequent recalibration. In this paper, we propose the use of a 3D map covering the area to support the calibration process and develop an automated method that allows quick and precise calibration of a large camera network. We present two case studies of the proposed calibration method: one is the calibration of the Barcelona Robot Lab camera network, which also includes direct mappings (homographies) between image coordinates and world points in the ground plane (walking areas) to support person and robot detection and localization algorithms. The second case consists of improving the GPS positioning of geo-tagged images taken with a mobile device in the Facultat de Matemàtiques i Estadística (FME) patio at the Universitat Politècnica de Catalunya (UPC).

  11. Poster - 53: Improving inter-linac DMLC IMRT dose precision by fine tuning of MLC leaf calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakonechny, Keith; Tran, Muoi; Sasaki, David

    Purpose: To develop a method to improve the inter-linac precision of DMLC IMRT dosimetry. Methods: The distance between opposing MLC leaf banks (“gap size”) can be finely tuned on Varian linacs. The dosimetric effect due to small deviations from the nominal gap size (“gap error”) was studied by introducing known errors for several DMLC sliding gap sizes, and for clinical plans based on the TG119 test cases. The plans were delivered on a single Varian linac and the relationship between gap error and the corresponding change in dose was measured. The plans were also delivered on eight Varian 2100 series linacs (at two institutions) in order to quantify the inter-linac variation in dose before and after fine tuning the MLC calibration. Results: The measured dose differences for each field agreed well with the predictions of LoSasso et al. Using the default MLC calibration, the variation in the physical MLC gap size was determined to be less than 0.4 mm between all linacs studied. The dose difference between the linacs with the largest and smallest physical gap was up to 5.4% (spinal cord region of the head and neck TG119 test case). This difference was reduced to 2.5% after fine tuning the MLC gap calibration. Conclusions: The inter-linac dose precision for DMLC IMRT on Varian linacs can be improved using a simple modification of the MLC calibration procedure that involves fine adjustment of the nominal gap size.
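
    The sensitivity reported above follows the first-order behaviour described by LoSasso et al.: the fractional dose change of a sliding-gap delivery scales roughly with the gap error divided by the nominal gap, so narrow-gap regions (such as the spinal cord area) are hit hardest. A hedged first-order sketch, with illustrative numbers:

```python
def dose_error_pct(gap_error_mm, nominal_gap_mm):
    """Approximate % dose change caused by a small DMLC gap error
    (first-order scaling only; ignores transmission and rounded-leaf
    corrections)."""
    return 100.0 * gap_error_mm / nominal_gap_mm

# A 0.4 mm inter-linac gap difference on a 10 mm sliding gap:
print(f"{dose_error_pct(0.4, 10.0):.1f}%")   # prints 4.0%
```

    This is consistent in magnitude with the up-to-5.4% inter-linac differences observed before fine tuning.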

  12. Metafitting: Weight optimization for least-squares fitting of PTTI data

    NASA Technical Reports Server (NTRS)

    Douglas, Rob J.; Boulanger, J.-S.

    1995-01-01

    For precise time intercomparisons between a master frequency standard and a slave time scale, we have found it useful to quantitatively compare different fitting strategies by examining the standard uncertainty in time or average frequency. It is particularly useful when designing procedures which use intermittent intercomparisons, with some parameterized fit used to interpolate or extrapolate from the calibrating intercomparisons. We use the term 'metafitting' for the choices that are made before a fitting procedure is operationally adopted. We present methods for calculating the standard uncertainty for general, weighted least-squares fits and a method for optimizing these weights for a general noise model suitable for many PTTI applications. We present the results of the metafitting of procedures for the use of a regular schedule of (hypothetical) high-accuracy frequency calibration of a maser time scale. We have identified a cumulative series of improvements that give a significant reduction of the expected standard uncertainty, compared to the simplest procedure of resetting the maser synthesizer after each calibration. The metafitting improvements presented include the optimum choice of weights for the calibration runs, optimized over a period of a week or 10 days.
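
    The uncertainty bookkeeping described above can be sketched for the simplest case: a weighted least-squares fit of offset and drift rate to intermittent calibration runs, with the standard uncertainty of an interpolated offset propagated from the parameter covariance. All numbers are illustrative.

```python
import numpy as np

t = np.array([0.0, 1.0, 2.0, 3.0])            # days of calibration runs
y = np.array([10.0, 12.1, 13.9, 16.0])        # measured offsets, ns
sigma = np.array([0.1, 0.1, 0.2, 0.1])        # per-run uncertainties, ns

X = np.c_[np.ones_like(t), t]                  # design: offset + drift rate
W = np.diag(1.0 / sigma**2)
cov = np.linalg.inv(X.T @ W @ X)               # parameter covariance matrix
beta = cov @ X.T @ W @ y                       # weighted least-squares fit

t_new = 1.5                                    # interpolation epoch
x_new = np.array([1.0, t_new])
u_interp = np.sqrt(x_new @ cov @ x_new)        # standard uncertainty at t_new
print(f"offset({t_new}) = {x_new @ beta:.2f} ± {u_interp:.3f} ns")
```

    "Metafitting" in the sense above then means choosing the weights (and schedule) that minimize such propagated uncertainties before the procedure is adopted.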

  13. Local-scale spatial modelling for interpolating climatic temperature variables to predict agricultural plant suitability

    NASA Astrophysics Data System (ADS)

    Webb, Mathew A.; Hall, Andrew; Kidd, Darren; Minansy, Budiman

    2016-05-01

    Assessment of local spatial climatic variability is important in the planning of planting locations for horticultural crops. This study investigated three regression-based calibration methods (i.e. traditional versus two optimized methods) to relate short-term 12-month data series from 170 temperature loggers and 4 weather station sites with data series from nearby long-term Australian Bureau of Meteorology climate stations. The techniques trialled to interpolate climatic temperature variables, such as frost risk, growing degree days (GDDs) and chill hours, were regression kriging (RK), regression trees (RTs) and random forests (RFs). All three calibration methods produced accurate results, with the RK-based calibration method delivering the most accurate validation measures: coefficients of determination (R²) of 0.92, 0.97 and 0.95 and root-mean-square errors of 1.30, 0.80 and 1.31 °C, for daily minimum, daily maximum and hourly temperatures, respectively. Compared with the traditional method of calibration using direct linear regression between short-term and long-term stations, the RK-based calibration method improved R² and reduced root-mean-square error (RMSE) by at least 5 % and 0.47 °C for daily minimum temperature, 1 % and 0.23 °C for daily maximum temperature and 3 % and 0.33 °C for hourly temperature. Spatial modelling indicated insignificant differences between the interpolation methods, with the RK technique tending to be the slightly better method due to the high degree of spatial autocorrelation between logger sites.
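
    The traditional baseline that the optimized methods are compared against (direct linear regression between a short-term logger and an overlapping long-term station record) can be sketched with synthetic data; the coefficients and noise levels below are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

# One year of overlapping daily minima: a long-term station and a
# co-located short-term logger that runs cooler with some scatter.
station = rng.normal(8.0, 4.0, 365)                      # station, °C
logger = 0.9 * station - 1.5 + rng.normal(0, 1.0, 365)   # logger, °C

# Fit logger ≈ slope * station + intercept, then predict the logger's
# temperature from the full long-term station series.
slope, intercept = np.polyfit(station, logger, 1)
pred = slope * station + intercept
rmse = np.sqrt(np.mean((pred - logger) ** 2))
print(f"logger ≈ {slope:.2f}·station + {intercept:.2f},  RMSE = {rmse:.2f} °C")
```

    The RK and tree-based variants reported above replace this single regression with spatially aware models, which is where the quoted RMSE reductions come from.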

  14. Calibration methods influence quantitative material decomposition in photon-counting spectral CT

    NASA Astrophysics Data System (ADS)

    Curtis, Tyler E.; Roeder, Ryan K.

    2017-03-01

    Photon-counting detectors and nanoparticle contrast agents can potentially enable molecular imaging and material decomposition in computed tomography (CT). Material decomposition has been investigated using both simulated and acquired data sets. However, the effect of calibration methods on material decomposition has not been systematically investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on quantitative material decomposition. A commercially available photon-counting spectral micro-CT (MARS Bioimaging) was used to acquire images with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material basis matrix values were determined using multiple linear regression models and material decomposition was performed using a maximum a posteriori estimator. The accuracy of quantitative material decomposition was evaluated by the root mean squared error (RMSE), specificity, sensitivity, and area under the curve (AUC). An increased maximum concentration (range) in the calibration significantly improved RMSE, specificity and AUC. The effects of an increased number of concentrations in the calibration were not statistically significant for the conditions in this study. The overall results demonstrated that the accuracy of quantitative material decomposition in spectral CT is significantly influenced by calibration methods, which must therefore be carefully considered for the intended diagnostic imaging application.

  15. Standard Reference Line Combined with One-Point Calibration-Free Laser-Induced Breakdown Spectroscopy (CF-LIBS) to Quantitatively Analyze Stainless and Heat Resistant Steel.

    PubMed

    Fu, Hongbo; Wang, Huadong; Jia, Junwei; Ni, Zhibo; Dong, Fengzhong

    2018-01-01

    Due to the influence of major elements' self-absorption, the scarcity of observable spectral lines of trace elements, and the relative efficiency correction of the experimental system, accurate quantitative analysis with calibration-free laser-induced breakdown spectroscopy (CF-LIBS) is in fact not easy. In order to overcome these difficulties, the standard reference line (SRL) method combined with one-point calibration (OPC) is used to analyze six elements in three stainless-steel and five heat-resistant steel samples. The Stark broadening and the Saha-Boltzmann plot of Fe are used to calculate the electron density and the plasma temperature, respectively. In the present work, we tested the original SRL method, the SRL with the OPC method, and the intercept with the OPC method. The final calculation results show that the latter two methods can effectively improve the overall accuracy of quantitative analysis and the detection limits of trace elements.

  16. Automated image quality assessment for chest CT scans.

    PubMed

    Reeves, Anthony P; Xie, Yiting; Liu, Shuang

    2018-02-01

    Medical image quality needs to be maintained at standards sufficient for effective clinical reading. Automated computer analytic methods may be applied to medical images for quality assessment. For chest CT scans in a lung cancer screening context, an automated quality assessment method is presented that characterizes image noise and image intensity calibration. This is achieved by image measurements in three automatically segmented homogeneous regions of the scan: external air, trachea lumen air, and descending aorta blood. Profiles of CT scanner behavior are also computed. The method has been evaluated on both phantom and real low-dose chest CT scans and results show that repeatable noise and calibration measures may be realized by automated computer algorithms. Noise and calibration profiles show relevant differences between different scanners and protocols. Automated image quality assessment may be useful for quality control for lung cancer screening and may enable performance improvements to automated computer analysis methods. © 2017 American Association of Physicists in Medicine.
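    The quality measures described above reduce to two statistics per homogeneous region: the mean CT number (calibration) and the standard deviation (noise). The image and the three region masks below are simulated stand-ins for the automatic segmentation described in the record.

```python
import numpy as np

# Simulated CT noise field; each "segmented" region gets a nominal HU level
# and a region-specific noise scale (all values hypothetical).
rng = np.random.default_rng(4)
noise_field = rng.normal(0.0, 1.0, (64, 64))

regions = {
    "external air": noise_field[:16, :16] * 12.0 - 1000.0,
    "trachea lumen air": noise_field[16:32, 16:32] * 15.0 - 990.0,
    "descending aorta blood": noise_field[32:48, 32:48] * 18.0 + 40.0,
}

# Per-region quality measures: mean (intensity calibration) and SD (noise).
measures = {name: (float(v.mean()), float(v.std())) for name, v in regions.items()}
for name, (mean_hu, sd_hu) in measures.items():
    print(f"{name}: mean = {mean_hu:7.1f} HU, noise (SD) = {sd_hu:5.1f} HU")
```

Tracking these per-region statistics across scans is what allows the scanner/protocol profiles mentioned in the record to be compared over time.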

  17. SU-E-T-749: Thorough Calibration of MOSFET Dosimeters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plenkovich, D; Thomas, J

    Purpose: To improve the accuracy of the MOSFET calibration procedure by performing the measurement several times and calculating the average value of the calibration factor for various photon and electron energies. Methods: The output of three photon and six electron beams of Varian Trilogy linear accelerator SN 5878 was calibrated. Five reinforced standard-sensitivity MOSFET dosimeters were placed in the calibration jig and connected to the Reader Module. Seven centimeters of Virtual Water was used as the backscatter material. The MOSFET dosimeters were covered with 1.5 cm thick bolus for the regular and SRS 6 MV beams, 3 cm bolus for the 15 MV beam, 1.5 cm bolus for the 6 MeV electron beam, and 2 cm bolus for the electron energies of 9, 12, 15, 18, and 22 MeV. The dosimeters were exposed to 100 MU, and the calibration factor was determined using the mobileMOSFET software. To improve the accuracy of calibration, this procedure was repeated ten times and the calibration factors were averaged. Results: As the number of calibrations increased, the variability of the calibration factors of different dosimeters decreased. After ten calibrations, the calibration factors for all five dosimeters were within 1% of one another for all energies, except 6 MV SRS photons and 6 MeV electrons, for which the variability was 2%. Conclusions: The described process results in calibration factors that are almost independent of modality or energy. Once calibrated, the dosimeters may be used for in-vivo dosimetry or for daily verification of the beam output. Measurement of the radiation dose under bolus and scatter to the eye are examples of frequent use of calibrated MOSFET dosimeters. The calibration factor determined for full build-up is used under these circumstances. To the best of our knowledge, such a thorough procedure for calibrating MOSFET dosimeters has not been reported previously. Best Medical Canada provided MOSFET dosimeters for this project.
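    The averaging idea is simple to sketch numerically. The calibration factors below are hypothetical (arbitrary units, not the study's readings); the point is that the mean over ten repeats estimates each dosimeter's factor far more tightly than a single exposure does.

```python
import numpy as np

# Hypothetical repeat calibrations: 5 dosimeters x 10 exposures of 100 MU.
# Each dosimeter has a true calibration factor plus per-reading noise.
rng = np.random.default_rng(2)
true_cf = np.array([1.000, 1.002, 0.998, 1.001, 0.999])
readings = true_cf[:, None] + rng.normal(0.0, 0.01, (5, 10))

cf_single = readings[:, 0]        # estimate from one calibration only
cf_mean = readings.mean(axis=1)   # average over ten calibrations

# Spread of the averaged factors across dosimeters (cf. "within 1%").
rel_spread = (cf_mean.max() - cf_mean.min()) / cf_mean.mean()
print(f"relative spread after averaging: {rel_spread:.3%}")
```

Averaging n repeats shrinks the per-dosimeter statistical error by roughly √n, which is why the observed spread between dosimeters tightened as calibrations accumulated.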

  18. Improved design and in-situ measurements of new beam position monitors for Indus-2

    NASA Astrophysics Data System (ADS)

    Kumar, M.; Babbar, L. K.; Holikatti, A. C.; Yadav, S.; Tyagi, Y.; Puntambekar, T. A.; Senecha, V. K.

    2018-01-01

    Beam position monitors (BPMs) are important diagnostic devices used in particle accelerators to monitor the position of the beam for various applications. An improved version of the button-electrode BPM has been designed using CST Studio Suite for the Indus-2 ring. The new BPMs are designed to replace the old BPMs, which were designed and installed more than 12 years ago. Compared with the old BPMs, the improved BPMs have higher transfer impedance, a resonance-free output signal, equal sensitivity in the horizontal and vertical planes, and a fast-decaying wakefield. The new BPMs have been calibrated using the coaxial wire method. Measurement of transfer impedance and time-domain signals has also been performed in-situ with the electron beam during Indus-2 operation. The calibration and beam-based measurement results showed close agreement with the design parameters. This paper presents the design, electromagnetic simulations, calibration results and in-situ beam-based measurements of the newly designed BPMs.

  19. Calibration method of microgrid polarimeters with image interpolation.

    PubMed

    Chen, Zhenyue; Wang, Xia; Liang, Rongguang

    2015-02-10

    Microgrid polarimeters have large advantages over conventional polarimeters because of their snapshot nature and lack of moving parts. However, they also suffer from several error sources, such as fixed pattern noise (FPN), photon response nonuniformity (PRNU), pixel cross talk, and instantaneous field-of-view (IFOV) error. A characterization method is proposed to improve the measurement accuracy in the visible waveband. We first calibrate the camera with uniform illumination so that the response of the sensor is uniform over the entire field of view. Then a spline interpolation method is implemented to minimize the IFOV error. Experimental results show the proposed method can effectively minimize the FPN and PRNU.
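    The uniform-illumination step is a classic flat-field correction: per-pixel offset (FPN) and gain (PRNU) are estimated from frames at known illumination levels and then divided out. The sketch below uses a simulated noiseless sensor, not the paper's camera, and two-point (dark/bright) estimation as a minimal stand-in for the calibration.

```python
import numpy as np

# Simulated sensor: each pixel has its own gain (PRNU) and offset (FPN).
rng = np.random.default_rng(3)
gain = rng.normal(1.0, 0.05, (8, 8))     # pixel response non-uniformity
offset = rng.normal(10.0, 2.0, (8, 8))   # fixed pattern noise

def acquire(level):
    """Simulated raw frame under uniform illumination at `level`."""
    return gain * level + offset

dark = acquire(0.0)       # uniform frame, zero illumination
bright = acquire(100.0)   # uniform frame, known bright level

gain_est = (bright - dark) / 100.0
offset_est = dark

def correct(raw):
    """Remove FPN and PRNU so the response is uniform across the array."""
    return (raw - offset_est) / gain_est

scene = gain * 42.0 + offset   # raw frame of a flat scene at level 42
flat = correct(scene)
print("corrected frame std:", flat.std())
```

After this step the residual IFOV error (neighboring microgrid pixels sampling slightly different scene points) remains, which is what the spline interpolation in the record addresses.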

  20. Impact of influent data frequency and model structure on the quality of WWTP model calibration and uncertainty.

    PubMed

    Cierkens, Katrijn; Plano, Salvatore; Benedetti, Lorenzo; Weijers, Stefan; de Jonge, Jarno; Nopens, Ingmar

    2012-01-01

    Application of activated sludge models (ASMs) to full-scale wastewater treatment plants (WWTPs) is still hampered by the problem of model calibration of these over-parameterised models. This either requires expert knowledge or global methods that explore a large parameter space. However, a better balance in structure between the submodels (ASM, hydraulic, aeration, etc.) and improved quality of influent data result in much smaller calibration efforts. In this contribution, a methodology is proposed that links data frequency and model structure to calibration quality and output uncertainty. It is composed of defining the model structure, the input data, an automated calibration, confidence interval computation and uncertainty propagation to the model output. Apart from the last step, the methodology is applied to an existing WWTP using three models differing only in the aeration submodel. A sensitivity analysis was performed on all models, allowing the ranking of the most important parameters to select in the subsequent calibration step. The aeration submodel proved very important to get good NH(4) predictions. Finally, the impact of data frequency was explored. Lowering the frequency resulted in larger deviations of parameter estimates from their default values and larger confidence intervals. Autocorrelation due to high frequency calibration data has an opposite effect on the confidence intervals. The proposed methodology opens doors to facilitate and improve calibration efforts and to design measurement campaigns.

  1. A Nonlinear Calibration Algorithm Based on Harmonic Decomposition for Two-Axis Fluxgate Sensors

    PubMed Central

    Liu, Shibin

    2018-01-01

    Nonlinearity is a prominent limitation on the calibration performance of two-axis fluxgate sensors. In this paper, a novel nonlinear calibration algorithm taking into account the nonlinearity of the errors is proposed. In order to establish the nonlinear calibration model, the combined effect of all time-invariant errors is analyzed in detail, and then a harmonic decomposition method is utilized to estimate the compensation coefficients. Meanwhile, the proposed nonlinear calibration algorithm is validated and compared with a classical calibration algorithm by experiments. The experimental results show that, after the nonlinear calibration, the maximum deviation of the magnetic field magnitude is decreased from 1302 nT to 30 nT, which is smaller than the 81 nT achieved after the classical calibration. Furthermore, for the two-axis fluxgate sensor used as a magnetic compass, the maximum heading error is corrected from 1.86° to 0.07°, approximately 11% of the 0.62° error remaining after the classical calibration. The results suggest an effective way to improve the calibration performance of two-axis fluxgate sensors. PMID:29789448
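    The harmonic-decomposition idea can be sketched as follows: heading-dependent compass error is modelled as a sum of low-order harmonics of heading, whose coefficients are estimated by linear least squares and then subtracted. The error coefficients below are hypothetical, and this minimal version omits the sensor-specific error model the paper develops.

```python
import numpy as np

# Synthetic heading error (degrees) with first- and second-harmonic content,
# measured at 72 evenly spaced headings.
theta = np.linspace(0.0, 2.0 * np.pi, 72, endpoint=False)
true_err = (0.2 + 1.5 * np.cos(theta) - 0.8 * np.sin(theta)
            + 0.4 * np.cos(2 * theta) + 0.3 * np.sin(2 * theta))

# Design matrix: constant term plus first and second harmonics of heading.
A = np.column_stack([np.ones_like(theta),
                     np.cos(theta), np.sin(theta),
                     np.cos(2 * theta), np.sin(2 * theta)])
coef, *_ = np.linalg.lstsq(A, true_err, rcond=None)

corrected = true_err - A @ coef
print("max heading error before: %.2f deg, after: %.6f deg"
      % (np.abs(true_err).max(), np.abs(corrected).max()))
```

With noise-free synthetic data the fit recovers the harmonic coefficients exactly; in practice the residual after compensation is bounded by measurement noise and unmodelled higher harmonics.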

  2. Developing new extension of GafChromic RTQA2 film to patient quality assurance field using a plan-based calibration method

    NASA Astrophysics Data System (ADS)

    Peng, Jiayuan; Zhang, Zhen; Wang, Jiazhou; Xie, Jiang; Chen, Junchao; Hu, Weigang

    2015-10-01

    GafChromic RTQA2 film is a type of radiochromic film designed for light field and radiation field alignment. The aim of this study is to extend the application of RTQA2 film to the measurement of patient-specific quality assurance (QA) fields as a 2D relative dosimeter. Pre-irradiated and post-irradiated RTQA2 films were scanned in reflection mode using a flatbed scanner. A plan-based calibration (PBC) method utilized the mapping information of the calculated dose image and film grayscale image to create a dose versus pixel value calibration model. This model was used to calibrate the film grayscale image to the film relative dose image. The dose agreement between calculated and film dose images was analyzed by gamma analysis. To evaluate the feasibility of this method, eight clinically approved RapidArc cases (one abdomen cancer and seven head-and-neck cancer patients) were tested using this method. Moreover, three MLC gap errors and two MLC transmission errors were introduced into the eight RapidArc cases to test the robustness of this method. The PBC method could overcome the film lot and post-exposure time variations of RTQA2 film to obtain a good 2D relative dose calibration result. The mean gamma passing rate of the eight patients was 97.90% ± 1.7%, which showed good dose consistency between calculated and film dose images. In the error test, the PBC method could over-calibrate the film, meaning some dose errors in the film would be falsely corrected to keep the dose in the film consistent with the dose in the calculated dose image. This would then lead to a false negative result in the gamma analysis. In these cases, the derivative of the dose calibration curve would be non-monotonic, which would expose the dose abnormality. By using the PBC method, we extended the application of the more economical RTQA2 film to patient-specific QA. The robustness of the PBC method was improved by analyzing the monotonicity of the derivative of the calibration curve.
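    The monotonicity check on the calibration-curve derivative can be sketched as below. The calibration-curve shapes and the anomaly are invented for illustration; the test simply asks whether the numerically estimated derivative of dose versus pixel value rises or falls consistently across the whole range.

```python
import numpy as np

def derivative_is_monotonic(pixel, dose):
    """Check monotonicity of the derivative of a dose-vs-pixel-value
    calibration curve; a non-monotonic derivative flags a possible
    over-calibrated film."""
    d = np.gradient(dose, pixel)
    return bool(np.all(np.diff(d) >= 0) or np.all(np.diff(d) <= 0))

pixel = np.linspace(0.3, 1.0, 50)          # hypothetical film grayscale values
normal = 300.0 * (1.0 - pixel) ** 1.5      # smooth calibration curve
bumpy = normal + 15.0 * np.exp(-((pixel - 0.6) / 0.03) ** 2)  # local anomaly

print(derivative_is_monotonic(pixel, normal))
print(derivative_is_monotonic(pixel, bumpy))
```

A curve whose derivative wiggles (the `bumpy` case) indicates that the plan-based fit has locally bent the calibration to absorb a dose error, which is exactly the false-negative mechanism the record describes.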

  3. A spectrally tunable LED sphere source enables accurate calibration of tristimulus colorimeters

    NASA Astrophysics Data System (ADS)

    Fryc, I.; Brown, S. W.; Ohno, Y.

    2006-02-01

    The Four-Color Matrix method (FCM) was developed to improve the accuracy of chromaticity measurements of various display colors. The method is valid for each type of display having similar spectra. To develop the Four-Color correction matrix, spectral measurements of primary red, green, blue, and white colors of a display are needed. Consequently, a calibration facility should be equipped with a number of different displays. This is very inconvenient and expensive. A spectrally tunable light source (STS) that can mimic different display spectral distributions would eliminate the need for maintaining a wide variety of displays and would enable a colorimeter to be calibrated for a number of different displays using the same setup. Simulations show that an STS that can create red, green, blue and white distributions that are close to the real spectral power distribution (SPD) of a display works well with the FCM for the calibration of colorimeters.

  4. Improvements to the DRASTIC ground-water vulnerability mapping method

    USGS Publications Warehouse

    Rupert, Michael G.

    1999-01-01

    Ground-water vulnerability maps are designed to show areas of greatest potential for ground-water contamination on the basis of hydrogeologic and anthropogenic (human) factors. The maps are developed by using computer mapping hardware and software called a geographic information system (GIS) to combine data layers such as land use, soils, and depth to water. Usually, ground-water vulnerability is determined by assigning point ratings to the individual data layers and then adding the point ratings together when those layers are combined into a vulnerability map. Probably the most widely used ground-water vulnerability mapping method is DRASTIC, named for the seven factors considered in the method: Depth to water, net Recharge, Aquifer media, Soil media, Topography, Impact of vadose zone media, and hydraulic Conductivity of the aquifer (Aller and others, 1985, p. iv). The DRASTIC method has been used to develop ground-water vulnerability maps in many parts of the Nation; however, the effectiveness of the method has met with mixed success (Koterba and others, 1993, p. 513; U.S. Environmental Protection Agency, 1993; Barbash and Resek, 1996; Rupert, 1997). DRASTIC maps usually are not calibrated to measured contaminant concentrations. The DRASTIC ground-water vulnerability mapping method was improved by calibrating the point rating scheme to measured nitrite plus nitrate as nitrogen (NO2+NO3–N) concentrations in ground water on the basis of statistical correlations between NO2+NO3–N concentrations and land use, soils, and depth to water (Rupert, 1997). This report describes the calibration method developed by Rupert and summarizes the improvements in results of this method over those of the uncalibrated DRASTIC method applied by Rupert and others (1991) in the eastern Snake River Plain, Idaho.
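    The point-rating combination the record describes is a weighted sum over the seven DRASTIC factors. The weights below are the standard DRASTIC weights (Aller and others, 1985); the ratings are hypothetical values for a single map cell, and calibration in Rupert's sense means adjusting such ratings against measured NO2+NO3-N concentrations.

```python
# Standard DRASTIC factor weights (D, R, A, S, T, I, C).
weights = {
    "Depth to water": 5, "net Recharge": 4, "Aquifer media": 3,
    "Soil media": 2, "Topography": 1, "Impact of vadose zone": 5,
    "hydraulic Conductivity": 3,
}

# Hypothetical point ratings for one map cell (each factor rated 1-10).
ratings = {
    "Depth to water": 7, "net Recharge": 6, "Aquifer media": 8,
    "Soil media": 5, "Topography": 10, "Impact of vadose zone": 6,
    "hydraulic Conductivity": 4,
}

# DRASTIC index: weighted sum of ratings; higher means more vulnerable.
drastic_index = sum(weights[f] * ratings[f] for f in weights)
print(drastic_index)
```

In a GIS the same sum is evaluated cell-by-cell across the rasterized data layers to produce the vulnerability map.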

  5. Optical Mass Displacement Tracking: A simplified field calibration method for the electro-mechanical seismometer.

    NASA Astrophysics Data System (ADS)

    Burk, D. R.; Mackey, K. G.; Hartse, H. E.

    2016-12-01

    We have developed a simplified field calibration method for use in seismic networks that still employ the classical electro-mechanical seismometer. Smaller networks may not always have the financial capability to purchase and operate modern, state-of-the-art equipment. Therefore these networks generally operate a modern, low-cost digitizer that is paired with an existing electro-mechanical seismometer. These systems are typically poorly calibrated. The station response is difficult to estimate because coil loading, digitizer input impedance, and amplifier gain differences vary by station and digitizer model. It is therefore necessary to calibrate the station channel as a complete system, taking into account all components from instrument to amplifier to digitizer. Routine calibrations at the smaller networks are not always consistent, because existing calibration techniques require either specialized equipment or significant technical expertise. To improve station data quality at the small network, we developed a calibration method that utilizes open-source software and a commonly available laser position sensor. Using a signal generator and a small excitation coil, we force the mass of the instrument to oscillate at various frequencies across its operating range. We then compare the channel voltage output to the laser-measured mass displacement to determine the instrument voltage sensitivity at each frequency point. Using the standard equations of forced motion, a representation of the calibration curve as a function of voltage per unit of ground velocity is calculated. A computer algorithm optimizes the curve and then translates the instrument response into Seismic Analysis Code (SAC) poles & zeros format. Results have been demonstrated to fall within a few percent of a standard laboratory calibration.
This method is an effective and affordable option for networks that employ electro-mechanical seismometers, and it is currently being deployed in regional networks throughout Russia and in Central Asia.
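    The per-frequency sensitivity computation described above can be sketched as follows. For sinusoidal mass motion at frequency f, the velocity amplitude is 2πf times the displacement amplitude, so dividing the channel voltage amplitude by it gives volts per unit velocity at that frequency. The frequencies, displacement amplitudes, and voltages below are hypothetical stand-ins for the laser and digitizer readings.

```python
import numpy as np

# Hypothetical measurements at five drive frequencies.
freqs = np.array([0.5, 1.0, 2.0, 5.0, 10.0])                   # Hz
disp_amp = np.array([2.0e-4, 1.5e-4, 8.0e-5, 3.0e-5, 1.0e-5])  # m, from laser
volt_amp = np.array([0.063, 0.094, 0.100, 0.094, 0.063])       # V, from digitizer

# Sinusoidal motion: velocity amplitude = 2*pi*f * displacement amplitude.
vel_amp = 2.0 * np.pi * freqs * disp_amp        # m/s
sensitivity = volt_amp / vel_amp                # V/(m/s) at each frequency
for f, s in zip(freqs, sensitivity):
    print(f"{f:5.1f} Hz: {s:7.1f} V/(m/s)")
```

Fitting these calibration points with the forced-motion response model is what yields the poles-and-zeros representation mentioned in the record.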

  6. Optimization and Calibration of Slat Position for a SPECT With Slit-Slat Collimator and Pixelated Detector Crystals

    NASA Astrophysics Data System (ADS)

    Deng, Xiao; Ma, Tianyu; Lecomte, Roger; Yao, Rutao

    2011-10-01

    To expand the availability of SPECT for biomedical research, we developed a SPECT imaging system on an existing animal PET detector by adding a slit-slat collimator. As the detector crystals are pixelated, the relative slat-to-crystal position (SCP) in the axial direction affects the photon flux distribution onto the crystals. Accurate knowledge of the SCP is important to the axial resolution and sensitivity of the system. This work presents a method for optimizing SCP in system design and for determining SCP in system geometrical calibration. The optimization was achieved by finding the SCP that provides higher spatial resolution, in terms of the average root-mean-square (RMS) width of the axial point spread function (PSF), without loss of sensitivity. The calibration was based on the least-square-error method that minimizes the difference between the measured and modeled axial point spread projections. The uniqueness and accuracy of the calibration results were validated through a singular value decomposition (SVD) based approach. Both the optimization and calibration techniques were evaluated with Monte Carlo (MC) simulated data. We showed that the average RMS width was improved by about 15% with the optimal SCP as compared to the least-optimal SCP, and that system sensitivity was not affected by SCP. The SCP error achieved by the proposed calibration method was less than 0.04 mm. The calibrated SCP value was used in MC simulation to generate the system matrix, which was used for image reconstruction. The images of simulated phantoms showed the expected resolution performance and were artifact free. We conclude that the proposed optimization and calibration method is effective for slit-slat collimator based SPECT systems.

  7. Historical Precision of an Ozone Correction Procedure for AM0 Solar Cell Calibration

    NASA Technical Reports Server (NTRS)

    Snyder, David B.; Jenkins, Phillip; Scheiman, David

    2005-01-01

    In an effort to improve the accuracy of the high altitude aircraft method for calibration of high band-gap solar cells, the ozone correction procedure has been revisited. The new procedure adjusts the measured short circuit current, Isc, according to satellite-based ozone measurements and a model of the atmospheric ozone profile, then extrapolates the measurements to air mass zero, AM0. The purpose of this paper is to assess the precision of the revised procedure by applying it to historical data sets. The average Isc of a silicon cell for a flying season increased 0.5% and the standard deviation improved from 0.5% to 0.3%. The 12-year average Isc of a GaAs cell increased 1% and the standard deviation improved from 0.8% to 0.5%. The slight increase in measured Isc and improvement in standard deviation suggest that the accuracy of the aircraft method may improve from 1% to nearly 0.5%.

  8. The recalibration of the IUE scientific instrument

    NASA Technical Reports Server (NTRS)

    Imhoff, Catherine L.; Oliversen, Nancy A.; Nichols-Bohlin, Joy; Casatella, Angelo; Lloyd, Christopher

    1988-01-01

    The IUE instrument was recalibrated because of long time-scale changes in the scientific instrument, a better understanding of the performance of the instrument, improved sets of calibration data, and improved analysis techniques. Calibrations completed or planned include intensity transfer functions (ITF), low-dispersion absolute calibrations, high-dispersion ripple corrections and absolute calibrations, improved geometric mapping of the ITFs to spectral images, studies to improve the signal-to-noise, enhanced absolute calibrations employing corrections for time, temperature, and aperture dependence, and photometric and geometric calibrations for the FES.

  9. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    USGS Publications Warehouse

    Anderson, Ryan; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott M.; Morris, Richard V.; Ehlmann, Bethany L.; Dyar, M. Darby

    2017-01-01

    Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibrations methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.

  10. Improved cross-calibration of Thomson scattering and electron cyclotron emission with ECH on DIII-D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brookman, M. W., E-mail: brookmanmw@fusion.gat.com; Austin, M. E.; McLean, A. G.

    2016-11-15

    Thomson scattering produces n_e profiles from measurement of scattered laser beam intensity. Rayleigh scattering provides a first calibration of the relation n_e ∝ I_TS, which depends on many factors (e.g., laser alignment and power, optics, and measurement systems). On DIII-D, the n_e calibration is adjusted against an absolute n_e from the density-driven cutoff of the 48-channel 2nd-harmonic X-mode electron cyclotron emission system. This method has been used to calibrate Thomson n_e from the edge to near the core (r/a > 0.15). Application of core electron cyclotron heating improves the quality of the cutoff and the depth of its penetration into the core, and also changes the underlying MHD activity, minimizing crashes which confound the calibration. Less fueling is needed, as “ECH pump-out” generates a plasma ready to take up gas. On removal of gyrotron power, the cutoff penetrates into the core as channels fall successively and smoothly into cutoff.

  11. A self-calibration method in single-axis rotational inertial navigation system with rotating mechanism

    NASA Astrophysics Data System (ADS)

    Chen, Yuanpei; Wang, Lingcao; Li, Kui

    2017-10-01

    A rotary inertial navigation modulation mechanism can greatly improve inertial navigation system (INS) accuracy through rotation. Based on the single-axis rotational inertial navigation system (RINS), a self-calibration method is put forward. The whole system applies the rotation modulation technique so that the whole inertial measurement unit (IMU) of the system can rotate around the motor shaft without any external input. In the process of modulation, some important errors can be decoupled. Using the initial position and attitude information of the system as the reference, the velocity errors and attitude errors during the rotation are used as measurements in a Kalman filter to estimate part of the important errors of the system, after which the errors can be compensated in the system. The simulation results show that the method can complete the self-calibration of the single-axis RINS in 15 minutes and estimate the gyro drifts of all three axes, the installation error angles of the IMU, and the scale factor error of the gyro on the z-axis. The calibration accuracy of the optic gyro drifts is about 0.003°/h (1σ) and that of the scale factor error about 1 part per million (1σ). The error estimates meet the system requirements, which can effectively improve the long-term navigation accuracy of a vehicle or boat.

  12. Uncertainty Evaluations of the CRCS In-orbit Field Radiometric Calibration Methods for Thermal Infrared Channels of FENGYUN Meteorological Satellites

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Rong, Z.; Min, M.; Hao, X.; Yang, H.

    2017-12-01

    Meteorological satellites have become an irreplaceable weather- and ocean-observing tool in China. These satellites are used to monitor natural disasters and improve the efficiency of many sectors of the Chinese national economy. It is impossible to ignore space-derived data in the fields of meteorology, hydrology, and agriculture, as well as disaster monitoring in China, a large agricultural country. For this reason, China is making a sustained effort to build and enhance its meteorological observing system and application system. The first Chinese polar-orbiting weather satellite was launched in 1988. Since then China has launched 14 meteorological satellites, 7 of which are sun-synchronous and 7 of which are geostationary; China will continue its two types of meteorological satellite programs. In order to achieve the in-orbit absolute radiometric calibration of the operational meteorological satellites' thermal infrared channels, the China radiometric calibration sites (CRCS) established a set of in-orbit field absolute radiometric calibration methods (FCM) for thermal infrared (TIR) channels, and the uncertainty of this method was evaluated and analyzed based on TERRA/AQUA MODIS observations. Comparisons between the MODIS at-pupil brightness temperatures (BTs) and the BTs simulated at the top of the atmosphere using a radiative transfer model (RTM) based on field measurements showed that the accuracy of the current in-orbit field absolute radiometric calibration methods was better than 1.00 K (@300 K, k=1) in the thermal infrared channels. Therefore, the current CRCS field calibration method for TIR channels applied to Chinese meteorological satellites achieves favorable calibration accuracy: better than 0.75 K (@300 K, k=1) for the 10.5-11.5 µm channel and better than 0.85 K (@300 K, k=1) for the 11.5-12.5 µm channel.

  13. Flux-gate magnetometer spin axis offset calibration using the electron drift instrument

    NASA Astrophysics Data System (ADS)

    Plaschke, Ferdinand; Nakamura, Rumi; Leinweber, Hannes K.; Chutter, Mark; Vaith, Hans; Baumjohann, Wolfgang; Steller, Manfred; Magnes, Werner

    2014-10-01

    Spin-stabilization of spacecraft immensely supports the in-flight calibration of on-board flux-gate magnetometers (FGMs). From 12 calibration parameters in total, 8 can be easily obtained by spectral analysis. From the remaining 4, the spin axis offset is known to be particularly variable. It is usually determined by analysis of Alfvénic fluctuations that are embedded in the solar wind. In the absence of solar wind observations, the spin axis offset may be obtained by comparison of FGM and electron drift instrument (EDI) measurements. The aim of our study is to develop methods that are readily usable for routine FGM spin axis offset calibration with EDI. This paper represents a major step forward in this direction. We improve an existing method to determine FGM spin axis offsets from EDI time-of-flight measurements by providing it with a comprehensive error analysis. In addition, we introduce a new, complementary method that uses EDI beam direction data instead of time-of-flight data. Using Cluster data, we show that both methods yield similarly accurate results, which are comparable yet more stable than those from a commonly used solar wind-based method.

  14. Improving the Thermal, Radial and Temporal Accuracy of the Analytical Ultracentrifuge through External References

    PubMed Central

    Ghirlando, Rodolfo; Balbo, Andrea; Piszczek, Grzegorz; Brown, Patrick H.; Lewis, Marc S.; Brautigam, Chad A.; Schuck, Peter; Zhao, Huaying

    2013-01-01

    Sedimentation velocity (SV) is a method based on first-principles that provides a precise hydrodynamic characterization of macromolecules in solution. Due to recent improvements in data analysis, the accuracy of experimental SV data emerges as a limiting factor in its interpretation. Our goal was to unravel the sources of experimental error and develop improved calibration procedures. We implemented the use of a Thermochron iButton® temperature logger to directly measure the temperature of a spinning rotor, and detected deviations that can translate into an error of as much as 10% in the sedimentation coefficient. We further designed a precision mask with equidistant markers to correct for instrumental errors in the radial calibration, which were observed to span a range of 8.6%. The need for an independent time calibration emerged with use of the current data acquisition software (Zhao et al., doi 10.1016/j.ab.2013.02.011) and we now show that smaller but significant time errors of up to 2% also occur with earlier versions. After application of these calibration corrections, the sedimentation coefficients obtained from eleven instruments displayed a significantly reduced standard deviation of ∼ 0.7 %. This study demonstrates the need for external calibration procedures and regular control experiments with a sedimentation coefficient standard. PMID:23711724

  15. Improving the thermal, radial, and temporal accuracy of the analytical ultracentrifuge through external references.

    PubMed

    Ghirlando, Rodolfo; Balbo, Andrea; Piszczek, Grzegorz; Brown, Patrick H; Lewis, Marc S; Brautigam, Chad A; Schuck, Peter; Zhao, Huaying

    2013-09-01

    Sedimentation velocity (SV) is a method based on first principles that provides a precise hydrodynamic characterization of macromolecules in solution. Due to recent improvements in data analysis, the accuracy of experimental SV data emerges as a limiting factor in its interpretation. Our goal was to unravel the sources of experimental error and develop improved calibration procedures. We implemented the use of a Thermochron iButton temperature logger to directly measure the temperature of a spinning rotor and detected deviations that can translate into an error of as much as 10% in the sedimentation coefficient. We further designed a precision mask with equidistant markers to correct for instrumental errors in the radial calibration that were observed to span a range of 8.6%. The need for an independent time calibration emerged with use of the current data acquisition software (Zhao et al., Anal. Biochem., 437 (2013) 104-108), and we now show that smaller but significant time errors of up to 2% also occur with earlier versions. After application of these calibration corrections, the sedimentation coefficients obtained from 11 instruments displayed a significantly reduced standard deviation of approximately 0.7%. This study demonstrates the need for external calibration procedures and regular control experiments with a sedimentation coefficient standard. Published by Elsevier Inc.

  16. High accuracy position response calibration method for a micro-channel plate ion detector

    NASA Astrophysics Data System (ADS)

    Hong, R.; Leredde, A.; Bagdasarova, Y.; Fléchard, X.; García, A.; Müller, P.; Knecht, A.; Liénard, E.; Kossin, M.; Sternberg, M. G.; Swanson, H. E.; Zumwalt, D. W.

    2016-11-01

    We have developed a position response calibration method for a micro-channel plate (MCP) detector with a delay-line anode position readout scheme. Using an in situ calibration mask, an accuracy of 8 μm and a resolution of 85 μm (FWHM) have been achieved for MeV-scale α particles and ions with energies of ∼10 keV. At this level of accuracy, the difference between the MCP position responses to high-energy α particles and low-energy ions is significant. The improved performance of the MCP detector can find applications in many fields of AMO and nuclear physics. In our case, it helps reduce systematic uncertainties in a high-precision nuclear β-decay experiment.

  17. Solar Measurement and Modeling | Grid Modernization | NREL

    Science.gov Websites

    NREL supports the U.S. Department of Energy SunShot Initiative by improving the tools and methods that measure solar radiation, and develops and disseminates accurate solar measurement and modeling methods, best practices, and standards (related publications include "… Normal Irradiance Measurements," Solar Energy (2016), and "Radiometer Calibration Methods and Resulting …").

  18. Man vs. Machine: An interactive poll to evaluate hydrological model performance of a manual and an automatic calibration

    NASA Astrophysics Data System (ADS)

    Wesemann, Johannes; Burgholzer, Reinhard; Herrnegger, Mathew; Schulz, Karsten

    2017-04-01

    In recent years, considerable research in hydrological modelling has been invested in improving the automatic calibration of rainfall-runoff models. This includes, for example, (1) the implementation of new optimisation methods, (2) the incorporation of new and different objective criteria and signatures in the optimisation and (3) the usage of auxiliary data sets apart from runoff. Nevertheless, in many applications manual calibration is still justifiable and frequently applied. The hydrologist performing the manual calibration can draw on expert knowledge to judge the hydrographs both in their details and as a whole. This integrated visual verification is difficult to formulate as objective criteria, even when using a multi-criteria approach. Comparing the results of automatic and manual calibration is therefore not straightforward. Automatic calibration often solely involves objective criteria such as the Nash-Sutcliffe efficiency or the Kling-Gupta efficiency as a benchmark during the calibration. Consequently, a comparison based on such measures is intrinsically biased towards automatic calibration. Additionally, objective criteria do not cover all aspects of a hydrograph, leaving open questions concerning the quality of a simulation. This contribution therefore seeks to examine the quality of manually and automatically calibrated hydrographs by interactively involving expert knowledge in the evaluation. Simulations have been performed for the Mur catchment in Austria with the rainfall-runoff model COSERO using two parameter sets, one from a manual and one from an automatic calibration. A subset of the resulting hydrographs for observation and simulation, representing the typical flow conditions and events, will be evaluated in this study. 
In an interactive crowdsourcing approach, experts attending the session can vote for their preferred simulated hydrograph without knowing which calibration method produced it. The result of the poll can therefore be seen as an additional quality criterion for the comparison of the two approaches and can help in the evaluation of the automatic calibration method.
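    The objective criteria mentioned above are easy to state in code. A minimal sketch of the Nash-Sutcliffe efficiency (NSE) and Kling-Gupta efficiency (KGE), with hypothetical discharge series:

```python
import math

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the mean."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def kge(obs, sim):
    """Kling-Gupta efficiency from correlation (r), variability ratio
    (alpha) and bias ratio (beta)."""
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    so = math.sqrt(sum((o - mo) ** 2 for o in obs) / n)
    ss = math.sqrt(sum((s - ms) ** 2 for s in sim) / n)
    r = sum((o - mo) * (s - ms) for o, s in zip(obs, sim)) / (n * so * ss)
    alpha, beta = ss / so, ms / mo
    return 1.0 - math.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

# hypothetical observed and simulated runoff
q_obs = [1.0, 3.0, 2.0, 5.0, 4.0]
q_sim = [1.2, 2.8, 2.1, 4.7, 4.2]
```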

  19. SU-F-T-271: Comparing IMRT QA Pass Rates Before and After MLC Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mazza, A; Perrin, D; Fontenot, J

    Purpose: To compare IMRT QA pass rates before and after an in-house MLC leaf calibration procedure. Methods: The MLC leaves and backup jaws on four Elekta linear accelerators with MLCi2 heads were calibrated using the EPID-based RIT Hancock Test as the means for evaluation. The MLCs were considered to be successfully calibrated when they could pass the Hancock Test with criteria of 1 mm jaw position tolerance and 1 mm leaf position tolerance. IMRT QA results were collected pre- and post-calibration and analyzed using gamma analysis with 3%/3mm DTA criteria. AAPM TG-119 test plans were also compared pre- and post-calibration, at both 2%/2mm DTA and 3%/3mm DTA. Results: A weighted average was performed on the results for all four linear accelerators. The pre-calibration IMRT QA pass rate was 98.3 ± 0.1%, compared with the post-calibration pass rate of 98.5 ± 0.1%. The TG-119 test plan results showed more of an improvement, particularly at the 2%/2mm criteria. The averaged results were 89.1% pre and 96.1% post for the C-shape plan, 94.8% pre and 97.1% post for the multi-target plan, 98.6% pre and 99.7% post for the prostate plan, and 94.7% pre and 94.8% post for the head/neck plan. Conclusion: The patient QA results did not show statistically significant improvement at the 3%/3mm DTA criteria after the MLC calibration procedure. However, the TG-119 test cases did show significant improvement at the 2%/2mm level.
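    A much simplified version of the gamma analysis used here can be written for a 1D dose profile (global normalisation to the maximum reference dose). Real QA software works on 2D/3D grids with interpolation, so this is only a sketch:

```python
import math

def gamma_pass_rate(positions, ref_dose, eval_dose, dd=0.03, dta=3.0):
    """Percent of reference points with gamma <= 1 (global normalisation).

    positions are in mm on a shared grid; dd is the dose-difference
    criterion as a fraction of the maximum reference dose; dta is in mm.
    """
    d_max = max(ref_dose)
    passed = 0
    for rp, rd in zip(positions, ref_dose):
        # gamma is the minimum combined dose/distance metric over the
        # evaluated distribution
        g = min(
            math.sqrt(((ep - rp) / dta) ** 2 + ((ed - rd) / (dd * d_max)) ** 2)
            for ep, ed in zip(positions, eval_dose)
        )
        if g <= 1.0:
            passed += 1
    return 100.0 * passed / len(ref_dose)
```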

  20. Application of a self-compensation mechanism to a rotary-laser scanning measurement system

    NASA Astrophysics Data System (ADS)

    Guo, Siyang; Lin, Jiarui; Ren, Yongjie; Shi, Shendong; Zhu, Jigui

    2017-11-01

    In harsh environmental conditions, the relative orientations of transmitters of rotary-laser scanning measuring systems are easily influenced by low-frequency vibrations or creep deformation of the support structure. A self-compensation method that counters this problem is presented. This method is based on an improved workshop Measurement Positioning System (wMPS) with inclinometer-combined transmitters. A calibration method for the spatial rotation between the transmitter and inclinometer with an auxiliary horizontal reference frame is presented. It is shown that the calibration accuracy can be improved by a mechanical adjustment using a special bubble level. The orientation-compensation algorithm of the transmitters is described in detail. The feasibility of this compensation mechanism is validated by Monte Carlo simulations and experiments. The mechanism mainly provides a two-degrees-of-freedom attitude compensation.

  1. Calibration of X-Ray diffractometer by the experimental comparison method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dudka, A. P., E-mail: dudka@ns.crys.ras.ru

    2015-07-15

    Software for calibrating an X-ray diffractometer with an area detector has been developed. It is proposed to search for detector and goniometer calibration models whose parameters are reproduced in a series of measurements on a reference crystal. Reference (standard) crystals are prepared during the investigation; they should provide the agreement of structural models in repeated analyses. The technique developed has been used to calibrate Xcalibur Sapphire and Eos, Gemini Ruby (Agilent) and Apex x8 and Apex Duo (Bruker) diffractometers. The main conclusions are as follows: the calibration maps are stable for several years and can be used to improve structural results, verified CCD detectors exhibit significant inhomogeneity of the efficiency (response) function, and a Bruker goniometer introduces smaller distortions than an Agilent goniometer.

  2. Improving zero-training brain-computer interfaces by mixing model estimators

    NASA Astrophysics Data System (ADS)

    Verhoeven, T.; Hübner, D.; Tangermann, M.; Müller, K. R.; Dambre, J.; Kindermans, P. J.

    2017-06-01

    Objective. Brain-computer interfaces (BCI) based on event-related potentials (ERP) incorporate a decoder to classify recorded brain signals and subsequently select a control signal that drives a computer application. Standard supervised BCI decoders require a tedious calibration procedure prior to every session. Several unsupervised classification methods have been proposed that tune the decoder during actual use and as such omit this calibration. Each of these methods has its own strengths and weaknesses. Our aim is to improve overall accuracy of ERP-based BCIs without calibration. Approach. We consider two approaches for unsupervised classification of ERP signals. Learning from label proportions (LLP) was recently shown to be guaranteed to converge to a supervised decoder when enough data is available. In contrast, the formerly proposed expectation maximization (EM) based decoding for ERP-BCI does not have this guarantee. However, while this decoder has high variance due to random initialization of its parameters, it obtains a higher accuracy faster than LLP when the initialization is good. We introduce a method to optimally combine these two unsupervised decoding methods, letting one method’s strengths compensate for the weaknesses of the other and vice versa. The new method is compared to the aforementioned methods in a resimulation of an experiment with a visual speller. Main results. Analysis of the experimental results shows that the new method exceeds the performance of the previous unsupervised classification approaches in terms of ERP classification accuracy and symbol selection accuracy during the spelling experiment. Furthermore, the method shows less dependency on random initialization of model parameters and is consequently more reliable. Significance. Improving the accuracy and subsequent reliability of calibrationless BCIs makes these systems more appealing for frequent use.
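    The general idea of combining two estimators so that one's strengths offset the other's weaknesses can be illustrated with classical inverse-variance weighting. This is a generic sketch of the combination principle, not the paper's actual rule for mixing the EM and LLP decoders:

```python
def mix_estimates(mean_a, var_a, mean_b, var_b):
    """Combine two noisy estimates of the same quantity, weighting each
    by its inverse variance; the combined variance is never worse than
    either input's."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
    combined = w_a * mean_a + (1.0 - w_a) * mean_b
    combined_var = 1.0 / (1.0 / var_a + 1.0 / var_b)
    return combined, combined_var
```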

  3. Absolute radiometric calibration of Landsat using a pseudo invariant calibration site

    USGS Publications Warehouse

    Helder, D.; Thome, K.J.; Mishra, N.; Chander, G.; Xiong, Xiaoxiong; Angal, A.; Choi, Tae-young

    2013-01-01

    Pseudo invariant calibration sites (PICS) have been used for on-orbit radiometric trending of optical satellite systems for more than 15 years. This approach to vicarious calibration has demonstrated a high degree of reliability and repeatability at the level of 1-3% depending on the site, spectral channel, and imaging geometries. A variety of sensors have used this approach for trending because it is broadly applicable and easy to implement. Models to describe the surface reflectance properties, as well as the intervening atmosphere have also been developed to improve the precision of the method. However, one limiting factor of using PICS is that an absolute calibration capability has not yet been fully developed. Because of this, PICS are primarily limited to providing only long term trending information for individual sensors or cross-calibration opportunities between two sensors. This paper builds an argument that PICS can be used more extensively for absolute calibration. To illustrate this, a simple empirical model is developed for the well-known Libya 4 PICS based on observations by Terra MODIS and EO-1 Hyperion. The model is validated by comparing model predicted top-of-atmosphere reflectance values to actual measurements made by the Landsat ETM+ sensor reflective bands. Following this, an outline is presented to develop a more comprehensive and accurate PICS absolute calibration model that can be Système international d'unités (SI) traceable. These initial concepts suggest that absolute calibration using PICS is possible on a broad scale and can lead to improved on-orbit calibration capabilities for optical satellite sensors.

  4. Larger Optics and Improved Calibration Techniques for Small Satellite Observations with the ERAU OSCOM System

    NASA Astrophysics Data System (ADS)

    Bilardi, S.; Barjatya, A.; Gasdia, F.

    OSCOM, Optical tracking and Spectral characterization of CubeSats for Operational Missions, is a system capable of providing time-resolved satellite photometry using commercial-off-the-shelf (COTS) hardware and custom tracking and analysis software. This system has acquired photometry of objects as small as CubeSats using a Celestron 11” RASA and an inexpensive CMOS machine vision camera. For satellites with known shapes, these light curves can be used to verify a satellite’s attitude and the state of its deployed solar panels or antennae. While the OSCOM system can successfully track satellites and produce light curves, there is ongoing improvement towards increasing its automation while supporting additional mounts and telescopes. A newly acquired Celestron 14” Edge HD can be used with a Starizona Hyperstar to increase the SNR for small objects as well as extend beyond the limiting magnitude of the 11” RASA. OSCOM currently corrects instrumental brightness measurements for satellite range and observatory site average atmospheric extinction, but calibrated absolute brightness is required to determine information about satellites other than their spin rate, such as surface albedo. A calibration method that automatically detects and identifies background stars can use their catalog magnitudes to calibrate the brightness of the satellite in the image. We present a photometric light curve from both the 14” Edge HD and 11” RASA optical systems as well as plans for a calibration method that will perform background star photometry to efficiently determine calibrated satellite brightness in each frame.

  5. [An attempt for standardization of serum CA19-9 levels, in order to dissolve the gap between three different methods].

    PubMed

    Hayashi, Kuniki; Hoshino, Tadashi; Yanai, Mitsuru; Tsuchiya, Tatsuyuki; Kumasaka, Kazunari; Kawano, Kinya

    2004-06-01

    It is well known that serious method-related differences exist in serum CA19-9 results, and the need for standardization has been pointed out. In this study, differences in serum CA19-9 levels obtained with various immunoassay kits (CLEIA, FEIA, LPIA and RIA) were evaluated in sixty-seven clinical samples and five calibrators; inter-method differences were observed not only for the clinical samples but also for the calibrators. We then designated one calibrator as an assumed standard material and recalculated the serum CA19-9 levels of three different measurement methods against it. The results suggest that CA19-9 values recalculated with the assumed standard material could correct between-method and between-laboratory discrepancies, particularly systematic errors.
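    The recalculation against an assumed standard material amounts to a simple rescaling, sketched here with hypothetical numbers: each method's result is multiplied by the ratio of the assumed standard value to that method's own reading of the shared calibrator.

```python
def recalibrate(raw_value, calibrator_reading, assumed_standard_value):
    """Rescale one method's result so that the shared calibrator would
    read the assumed standard value on that method."""
    return raw_value * assumed_standard_value / calibrator_reading

# the same serum measured by two hypothetical methods that disagree on
# the raw scale but agree after rescaling to the assumed standard
method_a = recalibrate(120.0, calibrator_reading=60.0, assumed_standard_value=50.0)
method_b = recalibrate(100.0, calibrator_reading=50.0, assumed_standard_value=50.0)
```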

  6. Research on the attitude of small UAV based on MEMS devices

    NASA Astrophysics Data System (ADS)

    Shi, Xiaojie; Lu, Libin; Jin, Guodong; Tan, Lining

    2017-05-01

    This paper introduces the design principle and implementation of a small-UAV attitude determination system based on MEMS devices. The Gauss-Newton method, based on least squares, is used to calibrate the MEMS accelerometer and gyroscope, and a modified complementary filter corrects the attitude angle error to improve attitude accuracy. Experimental data show that the system designed in this paper meets the attitude accuracy requirements of small UAVs at small size and low cost.
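    As an illustration of accelerometer calibration by fitting to known gravity references, here is a simplified closed-form two-position variant per axis (not the paper's Gauss-Newton procedure): readings with the axis pointing up (+1 g) and down (−1 g) determine a scale factor and bias.

```python
G = 9.80665  # standard gravity, m/s^2

def two_position_axis_cal(reading_up, reading_down):
    """Scale factor and bias of one accelerometer axis from readings
    taken with the axis aligned with +1 g and -1 g."""
    scale = (reading_up - reading_down) / (2.0 * G)
    bias = (reading_up + reading_down) / 2.0
    return scale, bias

def correct(raw, scale, bias):
    """Convert a raw reading to a calibrated acceleration."""
    return (raw - bias) / scale
```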

  7. Development of new methodologies for evaluating the energy performance of new commercial buildings

    NASA Astrophysics Data System (ADS)

    Song, Suwon

    The concept of Measurement and Verification (M&V) of a new building continues to become more important because efficient design alone is often not sufficient to deliver an efficient building. Simulation models that are calibrated to measured data can be used to evaluate the energy performance of new buildings if they are compared to energy baselines such as similar buildings, energy codes, and design standards. Unfortunately, there is a lack of detailed M&V methods and analysis methods to measure energy savings from new buildings that would have hypothetical energy baselines. Therefore, this study developed and demonstrated several new methodologies for evaluating the energy performance of new commercial buildings using a case-study building in Austin, Texas. First, three new M&V methods were developed to enhance the previous generic M&V framework for new buildings, including: (1) The development of a method to synthesize weather-normalized cooling energy use from a correlation of Motor Control Center (MCC) electricity use when chilled water use is unavailable, (2) The development of an improved method to analyze measured solar transmittance against incidence angle for sample glazing using different solar sensor types, including Eppley PSP and Li-Cor sensors, and (3) The development of an improved method to analyze chiller efficiency and operation at part-load conditions. Second, three new calibration methods were developed and analyzed, including: (1) A new percentile analysis added to the previous signature method for use with a DOE-2 calibration, (2) A new analysis to account for undocumented exhaust air in DOE-2 calibration, and (3) An analysis of the impact of synthesized direct normal solar radiation using the Erbs correlation on DOE-2 simulation. 
Third, an analysis of the actual energy savings compared to three different energy baselines was performed, including: (1) Energy Use Index (EUI) comparisons with sub-metered data, (2) New comparisons against Standards 90.1-1989 and 90.1-2001, and (3) A new evaluation of the performance of selected Energy Conservation Design Measures (ECDMs). Finally, potential energy savings were also simulated from selected improvements, including: minimum supply air flow, undocumented exhaust air, and daylighting.

  8. Calibration and evaluation of the FAO56-Penman-Monteith, FAO24-radiation, and Priestley-Taylor reference evapotranspiration models using the spatially measured solar radiation across a large arid and semi-arid area in southern Iran

    NASA Astrophysics Data System (ADS)

    Didari, Shohreh; Ahmadi, Seyed Hamid

    2018-05-01

    Crop evapotranspiration (ET) is one of the main components in calculating the water balance in agricultural, hydrological, environmental, and climatological studies. Solar radiation (Rs) supplies the available energy for ET, and therefore, precise measurement of Rs is required for accurate ET estimation. However, measured Rs and ET are not available in many areas and must be estimated indirectly by empirical methods. The Angström-Prescott (AP) model is the most popular method for estimating Rs in areas without measured data. In addition, locally calibrated AP coefficients are not yet available in many locations, and the default coefficients are used instead. In this study, we investigated different approaches for Rs and ET calculation. The daily measured Rs values at 14 stations across arid and semi-arid areas of Fars province in southern Iran were used to calibrate the coefficients of the AP model. Results revealed that the calibrated AP coefficients were very different from, and higher than, the default values. In addition, the reference ET (ETo) was estimated by the FAO56 Penman-Monteith (FAO56 PM) and FAO24-radiation methods using the measured Rs and was then compared with measured pan evaporation as an indication of the potential atmospheric demand. Interestingly, and unlike many previous studies that have suggested the FAO56 PM as the standard method for calculating ETo, the FAO24-radiation method with the measured Rs showed better agreement with the mean pan evaporation. Therefore, the FAO24-radiation method with the measured Rs was used as the reference method for the study area, which was also confirmed by previous studies based on lysimeter data. Moreover, the accuracy of the calibrated Rs in the estimation of ETo by the FAO56 PM and FAO24-radiation methods was investigated. 
Results showed that the calibrated Rs improved the accuracy of the ETo estimated by the FAO24-radiation method relative to the reference (FAO24-radiation with the measured Rs), whereas there was no improvement in the estimation of ETo by the FAO56 PM method. Moreover, the empirical coefficient (α) of the Priestley-Taylor (PT) ETo estimation method was calibrated against the reference method, and the results indicated α values of ca. 2 or higher in all stations, compared with the recommended α = 1.26. An empirical equation based on yearly mean relative humidity was suggested for estimating α in the study area. Overall, this study showed that (1) the FAO24-radiation method with either the measured or the calibrated Rs is more accurate than the FAO56 PM method, (2) the spatially calibrated AP coefficients are very different from each other over an arid and semi-arid area and are higher than those proposed by FAO56, (3) the original PT model is not applicable in arid and semi-arid areas and substantially underestimates ETo, and (4) the PT coefficient should be locally calibrated for each station over an arid and semi-arid area.
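    Calibrating the Angström-Prescott coefficients is an ordinary least-squares fit of Rs/Ra against the relative sunshine duration n/N. A minimal sketch with synthetic data (the values below are illustrative, not from the study):

```python
def fit_angstrom_prescott(n_over_N, rs_over_ra):
    """Least-squares fit of the Angström-Prescott relation
    Rs/Ra = a + b * (n/N); returns (a, b)."""
    m = len(n_over_N)
    mx = sum(n_over_N) / m
    my = sum(rs_over_ra) / m
    b = (sum((x - mx) * (y - my) for x, y in zip(n_over_N, rs_over_ra))
         / sum((x - mx) ** 2 for x in n_over_N))
    a = my - b * mx
    return a, b
```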

  9. Indirect Correspondence-Based Robust Extrinsic Calibration of LiDAR and Camera

    PubMed Central

    Sim, Sungdae; Sock, Juil; Kwak, Kiho

    2016-01-01

    LiDAR and cameras have been broadly utilized in computer vision and autonomous vehicle applications. However, in order to convert data between the local coordinate systems, we must estimate the rigid body transformation between the sensors. In this paper, we propose a robust extrinsic calibration algorithm that can be implemented easily and has small calibration error. The extrinsic calibration parameters are estimated by minimizing the distance between corresponding features projected onto the image plane. The features are edge and centerline features on a v-shaped calibration target. The proposed algorithm improves the calibration accuracy in two ways. First, we weight the distance between a point and a line feature according to the correspondence accuracy of the features. Second, we apply a penalizing function to exclude the influence of outliers in the calibration datasets. Additionally, based on our robust calibration approach for a single LiDAR-camera pair, we introduce a joint calibration that estimates the extrinsic parameters of multiple sensors at once by minimizing one objective function with loop-closing constraints. We conduct several experiments to evaluate the performance of our extrinsic calibration algorithm. The experimental results show that our calibration method has better performance than the other approaches. PMID:27338416
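    The quantity being minimised, a weighted sum of point-to-feature distances in the image plane, can be sketched in 2D coordinates. This is illustrative only; the paper's objective also includes the penalizing function and the projection through the candidate extrinsic parameters:

```python
import math

def point_line_distance(px, py, x1, y1, x2, y2):
    """Perpendicular distance from a point to the infinite line through
    (x1, y1) and (x2, y2)."""
    num = abs((y2 - y1) * px - (x2 - x1) * py + x2 * y1 - y2 * x1)
    den = math.hypot(y2 - y1, x2 - x1)
    return num / den

def weighted_cost(points, lines, weights):
    """Sum of weighted point-to-line distances: the kind of cost an
    extrinsic calibration minimises over candidate transformations,
    with weights reflecting correspondence accuracy."""
    return sum(w * point_line_distance(px, py, *ln)
               for (px, py), ln, w in zip(points, lines, weights))
```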

  10. Video-guided calibration of an augmented reality mobile C-arm.

    PubMed

    Chen, Xin; Naik, Hemal; Wang, Lejing; Navab, Nassir; Fallavollita, Pascal

    2014-11-01

    The augmented reality (AR) fluoroscope augments an X-ray image by video and provides the surgeon with a real-time in situ overlay of the anatomy. The overlay alignment is crucial for diagnostic and intra-operative guidance, so precise calibration of the AR fluoroscope is required. The first and most complex step of the calibration procedure is the determination of the X-ray source position. Currently, this is achieved using a biplane phantom with movable metallic rings on its top layer and fixed X-ray opaque markers on its bottom layer. The metallic rings must be moved to positions where at least two pairs of rings and markers are isocentric in the X-ray image. The current "trial and error" calibration process requires acquisition of many X-ray images, a task that is both time consuming and radiation intensive. An improved process was developed and tested for C-arm calibration. Video guidance was used to drive the calibration procedure to minimize both X-ray exposure and the time involved. For this, a homography between X-ray and video images is estimated. This homography is valid for the plane at which the metallic rings are positioned and is employed to guide the calibration procedure. Eight users having varying calibration experience (i.e., 2 experts, 2 semi-experts, 4 novices) were asked to participate in the evaluation. The video-guided technique reduced the number of intra-operative X-ray calibration images by 89% and decreased the total time required by 59%. A video-based C-arm calibration method has been developed that improves the usability of the AR fluoroscope with a friendlier interface, reduced calibration time and clinically acceptable radiation doses.

  11. New algorithms for motion error detection of numerical control machine tool by laser tracking measurement on the basis of GPS principle.

    PubMed

    Wang, Jindong; Chen, Peng; Deng, Yufen; Guo, Junjie

    2018-01-01

    As a three-dimensional measuring instrument, the laser tracker is widely used in industrial measurement. To avoid the influence of angle measurement error on the overall measurement accuracy, multi-station and time-sharing measurement with a laser tracker is introduced on the basis of the global positioning system (GPS) principle in this paper. For the proposed method, how to accurately determine the coordinates of each measuring point from a large amount of measured data is a critical issue. Taking the detection of the motion error of a numerical control machine tool as an example, the corresponding measurement algorithms are investigated thoroughly. By establishing the mathematical model of detecting the motion error of a machine tool with this method, the analytical algorithm for base station calibration and measuring point determination is deduced without the need to select an initial iterative value. However, when the motion area of the machine tool lies in a 2D plane, the coefficient matrix of the base station calibration is singular, which distorts the result. To overcome this limitation of the original algorithm, an improved analytical algorithm is also derived. Meanwhile, the calibration accuracy of the base station with the improved algorithm is compared with that of the original analytical algorithm and of iterative algorithms such as the Gauss-Newton and Levenberg-Marquardt algorithms. The experiment further verifies the feasibility and effectiveness of the improved algorithm. In addition, the influence of the different motion areas of the machine tool on the calibration accuracy of the base station, and the corresponding influence of measurement error on the calibration result depending on the condition number of the coefficient matrix, are analyzed.
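    The measuring-point determination step can be illustrated with standard iterative Gauss-Newton multilateration: estimate a point from its distances to known base stations. Note this uses an initial guess, unlike the paper's analytical algorithm; the station layout and values below are hypothetical.

```python
import math

def solve3(A, b):
    """Solve a 3x3 linear system by Cramer's rule."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    out = []
    for c in range(3):
        Ac = [row[:] for row in A]
        for r in range(3):
            Ac[r][c] = b[r]
        out.append(det(Ac) / d)
    return out

def multilaterate(stations, distances, guess, iters=20):
    """Gauss-Newton estimate of a 3D point from distances to stations."""
    p = list(guess)
    for _ in range(iters):
        J, res = [], []
        for s, d in zip(stations, distances):
            rng = math.dist(p, s)
            J.append([(p[k] - s[k]) / rng for k in range(3)])
            res.append(rng - d)
        # normal equations: (J^T J) dp = -J^T res
        JTJ = [[sum(row[i] * row[j] for row in J) for j in range(3)]
               for i in range(3)]
        JTr = [sum(row[i] * r for row, r in zip(J, res)) for i in range(3)]
        dp = solve3(JTJ, [-v for v in JTr])
        p = [p[k] + dp[k] for k in range(3)]
    return p
```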

  12. New algorithms for motion error detection of numerical control machine tool by laser tracking measurement on the basis of GPS principle

    NASA Astrophysics Data System (ADS)

    Wang, Jindong; Chen, Peng; Deng, Yufen; Guo, Junjie

    2018-01-01

    As a three-dimensional measuring instrument, the laser tracker is widely used in industrial measurement. To avoid the influence of angle measurement error on the overall measurement accuracy, multi-station and time-sharing measurement with a laser tracker is introduced on the basis of the global positioning system (GPS) principle in this paper. For the proposed method, how to accurately determine the coordinates of each measuring point from a large amount of measured data is a critical issue. Taking the detection of the motion error of a numerical control machine tool as an example, the corresponding measurement algorithms are investigated thoroughly. By establishing the mathematical model of detecting the motion error of a machine tool with this method, the analytical algorithm for base station calibration and measuring point determination is deduced without the need to select an initial iterative value. However, when the motion area of the machine tool lies in a 2D plane, the coefficient matrix of the base station calibration is singular, which distorts the result. To overcome this limitation of the original algorithm, an improved analytical algorithm is also derived. Meanwhile, the calibration accuracy of the base station with the improved algorithm is compared with that of the original analytical algorithm and of iterative algorithms such as the Gauss-Newton and Levenberg-Marquardt algorithms. The experiment further verifies the feasibility and effectiveness of the improved algorithm. In addition, the influence of the different motion areas of the machine tool on the calibration accuracy of the base station, and the corresponding influence of measurement error on the calibration result depending on the condition number of the coefficient matrix, are analyzed.

  13. SU-F-E-19: A Novel Method for TrueBeam Jaw Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corns, R; Zhao, Y; Huang, V

    2016-06-15

    Purpose: A simple jaw calibration method is proposed for Varian TrueBeam using an EPID-encoder combination that gives accurate field sizes and a homogeneous junction dose. This benefits clinical applications such as mono-isocentric half-beam block breast cancer or head and neck cancer treatment with junction/field matching. Methods: We use an EPID imager with pixel size 0.392 mm × 0.392 mm to determine the radiation jaw position as measured from radio-opaque markers aligned with the crosshair. We acquire two images with different symmetric field sizes and record each individual jaw's encoder values. A linear relationship between each jaw's position and its encoder value is established, from which we predict the encoder values that produce the jaw positions required by TrueBeam's calibration procedure. During TrueBeam's jaw calibration procedure, we move the jaw with the pendant to set it into position using the predicted encoder value. The overall accuracy is under 0.1 mm. Results: Our in-house software analyses images and provides sub-pixel accuracy to determine the field centre and radiation edges (50% dose of the profile). We verified that the TrueBeam encoder provides a reliable linear relationship for each individual jaw position (R² > 0.9999), from which the encoder values necessary to set the jaw calibration points (1 cm and 19 cm) are predicted. Junction matching dose inhomogeneities were improved from >±20% to <±6% using this new calibration protocol. However, one technical challenge remains for junction matching if the collimator walkout is large. Conclusion: Our new TrueBeam jaw calibration method can systematically calibrate the jaws to the crosshair within sub-pixel accuracy and provides both good junction doses and field sizes. This method does not compensate for a larger collimator walkout, but can be used as the underlying foundation for addressing the walkout issue.
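    With two images at known symmetric field sizes, the per-jaw linear encoder model reduces to fitting a line through two (position, encoder) pairs and evaluating it at the required calibration points. A sketch with hypothetical encoder counts:

```python
def predict_encoder(pos_a, enc_a, pos_b, enc_b, target_pos):
    """Linearly interpolate/extrapolate the encoder count that places a
    jaw at target_pos, from two measured (position, encoder) pairs."""
    slope = (enc_b - enc_a) / (pos_b - pos_a)
    return enc_a + slope * (target_pos - pos_a)

# hypothetical measurements: jaw at 5 cm -> 1000 counts, 10 cm -> 2000
enc_19cm = predict_encoder(5.0, 1000.0, 10.0, 2000.0, 19.0)
enc_1cm = predict_encoder(5.0, 1000.0, 10.0, 2000.0, 1.0)
```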

  14. A calibration procedure for load cells to improve accuracy of mini-lysimeters in monitoring evapotranspiration

    NASA Astrophysics Data System (ADS)

    Misra, R. K.; Padhi, J.; Payero, J. O.

    2011-08-01

    We used twelve load cells (20 kg capacity) in a mini-lysimeter system to measure evapotranspiration simultaneously from twelve plants growing in separate pots in a glasshouse. A data logger combined with a multiplexer was used to connect all load cells in the full-bridge excitation mode to acquire the load-cell signals. Each load cell was calibrated using fixed loads within the range of 0-0.8 times its full load capacity. Performance of all load cells was assessed on the basis of signal settling time, excitation compensation, hysteresis and temperature. Final calibration of the load cells included statistical consideration of these effects to allow prediction of lysimeter weights and evapotranspiration over short time intervals for improved accuracy and sustained performance. Analysis of the costs of the mini-lysimeter system indicates that evapotranspiration can be measured economically at reasonable accuracy and sufficient resolution with a robust method of load-cell calibration.
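    The fixed-load calibration and hysteresis assessment can be sketched as an ordinary least-squares fit of load against signal, with the ascending/descending disagreement taken as a hysteresis estimate. Readings below are hypothetical; the paper's final calibration also models excitation and temperature effects statistically.

```python
def fit_linear(signal, load):
    """Ordinary least squares for load = a + b * signal; returns (a, b)."""
    n = len(signal)
    mx, my = sum(signal) / n, sum(load) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(signal, load))
         / sum((x - mx) ** 2 for x in signal))
    return my - b * mx, b

def hysteresis(signal_up, signal_down, a, b):
    """Worst-case disagreement (in load units) between the ascending and
    descending runs at the same applied loads."""
    pred_up = [a + b * s for s in signal_up]
    pred_down = [a + b * s for s in signal_down]
    return max(abs(u - d) for u, d in zip(pred_up, pred_down))
```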

  15. Objective measurement of erythema in psoriasis using digital color photography with color calibration.

    PubMed

    Raina, A; Hennessy, R; Rains, M; Allred, J; Hirshburg, J M; Diven, D G; Markey, M K

    2016-08-01

    Traditional metrics for evaluating the severity of psoriasis are subjective, which complicates efforts to measure effective treatments in clinical trials. We collected images of psoriasis plaques and calibrated the coloration of the images according to an included color card. Features were extracted from the images and used to train a linear discriminant analysis classifier with cross-validation to automatically classify the degree of erythema. The results were tested against numerical scores obtained by a panel of dermatologists using a standard rating system. Quantitative measures of erythema based on the digital color images showed good agreement with subjective assessment of erythema severity (κ = 0.4203). The color calibration process improved the agreement from κ = 0.2364 to κ = 0.4203. We propose a method for the objective measurement of the psoriasis severity parameter of erythema and show that the calibration process improved the results.
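    The classification step used linear discriminant analysis; a minimal two-class Fisher LDA in NumPy on synthetic calibrated color features gives the flavor. Everything here is a hypothetical stand-in (the study used cross-validation and a multi-grade erythema scale, both omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibrated color features (e.g. mean redness, saturation)
# for two erythema grades; purely illustrative numbers.
mild = rng.normal([10.0, 0.3], 0.8, size=(50, 2))
severe = rng.normal([16.0, 0.5], 0.8, size=(50, 2))

# Fisher LDA: project onto w = Sw^-1 (mu1 - mu0), threshold at the midpoint.
mu0, mu1 = mild.mean(0), severe.mean(0)
Sw = np.cov(mild.T) + np.cov(severe.T)       # pooled within-class scatter
w = np.linalg.solve(Sw, mu1 - mu0)
threshold = w @ (mu0 + mu1) / 2.0

def grade(x):
    return int(w @ x > threshold)            # 0 = mild, 1 = severe

X = np.vstack([mild, severe])
y = np.r_[np.zeros(50), np.ones(50)]
acc = np.mean([grade(x) == t for x, t in zip(X, y)])
print(f"training accuracy: {acc:.2f}")
```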

  16. Position calibration of a 3-DOF hand-controller with hybrid structure

    NASA Astrophysics Data System (ADS)

    Zhu, Chengcheng; Song, Aiguo

    2017-09-01

    A hand-controller is a human-robot interactive device, which measures the 3-DOF (Degree of Freedom) position of the human hand and sends it as a command to control robot movement. The device also receives 3-DOF force feedback from the robot and applies it to the human hand. Thus, the precision of 3-DOF position measurement is a key performance factor for hand-controllers. However, in hybrid-type 3-DOF hand-controllers, various errors arise that are thought to originate from machining and assembly variations within the device. This paper presents a calibration method that improves the position tracking accuracy of hybrid-type hand-controllers by determining the actual size of the hand-controller parts. By re-measuring and re-calibrating this kind of hand-controller, the actual sizes of the key parts that cause errors are determined. Modifying the formula parameters with the actual sizes obtained in the calibration process improves the end-position tracking accuracy of the device.

  17. Development of Long-term Datasets from Satellite BUV Instruments: The "Soft" Calibration Approach

    NASA Technical Reports Server (NTRS)

    Bhartia, Pawan K.; Taylor, Steven; Jaross, Glen

    2005-01-01

    The first BUV instrument was launched in April 1970 on NASA's Nimbus-4 satellite. More than a dozen instruments, broadly based on the same principle but using very different technologies, have been launched in the last 35 years on NASA, NOAA, Japanese and European satellites. In this paper we describe the basic principles of the "soft" calibration approach that we have successfully applied to the data from many of these instruments to produce a consistent long-term record of total ozone, ozone profile and aerosols. This approach is based on using accurate radiative transfer models and assumed/known properties of the atmosphere in the ultraviolet to derive calibration parameters. Although the accuracy of the results inevitably depends upon how well the assumed atmospheric properties are known, the technique has several built-in cross-checks that improve the robustness of the method. To develop further confidence in the data, the soft calibration technique can be combined with data collected from a few well-calibrated ground-based instruments. We will use examples from past and present BUV instruments to show how the method works.

  18. Bulk and surface event identification in p-type germanium detectors

    NASA Astrophysics Data System (ADS)

    Yang, L. T.; Li, H. B.; Wong, H. T.; Agartioglu, M.; Chen, J. H.; Jia, L. P.; Jiang, H.; Li, J.; Lin, F. K.; Lin, S. T.; Liu, S. K.; Ma, J. L.; Sevda, B.; Sharma, V.; Singh, L.; Singh, M. K.; Singh, M. K.; Soma, A. K.; Sonay, A.; Yang, S. W.; Wang, L.; Wang, Q.; Yue, Q.; Zhao, W.

    2018-04-01

    The p-type point-contact germanium detectors have been adopted for light dark matter WIMP searches and the studies of low energy neutrino physics. These detectors exhibit anomalous behavior to events located at the surface layer. The previous spectral shape method to identify these surface events from the bulk signals relies on spectral shape assumptions and the use of external calibration sources. We report an improved method in separating them by taking the ratios among different categories of in situ event samples as calibration sources. Data from CDEX-1 and TEXONO experiments are re-examined using the ratio method. Results are shown to be consistent with the spectral shape method.

  19. Improving the accuracy of the gradient method for determining soil carbon dioxide efflux

    USDA-ARS?s Scientific Manuscript database

    Continuous soil CO2 efflux (Fsoil) estimates can be obtained by the gradient method (GM), but the utility of the method is hindered by uncertainties in the application of published models for the diffusion coefficient (Ds). We compared two in-situ methods for determining Ds, one based on calibrating th...

  20. Ozone Correction for AM0 Calibrated Solar Cells for the Aircraft Method

    NASA Technical Reports Server (NTRS)

    Snyder, David B.; Scheiman, David A.; Jenkins, Phillip P.; Lyons, Valerie J. (Technical Monitor)

    2002-01-01

    The aircraft solar cell calibration method has provided cells calibrated to space conditions for 37 years. However, it is susceptible to systematic errors due to ozone concentration in the stratosphere. The present correction procedure applies a 1% increase to the measured Isc values. High band-gap cells are more sensitive to ozone-absorbed wavelengths, so it has become important to reassess the correction technique. This paper evaluates the ozone correction to be 1 + {O3}·F_O, where F_O is 29.5×10⁻⁶ per Dobson unit (DU) for a silicon solar cell and 42.2×10⁻⁶/DU for a GaAs cell. Results will be presented for high band-gap cells. A comparison with flight data indicates that this method of correcting for the ozone density improves the uncertainty of the AM0 Isc to 0.5%.
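    Applying the multiplicative correction is a one-liner; the per-Dobson-unit coefficients are those quoted in the abstract, while the form of the factor is a reconstruction from the garbled original:

```python
# Ozone correction for aircraft-method Isc calibration (sketch).
# Per-Dobson-unit sensitivity coefficients F_O, from the abstract.
F_O = {"Si": 29.5e-6, "GaAs": 42.2e-6}   # 1/DU

def corrected_isc(isc_measured, ozone_du, cell="Si"):
    """Scale the measured short-circuit current by (1 + O3 * F_O)."""
    return isc_measured * (1.0 + ozone_du * F_O[cell])

# Example: 300 DU of column ozone over a GaAs cell raises Isc by ~1.3%,
# consistent with the ~1% blanket correction the older procedure applied.
print(corrected_isc(100.0, 300.0, "GaAs"))
```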

  1. Binocular optical axis parallelism detection precision analysis based on Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Ying, Jiaju; Liu, Bingqi

    2018-02-01

    Based on the working principle of the binocular photoelectric instrument's optical-axis-parallelism digital calibration instrument, the factors affecting system precision are analyzed across all components of the instrument, and a precision analysis model is established. Based on the error distribution, the Monte Carlo method is used to analyze the relationship between the comprehensive error and the change of the center coordinate of the circular target image. The method can further guide the error distribution, optimize and control the factors that have greater influence on the comprehensive error, and improve the measurement accuracy of the optical-axis-parallelism digital calibration instrument.
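    A Monte Carlo precision analysis of this kind draws each error source from its assumed distribution and observes the spread of the combined result. The sketch below uses hypothetical error sources and magnitudes, not the paper's actual budget:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Hypothetical independent error sources (standard deviations in pixels)
# contributing to the measured center coordinate of the circular target image.
centroid_noise = rng.normal(0.0, 0.05, N)   # image centroiding error
mount_misalign = rng.normal(0.0, 0.10, N)   # mechanical mounting error
thermal_drift = rng.normal(0.0, 0.03, N)    # temperature drift

total = centroid_noise + mount_misalign + thermal_drift
print(f"combined sigma ~ {total.std():.3f} px")

# Independent Gaussian sources add in quadrature, so the Monte Carlo
# estimate should approach sqrt(0.05^2 + 0.10^2 + 0.03^2).
rss = (0.05**2 + 0.10**2 + 0.03**2) ** 0.5
```

Ranking each source's contribution (here, the mounting term dominates) is what "guides the error distribution" toward the factors worth controlling.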

  2. Preconcentration for Improved Long-Term Monitoring of Contaminants in Groundwater: Sorbent Development

    DTIC Science & Technology

    2013-02-11

    calibration curves was ±5%. Ion chromatography (IC) was used for analysis of perchlorate and other ionic targets. Analysis was carried out on a... The methods utilize liquid or gas chromatography, techniques that do not lend themselves well to portable devices and methods. Portable methods are...

  3. Improving statistical inference on pathogen densities estimated by quantitative molecular methods: malaria gametocytaemia as a case study.

    PubMed

    Walker, Martin; Basáñez, María-Gloria; Ouédraogo, André Lin; Hermsen, Cornelus; Bousema, Teun; Churcher, Thomas S

    2015-01-16

    Quantitative molecular methods (QMMs) such as quantitative real-time polymerase chain reaction (q-PCR), reverse-transcriptase PCR (qRT-PCR) and quantitative nucleic acid sequence-based amplification (QT-NASBA) are increasingly used to estimate pathogen density in a variety of clinical and epidemiological contexts. These methods are often classified as semi-quantitative, yet estimates of reliability or sensitivity are seldom reported. Here, a statistical framework is developed for assessing the reliability (uncertainty) of pathogen densities estimated using QMMs and the associated diagnostic sensitivity. The method is illustrated with quantification of Plasmodium falciparum gametocytaemia by QT-NASBA. The reliability of pathogen (e.g. gametocyte) densities, and the accompanying diagnostic sensitivity, estimated by two contrasting statistical calibration techniques are compared: a traditional method and a mixed-model Bayesian approach. The latter accounts for statistical dependence of QMM assays run under identical laboratory protocols and permits structural modelling of experimental measurements, allowing precision to vary with pathogen density. Traditional calibration cannot account for inter-assay variability arising from imperfect QMMs and generates estimates of pathogen density that have poor reliability, are variable among assays and inaccurately reflect diagnostic sensitivity. The Bayesian mixed-model approach assimilates information from replica QMM assays, improving reliability and inter-assay homogeneity, providing an accurate appraisal of quantitative and diagnostic performance. Bayesian mixed-model statistical calibration supersedes traditional techniques in the context of QMM-derived estimates of pathogen density, offering the potential to improve substantially the depth and quality of clinical and epidemiological inference for a wide variety of pathogens.

  4. In situ calibration of the foil detector for an infrared imaging video bolometer using a carbon evaporation technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mukai, K., E-mail: mukai.kiyofumi@LHD.nifs.ac.jp; Peterson, B. J.; SOKENDAI

    The InfraRed imaging Video Bolometer (IRVB) is a useful diagnostic for the multi-dimensional measurement of plasma radiation profiles. For the application of IRVB measurement to the neutron environment in fusion plasma devices such as the Large Helical Device (LHD), in situ calibration of the thermal characteristics of the foil detector is required. Laser irradiation tests of sample foils show that the reproducibility and uniformity of the carbon coating for the foil were improved using a vacuum evaporation method. Also, the principle of the in situ calibration system was justified.

  5. A Vision-Based Self-Calibration Method for Robotic Visual Inspection Systems

    PubMed Central

    Yin, Shibin; Ren, Yongjie; Zhu, Jigui; Yang, Shourui; Ye, Shenghua

    2013-01-01

    A vision-based robot self-calibration method is proposed in this paper to evaluate the kinematic parameter errors of a robot using a visual sensor mounted on its end-effector. This approach can be performed in the industrial field without external, expensive apparatus or an elaborate setup. A robot Tool Center Point (TCP) is defined in the structural model of a line-structured laser sensor, and aligned to a reference point fixed in the robot workspace. A mathematical model is established to formulate the misalignment errors with kinematic parameter errors and TCP position errors. Based on the fixed-point constraints, the kinematic parameter errors and TCP position errors are identified with an iterative algorithm. Compared to conventional methods, the proposed method eliminates the need for robot base-frame and hand-to-eye calibrations, shortens the error propagation chain, and makes the calibration process more accurate and convenient. A validation experiment is performed on an ABB IRB2400 robot. An optimal configuration of the number and distribution of fixed points in the robot workspace is obtained based on the experimental results. Comparative experiments reveal a significant improvement in the measuring accuracy of the robotic visual inspection system. PMID:24300597

  6. Wavelength-Filter Based Spectrally Calibrated Wavenumber Linearization in 1.3 μm Spectral Domain Optical Coherence Tomography.

    PubMed

    Wijeisnghe, Ruchire Eranga Henry; Cho, Nam Hyun; Park, Kibeom; Shin, Yongseung; Kim, Jeehyun

    2013-12-01

    In this study, we demonstrate an enhanced spectral calibration method for 1.3 μm spectral-domain optical coherence tomography (SD-OCT). The calibration method using a wavelength filter simplifies the SD-OCT system, and the axial resolution and the overall speed of the OCT system can be dramatically improved as well. An externally connected wavelength filter is utilized to obtain the mapping between wavenumber and pixel position. During the calibration process the wavelength filter is placed after a broadband source, connected through an optical circulator. The filtered spectrum, with a narrow line width of 0.5 nm, is detected using a line-scan camera. The method does not require a filter or a software recalibration algorithm for imaging, as it simply resamples the OCT signal from the detector array without employing rescaling or interpolation methods. One of the main drawbacks of SD-OCT, the broadening of the point spread functions (PSFs) with increasing imaging depth, can be compensated by increasing the wavenumber-linearization order. The sensitivity of our system was measured at 99.8 dB at an imaging depth of 2.1 mm compared with the uncompensated case.
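    Wavenumber linearization in general resamples a spectrum acquired uniformly in pixel (hence non-uniformly in k) onto an evenly spaced k grid before the Fourier transform. A generic NumPy sketch, with an analytic pixel-to-wavelength map standing in for the filter-derived calibration:

```python
import numpy as np

# Hypothetical pixel -> wavelength map (in reality derived from a few
# narrow filtered lines of known wavelength); ~1.3 um band.
n_pix = 2048
wavelength_nm = np.linspace(1250.0, 1350.0, n_pix)
k = 2.0 * np.pi / wavelength_nm            # wavenumber at each pixel (descending)

# A raw spectral fringe sampled uniformly in pixel, non-uniformly in k.
raw = np.cos(800.0 * k)

# Resample onto a uniform wavenumber grid; this is the "wavenumber
# linearization" that keeps the PSF from broadening with depth.
k_lin = np.linspace(k.min(), k.max(), n_pix)
linearized = np.interp(k_lin, k[::-1], raw[::-1])   # np.interp needs ascending x
```

After this step an FFT of `linearized` along `k_lin` yields the depth profile; higher-order fits of the k map (the "linearization order" above) tighten the PSF further.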

  7. Interpretation of the rainbow color scale for quantitative medical imaging: perceptually linear color calibration (CSDF) versus DICOM GSDF

    NASA Astrophysics Data System (ADS)

    Chesterman, Frédérique; Manssens, Hannah; Morel, Céline; Serrell, Guillaume; Piepers, Bastian; Kimpe, Tom

    2017-03-01

    Medical displays for primary diagnosis are calibrated to the DICOM GSDF, but there is no accepted standard today that describes how display systems for medical modalities involving color should be calibrated. Recently the Color Standard Display Function (CSDF), a calibration using the CIEDE2000 color difference metric to make a display as perceptually linear as possible, has been proposed. In this work we present the results of a first observer study set up to investigate the interpretation accuracy of a rainbow color scale when a medical display is calibrated to CSDF versus DICOM GSDF, and a second observer study set up to investigate the detectability of color differences when a medical display is calibrated to CSDF, DICOM GSDF and sRGB. The results of the first study indicate that the error when interpreting a rainbow color scale is lower for CSDF than for DICOM GSDF, with a statistically significant difference (Mann-Whitney U test) for eight out of twelve observers. The results correspond to what is expected based on CIEDE2000 color differences between consecutive colors along the rainbow color scale for both calibrations. The results of the second study indicate a statistically significant improvement in detecting color differences when a display is calibrated to CSDF compared to DICOM GSDF, and a (non-significant) trend indicating improved detection for CSDF compared to sRGB. To our knowledge this is the first work that shows the added value of a perceptual color calibration method (CSDF) in interpreting medical color images using the rainbow color scale. Improved interpretation of the rainbow color scale may be beneficial in the area of quantitative medical imaging (e.g.
PET SUV, quantitative MRI and CT and doppler US), where a medical specialist needs to interpret quantitative medical data based on a color scale and/or detect subtle color differences and where improved interpretation accuracy and improved detection of color differences may contribute to a better diagnosis. Our results indicate that for diagnostic applications involving both grayscale and color images, CSDF should be chosen over DICOM GSDF and sRGB as it assures excellent detection for color images and at the same time maintains DICOM GSDF for grayscale images.

  8. An Empirical Approach to Ocean Color Data: Reducing Bias and the Need for Post-Launch Radiometric Re-Calibration

    NASA Technical Reports Server (NTRS)

    Gregg, Watson W.; Casey, Nancy W.; O'Reilly, John E.; Esaias, Wayne E.

    2009-01-01

    A new empirical approach is developed for ocean color remote sensing. Called the Empirical Satellite Radiance-In situ Data (ESRID) algorithm, the approach uses relationships between satellite water-leaving radiances and in situ data after full processing, i.e., at Level-3, to improve estimates of surface variables while relaxing requirements on post-launch radiometric re-calibration. The approach is evaluated using SeaWiFS chlorophyll, which is the longest time series of the most widely used ocean color geophysical product. The results suggest that ESRID 1) drastically reduces the bias of ocean chlorophyll, most impressively in coastal regions, 2) modestly improves the uncertainty, and 3) reduces the sensitivity of global annual median chlorophyll to changes in radiometric re-calibration. Simulated calibration errors of 1% or less produce small changes in global median chlorophyll (less than 2.7%). In contrast, the standard NASA algorithm set is highly sensitive to radiometric calibration: similar 1% calibration errors produce changes in global median chlorophyll up to nearly 25%. We show that 0.1% radiometric calibration error (about 1% in water-leaving radiance) is needed to prevent radiometric calibration errors from changing global annual median chlorophyll more than the maximum interannual variability observed in the SeaWiFS 9-year record (+/- 3%), using the standard method. This is much more stringent than the goal for SeaWiFS of 5% uncertainty for water leaving radiance. The results suggest ocean color programs might consider less emphasis of expensive efforts to improve post-launch radiometric re-calibration in favor of increased efforts to characterize in situ observations of ocean surface geophysical products. Although the results here are focused on chlorophyll, in principle the approach described by ESRID can be applied to any surface variable potentially observable by visible remote sensing.

  9. Local Adaptive Calibration of the GLASS Surface Incident Shortwave Radiation Product Using Smoothing Spline

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Liang, S.; Wang, G.

    2015-12-01

    Incident solar radiation (ISR) over the Earth's surface plays an important role in determining the Earth's climate and environment. Generally, ISR can be obtained from direct measurements, remotely sensed data, or reanalysis and general circulation model (GCM) data. Each type of product has advantages and limitations: surface direct measurements are accurate but spatially sparse, whereas other global products may have large uncertainties. Ground measurements have normally been used for validation and occasionally calibration, but transforming their "true values" spatially to improve the satellite products is still a new and challenging topic. In this study, an improved thin-plate smoothing spline approach is presented to locally "calibrate" the Global LAnd Surface Satellite (GLASS) ISR product using ISR data reconstructed from surface meteorological measurements. The influence of surface elevation on ISR estimation was also considered in the proposed method. The point-based surface reconstructed ISR was used as the response variable, and the GLASS ISR product and the surface elevation data at the corresponding locations as explanatory variables to train the thin-plate spline model. We evaluated the performance of the approach using the cross-validation method at both daily and monthly time scales over China. We also evaluated the estimated ISR based on the thin-plate spline method using independent ground measurements at 10 sites from the Coordinated Enhanced Observation Network (CEON). These validation results indicated that the thin-plate smoothing spline method can be effectively used for calibrating satellite-derived ISR products using ground measurements to achieve better accuracy.

  10. Stellar Color Regression: A Spectroscopy-based Method for Color Calibration to a Few Millimagnitude Accuracy and the Recalibration of Stripe 82

    NASA Astrophysics Data System (ADS)

    Yuan, Haibo; Liu, Xiaowei; Xiang, Maosheng; Huang, Yang; Zhang, Huihua; Chen, Bingqiu

    2015-02-01

    In this paper we propose a spectroscopy-based stellar color regression (SCR) method to perform accurate color calibration for modern imaging surveys, taking advantage of millions of stellar spectra now available. The method is straightforward, insensitive to systematic errors in the spectroscopically determined stellar atmospheric parameters, applicable to regions that are effectively covered by spectroscopic surveys, and capable of delivering an accuracy of a few millimagnitudes for color calibration. As an illustration, we have applied the method to the Sloan Digital Sky Survey (SDSS) Stripe 82 data. With a total number of 23,759 spectroscopically targeted stars, we have mapped out the small but strongly correlated color zero-point errors present in the photometric catalog of Stripe 82, and we improve the color calibration by a factor of two to three. Our study also reveals some small but significant magnitude dependence errors in the z band for some charge-coupled devices (CCDs). Such errors are likely to be present in all the SDSS photometric data. Our results are compared with those from a completely independent test based on the intrinsic colors of red galaxies presented by Ivezić et al. The comparison, as well as other tests, shows that the SCR method has achieved a color calibration internally consistent at a level of about 5 mmag in u - g, 3 mmag in g - r, and 2 mmag in r - i and i - z. Given the power of the SCR method, we discuss briefly the potential benefits by applying the method to existing, ongoing, and upcoming imaging surveys.

  11. A new methodology based on sensitivity analysis to simplify the recalibration of functional-structural plant models in new conditions.

    PubMed

    Mathieu, Amélie; Vidal, Tiphaine; Jullien, Alexandra; Wu, QiongLi; Chambon, Camille; Bayol, Benoit; Cournède, Paul-Henry

    2018-06-19

    Functional-structural plant models (FSPMs) describe explicitly the interactions between plants and their environment at organ to plant scale. However, the high level of description of the structure or model mechanisms makes this type of model very complex and hard to calibrate. A two-step methodology to facilitate the calibration process is proposed here. First, a global sensitivity analysis method was applied to the calibration loss function. It provided first-order and total-order sensitivity indexes that allow parameters to be ranked by importance in order to select the most influential ones. Second, the Akaike information criterion (AIC) was used to quantify the model's quality of fit after calibration with different combinations of selected parameters. The model with the lowest AIC gives the best combination of parameters to select. This methodology was validated by calibrating the model on an independent data set (same cultivar, another year) with the parameters selected in the second step. All the parameters were set to their nominal value; only the most influential ones were re-estimated. Sensitivity analysis applied to the calibration loss function is a relevant method to underline the most significant parameters in the estimation process. For the studied winter oilseed rape model, 11 out of 26 estimated parameters were selected. Then, the model could be recalibrated for a different data set by re-estimating only three parameters selected with the model selection method. Fitting only a small number of parameters dramatically increases the efficiency of recalibration, increases the robustness of the model and helps identify the principal sources of variation in varying environmental conditions. This innovative method still needs to be more widely validated but already gives interesting avenues to improve the calibration of FSPMs.
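    The AIC step of the methodology can be illustrated with a small sketch. For a least-squares fit with Gaussian errors, AIC = n·ln(RSS/n) + 2k, with k the number of fitted parameters; the candidate parameter subsets and RSS values below are hypothetical:

```python
import math

def aic_ls(rss, n, k):
    """AIC for a least-squares fit with Gaussian errors:
    AIC = n * ln(RSS / n) + 2 * k."""
    return n * math.log(rss / n) + 2 * k

# Hypothetical recalibration results: residual sum of squares obtained when
# re-estimating different subsets of the influential parameters (n = 120 obs.).
candidates = {
    ("p1",): 42.0,
    ("p1", "p2"): 30.5,
    ("p1", "p2", "p3"): 24.1,
    ("p1", "p2", "p3", "p4"): 23.9,   # barely better fit, one extra parameter
}
best = min(candidates, key=lambda ps: aic_ls(candidates[ps], 120, len(ps)))
print("selected subset:", best)
```

The 2k penalty rejects the fourth parameter here because its tiny RSS improvement does not pay for the added complexity, mirroring how the paper settles on a small re-estimated subset.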

  12. [Techniques for pixel response nonuniformity correction of CCD in interferential imaging spectrometer].

    PubMed

    Yao, Tao; Yin, Shi-Min; Xiangli, Bin; Lü, Qun-Bo

    2010-06-01

    Based on an in-depth analysis of the relative radiation scaling theorem and the acquired scaling data for pixel response nonuniformity correction of the CCD (charge-coupled device) in a spaceborne visible interferential imaging spectrometer, a pixel response nonuniformity correction method for CCDs adapted to visible and infrared interferential imaging spectrometer systems was developed, effectively resolving the engineering problem of nonuniformity correction in detector arrays for interferential imaging spectrometer systems. The quantitative impact of CCD nonuniformity on interferogram correction and recovered spectrum accuracy is also given. Furthermore, an improved method, with calibration and nonuniformity correction performed after the instrument is assembled, is proposed. The method saves time and manpower. It can correct nonuniformity caused by factors in the spectrometer system beyond the CCD's own nonuniformity, can acquire recalibration data when the working environment changes, and can also more effectively improve the nonuniformity calibration accuracy of interferential imaging
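    The classical building block behind detector nonuniformity correction is a two-point (dark/flat) calibration; whether the paper's method matches this exactly is not stated in the record, so the sketch below is the generic scheme with simulated per-pixel gain and offset:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated 4x4 detector with per-pixel gain and offset nonuniformity.
gain = rng.normal(1.0, 0.05, (4, 4))
offset = rng.normal(10.0, 2.0, (4, 4))

def acquire(scene):
    return gain * scene + offset   # what the nonuniform detector records

# Calibration frames: dark (no light) and flat (uniform illumination).
dark = acquire(np.zeros((4, 4)))
flat = acquire(np.full((4, 4), 100.0))

def correct(raw):
    """Two-point nonuniformity correction: subtract dark, normalise by flat."""
    resp = flat - dark
    return (raw - dark) / resp * resp.mean()

scene = np.full((4, 4), 57.0)
corrected = correct(acquire(scene))
print(corrected.std())   # per-pixel spread collapses to ~0 after correction
```

Re-acquiring the dark and flat frames with the instrument assembled is what lets the correction absorb nonuniformity introduced by the rest of the optical system, not just the CCD.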

  13. Balance Calibration – A Method for Assigning a Direct-Reading Uncertainty to an Electronic Balance.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mike Stears

    2010-07-01

    Paper Title: Balance Calibration – A method for assigning a direct-reading uncertainty to an electronic balance. Intended Audience: Those who calibrate or use electronic balances. Abstract: As a calibration facility, we provide on-site (at the customer’s location) calibrations of electronic balances for customers within our company. In our experience, most of our customers are not using their balance as a comparator, but simply putting an unknown quantity on the balance and reading the displayed mass value. Manufacturer’s specifications for balances typically include specifications such as readability, repeatability, linearity, and sensitivity temperature drift, but what does this all mean when the balance user simply reads the displayed mass value and accepts the reading as the true value? This paper discusses a method for assigning a direct-reading uncertainty to a balance based upon the observed calibration data and the environment where the balance is being used. The method requires input from the customer regarding the environment where the balance is used and encourages discussion with the customer regarding sources of uncertainty and possible means for improvement; the calibration process becomes an educational opportunity for the balance user as well as calibration personnel. This paper will cover the uncertainty analysis applied to the calibration weights used for the field calibration of balances; the uncertainty is calculated over the range of environmental conditions typically encountered in the field and the resulting range of air density. The temperature stability in the area of the balance is discussed with the customer and the temperature range over which the balance calibration is valid is decided upon; the decision is based upon the uncertainty needs of the customer and the desired rigor in monitoring by the customer. 
Once the environmental limitations are decided, the calibration is performed and the measurement data is entered into a custom spreadsheet. The spreadsheet uses measurement results, along with the manufacturer’s specifications, to assign a direct-read measurement uncertainty to the balance. The fact that the assigned uncertainty is a best-case uncertainty is discussed with the customer; the assigned uncertainty contains no allowance for contributions associated with the unknown weighing sample, such as density, static charges, magnetism, etc. The attendee will learn uncertainty considerations associated with balance calibrations along with one method for assigning an uncertainty to a balance used for non-comparison measurements.
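    A direct-reading uncertainty of this kind is typically built by root-sum-squaring the standard-uncertainty contributions and applying a coverage factor. The budget below is a simplified, hypothetical sketch, not the paper's spreadsheet:

```python
import math

# Hypothetical uncertainty budget for a direct-reading balance (grams).
components = {
    "readability": 0.001 / math.sqrt(3),   # rectangular distribution
    "repeatability": 0.0015,               # std dev from calibration data
    "linearity": 0.002 / math.sqrt(3),     # rectangular distribution
    "cal_weight": 0.0005,                  # standard uncertainty of the weights
}

# Combine in quadrature, then expand with coverage factor k = 2 (~95 %).
u_combined = math.sqrt(sum(u * u for u in components.values()))
U_expanded = 2.0 * u_combined
print(f"assigned direct-reading uncertainty: +/- {U_expanded:.4f} g")
```

As the abstract stresses, a figure like this is a best-case value: sample-dependent effects (density, static, magnetism) are deliberately outside the budget.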

  14. Improving regression-model-based streamwater constituent load estimates derived from serially correlated data

    USGS Publications Warehouse

    Aulenbach, Brent T.

    2013-01-01

    A regression-model based approach is a commonly used, efficient method for estimating streamwater constituent load when there is a relationship between streamwater constituent concentration and continuous variables such as streamwater discharge, season and time. A subsetting experiment using a 30-year dataset of daily suspended sediment observations from the Mississippi River at Thebes, Illinois, was performed to determine optimal sampling frequency, model calibration period length, and regression model methodology, as well as to determine the effect of serial correlation of model residuals on load estimate precision. Two regression-based methods were used to estimate streamwater loads, the Adjusted Maximum Likelihood Estimator (AMLE), and the composite method, a hybrid load estimation approach. While both methods accurately and precisely estimated loads at the model’s calibration period time scale, precisions were progressively worse at shorter reporting periods, from annually to monthly. Serial correlation in model residuals resulted in observed AMLE precision to be significantly worse than the model calculated standard errors of prediction. The composite method effectively improved upon AMLE loads for shorter reporting periods, but required a sampling interval of at least 15-days or shorter, when the serial correlations in the observed load residuals were greater than 0.15. AMLE precision was better at shorter sampling intervals and when using the shortest model calibration periods, such that the regression models better fit the temporal changes in the concentration–discharge relationship. The models with the largest errors typically had poor high flow sampling coverage resulting in unrepresentative models. Increasing sampling frequency and/or targeted high flow sampling are more efficient approaches to ensure sufficient sampling and to avoid poorly performing models, than increasing calibration period length.

  15. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Ryan B.; Clegg, Samuel M.; Frydenvang, Jens

    We report that accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. Lastly, the sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
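    The blending idea can be sketched in miniature with 1-D polynomial fits standing in for PLS on full spectra: a coarse full-range model picks the composition regime, and the matching limited-range sub-model supplies the final prediction. All data and the blend rule below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic "spectra": one feature x related to composition y, with the
# relationship differing between low and high composition ranges
# (a crude stand-in for matrix effects).
y = rng.uniform(0.0, 100.0, 300)
x = np.where(y < 50.0, 0.8 * y, 40.0 + 1.2 * (y - 50.0)) + rng.normal(0, 1.0, 300)

low, high = y < 50.0, y >= 50.0
full = np.polyfit(x, y, 1)                 # full-range model (coarse)
sub_lo = np.polyfit(x[low], y[low], 1)     # sub-model, low-composition targets
sub_hi = np.polyfit(x[high], y[high], 1)   # sub-model, high-composition targets

def predict(xs):
    guess = np.polyval(full, xs)           # full model picks the regime
    wgt = np.clip((guess - 40.0) / 20.0, 0.0, 1.0)   # smooth blend near boundary
    return (1.0 - wgt) * np.polyval(sub_lo, xs) + wgt * np.polyval(sub_hi, xs)

test_y = np.array([10.0, 45.0, 80.0])
test_x = np.where(test_y < 50, 0.8 * test_y, 40 + 1.2 * (test_y - 50))
print(predict(test_x))   # should track [10, 45, 80] closely
```

Smoothly weighting the two sub-models near the range boundary, rather than switching hard, avoids discontinuities when the full-range model's regime guess is slightly off.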

  17. Importance of Calibration Method in Central Blood Pressure for Cardiac Structural Abnormalities.

    PubMed

    Negishi, Kazuaki; Yang, Hong; Wang, Ying; Nolan, Mark T; Negishi, Tomoko; Pathan, Faraz; Marwick, Thomas H; Sharman, James E

    2016-09-01

    Central blood pressure (CBP) independently predicts cardiovascular risk, but calibration methods may affect accuracy of central systolic blood pressure (CSBP). Standard central systolic blood pressure (Stan-CSBP) from peripheral waveforms is usually derived with calibration using brachial SBP and diastolic BP (DBP). However, calibration using oscillometric mean arterial pressure (MAP) and DBP (MAP-CSBP) is purported to provide more accurate representation of true invasive CSBP. This study sought to determine which derived CSBP could more accurately discriminate cardiac structural abnormalities. A total of 349 community-based patients with risk factors (71±5 years, 161 males) had CSBP measured by brachial oscillometry (Mobil-O-Graph, IEM GmbH, Stolberg, Germany) using 2 calibration methods: MAP-CSBP and Stan-CSBP. Left ventricular hypertrophy (LVH) and left atrial dilatation (LAD) were measured based on standard guidelines. MAP-CSBP was higher than Stan-CSBP (149±20 vs. 128±15 mm Hg, P < 0.0001). Although they were modestly correlated (rho = 0.74, P < 0.001), the Bland-Altman plot demonstrated a large bias (21 mm Hg) and limits of agreement (24 mm Hg). In receiver operating characteristic (ROC) curve analyses, MAP-CSBP significantly better discriminated LVH compared with Stan-CSBP (area under the curve (AUC) 0.66 vs. 0.59, P = 0.0063) and brachial SBP (0.62, P = 0.027). Continuous net reclassification improvement (NRI) (P < 0.001) and integrated discrimination improvement (IDI) (P < 0.001) corroborated superior discrimination of LVH by MAP-CSBP. Similarly, MAP-CSBP better distinguished LAD than Stan-CSBP (AUC 0.63 vs. 0.56, P = 0.005) and conventional brachial SBP (0.58, P = 0.006), whereas Stan-CSBP provided no better discrimination than conventional brachial BP (P = 0.09). CSBP is calibration dependent and when oscillometric MAP and DBP are used, the derived CSBP is a better discriminator for cardiac structural abnormalities.
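
    The Bland-Altman comparison reported above (bias and limits of agreement between the two calibrations) is straightforward to reproduce for any paired measurements. A generic sketch with made-up readings, not the study's data:

```python
import statistics

def bland_altman(a, b):
    """Return bias (mean difference) and 95% limits of agreement for paired data."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# hypothetical paired CSBP readings (mm Hg) from the two calibration methods
map_csbp  = [150.0, 148.0, 155.0, 149.0]
stan_csbp = [128.0, 130.0, 131.0, 127.0]
bias, (loa_low, loa_high) = bland_altman(map_csbp, stan_csbp)
```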
© American Journal of Hypertension, Ltd 2016. All rights reserved.

  18. Review of technological advancements in calibration systems for laser vision correction

    NASA Astrophysics Data System (ADS)

    Arba-Mosquera, Samuel; Vinciguerra, Paolo; Verma, Shwetabh

    2018-02-01

    Using PubMed and our internal database, we extensively reviewed the literature on technological advancements in calibration systems, with the aim of presenting an account of the development history and the latest developments in calibration systems used in refractive surgery laser systems. As a second aim, we explored the clinical impact of the error introduced by roughness in ablation and its corresponding effect on system calibration. The inclusion criterion for this review was strict relevance to the clinical questions under research. The existing calibration methods, including various plastic models, are highly affected by factors involved in refractive surgery such as temperature, airflow, and hydration. Surface roughness plays an important role in accurate measurement of ablation performance on calibration materials. The ratio of ablation efficiency between the human cornea and the calibration material is critical and highly dependent on the laser beam characteristics and test conditions. Objective evaluation of the calibration data and corresponding adjustment of the laser systems at regular intervals are essential for the continuing success and further improvement in outcomes of laser vision correction procedures.

  19. Absolute calorimetric calibration of low energy brachytherapy sources

    NASA Astrophysics Data System (ADS)

    Stump, Kurt E.

    In the past decade there has been a dramatic increase in the use of permanent radioactive source implants in the treatment of prostate cancer. A small radioactive source encapsulated in a titanium shell is used in this type of treatment. The radioisotopes used are generally 125I or 103Pd. Both of these isotopes have relatively short half-lives, 59.4 days and 16.99 days, respectively, and have low-energy emissions and a low dose rate. These factors make these sources well suited for this application, but the calibration of these sources poses significant metrological challenges. The current standard calibration technique involves the measurement of ionization in air to determine the source air-kerma strength. While this has proved to be an improvement over previous techniques, the method has been shown to be metrologically impure and may not be the ideal means of calibrating these sources. Calorimetric methods have long been viewed as the most fundamental means of determining source strength for a radiation source, because calorimetry provides a direct measurement of source energy. However, due to the low energy and low power of the sources described above, current calorimetric methods are inadequate. This thesis presents work oriented toward developing novel methods to provide direct and absolute measurements of source power for low-energy, low-dose-rate brachytherapy sources. The method is the first use of an actively temperature-controlled radiation absorber using the electrical substitution method to determine the total contained source power of these sources. The instrument described operates at cryogenic temperatures. The method employed provides a direct measurement of source power. The work presented here is focused upon building a metrological foundation upon which to establish power-based calibrations of clinical-strength sources. To that end, instrument performance has been assessed for these source strengths.
The intent is to establish the limits of the current instrument to direct further work in this field. It has been found that for sources with powers above approximately 2 μW the instrument is able to determine the source power to within 7% of what is expected based upon the current source strength standard. For lower power sources, the agreement is still within the uncertainty of the power measurement, but the calorimeter noise dominates. Thus, to provide absolute calibration of lower power sources, additional measures must be taken. The conclusion of this thesis describes these measures and how they will improve the factors that limit the current instrument. The results of the work presented in this thesis establish the methodology of active radiometric calorimetry for the absolute calibration of radioactive sources. The method is an improvement over previous techniques in that the source measurements do not rely upon the thermal properties of the materials used or the heat flow pathways. The initial work presented here will help to shape future refinements of this technique to allow lower power sources to be calibrated with high precision and high accuracy.
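
    Since any calibration of these sources must account for decay between measurement and clinical use, the half-lives quoted above translate directly into a decay correction. A minimal sketch (half-life values from the abstract; the initial power is hypothetical):

```python
def decayed_power(p0, t_days, half_life_days):
    """Exponential decay: P(t) = P0 * 2**(-t / T_half)."""
    return p0 * 2.0 ** (-t_days / half_life_days)

# hypothetical 10 microwatt sources at t = 0, evaluated one half-life later
p_i125  = decayed_power(10.0, 59.4, 59.4)    # I-125, T_half = 59.4 days
p_pd103 = decayed_power(10.0, 16.99, 16.99)  # Pd-103, T_half = 16.99 days
```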

  20. Calibrated Multivariate Regression with Application to Neural Semantic Basis Discovery.

    PubMed

    Liu, Han; Wang, Lie; Zhao, Tuo

    2015-08-01

    We propose a calibrated multivariate regression method named CMR for fitting high dimensional multivariate regression models. Compared with existing methods, CMR calibrates regularization for each regression task with respect to its noise level so that it simultaneously attains improved finite-sample performance and tuning insensitiveness. Theoretically, we provide sufficient conditions under which CMR achieves the optimal rate of convergence in parameter estimation. Computationally, we propose an efficient smoothed proximal gradient algorithm with a worst-case numerical rate of convergence O(1/ϵ), where ϵ is a pre-specified accuracy of the objective function value. We conduct thorough numerical simulations to illustrate that CMR consistently outperforms other high dimensional multivariate regression methods. We also apply CMR to solve a brain activity prediction problem and find that it is as competitive as a handcrafted model created by human experts. The R package camel implementing the proposed method is available on the Comprehensive R Archive Network http://cran.r-project.org/web/packages/camel/.
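
    The calibration principle — scaling each task's regularization by an estimate of that task's noise level — can be illustrated with per-task ridge regression on a single predictor, where the estimate has a closed form. This is a toy sketch of the idea only, not the CMR algorithm itself, which solves a joint convex program with the smoothed proximal gradient method described above:

```python
import math

def ridge_1d(x, y, lam):
    """Closed-form ridge estimate for y ≈ b*x (no intercept)."""
    return sum(xi * yi for xi, yi in zip(x, y)) / (sum(xi * xi for xi in x) + lam)

def calibrated_fits(x, tasks, base_lam):
    """Fit each task with a penalty scaled to that task's estimated noise level."""
    fits = []
    for y in tasks:
        b0 = ridge_1d(x, y, 0.0)                      # pilot (unregularized) fit
        resid = [yi - b0 * xi for xi, yi in zip(x, y)]
        sigma = math.sqrt(sum(r * r for r in resid) / len(resid))  # noise estimate
        fits.append(ridge_1d(x, y, base_lam * sigma))  # calibrated penalty
    return fits

x = [1.0, 2.0, 3.0, 4.0]
noisy = [2.1, 3.8, 6.3, 7.9]   # task with noise around y = 2x
clean = [2.0, 4.0, 6.0, 8.0]   # noiseless task, y = 2x exactly
b_noisy, b_clean = calibrated_fits(x, [noisy, clean], base_lam=1.0)
```

    The noiseless task receives zero penalty and is recovered exactly, while the noisy task is shrunk in proportion to its residual scale — the "tuning insensitiveness" the abstract refers to.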

  1. Self-spectral calibration for spectral domain optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Zhang, Xianling; Gao, Wanrong; Bian, Haiyi; Chen, Chaoliang; Liao, Jiuling

    2013-06-01

    A different real-time self-wavelength calibration method for spectral domain optical coherence tomography is presented in which interference spectra measured from two arbitrary points on the tissue surface are used for calibration. The method takes advantage of two favorable properties of the optical coherence tomography (OCT) signal. First, the signal back-scattered from the tissue surface is generally much stronger than that from positions in the tissue interior, so the spectral component of the surface interference can be extracted from the measured spectrum. Second, the tissue surface is not a plane, so a phase difference exists between the light reflected from two different points on the surface. Compared with the zero-crossing automatic method, the introduced method has the advantage of removing the error due to dispersion mismatch or the common phase error. The method is tested experimentally to demonstrate the improved signal-to-noise ratio, higher axial resolution, and slower sensitivity degradation with depth when compared to the zero-crossing method, and is applied to two-dimensional cross-sectional images of human finger skin.

  2. A novel 360-degree shape measurement using a simple setup with two mirrors and a laser MEMS scanner

    NASA Astrophysics Data System (ADS)

    Jin, Rui; Zhou, Xiang; Yang, Tao; Li, Dong; Wang, Chao

    2017-09-01

    360-degree shape measurement technology plays an important role in the field of three-dimensional optical metrology. Traditional optical 360-degree shape measurement methods fall into two main categories: the first places multiple scanners around the object to achieve 360-degree measurement; the second uses a high-precision rotating device to obtain the 360-degree shape model. The former increases the number of scanners and is costly, while the rotating devices of the latter are time-consuming. This paper presents a 360-degree shape measurement method that is fully static, fast, and low cost. The measuring system consists of two mirrors at a certain angle, a laser projection system, a stereoscopic calibration block, and two cameras. Most importantly, the laser MEMS scanner achieves precise movement of the laser stripes without any mechanical motion, improving measurement accuracy and efficiency. Furthermore, a novel stereo calibration technique presented in this paper achieves point cloud registration, yielding the 360-degree model of objects. A stereoscopic calibration block with special coded patterns on its six sides is used in this stereo calibration method, through which the 360-degree models of objects can be obtained quickly.

  3. Measurement of large steel plates based on linear scan structured light scanning

    NASA Astrophysics Data System (ADS)

    Xiao, Zhitao; Li, Yaru; Lei, Geng; Xi, Jiangtao

    2018-01-01

    A measuring method based on linear structured light scanning is proposed to achieve accurate measurement of the complex internal shape of large steel plates. Firstly, using a calibration plate with round marks, an improved line-scanning calibration method is designed, through which the internal and external parameters of the camera are determined. Secondly, images of the steel plates are acquired by a line scan camera; the Canny edge detection method is used to extract approximate contours of the steel plate images, and a Gauss fitting algorithm is used to extract the sub-pixel edges of the steel plate contours. Thirdly, to address the problem of inaccurate restoration of contour size, the horizontal and vertical error curves of the images are obtained by measuring the distance between adjacent points in a grid of known dimensions. Finally, these horizontal and vertical error curves are used to correct the contours of the steel plates, and, combined with the internal and external calibration parameters, the size of the contours is calculated. The experimental results demonstrate that the proposed method achieves an error of 1 mm/m in a 1.2 m × 2.6 m field of view, which satisfies the demands of industrial measurement.
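
    The sub-pixel edge step can be sketched as peak interpolation on the gradient profile: after Canny gives a pixel-level edge position, a three-point fit around the gradient maximum refines the location. The sketch below uses a parabolic fit (the paper uses a Gauss fit; the three-point parabola is the simpler standard variant and is exact for parabolic profiles):

```python
def subpixel_peak(g, i):
    """Refine peak index i of a gradient profile g with a 3-point parabolic fit."""
    denom = g[i - 1] - 2.0 * g[i] + g[i + 1]
    if denom == 0.0:
        return float(i)  # flat neighbourhood: no refinement possible
    return i + 0.5 * (g[i - 1] - g[i + 1]) / denom

# synthetic gradient profile: a parabola whose true peak lies at x = 2.3
g = [-(x - 2.3) ** 2 for x in range(5)]
i = max(range(1, len(g) - 1), key=g.__getitem__)  # pixel-level peak
x_edge = subpixel_peak(g, i)
```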

  4. New robust bilinear least squares method for the analysis of spectral-pH matrix data.

    PubMed

    Goicoechea, Héctor C; Olivieri, Alejandro C

    2005-07-01

    A new second-order multivariate method has been developed for the analysis of spectral-pH matrix data, based on a bilinear least-squares (BLLS) model that achieves the second-order advantage and handles multiple calibration standards. A simulated Monte Carlo study of synthetic absorbance-pH data allowed comparison of the newly proposed BLLS methodology with constrained parallel factor analysis (PARAFAC) and with the combined multivariate curve resolution-alternating least-squares (MCR-ALS) technique under different conditions of sample-to-sample pH mismatch and analyte-background ratio. The results indicate improved prediction ability for the new method. Experimental data generated by measuring absorption spectra of several calibration standards of ascorbic acid and samples of orange juice were subjected to second-order calibration analysis with PARAFAC, MCR-ALS, and the new BLLS method. The results indicate that the latter method provides the best analytical results with regard to analyte recovery in samples of complex composition requiring strict adherence to the second-order advantage. Linear dependencies appear when multivariate data are produced by using the pH or a reaction time as one of the data dimensions, posing a challenge to classical multivariate calibration models. The presently discussed algorithm is useful for these latter systems.
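
    At its core, a bilinear model approximates the data matrix (spectra × pH) as a sum of outer products of a spectral profile and a pH profile. A rank-1 alternating least-squares sketch in pure Python, with synthetic data (the actual BLLS method handles multiple components, multiple standards, and the second-order advantage, none of which are shown here):

```python
def rank1_als(D, iters=50):
    """Alternate x and y least-squares updates so that D ≈ outer(x, y)."""
    rows, cols = len(D), len(D[0])
    y = [1.0] * cols
    for _ in range(iters):
        yy = sum(v * v for v in y)
        x = [sum(D[i][j] * y[j] for j in range(cols)) / yy for i in range(rows)]
        xx = sum(v * v for v in x)
        y = [sum(D[i][j] * x[i] for i in range(rows)) / xx for j in range(cols)]
    return x, y

# synthetic rank-1 data: spectral profile times pH profile
spec = [1.0, 2.0, 3.0]
ph = [0.5, 1.0, 1.5, 2.0]
D = [[s * p for p in ph] for s in spec]
x, y = rank1_als(D)
recon_err = max(abs(D[i][j] - x[i] * y[j]) for i in range(3) for j in range(4))
```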

  5. Synthetic aperture imaging in ultrasound calibration

    NASA Astrophysics Data System (ADS)

    Ameri, Golafsoun; Baxter, John S. H.; McLeod, A. Jonathan; Jayaranthe, Uditha L.; Chen, Elvis C. S.; Peters, Terry M.

    2014-03-01

    Ultrasound calibration allows for ultrasound images to be incorporated into a variety of interventional applications. Traditional Z-bar calibration procedures rely on wired phantoms with an a priori known geometry. The line fiducials produce small, localized echoes which are then segmented from an array of ultrasound images from different tracked probe positions. In conventional B-mode ultrasound, the wires at greater depths appear blurred and are difficult to segment accurately, limiting the accuracy of ultrasound calibration. This paper presents a novel ultrasound calibration procedure that takes advantage of synthetic aperture imaging to reconstruct high resolution ultrasound images at arbitrary depths. In these images, line fiducials are much more readily and accurately segmented, leading to decreased calibration error. The proposed calibration technique is compared to one based on B-mode ultrasound. The fiducial localization error was improved from 0.21 mm in conventional B-mode images to 0.15 mm in synthetic aperture images, corresponding to an improvement of 29%. This resulted in an overall reduction of calibration error from a target registration error of 2.00 mm to 1.78 mm, an improvement of 11%. Synthetic aperture images display greatly improved segmentation capabilities due to their improved resolution and interpretability, resulting in improved calibration.

  6. Multiplexed MRM-Based Protein Quantitation Using Two Different Stable Isotope-Labeled Peptide Isotopologues for Calibration.

    PubMed

    LeBlanc, André; Michaud, Sarah A; Percy, Andrew J; Hardie, Darryl B; Yang, Juncong; Sinclair, Nicholas J; Proudfoot, Jillaine I; Pistawka, Adam; Smith, Derek S; Borchers, Christoph H

    2017-07-07

    When quantifying endogenous plasma proteins for fundamental and biomedical research - as well as for clinical applications - precise, reproducible, and robust assays are required. Targeted detection of peptides in a bottom-up strategy is the most common and precise mass spectrometry-based quantitation approach when combined with the use of stable isotope-labeled peptides. However, when measuring protein in plasma, the unknown endogenous levels prevent the implementation of the best calibration strategies, since no blank matrix is available. Consequently, several alternative calibration strategies are employed by different laboratories. In this study, these methods were compared to a new approach using two different stable isotope-labeled standard (SIS) peptide isotopologues for each endogenous peptide to be quantified, enabling an external calibration curve as well as the quality control samples to be prepared in pooled human plasma without interference from endogenous peptides. This strategy improves the analytical performance of the assay and enables the accuracy of the assay to be monitored, which can also facilitate method development and validation.

  7. Precise X-ray and video overlay for augmented reality fluoroscopy.

    PubMed

    Chen, Xin; Wang, Lejing; Fallavollita, Pascal; Navab, Nassir

    2013-01-01

    The camera-augmented mobile C-arm (CamC) augments any mobile C-arm by a video camera and mirror construction and provides a co-registration of X-ray with video images. The accurate overlay between these images is crucial to high-quality surgical outcomes. In this work, we propose a practical solution that improves the overlay accuracy for any C-arm orientation by: (i) improving the existing CamC calibration, (ii) removing distortion effects, and (iii) accounting for the mechanical sagging of the C-arm gantry due to gravity. A planar phantom is constructed and placed at different distances to the image intensifier in order to obtain the optimal homography that co-registers X-ray and video with a minimum error. To alleviate distortion, both X-ray calibration based on equidistant grid model and Zhang's camera calibration method are implemented for distortion correction. Lastly, the virtual detector plane (VDP) method is adapted and integrated to reduce errors due to the mechanical sagging of the C-arm gantry. The overlay errors are 0.38±0.06 mm when not correcting for distortion, 0.27±0.06 mm when applying Zhang's camera calibration, and 0.27±0.05 mm when applying X-ray calibration. Lastly, when taking into account all angular and orbital rotations of the C-arm, as well as correcting for distortion, the overlay errors are 0.53±0.24 mm using VDP and 1.67±1.25 mm excluding VDP. The augmented reality fluoroscope achieves an accurate video and X-ray overlay when applying the optimal homography calculated from distortion correction using X-ray calibration together with the VDP.

  8. Estimation of future flow regime for a spatially varied Himalayan watershed using improved multi-site calibration method of SWAT model.

    NASA Astrophysics Data System (ADS)

    Pradhanang, S. M.; Hasan, M. A.; Booth, P.; Fallatah, O.

    2016-12-01

    The monsoon- and snow-driven regime of the Himalayan region has received increasing attention in the recent decade regarding the effects of climate change on hydrologic regimes. Modeling streamflow in such a spatially varied catchment requires proper calibration and validation in hydrologic modeling. While calibration and validation are time consuming and computationally intensive, an effective regionalized approach with multi-site information is crucial for flow estimation, especially at the daily scale. In this study, we adopted a multi-site approach to calibration and validation of the Soil and Water Assessment Tool (SWAT) model for the Karnali river catchment, which is characterized as the catchment most vulnerable to climate change in the Himalayan region. APHRODITE's (Asian Precipitation - Highly-Resolved Observational Data Integration Towards Evaluation) daily gridded precipitation data, among the most accurate and reliable weather data for this region, were utilized in this study. The model was evaluated over the entire catchment, divided into four sub-catchments, using discharge records from 1963 to 2010. In previous studies, multi-site calibration used only a single set of calibration parameters for all sub-catchments of a large watershed. In this study, we introduce a technique that incorporates a different set of calibration parameters for each sub-basin, which ultimately improves the simulated flow for the whole watershed. Results show that the model calibrated with the new method captures an almost identical pattern of flow over the region. The predicted daily streamflow matched the observed values, with a Nash-Sutcliffe coefficient of 0.73 during the calibration period and 0.71 during the validation period. The method performed better than existing multi-site calibration methods.
To assess the influence of continued climate change on hydrologic processes, we modified the weather inputs for the model using precipitation and temperature changes for two Representative Concentration Pathways (RCP) scenarios, RCP 4.5 and RCP 8.5. Climate simulations for the RCP scenarios were conducted for 1981-2100, where 1981-2005 was considered the baseline and 2006-2100 the future projection. The results show that the probability of flooding will increase in future years due to increased flow under both scenarios.
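
    The Nash-Sutcliffe coefficients quoted above compare simulated to observed flow, and the metric itself is a one-liner. A generic sketch with made-up flow series, not the study's data:

```python
def nash_sutcliffe(obs, sim):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

obs = [10.0, 12.0, 30.0, 22.0, 15.0]           # hypothetical observed flows
sim = [11.0, 13.0, 27.0, 21.0, 16.0]           # hypothetical simulated flows
nse = nash_sutcliffe(obs, sim)
```

    An NSE of 1 indicates a perfect match, 0 means the model is no better than the observed mean, and negative values mean it is worse.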

  9. Self-Calibration Approach for Mixed Signal Circuits in Systems-on-Chip

    NASA Astrophysics Data System (ADS)

    Jung, In-Seok

    MOSFET scaling has served industry very well for a few decades by providing improvements in transistor performance, power, and cost. However, scaled systems-on-chip (SOCs) require high test complexity and cost due to several issues such as limited pin count and the integration of mixed analog and digital circuits. Self-calibration is therefore an excellent and promising method to improve yield and to reduce manufacturing cost by simplifying test complexity, because process variation effects can be addressed by means of a self-calibration technique. Since prior published calibration techniques were developed for specific targeted applications, they are not easily reused for other applications. To address these issues, this dissertation proposes several novel self-calibration design techniques for mixed-signal circuits, applied to an analog-to-digital converter (ADC) to reduce mismatch error and improve performance. ADCs are essential components in SOCs, and the proposed self-calibration approach also compensates for process variations. The proposed approach targets the successive approximation (SA) ADC. First, the offset error of the comparator in the SA-ADC is reduced by enabling a capacitor array at the input nodes for better matching. In addition, auxiliary capacitors for each capacitor of the DAC in the SA-ADC are controlled by a synthesized digital controller to minimize the mismatch error of the DAC. Since the proposed technique is applied during foreground operation, the power overhead in the SA-ADC case is minimal, because the calibration circuit is deactivated during normal operation. Another benefit of the proposed technique is that the offset voltage of the comparator is continuously adjusted at every one-bit decision step, because not only the inherent offset voltage of the comparator but also the mismatch of the DAC are compensated simultaneously.
The synthesized digital calibration control circuit operates in foreground mode, and the controller has been highly optimized for low power and better performance with a simplified structure. In addition, to increase the sampling clock frequency of the proposed self-calibration approach, a novel variable clock period method is proposed. To achieve high-speed SAR operation, a variable clock time technique is used to reduce not only peak current but also die area. The technique removes wasted conversion time and easily extends the SAR operation speed. To verify and demonstrate the proposed techniques, a prototype charge-redistribution SA-ADC with the proposed self-calibration is implemented in a 130 nm standard CMOS process. The prototype circuit's silicon area is 0.0715 mm² and it consumes 4.62 mW with a 1.2 V power supply.
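
    The successive-approximation conversion that the calibration above protects is a binary search against the DAC output. An idealized behavioural sketch (no comparator offset or capacitor mismatch; the reference voltage and resolution are illustrative):

```python
def sar_convert(vin, vref=1.0, bits=8):
    """Idealized SAR ADC: binary-search the DAC code from MSB to LSB."""
    code = 0
    for i in range(bits - 1, -1, -1):
        trial = code | (1 << i)                   # tentatively set this bit
        if trial * vref / (1 << bits) <= vin:     # comparator: DAC level vs. input
            code = trial                          # keep the bit
    return code

code = sar_convert(0.6)  # 0.6 V input, 1.0 V reference, 8 bits
```

    In the real design, comparator offset and DAC capacitor mismatch perturb the comparison at every one-bit decision step; that perturbation is what the proposed foreground calibration compensates.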

  10. On aspects of characterising and calibrating the interferometric gravitational wave detector, GEO 600

    NASA Astrophysics Data System (ADS)

    Hewitson, Martin R.

    Gravitational waves are small disturbances, or strains, in the fabric of space-time. The detection of these waves has been a major goal of modern physics since they were predicted as a consequence of Einstein's General Theory of Relativity. Large-scale astrophysical events, such as colliding neutron stars or supernovae, are predicted to release energy in the form of gravitational waves. However, even with such cataclysmic events, the strain amplitudes of the gravitational waves expected to be seen at the Earth are incredibly small: of the order of 1 part in 10^21 or less at audio frequencies. Because of these extremely small amplitudes, the search for gravitational waves remains one of the most challenging goals of modern physics. This thesis starts by detailing the data recording system of GEO 600: an essential part of producing a calibrated data set. The full data acquisition system, including all hardware and software aspects, is described in detail. Comprehensive tests of the stability and timing accuracy of the system show that it has a typical duty cycle of greater than 99% with an absolute timing accuracy (measured against GPS) of the order of 15 μs. The thesis then goes on to describe the design and implementation of a time-domain calibration method, based on the use of time-domain filters, for the power-recycled configuration of GEO 600. This time-domain method is then extended to deal with the more complicated case of calibrating the dual-recycled configuration of GEO 600. The time-domain calibration method was applied to two long data-taking (science) runs. The method proved successful in recovering (in real time) a calibrated strain time-series suitable for use in astrophysical searches. The accuracy of the calibration process was shown to be good to 10% or better across the detection band of the detector.
In principle, the time-domain method presents no restrictions on the achievable calibration accuracy; most of the uncertainty in the calibration process is shown to arise from the actuator used to inject the calibration signals. The recovered strain series was shown to be equivalent to a frequency-domain calibration at the level of a few percent. A number of ways are presented in which the initial calibration pipeline can be improved to increase the calibration accuracy. The production and subsequent distribution of a calibrated time-series allows for a single point of control over the validity and quality of the calibrated data. The techniques developed in this thesis are currently being adopted by the LIGO interferometers to perform time-domain calibration of their three long-baseline detectors. In addition, a data storage system is currently being developed by the author, together with the LIGO calibration team, to allow all the information used in the time-domain calibration process to be captured in a concise and coherent form that is consistent across multiple detectors in the LSC. (Abstract shortened by ProQuest.)

  11. Investigating temporal field sampling strategies for site-specific calibration of three soil moisture-neutron intensity parameterisation methods

    NASA Astrophysics Data System (ADS)

    Iwema, J.; Rosolem, R.; Baatz, R.; Wagener, T.; Bogena, H. R.

    2015-07-01

    The Cosmic-Ray Neutron Sensor (CRNS) can provide soil moisture information at scales relevant to hydrometeorological modelling applications. Site-specific calibration is needed to translate CRNS neutron intensities into sensor-footprint-average soil moisture contents. We investigated temporal sampling strategies for calibration of three CRNS parameterisations (modified N0, HMF, and COSMIC) by assessing the effects of the number of sampling days and of soil wetness conditions on the performance of the calibration results, using actual neutron intensity measurements from three sites with distinct climate and land use: a semi-arid site, a temperate grassland, and a temperate forest. When calibrated with 1 year of data, both COSMIC and the modified N0 method performed better than HMF. The performance of COSMIC was remarkably good at the semi-arid site in the USA, while the N0mod method performed best at the two temperate sites in Germany. The successful performance of COSMIC at all three sites can be attributed to the benefits of explicitly resolving individual soil layers (which is not accounted for in the other two parameterisations). To better calibrate these parameterisations, we recommend that in situ soil samples be collected on more than a single day. However, little improvement is observed for sampling on more than 6 days. At the semi-arid site, the N0mod method was calibrated better under site-specific average wetness conditions, whereas HMF and COSMIC were calibrated better under drier conditions. Average soil wetness conditions gave better calibration results at the two humid sites. The calibration results for the HMF method were better when calibrated with combinations of days with similar soil wetness conditions, as opposed to N0mod and COSMIC, which profited from using days with distinct wetness conditions. Errors in actual neutron intensities were translated into site-specific average soil moisture errors.
At the semi-arid site, these errors were below the typical measurement uncertainties from in situ point-scale sensors and satellite remote sensing products. Nevertheless, at the two humid sites, reduction in uncertainty with increasing sampling days only reached typical errors associated with satellite remote sensing products. The outcomes of this study can be used by researchers as a CRNS calibration strategy guideline.
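
    The shape-function calibration underlying N0-type methods can be sketched in a few lines; this is a minimal illustration using the commonly published Desilets-type coefficients (a0 = 0.0808, a1 = 0.372, a2 = 0.115) and synthetic calibration days, not the paper's modified N0 variant or its site data:

```python
# Minimal sketch: calibrating the N0 parameter of a Desilets-type
# CRNS shape function from (neutron count, soil moisture) pairs.
# Coefficients are the commonly published ones; counts and moisture
# values below are synthetic.

A0, A1, A2 = 0.0808, 0.372, 0.115

def theta_from_counts(N, N0):
    """Gravimetric soil moisture from corrected neutron counts N."""
    return A0 / (N / N0 - A1) - A2

def calibrate_N0(samples):
    """Invert the shape function per (N, theta) pair and average."""
    estimates = [N / (A0 / (theta + A2) + A1) for N, theta in samples]
    return sum(estimates) / len(estimates)

# Synthetic calibration days generated from a known N0 = 1500 counts.
true_N0 = 1500.0
days = [(true_N0 * (A1 + A0 / (th + A2)), th) for th in (0.05, 0.15, 0.30)]
N0_hat = calibrate_N0(days)
```

    Averaging inverted estimates over several sampling days mirrors the paper's finding that multi-day calibration is more robust than calibration against a single day.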

  12. A Nonlinearity Minimization-Oriented Resource-Saving Time-to-Digital Converter Implemented in a 28 nm Xilinx FPGA

    NASA Astrophysics Data System (ADS)

    Wang, Yonggang; Liu, Chong

    2015-10-01

    Because large nonlinearity errors exist in current tapped-delay line (TDL) style field programmable gate array (FPGA)-based time-to-digital converters (TDCs), bin-by-bin calibration techniques have to be employed to achieve a high measurement resolution. If the TDL in the selected FPGA is significantly affected by changes in ambient temperature, the bin-by-bin calibration table has to be updated frequently. The on-line calibration and calibration table updating increase the TDC design complexity and limit the system performance to some extent. This paper proposes a method to minimize the nonlinearity errors of TDC bins so that bin-by-bin calibration may not be needed while maintaining a reasonably high time resolution. The method is a two-pass approach: by a bin realignment, the large number of wasted zero-width bins in the original TDL is reused and the granularity of the bins is improved; by a bin decimation, the bin size is traded off against its uniformity, and the time interpolation by the delay line becomes more precise, so that bin-by-bin calibration is not necessary. Using Xilinx 28 nm FPGAs, in which the TDL properties are not very sensitive to ambient temperature, the proposed TDC achieves approximately 15 ps root-mean-square (RMS) time resolution by dual-channel measurements of time intervals over the operating temperature range. Because the calibration is removed and fewer logic resources are required for data post-processing, the method offers greater multi-channel capability.
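
    For contrast, the bin-by-bin (code-density) calibration that this design seeks to make unnecessary can be sketched as follows; a minimal illustration assuming uniformly distributed input hits, with made-up bin counts:

```python
# Illustrative code-density (statistical) calibration for a TDL TDC:
# bin widths are estimated from the hit histogram of uniformly
# distributed input events, then converted to calibrated bin centers.

def code_density_calibration(hist, clock_period):
    """Return the calibrated center time of each TDC code.

    hist: hit counts per code from uniformly random input events.
    clock_period: total time span covered by the delay line (e.g. ps).
    """
    total = sum(hist)
    widths = [clock_period * h / total for h in hist]  # per-bin LSB size
    centers, edge = [], 0.0
    for w in widths:
        centers.append(edge + w / 2.0)  # midpoint of each bin
        edge += w
    return centers

# Toy example: 4 bins of unequal width spanning a 100 ps clock period.
centers = code_density_calibration([10, 30, 40, 20], 100.0)
```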

  13. Passive Sampling Methods for Contaminated Sediments: Practical Guidance for Selection, Calibration, and Implementation

    EPA Science Inventory

    This article provides practical guidance on the use of passive sampling methods (PSMs) that target the freely dissolved concentration (Cfree) for improved exposure assessment of hydrophobic organic chemicals in sediments. Primary considerations for selecting a PSM for a specific a...

  14. Calibrating Laser Gas Measurements by Use of Natural CO2

    NASA Technical Reports Server (NTRS)

    Webster, Chris

    2003-01-01

    An improved method of calibration has been devised for instruments that utilize tunable lasers to measure the absorption spectra of atmospheric gases in order to determine the relative abundances of the gases. In this method, CO2 in the atmosphere is used as a natural calibration standard. Unlike in one prior calibration method, it is not necessary to perform calibration measurements in advance of use of the instrument and to risk deterioration of accuracy with time during use. Unlike in another prior calibration method, it is not necessary to include a calibration gas standard (and the attendant additional hardware) in the instrument and to interrupt the acquisition of atmospheric data to perform calibration measurements. In the operation of an instrument of this type, the beam from a tunable diode laser or a tunable quantum-cascade laser is directed along a path through the atmosphere, the laser is made to scan in wavelength over an infrared spectral region that contains one or two absorption spectral lines of a gas of interest, and the transmission (and, thereby, the absorption) of the beam is measured. The concentration of the gas of interest can then be calculated from the observed depth of the absorption line(s), given the temperature, pressure, and path length. CO2 is nearly ideal as a natural calibration gas for the following reasons: CO2 has numerous rotation/vibration infrared spectral lines, many of which are near absorption lines of other gases. The concentration of CO2 relative to the concentrations of the major constituents of the atmosphere is well known and varies slowly and by a small enough amount to be considered constant for calibration in the present context. Hence, absorption-spectral measurements of the concentrations of gases of interest can be normalized to the concentrations of CO2. 
Because at least one CO2 calibration line is present in every spectral scan of the laser during absorption measurements, the atmospheric CO2 serves continuously as a calibration standard for every measurement point. Figure 1 depicts simulated spectral transmission measurements in a wavenumber range that contains two absorption lines of N2O and one of CO2. The simulations were performed for two different upper-atmospheric pressures for an airborne instrument that has a path length of 80 m. The relative abundance of CO2 in air was assumed to be 360 parts per million by volume (approximately its natural level in terrestrial air). In applying the present method to measurements like these, one could average the signals from the two N2O absorption lines and normalize their magnitudes to that of the CO2 absorption line. Other gases with which this calibration method can be used include H2O, CH4, CO, NO, NO2, HOCl, C2H2, NH3, O3, and HCN. One can also take advantage of this method to eliminate an atmospheric-pressure gauge and thereby reduce the mass of the instrument: The atmospheric pressure can be calculated from the temperature, the known relative abundance of CO2, and the concentration of CO2 as measured by spectral absorption. Natural CO2 levels on Mars provide an ideal calibration standard. Figure 2 shows a second example of the application of this method to Mars atmospheric gas measurements. For sticky gases like H2O, the method is particularly powerful, since water is notoriously difficult to handle at low concentrations in pre-flight calibration procedures.
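
    The normalization idea can be sketched under the optically thin Beer-Lambert approximation, where absorbance is proportional to line strength times column density, so path length and pressure cancel in the ratio of two lines in the same scan. The line strengths, absorbance depths, and the 360 ppm CO2 level below are illustrative placeholders, not HITRAN values:

```python
# CO2-normalisation sketch under the optically thin Beer-Lambert
# approximation A = S * n * L: path length L and pressure cancel in
# the ratio of two lines measured in the same spectral scan.
# Line strengths, depths, and the CO2 level are placeholders.

X_CO2 = 360e-6  # assumed ambient CO2 mole fraction

def mixing_ratio(A_gas, S_gas, A_co2, S_co2, x_co2=X_CO2):
    """Target-gas mole fraction normalised to the in-scan CO2 line."""
    return (A_gas / S_gas) / (A_co2 / S_co2) * x_co2

# Example: target line depth twice the CO2 line's, equal line strengths.
x_n2o = mixing_ratio(A_gas=0.02, S_gas=1.0, A_co2=0.01, S_co2=1.0)
```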

  15. ASTM clustering for improving coal analysis by near-infrared spectroscopy.

    PubMed

    Andrés, J M; Bona, M T

    2006-11-15

    Multivariate analysis techniques have been applied to near-infrared (NIR) spectra of coals to investigate the relationship between nine coal properties (moisture (%), ash (%), volatile matter (%), fixed carbon (%), heating value (kcal/kg), carbon (%), hydrogen (%), nitrogen (%) and sulphur (%)) and the corresponding predictor variables. In this work, a whole set of coal samples was grouped into six more homogeneous clusters following the ASTM reference method for classification prior to the application of calibration methods to each coal set. The results obtained showed a considerable improvement in determination error compared with the calibration for the whole sample set. For some groups, the established calibrations approached the quality required by the ASTM/ISO norms for laboratory analysis. To predict property values for a new coal sample, it is necessary to assign that sample to its respective group. Thus, the ability to discriminate and classify coal samples by Diffuse Reflectance Infrared Fourier Transform Spectroscopy (DRIFTS) in the NIR range was also studied by applying Soft Independent Modelling of Class Analogy (SIMCA) and Linear Discriminant Analysis (LDA) techniques. Modelling of the groups by SIMCA led to overlapping models that cannot discriminate samples for unique classification. On the other hand, the application of Linear Discriminant Analysis improved the classification of the samples, but not enough to be satisfactory for every group considered.
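
    The LDA step can be illustrated with a minimal two-class Fisher discriminant on synthetic feature vectors standing in for NIR spectra; this is a generic sketch, not the authors' model or data:

```python
import numpy as np

# Minimal two-class Fisher LDA: project onto the discriminant
# direction w = Sw^-1 (m1 - m0) and threshold at the class midpoint.
# The "spectra" here are synthetic Gaussian feature vectors.

def fit_lda(X0, X1):
    """Return projection vector w and decision threshold c."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) * (len(X0) - 1) \
       + np.cov(X1, rowvar=False) * (len(X1) - 1)  # pooled scatter
    w = np.linalg.solve(Sw, m1 - m0)
    c = w @ (m0 + m1) / 2.0  # midpoint threshold
    return w, c

def predict(X, w, c):
    return (X @ w > c).astype(int)

rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 0.3, size=(50, 5))   # group A features
X1 = rng.normal(1.0, 0.3, size=(50, 5))   # group B features
w, c = fit_lda(X0, X1)
acc = (predict(np.vstack([X0, X1]), w, c)
       == np.array([0] * 50 + [1] * 50)).mean()
```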

  16. Hybrid dynamic radioactive particle tracking (RPT) calibration technique for multiphase flow systems

    NASA Astrophysics Data System (ADS)

    Khane, Vaibhav; Al-Dahhan, Muthanna H.

    2017-04-01

    The radioactive particle tracking (RPT) technique has been utilized to measure three-dimensional hydrodynamic parameters for multiphase flow systems. An analytical solution to the inverse problem of the RPT technique, i.e. finding the instantaneous tracer positions based upon the instantaneous counts received in the detectors, is not possible. Therefore, a calibration to obtain a counts-distance map is needed. There are major shortcomings in the conventional RPT calibration method, due to which it has limited applicability in practical applications. In this work, the design and development of a novel dynamic RPT calibration technique are carried out to overcome the shortcomings of the conventional RPT calibration method. The dynamic RPT calibration technique has been implemented around a test reactor 1 foot in diameter and 1 foot in height using a Cobalt-60 isotope tracer particle. Two sets of experiments have been carried out to test the capability of the novel dynamic RPT calibration. In the first set of experiments, a manual calibration apparatus was used to hold the tracer particle at known static locations. In the second set of experiments, the tracer particle was moved vertically downwards along a straight-line path in a controlled manner. The obtained reconstruction results for the tracer particle position were compared with the actual known positions, and the reconstruction errors were estimated. The obtained results revealed that the dynamic RPT calibration technique is capable of identifying tracer particle positions with a reconstruction error between 1 and 5.9 mm for the conditions studied, which could be improved depending on various factors outlined here.

  17. [Preparation of chicken red blood cells for calibration of flow cytometry].

    PubMed

    Yin, Jian; Zhao, Shutao; Wu, Xiaodong; Wang, Ce; Wu, Yunliang

    2013-01-01

    To prepare stable chicken red blood cells for the calibration of flow cytometry, the traditional isolation method for chicken red blood cells was modified by incorporating a gelatin technique, Ca2+-free HBSS treatment and low-speed centrifugation. The effect of fluorescence staining of the cells was improved by the addition of Triton X-100 to enhance membrane permeability and RNase enzymes to degrade RNA. The modified method was compared with the traditional method for viability of the freshly isolated cells and the DNA content coefficient of variation (CV) of the fixed cells. Chicken red blood cells obtained by the modified method showed a significantly higher viability than those obtained by the traditional method [(98.5±3.5)% vs (93.5±2.7)%, P<0.05]. After glutaraldehyde fixation, the cells isolated with the modified method were stable during 90-day preservation, with a significantly lower CV than the cells obtained by the traditional method [(6.0±0.3)% to (6.2±0.4)% vs (8.6±0.5)% to (13.1±1.4)%, P<0.01]. The chicken red blood cells isolated using the modified method are thus suitable for the calibration of flow cytometry.

  18. Maximum likelihood estimation in calibrating a stereo camera setup.

    PubMed

    Muijtjens, A M; Roos, J M; Arts, T; Hasman, A

    1999-02-01

    Motion and deformation of the cardiac wall may be measured by following the positions of implanted radiopaque markers in three dimensions, using two x-ray cameras simultaneously. Typically, calibration of the position measurement system is obtained by registering the images of a calibration object containing 10-20 radiopaque markers at known positions. Unfortunately, an accidental change of the position of a camera after calibration requires complete recalibration. Alternatively, redundant information in the measured image positions of stereo pairs can be used for calibration, so that a separate calibration procedure can be avoided. In the current study a model is developed that describes the geometry of the camera setup by five dimensionless parameters. Maximum Likelihood (ML) estimates of these parameters were obtained in an error analysis. It is shown that the ML estimates can be found by application of a nonlinear least squares procedure. Compared to the standard unweighted least squares procedure, the ML method resulted in more accurate estimates without noticeable bias. The accuracy of the ML method was investigated in relation to the object aperture. The reconstruction problem appeared to be well conditioned as long as the object aperture is larger than 0.1 rad. The angle between the two viewing directions appeared to be the parameter most likely to cause major inaccuracies in the reconstruction of the 3-D positions of the markers. Hence, attempts to improve the robustness of the method should primarily focus on reducing the error in this parameter.
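
    The nonlinear least-squares step used for the ML estimates can be sketched with a generic Gauss-Newton iteration; the toy exponential model below stands in for the authors' five-parameter camera geometry:

```python
import numpy as np

# Generic Gauss-Newton iteration for nonlinear least squares with a
# forward-difference Jacobian. The exponential fit below is a toy
# stand-in for the camera-geometry parameter estimation.

def gauss_newton(residual, x0, iters=50, eps=1e-7):
    """Minimise ||residual(x)||^2 starting from x0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)
        J = np.empty((r.size, x.size))
        for j in range(x.size):          # forward-difference Jacobian
            xp = x.copy()
            xp[j] += eps
            J[:, j] = (residual(xp) - r) / eps
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + step
        if np.linalg.norm(step) < 1e-12:
            break
    return x

# Toy check: recover amplitude and decay rate of a noise-free exponential.
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-3.0 * t)
sol = gauss_newton(lambda p: p[0] * np.exp(-p[1] * t) - y, [1.5, 2.5])
```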

  19. Network operability of ground-based microwave radiometers: Calibration and standardization efforts

    NASA Astrophysics Data System (ADS)

    Pospichal, Bernhard; Löhnert, Ulrich; Küchler, Nils; Czekala, Harald

    2017-04-01

    Ground-based microwave radiometers (MWRs) are already widely used by national weather services and research institutions around the world. Most of the instruments operate continuously, and their observations are beginning to be assimilated into atmospheric models. In particular, their potential for continuously observing boundary-layer temperature profiles as well as integrated water vapor and cloud liquid water path makes them valuable for improving short-term weather forecasts. However, until now most MWRs have been operated as stand-alone instruments. In order to benefit from a network of these instruments, standardization of calibration, operation and data formats is necessary. In the framework of TOPROF (COST Action ES1303), several efforts have been undertaken, such as uncertainty and bias assessment and calibration intercomparison campaigns. The goal was to establish protocols for providing quality-controlled (QC) MWR data and their uncertainties. To this end, standardized calibration procedures for MWRs have been developed and recommendations for radiometer users compiled. Based on the results of the TOPROF campaigns, a new, high-accuracy liquid-nitrogen calibration load has been introduced for MWRs manufactured by Radiometer Physics GmbH (RPG). The new load improves the accuracy of the measurements considerably and will lead to even more reliable atmospheric observations. Next to the recommendations for set-up, calibration and operation of ground-based MWRs within a future network, we will present homogenized methods to determine the accuracy of a running calibration, as well as means for automatic data quality control. This sets the stage for the planned microwave calibration center at JOYCE (Jülich Observatory for Cloud Evolution), which will be briefly introduced.

  20. The feasibility of using explicit method for linear correction of the particle size variation using NIR Spectroscopy combined with PLS2 regression method

    NASA Astrophysics Data System (ADS)

    Yulia, M.; Suhandy, D.

    2018-03-01

    NIR spectra obtained from a spectral data acquisition system contain both chemical information on the samples and physical information, such as particle size and bulk density. Several methods have been established for developing calibration models that can compensate for variations in the physical properties of samples. One common approach is to include physical information variation in the calibration model, either explicitly or implicitly. The objective of this study was to evaluate the feasibility of using the explicit method to compensate for the influence of different particle sizes of coffee powder on NIR calibration model performance. A total of 220 coffee powder samples with two different types of coffee (civet and non-civet) and two different particle sizes (212 and 500 µm) were prepared. Spectral data were acquired using a NIR spectrometer equipped with an integrating sphere for diffuse reflectance measurement. A discrimination method based on PLS-DA was conducted and the influence of different particle sizes on the performance of PLS-DA was investigated. In the explicit method, we directly add the particle size as a predicted variable, resulting in an X block containing only the NIR spectra and a Y block containing the particle size and the type of coffee. The explicit inclusion of the particle size in the calibration model is expected to improve the accuracy of coffee type determination. The results show that, using the explicit method, the quality of the developed calibration model for coffee type determination is slightly superior, with a coefficient of determination (R2) = 0.99 and a root mean square error of cross-validation (RMSECV) = 0.041. The performance of the PLS2 calibration model for coffee type determination with particle size compensation was quite good and able to predict the type of coffee at two different particle sizes with relatively high R2 pred values. The prediction also resulted in low bias and RMSEP values.
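
    The explicit approach, with both the coffee-type score and the particle size in the Y block, can be sketched with a compact NIPALS PLS2 implementation on synthetic spectra; the data, dimensions, and component count below are illustrative, not the article's:

```python
import numpy as np

# Compact NIPALS PLS2: a two-column Y block (e.g. type score and
# particle size) is regressed on an X block of spectra. Synthetic data
# stand in for the coffee-powder spectra.

def pls2_fit(X, Y, n_comp):
    """Return regression matrix B and the column means for prediction."""
    xm, ym = X.mean(axis=0), Y.mean(axis=0)
    Xr, Yr = X - xm, Y - ym
    W, P, Q = [], [], []
    for _ in range(n_comp):
        u = Yr[:, 0].copy()
        for _ in range(500):                 # inner NIPALS loop
            w = Xr.T @ u
            w /= np.linalg.norm(w)
            t = Xr @ w
            q = Yr.T @ t / (t @ t)
            u_new = Yr @ q / (q @ q)
            if np.linalg.norm(u_new - u) < 1e-12 * np.linalg.norm(u_new):
                u = u_new
                break
            u = u_new
        p = Xr.T @ t / (t @ t)
        Xr = Xr - np.outer(t, p)             # deflate both blocks
        Yr = Yr - np.outer(t, q)
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q).T
    B = W @ np.linalg.inv(P.T @ W) @ Q.T     # (p x m) regression matrix
    return B, xm, ym

def pls2_predict(X, B, xm, ym):
    return (X - xm) @ B + ym

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 30))                # synthetic "spectra"
Y = X @ rng.normal(size=(30, 2))             # [type score, particle size]
B, xm, ym = pls2_fit(X, Y, n_comp=10)
resid = Y - pls2_predict(X, B, xm, ym)
r2 = 1 - (resid ** 2).sum() / ((Y - Y.mean(axis=0)) ** 2).sum()
```

    Including the particle size as an extra Y column is what the abstract calls the explicit approach: the latent variables are chosen to explain particle size as well as coffee type, so the model compensates for it directly.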

  1. A Consistent EPIC Visible Channel Calibration Using VIIRS and MODIS as a Reference.

    NASA Astrophysics Data System (ADS)

    Haney, C.; Doelling, D. R.; Minnis, P.; Bhatt, R.; Scarino, B. R.; Gopalan, A.

    2017-12-01

    The Earth Polychromatic Imaging Camera (EPIC) aboard the Deep Space Climate Observatory (DSCOVR) satellite constantly images the sunlit disk of Earth from the Lagrange-1 (L1) point in 10 spectral channels spanning the UV, VIS, and NIR spectral regions. Recently, the DSCOVR EPIC team publicly released the version 2 dataset, which implements improved navigation, stray-light correction, and flat-fielding of the CCD array. The EPIC 2-year data record must be well calibrated for consistent cloud, aerosol, trace gas, land use and other retrievals. Because EPIC lacks onboard calibrators, the observations made by EPIC channels must be calibrated vicariously using coincident measurements from radiometrically stable instruments that have onboard calibration systems. MODIS and VIIRS are the best-suited instruments for this task, as they contain similar spectral bands that are well calibrated onboard using solar diffusers and lunar tracking. We have previously calibrated the EPIC version 1 dataset by using EPIC and VIIRS angularly matched radiance pairs over both all-sky ocean and deep convective clouds (DCC). We noted that the EPIC images required navigation adjustments, and that the EPIC stray-light correction provided an offset term closer to zero based on the linear regression of the EPIC and VIIRS ray-matched radiance pairs. We will evaluate the EPIC version 2 navigation and stray-light improvements using the same techniques. In addition, we will monitor the EPIC channel calibration over the two years for any temporal degradation or anomalous behavior. These two calibration methods will be further validated using desert and DCC invariant Earth targets. The radiometric characterization of the selected invariant targets is performed using multiple years of MODIS and VIIRS measurements. Results of these studies will be shown at the conference.

  2. A Consistent EPIC Visible Channel Calibration using VIIRS and MODIS as a Reference

    NASA Technical Reports Server (NTRS)

    Haney, C. O.; Doelling, D. R.; Minnis, P.; Bhatt, R.; Scarino, B. R.; Gopalan, A.

    2017-01-01

    The Earth Polychromatic Imaging Camera (EPIC) aboard the Deep Space Climate Observatory (DSCOVR) satellite constantly images the sunlit disk of Earth from the Lagrange-1 (L1) point in 10 spectral channels spanning the UV, VIS, and NIR spectral regions. Recently, the DSCOVR EPIC team publicly released the version 2 dataset, which implements improved navigation, stray-light correction, and flat-fielding of the CCD array. The EPIC 2-year data record must be well calibrated for consistent cloud, aerosol, trace gas, land use and other retrievals. Because EPIC lacks onboard calibrators, the observations made by EPIC channels must be calibrated vicariously using coincident measurements from radiometrically stable instruments that have onboard calibration systems. MODIS and VIIRS are the best-suited instruments for this task, as they contain similar spectral bands that are well calibrated onboard using solar diffusers and lunar tracking. We have previously calibrated the EPIC version 1 dataset by using EPIC and VIIRS angularly matched radiance pairs over both all-sky ocean and deep convective clouds (DCC). We noted that the EPIC images required navigation adjustments, and that the EPIC stray-light correction provided an offset term closer to zero based on the linear regression of the EPIC and VIIRS ray-matched radiance pairs. We will evaluate the EPIC version 2 navigation and stray-light improvements using the same techniques. In addition, we will monitor the EPIC channel calibration over the two years for any temporal degradation or anomalous behavior. These two calibration methods will be further validated using desert and DCC invariant Earth targets. The radiometric characterization of the selected invariant targets is performed using multiple years of MODIS and VIIRS measurements. Results of these studies will be shown at the conference.

  3. SU-C-202-07: Protocol and Hardware for Improved Flood Field Calibration of TrueBeam FFF Cine Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adamson, J; Faught, A; Yin, F

    2016-06-15

    Purpose: Flattening filter free photon energies are commonly used for high dose treatments such as SBRT, where localization accuracy is essential. Often, MV cine imaging may be employed to verify correct localization. TrueBeam Electronic Portal Imaging Devices (EPIDs) equipped with the 40×30 cm² Image Detection Unit (IDU) are prone to image saturation at the image center, especially for higher dose rates. While saturation often does not occur for cine imaging during treatment because the beam is attenuated by the patient, the flood field calibration is affected when the standard calibration procedure is followed. Here we describe the hardware and protocol to achieve improved image quality for this model of TrueBeam EPID. Methods: A stainless steel filter of uniform thickness was designed to have sufficient attenuation to avoid panel saturation for both 6XFFF and 10XFFF at the maximum dose rates (1400 MU/min & 2400 MU/min, respectively). The cine imaging flood field calibration was then acquired with the filter in place for the FFF energies under the standard calibration geometry (SDD=150cm). Image quality during MV cine was assessed with & without the modified flood field calibration using a low contrast resolution phantom and an anthropomorphic phantom. Results: When the flood field is acquired using the standard procedure (no filter in place), a pixel gain artifact is clearly present in the image center (r=3cm for 10XFFF at 2400 MU/min) which appears similar to and may be mis-attributed to panel saturation in the subject image. The artifact obscured all low contrast inserts at the image center and was also visible on the anthropomorphic phantom. Using the filter for flood field calibration eliminated the artifact. Conclusion: Use of a modified flood field calibration procedure improves image quality for cine MV imaging with TrueBeams equipped with the 40×30 cm² IDU.

  4. STELLAR COLOR REGRESSION: A SPECTROSCOPY-BASED METHOD FOR COLOR CALIBRATION TO A FEW MILLIMAGNITUDE ACCURACY AND THE RECALIBRATION OF STRIPE 82

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yuan, Haibo; Liu, Xiaowei; Xiang, Maosheng

    In this paper we propose a spectroscopy-based stellar color regression (SCR) method to perform accurate color calibration for modern imaging surveys, taking advantage of the millions of stellar spectra now available. The method is straightforward, insensitive to systematic errors in the spectroscopically determined stellar atmospheric parameters, applicable to regions that are effectively covered by spectroscopic surveys, and capable of delivering an accuracy of a few millimagnitudes for color calibration. As an illustration, we have applied the method to the Sloan Digital Sky Survey (SDSS) Stripe 82 data. With a total number of 23,759 spectroscopically targeted stars, we have mapped out the small but strongly correlated color zero-point errors present in the photometric catalog of Stripe 82, and we improve the color calibration by a factor of two to three. Our study also reveals some small but significant magnitude-dependent errors in the z band for some charge-coupled devices (CCDs). Such errors are likely to be present in all the SDSS photometric data. Our results are compared with those from a completely independent test based on the intrinsic colors of red galaxies presented by Ivezić et al. The comparison, as well as other tests, shows that the SCR method has achieved a color calibration internally consistent at a level of about 5 mmag in u – g, 3 mmag in g – r, and 2 mmag in r – i and i – z. Given the power of the SCR method, we discuss briefly the potential benefits of applying the method to existing, ongoing, and upcoming imaging surveys.

  5. [Determination of fat, protein and DM in raw milk by portable short-wave near infrared spectrometer].

    PubMed

    Li, Xiao-yun; Wang, Jia-hua; Huang, Ya-wei; Han, Dong-hai

    2011-03-01

    Near-infrared diffuse reflectance spectroscopy calibrations for fat, protein and DM in raw milk were studied with partial least-squares (PLS) regression using a portable short-wave near-infrared spectrometer. The results indicated that good calibrations were obtained for fat and DM: the correlation coefficients were both 0.98, the RMSEC values were 0.187 and 0.217, the RMSEP values were 0.187 and 0.296, and the RPDs were 5.02 and 3.20, respectively. The calibration for protein needs improvement but can be used in practice: the correlation coefficient was 0.95, RMSEC was 0.105, RMSEP was 0.120, and RPD was 2.60. Furthermore, the measuring accuracy was improved by analyzing the correlation of fat and DM in raw milk. This study may provide a new on-site method for the nondestructive and rapid measurement of milk.
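
    The RPD quoted above is the ratio of the standard deviation of the reference values to the prediction error (RMSEP); values around 3 or more are conventionally taken as adequate for quantitative use. A minimal sketch with made-up milk reference values:

```python
import statistics

# RPD (ratio of performance to deviation): standard deviation of the
# reference values divided by the prediction error (RMSEP). The milk
# values below are made up for illustration.

def rmsep(y_true, y_pred):
    """Root mean square error of prediction."""
    n = len(y_true)
    return (sum((yt - yp) ** 2 for yt, yp in zip(y_true, y_pred)) / n) ** 0.5

def rpd(y_true, y_pred):
    """Ratio of (sample) standard deviation to RMSEP."""
    return statistics.stdev(y_true) / rmsep(y_true, y_pred)

y_ref = [3.2, 3.8, 4.1, 3.5, 4.4]   # reference fat content (%), made up
y_hat = [3.3, 3.9, 4.2, 3.6, 4.5]   # NIR predictions, uniformly 0.1 high
```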

  6. Online C-arm calibration using a marked guide wire for 3D reconstruction of pulmonary arteries

    NASA Astrophysics Data System (ADS)

    Vachon, Étienne; Miró, Joaquim; Duong, Luc

    2017-03-01

    3D reconstruction of vessels from 2D X-ray angiography is highly relevant for improving the visualization and assessment of vascular structures such as pulmonary arteries by interventional cardiologists. However, to ensure a robust and accurate reconstruction, the C-arm gantry parameters must be properly calibrated to provide clinically acceptable results. Calibration procedures often rely on calibration objects and complex protocols that are not adapted to an interventional context. In this study, a novel calibration algorithm for the C-arm gantry is presented that uses existing instrumentation such as catheters and guide wires. This ensures the availability of a minimum set of correspondences and implies minimal changes to the clinical workflow. The method was evaluated on simulated data and on retrospective patient datasets. Experimental results on simulated datasets demonstrate a calibration that allows 3D reconstruction of the guide wire up to a geometric transformation. Experiments with patient datasets show a significant decrease in the reprojection error, down to 0.17 mm 2D RMS. Consequently, such a procedure might help identify any calibration drift during the intervention.
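
    The 2D RMS reprojection error used to evaluate such a calibration can be sketched with a bare intrinsics-only pinhole model; the focal length and principal point below are illustrative, not the C-arm's actual geometry:

```python
import numpy as np

# 2D RMS reprojection error sketch: project reconstructed 3D points
# with a bare pinhole model and compare with the detected 2D points.
# Focal length and principal point are illustrative placeholders.

def project(points3d, focal, cx, cy):
    """Pinhole projection of Nx3 camera-frame points to Nx2 pixels."""
    p = np.asarray(points3d, dtype=float)
    return np.column_stack((focal * p[:, 0] / p[:, 2] + cx,
                            focal * p[:, 1] / p[:, 2] + cy))

def rms_reprojection_error(points3d, detected2d, focal, cx, cy):
    d = project(points3d, focal, cx, cy) - np.asarray(detected2d)
    return float(np.sqrt((d ** 2).sum(axis=1).mean()))

# Two points whose detections match their projections exactly.
pts = [[0.0, 0.0, 10.0], [1.0, 1.0, 10.0]]
obs = [[320.0, 240.0], [420.0, 340.0]]
err = rms_reprojection_error(pts, obs, focal=1000.0, cx=320.0, cy=240.0)
```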

  7. Predicting ambient aerosol Thermal Optical Reflectance (TOR) measurements from infrared spectra: organic carbon

    NASA Astrophysics Data System (ADS)

    Dillner, A. M.; Takahama, S.

    2014-11-01

    Organic carbon (OC) can constitute 50% or more of the mass of atmospheric particulate matter. Typically, the organic carbon concentration is measured using thermal methods such as Thermal-Optical Reflectance (TOR) from quartz fiber filters. Here, methods are presented whereby Fourier Transform Infrared (FT-IR) absorbance spectra from polytetrafluoroethylene (PTFE or Teflon) filters are used to accurately predict TOR OC. Transmittance FT-IR analysis is rapid, inexpensive, and non-destructive to the PTFE filters. To develop and test the method, FT-IR absorbance spectra are obtained from 794 samples from seven Interagency Monitoring of PROtected Visual Environments (IMPROVE) sites sampled during 2011. Partial least squares regression is used to calibrate sample FT-IR absorbance spectra to artifact-corrected TOR OC. The FT-IR spectra are divided into calibration and test sets by sampling site and date, which leads to precise and accurate OC predictions by FT-IR, as indicated by a high coefficient of determination (R2 = 0.96), low bias (0.02 μg m-3; all μg m-3 values are based on the nominal IMPROVE sample volume of 32.8 m3), low error (0.08 μg m-3) and low normalized error (11%). These performance metrics can be achieved with various degrees of spectral pretreatment (e.g., including or excluding substrate contributions to the absorbances) and are comparable in precision and accuracy to collocated TOR measurements. FT-IR spectra are also divided into calibration and test sets by OC mass and by OM / OC, which reflects the organic composition of the particulate matter and is obtained from organic functional group composition; this division also leads to precise and accurate OC predictions. Low OC concentrations have higher bias and normalized error due to TOR analytical errors and artifact correction errors, not due to the range of OC mass of the samples in the calibration set. However, samples with low OC mass can be used to predict samples with high OC mass, indicating that the calibration is linear. Using samples in the calibration set that have a different OM / OC or ammonium / OC distribution than the test set leads to only a modest increase in bias and normalized error in the predicted samples. We conclude that FT-IR analysis with partial least squares regression is a robust method for accurately predicting TOR OC in IMPROVE network samples, providing complementary information to the organic functional group composition and organic aerosol mass estimated previously from the same set of sample spectra (Ruthenburg et al., 2014).
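
    The bias and normalized error metrics quoted above can be computed as follows; these are the conventional definitions, which may differ in detail from the authors', and the OC values are made up:

```python
# Conventional definitions of the evaluation metrics quoted above
# (bias, absolute error, normalized error); the authors' exact
# definitions may differ in detail. The OC values are made up.

def bias(ref, pred):
    """Mean signed difference, prediction minus reference."""
    return sum(p - r for r, p in zip(ref, pred)) / len(ref)

def mean_abs_error(ref, pred):
    """Mean absolute difference."""
    return sum(abs(p - r) for r, p in zip(ref, pred)) / len(ref)

def normalized_error(ref, pred):
    """Absolute error relative to the mean reference concentration."""
    return mean_abs_error(ref, pred) / (sum(ref) / len(ref))

oc_ref  = [0.50, 0.80, 1.20, 0.70]   # TOR OC, ug/m3 (made up)
oc_pred = [0.52, 0.78, 1.25, 0.73]   # FT-IR predictions (made up)
```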

  8. An improved error assessment for the GEM-T1 gravitational model

    NASA Technical Reports Server (NTRS)

    Lerch, F. J.; Marsh, J. G.; Klosko, S. M.; Pavlis, E. C.; Patel, G. B.; Chinn, D. S.; Wagner, C. A.

    1988-01-01

    Several tests were designed to determine the correct error variances for the Goddard Earth Model (GEM)-T1 gravitational solution which was derived exclusively from satellite tracking data. The basic method employs both wholly independent and dependent subset data solutions and produces a full field coefficient estimate of the model uncertainties. The GEM-T1 errors were further analyzed using a method based upon eigenvalue-eigenvector analysis which calibrates the entire covariance matrix. Dependent satellite and independent altimetric and surface gravity data sets, as well as independent satellite deep resonance information, confirm essentially the same error assessment. These calibrations (utilizing each of the major data subsets within the solution) yield very stable calibration factors which vary by approximately 10 percent over the range of tests employed. Measurements of gravity anomalies obtained from altimetry were also used directly as observations to show that GEM-T1 is calibrated. The mathematical representation of the covariance error in the presence of unmodeled systematic error effects in the data is analyzed and an optimum weighting technique is developed for these conditions. This technique yields an internal self-calibration of the error model, a process which GEM-T1 is shown to approximate.

  9. Temporal dynamics of sand dune bidirectional reflectance characteristics for absolute radiometric calibration of optical remote sensing data

    NASA Astrophysics Data System (ADS)

    Coburn, Craig A.; Logie, Gordon S. J.

    2018-01-01

Attempts to use pseudoinvariant calibration sites (PICS) for establishing absolute radiometric calibration of Earth observation (EO) satellites require high-quality information about the nature of the bidirectional reflectance distribution function (BRDF) of the surfaces used for these calibrations. Past studies have shown that the PICS method is useful for evaluating the trend of sensors over time or for the intercalibration of sensors. The PICS method was not considered until recently for deriving absolute radiometric calibration. This paper presents BRDF data collected by a high-performance portable goniometer system to develop a temporal BRDF model for the Algodones Dunes in California. By sampling the BRDF of the sand surface at solar zenith angles similar to those normally encountered by EO satellites, additional information on the changing nature of the surface can improve models used to provide absolute radiometric correction. The results demonstrated that the BRDF of a reasonably simple sand surface was complex, with changes in anisotropy taking place in response to changing solar zenith angles. For the majority of observation and illumination angles, the spectral reflectance anisotropy observed varied between 1% and 5%, in patterns that repeat around solar noon.

  10. Error analysis of mechanical system and wavelength calibration of monochromator

    NASA Astrophysics Data System (ADS)

    Zhang, Fudong; Chen, Chen; Liu, Jie; Wang, Zhihong

    2018-02-01

    This study focuses on improving the accuracy of a grating monochromator on the basis of the grating diffraction equation in combination with an analysis of the mechanical transmission relationship between the grating, the sine bar, and the screw of the scanning mechanism. First, the relationship between the mechanical error in the monochromator with the sine drive and the wavelength error is analyzed. Second, a mathematical model of the wavelength error and mechanical error is developed, and an accurate wavelength calibration method based on the sine bar's length adjustment and error compensation is proposed. Based on the mathematical model and calibration method, experiments using a standard light source with known spectral lines and a pre-adjusted sine bar length are conducted. The model parameter equations are solved, and subsequent parameter optimization simulations are performed to determine the optimal length ratio. Lastly, the length of the sine bar is adjusted. The experimental results indicate that the wavelength accuracy is ±0.3 nm, which is better than the original accuracy of ±2.6 nm. The results confirm the validity of the error analysis of the mechanical system of the monochromator as well as the validity of the calibration method.
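In a sine-drive monochromator, the indicated wavelength scales with the screw travel through the sine bar, so a sine-bar length error rescales every reading and a zero error shifts it. The linearized two-parameter correction can be fitted against standard spectral lines; the readings below are hypothetical, and the Hg reference wavelengths are rounded.

```python
import numpy as np

def calibrate(indicated, true):
    """Least-squares fit of lam_true = a * lam_indicated + b from standard
    lines. The scale factor a maps to a sine-bar length correction
    (dL/L is approximately 1 - a); b absorbs the zero offset."""
    A = np.vstack([indicated, np.ones_like(indicated)]).T
    (a, b), *_ = np.linalg.lstsq(A, true, rcond=None)
    return a, b

ind = np.array([435.5, 546.4, 579.3])   # hypothetical readings, nm
tru = np.array([435.8, 546.1, 579.1])   # Hg reference lines (rounded), nm
a, b = calibrate(ind, tru)
```

After the fit, adjusting the sine-bar length by the scale factor and re-zeroing implements the compensation in hardware, as the abstract describes.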

  11. Calibration method helps in seismic velocity interpretation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guzman, C.E.; Davenport, H.A.; Wilhelm, R.

    1997-11-03

Acoustic velocities derived from seismic reflection data, when properly calibrated to subsurface measurements, help interpreters make accurate velocity predictions. A method of calibrating seismic to measured velocities has improved interpretation of subsurface features in the Gulf of Mexico. In this method, the interpreter in essence creates a kind of gauge. Properly calibrated, the gauge enables the interpreter to match predicted velocities to velocities measured at wells. Slow-velocity zones are of special interest because they sometimes appear near hydrocarbon accumulations. Changes in velocity vary in strength with location; the structural picture is hidden unless the variations are accounted for by mapping in depth instead of time. Preliminary observations suggest that the presence of hydrocarbons alters the lithology in the neighborhood of the trap; this hydrocarbon effect may be reflected in the rock velocity. The effect indicates a direct use of seismic velocity in exploration. This article uses the terms seismic velocity and seismic stacking velocity interchangeably. It uses ground velocity, checkshot average velocity, and well velocity interchangeably. Interval velocities are derived from seismic stacking velocities or well average velocities; they refer to velocities of subsurface intervals or zones. Interval travel time (ITT) is the reciprocal of interval velocity in microseconds per foot.

  12. System calibration method for Fourier ptychographic microscopy

    NASA Astrophysics Data System (ADS)

    Pan, An; Zhang, Yan; Zhao, Tianyu; Wang, Zhaojun; Dan, Dan; Lei, Ming; Yao, Baoli

    2017-09-01

Fourier ptychographic microscopy (FPM) is a recently proposed computational imaging technique with both high resolution and a wide field of view. In current FPM imaging platforms, systematic error sources include aberrations, light-emitting diode (LED) intensity fluctuation, parameter imperfections, and noise, all of which may severely corrupt the reconstruction results with similar artifacts, making it difficult to identify the dominant error source from the degraded reconstructions without prior knowledge. In addition, the systematic error is generally a mixture of several error sources in real situations, and the sources cannot be separated because of their mutual coupling and conversion. To this end, we report a system calibration procedure, termed SC-FPM, that calibrates the mixed systematic errors simultaneously from an overall perspective. It is based on the simulated annealing algorithm, an LED intensity correction method, a nonlinear regression process, and an adaptive step-size strategy, and involves the evaluation of an error metric at each iteration step, followed by the re-estimation of accurate parameters. The performance achieved in both simulations and experiments demonstrates that the proposed method outperforms other state-of-the-art algorithms. The reported system calibration scheme improves the robustness of FPM, relaxes the experimental conditions, and requires no prior knowledge, which makes FPM more pragmatic.
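The parameter-search loop described above, simulated annealing with an adaptive step size, can be sketched generically; the toy loss, acceptance rule, step adaptation, and cooling schedule below are illustrative assumptions, not SC-FPM's actual update.

```python
import math
import random

def anneal(loss, x0, step0=1.0, iters=300, t0=1.0, seed=1):
    """Simulated annealing with an adaptive step size: widen the step on
    acceptance, tighten it on rejection, and cool the temperature linearly.
    Returns the best parameter and loss seen."""
    rng = random.Random(seed)
    x, fx = x0, loss(x0)
    best_x, best_f = x, fx
    step, t = step0, t0
    for i in range(iters):
        cand = x + rng.uniform(-step, step)      # propose a nearby parameter
        fc = loss(cand)
        # accept improvements always; accept worsening moves with
        # Metropolis probability exp(-(fc - fx) / t)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-9)):
            x, fx = cand, fc
            step *= 1.1                          # accepted: explore wider
            if fc < best_f:
                best_x, best_f = cand, fc
        else:
            step *= 0.9                          # rejected: tighten search
        t = t0 * (1.0 - i / iters)               # linear cooling
    return best_x, best_f
```

In an FPM-style calibration the scalar `x` would be replaced by the system parameters and `loss` by the reconstruction error metric.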

  13. Analysis of variation in calibration curves for Kodak XV radiographic film using model-based parameters.

    PubMed

    Hsu, Shu-Hui; Kulasekere, Ravi; Roberson, Peter L

    2010-08-05

Film calibration is time-consuming work when dose accuracy is essential while working in a range of photon scatter environments. This study uses the single-target single-hit model of film response to fit the calibration curves as a function of calibration method, processor condition, field size and depth. Kodak XV film was irradiated perpendicular to the beam axis in a solid water phantom. Standard calibration films (one dose point per film) were irradiated at 90 cm source-to-surface distance (SSD) for various doses (16-128 cGy), depths (0.2, 0.5, 1.5, 5, 10 cm) and field sizes (5 × 5, 10 × 10 and 20 × 20 cm²). The 8-field calibration method (eight dose points per film) was used as a reference for each experiment, taken at 95 cm SSD and 5 cm depth. The delivered doses were measured using an Attix parallel plate chamber for improved accuracy of dose estimation in the buildup region. Three fitting methods with one to three dose points per calibration curve were investigated for the field sizes of 5 × 5, 10 × 10 and 20 × 20 cm². The inter-day variations of the model parameters (background, saturation, and slope) were 1.8%, 5.7%, and 7.7% (1σ) using the 8-field method. The saturation parameter ratio of standard to 8-field curves was 1.083 ± 0.005. The slope parameter ratio of standard to 8-field curves ranged from 0.99 to 1.05, depending on field size and depth. The slope parameter ratio decreases with increasing depth below 0.5 cm for the three field sizes. It increases with increasing depths above 0.5 cm. A calibration curve with one to three dose points fitted with the model can achieve 2% accuracy in film dosimetry for various irradiation conditions. The proposed fitting methods may reduce workload while providing energy dependence correction in radiographic film dosimetry. This study is limited to radiographic XV film with a Lumisys scanner.
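The single-target single-hit model used above relates net optical density to dose through a saturating exponential, which also gives a closed-form dose inversion for film reading. The parameter values below are illustrative assumptions, not the fitted values from the study.

```python
import math

def net_od(dose, sat, slope, background):
    """Single-target single-hit response: optical density rises
    exponentially toward a saturation value."""
    return background + sat * (1.0 - math.exp(-slope * dose))

def dose_from_od(od, sat, slope, background):
    """Invert the model to convert a measured optical density to dose."""
    return -math.log(1.0 - (od - background) / sat) / slope

# hypothetical parameters for illustration only
sat, slope, bkg = 2.0, 0.008, 0.15   # OD units, 1/cGy, OD units
```

Fitting `sat`, `slope`, and `bkg` to a handful of dose points is exactly the one-to-three-point calibration the abstract evaluates.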

  14. Hadronic vector boson decay and the art of calorimeter calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lobban, Olga Barbara

    2002-12-01

Presented here are several studies involving the energy measurement of particles using calorimeters. The first study involves the effects of radiation damage on the response of a prototype calorimeter for the Compact Muon Solenoid experiment. We found that the effects of radiation damage on the calorimeter's response are dose dependent and that most of the damage will occur in the first year of running at the Large Hadron Collider. Another study involved the assessment of the Energy Flow Method, an algorithm in which information from the calorimeter system is combined with that from the tracking system in an attempt to improve the energy resolution for jet measurements. Using the Energy Flow Method, an improvement of $\sim 30\%$ is found, but this improvement decreases at high energies, where the hadronic calorimeter resolution dominates the quality of the jet energy measurements. Finally, we developed a new method to calibrate a longitudinally segmented calorimeter. This method eliminates problems with the traditional method used for the calorimeters at the Collider Detector at Fermilab. We applied this new method in the search for hadronic decays of the $W$ and $Z$ bosons in a sample of dijet data taken during Tevatron Run 1C. A signal of 9873 ± 3950 (sys) ± 1130 (stat) events was found when the new calibration method was used. This corresponds to a cross section $\sigma(p\bar{p} \to W,Z) \cdot B(W,Z \to \mathrm{jets}) = 35.6 \pm 14.2\,(\mathrm{sys}) \pm 4.1\,(\mathrm{stat})$ nb.

  15. A calibration method for fringe reflection technique based on the analytical phase-slope description

    NASA Astrophysics Data System (ADS)

    Wu, Yuxiang; Yue, Huimin; Pan, Zhipeng; Liu, Yong

    2018-05-01

The fringe reflection technique (FRT) has been one of the most popular methods for measuring the shape of specular surfaces in recent years. Existing FRT system calibration methods usually contain two parts: camera calibration and geometric calibration. In geometric calibration, calibrating the position of the liquid crystal display (LCD) screen is one of the most difficult steps among all the calibration procedures, and its accuracy is affected by factors such as imaging aberration, plane mirror flatness, and LCD screen pixel size accuracy. In this paper, based on the deduction of an FRT analytical phase-slope description, we present a novel calibration method with no requirement to calibrate the position of the LCD screen. Moreover, the system can be arbitrarily arranged, and the imaging system can be either telecentric or non-telecentric. In our experiment measuring a sphere mirror with a 5000 mm radius, the proposed calibration method achieves a measurement error 2.5 times smaller than the geometric calibration method. In the wafer surface measuring experiment, the measurement result with the proposed calibration method is closer to the interferometer result than that of the geometric calibration method.

  16. Calibration of EBT2 film using a red-channel PDD method in combination with a modified three-channel technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Liyun, E-mail: liyunc@isu.edu.tw; Ho, Sheng-Yow; Lee, Tsair-Fwu

Purpose: Ashland Inc. EBT2 and EBT3 films are widely used in quality assurance for radiation therapy; however, there remains a relatively high degree of uncertainty [B. Hartmann, M. Martisikova, and O. Jakel, “Homogeneity of Gafchromic EBT2 film,” Med. Phys. 37, 1753–1756 (2010)]. Micke et al. (2011) recently improved the spatial homogeneity using all color channels of a flatbed scanner; however, van Hoof et al. (2012) pointed out that the corrected nonuniformity still requires further investigation for larger fields. To reduce the calibration errors and the uncertainty, the authors propose a new red-channel percentage-depth-dose method in combination with a modified three-channel technique. Methods: For ease of comparison, the EBT2 film image used in the authors’ previous study (2012) was reanalyzed using different approaches. 6-MV photon beams were delivered to two different films at two different beam-on times, resulting in absorbed doses ranging from approximately 30 to 300 cGy at the vertical midline of the film, which was set to be coincident with the central axis of the beam. The film was tightly sandwiched in a 30 × 30 × 30 cm³ polystyrene phantom, and the pixel values for the red, green, and blue channels were extracted from 234 points on the central axis of the beam and compared with the corresponding depth doses. The film was first calibrated using the multichannel method proposed by Micke et al. (2011), accounting for nonuniformities in the scanner. After eliminating the scanner and dose-independent nonuniformities, the film was recalibrated via the dose-dependent optical density of the red channel and fitted to a power function. This calibration was verified via comparisons of the dose profiles extracted from the films, where three were exposed to a 60° physical wedge field and three were exposed to composite fields, all of which were measured in a water phantom. 
A correction for optical attenuation was implemented, and treatment plans of intensity modulated radiation therapy and volumetric modulated arc therapy were evaluated. Results: The method described here demonstrated improved accuracy with reduced uncertainty. The relative error compared with the measurements of a water phantom was less than 1%, and the overall calibration uncertainty was less than 2%. Verification tests revealed that the results were close to those of the authors’ previous study, and all differences were within 3%, except those in high-dose-gradient regions. The gamma pass rates (2%/2 mm) of the treatment plan evaluated using the method described here were greater than 99%, and no obvious stripe patterns were observed in the dose-difference maps. Conclusions: Spatial homogeneity was significantly improved via the calibration method described here. This technique is both convenient and time-efficient because it does not require cutting the film, and only two exposures are necessary.

  17. Improved Calibration of Modeled Discharge and Storage Change in the Atchafalaya Floodplain Using SAR Interferometry

    NASA Technical Reports Server (NTRS)

    Jung, Hahn Chul; Jasinski, Michael; Kim, Jin-Woo; Shum, C. K.; Bates, Paul; Neal, Jeffrey; Lee, Hyongki; Alsdorf, Doug

    2011-01-01

This study focuses on the feasibility of using SAR interferometry to support 2D hydrodynamic model calibration and provide water storage change in the floodplain. Two-dimensional (2D) flood inundation modeling has been widely studied using storage cell approaches with the availability of high-resolution, remotely sensed floodplain topography. The development of coupled 1D/2D flood modeling has shown improved calculation of 2D floodplain inundation as well as channel water elevation. Most floodplain model results have been validated using remote sensing methods for inundation extent. However, few studies show quantitative validation of spatial variations in floodplain water elevations in 2D modeling, since most gauges are located along main river channels and traditional single-track satellite altimetry over the floodplain is limited. Synthetic Aperture Radar (SAR) interferometry has recently been proven useful for measuring centimeter-scale water elevation changes over the floodplain. In the current study, we apply the LISFLOOD hydrodynamic model to the central Atchafalaya River Basin, Louisiana, during a 62 day period from 1 April to 1 June 2008 using two different calibration schemes for Manning's n. First, the model is calibrated in terms of water elevations from a single in situ gauge, which represents the more traditional approach. Due to the gauge location in the channel, the calibration shows more sensitivity to channel roughness relative to floodplain roughness. Second, the model is calibrated in terms of water elevation changes calculated from ALOS PALSAR interferometry over the 46 day image acquisition interval from 16 April 2008 to 1 June 2008. Since SAR interferometry receives strong scattering from the floodplain due to the double-bounce effect, as compared to the specular scattering of open water, the calibration shows more sensitivity to floodplain roughness. 
An iterative approach is used to determine the best-fit Manning's n for the two different calibration approaches. Results suggest similar floodplain roughness but slightly different channel roughness. However, application of SAR interferometry provides a unique view of the floodplain flow gradients, not possible with a single-gauge calibration. These gradients allow improved computation of water storage change over the 46-day simulation period. Overall, the results suggest that the use of 2D SAR water elevation changes in the Atchafalaya basin offers improved understanding and modeling of floodplain hydrodynamics.
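The iterative best-fit search over Manning's n can be sketched as a simple grid search minimizing RMSE against the interferometric water-level changes; `simulate` below is a placeholder for a model run (e.g., LISFLOOD), not the actual model interface.

```python
import numpy as np

def best_fit_n(candidates, simulate, observed):
    """Calibration sketch: run the model for each candidate Manning's n and
    keep the value minimizing RMSE against observed water-level changes."""
    rmse = [np.sqrt(np.mean((simulate(n) - observed) ** 2)) for n in candidates]
    return candidates[int(np.argmin(rmse))]
```

The two calibration schemes in the abstract differ only in what `observed` contains: gauge water elevations in the first case, InSAR water-level changes in the second.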

  18. Correction to Method of Establishing the Absolute Radiometric Accuracy of Remote Sensing Systems While On-orbit Using Characterized Stellar Sources

    NASA Technical Reports Server (NTRS)

    Bowen, Howard S.; Cunningham, Douglas M.

    2007-01-01

    The contents include: 1) Brief history of related events; 2) Overview of original method used to establish absolute radiometric accuracy of remote sensing instruments using stellar sources; and 3) Considerations to improve the stellar calibration approach.

  19. From mobile ADCP to high-resolution SSC: a cross-section calibration tool

    USGS Publications Warehouse

    Boldt, Justin A.

    2015-01-01

Sediment is a major cause of stream impairment, and improved sediment monitoring is a crucial need. Point samples of suspended-sediment concentration (SSC) are often not enough to answer critical questions in a changing environment. As technology has improved, there now exists the opportunity to obtain discrete measurements of SSC and flux at a spatial scale unmatched by any other device. Acoustic instruments are ubiquitous in the U.S. Geological Survey (USGS) for making streamflow measurements, but when calibrated with physical sediment samples, they may be used for sediment measurements as well. The acoustic backscatter measured by an acoustic Doppler current profiler (ADCP) has long been known to correlate well with suspended sediment, but until recently its use has been mainly qualitative. This new method using acoustic surrogates has great potential to leverage routine data collection to provide calibrated, quantitative measures of SSC that hold promise to be more accurate, complete, and cost efficient than other methods. This extended abstract presents a method for the measurement of high spatial and temporal resolution SSC using a down-looking, mobile ADCP at discrete cross-sections. The high resolution of the sediment data is a primary advantage and a vast improvement over other discrete methods for measuring SSC. Although acoustic surrogate technology using continuous, fixed-deployment (side-looking) ADCPs is proven, the same methods cannot be used with down-looking ADCPs because variation of SSC and particle-size distribution in the vertical profile violates the theory's assumptions. A software tool was developed to assist in using acoustic backscatter from a down-looking, mobile ADCP as a surrogate for SSC. The tool has a simple graphical user interface that loads the data, assists in the calibration procedure, and provides data visualization and output options. 
This tool is designed to improve ongoing efforts to monitor and predict resource responses to a changing environment. Because ADCPs are used routinely for streamflow measurements, using acoustic backscatter from ADCPs as a surrogate for SSC has the potential to revolutionize sediment measurements by providing rapid measurements of sediment flux and distribution at spatial and temporal scales that are far beyond the capabilities of traditional physical samplers.
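A common form of the acoustic-surrogate rating regresses log10(SSC) on sediment-corrected backscatter (SCB, in dB). The rating form is standard in the surrogate literature, but the coefficients and data below are synthetic assumptions, not values from the USGS tool.

```python
import numpy as np

def fit_backscatter_rating(scb_db, ssc_mg_l):
    """Fit log10(SSC) = a + b * SCB from physical samples paired with
    sediment-corrected backscatter."""
    b, a = np.polyfit(scb_db, np.log10(ssc_mg_l), 1)
    return a, b

def predict_ssc(scb_db, a, b):
    """Apply the rating to convert backscatter to SSC (mg/L)."""
    return 10.0 ** (a + b * np.asarray(scb_db))
```

Once `a` and `b` are fitted from a few physical samples, every backscatter cell in a cross-section transect can be converted to an SSC estimate.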

  20. Compound Radar Approach for Breast Imaging.

    PubMed

    Byrne, Dallan; Sarafianou, Mantalena; Craddock, Ian J

    2017-01-01

Multistatic radar apertures record scattering at a number of receivers when the target is illuminated by a single transmitter, providing more scattering information than their monostatic counterpart per transmission angle. This paper considers the well-known problem of detecting tumor targets within breast phantoms using multistatic radar. To accurately image potentially cancerous targets within the breast, a significant number of multistatic channels are required in order to adequately calibrate out unwanted skin reflections, increase the immunity to clutter, and increase the dynamic range of a breast radar imaging system. However, increasing the density of antennas within a physical array is inevitably limited by the geometry of the antenna elements designed to operate with biological tissues at microwave frequencies. A novel compound imaging approach is presented to overcome these physical constraints and improve the imaging capabilities of a multistatic radar imaging modality for breast scanning applications. The number of transmit-receive (TX-RX) paths available for imaging is increased by performing a number of breast scans with varying array positions. A skin calibration method is presented to reduce the influence of skin reflections in each channel. The calibrated signals are passed to a receive beamforming method that compounds the data from each scan to produce a microwave radar breast profile. The proposed imaging method is evaluated with experimental data obtained from constructed phantoms of varying complexity, skin contour asymmetries, and challenging tumor positions and sizes. For each imaging scenario outlined in this study, the proposed compound imaging technique improves skin calibration, clearly detects small targets, and substantially reduces the level of undesirable clutter within the profile.
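Receive beamforming over many TX-RX channels is typically delay-and-sum: each channel signal is sampled at the round-trip delay to a candidate pixel and the contributions are summed coherently. A bare-bones sketch, with no windowing, skin calibration, or clutter rejection (all of which the paper's method adds):

```python
import numpy as np

def das_image(signals, tx_pos, rx_pos, pixels, c, fs):
    """Delay-and-sum beamforming: for each TX-RX channel, pick the sample at
    the pixel's round-trip delay and accumulate; return squared intensity."""
    img = np.zeros(len(pixels))
    for ch, (tx, rx) in enumerate(zip(tx_pos, rx_pos)):
        # round-trip path length transmitter -> pixel -> receiver
        d = np.linalg.norm(pixels - tx, axis=1) + np.linalg.norm(pixels - rx, axis=1)
        idx = np.clip((d / c * fs).astype(int), 0, signals.shape[1] - 1)
        img += signals[ch, idx]
    return img ** 2
```

Compounding multiple scans with shifted array positions simply extends the channel loop over the additional TX-RX paths.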

  1. Assessing groundwater vulnerability in the Kinshasa region, DR Congo, using a calibrated DRASTIC model

    NASA Astrophysics Data System (ADS)

    Mfumu Kihumba, Antoine; Vanclooster, Marnik; Ndembo Longo, Jean

    2017-02-01

This study assessed the vulnerability of groundwater against pollution in the Kinshasa region, DR Congo, as a support of a groundwater protection program. The parametric vulnerability model (DRASTIC) was modified and calibrated to predict the intrinsic vulnerability as well as the groundwater pollution risk. The method uses groundwater body specific parameters for the calibration of the factor ratings and weightings of the original DRASTIC model. These groundwater specific parameters are inferred from the statistical relation between the original DRASTIC model and observed nitrate pollution for a specific period. In addition, site-specific land use parameters are integrated into the method. The method is fully embedded in a Geographic Information System (GIS). Following these modifications, the correlation coefficient between groundwater pollution risk and observed nitrate concentrations for the 2013-2014 survey improved from r = 0.42, for the original DRASTIC model, to r = 0.61 for the calibrated model. As a way to validate the pollution risk map, observed nitrate concentrations from another survey (2008) were compared to the pollution risk indices, showing a good degree of coincidence (r = 0.51). The study shows that calibration of a vulnerability model is recommended when vulnerability maps are used for groundwater resource management and land-use planning at the regional scale, and that the calibration is adapted to a specific area.
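One simple way to calibrate factor weightings against observed nitrate is an ordinary least-squares refit of the weights on the factor ratings. This is a simplified stand-in for the paper's statistical calibration of ratings and weightings, using seven factors as in DRASTIC; the data are synthetic.

```python
import numpy as np

def recalibrate_weights(factors, nitrate):
    """Re-derive factor weights by regressing observed nitrate on the seven
    DRASTIC factor ratings (with an intercept, which is then discarded)."""
    A = np.column_stack([factors, np.ones(len(nitrate))])
    coef, *_ = np.linalg.lstsq(A, nitrate, rcond=None)
    return coef[:-1]            # one weight per factor

def vulnerability_index(factors, weights):
    """Weighted sum of factor ratings, as in the DRASTIC index."""
    return factors @ weights
```

The recalibrated index can then be correlated with a held-out nitrate survey, mirroring the r = 0.51 validation step in the abstract.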

  2. A stoichiometric calibration method for dual energy computed tomography

    NASA Astrophysics Data System (ADS)

    Bourque, Alexandra E.; Carrier, Jean-François; Bouchard, Hugo

    2014-04-01

    The accuracy of radiotherapy dose calculation relies crucially on patient composition data. The computed tomography (CT) calibration methods based on the stoichiometric calibration of Schneider et al (1996 Phys. Med. Biol. 41 111-24) are the most reliable to determine electron density (ED) with commercial single energy CT scanners. Along with the recent developments in dual energy CT (DECT) commercial scanners, several methods were published to determine ED and the effective atomic number (EAN) for polyenergetic beams without the need for CT calibration curves. This paper intends to show that with a rigorous definition of the EAN, the stoichiometric calibration method can be successfully adapted to DECT with significant accuracy improvements with respect to the literature without the need for spectrum measurements or empirical beam hardening corrections. Using a theoretical framework of ICRP human tissue compositions and the XCOM photon cross sections database, the revised stoichiometric calibration method yields Hounsfield unit (HU) predictions within less than ±1.3 HU of the theoretical HU calculated from XCOM data averaged over the spectra used (e.g., 80 kVp, 100 kVp, 140 kVp and 140/Sn kVp). A fit of mean excitation energy (I-value) data as a function of EAN is provided in order to determine the ion stopping power of human tissues from ED-EAN measurements. Analysis of the calibration phantom measurements with the Siemens SOMATOM Definition Flash dual source CT scanner shows that the present formalism yields mean absolute errors of (0.3 ± 0.4)% and (1.6 ± 2.0)% on ED and EAN, respectively. For ion therapy, the mean absolute errors for calibrated I-values and proton stopping powers (216 MeV) are (4.1 ± 2.7)% and (0.5 ± 0.4)%, respectively. 
In all clinical situations studied, the uncertainties in ion ranges in water for therapeutic energies are found to be less than 1.3 mm, 0.7 mm and 0.5 mm for protons, helium and carbon ions respectively, using a generic reconstruction algorithm (filtered back projection). With a more advanced method (sinogram affirmed iterative technique), the values become 1.0 mm, 0.5 mm and 0.4 mm for protons, helium and carbon ions, respectively. These results allow one to conclude that the present adaptation of the stoichiometric calibration yields a highly accurate method for characterizing tissue with DECT for ion beam therapy and potentially for photon beam therapy.
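The step from (electron density, I-value) to a proton stopping-power ratio uses the Bethe formula; the sketch below omits shell and density corrections. The 216 MeV energy matches the abstract, while the water I-value of 75 eV is an assumption of this example.

```python
import math

def proton_spr(rho_e, I_eV, energy_MeV=216.0, I_water_eV=75.0):
    """Stopping-power ratio to water from relative electron density and mean
    excitation energy, via the uncorrected Bethe stopping number."""
    m_e = 0.511e6                       # electron rest energy, eV
    M_p = 938.272                       # proton rest energy, MeV
    gamma = 1.0 + energy_MeV / M_p
    beta2 = 1.0 - 1.0 / gamma ** 2      # beta^2 of the proton

    def stopping_number(I):
        return math.log(2.0 * m_e * beta2 / (I * (1.0 - beta2))) - beta2

    return rho_e * stopping_number(I_eV) / stopping_number(I_water_eV)
```

This is how the fitted I-value-versus-EAN relation mentioned above turns an ED-EAN measurement into an ion stopping power.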

  3. Corrections to the MODIS Aqua Calibration Derived From MODIS Aqua Ocean Color Products

    NASA Technical Reports Server (NTRS)

    Meister, Gerhard; Franz, Bryan Alden

    2013-01-01

Ocean color products, such as chlorophyll-a concentration, can be derived from the top-of-atmosphere radiances measured by imaging sensors on Earth-orbiting satellites. There are currently three National Aeronautics and Space Administration sensors in orbit capable of providing ocean color products. One of these sensors is the Moderate Resolution Imaging Spectroradiometer (MODIS) on the Aqua satellite, whose ocean color products are currently the most widely used of the three. A recent improvement to the MODIS calibration methodology has used land targets to improve the calibration accuracy. This study evaluates the new calibration methodology and describes further calibration improvements that are built upon the new methodology by including ocean measurements in the form of global temporally averaged water-leaving reflectance measurements. The calibration improvements presented here mainly modify the calibration at the scan edges, taking advantage of the good performance of the land target trending in the center of the scan.

  4. Performance of bias-correction methods for exposure measurement error using repeated measurements with and without missing data.

    PubMed

    Batistatou, Evridiki; McNamee, Roseanne

    2012-12-10

It is known that measurement error leads to bias in assessing exposure effects, which can, however, be corrected if independent replicates are available. For expensive replicates, two-stage (2S) studies that produce data 'missing by design' may be preferred over a single-stage (1S) study, because in the second stage, measurement of replicates is restricted to a sample of first-stage subjects. Motivated by an occupational study on the acute effect of carbon black exposure on respiratory morbidity, we compare the performance of several bias-correction methods for both designs in a simulation study: an instrumental variable method (EVROS IV) based on grouping strategies, which had been recommended especially when measurement error is large, the regression calibration method, and the simulation extrapolation method. For the 2S design, either the problem of 'missing' data was ignored or the 'missing' data were imputed using multiple imputations. In both 1S and 2S designs, in the case of small or moderate measurement error, regression calibration was shown to be the preferred approach in terms of root mean square error. For 2S designs, regression calibration as implemented in Stata is not recommended, in contrast to our implementation of the method, although the 'problematic' implementation improved substantially with the use of multiple imputations. The EVROS IV method, under a good/fairly good grouping, outperforms the regression calibration approach in both design scenarios when exposure mismeasurement is severe. In both 1S and 2S designs with moderate or large measurement error, simulation extrapolation severely failed to correct for bias. Copyright © 2012 John Wiley & Sons, Ltd.
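The core of regression calibration with replicates is to replace each subject's error-prone mean by its best linear predictor of true exposure, shrinking toward the grand mean by the reliability ratio. A minimal moment-based sketch for balanced replicates with no covariates, which simplifies the methods compared above:

```python
import numpy as np

def regression_calibration(replicates):
    """Shrink each subject's replicate mean toward the grand mean by the
    reliability ratio lambda, estimated from within- and between-subject
    variance components."""
    W = np.asarray(replicates)                       # (n subjects, k replicates)
    n, k = W.shape
    wbar = W.mean(axis=1)
    s2_within = W.var(axis=1, ddof=1).mean()         # measurement-error variance
    s2_between = wbar.var(ddof=1) - s2_within / k    # true-exposure variance
    lam = s2_between / (s2_between + s2_within / k)  # reliability of the mean
    mu = wbar.mean()
    return mu + lam * (wbar - mu)                    # E[X | replicates]
```

Using these calibrated exposures in the outcome regression removes (to first order) the attenuation bias that motivates the comparison in the abstract.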

  5. The new camera calibration system at the US Geological Survey

    USGS Publications Warehouse

    Light, D.L.

    1992-01-01

Modern computerized photogrammetric instruments are capable of utilizing both radial and decentering camera calibration parameters, which can increase plotting accuracy over that of the older analog instrumentation technology of previous decades. Also, recent design improvements in aerial cameras have minimized distortions and increased the resolving power of camera systems, which should improve the performance of the overall photogrammetric process. In concert with these improvements, the Geological Survey has adopted the rigorous mathematical model for camera calibration developed by Duane Brown. The Geological Survey's calibration facility and the additional calibration parameters now provided in the USGS calibration certificate are reviewed.
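Brown's model expresses lens distortion as a radial polynomial plus decentering terms. A two-radial-term sketch of the forward model (the coefficient values in the test are illustrative, not calibration-certificate values):

```python
def brown_distort(x, y, k1, k2, p1, p2):
    """Brown's radial + decentering distortion model (two radial terms),
    mapping ideal image coordinates to distorted ones."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2            # radial polynomial
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd
```

A calibration facility estimates `k1, k2, p1, p2` (and interior orientation) from imagery of targets at known positions; plotting instruments then invert this model.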

  6. Technical Note: Improving proton stopping power ratio determination for a deformable silicone-based 3D dosimeter using dual energy CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taasti, Vicki Trier, E-mail: victaa@rm.dk; Høye, Ellen Marie; Hansen, David Christoffer

Purpose: The aim of this study was to investigate whether the stopping power ratio (SPR) of a deformable, silicone-based 3D dosimeter could be determined more accurately using dual energy (DE) CT compared to conventional methods based on single energy (SE) CT. The use of SECT combined with the stoichiometric calibration method was therefore compared to DECT-based determination. Methods: The SPR of the dosimeter was estimated based on its Hounsfield units (HUs) in both a SECT image and a DECT image set. The stoichiometric calibration method was used for converting the HU in the SECT image to an SPR value for the dosimeter, while two published SPR calibration methods for dual energy were applied to the DECT images. Finally, the SPR of the dosimeter was measured in a 60 MeV proton beam by quantifying the range difference with and without the dosimeter in the beam path. Results: The SPR determined from SECT and the stoichiometric method was 1.10, compared to 1.01 with both DECT calibration methods. The measured SPR for the dosimeter material was 0.97. Conclusions: The SPR of the dosimeter was overestimated by 13% using the stoichiometric method and by 3% when using DECT. If the stoichiometric method is applied for this dosimeter, the HU of the dosimeter must be manually changed in the treatment planning system in order to give a correct SPR estimate. Using a wrong SPR value will cause differences between the calculated and the delivered treatment plans.

  7. Uncertainty Estimate for the Outdoor Calibration of Solar Pyranometers: A Metrologist Perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reda, I.; Myers, D.; Stoffel, T.

    2008-12-01

    Pyranometers are used outdoors to measure solar irradiance. By design, this type of radiometer can measure the total hemispheric (global) or diffuse (sky) irradiance when the detector is unshaded or shaded from the sun disk, respectively. These measurements are used in a variety of applications including solar energy conversion, atmospheric studies, agriculture, and materials science. Proper calibration of pyranometers is essential to ensure measurement quality. This paper describes a step-by-step method for calculating and reporting the uncertainty of the calibration, using the guidelines of the ISO 'Guide to the Expression of Uncertainty in Measurement' (GUM), as applied to the pyranometer calibration procedures used at the National Renewable Energy Laboratory (NREL). The NREL technique characterizes the responsivity of a pyranometer as a function of the zenith angle, as well as reporting a single calibration responsivity value for a zenith angle of 45°. The uncertainty analysis shows that a lower uncertainty can be achieved by using the responsivity function of a pyranometer determined as a function of zenith angle, in lieu of just using the average value at 45°. By presenting the contribution of each uncertainty source to the total uncertainty, users will be able to troubleshoot and improve their calibration process. The uncertainty analysis method can also be used to determine the uncertainty of different calibration techniques and applications, such as deriving the uncertainty of field measurements.
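
    The GUM workflow the paper follows combines independent standard uncertainties in quadrature and scales by a coverage factor; a minimal illustration (not NREL's actual uncertainty budget):

```python
import math

def combined_standard_uncertainty(components):
    # GUM: root-sum-of-squares of independent standard uncertainty components
    return math.sqrt(sum(u * u for u in components))

def expanded_uncertainty(u_c, k=2.0):
    # Coverage factor k = 2 corresponds to roughly 95% coverage
    # for a normally distributed measurand
    return k * u_c
```

    A real pyranometer budget would list components such as the reference-radiometer, zenith-response, and data-logger uncertainties before combining them this way.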

  8. Combined use of a priori data for fast system self-calibration of a non-rigid multi-camera fringe projection system

    NASA Astrophysics Data System (ADS)

    Stavroulakis, Petros I.; Chen, Shuxiao; Sims-Waterhouse, Danny; Piano, Samanta; Southon, Nicholas; Bointon, Patrick; Leach, Richard

    2017-06-01

    In non-rigid fringe projection 3D measurement systems, where either the camera or projector setup can change significantly between measurements or the object needs to be tracked, self-calibration has to be carried out frequently to keep the measurements accurate. In fringe projection systems, it is common to calibrate the camera(s) in terms of extrinsic and intrinsic parameters using methods developed initially for photogrammetry. To calibrate the projector(s), an extra correspondence between a pre-calibrated camera and an image created by the projector is performed. These recalibration steps are usually time consuming and involve the measurement of calibrated patterns on planes before measurement of the actual object can resume after a camera or projector has been moved, and hence do not facilitate fast 3D measurement of objects when frequent experimental setup changes are necessary. By employing and combining a priori information via inverse rendering, on-board sensors and deep learning, and leveraging a graphics processing unit (GPU), we assess a fine camera pose estimation method based on optimising the rendering of a model of the scene and the object to match the view from the camera. We find that the success of this calibration pipeline can be greatly improved by using adequate a priori information from the aforementioned sources.

  9. Web-based assessments of physical activity in youth: considerations for design and scale calibration.

    PubMed

    Saint-Maurice, Pedro F; Welk, Gregory J

    2014-12-01

    This paper describes the design and methods involved in calibrating a Web-based self-report instrument to estimate physical activity behavior. The limitations of self-report measures are well known, but calibration methods enable the reported information to be equated to estimates obtained from objective data. This paper summarizes design considerations for effective development and calibration of physical activity self-report measures. Each of the design considerations is put into context and followed by a practical application based on our ongoing calibration research with a promising online self-report tool called the Youth Activity Profile (YAP). We first describe the overall concept of calibration and how this influences the selection of appropriate self-report tools for this population. We point out the advantages and disadvantages of different monitoring devices since the choice of the criterion measure and the strategies used to minimize error in the measure can dramatically improve the quality of the data. We summarize strategies to ensure quality control in data collection and discuss analytical considerations involved in group- vs individual-level inference. For cross-validation procedures, we describe the advantages of equivalence testing procedures that directly test and quantify agreement. Lastly, we introduce the unique challenges encountered when transitioning from paper to a Web-based tool. The Web offers considerable potential for broad adoption but an iterative calibration approach focused on continued refinement is needed to ensure that estimates are generalizable across individuals, regions, seasons and countries.
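
    For the cross-validation step, equivalence testing asks whether the self-report estimate agrees with the objective criterion within a pre-specified margin. A generic two-one-sided-tests-style sketch using a normal approximation (this is an illustration, not the YAP study's exact procedure, which would use the t distribution and study-specific margins):

```python
import math
import statistics

def mean_equivalent(differences, margin, z=1.645):
    # Declare equivalence if the 90% confidence interval for the mean
    # difference (self-report minus criterion) lies entirely within
    # +/- margin. Normal approximation for simplicity.
    m = statistics.mean(differences)
    se = statistics.stdev(differences) / math.sqrt(len(differences))
    return (m - z * se) > -margin and (m + z * se) < margin
```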

  10. Impact of heart disease and calibration interval on accuracy of pulse transit time-based blood pressure estimation.

    PubMed

    Ding, Xiaorong; Zhang, Yuanting; Tsang, Hon Ki

    2016-02-01

    Continuous blood pressure (BP) measurement without a cuff is advantageous for the early detection and prevention of hypertension. The pulse transit time (PTT) method has proven to be promising for continuous cuffless BP measurement. However, the problem of accuracy is one of the most challenging aspects before the large-scale clinical application of this method. Since PTT-based BP estimation relies primarily on the relationship between PTT and BP under certain assumptions, estimation accuracy will be affected by cardiovascular disorders that impair this relationship and by the calibration frequency, which may violate these assumptions. This study sought to examine the impact of heart disease and the calibration interval on the accuracy of PTT-based BP estimation. The accuracy of a PTT-BP algorithm was investigated in 37 healthy subjects and 48 patients with heart disease at different calibration intervals, namely 15 min, 2 weeks, and 1 month after initial calibration. The results showed that the overall accuracy of systolic BP estimation was significantly lower in subjects with heart disease than in healthy subjects, but diastolic BP estimation was more accurate in patients than in healthy subjects. The accuracy of systolic and diastolic BP estimation becomes less reliable with longer calibration intervals. These findings demonstrate that both heart disease and the calibration interval can influence the accuracy of PTT-based BP estimation and should be taken into consideration to improve estimation accuracy.
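
    PTT-based estimation assumes a fixed functional relationship between PTT and BP whose coefficients are set at calibration, which is why both vascular changes and calibration staleness degrade accuracy. One commonly cited logarithmic form can be sketched as follows (the model form and two-point calibration scheme are illustrative, not this study's algorithm):

```python
import math

def calibrate_ptt_bp(ptt1, bp1, ptt2, bp2):
    # Fit BP = a * ln(PTT) + b from two cuff-based calibration pairs.
    a = (bp1 - bp2) / (math.log(ptt1) - math.log(ptt2))
    b = bp1 - a * math.log(ptt1)
    return a, b

def estimate_bp(ptt, a, b):
    # Cuffless estimate from a measured pulse transit time (seconds)
    return a * math.log(ptt) + b
```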

  11. Accuracy of subcutaneous continuous glucose monitoring in critically ill adults: improved sensor performance with enhanced calibrations.

    PubMed

    Leelarathna, Lalantha; English, Shane W; Thabit, Hood; Caldwell, Karen; Allen, Janet M; Kumareswaran, Kavita; Wilinska, Malgorzata E; Nodale, Marianna; Haidar, Ahmad; Evans, Mark L; Burnstein, Rowan; Hovorka, Roman

    2014-02-01

    Accurate real-time continuous glucose measurements may improve glucose control in the critical care unit. We evaluated the accuracy of the FreeStyle® Navigator® (Abbott Diabetes Care, Alameda, CA) subcutaneous continuous glucose monitoring (CGM) device in critically ill adults using two methods of calibration. In a randomized trial, paired CGM and reference glucose (hourly arterial blood glucose [ABG]) were collected over a 48-h period from 24 adults with critical illness (mean±SD age, 60±14 years; mean±SD body mass index, 29.6±9.3 kg/m²; mean±SD Acute Physiology and Chronic Health Evaluation score, 12±4 [range, 6-19]) and hyperglycemia. In 12 subjects, the CGM device was calibrated at variable intervals of 1-6 h using ABG. In the other 12 subjects, the sensor was calibrated according to the manufacturer's instructions (at 1, 2, 10, and 24 h) using arterial blood and the built-in point-of-care glucometer. In total, 1,060 CGM-ABG pairs were analyzed over the glucose range from 4.3 to 18.8 mmol/L. With enhanced calibration performed at a median (interquartile range) interval of 169 (122-213) min, the absolute relative deviation was lower (7.0% [3.5, 13.0] vs. 12.8% [6.3, 21.8], P<0.001), and the percentage of points in Clarke error grid Zone A was higher (87.8% vs. 70.2%). Accuracy of the Navigator CGM device during critical illness was comparable to that observed in non-critical care settings. Further significant improvements in accuracy may be obtained by frequent calibrations with ABG measurements.
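
    The study's headline accuracy metric, the median absolute relative deviation of sensor readings against arterial reference glucose, is straightforward to compute:

```python
import statistics

def median_ard(cgm_values, reference_values):
    # Median absolute relative deviation (%) of CGM vs. reference glucose pairs
    ards = [abs(c - r) / r * 100.0 for c, r in zip(cgm_values, reference_values)]
    return statistics.median(ards)
```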

  12. Improved Quantitative Analysis of Ion Mobility Spectrometry by Chemometric Multivariate Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fraga, Carlos G.; Kerr, Dayle; Atkinson, David A.

    2009-09-01

    Traditional peak-area calibration and the multivariate calibration methods of principal component regression (PCR) and partial least squares (PLS), including unfolded PLS (U-PLS) and multi-way PLS (N-PLS), were evaluated for the quantification of 2,4,6-trinitrotoluene (TNT) and cyclo-1,3,5-trimethylene-2,4,6-trinitramine (RDX) in Composition B samples analyzed by temperature step desorption ion mobility spectrometry (TSD-IMS). The true TNT and RDX concentrations of eight Composition B samples were determined by high performance liquid chromatography with UV absorbance detection. Most of the Composition B samples were found to have distinct TNT and RDX concentrations. Applying PCR and PLS to the exact same IMS spectra used for the peak-area study improved quantitative accuracy and precision approximately 3 to 5 fold and 2 to 4 fold, respectively. This in turn improved the probability of correctly identifying Composition B samples based upon the estimated RDX and TNT concentrations from 11% with peak area to 44% with PCR and 89% with PLS. This improvement increases the potential of obtaining forensic information from IMS analyzers by providing some ability to differentiate or match Composition B samples based on their TNT and RDX concentrations.
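
    PCR, one of the multivariate methods evaluated, regresses the response on leading principal-component scores of the spectra rather than on a single peak area; a minimal NumPy sketch of the idea (not the authors' implementation):

```python
import numpy as np

def pcr_fit(X, y, n_components):
    # Principal component regression: project mean-centered spectra onto
    # their leading principal components, then regress y on the scores.
    x_mean = X.mean(axis=0)
    y_mean = y.mean()
    Xc = X - x_mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components].T          # loadings (p x k)
    T = Xc @ V                       # scores (n x k)
    coef, *_ = np.linalg.lstsq(T, y - y_mean, rcond=None)
    return x_mean, y_mean, V, coef

def pcr_predict(model, X):
    x_mean, y_mean, V, coef = model
    return (X - x_mean) @ V @ coef + y_mean
```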

  13. Research on self-calibration biaxial autocollimator based on ZYNQ

    NASA Astrophysics Data System (ADS)

    Guo, Pan; Liu, Bingguo; Liu, Guodong; Zhong, Yao; Lu, Binghui

    2018-01-01

    Commercial autocollimators mainly rely on external computers or networked electronic devices; their precision, measurement range and resolution are limited, and external displays are needed to show images in real time. Moreover, no autocollimator on the market provides real-time calibration. In this paper, we propose a biaxial autocollimator based on the ZYNQ embedded platform to solve these problems. Firstly, the traditional optical system is improved and a light path is added for real-time calibration. Then, to improve measurement speed, an embedded platform based on ZYNQ that combines a Linux operating system with the autocollimator is designed; image acquisition, image processing, image display and a Qt-based man-machine interface are implemented on it. Finally, the system realizes two-dimensional small-angle measurement. Experimental results showed that the proposed method can improve angle measurement accuracy. At close range (1.5 m), the standard deviation is 0.15" in the horizontal direction of the image and 0.24" in the vertical direction; at long range (10 m), the repeatability of measurement is improved by 0.12 in the horizontal direction and 0.3 in the vertical direction.
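
    The small-angle conversion at the heart of any autocollimator follows the standard autocollimation relation: a mirror tilt α deviates the reflected beam by 2α, moving the focal-plane spot by d = f·tan(2α). A sketch of that conversion (function names are ours; this is not the paper's ZYNQ-specific processing):

```python
import math

def mirror_tilt_rad(displacement_mm, focal_length_mm):
    # Invert d = f * tan(2 * alpha) to recover the mirror tilt alpha
    return math.atan(displacement_mm / focal_length_mm) / 2.0

def arcsec(rad):
    # Convert radians to arcseconds for reporting
    return rad * 180.0 / math.pi * 3600.0
```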

  14. Retrieving Storm Electric Fields from Aircraft Field Mill Data. Part 1: Theory

    NASA Technical Reports Server (NTRS)

    Koshak, W. J.

    2006-01-01

    It is shown that the problem of retrieving storm electric fields from an aircraft instrumented with several electric field mill sensors can be expressed in terms of a standard Lagrange multiplier optimization problem. The method naturally removes aircraft charge from the retrieval process without having to use a high voltage stinger and linearly combined mill data values. It allows a variety of user-supplied physical constraints (the so-called side constraints in the theory of Lagrange multipliers) and also helps improve absolute calibration. Additionally, this paper introduces an alternate way of performing the absolute calibration of an aircraft that has some benefits over conventional analyses. It is accomplished by using the time derivatives of mill and pitch data for a pitch down maneuver performed at high (greater than 1 km) altitude. In Part II of this study, the above methods are tested and then applied to complete a full calibration of a Citation aircraft.
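
    In the linear case, the Lagrange multiplier formulation described here reduces to a least-squares problem with equality side constraints, solved through its KKT system; a generic NumPy sketch (not the paper's mill-specific matrices):

```python
import numpy as np

def constrained_lstsq(A, b, C, d):
    # Minimize ||A x - b||^2 subject to C x = d via the KKT system
    # [2 A^T A  C^T] [x     ]   [2 A^T b]
    # [C        0  ] [lambda] = [d      ]
    n = A.shape[1]
    m = C.shape[0]
    K = np.block([[2.0 * A.T @ A, C.T], [C, np.zeros((m, m))]])
    rhs = np.concatenate([2.0 * A.T @ b, d])
    return np.linalg.solve(K, rhs)[:n]
```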

  15. PowderSim: Lagrangian Discrete and Mesh-Free Continuum Simulation Code for Cohesive Soils

    NASA Technical Reports Server (NTRS)

    Johnson, Scott; Walton, Otis; Settgast, Randolph

    2013-01-01

    PowderSim is a calculation tool that combines a discrete-element method (DEM) module, including calibrated interparticle-interaction relationships, with a mesh-free, continuum, SPH (smoothed-particle hydrodynamics) based module that utilizes enhanced, calibrated, constitutive models capable of mimicking both large deformations and the flow behavior of regolith simulants and lunar regolith under conditions anticipated during in situ resource utilization (ISRU) operations. The major innovation introduced in PowderSim is to use a mesh-free method (SPH-based) with a calibrated and slightly modified critical-state soil mechanics constitutive model to extend the ability of the simulation tool to also address full-scale engineering systems in the continuum sense. The PowderSim software maintains the ability to address particle-scale problems, like size segregation, in selected regions with a traditional DEM module, which has improved contact physics and electrostatic interaction models.

  16. A Case Study on a Combination NDVI Forecasting Model Based on the Entropy Weight Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Shengzhi; Ming, Bo; Huang, Qiang

    It is critically meaningful to accurately predict NDVI (Normalized Difference Vegetation Index), which helps guide regional ecological remediation and environmental management. In this study, a combination forecasting model (CFM) was proposed to improve the performance of NDVI predictions in the Yellow River Basin (YRB) based on three individual forecasting models, i.e., the Multiple Linear Regression (MLR), Artificial Neural Network (ANN), and Support Vector Machine (SVM) models. The entropy weight method was employed to determine the weight coefficient for each individual model depending on its predictive performance. Results showed that: (1) ANN exhibits the highest fitting capability among the four forecasting models in the calibration period, whilst its generalization ability becomes weak in the validation period; MLR has a poor performance in both calibration and validation periods; the predicted results of CFM in the calibration period have the highest stability; (2) CFM generally outperforms all individual models in the validation period, and can improve the reliability and stability of predicted results by combining the strengths while reducing the weaknesses of individual models; (3) the performances of all forecasting models are better in dense vegetation areas than in sparse vegetation areas.
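
    The entropy weight method assigns larger combination weights to models whose performance scores carry more information (lower entropy) across samples; a minimal sketch (the score matrix is illustrative, not the YRB data):

```python
import numpy as np

def entropy_weights(scores):
    # scores: (n_samples, n_models) matrix of positive performance scores.
    # Columns with near-uniform scores have entropy ~1 and get weight ~0.
    P = scores / scores.sum(axis=0)
    n = scores.shape[0]
    e = -(P * np.log(P)).sum(axis=0) / np.log(n)
    d = 1.0 - e                      # degree of divergence per model
    return d / d.sum()
```

    The combined forecast is then the weighted sum of the individual model forecasts.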

  17. Calibration of the DRASTIC ground water vulnerability mapping method

    USGS Publications Warehouse

    Rupert, M.G.

    2001-01-01

    Ground water vulnerability maps developed using the DRASTIC method have been produced in many parts of the world. Comparisons of those maps with actual ground water quality data have shown that the DRASTIC method is typically a poor predictor of ground water contamination. This study significantly improved the effectiveness of a modified DRASTIC ground water vulnerability map by calibrating the point rating schemes to actual ground water quality data by using nonparametric statistical techniques and a geographic information system. Calibration was performed by comparing data on nitrite plus nitrate as nitrogen (NO2 + NO3-N) concentrations in ground water to land-use, soils, and depth to first-encountered ground water data. These comparisons showed clear statistical differences between NO2 + NO3-N concentrations and the various categories. Ground water probability point ratings for NO2 + NO3-N contamination were developed from the results of these comparisons, and a probability map was produced. This ground water probability map was then correlated with an independent set of NO2 + NO3-N data to demonstrate its effectiveness in predicting elevated NO2 + NO3-N concentrations in ground water. This correlation demonstrated that the probability map was effective, but a vulnerability map produced with the uncalibrated DRASTIC method in the same area and using the same data layers was not effective. Considerable time and expense have been expended to develop ground water vulnerability maps with the DRASTIC method. This study demonstrates a cost-effective method to improve and verify the effectiveness of ground water vulnerability maps.

  18. Calibration method for a large-scale structured light measurement system.

    PubMed

    Wang, Peng; Wang, Jianmei; Xu, Jing; Guan, Yong; Zhang, Guanglie; Chen, Ken

    2017-05-10

    The structured light method is an effective non-contact measurement approach. The calibration greatly affects the measurement precision of structured light systems. To construct a large-scale structured light system with high accuracy, a large-scale and precise calibration gauge is always required, which leads to an increased cost. To this end, in this paper, a calibration method with a planar mirror is proposed to reduce the calibration gauge size and cost. An out-of-focus camera calibration method is also proposed to overcome the defocusing problem caused by the shortened distance during the calibration procedure. The experimental results verify the accuracy of the proposed calibration method.

  19. Improving a complex finite-difference ground water flow model through the use of an analytic element screening model

    USGS Publications Warehouse

    Hunt, R.J.; Anderson, M.P.; Kelson, V.A.

    1998-01-01

    This paper demonstrates that analytic element models have potential as powerful screening tools that can facilitate or improve calibration of more complicated finite-difference and finite-element models. We demonstrate how a two-dimensional analytic element model was used to identify errors in a complex three-dimensional finite-difference model caused by incorrect specification of boundary conditions. An improved finite-difference model was developed using boundary conditions developed from a far-field analytic element model. Calibration of a revised finite-difference model was achieved using fewer zones of hydraulic conductivity and lake bed conductance than the original finite-difference model. Calibration statistics were also improved in that simulated base-flows were much closer to measured values. The improved calibration is due mainly to improved specification of the boundary conditions made possible by first solving the far-field problem with an analytic element model.

  20. Development of high power UV irradiance meter calibration device

    NASA Astrophysics Data System (ADS)

    Xia, Ming; Gao, Jianqiang; Yin, Dejin; Li, Tiecheng

    2016-09-01

    With the rapid development of China's economy, many industries have growing requirements for UV light applications: machinery and aircraft manufacturing use high power UV light for inspection, the IT industry uses it for curing during component assembly, and the building materials, ink, paint and other industries use it for material aging tests. These industries employ many high power UV irradiance measuring instruments that need to be traceable. Most are imported UV radiation meters with large range, wide wavelength coverage and high accuracy, exceeding our existing calibration capability, so expanding the measuring range and improving the measurement accuracy of the UV irradiance calibration device is a pressing task. The newly developed high power UV irradiance calibration device is mainly composed of a high power UV light source, UV filter, condenser, UV light guide, optical alignment system, and a standard cavity absolute radiometer. The device uses the optical alignment system to form a uniform radiation field. The standard is a cavity absolute radiometer operating by the electrical substitution method: the applied DC electric power at a heating wire on the receiver is adjusted and measured until it is equivalent to the thermo-electromotive force generated by the optical radiation power, achieving absolute optical radiation measurement. This is a commonly used and effective method for accurate measurement of optical irradiation. The measuring range of the calibration device is (0.2 to 200) mW/cm², and the uncertainty of the measurement results reaches 2.5% (k=2).
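
    Electrical substitution equates the measured DC heating power with the absorbed optical power, so irradiance follows from the electrical quantities and the receiver aperture area; a sketch with hypothetical values (function name is ours):

```python
def substituted_irradiance(volts, amps, aperture_cm2):
    # Electrical substitution: the DC heating power that reproduces the
    # receiver's thermo-electromotive force equals the absorbed optical power.
    power_mw = volts * amps * 1000.0
    return power_mw / aperture_cm2   # irradiance in mW/cm^2
```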

  1. Calibration of Passive Samplers for the Monitoring of Pharmaceuticals in Water-Sampling Rate Variation.

    PubMed

    Męczykowska, Hanna; Kobylis, Paulina; Stepnowski, Piotr; Caban, Magda

    2017-05-04

    Passive sampling is one of the most efficient methods of monitoring pharmaceuticals in environmental water. The reliability of the process relies on a correctly performed calibration experiment and a well-defined sampling rate (Rs) for target analytes. Therefore, in this review the state-of-the-art methods of passive sampler calibration for the most popular pharmaceuticals: antibiotics, hormones, β-blockers and non-steroidal anti-inflammatory drugs (NSAIDs), along with the sampling rate variation, were presented. The advantages and difficulties in laboratory and field calibration were pointed out, according to the needs of control of the exact conditions. Sampling rate calculating equations and all the factors affecting the Rs value - temperature, flow, pH, salinity of the donor phase and biofouling - were discussed. Moreover, various calibration parameters gathered from the literature published in the last 16 years, including the device types, were tabled and compared. What is evident is that the sampling rate values for pharmaceuticals are impacted by several factors, whose influence is still unclear and unpredictable, while there is a big gap in experimental data. It appears that the calibration procedure needs to be improved; for example, there is a significant deficiency of PRCs (Performance Reference Compounds) for pharmaceuticals. One of the suggestions is to introduce correction factors for Rs values estimated in laboratory conditions.
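
    In the kinetic uptake regime assumed by most tabulated calibrations, the accumulated mass follows Ms = Rs·Cw·t, so a laboratory exposure at known water concentration yields Rs directly; a sketch (units and names are ours):

```python
def sampling_rate(mass_accumulated_ug, water_concentration_ug_per_l, days):
    # Kinetic-regime passive sampler model: M_s = R_s * C_w * t,
    # rearranged to give the sampling rate R_s in L/day.
    return mass_accumulated_ug / (water_concentration_ug_per_l * days)
```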

  2. Validation and calibration of structural models that combine information from multiple sources.

    PubMed

    Dahabreh, Issa J; Wong, John B; Trikalinos, Thomas A

    2017-02-01

    Mathematical models that attempt to capture structural relationships between their components and combine information from multiple sources are increasingly used in medicine. Areas covered: We provide an overview of methods for model validation and calibration and survey studies comparing alternative approaches. Expert commentary: Model validation entails a confrontation of models with data, background knowledge, and other models, and can inform judgments about model credibility. Calibration involves selecting parameter values to improve the agreement of model outputs with data. When the goal of modeling is quantitative inference on the effects of interventions or forecasting, calibration can be viewed as estimation. This view clarifies issues related to parameter identifiability and facilitates formal model validation and the examination of consistency among different sources of information. In contrast, when the goal of modeling is the generation of qualitative insights about the modeled phenomenon, calibration is a rather informal process for selecting inputs that result in model behavior that roughly reproduces select aspects of the modeled phenomenon and cannot be equated to an estimation procedure. Current empirical research on validation and calibration methods consists primarily of methodological appraisals or case-studies of alternative techniques and cannot address the numerous complex and multifaceted methodological decisions that modelers must make. Further research is needed on different approaches for developing and validating complex models that combine evidence from multiple sources.

  3. Method for outlier detection: a tool to assess the consistency between laboratory data and ultraviolet-visible absorbance spectra in wastewater samples.

    PubMed

    Zamora, D; Torres, A

    2014-01-01

    Reliable estimations of the evolution of water quality parameters using in situ technologies make it possible to follow the operation of a wastewater treatment plant (WWTP), as well as to improve the understanding and control of the operation, especially in the detection of disturbances. However, ultraviolet (UV)-Vis sensors have to be calibrated by means of a local fingerprint laboratory reference concentration-value data-set. The detection of outliers in these data-sets is therefore important. This paper presents a method for detecting outliers in UV-Vis absorbances coupled to water quality reference laboratory concentrations for samples used for calibration purposes. Application to samples from the influent of the San Fernando WWTP (Medellín, Colombia) is shown. After the removal of outliers, improvements in the predictability of the influent concentrations using absorbance spectra were found.
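
    The abstract does not specify the outlier criterion; as a generic illustration of the idea, an interquartile-range rule applied to calibration values or residuals might look like:

```python
import numpy as np

def iqr_outliers(values, k=1.5):
    # Flag indices falling outside [Q1 - k*IQR, Q3 + k*IQR]
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [i for i, v in enumerate(values) if v < lo or v > hi]
```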

  4. Calibration of EBT2 film using a red-channel PDD method in combination with a modified three-channel technique.

    PubMed

    Chang, Liyun; Ho, Sheng-Yow; Lee, Tsair-Fwu; Yeh, Shyh-An; Ding, Hueisch-Jy; Chen, Pang-Yu

    2015-10-01

    Ashland Inc. EBT2 and EBT3 films are widely used in quality assurance for radiation therapy; however, there remains a relatively high degree of uncertainty [B. Hartmann, M. Martisikova, and O. Jakel, "Homogeneity of Gafchromic EBT2 film," Med. Phys. 37, 1753-1756 (2010)]. Micke et al. (2011) recently improved the spatial homogeneity using all color channels of a flatbed scanner; however, van Hoof et al. (2012) pointed out that the corrected nonuniformity still requires further investigation for larger fields. To reduce the calibration errors and the uncertainty, the authors propose a new red-channel percentage-depth-dose method in combination with a modified three-channel technique. For ease of comparison, the EBT2 film image used in the authors' previous study (2012) was reanalyzed using different approaches. 6-MV photon beams were delivered to two different films at two different beam-on times, resulting in absorbed doses ranging from approximately 30 to 300 cGy at the vertical midline of the film, which was set to be coincident with the central axis of the beam. The film was tightly sandwiched in a 30 × 30 × 30 cm polystyrene phantom, and the pixel values for the red, green, and blue channels were extracted from 234 points on the central axis of the beam and compared with the corresponding depth doses. The film was first calibrated using the multichannel method proposed by Micke et al. (2010), accounting for nonuniformities in the scanner. After eliminating the scanner and dose-independent nonuniformities, the film was recalibrated via the dose-dependent optical density of the red channel and fitted to a power function. This calibration was verified via comparisons of the dose profiles extracted from the films, three of which were exposed to a 60° physical wedge field and three to composite fields, with all reference doses measured in a water phantom. A correction for optical attenuation was implemented, and treatment plans of intensity modulated radiation therapy and volumetric modulated arc therapy were evaluated. The method described here demonstrated improved accuracy with reduced uncertainty. The relative error compared with the measurements of a water phantom was less than 1%, and the overall calibration uncertainty was less than 2%. Verification tests revealed that the results were close to those of the authors' previous study, and all differences were within 3%, except those with a high-dose gradient. The gamma pass rates (2%/2 mm) of the treatment plan evaluated using the method described here were greater than 99%, and no obvious stripe patterns were observed in the dose-difference maps. Spatial homogeneity was significantly improved via the calibration method described here. This technique is both convenient and time-efficient because it does not require cutting the film, and only two exposures are necessary.
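
    The red-channel recalibration fits a power function to the dose-dependent optical density; a log-log least-squares sketch of such a fit (dose = a·netOD^b is an assumed parameterization for illustration, not necessarily the authors' exact form):

```python
import numpy as np

def fit_power(net_od, dose):
    # Fit dose = a * netOD**b by linear least squares in log-log space:
    # log(dose) = log(a) + b * log(netOD)
    b, log_a = np.polyfit(np.log(net_od), np.log(dose), 1)
    return np.exp(log_a), b
```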

  5. Collaborative study for the validation of an improved HPLC assay for recombinant IFN-alfa-2.

    PubMed

    Jönsson, K H; Daas, A; Buchheit, K H; Terao, E

    2016-01-01

    The current European Pharmacopoeia (Ph. Eur.) texts for Interferon (IFN)-alfa-2 include a nonspecific photometric protein assay using albumin as calibrator and a highly variable cell-based assay for the potency determination of the protective effects. A request was expressed by the Official Medicines Control Laboratories (OMCLs) for improved methods for the batch control of recombinant interferon alfa-2 bulk and market surveillance testing of finished products, including those formulated with Human Serum Albumin (HSA). A HPLC method was developed at the Medical Products Agency (MPA, Sweden) for the testing of IFN-alfa-2 products. An initial collaborative study run under the Biological Standardisation Programme (BSP; study code BSP039) revealed the need for minor changes to improve linearity of the calibration curves, assay reproducibility and robustness. The goal of the collaborative study, coded BSP071, was to transfer and further validate this improved HPLC method. Ten laboratories participated in the study. Four marketed IFN-alfa-2 preparations (one containing HSA) together with the Ph. Eur. Chemical Reference Substance (CRS) for IFN-alfa-2a and IFN-alfa-2b, and in-house reference standards from two manufacturers were used for the quantitative assay. The modified method was successfully transferred to all laboratories despite local variation in equipment. The resolution between the main and the oxidised forms of IFN-alfa-2 was improved compared to the results from the BSP039 study. The improved method even allowed partial resolution of an extra peak after the principal peak. Symmetry of the main IFN peak was acceptable for all samples in all laboratories. Calibration curves established with the Ph. Eur. IFN-alfa-2a and IFN-alfa-2b CRSs showed excellent linearity with intercepts close to the origin and coefficients of determination greater than 0.9995. 
Assay repeatability, intermediate precision and reproducibility varied with the tested sample within acceptable ranges. Test accuracy estimated by comparing the values obtained by the participants to the declared contents determined by the manufacturers was good despite the absence of a common reference preparation. In conclusion, the present study showed that the new method is suitable, reproducible and transferable. Proposals for the revision of Ph. Eur. texts are presented.

  6. A projector calibration method for monocular structured light system based on digital image correlation

    NASA Astrophysics Data System (ADS)

    Feng, Zhixin

    2018-02-01

    Projector calibration is crucial for a camera-projector three-dimensional (3-D) structured light measurement system, which has one camera and one projector. In this paper, a novel projector calibration method based on digital image correlation is proposed. In the method, the projector is viewed as an inverse camera, and a plane calibration board with feature points is used to calibrate the projector. During the calibration process, a random speckle pattern is projected onto the calibration board at different orientations to establish the correspondences between projector images and camera images. Thereby, a dataset for projector calibration is generated, and the projector can be calibrated using a well-established camera calibration algorithm. The experimental results confirm that the proposed method is accurate and reliable for projector calibration.
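    Since the projector is treated as an inverse camera imaging a plane board, the core numerical step is estimating plane-to-image homographies from point correspondences, as in Zhang-style camera calibration. The sketch below shows only that homography step via the direct linear transform (DLT); it is a minimal illustration, not the paper's implementation, and the function names are mine.

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 homography H mapping src -> dst (Nx2 arrays, N >= 4)
    with the direct linear transform (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on H.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)       # null-space vector = flattened H
    return H / H[2, 2]

def apply_homography(H, pts):
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]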

  7. A controlled experiment in ground water flow model calibration

    USGS Publications Warehouse

    Hill, M.C.; Cooley, R.L.; Pollock, D.W.

    1998-01-01

    Nonlinear regression was introduced to ground water modeling in the 1970s, but has been used very little to calibrate numerical models of complicated ground water systems. Apparently, nonlinear regression is thought by many to be incapable of addressing such complex problems. With what we believe to be the most complicated synthetic test case used for such a study, this work investigates using nonlinear regression in ground water model calibration. Results of the study fall into two categories. First, the study demonstrates how systematic use of a well designed nonlinear regression method can indicate the importance of different types of data and can lead to successive improvement of models and their parameterizations. Our method differs from previous methods presented in the ground water literature in that (1) weighting is more closely related to expected data errors than is usually the case; (2) defined diagnostic statistics allow for more effective evaluation of the available data, the model, and their interaction; and (3) prior information is used more cautiously. Second, our results challenge some commonly held beliefs about model calibration. For the test case considered, we show that (1) field measured values of hydraulic conductivity are not as directly applicable to models as their use in some geostatistical methods imply; (2) a unique model does not necessarily need to be identified to obtain accurate predictions; and (3) in the absence of obvious model bias, model error was normally distributed. The complexity of the test case involved implies that the methods used and conclusions drawn are likely to be powerful in practice.
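    The weighting idea in point (1) — tying each residual's weight to the expected error of that observation — can be sketched with a generic weighted nonlinear fit. The drawdown-style forward model, parameter values, and noise levels below are all invented for illustration; this is not the study's actual model.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical forward model: steady drawdown s = Q/(4*pi*T) * ln(R/r),
# with transmissivity T the parameter to estimate (Q, R assumed known).
def model(T, r, Q=0.01, R=500.0):
    return Q / (4.0 * np.pi * T) * np.log(R / r)

def weighted_residuals(theta, r, s_obs, sigma):
    # Dividing by sigma weights each residual by its expected data error.
    return (model(theta[0], r) - s_obs) / sigma

rng = np.random.default_rng(0)
r = np.array([10.0, 30.0, 60.0, 120.0, 250.0])   # observation radii, m
sigma = np.full_like(r, 0.005)                    # expected head errors, m
T_true = 1.5e-3
s_obs = model(T_true, r) + rng.normal(0.0, sigma)

fit = least_squares(weighted_residuals, x0=[1e-3], args=(r, s_obs, sigma))
T_hat = fit.x[0]
```

The same weighted-residual function also yields the diagnostic statistics mentioned in the abstract (e.g. standardized residuals) essentially for free.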

  8. A New First Break Picking for Three-Component VSP Data Using Gesture Sensor and Polarization Analysis

    PubMed Central

    Li, Huailiang; Tuo, Xianguo; Shen, Tong; Wang, Ruili; Courtois, Jérémie; Yan, Minhao

    2017-01-01

    A new first break picking method for three-component (3C) vertical seismic profiling (VSP) data is proposed to improve the estimation accuracy of first arrivals; it adopts gesture detection calibration and polarization analysis based on the eigenvalues of the covariance matrix. This study addresses the problem that VSP data must be calibrated using the azimuth and dip angle of the geophones: because geophone orientation is random when the tool is deployed in a borehole, first break picking can be unreliable. Initially, a gesture-measuring module is integrated into the seismometer to rapidly obtain high-precision gesture data (azimuth and dip angle information). By re-rotating and re-projecting with the measured gesture data, each component of the seismic dataset is calibrated to the direction consistent with the vibrator shot orientation. Calibrating each component waveform to the same virtual reference component improves the reliability of the original data, and the corresponding first break is adjusted accordingly. After the 3C data calibration, an automatic first break picking algorithm based on the autoregressive Akaike information criterion (AR-AIC) is adopted to evaluate the first break. Furthermore, to enhance the accuracy of first break picking, the polarization attributes of the 3C VSP recordings, computed from the maximum eigenvalue of the covariance matrix, are applied to constrain the scanning segment of the AR-AIC picker. The contrast between pre-calibration and post-calibration results on field data shows that the method improves the quality of the 3C VSP waveform, which is favorable for subsequent picking. Compared to the short-term average to long-term average (STA/LTA) and plain AR-AIC algorithms, the proposed method, combined with polarization analysis, significantly reduces the picking error.
Applications of actual field experiments have also confirmed that the proposed method may be more suitable for the first break picking of 3C VSP. Test using synthesized 3C seismic data with low SNR indicates that the first break is picked with an error between 0.75 ms and 1.5 ms. Accordingly, the proposed method can reduce the picking error for 3C VSP data. PMID:28925981
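    The AIC-style picker evaluates a change-point statistic along the trace and picks its minimum. A simplified, variance-based AIC picker (rather than the autoregressive variant used in the paper) illustrates the idea; in the paper's method, the scanning segment handed to the picker would additionally be narrowed by the polarization eigenvalue analysis.

```python
import numpy as np

def aic_pick(trace):
    """Simplified (variance-based) AIC picker: split the trace at every
    interior sample k, score the two halves by their variances, and pick
    the k that minimizes the AIC-like statistic."""
    x = np.asarray(trace, dtype=float)
    n = len(x)
    ks = np.arange(2, n - 2)
    aic = np.array([
        k * np.log(np.var(x[:k]) + 1e-12)
        + (n - k - 1) * np.log(np.var(x[k:]) + 1e-12)
        for k in ks
    ])
    return ks[np.argmin(aic)]

# Synthetic trace: low-amplitude noise, then a higher-energy arrival at 300.
rng = np.random.default_rng(1)
trace = np.concatenate([rng.normal(0, 0.1, 300), rng.normal(0, 1.0, 200)])
pick = aic_pick(trace)
```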

  9. Techniques for improving the accuracy of cryogenic temperature measurement in ground test programs

    NASA Technical Reports Server (NTRS)

    Dempsey, Paula J.; Fabik, Richard H.

    1993-01-01

    The performance of a sensor is often evaluated by determining to what degree of accuracy a measurement can be made using this sensor. The absolute accuracy of a sensor is an important parameter considered when choosing the type of sensor to use in research experiments. Tests were performed to improve the accuracy of cryogenic temperature measurements by calibration of the temperature sensors when installed in their experimental operating environment. The calibration information was then used to correct for temperature sensor measurement errors by adjusting the data acquisition system software. This paper describes a method to improve the accuracy of cryogenic temperature measurements using corrections in the data acquisition system software such that the uncertainty of an individual temperature sensor is improved from ±0.90 °R to ±0.20 °R over a specified range.
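    Correcting readings in acquisition software typically means fitting the sensor's indicated values against a reference standard at several set points and applying the resulting curve to later data. A minimal sketch with invented calibration numbers (not the paper's data):

```python
import numpy as np

# Hypothetical in-situ calibration points: sensor reading vs. reference
# standard, both in deg R. A low-order polynomial correction is then
# applied to every subsequent reading in the acquisition software.
indicated = np.array([140.0, 160.0, 180.0, 200.0, 220.0])   # sensor
reference = np.array([140.9, 160.7, 180.5, 200.4, 220.2])   # standard

coeffs = np.polyfit(indicated, reference, deg=2)

def correct(reading):
    """Map a raw sensor reading onto the reference temperature scale."""
    return np.polyval(coeffs, reading)
```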

  10. Formulation of immunoassay calibrators in pasteurized albumin can significantly enhance their durability.

    PubMed

    Warren, David J; Nordlund, Marianne S; Paus, Elisabeth

    2010-02-28

    Calibrator matrix can have significant effects on the commutability of assay standards and on the maintenance of their integrity. We have observed marked instability in progastrin-releasing peptide (proGRP) assay standards traceable to the bovine serum albumin (BSA) used in matrix formulation. Attempts were made to improve calibrator stability using different albumin pretreatments. Observed analyte recoveries in calibrators prepared with untreated BSA were consistently less than 45% after 1 week of storage at 4 °C. Pretreating the BSA by chromatography on immobilized heparin or benzamidine failed to improve calibrator durability, with day 7 recoveries of less than 55%. In marked contrast, calibrators formulated with albumin pasteurized at pH 3.0 displayed remarkable stability. Recoveries of >97% were observed after 4 weeks of storage at either 4 °C or room temperature. Even calibrators incubated for 4 weeks at 37 °C gave recoveries between 91% and 106%. This improvement was not seen with BSA pasteurized at neutral pH. Albumin pretreatment is straightforward, easily scalable and dramatically improves calibrator stability. Matrix formulated with acid-pasteurized BSA may prove more generally useful when assays are plagued by poor calibrator durability.

  11. Hand-Eye Calibration of Robonaut

    NASA Technical Reports Server (NTRS)

    Nickels, Kevin; Huber, Eric

    2004-01-01

    NASA's Human Space Flight program depends heavily on Extra-Vehicular Activities (EVA's) performed by human astronauts. EVA is a high risk environment that requires extensive training and ground support. In collaboration with the Defense Advanced Research Projects Agency (DARPA), NASA is conducting a ground development project to produce a robotic astronaut's assistant, called Robonaut, that could help reduce human EVA time and workload. The project described in this paper designed and implemented a hand-eye calibration scheme for Robonaut, Unit A. The intent of this calibration scheme is to improve hand-eye coordination of the robot. The basic approach is to use kinematic and stereo vision measurements, namely the joint angles self-reported by the right arm and 3-D positions of a calibration fixture as measured by vision, to estimate the transformation from Robonaut's base coordinate system to its hand coordinate system and to its vision coordinate system. Two methods of gathering data sets have been developed, along with software to support each. In the first, the system observes the robotic arm and neck angles as the robot is operated under external control, and measures the 3-D position of a calibration fixture using Robonaut's stereo cameras, and logs these data. In the second, the system drives the arm and neck through a set of pre-recorded configurations, and data are again logged. Two variants of the calibration scheme have been developed. The full calibration scheme is a batch procedure that estimates all relevant kinematic parameters of the arm and neck of the robot The daily calibration scheme estimates only joint offsets for each rotational joint on the arm and neck, which are assumed to change from day to day. The schemes have been designed to be automatic and easy to use so that the robot can be fully recalibrated when needed such as after repair, upgrade, etc, and can be partially recalibrated after each power cycle. 
The scheme has been implemented on Robonaut Unit A and has been shown to reduce mismatch between kinematically derived positions and visually derived positions from a mean of 13.75cm using the previous calibration to means of 1.85cm using a full calibration and 2.02cm using a suboptimal but faster daily calibration. This improved calibration has already enabled the robot to more accurately reach for and grasp objects that it sees within its workspace. The system has been used to support an autonomous wrench-grasping experiment and significantly improved the workspace positioning of the hand based on visually derived wrench position. estimates.

  12. Research on the calibration of ultraviolet energy meters

    NASA Astrophysics Data System (ADS)

    Lin, Fangsheng; Yin, Dejin; Li, Tiecheng; Lai, Lei; Xia, Ming

    2016-10-01

    Ultraviolet (UV) radiation is a kind of non-lighting radiation with wavelengths from 100 nm to 400 nm. Ultraviolet irradiance meters are now widely used in many areas. However, with the development of science and technology, especially in the light-curing industry, more and more UV energy meters (UV integrators) need to be calibrated. Because the structure, wavelength band and measured power intensity of UV energy meters differ from those of traditional UV irradiance meters, it is important to study their calibration. With reference to JJG 879-2002, we at SIMT have independently developed a UV energy calibration device together with detailed operating procedures and experimental methods for UV energy calibration. In the calibration of a UV energy meter, many factors influence the final results, including different UVA-band UV light sources, the different spectral responses of different brands of UV energy meters, and the instability, non-uniformity and temperature of the UV light source. All of these factors must therefore be taken into consideration to improve the accuracy of UV energy calibration.
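    A UV energy meter reports a dose, i.e. the time integral of irradiance over the exposure, so a reference dose for comparison can be computed by integrating a sampled irradiance profile. The lamp profile below is purely illustrative:

```python
import numpy as np

# Illustrative lamp pass: Gaussian irradiance pulse sampled over 10 s.
t = np.linspace(0.0, 10.0, 1001)                        # s
irradiance = 50.0 * np.exp(-(((t - 5.0) / 2.0) ** 2))   # mW/cm^2

# Trapezoidal integration of irradiance over time gives the dose in mJ/cm^2,
# which is the quantity a UV energy meter (integrator) reads out.
dose = np.sum(0.5 * (irradiance[1:] + irradiance[:-1]) * np.diff(t))
```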

  13. Reprocessing VIIRS sensor data records from the early SNPP mission

    NASA Astrophysics Data System (ADS)

    Blonski, Slawomir; Cao, Changyong

    2016-10-01

    The Visible-Infrared Imaging Radiometer Suite (VIIRS) instrument onboard the Suomi National Polar-orbiting Partnership (SNPP) satellite began acquiring Earth observations in November 2011. VIIRS data from all spectral bands became available three months after launch when all infrared-band detectors were cooled down to operational temperature. Before that, VIIRS sensor data record (SDR) products were successfully generated for the visible and near infrared (VNIR) bands. Although VIIRS calibration has been significantly improved through the four years of the SNPP mission, SDR reprocessing for this early mission phase has yet to be performed. Despite a rapid decrease in the telescope throughput that occurred during the first few months on orbit, calibration coefficients for the VNIR bands were recently successfully generated using an automated procedure that is currently deployed in the operational SDR production system. The reanalyzed coefficients were derived from measurements collected during solar calibration events that have occurred on every SNPP orbit since the beginning of the mission. The new coefficients can be further used to reprocess the VIIRS SDR products. In this study, they are applied to reprocess VIIRS data acquired over the pseudo-invariant calibration sites Libya 4 and Sudan 1 in the Sahara between November 2011 and February 2012. Comparison of the reprocessed SDR products with the original ones demonstrates improvements in the VIIRS calibration provided by the reprocessing. Since SNPP is the first satellite in a series that will form the Joint Polar Satellite System (JPSS), calibration methods developed for the SNPP VIIRS will also apply to the future JPSS measurements.

  14. WE-D-9A-06: Open Source Monitor Calibration and Quality Control Software for Enterprise Display Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bevins, N; Vanderhoek, M; Lang, S

    2014-06-15

    Purpose: Medical display monitor calibration and quality control present challenges to medical physicists. The purpose of this work is to demonstrate and share experiences with an open source package that allows for both initial monitor setup and routine performance evaluation. Methods: A software package, pacsDisplay, has been developed over the last decade to aid in the calibration of all monitors within the radiology group in our health system. The software is used to calibrate monitors to follow the DICOM Grayscale Standard Display Function (GSDF) via lookup tables installed on the workstation. Additional functionality facilitates periodic evaluations of both primary and secondary medical monitors to ensure satisfactory performance. This software is installed on all radiology workstations, and can also be run as a stand-alone tool from a USB disk. Recently, a database has been developed to store and centralize the monitor performance data and to provide long-term trends for compliance with internal standards and various accrediting organizations. Results: Implementation and utilization of pacsDisplay has resulted in improved monitor performance across the health system. Monitor testing is now performed at regular intervals and the software is being used across multiple imaging modalities. Monitor performance characteristics such as maximum and minimum luminance, ambient luminance and illuminance, color tracking, and GSDF conformity are loaded into a centralized database for system performance comparisons. Compliance reports for organizations such as MQSA, ACR, and TJC are generated automatically and stored in the same database. Conclusion: An open source software solution has simplified and improved the standardization of displays within our health system. This work serves as an example method for calibrating and testing monitors within an enterprise health system.

  15. Spatial calibration of a tokamak neutral beam diagnostic using in situ neutral beam emission

    DOE PAGES

    Chrystal, Colin; Burrell, Keith H.; Grierson, Brian A.; ...

    2015-10-20

    Neutral beam injection is used in tokamaks to heat, apply torque, drive non-inductive current, and diagnose plasmas. Neutral beam diagnostics need accurate spatial calibrations to benefit from the measurement localization provided by the neutral beam. A new technique has been developed that uses in-situ measurements of neutral beam emission to determine the spatial location of the beam and the associated diagnostic views. This technique was developed to improve the charge exchange recombination diagnostic (CER) at the DIII-D tokamak and uses measurements of the Doppler shift and Stark splitting of neutral beam emission made by that diagnostic. These measurements contain information about the geometric relation between the diagnostic views and the neutral beams when they are injecting power. This information is combined with standard spatial calibration measurements to create an integrated spatial calibration that provides a more complete description of the neutral beam-CER system. The integrated spatial calibration results are very similar to the standard calibration results and derived quantities from CER measurements are unchanged within their measurement errors. Lastly, the methods developed to perform the integrated spatial calibration could be useful for tokamaks with limited physical access.
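    The key geometric fact the technique exploits is that the first-order Doppler shift of beam emission depends on the angle between the beam velocity and the viewing chord, so a measured shift constrains the view geometry. A toy numpy check with representative (not DIII-D) numbers:

```python
import numpy as np

C = 2.998e8                 # speed of light, m/s
lambda0 = 656.1e-9          # illustrative rest wavelength, m
v_beam = 3.0e6              # illustrative beam speed, m/s
beam_dir = np.array([1.0, 0.0, 0.0])
view_dir = np.array([np.cos(np.radians(30)), np.sin(np.radians(30)), 0.0])

# First-order Doppler shift: d_lambda = lambda0 * (v . l_hat) / c.
shift = lambda0 * v_beam * np.dot(beam_dir, view_dir) / C

# Inverting the relation recovers the beam-view angle from the shift,
# which is what ties the spectroscopic measurement to the geometry.
angle_recovered = np.degrees(np.arccos(shift * C / (lambda0 * v_beam)))
```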

  16. Tissue Cancellation in Dual Energy Mammography Using a Calibration Phantom Customized for Direct Mapping.

    PubMed

    Han, Seokmin; Kang, Dong-Goo

    2014-01-01

    An easily implementable tissue cancellation method for dual energy mammography is proposed to reduce anatomical noise and enhance lesion visibility. For dual energy calibration, the images of the imaged object are directly mapped onto the images of a customized calibration phantom. Each pixel pair of the low and high energy images of the imaged object is compared to the pixel pairs of the low and high energy images of the calibration phantom. The correspondence is measured by the absolute difference between the pixel values of the imaged object and those of the calibration phantom, and the closest pixel pair of the calibration phantom images is selected. After the calibration using direct mapping, regions with lesions yield a different thickness from the background tissue. Taking advantage of this thickness difference, the visibility of cancerous lesions is enhanced with an increased contrast-to-noise ratio, depending on the size of the lesion and the breast thickness. However, some tissue near the edge of the imaged object remains after tissue cancellation. These residuals appear to be due to the heel effect, scattering, nonparallel X-ray beam geometry and the Poisson distribution of photons. To improve performance further, scattering and the heel effect should be compensated for.
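    The direct-mapping step reduces to a nearest-neighbour search: each (low, high) pixel pair of the imaged object is matched to the calibration-phantom pair with the smallest absolute difference. A brute-force numpy sketch (function name mine):

```python
import numpy as np

def direct_map(low, high, cal_low, cal_high):
    """For each (low, high) pixel pair of the imaged object, return the
    index of the calibration-phantom pair with the smallest sum of
    absolute differences — the direct-mapping step described above."""
    obj = np.stack([low.ravel(), high.ravel()], axis=1)          # P x 2
    cal = np.stack([cal_low.ravel(), cal_high.ravel()], axis=1)  # K x 2
    # Cost between every object pair and every phantom pair.
    cost = np.abs(obj[:, None, :] - cal[None, :, :]).sum(axis=2)
    return cost.argmin(axis=1).reshape(low.shape)
```

The selected phantom indices then look up the known material/thickness combination at that phantom location.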

  17. Spatial calibration of a tokamak neutral beam diagnostic using in situ neutral beam emission

    NASA Astrophysics Data System (ADS)

    Chrystal, C.; Burrell, K. H.; Grierson, B. A.; Pace, D. C.

    2015-10-01

    Neutral beam injection is used in tokamaks to heat, apply torque, drive non-inductive current, and diagnose plasmas. Neutral beam diagnostics need accurate spatial calibrations to benefit from the measurement localization provided by the neutral beam. A new technique has been developed that uses in situ measurements of neutral beam emission to determine the spatial location of the beam and the associated diagnostic views. This technique was developed to improve the charge exchange recombination (CER) diagnostic at the DIII-D tokamak and uses measurements of the Doppler shift and Stark splitting of neutral beam emission made by that diagnostic. These measurements contain information about the geometric relation between the diagnostic views and the neutral beams when they are injecting power. This information is combined with standard spatial calibration measurements to create an integrated spatial calibration that provides a more complete description of the neutral beam-CER system. The integrated spatial calibration results are very similar to the standard calibration results and derived quantities from CER measurements are unchanged within their measurement errors. The methods developed to perform the integrated spatial calibration could be useful for tokamaks with limited physical access.

  18. Algorithm for Lossless Compression of Calibrated Hyperspectral Imagery

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron B.; Klimesh, Matthew A.

    2010-01-01

    A two-stage predictive method was developed for lossless compression of calibrated hyperspectral imagery. The first prediction stage uses a conventional linear predictor intended to exploit spatial and/or spectral dependencies in the data. The compressor tabulates counts of the past values of the difference between this initial prediction and the actual sample value. To form the ultimate predicted value, in the second stage, these counts are combined with an adaptively updated weight function intended to capture information about data regularities introduced by the calibration process. Finally, prediction residuals are losslessly encoded using adaptive arithmetic coding. Algorithms of this type are commonly tested on a readily available collection of images from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) hyperspectral imager. On the standard calibrated AVIRIS hyperspectral images that are most widely used for compression benchmarking, the new compressor provides more than 0.5 bits/sample improvement over the previous best compression results. The algorithm has been implemented in Mathematica. The compression algorithm was demonstrated as beneficial on 12-bit calibrated AVIRIS images.
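    The two-stage structure — a linear predictor followed by a count-based correction, with residuals sent to an entropy coder — can be illustrated with a deliberately simplified stand-in: a previous-sample predictor plus a most-common-residual bias term in place of the adaptive weight function. Losslessness follows because the decoder replays the same state updates:

```python
from collections import Counter

def two_stage_residuals(samples):
    """Encoder sketch: stage 1 predicts the previous sample; stage 2 adds
    the most common past stage-1 residual as a correction. The residual
    stream would then go to an arithmetic coder (omitted here)."""
    counts = Counter()
    residuals = []
    prev = 0
    for s in samples:
        stage1 = prev                                   # stage 1 prediction
        bias = counts.most_common(1)[0][0] if counts else 0
        residuals.append(s - (stage1 + bias))           # stage 2 correction
        counts[s - stage1] += 1                         # update statistics
        prev = s
    return residuals

def reconstruct(residuals):
    """Decoder: identical state evolution, so reconstruction is exact."""
    counts = Counter()
    out = []
    prev = 0
    for r in residuals:
        bias = counts.most_common(1)[0][0] if counts else 0
        s = r + prev + bias
        out.append(s)
        counts[s - prev] += 1
        prev = s
    return out
```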

  19. FAST Model Calibration and Validation of the OC5-DeepCwind Floating Offshore Wind System Against Wave Tank Test Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendt, Fabian F; Robertson, Amy N; Jonkman, Jason

    During the course of the Offshore Code Comparison Collaboration, Continued, with Correlation (OC5) project, which focused on the validation of numerical methods through comparison against tank test data, the authors created a numerical FAST model of the 1:50-scale DeepCwind semisubmersible system that was tested at the Maritime Research Institute Netherlands ocean basin in 2013. This paper discusses several model calibration studies that were conducted to identify model adjustments that improve the agreement between the numerical simulations and the experimental test data. These calibration studies cover wind-field-specific parameters (coherence, turbulence), hydrodynamic and aerodynamic modeling approaches, as well as rotor model (blade-pitch and blade-mass imbalances) and tower model (structural tower damping coefficient) adjustments. These calibration studies were conducted based on relatively simple calibration load cases (wave only/wind only). The agreement between the final FAST model and experimental measurements is then assessed based on more-complex combined wind and wave validation cases.

  20. A method of camera calibration in the measurement process with reference mark for approaching observation space target

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Zeng, Luan

    2017-11-01

    Binocular stereoscopic vision can be used for space-based close-range observation of space targets. To solve the problem that a traditional binocular vision system cannot work normally after a disturbance, an online self-referenced calibration method for a binocular stereo measuring camera is proposed. The method uses an auxiliary optical imaging device to insert the image of a standard reference object into the edge of the main optical path, so that it is imaged with the target on the same focal plane; this is equivalent to having a standard reference inside the binocular imaging optical system. When the position of the system and the imaging device parameters are disturbed, the image of the standard reference changes accordingly in the imaging plane, while the physical position of the standard reference object itself does not change. The camera's external parameters can then be re-calibrated from the visual relationship of the standard reference object. The experimental results show that the maximum mean square error for the same object can be reduced from 72.88 mm to 1.65 mm when the right camera is deflected by 0.4° and the left camera is rotated by 0.2° in elevation. This method realizes online calibration of a binocular stereoscopic vision measurement system and can effectively improve the anti-jamming ability of the system.
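    Re-estimating a camera's external parameters from fixed reference points with known 3-D positions is, in its simplest form, a rigid-transform fit. The Kabsch algorithm below is a generic stand-in for that step, not the paper's specific procedure:

```python
import numpy as np

def kabsch(P, Q):
    """Best-fit rotation R and translation t with R @ P[i] + t ~= Q[i]
    (Kabsch algorithm): a generic way to recover external parameters
    from corresponding reference points before/after a disturbance."""
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = Q.mean(axis=0) - R @ P.mean(axis=0)
    return R, t
```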

  1. Self-Calibration and Optimal Response in Intelligent Sensors Design Based on Artificial Neural Networks

    PubMed Central

    Rivera, José; Carrillo, Mariano; Chacón, Mario; Herrera, Gilberto; Bojorquez, Gilberto

    2007-01-01

    The development of smart sensors involves the design of reconfigurable systems capable of working with different input sensors. Reconfigurable systems should ideally spend the least possible amount of time on their calibration. An autocalibration algorithm for intelligent sensors should be able to fix major problems such as offset, gain variation and lack of linearity as accurately as possible. This paper describes a new autocalibration methodology for nonlinear intelligent sensors based on artificial neural networks (ANN). The methodology involves analysis of several network topologies and training algorithms. The proposed method was compared against the piecewise and polynomial linearization methods. The comparison was carried out using different numbers of calibration points and several nonlinearity levels of the input signal, and the proposed method turned out to have better overall accuracy than the other two methods. Besides the experimental results and the analysis of the complete study, the paper describes the implementation of the ANN in a microcontroller unit (MCU). To illustrate the method's capability to build self-calibrating and reconfigurable systems, a temperature measurement system was designed and tested. The proposed method is an improvement over classic autocalibration methodologies because it impacts the design process of intelligent sensors, the autocalibration methodologies and their associated factors, such as time and cost.
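    The two baselines the paper compares against — polynomial and piecewise-linear linearization — are easy to sketch on an invented nonlinear sensor response (the exponential curve below is illustrative, not a particular sensor model):

```python
import numpy as np

# Calibration points: true temperature vs. a nonlinear sensor output.
true_temp = np.linspace(0.0, 100.0, 11)
sensor_out = 1.0 - np.exp(-true_temp / 150.0)     # illustrative response

# Baseline 1: polynomial linearization (fit the inverse response).
poly = np.polyfit(sensor_out, true_temp, deg=3)

def calibrate_poly(v):
    return np.polyval(poly, v)

# Baseline 2: piecewise-linear linearization between calibration points.
def calibrate_piecewise(v):
    return np.interp(v, sensor_out, true_temp)
```

The ANN approach of the paper replaces these fixed functional forms with a trained network, which is what allows it to handle offset, gain and linearity errors jointly.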

  2. Calibration of High Frequency MEMS Microphones

    NASA Technical Reports Server (NTRS)

    Shams, Qamar A.; Humphreys, William M.; Bartram, Scott M.; Zuckerwar, Allan J.

    2007-01-01

    Understanding and controlling aircraft noise is one of the major research topics of the NASA Fundamental Aeronautics Program. One of the measurement technologies used to acquire noise data is the microphone directional array (DA). Traditional directional array hardware, consisting of commercially available condenser microphones and preamplifiers, can be too expensive, and its installation in hard-walled wind tunnel test sections too complicated. An emerging micro-machining technology, coupled with the latest cutting-edge technologies for smaller and faster systems, has opened the way for the development of MEMS microphones. MEMS microphone devices are available on the market but suffer from certain important shortcomings. Based on early experiments with array prototypes, it has been found that both the bandwidth and the sound pressure level dynamic range of the microphones should be increased significantly to improve the performance and flexibility of the overall array. Thus, in collaboration with an outside MEMS design vendor, NASA Langley modified a commercially available MEMS microphone as shown in Figure 1 to meet the new requirements. Coupled with the design of the enhanced MEMS microphones was the development of a new calibration method for simultaneously obtaining the sensitivity and phase response of the devices over their entire broadband frequency range. Over the years, several methods have been used for microphone calibration. Some of the common methods are Coupler (Reciprocity, Substitution, and Simultaneous), Pistonphone, Electrostatic actuator, and Free-field calibration (Reciprocity, Substitution, and Simultaneous). Traditionally, electrostatic actuators (EA) have been used to characterize air-condenser microphones over wideband frequency ranges; however, MEMS microphones are not adaptable to the EA method due to their construction and very small diaphragm size.
Hence a substitution-based, free-field method was developed to calibrate these microphones at frequencies up to 80 kHz. The technique relied on the use of a random, ultrasonic broadband centrifugal sound source located in a small anechoic chamber. Phase calibrations of the MEMS microphones were derived from cross spectral phase comparisons between the reference and test substitution microphones and an adjacent and invariant grazing-incidence 1/8-inch standard microphone.
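    The cross-spectral phase comparison at the heart of the substitution method can be illustrated with synthetic signals, where a pure sample delay stands in for the phase-response difference between the reference and test channels:

```python
import numpy as np

fs = 200_000                               # illustrative sample rate, Hz
rng = np.random.default_rng(2)
n = 1 << 14
ref = rng.normal(size=n)                   # broadband "source" at reference mic
delay = 5                                  # samples: stand-in phase response
test = np.roll(ref, delay)                 # circularly delayed copy

# Cross spectrum between channels: its phase at each frequency is the
# phase difference between the two microphones' responses.
X = np.fft.rfft(ref)
Y = np.fft.rfft(test)
phase = np.angle(X * np.conj(Y))           # = 2*pi*f*delay/fs (wrapped)
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
```

In practice the cross spectrum would be averaged over many records to suppress noise before extracting the phase calibration.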

  3. Predicting ambient aerosol thermal-optical reflectance (TOR) measurements from infrared spectra: organic carbon

    NASA Astrophysics Data System (ADS)

    Dillner, A. M.; Takahama, S.

    2015-03-01

    Organic carbon (OC) can constitute 50% or more of the mass of atmospheric particulate matter. Typically, organic carbon is measured from a quartz fiber filter that has been exposed to a volume of ambient air and analyzed using thermal methods such as thermal-optical reflectance (TOR). Here, methods are presented that show the feasibility of using Fourier transform infrared (FT-IR) absorbance spectra from polytetrafluoroethylene (PTFE or Teflon) filters to accurately predict TOR OC. This work marks an initial step in proposing a method that can reduce the operating costs of large air quality monitoring networks with an inexpensive, non-destructive analysis technique using routinely collected PTFE filter samples which, in addition to OC concentrations, can concurrently provide information regarding the composition of organic aerosol. This feasibility study suggests that the minimum detection limit and errors (or uncertainty) of FT-IR predictions are on par with TOR OC such that evaluation of long-term trends and epidemiological studies would not be significantly impacted. To develop and test the method, FT-IR absorbance spectra are obtained from 794 samples from seven Interagency Monitoring of PROtected Visual Environment (IMPROVE) sites collected during 2011. Partial least-squares regression is used to calibrate sample FT-IR absorbance spectra to TOR OC. The FT-IR spectra are divided into calibration and test sets by sampling site and date. The calibration produces precise and accurate TOR OC predictions of the test set samples by FT-IR as indicated by a high coefficient of determination (R2 = 0.96), low bias (0.02 μg m-3, the nominal IMPROVE sample volume is 32.8 m3), low error (0.08 μg m-3) and low normalized error (11%). These performance metrics can be achieved with various degrees of spectral pretreatment (e.g., including or excluding substrate contributions to the absorbances) and are comparable in precision to collocated TOR measurements.
FT-IR spectra are also divided into calibration and test sets by OC mass and by OM / OC ratio, which reflects the organic composition of the particulate matter and is obtained from organic functional group composition; these divisions also lead to precise and accurate OC predictions. Low OC concentrations have higher bias and normalized error due to TOR analytical errors and artifact-correction errors, not due to the range of OC mass of the samples in the calibration set. However, samples with low OC mass can be used to predict samples with high OC mass, indicating that the calibration is linear. Using samples in the calibration set that have different OM / OC or ammonium / OC distributions than the test set leads to only a modest increase in bias and normalized error in the predicted samples. We conclude that FT-IR analysis with partial least-squares regression is a robust method for accurately predicting TOR OC in IMPROVE network samples - providing complementary information to the organic functional group composition and organic aerosol mass estimated previously from the same set of sample spectra (Ruthenburg et al., 2014).
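The core of the calibration described above is a partial least-squares regression from filter spectra to a scalar TOR OC value. As a rough illustration only (a minimal NIPALS-style PLS1 on synthetic "spectra", not the authors' pipeline or data), the fit-and-predict step might look like:

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """Minimal NIPALS-style PLS1: regress a scalar y on spectra (rows of X)."""
    X_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - X_mean, y - y_mean
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)           # weight vector for this component
        t = Xc @ w                       # scores
        p = Xc.T @ t / (t @ t)           # X loadings
        q = (yc @ t) / (t @ t)           # y loading
        Xc = Xc - np.outer(t, p)         # deflate X and y
        yc = yc - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    beta = W @ np.linalg.inv(P.T @ W) @ Q    # regression vector
    return beta, y_mean - X_mean @ beta

# Synthetic demo: "spectra" are mixtures of two pure-component spectra and the
# target is the concentration of component 1 (a stand-in for TOR OC).
rng = np.random.default_rng(0)
s1, s2 = rng.normal(size=60), rng.normal(size=60)
c1, c2 = rng.uniform(1, 5, 40), rng.uniform(1, 5, 40)
X = np.outer(c1, s1) + np.outer(c2, s2)
beta, b0 = pls1_fit(X[:30], c1[:30], n_components=2)
pred = X[30:] @ beta + b0                # predictions on held-out samples
```

With noise-free rank-2 spectra and two PLS components, the held-out predictions recover the target concentrations essentially exactly; real spectra would of course need noise handling and component selection.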

  4. Quantitative estimation of α-PVP metabolites in urine by GC-APCI-QTOFMS with nitrogen chemiluminescence detection based on parent drug calibration.

    PubMed

    Mesihää, Samuel; Rasanen, Ilpo; Ojanperä, Ilkka

    2018-05-01

Gas chromatography (GC) hyphenated with nitrogen chemiluminescence detection (NCD) and quadrupole time-of-flight mass spectrometry (QTOFMS) was applied for the first time to the quantitative analysis of new psychoactive substances (NPS) in urine, based on the N-equimolar response of NCD. A method was developed and validated to estimate the concentrations of three metabolites of the common stimulant NPS α-pyrrolidinovalerophenone (α-PVP) in spiked urine samples, simulating an analysis having no authentic reference standards for the metabolites and using the parent drug instead for quantitative calibration. The metabolites studied were OH-α-PVP (M1), 2″-oxo-α-PVP (M3), and N,N-bis-dealkyl-PVP (2-amino-1-phenylpentan-1-one; M5). Sample preparation involved liquid-liquid extraction with a mixture of ethyl acetate and butyl chloride at a basic pH and subsequent silylation of the sec-hydroxyl and prim-amino groups of M1 and M5, respectively. Simultaneous compound identification was based on the accurate masses of the protonated molecules for each compound by QTOFMS following atmospheric pressure chemical ionization. The accuracy of quantification of the parent-calibrated NCD method was compared with that of the corresponding parent-calibrated QTOFMS method, as well as with a reference QTOFMS method calibrated with the authentic reference standards. The NCD method produced accuracy equal to that of the reference method for α-PVP, M3 and M5, while a higher negative bias (25%) was obtained for M1, best explained by recovery and stability issues. The performance of the parent-calibrated QTOFMS method was inferior to the reference method, with an especially high negative bias (60%) for M1. The NCD method enabled better quantitative precision than the QTOFMS methods. To evaluate the novel approach in casework, twenty post-mortem urine samples previously found positive for α-PVP were analyzed by the parent-calibrated NCD method and the reference QTOFMS method.
The highest difference in the quantitative results between the two methods was only 33%, and the NCD method's precision as the coefficient of variation was better than 13%. The limit of quantification for the NCD method was approximately 0.25 μg/mL in urine, which generally allowed the analysis of α-PVP and the main metabolite M1. However, the sensitivity was not sufficient for the low concentrations of M3 and M5. Consequently, while having potential for instant analysis of NPS and metabolites in moderate concentrations without reference standards, the NCD method should be further developed for improved sensitivity to be more generally applicable. Copyright © 2018 Elsevier B.V. All rights reserved.

  5. Calibration procedures of the Tore-Supra infrared endoscopes

    NASA Astrophysics Data System (ADS)

    Desgranges, C.; Jouve, M.; Balorin, C.; Reichle, R.; Firdaouss, M.; Lipa, M.; Chantant, M.; Gardarein, J. L.; Saille, A.; Loarer, T.

    2018-01-01

Five endoscopes equipped with infrared cameras working in the medium infrared range (3-5 μm) are installed on the controlled thermonuclear fusion research device Tore-Supra. These endoscopes monitor the surface temperature of the plasma facing components to prevent their overheating. Signals delivered by the infrared cameras through the endoscopes are analysed and used, on the one hand, in a real-time feedback control loop acting on the plasma heating systems to decrease plasma facing component surface temperatures when necessary and, on the other hand, for physics studies such as determination of the incoming heat flux. To fulfil these two roles, very accurate knowledge of the absolute surface temperatures is mandatory. Consequently, the infrared endoscopes must be calibrated through a very careful procedure, which means determining their transmission coefficients, a delicate operation. Methods to calibrate the infrared endoscopes during the shutdown period of the Tore-Supra machine are presented. As these do not allow determining possible changes in transmittance during operation, an in-situ method is also presented. It permits validation of the calibration performed in the laboratory as well as monitoring of transmittance evolution during machine operation. This is made possible by the use of the endoscope shutter and a dedicated plasma scenario developed to heat it. Possible improvements of this method are briefly discussed.

  6. System calibration method for Fourier ptychographic microscopy.

    PubMed

    Pan, An; Zhang, Yan; Zhao, Tianyu; Wang, Zhaojun; Dan, Dan; Lei, Ming; Yao, Baoli

    2017-09-01

Fourier ptychographic microscopy (FPM) is a recently proposed computational imaging technique with both high resolution and a wide field of view. In current FPM imaging platforms, systematic error sources come from aberrations, light-emitting diode (LED) intensity fluctuation, parameter imperfections, and noise, all of which may severely corrupt the reconstruction results with similar artifacts. Therefore, it is difficult to distinguish the dominating error from these degraded reconstructions without any preknowledge. In addition, systematic error is generally a mixture of various error sources in the real situation, and it cannot be separated due to their mutual restriction and conversion. To this end, we report a system calibration procedure, termed SC-FPM, to calibrate the mixed systematic errors simultaneously from an overall perspective, based on the simulated annealing algorithm, the LED intensity correction method, the nonlinear regression process, and the adaptive step-size strategy, which involves the evaluation of an error metric at each iteration step, followed by the re-estimation of accurate parameters. The performance achieved both in simulations and experiments demonstrates that the proposed method outperforms other state-of-the-art algorithms. The reported system calibration scheme improves the robustness of FPM, relaxes the experimental conditions, and does not require any preknowledge, which makes FPM more pragmatic. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  7. Estimating daily time series of streamflow using hydrological model calibrated based on satellite observations of river water surface width: Toward real world applications.

    PubMed

    Sun, Wenchao; Ishidaira, Hiroshi; Bastola, Satish; Yu, Jingshan

    2015-05-01

    Lacking observation data for calibration constrains applications of hydrological models to estimate daily time series of streamflow. Recent improvements in remote sensing enable detection of river water-surface width from satellite observations, making possible the tracking of streamflow from space. In this study, a method calibrating hydrological models using river width derived from remote sensing is demonstrated through application to the ungauged Irrawaddy Basin in Myanmar. Generalized likelihood uncertainty estimation (GLUE) is selected as a tool for automatic calibration and uncertainty analysis. Of 50,000 randomly generated parameter sets, 997 are identified as behavioral, based on comparing model simulation with satellite observations. The uncertainty band of streamflow simulation can span most of 10-year average monthly observed streamflow for moderate and high flow conditions. Nash-Sutcliffe efficiency is 95.7% for the simulated streamflow at the 50% quantile. These results indicate that application to the target basin is generally successful. Beyond evaluating the method in a basin lacking streamflow data, difficulties and possible solutions for applications in the real world are addressed to promote future use of the proposed method in more ungauged basins. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
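The GLUE workflow summarized above (random parameter sampling, behavioral screening against observations with a likelihood measure, and quantile uncertainty bands) can be illustrated with a deliberately simple one-parameter toy model rather than the study's hydrological model and satellite widths:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(100)
observed = np.exp(-0.05 * t)           # synthetic "observed" recession-like series

def model(k):                          # toy one-parameter model
    return np.exp(-k * t)

def nse(sim, obs):                     # Nash-Sutcliffe efficiency as the likelihood
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# GLUE: sample parameter sets at random, keep "behavioral" ones above a threshold
samples = rng.uniform(0.001, 0.2, 5000)
scores = np.array([nse(model(k), observed) for k in samples])
behavioral = samples[scores > 0.9]

# Uncertainty band and median prediction from the behavioral simulations
sims = np.array([model(k) for k in behavioral])
lo, med, hi = np.quantile(sims, [0.05, 0.5, 0.95], axis=0)
```

The band (`lo` to `hi`) plays the role of the uncertainty envelope that, in the study, spans most of the observed monthly streamflow; the threshold (here 0.9) is a subjective GLUE choice.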

  8. Testing the molecular clock using mechanistic models of fossil preservation and molecular evolution

    PubMed Central

    2017-01-01

Molecular sequence data provide information about relative times only, and fossil-based age constraints are the ultimate source of information about absolute times in molecular clock dating analyses. Thus, fossil calibrations are critical to molecular clock dating, but competing methods are difficult to evaluate empirically because the true evolutionary time scale is never known. Here, we combine mechanistic models of fossil preservation and sequence evolution in simulations to evaluate different approaches to constructing fossil calibrations and their impact on Bayesian molecular clock dating, and the relative impact of fossil versus molecular sampling. We show that divergence time estimation is impacted by the model of fossil preservation, sampling intensity and tree shape. The addition of sequence data may improve molecular clock estimates, but accuracy and precision are dominated by the quality of the fossil calibrations. Posterior means and medians are poor representatives of true divergence times; posterior intervals provide a much more accurate estimate of divergence times, though they may be wide and often do not have high coverage probability. Our results highlight the importance of increased fossil sampling and improved statistical approaches to generating calibrations, which should incorporate the non-uniform nature of ecological and temporal fossil species distributions. PMID:28637852

  9. Improving Building Energy Simulation Programs Through Diagnostic Testing (Fact Sheet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2012-02-01

New test procedure evaluates quality and accuracy of energy analysis tools for the residential building retrofit market. Reducing the energy use of existing homes in the United States offers significant energy-saving opportunities, which can be identified through building simulation software tools that calculate optimal packages of efficiency measures. To improve the accuracy of energy analysis for residential buildings, the National Renewable Energy Laboratory's (NREL) Buildings Research team developed the Building Energy Simulation Test for Existing Homes (BESTEST-EX), a method for diagnosing and correcting errors in building energy audit software and calibration procedures. BESTEST-EX consists of building physics and utility bill calibration test cases, which software developers can use to compare their tools' simulation findings to reference results generated with state-of-the-art simulation tools. Overall, the BESTEST-EX methodology: (1) Tests software predictions of retrofit energy savings in existing homes; (2) Ensures building physics calculations and utility bill calibration procedures perform to a minimum standard; and (3) Quantifies impacts of uncertainties in input audit data and occupant behavior. BESTEST-EX is helping software developers identify and correct bugs in their software, as well as develop and test utility bill calibration procedures.

  10. Color accuracy and reproducibility in whole slide imaging scanners

    PubMed Central

    Shrestha, Prarthana; Hulsken, Bas

    2014-01-01

Abstract We propose a workflow for color reproduction in whole slide imaging (WSI) scanners, such that the colors in the scanned images match the actual slide colors and inter-scanner variation is minimized. We describe a new method of preparation and verification of the color phantom slide, consisting of a standard IT8-target transmissive film, which is used in color calibrating and profiling the WSI scanner. We explore several International Color Consortium (ICC) compliant techniques in color calibration/profiling and rendering intents for translating the scanner specific colors to the standard display (sRGB) color space. Based on the quality of the color reproduction in histopathology slides, we propose the matrix-based calibration/profiling and absolute colorimetric rendering approach. The main advantage of the proposed workflow is that it is compliant to the ICC standard, applicable to color management systems in different platforms, and involves no external color measurement devices. We quantify color difference using the CIE-DeltaE2000 metric, where DeltaE values below 1 are considered imperceptible. Our evaluation on 14 phantom slides, manufactured according to the proposed method, shows an average inter-slide color difference below 1 DeltaE. The proposed workflow is implemented and evaluated in 35 WSI scanners developed at Philips, called the Ultra Fast Scanners (UFS). The color accuracy, measured as DeltaE between the scanner reproduced colors and the reference colorimetric values of the phantom patches, is improved on average to 3.5 DeltaE in calibrated scanners from 10 DeltaE in uncalibrated scanners. The average inter-scanner color difference is found to be 1.2 DeltaE. The improvement in color performance upon using the proposed method is apparent with the visual color quality of the tissue scans. PMID:26158041
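At its core, the matrix-based profiling step amounts to fitting a linear transform from scanner-reported colors to reference patch colors and checking the residual color difference. A hedged sketch (plain least-squares 3x3 matrix and a Euclidean, CIE76-style ΔE on synthetic patches, not the full ICC/CIEDE2000 machinery used in the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
ref = rng.uniform(0.0, 1.0, (24, 3))        # reference colors of 24 chart patches
M_true = np.array([[0.90, 0.05, 0.00],      # hypothetical scanner distortion
                   [0.02, 1.10, 0.03],
                   [0.00, 0.04, 0.95]])
scanned = ref @ M_true.T                    # colors as the scanner reports them

# Calibration: least-squares 3x3 correction matrix mapping scanned -> reference
M, *_ = np.linalg.lstsq(scanned, ref, rcond=None)
corrected = scanned @ M

# Per-patch color error, Euclidean (CIE76-style) rather than CIEDE2000
de = np.linalg.norm(corrected - ref, axis=1)
```

Because the simulated distortion is exactly linear, the residual ΔE here collapses to numerical noise; real scanners need a nonlinear (e.g. per-channel tone curve) stage before the matrix, which is why profiling standards exist.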

  11. Application of advanced shearing techniques to the calibration of autocollimators with small angle generators and investigation of error sources.

    PubMed

    Yandayan, T; Geckeler, R D; Aksulu, M; Akgoz, S A; Ozgur, B

    2016-05-01

    The application of advanced error-separating shearing techniques to the precise calibration of autocollimators with Small Angle Generators (SAGs) was carried out for the first time. The experimental realization was achieved using the High Precision Small Angle Generator (HPSAG) of TUBITAK UME under classical dimensional metrology laboratory environmental conditions. The standard uncertainty value of 5 mas (24.2 nrad) reached by classical calibration method was improved to the level of 1.38 mas (6.7 nrad). Shearing techniques, which offer a unique opportunity to separate the errors of devices without recourse to any external standard, were first adapted by Physikalisch-Technische Bundesanstalt (PTB) to the calibration of autocollimators with angle encoders. It has been demonstrated experimentally in a clean room environment using the primary angle standard of PTB (WMT 220). The application of the technique to a different type of angle measurement system extends the range of the shearing technique further and reveals other advantages. For example, the angular scales of the SAGs are based on linear measurement systems (e.g., capacitive nanosensors for the HPSAG). Therefore, SAGs show different systematic errors when compared to angle encoders. In addition to the error-separation of HPSAG and the autocollimator, detailed investigations on error sources were carried out. Apart from determination of the systematic errors of the capacitive sensor used in the HPSAG, it was also demonstrated that the shearing method enables the unique opportunity to characterize other error sources such as errors due to temperature drift in long term measurements. This proves that the shearing technique is a very powerful method for investigating angle measuring systems, for their improvement, and for specifying precautions to be taken during the measurements.

  12. Development and accuracy of a multipoint method for measuring visibility.

    PubMed

    Tai, Hongda; Zhuang, Zibo; Sun, Dongsong

    2017-10-01

    Accurate measurements of visibility are of great importance in many fields. This paper reports a multipoint visibility measurement (MVM) method to measure and calculate the atmospheric transmittance, extinction coefficient, and meteorological optical range (MOR). The relative errors of atmospheric transmittance and MOR measured by the MVM method and traditional transmissometer method are analyzed and compared. Experiments were conducted indoors, and the data were simultaneously processed. The results revealed that the MVM can effectively improve the accuracy under different visibility conditions. The greatest improvement of accuracy was 27%. The MVM can be used to calibrate and evaluate visibility meters.
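The transmissometer quantities named above are linked by two standard relations: Bouguer-Lambert extinction over a baseline, and Koschmieder's 5% contrast threshold for MOR. A small sketch of those textbook formulas (not the authors' MVM processing):

```python
import math

def extinction_coefficient(transmittance, baseline_m):
    """Bouguer-Lambert: sigma = -ln(T) / L for transmittance T over baseline L."""
    return -math.log(transmittance) / baseline_m

def mor_m(transmittance, baseline_m, contrast_threshold=0.05):
    """Koschmieder: MOR is the range at which contrast falls to the 5% threshold."""
    return -math.log(contrast_threshold) / extinction_coefficient(transmittance, baseline_m)

# If exactly 5% of the light survives a 1000 m baseline, the MOR equals that baseline.
print(mor_m(0.05, 1000.0))   # -> 1000.0
```

A multipoint method, as described in the abstract, would estimate the transmittance (and hence sigma) from several measurement points rather than a single baseline, which is where its accuracy gain comes from.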

  13. Automatic colorimetric calibration of human wounds

    PubMed Central

    2010-01-01

Background Recently, digital photography in medicine is considered an acceptable tool in many clinical domains, e.g. wound care. Although ever higher resolutions are available, reproducibility is still poor and visual comparison of images remains difficult. This is even more the case for measurements performed on such images (colour, area, etc.). This problem is often neglected and images are freely compared and exchanged without further thought. Methods The first experiment checked whether camera settings or lighting conditions could negatively affect the quality of colorimetric calibration. Digital images plus a calibration chart were exposed to a variety of conditions. Precision and accuracy of colours after calibration were quantitatively assessed with a probability distribution for perceptual colour differences (dE_ab). The second experiment was designed to assess the impact of the automatic calibration procedure (i.e. chart detection) on real-world measurements. Forty different images of real wounds were acquired and a region of interest was selected in each image. Three rotated versions of each image were automatically calibrated and colour differences were calculated. Results First experiment: colour differences between the measurements and real spectrophotometric measurements reveal median dE_ab values of 6.40 for the proper patches of calibrated normal images and 17.75 for uncalibrated images, demonstrating an important improvement in accuracy after calibration. The reproducibility, visualized by the probability distribution of the dE_ab errors between 2 measurements of the patches of the images, has a median of 3.43 dE_ab for all calibrated images and 23.26 dE_ab for all uncalibrated images. If we restrict ourselves to the proper patches of normal calibrated images, the median is only 2.58 dE_ab.
Wilcoxon rank-sum tests (p < 0.05) between uncalibrated normal images and calibrated normal images with proper squares returned p-values of essentially 0, demonstrating a highly significant improvement in reproducibility. In the second experiment, the reproducibility of the chart detection during automatic calibration is presented using a probability distribution of dE_ab errors between 2 measurements of the same ROI. Conclusion The investigators proposed an automatic colour calibration algorithm that ensures reproducible colour content of digital images. Evidence was provided that images taken with commercially available digital cameras can be calibrated independently of any camera settings and illumination features. PMID:20298541

  14. Note: An improved calibration system with phase correction for electronic transformers with digital output.

    PubMed

    Cheng, Han-miao; Li, Hong-bin

    2015-08-01

    The existing electronic transformer calibration systems employing data acquisition cards cannot satisfy some practical applications, because the calibration systems have phase measurement errors when they work in the mode of receiving external synchronization signals. This paper proposes an improved calibration system scheme with phase correction to improve the phase measurement accuracy. We employ NI PCI-4474 to design a calibration system, and the system has the potential to receive external synchronization signals and reach extremely high accuracy classes. Accuracy verification has been carried out in the China Electric Power Research Institute, and results demonstrate that the system surpasses the accuracy class 0.05. Furthermore, this system has been used to test the harmonics measurement accuracy of all-fiber optical current transformers. In the same process, we have used an existing calibration system, and a comparison of the test results is presented. The system after improvement is suitable for the intended applications.

  15. Evaluation of the use of performance reference compounds in an oasis-HLB adsorbent based passive sampler for improving water concentration estimates of polar herbicides in freshwater

    USGS Publications Warehouse

    Mazzella, N.; Lissalde, S.; Moreira, S.; Delmas, F.; Mazellier, P.; Huckins, J.N.

    2010-01-01

Passive samplers such as the Polar Organic Chemical Integrative Sampler (POCIS) are useful tools for monitoring trace levels of polar organic chemicals in aquatic environments. The use of performance reference compounds (PRC) spiked into the POCIS adsorbent for in situ calibration may improve the semiquantitative nature of water concentration estimates based on this type of sampler. In this work, deuterium labeled atrazine-desisopropyl (DIA-d5) was chosen as PRC because of its relatively high fugacity from Oasis HLB (the POCIS adsorbent used) and our earlier evidence of its isotropic exchange. In situ calibration of POCIS spiked with DIA-d5 was performed, and the resulting time-weighted average concentration estimates were compared with similar values from an automatic sampler equipped with Oasis HLB cartridges. Before PRC correction, water concentration estimates based on POCIS sampling rates from a laboratory calibration exposure were systematically lower than the reference concentrations obtained with the automatic sampler. Use of the DIA-d5 PRC data to correct POCIS sampling rates narrowed differences between corresponding values derived from the two methods. Application of PRCs for in situ calibration seems promising for improving POCIS-derived concentration estimates of polar pesticides. However, careful attention must be paid to the minimization of matrix effects when the quantification is performed by HPLC-ESI-MS/MS. © 2010 American Chemical Society.
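PRC correction is commonly formulated with first-order exchange kinetics: the in-situ loss rate of the PRC, relative to its laboratory loss rate, rescales the laboratory sampling rate, which then converts accumulated analyte mass to a time-weighted average water concentration. A sketch under that assumption (the function name and the numbers are illustrative, not the study's data):

```python
import math

def prc_corrected_twa(analyte_mass_ng, rs_lab_l_per_day, t_days,
                      prc_fraction_left_insitu, prc_fraction_left_lab):
    """Time-weighted average water concentration (ng/L) with PRC correction,
    assuming first-order PRC dissipation from the adsorbent."""
    k_insitu = -math.log(prc_fraction_left_insitu) / t_days  # in-situ PRC loss rate
    k_lab = -math.log(prc_fraction_left_lab) / t_days        # laboratory PRC loss rate
    rs_insitu = rs_lab_l_per_day * k_insitu / k_lab          # corrected sampling rate
    return analyte_mass_ng / (rs_insitu * t_days)            # Cw = N / (Rs * t)

# 100 ng accumulated over 14 days; PRC dropped to 50% in situ vs 70% in the lab
cw = prc_corrected_twa(100.0, 0.2, 14.0, 0.5, 0.7)
```

Faster in-situ PRC loss than in the laboratory implies faster exchange, hence a higher effective sampling rate and a lower (corrected) water concentration estimate, consistent with the direction of correction reported above.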

  16. Calibration and combination of dynamical seasonal forecasts to enhance the value of predicted probabilities for managing risk

    NASA Astrophysics Data System (ADS)

    Dutton, John A.; James, Richard P.; Ross, Jeremy D.

    2013-06-01

Seasonal probability forecasts produced with numerical dynamics on supercomputers offer great potential value in managing risk and opportunity created by seasonal variability. The skill and reliability of contemporary forecast systems can be increased by calibration methods that use the historical performance of the forecast system to improve the ongoing real-time forecasts. Two calibration methods are applied to seasonal surface temperature forecasts of the US National Weather Service, the European Centre for Medium Range Weather Forecasts, and a World Climate Service multi-model ensemble created by combining those two forecasts with Bayesian methods. As expected, the multi-model is somewhat more skillful and more reliable than the original models taken alone. The potential value of the multi-model in decision making is illustrated with the profits achieved in simulated trading of a weather derivative. In addition to examining the seasonal models, the article demonstrates that calibrated probability forecasts of weekly average temperatures for leads of 2-4 weeks are also skillful and reliable. The conversion of ensemble forecasts into probability distributions of impact variables is illustrated with degree days derived from the temperature forecasts. Some issues related to loss of stationarity owing to long-term warming are considered. The main conclusion of the article is that properly calibrated probabilistic forecasts possess sufficient skill and reliability to contribute to effective decisions in government and business activities that are sensitive to intraseasonal and seasonal climate variability.

  17. A novel method to calibrate DOI function of a PET detector with a dual-ended-scintillator readout.

    PubMed

    Shao, Yiping; Yao, Rutao; Ma, Tianyu

    2008-12-01

The detection of depth-of-interaction (DOI) is a critical detector capability to improve the PET spatial resolution uniformity across the field-of-view and will significantly enhance, in particular, small bore system performance for brain, breast, and small animal imaging. One promising technique of DOI detection is to use dual-ended-scintillator readout that uses two photon sensors to detect scintillation light from both ends of a scintillator array and estimate DOI based on the ratio of signals (similar to Anger logic). This approach needs a careful DOI function calibration to establish an accurate relationship between DOI and signal ratios, and to recalibrate if the detection condition is shifted due to the drift of sensor gain, bias variations, or degraded optical coupling, etc. However, the current calibration method that uses coincident events to locate interaction positions inside a single scintillator crystal has severe drawbacks, such as complicated setup, long and repetitive measurements, and being prone to errors from various possible misalignments among the source and detector components. This method is also not practically suitable to calibrate multiple DOI functions of a crystal array. To solve these problems, a new method has been developed that requires only a uniform flood source to irradiate a crystal array without the need to locate the interaction positions, and calculates DOI functions based solely on the uniform probability distribution of interactions over DOI positions without knowledge or assumption of detector responses. Simulation and experiment have been studied to validate the new method, and the results show that the new method, with a simple setup and one single measurement, can provide consistent and accurate DOI functions for the entire array of multiple scintillator crystals. This will enable an accurate, simple, and practical DOI function calibration for the PET detectors based on the design of dual-ended-scintillator readout.
In addition, the new method can be generally applied to calibrating other types of detectors that use a similar dual-ended readout to acquire the radiation interaction position.
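The uniformity argument behind the flood-source calibration can be made concrete: under flood irradiation the true DOI is uniform over the crystal length, so the empirical CDF of any monotonic signal-ratio response maps measured ratios back to depth, with no model of the detector response needed. A toy sketch with an assumed (hypothetical) logistic ratio response:

```python
import numpy as np

rng = np.random.default_rng(3)
L = 20.0                                        # crystal length, mm
doi_true = rng.uniform(0.0, L, 100_000)         # flood irradiation: uniform DOI

# Hypothetical monotonic signal-ratio response; in practice this is unknown
ratio = 1.0 / (1.0 + np.exp(-(doi_true - L / 2) / 4.0))

# Calibration: rank the measured ratios; under uniformity, the empirical CDF
# of the ratio is (DOI / L), so ranks map directly to depth estimates.
order = np.argsort(ratio)
doi_est = np.empty_like(ratio)
doi_est[order] = L * (np.arange(ratio.size) + 0.5) / ratio.size

err = np.abs(doi_est - doi_true)
```

Note the calibration never uses the logistic form, only its monotonicity, which is the essence of the "no knowledge or assumption of detector responses" claim.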

  18. A novel method to calibrate DOI function of a PET detector with a dual-ended-scintillator readout

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shao Yiping; Yao Rutao; Ma Tianyu

    The detection of depth-of-interaction (DOI) is a critical detector capability to improve the PET spatial resolution uniformity across the field-of-view and will significantly enhance, in particular, small bore system performance for brain, breast, and small animal imaging. One promising technique of DOI detection is to use dual-ended-scintillator readout that uses two photon sensors to detect scintillation light from both ends of a scintillator array and estimate DOI based on the ratio of signals (similar to Anger logic). This approach needs a careful DOI function calibration to establish accurate relationship between DOI and signal ratios, and to recalibrate if the detectionmore » condition is shifted due to the drift of sensor gain, bias variations, or degraded optical coupling, etc. However, the current calibration method that uses coincident events to locate interaction positions inside a single scintillator crystal has severe drawbacks, such as complicated setup, long and repetitive measurements, and being prone to errors from various possible misalignments among the source and detector components. This method is also not practically suitable to calibrate multiple DOI functions of a crystal array. To solve these problems, a new method has been developed that requires only a uniform flood source to irradiate a crystal array without the need to locate the interaction positions, and calculates DOI functions based solely on the uniform probability distribution of interactions over DOI positions without knowledge or assumption of detector responses. Simulation and experiment have been studied to validate the new method, and the results show that the new method, with a simple setup and one single measurement, can provide consistent and accurate DOI functions for the entire array of multiple scintillator crystals. 
This will enable an accurate, simple, and practical DOI function calibration for the PET detectors based on the design of dual-ended-scintillator readout. In addition, the new method can be generally applied to calibrating other types of detectors that use a similar dual-ended readout to acquire the radiation interaction position.

  19. Automatic multi-camera calibration for deployable positioning systems

    NASA Astrophysics Data System (ADS)

    Axelsson, Maria; Karlsson, Mikael; Rudner, Staffan

    2012-06-01

Surveillance with automated positioning and tracking of subjects and vehicles in 3D is desired in many defence and security applications. Camera systems with stereo or multiple cameras are often used for 3D positioning. In such systems, accurate camera calibration is needed to obtain a reliable 3D position estimate. There is also a need for automated camera calibration to facilitate fast deployment of semi-mobile multi-camera 3D positioning systems. In this paper we investigate a method for automatic calibration of the extrinsic camera parameters (relative camera pose and orientation) of a multi-camera positioning system. It is based on estimation of the essential matrix between each camera pair using the 5-point method for intrinsically calibrated cameras. The method is compared to a manual calibration method using real HD video data from a field trial with a multi-camera positioning system. The method is also evaluated on simulated data from a stereo camera model. The results show that the reprojection error of the automated camera calibration method is close to or smaller than the error for the manual calibration method and that the automated calibration method can replace the manual calibration.
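Since the comparison above hinges on reprojection error, here is a minimal pinhole-projection sketch computing the RMS reprojection error for one camera (hypothetical intrinsics and synthetic points; not the paper's 5-point essential-matrix pipeline):

```python
import numpy as np

K = np.array([[800.0,   0.0, 320.0],    # hypothetical pinhole intrinsics
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(points_w, R, t):
    """Project 3-D world points to pixel coordinates for a camera with pose (R, t)."""
    pc = points_w @ R.T + t             # world -> camera frame
    uv = pc @ K.T                       # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]       # perspective division

rng = np.random.default_rng(4)
pts = rng.uniform(-1.0, 1.0, (50, 3)) + np.array([0.0, 0.0, 5.0])  # in front of camera
R, t = np.eye(3), np.zeros(3)

# Simulated detections: ideal projections plus 0.5 px Gaussian detection noise
detections = project(pts, R, t) + rng.normal(0.0, 0.5, (50, 2))

resid = project(pts, R, t) - detections
rms_reproj = float(np.sqrt(np.mean(np.sum(resid ** 2, axis=1))))
```

With 0.5 px isotropic noise the RMS reprojection error lands near sqrt(2) * 0.5 ≈ 0.7 px; in a calibration evaluation like the one above, a larger value signals pose/orientation error rather than detection noise.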

  20. Parameter de-correlation and model-identification in hybrid-style terrestrial laser scanner self-calibration

    NASA Astrophysics Data System (ADS)

    Lichti, Derek D.; Chow, Jacky; Lahamy, Hervé

    One of the important systematic error parameters identified in terrestrial laser scanners is the collimation axis error, which models the non-orthogonality between two instrumental axes. The quality of this parameter determined by self-calibration, as measured by its estimated precision and its correlation with the tertiary rotation angle κ of the scanner exterior orientation, is strongly dependent on instrument architecture. While the quality is generally very high for panoramic-type scanners, it is comparably poor for hybrid-style instruments. Two methods for improving the quality of the collimation axis error in hybrid instrument self-calibration are proposed herein: (1) the inclusion of independent observations of the tertiary rotation angle κ; and (2) the use of a new collimation axis error model. Five real datasets were captured with two different hybrid-style scanners to test each method's efficacy. While the first method achieves the desired outcome of complete decoupling of the collimation axis error from κ, it is shown that the high correlation is simply transferred to other model variables. The second method achieves partial parameter de-correlation to acceptable levels. Importantly, it does so without any adverse, secondary correlations and is therefore the method recommended for future use. Finally, systematic error model identification has been greatly aided in previous studies by graphical analyses of self-calibration residuals. This paper presents results showing the architecture dependence of this technique, revealing its limitations for hybrid scanners.

  1. A Novel Multi-Camera Calibration Method based on Flat Refractive Geometry

    NASA Astrophysics Data System (ADS)

    Huang, S.; Feng, M. C.; Zheng, T. X.; Li, F.; Wang, J. Q.; Xiao, L. F.

    2018-03-01

    Multi-camera calibration plays an important role in many fields. In this paper, we present a novel multi-camera calibration method based on flat refractive geometry. All cameras can acquire calibration images of a transparent glass calibration board (TGCB) at the same time. The use of a TGCB introduces refraction, which generates calibration error; the theory of flat refractive geometry is employed to eliminate this error. Moreover, the bundle adjustment method is used to minimize the reprojection error and obtain optimized calibration results. Finally, four-camera calibration results on real data show that the mean value and standard deviation of the reprojection error of our method are 4.3411e-05 and 0.4553 pixel, respectively. The experimental results show that the proposed method is accurate and reliable.

  2. Sensitivity-Based Guided Model Calibration

    NASA Astrophysics Data System (ADS)

    Semnani, M.; Asadzadeh, M.

    2017-12-01

    A common practice in automatic calibration of hydrologic models is to apply sensitivity analysis prior to global optimization, reducing the number of decision variables (DVs) by identifying the most sensitive ones. This two-stage process aims to improve optimization efficiency. However, parameter sensitivity information can also be used to enhance the ability of optimization algorithms to find good-quality solutions in fewer solution evaluations. This improvement can be achieved by focusing the optimization on sampling the most sensitive parameters in each iteration. In this study, the selection process of the dynamically dimensioned search (DDS) optimization algorithm is enhanced by utilizing a sensitivity analysis method to put more emphasis on the most sensitive decision variables for perturbation. The performance of DDS with sensitivity information is compared to the original version of DDS on several mathematical test functions and a model calibration case study. Overall, the results show that DDS with sensitivity information finds nearly the same solutions as the original DDS, but in significantly fewer solution evaluations.
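    The idea above — bias the DDS perturbation step toward sensitive decision variables — can be sketched in a few lines. This is a minimal illustration of the concept on a toy sphere function, not the paper's algorithm: the sensitivity weights, selection rule, and test problem are all assumptions.

```python
import numpy as np

def sdds(f, lo, hi, sens, n_iter=1000, r=0.2, seed=0):
    """Dynamically dimensioned search with sensitivity-weighted
    variable selection (a sketch of the idea, not the paper's code)."""
    rng = np.random.default_rng(seed)
    lo, hi, sens = map(np.asarray, (lo, hi, sens))
    x_best = rng.uniform(lo, hi)
    f_best = f(x_best)
    w = sens / sens.sum()                        # normalized sensitivity weights
    for i in range(1, n_iter + 1):
        p = 1.0 - np.log(i) / np.log(n_iter)     # standard DDS inclusion probability
        sel = rng.random(len(lo)) < p * w * len(lo)  # bias toward sensitive DVs
        if not sel.any():
            sel[rng.choice(len(lo), p=w)] = True     # always perturb at least one DV
        x = x_best.copy()
        x[sel] += r * (hi[sel] - lo[sel]) * rng.standard_normal(sel.sum())
        x = np.clip(x, lo, hi)                   # simple bound handling
        fx = f(x)
        if fx <= f_best:                         # greedy acceptance
            x_best, f_best = x, fx
    return x_best, f_best

# Toy calibration: 5-D sphere function, first variables deemed most sensitive
sens = np.array([1.0, 1.0, 0.5, 0.1, 0.1])
x, fx = sdds(lambda x: np.sum(x**2), -5 * np.ones(5), 5 * np.ones(5), sens)
```

The greedy acceptance and the shrinking inclusion probability are standard DDS; only the weighting by `sens` is the modification the abstract describes.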

  3. Fricke-gel dosimeter: overview of Xylenol Orange chemical behavior

    NASA Astrophysics Data System (ADS)

    Liosi, G. M.; Dondi, D.; Vander Griend, D. A.; Lazzaroni, S.; D'Agostino, G.; Mariani, M.

    2017-11-01

    The complexation between Xylenol Orange (XO) and Fe3+ ions plays a key role in Fricke-gel dosimeters for the determination of the absorbed dose via UV-vis analysis. In this study, the effect of XO and the acidity of the solution on the complexation mechanism was investigated. Moreover, starting from the results of complexation titration and Equilibrium Restricted Factor Analysis, four XO-Fe3+ complexes were identified to contribute to the absorption spectra. Based on the acquired knowledge, a new [Fe3+] vs dose calibration method is proposed. The preliminary results show a significant improvement of the sensitivity and dose threshold with respect to the commonly used Abs vs dose calibration method.

  4. The Chandra Source Catalog 2.0: Calibrations

    NASA Astrophysics Data System (ADS)

    Graessle, Dale E.; Evans, Ian N.; Rots, Arnold H.; Allen, Christopher E.; Anderson, Craig S.; Budynkiewicz, Jamie A.; Burke, Douglas; Chen, Judy C.; Civano, Francesca Maria; D'Abrusco, Raffaele; Doe, Stephen M.; Evans, Janet D.; Fabbiano, Giuseppina; Gibbs, Danny G., II; Glotfelty, Kenny J.; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; Houck, John C.; Lauer, Jennifer L.; Laurino, Omar; Lee, Nicholas P.; Martínez-Galarza, Juan Rafael; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph; McLaughlin, Warren; Morgan, Douglas L.; Mossman, Amy E.; Nguyen, Dan T.; Nichols, Joy S.; Nowak, Michael A.; Paxson, Charles; Plummer, David A.; Primini, Francis Anthony; Siemiginowska, Aneta; Sundheim, Beth A.; Tibbetts, Michael; Van Stone, David W.; Zografou, Panagoula

    2018-01-01

    Among the many enhancements implemented for the release of Chandra Source Catalog (CSC) 2.0 are improvements in the processing calibration database (CalDB). We have included a thorough overhaul of the CalDB software used in the processing. The software system upgrade, called "CalDB version 4," allows for a more rational and consistent specification of flight configurations and calibration boundary conditions. Numerous improvements in the specific calibrations applied have also been added. Chandra's radiometric and detector response calibrations vary considerably with time, detector operating temperature, and position on the detector. The CalDB has been enhanced to provide the best calibrations possible for each observation over the fifteen-year period included in CSC 2.0. Calibration updates include an improved ACIS contamination model, as well as updated time-varying gain (i.e., photon energy) and quantum efficiency maps for ACIS and HRC-I. Additionally, improved corrections for the ACIS quantum efficiency losses due to CCD charge transfer inefficiency (CTI) have been added for each of the ten ACIS detectors. These CTI corrections are now time- and temperature-dependent, allowing ACIS to maintain a 0.3% energy calibration accuracy over the 0.5-7.0 keV range for any ACIS source in the catalog. Radiometric calibration (effective area) accuracy is estimated at ~4% over that range. We include a few examples where improvements in the Chandra CalDB allow for improved data reduction and modeling for the new CSC. This work has been supported by NASA under contract NAS 8-03060 to the Smithsonian Astrophysical Observatory for operation of the Chandra X-ray Center.

  5. Efficient gradient calibration based on diffusion MRI

    PubMed Central

    Teh, Irvin; Maguire, Mahon L.

    2016-01-01

    Purpose To propose a method for calibrating gradient systems and correcting gradient nonlinearities based on diffusion MRI measurements. Methods The gradient scalings in x, y, and z were first offset by up to 5% from precalibrated values to simulate a poorly calibrated system. Diffusion MRI data were acquired in a phantom filled with cyclooctane, and corrections for gradient scaling errors and nonlinearity were determined. The calibration was assessed with diffusion tensor imaging and independently validated with high-resolution anatomical MRI of a second structured phantom. Results The errors in apparent diffusion coefficients along orthogonal axes ranged from −9.2% ± 0.4% to +8.8% ± 0.7% before calibration and −0.5% ± 0.4% to +0.8% ± 0.3% after calibration. Concurrently, fractional anisotropy decreased from 0.14 ± 0.03 to 0.03 ± 0.01. Errors in geometric measurements in x, y and z ranged from −5.5% to +4.5% precalibration and were likewise reduced to −0.97% to +0.23% postcalibration. Image distortions from gradient nonlinearity were markedly reduced. Conclusion Periodic gradient calibration is an integral part of quality assurance in MRI. The proposed approach is both accurate and efficient, can be set up with readily available materials, and improves accuracy in both anatomical and diffusion MRI to within ±1%. Magn Reson Med 77:170–179, 2017. © 2016 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. PMID:26749277
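    The core of gradient-scaling calibration is that the b-value scales with the square of the gradient amplitude, so a measured ADC relates to the true diffusivity by ADC_meas = (G_actual/G_nominal)² · D_true. A correction factor per axis then follows from a known reference diffusivity. The sketch below uses invented numbers (D_ref and the per-axis measured ADCs are hypothetical, not the paper's data):

```python
import numpy as np

# Known reference diffusivity of the phantom liquid, mm^2/s
# (hypothetical value; the true cyclooctane diffusivity would
# be taken from reference tables at the phantom temperature).
D_ref = 8.0e-4

# Measured ADCs along x, y, z on the miscalibrated system (hypothetical)
D_meas = np.array([9.2e-4, 7.4e-4, 8.1e-4])

# Since b ~ G^2, ADC_meas = (G_actual/G_nominal)^2 * D_ref, so the
# multiplicative correction to apply to each gradient scaling is:
corr = np.sqrt(D_ref / D_meas)

# After applying the correction, the re-measured ADC should equal D_ref:
D_after = D_meas * corr**2
```

Nonlinearity correction would extend this from a single scalar per axis to a spatially varying field, which is what the second structured phantom validates.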

  6. In-depth analysis and discussions of water absorption-typed high power laser calorimeter

    NASA Astrophysics Data System (ADS)

    Wei, Ji Feng

    2017-02-01

    In high-power and high-energy laser measurement, absorber materials are easily destroyed under long-term direct laser irradiation. To improve the calorimeter's measuring capacity, a measuring system using flowing water directly as the absorber medium was built. The system's basic principles and the design parameters of its major parts are elaborated, and its measuring capacity, the laser working modes, and the effects of the major parameters are analyzed in depth. Moreover, the factors that may affect measurement accuracy are analyzed and discussed, and the corresponding control measures and methods are described. Self-calibration and normal calibration experiments show that this calorimeter has very high accuracy: in electrical calibration, the average correction coefficient is only 1.015, with a standard deviation of only 0.5%, and in calibration experiments the standard deviation relative to a middle-power standard calorimeter is only 1.9%.

  7. Accuracy and Calibration of High Explosive Thermodynamic Equations of State

    NASA Astrophysics Data System (ADS)

    Baker, Ernest L.; Capellos, Christos; Stiel, Leonard I.; Pincay, Jack

    2010-10-01

    The Jones-Wilkins-Lee-Baker (JWLB) equation of state (EOS) was developed to more accurately describe overdriven detonation while maintaining an accurate description of high explosive products expansion work output. The increased mathematical complexity of the JWLB high explosive equations of state provides increased accuracy for practical problems of interest. Increased numbers of parameters are often justified based on improved physics descriptions but can also mean increased calibration complexity. A generalized extent of aluminum reaction Jones-Wilkins-Lee (JWL)-based EOS was developed in order to more accurately describe the observed behavior of aluminized explosives detonation products expansion. A calibration method was developed to describe the unreacted, partially reacted, and completely reacted explosive using nonlinear optimization. A reasonable calibration of a generalized extent of aluminum reaction JWLB EOS as a function of aluminum reaction fraction has not yet been achieved due to the increased mathematical complexity of the JWLB form.
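    For context, the JWLB form extends the standard JWL products equation of state, p(V, E) = A(1 − ω/R₁V)e^(−R₁V) + B(1 − ω/R₂V)e^(−R₂V) + ωE/V, with additional exponential terms for the overdriven regime. The sketch below evaluates only the base JWL form, with commonly quoted TNT-like parameters used purely for illustration (they are not the paper's calibrated JWLB values):

```python
import numpy as np

def jwl_pressure(V, E, A=371.2, B=3.231, R1=4.15, R2=0.95, omega=0.30):
    """Standard JWL products EOS, p(V, E).  Pressure comes out in GPa when
    A, B, and E are in GPa and V is the relative volume v/v0.  The default
    parameters are commonly quoted TNT values, used only for illustration."""
    return (A * (1 - omega / (R1 * V)) * np.exp(-R1 * V)
            + B * (1 - omega / (R2 * V)) * np.exp(-R2 * V)
            + omega * E / V)

# Pressure along a products expansion at constant internal energy (illustrative)
V = np.array([1.0, 2.0, 4.0, 7.0])
p = jwl_pressure(V, E=7.0)
```

A calibration such as the one described in the abstract would fit A, B, R₁, R₂, ω (and the extra JWLB terms) to cylinder-test and overdriven-detonation data by nonlinear optimization.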

  8. A tunable laser system for precision wavelength calibration of spectra

    NASA Astrophysics Data System (ADS)

    Cramer, Claire

    2010-02-01

    We present a novel laser-based wavelength calibration technique that improves the precision of astronomical spectroscopy and solves a calibration problem inherent to multi-object spectroscopy. We have tested a prototype with the Hectochelle spectrograph at the MMT 6.5 m telescope. The Hectochelle is a high-dispersion, fiber-fed, multi-object spectrograph capable of recording up to 240 spectra simultaneously with a resolving power of 40000. The standard wavelength calibration method uses spectra from ThAr hollow-cathode lamps shining directly onto the fibers. The difference in light path between calibration and science light, as well as the uneven distribution of spectral lines, is believed to introduce errors of up to several hundred m/s in the wavelength scale. Our tunable laser wavelength calibrator is bright enough for use with a dome screen, allowing the calibration light path to better match the science light path. Further, the laser is tuned in regular steps across a spectral order, creating a comb of evenly spaced lines on the detector. Using the solar spectrum reflected from the atmosphere to record the same spectrum in every fiber, we show that laser wavelength calibration brings radial velocity uncertainties down below 100 m/s. We also present results from studies of globular clusters, and explain how the calibration technique can aid in stellar age determinations, studies of young stars, and searches for dark matter clumping in the galactic halo.

  9. Novel quantitative calibration approach for multi-configuration electromagnetic induction (EMI) systems using data acquired at multiple elevations

    NASA Astrophysics Data System (ADS)

    Tan, Xihe; Mester, Achim; von Hebel, Christian; van der Kruk, Jan; Zimmermann, Egon; Vereecken, Harry; van Waasen, Stefan

    2017-04-01

    Electromagnetic induction (EMI) systems offer great potential to obtain highly resolved layered electrical conductivity models of the shallow subsurface. State-of-the-art inversion procedures require quantitative calibration of EMI data, especially for short-offset EMI systems, where significant data shifts are often observed. These shifts are caused by external influences such as the presence of the operator, zero-leveling procedures, the field setup used to move the EMI system, and/or nearby cables. Calibration can be performed using collocated electrical resistivity measurements or soil samples; however, both methods are time-consuming in the field. To calibrate quickly and concisely, we introduce a novel on-site calibration method using a series of apparent electrical conductivity (ECa) values acquired at multiple elevations with a multi-configuration EMI system. No additional instrument or prior knowledge of the subsurface is needed to acquire quantitative ECa data. Using this calibration method, we correct each coil configuration, i.e., each transmitter-receiver coil separation and horizontal or vertical coplanar (HCP or VCP) coil orientation, with a unique set of calibration parameters. A multi-layer soil structure at the corresponding measurement location is inverted together with the calibration parameters, using full-solution Maxwell equations for the forward modelling within the shuffled complex evolution (SCE) algorithm to find the optimum solution within a user-defined parameter space. Synthetic data verified the feasibility of calibrating HCP and VCP measurements of a custom-made six-coil EMI system with coil offsets between 0.35 m and 1.8 m for quantitative data inversions. As a next step, we applied the calibration approach to experimental data acquired on a bare soil test field (Selhausen, Germany) with the considered EMI system.
The obtained calibration parameters were applied to measurements over a 30 m transect line covering conductivities between 5 and 40 mS/m. Inverted calibrated EMI data of the transect line showed electrical conductivity distributions and layer interfaces very similar to reference data obtained from vertical electrical sounding (VES) measurements. These results show that a combined calibration and inversion of multi-configuration EMI data is possible when measurements at different elevations are included, which speeds up the acquisition of quantitative EMI data since labor-intensive electrical resistivity measurements or soil coring are no longer necessary.

  10. XUV Photometer System (XPS): New Dark-Count Corrections Model and Improved Data Products

    NASA Astrophysics Data System (ADS)

    Elliott, J. P.; Vanier, B.; Woods, T. N.

    2017-12-01

    We present newly updated dark-count calibrations for the SORCE XUV Photometer System (XPS) and the resulting improved data products released in March 2017. The SORCE mission has provided a 14-year solar spectral irradiance record, and the XPS contributes to this record in the 0.1 nm to 40 nm range. The SORCE spacecraft has been operating in what is known as Day-Only Operations (DO-Op) mode since February 2014. In this mode it is not possible to collect data, including dark counts, when the spacecraft is in eclipse, as was done prior to DO-Op. Instead, we take advantage of the position of the XPS filter wheel and collect these data when the wheel is in a "dark" position. Further, in this mode dark data are not always available for all observations, requiring extrapolation to calibrate data at these times. We model the dark counts with a piecewise 2D nonlinear least-squares surface fit in the time and temperature dimensions, which allows us to calibrate XPS data into the DO-Op phase of the mission by extrapolating along this surface. The XPS version 11 data product release benefits from this new calibration. We present comparisons of the previous and current calibration methods in addition to planned future upgrades of our data products.
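    The surface-fit-and-extrapolate idea can be illustrated compactly. The XPS team's model is a piecewise 2D *nonlinear* fit; for brevity this sketch fits a polynomial surface in time and temperature by ordinary linear least squares, with an entirely synthetic dark-count surface (all coefficients invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic dark-count measurements over mission time (days) and detector
# temperature (deg C); the true surface here is invented for illustration.
t = rng.uniform(0, 3000, 400)
T = rng.uniform(10, 30, 400)
dark = 5.0 + 0.002 * t + 0.3 * T + 0.004 * T**2 + rng.normal(0, 0.05, 400)

# Design matrix for a linear-in-time, quadratic-in-temperature surface
X = np.column_stack([np.ones_like(t), t, T, T**2])
coef, *_ = np.linalg.lstsq(X, dark, rcond=None)

def dark_model(t, T):
    """Evaluate the fitted dark-count surface at (time, temperature)."""
    return coef @ np.array([1.0, t, T, T**2])

# Extrapolate the dark count to a later time with no dark measurement,
# as is needed during DO-Op when dark data are unavailable.
est = dark_model(3500.0, 22.0)
```

The real calibration would fit nonlinear basis functions piecewise over mission epochs, but the extrapolation step — evaluating the fitted surface at observation times lacking dark data — is the same.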

  11. Refinement of moisture calibration curves for nuclear gage.

    DOT National Transportation Integrated Search

    1973-01-01

    Over the last three years the Virginia Highway Research Council has directed a research effort toward improving the method of determining the moisture content of soils with a nuclear gage. The first task in this research was the determination of the ...

  12. [Application of AOTF in spectral analysis. 1. Hardware and software designs for the self-constructed visible AOTF spectrophotometer].

    PubMed

    He, Jia-yao; Peng, Rong-fei; Zhang, Zhan-xia

    2002-02-01

    A self-constructed visible spectrophotometer using an acousto-optic tunable filter (AOTF) as the dispersing element is described. Two different AOTFs (one from The Institute for Silicate (Shanghai, China) and the other from Brimrose (USA)) are tested. The software, written in Visual C++ and run on a Windows 98 platform, is an application with a dual database and multiple windows; four independent windows, namely scanning, quantitative, calibration and result, are incorporated. A Fourier self-deconvolution algorithm is also incorporated to improve the spectral resolution. The wavelengths are calibrated using the polynomial curve fitting method. The spectra and calibration curves of soluble aniline blue and phenol red are presented to show the feasibility of the constructed spectrophotometer.
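    Polynomial wavelength calibration of the kind mentioned above maps the AOTF's RF drive frequency to the diffracted wavelength via a fitted low-order polynomial. The calibration pairs below are invented for illustration (a real calibration would use measured reference lines):

```python
import numpy as np

# Hypothetical calibration pairs: AOTF RF drive frequency (MHz) vs. the
# known peak wavelength (nm) of reference lines.  Values are illustrative,
# generated here from a smooth quadratic trend, not real instrument data.
freq = np.array([60.0, 70.0, 80.0, 90.0, 100.0, 110.0])
wavelength = np.array([750.0, 692.5, 640.0, 592.5, 550.0, 512.5])

# Fit wavelength as a low-order polynomial of drive frequency
coeffs = np.polyfit(freq, wavelength, deg=2)
predict = np.poly1d(coeffs)

# Residuals of the calibration fit and a prediction at a new drive frequency
resid = wavelength - predict(freq)
lam_85 = predict(85.0)
```

In operation the fitted polynomial is inverted (or tabulated) so the software can command the RF frequency that selects a requested wavelength.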

  13. Probabilistic calibration of the distributed hydrological model RIBS applied to real-time flood forecasting: the Harod river basin case study (Israel)

    NASA Astrophysics Data System (ADS)

    Nesti, Alice; Mediero, Luis; Garrote, Luis; Caporali, Enrica

    2010-05-01

    An automatic probabilistic calibration method for distributed rainfall-runoff models is presented. The high number of parameters in distributed hydrologic models makes special demands on the optimization procedure used to estimate them. With the proposed technique it is possible to reduce the complexity of calibration while maintaining adequate model predictions. The first step of the calibration procedure, for the main model parameters, is done manually with the aim of identifying their variation ranges. Afterwards a Monte Carlo technique is applied, which consists of repeated model simulations with randomly generated parameters. The Monte Carlo Analysis Toolbox (MCAT) includes a number of analysis methods to evaluate the results of these Monte Carlo parameter sampling experiments. The study investigates the use of a global sensitivity analysis as a screening tool to reduce the parametric dimensionality of multi-objective hydrological model calibration problems, while maximizing the information extracted from hydrological response data. The method is applied to the calibration of the RIBS flood forecasting model in the Harod river basin, located in Israel. The Harod basin has an area of 180 km2. The catchment has a Mediterranean climate and is mainly characterized by a desert landscape, with a soil that is able to absorb large quantities of rainfall yet at the same time capable of generating high discharge peaks. Radar rainfall data with 6-minute temporal resolution are available as input to the model. The aim of the study is the validation of the model for real-time flood forecasting, in order to evaluate the benefits of improved precipitation forecasting within the FLASH European project.
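    The two-step procedure above — manually bound the parameter ranges, then sample them with Monte Carlo and score each simulation — can be sketched with a toy model. A single linear reservoir stands in for RIBS here purely for illustration; the parameter range, forcing, and noise level are all invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy rainfall-runoff model: a single linear reservoir discretized in time,
# with storage coefficient k as the parameter to calibrate.
def simulate(k, rain):
    s, q = 0.0, []
    for p in rain:
        s += p              # rainfall enters storage
        out = k * s         # outflow proportional to storage
        s -= out
        q.append(out)
    return np.array(q)

rain = rng.uniform(0, 10, 200)
q_obs = simulate(0.35, rain) + rng.normal(0, 0.1, 200)  # synthetic "observed" flow

# Monte Carlo sampling over the manually identified range of k,
# scoring each randomly generated parameter set against observations
k_samples = rng.uniform(0.05, 0.8, 2000)
rmse = np.array([np.sqrt(np.mean((simulate(k, rain) - q_obs) ** 2))
                 for k in k_samples])
k_best = k_samples[rmse.argmin()]
```

Tools like MCAT then analyze the full (parameter, score) cloud — dotty plots, identifiability, regional sensitivity — rather than keeping only the single best sample.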

  14. Estimation of k-ε parameters using surrogate models and jet-in-crossflow data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lefantzi, Sophia; Ray, Jaideep; Arunajatesan, Srinivasan

    2014-11-01

    We demonstrate a Bayesian method that can be used to calibrate computationally expensive 3D RANS (Reynolds-Averaged Navier-Stokes) models with complex response surfaces. Such calibrations, conditioned on experimental data, can yield turbulence model parameters as probability density functions (PDFs), concisely capturing the uncertainty in the parameter estimates. Methods such as Markov chain Monte Carlo (MCMC) estimate the PDF by sampling, with each sample requiring a run of the RANS model. Consequently, a quick-running surrogate is used instead of the RANS simulator. The surrogate can be very difficult to design if the model's response, i.e., the dependence of the calibration variable (the observable) on the parameter being estimated, is complex. We show how the training data used to construct the surrogate can be employed to isolate a promising and physically realistic part of the parameter space, within which the response is well behaved and easily modeled. We design a classifier, based on treed linear models, to model the "well-behaved region". This classifier serves as a prior in a Bayesian calibration study aimed at estimating 3 k-ε parameters (Cμ, Cε2, Cε1) from experimental data of a transonic jet-in-crossflow interaction. The robustness of the calibration is investigated by checking its predictions of variables not included in the calibration data. We also check the limit of applicability of the calibration by testing at off-calibration flow regimes. We find that calibration yields turbulence model parameters which predict the flowfield far better than the nominal values of the parameters. Substantial improvements are still obtained when we use the calibrated RANS model to predict jet-in-crossflow at Mach numbers and jet strengths quite different from those used to generate the experimental (calibration) data.
Thus the primary reason for the poor predictive skill of RANS with nominal turbulence model parameter values was parametric uncertainty, which was rectified by calibration. Post-calibration, the dominant contribution to model inaccuracies is due to structural errors in RANS.
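    The surrogate-plus-MCMC workflow described above can be sketched in one dimension. This is a deliberately minimal illustration: a cheap quadratic function stands in for the trained surrogate, a flat prior on a bounded range stands in for the treed-classifier prior, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Cheap "surrogate" standing in for the expensive RANS model: observable y
# as a function of one turbulence parameter c (form and values invented).
def surrogate(c):
    return 1.5 * c**2 + 0.2 * c

# Synthetic experimental data generated at a "true" parameter value
c_true, sigma = 0.9, 0.05
y_obs = surrogate(c_true) + rng.normal(0, sigma, 20)

def log_post(c):
    """Gaussian likelihood times a uniform prior on a physical range."""
    if not (0.0 < c < 2.0):
        return -np.inf
    r = y_obs - surrogate(c)
    return -0.5 * np.sum(r**2) / sigma**2

# Random-walk Metropolis sampling of the posterior PDF of c
c, lp = 0.5, log_post(0.5)
chain = []
for _ in range(20000):
    c_new = c + rng.normal(0, 0.02)
    lp_new = log_post(c_new)
    if np.log(rng.random()) < lp_new - lp:   # Metropolis accept/reject
        c, lp = c_new, lp_new
    chain.append(c)
post = np.array(chain[5000:])                # discard burn-in
```

Every likelihood evaluation calls only the surrogate, which is the whole point: the expensive simulator is run only to generate the surrogate's training data.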

  15. The Optical Field Angle Distortion Calibration of HST Fine Guidance Sensors 1R and 3

    NASA Technical Reports Server (NTRS)

    McArthur, B.; Benedict, G. F.; Jefferys, W. H.; Nelan, E.

    2006-01-01

    To date, five OFAD (Optical Field Angle Distortion) calibrations have been performed with a star field in M35, four on FGS3 and one on FGS1, all analyzed by the Astrometry Science Team. We have recently completed an improved FGS1R OFAD calibration. The ongoing Long Term Stability Tests have also been analyzed and incorporated into these calibrations, which are time-dependent due to on-orbit changes in the FGS. Descriptions of these tests and the results of our OFAD modeling are given. Because all OFAD calibrations use the same star field, we calibrate FGS 1 and FGS 3 simultaneously. This increases the precision of our input catalog, resulting in an improvement in both the FGS 1 and FGS 3 calibrations. A redetermination of the proper motions, using 12 years of HST data, has significantly improved our calibration. Residuals to our OFAD modeling indicate that FGS 1 will provide astrometry superior to FGS 3 by approx. 20%. Past and future FGS astrometric science supported by these calibrations is briefly reviewed.

  16. An improved multilevel Monte Carlo method for estimating probability distribution functions in stochastic oil reservoir simulations

    DOE PAGES

    Lu, Dan; Zhang, Guannan; Webster, Clayton G.; ...

    2016-12-30

    In this paper, we develop an improved multilevel Monte Carlo (MLMC) method for estimating cumulative distribution functions (CDFs) of a quantity of interest coming from numerical approximation of large-scale stochastic subsurface simulations. Compared with Monte Carlo (MC) methods, which require a significantly large number of high-fidelity model executions to achieve a prescribed accuracy when computing statistical expectations, MLMC methods were originally proposed to significantly reduce the computational cost through the use of multifidelity approximations. The improved performance of MLMC methods depends strongly on the decay of the variance of the integrand as the level increases. However, the main challenge in estimating CDFs is that the integrand is a discontinuous indicator function whose variance decays slowly. To address this difficulty, we approximate the integrand using a smoothing function that accelerates the decay of the variance. In addition, we design a novel a posteriori optimization strategy to calibrate the smoothing function, so as to balance the computational gain and the approximation error. The combined techniques are integrated into a very general and practical algorithm that can be applied to a wide range of subsurface problems for high-dimensional uncertainty quantification, such as the fine-grid oil reservoir model considered in this effort. The numerical results reveal that, with the calibrated smoothing function, the improved MLMC technique significantly reduces the computational complexity compared to the standard MC approach. Finally, we discuss several factors that affect the performance of the MLMC method and provide guidance for effective and efficient usage in practice.
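    The key device above — replacing the discontinuous indicator 1{X ≤ x} inside the CDF estimator with a smooth surrogate — can be shown with ordinary (single-level) Monte Carlo for brevity. The linear-ramp smoothing used here is a simple stand-in for the paper's calibrated smoothing function, and the standard-normal example is invented:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(4)

def smooth_indicator(u, delta):
    """Smooth surrogate for the indicator 1{u >= 0}: ramps linearly from
    0 to 1 over a band of width delta.  A simple choice; the paper
    calibrates a more refined smoothing function a posteriori."""
    return np.clip(u / delta + 0.5, 0.0, 1.0)

# Monte Carlo estimate of F(x) = P(X <= x) for X ~ N(0, 1)
x, delta, n = 0.5, 0.1, 200_000
samples = rng.standard_normal(n)
cdf_smooth = smooth_indicator(x - samples, delta).mean()
cdf_exact = 0.5 * (1 + erf(x / sqrt(2)))
```

The smoothing introduces an O(δ²) bias but makes the integrand continuous, which is what lets the level-to-level variance in a true MLMC estimator decay fast enough to pay off.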

  17. Improving radiation data quality of USDA UV-B monitoring and research program and evaluating UV decomposition in DayCent and its ecological impacts

    NASA Astrophysics Data System (ADS)

    Chen, Maosi

    Solar radiation impacts many aspects of the Earth's atmosphere and biosphere. The total solar radiation affects the atmospheric temperature profile and the Earth's surface radiative energy budget. Solar visible (VIS) radiation is the energy source of photosynthesis. Solar ultraviolet (UV) radiation impacts plant physiology, microbial activity, and human and animal health. Recent studies found that solar UV significantly shifts the mass loss and nitrogen patterns of plant litter decomposition in semi-arid and arid ecosystems. The potential mechanisms include the production of labile materials from direct and indirect photolysis of complex organic matter, the facilitation of microbial decomposition by those labile materials, and the UV inhibition of microbial populations. However, the mechanisms behind UV decomposition and its ecological impacts are still uncertain. Accurate and reliable ground solar radiation measurements help us better retrieve atmospheric composition, validate satellite radiation products, and simulate ecosystem processes. Incorporating UV decomposition into the DayCent biogeochemical model helps to better understand long-term ecological impacts. Improving the accuracy of UV irradiance data is the goal of the first part of this research, and examining the importance of UV radiation in the biogeochemical model DayCent is the goal of the second part. Thus, although the dissertation is separated into two parts, accurate UV irradiance measurement links them. In part one of this work, the accuracy and reliability of the current operational calibration method for the (UV-)Multi-Filter Rotating Shadowband Radiometer (MFRSR), which is used by the U.S. Department of Agriculture UV-B Monitoring and Research Program (UVMRP), is improved. The UVMRP has monitored solar radiation in 14 narrowband UV and VIS spectral channels at 37 sites across the U.S. since 1992.
The improvements in the quality of the data result from an improved cloud screening algorithm that iteratively rejects cloudy points based on a decreasing tolerance of unstable optical depth behavior when calibration information is unknown. A MODTRAN radiative transfer model simulation showed the new cloud screening algorithm was capable of screening cloudy points while retaining clear-sky points. The comparison results showed that the cloud-free points determined by the new algorithm generated significantly (56%) more, and unbiased, Langley offset voltages (VLOs) for both partly cloudy and sunny days at two testing sites, Hawaii and Florida. The VLOs are proportional to the radiometric sensitivity. The stability of the calibration is also improved by the development of a two-stage reference channel calibration method for collocated UV-MFRSR and MFRSR instruments. Special channels where aerosol is the only contributor to total optical depth (TOD) variation (e.g., the 368-nm channel) were selected, and the radiative transfer model (MODTRAN) was used to calculate direct normal and diffuse horizontal ratios, which were used to evaluate the stability of TOD in cloud-free points. The spectral dependence of the optical properties of atmospheric constituents and previously calibrated channels were used to find stable TOD points and perform Langley calibration at spectrally adjacent channels. A test of this method at the UV-B program site at Homestead, Florida (FL02) showed that the new method generated more clustered and abundant VLOs at all (UV-)MFRSR channels and potentially improved the accuracy by 2-4% at most channels and by over 10% at the 300-nm and 305-nm channels. In the second major part of this work, I calibrated the DayCent-UV model with ecosystem variables (e.g.
soil water, live biomass), allowed the maximum photodecay rate to vary with the litter's initial lignin fraction in the model, and validated the optimized model with LIDET observations of remaining carbon and nitrogen at three semi-arid sites. I also explored the ecological impacts of UV decomposition with the optimized DayCent-UV model. The DayCent-UV model showed significantly better performance than models without UV decomposition in simulating the observed linear carbon loss pattern and the persistent net nitrogen mineralization in the 10-year LIDET experiment at the three sites. The DayCent-UV equilibrium model runs showed that UV decomposition increased aboveground and belowground plant production, surface net nitrogen mineralization, and the surface litter nitrogen pool, while decreasing surface litter carbon, soil net nitrogen mineralization, and mineral soil carbon and nitrogen. In addition, UV decomposition showed minimal impacts (i.e., less than 1% change) on trace gas emissions and biotic decomposition rates. Overall, my dissertation provided a comprehensive solution to improve the calibration accuracy and reliability of the MFRSR and therefore the quality of radiation products. My dissertation also improved the understanding of UV decomposition and its long-term ecological impacts.
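    The Langley offset voltages (VLOs) central to the MFRSR calibration come from Langley regression: under Beer-Lambert extinction, V = V0·exp(−τm), so ln V is linear in airmass m and the intercept at m = 0 gives the top-of-atmosphere response V0. The sketch below runs that regression on synthetic clear-sky data (V0, τ, noise level all invented):

```python
import numpy as np

rng = np.random.default_rng(5)

# Beer-Lambert: V = V0 * exp(-tau * m), so ln V is linear in airmass m.
# V0 is the Langley offset voltage (the instrument's extraterrestrial
# response); values here are synthetic, not UVMRP data.
V0_true, tau = 2.5, 0.32
m = np.linspace(1.2, 5.0, 40)            # airmasses during a clear morning
V = V0_true * np.exp(-tau * m) * np.exp(rng.normal(0, 0.01, m.size))

# Linear regression of ln V on m; the intercept extrapolates to m = 0
slope, intercept = np.polyfit(m, np.log(V), 1)
V0_est = np.exp(intercept)
tau_est = -slope
```

The cloud-screening work described above matters precisely because a single cloudy point with unstable optical depth breaks the linearity of ln V versus m and biases the extrapolated V0.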

  18. An Improved Calibration Method for Hydrazine Monitors for the United States Air Force

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korsah, K

    2003-07-07

    This report documents the results of Phase 1 of the ''Air Force Hydrazine Detector Characterization and Calibration Project''. A method for calibrating model MDA 7100 hydrazine detectors in the United States Air Force (AF) inventory has been developed. The calibration system consists of a Kintek 491 reference gas generation system, a humidifier/mixer system which combines the dry reference hydrazine gas with humidified diluent or carrier gas to generate the required humidified reference for calibrations, and a gas sampling interface. The Kintek reference gas generation system itself is periodically calibrated using an ORNL-constructed coulometric titration system to verify the hydrazine concentration of the sample atmosphere in the interface module. The Kintek reference gas is then used to calibrate the hydrazine monitors. Thus, coulometric titration is only used to periodically assess the performance of the Kintek reference gas generation system, and is not required for hydrazine monitor calibrations. One advantage of using coulometric titration for verifying the concentration of the reference gas is that it is a primary standard (if used for simple solutions), thereby guaranteeing, in principle, that measurements will be traceable to SI units (i.e., to the mole). The effect of humidity of the reference gas was characterized by using the concentrations determined by coulometric titration to develop a humidity correction graph for the Kintek 491 reference gas generation system. Using this calibration method, calibration uncertainty has been reduced by 50% compared to the current method used to calibrate hydrazine monitors in the Air Force inventory, and calibration time has also been reduced by more than 20%. Significant findings from the studies documented in this report are the following: (1) The Kintek 491 reference gas generation system (generator, humidifier and interface module) can be used to calibrate hydrazine detectors.
(2) The Kintek system output concentration is less than the calculated output of the generator alone but can be calibrated as a system by using coulometric titration of gas samples collected with impingers. (3) The calibrated Kintek system output concentration is reproducible even after having been disassembled and moved and reassembled. (4) The uncertainty of the reference gas concentration generated by the Kintek system is less than half the uncertainty of the Zellweger Analytics' (ZA) reference gas concentration and can be easily lowered to one third or less of the ZA method by using lower-uncertainty flow rate or total flow measuring instruments. (5) The largest sources of uncertainty in the current ORNL calibration system are the permeation rate of the permeation tubes and the flow rate of the impinger sampling pump used to collect gas samples for calibrating the Kintek system. Upgrading the measurement equipment, as stated in (4), can reduce both of these. (6) The coulometric titration technique can be used to periodically assess the performance of the Kintek system and determine a suitable recalibration interval. (7) The Kintek system has been used to calibrate two MDA 7100s and an Interscan 4187 in less than one workday. The system can be upgraded (e.g., by automating it) to provide more calibrations per day. (8) The humidity of both the reference gas and the environment of the Chemcassette affect the MDA 7100 hydrazine detector's readings. However, ORNL believes that the environmental effect is less significant than the effect of the reference gas humidity. (9) The ORNL calibration method based on the Kintek 491 M-B gas standard can correct for the effect of the humidity of the reference gas to produce the same calibration as that of ZA's. Zellweger Analytics calibrations are typically performed at 45%-55% relative humidity. (10) Tests using the Interscan 4187 showed that the instrument was not accurate in its lower (0-100 ppb) range. 
Subsequent discussions with Kennedy Space Center (KSC) personnel also indicated that the Interscan units were not reproducible when new sensors were used. KSC had discovered that the Interscan units read incorrectly on the low range because of the presence of carbon dioxide. ORNL did not test the carbon dioxide effect, but it was found that the units did not read zero when a test gas containing no hydrazine was sampled. According to the KSC personnel consulted, NASA is phasing out the use of these Interscan detectors.
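The humidity correction described above can be sketched as a simple interpolation over a measured correction table. The table values and concentrations below are purely illustrative, not ORNL's data:

```python
# Hypothetical humidity-correction sketch for a permeation-based reference
# gas generator. The (RH %, factor) pairs are made up for illustration.
def correction_factor(rh, table):
    """Linearly interpolate a concentration correction factor at relative
    humidity `rh` (percent) from a sorted (rh, factor) calibration table."""
    if rh <= table[0][0]:
        return table[0][1]
    if rh >= table[-1][0]:
        return table[-1][1]
    for (x0, y0), (x1, y1) in zip(table, table[1:]):
        if x0 <= rh <= x1:
            return y0 + (y1 - y0) * (rh - x0) / (x1 - x0)

TABLE = [(20.0, 0.97), (45.0, 0.93), (55.0, 0.91), (80.0, 0.88)]
# Correct a nominal 100 ppb dry-basis output for 50% RH (hypothetical):
corrected_ppb = 100.0 * correction_factor(50.0, TABLE)
```

In practice the table would be populated from coulometric titration results at several humidities, as the report describes.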

  19. Verification of the ISO calibration method for field pyranometers under tropical sky conditions

    NASA Astrophysics Data System (ADS)

    Janjai, Serm; Tohsing, Korntip; Pattarapanitchai, Somjet; Detkhon, Pasakorn

    2017-02-01

Field pyranometers need to be calibrated annually, and the International Organization for Standardization (ISO) has defined a standard method (ISO 9847) for calibrating them. According to this standard method for outdoor calibration, a field pyranometer has to be compared against a reference pyranometer for a period of 2 to 14 days, depending on sky conditions. In this work, the ISO 9847 standard method was verified under tropical sky conditions. To verify the standard method, calibration of field pyranometers was conducted at a tropical site located in Nakhon Pathom (13.82° N, 100.04° E), Thailand under various sky conditions. Sky conditions were monitored using a sky camera. The calibration results for different time periods used for the calibration under various sky conditions were analyzed. It was found that the calibration periods given by this standard method could be reduced without significant change in the final calibration result. In addition, recommendations and a discussion on the use of this standard method in the tropics are also presented.
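The core of such an outdoor comparison calibration is a series ratio between field-sensor signal and reference irradiance. A minimal sketch of that idea (the function name and plain summation are illustrative; ISO 9847 prescribes additional interval-selection and quality rules):

```python
def responsivity(field_mv, ref_wm2):
    """Estimate a field pyranometer's responsivity (mV per W/m^2) as the
    ratio of summed field signals to summed reference irradiance over the
    accepted comparison intervals (a simplification of the ISO 9847 ratio)."""
    if len(field_mv) != len(ref_wm2):
        raise ValueError("series must be paired interval by interval")
    return sum(field_mv) / sum(ref_wm2)

# Hypothetical simultaneous readings over three accepted intervals:
r = responsivity([10.0, 20.0, 15.0], [1000.0, 2000.0, 1500.0])
```

Dividing subsequent field readings (mV) by `r` then yields irradiance in W/m².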

  20. Research on camera on orbit radial calibration based on black body and infrared calibration stars

    NASA Astrophysics Data System (ADS)

    Wang, YuDu; Su, XiaoFeng; Zhang, WanYing; Chen, FanSheng

    2018-05-01

Affected by the launch process and the space environment, the response of a space camera degrades over time, so on-orbit radiometric calibration is necessary. In this paper, we propose a calibration method based on accurate infrared standard stars to increase the precision of infrared radiation measurement. As stars can be considered point targets, we use them as the radiometric calibration source and establish a Taylor expansion method and an energy extrapolation model based on the WISE and 2MASS catalogs. We then update the calibration results obtained from the black body. Finally, the calibration mechanism is designed and verified by an on-orbit test. The experimental calibration results show that the irradiance extrapolation error is about 3% and the accuracy of the calibration method is about 10%, which satisfies the requirements of on-orbit calibration.

  1. A Comparison of Two Balance Calibration Model Building Methods

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard; Ulbrich, Norbert

    2007-01-01

    Simulated strain-gage balance calibration data is used to compare the accuracy of two balance calibration model building methods for different noise environments and calibration experiment designs. The first building method obtains a math model for the analysis of balance calibration data after applying a candidate math model search algorithm to the calibration data set. The second building method uses stepwise regression analysis in order to construct a model for the analysis. Four balance calibration data sets were simulated in order to compare the accuracy of the two math model building methods. The simulated data sets were prepared using the traditional One Factor At a Time (OFAT) technique and the Modern Design of Experiments (MDOE) approach. Random and systematic errors were introduced in the simulated calibration data sets in order to study their influence on the math model building methods. Residuals of the fitted calibration responses and other statistical metrics were compared in order to evaluate the calibration models developed with different combinations of noise environment, experiment design, and model building method. Overall, predicted math models and residuals of both math model building methods show very good agreement. Significant differences in model quality were attributable to noise environment, experiment design, and their interaction. Generally, the addition of systematic error significantly degraded the quality of calibration models developed from OFAT data by either method, but MDOE experiment designs were more robust with respect to the introduction of a systematic component of the unexplained variance.

  2. Depth estimation and camera calibration of a focused plenoptic camera for visual odometry

    NASA Astrophysics Data System (ADS)

    Zeller, Niclas; Quint, Franz; Stilla, Uwe

    2016-08-01

This paper presents new and improved methods of depth estimation and camera calibration for visual odometry with a focused plenoptic camera. For depth estimation we adapt an algorithm previously used in structure-from-motion approaches to work with images of a focused plenoptic camera. In the raw image of a plenoptic camera, scene patches are recorded in several micro-images under slightly different angles. This leads to a multi-view stereo problem. To reduce the complexity, we divide this into multiple binocular stereo problems. For each pixel with sufficient gradient we estimate a virtual (uncalibrated) depth based on local intensity error minimization. The estimated depth is characterized by the variance of the estimate and is subsequently updated with the estimates from other micro-images. Updating is performed in a Kalman-like fashion. The result of depth estimation in a single image of the plenoptic camera is a probabilistic depth map, where each depth pixel consists of an estimated virtual depth and a corresponding variance. Since the resulting image of the plenoptic camera contains two planes, the optical image and the depth map, camera calibration is divided into two separate sub-problems. The optical path is calibrated based on a traditional calibration method. For calibrating the depth map we introduce two novel model-based methods, which define the relation between the virtual depth estimated from the light-field image and the metric object distance. These two methods are compared to a well-known curve-fitting approach. Both model-based methods show significant advantages compared to the curve-fitting method. For visual odometry we fuse the probabilistic depth map gained from one shot of the plenoptic camera with the depth data gained by finding stereo correspondences between subsequent synthesized intensity images of the plenoptic camera. 
These images can be synthesized totally focused and thus finding stereo correspondences is enhanced. In contrast to monocular visual odometry approaches, due to the calibration of the individual depth maps, the scale of the scene can be observed. Furthermore, due to the light-field information better tracking capabilities compared to the monocular case can be expected. As result, the depth information gained by the plenoptic camera based visual odometry algorithm proposed in this paper has superior accuracy and reliability compared to the depth estimated from a single light-field image.
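The Kalman-like update of a depth hypothesis with an estimate from another micro-image amounts to inverse-variance fusion. A generic sketch of that step (not the authors' exact implementation):

```python
def fuse_depth(d1, var1, d2, var2):
    """Inverse-variance (Kalman-like) fusion of two depth estimates.
    Returns the fused depth and its reduced variance; the more certain
    estimate (smaller variance) dominates the result."""
    fused_var = var1 * var2 / (var1 + var2)
    fused_d = (d1 / var1 + d2 / var2) * fused_var
    return fused_d, fused_var

# Two equally uncertain estimates average; variance halves:
d, v = fuse_depth(2.0, 1.0, 4.0, 1.0)
```

Repeating this update across all micro-images observing a patch yields the per-pixel (depth, variance) pairs of the probabilistic depth map.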

  3. An evaluation of fossil tip-dating versus node-age calibrations in tetraodontiform fishes (Teleostei: Percomorphaceae).

    PubMed

    Arcila, Dahiana; Alexander Pyron, R; Tyler, James C; Ortí, Guillermo; Betancur-R, Ricardo

    2015-01-01

Time-calibrated phylogenies based on molecular data provide a framework for comparative studies. Calibration methods to combine fossil information with molecular phylogenies are, however, under active development, often generating disagreement about the best way to incorporate paleontological data into these analyses. This study provides an empirical comparison of the most widely used approach, based on node-dating priors for relaxed clocks implemented in the programs BEAST and MrBayes, with two recently proposed improvements: one using a new fossilized birth-death process model for node dating (implemented in the program DPPDiv), and the other using a total-evidence or tip-dating method (implemented in MrBayes and BEAST). These methods are applied herein to tetraodontiform fishes, a diverse group of living and extinct taxa that features one of the most extensive fossil records among teleosts. Previous estimates of time-calibrated phylogenies of tetraodontiforms using node-dating methods reported disparate estimates for their age of origin, ranging from the late Jurassic to the early Paleocene (ca. 150-59 Ma). We analyzed a comprehensive dataset with 16 loci and 210 morphological characters, including 131 taxa (95 extant and 36 fossil species) representing all families of fossil and extant tetraodontiforms, under different molecular clock calibration approaches. Results from node-dating methods produced consistently younger ages than the tip-dating approaches. The older ages inferred by tip dating imply an unlikely early-late Jurassic (ca. 185-119 Ma) origin for this order and the existence of extended ghost lineages in their fossil record. Node-based methods, by contrast, produce time estimates that are more consistent with the stratigraphic record, suggesting a late Cretaceous (ca. 86-96 Ma) origin. 
We show that the precision of clade age estimates using tip dating increases with the number of fossils analyzed and with the proximity of fossil taxa to the node under assessment. This study suggests that current implementations of tip dating may overestimate ages of divergence in calibrated phylogenies. It also provides a comprehensive phylogenetic framework for tetraodontiform systematics and future comparative studies. Copyright © 2014 Elsevier Inc. All rights reserved.

  4. Design, calibration and validation of a novel 3D printed instrumented spatial linkage that measures changes in the rotational axes of the tibiofemoral joint.

    PubMed

    Bonny, Daniel P; Hull, M L; Howell, S M

    2014-01-01

An accurate axis-finding technique is required to measure any changes from normal caused by total knee arthroplasty in the flexion-extension (F-E) and longitudinal rotation (LR) axes of the tibiofemoral joint. In a previous paper, we computationally determined how best to design and use an instrumented spatial linkage (ISL) to locate the F-E and LR axes such that rotational and translational errors were minimized. However, the ISL was not built and consequently was not calibrated; thus the errors in locating these axes were not quantified on an actual ISL. Moreover, previous methods to calibrate an ISL used calibration devices with accuracies that were either undocumented or insufficient for the device to serve as a gold-standard. Accordingly, the objectives were to (1) construct an ISL using the previously established guidelines, (2) calibrate the ISL using an improved method, and (3) quantify the error in measuring changes in the F-E and LR axes. A 3D printed ISL was constructed and calibrated using a coordinate measuring machine, which served as a gold standard. Validation was performed using a fixture that represented the tibiofemoral joint with an adjustable F-E axis and the errors in measuring changes to the positions and orientations of the F-E and LR axes were quantified. The resulting root mean squared errors (RMSEs) of the calibration residuals using the new calibration method were 0.24, 0.33, and 0.15 mm for the anterior-posterior, medial-lateral, and proximal-distal positions, respectively, and 0.11, 0.10, and 0.09 deg for varus-valgus, flexion-extension, and internal-external orientations, respectively. All RMSEs were below 0.29% of the respective full-scale range. When measuring changes to the F-E or LR axes, each orientation error was below 0.5 deg; when measuring changes in the F-E axis, each position error was below 1.0 mm. The largest position RMSE was when measuring a medial-lateral change in the LR axis (1.2 mm). 
Despite the large size of the ISL, these calibration residuals were better than those for previously published ISLs, particularly when measuring orientations, indicating that using a more accurate gold standard was beneficial in limiting the calibration residuals. The validation method demonstrated that this ISL is capable of accurately measuring clinically important changes (i.e. 1 mm and 1 deg) in the F-E and LR axes.

  5. Evaluating the accuracy of soil water sensors for irrigation scheduling to conserve freshwater

    NASA Astrophysics Data System (ADS)

    Ganjegunte, Girisha K.; Sheng, Zhuping; Clark, John A.

    2012-06-01

In the Trans-Pecos area, pecan [Carya illinoinensis (Wangenh) C. Koch] is a major irrigated cash crop. Pecan trees require large amounts of water for their growth, and flood (border) irrigation is the most common irrigation method. The pecan crop is often over-irrigated under the traditional method of irrigation scheduling, which counts the number of calendar days since the previous irrigation. Studies in other pecan growing areas have shown that water use efficiency can be improved significantly, and precious freshwater saved, by scheduling irrigation based on soil moisture conditions. This study evaluated the accuracy of three recent low-cost soil water sensors (ECH2O-5TE, Watermark 200SS and Tensiometer model R) for monitoring volumetric soil water content (θv) to develop improved irrigation scheduling in a mature pecan orchard in El Paso, Texas. Results indicated that while all three sensors followed the general trends of soil moisture conditions during the growing season, actual measurements differed significantly. Statistical analyses indicated that the Tensiometer provided more accurate soil moisture data than the ECH2O-5TE and Watermark without site-specific calibration. While the ECH2O-5TE overestimated the soil water content, the Watermark and Tensiometer underestimated it. Results of this study suggested poor accuracy for all three sensors when the factory calibration and the reported soil water retention curve for the study site's soil texture were used. This indicates that the sensors need site-specific calibration to improve the accuracy of their soil water content estimates.
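Site-specific calibration of such sensors is commonly a least-squares fit of sensor output against independently determined water content. A generic sketch of that fit; the paired readings are hypothetical, not the study's data:

```python
def linear_calibration(sensor, reference):
    """Fit reference = a + b * sensor by ordinary least squares,
    returning the intercept a and slope b of the site calibration."""
    n = len(sensor)
    mx = sum(sensor) / n
    my = sum(reference) / n
    sxx = sum((x - mx) ** 2 for x in sensor)
    sxy = sum((x - mx) * (y - my) for x, y in zip(sensor, reference))
    b = sxy / sxx
    a = my - b * mx
    return a, b

# Hypothetical sensor theta_v vs. gravimetrically determined theta_v:
a, b = linear_calibration([0.10, 0.20, 0.30], [0.12, 0.22, 0.32])
```

Corrected readings are then `a + b * raw`, replacing the factory calibration curve.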

  6. Improving the performance of the mass transfer-based reference evapotranspiration estimation approaches through a coupled wavelet-random forest methodology

    NASA Astrophysics Data System (ADS)

    Shiri, Jalal

    2018-06-01

Among different reference evapotranspiration (ETo) modeling approaches, mass transfer-based methods have been less studied. These approaches utilize temperature and wind speed records. The empirical equations proposed in this context, however, generally produce weak simulations unless a local calibration is used to improve their performance. This can be a crucial drawback for those equations when local data for the calibration procedure are scarce. So, application of heuristic methods can be considered as a substitute for improving the performance accuracy of the mass transfer-based approaches. However, given that wind speed records usually have higher variation magnitudes than the other meteorological parameters, coupling a wavelet transform with the heuristic models is necessary. In the present paper, a coupled wavelet-random forest (WRF) methodology was proposed for the first time to improve the performance accuracy of the mass transfer-based ETo estimation approaches using cross-validation data management scenarios at both local and cross-station scales. The obtained results revealed that the new coupled WRF model (with minimum scatter index values of 0.150 and 0.192 for local and external applications, respectively) improved the performance accuracy of the single RF models as well as the empirical equations to a great extent.

  7. Characterization of highly multiplexed monolithic PET / gamma camera detector modules.

    PubMed

    Pierce, L A; Pedemonte, S; DeWitt, D; MacDonald, L; Hunter, W C J; Van Leemput, K; Miyaoka, R

    2018-03-29

    PET detectors use signal multiplexing to reduce the total number of electronics channels needed to cover a given area. Using measured thin-beam calibration data, we tested a principal component based multiplexing scheme for scintillation detectors. The highly-multiplexed detector signal is no longer amenable to standard calibration methodologies. In this study we report results of a prototype multiplexing circuit, and present a new method for calibrating the detector module with multiplexed data. A [Formula: see text] mm³ LYSO scintillation crystal was affixed to a position-sensitive photomultiplier tube with [Formula: see text] position-outputs and one channel that is the sum of the other 64. The 65-channel signal was multiplexed in a resistive circuit, with 65:5 or 65:7 multiplexing. A 0.9 mm beam of 511 keV photons was scanned across the face of the crystal in a 1.52 mm grid pattern in order to characterize the detector response. New methods are developed to reject scattered events and perform depth-estimation to characterize the detector response of the calibration data. Photon interaction position estimation of the testing data was performed using a Gaussian Maximum Likelihood estimator and the resolution and scatter-rejection capabilities of the detector were analyzed. We found that using a 7-channel multiplexing scheme (65:7 compression ratio) with 1.67 mm depth bins had the best performance with a beam-contour of 1.2 mm FWHM (from the 0.9 mm beam) near the center of the crystal and 1.9 mm FWHM near the edge of the crystal. The positioned events followed the expected Beer-Lambert depth distribution. The proposed calibration and positioning method exhibited a scattered photon rejection rate that was a 55% improvement over the summed signal energy-windowing method.
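The Gaussian maximum-likelihood positioning step can be sketched as follows: each calibrated position carries per-channel (mean, variance) response statistics, and an event is assigned to the position maximizing the Gaussian log-likelihood of its observed channel signals. The tiny two-position, two-channel library below is purely illustrative:

```python
import math

def ml_position(signal, library):
    """Pick the calibrated position whose independent-Gaussian channel
    model (per-channel means and variances) maximizes the log-likelihood
    of the observed multiplexed `signal`."""
    best_pos, best_ll = None, -math.inf
    for pos, (means, variances) in library.items():
        ll = sum(-0.5 * math.log(2 * math.pi * v) - (s - m) ** 2 / (2 * v)
                 for s, m, v in zip(signal, means, variances))
        if ll > best_ll:
            best_pos, best_ll = pos, ll
    return best_pos

# Hypothetical two-entry calibration library: position -> (means, variances)
library = {
    (0, 0): ([10.0, 2.0], [4.0, 1.0]),
    (0, 1): ([2.0, 10.0], [4.0, 1.0]),
}
pos = ml_position([9.0, 3.0], library)
```

In the actual detector the library would hold per-depth-bin statistics for every beam position on the 1.52 mm calibration grid.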

  8. Geometric calibration of Colour and Stereo Surface Imaging System of ESA's Trace Gas Orbiter

    NASA Astrophysics Data System (ADS)

    Tulyakov, Stepan; Ivanov, Anton; Thomas, Nicolas; Roloff, Victoria; Pommerol, Antoine; Cremonese, Gabriele; Weigel, Thomas; Fleuret, Francois

    2018-01-01

There are many geometric calibration methods for "standard" cameras. These methods, however, cannot be used for the calibration of telescopes with large focal lengths and complex off-axis optics. Moreover, specialized calibration methods for such telescopes are scarce in the literature. We describe the calibration method that we developed for the Colour and Stereo Surface Imaging System (CaSSIS) telescope, on board the ExoMars Trace Gas Orbiter (TGO). Although our method is described in the context of CaSSIS, with camera-specific experiments, it is general and can be applied to other telescopes. We further encourage re-use of the proposed method by making our calibration code and data available online.

  9. Multicenter Evaluation of a Commercial Cytomegalovirus Quantitative Standard: Effects of Commutability on Interlaboratory Concordance

    PubMed Central

    Shahbazian, M. D.; Valsamakis, A.; Boonyaratanakornkit, J.; Cook, L.; Pang, X. L.; Preiksaitis, J. K.; Schönbrunner, E. R.; Caliendo, A. M.

    2013-01-01

    Commutability of quantitative reference materials has proven important for reliable and accurate results in clinical chemistry. As international reference standards and commercially produced calibration material have become available to address the variability of viral load assays, the degree to which such materials are commutable and the effect of commutability on assay concordance have been questioned. To investigate this, 60 archived clinical plasma samples, which previously tested positive for cytomegalovirus (CMV), were retested by five different laboratories, each using a different quantitative CMV PCR assay. Results from each laboratory were calibrated both with lab-specific quantitative CMV standards (“lab standards”) and with common, commercially available standards (“CMV panel”). Pairwise analyses among laboratories were performed using mean results from each clinical sample, calibrated first with lab standards and then with the CMV panel. Commutability of the CMV panel was determined based on difference plots for each laboratory pair showing plotted values of standards that were within the 95% prediction intervals for the clinical specimens. Commutability was demonstrated for 6 of 10 laboratory pairs using the CMV panel. In half of these pairs, use of the CMV panel improved quantitative agreement compared to use of lab standards. Two of four laboratory pairs for which the CMV panel was noncommutable showed reduced quantitative agreement when that panel was used as a common calibrator. Commutability of calibration material varies across different quantitative PCR methods. Use of a common, commutable quantitative standard can improve agreement across different assays; use of a noncommutable calibrator can reduce agreement among laboratories. PMID:24025907
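The difference-plot criterion described above can be sketched as checking whether the standard's between-assay difference falls inside a ~95% prediction interval built from the clinical-sample differences. This is a simplified interval formula for illustration, not the study's exact statistics:

```python
import math
import statistics

def is_commutable(sample_diffs, standard_diff, k=1.96):
    """sample_diffs: per-specimen quantitative difference (e.g. in log10
    copies/mL) between two assays for the clinical samples.
    standard_diff: the same difference measured on the candidate standard.
    Returns True if the standard behaves like the clinical specimens."""
    m = statistics.mean(sample_diffs)
    s = statistics.stdev(sample_diffs)
    half = k * s * math.sqrt(1 + 1 / len(sample_diffs))  # prediction interval
    return m - half <= standard_diff <= m + half

diffs = [0.1, -0.1, 0.05, -0.05, 0.0]  # hypothetical log-differences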

  10. Application of the Shiono and Knight Method in asymmetric compound channels with different side slopes of the internal wall

    NASA Astrophysics Data System (ADS)

    Alawadi, Wisam; Al-Rekabi, Wisam S.; Al-Aboodi, Ali H.

    2018-03-01

The Shiono and Knight Method (SKM) is widely used to predict the lateral distribution of depth-averaged velocity and boundary shear stress for flows in compound channels. Three calibrating coefficients need to be estimated when applying the SKM, namely the eddy viscosity coefficient (λ), friction factor (f) and secondary flow coefficient (k). There are several tested methods which can satisfactorily be used to estimate λ and f. However, calibrating the secondary flow coefficient k to correctly account for secondary flow effects is still problematic. In this paper, the calibration of secondary flow coefficients is established by employing two approaches to estimate correct values of k for simulating an asymmetric compound channel with different side slopes of the internal wall. The first approach is based on Abril and Knight (2004), who suggest fixed values for the main channel and floodplain regions. In the second approach, the equations developed by Devi and Khatua (2017), which relate the variation of the secondary flow coefficients to the relative depth (β) and width ratio (α), are used. The results indicate that the calibration method developed by Devi and Khatua (2017) is a better choice for calibrating the secondary flow coefficients than the first approach, which assumes a fixed value of k for different flow depths. The results also indicate that the boundary condition based on shear force continuity can successfully be used for simulating rectangular compound channels, while continuity of the depth-averaged velocity and its gradient is the accepted boundary condition in simulations of trapezoidal compound channels. However, the SKM performance for predicting the boundary shear stress over the shear layer region may not be improved merely by imposing suitably calibrated values of the secondary flow coefficients, because of the difficulty of modelling the complex interaction that develops between the flows in the main channel and on the floodplain in this region.

  11. A hybrid method for accurate star tracking using star sensor and gyros.

    PubMed

    Lu, Jiazhen; Yang, Lie; Zhang, Hao

    2017-10-01

    Star tracking is the primary operating mode of star sensors. To improve tracking accuracy and efficiency, a hybrid method using a star sensor and gyroscopes is proposed in this study. In this method, the dynamic conditions of an aircraft are determined first by the estimated angular acceleration. Under low dynamic conditions, the star sensor is used to measure the star vector and the vector difference method is adopted to estimate the current angular velocity. Under high dynamic conditions, the angular velocity is obtained by the calibrated gyros. The star position is predicted based on the estimated angular velocity and calibrated gyros using the star vector measurements. The results of the semi-physical experiment show that this hybrid method is accurate and feasible. In contrast with the star vector difference and gyro-assisted methods, the star position prediction result of the hybrid method is verified to be more accurate in two different cases under the given random noise of the star centroid.
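The vector-difference idea, estimating the rotation rate from two successive star vector measurements, can be sketched geometrically as the angle between the two unit vectors divided by the sample interval. This is a single-axis simplification for illustration, not the paper's full attitude estimator:

```python
import math

def angular_rate(v0, v1, dt):
    """Mean rotation rate (rad/s) implied by two successive unit star
    vectors observed dt seconds apart."""
    dot = sum(a * b for a, b in zip(v0, v1))
    dot = max(-1.0, min(1.0, dot))  # guard acos against rounding error
    return math.acos(dot) / dt

# A star vector rotating 90 degrees in one second:
rate = angular_rate((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), 1.0)
```

Under high dynamics this estimate degrades with star-centroid noise, which is why the hybrid method switches to the calibrated gyros there.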

  12. A Comparison and Calibration of a Wrist-Worn Blood Pressure Monitor for Patient Management: Assessing the Reliability of Innovative Blood Pressure Devices

    PubMed Central

    Melville, Sarah; Teskey, Robert; Philip, Shona; Simpson, Jeremy A; Lutchmedial, Sohrab

    2018-01-01

    Background Clinical guidelines recommend monitoring of blood pressure at home using an automatic blood pressure device for the management of hypertension. Devices are not often calibrated against direct blood pressure measures, leaving health care providers and patients with less reliable information than is possible with current technology. Rigorous assessments of medical devices are necessary for establishing clinical utility. Objective The purpose of our study was 2-fold: (1) to assess the validity and perform iterative calibration of indirect blood pressure measurements by a noninvasive wrist cuff blood pressure device in direct comparison with simultaneously recorded peripheral and central intra-arterial blood pressure measurements and (2) to assess the validity of the measurements thereafter of the noninvasive wrist cuff blood pressure device in comparison with measurements by a noninvasive upper arm blood pressure device to the Canadian hypertension guidelines. Methods The cloud-based blood pressure algorithms for an oscillometric wrist cuff device were iteratively calibrated to direct pressure measures in 20 consented patient participants. We then assessed measurement validity of the device, using Bland-Altman analysis during routine cardiovascular catheterization. Results The precalibrated absolute mean difference between direct intra-arterial to wrist cuff pressure measurements were 10.8 (SD 9.7) for systolic and 16.1 (SD 6.3) for diastolic. The postcalibrated absolute mean difference was 7.2 (SD 5.1) for systolic and 4.3 (SD 3.3) for diastolic pressures. This is an improvement in accuracy of 33% systolic and 73% diastolic with a 48% reduction in the variability for both measures. Furthermore, the wrist cuff device demonstrated similar sensitivity in measuring high blood pressure compared with the direct intra-arterial method. 
The device, when calibrated to direct aortic pressures, demonstrated the potential to reduce a treatment gap in high blood pressure measurements. Conclusions The systolic pressure measurements of the wrist cuff have been iteratively calibrated using gold standard central (ascending aortic) pressure. This improves the accuracy of the indirect measures and potentially reduces the treatment gap. Devices that undergo auscultatory (indirect) calibration for licensing can be greatly improved by additional iterative calibration via intra-arterial (direct) measures of blood pressure. Further clinical trials with repeated use of the device over time are needed to assess the reliability of the device in accordance with current and evolving guidelines for informed decision making in the management of hypertension. Trial Registration ClinicalTrials.gov NCT03015363; https://clinicaltrials.gov/ct2/show/NCT03015363 (Archived by WebCite at http://www.webcitation.org/6xPZgseYS) PMID:29695375
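The agreement statistics reported above (mean difference and its spread against the intra-arterial reference) follow the usual Bland-Altman form. A generic sketch with hypothetical readings:

```python
import statistics

def bland_altman(device, reference):
    """Bias (mean difference) and 95% limits of agreement between paired
    device and reference pressure readings (mmHg)."""
    diffs = [d - r for d, r in zip(device, reference)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical systolic pairs (wrist cuff vs. intra-arterial, mmHg):
bias, lo, hi = bland_altman([122.0, 131.0, 143.0], [120.0, 130.0, 140.0])
```

Iterative calibration, as in the study, aims to shrink both the bias and the width of the limits of agreement.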

  13. Note: An improved calibration system with phase correction for electronic transformers with digital output

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Han-miao, E-mail: chenghanmiao@hust.edu.cn; Li, Hong-bin, E-mail: lihongbin@hust.edu.cn; State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Wuhan 430074

The existing electronic transformer calibration systems employing data acquisition cards cannot satisfy some practical applications, because the calibration systems have phase measurement errors when they work in the mode of receiving external synchronization signals. This paper proposes an improved calibration system scheme with phase correction to improve the phase measurement accuracy. We employ an NI PCI-4474 to design a calibration system, and the system can receive external synchronization signals and reach extremely high accuracy classes. Accuracy verification has been carried out at the China Electric Power Research Institute, and the results demonstrate that the system surpasses accuracy class 0.05. Furthermore, this system has been used to test the harmonics measurement accuracy of all-fiber optical current transformers. In the same process, we used an existing calibration system, and a comparison of the test results is presented. The improved system is suitable for the intended applications.

  14. Optimum data weighting and error calibration for estimation of gravitational parameters

    NASA Technical Reports Server (NTRS)

    Lerch, Francis J.

    1989-01-01

A new technique was developed for weighting data from satellite tracking systems in order to obtain an optimum least-squares solution and an error calibration for the solution parameters. Data sets from optical, electronic, and laser systems on 17 satellites in the Goddard Earth Model-T1 (GEM-T1) were employed in applying this technique to gravity field parameters. GEM-T2 (31 satellites), recently computed as a direct application of the method, is also summarized. The method adjusts the weights so that subset solutions of the data agree with the complete solution to within their error estimates. With the adjusted weights the process provides an automatic calibration of the error estimates for the solution parameters. The data weights derived are generally much smaller than the corresponding weights obtained from nominal values of observation accuracy or residuals. Independent tests show significant improvement for solutions with optimal weighting. The technique is general and may be applied to orbit parameters, station coordinates, or parameters other than the gravity model.
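The core idea, rescaling each data set's assumed error until its formal uncertainty matches the scatter actually observed, can be sketched with a reduced chi-square criterion. This is a strong simplification of the subset-solution technique, for illustration only:

```python
def calibrate_sigmas(residuals_by_set, sigmas, iters=10):
    """Rescale each tracking data set's assumed sigma so that its reduced
    chi-square approaches 1, i.e. the formal error estimate agrees with
    the observed residual scatter. Weights are then 1/sigma^2."""
    sig = list(sigmas)
    for _ in range(iters):
        for i, res in enumerate(residuals_by_set):
            chi2 = sum((r / sig[i]) ** 2 for r in res) / len(res)
            sig[i] *= chi2 ** 0.5
    return sig

# One data set whose nominal sigma (5.0) badly overstates its real scatter:
sig = calibrate_sigmas([[1.0, -1.0, 1.0]], [5.0])
```

The calibrated sigmas yield weights that balance heterogeneous tracking systems in the combined least-squares solution.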

  15. In-flight calibration of the spin axis offset of a fluxgate magnetometer with an electron drift instrument

    NASA Astrophysics Data System (ADS)

    Leinweber, H. K.; Russell, C. T.; Torkar, K.

    2012-10-01

    We show that the spin axis offset of a fluxgate magnetometer can be calibrated with an electron drift instrument (EDI) and that the required input time interval is relatively short. For missions such as Cluster or the upcoming Magnetospheric Multiscale (MMS) mission, the spin axis offset of a fluxgate magnetometer could be determined on an orbit-by-orbit basis. An improvement of existing methods for finding spin axis offsets via comparison of accurate measurements of the field magnitude is presented that additionally matches the gains of the two instruments being compared. The technique has been applied to EDI data from the Cluster Active Archive and to fluxgate magnetometer data processed with calibration files, also from the Cluster Active Archive. The method could prove valuable for the MMS mission because the four MMS spacecraft will only be inside the interplanetary field (where spin axis offsets can be calculated from Alfvénic fluctuations) for short periods of time and during unusual solar wind conditions.
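
    The magnitude-comparison idea can be reduced to a toy calculation. Assuming the gains are already matched (the paper fits gains and offset together, which this sketch omits), a spin-axis offset O turns the magnitude difference into a straight line in the measured z-component, so a linear fit recovers it:

```python
import numpy as np

def spin_axis_offset(b_fgm, bmag_ref):
    """Estimate the spin-axis (z) offset O of a fluxgate magnetometer by
    comparison with reference field magnitudes (e.g. from EDI).
    Simplified sketch with gains assumed matched.  With measured
    bz = Bz + O one has
        |b_fgm|^2 - |B_ref|^2 = 2*O*bz - O^2,
    linear in bz, so the slope of a straight-line fit equals 2*O."""
    y = np.sum(b_fgm ** 2, axis=1) - bmag_ref ** 2
    slope, _ = np.polyfit(b_fgm[:, 2], y, 1)
    return slope / 2.0
```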

  16. Gridded Calibration of Ensemble Wind Vector Forecasts Using Ensemble Model Output Statistics

    NASA Astrophysics Data System (ADS)

    Lazarus, S. M.; Holman, B. P.; Splitt, M. E.

    2017-12-01

    A computationally efficient method is developed that performs gridded post-processing of ensemble wind vector forecasts. An expansive set of idealized WRF model simulations is generated to provide physically consistent high-resolution winds over a coastal domain characterized by an intricate land/water mask. Ensemble model output statistics (EMOS) is used to calibrate the ensemble wind vector forecasts at observation locations. The local EMOS predictive parameters (mean and variance) are then spread throughout the grid utilizing flow-dependent statistical relationships extracted from the downscaled WRF winds. Using data withdrawal and 28 east central Florida stations, the method is applied to one year of 24 h wind forecasts from the Global Ensemble Forecast System (GEFS). Compared to the raw GEFS, the approach improves both the deterministic and probabilistic forecast skill. Analysis of multivariate rank histograms indicates the post-processed forecasts are calibrated. Two downscaling case studies are presented: a quiescent easterly flow event and a frontal passage. Strengths and weaknesses of the approach are presented and discussed.
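
    EMOS fits a predictive normal distribution whose mean is an affine function of the ensemble mean and whose variance is an affine function of the ensemble variance. The sketch below estimates those four coefficients for a scalar wind component by simple moment matching; the paper's estimation (typically CRPS minimization) is more involved:

```python
import numpy as np

def fit_emos(ens, obs):
    """Fit the EMOS predictive distribution N(a + b*m_t, c + d*s2_t),
    where m_t and s2_t are the ensemble mean and variance at time t.
    Moment-matching sketch via least squares, not CRPS minimization."""
    m = ens.mean(axis=1)
    s2 = ens.var(axis=1)
    # mean parameters: linear regression of observations on ensemble mean
    b, a = np.polyfit(m, obs, 1)
    # variance parameters: regress squared residuals on ensemble variance
    r2 = (obs - (a + b * m)) ** 2
    d, c = np.polyfit(s2, r2, 1)
    return a, b, max(c, 1e-6), max(d, 0.0)
```

    The fitted (mean, variance) pairs at station locations are what would then be spread across the grid using the flow-dependent relationships described above.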

  17. Optimal Design of Calibration Signals in Space-Borne Gravitational Wave Detectors

    NASA Technical Reports Server (NTRS)

    Nofrarias, Miquel; Karnesis, Nikolaos; Gibert, Ferran; Armano, Michele; Audley, Heather; Danzmann, Karsten; Diepholz, Ingo; Dolesi, Rita; Ferraioli, Luigi; Ferroni, Valerio; ...

    2016-01-01

    Future space-borne gravitational wave detectors will require a precise definition of calibration signals to ensure the achievement of their design sensitivity. The careful design of the test signals plays a key role in the correct understanding and characterisation of these instruments. In that sense, methods achieving optimal experiment designs must be considered as complementary to the parameter estimation methods being used to determine the parameters describing the system. The relevance of experiment design is particularly significant for the LISA Pathfinder mission, which will spend most of its operation time performing experiments to characterize key technologies for future space-borne gravitational wave observatories. Here we propose a framework to derive the optimal signals, in terms of minimum parameter uncertainty, to be injected into these instruments during their calibration phase. We compare our results with an alternative numerical algorithm which achieves an optimal input signal by iteratively improving an initial guess. We show agreement of both approaches when applied to the LISA Pathfinder case.

  18. Optimal Design of Calibration Signals in Space Borne Gravitational Wave Detectors

    NASA Technical Reports Server (NTRS)

    Nofrarias, Miquel; Karnesis, Nikolaos; Gibert, Ferran; Armano, Michele; Audley, Heather; Danzmann, Karsten; Diepholz, Ingo; Dolesi, Rita; Ferraioli, Luigi; Thorpe, James I.

    2014-01-01

    Future space-borne gravitational wave detectors will require a precise definition of calibration signals to ensure the achievement of their design sensitivity. The careful design of the test signals plays a key role in the correct understanding and characterization of these instruments. In that sense, methods achieving optimal experiment designs must be considered as complementary to the parameter estimation methods being used to determine the parameters describing the system. The relevance of experiment design is particularly significant for the LISA Pathfinder mission, which will spend most of its operation time performing experiments to characterize key technologies for future space-borne gravitational wave observatories. Here we propose a framework to derive the optimal signals, in terms of minimum parameter uncertainty, to be injected into these instruments during their calibration phase. We compare our results with an alternative numerical algorithm which achieves an optimal input signal by iteratively improving an initial guess. We show agreement of both approaches when applied to the LISA Pathfinder case.
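
    The optimal-design idea can be illustrated with a toy Fisher-information calculation. The sketch below assumes a hypothetical first-order instrument response with unknown time constant tau (not the LISA Pathfinder dynamics); it scores candidate sinusoidal injection frequencies by the Fisher information they carry about tau, whose reciprocal is the Cramér-Rao bound on the parameter variance, and picks the most informative one:

```python
import numpy as np

def fisher_information(freq, tau, t, sigma=1.0, amp=1.0, dtau=1e-6):
    """Fisher information about tau from injecting amp*sin(2*pi*freq*t)
    into a first-order low-pass with time constant tau, observed with
    white noise of std sigma.  Uses a numerical derivative of the
    steady-state response with respect to tau."""
    def response(tau_):
        w = 2.0 * np.pi * freq
        gain = amp / np.sqrt(1.0 + (w * tau_) ** 2)
        phase = -np.arctan(w * tau_)
        return gain * np.sin(w * t + phase)
    dy = (response(tau + dtau) - response(tau - dtau)) / (2.0 * dtau)
    return np.sum(dy ** 2) / sigma ** 2

def optimal_frequency(freqs, tau, t):
    """Injection frequency minimizing the Cramér-Rao bound 1/I(tau)."""
    info = [fisher_information(f, tau, t) for f in freqs]
    return freqs[int(np.argmax(info))]
```

    For this toy system the information peaks near the corner frequency 1/(2*pi*tau), which matches the intuition that the stimulus should probe where the response is most sensitive to the parameter.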

  19. An investigation into force-moment calibration techniques applicable to a magnetic suspension and balance system. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Eskins, Jonathan

    1988-01-01

    The problem of determining the forces and moments acting on a wind tunnel model suspended in a Magnetic Suspension and Balance System is addressed. Two calibration methods were investigated for three types of model cores: Alnico, Samarium-Cobalt, and a superconducting solenoid. Both methods involve calibrating the currents in the electromagnetic array against known forces and moments. The first is a static calibration method using calibration weights and a system of pulleys. The other method, dynamic calibration, involves oscillating the model and using its inertia to provide calibration forces and moments. Static calibration data, found to produce the most reliable results, are presented for three degrees of freedom at 0, 15, and -10 deg angle of attack. Theoretical calculations are hampered by the inability to represent iron-cored electromagnets. Dynamic calibrations, despite being quicker and easier to perform, are not as accurate as static calibrations. Data for dynamic calibrations at 0 and 15 deg are compared with the relevant static data acquired. Distortion of oscillation traces is cited as a major source of error in dynamic calibrations.

  20. Augmenting epidemiological models with point-of-care diagnostics data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pullum, Laura L.; Ramanathan, Arvind; Nutaro, James J.

    Although adoption of newer Point-of-Care (POC) diagnostics is increasing, there is a significant challenge in using POC diagnostics data to improve epidemiological models. In this work, we propose a method to process zip-code level POC datasets and apply these processed data to calibrate an epidemiological model. We specifically develop a calibration algorithm using simulated annealing and calibrate a parsimonious equation-based model of modified Susceptible-Infected-Recovered (SIR) dynamics. The results show that parsimonious models are remarkably effective in predicting the dynamics observed in the number of infected patients, and our calibration algorithm is sufficiently capable of predicting peak loads observed in POC diagnostics data while staying within reasonable and empirical parameter ranges reported in the literature. Additionally, we explore the future use of the calibrated values by testing the correlation between peak load and population density from Census data. Our results show that linearity assumptions for the relationships among various factors can be misleading; therefore, further data sources and analysis are needed to identify relationships between additional parameters and existing calibrated ones. As a result, calibration approaches such as ours can determine the values of newly added parameters along with existing ones and enable policy-makers to make better multi-scale decisions.

  1. Augmenting epidemiological models with point-of-care diagnostics data

    DOE PAGES

    Pullum, Laura L.; Ramanathan, Arvind; Nutaro, James J.; ...

    2016-04-20

    Although adoption of newer Point-of-Care (POC) diagnostics is increasing, there is a significant challenge in using POC diagnostics data to improve epidemiological models. In this work, we propose a method to process zip-code level POC datasets and apply these processed data to calibrate an epidemiological model. We specifically develop a calibration algorithm using simulated annealing and calibrate a parsimonious equation-based model of modified Susceptible-Infected-Recovered (SIR) dynamics. The results show that parsimonious models are remarkably effective in predicting the dynamics observed in the number of infected patients, and our calibration algorithm is sufficiently capable of predicting peak loads observed in POC diagnostics data while staying within reasonable and empirical parameter ranges reported in the literature. Additionally, we explore the future use of the calibrated values by testing the correlation between peak load and population density from Census data. Our results show that linearity assumptions for the relationships among various factors can be misleading; therefore, further data sources and analysis are needed to identify relationships between additional parameters and existing calibrated ones. As a result, calibration approaches such as ours can determine the values of newly added parameters along with existing ones and enable policy-makers to make better multi-scale decisions.
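
    The combination described above, a parsimonious SIR model calibrated by simulated annealing, can be sketched in a few lines. This is a generic illustration with assumed population size, cooling schedule, and proposal width, not the authors' exact algorithm or data:

```python
import numpy as np

def sir_curve(beta, gamma, n_days=60, n_pop=10000.0, i0=10.0):
    """Discrete-time SIR model; returns the infected time series."""
    s, i, r = n_pop - i0, i0, 0.0
    out = []
    for _ in range(n_days):
        new_inf = beta * s * i / n_pop
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        out.append(i)
    return np.array(out)

def calibrate_sa(observed, n_steps=4000, seed=1):
    """Simulated-annealing search for (beta, gamma) minimizing the mean
    squared error against an observed infected curve."""
    rng = np.random.default_rng(seed)
    theta = np.array([0.5, 0.5])                  # initial guess
    cost = np.mean((sir_curve(*theta) - observed) ** 2)
    best, best_cost = theta.copy(), cost
    for k in range(n_steps):
        temp = 1.0 * (1.0 - k / n_steps) + 1e-3   # linear cooling schedule
        cand = np.clip(theta + rng.normal(scale=0.02, size=2), 0.01, 1.0)
        c = np.mean((sir_curve(*cand) - observed) ** 2)
        # accept improvements always, uphill moves with Metropolis probability
        if c < cost or rng.random() < np.exp(-(c - cost) / (temp * (cost + 1e-12))):
            theta, cost = cand, c
            if c < best_cost:
                best, best_cost = cand.copy(), c
    return best
```

    Clipping the proposals keeps the search inside a plausible parameter range, mirroring the abstract's point about staying within empirical ranges from the literature.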

  2. Data multiplexing in radio interferometric calibration

    NASA Astrophysics Data System (ADS)

    Yatawatta, Sarod; Diblen, Faruk; Spreeuw, Hanno; Koopmans, L. V. E.

    2018-03-01

    New and upcoming radio interferometers will produce unprecedented amount of data that demand extremely powerful computers for processing. This is a limiting factor due to the large computational power and energy costs involved. Such limitations restrict several key data processing steps in radio interferometry. One such step is calibration where systematic errors in the data are determined and corrected. Accurate calibration is an essential component in reaching many scientific goals in radio astronomy and the use of consensus optimization that exploits the continuity of systematic errors across frequency significantly improves calibration accuracy. In order to reach full consensus, data at all frequencies need to be calibrated simultaneously. In the SKA regime, this can become intractable if the available compute agents do not have the resources to process data from all frequency channels simultaneously. In this paper, we propose a multiplexing scheme that is based on the alternating direction method of multipliers with cyclic updates. With this scheme, it is possible to simultaneously calibrate the full data set using far fewer compute agents than the number of frequencies at which data are available. We give simulation results to show the feasibility of the proposed multiplexing scheme in simultaneously calibrating a full data set when a limited number of compute agents are available.
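
    The multiplexing idea, reaching consensus across all frequencies while updating only as many frequencies per iteration as there are compute agents, can be shown on a toy quadratic problem. This is a minimal consensus-ADMM sketch with cyclic updates, far simpler than the paper's calibration cost function:

```python
import numpy as np

def consensus_admm_cyclic(d, n_agents=3, rho=1.0, n_iter=2000):
    """Minimize sum_f ||x_f - d_f||^2 subject to x_f = z for all f,
    updating only `n_agents` frequencies per ADMM iteration in cyclic
    order (mimicking fewer compute agents than frequencies).
    The consensus solution is z = mean(d)."""
    n_freq = len(d)
    x = np.zeros(n_freq)
    u = np.zeros(n_freq)          # scaled dual variables
    z = 0.0
    for it in range(n_iter):
        active = [(it * n_agents + j) % n_freq for j in range(n_agents)]
        for f in active:
            # x-update: argmin ||x - d_f||^2 + (rho/2)*(x - z + u_f)^2
            x[f] = (2.0 * d[f] + rho * (z - u[f])) / (2.0 + rho)
        z = np.mean(x + u)        # z-update: consensus average
        for f in active:
            u[f] += x[f] - z      # dual update for the active subset only
    return z, x
```

    Each "agent" here handles one frequency at a time; over many cycles every frequency is revisited and the per-frequency solutions x_f are pulled to the global consensus z, just as the calibration solutions are pulled to agree across frequency.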

  3. The calibration methods for Multi-Filter Rotating Shadowband Radiometer: a review

    NASA Astrophysics Data System (ADS)

    Chen, Maosi; Davis, John; Tang, Hongzhao; Ownby, Carolyn; Gao, Wei

    2013-09-01

    The continuous, over two-decade data record from the Multi-Filter Rotating Shadowband Radiometer (MFRSR) is ideal for climate research, which requires timely and accurate information on important atmospheric components such as gases, aerosols, and clouds. Except for parameters derived from MFRSR measurement ratios, which are not impacted by calibration error, most applications require accurate calibration factor(s), angular correction, and spectral response function(s) from calibration. Although a laboratory lamp (or reference) calibration can provide all the information needed to convert the instrument readings to actual radiation, in situ calibration methods are implemented routinely (daily) to fill the gaps between lamp calibrations. In this paper, the basic structure and the data collection and pretreatment of the MFRSR are described. The laboratory lamp calibration and its limitations are summarized. The cloud screening algorithms for MFRSR data are presented. The in situ calibration methods, namely the standard Langley method and its variants, the ratio-Langley method, the general method, Alexandrov's comprehensive method, and Chen's multi-channel method, are outlined. None of these methods fits all situations, because each assumes that some property, such as aerosol optical depth (AOD), total optical depth (TOD), precipitable water vapor (PWV), effective size of aerosol particles, or the Angstrom coefficient, is invariant over time; such invariance is not universal, and some of these conditions rarely occur. In practice, daily calibration factors derived from these methods should be smoothed to restrain error.
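
    The standard Langley method mentioned above follows directly from the Beer-Lambert law, ln V(m) = ln V0 - m * tau: regressing the log of the instrument signal on airmass m during a stable period and extrapolating to m = 0 yields the top-of-atmosphere calibration constant V0. A minimal sketch:

```python
import numpy as np

def langley_calibration(airmass, voltage):
    """Standard Langley plot: fit ln V = ln V0 - m * tau and return the
    extrapolated top-of-atmosphere constant V0 and the optical depth tau."""
    slope, intercept = np.polyfit(airmass, np.log(voltage), 1)
    return np.exp(intercept), -slope
```

    The method's key assumption is exactly the one criticized in the abstract: the total optical depth tau must stay constant over the range of airmasses used in the fit.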

  4. VIIRS reflective solar bands on-orbit calibration and performance: a three-year update

    NASA Astrophysics Data System (ADS)

    Sun, Junqiang; Wang, Menghua

    2014-11-01

    The on-orbit calibration of the reflective solar bands (RSBs) of VIIRS and results from the analysis of the first three years of mission data are presented. The VIIRS solar diffuser (SD) and lunar calibration methodology are discussed, and the calibration coefficients, called F-factors, for the RSBs are given for the latest revision. The coefficients derived from the two calibrations are compared and the uncertainties of the calibrations are discussed. Numerous improvements have been made, with the major improvement to the calibration result coming mainly from the improved bidirectional reflectance factor (BRF) of the SD and the vignetting functions of both the SD screen and the sun-view screen. The very clean results, devoid of many previously known noises and artifacts, assure that VIIRS has performed well for the three years on orbit since launch, and in particular that the solar diffuser stability monitor (SDSM) is functioning essentially without flaws. The SD degradation, or H-factors, for the most part shows the expected decline, except for a surprising rise on day 830 lasting for 75 days that signals a new degradation phenomenon. Nevertheless, the SDSM and the calibration methodology have successfully captured the SD degradation for RSB calibration. The overall improvement has the most significant and direct impact on the ocean color products, which demand high accuracy from RSB observations.

  5. Short-Chain Polysaccharide Analysis in Ethanol-Water Solutions.

    PubMed

    Yan, Xun

    2017-07-01

    This study demonstrates that short-chain polysaccharides, or oligosaccharides, could be sufficiently separated with hydrophilic interaction LC (HILIC) conditions and quantified by evaporative light-scattering detection (ELSD). The multianalyte calibration approach improved the efficiency of calibrating the nonlinear detector response. The method allowed easy quantification of short-chain carbohydrates. Using the HILIC method, the oligosaccharide solubility and its profile in water/alcohol solutions at room temperature were able to be quantified. The results showed that the polysaccharide solubility in ethanol-water solutions decreased as ethanol content increased. The results also showed oligosaccharides to have minimal solubility in pure ethanol. In a saturated maltodextrin ethanol (80%) solution, oligosaccharide components with a degree of polymerization >12 were practically insoluble and contributed less than 0.2% to the total solute dry weight. The HILIC-ELSD method allows for the identification and quantification of low-MW carbohydrates individually and served as an alternative method to current gel permeation chromatography procedures.

  6. Simple transfer calibration method for a Cimel Sun-Moon photometer: calculating lunar calibration coefficients from Sun calibration constants.

    PubMed

    Li, Zhengqiang; Li, Kaitao; Li, Donghui; Yang, Jiuchun; Xu, Hua; Goloub, Philippe; Victori, Stephane

    2016-09-20

    The Cimel new technologies allow both daytime and nighttime aerosol optical depth (AOD) measurements. Although the daytime AOD calibration protocols are well established, accurate and simple nighttime calibration is still a challenging task. Standard lunar-Langley and intercomparison calibration methods both require specific conditions in terms of atmospheric stability and site condition. Additionally, the lunar irradiance model also has some known limits on its uncertainty. This paper presents a simple calibration method that transfers the direct-Sun calibration constant, V0,Sun, to the lunar irradiance calibration coefficient, CMoon. Our approach is a pure calculation method, independent of site limits, e.g., Moon phase. The method is also not affected by the lunar irradiance model limitations, which is the largest error source of traditional calibration methods. Besides, this new transfer calibration approach is easy to use in the field since CMoon can be obtained directly once V0,Sun is known. Error analysis suggests that the average uncertainty of CMoon over the 440-1640 nm bands obtained with the transfer method is 2.4%-2.8%, depending on the V0,Sun approach (Langley or intercomparison), which is comparable with that of lunar-Langley approach, theoretically. In this paper, the Sun-Moon transfer and the Langley methods are compared based on site measurements in Beijing, and the day-night measurement continuity and performance are analyzed.

  7. Features calibration of the dynamic force transducers

    NASA Astrophysics Data System (ADS)

    Prilepko, M. Yu; Lysenko, V. G.

    2018-04-01

    The article discusses calibration methods for dynamic force measuring instruments. The relevance of this work stems from the need to validly determine the metrological characteristics of dynamic force transducers, taking into account their intended application. The aim of this work is to justify the choice of a calibration method that determines the metrological characteristics of dynamic force transducers under simulated operating conditions, so that their suitability for the intended use can be established. The following tasks are solved: the mathematical model and the main measurement equation for calibrating dynamic force transducers by load weight are constructed, and the main uncertainty budget components of the calibration are defined. A new method for calibrating dynamic force transducers is proposed that uses a reference "force-deformation" converter based on a calibrated elastic element whose deformation is measured by a laser interferometer. The mathematical model and the main measurement equation of the proposed method are constructed. It is shown that a calibration method based on laser-interferometer measurements of the calibrated elastic element's deformation excludes, or considerably reduces, the uncertainty budget components inherent in the load-weight method.

  8. MODIS Instrument Operation and Calibration Improvements

    NASA Technical Reports Server (NTRS)

    Xiong, X.; Angal, A.; Madhavan, S.; Link, D.; Geng, X.; Wenny, B.; Wu, A.; Chen, H.; Salomonson, V.

    2014-01-01

    Terra and Aqua MODIS have successfully operated for over 14 and 12 years since their respective launches in 1999 and 2002. The MODIS on-orbit calibration is performed using a set of on-board calibrators, which include a solar diffuser for calibrating the reflective solar bands (RSB) and a blackbody for the thermal emissive bands (TEB). On-orbit changes in the sensor responses as well as key performance parameters are monitored using the measurements of these on-board calibrators. This paper provides an overview of MODIS on-orbit operation and calibration activities, and instrument long-term performance. It presents a brief summary of the calibration enhancements made in the latest MODIS data collection 6 (C6). Future improvements in the MODIS calibration and their potential applications to the S-NPP VIIRS are also discussed.

  9. Experimental investigation of the response of an amorphous silicon EPID to intensity modulated radiotherapy beams.

    PubMed

    Greer, Peter B; Vial, Philip; Oliver, Lyn; Baldock, Clive

    2007-11-01

    The aim of this work was to experimentally determine the difference in response of an amorphous silicon (a-Si) electronic portal imaging device (EPID) to the open and multileaf collimator (MLC) transmitted beam components of intensity modulated radiation therapy (IMRT) beams. EPID dose response curves were measured for open and MLC transmitted (MLCtr) 10 x 10 cm2 beams at central axis and with off axis distance using a shifting field technique. The EPID signal was obtained by replacing the flood-field correction with a pixel sensitivity variation matrix correction. This signal, which includes energy-dependent response, was then compared to ion-chamber measurements. An EPID calibration method to remove the effect of beam energy variations on EPID response was developed for IMRT beams. This method uses the component of open and MLCtr fluence to an EPID pixel calculated from the MLC delivery file and applies separate radially dependent calibration factors for each component. The calibration procedure does not correct for scatter differences between ion chamber in water measurements and EPID response; these must be accounted for separately with a kernel-based approach or similar method. The EPID response at central axis for the open beam was found to be 1.28 +/- 0.03 of the response for the MLCtr beam, with the ratio increasing to 1.39 at 12.5 cm off axis. The EPID response to MLCtr radiation did not change with off-axis distance. Filtering the beam with copper plates to reduce the beam energy difference between open and MLCtr beams was investigated; however, these were not effective at reducing EPID response differences. The change in EPID response for uniform sliding window IMRT beams with MLCtr dose components from 0.3% to 69% was predicted to within 2.3% using the separate EPID response calibration factors for each dose component. A clinical IMRT image calibrated with this method differed by nearly 30% in high MLCtr regions from an image calibrated with an open beam calibration factor only. Accounting for the difference in EPID response to open and MLCtr radiation should improve IMRT dosimetry with a-Si EPIDs.
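
    The two-component calibration described above amounts to dividing each pixel's raw signal by an effective response formed from the per-pixel open and MLC-transmitted fluence fractions and their separate calibration factors (about 1.28 versus 1.00 on axis, per the measured ratio). A minimal per-pixel sketch, omitting the radial dependence and the separate scatter corrections the paper requires:

```python
def calibrate_pixel(signal, f_open, f_mlc, c_open, c_mlc):
    """Convert a raw EPID pixel signal to dose using separate calibration
    factors for the open and MLC-transmitted fluence components.
    f_open + f_mlc = 1 are the fluence fractions from the MLC delivery
    file; c_open and c_mlc are the component response factors.
    Illustrative sketch of the two-component calibration idea only."""
    effective_response = f_open * c_open + f_mlc * c_mlc
    return signal / effective_response
```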

  10. Robust radio interferometric calibration using the t-distribution

    NASA Astrophysics Data System (ADS)

    Kazemi, S.; Yatawatta, S.

    2013-10-01

    A major stage of radio interferometric data processing is calibration or the estimation of systematic errors in the data and the correction for such errors. A stochastic error (noise) model is assumed, and in most cases, this underlying model is assumed to be Gaussian. However, outliers in the data due to interference or due to errors in the sky model would have adverse effects on processing based on a Gaussian noise model. Most of the shortcomings of calibration such as the loss in flux or coherence, and the appearance of spurious sources, could be attributed to the deviations of the underlying noise model. In this paper, we propose to improve the robustness of calibration by using a noise model based on Student's t-distribution. Student's t-noise is a special case of Gaussian noise when the variance is unknown. Unlike Gaussian-noise-model-based calibration, traditional least-squares minimization would not directly extend to a case when we have a Student's t-noise model. Therefore, we use a variant of the expectation-maximization algorithm, called the expectation-conditional maximization either algorithm, when we have a Student's t-noise model and use the Levenberg-Marquardt algorithm in the maximization step. We give simulation results to show the robustness of the proposed calibration method as opposed to traditional Gaussian-noise-model-based calibration, especially in preserving the flux of weaker sources that are not included in the calibration model.
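
    The expectation step of a Student's t noise model gives each datum a weight w_i = (nu + 1) / (nu + r_i^2 / sigma^2), so outliers are automatically down-weighted. The sketch below applies this EM/IRLS idea to a scalar-gain toy problem; the paper's ECME algorithm with Levenberg-Marquardt over full direction-dependent gains is much richer:

```python
import numpy as np

def robust_gain(model, data, nu=2.0, n_iter=30):
    """Estimate the scalar gain g in data = g*model + noise under a
    Student's t noise model, via EM / iteratively reweighted least
    squares.  Toy sketch of t-distribution robust calibration."""
    g = np.dot(model, data) / np.dot(model, model)   # least-squares init
    sigma2 = np.mean((data - g * model) ** 2)
    for _ in range(n_iter):
        r = data - g * model
        w = (nu + 1.0) / (nu + r ** 2 / sigma2)      # E-step: latent weights
        g = np.sum(w * model * data) / np.sum(w * model ** 2)   # CM-step: gain
        sigma2 = np.sum(w * (data - g * model) ** 2) / len(data)  # CM-step: scale
    return g
```

    With a fraction of grossly corrupted samples (standing in for interference or unmodeled sources), the Gaussian least-squares gain is biased while the t-based estimate stays close to the truth, mirroring the flux-preservation argument in the abstract.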

  11. Multielevation calibration of frequency-domain electromagnetic data

    USGS Publications Warehouse

    Minsley, Burke J.; Kass, M. Andy; Hodges, Greg; Smith, Bruce D.

    2014-01-01

    Systematic calibration errors must be taken into account because they can substantially impact the accuracy of inverted subsurface resistivity models derived from frequency-domain electromagnetic data, resulting in potentially misleading interpretations. We have developed an approach that uses data acquired at multiple elevations over the same location to assess calibration errors. A significant advantage is that this method does not require prior knowledge of subsurface properties from borehole or ground geophysical data (though these can be readily incorporated if available), and it is therefore well suited to remote areas. The multielevation data were used to solve for calibration parameters and a single subsurface resistivity model that are self-consistent over all elevations. The deterministic and Bayesian formulations of the multielevation approach illustrate parameter sensitivity and uncertainty using synthetic- and field-data examples. Multiplicative calibration errors (gain and phase) were found to be better resolved at high frequencies and when data were acquired over a relatively conductive area, whereas additive errors (bias) were reasonably resolved over conductive and resistive areas at all frequencies. The Bayesian approach outperformed the deterministic approach when estimating calibration parameters using multielevation data at a single location; however, joint analysis of multielevation data at multiple locations using the deterministic algorithm yielded the most accurate estimates of calibration parameters. Inversion results using calibration-corrected data revealed marked improvement in misfit, lending added confidence to the interpretation of these models.

  12. Pulse Transit Time Based Continuous Cuffless Blood Pressure Estimation: A New Extension and A Comprehensive Evaluation.

    PubMed

    Ding, Xiaorong; Yan, Bryan P; Zhang, Yuan-Ting; Liu, Jing; Zhao, Ni; Tsang, Hon Ki

    2017-09-14

    Cuffless techniques enable continuous blood pressure (BP) measurement in an unobtrusive manner, and thus have the potential to revolutionize the conventional cuff-based approaches. This study extends the pulse transit time (PTT) based cuffless BP measurement method by introducing a new indicator, the photoplethysmogram (PPG) intensity ratio (PIR). The performance of the models with PTT and PIR was comprehensively evaluated in comparison with six models based on PTT alone. The validation was conducted on 33 subjects with and without hypertension, at rest and under various maneuvers with induced BP changes, and over an extended calibration interval. The results showed that, compared to the PTT models, the proposed methods achieved better accuracy in each subject group at rest and over a 24-hour calibration interval. Although the BP estimation errors under dynamic maneuvers and over the extended calibration interval increased significantly for all methods, the proposed methods still outperformed the compared methods in the latter situation. These findings suggest that an additional BP-related indicator beyond PTT has added value for improving the accuracy of cuffless BP measurement. This study also offers insights into future research in cuffless BP measurement for tracking dynamic BP changes and over extended periods of time.
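
    The "added indicator" argument can be illustrated with a generic regression comparison: fit BP on a PTT-derived feature alone, then on PTT plus a PIR-derived feature, and compare the fit error. The feature forms and coefficients below are assumed for illustration and are not the paper's model:

```python
import numpy as np

def fit_bp(features, bp):
    """Linear regression of BP on the given feature columns (with
    intercept); returns the coefficients and in-sample RMSE."""
    X = np.column_stack([features, np.ones(len(bp))])
    coef, *_ = np.linalg.lstsq(X, bp, rcond=None)
    rmse = np.sqrt(np.mean((X @ coef - bp) ** 2))
    return coef, rmse
```

    On data where BP genuinely depends on both indicators, the two-feature model fits markedly better, which is the qualitative point the study makes about PTT plus PIR.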

  13. Mathematical Model and Calibration Experiment of a Large Measurement Range Flexible Joints 6-UPUR Six-Axis Force Sensor

    PubMed Central

    Zhao, Yanzhi; Zhang, Caifeng; Zhang, Dan; Shi, Zhongpan; Zhao, Tieshi

    2016-01-01

    Improving the accuracy and enlarging the measuring range of six-axis force sensors, for wider applications in aircraft landing, rocket thrust, and spacecraft docking testing experiments, has become an urgent objective. However, it is still difficult to achieve both high accuracy and a large measuring range with traditional parallel six-axis force sensors due to the influence of the gaps and friction of the joints. Therefore, to overcome these limitations, this paper proposes a 6-Universal-Prismatic-Universal-Revolute (UPUR) jointed parallel mechanism with flexible joints to develop a large measurement range six-axis force sensor. The structural characteristics of the sensor are analyzed in comparison with a traditional parallel sensor based on the Stewart platform. The force transfer relation of the sensor is deduced, and the force Jacobian matrix is obtained using screw theory in two cases: the ideal state, and the state in which the flexibility of each flexible joint is considered. The prototype and loading calibration system are designed and developed. The K value method and the least squares method are used to process the experimental data, and linearity errors of kind I and kind II are obtained. The experimental results show that the calibration error of the K value method is more than 13.4%, while the calibration error of the least squares method is 2.67%. The experimental results prove the feasibility of the sensor and the correctness of the theoretical analysis, which are expected to be adopted in practical applications. PMID:27529244
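
    The least-squares step in such a loading calibration is conceptually simple: with applied wrenches F (forces and moments) and recorded branch signals S from the loading experiments, solve for the 6x6 calibration matrix C with F ≈ C s per sample. A generic sketch, not the paper's exact K value or error-classification procedure:

```python
import numpy as np

def calibrate_matrix(S, F):
    """Least-squares 6x6 calibration matrix C such that F ≈ S @ C.T,
    i.e. wrench = C @ signals per sample.
    S: (n, 6) sensor branch outputs; F: (n, 6) applied wrenches."""
    Ct, *_ = np.linalg.lstsq(S, F, rcond=None)
    return Ct.T

def relative_error(C, S, F):
    """Overall relative calibration error of the fitted matrix."""
    return np.linalg.norm(S @ C.T - F) / np.linalg.norm(F)
```

    A matrix fitted over all loading samples at once averages out random loading errors, which is consistent with the least squares method outperforming the K value method in the reported experiment.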

  14. Mapping Capacitive Coupling Among Pixels in a Sensor Array

    NASA Technical Reports Server (NTRS)

    Seshadri, Suresh; Cole, David M.; Smith, Roger M.

    2010-01-01

    An improved method of mapping the capacitive contribution to cross-talk among pixels in an imaging array of sensors (typically, an imaging photodetector array) has been devised for use in calibrating and/or characterizing such an array. The method involves a sequence of resets of subarrays of pixels to specified voltages and measurement of the voltage responses of neighboring non-reset pixels.
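
    The reset-and-read idea can be sketched as follows: reset pixels on a lattice sparse enough that their neighborhoods do not overlap, read the response of the non-reset neighbors, and average the neighborhoods to estimate a coupling kernel. The 3x3 kernel, array size, and lattice spacing below are illustrative assumptions, not the specifics of the NASA method:

```python
import numpy as np

def measure_coupling_kernel(apply_reset, n=64, spacing=8):
    """Estimate the 3x3 capacitive-coupling kernel of an n x n pixel
    array.  apply_reset(mask) must return the array's voltage response
    to resetting the masked pixels by a unit step.  Pixels are reset on
    a sparse lattice so neighborhoods do not overlap, then the 3x3
    neighborhoods around each reset pixel are averaged."""
    mask = np.zeros((n, n), dtype=bool)
    mask[spacing // 2::spacing, spacing // 2::spacing] = True
    resp = apply_reset(mask)
    patches = []
    for i, j in zip(*np.nonzero(mask)):
        if 1 <= i < n - 1 and 1 <= j < n - 1:
            patches.append(resp[i - 1:i + 2, j - 1:j + 2])
    return np.mean(patches, axis=0)
```

    In a real detector, apply_reset would command the readout electronics; here it can be any callable, which also makes the procedure easy to test against a simulated array.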

  15. Use of Inverse-Modeling Methods to Improve Ground-Water-Model Calibration and Evaluate Model-Prediction Uncertainty, Camp Edwards, Cape Cod, Massachusetts

    USGS Publications Warehouse

    Walter, Donald A.; LeBlanc, Denis R.

    2008-01-01

    Historical weapons testing and disposal activities at Camp Edwards, which is located on the Massachusetts Military Reservation, western Cape Cod, have resulted in the release of contaminants into an underlying sand and gravel aquifer that is the sole source of potable water to surrounding communities. Ground-water models have been used at the site to simulate advective transport in the aquifer in support of field investigations. Reasonable models developed by different groups and calibrated by trial and error often yield different predictions of advective transport, and the predictions lack quantitative measures of uncertainty. A recently (2004) developed regional model of western Cape Cod, modified to include the sensitivity and parameter-estimation capabilities of MODFLOW-2000, was used in this report to evaluate the utility of inverse (statistical) methods to (1) improve model calibration and (2) assess model-prediction uncertainty. Simulated heads and flows were most sensitive to recharge and to the horizontal hydraulic conductivity of the Buzzards Bay and Sandwich Moraines and the Buzzards Bay and northern parts of the Mashpee outwash plains. Conversely, simulated heads and flows were much less sensitive to vertical hydraulic conductivity. Parameter estimation (inverse calibration) improved the match to observed heads and flows; the absolute mean residual for heads improved by 0.32 feet and the absolute mean residual for streamflows improved by about 0.2 cubic feet per second. Advective-transport predictions in Camp Edwards generally were most sensitive to the parameters with the highest precision (lowest coefficients of variation), indicating that the numerical model is adequate for evaluating prediction uncertainties in and around Camp Edwards. 
The incorporation of an advective-transport observation, representing the leading edge of a contaminant plume that had been difficult to match by using trial-and-error calibration, improved the match between an observed and simulated plume path; however, a modified representation of local geology was needed to simultaneously maintain a reasonable calibration to heads and flows and to the plume path. Advective-transport uncertainties were expressed as about 68-, 95-, and 99-percent confidence intervals on three-dimensional simulated particle positions. The confidence intervals can be graphically represented as ellipses around individual particle positions in the X-Y (geographic) plane and in the X-Z or Y-Z (vertical) planes. The merging of individual ellipses allows uncertainties on forward particle tracks to be displayed in map or cross-sectional view as a cone of uncertainty around a simulated particle path; uncertainties on reverse particle-track endpoints - representing simulated recharge locations - can be geographically displayed as areas at the water table around the discrete particle endpoints. This information gives decision makers insight into the level of confidence they can have in particle-tracking results and can assist them in the efficient use of available field resources.
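The inverse (statistical) calibration this record describes can be sketched generically. The snippet below is a minimal Gauss-Newton parameter-estimation loop of the kind performed by MODFLOW-2000's parameter-estimation capability; the residual function and the finite-difference sensitivities are illustrative assumptions, not the report's actual implementation.

```python
import numpy as np

def gauss_newton(residual, x0, n_iter=20, h=1e-6):
    """Generic Gauss-Newton parameter estimation: iteratively linearize
    the residual vector r(x) (e.g., observed minus simulated heads and
    flows) and solve the normal equations for a parameter update.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = residual(x)
        # Finite-difference Jacobian (a hypothetical stand-in for the
        # model's analytically computed sensitivities).
        J = np.empty((r.size, x.size))
        for j in range(x.size):
            xp = x.copy()
            xp[j] += h
            J[:, j] = (residual(xp) - r) / h
        dx, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + dx
    return x
```

For a linear model the loop converges in one step; for a groundwater model each residual evaluation would be a full forward simulation.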

  16. Improving calibration and validation of cosmic-ray neutron sensors in the light of spatial sensitivity

    NASA Astrophysics Data System (ADS)

    Schrön, Martin; Köhli, Markus; Scheiffele, Lena; Iwema, Joost; Bogena, Heye R.; Lv, Ling; Martini, Edoardo; Baroni, Gabriele; Rosolem, Rafael; Weimar, Jannis; Mai, Juliane; Cuntz, Matthias; Rebmann, Corinna; Oswald, Sascha E.; Dietrich, Peter; Schmidt, Ulrich; Zacharias, Steffen

    2017-10-01

In the last few years the method of cosmic-ray neutron sensing (CRNS) has gained popularity among hydrologists, physicists, and land-surface modelers. The sensor provides continuous soil moisture data, averaged over several hectares and tens of decimeters in depth. However, the signal may still contain unidentified features of hydrological processes, and many calibration datasets are often required in order to find reliable relations between neutron intensity and water dynamics. Recent insights into environmental neutrons accurately described the spatial sensitivity of the sensor and thus allow the contribution of individual sample locations to the CRNS signal to be quantified. Consequently, data points in calibration and validation datasets should be averaged using a more physically based weighting approach. In this work, a revised sensitivity function is used to calculate weighted averages of point data. The function differs from the conventional simple exponential in its pronounced sensitivity to the first few meters around the probe and in its dependence on air pressure, air humidity, soil moisture, and vegetation. The approach is extensively tested at six distinct monitoring sites: two sites with multiple calibration datasets and four sites with continuous time series datasets. In all cases, the revised averaging method improved the performance of the CRNS products. The revised approach further helped to reveal hidden hydrological processes which otherwise remained unexplained in the data or were lost in the process of overcalibration. The presented weighting approach increases the overall accuracy of CRNS products and will have an impact on all their applications in agriculture, hydrology, and modeling.
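The weighted-averaging idea can be sketched as below. For illustration a simple exponential radial weight is used with a hypothetical decay length; the paper's revised sensitivity function is more sharply peaked near the probe and additionally depends on air pressure, humidity, soil moisture, and vegetation.

```python
import math

def weighted_soil_moisture(samples, decay_length=100.0):
    """Distance-weighted average of point soil-moisture samples.

    samples: iterable of (r, theta) pairs, where r is the distance from
    the probe in meters and theta the volumetric soil moisture there.
    decay_length is a hypothetical radial scale standing in for the
    revised sensitivity function described in the record.
    """
    num = den = 0.0
    for r, theta in samples:
        w = math.exp(-r / decay_length)  # simplified radial weight
        num += w * theta
        den += w
    return num / den
```

Nearby samples dominate the average, so a wet spot right at the probe influences the calibration far more than one hundreds of meters away.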

  17. Improved atmospheric effect elimination method for the roughness estimation of painted surfaces.

    PubMed

    Zhang, Ying; Xuan, Jiabin; Zhao, Huijie; Song, Ping; Zhang, Yi; Xu, Wujian

    2018-03-01

We propose a method for eliminating the atmospheric effect in polarimetric imaging remote sensing by using polarimetric imagers to simultaneously detect ground targets and skylight; the method requires no calibration targets. In addition, calculation efficiency is improved by the skylight division method without losing estimation accuracy. Outdoor experiments are performed to obtain the polarimetric bidirectional reflectance distribution functions of painted surfaces and skylight under different weather conditions. Finally, the roughness of the painted surfaces is estimated. We find that the estimation accuracy with the proposed method is 6% in cloudy weather, versus 30.72% without atmospheric effect elimination.

  18. Research on the calibration methods of the luminance parameter of radiation luminance meters

    NASA Astrophysics Data System (ADS)

    Cheng, Weihai; Huang, Biyong; Lin, Fangsheng; Li, Tiecheng; Yin, Dejin; Lai, Lei

    2017-10-01

This paper introduces the standard diffuse-reflection white plate method and the integrating-sphere standard luminance source method for calibrating the luminance parameter, and compares the calibration results of the two methods through principle analysis and experimental verification. After both methods were used to calibrate the same radiation luminance meter, the data obtained verify that the results of both methods are reliable. The results show that the standard white plate method yields smaller display-value errors and better reproducibility, whereas the standard luminance source method is more convenient and suitable for on-site calibration; it also has a wider range and can test the linearity of the instruments.

  19. Refined shape model fitting methods for detecting various types of phenological information on major U.S. crops

    NASA Astrophysics Data System (ADS)

    Sakamoto, Toshihiro

    2018-04-01

Crop phenological information is a critical variable in evaluating the influence of environmental stress on the final crop yield in spatio-temporal dimensions. Although the MODIS (Moderate Resolution Imaging Spectroradiometer) Land Cover Dynamics product (MCD12Q2) is widely used in place of crop phenological information, the definitions of MCD12Q2-derived phenological events (e.g. green-up date, dormancy date) are not completely consistent with those of the crop development stages used in statistical surveys (e.g. emerged date, harvested date). An alternative method is therefore needed for detecting crop development stages at the continental scale. This study aimed to refine the Shape Model Fitting (SMF) method to improve its applicability to multiple major U.S. crops. The newly refined SMF methods can estimate the timing of 36 crop-development stages of major U.S. crops, including corn, soybeans, winter wheat, spring wheat, barley, sorghum, rice, and cotton. The newly developed calibration process does not require any long-term field observation data, and can calibrate crop-specific phenological parameters, which are used as coefficients in the estimation equations, by using only freely accessible public data. The calibration of phenological parameters was conducted in two steps. In the first step, the national common phenological parameters, referred to as X0[base], were calibrated by using the statistical data of 2008. The SMF method using X0[base] was named the rSMF[base] method. The second step was a further calibration to obtain regionally adjusted phenological parameters for each state, referred to as X0[local], by using additional statistical data of 2015 and 2016. The rSMF method using X0[local] was named the rSMF[local] method. This second calibration process improved the estimation accuracy for all tested crops.
When applying the rSMF[base] method to the validation data set (2009-2014), the root mean square error (RMSE) of the rSMF[base]-derived estimates ranged from 7.1 days (corn) to 15.7 days (winter wheat). When using the rSMF[local] method, the RMSE of the rSMF[local]-derived estimates improved and ranged from 5.6 days (corn) to 12.3 days (winter wheat). The results showed that the second calibration step for the rSMF[local] method could correct the region-dependent bias error between the rSMF[base]-derived estimates and the statistical data. A comparison between the performances of the refined SMF methods and the MCD12Q2 products indicated that both of the rSMF methods were superior to the MCD12Q2 products in estimating all phenological stages, except for the case of the rSMF[base]-derived barley emerged stages. The phenological stages for which the rSMF[local] showed the best estimation accuracy were the corn silking stage (RMSE = 4.3 days); the soybeans dropping leaves stage (RMSE = 4.9 days); the headed stages of winter wheat (RMSE = 11.1 days), barley (RMSE = 6.1 days), and sorghum (RMSE = 9.5 days); the spring-wheat harvested stage (RMSE = 5.5 days); the rice emerged stage (RMSE = 5.5 days); and the cotton squaring stage (RMSE = 6.6 days). These were more accurate than the results achieved by the MCD12Q2 products. In addition, the rSMF[local]-derived estimates were superior in terms of the reproducibility of the annual variation range, particularly of the late reproductive stages, such as the mature and harvested stages. The crop phenology maps derived from the rSMF[local] method were also in good agreement with the relevant maps derived from statistics, and could reveal the characteristic spatial pattern of the key phenological stages at the continental scale with fine spatial resolution. For example, the winter-wheat headed stage clearly became later from south to north.
The cotton squaring stage became earlier from the central region towards both coastal regions.
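The core of shape model fitting can be illustrated with a toy version that fits only a time shift: slide a reference vegetation-index curve along the day axis and keep the offset minimizing RMSE. The actual rSMF method also fits amplitude and time-scaling parameters, so this is a simplified sketch, not the paper's algorithm.

```python
import numpy as np

def fit_time_shift(ref_curve, obs_curve, shifts):
    """Toy shape-model fit: return the integer day shift of the
    reference curve that minimizes RMSE against the observed curve.

    ref_curve, obs_curve: 1-D arrays sampled on the same daily grid.
    shifts: iterable of candidate day offsets (a hypothetical search
    range; the real method optimizes several scaling parameters).
    """
    best_shift, best_rmse = None, np.inf
    for s in shifts:
        shifted = np.roll(ref_curve, s)
        rmse = np.sqrt(np.mean((shifted - obs_curve) ** 2))
        if rmse < best_rmse:
            best_shift, best_rmse = s, rmse
    return best_shift
```

Once the shift is known, a phenological date attached to the reference shape (e.g. its green-up day) maps to the observed season by adding the fitted offset.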

  20. Broadband standard dipole antenna for antenna calibration

    NASA Astrophysics Data System (ADS)

    Koike, Kunimasa; Sugiura, Akira; Morikawa, Takao

    1995-06-01

Calibration of EMI antennas is mostly performed by the standard antenna method at an open-field test site, using a specially designed dipole antenna as a reference. In order to develop broadband standard antennas, the antenna factors of shortened dipoles are theoretically investigated. First, the effects of the dipole length are analyzed using the induced-emf method. Then, baluns and loads are examined to determine their influence on the antenna factors. It is found that transformer-type baluns are very effective for improving the height dependence of the antenna factors. Resistive loads are also useful for flattening the frequency dependence. Based on these studies, a specification is developed for a broadband standard antenna operating in the 30 to 150 MHz frequency range.

  1. FAST Model Calibration and Validation of the OC5- DeepCwind Floating Offshore Wind System Against Wave Tank Test Data: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendt, Fabian F; Robertson, Amy N; Jonkman, Jason

During the course of the Offshore Code Comparison Collaboration, Continued, with Correlation (OC5) project, which focused on the validation of numerical methods through comparison against tank test data, the authors created a numerical FAST model of the 1:50-scale DeepCwind semisubmersible system that was tested at the Maritime Research Institute Netherlands ocean basin in 2013. This paper discusses several model calibration studies that were conducted to identify model adjustments that improve the agreement between the numerical simulations and the experimental test data. These calibration studies cover wind-field-specific parameters (coherence, turbulence), hydrodynamic and aerodynamic modeling approaches, as well as rotor model (blade-pitch and blade-mass imbalances) and tower model (structural tower damping coefficient) adjustments. These calibration studies were conducted based on relatively simple calibration load cases (wave only/wind only). The agreement between the final FAST model and experimental measurements is then assessed based on more-complex combined wind and wave validation cases.

  2. Uncertainty quantification for constitutive model calibration of brain tissue.

    PubMed

    Brewick, Patrick T; Teferra, Kirubel

    2018-05-31

    The results of a study comparing model calibration techniques for Ogden's constitutive model that describes the hyperelastic behavior of brain tissue are presented. One and two-term Ogden models are fit to two different sets of stress-strain experimental data for brain tissue using both least squares optimization and Bayesian estimation. For the Bayesian estimation, the joint posterior distribution of the constitutive parameters is calculated by employing Hamiltonian Monte Carlo (HMC) sampling, a type of Markov Chain Monte Carlo method. The HMC method is enriched in this work to intrinsically enforce the Drucker stability criterion by formulating a nonlinear parameter constraint function, which ensures the constitutive model produces physically meaningful results. Through application of the nested sampling technique, 95% confidence bounds on the constitutive model parameters are identified, and these bounds are then propagated through the constitutive model to produce the resultant bounds on the stress-strain response. The behavior of the model calibration procedures and the effect of the characteristics of the experimental data are extensively evaluated. It is demonstrated that increasing model complexity (i.e., adding an additional term in the Ogden model) improves the accuracy of the best-fit set of parameters while also increasing the uncertainty via the widening of the confidence bounds of the calibrated parameters. Despite some similarity between the two data sets, the resulting distributions are noticeably different, highlighting the sensitivity of the calibration procedures to the characteristics of the data. For example, the amount of uncertainty reported on the experimental data plays an essential role in how data points are weighted during the calibration, and this significantly affects how the parameters are calibrated when combining experimental data sets from disparate sources. Published by Elsevier Ltd.
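The least-squares side of the calibration can be sketched for a one-term Ogden model. The uniaxial nominal-stress expression below is the standard textbook form for an incompressible material (the paper's exact formulation may differ), and the grid search over alpha is a simple stand-in for the study's optimizer; it says nothing about the Bayesian/HMC part.

```python
import numpy as np

def ogden_stress(lam, mu, alpha):
    """Uniaxial nominal stress of a one-term incompressible Ogden model
    (standard textbook form; the paper's formulation may differ)."""
    return (2.0 * mu / alpha) * (lam ** (alpha - 1.0) - lam ** (-alpha / 2.0 - 1.0))

def fit_ogden(lam, stress, alphas):
    """Least-squares fit of (mu, alpha): for a fixed alpha the model is
    linear in mu, so mu has a closed-form solution; alpha is grid-searched
    (a simple stand-in for the optimizer used in the study)."""
    best = (None, None, np.inf)
    for a in alphas:
        if a == 0.0:                       # alpha = 0 is not a valid exponent
            continue
        basis = ogden_stress(lam, 1.0, a)  # stress for mu = 1
        mu = np.dot(basis, stress) / np.dot(basis, basis)
        sse = np.sum((mu * basis - stress) ** 2)
        if sse < best[2]:
            best = (mu, a, sse)
    return best[0], best[1]
```

With noiseless synthetic data the fit recovers the generating parameters exactly; with real stress-strain data the experimental uncertainties weight how each point pulls on the fit, which is the sensitivity the abstract highlights.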

  3. Aero-Thermal Calibration of the NASA Glenn Icing Research Tunnel (2004 and 2005 Tests)

    NASA Technical Reports Server (NTRS)

    Arrington, E. Allen; Pastor, Christine M.; Gonsalez, Jose C.; Curry, Monroe R., III

    2010-01-01

A full aero-thermal calibration of the NASA Glenn Icing Research Tunnel (IRT) was completed in 2004 following the replacement of the inlet guide vanes upstream of the tunnel drive system and improvements to the facility total temperature instrumentation. This calibration test provided data used to fully document the aero-thermal flow quality in the IRT test section and to construct calibration curves for the operation of the IRT. The 2004 test was also the first to use the 2-D RTD array, an improved total temperature calibration measurement platform.

  4. Bayesian calibration of terrestrial ecosystem models: A study of advanced Markov chain Monte Carlo methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Dan; Ricciuto, Daniel; Walker, Anthony

Calibration of terrestrial ecosystem models is important but challenging. Bayesian inference implemented by Markov chain Monte Carlo (MCMC) sampling provides a comprehensive framework to estimate model parameters and associated uncertainties using their posterior distributions. The effectiveness and efficiency of the method strongly depend on the MCMC algorithm used. In this study, a Differential Evolution Adaptive Metropolis (DREAM) algorithm was used to estimate posterior distributions of 21 parameters for the data assimilation linked ecosystem carbon (DALEC) model using 14 years of daily net ecosystem exchange data collected at the Harvard Forest Environmental Measurement Site eddy-flux tower. DREAM is a multi-chain method that uses a differential evolution technique for chain movement, allowing it to be efficiently applied to high-dimensional problems, and it can reliably estimate heavy-tailed and multimodal distributions that are difficult for single-chain schemes using a Gaussian proposal distribution. The results were evaluated against the popular Adaptive Metropolis (AM) scheme. DREAM indicated that two parameters controlling autumn phenology have multiple modes in their posterior distributions, while AM only identified one mode. Calibration with DREAM resulted in a better model fit and predictive performance compared to AM. DREAM provides the means for a thorough exploration of the posterior distributions of model parameters. Lastly, it reduces the risk of false convergence to a local optimum and potentially improves the predictive performance of the calibrated model.
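The differential-evolution proposal at the heart of DREAM can be illustrated with a minimal DE-MC update. This is a simplified sketch only: DREAM itself adds subspace sampling, crossover adaptation, and outlier-chain handling, and the function names here are illustrative.

```python
import numpy as np

def de_mc_step(chains, log_post, gamma=None, eps=1e-6, rng=None):
    """One differential-evolution Metropolis update of all chains.

    Each chain proposes current + gamma * (chain_a - chain_b) + small
    noise, where a and b are two other randomly chosen chains, and
    accepts with the usual Metropolis rule. chains: (n_chains, dim)
    array; log_post: log posterior density of a parameter vector.
    """
    rng = rng or np.random.default_rng()
    n, d = chains.shape
    g = gamma if gamma is not None else 2.38 / np.sqrt(2 * d)  # common default
    for i in range(n):
        a, b = rng.choice([j for j in range(n) if j != i], size=2, replace=False)
        prop = chains[i] + g * (chains[a] - chains[b]) + eps * rng.standard_normal(d)
        if np.log(rng.random()) < log_post(prop) - log_post(chains[i]):
            chains[i] = prop
    return chains
```

Because proposals are built from differences of other chains, the step size and orientation adapt automatically to the posterior's scale, which is what makes the scheme effective in high dimensions.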

  5. Rover mast calibration, exact camera pointing, and camera handoff for visual target tracking

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Ansar, Adnan I.; Steele, Robert D.

    2005-01-01

This paper presents three technical elements that we have developed to improve the accuracy of visual target tracking for single-sol approach-and-instrument placement in future Mars rover missions. An accurate, straightforward method of rover mast calibration is achieved by using a total station, a camera calibration target, and four prism targets mounted on the rover. The method was applied to Rocky8 rover mast calibration and yielded a 1.1-pixel rms residual error. Camera pointing requires inverse kinematic solutions for mast pan and tilt angles such that the target image appears right at the center of the camera image. Two issues were raised: mast camera frames are in general not parallel to the masthead base frame, and the optical axis of the camera model in general does not pass through the center of the image. Despite these issues, we managed to derive non-iterative closed-form exact solutions, which were verified with Matlab routines. Actual camera pointing experiments over 50 random target image points yielded less than 1.3-pixel rms pointing error. Finally, a purely geometric method for camera handoff using stereo views of the target has been developed. Experimental test runs show less than 2.5 pixels error on the high-resolution Navcam for Pancam-to-Navcam handoff, and less than 4 pixels error on the lower-resolution Hazcam for Navcam-to-Hazcam handoff.

  6. Bayesian calibration of terrestrial ecosystem models: A study of advanced Markov chain Monte Carlo methods

    DOE PAGES

    Lu, Dan; Ricciuto, Daniel; Walker, Anthony; ...

    2017-02-22

Calibration of terrestrial ecosystem models is important but challenging. Bayesian inference implemented by Markov chain Monte Carlo (MCMC) sampling provides a comprehensive framework to estimate model parameters and associated uncertainties using their posterior distributions. The effectiveness and efficiency of the method strongly depend on the MCMC algorithm used. In this study, a Differential Evolution Adaptive Metropolis (DREAM) algorithm was used to estimate posterior distributions of 21 parameters for the data assimilation linked ecosystem carbon (DALEC) model using 14 years of daily net ecosystem exchange data collected at the Harvard Forest Environmental Measurement Site eddy-flux tower. DREAM is a multi-chain method that uses a differential evolution technique for chain movement, allowing it to be efficiently applied to high-dimensional problems, and it can reliably estimate heavy-tailed and multimodal distributions that are difficult for single-chain schemes using a Gaussian proposal distribution. The results were evaluated against the popular Adaptive Metropolis (AM) scheme. DREAM indicated that two parameters controlling autumn phenology have multiple modes in their posterior distributions, while AM only identified one mode. Calibration with DREAM resulted in a better model fit and predictive performance compared to AM. DREAM provides the means for a thorough exploration of the posterior distributions of model parameters. Lastly, it reduces the risk of false convergence to a local optimum and potentially improves the predictive performance of the calibrated model.

  7. A Novel Multi-Digital Camera System Based on Tilt-Shift Photography Technology

    PubMed Central

    Sun, Tao; Fang, Jun-yong; Zhao, Dong; Liu, Xue; Tong, Qing-xi

    2015-01-01

Multi-digital camera systems (MDCSs) are constantly being improved to meet the increasing requirement for high-resolution spatial data. This study identifies the insufficiencies of traditional MDCSs and proposes a new category of MDCS based on tilt-shift photography to improve the ability of the MDCS to acquire high-accuracy spatial data. A prototype system, including two or four tilt-shift cameras (TSCs, camera model: Nikon D90), is developed to validate the feasibility and correctness of the proposed MDCS. As with the cameras of traditional MDCSs, calibration is also essential for the TSCs of the new MDCS. The study constructs indoor control fields and proposes appropriate calibration methods for TSCs, including a digital distortion model (DDM) approach and a two-step calibration strategy. The characteristics of TSCs, such as edge distortion, are analyzed in detail via a calibration experiment. Finally, the ability of the new MDCS to acquire high-accuracy spatial data is verified through flight experiments. The results illustrate that the geo-position accuracy of the prototype system reaches 0.3 m at a flight height of 800 m, with a spatial resolution of 0.15 m. In addition, a comparison between the traditional MDCS (MADC II) and the proposed system demonstrates that the latter (0.3 m) provides spatial data with higher accuracy than the former (only 0.6 m) under the same conditions. We also expect that using higher-accuracy TSCs in the new MDCS would further improve the accuracy of downstream photogrammetric products. PMID:25835187

  8. A novel multi-digital camera system based on tilt-shift photography technology.

    PubMed

    Sun, Tao; Fang, Jun-Yong; Zhao, Dong; Liu, Xue; Tong, Qing-Xi

    2015-03-31

Multi-digital camera systems (MDCSs) are constantly being improved to meet the increasing requirement for high-resolution spatial data. This study identifies the insufficiencies of traditional MDCSs and proposes a new category of MDCS based on tilt-shift photography to improve the ability of the MDCS to acquire high-accuracy spatial data. A prototype system, including two or four tilt-shift cameras (TSCs, camera model: Nikon D90), is developed to validate the feasibility and correctness of the proposed MDCS. As with the cameras of traditional MDCSs, calibration is also essential for the TSCs of the new MDCS. The study constructs indoor control fields and proposes appropriate calibration methods for TSCs, including a digital distortion model (DDM) approach and a two-step calibration strategy. The characteristics of TSCs, such as edge distortion, are analyzed in detail via a calibration experiment. Finally, the ability of the new MDCS to acquire high-accuracy spatial data is verified through flight experiments. The results illustrate that the geo-position accuracy of the prototype system reaches 0.3 m at a flight height of 800 m, with a spatial resolution of 0.15 m. In addition, a comparison between the traditional MDCS (MADC II) and the proposed system demonstrates that the latter (0.3 m) provides spatial data with higher accuracy than the former (only 0.6 m) under the same conditions. We also expect that using higher-accuracy TSCs in the new MDCS would further improve the accuracy of downstream photogrammetric products.

  9. Finding trap stiffness of optical tweezers using digital filters.

    PubMed

    Almendarez-Rangel, Pedro; Morales-Cruzado, Beatriz; Sarmiento-Gómez, Erick; Pérez-Gutiérrez, Francisco G

    2018-02-01

    Obtaining trap stiffness and calibration of the position detection system is the basis of a force measurement using optical tweezers. Both calibration quantities can be calculated using several experimental methods available in the literature. In most cases, stiffness determination and detection system calibration are performed separately, often requiring procedures in very different conditions, and thus confidence of calibration methods is not assured due to possible changes in the environment. In this work, a new method to simultaneously obtain both the detection system calibration and trap stiffness is presented. The method is based on the calculation of the power spectral density of positions through digital filters to obtain the harmonic contributions of the position signal. This method has the advantage of calculating both trap stiffness and photodetector calibration factor from the same dataset in situ. It also provides a direct method to avoid unwanted frequencies that could greatly affect calibration procedure, such as electric noise, for example.
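For context, the conventional route from a position power spectral density to trap stiffness can be sketched as below: the PSD of a trapped bead is a Lorentzian whose corner frequency fc gives the stiffness via k = 2*pi*gamma*fc. This is the standard PSD method, not the paper's digital-filter variant; the drag coefficient and search grid are illustrative assumptions.

```python
import numpy as np

def lorentzian(f, D, fc):
    """One-sided Lorentzian PSD of a trapped bead's position."""
    return D / (2 * np.pi ** 2 * (fc ** 2 + f ** 2))

def trap_stiffness_from_psd(f, psd, gamma, fc_grid):
    """Fit a Lorentzian to a measured PSD and convert the corner
    frequency to trap stiffness via k = 2*pi*gamma*fc.

    gamma is the bead's Stokes drag coefficient; fc_grid is a
    hypothetical search grid for the corner frequency.
    """
    best_fc, best_sse = None, np.inf
    for fc in fc_grid:
        shape = 1.0 / (fc ** 2 + f ** 2)
        # The amplitude D is linear given fc: closed-form least squares.
        amp = np.dot(shape, psd) / np.dot(shape, shape)
        sse = np.sum((amp * shape - psd) ** 2)
        if sse < best_sse:
            best_fc, best_sse = fc, sse
    return 2 * np.pi * gamma * best_fc
```

The record's contribution is to extract the harmonic contributions with digital filters instead, which lets narrow-band disturbances such as electrical noise be excluded from the same dataset used for detector calibration.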

  10. Improvement of the repeatability of parallel transmission at 7T using interleaved acquisition in the calibration scan.

    PubMed

    Kameda, Hiroyuki; Kudo, Kohsuke; Matsuda, Tsuyoshi; Harada, Taisuke; Iwadate, Yuji; Uwano, Ikuko; Yamashita, Fumio; Yoshioka, Kunihiro; Sasaki, Makoto; Shirato, Hiroki

    2017-12-04

    Respiration-induced phase shift affects B 0 /B 1 + mapping repeatability in parallel transmission (pTx) calibration for 7T brain MRI, but is improved by breath-holding (BH). However, BH cannot be applied during long scans. To examine whether interleaved acquisition during calibration scanning could improve pTx repeatability and image homogeneity. Prospective. Nine healthy subjects. 7T MRI with a two-channel RF transmission system was used. Calibration scanning for B 0 /B 1 + mapping was performed under sequential acquisition/free-breathing (Seq-FB), Seq-BH, and interleaved acquisition/FB (Int-FB) conditions. The B 0 map was calculated with two echo times, and the B 1 + map was obtained using the Bloch-Siegert method. Actual flip-angle imaging (AFI) and gradient echo (GRE) imaging were performed using pTx and quadrature-Tx (qTx). All scans were acquired in five sessions. Repeatability was evaluated using intersession standard deviation (SD) or coefficient of variance (CV), and in-plane homogeneity was evaluated using in-plane CV. A paired t-test with Bonferroni correction for multiple comparisons was used. The intersession CV/SDs for the B 0 /B 1 + maps were significantly smaller in Int-FB than in Seq-FB (Bonferroni-corrected P < 0.05 for all). The intersession CVs for the AFI and GRE images were also significantly smaller in Int-FB, Seq-BH, and qTx than in Seq-FB (Bonferroni-corrected P < 0.05 for all). The in-plane CVs for the AFI and GRE images in Seq-FB, Int-FB, and Seq-BH were significantly smaller than in qTx (Bonferroni-corrected P < 0.01 for all). Using interleaved acquisition during calibration scans of pTx for 7T brain MRI improved the repeatability of B 0 /B 1 + mapping, AFI, and GRE images, without BH. 1 Technical Efficacy Stage 1 J. Magn. Reson. Imaging 2017. © 2017 International Society for Magnetic Resonance in Medicine.

  11. Using measured 30-150 kVp polychromatic tungsten x-ray spectra to determine ion chamber calibration factors, Nx (Gy C(-1)).

    PubMed

    Mercier, J R; Kopp, D T; McDavid, W D; Dove, S B; Lancaster, J L; Tucker, D M

    2000-10-01

    Two methods for determining ion chamber calibration factors (Nx) are presented for polychromatic tungsten x-ray beams whose spectra differ from beams with known Nx. Both methods take advantage of known x-ray fluence and kerma spectral distributions. In the first method, the x-ray tube potential is unchanged and spectra of differing filtration are measured. A primary standard ion chamber with known Nx for one beam is used to calculate the x-ray fluence spectrum of a second beam. Accurate air energy absorption coefficients are applied to the x-ray fluence spectra of the second beam to calculate actual air kerma and Nx. In the second method, two beams of differing tube potential and filtration with known Nx are used to bracket a beam of unknown Nx. A heuristically derived Nx interpolation scheme based on spectral characteristics of all three beams is described. Both methods are validated. Both methods improve accuracy over the current half value layer Nx estimating technique.
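The first method's kerma computation can be sketched as a spectrum-weighted sum: air kerma is the fluence in each energy bin times the photon energy times the mass energy-absorption coefficient of air. The numerical coefficient value in the test is a rough illustrative figure, not tabulated data.

```python
import numpy as np

def air_kerma(energies_keV, fluence, mu_en_over_rho):
    """Air kerma from a photon fluence spectrum:
    K = sum over bins of fluence(E) * E * (mu_en/rho)(E).

    energies_keV: bin energies (keV); fluence: photons per cm^2 per bin;
    mu_en_over_rho: mass energy-absorption coefficient of air (cm^2/g).
    Returns kerma in Gy (keV -> J and cm^2/g -> m^2/kg conversions applied).
    """
    keV_to_J = 1.602176634e-16
    E_J = np.asarray(energies_keV) * keV_to_J
    # fluence per cm^2 -> per m^2 (*1e4); cm^2/g -> m^2/kg (*0.1)
    return float(np.sum(np.asarray(fluence) * 1e4 * E_J *
                        np.asarray(mu_en_over_rho) * 0.1))

def calibration_factor(kerma_Gy, charge_C):
    """Ion chamber calibration factor Nx = air kerma / collected charge (Gy/C)."""
    return kerma_Gy / charge_C
```

Given a measured spectrum for the second beam, the computed kerma divided by the chamber's collected charge yields the new Nx directly, which is the step both methods in the record build on.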

  12. Calibration Plans for the Global Precipitation Measurement (GPM)

    NASA Technical Reports Server (NTRS)

    Bidwell, S. W.; Flaming, G. M.; Adams, W. J.; Everett, D. F.; Mendelsohn, C. R.; Smith, E. A.; Turk, J.

    2002-01-01

The Global Precipitation Measurement (GPM) is an international effort led by the National Aeronautics and Space Administration (NASA) of the U.S.A. and the National Space Development Agency of Japan (NASDA) for the purpose of improving research into the global water and energy cycle. GPM will improve climate, weather, and hydrological forecasts through more frequent and more accurate measurement of precipitation world-wide. Composed of U.S. domestic and international partners, GPM will incorporate and assimilate data streams from many spacecraft with varied orbital characteristics and instrument capabilities. Two of the satellites will be provided directly by GPM, the core satellite and a constellation member. The core satellite, at the heart of GPM, is scheduled for launch in November 2007. The core will carry a conical scanning microwave radiometer, the GPM Microwave Imager (GMI), and a two-frequency cross-track-scanning radar, the Dual-frequency Precipitation Radar (DPR). The passive microwave channels and the two radar frequencies of the core are carefully chosen for investigating the varying character of precipitation over ocean and land, and from the tropics to the high-latitudes. The DPR will enable microphysical characterization and three-dimensional profiling of precipitation. The GPM-provided constellation spacecraft will carry a GMI radiometer identical to that on the core spacecraft. This paper presents calibration plans for the GPM, including on-board instrument calibration, external calibration methods, and the role of ground validation. Particular emphasis is on plans for inter-satellite calibration of the GPM constellation. With its unique instrument capabilities, the core spacecraft will serve as a calibration transfer standard to the GPM constellation. In particular the Dual-frequency Precipitation Radar aboard the core will check the accuracy of retrievals from the GMI radiometer and will enable improvement of the radiometer retrievals. 
Observational intersections of the core with the constellation spacecraft are essential in applying this technique to the member satellites. Information from core spacecraft retrievals during intersection events will be transferred to the constellation radiometer instruments in the form of improved calibration and, with experience, improved radiometric algorithms. In preparation for the transfer standard technique, comparisons using the Tropical Rainfall Measuring Mission (TRMM) with sun-synchronous radiometers have been conducted. Ongoing research involves study of critical variables in the inter-comparison, such as correlation with spatial-temporal separation of intersection events, frequency of intersection events, variable azimuth look angles, and variable resolution cells for the various sensors.

  13. Calibration methods for explosives detectors

    NASA Astrophysics Data System (ADS)

    MacDonald, Stephen J.; Rounbehler, David P.

    1992-05-01

    Airport security has become an important concern to cultures in every corner of the world. Presently, efforts to improve airport security have brought additional technological solutions, in the form of advanced instrumentation for the detection of explosives, into use at airport terminals in many countries. This new generation of explosives detectors is often used to augment existing security measures and provide a more encompassing screening capability for airline passengers. This paper describes two calibration procedures used for the Thermedics' EGIS explosives detectors. The systems were designed to screen people, electronic components, luggage, automobiles, and other objects for the presence of concealed explosives. The detectors have the ability to detect a wide range of explosives in both the vapor state or as surface adsorbed solids, therefore, calibrations were designed to challenge the system with explosives in each form.

  14. Calibration and filtering strategies for frequency domain electromagnetic data

    USGS Publications Warehouse

    Minsley, Burke J.; Smith, Bruce D.; Hammack, Richard; Sams, James I.; Veloski, Garret

    2010-01-01

    Techniques for processing frequency-domain electromagnetic (FDEM) data that address systematic instrument errors and random noise are presented, improving the ability to invert these data for meaningful earth models that can be quantitatively interpreted. A least-squares calibration method, originally developed for airborne electromagnetic datasets, is implemented for a ground-based survey in order to address systematic instrument errors, and new insights are provided into the importance of calibration for preserving spectral relationships within the data that lead to more reliable inversions. An alternative filtering strategy based on principal component analysis, which takes advantage of the strong correlation observed in FDEM data, is introduced to help address random noise in the data without imposing somewhat arbitrary spatial smoothing.
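
    The PCA-based filtering strategy described above can be sketched as a generic rank-truncation filter. This is an illustrative reconstruction, not the authors' implementation; the synthetic data, noise level, and component count are assumptions:

```python
import numpy as np

def pca_filter(data, n_components):
    """Denoise multi-frequency soundings by projecting onto the leading
    principal components: strong inter-frequency correlation concentrates
    the coherent signal in a few components, leaving random noise behind."""
    mean = data.mean(axis=0)
    centered = data - mean
    # SVD of the centered data matrix; rows = soundings, cols = frequencies
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    s_filt = np.zeros_like(s)
    s_filt[:n_components] = s[:n_components]  # zero the trailing (noise) components
    return U @ np.diag(s_filt) @ Vt + mean

# Synthetic survey: 200 soundings x 6 frequencies, rank-1 signal plus noise
rng = np.random.default_rng(0)
signal = np.outer(np.linspace(1.0, 2.0, 200), np.ones(6))
noisy = signal + 0.05 * rng.standard_normal((200, 6))
filtered = pca_filter(noisy, n_components=1)
err_noisy = np.abs(noisy - signal).mean()
err_filtered = np.abs(filtered - signal).mean()
```

    Unlike spatial smoothing, the truncation acts across frequencies at each station, so sharp lateral changes in the earth response are preserved.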

  15. Langley Wind Tunnel Data Quality Assurance-Check Standard Results

    NASA Technical Reports Server (NTRS)

    Hemsch, Michael J.; Grubb, John P.; Krieger, William B.; Cler, Daniel L.

    2000-01-01

    A framework for statistical evaluation, control and improvement of wind tunnel measurement processes is presented. The methodology is adapted from elements of the Measurement Assurance Plans developed by the National Bureau of Standards (now the National Institute of Standards and Technology) for standards and calibration laboratories. The present methodology is based on the notions of statistical quality control (SQC) together with check standard testing and a small number of customer repeat-run sets. The results of check standard and customer repeat-run sets are analyzed using the statistical control chart methods of Walter A. Shewhart, long familiar to the SQC community. Control chart results are presented for various measurement processes in five facilities at Langley Research Center. The processes include test section calibration, force and moment measurements with a balance, and instrument calibration.
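
    The control-chart arithmetic behind such check-standard monitoring can be sketched with a Shewhart individuals (X-mR) chart; the repeat-run values below are hypothetical, and 2.66 is the standard X-mR chart constant (3/d2 with d2 = 1.128):

```python
import numpy as np

def individuals_chart_limits(x):
    """Shewhart individuals-chart limits from a sequence of
    check-standard results: center line +/- 2.66 * mean moving range."""
    x = np.asarray(x, dtype=float)
    center = x.mean()
    mean_moving_range = np.abs(np.diff(x)).mean()
    half_width = 2.66 * mean_moving_range
    return center - half_width, center, center + half_width

# Hypothetical check-standard force coefficients from repeat runs
runs = [0.512, 0.508, 0.511, 0.509, 0.513, 0.510, 0.507, 0.512]
lcl, center, ucl = individuals_chart_limits(runs)
in_control = all(lcl <= v <= ucl for v in runs)
```

    A point outside the limits, or a systematic run on one side of the center line, signals that the measurement process has drifted and should be investigated before customer data are taken.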

  16. More accurate, calibrated bootstrap confidence intervals for correlating two autocorrelated climate time series

    NASA Astrophysics Data System (ADS)

    Olafsdottir, Kristin B.; Mudelsee, Manfred

    2013-04-01

    Estimation of the Pearson's correlation coefficient between two time series, to evaluate the influence of one time-dependent variable on another, is one of the most frequently used statistical methods in climate sciences. Various methods are used to estimate confidence intervals to support the correlation point estimate. Many of them make strong mathematical assumptions regarding distributional shape and serial correlation, which are rarely met. More robust statistical methods are needed to increase the accuracy of the confidence intervals. Bootstrap confidence intervals are estimated in the Fortran 90 program PearsonT (Mudelsee, 2003), where the main intention was to get an accurate confidence interval for the correlation coefficient between two time series by taking into account the serial dependence of the process that generated the data. However, Monte Carlo experiments show that the coverage accuracy for smaller data sizes can be improved. Here we adapt the PearsonT program into a new version called PearsonT3, by calibrating the confidence interval to increase the coverage accuracy. Calibration is a bootstrap resampling technique which basically performs a second bootstrap loop, i.e., resamples from the bootstrap resamples. It offers, like the non-calibrated bootstrap confidence intervals, robustness against the data distribution. Pairwise moving block bootstrap is used to preserve the serial correlation of both time series. The calibration is applied to standard error based bootstrap Student's t confidence intervals. The performance of the calibrated confidence intervals is examined with Monte Carlo simulations, and compared with the performance of confidence intervals without calibration, that is, PearsonT. The coverage accuracy is evidently better for the calibrated confidence intervals, where the coverage error is acceptably small (i.e., within a few percentage points) already for data sizes as small as 20.
One form of climate time series is output from numerical models which simulate the climate system. The method is applied to model data from the high-resolution ocean model INALT01, where the relationship between the Agulhas Leakage and the North Brazil Current is evaluated. Preliminary results show significant correlation between the two variables at a 10-year lag, which is roughly the time it takes Agulhas Leakage water to reach the North Brazil Current. Mudelsee, M., 2003. Estimating Pearson's correlation coefficient with bootstrap confidence interval from serially dependent time series. Mathematical Geology 35, 651-665.
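
    The pairwise moving-block bootstrap at the core of PearsonT/PearsonT3 can be sketched as follows. This is a simplified illustration: the calibration step (the second bootstrap loop) is omitted, and the correlated test series are synthetic:

```python
import numpy as np

def block_bootstrap_corr(x, y, block_len=5, n_boot=500, seed=0):
    """Pairwise moving-block bootstrap of Pearson's r: resample both
    series with the SAME blocks, so serial correlation within blocks
    and the cross-correlation between the series are preserved."""
    rng = np.random.default_rng(seed)
    n = len(x)
    starts = np.arange(n - block_len + 1)
    n_blocks = int(np.ceil(n / block_len))
    reps = np.empty(n_boot)
    for b in range(n_boot):
        idx = np.concatenate(
            [np.arange(s, s + block_len)
             for s in rng.choice(starts, n_blocks)])[:n]
        reps[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    return reps

# Two serially dependent series sharing a common random-walk component
rng = np.random.default_rng(1)
z = 0.1 * np.cumsum(rng.standard_normal(200))
x = z + 0.3 * rng.standard_normal(200)
y = z + 0.3 * rng.standard_normal(200)

r = np.corrcoef(x, y)[0, 1]
reps = block_bootstrap_corr(x, y)
se = reps.std(ddof=1)                      # bootstrap standard error
ci = (r - 1.96 * se, r + 1.96 * se)        # Student's t-type interval
```

    Calibration would wrap a second resampling loop around this one to adjust the nominal 1.96 quantile until the empirical coverage matches the target level.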

  17. Radiometric calibration of the Earth observing system's imaging sensors

    NASA Technical Reports Server (NTRS)

    Slater, P. N.

    1987-01-01

    Philosophy, requirements, and methods of calibration of multispectral space sensor systems as applicable to the Earth Observing System (EOS) are discussed. Vicarious methods for calibration of low spatial resolution systems, with respect to the Advanced Very High Resolution Radiometer (AVHRR), are then summarized. Finally, a theoretical introduction is given to a new vicarious method of calibration using the ratio of diffuse-to-global irradiance at the Earth's surfaces as the key input. This may provide an additional independent method for in-flight calibration.

  18. Configurations and calibration methods for passive sampling techniques.

    PubMed

    Ouyang, Gangfeng; Pawliszyn, Janusz

    2007-10-19

    Passive sampling technology has developed very quickly in the past 15 years, and is widely used for the monitoring of pollutants in different environments. The design and quantification of passive sampling devices require an appropriate calibration method. Current calibration methods that exist for passive sampling, including equilibrium extraction, linear uptake, and kinetic calibration, are presented in this review. A number of state-of-the-art passive sampling devices that can be used for aqueous and air monitoring are introduced according to their calibration methods.
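
    As a concrete example of kinetic calibration, the in-fibre standard approach assumes isotropic absorption and desorption: the fraction of a preloaded calibrant lost during deployment gives the analyte's approach to equilibrium. A minimal sketch with hypothetical sampler constants (partition coefficient, extraction-phase volume, and the 40% desorption scenario are all assumed for illustration):

```python
def analyte_concentration(n_t, q_t, q0, K_es, V_e):
    """Kinetic (in-fibre standard) calibration: by the isotropy relation,
    n_t / n_equilibrium = 1 - q_t / q0, so the ambient concentration is
    C = n_t / ((1 - q_t/q0) * K_es * V_e).  Requires q_t < q0."""
    frac_equil = 1.0 - q_t / q0
    return n_t / (frac_equil * K_es * V_e)

# Hypothetical sampler: K_es = 200, V_e = 5e-4 L, true water conc 2 ug/L
C_true, K_es, V_e = 2.0, 200.0, 5e-4
frac = 0.4                          # analyte reached 40% of equilibrium
n_t = frac * K_es * V_e * C_true    # analyte mass absorbed (ug)
q0, q_t = 10.0, (1 - frac) * 10.0   # calibrant preloaded / remaining (ug)
C_est = analyte_concentration(n_t, q_t, q0, K_es, V_e)
```

    The appeal of this scheme is that no separate uptake-rate experiment is needed: the calibrant desorption measured on the same deployment supplies the kinetic correction.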

  19. Calibration improvements to electronically scanned pressure systems and preliminary statistical assessment

    NASA Technical Reports Server (NTRS)

    Everhart, Joel L.

    1996-01-01

    Orifice-to-orifice inconsistencies in data acquired with an electronically-scanned pressure system at the beginning of a wind tunnel experiment forced modifications to the standard instrument calibration procedures. These modifications included a large increase in the number of calibration points, which allowed a critical examination of the calibration curve-fit process and a subsequent post-test reduction of the pressure data. Evaluation of these data has resulted in an improved functional representation of the pressure-voltage signature for electronically-scanned pressure sensors, which can reduce the errors due to calibration curve fit to under 0.10 percent of reading, compared to the manufacturer-specified 0.10 percent of full scale. Application of the improved calibration function allows a more rational selection of the calibration set-point pressures. These pressures should be adjusted to achieve a voltage output which matches the physical shape of the pressure-voltage signature of the sensor. This process is conducted in lieu of the more traditional approach where a calibration pressure is specified and the resulting sensor voltage is recorded. The fifteen calibrations acquired over the two-week duration of the wind tunnel test were further used to perform a preliminary statistical assessment of the variation in the calibration process. The results allowed the estimation of the bias uncertainty for a single instrument calibration, and they form the precursor for more extensive and more controlled studies in the laboratory.
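
    The distinction between percent-of-reading and percent-of-full-scale error can be illustrated with a simple polynomial calibration fit. The pressure-voltage signature below is synthetic (a mildly nonlinear curve assumed for illustration, not the actual sensor characteristic):

```python
import numpy as np

# Hypothetical ESP sensor: mildly nonlinear pressure-voltage signature
pressures = np.linspace(5.0, 100.0, 15)          # kPa set points
volts = 0.05 * pressures + 2e-5 * pressures**2   # synthetic sensor output

# Cubic calibration: pressure as a function of voltage
coeffs = np.polyfit(volts, pressures, 3)
predicted = np.polyval(coeffs, volts)

full_scale = pressures.max()
residual = np.abs(predicted - pressures)
err_of_reading = residual / pressures * 100      # % of reading
err_of_fullscale = residual / full_scale * 100   # % of full scale
```

    Percent-of-reading is the stricter criterion at the low end of the range: a residual that is negligible against full scale can still be a large fraction of a small reading, which is why the choice of error metric changes how set-point pressures should be distributed.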

  20. Testing the molecular clock using mechanistic models of fossil preservation and molecular evolution.

    PubMed

    Warnock, Rachel C M; Yang, Ziheng; Donoghue, Philip C J

    2017-06-28

    Molecular sequence data provide information about relative times only, and fossil-based age constraints are the ultimate source of information about absolute times in molecular clock dating analyses. Thus, fossil calibrations are critical to molecular clock dating, but competing methods are difficult to evaluate empirically because the true evolutionary time scale is never known. Here, we combine mechanistic models of fossil preservation and sequence evolution in simulations to evaluate different approaches to constructing fossil calibrations and their impact on Bayesian molecular clock dating, and the relative impact of fossil versus molecular sampling. We show that divergence time estimation is impacted by the model of fossil preservation, sampling intensity and tree shape. The addition of sequence data may improve molecular clock estimates, but accuracy and precision are dominated by the quality of the fossil calibrations. Posterior means and medians are poor representatives of true divergence times; posterior intervals provide a much more accurate estimate of divergence times, though they may be wide and often do not have high coverage probability. Our results highlight the importance of increased fossil sampling and improved statistical approaches to generating calibrations, which should incorporate the non-uniform nature of ecological and temporal fossil species distributions. © 2017 The Authors.

  1. Precise Haptic Device Co-Location for Visuo-Haptic Augmented Reality.

    PubMed

    Eck, Ulrich; Pankratz, Frieder; Sandor, Christian; Klinker, Gudrun; Laga, Hamid

    2015-12-01

    Visuo-haptic augmented reality systems enable users to see and touch digital information that is embedded in the real world. PHANToM haptic devices are often employed to provide haptic feedback. Precise co-location of computer-generated graphics and the haptic stylus is necessary to provide a realistic user experience. Previous work has focused on calibration procedures that compensate the non-linear position error caused by inaccuracies in the joint angle sensors. In this article we present a more complete procedure that additionally compensates for errors in the gimbal sensors and improves position calibration. The proposed procedure further includes software-based temporal alignment of sensor data and a method for the estimation of a reference for position calibration, resulting in increased robustness against haptic device initialization and external tracker noise. We designed our procedure to require minimal user input to maximize usability. We conducted an extensive evaluation with two different PHANToMs, two different optical trackers, and a mechanical tracker. Compared to state-of-the-art calibration procedures, our approach significantly improves the co-location of the haptic stylus. This results in higher fidelity visual and haptic augmentations, which are crucial for fine-motor tasks in areas such as medical training simulators, assembly planning tools, or rapid prototyping applications.

  2. Signal inference with unknown response: calibration-uncertainty renormalized estimator.

    PubMed

    Dorn, Sebastian; Enßlin, Torsten A; Greiner, Maksim; Selig, Marco; Boehm, Vanessa

    2015-01-01

    The calibration of a measurement device is crucial for every scientific experiment, where a signal has to be inferred from data. We present CURE, the calibration-uncertainty renormalized estimator, to reconstruct a signal and simultaneously the instrument's calibration from the same data without knowing the exact calibration, but its covariance structure. The idea of the CURE method, developed in the framework of information field theory, is to start with an assumed calibration to successively include more and more portions of calibration uncertainty into the signal inference equations and to absorb the resulting corrections into renormalized signal (and calibration) solutions. Thereby, the signal inference and calibration problem turns into a problem of solving a single system of ordinary differential equations and can be identified with common resummation techniques used in field theories. We verify the CURE method by applying it to a simplistic toy example and compare it against existent self-calibration schemes, Wiener filter solutions, and Markov chain Monte Carlo sampling. We conclude that the method is able to keep up in accuracy with the best self-calibration methods and serves as a noniterative alternative to them.

  3. An improved procedure for detection and enumeration of walrus signatures in airborne thermal imagery

    USGS Publications Warehouse

    Burn, Douglas M.; Udevitz, Mark S.; Speckman, Suzann G.; Benter, R. Bradley

    2009-01-01

    In recent years, application of remote sensing to marine mammal surveys has been a promising area of investigation for wildlife managers and researchers. In April 2006, the United States and Russia conducted an aerial survey of Pacific walrus (Odobenus rosmarus divergens) using thermal infrared sensors to detect groups of animals resting on pack ice in the Bering Sea. The goal of this survey was to estimate the size of the Pacific walrus population. An initial analysis of the U.S. data using previously-established methods resulted in lower detectability of walrus groups in the imagery and higher variability in calibration models than was expected based on pilot studies. This paper describes an improved procedure for detection and enumeration of walrus groups in airborne thermal imagery. Thermal images were first subdivided into smaller 200 x 200 pixel "tiles." We calculated three statistics to represent characteristics of walrus signatures from the temperature histogram for each tile. Tiles that exhibited one or more of these characteristics were examined further to determine if walrus signatures were present. We used cluster analysis on tiles that contained walrus signatures to determine which pixels belonged to each group. We then calculated a thermal index value for each walrus group in the imagery and used generalized linear models to estimate detection functions (the probability of a group having a positive index value) and calibration functions (the size of a group as a function of its index value) based on counts from matched digital aerial photographs. The new method described here improved our ability to detect walrus groups at both 2 m and 4 m spatial resolution. In addition, the resulting calibration models have lower variance than the original method. We anticipate that the use of this new procedure will greatly improve the quality of the population estimate derived from these data.
This procedure may also have broader applicability to thermal infrared surveys of other wildlife species. Published by Elsevier B.V.
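
    The tiling-and-screening stage of such a procedure can be sketched generically. The single warm-tail statistic below is a crude stand-in for the paper's three histogram statistics, and the thermal frame is synthetic:

```python
import numpy as np

def flag_tiles(image, tile=200, warm_delta=3.0):
    """Split a thermal frame into tile x tile blocks and flag tiles whose
    temperature histogram shows a warm tail well above the local background
    (max minus median), as candidates for closer inspection."""
    h, w = image.shape
    flagged = []
    for i in range(0, h - tile + 1, tile):
        for j in range(0, w - tile + 1, tile):
            block = image[i:i + tile, j:j + tile]
            if block.max() - np.median(block) > warm_delta:
                flagged.append((i, j))
    return flagged

# Synthetic frame: cold ice background plus one warm "walrus group"
rng = np.random.default_rng(0)
frame = rng.normal(0.0, 0.5, (400, 400))   # background, relative kelvin
frame[50:60, 70:90] += 8.0                 # warm patch in the top-left tile
hits = flag_tiles(frame)
```

    Only flagged tiles would then go on to the clustering and thermal-index steps, which keeps the per-frame cost low when most of the scene is empty pack ice.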

  4. Structured light system calibration method with optimal fringe angle.

    PubMed

    Li, Beiwen; Zhang, Song

    2014-11-20

    For structured light system calibration, one popular approach is to treat the projector as an inverse camera. This is usually performed by projecting horizontal and vertical sequences of patterns to establish one-to-one mapping between camera points and projector points. However, for a well-designed system, either horizontal or vertical fringe images are not sensitive to depth variation and thus yield inaccurate mapping. As a result, the calibration accuracy is jeopardized if a conventional calibration method is used. To address this limitation, this paper proposes a novel calibration method based on optimal fringe angle determination. Experiments demonstrate that our calibration approach can increase the measurement accuracy up to 38% compared to the conventional calibration method with a calibration volume of 300(H)  mm×250(W)  mm×500(D)  mm.

  5. Improved Radial Velocity Precision with a Tunable Laser Calibrator

    NASA Astrophysics Data System (ADS)

    Cramer, Claire; Brown, S.; Dupree, A. K.; Lykke, K. R.; Smith, A.; Szentgyorgyi, A.

    2010-01-01

    We present radial velocities obtained using a novel laser-based wavelength calibration technique. We have built a prototype laser calibrator for the Hectochelle spectrograph at the MMT 6.5 m telescope. The Hectochelle is a high-dispersion, fiber-fed, multi-object spectrograph capable of recording up to 240 spectra simultaneously with a resolving power of 40000. The standard wavelength calibration method makes use of spectra from thorium-argon hollow cathode lamps shining directly onto the fibers. The difference in light path between calibration and science light as well as the uneven distribution of spectral lines are believed to introduce errors of up to several hundred m/s in the wavelength scale. Our tunable laser wavelength calibrator solves these problems. The laser is bright enough for use with a dome screen, allowing the calibration light path to better match the science light path. Further, the laser is tuned in regular steps across a spectral order to generate a calibration spectrum, creating a comb of evenly-spaced lines on the detector. Using the solar spectrum reflected from the atmosphere to record the same spectrum in every fiber, we show that laser wavelength calibration brings radial velocity uncertainties down below 100 m/s. We present these results as well as an application of tunable laser calibration to stellar radial velocities determined with the infrared Ca triplet in globular clusters M15 and NGC 7492. We also suggest how the tunable laser could be useful for other instruments, including single-object, cross-dispersed echelle spectrographs, and adapted for infrared spectroscopy.

  6. Simultaneous calibration phantom commission and geometry calibration in cone beam CT

    NASA Astrophysics Data System (ADS)

    Xu, Yuan; Yang, Shuai; Ma, Jianhui; Li, Bin; Wu, Shuyu; Qi, Hongliang; Zhou, Linghong

    2017-09-01

    Geometry calibration is a vital step for describing the geometry of a cone beam computed tomography (CBCT) system and is a prerequisite for CBCT reconstruction. In current methods, calibration phantom commission and geometry calibration are divided into two independent tasks. Small errors in ball-bearing (BB) positioning in the phantom-making step will severely degrade the quality of phantom calibration. To solve this problem, we propose an integrated method to simultaneously realize geometry phantom commission and geometry calibration. Instead of assuming the accuracy of the geometry phantom, the integrated method considers BB centers in the phantom as an optimized parameter in the workflow. Specifically, an evaluation phantom and the corresponding evaluation contrast index are used to evaluate geometry artifacts for optimizing the BB coordinates in the geometry phantom. After utilizing particle swarm optimization, the CBCT geometry and BB coordinates in the geometry phantom are calibrated accurately and are then directly used for the next geometry calibration task in other CBCT systems. To evaluate the proposed method, both qualitative and quantitative studies were performed on simulated and realistic CBCT data. The spatial resolution of reconstructed images using dental CBCT can reach up to 15 line pair cm-1. The proposed method is also superior to the Wiesent method in experiments. This paper shows that the proposed method is attractive for simultaneous and accurate geometry phantom commission and geometry calibration.

  7. Simplified method for creating a density-absorbed dose calibration curve for the low dose range from Gafchromic EBT3 film.

    PubMed

    Gotanda, Tatsuhiro; Katsuda, Toshizo; Gotanda, Rumi; Kuwano, Tadao; Akagawa, Takuya; Tanki, Nobuyoshi; Tabuchi, Akihiko; Shimono, Tetsunori; Kawaji, Yasuyuki

    2016-01-01

    Radiochromic film dosimeters have a disadvantage in comparison with an ionization chamber in that the dosimetry process is time-consuming for creating a density-absorbed dose calibration curve. The purpose of this study was the development of a simplified method of creating a density-absorbed dose calibration curve from radiochromic film within a short time. This simplified method was performed using Gafchromic EBT3 film with a low energy dependence and step-shaped Al filter. The simplified method was compared with the standard method. The density-absorbed dose calibration curves created using the simplified and standard methods exhibited approximately similar straight lines, and the gradients of the density-absorbed dose calibration curves were -32.336 and -33.746, respectively. The simplified method can obtain calibration curves within a much shorter time compared to the standard method. It is considered that the simplified method for EBT3 film offers a more time-efficient means of determining the density-absorbed dose calibration curve within a low absorbed dose range such as the diagnostic range.
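
    The density-absorbed dose calibration step amounts to a linear least-squares fit that is then inverted for dosimetry. A minimal sketch with hypothetical film readings; the slope magnitude is chosen only to echo the reported gradients near -33, and the dose axis and residuals are assumptions:

```python
import numpy as np

# Hypothetical step-filter readings: film density falls linearly with dose
doses = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])   # low-dose range (a.u.)
density = (180.0 - 33.0 * doses
           + np.array([0.2, -0.1, 0.15, -0.2, 0.1, -0.05]))  # measurement scatter

# Least-squares calibration line: density = slope * dose + intercept
slope, intercept = np.polyfit(doses, density, 1)

def dose_from_density(d):
    """Invert the calibration line to read absorbed dose from film density."""
    return (d - intercept) / slope
```

    With a step-shaped filter, all the calibration points come from a single exposure, which is what collapses the otherwise time-consuming multi-exposure procedure into one irradiation.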

  8. Simplified method for creating a density-absorbed dose calibration curve for the low dose range from Gafchromic EBT3 film

    PubMed Central

    Gotanda, Tatsuhiro; Katsuda, Toshizo; Gotanda, Rumi; Kuwano, Tadao; Akagawa, Takuya; Tanki, Nobuyoshi; Tabuchi, Akihiko; Shimono, Tetsunori; Kawaji, Yasuyuki

    2016-01-01

    Radiochromic film dosimeters have a disadvantage in comparison with an ionization chamber in that the dosimetry process is time-consuming for creating a density-absorbed dose calibration curve. The purpose of this study was the development of a simplified method of creating a density-absorbed dose calibration curve from radiochromic film within a short time. This simplified method was performed using Gafchromic EBT3 film with a low energy dependence and step-shaped Al filter. The simplified method was compared with the standard method. The density-absorbed dose calibration curves created using the simplified and standard methods exhibited approximately similar straight lines, and the gradients of the density-absorbed dose calibration curves were −32.336 and −33.746, respectively. The simplified method can obtain calibration curves within a much shorter time compared to the standard method. It is considered that the simplified method for EBT3 film offers a more time-efficient means of determining the density-absorbed dose calibration curve within a low absorbed dose range such as the diagnostic range. PMID:28144120

  9. Improved quantification of important beer quality parameters based on nonlinear calibration methods applied to FT-MIR spectra.

    PubMed

    Cernuda, Carlos; Lughofer, Edwin; Klein, Helmut; Forster, Clemens; Pawliczek, Marcin; Brandstetter, Markus

    2017-01-01

    During the production process of beer, it is of utmost importance to guarantee a high consistency of the beer quality. For instance, the bitterness is an essential quality parameter which has to be controlled within the specifications at the beginning of the production process in the unfermented beer (wort) as well as in final products such as beer and beer mix beverages. Nowadays, analytical techniques for quality control in beer production are mainly based on manual supervision, i.e., samples are taken from the process and analyzed in the laboratory. This typically requires significant lab-technician effort for only a small fraction of samples to be analyzed, which leads to significant costs for beer breweries and companies. Fourier transform mid-infrared (FT-MIR) spectroscopy was used in combination with nonlinear multivariate calibration techniques to overcome (i) the time-consuming off-line analyses in beer production and (ii) already known limitations of standard linear chemometric methods, like partial least squares (PLS), for important quality parameters such as bitterness, citric acid, total acids, free amino nitrogen, final attenuation, or foam stability (Speers et al., J I Brewing. 2003;109(3):229-235; Zhang et al., J I Brewing. 2012;118(4):361-367). The calibration models are established with enhanced nonlinear techniques based (i) on a new piece-wise linear version of PLS, employing fuzzy rules for locally partitioning the latent variable space, and (ii) on extensions of support vector regression variants (ε-PLSSVR and ν-PLSSVR) for overcoming high computation times in high-dimensional problems and time-intensive, inappropriate settings of the kernel parameters. Furthermore, we introduce a new model selection scheme based on bagged ensembles in order to improve robustness and thus the predictive quality of the final models.
The approaches are tested on real-world calibration data sets for wort and beer mix beverages, and successfully compared to linear methods, showing a clear out-performance in most cases and being able to meet the model quality requirements defined by the experts at the beer company. [Figure: workflow for calibration of nonlinear model ensembles from FT-MIR spectra in beer production.]

  10. Improving the S-Shape Solar Radiation Estimation Method for Supporting Crop Models

    PubMed Central

    Fodor, Nándor

    2012-01-01

    In line with the critical comments formulated in relation to the S-shape global solar radiation estimation method, the original formula was improved via a 5-step procedure. The improved method was compared to four reference methods on a large North-American database. According to the investigated error indicators, the final 7-parameter S-shape method has the same or even better estimation efficiency than the original formula. The improved formula is able to provide radiation estimates with a particularly low error pattern index (PIdoy) which is especially important concerning the usability of the estimated radiation values in crop models. Using site-specific calibration, the radiation estimates of the improved S-shape method caused an average of 2.72 ± 1.02% (α = 0.05) relative error in the calculated biomass. Using only readily available site specific metadata the radiation estimates caused less than 5% relative error in the crop model calculations when they were used for locations in the middle, plain territories of the USA. PMID:22645451

  11. Predicting ambient aerosol thermal-optical reflectance measurements from infrared spectra: elemental carbon

    NASA Astrophysics Data System (ADS)

    Dillner, A. M.; Takahama, S.

    2015-10-01

    Elemental carbon (EC) is an important constituent of atmospheric particulate matter because it absorbs solar radiation influencing climate and visibility and it adversely affects human health. The EC measured by thermal methods such as thermal-optical reflectance (TOR) is operationally defined as the carbon that volatilizes from quartz filter samples at elevated temperatures in the presence of oxygen. Here, methods are presented to accurately predict TOR EC using Fourier transform infrared (FT-IR) absorbance spectra from atmospheric particulate matter collected on polytetrafluoroethylene (PTFE or Teflon) filters. This method is similar to the procedure developed for OC in prior work (Dillner and Takahama, 2015). Transmittance FT-IR analysis is rapid, inexpensive and nondestructive to the PTFE filter samples which are routinely collected for mass and elemental analysis in monitoring networks. FT-IR absorbance spectra are obtained from 794 filter samples from seven Interagency Monitoring of PROtected Visual Environment (IMPROVE) sites collected during 2011. Partial least squares regression is used to calibrate sample FT-IR absorbance spectra to collocated TOR EC measurements. The FT-IR spectra are divided into calibration and test sets. Two calibrations are developed: one from a uniform distribution of samples across the EC mass range (Uniform EC) and one from a uniform distribution of low-EC-mass samples (EC < 2.4 μg, Low Uniform EC). A hybrid approach which applies the Low EC calibration to Low EC samples and the Uniform EC calibration to all other samples is used to produce predictions for Low EC samples that have mean error on par with parallel TOR EC samples in the same mass range and an estimate of the minimum detection limit (MDL) that is on par with the TOR EC MDL.
For all samples, this hybrid approach leads to precise and accurate TOR EC predictions by FT-IR as indicated by high coefficient of determination (R2; 0.96), no bias (0.00 μg m-3, a concentration value based on the nominal IMPROVE sample volume of 32.8 m3), low error (0.03 μg m-3) and reasonable normalized error (21 %). These performance metrics can be achieved with various degrees of spectral pretreatment (e.g., including or excluding substrate contributions to the absorbances) and are comparable in precision and accuracy to collocated TOR measurements. Only the normalized error is higher for the FT-IR EC measurements than for collocated TOR. FT-IR spectra are also divided into calibration and test sets by the ratios OC/EC and ammonium/EC to determine the impact of OC and ammonium on EC prediction. We conclude that FT-IR analysis with partial least squares regression is a robust method for accurately predicting TOR EC in IMPROVE network samples, providing complementary information to TOR OC predictions (Dillner and Takahama, 2015) and the organic functional group composition and organic matter estimated previously from the same set of sample spectra (Ruthenburg et al., 2014).
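
    The hybrid scheme — apply the Low EC calibration only to samples that screen below the mass threshold — reduces to a simple two-model dispatch. A sketch with stand-in linear models; the coefficients are illustrative, not the paper's PLS calibrations:

```python
import numpy as np

def hybrid_predict(x, low_model, uniform_model, threshold=2.4):
    """Hybrid calibration: screen with the uniform-range model, then
    re-predict samples falling below the low-mass threshold with the
    calibration built from low-mass samples only."""
    first_pass = uniform_model(x)
    return np.where(first_pass < threshold, low_model(x), first_pass)

# Hypothetical stand-ins for the two calibrations (EC mass in ug)
uniform_model = lambda x: 1.00 * x + 0.30   # slightly biased at low mass
low_model = lambda x: 1.02 * x + 0.02       # tuned for EC < 2.4 ug
x = np.array([0.5, 1.0, 3.0, 6.0])
pred = hybrid_predict(x, low_model, uniform_model)
```

    Dispatching on the screening prediction rather than the (unknown) true mass keeps the procedure usable at prediction time, when only the spectrum is available.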

  12. Predicting ambient aerosol Thermal Optical Reflectance (TOR) measurements from infrared spectra: elemental carbon

    NASA Astrophysics Data System (ADS)

    Dillner, A. M.; Takahama, S.

    2015-06-01

    Elemental carbon (EC) is an important constituent of atmospheric particulate matter because it absorbs solar radiation influencing climate and visibility and it adversely affects human health. The EC measured by thermal methods such as Thermal-Optical Reflectance (TOR) is operationally defined as the carbon that volatilizes from quartz filter samples at elevated temperatures in the presence of oxygen. Here, methods are presented to accurately predict TOR EC using Fourier Transform Infrared (FT-IR) absorbance spectra from atmospheric particulate matter collected on polytetrafluoroethylene (PTFE or Teflon) filters. This method is similar to the procedure tested and developed for OC in prior work (Dillner and Takahama, 2015). Transmittance FT-IR analysis is rapid, inexpensive, and non-destructive to the PTFE filter samples which are routinely collected for mass and elemental analysis in monitoring networks. FT-IR absorbance spectra are obtained from 794 filter samples from seven Interagency Monitoring of PROtected Visual Environment (IMPROVE) sites collected during 2011. Partial least squares regression is used to calibrate sample FT-IR absorbance spectra to collocated TOR EC measurements. The FT-IR spectra are divided into calibration and test sets. Two calibrations are developed: one from a uniform distribution of samples across the EC mass range (Uniform EC) and one from a uniform distribution of low EC mass samples (EC < 2.4 μg, Low Uniform EC). A hybrid approach which applies the low EC calibration to low EC samples and the Uniform EC calibration to all other samples is used to produce predictions for low EC samples that have mean error on par with parallel TOR EC samples in the same mass range and an estimate of the minimum detection limit (MDL) that is on par with the TOR EC MDL.
For all samples, this hybrid approach leads to precise and accurate TOR EC predictions by FT-IR as indicated by a high coefficient of determination (R2; 0.96), no bias (0.00 μg m-3, a concentration value based on the nominal IMPROVE sample volume of 32.8 m3), low error (0.03 μg m-3) and reasonable normalized error (21 %). These performance metrics can be achieved with various degrees of spectral pretreatment (e.g., including or excluding substrate contributions to the absorbances) and are comparable in precision and accuracy to collocated TOR measurements. Only the normalized error is higher for the FT-IR EC measurements than for collocated TOR. FT-IR spectra are also divided into calibration and test sets by the ratios OC/EC and ammonium/EC to determine the impact of OC and ammonium on EC prediction. We conclude that FT-IR analysis with partial least squares regression is a robust method for accurately predicting TOR EC in IMPROVE network samples, providing complementary information to TOR OC predictions (Dillner and Takahama, 2015) and the organic functional group composition and organic matter (OM) estimated previously from the same set of sample spectra (Ruthenburg et al., 2014).

  13. Study on Parameter Identification of Assembly Robot based on Screw Theory

    NASA Astrophysics Data System (ADS)

    Yun, Shi; Xiaodong, Zhang

    2017-11-01

    The kinematic model of an assembly robot is one of the most important factors affecting its repetitive positioning precision. In order to improve model positioning accuracy, this paper first establishes the exponential product model of the ER16-1600 assembly robot on the basis of screw theory, and then identifies the robot's kinematic parameters using an iterative least squares method. Comparing experiments before and after calibration proves that the method clearly improves the positioning accuracy of the assembly robot.

  14. Learning an Eddy Viscosity Model Using Shrinkage and Bayesian Calibration: A Jet-in-Crossflow Case Study

    DOE PAGES

    Ray, Jaideep; Lefantzi, Sophia; Arunajatesan, Srinivasan; ...

    2017-09-07

    In this paper, we demonstrate a statistical procedure for learning a high-order eddy viscosity model (EVM) from experimental data and using it to improve the predictive skill of a Reynolds-averaged Navier–Stokes (RANS) simulator. The method is tested in a three-dimensional (3D), transonic jet-in-crossflow (JIC) configuration. The process starts with a cubic eddy viscosity model (CEVM) developed for incompressible flows. It is fitted to limited experimental JIC data using shrinkage regression. The shrinkage process removes all the terms from the model except an intercept, a linear term, and a quadratic one involving the square of the vorticity. The shrunk eddy viscosity model is implemented in a RANS simulator and calibrated, using vorticity measurements, to infer three parameters. The calibration is Bayesian and is solved using a Markov chain Monte Carlo (MCMC) method. A 3D probability density distribution for the inferred parameters is constructed, thus quantifying the uncertainty in the estimate. The prohibitive cost of using a 3D flow simulator inside an MCMC loop is mitigated by using surrogate models (“curve-fits”). A support vector machine classifier (SVMC) is used to impose our prior belief regarding parameter values, specifically to exclude nonphysical parameter combinations. The calibrated model is compared, in terms of its predictive skill, to simulations using uncalibrated linear EVMs and CEVMs. Finally, we find that the calibrated model, with one quadratic term, is more accurate than the uncalibrated simulator. The model is also checked at a flow condition at which it was not calibrated.
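The term-pruning step relies on L1 shrinkage: with a large enough penalty, coefficients of uninformative basis terms are driven to (typically exactly) zero. A minimal coordinate-descent lasso on synthetic data, not the JIC dataset, illustrates the mechanism:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Minimal coordinate-descent lasso (L1 shrinkage) sketch.
    Each coordinate update soft-thresholds the partial-residual correlation."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]          # partial residual for term j
            rho = X[:, j] @ r / n
            z = X[:, j] @ X[:, j] / n
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / z  # soft-threshold
    return beta

# Toy data: only columns 0 and 2 are truly active.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = 2.0 * X[:, 0] + 1.0 * X[:, 2] + 0.05 * rng.normal(size=200)
beta = lasso_cd(X, y, lam=0.2)
print(beta)  # the two inactive coefficients shrink toward zero
```

The penalty biases the surviving coefficients toward zero as well, which is the usual price paid for automatic term selection.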

  15. Learning an Eddy Viscosity Model Using Shrinkage and Bayesian Calibration: A Jet-in-Crossflow Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ray, Jaideep; Lefantzi, Sophia; Arunajatesan, Srinivasan

    In this paper, we demonstrate a statistical procedure for learning a high-order eddy viscosity model (EVM) from experimental data and using it to improve the predictive skill of a Reynolds-averaged Navier–Stokes (RANS) simulator. The method is tested in a three-dimensional (3D), transonic jet-in-crossflow (JIC) configuration. The process starts with a cubic eddy viscosity model (CEVM) developed for incompressible flows. It is fitted to limited experimental JIC data using shrinkage regression. The shrinkage process removes all the terms from the model except an intercept, a linear term, and a quadratic one involving the square of the vorticity. The shrunk eddy viscosity model is implemented in a RANS simulator and calibrated, using vorticity measurements, to infer three parameters. The calibration is Bayesian and is solved using a Markov chain Monte Carlo (MCMC) method. A 3D probability density distribution for the inferred parameters is constructed, thus quantifying the uncertainty in the estimate. The prohibitive cost of using a 3D flow simulator inside an MCMC loop is mitigated by using surrogate models (“curve-fits”). A support vector machine classifier (SVMC) is used to impose our prior belief regarding parameter values, specifically to exclude nonphysical parameter combinations. The calibrated model is compared, in terms of its predictive skill, to simulations using uncalibrated linear EVMs and CEVMs. Finally, we find that the calibrated model, with one quadratic term, is more accurate than the uncalibrated simulator. The model is also checked at a flow condition at which it was not calibrated.

  16. Calibration of PCB-132 Sensors in a Shock Tube

    NASA Technical Reports Server (NTRS)

    Berridge, Dennis C.; Schneider, Steven P.

    2012-01-01

    While PCB-132 sensors have proven useful for measuring second-mode instability waves in many hypersonic wind tunnels, they are currently limited by their calibration. Until now, the factory calibration has been all that was available, which is a single-point calibration at an amplitude three orders of magnitude higher than that of a second-mode wave. In addition, little information has been available about the frequency response or spatial resolution of the sensors, which is important for measuring high-frequency instability waves. These shortcomings make it difficult to compare measurements at different conditions and between different sensors. If accurate quantitative measurements could be performed, comparisons of the growth and breakdown of instability waves could be made in different facilities, possibly leading to a method of predicting the amplitude at which the waves break down into turbulence, improving transition prediction. A method for calibrating the sensors is proposed using a newly built shock tube at Purdue University. This shock tube, essentially a half-scale version of the 6-Inch shock tube at the Graduate Aerospace Laboratories at Caltech, has been designed to attain a moderate vacuum in the driven section. Low driven pressures should allow the creation of very weak, yet still relatively thin, shock waves, and static pressure rises within the range of second-mode amplitudes are expected to be achievable. The shock tube has been designed to create clean, planar shock waves with a laminar boundary layer to allow for accurate calibrations. Stronger shock waves can be used to identify the frequency response of the sensors out to hundreds of kilohertz.

  17. Advances in the RXTE Proportional Counter Array Calibration: Nearing the Statistical Limit

    NASA Technical Reports Server (NTRS)

    Shaposhnikov, Nikolai; Jahoda, Keith; Markwardt, Craig; Swank, Jean; Strohmayer, Tod

    2012-01-01

    During its 16 years of service, the Rossi X-ray Timing Explorer (RXTE) mission has provided an extensive archive of data, which will serve as a primary source of high-cadence observations of variable X-ray sources for fast timing studies. It is, therefore, very important to have the most reliable calibration of the RXTE instruments. The Proportional Counter Array (PCA) is the primary instrument on board RXTE, providing data in the 2-50 keV band with better than millisecond time resolution in up to 256 energy channels. In 2009 the RXTE team revised the response residual minimization method used to derive the parameters of the PCA physical model. The procedure is now based on residual minimization between the model spectrum for the Crab Nebula emission and a calibration data set consisting of a number of spectra from the Crab and the on-board Am241 calibration source, uniformly covering the whole RXTE mission span. The new method led to much more effective model convergence and allowed for a better understanding of the behavior of the PCA energy-to-channel relationship. It greatly improved the response matrix performance. We describe the new version of the RXTE/PCA response generator PCARMF v11.7, along with the corresponding energy-to-channel conversion table (version e05v04), and their differences from the previous releases of the PCA calibration. The new PCA response adequately represents the spectrum of the calibration sources and successfully predicts the energy of the narrow iron emission line in Cas A throughout the RXTE mission.

  18. A novel pretreatment method combining sealing technique with direct injection technique applied for improving biosafety.

    PubMed

    Wang, Xinyu; Gao, Jing-Lin; Du, Chaohui; An, Jing; Li, MengJiao; Ma, Haiyan; Zhang, Lina; Jiang, Ye

    2017-01-01

    Biosafety risks in clinical bioanalysis are attracting increasing attention, and a safe, simple, and effective sample preparation method is urgently needed. To improve the biosafety of clinical analysis, we used the antiviral drugs adefovir and tenofovir as model drugs and developed a safe pretreatment method combining a sealing technique with a direct injection technique. The inter- and intraday precision (RSD %) of the method were <4%, and the extraction recoveries ranged from 99.4 to 100.7%. Meanwhile, the results showed that standard solutions could be used to prepare the calibration curve instead of spiked plasma, yielding more accurate results. Compared with traditional methods, the novel method not only significantly improved the biosafety of the pretreatment, but also achieved several advantages including higher precision, favorable sensitivity and satisfactory recovery. With these highly practical and desirable characteristics, the novel method may become a feasible platform in bioanalysis.

  19. Improving the Traceability of Meteorological Measurements at Automatic Weather Stations in Thailand

    NASA Astrophysics Data System (ADS)

    Keawprasert, T.; Sinhaneti, T.; Phuuntharo, P.; Phanakulwijit, S.; Nimsamer, A.

    2017-08-01

    A joint project between the National Institute of Metrology Thailand (NIMT) and the Thai Meteorological Department (TMD) was established to improve the traceability of meteorological measurements at automatic weather stations (AWSs) in Thailand. The project aimed to improve the traceability of air temperature, relative humidity and atmospheric pressure measurements by implementing on-site calibration facilities and developing new calibration procedures. First, new portable calibration facilities for air temperature, humidity and pressure were set up as working standards of the TMD. A portable humidity calibrator was applied as a uniform and stable source for the calibration of thermo-hygrometers. A dew-point hygrometer was employed as the reference hygrometer, and a platinum resistance thermometer (PRT) traceable to NIMT was used as the reference thermometer. The uniformity and stability in both temperature and relative humidity were characterized at NIMT. A transportable pressure calibrator was used for the calibration of the air pressure sensor. The estimated overall uncertainty of the calibration setup is 0.2 K for air temperature, 1.0 % for relative humidity and 0.2 hPa for atmospheric pressure, respectively. Second, on-site calibration procedures were developed, and four AWSs in the central and northern parts of Thailand were chosen as pilot stations for on-site calibration using the new calibration setups and the developed procedures. At each station, the calibration was done at the minimum, average and maximum temperatures of the year for air temperature; at 20 %, 55 % and 90 % relative humidity at the average air temperature of that station; and over the one-year statistical pressure range for atmospheric pressure at ambient temperature. Additional in-field uncertainty contributions, such as the temperature dependence of the relative humidity measurement, were evaluated and included in the overall uncertainty budget. 
Preliminary calibration results showed that using a separate PRT probe at these AWSs is recommended for improving the accuracy of air temperature measurement. For relative humidity measurement, the data logger software needs to be upgraded to achieve an accuracy better than 3 %. For atmospheric pressure measurement, a higher-accuracy barometer traceable to NIMT could be used to reduce the calibration uncertainty to below 0.2 hPa.
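Overall uncertainty budgets like the ones quoted above (0.2 K, 1.0 %, 0.2 hPa) are typically built by combining independent standard-uncertainty contributions in quadrature. A minimal sketch, with illustrative component values that are not the NIMT/TMD budget entries:

```python
import math

def combined_uncertainty(components):
    """Root-sum-square combination of independent standard uncertainties,
    as in a conventional uncertainty budget."""
    return math.sqrt(sum(u * u for u in components))

# Illustrative temperature components (K): reference-probe calibration,
# source non-uniformity, short-term drift.
u_temp = combined_uncertainty([0.12, 0.10, 0.05])
print(round(u_temp, 3))  # → 0.164
```

Multiplying the combined standard uncertainty by a coverage factor (k = 2 for roughly 95 % coverage) gives the expanded uncertainty usually reported on calibration certificates.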

  20. ADVANCED WAVEFORM SIMULATION FOR SEISMIC MONITORING EVENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Helmberger, Donald V.; Tromp, Jeroen; Rodgers, Arthur J.

    Earthquake source parameters underpin several aspects of nuclear explosion monitoring: calibration of moment magnitudes (including coda magnitudes) and magnitude and distance amplitude corrections (MDAC); source depths; discrimination by isotropic moment tensor components; and waveform modeling for structure (including waveform tomography). This project seeks to improve methods for, and broaden the applicability of, estimating source parameters from broadband waveforms using the Cut-and-Paste (CAP) methodology. The CAP method uses a library of Green’s functions for a one-dimensional (1D, depth-varying) seismic velocity model. The method separates the main arrivals of the regional waveform into five windows: Pnl (vertical and radial components), Rayleigh (vertical and radial components) and Love (transverse component). Source parameters are estimated by a grid search over strike, dip, rake and depth, and the seismic moment (or equivalently the moment magnitude, MW) is adjusted to fit the amplitudes. Key to the CAP method is allowing the synthetic seismograms to shift in time relative to the data in order to account for path-propagation errors (delays) in the 1D seismic velocity model used to compute the Green’s functions. The CAP method has been shown to improve estimates of source parameters, especially when delay and amplitude biases are calibrated using high signal-to-noise data from moderate earthquakes (CAP+).

  1. Calibrating random forests for probability estimation.

    PubMed

    Dankowski, Theresa; Ziegler, Andreas

    2016-09-30

    Probabilities can be consistently estimated using random forests. It is, however, unclear how random forests should be updated to make predictions for other centers or at different time points. In this work, we present two approaches for updating random forests for probability estimation. The first method has been proposed by Elkan and may be used for updating any machine learning approach yielding consistent probabilities, so-called probability machines. The second approach is a new strategy specifically developed for random forests. Using the terminal nodes, which represent conditional probabilities, the random forest is first translated to logistic regression models. These are, in turn, used for re-calibration. The two updating strategies were compared in a simulation study and are illustrated with data from the German Stroke Study Collaboration. In most simulation scenarios, both methods led to similar improvements. In the simulation scenario in which the stricter assumptions of Elkan's method were not met, the logistic regression-based re-calibration approach for random forests outperformed Elkan's method. It also performed better on the stroke data than Elkan's method. The strength of Elkan's method is its general applicability to any probability machine. However, if the strict assumptions underlying this approach are not met, the logistic regression-based approach is preferable for updating random forests for probability estimation. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
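The general idea of logistic re-calibration, mapping a model's miscalibrated probabilities through a logistic model fitted on the new center's outcomes, can be sketched as follows. This is a simplified, Platt-style illustration of the concept on toy data, not the paper's terminal-node translation of the forest:

```python
import numpy as np

def logit(p, eps=1e-6):
    p = np.clip(p, eps, 1 - eps)
    return np.log(p / (1 - p))

def recalibrate(p_old, y, lr=0.3, n_iter=3000):
    """Fit p_new = sigmoid(a * logit(p_old) + b) to outcomes y by gradient
    descent on the log-loss; return the re-calibration map."""
    x = logit(p_old)
    a, b = 1.0, 0.0
    for _ in range(n_iter):
        z = 1.0 / (1.0 + np.exp(-(a * x + b)))
        g = z - y                              # log-loss gradient w.r.t. linear term
        a -= lr * float(np.mean(g * x))
        b -= lr * float(np.mean(g))
    return lambda p: 1.0 / (1.0 + np.exp(-(a * logit(p) + b)))

# Toy example: the "old" model systematically overestimates risk at the new center.
rng = np.random.default_rng(1)
p_true = rng.uniform(0.05, 0.95, size=5000)
y = (rng.uniform(size=5000) < p_true).astype(float)
p_over = np.clip(p_true + 0.15, 0.02, 0.98)    # biased predictions
cal = recalibrate(p_over, y)
print(np.mean(p_over), np.mean(cal(p_over)), np.mean(y))
```

After fitting, the mean re-calibrated probability matches the observed event rate at the new center, which the biased predictions do not.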

  2. A new optical method coupling light polarization and Vis-NIR spectroscopy to improve the measured absorbance signal's quality of soil samples.

    NASA Astrophysics Data System (ADS)

    Gobrecht, Alexia; Bendoula, Ryad; Roger, Jean-Michel; Bellon-Maurel, Véronique

    2014-05-01

    Visible-near-infrared spectroscopy (Vis-NIRS) is now commonly used to measure different physical and chemical parameters of soils, including carbon content. However, prediction model accuracy is insufficient for Vis-NIRS to replace routine laboratory analysis. One of the biggest issues facing this technique is light scattering by soil particles. It causes departures from the linear relationship between the absorbance spectrum and the concentration of the chemicals of interest assumed by the Beer-Lambert law, which underpins the calibration models. Because light/matter interactions are the basis of the resulting linear modeling, it is essential to improve the metrological quality of the measured signal in order to optimize calibration. Optics can help to mitigate the scattering effect on the signal. We put forward a new optical setup coupling linearly polarized light with a Vis-NIR spectrometer to free the measured spectra from multiple-scattering effects. The corrected measured spectrum was then used to compute an absorbance spectrum of the sample, using Dahm's equation in the framework of the Representative Layer Theory. This method had previously been tested and validated on liquid (milk + dye) and powdered (sand + dye) samples showing scattering (and absorbing) properties, and the obtained absorbance was a very good approximation of the Beer-Lambert absorbance. Here, we tested the method on a set of 54 soil samples to predict soil organic carbon content. In order to assess the signal quality improvement achieved by this method, we built and compared calibration models using the partial least squares (PLS) algorithm. The prediction model built from the new absorbance spectra outperformed the model built with the classical absorbance traditionally obtained with Vis-NIR diffuse reflectance. This study is a good illustration of the strong influence of signal quality on prediction model performance.

  3. Comparison between a model-based and a conventional pyramid sensor reconstructor.

    PubMed

    Korkiakoski, Visa; Vérinaud, Christophe; Le Louarn, Miska; Conan, Rodolphe

    2007-08-20

    A model of a non-modulated pyramid wavefront sensor (P-WFS) based on Fourier optics is presented. Linearizations of the model, represented as Jacobian matrices, are used to improve the P-WFS phase estimates. It is shown in simulations that a linear approximation of the P-WFS is sufficient in closed-loop adaptive optics. A method to compute model-based synthetic P-WFS command matrices is also shown, and its performance is compared to that of the conventional calibration. It is observed that in poor visibility the new calibration is better than the conventional one.

  4. Calibrating the stress-time curve of a combined finite-discrete element method to a Split Hopkinson Pressure Bar experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osthus, Dave; Godinez, Humberto C.; Rougier, Esteban

    We present a generic method for automatically calibrating a computer code to an experiment, with uncertainty, for a given “training” set of computer code runs. The calibration technique is general and probabilistic, meaning the calibration uncertainty is represented in the form of a probability distribution. We demonstrate the calibration method by calibrating a combined Finite-Discrete Element Method (FDEM) to a Split Hopkinson Pressure Bar (SHPB) experiment with a granite sample. The probabilistic calibration method combines runs of a FDEM computer simulation for a range of “training” settings and experimental uncertainty to develop a statistical emulator. The process allows for calibration of input parameters and produces output quantities with uncertainty estimates for settings where simulation results are desired. Input calibration and FDEM fitted results are presented. We find that the maximum shear strength σ_t^max and, to a lesser extent, the maximum tensile strength σ_n^max govern the behavior of the stress-time curve before and around the peak, while the specific energy in Mode II (shear), E_t, largely governs the post-peak behavior of the stress-time curve. Good agreement is found between the calibrated FDEM and the SHPB experiment. Interestingly, we find the SHPB experiment to be rather uninformative for calibrating the softening-curve shape parameters (a, b, and c). This work stands as a successful demonstration of how a general probabilistic calibration framework can automatically calibrate FDEM parameters to an experiment.

  5. Calibrating the stress-time curve of a combined finite-discrete element method to a Split Hopkinson Pressure Bar experiment

    DOE PAGES

    Osthus, Dave; Godinez, Humberto C.; Rougier, Esteban; ...

    2018-05-01

    We present a generic method for automatically calibrating a computer code to an experiment, with uncertainty, for a given “training” set of computer code runs. The calibration technique is general and probabilistic, meaning the calibration uncertainty is represented in the form of a probability distribution. We demonstrate the calibration method by calibrating a combined Finite-Discrete Element Method (FDEM) to a Split Hopkinson Pressure Bar (SHPB) experiment with a granite sample. The probabilistic calibration method combines runs of a FDEM computer simulation for a range of “training” settings and experimental uncertainty to develop a statistical emulator. The process allows for calibration of input parameters and produces output quantities with uncertainty estimates for settings where simulation results are desired. Input calibration and FDEM fitted results are presented. We find that the maximum shear strength σ_t^max and, to a lesser extent, the maximum tensile strength σ_n^max govern the behavior of the stress-time curve before and around the peak, while the specific energy in Mode II (shear), E_t, largely governs the post-peak behavior of the stress-time curve. Good agreement is found between the calibrated FDEM and the SHPB experiment. Interestingly, we find the SHPB experiment to be rather uninformative for calibrating the softening-curve shape parameters (a, b, and c). This work stands as a successful demonstration of how a general probabilistic calibration framework can automatically calibrate FDEM parameters to an experiment.

  6. A framework for propagation of uncertainty contributed by parameterization, input data, model structure, and calibration/validation data in watershed modeling

    USDA-ARS?s Scientific Manuscript database

    The progressive improvement of computer science and the development of auto-calibration techniques mean that calibration of simulation models is no longer a major challenge for watershed planning and management. Modelers now increasingly focus on challenges such as improved representation of watershed...

  7. Cross-calibration of A.M. constellation sensors for long term monitoring of land surface processes

    USGS Publications Warehouse

    Meyer, D.; Chander, G.

    2006-01-01

    Data from multiple sensors must be used together to gain a more complete understanding of land surface processes at a variety of scales. Although higher-level products derived from different sensors (e.g., vegetation cover, albedo, surface temperature) can be validated independently, the degree to which these sensors and their products can be compared to one another is vastly improved if their relative spectro-radiometric responses are known. Most often, sensors are calibrated directly against diffuse solar irradiation or vicariously against ground targets. However, space-based targets are not traceable to metrological standards, and vicarious calibrations are expensive and provide a poor sampling of a sensor's full dynamic range. Cross-calibration of two sensors can augment these methods if certain conditions can be met: (1) the spectral responses are similar; (2) the observations are reasonably concurrent (similar atmospheric and solar illumination conditions); (3) errors due to misregistration of inhomogeneous surfaces can be minimized (including scale differences); and (4) the viewing geometry is similar (or some reasonable knowledge of the surface bi-directional reflectance distribution functions is available). This study extends a previous study of Terra/MODIS and Landsat/ETM+ cross-calibration by including the Terra/ASTER and EO-1/ALI sensors, exploring the impacts of cross-calibrating sensors when the conditions described above are met to some degree but not perfectly. Measures of spectral response differences and methods for cross-calibrating such sensors are provided in this study. These instruments are cross-calibrated using the Railroad Valley playa in Nevada. Best-fit linear coefficients (slope and offset) are provided for the ALI-to-MODIS and ETM+-to-MODIS cross-calibrations, and root-mean-squared errors (RMSEs) and correlation coefficients are provided to quantify the uncertainty in these relationships. 
Due to problems with direct calibration of ASTER data, linear fits were developed between ASTER and ETM+ to assess the impacts of spectral bandpass differences between the two systems. In theory, the linear fits and uncertainties can be used to compare radiance and reflectance products derived from each instrument.
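The cross-calibration fit itself reduces to estimating best-fit linear coefficients between near-coincident measurements from the two sensors over a stable target, with an RMSE quantifying residual disagreement. A sketch on synthetic data (not Railroad Valley measurements):

```python
import numpy as np

def cross_calibrate(radiance_a, radiance_b):
    """Fit radiance_b ≈ slope * radiance_a + offset and report the RMSE
    of the residuals as the cross-calibration uncertainty."""
    slope, offset = np.polyfit(radiance_a, radiance_b, 1)
    residuals = radiance_b - (slope * radiance_a + offset)
    rmse = float(np.sqrt(np.mean(residuals ** 2)))
    return float(slope), float(offset), rmse

# Synthetic near-coincident radiances over a bright, stable target.
rng = np.random.default_rng(42)
la = rng.uniform(50.0, 250.0, size=40)                 # sensor A radiances
lb = 1.03 * la - 2.0 + rng.normal(0.0, 1.5, size=40)   # sensor B, with noise
slope, offset, rmse = cross_calibrate(la, lb)
print(slope, offset, rmse)
```

A slope near 1 and a small offset indicate consistent radiometric scales; the RMSE folds in both sensor noise and any imperfectly met matching conditions (concurrency, registration, geometry).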

  8. Absolute photometric calibration of IRAC: lessons learned using nine years of flight data

    NASA Astrophysics Data System (ADS)

    Carey, S.; Ingalls, J.; Hora, J.; Surace, J.; Glaccum, W.; Lowrance, P.; Krick, J.; Cole, D.; Laine, S.; Engelke, C.; Price, S.; Bohlin, R.; Gordon, K.

    2012-09-01

    Significant improvements in our understanding of various photometric effects have occurred over the more than nine years of flight operations of the Infrared Array Camera (IRAC) aboard the Spitzer Space Telescope. With the accumulation of calibration data, photometric variations that are intrinsic to the instrument can now be mapped with high fidelity. Using all existing data on calibration stars, the array-location-dependent photometric correction (the variation of flux with position on the array) and the correction for intra-pixel sensitivity variation (pixel-phase) have been modeled simultaneously. Examination of the warm mission data enabled the characterization of the underlying form of the pixel-phase variation in the cryogenic data. In addition to the accumulation of calibration data, significant improvements in the calibration of the truth spectra of the calibrators have taken place. Using the work of Engelke et al. (2006), the K III calibrators show no offset compared to the A V calibrators, providing a second pillar of the calibration scheme. The current cryogenic calibration is better than 3% in an absolute sense, with most of the uncertainty still in the knowledge of the true flux densities of the primary calibrators. We present the final state of the cryogenic IRAC calibration and a comparison of the IRAC calibration to an independent calibration methodology using the HST primary calibrators.

  9. [Determination of the content of sulfur in coal by the infrared absorption method with high accuracy].

    PubMed

    Wang, Hai-Feng; Lu, Hai; Li, Jia; Sun, Guo-Hua; Wang, Jun; Dai, Xin-Hua

    2014-02-01

    The present paper reports the differential scanning calorimetry-thermogravimetry curves and the infrared (IR) absorption spectrometry under a temperature program, analyzed by a combined simultaneous thermal analysis-IR spectrometer. The gas products of coal were identified by IR spectrometry. This paper emphasizes the high-temperature combustion-IR absorption method, a convenient and accurate method which measures the content of sulfur in coal indirectly through the determination of the sulfur dioxide content in the mixed gas products by IR absorption. It was demonstrated that when the instrument was calibrated with various pure sulfur-containing compounds and certified reference materials (CRMs) for coal, there was a large deviation in the measured sulfur contents. This indicates that the difference in the chemical speciation of sulfur between the CRMs and the analyte results in a systematic error. The time-IR absorption curve was used to analyze the composition of sulfur at low and high temperatures, and the sulfur content of the coal sample was then determined using a coal CRM with a close sulfur composition. The systematic error due to the difference in the chemical speciation of sulfur between the CRM and the analyte was thereby eliminated. In addition, in this high-temperature combustion-IR absorption method, the masses of the CRM and the analyte were adjusted so that their sulfur masses were equal, and the CRM and the analyte were then measured alternately. Compared with the conventional multi-point calibration method using calibration curves of signal intensity vs. sulfur mass, this single-point calibration method reduced the effect of the drift of the IR detector and improved the repeatability of results. The sulfur contents and standard deviations of an anthracite coal and a bituminous coal with low sulfur content determined by this modified method were 0.345% (0.004%) and 0.372% (0.008%), respectively. 
The expanded uncertainty (U, k = 2) of the sulfur contents of the two coal samples was evaluated to be 0.019% and 0.021%, respectively. Two main modifications, namely the calibration using a coal CRM with a similar composition of low-temperature and high-temperature sulfur, and the single-point calibration alternating CRM and analyte, endow the high-temperature combustion-IR absorption method with an accuracy clearly better than that of the ASTM method. Therefore, this modified method has good potential for the analysis of sulfur content.
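The single-point calibration alternating CRM and analyte reduces to a signal-ratio computation: with the masses chosen so the absolute sulfur masses are nearly equal, the analyte's sulfur content follows from the ratio of IR signals. A sketch with illustrative numbers, not the paper's measurements:

```python
def sulfur_content(signal_sample, mass_sample, signal_crm, mass_crm, crm_sulfur_frac):
    """Single-point calibration: the measured sulfur mass scales linearly
    with the IR signal, so the sample's sulfur mass is the CRM's sulfur
    mass times the signal ratio."""
    sulfur_mass_crm = mass_crm * crm_sulfur_frac
    sulfur_mass_sample = sulfur_mass_crm * signal_sample / signal_crm
    return sulfur_mass_sample / mass_sample

# Illustrative run: CRM of 1.000 g at 0.350 % S; analyte of 1.014 g with a
# slightly higher IR signal.
content = sulfur_content(10.10, 1.014, 10.00, 1.000, 0.00350)
print(round(100 * content, 3))  # → 0.349 (percent sulfur)
```

Because the CRM and analyte are measured back to back at matched sulfur masses, slow detector drift affects both signals nearly equally and largely cancels in the ratio.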

  10. Accuracy, reproducibility, and uncertainty analysis of thyroid-probe-based activity measurements for determination of dose calibrator settings.

    PubMed

    Esquinas, Pedro L; Tanguay, Jesse; Gonzalez, Marjorie; Vuckovic, Milan; Rodríguez-Rodríguez, Cristina; Häfeli, Urs O; Celler, Anna

    2016-12-01

    In the nuclear medicine department, the activity of radiopharmaceuticals is measured using dose calibrators (DCs) prior to patient injection. The DC consists of an ionization chamber that measures the current generated by ionizing radiation (emitted from the radiotracer). In order to obtain an activity reading, the current is converted into units of activity by applying an appropriate calibration factor (also referred to as the DC dial setting). Accurate determination of DC dial settings is crucial to ensure that patients receive the appropriate dose in diagnostic scans or radionuclide therapies. The goals of this study were (1) to describe a practical method to experimentally determine dose calibrator settings using a thyroid probe (TP) and (2) to investigate the accuracy, reproducibility, and uncertainties of the method. As an illustration, the TP method was applied to determine 188Re dial settings for two dose calibrator models: Atomlab 100plus and Capintec CRC-55tR. Using the TP to determine dose calibrator settings involved three measurements. First, the energy-dependent efficiency of the TP was determined from energy spectra measurements of two calibration sources (152Eu and 22Na). Second, the gamma emissions from the investigated isotope (188Re) were measured using the TP and its activity was determined using γ-ray spectroscopy methods. Ambient background, scatter, and source-geometry corrections were applied during the efficiency and activity determination steps. Third, the TP-based 188Re activity was used to determine the dose calibrator settings following the calibration curve method [B. E. Zimmerman et al., J. Nucl. Med. 40, 1508-1516 (1999)]. The interobserver reproducibility of TP measurements was determined by the coefficient of variation (COV), and the uncertainties associated with each step of the measuring process were estimated. 
The accuracy of activity measurements using the proposed method was evaluated by comparing the TP activity estimates of 99mTc, 188Re, 131I, and 57Co samples to high-purity Ge (HPGe) γ-ray spectroscopy measurements. The experimental 188Re dial settings determined with the TP were 76.5 ± 4.8 and 646 ± 43 for the Atomlab 100plus and the Capintec CRC-55tR, respectively. In the case of the Atomlab 100plus, the TP-based dial settings improved the accuracy of 188Re activity measurements (confirmed by HPGe measurements) as compared to the manufacturer-recommended settings. For the Capintec CRC-55tR, the TP-based settings were in agreement with previous results [B. E. Zimmerman et al., J. Nucl. Med. 40, 1508-1516 (1999)], which demonstrated that manufacturer-recommended settings overestimate 188Re activity by more than 20%. The largest source of uncertainty in the experimentally determined dial settings was due to the application of a geometry correction factor, followed by the uncertainty of the scatter-corrected photopeak counts and the uncertainty of the TP efficiency calibration experiment. When using the most intense photopeak of the sample's emissions, the TP method yielded accurate (within 5% error) and reproducible (COV = 2%) measurements of the sample's activity. The relative uncertainties associated with such measurements ranged from 6% to 8% (expanded uncertainty at the 95% confidence interval, k = 2). Accurate determination/verification of dose calibrator dial settings can be performed using a thyroid probe in the nuclear medicine department.

  11. Calibration Method to Eliminate Zeroth Order Effect in Lateral Shearing Interferometry

    NASA Astrophysics Data System (ADS)

    Fang, Chao; Xiang, Yang; Qi, Keqi; Chen, Dawei

    2018-04-01

In this paper, a calibration method is proposed that eliminates the zeroth-order effect in lateral shearing interferometry. An analytical expression of the calibration error function is deduced, and the relationship between the phase-restoration error and the calibration error is established. The analytical results show that the phase-restoration error introduced by the calibration error is proportional to the phase-shifting error and the zeroth-order effect. The calibration method is verified using simulations and experiments. The simulation results show that the phase-restoration error is approximately proportional to the phase-shifting error and the zeroth-order effect when the phase-shifting error is less than 2° and the zeroth-order effect is less than 0.2. The experimental results show that, compared with the conventional method using 9-frame interferograms, the calibration method with 5-frame interferograms achieves nearly the same restoration accuracy.
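For context, 5-frame phase restoration of the kind compared above is commonly done with the Hariharan five-frame algorithm; the sketch below recovers a known phase from five synthetic frames with nominal π/2 steps. This is a standard textbook algorithm, not the paper's specific calibration procedure.

```python
import numpy as np

def hariharan_phase(frames):
    """Recover phase from five frames I1..I5 taken at phase shifts
    -pi, -pi/2, 0, +pi/2, +pi (Hariharan five-frame algorithm)."""
    i1, i2, i3, i4, i5 = frames
    return np.arctan2(2.0 * (i2 - i4), 2.0 * i3 - i1 - i5)

# Synthetic fringe signal: bias 1.0, modulation 0.5, true phase 0.7 rad.
phi_true = 0.7
shifts = np.array([-np.pi, -np.pi / 2, 0.0, np.pi / 2, np.pi])
frames = 1.0 + 0.5 * np.cos(phi_true + shifts)

phi_est = hariharan_phase(frames)
```

With ideal π/2 steps the restoration is exact; a miscalibrated phase shifter or a residual zeroth-order term perturbs the frames and produces exactly the kind of phase-restoration error the paper analyzes.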

  12. Reconstructing paleoclimate fields using online data assimilation with a linear inverse model

    NASA Astrophysics Data System (ADS)

    Perkins, Walter A.; Hakim, Gregory J.

    2017-05-01

    We examine the skill of a new approach to climate field reconstructions (CFRs) using an online paleoclimate data assimilation (PDA) method. Several recent studies have foregone climate model forecasts during assimilation due to the computational expense of running coupled global climate models (CGCMs) and the relatively low skill of these forecasts on longer timescales. Here we greatly diminish the computational cost by employing an empirical forecast model (linear inverse model, LIM), which has been shown to have skill comparable to CGCMs for forecasting annual-to-decadal surface temperature anomalies. We reconstruct annual-average 2 m air temperature over the instrumental period (1850-2000) using proxy records from the PAGES 2k Consortium Phase 1 database; proxy models for estimating proxy observations are calibrated on GISTEMP surface temperature analyses. We compare results for LIMs calibrated using observational (Berkeley Earth), reanalysis (20th Century Reanalysis), and CMIP5 climate model (CCSM4 and MPI) data relative to a control offline reconstruction method. Generally, we find that the usage of LIM forecasts for online PDA increases reconstruction agreement with the instrumental record for both spatial fields and global mean temperature (GMT). Specifically, the coefficient of efficiency (CE) skill metric for detrended GMT increases by an average of 57 % over the offline benchmark. LIM experiments display a common pattern of skill improvement in the spatial fields over Northern Hemisphere land areas and in the high-latitude North Atlantic-Barents Sea corridor. Experiments for non-CGCM-calibrated LIMs reveal region-specific reductions in spatial skill compared to the offline control, likely due to aspects of the LIM calibration process. Overall, the CGCM-calibrated LIMs have the best performance when considering both spatial fields and GMT. 
A comparison with the persistence forecast experiment suggests that improvements are associated with the linear dynamical constraints of the forecast and not simply persistence of temperature anomalies.
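The LIM forecast step at the heart of this online method reduces to a lag-covariance regression: the propagator is G = C(τ)C(0)⁻¹ estimated from calibration anomalies. The sketch below builds G from toy multivariate anomaly data (not the PAGES 2k or GISTEMP data used in the study).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy calibration anomalies X (n_state x n_years) generated by a known
# stable linear operator plus noise, standing in for surface temperature.
n_state, n_years = 3, 500
A_true = np.array([[0.6, 0.2, 0.0],
                   [0.0, 0.5, 0.1],
                   [0.1, 0.0, 0.4]])
X = np.zeros((n_state, n_years))
for t in range(1, n_years):
    X[:, t] = A_true @ X[:, t - 1] + rng.standard_normal(n_state)

# LIM propagator at lag tau = 1: G = C(tau) @ inv(C(0)), where C(tau) is
# the lag-tau covariance of the calibration anomalies.
X0, X1 = X[:, :-1], X[:, 1:]
C0 = X0 @ X0.T / (n_years - 1)
C1 = X1 @ X0.T / (n_years - 1)
G = C1 @ np.linalg.inv(C0)

# One-step forecast: this is the prior that online PDA updates with proxies.
x_forecast = G @ X[:, -1]
```

Because G is estimated from calibration data, reconstruction skill inherits the calibration source's biases, which is consistent with the abstract's finding that non-CGCM-calibrated LIMs show region-specific skill reductions.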

  13. Advancing computational methods for calibration of the Soil and Water Assessment Tool (SWAT): Application for modeling climate change impacts on water resources in the Upper Neuse Watershed of North Carolina

    NASA Astrophysics Data System (ADS)

    Ercan, Mehmet Bulent

Watershed-scale hydrologic models are used for a variety of applications, from flood prediction to drought analysis to water quality assessments. A particular challenge in applying these models is calibration of the model parameters, many of which are difficult to measure at the watershed scale. A primary goal of this dissertation is to contribute new computational methods and tools for calibration of watershed-scale hydrologic models, and the Soil and Water Assessment Tool (SWAT) model in particular. SWAT is a physically based, watershed-scale hydrologic model developed to predict the impact of land management practices on water quality and quantity. The dissertation follows a manuscript format, comprising three separate but interrelated research studies. The first two studies focus on SWAT model calibration, and the third presents an application of the new calibration methods and tools to study climate change impacts on water resources in the Upper Neuse Watershed of North Carolina using SWAT. The objective of the first two studies is to overcome the computational challenges associated with calibration of SWAT models. The first study evaluates a parallel SWAT calibration tool built using the Windows Azure cloud environment and a parallel version of the Dynamically Dimensioned Search (DDS) calibration method modified to run in Azure. The calibration tool was tested for six model scenarios constructed using three watersheds of increasing size (the Eno, Upper Neuse, and Neuse) for both 2-year and 10-year simulation durations. Leveraging the cloud as an on-demand computing resource significantly reduced calibration time: calibration of the Neuse watershed went from taking 207 hours on a personal computer to only 3.4 hours using 256 cores in the Azure cloud.
The second study aims at increasing SWAT model calibration efficiency by creating an open source, multi-objective calibration tool using the Non-Dominated Sorting Genetic Algorithm II (NSGA-II). This tool was demonstrated through an application for the Upper Neuse Watershed in North Carolina, USA. The objective functions used for the calibration were Nash-Sutcliffe (E) and Percent Bias (PB), and the objective sites were the Flat, Little, and Eno watershed outlets. The results show that the use of multi-objective calibration algorithms for SWAT calibration improved model performance especially in terms of minimizing PB compared to the single objective model calibration. The third study builds upon the first two studies by leveraging the new calibration methods and tools to study future climate impacts on the Upper Neuse watershed. Statistically downscaled outputs from eight Global Circulation Models (GCMs) were used for both low and high emission scenarios to drive a well calibrated SWAT model of the Upper Neuse watershed. The objective of the study was to understand the potential hydrologic response of the watershed, which serves as a public water supply for the growing Research Triangle Park region of North Carolina, under projected climate change scenarios. The future climate change scenarios, in general, indicate an increase in precipitation and temperature for the watershed in coming decades. The SWAT simulations using the future climate scenarios, in general, suggest an increase in soil water and water yield, and a decrease in evapotranspiration within the Upper Neuse watershed. 
In summary, this dissertation advances the field of watershed-scale hydrologic modeling by (i) providing some of the first work to apply cloud computing to the computationally demanding task of model calibration; (ii) providing a new, open source library that SWAT modelers can use to perform multi-objective calibration of their models; and (iii) advancing understanding of climate change impacts on water resources for an important watershed in the Research Triangle Park region of North Carolina. The third study leveraged the methodological advances presented in the first two studies. The dissertation therefore contains three independent but interrelated studies that collectively advance the field of watershed-scale hydrologic modeling and analysis.

  14. High precision time calibration of the Permo-Triassic boundary mass extinction by U-Pb geochronology

    NASA Astrophysics Data System (ADS)

    Baresel, Björn; Bucher, Hugo; Brosse, Morgane; Schaltegger, Urs

    2014-05-01

U-Pb dating using Chemical Abrasion, Isotope Dilution Thermal Ionization Mass Spectrometry (CA-ID-TIMS) is the analytical method of choice for geochronologists seeking the highest temporal resolution and a high degree of accuracy for single grains of zircon. The use of double-isotope tracer solutions cross-calibrated and assessed in different EARTHTIME labs, together with the reassessment of the uranium decay constants and further improvements in ion counting technology, has led to unprecedented precision: better than 0.1% for single-grain ages and 0.05% for population ages. These analytical innovations now allow calibrating magmatic and biological timescales at a resolution adequate for both groups of processes. To construct a revised, high-resolution calibrated time scale for the Permian-Triassic boundary (PTB), we use (i) high-precision U-Pb zircon age determinations of a unique succession of volcanic ash beds interbedded with shallow- to deep-water fossiliferous sediments in the Nanpanjiang Basin (South China), combined with (ii) accurate quantitative biochronology based on ammonoids and conodonts and (iii) carbon isotope excursions across the PTB. Using these alignments allows (i) positioning the PTB in different depositional environments and (ii) resolving age/stratigraphic contradictions generated by the index, water depth-controlled conodont Hindeodus parvus, whose diachronous first occurrences are arbitrarily used for placing the base of the Triassic. This new age framework provides the basis for a combined calibration of chemostratigraphic records with high-resolution biochronozones of the Late Permian and Early Triassic. Besides the general improvement of the radio-isotopic calibration of the PTB at the ±100 ka level, this will also lead to a better understanding of the cause-and-effect relations involved in this mass extinction.
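The single-grain ages behind such a timescale come from the standard U-Pb decay equation, t = ln(1 + ²⁰⁶Pb/²³⁸U)/λ₂₃₈. A minimal sketch, using an illustrative daughter/parent ratio (not a measured value from this study) chosen to land near the ~252 Ma Permian-Triassic boundary:

```python
import math

# 238U decay constant (per year), Jaffey et al. value used by EARTHTIME.
LAMBDA_238 = 1.55125e-10

def pb206_u238_age(ratio):
    """206Pb/238U age in years from the measured daughter/parent atom ratio."""
    return math.log(1.0 + ratio) / LAMBDA_238

# Illustrative ratio near the Permian-Triassic boundary.
age_ma = pb206_u238_age(0.03985) / 1.0e6
```

The quoted 0.1% single-grain precision corresponds to roughly ±0.25 Ma at this age, which is why ash beds bracketing the boundary can constrain it at the ±100 ka level when several grains and beds are combined.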

  15. Calibration of an agricultural-hydrological model (RZWQM2) using surrogate global optimization

    DOE PAGES

    Xi, Maolong; Lu, Dan; Gui, Dongwei; ...

    2016-11-27

Robust calibration of an agricultural-hydrological model is critical for simulating crop yield and water quality and for making reasonable agricultural management decisions. However, calibration of agricultural-hydrological system models is challenging because of model complexity, strong parameter correlations, and significant computational requirements. Therefore, only a limited number of simulations can be allowed in any attempt to find a near-optimal solution within an affordable time, which greatly restricts the successful application of the model. The goal of this study is to locate the optimal solution of the Root Zone Water Quality Model (RZWQM2) given a limited simulation time, so as to improve the model simulation and help make rational and effective agricultural-hydrological decisions. To this end, we propose a computationally efficient global optimization procedure using sparse-grid-based surrogates. We first used advanced sparse grid (SG) interpolation to construct a surrogate of the actual RZWQM2, and then calibrated the surrogate model using a global optimization algorithm, Quantum-behaved Particle Swarm Optimization (QPSO). As the surrogate model is a fast-to-evaluate polynomial, it can be evaluated a sufficiently large number of times during the optimization, which facilitates the global search. We calibrated seven model parameters against five years of yield, drain flow, and NO3-N loss data from a subsurface-drained corn-soybean field in Iowa. Results indicate that an accurate surrogate model can be created for the RZWQM2 with a relatively small number of SG points (i.e., RZWQM2 runs). Compared to the conventional QPSO algorithm, our surrogate-based optimization method achieves a smaller objective function value and better calibration performance using fewer expensive RZWQM2 executions, which greatly improves computational efficiency.
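The surrogate-based calibration loop described above — sample the expensive model at a small design, fit a cheap polynomial surrogate, then search the surrogate with many inexpensive evaluations — can be sketched in one dimension. The toy objective below stands in for an RZWQM2 run; the actual study uses sparse-grid interpolation over seven parameters and QPSO rather than the dense search shown here.

```python
import numpy as np

def expensive_model(p):
    """Stand-in for a costly RZWQM2 run: calibration objective vs. one parameter."""
    return (p - 0.37) ** 2 + 0.05 * np.cos(8.0 * p)

# Step 1: evaluate the expensive model at a small design (a sparse grid in
# 1-D reduces to a modest node set; equally spaced nodes used for brevity).
nodes = np.linspace(0.0, 1.0, 9)
values = expensive_model(nodes)

# Step 2: fit a cheap polynomial surrogate of the objective surface.
surrogate = np.polynomial.Polynomial.fit(nodes, values, deg=6)

# Step 3: optimize the surrogate with many cheap evaluations (a global
# optimizer such as QPSO would go here; a dense scan suffices in 1-D).
candidates = np.linspace(0.0, 1.0, 100001)
p_best = candidates[np.argmin(surrogate(candidates))]
```

The key economy is in step 1: only 9 expensive evaluations are spent, while step 3 performs 100,001 evaluations against the polynomial for free.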

  16. Calibration of an agricultural-hydrological model (RZWQM2) using surrogate global optimization

    NASA Astrophysics Data System (ADS)

    Xi, Maolong; Lu, Dan; Gui, Dongwei; Qi, Zhiming; Zhang, Guannan

    2017-01-01

Robust calibration of an agricultural-hydrological model is critical for simulating crop yield and water quality and for making reasonable agricultural management decisions. However, calibration of agricultural-hydrological system models is challenging because of model complexity, strong parameter correlations, and significant computational requirements. Therefore, only a limited number of simulations can be allowed in any attempt to find a near-optimal solution within an affordable time, which greatly restricts the successful application of the model. The goal of this study is to locate the optimal solution of the Root Zone Water Quality Model (RZWQM2) given a limited simulation time, so as to improve the model simulation and help make rational and effective agricultural-hydrological decisions. To this end, we propose a computationally efficient global optimization procedure using sparse-grid-based surrogates. We first used advanced sparse grid (SG) interpolation to construct a surrogate of the actual RZWQM2, and then calibrated the surrogate model using a global optimization algorithm, Quantum-behaved Particle Swarm Optimization (QPSO). As the surrogate model is a fast-to-evaluate polynomial, it can be evaluated a sufficiently large number of times during the optimization, which facilitates the global search. We calibrated seven model parameters against five years of yield, drain flow, and NO3-N loss data from a subsurface-drained corn-soybean field in Iowa. Results indicate that an accurate surrogate model can be created for the RZWQM2 with a relatively small number of SG points (i.e., RZWQM2 runs). Compared to the conventional QPSO algorithm, our surrogate-based optimization method achieves a smaller objective function value and better calibration performance using fewer expensive RZWQM2 executions, which greatly improves computational efficiency.

  17. Calibration of an agricultural-hydrological model (RZWQM2) using surrogate global optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xi, Maolong; Lu, Dan; Gui, Dongwei

Robust calibration of an agricultural-hydrological model is critical for simulating crop yield and water quality and for making reasonable agricultural management decisions. However, calibration of agricultural-hydrological system models is challenging because of model complexity, strong parameter correlations, and significant computational requirements. Therefore, only a limited number of simulations can be allowed in any attempt to find a near-optimal solution within an affordable time, which greatly restricts the successful application of the model. The goal of this study is to locate the optimal solution of the Root Zone Water Quality Model (RZWQM2) given a limited simulation time, so as to improve the model simulation and help make rational and effective agricultural-hydrological decisions. To this end, we propose a computationally efficient global optimization procedure using sparse-grid-based surrogates. We first used advanced sparse grid (SG) interpolation to construct a surrogate of the actual RZWQM2, and then calibrated the surrogate model using a global optimization algorithm, Quantum-behaved Particle Swarm Optimization (QPSO). As the surrogate model is a fast-to-evaluate polynomial, it can be evaluated a sufficiently large number of times during the optimization, which facilitates the global search. We calibrated seven model parameters against five years of yield, drain flow, and NO3-N loss data from a subsurface-drained corn-soybean field in Iowa. Results indicate that an accurate surrogate model can be created for the RZWQM2 with a relatively small number of SG points (i.e., RZWQM2 runs). Compared to the conventional QPSO algorithm, our surrogate-based optimization method achieves a smaller objective function value and better calibration performance using fewer expensive RZWQM2 executions, which greatly improves computational efficiency.

  18. Evaluating the use of in-situ turbidity measurements to quantify fluvial sediment and phosphorus concentrations and fluxes in agricultural streams.

    PubMed

    Stutter, Marc; Dawson, Julian J C; Glendell, Miriam; Napier, Fiona; Potts, Jacqueline M; Sample, James; Vinten, Andrew; Watson, Helen

    2017-12-31

Accurate quantification of suspended sediment (SS) and particulate phosphorus (PP) concentrations and loads is complex because episodic delivery associated with storms and management activities is often missed by infrequent sampling. Surrogate measurements such as turbidity can improve understanding of pollutant behaviour, provided calibrations can be made cost-effectively and with quantified uncertainties. Here, we compared fortnightly and storm-intensive water quality sampling with semi-continuous turbidity monitoring calibrated against spot samples as three potential methods for determining SS and PP concentrations and loads in an agricultural catchment over two years. In the second year of sampling we evaluated the transferability of turbidity calibration relationships to an adjacent catchment with similar soils and land cover. When data from nine storm events were pooled, both SS and PP concentrations (all in log space) were better related to turbidity than to discharge. Developing separate calibration relationships for the rising and falling limbs of the hydrograph provided further improvement. However, calibrations did not transfer between adjacent catchments: the relationships of both SS and PP with turbidity differed in both gradient and intercept on the rising limb of the hydrograph between the two catchments. We conclude that the reduced uncertainty in load estimation derived from using turbidity as a proxy for specific water quality parameters in long-term regulatory monitoring programmes must be weighed against the increased capital and maintenance costs of turbidity equipment, potentially noisy turbidity data, and the need for site-specific, prolonged storm calibration periods. Copyright © 2017 Elsevier B.V. All rights reserved.
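Turbidity calibrations of this kind are typically log-log rating curves fitted separately for each hydrograph limb. The sketch below fits log10(C) = a + b·log10(T) to hypothetical rising- and falling-limb storm samples (values are invented for illustration, not taken from the paper's catchments).

```python
import numpy as np

def fit_rating(turbidity, concentration):
    """Fit log10(C) = a + b*log10(T); returns intercept a and slope b."""
    b, a = np.polyfit(np.log10(turbidity), np.log10(concentration), 1)
    return a, b

def predict(a, b, turbidity):
    """Back-transform the log-log rating to concentration units."""
    return 10.0 ** (a + b * np.log10(turbidity))

# Hypothetical spot samples split by hydrograph limb (turbidity in NTU,
# suspended sediment in mg/L).
rising_t = np.array([10.0, 40.0, 120.0, 300.0])
rising_ss = np.array([8.0, 35.0, 110.0, 290.0])
falling_t = np.array([15.0, 50.0, 150.0])
falling_ss = np.array([6.0, 22.0, 70.0])

a_r, b_r = fit_rating(rising_t, rising_ss)
a_f, b_f = fit_rating(falling_t, falling_ss)
```

The paper's transferability finding amounts to the statement that (a, b) fitted on one catchment's rising limb differ significantly from the neighbouring catchment's, so each site needs its own storm calibration period.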

  19. Detection of heavy metal Cd in polluted fresh leafy vegetables by laser-induced breakdown spectroscopy.

    PubMed

    Yao, Mingyin; Yang, Hui; Huang, Lin; Chen, Tianbing; Rao, Gangfu; Liu, Muhua

    2017-05-10

In seeking a novel method capable of green analysis for monitoring toxic heavy metal residues in fresh leafy vegetables, laser-induced breakdown spectroscopy (LIBS) was applied to demonstrate its capability for this task. Spectra of fresh vegetable samples polluted in the lab were collected with an optimized LIBS experimental setup, and reference cadmium (Cd) concentrations of the samples were obtained by conventional atomic absorption spectroscopy after wet digestion. Direct calibration relating the intensity of a single Cd line to Cd concentration exposed the weakness of this calibration approach. The accuracy of the linear calibration improved somewhat when three Cd lines were used as characteristic variables, especially after the spectra were pretreated; however, this was still insufficient for predicting Cd in samples. Therefore, partial least-squares regression (PLSR) was utilized to enhance the robustness of the quantitative analysis. The PLSR model showed that the prediction accuracy for the Cd target can meet the requirements of food safety determination. This investigation showed that LIBS is a promising and emerging method for analyzing toxic components in agricultural products, especially when combined with suitable chemometrics.
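The progression from single-line to multi-line calibration can be illustrated with ordinary least squares on synthetic line intensities; PLSR plays the analogous role when the full, highly collinear spectrum is used instead of a few selected lines. All intensities and concentrations below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical LIBS data: Cd concentration (mg/kg) and background-corrected
# intensities of three Cd emission lines, each with independent noise.
conc = rng.uniform(0.0, 10.0, 30)
lines = np.column_stack([s * conc + 0.8 * rng.standard_normal(30)
                         for s in (1.0, 0.6, 0.4)])

def rmse(y, yhat):
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

# Single-line calibration: ordinary least squares on one line's intensity.
A1 = np.column_stack([lines[:, 0], np.ones_like(conc)])
coef1, *_ = np.linalg.lstsq(A1, conc, rcond=None)
rmse_single = rmse(conc, A1 @ coef1)

# Three-line calibration: multiple linear regression pooling all lines.
A3 = np.column_stack([lines, np.ones_like(conc)])
coef3, *_ = np.linalg.lstsq(A3, conc, rcond=None)
rmse_triple = rmse(conc, A3 @ coef3)
```

Pooling lines averages out per-line noise, mirroring the abstract's observation that three lines beat one; with hundreds of correlated spectral channels, plain least squares becomes unstable and PLSR's latent-variable projection is the usual remedy.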

  20. Theoretical foundation, methods, and criteria for calibrating human vibration models using frequency response functions

    PubMed Central

    Dong, Ren G.; Welcome, Daniel E.; McDowell, Thomas W.; Wu, John Z.

    2015-01-01

    While simulations of the measured biodynamic responses of the whole human body or body segments to vibration are conventionally interpreted as summaries of biodynamic measurements, and the resulting models are considered quantitative, this study looked at these simulations from a different angle: model calibration. The specific aims of this study are to review and clarify the theoretical basis for model calibration, to help formulate the criteria for calibration validation, and to help appropriately select and apply calibration methods. In addition to established vibration theory, a novel theorem of mechanical vibration is also used to enhance the understanding of the mathematical and physical principles of the calibration. Based on this enhanced understanding, a set of criteria was proposed and used to systematically examine the calibration methods. Besides theoretical analyses, a numerical testing method is also used in the examination. This study identified the basic requirements for each calibration method to obtain a unique calibration solution. This study also confirmed that the solution becomes more robust if more than sufficient calibration references are provided. Practically, however, as more references are used, more inconsistencies can arise among the measured data for representing the biodynamic properties. To help account for the relative reliabilities of the references, a baseline weighting scheme is proposed. The analyses suggest that the best choice of calibration method depends on the modeling purpose, the model structure, and the availability and reliability of representative reference data. PMID:26740726
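Calibrating model parameters against measured frequency response functions (FRFs) with reliability weights, as discussed above, can be sketched for a single-degree-of-freedom system: the inverse receptance 1/H(ω) = k − mω² + icω is linear in (k, m, c), so calibration reduces to weighted least squares. The data, noise level, and weighting scheme below are illustrative assumptions, not the paper's baseline scheme.

```python
import numpy as np

# Synthetic "measured" driving-point FRF of a single-DOF model
# H(w) = 1 / (k - m w^2 + i c w), with hypothetical true parameters.
m_true, c_true, k_true = 2.0, 8.0, 5.0e4
w = np.linspace(10.0, 300.0, 60)
H = 1.0 / (k_true - m_true * w**2 + 1j * c_true * w)
H_meas = H * (1.0 + 0.01 * np.random.default_rng(2).standard_normal(len(w)))

# 1/H is linear in (k, m, c); weights encode the relative reliability of
# each calibration reference (here: trust points near resonance more).
Z = 1.0 / H_meas
A = np.column_stack([np.ones_like(w), -w**2, 1j * w])
weights = np.abs(H_meas)
Aw = A * weights[:, None]
Zw = Z * weights

# Stack real and imaginary parts to solve the complex system in reals.
A_ri = np.vstack([Aw.real, Aw.imag])
Z_ri = np.concatenate([Zw.real, Zw.imag])
(k_est, m_est, c_est), *_ = np.linalg.lstsq(A_ri, Z_ri, rcond=None)
```

With more reference points than unknowns the solution is over-determined and robust, echoing the paper's finding that more-than-sufficient references improve robustness while raising the question of how to weight inconsistent measurements.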
