Sample records for external calibration method

  1. Externally Calibrated Parallel Imaging for 3D Multispectral Imaging Near Metallic Implants Using Broadband Ultrashort Echo Time Imaging

    PubMed Central

    Wiens, Curtis N.; Artz, Nathan S.; Jang, Hyungseok; McMillan, Alan B.; Reeder, Scott B.

    2017-01-01

    Purpose: To develop an externally calibrated parallel imaging technique for three-dimensional multispectral imaging (3D-MSI) in the presence of metallic implants. Theory and Methods: A fast, ultrashort echo time (UTE) calibration acquisition is proposed to enable externally calibrated parallel imaging techniques near metallic implants. The proposed calibration acquisition uses a broadband radiofrequency (RF) pulse to excite the off-resonance induced by the metallic implant, fully phase-encoded imaging to prevent in-plane distortions, and UTE to capture rapidly decaying signal. The performance of the externally calibrated parallel imaging reconstructions was assessed using phantoms and in vivo examples. Results: Phantom and in vivo comparisons to self-calibrated parallel imaging acquisitions show that significant reductions in acquisition times can be achieved using externally calibrated parallel imaging with comparable image quality. Acquisition time reductions are particularly large for fully phase-encoded methods such as spectrally resolved fully phase-encoded three-dimensional (3D) fast spin-echo (SR-FPE), in which scan time reductions of up to 8 min were obtained. Conclusion: A fully phase-encoded acquisition with broadband excitation and UTE enabled externally calibrated parallel imaging for 3D-MSI, eliminating the need for repeated calibration regions at each frequency offset. Significant reductions in acquisition time can be achieved, particularly for fully phase-encoded methods like SR-FPE. PMID:27403613
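
    Where a separate calibration scan of this kind is available, the reconstruction step it enables is essentially coil-sensitivity-based unfolding. The following NumPy sketch shows generic SENSE-style unfolding from externally measured sensitivity maps; array shapes and names are illustrative assumptions, not the authors' reconstruction.

        import numpy as np

        def unfold_sense(aliased, sens, R):
            """Unfold SENSE aliasing using externally measured coil sensitivities.

            aliased : (n_coils, ny // R, nx) complex aliased coil images
            sens    : (n_coils, ny, nx) complex sensitivity maps from the
                      external calibration acquisition
            R       : integer acceleration (reduction) factor along y
            """
            n_coils, ny, nx = sens.shape
            recon = np.zeros((ny, nx), dtype=complex)
            for y in range(ny // R):
                for x in range(nx):
                    ys = [y + k * (ny // R) for k in range(R)]  # pixels that alias together
                    S = sens[:, ys, x]           # (n_coils, R) encoding matrix
                    m = aliased[:, y, x]         # (n_coils,) measured samples
                    rho, *_ = np.linalg.lstsq(S, m, rcond=None)
                    recon[ys, x] = rho
            return recon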

  2. Simultaneous Calibration: A Joint Optimization Approach for Multiple Kinect and External Cameras.

    PubMed

    Liao, Yajie; Sun, Ying; Li, Gongfa; Kong, Jianyi; Jiang, Guozhang; Jiang, Du; Cai, Haibin; Ju, Zhaojie; Yu, Hui; Liu, Honghai

    2017-06-24

    Camera calibration is a crucial problem in many applications, such as 3D reconstruction, structure from motion, object tracking and face alignment. Numerous methods with good performance have been proposed to solve this problem over the last few decades. However, few methods target the joint calibration of multiple sensors (more than four devices), which is a common practical issue in real-time systems. In this paper, we propose a novel method and a corresponding workflow framework to simultaneously calibrate the relative poses of a Kinect and three external cameras. By optimizing the final cost function and assigning corresponding weights to the external cameras in different locations, an effective joint calibration of multiple devices is constructed. Furthermore, the method is tested on a practical platform, and the experimental results show that the proposed joint calibration method achieves satisfactory performance in a practical real-time system, with accuracy higher than the manufacturer's calibration.

  3. Simultaneous Calibration: A Joint Optimization Approach for Multiple Kinect and External Cameras

    PubMed Central

    Liao, Yajie; Sun, Ying; Li, Gongfa; Kong, Jianyi; Jiang, Guozhang; Jiang, Du; Cai, Haibin; Ju, Zhaojie; Yu, Hui; Liu, Honghai

    2017-01-01

    Camera calibration is a crucial problem in many applications, such as 3D reconstruction, structure from motion, object tracking and face alignment. Numerous methods with good performance have been proposed to solve this problem over the last few decades. However, few methods target the joint calibration of multiple sensors (more than four devices), which is a common practical issue in real-time systems. In this paper, we propose a novel method and a corresponding workflow framework to simultaneously calibrate the relative poses of a Kinect and three external cameras. By optimizing the final cost function and assigning corresponding weights to the external cameras in different locations, an effective joint calibration of multiple devices is constructed. Furthermore, the method is tested on a practical platform, and the experimental results show that the proposed joint calibration method achieves satisfactory performance in a practical real-time system, with accuracy higher than the manufacturer's calibration. PMID:28672823
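
    The weighted joint cost function described in these two records can be made concrete with a small sketch: per-camera reprojection residuals are scaled by location-dependent weights and minimized jointly over all relative poses. The function and variable names below are illustrative assumptions, not code from the paper.

        import numpy as np
        from scipy.spatial.transform import Rotation
        from scipy.optimize import least_squares

        def project(points3d, rvec, tvec, K):
            """Pinhole projection of Nx3 world points into one camera."""
            Xc = points3d @ Rotation.from_rotvec(rvec).as_matrix().T + tvec
            return (Xc[:, :2] / Xc[:, 2:3]) @ K[:2, :2].T + K[:2, 2]

        def weighted_joint_residual(params, points3d, obs, Ks, weights):
            """Stack weighted reprojection residuals over all cameras.

            params  : concatenated [rvec, tvec] for each of the n cameras
            obs     : list of Nx2 observed image points, one array per camera
            Ks      : list of 3x3 intrinsic matrices (assumed known)
            weights : per-camera weights emphasizing cameras by location
            """
            res = []
            for i, (uv_obs, K, w) in enumerate(zip(obs, Ks, weights)):
                rvec = params[6 * i:6 * i + 3]
                tvec = params[6 * i + 3:6 * i + 6]
                res.append(np.sqrt(w) * (project(points3d, rvec, tvec, K) - uv_obs).ravel())
            return np.concatenate(res)

        # Usage (x0 = stacked initial pose guesses):
        # fit = least_squares(weighted_joint_residual, x0, args=(points3d, obs, Ks, weights))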

  4. Externally calibrated parallel imaging for 3D multispectral imaging near metallic implants using broadband ultrashort echo time imaging.

    PubMed

    Wiens, Curtis N; Artz, Nathan S; Jang, Hyungseok; McMillan, Alan B; Reeder, Scott B

    2017-06-01

    To develop an externally calibrated parallel imaging technique for three-dimensional multispectral imaging (3D-MSI) in the presence of metallic implants. A fast, ultrashort echo time (UTE) calibration acquisition is proposed to enable externally calibrated parallel imaging techniques near metallic implants. The proposed calibration acquisition uses a broadband radiofrequency (RF) pulse to excite the off-resonance induced by the metallic implant, fully phase-encoded imaging to prevent in-plane distortions, and UTE to capture rapidly decaying signal. The performance of the externally calibrated parallel imaging reconstructions was assessed using phantoms and in vivo examples. Phantom and in vivo comparisons to self-calibrated parallel imaging acquisitions show that significant reductions in acquisition times can be achieved using externally calibrated parallel imaging with comparable image quality. Acquisition time reductions are particularly large for fully phase-encoded methods such as spectrally resolved fully phase-encoded three-dimensional (3D) fast spin-echo (SR-FPE), in which scan time reductions of up to 8 min were obtained. A fully phase-encoded acquisition with broadband excitation and UTE enabled externally calibrated parallel imaging for 3D-MSI, eliminating the need for repeated calibration regions at each frequency offset. Significant reductions in acquisition time can be achieved, particularly for fully phase-encoded methods like SR-FPE. Magn Reson Med 77:2303-2309, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  5. Calibration of mass spectrometric peptide mass fingerprint data without specific external or internal calibrants

    PubMed Central

    Wolski, Witold E; Lalowski, Maciej; Jungblut, Peter; Reinert, Knut

    2005-01-01

    Background: Peptide Mass Fingerprinting (PMF) is a widely used mass spectrometry (MS) method of analysis of proteins and peptides. It relies on the comparison between experimentally determined and theoretical mass spectra. The PMF process requires calibration, usually performed with external or internal calibrants of known molecular masses. Results: We have introduced two novel MS calibration methods. The first method utilises the local similarity of peptide maps generated after separation of complex protein samples by two-dimensional gel electrophoresis. It computes a multiple peak-list alignment of the data set using a modified Minimum Spanning Tree (MST) algorithm. The second method exploits the idea that hundreds of MS samples are measured in parallel on one sample support. It improves the calibration coefficients by applying a two-dimensional Thin Plate Splines (TPS) smoothing algorithm. We studied the novel calibration methods utilising data generated by three different MALDI-TOF-MS instruments. We demonstrate that a PMF data set can be calibrated without resorting to external or relying on widely occurring internal calibrants. The methods developed here were implemented in R and are part of the BioConductor package mscalib available from . Conclusion: The MST calibration algorithm is well suited to calibrate MS spectra of protein samples resulting from two-dimensional gel electrophoretic separation. The TPS based calibration algorithm might be used to correct systematic mass measurement errors observed for large MS sample supports. As compared to other methods, our combined MS spectra calibration strategy increases the peptide/protein identification rate by an additional 5–15%. PMID:16102175
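
    The full MST alignment and TPS smoothing are beyond a few lines, but the elementary recalibration step they build on, fitting an affine mass correction between two matched peak lists, can be sketched as follows (a simplified illustration with assumed names, not the mscalib implementation):

        import numpy as np

        def affine_recalibration(masses_a, masses_b):
            """Least-squares fit of m_b ~ slope * m_a + offset for matched peaks.

            masses_a, masses_b : arrays of already-matched peak m/z values from
                                 two neighbouring spectra on the sample support
            Returns (slope, offset) recalibrating spectrum A onto spectrum B.
            """
            A = np.column_stack([masses_a, np.ones_like(masses_a)])
            (slope, offset), *_ = np.linalg.lstsq(A, masses_b, rcond=None)
            return slope, offset

        # Applying the fit: corrected_masses = slope * masses_a + offset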

  6. General Matrix Inversion Technique for the Calibration of Electric Field Sensor Arrays on Aircraft Platforms

    NASA Technical Reports Server (NTRS)

    Mach, D. M.; Koshak, W. J.

    2007-01-01

    A matrix calibration procedure has been developed that uniquely relates the electric fields measured at the aircraft with the external vector electric field and net aircraft charge. The calibration method can be generalized to any reasonable combination of electric field measurements and aircraft. A calibration matrix is determined for each aircraft that represents the individual instrument responses to the external electric field. The aircraft geometry and configuration of field mills (FMs) uniquely define the matrix. The matrix can then be inverted to determine the external electric field and net aircraft charge from the FM outputs. A distinct advantage of the method is that if one or more FMs need to be eliminated or deemphasized (e.g., due to a malfunction), it is a simple matter to reinvert the matrix without the malfunctioning FMs. To demonstrate the calibration technique, data are presented from several aircraft programs (ER-2, DC-8, Altus, and Citation).
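
    The linear model behind this abstract can be written down directly: the mill outputs are modeled as m = C·[Ex, Ey, Ez, q], and a pseudo-inverse of the calibration matrix recovers the external field and net charge; dropping a malfunctioning mill amounts to deleting its row before re-inverting. A minimal NumPy sketch with illustrative names:

        import numpy as np

        def solve_external_field(C, mill_outputs, bad_mills=()):
            """Recover [Ex, Ey, Ez, q] from field-mill outputs.

            C            : (n_mills, 4) calibration matrix (per-mill response to the
                           external field components and the net aircraft charge)
            mill_outputs : (n_mills,) measured mill signals as a NumPy array
            bad_mills    : indices of malfunctioning mills to exclude
            """
            keep = [i for i in range(C.shape[0]) if i not in set(bad_mills)]
            # Least-squares (pseudo-inverse) solution of the overdetermined system
            x, *_ = np.linalg.lstsq(C[keep, :], mill_outputs[keep], rcond=None)
            Ex, Ey, Ez, q = x
            return Ex, Ey, Ez, q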

  7. On using summary statistics from an external calibration sample to correct for covariate measurement error.

    PubMed

    Guo, Ying; Little, Roderick J; McConnell, Daniel S

    2012-01-01

    Covariate measurement error is common in epidemiologic studies. Current methods for correcting measurement error with information from external calibration samples are insufficient to provide valid adjusted inferences. We consider the problem of estimating the regression of an outcome Y on covariates X and Z, where Y and Z are observed, X is unobserved, but a variable W that measures X with error is observed. Information about measurement error is provided in an external calibration sample where data on X and W (but not Y and Z) are recorded. We describe a method that uses summary statistics from the calibration sample to create multiple imputations of the missing values of X in the regression sample, so that the regression coefficients of Y on X and Z and associated standard errors can be estimated using simple multiple imputation combining rules, yielding valid statistical inferences under the assumption of a multivariate normal distribution. The proposed method is shown by simulation to provide better inferences than existing methods, namely the naive method, classical calibration, and regression calibration, particularly for correction for bias and achieving nominal confidence levels. We also illustrate our method with an example using linear regression to examine the relation between serum reproductive hormone concentrations and bone mineral density loss in midlife women in the Michigan Bone Health and Metabolism Study. Existing methods fail to adjust appropriately for bias due to measurement error in the regression setting, particularly when measurement error is substantial. The proposed method corrects this deficiency.
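
    For context, the simplest of the comparison methods mentioned above, classical regression calibration with an external calibration sample, can be sketched in a few lines: the calibration data give a prediction of X from W, the error-prone W in the main sample is replaced by that prediction, and the outcome model is then fit as usual. This is a hedged illustration of that comparator, not of the authors' multiple-imputation procedure; names are assumed.

        import numpy as np

        def regression_calibration(W_cal, X_cal, W_main, Z_main, Y_main):
            """Regression calibration using an external calibration sample.

            W_cal, X_cal           : error-prone and true covariate in the calibration sample
            W_main, Z_main, Y_main : data in the main (regression) sample, where X is unobserved
            Returns coefficients of the linear regression of Y on [1, X_hat, Z].
            """
            # Step 1: calibration model X ~ W from the external sample
            A_cal = np.column_stack([np.ones_like(W_cal), W_cal])
            gamma, *_ = np.linalg.lstsq(A_cal, X_cal, rcond=None)
            # Step 2: replace W with the predicted X in the main sample
            X_hat = gamma[0] + gamma[1] * W_main
            # Step 3: outcome model Y ~ X_hat + Z
            A_main = np.column_stack([np.ones_like(Y_main), X_hat, Z_main])
            beta, *_ = np.linalg.lstsq(A_main, Y_main, rcond=None)
            return beta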

  8. General Matrix Inversion for the Calibration of Electric Field Sensor Arrays on Aircraft Platforms

    NASA Technical Reports Server (NTRS)

    Mach, D. M.; Koshak, W. J.

    2006-01-01

    We have developed a matrix calibration procedure that uniquely relates the electric fields measured at the aircraft with the external vector electric field and net aircraft charge. Our calibration method is being used with all of our aircraft/electric field sensing combinations and can be generalized to any reasonable combination of electric field measurements and aircraft. We determine a calibration matrix that represents the individual instrument responses to the external electric field. The aircraft geometry and configuration of field mills (FMs) uniquely define the matrix. The matrix can then be inverted to determine the external electric field and net aircraft charge from the FM outputs. A distinct advantage of the method is that if one or more FMs need to be eliminated or de-emphasized (for example, due to a malfunction), it is a simple matter to reinvert the matrix without the malfunctioning FMs. To demonstrate our calibration technique, we present data from several of our aircraft programs (ER-2, DC-8, Altus, Citation).

  9. Infrared non-destructive evaluation method and apparatus

    DOEpatents

    Baleine, Erwan; Erwan, James F; Lee, Ching-Pang; Stinelli, Stephanie

    2014-10-21

    A method of nondestructive evaluation and related system. The method includes arranging a test piece (14) having an internal passage (18) and an external surface (15) and a thermal calibrator (12) within a field of view (42) of an infrared sensor (44); generating a flow (16) of fluid characterized by a fluid temperature; exposing the test piece internal passage (18) and the thermal calibrator (12) to fluid from the flow (16); capturing infrared emission information of the test piece external surface (15) and of the thermal calibrator (12) simultaneously using the infrared sensor (44), wherein the test piece infrared emission information includes emission intensity information, and wherein the thermal calibrator infrared emission information includes a reference emission intensity associated with the fluid temperature; and normalizing the test piece emission intensity information against the reference emission intensity.

  10. Correction of amplitude-phase distortion for polarimetric active radar calibrator

    NASA Astrophysics Data System (ADS)

    Lin, Jianzhi; Li, Weixing; Zhang, Yue; Chen, Zengping

    2015-01-01

    The polarimetric active radar calibrator (PARC) is extensively used as an external test target for system distortion compensation and polarimetric calibration of high-resolution polarimetric radar. However, the signal undergoes distortion in the PARC, affecting the effectiveness of both the compensation and the calibration. The effect of amplitude and phase distortion in the PARC on the system distortion compensation was analyzed based on the "method of paired echoes." A correction method was then proposed that separates the ideal signals from the distorted signals. Experiments were carried out on real radar data, and the experimental results were in good agreement with the theoretical analysis. After the correction, the PARC can be better used as an external test target for system distortion compensation.

  11. A Calibration of the MeteoSwiss RAman Lidar for Meteorological Observations (RALMO) Water Vapour Mixing Ratio Measurements using a Radiosonde Trajectory Method

    NASA Astrophysics Data System (ADS)

    Hicks-Jalali, Shannon; Sica, R. J.; Haefele, Alexander; Martucci, Giovanni

    2018-04-01

    With only 50% downtime from 2007-2016, the RALMO lidar in Payerne, Switzerland, has one of the largest continuous lidar data sets available. These measurements will be used to produce an extensive lidar water vapour climatology using the Optimal Estimation Method introduced by Sica and Haefele (2016). We will compare our improved technique for external calibration using radiosonde trajectories with the standard external methods, and present the evolution of the lidar constant from 2007 to 2016.

  12. External validation of a Cox prognostic model: principles and methods

    PubMed Central

    2013-01-01

    Background: A prognostic model should not enter clinical practice unless it has been demonstrated that it performs a useful role. External validation denotes evaluation of model performance in a sample independent of that used to develop the model. Unlike for logistic regression models, external validation of Cox models is sparsely treated in the literature. Successful validation of a model means achieving satisfactory discrimination and calibration (prediction accuracy) in the validation sample. Validating Cox models is not straightforward because event probabilities are estimated relative to an unspecified baseline function. Methods: We describe statistical approaches to external validation of a published Cox model according to the level of published information, specifically (1) the prognostic index only, (2) the prognostic index together with Kaplan-Meier curves for risk groups, and (3) the first two plus the baseline survival curve (the estimated survival function at the mean prognostic index across the sample). The most challenging task, requiring level 3 information, is assessing calibration, for which we suggest a method of approximating the baseline survival function. Results: We apply the methods to two comparable datasets in primary breast cancer, treating one as derivation and the other as validation sample. Results are presented for discrimination and calibration. We demonstrate plots of survival probabilities that can assist model evaluation. Conclusions: Our validation methods are applicable to a wide range of prognostic studies and provide researchers with a toolkit for external validation of a published Cox model. PMID:23496923
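
    A minimal, library-free sketch of the discrimination part of such an external validation: Harrell's C-statistic computed from a published prognostic index applied to the validation sample. Names are assumed; the calibration assessment, which needs the baseline survival information discussed above, is not shown.

        import numpy as np

        def harrells_c(time, event, prognostic_index):
            """Harrell's concordance for a Cox prognostic index on validation data.

            time             : observed follow-up times
            event            : 1 if the event occurred, 0 if censored
            prognostic_index : linear predictor from the published model
                               (higher index implies higher hazard, shorter survival)
            """
            concordant, comparable = 0.0, 0.0
            n = len(time)
            for i in range(n):
                for j in range(n):
                    # a pair is usable if subject i had the event before time[j]
                    if event[i] == 1 and time[i] < time[j]:
                        comparable += 1
                        if prognostic_index[i] > prognostic_index[j]:
                            concordant += 1
                        elif prognostic_index[i] == prognostic_index[j]:
                            concordant += 0.5
            return concordant / comparable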

  13. A new approach for the pixel map sensitivity (PMS) evaluation of an electronic portal imaging device (EPID)

    PubMed Central

    Lucio, Francesco; Calamia, Elisa; Russi, Elvio; Marchetto, Flavio

    2013-01-01

    When using an electronic portal imaging device (EPID) for dosimetric verifications, the calibration of the sensitive area is of paramount importance. Two calibration methods are generally adopted: one, empirical, based on an external reference dosimeter or on multiple narrow beam irradiations, and one based on simulation of the EPID response. In this paper we present an alternative approach based on an intercalibration procedure which is independent of external dosimeters and of simulations, and is quick and easy to perform. Each element of a detector matrix is characterized by a different gain; the aim of the calibration procedure is to relate the gain of each element to a reference one. The method that we used to compute the relative gains is based on recursive acquisitions with the EPID placed in different positions, assuming a constant fluence of the beam for subsequent deliveries. By applying an established procedure and analysis algorithm, the EPID calibration was repeated in several working conditions. Data show that both the photon energy and the presence of a medium between the source and the detector affect the calibration coefficients by less than 1%. The calibration coefficients were then applied to the acquired images, comparing the EPID dose images with films. Measurements were performed with an open field, placing the film at the level of the EPID. The standard deviation of the distribution of the point-to-point difference is 0.6%. An approach of this type for the EPID calibration has many advantages with respect to the standard methods: it does not need an external dosimeter, it is not tied to the irradiation techniques, and it is easy to implement in clinical practice. Moreover, it can be applied in the case of transit or nontransit dosimetry, solving the problem of the EPID calibration independently of the dose reconstruction method. PACS number: 87.56.-v PMID:24257285

  14. Evaluation of dual energy quantitative CT for determining the spatial distributions of red marrow and bone for dosimetry in internal emitter radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goodsitt, Mitchell M., E-mail: goodsitt@umich.edu; Shenoy, Apeksha; Howard, David

    2014-05-15

    Purpose: To evaluate a three-equation three-unknown dual-energy quantitative CT (DEQCT) technique for determining region specific variations in bone spongiosa composition for improved red marrow dose estimation in radionuclide therapy. Methods: The DEQCT method was applied to 80/140 kVp images of patient-simulating lumbar sectional body phantoms of three sizes (small, medium, and large). External calibration rods of bone, red marrow, and fat-simulating materials were placed beneath the body phantoms. Similar internal calibration inserts were placed at vertebral locations within the body phantoms. Six test inserts of known volume fractions of bone, fat, and red marrow were also scanned. External-to-internal calibration correction factors were derived. The effects of body phantom size, radiation dose, spongiosa region segmentation granularity [single (∼17 × 17 mm) region of interest (ROI), 2 × 2, and 3 × 3 segmentation of that single ROI], and calibration method on the accuracy of the calculated volume fractions of red marrow (cellularity) and trabecular bone were evaluated. Results: For standard low dose DEQCT x-ray technique factors and the internal calibration method, the RMS errors of the estimated volume fractions of red marrow of the test inserts were 1.2–1.3 times greater in the medium body than in the small body phantom and 1.3–1.5 times greater in the large body than in the small body phantom. RMS errors of the calculated volume fractions of red marrow within 2 × 2 segmented subregions of the ROIs were 1.6–1.9 times greater than for no segmentation, and RMS errors for 3 × 3 segmented subregions were 2.3–2.7 times greater than those for no segmentation. Increasing the dose by a factor of 2 reduced the RMS errors of all constituent volume fractions by an average factor of 1.40 ± 0.29 for all segmentation schemes and body phantom sizes; increasing the dose by a factor of 4 reduced those RMS errors by an average factor of 1.71 ± 0.25. Results for external calibrations exhibited much larger RMS errors than size matched internal calibration. Use of an average body size external-to-internal calibration correction factor reduced the errors to closer to those for internal calibration. RMS errors of less than 30% or about 0.01 for the bone and 0.1 for the red marrow volume fractions would likely be satisfactory for human studies. Such accuracies were achieved for 3 × 3 segmentation of 5 mm slice images for: (a) internal calibration with 4 times dose for all size body phantoms, (b) internal calibration with 2 times dose for the small and medium size body phantoms, and (c) corrected external calibration with 4 times dose and all size body phantoms. Conclusions: Phantom studies are promising and demonstrate the potential to use dual energy quantitative CT to estimate the spatial distributions of red marrow and bone within the vertebral spongiosa.

  15. Evaluation of dual energy quantitative CT for determining the spatial distributions of red marrow and bone for dosimetry in internal emitter radiation therapy

    PubMed Central

    Goodsitt, Mitchell M.; Shenoy, Apeksha; Shen, Jincheng; Howard, David; Schipper, Matthew J.; Wilderman, Scott; Christodoulou, Emmanuel; Chun, Se Young; Dewaraja, Yuni K.

    2014-01-01

    Purpose: To evaluate a three-equation three-unknown dual-energy quantitative CT (DEQCT) technique for determining region specific variations in bone spongiosa composition for improved red marrow dose estimation in radionuclide therapy. Methods: The DEQCT method was applied to 80/140 kVp images of patient-simulating lumbar sectional body phantoms of three sizes (small, medium, and large). External calibration rods of bone, red marrow, and fat-simulating materials were placed beneath the body phantoms. Similar internal calibration inserts were placed at vertebral locations within the body phantoms. Six test inserts of known volume fractions of bone, fat, and red marrow were also scanned. External-to-internal calibration correction factors were derived. The effects of body phantom size, radiation dose, spongiosa region segmentation granularity [single (∼17 × 17 mm) region of interest (ROI), 2 × 2, and 3 × 3 segmentation of that single ROI], and calibration method on the accuracy of the calculated volume fractions of red marrow (cellularity) and trabecular bone were evaluated. Results: For standard low dose DEQCT x-ray technique factors and the internal calibration method, the RMS errors of the estimated volume fractions of red marrow of the test inserts were 1.2–1.3 times greater in the medium body than in the small body phantom and 1.3–1.5 times greater in the large body than in the small body phantom. RMS errors of the calculated volume fractions of red marrow within 2 × 2 segmented subregions of the ROIs were 1.6–1.9 times greater than for no segmentation, and RMS errors for 3 × 3 segmented subregions were 2.3–2.7 times greater than those for no segmentation. Increasing the dose by a factor of 2 reduced the RMS errors of all constituent volume fractions by an average factor of 1.40 ± 0.29 for all segmentation schemes and body phantom sizes; increasing the dose by a factor of 4 reduced those RMS errors by an average factor of 1.71 ± 0.25. Results for external calibrations exhibited much larger RMS errors than size matched internal calibration. Use of an average body size external-to-internal calibration correction factor reduced the errors to closer to those for internal calibration. RMS errors of less than 30% or about 0.01 for the bone and 0.1 for the red marrow volume fractions would likely be satisfactory for human studies. Such accuracies were achieved for 3 × 3 segmentation of 5 mm slice images for: (a) internal calibration with 4 times dose for all size body phantoms, (b) internal calibration with 2 times dose for the small and medium size body phantoms, and (c) corrected external calibration with 4 times dose and all size body phantoms. Conclusions: Phantom studies are promising and demonstrate the potential to use dual energy quantitative CT to estimate the spatial distributions of red marrow and bone within the vertebral spongiosa. PMID:24784380
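
    The "three-equation three-unknown" step can be sketched as a small linear solve: the measured CT numbers at 80 and 140 kVp are modeled as volume-fraction-weighted sums of the calibrated bone, red marrow, and fat CT numbers, with the fractions constrained to sum to one. The names below are placeholders, not the authors' code.

        import numpy as np

        def volume_fractions(hu_low, hu_high, cal_low, cal_high):
            """Solve for (bone, red marrow, fat) volume fractions in one voxel/ROI.

            hu_low, hu_high   : measured CT numbers at 80 and 140 kVp
            cal_low, cal_high : length-3 calibrated CT numbers of pure bone,
                                red marrow, and fat at the two energies
            """
            A = np.array([cal_low,            # 80 kVp mixture equation
                          cal_high,           # 140 kVp mixture equation
                          [1.0, 1.0, 1.0]])   # fractions sum to unity
            b = np.array([hu_low, hu_high, 1.0])
            f_bone, f_marrow, f_fat = np.linalg.solve(A, b)
            return f_bone, f_marrow, f_fat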

  16. Joint Calibration of 3D Laser Scanner and Digital Camera Based on DLT Algorithm

    NASA Astrophysics Data System (ADS)

    Gao, X.; Li, M.; Xing, L.; Liu, Y.

    2018-04-01

    A calibration target is designed that can be scanned by a 3D laser scanner while being photographed by a digital camera, yielding a point cloud and photographs of the same target. A method for jointly calibrating the 3D laser scanner and the digital camera based on the Direct Linear Transformation (DLT) algorithm is proposed. The method adds a camera distortion model to the traditional DLT algorithm; after repeated iteration, it solves for the interior and exterior orientation elements of the camera as well as the joint calibration of the 3D laser scanner and digital camera. Experiments show that the method is reliable.
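
    For reference, the classical DLT step that the paper extends: given 3D target points and their image projections, the 11-parameter projection matrix follows from a homogeneous least-squares (SVD) solve. The sketch below is the textbook DLT without the added distortion model; names are illustrative.

        import numpy as np

        def dlt_projection_matrix(points3d, points2d):
            """Estimate the 3x4 camera projection matrix by the basic DLT.

            points3d : (N, 3) control-point coordinates on the calibration target
            points2d : (N, 2) corresponding image coordinates (N >= 6)
            """
            rows = []
            for (X, Y, Z), (u, v) in zip(points3d, points2d):
                rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
                rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
            A = np.asarray(rows)
            _, _, Vt = np.linalg.svd(A)
            P = Vt[-1].reshape(3, 4)   # right singular vector of smallest singular value
            return P / P[2, 3]         # fix the overall scale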

  17. An on-line calibration algorithm for external parameters of visual system based on binocular stereo cameras

    NASA Astrophysics Data System (ADS)

    Wang, Liqiang; Liu, Zhen; Zhang, Zhonghua

    2014-11-01

    Stereo vision is key to visual measurement, robot vision, and autonomous navigation. Before a stereo vision system can be used, the intrinsic parameters of each camera and the external parameters of the system must be calibrated. In engineering practice, the intrinsic parameters remain unchanged after the cameras are calibrated, but the positional relationship between the cameras can change because of vibration, knocks and pressure in the vicinity of railways or motor workshops. Especially for large baselines, even minute changes in translation or rotation can affect the epipolar geometry and scene triangulation to such a degree that the visual system becomes unusable. A technique providing both real-time checking and on-line recalibration of the external parameters of the stereo system is therefore particularly important. This paper presents an on-line method for checking and recalibrating the positional relationship between stereo cameras. In epipolar geometry, the external parameters of the cameras can be obtained by factorization of the fundamental matrix, which offers a way to calculate the external camera parameters without any special targets. If the intrinsic camera parameters are known, the external parameters of the system can be calculated from a number of randomly matched points. The process is: (i) estimating the fundamental matrix via the feature point correspondences; (ii) computing the essential matrix from the fundamental matrix; (iii) obtaining the external parameters by decomposition of the essential matrix. In the fundamental-matrix estimation step, traditional methods are sensitive to noise and cannot guarantee estimation accuracy. We consider the feature distribution in the actual scene images and introduce a regional weighted normalization algorithm to improve the accuracy of the fundamental matrix estimation. Experiments on simulated data show that, compared with traditional algorithms, the method improves the robustness and accuracy of the fundamental matrix estimation. Finally, an experiment computing the relative pose of a pair of stereo cameras demonstrates the accurate performance of the algorithm.
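
    A hedged NumPy sketch of the recovery chain described above (form the essential matrix from the fundamental matrix, then decompose it); the fundamental-matrix estimation itself, including the proposed regional weighted normalization, is omitted and the inputs are assumed given.

        import numpy as np

        def external_params_from_F(F, K1, K2):
            """Relative pose candidates of camera 2 w.r.t. camera 1 from F.

            F      : 3x3 fundamental matrix (already estimated from matches)
            K1, K2 : intrinsic matrices of the two cameras
            Returns two rotation candidates and the translation direction; the
            physically valid pair is chosen by a cheirality (positive depth) test.
            """
            E = K2.T @ F @ K1                        # essential matrix
            U, _, Vt = np.linalg.svd(E)
            if np.linalg.det(U) < 0: U = -U          # enforce proper rotations
            if np.linalg.det(Vt) < 0: Vt = -Vt
            W = np.array([[0.0, -1.0, 0.0],
                          [1.0,  0.0, 0.0],
                          [0.0,  0.0, 1.0]])
            R1 = U @ W @ Vt
            R2 = U @ W.T @ Vt
            t = U[:, 2]                              # translation up to scale
            return R1, R2, t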

  18. Comparison of spectral radiance responsivity calibration techniques used for backscatter ultraviolet satellite instruments

    NASA Astrophysics Data System (ADS)

    Kowalewski, M. G.; Janz, S. J.

    2015-02-01

    Methods of absolute radiometric calibration of backscatter ultraviolet (BUV) satellite instruments are compared as part of an effort to minimize pre-launch calibration uncertainties. An internally illuminated integrating sphere source has been used for the Shuttle Solar BUV, Total Ozone Mapping Spectrometer, Ozone Mapping Instrument, and Global Ozone Monitoring Experiment 2 using standardized procedures traceable to national standards. These sphere-based spectral responsivities agree to within the derived combined standard uncertainty of 1.87% relative to calibrations performed using an external diffuser illuminated by standard irradiance sources, the customary spectral radiance responsivity calibration method for BUV instruments. The combined standard uncertainty for these calibration techniques as implemented at the NASA Goddard Space Flight Center’s Radiometric Calibration and Development Laboratory is shown to be less than 2% at 250 nm when using a single traceable calibration standard.

  19. Online geometrical calibration of a mobile C-arm using external sensors

    NASA Astrophysics Data System (ADS)

    Mitschke, Matthias M.; Navab, Nassir; Schuetz, Oliver

    2000-04-01

    3D tomographic reconstruction of high contrast objects, such as contrast agent enhanced blood vessels or bones, from x-ray images acquired by isocentric C-arm systems has recently gained interest. For tomographic reconstruction, a sequence of images is captured during the C-arm rotation around the patient, and the precise projection geometry has to be determined for each image. This is a difficult task, as C-arms usually do not provide accurate information about their projection geometry. Standard methods propose the use of an x-ray calibration phantom and an offline calibration, where the motion of the C-arm is assumed to be reproducible between the calibration and the patient run. However, mobile C-arms usually do not have this desirable property. Therefore, an online recovery of the projection geometry is necessary. Here, we study the use of external tracking systems such as Polaris or Optotrak from Northern Digital, Inc., for online calibration. In order to use the external tracking system for recovery of the x-ray projection geometry, two unknown transformations have to be estimated: the relation between the x-ray imaging system and the marker plate of the tracking system, and the relation between the world and sensor coordinate systems. We describe our attempt to solve this calibration problem. Experimental results on anatomical data are presented and visually compared with the results of estimating the projection geometry with an x-ray calibration phantom.

  20. Measurement of large steel plates based on linear scan structured light scanning

    NASA Astrophysics Data System (ADS)

    Xiao, Zhitao; Li, Yaru; Lei, Geng; Xi, Jiangtao

    2018-01-01

    A measuring method based on linear structured light scanning is proposed to achieve accurate measurement of the complex internal shape of large steel plates. First, using a calibration plate with round marks, an improved line-scanning calibration method is designed, and the internal and external parameters of the camera are determined through this calibration. Second, images of the steel plates are acquired by a line-scan camera; the Canny edge detection method is used to extract approximate contours of the steel plate images, and a Gaussian fitting algorithm is used to extract the sub-pixel edges of the contours. Third, to address the inaccurate restoration of contour size, the horizontal and vertical error curves of the images are obtained by measuring the distances between adjacent points in a grid of known dimensions. Finally, these error curves are used to correct the steel plate contours, and, combined with the internal and external calibration parameters, the size of the contours is calculated. The experimental results demonstrate that the proposed method achieves an error of 1 mm/m over a 1.2 m × 2.6 m field of view, which satisfies the demands of industrial measurement.

  1. Analysis of potential migrants from plastic materials in milk by liquid chromatography-mass spectrometry with liquid-liquid extraction and low-temperature purification.

    PubMed

    Bodai, Zsolt; Szabó, Bálint Sámuel; Novák, Márton; Hámori, Susanne; Nyiri, Zoltán; Rikker, Tamás; Eke, Zsuzsanna

    2014-10-15

    A simple and fast analytical method was developed for the determination of six UV stabilizers (Cyasorb UV-1164, Tinuvin P, Tinuvin 234, Tinuvin 326, Tinuvin 327, and Tinuvin 1577) and five antioxidants (Irgafos 168, Irganox 1010, Irganox 3114, Irganox 3790, and Irganox 565) in milk. For sample preparation liquid-liquid extraction with low-temperature purification combined with centrifugation was used to remove fats, proteins, and sugars. After the cleanup step, the sample was analyzed with high-performance liquid chromatography-tandem mass spectrometry (LC-MS/MS). External standard and matrix calibrations were tested. External calibration proved to be acceptable for Tinuvin P, Tinuvin 234, Tinuvin 326, Tinuvin 327, Irganox 3114, and Irganox 3790. The method was successfully validated with matrix calibration for all compounds. Method detection limits were between 0.25 and 10 μg/kg. Accuracies ranged from 93 to 109%, and intraday precisions were <13%.
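
    External-standard calibration of the kind tested here reduces to fitting a response-versus-concentration curve from neat standards and inverting it for unknowns. A minimal sketch assuming a linear response and illustrative names:

        import numpy as np

        def external_calibration_curve(conc_std, area_std):
            """Fit a linear external-standard calibration: area = slope*conc + intercept."""
            slope, intercept = np.polyfit(conc_std, area_std, 1)
            return slope, intercept

        def quantify(area_sample, slope, intercept):
            """Back-calculate the concentration of an unknown from its peak area."""
            return (area_sample - intercept) / slope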

  2. Use of internal scintillator radioactivity to calibrate DOI function of a PET detector with a dual-ended-scintillator readout

    PubMed Central

    Bircher, Chad; Shao, Yiping

    2012-01-01

    Purpose: Positron emission tomography (PET) detectors that use a dual-ended-scintillator readout to measure depth-of-interaction (DOI) must have an accurate DOI function to provide the relationship between DOI and signal ratios to be used for detector calibration and recalibration. In a previous study, the authors used a novel and simple method to accurately and quickly measure DOI function by irradiating the detector with an external uniform flood source; however, as a practical concern, implementing external uniform flood sources in an assembled PET system is technically challenging and expensive. In the current study, therefore, the authors investigated whether the same method could be used to acquire DOI function from scintillator-generated (i.e., internal) radiation. The authors also developed a method for calibrating the energy scale necessary to select the events within the desired energy window. Methods: The authors measured the DOI function of a PET detector with lutetium yttrium orthosilicate (LYSO) scintillators. Radiation events originating from the scintillators’ internal Lu-176 beta decay were used to measure DOI functions which were then compared with those measured from both an external uniform flood source and an electronically collimated external point source. The authors conducted these studies with several scintillators of differing geometries (1.5 × 1.5 and 2.0 × 2.0 mm2 cross-section area and 20, 30, and 40 mm length) and various surface finishes (mirror-finishing, saw-cut rough, and other finishes in between), and in a prototype array. Results: All measured results using internal and external radiation sources showed excellent agreement in DOI function measurement. The mean difference among DOI values for all scintillators measured from internal and external radiation sources was less than 1.0 mm for different scintillator geometries and various surface finishes. Conclusions: The internal radioactivity of LYSO scintillators can be used to accurately measure DOI function in PET detectors, regardless of scintillator geometry or surface finish. Because an external radiation source is not needed, this method of DOI function measurement can be practically applied to individual PET detectors as well as assembled systems. PMID:22320787
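
    One common way to turn a uniform irradiation (flood source or internal Lu-176) into a DOI function is to note that a uniform depth distribution lets the cumulative histogram of the end-to-end signal ratio be mapped onto depth. The sketch below assumes that approach and is not taken from the paper; names are illustrative.

        import numpy as np

        def doi_function_from_flood(signal_a, signal_b, crystal_length, n_bins=50):
            """Empirical DOI-versus-ratio lookup from uniformly distributed events.

            signal_a, signal_b : per-event amplitudes from the two scintillator ends
            crystal_length     : scintillator length in mm
            Returns (ratio_grid, depth_at_ratio): depth assigned to each ratio value.
            """
            ratio = (signal_a - signal_b) / (signal_a + signal_b)
            ratio_grid = np.linspace(ratio.min(), ratio.max(), n_bins)
            # uniform irradiation -> depth proportional to the cumulative fraction
            cum_fraction = np.array([(ratio <= r).mean() for r in ratio_grid])
            depth_at_ratio = cum_fraction * crystal_length
            return ratio_grid, depth_at_ratio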

  3. Use of internal scintillator radioactivity to calibrate DOI function of a PET detector with a dual-ended-scintillator readout

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bircher, Chad; Shao, Yiping

    Purpose: Positron emission tomography (PET) detectors that use a dual-ended-scintillator readout to measure depth-of-interaction (DOI) must have an accurate DOI function to provide the relationship between DOI and signal ratios to be used for detector calibration and recalibration. In a previous study, the authors used a novel and simple method to accurately and quickly measure DOI function by irradiating the detector with an external uniform flood source; however, as a practical concern, implementing external uniform flood sources in an assembled PET system is technically challenging and expensive. In the current study, therefore, the authors investigated whether the same method could be used to acquire DOI function from scintillator-generated (i.e., internal) radiation. The authors also developed a method for calibrating the energy scale necessary to select the events within the desired energy window. Methods: The authors measured the DOI function of a PET detector with lutetium yttrium orthosilicate (LYSO) scintillators. Radiation events originating from the scintillators' internal Lu-176 beta decay were used to measure DOI functions which were then compared with those measured from both an external uniform flood source and an electronically collimated external point source. The authors conducted these studies with several scintillators of differing geometries (1.5 × 1.5 and 2.0 × 2.0 mm² cross-section area and 20, 30, and 40 mm length) and various surface finishes (mirror-finishing, saw-cut rough, and other finishes in between), and in a prototype array. Results: All measured results using internal and external radiation sources showed excellent agreement in DOI function measurement. The mean difference among DOI values for all scintillators measured from internal and external radiation sources was less than 1.0 mm for different scintillator geometries and various surface finishes. Conclusions: The internal radioactivity of LYSO scintillators can be used to accurately measure DOI function in PET detectors, regardless of scintillator geometry or surface finish. Because an external radiation source is not needed, this method of DOI function measurement can be practically applied to individual PET detectors as well as assembled systems.

  4. Use of internal scintillator radioactivity to calibrate DOI function of a PET detector with a dual-ended-scintillator readout.

    PubMed

    Bircher, Chad; Shao, Yiping

    2012-02-01

    Positron emission tomography (PET) detectors that use a dual-ended-scintillator readout to measure depth-of-interaction (DOI) must have an accurate DOI function to provide the relationship between DOI and signal ratios to be used for detector calibration and recalibration. In a previous study, the authors used a novel and simple method to accurately and quickly measure DOI function by irradiating the detector with an external uniform flood source; however, as a practical concern, implementing external uniform flood sources in an assembled PET system is technically challenging and expensive. In the current study, therefore, the authors investigated whether the same method could be used to acquire DOI function from scintillator-generated (i.e., internal) radiation. The authors also developed a method for calibrating the energy scale necessary to select the events within the desired energy window. The authors measured the DOI function of a PET detector with lutetium yttrium orthosilicate (LYSO) scintillators. Radiation events originating from the scintillators' internal Lu-176 beta decay were used to measure DOI functions which were then compared with those measured from both an external uniform flood source and an electronically collimated external point source. The authors conducted these studies with several scintillators of differing geometries (1.5 × 1.5 and 2.0 × 2.0 mm(2) cross-section area and 20, 30, and 40 mm length) and various surface finishes (mirror-finishing, saw-cut rough, and other finishes in between), and in a prototype array. All measured results using internal and external radiation sources showed excellent agreement in DOI function measurement. The mean difference among DOI values for all scintillators measured from internal and external radiation sources was less than 1.0 mm for different scintillator geometries and various surface finishes. The internal radioactivity of LYSO scintillators can be used to accurately measure DOI function in PET detectors, regardless of scintillator geometry or surface finish. Because an external radiation source is not needed, this method of DOI function measurement can be practically applied to individual PET detectors as well as assembled systems.

  5. Solid matrix transformation and tracer addition using molten ammonium bifluoride salt as a sample preparation method for laser ablation inductively coupled plasma mass spectrometry.

    PubMed

    Grate, Jay W; Gonzalez, Jhanis J; O'Hara, Matthew J; Kellogg, Cynthia M; Morrison, Samuel S; Koppenaal, David W; Chan, George C-Y; Mao, Xianglei; Zorba, Vassilia; Russo, Richard E

    2017-09-08

    Solid sampling and analysis methods, such as laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS), are challenged by matrix effects and calibration difficulties. Matrix-matched standards for external calibration are seldom available and it is difficult to distribute spikes evenly into a solid matrix as internal standards. While isotopic ratios of the same element can be measured to high precision, matrix-dependent effects in the sampling and analysis process frustrate accurate quantification and elemental ratio determinations. Here we introduce a potentially general solid matrix transformation approach entailing chemical reactions in molten ammonium bifluoride (ABF) salt that enables the introduction of spikes as tracers or internal standards. Proof of principle experiments show that the decomposition of uranium ore in sealed PFA fluoropolymer vials at 230 °C yields, after cooling, new solids suitable for direct solid sampling by LA. When spikes are included in the molten salt reaction, subsequent LA-ICP-MS sampling at several spots indicate that the spikes are evenly distributed, and that U-235 tracer dramatically improves reproducibility in U-238 analysis. Precisions improved from 17% relative standard deviation for U-238 signals to 0.1% for the ratio of sample U-238 to spiked U-235, a factor of over two orders of magnitude. These results introduce the concept of solid matrix transformation (SMT) using ABF, and provide proof of principle for a new method of incorporating internal standards into a solid for LA-ICP-MS. This new approach, SMT-LA-ICP-MS, provides opportunities to improve calibration and quantification in solids based analysis. Looking forward, tracer addition to transformed solids opens up LA-based methods to analytical methodologies such as standard addition, isotope dilution, preparation of matrix-matched solid standards, external calibration, and monitoring instrument drift against external calibration standards.

  6. Automatic camera to laser calibration for high accuracy mobile mapping systems using INS

    NASA Astrophysics Data System (ADS)

    Goeman, Werner; Douterloigne, Koen; Gautama, Sidharta

    2013-09-01

    A mobile mapping system (MMS) is a mobile multi-sensor platform developed by the geoinformation community to support the acquisition of huge amounts of geodata in the form of georeferenced high resolution images and dense laser clouds. Since data fusion and data integration techniques are increasingly able to combine the complementary strengths of different sensor types, the external calibration of a camera to a laser scanner is a common pre-requisite on today's mobile platforms. The methods of calibration, nevertheless, are often relatively poorly documented, are almost always time-consuming, demand expert knowledge and often require a carefully constructed calibration environment. A new methodology is studied and explored to provide a high quality external calibration for a pinhole camera to a laser scanner which is automatic, easy to perform, robust and foolproof. The method presented here, uses a portable, standard ranging pole which needs to be positioned on a known ground control point. For calibration, a well studied absolute orientation problem needs to be solved. In many cases, the camera and laser sensor are calibrated in relation to the INS system. Therefore, the transformation from camera to laser contains the cumulated error of each sensor in relation to the INS. Here, the calibration of the camera is performed in relation to the laser frame using the time synchronization between the sensors for data association. In this study, the use of the inertial relative movement will be explored to collect more useful calibration data. This results in a better intersensor calibration allowing better coloring of the clouds and a more accurate depth mask for images, especially on the edges of objects in the scene.
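
    The "well studied absolute orientation problem" mentioned above has a closed-form SVD solution for the rigid transform between corresponding point sets (Horn/Kabsch style). A generic sketch, not the authors' pipeline:

        import numpy as np

        def absolute_orientation(src, dst):
            """Rigid transform (R, t) mapping src points onto dst points (Nx3 each)."""
            src_c = src - src.mean(axis=0)
            dst_c = dst - dst.mean(axis=0)
            H = src_c.T @ dst_c                      # cross-covariance matrix
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T                       # proper rotation (det = +1)
            t = dst.mean(axis=0) - R @ src.mean(axis=0)
            return R, t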

  7. Impact of dose calibrators quality control programme in Argentina

    NASA Astrophysics Data System (ADS)

    Furnari, J. C.; de Cabrejas, M. L.; del C. Rotta, M.; Iglicki, F. A.; Milá, M. I.; Magnavacca, C.; Dima, J. C.; Rodríguez Pasqués, R. H.

    1992-02-01

    The national Quality Control (QC) programme for radionuclide calibrators started 12 years ago. Accuracy and the implementation of a QC programme were evaluated over all these years at 95 nuclear medicine laboratories where dose calibrators were in use. During all that time, the Metrology Group of CNEA has distributed 137Cs sealed sources to check stability and has been performing periodic "checking rounds" and postal surveys using unknown samples (external quality control). An account of the results of both methods is presented. At present, more than 65% of the dose calibrators measure activities with an error less than 10%.

  8. Robustness of near-infrared calibration models for the prediction of milk constituents during the milking process.

    PubMed

    Melfsen, Andreas; Hartung, Eberhard; Haeussermann, Angelika

    2013-02-01

    The robustness of in-line raw milk analysis with near-infrared spectroscopy (NIRS) was tested with respect to the prediction of the raw milk contents fat, protein and lactose. Near-infrared (NIR) spectra of raw milk (n = 3119) were acquired on three different farms during the milking process of 354 milkings over a period of six months. Calibration models were calculated for: a random data set of each farm (fully random internal calibration); first two thirds of the visits per farm (internal calibration); whole datasets of two of the three farms (external calibration), and combinations of external and internal datasets. Validation was done either on the remaining data set per farm (internal validation) or on data of the remaining farms (external validation). Excellent calibration results were obtained when fully randomised internal calibration sets were used for milk analysis. In this case, RPD values of around ten, five and three for the prediction of fat, protein and lactose content, respectively, were achieved. Farm internal calibrations achieved much poorer prediction results especially for the prediction of protein and lactose with RPD values of around two and one respectively. The prediction accuracy improved when validation was done on spectra of an external farm, mainly due to the higher sample variation in external calibration sets in terms of feeding diets and individual cow effects. The results showed that further improvements were achieved when additional farm information was added to the calibration set. One of the main requirements towards a robust calibration model is the ability to predict milk constituents in unknown future milk samples. The robustness and quality of prediction increases with increasing variation of, e.g., feeding and cow individual milk composition in the calibration model.
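
    For reference, the RPD figures quoted above are conventionally the ratio of the standard deviation of the reference values to the standard error of prediction; a short sketch with assumed names (conventions for the SEP denominator vary slightly):

        import numpy as np

        def rpd(reference, predicted):
            """Ratio of performance to deviation: SD(reference) / SEP."""
            sep = np.sqrt(np.mean((np.asarray(predicted) - np.asarray(reference)) ** 2))
            return np.std(reference, ddof=1) / sep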

  9. Prediction models for clustered data: comparison of a random intercept and standard regression model

    PubMed Central

    2013-01-01

    Background: When study data are clustered, standard regression analysis is considered inappropriate and analytical techniques for clustered data need to be used. For prediction research in which the interest of predictor effects is on the patient level, random effect regression models are probably preferred over standard regression analysis. It is well known that the random effect parameter estimates and the standard logistic regression parameter estimates are different. Here, we compared random effect and standard logistic regression models for their ability to provide accurate predictions. Methods: Using an empirical study on 1642 surgical patients at risk of postoperative nausea and vomiting, who were treated by one of 19 anesthesiologists (clusters), we developed prognostic models either with standard or random intercept logistic regression. External validity of these models was assessed in new patients from other anesthesiologists. We supported our results with simulation studies using intra-class correlation coefficients (ICC) of 5%, 15%, or 30%. Standard performance measures and measures adapted for the clustered data structure were estimated. Results: The model developed with random effect analysis showed better discrimination than the standard approach, if the cluster effects were used for risk prediction (standard c-index of 0.69 versus 0.66). In the external validation set, both models showed similar discrimination (standard c-index 0.68 versus 0.67). The simulation study confirmed these results. For datasets with a high ICC (≥15%), model calibration was only adequate in external subjects, if the used performance measure assumed the same data structure as the model development method: standard calibration measures showed good calibration for the standard developed model, calibration measures adapting the clustered data structure showed good calibration for the prediction model with random intercept. Conclusion: The models with random intercept discriminate better than the standard model only if the cluster effect is used for predictions. The prediction model with random intercept had good calibration within clusters. PMID:23414436

  10. Target analyte quantification by isotope dilution LC-MS/MS directly referring to internal standard concentrations--validation for serum cortisol measurement.

    PubMed

    Maier, Barbara; Vogeser, Michael

    2013-04-01

    Isotope dilution LC-MS/MS methods used in the clinical laboratory typically involve multi-point external calibration in each analytical series. Our aim was to test the hypothesis that determination of target analyte concentrations directly derived from the relation of the target analyte peak area to the peak area of a corresponding stable isotope labelled internal standard compound [direct isotope dilution analysis (DIDA)] may not be inferior to conventional external calibration with respect to accuracy and reproducibility. Quality control samples and human serum pools were analysed in a comparative validation protocol for cortisol as an exemplary analyte by LC-MS/MS. Accuracy and reproducibility were compared between quantification either involving a six-point external calibration function, or a result calculation merely based on peak area ratios of unlabelled and labelled analyte. Both quantification approaches resulted in similar accuracy and reproducibility. For specified analytes, reliable analyte quantification directly derived from the ratio of peak areas of labelled and unlabelled analyte without the need for a time consuming multi-point calibration series is possible. This DIDA approach is of considerable practical importance for the application of LC-MS/MS in the clinical laboratory where short turnaround times often have high priority.
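
    The DIDA quantification itself is a one-line calculation once the peak areas are integrated: the analyte concentration follows directly from the area ratio to the labelled internal standard of known concentration. A sketch with assumed names:

        def dida_concentration(area_analyte, area_internal_std, conc_internal_std):
            """Direct isotope dilution: concentration from the peak-area ratio alone,
            assuming equal response factors for labelled and unlabelled analyte."""
            return (area_analyte / area_internal_std) * conc_internal_std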

  11. Quantifying prognosis with risk predictions.

    PubMed

    Pace, Nathan L; Eberhart, Leopold H J; Kranke, Peter R

    2012-01-01

    Prognosis is a forecast, based on present observations in a patient, of their probable outcome from disease, surgery and so on. Research methods for the development of risk probabilities may not be familiar to some anaesthesiologists. We briefly describe methods for identifying risk factors and risk scores. A probability prediction rule assigns a risk probability to a patient for the occurrence of a specific event. Probability reflects the continuum between absolute certainty (Pi = 1) and certified impossibility (Pi = 0). Biomarkers and clinical covariates that modify risk are known as risk factors. The Pi as modified by risk factors can be estimated by identifying the risk factors and their weighting; these are usually obtained by stepwise logistic regression. The accuracy of probabilistic predictors can be separated into the concepts of 'overall performance', 'discrimination' and 'calibration'. Overall performance is the mathematical distance between predictions and outcomes. Discrimination is the ability of the predictor to rank order observations with different outcomes. Calibration is the correctness of prediction probabilities on an absolute scale. Statistical methods include the Brier score, coefficient of determination (Nagelkerke R2), C-statistic and regression calibration. External validation is the comparison of the actual outcomes to the predicted outcomes in a new and independent patient sample. External validation uses the statistical methods of overall performance, discrimination and calibration and is uniformly recommended before acceptance of the prediction model. Evidence from randomised controlled clinical trials should be obtained to show the effectiveness of risk scores for altering patient management and patient outcomes.
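
    The performance measures listed above are straightforward to compute; the sketch below evaluates the Brier score, the C-statistic, and a logistic calibration slope and intercept for a vector of predicted risk probabilities, assuming scikit-learn, statsmodels, and SciPy are available. Names are illustrative.

        import numpy as np
        import statsmodels.api as sm
        from scipy.special import logit
        from sklearn.metrics import brier_score_loss, roc_auc_score

        def prediction_performance(y_true, p_pred):
            """Overall performance, discrimination, and calibration of risk predictions."""
            brier = brier_score_loss(y_true, p_pred)        # overall performance
            c_stat = roc_auc_score(y_true, p_pred)          # discrimination
            # calibration: regress outcomes on the linear predictor (logit of p);
            # slope near 1 and intercept near 0 indicate good calibration
            lp = logit(np.clip(p_pred, 1e-6, 1 - 1e-6))
            fit = sm.Logit(y_true, sm.add_constant(lp)).fit(disp=0)
            intercept, slope = fit.params
            return brier, c_stat, slope, intercept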

  12. ExacTrac x-ray and beam isocenters - what's the difference?

    PubMed

    Tideman Arp, Dennis; Carl, Jesper

    2012-03-01

    To evaluate the geometric accuracy of the isocenter of an image-guidance system, as implemented in the exactrac system from brainlab, relative to the linear accelerator radiation isocenter. Subsequently to correct the x-ray isocenter of the exactrac system for any geometric discrepancies between the two isocenters. Five Varian linear accelerators all equipped with electronic imaging devices and exactrac with robotics from brainlab were evaluated. A commercially available Winston-Lutz phantom and an in-house made adjustable base were used in the setup. The electronic portal imaging device of the linear accelerators was used to acquire MV-images at various gantry angles. Stereoscopic pairs of x-ray images were acquired using the exactrac system. The deviation between the position of the external laser isocenter and the exactrac isocenter was evaluated using the commercial software of the exactrac system. In-house produced software was used to analyze the MV-images and evaluate the deviation between the external laser isocenter and the radiation isocenter of the linear accelerator. Subsequently, the deviation between the radiation isocenter and the isocenter of the exactrac system was calculated. A new method of calibrating the isocenter of the exactrac system was applied to reduce the deviations between the radiation isocenter and the exactrac isocenter. To evaluate the geometric accuracy a 3D deviation vector was calculated for each relative isocenter position. The 3D deviation between the external laser isocenter and the isocenter of the exactrac system varied from 0.21 to 0.42 mm. The 3D deviation between the external laser isocenter and the linac radiation isocenter ranged from 0.37 to 0.83 mm. The 3D deviation between the radiation isocenter and the isocenter of the exactrac system ranged from 0.31 to 1.07 mm. Using the new method of calibrating the exactrac isocenter the 3D deviation of one linac was reduced from 0.90 to 0.23 mm. The results were complicated due to routine maintenance of the linac, including laser calibration. It was necessary to repeat the measurements in order to perform the calibration of the exactrac isocenter. The deviations between the linac radiation isocenter and the exactrac isocenter were of an order that may have clinical relevance. An alternative method of calibrating the isocenter of the exactrac system was applied and reduced the deviations between the two isocenters.

  13. A Method to Solve Interior and Exterior Camera Calibration Parameters for Image Resection

    NASA Technical Reports Server (NTRS)

    Samtaney, Ravi

    1999-01-01

    An iterative method is presented to solve for the internal and external camera calibration parameters, given model target points and their images from one or more camera locations. The direct linear transform formulation was used to obtain an initial guess for the iterative method, and herein lies one of the strengths of the present method. In all test cases, the method converged to the correct solution. In general, an overdetermined system of nonlinear equations is solved in the least-squares sense. The iterative method presented is based on Newton-Raphson for solving systems of nonlinear algebraic equations. The Jacobian is derived analytically, and the pseudo-inverse of the Jacobian is obtained by singular value decomposition.
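
    A minimal numerical sketch of the scheme described here follows: an overdetermined nonlinear system is solved in the least-squares sense by iterating with the SVD-based pseudo-inverse of the Jacobian. The toy exponential residual is only a stand-in; the actual method uses the camera projection equations.

```python
# Sketch of iterative least-squares solution via the Jacobian pseudo-inverse (SVD).
import numpy as np

def residuals(params, x_obs, y_obs):
    # Hypothetical model y = a * exp(b * x); the real method uses projection equations.
    a, b = params
    return y_obs - a * np.exp(b * x_obs)

def jacobian(params, x_obs):
    a, b = params
    J = np.empty((x_obs.size, 2))
    J[:, 0] = -np.exp(b * x_obs)              # d r / d a
    J[:, 1] = -a * x_obs * np.exp(b * x_obs)  # d r / d b
    return J

def gauss_newton(p0, x_obs, y_obs, tol=1e-10, max_iter=50):
    p = np.asarray(p0, dtype=float)
    for _ in range(max_iter):
        r = residuals(p, x_obs, y_obs)
        step = np.linalg.pinv(jacobian(p, x_obs)) @ r   # pseudo-inverse computed via SVD
        p = p - step
        if np.linalg.norm(step) < tol:
            break
    return p

x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(1.5 * x)
print(gauss_newton([1.0, 1.0], x, y))   # converges to approximately [2.0, 1.5]
```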

  14. Forecasting Emergency Department Crowding: An External, Multi-Center Evaluation

    PubMed Central

    Hoot, Nathan R.; Epstein, Stephen K.; Allen, Todd L.; Jones, Spencer S.; Baumlin, Kevin M.; Chawla, Neal; Lee, Anna T.; Pines, Jesse M.; Klair, Amandeep K.; Gordon, Bradley D.; Flottemesch, Thomas J.; LeBlanc, Larry J.; Jones, Ian; Levin, Scott R.; Zhou, Chuan; Gadd, Cynthia S.; Aronsky, Dominik

    2009-01-01

    Objective To apply a previously described tool to forecast ED crowding at multiple institutions, and to assess its generalizability for predicting the near-future waiting count, occupancy level, and boarding count. Methods The ForecastED tool was validated using historical data from five institutions external to the development site. A sliding-window design separated the data for parameter estimation and forecast validation. Observations were sampled at consecutive 10-minute intervals during 12 months (n = 52,560) at four sites and 10 months (n = 44,064) at the fifth. Three outcome measures – the waiting count, occupancy level, and boarding count – were forecast 2, 4, 6, and 8 hours beyond each observation, and forecasts were compared to observed data at corresponding times. The reliability and calibration were measured following previously described methods. After linear calibration, the forecasting accuracy was measured using the median absolute error (MAE). Results The tool was successfully used for five different sites. Its forecasts were more reliable, better calibrated, and more accurate at 2 hours than at 8 hours. The reliability and calibration of the tool were similar between the original development site and external sites; the boarding count was an exception, which was less reliable at four out of five sites. Some variability in accuracy existed among institutions; when forecasting 4 hours into the future, the MAE of the waiting count ranged between 0.6 and 3.1 patients, the MAE of the occupancy level ranged between 9.0 and 14.5% of beds, and the MAE of the boarding count ranged between 0.9 and 2.7 patients. Conclusion The ForecastED tool generated potentially useful forecasts of input and throughput measures of ED crowding at five external sites, without modifying the underlying assumptions. Noting the limitation that this was not a real-time validation, ongoing research will focus on integrating the tool with ED information systems. PMID:19716629
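
    As a rough illustration of the evaluation described in this record, the sketch below applies a linear calibration of forecasts against observations and then summarizes accuracy with the median absolute error; the numbers are hypothetical.

```python
# Sketch: linear calibration of forecasts followed by the median absolute error (MAE).
import numpy as np

forecast = np.array([10.0, 14.0, 8.0, 20.0, 16.0, 12.0])   # e.g. predicted waiting count
observed = np.array([12.0, 15.0, 9.0, 24.0, 18.0, 11.0])   # corresponding observations

# Linear calibration: regress observed on forecast and apply the fitted line.
slope, intercept = np.polyfit(forecast, observed, deg=1)
calibrated = slope * forecast + intercept

mae = np.median(np.abs(calibrated - observed))
print(f"median absolute error after calibration: {mae:.2f} patients")
```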

  15. Comparison of Analytical Methods for the Determination of Uranium in Seawater Using Inductively Coupled Plasma Mass Spectrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, Jordana R.; Gill, Gary A.; Kuo, Li-Jung

    2016-04-20

    Trace element determinations in seawater by inductively coupled plasma mass spectrometry are analytically challenging due to the typically very low concentrations of the trace elements and the potential interference of the salt matrix. In this study, we compared seven analytical approaches for uranium analysis by inductively coupled plasma mass spectrometry (ICP-MS) of Sequim Bay seawater samples and three seawater certified reference materials (SLEW-3, CASS-5 and NASS-6). The methods evaluated include: direct analysis, Fe/Pd reductive precipitation, standard addition calibration, online automated dilution using an external calibration with and without matrix matching, and online automated pre-concentration. The method which produced the most accurate results was the method of standard addition calibration, recovering uranium from a Sequim Bay seawater sample at 101 ± 1.2%. The on-line preconcentration method and the automated dilution with matrix-matched calibration method also performed well. The two least effective methods were the direct analysis and the Fe/Pd reductive precipitation using sodium borohydride.
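
    For readers unfamiliar with standard addition calibration, the sketch below shows the basic calculation with purely illustrative spike levels and signals: the fitted line is extrapolated to its x-intercept to recover the analyte concentration already present in the sample.

```python
# Sketch of standard addition calibration (illustrative numbers only).
import numpy as np

added_conc = np.array([0.0, 1.0, 2.0, 4.0])       # spiked uranium, ng/mL (hypothetical)
signal     = np.array([1.00, 1.52, 2.01, 3.05])   # ICP-MS response (arbitrary units)

slope, intercept = np.polyfit(added_conc, signal, deg=1)
# The unknown concentration is the magnitude of the x-intercept of the fitted line.
sample_conc = intercept / slope
print(f"estimated concentration in the sample: {sample_conc:.2f} ng/mL")
```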

  16. Multivariate meta-analysis of individual participant data helped externally validate the performance and implementation of a prediction model.

    PubMed

    Snell, Kym I E; Hua, Harry; Debray, Thomas P A; Ensor, Joie; Look, Maxime P; Moons, Karel G M; Riley, Richard D

    2016-01-01

    Our aim was to improve meta-analysis methods for summarizing a prediction model's performance when individual participant data are available from multiple studies for external validation. We suggest multivariate meta-analysis for jointly synthesizing calibration and discrimination performance, while accounting for their correlation. The approach estimates a prediction model's average performance, the heterogeneity in performance across populations, and the probability of "good" performance in new populations. This allows different implementation strategies (e.g., recalibration) to be compared. Application is made to a diagnostic model for deep vein thrombosis (DVT) and a prognostic model for breast cancer mortality. In both examples, multivariate meta-analysis reveals that calibration performance is excellent on average but highly heterogeneous across populations unless the model's intercept (baseline hazard) is recalibrated. For the cancer model, the probability of "good" performance (defined by C statistic ≥0.7 and calibration slope between 0.9 and 1.1) in a new population was 0.67 with recalibration but 0.22 without recalibration. For the DVT model, even with recalibration, there was only a 0.03 probability of "good" performance. Multivariate meta-analysis can be used to externally validate a prediction model's calibration and discrimination performance across multiple populations and to evaluate different implementation strategies. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.

  17. Automated image quality assessment for chest CT scans.

    PubMed

    Reeves, Anthony P; Xie, Yiting; Liu, Shuang

    2018-02-01

    Medical image quality needs to be maintained at standards sufficient for effective clinical reading. Automated computer analytic methods may be applied to medical images for quality assessment. For chest CT scans in a lung cancer screening context, an automated quality assessment method is presented that characterizes image noise and image intensity calibration. This is achieved by image measurements in three automatically segmented homogeneous regions of the scan: external air, trachea lumen air, and descending aorta blood. Profiles of CT scanner behavior are also computed. The method has been evaluated on both phantom and real low-dose chest CT scans and results show that repeatable noise and calibration measures may be realized by automated computer algorithms. Noise and calibration profiles show relevant differences between different scanners and protocols. Automated image quality assessment may be useful for quality control for lung cancer screening and may enable performance improvements to automated computer analysis methods. © 2017 American Association of Physicists in Medicine.
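
    The sketch below illustrates the kind of region-based measurement described here: noise as the standard deviation and calibration as the deviation of the mean HU from a nominal value, in an automatically segmented homogeneous region. The volume, mask and the -1000 HU nominal value for air are assumptions for illustration, not the authors' implementation.

```python
# Sketch: noise and intensity-calibration measures from a segmented homogeneous region.
import numpy as np

def region_stats(volume_hu, mask):
    voxels = volume_hu[mask]
    return float(voxels.mean()), float(voxels.std())

# Hypothetical CT volume and an external-air mask (in practice from the segmentation step).
volume = np.random.normal(loc=-1000.0, scale=15.0, size=(16, 64, 64))
air_mask = np.zeros_like(volume, dtype=bool)
air_mask[:, :8, :8] = True

mean_hu, noise_hu = region_stats(volume, air_mask)
# Calibration error: deviation of the measured mean from the nominal HU value for air.
calibration_error = mean_hu - (-1000.0)
print(f"noise: {noise_hu:.1f} HU, calibration error: {calibration_error:.1f} HU")
```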

  18. Bulk and surface event identification in p-type germanium detectors

    NASA Astrophysics Data System (ADS)

    Yang, L. T.; Li, H. B.; Wong, H. T.; Agartioglu, M.; Chen, J. H.; Jia, L. P.; Jiang, H.; Li, J.; Lin, F. K.; Lin, S. T.; Liu, S. K.; Ma, J. L.; Sevda, B.; Sharma, V.; Singh, L.; Singh, M. K.; Singh, M. K.; Soma, A. K.; Sonay, A.; Yang, S. W.; Wang, L.; Wang, Q.; Yue, Q.; Zhao, W.

    2018-04-01

    The p-type point-contact germanium detectors have been adopted for light dark matter WIMP searches and the studies of low energy neutrino physics. These detectors exhibit anomalous behavior to events located at the surface layer. The previous spectral shape method to identify these surface events from the bulk signals relies on spectral shape assumptions and the use of external calibration sources. We report an improved method in separating them by taking the ratios among different categories of in situ event samples as calibration sources. Data from CDEX-1 and TEXONO experiments are re-examined using the ratio method. Results are shown to be consistent with the spectral shape method.

  19. Fast, externally triggered, digital phase controller for an optical lattice

    NASA Astrophysics Data System (ADS)

    Sadgrove, Mark; Nakagawa, Ken'ichi

    2011-11-01

    We present a method to control the phase of an optical lattice according to an external trigger signal. The method has a latency of less than 30 μs. Two phase locked digital synthesizers provide the driving signal for two acousto-optic modulators which control the frequency and phase of the counter-propagating beams which form a standing wave (optical lattice). A micro-controller with an external interrupt function is connected to the desired external signal, and updates the phase register of one of the synthesizers when the external signal changes. The standing wave (period λ/2 = 390 nm) can be moved by units of 49 nm with a mean jitter of 28 nm. The phase change is well known due to the digital nature of the synthesizer, and does not need calibration. The uses of the scheme include coherent control of atomic matter-wave dynamics.

  20. Robust calibration of an optical-lattice depth based on a phase shift

    NASA Astrophysics Data System (ADS)

    Cabrera-Gutiérrez, C.; Michon, E.; Brunaud, V.; Kawalec, T.; Fortun, A.; Arnal, M.; Billy, J.; Guéry-Odelin, D.

    2018-04-01

    We report on a method to calibrate the depth of an optical lattice. It consists of triggering the intrasite dipole mode of the cloud by a sudden phase shift. The corresponding oscillatory motion is directly related to the interband frequencies over a large range of lattice depths. Remarkably, for a moderate displacement, a single frequency dominates the oscillation of the zeroth and first orders of the interference pattern observed after a sufficiently long time of flight. The method is robust against atom-atom interactions and against the exact value of the additional weak external confinement superimposed on the optical lattice.

  1. Double modulation pyrometry: A radiometric method to measure surface temperatures of directly irradiated samples

    NASA Astrophysics Data System (ADS)

    Potamias, Dimitrios; Alxneit, Ivo; Wokaun, Alexander

    2017-09-01

    The design, implementation, calibration, and assessment of double modulation pyrometry to measure surface temperatures of radiatively heated samples in our 1 kW imaging furnace is presented. The method requires that the intensity of the external radiation can be modulated. This was achieved by a rotating blade mounted parallel to the optical axis of the imaging furnace. Double modulation pyrometry independently measures the external radiation reflected by the sample as well as the sum of thermal and reflected radiation, and extracts the thermal emission as the difference of these signals. Thus a two-step calibration is required: first, the relative gains of the measured signals are equalized, and then a temperature calibration is performed. For the latter, we transfer the calibration from a calibrated solar-blind pyrometer that operates at a different wavelength. We demonstrate that the worst-case systematic error associated with this procedure is about 300 K but becomes negligible if a reasonable estimate of the sample's emissivity is used. An analysis of the influence of the uncertainties in the calibration coefficients reveals that one of the five coefficients contributes almost 50% to the final temperature error. On a low-emission sample like platinum, the lower detection limit is around 1700 K and the accuracy typically about 20 K. Note that these moderate specifications are specific to the use of double modulation pyrometry at the imaging furnace and are mainly caused by the difficulty of achieving and maintaining good overlap between the hot zone, with a diameter of about 3 mm full width at half height, and the measurement spot, both of which are of similar size.

  2. Double modulation pyrometry: A radiometric method to measure surface temperatures of directly irradiated samples.

    PubMed

    Potamias, Dimitrios; Alxneit, Ivo; Wokaun, Alexander

    2017-09-01

    The design, implementation, calibration, and assessment of double modulation pyrometry to measure surface temperatures of radiatively heated samples in our 1 kW imaging furnace is presented. The method requires that the intensity of the external radiation can be modulated. This was achieved by a rotating blade mounted parallel to the optical axis of the imaging furnace. Double modulation pyrometry independently measures the external radiation reflected by the sample as well as the sum of thermal and reflected radiation, and extracts the thermal emission as the difference of these signals. Thus a two-step calibration is required: first, the relative gains of the measured signals are equalized, and then a temperature calibration is performed. For the latter, we transfer the calibration from a calibrated solar-blind pyrometer that operates at a different wavelength. We demonstrate that the worst-case systematic error associated with this procedure is about 300 K but becomes negligible if a reasonable estimate of the sample's emissivity is used. An analysis of the influence of the uncertainties in the calibration coefficients reveals that one of the five coefficients contributes almost 50% to the final temperature error. On a low-emission sample like platinum, the lower detection limit is around 1700 K and the accuracy typically about 20 K. Note that these moderate specifications are specific to the use of double modulation pyrometry at the imaging furnace and are mainly caused by the difficulty of achieving and maintaining good overlap between the hot zone, with a diameter of about 3 mm full width at half height, and the measurement spot, both of which are of similar size.

  3. Improving the Thermal, Radial and Temporal Accuracy of the Analytical Ultracentrifuge through External References

    PubMed Central

    Ghirlando, Rodolfo; Balbo, Andrea; Piszczek, Grzegorz; Brown, Patrick H.; Lewis, Marc S.; Brautigam, Chad A.; Schuck, Peter; Zhao, Huaying

    2013-01-01

    Sedimentation velocity (SV) is a method based on first-principles that provides a precise hydrodynamic characterization of macromolecules in solution. Due to recent improvements in data analysis, the accuracy of experimental SV data emerges as a limiting factor in its interpretation. Our goal was to unravel the sources of experimental error and develop improved calibration procedures. We implemented the use of a Thermochron iButton® temperature logger to directly measure the temperature of a spinning rotor, and detected deviations that can translate into an error of as much as 10% in the sedimentation coefficient. We further designed a precision mask with equidistant markers to correct for instrumental errors in the radial calibration, which were observed to span a range of 8.6%. The need for an independent time calibration emerged with use of the current data acquisition software (Zhao et al., doi 10.1016/j.ab.2013.02.011) and we now show that smaller but significant time errors of up to 2% also occur with earlier versions. After application of these calibration corrections, the sedimentation coefficients obtained from eleven instruments displayed a significantly reduced standard deviation of ∼ 0.7 %. This study demonstrates the need for external calibration procedures and regular control experiments with a sedimentation coefficient standard. PMID:23711724

  4. Improving the thermal, radial, and temporal accuracy of the analytical ultracentrifuge through external references.

    PubMed

    Ghirlando, Rodolfo; Balbo, Andrea; Piszczek, Grzegorz; Brown, Patrick H; Lewis, Marc S; Brautigam, Chad A; Schuck, Peter; Zhao, Huaying

    2013-09-01

    Sedimentation velocity (SV) is a method based on first principles that provides a precise hydrodynamic characterization of macromolecules in solution. Due to recent improvements in data analysis, the accuracy of experimental SV data emerges as a limiting factor in its interpretation. Our goal was to unravel the sources of experimental error and develop improved calibration procedures. We implemented the use of a Thermochron iButton temperature logger to directly measure the temperature of a spinning rotor and detected deviations that can translate into an error of as much as 10% in the sedimentation coefficient. We further designed a precision mask with equidistant markers to correct for instrumental errors in the radial calibration that were observed to span a range of 8.6%. The need for an independent time calibration emerged with use of the current data acquisition software (Zhao et al., Anal. Biochem., 437 (2013) 104-108), and we now show that smaller but significant time errors of up to 2% also occur with earlier versions. After application of these calibration corrections, the sedimentation coefficients obtained from 11 instruments displayed a significantly reduced standard deviation of approximately 0.7%. This study demonstrates the need for external calibration procedures and regular control experiments with a sedimentation coefficient standard. Published by Elsevier Inc.

  5. Automatic and robust extrinsic camera calibration for high-accuracy mobile mapping

    NASA Astrophysics Data System (ADS)

    Goeman, Werner; Douterloigne, Koen; Bogaert, Peter; Pires, Rui; Gautama, Sidharta

    2012-10-01

    A mobile mapping system (MMS) is the answer of the geoinformation community to the exponentially growing demand for various geospatial data with increasingly higher accuracies, captured by multiple sensors. As mobile mapping technology is pushed to explore its use for various applications on water, rail, or road, the need emerges for an external sensor calibration procedure which is portable, fast and easy to perform. This way, sensors can be mounted and demounted depending on the application requirements without the need for time-consuming calibration procedures. A new methodology is presented to provide a high-quality external calibration of cameras which is automatic, robust and foolproof. The MMS uses an Applanix POSLV420, which is a tightly coupled GPS/INS positioning system. The cameras used are Point Grey color video cameras synchronized with the GPS/INS system. The method uses a portable, standard ranging pole which needs to be positioned on a known ground control point. For calibration, a well-studied absolute orientation problem needs to be solved. Here, a mutual information based image registration technique is studied for automatic alignment of the ranging pole. Finally, a few benchmarking tests are done under various lighting conditions, which prove the methodology's robustness by showing high absolute stereo measurement accuracies of a few centimeters.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruuge, A; Erdi, Y; Mahmood, U

    Purpose: The conformance of Primary Diagnostic Monitors (PDMs) to the DICOM GSDF is increasingly required by several state and city regulators. Our purpose was to quantitatively characterize the luminance performance of the internal, built-in photometer of BARCO monitors against an externally calibrated luminance meter. Methods: Thirty-one PDMs (BARCO) were included in our analysis. An externally calibrated photometer (RaySafe Solo Light) was used to measure the luminance and illuminance values. Measured monitors were located at various hospital sites, radiology physicians' offices and radiology reading rooms. All measured PDMs were equipped with the manufacturer's built-in photometers and connected to the Barco MediCal QA web service for manual and automatic quality control measurements. PDM combinations included 1, 2 and 4 monitors depending on the location. TG-18 and SMPTE test patterns were used to evaluate monitor performance. Results: All the PDMs exceeded the luminance ratio of 250:1, as required by NYC PDM guidelines. One PDM failed the NYC requirement for the minimal luminance level of 350 cd/m2. As compared to the external photometer, the difference in measurement of the maximum luminance with the built-in photometer was found to exceed 5% on 6 of the PDMs measured, with a maximum deviation of 10%. The age of the monitors that failed was on average 5 years. All monitors passed the luminance uniformity test, which was within 30% from the center of the monitor to the 4 corner locations. Four PDMs failed the Gray Scale Display Function (GSDF) calibration. Conclusion: For the consistent display of medical images and continued conformance with the DICOM GSDF standard, it is essential to compare the performance of the built-in photometer with an externally calibrated photometer for monitors that are older than 5 years.

  7. Evaluation of the impact of matrix effect on quantification of pesticides in foods by gas chromatography-mass spectrometry using isotope-labeled internal standards.

    PubMed

    Yarita, Takashi; Aoyagi, Yoshie; Otake, Takamitsu

    2015-05-29

    The impact of the matrix effect in GC-MS quantification of pesticides in food using the corresponding isotope-labeled internal standards was evaluated. A spike-and-recovery study of nine target pesticides was first conducted using paste samples of corn, green soybean, carrot, and pumpkin. The observed analytical values using isotope-labeled internal standards were more accurate for most target pesticides than that obtained using the external calibration method, but were still biased from the spiked concentrations when a matrix-free calibration solution was used for calibration. The respective calibration curves for each target pesticide were also prepared using matrix-free calibration solutions and matrix-matched calibration solutions with blank soybean extract. The intensity ratio of the peaks of most target pesticides to that of the corresponding isotope-labeled internal standards was influenced by the presence of the matrix in the calibration solution; therefore, the observed slope varied. The ratio was also influenced by the type of injection method (splitless or on-column). These results indicated that matrix-matching of the calibration solution is required for very accurate quantification, even if isotope-labeled internal standards were used for calibration. Copyright © 2015 Elsevier B.V. All rights reserved.
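
    The sketch below illustrates the internal-standard calibration discussed in this record, built from the ratio of the analyte response to the response of its isotope-labeled internal standard using matrix-matched standards; all concentrations and peak areas are hypothetical.

```python
# Sketch of isotope-labeled internal-standard calibration with matrix matching.
import numpy as np

conc         = np.array([5.0, 10.0, 25.0, 50.0])           # ng/g, matrix-matched standards
analyte_area = np.array([1100.0, 2300.0, 5600.0, 11500.0]) # analyte peak areas
istd_area    = np.array([5000.0, 5100.0, 4950.0, 5050.0])  # labeled internal standard areas

ratio = analyte_area / istd_area
slope, intercept = np.polyfit(conc, ratio, deg=1)

# Quantify an unknown sample from its measured peak-area ratio.
sample_ratio = 0.95
sample_conc = (sample_ratio - intercept) / slope
print(f"estimated pesticide concentration: {sample_conc:.1f} ng/g")
```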

  8. System for characterizing semiconductor materials and photovoltaic devices through calibration

    DOEpatents

    Sopori, Bhushan L.; Allen, Larry C.; Marshall, Craig; Murphy, Robert C.; Marshall, Todd

    1998-01-01

    A method and apparatus for measuring characteristics of a piece of material, typically semiconductor materials including photovoltaic devices. The characteristics may include dislocation defect density, grain boundaries, reflectance, external LBIC, internal LBIC, and minority carrier diffusion length. The apparatus includes a light source, an integrating sphere, and a detector communicating with a computer. The measurement or calculation of the characteristics is calibrated to provide accurate, absolute values. The calibration is performed by substituting a standard sample for the piece of material, the sample having a known quantity of one or more of the relevant characteristics. The quantity measured by the system of the relevant characteristic is compared to the known quantity and a calibration constant is created thereby.

  9. System for characterizing semiconductor materials and photovoltaic devices through calibration

    DOEpatents

    Sopori, B.L.; Allen, L.C.; Marshall, C.; Murphy, R.C.; Marshall, T.

    1998-05-26

    A method and apparatus are disclosed for measuring characteristics of a piece of material, typically semiconductor materials including photovoltaic devices. The characteristics may include dislocation defect density, grain boundaries, reflectance, external LBIC, internal LBIC, and minority carrier diffusion length. The apparatus includes a light source, an integrating sphere, and a detector communicating with a computer. The measurement or calculation of the characteristics is calibrated to provide accurate, absolute values. The calibration is performed by substituting a standard sample for the piece of material, the sample having a known quantity of one or more of the relevant characteristics. The quantity measured by the system of the relevant characteristic is compared to the known quantity and a calibration constant is created thereby. 44 figs.

  10. Fast calibration of electromagnetically tracked oblique-viewing rigid endoscopes.

    PubMed

    Liu, Xinyang; Rice, Christina E; Shekhar, Raj

    2017-10-01

    The oblique-viewing (i.e., angled) rigid endoscope is a commonly used tool in conventional endoscopic surgeries. The relative rotation between its two moveable parts, the telescope and the camera head, creates a rotation offset between an object's actual orientation and its projection in the camera image. A calibration method tailored to compensate for such offset is needed. We developed a fast calibration method for oblique-viewing rigid endoscopes suitable for clinical use. In contrast to prior approaches based on optical tracking, we used electromagnetic (EM) tracking as the external tracking hardware to improve compactness and practicality. Two EM sensors were mounted on the telescope and the camera head, respectively, with considerations to minimize EM tracking errors. Single-image calibration was incorporated into the method, and a sterilizable plate, laser-marked with the calibration pattern, was also developed. Furthermore, we proposed a general algorithm to estimate the rotation center in the camera image. Formulas for updating the camera matrix in terms of clockwise and counterclockwise rotations were also developed. The proposed calibration method was validated using a conventional [Formula: see text], 5-mm laparoscope. Freehand calibrations were performed using the proposed method, and the calibration time averaged 2 min and 8 s. The calibration accuracy was evaluated in a simulated clinical setting with several surgical tools present in the magnetic field of EM tracking. The root-mean-square re-projection error averaged 4.9 pixels (range 2.4-8.5 pixels, with image resolution of [Formula: see text]) for rotation angles ranging from [Formula: see text] to [Formula: see text]. We developed a method for fast and accurate calibration of oblique-viewing rigid endoscopes. The method was also designed to be performed in the operating room and will therefore support clinical translation of many emerging endoscopic computer-assisted surgical systems.

  11. OPERATION SUNBEAM, SHOT SMALL BOY. Project Officer’s Report - Project 2.2, Measurement of Fast-Neutron Dose Rate as a Function of Time

    DTIC Science & Technology

    1985-09-01

    [Indexed excerpt consists of fragments only: report sections on the SPIDER calibration, thermistor temperature detector calibration, and amplifier calibration, plus construction notes describing a bunker of welded one-inch soft-steel plates with external plates heliarc-welded to Kovar flanges (metal-to-ceramic seals).]

  12. Adulteration of diesel/biodiesel blends by vegetable oil as determined by Fourier transform (FT) near infrared spectrometry and FT-Raman spectroscopy.

    PubMed

    Oliveira, Flavia C C; Brandão, Christian R R; Ramalho, Hugo F; da Costa, Leonardo A F; Suarez, Paulo A Z; Rubim, Joel C

    2007-03-28

    In this work it is shown that the routine ASTM methods (ASTM 4052, ASTM D 445, ASTM D 4737, ASTM D 93, and ASTM D 86) recommended by the ANP (the Brazilian National Agency for Petroleum, Natural Gas and Biofuels) to determine the quality of diesel/biodiesel blends are not suitable to prevent the adulteration of B2 or B5 blends with vegetable oils. Considering the previous and current problems with fuel adulteration in Brazil, we have investigated the application of vibrational spectroscopy (Fourier transform (FT) near-infrared spectrometry and FT-Raman) to identify adulterations of B2 and B5 blends with vegetable oils. Partial least squares regression (PLS), principal component regression (PCR), and artificial neural network (ANN) calibration models were designed and their relative performances were evaluated by external validation using the F-test. The PCR, PLS, and ANN calibration models based on FT near-infrared spectrometry and FT-Raman spectroscopy were designed using 120 samples. Another 62 samples were used in the validation and external validation, for a total of 182 samples. The results show that, among the designed calibration models, the ANN/FT-Raman model presented the best accuracy (0.028%, w/w) for samples used in the external validation.
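
    A minimal sketch of the calibration-and-external-validation workflow described here is given below, using synthetic spectra and scikit-learn's PLSRegression as a stand-in for the chemometric software actually used; sample counts mirror the abstract but the data are random placeholders.

```python
# Sketch: PLS calibration on 120 samples with external validation on 62 held-out samples.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(182, 400))        # 182 spectra, 400 spectral channels (synthetic)
y = rng.uniform(0.0, 5.0, size=182)    # adulterant content, % w/w (hypothetical)

X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=62, random_state=0)

pls = PLSRegression(n_components=8).fit(X_cal, y_cal)
y_pred = pls.predict(X_val).ravel()
rmsep = np.sqrt(np.mean((y_pred - y_val) ** 2))   # accuracy on the external validation set
print(f"RMSEP on external validation set: {rmsep:.3f} % w/w")
```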

  13. On the use of mobile phones and wearable microphones for noise exposure measurements: Calibration and measurement accuracy

    NASA Astrophysics Data System (ADS)

    Dumoulin, Romain

    Despite the fact that noise-induced hearing loss remains the number one occupational disease in developed countries, individual noise exposure levels are still rarely known and infrequently tracked. Indeed, efforts to standardize noise exposure levels present disadvantages such as costly instrumentation and difficulties associated with on-site implementation. Given their advanced technical capabilities and widespread daily usage, mobile phones could be used to measure noise levels and make noise monitoring more accessible. However, the use of mobile phones for measuring noise exposure is currently limited due to the lack of formal procedures for their calibration and challenges regarding the measurement procedure. Our research investigated the calibration of mobile phone-based solutions for measuring noise exposure using a mobile phone's built-in microphones and wearable external microphones. The proposed calibration approach integrated corrections that took into account microphone placement error. The corrections were of two types: frequency-dependent, using a digital filter, and noise level-dependent, based on the difference between the C-weighted and A-weighted noise levels measured by the phone. The electro-acoustical limitations and measurement calibration procedure of the mobile phone were investigated. The study also sought to quantify the effect of noise exposure characteristics on the accuracy of calibrated mobile phone measurements. Measurements were carried out in reverberant and semi-anechoic chambers with several mobile phone units of the same model, two types of external devices (an earpiece and a headset with an in-line microphone) and an acoustical test fixture (ATF). The proposed calibration approach significantly improved the accuracy of the noise level measurements in diffuse and free fields, with better results in the diffuse field and with ATF positions causing little or no acoustic shadowing. Several sources of errors and uncertainties were identified, including the errors associated with inter-unit variability, the presence of signal saturation and the microphone placement relative to the source and the wearer. The results of the investigations and validation measurements led to recommendations regarding the measurement procedure, including the use of external microphones having lower sensitivity, and provided the basis for a standardized and unique factory default calibration method intended for implementation in any mobile phone. A user-defined adjustment was proposed to minimize the errors associated with calibration and the acoustical field. Mobile phones implementing the proposed laboratory calibration and used with external microphones showed great potential as noise exposure instruments. Combined with their potential as training and prevention tools, the expansion of their use could significantly help reduce the risks of noise-induced hearing loss.

  14. Predicting herbivore faecal nitrogen using a multispecies near-infrared reflectance spectroscopy calibration.

    PubMed

    Villamuelas, Miriam; Serrano, Emmanuel; Espunyes, Johan; Fernández, Néstor; López-Olvera, Jorge R; Garel, Mathieu; Santos, João; Parra-Aguado, María Ángeles; Ramanzin, Maurizio; Fernández-Aguilar, Xavier; Colom-Cadena, Andreu; Marco, Ignasi; Lavín, Santiago; Bartolomé, Jordi; Albanell, Elena

    2017-01-01

    Optimal management of free-ranging herbivores requires the accurate assessment of an animal's nutritional status. For this purpose 'near-infrared reflectance spectroscopy' (NIRS) is very useful, especially when nutritional assessment is done through faecal indicators such as faecal nitrogen (FN). In order to perform an NIRS calibration, the default protocol recommends starting by generating an initial equation based on at least 50-75 samples from the given species. Although this protocol optimises prediction accuracy, it limits the use of NIRS with rare or endangered species where sample sizes are often small. To overcome this limitation we tested a single NIRS equation (i.e., multispecies calibration) to predict FN in herbivores. Firstly, we used five herbivore species with highly contrasting digestive physiologies to build monospecies and multispecies calibrations, namely horse, sheep, Pyrenean chamois, red deer and European rabbit. Secondly, the equation accuracy was evaluated by two procedures using: (1) an external validation with samples from the same species, which were not used in the calibration process; and (2) samples from different ungulate species, specifically Alpine ibex, domestic goat, European mouflon, roe deer and cattle. The multispecies equation was highly accurate in terms of the coefficient of determination for calibration R2 = 0.98, standard error of validation SECV = 0.10, standard error of external validation SEP = 0.12, ratio of performance to deviation RPD = 5.3, and range error of prediction RER = 28.4. The accuracy of the multispecies equation to predict other herbivore species was also satisfactory (R2 > 0.86, SEP < 0.27, RPD > 2.6, and RER > 8.1). Lastly, the agreement between multi- and monospecies calibrations was also confirmed by the Bland-Altman method. In conclusion, our single multispecies equation can be used as a reliable, cost-effective, easy and powerful analytical method to assess FN in a wide range of herbivore species.
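
    The validation statistics quoted in this record (SEP, RPD, RER) can be computed as in the sketch below; the definitions shown are common chemometric conventions and the reference/predicted values are hypothetical, so details may differ from the study itself.

```python
# Sketch: external-validation statistics for an NIRS calibration (SEP, RPD, RER).
import numpy as np

def validation_stats(reference, predicted):
    residuals = predicted - reference
    bias = residuals.mean()
    sep = np.sqrt(np.sum((residuals - bias) ** 2) / (residuals.size - 1))  # std. error of prediction
    rpd = reference.std(ddof=1) / sep                   # ratio of performance to deviation
    rer = (reference.max() - reference.min()) / sep     # range error ratio
    return sep, rpd, rer

ref  = np.array([1.2, 1.8, 2.5, 3.1, 2.0, 1.5, 2.8])    # faecal N, reference values (hypothetical)
pred = np.array([1.3, 1.7, 2.4, 3.0, 2.1, 1.6, 2.9])    # NIRS-predicted values
sep, rpd, rer = validation_stats(ref, pred)
print(f"SEP={sep:.3f}, RPD={rpd:.1f}, RER={rer:.1f}")
```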

  15. In Situ Determination of Trace Elements in Fish Otoliths by Laser Ablation Double Focusing Sector Field Inductively Coupled Plasma Mass Spectrometry Using a Solution Standard Addition Calibration Method

    NASA Astrophysics Data System (ADS)

    Chen, Z.; Jones, C. M.

    2002-05-01

    Microchemistry of fish otoliths (fish ear bones) is a very useful tool for monitoring aquatic environments and fish migration. However, determination of the elemental composition of fish otoliths by ICP-MS has been limited to either analysis of dissolved sample solutions or measurement of a limited number of trace elements by laser ablation (LA)-ICP-MS due to low sensitivity, lack of available calibration standards, and the complexity of polyatomic molecular interferences. In this study, a method was developed for in situ determination of trace elements in fish otoliths by laser ablation double focusing sector field ultra-high-sensitivity Finnigan Element 2 ICP-MS using a solution standard addition calibration method. Due to the lack of matrix-matched solid calibration standards, sixteen trace elements (Na, Mg, P, Cr, Mn, Fe, Ni, Cu, Rb, Sr, Y, Cd, La, Ba, Pb and U) were determined using a solution standard calibration with Ca as an internal standard. Flexibility, easy preparation and stable signals are the advantages of using solution calibration standards. In order to resolve polyatomic molecular interferences, medium resolution (M/delta M > 4000) was used for some elements (Na, Mg, P, Cr, Mn, Fe, Ni, and Cu). Both external calibration and standard addition quantification strategies are compared and discussed. Precision, accuracy, and limits of detection are presented.

  16. Research on a high-precision calibration method for tunable lasers

    NASA Astrophysics Data System (ADS)

    Xiang, Na; Li, Zhengying; Gui, Xin; Wang, Fan; Hou, Yarong; Wang, Honghai

    2018-03-01

    Tunable lasers are widely used in the field of optical fiber sensing, but nonlinear tuning exists even for zero external disturbance and limits the accuracy of the demodulation. In this paper, a high-precision calibration method for tunable lasers is proposed. A comb filter is introduced, and the real-time output wavelength and scanning rate of the laser are calibrated by linearly fitting several time-frequency reference points obtained from it, while the beat signal generated by the auxiliary interferometer is interpolated and frequency-multiplied to find more accurate zero-crossing points; these points are used as wavelength counters to resample the comb signal and correct the nonlinear effect, which ensures that the time-frequency reference points of the comb filter are linear. A stability experiment and a strain sensing experiment verify the calibration precision of this method. The experimental results show that the stability and wavelength resolution of the FBG demodulation can reach 0.088 pm and 0.030 pm, respectively, using a tunable laser calibrated by the proposed method. We have also compared the demodulation accuracy in the presence or absence of the comb filter, with the result showing that the introduction of the comb filter results in a 15-fold wavelength resolution enhancement.
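
    The sketch below illustrates the resampling idea described here with synthetic data: zero crossings of a chirped auxiliary-interferometer beat signal are located and used as (equally spaced in wavenumber) sampling markers for the measured signal. The waveforms and parameters are placeholders, not the authors' data.

```python
# Sketch: zero-crossing detection on a beat signal and resampling onto those crossings.
import numpy as np

t = np.linspace(0.0, 1.0, 2000)                              # scan time axis (synthetic)
beat = np.sin(2 * np.pi * (40 * t + 15 * t ** 2))            # chirped beat (nonlinear tuning)
spectrum = np.cos(2 * np.pi * (40 * t + 15 * t ** 2) * 3)    # signal to be linearized

# Locate zero crossings of the beat signal by sign change plus linear interpolation.
idx = np.where(np.signbit(beat[:-1]) != np.signbit(beat[1:]))[0]
t_zero = t[idx] - beat[idx] * (t[idx + 1] - t[idx]) / (beat[idx + 1] - beat[idx])

# Each crossing corresponds to a fixed wavenumber increment; resample the signal there.
resampled = np.interp(t_zero, t, spectrum)
print(f"{t_zero.size} zero crossings used as resampling markers")
```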

  17. Prediction models for clustered data: comparison of a random intercept and standard regression model.

    PubMed

    Bouwmeester, Walter; Twisk, Jos W R; Kappen, Teus H; van Klei, Wilton A; Moons, Karel G M; Vergouwe, Yvonne

    2013-02-15

    When study data are clustered, standard regression analysis is considered inappropriate and analytical techniques for clustered data need to be used. For prediction research in which the interest of predictor effects is on the patient level, random effect regression models are probably preferred over standard regression analysis. It is well known that the random effect parameter estimates and the standard logistic regression parameter estimates are different. Here, we compared random effect and standard logistic regression models for their ability to provide accurate predictions. Using an empirical study on 1642 surgical patients at risk of postoperative nausea and vomiting, who were treated by one of 19 anesthesiologists (clusters), we developed prognostic models either with standard or random intercept logistic regression. External validity of these models was assessed in new patients from other anesthesiologists. We supported our results with simulation studies using intra-class correlation coefficients (ICC) of 5%, 15%, or 30%. Standard performance measures and measures adapted for the clustered data structure were estimated. The model developed with random effect analysis showed better discrimination than the standard approach, if the cluster effects were used for risk prediction (standard c-index of 0.69 versus 0.66). In the external validation set, both models showed similar discrimination (standard c-index 0.68 versus 0.67). The simulation study confirmed these results. For datasets with a high ICC (≥15%), model calibration was only adequate in external subjects, if the used performance measure assumed the same data structure as the model development method: standard calibration measures showed good calibration for the standard developed model, calibration measures adapting the clustered data structure showed good calibration for the prediction model with random intercept. The models with random intercept discriminate better than the standard model only if the cluster effect is used for predictions. The prediction model with random intercept had good calibration within clusters.

  18. Reduction of interferences in graphite furnace atomic absorption spectrometry by multiple linear regression modelling

    NASA Astrophysics Data System (ADS)

    Grotti, Marco; Abelmoschi, Maria Luisa; Soggia, Francesco; Tiberiade, Christian; Frache, Roberto

    2000-12-01

    The multivariate effects of Na, K, Mg and Ca as nitrates on the electrothermal atomisation of manganese, cadmium and iron were studied by multiple linear regression modelling. Since the models proved to efficiently predict the effects of the considered matrix elements in a wide range of concentrations, they were applied to correct the interferences occurring in the determination of trace elements in seawater after pre-concentration of the analytes. In order to obtain a statistically significant number of samples, a large volume of the certified seawater reference materials CASS-3 and NASS-3 was treated with Chelex-100 resin; then, the chelating resin was separated from the solution, divided into several sub-samples, each of them was eluted with nitric acid and analysed by electrothermal atomic absorption spectrometry (for trace element determinations) and inductively coupled plasma optical emission spectrometry (for matrix element determinations). To minimise any other systematic error besides that due to matrix effects, accuracy of the pre-concentration step and contamination levels of the procedure were checked by inductively coupled plasma mass spectrometric measurements. Analytical results obtained by applying the multiple linear regression models were compared with those obtained with other calibration methods, such as external calibration using acid-based standards, external calibration using matrix-matched standards and the analyte addition technique. Empirical models proved to efficiently reduce interferences occurring in the analysis of real samples, allowing an improvement of accuracy better than for other calibration methods.
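
    A minimal sketch of the modelling idea in this record follows: the relative analyte signal is expressed as a linear function of the concomitant-element concentrations, and the fitted model is then used to correct a measured signal. All concentrations and signals are hypothetical.

```python
# Sketch: multiple linear regression model of matrix effects and its use for correction.
import numpy as np

# Columns: Na, K, Mg, Ca concentrations (g/L) in the measurement solutions (hypothetical).
matrix_conc = np.array([
    [0.0, 0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 0.5, 0.0],
    [0.0, 0.0, 0.0, 0.5],
    [1.0, 1.0, 0.5, 0.5],
])
# Observed signal relative to a matrix-free standard (1.0 = no interference).
rel_signal = np.array([1.00, 0.92, 0.95, 0.97, 0.96, 0.83])

# Fit rel_signal = 1 + b1*Na + b2*K + b3*Mg + b4*Ca by least squares on the deviation.
coeffs, *_ = np.linalg.lstsq(matrix_conc, rel_signal - 1.0, rcond=None)

# Correct a measured signal for a sample whose matrix composition is known.
sample_matrix = np.array([0.8, 0.4, 0.3, 0.2])
predicted_suppression = 1.0 + sample_matrix @ coeffs
corrected_signal = 0.75 / predicted_suppression
print(f"corrected signal: {corrected_signal:.3f}")
```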

  19. Calibration of an Outdoor Distributed Camera Network with a 3D Point Cloud

    PubMed Central

    Ortega, Agustín; Silva, Manuel; Teniente, Ernesto H.; Ferreira, Ricardo; Bernardino, Alexandre; Gaspar, José; Andrade-Cetto, Juan

    2014-01-01

    Outdoor camera networks are becoming ubiquitous in critical urban areas of the largest cities around the world. Although current applications of camera networks are mostly tailored to video surveillance, recent research projects are exploiting their use to aid robotic systems in people-assisting tasks. Such systems require precise calibration of the internal and external parameters of the distributed camera network. Despite the fact that camera calibration has been an extensively studied topic, the development of practical methods for user-assisted calibration that minimize user intervention time and maximize precision still poses significant challenges. These camera systems have non-overlapping fields of view, are subject to environmental stress, and are likely to suffer frequent recalibration. In this paper, we propose the use of a 3D map covering the area to support the calibration process and develop an automated method that allows quick and precise calibration of a large camera network. We present two case studies of the proposed calibration method: one is the calibration of the Barcelona Robot Lab camera network, which also includes direct mappings (homographies) between image coordinates and world points in the ground plane (walking areas) to support person and robot detection and localization algorithms. The second case consists of improving the GPS positioning of geo-tagged images taken with a mobile device in the Facultat de Matemàtiques i Estadística (FME) patio at the Universitat Politècnica de Catalunya (UPC). PMID:25076221

  20. Calibration of an outdoor distributed camera network with a 3D point cloud.

    PubMed

    Ortega, Agustín; Silva, Manuel; Teniente, Ernesto H; Ferreira, Ricardo; Bernardino, Alexandre; Gaspar, José; Andrade-Cetto, Juan

    2014-07-29

    Outdoor camera networks are becoming ubiquitous in critical urban areas of the largest cities around the world. Although current applications of camera networks are mostly tailored to video surveillance, recent research projects are exploiting their use to aid robotic systems in people-assisting tasks. Such systems require precise calibration of the internal and external parameters of the distributed camera network. Despite the fact that camera calibration has been an extensively studied topic, the development of practical methods for user-assisted calibration that minimize user intervention time and maximize precision still poses significant challenges. These camera systems have non-overlapping fields of view, are subject to environmental stress, and are likely to suffer frequent recalibration. In this paper, we propose the use of a 3D map covering the area to support the calibration process and develop an automated method that allows quick and precise calibration of a large camera network. We present two case studies of the proposed calibration method: one is the calibration of the Barcelona Robot Lab camera network, which also includes direct mappings (homographies) between image coordinates and world points in the ground plane (walking areas) to support person and robot detection and localization algorithms. The second case consists of improving the GPS positioning of geo-tagged images taken with a mobile device in the Facultat de Matemàtiques i Estadística (FME) patio at the Universitat Politècnica de Catalunya (UPC).
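
    As a rough illustration of the ground-plane homographies mentioned in the two records above, the sketch below estimates a homography from a few image-to-world point correspondences and uses it to localize a detection; the point values are invented and OpenCV is used here as a convenient tool, not necessarily the authors' toolchain.

```python
# Sketch: homography from image coordinates to ground-plane (walking-area) coordinates.
import numpy as np
import cv2

image_pts = np.array([[100, 400], [540, 420], [520, 120], [140, 100]], dtype=np.float32)
world_pts = np.array([[0.0, 0.0], [5.0, 0.0], [5.0, 8.0], [0.0, 8.0]], dtype=np.float32)

H, _ = cv2.findHomography(image_pts, world_pts)

# Map a person detection (image pixel) to ground-plane coordinates in metres.
detection = np.array([[[320.0, 300.0]]], dtype=np.float32)
ground_xy = cv2.perspectiveTransform(detection, H)
print(ground_xy.ravel())
```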

  1. Analysis of iodinated haloacetic acids in drinking water by reversed-phase liquid chromatography/electrospray ionization/tandem mass spectrometry with large volume direct aqueous injection.

    PubMed

    Li, Yongtao; Whitaker, Joshua S; McCarty, Christina L

    2012-07-06

    A large volume direct aqueous injection method was developed for the analysis of iodinated haloacetic acids in drinking water by using reversed-phase liquid chromatography/electrospray ionization/tandem mass spectrometry in the negative ion mode. Both the external and internal standard calibration methods were studied for the analysis of monoiodoacetic acid, chloroiodoacetic acid, bromoiodoacetic acid, and diiodoacetic acid in drinking water. The use of a divert valve technique for the mobile phase solvent delay, along with isotopically labeled analogs used as internal standards, effectively reduced and compensated for the ionization suppression typically caused by coexisting common inorganic anions. Under the optimized method conditions, the mean absolute and relative recoveries resulting from the replicate fortified deionized water and chlorinated drinking water analyses were 83-107% with a relative standard deviation of 0.7-11.7% and 84-111% with a relative standard deviation of 0.8-12.1%, respectively. The method detection limits resulting from the external and internal standard calibrations, based on seven fortified deionized water replicates, were 0.7-2.3 ng/L and 0.5-1.9 ng/L, respectively. Copyright © 2012 Elsevier B.V. All rights reserved.

  2. Multiplexed MRM-Based Protein Quantitation Using Two Different Stable Isotope-Labeled Peptide Isotopologues for Calibration.

    PubMed

    LeBlanc, André; Michaud, Sarah A; Percy, Andrew J; Hardie, Darryl B; Yang, Juncong; Sinclair, Nicholas J; Proudfoot, Jillaine I; Pistawka, Adam; Smith, Derek S; Borchers, Christoph H

    2017-07-07

    When quantifying endogenous plasma proteins for fundamental and biomedical research - as well as for clinical applications - precise, reproducible, and robust assays are required. Targeted detection of peptides in a bottom-up strategy is the most common and precise mass spectrometry-based quantitation approach when combined with the use of stable isotope-labeled peptides. However, when measuring protein in plasma, the unknown endogenous levels prevent the implementation of the best calibration strategies, since no blank matrix is available. Consequently, several alternative calibration strategies are employed by different laboratories. In this study, these methods were compared to a new approach using two different stable isotope-labeled standard (SIS) peptide isotopologues for each endogenous peptide to be quantified, enabling an external calibration curve as well as the quality control samples to be prepared in pooled human plasma without interference from endogenous peptides. This strategy improves the analytical performance of the assay and enables the accuracy of the assay to be monitored, which can also facilitate method development and validation.

  3. Efficient quantification of water content in edible oils by headspace gas chromatography with vapour phase calibration.

    PubMed

    Xie, Wei-Qi; Gong, Yi-Xian; Yu, Kong-Xian

    2018-06-01

    An automated and accurate headspace gas chromatographic (HS-GC) technique was investigated for rapidly quantifying the water content in edible oils. In this method, multiple headspace extraction (MHE) procedures were used to analyse the integrated water content from the edible oil sample. A simple vapour-phase calibration technique with an external vapour standard was used to calibrate both the water content in the gas phase and the total weight of water in the edible oil sample, after which the water in edible oils can be quantified. The data showed that the relative standard deviation of the present HS-GC method in the precision test was less than 1.13%, and the relative differences between the new method and a reference method (i.e. the oven-drying method) were no more than 1.62%. The present HS-GC method is automated, accurate and efficient, and can be a reliable tool for quantifying water content in edible oil related products and research. © 2017 Society of Chemical Industry.
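
    A common way to evaluate an MHE series, shown in the sketch below with invented peak areas, assumes the successive extraction areas decay geometrically so the total can be obtained by extrapolation; this is a generic MHE calculation and may differ in detail from the cited method.

```python
# Sketch: extrapolating a multiple headspace extraction (MHE) series to the total amount.
import numpy as np

areas = np.array([1520.0, 1080.0, 770.0, 545.0])   # water peak areas, extractions 1..4 (hypothetical)

# Fit ln(area) versus extraction number to obtain the geometric decay ratio q.
n = np.arange(areas.size)
slope, intercept = np.polyfit(n, np.log(areas), deg=1)
q = np.exp(slope)

total_area = areas[0] / (1.0 - q)    # sum of the geometric series A1 + A1*q + A1*q^2 + ...
# total_area would then be converted to a mass of water via the vapour-standard calibration.
print(f"decay ratio q = {q:.3f}, extrapolated total area = {total_area:.0f}")
```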

  4. A Vision-Based Self-Calibration Method for Robotic Visual Inspection Systems

    PubMed Central

    Yin, Shibin; Ren, Yongjie; Zhu, Jigui; Yang, Shourui; Ye, Shenghua

    2013-01-01

    A vision-based robot self-calibration method is proposed in this paper to evaluate the kinematic parameter errors of a robot using a visual sensor mounted on its end-effector. This approach could be performed in the industrial field without external, expensive apparatus or an elaborate setup. A robot Tool Center Point (TCP) is defined in the structural model of a line-structured laser sensor, and aligned to a reference point fixed in the robot workspace. A mathematical model is established to formulate the misalignment errors with kinematic parameter errors and TCP position errors. Based on the fixed point constraints, the kinematic parameter errors and TCP position errors are identified with an iterative algorithm. Compared to the conventional methods, this proposed method eliminates the need for robot base-frame and hand-eye calibrations, shortens the error propagation chain, and makes the calibration process more accurate and convenient. A validation experiment is performed on an ABB IRB2400 robot. An optimal configuration of the number and distribution of fixed points in the robot workspace is obtained based on the experimental results. Comparative experiments reveal that there is a significant improvement of the measuring accuracy of the robotic visual inspection system.

  5. Wavelength-Filter Based Spectral Calibrated Wavenumber-Linearization in 1.3 μm Spectral Domain Optical Coherence Tomography.

    PubMed

    Wijeisnghe, Ruchire Eranga Henry; Cho, Nam Hyun; Park, Kibeom; Shin, Yongseung; Kim, Jeehyun

    2013-12-01

    In this study, we demonstrate an enhanced spectral calibration method for 1.3 μm spectral-domain optical coherence tomography (SD-OCT). The calibration method using a wavelength filter simplifies the SD-OCT system, and the axial resolution and the overall speed of the OCT system can be dramatically improved as well. An externally connected wavelength filter is utilized to obtain the relation between wavenumber and pixel position. During the calibration process the wavelength filter is placed after a broadband source by connecting it through an optical circulator. The filtered spectrum, with a narrow line width of 0.5 nm, is detected by using a line-scan camera. The method does not require a filter or a software recalibration algorithm for imaging, as it simply resamples the OCT signal from the detector array without employing rescaling or interpolation methods. One of the main drawbacks of SD-OCT, the broadening of the point spread functions (PSFs) with increasing imaging depth, can be compensated by increasing the wavenumber-linearization order. The sensitivity of our system was measured at 99.8 dB at an imaging depth of 2.1 mm compared with the uncompensated case.

  6. Low Cost and Efficient 3d Indoor Mapping Using Multiple Consumer Rgb-D Cameras

    NASA Astrophysics Data System (ADS)

    Chen, C.; Yang, B. S.; Song, S.

    2016-06-01

    Driven by the miniaturization and lightweight design of positioning and remote sensing sensors, as well as the urgent need to fuse indoor and outdoor maps for next-generation navigation, 3D indoor mapping from mobile scanning is a hot research and application topic. The point clouds with auxiliary data such as colour and infrared images derived from a 3D indoor mobile mapping suite can be used in a variety of novel applications, including indoor scene visualization, automated floorplan generation, gaming, reverse engineering, navigation, simulation and so on. State-of-the-art 3D indoor mapping systems equipped with multiple laser scanners produce accurate point clouds of building interiors containing billions of points. However, these laser scanner based systems are mostly expensive and not portable. Low-cost consumer RGB-D cameras provide an alternative way to solve the core challenge of indoor mapping, that is, capturing the detailed underlying geometry of building interiors. Nevertheless, RGB-D cameras have a very limited field of view, resulting in low efficiency in the data collection stage and incomplete datasets missing major building structures (e.g. ceilings, walls). Endeavouring to collect a complete scene without data gaps using a single RGB-D camera is not technically sound because of the large amount of human labour required and the number of position parameters that need to be solved. To find an efficient and low-cost way to solve 3D indoor mapping, in this paper we present an indoor mapping suite prototype that is built upon a novel calibration method which calibrates the internal and external parameters of multiple RGB-D cameras. Three Kinect sensors are mounted on a rig with different view directions to form a large field of view. The calibration procedure is threefold: (1) the internal parameters of the colour and infrared cameras inside each Kinect are calibrated using a chessboard pattern; (2) the external parameters between the colour and infrared camera inside each Kinect are calibrated using a chessboard pattern; (3) the external parameters between the Kinects are first calculated using a pre-set calibration field and further refined by an iterative closest point algorithm. Experiments are carried out to validate the proposed method on RGB-D datasets collected by the indoor mapping suite prototype. The effectiveness and accuracy of the proposed method are evaluated by comparing the point clouds derived from the prototype with ground truth data collected by a commercial terrestrial laser scanner at ultra-high density. The overall analysis of the results shows that the proposed method achieves seamless integration of multiple point clouds from different RGB-D cameras collected at 30 frames per second.
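
    The sketch below illustrates step (1) of the calibration procedure described above, intrinsic calibration of one colour camera from chessboard images; it uses OpenCV as a convenient tool, and the file names, board size and square size are placeholders rather than details from the paper.

```python
# Sketch: chessboard-based intrinsic calibration of a single colour camera with OpenCV.
import glob
import cv2
import numpy as np

board_size = (9, 6)      # inner corners per chessboard row and column (placeholder)
square_size = 0.025      # chessboard square size in metres (placeholder)

# 3D object points of the chessboard corners in the board coordinate frame.
objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square_size

obj_points, img_points = [], []
for fname in glob.glob("colour_*.png"):          # hypothetical image file names
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, board_size)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
print("intrinsic matrix:\n", K)
```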

  7. Constrained Maximum Likelihood Estimation for Model Calibration Using Summary-level Information from External Big Data Sources

    PubMed Central

    Chatterjee, Nilanjan; Chen, Yi-Hau; Maas, Paige; Carroll, Raymond J.

    2016-01-01

    Information from various public and private data sources of extremely large sample sizes is now increasingly available for research purposes. Statistical methods are needed for utilizing information from such big data sources while analyzing data from individual studies that may collect more detailed information required for addressing specific hypotheses of interest. In this article, we consider the problem of building regression models based on individual-level data from an “internal” study while utilizing summary-level information, such as information on parameters for reduced models, from an “external” big data source. We identify a set of very general constraints that link internal and external models. These constraints are used to develop a framework for semiparametric maximum likelihood inference that allows the distribution of covariates to be estimated using either the internal sample or an external reference sample. We develop extensions for handling complex stratified sampling designs, such as case-control sampling, for the internal study. Asymptotic theory and variance estimators are developed for each case. We use simulation studies and a real data application to assess the performance of the proposed methods in contrast to the generalized regression (GR) calibration methodology that is popular in the sample survey literature. PMID:27570323

  8. Implementation of standardization in clinical practice: not always an easy task.

    PubMed

    Panteghini, Mauro

    2012-02-29

    As soon as a new reference measurement system is adopted, clinical validation of correctly calibrated commercial methods should take place. Tracing back the calibration of routine assays to a reference system can actually modify the relation of analyte results to existing reference intervals and decision limits and this may invalidate some of the clinical decision-making criteria currently used. To maintain the accumulated clinical experience, the quantitative relationship to the previous calibration system should be established and, if necessary, the clinical decision-making criteria should be adjusted accordingly. The implementation of standardization should take place in a concerted action of laboratorians, manufacturers, external quality assessment scheme organizers and clinicians. Dedicated meetings with manufacturers should be organized to discuss the process of assay recalibration and studies should be performed to obtain convincing evidence that the standardization works, improving result comparability. Another important issue relates to the surveillance of the performance of standardized assays through the organization of appropriate analytical internal and external quality controls. Last but not least, uncertainty of measurement that fits for this purpose must be defined across the entire traceability chain, starting with the available reference materials, extending through the manufacturers and their processes for assignment of calibrator values and ultimately to the final result reported to clinicians by laboratories.

  9. DIRBE External Calibrator (DEC)

    NASA Technical Reports Server (NTRS)

    Wyatt, Clair L.; Thurgood, V. Alan; Allred, Glenn D.

    1987-01-01

    Under NASA Contract No. NAS5-28185, the Center for Space Engineering at Utah State University has produced a calibration instrument for the Diffuse Infrared Background Experiment (DIRBE). DIRBE is one of the instruments aboard the Cosmic Background Explorer (COBE). The calibration instrument is referred to as the DEC (DIRBE External Calibrator). DEC produces a steerable infrared beam of controlled spectral content and intensity, with selectable point-source or diffuse-source characteristics, that can be directed into DIRBE to map fields and determine response characteristics. This report discusses the design of the DEC instrument, its operation and characteristics, and provides an analysis of the system's capabilities and performance.

  10. Structure-From-Motion for Calibration of a Vehicle Camera System with Non-Overlapping Fields-Of-View in an Urban Environment

    NASA Astrophysics Data System (ADS)

    Hanel, A.; Stilla, U.

    2017-05-01

    Vehicle environment cameras observing traffic participants in the area around a car and interior cameras observing the car driver are important data sources for driver intention recognition algorithms. To combine information from both camera groups, a camera system calibration can be performed. Typically, there is no overlapping field-of-view between environment and interior cameras. Often, no marked reference points are available in environments large enough to cover a car for the system calibration. In this contribution, a calibration method for a vehicle camera system with non-overlapping camera groups in an urban environment is described. A-priori images of an urban calibration environment taken with an external camera are processed with the structure-from-motion method to obtain an environment point cloud. Images of the vehicle interior, also taken with an external camera, are processed to obtain an interior point cloud. Both point clouds are tied to each other with images of both image sets showing the same real-world objects. The point clouds are transformed into a self-defined vehicle coordinate system describing the vehicle movement. On demand, videos can be recorded with the vehicle cameras in a calibration drive. Poses of vehicle environment cameras and interior cameras are estimated separately using ground control points from the respective point cloud. All poses of a vehicle camera estimated for different video frames are optimized in a bundle adjustment. In an experiment, a point cloud is created from images of an underground car park, and a point cloud of the interior of a Volkswagen test car is created as well. Videos from two environment cameras and one interior camera are recorded. Results show that the vehicle camera poses are estimated successfully, especially when the car is not moving. Position standard deviations in the centimeter range can be achieved for all vehicle cameras. Relative distances between the vehicle cameras deviate between one and ten centimeters from tachymeter reference measurements.

  11. Lineal energy calibration of mini tissue-equivalent gas-proportional counters (TEPC)

    NASA Astrophysics Data System (ADS)

    Conte, V.; Moro, D.; Grosswendt, B.; Colautti, P.

    2013-07-01

    Mini TEPCs are cylindrical gas proportional counters with a sensitive-volume diameter of 1 mm or less. The lineal energy calibration of these tiny counters can be performed with an external gamma-ray source. To do so, however, a method for obtaining a simple and precise spectral mark must first be found, along with the keV/μm value of that mark. A precise method (less than 1% uncertainty) to identify this mark is described here, and the lineal energy value of the mark has been measured for different simulated site sizes using a 137Cs gamma source and a cylindrical TEPC equipped with a precision internal 244Cm alpha-particle source and filled with a propane-based tissue-equivalent gas mixture. Mini TEPCs can thus be calibrated in terms of lineal energy, by exposing them to 137Cs sources, with an overall uncertainty of about 5%.

  12. Automatic analysis of quantitative NMR data of pharmaceutical compound libraries.

    PubMed

    Liu, Xuejun; Kolpak, Michael X; Wu, Jiejun; Leo, Gregory C

    2012-08-07

    In drug discovery, chemical library compounds are usually dissolved in DMSO at a certain concentration and then distributed to biologists for target screening. Quantitative (1)H NMR (qNMR) is the preferred method for the determination of the actual concentrations of compounds because the relative single-proton peak areas of two chemical species represent the relative molar concentrations of the two compounds, that is, the compound of interest and a calibrant. Thus, an analyte concentration can be determined using a calibration compound at a known concentration. One particularly time-consuming step in the qNMR analysis of compound libraries is the manual integration of peaks. In this report, an automated method for performing this task is presented that requires no prior knowledge of compound structures and uses an external calibration spectrum. The script for automated integration is fast and adaptable to large-scale data sets, eliminating the need for manual integration in ~80% of the cases.
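
    The per-compound arithmetic behind such an external-calibration qNMR measurement reduces to comparing single-proton-normalised peak areas; the sketch below assumes the calibration spectrum was acquired under the same (or corrected-for) acquisition conditions as the sample spectrum, and all names and numbers are illustrative.

```python
def qnmr_concentration(area_analyte, n_h_analyte, area_cal, n_h_cal, conc_cal):
    """Analyte concentration from single-proton-normalised peak areas.

    area_analyte, n_h_analyte : integrated analyte peak area and its proton count
    area_cal, n_h_cal         : integrated calibrant peak area and its proton count
    conc_cal                  : known concentration of the external calibrant
    Assumes identical (or corrected-for) acquisition parameters for both spectra.
    """
    return (area_analyte / n_h_analyte) / (area_cal / n_h_cal) * conc_cal

# e.g. a 3-proton analyte singlet vs. a 2-proton calibrant peak at 10 mM
print(qnmr_concentration(area_analyte=4.2e6, n_h_analyte=3,
                         area_cal=3.0e6, n_h_cal=2, conc_cal=10.0))
```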

  13. Generic precise augmented reality guiding system and its calibration method based on 3D virtual model.

    PubMed

    Liu, Miao; Yang, Shourui; Wang, Zhangying; Huang, Shujun; Liu, Yue; Niu, Zhenqi; Zhang, Xiaoxuan; Zhu, Jigui; Zhang, Zonghua

    2016-05-30

    Augmented reality systems can be applied to provide precise guidance for various kinds of manual work. The adaptability and guiding accuracy of such systems are determined by the computational model and the corresponding calibration method. In this paper, a novel type of augmented reality guiding system and the corresponding design scheme are proposed. Guided by external positioning equipment, the proposed system can achieve high relative indication accuracy in a large working space. The system is realized with a digital projector, and a general back-projection model is derived from the geometric relationship between the digitized 3D model and the projector in free space. A corresponding calibration method is also designed to obtain the parameters of the projector. To validate the proposed back-projection model, coordinate data collected by 3D positioning equipment are used to calculate and optimize the extrinsic parameters. The final projected indication accuracy of the system is verified with a subpixel pattern projection technique.

  14. Use of laser ablation-inductively coupled plasma-time of flight-mass spectrometry to identify the elemental composition of vanilla and determine the geographic origin by discriminant function analysis.

    PubMed

    Hondrogiannis, Ellen M; Ehrlinger, Erin; Poplaski, Alyssa; Lisle, Meredith

    2013-11-27

    A total of 11 elements found in 25 vanilla samples from Uganda, Madagascar, Indonesia, and Papua New Guinea were measured by laser ablation-inductively coupled plasma-time-of-flight-mass spectrometry (LA-ICP-TOF-MS) for the purpose of collecting data that could be used to discriminate among the origins. Pellets were prepared from the samples, and elemental concentrations were obtained on the basis of external calibration curves created using five National Institute of Standards and Technology (NIST) standards and one Chinese standard with (13)C internal standardization. These curves were validated using NIST 1573a (tomato leaves) as a check standard. Discriminant analysis was used to successfully classify the vanilla samples by their origin. Our method illustrates the feasibility of using LA-ICP-TOF-MS with an external calibration curve for high-throughput screening analysis of spices.
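
    A minimal sketch of an external calibration curve with internal standardization of this kind is given below: the analyte signal is normalised to the 13C signal, a straight line is fitted to the standards, and the fit is inverted for unknown samples. The concentration and ratio values are invented for illustration.

```python
import numpy as np

# Hypothetical external calibration: certified analyte concentrations (ug/g)
# in the reference pellets, and the measured analyte / 13C intensity ratios.
conc_std = np.array([2.0, 10.0, 50.0, 150.0, 400.0, 900.0])
ratio_std = np.array([0.004, 0.021, 0.10, 0.31, 0.82, 1.85])

slope, intercept = np.polyfit(conc_std, ratio_std, 1)

def concentration(ratio_sample):
    """Convert a 13C-normalised intensity ratio into a concentration (ug/g)."""
    return (ratio_sample - intercept) / slope

print(concentration(0.25))
```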

  15. A high resolution InSAR topographic reconstruction research in urban area based on TerraSAR-X data

    NASA Astrophysics Data System (ADS)

    Qu, Feifei; Qin, Zhang; Zhao, Chaoying; Zhu, Wu

    2011-10-01

    Aiming at the problems of difficult phase unwrapping and phase noise in InSAR DEM reconstruction, especially for high-resolution TerraSAR-X data, this paper improves the height reconstruction algorithm following the "remove-restore" concept based on an external coarse DEM and multi-interferogram processing, and proposes a height calibration method based on CR+GPS data. Several measures were taken for urban high-resolution DEM reconstruction with TerraSAR-X data. SAR interferometric pairs with long spatial and short temporal baselines are selected for DEM generation. An external low-resolution, low-accuracy DEM is used in the "remove-restore" step to ease phase unwrapping. Stochastic errors, including atmospheric effects and phase noise, are suppressed by weighted averaging of the DEM phases. Six TerraSAR-X scenes are used to create a twelve-meter resolution DEM over Xi'an, China with the newly proposed method. Heights at discrete GPS benchmarks are used to calibrate the result, and an RMS of 3.29 m is achieved by comparison with a 1:50,000 DEM.

  16. Note: An improved calibration system with phase correction for electronic transformers with digital output.

    PubMed

    Cheng, Han-miao; Li, Hong-bin

    2015-08-01

    The existing electronic transformer calibration systems employing data acquisition cards cannot satisfy some practical applications, because the calibration systems have phase measurement errors when they work in the mode of receiving external synchronization signals. This paper proposes an improved calibration system scheme with phase correction to improve the phase measurement accuracy. We employ NI PCI-4474 to design a calibration system, and the system has the potential to receive external synchronization signals and reach extremely high accuracy classes. Accuracy verification has been carried out in the China Electric Power Research Institute, and results demonstrate that the system surpasses the accuracy class 0.05. Furthermore, this system has been used to test the harmonics measurement accuracy of all-fiber optical current transformers. In the same process, we have used an existing calibration system, and a comparison of the test results is presented. The system after improvement is suitable for the intended applications.

  17. A calibration hierarchy for risk models was defined: from utopia to empirical data.

    PubMed

    Van Calster, Ben; Nieboer, Daan; Vergouwe, Yvonne; De Cock, Bavo; Pencina, Michael J; Steyerberg, Ewout W

    2016-06-01

    Calibrated risk models are vital for valid decision support. We define four levels of calibration and describe implications for model development and external validation of predictions. We present results based on simulated data sets. A common definition of calibration is "having an event rate of R% among patients with a predicted risk of R%," which we refer to as "moderate calibration." Weaker forms of calibration only require the average predicted risk (mean calibration) or the average prediction effects (weak calibration) to be correct. "Strong calibration" requires that the event rate equals the predicted risk for every covariate pattern. This implies that the model is fully correct for the validation setting. We argue that this is unrealistic: the model type may be incorrect, the linear predictor is only asymptotically unbiased, and all nonlinear and interaction effects should be correctly modeled. In addition, we prove that moderate calibration guarantees nonharmful decision making. Finally, results indicate that a flexible assessment of calibration in small validation data sets is problematic. Strong calibration is desirable for individualized decision support but unrealistic and counterproductive by stimulating the development of overly complex models. Model development and external validation should focus on moderate calibration. Copyright © 2016 Elsevier Inc. All rights reserved.
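
    Mean and weak calibration, as defined above, can be assessed with a logistic recalibration of the outcomes on the logit of the predicted risks; the sketch below (using statsmodels, not code from the paper) returns the calibration-in-the-large and the calibration slope, with ideal values 0 and 1.

```python
import numpy as np
import statsmodels.api as sm

def weak_calibration(y, p_hat):
    """Mean and weak calibration of predicted risks p_hat against outcomes y.

    Returns (calibration-in-the-large, calibration slope); ideal values 0 and 1.
    A minimal sketch of the logistic recalibration framework described above.
    """
    y = np.asarray(y, float)
    p_hat = np.asarray(p_hat, float)
    lp = np.log(p_hat / (1.0 - p_hat))              # linear predictor (logit scale)
    # calibration-in-the-large: intercept with the linear predictor as offset
    citl = sm.GLM(y, np.ones_like(lp), family=sm.families.Binomial(),
                  offset=lp).fit().params[0]
    # calibration slope: coefficient of the linear predictor
    slope = sm.GLM(y, sm.add_constant(lp),
                   family=sm.families.Binomial()).fit().params[1]
    return citl, slope
```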

  18. The fossilized birth–death process for coherent calibration of divergence-time estimates

    PubMed Central

    Heath, Tracy A.; Huelsenbeck, John P.; Stadler, Tanja

    2014-01-01

    Time-calibrated species phylogenies are critical for addressing a wide range of questions in evolutionary biology, such as those that elucidate historical biogeography or uncover patterns of coevolution and diversification. Because molecular sequence data are not informative on absolute time, external data—most commonly, fossil age estimates—are required to calibrate estimates of species divergence dates. For Bayesian divergence time methods, the common practice for calibration using fossil information involves placing arbitrarily chosen parametric distributions on internal nodes, often disregarding most of the information in the fossil record. We introduce the “fossilized birth–death” (FBD) process—a model for calibrating divergence time estimates in a Bayesian framework, explicitly acknowledging that extant species and fossils are part of the same macroevolutionary process. Under this model, absolute node age estimates are calibrated by a single diversification model and arbitrary calibration densities are not necessary. Moreover, the FBD model allows for inclusion of all available fossils. We performed analyses of simulated data and show that node age estimation under the FBD model results in robust and accurate estimates of species divergence times with realistic measures of statistical uncertainty, overcoming major limitations of standard divergence time estimation methods. We used this model to estimate the speciation times for a dataset composed of all living bears, indicating that the genus Ursus diversified in the Late Miocene to Middle Pliocene. PMID:25009181

  19. Quantitative Analysis of Ca, Mg, and K in the Roots of Angelica pubescens f. biserrata by Laser-Induced Breakdown Spectroscopy Combined with Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Wang, J.; Shi, M.; Zheng, P.; Xue, Sh.; Peng, R.

    2018-03-01

    Laser-induced breakdown spectroscopy has been applied for the quantitative analysis of Ca, Mg, and K in the roots of Angelica pubescens Maxim. f. biserrata Shan et Yuan used in traditional Chinese medicine. The Ca II 317.993 nm, Mg I 517.268 nm, and K I 769.896 nm spectral lines were chosen to set up calibration models for the analysis using the external standard and artificial neural network methods. The linear correlation coefficients of the predicted concentrations versus the standard concentrations of six samples determined by the artificial neural network method are 0.9896, 0.9945, and 0.9911 for Ca, Mg, and K, respectively, which are better than those for the external standard method. The artificial neural network method also gives better performance than the external standard method in terms of the average and maximum relative errors, the average relative standard deviations, and most of the maximum relative standard deviations of the predicted concentrations of Ca, Mg, and K in the six samples. Overall, the artificial neural network method outperforms the external standard method for the quantitative analysis of Ca, Mg, and K in the roots of Angelica pubescens.
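
    As an illustration of the two calibration strategies compared in the abstract, the sketch below fits a univariate linear (external standard) model and a small neural network to a hypothetical set of line intensities and certified concentrations; the numbers and network size are placeholders, not values from the study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

# Hypothetical calibration set: rows are samples, columns are background-
# corrected intensities of the Ca II 317.993 nm, Mg I 517.268 nm and
# K I 769.896 nm lines; y holds the certified Ca concentration (wt.%).
X = np.array([[0.8, 1.2, 0.5], [1.1, 1.0, 0.6], [1.5, 0.9, 0.8],
              [2.0, 1.4, 1.0], [2.6, 1.6, 1.3], [3.1, 1.8, 1.5]])
y = np.array([0.4, 0.6, 0.9, 1.2, 1.6, 1.9])

# external-standard model: univariate linear calibration on the Ca line alone
ext_std = LinearRegression().fit(X[:, [0]], y)

# artificial-neural-network model: all lines as inputs, one small hidden layer
ann = MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs",
                   max_iter=5000, random_state=0).fit(X, y)

print(ext_std.predict(X[:1]), ann.predict(X[:1]))
```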

  20. Decision curve analysis and external validation of the postoperative Karakiewicz nomogram for renal cell carcinoma based on a large single-center study cohort.

    PubMed

    Zastrow, Stefan; Brookman-May, Sabine; Cong, Thi Anh Phuong; Jurk, Stanislaw; von Bar, Immanuel; Novotny, Vladimir; Wirth, Manfred

    2015-03-01

    To predict the outcome of patients with renal cell carcinoma (RCC) who undergo surgical therapy, risk models and nomograms are valuable tools. External validation on independent datasets is crucial for evaluating the accuracy and generalizability of these models. The objective of the present study was to externally validate the postoperative nomogram developed by Karakiewicz et al. for prediction of cancer-specific survival. A total of 1,480 consecutive patients with a median follow-up of 82 months (IQR 46-128) were included in this analysis, with 268 RCC-specific deaths. Nomogram-estimated survival probabilities were compared with survival probabilities of the actual cohort, and concordance indices were calculated. Calibration plots and decision curve analyses were used for evaluating calibration and the clinical net benefit of the nomogram. Concordance between predictions of the nomogram and survival rates of the cohort was 0.911 after 12 months, 0.909 after 24 months and 0.896 after 60 months. Comparison of predicted probabilities and actual survival estimates with calibration plots showed an overestimation of tumor-specific survival by the nomogram for high-risk patients, although calibration plots showed a reasonable calibration for probability ranges of interest. Decision curve analysis showed a positive net benefit of nomogram predictions for our patient cohort. The postoperative Karakiewicz nomogram provides good concordance in this external cohort and is reasonably calibrated. It may overestimate tumor-specific survival in high-risk patients, which should be kept in mind when counseling patients. A positive net benefit of nomogram predictions was confirmed.
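
    The net benefit plotted in a decision curve analysis such as the one described above can be computed as in the following sketch, using the standard formulation net benefit = TP/N − FP/N × pt/(1 − pt) at each threshold probability pt; the function is illustrative and not taken from the study.

```python
import numpy as np

def net_benefit(y, p_hat, thresholds):
    """Decision-curve net benefit of a risk model at given threshold probabilities.

    y          : observed binary outcomes (1 = event)
    p_hat      : predicted event probabilities from the model
    thresholds : iterable of threshold probabilities pt in (0, 1)
    """
    y = np.asarray(y)
    p_hat = np.asarray(p_hat)
    n = len(y)
    out = []
    for pt in thresholds:
        treat = p_hat >= pt                    # patients classified as high risk
        tp = np.sum(treat & (y == 1))
        fp = np.sum(treat & (y == 0))
        out.append(tp / n - fp / n * pt / (1.0 - pt))
    return np.array(out)

# usage (hypothetical data): nb = net_benefit(died, nomogram_risk, np.arange(0.05, 0.5, 0.05))
```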

  1. Simple and accurate quantification of BTEX in ambient air by SPME and GC-MS.

    PubMed

    Baimatova, Nassiba; Kenessov, Bulat; Koziel, Jacek A; Carlsen, Lars; Bektassov, Marat; Demyanenko, Olga P

    2016-07-01

    Benzene, toluene, ethylbenzene and xylenes (BTEX) comprise one of the most ubiquitous and hazardous groups of ambient air pollutants of concern. Application of standard analytical methods for quantification of BTEX is limited by the complexity of sampling and sample preparation equipment, and by budget requirements. Methods based on SPME represent a simpler alternative, but still require complex calibration procedures. The objective of this research was to develop a simpler, low-budget, and accurate method for quantification of BTEX in ambient air based on SPME and GC-MS. Standard 20-mL headspace vials were used for field air sampling and calibration. To avoid challenges with obtaining and working with 'zero' air, slope factors of external standard calibration were determined using standard addition and inherently polluted lab air. For the polydimethylsiloxane (PDMS) fiber, differences between the slope factors of calibration plots obtained using lab and outdoor air were below 14%. The PDMS fiber provided higher precision during calibration, while the use of the Carboxen/PDMS fiber resulted in lower detection limits for benzene and toluene. To provide sufficient accuracy, the use of 20-mL vials requires triplicate sampling and analysis. The method was successfully applied for analysis of 108 ambient air samples from Almaty, Kazakhstan. Average concentrations of benzene, toluene, ethylbenzene and o-xylene were 53, 57, 11 and 14 µg m(-3), respectively. The developed method can be modified for further quantification of a wider range of volatile organic compounds in air. In addition, the new method is amenable to automation. Copyright © 2016 Elsevier B.V. All rights reserved.
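
    The slope-factor determination by standard addition in inherently polluted lab air, and the subsequent external-standard quantification of field samples, can be sketched as below with invented numbers; the calibration arithmetic is the point, not the values.

```python
import numpy as np

# Hypothetical standard-addition series for benzene in lab air: spiked
# concentrations (ug/m3) added to 20-mL vials of inherently polluted air,
# and the corresponding GC-MS peak areas after SPME extraction.
added = np.array([0.0, 10.0, 20.0, 40.0, 80.0])
area = np.array([1.5e4, 3.3e4, 5.0e4, 8.6e4, 1.58e5])

slope, intercept = np.polyfit(added, area, 1)   # slope factor of the calibration
background = intercept / slope                  # concentration already present in lab air
print(f"slope factor: {slope:.1f} area units per ug/m3, lab background ~{background:.1f} ug/m3")

# a field sample is then quantified with the externally determined slope factor
sample_area = 6.1e4
sample_conc = sample_area / slope
print(f"field sample: ~{sample_conc:.1f} ug/m3")
```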

  2. Radiometric calibration of optical microscopy and microspectroscopy apparata over a broad spectral range using a special thin-film luminescence standard

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valenta, J., E-mail: jan.valenta@mff.cuni.cz; Greben, M.

    2015-04-15

    Application capabilities of optical microscopes and microspectroscopes can be considerably enhanced by a proper calibration of their spectral sensitivity. We propose and demonstrate a method of relative and absolute calibration of a microspectroscope over an extraordinarily broad spectral range covered by two (parallel) detection branches in the visible and near-infrared spectral regions. The key point of the absolute calibration of the relative spectral sensitivity is the application of a standard sample formed by a thin layer of Si nanocrystals with stable and efficient photoluminescence. The spectral PL quantum yield and the PL spatial distribution of the standard sample must be characterized by separate experiments. The absolutely calibrated microspectroscope makes it possible to characterize the spectral photon emittance of a studied object or even its luminescence quantum yield (QY) if additional knowledge about the spatial distribution of emission and about excitance is available. Capabilities of the calibrated microspectroscope are demonstrated by measuring the external QY of electroluminescence from a standard poly-Si solar cell and of photoluminescence of Er-doped Si nanocrystals.

  3. A multilaboratory comparison of calibration accuracy and the performance of external references in analytical ultracentrifugation.

    PubMed

    Zhao, Huaying; Ghirlando, Rodolfo; Alfonso, Carlos; Arisaka, Fumio; Attali, Ilan; Bain, David L; Bakhtina, Marina M; Becker, Donald F; Bedwell, Gregory J; Bekdemir, Ahmet; Besong, Tabot M D; Birck, Catherine; Brautigam, Chad A; Brennerman, William; Byron, Olwyn; Bzowska, Agnieszka; Chaires, Jonathan B; Chaton, Catherine T; Cölfen, Helmut; Connaghan, Keith D; Crowley, Kimberly A; Curth, Ute; Daviter, Tina; Dean, William L; Díez, Ana I; Ebel, Christine; Eckert, Debra M; Eisele, Leslie E; Eisenstein, Edward; England, Patrick; Escalante, Carlos; Fagan, Jeffrey A; Fairman, Robert; Finn, Ron M; Fischle, Wolfgang; de la Torre, José García; Gor, Jayesh; Gustafsson, Henning; Hall, Damien; Harding, Stephen E; Cifre, José G Hernández; Herr, Andrew B; Howell, Elizabeth E; Isaac, Richard S; Jao, Shu-Chuan; Jose, Davis; Kim, Soon-Jong; Kokona, Bashkim; Kornblatt, Jack A; Kosek, Dalibor; Krayukhina, Elena; Krzizike, Daniel; Kusznir, Eric A; Kwon, Hyewon; Larson, Adam; Laue, Thomas M; Le Roy, Aline; Leech, Andrew P; Lilie, Hauke; Luger, Karolin; Luque-Ortega, Juan R; Ma, Jia; May, Carrie A; Maynard, Ernest L; Modrak-Wojcik, Anna; Mok, Yee-Foong; Mücke, Norbert; Nagel-Steger, Luitgard; Narlikar, Geeta J; Noda, Masanori; Nourse, Amanda; Obsil, Tomas; Park, Chad K; Park, Jin-Ku; Pawelek, Peter D; Perdue, Erby E; Perkins, Stephen J; Perugini, Matthew A; Peterson, Craig L; Peverelli, Martin G; Piszczek, Grzegorz; Prag, Gali; Prevelige, Peter E; Raynal, Bertrand D E; Rezabkova, Lenka; Richter, Klaus; Ringel, Alison E; Rosenberg, Rose; Rowe, Arthur J; Rufer, Arne C; Scott, David J; Seravalli, Javier G; Solovyova, Alexandra S; Song, Renjie; Staunton, David; Stoddard, Caitlin; Stott, Katherine; Strauss, Holger M; Streicher, Werner W; Sumida, John P; Swygert, Sarah G; Szczepanowski, Roman H; Tessmer, Ingrid; Toth, Ronald T; Tripathy, Ashutosh; Uchiyama, Susumu; Uebel, Stephan F W; Unzai, Satoru; Gruber, Anna Vitlin; von Hippel, Peter H; Wandrey, Christine; Wang, Szu-Huan; Weitzel, Steven E; Wielgus-Kutrowska, Beata; Wolberger, Cynthia; Wolff, Martin; Wright, Edward; Wu, Yu-Sung; Wubben, Jacinta M; Schuck, Peter

    2015-01-01

    Analytical ultracentrifugation (AUC) is a first principles based method to determine absolute sedimentation coefficients and buoyant molar masses of macromolecules and their complexes, reporting on their size and shape in free solution. The purpose of this multi-laboratory study was to establish the precision and accuracy of basic data dimensions in AUC and validate previously proposed calibration techniques. Three kits of AUC cell assemblies containing radial and temperature calibration tools and a bovine serum albumin (BSA) reference sample were shared among 67 laboratories, generating 129 comprehensive data sets. These allowed for an assessment of many parameters of instrument performance, including accuracy of the reported scan time after the start of centrifugation, the accuracy of the temperature calibration, and the accuracy of the radial magnification. The range of sedimentation coefficients obtained for BSA monomer in different instruments and using different optical systems was from 3.655 S to 4.949 S, with a mean and standard deviation of (4.304 ± 0.188) S (4.4%). After the combined application of correction factors derived from the external calibration references for elapsed time, scan velocity, temperature, and radial magnification, the range of s-values was reduced 7-fold with a mean of 4.325 S and a 6-fold reduced standard deviation of ± 0.030 S (0.7%). In addition, the large data set provided an opportunity to determine the instrument-to-instrument variation of the absolute radial positions reported in the scan files, the precision of photometric or refractometric signal magnitudes, and the precision of the calculated apparent molar mass of BSA monomer and the fraction of BSA dimers. These results highlight the necessity and effectiveness of independent calibration of basic AUC data dimensions for reliable quantitative studies.

  4. A Multilaboratory Comparison of Calibration Accuracy and the Performance of External References in Analytical Ultracentrifugation

    PubMed Central

    Zhao, Huaying; Ghirlando, Rodolfo; Alfonso, Carlos; Arisaka, Fumio; Attali, Ilan; Bain, David L.; Bakhtina, Marina M.; Becker, Donald F.; Bedwell, Gregory J.; Bekdemir, Ahmet; Besong, Tabot M. D.; Birck, Catherine; Brautigam, Chad A.; Brennerman, William; Byron, Olwyn; Bzowska, Agnieszka; Chaires, Jonathan B.; Chaton, Catherine T.; Cölfen, Helmut; Connaghan, Keith D.; Crowley, Kimberly A.; Curth, Ute; Daviter, Tina; Dean, William L.; Díez, Ana I.; Ebel, Christine; Eckert, Debra M.; Eisele, Leslie E.; Eisenstein, Edward; England, Patrick; Escalante, Carlos; Fagan, Jeffrey A.; Fairman, Robert; Finn, Ron M.; Fischle, Wolfgang; de la Torre, José García; Gor, Jayesh; Gustafsson, Henning; Hall, Damien; Harding, Stephen E.; Cifre, José G. Hernández; Herr, Andrew B.; Howell, Elizabeth E.; Isaac, Richard S.; Jao, Shu-Chuan; Jose, Davis; Kim, Soon-Jong; Kokona, Bashkim; Kornblatt, Jack A.; Kosek, Dalibor; Krayukhina, Elena; Krzizike, Daniel; Kusznir, Eric A.; Kwon, Hyewon; Larson, Adam; Laue, Thomas M.; Le Roy, Aline; Leech, Andrew P.; Lilie, Hauke; Luger, Karolin; Luque-Ortega, Juan R.; Ma, Jia; May, Carrie A.; Maynard, Ernest L.; Modrak-Wojcik, Anna; Mok, Yee-Foong; Mücke, Norbert; Nagel-Steger, Luitgard; Narlikar, Geeta J.; Noda, Masanori; Nourse, Amanda; Obsil, Tomas; Park, Chad K.; Park, Jin-Ku; Pawelek, Peter D.; Perdue, Erby E.; Perkins, Stephen J.; Perugini, Matthew A.; Peterson, Craig L.; Peverelli, Martin G.; Piszczek, Grzegorz; Prag, Gali; Prevelige, Peter E.; Raynal, Bertrand D. E.; Rezabkova, Lenka; Richter, Klaus; Ringel, Alison E.; Rosenberg, Rose; Rowe, Arthur J.; Rufer, Arne C.; Scott, David J.; Seravalli, Javier G.; Solovyova, Alexandra S.; Song, Renjie; Staunton, David; Stoddard, Caitlin; Stott, Katherine; Strauss, Holger M.; Streicher, Werner W.; Sumida, John P.; Swygert, Sarah G.; Szczepanowski, Roman H.; Tessmer, Ingrid; Toth, Ronald T.; Tripathy, Ashutosh; Uchiyama, Susumu; Uebel, Stephan F. W.; Unzai, Satoru; Gruber, Anna Vitlin; von Hippel, Peter H.; Wandrey, Christine; Wang, Szu-Huan; Weitzel, Steven E.; Wielgus-Kutrowska, Beata; Wolberger, Cynthia; Wolff, Martin; Wright, Edward; Wu, Yu-Sung; Wubben, Jacinta M.; Schuck, Peter

    2015-01-01

    Analytical ultracentrifugation (AUC) is a first principles based method to determine absolute sedimentation coefficients and buoyant molar masses of macromolecules and their complexes, reporting on their size and shape in free solution. The purpose of this multi-laboratory study was to establish the precision and accuracy of basic data dimensions in AUC and validate previously proposed calibration techniques. Three kits of AUC cell assemblies containing radial and temperature calibration tools and a bovine serum albumin (BSA) reference sample were shared among 67 laboratories, generating 129 comprehensive data sets. These allowed for an assessment of many parameters of instrument performance, including accuracy of the reported scan time after the start of centrifugation, the accuracy of the temperature calibration, and the accuracy of the radial magnification. The range of sedimentation coefficients obtained for BSA monomer in different instruments and using different optical systems was from 3.655 S to 4.949 S, with a mean and standard deviation of (4.304 ± 0.188) S (4.4%). After the combined application of correction factors derived from the external calibration references for elapsed time, scan velocity, temperature, and radial magnification, the range of s-values was reduced 7-fold with a mean of 4.325 S and a 6-fold reduced standard deviation of ± 0.030 S (0.7%). In addition, the large data set provided an opportunity to determine the instrument-to-instrument variation of the absolute radial positions reported in the scan files, the precision of photometric or refractometric signal magnitudes, and the precision of the calculated apparent molar mass of BSA monomer and the fraction of BSA dimers. These results highlight the necessity and effectiveness of independent calibration of basic AUC data dimensions for reliable quantitative studies. PMID:25997164
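
    As a rough illustration of how such externally derived correction factors act on a measured sedimentation coefficient, the sketch below simply multiplies a raw s-value by user-supplied factors for scan-time, radial-magnification and temperature corrections; the factor definitions and numbers are assumptions, not the study's actual correction chain.

```python
def corrected_s(s_raw, f_time, f_radial, f_temperature):
    """Apply externally derived calibration factors to a raw sedimentation coefficient.

    f_time        : correction for errors in the reported elapsed scan time
    f_radial      : radial magnification correction from the calibration mask
    f_temperature : viscosity-based correction for the true rotor temperature
    All factors are assumed to be defined so that multiplication restores the
    reference conditions; this is a sketch, not the study's correction formulae.
    """
    return s_raw * f_time * f_radial * f_temperature

# e.g. a 4.20 S raw value with corrections of roughly one percent in each dimension
print(corrected_s(4.20, 1.008, 0.995, 1.004))
```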

  5. Improving the efficiency of quantitative (1)H NMR: an innovative external standard-internal reference approach.

    PubMed

    Huang, Yande; Su, Bao-Ning; Ye, Qingmei; Palaniswamy, Venkatapuram A; Bolgar, Mark S; Raglione, Thomas V

    2014-01-01

    The classical internal standard quantitative NMR (qNMR) method determines the purity of an analyte by measurement of a solution containing both the analyte and a standard. Therefore, the standard must meet the requirements of chemical compatibility and lack of resonance interference with the analyte, as well as a known purity. The identification of such a standard can be time-consuming and must be repeated for each analyte. In contrast, the external standard qNMR method utilizes a standard with a known purity to calibrate the NMR instrument. The external standard and the analyte are measured separately, thereby eliminating the matter of chemical compatibility and resonance interference between the standard and the analyte. However, the instrumental factors, including the quality of NMR tubes, must be kept the same; any deviations will compromise the accuracy of the results. An innovative qNMR method reported herein utilizes an internal reference substance along with an external standard to assume the role of the standard used in the traditional internal standard qNMR method. In this new method, the internal reference substance must only be chemically compatible and free of resonance interference with the analyte or external standard, whereas the external standard must only be of a known purity. The exact purity or concentration of the internal reference substance is not required as long as the same quantity is added to the external standard and the analyte. The new method significantly reduces the burden of searching for an appropriate standard for each analyte. Therefore, the efficiency of the qNMR purity assay increases while the precision of the internal standard method is retained. Copyright © 2013 Elsevier B.V. All rights reserved.
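
    A sketch of the purity arithmetic implied by this external-standard/internal-reference scheme is given below, assuming the same quantity of internal reference is added to the standard and analyte solutions; the variable names and the exact form of the mass, proton-count and molecular-weight corrections are stated as assumptions rather than taken from the paper.

```python
def purity_ext_std_int_ref(area_a, area_ref_a, area_s, area_ref_s,
                           n_a, n_s, mass_a, mass_s, mw_a, mw_s, purity_s):
    """Analyte purity from the external-standard / internal-reference scheme (sketch).

    area_a, area_ref_a : peak areas of analyte and reference in the analyte solution
    area_s, area_ref_s : peak areas of standard and reference in the standard solution
    n_a, n_s           : protons giving rise to the integrated analyte/standard peaks
    mass_a, mass_s     : weighed masses of analyte and standard
    mw_a, mw_s         : molecular weights of analyte and standard
    purity_s           : known purity of the external standard
    The same quantity of internal reference is assumed in both solutions, so the
    reference ratio cancels instrument-response differences between the two runs.
    """
    ratio = (area_a / area_ref_a) / (area_s / area_ref_s)
    return purity_s * ratio * (n_s / n_a) * (mw_a / mw_s) * (mass_s / mass_a)
```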

  6. A Seafloor Test of the A-0-A Approach to Calibrating Pressure Sensors for Vertical Geodesy

    NASA Astrophysics Data System (ADS)

    Wilcock, W. S. D.; Manalang, D.; Harrington, M.; Cram, G.; Tilley, J.; Burnett, J.; Martin, D.; Paros, J. M.

    2017-12-01

    Seafloor geodetic observations are critical for understanding the locking and slip of the megathrust in Cascadia and other subduction zones. Differences of bottom pressure time series have been used successfully in several subduction zones to detect slow-slip earthquakes centered offshore. Pressure sensor drift rates are much greater than the long-term rates of strain build-up and thus in-situ calibration is required to measure secular strain. One approach to calibration is to use a dead-weight tester, a laboratory apparatus that produces an accurate reference pressure, to calibrate a pressure sensor deployed on the seafloor by periodically switching between the external pressure and the dead-weight tester (Cook et al, this session). The A-0-A method replaces the dead-weight tester by using the internal pressure of the instrument housing as the reference pressure. We report on the first non-proprietary ocean test of this approach on the MARS cabled observatory at a depth of 900 m in Monterey Bay. We use the Paroscientific Seismic + Oceanic Sensors module that is designed for combined geodetic, oceanographic and seismic observations. The module comprises a three-component broadband accelerometer, two pressure sensors that for this deployment measure ocean pressures, A, up to 2000 psia (14 MPa), and a barometer to measure the internal housing reference pressure, 0. A valve periodically switches between external and internal pressures for 5-minute calibrations. The seafloor test started in mid-June and the results of 30 calibrations collected over the first 6 weeks of operation are very encouraging. After correcting for variations in the internal temperature of the housing, the offset of the pressure sensors from the barometer reading as a function of time can be fit with a straight line for each sensor with an rms misfit of 0.1 hPa (1 mm of water). The slopes of these lines (-4 cm/yr and -0.4 cm/yr) vary by an order of magnitude, but the difference in the span (external minus internal pressure) of the two sensors is constant to 0.05 hPa. We will present the results for the first 6 months of A-0-A calibrations for vertical geodesy and also discuss the performance of the pressure sensors and accelerometer for monitoring seismic activity, tilt and ocean infragravity waves.
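
    A drift rate of the kind quoted above can be estimated from the series of A-0-A offsets by removing a temperature term and fitting a straight line against time; the sketch below does this in one least-squares step and is illustrative only, since the actual temperature correction used in the test is not specified here.

```python
import numpy as np

def drift_rate(t_days, offset_hpa, temp_c):
    """Estimate pressure-sensor drift from a series of A-0-A calibration offsets.

    t_days     : calibration epochs (days since deployment)
    offset_hpa : gauge-minus-barometer offsets at each calibration (hPa)
    temp_c     : internal housing temperature at each calibration (deg C)
    A linear temperature term is regressed jointly with the time trend; the
    returned slope of offset vs. time is the drift rate in hPa per day.
    """
    t_days = np.asarray(t_days, float)
    offset_hpa = np.asarray(offset_hpa, float)
    temp_c = np.asarray(temp_c, float)
    A = np.column_stack([t_days, temp_c - temp_c.mean(), np.ones_like(t_days)])
    coef, *_ = np.linalg.lstsq(A, offset_hpa, rcond=None)
    return coef[0]          # hPa/day; multiply by 365 for an annual rate
```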

  7. External validation and clinical utility of a prediction model for 6-month mortality in patients undergoing hemodialysis for end-stage kidney disease.

    PubMed

    Forzley, Brian; Er, Lee; Chiu, Helen Hl; Djurdjev, Ognjenka; Martinusen, Dan; Carson, Rachel C; Hargrove, Gaylene; Levin, Adeera; Karim, Mohamud

    2018-02-01

    End-stage kidney disease is associated with poor prognosis. Health care professionals must be prepared to address end-of-life issues and identify those at high risk for dying. A 6-month mortality prediction model for patients on dialysis derived in the United States is used but has not been externally validated. We aimed to assess the external validity and clinical utility in an independent cohort in Canada. We examined the performance of the published 6-month mortality prediction model, using discrimination, calibration, and decision curve analyses. Data were derived from a cohort of 374 prevalent dialysis patients in two regions of British Columbia, Canada, which included serum albumin, age, peripheral vascular disease, dementia, and answers to the "surprise question" ("Would I be surprised if this patient died within the next year?"). The observed mortality in the validation cohort was 11.5% at 6 months. The prediction model had reasonable discrimination (c-stat = 0.70) but poor calibration (calibration-in-the-large = -0.53 (95% confidence interval: -0.88, -0.18); calibration slope = 0.57 (95% confidence interval: 0.31, 0.83)) in our data. Decision curve analysis showed the model only has added value in guiding clinical decisions in a small range of threshold probabilities: 8%-20%. Despite reasonable discrimination, the prediction model has poor calibration in this external study cohort; thus, it may have limited clinical utility in settings outside of where it was derived. Decision curve analysis clarifies limitations in clinical utility not apparent by receiver operating characteristic curve analysis. This study highlights the importance of external validation of prediction models prior to routine use in clinical practice.

  8. Design and Optimization of a Chemometric-Assisted Spectrophotometric Determination of Telmisartan and Hydrochlorothiazide in Pharmaceutical Dosage Form

    PubMed Central

    Lakshmi, KS; Lakshmi, S

    2010-01-01

    Two chemometric methods were developed for the simultaneous determination of telmisartan and hydrochlorothiazide. The chemometric methods applied were principal component regression (PCR) and partial least square (PLS-1). These approaches were successfully applied to quantify the two drugs in the mixture using the information included in the UV absorption spectra of appropriate solutions in the range of 200-350 nm with the intervals Δλ = 1 nm. The calibration of PCR and PLS-1 models was evaluated by internal validation (prediction of compounds in its own designed training set of calibration) and by external validation over laboratory prepared mixtures and pharmaceutical preparations. The PCR and PLS-1 methods require neither any separation step, nor any prior graphical treatment of the overlapping spectra of the two drugs in a mixture. The results of PCR and PLS-1 methods were compared with each other and a good agreement was found. PMID:21331198

  9. Design and optimization of a chemometric-assisted spectrophotometric determination of telmisartan and hydrochlorothiazide in pharmaceutical dosage form.

    PubMed

    Lakshmi, Ks; Lakshmi, S

    2010-01-01

    Two chemometric methods were developed for the simultaneous determination of telmisartan and hydrochlorothiazide. The chemometric methods applied were principal component regression (PCR) and partial least square (PLS-1). These approaches were successfully applied to quantify the two drugs in the mixture using the information included in the UV absorption spectra of appropriate solutions in the range of 200-350 nm with the intervals Δλ = 1 nm. The calibration of PCR and PLS-1 models was evaluated by internal validation (prediction of compounds in its own designed training set of calibration) and by external validation over laboratory prepared mixtures and pharmaceutical preparations. The PCR and PLS-1 methods require neither any separation step, nor any prior graphical treatment of the overlapping spectra of the two drugs in a mixture. The results of PCR and PLS-1 methods were compared with each other and a good agreement was found.
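
    A multivariate calibration of this type can be sketched with scikit-learn's PLS regression as below; the spectra and concentrations are random placeholders standing in for the laboratory-prepared mixtures, so only the workflow (fit on a training set, predict on external validation spectra) is meaningful.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Hypothetical training data: absorbance spectra sampled at 1 nm intervals over
# 200-350 nm (151 points) for laboratory mixtures, and the known telmisartan /
# hydrochlorothiazide concentrations of each mixture (columns of Y_train).
rng = np.random.default_rng(0)
X_train = rng.random((20, 151))         # placeholder spectra
Y_train = rng.random((20, 2)) * 10.0    # placeholder concentrations (ug/mL)

pls = PLSRegression(n_components=2).fit(X_train, Y_train)

# external validation: predict concentrations for spectra of prepared mixtures
X_val = rng.random((5, 151))
Y_pred = pls.predict(X_val)             # shape (5, 2): [telmisartan, HCTZ]
print(Y_pred)
```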

  10. The application of polymer gel dosimeters to dosimetry for targeted radionuclide therapy

    NASA Astrophysics Data System (ADS)

    Gear, J. I.; Flux, G. D.; Charles-Edwards, E.; Partridge, M.; Cook, G.; Ott, R. J.

    2006-07-01

    There is a lack of standardized methodology to perform dose calculations for targeted radionuclide therapy, and at present no method exists to objectively evaluate the various approaches employed. The aim of the work described here was to investigate the practicality and accuracy of calibrating polymer gel dosimeters such that dose measurements resulting from complex activity distributions can be verified. Twelve vials of the polymer gel dosimeter, 'MAGIC', were uniformly mixed with varying concentrations of P-32 such that absorbed doses ranged from 0 to 30 Gy after a period of 360 h before being imaged on a magnetic resonance scanner. In addition, nine vials were prepared and irradiated using an external 6 MV x-ray beam. Magnetic resonance transverse relaxation time, T2, maps were obtained using a multi-echo spin echo sequence and converted to R2 maps (where R2 = 1/T2). Absorbed doses for the P-32 irradiated gel were calculated according to the medical internal radiation dose schema using EGSnrc Monte Carlo simulations, in which the energy deposited in cylinders representing the irradiated vials was scored. A relationship between dose and R2 was determined. Effects from oxygen contamination were present in the internally irradiated vials. An increase in O2 sensitivity over those gels irradiated externally was thought to be a result of the longer irradiation period. However, below the region of contamination the dose response appeared homogeneous. Due to a drop-off of dose at the periphery of the internally irradiated vials, magnetic resonance ringing artefacts were observed. The ringing did not greatly affect the accuracy of calibration, which was comparable for both methods. The largest errors in calculated dose originated from the initial activity measurements, and were approximately 10%. Measured R2 values ranged from 5-35 s-1 with an average standard deviation of 1%. A clear relationship between R2 and dose was observed, with up to 40% increased sensitivity for internally irradiated gels. Curve fits to the calibration data followed a single exponential function. The correlation coefficients for internally and externally irradiated gels were 0.991 and 0.985, respectively. With the ability to accurately calibrate internally dosed polymer gels, this technology shows promise as a means to evaluate dosimetry methods, particularly in cases of non-uniform uptake of a radionuclide.
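
    The single-exponential calibration fit mentioned above, and its inversion from a measured R2 map back to dose, can be sketched as follows; the saturating functional form and the calibration points are assumed for illustration and are not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical calibration points: delivered dose (Gy) and measured R2 (1/s)
dose = np.array([0.0, 2.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
r2 = np.array([5.1, 8.0, 12.2, 18.5, 23.4, 27.1, 30.0, 32.1])

def single_exp(d, r2_0, a, k):
    """Saturating single-exponential dose response (assumed functional form)."""
    return r2_0 + a * (1.0 - np.exp(-d / k))

popt, pcov = curve_fit(single_exp, dose, r2, p0=(5.0, 30.0, 15.0))

def dose_from_r2(r2_meas, r2_0, a, k):
    """Invert the fitted calibration to convert a measured R2 value into dose."""
    return -k * np.log(1.0 - (r2_meas - r2_0) / a)

print(dose_from_r2(20.0, *popt))
```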

  11. Note: An improved calibration system with phase correction for electronic transformers with digital output

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Han-miao, E-mail: chenghanmiao@hust.edu.cn; Li, Hong-bin, E-mail: lihongbin@hust.edu.cn; State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Wuhan 430074

    The existing electronic transformer calibration systems employing data acquisition cards cannot satisfy some practical applications, because the calibration systems have phase measurement errors when they work in the mode of receiving external synchronization signals. This paper proposes an improved calibration system scheme with phase correction to improve the phase measurement accuracy. We employ NI PCI-4474 to design a calibration system, and the system has the potential to receive external synchronization signals and reach extremely high accuracy classes. Accuracy verification has been carried out in the China Electric Power Research Institute, and results demonstrate that the system surpasses the accuracy class 0.05. Furthermore, this system has been used to test the harmonics measurement accuracy of all-fiber optical current transformers. In the same process, we have used an existing calibration system, and a comparison of the test results is presented. The system after improvement is suitable for the intended applications.

  12. Fast and direct screening of copper in micro-volumes of distilled alcoholic beverages by high-resolution continuum source graphite furnace atomic absorption spectrometry.

    PubMed

    Ajtony, Zsolt; Laczai, Nikoletta; Dravecz, Gabriella; Szoboszlai, Norbert; Marosi, Áron; Marlok, Bence; Streli, Christina; Bencs, László

    2016-12-15

    HR-CS-GFAAS methods were developed for the fast determination of Cu in domestic and commercially available Hungarian distilled alcoholic beverages (called pálinka), in order to determine whether their Cu content exceeds the permissible limit legislated by the WHO. A few microliters of sample were dispensed directly into the atomizer. Graphite furnace heating programs, the effects and amounts of the Pd modifier, alternative wavelengths (e.g., Cu I 249.2146 nm), and external calibration and internal standardization methods were studied. Applying a fast graphite furnace heating program without any chemical modifier, the Cu content of a sample could be quantitated within 1.5 min. The detection limit of the method is 0.03 mg/L. Calibration curves are linear up to 10-15 mg/L Cu. Spike recoveries ranged from 89% to 119% with an average of 100.9±8.5%. Internal calibration could be applied with the assistance of Cr, Fe, and/or Rh standards. The accuracy of the GFAAS results was verified by TXRF analyses. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Tropospheric and ionospheric media calibrations based on global navigation satellite system observation data

    NASA Astrophysics Data System (ADS)

    Feltens, Joachim; Bellei, Gabriele; Springer, Tim; Kints, Mark V.; Zandbergen, René; Budnik, Frank; Schönemann, Erik

    2018-06-01

    Context: Calibration of radiometric tracking data for effects in the Earth atmosphere is a crucial element in the field of deep-space orbit determination (OD). The troposphere can induce propagation delays on the order of several meters, the ionosphere up to the meter level for X-band signals and up to tens of meters, in extreme cases, for L-band ones. The use of media calibrations based on Global Navigation Satellite System (GNSS) measurement data can improve the accuracy of radiometric observation modelling and, as a consequence, the quality of orbit determination solutions. Aims: ESOC Flight Dynamics employs ranging, Doppler and delta-DOR (Delta-Differential One-Way Ranging) data for the orbit determination of interplanetary spacecraft. Currently, the media calibrations for troposphere and ionosphere are either computed based on empirical models or, under mission-specific agreements, provided by external parties such as the Jet Propulsion Laboratory (JPL) in Pasadena, California. In order to become independent from external models and sources, it was decided to establish a new in-house internal service to create these media calibrations based on GNSS measurements recorded at the ESA tracking sites and processed in-house by the ESOC Navigation Support Office with comparable accuracy and quality. Methods: The new service was designed to depend as much as possible on ESA's own data and resources and as little as possible on external models and data. Dedicated robust and simple algorithms, well suited for operational use, were worked out for that task. This paper describes the approach built up to realize this new in-house internal media calibration service. Results: Test results collected during three months of running the new media calibrations in quasi-operational mode indicate that GNSS-based tropospheric corrections can remove systematic signatures from the Doppler observations and biases from the range ones. For the ionosphere, a direct way of verification was not possible due to the unavailability of independent third-party data for comparison. Nevertheless, the tests of the ionospheric corrections also showed slight improvements in the tracking data modelling, but not to the extent seen for the tropospheric corrections. Conclusions: The validation results confirmed that the new approach meets the accuracy and operational requirements for the tropospheric part, while some improvement is still ongoing for the ionospheric one. Based on these test results, green light was given to put the new in-house service for media calibrations into full operational mode in April 2017.

  14. Using Lunar Observations to Validate In-Flight Calibrations of Clouds and Earth Radiant Energy System Instruments

    NASA Technical Reports Server (NTRS)

    Daniels, Janet L.; Smith, G. Louis; Priestley, Kory J.; Thomas, Susan

    2014-01-01

    The validation of in-orbit instrument performance requires stability in both instrument and calibration source. This paper describes a method of validation using lunar observations scanning near full moon by the Clouds and Earth Radiant Energy System (CERES) instruments. Unlike internal calibrations, the Moon offers an external source whose signal variance is predictable and non-degrading. From 2006 to present, in-orbit observations have become standardized and compiled for the Flight Models-1 and -2 aboard the Terra satellite, for Flight Models-3 and -4 aboard the Aqua satellite, and beginning 2012, for Flight Model-5 aboard Suomi-NPP. Instrument performance parameters which can be gleaned are detector gain, pointing accuracy and static detector point response function validation. Lunar observations are used to examine the stability of all three detectors on each of these instruments from 2006 to present. This validation method has yielded results showing trends per CERES data channel of 1.2% per decade or less.

  15. SU-C-204-02: Improved Patient-Specific Optimization of the Stopping Power Calibration for Proton Therapy Planning Using a Single Proton Radiography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rinaldi, I; Ludwig Maximilian University, Garching, DE; Heidelberg University Hospital, Heidelberg, DE

    2015-06-15

    Purpose: We present an improved method to calculate patient-specific calibration curves to convert X-ray computed tomography (CT) Hounsfield Units (HU) to relative stopping powers (RSP) for proton therapy treatment planning. Methods: By optimizing the HU-RSP calibration curve, the difference between a proton radiographic image and a digitally reconstructed X-ray radiograph (DRR) is minimized. The feasibility of this approach has previously been demonstrated. This scenario assumes that all discrepancies between proton radiography and DRR originate from uncertainties in the HU-RSP curve. In reality, external factors cause imperfections in the proton radiography, such as misalignment compared to the DRR and unfaithful representation of geometric structures ("blurring"). We analyze these effects based on synthetic datasets of anthropomorphic phantoms and suggest an extended optimization scheme which explicitly accounts for these effects. Performance of the method has been tested for various simulated irradiation parameters. The ultimate purpose of the optimization is to minimize uncertainties in the HU-RSP calibration curve. We therefore suggest and perform a thorough statistical treatment to quantify the accuracy of the optimized HU-RSP curve. Results: We demonstrate that without extending the optimization scheme, spatial blurring (equivalent to FWHM=3mm convolution) in the proton radiographies can cause up to 10% deviation between the optimized and the ground truth HU-RSP calibration curve. In contrast, results obtained with our extended method reach 1% or better correspondence. We have further calculated gamma index maps for different acceptance levels. With DTA=0.5mm and RD=0.5%, a passing ratio of 100% is obtained with the extended method, while an optimization neglecting the effects of spatial blurring only reaches ∼90%. Conclusion: Our contribution underlines the potential of a single proton radiography to generate a patient-specific calibration curve and to improve dose delivery by optimizing the HU-RSP calibration curve, as long as all sources of systematic incongruence are properly modeled.

  16. Gaia Data Release 1. Validation of the photometry

    NASA Astrophysics Data System (ADS)

    Evans, D. W.; Riello, M.; De Angeli, F.; Busso, G.; van Leeuwen, F.; Jordi, C.; Fabricius, C.; Brown, A. G. A.; Carrasco, J. M.; Voss, H.; Weiler, M.; Montegriffo, P.; Cacciari, C.; Burgess, P.; Osborne, P.

    2017-04-01

    Aims: The photometric validation of the Gaia DR1 release of the ESA Gaia mission is described and the quality of the data shown. Methods: This is carried out via an internal analysis of the photometry using the most constant sources. Comparisons with external photometric catalogues are also made, but are limited by the accuracies and systematics present in these catalogues. An analysis of the quoted errors is also described. Investigations of the calibration coefficients reveal some of the systematic effects that affect the fluxes. Results: The analysis of the constant sources shows that the early-stage photometric calibrations can reach an accuracy as low as 3 mmag.

  17. Gravity gradient preprocessing at the GOCE HPF

    NASA Astrophysics Data System (ADS)

    Bouman, J.; Rispens, S.; Gruber, T.; Schrama, E.; Visser, P.; Tscherning, C. C.; Veicherts, M.

    2009-04-01

    One of the products derived from the GOCE observations are the gravity gradients. These gravity gradients are provided in the Gradiometer Reference Frame (GRF) and are calibrated in-flight using satellite shaking and star sensor data. In order to use these gravity gradients for application in Earth sciences and gravity field analysis, additional pre-processing needs to be done, including corrections for temporal gravity field signals to isolate the static gravity field part, screening for outliers, calibration by comparison with existing external gravity field information and error assessment. The temporal gravity gradient corrections consist of tidal and non-tidal corrections. These are all generally below the gravity gradient error level, which is predicted to show a 1/f behaviour for low frequencies. In the outlier detection the 1/f error is compensated for by subtracting a local median from the data, while the data error is assessed using the median absolute deviation. The local median acts as a high-pass filter and it is robust as is the median absolute deviation. Three different methods have been implemented for the calibration of the gravity gradients. All three methods use a high-pass filter to compensate for the 1/f gravity gradient error. The baseline method uses state-of-the-art global gravity field models and the most accurate results are obtained if star sensor misalignments are estimated along with the calibration parameters. A second calibration method uses GOCE GPS data to estimate a low degree gravity field model as well as gravity gradient scale factors. Both methods allow to estimate gravity gradient scale factors down to the 10-3 level. The third calibration method uses high accurate terrestrial gravity data in selected regions to validate the gravity gradient scale factors, focussing on the measurement band. Gravity gradient scale factors may be estimated down to the 10-2 level with this method.

  18. Preprocessing of gravity gradients at the GOCE high-level processing facility

    NASA Astrophysics Data System (ADS)

    Bouman, Johannes; Rispens, Sietse; Gruber, Thomas; Koop, Radboud; Schrama, Ernst; Visser, Pieter; Tscherning, Carl Christian; Veicherts, Martin

    2009-07-01

    One of the products derived from the gravity field and steady-state ocean circulation explorer (GOCE) observations is the gravity gradients. These gravity gradients are provided in the gradiometer reference frame (GRF) and are calibrated in-flight using satellite shaking and star sensor data. To use these gravity gradients for applications in Earth sciences and gravity field analysis, additional preprocessing needs to be done, including corrections for temporal gravity field signals to isolate the static gravity field part, screening for outliers, calibration by comparison with existing external gravity field information, and error assessment. The temporal gravity gradient corrections consist of tidal and nontidal corrections. These are all generally below the gravity gradient error level, which is predicted to show a 1/f behaviour at low frequencies. In the outlier detection, the 1/f error is compensated for by subtracting a local median from the data, while the data error is assessed using the median absolute deviation. The local median acts as a high-pass filter and, like the median absolute deviation, it is robust. Three different methods have been implemented for the calibration of the gravity gradients. All three methods use a high-pass filter to compensate for the 1/f gravity gradient error. The baseline method uses state-of-the-art global gravity field models, and the most accurate results are obtained if star sensor misalignments are estimated along with the calibration parameters. A second calibration method uses GOCE GPS data to estimate a low-degree gravity field model as well as gravity gradient scale factors. Both methods allow gravity gradient scale factors to be estimated down to the 10⁻³ level. The third calibration method uses highly accurate terrestrial gravity data in selected regions to validate the gravity gradient scale factors, focussing on the measurement band. Gravity gradient scale factors may be estimated down to the 10⁻² level with this method.

  19. A method of camera calibration in the measurement process with reference mark for approaching observation space target

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Zeng, Luan

    2017-11-01

    Binocular stereoscopic vision can be used for close-range, space-based observation of space targets. To address the problem that a traditional binocular vision system cannot work normally after being disturbed, an online self-referenced calibration method for a binocular stereo measuring camera is proposed. The method uses an auxiliary optical imaging device to insert the image of a standard reference object into the edge of the main optical path, so that it is imaged on the same focal plane as the target; this is equivalent to placing a standard reference inside the binocular imaging optical system. When the position of the system or the imaging device parameters are disturbed, the image of the standard reference changes accordingly in the imaging plane, while the physical position of the standard reference object does not change. The camera's external parameters can then be re-calibrated from the observed geometry of the standard reference object. The experimental results show that the maximum mean square error for the same object can be reduced from 72.88 mm to 1.65 mm when the right camera is deflected by 0.4° and the left camera is rotated by 0.2° in elevation. This method realizes online calibration of a binocular stereoscopic vision measurement system and can effectively improve the anti-jamming ability of the system.

  20. Survival analysis with error-prone time-varying covariates: a risk set calibration approach

    PubMed Central

    Liao, Xiaomei; Zucker, David M.; Li, Yi; Spiegelman, Donna

    2010-01-01

    Summary: Occupational, environmental, and nutritional epidemiologists are often interested in estimating the prospective effect of time-varying exposure variables such as cumulative exposure or cumulative updated average exposure, in relation to chronic disease endpoints such as cancer incidence and mortality. From exposure validation studies, it is apparent that many of the variables of interest are measured with moderate to substantial error. Although the ordinary regression calibration approach is approximately valid and efficient for measurement error correction of relative risk estimates from the Cox model with time-independent point exposures when the disease is rare, it is not adaptable for use with time-varying exposures. By re-calibrating the measurement error model within each risk set, a risk set regression calibration (RRC) method is proposed for this setting. An algorithm for a bias-corrected point estimate of the relative risk using the RRC approach is presented, followed by the derivation of an estimate of its variance, resulting in a sandwich estimator. Emphasis is on methods applicable to the main study/external validation study design, which arises in important applications. Simulation studies under several assumptions about the error model were carried out, which demonstrated the validity and efficiency of the method in finite samples. The method was applied to a study of diet and cancer from Harvard’s Health Professionals Follow-up Study (HPFS). PMID:20486928

  1. An Overview of Kinematic and Calibration Models Using Internal/External Sensors or Constraints to Improve the Behavior of Spatial Parallel Mechanisms

    PubMed Central

    Majarena, Ana C.; Santolaria, Jorge; Samper, David; Aguilar, Juan J.

    2010-01-01

    This paper presents an overview of the literature on kinematic and calibration models of parallel mechanisms, the influence of sensors on mechanism accuracy, and parallel mechanisms used as sensors. The most relevant classifications to obtain and solve kinematic models and to identify geometric and non-geometric parameters in the calibration of parallel robots are discussed, examining the advantages and disadvantages of each method, presenting new trends and identifying unsolved problems. The overview addresses some of the most frequent questions that arise in the modelling of a parallel mechanism and presents the solutions developed by the most recent research: how to measure, the number of sensors and configurations required, the type and influence of errors, and the number of parameters needed. PMID:22163469

  2. Internal Water Vapor Photoacoustic Calibration

    NASA Technical Reports Server (NTRS)

    Pilgrim, Jeffrey S.

    2009-01-01

    Water vapor absorption is ubiquitous in the infrared wavelength range where photoacoustic trace gas detectors operate, making ambient water vapor available as an internal calibration reference. The technique allows for discontinuous wavelength tuning by temperature-jumping a laser diode from one range to another within a time span suitable for photoacoustic calibration. The use of an internal calibration eliminates the need for external calibrated reference gases. Commercial applications include an improvement of photoacoustic spectrometers in all fields of use.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandula, Gábor, E-mail: mandula.gabor@wigner.mta.hu; Kis, Zsolt; Lengyel, Krisztián

    We report on a method for real-time dynamic calibration of a tunable external cavity diode laser by using a partially mode-matched plano-concave Fabry-Pérot interferometer in reflection geometry. Wide-range laser frequency scanning is carried out by piezo-driven tilting of a diffractive grating playing the role of a frequency-selective mirror in the laser cavity. The grating tilting system has considerable mechanical inertia, so static laser frequency calibration leads to false results. The proposed real-time dynamic calibration, based on the identification of primary- and Gouy-effect-type secondary interference peaks with known frequency and temporal history, can be used for a wide scanning range (from 0.2 GHz to more than 1 GHz). A concave spherical mirror with a radius of R = 100 cm and a plane mirror with 1% transmission were used as a Fabry-Pérot interferometer with various resonator lengths to investigate and demonstrate real-time calibration procedures for two kinds of laser frequency scanning functions.

  4. The LED and fiber based calibration system for the photomultiplier array of SNO+

    NASA Astrophysics Data System (ADS)

    Seabra, L.; Alves, R.; Andringa, S.; Bradbury, S.; Carvalho, J.; Clark, K.; Coulter, I.; Descamps, F.; Falk, L.; Gurriana, L.; Kraus, C.; Lefeuvre, G.; Maio, A.; Maneira, J.; Mottram, M.; Peeters, S.; Rose, J.; Sinclair, J.; Skensved, P.; Waterfield, J.; White, R.; Wilson, J.; SNO+ Collaboration

    2015-02-01

    A new external LED/fiber light injection calibration system was designed for the calibration and monitoring of the photomultiplier array of the SNO+ experiment at SNOLAB. The goal of the calibration system is to allow an accurate and regular measurement of the photomultiplier array's performance, while minimizing the risk of radioactivity ingress. The choice in SNO+ was to use a set of optical fiber cables to convey into the detector the light pulses produced by external LEDs. The quality control was carried out using a modified test bench that was used in QC of optical fibers for TileCal/ATLAS. The optical fibers were characterized for transmission, timing and angular dispersions. This article describes the setups used for the characterization and quality control of the system based on LEDs and optical fibers and their results.

  5. Calibration of paired watersheds: Utility of moving sums in presence of externalities

    Treesearch

    Herbert Ssegane; D.M. Amatya; Augustine Muwamba; George M. Chescheir; Tim Appelboom; E.W. Tollner; Jami E. Nettles; Mohamed A. Youssef; Francois Birgand; R.W. Skaggs

    2017-01-01

    Historically, paired watershed studies have been used to quantify the hydrological effects of land use and management practices by concurrently monitoring two similar watersheds during calibration (pre-treatment) and post-treatment periods. This study characterizes seasonal water table and flow response to rainfall during the calibration period and tests a change...

  6. Calibration and Data Analysis of the MC-130 Air Balance

    NASA Technical Reports Server (NTRS)

    Booth, Dennis; Ulbrich, N.

    2012-01-01

    Design, calibration, calibration analysis, and intended use of the MC-130 air balance are discussed. The MC-130 balance is an 8.0 inch diameter force balance that has two separate internal air flow systems and one external bellows system. The manual calibration of the balance consisted of a total of 1854 data points with both unpressurized and pressurized air flowing through the balance. A subset of 1160 data points was chosen for the calibration data analysis. The regression analysis of the subset was performed using two fundamentally different analysis approaches. First, the data analysis was performed using a recently developed extension of the Iterative Method. This approach fits gage outputs as a function of both applied balance loads and bellows pressures while still allowing the application of the iteration scheme that is used with the Iterative Method. Then, for comparison, the axial force was also analyzed using the Non-Iterative Method. This alternate approach directly fits loads as a function of measured gage outputs and bellows pressures and does not require a load iteration. The regression models used by both the extended Iterative and Non-Iterative Method were constructed such that they met a set of widely accepted statistical quality requirements. These requirements lead to reliable regression models and prevent overfitting of data because they ensure that no hidden near-linear dependencies between regression model terms exist and that only statistically significant terms are included. Finally, a comparison of the axial force residuals was performed. Overall, axial force estimates obtained from both methods show excellent agreement as the differences of the standard deviation of the axial force residuals are on the order of 0.001 % of the axial force capacity.

  7. TU-D-201-03: Results of a Survey On the Implementation of the TG-51 Protocol and Associated Addendum On Reference Dosimetry of External Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, G; Muir, B; Culberson, W

    Purpose: The working group on the review and extension of the TG-51 protocol (WGTG51) collected data from American Association of Physicists in Medicine (AAPM) members with respect to their current TG-51 and associated addendum usage in the interest of considering future protocol addenda and guidance on reference dosimetry best practices. This study reports an overview of this survey on dosimetry of external beams. Methods: Fourteen survey questions were developed by WGTG51 and released in November 2015. The questions collected information on reference dosimetry, beam quality specification, and ancillary calibration equipment. Results: Of the 190 submissions completed worldwide (U.S. 70%), 83% were AAPM members. Of the respondents, 33.5% implemented the TG-51 addendum, with the maximum calibration difference for any photon beam, with respect to the original TG-51 protocol, being <1% for 97.4% of responses. One major finding is that 81.8% of respondents used the same cylindrical ionization chamber for photon and electron dosimetry, implying that many clinics are foregoing the use of parallel-plate chambers. Other evidence suggests equivalent dosimetric results can be obtained with both cylindrical and parallel-plate chambers in electron beams. This, combined with users' comfort with cylindrical chambers for electrons, will likely impact recommendations put forward in an upcoming electron beam addendum to the TG-51 protocol. Data collected on ancillary equipment showed 58.2% (45.0%) of the thermometers (barometers) in use for beam calibration had NIST traceable calibration certificates, but 48.4% (42.7%) were never recalibrated. Conclusion: This survey provides a snapshot of TG-51 external beam reference dosimetry practice in radiotherapy centers. Findings demonstrate the rapid take-up of the TG-51 photon beam addendum and raise issues for the WGTG51 to focus on going forward, including guidelines on ancillary equipment and the choice of chamber for electron beam dosimetry.

  8. An integrated approach to monitoring the calibration stability of operational dual-polarization radars

    DOE PAGES

    Vaccarono, Mattia; Bechini, Renzo; Chandrasekar, Chandra V.; ...

    2016-11-08

    The stability of weather radar calibration is a mandatory aspect for quantitative applications, such as rainfall estimation, short-term weather prediction and initialization of numerical atmospheric and hydrological models. Over the years, calibration monitoring techniques based on external sources have been developed, specifically calibration using the Sun and calibration based on ground clutter returns. In this paper, these two techniques are integrated and complemented with a self-consistency procedure and an intercalibration technique. The aim of the integrated approach is to implement a robust method for online monitoring, able to detect significant changes in the radar calibration. The physical consistency of polarimetric radar observables is exploited using the self-consistency approach, based on the expected correspondence between dual-polarization power and phase measurements in rain. This technique allows a reference absolute value to be provided for the radar calibration, from which any deviations may be detected using the other procedures. In particular, the ground clutter calibration is implemented on both polarization channels (horizontal and vertical) for each radar scan, allowing the polarimetric variables to be monitored and hardware failures to be promptly recognized. The Sun calibration allows monitoring of the calibration and sensitivity of the radar receiver, in addition to the antenna pointing accuracy. It is applied using observations collected during the standard operational scans but requires long integration times (several days) in order to accumulate a sufficient amount of useful data. Finally, an intercalibration technique is developed and performed to compare colocated measurements collected in rain by two radars in overlapping regions. The integrated approach is performed on the C-band weather radar network in northwestern Italy, during July–October 2014. The set of methods considered appears suitable to establish an online tool to monitor the stability of the radar calibration with an accuracy of about 2 dB. In conclusion, this is considered adequate to automatically detect any unexpected change in the radar system requiring further data analysis or on-site measurements.

  9. Calibration factors for the SNOOPY NP-100 neutron dosimeter

    NASA Astrophysics Data System (ADS)

    Moscu, D. F.; McNeill, F. E.; Chase, J.

    2007-10-01

    Within CANDU nuclear power facilities, only a small fraction of workers are exposed to neutron radiation. For these individuals, roughly 4.5% of the total radiation equivalent dose is the result of exposure to neutrons. When this figure is considered across all workers receiving external exposure of any kind, only 0.25% of the total radiation equivalent dose is the result of exposure to neutrons. At many facilities, the NP-100 neutron dosimeter, manufactured by Canberra Industries Incorporated, is employed in both direct and indirect dosimetry methods. Also known as "SNOOPY", these detectors undergo calibration against a standard Am-Be neutron source, which yields a calibration factor relating the neutron count rate to the ambient dose equivalent rate. Readings from the dosimeter, taken from measurements presented in a technical note, for six different neutron fields in six source-detector orientations were used to determine a calibration factor for each of these sources. The calibration factor depends on the neutron energy spectrum and on the radiation weighting factor that links neutron fluence to equivalent dose. Although the neutron energy spectra measured in the CANDU workplace are quite different from that of the Am-Be calibration source, the calibration factor remains constant, within acceptable limits, regardless of the neutron source used in the calibration, for the specified calibration orientation and current radiation weighting factors. However, changing the value of the radiation weighting factors would result in changes to the calibration factor. In the event of changes to the radiation weighting factors, it will be necessary to assess whether a change to the calibration process or resulting calibration factor is warranted.
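
    The idea of a calibration factor that links count rate to ambient dose equivalent rate can be shown with a short, purely illustrative calculation; all numbers below are invented, not taken from the record.

```python
# Hypothetical calibration against an Am-Be source: the reference ambient dose
# equivalent rate is assumed known, and the dosimeter reports a count rate.
reference_dose_rate_uSv_h = 85.0      # uSv/h delivered by the calibration source (assumed)
measured_count_rate_cps = 42.5        # counts per second observed by the dosimeter (assumed)

calibration_factor = reference_dose_rate_uSv_h / measured_count_rate_cps  # uSv/h per cps
print(f"calibration factor: {calibration_factor:.2f} uSv/h per cps")

# Applying the factor to a workplace measurement
workplace_count_rate_cps = 3.1
workplace_dose_rate = calibration_factor * workplace_count_rate_cps
print(f"estimated ambient dose equivalent rate: {workplace_dose_rate:.2f} uSv/h")
```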

  10. Radiometric calibration of an ultra-compact microbolometer thermal imaging module

    NASA Astrophysics Data System (ADS)

    Riesland, David W.; Nugent, Paul W.; Laurie, Seth; Shaw, Joseph A.

    2017-05-01

    As microbolometer focal plane array formats steadily decrease, new challenges arise in correcting for thermal drift in the calibration coefficients. As the thermal mass of the cameras decreases, the focal plane becomes more sensitive to external thermal inputs. This paper shows results from a temperature compensation algorithm for characterizing and radiometrically calibrating a FLIR Lepton camera.

  11. A self-calibration method in single-axis rotational inertial navigation system with rotating mechanism

    NASA Astrophysics Data System (ADS)

    Chen, Yuanpei; Wang, Lingcao; Li, Kui

    2017-10-01

    A rotary inertial navigation modulation mechanism can greatly improve inertial navigation system (INS) accuracy through rotation. A self-calibration method based on a single-axis rotational inertial navigation system (RINS) is put forward. The whole system applies the rotation modulation technique so that the entire inertial measurement unit (IMU) can rotate around the motor shaft without any external input. In the process of modulation, some important errors can be decoupled. With the initial position and attitude information of the system as references, the velocity errors and attitude errors during the rotation are used as measurements in a Kalman filter to estimate part of the important system errors, which can then be compensated in the system. The simulation results show that the method can complete the self-calibration of the single-axis RINS in 15 minutes and estimate the three-axis gyro drifts, the installation error angle of the IMU, and the scale factor error of the gyro on the z-axis. The calibration accuracy of the optical gyro drifts is about 0.003°/h (1σ), and that of the scale factor error is about 1 part per million (1σ). The error estimates meet the system requirements, which can effectively improve the long-term navigation accuracy of a vehicle or boat.
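
    A minimal sketch of the measurement-update step of such a Kalman-filter self-calibration, in which observed velocity/attitude errors correct an error-state estimate. The state layout, measurement matrix, and noise levels are assumptions for illustration only, not the paper's actual error model.

```python
import numpy as np

# Illustrative error-state: [gyro bias (z), scale-factor error (z), misalignment angle]
x = np.zeros(3)                      # prior error-state estimate
P = np.diag([1e-6, 1e-6, 1e-6])      # prior covariance (assumed)

# Hypothetical measurement: attitude/velocity errors observed during rotation,
# linearly related to the error-state through H (assumed geometry).
H = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.3]])
R = np.diag([1e-8, 1e-8])            # measurement noise covariance (assumed)
z = np.array([2.0e-5, 1.5e-5])       # observed errors (assumed)

# Standard Kalman measurement update
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)
x = x + K @ (z - H @ x)
P = (np.eye(3) - K @ H) @ P

print("updated error-state estimate:", x)
```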

  12. X-ray fluorescence analysis of K, Al and trace elements in chloroaluminate melts

    NASA Astrophysics Data System (ADS)

    Shibitko, A. O.; Abramov, A. V.; Denisov, E. I.; Lisienko, D. G.; Rebrin, O. I.; Bunkov, G. M.; Rychkov, V. N.

    2017-09-01

    Energy-dispersive X-ray fluorescence spectrometry was applied to the quantitative determination of K, Al, Cr, Fe and Ni in chloroaluminate melts. To implement the external standard calibration method, an unconventional sample preparation procedure was suggested. A mixture of metal chlorides was melted in a quartz cell at 350-450 °C under a slightly excessive pressure of purified argon (99.999%). The composition of the calibration samples (CSs) prepared was controlled by means of inductively coupled plasma atomic emission spectrometry (ICP-AES). The optimal conditions for excitation of the analytical lines were determined, and calibration curves for the analytes were obtained. Matrix effects in the synthesized samples had some influence on the analytical signal of several elements. The CSs are to be stored in an inert gas atmosphere. The precision, accuracy, and reproducibility of the quantitative chemical analysis were computed.

  13. Alignment and calibration of the MgF2 biplate compensator for applications in rotating-compensator multichannel ellipsometry.

    PubMed

    Lee, J; Rovira, P I; An, I; Collins, R W

    2001-08-01

    Biplate compensators made from MgF2 are being used increasingly in rotating-element single-channel and multichannel ellipsometers. For the measurement of accurate ellipsometric spectra, the compensator must be carefully (i) aligned internally to ensure that the fast axes of the two plates are perpendicular and (ii) calibrated to determine the phase retardance delta versus photon energy E. We present alignment and calibration procedures for multichannel ellipsometer configurations with special attention directed to the precision, accuracy, and reproducibility in the determination of delta (E). Run-to-run variations in external compensator alignment, i.e., alignment with respect to the incident beam, can lead to irreproducibilities in delta of approximately 0.2 degrees . Errors in the ellipsometric measurement of a sample can be minimized by calibrating with an external compensator alignment that matches as closely as possible that used in the measurement.

  14. Matrix-normalised quantification of species by threshold-calibrated competitive real-time PCR: allergenic peanut in food as one example.

    PubMed

    Holzhauser, Thomas; Kleiner, Kornelia; Janise, Annabella; Röder, Martin

    2014-11-15

    A novel method to quantify species or DNA on the basis of a competitive quantitative real-time polymerase chain reaction (cqPCR) was developed. Potentially allergenic peanut in food served as one example. Based on an internal competitive DNA sequence for normalisation of DNA extraction and amplification, the cqPCR was threshold-calibrated against 100 mg/kg incurred peanut in milk chocolate. No external standards were necessary. The competitive molecule successfully served as calibrator for quantification, matrix normalisation, and inhibition control. Although designed for verification of a virtual threshold of 100 mg/kg, the method allowed quantification of 10-1,000 mg/kg peanut incurred in various food matrices without further matrix adaptation: on the basis of four PCR replicates per sample, mean recovery of 10-1,000 mg/kg peanut in chocolate, vanilla ice cream, cookie dough, cookie, and muesli was 87% (range: 39-147%), in comparison to 199% (range: 114-237%) obtained by three commercial ELISA kits. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Reflectance infrared spectroscopy for in-line monitoring of nicotine during a coating process for an oral thin film.

    PubMed

    Hammes, Florian; Hille, Thomas; Kissel, Thomas

    2014-02-01

    A process analytical method using reflectance infrared spectrometry was developed for the in-line monitoring of the amount of the active pharmaceutical ingredient (API) nicotine during a coating process for an oral thin film (OTF). In-line measurements were made using a reflectance infrared (RI) sensor positioned after the last drying zone of the coating line. Real-time spectra from the coating process were used for modelling the nicotine content. Partial least squares (PLS1) calibration models with different data pre-treatments were generated. The calibration model with the most comparable standard error of calibration (SEC) and standard error of cross-validation (SECV) was selected for an external validation run on the production coating line with an independent laminate. Good correlations were obtained between values estimated from the reflectance infrared data and those from the reference HPLC test method. The in-line measurements made it possible to apply real-time adjustments during the production process to keep product specifications within predefined limits, hence avoiding loss of material and batches. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vaccarono, Mattia; Bechini, Renzo; Chandrasekar, Chandra V.

    The stability of weather radar calibration is a mandatory aspect for quantitative applications, such as rainfall estimation, short-term weather prediction and initialization of numerical atmospheric and hydrological models. Over the years, calibration monitoring techniques based on external sources have been developed, specifically calibration using the Sun and calibration based on ground clutter returns. In this paper, these two techniques are integrated and complemented with a self-consistency procedure and an intercalibration technique. The aim of the integrated approach is to implement a robust method for online monitoring, able to detect significant changes in the radar calibration. The physical consistency of polarimetric radar observables is exploited using the self-consistency approach, based on the expected correspondence between dual-polarization power and phase measurements in rain. This technique allows a reference absolute value to be provided for the radar calibration, from which any deviations may be detected using the other procedures. In particular, the ground clutter calibration is implemented on both polarization channels (horizontal and vertical) for each radar scan, allowing the polarimetric variables to be monitored and hardware failures to be promptly recognized. The Sun calibration allows monitoring of the calibration and sensitivity of the radar receiver, in addition to the antenna pointing accuracy. It is applied using observations collected during the standard operational scans but requires long integration times (several days) in order to accumulate a sufficient amount of useful data. Finally, an intercalibration technique is developed and performed to compare colocated measurements collected in rain by two radars in overlapping regions. The integrated approach is performed on the C-band weather radar network in northwestern Italy, during July–October 2014. The set of methods considered appears suitable to establish an online tool to monitor the stability of the radar calibration with an accuracy of about 2 dB. In conclusion, this is considered adequate to automatically detect any unexpected change in the radar system requiring further data analysis or on-site measurements.

  17. SUMS calibration test report

    NASA Technical Reports Server (NTRS)

    Robertson, G.

    1982-01-01

    Calibration was performed on the shuttle upper atmosphere mass spectrometer (SUMS). The results of the calibration and the as-run test procedures are presented. The output data are described, and engineering data conversion factors, tables and curves, and calibration of the instrument gauges are included. Static calibration results are given, including instrument sensitivity versus external pressure for N2 and O2, data from each calibration scan, data plots for N2 and O2, sensitivity of SUMS at the inlet for N2 and O2, and the 14/28 ratio for nitrogen and the 16/32 ratio for oxygen.

  18. Quantitative real-time monitoring of multi-elements in airborne particulates by direct introduction into an inductively coupled plasma mass spectrometer

    NASA Astrophysics Data System (ADS)

    Suzuki, Yoshinari; Sato, Hikaru; Hiyoshi, Katsuhiro; Furuta, Naoki

    2012-10-01

    A new calibration system for real-time determination of trace elements in airborne particulates was developed. Airborne particulates were directly introduced into an inductively coupled plasma mass spectrometer, and the concentrations of 15 trace elements were determined by means of an external calibration method. External standard solutions were nebulized by an ultrasonic nebulizer (USN) coupled with a desolvation system, and the resulting aerosol was introduced into the plasma. The efficiency of sample introduction via the USN was calculated by two methods: (1) the introduction of a Cr standard solution via the USN was compared with introduction of a Cr(CO)6 standard gas via a standard gas generator and (2) the aerosol generated by the USN was trapped on filters and then analyzed. The Cr introduction efficiencies obtained by the two methods were the same, and the introduction efficiencies of the other elements were equal to the introduction efficiency of Cr. Our results indicated that our calibration method for introduction efficiency worked well for the 15 elements (Ti, V, Cr, Mn, Co, Ni, Cu, Zn, As, Mo, Sn, Sb, Ba, Tl and Pb). The real-time data and the filter-collection data agreed well for elements with low-melting oxides (V, Co, As, Mo, Sb, Tl, and Pb). In contrast, the real-time data were smaller than the filter-collection data for elements with high-melting oxides (Ti, Cr, Mn, Ni, Cu, Zn, Sn, and Ba). This result implies that the oxides of these 8 elements were not completely fused, vaporized, atomized, and ionized in the initial radiation zone of the inductively coupled plasma. However, quantitative real-time monitoring can be realized after correction for the element recoveries which can be calculated from the ratio of real-time data/filter-collection data.
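
    The recovery correction described in the last sentence amounts to dividing a real-time reading by the element recovery determined from a side-by-side comparison with filter collection. A minimal sketch with invented numbers:

```python
# Recovery determined once from a side-by-side characterisation period (assumed values)
real_time_ref = 12.0      # ng/m^3, direct-introduction ICP-MS result for one element
filter_ref = 20.0         # ng/m^3, filter-collection reference result
recovery = real_time_ref / filter_ref

# Later routine monitoring: correct a new real-time reading with that recovery
new_real_time = 8.4
corrected = new_real_time / recovery
print(f"recovery = {recovery:.2f}; corrected concentration = {corrected:.1f} ng/m^3")
```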

  19. A low-cost acoustic permeameter

    NASA Astrophysics Data System (ADS)

    Drake, Stephen A.; Selker, John S.; Higgins, Chad W.

    2017-04-01

    Intrinsic permeability is an important parameter that regulates air exchange through porous media such as snow. Standard methods of measuring snow permeability are inconvenient to perform outdoors, are fraught with sampling errors, and require specialized equipment, while bringing intact samples back to the laboratory is also challenging. To address these issues, we designed, built, and tested a low-cost acoustic permeameter that allows computation of volume-averaged intrinsic permeability for a homogeneous medium. In this paper, we validate acoustically derived permeability of homogeneous, reticulated foam samples by comparison with results derived using a standard flow-through permeameter. Acoustic permeameter elements were designed for use in snow, but the measurement methods are not snow-specific. The electronic components - consisting of a signal generator, amplifier, speaker, microphone, and oscilloscope - are inexpensive and easily obtainable. The system is suitable for outdoor use when it is not precipitating, but the electrical components require protection from the elements in inclement weather. The permeameter can be operated with a microphone either internally mounted or buried a known depth in the medium. The calibration method depends on choice of microphone positioning. For an externally located microphone, calibration was based on a low-frequency approximation applied at 500 Hz that provided an estimate of both intrinsic permeability and tortuosity. The low-frequency approximation that we used is valid up to 2 kHz, but we chose 500 Hz because data reproducibility was maximized at this frequency. For an internally mounted microphone, calibration was based on attenuation at 50 Hz and returned only intrinsic permeability. We found that 50 Hz corresponded to a wavelength that minimized resonance frequencies in the acoustic tube and was also within the response limitations of the microphone. We used reticulated foam of known permeability (ranging from 2 × 10⁻⁷ to 3 × 10⁻⁹ m²) and estimated tortuosity of 1.05 to validate both methods. For the externally mounted microphone the mean normalized standard deviation was 6% for permeability and 2% for tortuosity. The mean relative error from known measurements was 17% for permeability and 2% for tortuosity. For the internally mounted microphone the mean normalized standard deviation for permeability was 10% and the relative error was also 10%. Permeability determination with an externally mounted microphone is less sensitive to environmental noise than with the internally mounted microphone and is therefore the recommended method. The approximation using the internally mounted microphone was developed as an alternative for circumstances in which placing the microphone in the medium was not feasible. Environmental noise degrades the precision of both methods and is recognizable as increased scatter for replicate data points.

  20. Air sampling with solid phase microextraction

    NASA Astrophysics Data System (ADS)

    Martos, Perry Anthony

    There is an increasing need for simple yet accurate air sampling methods. The acceptance of new air sampling methods requires compatibility with conventional chromatographic equipment, and the new methods have to be environmentally friendly, simple to use, yet with equal, or better, detection limits, accuracy and precision than standard methods. Solid phase microextraction (SPME) satisfies the conditions for new air sampling methods. Analyte detection limits, accuracy and precision of analysis with SPME are typically better than with any conventional air sampling methods. Yet, air sampling with SPME requires no pumps or solvents, is re-usable, extremely simple to use, is completely compatible with current chromatographic equipment, and requires a small capital investment. The first SPME fiber coating used in this study was poly(dimethylsiloxane) (PDMS), a hydrophobic liquid film, to sample a large range of airborne hydrocarbons such as benzene and octane. Quantification without an external calibration procedure is possible with this coating. The physical and chemical properties of this coating are well understood and are quite similar to those of the siloxane stationary phase used in capillary columns. The log of the analyte distribution coefficient for PDMS is linearly related to the chromatographic retention index and to the inverse of temperature. Therefore, the actual chromatogram from the analysis of the PDMS air sampler will yield the calibration parameters which are used to quantify unknown airborne analyte concentrations (ppbv to ppmv range). The second fiber coating used in this study was PDMS/divinyl benzene (PDMS/DVB) onto which o-(2,3,4,5,6-pentafluorobenzyl) hydroxylamine (PFBHA) was adsorbed for the on-fiber derivatization of gaseous formaldehyde (ppbv range), with and without external calibration. The oxime formed from the reaction can be detected with conventional gas chromatographic detectors. Typical grab sampling times were as small as 5 seconds. With 300 seconds sampling, the formaldehyde detection limit was 2.1 ppbv, better than any other 5-minute sampling device for formaldehyde. The first-order rate constant for product formation was used to quantify formaldehyde concentrations without a calibration curve. This spot sampler was used to sample the headspace of hair gel, particle board, plant material and coffee grounds for formaldehyde, and other carbonyl compounds, with extremely promising results. The SPME sampling devices were also used for time-weighted average sampling (30 minutes to 16 hours). Finally, the four new SPME air sampling methods were field tested with side-by-side comparisons to standard air sampling methods, demonstrating the utility of SPME as an air sampler.
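
    A hedged sketch of the calibration-free quantification idea for the PDMS coating: the distribution coefficient K is estimated from a linear relation between log K and the retention index (the coefficients and sample values below are placeholders, not the thesis's fitted values), and the airborne concentration then follows from the equilibrium relation C_air = n / (K · V_f).

```python
def airborne_concentration_ng_per_L(mass_extracted_ng, log_K, fiber_volume_L=6.6e-7):
    """Equilibrium SPME relation: C_air = n / (K * V_f).

    fiber_volume_L is a typical 100-um PDMS coating volume (~0.66 uL); treat it
    as an assumption for this sketch.
    """
    K = 10.0 ** log_K
    return mass_extracted_ng / (K * fiber_volume_L)

# Hypothetical linear relation between log K and the retention index at the
# sampling temperature -- placeholder coefficients, for illustration only.
a, b = -0.19, 0.00415
retention_index_benzene = 650.0            # retention index read from the chromatogram
log_K_benzene = a + b * retention_index_benzene

mass_ng = 0.5                              # mass extracted onto the fiber (assumed, from GC-FID)
conc = airborne_concentration_ng_per_L(mass_ng, log_K_benzene)
print(f"estimated airborne concentration: {conc:.0f} ng/L")
```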

  1. Early Prediction of Intensive Care Unit-Acquired Weakness: A Multicenter External Validation Study.

    PubMed

    Witteveen, Esther; Wieske, Luuk; Sommers, Juultje; Spijkstra, Jan-Jaap; de Waard, Monique C; Endeman, Henrik; Rijkenberg, Saskia; de Ruijter, Wouter; Sleeswijk, Mengalvio; Verhamme, Camiel; Schultz, Marcus J; van Schaik, Ivo N; Horn, Janneke

    2018-01-01

    An early diagnosis of intensive care unit-acquired weakness (ICU-AW) is often not possible due to impaired consciousness. To avoid a diagnostic delay, we previously developed a prediction model, based on single-center data from 212 patients (development cohort), to predict ICU-AW at 2 days after ICU admission. The objective of this study was to investigate the external validity of the original prediction model in a new, multicenter cohort and, if necessary, to update the model. Newly admitted ICU patients who were mechanically ventilated at 48 hours after ICU admission were included. Predictors were prospectively recorded, and the outcome ICU-AW was defined by an average Medical Research Council score <4. In the validation cohort, consisting of 349 patients, we analyzed performance of the original prediction model by assessment of calibration and discrimination. Additionally, we updated the model in this validation cohort. Finally, we evaluated a new prediction model based on all patients of the development and validation cohort. Of 349 analyzed patients in the validation cohort, 190 (54%) developed ICU-AW. Both model calibration and discrimination of the original model were poor in the validation cohort. The area under the receiver operating characteristics curve (AUC-ROC) was 0.60 (95% confidence interval [CI]: 0.54-0.66). Model updating methods improved calibration but not discrimination. The new prediction model, based on all patients of the development and validation cohort (total of 536 patients) had a fair discrimination, AUC-ROC: 0.70 (95% CI: 0.66-0.75). The previously developed prediction model for ICU-AW showed poor performance in a new independent multicenter validation cohort. Model updating methods improved calibration but not discrimination. The newly derived prediction model showed fair discrimination. This indicates that early prediction of ICU-AW is still challenging and needs further attention.
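
    Assessing discrimination and calibration on an external validation cohort is commonly done with the area under the ROC curve and a calibration plot. A minimal sketch using scikit-learn with synthetic outcomes and predicted probabilities (the data are invented, not the study's):

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.calibration import calibration_curve

# Hypothetical external-validation data: observed outcomes and the probabilities
# predicted by a previously developed model (values are made up).
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=349)
y_prob = np.clip(0.5 * y_true + rng.normal(0.3, 0.2, size=349), 0.01, 0.99)

# Discrimination: area under the ROC curve
print(f"AUC-ROC: {roc_auc_score(y_true, y_prob):.2f}")

# Calibration: observed event fraction versus mean predicted probability per bin
obs, pred = calibration_curve(y_true, y_prob, n_bins=5)
for o, p in zip(obs, pred):
    print(f"predicted {p:.2f} -> observed {o:.2f}")
```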

  2. Sulfate and sulfide sulfur isotopes (δ34S and δ33S) measured by solution and laser ablation MC-ICP-MS: An enhanced approach using external correction

    USGS Publications Warehouse

    Pribil, Michael; Ridley, William I.; Emsbo, Poul

    2015-01-01

    Isotope ratio measurements by multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS) commonly use standard-sample bracketing with a single isotope standard for mass bias correction of elements with narrow-range isotope systems, e.g. Cu, Fe, Zn, and Hg. However, the sulfur (S) isotopic composition (δ34S) in nature can range from at least −40 to +40‰, potentially exceeding the ability of standard-sample bracketing with a single sulfur isotope standard to accurately correct for mass bias. Isotopic fractionation via solution and laser ablation introduction was determined during sulfate sulfur (Ssulfate) isotope measurements. An external isotope calibration curve was constructed using in-house and National Institute of Standards and Technology (NIST) Ssulfate isotope reference materials (RM) in an attempt to correct for the difference. The ability of external isotope correction for Ssulfate isotope measurements was evaluated by analyzing NIST and United States Geological Survey (USGS) Ssulfate isotope reference materials as unknowns. Differences in δ34Ssulfate between standard-sample bracketing and standard-sample bracketing with external isotope correction for sulfate samples ranged from 0.72‰ to 2.35‰ over a δ34S range of 1.40‰ to 21.17‰. No isotopic differences were observed when analyzing Ssulfide reference materials over a δ34Ssulfide range of −32.1‰ to 17.3‰ and a δ33S range of −16.5‰ to 8.9‰ via laser ablation (LA)-MC-ICP-MS. Here, we identify a possible plasma-induced fractionation for Ssulfate and describe a new method using external isotope calibration corrections with solution and LA-MC-ICP-MS.
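
    A small sketch of the external isotope calibration idea: delta values are computed in the usual per-mil notation, and a linear calibration curve built from reference materials of known δ34S maps the bracketing-corrected measurements onto the accepted scale. All values are illustrative, not the paper's data.

```python
import numpy as np

def delta_permil(r_sample, r_standard):
    """Delta value in per mil: (R_sample / R_standard - 1) * 1000."""
    return (r_sample / r_standard - 1.0) * 1000.0

# Reference materials of known (certified) delta34S and their bracketing-corrected
# measured values -- illustrative numbers only.
known_true = np.array([1.4, 8.6, 17.3, 21.2])
measured = np.array([2.1, 9.8, 18.9, 23.1])

# External calibration curve: map measured deltas onto the accepted scale
slope, intercept = np.polyfit(measured, known_true, 1)

unknown_measured = 12.4
corrected = slope * unknown_measured + intercept
print(f"externally corrected delta34S = {corrected:.2f} permil")
```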

  3. Validity of endothelial cell analysis methods and recommendations for calibration in Topcon SP-2000P specular microscopy.

    PubMed

    van Schaick, Willem; van Dooren, Bart T H; Mulder, Paul G H; Völker-Dieben, Hennie J M

    2005-07-01

    To report on the calibration of the Topcon SP-2000P specular microscope and the Endothelial Cell Analysis Module of the IMAGEnet 2000 software, and to establish the validity of the different endothelial cell density (ECD) assessment methods available in these instruments. Using an external microgrid, we calibrated the magnification of the SP-2000P and the IMAGEnet software. In both eyes of 36 volunteers, we validated 4 ECD assessment methods by comparing these methods to the gold standard manual ECD, manual counting of cells on a video print. These methods were: the estimated ECD, estimation of ECD with a reference grid on the camera screen; the SP-2000P ECD, pointing out whole contiguous cells on the camera screen; the uncorrected IMAGEnet ECD, using automatically drawn cell borders, and the corrected IMAGEnet ECD, with manual correction of incorrectly drawn cell borders in the automated analysis. Validity of each method was evaluated by calculating both the mean difference with the manual ECD and the limits of agreement as described by Bland and Altman. Preset factory values of magnification were incorrect, resulting in errors in ECD of up to 9%. All assessments except one of the estimated ECDs differed significantly from manual ECDs, with most differences being similar (≤6.5%), except for uncorrected IMAGEnet ECD (30.2%). Corrected IMAGEnet ECD showed the narrowest limits of agreement (-4.9 to +19.3%). We advise checking the calibration of magnification in any specular microscope or endothelial analysis software as it may be erroneous. Corrected IMAGEnet ECD is the most valid of the investigated methods in the Topcon SP-2000P/IMAGEnet 2000 combination.
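
    The validity assessment used here (mean difference and Bland-Altman limits of agreement against the manual gold standard) is straightforward to compute. A minimal sketch with hypothetical paired ECD readings (absolute values rather than the percentages reported in the record):

```python
import numpy as np

def bland_altman(method_ecd, manual_ecd):
    """Mean difference and 95% limits of agreement (Bland & Altman)."""
    diff = np.asarray(method_ecd, dtype=float) - np.asarray(manual_ecd, dtype=float)
    bias = diff.mean()
    spread = 1.96 * diff.std(ddof=1)
    return bias, bias - spread, bias + spread

# Hypothetical paired ECD readings (cells/mm^2), not the study's data
manual = np.array([2510, 2730, 2450, 2890, 2620])
corrected_imagenet = np.array([2540, 2800, 2410, 2950, 2700])

bias, low, high = bland_altman(corrected_imagenet, manual)
print(f"mean difference {bias:.0f} cells/mm^2, limits of agreement [{low:.0f}, {high:.0f}]")
```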

  4. Revision and proposed modification for a total maximum daily load model for Upper Klamath Lake, Oregon

    USGS Publications Warehouse

    Wherry, Susan A.; Wood, Tamara M.; Anderson, Chauncey W.

    2015-01-01

    Using the extended 1991–2010 external phosphorus loading dataset, the lake TMDL model was recalibrated following the same procedures outlined in the Phase 1 review. The version of the model selected for further development incorporated an updated sediment initial condition, a numerical solution method for the chlorophyll a model, changes to light and phosphorus factors limiting algal growth, and a new pH-model regression, which removed Julian day dependence in order to avoid discontinuities in pH at year boundaries. This updated lake TMDL model was recalibrated using the extended dataset in order to compare calibration parameters to those obtained from a calibration with the original 7.5-year dataset. The resulting algal settling velocity calibrated from the extended dataset was more than twice the value calibrated with the original dataset, and, because the calibrated values of algal settling velocity and recycle rate are related (more rapid settling required more rapid recycling), the recycling rate also was larger than that determined with the original dataset. These changes in calibration parameters highlight the uncertainty in critical rates in the Upper Klamath Lake TMDL model and argue for their direct measurement in future data collection to increase confidence in the model predictions.

  5. Lessons learned from the AIRS pre-flight radiometric calibration

    NASA Astrophysics Data System (ADS)

    Pagano, Thomas S.; Aumann, Hartmut H.; Weiler, Margie

    2013-09-01

    The Atmospheric Infrared Sounder (AIRS) instrument flies on the NASA Aqua satellite and measures the upwelling hyperspectral earth radiance in the spectral range of 3.7-15.4 μm with a nominal ground resolution at nadir of 13.5 km. The AIRS spectra are achieved using a temperature controlled grating spectrometer and HgCdTe infrared linear arrays providing 2378 channels with a nominal spectral resolution of approximately 1200. The AIRS pre-flight tests that impact the radiometric calibration include a full system radiometric response (linearity), polarization response, and response vs scan angle (RVS). We re-derive the AIRS instrument radiometric calibration coefficients from the pre-flight polarization measurements, the response vs scan (RVS) angle tests as well as the linearity tests, and a recent lunar roll test that allowed the AIRS to view the moon. The data and method for deriving the coefficients is discussed in detail and the resulting values compared amongst the different tests. Finally, we examine the residual errors in the reconstruction of the external calibrator blackbody radiances and the efficacy of a new radiometric uncertainty model. Results show the radiometric calibration of AIRS to be excellent and the radiometric uncertainty model does a reasonable job of characterizing the errors.

  6. High-performance thin-layer chromatographic-densitometric determination of secoisolariciresinol diglucoside in flaxseed.

    PubMed

    Coran, Silvia A; Giannellini, Valerio; Bambagiotti-Alberti, Massimo

    2004-08-06

    An HPTLC-densitometric method, based on an external standard approach, was developed in order to obtain a novel procedure for routine analysis of secoisolariciresinol diglucoside (SDG) in flaxseed with a minimum of sample pre-treatment. Optimization of TLC conditions for the densitometric scanning was reached by eluting HPTLC silica gel plates in a horizontal developing chamber. Quantitation of SDG was performed in single-beam reflectance mode by using a computer-controlled densitometric scanner and applying a five-point calibration in the 1.00-10.00 microg/spot range. As no sample preparation was required, the proposed HPTLC-densitometric procedure proved to be reliable while still using an external standard approach. The proposed method is precise, reproducible and accurate and can be employed profitably in place of HPLC for the determination of SDG in complex matrices.

  7. Self-Powered Neutron Detector Calibration Using a Large Vertical Irradiation Hole of HANARO

    NASA Astrophysics Data System (ADS)

    Kim, Myong-Seop; Park, Byung-Gun; Kang, Gi-Doo

    2018-01-01

    A calibration technology of the self-powered neutron detectors (SPNDs) using a large vertical irradiation hole of HANARO is developed. The 40 Rh-SPNDs are installed on the polycarbonate plastic support, and the gold wires with the same length as the effective length of the rhodium emitter of the SPND are also installed to measure the neutron flux on the SPND. They are irradiated at a low reactor power, and the SPND current is measured using the pico-ammeter. The external gamma-rays which affect the SPND current response are analyzed using the Monte Carlo simulation for various irradiation conditions in HANARO. It is confirmed that the effect of the external gamma-rays to the SPND current is dependent on the reactor characteristics, and that it is affected by materials around the detector. The current signals due to the external gamma-rays can be either positive or negative, in that the net flow of the current may be either in the same or the opposite direction as the neutron-induced current by the rhodium emitter. From the above procedure, the effective calibration methodology of multiple SPNDs using the large hole of HANARO is developed. It could be useful for the calibration experiment of the neutron detectors in the research reactors.

  8. Efficient solution methodology for calibrating the hemodynamic model using functional Magnetic Resonance Imaging (fMRI) measurements.

    PubMed

    Zambri, Brian; Djellouli, Rabia; Laleg-Kirati, Taous-Meriem

    2015-08-01

    Our aim is to propose a numerical strategy for retrieving accurately and efficiently the biophysiological parameters as well as the external stimulus characteristics corresponding to the hemodynamic mathematical model that describes changes in blood flow and blood oxygenation during brain activation. The proposed method employs the TNM-CKF method developed in [1], but in a prediction/correction framework. We present numerical results using both real and synthetic functional Magnetic Resonance Imaging (fMRI) measurements to highlight the performance characteristics of this computational methodology.

  9. New tests of the common calibration context for ISO, IRTS, and MSX

    NASA Technical Reports Server (NTRS)

    Cohen, Martin

    1997-01-01

    The work carried out in order to test, verify and validate the accuracy of the calibration spectra provided to the Infrared Space Observatory (ISO), to the Infrared Telescope in Space (IRTS) and to the Midcourse Space Experiment (MSX) for external calibration support of instruments, is reviewed. The techniques, used to vindicate the accuracy of the absolute spectra, are discussed. The work planned for comparing far infrared spectra of Mars and some of the bright stellar calibrators with long wavelength spectrometer data are summarized.

  10. Evaluation of multivariate calibration models with different pre-processing and processing algorithms for a novel resolution and quantitation of spectrally overlapped quaternary mixture in syrup

    NASA Astrophysics Data System (ADS)

    Moustafa, Azza A.; Hegazy, Maha A.; Mohamed, Dalia; Ali, Omnia

    2016-02-01

    A novel approach for the resolution and quantitation of a severely overlapped quaternary mixture of carbinoxamine maleate (CAR), pholcodine (PHL), ephedrine hydrochloride (EPH) and sunset yellow (SUN) in syrup was demonstrated utilizing different spectrophotometrically assisted multivariate calibration methods. The applied methods used different processing and pre-processing algorithms. The proposed methods were partial least squares (PLS), concentration residuals augmented classical least squares (CRACLS), and a novel method: continuous wavelet transforms coupled with partial least squares (CWT-PLS). These methods were applied to a training set in the concentration ranges of 40-100 μg/mL, 40-160 μg/mL, 100-500 μg/mL and 8-24 μg/mL for the four components, respectively. The methods did not require any preliminary separation step or chemical pretreatment. The validity of the methods was evaluated by an external validation set. The selectivity of the developed methods was demonstrated by analyzing the drugs in their combined pharmaceutical formulation without any interference from additives. The obtained results were statistically compared with the official and reported methods, and no significant difference was observed regarding either accuracy or precision.
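
    Of the chemometric models mentioned, partial least squares with an external validation set is the most standard and can be sketched with scikit-learn. The spectra and concentrations below are synthetic stand-ins, and the number of latent variables is an assumption:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Synthetic stand-ins for preprocessed spectra (rows) and the concentration of one
# component; shapes and values are illustrative only.
rng = np.random.default_rng(1)
X_train = rng.normal(size=(25, 200))          # training spectra
y_train = rng.uniform(40, 100, size=25)       # e.g. one analyte, ug/mL
X_valid = rng.normal(size=(10, 200))          # external validation spectra
y_valid = rng.uniform(40, 100, size=10)

pls = PLSRegression(n_components=3)           # number of latent variables (assumed)
pls.fit(X_train, y_train)

y_pred = pls.predict(X_valid).ravel()
rmsep = np.sqrt(np.mean((y_pred - y_valid) ** 2))
print(f"RMSEP on the external validation set: {rmsep:.2f} ug/mL")
```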

  11. Categorizing accident sequences in the external radiotherapy for risk analysis

    PubMed Central

    2013-01-01

    Purpose: This study identifies accident sequences from past accidents in order to help the application of risk analysis to external radiotherapy. Materials and Methods: This study reviews 59 accidental cases in two retrospective safety analyses that have collected incidents in external radiotherapy extensively. Two accident analysis reports that accumulated past incidents are investigated to identify accident sequences including initiating events, failure of safety measures, and consequences. This study classifies the accidents by the treatment stages and sources of errors for initiating events, types of failures in the safety measures, and types of undesirable consequences and the number of affected patients. Then, the accident sequences are grouped into several categories on the basis of similarity of progression. As a result, these cases can be categorized into 14 groups of accident sequence. Results: The result indicates that risk analysis needs to pay attention not only to the planning stage, but also to the calibration stage that is performed prior to the main treatment process. It also shows that human error is the largest contributor to initiating events as well as to the failure of safety measures. This study also illustrates an event tree analysis for an accident sequence initiated in the calibration. Conclusion: This study is expected to provide insights into the accident sequences for prospective risk analysis through the review of experiences. PMID:23865005

  12. Improved mass resolution and mass accuracy in TOF-SIMS spectra and images using argon gas cluster ion beams.

    PubMed

    Shon, Hyun Kyong; Yoon, Sohee; Moon, Jeong Hee; Lee, Tae Geol

    2016-06-09

    The popularity of argon gas cluster ion beams (Ar-GCIB) as primary ion beams in time-of-flight secondary ion mass spectrometry (TOF-SIMS) has increased because the molecular ions of large organic molecules and biomolecules can be detected with less damage to the sample surfaces. However, Ar-GCIB is limited by poor mass resolution as well as poor mass accuracy. The mass resolution of a TOF-SIMS spectrum obtained with Ar-GCIB is inferior to that obtained with a bismuth liquid metal cluster ion beam and other sources, which makes it difficult to identify unknown peaks because of mass interference from neighboring peaks. In this study, however, the authors demonstrate improved mass resolution in TOF-SIMS using Ar-GCIB through the delayed extraction of secondary ions, a method typically used in TOF mass spectrometry to increase mass resolution. As for poor mass accuracy, although mass calibration using internal peaks with low mass such as hydrogen and carbon is a common approach in TOF-SIMS, it is unsuited to the present study because of the disappearance of the low-mass peaks in the delayed extraction mode. To resolve this issue, external mass calibration, another regularly used method in TOF-MS, was adapted to enhance mass accuracy in the spectrum and image generated by TOF-SIMS using Ar-GCIB in the delayed extraction mode. By producing spectral analyses of a peptide mixture and bovine serum albumin protein digested with trypsin, along with image analyses of rat brain samples, the authors demonstrate for the first time the enhancement of mass resolution and mass accuracy for the purpose of analyzing large biomolecules in TOF-SIMS using Ar-GCIB through the use of delayed extraction and external mass calibration.
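
    External mass calibration in time-of-flight instruments typically exploits the fact that the square root of m/z is linear in flight time; the constants are fitted on a calibrant measured under the same (delayed-extraction) conditions and then applied to the sample spectrum. A hedged sketch with invented peak lists:

```python
import numpy as np

# Known peaks from an external calibrant spectrum acquired under identical
# delayed-extraction settings: flight times (us) and exact m/z (values assumed).
t_cal = np.array([28.4, 41.9, 55.1, 73.6])
mz_cal = np.array([100.0758, 221.0840, 385.2010, 684.4622])

# TOF relation: sqrt(m/z) = a*t + b
a, b = np.polyfit(t_cal, np.sqrt(mz_cal), 1)

def mz_from_time(t_us):
    """Convert flight time to m/z with the externally fitted constants."""
    return (a * t_us + b) ** 2

# Apply the external calibration to peaks picked from the sample spectrum
sample_times = np.array([33.2, 60.7])
print(np.round(mz_from_time(sample_times), 4))
```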

  13. Hydrologic calibration of paired watersheds using a MOSUM approach

    DOE PAGES

    Ssegane, H.; Amatya, D. M.; Muwamba, A.; ...

    2015-01-09

    Paired watershed studies have historically been used to quantify hydrologic effects of land use and management practices by concurrently monitoring two neighboring watersheds (a control and a treatment) during the calibration (pre-treatment) and post-treatment periods. This study characterizes seasonal water table and flow response to rainfall during the calibration period and tests a change detection technique of moving sums of recursive residuals (MOSUM) to select calibration periods for each control-treatment watershed pair when the regression coefficients for daily water table elevation (WTE) were most stable, in order to reduce regression model uncertainty. The control and treatment watersheds included 1–3 year intensively managed loblolly pine (Pinus taeda L.) with natural understory, same-age loblolly pine intercropped with switchgrass (Panicum virgatum), 14–15 year thinned loblolly pine with natural understory (control), and switchgrass only. Although monitoring during the calibration period spanned 2009 to 2012, silvicultural operational practices that occurred during this period, such as harvesting of the existing stand and site preparation for pine and switchgrass establishment, may have acted as external factors, potentially shifting hydrologic calibration relationships between control and treatment watersheds. Results indicated that MOSUM was able to detect significant changes in regression parameters for WTE due to silvicultural operations. This approach also minimized uncertainty of calibration relationships which could otherwise mask marginal treatment effects. All calibration relationships developed using this MOSUM method were quantifiable, strong, and consistent with Nash–Sutcliffe Efficiency (NSE) greater than 0.97 for WTE and NSE greater than 0.92 for daily flow, indicating its applicability for choosing calibration periods of paired watershed studies.
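
    A minimal numpy sketch of the MOSUM idea: recursive residuals (standardized one-step-ahead prediction errors of the control-treatment regression) are accumulated over a moving window, and a shift in the calibration relationship shows up as a large moving sum. The synthetic data, window length, and omission of formal significance bands are simplifications, not the study's procedure.

```python
import numpy as np

def recursive_residuals(X, y):
    """Standardized one-step-ahead prediction errors of an expanding-window OLS fit."""
    n, k = X.shape
    w = []
    for t in range(k, n):
        Xt, yt = X[:t], y[:t]
        beta, *_ = np.linalg.lstsq(Xt, yt, rcond=None)
        xt = X[t]
        denom = np.sqrt(1.0 + xt @ np.linalg.inv(Xt.T @ Xt) @ xt)
        w.append((y[t] - xt @ beta) / denom)
    return np.array(w)

def mosum(w, window=30):
    """Moving sum of recursive residuals, scaled by their standard deviation."""
    sigma = w.std(ddof=1)
    cumulative = np.concatenate(([0.0], np.cumsum(w)))
    return (cumulative[window:] - cumulative[:-window]) / (sigma * np.sqrt(window))

# Synthetic paired-watershed calibration: control WTE predicts treatment WTE,
# with an artificial shift half-way through (e.g. a silvicultural operation).
rng = np.random.default_rng(2)
control = rng.normal(0.0, 0.3, 400)
treatment = 0.2 + 0.9 * control + rng.normal(0, 0.05, 400)
treatment[200:] += 0.15                      # simulated external disturbance

X = np.column_stack([np.ones_like(control), control])
stat = mosum(recursive_residuals(X, treatment))
print("max |MOSUM| statistic:", np.abs(stat).max())
```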

  14. A method for measuring low-weight carboxylic acids from biosolid compost.

    PubMed

    Himanen, Marina; Latva-Kala, Kyösti; Itävaara, Merja; Hänninen, Kari

    2006-01-01

    The concentration of low-weight carboxylic acids (LWCA) is one of the important parameters that should be taken into consideration when compost is applied as a soil improver for plant cultivation, because high amounts of LWCA can be toxic to plants. The present work describes a method for the analysis of LWCA in compost as a useful tool for monitoring compost quality and safety. The method was tested on compost samples of two different ages: 3 (immature) and 6 (mature) months old. Acids from compost samples were extracted at high pH, filtered, and freeze-dried. The dried sodium salts were derivatized with a sulfuric acid-methanol mixture and concentrations of 11 low-weight fatty acids (C1-C10) were analyzed using headspace gas chromatography. The material was analyzed with two analytical techniques: the external calibration method (tested on 11 LWCA) and the standard addition method (tested only on formic, acetic, propionic, butyric, and iso-butyric acids). The two techniques were compared for efficiency of acid quantification. The method allowed good separation and quantification of a wide range of individual acids with high sensitivity at low concentrations. The detection limit for propionic, butyric, caproic, caprylic, and capric acids was 1 mg kg(-1) compost; for formic, acetic, valeric, enanthoic and pelargonic acids it was 5 mg kg(-1) compost; and for iso-butyric acid it was 10 mg kg(-1) compost. Recovery rates of LWCA were higher in 3-mo-old compost (57-99%) than in 6-mo-old compost (29-45%). In comparison with the external calibration technique, the standard addition technique proved to be three to four times more precise for older compost and two times more precise for younger compost. Disadvantages of the standard addition technique are that it is more time-consuming and laborious.
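
    The two quantification strategies compared in this record differ only in how the calibration line is used: external calibration fits detector response against separately prepared standards, while standard addition spikes the sample itself and extrapolates the fitted line to zero response. The numeric sketch below illustrates both calculations; all concentrations and responses are hypothetical and not from the paper.

```python
import numpy as np

# External calibration: fit response vs. concentration of standards
# prepared separately from the sample (hypothetical values).
c_std = np.array([0.0, 5.0, 10.0, 20.0])        # mg kg-1
r_std = np.array([0.02, 0.51, 1.00, 1.98])      # arbitrary response units
slope, intercept = np.polyfit(c_std, r_std, 1)
r_sample = 0.74
c_external = (r_sample - intercept) / slope

# Standard addition: spike known amounts into the sample itself and
# extrapolate the fitted line back to zero response (x-intercept).
c_added = np.array([0.0, 5.0, 10.0, 20.0])      # mg kg-1 added
r_spiked = np.array([0.74, 1.21, 1.69, 2.65])
s2, i2 = np.polyfit(c_added, r_spiked, 1)
c_standard_addition = i2 / s2                   # concentration in the sample

print(round(c_external, 2), round(c_standard_addition, 2))
```

    Standard addition corrects for matrix effects because the calibration is built inside the sample matrix, which is consistent with the higher precision reported here for the more complex (older) compost, at the cost of extra spiking work per sample.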

  15. Novel quantitative calibration approach for multi-configuration electromagnetic induction (EMI) systems using data acquired at multiple elevations

    NASA Astrophysics Data System (ADS)

    Tan, Xihe; Mester, Achim; von Hebel, Christian; van der Kruk, Jan; Zimmermann, Egon; Vereecken, Harry; van Waasen, Stefan

    2017-04-01

    Electromagnetic induction (EMI) systems offer great potential to obtain highly resolved layered electrical conductivity models of the shallow subsurface. State-of-the-art inversion procedures require quantitative calibration of EMI data, especially for short-offset EMI systems where significant data shifts are often observed. These shifts are caused by external influences such as the presence of the operator, zero-leveling procedures, the field setup used to move the EMI system and/or cables close by. Calibrations can be performed by using collocated electrical resistivity measurements or taking soil samples; however, these two methods take a lot of time in the field. To improve the calibration in a fast and concise way, we introduce a novel on-site calibration method using a series of apparent electrical conductivity (ECa) values acquired at multiple elevations for a multi-configuration EMI system. No additional instrument or pre-knowledge of the subsurface is needed to acquire quantitative ECa data. By using this calibration method, we correct each coil configuration, i.e., transmitter and receiver coil separation and the horizontal or vertical coplanar (HCP or VCP) coil orientation, with a unique set of calibration parameters. A multi-layer soil structure at the corresponding measurement location is inverted together with the calibration parameters using full-solution Maxwell equations for the forward modelling within the shuffled complex evolution (SCE) algorithm to find the optimum solution under a user-defined parameter space. Synthetic data verified the feasibility of calibrating HCP and VCP measurements of a custom-made six-coil EMI system with coil offsets between 0.35 m and 1.8 m for quantitative data inversions. As a next step, we applied the calibration approach to acquired experimental data from a bare soil test field (Selhausen, Germany) for the considered EMI system. The obtained calibration parameters were applied to measurements over a 30 m transect line that covers a range of conductivities between 5 and 40 mS/m. Inverted calibrated EMI data of the transect line showed very similar electrical conductivity distributions and layer interfaces of the subsurface compared to reference data obtained from vertical electrical sounding (VES) measurements. These results show that a combined calibration and inversion of multi-configuration EMI data is possible when including measurements at different elevations, which will speed up the measurement process to obtain quantitative EMI data since the labor-intensive electrical resistivity measurement or soil coring is not necessary anymore.

  16. SU-E-T-223: Computed Radiography Dose Measurements of External Radiotherapy Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aberle, C; Kapsch, R

    2015-06-15

    Purpose: To obtain quantitative, two-dimensional dose measurements of external radiotherapy beams with a computed radiography (CR) system and to derive volume correction factors for ionization chambers in small fields. Methods: A commercial Kodak ACR2000i CR system with Kodak Flexible Phosphor Screen HR storage foils was used. Suitable measurement conditions and procedures were established. Several corrections were derived, including image fading, length-scale corrections and long-term stability corrections. Dose calibration curves were obtained for cobalt, 4 MV, 8 MV and 25 MV photons, and for 10 MeV, 15 MeV and 18 MeV electrons in a water phantom. Inherent measurement inhomogeneities were studied, as well as directional dependence of the response. Finally, 2D scans with ionization chambers were directly compared to CR measurements, and volume correction factors were derived. Results: Dose calibration curves (0.01 Gy to 7 Gy) were obtained for multiple photon and electron beam qualities. For each beam quality, the calibration curves can be described by a single fit equation over the whole dose range. The energy dependence of the dose response was determined. The length scale on the images was adjusted scan-by-scan, typically by 2 percent horizontally and by 3 percent vertically. The remaining inhomogeneities after the system’s standard calibration procedure were corrected for. After correction, the homogeneity is on the order of a few percent. The storage foils can be rotated by up to 30 degrees without a significant effect on the measured signal. First results on the determination of volume correction factors were obtained. Conclusion: With CR, quantitative, two-dimensional dose measurements with a high spatial resolution (sub-mm) can be obtained over a large dose range. In order to make use of these advantages, several calibrations, corrections and supporting measurements are needed. This work was funded by the European Metrology Research Programme (EMRP) project HLT09 MetrExtRT Metrology for Radiotherapy using Complex Radiation Fields.

  17. External validation of prognostic models to predict risk of gestational diabetes mellitus in one Dutch cohort: prospective multicentre cohort study.

    PubMed

    Lamain-de Ruiter, Marije; Kwee, Anneke; Naaktgeboren, Christiana A; de Groot, Inge; Evers, Inge M; Groenendaal, Floris; Hering, Yolanda R; Huisjes, Anjoke J M; Kirpestein, Cornel; Monincx, Wilma M; Siljee, Jacqueline E; Van 't Zelfde, Annewil; van Oirschot, Charlotte M; Vankan-Buitelaar, Simone A; Vonk, Mariska A A W; Wiegers, Therese A; Zwart, Joost J; Franx, Arie; Moons, Karel G M; Koster, Maria P H

    2016-08-30

    To perform an external validation and direct comparison of published prognostic models for early prediction of the risk of gestational diabetes mellitus, including predictors applicable in the first trimester of pregnancy. External validation of all published prognostic models was performed in a large-scale, prospective, multicentre cohort study covering 31 independent midwifery practices and six hospitals in the Netherlands. Women were recruited in their first trimester (<14 weeks) of pregnancy between December 2012 and January 2014, at their initial prenatal visit; women with pre-existing diabetes mellitus of any type were excluded. Discrimination of the prognostic models was assessed by the C statistic, and calibration was assessed by calibration plots. 3723 women were included for analysis, of whom 181 (4.9%) developed gestational diabetes mellitus in pregnancy. 12 prognostic models for the disorder could be validated in the cohort. C statistics ranged from 0.67 to 0.78. Calibration plots showed that eight of the 12 models were well calibrated. The four models with the highest C statistics included almost all of the following predictors: maternal age, maternal body mass index, history of gestational diabetes mellitus, ethnicity, and family history of diabetes. Prognostic models had a similar performance in a subgroup of nulliparous women only. Decision curve analysis showed that the use of these four models always had a positive net benefit. In this external validation study, most of the published prognostic models for gestational diabetes mellitus show acceptable discrimination and calibration. The four models with the highest discriminative abilities in this study cohort, which also perform well in a subgroup of nulliparous women, are easy models to apply in clinical practice and therefore deserve further evaluation regarding their clinical impact. Published by the BMJ Publishing Group Limited.

  18. Inflatable bladder provides accurate calibration of pressure switch

    NASA Technical Reports Server (NTRS)

    Smith, N. J.

    1965-01-01

    Calibration of a pressure switch is accurately checked by a thin-walled circular bladder. It is placed in the pressure switch and applies force to the switch diaphragm when expanded by an external pressure source. The disturbance to the normal operation of the switch is minimal.

  19. Design, calibration and validation of a novel 3D printed instrumented spatial linkage that measures changes in the rotational axes of the tibiofemoral joint.

    PubMed

    Bonny, Daniel P; Hull, M L; Howell, S M

    2014-01-01

    An accurate axis-finding technique is required to measure any changes from normal caused by total knee arthroplasty in the flexion-extension (F-E) and longitudinal rotation (LR) axes of the tibiofemoral joint. In a previous paper, we computationally determined how best to design and use an instrumented spatial linkage (ISL) to locate the F-E and LR axes such that rotational and translational errors were minimized. However, the ISL was not built and consequently was not calibrated; thus the errors in locating these axes were not quantified on an actual ISL. Moreover, previous methods to calibrate an ISL used calibration devices with accuracies that were either undocumented or insufficient for the device to serve as a gold-standard. Accordingly, the objectives were to (1) construct an ISL using the previously established guidelines,(2) calibrate the ISL using an improved method, and (3) quantify the error in measuring changes in the F-E and LR axes. A 3D printed ISL was constructed and calibrated using a coordinate measuring machine, which served as a gold standard. Validation was performed using a fixture that represented the tibiofemoral joint with an adjustable F-E axis and the errors in measuring changes to the positions and orientations of the F-E and LR axes were quantified. The resulting root mean squared errors (RMSEs) of the calibration residuals using the new calibration method were 0.24, 0.33, and 0.15 mm for the anterior-posterior, medial-lateral, and proximal-distal positions, respectively, and 0.11, 0.10, and 0.09 deg for varus-valgus, flexion-extension, and internal-external orientations, respectively. All RMSEs were below 0.29% of the respective full-scale range. When measuring changes to the F-E or LR axes, each orientation error was below 0.5 deg; when measuring changes in the F-E axis, each position error was below 1.0 mm. The largest position RMSE was when measuring a medial-lateral change in the LR axis (1.2 mm). Despite the large size of the ISL, these calibration residuals were better than those for previously published ISLs, particularly when measuring orientations, indicating that using a more accurate gold standard was beneficial in limiting the calibration residuals. The validation method demonstrated that this ISL is capable of accurately measuring clinically important changes (i.e. 1 mm and 1 deg) in the F-E and LR axes.

  20. Biogeographic Dating of Speciation Times Using Paleogeographically Informed Processes

    PubMed Central

    Landis, Michael J.

    2017-01-01

    Abstract Standard models of molecular evolution cannot estimate absolute speciation times alone, and require external calibrations to do so, such as fossils. Because fossil calibration methods rely on the incomplete fossil record, a great number of nodes in the tree of life cannot be dated precisely. However, many major paleogeographical events are dated, and since biogeographic processes depend on paleogeographical conditions, biogeographic dating may be used as an alternative or complementary method to fossil dating. I demonstrate how a time-stratified biogeographic stochastic process may be used to estimate absolute divergence times by conditioning on dated paleogeographical events. Informed by the current paleogeographical literature, I construct an empirical dispersal graph using 25 areas and 26 epochs for the past 540 Ma of Earth’s history. Simulations indicate biogeographic dating performs well so long as paleogeography imposes constraint on biogeographic character evolution. To gauge whether biogeographic dating may be of practical use, I analyzed the well-studied turtle clade (Testudines) to assess how well biogeographic dating fares when compared to fossil-calibrated dating estimates reported in the literature. Fossil-free biogeographic dating estimated the age of the most recent common ancestor of extant turtles to be from the Late Triassic, which is consistent with fossil-based estimates. Dating precision improves further when including a root node fossil calibration. The described model, paleogeographical dispersal graph, and analysis scripts are available for use with RevBayes. PMID:27155009

  1. Calibration of hyperspectral data aviation mode according with accompanying ground-based measurements of standard surfaces of observed scenes

    NASA Astrophysics Data System (ADS)

    Ostrikov, V. N.; Plakhotnikov, O. V.

    2014-12-01

    Using considerable experimental material, we examine whether it is possible to recalculate the initial data of hyperspectral aircraft survey into spectral radiance factors (SRF). The errors of external calibration for various observation conditions and different instruments for data receiving are estimated.

  2. Improvement in QEPAS system utilizing a second harmonic based wavelength calibration technique

    NASA Astrophysics Data System (ADS)

    Zhang, Qinduan; Chang, Jun; Wang, Fupeng; Wang, Zongliang; Xie, Yulei; Gong, Weihua

    2018-05-01

    A simple laser wavelength calibration technique based on the second harmonic signal is demonstrated in this paper to improve the performance of a quartz enhanced photoacoustic spectroscopy (QEPAS) gas sensing system, e.g. its signal to noise ratio (SNR), detection limit and long-term stability. A constant current corresponding to the gas absorption line, combined with an f/2 sinusoidal modulation signal, is used to drive the laser (constant driving mode), and a software-based real-time wavelength calibration technique is developed to eliminate the wavelength drift caused by ambient fluctuations. Compared to conventional wavelength modulation spectroscopy (WMS), this method allows a lower filtering bandwidth and an averaging algorithm to be applied to the QEPAS system, improving the SNR and detection limit. In addition, the real-time wavelength calibration technique guarantees that the laser output remains modulated at the gas absorption line. Water vapor is chosen as the target gas to evaluate the performance of the new approach against the constant driving mode and a conventional WMS system. The water vapor sensor was made insensitive to incoherent external acoustic noise by the numerical averaging technique. As a result, the SNR of the wavelength-calibrated system is 12.87 times higher than that of the conventional WMS system. The new system achieved a better linear response (R2 = 0.9995) in the concentration range from 300 to 2000 ppmv and a minimum detection limit (MDL) of 630 ppbv.
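
    A common way to implement a software wavelength calibration of this kind is to exploit the fact that the second-harmonic (2f) signal peaks at the absorption-line centre, so the DC laser current can be periodically re-centred on that peak. The sketch below is only a minimal illustration of that idea, not the authors' algorithm; the sweep range, currents and noise level are hypothetical.

```python
import numpy as np

def recenter_laser_current(currents_mA, second_harmonic):
    """Return the DC current at which the 2f signal peaks.
    A local quadratic fit around the maximum gives sub-step resolution."""
    second_harmonic = np.asarray(second_harmonic, dtype=float)
    k = int(np.argmax(second_harmonic))
    lo, hi = max(k - 2, 0), min(k + 3, len(currents_mA))
    a, b, _ = np.polyfit(currents_mA[lo:hi], second_harmonic[lo:hi], 2)
    return -b / (2.0 * a)                      # vertex of the fitted parabola

# Hypothetical sweep: the 2f amplitude is largest near the line centre (~80.3 mA).
rng = np.random.default_rng(0)
i_sweep = np.linspace(79.0, 81.0, 41)          # mA
sig_2f = np.exp(-((i_sweep - 80.3) / 0.2) ** 2) + 0.01 * rng.standard_normal(41)
print(recenter_laser_current(i_sweep, sig_2f))  # close to 80.3 mA
```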

  3. Calibration and optimization of an x-ray bendable mirror using displacement-measuring sensors.

    PubMed

    Vannoni, Maurizio; Martín, Idoia Freijo; Music, Valerija; Sinn, Harald

    2016-07-25

    We propose a method to control and to adjust in a closed-loop a bendable x-ray mirror using displacement-measuring devices. For this purpose, the usage of capacitive and interferometric sensors is investigated and compared. We installed the sensors in a bender setup and used them to continuously measure the position and shape of the mirror in the lab. The sensors are vacuum-compatible such that the same concept can also be applied in final conditions. The measurement is used to keep the calibration of the system and to create a closed-loop control compensating for external influences: in a demonstration measurement, using a 950 mm long bendable mirror, the mirror sagitta is kept stable inside a range of 10 nm Peak-To-Valley (P-V).

  4. Application of Multivariable Analysis and FTIR-ATR Spectroscopy to the Prediction of Properties in Campeche Honey

    PubMed Central

    Pat, Lucio; Ali, Bassam; Guerrero, Armando; Córdova, Atl V.; Garduza, José P.

    2016-01-01

    Attenuated total reflectance-Fourier transform infrared spectrometry combined with a chemometrics model was used for the determination of physicochemical properties (pH, redox potential, free acidity, electrical conductivity, moisture, total soluble solids (TSS), ash, and HMF) in honey samples. The reference values of 189 honey samples of different botanical origin were determined using Association of Official Analytical Chemists (AOAC, 1990), Codex Alimentarius (2001), and International Honey Commission (2002) methods. Multivariate calibration models were built using partial least squares (PLS) for the measurands studied. The developed models were validated using cross-validation and external validation; several statistical parameters were obtained to determine the robustness of the calibration models: the optimum number of principal components (PCs), the standard error of cross-validation (SECV), the coefficient of determination of cross-validation (R2cal), the standard error of validation (SEP), the coefficient of determination for external validation (R2val), and the coefficient of variation (CV). The prediction accuracy for pH, redox potential, electrical conductivity, moisture, TSS, and ash was good, while for free acidity and HMF it was poor. The results demonstrate that attenuated total reflectance-Fourier transform infrared spectrometry is a valuable, rapid, and nondestructive tool for the quantification of physicochemical properties of honey. PMID:28070445

  5. Recent advances with quiescent power supply current (I(sub DDQ)) testing at Sandia using the HP82000

    NASA Astrophysics Data System (ADS)

    Righter, A. W.; Leong, D. J.; Cox, L. B.

    Last year at the HP82000 Users Group Meeting, Sandia National Laboratories gave a presentation on I(sub DDQ) testing. This year, some advances in this testing are presented, including DUT board fixturing, external DC PMU measurement, and automatic IDD-All circuit calibration. Implementation is examined more than theory, with results presented from Sandia tests. After a brief summary of I(sub DDQ) theory and testing concepts, it is described how the break (hold state) vector and data formatting present a test vector generation concern for the HP82000. Fixturing of the DUT board for both types of I(sub DDQ) measurement is then discussed, along with how the continuity test and test vector generation must be taken into account. Results of a test including continuity, IDD-All and I(sub DDQ) value measurements are shown. Next, measurement of low current using an external PMU is discussed, including noise considerations, implementation and some test results showing nA-range measurements. A method is presented for automatic calibration of the IDD-All analog comparator circuit using RM BASIC on the HP82000, with implementation and measurement results. Finally, future directions for research in this area are explored.

  6. Calibrator device for the extrusion of cable coatings

    NASA Astrophysics Data System (ADS)

    Garbacz, Tomasz; Dulebová, Ľudmila; Spišák, Emil; Dulebová, Martina

    2016-05-01

    This paper presents selected results of theoretical and experimental research on new calibration devices (calibrators) used to produce coatings of electric cables. The aim of this study is to present the design of the calibration equipment and a new calibration machine, which is an important element of modernized extrusion lines for coating cables. As a result of the extrusion of PVC modified with blowing agents, an extrudate in the form of an electrical cable was obtained. The conditions of the extrusion process were properly selected, which made it possible to obtain a product with a solid external surface and a cellular core.

  7. Electrical network method for the thermal or structural characterization of a conducting material sample or structure

    DOEpatents

    Ortiz, Marco G.

    1993-01-01

    A method for modeling a conducting material sample or structure system, as an electrical network of resistances in which each resistance of the network is representative of a specific physical region of the system. The method encompasses measuring a resistance between two external leads and using this measurement in a series of equations describing the network to solve for the network resistances for a specified region and temperature. A calibration system is then developed using the calculated resistances at specified temperatures. This allows for the translation of the calculated resistances to a region temperature. The method can also be used to detect and quantify structural defects in the system.
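
    The calibration step described here amounts to building a resistance-versus-temperature relation for a region at known temperatures and later inverting it so that a resistance obtained from the network equations can be translated into a region temperature. The sketch below illustrates only that translation step under an assumed linear R(T) relation; the resistances and temperatures are hypothetical, not from the patent.

```python
import numpy as np

# Calibration: region resistances computed from the network equations at
# known temperatures (hypothetical values for illustration).
temps_C = np.array([25.0, 100.0, 200.0, 300.0, 400.0])
region_R_ohm = np.array([1.00, 1.29, 1.68, 2.07, 2.46])

# Fit R(T) with a straight line, then invert it so a resistance obtained
# later can be translated to a region temperature.
slope, intercept = np.polyfit(temps_C, region_R_ohm, 1)

def temperature_from_resistance(r_ohm):
    """Translate a calculated region resistance into a temperature."""
    return (r_ohm - intercept) / slope

print(round(temperature_from_resistance(1.85), 1), "deg C")
```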

  8. Electrical network method for the thermal or structural characterization of a conducting material sample or structure

    DOEpatents

    Ortiz, M.G.

    1993-06-08

    A method for modeling a conducting material sample or structure system, as an electrical network of resistances in which each resistance of the network is representative of a specific physical region of the system. The method encompasses measuring a resistance between two external leads and using this measurement in a series of equations describing the network to solve for the network resistances for a specified region and temperature. A calibration system is then developed using the calculated resistances at specified temperatures. This allows for the translation of the calculated resistances to a region temperature. The method can also be used to detect and quantify structural defects in the system.

  9. Prediction models for successful external cephalic version: a systematic review.

    PubMed

    Velzel, Joost; de Hundt, Marcella; Mulder, Frederique M; Molkenboer, Jan F M; Van der Post, Joris A M; Mol, Ben W; Kok, Marjolein

    2015-12-01

    To provide an overview of existing prediction models for successful ECV, and to assess their quality, development and performance. We searched MEDLINE, EMBASE and the Cochrane Library to identify all articles reporting on prediction models for successful ECV published from inception to January 2015. We extracted information on study design, sample size, model-building strategies and validation. We evaluated the phases of model development and summarized their performance in terms of discrimination, calibration and clinical usefulness. We collected different predictor variables together with their defined significance, in order to identify important predictor variables for successful ECV. We identified eight articles reporting on seven prediction models. All models were subjected to internal validation. Only one model was also validated in an external cohort. Two prediction models had a low overall risk of bias, of which only one showed promising predictive performance at internal validation. This model also completed the phase of external validation. For none of the models was their impact on clinical practice evaluated. The most important predictor variables for successful ECV described in the selected articles were parity, placental location, breech engagement and the fetal head being palpable. One model was assessed using discrimination and calibration using internal (AUC 0.71) and external validation (AUC 0.64), while two other models were assessed with discrimination and calibration, respectively. We found one prediction model for breech presentation that was validated in an external cohort and had acceptable predictive performance. This model should be used to counsel women considering ECV. Copyright © 2015. Published by Elsevier Ireland Ltd.

  10. Calibration and deployment of a new NIST transfer radiometer for broadband and spectral calibration of space chambers (MDXR)

    NASA Astrophysics Data System (ADS)

    Jung, Timothy M.; Carter, Adriaan C.; Woods, Solomon I.; Kaplan, Simon G.

    2011-06-01

    The Low-Background Infrared (LBIR) facility at NIST has performed on-site calibration and initial off-site deployments of a new infrared transfer radiometer with an integrated cryogenic Fourier transform spectrometer (Cryo- FTS). This mobile radiometer can be deployed to customer sites for broadband and spectral calibrations of space chambers and low-background hardware-in-the-loop testbeds. The Missile Defense Transfer Radiometer (MDXR) has many of the capabilities of a complete IR calibration facility and replaces our existing filter-based transfer radiometer (BXR) as the NIST standard detector deployed to customer facilities. The MDXR features numerous improvements over the BXR, including: a cryogenic Fourier transform spectrometer, an on-board absolute cryogenic radiometer (ACR) and an internal blackbody reference source with an integrated collimator. The Cryo-FTS can be used to measure high resolution spectra from 3 to 28 micrometers, using a Si:As blocked-impurity-band (BIB) detector. The on-board ACR can be used for self-calibration of the MDXR BIB as well as for absolute measurements of external infrared sources. A set of filter wheels and a rotating polarizer within the MDXR allow for filter-based and polarization-sensitive measurements. The optical design of the MDXR makes both radiance and irradiance measurements possible and enables calibration of both divergent and collimated sources. Results of on-site calibration of the MDXR using its internal blackbody source and an external reference source will be discussed, as well as the performance of the new radiometer in its initial deployments to customer sites.

  11. On the Long-Term Stability of Microwave Radiometers Using Noise Diodes for Calibration

    NASA Technical Reports Server (NTRS)

    Brown, Shannon T.; Desai, Shailen; Lu, Wenwen; Tanner, Alan B.

    2007-01-01

    Results are presented from the long-term monitoring and calibration of the National Aeronautics and Space Administration Jason Microwave Radiometer (JMR) on the Jason-1 ocean altimetry satellite and the ground-based Advanced Water Vapor Radiometers (AWVRs) developed for the Cassini Gravity Wave Experiment. Both radiometers retrieve the wet tropospheric path delay (PD) of the atmosphere and use internal noise diodes (NDs) for gain calibration. The JMR is the first radiometer to be flown in space that uses NDs for calibration. External calibration techniques are used to derive a time series of ND brightness for both instruments that is greater than four years. For the JMR, an optimal estimator is used to find the set of calibration coefficients that minimize the root-mean-square difference between the JMR brightness temperatures and the on-Earth hot and cold references. For the AWVR, continuous tip curves are used to derive the ND brightness. For the JMR and AWVR, both of which contain three redundant NDs per channel, it was observed that some NDs were very stable, whereas others experienced jumps and drifts in their effective brightness. Over the four-year time period, the ND stability ranged from 0.2% to 3% among the diodes for both instruments. The presented recalibration methodology demonstrates that long-term calibration stability can be achieved with frequent recalibration of the diodes using external calibration techniques. The JMR PD drift compared to ground truth over the four years since the launch was reduced from 3.9 to - 0.01 mm/year with the recalibrated ND time series. The JMR brightness temperature calibration stability is estimated to be 0.25 K over ten days.
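
    The recalibration idea described here, choosing calibration coefficients that minimize the RMS difference between calibrated brightness temperatures and known hot/cold references, can be illustrated with a generic two-point radiometer model. The sketch below is not the JMR algorithm; the gain/offset model, the noise-diode term and all counts are hypothetical, and it only shows how a least-squares estimator recovers an effective noise-diode brightness from reference views.

```python
import numpy as np
from scipy.optimize import least_squares

# Assumed radiometer model: counts = g * (T_scene + T_nd * nd_on) + c, so the
# calibrated brightness temperature is T = (counts - c) / g - T_nd * nd_on.
counts = np.array([5200.0, 5950.0, 9100.0, 9840.0])   # hypothetical raw counts
nd_on = np.array([0.0, 1.0, 0.0, 1.0])                # noise diode fired or not
t_ref = np.array([2.7, 2.7, 300.0, 300.0])            # cold sky / hot reference, K

def residuals(p):
    g, c, t_nd = p
    t_cal = (counts - c) / g - t_nd * nd_on
    return t_cal - t_ref                              # minimized in RMS sense

fit = least_squares(residuals, x0=[13.0, 5000.0, 50.0])
g, c, t_nd = fit.x
print(round(t_nd, 1), "K effective noise-diode brightness")
```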

  12. Invasive and non-invasive measurement in medicine and biology: calibration issues

    NASA Astrophysics Data System (ADS)

    Rolfe, P.; Zhang, Yan; Sun, Jinwei; Scopesi, F.; Serra, G.; Yamakoshi, K.; Tanaka, S.; Yamakoshi, T.; Yamakoshi, Y.; Ogawa, M.

    2010-08-01

    Invasive and non-invasive measurement sensors and systems perform vital roles in medical care. Devices are based on various principles, including optics, photonics, and plasmonics, electro-analysis, magnetics, acoustics, bio-recognition, etc. Sensors may be inserted directly into the human body, for example to be in contact with blood, which constitutes Invasive Measurement. This approach is very challenging technically, as sensor performance (sensitivity, response time, linearity) can deteriorate due to interactions between the sensor materials and the biological environment, such as blood or interstitial fluid. Invasive techniques may also be potentially hazardous. Alternatively, sensors or devices may be positioned external to the body surface, for example to analyse respired breath, thereby allowing safer Non-Invasive Measurement. However, such methods are inherently less direct and often require more complex calibration algorithms, perhaps based on chemometric principles. This paper considers and reviews the issue of calibration in both invasive and non-invasive biomedical measurement systems. Systems in current use usually rely upon periodic calibration checks being performed by clinical staff against a variety of laboratory instruments and QC samples. These procedures require careful planning and overall management if reliable data are to be assured.

  13. External Validation of a Case-Mix Adjustment Model for the Standardized Reporting of 30-Day Stroke Mortality Rates in China

    PubMed Central

    Yu, Ping; Pan, Yuesong; Wang, Yongjun; Wang, Xianwei; Liu, Liping; Ji, Ruijun; Meng, Xia; Jing, Jing; Tong, Xu; Guo, Li; Wang, Yilong

    2016-01-01

    Background and Purpose A case-mix adjustment model has been developed and externally validated, demonstrating promise. However, the model has not been thoroughly tested among populations in China. In our study, we evaluated the performance of the model in Chinese patients with acute stroke. Methods The case-mix adjustment model A includes items on age, presence of atrial fibrillation on admission, National Institutes of Health Stroke Severity Scale (NIHSS) score on admission, and stroke type. Model B is similar to Model A but includes only the consciousness component of the NIHSS score. Both model A and B were evaluated to predict 30-day mortality rates in 13,948 patients with acute stroke from the China National Stroke Registry. The discrimination of the models was quantified by c-statistic. Calibration was assessed using Pearson’s correlation coefficient. Results The c-statistic of model A in our external validation cohort was 0.80 (95% confidence interval, 0.79–0.82), and the c-statistic of model B was 0.82 (95% confidence interval, 0.81–0.84). Excellent calibration was reported in the two models with Pearson’s correlation coefficient (0.892 for model A, p<0.001; 0.927 for model B, p = 0.008). Conclusions The case-mix adjustment model could be used to effectively predict 30-day mortality rates in Chinese patients with acute stroke. PMID:27846282
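
    The two performance measures used in this record are the c-statistic (the probability that a patient who died received a higher predicted risk than one who survived) and a correlation-based calibration check between predicted and observed event rates across risk groups. The sketch below illustrates both computations on hypothetical data; it is not the study's analysis code and the numbers are placeholders.

```python
import numpy as np

def c_statistic(risk, outcome):
    """Concordance: fraction of (event, non-event) pairs in which the event
    case has the higher predicted risk (ties count one half)."""
    risk, outcome = np.asarray(risk, float), np.asarray(outcome, int)
    cases, controls = risk[outcome == 1], risk[outcome == 0]
    diff = cases[:, None] - controls[None, :]
    return (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / diff.size

# Hypothetical predicted 30-day mortality risks (1 = died within 30 days).
pred = np.array([0.05, 0.10, 0.20, 0.40, 0.60, 0.80])
obs = np.array([0, 0, 0, 1, 0, 1])
print(round(c_statistic(pred, obs), 2))

# Calibration: Pearson correlation between mean predicted and observed
# mortality per risk group (hypothetical group summaries).
group_pred = np.array([0.02, 0.04, 0.07, 0.15])
group_obs = np.array([0.01, 0.05, 0.06, 0.17])
print(round(np.corrcoef(group_pred, group_obs)[0, 1], 3))
```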

  14. An External Matrix-Assisted Laser Desorption Ionization Source for Flexible FT-ICR Mass Spectrometry Imaging with Internal Calibration on Adjacent Samples

    NASA Astrophysics Data System (ADS)

    Smith, Donald F.; Aizikov, Konstantin; Duursma, Marc C.; Giskes, Frans; Spaanderman, Dirk-Jan; McDonnell, Liam A.; O'Connor, Peter B.; Heeren, Ron M. A.

    2011-01-01

    We describe the construction and application of a new MALDI source for FT-ICR mass spectrometry imaging. The source includes a translational X-Y positioning stage with a 10 × 10 cm range of motion for analysis of large sample areas, a quadrupole for mass selection, and an external octopole ion trap with electrodes for the application of an axial potential gradient for controlled ion ejection. An off-line LC MALDI MS/MS run demonstrates the utility of the new source for data- and position-dependent experiments. A FT-ICR MS imaging experiment of a coronal rat brain section yields ˜200 unique peaks from m/z 400-1100 with corresponding mass-selected images. Mass spectra from every pixel are internally calibrated with respect to polymer calibrants collected from an adjacent slide.

  15. Scanning Raman lidar for tropospheric water vapor profiling and GPS path delay correction

    NASA Astrophysics Data System (ADS)

    Tarniewicz, Jerome; Bock, Olivier; Pelon, Jacques R.; Thom, Christian

    2002-01-01

    The design of a ground based and transportable combined Raman elastic-backscatter lidar for the remote sensing of lower tropospheric water vapor and nitrogen concentration is described. This lidar is intended to be used for an external calibration of the wet path delay of GPS signals. A description of the method used to derive water vapor and nitrogen profiles in the lower troposphere is given. The instrument has been tested during the ESCOMPTE campaign in June 2001 and first measurements are presented.

  16. A method of non-contact reading code based on computer vision

    NASA Astrophysics Data System (ADS)

    Zhang, Chunsen; Zong, Xiaoyu; Guo, Bingxuan

    2018-03-01

    To guarantee secure computer information exchange between an internal and an external network (a trusted and an un-trusted network), a non-contact code-reading method based on machine vision is proposed, which differs from existing physical network isolation methods. Using computer monitors, a camera and other equipment, the information to be exchanged is processed through image coding, generation of a standard image, display and capture of the actual image, calculation of a homography matrix, and image distortion correction and decoding with calibration. This achieves secure, non-contact, one-way transmission of computer information between the internal and external networks. The effectiveness of the proposed method is verified by experiments on real computer text data; a data transfer speed of 24 kb/s is achieved. The experiments show that the algorithm offers high security, fast transfer and little loss of information, so it can meet the daily needs of confidentiality departments to update data effectively and reliably, solving the difficulty of computer information exchange between secret and non-secret networks with distinctive originality, practicability and practical research value.
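
    The geometric part of such a pipeline, estimating a homography from the detected corners of the displayed code region and warping the camera frame back to a fronto-parallel image before decoding, can be sketched with OpenCV as below. This is only an illustration of the homography/distortion-correction step, not the paper's implementation; the corner coordinates are placeholders that would normally come from a detection stage.

```python
import cv2
import numpy as np

def rectify_screen_region(frame, corners_px, out_size=(800, 800)):
    """Warp the quadrilateral screen region seen by the camera to a
    fronto-parallel image, so the displayed code can then be decoded."""
    dst = np.array([[0, 0],
                    [out_size[0] - 1, 0],
                    [out_size[0] - 1, out_size[1] - 1],
                    [0, out_size[1] - 1]], dtype=np.float32)
    H, _ = cv2.findHomography(np.asarray(corners_px, dtype=np.float32), dst)
    return cv2.warpPerspective(frame, H, out_size)

# Hypothetical usage: corners of the monitor region detected in a 720p frame.
frame = np.zeros((720, 1280, 3), dtype=np.uint8)
corners = [(310, 150), (965, 180), (940, 620), (290, 590)]
flat = rectify_screen_region(frame, corners)
# A decoding step (e.g. cv2.QRCodeDetector().detectAndDecode(flat)) would follow.
print(flat.shape)
```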

  17. Co-Certification: A New Direction for External Assessment?

    ERIC Educational Resources Information Center

    Newbold, David

    2009-01-01

    The major European testing agencies have calibrated their exams to the levels of language proficiency described in the Common European Framework (CEFR). In Italy, where the Framework has been enthusiastically embraced, external exams are now frequently used within the state education system as they are believed to provide reliable, widely…

  18. Signal acquisition and scale calibration for beam power density distribution of electron beam welding

    NASA Astrophysics Data System (ADS)

    Peng, Yong; Li, Hongqiang; Shen, Chunlong; Guo, Shun; Zhou, Qi; Wang, Kehong

    2017-06-01

    The power density distribution of electron beam welding (EBW) is a key factor to reflect the beam quality. The beam quality test system was designed for the actual beam power density distribution of high-voltage EBW. After the analysis of characteristics and phase relationship between the deflection control signal and the acquisition signal, the Post-Trigger mode was proposed for the signal acquisition meanwhile the same external clock source was shared by the control signal and the sampling clock. The power density distribution of beam cross-section was reconstructed using one-dimensional signal that was processed by median filtering, twice signal segmentation and spatial scale calibration. The diameter of beam cross-section was defined by amplitude method and integral method respectively. The measured diameter of integral definition is bigger than that of amplitude definition, but for the ideal distribution the former is smaller than the latter. The measured distribution without symmetrical shape is not concentrated compared to Gaussian distribution.

  19. Quantitative bioimaging of p-boronophenylalanine in thin liver tissue sections as a tool for treatment planning in boron neutron capture therapy.

    PubMed

    Reifschneider, Olga; Schütz, Christian L; Brochhausen, Christoph; Hampel, Gabriele; Ross, Tobias; Sperling, Michael; Karst, Uwe

    2015-03-01

    An analytical method using laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS) was developed and applied to assess enrichment of 10B-containing p-boronophenylalanine-fructose (BPA-f) and its pharmacokinetic distribution in human tissues after application for boron neutron capture therapy (BNCT). High spatial resolution (50 μm) and limits of detection in the low parts-per-billion range were achieved using a Nd:YAG laser of 213 nm wavelength. External calibration by means of 10B-enriched standards based on whole blood proved to yield precise quantification results. Using this calibration method, quantification of 10B in cancerous and healthy tissue was carried out. Additionally, the distribution of 11B was investigated, providing 10B enrichment in the investigated tissues. Quantitative imaging of 10B by means of LA-ICP-MS was demonstrated as a new option to characterise the efficacy of boron compounds for BNCT.

  20. Rapid screening of selective serotonin re-uptake inhibitors in urine samples using solid-phase microextraction gas chromatography-mass spectrometry.

    PubMed

    Salgado-Petinal, Carmen; Lamas, J Pablo; Garcia-Jares, Carmen; Llompart, Maria; Cela, Rafael

    2005-07-01

    In this paper a solid-phase microextraction-gas chromatography-mass spectrometry (SPME-GC-MS) method is proposed for a rapid analysis of some frequently prescribed selective serotonin re-uptake inhibitors (SSRI)-venlafaxine, fluvoxamine, mirtazapine, fluoxetine, citalopram, and sertraline-in urine samples. The SPME-based method enables simultaneous determination of the target SSRI after simple in-situ derivatization of some of the target compounds. Calibration curves in water and in urine were validated and statistically compared. This revealed the absence of matrix effect and, in consequence, the possibility of quantifying SSRI in urine samples by external water calibration. Intra-day and inter-day precision was satisfactory for all the target compounds (relative standard deviation, RSD, <14%) and the detection limits achieved were <0.4 ng mL(-1) urine. The time required for the SPME step and for GC analysis (30 min each) enables high throughput. The method was applied to real urine samples from different patients being treated with some of these pharmaceuticals. Some SSRI metabolites were also detected and tentatively identified.

  1. Comparison of droplet digital PCR with quantitative real-time PCR for determination of zygosity in transgenic maize.

    PubMed

    Xu, Xiaoli; Peng, Cheng; Wang, Xiaofu; Chen, Xiaoyun; Wang, Qiang; Xu, Junfeng

    2016-12-01

    This study evaluated the applicability of droplet digital PCR (ddPCR) as a tool for maize zygosity determination using quantitative real-time PCR (qPCR) as a reference technology. Quantitative real-time PCR is commonly used to determine transgene copy number or GMO zygosity characterization. However, its effectiveness is based on identical reaction efficiencies for the transgene and the endogenous reference gene. Additionally, a calibrator sample should be utilized for accuracy. Droplet digital PCR is a DNA molecule counting technique that directly counts the absolute number of target and reference DNA molecules in a sample, independent of assay efficiency or external calibrators. The zygosity of the transgene can be easily determined using the ratio of the quantity of the target gene to the reference single copy endogenous gene. In this study, both the qPCR and ddPCR methods were used to determine insect-resistant transgenic maize IE034 zygosity. Both methods performed well, but the ddPCR method was more convenient because of its absolute quantification property.
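
    The ratio test described here is simple to express: once droplet counts are converted to absolute copies, the target/reference ratio is close to 1.0 for a homozygous plant and close to 0.5 for a hemizygous one, assuming a diploid genome, a single-copy endogenous reference gene and a single insertion locus. The sketch below illustrates that classification; the tolerance and the copy numbers are hypothetical, not values from the study.

```python
def call_zygosity(target_copies_per_ul, reference_copies_per_ul, tol=0.15):
    """Classify zygosity from a ddPCR target/reference copy ratio.
    Assumes a diploid genome, a single-copy endogenous reference gene
    and a single transgene insertion locus."""
    ratio = target_copies_per_ul / reference_copies_per_ul
    if abs(ratio - 1.0) <= tol:
        return "homozygous", round(ratio, 2)
    if abs(ratio - 0.5) <= tol:
        return "hemizygous", round(ratio, 2)
    return "inconclusive", round(ratio, 2)

# Hypothetical absolute quantifications from two maize samples.
print(call_zygosity(1480.0, 1510.0))   # ratio ~0.98 -> homozygous
print(call_zygosity(760.0, 1490.0))    # ratio ~0.51 -> hemizygous
```

    Because ddPCR yields absolute copy numbers per partition volume, this ratio needs neither identical amplification efficiencies nor an external calibrator sample, which is the practical advantage highlighted in the abstract.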

  2. Assessment of strobilurin fungicides' content in soya-based drinks by liquid micro-extraction and liquid chromatography with tandem mass spectrometry.

    PubMed

    Campillo, Natalia; Iniesta, María Jesús; Viñas, Pilar; Hernández-Córdoba, Manuel

    2015-01-01

    Seven strobilurin fungicides were pre-concentrated from soya-based drinks using dispersive liquid-liquid micro-extraction (DLLME) with a prior protein precipitation step in acid medium. The enriched phase was analysed by liquid chromatography (LC) with dual detection, using diode array detection (DAD) and electrospray-ion trap tandem mass spectrometry (ESI-IT-MS/MS). After selecting 1-undecanol and methanol as the extractant and disperser solvents, respectively, for DLLME, the Taguchi experimental method, an orthogonal array design, was applied to select the optimal solvent volumes and salt concentration in the aqueous phase. The matrix effect was evaluated and quantification was carried out using external aqueous calibration for DAD and matrix-matched calibration method for MS/MS. Detection limits in the 4-130 and 0.8-4.5 ng g(-1) ranges were obtained for DAD and MS/MS, respectively. The DLLME-LC-DAD-MS method was applied to the analysis of 10 different samples, none of which was found to contain residues of the studied fungicides.

  3. The Identification and Quantification of Suberin Monomers of Root and Tuber Periderm from Potato (Solanum tuberosum) as Fatty Acyl tert-Butyldimethylsilyl Derivatives.

    PubMed

    Company-Arumí, Dolors; Figueras, Mercè; Salvadó, Victoria; Molinas, Marisa; Serra, Olga; Anticó, Enriqueta

    2016-11-01

    Protective plant lipophilic barriers such as suberin and cutin, with their associated waxes, are complex fatty acyl derived polyesters. Their precise chemical composition is valuable for understanding the specific contribution of each compound to the physiological function of the barrier. The aim was to develop a method for the compositional analysis of suberin and associated waxes by gas chromatography (GC) coupled to ion trap-mass spectrometry (IT-MS) using N-(tert-butyldimethylsilyl)-N-methyl-trifluoroacetamide (MTBSTFA) as the silylating reagent, and to apply it to compare the suberin of the root and tuber periderm of potato (Solanum tuberosum). Waxes and suberin monomers from root and periderm were extracted successively using organic solvents and by methanolysis, and subjected to MTBSTFA derivatisation. GC analyses of periderm extracts were used to optimise the chromatographic method and the compound identification. Quantitative data were obtained using external calibration curves. The method was fully validated and applied for suberin composition analyses of roots and periderm. Wax and suberin compounds were successfully separated and compound identification was based on the specific (M-57) and non-specific ions in mass spectra. The use of calibration curves built with different external standards provided quantitatively accurate data and showed that suberin from root contains shorter-chain fatty acyl derivatives and a relative predominance of α,ω-alkanedioic acids compared to that of the periderm. We present a method for the analysis of suberin and its associated waxes based on MTBSTFA derivatisation. Moreover, the characteristic root suberin composition may be the adaptive response to its specific regulation of permeability to water and gases. Copyright © 2016 John Wiley & Sons, Ltd.

  4. Challenges in the Development of a Self-Calibrating Network of Ceilometers.

    NASA Astrophysics Data System (ADS)

    Hervo, Maxime; Wagner, Frank; Mattis, Ina; Baars, Holger; Haefele, Alexander

    2015-04-01

    There are more than 700 Automatic Lidars and Ceilometers (ALCs) currently operating in Europe. Modern ceilometers can do more than simply measure the cloud base height. They can also measure aerosol layers like volcanic ash, Saharan dust or aerosols within the planetary boundary layer. In the frame of E-PROFILE, which is part of EUMETNET, a European network of automatic lidars and ceilometers will be set up exploiting this new capability. To be able to monitor the evolution of aerosol layers over a large spatial scale, the measurements need to be consistent from one site to another. Currently, most of the instruments do not provide calibrated measurements, only relative ones. Thus, it is necessary to calibrate the instruments to develop a consistent product for all the instruments from the various networks and to combine them in a European network like E-PROFILE. As it is not possible to use an external reference (like a sun photometer or a Raman lidar) to calibrate all the ALCs in the E-PROFILE network, it is necessary to use a self-calibration algorithm. Two calibration methods have been identified which are suited for automated use in a network: the Rayleigh and the liquid cloud calibration methods. In the Rayleigh method, backscatter signals from molecules (the Rayleigh signal) can be measured and used to calculate the lidar constant (Wiegner et al. 2012). At the wavelength used for most ceilometers, this signal is weak and can be easily measured only during cloud-free nights. However, with the new algorithm implemented in the frame of the TOPROF COST Action, the Rayleigh calibration was successfully performed on a CHM15k for more than 50% of the nights from October 2013 to September 2014. This method was validated against two reference instruments, the collocated EARLINET PollyXT lidar and the CALIPSO space-borne lidar. The lidar constant was on average within 5.5% of the lidar constant determined by the EARLINET lidar, which confirms the validity of the self-calibration method. For 3 CALIPSO overpasses the agreement was on average 20.0%; this is less accurate due to the large uncertainties of CALIPSO data close to the surface. In contrast to the Rayleigh method, the cloud calibration method uses the complete attenuation of the transmitter beam by a liquid water cloud to calculate the lidar constant (O'Connor 2004). The main challenge is the selection of accurately measured water clouds. These clouds should not contain any ice crystals and the detector should not get into saturation. The first problem is especially important during winter time and the second problem is especially important for low clouds. Furthermore, the overlap function should be known accurately, especially when the water cloud is located at a distance where the overlap between the laser beam and the telescope field-of-view is still incomplete. In the E-PROFILE pilot network, the Rayleigh calibration is already performed automatically. This demonstration network has made available, in real time, calibrated ALC measurements from 8 instruments of 4 different types in 6 countries. In collaboration with TOPROF and 20 national weather services, E-PROFILE will provide, in 2017, near-real-time ALC measurements in most of Europe.
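
    The core step of the Rayleigh method is a proportionality fit: in an aerosol-free height window of a cloud-free night-time profile, the range-corrected signal should be proportional to the attenuated molecular backscatter computed from atmospheric data, and the proportionality factor is the lidar (calibration) constant. The sketch below only illustrates that fit; the exponential molecular profile is a crude placeholder (a real implementation would use pressure/temperature profiles and molecular transmission), and all values are synthetic.

```python
import numpy as np

def rayleigh_lidar_constant(range_corrected_signal, beta_mol, fit_window):
    """Least-squares scale factor between the measured range-corrected signal
    and the molecular backscatter over an assumed aerosol-free height window."""
    w = slice(*fit_window)
    x, y = beta_mol[w], range_corrected_signal[w]
    return float(np.sum(x * y) / np.sum(x * x))

# Crude molecular backscatter placeholder: exponential decay with height.
rng = np.random.default_rng(0)
z = np.arange(15.0, 12000.0, 15.0)                   # range gates, m
beta_mol = 1.5e-6 * np.exp(-z / 8000.0)              # placeholder profile, m-1 sr-1
true_constant = 2.4e11
signal = true_constant * beta_mol * (1.0 + 0.02 * rng.standard_normal(z.size))

print(f"{rayleigh_lidar_constant(signal, beta_mol, (400, 700)):.3e}")
```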

  5. Tablet potency of Tianeptine in coated tablets by near infrared spectroscopy: model optimisation, calibration transfer and confidence intervals.

    PubMed

    Boiret, Mathieu; Meunier, Loïc; Ginot, Yves-Michel

    2011-02-20

    A near infrared (NIR) method was developed for determination of tablet potency of active pharmaceutical ingredient (API) in a complex coated tablet matrix. The calibration set contained samples from laboratory and production scale batches. The reference values were obtained by high performance liquid chromatography (HPLC) and partial least squares (PLS) regression was used to establish a model. The model was challenged by calculating tablet potency of two external test sets. Root mean square errors of prediction were respectively equal to 2.0% and 2.7%. To use this model with a second spectrometer from the production field, a calibration transfer method called piecewise direct standardisation (PDS) was used. After the transfer, the root mean square error of prediction of the first test set was 2.4% compared to 4.0% without transferring the spectra. A statistical technique using bootstrap of PLS residuals was used to estimate confidence intervals of tablet potency calculations. This method requires an optimised PLS model, selection of the bootstrap number and determination of the risk. In the case of a chemical analysis, the tablet potency value will be included within the confidence interval calculated by the bootstrap method. An easy to use graphical interface was developed to easily determine if the predictions, surrounded by minimum and maximum values, are within the specifications defined by the regulatory organisation. Copyright © 2010 Elsevier B.V. All rights reserved.
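
    The residual-bootstrap idea mentioned here can be sketched generically: refit the PLS model on responses perturbed by resampled calibration residuals and take percentiles of the resulting predictions as a confidence interval. The code below is only an illustration of that scheme on synthetic data using scikit-learn's PLSRegression; the component count, bootstrap size and data are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 50))                            # synthetic "spectra"
y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=60)     # synthetic potency

pls = PLSRegression(n_components=5).fit(X, y)
residuals = y - pls.predict(X).ravel()

def bootstrap_interval(x_new, n_boot=500, alpha=0.05):
    """Percentile interval for the predicted potency of one new spectrum,
    obtained by refitting PLS on residual-resampled responses."""
    fitted = pls.predict(X).ravel()
    preds = []
    for _ in range(n_boot):
        y_star = fitted + rng.choice(residuals, size=y.size)
        p = PLSRegression(n_components=5).fit(X, y_star)
        preds.append(p.predict(x_new.reshape(1, -1)).item())
    lo, hi = np.quantile(preds, [alpha / 2, 1 - alpha / 2])
    return lo, hi

print(bootstrap_interval(X[0]))
```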

  6. Analytical aspects of diterpene alkaloid poisoning with monkshood.

    PubMed

    Colombo, Maria Laura; Bugatti, Carlo; Davanzo, Franca; Persico, Andrea; Ballabio, Cinzia; Restani, Patrizia

    2009-11-01

    A sensitive and specific method for aconitine extraction from biological samples was developed. Aconitine, the main toxic alkaloid from plants belonging to Aconitum species (family Ranunculaceae), was determined in plant material by an external standard method, and by a standard addition calibration method in biological fluids. Described here is one fatal case and five intoxications of accidental aconitine poisoning following the ingestion of aconite mistaken for an edible grass, Aruncus dioicus (Walt.) Fernald, "mountain asparagus", and Cicerbita alpina (L.) Wallroth. The aconitine content in urine was in the range 2.94 microg/mL (dead patient)-0.20 microg/mL (surviving patients), which was almost two to four times higher than that in plasma.

  7. Calibration methods and tools for KM3NeT

    NASA Astrophysics Data System (ADS)

    Kulikovskiy, Vladimir

    2016-04-01

    The KM3NeT detectors, ARCA and ORCA, composed of several thousands digital optical modules, are in the process of their realization in the Mediterranean Sea. Each optical module contains 31 3-inch photomultipliers. Readout of the optical modules and other detector components is synchronized at the level of sub-nanoseconds. The position of the module is measured by acoustic piezo detectors inside the module and external acoustic emitters installed on the bottom of the sea. The orientation of the module is obtained with an internal attitude and heading reference system chip. Detector calibration, i.e. timing, positioning and sea-water properties, is overviewed in this talk and discussed in detail in this conference. Results of the procedure applied to the first detector unit ready for installation in the deep sea will be shown.

  8. Application of Kalman filters to robot calibration

    NASA Technical Reports Server (NTRS)

    Whitney, D. E.; Junkel, E. F.

    1983-01-01

    This report explores new uses of Kalman filter theory in manufacturing systems (robots in particular). The Kalman filter allows the robot to read its sensors plus external sensors and learn from its experience. In effect, the robot is given primitive intelligence. The study, which is applicable to any type of powered kinematic linkage, focuses on the calibration of a manipulator.

  9. Liquid detection with InGaAsP semiconductor lasers having multiple short external cavities.

    PubMed

    Zhu, X; Cassidy, D T

    1996-08-20

    A liquid detection system consisting of a diode laser with multiple short external cavities (MSXC's) is reported. The MSXC diode laser operates single mode on one of 18 distinct modes that span a range of 72 nm. We selected the modes by setting the length of one of the external cavities using a piezoelectric positioner. One can measure the transmission through cells by modulating the injection current at audio frequencies and using phase-sensitive detection to reject the ambient light and reduce 1/f noise. A method to determine regions of single-mode operation by the rms of the output of the laser is described. The transmission data were processed by multivariate calibration techniques, i.e., partial least squares and principal component regression. Water concentration in acetone was used to demonstrate the performance of the system. A correlation coefficient of R(2) = 0.997 and 0.29% root-mean-square error of prediction are found for water concentration over the range of 2-19%.

  10. Evaluating the predictive accuracy and the clinical benefit of a nomogram aimed to predict survival in node-positive prostate cancer patients: External validation on a multi-institutional database.

    PubMed

    Bianchi, Lorenzo; Schiavina, Riccardo; Borghesi, Marco; Bianchi, Federico Mineo; Briganti, Alberto; Carini, Marco; Terrone, Carlo; Mottrie, Alex; Gacci, Mauro; Gontero, Paolo; Imbimbo, Ciro; Marchioro, Giansilvio; Milanese, Giulio; Mirone, Vincenzo; Montorsi, Francesco; Morgia, Giuseppe; Novara, Giacomo; Porreca, Angelo; Volpe, Alessandro; Brunocilla, Eugenio

    2018-04-06

    To assess the predictive accuracy and the clinical value of a recent nomogram predicting cancer-specific mortality-free survival after surgery in pN1 prostate cancer patients through an external validation. We evaluated 518 prostate cancer patients treated with radical prostatectomy and pelvic lymph node dissection with evidence of nodal metastases at final pathology, at 10 tertiary centers. External validation was carried out using regression coefficients of the previously published nomogram. The performance characteristics of the model were assessed by quantifying predictive accuracy, according to the area under the receiver operating characteristic curve, and model calibration. Furthermore, we systematically analyzed the specificity, sensitivity, positive predictive value and negative predictive value for each nomogram-derived probability cut-off. Finally, we implemented decision curve analysis, in order to quantify the nomogram's clinical value in routine practice. External validation showed inferior predictive accuracy compared with the internal validation (65.8% vs 83.3%, respectively). The discrimination (area under the curve) of the multivariable model was 66.7% (95% CI 60.1-73.0%) by receiver operating characteristic curve analysis. The calibration plot showed an overestimation throughout the range of predicted cancer-specific mortality-free survival probabilities. However, in decision curve analysis, the nomogram's use showed a net benefit when compared with the scenarios of treating all patients or none. In an external setting, the nomogram showed inferior predictive accuracy and suboptimal calibration characteristics compared with those reported in the original population. However, decision curve analysis showed a clinical net benefit, suggesting clinical utility for the correct management of pN1 prostate cancer patients after surgery. © 2018 The Japanese Urological Association.
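
    Decision curve analysis rests on a simple quantity: at each threshold probability p_t, the net benefit of acting on the model is TP/N − (FP/N)·p_t/(1−p_t), compared against the "treat all" and "treat none" strategies. The sketch below illustrates that calculation on hypothetical risks and outcomes; it is not the study's analysis code.

```python
import numpy as np

def net_benefit(pred_risk, outcome, threshold):
    """Net benefit of treating patients whose predicted risk exceeds a
    threshold probability: TP/N - (FP/N) * pt / (1 - pt)."""
    pred_risk, outcome = np.asarray(pred_risk), np.asarray(outcome)
    treat = pred_risk >= threshold
    n = outcome.size
    tp = np.sum(treat & (outcome == 1)) / n
    fp = np.sum(treat & (outcome == 0)) / n
    return tp - fp * threshold / (1.0 - threshold)

# Hypothetical predicted risks of cancer-specific mortality and observed events.
risk = np.array([0.1, 0.2, 0.3, 0.5, 0.7, 0.9])
died = np.array([0, 0, 1, 0, 1, 1])
for pt in (0.2, 0.4, 0.6):
    treat_all = np.mean(died) - (1 - np.mean(died)) * pt / (1 - pt)
    print(pt, round(net_benefit(risk, died, pt), 3), round(treat_all, 3))
```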

  11. External validation of the diffuse intrinsic pontine glioma survival prediction model: a collaborative report from the International DIPG Registry and the SIOPE DIPG Registry.

    PubMed

    Veldhuijzen van Zanten, Sophie E M; Lane, Adam; Heymans, Martijn W; Baugh, Joshua; Chaney, Brooklyn; Hoffman, Lindsey M; Doughman, Renee; Jansen, Marc H A; Sanchez, Esther; Vandertop, William P; Kaspers, Gertjan J L; van Vuurden, Dannis G; Fouladi, Maryam; Jones, Blaise V; Leach, James

    2017-08-01

    We aimed to perform external validation of the recently developed survival prediction model for diffuse intrinsic pontine glioma (DIPG), and discuss its utility. The DIPG survival prediction model was developed in a cohort of patients from the Netherlands, United Kingdom and Germany, registered in the SIOPE DIPG Registry, and includes age <3 years, longer symptom duration and receipt of chemotherapy as favorable predictors, and presence of ring-enhancement on MRI as an unfavorable predictor. Model performance was evaluated by analyzing the discrimination and calibration abilities. External validation was performed using an unselected cohort from the International DIPG Registry, including patients from the United States, Canada, Australia and New Zealand. Basic comparison with the results of the original study was performed using descriptive statistics, and univariable and multivariable regression analyses in the validation cohort. External validation was assessed following a variety of analyses described previously. Baseline patient characteristics and results from the regression analyses were largely comparable. Kaplan-Meier curves of the validation cohort reproduced separated groups of standard (n = 39), intermediate (n = 125), and high-risk (n = 78) patients. This discriminative ability was confirmed by similar values for the hazard ratios across these risk groups. The calibration curve in the validation cohort showed a symmetric underestimation of the predicted survival probabilities. In this external validation study, we demonstrate that the DIPG survival prediction model has acceptable cross-cohort calibration and is able to discriminate patients with short, average, and increased survival. We discuss how this clinico-radiological model may serve a useful role in current clinical practice.

  12. A comparison of laser ablation-inductively coupled plasma-mass spectrometry and high-resolution continuum source graphite furnace molecular absorption spectrometry for the direct determination of bromine in polymers

    NASA Astrophysics Data System (ADS)

    de Gois, Jefferson S.; Van Malderen, Stijn J. M.; Cadorim, Heloisa R.; Welz, Bernhard; Vanhaecke, Frank

    2017-06-01

    This work describes the development and comparison of two methods for the direct determination of Br in polymer samples via solid sampling, one using laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS) and the other using high-resolution continuum source graphite furnace molecular absorption spectrometry with direct solid sample analysis (HR-CS SS-GF MAS). The methods were optimized and their accuracy was evaluated by comparing the results obtained for 6 polymeric certified reference materials (CRMs) with the corresponding certified values. For Br determination with LA-ICP-MS, the 79Br+ signal could be monitored interference-free. For Br determination via HR-CS SS-GF MAS, the CaBr molecule was monitored at 625.315 nm with integration of the central pixel ± 1. Bromine quantification by LA-ICP-MS was performed via external calibration against a single CRM while using the 12C+ signal as an internal standard. With HR-CS SS-GF MAS, Br quantification could be accomplished using external calibration against aqueous standard solutions. Except for one LA-ICP-MS result, the concentrations obtained with both techniques were in agreement with the certified values within the experimental uncertainty as evidenced using a t-test (95% confidence level). The limit of quantification was determined to be 100 μg g⁻¹ Br for LA-ICP-MS and 10 μg g⁻¹ Br for HR-CS SS-GF MAS.
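    The LA-ICP-MS quantification step (external calibration against a single CRM with 12C+ as internal standard) reduces to a single-point response factor on the Br/C intensity ratio. A hedged sketch with placeholder numbers follows; none of the values are taken from the paper.

```python
# Hypothetical single-point external calibration with an internal standard.
crm_br_conc = 800.0     # certified Br mass fraction in the calibration CRM, ug/g (placeholder)
crm_ratio = 12.5        # measured 79Br+/12C+ intensity ratio for the CRM (placeholder)
sample_ratio = 4.1      # measured 79Br+/12C+ intensity ratio for the unknown polymer (placeholder)

response_factor = crm_ratio / crm_br_conc        # ratio units per (ug/g)
sample_br_conc = sample_ratio / response_factor  # Br mass fraction in the sample, ug/g
print(f"Br in sample: {sample_br_conc:.0f} ug/g")
```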

  13. Embedded Model Error Representation and Propagation in Climate Models

    NASA Astrophysics Data System (ADS)

    Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Thornton, P. E.

    2017-12-01

    Over the last decade, parametric uncertainty quantification (UQ) methods have reached a level of maturity, while the same cannot be said about the representation and quantification of structural or model errors. Lack of characterization of model errors, induced by physical assumptions, phenomenological parameterizations or constitutive laws, is a major handicap in predictive science. In climate models, for example, significant computational resources are dedicated to model calibration without gaining improvement in predictive skill. Neglecting model errors during calibration/tuning will lead to overconfident and biased model parameters. At the same time, the most advanced methods accounting for model error merely correct output biases, augmenting model outputs with statistical error terms that can potentially violate physical laws, or make the calibrated model ineffective for extrapolative scenarios. This work will overview a principled path for representing and quantifying model errors, as well as propagating them together with the rest of the predictive uncertainty budget, including data noise, parametric uncertainties and surrogate-related errors. Namely, the model error terms will be embedded in select model components rather than added as external corrections. Such embedding ensures consistency with physical constraints on model predictions, and renders calibrated model predictions meaningful and robust with respect to model errors. Moreover, in the presence of observational data, the approach can effectively differentiate model structural deficiencies from those of data acquisition. The methodology is implemented in the UQ Toolkit (www.sandia.gov/uqtoolkit), relying on a host of available forward and inverse UQ tools. We will demonstrate the application of the technique on a few applications of interest, including ACME Land Model calibration against a wide range of measurements obtained at select sites.

  14. Simultaneous localization and calibration for electromagnetic tracking systems.

    PubMed

    Sadjadi, Hossein; Hashtrudi-Zaad, Keyvan; Fichtinger, Gabor

    2016-06-01

    In clinical environments, field distortion can cause significant electromagnetic tracking errors. Therefore, dynamic calibration of electromagnetic tracking systems is essential to compensate for measurement errors. It is proposed to integrate the motion model of the tracked instrument with redundant EM sensor observations and to apply a simultaneous localization and mapping algorithm in order to accurately estimate the pose of the instrument and create a map of the field distortion in real time. Experiments were conducted in the presence of ferromagnetic and electrically conductive field-distorting objects, and the results were compared with those of a conventional sensor fusion approach. The proposed method reduced the tracking error from 3.94±1.61 mm to 1.82±0.62 mm in the presence of steel, and from 0.31±0.22 mm to 0.11±0.14 mm in the presence of aluminum. With reduced tracking error and independence from external tracking devices or pre-operative calibrations, the approach is promising for reliable EM navigation in various clinical procedures. Copyright © 2015 John Wiley & Sons, Ltd.

  15. Determination of eddy current response with magnetic measurements.

    PubMed

    Jiang, Y Z; Tan, Y; Gao, Z; Nakamura, K; Liu, W B; Wang, S Z; Zhong, H; Wang, B B

    2017-09-01

    Accurate mutual inductances between magnetic diagnostics and poloidal field coils are an essential requirement for determining the poloidal flux for plasma equilibrium reconstruction. The mutual inductance calibration of the flux loops and magnetic probes requires time-varying coil currents, which also simultaneously drive eddy currents in electrically conducting structures. The eddy current-induced field appearing in the magnetic measurements can substantially increase the calibration error if the eddy currents are neglected in the model. In this paper, an expression of the magnetic diagnostic response to the coil currents is used to calibrate the mutual inductances, estimate the conductor time constant, and predict the eddy current response. It is found that the eddy current effects in magnetic signals can be well explained by the determined eddy current response. A set of experiments using a specially shaped saddle coil diagnostic is conducted to measure the SUNIST-like eddy current response and to examine the accuracy of this method. In shots that include plasmas, this approach can more accurately determine the plasma-related response in the magnetic signals by eliminating the field due to the eddy currents produced by the external field.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bogdanova, M. A.; Zyryanov, S. M.; Faculty of Physics, Moscow State University, MSU, Moscow

    Energy distribution and the flux of the ions arriving at a surface are considered the key parameters in anisotropic plasma etching. Since direct ion energy distribution (IED) measurements at the treated surface during plasma processing are often hardly possible, there is an opportunity for virtual ones. This work is devoted to the possibility of such indirect IED and ion flux measurements at an rf-biased electrode in low-pressure rf plasma by using a “virtual IED sensor”, which represents “in-situ” IED calculations on the absolute scale in accordance with a plasma sheath model containing a set of measurable external parameters. The “virtual IED sensor” should also involve some external calibration procedure. Applicability and accuracy of the “virtual IED sensor” are validated for a dual-frequency reactive ion etching (RIE) inductively coupled plasma (ICP) reactor with a capacitively coupled rf-biased electrode. The validation is carried out for heavy (Ar) and light (H2) gases under different discharge conditions (different ICP powers, rf-bias frequencies, and voltages). An EQP mass-spectrometer and an rf-compensated Langmuir probe (LP) are used to characterize the plasma, while an rf-compensated retarded field energy analyzer (RFEA) is applied to measure the IED and ion flux at the rf-biased electrode. In addition, the pulsed self-bias method is used as an external calibration procedure for ion flux estimation at the rf-biased electrode. It is shown that the pulsed self-bias method allows the IED absolute scale to be calibrated quite accurately. It is also shown that the “virtual IED sensor” based on the simplest collisionless sheath model reproduces well enough the experimental IEDs at pressures where the sheath thickness s is less than the ion mean free path λi (s < λi). At higher pressure (when s > λi), a difference between calculated and experimental IEDs due to ion collisions in the sheath is observed in the low energy range. The effect of electron impact ionization in the sheath on the origin and intensity of low-energy peaks in the IED is discussed in comparison to ion charge-exchange collisions. Obviously, the extrapolation of the “virtual IED sensor” approach to higher pressures requires developing other sheath models, taking into account both ion and electron collisions and probably including even a model of the whole plasma volume instead of the plasma sheath alone.

  17. Radiometric modeling and calibration of the Geostationary Imaging Fourier Transform Spectrometer (GIFTS) ground based measurement experiment

    NASA Astrophysics Data System (ADS)

    Tian, Jialin; Smith, William L.; Gazarik, Michael J.

    2008-12-01

    The ultimate remote sensing benefits of the high resolution Infrared radiance spectrometers will be realized with their geostationary satellite implementation in the form of imaging spectrometers. This will enable dynamic features of the atmosphere's thermodynamic fields and pollutant and greenhouse gas constituents to be observed for revolutionary improvements in weather forecasts and more accurate air quality and climate predictions. As an important step toward realizing this application objective, the Geostationary Imaging Fourier Transform Spectrometer (GIFTS) Engineering Demonstration Unit (EDU) was successfully developed under the NASA New Millennium Program, 2000-2006. The GIFTS-EDU instrument employs three focal plane arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The GIFTS calibration is achieved using internal blackbody calibration references at ambient (260 K) and hot (286 K) temperatures. In this paper, we introduce a refined calibration technique that utilizes Principal Component (PC) analysis to compensate for instrument distortions and artifacts, thereby enhancing the absolute calibration accuracy. This method is applied to data collected during the GIFTS Ground Based Measurement (GBM) experiment, together with simultaneous observations by the accurately calibrated AERI (Atmospheric Emitted Radiance Interferometer), both simultaneously zenith viewing the sky through the same external scene mirror at ten-minute intervals throughout a cloudless day at Logan, Utah, on September 13, 2006. The accurately calibrated GIFTS radiances are produced using the first four PC scores in the GIFTS-AERI regression model. Temperature and moisture profiles retrieved from the PC-calibrated GIFTS radiances are verified against radiosonde measurements collected throughout the GIFTS sky measurement period. Using the GIFTS GBM calibration model, we compute the calibrated radiances from data collected during the moon tracking and viewing experiment events. From these, we derive the lunar surface temperature and emissivity associated with the moon viewing measurements.

  18. Radiometric Modeling and Calibration of the Geostationary Imaging Fourier Transform Spectrometer (GIFTS)Ground Based Measurement Experiment

    NASA Technical Reports Server (NTRS)

    Tian, Jialin; Smith, William L.; Gazarik, Michael J.

    2008-01-01

    The ultimate remote sensing benefits of the high resolution Infrared radiance spectrometers will be realized with their geostationary satellite implementation in the form of imaging spectrometers. This will enable dynamic features of the atmosphere's thermodynamic fields and pollutant and greenhouse gas constituents to be observed for revolutionary improvements in weather forecasts and more accurate air quality and climate predictions. As an important step toward realizing this application objective, the Geostationary Imaging Fourier Transform Spectrometer (GIFTS) Engineering Demonstration Unit (EDU) was successfully developed under the NASA New Millennium Program, 2000-2006. The GIFTS-EDU instrument employs three focal plane arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The GIFTS calibration is achieved using internal blackbody calibration references at ambient (260 K) and hot (286 K) temperatures. In this paper, we introduce a refined calibration technique that utilizes Principal Component (PC) analysis to compensate for instrument distortions and artifacts, thereby enhancing the absolute calibration accuracy. This method is applied to data collected during the GIFTS Ground Based Measurement (GBM) experiment, together with simultaneous observations by the accurately calibrated AERI (Atmospheric Emitted Radiance Interferometer), both simultaneously zenith viewing the sky through the same external scene mirror at ten-minute intervals throughout a cloudless day at Logan, Utah, on September 13, 2006. The accurately calibrated GIFTS radiances are produced using the first four PC scores in the GIFTS-AERI regression model. Temperature and moisture profiles retrieved from the PC-calibrated GIFTS radiances are verified against radiosonde measurements collected throughout the GIFTS sky measurement period. Using the GIFTS GBM calibration model, we compute the calibrated radiances from data collected during the moon tracking and viewing experiment events. From these, we derive the lunar surface temperature and emissivity associated with the moon viewing measurements.

  19. Radial line-scans as representative sampling strategy in dried-droplet laser ablation of liquid samples deposited on pre-cut filter paper disks

    NASA Astrophysics Data System (ADS)

    Nischkauer, Winfried; Vanhaecke, Frank; Bernacchi, Sébastien; Herwig, Christoph; Limbeck, Andreas

    2014-11-01

    Nebulising liquid samples and using the aerosol thus obtained for further analysis is the standard method in many current analytical techniques, also with inductively coupled plasma (ICP)-based devices. With such a set-up, quantification via external calibration is usually straightforward for samples with aqueous or close-to-aqueous matrix composition. However, there is a variety of more complex samples. Such samples can be found in medical, biological, technological and industrial contexts and can range from body fluids, like blood or urine, to fuel additives or fermentation broths. Specialized nebulizer systems or careful digestion and dilution are required to tackle such demanding sample matrices. One alternative approach is to convert the liquid into a dried solid and to use laser ablation for sample introduction. Up to now, this approach required the application of internal standards or matrix-adjusted calibration due to matrix effects. In this contribution, we show a way to circumvent these matrix effects while using simple external calibration for quantification. The principle of representative sampling that we propose uses radial line-scans across the dried residue. This compensates for the centro-symmetric inhomogeneities typically observed in dried spots. The effectiveness of the proposed sampling strategy is exemplified via the determination of phosphorus in biochemical fermentation media. However, the universal viability of the presented measurement protocol is postulated. Detection limits using laser ablation-ICP-optical emission spectrometry were on the order of 40 μg mL⁻¹ with a reproducibility of 10% relative standard deviation (n = 4, concentration = 10 times the quantification limit). The reported sensitivity is fit-for-purpose in the biochemical context described here, but could be improved using ICP-mass spectrometry, should future analytical tasks require it. Trueness of the proposed method was investigated by cross-validation with conventional liquid measurements, and by analyzing the IAEA-153 reference material (Trace Elements in Milk Powder); a good agreement with the certified value for phosphorus was obtained.

  20. Protein quantitation using Ru-NHS ester tagging and isotope dilution high-pressure liquid chromatography-inductively coupled plasma mass spectrometry determination.

    PubMed

    Liu, Rui; Lv, Yi; Hou, Xiandeng; Yang, Lu; Mester, Zoltan

    2012-03-20

    An accurate, simple, and sensitive method for the direct determination of proteins by nonspecies specific isotope dilution and external calibration high-performance liquid chromatography-inductively coupled plasma mass spectrometry (HPLC-ICPMS) is described. The labeling of myoglobin (17 kDa), transferrin (77 kDa), and thyroglobulin (670 kDa) proteins was accomplished in a single-step reaction with a commercially available bis(2,2'-bipyridine)-4'-methyl-4-carboxybipyridine-ruthenium N-succinimidyl ester-bis(hexafluorophosphate) (Ru-NHS ester). Using excess amounts of Ru-NHS ester compared to the protein concentration at optimized labeling conditions, constant ratios for Ru to proteins were obtained. Bioconjugate solutions containing both labeled and unlabeled proteins as well as excess Ru-NHS ester reagent were injected onto a size exclusion HPLC column for separation and ICPMS detection without any further treatment. A (99)Ru enriched spike was used for nonspecies specific ID calibration. The accuracy of the method was confirmed at various concentration levels. An average recovery of 100% ± 3% (1 standard deviation (SD), n = 9) was obtained with a typical precision of better than 5% RSD at 100 μg mL(-1) for nonspecies specific ID. Detection limits (3SD) of 1.6, 3.2, and 7.0 fmol estimated from three procedure blanks were obtained for myoglobin, transferrin, and thyroglobulin, respectively. These detection limits are suitable for the direct determination of intact proteins at trace levels. For simplicity, external calibration was also tested. Good linear correlation coefficients, 0.9901, 0.9921, and 0.9980 for myoglobin, transferrin, and thyroglobulin, respectively, were obtained. The measured concentrations of proteins in a solution were in good agreement with their volumetrically prepared values. To the best of our knowledge, this is the first application of nonspecies specific ID for the accurate and direct determination of proteins using a Ru-NHS ester labeling reagent.
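    The nonspecies-specific isotope dilution calculation rests on the standard IDMS blend equation. The sketch below shows one common mole-based parameterization as a generic illustration; the symbols, isotope choices and abundances are illustrative and are not taken from the paper (a mass-fraction result additionally requires the molar-mass ratio of sample and spike material).

```python
def idms_amount_content(c_spike, m_spike, m_sample,
                        A_spike, B_spike, A_sample, B_sample, R_blend):
    """Amount content of analyte in the sample from a single isotope-dilution blend.

    A_*: abundance of the spike-enriched isotope (e.g. 99Ru) in spike/sample material,
    B_*: abundance of the reference isotope,
    R_blend: measured ratio (spike isotope / reference isotope) in the blend,
    c_spike: amount content of the spike solution, m_*: masses of spike and sample taken.
    """
    return (c_spike * (m_spike / m_sample)
            * (A_spike - R_blend * B_spike) / (R_blend * B_sample - A_sample))
```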

  1. External validation of a prediction model for surgical site infection after thoracolumbar spine surgery in a Western European cohort.

    PubMed

    Janssen, Daniël M C; van Kuijk, Sander M J; d'Aumerie, Boudewijn B; Willems, Paul C

    2018-05-16

    A prediction model for surgical site infection (SSI) after spine surgery was developed in 2014 by Lee et al. This model was developed to compute an individual estimate of the probability of SSI after spine surgery based on the patient's comorbidity profile and invasiveness of surgery. Before any prediction model can be validly implemented in daily medical practice, it should be externally validated to assess how the prediction model performs in patients sampled independently from the derivation cohort. We included 898 consecutive patients who underwent instrumented thoracolumbar spine surgery. Overall performance was quantified using Nagelkerke's R2 statistic, and the discriminative ability was quantified as the area under the receiver operating characteristic curve (AUC). We computed the calibration slope of the calibration plot to judge prediction accuracy. Sixty patients developed an SSI. The overall performance of the prediction model in our population was poor: Nagelkerke's R2 was 0.01. The AUC was 0.61 (95% confidence interval (CI) 0.54-0.68). The estimated slope of the calibration plot was 0.52. The previously published prediction model showed poor performance in our academic external validation cohort. To predict SSI after instrumented thoracolumbar spine surgery for the present population, a better fitting prediction model should be developed.
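    The performance measures reported here (Nagelkerke's R2, AUC, calibration slope) can be computed for any external cohort with a few lines of standard code. The sketch below is a generic illustration with assumed inputs (observed outcomes and the published model's predicted probabilities), not the authors' analysis code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
import statsmodels.api as sm

def external_validation_metrics(y, p):
    """y: observed outcomes (0/1); p: predicted probabilities from the published model."""
    y = np.asarray(y, dtype=float)
    p = np.clip(np.asarray(p, dtype=float), 1e-6, 1 - 1e-6)
    n = len(y)

    auc = roc_auc_score(y, p)

    # Calibration slope: logistic regression of the outcome on the linear predictor logit(p).
    lp = np.log(p / (1 - p))
    slope = sm.Logit(y, sm.add_constant(lp)).fit(disp=0).params[1]

    # Nagelkerke's R2 from the log-likelihood of the predictions vs. the null model.
    ll_model = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    p0 = y.mean()
    ll_null = np.sum(y * np.log(p0) + (1 - y) * np.log(1 - p0))
    r2_cox_snell = 1 - np.exp(2 * (ll_null - ll_model) / n)
    r2_nagelkerke = r2_cox_snell / (1 - np.exp(2 * ll_null / n))

    return auc, slope, r2_nagelkerke
```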

  2. Color accuracy and reproducibility in whole slide imaging scanners

    PubMed Central

    Shrestha, Prarthana; Hulsken, Bas

    2014-01-01

    We propose a workflow for color reproduction in whole slide imaging (WSI) scanners, such that the colors in the scanned images match the actual slide color and the inter-scanner variation is minimal. We describe a new method of preparation and verification of the color phantom slide, consisting of a standard IT8-target transmissive film, which is used in color calibrating and profiling the WSI scanner. We explore several International Color Consortium (ICC) compliant techniques in color calibration/profiling and rendering intents for translating the scanner specific colors to the standard display (sRGB) color space. Based on the quality of the color reproduction in histopathology slides, we propose the matrix-based calibration/profiling and absolute colorimetric rendering approach. The main advantage of the proposed workflow is that it is compliant with the ICC standard, applicable to color management systems in different platforms, and involves no external color measurement devices. We quantify color difference using the CIE-DeltaE2000 metric, where DeltaE values below 1 are considered imperceptible. Our evaluation on 14 phantom slides, manufactured according to the proposed method, shows an average inter-slide color difference below 1 DeltaE. The proposed workflow is implemented and evaluated in 35 WSI scanners developed at Philips, called the Ultra Fast Scanners (UFS). The color accuracy, measured as DeltaE between the scanner reproduced colors and the reference colorimetric values of the phantom patches, is improved on average to 3.5 DeltaE in calibrated scanners from 10 DeltaE in uncalibrated scanners. The average inter-scanner color difference is found to be 1.2 DeltaE. The improvement in color performance upon using the proposed method is apparent with the visual color quality of the tissue scans. PMID:26158041
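    The matrix-based calibration/profiling step mentioned above amounts, in its simplest form, to a least-squares 3x3 transform from scanner responses to a device-independent color space estimated from the phantom patches. The sketch below illustrates only that idea with placeholder measurements; it is not the Philips implementation and omits the ICC profile machinery and rendering intents.

```python
import numpy as np

# Hypothetical data: one row per IT8 phantom patch.
rng = np.random.default_rng(1)
rgb_patches = rng.random((24, 3))      # measured scanner RGB responses (placeholder)
xyz_reference = rng.random((24, 3))    # certified colorimetric (XYZ) values (placeholder)

# Least-squares 3x3 profiling matrix M such that rgb @ M approximates XYZ.
M, *_ = np.linalg.lstsq(rgb_patches, xyz_reference, rcond=None)

xyz_predicted = rgb_patches @ M
rms_error = np.sqrt(np.mean((xyz_predicted - xyz_reference) ** 2))
print(f"RMS fit error over patches: {rms_error:.4f}")
```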

  3. Application of advanced shearing techniques to the calibration of autocollimators with small angle generators and investigation of error sources.

    PubMed

    Yandayan, T; Geckeler, R D; Aksulu, M; Akgoz, S A; Ozgur, B

    2016-05-01

    The application of advanced error-separating shearing techniques to the precise calibration of autocollimators with Small Angle Generators (SAGs) was carried out for the first time. The experimental realization was achieved using the High Precision Small Angle Generator (HPSAG) of TUBITAK UME under classical dimensional metrology laboratory environmental conditions. The standard uncertainty value of 5 mas (24.2 nrad) reached by classical calibration method was improved to the level of 1.38 mas (6.7 nrad). Shearing techniques, which offer a unique opportunity to separate the errors of devices without recourse to any external standard, were first adapted by Physikalisch-Technische Bundesanstalt (PTB) to the calibration of autocollimators with angle encoders. It has been demonstrated experimentally in a clean room environment using the primary angle standard of PTB (WMT 220). The application of the technique to a different type of angle measurement system extends the range of the shearing technique further and reveals other advantages. For example, the angular scales of the SAGs are based on linear measurement systems (e.g., capacitive nanosensors for the HPSAG). Therefore, SAGs show different systematic errors when compared to angle encoders. In addition to the error-separation of HPSAG and the autocollimator, detailed investigations on error sources were carried out. Apart from determination of the systematic errors of the capacitive sensor used in the HPSAG, it was also demonstrated that the shearing method enables the unique opportunity to characterize other error sources such as errors due to temperature drift in long term measurements. This proves that the shearing technique is a very powerful method for investigating angle measuring systems, for their improvement, and for specifying precautions to be taken during the measurements.

  4. An analysis of cross-coupling of a multicomponent jet engine test stand using finite element modeling techniques

    NASA Technical Reports Server (NTRS)

    Schweikhard, W. G.; Singnoi, W. N.

    1985-01-01

    A two-axis thrust measuring system was analyzed using a finite element computer program to determine the sensitivities of the thrust vectoring nozzle system to misalignment of the load cells and applied loads, and to the stiffness of the structural members. Three models were evaluated: (1) the basic measuring element and its internal calibration load cells; (2) the basic measuring element and its external load calibration equipment; and (3) the basic measuring element, external calibration load frame and the altitude facility support structure. Misalignment of the calibration loads was the greatest source of error for multiaxis thrust measuring systems. Uniform increases or decreases in stiffness of the members, which might be caused by the selection of the materials, have little effect on the accuracy of the measurements. It is found that the POLO-FINITE program is a viable tool for designing and analyzing multiaxis thrust measurement systems. The response of the test stand to step inputs that might be encountered with thrust vectoring tests was determined. The dynamic analysis shows a potential problem for measuring the dynamic response characteristics of thrust vectoring systems because of the inherently light damping of the test stand.

  5. Development of a multi-residue analytical methodology based on liquid chromatography-tandem mass spectrometry (LC-MS/MS) for screening and trace level determination of pharmaceuticals in surface and wastewaters.

    PubMed

    Gros, Meritxell; Petrović, Mira; Barceló, Damiá

    2006-11-15

    This paper describes the development, optimization and validation of a method for the simultaneous determination of 29 multi-class pharmaceuticals using off-line solid phase extraction (SPE) followed by liquid chromatography-triple quadrupole mass spectrometry (LC-MS-MS). Target compounds include analgesics and non-steroidal anti-inflammatories (NSAIDs), lipid regulators, psychiatric drugs, anti-histaminics, an anti-ulcer agent, antibiotics and beta-blockers. Recoveries obtained were generally higher than 60% for both surface and wastewaters, with the exception of several compounds that yielded lower, but still acceptable recoveries: ranitidine (50%), sotalol (50%), famotidine (50%) and mevastatin (34%). The overall variability of the method was below 15% for all compounds and all tested matrices. Method detection limits (MDL) varied between 1 and 30 ng/L and from 3 to 160 ng/L for surface and wastewaters, respectively. The precision of the method, calculated as relative standard deviation (R.S.D.), ranged from 0.2 to 6% and from 1 to 11% for inter- and intra-day analysis, respectively. A detailed study of matrix effects was performed in order to evaluate the suitability of different calibration approaches (matrix-matched external calibration, internal calibration, extract dilution) to reduce analyte suppression or enhancement during instrumental analysis. The main advantages and drawbacks of each approach are demonstrated, justifying the selection of internal standard calibration as the most suitable approach for our study. The developed analytical method was successfully applied to the analysis of pharmaceutical residues in WWTP influents and effluents, as well as in river water. For both river and wastewaters, the most ubiquitous compounds belonged to the groups of anti-inflammatories and analgesics, antibiotics and lipid regulators, with acetaminophen, trimethoprim, ibuprofen, ketoprofen, atenolol, propranolol, mevastatin, carbamazepine and ranitidine being the most frequently detected compounds.

  6. Frequency characterization of a swept- and fixed-wavelength external-cavity quantum cascade laser by use of a frequency comb.

    PubMed

    Knabe, Kevin; Williams, Paul A; Giorgetta, Fabrizio R; Armacost, Chris M; Crivello, Sam; Radunsky, Michael B; Newbury, Nathan R

    2012-05-21

    The instantaneous optical frequency of an external-cavity quantum cascade laser (QCL) is characterized by comparison to a near-infrared frequency comb. Fluctuations in the instantaneous optical frequency are analyzed to determine the frequency-noise power spectral density for the external-cavity QCL both during fixed-wavelength and swept-wavelength operation. The noise performance of a near-infrared external-cavity diode laser is measured for comparison. In addition to providing basic frequency metrology of external-cavity QCLs, this comb-calibrated swept QCL system can be applied to rapid, precise broadband spectroscopy in the mid-infrared spectral region.

  7. Out of lab calibration of a rotating 2D scanner for 3D mapping

    NASA Astrophysics Data System (ADS)

    Koch, Rainer; Böttcher, Lena; Jahrsdörfer, Maximilian; Maier, Johannes; Trommer, Malte; May, Stefan; Nüchter, Andreas

    2017-06-01

    Mapping is an essential task in mobile robotics. To fulfil advanced navigation and manipulation tasks, a 3D representation of the environment is required. Applying stereo cameras or time-of-flight (TOF) cameras is one way to achieve this requirement. Unfortunately, they suffer from drawbacks which make it difficult to map properly. Therefore, costly 3D laser scanners are applied. An inexpensive way to build a 3D representation is to use a 2D laser scanner and rotate the scan plane around an additional axis. A 3D point cloud acquired with such a custom device consists of multiple 2D line scans. Therefore, the scanner pose of each line scan needs to be determined, as well as the parameters resulting from a calibration, to generate a 3D point cloud. Using external sensor systems is a common method to determine these calibration parameters. This is costly and difficult when the robot needs to be calibrated outside the lab. Thus, this work presents a calibration method applied to a rotating 2D laser scanner. It uses a hardware setup to identify the required parameters for calibration. This hardware setup is light, small, and easy to transport. Hence, an out-of-lab calibration is possible. Additionally, a theoretical model was created to test the algorithm and analyse the impact of the scanner accuracy. The hardware components of the 3D scanner system are a HOKUYO UTM-30LX-EW 2D laser scanner, a Dynamixel servo-motor, and a control unit. The calibration system consists of a hemisphere; inside the hemisphere, a circular plate is mounted. The algorithm needs to be provided with a dataset of a single rotation from the laser scanner. To achieve a proper calibration result, the scanner needs to be located in the middle of the hemisphere. By means of geometric formulas, the algorithm determines the individual deviations of the placed laser scanner. In order to minimize errors, the algorithm solves the formulas in an iterative process. First, the calibration algorithm was tested with an ideal hemisphere model created in Matlab. Second, the laser scanner was mounted differently, and the scanner position and the rotation axis were modified. In doing so, every deviation was compared with the algorithm results. Several measurement settings were tested repeatedly with the 3D scanner system and the calibration system. The results show that the length accuracy of the laser scanner is most critical. It influences the required size of the hemisphere and the calibration accuracy.

  8. Radiometric and spectral calibrations of the Geostationary Imaging Fourier Transform Spectrometer (GIFTS) using principle component analysis

    NASA Astrophysics Data System (ADS)

    Tian, Jialin; Smith, William L.; Gazarik, Michael J.

    2008-10-01

    The ultimate remote sensing benefits of the high resolution Infrared radiance spectrometers will be realized with their geostationary satellite implementation in the form of imaging spectrometers. This will enable dynamic features of the atmosphere's thermodynamic fields and pollutant and greenhouse gas constituents to be observed for revolutionary improvements in weather forecasts and more accurate air quality and climate predictions. As an important step toward realizing this application objective, the Geostationary Imaging Fourier Transform Spectrometer (GIFTS) Engineering Demonstration Unit (EDU) was successfully developed under the NASA New Millennium Program, 2000-2006. The GIFTS-EDU instrument employs three focal plane arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The raw GIFTS interferogram measurements are radiometrically and spectrally calibrated to produce radiance spectra, which are further processed to obtain atmospheric profiles via retrieval algorithms. The radiometric calibration is achieved using internal blackbody calibration references at ambient (260 K) and hot (286 K) temperatures. The absolute radiometric performance of the instrument is affected by several factors including the FPA off-axis effect, detector/readout electronics induced nonlinearity distortions, and fore-optics offsets. The GIFTS-EDU, being the very first imaging spectrometer to use ultra-high speed electronics to read out its large area format focal plane array detectors operating at wavelengths as long as 15 microns, possessed nonlinearities not easily removable in the initial calibration process. In this paper, we introduce a refined calibration technique that utilizes Principal Component (PC) analysis to compensate for instrument distortions and artifacts remaining after the initial radiometric calibration process, thus further enhancing the absolute calibration accuracy. This method is applied to data collected during an atmospheric measurement experiment with the GIFTS, together with simultaneous observations by the accurately calibrated AERI (Atmospheric Emitted Radiance Interferometer), both simultaneously zenith viewing the sky through the same external scene mirror at ten-minute intervals throughout a cloudless day at Logan, Utah, on September 13, 2006. The PC vectors of the calibrated radiance spectra are defined from the AERI observations, and regression matrices relating the initial GIFTS radiance PC scores to the AERI radiance PC scores are calculated using the least squares inverse method. A new set of accurately calibrated GIFTS radiances are produced using the first four PC scores in the regression model. Temperature and moisture profiles retrieved from the PC-calibrated GIFTS radiances are verified against radiosonde measurements collected throughout the GIFTS sky measurement period.
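    The PC-score regression described here (project both instruments' spectra onto AERI-derived principal components, then map the GIFTS scores onto the AERI scores by least squares) can be sketched generically as below. The choice of four components follows the text; everything else, including the array shapes, is a placeholder and this is not the GIFTS processing code.

```python
import numpy as np

# Hypothetical coincident observations: rows are observation times, columns are spectral channels.
rng = np.random.default_rng(0)
gifts = rng.random((144, 2000))   # initially calibrated GIFTS radiances (placeholder)
aeri = rng.random((144, 2000))    # accurately calibrated AERI radiances (placeholder)

# Principal components defined from the AERI spectra.
aeri_mean = aeri.mean(axis=0)
_, _, vt = np.linalg.svd(aeri - aeri_mean, full_matrices=False)
pcs = vt[:4]                                   # first four PC vectors

scores_aeri = (aeri - aeri_mean) @ pcs.T       # AERI PC scores
scores_gifts = (gifts - aeri_mean) @ pcs.T     # GIFTS scores in the same basis

# Least-squares regression mapping GIFTS scores to AERI scores (with intercept).
A = np.hstack([scores_gifts, np.ones((len(scores_gifts), 1))])
coeffs, *_ = np.linalg.lstsq(A, scores_aeri, rcond=None)

# Recalibrated GIFTS radiances reconstructed from the corrected scores.
gifts_calibrated = aeri_mean + (A @ coeffs) @ pcs
```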

  9. Precise Haptic Device Co-Location for Visuo-Haptic Augmented Reality.

    PubMed

    Eck, Ulrich; Pankratz, Frieder; Sandor, Christian; Klinker, Gudrun; Laga, Hamid

    2015-12-01

    Visuo-haptic augmented reality systems enable users to see and touch digital information that is embedded in the real world. PHANToM haptic devices are often employed to provide haptic feedback. Precise co-location of computer-generated graphics and the haptic stylus is necessary to provide a realistic user experience. Previous work has focused on calibration procedures that compensate the non-linear position error caused by inaccuracies in the joint angle sensors. In this article we present a more complete procedure that additionally compensates for errors in the gimbal sensors and improves position calibration. The proposed procedure further includes software-based temporal alignment of sensor data and a method for the estimation of a reference for position calibration, resulting in increased robustness against haptic device initialization and external tracker noise. We designed our procedure to require minimal user input to maximize usability. We conducted an extensive evaluation with two different PHANToMs, two different optical trackers, and a mechanical tracker. Compared to state-of-the-art calibration procedures, our approach significantly improves the co-location of the haptic stylus. This results in higher fidelity visual and haptic augmentations, which are crucial for fine-motor tasks in areas such as medical training simulators, assembly planning tools, or rapid prototyping applications.

  10. Evaluation of a new arterial pressure-based cardiac output device requiring no external calibration

    PubMed Central

    Prasser, Christopher; Bele, Sylvia; Keyl, Cornelius; Schweiger, Stefan; Trabold, Benedikt; Amann, Matthias; Welnhofer, Julia; Wiesenack, Christoph

    2007-01-01

    Background Several techniques have been discussed as alternatives to the intermittent bolus thermodilution cardiac output (COPAC) measurement by the pulmonary artery catheter (PAC). However, these techniques usually require a central venous line, an additional catheter, or a special calibration procedure. A new arterial pressure-based cardiac output (COAP) device (FloTrac™, Vigileo™; Edwards Lifesciences, Irvine, CA, USA) only requires access to the radial or femoral artery using a standard arterial catheter and does not need an external calibration. We validated this technique in critically ill patients in the intensive care unit (ICU) using COPAC as the method of reference. Methods We studied 20 critically ill patients, aged 16 to 74 years (mean, 55.5 ± 18.8 years), who required both arterial and pulmonary artery pressure monitoring. COPAC measurements were performed at least every 4 hours and calculated as the average of 3 measurements, while COAP values were taken immediately at the end of bolus determinations. Accuracy of measurements was assessed by calculating the bias and limits of agreement using the method described by Bland and Altman. Results A total of 164 coupled measurements were obtained. Absolute values of COPAC ranged from 2.80 to 10.80 l/min (mean 5.93 ± 1.55 l/min). The bias and limits of agreement between COPAC and COAP for unequal numbers of replicates was 0.02 ± 2.92 l/min. The percentage error between COPAC and COAP was 49.3%. The bias between percentage changes in COPAC (ΔCOPAC) and percentage changes in COAP (ΔCOAP) for consecutive measurements was -0.70% ± 32.28%. COPAC and COAP showed a Pearson correlation coefficient of 0.58 (p < 0.01), while the correlation coefficient between ΔCOPAC and ΔCOAP was 0.46 (p < 0.01). Conclusion Although the COAP algorithm shows a minimal bias with COPAC over a wide range of values in an inhomogeneous group of critically ill patients, the scattering of the data remains relatively wide. Therefore, the algorithm used (V 1.03) failed to demonstrate acceptable accuracy in comparison with the clinical standard of cardiac output determination. PMID:17996086
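    The bias, limits of agreement and percentage error quoted above follow directly from the paired cardiac output measurements. The sketch below shows the basic Bland-Altman arithmetic for independent pairs (it omits the correction for unequal numbers of replicates per patient used in the paper) with hypothetical input arrays.

```python
import numpy as np

def bland_altman(co_pac, co_ap):
    """co_pac, co_ap: paired cardiac output measurements in l/min (placeholder arrays)."""
    co_pac, co_ap = np.asarray(co_pac, float), np.asarray(co_ap, float)
    diff = co_ap - co_pac
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)             # limits of agreement: bias +/- loa
    # Percentage error as commonly defined (Critchley and Critchley): 1.96 SD / mean reference CO.
    pct_error = 100.0 * loa / co_pac.mean()
    return bias, loa, pct_error
```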

  11. WFC3 Cycle 19 Calibration Program

    NASA Astrophysics Data System (ADS)

    Sabbi, E.; WFC3 Team

    2012-03-01

    The Cycle 19 WFC3 Calibration Program runs from October 2011 through September 2012 and is designed to measure and monitor the behavior of both the UVIS and IR channels. The program was prepared with the actual usage of WFC3 in mind, to provide the best calibration data and reference files for the approved scientific programs. During Cycle 19 the WFC3 team is using 125 external and 1587 internal orbits of HST time divided into 29 different programs, grouped in six categories: Monitor, Photometry, Spectroscopy, Detectors, Flat-fields, and Image Quality.

  12. [Determining biomedical equipment calibration in health care Institutions in the Risaralda Department of Colombia].

    PubMed

    López-Isaza, Giovanni A; Llamosa-Rincón, Luis E

    2008-01-01

    Determining quality features related to tracking biomedical equipment calibration patterns and their electrical safety as implemented by health-care institutions in the Risaralda department. This was a descriptive study using non-probabilistic sampling and the criterion of a greater equipment inventory and service demand for clinics, aesthetic, radiology and dentistry centres, and hospitals. A census was taken; the instrument was applied to 32 health-care institutions distributed throughout the Risaralda department's 14 municipalities between September 2005 and January 2006. Hospitals were the category having the highest amount of electro-medical equipment (56%). Pereira (the capital of Risaralda) had 81% of all electro-medical equipment. All the institutions lacked NTC-ISO-IEC-17025 accreditation regarding standards certified by the Superintendence of Industry and Commerce. None of the external institutions contracted by the institutions being surveyed was accredited. There is a public health risk in the Risaralda department: all health-care institutions lacked NTC-ISO-IEC-17025 accreditation and the external institutions (in turn hired by them for calibrating their equipment) also lacked accreditation. Based on the information obtained regarding equipment not calibrated against international patterns, there is a great danger that the quality of biomedical equipment calibration may be determined erroneously. It also places health-care institutions at a competitive disadvantage when compared to other accredited institutions in Colombia or in other countries.

  13. Self-development of visual space perception by learning from the hand

    NASA Astrophysics Data System (ADS)

    Chung, Jae-Moon; Ohnishi, Noboru

    1998-10-01

    Animals are thought to develop the ability to interpret images captured on their retina by themselves, gradually from birth and without an external supervisor. We think that this visual function is obtained together with the development of hand reaching and grasping operations, which are executed through active interaction with the environment. From the viewpoint that the hand teaches the eye, this paper shows how visual space perception is developed in a simulated robot. The robot has a simplified human-like structure used for hand-eye coordination. From the experimental results, it may be possible to validate the method as a description of how the visual space perception of biological systems is developed. In addition, the description gives a way to self-calibrate the vision of an intelligent robot in a learn-by-doing manner, without external supervision.

  14. [Research on Resistant Starch Content of Rice Grain Based on NIR Spectroscopy Model].

    PubMed

    Luo, Xi; Wu, Fang-xi; Xie, Hong-guang; Zhu, Yong-sheng; Zhang, Jian-fu; Xie, Hua-an

    2016-03-01

    A new method based on near-infrared reflectance spectroscopy (NIRS) analysis was explored to determine the resistant starch content of rice, instead of the common chemical method, which is time-consuming and costly. First, we collected 62 spectral datasets from samples spanning a wide range of resistant starch content, and the spectral data and measured chemical values were then imported into chemometrics software. A near-infrared spectroscopy calibration model for rice resistant starch content was constructed with the partial least squares (PLS) method. The results are as follows. For internal cross validation, the coefficients of determination (R2) for the untreated spectra, pretreatment with MSC+1thD, and pretreatment with 1thD+SNV were 0.9202, 0.9670 and 0.9767, respectively; the root mean square errors of prediction (RMSEP) were 1.5337, 1.0112 and 0.8371, respectively. For external validation, the coefficients of determination (R2) were 0.805, 0.976 and 0.992, respectively, and the average absolute errors were 1.456, 0.818 and 0.515, respectively. There was no significant difference between the chemical and predicted values (Tukey multiple comparison), so we consider near-infrared spectroscopic analysis more feasible than the chemical measurement. Among the different pretreatments, the first derivative combined with standard normal variate (1thD+SNV) gave the highest coefficient of determination (R2) and the lowest error values in both internal and external validation. In other words, the calibration model built with 1thD+SNV pretreatment has higher precision and smaller error.
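    The pretreatment-plus-PLS workflow described above can be prototyped along the following lines. The SNV and derivative steps are generic implementations (Savitzky-Golay is used here for the derivative), not the software used in the study, and the data shapes and component count are placeholders.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression

def snv(spectra):
    """Standard normal variate: center and scale each spectrum individually."""
    return (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

# Hypothetical NIR data: 62 samples x 700 wavelengths, plus chemically determined RS content.
rng = np.random.default_rng(0)
X = rng.random((62, 700))
y = rng.uniform(1, 30, 62)

# First derivative (Savitzky-Golay) followed by SNV, then PLS calibration.
X_pre = snv(savgol_filter(X, window_length=11, polyorder=2, deriv=1, axis=1))
model = PLSRegression(n_components=8).fit(X_pre, y)
```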

  15. Designing an experiment to measure cellular interaction forces

    NASA Astrophysics Data System (ADS)

    McAlinden, Niall; Glass, David G.; Millington, Owain R.; Wright, Amanda J.

    2013-09-01

    Optical trapping is a powerful tool in Life Science research and is becoming commonplace in many microscopy laboratories and facilities. The force applied by the laser beam on the trapped object can be accurately determined, allowing any external forces acting on the trapped object to be deduced. We aim to design a series of experiments that use an optical trap to measure and quantify the interaction force between immune cells. In order to cause minimum perturbation to the sample, we plan to directly trap T cells and remove the need to introduce exogenous beads to the sample. This poses a series of challenges and raises questions that need to be answered in order to design a set of effective end-point experiments. A typical cell is large compared to the beads normally trapped and highly non-uniform - can we reliably trap such objects and prevent them from rolling and re-orientating? In this paper we show how a spatial light modulator can produce a triple-spot trap, as opposed to a single-spot trap, giving complete control over the object's orientation and preventing it from rolling due, for example, to Brownian motion. To use an optical trap as a force transducer to measure an external force, you must first have a reliably calibrated system. The optical trapping force is typically calibrated either by applying the equipartition theorem to the Brownian motion of the trapped object or by using an escape force method, e.g. the viscous drag force method. In this paper we examine the relationship between force and displacement, as well as measuring the maximum displacement from the equilibrium position before an object falls out of the trap, hence determining the conditions under which the different calibration methods should be applied.
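    The equipartition calibration mentioned here reduces to a one-line estimate of the trap stiffness from the positional variance of the trapped object; a minimal sketch with a hypothetical position trace follows.

```python
import numpy as np

k_B = 1.380649e-23    # Boltzmann constant, J/K
T = 300.0             # absolute temperature, K

# x: position fluctuations of the trapped object along one axis, in metres (placeholder trace).
rng = np.random.default_rng(0)
x = rng.normal(0.0, 20e-9, 100_000)

# Equipartition theorem: 0.5 * k * <x^2> = 0.5 * k_B * T, hence k = k_B * T / <x^2>.
stiffness = k_B * T / np.var(x)                  # N/m
print(f"Trap stiffness ~ {stiffness * 1e6:.2f} pN/um")
```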

  16. Implications of Version 8 TOMS and SBUV Data for Long-Term Trend Analysis

    NASA Technical Reports Server (NTRS)

    Frith, Stacey M.

    2004-01-01

    Total ozone data from the Total Ozone Mapping Spectrometer (TOMS) and profile/total ozone data from the Solar Backscatter Ultraviolet (SBUV, SBUV/2) series of instruments have recently been reprocessed using new retrieval algorithms (referred to as Version 8 for both) and updated calibrations. In this paper, we incorporate the Version 8 data into a TOMS/SBUV merged total ozone data set and an SBUV merged profile ozone data set. The Total Merged Ozone Data (Total MOD) combines data from multiple TOMS and SBUV instruments to form an internally consistent global data set with virtually complete time coverage from October 1978 through December 2003. Calibration differences between instruments are accounted for using external adjustments based on instrument intercomparisons during overlap periods. Previous results showed errors due to aerosol loading and sea glint are significantly reduced in the V8 TOMS retrievals. Using SBUV as a transfer standard, calibration differences between V8 Nimbus 7 and Earth Probe TOMS data are approx. 1.3%, suggesting small errors in calibration remain. We will present updated total ozone long-term trends based on the Version 8 data. The Profile Merged Ozone Data (Profile MOD) data set is constructed using data from the SBUV series of instruments. In previous versions, SAGE data were used to establish the long-term external calibration of the combined data set. For the SBUV Version 8 data, we assess the V8 profiles through comparisons with SAGE and between SBUV instruments in overlap periods. We then construct a consistently calibrated long-term time series. Updated zonal mean trends as a function of altitude and season from the new profile data set will be shown, and uncertainties in determining the best long-term calibration will be discussed.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morrison, H; Menon, G; Sloboda, R

    The purpose of this study was to investigate the accuracy of radiochromic film calibration procedures used in external beam radiotherapy when applied to I-125 brachytherapy sources delivering higher doses, and to determine any necessary modifications to achieve similar accuracy in absolute dose measurements. GafChromic EBT3 film was used to measure radiation doses upwards of 35 Gy from 6 MV, 75 kVp and (∼28 keV) I-125 photon sources. A custom phantom was used for the I-125 irradiations to obtain a larger film area with nearly constant dose to reduce the effects of film heterogeneities on the optical density (OD) measurements. RGB transmission images were obtained with an Epson 10000XL flatbed scanner, and calibration curves relating OD and dose using a rational function were determined for each colour channel and at each energy using a non-linear least square minimization method. Differences found between the 6 MV calibration curve and those for the lower energy sources are large enough that 6 MV beams should not be used to calibrate film for low-energy sources. However, differences between the 75 kVp and I-125 calibration curves were quite small, indicating that 75 kVp is a good choice. Compared with I-125 irradiation, this gives the advantages of lower type B uncertainties and markedly reduced irradiation time. To obtain high accuracy calibration for the dose range up to 35 Gy, two-segment piece-wise fitting was required. This yielded absolute dose measurement accuracy above 1 Gy of ∼2% for 75 kVp and ∼5% for I-125 seed exposures.
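    The calibration-curve fit described here is a nonlinear least-squares fit of dose against optical density per colour channel. The study fitted a rational function; the sketch below instead uses a power-law net-OD parameterization often used for radiochromic film, purely to show the mechanics of such a fit, and all calibration points are placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def dose_from_netod(net_od, a, b, n):
    """Generic film calibration form: dose as a function of net optical density."""
    return a * net_od + b * net_od ** n

# Placeholder calibration points (net OD of one colour channel vs. delivered dose in Gy).
net_od = np.array([0.05, 0.10, 0.20, 0.35, 0.50, 0.60, 0.70])
dose_gy = np.array([0.5, 1.5, 4.0, 9.0, 17.0, 24.0, 35.0])

params, _ = curve_fit(dose_from_netod, net_od, dose_gy, p0=[8.0, 70.0, 2.5])
print("fitted a, b, n:", params)
```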

  18. Characterization of magnetic force microscopy probe tip remagnetization for measurements in external in-plane magnetic fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weis, Tanja; Engel, Dieter; Ehresmann, Arno

    2008-12-15

    A quantitative analysis of magnetic force microscopy (MFM) images taken in external in-plane magnetic fields is difficult because of the influence of the magnetic field on the magnetization state of the magnetic probe tip. We prepared calibration samples by ion bombardment induced magnetic patterning with a topographically flat magnetic pattern magnetically stable in a certain external magnetic field range for a quantitative characterization of the MFM probe tip magnetization in point-dipole approximation.

  19. ISO/IEC 17025 laboratory accreditation of NRC Acoustical Standards Program

    NASA Astrophysics Data System (ADS)

    Wong, George S. K.; Wu, Lixue; Hanes, Peter; Ohm, Won-Suk

    2004-05-01

    Experience gained during the external accreditation of the Acoustical Standards Program at the Institute for National Measurement Standards of the National Research Council is discussed. Some highlights include the preparation of documents for calibration procedures, control documents with attention to reducing future paperwork, and the need to maintain documentation or paper trails to satisfy the external assessors. General recommendations will be given for laboratories that are contemplating an external audit in accordance with the requirements of ISO/IEC 17025.

  20. Calorimetric method of ac loss measurement in a rotating magnetic field.

    PubMed

    Ghoshal, P K; Coombs, T A; Campbell, A M

    2010-07-01

    A method is described for calorimetric ac-loss measurements of high-T(c) superconductors (HTS) at 80 K. It is based on a technique used at 4.2 K for conventional superconducting wires that allows an easy loss measurement in parallel or perpendicular external field orientation. This paper focuses on the ac loss measurement setup and its calibration in a rotating magnetic field. The experimental setup demonstrates loss measurement using a temperature-rise method under the influence of a rotating magnetic field; the slight temperature increase of the sample in an ac field is used as a measure of the losses. The aim is to simulate the loss in rotating machines using HTS. This is a unique technique to measure total ac loss in HTS at power frequencies. The sample is mounted onto a cold finger extended from a liquid nitrogen heat exchanger (HEX). The thermal insulation between the HEX and the sample is provided by a sample holder made of a material with low thermal conductivity and low eddy current heating, placed in a vacuum vessel. A temperature sensor and a noninductive heater have been incorporated in the sample holder, allowing a rapid sample change. The main part of the data obtained in the calorimetric measurement is used for calibration. The focus is on the accuracy and the calibrations required to predict the actual ac losses in HTS. This setup has the advantage of being able to measure the total ac loss under the influence of a continuously moving field, as experienced by any rotating machine.

  1. The Accuracy of Two-Way Satellite Time Transfer Calibrations

    DTIC Science & Technology

    2005-01-01

    Results from successive calibrations of Two-Way Satellite Time and Frequency Transfer (TWSTFT) operational equipment at ... USNO and five remote stations using portable TWSTFT equipment are analyzed for internal and external errors, finding an average random error of ±0.35 ... most accurate means of operational long-distance time transfer are Two-Way Satellite Time and Frequency Transfer (TWSTFT) and carrier-phase GPS

  2. Surface scanning through a cylindrical tank of coupling fluid for clinical microwave breast imaging exams

    PubMed Central

    Pallone, Matthew J.; Meaney, Paul M.; Paulsen, Keith D.

    2012-01-01

    Purpose: Microwave tomographic image quality can be improved significantly with prior knowledge of the breast surface geometry. The authors have developed a novel laser scanning system, residing completely external to the tank (and the aqueous environment), that is capable of accurately recovering surface renderings of breast-shaped phantoms immersed within a cylindrical tank of coupling fluid and that overcomes the challenges associated with the optical distortions caused by refraction at the air, tank wall, and liquid bath interfaces. Methods: The scanner utilizes two laser line generators and a small CCD camera mounted concentrically on a rotating gantry about the microwave imaging tank. Various calibration methods were considered for optimizing the accuracy of the scanner in the presence of the optical distortions, including traditional ray tracing and image registration approaches. In this paper, the authors describe the construction and operation of the laser scanner, compare the efficacy of several calibration methods—including analytical ray tracing and piecewise linear, polynomial, locally weighted mean, and thin-plate-spline (TPS) image registrations—and report outcomes from preliminary phantom experiments. Results: The results show that errors in calibrating camera angles and position prevented analytical ray tracing from achieving submillimeter accuracy in the surface renderings obtained from our scanner configuration. Conversely, calibration by image registration reliably attained mean surface errors of less than 0.5 mm depending on the geometric complexity of the object scanned. While each of the image registration approaches outperformed the ray tracing strategy, the authors found global polynomial methods produced the best compromise between average surface error and scanner robustness. Conclusions: The laser scanning system provides a fast and accurate method of three dimensional surface capture in the aqueous environment commonly found in microwave breast imaging. Optical distortions imposed by the imaging tank and coupling bath diminished the effectiveness of the ray tracing approach; however, calibration through image registration techniques reliably produced scans of submillimeter accuracy. Tests of the system with breast-shaped phantoms demonstrated the successful implementation of the scanner for the intended application. PMID:22755695

  3. Biogeographic Dating of Speciation Times Using Paleogeographically Informed Processes.

    PubMed

    Landis, Michael J

    2017-03-01

    Standard models of molecular evolution cannot estimate absolute speciation times alone, and require external calibrations to do so, such as fossils. Because fossil calibration methods rely on the incomplete fossil record, a great number of nodes in the tree of life cannot be dated precisely. However, many major paleogeographical events are dated, and since biogeographic processes depend on paleogeographical conditions, biogeographic dating may be used as an alternative or complementary method to fossil dating. I demonstrate how a time-stratified biogeographic stochastic process may be used to estimate absolute divergence times by conditioning on dated paleogeographical events. Informed by the current paleogeographical literature, I construct an empirical dispersal graph using 25 areas and 26 epochs for the past 540 Ma of Earth's history. Simulations indicate biogeographic dating performs well so long as paleogeography imposes constraint on biogeographic character evolution. To gauge whether biogeographic dating may be of practical use, I analyzed the well-studied turtle clade (Testudines) to assess how well biogeographic dating fares when compared to fossil-calibrated dating estimates reported in the literature. Fossil-free biogeographic dating estimated the age of the most recent common ancestor of extant turtles to be from the Late Triassic, which is consistent with fossil-based estimates. Dating precision improves further when including a root node fossil calibration. The described model, paleogeographical dispersal graph, and analysis scripts are available for use with RevBayes. © The Author(s) 2016. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  4. Selecting the optimum number of partial least squares components for the calibration of attenuated total reflectance-mid-infrared spectra of undesigned kerosene samples.

    PubMed

    Gómez-Carracedo, M P; Andrade, J M; Rutledge, D N; Faber, N M

    2007-03-07

    Selecting the correct dimensionality is critical for obtaining partial least squares (PLS) regression models with good predictive ability. Although calibration and validation sets are best established using experimental designs, industrial laboratories cannot afford such an approach. Typically, samples are collected in a (formally) undesigned way, spread over time, and their measurements are included in routine measurement processes. This makes it hard to evaluate PLS model dimensionality. In this paper, classical criteria (leave-one-out cross-validation and adjusted Wold's criterion) are compared to recently proposed alternatives (smoothed PLS-PoLiSh and a randomization test) to seek out the optimum dimensionality of PLS models. Kerosene (jet fuel) samples were measured by attenuated total reflectance-mid-IR spectrometry and their spectra were used to predict eight important properties determined using reference methods that are time-consuming and prone to analytical errors. The alternative methods were shown to give reliable dimensionality predictions when compared to external validation. By contrast, the simpler methods seemed to be largely affected by the largest changes in the modeling capabilities of the first components.
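
    The comparison above hinges on estimating prediction error as a function of the number of latent variables. A minimal leave-one-out cross-validation sketch, using scikit-learn and simulated spectra rather than the kerosene data, is:

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import LeaveOneOut, cross_val_predict

        rng = np.random.default_rng(0)
        X = rng.normal(size=(40, 200))                          # simulated ATR-mid-IR spectra
        y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=40)    # simulated property

        rmsecv = []
        for n_comp in range(1, 11):
            pls = PLSRegression(n_components=n_comp)
            y_cv = cross_val_predict(pls, X, y, cv=LeaveOneOut()).ravel()
            rmsecv.append(np.sqrt(np.mean((y - y_cv) ** 2)))

        best = int(np.argmin(rmsecv)) + 1
        print("RMSECV per component count:", np.round(rmsecv, 3))
        print("suggested number of PLS components:", best)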

  5. A novel second-order standard addition analytical method based on data processing with multidimensional partial least-squares and residual bilinearization.

    PubMed

    Lozano, Valeria A; Ibañez, Gabriela A; Olivieri, Alejandro C

    2009-10-05

    In the presence of analyte-background interactions and a significant background signal, both second-order multivariate calibration and standard addition are required for successful analyte quantitation achieving the second-order advantage. This report discusses a modified second-order standard addition method, in which the test data matrix is subtracted from the standard addition matrices, and quantitation proceeds via the classical external calibration procedure. It is shown that this novel data processing method allows one to apply not only parallel factor analysis (PARAFAC) and multivariate curve resolution-alternating least-squares (MCR-ALS), but also the recently introduced and more flexible partial least-squares (PLS) models coupled to residual bilinearization (RBL). In particular, the multidimensional variant N-PLS/RBL is shown to produce the best analytical results. The comparison is carried out with the aid of a set of simulated data, as well as two experimental data sets: one aimed at the determination of salicylate in human serum in the presence of naproxen as an additional interferent, and the second one devoted to the analysis of danofloxacin in human serum in the presence of salicylate.

  6. Quantitative Determination of Fluorine Content in Blends of Polylactide (PLA)–Talc Using Near Infrared Spectroscopy

    PubMed Central

    Tamburini, Elena; Tagliati, Chiara; Bonato, Tiziano; Costa, Stefania; Scapoli, Chiara; Pedrini, Paola

    2016-01-01

    Near-infrared spectroscopy (NIRS) has been widely used for quantitative and/or qualitative determination of a wide range of matrices. The objective of this study was to develop a NIRS method for the quantitative determination of fluorine content in polylactide (PLA)-talc blends. A blending profile was obtained by mixing different amounts of PLA granules and talc powder. The calibration model was built correlating wet chemical data (alkali digestion method) and NIR spectra. Using FT (Fourier Transform)-NIR technique, a Partial Least Squares (PLS) regression model was set-up, in a concentration interval of 0 ppm of pure PLA to 800 ppm of pure talc. Fluorine content prediction (R2cal = 0.9498; standard error of calibration, SEC = 34.77; standard error of cross-validation, SECV = 46.94) was then externally validated by means of a further 15 independent samples (R2EX.V = 0.8955; root mean standard error of prediction, RMSEP = 61.08). A positive relationship between an inorganic component as fluorine and NIR signal has been evidenced, and used to obtain quantitative analytical information from the spectra. PMID:27490548
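
    The figures of merit quoted above (SEC, SECV, RMSEP) follow standard chemometric definitions; a minimal sketch of how SEC and RMSEP might be computed from reference and predicted values is shown below. The arrays and the degrees-of-freedom convention are assumptions, not values or formulas taken from the paper.

        import numpy as np

        def sec(y_ref, y_pred, n_factors):
            """Standard error of calibration (one common convention)."""
            resid = y_ref - y_pred
            return np.sqrt(np.sum(resid ** 2) / (len(y_ref) - n_factors - 1))

        def rmsep(y_ref, y_pred):
            """Root mean square error of prediction on an external validation set."""
            return np.sqrt(np.mean((y_ref - y_pred) ** 2))

        # Hypothetical fluorine reference values (ppm) and model predictions.
        y_cal_ref,  y_cal_pred = np.array([100., 300., 500.]), np.array([110., 290., 515.])
        y_val_ref,  y_val_pred = np.array([200., 400.]),       np.array([185., 430.])
        print("SEC  :", sec(y_cal_ref, y_cal_pred, n_factors=1))
        print("RMSEP:", rmsep(y_val_ref, y_val_pred))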

  7. Estimating economic value of agricultural water under changing conditions and the effects of spatial aggregation.

    PubMed

    Medellín-Azuara, Josué; Harou, Julien J; Howitt, Richard E

    2010-11-01

    Given the high proportion of water used for agriculture in certain regions, the economic value of agricultural water can be an important tool for water management and policy development. This value is quantified using economic demand curves for irrigation water. Such demand functions show the incremental contribution of water to agricultural production. Water demand curves are estimated using econometric or optimisation techniques. Calibrated agricultural optimisation models allow the derivation of demand curves using smaller datasets than econometric models. This paper introduces these subject areas then explores the effect of spatial aggregation (upscaling) on the valuation of water for irrigated agriculture. A case study from the Rio Grande-Rio Bravo Basin in North Mexico investigates differences in valuation at farm and regional aggregated levels under four scenarios: technological change, warm-dry climate change, changes in agricultural commodity prices, and water costs for agriculture. The scenarios consider changes due to external shocks or new policies. Positive mathematical programming (PMP), a calibrated optimisation method, is the deductive valuation method used. An exponential cost function is compared to the quadratic cost functions typically used in PMP. Results indicate that the economic value of water at the farm level and the regionally aggregated level are similar, but that the variability and distributional effects of each scenario are affected by aggregation. Moderately aggregated agricultural production models are effective at capturing average-farm adaptation to policy changes and external shocks. Farm-level models best reveal the distribution of scenario impacts. Copyright © 2009 Elsevier B.V. All rights reserved.

  8. Construction of a 1 MeV Electron Accelerator for High Precision Beta Decay Studies

    NASA Astrophysics Data System (ADS)

    Longfellow, Brenden

    2014-09-01

    Beta decay energy calibration for detectors is typically established using conversion sources. However, the calibration points from conversion sources are not evenly distributed over the beta energy spectrum, and the foil backing of the conversion sources produces perturbations in the calibration spectrum. To improve this, an external, tunable electron beam coupled by a magnetic field can be used to calibrate the detector. The 1 MeV electron accelerator in development at Triangle Universities Nuclear Laboratory (TUNL) utilizes a pelletron charging system. The electron gun delivers 10^4 electrons per second with an energy range of 50 keV to 1 MeV and is pulsed at a 10 kHz rate with a few ns pulse width. The magnetic field in the spectrometer is 1 T, and guiding fields of 0.01 to 0.05 T for the electron gun are used to produce a range of pitch angles. This accelerator can be used to calibrate detectors evenly over its energy range and determine the detector response over a range of pitch angles. TUNL REU Program.

  9. Optical calibration of the Auger fluorescence telescopes

    NASA Astrophysics Data System (ADS)

    Matthews, John A. J.

    2003-02-01

    The Pierre Auger Observatory is optimized to study the cosmic ray spectrum in the region of the Greisen-Zatsepin-Kuz'min (GZK) cutoff, i.e. cosmic rays with energies of ~10^20 eV. Cosmic rays are detected as extensive air showers. To measure these showers, each Auger site combines a 3000 km^2 ground array with air fluorescence telescopes into a hybrid detector. Our design choice is motivated by the heightened importance of the energy scale, and related systematic uncertainties in shower energies, for experiments investigating the GZK cutoff. This paper focuses on the optical calibration of the Auger fluorescence telescopes. The optical calibration is done in three independent ways: an absolute end-to-end calibration using a uniform, calibrated-intensity light source at the telescope entrance aperture; a component-by-component calibration using both laboratory and in-situ measurements; and Rayleigh-scattered light from external laser beams. The calibration concepts and related instrumentation are summarized. Results from the 5-month engineering array test are presented.

  10. Prediction of Outcome after Moderate and Severe Traumatic Brain Injury: External Validation of the IMPACT and CRASH Prognostic Models

    PubMed Central

    Roozenbeek, Bob; Lingsma, Hester F.; Lecky, Fiona E.; Lu, Juan; Weir, James; Butcher, Isabella; McHugh, Gillian S.; Murray, Gordon D.; Perel, Pablo; Maas, Andrew I.R.; Steyerberg, Ewout W.

    2012-01-01

    Objective The International Mission on Prognosis and Analysis of Clinical Trials (IMPACT) and Corticoid Randomisation After Significant Head injury (CRASH) prognostic models predict outcome after traumatic brain injury (TBI) but have not been compared in large datasets. The objective of this study is to externally validate and compare the IMPACT and CRASH prognostic models for prediction of outcome after moderate or severe TBI. Design External validation study. Patients We considered 5 new datasets with a total of 9036 patients, comprising three randomized trials and two observational series, containing prospectively collected individual TBI patient data. Measurements Outcomes were mortality and unfavourable outcome, based on the Glasgow Outcome Score (GOS) at six months after injury. To assess performance, we studied the discrimination of the models (by AUCs) and calibration (by comparison of the mean observed to predicted outcomes and by calibration slopes). Main Results The highest discrimination was found in the TARN trauma registry (AUCs between 0.83 and 0.87), and the lowest discrimination in the Pharmos trial (AUCs between 0.65 and 0.71). Although differences in predictor effects between development and validation populations were found (calibration slopes varying between 0.58 and 1.53), the differences in discrimination were largely explained by differences in case-mix in the validation studies. Calibration was good: the fraction of observed outcomes generally agreed well with the mean predicted outcome. No meaningful differences were noted in performance between the IMPACT and CRASH models. More complex models discriminated slightly better than simpler variants. Conclusions Since both the IMPACT and the CRASH prognostic models show good generalizability to more recent data, they are valid instruments to quantify prognosis in TBI. PMID:22511138
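
    Discrimination and calibration slope of a risk model can be estimated with a few lines of code. The sketch below uses scikit-learn for the AUC and an essentially unpenalized logistic fit of the outcome on the logit of the predicted risk for the slope, one common convention; the data are simulated and the approach is not necessarily the authors' exact implementation.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(1)
        p_pred = rng.uniform(0.05, 0.95, size=500)     # model-predicted risks
        y_obs = rng.binomial(1, p_pred)                # simulated observed outcomes

        auc = roc_auc_score(y_obs, p_pred)

        # Calibration slope: regress the outcome on the logit of the predicted risk.
        logit = np.log(p_pred / (1 - p_pred)).reshape(-1, 1)
        model = LogisticRegression(C=1e6)              # large C ~ no regularisation
        slope = model.fit(logit, y_obs).coef_[0, 0]

        print(f"AUC = {auc:.2f}, calibration slope = {slope:.2f}")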

  11. Aspheric and freeform surfaces metrology with software configurable optical test system: a computerized reverse Hartmann test

    NASA Astrophysics Data System (ADS)

    Su, Peng; Khreishi, Manal A. H.; Su, Tianquan; Huang, Run; Dominguez, Margaret Z.; Maldonado, Alejandro; Butel, Guillaume; Wang, Yuhao; Parks, Robert E.; Burge, James H.

    2014-03-01

    A software configurable optical test system (SCOTS) based on deflectometry was developed at the University of Arizona for rapidly, robustly, and accurately measuring precision aspheric and freeform surfaces. SCOTS uses a camera with an external stop to realize a Hartmann test in reverse. With the external camera stop as the reference, a coordinate measuring machine can be used to calibrate the SCOTS test geometry to a high accuracy. Systematic errors from the camera are carefully investigated and controlled. Camera pupil imaging aberration is removed with the external aperture stop. Imaging aberration and other inherent errors are suppressed with an N-rotation test. The performance of the SCOTS test is demonstrated with the measurement results from a 5-m-diameter Large Synoptic Survey Telescope tertiary mirror and an 8.4-m diameter Giant Magellan Telescope primary mirror. The results show that SCOTS can be used as a large-dynamic-range, high-precision, and non-null test method for precision aspheric and freeform surfaces. The SCOTS test can achieve measurement accuracy comparable to traditional interferometric tests.

  12. Optically transmitted and inductively coupled electric reference to access in vivo concentrations for quantitative proton-decoupled ¹³C magnetic resonance spectroscopy.

    PubMed

    Chen, Xing; Pavan, Matteo; Heinzer-Schweizer, Susanne; Boesiger, Peter; Henning, Anke

    2012-01-01

    This report describes our efforts on quantification of tissue metabolite concentrations in mM by nuclear Overhauser enhanced and proton decoupled (13) C magnetic resonance spectroscopy and the Electric Reference To access In vivo Concentrations (ERETIC) method. Previous work showed that a calibrated synthetic magnetic resonance spectroscopy-like signal transmitted through an optical fiber and inductively coupled into a transmit/receive coil represents a reliable reference standard for in vivo (1) H magnetic resonance spectroscopy quantification on a clinical platform. In this work, we introduce a related implementation that enables simultaneous proton decoupling and ERETIC-based metabolite quantification and hence extends the applicability of the ERETIC method to nuclear Overhauser enhanced and proton decoupled in vivo (13) C magnetic resonance spectroscopy. In addition, ERETIC signal stability under the influence of simultaneous proton decoupling is investigated. The proposed quantification method was cross-validated against internal and external reference standards on human skeletal muscle. The ERETIC signal intensity stability was 100.65 ± 4.18% over 3 months including measurements with and without proton decoupling. Glycogen and unsaturated fatty acid concentrations measured with the ERETIC method were in excellent agreement with internal creatine and external phantom reference methods, showing a difference of 1.85 ± 1.21% for glycogen and 1.84 ± 1.00% for unsaturated fatty acid between ERETIC and creatine-based quantification, whereas the deviations between external reference and creatine-based quantification are 6.95 ± 9.52% and 3.19 ± 2.60%, respectively. Copyright © 2011 Wiley Periodicals, Inc.

  13. Life Cycle Greenhouse Gas Emissions and Energy Analysis of Passive House with Variable Construction Materials

    NASA Astrophysics Data System (ADS)

    Baďurová, Silvia; Ponechal, Radoslav; Ďurica, Pavol

    2013-11-01

    The term "passive house" refers to rigorous and voluntary standards for energy efficiency in a building, reducing its ecological footprint. There are many ways how to build a passive house successfully. These designs as well as construction techniques vary from ordinary timber constructions using packs of straw or constructions of clay. This paper aims to quantify environmental quality of external walls in a passive house, which are made of a timber frame, lightweight concrete blocks and sand-lime bricks in order to determine whether this constructional form provides improved environmental performance. Furthermore, this paper assesses potential benefit of energy savings at heating of houses in which their external walls are made of these three material alternatives. A two storey residential passive house, with floorage of 170.6 m2, was evaluated. Some measurements of air and surface temperatures were done as a calibration etalon for a method of simulation.

  14. Optimization, evaluation and calibration of a cross-strip DOI detector

    NASA Astrophysics Data System (ADS)

    Schmidt, F. P.; Kolb, A.; Pichler, B. J.

    2018-02-01

    This study presents the evaluation of a SiPM detector with depth of interaction (DOI) capability via a dual-sided readout that is suitable for high-resolution positron emission tomography and magnetic resonance (PET/MR) imaging. Two different 12 × 12 pixelated LSO scintillator arrays with a crystal pitch of 1.60 mm are examined. One array is 20 mm long with crystals separated by the specular reflector Vikuiti enhanced specular reflector (ESR), and the other is 18 mm long with crystals separated by the diffuse reflector Lumirror E60 (E60). An improvement in energy resolution from 22.6% to 15.5% for the scintillator array with the E60 reflector is achieved by taking a nonlinear light collection correction into account. The results are FWHM energy resolutions of 14.0% and 15.5%, average FWHM DOI resolutions of 2.96 mm and 1.83 mm, and FWHM coincidence resolving times of 1.09 ns and 1.48 ns for the scintillator array with the ESR and that with the E60 reflector, respectively. The measured DOI signal ratios need to be assigned to an interaction depth inside the scintillator crystal. A linear and a nonlinear method, using the intrinsic scintillator radiation from lutetium, are implemented for an easy-to-apply calibration and are compared to the conventional method, which exploits a setup with an externally collimated radiation beam. The deviation between the DOI functions of the linear or nonlinear method and the conventional method is determined. The resulting average of differences in DOI positions is 0.67 mm and 0.45 mm for the nonlinear calibration method for the scintillator array with the ESR and with the E60 reflector, respectively, whereas the linear calibration method results in 0.51 mm and 0.32 mm for the scintillator array with the ESR and the E60 reflector, respectively, and is, due to its simplicity, also applicable in assembled detector systems.
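
    The linear DOI calibration amounts to mapping the measured signal ratio to a depth along the crystal. A minimal sketch of such a mapping, anchored at two hypothetical end-point ratios (for example taken from the intrinsic lutetium flood data), is:

        import numpy as np

        crystal_length_mm = 20.0

        # Hypothetical DOI signal ratios observed at the two crystal ends.
        ratio_at_entrance, ratio_at_exit = 0.25, 0.78

        # Linear calibration: depth = a * ratio + b
        a = crystal_length_mm / (ratio_at_exit - ratio_at_entrance)
        b = -a * ratio_at_entrance

        measured_ratio = np.array([0.30, 0.50, 0.70])
        depth_mm = a * measured_ratio + b
        print(np.round(depth_mm, 2))    # estimated interaction depths in mm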

  15. Optimization, evaluation and calibration of a cross-strip DOI detector.

    PubMed

    Schmidt, F P; Kolb, A; Pichler, B J

    2018-02-20

    This study presents the evaluation of a SiPM detector with depth of interaction (DOI) capability via a dual-sided readout that is suitable for high-resolution positron emission tomography and magnetic resonance (PET/MR) imaging. Two different 12 × 12 pixelated LSO scintillator arrays with a crystal pitch of 1.60 mm are examined. One array is 20 mm long with crystals separated by the specular reflector Vikuiti enhanced specular reflector (ESR), and the other is 18 mm long with crystals separated by the diffuse reflector Lumirror E60 (E60). An improvement in energy resolution from 22.6% to 15.5% for the scintillator array with the E60 reflector is achieved by taking a nonlinear light collection correction into account. The results are FWHM energy resolutions of 14.0% and 15.5%, average FWHM DOI resolutions of 2.96 mm and 1.83 mm, and FWHM coincidence resolving times of 1.09 ns and 1.48 ns for the scintillator array with the ESR and that with the E60 reflector, respectively. The measured DOI signal ratios need to be assigned to an interaction depth inside the scintillator crystal. A linear and a nonlinear method, using the intrinsic scintillator radiation from lutetium, are implemented for an easy-to-apply calibration and are compared to the conventional method, which exploits a setup with an externally collimated radiation beam. The deviation between the DOI functions of the linear or nonlinear method and the conventional method is determined. The resulting average of differences in DOI positions is 0.67 mm and 0.45 mm for the nonlinear calibration method for the scintillator array with the ESR and with the E60 reflector, respectively, whereas the linear calibration method results in 0.51 mm and 0.32 mm for the scintillator array with the ESR and the E60 reflector, respectively, and is, due to its simplicity, also applicable in assembled detector systems.

  16. 3D morphology reconstruction using linear array CCD binocular stereo vision imaging system

    NASA Astrophysics Data System (ADS)

    Pan, Yu; Wang, Jinjiang

    2018-01-01

    A binocular vision imaging system with a small field of view cannot reconstruct the 3-D shape of a dynamic object. We developed a linear array CCD binocular vision imaging system that uses different calibration and reconstruction methods. Compared with a conventional binocular vision imaging system, the linear array CCD binocular vision imaging system has a wider field of view and can reconstruct the 3-D morphology of objects in continuous motion, with accurate results. This research mainly introduces the composition and principle of the linear array CCD binocular vision imaging system, including the calibration, capture, matching, and reconstruction steps. The system consists of two linear array cameras placed in a special arrangement and a horizontal moving platform that carries the objects. The internal and external parameters of the cameras are obtained by calibration in advance. The cameras then capture images of the moving objects, and the results are matched and reconstructed in 3-D. The linear array CCD binocular vision imaging system can accurately measure the 3-D appearance of moving objects, which is of great significance for measuring the 3-D morphology of moving objects.

  17. Research on self-calibration biaxial autocollimator based on ZYNQ

    NASA Astrophysics Data System (ADS)

    Guo, Pan; Liu, Bingguo; Liu, Guodong; Zhong, Yao; Lu, Binghui

    2018-01-01

    Existing autocollimators are mainly based on computers or electronic devices that can be connected to the internet; their precision, measurement range, and resolution are limited, and external displays are needed to show images in real time. Moreover, no autocollimator on the market offers real-time calibration. In this paper, we propose a biaxial autocollimator based on the ZYNQ embedded platform to solve the above problems. Firstly, the traditional optical system is improved and a light path is added for real-time calibration. Then, in order to improve measurement speed, an embedded platform based on ZYNQ that combines a Linux operating system with the autocollimator is designed. In this part, image acquisition, image processing, image display, and a Qt-based man-machine interaction interface are implemented. Finally, the system realizes two-dimensional small-angle measurement. Experimental results showed that the proposed method can improve the angle measurement accuracy. The standard deviation at close distance (1.5 m) is 0.15" in the horizontal direction of the image and 0.24" in the vertical direction, and the repeatability of measurement at long distance (10 m) is improved by 0.12 in the horizontal direction of the image and 0.3 in the vertical direction.

  18. External Validation of the Updated Partin Tables in a Cohort of French and Italian Men

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhojani, Naeem; Department of Urology, University of Montreal, Montreal, PQ; Salomon, Laurent

    2009-02-01

    Purpose: To test the discrimination and calibration properties of the newly developed 2007 Partin Tables in two European cohorts with localized prostate cancer. Methods: Data on clinical and pathologic characteristics were obtained for 1,064 men treated with radical prostatectomy at the Creteil University Health Center in France (n = 839) and at the Milan University Vita-Salute in Italy (n = 225). Overall discrimination was assessed with receiver operating characteristic curve analysis, which quantified the accuracy of stage predictions for each center. Calibration plots graphically explored the relationship between predicted and observed rates of extracapsular extension (ECE), seminal vesicle invasion (SVI) and lymph node invasion (LNI). Results: The rates of ECE, SVI, and LNI were 28%, 14%, and 2% in the Creteil cohort vs. 11%, 5%, and 5% in the Milan cohort. In the Creteil cohort, the accuracy of ECE, SVI, and LNI prediction was 61%, 71%, and 82% vs. 66%, 92% and 75% for the Milan cohort. Important departures were recorded between Partin Tables' predicted and observed rates of ECE, SVI, and LNI within both cohorts. Conclusions: The 2007 Partin Tables demonstrated worse performance in European men than they originally did in North American men. This indicates that predictive models need to be externally validated before their implementation into clinical practice.

  19. Method for measuring the alternating current half-wave voltage of a Mach-Zehnder modulator based on opto-electronic oscillation.

    PubMed

    Hong, Jun; Chen, Dongchu; Peng, Zhiqiang; Li, Zulin; Liu, Haibo; Guo, Jian

    2018-05-01

    A new method for measuring the alternating current (AC) half-wave voltage of a Mach-Zehnder modulator is proposed and verified by experiment in this paper. Based on opto-electronic self-oscillation technology, the physical relationship between the saturation output power of the oscillating signal and the AC half-wave voltage is revealed, and the value of the AC half-wave voltage is obtained by measuring the saturation output power of the oscillating signal. The experimental results show that data measured with the new method agree with those obtained by a traditional method, and the new method requires neither an external microwave signal source nor calibration at each measurement frequency. The measuring process is thus simplified while the measurement accuracy is preserved, and the method has good practical value.

  20. Simultaneous quantitative analysis of olmesartan, amlodipine and hydrochlorothiazide in their combined dosage form utilizing classical and alternating least squares based chemometric methods.

    PubMed

    Darwish, Hany W; Bakheit, Ahmed H; Abdelhameed, Ali S

    2016-03-01

    Simultaneous spectrophotometric analysis of a multi-component dosage form of olmesartan, amlodipine and hydrochlorothiazide used for the treatment of hypertension has been carried out using various chemometric methods. Multivariate calibration methods include classical least squares (CLS) executed by net analyte processing (NAP-CLS), orthogonal signal correction (OSC-CLS) and direct orthogonal signal correction (DOSC-CLS) in addition to multivariate curve resolution-alternating least squares (MCR-ALS). Results demonstrated the efficiency of the proposed methods as quantitative tools of analysis as well as their qualitative capability. The three analytes were determined precisely using the aforementioned methods in an external data set and in a dosage form after optimization of experimental conditions. Finally, the efficiency of the models was validated via comparison with the partial least squares (PLS) method in terms of accuracy and precision.

  1. Calibrated Noise Measurements with Induced Receiver Gain Fluctuations

    NASA Technical Reports Server (NTRS)

    Racette, Paul; Walker, David; Gu, Dazhen; Rajola, Marco; Spevacek, Ashly

    2011-01-01

    The lack of well-developed techniques for modeling changing statistical moments in our observations has stymied the application of stochastic process theory in science and engineering. These limitations were encountered when modeling the performance of radiometer calibration architectures and algorithms in the presence of nonstationary receiver fluctuations. Analyses of measured signals have traditionally been limited to a single measurement series. In a radiometer that samples a set of noise references, by contrast, the data collection can be treated as an ensemble set of measurements of the receiver state. Noise Assisted Data Analysis (NADA) is a growing field of study with significant potential for aiding the understanding and modeling of nonstationary processes. Typically, NADA entails adding noise to a signal to produce an ensemble set on which statistical analysis is performed. Alternatively, as in radiometric measurements, mixing a signal with calibrated noise provides, through the calibration process, the means to detect deviations from the stationarity assumption and thereby a measurement tool to characterize the signal's nonstationary properties. Data sets comprised of calibrated noise measurements have been limited to those collected with naturally occurring fluctuations in the radiometer receiver. To examine the application of NADA using calibrated noise, a Receiver Gain Modulation Circuit (RGMC) was designed and built to modulate the gain of a radiometer receiver using an external signal. In 2010, an RGMC was installed and operated at the National Institute of Standards and Technology (NIST) using their Noise Figure Radiometer (NFRad) and national standard noise references. The data collected are the first known set of calibrated noise measurements from a receiver with an externally modulated gain. As an initial step, sinusoidal and step-function signals were used to modulate the receiver gain, to evaluate the circuit characteristics and to study the performance of a variety of calibration algorithms. The receiver noise temperature and time-bandwidth product of the NFRad are calculated from the data. Statistical analysis using temporal-dependent calibration algorithms reveals that the naturally occurring fluctuations in the receiver are stationary over long intervals (hundreds of seconds); however, the receiver exhibits local nonstationarity over the interval during which one set of reference measurements is collected. A variety of calibration algorithms have been applied to the data to assess their performance with the gain fluctuation signals. This presentation will describe the RGMC, the experiment design, and a comparative analysis of calibration algorithms.
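
    The calibration underlying such radiometric measurements is typically a two-point (hot/cold reference) linear calibration of the receiver. A minimal sketch with hypothetical reference temperatures and counts, not values from the experiment, is:

        import numpy as np

        # Known reference noise temperatures (K) and the detector counts measured
        # while the receiver views each reference; hypothetical values.
        t_hot, t_cold = 350.0, 77.0
        c_hot, c_cold = 8200.0, 3100.0

        # Linear receiver model: counts = gain * (T_scene + T_receiver)
        gain = (c_hot - c_cold) / (t_hot - t_cold)
        t_receiver = c_cold / gain - t_cold

        # Calibrate an unknown scene measurement.
        c_scene = 5400.0
        t_scene = c_scene / gain - t_receiver
        print(f"gain = {gain:.2f} counts/K, T_rx = {t_receiver:.1f} K, "
              f"T_scene = {t_scene:.1f} K")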

  2. Validated ¹H and 13C Nuclear Magnetic Resonance Methods for the Quantitative Determination of Glycerol in Drug Injections.

    PubMed

    Lu, Jiaxi; Wang, Pengli; Wang, Qiuying; Wang, Yanan; Jiang, Miaomiao

    2018-05-15

    In the current study, we employed high-resolution proton and carbon nuclear magnetic resonance spectroscopy (¹H and 13C NMR) for the quantitative analysis of glycerol in drug injections without any complex pre-treatment or derivatization of the samples. The established methods were validated with good specificity, linearity, accuracy, precision, stability, and repeatability. Our results revealed that the glycerol contents were conveniently calculated directly from the integration ratios of peak areas with an internal standard in the ¹H NMR spectra, while integration of peak heights was more appropriate for 13C NMR in combination with an external calibration of glycerol. The developed methods were both successfully applied to drug injections. Quantitative NMR methods show broad promise for glycerol determination in various liquid samples.
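
    Quantification against an internal standard in ¹H qNMR follows from the ratio of proton-normalized integrals. A minimal sketch of that arithmetic, with hypothetical integrals, masses, and standard (not values from the paper), is:

        # Ratio method for 1H qNMR with an internal standard (hypothetical numbers).
        I_analyte, n_analyte = 3.20, 5      # integral and number of contributing protons
        I_std,     n_std     = 1.00, 9      # internal standard integral and proton count

        m_std = 10.0        # mg of internal standard weighed into the sample
        M_std = 204.22      # g/mol, molar mass of the standard (hypothetical choice)
        M_analyte = 92.09   # g/mol, molar mass of glycerol

        # mass_analyte = (I_a/N_a) / (I_s/N_s) * (M_a/M_s) * m_s
        m_analyte = (I_analyte / n_analyte) / (I_std / n_std) * (M_analyte / M_std) * m_std
        print(f"analyte mass: {m_analyte:.2f} mg")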

  3. Simulating cartilage conduction sound to estimate the sound pressure level in the external auditory canal

    NASA Astrophysics Data System (ADS)

    Shimokura, Ryota; Hosoi, Hiroshi; Nishimura, Tadashi; Iwakura, Takashi; Yamanaka, Toshiaki

    2015-01-01

    When the aural cartilage is made to vibrate it generates sound directly into the external auditory canal which can be clearly heard. Although the concept of cartilage conduction can be applied to various speech communication and music industrial devices (e.g. smartphones, music players and hearing aids), the conductive performance of such devices has not yet been defined because the calibration methods are different from those currently used for air and bone conduction. Thus, the aim of this study was to simulate the cartilage conduction sound (CCS) using a head and torso simulator (HATS) and a model of aural cartilage (polyurethane resin pipe) and compare the results with experimental ones. Using the HATS, we found the simulated CCS at frequencies above 2 kHz corresponded to the average measured CCS from seven subjects. Using a model of skull bone and aural cartilage, we found that the simulated CCS at frequencies lower than 1.5 kHz agreed with the measured CCS. Therefore, a combination of these two methods can be used to estimate the CCS with high accuracy.

  4. Guidelines on the implementation of diode in vivo dosimetry programs for photon and electron external beam therapy.

    PubMed

    Alecu, R; Loomis, T; Alecu, J; Ochran, T

    1999-01-01

    Semiconductor diodes offer many advantages for clinical dosimetry: high sensitivity, real-time readout, simple instrumentation, robustness, and air pressure independence. The feasibility and usefulness of in vivo dosimetry with diodes have been shown by numerous publications, but very few, if any, refer to the utilization of diodes in electron beam dosimetry. The purpose of this paper is to present our methods for implementing an effective in vivo dosimetry (IVD) program for external beam therapy with photons and electrons and to evaluate a new type of diode. Methods of deciding on reasonable action levels, along with calibration procedures established according to the type of measurements intended to be performed and the action limits, are discussed. Correction factors to account for nonreference clinical conditions for new types of diodes (designed for photon and electron beams) are presented and compared with those required by older models commercially available. The possibilities and limitations of each type of diode are presented, emphasizing the importance of using the appropriate diode for each task and energy range.

  5. A rapid method for detection of fumonisins B1 and B2 in corn meal using Fourier transform near infrared (FT-NIR) spectroscopy implemented with integrating sphere.

    PubMed

    Gaspardo, B; Del Zotto, S; Torelli, E; Cividino, S R; Firrao, G; Della Riccia, G; Stefanon, B

    2012-12-01

    Fourier transform near infrared (FT-NIR) spectroscopy is an analytical procedure generally used to detect organic compounds in food. In this work, the ability to predict fumonisin B(1)+B(2) contents in corn meal using an FT-NIR spectrophotometer equipped with an integrating sphere was assessed. A total of 143 corn meal samples were collected in the Friuli Venezia Giulia Region (Italy) and used to define a 15-principal-component regression model, applying a partial least squares regression algorithm with full cross-validation as internal validation. External validation was performed on 25 unknown samples. The coefficient of correlation, root mean square error, and standard error of calibration were 0.964, 0.630, and 0.632, respectively, and the external validation confirmed a fair potential of the model in predicting FB(1)+FB(2) concentration. Results suggest that FT-NIR analysis is a suitable method to detect FB(1)+FB(2) in corn meal and to discriminate safe meals from contaminated ones. Copyright © 2012 Elsevier Ltd. All rights reserved.

  6. A Public-Private Partnership Develops and Externally Validates a 30-Day Hospital Readmission Risk Prediction Model

    PubMed Central

    Choudhry, Shahid A.; Li, Jing; Davis, Darcy; Erdmann, Cole; Sikka, Rishi; Sutariya, Bharat

    2013-01-01

    Introduction: Preventing the occurrence of hospital readmissions is needed to improve quality of care and foster population health across the care continuum. Hospitals are being held accountable for improving transitions of care to avert unnecessary readmissions. Advocate Health Care in Chicago and Cerner (ACC) collaborated to develop all-cause, 30-day hospital readmission risk prediction models to identify patients that need interventional resources. Ideally, prediction models should encompass several qualities: they should have high predictive ability; use reliable and clinically relevant data; use vigorous performance metrics to assess the models; be validated in populations where they are applied; and be scalable in heterogeneous populations. However, a systematic review of prediction models for hospital readmission risk determined that most performed poorly (average C-statistic of 0.66) and efforts to improve their performance are needed for widespread usage. Methods: The ACC team incorporated electronic health record data, utilized a mixed-method approach to evaluate risk factors, and externally validated their prediction models for generalizability. Inclusion and exclusion criteria were applied on the patient cohort and then split for derivation and internal validation. Stepwise logistic regression was performed to develop two predictive models: one for admission and one for discharge. The prediction models were assessed for discrimination ability, calibration, overall performance, and then externally validated. Results: The ACC Admission and Discharge Models demonstrated modest discrimination ability during derivation, internal and external validation post-recalibration (C-statistic of 0.76 and 0.78, respectively), and reasonable model fit during external validation for utility in heterogeneous populations. Conclusions: The ACC Admission and Discharge Models embody the design qualities of ideal prediction models. The ACC plans to continue its partnership to further improve and develop valuable clinical models. PMID:24224068

  7. Quality assessment of gasoline using comprehensive two-dimensional gas chromatography combined with unfolded partial least squares: A reliable approach for the detection of gasoline adulteration.

    PubMed

    Parastar, Hadi; Mostafapour, Sara; Azimi, Gholamhasan

    2016-01-01

    Comprehensive two-dimensional gas chromatography and flame ionization detection combined with unfolded-partial least squares is proposed as a simple, fast and reliable method to assess the quality of gasoline and to detect its potential adulterants. The data for the calibration set are first baseline corrected using a two-dimensional asymmetric least squares algorithm. The number of significant partial least squares components to build the model is determined using the minimum value of root-mean square error of leave-one out cross validation, which was 4. In this regard, blends of gasoline with kerosene, white spirit and paint thinner as frequently used adulterants are used to make calibration samples. Appropriate statistical parameters of regression coefficient of 0.996-0.998, root-mean square error of prediction of 0.005-0.010 and relative error of prediction of 1.54-3.82% for the calibration set show the reliability of the developed method. In addition, the developed method is externally validated with three samples in validation set (with a relative error of prediction below 10.0%). Finally, to test the applicability of the proposed strategy for the analysis of real samples, five real gasoline samples collected from gas stations are used for this purpose and the gasoline proportions were in range of 70-85%. Also, the relative standard deviations were below 8.5% for different samples in the prediction set. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Microscope self-calibration based on micro laser line imaging and soft computing algorithms

    NASA Astrophysics Data System (ADS)

    Apolinar Muñoz Rodríguez, J.

    2018-06-01

    A technique to perform microscope self-calibration via micro laser line and soft computing algorithms is presented. In this technique, the microscope vision parameters are computed by means of soft computing algorithms based on laser line projection. To implement the self-calibration, a microscope vision system is constructed by means of a CCD camera and a 38 μm laser line. From this arrangement, the microscope vision parameters are represented via Bezier approximation networks, which are accomplished through the laser line position. In this procedure, a genetic algorithm determines the microscope vision parameters by means of laser line imaging. Also, the approximation networks compute the three-dimensional vision by means of the laser line position. Additionally, the soft computing algorithms re-calibrate the vision parameters when the microscope vision system is modified during the vision task. The proposed self-calibration improves accuracy of the traditional microscope calibration, which is accomplished via external references to the microscope system. The capability of the self-calibration based on soft computing algorithms is determined by means of the calibration accuracy and the micro-scale measurement error. This contribution is corroborated by an evaluation based on the accuracy of the traditional microscope calibration.

  9. Signal Space Separation Method for a Biomagnetic Sensor Array Arranged on a Flat Plane for Magnetocardiographic Applications: A Computer Simulation Study

    PubMed Central

    2018-01-01

    Although the signal space separation (SSS) method can successfully suppress interference/artifacts overlapped onto magnetoencephalography (MEG) signals, the method is considered inapplicable to data from nonhelmet-type sensor arrays, such as the flat sensor arrays typically used in magnetocardiographic (MCG) applications. This paper shows that the SSS method is still effective for data measured from a (nonhelmet-type) array of sensors arranged on a flat plane. By using computer simulations, it is shown that the optimum location of the origin can be determined by assessing the dependence of signal and noise gains of the SSS extractor on the origin location. The optimum values of the parameters LC and LD, which, respectively, indicate the truncation values of the multipole-order ℓ of the internal and external subspaces, are also determined by evaluating dependences of the signal, noise, and interference gains (i.e., the shield factor) on these parameters. The shield factor exceeds 104 for interferences originating from fairly distant sources. However, the shield factor drops to approximately 100 when calibration errors of 0.1% exist and to 30 when calibration errors of 1% exist. The shielding capability can be significantly improved using vector sensors, which measure the x, y, and z components of the magnetic field. With 1% calibration errors, a vector sensor array still maintains a shield factor of approximately 500. It is found that the SSS application to data from flat sensor arrays causes a distortion in the signal magnetic field, but it is shown that the distortion can be corrected by using an SSS-modified sensor lead field in the voxel space analysis. PMID:29854364

  10. Technical Note: Millimeter precision in ultrasound based patient positioning: Experimental quantification of inherent technical limitations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballhausen, Hendrik, E-mail: hendrik.ballhausen@med.uni-muenchen.de; Hieber, Sheila; Li, Minglun

    2014-08-15

    Purpose: To identify the relevant technical sources of error of a system based on three-dimensional ultrasound (3D US) for patient positioning in external beam radiotherapy. To quantify these sources of error in a controlled laboratory setting. To estimate the resulting end-to-end geometric precision of the intramodality protocol. Methods: Two identical free-hand 3D US systems at both the planning-CT and the treatment room were calibrated to the laboratory frame of reference. Every step of the calibration chain was repeated multiple times to estimate its contribution to overall systematic and random error. Optimal margins were computed given the identified and quantified systematic and random errors. Results: In descending order of magnitude, the identified and quantified sources of error were: alignment of calibration phantom to laser marks 0.78 mm, alignment of lasers in treatment vs planning room 0.51 mm, calibration and tracking of 3D US probe 0.49 mm, alignment of stereoscopic infrared camera to calibration phantom 0.03 mm. Under ideal laboratory conditions, these errors are expected to limit ultrasound-based positioning to an accuracy of 1.05 mm radially. Conclusions: The investigated 3D ultrasound system achieves an intramodal accuracy of about 1 mm radially in a controlled laboratory setting. The identified systematic and random errors require an optimal clinical tumor volume to planning target volume margin of about 3 mm. These inherent technical limitations do not prevent clinical use, including hypofractionation or stereotactic body radiation therapy.
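
    The radial figure quoted above is consistent with combining the individual contributions in quadrature. The sketch below reproduces that root-sum-of-squares budget and, purely as an illustration, computes a margin with the common 2.5Σ + 0.7σ recipe; the split into systematic and random parts is hypothetical and may differ from the authors' margin calculation.

        import numpy as np

        # Individual error contributions from the abstract, in mm.
        contributions = np.array([0.78, 0.51, 0.49, 0.03])
        total_rss = np.sqrt(np.sum(contributions ** 2))
        print(f"combined error (root sum of squares): {total_rss:.2f} mm")   # ~1.05 mm

        # Illustrative margin recipe (van Herk style); the split of the combined value
        # into systematic (Sigma) and random (sigma) parts is hypothetical.
        sigma_sys, sigma_rand = 0.9, 0.6
        margin = 2.5 * sigma_sys + 0.7 * sigma_rand
        print(f"illustrative CTV-to-PTV margin: {margin:.1f} mm")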

  11. Precision Spectrophotometric Calibration System for Dark Energy Instruments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schubnell, Michael S.

    2015-06-30

    For this research we built a precision calibration system and carried out measurements to demonstrate the precision that can be achieved with a high-precision spectrometric calibration system. It was shown that the system is capable of providing a complete spectrophotometric calibration at the sub-pixel level. The calibration system uses a fast, high-precision monochromator that can quickly and efficiently scan over an instrument's entire spectral range with a spectral line width of less than 0.01 nm, corresponding to a fraction of a pixel on the CCD. The system was extensively evaluated in the laboratory. Our research showed that a complete spectrophotometric calibration standard for spectroscopic survey instruments such as DESI is possible. The monochromator precision and repeatability to a small fraction of the DESI spectrograph LSF were demonstrated with re-initialization on every scan and thermal drift compensation by locking to multiple external line sources. A projector system that mimics the telescope aperture for a point source at infinity was demonstrated.

  12. Ultrafast gas chromatography method with direct injection for the quantitative determination of benzene, toluene, ethylbenzene, and xylenes in commercial gasoline.

    PubMed

    Miranda, Nahieh Toscano; Sequinel, Rodrigo; Hatanaka, Rafael Rodrigues; de Oliveira, José Eduardo; Flumignan, Danilo Luiz

    2017-04-01

    Benzene, toluene, ethylbenzene, and xylenes are some of the most hazardous constituents found in commercial gasoline samples; therefore, these components must be monitored to avoid toxicological problems. We propose a new routine method of ultrafast gas chromatography coupled to flame ionization detection for the direct determination of benzene, toluene, ethylbenzene, and xylenes in commercial gasoline. This method is based on external standard calibration to quantify each compound, including the validation step of the study of linearity, detection and quantification limits, precision, and accuracy. The time of analysis was less than 3.2 min, with quantitative statements regarding the separation and quantification of all compounds in commercial gasoline samples. Ultrafast gas chromatography is a promising alternative method to official analytical techniques. Government laboratories could consider using this method for quality control. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
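
    External standard calibration reduces to a linear fit of detector response against known concentrations, followed by inversion for the unknown. A minimal sketch for one compound, with hypothetical peak areas and levels, is:

        import numpy as np

        # Calibration standards: benzene concentration (%, v/v) vs. FID peak area
        conc = np.array([0.2, 0.5, 1.0, 1.5, 2.0])
        area = np.array([1.05e4, 2.60e4, 5.15e4, 7.80e4, 1.03e5])   # hypothetical

        slope, intercept = np.polyfit(conc, area, 1)
        r = np.corrcoef(conc, area)[0, 1]

        # Quantify an unknown sample from its measured peak area.
        area_sample = 3.9e4
        conc_sample = (area_sample - intercept) / slope
        print(f"r = {r:.4f}, benzene = {conc_sample:.2f} % (v/v)")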

  13. Calibration of strontium-90 eye applicator using a strontium external beam standard.

    PubMed

    Siddle, D; Langmack, K

    1999-07-01

    Four techniques for measuring the dose rate from Sr-90 concave eye plaques are presented. The techniques involve calibrating a concave eye plaque against a Sr-90 teletherapy unit using X-Omat film, radiochromic film, black LiF TLD discs and LiF chips. The mean dose rate predicted by these dosimeters is 7.5 cGy s(-1). The dose rate quoted by the manufacturer is 33% lower than this value, which is consistent with discrepancies reported by other authors. Calibration against a 6 MV linear accelerator was also carried out using each of the above dosimetric devices, and appropriate sensitivity correction factors have been presented.

  14. Calibration of strontium-90 eye applicator using a strontium external beam standard

    NASA Astrophysics Data System (ADS)

    Siddle, D.; Langmack, K.

    1999-07-01

    Four techniques for measuring the dose rate from Sr-90 concave eye plaques are presented. The techniques involve calibrating a concave eye plaque against a Sr-90 teletherapy unit using X-Omat film, radiochromic film, black LiF TLD discs and LiF chips. The mean dose rate predicted by these dosimeters is 7.5 cGy s-1. The dose rate quoted by the manufacturer is 33% lower than this value, which is consistent with discrepancies reported by other authors. Calibration against a 6 MV linear accelerator was also carried out using each of the above dosimetric devices, and appropriate sensitivity correction factors have been presented.

  15. Weak lensing magnification of SpARCS galaxy clusters

    NASA Astrophysics Data System (ADS)

    Tudorica, A.; Hildebrandt, H.; Tewes, M.; Hoekstra, H.; Morrison, C. B.; Muzzin, A.; Wilson, G.; Yee, H. K. C.; Lidman, C.; Hicks, A.; Nantais, J.; Erben, T.; van der Burg, R. F. J.; Demarco, R.

    2017-12-01

    Context. Measuring and calibrating relations between cluster observables is critical for resource-limited studies. The mass-richness relation of clusters offers an observationally inexpensive way of estimating masses. Its calibration is essential for cluster and cosmological studies, especially for high-redshift clusters. Weak gravitational lensing magnification is a promising method, complementary to shear studies, that can be applied at higher redshifts. Aims: We aim to employ the weak lensing magnification method to calibrate the mass-richness relation up to a redshift of 1.4. We used the Spitzer Adaptation of the Red-Sequence Cluster Survey (SpARCS) galaxy cluster candidates (0.2 < z < 1.4) and optical data from the Canada France Hawaii Telescope (CFHT) to test whether magnification can be effectively used to constrain the mass of high-redshift clusters. Methods: Lyman-break galaxies (LBGs) selected using the u-band dropout technique and their colours were used as a background sample of sources. LBG positions were cross-correlated with the centres of the sample of SpARCS clusters to estimate the magnification signal, which was optimally weighted using an externally calibrated LBG luminosity function. The signal was measured for cluster sub-samples, binned in both redshift and richness. Results: We measured the cross-correlation between the positions of galaxy cluster candidates and LBGs and detected a weak lensing magnification signal for all bins at a detection significance of 2.6-5.5σ. In particular, the significance of the measurement for clusters with z > 1.0 is 4.1σ; for the entire cluster sample we obtained an average M200 of 1.28 (+0.23, -0.21) × 10^14 M⊙. Conclusions: Our measurements demonstrated the feasibility of using weak lensing magnification as a viable tool for determining the average halo masses for samples of high-redshift galaxy clusters. The results also established the success of using galaxy over-densities to select massive clusters at z > 1. Additional studies are necessary for further modelling of the various systematic effects we discussed.

  16. Qualitative and quantitative analysis of pyrolysis oil by gas chromatography with flame ionization detection and comprehensive two-dimensional gas chromatography with time-of-flight mass spectrometry.

    PubMed

    Sfetsas, Themistoklis; Michailof, Chrysa; Lappas, Angelos; Li, Qiangyi; Kneale, Brian

    2011-05-27

    Pyrolysis oils have attracted a lot of interest, as they are liquid energy carriers and general sources of chemicals. In this work, gas chromatography with flame ionization detection (GC-FID) and comprehensive two-dimensional gas chromatography with time-of-flight mass spectrometry (GC×GC-TOFMS) were used to provide both qualitative and quantitative results for the analysis of three different pyrolysis oils. The chromatographic methods and parameters were optimized, and solvent choice and separation restrictions are discussed. Pyrolysis oil samples were diluted in a suitable organic solvent and analyzed by GC×GC-TOFMS. An average of 300 compounds were detected and identified in all three samples using the ChromaToF (Leco) software. The deconvoluted spectra were compared with the NIST software library for correct matching. Group-type classification was performed by use of the ChromaToF software. The quantification of 11 selected compounds was performed by means of a multiple-point external calibration curve. Afterwards, the pyrolysis oils were extracted with water, and the aqueous phase was analyzed both by GC-FID and, after a proper change of solvent, by GC×GC-TOFMS. As previously, the selected compounds were quantified by both techniques by means of multiple-point external calibration curves. The parameters of the calibration curves were calculated by weighted linear regression analysis. The limit of detection, limit of quantitation, and linearity range for each standard compound with each method are presented. The potency of GC×GC-TOFMS for efficient mapping of pyrolysis oil is indisputable, and the possibility of using it for quantification as well has been demonstrated. On the other hand, the GC-FID analysis provides reliable results that allow for a rapid screening of the pyrolysis oil. To the best of our knowledge, very few papers have reported quantification attempts on pyrolysis oil samples using GC×GC-TOFMS, most of which make use of the internal standard method. This work provides the ground for further analysis of pyrolysis oils of diverse sources for a rational design of both their production and utilization process. Copyright © 2010 Elsevier B.V. All rights reserved.
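
    The weighted calibration curves and detection limits mentioned above can be derived from a weighted linear fit of area versus concentration. The sketch below uses hypothetical data, a simple 1/x weighting, and the common 3.3·s/slope and 10·s/slope conventions for LOD and LOQ, which may differ from the authors' exact procedure.

        import numpy as np

        conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0])                 # mg/L, hypothetical standards
        area = np.array([0.48e3, 1.02e3, 1.95e3, 5.10e3, 9.90e3])   # hypothetical peak areas

        w = 1.0 / conc                                   # simple 1/x weighting
        # np.polyfit expects weights proportional to 1/sigma, hence sqrt(w)
        slope, intercept = np.polyfit(conc, area, 1, w=np.sqrt(w))

        resid = area - (slope * conc + intercept)
        s_resid = np.sqrt(np.sum(w * resid ** 2) / (len(conc) - 2))

        lod = 3.3 * s_resid / slope
        loq = 10.0 * s_resid / slope
        print(f"slope={slope:.1f}, intercept={intercept:.1f}, "
              f"LOD={lod:.2f} mg/L, LOQ={loq:.2f} mg/L")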

  17. Advanced analysis techniques for uranium assay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geist, W. H.; Ensslin, Norbert; Carrillo, L. A.

    2001-01-01

    Uranium has a negligible passive neutron emission rate, making its assay practicable only with an active interrogation method. The active interrogation uses external neutron sources to induce fission events in the uranium in order to determine the mass. This technique requires careful calibration with standards that are representative of the items to be assayed. The samples to be measured are not always well represented by the available standards, which often leads to large biases. A technique of active multiplicity counting is being developed to reduce some of these assay difficulties. Active multiplicity counting uses the measured doubles and triples count rates to determine the neutron multiplication and the product of the source-sample coupling (C) and the 235U mass (m). Since the 235U mass always appears in the multiplicity equations as the product Cm, the coupling needs to be determined before the mass can be known. A relationship has been developed that relates the coupling to the neutron multiplication. The relationship is based on both an analytical derivation and on empirical observations. To determine a scaling constant present in this relationship, known standards must be used. Evaluation of experimental data revealed an improvement over the traditional calibration-curve analysis method of fitting the doubles count rate to the 235U mass. Active multiplicity assay appears to relax the requirement that the calibration standards and unknown items have the same chemical form and geometry.

  18. Space-Based Observations of Satellites From the MOST Microsatellite

    DTIC Science & Technology

    2006-11-01

    error estimate for these observations. To perform differential photometry, reference magnitudes for the background stars are needed. The Hubble Guide ... [record fragment; the remainder is table-of-contents residue, including the entries "6.3 External Calibration References", "6.4 Post..." and "10. References"]

  19. Enhanced RGB-D Mapping Method for Detailed 3D Indoor and Outdoor Modeling

    PubMed Central

    Tang, Shengjun; Zhu, Qing; Chen, Wu; Darwish, Walid; Wu, Bo; Hu, Han; Chen, Min

    2016-01-01

    RGB-D sensors (sensors with an RGB camera and a depth camera) are novel sensing systems that capture RGB images along with pixel-wise depth information. Although they are widely used in various applications, RGB-D sensors have significant drawbacks with respect to 3D dense mapping, including limited measurement ranges (e.g., within 3 m) and depth measurement errors that increase with distance from the sensor. In this paper, we present a novel approach to geometrically integrate the depth scene and the RGB scene to enlarge the measurement distance of RGB-D sensors and enrich the details of the model generated from depth images. First, precise calibration for RGB-D sensors is introduced. In addition to the calibration of internal and external parameters for both the IR camera and the RGB camera, the relative pose between the RGB camera and the IR camera is also calibrated. Second, to ensure the pose accuracy of RGB images, a refined rejection method for false feature matches is introduced by combining the depth information and initial camera poses between frames of the RGB-D sensor. Then, a global optimization model is used to improve the accuracy of the camera poses, decreasing the inconsistencies between the depth frames in advance. In order to eliminate the geometric inconsistencies between the RGB scene and the depth scene, the scale ambiguity problem encountered during pose estimation with RGB image sequences is resolved by integrating the depth and visual information, and a robust rigid-transformation recovery method is developed to register the RGB scene to the depth scene. The benefit of the proposed joint optimization method is first evaluated with the publicly available benchmark datasets collected with Kinect. Then, the proposed method is examined by tests with two datasets collected in outdoor and indoor environments. The experimental results demonstrate the feasibility and robustness of the proposed method. PMID:27690028

  20. Enhanced RGB-D Mapping Method for Detailed 3D Indoor and Outdoor Modeling.

    PubMed

    Tang, Shengjun; Zhu, Qing; Chen, Wu; Darwish, Walid; Wu, Bo; Hu, Han; Chen, Min

    2016-09-27

    RGB-D sensors (sensors with an RGB camera and a depth camera) are novel sensing systems that capture RGB images along with pixel-wise depth information. Although they are widely used in various applications, RGB-D sensors have significant drawbacks with respect to 3D dense mapping, including limited measurement ranges (e.g., within 3 m) and depth measurement errors that increase with distance from the sensor. In this paper, we present a novel approach to geometrically integrate the depth scene and the RGB scene to enlarge the measurement distance of RGB-D sensors and enrich the details of the model generated from depth images. First, precise calibration for RGB-D sensors is introduced. In addition to the calibration of internal and external parameters for both the IR camera and the RGB camera, the relative pose between the RGB camera and the IR camera is also calibrated. Second, to ensure the pose accuracy of RGB images, a refined rejection method for false feature matches is introduced by combining the depth information and initial camera poses between frames of the RGB-D sensor. Then, a global optimization model is used to improve the accuracy of the camera poses, decreasing the inconsistencies between the depth frames in advance. In order to eliminate the geometric inconsistencies between the RGB scene and the depth scene, the scale ambiguity problem encountered during pose estimation with RGB image sequences is resolved by integrating the depth and visual information, and a robust rigid-transformation recovery method is developed to register the RGB scene to the depth scene. The benefit of the proposed joint optimization method is first evaluated with the publicly available benchmark datasets collected with Kinect. Then, the proposed method is examined by tests with two datasets collected in outdoor and indoor environments. The experimental results demonstrate the feasibility and robustness of the proposed method.

  1. A calibration method for fringe reflection technique based on the analytical phase-slope description

    NASA Astrophysics Data System (ADS)

    Wu, Yuxiang; Yue, Huimin; Pan, Zhipeng; Liu, Yong

    2018-05-01

    The fringe reflection technique (FRT) has become one of the most popular methods for measuring the shape of specular surfaces in recent years. Existing FRT system calibration methods usually contain two parts: camera calibration and geometric calibration. In geometric calibration, calibrating the position of the liquid crystal display (LCD) screen is one of the most difficult steps, and its accuracy is affected by factors such as imaging aberration, plane-mirror flatness, and LCD screen pixel size accuracy. In this paper, based on the derivation of an analytical phase-slope description of the FRT, we present a novel calibration method that does not require calibrating the position of the LCD screen. Furthermore, the system can be arbitrarily arranged, and the imaging system can be either telecentric or non-telecentric. In our experiment measuring a spherical mirror with a 5000 mm radius, the proposed calibration method achieves a measurement error 2.5 times smaller than that of the geometric calibration method. In the wafer surface measurement experiment, the result obtained with the proposed calibration method is closer to the interferometer result than that of the geometric calibration method.

  2. Regional mapping of soil parent material by machine learning based on point data

    NASA Astrophysics Data System (ADS)

    Lacoste, Marine; Lemercier, Blandine; Walter, Christian

    2011-10-01

    A machine learning system (MART) has been used to predict soil parent material (SPM) at the regional scale with a 50-m resolution. The use of point-specific soil observations as training data was tested as a replacement for the soil maps introduced in previous studies, with the aim of generating a more even distribution of training data over the study area and reducing information uncertainty. The 27,020 km² study area (Brittany, northwestern France) contains mainly metamorphic, igneous and sedimentary substrates. However, superficial deposits (aeolian loam, colluvial and alluvial deposits) very often represent the actual SPM and are typically under-represented in existing geological maps. In order to calibrate the predictive model, a total of 4920 point soil descriptions were used as training data along with 17 environmental predictors (terrain attributes derived from a 50-m DEM, as well as emissions of K, Th and U obtained by means of airborne gamma-ray spectrometry, geological variables at the 1:250,000 scale and land use maps obtained by remote sensing). Model predictions were then compared: i) during SPM model creation, to point data not used in model calibration (internal validation), ii) to the entire point dataset (point validation), and iii) to existing detailed soil maps (external validation). The internal, point and external validation accuracy rates were 56%, 81% and 54%, respectively. Aeolian loam was one of the three most closely predicted substrates. Poor prediction results were associated with uncommon materials and areas with high geological complexity, i.e. areas where the existing maps used for external validation were also imprecise. The resultant predictive map turned out to be more accurate than existing geological maps and moreover indicated surface deposits whose spatial coverage is consistent with actual knowledge of the area. This method proves quite useful in predicting SPM within areas where conventional mapping techniques might be too costly or lengthy or where soil maps are insufficient for use as training data. In addition, this method produces repeatable and interpretable results whose accuracy can be assessed objectively.
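
    As a rough illustration of the workflow above (boosted trees trained on point observations, then validated internally), the sketch below uses scikit-learn's gradient boosting as a stand-in for MART and randomly generated data in place of the real soil and terrain predictors; predictor names, class labels and hyperparameters are all assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Stand-in training table: one row per point soil observation, columns are
# the 17 environmental predictors (terrain attributes, gamma-ray K/Th/U,
# encoded geology and land-use classes); y holds the SPM class of each point.
rng = np.random.default_rng(0)
X = rng.normal(size=(4920, 17))
y = rng.integers(0, 5, size=4920)

# Internal validation: hold out part of the point data from model fitting.
X_fit, X_val, y_fit, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

# MART-style boosted classification trees.
model = GradientBoostingClassifier(n_estimators=300, learning_rate=0.05,
                                   max_depth=3, random_state=0)
model.fit(X_fit, y_fit)

print("internal validation accuracy:",
      accuracy_score(y_val, model.predict(X_val)))
```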

  3. Advanced spectrophotometric chemometric methods for resolving the binary mixture of doxylamine succinate and pyridoxine hydrochloride.

    PubMed

    Katsarov, Plamen; Gergov, Georgi; Alin, Aylin; Pilicheva, Bissera; Al-Degs, Yahya; Simeonov, Vasil; Kassarova, Margarita

    2018-03-01

    The prediction power of partial least squares (PLS) and multivariate curve resolution-alternating least squares (MCR-ALS) methods has been studied for the simultaneous quantitative analysis of the binary drug combination doxylamine succinate and pyridoxine hydrochloride. Analysis of first-order UV overlapped spectra was performed using different PLS models - classical PLS1 and PLS2 as well as partial robust M-regression (PRM). These linear models were compared to MCR-ALS with equality and correlation constraints (MCR-ALS-CC). All techniques operated within the full spectral region and extracted maximum information for the drugs analysed. The developed chemometric methods were validated on external sample sets and were applied to the analyses of pharmaceutical formulations. The obtained statistical parameters were satisfactory for both calibration and validation sets. All developed methods can be successfully applied for the simultaneous spectrophotometric determination of doxylamine and pyridoxine both in laboratory-prepared mixtures and commercial dosage forms.
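
    A minimal sketch of the PLS2 part of such a calibration is shown below, with simulated two-component spectra standing in for the real UV data; the pure-component spectra, concentration ranges, noise level and number of latent variables are all assumptions made for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error

# Simulated calibration set: rows are spectra of standard mixtures, columns
# are absorbances per wavelength; Y holds the known concentrations (mg/L)
# of the two analytes (e.g. doxylamine and pyridoxine).
rng = np.random.default_rng(1)
n_cal, n_wl = 30, 200
pure_spectra = rng.random((2, n_wl))
Y_cal = rng.uniform(5, 50, size=(n_cal, 2))
X_cal = Y_cal @ pure_spectra + rng.normal(0, 0.01, (n_cal, n_wl))

# PLS2 model predicting both analytes simultaneously.
pls = PLSRegression(n_components=4)
pls.fit(X_cal, Y_cal)

# External validation on an independent simulated sample set.
Y_val = rng.uniform(5, 50, size=(10, 2))
X_val = Y_val @ pure_spectra + rng.normal(0, 0.01, (10, n_wl))
rmsep = np.sqrt(mean_squared_error(Y_val, pls.predict(X_val)))
print("RMSEP (mg/L):", rmsep)
```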

  4. Human Life History Strategies.

    PubMed

    Chua, Kristine J; Lukaszewski, Aaron W; Grant, DeMond M; Sng, Oliver

    2017-01-01

    Human life history (LH) strategies are theoretically regulated by developmental exposure to environmental cues that ancestrally predicted LH-relevant world states (e.g., risk of morbidity-mortality). Recent modeling work has raised the question of whether the association of childhood family factors with adult LH variation arises via (i) direct sampling of external environmental cues during development and/or (ii) calibration of LH strategies to internal somatic condition (i.e., health), which itself reflects exposure to variably favorable environments. The present research tested between these possibilities through three online surveys involving a total of over 26,000 participants. Participants completed questionnaires assessing components of self-reported environmental harshness (i.e., socioeconomic status, family neglect, and neighborhood crime), health status, and various LH-related psychological and behavioral phenotypes (e.g., mating strategies, paranoia, and anxiety), modeled as a unidimensional latent variable. Structural equation models suggested that exposure to harsh ecologies had direct effects on latent LH strategy as well as indirect effects on latent LH strategy mediated via health status. These findings suggest that human LH strategies may be calibrated to both external and internal cues and that such calibrational effects manifest in a wide range of psychological and behavioral phenotypes.

  5. Flight loads measurements obtained from calibrated strain-gage bridges mounted externally on the skin of a low-aspect-ratio wing

    NASA Technical Reports Server (NTRS)

    Eckstrom, C. V.

    1976-01-01

    Flight-test measurements of wing loads (shear, bending moment, and torque) were obtained by means of strain-gage bridges mounted on the exterior surface of a low-aspect-ratio, thin, swept wing that had a structural-skin, full-depth honeycomb-core sandwich construction. Details concerning the strain-gage bridges, the calibration procedures used, and the flight-test results are presented, along with some pressure measurements and theoretical calculations for comparison purposes.
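
    Calibrations of this kind generally end in "load equations" that convert the bridge outputs measured in flight into shear, bending moment and torque, with coefficients obtained by least squares from the ground calibration loadings. The sketch below shows that fitting step for a single load component; the bridge readings and applied loads are invented for illustration and do not come from the report.

```python
import numpy as np

# Ground calibration: each row is one applied load case, columns are the
# outputs of the externally mounted strain-gage bridges (arbitrary units),
# and applied_shear holds the corresponding known shear loads (N).
bridge_outputs = np.array([
    [12.0,  3.1, 0.4],
    [25.5,  6.8, 1.1],
    [38.9, 10.2, 1.9],
    [51.7, 13.5, 2.3],
    [ 6.2,  1.4, 0.2],
])
applied_shear = np.array([500.0, 1000.0, 1500.0, 2000.0, 250.0])

# Load-equation coefficients: shear ~ sum(beta_i * bridge_output_i).
beta, *_ = np.linalg.lstsq(bridge_outputs, applied_shear, rcond=None)

# In flight, the same equation converts measured bridge outputs to load.
flight_outputs = np.array([30.0, 8.0, 1.5])
print("estimated in-flight shear (N):", flight_outputs @ beta)
```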

  6. Results of the space shuttle vehicle ascent air data system probe calibration test using a 0.07-scale external tank forebody model (68T) in the AEDC 16-foot transonic wind tunnel (IA-310), volume 2

    NASA Technical Reports Server (NTRS)

    Collette, J. G. R.

    1991-01-01

    A recalibration of the Space Shuttle Vehicle Ascent Air Data System probe was conducted in the Arnold Engineering Development Center (AEDC) transonic wind tunnel. The purpose was to improve on the accuracy of the previous calibration in order to reduce the existing uncertainties in the system. A probe tip attached to a 0.07-scale External Tank Forebody model was tested at angles of attack of -8 to +4 degrees and sideslip angles of -4 to +4 degrees. High precision instrumentation was used to acquire pressure data at discrete Mach numbers ranging from 0.6 to 1.55. Pressure coefficient uncertainties were estimated at less than 0.0020. Additional information is given in tabular form.

  7. Results of the space shuttle vehicle ascent air data system probe calibration test using a 0.07-scale external tank forebody model (68T) in the AEDC 16-foot transonic wind tunnel (IA-310), volume 1

    NASA Technical Reports Server (NTRS)

    Collette, J. G. R.

    1991-01-01

    A recalibration of the Space Shuttle Vehicle Ascent Air Data System probe was conducted in the Arnold Engineering Development Center (AEDC) transonic wind tunnel. The purpose was to improve on the accuracy of the previous calibration in order to reduce the existing uncertainties in the system. A probe tip attached to a 0.07-scale External Tank Forebody model was tested at angles of attack of -8 to +4 degrees and sideslip angles of -4 to +4 degrees. High precision instrumentation was used to acquire pressure data at discrete Mach numbers ranging from 0.6 to 1.55. Pressure coefficient uncertainties were estimated at less than 0.0020. Data is given in graphical and tabular form.

  8. AgRISTARS. Supporting research: MARS x-band scatterometer

    NASA Technical Reports Server (NTRS)

    Ulaby, F. T. (Principal Investigator); Gabel, P. F., Jr.; Brunfeldt, D. R.

    1981-01-01

    The design, construction, and data collection procedures of the mobile agricultural radar sensor (MARS) X-band scatterometer are described. This system is an inexpensive, highly mobile, truck-mounted FM-CW radar operating at a center frequency of 10.2 GHz. The antennas, which allow for VV and VH polarizations, are configured in a side-looking mode that allows for drive-by data collection. This configuration shortens fieldwork time considerably while increasing statistical confidence in the data. Both internal calibration, via a delay line, and external calibration with a Luneberg lens are used to calibrate the instrument in terms of sigma(o). The radar scattering cross section per unit area, sigma(o), is found using the radar equation.
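
    The external calibration step can be sketched with a simplified form of the radar equation: the response to a point target of known radar cross section (here the Luneberg lens) fixes a single system constant, which then converts the received power from a distributed target into sigma(o). The one-term R^4 form and all numerical values below are assumptions for illustration only.

```python
import numpy as np

def system_constant(p_r_cal, r_cal, rcs_cal):
    """Lump all unknown system gains into one constant K using a point
    target of known radar cross section: P_r = K * sigma / R^4."""
    return p_r_cal * r_cal**4 / rcs_cal

def sigma0(p_r, r, k, illuminated_area):
    """Backscattering coefficient per unit area for a distributed target."""
    sigma_total = p_r * r**4 / k
    return sigma_total / illuminated_area

# Hypothetical measurements (linear power units, metres, square metres).
K = system_constant(p_r_cal=2.4e-9, r_cal=12.0, rcs_cal=25.0)
s0 = sigma0(p_r=6.0e-11, r=11.0, k=K, illuminated_area=3.5)
print("sigma0 [dB]:", 10 * np.log10(s0))
```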

  9. Multi-Institutional External Validation of Seminal Vesicle Invasion Nomograms: Head-to-Head Comparison of Gallina Nomogram Versus 2007 Partin Tables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zorn, Kevin C.; Capitanio, Umberto; Jeldres, Claudio

    2009-04-01

    Purpose: The Partin tables represent one of the most widely used prostate cancer staging tools for seminal vesicle invasion (SVI) prediction. Recently, Gallina et al. reported a novel staging tool for the prediction of SVI that further incorporates the percentage of positive biopsy cores. We performed an external validation of the Gallina et al. nomogram and the 2007 Partin tables in a large, multi-institutional North American cohort of men treated with robotic-assisted radical prostatectomy. Methods and Materials: Clinical and pathologic data were prospectively gathered from 2,606 patients treated with robotic-assisted radical prostatectomy at one of four North American robotic referral centers between 2002 and 2007. Discrimination was quantified with the area under the receiver operating characteristic curve. The calibration compared the predicted and observed SVI rates throughout the entire range of predictions. Results: At robotic-assisted radical prostatectomy, SVI was recorded in 4.2% of patients. The discriminant properties of the Gallina et al. nomogram resulted in 81% accuracy compared with 78% for the 2007 Partin tables. The Gallina et al. nomogram overestimated the true rate of SVI. Conversely, the Partin tables underestimated the true rate of SVI. Conclusion: The Gallina et al. nomogram offers greater accuracy (81%) than the 2007 Partin tables (78%). However, both tools are associated with calibration limitations that need to be acknowledged and considered before their implementation into clinical practice.

  10. Pattern sampling for etch model calibration

    NASA Astrophysics Data System (ADS)

    Weisbuch, François; Lutich, Andrey; Schatz, Jirka

    2017-06-01

    Successful patterning requires good control of the photolithography and etch processes. While compact litho models, mainly based on rigorous physics, can predict very well the contours printed in photoresist, purely empirical etch models are less accurate and more unstable. Compact etch models are based on geometrical kernels to compute the litho-etch biases that measure the distance between litho and etch contours. The definition of the kernels as well as the choice of calibration patterns is critical to obtaining a robust etch model. This work proposes to define a set of independent and anisotropic etch kernels - "internal, external, curvature, Gaussian, z_profile" - designed to capture the finest details of the resist contours and represent precisely any etch bias. By evaluating the etch kernels on various structures it is possible to map their etch signatures in a multi-dimensional space and analyze them to find an optimal sampling of structures to train an etch model. The method was specifically applied to a contact layer containing many different geometries and was used to successfully select appropriate calibration structures. The proposed kernels evaluated on these structures were combined to train an etch model significantly better than the standard one. We also illustrate the usage of the specific kernel "z_profile", which adds a third dimension to the description of the resist profile.

  11. A jaw calibration method to provide a homogeneous dose distribution in the matching region when using a monoisocentric beam split technique.

    PubMed

    Cenizo, E; García-Pareja, S; Galán, P; Bodineau, C; Caudepón, F; Casado, F J

    2011-05-01

    Asymmetric collimators are currently available in most linear accelerators. They enable many clinical improvements, such as the monoisocentric beam-split technique, which is increasingly used in external radiotherapy treatments. The tolerance established for each independent jaw positioning is 1 mm. Within this tolerance, a gap or overlap of the collimators of up to 2 mm can occur in the half-beam matching region, causing dose heterogeneities of up to 40%. To solve this dosimetric problem, we propose an accurate jaw calibration method based on Monte Carlo modeling of the linac photon beams. By simulating different jaw misalignments, the dose distribution in the matching region for each particular configuration is precisely known, so the misalignment of the jaws can be related to the maximum heterogeneity produced. From experimental measurements using film dosimetry, and taking into account the Monte Carlo results, we obtain the actual misalignment of each jaw. By direct inspection of the readings of the potentiometers that control the position of the jaws, a high-precision correction can be performed, adjusting for the obtained misalignments. In the linac studied, the dose heterogeneity in the junction performed with the X jaws (those farther from the source) and a 6 MV photon beam was initially over 12%, although each jaw was within the positioning tolerance. After jaw calibration, the heterogeneity was reduced to below 3%. With this method, we are able to reduce the positioning uncertainty to 0.2 mm. Consequently, the dose distribution at the junction of abutted fields is highly smoothed, keeping the maximum dose heterogeneity below 3%.

  12. Calibration method for a large-scale structured light measurement system.

    PubMed

    Wang, Peng; Wang, Jianmei; Xu, Jing; Guan, Yong; Zhang, Guanglie; Chen, Ken

    2017-05-10

    The structured light method is an effective non-contact measurement approach. The calibration greatly affects the measurement precision of structured light systems. To construct a large-scale structured light system with high accuracy, a large-scale and precise calibration gauge is always required, which leads to an increased cost. To this end, in this paper, a calibration method with a planar mirror is proposed to reduce the calibration gauge size and cost. An out-of-focus camera calibration method is also proposed to overcome the defocusing problem caused by the shortened distance during the calibration procedure. The experimental results verify the accuracy of the proposed calibration method.

  13. Small Imaging Depth LIDAR and DCNN-Based Localization for Automated Guided Vehicle †

    PubMed Central

    Ito, Seigo; Hiratsuka, Shigeyoshi; Ohta, Mitsuhiko; Matsubara, Hiroyuki; Ogawa, Masaru

    2018-01-01

    We present our third prototype sensor and a localization method for Automated Guided Vehicles (AGVs), for which small imaging LIght Detection and Ranging (LIDAR) and fusion-based localization are fundamentally important. Our small imaging LIDAR, named the Single-Photon Avalanche Diode (SPAD) LIDAR, uses a time-of-flight method and SPAD arrays. A SPAD is a highly sensitive photodetector capable of detecting at the single-photon level, and the SPAD LIDAR has two SPAD arrays on the same chip for detection of laser light and environmental light. Therefore, the SPAD LIDAR simultaneously outputs range image data and monocular image data with the same coordinate system and does not require external calibration among outputs. As AGVs travel both indoors and outdoors with vibration, this calibration-less structure is particularly useful for AGV applications. We also introduce a fusion-based localization method, named SPAD DCNN, which uses the SPAD LIDAR and employs a Deep Convolutional Neural Network (DCNN). SPAD DCNN can fuse the outputs of the SPAD LIDAR: range image data, monocular image data and peak intensity image data. The SPAD DCNN has two outputs: the regression result of the position of the SPAD LIDAR and the classification result of the existence of a target to be approached. Our third prototype sensor and the localization method are evaluated in an indoor environment by assuming various AGV trajectories. The results show that the sensor and localization method improve the localization accuracy. PMID:29320434

  14. Small Imaging Depth LIDAR and DCNN-Based Localization for Automated Guided Vehicle.

    PubMed

    Ito, Seigo; Hiratsuka, Shigeyoshi; Ohta, Mitsuhiko; Matsubara, Hiroyuki; Ogawa, Masaru

    2018-01-10

    We present our third prototype sensor and a localization method for Automated Guided Vehicles (AGVs), for which small imaging LIght Detection and Ranging (LIDAR) and fusion-based localization are fundamentally important. Our small imaging LIDAR, named the Single-Photon Avalanche Diode (SPAD) LIDAR, uses a time-of-flight method and SPAD arrays. A SPAD is a highly sensitive photodetector capable of detecting at the single-photon level, and the SPAD LIDAR has two SPAD arrays on the same chip for detection of laser light and environmental light. Therefore, the SPAD LIDAR simultaneously outputs range image data and monocular image data with the same coordinate system and does not require external calibration among outputs. As AGVs travel both indoors and outdoors with vibration, this calibration-less structure is particularly useful for AGV applications. We also introduce a fusion-based localization method, named SPAD DCNN, which uses the SPAD LIDAR and employs a Deep Convolutional Neural Network (DCNN). SPAD DCNN can fuse the outputs of the SPAD LIDAR: range image data, monocular image data and peak intensity image data. The SPAD DCNN has two outputs: the regression result of the position of the SPAD LIDAR and the classification result of the existence of a target to be approached. Our third prototype sensor and the localization method are evaluated in an indoor environment by assuming various AGV trajectories. The results show that the sensor and localization method improve the localization accuracy.

  15. First Demonstration of ECHO: an External Calibrator for Hydrogen Observatories

    NASA Astrophysics Data System (ADS)

    Jacobs, Daniel C.; Burba, Jacob; Bowman, Judd D.; Neben, Abraham R.; Stinnett, Benjamin; Turner, Lauren; Johnson, Kali; Busch, Michael; Allison, Jay; Leatham, Marc; Serrano Rodriguez, Victoria; Denney, Mason; Nelson, David

    2017-03-01

    Multiple instruments are pursuing constraints on dark energy, observing reionization and opening a window on the dark ages through the detection and characterization of the 21 cm hydrogen line for redshifts ranging from ~1 to 25. These instruments, including CHIME in the sub-meter and HERA in the meter bands, are wide-field arrays with multiple-degree beams, typically operating in transit mode. Accurate knowledge of their primary beams is critical for separation of bright foregrounds from the desired cosmological signals, but difficult to achieve through astronomical observations alone. Previous beam calibration work at low frequencies has focused on model verification and does not address the need of 21 cm experiments for routine beam mapping, to the horizon, of the as-built array. We describe the design and methodology of a drone-mounted calibrator, the External Calibrator for Hydrogen Observatories (ECHO), that aims to address this need. We report on a first set of trials to calibrate low-frequency dipoles at 137 MHz and compare ECHO measurements to an established beam-mapping system based on transmissions from the Orbcomm satellite constellation. We create beam maps of two dipoles at a 9° resolution and find sample noise ranging from 1% at the zenith to 100% in the far sidelobes. Assuming this sample noise represents the error in the measurement, the higher end of this range is not yet consistent with the desired requirement but is an improvement on Orbcomm. The overall performance of ECHO suggests that the desired precision and angular coverage are achievable in practice with modest improvements. We identify the main sources of systematic error and uncertainty in our measurements and describe the steps needed to overcome them.

  16. External cavity-quantum cascade laser (EC-QCL) spectroscopy for protein analysis in bovine milk.

    PubMed

    Kuligowski, Julia; Schwaighofer, Andreas; Alcaráz, Mirta Raquel; Quintás, Guillermo; Mayer, Helmut; Vento, Máximo; Lendl, Bernhard

    2017-04-22

    The analytical determination of bovine milk proteins is important in food and non-food industrial applications, and yet rather labour-intensive, low-throughput wet-chemical methods have been employed for decades. This work proposes the use of external cavity-quantum cascade laser (EC-QCL) spectroscopy for the simultaneous quantification of the most abundant bovine milk proteins and the total protein content based on the chemical information contained in mid-infrared (IR) spectral features of the amide I band. Mid-IR spectra of protein standard mixtures were used to build partial least squares (PLS) regression models. Protein concentrations in commercial bovine milk samples were calculated after chemometric compensation of the matrix contribution employing science-based calibration (SBC), without sample pre-processing. The use of EC-QCL spectroscopy together with advanced multivariate data analysis allowed the determination of casein, α-lactalbumin, β-lactoglobulin and total protein content within several minutes. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Assessment of mechanical strain in the intact plantar fascia.

    PubMed

    Clark, Ross A; Franklyn-Miller, Andrew; Falvey, Eanna; Bryant, Adam L; Bartold, Simon; McCrory, Paul

    2009-09-01

    A method of measuring tri-axial plantar fascia strain that is minimally affected by external compressive force has not previously been reported. The purpose of this study was to assess the use of micro-strain gauges to examine strain in the different axes of the plantar fascia. Two intact limbs from a thawed, fresh-frozen cadaver were dissected, and a combination of five linear and one three-way rosette gauges were attached to the fascia of the foot and ankle. Strain was assessed during two trials, both consisting of an identical controlled, loaded dorsiflexion. An ICC analysis of the results revealed that the majority of gauge placement sites produced reliable measures (ICC>0.75). Strain mapping of the plantar fascia indicates that the majority of the strain is centrally longitudinal, which provides supportive evidence for finite element model analysis. Although micro-strain gauges do possess the limitation of calibration difficulty, they provide a repeatable measure of fascial strain and may provide benefits in situations that require tri-axial assessment or external compression.

  18. Prognostic models for complete recovery in ischemic stroke: a systematic review and meta-analysis.

    PubMed

    Jampathong, Nampet; Laopaiboon, Malinee; Rattanakanokchai, Siwanon; Pattanittum, Porjai

    2018-03-09

    Prognostic models have been increasingly developed to predict complete recovery in ischemic stroke. However, questions arise about the performance characteristics of these models. The aim of this study was to systematically review and synthesize performance of existing prognostic models for complete recovery in ischemic stroke. We searched journal publications indexed in PUBMED, SCOPUS, CENTRAL, ISI Web of Science and OVID MEDLINE from inception until 4 December, 2017, for studies designed to develop and/or validate prognostic models for predicting complete recovery in ischemic stroke patients. Two reviewers independently examined titles and abstracts, and assessed whether each study met the pre-defined inclusion criteria and also independently extracted information about model development and performance. We evaluated validation of the models by medians of the area under the receiver operating characteristic curve (AUC) or c-statistic and calibration performance. We used a random-effects meta-analysis to pool AUC values. We included 10 studies with 23 models developed from elderly patients with a moderately severe ischemic stroke, mainly in three high income countries. Sample sizes for each study ranged from 75 to 4441. Logistic regression was the only analytical strategy used to develop the models. The number of various predictors varied from one to 11. Internal validation was performed in 12 models with a median AUC of 0.80 (95% CI 0.73 to 0.84). One model reported good calibration. Nine models reported external validation with a median AUC of 0.80 (95% CI 0.76 to 0.82). Four models showed good discrimination and calibration on external validation. The pooled AUC of the two validation models of the same developed model was 0.78 (95% CI 0.71 to 0.85). The performance of the 23 models found in the systematic review varied from fair to good in terms of internal and external validation. Further models should be developed with internal and external validation in low and middle income countries.
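
    The pooled AUC reported above comes from a random-effects meta-analysis of the validation studies. A compact sketch of DerSimonian-Laird pooling of study-level AUC estimates is given below; the AUC values and variances are invented and do not reproduce the review's data.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects (DerSimonian-Laird) pooling of study-level estimates,
    e.g. AUC values with their within-study variances."""
    effects = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w = 1.0 / v                              # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)   # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)            # between-study variance
    w_re = 1.0 / (v + tau2)                  # random-effects weights
    pooled = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Hypothetical AUCs from two external validations of the same model.
print(dersimonian_laird(effects=[0.76, 0.80], variances=[0.0012, 0.0015]))
```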

  19. Multisite external validation of a risk prediction model for the diagnosis of blood stream infections in febrile pediatric oncology patients without severe neutropenia.

    PubMed

    Esbenshade, Adam J; Zhao, Zhiguo; Aftandilian, Catherine; Saab, Raya; Wattier, Rachel L; Beauchemin, Melissa; Miller, Tamara P; Wilkes, Jennifer J; Kelly, Michael J; Fernbach, Alison; Jeng, Michael; Schwartz, Cindy L; Dvorak, Christopher C; Shyr, Yu; Moons, Karl G M; Sulis, Maria-Luisa; Friedman, Debra L

    2017-10-01

    Pediatric oncology patients are at an increased risk of invasive bacterial infection due to immunosuppression. The risk of such infection in the absence of severe neutropenia (absolute neutrophil count ≥ 500/μL) is not well established, and a validated prediction model for blood stream infection (BSI) risk offers clinical usefulness. A 6-site retrospective external validation was conducted using a previously published risk prediction model for BSI in febrile pediatric oncology patients without severe neutropenia: the Esbenshade/Vanderbilt (EsVan) model. A reduced model (EsVan2) excluding 2 less clinically reliable variables was also created using the initial EsVan model derivation cohort, and was validated using all 5 external validation cohorts. One data set was used only in sensitivity analyses because some variables were missing. From the 5 primary data sets, there were a total of 1197 febrile episodes and 76 episodes of bacteremia. The overall C statistic for predicting bacteremia was 0.695, with a calibration slope of 0.50 for the original model and a calibration slope of 1.0 when recalibration was applied to the model. The model performed better in predicting high-risk bacteremia (gram-negative or Staphylococcus aureus infection) versus BSI alone, with a C statistic of 0.801 and a calibration slope of 0.65. The EsVan2 model outperformed the EsVan model across data sets with a C statistic of 0.733 for predicting BSI and a C statistic of 0.841 for high-risk BSI. The results of this external validation demonstrated that the EsVan and EsVan2 models are able to predict BSI across multiple performance sites and, once validated and implemented prospectively, could assist in decision making in clinical practice. Cancer 2017;123:3781-3790. © 2017 American Cancer Society.
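
    Calibration slopes such as those reported above are typically estimated by regressing the observed outcomes on the model's linear predictor in the validation data: a slope near 1 indicates good calibration, and refitting the intercept and slope provides a simple logistic recalibration. The sketch below uses simulated data with deliberately miscalibrated predictions; it is a generic illustration, not the EsVan analysis.

```python
import numpy as np
import statsmodels.api as sm

# Simulated external validation data: lp is the linear predictor (log-odds)
# from the previously developed model, y the observed binary outcomes.
rng = np.random.default_rng(2)
lp = rng.normal(-2.5, 1.0, size=1200)
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * lp - 0.8))))   # miscalibrated truth

# Calibration slope: logistic regression of outcome on the linear predictor.
fit = sm.Logit(y, sm.add_constant(lp)).fit(disp=0)
intercept, slope = fit.params
print(f"calibration intercept={intercept:.2f}, slope={slope:.2f}")

# Logistic recalibration: update the predicted risks with the fitted
# intercept and slope before using the model in the new setting.
recalibrated_risk = 1 / (1 + np.exp(-(intercept + slope * lp)))
```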

  20. A confidence metric for using neurobiological feedback in actor-critic reinforcement learning based brain-machine interfaces

    PubMed Central

    Prins, Noeline W.; Sanchez, Justin C.; Prasad, Abhishek

    2014-01-01

    Brain-Machine Interfaces (BMIs) can be used to restore function in people living with paralysis. Current BMIs require extensive calibration that increases set-up times, and external inputs for decoder training may be difficult to produce in paralyzed individuals. Both these factors have presented challenges in transitioning the technology from research environments to activities of daily living (ADL). For BMIs to be seamlessly used in ADL, these issues should be handled with minimal external input, thus reducing the need for a technician/caregiver to calibrate the system. Reinforcement Learning (RL) based BMIs are a good tool to be used when there is no external training signal and can provide an adaptive modality to train BMI decoders. However, RL based BMIs are sensitive to the feedback provided to adapt the BMI. In actor-critic BMIs, this feedback is provided by the critic, and the overall system performance is limited by the critic accuracy. In this work, we developed an adaptive BMI that could handle inaccuracies in the critic feedback in an effort to produce more accurate RL based BMIs. We developed a confidence measure, which indicated how appropriate the feedback is for updating the decoding parameters of the actor. The results show that with the new update formulation, the critic accuracy is no longer a limiting factor for the overall performance. We tested and validated the system on three different data sets: synthetic data generated by an Izhikevich neural spiking model, synthetic data with a Gaussian noise distribution, and data collected from a non-human primate engaged in a reaching task. All results indicated that the system with the critic confidence built in always outperformed the system without the critic confidence. Results of this study suggest the potential application of the technique in developing an autonomous BMI that does not need an external signal for training or extensive calibration. PMID:24904257

  1. Review and evaluation of performance measures for survival prediction models in external validation settings.

    PubMed

    Rahman, M Shafiqur; Ambler, Gareth; Choodari-Oskooei, Babak; Omar, Rumana Z

    2017-04-18

    When developing a prediction model for survival data it is essential to validate its performance in external validation settings using appropriate performance measures. Although a number of such measures have been proposed, there is only limited guidance regarding their use in the context of model validation. This paper reviewed and evaluated a wide range of performance measures to provide some guidelines for their use in practice. An extensive simulation study based on two clinical datasets was conducted to investigate the performance of the measures in external validation settings. Measures were selected from categories that assess the overall performance, discrimination and calibration of a survival prediction model. Some of these have been modified to allow their use with validation data, and a case study is provided to describe how these measures can be estimated in practice. The measures were evaluated with respect to their robustness to censoring and ease of interpretation. All measures are implemented, or are straightforward to implement, in statistical software. Most of the performance measures were reasonably robust to moderate levels of censoring. One exception was Harrell's concordance measure, which tended to increase as censoring increased. We recommend that Uno's concordance measure be used to quantify concordance when there are moderate levels of censoring. Alternatively, Gönen and Heller's measure could be considered, especially if censoring is very high, but we suggest that the prediction model be re-calibrated first. We also recommend that Royston's D be routinely reported to assess discrimination, since it has an appealing interpretation. The calibration slope is useful for both internal and external validation settings, and we recommend reporting it routinely. Our recommendation would be to use any of the predictive accuracy measures and provide the corresponding predictive accuracy curves. In addition, we recommend investigating the characteristics of the validation data, such as the level of censoring and the distribution of the prognostic index derived in the validation setting, before choosing the performance measures.
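
    For reference, Harrell's concordance measure discussed above can be computed by a direct pairwise comparison over usable pairs, as in the naive O(n²) sketch below; the survival times, event indicators and prognostic-index values are invented.

```python
import numpy as np

def harrell_c(time, event, risk):
    """Harrell's concordance index for right-censored survival data.
    A pair is usable when the subject with the shorter time had an event;
    it is concordant when that subject also has the higher predicted risk."""
    time, event, risk = map(np.asarray, (time, event, risk))
    concordant, usable = 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if time[i] < time[j] and event[i] == 1:
                usable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / usable

# Invented validation data: follow-up times, event indicators (1 = event,
# 0 = censored) and prognostic-index values from a prediction model.
print(harrell_c(time=[5, 8, 3, 10, 6], event=[1, 0, 1, 1, 0],
                risk=[2.1, 0.5, 2.8, 0.2, 1.0]))
```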

  2. Development of Natural Flaw Samples for Evaluating Nondestructive Testing Methods for Foam Thermal Protection Systems

    NASA Technical Reports Server (NTRS)

    Workman, Gary L.; Davis, Jason; Farrington, Seth; Walker, James

    2007-01-01

    Low-density polyurethane foam has been an important insulation material for space launch vehicles for several decades. The potential for damage from foam breaking away from the NASA External Tank was not realized until foam impacts on the Columbia Orbiter vehicle caused damage to its leading-edge thermal protection system (TPS). Development of improved inspection techniques for the foam TPS is necessary to prevent similar occurrences in the future. Foamed panels with drilled holes for volumetric flaws and Teflon inserts to simulate debonded conditions have been used to evaluate and calibrate nondestructive testing (NDT) methods. Unfortunately, the symmetric edges and dissimilar materials used in the preparation of these simulated flaws provide an artificially large signal, while very little signal is generated from the actual defects themselves. In other words, the artificial defects in the foam test panels do not produce the same signals as natural defects in the ET foam TPS. A project to create more realistic voids, similar to those that actually occur during manufacturing operations, was begun in order to improve detection of critical voids during inspections. This presentation describes approaches taken to create more natural voids in foam TPS in order to provide a more realistic evaluation of what the NDT methods can detect. These flaw creation techniques were developed with both sprayed foam and poured foam used for insulation on the External Tank. Test panels with simulated defects have been used to evaluate NDT methods for the inspection of the External Tank. A comparison of images of natural flaws and machined flaws generated from backscatter x-ray radiography, x-ray laminography, terahertz imaging and millimeter wave imaging shows significant differences in identifying defect regions.

  3. External calibration of polarimetric radars using point and distributed targets

    NASA Technical Reports Server (NTRS)

    Yueh, S. H.; Kong, J. A.; Shin, R. T.

    1991-01-01

    Polarimetric calibration algorithms using combinations of point targets and reciprocal distributed targets are developed. From the reciprocity relations of distributed targets, an equivalent point target response is derived. The problem of polarimetric calibration using two point targets and one distributed target then reduces to that using three point targets, which has been previously solved. For calibration using one point target and one reciprocal distributed target, two cases are analyzed, with the point target being a trihedral reflector or a polarimetric active radar calibrator (PARC). For both cases, the general solutions of the system distortion matrices are written as the product of a particular solution and a matrix with one free parameter. For the trihedral-reflector case, this free parameter is determined by assuming azimuthal symmetry for the distributed target. For the PARC case, knowledge of one ratio of two covariance matrix elements of the distributed target is required to solve for the free parameter. Numerical results are simulated to demonstrate the usefulness of the developed algorithms.

  4. External calibration of polarimetric radars using point and distributed targets

    NASA Astrophysics Data System (ADS)

    Yueh, S. H.; Kong, J. A.; Shin, R. T.

    1991-08-01

    Polarimetric calibration algorithms using combinations of point targets and reciprocal distributed targets are developed. From the reciprocity relations of distributed targets, an equivalent point target response is derived. The problem of polarimetric calibration using two point targets and one distributed target then reduces to that using three point targets, which has been previously solved. For calibration using one point target and one reciprocal distributed target, two cases are analyzed, with the point target being a trihedral reflector or a polarimetric active radar calibrator (PARC). For both cases, the general solutions of the system distortion matrices are written as the product of a particular solution and a matrix with one free parameter. For the trihedral-reflector case, this free parameter is determined by assuming azimuthal symmetry for the distributed target. For the PARC case, knowledge of one ratio of two covariance matrix elements of the distributed target is required to solve for the free parameter. Numerical results are simulated to demonstrate the usefulness of the developed algorithms.

  5. The Effect of Inappropriate Calibration: Three Case Studies in Molecular Ecology

    PubMed Central

    Ho, Simon Y. W.; Saarma, Urmas; Barnett, Ross; Haile, James; Shapiro, Beth

    2008-01-01

    Time-scales estimated from sequence data play an important role in molecular ecology. They can be used to draw correlations between evolutionary and palaeoclimatic events, to measure the tempo of speciation, and to study the demographic history of an endangered species. In all of these studies, it is paramount to have accurate estimates of time-scales and substitution rates. Molecular ecological studies typically focus on intraspecific data that have evolved on genealogical scales, but often these studies inappropriately employ deep fossil calibrations or canonical substitution rates (e.g., 1% per million years for birds and mammals) for calibrating estimates of divergence times. These approaches can yield misleading estimates of molecular time-scales, with significant impacts on subsequent evolutionary and ecological inferences. We illustrate this calibration problem using three case studies: avian speciation in the late Pleistocene, the demographic history of bowhead whales, and the Pleistocene biogeography of brown bears. For each data set, we compare the date estimates that are obtained using internal and external calibration points. In all three cases, the conclusions are significantly altered by the application of revised, internally-calibrated substitution rates. Collectively, the results emphasise the importance of judicious selection of calibrations for analyses of recent evolutionary events. PMID:18286172

  6. The effect of inappropriate calibration: three case studies in molecular ecology.

    PubMed

    Ho, Simon Y W; Saarma, Urmas; Barnett, Ross; Haile, James; Shapiro, Beth

    2008-02-20

    Time-scales estimated from sequence data play an important role in molecular ecology. They can be used to draw correlations between evolutionary and palaeoclimatic events, to measure the tempo of speciation, and to study the demographic history of an endangered species. In all of these studies, it is paramount to have accurate estimates of time-scales and substitution rates. Molecular ecological studies typically focus on intraspecific data that have evolved on genealogical scales, but often these studies inappropriately employ deep fossil calibrations or canonical substitution rates (e.g., 1% per million years for birds and mammals) for calibrating estimates of divergence times. These approaches can yield misleading estimates of molecular time-scales, with significant impacts on subsequent evolutionary and ecological inferences. We illustrate this calibration problem using three case studies: avian speciation in the late Pleistocene, the demographic history of bowhead whales, and the Pleistocene biogeography of brown bears. For each data set, we compare the date estimates that are obtained using internal and external calibration points. In all three cases, the conclusions are significantly altered by the application of revised, internally-calibrated substitution rates. Collectively, the results emphasise the importance of judicious selection of calibrations for analyses of recent evolutionary events.

  7. EPA Traceability Protocol for Assay and Certification of Gaseous Calibration Standards

    EPA Pesticide Factsheets

    EPA's air monitoring regulations require the use of Protocol Gases to set air pollution monitors. This protocol balances the government's need for accuracy with the producers' need for flexibility, low cost, and minimum external oversight.

  8. A projector calibration method for monocular structured light system based on digital image correlation

    NASA Astrophysics Data System (ADS)

    Feng, Zhixin

    2018-02-01

    Projector calibration is crucial for a camera-projector three-dimensional (3-D) structured light measurement system, which has one camera and one projector. In this paper, a novel projector calibration method is proposed based on digital image correlation. In the method, the projector is viewed as an inverse camera, and a planar calibration board with feature points is used to calibrate the projector. During the calibration process, a random speckle pattern is projected onto the calibration board at different orientations to establish the correspondences between projector images and camera images. Thereby, a dataset for projector calibration is generated. Then the projector can be calibrated using a well-established camera calibration algorithm. The experimental results confirm that the proposed method is accurate and reliable for projector calibration.
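
    Once the speckle-correlation step has established correspondences between the board's feature points and projector pixels, the projector treated as an inverse camera can be calibrated with a standard routine such as OpenCV's calibrateCamera. The sketch below substitutes synthetic correspondences generated from an assumed projector model so that it runs end to end; the board geometry, intrinsics and poses are not from the paper.

```python
import numpy as np
import cv2

# Synthetic stand-in for the DIC correspondences: board feature points in
# object space and their projector-pixel coordinates for several poses,
# generated here from a known projector model so the script is runnable.
board = np.zeros((7 * 9, 3), np.float32)
board[:, :2] = np.mgrid[0:9, 0:7].T.reshape(-1, 2) * 20.0   # 20 mm pitch

K_true = np.array([[1400.0, 0.0, 960.0], [0.0, 1400.0, 540.0], [0.0, 0.0, 1.0]])
rng = np.random.default_rng(3)
obj_points, proj_points = [], []
for _ in range(10):                          # ten board orientations
    rvec = rng.normal(0.0, 0.2, 3)
    tvec = np.array([rng.normal(0, 30), rng.normal(0, 30), 600 + rng.normal(0, 50)])
    img, _ = cv2.projectPoints(board, rvec, tvec, K_true, np.zeros(5))
    obj_points.append(board)
    proj_points.append(img.astype(np.float32))

# Zhang-style calibration of the projector viewed as an inverse camera.
rms, K_proj, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, proj_points, (1920, 1080), None, None)
print("reprojection RMS (px):", rms)
print("estimated projector intrinsics:\n", K_proj)
```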

  9. Risk scores for outcome in bacterial meningitis: Systematic review and external validation study.

    PubMed

    Bijlsma, Merijn W; Brouwer, Matthijs C; Bossuyt, Patrick M; Heymans, Martijn W; van der Ende, Arie; Tanck, Michael W T; van de Beek, Diederik

    2016-11-01

    To perform an external validation study of risk scores, identified through a systematic review, predicting outcome in community-acquired bacterial meningitis. MEDLINE and EMBASE were searched for articles published between January 1960 and August 2014. Performance was evaluated in 2108 episodes of adult community-acquired bacterial meningitis from two nationwide prospective cohort studies by the area under the receiver operating characteristic curve (AUC), the calibration curve, calibration slope or Hosmer-Lemeshow test, and the distribution of calculated risks. Nine risk scores were identified predicting death, neurological deficit or death, or unfavorable outcome at discharge in bacterial meningitis, pneumococcal meningitis and invasive meningococcal disease. Most studies had shortcomings in design, analyses, and reporting. Evaluation showed AUCs of 0.59 (0.57-0.61) and 0.74 (0.71-0.76) in bacterial meningitis, 0.67 (0.64-0.70) in pneumococcal meningitis, and 0.81 (0.73-0.90), 0.82 (0.74-0.91), 0.84 (0.75-0.93), 0.84 (0.76-0.93), 0.85 (0.75-0.95), and 0.90 (0.83-0.98) in meningococcal meningitis. Calibration curves showed adequate agreement between predicted and observed outcomes for four scores, but statistical tests indicated poor calibration of all risk scores. One score could be recommended for the interpretation and design of bacterial meningitis studies. None of the existing scores performed well enough to recommend routine use in individual patient management. Copyright © 2016 The British Infection Association. Published by Elsevier Ltd. All rights reserved.

  10. Hand-Eye Calibration of Robonaut

    NASA Technical Reports Server (NTRS)

    Nickels, Kevin; Huber, Eric

    2004-01-01

    NASA's Human Space Flight program depends heavily on Extra-Vehicular Activities (EVAs) performed by human astronauts. EVA is a high-risk environment that requires extensive training and ground support. In collaboration with the Defense Advanced Research Projects Agency (DARPA), NASA is conducting a ground development project to produce a robotic astronaut's assistant, called Robonaut, that could help reduce human EVA time and workload. The project described in this paper designed and implemented a hand-eye calibration scheme for Robonaut, Unit A. The intent of this calibration scheme is to improve the hand-eye coordination of the robot. The basic approach is to use kinematic and stereo vision measurements, namely the joint angles self-reported by the right arm and 3-D positions of a calibration fixture as measured by vision, to estimate the transformation from Robonaut's base coordinate system to its hand coordinate system and to its vision coordinate system. Two methods of gathering data sets have been developed, along with software to support each. In the first, the system observes the robotic arm and neck angles as the robot is operated under external control, measures the 3-D position of a calibration fixture using Robonaut's stereo cameras, and logs these data. In the second, the system drives the arm and neck through a set of pre-recorded configurations, and data are again logged. Two variants of the calibration scheme have been developed. The full calibration scheme is a batch procedure that estimates all relevant kinematic parameters of the arm and neck of the robot. The daily calibration scheme estimates only joint offsets for each rotational joint on the arm and neck, which are assumed to change from day to day. The schemes have been designed to be automatic and easy to use so that the robot can be fully recalibrated when needed, such as after repair or upgrade, and can be partially recalibrated after each power cycle. The scheme has been implemented on Robonaut Unit A and has been shown to reduce the mismatch between kinematically derived positions and visually derived positions from a mean of 13.75 cm using the previous calibration to means of 1.85 cm using a full calibration and 2.02 cm using a suboptimal but faster daily calibration. This improved calibration has already enabled the robot to more accurately reach for and grasp objects that it sees within its workspace. The system has been used to support an autonomous wrench-grasping experiment and significantly improved the workspace positioning of the hand based on visually derived wrench position estimates.
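
    The batch estimation of the camera-to-hand relationship described above is closely related to classic AX = XB hand-eye calibration. The sketch below runs OpenCV's calibrateHandEye on synthetic, noise-free gripper and fixture poses; the transforms and the number of configurations are assumptions for illustration and are not Robonaut's actual kinematics or procedure.

```python
import numpy as np
import cv2

def rt(rvec, t):
    """Build a 4x4 homogeneous transform from a rotation vector and translation."""
    T = np.eye(4)
    T[:3, :3] = cv2.Rodrigues(np.asarray(rvec, float).reshape(3, 1))[0]
    T[:3, 3] = t
    return T

rng = np.random.default_rng(4)
X_true = rt([0.1, -0.2, 0.05], [0.06, 0.02, 0.10])   # camera pose in the hand frame
T_target = rt([0.3, 0.1, -0.2], [0.8, 0.1, 0.5])     # fixture pose in the base frame

R_g2b, t_g2b, R_t2c, t_t2c = [], [], [], []
for _ in range(15):                                   # fifteen arm configurations
    G = rt(rng.normal(0, 0.5, 3), rng.normal(0, 0.3, 3))    # gripper in base frame
    C = np.linalg.inv(G @ X_true) @ T_target                # fixture in camera frame
    R_g2b.append(G[:3, :3]); t_g2b.append(G[:3, 3].reshape(3, 1))
    R_t2c.append(C[:3, :3]); t_t2c.append(C[:3, 3].reshape(3, 1))

# Classic AX = XB hand-eye calibration (Tsai-Lenz by default in OpenCV).
R_c2g, t_c2g = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c)
print("recovered camera-to-hand translation:", t_c2g.ravel())
print("true      camera-to-hand translation:", X_true[:3, 3])
```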

  11. External quality assurance programs as a tool for verifying standardization of measurement procedures: Pilot collaboration in Europe.

    PubMed

    Perich, C; Ricós, C; Alvarez, V; Biosca, C; Boned, B; Cava, F; Doménech, M V; Fernández-Calle, P; Fernández-Fernández, P; García-Lario, J V; Minchinela, J; Simón, M; Jansen, R

    2014-05-15

    Current external quality assurance schemes have been classified into six categories, according to their ability to verify the degree of standardization of the participating measurement procedures. SKML (Netherlands) is a Category 1 EQA scheme (commutable EQA materials with values assigned by reference methods), whereas SEQC (Spain) is a Category 5 scheme (replicate analyses of non-commutable materials with no values assigned by reference methods). The results obtained by a group of Spanish laboratories participating in a pilot study organized by SKML are examined, with the aim of pointing out the improvements over our current scheme that a Category 1 program could provide. Imprecision and bias are calculated for each analyte and laboratory, and compared with quality specifications derived from biological variation. Of the 26 analytes studied, 9 had results comparable with those from reference methods, and 10 analytes did not have comparable results. The remaining 7 analytes measured did not have available reference method values, and in these cases, comparison with the peer group showed comparable results. The reasons for disagreement in the second group can be summarized as: use of non-standard methods (IFCC without exogenous pyridoxal phosphate for AST and ALT, Jaffé kinetic at low-normal creatinine concentrations and with eGFR); non-commutability of the reference material used to assign values to the routine calibrator (calcium, magnesium and sodium); use of reference materials without established commutability instead of reference methods for AST and GGT, and lack of a systematic effort by manufacturers to harmonize results. Results obtained in this work demonstrate the important role of external quality assurance programs using commutable materials with values assigned by reference methods to correctly monitor the standardization of laboratory tests with consequent minimization of risk to patients. Copyright © 2013 Elsevier B.V. All rights reserved.

  12. Derivation and external validation of a case mix model for the standardized reporting of 30-day stroke mortality rates.

    PubMed

    Bray, Benjamin D; Campbell, James; Cloud, Geoffrey C; Hoffman, Alex; James, Martin; Tyrrell, Pippa J; Wolfe, Charles D A; Rudd, Anthony G

    2014-11-01

    Case mix adjustment is required to allow valid comparison of outcomes across care providers. However, there is a lack of externally validated models suitable for use in unselected stroke admissions. We therefore aimed to develop and externally validate prediction models to enable comparison of 30-day post-stroke mortality outcomes using routine clinical data. Models were derived (n=9,000 patients) and internally validated (n=18,169 patients) using data from the Sentinel Stroke National Audit Program, the national register of acute stroke in England and Wales. External validation (n=1,470 patients) was performed in the South London Stroke Register, a population-based longitudinal study. Models were fitted using generalized estimating equations. Discrimination and calibration were assessed using receiver operating characteristic curve analysis and correlation plots. Two final models were derived. Model A included age (<60, 60-69, 70-79, 80-89, and ≥90 years), National Institutes of Health Stroke Scale (NIHSS) score on admission, presence of atrial fibrillation on admission, and stroke type (ischemic versus primary intracerebral hemorrhage). Model B was similar but included only the consciousness component of the NIHSS in place of the full NIHSS. Both models showed excellent discrimination and calibration in internal and external validation. The c-statistics in external validation were 0.87 (95% confidence interval, 0.84-0.89) and 0.86 (95% confidence interval, 0.83-0.89) for models A and B, respectively. We have derived and externally validated 2 models to predict mortality in unselected patients with acute stroke using commonly collected clinical variables. In settings where the ability to record the full NIHSS on admission is limited, the level of consciousness component of the NIHSS provides a good approximation of the full NIHSS for mortality prediction. © 2014 American Heart Association, Inc.

  13. Development of Decision Support Formulas for the Prediction of Bladder Outlet Obstruction and Prostatic Surgery in Patients With Lower Urinary Tract Symptom/Benign Prostatic Hyperplasia: Part II, External Validation and Usability Testing of a Smartphone App.

    PubMed

    Choo, Min Soo; Jeong, Seong Jin; Cho, Sung Yong; Yoo, Changwon; Jeong, Chang Wook; Ku, Ja Hyeon; Oh, Seung-June

    2017-04-01

    We aimed to externally validate the prediction model we developed for having bladder outlet obstruction (BOO) and requiring prostatic surgery using 2 independent data sets from tertiary referral centers, and also aimed to validate a mobile app for using this model through usability testing. Formulas and nomograms predicting whether a subject has BOO and needs prostatic surgery were validated with an external validation cohort from Seoul National University Bundang Hospital and Seoul Metropolitan Government-Seoul National University Boramae Medical Center between January 2004 and April 2015. A smartphone-based app was developed, and 8 young urologists were enrolled for usability testing to identify any human factor issues of the app. A total of 642 patients were included in the external validation cohort. No significant differences were found in the baseline characteristics of major parameters between the original (n=1,179) and the external validation cohort, except for the maximal flow rate. Predictions of requiring prostatic surgery in the validation cohort showed a sensitivity of 80.6%, a specificity of 73.2%, a positive predictive value of 49.7%, a negative predictive value of 92.0%, and an area under the receiver operating characteristic curve of 0.84. The calibration plot indicated that the predictions have good correspondence. The decision curve also showed a high net benefit. Similar evaluation results using the external validation cohort were seen in the predictions of having BOO. Overall results of the usability test demonstrated that the app was user-friendly with no major human factor issues. External validation of this newly developed prediction model demonstrated a moderate level of discrimination, adequate calibration, and high net benefit gains for predicting both having BOO and requiring prostatic surgery. In addition, the smartphone app implementing the prediction model was user-friendly with no major human factor issues.
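
    The headline validation metrics quoted above (sensitivity, specificity, PPV and NPV) follow directly from a 2x2 confusion table. A minimal sketch, assuming `y_true` holds the observed outcomes (e.g. surgery required) and `y_pred` the dichotomized model predictions; both names are hypothetical placeholders and this is not the authors' code.

    ```python
    import numpy as np

    def binary_test_metrics(y_true, y_pred):
        """Sensitivity, specificity, PPV and NPV from binary predictions."""
        y_true = np.asarray(y_true).astype(bool)
        y_pred = np.asarray(y_pred).astype(bool)
        tp = np.sum(y_pred & y_true)
        tn = np.sum(~y_pred & ~y_true)
        fp = np.sum(y_pred & ~y_true)
        fn = np.sum(~y_pred & y_true)
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
        }
    ```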

  14. The drift chamber array at the external target facility in HIRFL-CSR

    NASA Astrophysics Data System (ADS)

    Sun, Y. Z.; Sun, Z. Y.; Wang, S. T.; Duan, L. M.; Sun, Y.; Yan, D.; Tang, S. W.; Yang, H. R.; Lu, C. G.; Ma, P.; Yu, Y. H.; Zhang, X. H.; Yue, K.; Fang, F.; Su, H.

    2018-06-01

    A drift chamber array at the External Target Facility in HIRFL-CSR has been constructed for three-dimensional particle tracking in high-energy radioactive ion beam experiments. The design, readout, track reconstruction program and calibration procedures for the detector are described. The drift chamber array was tested in a 311 AMeV ⁴⁰Ar beam experiment. The detector performance based on the measurements of the beam test is presented. A spatial resolution of 230 μm is achieved.

  15. Calibration strategy for the COROT photometry

    NASA Astrophysics Data System (ADS)

    Buey, J.-T.; Auvergne, M.; Lapeyrere, V.; Boumier, P.

    2004-01-01

    Like Eddington, the COROT photometer will measure very small fluctuations on a large signal: the amplitudes of planetary transits and solar-like oscillations are expressed in ppm (parts per million). For such an instrument, specific calibration has to be done during the different phases of the development of the instrument and of all the subsystems. Two main activities have to be taken into account: (i) calibration during the study phase; and (ii) calibration of the sub-systems and building of numerical models. The first item allows us to clearly understand all the perturbations (internal and external) and to identify their relative impacts on the expected signal (by numerical models including expected values of perturbations and sensitivity of the instrument). Methods and a schedule for the calibration process can also be introduced, in good agreement with the development plan of the instrument. The second item is more related to the measurement of the sensitivity of the instrument and all its sub-systems. As the instrument is designed to be as stable as possible, we have to mix measurements (with larger fluctuations of parameters than expected) and numerical models. Some typical reasons for this are: (i) there are many parameters to introduce in the measurements, and results from some models (bread-board, for example) may be extrapolated to the flight model; (ii) larger fluctuations than expected are used (to measure the sensitivity precisely) and numerical models then give the real value of noise with the expected fluctuations; and (iii) characteristics of sub-systems may be measured and models used to give the sensitivity of the whole system built with them, as end-to-end measurements may be impossible (time, budget, physical limitations). Also, house-keeping measurements have to be set up on the critical parts of the sub-systems: measurements on thermal probes, power supply, pointing, etc. All these house-keeping data are used during ground calibration and during the flight, so that correct correlation between signal and house-keeping can be achieved.

  16. Real-time evaluation of polyphenol oxidase (PPO) activity in lychee pericarp based on weighted combination of spectral data and image features as determined by fuzzy neural network.

    PubMed

    Yang, Yi-Chao; Sun, Da-Wen; Wang, Nan-Nan; Xie, Anguo

    2015-07-01

    A novel method using the hyperspectral imaging technique with a weighted combination of spectral data and image features by fuzzy neural network (FNN) was proposed for real-time prediction of polyphenol oxidase (PPO) activity in lychee pericarp. Lychee images were obtained by a hyperspectral reflectance imaging system operating in the range of 400-1000 nm. A support vector machine-recursive feature elimination (SVM-RFE) algorithm was applied to eliminate variables from all bands carrying little or no information for the prediction, resulting in a reduced set of optimal wavelengths. Spectral information at the optimal wavelengths and image color features were then used respectively to develop calibration models for the prediction of PPO in pericarp during storage, and the results of the two models were compared. In order to improve the prediction accuracy, a decision strategy was developed based on a weighted combination of spectral data and image features, in which the weights were determined by FNN for a better estimation of PPO activity. The results showed that the combined decision model was the best among all of the calibration models, with high R² values of 0.9117 and 0.9072 and low RMSEs of 0.45% and 0.459% for calibration and prediction, respectively. These results demonstrate that the proposed weighted combined decision method has great potential for improving model performance. The proposed technique could be used for a better prediction of other internal and external quality attributes of fruits. Copyright © 2015 Elsevier B.V. All rights reserved.
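
    For readers unfamiliar with the two building blocks named above, the sketch below shows one plausible way to perform SVM-RFE band selection and a weighted fusion of spectral and image predictions in Python; the fuzzy-neural-network weighting used in the paper is replaced here by fixed weights, and all array and function names are hypothetical.

    ```python
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.feature_selection import RFE

    # X_spec: (n_samples, n_bands) reflectance spectra; y: measured PPO activity.
    # Both are hypothetical placeholders for data like that described above.
    def select_optimal_wavelengths(X_spec, y, n_keep=10):
        """SVM-RFE style band selection using a linear SVR as the ranking model."""
        rfe = RFE(estimator=SVR(kernel="linear"), n_features_to_select=n_keep)
        rfe.fit(X_spec, y)
        return np.where(rfe.support_)[0]   # indices of retained wavelengths

    def weighted_fusion(pred_spectral, pred_image, w_spectral=0.6):
        """Weighted combination of two model outputs; the paper learns the
        weights with a fuzzy neural network, here they are fixed constants."""
        return w_spectral * pred_spectral + (1.0 - w_spectral) * pred_image
    ```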

  17. Precision Photometry and Astrometry from Pan-STARRS

    NASA Astrophysics Data System (ADS)

    Magnier, Eugene A.; Pan-STARRS Team

    2018-01-01

    The Pan-STARRS 3pi Survey has been calibrated with excellent precision for both astrometry and photometry. The Pan-STARRS Data Release 1, opened to the public on 2016 Dec 16, provides photometry in 5 well-calibrated, well-defined bandpasses (grizy) astrometrically registered to the Gaia frame. Comparisons with other surveys illustrate the high quality of the calibration and provide tests of remaining systematic errors in both Pan-STARRS and those external surveys. With photometry and astrometry of roughly 3 billion astronomical objects, the Pan-STARRS DR1 has substantial overlap with Gaia, SDSS, 2MASS and other surveys. I will discuss the astrometric tie between Pan-STARRS DR1 and Gaia and show comparisons between Pan-STARRS and other large-scale surveys.

  18. Development of a solid-phase microextraction gas chromatography with microelectron-capture detection method for a multiresidue analysis of pesticides in bovine milk.

    PubMed

    Fernandez-Alvarez, Maria; Llompart, Maria; Lamas, J Pablo; Lores, Marta; Garcia-Jares, Carmen; Cela, Rafael; Dagnac, Thierry

    2008-06-09

    A simple and rapid method based on the solid-phase microextraction (SPME) technique followed by gas chromatography with microelectron-capture detection (GC-microECD) was developed for the simultaneous determination of more than 30 pesticides (pyrethroids and organochlorines, among others) in milk. To our knowledge, this is the first application of SPME for the determination of pyrethroid pesticides in milk. Negative matrix effects due to the complexity and lipophilicity of the studied matrix were reduced by diluting the sample with distilled water. A 2^(5-1) fractional factorial design was performed to assess the influence of several factors (type of fiber coating, sampling mode, stirring, extraction temperature, and addition of sodium chloride) on the SPME procedure and to determine the optimal extraction conditions. After optimization of all the significant variables and interactions, the recommended procedure was established as follows: DSPME (using a polydimethylsiloxane (PDMS)/divinylbenzene (DVB) coating) of 1 mL of milk sample diluted with Milli-Q water (1:10 dilution ratio), at 100 °C, under stirring for 30 min. The proposed method showed good linearity and high sensitivity, with limits of detection (LOD) at the sub-ng mL⁻¹ level. Within-day and between-day precisions were also evaluated (RSD < 15%). One of the most important attainments of this work was the use of external calibration with milk-matched standards to quantify the levels of the target analytes. The method was tested with liquid and powdered milk samples with different fat contents covering the whole commercial range. The efficiency of the extraction process was studied at several analyte concentration levels, obtaining high recoveries (>80% in most cases) for different types of full-fat milks. The optimized procedure was validated with a powdered milk certified reference material, which was quantified using external calibration and standard addition protocols. Finally, the DSPME-GC-microECD methodology was applied to the analysis of milk samples collected at dairy cattle farms in NW Spain.

  19. Automatic multi-camera calibration for deployable positioning systems

    NASA Astrophysics Data System (ADS)

    Axelsson, Maria; Karlsson, Mikael; Rudner, Staffan

    2012-06-01

    Surveillance with automated positioning and tracking of subjects and vehicles in 3D is desired in many defence and security applications. Camera systems with stereo or multiple cameras are often used for 3D positioning. In such systems, accurate camera calibration is needed to obtain a reliable 3D position estimate. There is also a need for automated camera calibration to facilitate fast deployment of semi-mobile multi-camera 3D positioning systems. In this paper we investigate a method for automatic calibration of the extrinsic camera parameters (relative camera position and orientation) of a multi-camera positioning system. It is based on estimation of the essential matrix between each camera pair using the 5-point method for intrinsically calibrated cameras. The method is compared to a manual calibration method using real HD video data from a field trial with a multi-camera positioning system. The method is also evaluated on simulated data from a stereo camera model. The results show that the reprojection error of the automated camera calibration method is close to or smaller than the error for the manual calibration method and that the automated calibration method can replace the manual calibration.
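
    The essential-matrix step described above can be illustrated with OpenCV, whose findEssentialMat routine implements the 5-point algorithm inside a RANSAC loop. This is only a sketch of that single step (the translation is recovered up to scale, and the full system would still need global registration and bundle adjustment); the inputs pts1, pts2 and K are hypothetical placeholders for matched image points and the intrinsic matrix.

    ```python
    import cv2
    import numpy as np

    def relative_pose_from_matches(pts1, pts2, K):
        """Relative rotation R and unit-norm translation t between two
        intrinsically calibrated cameras from matched image points.
        pts1, pts2: (N, 2) float arrays of corresponding pixel coordinates;
        K: 3x3 intrinsic matrix."""
        E, inliers = cv2.findEssentialMat(pts1, pts2, K,
                                          method=cv2.RANSAC, threshold=1.0)
        # Decompose E and pick the physically valid (R, t) by cheirality check
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
        return R, t
    ```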

  20. A Novel Multi-Camera Calibration Method based on Flat Refractive Geometry

    NASA Astrophysics Data System (ADS)

    Huang, S.; Feng, M. C.; Zheng, T. X.; Li, F.; Wang, J. Q.; Xiao, L. F.

    2018-03-01

    Multi-camera calibration plays an important role in many fields. In this paper, we present a novel multi-camera calibration method based on flat refractive geometry. All cameras can acquire calibration images of a transparent glass calibration board (TGCB) at the same time. The use of a TGCB introduces refraction, which can generate calibration errors. The theory of flat refractive geometry is employed to eliminate this error, so the new method accounts for the refraction caused by the TGCB. Moreover, the bundle adjustment method is used to minimize the reprojection error and obtain optimized calibration results. Finally, the four-camera calibration results on real data show that the mean value and standard deviation of the reprojection error of our method are 4.3411e-05 and 0.4553 pixels, respectively. The experimental results show that the proposed method is accurate and reliable.

  1. A study of short test and charge retention test methods for nickel-cadmium spacecraft cells

    NASA Technical Reports Server (NTRS)

    Scott, W. R.

    1975-01-01

    Methods for testing nickel-cadmium cells for internal shorts and charge retention were studied. Included were (a) open circuit voltage decay after a brief charge, (b) open circuit voltage recovery after shorting, and (c) open circuit voltage decay and capacity loss after a full charge. The investigation included consideration of the effects of prior history, of conditioning cells prior to testing, and of various test method variables on the results of the tests. Sensitivity of the tests was calibrated in terms of equivalent external resistance. The results were correlated. It was shown that a large number of variables may affect the results of these tests. It is concluded that the voltage decay after a brief charge and the voltage recovery methods are more sensitive than the charged stand method, and can detect an internal short equivalent to a resistance of about (10,000/C) ohms, where C is the numerical value of the capacity of the cell in ampere hours.
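
    As a worked example of the sensitivity limit quoted above, the (10,000/C)-ohm rule can be evaluated for a given cell capacity; the function name below is hypothetical and the snippet is only an illustration of that arithmetic.

    ```python
    def detectable_short_resistance(capacity_ah):
        """Approximate largest internal-short resistance (in ohms) detectable by
        the voltage-decay / voltage-recovery tests, per the ~(10,000 / C) ohm
        figure quoted above, with C the cell capacity in ampere-hours."""
        return 10_000.0 / capacity_ah

    # e.g. for a 20 Ah cell, shorts of roughly 500 ohms or lower should be detectable
    print(detectable_short_resistance(20.0))
    ```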

  2. Direct calibration of a reference standard against the air kerma strength primary standard, at 192Ir HDR energy.

    PubMed

    Rajan, K N Govinda; Selvam, T Palani; Bhatt, B C; Vijayam, M; Patki, V S; Vinatha; Pendse, A M; Kannan, V

    2002-04-07

    The primary standard of low air kerma rate sources or beams, maintained at the Radiological Standards Laboratory (RSL) of the Bhabha Atomic Research Centre (BARC), is a 60 cm³ spherical graphite ionization chamber. A 192Ir HDR source was standardized at the hospital site in units of air kerma strength (AKS) using this primary standard. A 400 cm³ bakelite chamber, functioning as a reference standard at the RSL for a long period, at low air kerma rates (compared to external beam dose rates), was calibrated against the primary standard. It was seen that the primary standard and the reference standard, both being of low Z, showed roughly the same scatter response and yielded the same calibration factor for the 400 cm³ reference chamber, with or without room scatter. However, any likelihood of change in the reference chamber calibration factor would necessitate the re-transport of the primary standard to the hospital site for re-calibration. Frequent transport of the primary standard can affect the long-term stability of the primary standard, due to its movement or other extraneous causes. The calibration of the reference standard against the primary standard at the RSL, for an industrial type 192Ir source maintained at the laboratory, showed excellent agreement with the hospital calibration, making it possible to check the reference chamber calibration at RSL itself. Further calibration procedures have been developed to offer traceable calibration of the hospital well ionization chambers.

  3. A calibration method of infrared LVF based spectroradiometer

    NASA Astrophysics Data System (ADS)

    Liu, Jiaqing; Han, Shunli; Liu, Lei; Hu, Dexin

    2017-10-01

    In this paper, a calibration method for an LVF-based spectroradiometer is summarized, covering both spectral calibration and radiometric calibration. The spectral calibration process is as follows: first, the relationship between the stepping motor's step number and the transmission wavelength is derived by theoretical calculation, including a non-linearity correction of the LVF; second, a line-to-line method is used to correct the theoretical wavelength; finally, 3.39 μm and 10.69 μm lasers are used for spectral calibration validation, showing that the sought accuracy of 0.1% or better is achieved. A new sub-region multi-point calibration method is used for the radiometric calibration to improve accuracy; results show that the sought accuracy of 1% or better is achieved.

  4. An iterative method for the localization of a neutron source in a large box (container)

    NASA Astrophysics Data System (ADS)

    Dubinski, S.; Presler, O.; Alfassi, Z. B.

    2007-12-01

    The localization of an unknown neutron source in a bulky box was studied. This can be used for the inspection of cargo, to prevent the smuggling of neutron and α emitters. It is important to localize the source from the outside for safety reasons. Source localization is necessary in order to determine its activity. A previous study showed that, by using six detectors, three on each parallel face of the box (460×420×200 mm³), the location of the source can be found with an average distance of 4.73 cm between the real source position and the calculated one and a maximal distance of about 9 cm. Accuracy was improved in this work by applying an iteration method based on four fixed detectors and the successive iteration of positioning of an external calibrating source. The initial positioning of the calibrating source is the plane of detectors 1 and 2. This method finds the unknown source location with an average distance of 0.78 cm between the real source position and the calculated one and a maximum distance of 3.66 cm for the same box. For larger boxes, localization without iterations requires an increase in the number of detectors, while localization with iterations requires only an increase in the number of iteration steps. In addition to source localization, two methods for determining the activity of the unknown source were also studied.

  5. Optics-Only Calibration of a Neural-Net Based Optical NDE Method for Structural Health Monitoring

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.

    2004-01-01

    A calibration process is presented that uses optical measurements alone to calibrate a neural-net based NDE method. The method itself detects small changes in the vibration mode shapes of structures. The optics-only calibration process confirms previous work that the sensitivity to vibration-amplitude changes can be as small as 10 nanometers. A more practical value in an NDE service laboratory is shown to be 50 nanometers. Both model-generated and experimental calibrations are demonstrated using two implementations of the calibration technique. The implementations are based on previously published demonstrations of the NDE method and an alternative calibration procedure that depends on comparing neural-net and point sensor measurements. The optics-only calibration method, unlike the alternative method, does not require modifications of the structure being tested or the creation of calibration objects. The calibration process can be used to test improvements in the NDE process and to develop vibration-mode independence of damage-detection sensitivity. The calibration effort was intended to support NASA's objective to promote safety in the operation of ground test facilities and in aviation safety in general, by allowing the detection of the gradual onset of structural changes and damage.

  6. A Context-Recognition-Aided PDR Localization Method Based on the Hidden Markov Model

    PubMed Central

    Lu, Yi; Wei, Dongyan; Lai, Qifeng; Li, Wen; Yuan, Hong

    2016-01-01

    Indoor positioning has recently become an important field of interest because global navigation satellite systems (GNSS) are usually unavailable in indoor environments. Pedestrian dead reckoning (PDR) is a promising localization technique for indoor environments since it can be implemented on widely used smartphones equipped with low cost inertial sensors. However, PDR localization severely suffers from the accumulation of positioning errors, and other external calibration sources should be used. In this paper, a context-recognition-aided PDR localization model is proposed to calibrate PDR. The context is detected by employing particular human actions or characteristic objects and it is matched to the context pre-stored offline in the database to get the pedestrian's location. The Hidden Markov Model (HMM) and Recursive Viterbi Algorithm are used to do the matching, which reduces the time complexity and saves storage. In addition, the authors design a turn detection algorithm and take the corner context as an example to illustrate and verify the proposed model. The experimental results show that the proposed localization method can fix the pedestrian's starting point quickly and improves the positioning accuracy of PDR by up to 40.56% while maintaining stability and robustness. PMID:27916922
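
    The matching step described above rests on the standard Viterbi recursion for a discrete hidden Markov model. The sketch below is a generic Viterbi implementation rather than the authors' Recursive Viterbi Algorithm, with hypothetical state and observation encodings (e.g. corner contexts along a PDR track).

    ```python
    import numpy as np

    def viterbi(obs, start_p, trans_p, emit_p):
        """Most likely hidden state sequence for a discrete HMM.
        obs: sequence of observation indices; start_p: (S,);
        trans_p: (S, S); emit_p: (S, O) -- all probabilities, hypothetical sizes."""
        S, T = len(start_p), len(obs)
        logv = np.full((T, S), -np.inf)          # best log-probability per state
        back = np.zeros((T, S), dtype=int)       # backpointers
        logv[0] = np.log(start_p) + np.log(emit_p[:, obs[0]])
        for t in range(1, T):
            for s in range(S):
                scores = logv[t - 1] + np.log(trans_p[:, s])
                back[t, s] = np.argmax(scores)
                logv[t, s] = scores[back[t, s]] + np.log(emit_p[s, obs[t]])
        path = [int(np.argmax(logv[-1]))]
        for t in range(T - 1, 0, -1):             # trace the backpointers
            path.append(int(back[t, path[-1]]))
        return path[::-1]
    ```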

  7. Improving the performance of the mass transfer-based reference evapotranspiration estimation approaches through a coupled wavelet-random forest methodology

    NASA Astrophysics Data System (ADS)

    Shiri, Jalal

    2018-06-01

    Among different reference evapotranspiration (ETo) modeling approaches, mass transfer-based methods have been less studied. These approaches utilize temperature and wind speed records. On the other hand, the empirical equations proposed in this context generally produce weak simulations, except when a local calibration is used for improving their performance. This might be a crucial drawback for those equations in the case of local data scarcity for the calibration procedure. So, application of heuristic methods can be considered as a substitute for improving the performance accuracy of the mass transfer-based approaches. However, given that wind speed records usually have higher variation magnitudes than the other meteorological parameters, application of a wavelet transform for coupling with heuristic models would be necessary. In the present paper, a coupled wavelet-random forest (WRF) methodology was proposed for the first time to improve the performance accuracy of the mass transfer-based ETo estimation approaches using cross-validation data management scenarios at both local and cross-station scales. The obtained results revealed that the new coupled WRF model (with minimum scatter index values of 0.150 and 0.192 for local and external applications, respectively) improved the performance accuracy of the single RF models as well as the empirical equations to a great extent.
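
    One plausible way to realize the wavelet-random forest coupling described above is to decompose the wind-speed series into approximation and detail sub-series and feed them, together with temperature, to a random forest. The sketch below (hypothetical array names, using PyWavelets and scikit-learn) illustrates that idea; it is not the paper's exact configuration.

    ```python
    import numpy as np
    import pywt
    from sklearn.ensemble import RandomForestRegressor

    def wavelet_subseries(series, wavelet="db4", level=3):
        """Decompose a 1-D series into (level + 1) full-length sub-series
        (approximation + details) by zeroing all but one coefficient band
        before reconstruction."""
        coeffs = pywt.wavedec(series, wavelet, level=level)
        parts = []
        for i in range(len(coeffs)):
            kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
            parts.append(pywt.waverec(kept, wavelet)[: len(series)])
        return np.column_stack(parts)             # shape: (n_days, level + 1)

    # temp, wind: daily temperature and wind speed records; eto: target ETo values
    # (all hypothetical arrays of equal length).
    def fit_wrf(temp, wind, eto):
        X = np.column_stack([np.asarray(temp).reshape(-1, 1), wavelet_subseries(wind)])
        model = RandomForestRegressor(n_estimators=500, random_state=0)
        model.fit(X, eto)
        return model
    ```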

  8. Application of confocal surface wave microscope to self-calibrated attenuation coefficient measurement by Goos-Hänchen phase shift modulation.

    PubMed

    Pechprasarn, Suejit; Chow, Terry W K; Somekh, Michael G

    2018-06-04

    In this paper, we present a direct method to measure surface wave attenuation arising from both ohmic and coupling losses using our recently developed phase spatial light modulator (phase-SLM) based confocal surface plasmon microscope. The measurement is carried out in the far-field using a phase-SLM to impose an artificial surface wave phase profile in the back focal plane (BFP) of a microscope objective. In other words, we effectively provide an artificially engineered backward surface wave by modulating the Goos-Hänchen (GH) phase shift of the surface wave. Such waves with opposing phase and group velocities are well known in acoustics and electromagnetic metamaterials but usually require structured or layered surfaces; here the effective wave is produced externally in the microscope illumination path. Key features of the technique developed here are that it (i) is self-calibrating and (ii) can distinguish between attenuation arising from ohmic loss (k″_Ω) and coupling (reradiation) loss (k″_c). This latter feature has not been achieved with existing methods. In addition to providing a unique measurement, the technique probes a localized region of only a few microns. The results were then validated against the surface plasmon (SP) dip measurement in the BFP and a theoretical model based on a simplified Green's function.

  9. Direct determination of chromium in infant formulas employing high-resolution continuum source electrothermal atomic absorption spectrometry and solid sample analysis.

    PubMed

    Silva, Arlene S; Brandao, Geovani C; Matos, Geraldo D; Ferreira, Sergio L C

    2015-11-01

    The present work proposed an analytical method for the direct determination of chromium in infant formulas employing high-resolution continuum source electrothermal atomic absorption spectrometry combined with solid sample analysis (SS-HR-CS ET AAS). Sample masses of up to 2.0 mg were weighed directly on a solid sampling platform and introduced into the graphite tube. In order to minimize the formation of carbonaceous residues and to improve the contact of the modifier solution with the solid sample, a volume of 10 µL of a solution containing 6% (v/v) H2O2, 20% (v/v) ethanol and 1% (v/v) HNO3 was added. The pyrolysis and atomization temperatures established were 1600 and 2400 °C, respectively, using magnesium as the chemical modifier. The calibration technique was evaluated by comparing the slopes of calibration curves established using aqueous and solid standards. This test revealed that chromium can be determined employing the external calibration technique using aqueous standards. Under these conditions, the method developed allows the direct determination of chromium with a limit of quantification of 11.5 ng g⁻¹, precision expressed as relative standard deviation (RSD) in the range of 4.0-17.9% (n=3) and a characteristic mass of 1.2 pg of chromium. The accuracy was confirmed by analysis of a certified reference material of tomato leaves furnished by the National Institute of Standards and Technology. The method proposed was applied for the determination of chromium in five different infant formula samples. The chromium content found varied in the range of 33.9-58.1 ng g⁻¹ (n=3). These samples were also analyzed employing ICP-MS. A statistical test demonstrated that there is no significant difference between the results found by the two methods. The chromium concentrations achieved are lower than the maximum limit permitted for chromium in foods by Brazilian legislation. Copyright © 2015. Published by Elsevier B.V.
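
    The calibration-technique evaluation mentioned above amounts to fitting two straight-line calibration curves and testing whether their slopes agree. A minimal sketch, with hypothetical arrays for the aqueous-standard and solid-standard calibration points; this is an illustration of the statistics, not the authors' procedure.

    ```python
    import numpy as np
    from scipy import stats

    def compare_calibration_slopes(x1, y1, x2, y2):
        """Fit two straight-line calibration curves (e.g. aqueous vs. solid
        standards) and test whether their slopes differ (two-sided t-test)."""
        f1 = stats.linregress(x1, y1)
        f2 = stats.linregress(x2, y2)
        diff = f1.slope - f2.slope
        se = np.hypot(f1.stderr, f2.stderr)       # combined standard error of slopes
        t = diff / se
        dof = len(x1) + len(x2) - 4
        p = 2 * stats.t.sf(abs(t), dof)
        return f1.slope, f2.slope, p
    ```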

  10. Self shielding in cylindrical fissile sources in the APNea system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hensley, D.

    1997-02-01

    In order for a source of fissile material to be useful as a calibration instrument, it is necessary to know not only how much fissile material is in the source but also what the effective fissile content is. Because uranium and plutonium absorb thermal neutrons so efficiently, material in the center of a sample is shielded from the external thermal flux by the surface layers of the material. Differential die-away measurements in the APNea System of five different sets of cylindrical fissile sources show the various self-shielding effects that are routinely encountered. A method for calculating the self-shielding effect is presented and its predictions are compared with the experimental results.

  11. Wavelength calibration of dispersive near-infrared spectrometer using relative k-space distribution with low coherence interferometer

    NASA Astrophysics Data System (ADS)

    Kim, Ji-hyun; Han, Jae-Ho; Jeong, Jichai

    2016-05-01

    The commonly employed calibration methods for laboratory-made spectrometers have several disadvantages, including poor calibration when the number of characteristic spectral peaks is low. Therefore, we present a wavelength calibration method using relative k-space distribution with low coherence interferometer. The proposed method utilizes an interferogram with a perfect sinusoidal pattern in k-space for calibration. Zero-crossing detection extracts the k-space distribution of a spectrometer from the interferogram in the wavelength domain, and a calibration lamp provides information about absolute wavenumbers. To assign wavenumbers, wavelength-to-k-space conversion is required for the characteristic spectrum of the calibration lamp with the extracted k-space distribution. Then, the wavelength calibration is completed by inverse conversion of the k-space into wavelength domain. The calibration performance of the proposed method was demonstrated with two experimental conditions of four and eight characteristic spectral peaks. The proposed method elicited reliable calibration results in both cases, whereas the conventional method of third-order polynomial curve fitting failed to determine wavelengths in the case of four characteristic peaks. Moreover, for optical coherence tomography imaging, the proposed method could improve axial resolution due to higher suppression of sidelobes in point spread function than the conventional method. We believe that our findings can improve not only wavelength calibration accuracy but also resolution for optical coherence tomography.
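
    The core of the proposed procedure, zero-crossing detection on the interferogram, can be sketched as follows. Because a low-coherence interferogram is sinusoidal in k, successive zero crossings are equally spaced in k, so their pixel positions directly give the pixel-to-relative-k mapping. The array name and helper functions below are hypothetical and the snippet is only an illustration of that idea, not the authors' code.

    ```python
    import numpy as np

    def zero_crossing_pixels(interferogram):
        """Sub-pixel zero-crossing positions of a (DC-removed) interferogram
        recorded across the detector pixels of the spectrometer."""
        s = np.asarray(interferogram, dtype=float)
        s = s - s.mean()
        idx = np.where(np.signbit(s[:-1]) != np.signbit(s[1:]))[0]
        # linear interpolation between the two samples bracketing each crossing
        return idx + s[idx] / (s[idx] - s[idx + 1])

    def relative_k_distribution(interferogram):
        """Pixel positions of the crossings vs. their relative k indices
        (in units of half fringe periods)."""
        zc = zero_crossing_pixels(interferogram)
        k_rel = np.arange(len(zc))
        return zc, k_rel
    ```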

  12. Verification of the ISO calibration method for field pyranometers under tropical sky conditions

    NASA Astrophysics Data System (ADS)

    Janjai, Serm; Tohsing, Korntip; Pattarapanitchai, Somjet; Detkhon, Pasakorn

    2017-02-01

    Field pyranometers need to be calibrated annually, and the International Organization for Standardization (ISO) has defined a standard method (ISO 9847) for calibrating these pyranometers. According to this standard method for outdoor calibration, the field pyranometers have to be compared to a reference pyranometer for a period of 2 to 14 days, depending on sky conditions. In this work, the ISO 9847 standard method was verified under tropical sky conditions. To verify the standard method, calibration of field pyranometers was conducted at a tropical site located in Nakhon Pathom (13.82° N, 100.04° E), Thailand, under various sky conditions. The conditions of the sky were monitored by using a sky camera. The calibration results for different time periods used for the calibration under various sky conditions were analyzed. It was found that the calibration periods given by this standard method could be reduced without significant change in the final calibration result. In addition, recommendations and discussion on the use of this standard method in the tropics were also presented.

  13. Matrix-effect free multi-residue analysis of veterinary drugs in food samples of animal origin by nanoflow liquid chromatography high resolution mass spectrometry.

    PubMed

    Alcántara-Durán, Jaime; Moreno-González, David; Gilbert-López, Bienvenida; Molina-Díaz, Antonio; García-Reyes, Juan F

    2018-04-15

    In this work, a sensitive method based on nanoflow liquid chromatography high-resolution mass spectrometry has been developed for the multiresidue determination of veterinary drug residues in honey, veal muscle, egg and milk. Salting-out supported liquid extraction was employed as the sample treatment for milk, veal muscle and egg, while a modified QuEChERS procedure was used in honey. The enhancement of sensitivity provided by the nanoflow LC system also allowed the use of dilution factors as high as 100:1. For all matrices tested, matrix effects were negligible starting from a dilution factor of 100, thus enabling the use of external standard calibration instead of matrix-matched calibration for each sample, with a subsequent increase in laboratory throughput. At spiked levels as low as 0.1 or 1 µg kg⁻¹ before the 1:100 dilution, the obtained signals were still significantly higher than the instrumental limit of quantitation (S/N 10). Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Near infrared spectroscopy for prediction of antioxidant compounds in the honey.

    PubMed

    Escuredo, Olga; Seijo, M Carmen; Salvador, Javier; González-Martín, M Inmaculada

    2013-12-15

    The selection of antioxidant variables in honey is considered for the first time using the near infrared (NIR) spectroscopic technique. A total of 60 honey samples were used to develop the calibration models using the modified partial least squares (MPLS) regression method, and 15 samples were used for external validation. Calibration models on the honey matrix for the estimation of phenols, flavonoids, vitamin C, antioxidant capacity (DPPH), oxidation index and copper using near infrared (NIR) spectroscopy have been satisfactorily obtained. These models were optimised by cross-validation, and the best model was evaluated according to the multiple correlation coefficient (RSQ), standard error of cross-validation (SECV), ratio of performance to deviation (RPD) and root mean square error (RMSE) in the prediction set. The results of these statistics suggested that the equations developed could be used for rapid determination of antioxidant compounds in honey. This work shows that near infrared spectroscopy can be considered as a rapid tool for the nondestructive measurement of antioxidant constituents such as phenols, flavonoids, vitamin C and copper, and also the antioxidant capacity of the honey. Copyright © 2013 Elsevier Ltd. All rights reserved.
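
    A minimal sketch of this kind of NIR calibration, using ordinary PLS from scikit-learn as a stand-in for the modified PLS (MPLS) regression used in the paper; `X` (spectra) and `y` (reference phenol values) are hypothetical arrays, and RMSECV and RPD are computed from 10-fold cross-validation.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    def fit_pls_calibration(X, y, n_components=8):
        """PLS calibration with cross-validated figures of merit."""
        y = np.asarray(y, dtype=float)
        pls = PLSRegression(n_components=n_components)
        y_cv = cross_val_predict(pls, X, y, cv=10).ravel()
        rmsecv = float(np.sqrt(np.mean((y - y_cv) ** 2)))
        rpd = float(np.std(y, ddof=1) / rmsecv)   # ratio of performance to deviation
        return pls.fit(X, y), rmsecv, rpd
    ```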

  15. Operational Support for Instrument Stability through ODI-PPA Metadata Visualization and Analysis

    NASA Astrophysics Data System (ADS)

    Young, M. D.; Hayashi, S.; Gopu, A.; Kotulla, R.; Harbeck, D.; Liu, W.

    2015-09-01

    Over long time scales, quality assurance metrics taken from calibration and calibrated data products can aid observatory operations in quantifying the performance and stability of the instrument, and identify potential areas of concern or guide troubleshooting and engineering efforts. Such methods traditionally require manual SQL entries, assuming the requisite metadata has even been ingested into a database. With the ODI-PPA system, QA metadata has been harvested and indexed for all data products produced over the life of the instrument. In this paper we will describe how, utilizing the industry standard Highcharts Javascript charting package with a customized AngularJS-driven user interface, we have made the process of visualizing the long-term behavior of these QA metadata simple and easily replicated. Operators can easily craft a custom query using the powerful and flexible ODI-PPA search interface and visualize the associated metadata in a variety of ways. These customized visualizations can be bookmarked, shared, or embedded externally, and will be dynamically updated as new data products enter the system, enabling operators to monitor the long-term health of their instrument with ease.

  16. Research on camera on orbit radial calibration based on black body and infrared calibration stars

    NASA Astrophysics Data System (ADS)

    Wang, YuDu; Su, XiaoFeng; Zhang, WanYing; Chen, FanSheng

    2018-05-01

    Affected by the launch process and the space environment, the response of a space camera is inevitably attenuated, so it is necessary for a space camera to undergo spaceborne radiometric calibration. In this paper, a calibration method based on accurate infrared standard stars is proposed to increase infrared radiation measurement precision. As stars can be considered point targets, we use them as the radiometric calibration source and establish a Taylor expansion method and an energy extrapolation model based on the WISE and 2MASS catalogs. We then update the calibration results obtained from the black body. Finally, the calibration mechanism is designed and the design is verified by an on-orbit test. The experimental calibration results show that the irradiance extrapolation error is about 3% and the accuracy of the calibration method is about 10%; these results show that the method can satisfy the requirements of on-orbit calibration.

  17. A Comparison of Two Balance Calibration Model Building Methods

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard; Ulbrich, Norbert

    2007-01-01

    Simulated strain-gage balance calibration data is used to compare the accuracy of two balance calibration model building methods for different noise environments and calibration experiment designs. The first building method obtains a math model for the analysis of balance calibration data after applying a candidate math model search algorithm to the calibration data set. The second building method uses stepwise regression analysis in order to construct a model for the analysis. Four balance calibration data sets were simulated in order to compare the accuracy of the two math model building methods. The simulated data sets were prepared using the traditional One Factor At a Time (OFAT) technique and the Modern Design of Experiments (MDOE) approach. Random and systematic errors were introduced in the simulated calibration data sets in order to study their influence on the math model building methods. Residuals of the fitted calibration responses and other statistical metrics were compared in order to evaluate the calibration models developed with different combinations of noise environment, experiment design, and model building method. Overall, predicted math models and residuals of both math model building methods show very good agreement. Significant differences in model quality were attributable to noise environment, experiment design, and their interaction. Generally, the addition of systematic error significantly degraded the quality of calibration models developed from OFAT data by either method, but MDOE experiment designs were more robust with respect to the introduction of a systematic component of the unexplained variance.
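
    As a toy illustration of the stepwise-regression flavour of model building mentioned above (not the specific candidate-search or stepwise algorithms used in the study), a greedy forward selection of regressor columns by cross-validated R² might look like the sketch below; `X` holds candidate math-model terms and `y` the balance responses, both hypothetical.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    def forward_stepwise(X, y, max_terms=10, cv=5):
        """Greedy forward selection of regressor columns by cross-validated R^2."""
        remaining = list(range(X.shape[1]))
        chosen, best_score = [], -np.inf
        while remaining and len(chosen) < max_terms:
            scores = []
            for j in remaining:
                cols = chosen + [j]
                s = cross_val_score(LinearRegression(), X[:, cols], y,
                                    scoring="r2", cv=cv).mean()
                scores.append((s, j))
            s_new, j_new = max(scores)
            if s_new <= best_score:        # stop when no candidate improves the fit
                break
            chosen.append(j_new)
            remaining.remove(j_new)
            best_score = s_new
        return chosen, best_score
    ```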

  18. Clusters of Monoisotopic Elements for Calibration in (TOF) Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Kolářová, Lenka; Prokeš, Lubomír; Kučera, Lukáš; Hampl, Aleš; Peña-Méndez, Eladia; Vaňhara, Petr; Havel, Josef

    2017-03-01

    Precise calibration in TOF MS requires suitable and reliable standards, which are not always available for high masses. We evaluated inorganic clusters of the monoisotopic elements gold and phosphorus (Auₙ⁺/Auₙ⁻ and Pₙ⁺/Pₙ⁻) as an alternative to peptides or proteins for the external and internal calibration of mass spectra in various experimental and instrumental scenarios. Monoisotopic gold or phosphorus clusters can be easily generated in situ from suitable precursors by laser desorption/ionization (LDI) or matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS). Their use offers numerous advantages, including simplicity of preparation, biological inertness, and exact mass determination even at lower mass resolution. We used citrate-stabilized gold nanoparticles to generate gold calibration clusters, and red phosphorus powder to generate phosphorus clusters. Both elements can be added to samples to perform internal calibration up to mass-to-charge (m/z) 10-15,000 without significantly interfering with the analyte. We demonstrated the use of the gold and phosphorus clusters in the MS analysis of complex biological samples, including microbial standards and total extracts of mouse embryonic fibroblasts. We believe that clusters of monoisotopic elements could be used as generally applicable calibrants for complex biological samples.

  19. Assessing endothelial function and providing calibrated UFMD data using a blood pressure cuff

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maltz, Jonathan S.

    Methods and apparatus are provided for assessing endothelial function in a mammal. In certain embodiments the methods involve a) using a cuff to apply pressure to an artery in a subject to determine a plurality of baseline values for a parameter related to endothelial function as a function of applied pressure (P_m); b) applying a stimulus to the subject; and c) applying external pressure P_m to the artery to determine a plurality of stimulus-effected values for the parameter related to endothelial function as a function of applied pressure (P_m); where the baseline values are determined from measurements made when said mammal is not substantially affected by said stimulus, and differences in said baseline values and said stimulus-effected values provide a measure of endothelial function in said mammal.

  20. Headspace solid phase microextraction--GC/C-IRMS for delta13CVPDB measurements of mono-aromatic hydrocarbons using EA-IRMS calibration.

    PubMed

    Ebongué, Véronique Woule; Geypens, Benny; Berglund, Michael; Taylor, Philip

    2009-03-01

    This work aims at comparing the δ¹³C_VPDB of the mono-aromatic hydrocarbons benzene, toluene, ethylbenzene and xylene isomers (BTEX) measured by elemental analyser (EA)-isotope ratio mass spectrometry (IRMS) with the δ¹³C_VPDB measured on the same compounds by headspace solid phase microextraction-GC/C-IRMS (hSPME-GC/C-IRMS), with the final goal of using these compounds as internal standards on the latter system. The EA-IRMS measurements were done using the calcium and lithium carbonate isotopic reference materials NBS19 and L-SVEC for establishing the δ¹³C_VPDB scale. The EA-IRMS measurements with helium dilution of a set of five reference materials (USGS40, USGS41, IAEA-CH-6, IAEA-CH-3 and IAEA-601) show a systematic bias of 1‰ relative to their assigned values. This bias, due to the dilution mechanism in the ConfloII interface device used, could not be avoided. As the selected hydrocarbons (BTEX) could not be analysed by EA-IRMS without helium dilution, their δ¹³C_VPDB values must be corrected for this observed bias using an external calibration. The CO₂ gas, calibrated using EA-IRMS without helium dilution, was used as an in-house reference for the δ¹³C_VPDB measurements of the BTEX by the hSPME-GC/C-IRMS system. The comparison made between the δ¹³C_VPDB measured on the same BTEX compounds by EA-IRMS (with external calibration) and by hSPME-GC/C-IRMS showed good agreement.
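
    The scale realization and bias correction described above reduce, in the simplest case, to a two-point linear normalization between measured and assigned values of two reference materials (e.g. NBS19 and L-SVEC); the sketch below illustrates that arithmetic with hypothetical argument names and is not the authors' data-reduction code.

    ```python
    def normalize_delta13c(delta_measured, ref1_measured, ref1_true,
                           ref2_measured, ref2_true):
        """Two-point normalization of a measured delta13C value onto the VPDB
        scale using two reference materials; a constant offset such as the
        ~1 per mil bias described above is absorbed by this correction."""
        slope = (ref1_true - ref2_true) / (ref1_measured - ref2_measured)
        return ref1_true + slope * (delta_measured - ref1_measured)
    ```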

  1. Geometric calibration of Colour and Stereo Surface Imaging System of ESA's Trace Gas Orbiter

    NASA Astrophysics Data System (ADS)

    Tulyakov, Stepan; Ivanov, Anton; Thomas, Nicolas; Roloff, Victoria; Pommerol, Antoine; Cremonese, Gabriele; Weigel, Thomas; Fleuret, Francois

    2018-01-01

    There are many geometric calibration methods for "standard" cameras. These methods, however, cannot be used for the calibration of telescopes with large focal lengths and complex off-axis optics. Moreover, specialized calibration methods for telescopes are scarce in the literature. We describe the calibration method that we developed for the Colour and Stereo Surface Imaging System (CaSSIS) telescope, on board the ExoMars Trace Gas Orbiter (TGO). Although our method is described in the context of CaSSIS, with camera-specific experiments, it is general and can be applied to other telescopes. We further encourage re-use of the proposed method by making our calibration code and data available on-line.

  2. External validation of risk prediction models for incident colorectal cancer using UK Biobank

    PubMed Central

    Usher-Smith, J A; Harshfield, A; Saunders, C L; Sharp, S J; Emery, J; Walter, F M; Muir, K; Griffin, S J

    2018-01-01

    Background: This study aimed to compare and externally validate risk scores developed to predict incident colorectal cancer (CRC) that include variables routinely available or easily obtainable via self-completed questionnaire. Methods: External validation of fourteen risk models from a previous systematic review in 373 112 men and women within the UK Biobank cohort with 5-year follow-up, no prior history of CRC and data for incidence of CRC through linkage to national cancer registries. Results: There were 1719 (0.46%) cases of incident CRC. The performance of the risk models varied substantially. In men, the QCancer10 model and models by Tao, Driver and Ma all had an area under the receiver operating characteristic curve (AUC) between 0.67 and 0.70. Discrimination was lower in women: the QCancer10, Wells, Tao, Guesmi and Ma models were the best performing with AUCs between 0.63 and 0.66. Assessment of calibration was possible for six models in men and women. All would require country-specific recalibration if estimates of absolute risks were to be given to individuals. Conclusions: Several risk models based on easily obtainable data have relatively good discrimination in a UK population. Modelling studies are now required to estimate the potential health benefits and cost-effectiveness of implementing stratified risk-based CRC screening. PMID:29381683

  3. MODIS. Volume 2: MODIS level 1 geolocation, characterization and calibration algorithm theoretical basis document, version 1

    NASA Technical Reports Server (NTRS)

    Barker, John L.; Harnden, Joann M. K.; Montgomery, Harry; Anuta, Paul; Kvaran, Geir; Knight, ED; Bryant, Tom; Mckay, AL; Smid, Jon; Knowles, Dan, Jr.

    1994-01-01

    The EOS Moderate Resolution Imaging Spectrometer (MODIS) is being developed by NASA for flight on the Earth Observing System (EOS) series of satellites, the first of which (EOS-AM-1) is scheduled for launch in 1998. This document describes the theoretical basis of the MODIS Level 1B characterization, calibration, and geolocation algorithms, which must produce radiometrically, spectrally, and spatially calibrated data with sufficient accuracy so that global change research programs can detect minute changes in biogeophysical parameters. The document first describes the geolocation algorithm, which determines the geodetic latitude, longitude, and elevation of each MODIS pixel and the geometric parameters for each observation (satellite zenith angle, satellite azimuth, range to the satellite, solar zenith angle, and solar azimuth). Next, the utilization of the MODIS onboard calibration sources, which consist of the Spectroradiometric Calibration Assembly (SRCA), Solar Diffuser (SD), Solar Diffuser Stability Monitor (SDSM), and the Blackbody (BB), is treated. Characterization of these sources and integration of measurements into the calibration process is described. Then, the use of external sources, including the Moon, instrumented sites on the Earth (called vicarious calibration), and unsupervised normalization sites having invariant reflectance and emissive properties, is treated. Finally, algorithms for generating utility masks needed for scene-based calibration are discussed. Eight appendices are provided, covering instrument design and additional algorithm details.

  4. 40 CFR 1065.307 - Linearity verification.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... meter at different flow rates. Use a gravimetric reference measurement (such as a scale, balance, or... nitrogen. Select gas divisions that you typically use. Use a selected gas division as the measured value.... For linearity verification for gravimetric PM balances, use external calibration weights that...

  5. 40 CFR 1065.307 - Linearity verification.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... meter at different flow rates. Use a gravimetric reference measurement (such as a scale, balance, or... nitrogen. Select gas divisions that you typically use. Use a selected gas division as the measured value.... For linearity verification for gravimetric PM balances, use external calibration weights that...

  6. External Quality Assessment Scheme for reference laboratories - review of 8 years' experience.

    PubMed

    Kessler, Anja; Siekmann, Lothar; Weykamp, Cas; Geilenkeuser, Wolf Jochen; Dreazen, Orna; Middle, Jonathan; Schumann, Gerhard

    2013-05-01

    We describe an External Quality Assessment Scheme (EQAS) intended for reference (calibration) laboratories in laboratory medicine and supervised by the Scientific Division of the International Federation of Clinical Chemistry and Laboratory Medicine and the responsible Committee on Traceability in Laboratory Medicine. The official EQAS website, RELA (www.dgkl-rfb.de:81), is open to interested parties. Information on all requirements for participation and results of surveys are published annually. As an additional feature, the identity of every participant in relation to the respective results is disclosed. The results of various groups of measurands (metabolites and substrates, enzymes, electrolytes, glycated hemoglobins, proteins, hormones, thyroid hormones, therapeutic drugs) are discussed in detail. The RELA system supports reference measurement laboratories preparing for accreditation according to ISO 17025 and ISO 15195. Participation in a scheme such as RELA is one of the requirements for listing of the services of a calibration laboratory by the Joint Committee on Traceability in Laboratory Medicine.

  7. Normal tissue complication probability (NTCP) modelling using spatial dose metrics and machine learning methods for severe acute oral mucositis resulting from head and neck radiotherapy.

    PubMed

    Dean, Jamie A; Wong, Kee H; Welsh, Liam C; Jones, Ann-Britt; Schick, Ulrike; Newbold, Kate L; Bhide, Shreerang A; Harrington, Kevin J; Nutting, Christopher M; Gulliford, Sarah L

    2016-07-01

    Severe acute mucositis commonly results from head and neck (chemo)radiotherapy. A predictive model of mucositis could guide clinical decision-making and inform treatment planning. We aimed to generate such a model using spatial dose metrics and machine learning. Predictive models of severe acute mucositis were generated using radiotherapy dose (dose-volume and spatial dose metrics) and clinical data. Penalised logistic regression, support vector classification and random forest classification (RFC) models were generated and compared. Internal validation was performed (with 100-iteration cross-validation), using multiple metrics, including area under the receiver operating characteristic curve (AUC) and calibration slope, to assess performance. Associations between covariates and severe mucositis were explored using the models. The dose-volume-based models (standard) performed as well as those incorporating spatial information. Discrimination was similar between models, but the RFC model built on standard dose-volume metrics (RFC_standard) had the best calibration. The mean AUC and calibration slope for this model were 0.71 (s.d.=0.09) and 3.9 (s.d.=2.2), respectively. The volumes of the oral cavity receiving intermediate and high doses were associated with severe mucositis. The RFC_standard model performance is modest-to-good, but should be improved, and requires external validation. Reducing the volumes of the oral cavity receiving intermediate and high doses may reduce mucositis incidence. Copyright © 2016 The Author(s). Published by Elsevier Ireland Ltd. All rights reserved.
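
    The calibration slope reported above is conventionally estimated by regressing the observed outcomes on the logit of the predicted probabilities; a slope of 1 indicates ideal calibration. A minimal sketch with hypothetical arrays `y_true` and `y_prob`, not the authors' code.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def calibration_slope(y_true, y_prob):
        """Calibration slope: logistic regression of the outcome on the logit
        of the predicted probability (1 = ideal; <1 suggests overfitting)."""
        p = np.clip(np.asarray(y_prob, dtype=float), 1e-6, 1 - 1e-6)
        logit = np.log(p / (1 - p)).reshape(-1, 1)
        lr = LogisticRegression(C=1e6, solver="lbfgs").fit(logit, np.asarray(y_true))
        return float(lr.coef_[0, 0])
    ```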

  8. Differentiation and identification of grape-associated black aspergilli using Fourier transform infrared (FT-IR) spectroscopic analysis of mycelia.

    PubMed

    Kogkaki, Efstathia A; Sofoulis, Manos; Natskoulis, Pantelis; Tarantilis, Petros A; Pappas, Christos S; Panagou, Efstathios Z

    2017-10-16

    The purpose of this study was to evaluate the potential of FT-IR spectroscopy as a high-throughput method for rapid differentiation among the ochratoxigenic species Aspergillus carbonarius and the non-ochratoxigenic or low-toxigenic species of the Aspergillus niger aggregate, namely A. tubingensis and A. niger, isolated previously from grapes of Greek vineyards. A total of 182 isolates of A. carbonarius, A. tubingensis, and A. niger were analyzed using FT-IR spectroscopy. The first derivatives of specific spectral regions (3002-2801 cm⁻¹, 1773-1550 cm⁻¹, and 1286-952 cm⁻¹) were chosen and evaluated with respect to absorbance values. The average spectra of 130 fungal isolates were used for model calibration based on Discriminant analysis and the remaining 52 spectra were used for external model validation. This methodology correctly differentiated the isolates with a total accuracy of 98.8% in both model calibration and validation. The per-class accuracy for A. carbonarius was 95.3% and 100% for model calibration and validation, respectively, whereas for the A. niger aggregate the per-class accuracy amounted to 100% in both cases. The obtained results indicated that FT-IR could become a promising, fast, reliable and low-cost tool for the discrimination and differentiation of closely related fungal species. Copyright © 2017 Elsevier B.V. All rights reserved.
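
    A plausible sketch of the preprocessing-plus-classification pipeline described above, using a Savitzky-Golay first derivative and linear discriminant analysis with a simple hold-out split standing in for the paper's Discriminant analysis and external validation; `spectra` and `labels` are hypothetical arrays, and this is an illustration rather than the authors' implementation.

    ```python
    from scipy.signal import savgol_filter
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import train_test_split

    # spectra: (n_isolates, n_wavenumbers) FT-IR absorbances restricted to the
    # selected regions; labels: species codes.
    def classify_black_aspergilli(spectra, labels):
        """First-derivative preprocessing followed by LDA, with a hold-out split."""
        d1 = savgol_filter(spectra, window_length=9, polyorder=2,
                           deriv=1, axis=1)
        X_cal, X_val, y_cal, y_val = train_test_split(
            d1, labels, test_size=0.3, stratify=labels, random_state=0)
        lda = LinearDiscriminantAnalysis().fit(X_cal, y_cal)
        return lda, lda.score(X_cal, y_cal), lda.score(X_val, y_val)
    ```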

  9. Accelerated Fast Spin-Echo Magnetic Resonance Imaging of the Heart Using a Self-Calibrated Split-Echo Approach

    PubMed Central

    Klix, Sabrina; Hezel, Fabian; Fuchs, Katharina; Ruff, Jan; Dieringer, Matthias A.; Niendorf, Thoralf

    2014-01-01

    Purpose Design, validation and application of an accelerated fast spin-echo (FSE) variant that uses a split-echo approach for self-calibrated parallel imaging. Methods For self-calibrated, split-echo FSE (SCSE-FSE), extra displacement gradients were incorporated into FSE to decompose odd and even echo groups which were independently phase encoded to derive coil sensitivity maps, and to generate undersampled data (reduction factor up to R = 3). Reference and undersampled data were acquired simultaneously. SENSE reconstruction was employed. Results The feasibility of SCSE-FSE was demonstrated in phantom studies. Point spread function performance of SCSE-FSE was found to be competitive with traditional FSE variants. The immunity of SCSE-FSE for motion induced mis-registration between reference and undersampled data was shown using a dynamic left ventricular model and cardiac imaging. The applicability of black blood prepared SCSE-FSE for cardiac imaging was demonstrated in healthy volunteers including accelerated multi-slice per breath-hold imaging and accelerated high spatial resolution imaging. Conclusion SCSE-FSE obviates the need of external reference scans for SENSE reconstructed parallel imaging with FSE. SCSE-FSE reduces the risk for mis-registration between reference scans and accelerated acquisitions. SCSE-FSE is feasible for imaging of the heart and of large cardiac vessels but also meets the needs of brain, abdominal and liver imaging. PMID:24728341

  10. Beam related response of in vivo diode detectors for external radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baci, Syrja, E-mail: sbarci2013@gmail.com; Telhaj, Ervis; Malkaj, Partizan

    2016-03-25

    In Vivo Dosimetry (IVD) is a set of methods used in cancer treatment clinics to determine the real dose of radiation absorbed by the target volume in a patient's body. IVD has been widely implemented in radiotherapy treatment centers and is now a recommended part of Quality Assurance programs by many international health and radiation organizations. Because of cost and a lack of specialized personnel, IVD has not yet been practiced in Albanian radiotherapy clinics. At Hygeia Hospital Tirana, patients are irradiated with high energy photons generated by Elekta Synergy accelerators. We have recently started experimenting with the purpose of establishing an IVD practice at this hospital. The first set of experiments was aimed at the calibration of the diodes that are going to be used for IVD. PMMA phantoms by PTW were used to calibrate p-type Si semiconductor diode dosimeters, made by PTW Freiburg, for entrance dose. The response of the detectors is affected by the energy of the beam, accumulated radiation dose, dose rate, temperature, angle against the beam axis, etc. Here we present the work done to calculate the calibration factor and the correction factors for source-to-surface distance, field size, and beam incidence for the entrance dose for both the 6 MV and 18 MV photon beams. The dependence of the dosimeter response was found to be most pronounced for source-to-surface distance compared with the other variables investigated.

  11. Applying torque to the Escherichia coli flagellar motor using magnetic tweezers.

    PubMed

    van Oene, Maarten M; Dickinson, Laura E; Cross, Bronwen; Pedaci, Francesco; Lipfert, Jan; Dekker, Nynke H

    2017-03-07

    The bacterial flagellar motor of Escherichia coli is a nanoscale rotary engine essential for bacterial propulsion. Studies on the power output of single motors rely on the measurement of motor torque and rotation under external load. Here, we investigate the use of magnetic tweezers, which in principle allow the application and active control of a calibrated load torque, to study single flagellar motors in Escherichia coli. We manipulate the external load on the motor by adjusting the magnetic field experienced by a magnetic bead linked to the motor, and we probe the motor's response. A simple model describes the average motor speed over the entire range of applied fields. We extract the motor torque at stall and find it to be similar to the motor torque at drag-limited speed. In addition, use of the magnetic tweezers allows us to force motor rotation in both forward and backward directions. We monitor the motor's performance before and after periods of forced rotation and observe no destructive effects on the motor. Our experiments show how magnetic tweezers can provide active and fast control of the external load while also exposing remaining challenges in calibration. Through their non-invasive character and straightforward parallelization, magnetic tweezers provide an attractive platform to study nanoscale rotary motors at the single-motor level.

  12. Applying torque to the Escherichia coli flagellar motor using magnetic tweezers

    PubMed Central

    van Oene, Maarten M.; Dickinson, Laura E.; Cross, Bronwen; Pedaci, Francesco; Lipfert, Jan; Dekker, Nynke H.

    2017-01-01

    The bacterial flagellar motor of Escherichia coli is a nanoscale rotary engine essential for bacterial propulsion. Studies on the power output of single motors rely on the measurement of motor torque and rotation under external load. Here, we investigate the use of magnetic tweezers, which in principle allow the application and active control of a calibrated load torque, to study single flagellar motors in Escherichia coli. We manipulate the external load on the motor by adjusting the magnetic field experienced by a magnetic bead linked to the motor, and we probe the motor’s response. A simple model describes the average motor speed over the entire range of applied fields. We extract the motor torque at stall and find it to be similar to the motor torque at drag-limited speed. In addition, use of the magnetic tweezers allows us to force motor rotation in both forward and backward directions. We monitor the motor’s performance before and after periods of forced rotation and observe no destructive effects on the motor. Our experiments show how magnetic tweezers can provide active and fast control of the external load while also exposing remaining challenges in calibration. Through their non-invasive character and straightforward parallelization, magnetic tweezers provide an attractive platform to study nanoscale rotary motors at the single-motor level. PMID:28266562

  13. [Determination of four insecticide residues in honey and royal jelly by gas chromatography-negative chemical ionization mass spectrometry].

    PubMed

    Xia, Guanghui; Shen, Weijian; Yu, Keyao; Wu, Bin; Zhang, Rui; Shen, Chongyu; Zhao, Zengyun; Bian, Xiaohong; Xu, Jiyang

    2014-07-01

    A method was developed for the determination of four insecticide residues in honey and royal jelly by gas chromatography-negative chemical ionization mass spectrometry (GC-NCI/MS). The honey and royal jelly samples were treated with different preparation methods because of their different compositions. The honey sample was extracted with ethyl acetate and cleaned up with primary secondary amine, and the royal jelly sample was extracted with acetonitrile-water (1:1, v/v) and cleaned up with a C18 solid-phase extraction column. Finally, the extracts of the honey and royal jelly were analyzed separately by GC-NCI/MS in selected ion monitoring (SIM) mode. The external standard calibration method was used for quantification. The calibration curves of the four insecticides showed good linearity, with correlation coefficients greater than 0.99 in the range of 50-500 microg/L. The limits of detection (LODs) of the four insecticides were in the range of 0.12-5.0 microg/kg, and the limits of quantification (LOQs) were in the range of 0.40-16.5 microg/kg. The recoveries of the four insecticides spiked in honey and royal jelly at three spiked levels (10, 15 and 20 microg/kg) were in the range of 78.2%-110.0%, and the relative standard deviations (RSDs) were all below 14%. The sensitivity and selectivity of the method were good, with no interfering peaks. The proposed method is simple, quick and effective for the analysis of the four insecticide residues in honey and royal jelly.

  14. An investigation into force-moment calibration techniques applicable to a magnetic suspension and balance system. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Eskins, Jonathan

    1988-01-01

    The problem of determining the forces and moments acting on a wind tunnel model suspended in a Magnetic Suspension and Balance System is addressed. Two calibration methods were investigated for three types of model cores, i.e., Alnico, Samarium-Cobalt, and a superconducting solenoid. Both methods involve calibrating the currents in the electromagnetic array against known forces and moments. The first is a static calibration method using calibration weights and a system of pulleys. The other method, dynamic calibration, involves oscillating the model and using its inertia to provide calibration forces and moments. Static calibration data, found to produce the most reliable results, is presented for three degrees of freedom at 0, 15, and -10 deg angle of attack. Theoretical calculations are hampered by the inability to represent iron-cored electromagnets. Dynamic calibrations, despite being quicker and easier to perform, are not as accurate as static calibrations. Data for dynamic calibrations at 0 and 15 deg is compared with the relevant static data acquired. Distortion of oscillation traces is cited as a major source of error in dynamic calibrations.

  15. The calibration methods for Multi-Filter Rotating Shadowband Radiometer: a review

    NASA Astrophysics Data System (ADS)

    Chen, Maosi; Davis, John; Tang, Hongzhao; Ownby, Carolyn; Gao, Wei

    2013-09-01

    The continuous, more than two-decade-long data record from the Multi-Filter Rotating Shadowband Radiometer (MFRSR) is ideal for climate research, which requires timely and accurate information on important atmospheric components such as gases, aerosols, and clouds. Except for parameters derived from MFRSR measurement ratios, which are not affected by calibration error, most applications require accurate calibration factor(s), angular correction, and spectral response function(s) from calibration. Although a laboratory lamp (or reference) calibration can provide all the information needed to convert the instrument readings to actual radiation, in situ calibration methods are implemented routinely (daily) to fill the gaps between lamp calibrations. In this paper, the basic structure of the MFRSR and its data collection and pretreatment are described. The laboratory lamp calibration and its limitations are summarized. The cloud screening algorithms for MFRSR data are presented. The in situ calibration methods, namely the standard Langley method and its variants, the ratio-Langley method, the general method, Alexandrov's comprehensive method, and Chen's multi-channel method, are outlined. The reason that none of these methods fits all situations is that each assumes that some property, such as aerosol optical depth (AOD), total optical depth (TOD), precipitable water vapor (PWV), effective size of aerosol particles, or Angstrom coefficient, is invariant over time. These assumptions are not universally valid, and some of the required conditions rarely occur. In practice, daily calibration factors derived from these methods should be smoothed to restrain error.

  16. Simple transfer calibration method for a Cimel Sun-Moon photometer: calculating lunar calibration coefficients from Sun calibration constants.

    PubMed

    Li, Zhengqiang; Li, Kaitao; Li, Donghui; Yang, Jiuchun; Xu, Hua; Goloub, Philippe; Victori, Stephane

    2016-09-20

    The Cimel new technologies allow both daytime and nighttime aerosol optical depth (AOD) measurements. Although the daytime AOD calibration protocols are well established, accurate and simple nighttime calibration is still a challenging task. Standard lunar-Langley and intercomparison calibration methods both require specific conditions in terms of atmospheric stability and site condition. Additionally, the lunar irradiance model also has some known limits on its uncertainty. This paper presents a simple calibration method that transfers the direct-Sun calibration constant, V0,Sun, to the lunar irradiance calibration coefficient, CMoon. Our approach is a pure calculation method, independent of site limits, e.g., Moon phase. The method is also not affected by the lunar irradiance model limitations, which is the largest error source of traditional calibration methods. Besides, this new transfer calibration approach is easy to use in the field since CMoon can be obtained directly once V0,Sun is known. Error analysis suggests that the average uncertainty of CMoon over the 440-1640 nm bands obtained with the transfer method is 2.4%-2.8%, depending on the V0,Sun approach (Langley or intercomparison), which is comparable with that of lunar-Langley approach, theoretically. In this paper, the Sun-Moon transfer and the Langley methods are compared based on site measurements in Beijing, and the day-night measurement continuity and performance are analyzed.
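
    The transfer starts from the direct-Sun calibration constant V0,Sun, which is itself most often obtained by Langley extrapolation. The short Python sketch below illustrates only that generic Langley step with invented numbers; it is not the authors' Sun-to-Moon transfer formula, which is not reproduced in the abstract.

    ```python
    import numpy as np

    def langley_v0(voltages, airmass):
        """Classical Langley extrapolation: fit ln(V) = ln(V0) - m * tau over a
        stable clear-sky series and return V0 and tau. Generic sketch only."""
        lnv = np.log(np.asarray(voltages, dtype=float))
        slope, intercept = np.polyfit(np.asarray(airmass, dtype=float), lnv, 1)
        return np.exp(intercept), -slope   # V0, optical depth

    # hypothetical clear-sky series over airmass 2-5
    airmass = np.linspace(2.0, 5.0, 30)
    true_v0, true_tau = 1.23e4, 0.15
    volts = true_v0 * np.exp(-true_tau * airmass)
    v0, tau = langley_v0(volts, airmass)
    print(f"V0,Sun = {v0:.1f}, tau = {tau:.3f}")
    ```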

  17. Optimized star sensors laboratory calibration method using a regularization neural network.

    PubMed

    Zhang, Chengfen; Niu, Yanxiong; Zhang, Hao; Lu, Jiazhen

    2018-02-10

    High-precision ground calibration is essential to ensure the performance of star sensors. However, the complex distortion and multi-error coupling have brought great difficulties to traditional calibration methods, especially for large field of view (FOV) star sensors. Although increasing the complexity of models is an effective way to improve the calibration accuracy, it significantly increases the demand for calibration data. In order to achieve high-precision calibration of star sensors with large FOV, a novel laboratory calibration method based on a regularization neural network is proposed. A multi-layer structure neural network is designed to represent the mapping of the star vector and the corresponding star point coordinate directly. To ensure the generalization performance of the network, regularization strategies are incorporated into the net structure and the training algorithm. Simulation and experiment results demonstrate that the proposed method can achieve high precision with less calibration data and without any other priori information. Compared with traditional methods, the calibration error of the star sensor decreased by about 30%. The proposed method can satisfy the precision requirement for large FOV star sensors.
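
    As a rough illustration of fitting a regularized multi-layer network to star sensor calibration data, the Python sketch below maps detector star-point coordinates to star unit vectors with an L2-penalized MLP. The data generator, network size and penalty weight are assumptions made for the example; the paper's actual architecture and regularization strategy are not given in the abstract.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Hypothetical calibration set: detector star-point coordinates (x, y) and the
    # corresponding reference star unit vectors, generated here from an ideal
    # pinhole model (real data would come from a star simulator or turntable).
    rng = np.random.default_rng(0)
    xy = rng.uniform(-1.0, 1.0, size=(2000, 2))
    focal = 1.5  # assumed focal length in normalized units
    vec = np.column_stack([xy[:, 0], xy[:, 1], np.full(len(xy), focal)])
    vec /= np.linalg.norm(vec, axis=1, keepdims=True)

    # Multi-layer network; the L2 penalty (alpha) stands in for the paper's
    # regularization strategy, and the layer sizes are illustrative only.
    net = MLPRegressor(hidden_layer_sizes=(32, 32), alpha=1e-3,
                       max_iter=5000, random_state=0)
    net.fit(xy, vec)

    pred = net.predict(xy)
    pred /= np.linalg.norm(pred, axis=1, keepdims=True)
    ang = np.degrees(np.arccos(np.clip((pred * vec).sum(axis=1), -1.0, 1.0)))
    print(f"mean angular error: {ang.mean() * 3600:.1f} arcsec")
    ```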

  18. Features calibration of the dynamic force transducers

    NASA Astrophysics Data System (ADS)

    Prilepko, M. Yu.; Lysenko, V. G.

    2018-04-01

    The article discusses calibration methods for dynamic force measuring instruments. The relevance of the work stems from the need to validly determine the metrological characteristics of dynamic force transducers with their intended application taken into account. The aim of this work is to justify the choice of a calibration method that allows the metrological characteristics of dynamic force transducers to be determined under simulated operating conditions, so that suitability for the intended use can be established. The following tasks are solved: the mathematical model and the main measurement equation for calibrating dynamic force transducers by load weight are formulated, and the main components of the calibration uncertainty budget are defined. A new method for calibrating dynamic force transducers is proposed, using a reference "force-deformation" converter based on a calibrated elastic element whose deformation is measured by a laser interferometer. The mathematical model and the main measurement equation of the proposed method are constructed. It is shown that a calibration method based on laser-interferometer measurements of the deformations of the calibrated elastic element makes it possible to eliminate, or considerably reduce, the uncertainty budget components inherent in the load-weight method.

  19. Radiometer calibration methods and resulting irradiance differences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, Aron; Sengupta, Manajit; Andreas, Afshin

    Accurate solar radiation measurement by radiometers depends on instrument performance specifications, installation method, calibration procedure, measurement conditions, maintenance practices, location, and environmental conditions. This study addresses the effect of different calibration methodologies and the resulting differences provided by radiometric calibration service providers such as the National Renewable Energy Laboratory (NREL) and manufacturers of radiometers. Some of these methods calibrate radiometers indoors and some outdoors. To establish and understand the differences in calibration methodologies, we processed and analyzed field-measured data from radiometers deployed for 10 months at NREL's Solar Radiation Research Laboratory. These different methods of calibration resulted in a difference of +/-1% to +/-2% in solar irradiance measurements. Analyzing these differences will ultimately assist in determining the uncertainties of the field radiometer data and will help develop a consensus on a standard for calibration. Further advancing procedures for precisely calibrating radiometers to world reference standards, thereby reducing measurement uncertainties, will help the accurate prediction of the output of planned solar conversion projects and improve the bankability of financing solar projects.

  20. Discrimination of edible oils and fats by combination of multivariate pattern recognition and FT-IR spectroscopy: A comparative study between different modeling methods

    NASA Astrophysics Data System (ADS)

    Javidnia, Katayoun; Parish, Maryam; Karimi, Sadegh; Hemmateenejad, Bahram

    2013-03-01

    By using FT-IR spectroscopy, many researchers from different disciplines enrich the experimental complexity of their research to obtain more precise information. Moreover, chemometric techniques have boosted the use of IR instruments. In the present study, we aimed to emphasize the power of FT-IR spectroscopy for discrimination between different oil samples (especially fat from vegetable oils). Our data were also used to compare the performance of different classification methods. FT-IR transmittance spectra of oil samples (Corn, Canola, Sunflower, Soya, Olive, and Butter) were measured in the wavenumber interval of 450-4000 cm-1. Classification analysis was performed using PLS-DA, interval PLS-DA, extended canonical variate analysis (ECVA) and interval ECVA methods. The effect of data preprocessing by extended multiplicative signal correction was investigated. While all the employed methods could distinguish butter from vegetable oils, iECVA gave the best performance for calibration and the external test set, with 100% sensitivity and specificity.
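
    For readers unfamiliar with PLS-DA, one common implementation is to regress a one-hot class matrix on the spectra with PLS and assign each sample to the class with the largest predicted response. The Python sketch below shows only those mechanics on random stand-in spectra; the class labels, spectral size and number of latent variables are invented, and the paper's iPLS-DA, ECVA and iECVA models are not reproduced.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    # Stand-in FT-IR matrix: rows = oil samples, columns = absorbance values;
    # y = class index (e.g., 0..5 for the six oil/fat classes).
    rng = np.random.default_rng(1)
    X = rng.normal(size=(120, 600))
    y = rng.integers(0, 6, size=120)
    Y = np.eye(6)[y]                      # one-hot class membership

    X_tr, X_te, Y_tr, Y_te, y_tr, y_te = train_test_split(
        X, Y, y, test_size=0.3, random_state=1)

    # PLS-DA: PLS regression on the class-membership matrix, classification by argmax.
    plsda = PLSRegression(n_components=10).fit(X_tr, Y_tr)
    pred_class = plsda.predict(X_te).argmax(axis=1)
    print(f"external test-set accuracy: {(pred_class == y_te).mean():.2f}")
    ```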

  1. Assessment of Various Organic Matter Properties by Infrared Reflectance Spectroscopy of Sediments and Filters

    NASA Astrophysics Data System (ADS)

    Alaoui, G.; Leger, M.; Gagne, J.; Tremblay, L.

    2009-05-01

    The goal of this work was to evaluate the capability of infrared reflectance spectroscopy for a fast quantification of the elemental and molecular compositions of sedimentary and particulate organic matter (OM). A partial least-squares (PLS) regression model was used for analysis and values were compared to those obtained by traditional methods (i.e., elemental, humic and HPLC analyses). PLS tools are readily accessible from software such as GRAMS (Thermo-Fisher) used in spectroscopy. This spectroscopic-chemometric approach has several advantages including its rapidity and use of whole unaltered samples. To predict properties, a set of infrared spectra from representative samples must first be fitted to form a PLS calibration model. In this study, a large set (180) of sediments and particles on GFF filters from the St. Lawrence estuarine system were used. These samples are very heterogeneous (e.g., various tributaries, terrigenous vs. marine, events such as landslides and floods) and thus represent a challenging test for PLS prediction. For sediments, the infrared spectra were obtained with a diffuse reflectance, or DRIFT, accessory. Sedimentary carbon, nitrogen, and humic substance contents, as well as humic substance proportions in OM and N:C ratios, were predicted by PLS. The relative root mean square error of prediction (%RMSEP) for these properties was between 5.7% (humin content) and 14.1% (total humic substance yield) using the cross-validation, or leave-one-out, approach. The %RMSEP calculated for carbon content was lower with the PLS model (7.6%) than with an external calibration method (11.7%) (Tremblay and Gagné, 2002, Anal. Chem., 74, 2985). Moreover, the PLS approach does not require the extraction of POM needed in external calibration. Results highlighted the importance of using a PLS calibration set representative of the unknown samples (e.g., same area). For filtered particles, the infrared spectra were obtained using a novel approach based on attenuated total reflectance, or ATR, allowing the direct analysis of the filters. In addition to carbon and nitrogen contents, amino acid and muramic acid (a bacterial biomarker) yields were predicted using PLS. The calculated %RMSEP varied from 6.4% (total amino acid content) to 18.6% (muramic acid content) with cross-validation. PLS regression modeling does not require a priori knowledge of the spectral bands associated with the properties to be predicted. In turn, the spectral regions that gave good PLS predictions provided valuable information on band assignment and geochemical processes. For instance, nitrogen and humin contents were largely determined by an absorption band caused by aluminosilicate OH groups. This supports the idea that OM-clay interactions, important in humin formation and OM preservation, are mediated by nitrogen-containing groups.
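
    A minimal sketch of the leave-one-out cross-validated %RMSEP figure of merit quoted above, assuming %RMSEP is expressed relative to the mean of the reference values (the original normalization may differ); the spectra, reference values and number of PLS components below are placeholders.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import LeaveOneOut

    def rmsep_percent(X, y, n_components=8):
        """Leave-one-out %RMSEP of a PLS calibration: predict each sample from a
        model trained on all the others, then express the RMSEP as a percentage
        of the mean reference value."""
        preds = np.empty_like(y, dtype=float)
        for train, test in LeaveOneOut().split(X):
            model = PLSRegression(n_components=n_components).fit(X[train], y[train])
            preds[test] = model.predict(X[test]).ravel()
        return 100.0 * np.sqrt(np.mean((preds - y) ** 2)) / y.mean()

    # hypothetical DRIFT spectra (rows) and measured carbon contents (%)
    rng = np.random.default_rng(2)
    spectra = rng.normal(size=(60, 400))
    carbon = rng.uniform(0.5, 3.0, size=60)
    print(f"%RMSEP (carbon): {rmsep_percent(spectra, carbon):.1f}%")
    ```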

  2. Determination of microbial phenolic acids in human faeces by UPLC-ESI-TQ MS.

    PubMed

    Sánchez-Patán, Fernando; Monagas, María; Moreno-Arribas, M Victoria; Bartolomé, Begoña

    2011-03-23

    The aim of the present work was to develop a reproducible, sensitive, and rapid UPLC-ESI-TQ MS analytical method for determination of microbial phenolic acids and other related compounds in faeces. A total of 47 phenolic compounds including hydroxyphenylpropionic, hydroxyphenylacetic, hydroxycinnamic, hydroxybenzoic, and hydroxymandelic acids and simple phenols were considered. To prepare an optimum pool standard solution, analytes were classified in 5 different groups with different starting concentrations according to their MS response. The developed UPLC method allowed a high resolution of the pool standard solution within an 18 min injection run time. The LOD of phenolic compounds ranged from 0.001 to 0.107 μg/mL and LOQ from 0.003 to 0.233 μg/mL. The method precision met acceptance criteria (<15% RSD) for all analytes, and accuracy was >80%. The method was applied to faecal samples collected before and after the intake of a flavan-3-ol supplement by a healthy volunteer. Both external and internal calibration methods were considered for quantification purposes, using 4-hydroxybenzoic-2,3,4,5-d4 acid as internal standard. For most analytes and samples, the level of microbial phenolic acids did not differ by using one or another calibration method. The results revealed an increase in protocatechuic, syringic, benzoic, p-coumaric, phenylpropionic, 3-hydroxyphenylacetic, and 3-hydroxyphenylpropionic acids, although differences due to the intake were only significant for the latter compound. In conclusion, the UPLC-DAD-ESI-TQ MS method developed is suitable for targeted analysis of microbial-derived phenolic metabolites in faecal samples from human intervention or in vitro fermentation studies, which requires high sensitivity and throughput.

  3. Development and external multicenter validation of Chinese Prostate Cancer Consortium prostate cancer risk calculator for initial prostate biopsy.

    PubMed

    Chen, Rui; Xie, Liping; Xue, Wei; Ye, Zhangqun; Ma, Lulin; Gao, Xu; Ren, Shancheng; Wang, Fubo; Zhao, Lin; Xu, Chuanliang; Sun, Yinghao

    2016-09-01

    Substantial differences exist in the relationship between prostate cancer (PCa) detection rate and prostate-specific antigen (PSA) level between Western and Asian populations. The classic Western risk calculators, the European Randomized Study of Screening for Prostate Cancer Risk Calculator and the Prostate Cancer Prevention Trial Risk Calculator, have been shown not to be applicable to Asian populations. We aimed to develop and validate a risk calculator for predicting the probability of PCa and high-grade PCa (defined as Gleason score sum 7 or higher) at initial prostate biopsy in Chinese men. Urology outpatients who underwent initial prostate biopsy according to the inclusion criteria were included. The multivariate logistic regression-based Chinese Prostate Cancer Consortium Risk Calculator (CPCC-RC) was constructed with cases from 2 hospitals in Shanghai. Discriminative ability, calibration, and decision curve analysis were externally validated in 3 CPCC member hospitals. Of the 1,835 patients involved, PCa was identified in 338/924 (36.6%) and 294/911 (32.3%) men in the development and validation cohorts, respectively. Multivariate logistic regression analyses showed that 5 predictors (age, logPSA, logPV, free PSA ratio, and digital rectal examination) were associated with PCa (Model 1) and high-grade PCa (Model 2), respectively. The area under the curve of Model 1 and Model 2 was 0.801 (95% CI: 0.771-0.831) and 0.826 (95% CI: 0.796-0.857), respectively. Both models showed good calibration and substantial improvement in decision curve analyses over any single predictor at all threshold probabilities. Higher predictive accuracy, better calibration, and greater clinical benefit were achieved by CPCC-RC in predicting PCa, compared with the European Randomized Study of Screening for Prostate Cancer Risk Calculator and the Prostate Cancer Prevention Trial Risk Calculator. CPCC-RC performed well in discrimination, calibration, and decision curve analysis in external validation compared with the Western risk calculators. CPCC-RC may aid decision-making on prostate biopsy in Chinese men or in other Asian populations with similar genetic and environmental backgrounds.
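
    The risk calculator itself is a multivariable logistic regression on the five predictors listed above. The Python sketch below shows the general construction on synthetic data; the cohort, coefficients and AUC are purely illustrative and do not reproduce the published CPCC-RC model.

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    # Hypothetical development cohort with the five predictors named in the abstract.
    rng = np.random.default_rng(3)
    n = 900
    df = pd.DataFrame({
        "age": rng.normal(67, 8, n),
        "log_psa": np.log10(rng.lognormal(2.0, 0.6, n)),
        "log_pv": np.log10(rng.normal(45, 12, n).clip(15)),
        "free_psa_ratio": rng.uniform(0.05, 0.4, n),
        "dre_abnormal": rng.integers(0, 2, n),
    })
    # Stand-in outcome loosely linked to the predictors (not the real coefficients).
    logit = (0.04 * df["age"] + 1.5 * df["log_psa"] - 1.2 * df["log_pv"]
             - 3.0 * df["free_psa_ratio"] + 0.8 * df["dre_abnormal"] - 2.0)
    y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

    model = LogisticRegression(max_iter=1000).fit(df, y)
    risk = model.predict_proba(df)[:, 1]          # predicted probability of PCa
    print(f"apparent AUC: {roc_auc_score(y, risk):.3f}")
    ```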

  4. Research on the calibration methods of the luminance parameter of radiation luminance meters

    NASA Astrophysics Data System (ADS)

    Cheng, Weihai; Huang, Biyong; Lin, Fangsheng; Li, Tiecheng; Yin, Dejin; Lai, Lei

    2017-10-01

    This paper introduces the standard diffuse reflection white plate method and the integrating sphere standard luminance source method for calibrating the luminance parameter of radiation luminance meters. The paper compares the calibration results of the two methods through principle analysis and experimental verification. After both methods were used to calibrate the same radiation luminance meter, the data obtained verify that the test results of the two methods are both reliable. The results show that the displayed value obtained with the standard white plate method has smaller errors and better reproducibility. However, the standard luminance source method is more convenient and suitable for on-site calibration. Moreover, the standard luminance source method has a wider range and can test the linear performance of the instruments.

  5. Development of a TLD mailed system for remote dosimetry audit for (192)Ir HDR and PDR sources.

    PubMed

    Roué, Amélie; Venselaar, Jack L M; Ferreira, Ivaldo H; Bridier, André; Van Dam, Jan

    2007-04-01

    In the framework of an ESTRO ESQUIRE project, the BRAPHYQS Physics Network and the EQUAL-ESTRO laboratory have developed a procedure for checking the absorbed dose to water in the vicinity of HDR or PDR sources using a mailed TLD system. The methodology and the materials used in the procedure are based on the existing EQUAL-ESTRO external radiotherapy dose checks. A phantom for TLD postal dose assurance service, adapted to accept catheters from different HDR afterloaders, has been developed. The phantom consists of three PMMA tubes supporting catheters placed at 120 degrees around a central TLD holder. A study on the use of LiF powder type DTL 937 (Philitech) has been performed in order to establish the TLD calibration in dose-to-water at a given distance from (192)Ir source, as well as to determine all correction factors to convert the TLD reading into absorbed dose to water. The dosimetric audit is based on the comparison between the dose to water measured with the TL dosimeter and the dose calculated by the clinical TPS. Results of the audits are classified in four different levels depending on the ratio of the measured dose to the stated dose. The total uncertainty budget in the measurement of the absorbed dose to water using TLD near an (192)Ir HDR source, including TLD reading, correction factors and TLD calibration coefficient, is determined as 3.27% (1s). To validate the procedures, the external audit was first tested among the members of the BRAPHYQS Network. Since November 2004, the test has been made available for use by all European brachytherapy centres. To date, 11 centres have participated in the checks and the results obtained are very encouraging. Nevertheless, one error detected has shown the usefulness of this audit. A method of absorbed dose to water determination in the vicinity of an (192)Ir brachytherapy source was developed for the purpose of a mailed TL dosimetry system. The accuracy of the procedure was determined. This method allows a check of the whole dosimetry chain for this type of brachytherapy afterloading system and can easily be performed by mail to any institution in the European area and elsewhere. Such an external audit can be an efficient QC method complementary to internal quality control as it can reveal some errors which are not observable by other means.

  6. A novel dual-camera calibration method for 3D optical measurement

    NASA Astrophysics Data System (ADS)

    Gai, Shaoyan; Da, Feipeng; Dai, Xianqiang

    2018-05-01

    A novel dual-camera calibration method is presented. In classic methods, the camera parameters are usually calculated and optimized using the reprojection error. However, for a system designed for 3D optical measurement, this error does not reflect the quality of the 3D reconstruction. In the presented method, a planar calibration plate is used. First, images of the calibration plate are captured from several orientations within the measurement range, and the initial parameters of the two cameras are obtained from these images. Then, the rotation and translation matrices that link the frames of the two cameras are calculated using the Centroid Distance Increment Matrix method, which reduces the degree of coupling between the parameters. Next, the 3D coordinates of the calibration points are reconstructed by the space intersection method, and the reconstruction error is calculated and minimized to optimize the calibration parameters. This error directly indicates the quality of the 3D reconstruction and is therefore more suitable for assessing a dual-camera calibration. The experiments show that the proposed method is convenient and accurate: there is no strict requirement on the calibration plate position during the calibration process, and the accuracy is improved significantly by the proposed method.
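
    A minimal sketch of the assessment criterion described above: triangulate the calibration points by space intersection from the two calibrated cameras and measure their 3D distance to the reference coordinates. The projection matrices and points below are hypothetical, and OpenCV's generic triangulation is used in place of the authors' own implementation.

    ```python
    import numpy as np
    import cv2

    def project(P, xyz):
        """Project Nx3 points with a 3x4 projection matrix; returns 2xN pixels."""
        ph = P @ np.hstack([xyz, np.ones((len(xyz), 1))]).T
        return ph[:2] / ph[2]

    def reconstruction_error(P1, P2, pts1, pts2, xyz_ref):
        """Space intersection of corresponding image points, then mean 3D error."""
        Xh = cv2.triangulatePoints(P1, P2, pts1, pts2)   # 4xN homogeneous points
        xyz = (Xh[:3] / Xh[3]).T
        return np.linalg.norm(xyz - xyz_ref, axis=1).mean()

    # Hypothetical rig: identical intrinsics, second camera shifted along x.
    K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([np.eye(3), np.array([[-100.0], [0.0], [0.0]])])
    pts3d = np.array([[0.0, 0.0, 1000.0], [50.0, -30.0, 1200.0], [-40.0, 20.0, 900.0]])
    err = reconstruction_error(P1, P2, project(P1, pts3d), project(P2, pts3d), pts3d)
    print(f"mean 3D reconstruction error: {err:.6f} (same units as the points)")
    ```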

  7. Instrument For Simulation Of Piezoelectric Transducers

    NASA Technical Reports Server (NTRS)

    Mcnichol, Randal S.

    1996-01-01

    Electronic instrument designed to simulate dynamic output of integrated-circuit piezoelectric acceleration or pressure transducer. Operates in conjunction with external signal-conditioning circuit, generating square-wave signal of known amplitude for use in calibrating signal-conditioning circuit. Instrument also useful as special-purpose square-wave generator in other applications.

  8. 40 CFR 1065.307 - Linearity verification.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... measurement (such as a scale, balance, or mass comparator) at the inlet to the fuel-measurement system. Use a... nitrogen. Select gas divisions that you typically use. Use a selected gas division as the measured value.... (9) Mass. For linearity verification for gravimetric PM balances, use external calibration weights...

  9. 40 CFR 1065.307 - Linearity verification.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... different flow rates. Use a gravimetric reference measurement (such as a scale, balance, or mass comparator... the gas-division system to divide the span gas with purified air or nitrogen. Select gas divisions... verification for gravimetric PM balances, use external calibration weights that meet the requirements in...

  10. Finding trap stiffness of optical tweezers using digital filters.

    PubMed

    Almendarez-Rangel, Pedro; Morales-Cruzado, Beatriz; Sarmiento-Gómez, Erick; Pérez-Gutiérrez, Francisco G

    2018-02-01

    Obtaining trap stiffness and calibration of the position detection system is the basis of a force measurement using optical tweezers. Both calibration quantities can be calculated using several experimental methods available in the literature. In most cases, stiffness determination and detection system calibration are performed separately, often requiring procedures in very different conditions, and thus confidence of calibration methods is not assured due to possible changes in the environment. In this work, a new method to simultaneously obtain both the detection system calibration and trap stiffness is presented. The method is based on the calculation of the power spectral density of positions through digital filters to obtain the harmonic contributions of the position signal. This method has the advantage of calculating both trap stiffness and photodetector calibration factor from the same dataset in situ. It also provides a direct method to avoid unwanted frequencies that could greatly affect calibration procedure, such as electric noise, for example.
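
    For context, the most common spectral route to these two quantities is to fit a Lorentzian to the one-sided power spectral density of the uncalibrated position signal: the corner frequency gives the trap stiffness and the amplitude gives the volts-to-metres conversion. The sketch below implements that standard passive approach rather than the digital-filter scheme of the paper; bead radius, temperature and viscosity are assumed inputs.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.signal import welch

    kB = 1.380649e-23  # Boltzmann constant, J/K

    def calibrate_trap(volts, fs, radius, temperature=295.0, viscosity=9.6e-4):
        """Fit S(f) = A / (fc**2 + f**2) to the PSD of the uncalibrated detector
        signal and return (trap stiffness in N/m, detector factor in m/V)."""
        gamma = 6.0 * np.pi * viscosity * radius            # Stokes drag coefficient
        f, psd = welch(volts, fs=fs, nperseg=2**14)
        mask = (f > 5.0) & (f < fs / 4)                     # avoid DC and aliasing region
        lorentz = lambda f, A, fc: A / (fc**2 + f**2)
        (A, fc), _ = curve_fit(lorentz, f[mask], psd[mask], p0=[psd[mask][0], 100.0])
        kappa = 2.0 * np.pi * gamma * abs(fc)               # trap stiffness
        beta = np.sqrt(kB * temperature / (np.pi**2 * gamma * A))  # metres per volt
        return kappa, beta

    # usage (hypothetical): kappa, beta = calibrate_trap(signal, fs=50_000, radius=0.5e-6)
    ```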

  11. The GOES-R Advanced Baseline Imager: detector spectral response effects on thermal emissive band calibration

    NASA Astrophysics Data System (ADS)

    Pearlman, Aaron J.; Padula, Francis; Cao, Changyong; Wu, Xiangqian

    2015-10-01

    The Advanced Baseline Imager (ABI) will be aboard the National Oceanic and Atmospheric Administration's Geostationary Operational Environmental Satellite R-Series (GOES-R) to supply data needed for operational weather forecasts and long-term climate variability studies, which depend on high-quality data. Unlike the heritage operational GOES systems that have two or four detectors per band, ABI has hundreds of detectors per channel requiring calibration coefficients for each one. This increase in the number of detectors poses new challenges for next-generation sensors, as each detector has a unique spectral response function (SRF) even though only one averaged SRF per band is used operationally to calibrate each detector. This simplified processing increases computational efficiency. Using measured system-level SRF data from pre-launch testing, we have the opportunity to characterize the calibration impact using measured SRFs, both per detector and as an average of detector-level SRFs similar to the operational version. We calculated the spectral response impacts for the thermal emissive bands (TEB) theoretically, by simulating the ABI response viewing an ideal blackbody, and practically, with the measured ABI response to an external reference blackbody from the pre-launch TEB calibration test. The impacts from the practical case match the theoretical results using an ideal blackbody. The observed brightness temperature trends show structure across the array with magnitudes as large as 0.1 K for band 12 (9.61 µm) and 0.25 K for band 14 (11.2 µm) for a 300 K blackbody. The trends in the raw ABI signal viewing the blackbody support the spectral response measurement results, since they show similar trends in bands 12 (9.61 µm) and 14 (11.2 µm), meaning that the spectral effects dominate the response differences between detectors for these bands. We further validated these effects using the radiometric bias calculated between calibrations using the external blackbody and another blackbody, the ABI on-board calibrator. Using the detector-level SRFs reduces the structure across the arrays but leaves some residual bias. Further understanding of this bias could lead to refinements of the blackbody thermal model. This work shows the calibration impacts of using an average SRF across many detectors instead of accounting for each detector SRF independently in the TEB calibration. Note that these impacts neglect effects from the spectral sampling of Earth scene radiances that include atmospheric effects, which may further contribute to artifacts post-launch and cannot be mitigated by processing with detector-level SRFs. This study enhances the ability to diagnose anomalies on-orbit and reduce calibration uncertainty for improved system performance.

  12. Improvement of Gaofen-3 Absolute Positioning Accuracy Based on Cross-Calibration

    PubMed Central

    Deng, Mingjun; Li, Jiansong

    2017-01-01

    The Chinese Gaofen-3 (GF-3) mission was launched in August 2016, equipped with a full polarimetric synthetic aperture radar (SAR) sensor in the C-band, with a resolution of up to 1 m. The absolute positioning accuracy of GF-3 is of great importance, and in-orbit geometric calibration is a key technology for improving absolute positioning accuracy. Conventional geometric calibration is used to accurately calibrate the geometric calibration parameters of the image (internal delay and azimuth shifts) using high-precision ground control data, which are highly dependent on the control data of the calibration field, but it remains costly and labor-intensive to monitor changes in GF-3’s geometric calibration parameters. Based on the positioning consistency constraint of the conjugate points, this study presents a geometric cross-calibration method for the rapid and accurate calibration of GF-3. The proposed method can accurately calibrate geometric calibration parameters without using corner reflectors and high-precision digital elevation models, thus improving absolute positioning accuracy of the GF-3 image. GF-3 images from multiple regions were collected to verify the absolute positioning accuracy after cross-calibration. The results show that this method can achieve a calibration accuracy as high as that achieved by the conventional field calibration method. PMID:29240675

  13. Radiometric calibration of the Earth observing system's imaging sensors

    NASA Technical Reports Server (NTRS)

    Slater, P. N.

    1987-01-01

    Philosophy, requirements, and methods of calibration of multispectral space sensor systems as applicable to the Earth Observing System (EOS) are discussed. Vicarious methods for calibration of low spatial resolution systems, with respect to the Advanced Very High Resolution Radiometer (AVHRR), are then summarized. Finally, a theoretical introduction is given to a new vicarious method of calibration using the ratio of diffuse-to-global irradiance at the Earth's surfaces as the key input. This may provide an additional independent method for in-flight calibration.

  14. Configurations and calibration methods for passive sampling techniques.

    PubMed

    Ouyang, Gangfeng; Pawliszyn, Janusz

    2007-10-19

    Passive sampling technology has developed very quickly in the past 15 years, and is widely used for the monitoring of pollutants in different environments. The design and quantification of passive sampling devices require an appropriate calibration method. Current calibration methods that exist for passive sampling, including equilibrium extraction, linear uptake, and kinetic calibration, are presented in this review. A number of state-of-the-art passive sampling devices that can be used for aqueous and air monitoring are introduced according to their calibration methods.

  15. Application of composite small calibration objects in traffic accident scene photogrammetry.

    PubMed

    Chen, Qiang; Xu, Hongguo; Tan, Lidong

    2015-01-01

    In order to address the difficulty of arranging large calibration objects and the low measurement accuracy of small calibration objects in traffic accident scene photogrammetry, a photogrammetric method based on a composite of small calibration objects is proposed. Several small calibration objects are placed around the traffic accident scene, and the coordinate system of the composite calibration object is given based on one of them. By maintaining the relative position and coplanar relationship of the small calibration objects, the local coordinate system of each small calibration object is transformed into the coordinate system of the composite calibration object. The two-dimensional direct linear transformation method is improved based on minimizing the reprojection error of the calibration points of all objects. A rectified image is obtained using the nonlinear optimization method. The increased accuracy of traffic accident scene photogrammetry using a composite small calibration object is demonstrated through the analysis of field experiments and case studies.
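
    For reference, the core two-dimensional direct linear transformation mentioned above can be written compactly: build the standard DLT system from world-image point pairs, take the SVD null vector as the homography, and score it by the reprojection error of the calibration points. This is only the textbook DLT step with made-up coordinates; the paper's joint optimization over several small calibration objects and the subsequent image rectification are not shown.

    ```python
    import numpy as np

    def dlt_homography(world_xy, image_xy):
        """2D DLT: estimate the plane-to-image homography and return it together
        with the RMS reprojection error of the calibration points."""
        world_xy = np.asarray(world_xy, dtype=float)
        image_xy = np.asarray(image_xy, dtype=float)
        A = []
        for (X, Y), (u, v) in zip(world_xy, image_xy):
            A.append([-X, -Y, -1, 0, 0, 0, u * X, u * Y, u])
            A.append([0, 0, 0, -X, -Y, -1, v * X, v * Y, v])
        _, _, Vt = np.linalg.svd(np.asarray(A))
        H = Vt[-1].reshape(3, 3)            # null vector of the DLT system

        pts = np.column_stack([world_xy, np.ones(len(world_xy))])
        proj = H @ pts.T
        proj = (proj[:2] / proj[2]).T       # projected image points
        rms = np.sqrt(np.mean(np.sum((proj - image_xy) ** 2, axis=1)))
        return H / H[2, 2], rms

    # hypothetical check with four coplanar calibration points
    world = [(0, 0), (1, 0), (1, 1), (0, 1)]
    image = [(102.0, 98.0), (410.0, 105.0), (405.0, 402.0), (99.0, 395.0)]
    H, err = dlt_homography(world, image)
    print(f"RMS reprojection error: {err:.3f} px")
    ```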

  16. Self-calibration of robot-sensor system

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu

    1990-01-01

    The process of finding the coordinate transformation between a robot and an external sensor system is addressed. This calibration is equivalent to solving a nonlinear optimization problem for the parameters that characterize the transformation. A two-step procedure is proposed for solving the problem. The first step involves finding a nominal solution that is a good approximation of the final solution. A variational problem is then generated to replace the original problem in the next step. With the assumption that the variational parameters are small compared to unity, the problem can then be solved more readily and with relatively little computational effort.

  17. Hanford radiological protection support services annual report for 1994

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lyon, M.; Bihl, D.E.; Fix, J.J.

    1995-06-01

    Various Hanford Site radiation protection services provided by the Pacific Northwest Laboratory for the US Department of Energy Richland Operations Office and Hanford contractors are described in this annual report for the calendar year 1994. These activities include external dosimetry measurements and evaluations, internal dosimetry measurements and evaluations, in vivo measurements, radiological record keeping, radiation source calibration, and instrument calibration and evaluation. For each of these activities, the routine program and any program changes or enhancements are described, as well as associated tasks, investigations, and studies. Program- related publications, presentations, and other staff professional activities are also described.

  18. Hanford radiological protection support services. Annual report for 1995

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lyon, M.; Bihl, D.E.; Carbaugh, E.H.

    1996-05-01

    Various Hanford Site radiation protection services provided by the Pacific Northwest National Laboratory for the U.S. Department of Energy Richland Operations Office and Hanford contractors are described in this annual report for calendar year 1995. These activities include external dosimetry measurements and evaluations, internal dosimetry measurements and evaluations, in vivo measurements, radiological record keeping, radiation source calibration, and instrument calibration and evaluation. For each of these activities, the routine program and any program changes or enhancements are described, as well as associated tasks, investigations, and studies. Program-related publications, presentations, and other staff professional activities are also described.

  19. Signal inference with unknown response: calibration-uncertainty renormalized estimator.

    PubMed

    Dorn, Sebastian; Enßlin, Torsten A; Greiner, Maksim; Selig, Marco; Boehm, Vanessa

    2015-01-01

    The calibration of a measurement device is crucial for every scientific experiment, where a signal has to be inferred from data. We present CURE, the calibration-uncertainty renormalized estimator, to reconstruct a signal and simultaneously the instrument's calibration from the same data without knowing the exact calibration, but its covariance structure. The idea of the CURE method, developed in the framework of information field theory, is to start with an assumed calibration to successively include more and more portions of calibration uncertainty into the signal inference equations and to absorb the resulting corrections into renormalized signal (and calibration) solutions. Thereby, the signal inference and calibration problem turns into a problem of solving a single system of ordinary differential equations and can be identified with common resummation techniques used in field theories. We verify the CURE method by applying it to a simplistic toy example and compare it against existent self-calibration schemes, Wiener filter solutions, and Markov chain Monte Carlo sampling. We conclude that the method is able to keep up in accuracy with the best self-calibration methods and serves as a noniterative alternative to them.

  20. [Determination of 21 fragrance allergens in toys by gas chromatography-ion trap mass spectrometry].

    PubMed

    Lü, Qing; Zang, Qing; Bai, Hua; Li, Haiyu; Kang, Suyuan; Wang, Chao

    2012-05-01

    A method of gas chromatography-ion trap mass spectrometry (GC-IT-MS) was developed for the determination of 21 fragrance allergens in sticker toys, plush toys and plastic toys. The experimental conditions, such as the sample pretreatment conditions and the analytical conditions of GC-IT-MS, were optimized. The sticker toy and plush toy samples were extracted with acetone by ultrasonication, and the extracts were separated on an Agilent HP-1 MS column (50 m x 0.2 mm x 0.5 microm), then determined by IT-MS and quantified by the external standard method. The plastic toy samples were extracted by the dissolution-precipitation approach, cleaned up with an Envi-carb solid-phase extraction column, concentrated by rotary evaporation and nitrogen blowing, then determined by GC-IT-MS and quantified by the external standard method. The calibration curves showed good linearity in the range of 0.002-50 mg/L with correlation coefficients greater than 0.9968. The limits of quantification (LOQ, S/N > 10) were 0.02-40 mg/kg. The average recoveries of the target compounds spiked in the samples at three concentration levels were in the range of 82.2%-110.8% with relative standard deviations (RSDs) of 0.6%-10.5%. These results show that this method is accurate and sensitive for the qualitative and quantitative determination of the 21 fragrance allergens in the 3 types of toys.

  1. Structured light system calibration method with optimal fringe angle.

    PubMed

    Li, Beiwen; Zhang, Song

    2014-11-20

    For structured light system calibration, one popular approach is to treat the projector as an inverse camera. This is usually done by projecting horizontal and vertical sequences of patterns to establish a one-to-one mapping between camera points and projector points. However, for a well-designed system, either the horizontal or the vertical fringe images are not sensitive to depth variation and thus yield inaccurate mapping. As a result, the calibration accuracy is jeopardized if a conventional calibration method is used. To address this limitation, this paper proposes a novel calibration method based on optimal fringe angle determination. Experiments demonstrate that our calibration approach can increase the measurement accuracy by up to 38% compared to the conventional calibration method, with a calibration volume of 300(H) mm×250(W) mm×500(D) mm.

  2. Bayesian regression models outperform partial least squares methods for predicting milk components and technological properties using infrared spectral data

    PubMed Central

    Ferragina, A.; de los Campos, G.; Vazquez, A. I.; Cecchinato, A.; Bittante, G.

    2017-01-01

    The aim of this study was to assess the performance of Bayesian models commonly used for genomic selection to predict “difficult-to-predict” dairy traits, such as milk fatty acid (FA) expressed as percentage of total fatty acids, and technological properties, such as fresh cheese yield and protein recovery, using Fourier-transform infrared (FTIR) spectral data. Our main hypothesis was that Bayesian models that can estimate shrinkage and perform variable selection may improve our ability to predict FA traits and technological traits above and beyond what can be achieved using the current calibration models (e.g., partial least squares, PLS). To this end, we assessed a series of Bayesian methods and compared their prediction performance with that of PLS. The comparison between models was done using the same sets of data (i.e., same samples, same variability, same spectral treatment) for each trait. Data consisted of 1,264 individual milk samples collected from Brown Swiss cows for which gas chromatographic FA composition, milk coagulation properties, and cheese-yield traits were available. For each sample, 2 spectra in the infrared region from 5,011 to 925 cm−1 were available and averaged before data analysis. Three Bayesian models: Bayesian ridge regression (Bayes RR), Bayes A, and Bayes B, and 2 reference models: PLS and modified PLS (MPLS) procedures, were used to calibrate equations for each of the traits. The Bayesian models used were implemented in the R package BGLR (http://cran.r-project.org/web/packages/BGLR/index.html), whereas the PLS and MPLS were those implemented in the WinISI II software (Infrasoft International LLC, State College, PA). Prediction accuracy was estimated for each trait and model using 25 replicates of a training-testing validation procedure. Compared with PLS, which is currently the most widely used calibration method, MPLS and the 3 Bayesian methods showed significantly greater prediction accuracy. Accuracy increased in moving from calibration to external validation methods, and in moving from PLS and MPLS to Bayesian methods, particularly Bayes A and Bayes B. The maximum R2 value of validation was obtained with Bayes B and Bayes A. For the FA, C10:0 (% of each FA on total FA basis) had the highest R2 (0.75, achieved with Bayes A and Bayes B), and among the technological traits, fresh cheese yield R2 of 0.82 (achieved with Bayes B). These 2 methods have proven to be useful instruments in shrinking and selecting very informative wavelengths and inferring the structure and functions of the analyzed traits. We conclude that Bayesian models are powerful tools for deriving calibration equations, and, importantly, these equations can be easily developed using existing open-source software. As part of our study, we provide scripts based on the open source R software BGLR, which can be used to train customized prediction equations for other traits or populations. PMID:26387015
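
    The calibrations in this study were built in R with the BGLR package, and the authors provide scripts for that workflow. Purely as a language-neutral illustration of the comparison (Bayesian shrinkage regression versus a PLS baseline on spectral data), the Python sketch below uses scikit-learn's BayesianRidge as a stand-in for the Bayesian models and random stand-in spectra; it reproduces only the shape of the experiment, not the paper's traits, data or accuracy figures.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.linear_model import BayesianRidge
    from sklearn.metrics import r2_score
    from sklearn.model_selection import train_test_split

    # Stand-in "spectra" (rows = samples, columns = wavenumbers) and one trait
    # driven by a small number of informative bands.
    rng = np.random.default_rng(4)
    X = rng.normal(size=(400, 500))
    coef = np.zeros(500)
    coef[rng.choice(500, 30, replace=False)] = rng.normal(size=30)
    y = X @ coef + rng.normal(scale=2.0, size=400)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=4)
    for name, model in [("PLS (20 components)", PLSRegression(n_components=20)),
                        ("Bayesian ridge", BayesianRidge())]:
        model.fit(X_tr, y_tr)
        r2 = r2_score(y_te, np.ravel(model.predict(X_te)))
        print(f"{name}: validation R2 = {r2:.3f}")
    ```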

  3. A new method for testing the scale-factor performance of fiber optical gyroscope

    NASA Astrophysics Data System (ADS)

    Zhao, Zhengxin; Yu, Haicheng; Li, Jing; Li, Chao; Shi, Haiyang; Zhang, Bingxin

    2015-10-01

    The fiber optic gyroscope (FOG) is a solid-state optical gyroscope with good environmental adaptability that has been widely used in national defense, aviation, aerospace and other civilian areas. In some applications, a FOG experiences environmental conditions such as vacuum, radiation and vibration, and the scale-factor performance is an important accuracy indicator. However, the scale-factor performance of a FOG under these environmental conditions is difficult to test using conventional methods, because a turntable cannot operate under such conditions. Based on the observation that the physical effect produced in a FOG by a sawtooth voltage signal under static conditions is consistent with the effect produced by a turntable in uniform rotation, a new method for testing the scale-factor performance of a FOG without a turntable is proposed in this paper. In this method, the test system consists of an external operational amplifier circuit and a FOG in which the modulation signal and the Y waveguide are disconnected. The external operational amplifier circuit superimposes the externally generated sawtooth voltage signal on the modulation signal of the FOG and applies the superimposed signal to the Y waveguide. The test system can produce different equivalent angular velocities by changing the period of the sawtooth signal. The system model of the FOG with the superimposed externally generated sawtooth is analyzed, and it is concluded that the effect of the equivalent input angular velocity produced by the sawtooth voltage signal is consistent with the effect of the input angular velocity produced by a turntable. The relationship between the equivalent angular velocity and parameters such as the sawtooth period is presented, and a correction method for the equivalent angular velocity is derived by analyzing the influence of each parameter error. A comparative experiment between the proposed method and turntable calibration was conducted, and the scale-factor test results of the same FOG obtained with the two methods were consistent. With the proposed method, the input angular velocity is the equivalent effect produced by a sawtooth voltage signal and no turntable is needed to produce mechanical rotation, so the method can be used to test the performance of a FOG under environmental conditions in which a turntable cannot operate.

  4. Simultaneous calibration phantom commission and geometry calibration in cone beam CT

    NASA Astrophysics Data System (ADS)

    Xu, Yuan; Yang, Shuai; Ma, Jianhui; Li, Bin; Wu, Shuyu; Qi, Hongliang; Zhou, Linghong

    2017-09-01

    Geometry calibration is a vital step for describing the geometry of a cone beam computed tomography (CBCT) system and is a prerequisite for CBCT reconstruction. In current methods, calibration phantom commission and geometry calibration are divided into two independent tasks. Small errors in ball-bearing (BB) positioning in the phantom-making step will severely degrade the quality of phantom calibration. To solve this problem, we propose an integrated method to simultaneously realize geometry phantom commission and geometry calibration. Instead of assuming the accuracy of the geometry phantom, the integrated method considers BB centers in the phantom as an optimized parameter in the workflow. Specifically, an evaluation phantom and the corresponding evaluation contrast index are used to evaluate geometry artifacts for optimizing the BB coordinates in the geometry phantom. After utilizing particle swarm optimization, the CBCT geometry and BB coordinates in the geometry phantom are calibrated accurately and are then directly used for the next geometry calibration task in other CBCT systems. To evaluate the proposed method, both qualitative and quantitative studies were performed on simulated and realistic CBCT data. The spatial resolution of reconstructed images using dental CBCT can reach up to 15 line pair cm-1. The proposed method is also superior to the Wiesent method in experiments. This paper shows that the proposed method is attractive for simultaneous and accurate geometry phantom commission and geometry calibration.
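
    Particle swarm optimization, used above to tune the BB coordinates, is itself a simple population-based search. The sketch below is a generic PSO minimizer with a toy quadratic cost standing in for the geometry-artifact index of the reconstructed evaluation phantom; swarm size, inertia and acceleration constants are arbitrary defaults, not the paper's settings.

    ```python
    import numpy as np

    def pso_minimize(cost, bounds, n_particles=30, n_iter=200,
                     w=0.7, c1=1.5, c2=1.5, seed=0):
        """Minimal particle swarm optimizer over box bounds; returns the best
        position found and its cost."""
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds, dtype=float).T
        x = rng.uniform(lo, hi, size=(n_particles, lo.size))
        v = np.zeros_like(x)
        pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
        gbest = pbest[pbest_val.argmin()].copy()
        for _ in range(n_iter):
            r1, r2 = rng.random((2, n_particles, lo.size))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            vals = np.array([cost(p) for p in x])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = x[improved], vals[improved]
            gbest = pbest[pbest_val.argmin()].copy()
        return gbest, pbest_val.min()

    # Toy usage: recover a 3-vector offset standing in for BB coordinate errors.
    target = np.array([0.4, -0.2, 0.1])
    best, val = pso_minimize(lambda p: float(np.sum((p - target) ** 2)),
                             bounds=[(-1.0, 1.0)] * 3)
    print(best, val)
    ```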

  5. Simplified method for creating a density-absorbed dose calibration curve for the low dose range from Gafchromic EBT3 film.

    PubMed

    Gotanda, Tatsuhiro; Katsuda, Toshizo; Gotanda, Rumi; Kuwano, Tadao; Akagawa, Takuya; Tanki, Nobuyoshi; Tabuchi, Akihiko; Shimono, Tetsunori; Kawaji, Yasuyuki

    2016-01-01

    Radiochromic film dosimeters have a disadvantage in comparison with an ionization chamber in that the dosimetry process is time-consuming for creating a density-absorbed dose calibration curve. The purpose of this study was the development of a simplified method of creating a density-absorbed dose calibration curve from radiochromic film within a short time. This simplified method was performed using Gafchromic EBT3 film with a low energy dependence and step-shaped Al filter. The simplified method was compared with the standard method. The density-absorbed dose calibration curves created using the simplified and standard methods exhibited approximately similar straight lines, and the gradients of the density-absorbed dose calibration curves were -32.336 and -33.746, respectively. The simplified method can obtain calibration curves within a much shorter time compared to the standard method. It is considered that the simplified method for EBT3 film offers a more time-efficient means of determining the density-absorbed dose calibration curve within a low absorbed dose range such as the diagnostic range.

  6. Simplified method for creating a density-absorbed dose calibration curve for the low dose range from Gafchromic EBT3 film

    PubMed Central

    Gotanda, Tatsuhiro; Katsuda, Toshizo; Gotanda, Rumi; Kuwano, Tadao; Akagawa, Takuya; Tanki, Nobuyoshi; Tabuchi, Akihiko; Shimono, Tetsunori; Kawaji, Yasuyuki

    2016-01-01

    Radiochromic film dosimeters have a disadvantage in comparison with an ionization chamber in that the dosimetry process is time-consuming for creating a density-absorbed dose calibration curve. The purpose of this study was the development of a simplified method of creating a density-absorbed dose calibration curve from radiochromic film within a short time. This simplified method was performed using Gafchromic EBT3 film with a low energy dependence and step-shaped Al filter. The simplified method was compared with the standard method. The density-absorbed dose calibration curves created using the simplified and standard methods exhibited approximately similar straight lines, and the gradients of the density-absorbed dose calibration curves were −32.336 and −33.746, respectively. The simplified method can obtain calibration curves within a much shorter time compared to the standard method. It is considered that the simplified method for EBT3 film offers a more time-efficient means of determining the density-absorbed dose calibration curve within a low absorbed dose range such as the diagnostic range. PMID:28144120
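
    Once net densities at known doses are available, the calibration in this low-dose range reduces to a straight-line fit that can be inverted to read dose from density. The sketch below uses invented density-dose pairs purely to show the fit and inversion; it does not reproduce the published gradients or the film-specific processing steps.

    ```python
    import numpy as np

    # Hypothetical film readings at known low doses (values are illustrative only).
    dose_mgy = np.array([0.0, 5.0, 10.0, 20.0, 40.0, 80.0])
    density = np.array([1.000, 0.998, 0.995, 0.990, 0.981, 0.962])

    # Fit dose as a linear function of density, then invert a new reading.
    slope, intercept = np.polyfit(density, dose_mgy, 1)
    print(f"calibration: dose = {slope:.1f} * density + {intercept:.1f} (mGy)")

    def density_to_dose(d):
        """Convert a measured film density to absorbed dose using the fitted line."""
        return slope * d + intercept

    print(f"density 0.985 -> {density_to_dose(0.985):.1f} mGy")
    ```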

  7. Aquarius's Instrument Science Data System (ISDS) Automated to Acquire, Process, Trend Data and Produce Radiometric System Assessment Reports

    NASA Technical Reports Server (NTRS)

    2008-01-01

    The Aquarius Radiometer, a subsystem of the Aquarius Instrument, required a data acquisition ground system to support calibration and radiometer performance assessment. To support calibration and compose performance assessments, we developed an automated system which uploaded raw data to an FTP server and saved raw and processed data to a database. This paper details the overall functionality of the Aquarius Instrument Science Data System (ISDS) and the individual electrical ground support equipment (EGSE) which produced data files that were infused into the ISDS. Real-time EGSE includes an ICDS simulator, calibration GSE, a LabVIEW-controlled power supply, and a chamber data acquisition system. The ICDS simulator serves as the test conductor's primary workstation, collecting radiometer housekeeping (HK) and science data and passing commands and HK telemetry collection requests to the radiometer. The calibration GSE (Radiometer Active Test Source) provides a choice of sources from multiple targets for the radiometer external calibration. The power supply GSE, controlled by LabVIEW, provides real-time voltage and current monitoring of the radiometer. Finally, the chamber data acquisition system produces data reflecting chamber vacuum pressure, thermistor temperatures, AVG and watts. Each GSE system produces text-based data files every two to six minutes and automatically copies the data files to the Central Archiver PC. The Archiver PC stores the data files, schedules automated uploads of these files to an external FTP server, and accepts requests to copy all data files to the ISDS for offline data processing and analysis. The Aquarius Radiometer ISDS contains PHP and MATLAB programs to parse, process and save all data to a MySQL database. Analysis tools (MATLAB programs) in the ISDS are capable of displaying radiometer science, telemetry and auxiliary data in near real time, as well as performing data analysis and producing automated performance assessment reports of the Aquarius Radiometer.

  8. Preliminary characterisation of new glass reference materials (GSA-1G, GSC-1G, GSD-1G and GSE-1G) by laser ablation-inductively coupled plasma-mass spectrometry using 193 nm, 213 nm and 266 nm wavelengths

    USGS Publications Warehouse

    Guillong, M.; Hametner, K.; Reusser, E.; Wilson, S.A.; Gunther, D.

    2005-01-01

    New glass reference materials GSA-1G, GSC-1G, GSD-1G and GSE-1G have been characterised using a prototype solid state laser ablation system capable of producing wavelengths of 193 nm, 213 nm and 266 nm. This system allowed comparison of the effects of different laser wavelengths under nearly identical ablation and ICP operating conditions. The wavelengths 213 nm and 266 nm were also used at higher energy densities to evaluate the influence of energy density on quantitative analysis. In addition, the glass reference materials were analysed using commercially available 266 nm Nd:YAG and 193 nm ArF excimer lasers. Laser ablation analysis was carried out using both single spot and scanning mode ablation. Using laser ablation ICP-MS, concentrations of fifty-eight elements were determined with external calibration to the NIST SRM 610 glass reference material. Instead of applying the more common internal standardisation procedure, the sum of all element oxide concentrations was normalised to 100%. Major element concentrations were compared with those determined by electron microprobe. In addition to NIST SRM 610 for external calibration, USGS BCR-2G was used as a more closely matrix-matched reference material in order to compare the effect of matrix-matched and non-matrix-matched calibration on quantitative analysis. The results show that the various laser wavelengths and energy densities applied produced similar results, with the exception of scanning mode ablation at 266 nm without matrix-matched calibration, where deviations up to 60% from the average were found. However, results acquired using a scanning mode with a matrix-matched calibration agreed with results obtained by spot analysis. The increased abundance of large particles produced when using a scanning ablation mode with NIST SRM 610 is responsible for elemental fractionation effects caused by incomplete vaporisation of large particles in the ICP.
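
    A minimal sketch of that quantification scheme is given below: sensitivities are taken from the external calibrant and the results are then rescaled so that the oxide total reaches 100 wt%. The element list, count rates, reference values and oxide factors are invented for illustration and only the majors matter for the normalisation in practice.

      import numpy as np

      # Hypothetical background-corrected count rates (cps) for an unknown glass and
      # for the external calibrant NIST SRM 610, plus SRM 610 reference values (ppm).
      cps_sample = {"Si": 2.1e6, "Ca": 8.0e5, "Sr": 4.5e4}
      cps_srm610 = {"Si": 2.0e6, "Ca": 7.5e5, "Sr": 5.0e4}
      conc_srm610 = {"Si": 327_000.0, "Ca": 82_000.0, "Sr": 515.5}   # illustrative ppm

      # Oxide/element mass ratios used to express each element as its oxide.
      oxide_factor = {"Si": 2.139, "Ca": 1.399, "Sr": 1.183}

      # Step 1: external calibration -- sensitivity from SRM 610, applied to the sample.
      raw_ppm = {el: cps_sample[el] * conc_srm610[el] / cps_srm610[el] for el in cps_sample}

      # Step 2: instead of an internal standard, scale so the oxide total is 100 wt%.
      oxide_total_pct = sum(raw_ppm[el] * oxide_factor[el] for el in raw_ppm) / 1e4
      scale = 100.0 / oxide_total_pct
      normalised_ppm = {el: c * scale for el, c in raw_ppm.items()}

      for el, c in normalised_ppm.items():
          print(f"{el}: {c:,.0f} ppm (after normalisation to 100% oxides)")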

  9. Luminescence isochron dating: a new approach using different grain sizes.

    PubMed

    Zhao, H; Li, S H

    2002-01-01

    A new approach to isochron dating is described using different sizes of quartz and K-feldspar grains. The technique can be applied to sites with time-dependent external dose rates. It is assumed that any underestimation of the equivalent dose (De) using K-feldspar is by a factor F, which is independent of grain size (90-350 µm) for a given sample. Calibration of the beta source for different grain sizes is discussed, and then the sample ages are calculated using the differences between quartz and K-feldspar De from grains of similar size. Two aeolian sediment samples from north-eastern China are used to illustrate the application of the new method. It is confirmed that the observed values of De derived using K-feldspar underestimate the expected doses (based on the quartz De) but, nevertheless, these K-feldspar De values correlate linearly with the calculated internal dose rate contribution, supporting the assumption that the underestimation factor F is independent of grain size. The isochron ages are also compared with the results obtained using quartz De and the measured external dose rates.

  10. James Webb Space Telescope Integrated Science Instrument Module Calibration and Verification of High-Accuracy Instrumentation to Measure Heat Flow in Cryogenic Testing

    NASA Technical Reports Server (NTRS)

    Comber, Brian; Glazer, Stuart

    2012-01-01

    The James Webb Space Telescope (JWST) is an upcoming flagship observatory mission scheduled to be launched in 2018. Three of the four science instruments are passively cooled to their operational temperature range of 36 K to 40 K, and the fourth instrument is actively cooled to its operational temperature of approximately 6 K. The requirement for multiple thermal zones results in the instruments being thermally connected to five external radiators via individual high purity aluminum heat straps. Thermal-vacuum and thermal balance testing of the flight instruments at the Integrated Science Instrument Module (ISIM) element level will take place within a newly constructed shroud cooled by gaseous helium inside Goddard Space Flight Center's (GSFC) Space Environment Simulator (SES). The flight external radiators are not available during ISIM-level thermal vacuum/thermal testing, so they will be replaced in test with stable and adjustable thermal boundaries with identical physical interfaces to the flight radiators. Those boundaries are provided by specially designed test hardware which also measures the heat flow within each of the five heat straps to an accuracy of less than 2 mW, which is less than 5% of the minimum predicted heat flow values. Measurement of the heat loads to this accuracy is essential to ISIM thermal model correlation, since thermal models are more accurately correlated when temperature data are supplemented by accurate knowledge of heat flows. It also provides direct verification by test of several high-level thermal requirements. Devices that measure heat flow in this manner have historically been referred to as "Q-meters". Perhaps the most important feature of the design of the JWST Q-meters is that it does not depend on the absolute accuracy of its temperature sensors, but rather on knowledge of the precise heater power required to maintain a constant temperature difference between sensors on two stages, for which a table is empirically developed during a calibration campaign in a small chamber at GSFC. This paper provides a brief review of Q-meter design, and discusses the Q-meter calibration procedure including calibration chamber modifications and accommodations, handling of differing conditions between calibration and usage, the calibration process itself, and the results of the tests used to determine if the calibration is successful.

  11. A new systematic calibration method of ring laser gyroscope inertial navigation system

    NASA Astrophysics Data System (ADS)

    Wei, Guo; Gao, Chunfeng; Wang, Qi; Wang, Qun; Xiong, Zhenyu; Long, Xingwu

    2016-10-01

    The inertial navigation system (INS) has been the core component of both military and civil navigation systems. Before the INS is put into application, it is supposed to be calibrated in the laboratory in order to compensate for repeatability errors caused by manufacturing. The discrete calibration method cannot fulfill the requirements of high-accuracy calibration of the mechanically dithered ring laser gyroscope navigation system with shock absorbers. This paper analyzes the theory of error excitation and separation in detail and presents a new systematic calibration method for the ring laser gyroscope inertial navigation system. Error models and equations for the calibrated Inertial Measurement Unit (IMU) are given. Proper rotation arrangements are then designed in order to establish the linear relationships between the changes of velocity errors and the calibrated parameter errors. Experiments have been set up to compare the systematic errors calculated from the filtering calibration results with those obtained from the discrete calibration results. The largest position and velocity errors of the filtering calibration are only 0.18 miles and 0.26 m/s, compared with 2 miles and 1.46 m/s for the discrete calibration. These results have validated the new systematic calibration method and proved its importance for optimal design and accuracy improvement of calibration of the mechanically dithered ring laser gyroscope inertial navigation system.
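
    Once the rotation schedule has been used to build the linear relationship between velocity-error changes and the calibration parameter errors, the separation itself reduces to a linear estimation problem. The sketch below shows only that step, with a randomly generated observation matrix standing in for the real one; all matrices and noise levels are hypothetical, and the paper's filtering (Kalman) formulation is replaced here by batch least squares.

      import numpy as np

      rng = np.random.default_rng(0)

      # Suppose each rotation arrangement yields velocity-error changes that depend
      # linearly on the calibrated parameter errors x (scale factors, misalignments,
      # biases):  dv = H @ x + noise.  H would be built from the rotation schedule;
      # here it is random purely for illustration.
      n_obs, n_par = 60, 9
      H = rng.normal(size=(n_obs, n_par))
      x_true = rng.normal(scale=1e-4, size=n_par)        # hypothetical parameter errors
      dv = H @ x_true + rng.normal(scale=1e-6, size=n_obs)

      # Least-squares separation of the calibration parameter errors.
      x_hat, *_ = np.linalg.lstsq(H, dv, rcond=None)

      print("max estimation error:", np.max(np.abs(x_hat - x_true)))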

  12. Predicting survival of men with recurrent prostate cancer after radical prostatectomy.

    PubMed

    Dell'Oglio, Paolo; Suardi, Nazareno; Boorjian, Stephen A; Fossati, Nicola; Gandaglia, Giorgio; Tian, Zhe; Moschini, Marco; Capitanio, Umberto; Karakiewicz, Pierre I; Montorsi, Francesco; Karnes, R Jeffrey; Briganti, Alberto

    2016-02-01

    To develop and externally validate a novel nomogram aimed at predicting cancer-specific mortality (CSM) after biochemical recurrence (BCR) among prostate cancer (PCa) patients treated with radical prostatectomy (RP) with or without adjuvant external beam radiotherapy (aRT) and/or hormonal therapy (aHT). The development cohort included 689 consecutive PCa patients treated with RP between 1987 and 2011 with subsequent BCR, defined as two subsequent prostate-specific antigen values >0.2 ng/ml. Multivariable competing-risks regression analyses tested the predictors of CSM after BCR for the purpose of 5-year CSM nomogram development. Internal validation was performed with 2000 bootstrap resamples. External validation was performed in a population of 6734 PCa patients with BCR after treatment with RP at the Mayo Clinic from 1987 to 2011. The predictive accuracy (PA) was quantified using the receiver operating characteristic-derived area under the curve and the calibration plot method. The 5-year CSM-free survival rate was 83.6% (confidence interval [CI]: 79.6-87.2). In multivariable analyses, pathologic stage T3b or more (hazard ratio [HR]: 7.42; p = 0.008), pathologic Gleason score 8-10 (HR: 2.19; p = 0.003), lymph node invasion (HR: 3.57; p = 0.001), time to BCR (HR: 0.99; p = 0.03) and age at BCR (HR: 1.04; p = 0.04) were each significantly associated with the risk of CSM after BCR. The bootstrap-corrected PA was 87.4% (bootstrap 95% CI: 82.0-91.7%). External validation of our nomogram showed a good PA at 83.2%. We developed and externally validated the first nomogram predicting 5-year CSM applicable to contemporary patients with BCR after RP with or without adjuvant treatment. Copyright © 2015 Elsevier Ltd. All rights reserved.
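
    For the validation step described above, a small sketch of computing discrimination (AUC) and the data behind a calibration plot is shown below; it uses simulated binary outcomes and ignores censoring and competing risks, so it is only a schematic stand-in for the paper's actual time-to-event analysis.

      import numpy as np
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(5)

      # Hypothetical external-validation data: predicted 5-year CSM risk from a
      # nomogram and an "observed" event indicator (illustrative only).
      n = 2000
      risk_pred = rng.beta(2, 8, size=n)                  # predicted probabilities
      event = rng.random(n) < risk_pred * 1.1             # simulated outcomes

      # Discrimination: area under the ROC curve.
      print("AUC:", round(roc_auc_score(event, risk_pred), 3))

      # Calibration plot data: observed vs. mean predicted risk within risk deciles.
      deciles = np.quantile(risk_pred, np.linspace(0, 1, 11))
      for lo, hi in zip(deciles[:-1], deciles[1:]):
          idx = (risk_pred >= lo) & (risk_pred < hi)
          if idx.any():
              print(f"predicted {risk_pred[idx].mean():.2f}  observed {event[idx].mean():.2f}")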

  13. Calibrating the stress-time curve of a combined finite-discrete element method to a Split Hopkinson Pressure Bar experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osthus, Dave; Godinez, Humberto C.; Rougier, Esteban

    We present a generic method for automatically calibrating a computer code to an experiment, with uncertainty, for a given “training” set of computer code runs. The calibration technique is general and probabilistic, meaning the calibration uncertainty is represented in the form of a probability distribution. We demonstrate the calibration method by calibrating a combined Finite-Discrete Element Method (FDEM) to a Split Hopkinson Pressure Bar (SHPB) experiment with a granite sample. The probabilistic calibration method combines runs of a FDEM computer simulation for a range of “training” settings and experimental uncertainty to develop a statistical emulator. The process allows for calibration of input parameters and produces output quantities with uncertainty estimates for settings where simulation results are desired. Input calibration and FDEM fitted results are presented. We find that the maximum shear strength σ_t^max and to a lesser extent maximum tensile strength σ_n^max govern the behavior of the stress-time curve before and around the peak, while the specific energy in Mode II (shear) E_t largely governs the post-peak behavior of the stress-time curve. Good agreement is found between the calibrated FDEM and the SHPB experiment. Interestingly, we find the SHPB experiment to be rather uninformative for calibrating the softening-curve shape parameters (a, b, and c). This work stands as a successful demonstration of how a general probabilistic calibration framework can automatically calibrate FDEM parameters to an experiment.
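
    The sketch below illustrates the emulator-plus-calibration idea on a one-dimensional toy problem: a Gaussian-process emulator is trained on "simulator" runs and then inverted against a noisy observation on a parameter grid. The toy simulator, parameter range and uncertainties are invented, and none of the FDEM or SHPB specifics are reproduced.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      # Stand-in "simulator": peak stress as a function of one input parameter theta.
      def simulator(theta):
          return 120.0 + 30.0 * np.tanh(theta - 2.0)

      # "Training" runs of the computer code over a range of settings.
      theta_train = np.linspace(0.0, 4.0, 15).reshape(-1, 1)
      y_train = simulator(theta_train).ravel()

      # Statistical emulator of the code (GP regression).
      kernel = RBF(1.0) + WhiteKernel(1e-6, noise_level_bounds="fixed")
      gp = GaussianProcessRegressor(kernel, normalize_y=True)
      gp.fit(theta_train, y_train)

      # Calibration: posterior over theta given an observed peak stress with
      # experimental uncertainty, evaluated on a grid (a simple Bayesian update).
      y_obs, sigma_obs = 138.0, 2.0
      theta_grid = np.linspace(0.0, 4.0, 400).reshape(-1, 1)
      mu, std = gp.predict(theta_grid, return_std=True)
      log_post = -0.5 * (y_obs - mu) ** 2 / (sigma_obs ** 2 + std ** 2)
      post = np.exp(log_post - log_post.max())
      post /= post.sum()

      print("posterior mean of theta:", float((theta_grid.ravel() * post).sum()))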

  14. Calibrating the stress-time curve of a combined finite-discrete element method to a Split Hopkinson Pressure Bar experiment

    DOE PAGES

    Osthus, Dave; Godinez, Humberto C.; Rougier, Esteban; ...

    2018-05-01

    We present a generic method for automatically calibrating a computer code to an experiment, with uncertainty, for a given “training” set of computer code runs. The calibration technique is general and probabilistic, meaning the calibration uncertainty is represented in the form of a probability distribution. We demonstrate the calibration method by calibrating a combined Finite-Discrete Element Method (FDEM) to a Split Hopkinson Pressure Bar (SHPB) experiment with a granite sample. The probabilistic calibration method combines runs of a FDEM computer simulation for a range of “training” settings and experimental uncertainty to develop a statistical emulator. The process allows for calibration of input parameters and produces output quantities with uncertainty estimates for settings where simulation results are desired. Input calibration and FDEM fitted results are presented. We find that the maximum shear strength σ_t^max and to a lesser extent maximum tensile strength σ_n^max govern the behavior of the stress-time curve before and around the peak, while the specific energy in Mode II (shear) E_t largely governs the post-peak behavior of the stress-time curve. Good agreement is found between the calibrated FDEM and the SHPB experiment. Interestingly, we find the SHPB experiment to be rather uninformative for calibrating the softening-curve shape parameters (a, b, and c). This work stands as a successful demonstration of how a general probabilistic calibration framework can automatically calibrate FDEM parameters to an experiment.

  15. Quantitation of low molecular weight sugars by chemical derivatization-liquid chromatography/multiple reaction monitoring/mass spectrometry.

    PubMed

    Han, Jun; Lin, Karen; Sequria, Carita; Yang, Juncong; Borchers, Christoph H

    2016-07-01

    A new method for the separation and quantitation of 13 mono- and disaccharides has been developed by chemical derivatization/ultra-HPLC/negative-ion ESI-multiple-reaction monitoring MS. 3-Nitrophenylhydrazine (at 50°C for 60 min) was shown to be able to quantitatively derivatize low-molecular weight (LMW) reducing sugars. The nonreducing sugar, sucrose, was not derivatized. A pentafluorophenyl-bonded phase column was used for the chromatographic separation of the derivatized sugars. This method exhibits femtomole-level sensitivity, high precision (CVs of ≤ 4.6%) and high accuracy for the quantitation of LMW sugars in wine. Excellent linearity (R² ≥ 0.9993) and linear ranges of ∼500-fold for disaccharides and ∼1000-4000-fold for monosaccharides were achieved. With internal calibration (13C-labeled internal standards), recoveries were between 93.6% ± 1.6% (xylose) and 104.8% ± 5.2% (glucose). With external calibration, recoveries ranged from 82.5% ± 0.8% (ribulose) to 105.2% ± 2.1% (xylulose). Quantitation of sugars in two red wines and two white wines was performed using this method; quantitation of the central carbon metabolism-related carboxylic acids and tartaric acid was carried out using a previously established derivatization procedure with 3-nitrophenylhydrazine as well. The results showed that these two classes of compounds-both of which have important organoleptic properties-had different compositions in red and white wines. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
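
    As a generic illustration of the internal-calibration step (not the paper's exact procedure), the sketch below fits the analyte/internal-standard area ratio against the concentration ratio and uses the fit to quantify an unknown; all peak areas, concentrations and the analyte identity are invented.

      import numpy as np

      # Hypothetical calibration series for one derivatized sugar: analyte peak areas,
      # the 13C-labelled internal-standard (IS) areas, and the known standard amounts.
      conc_std = np.array([0.5, 1.0, 2.0, 5.0, 10.0])          # micromolar
      area_analyte = np.array([0.9e4, 1.8e4, 3.7e4, 9.3e4, 18.8e4])
      area_is = np.full_like(area_analyte, 2.0e4)               # IS spiked at a fixed level
      conc_is = 2.0                                             # micromolar

      # Internal calibration: response ratio vs. concentration ratio.
      x = conc_std / conc_is
      y = area_analyte / area_is
      slope, intercept = np.polyfit(x, y, 1)

      def quantify(area_unknown, area_is_unknown):
          ratio = area_unknown / area_is_unknown
          return conc_is * (ratio - intercept) / slope

      print("unknown sample:", round(quantify(5.6e4, 2.1e4), 2), "uM")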

  16. Hydrophilic interaction liquid chromatography of anthranilic acid-labelled oligosaccharides with a 4-aminobenzoic acid ethyl ester-labelled dextran hydrolysate internal standard.

    PubMed

    Neville, David C A; Alonzi, Dominic S; Butters, Terry D

    2012-04-13

    Hydrophilic interaction liquid chromatography (HILIC) of fluorescently labelled oligosaccharides is used in many laboratories to analyse complex oligosaccharide mixtures. Separations are routinely performed using a TSK gel-Amide 80 HPLC column, and retention times of different oligosaccharide species are converted to glucose unit (GU) values that are determined with reference to an external standard. However, if retention times were to be compared with an internal standard, consistent and more accurate GU values would be obtained. We present a method to perform internal standard-calibrated HILIC of fluorescently labelled oligosaccharides. The method relies on co-injection of 4-aminobenzoic acid ethyl ester (4-ABEE)-labelled internal standard and detection by UV absorption, with 2-AA (2-aminobenzoic acid)-labelled oligosaccharides. 4-ABEE is a UV chromophore and a fluorophore, but there is no overlap of the fluorescent spectrum of 4-ABEE with the commonly used fluorescent reagents. The dual nature of 4-ABEE allows for accurate calculation of the delay between UV and fluorescent signals when determining the GU values of individual oligosaccharides. The GU values obtained are inherently more accurate as slight differences in gradients that can influence retention are negated by use of an internal standard. Therefore, this paper provides the first method for determination of HPLC-derived GU values of fluorescently labelled oligosaccharides using an internal calibrant. Copyright © 2012 Elsevier B.V. All rights reserved.
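
    A minimal sketch of turning retention times into GU values against a dextran-ladder internal standard is shown below; the ladder retention times, the UV-to-fluorescence delay and the use of simple linear interpolation are illustrative assumptions rather than details taken from the paper.

      import numpy as np

      # Hypothetical dextran-ladder calibrant: retention times (min) of the GU 1..10
      # peaks of the 4-ABEE-labelled internal standard, detected by UV.
      ladder_rt_uv = np.array([8.2, 11.5, 14.9, 18.0, 20.9, 23.6, 26.1, 28.4, 30.5, 32.4])
      ladder_gu = np.arange(1, 11)

      # Measured delay between the UV and fluorescence detectors (min, illustrative).
      uv_to_fluor_delay = 0.12
      ladder_rt_fluor = ladder_rt_uv + uv_to_fluor_delay

      def gu_value(rt_fluor):
          """GU of a 2-AA-labelled oligosaccharide peak by interpolation on the ladder."""
          return float(np.interp(rt_fluor, ladder_rt_fluor, ladder_gu))

      print("peak at 19.3 min ->", round(gu_value(19.3), 2), "GU")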

  17. Methods to control for unmeasured confounding in pharmacoepidemiology: an overview.

    PubMed

    Uddin, Md Jamal; Groenwold, Rolf H H; Ali, Mohammed Sanni; de Boer, Anthonius; Roes, Kit C B; Chowdhury, Muhammad A B; Klungel, Olaf H

    2016-06-01

    Background: Unmeasured confounding is one of the principal problems in pharmacoepidemiologic studies. Several methods have been proposed to detect or control for unmeasured confounding either at the study design phase or the data analysis phase. Aim of the Review: To provide an overview of commonly used methods to detect or control for unmeasured confounding and to provide recommendations for proper application in pharmacoepidemiology. Methods/Results: Methods to control for unmeasured confounding in the design phase of a study are case-only designs (e.g., case-crossover, case-time control, self-controlled case series) and the prior event rate ratio adjustment method. Methods that can be applied in the data analysis phase include the negative control method, the perturbation variable method, instrumental variable methods, sensitivity analysis, and ecological analysis. A separate group of methods are those in which additional information on confounders is collected from a substudy. The latter group includes external adjustment, propensity score calibration, two-stage sampling, and multiple imputation. Conclusion: As the performance and application of the methods to handle unmeasured confounding may differ across studies and across databases, we stress the importance of using both statistical evidence and substantial clinical knowledge for interpretation of the study results.

  18. Role of inspectors in external review mechanisms: criteria for selection, training and appraisal.

    PubMed

    Plebani, M

    2001-07-20

    There is a wide consensus that an external review mechanism, whether in the form of peer review, accreditation or certification according to the ISO 9000 series, is more than its standards. The survey process, the role of inspectors and standard interpretation contribute to the essence of the programme itself. Above all, the criteria used for the selection, training and appraisal of inspectors are of paramount importance. While the ISO norms do not require certification bodies to employ "peer reviewers" for the healthcare sector, experience in this sector is the main criterion for recruiting inspectors in accreditation and peer review programmes. However, the ISO/IEC Guide 58, for the setting up and operation of a laboratory accreditation body, specifies that inspectors should have appropriate technical knowledge of the specific calibrations, tests or types of calibration or tests for which accreditation is sought. Training, updating and assessment of inspectors are clearly defined under ISO, but are also systematic under accreditation programmes. Part-time inspectors who are professionals currently practising in a healthcare facility and are in touch with the day-to-day work reality are preferred for accreditation programmes, which have self-regulation, education and quality improvement as their main concerns, while full-time and external inspectors are used in external review mechanisms with registration and certification as their main concerns. As well as harmonising the standards for accreditation, it is important to obtain consensus on the criteria to use for the selection, training and assessment of inspectors in order to ensure that different national or international programmes gain mutual recognition.

  19. Shortwave Radiometer Calibration Methods Comparison and Resulting Solar Irradiance Measurement Differences: A User Perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, Aron; Sengupta, Manajit; Andreas, Afshin

    Banks financing solar energy projects require assurance that these systems will produce the energy predicted. Furthermore, utility planners and grid system operators need to understand the impact of the variable solar resource on solar energy conversion system performance. Accurate solar radiation data sets reduce the expense associated with mitigating performance risk and assist in understanding the impacts of solar resource variability. The accuracy of solar radiation measured by radiometers depends on the instrument performance specification, installation method, calibration procedure, measurement conditions, maintenance practices, location, and environmental conditions. This study addresses the effect of different calibration methods provided by radiometric calibration service providers, such as NREL and manufacturers of radiometers, on the resulting calibration responsivity. Some of these radiometers are calibrated indoors and some outdoors. To establish or understand the differences in calibration methodology, we processed and analyzed field-measured data from these radiometers. This study investigates calibration responsivities provided by NREL's broadband outdoor radiometer calibration (BORCAL) and a few prominent manufacturers. The BORCAL method provides the outdoor calibration responsivity of pyranometers and pyrheliometers at 45 degree solar zenith angle, and as a function of solar zenith angle determined by clear-sky comparisons with reference irradiance. The BORCAL method also employs a thermal offset correction to the calibration responsivity of single-black thermopile detectors used in pyranometers. Indoor calibrations of radiometers by their manufacturers are performed using a stable artificial light source in a side-by-side comparison between the test radiometer under calibration and a reference radiometer of the same type. In both methods, the reference radiometer calibrations are traceable to the World Radiometric Reference (WRR). These different methods of calibration demonstrated +1% to +2% differences in solar irradiance measurement. Analyzing these differences will ultimately help determine the uncertainty of the field radiometer data and guide the development of a consensus standard for calibration. Further advancing procedures for precisely calibrating radiometers to world reference standards that reduce measurement uncertainty will allow more accurate prediction of solar output and improve the bankability of solar projects.

  20. Calibration Method to Eliminate Zeroth Order Effect in Lateral Shearing Interferometry

    NASA Astrophysics Data System (ADS)

    Fang, Chao; Xiang, Yang; Qi, Keqi; Chen, Dawei

    2018-04-01

    In this paper, a calibration method is proposed which eliminates the zeroth order effect in lateral shearing interferometry. An analytical expression of the calibration error function is deduced, and the relationship between the phase-restoration error and calibration error is established. The analytical results show that the phase-restoration error introduced by the calibration error is proportional to the phase shifting error and zeroth order effect. The calibration method is verified using simulations and experiments. The simulation results show that the phase-restoration error is approximately proportional to the phase shift error and zeroth order effect, when the phase shifting error is less than 2° and the zeroth order effect is less than 0.2. The experimental result shows that compared with the conventional method with 9-frame interferograms, the calibration method with 5-frame interferograms achieves nearly the same restoration accuracy.

  1. External calibration of polarimetric radar images using distributed targets

    NASA Technical Reports Server (NTRS)

    Yueh, Simon H.; Nghiem, S. V.; Kwok, R.

    1992-01-01

    A new technique is presented for calibrating polarimetric synthetic aperture radar (SAR) images using only the responses from natural distributed targets. The model for polarimetric radars is assumed to be X = cRST, where X is the measured scattering matrix corresponding to the target scattering matrix S distorted by the system matrices T and R (in general T does not equal R^T). To allow for the polarimetric calibration using only distributed targets and corner reflectors, van Zyl assumed a reciprocal polarimetric radar model with T = R^T; when applied to JPL SAR data, a heuristic symmetrization procedure is used by POLCAL to compensate the phase difference between the measured HV and VH responses and then take the average of both. This heuristic approach causes some non-removable cross-polarization responses for corner reflectors, which can be avoided by a rigorous symmetrization method based on reciprocity. After the radar is made reciprocal, a new algorithm based on the responses from distributed targets with reflection symmetry is developed to estimate the cross-talk parameters. The new algorithm never experiences problems in convergence and is also found to converge faster than the existing routines implemented for POLCAL. When the new technique is implemented for the JPL polarimetric data, symmetrization and cross-talk removal are performed on a line-by-line (azimuth) basis. After the cross-talks are removed from the entire image, phase and amplitude calibrations are carried out by selecting distributed targets either with azimuthal symmetry along the looking direction or with some well-known volume and surface scattering mechanisms to estimate the relative phases and amplitude responses of the horizontal and vertical channels.
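
    The sketch below illustrates only the heuristic symmetrization step mentioned above: estimate a constant phase difference between the measured HV and VH channels from many pixels, compensate it, and average the two channels. The data are synthetic, and the rigorous reciprocity-based symmetrization and the cross-talk removal algorithm are not reproduced.

      import numpy as np

      rng = np.random.default_rng(1)

      # Synthetic cross-polarized channels: VH equals HV up to a constant phase
      # offset plus a small amount of noise (purely illustrative values).
      n = 10_000
      s_hv = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
      x_hv = s_hv
      x_vh = s_hv * np.exp(1j * 0.7) + 0.01 * (rng.normal(size=n) + 1j * rng.normal(size=n))

      # Estimate the HV/VH phase difference from the whole line of pixels ...
      delta = np.angle(np.mean(x_hv * np.conj(x_vh)))
      print("estimated HV-VH phase difference (rad):", round(delta, 3))

      # ... compensate it and average the two cross-polarized channels.
      x_vh_aligned = x_vh * np.exp(1j * delta)
      x_cross_sym = 0.5 * (x_hv + x_vh_aligned)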

  2. Comparison of Three Contemporary Risk Scores for Mortality Following Elective Abdominal Aortic Aneurysm Repair

    PubMed Central

    Grant, S.W.; Hickey, G.L.; Carlson, E.D.; McCollum, C.N.

    2014-01-01

    Objective/background: A number of contemporary risk prediction models for mortality following elective abdominal aortic aneurysm (AAA) repair have been developed. Before a model is used either in clinical practice or to risk-adjust surgical outcome data it is important that its performance is assessed in external validation studies. Methods: The British Aneurysm Repair (BAR) score, Medicare, and Vascular Governance North West (VGNW) models were validated using an independent prospectively collected sample of multicentre clinical audit data. Consecutive data on 1,124 patients undergoing elective AAA repair at 17 hospitals in the north-west of England and Wales between April 2011 and March 2013 were analysed. The outcome measure was in-hospital mortality. Model calibration (observed to expected ratio with chi-square test, calibration plots, calibration intercept and slope) and discrimination (area under receiver operating characteristic curve [AUC]) were assessed in the overall cohort and procedural subgroups. Results: The mean age of the population was 74.4 years (SD 7.7); 193 (17.2%) patients were women and the majority of patients (759, 67.5%) underwent endovascular aneurysm repair. All three models demonstrated good calibration in the overall cohort and procedural subgroups. Overall discrimination was excellent for the BAR score (AUC 0.83, 95% confidence interval [CI] 0.76–0.89), and acceptable for the Medicare and VGNW models, with AUCs of 0.78 (95% CI 0.70–0.86) and 0.75 (95% CI 0.65–0.84) respectively. Only the BAR score demonstrated good discrimination in procedural subgroups. Conclusion: All three models demonstrated good calibration and discrimination for the prediction of in-hospital mortality following elective AAA repair and are potentially useful. The BAR score has a number of advantages, which include being developed on the most contemporaneous data, excellent overall discrimination, and good performance in procedural subgroups. Regular model validations and recalibration will be essential. PMID:24837173

  3. Calibration of Airframe and Occupant Models for Two Full-Scale Rotorcraft Crash Tests

    NASA Technical Reports Server (NTRS)

    Annett, Martin S.; Horta, Lucas G.; Polanco, Michael A.

    2012-01-01

    Two full-scale crash tests of an MD-500 helicopter were conducted in 2009 and 2010 at NASA Langley's Landing and Impact Research Facility in support of NASA's Subsonic Rotary Wing Crashworthiness Project. The first crash test was conducted to evaluate the performance of an externally mounted composite deployable energy absorber under combined impact conditions. In the second crash test, the energy absorber was removed to establish baseline loads that are regarded as severe but survivable. Accelerations and kinematic data collected from the crash tests were compared to a system-integrated finite element model of the test article. Results from 19 accelerometers placed throughout the airframe were compared to finite element model responses. The model developed for the purposes of predicting acceleration responses from the first crash test was inadequate when evaluating more severe conditions seen in the second crash test. A newly developed model calibration approach that includes uncertainty estimation, parameter sensitivity, impact shape orthogonality, and numerical optimization was used to calibrate model results for the second full-scale crash test. This combination of heuristic and quantitative methods was used to identify modeling deficiencies, evaluate parameter importance, and propose required model changes. It is shown that the multi-dimensional calibration techniques presented here are particularly effective in identifying model adequacy. Acceleration results for the calibrated model were compared to test results and the original model results. There was a noticeable improvement in the pilot and co-pilot region, a slight improvement in the occupant model response, and an over-stiffening effect in the passenger region. This approach should be adopted early on, in combination with the building-block approaches that are customarily used, for model development and test planning guidance. Complete crash simulations with validated finite element models can be used to satisfy crash certification requirements, thereby reducing overall development costs.

  4. Integrated Positioning for Coal Mining Machinery in Enclosed Underground Mine Based on SINS/WSN

    PubMed Central

    Hui, Jing; Wu, Lei; Yan, Wenxu; Zhou, Lijuan

    2014-01-01

    To realize dynamic positioning of the shearer, a new method based on SINS/WSN is studied in this paper. Firstly, the shearer movement model is built and the running behavior of the shearer at the coal mining face is characterized. Secondly, as external calibration of SINS using GPS is infeasible in an enclosed underground mine, a WSN positioning strategy is proposed to eliminate the accumulative error produced by SINS; the corresponding coupling model is then established. Finally, positioning performance is analyzed by simulation and experiment. Results show that the attitude angle and position of the shearer can be tracked in real time by the integrated positioning strategy based on SINS/WSN, and the positioning precision meets the demands of actual working conditions. PMID:24574891

  5. Detection of gas leakage

    DOEpatents

    Thornberg, Steven [Peralta, NM]; Brown, Jason [Albuquerque, NM]

    2012-06-19

    A method of detecting leaks and measuring volumes, as well as an apparatus, the Power-free Pump Module (PPM), which is a self-contained leak test and volume measurement apparatus that requires no external sources of electrical power during leak testing or volume measurement. The invention is a portable, pneumatically-controlled instrument capable of generating a vacuum, calibrating volumes, and performing quantitative leak tests on a closed test system or device, all without the use of alternating current (AC) power. Capabilities include the ability to provide a modest vacuum (less than 10 Torr), perform a pressure rise leak test, measure the gas's absolute pressure, and perform volume measurements. All operations are performed through a simple rotary control valve which controls pneumatically-operated manifold valves.

  6. Detection of gas leakage

    DOEpatents

    Thornberg, Steven M; Brown, Jason

    2015-02-17

    A method of detecting leaks and measuring volumes as well as a device, the Power-free Pump Module (PPM), provides a self-contained leak test and volume measurement apparatus that requires no external sources of electrical power during leak testing or volume measurement. The PPM is a portable, pneumatically-controlled instrument capable of generating a vacuum, calibrating volumes, and performing quantitative leak tests on a closed test system or device, all without the use of alternating current (AC) power. Capabilities include the ability to provide a modest vacuum (less than 10 Torr) using a venturi pump, perform a pressure rise leak test, measure the gas's absolute pressure, and perform volume measurements. All operations are performed through a simple rotary control valve which controls pneumatically-operated manifold valves.

  7. Vision-based augmented reality system

    NASA Astrophysics Data System (ADS)

    Chen, Jing; Wang, Yongtian; Shi, Qi; Yan, Dayuan

    2003-04-01

    The most promising aspect of augmented reality lies in its ability to integrate the virtual world of the computer with the real world of the user. Namely, users can interact with the real world subjects and objects directly. This paper presents an experimental augmented reality system with a video see-through head-mounted device to display visual objects, as if they were lying on the table together with real objects. In order to overlay virtual objects on the real world at the right position and orientation, the accurate calibration and registration are most important. A vision-based method is used to estimate CCD external parameters by tracking 4 known points with different colors. It achieves sufficient accuracy for non-critical applications such as gaming, annotation and so on.

  8. Quantitative 1H NMR: Development and Potential of an Analytical Method – an Update

    PubMed Central

    Pauli, Guido F.; Gödecke, Tanja; Jaki, Birgit U.; Lankin, David C.

    2012-01-01

    Covering the literature from mid-2004 until the end of 2011, this review continues a previous literature overview on quantitative 1H NMR (qHNMR) methodology and its applications in the analysis of natural products (NPs). Among the foremost advantages of qHNMR are its accurate quantitation with external calibration, the lack of any requirement for identical reference materials, high precision and accuracy when properly validated, and the ability to quantitate multiple analytes simultaneously. As a result of the inclusion of over 170 new references, this updated review summarizes a wealth of detailed experiential evidence and newly developed methodology that supports qHNMR as a valuable and unbiased analytical tool for natural product and other areas of research. PMID:22482996

  9. Development and validation of a nomogram predicting recurrence risk in women with symptomatic urinary tract infection.

    PubMed

    Cai, Tommaso; Mazzoli, Sandra; Migno, Serena; Malossini, Gianni; Lanzafame, Paolo; Mereu, Liliana; Tateo, Saverio; Wagenlehner, Florian M E; Pickard, Robert S; Bartoletti, Riccardo

    2014-09-01

    To develop and externally validate a novel nomogram predicting recurrence risk probability at 12 months in women after an episode of urinary tract infection. The study included 768 women from Santa Maria Annunziata Hospital, Florence, Italy, affected by urinary tract infections from January 2005 to December 2009. Another 373 women with the same criteria enrolled at Santa Chiara Hospital, Trento, Italy, from January 2010 to June 2012 were used to externally validate and calibrate the nomogram. Univariate and multivariate Cox regression models tested the relationship between urinary tract infection recurrence risk, and patient clinical and laboratory characteristics. The nomogram was evaluated by calculating concordance probabilities, as well as testing calibration of predicted urinary tract infection recurrence with observed urinary tract infections. Nomogram variables included: number of partners, bowel function, type of pathogens isolated (Gram-positive/negative), hormonal status, number of previous urinary tract infection recurrences and previous treatment of asymptomatic bacteriuria. Of the original development data, 261 out of 768 women presented at least one episode of recurrence of urinary tract infection (33.9%). The nomogram had a concordance index of 0.85. The nomogram predictions were well calibrated. This model showed high discrimination accuracy and favorable calibration characteristics. In the validation group (373 women), the overall c-index was 0.83 (P = 0.003, 95% confidence interval 0.51-0.99), whereas the area under the receiver operating characteristic curve was 0.85 (95% confidence interval 0.79-0.91). The present nomogram accurately predicts the recurrence risk of urinary tract infection at 12 months, and can assist in identifying women at high risk of symptomatic recurrence that can be suitable candidates for a prophylactic strategy. © 2014 The Japanese Urological Association.

  10. Centennial increase in geomagnetic activity: Latitudinal differences and global estimates

    NASA Astrophysics Data System (ADS)

    Mursula, K.; Martini, D.

    2006-08-01

    We study here the centennial change in geomagnetic activity using the newly proposed Inter-Hour Variability (IHV) index. We correct the earlier estimates of the centennial increase by taking into account the effect of the change of the sampling of the magnetic field from one sample per hour to hourly means in the first years of the previous century. Since the IHV index is a variability index, the larger variability in the case of hourly sampling leads, without due correction, to excessively large values in the beginning of the century and an underestimated centennial increase. We discuss two ways to extract the necessary sampling calibration factors and show that they agree very well with each other. The effect of calibration is especially large at the midlatitude Cheltenham/Fredricksburg (CLH/FRD) station where the centennial increase changes from only 6% to 24% caused by calibration. Sampling calibration also leads to a larger centennial increase of global geomagnetic activity based on the IHV index. The results verify a significant centennial increase in global geomagnetic activity, in a qualitative agreement with the aa index, although a quantitative comparison is not warranted. We also find that the centennial increase has a rather strong and curious latitudinal dependence. It is largest at high latitudes. Quite unexpectedly, it is larger at low latitudes than at midlatitudes. These new findings indicate interesting long-term changes in near-Earth space. We also discuss possible internal and external causes for these observed differences. The centennial change of geomagnetic activity may be partly affected by changes in external conditions, partly by the secular decrease of the Earth's magnetic moment whose effect in near-Earth space may be larger than estimated so far.
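
    For concreteness, the sketch below computes a toy IHV-style variability index (mean absolute difference of successive hourly values) and applies a multiplicative sampling-calibration factor when the input consists of spot samples rather than hourly means; the window length, field values and factor are illustrative assumptions and not the calibration factors derived in the study.

      import numpy as np

      def ihv_like_index(hourly_values, spot_sampled=False, sampling_factor=0.93):
          """Mean |difference| of successive hourly values (an IHV-style variability index).

          If the values are single spot samples per hour rather than hourly means,
          apply a down-scaling calibration factor (illustrative value) so the early
          and modern parts of the series become comparable.
          """
          v = np.asarray(hourly_values, dtype=float)
          index = np.mean(np.abs(np.diff(v)))
          return index * sampling_factor if spot_sampled else index

      rng = np.random.default_rng(2)
      h_field = 21000 + np.cumsum(rng.normal(scale=3.0, size=7))   # hypothetical nT values
      print("hourly means :", round(ihv_like_index(h_field), 2), "nT")
      print("spot samples :", round(ihv_like_index(h_field, spot_sampled=True), 2), "nT")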

  11. Spatial and Temporal Patterns of SMAP Brightness Temperatures for Use in Level 1 TB Characterization

    NASA Astrophysics Data System (ADS)

    Kim, E. J.

    2015-12-01

    1. Introduction. The recent launch of NASA's Soil Moisture Active Passive (SMAP) mission [Entekhabi, et al] has opened the door to improved brightness temperature (TB) calibration of satellite L-band microwave radiometers, through the use of SMAP's lower noise performance and better immunity to man-made interference (vs. ESA's Soil Moisture Ocean Salinity (SMOS) mission [Kerr, et al]), better spatial resolution (vs. NASA's Aquarius sea surface salinity mission [Le Vine, et al]), and cleaner antenna pattern (vs. SMOS). All three radiometers use/used large homogeneous places on Earth's surface as calibration targets—parts of the ocean, Antarctica, and tropical forests. Despite the recent loss of Aquarius data, there is still hope for creating a longer-term L-band data set that spans the timeframe of all 3 missions. 2. Description of Analyses and Expected Results. In this paper, we analyze SMAP brightness temperature data to quantify the spatial and temporal characteristics of external target areas in the oceans, Antarctica, forests, and other areas. Existing analyses have examined these targets in terms of averages, standard deviations, and other basic statistics (for Aquarius & SMOS as well). This paper will approach the problem from a signal processing perspective. Coupled with the use of SMAP's novel RFI-mitigated TBs, and the aforementioned lower noise and cleaner antenna pattern, it is expected that of the 3 L-band missions, SMAP should do the best job of characterizing such external targets. The resulting conclusions should be useful to extract the best possible TB calibration from all 3 missions, helping to inter-compare the TB from the 3 missions, and to eventually inter-calibrate the TBs into a single long-term dataset.

  12. Calibration and Validation Plan for the L2A Processor and Products of the SENTINEL-2 Mission

    NASA Astrophysics Data System (ADS)

    Main-Knorn, M.; Pflug, B.; Debaecker, V.; Louis, J.

    2015-04-01

    The Copernicus programme is a European initiative for the implementation of information services based on observation data received from Earth Observation (EO) satellites and ground-based information. In the frame of this programme, ESA is developing the Sentinel-2 optical imaging mission that will deliver optical data products designed to feed downstream services mainly related to land monitoring, emergency management and security. To ensure the highest quality of service, ESA has set up the Sentinel-2 Mission Performance Centre (MPC) in charge of the overall performance monitoring of the Sentinel-2 mission. TPZ F and DLR have teamed up in order to provide the best added-value support to the MPC for calibration and validation of the Level-2A processor (Sen2Cor) and products. This paper gives an overview of the planned L2A calibration and validation activities. Level-2A processing is applied to Top-Of-Atmosphere (TOA) Level-1C ortho-image reflectance products. The Level-2A main output is the Bottom-Of-Atmosphere (BOA) corrected reflectance product. Additional outputs are an Aerosol Optical Thickness (AOT) map, a Water Vapour (WV) map and a Scene Classification (SC) map with Quality Indicators for cloud and snow probabilities. Level-2A BOA, AOT and WV outputs are calibrated and validated using ground-based data of automatically operating stations and data of in-situ campaigns. Scene classification is validated by the visual inspection of test datasets and cross-sensor comparison, supplemented by meteorological data, if available. Contributions of external in-situ campaigns would enlarge the reference dataset and enable an extended validation exercise. Therefore, we are highly interested in and welcome external contributors.

  13. A combined application of thermal desorber and gas chromatography to the analysis of gaseous carbonyls with the aid of two internal standards.

    PubMed

    Kim, Ki-Hyun; Anthwal, A; Pandey, Sudhir Kumar; Kabir, Ehsanul; Sohn, Jong Ryeul

    2010-11-01

    In this study, a series of GC calibration experiments were conducted to examine the feasibility of the thermal desorption approach for the quantification of five carbonyl compounds (acetaldehyde, propionaldehyde, butyraldehyde, isovaleraldehyde, and valeraldehyde) in conjunction with two internal standard compounds. The gaseous working standards of carbonyls were calibrated with the aid of thermal desorption as a function of standard concentration and of loading volume. The detection properties were then compared against two types of external calibration data sets derived by the fixed standard volume and fixed standard concentration approaches. According to this comparison, the fixed standard volume-based calibration of carbonyls should be more sensitive and reliable than its fixed standard concentration counterpart. Moreover, the use of an internal standard can improve the analytical reliability of aromatics and some carbonyls to a considerable extent. Our preliminary test on real samples, however, indicates that the performance of internal calibration, when tested using samples of varying dilution ranges, can be moderately different from that derivable from standard gases. It thus suggests that the reliability of calibration approaches should be examined carefully, with consideration of the interactive relationships between the compound-specific properties and the operating conditions of the instrumental setup.

  14. Theoretical foundation, methods, and criteria for calibrating human vibration models using frequency response functions

    PubMed Central

    Dong, Ren G.; Welcome, Daniel E.; McDowell, Thomas W.; Wu, John Z.

    2015-01-01

    While simulations of the measured biodynamic responses of the whole human body or body segments to vibration are conventionally interpreted as summaries of biodynamic measurements, and the resulting models are considered quantitative, this study looked at these simulations from a different angle: model calibration. The specific aims of this study are to review and clarify the theoretical basis for model calibration, to help formulate the criteria for calibration validation, and to help appropriately select and apply calibration methods. In addition to established vibration theory, a novel theorem of mechanical vibration is also used to enhance the understanding of the mathematical and physical principles of the calibration. Based on this enhanced understanding, a set of criteria was proposed and used to systematically examine the calibration methods. Besides theoretical analyses, a numerical testing method is also used in the examination. This study identified the basic requirements for each calibration method to obtain a unique calibration solution. This study also confirmed that the solution becomes more robust if more than sufficient calibration references are provided. Practically, however, as more references are used, more inconsistencies can arise among the measured data for representing the biodynamic properties. To help account for the relative reliabilities of the references, a baseline weighting scheme is proposed. The analyses suggest that the best choice of calibration method depends on the modeling purpose, the model structure, and the availability and reliability of representative reference data. PMID:26740726

  15. Prospects and difficulties in TiO₂ nanoparticles analysis in cosmetic and food products using asymmetrical flow field-flow fractionation hyphenated to inductively coupled plasma mass spectrometry.

    PubMed

    López-Heras, Isabel; Madrid, Yolanda; Cámara, Carmen

    2014-06-01

    In this work, we proposed an analytical approach based on asymmetrical flow field-flow fractionation combined with inductively coupled plasma mass spectrometry (AsFlFFF-ICP-MS) for rutile titanium dioxide nanoparticle (TiO2NP) characterization and quantification in cosmetic and food products. AsFlFFF-ICP-MS separation of TiO2NPs was performed using 0.2% (w/v) SDS, 6% (v/v) methanol at pH 8.7 as the carrier solution. Two problems were addressed during TiO2NPs analysis by AsFlFFF-ICP-MS: size distribution determination and element quantification of the NPs. Two approaches were used for size determination: size calibration using polystyrene latex standards of known sizes and transmission electron microscopy (TEM). A method based on focused sonication for preparing NPs dispersions followed by an on-line external calibration strategy based on AsFlFFF-ICP-MS, using rutile TiO2NPs as standards, is presented here for the first time. The developed method suppressed non-specific interactions between NPs and membrane, and overcame possible erroneous results obtained when quantification is performed by using ionic Ti solutions. The applicability of the quantification method was tested on cosmetic products (moisturizing cream). Regarding validation, at the 95% confidence level, no significant differences were detected between titanium concentrations in the moisturizing cream prior to sample mineralization (3865±139 mg Ti/kg sample), by FIA-ICP-MS analysis prior to NPs extraction (3770±24 mg Ti/kg sample), and after using the optimized on-line calibration approach (3699±145 mg Ti/kg sample). Despite the high Ti content found in the studied food products (sugar glass and coffee cream), TiO2NPs were not detected. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. Application of Composite Small Calibration Objects in Traffic Accident Scene Photogrammetry

    PubMed Central

    Chen, Qiang; Xu, Hongguo; Tan, Lidong

    2015-01-01

    In order to address the difficulty of arranging large calibration objects and the low measurement accuracy of small calibration objects in traffic accident scene photogrammetry, a photogrammetric method based on a composite of small calibration objects is proposed. Several small calibration objects are placed around the traffic accident scene, and the coordinate system of the composite calibration object is given based on one of them. By maintaining the relative position and coplanar relationship of the small calibration objects, the local coordinate system of each small calibration object is transformed into the coordinate system of the composite calibration object. The two-dimensional direct linear transformation method is improved based on minimizing the reprojection error of the calibration points of all objects. A rectified image is obtained using the nonlinear optimization method. The increased accuracy of traffic accident scene photogrammetry using a composite small calibration object is demonstrated through the analysis of field experiments and case studies. PMID:26011052
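
    The underlying planar mapping can be pictured with the basic two-dimensional DLT below, which estimates a homography from pooled calibration points and reports the reprojection error; the point coordinates are synthetic, and the paper's improved joint optimisation across several small objects is not reproduced.

      import numpy as np

      def fit_homography(world_xy, image_xy):
          """Standard 2-D DLT: solve for H (up to scale) from >= 4 correspondences."""
          rows = []
          for (x, y), (u, v) in zip(world_xy, image_xy):
              rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
              rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
          _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
          return vt[-1].reshape(3, 3)

      def reproject(H, world_xy):
          pts = np.column_stack([world_xy, np.ones(len(world_xy))]) @ H.T
          return pts[:, :2] / pts[:, 2:3]

      # Calibration points pooled from several small objects, expressed in a common
      # composite coordinate system (ground plane, metres); image points are pixels.
      world = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [3, 0.5], [3.5, 1.2]])
      H_true = np.array([[120, 8, 310], [5, -115, 560], [0.01, 0.002, 1.0]])
      image = reproject(H_true, world)

      H_est = fit_homography(world, image)
      err = np.linalg.norm(reproject(H_est, world) - image, axis=1)
      print("max reprojection error (px):", err.max())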

  17. Calibration and accuracy analysis of a focused plenoptic camera

    NASA Astrophysics Data System (ADS)

    Zeller, N.; Quint, F.; Stilla, U.

    2014-08-01

    In this article we introduce new methods for the calibration of depth images from focused plenoptic cameras and validate the results. We start with a brief description of the concept of a focused plenoptic camera and how a depth map can be estimated from the recorded raw image. For this camera, an analytical expression of the depth accuracy is derived for the first time. In the main part of the paper, methods to calibrate a focused plenoptic camera are developed and evaluated. The optical imaging process is calibrated by using a method which is already known from the calibration of traditional cameras. For the calibration of the depth map, two new model-based methods, which make use of the projection concept of the camera, are developed. These new methods are compared to a common curve fitting approach, which is based on Taylor-series approximation. Both model-based methods show significant advantages compared to the curve fitting method. They need fewer reference points for calibration than the curve fitting method and, moreover, supply a function that is valid beyond the range of calibration. In addition, the depth map accuracy of the plenoptic camera was experimentally investigated for different focal lengths of the main lens and is compared to the analytical evaluation.
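
    The curve-fitting baseline mentioned above can be pictured with the short sketch below, which fits object distance as a low-order polynomial of the estimated virtual depth; the data pairs and the cubic model are invented for illustration and differ from the paper's Taylor-series formulation and its model-based alternatives.

      import numpy as np

      # Hypothetical calibration pairs: virtual depth v estimated from the raw image
      # and the known object distance z (metres) of a reference target.
      v = np.array([2.1, 2.6, 3.2, 3.9, 4.8, 5.9, 7.4])
      z = np.array([0.45, 0.60, 0.80, 1.05, 1.40, 1.90, 2.60])

      # Curve-fitting baseline: low-order polynomial fit of z as a function of v.
      z_of_v = np.poly1d(np.polyfit(v, z, 3))
      print("z at v = 4.3:", round(float(z_of_v(4.3)), 3), "m")

      # Caveat (as the abstract notes): such a fit is only trustworthy inside the
      # calibrated range of v, unlike a model-based calibration.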

  18. SU-F-T-485: Independent Remote Audits for TG51 NonCompliant Photon Beams Performed by the IROC Houston QA Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alvarez, P; Molineu, A; Lowenstein, J

    Purpose: IROC-H conducts external audits for output check verification of photon and electron beams. Many of these beams can meet the geometric requirements of the TG 51 calibration protocol. For those photon beams that are non TG 51 compliant, like Elekta GammaKnife, Accuray CyberKnife and TomoTherapy, IROC-H has specific audit tools to monitor the reference calibration. Methods: IROC-H used its TLD and OSLD remote monitoring systems to verify the output of machines with TG 51 non compliant beams. Acrylic OSLD miniphantoms are used for the CyberKnife. Special TLD phantoms are used for TomoTherapy and GammaKnife machines to accommodate the specific geometry of each machine. These remote audit tools are sent to institutions to be irradiated and returned to IROC-H for analysis. Results: The average IROC-H/institution ratios for 480 GammaKnife, 660 CyberKnife and 907 rotational TomoTherapy beams are 1.000±0.021, 1.008±0.019, 0.974±0.023, respectively. In the particular case of TomoTherapy, the overall ratio is 0.977±0.022 for HD units. The standard deviations of all results are consistent with values determined for TG 51 compliant photon beams. These ratios have shown some changes compared to values presented in 2008. The GammaKnife results were corrected by an experimentally determined scatter factor of 1.025 in 2013. The TomoTherapy helical beam results are now from a rotational beam whereas in 2008 the results were from a static beam. The decision to change modality was based on recommendations from the users. Conclusion: External audits of beam outputs are a valuable tool to confirm the calibrations of photon beams regardless of whether the machine is TG 51 or TG 51 non compliant. The difference found for TomoTherapy units is under investigation. This investigation was supported by IROC grant CA180803 awarded by the NCI.

  19. Comparison of TLD calibration methods for 192Ir dosimetry

    PubMed Central

    Butler, Duncan J.; Wilfert, Lisa; Ebert, Martin A.; Todd, Stephen P.; Hayton, Anna J.M.; Kron, Tomas

    2013-01-01

    For the purpose of dose measurement using a high-dose-rate 192Ir source, four methods of thermoluminescent dosimeter (TLD) calibration were investigated. Three of the four calibration methods used the 192Ir source. Dwell times were calculated to deliver 1 Gy to the TLDs irradiated either in air or water. Dwell time calculations were confirmed by direct measurement using an ionization chamber. The fourth method of calibration used 6 MV photons from a medical linear accelerator, and an energy correction factor was applied to account for the difference in sensitivity of the TLDs in 192Ir and 6 MV. The results of the four TLD calibration methods are presented in terms of the results of a brachytherapy audit where seven Australian centers irradiated three sets of TLDs in a water phantom. The results were in agreement within estimated uncertainties when the TLDs were calibrated with the 192Ir source. Calibrating TLDs in a phantom similar to that used for the audit proved to be the most practical method and provided the greatest confidence in measured dose. When calibrated using 6 MV photons, the TLD results were consistently higher than the 192Ir-calibrated TLDs, suggesting this method does not fully correct for the response of the TLDs when irradiated in the audit phantom. PACS number: 87 PMID:23318392

  20. Methods for Calibration of Prout-Tompkins Kinetics Parameters Using EZM Iteration and GLO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wemhoff, A P; Burnham, A K; de Supinski, B

    2006-11-07

    This document contains information regarding the standard procedures used to calibrate chemical kinetics parameters for the extended Prout-Tompkins model to match experimental data. Two methods for calibration are mentioned: EZM calibration and GLO calibration. EZM calibration matches kinetics parameters to three data points, while GLO calibration slightly adjusts kinetic parameters to match multiple points. Information is provided regarding the theoretical approach and application procedure for both of these calibration algorithms. It is recommended that for the calibration process, the user begin with EZM calibration to provide a good estimate, and then fine-tune the parameters using GLO. Two examples have been provided to guide the reader through a general calibrating process.
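
    As a generic picture of what such a calibration does (not the EZM or GLO algorithms themselves), the sketch below integrates an isothermal Prout-Tompkins-style rate law and fits its parameters to noisy reaction-fraction data with a least-squares routine; the rate-law form, the initiation seed and all numerical values are illustrative assumptions.

      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import least_squares

      def alpha_of_t(params, t):
          """Integrate an isothermal Prout-Tompkins-style rate law:
          d(alpha)/dt = k * (1 - alpha)**n * (alpha + a0)**m, with a small
          initiation seed a0 (generic stand-in, not the EZM/GLO codes)."""
          logk, n, m = params
          k = np.exp(logk)
          rhs = lambda _t, a: k * (1.0 - a) ** n * (a + 1e-3) ** m
          sol = solve_ivp(rhs, (t[0], t[-1]), [0.0], t_eval=t, rtol=1e-8)
          return sol.y[0]

      # Hypothetical isothermal reaction-fraction "data" generated from known
      # parameters plus measurement noise.
      t = np.linspace(0.0, 3600.0, 60)                       # seconds
      true = (np.log(2.0e-4), 1.0, 0.5)                      # ln k, n, m (illustrative)
      rng = np.random.default_rng(3)
      alpha_obs = alpha_of_t(true, t) + rng.normal(scale=0.005, size=t.size)

      # Least-squares calibration of the kinetic parameters to the data.
      fit = least_squares(lambda p: alpha_of_t(p, t) - alpha_obs,
                          x0=(np.log(1.0e-4), 1.2, 0.6))
      print("fitted ln k, n, m:", np.round(fit.x, 3))
      print("true   ln k, n, m:", np.round(true, 3))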

  1. A calibration method based on virtual large planar target for cameras with large FOV

    NASA Astrophysics Data System (ADS)

    Yu, Lei; Han, Yangyang; Nie, Hong; Ou, Qiaofeng; Xiong, Bangshu

    2018-02-01

    In order to obtain high precision in camera calibration, a target should be large enough to cover the whole field of view (FOV). For cameras with large FOV, using a small target will seriously reduce the precision of calibration. However, using a large target causes many difficulties in making, carrying and employing the large target. In order to solve this problem, a calibration method based on the virtual large planar target (VLPT), which is virtually constructed with multiple small targets (STs), is proposed for cameras with large FOV. In the VLPT-based calibration method, first, the positions and directions of the STs are changed several times to obtain a number of calibration images. Secondly, the VLPT of each calibration image is created by finding the virtual points corresponding to the feature points of the STs. Finally, intrinsic and extrinsic parameters of the camera are calculated by using the VLPTs. Experimental results show that the proposed method not only achieves calibration precision similar to that obtained with a large target, but also has good stability over the whole measurement area. Thus, the difficulty of accurately calibrating cameras with a large FOV can be effectively addressed by the proposed method, which is also easy to operate.
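
    The abstract includes no code; as a rough illustration of the standard planar-target calibration pipeline that the VLPT approach builds on (not the VLPT construction itself), the following Python/OpenCV sketch estimates intrinsic and extrinsic parameters from several views of a small chessboard target. The image filenames and pattern size are hypothetical.

```python
import numpy as np
import cv2

# Hypothetical example: a 9x6 chessboard with 25 mm squares used as one small planar target.
pattern_size = (9, 6)
square_size_mm = 25.0
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_size_mm

object_points, image_points = [], []
image_size = None
for fname in ["view01.png", "view02.png", "view03.png"]:   # hypothetical calibration images
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        continue
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        object_points.append(objp)
        image_points.append(corners)
        image_size = gray.shape[::-1]                       # (width, height)

# Estimate intrinsic and extrinsic parameters from all target views.
rms, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
    object_points, image_points, image_size, None, None)
print("reprojection RMS error (px):", rms)
print(camera_matrix)
```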

  2. A Full-Envelope Air Data Calibration and Three-Dimensional Wind Estimation Method Using Global Output-Error Optimization and Flight-Test Techniques

    NASA Technical Reports Server (NTRS)

    Taylor, Brian R.

    2012-01-01

    A novel, efficient air data calibration method is proposed for aircraft with limited envelopes. This method uses output-error optimization on three-dimensional inertial velocities to estimate calibration and wind parameters. Calibration parameters are based on assumed calibration models for static pressure, angle of attack, and flank angle. Estimated wind parameters are the north, east, and down components. The only assumptions needed for this method are that the inertial velocities and Euler angles are accurate, the calibration models are correct, and that the steady-state component of wind is constant throughout the maneuver. A two-minute maneuver was designed to excite the aircraft over the range of air data calibration parameters and de-correlate the angle-of-attack bias from the vertical component of wind. Simulation of the X-48B (The Boeing Company, Chicago, Illinois) aircraft was used to validate the method, ultimately using data derived from wind-tunnel testing to simulate the un-calibrated air data measurements. Results from the simulation were accurate and robust to turbulence levels comparable to those observed in flight. Future experiments are planned to evaluate the proposed air data calibration in a flight environment.

  3. LA-ICP-MS of magnetite: Methods and reference materials

    USGS Publications Warehouse

    Nadoll, P.; Koenig, A.E.

    2011-01-01

    Magnetite (Fe3O4) is a common accessory mineral in many geologic settings. Its variable geochemistry makes it a powerful petrogenetic indicator. Electron microprobe (EMPA) analyses are commonly used to examine major and minor element contents in magnetite. Laser ablation ICP-MS (LA-ICP-MS) is applicable to trace element analyses of magnetite but has not been widely employed to examine compositional variations. We tested the applicability of the NIST SRM 610, the USGS GSE-1G, and the NIST SRM 2782 reference materials (RMs) as external standards and developed a reliable method for LA-ICP-MS analysis of magnetite. LA-ICP-MS analyses were carried out on well characterized magnetite samples with a 193 nm, Excimer, ArF LA system. Although matrix-matched RMs are sometimes important for calibration and normalization of LA-ICP-MS data, we demonstrate that glass RMs can produce accurate results for LA-ICP-MS analyses of magnetite. Cross-comparison between the NIST SRM 610 and USGS GSE-1G indicates good agreement for magnetite minor and trace element data calibrated with either of these RMs. Many elements show a sufficiently good match between the LA-ICP-MS and the EMPA data; for example, Ti and V show a close to linear relationship with correlation coefficients (R2) of 0.79 and 0.85, respectively. © 2011 The Royal Society of Chemistry.

  4. A new technique for spectrophotometric determination of pseudoephedrine and guaifenesin in syrup and synthetic mixture.

    PubMed

    Riahi, Siavash; Hadiloo, Farshad; Milani, Seyed Mohammad R; Davarkhah, Nazila; Ganjali, Mohammad R; Norouzi, Parviz; Seyfi, Payam

    2011-05-01

    The predictive accuracy of different chemometric methods was compared when applied to ordinary UV spectra and first-order derivative spectra. Principal component regression (PCR) and partial least squares with one dependent variable (PLS1) and two dependent variables (PLS2) were applied to spectral data of a pharmaceutical formulation containing pseudoephedrine (PDP) and guaifenesin (GFN). The ability of the derivative treatment to resolve the overlapping spectra, including that of chlorpheniramine maleate, was evaluated when multivariate methods are adopted for the analysis of two-component mixtures without any chemical pretreatment. The chemometric models were tested on an external validation dataset and finally applied to the analysis of pharmaceuticals. Significant advantages were found in the analysis of real samples when the calibration models built from derivative spectra were used. It should also be mentioned that the proposed method is simple and rapid, requires no preliminary separation steps, and can be used easily for the analysis of these compounds, especially in quality control laboratories. Copyright © 2011 John Wiley & Sons, Ltd.

  5. Ultra trace determination of 31 pesticides in water samples by direct injection-rapid resolution liquid chromatography-electrospray tandem mass spectrometry.

    PubMed

    Díaz, Laura; Llorca-Pórcel, Julio; Valor, Ignacio

    2008-08-22

    A liquid chromatography-tandem mass spectrometry (LC-MS/MS)-based method for the detection of pesticides in tap and treated wastewater was developed and validated according to the ISO/IEC 17025:1999. Key features of this method include direct injection of 100 microL of sample, an 11 min separation by means of a rapid resolution liquid chromatography system with a 4.6 mm x 50 mm, 1.8 microm particle size reverse phase column and detection by electrospray ionization (ESI) MS-MS. The limits of detection were below 15 ng L(-1) and correlation coefficients for the calibration curves in the range of 30-2000 ng L(-1) were higher than 0.99. Precision was always below 20% and accuracy was confirmed by external evaluation. The main advantages of this method are direct injection of sample without preparative procedures and low limits of detection that fulfill the requirements established by the current European regulations governing pesticide detection.

  6. Simple quantification of phenolic compounds present in the minor fraction of virgin olive oil by LC-DAD-FLD.

    PubMed

    Godoy-Caballero, M P; Acedo-Valenzuela, M I; Galeano-Díaz, T

    2012-11-15

    This paper presents the results of a study on the extraction, identification and quantification of a group of important phenolic compounds in virgin olive oil (VOO) samples, obtained from olives of various varieties, by liquid chromatography coupled to UV-vis and fluorescence detection. Sixteen phenolic compounds belonging to different families have been identified and quantified in a total run time of 25 min. The linearity was examined by establishing the external standard calibration curves. Linear ranges spanning four orders of magnitude and limits of detection ranging from 0.02 to 0.6 μg mL(-1) and from 0.006 to 0.3 μg mL(-1) were achieved using UV-vis and fluorescence detection, respectively. Regarding the real samples, for the determination of the phenolic compounds present in higher concentrations (hydroxytyrosol and tyrosol), a simple liquid-liquid extraction with ethanol was used to make the sample compatible with the mobile phase. Recovery values close to 100% were obtained. However, a previous solid phase extraction with Diol cartridges was necessary to concentrate the minor phenolic compounds and separate them from the main interferences. The parameters affecting this step were carefully optimized and, after that, recoveries near 80-100% were obtained for the rest of the studied phenolic compounds. Also, the limits of detection were improved 15-fold. Finally, the standard addition method was carried out for each of the analytes and no matrix effect was found, so the quantification of the 16 phenolic compounds from different monovarietal VOO was carried out by using the corresponding external standard calibration plot. Copyright © 2012 Elsevier B.V. All rights reserved.
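
    As a minimal illustration of the external standard calibration workflow mentioned above (not the authors' exact procedure), the following Python sketch fits a linear calibration curve, estimates ICH-style limits of detection and quantification from the residual scatter, and converts a sample signal back to concentration. The concentrations and responses are made-up numbers.

```python
import numpy as np

def external_standard_calibration(conc, signal):
    """Fit a linear external-standard calibration curve (signal = slope*conc + intercept)."""
    slope, intercept = np.polyfit(conc, signal, 1)
    residuals = signal - (slope * conc + intercept)
    sigma = residuals.std(ddof=2)          # residual standard deviation
    lod = 3.3 * sigma / slope              # ICH-style limit of detection
    loq = 10.0 * sigma / slope             # limit of quantification
    return slope, intercept, lod, loq

def quantify(signal, slope, intercept):
    """Convert a measured signal back to concentration with the calibration curve."""
    return (signal - intercept) / slope

# Illustrative standards (concentration in ug/mL vs. detector response)
conc = np.array([0.05, 0.1, 0.5, 1.0, 5.0, 10.0])
signal = np.array([1.1, 2.0, 10.3, 20.5, 101.0, 199.0])
slope, intercept, lod, loq = external_standard_calibration(conc, signal)
print(f"slope={slope:.2f}, intercept={intercept:.2f}, LOD={lod:.3f}, LOQ={loq:.3f}")
print("sample concentration:", quantify(55.0, slope, intercept))
```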

  7. A biomarker-based risk score to predict death in patients with atrial fibrillation: the ABC (age, biomarkers, clinical history) death risk score

    PubMed Central

    Hijazi, Ziad; Oldgren, Jonas; Lindbäck, Johan; Alexander, John H; Connolly, Stuart J; Eikelboom, John W; Ezekowitz, Michael D; Held, Claes; Hylek, Elaine M; Lopes, Renato D; Yusuf, Salim; Granger, Christopher B; Siegbahn, Agneta; Wallentin, Lars

    2018-01-01

    Abstract Aims In atrial fibrillation (AF), mortality remains high despite effective anticoagulation. A model predicting the risk of death in these patients is currently not available. We developed and validated a risk score for death in anticoagulated patients with AF including both clinical information and biomarkers. Methods and results The new risk score was developed and internally validated in 14 611 patients with AF randomized to apixaban vs. warfarin for a median of 1.9 years. External validation was performed in 8548 patients with AF randomized to dabigatran vs. warfarin for 2.0 years. Biomarker samples were obtained at study entry. Variables significantly contributing to the prediction of all-cause mortality were assessed by Cox-regression. Each variable obtained a weight proportional to the model coefficients. There were 1047 all-cause deaths in the derivation and 594 in the validation cohort. The most important predictors of death were N-terminal pro B-type natriuretic peptide, troponin-T, growth differentiation factor-15, age, and heart failure, and these were included in the ABC (Age, Biomarkers, Clinical history)-death risk score. The score was well-calibrated and yielded higher c-indices than a model based on all clinical variables in both the derivation (0.74 vs. 0.68) and validation cohorts (0.74 vs. 0.67). The reduction in mortality with apixaban was most pronounced in patients with a high ABC-death score. Conclusion A new biomarker-based score for predicting risk of death in anticoagulated AF patients was developed, internally and externally validated, and well-calibrated in two large cohorts. The ABC-death risk score performed well and may contribute to overall risk assessment in AF. ClinicalTrials.gov identifier NCT00412984 and NCT00262600 PMID:29069359

  8. External Validation of European System for Cardiac Operative Risk Evaluation II (EuroSCORE II) for Risk Prioritization in an Iranian Population

    PubMed Central

    Atashi, Alireza; Amini, Shahram; Tashnizi, Mohammad Abbasi; Moeinipour, Ali Asghar; Aazami, Mathias Hossain; Tohidnezhad, Fariba; Ghasemi, Erfan; Eslami, Saeid

    2018-01-01

    Introduction The European System for Cardiac Operative Risk Evaluation II (EuroSCORE II) is a prediction model which maps 18 predictors to a 30-day post-operative risk of death, concentrating on accurate stratification of candidate patients for cardiac surgery. Objective The objective of this study was to determine the performance of the EuroSCORE II risk-analysis predictions among patients who underwent heart surgeries in one area of Iran. Methods A retrospective cohort study was conducted to collect the required variables for all consecutive patients who underwent heart surgeries at Emam Reza hospital, Northeast Iran between 2014 and 2015. Univariate and multivariate analyses were performed to identify covariates which significantly contribute to higher EuroSCORE II in our population. External validation was performed by comparing the real and expected mortality using the area under the receiver operating characteristic curve (AUC) for discrimination assessment. Also, the Brier score and the Hosmer-Lemeshow goodness-of-fit test were used to show the overall performance and calibration level, respectively. Results Two thousand five hundred and eighty-one patients (59.6% male) were included. The observed mortality rate was 3.3%, but EuroSCORE II had a prediction of 4.7%. Although the overall performance was acceptable (Brier score=0.047), the model showed poor discriminatory power (AUC=0.667; sensitivity=61.90, specificity=66.24) and poor calibration (Hosmer-Lemeshow test, P<0.01). Conclusion Our study showed that the EuroSCORE II discrimination power is less than optimal for outcome prediction and less accurate for resource allocation programs. It highlights the need for recalibration of this risk stratification tool aiming to improve post cardiac surgery outcome predictions in Iran. PMID:29617500
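
    As a hedged sketch of how the validation metrics reported above (discrimination by AUC, overall performance by the Brier score, plus a crude observed/expected ratio) can be computed from predicted risks and observed outcomes, the following Python example uses scikit-learn; the data are hypothetical and do not reproduce the study cohort.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, brier_score_loss

# Hypothetical external-validation data: predicted 30-day mortality risk vs. observed outcome
predicted_risk = np.array([0.02, 0.05, 0.10, 0.03, 0.20, 0.08, 0.01, 0.15])
observed_death = np.array([0,    0,    1,    0,    1,    0,    0,    0])

auc = roc_auc_score(observed_death, predicted_risk)        # discrimination
brier = brier_score_loss(observed_death, predicted_risk)   # overall performance
oe_ratio = observed_death.mean() / predicted_risk.mean()   # crude calibration-in-the-large

print(f"AUC={auc:.3f}  Brier={brier:.3f}  O/E={oe_ratio:.2f}")
```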

  9. Prognostic nomogram and score to predict overall survival in locally advanced untreated pancreatic cancer (PROLAP)

    PubMed Central

    Vernerey, Dewi; Huguet, Florence; Vienot, Angélique; Goldstein, David; Paget-Bailly, Sophie; Van Laethem, Jean-Luc; Glimelius, Bengt; Artru, Pascal; Moore, Malcolm J; André, Thierry; Mineur, Laurent; Chibaudel, Benoist; Benetkiewicz, Magdalena; Louvet, Christophe; Hammel, Pascal; Bonnetain, Franck

    2016-01-01

    Background: The management of locally advanced pancreatic cancer (LAPC) patients remains controversial. Better discrimination for overall survival (OS) at diagnosis is needed. We address this issue by developing and validating a prognostic nomogram and a score for OS in LAPC (PROLAP). Methods: Analyses were derived from 442 LAPC patients enrolled in the LAP07 trial. The prognostic ability of 30 baseline parameters was evaluated using univariate and multivariate Cox regression analyses. Performance assessment and internal validation of the final model were done with Harrell's C-index, calibration plot and bootstrap sample procedures. On the basis of the final model, a prognostic nomogram and a score were developed, and externally validated in 106 consecutive LAPC patients treated in Besançon Hospital, France. Results: Age, pain, tumour size, albumin and CA 19-9 were independent prognostic factors for OS. The final model had good calibration, acceptable discrimination (C-index=0.60) and robust internal validity. The PROLAP score has the potential to delineate three different prognosis groups with median OS of 15.4, 11.7 and 8.5 months (log-rank P<0.0001). The score ability to discriminate OS was externally confirmed in 63 (59%) patients with complete clinical data derived from a data set of 106 consecutive LAPC patients; median OS of 18.3, 14.1 and 7.6 months for the three groups (log-rank P<0.0001). Conclusions: The PROLAP nomogram and score can accurately predict OS before initiation of induction chemotherapy in LAPC-untreated patients. They may help to optimise clinical trials design and might offer the opportunity to define risk-adapted strategies for LAPC management in the future. PMID:27404456

  10. High sensitivity optical measurement of skin gloss

    PubMed Central

    Ezerskaia, Anna; Ras, Arno; Bloemen, Pascal; Pereira, Silvania F.; Urbach, H. Paul; Varghese, Babu

    2017-01-01

    We demonstrate a low-cost optical method for measuring the gloss properties with improved sensitivity in the low gloss regime, relevant for skin gloss properties. The gloss estimation method is based on, on the one hand, the slope of the intensity gradient in the transition regime between specular and diffuse reflection and on the other on the sum over the intensities of pixels above threshold, derived from a camera image obtained using unpolarized white light illumination. We demonstrate the improved sensitivity of the two proposed methods using Monte Carlo simulations and experiments performed on ISO gloss calibration standards with an optical prototype. The performance and linearity of the method was compared with different professional gloss measurement devices based on the ratio of specular to diffuse intensity. We demonstrate the feasibility for in-vivo skin gloss measurements by quantifying the temporal evolution of skin gloss after application of standard paraffin cream bases on skin. The presented method opens new possibilities in the fields of cosmetology and dermatopharmacology for measuring the skin gloss and resorption kinetics and the pharmacodynamics of various external agents. PMID:29026683

  11. Determination of arsenic in traditional Chinese medicine by microwave digestion with flow injection-inductively coupled plasma mass spectrometry (FI-ICP-MS).

    PubMed

    Ong, E S; Yong, Y L; Woo, S O

    1999-01-01

    A simple, rapid, and sensitive method with high sample throughput was developed for determining arsenic in traditional Chinese medicine (TCM) in the form of uncoated tablets, sugar-coated tablets, black pills, capsules, powders, and syrups. The method involves microwave digestion with flow injection-inductively coupled plasma mass spectrometry (FI-ICP-MS). Method precision was 2.7-10.1% (relative standard deviation, n = 6) for different concentrations of arsenic in different TCM samples analyzed by different analysts on different days. Method accuracy was checked with a certified reference material (sea lettuce, Ulva lactuca, BCR CRM 279) for external calibration and by spiking arsenic standard into different TCMs. Recoveries of 89-92% were obtained for the certified reference material and higher than 95% for spiked TCMs. Matrix interference was insignificant for samples analyzed by the method of standard addition. Hence, no correction equation was used in the analysis of arsenic in the samples studied. Sample preparation using microwave digestion gave results that were very similar to those obtained by conventional wet acid digestion using nitric acid.

  12. High sensitivity optical measurement of skin gloss.

    PubMed

    Ezerskaia, Anna; Ras, Arno; Bloemen, Pascal; Pereira, Silvania F; Urbach, H Paul; Varghese, Babu

    2017-09-01

    We demonstrate a low-cost optical method for measuring the gloss properties with improved sensitivity in the low gloss regime, relevant for skin gloss properties. The gloss estimation method is based on, on the one hand, the slope of the intensity gradient in the transition regime between specular and diffuse reflection and on the other on the sum over the intensities of pixels above threshold, derived from a camera image obtained using unpolarized white light illumination. We demonstrate the improved sensitivity of the two proposed methods using Monte Carlo simulations and experiments performed on ISO gloss calibration standards with an optical prototype. The performance and linearity of the method was compared with different professional gloss measurement devices based on the ratio of specular to diffuse intensity. We demonstrate the feasibility for in-vivo skin gloss measurements by quantifying the temporal evolution of skin gloss after application of standard paraffin cream bases on skin. The presented method opens new possibilities in the fields of cosmetology and dermatopharmacology for measuring the skin gloss and resorption kinetics and the pharmacodynamics of various external agents.

  13. An Investigation of the Relation Between Contact Thermometry and Dew-Point Temperature Realization

    NASA Astrophysics Data System (ADS)

    Benyon, R.; Böse, N.; Mitter, H.; Mutter, D.; Vicente, T.

    2012-09-01

    Precision optical dew-point hygrometers are the most commonly used transfer standards for the comparison of dew-point temperature realizations at National Metrology Institutes (NMIs) and for disseminating traceability to calibration laboratories. These instruments have been shown to be highly reproducible when properly used. In order to obtain the best performance, the resistance of the platinum resistance thermometer (PRT) embedded in the mirror is usually measured with an external, traceable resistance bridge or digital multimeter. The relation between the conventional calibration of miniature PRTs, prior to their assembly in the mirrors of state-of-the-art optical dew-point hygrometers and their subsequent calibration as dew-point temperature measurement devices, has been investigated. Standard humidity generators of three NMIs were used to calibrate hygrometers of different designs, covering the dew-point temperature range from -75 °C to + 95 °C. The results span more than a decade, during which time successive improvements and modifications were implemented by the manufacturer. The findings are presented and discussed in the context of enabling the optimum use of these transfer standards and as a basis for determining contributions to the uncertainty in their calibration.

  14. Experimental and analytical study of cryogenic propellant boiloff to develop and verify alternate pressurization concepts for Space Shuttle external tank using a scaled down tank

    NASA Technical Reports Server (NTRS)

    Akyuzlu, K. M.; Jones, S.; Meredith, T.

    1993-01-01

    Self pressurization by propellant boiloff is experimentally studied as an alternate pressurization concept for the Space Shuttle external tank (ET). The experimental setup used in the study is an open flow system which is composed of a variable area test tank and a recovery tank. The vacuum jacketed test tank is geometrically similar to the external LOx tank for the Space Shuttle. It is equipped with instrumentation to measure the temperature and pressure histories within the liquid and vapor, and viewports to accommodate visual observations and Laser-Doppler Anemometry measurements of fluid velocities. A set of experiments were conducted using liquid Nitrogen to determine the temperature stratification in the liquid and vapor, and pressure histories of the vapor during sudden and continuous depressurization for various different boundary and initial conditions. The study also includes the development and calibration of a computer model to simulate the experiments. This model is a one-dimensional, multi-node type which assumes the liquid and the vapor to be under non-equilibrium conditions during the depressurization. It has been tested for a limited number of cases. The preliminary results indicate that the accuracy of the simulations is determined by the accuracy of the heat transfer coefficients for the vapor and the liquid at the interface which are taken to be the calibration parameters in the present model.

  15. External Validation of a Case-Mix Adjustment Model for the Standardized Reporting of 30-Day Stroke Mortality Rates in China.

    PubMed

    Yu, Ping; Pan, Yuesong; Wang, Yongjun; Wang, Xianwei; Liu, Liping; Ji, Ruijun; Meng, Xia; Jing, Jing; Tong, Xu; Guo, Li; Wang, Yilong

    2016-01-01

    A case-mix adjustment model has been developed and externally validated, demonstrating promise. However, the model has not been thoroughly tested among populations in China. In our study, we evaluated the performance of the model in Chinese patients with acute stroke. The case-mix adjustment model A includes items on age, presence of atrial fibrillation on admission, National Institutes of Health Stroke Severity Scale (NIHSS) score on admission, and stroke type. Model B is similar to Model A but includes only the consciousness component of the NIHSS score. Both model A and B were evaluated to predict 30-day mortality rates in 13,948 patients with acute stroke from the China National Stroke Registry. The discrimination of the models was quantified by c-statistic. Calibration was assessed using Pearson's correlation coefficient. The c-statistic of model A in our external validation cohort was 0.80 (95% confidence interval, 0.79-0.82), and the c-statistic of model B was 0.82 (95% confidence interval, 0.81-0.84). Excellent calibration was reported in the two models with Pearson's correlation coefficient (0.892 for model A, p<0.001; 0.927 for model B, p = 0.008). The case-mix adjustment model could be used to effectively predict 30-day mortality rates in Chinese patients with acute stroke.

  16. A fast calibration method for 3-D tracking of ultrasound images using a spatial localizer.

    PubMed

    Pagoulatos, N; Haynor, D R; Kim, Y

    2001-09-01

    We have developed a fast calibration method for computing the position and orientation of 2-D ultrasound (US) images in 3-D space where a position sensor is mounted on the US probe. This calibration is required in the fields of 3-D ultrasound and registration of ultrasound with other imaging modalities. Most of the existing calibration methods require a complex and tedious experimental procedure. Our method is simple and it is based on a custom-built phantom. Thirty N-fiducials (markers in the shape of the letter "N") embedded in the phantom provide the basis for our calibration procedure. We calibrated a 3.5-MHz sector phased-array probe with a magnetic position sensor, and we studied the accuracy and precision of our method. A typical calibration procedure requires approximately 2 min. We conclude that we can achieve accurate and precise calibration using a single US image, provided that a large number (approximately ten) of N-fiducials are captured within the US image, enabling a representative sampling of the imaging plane.

  17. Self-calibration method for rotating laser positioning system using interscanning technology and ultrasonic ranging.

    PubMed

    Wu, Jun; Yu, Zhijing; Zhuge, Jingchang

    2016-04-01

    A rotating laser positioning system (RLPS) is an efficient measurement method for large-scale metrology. Because the multiple transmitter stations constitute a measurement network, the positional relationship of these stations must first be calibrated. However, because they require auxiliary devices such as a laser tracker and a scale bar, as well as a complex calibration process, the traditional calibration methods greatly reduce the measurement efficiency. This paper proposes a self-calibration method for RLPS, which can automatically obtain the position relationship. The method is implemented through interscanning technology by using a calibration bar mounted on the transmitter station. Each bar is composed of three RLPS receivers and one ultrasonic sensor whose coordinates are known in advance. The calibration algorithm is mainly based on multiplane and distance constraints and is introduced in detail through a two-station mathematical model. Repeated experiments demonstrate that the coordinate measurement uncertainty of spatial points by using this method is about 0.1 mm, and the accuracy experiments show that the average coordinate measurement deviation is about 0.3 mm compared with a laser tracker. The accuracy can meet the requirements of most applications, while the calibration efficiency is significantly improved.

  18. Coupling HYDRUS-1D Code with PA-DDS Algorithms for Inverse Calibration

    NASA Astrophysics Data System (ADS)

    Wang, Xiang; Asadzadeh, Masoud; Holländer, Hartmut

    2017-04-01

    Numerical modelling requires calibration to predict future stages. A standard method for calibration is inverse calibration, where generally multi-objective optimization algorithms are used to find a solution, e.g. to find an optimal set of van Genuchten Mualem (VGM) parameters to predict water fluxes in the vadose zone. We coupled HYDRUS-1D with PA-DDS to add a new, robust function for inverse calibration to the model. The PA-DDS method is a recently developed multi-objective optimization algorithm, which combines Dynamically Dimensioned Search (DDS) and Pareto Archived Evolution Strategy (PAES). The results were compared to a standard method (Marquardt-Levenberg method) implemented in HYDRUS-1D. Calibration performance is evaluated using observed and simulated soil moisture at two soil layers in Southern Abbotsford, British Columbia, Canada, in terms of the root mean squared error (RMSE) and the Nash-Sutcliffe Efficiency (NSE). Results showed low RMSE values of 0.014 and 0.017 and strong NSE values of 0.961 and 0.939. Compared to the results of the Marquardt-Levenberg method, we obtained better calibration results for the deeper soil sensors. However, the VGM parameters were similar to those reported in previous studies. Both methods are equally computationally efficient. We claim that a direct implementation of PA-DDS into HYDRUS-1D should reduce the computation effort further. Thus, the PA-DDS method is efficient for calibrating recharge in complex vadose zone modelling with multiple soil layers and can be a potential tool for the calibration of heat and solute transport. Future work should focus on the effectiveness of PA-DDS for calibrating more complex versions of the model with complex vadose zone settings, with more soil layers, and against measured heat and solute transport. Keywords: Recharge, Calibration, HYDRUS-1D, Multi-objective Optimization
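
    A minimal sketch of the two objective functions used above to score the calibration (RMSE and NSE), in Python with illustrative soil-moisture values; it is not part of the HYDRUS-1D or PA-DDS code.

```python
import numpy as np

def rmse(observed, simulated):
    """Root mean squared error between observed and simulated soil moisture."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return np.sqrt(np.mean((observed - simulated) ** 2))

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, values below 0 are worse than the mean."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Illustrative values only
obs = [0.21, 0.24, 0.28, 0.26, 0.22]
sim = [0.22, 0.25, 0.27, 0.27, 0.21]
print(f"RMSE = {rmse(obs, sim):.3f}, NSE = {nse(obs, sim):.3f}")
```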

  19. Volumetric calibration of a plenoptic camera.

    PubMed

    Hall, Elise Munz; Fahringer, Timothy W; Guildenbecher, Daniel R; Thurow, Brian S

    2018-02-01

    The volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods are examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.
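
    As a rough illustration of the idea of a volumetric polynomial mapping (not the authors' implementation), the following Python sketch fits a generic third-order polynomial from measured to true dot-card coordinates by least squares and reports the residual after correction. The synthetic distortion and point sets are hypothetical.

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_terms(xyz, order=3):
    """Build all monomials of (x, y, z) up to the given order, including the constant term."""
    x, y, z = xyz.T
    cols = [np.ones_like(x)]
    for d in range(1, order + 1):
        for combo in combinations_with_replacement((x, y, z), d):
            cols.append(np.prod(combo, axis=0))
    return np.column_stack(cols)

def fit_volumetric_mapping(measured_xyz, true_xyz, order=3):
    """Least-squares fit of a polynomial mapping measured -> true coordinates."""
    A = poly_terms(measured_xyz, order)
    coeffs, *_ = np.linalg.lstsq(A, true_xyz, rcond=None)
    return coeffs

def apply_mapping(coeffs, measured_xyz, order=3):
    return poly_terms(measured_xyz, order) @ coeffs

# Hypothetical dot-card points: measured (thin-lens reconstruction) vs. known target positions
measured = np.random.rand(200, 3) * 100.0
true = measured + 0.01 * measured**2 / 100.0        # synthetic distortion for illustration
coeffs = fit_volumetric_mapping(measured, true)
resid = apply_mapping(coeffs, measured) - true
print("RMS residual (same units as input):", np.sqrt((resid**2).mean()))
```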

  20. Novel crystal timing calibration method based on total variation

    NASA Astrophysics Data System (ADS)

    Yu, Xingjian; Isobe, Takashi; Watanabe, Mitsuo; Liu, Huafeng

    2016-11-01

    A novel crystal timing calibration method based on total variation (TV), abbreviated as ‘TV merge’, has been developed for a high-resolution positron emission tomography (PET) system. The proposed method was developed for a system with a large number of crystals and can provide timing calibration at the crystal level. In the proposed method, the timing calibration process was formulated as a linear problem. To robustly optimize the timing resolution, a TV constraint was added to the linear equation. Moreover, to solve the computer memory problem associated with the calculation of the timing calibration factors for systems with a large number of crystals, the merge component was used for obtaining the crystal-level timing calibration values. In contrast to other conventional methods, data measured from a standard cylindrical phantom filled with a radioisotope solution were sufficient for performing a high-precision crystal-level timing calibration. In this paper, both simulation and experimental studies were performed to demonstrate the effectiveness and robustness of the TV merge method. We compare the timing resolutions of a 22Na point source, which was located in the field of view (FOV) of the brain PET system, with various calibration techniques. After implementing the TV merge method, the timing resolution improved from 3.34 ns at full width at half maximum (FWHM) to 2.31 ns FWHM.
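
    A hedged sketch of the general TV-regularized formulation described above: per-crystal timing offsets are estimated from measured pairwise time differences by minimizing a least-squares data term plus a smoothed total-variation penalty. The pair list, the measurements, and the smoothing of the TV term are illustrative choices, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def solve_timing_offsets(A, d, lam=1.0, eps=1e-6):
    """Estimate crystal timing offsets t from measured coincidence time differences d,
    where each row of A selects a crystal pair (+1/-1), with a smoothed TV penalty.
    The offsets are only determined up to a common constant."""
    n = A.shape[1]

    def objective(t):
        data_term = np.sum((A @ t - d) ** 2)
        diff = np.diff(t)
        tv_term = np.sum(np.sqrt(diff ** 2 + eps))   # smooth surrogate for sum(|t_{k+1} - t_k|)
        return data_term + lam * tv_term

    res = minimize(objective, np.zeros(n), method="L-BFGS-B")
    return res.x

# Toy example: 5 crystals, measured pairwise offsets (hypothetical numbers)
pairs = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)]
true_t = np.array([0.0, 0.3, 0.3, 0.9, 1.0])
A = np.zeros((len(pairs), 5))
for row, (i, j) in enumerate(pairs):
    A[row, i], A[row, j] = 1.0, -1.0
d = A @ true_t + 0.05 * np.random.randn(len(pairs))
print(solve_timing_offsets(A, d, lam=0.1))
```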

  1. BESTEST-EX | Buildings | NREL

    Science.gov Websites

    BESTEST-EX is a method for testing home energy audit software and associated calibration methods. When completed, the ANSI/RESNET SMOT will specify test procedures for evaluating calibration methods used in conjunction with predicting building energy use.

  2. High-accuracy self-calibration method for dual-axis rotation-modulating RLG-INS

    NASA Astrophysics Data System (ADS)

    Wei, Guo; Gao, Chunfeng; Wang, Qi; Wang, Qun; Long, Xingwu

    2017-05-01

    The inertial navigation system has been the core component of both military and civil navigation systems. Dual-axis rotation modulation can completely eliminate the constant errors of the inertial elements on all three axes, improving the system accuracy. However, the errors caused by the misalignment angles and the scale factor error cannot be eliminated through dual-axis rotation modulation, and the discrete calibration method cannot fulfill the requirements of high-accuracy calibration of a mechanically dithered ring laser gyroscope navigation system with shock absorbers. This paper analyzes the effect of calibration error during one modulation period and presents a new systematic self-calibration method for a dual-axis rotation-modulating RLG-INS. A procedure for self-calibration of the dual-axis rotation-modulating RLG-INS has been designed. The results of a self-calibration simulation experiment show that this scheme can estimate all the errors in the calibration error model; the calibration precision of the inertial sensor scale factor error is less than 1 ppm and the misalignment is less than 5″. These results validate the systematic self-calibration method and demonstrate its importance for the accuracy improvement of dual-axis rotation inertial navigation systems with mechanically dithered ring laser gyroscopes.

  3. Uncertainty propagation in the calibration equations for NTC thermistors

    NASA Astrophysics Data System (ADS)

    Liu, Guang; Guo, Liang; Liu, Chunlong; Wu, Qingwen

    2018-06-01

    The uncertainty propagation problem is quite important for temperature measurements, since we rely so much on the sensors and calibration equations. Although uncertainty propagation for platinum resistance or radiation thermometers is well known, there have been few publications concerning negative temperature coefficient (NTC) thermistors. Insight into the propagation characteristics of uncertainty that develop when equations are determined using the Lagrange interpolation or least-squares fitting method is presented here with respect to several of the most common equations used in NTC thermistor calibration. Within this work, analytical expressions of the propagated uncertainties for both fitting methods are derived for the uncertainties in the measured temperature and resistance at each calibration point. High-precision calibration of an NTC thermistor in a precision water bath was performed by means of the comparison method. Results show that, for both fitting methods, the propagated uncertainty is flat in the interpolation region but rises rapidly beyond the calibration range. Also, for temperatures interpolated between calibration points, the propagated uncertainty is generally no greater than that associated with the calibration points. For least-squares fitting, the propagated uncertainty is significantly reduced by increasing the number of calibration points and can be well kept below the uncertainty of the calibration points.
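
    As an illustration of the propagation idea (not the paper's analytical derivation), the following Python sketch fits the common Steinhart-Hart form 1/T = a + b·ln R + c·(ln R)³ to calibration points by weighted least squares and propagates the parameter covariance to an interpolated temperature; only the temperature uncertainty of the calibration points is considered, and the numbers are hypothetical.

```python
import numpy as np

def fit_steinhart_hart(T_cal, R_cal, u_T):
    """Weighted least-squares fit of 1/T = a + b*ln(R) + c*ln(R)^3 at the calibration points,
    returning the coefficients and their covariance (driven by the temperature uncertainty u_T)."""
    lnR = np.log(R_cal)
    X = np.column_stack([np.ones_like(lnR), lnR, lnR ** 3])
    y = 1.0 / T_cal
    u_y = u_T / T_cal ** 2                      # uncertainty of 1/T implied by u(T)
    W = np.diag(1.0 / u_y ** 2)
    cov = np.linalg.inv(X.T @ W @ X)            # parameter covariance
    coeffs = cov @ X.T @ W @ y
    return coeffs, cov

def temperature_with_uncertainty(R, coeffs, cov):
    """Interpolated temperature and its propagated standard uncertainty at resistance R."""
    lnR = np.log(R)
    x = np.array([1.0, lnR, lnR ** 3])
    inv_T = x @ coeffs
    u_inv_T = np.sqrt(x @ cov @ x)              # propagate the parameter covariance
    T = 1.0 / inv_T
    return T, T ** 2 * u_inv_T                  # |dT/d(1/T)| = T^2

# Hypothetical calibration points (K, ohm) with 5 mK standard uncertainty
T_cal = np.array([273.15, 283.15, 293.15, 303.15, 313.15])
R_cal = np.array([32650.0, 19900.0, 12490.0, 8056.0, 5326.0])
coeffs, cov = fit_steinhart_hart(T_cal, R_cal, u_T=0.005)
print(temperature_with_uncertainty(10000.0, coeffs, cov))
```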

  4. Rear shape in 3 dimensions summarized by principal component analysis is a good predictor of body condition score in Holstein dairy cows.

    PubMed

    Fischer, A; Luginbühl, T; Delattre, L; Delouard, J M; Faverdin, P

    2015-07-01

    Body condition is an indirect estimation of the level of body reserves, and its variation reflects cumulative variation in energy balance. It interacts with reproductive and health performance, which are important to consider in dairy production but not easy to monitor. The commonly used body condition score (BCS) is time consuming, subjective, and not very sensitive. The aim was therefore to develop and validate a method assessing BCS with 3-dimensional (3D) surfaces of the cow's rear. A camera captured 3D shapes 2 m from the floor in a weigh station at the milking parlor exit. The BCS was scored by 3 experts on the same day as 3D imaging. Four anatomical landmarks had to be identified manually on each 3D surface to define a space centered on the cow's rear. A set of 57 3D surfaces from 56 Holstein dairy cows was selected to cover a large BCS range (from 0.5 to 4.75 on a 0 to 5 scale) to calibrate 3D surfaces on BCS. After performing a principal component analysis on this data set, multiple linear regression was fitted on the coordinates of these surfaces in the principal components' space to assess BCS. The validation was performed on 2 external data sets: one with cows used for calibration, but at a different lactation stage, and one with cows not used for calibration. Additionally, 6 cows were scanned once and their surfaces processed 8 times each for repeatability and then these cows were scanned 8 times each the same day for reproducibility. The selected model showed perfect calibration and a good but weaker validation (root mean square error=0.31 for the data set with cows used for calibration; 0.32 for the data set with cows not used for calibration). Assessing BCS with 3D surfaces was 3 times more repeatable (standard error=0.075 versus 0.210 for BCS) and 2.8 times more reproducible than manually scored BCS (standard error=0.103 versus 0.280 for BCS). The prediction error was similar for both validation data sets, indicating that the method is not less efficient for cows not used for calibration. The major part of reproducibility error incorporates repeatability error. An automation of the anatomical landmarks identification is required, first to allow broadband measures of body condition and second to improve repeatability and consequently reproducibility. Assessing BCS using 3D imaging coupled with principal component analysis appears to be a very promising means of improving precision and feasibility of this trait measurement. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
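
    A minimal sketch of the analysis pipeline described above (PCA on the aligned 3D surfaces followed by multiple linear regression of BCS on the principal-component scores), using scikit-learn on randomly generated stand-in data; the surface preprocessing, landmark alignment, and real measurements are not reproduced.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# Hypothetical data: each row is a flattened, landmark-aligned 3D rear surface; bcs is the expert score
rng = np.random.default_rng(0)
surfaces = rng.normal(size=(57, 3000))          # 57 calibration cows, 1000 surface points x (x, y, z)
bcs = rng.uniform(0.5, 4.75, size=57)

pca = PCA(n_components=10).fit(surfaces)        # compress surfaces to a few principal components
scores = pca.transform(surfaces)
model = LinearRegression().fit(scores, bcs)     # multiple linear regression on the PC scores

# Validation on an external set of cows (hypothetical)
val_surfaces = rng.normal(size=(20, 3000))
val_bcs = rng.uniform(0.5, 4.75, size=20)
pred = model.predict(pca.transform(val_surfaces))
print("validation RMSE:", mean_squared_error(val_bcs, pred) ** 0.5)
```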

  5. Quality Management and Calibration

    NASA Astrophysics Data System (ADS)

    Merkus, Henk G.

    Good specification of a product’s performance requires adequate characterization of relevant properties. Particulate products are usually characterized by some PSD, shape or porosity parameter(s). For proper characterization, adequate sampling, dispersion, and measurement procedures should be available or developed, and skilful personnel should use appropriate, well-calibrated/qualified equipment. The characterization should be executed, in agreement with customers, in a well-organized laboratory. All related aspects should be laid down in a quality handbook. The laboratory should provide proof of its capability to perform the characterization of stated products and/or reference materials within stated confidence limits. This can be done either by internal validation and audits or by external GLP accreditation.

  6. Frequency control of tunable lasers using a frequency-calibrated λ-meter in an experiment on preparation of Rydberg atoms in a magneto-optical trap

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saakyan, S A; Vilshanskaya, E V; Zelener, B B

    2015-09-30

    A new technique is proposed and applied to study the frequency drift of an external-cavity semiconductor laser, locked to the transmission resonances of a thermally stabilised Fabry–Perot interferometer. The interferometer frequency drift is measured to be less than 2 MHz per hour. The laser frequency is measured using an Angstrom wavemeter, calibrated using an additional stabilised laser. It is shown that this system of laser frequency control can be used to identify Rydberg transitions in ultracold 7Li atoms.

  7. A New Online Calibration Method Based on Lord's Bias-Correction.

    PubMed

    He, Yinhong; Chen, Ping; Li, Yong; Zhang, Shumei

    2017-09-01

    The online calibration technique has been widely employed to calibrate new items due to its advantages. Method A is the simplest online calibration method and has attracted much attention from researchers recently. However, a key assumption of Method A is that it treats the person-parameter estimates θ̂ (obtained by maximum likelihood estimation [MLE]) as their true values θ; thus the deviation of the estimated θ̂ from the true values might yield inaccurate item calibration when the deviation is nonignorable. To improve the performance of Method A, a new method, MLE-LBCI-Method A, is proposed. This new method combines a modified Lord's bias-correction method (named maximum likelihood estimation-Lord's bias-correction with iteration [MLE-LBCI]) with the original Method A in an effort to correct the deviation of θ̂, which may adversely affect the item calibration precision. Two simulation studies were carried out to explore the performance of both MLE-LBCI and MLE-LBCI-Method A under several scenarios. Simulation results showed that MLE-LBCI could make a significant improvement over the ML ability estimates, and that MLE-LBCI-Method A outperformed Method A in almost all experimental conditions.

  8. Modeling and experimental characterization of a new piezoelectric sensor for low-amplitude vibration measurement

    NASA Astrophysics Data System (ADS)

    Hou, X. Y.; Koh, C. G.; Kuang, K. S. C.; Lee, W. H.

    2017-07-01

    This paper investigates the capability of a novel piezoelectric sensor for low-frequency and low-amplitude vibration measurement. The proposed design effectively amplifies the input acceleration via two amplifying mechanisms and thus eliminates the use of the external charge amplifier or conditioning amplifier typically employed for measurement system. The sensor is also self-powered, i.e. no external power unit is required. Consequently, wiring and electrical insulation for on-site measurement are considerably simpler. In addition, the design also greatly reduces the interference from rotational motion which often accompanies the translational acceleration to be measured. An analytical model is developed based on a set of piezoelectric constitutive equations and beam theory. Closed-form expression is derived to correlate sensor geometry and material properties with its dynamic performance. Experimental calibration is then carried out to validate the analytical model. After calibration, experiments are carried out to check the feasibility of the new sensor in structural vibration detection. From experimental results, it is concluded that the proposed sensor is suitable for measuring low-frequency and low-amplitude vibrations.

  9. Calibration procedure of Hukseflux SR25 to Establish the Diffuse Reference for the Outdoor Broadband Radiometer Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reda, Ibrahim M.; Andreas, Afshin M.

    2017-08-01

    Accurate pyranometer calibrations, traceable to internationally recognized standards, are critical for solar irradiance measurements. One calibration method is the component summation method, where the pyranometers are calibrated outdoors under clear sky conditions, and the reference global solar irradiance is calculated as the sum of two reference components, the diffuse horizontal and subtended beam solar irradiances. The beam component is measured with pyrheliometers traceable to the World Radiometric Reference, while there is no internationally recognized reference for the diffuse component. In the absence of such a reference, we present a method to consistently calibrate pyranometers for measuring the diffuse component. The method is based on using a modified shade/unshade method and a pyranometer with less than 0.5 W/m2 thermal offset. The calibration result shows that the responsivity of the Hukseflux SR25 pyranometer equals 10.98 µV/(W/m2) with ±0.86% uncertainty.

  10. Radiation calibration for LWIR Hyperspectral Imager Spectrometer

    NASA Astrophysics Data System (ADS)

    Yang, Zhixiong; Yu, Chunchao; Zheng, Wei-jian; Lei, Zhenggang; Yan, Min; Yuan, Xiaochun; Zhang, Peizhong

    2014-11-01

    The radiometric calibration of a LWIR hyperspectral imaging spectrometer is presented. A LWIR interferometric hyperspectral imaging spectrometer prototype (CHIPED-I) was developed to study laboratory radiometric calibration, and a two-point linear calibration is carried out for the spectrometer using blackbody references. First, the measured relative intensity is converted to the absolute radiance of the object. Then, the radiance of the object is converted to a brightness temperature spectrum by the brightness temperature method. The results indicate that this radiometric calibration method performs well.
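
    As a hedged illustration of two-point linear radiometric calibration (per-pixel gain and offset from a cold and a hot blackbody view, then conversion of scene counts to radiance), the following Python sketch uses Planck radiance at a single wavelength and made-up counts; it is not the CHIPED-I processing chain.

```python
import numpy as np

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(wavelength_m, T):
    """Spectral radiance of a blackbody [W m^-2 sr^-1 m^-1]."""
    return (2 * H * C**2 / wavelength_m**5) / (np.exp(H * C / (wavelength_m * KB * T)) - 1.0)

def two_point_calibration(counts_cold, counts_hot, L_cold, L_hot):
    """Per-pixel gain and offset from cold/hot blackbody views."""
    gain = (L_hot - L_cold) / (counts_hot - counts_cold)
    offset = L_cold - gain * counts_cold
    return gain, offset

# Illustrative LWIR example at 10 um with 290 K and 320 K blackbodies
wl = 10e-6
L_cold, L_hot = planck_radiance(wl, 290.0), planck_radiance(wl, 320.0)
counts_cold = np.array([2100.0, 2150.0, 2080.0])    # hypothetical raw counts for three pixels
counts_hot = np.array([3400.0, 3490.0, 3350.0])
gain, offset = two_point_calibration(counts_cold, counts_hot, L_cold, L_hot)

scene_counts = np.array([2700.0, 2800.0, 2650.0])
scene_radiance = gain * scene_counts + offset       # counts -> calibrated radiance
print(scene_radiance)
```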

  11. A proposed standard method for polarimetric calibration and calibration verification

    NASA Astrophysics Data System (ADS)

    Persons, Christopher M.; Jones, Michael W.; Farlow, Craig A.; Morell, L. Denise; Gulley, Michael G.; Spradley, Kevin D.

    2007-09-01

    Accurate calibration of polarimetric sensors is critical to reducing and analyzing phenomenology data, producing uniform polarimetric imagery for deployable sensors, and ensuring predictable performance of polarimetric algorithms. It is desirable to develop a standard calibration method, including verification reporting, in order to increase credibility with customers and foster communication and understanding within the polarimetric community. This paper seeks to facilitate discussions within the community on arriving at such standards. Both the calibration and verification methods presented here are performed easily with common polarimetric equipment, and are applicable to visible and infrared systems with either partial Stokes or full Stokes sensitivity. The calibration procedure has been used on infrared and visible polarimetric imagers over a six year period, and resulting imagery has been presented previously at conferences and workshops. The proposed calibration method involves the familiar calculation of the polarimetric data reduction matrix by measuring the polarimeter's response to a set of input Stokes vectors. With this method, however, linear combinations of Stokes vectors are used to generate highly accurate input states. This allows the direct measurement of all system effects, in contrast with fitting modeled calibration parameters to measured data. This direct measurement of the data reduction matrix allows higher order effects that are difficult to model to be discovered and corrected for in calibration. This paper begins with a detailed tutorial on the proposed calibration and verification reporting methods. Example results are then presented for a LWIR rotating half-wave retarder polarimeter.
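
    A minimal sketch of the generic polarimetric calibration step described above: record the instrument's channel intensities for a set of known input Stokes vectors, fit the measurement matrix by least squares, and take its pseudo-inverse as the data reduction matrix. The input states, channel count, and noise level are illustrative and not taken from the paper.

```python
import numpy as np

def fit_measurement_matrix(S_in, I_meas):
    """Least-squares fit of the system matrix A in I = A @ S from known input Stokes vectors.
    S_in: (n_states, 4) known inputs; I_meas: (n_states, n_channels) recorded intensities."""
    A, *_ = np.linalg.lstsq(S_in, I_meas, rcond=None)   # solves S_in @ A = I_meas
    return A.T                                          # shape (n_channels, 4)

def data_reduction_matrix(A):
    """Pseudo-inverse of the measurement matrix: maps raw channel intensities to Stokes vectors."""
    return np.linalg.pinv(A)

# Hypothetical calibration: 6 known input states, 4 analyzer channels
S_in = np.array([[1, 1, 0, 0], [1, -1, 0, 0], [1, 0, 1, 0],
                 [1, 0, -1, 0], [1, 0, 0, 1], [1, 0, 0, -1]], dtype=float)
A_true = 0.5 * np.array([[1, 1, 0, 0], [1, -1, 0, 0], [1, 0, 1, 0], [1, 0, 0, 1]], dtype=float)
I_meas = S_in @ A_true.T + 1e-3 * np.random.randn(6, 4)

A = fit_measurement_matrix(S_in, I_meas)
W = data_reduction_matrix(A)
print("Recovered Stokes vector:", W @ (A_true @ np.array([1.0, 0.3, -0.2, 0.1])))
```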

  12. Determination of the content of fatty acid methyl esters (FAME) in biodiesel samples obtained by esterification using 1H-NMR spectroscopy.

    PubMed

    Mello, Vinicius M; Oliveira, Flavia C C; Fraga, William G; do Nascimento, Claudia J; Suarez, Paulo A Z

    2008-11-01

    Three different calibration curves based on (1)H-NMR spectroscopy (300 MHz) were used for quantifying the reaction yield during biodiesel synthesis by esterification of fatty acid mixtures with methanol. For this purpose, the integrated intensities of the hydrogens of the ester methoxy group (3.67 ppm) were correlated with the areas related to the various protons of the alkyl chain (olefinic hydrogens: 5.30-5.46 ppm; aliphatic: 2.67-2.78 ppm, 2.30 ppm, 1.96-2.12 ppm, 1.56-1.68 ppm, 1.22-1.42 ppm, 0.98 ppm, and 0.84-0.92 ppm). The first curve was obtained using the peaks relating to the olefinic hydrogens, the second with the paraffinic protons, and the third using the integrated intensities of all the hydrogens. A total of 35 samples were examined: 25 samples to build the three different calibration curves and ten samples to serve as external validation samples. The results showed no statistical differences among the three methods, and all presented prediction errors less than 2.45% with a coefficient of variation (CV) of 4.66%. © 2008 John Wiley & Sons, Ltd.

  13. Virtual IED sensor at an rf-biased electrode in low-pressure plasma

    NASA Astrophysics Data System (ADS)

    Bogdanova, Maria; Lopaev, Dmitry; Zyryanov, Sergey; Rakhimov, Alexander

    2016-09-01

    The majority of present-day technologies resort to ion-assisted processes in rf low-pressure plasma. In order to control the process precisely, the energy distribution of ions (IED) bombarding the sample placed on the rf-biased electrode should be tracked. In this work the "Virtual IED sensor" concept is considered. The idea is to obtain the IED "virtually" from the plasma sheath model including a set of externally measurable discharge parameters. The applicability of the "Virtual IED sensor" concept was studied for dual-frequency asymmetric ICP and CCP discharges. The IED measurements were carried out in Ar and H2 plasmas in a wide range of conditions. The calculated IEDs were compared to those measured by the Retarded Field Energy Analyzer. To calibrate the "Virtual IED sensor", the ion flux was measured by the pulsed self-bias method and then compared to plasma density measurements by Langmuir and hairpin probes. It is shown that if there is a reliable calibration procedure, the "Virtual IED sensor" can be successfully realized on the basis of analytical and semianalytical plasma sheath models including measurable discharge parameters. This research is supported by Russian Science Foundation (RSF) Grant 14-12-01012.

  14. The Impact of Indoor and Outdoor Radiometer Calibration on Solar Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, Aron; Sengupta, Manajit; Andreas, Afshin

    2016-06-02

    This study addresses the effect of calibration methodologies on calibration responsivities and the resulting impact on radiometric measurements. The calibration responsivities used in this study are provided by NREL's broadband outdoor radiometer calibration (BORCAL) and a few prominent manufacturers. The BORCAL method provides outdoor calibration responsivity of pyranometers and pyrheliometers at a 45 degree solar zenith angle and responsivity as a function of solar zenith angle determined by clear-sky comparisons to reference irradiance. The BORCAL method also employs a thermal offset correction to the calibration responsivity of single-black thermopile detectors used in pyranometers. Indoor calibrations of radiometers by their manufacturers are performed using a stable artificial light source in a side-by-side comparison of the test radiometer under calibration to a reference radiometer of the same type. These different methods of calibration demonstrated 1% to 2% differences in solar irradiance measurement. Analyzing these values will ultimately enable a reduction in radiometric measurement uncertainties and assist in developing consensus on a standard for calibration.

  15. Radiogenic 4He as a conservative tracer in buried‐valley aquifers

    USGS Publications Warehouse

    Van der Hoven, Stephen J.; Wright, R. Erik; Carstens, David A.; Hackley, Keith C.

    2005-01-01

    The accumulation of 4He in groundwater can be a powerful tool in hydrogeologic investigations. However, the use of 4He often suffers from disagreement or uncertainty related to in situ and external sources of 4He. In situ sources are quantified by several methods, while external sources are often treated as calibration parameters in modeling. We present data from direct laboratory measurements of 4He release from sediments and field data of dissolved 4He in the Mahomet Aquifer, a well‐studied buried‐valley aquifer in central Illinois. The laboratory‐derived accumulation rates (0.13–0.91 μcm3 STP (kg water)−1 yr−1) are 1–2 orders of magnitude greater than the accumulation rates based on the U and Th concentrations of the sediments (0.004–0.009 μcm3 STP (kg water)−1 yr−1). The directly measured accumulation rates are more consistent with dissolved concentrations of 4He in the groundwater. We suggest that the direct measurement method is applicable in a variety of hydrogeologic settings. The patterns of accumulation of 4He are consistent with the conceptual model of flow in the aquifer based on hydraulic and geochemical evidence and show areas where in situ production and external sources of 4He are dominant. In the southwestern part of the study area, Ne concentrations are less than atmospheric solubility, indicating gases have been lost from the groundwater. Available evidence indicates that the gases are lost as groundwater passes by pockets of CH4 in glacial deposits overlying the aquifer. However, the external flux from the underlying bedrock appears to dominate the accumulation of radiogenic 4He in the aquifer in the southwestern part of the study area, and the loss or gain of helium as groundwater passes through the overlying sediments is minor in comparison.

  16. Development and External Validation of the Korean Prostate Cancer Risk Calculator for High-Grade Prostate Cancer: Comparison with Two Western Risk Calculators in an Asian Cohort

    PubMed Central

    Yoon, Sungroh; Park, Man Sik; Choi, Hoon; Bae, Jae Hyun; Moon, Du Geon; Hong, Sung Kyu; Lee, Sang Eun; Park, Chanwang

    2017-01-01

    Purpose We developed the Korean Prostate Cancer Risk Calculator for High-Grade Prostate Cancer (KPCRC-HG) that predicts the probability of prostate cancer (PC) of Gleason score 7 or higher at the initial prostate biopsy in a Korean cohort (http://acl.snu.ac.kr/PCRC/RISC/). In addition, KPCRC-HG was validated and compared with internet-based Western risk calculators in a validation cohort. Materials and Methods Using a logistic regression model, KPCRC-HG was developed based on the data from 602 previously unscreened Korean men who underwent initial prostate biopsies. Using 2,313 cases in a validation cohort, KPCRC-HG was compared with the European Randomized Study of Screening for PC Risk Calculator for high-grade cancer (ERSPCRC-HG) and the Prostate Cancer Prevention Trial Risk Calculator 2.0 for high-grade cancer (PCPTRC-HG). The predictive accuracy was assessed using the area under the receiver operating characteristic curve (AUC) and calibration plots. Results PC was detected in 172 (28.6%) men, 120 (19.9%) of whom had PC of Gleason score 7 or higher. Independent predictors included prostate-specific antigen levels, digital rectal examination findings, transrectal ultrasound findings, and prostate volume. The AUC of the KPCRC-HG (0.84) was higher than that of the PCPTRC-HG (0.79, p<0.001) but not different from that of the ERSPCRC-HG (0.83) on external validation. Calibration plots also revealed better performance of KPCRC-HG and ERSPCRC-HG than that of PCPTRC-HG on external validation. At a cut-off of 5% for KPCRC-HG, 253 of the 2,313 men (11%) would not have been biopsied, and 14 of the 614 PC cases with Gleason score 7 or higher (2%) would not have been diagnosed. Conclusions KPCRC-HG is the first web-based high-grade prostate cancer prediction model in Korea. It had higher predictive accuracy than PCPTRC-HG in a Korean population and showed similar performance with ERSPCRC-HG in a Korean population. This prediction model could help avoid unnecessary biopsy and reduce overdiagnosis and overtreatment in clinical settings. PMID:28046017

  17. An image‐based method to synchronize cone‐beam CT and optical surface tracking

    PubMed Central

    Schaerer, Joël; Riboldi, Marco; Sarrut, David; Baroni, Guido

    2015-01-01

    The integration of in‐room X‐ray imaging and optical surface tracking has gained increasing importance in the field of image guided radiotherapy (IGRT). An essential step for this integration consists of temporally synchronizing the acquisition of X‐ray projections and surface data. We present an image‐based method for the synchronization of cone‐beam computed tomography (CBCT) and optical surface systems, which does not require the use of additional hardware. The method is based on optically tracking the motion of a component of the CBCT/gantry unit, which rotates during the acquisition of the CBCT scan. A calibration procedure was implemented to relate the position of the rotating component identified by the optical system with the time elapsed since the beginning of the CBCT scan, thus obtaining the temporal correspondence between the acquisition of X‐ray projections and surface data. The accuracy of the proposed synchronization method was evaluated on a motorized moving phantom, performing eight simultaneous acquisitions with an Elekta Synergy CBCT machine and the AlignRT optical device. The median time difference between the sinusoidal peaks of phantom motion signals extracted from the synchronized CBCT and AlignRT systems ranged between ‐3.1 and 12.9 msec, with a maximum interquartile range of 14.4 msec. The method was also applied to clinical data acquired from seven lung cancer patients, demonstrating the potential of the proposed approach in estimating the individual and daily variations in respiratory parameters and motion correlation of internal and external structures. The presented synchronization method can be particularly useful for tumor tracking applications in extracranial radiation treatments, especially in the field of patient‐specific breathing models, based on the correlation between internal tumor motion and external surface surrogates. PACS number: 87

  18. Model Robust Calibration: Method and Application to Electronically-Scanned Pressure Transducers

    NASA Technical Reports Server (NTRS)

    Walker, Eric L.; Starnes, B. Alden; Birch, Jeffery B.; Mays, James E.

    2010-01-01

    This article presents the application of a recently developed statistical regression method to the controlled instrument calibration problem. The statistical method of Model Robust Regression (MRR), developed by Mays, Birch, and Starnes, is shown to improve instrument calibration by reducing the reliance of the calibration on a predetermined parametric (e.g. polynomial, exponential, logarithmic) model. This is accomplished by allowing fits from the predetermined parametric model to be augmented by a certain portion of a fit to the residuals from the initial regression using a nonparametric (locally parametric) regression technique. The method is demonstrated for the absolute scale calibration of silicon-based pressure transducers.
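
    As a rough illustration of the augmentation idea described above, the minimal Python sketch below blends a parametric polynomial fit with a smoothed fit to its residuals. The smoother, the mixing weight lam, and the toy calibration data are all invented for illustration and do not reproduce the published MRR estimator.

```python
import numpy as np

def smooth_residuals(x, r, span=0.3):
    """Crude local-average smoother of residuals r over x (illustrative only)."""
    n = len(x)
    half = max(1, int(span * n) // 2)
    order = np.argsort(x)
    r_sorted = r[order]
    smoothed = np.array([r_sorted[max(0, i - half):i + half + 1].mean() for i in range(n)])
    out = np.empty(n)
    out[order] = smoothed
    return out

def model_robust_fit(x, y, degree=2, lam=0.5, span=0.3):
    """Parametric polynomial fit augmented by a portion (lam) of a
    nonparametric fit to its residuals, echoing the MRR mixing idea."""
    coeffs = np.polyfit(x, y, degree)                    # parametric part
    y_par = np.polyval(coeffs, x)
    resid_fit = smooth_residuals(x, y - y_par, span)     # nonparametric part
    return y_par + lam * resid_fit

# toy calibration data: pressure transducer response vs. applied pressure
x = np.linspace(0.0, 1.0, 50)
y = 3.0 * x + 0.2 * np.sin(6.0 * x) + np.random.normal(0.0, 0.02, 50)
y_hat = model_robust_fit(x, y)
```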

  19. Spectral characterization and calibration of AOTF spectrometers and hyper-spectral imaging system

    NASA Astrophysics Data System (ADS)

    Katrašnik, Jaka; Pernuš, Franjo; Likar, Boštjan

    2010-02-01

    The goal of this article is to present a novel method for spectral characterization and calibration of spectrometers and hyper-spectral imaging systems based on non-collinear acousto-optical tunable filters. The method characterizes the spectral tuning curve (frequency-wavelength characteristic) of the AOTF (Acousto-Optic Tunable Filter) by matching the acquired and modeled spectra of an HgAr calibration lamp, which emits a line spectrum that can be well modeled via the AOTF transfer function. In this way, not only tuning curve characterization and corresponding spectral calibration but also spectral resolution assessment is performed. The obtained results indicated that the proposed method is efficient, accurate and feasible for routine calibration of AOTF spectrometers and hyper-spectral imaging systems, and is thereby a highly competitive alternative to the existing calibration methods.

  20. Calibration of Ocean Wave Measurements by the TOPEX, Jason-1, and Jason-2 Satellites

    NASA Technical Reports Server (NTRS)

    Ray, Richard D.; Beckley, B. D.

    2012-01-01

    The calibration and validation of ocean wave height measurements by the TOPEX, Jason-1, and Jason-2 satellite altimeters is addressed by comparing the measurements internally among themselves and against independent wave measurements at moored buoys. The two six-month verification campaigns, when two of the satellites made near-simultaneous measurements along the same ground track, are invaluable for such work and reveal subtle aspects that otherwise might go undetected. The two Jason satellites are remarkably consistent; TOPEX reports waves generally 1-2% larger. External calibration is complicated by some systematic errors in the buoy data. We confirm a recent report by Durrant et al. that Canadian buoys underestimate significant wave heights by about 10% relative to U.S. buoys. Wave heights from all three altimetric satellites require scaling upwards by 5-6% to be consistent with U.S. buoys.
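
    The scaling mentioned in the last sentence can be pictured as a one-parameter fit of altimeter wave heights against collocated buoy wave heights. The sketch below, with invented matchup values, forces the regression through the origin so the single coefficient plays the role of the 5-6% scale factor; it is not the authors' actual cross-calibration procedure.

```python
import numpy as np

# hypothetical matched pairs of significant wave height (m): altimeter vs. buoy
swh_alt  = np.array([1.10, 1.90, 2.40, 3.20, 4.00, 5.10])
swh_buoy = np.array([1.16, 2.01, 2.52, 3.38, 4.22, 5.36])

# through-origin least squares gives a single multiplicative scale factor
scale = np.sum(swh_alt * swh_buoy) / np.sum(swh_alt ** 2)
print(f"altimeter -> buoy scale factor: {scale:.3f}")   # ~1.05, i.e. ~5% upward scaling
```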

  1. The Calibration of AVHRR/3 Visible Dual Gain Using Meteosat-8 as a MODIS Calibration Transfer Medium

    NASA Technical Reports Server (NTRS)

    Avey, Lance; Garber, Donald; Nguyen, Louis; Minnis, Patrick

    2007-01-01

    This viewgraph presentation reviews the NOAA-17 AVHRR visible channels calibrated against MET-8/MODIS using dual gain regression methods. The topics include: 1) Motivation; 2) Methodology; 3) Dual Gain Regression Methods; 4) Examples of Regression methods; 5) AVHRR/3 Regression Strategy; 6) Cross-Calibration Method; 7) Spectral Response Functions; 8) MET8/NOAA-17; 9) Example of gain ratio adjustment; 10) Effect of mixed low/high count FOV; 11) Monitor dual gains over time; and 12) Conclusions

  2. Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units.

    PubMed

    Cai, Qingzhong; Yang, Gongliu; Song, Ningfang; Liu, Yiliang

    2016-06-22

    An inertial navigation system (INS) has been widely used in challenging GPS environments. With the rapid development of modern physics, an atomic gyroscope will come into use in the near future with a predicted accuracy of 5 × 10^-6 °/h or better. However, existing calibration methods and devices cannot satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors and lever arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can realize the estimation of all the parameters using a common dual-axis turntable. Laboratory and sailing tests prove that the position accuracy in five-day inertial navigation can be improved by about 8% by the proposed calibration method. The accuracy can be improved by at least 20% when the position accuracy of the atomic gyro INS can reach a level of 0.1 nautical miles/5 d. Compared with the existing calibration methods, the proposed method, with more error sources and high-order small error parameters calibrated for ultra-high accuracy inertial measurement units (IMUs) using common turntables, has great application potential in future atomic gyro INSs.

  3. IMU-based online kinematic calibration of robot manipulator.

    PubMed

    Du, Guanglong; Zhang, Ping

    2013-01-01

    Robot calibration is a useful diagnostic method for improving the positioning accuracy in robot production and maintenance. An online robot self-calibration method based on an inertial measurement unit (IMU) is presented in this paper. The method requires that the IMU is rigidly attached to the robot manipulator, which makes it possible to obtain the orientation of the manipulator from the orientation of the IMU in real time. This paper proposes an efficient approach which incorporates the Factored Quaternion Algorithm (FQA) and a Kalman Filter (KF) to estimate the orientation of the IMU. Then, an Extended Kalman Filter (EKF) is used to estimate kinematic parameter errors. Using this proposed orientation estimation method results in improved reliability and accuracy in determining the orientation of the manipulator. Compared with the existing vision-based self-calibration methods, the great advantage of this method is that it does not need complex steps, such as camera calibration, image capture, and corner detection, which makes the robot calibration procedure more autonomous in a dynamic manufacturing environment. Experimental studies on a GOOGOL GRB3016 robot show that this method has better accuracy, convenience, and effectiveness than vision-based methods.

  4. SU-E-I-38: Improved Metal Artifact Correction Using Adaptive Dual Energy Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, X; Elder, E; Roper, J

    2015-06-15

    Purpose: The empirical dual energy calibration (EDEC) method corrects for beam-hardening artifacts, but shows limited performance on metal artifact correction. In this work, we propose an adaptive dual energy calibration (ADEC) method to correct for metal artifacts. Methods: The empirical dual energy calibration (EDEC) method corrects for beam-hardening artifacts, but shows limited performance on metal artifact correction. In this work, we propose an adaptive dual energy calibration (ADEC) method to correct for metal artifacts. Results: Highly attenuating copper rods cause severe streaking artifacts on standard CT images. EDEC improves the image quality, but cannot eliminate the streaking artifacts. Compared to EDEC, the proposed ADEC method further reduces the streaking resulting from metallic inserts and beam-hardening effects and obtains material decomposition images with significantly improved accuracy. Conclusion: We propose an adaptive dual energy calibration method to correct for metal artifacts. ADEC is evaluated with the Shepp-Logan phantom, and shows superior metal artifact correction performance. In the future, we will further evaluate the performance of the proposed method with phantom and patient data.

  5. Development and external validation of a prediction rule for an unfavorable course of late-life depression: A multicenter cohort study.

    PubMed

    Maarsingh, O R; Heymans, M W; Verhaak, P F; Penninx, B W J H; Comijs, H C

    2018-08-01

    Given the poor prognosis of late-life depression, it is crucial to identify those at risk. Our objective was to construct and validate a prediction rule for an unfavourable course of late-life depression. For development and internal validation of the model, we used The Netherlands Study of Depression in Older Persons (NESDO) data. We included participants with a major depressive disorder (MDD) at baseline (n = 270; 60-90 years), assessed with the Composite International Diagnostic Interview (CIDI). For external validation of the model, we used The Netherlands Study of Depression and Anxiety (NESDA) data (n = 197; 50-66 years). The outcome was MDD after 2 years of follow-up, assessed with the CIDI. Candidate predictors concerned sociodemographics, psychopathology, physical symptoms, medication, psychological determinants, and healthcare setting. Model performance was assessed by calculating calibration and discrimination. 111 subjects (41.1%) had MDD after 2 years of follow-up. Independent predictors of MDD after 2 years were (older) age, (early) onset of depression, severity of depression, anxiety symptoms, comorbid anxiety disorder, fatigue, and loneliness. The final model showed good calibration and reasonable discrimination (AUC of 0.75; 0.70 after external validation). The strongest individual predictor was severity of depression (AUC of 0.69; 0.68 after external validation). The model was developed and validated in The Netherlands, which could affect its cross-country generalizability. Based on rather simple clinical indicators, it is possible to predict the 2-year course of MDD. The prediction rule can be used for monitoring MDD patients and identifying those at risk of an unfavourable outcome. Copyright © 2018 Elsevier B.V. All rights reserved.
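
    The workflow of fitting a logistic model on a development cohort and checking discrimination on an external cohort can be sketched in a few lines. The snippet below uses random toy data (so the printed AUCs are close to 0.5 and carry no clinical meaning); it only illustrates the development/external-validation split, not the NESDO/NESDA analysis itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# toy development data: six candidate predictors (e.g., age, onset, severity, ...)
X_dev = rng.normal(size=(270, 6))
y_dev = rng.binomial(1, 0.4, size=270)        # MDD still present after 2 years (toy labels)
model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)

# toy external validation cohort
X_val = rng.normal(size=(197, 6))
y_val = rng.binomial(1, 0.4, size=197)

auc_dev = roc_auc_score(y_dev, model.predict_proba(X_dev)[:, 1])
auc_ext = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print(f"AUC development: {auc_dev:.2f}, external validation: {auc_ext:.2f}")
```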

  6. A holistic calibration method with iterative distortion compensation for stereo deflectometry

    NASA Astrophysics Data System (ADS)

    Xu, Yongjia; Gao, Feng; Zhang, Zonghua; Jiang, Xiangqian

    2018-07-01

    This paper presents a novel holistic calibration method for a stereo deflectometry system to improve the system measurement accuracy. The reconstruction result of stereo deflectometry is integrated with the calculated normal data of the measured surface. The calculation accuracy of the normal data is seriously influenced by the calibration accuracy of the geometrical relationship of the stereo deflectometry system. Conventional calibration approaches introduce form error to the system due to an inaccurate imaging model and distortion elimination. The proposed calibration method compensates system distortion based on an iterative algorithm instead of the conventional distortion mathematical model. The initial values of the system parameters are calculated from the fringe patterns displayed on the system's LCD screen through a reflection off a markerless flat mirror. An iterative algorithm is proposed to compensate system distortion and optimize camera imaging parameters and system geometrical relation parameters based on a cost function. Both simulation work and experimental results show the proposed calibration method can significantly improve the calibration and measurement accuracy of a stereo deflectometry system. The PV (peak value) of the measurement error of a flat mirror can be reduced from 282 nm, obtained with the conventional calibration approach, to 69.7 nm by applying the proposed method.

  7. Multimodal Spatial Calibration for Accurately Registering EEG Sensor Positions

    PubMed Central

    Chen, Shengyong; Xiao, Gang; Li, Xiaoli

    2014-01-01

    This paper proposes a fast and accurate calibration method to calibrate multiple multimodal sensors using a novel photogrammetry system for fast localization of EEG sensors. The EEG sensors are placed on the human head and multimodal sensors are installed around the head to simultaneously obtain all EEG sensor positions. A multiple-views calibration process is implemented to obtain the transformations between multiple views. We first develop an efficient local repair algorithm to improve the depth map, and then a special calibration body is designed. Based on these, accurate and robust calibration results can be achieved. We evaluate the proposed method using the corners of a chessboard calibration plate. Experimental results demonstrate that the proposed method can achieve good performance, and it can be further applied to EEG source localization applications on the human brain. PMID:24803954

  8. Volumetric calibration of a plenoptic camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, Elise Munz; Fahringer, Timothy W.; Guildenbecher, Daniel Robert

    Here, the volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods is examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.
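
    The core of both approaches is a polynomial mapping fitted by least squares between reconstructed coordinates and the known dot-card positions. The sketch below, with a synthetic quadratic distortion and a second-order basis, is only meant to show that fit; it does not reproduce the plenoptic processing chain or the authors' specific polynomial form.

```python
import numpy as np

def poly_terms(p):
    """Second-order 3D polynomial basis for a point p = (x, y, z)."""
    x, y, z = p
    return np.array([1, x, y, z, x*x, y*y, z*z, x*y, x*z, y*z])

def fit_volume_mapping(reconstructed, reference):
    """Least-squares fit of a polynomial map from distorted reconstructions
    to known target positions (one column of coefficients per output axis)."""
    A = np.vstack([poly_terms(p) for p in reconstructed])
    coeffs, *_ = np.linalg.lstsq(A, reference, rcond=None)
    return coeffs                                    # shape (10, 3)

def apply_mapping(coeffs, points):
    A = np.vstack([poly_terms(p) for p in points])
    return A @ coeffs

# toy example: a regular dot grid seen through a synthetic quadratic distortion
ref = np.array([[x, y, z] for x in range(3) for y in range(3) for z in range(3)], float)
rec = ref + 0.01 * ref ** 2
C = fit_volume_mapping(rec, ref)
corrected = apply_mapping(C, rec)
print(np.max(np.abs(corrected - ref)))               # residual is tiny
```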

  9. Volumetric calibration of a plenoptic camera

    DOE PAGES

    Hall, Elise Munz; Fahringer, Timothy W.; Guildenbecher, Daniel Robert; ...

    2018-02-01

    Here, the volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods is examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.

  10. Blind calibration of radio interferometric arrays using sparsity constraints and its implications for self-calibration

    NASA Astrophysics Data System (ADS)

    Chiarucci, Simone; Wijnholds, Stefan J.

    2018-02-01

    Blind calibration, i.e. calibration without a priori knowledge of the source model, is robust to the presence of unknown sources such as transient phenomena or (low-power) broad-band radio frequency interference that escaped detection. In this paper, we present a novel method for blind calibration of a radio interferometric array assuming that the observed field only contains a small number of discrete point sources. We show the huge computational advantage over previous blind calibration methods and we assess its statistical efficiency and robustness to noise and the quality of the initial estimate. We demonstrate the method on actual data from a Low-Frequency Array low-band antenna station showing that our blind calibration is able to recover the same gain solutions as the regular calibration approach, as expected from theory and simulations. We also discuss the implications of our findings for the robustness of regular self-calibration to poor starting models.

  11. Practical wavelength calibration considerations for UV-visible Fourier-transform spectroscopy.

    PubMed

    Salit, M L; Travis, J C; Winchester, M R

    1996-06-01

    The intrinsic wavelength scale in a modern reference laser-controlled Michelson interferometer (sometimes referred to as the Connes advantage) offers excellent wavelength accuracy with relative ease. Truly superb wavelength accuracy, with total relative uncertainty in line position of the order of several parts in 10^8, should be within reach with single-point, multiplicative calibration. The need for correction of the wavelength scale arises from two practical effects: the use of a finite aperture, from which off-axis rays propagate through the interferometer, and imperfect geometric alignment of the sample beam with the reference beam and the optical axis of the moving mirror. Although an analytical correction can be made for the finite-aperture effect, calibration with a trusted wavelength standard is typically used to accomplish both corrections. Practical aspects of accurate calibration of an interferometer in the UV-visible region are discussed. Critical issues regarding accurate use of a standard external to the sample source and the evaluation and selection of an appropriate standard are addressed. Anomalous results for two different potential wavelength standards measured by Fabry-Perot interferometry (Ar II and ^198Hg I) are observed.
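
    The single-point multiplicative calibration mentioned above amounts to one scale factor derived from a trusted standard line and applied to every measured wavenumber. A minimal sketch, with invented line positions, is shown below.

```python
# single-point multiplicative wavenumber calibration (illustrative values only)
sigma_std_true = 23039.0     # certified wavenumber of the standard line (cm^-1)
sigma_std_meas = 23038.6     # wavenumber observed for that line

alpha = sigma_std_true / sigma_std_meas - 1.0    # correction factor

def correct(sigma_measured):
    """Apply the multiplicative scale correction to any measured wavenumber."""
    return sigma_measured * (1.0 + alpha)

print(correct(18000.0))      # corrected position of some other measured line
```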

  12. An Accurate Projector Calibration Method Based on Polynomial Distortion Representation

    PubMed Central

    Liu, Miao; Sun, Changku; Huang, Shujun; Zhang, Zonghua

    2015-01-01

    In structured light measurement systems or 3D printing systems, the errors caused by optical distortion of a digital projector always affect the precision performance and cannot be ignored. Existing methods to calibrate the projection distortion rely on a calibration plate and photogrammetry, so the calibration performance is largely affected by the quality of the plate and the imaging system. This paper proposes a new projector calibration approach that makes use of photodiodes to directly detect the light emitted from a digital projector. By analyzing the output sequence of the photoelectric module, the pixel coordinates can be accurately obtained by the curve fitting method. A polynomial distortion representation is employed to reduce the residuals of the traditional distortion representation model. Experimental results and performance evaluation show that the proposed calibration method is able to avoid most of the disadvantages of traditional methods and achieves a higher accuracy. The proposed method is also practically applicable for evaluating the geometric optical performance of other optical projection systems. PMID:26492247

  13. Development and External Validation of a Melanoma Risk Prediction Model Based on Self-assessed Risk Factors.

    PubMed

    Vuong, Kylie; Armstrong, Bruce K; Weiderpass, Elisabete; Lund, Eiliv; Adami, Hans-Olov; Veierod, Marit B; Barrett, Jennifer H; Davies, John R; Bishop, D Timothy; Whiteman, David C; Olsen, Catherine M; Hopper, John L; Mann, Graham J; Cust, Anne E; McGeechan, Kevin

    2016-08-01

    Identifying individuals at high risk of melanoma can optimize primary and secondary prevention strategies. To develop and externally validate a risk prediction model for incident first-primary cutaneous melanoma using self-assessed risk factors. We used unconditional logistic regression to develop a multivariable risk prediction model. Relative risk estimates from the model were combined with Australian melanoma incidence and competing mortality rates to obtain absolute risk estimates. A risk prediction model was developed using the Australian Melanoma Family Study (629 cases and 535 controls) and externally validated using 4 independent population-based studies: the Western Australia Melanoma Study (511 case-control pairs), Leeds Melanoma Case-Control Study (960 cases and 513 controls), Epigene-QSkin Study (44,544 participants, of whom 766 had melanoma), and Swedish Women's Lifestyle and Health Cohort Study (49,259 women, of whom 273 had melanoma). We validated model performance internally and externally by assessing discrimination using the area under the receiver operating characteristic curve (AUC). Additionally, using the Swedish Women's Lifestyle and Health Cohort Study, we assessed model calibration and clinical usefulness. The risk prediction model included hair color, nevus density, first-degree family history of melanoma, previous nonmelanoma skin cancer, and lifetime sunbed use. On internal validation, the AUC was 0.70 (95% CI, 0.67-0.73). On external validation, the AUC was 0.66 (95% CI, 0.63-0.69) in the Western Australia Melanoma Study, 0.67 (95% CI, 0.65-0.70) in the Leeds Melanoma Case-Control Study, 0.64 (95% CI, 0.62-0.66) in the Epigene-QSkin Study, and 0.63 (95% CI, 0.60-0.67) in the Swedish Women's Lifestyle and Health Cohort Study. Model calibration showed close agreement between predicted and observed numbers of incident melanomas across all deciles of predicted risk. In the external validation setting, there was higher net benefit when using the risk prediction model to classify individuals as high risk compared with classifying all individuals as high risk. The melanoma risk prediction model performs well and may be useful in prevention interventions reliant on a risk assessment using self-assessed risk factors.
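
    The step of combining relative risk estimates with incidence and competing mortality rates can be illustrated with a simple life-table style calculation. The sketch below uses invented rates (not Australian registry values) and a crude yearly discretization; it is not the published absolute-risk formula.

```python
import numpy as np

ages = np.arange(50, 60)                              # a 10-year horizon
melanoma_rate  = np.full(len(ages), 0.0006)           # toy annual baseline incidence
mortality_rate = np.full(len(ages), 0.0100)           # toy annual competing mortality

def absolute_risk(rel_risk):
    alive_free = 1.0          # probability of being alive and melanoma-free so far
    risk = 0.0
    for lam, mu in zip(melanoma_rate, mortality_rate):
        hazard = rel_risk * lam
        risk += alive_free * (1.0 - np.exp(-hazard))  # melanoma occurs this year
        alive_free *= np.exp(-(hazard + mu))          # survive the year event-free
    return risk

print(f"10-year absolute risk at relative risk 3: {absolute_risk(3.0):.2%}")
```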

  14. Novel Calibration Algorithm for a Three-Axis Strapdown Magnetometer

    PubMed Central

    Liu, Yan Xia; Li, Xi Sheng; Zhang, Xiao Juan; Feng, Yi Bo

    2014-01-01

    A complete error calibration model with 12 independent parameters is established by analyzing the three-axis magnetometer error mechanism. The said model conforms to an ellipsoid restriction, the parameters of the ellipsoid equation are estimated, and the ellipsoid coefficient matrix is derived. However, the calibration matrix cannot be determined completely, as there are fewer ellipsoid parameters than calibration model parameters. Mathematically, the calibration matrix derived from the ellipsoid coefficient matrix by a different matrix decomposition method is not unique, and there exists an unknown rotation matrix R between them. This paper puts forward a constant intersection angle method (angles between the geomagnetic field and gravitational field are fixed) to estimate R. The Tikhonov method is adopted to solve the problem that rounding errors or other errors may seriously affect the calculation results of R when the condition number of the matrix is very large. The geomagnetic field vector and heading error are further corrected by R. The constant intersection angle method is convenient and practical, as it is free from any additional calibration procedure or coordinate transformation. In addition, the simulation experiment indicates that the heading error declines from ±1° calibrated by classical ellipsoid fitting to ±0.2° calibrated by a constant intersection angle method, and the signal-to-noise ratio is 50 dB. The actual experiment exhibits that the heading error is further corrected from ±0.8° calibrated by the classical ellipsoid fitting to ±0.3° calibrated by a constant intersection angle method. PMID:24831110

  15. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization

    NASA Astrophysics Data System (ADS)

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj

    2015-03-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious. They require acquisition of multiple images of a target pattern in its entirety to produce satisfactory results. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE) as a measure of camera calibration accuracy for our optical tracking-based AR system was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method shows promise for application to our augmented reality visualization system for laparoscopic surgery.

  16. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization

    PubMed Central

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj

    2017-01-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious. They require acquisition of multiple images of a target pattern in its entirety to produce satisfactory results. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE) as a measure of camera calibration accuracy for our optical tracking-based AR system was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method shows promise for application to our augmented reality visualization system for laparoscopic surgery. PMID:28943703

  17. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization.

    PubMed

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D; Shekhar, Raj

    2015-03-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious. They require acquisition of multiple images of a target pattern in its entirety to produce satisfactory results. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE) as a measure of camera calibration accuracy for our optical tracking-based AR system was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method shows promise for application to our augmented reality visualization system for laparoscopic surgery.

  18. A novel calibration method of focused light field camera for 3-D reconstruction of flame temperature

    NASA Astrophysics Data System (ADS)

    Sun, Jun; Hossain, Md. Moinul; Xu, Chuan-Long; Zhang, Biao; Wang, Shi-Min

    2017-05-01

    This paper presents a novel geometric calibration method for a focused light field camera to trace the rays of flame radiance and to reconstruct the three-dimensional (3-D) temperature distribution of a flame. A calibration model is developed to calculate the corner points and their projections for the focused light field camera. The characteristics of matching main lens and microlens f-numbers are used as an additional constraint for the calibration. Geometric parameters of the focused light field camera are then obtained using the Levenberg-Marquardt algorithm. Totally focused images, in which all the points are in focus, are utilized to validate the proposed calibration method. Calibration results are presented and discussed in detail. The maximum mean relative error of the calibration is found to be less than 0.13%, indicating that the proposed method is capable of calibrating the focused light field camera successfully. The parameters obtained by the calibration are then utilized to trace the rays of flame radiance. A least-squares QR-factorization algorithm with Planck's radiation law is used to reconstruct the 3-D temperature distribution of a flame. Experiments were carried out on an ethylene-air fired combustion test rig to reconstruct the temperature distribution of flames. The flame temperature obtained by the proposed method was then compared with that obtained using a high-precision thermocouple. The difference between the two measurements was found to be no greater than 6.7%. Experimental results demonstrated that the proposed calibration method and the applied measurement technique perform well in the reconstruction of the flame temperature.
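
    The final reconstruction step rests on Planck's radiation law relating spectral radiance to temperature. The sketch below only shows the single-wavelength inversion of Planck's law (a brightness-temperature calculation at an illustrative wavelength); the paper's actual reconstruction solves a ray-traced least-squares system over the whole flame volume.

```python
import numpy as np

h = 6.62607015e-34   # Planck constant (J s)
c = 2.99792458e8     # speed of light (m/s)
k = 1.380649e-23     # Boltzmann constant (J/K)

def planck(lam, T):
    """Blackbody spectral radiance at wavelength lam (m) and temperature T (K)."""
    return (2.0 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * T))

def brightness_temperature(lam, radiance):
    """Invert Planck's law for temperature at a single wavelength."""
    return h * c / (lam * k) / np.log1p(2.0 * h * c**2 / (lam**5 * radiance))

lam = 650e-9                            # illustrative red-channel wavelength (m)
L = planck(lam, 1800.0)                 # radiance of a 1800 K source
print(brightness_temperature(lam, L))   # recovers ~1800 K
```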

  19. Comparison of infusion pumps calibration methods

    NASA Astrophysics Data System (ADS)

    Batista, Elsa; Godinho, Isabel; do Céu Ferreira, Maria; Furtado, Andreia; Lucas, Peter; Silva, Claudia

    2017-12-01

    Nowadays, several types of infusion pump are commonly used for drug delivery, such as syringe pumps and peristaltic pumps. These instruments present different measuring features and capacities according to their use and therapeutic application. In order to ensure the metrological traceability of this flow and volume measuring equipment, it is necessary to use suitable calibration methods and standards. Two different calibration methods can be used to determine the flow error of infusion pumps. One is the gravimetric method, considered a primary method, commonly used by National Metrology Institutes. The other calibration method, a secondary method, relies on an infusion device analyser (IDA) and is typically used by hospital maintenance offices. The suitability of the IDA calibration method was assessed by testing several infusion instruments at different flow rates using the gravimetric method. In addition, a measurement comparison between Portuguese Accredited Laboratories and hospital maintenance offices was performed under the coordination of the Portuguese Institute for Quality, the National Metrology Institute. The obtained results were directly related to the calibration method used and are presented in this paper. This work has been developed in the framework of the EURAMET projects EMRP MeDD and EMPIR 15SIP03.
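
    In the gravimetric method, the delivered volume is inferred from the weighed mass of collected liquid and compared with the set flow rate. A minimal sketch with invented readings is given below; real procedures also correct for evaporation, buoyancy and water density at the measured temperature.

```python
# gravimetric flow-error check for an infusion pump (illustrative numbers only)
rho_water = 0.9982          # g/mL near 20 degC (approximate, uncorrected)

def flow_error_percent(mass_g, duration_min, set_flow_ml_per_h):
    measured_flow = (mass_g / rho_water) / (duration_min / 60.0)   # mL/h
    return 100.0 * (measured_flow - set_flow_ml_per_h) / set_flow_ml_per_h

# pump set to 25 mL/h; 12.40 g of water collected in 30 min (hypothetical readings)
print(f"flow error: {flow_error_percent(12.40, 30.0, 25.0):+.2f} %")
```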

  20. Low Frequency Error Analysis and Calibration for High-Resolution Optical Satellite's Uncontrolled Geometric Positioning

    NASA Astrophysics Data System (ADS)

    Wang, Mi; Fang, Chengcheng; Yang, Bo; Cheng, Yufeng

    2016-06-01

    The low frequency error is a key factor which has affected uncontrolled geometry processing accuracy of the high-resolution optical image. To guarantee the geometric quality of imagery, this paper presents an on-orbit calibration method for the low frequency error based on geometric calibration field. Firstly, we introduce the overall flow of low frequency error on-orbit analysis and calibration, which includes optical axis angle variation detection of star sensor, relative calibration among star sensors, multi-star sensor information fusion, low frequency error model construction and verification. Secondly, we use optical axis angle change detection method to analyze the law of low frequency error variation. Thirdly, we respectively use the method of relative calibration and information fusion among star sensors to realize the datum unity and high precision attitude output. Finally, we realize the low frequency error model construction and optimal estimation of model parameters based on DEM/DOM of geometric calibration field. To evaluate the performance of the proposed calibration method, a certain type satellite's real data is used. Test results demonstrate that the calibration model in this paper can well describe the law of the low frequency error variation. The uncontrolled geometric positioning accuracy of the high-resolution optical image in the WGS-84 Coordinate Systems is obviously improved after the step-wise calibration.

  1. Bayesian regression models outperform partial least squares methods for predicting milk components and technological properties using infrared spectral data.

    PubMed

    Ferragina, A; de los Campos, G; Vazquez, A I; Cecchinato, A; Bittante, G

    2015-11-01

    The aim of this study was to assess the performance of Bayesian models commonly used for genomic selection to predict "difficult-to-predict" dairy traits, such as milk fatty acid (FA) expressed as percentage of total fatty acids, and technological properties, such as fresh cheese yield and protein recovery, using Fourier-transform infrared (FTIR) spectral data. Our main hypothesis was that Bayesian models that can estimate shrinkage and perform variable selection may improve our ability to predict FA traits and technological traits above and beyond what can be achieved using the current calibration models (e.g., partial least squares, PLS). To this end, we assessed a series of Bayesian methods and compared their prediction performance with that of PLS. The comparison between models was done using the same sets of data (i.e., same samples, same variability, same spectral treatment) for each trait. Data consisted of 1,264 individual milk samples collected from Brown Swiss cows for which gas chromatographic FA composition, milk coagulation properties, and cheese-yield traits were available. For each sample, 2 spectra in the infrared region from 5,011 to 925 cm^-1 were available and averaged before data analysis. Three Bayesian models: Bayesian ridge regression (Bayes RR), Bayes A, and Bayes B, and 2 reference models: PLS and modified PLS (MPLS) procedures, were used to calibrate equations for each of the traits. The Bayesian models used were implemented in the R package BGLR (http://cran.r-project.org/web/packages/BGLR/index.html), whereas the PLS and MPLS were those implemented in the WinISI II software (Infrasoft International LLC, State College, PA). Prediction accuracy was estimated for each trait and model using 25 replicates of a training-testing validation procedure. Compared with PLS, which is currently the most widely used calibration method, MPLS and the 3 Bayesian methods showed significantly greater prediction accuracy. Accuracy increased in moving from calibration to external validation methods, and in moving from PLS and MPLS to Bayesian methods, particularly Bayes A and Bayes B. The maximum R^2 value on validation was obtained with Bayes B and Bayes A. Among the FA, C10:0 (% of each FA on a total FA basis) had the highest R^2 (0.75, achieved with Bayes A and Bayes B), and among the technological traits, fresh cheese yield had an R^2 of 0.82 (achieved with Bayes B). These 2 methods have proven to be useful instruments in shrinking and selecting very informative wavelengths and inferring the structure and functions of the analyzed traits. We conclude that Bayesian models are powerful tools for deriving calibration equations, and, importantly, these equations can be easily developed using existing open-source software. As part of our study, we provide scripts based on the open source R software BGLR, which can be used to train customized prediction equations for other traits or populations. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
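
    The study itself used the BGLR package in R for the Bayesian models and WinISI for PLS/MPLS. As a loose illustration of comparing a shrinkage-based Bayesian calibration with PLS on spectral-like data, the Python sketch below uses scikit-learn's BayesianRidge and PLSRegression on synthetic data; it does not reproduce Bayes A/B or the paper's validation scheme.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import BayesianRidge
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)

# synthetic stand-in for spectra: 400 samples x 300 "wavenumbers", sparse true signal
X = rng.normal(size=(400, 300))
beta = np.zeros(300)
beta[rng.choice(300, 15, replace=False)] = rng.normal(size=15)
y = X @ beta + rng.normal(scale=1.0, size=400)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

pls = PLSRegression(n_components=15).fit(X_tr, y_tr)
bayes = BayesianRidge().fit(X_tr, y_tr)

print("PLS            R2:", round(r2_score(y_te, pls.predict(X_te).ravel()), 3))
print("Bayesian ridge R2:", round(r2_score(y_te, bayes.predict(X_te)), 3))
```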

  2. On-Demand Calibration and Evaluation for Electromagnetically Tracked Laparoscope in Augmented Reality Visualization

    PubMed Central

    Liu, Xinyang; Plishker, William; Zaki, George; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj

    2017-01-01

    Purpose Common camera calibration methods employed in current laparoscopic augmented reality systems require the acquisition of multiple images of an entire checkerboard pattern from various poses. This lengthy procedure prevents performing laparoscope calibration in the operating room (OR). The purpose of this work was to develop a fast calibration method for electromagnetically (EM) tracked laparoscopes, such that calibration can be performed in the OR on demand. Methods We designed a mechanical tracking mount to uniquely and snugly position an EM sensor to an appropriate location on a conventional laparoscope. A tool named fCalib was developed to calibrate intrinsic camera parameters, distortion coefficients, and extrinsic parameters (transformation between the scope lens coordinate system and the EM sensor coordinate system) using a single image that shows an arbitrary portion of a special target pattern. For quick evaluation of calibration result in the OR, we integrated a tube phantom with fCalib and overlaid a virtual representation of the tube on the live video scene. Results We compared spatial target registration error between the common OpenCV method and the fCalib method in a laboratory setting. In addition, we compared the calibration re-projection error between the EM tracking-based fCalib and the optical tracking-based fCalib in a clinical setting. Our results suggested that the proposed method is comparable to the OpenCV method. However, changing the environment, e.g., inserting or removing surgical tools, would affect re-projection accuracy for the EM tracking-based approach. Computational time of the fCalib method averaged 14.0 s (range 3.5 s – 22.7 s). Conclusions We developed and validated a prototype for fast calibration and evaluation of EM tracked conventional (forward viewing) laparoscopes. The calibration method achieved acceptable accuracy and was relatively fast and easy to be performed in the OR on demand. PMID:27250853

  3. Accurate calibration of a molecular beam time-of-flight mass spectrometer for on-line analysis of high molecular weight species.

    PubMed

    Apicella, B; Wang, X; Passaro, M; Ciajolo, A; Russo, C

    2016-10-15

    Time-of-Flight (TOF) Mass Spectrometry is a powerful analytical technique, provided that an accurate calibration by standard molecules in the same m/z range as the analytes is performed. Calibration in a very large m/z range is a difficult task, particularly in studies focusing on the detection of high molecular weight clusters of different molecules or high molecular weight species. External calibration is the most common procedure used for TOF mass spectrometric analysis in the gas phase and, generally, the only available standards are made up of mixtures of noble gases, covering a small mass range for calibration, up to m/z 136 (the heaviest isotope of xenon). In this work, an accurate calibration of a Molecular Beam Time-of-Flight Mass Spectrometer (MB-TOFMS) is presented, based on the use of water clusters up to m/z 3000. The advantages of calibrating a MB-TOFMS with water clusters for the detection of analytes with masses above those of the traditional calibrants such as noble gases were quantitatively shown by statistical calculations. A comparison of the water cluster and noble gas calibration procedures in attributing the masses to a test mixture extending up to m/z 800 is also reported. In the case of the analysis of combustion products, another important feature of water cluster calibration was shown, that is, the possibility of using clusters formed directly from the combustion water as an "internal standard" under suitable experimental conditions. The water cluster calibration of a MB-TOFMS gives rise to a ten-fold reduction in error compared to the traditional calibration with noble gases. The consequent improvement in mass accuracy in the calibration of a MB-TOFMS has important implications in various fields where detection of high molecular mass species is required. In combustion products analysis, it is also possible to obtain a new calibration spectrum before the acquisition of each spectrum by modifying only some operative conditions. Copyright © 2016 John Wiley & Sons, Ltd.
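
    The calibration itself reduces to fitting the usual TOF relation t = a*sqrt(m/z) + t0 against a ladder of reference peaks. The sketch below assumes protonated water clusters (H2O)nH+ as that ladder and uses synthetic flight times; it is not the instrument's actual calibration routine.

```python
import numpy as np

m_h2o, m_proton = 18.010565, 1.007276
n = np.arange(5, 60, 5)
mz_ref = n * m_h2o + m_proton                      # reference m/z of the cluster series

a_true, t0_true = 1.23, 0.45                       # synthetic instrument constants (us)
t_meas = a_true * np.sqrt(mz_ref) + t0_true        # synthetic peak flight times

# least-squares fit of the calibration constants from the observed peak times
A = np.column_stack([np.sqrt(mz_ref), np.ones_like(mz_ref)])
(a_fit, t0_fit), *_ = np.linalg.lstsq(A, t_meas, rcond=None)

def mz_from_time(t):
    return ((t - t0_fit) / a_fit) ** 2

print(mz_from_time(a_true * np.sqrt(800.0) + t0_true))   # ~800
```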

  4. Precise determination of N-acetylcysteine in pharmaceuticals by microchip electrophoresis.

    PubMed

    Rudašová, Marína; Masár, Marián

    2016-01-01

    A novel microchip electrophoresis method for the rapid and high-precision determination of N-acetylcysteine, a pharmaceutically active ingredient, in mucolytics has been developed. Isotachophoresis separations were carried out at pH 6.0 on a microchip with conductivity detection. The methods of external calibration and internal standard were used to evaluate the results. The internal standard method effectively eliminated variations in various working parameters, mainly run-to-run fluctuations of the injected volume. The repeatability and accuracy of N-acetylcysteine determination in all mucolytic preparations tested (Solmucol 90 and 200, and ACC Long 600) were more than satisfactory, with relative standard deviation and relative error values of <0.7% and <1.9%, respectively. A recovery range of 99-101% of N-acetylcysteine in the analyzed pharmaceuticals also qualifies the proposed method for accurate analysis. This work, in general, indicates the analytical potential of microchip isotachophoresis for the quantitative analysis of simplified samples such as pharmaceuticals that contain the analyte(s) at relatively high concentrations. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
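
    The contrast between external calibration and the internal standard approach can be shown with a toy quantitation. All peak areas, concentrations and the response factor below are invented; the point is only that the analyte/IS area ratio cancels run-to-run injection-volume drift.

```python
import numpy as np

# external calibration: straight line from standards of known concentration
conc_std = np.array([50.0, 100.0, 200.0, 400.0])       # mg/L
area_std = np.array([1020.0, 2050.0, 4080.0, 8190.0])  # detector response
slope, intercept = np.polyfit(conc_std, area_std, 1)

def external_calib(area_sample):
    return (area_sample - intercept) / slope

# internal standard: quantify from the analyte/IS area ratio and a response factor
def internal_standard(area_sample, area_is_sample, rf, conc_is):
    # rf = (area_analyte / area_IS) / (conc_analyte / conc_IS), from a standard mix
    return (area_sample / area_is_sample) / rf * conc_is

rf = (2050.0 / 1980.0) / (100.0 / 100.0)   # from a hypothetical 100/100 mg/L standard mix
print(external_calib(3100.0), internal_standard(3100.0, 1915.0, rf, 100.0))
```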

  5. Discrimination of edible oils and fats by combination of multivariate pattern recognition and FT-IR spectroscopy: a comparative study between different modeling methods.

    PubMed

    Javidnia, Katayoun; Parish, Maryam; Karimi, Sadegh; Hemmateenejad, Bahram

    2013-03-01

    By using FT-IR spectroscopy, many researchers from different disciplines enrich the experimental complexity of their research to obtain more precise information. Moreover, chemometric techniques have boosted the use of IR instruments. In the present study we aimed to emphasize the power of FT-IR spectroscopy for discrimination between different oil samples (especially fat from vegetable oils). Our data were also used to compare the performance of different classification methods. FT-IR transmittance spectra of oil samples (Corn, Colona, Sunflower, Soya, Olive, and Butter) were measured in the wave-number interval of 450-4000 cm^-1. Classification analysis was performed utilizing PLS-DA, interval PLS-DA, extended canonical variate analysis (ECVA) and interval ECVA methods. The effect of data preprocessing by extended multiplicative signal correction was investigated. While all the employed methods could distinguish butter from vegetable oils, iECVA gave the best performance for the calibration and external test sets, with 100% sensitivity and specificity. Copyright © 2012 Elsevier B.V. All rights reserved.

  6. Design of an ultra-portable field transfer radiometer supporting automated vicarious calibration

    NASA Astrophysics Data System (ADS)

    Anderson, Nikolaus; Thome, Kurtis; Czapla-Myers, Jeffrey; Biggar, Stuart

    2015-09-01

    The University of Arizona Remote Sensing Group (RSG) began outfitting the radiometric calibration test site (RadCaTS) at Railroad Valley Nevada in 2004 for automated vicarious calibration of Earth-observing sensors. RadCaTS was upgraded to use RSG custom 8-band ground viewing radiometers (GVRs) beginning in 2011 and currently four GVRs are deployed providing an average reflectance for the test site. This measurement of ground reflectance is the most critical component of vicarious calibration using the reflectance-based method. In order to ensure the quality of these measurements, RSG has been exploring more efficient and accurate methods of on-site calibration evaluation. This work describes the design of, and initial results from, a small portable transfer radiometer for the purpose of GVR calibration validation on site. Prior to deployment, RSG uses high accuracy laboratory calibration methods in order to provide radiance calibrations with low uncertainties for each GVR. After deployment, a solar radiation based calibration has typically been used. The method is highly dependent on a clear, stable atmosphere, requires at least two people to perform, is time consuming in post processing, and is dependent on several large pieces of equipment. In order to provide more regular and more accurate calibration monitoring, the small portable transfer radiometer is designed for quick, one-person operation and on-site field calibration comparison results. The radiometer is also suited for laboratory calibration use and thus could be used as a transfer radiometer calibration standard for ground viewing radiometers of a RadCalNet site.

  7. The analytical calibration in (bio)imaging/mapping of the metallic elements in biological samples--definitions, nomenclature and strategies: state of the art.

    PubMed

    Jurowski, Kamil; Buszewski, Bogusław; Piekoszewski, Wojciech

    2015-01-01

    Nowadays, studies related to the distribution of metallic elements in biological samples are one of the most important issues. There are many articles dedicated to specific analytical atomic spectrometry techniques used for mapping/(bio)imaging the metallic elements in various kinds of biological samples. However, in such literature, there is a lack of articles dedicated to reviewing calibration strategies, and their problems, nomenclature, definitions, ways and methods used to obtain quantitative distribution maps. The aim of this article was to characterize the analytical calibration in the (bio)imaging/mapping of the metallic elements in biological samples including (1) nomenclature; (2) definitions, and (3) selected and sophisticated, examples of calibration strategies with analytical calibration procedures applied in the different analytical methods currently used to study an element's distribution in biological samples/materials such as LA ICP-MS, SIMS, EDS, XRF and others. The main emphasis was placed on the procedures and methodology of the analytical calibration strategy. Additionally, the aim of this work is to systematize the nomenclature for the calibration terms: analytical calibration, analytical calibration method, analytical calibration procedure and analytical calibration strategy. The authors also want to popularize the division of calibration methods that are different than those hitherto used. This article is the first work in literature that refers to and emphasizes many different and complex aspects of analytical calibration problems in studies related to (bio)imaging/mapping metallic elements in different kinds of biological samples. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Automatic Calibration Method for Driver’s Head Orientation in Natural Driving Environment

    PubMed Central

    Fu, Xianping; Guan, Xiao; Peli, Eli; Liu, Hongbo; Luo, Gang

    2013-01-01

    Gaze tracking is crucial for studying driver attention, detecting fatigue, and improving driver assistance systems, but it is difficult in natural driving environments due to nonuniform and highly variable illumination and large head movements. Traditional calibrations that require subjects to follow calibrators are very cumbersome to implement in daily driving situations. A new automatic calibration method, based on a single camera for determining the head orientation and which utilizes the side mirrors, the rear-view mirror, the instrument board, and different zones in the windshield as calibration points, is presented in this paper. Supported by a self-learning algorithm, the system tracks the head and categorizes the head pose into 12 gaze zones based on facial features. The particle filter is used to estimate the head pose to obtain an accurate gaze zone by updating the calibration parameters. Experimental results show that, after several hours of driving, the automatic calibration method, without requiring the driver's cooperation, can achieve the same accuracy as a manual calibration method. The mean error of estimated eye gazes was less than 5° in day and night driving. PMID:24639620

  9. Risk prediction models of breast cancer: a systematic review of model performances.

    PubMed

    Anothaisintawee, Thunyarat; Teerawattananon, Yot; Wiratkapun, Chollathip; Kasamesup, Vijj; Thakkinstian, Ammarin

    2012-05-01

    An increasing number of risk prediction models have been developed for estimating breast cancer risk in individual women. However, the performance of those models is questionable. We therefore conducted a study with the aim of systematically reviewing previous risk prediction models. The results from this review help to identify the most reliable model and indicate the strengths and weaknesses of each model for guiding future model development. We searched MEDLINE (PubMed) from 1949 and EMBASE (Ovid) from 1974 until October 2010. Observational studies which constructed models using regression methods were selected. Information about model development and performance was extracted. Twenty-five out of 453 studies were eligible. Of these, 18 developed prediction models and 7 validated existing prediction models. Up to 13 variables were included in the models and sample sizes for each study ranged from 550 to 2,404,636. Internal validation was performed in four models, while five models had external validation. The Gail model and the Rosner and Colditz model were the significant models that were subsequently modified by other scholars. Calibration performance of most models was fair to good (expected/observed ratio: 0.87-1.12), but discriminatory accuracy was poor to fair both in internal validation (concordance statistics: 0.53-0.66) and in external validation (concordance statistics: 0.56-0.63). Most models yielded relatively poor discrimination in both internal and external validation. This poor discriminatory accuracy of existing models might be because of a lack of knowledge about risk factors, heterogeneous subtypes of breast cancer, and different distributions of risk factors across populations. In addition, the concordance statistic itself is insensitive to improvements in discrimination. Therefore, newer methods such as the net reclassification index should be considered to evaluate the improvement in performance of a newly developed model.

  10. Evaluation of Automated Model Calibration Techniques for Residential Building Energy Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, J.; Polly, B.; Collis, J.

    2013-09-01

    This simulation study adapts and applies the general framework described in BESTEST-EX (Judkoff et al 2010) for self-testing residential building energy model calibration methods. BEopt/DOE-2.2 is used to evaluate four mathematical calibration methods in the context of monthly, daily, and hourly synthetic utility data for a 1960's-era existing home in a cooling-dominated climate. The home's model inputs are assigned probability distributions representing uncertainty ranges, random selections are made from the uncertainty ranges to define 'explicit' input values, and synthetic utility billing data are generated using the explicit input values. The four calibration methods evaluated in this study are: an ASHRAE 1051-RP-based approach (Reddy and Maor 2006), a simplified simulated annealing optimization approach, a regression metamodeling optimization approach, and a simple output ratio calibration approach. The calibration methods are evaluated for monthly, daily, and hourly cases; various retrofit measures are applied to the calibrated models and the methods are evaluated based on the accuracy of predicted savings, computational cost, repeatability, automation, and ease of implementation.

  11. Evaluation of Automated Model Calibration Techniques for Residential Building Energy Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, Joseph; Polly, Ben; Collis, Jon

    2013-09-01

    This simulation study adapts and applies the general framework described in BESTEST-EX (Judkoff et al 2010) for self-testing residential building energy model calibration methods. BEopt/DOE-2.2 is used to evaluate four mathematical calibration methods in the context of monthly, daily, and hourly synthetic utility data for a 1960's-era existing home in a cooling-dominated climate. The home's model inputs are assigned probability distributions representing uncertainty ranges, random selections are made from the uncertainty ranges to define "explicit" input values, and synthetic utility billing data are generated using the explicit input values. The four calibration methods evaluated in this study are: an ASHRAE 1051-RP-based approach (Reddy and Maor 2006), a simplified simulated annealing optimization approach, a regression metamodeling optimization approach, and a simple output ratio calibration approach. The calibration methods are evaluated for monthly, daily, and hourly cases; various retrofit measures are applied to the calibrated models and the methods are evaluated based on the accuracy of predicted savings, computational cost, repeatability, automation, and ease of implementation.

  12. Fast wavelength calibration method for spectrometers based on waveguide comb optical filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Zhengang; Department of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai 200240; Huang, Meizhen, E-mail: mzhuang@sjtu.edu.cn

    2015-04-15

    A novel fast wavelength calibration method for spectrometers based on a standard spectrometer and a double metal-cladding waveguide comb optical filter (WCOF) is proposed and demonstrated. By using the WCOF device, a wide-spectrum beam is comb-filtered, which is very suitable for spectrometer wavelength calibration. The influence of the waveguide filter's structural parameters and the beam incident angle on the wavelength and bandwidth of the comb absorption peaks is also discussed. The verification experiments were carried out in the wavelength range of 200–1100 nm with satisfactory results. Compared with the traditional wavelength calibration method based on discrete, sparse atomic emission or absorption lines, the new method has several advantages: sufficient calibration data, high accuracy, short calibration time, suitability for the production process, stability, etc.
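
    Whatever the source of the reference peaks (comb filter or atomic lines), the final step of a wavelength calibration is typically a fit of a pixel-to-wavelength map. The sketch below illustrates that step with a polynomial fit in numpy; the peak positions and wavelengths are hypothetical, not data from the paper.

        import numpy as np

        # Hypothetical reference peaks: detector pixel index vs. known wavelength (nm)
        pixels      = np.array([ 112,  388,  655,  910, 1160, 1402, 1637, 1866])
        wavelengths = np.array([250., 350., 450., 550., 650., 750., 850., 950.])

        coeffs = np.polyfit(pixels, wavelengths, deg=3)       # cubic pixel-to-wavelength map
        residuals = wavelengths - np.polyval(coeffs, pixels)  # fit residuals at the reference peaks

        axis = np.polyval(coeffs, np.arange(2048))            # wavelength assigned to every pixel
        print("max residual:", round(abs(residuals).max(), 3), "nm")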

  13. Determination of para red, Sudan dyes, canthaxanthin, and astaxanthin in animal feeds using UPLC.

    PubMed

    Hou, Xiaolin; Li, Yonggang; Wu, Guojuan; Wang, Lei; Hong, Miao; Wu, Yongnin

    2010-01-01

    A simple high-performance liquid chromatography method was developed for quantitative determination of para red, Sudan I, Sudan II, Sudan III, Sudan IV, canthaxanthin, and astaxanthin in feedstuff. The sample was extracted using acetonitrile and cleaned up on a C(18) SPE column. The residues were analyzed using ultra-performance liquid chromatography coupled to a diode array detector at 500 nm. The mobile phase was acetonitrile-formic acid-water with a gradient elution condition. The external standard curves were calibrated. The mean recoveries of the seven colorants were 62.7-91.0% with relative standard deviation 2.6-10.4% (intra-day) and 4.0-13.2% (inter-day). The detection limits were in the range of 0.006-0.02 mg/kg.
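
    The external standard quantification mentioned above amounts to fitting a calibration curve of detector response against standard concentration and inverting it for unknowns. The sketch below shows that step for one colorant with numpy; the concentrations and peak areas are hypothetical.

        import numpy as np

        # Hypothetical external standard series: concentration (mg/L) vs. peak area (arbitrary units)
        conc = np.array([0.05, 0.1, 0.5, 1.0, 2.0, 5.0])
        area = np.array([410., 820., 4100., 8150., 16300., 40800.])

        slope, intercept = np.polyfit(conc, area, 1)   # linear external calibration curve
        r2 = np.corrcoef(conc, area)[0, 1] ** 2        # linearity check

        def quantify(sample_area):
            """Invert the calibration curve to estimate the concentration of an unknown."""
            return (sample_area - intercept) / slope

        print(f"slope={slope:.1f}, intercept={intercept:.1f}, R^2={r2:.4f}")
        print("unknown sample:", round(quantify(12500.0), 3), "mg/L")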

  14. Optimal laser wavelength for efficient laser power converter operation over temperature

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Höhn, O., E-mail: oliver.hoehn@ise.fraunhofer.de; Walker, A. W.; Bett, A. W.

    2016-06-13

    A temperature dependent modeling study is conducted on a GaAs laser power converter to identify the optimal incident laser wavelength for optical power transmission. Furthermore, the respective temperature dependent maximal conversion efficiencies in the radiative limit as well as in a practically achievable limit are presented. The model is based on the transfer matrix method coupled to a two-diode model, and is calibrated to experimental data of a GaAs photovoltaic device over laser irradiance and temperature. Since the laser wavelength does not strongly influence the open circuit voltage of the laser power converter, the optimal laser wavelength is determined to be in the range where the external quantum efficiency is maximal, but weighted by the photon flux of the laser.

  15. A Flexile and High Precision Calibration Method for Binocular Structured Light Scanning System

    PubMed Central

    Yuan, Jianying; Wang, Qiong; Li, Bailin

    2014-01-01

    3D (three-dimensional) structured light scanning systems are widely used in the fields of reverse engineering, quality inspection, and so forth. Camera calibration is the key to scanning precision. Currently, a finely processed 2D (two-dimensional) or 3D calibration reference object is usually required for high calibration precision, which is difficult to handle and costly. In this paper, a novel calibration method is proposed that uses a scale bar and some artificial coded targets placed randomly in the measuring volume. The principle of the proposed method is based on hierarchical self-calibration and bundle adjustment. Initial intrinsic parameters are obtained from the images. Initial extrinsic parameters in projective space are estimated with the factorization method and then upgraded to Euclidean space using the orthogonality of the rotation matrix and the rank-3 constraint on the absolute quadric. Finally, all camera parameters are refined through bundle adjustment. Real experiments show that the proposed method is robust and reaches the same precision level as results obtained with a delicate artificial reference object, while the hardware cost is very low compared with the calibration methods currently used in 3D structured light scanning systems. PMID:25202736

  16. Radiometric calibration method for large aperture infrared system with broad dynamic range.

    PubMed

    Sun, Zhiyuan; Chang, Songtao; Zhu, Wei

    2015-05-20

    Infrared radiometric measurements can acquire important data for missile defense systems. When observation is carried out by ground-based infrared systems, a missile is characterized by long distance, small size, and large variation of radiance. Therefore, the infrared systems should be manufactured with a larger aperture to enhance detection ability and calibrated over a broader dynamic range to extend the measurable radiance. Nevertheless, the frequently used calibration methods demand an extended-area blackbody with broad dynamic range or a huge collimator for filling the system's field stop, which would greatly increase manufacturing costs and difficulties. To overcome this restriction, a calibration method based on amendment of inner and outer calibration is proposed. First, the principles and procedures of this method are introduced. Then, a shifting strategy of infrared systems for measuring targets with large fluctuations of infrared radiance is put forward. Finally, several experiments are performed on a shortwave infrared system with a Φ400 mm aperture. The results indicate that the proposed method not only ensures the accuracy of calibration but also has the advantages of low cost, low power, and high mobility. Hence, it is an effective radiometric calibration method in the outfield.

  17. Automatic alignment method for calibration of hydrometers

    NASA Astrophysics Data System (ADS)

    Lee, Y. J.; Chang, K. H.; Chon, J. C.; Oh, C. Y.

    2004-04-01

    This paper presents a new method to automatically align specific scale-marks for the calibration of hydrometers. A hydrometer calibration system adopting the new method consists of a vision system, a stepping motor, and software to control the system. The vision system is composed of a CCD camera and a frame grabber, and is used to acquire images. The stepping motor moves the camera, which is attached to the vessel containing a reference liquid, along the hydrometer. The operating program has two main functions: to process images from the camera to find the position of the horizontal plane and to control the stepping motor for the alignment of the horizontal plane with a particular scale-mark. Any system adopting this automatic alignment method is a convenient and precise means of calibrating a hydrometer. The performance of the proposed method is illustrated by comparing the calibration results using the automatic alignment method with those obtained using the manual method.

  18. An overview of in-orbit radiometric calibration of typical satellite sensors

    NASA Astrophysics Data System (ADS)

    Zhou, G. Q.; Li, C. Y.; Yue, T.; Jiang, L. J.; Liu, N.; Sun, Y.; Li, M. Y.

    2015-06-01

    This paper reviews the development of in-orbit radiometric calibration methods over the past 40 years. It summarizes the in-orbit radiometric calibration technology of typical satellite sensors in the visible/near-infrared bands and the thermal infrared band, focusing on visible/near-infrared calibration methods, including lamp calibration and solar-radiation-based calibration. The calibration technology of the Landsat series sensors (MSS, TM, ETM+, OLI, TIRS) and the SPOT series sensors (HRV, HRS) is summarized, as well as that of ALI on EO-1 and IRMSS on the CBERS series satellites. By comparing the in-orbit radiometric calibration technology of the same type of satellite sensor across different periods, the similarities and differences of the calibration technologies are analyzed; the advantages and disadvantages of calibration technologies used by different countries' satellite sensors in the same period are also summarized.

  19. Cloned plasmid DNA fragments as calibrators for controlling GMOs: different real-time duplex quantitative PCR methods.

    PubMed

    Taverniers, Isabel; Van Bockstaele, Erik; De Loose, Marc

    2004-03-01

    Analytical real-time PCR technology is a powerful tool for implementation of the GMO labeling regulations enforced in the EU. The quality of analytical measurement data obtained by quantitative real-time PCR depends on the correct use of calibrator and reference materials (RMs). For GMO methods of analysis, the choice of appropriate RMs is currently under debate. So far, genomic DNA solutions from certified reference materials (CRMs) are most often used as calibrators for GMO quantification by means of real-time PCR. However, due to some intrinsic features of these CRMs, errors may be expected in the estimations of DNA sequence quantities. In this paper, two new real-time PCR methods are presented for Roundup Ready soybean, in which two types of plasmid DNA fragments are used as calibrators. Single-target plasmids (STPs) diluted in a background of genomic DNA were used in the first method. Multiple-target plasmids (MTPs) containing both sequences in one molecule were used as calibrators for the second method. Both methods simultaneously detect a promoter 35S sequence as GMO-specific target and a lectin gene sequence as endogenous reference target in a duplex PCR. For the estimation of relative GMO percentages both "delta C(T)" and "standard curve" approaches are tested. Delta C(T) methods are based on direct comparison of measured C(T) values of both the GMO-specific target and the endogenous target. Standard curve methods measure absolute amounts of target copies or haploid genome equivalents. A duplex delta C(T) method with STP calibrators performed at least as well as a similar method with genomic DNA calibrators from commercial CRMs. Besides this, high quality results were obtained with a standard curve method using MTP calibrators. This paper demonstrates that plasmid DNA molecules containing either one or multiple target sequences form perfect alternative calibrators for GMO quantification and are especially suitable for duplex PCR reactions.
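
    As a simplified illustration of the delta C(T) idea described above, the GMO content of a sample can be estimated from the difference between the Ct values of the GMO-specific and endogenous targets, referenced to the same difference measured on a calibrator of known GMO content. The sketch assumes equal amplification efficiencies of 100% (a factor of 2 per cycle); all Ct values are hypothetical.

        def gmo_percent(ct_gmo, ct_ref, ct_gmo_cal, ct_ref_cal, cal_percent, efficiency=2.0):
            """Relative GMO content via a delta-delta Ct comparison (equal efficiencies assumed)."""
            d_sample = ct_gmo - ct_ref       # delta Ct of the unknown sample
            d_cal = ct_gmo_cal - ct_ref_cal  # delta Ct of the calibrator (plasmid or CRM DNA)
            return cal_percent * efficiency ** (d_cal - d_sample)

        # Hypothetical Ct values: 35S target vs. lectin reference, calibrator certified at 1% GMO
        print(round(gmo_percent(29.8, 24.1, ct_gmo_cal=30.5, ct_ref_cal=24.0, cal_percent=1.0), 2), "% GMO")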

  20. Development of a droplet digital PCR assay for population analysis of aflatoxigenic and atoxigenic Aspergillus flavus mixtures in soil

    USDA-ARS?s Scientific Manuscript database

    Application of atoxigenic strains to compete against aflatoxigenic strains of A. flavus has emerged as one of the practical strategies for reducing aflatoxin contamination in food. Droplet digital PCR (ddPCR) is a new DNA quantification platform that does not require an external DNA calibrator. For ddPCR, ...

  1. 30 CFR 70.204 - Approved sampling devices; maintenance and calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... clean and in proper working condition by a person certified in accordance with § 70.202 (Certified... components of the cyclone to assure that they are clean and free of dust and dirt; (3) Examination of the...) Examination of the external tubing on the approved sampling device to assure that it is clean and free of...

  2. 30 CFR 70.204 - Approved sampling devices; maintenance and calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... clean and in proper working condition by a person certified in accordance with § 70.202 (Certified... components of the cyclone to assure that they are clean and free of dust and dirt; (3) Examination of the...) Examination of the external tubing on the approved sampling device to assure that it is clean and free of...

  3. 30 CFR 70.204 - Approved sampling devices; maintenance and calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... clean and in proper working condition by a person certified in accordance with § 70.202 (Certified... components of the cyclone to assure that they are clean and free of dust and dirt; (3) Examination of the...) Examination of the external tubing on the approved sampling device to assure that it is clean and free of...

  4. An external heat pulse method for measurement of sap flow through fruit pedicels, leaf petioles and other small-diameter stems.

    PubMed

    Clearwater, Michael J; Luo, Zhiwei; Mazzeo, Mariarosaria; Dichio, Bartolomeo

    2009-12-01

    The external heat ratio method is described for measurement of low rates of sap flow in both directions through stems and other plant organs, including fruit pedicels, with diameters up to 5 mm and flows less than 2 g h(-1). Calibration was empirical, with heat pulse velocity (v(h)) compared to gravimetric measurements of sap flow. In the four stem types tested (Actinidia sp. fruit pedicels, Schefflera arboricola petioles, Pittosporum crassifolium stems and Fagus sylvatica stems), v(h) was linearly correlated with sap velocity (v(s)) up to a v(s) of approximately 0.007 cm s(-1), equivalent to a flow of 1.8 g h(-1) through a 3-mm-diameter stem. Minimum detectable v(s) was approximately 0.0001 cm s(-1), equivalent to 0.025 g h(-1) through a 3-mm-diameter stem. Sensitivity increased with bark removal. Girdling had no effect on short-term measurements of in vivo sap flow, suggesting that phloem flows were too low to be separated from xylem flows. Fluctuating ambient temperatures increased variability in outdoor sap flow measurements. However, a consistent diurnal time-course of fruit pedicel sap flow was obtained, with flows towards 75-day-old kiwifruit lagging behind evaporative demand and peaking at 0.3 g h(-1) in the late afternoon.

  5. Local classification: Locally weighted-partial least squares-discriminant analysis (LW-PLS-DA).

    PubMed

    Bevilacqua, Marta; Marini, Federico

    2014-08-01

    The possibility of devising a simple, flexible and accurate non-linear classification method, by extending the locally weighted partial least squares (LW-PLS) approach to the cases where the algorithm is used in a discriminant way (partial least squares discriminant analysis, PLS-DA), is presented. In particular, to assess which category an unknown sample belongs to, the proposed algorithm operates by identifying which training objects are most similar to the one to be predicted and building a PLS-DA model using these calibration samples only. Moreover, the influence of the selected training samples on the local model can be further modulated by adopting a non-uniform distance-based weighting scheme which allows the farthest calibration objects to have less impact than the closest ones. The performances of the proposed locally weighted-partial least squares-discriminant analysis (LW-PLS-DA) algorithm have been tested on three simulated data sets characterized by a varying degree of non-linearity: in all cases, a classification accuracy higher than 99% on external validation samples was achieved. Moreover, when also applied to a real data set (classification of rice varieties), characterized by a high extent of non-linearity, the proposed method provided an average correct classification rate of about 93% on the test set. Based on the preliminary results shown in this paper, the performance of the proposed LW-PLS-DA approach proved to be comparable to, and in some cases better than, that obtained by other non-linear methods (k nearest neighbors, kernel-PLS-DA and, in the case of rice, counterpropagation neural networks). Copyright © 2014 Elsevier B.V. All rights reserved.
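
    A rough sketch of the local classification idea, assuming spectra in X and class labels in y: for each unknown, the k most similar calibration objects are selected and a PLS-DA model (PLS regression onto a one-hot class matrix) is built on them only. The distance-based weighting described above is omitted for brevity, and k and the number of latent variables are arbitrary placeholders.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        def lw_plsda_predict(X_train, y_train, X_test, k=30, n_components=2):
            """Locally weighted PLS-DA, simplified to unweighted k-nearest local models."""
            classes = np.unique(y_train)
            Y_train = (y_train[:, None] == classes[None, :]).astype(float)  # one-hot class coding
            predictions = []
            for x in X_test:
                d = np.linalg.norm(X_train - x, axis=1)   # similarity to the unknown sample
                idx = np.argsort(d)[:k]                   # k most similar calibration objects
                local_model = PLSRegression(n_components=n_components)
                local_model.fit(X_train[idx], Y_train[idx])
                predictions.append(classes[np.argmax(local_model.predict(x[None, :]))])
            return np.array(predictions)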

  6. Experimental Demonstration of In-Place Calibration for Time Domain Microwave Imaging System

    NASA Astrophysics Data System (ADS)

    Kwon, S.; Son, S.; Lee, K.

    2018-04-01

    In this study, the experimental demonstration of in-place calibration was conducted using the developed time domain measurement system. Experiments were conducted using three calibration methods—in-place calibration and two existing calibrations, that is, array rotation and differential calibration. The in-place calibration uses dual receivers located at an equal distance from the transmitter. The received signals at the dual receivers contain similar unwanted signals, that is, the directly received signal and antenna coupling. In contrast to the simulations, the antennas are not perfectly matched and there might be unexpected environmental errors. Thus, we experimented with the developed experimental system to demonstrate the proposed method. The possible problems with low signal-to-noise ratio and clock jitter, which may exist in time domain systems, were rectified by averaging repeatedly measured signals. The tumor was successfully detected using the three calibration methods according to the experimental results. The cross correlation was calculated using the reconstructed image of the ideal differential calibration for a quantitative comparison between the existing rotation calibration and the proposed in-place calibration. The mean value of cross correlation between the in-place calibration and ideal differential calibration was 0.80, and the mean value of cross correlation of the rotation calibration was 0.55. Furthermore, the results of simulation were compared with the experimental results to verify the in-place calibration method. A quantitative analysis was also performed, and the experimental results show a tendency similar to the simulation.

  7. Influence of Installation Errors On the Output Data of the Piezoelectric Vibrations Transducers

    NASA Astrophysics Data System (ADS)

    Kozuch, Barbara; Chelmecki, Jaroslaw; Tatara, Tadeusz

    2017-10-01

    The paper examines the influence of installation errors of piezoelectric vibration transducers on the output data. PCB Piezotronics piezoelectric accelerometers were used to perform calibrations by comparison. The measurements were performed with a TMS 9155 Calibration Workstation, version 5.4.0, at frequencies in the range of 5 Hz - 2000 Hz. Accelerometers were fixed on the calibration station in a so-called back-to-back configuration in accordance with the applicable international standard - ISO 16063-21: Methods for the calibration of vibration and shock transducers - Part 21: Vibration calibration by comparison to a reference transducer. The first accelerometer was calibrated by suitable methods with traceability to a primary reference transducer. Each subsequent calibration was performed with one setting changed relative to the original calibration. The alterations represented negligence and failures with respect to the above-mentioned standards and operating guidelines - e.g. the sensor was not tightened or the appropriate mounting substance was not applied. The method of connection required by the standards was also modified: different kinds of wax, light oil, grease and other assembly methods were used. The aim of the study was to verify the significance of the standards' requirements and to estimate their validity. The authors also wanted to highlight the most significant calibration errors and to demonstrate the relation between the various appropriate methods of connection.

  8. Analysis of anthocyanins in commercial fruit juices by using nano-liquid chromatography-electrospray-mass spectrometry and high-performance liquid chromatography with UV-vis detector.

    PubMed

    Fanali, Chiara; Dugo, Laura; D'Orazio, Giovanni; Lirangi, Melania; Dachà, Marina; Dugo, Paola; Mondello, Luigi

    2011-01-01

    Nano-LC and conventional HPLC techniques were applied for the analysis of anthocyanins present in commercial fruit juices using a capillary column of 100 μm id and a 2.1 mm id narrow-bore C(18) column. Analytes were detected by UV-Vis at 518 nm and ESI-ion trap MS with HPLC and nano-LC, respectively. Commercial blueberry juice (14 anthocyanins detected) was used to optimize chromatographic separation of analytes and other analysis parameters. Qualitative identification of anthocyanins was performed by comparing the recorded mass spectral data with those of published papers. The use of the same mobile phase composition in both techniques revealed that the miniaturized method exhibited shorter analysis time and higher sensitivity than narrow-bore chromatography. Good intra-day and day-to-day precision of retention time was obtained in both methods with values of RSD less than 3.4 and 0.8% for nano-LC and HPLC, respectively. Quantitative analysis was performed by external standard curve calibration of cyanidin-3-O-glucoside standard. Calibration curves were linear in the concentration ranges studied, 0.1-50 and 6-50 μg/mL for HPLC-UV/Vis and nano-LC-MS, respectively. LOD and LOQ values were good for both methods. In addition to commercial blueberry juice, qualitative and quantitative analysis of other juices (e.g. raspberry, sweet cherry and pomegranate) was performed. The optimized nano-LC-MS method allowed an easy and selective identification and quantification of anthocyanins in commercial fruit juices; it offered good results, shorter analysis time and reduced mobile phase volume with respect to narrow-bore HPLC. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Efficient calibration for imperfect computer models

    DOE PAGES

    Tuo, Rui; Wu, C. F. Jeff

    2015-12-01

    Many computer models contain unknown parameters which need to be estimated using physical observations. Furthermore, the calibration method based on Gaussian process models may lead to unreasonable estimates for imperfect computer models. In this work, this line of study is extended to calibration problems with stochastic physical data. We propose a novel method, called the L2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied; theoretical analysis shows that it is consistent but not efficient. Numerical examples show that the proposed method outperforms the existing ones.

  10. Augmented classical least squares multivariate spectral analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2004-02-03

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.
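
    A schematic numpy illustration of the augmentation idea, using the usual CLS notation A ≈ C K (A: calibration spectra, C: concentrations, K: pure-component spectra): the classical step estimates K by least squares, and the augmentation appends leading principal components of the calibration residuals to K so that prediction can absorb unmodeled spectral variation. This is a simplification for illustration, not the full patented algorithm.

        import numpy as np

        def acls_calibrate(A, C, n_aug=2):
            """Classical least squares with residual-based augmentation (schematic).
            A: (samples x wavelengths) calibration spectra, C: (samples x components) concentrations."""
            K, *_ = np.linalg.lstsq(C, A, rcond=None)  # CLS estimate of pure-component spectra
            R = A - C @ K                              # calibration residuals (unmodeled variation)
            _, _, Vt = np.linalg.svd(R, full_matrices=False)
            return np.vstack([K, Vt[:n_aug]])          # augment K with residual spectral shapes

        def acls_predict(a, K_aug, n_components):
            """Estimate component concentrations of a new spectrum a with the augmented model."""
            coeffs, *_ = np.linalg.lstsq(K_aug.T, a, rcond=None)
            return coeffs[:n_components]               # leading coefficients are the concentrations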

  11. Augmented Classical Least Squares Multivariate Spectral Analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2005-07-26

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.

  12. Augmented Classical Least Squares Multivariate Spectral Analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2005-01-11

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.

  13. External validation of urinary PCA3-based nomograms to individually predict prostate biopsy outcome.

    PubMed

    Auprich, Marco; Haese, Alexander; Walz, Jochen; Pummer, Karl; de la Taille, Alexandre; Graefen, Markus; de Reijke, Theo; Fisch, Margit; Kil, Paul; Gontero, Paolo; Irani, Jacques; Chun, Felix K-H

    2010-11-01

    Prior to safely adopting risk stratification tools, their performance must be tested in an external patient cohort. To assess accuracy and generalizability of previously reported, internally validated, prebiopsy prostate cancer antigen 3 (PCA3) gene-based nomograms when applied to a large, external, European cohort of men at risk of prostate cancer (PCa). Biopsy data, including urinary PCA3 score, were available for 621 men at risk of PCa who were participating in a European multi-institutional study. All patients underwent a ≥10-core prostate biopsy. Biopsy indication was based on suspicious digital rectal examination, persistently elevated prostate-specific antigen level (2.5-10 ng/ml) and/or suspicious histology (atypical small acinar proliferation of the prostate, ≥ two cores affected by high-grade prostatic intraepithelial neoplasia in first set of biopsies). PCA3 scores were assessed using the Progensa assay (Gen-Probe Inc, San Diego, CA, USA). According to the previously reported nomograms, different PCA3 score codings were used. The probability of a positive biopsy was calculated using previously published logistic regression coefficients. Predicted outcomes were compared to the actual biopsy results. Accuracy was calculated using the area under the curve as a measure of discrimination; calibration was explored graphically. Biopsy-confirmed PCa was detected in 255 (41.1%) men. Median PCA3 score of biopsy-negative versus biopsy-positive men was 20 versus 48 in the total cohort, 17 versus 47 at initial biopsy, and 37 versus 53 at repeat biopsy (all p≤0.002). External validation of all four previously reported PCA3-based nomograms demonstrated equally high accuracy (0.73-0.75) and excellent calibration. The main limitations of the study reside in its early detection setting, referral scenario, and participation of only tertiary-care centers. In accordance with the original publication, previously developed PCA3-based nomograms achieved high accuracy and sufficient calibration. These novel nomograms represent robust tools and are thus generalizable to European men at risk of harboring PCa. Consequently, in presence of a PCA3 score, these nomograms may be safely used to assist clinicians when prostate biopsy is contemplated. Copyright © 2010 European Association of Urology. Published by Elsevier B.V. All rights reserved.
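
    The external-validation step described above can be sketched as follows, assuming the published logistic regression intercept and coefficients and the new cohort's covariates are at hand: predicted probabilities are obtained by applying the fixed coefficients and then compared with the observed biopsy outcomes. The coefficients, covariates, and outcomes below are hypothetical.

        import numpy as np
        from sklearn.metrics import roc_auc_score

        def nomogram_probability(X, intercept, coefs):
            """Apply previously published logistic regression coefficients to external data X."""
            return 1.0 / (1.0 + np.exp(-(intercept + X @ coefs)))

        # Hypothetical published model and external cohort (illustration only)
        intercept, coefs = -2.1, np.array([0.015, 0.9, 0.012])     # e.g. PSA, DRE, PCA3 score
        rng = np.random.default_rng(1)
        X = np.column_stack([rng.uniform(2.5, 10, 600),            # PSA (ng/ml)
                             rng.integers(0, 2, 600),               # suspicious DRE (0/1)
                             rng.uniform(5, 120, 600)])             # PCA3 score
        y = rng.binomial(1, nomogram_probability(X, intercept, coefs))  # simulated biopsy outcomes

        print("external AUC:", round(roc_auc_score(y, nomogram_probability(X, intercept, coefs)), 2))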

  14. Markets, Herding and Response to External Information.

    PubMed

    Carro, Adrián; Toral, Raúl; San Miguel, Maxi

    2015-01-01

    We focus on the influence of external sources of information upon financial markets. In particular, we develop a stochastic agent-based market model characterized by a certain herding behavior as well as allowing traders to be influenced by an external dynamic signal of information. This signal can be interpreted as a time-varying advertising, public perception or rumor, in favor or against one of two possible trading behaviors, thus breaking the symmetry of the system and acting as a continuously varying exogenous shock. As an illustration, we use a well-known German Indicator of Economic Sentiment as information input and compare our results with Germany's leading stock market index, the DAX, in order to calibrate some of the model parameters. We study the conditions for the ensemble of agents to more accurately follow the information input signal. The response of the system to the external information is maximal for an intermediate range of values of a market parameter, suggesting the existence of three different market regimes: amplification, precise assimilation and undervaluation of incoming information.

  15. On-demand calibration and evaluation for electromagnetically tracked laparoscope in augmented reality visualization.

    PubMed

    Liu, Xinyang; Plishker, William; Zaki, George; Kang, Sukryool; Kane, Timothy D; Shekhar, Raj

    2016-06-01

    Common camera calibration methods employed in current laparoscopic augmented reality systems require the acquisition of multiple images of an entire checkerboard pattern from various poses. This lengthy procedure prevents performing laparoscope calibration in the operating room (OR). The purpose of this work was to develop a fast calibration method for electromagnetically (EM) tracked laparoscopes, such that the calibration can be performed in the OR on demand. We designed a mechanical tracking mount to uniquely and snugly position an EM sensor to an appropriate location on a conventional laparoscope. A tool named fCalib was developed to calibrate intrinsic camera parameters, distortion coefficients, and extrinsic parameters (transformation between the scope lens coordinate system and the EM sensor coordinate system) using a single image that shows an arbitrary portion of a special target pattern. For quick evaluation of calibration results in the OR, we integrated a tube phantom with fCalib prototype and overlaid a virtual representation of the tube on the live video scene. We compared spatial target registration error between the common OpenCV method and the fCalib method in a laboratory setting. In addition, we compared the calibration re-projection error between the EM tracking-based fCalib and the optical tracking-based fCalib in a clinical setting. Our results suggest that the proposed method is comparable to the OpenCV method. However, changing the environment, e.g., inserting or removing surgical tools, might affect re-projection accuracy for the EM tracking-based approach. Computational time of the fCalib method averaged 14.0 s (range 3.5 s-22.7 s). We developed and validated a prototype for fast calibration and evaluation of EM tracked conventional (forward viewing) laparoscopes. The calibration method achieved acceptable accuracy and was relatively fast and easy to be performed in the OR on demand.
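
    For contrast with the single-image fCalib procedure, a condensed sketch of the conventional multi-image OpenCV checkerboard calibration that such methods aim to replace is given below, assuming several views of a checkerboard with 9 x 6 inner corners have been captured; the file-name pattern is a placeholder.

        import glob
        import cv2
        import numpy as np

        pattern = (9, 6)                                    # inner corners of the checkerboard
        objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)  # board coordinates

        obj_points, img_points = [], []
        for fname in glob.glob("calib_*.png"):              # placeholder file names
            gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
            found, corners = cv2.findChessboardCorners(gray, pattern)
            if found:
                corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1),
                                           (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
                obj_points.append(objp)
                img_points.append(corners)

        # Intrinsics, distortion coefficients and per-view extrinsics from all detected views
        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, gray.shape[::-1], None, None)
        print("re-projection RMS:", rms)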

  16. Determining geometric error model parameters of a terrestrial laser scanner through Two-face, Length-consistency, and Network methods

    PubMed Central

    Wang, Ling; Muralikrishnan, Bala; Rachakonda, Prem; Sawyer, Daniel

    2017-01-01

    Terrestrial laser scanners (TLS) are increasingly used in large-scale manufacturing and assembly where required measurement uncertainties are on the order of few tenths of a millimeter or smaller. In order to meet these stringent requirements, systematic errors within a TLS are compensated in-situ through self-calibration. In the Network method of self-calibration, numerous targets distributed in the work-volume are measured from multiple locations with the TLS to determine parameters of the TLS error model. In this paper, we propose two new self-calibration methods, the Two-face method and the Length-consistency method. The Length-consistency method is proposed as a more efficient way of realizing the Network method where the length between any pair of targets from multiple TLS positions are compared to determine TLS model parameters. The Two-face method is a two-step process. In the first step, many model parameters are determined directly from the difference between front-face and back-face measurements of targets distributed in the work volume. In the second step, all remaining model parameters are determined through the Length-consistency method. We compare the Two-face method, the Length-consistency method, and the Network method in terms of the uncertainties in the model parameters, and demonstrate the validity of our techniques using a calibrated scale bar and front-face back-face target measurements. The clear advantage of these self-calibration methods is that a reference instrument or calibrated artifacts are not required, thus significantly lowering the cost involved in the calibration process. PMID:28890607

  17. Weld Development for Aluminum Fission Chamber

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cross, Carl Edward; Martinez, Jesse Norris

    2017-05-16

    The Sigma welding team was approached to help fabricate a small fission chamber (roughly ½ inch dia. x ½ inch tall cylinder). These chambers are used as radiation sensors that contain small traces of radionuclides (Cf 252, U 235, and U 238) that serve to ionize gas atoms in addition to external radiation. When a voltage is applied within the chamber, the resulting ion flow can be calibrated and monitored. Aluminum has the advantage of not forming radioactive compounds when exposed to high external radiation (except from minor Na alloy content). Since aluminum has not been used before in this application, this presented an unexplored challenge.

  18. External comparisons of reprocessed SBUV/TOMS ozone data

    NASA Technical Reports Server (NTRS)

    Wellemeyer, C. G.; Taylor, S. L.; Singh, R. R.; Mcpeters, R. D.

    1994-01-01

    Ozone Retrievals from the Solar Backscatter Ultraviolet (SBUV) Instrument on-board the Nimbus-7 Satellite have been reprocessed using an improved internal calibration. The resulting data set covering November, 1978 through January, 1987 has been archived at the National Space Science Data Center in Greenbelt, Maryland. The reprocessed SBUV total ozone data as well as recalibrated Total Ozone Mapping Spectrometer (TOMS) data are compared with total ozone measurements from a network of ground based Dobson spectrophotometers. The SBUV also measures the vertical distribution of ozone, and these measurements are compared with external measurements made by SAGE II, Umkehr, and Ozonesondes. Special attention is paid to long-term changes in ozone bias.

  19. Developing and refining NIR calibrations for total carbohydrate composition and isoflavones and saponins in ground whole soy meal

    USDA-ARS?s Scientific Manuscript database

    Although many near infrared (NIR) spectrometric calibrations exist for a variety of components in soy, current calibration methods are often limited by either a small sample size on which the calibrations are based or a wide variation in sample preparation and measurement methods, which yields unrel...

  20. Multiplexed fluctuation-dissipation-theorem calibration of optical tweezers inside living cells

    NASA Astrophysics Data System (ADS)

    Yan, Hao; Johnston, Jessica F.; Cahn, Sidney B.; King, Megan C.; Mochrie, Simon G. J.

    2017-11-01

    In order to apply optical tweezers-based force measurements within an uncharacterized viscoelastic medium such as the cytoplasm of a living cell, a quantitative calibration method that may be applied in this complex environment is needed. We describe an improved version of the fluctuation-dissipation-theorem calibration method, which has been developed to perform in situ calibration in viscoelastic media without prior knowledge of the trapped object. Using this calibration procedure, it is possible to extract values of the medium's viscoelastic moduli as well as the force constant describing the optical trap. To demonstrate our method, we calibrate an optical trap in water, in polyethylene oxide solutions of different concentrations, and inside living fission yeast (S. pombe).

  1. Flight Test Results of an Angle of Attack and Angle of Sideslip Calibration Method Using Output-Error Optimization

    NASA Technical Reports Server (NTRS)

    Siu, Marie-Michele; Martos, Borja; Foster, John V.

    2013-01-01

    As part of a joint partnership between the NASA Aviation Safety Program (AvSP) and the University of Tennessee Space Institute (UTSI), research on advanced air data calibration methods has been in progress. This research was initiated to expand a novel pitot-static calibration method that was developed to allow rapid in-flight calibration for the NASA Airborne Subscale Transport Aircraft Research (AirSTAR) facility. This approach uses Global Positioning System (GPS) technology coupled with modern system identification methods that rapidly compute optimal pressure error models over a range of airspeeds with defined confidence bounds. Subscale flight tests demonstrated small 2-sigma error bounds with a significant reduction in test time compared to other methods. Recent UTSI full-scale flight tests have shown airspeed calibrations with the same accuracy as, or better than, the Federal Aviation Administration (FAA) accepted GPS 'four-leg' method in a smaller test area and in less time. The current research was motivated by the desire to extend this method to in-flight calibration of angle of attack (AOA) and angle of sideslip (AOS) flow vanes. An instrumented Piper Saratoga research aircraft from the UTSI was used to collect the flight test data and evaluate flight test maneuvers. Results showed that the output-error approach produces good results for flow vane calibration. In addition, maneuvers for pitot-static and flow vane calibration can be integrated to enable simultaneous and efficient testing of each system.

  2. Improved absolute calibration of LOPES measurements and its impact on the comparison with REAS 3.11 and CoREAS simulations

    NASA Astrophysics Data System (ADS)

    Apel, W. D.; Arteaga-Velázquez, J. C.; Bähren, L.; Bekk, K.; Bertaina, M.; Biermann, P. L.; Blümer, J.; Bozdog, H.; Brancus, I. M.; Cantoni, E.; Chiavassa, A.; Daumiller, K.; de Souza, V.; Di Pierro, F.; Doll, P.; Engel, R.; Falcke, H.; Fuchs, B.; Gemmeke, H.; Grupen, C.; Haungs, A.; Heck, D.; Hiller, R.; Hörandel, J. R.; Horneffer, A.; Huber, D.; Huege, T.; Isar, P. G.; Kampert, K.-H.; Kang, D.; Krömer, O.; Kuijpers, J.; Link, K.; Łuczak, P.; Ludwig, M.; Mathes, H. J.; Melissas, M.; Morello, C.; Nehls, S.; Oehlschläger, J.; Palmieri, N.; Pierog, T.; Rautenberg, J.; Rebel, H.; Roth, M.; Rühle, C.; Saftoiu, A.; Schieler, H.; Schmidt, A.; Schoo, S.; Schröder, F. G.; Sima, O.; Toma, G.; Trinchero, G. C.; Weindl, A.; Wochele, J.; Zabierowski, J.; Zensus, J. A.

    2016-02-01

    LOPES was a digital antenna array detecting the radio emission of cosmic-ray air showers. The calibration of the absolute amplitude scale of the measurements was done using an external, commercial reference source, which emits a frequency comb with defined amplitudes. Recently, we obtained improved reference values by the manufacturer of the reference source, which significantly changed the absolute calibration of LOPES. We reanalyzed previously published LOPES measurements, studying the impact of the changed calibration. The main effect is an overall decrease of the LOPES amplitude scale by a factor of 2.6 ± 0.2, affecting all previously published values for measurements of the electric-field strength. This results in a major change in the conclusion of the paper 'Comparing LOPES measurements of air-shower radio emission with REAS 3.11 and CoREAS simulations' published by Apel et al. (2013) : With the revised calibration, LOPES measurements now are compatible with CoREAS simulations, but in tension with REAS 3.11 simulations. Since CoREAS is the latest version of the simulation code incorporating the current state of knowledge on the radio emission of air showers, this new result indicates that the absolute amplitude prediction of current simulations now is in agreement with experimental data.

  3. Development of Rapid, Continuous Calibration Techniques and Implementation as a Prototype System for Civil Engineering Materials Evaluation

    NASA Astrophysics Data System (ADS)

    Scott, M. L.; Gagarin, N.; Mekemson, J. R.; Chintakunta, S. R.

    2011-06-01

    Until recently, civil engineering material calibration data could only be obtained from material sample cores or via time consuming, stationary calibration measurements in a limited number of locations. Calibration data are used to determine material propagation velocities of electromagnetic waves in test materials for use in layer thickness measurements and subsurface imaging. Limitations these calibration methods impose have been a significant impediment to broader use of nondestructive evaluation methods such as ground-penetrating radar (GPR). In 2006, a new rapid, continuous calibration approach was designed using simulation software to address these measurement limitations during a Federal Highway Administration (FHWA) research and development effort. This continuous calibration method combines a digitally-synthesized step-frequency (SF)-GPR array and a data collection protocol sequence for the common midpoint (CMP) method. Modeling and laboratory test results for various data collection protocols and materials are presented in this paper. The continuous-CMP concept was finally implemented for FHWA in a prototype demonstration system called the Advanced Pavement Evaluation (APE) system in 2009. Data from the continuous-CMP protocol is processed using a semblance/coherency analysis to determine material propagation velocities. Continuously calibrated pavement thicknesses measured with the APE system in 2009 are presented. This method is efficient, accurate, and cost-effective.

  4. User-friendly freehand ultrasound calibration using Lego bricks and automatic registration.

    PubMed

    Xiao, Yiming; Yan, Charles Xiao Bo; Drouin, Simon; De Nigris, Dante; Kochanowska, Anna; Collins, D Louis

    2016-09-01

    As an inexpensive, noninvasive, and portable clinical imaging modality, ultrasound (US) has been widely employed in many interventional procedures for monitoring potential tissue deformation, surgical tool placement, and locating surgical targets. The application requires the spatial mapping between 2D US images and 3D coordinates of the patient. Although positions of the devices (i.e., ultrasound transducer) and the patient can be easily recorded by a motion tracking system, the spatial relationship between the US image and the tracker attached to the US transducer needs to be estimated through an US calibration procedure. Previously, various calibration techniques have been proposed, where a spatial transformation is computed to match the coordinates of corresponding features in a physical phantom and those seen in the US scans. However, most of these methods are difficult to use for novel users. We proposed an ultrasound calibration method by constructing a phantom from simple Lego bricks and applying an automated multi-slice 2D-3D registration scheme without volumetric reconstruction. The method was validated for its calibration accuracy and reproducibility. Our method yields a calibration accuracy of [Formula: see text] mm and a calibration reproducibility of 1.29 mm. We have proposed a robust, inexpensive, and easy-to-use ultrasound calibration method.

  5. Development of rapid, continuous calibration techniques and implementation as a prototype system for civil engineering materials evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott, M. L.; Gagarin, N.; Mekemson, J. R.

    Until recently, civil engineering material calibration data could only be obtained from material sample cores or via time consuming, stationary calibration measurements in a limited number of locations. Calibration data are used to determine material propagation velocities of electromagnetic waves in test materials for use in layer thickness measurements and subsurface imaging. Limitations these calibration methods impose have been a significant impediment to broader use of nondestructive evaluation methods such as ground-penetrating radar (GPR). In 2006, a new rapid, continuous calibration approach was designed using simulation software to address these measurement limitations during a Federal Highway Administration (FHWA) research and development effort. This continuous calibration method combines a digitally-synthesized step-frequency (SF)-GPR array and a data collection protocol sequence for the common midpoint (CMP) method. Modeling and laboratory test results for various data collection protocols and materials are presented in this paper. The continuous-CMP concept was finally implemented for FHWA in a prototype demonstration system called the Advanced Pavement Evaluation (APE) system in 2009. Data from the continuous-CMP protocol is processed using a semblance/coherency analysis to determine material propagation velocities. Continuously calibrated pavement thicknesses measured with the APE system in 2009 are presented. This method is efficient, accurate, and cost-effective.

  6. Detection Angle Calibration of Pressure-Sensitive Paints

    NASA Technical Reports Server (NTRS)

    Bencic, Timothy J.

    2000-01-01

    Uses of the pressure-sensitive paint (PSP) techniques in areas other than external aerodynamics continue to expand. The NASA Glenn Research Center has become a leader in the application of the global technique to non-conventional aeropropulsion applications including turbomachinery testing. The use of the global PSP technique in turbomachinery applications often requires detection of the luminescent paint in confined areas. With the limited viewing usually available, highly oblique illumination and detection angles are common in the confined areas in these applications. This paper will describe the results of pressure, viewing and excitation angle dependence calibrations using three popular PSP formulations to get a better understanding of the errors associated with these non-traditional views.

  7. IMU-Based Online Kinematic Calibration of Robot Manipulator

    PubMed Central

    2013-01-01

    Robot calibration is a useful diagnostic method for improving positioning accuracy in robot production and maintenance. An online robot self-calibration method based on an inertial measurement unit (IMU) is presented in this paper. The method requires that the IMU be rigidly attached to the robot manipulator, which makes it possible to obtain the orientation of the manipulator from the orientation of the IMU in real time. This paper proposes an efficient approach which incorporates the Factored Quaternion Algorithm (FQA) and a Kalman Filter (KF) to estimate the orientation of the IMU. Then, an Extended Kalman Filter (EKF) is used to estimate kinematic parameter errors. Using this proposed orientation estimation method results in improved reliability and accuracy in determining the orientation of the manipulator. Compared with existing vision-based self-calibration methods, the great advantage of this method is that it does not need complex steps, such as camera calibration, image capture, and corner detection, which makes the robot calibration procedure more autonomous in a dynamic manufacturing environment. Experimental studies on a GOOGOL GRB3016 robot show that this method has better accuracy, convenience, and effectiveness than vision-based methods. PMID:24302854

  8. Extrinsic Calibration of Camera Networks Based on Pedestrians

    PubMed Central

    Guan, Junzhi; Deboeverie, Francis; Slembrouck, Maarten; Van Haerenborgh, Dirk; Van Cauwelaert, Dimitri; Veelaert, Peter; Philips, Wilfried

    2016-01-01

    In this paper, we propose a novel extrinsic calibration method for camera networks by analyzing tracks of pedestrians. First of all, we extract the center lines of walking persons by detecting their heads and feet in the camera images. We propose an easy and accurate method to estimate the 3D positions of the head and feet w.r.t. a local camera coordinate system from these center lines. We also propose a RANSAC-based orthogonal Procrustes approach to compute relative extrinsic parameters connecting the coordinate systems of cameras in a pairwise fashion. Finally, we refine the extrinsic calibration matrices using a method that minimizes the reprojection error. While existing state-of-the-art calibration methods explore epipolar geometry and use image positions directly, the proposed method first computes 3D positions per camera and then fuses the data. This results in simpler computations and a more flexible and accurate calibration method. Another advantage of our method is that it can also handle the case of persons walking along straight lines, which cannot be handled by most of the existing state-of-the-art calibration methods since all head and feet positions are co-planar. This situation often happens in real life. PMID:27171080
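
    The pairwise step described above reduces to an orthogonal Procrustes problem: find the rotation and translation relating the 3D head and feet positions expressed in two camera coordinate systems. A bare-bones SVD solution (without the RANSAC outlier rejection used in the paper) might look like this sketch.

        import numpy as np

        def rigid_transform(P, Q):
            """Least-squares rotation R and translation t such that Q ~ R @ P + t (Kabsch/Procrustes).
            P, Q: (3, N) corresponding 3D points in the two camera coordinate systems."""
            p_mean, q_mean = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
            H = (P - p_mean) @ (Q - q_mean).T           # cross-covariance of the centered point sets
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
            R = Vt.T @ D @ U.T
            t = q_mean - R @ p_mean
            return R, t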

  9. Hybrid Geometric Calibration Method for Multi-Platform Spaceborne SAR Image with Sparse Gcps

    NASA Astrophysics Data System (ADS)

    Lv, G.; Tang, X.; Ai, B.; Li, T.; Chen, Q.

    2018-04-01

    Geometric calibration is able to provide high-accuracy geometric coordinates of spaceborne SAR images through accurate geometric parameters in the Range-Doppler model estimated from ground control points (GCPs). However, it is very difficult to obtain GCPs covering large-scale areas, especially in mountainous regions. In addition, the traditional calibration method is only used for single-platform SAR images and cannot support hybrid geometric calibration of multi-platform images. To solve the above problems, a hybrid geometric calibration method for multi-platform spaceborne SAR images with sparse GCPs is proposed in this paper. First, we calibrate the master image that contains GCPs. Secondly, the point tracking algorithm is used to obtain the tie points (TPs) between the master and slave images. Finally, we calibrate the slave images using the TPs as GCPs. We take the Beijing-Tianjin-Hebei region as an example to study the SAR image hybrid geometric calibration method, using 3 TerraSAR-X images, 3 TanDEM-X images and 5 GF-3 images covering more than 235 kilometers in the north-south direction. Geometric calibration of all images is completed using only 5 GCPs. The GPS data extracted from a GNSS receiver are used to assess the plane accuracy after calibration. The results after geometric calibration with sparse GCPs show that the geometric positioning accuracy is 3 m for TSX/TDX images and 7.5 m for GF-3 images.

  10. Standardization of glycohemoglobin results and reference values in whole blood studied in 103 laboratories using 20 methods.

    PubMed

    Weykamp, C W; Penders, T J; Miedema, K; Muskiet, F A; van der Slik, W

    1995-01-01

    We investigated the effect of calibration with lyophilized calibrators on whole-blood glycohemoglobin (glyHb) results. One hundred three laboratories, using 20 different methods, determined glyHb in two lyophilized calibrators and two whole-blood samples. For whole-blood samples with low (5%) and high (9%) glyHb percentages, respectively, calibration decreased overall interlaboratory variation (CV) from 16% to 9% and from 11% to 6% and decreased intermethod variation from 14% to 6% and from 12% to 5%. Forty-seven laboratories, using 14 different methods, determined mean glyHb percentages in self-selected groups of 10 nondiabetic volunteers each. With calibration their overall mean (2SD) was 5.0% (0.5%), very close to the 5.0% (0.3%) derived from the reference method used in the Diabetes Control and Complications Trial. In both experiments the Abbott IMx and Vision showed deviating results. We conclude that, irrespective of the analytical method used, calibration enables standardization of glyHb results, reference values, and interpretation criteria.
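
    The effect of calibration reported above can be illustrated by a simple two-point linear recalibration: each laboratory maps its raw glyHb readings through a line fitted to the assigned values of the two lyophilized calibrators. All numbers below are hypothetical.

        import numpy as np

        assigned = np.array([5.5, 9.5])    # assigned glyHb values of the two calibrators (%)
        measured = np.array([6.2, 10.9])   # one laboratory's raw results for the calibrators (%)

        slope, intercept = np.polyfit(measured, assigned, 1)   # laboratory-specific correction line

        def recalibrate(raw_result):
            """Convert a raw whole-blood result to the calibrator-traceable scale."""
            return slope * raw_result + intercept

        print(round(recalibrate(10.1), 2), "% glyHb")   # corrected patient result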

  11. Input variable selection and calibration data selection for storm water quality regression models.

    PubMed

    Sun, Siao; Bertrand-Krajewski, Jean-Luc

    2013-01-01

    Storm water quality models are useful tools in storm water management. Interest has been growing in analyzing existing data to develop models for urban storm water quality evaluations. It is important to select appropriate model inputs when many candidate explanatory variables are available, and model calibration and verification are essential steps in any storm water quality modeling. This study investigates input variable selection and calibration data selection in storm water quality regression models. The two selection problems interact with each other, so a procedure is developed to fulfil the two selection tasks in sequence. The procedure first selects model input variables using a cross validation method. An appropriate number of variables is identified as model inputs to ensure that the model is neither overfitted nor underfitted. Based on the input selection results, calibration data selection is then studied. Uncertainty of model performance due to calibration data selection is investigated with a random selection method, and an approach using the cluster method is applied in order to enhance model calibration practice based on the principle of selecting representative data for calibration. The comparison between results from the cluster selection method and random selection shows that the former can significantly improve the performance of calibrated models. It is found that the information content in the calibration data is important in addition to its size.
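
    A compact sketch of the cross-validated input selection step, assuming candidate explanatory variables in X and a water-quality target in y; forward selection with scikit-learn is used here, and the "neither overfitted nor underfitted" stopping rule is reduced to keeping the subset size with the best mean cross-validation score.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import cross_val_score

        def forward_select(X, y, max_vars=None, cv=5):
            """Greedy forward selection of regression inputs by cross-validated R^2."""
            remaining = list(range(X.shape[1]))
            selected, history = [], []
            max_vars = max_vars or X.shape[1]
            while remaining and len(selected) < max_vars:
                scores = [(cross_val_score(LinearRegression(), X[:, selected + [j]], y, cv=cv).mean(), j)
                          for j in remaining]
                best_score, best_j = max(scores)
                selected.append(best_j)
                remaining.remove(best_j)
                history.append((list(selected), best_score))
            # keep the subset size with the highest mean CV score
            return max(history, key=lambda h: h[1])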

  12. Risk assessment model for development of advanced age-related macular degeneration.

    PubMed

    Klein, Michael L; Francis, Peter J; Ferris, Frederick L; Hamon, Sara C; Clemons, Traci E

    2011-12-01

    To design a risk assessment model for development of advanced age-related macular degeneration (AMD) incorporating phenotypic, demographic, environmental, and genetic risk factors. We evaluated longitudinal data from 2846 participants in the Age-Related Eye Disease Study. At baseline, these individuals had all levels of AMD, ranging from none to unilateral advanced AMD (neovascular or geographic atrophy). Follow-up averaged 9.3 years. We performed a Cox proportional hazards analysis with demographic, environmental, phenotypic, and genetic covariates and constructed a risk assessment model for development of advanced AMD. Performance of the model was evaluated using the C statistic and the Brier score and externally validated in participants in the Complications of Age-Related Macular Degeneration Prevention Trial. The final model included the following independent variables: age, smoking history, family history of AMD (first-degree member), phenotype based on a modified Age-Related Eye Disease Study simple scale score, and genetic variants CFH Y402H and ARMS2 A69S. The model did well on performance measures, with very good discrimination (C statistic = 0.872) and excellent calibration and overall performance (Brier score at 5 years = 0.08). Successful external validation was performed, and a risk assessment tool was designed for use with or without the genetic component. We constructed a risk assessment model for development of advanced AMD. The model performed well on measures of discrimination, calibration, and overall performance and was successfully externally validated. This risk assessment tool is available for online use.
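
    A small sketch of the modeling step, assuming the lifelines library is available and using simulated follow-up data with a few of the risk factors listed above (ages, smoking status, and a simple-scale phenotype score are all fabricated for illustration); the fitted Cox proportional hazards model reports an in-sample concordance index.

        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(2)
        n = 300
        age = rng.uniform(55, 80, n)
        smoker = rng.integers(0, 2, n)
        scale = rng.integers(0, 5, n)                              # simple-scale phenotype score (0-4)
        hazard = np.exp(0.04 * (age - 65) + 0.5 * smoker + 0.6 * scale)
        time = np.minimum(rng.exponential(20.0 / hazard), 10.0)    # follow-up capped at 10 years
        event = (time < 10.0).astype(int)                          # 1 = progressed to advanced AMD

        df = pd.DataFrame({"years": time, "advanced_amd": event,
                           "age": age, "smoker": smoker, "simple_scale": scale})
        cph = CoxPHFitter()
        cph.fit(df, duration_col="years", event_col="advanced_amd")  # proportional hazards regression
        print("in-sample C statistic:", round(cph.concordance_index_, 2))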

  13. Was The 01.09.2001 Etarpas Rockfall Detectable? Answer Using A GIS Approach

    NASA Astrophysics Data System (ADS)

    Baillifard, F.; Jaboyedoff, M.; Rouiller, J.-D.; Sartori, M.

    As a general rule, "a posteriori" studies of rock slope instabilities show that rockfalls do not occur at random locations. First, many geomorphological features allow the rupture zone to be identified as sensitive; secondly, external factors such as groundwater circulation, freeze-thaw cycles, etc., impose long-term stresses on the rock mass, and thus a reduction in resistance along the discontinuities and the probable, progressive rupture of the thrust. Once the sensitive zones are detected, the overall activity induced by the external factors must be assessed, and the probability of rupture can be evaluated. Taking advantage of a 2'000 m3 rockfall that occurred on January 9th, 2001, along a mountain road near Sion (Switzerland), a simple method to detect rock slope instabilities was tested. In order to locate sensitive areas, a set of five criteria was chosen, using available GIS-formatted data such as vectorized topographic and geological maps and a 25 m grid DTM. The chosen criteria are: the presence of faults and screes within a short distance, the presence of a rock face, a steep slope, and a road. This scaling leads to a linear rating from 0 to 5. The location of the 01.09.01 rockfall obtains a score of 5. Applied to the entire length of the road (4 km), the method indicates two other areas that are highly sensitive to rupture, thereby detecting the main instabilities along this road. Such methods, based on readily available parameters, now have to be applied to larger areas; they must also be calibrated against a survey of past events. The studied rockfall area is affected by a high probability of rupture, provided that some necessary criteria are met: first, the structural pattern has to be unfavorable; secondly, the morphological conditions have to be favorable to the action of external factors.
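
    The rating scheme lends itself to a very small raster sketch: each of the five criteria contributes one point per grid cell, so cells scoring 5 are the most sensitive. The Python example below uses randomly generated boolean layers as stand-ins for the GIS data (fault and scree proximity, rock face, slope, road distance); the layers and thresholds are assumptions, not the study's actual maps.

```python
# Sketch of the 0-5 susceptibility rating on a grid: each criterion adds one point
# per cell. The boolean rasters are placeholders for real GIS layers.
import numpy as np

shape = (200, 200)                       # illustrative extent of a 25 m grid DTM
rng = np.random.default_rng(2)

near_fault  = rng.random(shape) < 0.20   # fault within a short distance
near_scree  = rng.random(shape) < 0.20   # scree deposit within a short distance
rock_face   = rng.random(shape) < 0.15   # cell mapped as rock face
steep_slope = rng.random(shape) < 0.30   # slope above a threshold, from the DTM
near_road   = rng.random(shape) < 0.10   # road within a short distance

score = near_fault.astype(int) + near_scree + rock_face + steep_slope + near_road
print("cells with the maximum rating of 5:", int((score == 5).sum()))
```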

  14. An Improved Fast Self-Calibration Method for Hybrid Inertial Navigation System under Stationary Condition

    PubMed Central

    Liu, Bingqi; Wei, Shihui; Su, Guohua; Wang, Jiping; Lu, Jiazhen

    2018-01-01

    The navigation accuracy of an inertial navigation system (INS) can be greatly improved when inertial measurement unit (IMU) errors, such as gyro drifts and accelerometer biases, are effectively calibrated and compensated. To reduce the turntable-precision requirement of the classical calibration method, a continuous dynamic self-calibration method based on a three-axis rotating frame is presented for the hybrid inertial navigation system. First, by selecting a suitable IMU frame, the error models of the accelerometers and gyros are established. Then, taking the navigation errors during rolling as the observations, all twenty-one error parameters of the hybrid inertial navigation system (HINS) are identified through calculation of the intermediate parameter. Experiments verify that the method can identify all HINS error parameters and achieves accuracy equivalent to classical calibration on a high-precision turntable. In addition, the method is rapid, simple and feasible. PMID:29695041
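
    The abstract does not spell out the twenty-one-parameter error model, so the sketch below shows only the generic form such self-calibration methods typically estimate: per-axis biases, scale factors, and misalignment (cross-coupling) terms for a sensor triad. The numerical values are arbitrary placeholders, not the paper's identified parameters.

```python
# Generic IMU sensor error model: measured = (I + scale + misalignment) @ true + bias (+ noise).
import numpy as np

def apply_sensor_errors(true_xyz, bias, scale, misalign, rng=None):
    """Map a true specific force / angular rate vector to a 'measured' one."""
    M = np.eye(3) + np.diag(scale) + misalign       # scale factors on the diagonal,
    meas = M @ true_xyz + bias                      # cross-coupling off the diagonal
    if rng is not None:
        meas += rng.normal(scale=1e-4, size=3)      # sensor noise
    return meas

rng = np.random.default_rng(3)
accel_bias  = np.array([2e-3, -1e-3, 5e-4])         # m/s^2 (illustrative)
accel_scale = np.array([3e-4, -2e-4, 1e-4])         # dimensionless (illustrative)
accel_mis   = np.array([[0, 1e-4, -2e-4],
                        [0, 0,     1e-4],
                        [0, 0,     0   ]])          # small-angle cross-coupling
g_true = np.array([0.0, 0.0, 9.80665])              # stationary case: gravity only
print(apply_sensor_errors(g_true, accel_bias, accel_scale, accel_mis, rng))
```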

  15. An Improved Fast Self-Calibration Method for Hybrid Inertial Navigation System under Stationary Condition.

    PubMed

    Liu, Bingqi; Wei, Shihui; Su, Guohua; Wang, Jiping; Lu, Jiazhen

    2018-04-24

    The navigation accuracy of an inertial navigation system (INS) can be greatly improved when inertial measurement unit (IMU) errors, such as gyro drifts and accelerometer biases, are effectively calibrated and compensated. To reduce the turntable-precision requirement of the classical calibration method, a continuous dynamic self-calibration method based on a three-axis rotating frame is presented for the hybrid inertial navigation system. First, by selecting a suitable IMU frame, the error models of the accelerometers and gyros are established. Then, taking the navigation errors during rolling as the observations, all twenty-one error parameters of the hybrid inertial navigation system (HINS) are identified through calculation of the intermediate parameter. Experiments verify that the method can identify all HINS error parameters and achieves accuracy equivalent to classical calibration on a high-precision turntable. In addition, the method is rapid, simple and feasible.

  16. A method for soil moisture probes calibration and validation of satellite estimates.

    PubMed

    Holzman, Mauro; Rivas, Raúl; Carmona, Facundo; Niclòs, Raquel

    2017-01-01

    Optimization of field techniques is crucial to ensure high-quality soil moisture data. The aim of this work is to present a sampling method for undisturbed soil and soil water content for calibrating soil moisture probes, in the context of validating the SMOS (Soil Moisture and Ocean Salinity) mission MIRAS Level 2 soil moisture product in the Pampean Region of Argentina. The method avoids soil alteration and is recommended for calibrating probes by soil type under free drying at ambient temperature. A detailed explanation of the field and laboratory procedures used to obtain reference soil moisture is given. The calibration results showed accurate operation of the Delta-T thetaProbe ML2x probes in most of the analyzed cases (RMSE and bias ≤ 0.05 m³/m³). Post-calibration results indicated that accuracy improves significantly when soil-type-specific calibration adjustments are applied (RMSE ≤ 0.022 m³/m³, bias ≤ -0.010 m³/m³). • A sampling method that provides high-quality soil water content data for probe calibration is described. • Calibration based on soil type is important. • A single calibration for similar soil types may be suitable in practice, depending on the required accuracy level.
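
    A minimal sketch of the calibration step itself, assuming paired probe readings and gravimetric reference values collected during free drying: fit a soil-type-specific linear correction and report RMSE and bias before and after. The numbers below are invented for illustration and are not the study's measurements.

```python
# Pair probe readings with reference volumetric water content, fit a linear
# soil-specific calibration, and report RMSE/bias. All values are illustrative.
import numpy as np

probe_vwc = np.array([0.41, 0.37, 0.33, 0.28, 0.24, 0.19, 0.15, 0.11])  # m3/m3, factory calibration
ref_vwc   = np.array([0.38, 0.35, 0.30, 0.26, 0.22, 0.18, 0.13, 0.10])  # m3/m3, gravimetric reference

a, b = np.polyfit(probe_vwc, ref_vwc, 1)        # soil-type-specific linear calibration
corrected = a * probe_vwc + b

for name, est in [("factory", probe_vwc), ("calibrated", corrected)]:
    rmse = np.sqrt(np.mean((est - ref_vwc) ** 2))
    bias = np.mean(est - ref_vwc)
    print(f"{name:>10}: RMSE = {rmse:.3f} m3/m3, bias = {bias:+.3f} m3/m3")
```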

  17. Simplified stereo-optical ultrasound plane calibration

    NASA Astrophysics Data System (ADS)

    Hoßbach, Martin; Noll, Matthias; Wesarg, Stefan

    2013-03-01

    Image-guided therapy is a natural concept and commonly used in medicine. In anesthesia, a common task is the injection of an anesthetic close to a nerve under freehand ultrasound guidance. Several guidance systems exist that use electromagnetic tracking of both the ultrasound probe and the needle, providing the physician with a precise projection of the needle into the ultrasound image. This, however, requires additional expensive devices. We suggest using optical tracking with miniature cameras attached to a 2D ultrasound probe to achieve higher acceptance among physicians. The purpose of this paper is to present an intuitive method to calibrate freehand ultrasound needle guidance systems employing a rigid stereo camera system. State-of-the-art methods are based on a complex series of error-prone coordinate system transformations, which makes them susceptible to error accumulation. By reducing the number of calibration steps to a single calibration procedure, we provide a calibration method that is equivalent yet not prone to error accumulation. It requires a linear calibration object and is validated on three datasets using different calibration objects: a 6 mm metal bar and a 1.25 mm biopsy needle were used for the experiments. Compared to existing calibration methods for freehand ultrasound needle guidance systems, we achieve higher accuracy while reducing the overall calibration complexity.
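
    The paper's calibration procedure itself is not reproduced here, but the following sketch shows how a calibrated ultrasound plane is typically used at runtime: intersect the tracked needle axis with the image plane and convert the intersection to pixel coordinates. The plane pose, pixel spacing, and needle line below are placeholder values, not results from the paper.

```python
# Intersect a tracked needle axis with a calibrated ultrasound image plane and
# express the hit point in pixel coordinates. All poses/values are placeholders.
import numpy as np

# Ultrasound plane in camera coordinates: origin + orthonormal in-plane axes (mm)
plane_origin = np.array([10.0, -5.0, 120.0])
axis_u = np.array([1.0, 0.0, 0.0])            # image column direction
axis_v = np.array([0.0, 0.0, 1.0])            # image row direction
normal = np.cross(axis_u, axis_v)
mm_per_pixel = 0.2

# Needle axis from stereo triangulation: a point and a unit direction (mm)
needle_point = np.array([0.0, 30.0, 100.0])
needle_dir = np.array([0.1, -0.7, 0.7])
needle_dir = needle_dir / np.linalg.norm(needle_dir)

# Line-plane intersection: needle_point + t * needle_dir lies on the plane
t = np.dot(plane_origin - needle_point, normal) / np.dot(needle_dir, normal)
hit = needle_point + t * needle_dir

# Express the intersection in image coordinates (pixels)
col = np.dot(hit - plane_origin, axis_u) / mm_per_pixel
row = np.dot(hit - plane_origin, axis_v) / mm_per_pixel
print(f"needle crosses the image plane at pixel (row={row:.1f}, col={col:.1f})")
```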

  18. Simultaneous multi-headed imager geometry calibration method

    DOEpatents

    Tran, Vi-Hoa [Newport News, VA]; Meikle, Steven Richard [Penshurst, AU]; Smith, Mark Frederick [Yorktown, VA]

    2008-02-19

    A method for calibrating multi-headed high sensitivity and high spatial resolution dynamic imaging systems, especially those useful in the acquisition of tomographic images of small animals. The method of the present invention comprises: simultaneously calibrating two or more detectors to the same coordinate system; and functionally correcting for unwanted detector movement due to gantry flexing.

  19. Model independent approach to the single photoelectron calibration of photomultiplier tubes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saldanha, R.; Grandi, L.; Guardincerri, Y.

    2017-08-01

    The accurate calibration of photomultiplier tubes is critical in a wide variety of applications in which it is necessary to know the absolute number of detected photons or precisely determine the resolution of the signal. Conventional calibration methods rely on fitting the photomultiplier response to a low intensity light source with analytical approximations to the single photoelectron distribution, often leading to biased estimates due to the inability to accurately model the full distribution, especially at low charge values. In this paper we present a simple statistical method to extract the relevant single photoelectron calibration parameters without making any assumptions about the underlying single photoelectron distribution. We illustrate the use of this method through the calibration of a Hamamatsu R11410 photomultiplier tube and study the accuracy and precision of the method using Monte Carlo simulations. The method is found to have significantly reduced bias compared to conventional methods and works under a wide range of light intensities, making it suitable for simultaneously calibrating large arrays of photomultiplier tubes.
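
    As a hedged illustration of the moment-based idea (not the paper's exact estimator), the sketch below uses compound-Poisson statistics: if the number of photoelectrons per trigger is Poisson with a known mean occupancy, the LED-on minus LED-off charge mean and variance yield the single photoelectron mean and width without assuming any shape for the SPE distribution. The simulated charges and the assumed-known occupancy are illustrative.

```python
# Moment-based SPE estimate from simulated LED-on/LED-off charge spectra.
# Assumes the Poisson occupancy `lam` is known (e.g. estimated separately).
import numpy as np

rng = np.random.default_rng(4)
n_events = 100_000
lam = 0.15                                   # mean photoelectrons per trigger (assumed known)
spe_mean, spe_sigma = 1.0e6, 0.4e6           # "true" SPE charge parameters (electrons)
ped_sigma = 0.1e6                            # pedestal (electronics noise) width

n_pe = rng.poisson(lam, n_events)
q_off = rng.normal(0.0, ped_sigma, n_events)                           # LED-off charges
q_on = rng.normal(0.0, ped_sigma, n_events) + np.array(
    [rng.normal(spe_mean, spe_sigma, k).sum() for k in n_pe])          # LED-on charges

# Compound-Poisson moment relations:
#   E[Q_on] - E[Q_off]     = lam * mu_spe
#   Var[Q_on] - Var[Q_off] = lam * (sigma_spe^2 + mu_spe^2)
mu_hat = (q_on.mean() - q_off.mean()) / lam
sigma2_hat = (q_on.var() - q_off.var()) / lam - mu_hat**2
print(f"estimated SPE mean : {mu_hat:.3e}   (true {spe_mean:.3e})")
print(f"estimated SPE width: {np.sqrt(max(sigma2_hat, 0.0)):.3e}   (true {spe_sigma:.3e})")
```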

  20. A Theoretical Framework for Calibration in Computer Models: Parametrization, Estimation and Convergence Properties

    DOE PAGES

    Tuo, Rui; Jeff Wu, C. F.

    2016-07-19

    Calibration parameters in deterministic computer experiments are attributes that cannot be measured or are unavailable in physical experiments. Here, an approach is presented to estimate them using data from physical experiments and computer simulations. A theoretical framework is given which allows us to study the issues of parameter identifiability and estimation. We define L2-consistency for calibration as a justification for calibration methods. It is shown that a simplified version of the original KO method leads to asymptotically L2-inconsistent calibration. This L2-inconsistency can be remedied by modifying the original estimation procedure. A novel calibration method, called L2 calibration, is proposed and proven to be L2-consistent and to enjoy an optimal convergence rate. Furthermore, a numerical example and some mathematical analysis are used to illustrate the source of the L2-inconsistency problem.
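
    A minimal numeric illustration of the L2-calibration idea, under simplifying assumptions (no emulator, a scalar calibration parameter, and a toy model): choose the parameter that minimises the squared discrepancy between the physical observations and the computer model over the control-variable range. This is a sketch of the concept only, not the paper's full framework.

```python
# Toy L2 calibration: minimise the mean squared discrepancy between physical
# observations and a deterministic computer model over the calibration parameter.
import numpy as np
from scipy.optimize import minimize_scalar

def simulator(x, theta):
    """Deterministic computer model (toy)."""
    return np.sin(theta * x)

rng = np.random.default_rng(5)
x_phys = np.linspace(0.0, 2.0 * np.pi, 60)                # control-variable settings
# "Physical" observations: the model is imperfect (an extra trend term), which is
# the situation in which different notions of the 'true' calibration parameter diverge.
y_phys = np.sin(1.3 * x_phys) + 0.2 * x_phys + rng.normal(0.0, 0.05, x_phys.size)

def l2_discrepancy(theta):
    return np.mean((y_phys - simulator(x_phys, theta)) ** 2)

res = minimize_scalar(l2_discrepancy, bounds=(0.5, 2.5), method="bounded")
print(f"L2-calibrated theta: {res.x:.3f}")
```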
