Sample records for calibration method suitable

  1. Comparison of infusion pumps calibration methods

    NASA Astrophysics Data System (ADS)

    Batista, Elsa; Godinho, Isabel; do Céu Ferreira, Maria; Furtado, Andreia; Lucas, Peter; Silva, Claudia

    2017-12-01

    Nowadays, several types of infusion pump are commonly used for drug delivery, such as syringe pumps and peristaltic pumps. These instruments present different measuring features and capacities according to their use and therapeutic application. In order to ensure the metrological traceability of these flow and volume measuring equipment, it is necessary to use suitable calibration methods and standards. Two different calibration methods can be used to determine the flow error of infusion pumps. One is the gravimetric method, considered as a primary method, commonly used by National Metrology Institutes. The other calibration method, a secondary method, relies on an infusion device analyser (IDA) and is typically used by hospital maintenance offices. The suitability of the IDA calibration method was assessed by testing several infusion instruments at different flow rates using the gravimetric method. In addition, a measurement comparison between Portuguese Accredited Laboratories and hospital maintenance offices was performed under the coordination of the Portuguese Institute for Quality, the National Metrology Institute. The obtained results were directly related to the used calibration method and are presented in this paper. This work has been developed in the framework of the EURAMET projects EMRP MeDD and EMPIR 15SIP03.
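    As a rough illustration of the gravimetric principle mentioned above, the sketch below converts two balance readings into a mean delivered flow rate and a relative flow error. The function names, the water density value and the optional evaporation correction are illustrative assumptions, not the authors' exact procedure (which also involves buoyancy and temperature corrections).

    ```python
    # Hedged sketch of the gravimetric flow calculation (illustrative names and values).
    WATER_DENSITY_G_PER_ML = 0.998  # approximate density of water near 20 deg C (assumption)

    def gravimetric_flow_ml_per_h(mass_start_g, mass_end_g, elapsed_s, evaporation_g_per_h=0.0):
        """Mean delivered flow rate in mL/h from two balance readings."""
        elapsed_h = elapsed_s / 3600.0
        delivered_g = (mass_end_g - mass_start_g) + evaporation_g_per_h * elapsed_h
        return delivered_g / WATER_DENSITY_G_PER_ML / elapsed_h

    def flow_error_percent(measured_ml_per_h, set_ml_per_h):
        """Relative flow error of the pump at a given set point."""
        return 100.0 * (measured_ml_per_h - set_ml_per_h) / set_ml_per_h

    # Example: a syringe pump set to 5 mL/h delivers 4.93 g of water onto the balance in one hour.
    q = gravimetric_flow_ml_per_h(120.00, 124.93, 3600)
    print(f"flow = {q:.2f} mL/h, error = {flow_error_percent(q, 5.0):+.1f} %")
    ```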

  2. Calibration of gravitational radiation antenna by dynamic Newton field

    NASA Astrophysics Data System (ADS)

    Suzuki, T.; Tsubono, K.; Kuroda, K.; Hirakawa, H.

    1981-07-01

    A method is presented of calibrating antennas for gravitational radiation. The method, which used the dynamic Newton field of a rotating body, is suitable in experiments for frequencies up to several hundred hertz. What is more, the method requires no hardware inside the vacuum chamber of the antenna and is particularly convenient for calibration of low-temperature antenna systems.

  3. Research on the calibration methods of the luminance parameter of radiation luminance meters

    NASA Astrophysics Data System (ADS)

    Cheng, Weihai; Huang, Biyong; Lin, Fangsheng; Li, Tiecheng; Yin, Dejin; Lai, Lei

    2017-10-01

    This paper introduces the standard diffuse-reflection white plate method and the integrating-sphere standard luminance source method for calibrating the luminance parameter. The paper compares the calibration results of the two methods through an analysis of the measurement principles and experimental verification. After using the two methods to calibrate the same radiation luminance meter, the data obtained verify that the test results of both methods are reliable. The results show that the displayed value using the standard white plate method has smaller errors and better reproducibility. However, the standard luminance source method is more convenient and suitable for on-site calibration. Moreover, the standard luminance source method has a wider range and can test the linear performance of the instruments.
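    For context, the luminance assigned to an ideal diffuse (Lambertian) white plate follows L = β·E/π, where β is the luminance factor and E the illuminance. The short sketch below applies this relation and computes a meter error; the numbers and names are illustrative only, not values from the paper.

    ```python
    import math

    def plate_luminance_cd_per_m2(illuminance_lx, luminance_factor):
        """Luminance assigned to an ideal diffuse white plate: L = beta * E / pi."""
        return luminance_factor * illuminance_lx / math.pi

    def luminance_error_percent(meter_reading_cd_per_m2, reference_cd_per_m2):
        """Display error of the radiation luminance meter against the reference value."""
        return 100.0 * (meter_reading_cd_per_m2 - reference_cd_per_m2) / reference_cd_per_m2

    # Example: plate with luminance factor 0.97 under 1000 lx of illuminance.
    L_ref = plate_luminance_cd_per_m2(1000.0, 0.97)
    print(f"reference luminance = {L_ref:.1f} cd/m2, "
          f"meter error = {luminance_error_percent(310.0, L_ref):+.2f} %")
    ```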

  4. Accuracy evaluation of optical distortion calibration by digital image correlation

    NASA Astrophysics Data System (ADS)

    Gao, Zeren; Zhang, Qingchuan; Su, Yong; Wu, Shangquan

    2017-11-01

    Due to its convenience of operation, the camera calibration algorithm based on a planar template is widely used in image measurement, computer vision and other fields. How to select a suitable distortion model is always a problem to be solved. Therefore, there is an urgent need for an experimental evaluation of the accuracy of camera distortion calibrations. This paper presents an experimental method for evaluating camera distortion calibration accuracy, which is easy to implement, has high precision, and is suitable for a variety of commonly used lenses. First, we use the digital image correlation method to calculate the in-plane rigid body displacement field of an image displayed on a liquid crystal display before and after translation, as captured with a camera. Next, we use a calibration board to calibrate the camera and obtain calibration parameters, which are used to correct the calculation points of the images before and after deformation. The displacement fields before and after correction are compared to analyze the distortion calibration results. Experiments were carried out to evaluate the performance of two commonly used industrial camera lenses for four commonly used distortion models.
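    One of the commonly used distortion models referred to above is the polynomial radial model. The sketch below applies it to normalized coordinates and computes a simple rigid-translation residual of the kind a DIC-based evaluation inspects; the model choice, grid and coefficient values are illustrative assumptions, not the paper's setup.

    ```python
    import numpy as np

    def apply_radial_distortion(xy_norm, k1, k2=0.0, k3=0.0):
        """Polynomial radial model x_d = x_u * (1 + k1*r^2 + k2*r^4 + k3*r^6) applied to
        normalized (dimensionless) image coordinates; one of the commonly used models."""
        x, y = xy_norm[..., 0], xy_norm[..., 1]
        r2 = x * x + y * y
        scale = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
        return np.stack([x * scale, y * scale], axis=-1)

    def rms_nonrigid_residual(points_before, points_after):
        """RMS deviation of a displacement field from a pure (mean) rigid translation,
        a stand-in metric for the residual a DIC-based evaluation would inspect."""
        d = points_after - points_before
        return float(np.sqrt(np.mean(np.sum((d - d.mean(axis=0)) ** 2, axis=-1))))

    # Example: a 5 x 5 grid of normalized points with a barrel-distortion term k1 = -0.1.
    grid = np.stack(np.meshgrid(np.linspace(-0.5, 0.5, 5),
                                np.linspace(-0.5, 0.5, 5)), axis=-1).reshape(-1, 2)
    print(rms_nonrigid_residual(grid, apply_radial_distortion(grid, k1=-0.1)))
    ```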

  5. Medical-grade Sterilizable Target for Fluid-immersed Fetoscope Optical Distortion Calibration.

    PubMed

    Nikitichev, Daniil I; Shakir, Dzhoshkun I; Chadebecq, François; Tella, Marcel; Deprest, Jan; Stoyanov, Danail; Ourselin, Sébastien; Vercauteren, Tom

    2017-02-23

    We have developed a calibration target for use with fluid-immersed endoscopes within the context of the GIFT-Surg (Guided Instrumentation for Fetal Therapy and Surgery) project. One of the aims of this project is to engineer novel, real-time image processing methods for intra-operative use in the treatment of congenital birth defects, such as spina bifida and the twin-to-twin transfusion syndrome. The developed target allows for the sterility-preserving optical distortion calibration of endoscopes within a few minutes. Good optical distortion calibration and compensation are important for mitigating undesirable effects like radial distortions, which not only hamper accurate imaging using existing endoscopic technology during fetal surgery, but also make acquired images less suitable for potentially very useful image computing applications, like real-time mosaicing. This paper proposes a novel fabrication method to create an affordable, sterilizable calibration target suitable for use in a clinical setup. This method involves etching a calibration pattern by laser cutting a sandblasted stainless steel sheet. This target was validated using the camera calibration module provided by OpenCV, a state-of-the-art software library popular in the computer vision community.
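    Since the target was validated with OpenCV's camera calibration module, a minimal sketch of that workflow is given below, assuming a planar chessboard pattern. The image folder, pattern size and square pitch are placeholders; the paper's actual laser-etched pattern and settings may differ.

    ```python
    import glob

    import cv2
    import numpy as np

    PATTERN = (9, 6)     # inner corners per chessboard row/column (assumption)
    SQUARE_MM = 2.0      # pattern pitch in millimetres (assumption)

    objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

    obj_points, img_points, image_size = [], [], None
    for path in glob.glob("fetoscope_frames/*.png"):      # placeholder image folder
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, PATTERN)
        if found:
            obj_points.append(objp)
            img_points.append(corners)
            image_size = gray.shape[::-1]

    if img_points:
        # Returns the RMS re-projection error, camera matrix and distortion coefficients.
        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
            obj_points, img_points, image_size, None, None)
        print("RMS re-projection error:", rms)
        print("distortion coefficients (k1, k2, p1, p2, k3):", dist.ravel())
    ```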

  6. Medical-grade Sterilizable Target for Fluid-immersed Fetoscope Optical Distortion Calibration

    PubMed Central

    Chadebecq, François; Tella, Marcel; Deprest, Jan; Stoyanov, Danail; Ourselin, Sébastien; Vercauteren, Tom

    2017-01-01

    We have developed a calibration target for use with fluid-immersed endoscopes within the context of the GIFT-Surg (Guided Instrumentation for Fetal Therapy and Surgery) project. One of the aims of this project is to engineer novel, real-time image processing methods for intra-operative use in the treatment of congenital birth defects, such as spina bifida and the twin-to-twin transfusion syndrome. The developed target allows for the sterility-preserving optical distortion calibration of endoscopes within a few minutes. Good optical distortion calibration and compensation are important for mitigating undesirable effects like radial distortions, which not only hamper accurate imaging using existing endoscopic technology during fetal surgery, but also make acquired images less suitable for potentially very useful image computing applications, like real-time mosaicing. This paper proposes a novel fabrication method to create an affordable, sterilizable calibration target suitable for use in a clinical setup. This method involves etching a calibration pattern by laser cutting a sandblasted stainless steel sheet. This target was validated using the camera calibration module provided by OpenCV, a state-of-the-art software library popular in the computer vision community. PMID:28287588

  7. Systemic errors calibration in dynamic stitching interferometry

    NASA Astrophysics Data System (ADS)

    Wu, Xin; Qi, Te; Yu, Yingjie; Zhang, Linna

    2016-05-01

    The systemic error is the main error source in sub-aperture stitching calculations. In this paper, a systemic error calibration method is proposed based on pseudo shearing. The method is suitable for dynamic stitching interferometry of large optical flats. Its feasibility is verified by simulations and experiments.

  8. Mesoscale hybrid calibration artifact

    DOEpatents

    Tran, Hy D.; Claudet, Andre A.; Oliver, Andrew D.

    2010-09-07

    A mesoscale calibration artifact, also called a hybrid artifact, suitable for hybrid dimensional measurement, and the method for making the artifact. The hybrid artifact has structural characteristics that make it suitable for dimensional measurement in both vision-based systems and touch-probe-based systems. The hybrid artifact employs the intersection of bulk-micromachined planes to fabricate edges that are sharp to the nanometer level and intersecting planes with crystal-lattice-defined angles.

  9. Humidity Measurements: A Psychrometer Suitable for On-Line Data Acquisition.

    ERIC Educational Resources Information Center

    Caporaloni, Marina; Ambrosini, Roberto

    1992-01-01

    Explains the typical design, operation, and calibration of a traditional psychrometer. Presents the method utilized for this class project with design considerations, calibration techniques, remote data sensing schematic, and specifics of the implementation process. (JJK)

  10. Evaluation of plasmid and genomic DNA calibrants used for the quantification of genetically modified organisms.

    PubMed

    Caprioara-Buda, M; Meyer, W; Jeynov, B; Corbisier, P; Trapmann, S; Emons, H

    2012-07-01

    The reliable quantification of genetically modified organisms (GMOs) by real-time PCR requires, besides thoroughly validated quantitative detection methods, sustainable calibration systems. The latter establishes the anchor points for the measured value and the measurement unit, respectively. In this paper, the suitability of two types of DNA calibrants, i.e. plasmid DNA and genomic DNA extracted from plant leaves, for the certification of the GMO content in reference materials as copy number ratio between two targeted DNA sequences was investigated. The PCR efficiencies and coefficients of determination of the calibration curves as well as the measured copy number ratios for three powder certified reference materials (CRMs), namely ERM-BF415e (NK603 maize), ERM-BF425c (356043 soya), and ERM-BF427c (98140 maize), originally certified for their mass fraction of GMO, were compared for both types of calibrants. In all three systems investigated, the PCR efficiencies of plasmid DNA were slightly closer to the PCR efficiencies observed for the genomic DNA extracted from seed powders rather than those of the genomic DNA extracted from leaves. Although the mean DNA copy number ratios for each CRM overlapped within their uncertainties, the DNA copy number ratios were significantly different using the two types of calibrants. Based on these observations, both plasmid and leaf genomic DNA calibrants would be technically suitable as anchor points for the calibration of the real-time PCR methods applied in this study. However, the most suitable approach to establish a sustainable traceability chain is to fix a reference system based on plasmid DNA.
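    The PCR efficiencies and coefficients of determination compared above are conventionally derived from the slope of a standard curve of Cq versus log10 of the target copy number. A minimal sketch of that calculation, with invented dilution-series numbers, follows.

    ```python
    import numpy as np

    def standard_curve_stats(log10_copies, cq_values):
        """Slope, intercept, amplification efficiency E = 10**(-1/slope) - 1 and R^2 of a
        real-time PCR calibration (standard) curve Cq versus log10(copy number)."""
        x = np.asarray(log10_copies, dtype=float)
        y = np.asarray(cq_values, dtype=float)
        slope, intercept = np.polyfit(x, y, 1)
        fit = slope * x + intercept
        r2 = 1.0 - np.sum((y - fit) ** 2) / np.sum((y - y.mean()) ** 2)
        efficiency = 10.0 ** (-1.0 / slope) - 1.0
        return slope, intercept, efficiency, r2

    # Illustrative dilution series (not data from the paper).
    print(standard_curve_stats([2, 3, 4, 5, 6], [31.8, 28.4, 25.1, 21.7, 18.3]))
    ```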

  11. In situ calibration of inductively coupled plasma-atomic emission and mass spectroscopy

    DOEpatents

    Braymen, Steven D.

    1996-06-11

    A method and apparatus for in situ addition calibration of an inductively coupled plasma atomic emission spectrometer or mass spectrometer using a precision gas metering valve to introduce a volatile calibration gas of an element of interest directly into an aerosol particle stream. The present in situ calibration technique is suitable for various remote, on-site sampling systems such as laser ablation or nebulization.

  12. Features calibration of the dynamic force transducers

    NASA Astrophysics Data System (ADS)

    Prilepko, M. Yu., D. Sc.; Lysenko, V. G.

    2018-04-01

    The article discusses calibration methods for dynamic force measuring instruments. The relevance of the work stems from the need to correctly determine the metrological characteristics of dynamic force transducers, taking their intended application into account. The aim of this work is to justify the choice of a calibration method that determines the metrological characteristics of dynamic force transducers under simulated operating conditions, so that their suitability for the intended purpose can be assessed. The following tasks are solved: the mathematical model and the main measurement equation for calibrating dynamic force transducers by load weight are established, and the main uncertainty budget components of the calibration are defined. A new method for calibrating dynamic force transducers is proposed that uses a reference “force-deformation” converter based on a calibrated elastic element whose deformation is measured by a laser interferometer. The mathematical model and the main measurement equation of the proposed method are constructed. It is shown that a calibration method based on laser-interferometric measurement of the calibrated elastic element's deformation allows the uncertainty budget components inherent in the load-weight method to be excluded or considerably reduced.

  13. Fast wavelength calibration method for spectrometers based on waveguide comb optical filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Zhengang; Department of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai 200240; Huang, Meizhen, E-mail: mzhuang@sjtu.edu.cn

    2015-04-15

    A novel fast wavelength calibration method for spectrometers, based on a standard spectrometer and a double metal-cladding waveguide comb optical filter (WCOF), is proposed and demonstrated. By using the WCOF device, a wide-spectrum beam is comb-filtered, which is very suitable for spectrometer wavelength calibration. The influence of the waveguide filter’s structural parameters and the beam incident angle on the wavelength and bandwidth of the comb absorption peaks is also discussed. Verification experiments were carried out in the wavelength range of 200–1100 nm with satisfactory results. Compared with the traditional wavelength calibration method based on discrete, sparse atomic emission or absorption lines, the new method has several advantages: sufficient calibration data, high accuracy, short calibration time, suitability for production processes, and stability.
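    A minimal sketch of the final mapping step, as it is commonly done, is shown below: comb peak positions located on the detector are paired with wavelengths read from the reference spectrometer, and a pixel-to-wavelength polynomial is fitted. The synthetic spectrum, peak-finding call and polynomial degree are assumptions for illustration, not the paper's procedure.

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    def fit_pixel_to_wavelength(peak_pixels, reference_wavelengths_nm, degree=2):
        """Polynomial mapping from detector pixel to wavelength, fitted on comb peaks whose
        wavelengths were read off the standard (reference) spectrometer."""
        return np.poly1d(np.polyfit(peak_pixels, reference_wavelengths_nm, degree))

    # Synthetic stand-in for a comb-filtered spectrum on a 2048-pixel detector.
    pixels = np.arange(2048)
    spectrum = 1.0 + 0.5 * np.cos(2 * np.pi * pixels / 64.0)   # fake comb transmission
    peak_idx, _ = find_peaks(spectrum)                         # comb peak positions (pixels)
    ref_wavelengths = 200.0 + 0.44 * peak_idx                  # stand-in reference readings

    pixel_to_wl = fit_pixel_to_wavelength(peak_idx, ref_wavelengths)
    print(pixel_to_wl([0, 1024, 2047]))                        # wavelengths for three pixels
    ```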

  14. In situ calibration of inductively coupled plasma-atomic emission and mass spectroscopy

    DOEpatents

    Braymen, S.D.

    1996-06-11

    A method and apparatus are disclosed for in situ addition calibration of an inductively coupled plasma atomic emission spectrometer or mass spectrometer using a precision gas metering valve to introduce a volatile calibration gas of an element of interest directly into an aerosol particle stream. The present in situ calibration technique is suitable for various remote, on-site sampling systems such as laser ablation or nebulization. 5 figs.

  15. An Improved Fast Self-Calibration Method for Hybrid Inertial Navigation System under Stationary Condition

    PubMed Central

    Liu, Bingqi; Wei, Shihui; Su, Guohua; Wang, Jiping; Lu, Jiazhen

    2018-01-01

    The navigation accuracy of an inertial navigation system (INS) can be greatly improved when errors of the inertial measurement unit (IMU), such as gyro drifts and accelerometer biases, are effectively calibrated and compensated. To reduce the requirement for turntable precision in the classical calibration method, a continuous dynamic self-calibration method based on a three-axis rotating frame for the hybrid inertial navigation system is presented. First, by selecting a suitable IMU frame, the error models of the accelerometers and gyros are established. Then, by taking the navigation errors during rolling as the observations, the overall twenty-one error parameters of the hybrid inertial navigation system (HINS) are identified based on the calculation of the intermediate parameter. An actual experiment verifies that the method can identify all error parameters of the HINS and that it has accuracy equivalent to the classical calibration on a high-precision turntable. In addition, this method is rapid, simple and feasible. PMID:29695041
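    A common way to write the triad error models referred to above is the linear form sketched below (one possible split of the twenty-one parameters is 3 biases, 3 scale factors and 6 misalignments for the accelerometers plus 3 drifts, 3 scale factors and 3 misalignments for the gyros). The structure and numbers are illustrative assumptions, not the paper's exact model.

    ```python
    import numpy as np

    def accel_error_model(a_true, scale, misalign, bias):
        """Linear triad model a_meas = (I + diag(scale) + M) @ a_true + bias, where M holds
        small misalignment (non-orthogonality) angles in radians."""
        M = np.array([[0.0,         misalign[0], misalign[1]],
                      [misalign[2], 0.0,         misalign[3]],
                      [misalign[4], misalign[5], 0.0        ]])
        return (np.eye(3) + np.diag(scale) + M) @ np.asarray(a_true) + np.asarray(bias)

    # Example: gravity along z as seen by a slightly mis-scaled, misaligned, biased triad.
    g = [0.0, 0.0, 9.80665]
    print(accel_error_model(g,
                            scale=[1e-4, -2e-4, 3e-4],
                            misalign=[5e-5, -1e-5, 2e-5, 1e-5, -3e-5, 4e-5],
                            bias=[1e-3, -2e-3, 5e-4]))
    ```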

  16. An Improved Fast Self-Calibration Method for Hybrid Inertial Navigation System under Stationary Condition.

    PubMed

    Liu, Bingqi; Wei, Shihui; Su, Guohua; Wang, Jiping; Lu, Jiazhen

    2018-04-24

    The navigation accuracy of an inertial navigation system (INS) can be greatly improved when errors of the inertial measurement unit (IMU), such as gyro drifts and accelerometer biases, are effectively calibrated and compensated. To reduce the requirement for turntable precision in the classical calibration method, a continuous dynamic self-calibration method based on a three-axis rotating frame for the hybrid inertial navigation system is presented. First, by selecting a suitable IMU frame, the error models of the accelerometers and gyros are established. Then, by taking the navigation errors during rolling as the observations, the overall twenty-one error parameters of the hybrid inertial navigation system (HINS) are identified based on the calculation of the intermediate parameter. An actual experiment verifies that the method can identify all error parameters of the HINS and that it has accuracy equivalent to the classical calibration on a high-precision turntable. In addition, this method is rapid, simple and feasible.

  17. Laser's calibration of an AOTF-based spectral colorimeter

    NASA Astrophysics Data System (ADS)

    Emelianov, Sergey P.; Khrustalev, Vladimir N.; Kochin, Leonid B.; Polosin, Lev L.

    2003-06-01

    The paper is devoted to methods of calibrating AOTF-based spectral colorimeters. The spectrometric method of measuring color values is surveyed with reference to AOTF spectral colorimeters. A theoretical exposition of spectrometric data processing methods is offered. A justified choice of radiation source suitable for the calibration of spectral colorimeters is made. Experimental results for different acousto-optical media and modes of interaction are presented.

  18. A novel dual-camera calibration method for 3D optical measurement

    NASA Astrophysics Data System (ADS)

    Gai, Shaoyan; Da, Feipeng; Dai, Xianqiang

    2018-05-01

    A novel dual-camera calibration method is presented. In the classic methods, the camera parameters are usually calculated and optimized by the reprojection error. However, for a system designed for 3D optical measurement, this error does not reflect the quality of the 3D reconstruction. In the presented method, a planar calibration plate is used. First, images of the calibration plate are captured from several orientations within the measurement range. The initial parameters of the two cameras are obtained from these images. Then, the rotation and translation matrices that link the frames of the two cameras are calculated using the Centroid Distance Increment Matrix method. The degree of coupling between the parameters is reduced. Next, the 3D coordinates of the calibration points are reconstructed by the space intersection method. Finally, the reconstruction error is calculated and minimized to optimize the calibration parameters. This error directly indicates the quality of the 3D reconstruction, so it is more suitable for assessing the quality of dual-camera calibration. The experiments show that the proposed method is convenient and accurate. There is no strict requirement on the calibration plate position in the calibration process, and the accuracy is improved significantly by the proposed method.

  19. A novel calibration method for non-orthogonal shaft laser theodolite measurement system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Bin, E-mail: wubin@tju.edu.cn, E-mail: xueting@tju.edu.cn; Yang, Fengting; Ding, Wen

    2016-03-15

    The non-orthogonal shaft laser theodolite (N-theodolite) is a new kind of large-scale metrological instrument made up of two rotary tables and one collimated laser. An N-theodolite has three axes. Following the naming conventions of the traditional theodolite, the rotary axes of the two rotary tables are called the horizontal axis and the vertical axis, respectively, and the collimated laser beam is named the sight axis. The difference between the N-theodolite and the traditional theodolite is obvious, since the former has no orthogonality or intersection accuracy requirements. The calibration method for the traditional theodolite is therefore no longer suitable for the N-theodolite, while the calibration method currently applied is rather complicated. Thus this paper introduces a novel calibration method for the non-orthogonal shaft laser theodolite measurement system to simplify the procedure and to improve the calibration accuracy. A simple two-step process, calibration of intrinsic parameters and of extrinsic parameters, is proposed by the novel method, and experiments have shown its efficiency and accuracy.

  20. Model independent approach to the single photoelectron calibration of photomultiplier tubes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saldanha, R.; Grandi, L.; Guardincerri, Y.

    2017-08-01

    The accurate calibration of photomultiplier tubes is critical in a wide variety of applications in which it is necessary to know the absolute number of detected photons or precisely determine the resolution of the signal. Conventional calibration methods rely on fitting the photomultiplier response to a low intensity light source with analytical approximations to the single photoelectron distribution, often leading to biased estimates due to the inability to accurately model the full distribution, especially at low charge values. In this paper we present a simple statistical method to extract the relevant single photoelectron calibration parameters without making any assumptions about the underlying single photoelectron distribution. We illustrate the use of this method through the calibration of a Hamamatsu R11410 photomultiplier tube and study the accuracy and precision of the method using Monte Carlo simulations. The method is found to have significantly reduced bias compared to conventional methods and works under a wide range of light intensities, making it suitable for simultaneously calibrating large arrays of photomultiplier tubes.
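    One textbook model-independent route, sketched below under the assumption of Poisson-distributed photoelectron counts with a separately estimated mean occupancy, recovers the single-photoelectron mean and width from the first two moments of the illuminated and background charge distributions. This is only an illustration of the statistical idea; the paper's own estimators and corrections may differ.

    ```python
    import numpy as np

    def spe_mean_and_width(q_led_on, q_led_off, occupancy):
        """SPE mean charge and width from the first two moments of the illuminated and
        background charge distributions, assuming Poisson photoelectron statistics with a
        known (separately estimated) mean occupancy."""
        mean_spe = (np.mean(q_led_on) - np.mean(q_led_off)) / occupancy
        var_spe = (np.var(q_led_on) - np.var(q_led_off)) / occupancy - mean_spe ** 2
        return mean_spe, float(np.sqrt(max(var_spe, 0.0)))

    # Toy data: Gaussian baseline noise plus Poisson(0.1) photoelectrons of mean 1.6e6 e-.
    rng = np.random.default_rng(0)
    n, lam, gain, spread = 200_000, 0.1, 1.6e6, 0.6e6
    npe = rng.poisson(lam, n)
    q_off = rng.normal(0.0, 2e5, n)
    q_on = rng.normal(0.0, 2e5, n) + rng.normal(npe * gain, np.sqrt(npe) * spread)
    print(spe_mean_and_width(q_on, q_off, occupancy=lam))   # roughly (1.6e6, 0.6e6)
    ```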

  1. A Contemporary Approach for Evaluation of the Best Measurement Capability of a Force Calibration Machine

    NASA Astrophysics Data System (ADS)

    Kumar, Harish

    The present paper discusses the procedure for evaluation of the best measurement capability of a force calibration machine. The best measurement capability of a force calibration machine is evaluated by comparison, through precision force transfer standards, with force standard machines. The force transfer standards are calibrated by the force standard machine and then by the force calibration machine following a similar procedure. The results are reported and discussed for a force calibration machine of 200 kN capacity. Different force transfer standards of nominal capacity 20 kN, 50 kN and 200 kN are used. It is found that there are significant variations in the uncertainty of force realization by the force calibration machine according to the proposed method in comparison to the earlier method adopted.

  2. Autonomous Kinematic Calibration of the Robot Manipulator with a Linear Laser-Vision Sensor

    NASA Astrophysics Data System (ADS)

    Kang, Hee-Jun; Jeong, Jeong-Woo; Shin, Sung-Weon; Suh, Young-Soo; Ro, Young-Schick

    This paper presents a new autonomous kinematic calibration technique by using a laser-vision sensor called "Perceptron TriCam Contour". Because the sensor measures by capturing the image of a projected laser line on the surface of the object, we set up a long, straight line of very fine string inside the robot workspace, and then allow the sensor mounted on the robot to measure the intersection point of the string and the projected laser line. The data collected by changing the robot configuration and measuring the intersection points are constrained to lie on a single straight line, so that the closed-loop calibration method can be applied. The obtained calibration method is simple and accurate and also suitable for on-site calibration in an industrial environment. The method is implemented on a Hyundai VORG-35 robot to demonstrate its effectiveness.

  3. True logarithmic amplification of frequency clock in SS-OCT for calibration

    PubMed Central

    Liu, Bin; Azimi, Ehsan; Brezinski, Mark E.

    2011-01-01

    With swept source optical coherence tomography (SS-OCT), imprecise signal calibration prevents optimal imaging of biological tissues such as the coronary artery. This work demonstrates an approach using a true logarithmic amplifier to precondition the clock signal, in an effort to minimize noise and phase errors for optimal calibration. The method was validated and tested with a high-speed SS-OCT system. The experimental results demonstrate its superior ability to optimize the calibration and improve imaging performance. In particular, this hardware-based approach is suitable for real-time calibration in a high-speed system where computation time is constrained. PMID:21698036

  4. A method for soil moisture probes calibration and validation of satellite estimates.

    PubMed

    Holzman, Mauro; Rivas, Raúl; Carmona, Facundo; Niclòs, Raquel

    2017-01-01

    Optimization of field techniques is crucial to ensure high quality soil moisture data. The aim of this work is to present a sampling method for undisturbed soil and soil water content to calibrate soil moisture probes, in the context of validating the SMOS (Soil Moisture and Ocean Salinity) mission MIRAS Level 2 soil moisture product in the Pampean Region of Argentina. The method avoids soil alteration and is recommended for calibrating probes by soil type under a free drying process at ambient temperature. A detailed explanation of the field and laboratory procedures used to obtain the reference soil moisture is given. The calibration results reflected accurate operation of the Delta-T ThetaProbe ML2x probes in most of the analyzed cases (RMSE and bias ≤ 0.05 m³/m³). Post-calibration results indicated that the accuracy improves significantly when the soil-type-based calibration adjustments are applied (RMSE ≤ 0.022 m³/m³, bias ≤ -0.010 m³/m³). • A sampling method that provides high quality data on soil water content for the calibration of probes is described. • Calibration based on soil types is important. • A calibration process for similar soil types could be suitable in practical terms, depending on the required accuracy level.
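    A minimal sketch of the final adjustment step described above, with invented numbers rather than the paper's data: probe readings are regressed against the reference volumetric water content, and RMSE and bias are reported before and after the adjustment.

    ```python
    import numpy as np

    def calibrate_probe(probe_vwc, reference_vwc):
        """Linear adjustment of probe readings against reference volumetric water content
        (m3/m3), returning the fit and the RMSE/bias before and after the adjustment."""
        probe = np.asarray(probe_vwc, dtype=float)
        ref = np.asarray(reference_vwc, dtype=float)
        slope, intercept = np.polyfit(probe, ref, 1)
        corrected = slope * probe + intercept

        def stats(estimate):
            return (float(np.sqrt(np.mean((estimate - ref) ** 2))),   # RMSE
                    float(np.mean(estimate - ref)))                   # bias

        return (slope, intercept), stats(probe), stats(corrected)

    # Illustrative readings from one drying soil sample (not the paper's data).
    probe_readings = [0.41, 0.36, 0.30, 0.24, 0.18, 0.12]
    reference_vwc = [0.44, 0.38, 0.31, 0.25, 0.18, 0.11]
    print(calibrate_probe(probe_readings, reference_vwc))
    ```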

  5. The Use of Color Sensors for Spectrographic Calibration

    NASA Astrophysics Data System (ADS)

    Thomas, Neil B.

    2018-04-01

    The wavelength calibration of spectrographs is an essential but challenging task in many disciplines. Calibration is traditionally accomplished by imaging the spectrum of a light source containing features that are known to appear at certain wavelengths and mapping them to their location on the sensor. This is typically required in conjunction with each scientific observation to account for mechanical and optical variations of the instrument over time, which may span years for certain projects. The method presented here investigates the usage of color itself instead of spectral features to calibrate a spectrograph. The primary advantage of such a calibration is that any broad-spectrum light source such as the sky or an incandescent bulb is suitable. This method allows for calibration using the full optical pathway of the instrument instead of incorporating separate calibration equipment that may introduce errors. This paper focuses on the potential for color calibration in the field of radial velocity astronomy, in which instruments must be finely calibrated for long periods of time to detect tiny Doppler wavelength shifts. This method is not restricted to radial velocity, however, and may find application in any field requiring calibrated spectrometers such as sea water analysis, cellular biology, chemistry, atmospheric studies, and so on. This paper demonstrates that color sensors have the potential to provide calibration with greatly reduced complexity.

  6. Influence of Installation Errors On the Output Data of the Piezoelectric Vibrations Transducers

    NASA Astrophysics Data System (ADS)

    Kozuch, Barbara; Chelmecki, Jaroslaw; Tatara, Tadeusz

    2017-10-01

    The paper examines the influence of installation errors of piezoelectric vibration transducers on the output data. PCB Piezotronics piezoelectric accelerometers were used to perform calibrations by comparison. The measurements were performed with a TMS 9155 Calibration Workstation (version 5.4.0) at frequencies in the range of 5 Hz - 2000 Hz. The accelerometers were fixed on the calibration station in a so-called back-to-back configuration in accordance with the applicable international standard - ISO 16063-21: Methods for the calibration of vibration and shock transducers - Part 21: Vibration calibration by comparison to a reference transducer. The first accelerometer was calibrated by suitable methods with traceability to a primary reference transducer. Each subsequent calibration was performed after changing one setting relative to the original calibration. The alterations represented negligence and failures with respect to the above-mentioned standards and operating guidelines - e.g. the sensor was not tightened or the appropriate coupling substance was not applied. The mounting method specified in the standards was also modified. Different kinds of wax, light oil, grease and other mounting methods were used. The aim of the study was to verify the significance of the standards' requirements and to estimate their validity. The authors also wanted to highlight the most significant calibration errors. Moreover, the relationship between the various acceptable mounting methods was demonstrated.
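    For reference, the comparison (back-to-back) calibration underlying these tests reduces to a ratio of outputs at each frequency, as in the sketch below; the readings and the reference sensitivity are made-up values, not results from the paper.

    ```python
    import numpy as np

    def sensitivity_by_comparison(v_dut_mv, v_ref_mv, s_ref):
        """Back-to-back comparison: both transducers see the same motion, so the sensitivity
        of the device under test is S_dut = S_ref * V_dut / V_ref at each frequency."""
        return np.asarray(s_ref) * np.asarray(v_dut_mv) / np.asarray(v_ref_mv)

    def deviation_percent(s_dut, s_nominal):
        """Deviation of the measured sensitivity from the nominal value, in percent."""
        return 100.0 * (np.asarray(s_dut) - s_nominal) / s_nominal

    # Illustrative readings at 80 Hz, 160 Hz and 1000 Hz.
    v_dut = [101.2, 100.8, 99.1]   # mV, device under test
    v_ref = [99.9, 100.1, 99.8]    # mV, reference accelerometer
    s_ref = 10.02                  # mV/(m/s^2), from the reference's calibration certificate
    print(deviation_percent(sensitivity_by_comparison(v_dut, v_ref, s_ref), 10.0))
    ```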

  7. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization

    NASA Astrophysics Data System (ADS)

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj

    2015-03-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious. They require acquisition of multiple images of a target pattern in its entirety to produce satisfactory results. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE) as a measure of camera calibration accuracy for our optical tracking-based AR system was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising to be applied to our augmented reality visualization system for laparoscopic surgery.
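    Target registration error as quoted above is simply the distribution of Euclidean distances between corresponding points after registration; a small sketch with made-up coordinates follows.

    ```python
    import numpy as np

    def target_registration_error(predicted_pts_mm, reference_pts_mm):
        """Euclidean distance per target and its mean +/- SD, the usual way a TRE figure
        such as 1.18 +/- 0.35 mm is reported."""
        d = np.linalg.norm(np.asarray(predicted_pts_mm, dtype=float)
                           - np.asarray(reference_pts_mm, dtype=float), axis=1)
        return d, float(d.mean()), float(d.std(ddof=1))

    # Illustrative 3-D target positions in millimetres (tracked/projected vs. ground truth).
    predicted = [[10.3, 5.1, 20.2], [14.9, 7.8, 18.6], [9.7, 3.2, 22.4]]
    reference = [[10.0, 5.0, 20.0], [15.0, 8.0, 19.0], [10.0, 3.0, 22.0]]
    print(target_registration_error(predicted, reference)[1:])
    ```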

  8. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization

    PubMed Central

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj

    2017-01-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious. They require acquisition of multiple images of a target pattern in its entirety to produce satisfactory results. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE) as a measure of camera calibration accuracy for our optical tracking-based AR system was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising to be applied to our augmented reality visualization system for laparoscopic surgery. PMID:28943703

  9. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization.

    PubMed

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D; Shekhar, Raj

    2015-03-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious. They require acquisition of multiple images of a target pattern in its entirety to produce satisfactory results. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE) as a measure of camera calibration accuracy for our optical tracking-based AR system was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising to be applied to our augmented reality visualization system for laparoscopic surgery.

  10. Innovative self-calibration method for accelerometer scale factor of the missile-borne RINS with fiber optic gyro.

    PubMed

    Zhang, Qian; Wang, Lei; Liu, Zengjun; Zhang, Yiming

    2016-09-19

    The calibration of an inertial measurement unit (IMU) is a key technique for improving the precision of the inertial navigation system (INS) of a missile, especially the calibration of the accelerometer scale factor. The traditional calibration method is generally based on a high-accuracy turntable; however, this is costly and the calibration results do not reflect the actual operating environment. In the wake of developments in multi-axis rotational INS (RINS) with optical inertial sensors, self-calibration is utilized as an effective way to calibrate the IMU on a missile, and the calibration results are more accurate in practical application. However, the introduction of a multi-axis RINS causes additional calibration errors, including non-orthogonality errors from mechanical processing and non-horizontality errors of the operating environment, which means that the multi-axis gimbals cannot be regarded as a high-accuracy turntable. For this missile application, after analyzing the relationship between the calibration error of the accelerometer scale factor and the non-orthogonality and non-horizontality angles, an innovative calibration procedure using the signals of the fiber optic gyro and a photoelectric encoder is proposed. The laboratory and vehicle experiment results validate the theory and prove that the proposed method relaxes the orthogonality requirement on the rotation axes and eliminates the strict application conditions of the system.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saunders, P.

    The majority of general-purpose low-temperature handheld radiation thermometers are severely affected by the size-of-source effect (SSE). Calibration of these instruments is pointless unless the SSE is accounted for in the calibration process. Traditional SSE measurement techniques, however, are costly and time consuming, and because the instruments are direct-reading in temperature, traditional SSE results are not easily interpretable, particularly by the general user. This paper describes a simplified method for measuring the SSE, suitable for second-tier calibration laboratories and requiring no additional equipment, and proposes a means of reporting SSE results on a calibration certificate that should be easily understood by the non-specialist user.

  12. New results of ground target based calibration of MOS on IRS

    NASA Astrophysics Data System (ADS)

    Schwarzer, Horst H.; Franz, Bryan A.; Neumann, Andreas; Suemnich, Karl-Heinz; Walzel, Thomas; Zimmermann, Gerhard

    2002-09-01

    The success of the Modular Optoelectronic Scanner MOS on the Indian Remote Sensing Satellite IRS-P3 during its 6-year mission has been based to a large extent on its sophisticated in-orbit calibration concept. When the internal lamp and the sun calibration failed in September 2000, we tested the possibility of ground target based (or vicarious) calibration of the MOS instruments to maintain the high data quality. This is essential for future monitoring of global changes in ocean coastal zones (phytoplankton, sediments, pollution, etc.) using spectral measurements of the VIS/NIR MOS spectral channels. The investigations have shown the suitability of a part of the Great Eastern Erg in the Sahara desert for this purpose. The satellite crosses this very homogeneous area every 24 days. Because of the good cloud-free conditions we can use 6-8 overpasses a year for calibration. The seasonal variability of the surface reflectance is very small, so that we obtain relative calibration data of sufficient accuracy even without ground truth measurements for most of the channels. The trend of this "vicarious" calibration corresponds very well with the previous trend of the failed lamp and sun calibration. Differences between the three methods will be discussed. In the paper we also present the results of a comparison between SeaWiFS and MOS data of comparable spectral channels from the Great Eastern Erg area. They confirm the suitability of this area for calibration purposes too.

  13. Current profilers and current meters: compass and tilt sensors errors and calibration

    NASA Astrophysics Data System (ADS)

    Le Menn, M.; Lusven, A.; Bongiovanni, E.; Le Dû, P.; Rouxel, D.; Lucas, S.; Pacaud, L.

    2014-08-01

    Current profilers and current meters have a magnetic compass and tilt sensors for relating measurements to a terrestrial reference frame. As compasses are sensitive to their magnetic environment, they must be calibrated in the configuration in which they will be used. A calibration platform for magnetic compasses and tilt sensors was built, based on a method developed in 2007, to correct angular errors and guarantee a measurement uncertainty for instruments mounted in mooring cages. As mooring cages can weigh up to 800 kg, it was necessary to find a suitable place to set up this platform, map the magnetic fields in this area and dimension the platform to withstand these loads. It was calibrated using a GPS positioning technique. The platform has a table that can be tilted to calibrate the tilt sensors. The measurement uncertainty of the system was evaluated. Sinusoidal corrections based on the anomalies created by soft and hard magnetic materials were tested, as well as manufacturers’ calibration methods.
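    The sinusoidal corrections mentioned above are conventionally written as a one-cycle plus two-cycle deviation curve fitted by least squares; the sketch below shows that form with made-up headings. Whether the platform's procedure uses exactly this parameterization is an assumption.

    ```python
    import numpy as np

    def fit_compass_deviation(true_headings_deg, compass_headings_deg):
        """Least-squares fit of the classical deviation curve
        d(h) = A + B*sin(h) + C*cos(h) + D*sin(2h) + E*cos(2h),
        where the one-cycle terms relate to hard iron and the two-cycle terms to soft iron."""
        h = np.radians(np.asarray(true_headings_deg, dtype=float))
        dev = np.asarray(compass_headings_deg, dtype=float) - np.degrees(h)
        dev = (dev + 180.0) % 360.0 - 180.0                 # wrap deviation to [-180, 180)
        basis = np.column_stack([np.ones_like(h), np.sin(h), np.cos(h),
                                 np.sin(2 * h), np.cos(2 * h)])
        coeffs, *_ = np.linalg.lstsq(basis, dev, rcond=None)
        return coeffs

    def corrected_heading(compass_heading_deg, coeffs):
        """Subtract the fitted deviation (evaluated at the compass heading, which is
        adequate for small deviations) from a raw reading."""
        h = np.radians(compass_heading_deg)
        basis = np.array([1.0, np.sin(h), np.cos(h), np.sin(2 * h), np.cos(2 * h)])
        return (compass_heading_deg - basis @ coeffs) % 360.0

    # Illustrative data: true platform headings (e.g. from GPS) vs. raw compass readings.
    true_h = np.arange(0.0, 360.0, 30.0)
    raw_h = true_h + 2.0 + 3.0 * np.sin(np.radians(true_h)) - 1.5 * np.cos(np.radians(2 * true_h))
    c = fit_compass_deviation(true_h, raw_h)
    print(np.round(c, 3))                 # approximately [2.0, 3.0, 0.0, 0.0, -1.5]
    print(corrected_heading(95.0, c))
    ```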

  14. Spectro-photometric determinations of Mn, Fe and Cu in aluminum master alloys

    NASA Astrophysics Data System (ADS)

    Rehan; Naveed, A.; Shan, A.; Afzal, M.; Saleem, J.; Noshad, M. A.

    2016-08-01

    Highly reliable, fast and cost-effective spectrophotometric methods have been developed for the determination of Mn, Fe and Cu in aluminum master alloys, based on calibration curves prepared from laboratory standards. The calibration curves are designed so as to give maximum sensitivity and minimum instrumental error (Mn 1-2 mg/100 ml, Fe 0.01-0.2 mg/100 ml and Cu 2-10 mg/100 ml). The developed spectrophotometric methods produce accurate results when analyzing Mn, Fe and Cu in certified reference materials. In particular, these methods are suitable for all types of Al-Mn, Al-Fe and Al-Cu master alloys (5%, 10%, 50%, etc.). Moreover, the sampling practices suggested herein include a reasonable amount of analytical sample, which truly represents the whole lot of a particular master alloy. A successive dilution technique was used to bring samples within the calibration curve range. Furthermore, the worked-out methods were also found suitable for the analysis of the same elements in ordinary aluminum alloys. However, it was observed that Cu showed considerable interference with Fe; the latter may not be accurately measured in the presence of Cu greater than 0.01%.
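    A minimal sketch of the calibration-curve workflow described above, with invented absorbance values: a straight line is fitted through the laboratory standards and then inverted, including the successive dilution factor, to report the concentration in the original sample. The element, range and readings are illustrative assumptions.

    ```python
    import numpy as np

    def build_calibration_curve(concentrations, absorbances):
        """Least-squares straight line A = m*c + b through the laboratory standards
        (Beer-Lambert behaviour is assumed over the working range)."""
        m, b = np.polyfit(concentrations, absorbances, 1)
        return m, b

    def concentration_from_absorbance(absorbance, m, b, dilution_factor=1.0):
        """Invert the calibration curve and undo any successive dilutions."""
        return (absorbance - b) / m * dilution_factor

    # Illustrative Fe standards in mg per 100 ml (within the 0.01-0.2 range quoted above).
    c_std = [0.02, 0.05, 0.10, 0.15, 0.20]
    a_std = [0.061, 0.148, 0.297, 0.449, 0.601]
    m, b = build_calibration_curve(c_std, a_std)
    print(concentration_from_absorbance(0.360, m, b, dilution_factor=10.0))
    ```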

  15. Calibration of a flexible measurement system based on industrial articulated robot and structured light sensor

    NASA Astrophysics Data System (ADS)

    Mu, Nan; Wang, Kun; Xie, Zexiao; Ren, Ping

    2017-05-01

    To realize online rapid measurement for complex workpieces, a flexible measurement system based on an articulated industrial robot with a structured light sensor mounted on the end-effector is developed. A method for calibrating the system parameters is proposed in which the hand-eye transformation parameters and the robot kinematic parameters are synthesized in the calibration process. An initial hand-eye calibration is first performed using a standard sphere as the calibration target. By applying the modified complete and parametrically continuous method, we establish a synthesized kinematic model that combines the initial hand-eye transformation and distal link parameters as a whole with the sensor coordinate system as the tool frame. According to the synthesized kinematic model, an error model is constructed based on spheres' center-to-center distance errors. Consequently, the error model parameters can be identified in a calibration experiment using a three-standard-sphere target. Furthermore, the redundancy of error model parameters is eliminated to ensure the accuracy and robustness of the parameter identification. Calibration and measurement experiments are carried out based on an ER3A-C60 robot. The experimental results show that the proposed calibration method enjoys high measurement accuracy, and this efficient and flexible system is suitable for online measurement in industrial scenes.

  16. Calibration of Reduced Dynamic Models of Power Systems using Phasor Measurement Unit (PMU) Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ning; Lu, Shuai; Singh, Ruchi

    2011-09-23

    Accuracy of a power system dynamic model is essential to the secure and efficient operation of the system. Lower confidence on model accuracy usually leads to conservative operation and lowers asset usage. To improve model accuracy, identification algorithms have been developed to calibrate parameters of individual components using measurement data from staged tests. To facilitate online dynamic studies for large power system interconnections, this paper proposes a model reduction and calibration approach using phasor measurement unit (PMU) data. First, a model reduction method is used to reduce the number of dynamic components. Then, a calibration algorithm is developed to estimate parameters of the reduced model. This approach will help to maintain an accurate dynamic model suitable for online dynamic studies. The performance of the proposed method is verified through simulation studies.

  17. Kinect based real-time position calibration for nasal endoscopic surgical navigation system

    NASA Astrophysics Data System (ADS)

    Fan, Jingfan; Yang, Jian; Chu, Yakui; Ma, Shaodong; Wang, Yongtian

    2016-03-01

    Unanticipated, reactive motion of the patient during skull-base tumor resection forces recalibration of the nasal endoscopic tracking system. To accommodate the calibration process to the patient's movement, this paper presents a Kinect-based real-time position calibration method for a nasal endoscopic surgical navigation system. In this method, a Kinect scanner is employed to acquire a volumetric point-cloud reconstruction of the patient's head during surgery. Then, a convex-hull-based registration algorithm aligns the real-time image of the patient's head with a model built from the CT scans performed during preoperative preparation, dynamically recalibrating the tracking system whenever movement is detected. Experimental results confirmed the robustness of the proposed method, with a total tracking error within 1 mm even under relatively violent motions. These results indicate that tracking accuracy can be maintained stably, and that calibration of the tracking system can be expedited under strongly interfering conditions, demonstrating high suitability for a wide range of surgical applications.

  18. Camera calibration: active versus passive targets

    NASA Astrophysics Data System (ADS)

    Schmalz, Christoph; Forster, Frank; Angelopoulou, Elli

    2011-11-01

    Traditionally, most camera calibrations rely on a planar target with well-known marks. However, the localization error of the marks in the image is a source of inaccuracy. We propose the use of high-resolution digital displays as active calibration targets to obtain more accurate calibration results for all types of cameras. The display shows a series of coded patterns to generate correspondences between world points and image points. This has several advantages. No special calibration hardware is necessary because suitable displays are practically ubiquitous. The method is fully automatic, and no identification of marks is necessary. For a coding scheme based on phase shifting, the localization accuracy is approximately independent of the camera's focus settings. Most importantly, higher accuracy can be achieved compared to passive targets, such as printed checkerboards. A rigorous evaluation is performed to substantiate this claim. Our active target method is compared to standard calibrations using a checkerboard target. We perform camera calibrations with different combinations of displays, cameras, and lenses, as well as with simulated images, and find markedly lower reprojection errors when using active targets. For example, in a stereo reconstruction task, the accuracy of a system calibrated with an active target is five times better.
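    For the phase-shifting coding scheme mentioned above, the wrapped phase at each pixel is typically recovered with a four-step arctangent formula, as in the sketch below (a simplified illustration, not the authors' full decoding pipeline).

    ```python
    import numpy as np

    def four_step_phase(i1, i2, i3, i4):
        """Wrapped phase from four patterns shifted by 90 degrees each:
        I_k = A + B*cos(phi + k*pi/2)  =>  phi = atan2(I4 - I2, I1 - I3)."""
        return np.arctan2(i4 - i2, i1 - i3)

    # Illustrative pixel: background 120, modulation 80, true phase 1.0 rad.
    A, B, phi = 120.0, 80.0, 1.0
    frames = [A + B * np.cos(phi + k * np.pi / 2) for k in range(4)]
    print(four_step_phase(*frames))   # approximately 1.0
    ```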

  19. A statistical method for estimating rates of soil development and ages of geologic deposits: A design for soil-chronosequence studies

    USGS Publications Warehouse

    Switzer, P.; Harden, J.W.; Mark, R.K.

    1988-01-01

    A statistical method for estimating rates of soil development in a given region based on calibration from a series of dated soils is used to estimate ages of soils in the same region that are not dated directly. The method is designed specifically to account for sampling procedures and uncertainties that are inherent in soil studies. Soil variation and measurement error, uncertainties in calibration dates and their relation to the age of the soil, and the limited number of dated soils are all considered. Maximum likelihood (ML) is employed to estimate a parametric linear calibration curve, relating soil development to time or age on suitably transformed scales. Soil variation on a geomorphic surface of a certain age is characterized by replicate sampling of soils on each surface; such variation is assumed to have a Gaussian distribution. The age of a geomorphic surface is described by older and younger bounds. This technique allows age uncertainty to be characterized by either a Gaussian distribution or by a triangular distribution using minimum, best-estimate, and maximum ages. The calibration curve is taken to be linear after suitable (in certain cases logarithmic) transformations, if required, of the soil parameter and age variables. Soil variability, measurement error, and departures from linearity are described in a combined fashion using Gaussian distributions with variances particular to each sampled geomorphic surface and the number of sample replicates. Uncertainty in age of a geomorphic surface used for calibration is described using three parameters by one of two methods. In the first method, upper and lower ages are specified together with a coverage probability; this specification is converted to a Gaussian distribution with the appropriate mean and variance. In the second method, "absolute" older and younger ages are specified together with a most probable age; this specification is converted to an asymmetric triangular distribution with mode at the most probable age. The statistical variability of the ML-estimated calibration curve is assessed by a Monte Carlo method in which simulated data sets repeatedly are drawn from the distributional specification; calibration parameters are reestimated for each such simulation in order to assess their statistical variability. Several examples are used for illustration. The age of undated soils in a related setting may be estimated from the soil data using the fitted calibration curve. A second simulation to assess age estimate variability is described and applied to the examples. © 1988 International Association for Mathematical Geology.

  20. Global calibration of multi-cameras with non-overlapping fields of view based on photogrammetry and reconfigurable target

    NASA Astrophysics Data System (ADS)

    Xia, Renbo; Hu, Maobang; Zhao, Jibin; Chen, Songlin; Chen, Yueling

    2018-06-01

    Multi-camera vision systems are often needed to achieve large-scale and high-precision measurement because these systems have larger fields of view (FOV) than a single camera. Multiple cameras may have no or narrow overlapping FOVs in many applications, which pose a huge challenge to global calibration. This paper presents a global calibration method for multi-cameras without overlapping FOVs based on photogrammetry technology and a reconfigurable target. Firstly, two planar targets are fixed together and made into a long target according to the distance between the two cameras to be calibrated. The relative positions of the two planar targets can be obtained by photogrammetric methods and used as invariant constraints in global calibration. Then, the reprojection errors of target feature points in the two cameras’ coordinate systems are calculated at the same time and optimized by the Levenberg–Marquardt algorithm to find the optimal solution of the transformation matrix between the two cameras. Finally, all the camera coordinate systems are converted to the reference coordinate system in order to achieve global calibration. Experiments show that the proposed method has the advantages of high accuracy (the RMS error is 0.04 mm) and low cost and is especially suitable for on-site calibration.

  1. Cloned plasmid DNA fragments as calibrators for controlling GMOs: different real-time duplex quantitative PCR methods.

    PubMed

    Taverniers, Isabel; Van Bockstaele, Erik; De Loose, Marc

    2004-03-01

    Analytical real-time PCR technology is a powerful tool for implementation of the GMO labeling regulations enforced in the EU. The quality of analytical measurement data obtained by quantitative real-time PCR depends on the correct use of calibrator and reference materials (RMs). For GMO methods of analysis, the choice of appropriate RMs is currently under debate. So far, genomic DNA solutions from certified reference materials (CRMs) are most often used as calibrators for GMO quantification by means of real-time PCR. However, due to some intrinsic features of these CRMs, errors may be expected in the estimations of DNA sequence quantities. In this paper, two new real-time PCR methods are presented for Roundup Ready soybean, in which two types of plasmid DNA fragments are used as calibrators. Single-target plasmids (STPs) diluted in a background of genomic DNA were used in the first method. Multiple-target plasmids (MTPs) containing both sequences in one molecule were used as calibrators for the second method. Both methods simultaneously detect a promoter 35S sequence as GMO-specific target and a lectin gene sequence as endogenous reference target in a duplex PCR. For the estimation of relative GMO percentages both "delta C(T)" and "standard curve" approaches are tested. Delta C(T) methods are based on direct comparison of measured C(T) values of both the GMO-specific target and the endogenous target. Standard curve methods measure absolute amounts of target copies or haploid genome equivalents. A duplex delta C(T) method with STP calibrators performed at least as well as a similar method with genomic DNA calibrators from commercial CRMs. Besides this, high quality results were obtained with a standard curve method using MTP calibrators. This paper demonstrates that plasmid DNA molecules containing either one or multiple target sequences form perfect alternative calibrators for GMO quantification and are especially suitable for duplex PCR reactions.
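    As a simplified illustration of the delta-C(T) idea discussed above, the sketch below applies the textbook delta-delta-Ct formula, which assumes both duplex targets amplify with close to 100% efficiency; the cycle-threshold values are invented, and the paper's own delta-C(T) and standard-curve implementations differ in detail.

    ```python
    def gmo_percentage_ddct(ct_gmo_sample, ct_ref_sample,
                            ct_gmo_calibrator, ct_ref_calibrator,
                            calibrator_gmo_percent):
        """Textbook delta-delta-Ct estimate of the GMO content of an unknown sample relative
        to a calibrator of known GMO percentage, assuming both targets amplify with close to
        100 % efficiency (i.e. a doubling of product per cycle)."""
        d_ct_sample = ct_gmo_sample - ct_ref_sample
        d_ct_calibrator = ct_gmo_calibrator - ct_ref_calibrator
        return calibrator_gmo_percent * 2.0 ** (-(d_ct_sample - d_ct_calibrator))

    # Illustrative duplex readings (p35S target vs. lectin reference); not real data.
    print(gmo_percentage_ddct(29.8, 24.1, 27.5, 24.0, calibrator_gmo_percent=1.0))
    ```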

  2. A transmission line method for the measurement of microwave permittivity and permeability

    NASA Astrophysics Data System (ADS)

    Lederer, P. G.

    1990-12-01

    A method for determining complex permittivity and permeability at microwave frequencies from two port S parameter measurements of lossy solids in coaxial or waveguide transmission lines is described. The use of the TRL (Through Reflect Line) calibration scheme allows the measuring system to be calibrated right up to the specimen faces thereby eliminating most of the sample cell from the measurement and allowing suitable materials to be molded directly into the specimen cell in order to eliminate air gaps between specimen and transmission line walls. Some illustrative measurements for dielectric and magnetic materials are presented.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yashchuk, V.V.; Takacs, P.; Anderson, E.H.

    A modulation transfer function (MTF) calibration method based on binary pseudorandom (BPR) gratings and arrays has been proven to be an effective MTF calibration method for interferometric microscopes and a scatterometer. Here we report on a further expansion of the application range of the method. We describe the MTF calibration of a 6 in. phase shifting Fizeau interferometer. Beyond providing a direct measurement of the interferometer's MTF, tests with a BPR array surface have revealed an asymmetry in the instrument's data processing algorithm that fundamentally limits its bandwidth. Moreover, the tests have illustrated the effects of the instrument's detrending and filtering procedures on power spectral density measurements. The details of the development of a BPR test sample suitable for calibration of scanning and transmission electron microscopes are also presented. Such a test sample is realized as a multilayer structure with the layer thicknesses of two materials corresponding to the BPR sequence. The investigations confirm the universal character of the method that makes it applicable to a large variety of metrology instrumentation with spatial wavelength bandwidths from a few nanometers to hundreds of millimeters.

  4. Fast calibration of electromagnetically tracked oblique-viewing rigid endoscopes.

    PubMed

    Liu, Xinyang; Rice, Christina E; Shekhar, Raj

    2017-10-01

    The oblique-viewing (i.e., angled) rigid endoscope is a commonly used tool in conventional endoscopic surgeries. The relative rotation between its two moveable parts, the telescope and the camera head, creates a rotation offset between an object and its projection in the camera image. A calibration method tailored to compensate for this offset is needed. We developed a fast calibration method for oblique-viewing rigid endoscopes suitable for clinical use. In contrast to prior approaches based on optical tracking, we used electromagnetic (EM) tracking as the external tracking hardware to improve compactness and practicality. Two EM sensors were mounted on the telescope and the camera head, respectively, with considerations to minimize EM tracking errors. Single-image calibration was incorporated into the method, and a sterilizable plate, laser-marked with the calibration pattern, was also developed. Furthermore, we proposed a general algorithm to estimate the rotation center in the camera image. Formulas for updating the camera matrix in terms of clockwise and counterclockwise rotations were also developed. The proposed calibration method was validated using a conventional [Formula: see text], 5-mm laparoscope. Freehand calibrations were performed using the proposed method, and the calibration time averaged 2 min and 8 s. The calibration accuracy was evaluated in a simulated clinical setting with several surgical tools present in the magnetic field of EM tracking. The root-mean-square re-projection error averaged 4.9 pixels (range 2.4-8.5 pixels, with image resolution of [Formula: see text]) for rotation angles ranging from [Formula: see text] to [Formula: see text]. We developed a method for fast and accurate calibration of oblique-viewing rigid endoscopes. The method was also designed to be performed in the operating room and will therefore support clinical translation of many emerging endoscopic computer-assisted surgical systems.

  5. Analysis of Lard in Lipstick Formulation Using FTIR Spectroscopy and Multivariate Calibration: A Comparison of Three Extraction Methods.

    PubMed

    Waskitho, Dri; Lukitaningsih, Endang; Sudjadi; Rohman, Abdul

    2016-01-01

    Analysis of lard extracted from a lipstick formulation containing castor oil has been performed using an FTIR spectroscopic method combined with multivariate calibration. Three different extraction methods were compared, namely a saponification method followed by liquid/liquid extraction with hexane/dichloromethane/ethanol/water, a saponification method followed by liquid/liquid extraction with dichloromethane/ethanol/water, and the Bligh & Dyer method using chloroform/methanol/water as the extracting solvent. Qualitative and quantitative analysis of lard were performed using principal component analysis (PCA) and partial least squares (PLS) analysis, respectively. The results showed that, in all samples prepared by the three extraction methods, PCA was capable of identifying lard in the spectral region of 1200-800 cm-1, with the best results obtained by the Bligh & Dyer method. Furthermore, PLS analysis in the same spectral region used for qualification showed that Bligh & Dyer was the most suitable extraction method, with the highest determination coefficient (R²) and the lowest root mean square error of calibration (RMSEC) as well as root mean square error of prediction (RMSEP) values.

  6. Isotope Inversion Experiment evaluating the suitability of calibration in surrogate matrix for quantification via LC-MS/MS-Exemplary application for a steroid multi-method.

    PubMed

    Suhr, Anna Catharina; Vogeser, Michael; Grimm, Stefanie H

    2016-05-30

    For quotable quantitative analysis of endogenous analytes in complex biological samples by isotope dilution LC-MS/MS, the creation of appropriate calibrators is a challenge, since analyte-free authentic material is in general not available. Thus, surrogate matrices are often used to prepare calibrators and controls. However, currently employed validation protocols do not include specific experiments to verify the suitability of a surrogate matrix calibration for quantification of authentic matrix samples. The aim of the study was the development of a novel validation experiment to test whether surrogate matrix based calibrators enable correct quantification of authentic matrix samples. The key element of the novel validation experiment is the inversion of nonlabelled analytes and their stable isotope labelled (SIL) counterparts with respect to their functions, i.e. the SIL compound is the analyte and the nonlabelled substance is employed as internal standard. As a consequence, both surrogate and authentic matrix are analyte-free with regard to the SIL analytes, which allows a comparison of both matrices. We called this approach the Isotope Inversion Experiment. As a figure of merit we defined the accuracy of inverse quality controls in authentic matrix quantified by means of a surrogate matrix calibration curve. As a proof-of-concept application, an LC-MS/MS assay addressing six corticosteroids (cortisol, cortisone, corticosterone, 11-deoxycortisol, 11-deoxycorticosterone, and 17-OH-progesterone) was chosen. The integration of the Isotope Inversion Experiment in the validation protocol for the steroid assay was successfully realized. The accuracy results of the inverse quality controls were, overall, very satisfactory. As a consequence, the suitability of a surrogate matrix calibration for quantification of the targeted steroids in human serum as authentic matrix could be successfully demonstrated. The Isotope Inversion Experiment fills a gap in the validation process for LC-MS/MS assays quantifying endogenous analytes. We consider it a valuable and convenient tool to evaluate the correct quantification of authentic matrix samples based on a calibration curve in surrogate matrix. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Reactive Burn Model Calibration for PETN Using Ultra-High-Speed Phase Contrast Imaging

    NASA Astrophysics Data System (ADS)

    Johnson, Carl; Ramos, Kyle; Bolme, Cindy; Sanchez, Nathaniel; Barber, John; Montgomery, David

    2017-06-01

    A 1D reactive burn model (RBM) calibration for a plastic bonded high explosive (HE) requires run-to-detonation data. In PETN (pentaerythritol tetranitrate, 1.65 g/cc) the shock to detonation transition (SDT) is on the order of a few millimeters. This rapid SDT imposes experimental length scales that preclude application of traditional calibration methods such as embedded electromagnetic gauge methods (EEGM), which are very effective when used to study 10-20 mm thick HE specimens. In recent work at Argonne National Laboratory's Advanced Photon Source, we have obtained run-to-detonation data in PETN using ultra-high-speed dynamic phase contrast imaging (PCI). A reactive burn model calibration valid for 1D shock waves is obtained using density profiles spanning the transition to detonation, as opposed to particle velocity profiles from EEGM. Particle swarm optimization (PSO) methods were used to operate the LANL hydrocode FLAG iteratively to refine SURF RBM parameters until a suitable parameter set was attained. These methods will be presented along with model validation simulations. The novel method described is generally applicable to 'sensitive' energetic materials, particularly those with areal densities amenable to radiography.
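
    The particle swarm optimization step mentioned above can be sketched generically. The loop below minimizes a stand-in misfit between a parameterized curve and a synthetic "measured" profile; in the actual workflow the objective would wrap FLAG/SURF hydrocode runs, which are not reproduced here.

```python
# Generic particle swarm optimization (PSO) loop of the kind used to refine
# reactive-burn-model parameters against measured density profiles. The
# objective below is a stand-in (a simple quadratic misfit to a synthetic
# "measured" profile); in the work described it would wrap a hydrocode run.
import numpy as np

rng = np.random.default_rng(0)

def misfit(params, measured):
    # Placeholder "simulation": a parameterized exponential density rise.
    x = np.linspace(0.0, 1.0, measured.size)
    simulated = params[0] * (1.0 - np.exp(-params[1] * x))
    return np.sum((simulated - measured) ** 2)

measured = 1.6 * (1.0 - np.exp(-4.0 * np.linspace(0, 1, 50)))  # synthetic target

n_particles, n_dim, n_iter = 20, 2, 100
lo, hi = np.array([0.1, 0.1]), np.array([5.0, 10.0])
pos = rng.uniform(lo, hi, size=(n_particles, n_dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([misfit(p, measured) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, n_dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([misfit(p, measured) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best parameters:", gbest)  # should approach (1.6, 4.0)
```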

  8. Forecasting financial asset processes: stochastic dynamics via learning neural networks.

    PubMed

    Giebel, S; Rainer, M

    2010-01-01

    Models for financial asset dynamics usually take into account their inherent unpredictable nature by including a suitable stochastic component in their process. Unknown (forward) values of financial assets (at a given time in the future) are usually estimated as expectations of the stochastic asset under a suitable risk-neutral measure. This estimation requires the stochastic model to be calibrated to some history of sufficient length in the past. Apart from the inherent limitations due to the stochastic nature of the process, the predictive power is also limited by the simplifying assumptions of the common calibration methods, such as maximum likelihood estimation and regression methods, often performed without weights on the historic time series, or with static weights only. Here we propose a novel method of "intelligent" calibration, using learning neural networks in order to dynamically adapt the parameters of the stochastic model. Hence we have a stochastic process with time-dependent parameters, the dynamics of the parameters being themselves learned continuously by a neural network. The back propagation in training the previous weights is limited to a certain memory length (in the examples we consider 10 previous business days), which is similar to the maximal time lag of autoregressive processes. We demonstrate the learning efficiency of the new algorithm by tracking the next-day forecasts for each of the EUR-TRY and EUR-HUF exchange rates.

  9. Calibration of the modulation transfer function of surface profilometers with binary pseudo-random test standards: expanding the application range

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yashchuk, Valeriy V.; Anderson, Erik H.; Barber, Samuel K.

    2011-03-14

    A modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) gratings and arrays [Proc. SPIE 7077-7 (2007), Opt. Eng. 47, 073602 (2008)] has been proven to be an effective MTF calibration method for a number of interferometric microscopes and a scatterometer [Nucl. Instr. and Meth. A616, 172 (2010)]. Here we report on a further expansion of the application range of the method. We describe the MTF calibration of a 6 inch phase shifting Fizeau interferometer. Beyond providing a direct measurement of the interferometer's MTF, tests with a BPR array surface have revealed an asymmetry in the instrument's data processing algorithm that fundamentally limits its bandwidth. Moreover, the tests have illustrated the effects of the instrument's detrending and filtering procedures on power spectral density measurements. The details of the development of a BPR test sample suitable for calibration of scanning and transmission electron microscopes are also presented. Such a test sample is realized as a multilayer structure with the layer thicknesses of two materials corresponding to the BPR sequence. The investigations confirm the universal character of the method that makes it applicable to a large variety of metrology instrumentation with spatial wavelength bandwidths from a few nanometers to hundreds of millimeters.

  10. Calibration of Valiantzas' reference evapotranspiration equations for the Pilbara region, Western Australia

    NASA Astrophysics Data System (ADS)

    Ahooghalandari, Matin; Khiadani, Mehdi; Jahromi, Mina Esmi

    2017-05-01

    Reference evapotranspiration (ET0) is a critical component of water resources management and planning. Different methods, with different data requirements, have been developed to estimate ET0. In this study, the Hargreaves, Turc, Oudin, Copais and Abtew methods, together with three forms of Valiantzas' formulas developed in recent years, were used to estimate ET0 for the Pilbara region of Western Australia. The estimated ET0 values from these methods were compared with those from the FAO-56 Penman-Monteith (PM) method. The results showed that the Copais method and two of Valiantzas' equations, in their original forms, are suitable for estimating ET0 for the study area. A Modified Honey-Bee Mating Optimization (MHBMO) algorithm was further implemented, and the three Valiantzas equations were calibrated for this southern-hemisphere region.
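
    For reference, one of the temperature-based equations compared in studies of this kind is the Hargreaves equation; the sketch below uses its commonly cited form, with the extraterrestrial radiation supplied by the user and the 0.0023 coefficient being the quantity that regional calibration typically adjusts. Values are illustrative only.

```python
# Hargreaves reference evapotranspiration (commonly cited form):
#   ET0 = 0.0023 * Ra * (Tmean + 17.8) * sqrt(Tmax - Tmin)
# with Ra the extraterrestrial radiation expressed as equivalent evaporation
# (mm/day). The 0.0023 coefficient is what regional calibration typically adjusts.
import math

def hargreaves_et0(t_max_c, t_min_c, ra_mm_per_day, coeff=0.0023):
    t_mean = 0.5 * (t_max_c + t_min_c)
    return coeff * ra_mm_per_day * (t_mean + 17.8) * math.sqrt(t_max_c - t_min_c)

# Invented example: a hot, dry Pilbara-like day.
print(hargreaves_et0(t_max_c=38.0, t_min_c=24.0, ra_mm_per_day=16.5))
```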

  11. A non-contact, thermal noise based method for the calibration of lateral deflection sensitivity in atomic force microscopy.

    PubMed

    Mullin, Nic; Hobbs, Jamie K

    2014-11-01

    Calibration of lateral forces and displacements has been a long-standing problem in lateral force microscopies. Recently, it was shown by Wagner et al. that the thermal noise spectrum of the first torsional mode may be used to calibrate the deflection sensitivity of the detector. This method is quick, non-destructive and may be performed in situ in air or liquid. Here we make a full quantitative comparison of the lateral inverse optical lever sensitivity obtained by the lateral thermal noise method and by the shape-independent method developed by Anderson et al. We find that the thermal method provides accurate results for a wide variety of rectangular cantilevers, provided that the geometry of the cantilever is suitable for torsional stiffness calibration by the torsional Sader method, that in-plane bending of the cantilever is eliminated or accounted for, and that any scaling of the lateral deflection signal between the measurement of the lateral thermal noise and the measurement of the lateral deflection is eliminated or corrected for. We also demonstrate that the thermal method may be used to characterize the linearity of the detector signal as a function of position, and find a deviation of less than 8% for the instrument used.
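
    The idea behind thermal-noise-based lateral sensitivity calibration can be sketched with a bare equipartition estimate. The mode-shape and fluid-loading corrections used in the actual methods are omitted, so this is only an order-of-magnitude illustration with invented inputs.

```python
# Simplified equipartition estimate of the lateral (torsional) deflection
# sensitivity from the area of the first torsional thermal noise peak.
# Mode-shape and fluid-loading correction factors used in practice are omitted,
# so this is only an order-of-magnitude sketch; all input values are invented.
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def lateral_sensitivity(k_phi_Nm_per_rad, tip_height_m, v2_thermal_V2, temp_K=295.0):
    """Return an approximate lateral sensitivity in m/V.

    Equipartition gives <theta^2> = kB*T / k_phi for the torsional spring;
    dividing the RMS angle by the RMS thermal voltage of the torsional peak
    converts volts to radians, and the tip height converts radians to lateral
    displacement at the tip.
    """
    theta_rms = math.sqrt(K_B * temp_K / k_phi_Nm_per_rad)   # rad
    rad_per_volt = theta_rms / math.sqrt(v2_thermal_V2)      # rad/V
    return tip_height_m * rad_per_volt                       # m/V

print(lateral_sensitivity(k_phi_Nm_per_rad=2e-9, tip_height_m=15e-6,
                          v2_thermal_V2=1e-8))
```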

  12. Local-scale spatial modelling for interpolating climatic temperature variables to predict agricultural plant suitability

    NASA Astrophysics Data System (ADS)

    Webb, Mathew A.; Hall, Andrew; Kidd, Darren; Minansy, Budiman

    2016-05-01

    Assessment of local spatial climatic variability is important in the planning of planting locations for horticultural crops. This study investigated three regression-based calibration methods (i.e. traditional versus two optimized methods) to relate short-term 12-month data series from 170 temperature loggers and 4 weather station sites with data series from nearby long-term Australian Bureau of Meteorology climate stations. The techniques trialled to interpolate climatic temperature variables, such as frost risk, growing degree days (GDDs) and chill hours, were regression kriging (RK), regression trees (RTs) and random forests (RFs). All three calibration methods produced accurate results, with the RK-based calibration method delivering the most accurate validation measures: coefficients of determination (R²) of 0.92, 0.97 and 0.95 and root-mean-square errors of 1.30, 0.80 and 1.31 °C, for daily minimum, daily maximum and hourly temperatures, respectively. Compared with the traditional method of calibration using direct linear regression between short-term and long-term stations, the RK-based calibration method improved R² and reduced root-mean-square error (RMSE) by at least 5 % and 0.47 °C for daily minimum temperature, 1 % and 0.23 °C for daily maximum temperature and 3 % and 0.33 °C for hourly temperature. Spatial modelling indicated insignificant differences between the interpolation methods, with the RK technique tending to be the slightly better method due to the high degree of spatial autocorrelation between logger sites.
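
    The "traditional" calibration referred to above amounts to a direct linear regression between an overlapping logger series and a long-term station series; a minimal sketch with synthetic data is shown below. The RK-based variant would additionally krige the regression residuals, which is not shown.

```python
# Sketch of the "traditional" calibration step: relate a short-term logger
# series to an overlapping long-term station series by direct linear regression,
# then report R^2 and RMSE. Data are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(1)
station = rng.uniform(2.0, 35.0, 365)                     # long-term station temps (degC)
logger = 1.5 + 0.95 * station + rng.normal(0, 1.0, 365)   # co-located logger temps

slope, intercept = np.polyfit(station, logger, deg=1)
predicted = intercept + slope * station

ss_res = np.sum((logger - predicted) ** 2)
ss_tot = np.sum((logger - logger.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
rmse = np.sqrt(np.mean((logger - predicted) ** 2))
print(f"slope={slope:.3f} intercept={intercept:.3f} R2={r2:.3f} RMSE={rmse:.2f} degC")
```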

  13. Development of a non-destructive method for determining protein nitrogen in a yellow fever vaccine by near infrared spectroscopy and multivariate calibration.

    PubMed

    Dabkiewicz, Vanessa Emídio; de Mello Pereira Abrantes, Shirley; Cassella, Ricardo Jorgensen

    2018-08-05

    Near infrared (NIR) spectroscopy with diffuse reflectance, associated with multivariate calibration, has as its main advantage the replacement of the physical separation of interferents by the mathematical separation of their signals, performed rapidly with no need for reagent consumption, chemical waste production or sample manipulation. Seeking to optimize quality control analyses, this spectroscopic analytical method was shown to be a viable alternative to the classical Kjeldahl method for the determination of protein nitrogen in yellow fever vaccine. The most suitable multivariate calibration was achieved by the partial least squares (PLS) method with multiplicative signal correction (MSC) treatment and mean centering (MC) of the data, using a minimum number of latent variables (LV) equal to 1, with the lowest root mean squared prediction error (0.00330) associated with the highest percentage (91%) of samples. Accuracy ranged from 95 to 105% recovery in the 4000-5184 cm-1 region. Copyright © 2018 Elsevier B.V. All rights reserved.
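
    A minimal sketch of this kind of calibration pipeline is shown below: multiplicative signal correction of the spectra followed by a one-latent-variable PLS model. It uses scikit-learn and random stand-in spectra, not the vaccine data or the exact preprocessing settings of the study.

```python
# Sketch of the calibration pipeline described above: multiplicative signal
# correction (MSC) of the spectra, mean centering (handled internally by PLS),
# and a partial least squares model with a single latent variable.
# Spectra and reference values are random stand-ins, not real vaccine data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def msc(spectra, reference=None):
    """Multiplicative signal correction against a reference (mean) spectrum."""
    ref = spectra.mean(axis=0) if reference is None else reference
    corrected = np.empty_like(spectra)
    for i, spec in enumerate(spectra):
        slope, intercept = np.polyfit(ref, spec, deg=1)
        corrected[i] = (spec - intercept) / slope
    return corrected, ref

rng = np.random.default_rng(2)
X = rng.normal(size=(40, 300))      # 40 spectra x 300 wavenumbers (stand-in)
y = rng.normal(size=40)             # protein nitrogen reference values (stand-in)

X_msc, ref = msc(X)
pls = PLSRegression(n_components=1)  # single latent variable, as in the model above
pls.fit(X_msc, y)
rmsec = np.sqrt(np.mean((pls.predict(X_msc).ravel() - y) ** 2))
print("RMSEC:", rmsec)
```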

  14. Using a heterodyne vibrometer in combination with pulse excitation for primary calibration of ultrasonic hydrophones in amplitude and phase

    NASA Astrophysics Data System (ADS)

    Weber, Martin; Wilkens, Volker

    2017-08-01

    A high-frequency vibrometer was used with ultrasonic pulse excitation in order to perform a primary hydrophone calibration. This approach enables the simultaneous characterization of the amplitude and phase transfer characteristics of ultrasonic hydrophones. The method allows a high frequency resolution within a considerably short measurement time. Furthermore, the uncertainty contributions of this approach were investigated and quantified. A membrane hydrophone was calibrated and the uncertainty budget for this measurement was determined. The calibration results are presented up to 70 MHz. The measurement results show good agreement with the results obtained by sinusoidal burst excitation through the use of the vibrometer and by a homodyne laser interferometer, with RMS deviations of approximately 3%-4% in the frequency range from 1 to 60 MHz. Further hydrophones were characterized up to 100 MHz with this procedure to demonstrate its suitability for very high frequency calibration.

  15. Dynamic calibration of pan-tilt-zoom cameras for traffic monitoring.

    PubMed

    Song, Kai-Tai; Tai, Jen-Chao

    2006-10-01

    Pan-tilt-zoom (PTZ) cameras have been widely used in recent years for monitoring and surveillance applications. These cameras provide flexible view selection as well as a wider observation range. This makes them suitable for vision-based traffic monitoring and enforcement systems. To employ PTZ cameras for image measurement applications, one first needs to calibrate the camera to obtain meaningful results. For instance, the accuracy of estimating vehicle speed depends on the accuracy of camera calibration and that of vehicle tracking results. This paper presents a novel calibration method for a PTZ camera overlooking a traffic scene. The proposed approach requires no manual operation to select the positions of special features. It automatically uses a set of parallel lane markings and the lane width to compute the camera parameters, namely, focal length, tilt angle, and pan angle. Image processing procedures have been developed for automatically finding parallel lane markings. Interesting experimental results are presented to validate the robustness and accuracy of the proposed method.

  16. Test Plan for a Calibration Demonstration System for the Reflected Solar Instrument for the Climate Absolute Radiance and Refractivity Observatory

    NASA Technical Reports Server (NTRS)

    Thome, Kurtis; McCorkel, Joel; Hair, Jason; McAndrew, Brendan; Daw, Adrian; Jennings, Donald; Rabin, Douglas

    2012-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission addresses the need to observe high-accuracy, long-term climate change trends and to use decadal change observations as the most critical method to determine the accuracy of climate change. One of the major objectives of CLARREO is to advance the accuracy of SI traceable absolute calibration at infrared and reflected solar wavelengths. This advance is required to reach the on-orbit absolute accuracy required to allow climate change observations to survive data gaps while remaining sufficiently accurate to observe climate change to within the uncertainty of the limit of natural variability. While these capabilities exist at NIST in the laboratory, there is a need to demonstrate that they can move successfully from NIST to NASA and/or instrument vendor capabilities for future spaceborne instruments. The current work describes the test plan for the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. The goal of the CDS is to allow the testing and evaluation of calibration approaches, alternate design and/or implementation approaches, and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. The end result of efforts with the SOLARIS CDS will be an SI-traceable error budget for reflectance retrieval using solar irradiance as a reference and methods for laboratory-based, absolute calibration suitable for climate-quality data collections.

  17. Atomic Resonance Radiation Energetics Investigation as a Diagnostic Method for Non-Equilibrium Hypervelocity Flows

    NASA Technical Reports Server (NTRS)

    Meyer, Scott A.; Bershader, Daniel; Sharma, Surendra P.; Deiwert, George S.

    1996-01-01

    Absorption measurements with a tunable vacuum ultraviolet light source have been proposed as a concentration diagnostic for atomic oxygen, and the viability of this technique is assessed in light of recent measurements. The instrumentation, as well as initial calibration measurements, have been reported previously. We report here additional calibration measurements performed to study the resonance broadening line shape for atomic oxygen. The application of this diagnostic is evaluated by considering the range of suitable test conditions and requirements, and by identifying issues that remain to be addressed.

  18. Development of a multi-residue analytical methodology based on liquid chromatography-tandem mass spectrometry (LC-MS/MS) for screening and trace level determination of pharmaceuticals in surface and wastewaters.

    PubMed

    Gros, Meritxell; Petrović, Mira; Barceló, Damiá

    2006-11-15

    This paper describes the development, optimization and validation of a method for the simultaneous determination of 29 multi-class pharmaceuticals using off-line solid phase extraction (SPE) followed by liquid chromatography-triple quadrupole mass spectrometry (LC-MS/MS). Target compounds include analgesics and non-steroidal anti-inflammatories (NSAIDs), lipid regulators, psychiatric drugs, anti-histaminics, an anti-ulcer agent, antibiotics and beta-blockers. Recoveries obtained were generally higher than 60% for both surface and wastewaters, with the exception of several compounds that yielded lower, but still acceptable, recoveries: ranitidine (50%), sotalol (50%), famotidine (50%) and mevastatin (34%). The overall variability of the method was below 15% for all compounds and all tested matrices. Method detection limits (MDL) varied between 1 and 30 ng/L and from 3 to 160 ng/L for surface and wastewaters, respectively. The precision of the method, calculated as relative standard deviation (R.S.D.), ranged from 0.2 to 6% and from 1 to 11% for inter- and intra-day analysis, respectively. A detailed study of matrix effects was performed in order to evaluate the suitability of different calibration approaches (matrix-matched external calibration, internal calibration, extract dilution) to reduce analyte suppression or enhancement during instrumental analysis. The main advantages and drawbacks of each approach are demonstrated, justifying the selection of internal standard calibration as the most suitable approach for our study. The developed analytical method was successfully applied to the analysis of pharmaceutical residues in WWTP influents and effluents, as well as in river water. For both river and wastewaters, the most ubiquitous compounds belonged to the groups of anti-inflammatories and analgesics, antibiotics, and lipid regulators, with acetaminophen, trimethoprim, ibuprofen, ketoprofen, atenolol, propranolol, mevastatin, carbamazepine and ranitidine being the most frequently detected compounds.

  19. Pattern-Based Inverse Modeling for Characterization of Subsurface Flow Models with Complex Geologic Heterogeneity

    NASA Astrophysics Data System (ADS)

    Golmohammadi, A.; Jafarpour, B.; M Khaninezhad, M. R.

    2017-12-01

    Calibration of heterogeneous subsurface flow models leads to ill-posed nonlinear inverse problems, where too many unknown parameters are estimated from limited response measurements. When the underlying parameters form complex (non-Gaussian) structured spatial connectivity patterns, classical variogram-based geostatistical techniques cannot describe the underlying connectivity patterns. Modern pattern-based geostatistical methods that incorporate higher-order spatial statistics are more suitable for describing such complex spatial patterns. Moreover, when the underlying unknown parameters are discrete (geologic facies distribution), conventional model calibration techniques that are designed for continuous parameters cannot be applied directly. In this paper, we introduce a novel pattern-based model calibration method to reconstruct discrete and spatially complex facies distributions from dynamic flow response data. To reproduce complex connectivity patterns during model calibration, we impose a feasibility constraint to ensure that the solution follows the expected higher-order spatial statistics. For model calibration, we adopt a regularized least-squares formulation, involving data mismatch, pattern connectivity, and feasibility constraint terms. Using an alternating directions optimization algorithm, the regularized objective function is divided into a continuous model calibration problem, followed by mapping the solution onto the feasible set. The feasibility constraint to honor the expected spatial statistics is implemented using a supervised machine learning algorithm. The two steps of the model calibration formulation are repeated until the convergence criterion is met. Several numerical examples are used to evaluate the performance of the developed method.

  20. Determination of relative ion chamber calibration coefficients from depth-ionization measurements in clinical electron beams

    NASA Astrophysics Data System (ADS)

    Muir, B. R.; McEwen, M. R.; Rogers, D. W. O.

    2014-10-01

    A method is presented to obtain ion chamber calibration coefficients relative to secondary standard reference chambers in electron beams using depth-ionization measurements. Results are obtained as a function of depth and average electron energy at depth in 4, 8, 12 and 18 MeV electron beams from the NRC Elekta Precise linac. The PTW Roos, Scanditronix NACP-02, PTW Advanced Markus and NE 2571 ion chambers are investigated. The challenges and limitations of the method are discussed. The proposed method produces useful data at shallow depths. At depths past the reference depth, small shifts in positioning or drifts in the incident beam energy affect the results, thereby providing a built-in test of incident electron energy drifts and/or chamber set-up. Polarity corrections for ion chambers as a function of average electron energy at depth agree with literature data. The proposed method produces results consistent with those obtained using the conventional calibration procedure while gaining much more information about the behavior of the ion chamber with similar data acquisition time. Measurement uncertainties in calibration coefficients obtained with this method are estimated to be less than 0.5%. These results open up the possibility of using depth-ionization measurements to yield chamber ratios which may be suitable for primary standards-level dissemination.

  1. Method of loading organic materials with group III plus lanthanide and actinide elements

    DOEpatents

    Bell, Zane W [Oak Ridge, TN; Huei-Ho, Chuen [Oak Ridge, TN; Brown, Gilbert M [Knoxville, TN; Hurlbut, Charles [Sweetwater, TX

    2003-04-08

    Disclosed is a composition of matter comprising a tributyl phosphate complex of a group 3, lanthanide, actinide, or group 13 salt in an organic carrier and a method of making the complex. These materials are suitable for use in solid or liquid organic scintillators, as well as in x-ray absorption standards, x-ray fluorescence standards, and neutron detector calibration standards.

  2. Development of buried wire gages for measurement of wall shear stress in Blastane experiments

    NASA Technical Reports Server (NTRS)

    Murthy, S. V.; Steinle, F. W.

    1986-01-01

    Buried Wire Gages operated from a Constant Temperature Anemometer System are among the special types of instrumentation to be used in the Boundary Layer Apparatus for Subsonic and Transonic flow Affected by Noise Environment (BLASTANE). These Gages are of a new type and need to be adapted for specific applications. Methods were developed to fabricate Gage inserts and mount those in the BLASTANE Instrumentation Plugs. A large number of Gages were prepared and operated from a Constant Temperature Anemometer System to derive some of the calibration constants for application to fluid-flow wall shear-stress measurements. The final stage of the calibration was defined, but could not be accomplished because of non-availability of a suitable flow simulating apparatus. This report provides a description of the Buried Wire Gage technique, an explanation of the method evolved for making proper Gages and the calibration constants, namely Temperature Coefficient of Resistance and Conduction Loss Factor.

  3. Calibration of the modulation transfer function of surface profilometers with binary pseudo-random test standards: Expanding the application range

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yashchuk, Valeriy V; Anderson, Erik H.; Barber, Samuel K.

    2010-07-26

    A modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) gratings and arrays [Proc. SPIE 7077-7 (2007), Opt. Eng. 47(7), 073602-1-5 (2008)] has been proven to be an effective MTF calibration method for a number of interferometric microscopes and a scatterometer [Nucl. Instr. and Meth. A 616, 172-82 (2010)]. Here we report on a significant expansion of the application range of the method. We describe the MTF calibration of a 6 inch phase shifting Fizeau interferometer. Beyond providing a direct measurement of the interferometer's MTF, tests with a BPR array surface have revealed an asymmetry in the instrument's data processing algorithm that fundamentally limits its bandwidth. Moreover, the tests have illustrated the effects of the instrument's detrending and filtering procedures on power spectral density measurements. The details of the development of a BPR test sample suitable for calibration of scanning and transmission electron microscopes are also presented. Such a test sample is realized as a multilayer structure with the layer thicknesses of two materials corresponding to the BPR sequence. The investigations confirm the universal character of the method that makes it applicable to a large variety of metrology instrumentation with spatial wavelength bandwidths from a few nanometers to hundreds of millimeters.

  4. Analysis of Natural Toxins by Liquid Chromatography-Chemiluminescence Nitrogen Detection and Application to the Preparation of Certified Reference Materials.

    PubMed

    Thomas, Krista; Wechsler, Dominik; Chen, Yi-Min; Crain, Sheila; Quilliam, Michael A

    2016-09-01

    The implementation of instrumental analytical methods such as LC-MS for routine monitoring of toxins requires the availability of accurate calibration standards. This is a challenge because many toxins are rare, expensive, dangerous to handle, and/or unstable, and simple gravimetric procedures are not reliable for establishing accurate concentrations in solution. NMR has served as one method of qualitative and quantitative characterization of toxin calibration solution Certified Reference Materials (CRMs). LC with chemiluminescence N detection (LC-CLND) was selected as a complementary method for comprehensive characterization of CRMs because it provides a molar response to N. Here we report on our investigation of LC-CLND as a method suitable for quantitative analysis of nitrogenous toxins. It was demonstrated that a wide range of toxins could be analyzed quantitatively by LC-CLND. Furthermore, equimolar responses among diverse structures were established and it was shown that a single high-purity standard such as caffeine could be used for instrument calibration. The limit of detection was approximately 0.6 ng N. Measurement of several of Canada's National Research Council toxin CRMs with caffeine as the calibrant showed precision averaging 2% RSD and accuracy ranging from 97 to 102%. Application of LC-CLND to the production of calibration solution CRMs and the establishment of traceability of measurement results are presented.
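
    The equimolar nitrogen response is what allows a single calibrant such as caffeine to quantify structurally unrelated toxins; the sketch below shows the arithmetic with invented peak areas, injection volumes and analyte properties.

```python
# Sketch of quantitation with an equimolar nitrogen detector: a caffeine
# calibration gives response per mole of nitrogen, and the analyte
# concentration follows from its own nitrogen count and molar mass.
# All numerical values are invented.

def conc_from_clnd(area_analyte, resp_per_mol_N, n_nitrogen, molar_mass_g_mol,
                   injection_volume_L):
    mol_N = area_analyte / resp_per_mol_N          # moles of N in the peak
    mol_analyte = mol_N / n_nitrogen               # moles of analyte
    return mol_analyte * molar_mass_g_mol / injection_volume_L  # g/L

# Calibrate response with caffeine (C8H10N4O2, 4 N per molecule): suppose a
# 10 uL injection of a 5.0e-5 mol/L caffeine solution gives area 1.2e6.
mol_N_injected = 5.0e-5 * 10e-6 * 4
resp_per_mol_N = 1.2e6 / mol_N_injected

# Quantify a toxin peak (hypothetical: 3 N atoms, M = 300 g/mol, area 4.0e5).
print(conc_from_clnd(4.0e5, resp_per_mol_N, n_nitrogen=3,
                     molar_mass_g_mol=300.0, injection_volume_L=10e-6))
```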

  5. High-efficiency non-uniformity correction for wide dynamic linear infrared radiometry system

    NASA Astrophysics Data System (ADS)

    Li, Zhou; Yu, Yi; Tian, Qi-Jie; Chang, Song-Tao; He, Feng-Yun; Yin, Yan-He; Qiao, Yan-Feng

    2017-09-01

    Several different integration times are always set for a wide-dynamic-range linear infrared radiometry system with continuously variable integration time; therefore, traditional calibration-based non-uniformity corrections (NUC) are usually conducted one integration time at a time and require several calibration sources, which makes calibration and the NUC process time-consuming. In this paper, the differences between NUC coefficients at different integration times are discussed, and a novel NUC method, called high-efficiency NUC, which builds on traditional calibration-based non-uniformity correction, is proposed. It obtains the correction coefficients for all integration times over the whole linear dynamic range by recording only three different images of a standard blackbody. Firstly, the mathematical procedure of the proposed non-uniformity correction method is validated, and then its performance is demonstrated on a 400 mm diameter ground-based infrared radiometry system. Experimental results show that the mean value of the Normalized Root Mean Square (NRMS) error is reduced from 3.78% to 0.24% by the proposed method. In addition, the results at 4 ms and 70 °C prove that this method has higher accuracy than traditional calibration-based NUC, while a good correction effect is still obtained at other integration times and temperatures. Moreover, it greatly reduces the number of correction integration-time and temperature sampling points, and is characterized by good real-time performance, making it suitable for field measurement.
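
    For context, the classic two-point calibration-based NUC that such methods build on can be sketched as follows; the proposed three-image, all-integration-time extension is not reproduced, and the detector model and fixed-pattern noise below are simulated.

```python
# Classic two-point calibration-based NUC for reference: per-pixel gain and
# offset are derived from two uniform blackbody images (low and high
# temperature) so that every pixel maps onto the frame mean. The proposed
# "high-efficiency" extension across integration times is not reproduced here.
import numpy as np

rng = np.random.default_rng(3)
shape = (256, 320)
true_gain = rng.normal(1.0, 0.05, shape)     # simulated fixed-pattern gain
true_offset = rng.normal(0.0, 20.0, shape)   # simulated fixed-pattern offset

def detector(radiance):
    return true_gain * radiance + true_offset  # simple linear pixel response

low, high = detector(1000.0), detector(3000.0)            # two blackbody frames
gain = (high.mean() - low.mean()) / (high - low)          # per-pixel correction gain
offset = low.mean() - gain * low                          # per-pixel correction offset

raw = detector(2000.0)                                    # scene at another level
corrected = gain * raw + offset
print("residual non-uniformity (std):", corrected.std())  # ~0 after correction
```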

  6. Application of miniaturized near-infrared spectroscopy for quality control of extemporaneous orodispersible films.

    PubMed

    Foo, Wen Chin; Widjaja, Effendi; Khong, Yuet Mei; Gokhale, Rajeev; Chan, Sui Yung

    2018-02-20

    Extemporaneous oral preparations are routinely compounded in the pharmacy due to a lack of suitable formulations for special populations. Such small-scale pharmacy preparations also present an avenue for individualized pharmacotherapy. Orodispersible films (ODF) have increasingly been evaluated as a suitable dosage form for extemporaneous oral preparations. Nevertheless, as with all other extemporaneous preparations, safety and quality remain a concern. Although the United States Pharmacopeia (USP) recommends analytical testing of compounded preparations for quality assurance, pharmaceutical assays are typically not routinely performed for such non-sterile pharmacy preparations, due to the complexity and high cost of conventional assay methods such as high performance liquid chromatography (HPLC). Spectroscopic methods including Raman, infrared and near-infrared spectroscopy have been successfully applied as quality control tools in industry. The state-of-the-art benchtop spectrometers used in those studies have the advantage of superior resolution and performance, but are not suitable for use in a small-scale pharmacy setting. In this study, we investigated the application of a miniaturized near infrared (NIR) spectrometer as a quality control tool for identification and quantification of drug content in extemporaneous ODFs. Miniaturized NIR spectroscopy is suitable for small-scale pharmacy applications in view of its small size, portability, simple user interface, rapid measurement and real-time prediction results. Nevertheless, the challenge with miniaturized NIR spectroscopy is its lower resolution compared to state-of-the-art benchtop equipment. We have successfully developed NIR spectroscopy calibration models for identification of ODFs containing five different drugs, and for quantification of drug content in ODFs containing 2-10 mg ondansetron (OND). The qualitative model for drug identification produced 100% prediction accuracy. The quantitative model to predict OND drug content in ODFs was divided into two calibrations for improved accuracy: Calibration I and II covered the 2-4 mg and 4-10 mg ranges, respectively. Validation was performed for method accuracy, linearity and precision. In conclusion, this study demonstrates the feasibility of miniaturized NIR spectroscopy as a quality control tool for small-scale pharmacy preparations. Due to its non-destructive nature, every dosage unit can be tested, thus affording a positive impact on patient safety. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Salting-out assisted liquid-liquid extraction and partial least squares regression to assay low molecular weight polycyclic aromatic hydrocarbons leached from soils and sediments

    NASA Astrophysics Data System (ADS)

    Bressan, Lucas P.; do Nascimento, Paulo Cícero; Schmidt, Marcella E. P.; Faccin, Henrique; de Machado, Leandro Carvalho; Bohrer, Denise

    2017-02-01

    A novel method was developed to determine low molecular weight polycyclic aromatic hydrocarbons in aqueous leachates from soils and sediments using a salting-out assisted liquid-liquid extraction, synchronous fluorescence spectrometry and a multivariate calibration technique. Several experimental parameters were controlled and the optimum conditions were: sodium carbonate as the salting-out agent at a concentration of 2 mol L-1, 3 mL of acetonitrile as extraction solvent, 6 mL of aqueous leachate, vortexing for 5 min and centrifuging at 4000 rpm for 5 min. The partial least squares calibration was optimized to the lowest values of root mean squared error and five latent variables were chosen for each of the targeted compounds. The regression coefficients for the true versus predicted concentrations were higher than 0.99. Figures of merit for the multivariate method were calculated, namely sensitivity, multivariate detection limit and multivariate quantification limit. The selectivity was also evaluated and other polycyclic aromatic hydrocarbons did not interfere in the analysis. Likewise, high performance liquid chromatography was used as a comparative methodology, and the regression analysis between the methods showed no statistical difference (t-test). The proposed methodology was applied to soils and sediments of a Brazilian river and the recoveries ranged from 74.3% to 105.8%. Overall, the proposed methodology was suitable for the targeted compounds, showing that the extraction method can be applied to spectrofluorometric analysis and that the multivariate calibration is also suitable for these compounds in leachates from real samples.

  8. The Kjeldahl method as a primary reference procedure for total protein in certified reference materials used in clinical chemistry. I. A review of Kjeldahl methods adopted by laboratory medicine.

    PubMed

    Chromý, Vratislav; Vinklárková, Bára; Šprongl, Luděk; Bittová, Miroslava

    2015-01-01

    We found previously that albumin-calibrated total protein in certified reference materials causes unacceptable positive bias in analysis of human sera. The simplest way to cure this defect is the use of human-based serum/plasma standards calibrated by the Kjeldahl method. Such standards, commutative with serum samples, will compensate for bias caused by lipids and bilirubin in most human sera. To find a suitable primary reference procedure for total protein in reference materials, we reviewed Kjeldahl methods adopted by laboratory medicine. We found two methods recommended for total protein in human samples: an indirect analysis based on total Kjeldahl nitrogen corrected for its nonprotein nitrogen and a direct analysis made on isolated protein precipitates. The methods found will be assessed in a subsequent article.

  9. A counting-weighted calibration method for a field-programmable-gate-array-based time-to-digital converter

    NASA Astrophysics Data System (ADS)

    Chen, Yuan-Ho

    2017-05-01

    In this work, we propose a counting-weighted calibration method for a field-programmable-gate-array (FPGA)-based time-to-digital converter (TDC) to provide non-linearity calibration for use in positron emission tomography (PET) scanners. To deal with the non-linearity in the FPGA, we developed a counting-weighted delay line (CWD) to count the delay time of the delay cells in the TDC in order to reduce the differential non-linearity (DNL) values based on code density counts. The performance of the proposed CWD-TDC with regard to linearity far exceeds that of a TDC with a traditional tapped delay line (TDL) architecture, without the need for non-linearity calibration. When implemented in a Xilinx Virtex-5 FPGA device, the proposed CWD-TDC achieved a time resolution of 60 ps with integral non-linearity (INL) and DNL of [-0.54, 0.24] and [-0.66, 0.65] least-significant-bits (LSB), respectively. This is a clear indication of the suitability of the proposed FPGA-based CWD-TDC for use in PET scanners.
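
    The code-density test that underlies this kind of non-linearity characterization can be sketched directly: with uniformly distributed hits, each bin's relative count gives the DNL and its running sum the INL. The delay-cell widths below are simulated, not measured FPGA data.

```python
# Code-density test of the kind used to characterize TDC non-linearity:
# with uniformly distributed hits, each bin's count relative to the mean
# gives DNL, and the running sum gives INL (both in LSB). Synthetic data.
import numpy as np

rng = np.random.default_rng(4)
n_bins = 128
# Simulate unequal delay-cell widths (non-uniform bin widths), then sample hits.
widths = rng.uniform(0.7, 1.3, n_bins)
edges = np.concatenate(([0.0], np.cumsum(widths)))
hits = rng.uniform(0.0, edges[-1], 200_000)
counts, _ = np.histogram(hits, bins=edges)

dnl = counts / counts.mean() - 1.0   # differential non-linearity per code (LSB)
inl = np.cumsum(dnl)                 # integral non-linearity (LSB)
print(f"DNL in [{dnl.min():.2f}, {dnl.max():.2f}] LSB, "
      f"INL in [{inl.min():.2f}, {inl.max():.2f}] LSB")
```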

  10. Calibration of CR-39-based thoron progeny device.

    PubMed

    Fábián, F; Csordás, A; Shahrokhi, A; Somlai, J; Kovács, T

    2014-07-01

    Radon isotopes and their progenies have a proven significant role in respiratory tumour formation. In most cases, the radiological effect of one of the radon isotopes (thoron) and its progenies has been neglected, together with its measurement technique; however, recent surveys have proved that thoron can be expected in flats and workplaces in Europe. Detectors based on different track detector measurement technologies have recently become widespread for measuring thoron progenies; however, their calibration is not yet completely elaborated. This study deals with the calibration of the track detector measurement method suitable for measuring thoron progenies, using different devices with measurement techniques capable of measuring several progenies (Pylon AB5 and WLx, Sarad EQF 3220). The calibration factor values related to the thoron progeny monitors, the measurement uncertainty, reproducibility and other parameters were determined using the calibration chamber. In the future, the effects of different parameters (aerosol distribution, etc.) will be determined. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  11. Parameter regionalization of a monthly water balance model for the conterminous United States

    USGS Publications Warehouse

    Bock, Andrew R.; Hay, Lauren E.; McCabe, Gregory J.; Markstrom, Steven L.; Atkinson, R. Dwight

    2016-01-01

    A parameter regionalization scheme to transfer parameter values from gaged to ungaged areas for a monthly water balance model (MWBM) was developed and tested for the conterminous United States (CONUS). The Fourier Amplitude Sensitivity Test, a global-sensitivity algorithm, was implemented on a MWBM to generate parameter sensitivities on a set of 109 951 hydrologic response units (HRUs) across the CONUS. The HRUs were grouped into 110 calibration regions based on similar parameter sensitivities. Subsequently, measured runoff from 1575 streamgages within the calibration regions were used to calibrate the MWBM parameters to produce parameter sets for each calibration region. Measured and simulated runoff at the 1575 streamgages showed good correspondence for the majority of the CONUS, with a median computed Nash–Sutcliffe efficiency coefficient of 0.76 over all streamgages. These methods maximize the use of available runoff information, resulting in a calibrated CONUS-wide application of the MWBM suitable for providing estimates of water availability at the HRU resolution for both gaged and ungaged areas of the CONUS.
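
    The Nash-Sutcliffe efficiency quoted above is straightforward to compute; the sketch below uses invented monthly runoff values.

```python
# Nash-Sutcliffe efficiency (NSE) used above to score simulated vs. measured
# runoff: 1 - sum((sim - obs)^2) / sum((obs - mean(obs))^2). Synthetic data.
import numpy as np

def nse(observed, simulated):
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1.0 - np.sum((simulated - observed) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

obs = np.array([12.0, 30.5, 22.1, 8.4, 15.9, 40.2])   # invented monthly runoff
sim = np.array([11.2, 28.9, 24.0, 9.1, 14.7, 37.5])
print(f"NSE = {nse(obs, sim):.2f}")   # 1.0 is a perfect fit; <0 is worse than the mean
```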

  12. Parameter regionalization of a monthly water balance model for the conterminous United States

    NASA Astrophysics Data System (ADS)

    Bock, Andrew R.; Hay, Lauren E.; McCabe, Gregory J.; Markstrom, Steven L.; Atkinson, R. Dwight

    2016-07-01

    A parameter regionalization scheme to transfer parameter values from gaged to ungaged areas for a monthly water balance model (MWBM) was developed and tested for the conterminous United States (CONUS). The Fourier Amplitude Sensitivity Test, a global-sensitivity algorithm, was implemented on a MWBM to generate parameter sensitivities on a set of 109 951 hydrologic response units (HRUs) across the CONUS. The HRUs were grouped into 110 calibration regions based on similar parameter sensitivities. Subsequently, measured runoff from 1575 streamgages within the calibration regions were used to calibrate the MWBM parameters to produce parameter sets for each calibration region. Measured and simulated runoff at the 1575 streamgages showed good correspondence for the majority of the CONUS, with a median computed Nash-Sutcliffe efficiency coefficient of 0.76 over all streamgages. These methods maximize the use of available runoff information, resulting in a calibrated CONUS-wide application of the MWBM suitable for providing estimates of water availability at the HRU resolution for both gaged and ungaged areas of the CONUS.

  13. Calibration Of Partial-Pressure-Of-Oxygen Sensors

    NASA Technical Reports Server (NTRS)

    Yount, David W.; Heronimus, Kevin

    1995-01-01

    A report has been released that analyzes, and discusses improvements in, the procedure for calibrating partial-pressure-of-oxygen sensors to satisfy Spacelab calibration requirements. The sensors exhibit fast drift, which results in a short calibration period that is not suitable for Spacelab. By assessing the complete process of determining the total available drift range, the calibration procedure was modified to eliminate errors and still satisfy the requirements without compromising the integrity of the system.

  14. A multimethod Global Sensitivity Analysis to aid the calibration of geomechanical models via time-lapse seismic data

    NASA Astrophysics Data System (ADS)

    Price, D. C.; Angus, D. A.; Garcia, A.; Fisher, Q. J.; Parsons, S.; Kato, J.

    2018-03-01

    Time-lapse seismic attributes are used extensively in the history matching of production simulator models. However, although proven to contain information regarding production-induced stress change, they are typically only loosely (i.e. qualitatively) used to calibrate geomechanical models. In this study we conduct a multimethod Global Sensitivity Analysis (GSA) to assess the feasibility of, and aid, the quantitative calibration of geomechanical models via near-offset time-lapse seismic data, specifically the calibration of the mechanical properties of the overburden. Via the GSA, we analyse the near-offset overburden seismic traveltimes from over 4000 perturbations of a Finite Element (FE) geomechanical model of a typical High Pressure High Temperature (HPHT) reservoir in the North Sea. We find that, out of an initially large set of material properties, the near-offset overburden traveltimes are primarily affected by Young's modulus and the effective stress (i.e. Biot) coefficient. The unexpected significance of the Biot coefficient highlights the importance of modelling fluid flow and pore pressure outside of the reservoir. The FE model is complex and highly nonlinear. Multiple combinations of model parameters can yield equally possible model realizations. Consequently, numerical calibration via a large number of random model perturbations is unfeasible. However, the significant differences in traveltime results suggest that more sophisticated calibration methods could potentially be feasible for finding numerous suitable solutions. The results of the time-varying GSA demonstrate how acquiring multiple vintages of time-lapse seismic data can be advantageous. However, they also suggest that significant overburden near-offset seismic time-shifts, useful for model calibration, may take up to 3 years after the start of production to manifest. Due to the nonlinearity of the model behaviour, similar uncertainty in the reservoir mechanical properties appears to influence overburden traveltime to a much greater extent. Therefore, the reservoir properties must be known to a suitable degree of accuracy before calibration of the overburden can be considered.

  15. Influence of Individual Differences on the Calculation Method for FBG-Type Blood Pressure Sensors

    PubMed Central

    Koyama, Shouhei; Ishizawa, Hiroaki; Fujimoto, Keisaku; Chino, Shun; Kobayashi, Yuka

    2016-01-01

    In this paper, we propose a blood pressure calculation and associated measurement method that uses a fiber Bragg grating (FBG) sensor. There are several points at which the pulse can be measured on the surface of the human body, and when an FBG sensor is located at any of these points, the pulse wave signal can be measured. The measured waveform is similar to the acceleration pulse wave. The pulse wave signal changes depending on several factors, including whether or not the individual is healthy and/or elderly. The measured pulse wave signal can be used to calculate the blood pressure using a calibration curve, which is constructed by a partial least squares (PLS) regression analysis using a reference blood pressure and the pulse wave signal. In this paper, we focus on the influence of individual differences on the blood pressure calculated from each calibration curve. In our study, the blood pressures calculated from the individual and overall calibration curves were compared, and our results show that the blood pressure calculated from the overall calibration curve had a lower measurement accuracy than that based on an individual calibration curve. We also found that the influence of individual differences on the calculated blood pressure when using the FBG sensor method was very low. Therefore, the FBG sensor method that we developed for measuring blood pressure was found to be suitable for use by many people. PMID:28036015

  16. Influence of Individual Differences on the Calculation Method for FBG-Type Blood Pressure Sensors.

    PubMed

    Koyama, Shouhei; Ishizawa, Hiroaki; Fujimoto, Keisaku; Chino, Shun; Kobayashi, Yuka

    2016-12-28

    In this paper, we propose a blood pressure calculation and associated measurement method that uses a fiber Bragg grating (FBG) sensor. There are several points at which the pulse can be measured on the surface of the human body, and when an FBG sensor is located at any of these points, the pulse wave signal can be measured. The measured waveform is similar to the acceleration pulse wave. The pulse wave signal changes depending on several factors, including whether or not the individual is healthy and/or elderly. The measured pulse wave signal can be used to calculate the blood pressure using a calibration curve, which is constructed by a partial least squares (PLS) regression analysis using a reference blood pressure and the pulse wave signal. In this paper, we focus on the influence of individual differences on the blood pressure calculated from each calibration curve. In our study, the blood pressures calculated from the individual and overall calibration curves were compared, and our results show that the blood pressure calculated from the overall calibration curve had a lower measurement accuracy than that based on an individual calibration curve. We also found that the influence of individual differences on the calculated blood pressure when using the FBG sensor method was very low. Therefore, the FBG sensor method that we developed for measuring blood pressure was found to be suitable for use by many people.

  17. Experimental Approach for the Uncertainty Assessment of 3D Complex Geometry Dimensional Measurements Using Computed Tomography at the mm and Sub-mm Scales

    PubMed Central

    Jiménez, Roberto; Torralba, Marta; Yagüe-Fabra, José A.; Ontiveros, Sinué; Tosello, Guido

    2017-01-01

    The dimensional verification of miniaturized components with 3D complex geometries is particularly challenging. Computed Tomography (CT) can represent a suitable alternative solution to micro metrology tools based on optical and tactile techniques. However, the establishment of CT systems’ traceability when measuring 3D complex geometries is still an open issue. In this work, an alternative method for the measurement uncertainty assessment of 3D complex geometries by using CT is presented. The method is based on the micro-CT system Maximum Permissible Error (MPE) estimation, determined experimentally by using several calibrated reference artefacts. The main advantage of the presented method is that a previous calibration of the component by a more accurate Coordinate Measuring System (CMS) is not needed. In fact, such CMS would still hold all the typical limitations of optical and tactile techniques, particularly when measuring miniaturized components with complex 3D geometries and their inability to measure inner parts. To validate the presented method, the most accepted standard currently available for CT sensors, the Verein Deutscher Ingenieure/Verband Deutscher Elektrotechniker (VDI/VDE) guideline 2630-2.1 is applied. Considering the high number of influence factors in CT and their impact on the measuring result, two different techniques for surface extraction are also considered to obtain a realistic determination of the influence of data processing on uncertainty. The uncertainty assessment of a workpiece used for micro mechanical material testing is firstly used to confirm the method, due to its feasible calibration by an optical CMS. Secondly, the measurement of a miniaturized dental file with 3D complex geometry is carried out. The estimated uncertainties are eventually compared with the component’s calibration and the micro manufacturing tolerances to demonstrate the suitability of the presented CT calibration procedure. The 2U/T ratios resulting from the validation workpiece are, respectively, 0.27 (VDI) and 0.35 (MPE), by assuring tolerances in the range of ± 20–30 µm. For the dental file, the EN < 1 value analysis is favorable in the majority of the cases (70.4%) and 2U/T is equal to 0.31 for sub-mm measurands (L < 1 mm and tolerance intervals of ± 40–80 µm). PMID:28509869

  18. Experimental Approach for the Uncertainty Assessment of 3D Complex Geometry Dimensional Measurements Using Computed Tomography at the mm and Sub-mm Scales.

    PubMed

    Jiménez, Roberto; Torralba, Marta; Yagüe-Fabra, José A; Ontiveros, Sinué; Tosello, Guido

    2017-05-16

    The dimensional verification of miniaturized components with 3D complex geometries is particularly challenging. Computed Tomography (CT) can represent a suitable alternative solution to micro metrology tools based on optical and tactile techniques. However, the establishment of CT systems' traceability when measuring 3D complex geometries is still an open issue. In this work, an alternative method for the measurement uncertainty assessment of 3D complex geometries by using CT is presented. The method is based on the micro-CT system Maximum Permissible Error (MPE) estimation, determined experimentally by using several calibrated reference artefacts. The main advantage of the presented method is that a previous calibration of the component by a more accurate Coordinate Measuring System (CMS) is not needed. In fact, such CMS would still hold all the typical limitations of optical and tactile techniques, particularly when measuring miniaturized components with complex 3D geometries and their inability to measure inner parts. To validate the presented method, the most accepted standard currently available for CT sensors, the Verein Deutscher Ingenieure/Verband Deutscher Elektrotechniker (VDI/VDE) guideline 2630-2.1 is applied. Considering the high number of influence factors in CT and their impact on the measuring result, two different techniques for surface extraction are also considered to obtain a realistic determination of the influence of data processing on uncertainty. The uncertainty assessment of a workpiece used for micro mechanical material testing is firstly used to confirm the method, due to its feasible calibration by an optical CMS. Secondly, the measurement of a miniaturized dental file with 3D complex geometry is carried out. The estimated uncertainties are eventually compared with the component's calibration and the micro manufacturing tolerances to demonstrate the suitability of the presented CT calibration procedure. The 2U/T ratios resulting from the validation workpiece are, respectively, 0.27 (VDI) and 0.35 (MPE), by assuring tolerances in the range of ± 20-30 µm. For the dental file, the E N < 1 value analysis is favorable in the majority of the cases (70.4%) and 2U/T is equal to 0.31 for sub-mm measurands (L < 1 mm and tolerance intervals of ± 40-80 µm).
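
    The two acceptance metrics quoted above (the EN value and the 2U/T ratio) reduce to simple formulas; the sketch below uses invented measurement values chosen only to land in the reported ballpark.

```python
# The two figures of merit quoted above: the En value compares a CT result with
# a reference value given both expanded uncertainties, and 2U/T compares the
# expanded measurement uncertainty against the feature tolerance. Values invented.
import math

def en_value(x_ct, x_ref, u_ct, u_ref):
    """En < 1 indicates agreement within the combined expanded uncertainties."""
    return abs(x_ct - x_ref) / math.sqrt(u_ct ** 2 + u_ref ** 2)

def two_u_over_t(expanded_uncertainty, tolerance_interval):
    """Ratio of 2U to the tolerance width; small values leave room for conformity decisions."""
    return 2.0 * expanded_uncertainty / tolerance_interval

print(en_value(x_ct=5.012, x_ref=5.004, u_ct=0.008, u_ref=0.004))        # mm, invented
print(two_u_over_t(expanded_uncertainty=0.008, tolerance_interval=0.060))  # mm, invented
```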

  19. Implementation of a near real-time burned area detection algorithm calibrated for VIIRS imagery

    Treesearch

    Brenna Schwert; Carl Albury; Jess Clark; Abigail Schaaf; Shawn Urbanski; Bryce Nordgren

    2016-01-01

    There is a need to implement methods for rapid burned area detection using a suitable replacement for Moderate Resolution Imaging Spectroradiometer (MODIS) imagery to meet future mapping and monitoring needs (Roy and Boschetti 2009, Tucker and Yager 2011). The Visible Infrared Imaging Radiometer Suite (VIIRS) sensor onboard the Suomi-National Polar-orbiting Partnership...

  20. A Comparative Investigation of the Combined Effects of Pre-Processing, Wavelength Selection, and Regression Methods on Near-Infrared Calibration Model Performance.

    PubMed

    Wan, Jian; Chen, Yi-Chieh; Morris, A Julian; Thennadil, Suresh N

    2017-07-01

    Near-infrared (NIR) spectroscopy is being widely used in various fields ranging from pharmaceutics to the food industry for analyzing chemical and physical properties of the substances concerned. Its advantages over other analytical techniques include available physical interpretation of spectral data, nondestructive nature and high speed of measurements, and little or no need for sample preparation. The successful application of NIR spectroscopy relies on three main aspects: pre-processing of spectral data to eliminate nonlinear variations due to temperature, light scattering effects and many others, selection of those wavelengths that contribute useful information, and identification of suitable calibration models using linear/nonlinear regression. Several methods have been developed for each of these three aspects and many comparative studies of different methods exist for an individual aspect or some combinations. However, there is still a lack of comparative studies for the interactions among these three aspects, which can shed light on what role each aspect plays in the calibration and how to combine various methods of each aspect together to obtain the best calibration model. This paper aims to provide such a comparative study based on four benchmark data sets using three typical pre-processing methods, namely, orthogonal signal correction (OSC), extended multiplicative signal correction (EMSC) and optical path-length estimation and correction (OPLEC); two existing wavelength selection methods, namely, stepwise forward selection (SFS) and genetic algorithm optimization combined with partial least squares regression for spectral data (GAPLSSP); four popular regression methods, namely, partial least squares (PLS), least absolute shrinkage and selection operator (LASSO), least squares support vector machine (LS-SVM), and Gaussian process regression (GPR). The comparative study indicates that, in general, pre-processing of spectral data can play a significant role in the calibration while wavelength selection plays a marginal role and the combination of certain pre-processing, wavelength selection, and nonlinear regression methods can achieve superior performance over traditional linear regression-based calibration.
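
    As a rough illustration of the pre-processing, wavelength selection and regression pipeline compared in this study, the following Python sketch chains a simple scatter correction with a cross-validated PLS model on synthetic spectra. It is not the authors' code: standard-normal-variate scaling stands in for OSC/EMSC/OPLEC, no wavelength selection is performed, and the data are made up.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_score

        def snv(X):
            # Row-wise standard normal variate, a common NIR scatter correction.
            return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 200))                       # 60 synthetic spectra, 200 wavelengths
        y = 0.8 * X[:, 50] + rng.normal(scale=0.1, size=60)  # synthetic property values

        model = PLSRegression(n_components=5)
        r2_cv = cross_val_score(model, snv(X), y, cv=5, scoring="r2")
        print(r2_cv.mean())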

  1. Inflight calibration of the modular airborne imaging spectrometer (MAIS) and its application to reflectance retrieval

    NASA Astrophysics Data System (ADS)

    Min, Xiangjun; Zhu, Yonghao

    1998-08-01

    An inflight experiment of the Modular Airborne Imaging Spectrometer (MAIS) and ground-based measurements with a GER MARK-V spectroradiometer, taken simultaneously with the MAIS overpass, were performed during autumn 1995 in a semiarid area of Inner Mongolia, China. Based on these measurements and MAIS image data, we designed a method for the radiometric calibration of the MAIS sensor using the 6S and LOWTRAN 7 codes. The results show that the uncertainty of the MAIS calibration is about 8% in the visible and near-infrared wavelengths (0.4 - 1.2 micrometer). To verify our calibration algorithm, the calibrated MAIS data were used to derive ground reflectances. The accuracy of the reflectance retrieval is about 8.5% in the spectral range of 0.4 to 1.2 micrometer, i.e., the uncertainty of the derived near-nadir reflectances is within 0.01 - 0.05 in reflectance units for ground reflectances between 3% and 50%. The distinguishing feature of the ground-based measurements, to which special attention is paid in this paper, is that the reflectance factors of the calibration target, the atmospheric optical depth, and the water vapor abundance are all obtained simultaneously from the same set of measurement data with a single suite of instruments. The analysis indicates that the method presented here is suitable for the quantitative analysis of imaging spectral data in China.
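
    The reflectance retrieval step rests on the standard relation between calibrated at-sensor radiance and apparent reflectance; the sketch below shows only that generic relation (the paper's 6S/LOWTRAN 7 atmospheric correction is not reproduced, and the variable names are illustrative):

        import math

        def apparent_reflectance(radiance, esun, sun_zenith_deg, d_au=1.0):
            # rho = pi * L * d^2 / (E_sun * cos(theta_s)), with the radiance L in
            # the same radiometric units as the band-averaged solar irradiance E_sun.
            return (math.pi * radiance * d_au ** 2
                    / (esun * math.cos(math.radians(sun_zenith_deg))))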

  2. Calibration of limited-area ensemble precipitation forecasts for hydrological predictions

    NASA Astrophysics Data System (ADS)

    Diomede, Tommaso; Marsigli, Chiara; Montani, Andrea; Nerozzi, Fabrizio; Paccagnella, Tiziana

    2015-04-01

    The main objective of this study is to investigate the impact of calibration for limited-area ensemble precipitation forecasts, to be used for driving discharge predictions up to 5 days in advance. A reforecast dataset, which spans 30 years, based on the Consortium for Small Scale Modeling Limited-Area Ensemble Prediction System (COSMO-LEPS) was used for testing the calibration strategy. Three calibration techniques were applied: quantile-to-quantile mapping, linear regression, and analogs. The performance of these methodologies was evaluated in terms of statistical scores for the precipitation forecasts operationally provided by COSMO-LEPS in the years 2003-2007 over Germany, Switzerland, and the Emilia-Romagna region (northern Italy). The analog-based method was preferred because of its capability of correcting position errors and spread deficiencies. A suitable spatial domain for the analog search can help to handle model spatial errors as systematic errors. However, the performance of the analog-based method may degrade in cases where a limited training dataset is available. A sensitivity test on the length of the training dataset over which to perform the analog search has been performed. The quantile-to-quantile mapping and linear regression methods were less effective, mainly because the forecast-analysis relation was not so strong for the available training dataset. A comparison between the calibration based on the deterministic reforecast and the calibration based on the full operational ensemble used as training dataset has been considered, with the aim of evaluating whether reforecasts are really worthwhile for calibration, given their considerable computational cost. The verification of the calibration process was then performed by coupling ensemble precipitation forecasts with a distributed rainfall-runoff model. This test was carried out for a medium-sized catchment located in Emilia-Romagna, showing a beneficial impact of the analog-based method on the reduction of missed events for discharge predictions.
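
    One of the three calibration techniques tested, quantile-to-quantile mapping, can be sketched in a few lines. The Python fragment below is a generic empirical implementation on made-up training data, not the COSMO-LEPS reforecast setup:

        import numpy as np

        def quantile_mapping(fcst, train_fcst, train_obs):
            # Map each forecast value to the observed value that has the same
            # empirical non-exceedance probability in the training period.
            probs = np.searchsorted(np.sort(train_fcst), fcst) / float(len(train_fcst))
            return np.quantile(train_obs, np.clip(probs, 0.0, 1.0))

        rng = np.random.default_rng(1)
        train_fcst = rng.gamma(2.0, 3.0, 1000)   # hypothetical model precipitation (mm)
        train_obs = rng.gamma(2.0, 4.0, 1000)    # hypothetical observed precipitation (mm)
        print(quantile_mapping(np.array([5.0, 20.0]), train_fcst, train_obs))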

  3. Systematic review of statistical approaches to quantify, or correct for, measurement error in a continuous exposure in nutritional epidemiology.

    PubMed

    Bennett, Derrick A; Landry, Denise; Little, Julian; Minelli, Cosetta

    2017-09-19

    Several statistical approaches have been proposed to assess and correct for exposure measurement error. We aimed to provide a critical overview of the most common approaches used in nutritional epidemiology. MEDLINE, EMBASE, BIOSIS and CINAHL were searched for reports published in English up to May 2016 in order to ascertain studies that described methods aimed to quantify and/or correct for measurement error for a continuous exposure in nutritional epidemiology using a calibration study. We identified 126 studies, 43 of which described statistical methods and 83 that applied any of these methods to a real dataset. The statistical approaches in the eligible studies were grouped into: a) approaches to quantify the relationship between different dietary assessment instruments and "true intake", which were mostly based on correlation analysis and the method of triads; b) approaches to adjust point and interval estimates of diet-disease associations for measurement error, mostly based on regression calibration analysis and its extensions. Two approaches (multiple imputation and moment reconstruction) were identified that can deal with differential measurement error. For regression calibration, the most common approach to correct for measurement error used in nutritional epidemiology, it is crucial to ensure that its assumptions and requirements are fully met. Analyses that investigate the impact of departures from the classical measurement error model on regression calibration estimates can be helpful to researchers in interpreting their findings. With regard to the possible use of alternative methods when regression calibration is not appropriate, the choice of method should depend on the measurement error model assumed, the availability of suitable calibration study data and the potential for bias due to violation of the classical measurement error model assumptions. On the basis of this review, we provide some practical advice for the use of methods to assess and adjust for measurement error in nutritional epidemiology.
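
    For the most common approach identified in the review, regression calibration under the classical measurement error model, the correction amounts to dividing the naive association by the calibration slope estimated in the calibration sub-study. A minimal sketch follows (illustrative only; the univariate setting and variable names are assumptions, not from the review):

        import numpy as np

        def calibration_slope(x_error_prone, x_reference):
            # Slope from regressing the reference measurement on the error-prone
            # measurement within the calibration sub-study.
            A = np.column_stack([np.ones_like(x_error_prone), x_error_prone])
            coef, *_ = np.linalg.lstsq(A, x_reference, rcond=None)
            return coef[1]

        def corrected_beta(beta_naive, lam):
            # De-attenuated diet-disease association under the classical error model.
            return beta_naive / lam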

  4. Entrance surface dose measurements using a small OSL dosimeter with a computed tomography scanner having 320 rows of detectors.

    PubMed

    Takegami, Kazuki; Hayashi, Hiroaki; Yamada, Kenji; Mihara, Yoshiki; Kimoto, Natsumi; Kanazawa, Yuki; Higashino, Kousaku; Yamashita, Kazuta; Hayashi, Fumio; Okazaki, Tohru; Hashizume, Takuya; Kobayashi, Ikuo

    2017-03-01

    Entrance surface dose (ESD) measurements are important in X-ray computed tomography (CT) for examination, but in clinical settings it is difficult to measure ESDs because of a lack of suitable dosimeters. We focus on the capability of a small optically stimulated luminescence (OSL) dosimeter. The aim of this study is to propose a practical method for using an OSL dosimeter to measure the ESD when performing a CT examination. The small OSL dosimeter has an outer width of 10 mm; it is assumed that a partial dose may be measured because the slice thickness and helical pitch can be set to various values. To verify our method, we used a CT scanner having 320 rows of detectors and checked the consistencies of the ESDs measured using OSL dosimeters by comparing them with those measured using Gafchromic™ films. The films were calibrated using an ionization chamber on the basis of half-value layer estimation. On the other hand, the OSL dosimeter was appropriately calibrated using a practical calibration curve previously proposed by our group. The ESDs measured using the OSL dosimeters were in good agreement with the reference ESDs from the Gafchromic™ films. Using these data, we also estimated the uncertainty of ESDs measured with small OSL dosimeters. We concluded that a small OSL dosimeter can be considered suitable for measuring the ESD with an uncertainty of 30 % during CT examinations in which pitch factors below 1.000 are applied.

  5. Calibration of hydrological models using flow-duration curves

    NASA Astrophysics Data System (ADS)

    Westerberg, I. K.; Guerrero, J.-L.; Younger, P. M.; Beven, K. J.; Seibert, J.; Halldin, S.; Freer, J. E.; Xu, C.-Y.

    2011-07-01

    The degree of belief we have in predictions from hydrologic models will normally depend on how well they can reproduce observations. Calibrations with traditional performance measures, such as the Nash-Sutcliffe model efficiency, are challenged by problems including: (1) uncertain discharge data, (2) variable sensitivity of different performance measures to different flow magnitudes, (3) influence of unknown input/output errors and (4) inability to evaluate model performance when observation time periods for discharge and model input data do not overlap. This paper explores a calibration method using flow-duration curves (FDCs) to address these problems. The method focuses on reproducing the observed discharge frequency distribution rather than the exact hydrograph. It consists of applying limits of acceptability for selected evaluation points (EPs) on the observed uncertain FDC in the extended GLUE approach. Two ways of selecting the EPs were tested - based on equal intervals of discharge and of volume of water. The method was tested and compared to a calibration using the traditional model efficiency for the daily four-parameter WASMOD model in the Paso La Ceiba catchment in Honduras and for Dynamic TOPMODEL evaluated at an hourly time scale for the Brue catchment in Great Britain. The volume method of selecting EPs gave the best results in both catchments with better calibrated slow flow, recession and evaporation than the other criteria. Observed and simulated time series of uncertain discharges agreed better for this method both in calibration and prediction in both catchments. An advantage with the method is that the rejection criterion is based on an estimation of the uncertainty in discharge data and that the EPs of the FDC can be chosen to reflect the aims of the modelling application, e.g. using more/less EPs at high/low flows. While the method appears less sensitive to epistemic input/output errors than previous use of limits of acceptability applied directly to the time series of discharge, it still requires a reasonable representation of the distribution of inputs. Additional constraints might therefore be required in catchments subject to snow and where peak-flow timing at sub-daily time scales is of high importance. The results suggest that the calibration method can be useful when observation time periods for discharge and model input data do not overlap. The method could also be suitable for calibration to regional FDCs while taking uncertainties in the hydrological model and data into account.
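
    The core objects of the method are the flow-duration curve and the evaluation points placed on it. The sketch below shows one plausible way to build an FDC and pick EPs so that successive points bound equal shares of the total flow volume (the "volume method"); it illustrates the idea and is not the authors' implementation:

        import numpy as np

        def flow_duration_curve(q):
            # Exceedance probability versus discharge, highest flows first.
            q_sorted = np.sort(np.asarray(q, float))[::-1]
            p_exceed = (np.arange(1, q_sorted.size + 1) - 0.5) / q_sorted.size
            return p_exceed, q_sorted

        def volume_based_eps(q, n_eps=10):
            # Place evaluation points at equal increments of cumulative flow volume.
            p, qs = flow_duration_curve(q)
            cum_vol = np.cumsum(qs) / np.sum(qs)
            targets = np.linspace(0.0, 1.0, n_eps + 2)[1:-1]
            idx = np.searchsorted(cum_vol, targets)
            return p[idx], qs[idx]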

  6. Calibration of hydrological models using flow-duration curves

    NASA Astrophysics Data System (ADS)

    Westerberg, I. K.; Guerrero, J.-L.; Younger, P. M.; Beven, K. J.; Seibert, J.; Halldin, S.; Freer, J. E.; Xu, C.-Y.

    2010-12-01

    The degree of belief we have in predictions from hydrologic models depends on how well they can reproduce observations. Calibrations with traditional performance measures such as the Nash-Sutcliffe model efficiency are challenged by problems including: (1) uncertain discharge data, (2) variable importance of the performance with flow magnitudes, (3) influence of unknown input/output errors and (4) inability to evaluate model performance when observation time periods for discharge and model input data do not overlap. A new calibration method using flow-duration curves (FDCs) was developed which addresses these problems. The method focuses on reproducing the observed discharge frequency distribution rather than the exact hydrograph. It consists of applying limits of acceptability for selected evaluation points (EPs) of the observed uncertain FDC in the extended GLUE approach. Two ways of selecting the EPs were tested - based on equal intervals of discharge and of volume of water. The method was tested and compared to a calibration using the traditional model efficiency for the daily four-parameter WASMOD model in the Paso La Ceiba catchment in Honduras and for Dynamic TOPMODEL evaluated at an hourly time scale for the Brue catchment in Great Britain. The volume method of selecting EPs gave the best results in both catchments with better calibrated slow flow, recession and evaporation than the other criteria. Observed and simulated time series of uncertain discharges agreed better for this method both in calibration and prediction in both catchments without resulting in overpredicted simulated uncertainty. An advantage with the method is that the rejection criterion is based on an estimation of the uncertainty in discharge data and that the EPs of the FDC can be chosen to reflect the aims of the modelling application e.g. using more/less EPs at high/low flows. While the new method is less sensitive to epistemic input/output errors than the normal use of limits of acceptability applied directly to the time series of discharge, it still requires a reasonable representation of the distribution of inputs. Additional constraints might therefore be required in catchments subject to snow. The results suggest that the new calibration method can be useful when observation time periods for discharge and model input data do not overlap. The new method could also be suitable for calibration to regional FDCs while taking uncertainties in the hydrological model and data into account.

  7. Cellular Oxygen and Nutrient Sensing in Microgravity Using Time-Resolved Fluorescence Microscopy

    NASA Technical Reports Server (NTRS)

    Szmacinski, Henryk

    2003-01-01

    Oxygen and nutrient sensing is fundamental to the understanding of cell growth and metabolism. This requires identification of optical probes and suitable detection technology without complex calibration procedures. Under this project Microcosm developed an experimental technique that allows for simultaneous imaging of intra- and inter-cellular events. The technique consists of frequency-domain Fluorescence Lifetime Imaging Microscopy (FLIM), a set of identified oxygen and pH probes, and methods for fabrication of microsensors. Specifications for electronic and optical components of FLIM instrumentation are provided. Hardware and software were developed for data acquisition and analysis. Principles, procedures, and representative images are demonstrated. Suitable lifetime-sensitive oxygen, pH, and glucose probes for intra- and extra-cellular measurements of analyte concentrations have been identified and tested. Lifetime sensing and imaging have been performed using PBS buffer, culture media, and yeast cells as model systems. Spectral specifications, calibration curves, and probe availability are also provided in the report.

  8. Quantitation of fumonisin B1 and B2 in feed using FMOC pre-column derivatization with HPLC and fluorescence detection.

    PubMed

    Smith, Lori L; Francis, Kyle A; Johnson, Joseph T; Gaskill, Cynthia L

    2017-11-01

    Pre-column derivatization with 9-fluorenylmethyl chloroformate (FMOC-Cl) was determined to be effective for quantitation of fumonisins B1 and B2 in feed. Liquid-solid extraction, clean-up using immunoaffinity solid phase extraction chromatography, and FMOC-derivatization preceded analysis by reverse phase HPLC with fluorescence. Instrument response was unchanged in the presence of matrix, indicating no need to use matrix-matched calibrants. Furthermore, high method recoveries indicated calibrants do not need to undergo clean-up to account for analyte loss. Established method features include linear instrument response from 0.04-2.5 µg/mL and stable derivatized calibrants over 7 days. Fortified cornmeal method recoveries from 0.1-30.0 μg/g were determined for FB1 (75.1%-109%) and FB2 (96.0%-115.2%). Inter-assay precision ranged from 1.0%-16.7%. Method accuracy was further confirmed using certified reference material. Inter-laboratory comparison with naturally-contaminated field corn demonstrated equivalent results with conventional derivatization. These results indicate FMOC derivatization is a suitable alternative for fumonisins B1 and B2 quantitation in corn-based feeds. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Analysis of Electric Vehicle DC High Current Conversion Technology

    NASA Astrophysics Data System (ADS)

    Yang, Jing; Bai, Jing-fen; Lin, Fan-tao; Lu, Da

    2017-05-01

    Against the background of electric vehicles, the necessity of accurate electric energy metering for electric vehicle power batteries is elaborated, and the charging and discharging characteristics of power batteries are analyzed. A DC large-current converter is needed to realize accurate calibration of the electric energy metering of power batteries. Several measurement methods based on shunts and on the magnetic induction principle are analyzed in detail. The principle of a power battery charge and discharge calibration system is put forward, and the ripple content and harmonic content on the AC side and DC side of the power batteries are simulated and analyzed. Suitable DC large-current measurement methods for power batteries are proposed by comparing different measurement principles, and an outlook on DC large-current measurement techniques is given.

  10. Laser Induced Breakdown Spectroscopy for Elemental Analysis in Environmental, Cultural Heritage and Space Applications: A Review of Methods and Results

    PubMed Central

    Gaudiuso, Rosalba; Dell’Aglio, Marcella; De Pascale, Olga; Senesi, Giorgio S.; De Giacomo, Alessandro

    2010-01-01

    Analytical applications of Laser Induced Breakdown Spectroscopy (LIBS), namely optical emission spectroscopy of laser-induced plasmas, have been constantly growing thanks to its intrinsic conceptual simplicity and versatility. Qualitative and quantitative analysis can be performed by LIBS both by drawing calibration lines and by using calibration-free methods, and some of its features, such as fast multi-elemental response, micro-destructiveness, instrumentation portability, have rendered it particularly suitable for analytical applications in the field of environmental science, space exploration and cultural heritage. This review reports and discusses LIBS achievements in these areas and results obtained for soils and aqueous samples, meteorites and terrestrial samples simulating extraterrestrial planets, and cultural heritage samples, including buildings and objects of various kinds. PMID:22163611

  11. Laser induced breakdown spectroscopy for elemental analysis in environmental, cultural heritage and space applications: a review of methods and results.

    PubMed

    Gaudiuso, Rosalba; Dell'Aglio, Marcella; De Pascale, Olga; Senesi, Giorgio S; De Giacomo, Alessandro

    2010-01-01

    Analytical applications of Laser Induced Breakdown Spectroscopy (LIBS), namely optical emission spectroscopy of laser-induced plasmas, have been constantly growing thanks to its intrinsic conceptual simplicity and versatility. Qualitative and quantitative analysis can be performed by LIBS both by drawing calibration lines and by using calibration-free methods, and some of its features, such as fast multi-elemental response, micro-destructiveness, instrumentation portability, have rendered it particularly suitable for analytical applications in the field of environmental science, space exploration and cultural heritage. This review reports and discusses LIBS achievements in these areas and results obtained for soils and aqueous samples, meteorites and terrestrial samples simulating extraterrestrial planets, and cultural heritage samples, including buildings and objects of various kinds.

  12. Metafitting: Weight optimization for least-squares fitting of PTTI data

    NASA Technical Reports Server (NTRS)

    Douglas, Rob J.; Boulanger, J.-S.

    1995-01-01

    For precise time intercomparisons between a master frequency standard and a slave time scale, we have found it useful to quantitatively compare different fitting strategies by examining the standard uncertainty in time or average frequency. It is particularly useful when designing procedures which use intermittent intercomparisons, with some parameterized fit used to interpolate or extrapolate from the calibrating intercomparisons. We use the term 'metafitting' for the choices that are made before a fitting procedure is operationally adopted. We present methods for calculating the standard uncertainty for general, weighted least-squares fits and a method for optimizing these weights for a general noise model suitable for many PTTI applications. We present the results of the metafitting of procedures for the use of a regular schedule of (hypothetical) high-accuracy frequency calibration of a maser time scale. We have identified a cumulative series of improvements that give a significant reduction of the expected standard uncertainty, compared to the simplest procedure of resetting the maser synthesizer after each calibration. The metafitting improvements presented include the optimum choice of weights for the calibration runs, optimized over a period of a week or 10 days.
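
    The quantity being optimized here, the standard uncertainty of a weighted least-squares fit under a general noise model, follows from the usual generalized least-squares formulas. A compact sketch (generic GLS, not the authors' metafitting code; the linear clock model and variable names are assumptions):

        import numpy as np

        def gls_fit(t, x, noise_cov):
            # Fit x(t) = x0 + y0 * t with a full noise covariance matrix between
            # the calibration points; returns parameters and their covariance.
            A = np.column_stack([np.ones_like(t), t])
            W = np.linalg.inv(noise_cov)
            N = A.T @ W @ A
            params = np.linalg.solve(N, A.T @ W @ x)
            return params, np.linalg.inv(N)   # standard uncertainties = sqrt(diagonal)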

  13. A Self-Adaptive Model-Based Wi-Fi Indoor Localization Method.

    PubMed

    Tuta, Jure; Juric, Matjaz B

    2016-12-06

    This paper presents a novel method for indoor localization, developed with the main aim of making it useful for real-world deployments. Many indoor localization methods exist, yet they have several disadvantages in real-world deployments-some are static, which is not suitable for long-term usage; some require costly human recalibration procedures; and others require special hardware such as Wi-Fi anchors and transponders. Our method is self-calibrating and self-adaptive thus maintenance free and based on Wi-Fi only. We have employed two well-known propagation models-free space path loss and ITU models-which we have extended with additional parameters for better propagation simulation. Our self-calibrating procedure utilizes one propagation model to infer parameters of the space and the other to simulate the propagation of the signal without requiring any additional hardware beside Wi-Fi access points, which is suitable for real-world usage. Our method is also one of the few model-based Wi-Fi only self-adaptive approaches that do not require the mobile terminal to be in the access-point mode. The only input requirements of the method are Wi-Fi access point positions, and positions and properties of the walls. Our method has been evaluated in single- and multi-room environments, with measured mean error of 2-3 and 3-4 m, respectively, which is similar to existing methods. The evaluation has proven that usable localization accuracy can be achieved in real-world environments solely by the proposed Wi-Fi method that relies on simple hardware and software requirements.
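
    The two propagation models mentioned are standard; the free-space path-loss part, for example, can be written directly from its closed form. The snippet below is only that textbook relation (the ITU model, the added parameters, and the self-calibration loop of the paper are not reproduced):

        import math

        def rssi_fspl(p_tx_dbm, f_mhz, d_m):
            # FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44
            fspl_db = 20 * math.log10(d_m / 1000.0) + 20 * math.log10(f_mhz) + 32.44
            return p_tx_dbm - fspl_db

        # Hypothetical 2.4 GHz access point at 15 dBm, receiver 10 m away:
        print(rssi_fspl(15.0, 2400.0, 10.0))   # about -45 dBm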

  14. A Self-Adaptive Model-Based Wi-Fi Indoor Localization Method

    PubMed Central

    Tuta, Jure; Juric, Matjaz B.

    2016-01-01

    This paper presents a novel method for indoor localization, developed with the main aim of making it useful for real-world deployments. Many indoor localization methods exist, yet they have several disadvantages in real-world deployments—some are static, which is not suitable for long-term usage; some require costly human recalibration procedures; and others require special hardware such as Wi-Fi anchors and transponders. Our method is self-calibrating and self-adaptive thus maintenance free and based on Wi-Fi only. We have employed two well-known propagation models—free space path loss and ITU models—which we have extended with additional parameters for better propagation simulation. Our self-calibrating procedure utilizes one propagation model to infer parameters of the space and the other to simulate the propagation of the signal without requiring any additional hardware beside Wi-Fi access points, which is suitable for real-world usage. Our method is also one of the few model-based Wi-Fi only self-adaptive approaches that do not require the mobile terminal to be in the access-point mode. The only input requirements of the method are Wi-Fi access point positions, and positions and properties of the walls. Our method has been evaluated in single- and multi-room environments, with measured mean error of 2–3 and 3–4 m, respectively, which is similar to existing methods. The evaluation has proven that usable localization accuracy can be achieved in real-world environments solely by the proposed Wi-Fi method that relies on simple hardware and software requirements. PMID:27929453

  15. Parameter regionalization of a monthly water balance model for the conterminous United States

    NASA Astrophysics Data System (ADS)

    Bock, A. R.; Hay, L. E.; McCabe, G. J.; Markstrom, S. L.; Atkinson, R. D.

    2015-09-01

    A parameter regionalization scheme to transfer parameter values and model uncertainty information from gaged to ungaged areas for a monthly water balance model (MWBM) was developed and tested for the conterminous United States (CONUS). The Fourier Amplitude Sensitivity Test, a global-sensitivity algorithm, was implemented on a MWBM to generate parameter sensitivities on a set of 109 951 hydrologic response units (HRUs) across the CONUS. The HRUs were grouped into 110 calibration regions based on similar parameter sensitivities. Subsequently, measured runoff from 1575 streamgages within the calibration regions were used to calibrate the MWBM parameters to produce parameter sets for each calibration region. Measured and simulated runoff at the 1575 streamgages showed good correspondence for the majority of the CONUS, with a median computed Nash-Sutcliffe Efficiency coefficient of 0.76 over all streamgages. These methods maximize the use of available runoff information, resulting in a calibrated CONUS-wide application of the MWBM suitable for providing estimates of water availability at the HRU resolution for both gaged and ungaged areas of the CONUS.
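
    The goodness-of-fit score quoted, the Nash-Sutcliffe efficiency, has a simple closed form; it is sketched here for reference using its standard definition, not code from the study:

        import numpy as np

        def nash_sutcliffe(obs, sim):
            # NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1 is a perfect fit.
            obs = np.asarray(obs, float)
            sim = np.asarray(sim, float)
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)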

  16. Energy calibration of CALET onboard the International Space Station

    NASA Astrophysics Data System (ADS)

    Asaoka, Y.; Akaike, Y.; Komiya, Y.; Miyata, R.; Torii, S.; Adriani, O.; Asano, K.; Bagliesi, M. G.; Bigongiari, G.; Binns, W. R.; Bonechi, S.; Bongi, M.; Brogi, P.; Buckley, J. H.; Cannady, N.; Castellini, G.; Checchia, C.; Cherry, M. L.; Collazuol, G.; Di Felice, V.; Ebisawa, K.; Fuke, H.; Guzik, T. G.; Hams, T.; Hareyama, M.; Hasebe, N.; Hibino, K.; Ichimura, M.; Ioka, K.; Ishizaki, W.; Israel, M. H.; Javaid, A.; Kasahara, K.; Kataoka, J.; Kataoka, R.; Katayose, Y.; Kato, C.; Kawanaka, N.; Kawakubo, Y.; Kitamura, H.; Krawczynski, H. S.; Krizmanic, J. F.; Kuramata, S.; Lomtadze, T.; Maestro, P.; Marrocchesi, P. S.; Messineo, A. M.; Mitchell, J. W.; Miyake, S.; Mizutani, K.; Moiseev, A. A.; Mori, K.; Mori, M.; Mori, N.; Motz, H. M.; Munakata, K.; Murakami, H.; Nakagawa, Y. E.; Nakahira, S.; Nishimura, J.; Okuno, S.; Ormes, J. F.; Ozawa, S.; Pacini, L.; Palma, F.; Papini, P.; Penacchioni, A. V.; Rauch, B. F.; Ricciarini, S.; Sakai, K.; Sakamoto, T.; Sasaki, M.; Shimizu, Y.; Shiomi, A.; Sparvoli, R.; Spillantini, P.; Stolzi, F.; Takahashi, I.; Takayanagi, M.; Takita, M.; Tamura, T.; Tateyama, N.; Terasawa, T.; Tomida, H.; Tsunesada, Y.; Uchihori, Y.; Ueno, S.; Vannuccini, E.; Wefel, J. P.; Yamaoka, K.; Yanagita, S.; Yoshida, A.; Yoshida, K.; Yuda, T.

    2017-05-01

    In August 2015, the CALorimetric Electron Telescope (CALET), designed for long exposure observations of high energy cosmic rays, docked with the International Space Station (ISS) and shortly thereafter began to collect data. CALET will measure the cosmic ray electron spectrum over the energy range of 1 GeV to 20 TeV with a very high resolution of 2% above 100 GeV, based on a dedicated instrument incorporating an exceptionally thick 30 radiation-length calorimeter with both total absorption and imaging (TASC and IMC) units. Each TASC readout channel must be carefully calibrated over the extremely wide dynamic range of CALET that spans six orders of magnitude in order to obtain a degree of calibration accuracy matching the resolution of energy measurements. These calibrations consist of calculating the conversion factors between ADC units and energy deposits, ensuring linearity over each gain range, and providing a seamless transition between neighboring gain ranges. This paper describes these calibration methods in detail, along with the resulting data and associated accuracies. The results presented in this paper show that a sufficient accuracy was achieved for the calibrations of each channel in order to obtain a suitable resolution over the entire dynamic range of the electron spectrum measurement.

  17. Total internal reflection fluorescence anisotropy imaging microscopy: setup, calibration, and data processing for protein polymerization measurements in living cells

    NASA Astrophysics Data System (ADS)

    Ströhl, Florian; Wong, Hovy H. W.; Holt, Christine E.; Kaminski, Clemens F.

    2018-01-01

    Fluorescence anisotropy imaging microscopy (FAIM) measures the depolarization properties of fluorophores to deduce molecular changes in their environment. For successful FAIM, several design principles have to be considered and a thorough system-specific calibration protocol is paramount. One important calibration parameter is the G factor, which describes the system-induced errors for different polarization states of light. The determination and calibration of the G factor is discussed in detail in this article. We present a novel measurement strategy, which is particularly suitable for FAIM with high numerical aperture objectives operating in TIRF illumination mode. The method makes use of evanescent fields that excite the sample with a polarization direction perpendicular to the image plane. Furthermore, we have developed an ImageJ/Fiji plugin, AniCalc, for FAIM data processing. We demonstrate the capabilities of our TIRF-FAIM system by measuring β-actin polymerization in human embryonic kidney cells and in retinal neurons.
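
    The role of the G factor is easiest to see in the standard anisotropy formula, which corrects the perpendicular channel for the system's polarization-dependent transmission. The snippet shows only that textbook relation, not the TIRF-specific calibration introduced in the paper:

        def anisotropy(i_par, i_perp, g):
            # r = (I_par - G * I_perp) / (I_par + 2 * G * I_perp)
            return (i_par - g * i_perp) / (i_par + 2.0 * g * i_perp)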

  18. Internal Water Vapor Photoacoustic Calibration

    NASA Technical Reports Server (NTRS)

    Pilgrim, Jeffrey S.

    2009-01-01

    Water vapor absorption is ubiquitous in the infrared wavelength range where photoacoustic trace gas detectors operate. This technique allows for discontinuous wavelength tuning by temperature-jumping a laser diode from one range to another within a time span suitable for photoacoustic calibration. The use of an internal calibration eliminates the need for external calibrated reference gases. Commercial applications include an improvement of photoacoustic spectrometers in all fields of use.

  19. Including Fossils in Phylogenetic Climate Reconstructions: A Deep Time Perspective on the Climatic Niche Evolution and Diversification of Spiny Lizards (Sceloporus).

    PubMed

    Lawing, A Michelle; Polly, P David; Hews, Diana K; Martins, Emília P

    2016-08-01

    Fossils and other paleontological information can improve phylogenetic comparative method estimates of phenotypic evolution and generate hypotheses related to species diversification. Here, we use fossil information to calibrate ancestral reconstructions of suitable climate for Sceloporus lizards in North America. Integrating data from the fossil record, general circulation models of paleoclimate during the Miocene, climate envelope modeling, and phylogenetic comparative methods provides a geographically and temporally explicit species distribution model of Sceloporus-suitable habitat through time. We provide evidence to support the historic biogeographic hypothesis of Sceloporus diversification in warm North American deserts and suggest a relatively recent Sceloporus invasion into Mexico around 6 Ma. We use a physiological model to map extinction risk. We suggest that the number of hours of restriction to a thermal refuge limited Sceloporus from inhabiting Mexico until the climate cooled enough to provide suitable habitat at approximately 6 Ma. If the future climate returns to the hotter climates of the past, Mexico, the place of highest modern Sceloporus richness, will no longer provide suitable habitats for Sceloporus to survive and reproduce.

  20. An integrated approach to monitoring the calibration stability of operational dual-polarization radars

    DOE PAGES

    Vaccarono, Mattia; Bechini, Renzo; Chandrasekar, Chandra V.; ...

    2016-11-08

    The stability of weather radar calibration is a mandatory aspect for quantitative applications, such as rainfall estimation, short-term weather prediction and initialization of numerical atmospheric and hydrological models. Over the years, calibration monitoring techniques based on external sources have been developed, specifically calibration using the Sun and calibration based on ground clutter returns. In this paper, these two techniques are integrated and complemented with a self-consistency procedure and an intercalibration technique. The aim of the integrated approach is to implement a robust method for online monitoring, able to detect significant changes in the radar calibration. The physical consistency of polarimetric radar observables is exploited using the self-consistency approach, based on the expected correspondence between dual-polarization power and phase measurements in rain. This technique allows a reference absolute value to be provided for the radar calibration, from which eventual deviations may be detected using the other procedures. In particular, the ground clutter calibration is implemented on both polarization channels (horizontal and vertical) for each radar scan, allowing the polarimetric variables to be monitored and hardware failures to promptly be recognized. The Sun calibration allows monitoring the calibration and sensitivity of the radar receiver, in addition to the antenna pointing accuracy. It is applied using observations collected during the standard operational scans but requires long integration times (several days) in order to accumulate a sufficient amount of useful data. Finally, an intercalibration technique is developed and performed to compare colocated measurements collected in rain by two radars in overlapping regions. The integrated approach is performed on the C-band weather radar network in northwestern Italy, during July–October 2014. The set of methods considered appears suitable to establish an online tool to monitor the stability of the radar calibration with an accuracy of about 2 dB. In conclusion, this is considered adequate to automatically detect any unexpected change in the radar system requiring further data analysis or on-site measurements.

  1. On aspects of characterising and calibrating the interferometric gravitational wave detector, GEO 600

    NASA Astrophysics Data System (ADS)

    Hewitson, Martin R.

    Gravitational waves are small disturbances, or strains, in the fabric of space-time. The detection of these waves has been a major goal of modern physics since they were predicted as a consequence of Einstein's General Theory of Relativity. Large-scale astrophysical events, such as colliding neutron stars or supernovae, are predicted to release energy in the form of gravitational waves. However, even with such cataclysmic events, the strain amplitudes of the gravitational waves expected to be seen at the Earth are incredibly small: of the order of 1 part in 10^21 or less at audio frequencies. Because of these extremely small amplitudes, the search for gravitational waves remains one of the most challenging goals of modern physics. This thesis starts by detailing the data recording system of GEO 600: an essential part of producing a calibrated data set. The full data acquisition system, including all hardware and software aspects, is described in detail. Comprehensive tests of the stability and timing accuracy of the system show that it has a typical duty cycle of greater than 99% with an absolute timing accuracy (measured against GPS) of the order of 15 μs. The thesis then goes on to describe the design and implementation of a time-domain calibration method, based on the use of time-domain filters, for the power-recycled configuration of GEO 600. This time-domain method is then extended to deal with the more complicated case of calibrating the dual-recycled configuration of GEO 600. The time-domain calibration method was applied to two long data-taking (science) runs. The method proved successful in recovering (in real-time) a calibrated strain time-series suitable for use in astrophysical searches. The accuracy of the calibration process was shown to be good to 10% or less across the detection band of the detector. In principle, the time-domain method presents no restrictions in the achievable calibration accuracy; most of the uncertainty in the calibration process is shown to arise from the actuator used to inject the calibration signals. The recovered strain series was shown to be equivalent to a frequency-domain calibration at the level of a few percent. A number of ways are presented in which the initial calibration pipeline can be improved to increase the calibration accuracy. The production and subsequent distribution of a calibrated time-series allows for a single point of control over the validity and quality of the calibrated data. The techniques developed in this thesis are currently being adopted by the LIGO interferometers to perform time-domain calibration of their three long-baseline detectors. In addition, a data storage system is currently being developed by the author, together with the LIGO calibration team, to allow all the information used in the time-domain calibration process to be captured in a concise and coherent form that is consistent across multiple detectors in the LSC. (Abstract shortened by ProQuest.)

  2. Simple Parametric Model for Intensity Calibration of Cassini Composite Infrared Spectrometer Data

    NASA Technical Reports Server (NTRS)

    Brasunas, J.; Mamoutkine, A.; Gorius, N.

    2016-01-01

    Accurate intensity calibration of a linear Fourier-transform spectrometer typically requires the unknown science target and the two calibration targets to be acquired under identical conditions. We present a simple model suitable for vector calibration that enables accurate calibration via adjustments of measured spectral amplitudes and phases when these three targets are recorded at different detector or optics temperatures. Our model makes calibration more accurate both by minimizing biases due to changing instrument temperatures that are always present at some level and by decreasing estimate variance through incorporating larger averages of science and calibration interferogram scans.
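
    The underlying operation is the usual complex two-point calibration of Fourier-transform spectrometer spectra against hot and cold references; the model in the paper adds parametric corrections for changing detector and optics temperatures, which are not reproduced in this generic sketch:

        import numpy as np

        def two_point_calibration(s_sci, s_hot, s_cold, b_hot, b_cold):
            # Complex (vector) calibration: derive the responsivity from the two
            # reference views, then scale and offset the science spectrum.
            gain = (b_hot - b_cold) / (s_hot - s_cold)
            return np.real((s_sci - s_cold) * gain) + b_cold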

  3. An atlas of selected calibrated stellar spectra

    NASA Technical Reports Server (NTRS)

    Walker, Russell G.; Cohen, Martin

    1992-01-01

    Five hundred and fifty-six stars in the IRAS PSC-2 that are suitable for stellar radiometric standards and are brighter than 1 Jy at 25 microns were identified. In addition, 123 stars that meet all of our criteria for calibration standards, but which lack a luminosity class were identified. An approach to absolute stellar calibration of broadband infrared filters based upon new models of Vega and Sirius due to Kurucz (1992) is presented. A general technique used to assemble continuous wide-band calibrated infrared spectra is described, an absolutely calibrated 1-35 micron spectrum of α Tau is constructed, and the method is independently validated using new and carefully designed observations. The absolute calibration of the IRAS Low Resolution Spectrometer (LRS) database is investigated by comparing the observed spectrum of α Tau with that assumed in the original LRS calibration scheme. Neglect of the SiO fundamental band in α Tau has led to the presence of a specious 'emission' feature in all LRS spectra near 8.5 microns, and to an incorrect spectral slope between 8 and 12 microns. Finally, some of the properties of asteroids that affect their utility as calibration objects for the middle and far infrared region are examined. A technique to determine, from IRAS multiwaveband observations, the basic physical parameters needed by various asteroid thermal models that minimize the number of assumptions required is developed.

  4. Bayesian calibration of terrestrial ecosystem models: a study of advanced Markov chain Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Lu, Dan; Ricciuto, Daniel; Walker, Anthony; Safta, Cosmin; Munger, William

    2017-09-01

    Calibration of terrestrial ecosystem models is important but challenging. Bayesian inference implemented by Markov chain Monte Carlo (MCMC) sampling provides a comprehensive framework to estimate model parameters and associated uncertainties using their posterior distributions. The effectiveness and efficiency of the method strongly depend on the MCMC algorithm used. In this work, a differential evolution adaptive Metropolis (DREAM) algorithm is used to estimate posterior distributions of 21 parameters for the data assimilation linked ecosystem carbon (DALEC) model using 14 years of daily net ecosystem exchange data collected at the Harvard Forest Environmental Measurement Site eddy-flux tower. The calibration of DREAM results in a better model fit and predictive performance compared to the popular adaptive Metropolis (AM) scheme. Moreover, DREAM indicates that two parameters controlling autumn phenology have multiple modes in their posterior distributions while AM only identifies one mode. The application suggests that DREAM is very suitable to calibrate complex terrestrial ecosystem models, where the uncertain parameter size is usually large and existence of local optima is always a concern. In addition, this effort justifies the assumptions of the error model used in Bayesian calibration according to the residual analysis. The result indicates that a heteroscedastic, correlated, Gaussian error model is appropriate for the problem, and the consequent constructed likelihood function can alleviate the underestimation of parameter uncertainty that is usually caused by using uncorrelated error models.
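
    For orientation, MCMC calibration of a model parameter vector boils down to sampling the posterior with some proposal scheme. The fragment below is a plain random-walk Metropolis sampler for illustration only; the DREAM algorithm used in the paper relies on differential-evolution proposals across multiple chains, which is not reproduced here:

        import numpy as np

        def metropolis(log_post, theta0, n_steps=5000, step=0.1, seed=0):
            # Minimal random-walk Metropolis sampler over a user-supplied
            # log-posterior function log_post(theta).
            rng = np.random.default_rng(seed)
            theta = np.asarray(theta0, float)
            lp = log_post(theta)
            chain = np.empty((n_steps, theta.size))
            for i in range(n_steps):
                prop = theta + rng.normal(scale=step, size=theta.size)
                lp_prop = log_post(prop)
                if np.log(rng.uniform()) < lp_prop - lp:
                    theta, lp = prop, lp_prop
                chain[i] = theta
            return chain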

  5. Boresight alignment method for mobile laser scanning systems

    NASA Astrophysics Data System (ADS)

    Rieger, P.; Studnicka, N.; Pfennigbauer, M.; Zach, G.

    2010-06-01

    Mobile laser scanning (MLS) is the latest approach towards fast and cost-efficient acquisition of 3-dimensional spatial data. Accurately evaluating the boresight alignment in MLS systems is an obvious necessity. However, recent systems available on the market may lack suitable and efficient practical workflows for performing this calibration. This paper discusses an innovative method for accurately determining the boresight alignment of MLS systems by employing 3D laser scanners. Scanning objects using a 3D laser scanner operating in a 2D line-scan mode from various different runs and scan directions provides valuable scan data for determining the angular alignment between inertial measurement unit and laser scanner. Field data are presented demonstrating the final accuracy of the calibration and the high quality of the point cloud acquired during an MLS campaign.

  6. Monte Carlo simulation of gamma-ray interactions in an over-square high-purity germanium detector for in-vivo measurements

    NASA Astrophysics Data System (ADS)

    Saizu, Mirela Angela

    2016-09-01

    The developments of high-purity germanium detectors match very well the requirements of the in-vivo human body measurements regarding the gamma energy ranges of the radionuclides intended to be measured, the shape of the extended radioactive sources, and the measurement geometries. The Whole Body Counter (WBC) from IFIN-HH is based on an “over-square” high-purity germanium detector (HPGe) to perform accurate measurements of the incorporated radionuclides emitting X and gamma rays in the energy range of 10 keV-1500 keV, under conditions of good shielding, suitable collimation, and calibration. As an alternative to the experimental efficiency calibration method consisting of using reference calibration sources with gamma energy lines that cover all the considered energy range, it is proposed to use the Monte Carlo method for the efficiency calibration of the WBC using the radiation transport code MCNP5. The HPGe detector was modelled and the gamma energy lines of 241Am, 57Co, 133Ba, 137Cs, 60Co, and 152Eu were simulated in order to obtain the virtual efficiency calibration curve of the WBC. The Monte Carlo method was validated by comparing the simulated results with the experimental measurements using point-like sources. For their optimum matching, the impact of the variation of the front dead layer thickness and of the detector photon absorbing layers materials on the HPGe detector efficiency was studied, and the detector’s model was refined. In order to perform the WBC efficiency calibration for realistic people monitoring, more numerical calculations were generated simulating extended sources of specific shape according to the standard man characteristics.

  7. Comparison of three chemometrics methods for near-infrared spectra of glucose in the whole blood

    NASA Astrophysics Data System (ADS)

    Zhang, Hongyan; Ding, Dong; Li, Xin; Chen, Yu; Tang, Yuguo

    2005-01-01

    Principal Component Regression (PCR), Partial Least Squares (PLS) and Artificial Neural Network (ANN) methods are used in the analysis of the near-infrared (NIR) spectra of glucose in the whole blood. With these methods, the calibration model is built in the spectral band where glucose has much stronger absorption than water, fat, and protein, and the correlation coefficients of the models are presented in this paper. By comparing these results, a suitable method for analyzing the NIR spectrum of glucose in the whole blood is found.

  8. Characterization and Application of Passive Samplers for Monitoring of Pesticides in Water.

    PubMed

    Ahrens, Lutz; Daneshvar, Atlasi; Lau, Anna E; Kreuger, Jenny

    2016-08-03

    Five different water passive samplers were calibrated under laboratory conditions for measurement of 124 legacy and currently used pesticides. This study provides a protocol for the passive sampler preparation, calibration, extraction method and instrumental analysis. Sampling rates (R_S) and passive sampler-water partition coefficients (K_PW) were calculated for silicone rubber, polar organic chemical integrative sampler POCIS-A, POCIS-B, SDB-RPS and C18 disk. The uptake of the selected compounds depended on their physicochemical properties, i.e., silicone rubber showed a better uptake for more hydrophobic compounds (log octanol-water partition coefficient (K_OW) > 5.3), whereas POCIS-A, POCIS-B and SDB-RPS disk were more suitable for hydrophilic compounds (log K_OW < 0.70).
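
    Once R_S and K_PW are known, converting the amount accumulated in a sampler back to a water concentration uses the standard passive-sampling relations, sketched below (generic equations with assumed units, not the calibration procedure of the paper):

        def cw_integrative(n_ng, rs_l_per_day, days):
            # Time-weighted average concentration in the linear uptake phase:
            # C_w = N / (R_S * t), here in ng/L.
            return n_ng / (rs_l_per_day * days)

        def cw_equilibrium(n_ng, k_pw_l_per_kg, sampler_mass_kg):
            # Equilibrium estimate: C_w = N / (K_PW * m_sampler), here in ng/L.
            return n_ng / (k_pw_l_per_kg * sampler_mass_kg)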

  9. Monitoring angiogenesis using a human compatible calibration for broadband near-infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Yang, Runze; Zhang, Qiong; Wu, Ying; Dunn, Jeff F.

    2013-01-01

    Angiogenesis is a hallmark of many conditions, including cancer, stroke, vascular disease, diabetes, and high-altitude exposure. We have previously shown that one can study angiogenesis in animal models by using total hemoglobin (tHb) as a marker of cerebral blood volume (CBV), measured using broadband near-infrared spectroscopy (bNIRS). However, the method was not suitable for patients as global anoxia was used for the calibration. Here we determine if angiogenesis could be detected using a calibration method that could be applied to patients. CBV, as a marker of angiogenesis, is quantified in a rat cortex before and after hypoxia acclimation. Rats are acclimated at 370-mmHg pressure for three weeks, while rats in the control group are housed under the same conditions, but under normal pressure. CBV increased in each animal in the acclimation group. The mean CBV (%volume/volume) is 3.49%±0.43% (mean±SD) before acclimation for the experimental group, and 4.76%±0.29% after acclimation. The CBV for the control group is 3.28%±0.75%, and 3.09%±0.48% for the two measurements. This demonstrates that angiogenesis can be monitored noninvasively over time using a bNIRS system with a calibration method that is compatible with human use and less stressful for studies using animals.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yashchuk, V.V.; Conley, R.; Anderson, E.H.

    Verification of the reliability of metrology data from high quality X-ray optics requires that adequate methods for test and calibration of the instruments be developed. For such verification for optical surface profilometers in the spatial frequency domain, a modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) gratings and arrays has been suggested and proven to be an effective calibration method for a number of interferometric microscopes, a phase shifting Fizeau interferometer, and a scatterometer. Here we describe the details of development of binary pseudo-random multilayer (BPRML) test samples suitable for characterization of scanning (SEM) and transmission (TEM) electron microscopes. We discuss the results of TEM measurements with the BPRML test samples fabricated from a WiSi2/Si multilayer coating with pseudo-randomly distributed layers. In particular, we demonstrate that significant information about the metrological reliability of the TEM measurements can be extracted even when the fundamental frequency of the BPRML sample is smaller than the Nyquist frequency of the measurements. The measurements demonstrate a number of problems related to the interpretation of the SEM and TEM data. Note that similar BPRML test samples can be used to characterize X-ray microscopes. Corresponding work with X-ray microscopes is in progress.

  11. Detection of heavy metal Cd in polluted fresh leafy vegetables by laser-induced breakdown spectroscopy.

    PubMed

    Yao, Mingyin; Yang, Hui; Huang, Lin; Chen, Tianbing; Rao, Gangfu; Liu, Muhua

    2017-05-10

    In seeking a novel method with the ability of green analysis for monitoring toxic heavy metal residues in fresh leafy vegetables, laser-induced breakdown spectroscopy (LIBS) was applied to prove its capability of performing this work. The spectra of fresh vegetable samples polluted in the lab were collected by an optimized LIBS experimental setup, and the reference concentrations of cadmium (Cd) in the samples were obtained by conventional atomic absorption spectroscopy after wet digestion. A direct calibration relating the intensity of a single Cd line to the Cd concentration exposed the weakness of this calibration approach. The accuracy of the linear calibration could be improved somewhat by using three Cd lines as characteristic variables, especially after the spectra were pretreated; however, this is still not sufficient for predicting Cd in the samples. Therefore, partial least-squares regression (PLSR) was utilized to enhance the robustness of the quantitative analysis. The results of the PLSR model showed that the prediction accuracy for the Cd target can meet the requirement of determination in food safety. This investigation showed that LIBS is a promising and emerging method for analyzing toxic compositions in agricultural products, especially when combined with suitable chemometrics.

  12. In-air calibration of an HDR 192Ir brachytherapy source using therapy ion chambers.

    PubMed

    Patel, Narayan Prasad; Majumdar, Bishnu; Vijiyan, V; Hota, Pradeep K

    2005-01-01

    The Gammamed Plus 192Ir high dose rate brachytherapy sources were calibrated using therapy level ionization chambers (0.1 and 0.6 cc) and a well-type chamber. The aim of the present study was to assess the accuracy and suitability of therapy level chambers for in-air calibration of brachytherapy sources in routine clinical practice. In the calibration procedure using therapy ion chambers, the air kerma was measured at several distances from the source in a specially designed jig. The room scatter correction factor was determined by a superimposition method based on the inverse square law. Various other correction factors were applied to the air kerma values measured at multiple distances, and the mean value was taken to determine the air kerma strength of the source. For the four sources studied, the overall mean deviation between the measured source strength and the value quoted by the manufacturer was -2.04% (N = 18) for the well-type chamber. The mean deviation for the 0.6 cc chamber with buildup cap was -1.48% (N = 19) and without buildup cap was 0.11% (N = 22). The mean deviation for the 0.1 cc chamber was -0.24% (N = 27). The results show that the excess ionization for the 0.6 cc therapy ion chamber without buildup cap was probably about 2.74% and 1.99% at 10 and 20 cm from the source, respectively. Scattered radiation measured by the 0.1 cc and 0.6 cc chambers at a 10 cm measurement distance was about 1.1% and 0.33% of the primary radiation, respectively. The study concludes that the results obtained with therapy level ionization chambers were extremely reproducible and in good agreement with the results of the well-type ionization chamber and the value quoted by the source supplier. The calibration procedure with therapy ionization chambers is equally competent and suitable for routine calibration of brachytherapy sources.
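
    In outline, an in-air calibration converts the corrected chamber reading at a known distance into air-kerma strength through the inverse-square relation; a schematic sketch follows (the specific correction factors, their values and the jig geometry are placeholders, not those of the study):

        def air_kerma_strength(m_reading, n_k, d_cm, k_room=1.0, k_tp=1.0, k_other=1.0):
            # S_K = M * N_K * (product of correction factors) * d^2; the units follow
            # the chamber calibration coefficient N_K and the distance d.
            return m_reading * n_k * k_room * k_tp * k_other * d_cm ** 2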

  13. Quantifying Particle Numbers and Mass Flux in Drifting Snow

    NASA Astrophysics Data System (ADS)

    Crivelli, Philip; Paterna, Enrico; Horender, Stefan; Lehning, Michael

    2016-12-01

    We compare two of the most common methods for quantifying mass flux, particle numbers, and particle-size distribution in drifting snow events: the snow-particle counter (SPC), a laser-diode-based particle detector, and particle tracking velocimetry based on digital shadowgraphic imaging. The two methods were correlated for mass flux and particle number flux. The SPC was calibrated by the manufacturer beforehand. The shadowgraphic imaging method measures particle size and velocity directly from consecutive images, and the image pixel length is recalibrated before each new test. A calibration study with artificially scattered sand particles and glass beads provided suitable settings for the shadowgraphic imaging as well as a first correlation of the two methods in a controlled environment. In addition, using snow collected in trays during snowfall, several experiments were performed to observe drifting snow events in a cold wind tunnel. The results demonstrate a high correlation between the mass fluxes obtained in the calibration studies (r ≥ 0.93) and a good correlation for the drifting snow experiments (r ≥ 0.81). The impact of the measurement settings is discussed in order to reliably quantify particle numbers and mass flux in drifting snow. The study was designed and performed to optimize the settings of the digital shadowgraphic imaging system for both the acquisition and the processing of particles in a drifting snow event. Our results suggest that these optimal settings can be transferred to different imaging set-ups to investigate sediment transport processes.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vaccarono, Mattia; Bechini, Renzo; Chandrasekar, Chandra V.

    The stability of weather radar calibration is a mandatory aspect for quantitative applications, such as rainfall estimation, short-term weather prediction and initialization of numerical atmospheric and hydrological models. Over the years, calibration monitoring techniques based on external sources have been developed, specifically calibration using the Sun and calibration based on ground clutter returns. In this paper, these two techniques are integrated and complemented with a self-consistency procedure and an intercalibration technique. The aim of the integrated approach is to implement a robust method for online monitoring, able to detect significant changes in the radar calibration. The physical consistency of polarimetric radar observables is exploited using the self-consistency approach, based on the expected correspondence between dual-polarization power and phase measurements in rain. This technique allows a reference absolute value to be provided for the radar calibration, from which any deviations may be detected using the other procedures. In particular, the ground clutter calibration is implemented on both polarization channels (horizontal and vertical) for each radar scan, allowing the polarimetric variables to be monitored and hardware failures to be promptly recognized. The Sun calibration allows monitoring of the calibration and sensitivity of the radar receiver, in addition to the antenna pointing accuracy. It is applied using observations collected during the standard operational scans but requires long integration times (several days) in order to accumulate a sufficient amount of useful data. Finally, an intercalibration technique is developed and performed to compare colocated measurements collected in rain by two radars in overlapping regions. The integrated approach is performed on the C-band weather radar network in northwestern Italy, during July–October 2014. The set of methods considered appears suitable to establish an online tool to monitor the stability of the radar calibration with an accuracy of about 2 dB. In conclusion, this is considered adequate to automatically detect any unexpected change in the radar system requiring further data analysis or on-site measurements.
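    As one hedged illustration of the clutter-based part of such monitoring (not the operational processing chain used on the Italian network), the Python sketch below tracks the median ground-clutter power per scan and flags departures from a reference period; the array shape, gate selection, and 2 dB threshold are illustrative assumptions.

```python
import numpy as np

def clutter_drift(clutter_db, n_reference_scans=50, threshold_db=2.0):
    # Robust per-scan clutter level, compared against a baseline from the reference period.
    per_scan = np.nanmedian(clutter_db, axis=1)
    reference = np.nanmedian(per_scan[:n_reference_scans])
    drift = per_scan - reference
    return drift, np.abs(drift) > threshold_db

# Synthetic check: 300 scans x 400 clutter gates, with a 3 dB calibration jump at scan 200.
rng = np.random.default_rng(0)
clutter_db = 45.0 + 2.0 * rng.standard_normal((300, 400))
clutter_db[200:] += 3.0
drift, flagged = clutter_drift(clutter_db)
print("first flagged scan:", int(np.argmax(flagged)))
```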

  15. Comparison between different direct search optimization algorithms in the calibration of a distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Campo, Lorenzo; Castelli, Fabio; Caparrini, Francesca

    2010-05-01

    Modern distributed hydrological models represent the different surface and subsurface phenomena with great accuracy and high spatial and temporal resolution. Such complexity requires, in general, an equally accurate parametrization. A number of approaches have been followed in this respect, from simple local search methods (like the Nelder-Mead algorithm), which minimize a cost function representing some distance between the model's output and the available measurements, to more complex approaches like dynamic filters (such as the Ensemble Kalman Filter) that assimilate the observations. In this work the first approach was followed in order to compare the performance of three different direct search algorithms on the calibration of a distributed hydrological balance model. The direct search family can be defined as the category of algorithms that make no use of derivatives of the cost function (which is, in general, a black box) and encompasses a large number of possible approaches. The main benefit of this class of methods is that they do not require changes in the implementation of the numerical codes to be calibrated. The first algorithm is the classical Nelder-Mead, often used in many applications and taken as the reference. The second is a GSS (Generating Set Search) algorithm, built to guarantee the conditions of global convergence and suitable for the parallel, multi-start implementation presented here. The third is the EGO (Efficient Global Optimization) algorithm, which is particularly suitable for calibrating black-box cost functions that require expensive computational resources (like a hydrological simulation). EGO minimizes the number of evaluations of the cost function by balancing the need to minimize a response surface that approximates the problem against the need to improve the approximation by sampling where the prediction error may be high. The hydrological model to be calibrated was MOBIDIC, a complete distributed balance model developed at the Department of Civil and Environmental Engineering of the University of Florence. A discussion of the comparative effectiveness of the different algorithms on several case studies of basins in Central Italy is provided.
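    A minimal sketch of this black-box calibration setup with the Nelder-Mead reference algorithm is given below; `run_model` is a hypothetical toy stand-in for the hydrological simulation, and the GSS and EGO implementations used in the study are not shown.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
t = np.arange(200.0)

def run_model(params):
    # Hypothetical stand-in for the distributed hydrological model (toy recession curve).
    peak, tau, base = params
    return peak * np.exp(-t / tau) + base

observations = run_model([3.0, 25.0, 0.4]) + 0.05 * rng.standard_normal(t.size)

def cost(params):
    # RMSE between simulated and observed series; no derivatives of the cost are needed.
    return np.sqrt(np.mean((run_model(params) - observations) ** 2))

result = minimize(cost, x0=[1.0, 10.0, 0.0], method="Nelder-Mead",
                  options={"xatol": 1e-4, "fatol": 1e-4, "maxiter": 1000})
print("calibrated parameters:", np.round(result.x, 3))
```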

  16. A novel method to calibrate DOI function of a PET detector with a dual-ended-scintillator readout.

    PubMed

    Shao, Yiping; Yao, Rutao; Ma, Tianyu

    2008-12-01

    The detection of depth-of-interaction (DOI) is a critical detector capability to improve the PET spatial resolution uniformity across the field-of-view and will significantly enhance, in particular, small-bore system performance for brain, breast, and small animal imaging. One promising technique of DOI detection is to use dual-ended-scintillator readout, which uses two photon sensors to detect scintillation light from both ends of a scintillator array and estimates DOI based on the ratio of the signals (similar to Anger logic). This approach needs a careful DOI function calibration to establish an accurate relationship between DOI and signal ratios, and to recalibrate if the detection condition shifts due to drift of the sensor gain, bias variations, or degraded optical coupling, etc. However, the current calibration method, which uses coincident events to locate interaction positions inside a single scintillator crystal, has severe drawbacks, such as a complicated setup, long and repetitive measurements, and being prone to errors from various possible misalignments among the source and detector components. This method is also not practically suitable for calibrating multiple DOI functions of a crystal array. To solve these problems, a new method has been developed that requires only a uniform flood source to irradiate a crystal array, without the need to locate the interaction positions, and calculates DOI functions based solely on the uniform probability distribution of interactions over DOI positions, without knowledge or assumption of detector responses. Simulation and experiment were carried out to validate the new method, and the results show that the new method, with a simple setup and a single measurement, can provide consistent and accurate DOI functions for the entire array of multiple scintillator crystals. This will enable an accurate, simple, and practical DOI function calibration for PET detectors based on the dual-ended-scintillator readout design. In addition, the new method can be generally applied to calibrating other types of detectors that use a similar dual-ended readout to acquire the radiation interaction position.
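    A hedged sketch of the flood-calibration idea described above: if interactions are uniformly distributed in depth, the empirical cumulative distribution of the measured signal ratio itself maps ratio to DOI, with no need to locate individual interactions. The crystal length, the ratio definition S1/(S1+S2), and the synthetic ratio model are illustrative assumptions, not the authors' detector parameters.

```python
import numpy as np

def build_doi_lut(ratios, L=20.0, n_bins=256):
    # Uniform irradiation over depth means the empirical CDF of the ratio maps ratio -> depth.
    edges = np.linspace(ratios.min(), ratios.max(), n_bins + 1)
    hist, _ = np.histogram(ratios, bins=edges)
    cdf = np.cumsum(hist) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, L * cdf                      # lookup table: ratio -> estimated DOI (mm)

def estimate_doi(ratio, centers, depth_lut):
    return np.interp(ratio, centers, depth_lut)

# Synthetic check: events uniform in depth, ratio a monotone (noisy) function of depth.
rng = np.random.default_rng(0)
true_depth = rng.uniform(0.0, 20.0, 200_000)
ratios = 0.2 + 0.6 * true_depth / 20.0 + 0.01 * rng.standard_normal(true_depth.size)
centers, depth_lut = build_doi_lut(ratios)
print("mean |DOI error| (mm):",
      np.mean(np.abs(estimate_doi(ratios, centers, depth_lut) - true_depth)))
```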

  17. A novel method to calibrate DOI function of a PET detector with a dual-ended-scintillator readout

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shao Yiping; Yao Rutao; Ma Tianyu

    The detection of depth-of-interaction (DOI) is a critical detector capability to improve the PET spatial resolution uniformity across the field-of-view and will significantly enhance, in particular, small-bore system performance for brain, breast, and small animal imaging. One promising technique of DOI detection is to use dual-ended-scintillator readout, which uses two photon sensors to detect scintillation light from both ends of a scintillator array and estimates DOI based on the ratio of the signals (similar to Anger logic). This approach needs a careful DOI function calibration to establish an accurate relationship between DOI and signal ratios, and to recalibrate if the detection condition shifts due to drift of the sensor gain, bias variations, or degraded optical coupling, etc. However, the current calibration method, which uses coincident events to locate interaction positions inside a single scintillator crystal, has severe drawbacks, such as a complicated setup, long and repetitive measurements, and being prone to errors from various possible misalignments among the source and detector components. This method is also not practically suitable for calibrating multiple DOI functions of a crystal array. To solve these problems, a new method has been developed that requires only a uniform flood source to irradiate a crystal array, without the need to locate the interaction positions, and calculates DOI functions based solely on the uniform probability distribution of interactions over DOI positions, without knowledge or assumption of detector responses. Simulation and experiment were carried out to validate the new method, and the results show that the new method, with a simple setup and a single measurement, can provide consistent and accurate DOI functions for the entire array of multiple scintillator crystals. This will enable an accurate, simple, and practical DOI function calibration for PET detectors based on the dual-ended-scintillator readout design. In addition, the new method can be generally applied to calibrating other types of detectors that use a similar dual-ended readout to acquire the radiation interaction position.

  18. Automated Calibration For Numerical Models Of Riverflow

    NASA Astrophysics Data System (ADS)

    Fernandez, Betsaida; Kopmann, Rebekka; Oladyshkin, Sergey

    2017-04-01

    Calibration has been fundamental since the beginning of all types of hydro-system modeling, as the means to approximate the parameters that can mimic the overall system behavior. Thus, an assessment of different deterministic and stochastic optimization methods is undertaken to compare their robustness, computational feasibility, and global search capacity, and the uncertainty of the most suitable methods is analyzed. These optimization methods minimize an objective function that compares synthetic measurements with simulated data. Synthetic measurement data replace the observed data set to guarantee that a parameter solution exists. The input data for the objective function derive from a hydro-morphological dynamics numerical model representing a 180-degree bend channel. The hydro-morphological numerical model shows a high level of ill-posedness in the mathematical problem. Minimization of the objective function by the different candidate optimization methods indicates failure of some gradient-based methods, such as Newton Conjugate Gradient and BFGS. Others show only partial convergence, such as Nelder-Mead, Polak-Ribière, L-BFGS-B, Truncated Newton Conjugate Gradient, and Trust-Region Newton Conjugate Gradient. Further methods yield parameter solutions outside the physical limits, such as Levenberg-Marquardt and LeastSquareRoot. Moreover, there is a significant computational demand for genetic optimization methods, such as Differential Evolution and Basin-Hopping, as well as for brute-force methods. The deterministic Sequential Least Squares Programming method and the stochastic Bayesian inference approach give the best optimization results. Keywords: automated calibration of a hydro-morphological dynamic numerical model, Bayesian inference theory, deterministic optimization methods.

  19. A Preliminary Design of a Calibration Chamber for Evaluating the Stability of Unsaturated Soil Slope

    NASA Astrophysics Data System (ADS)

    Hsu, H.-H.

    2012-04-01

    Unsaturated soil slopes, which have groundwater tables and fail easily under heavy rainfall, are widely distributed in arid and semi-arid areas. For analyzing slope stability, in situ tests are the direct way to obtain the characteristics of a test site, and the cone penetration test (CPT) is a popular in situ test method. Some CPT empirical equations were established from calibration chamber tests. In the past, CPTs performed in calibration chambers commonly used clean quartz sand as the testing material. However, silty sand is observed in many actual slopes, and because silty sand is more compressible than quartz sand, it is not suitable to apply correlations between soil properties and CPT results built from quartz sand to silty sand. Experience with CPT calibration in silty sand has been limited. CPT calibration tests have mostly been performed in dry or saturated soils, where the condition around the cone tip during penetration is assumed to be fully drained or fully undrained, yet it has been observed to be partially drained for unsaturated soils. Because matric suction has a great effect on the characteristics of unsaturated soils, they are much more sensitive to water content than saturated soils. The design of an unsaturated calibration chamber is in progress. The air pressure is supplied from the top plate, and the pore water pressure is provided through high-air-entry-value ceramic disks located at the bottom plate of the chamber cell. To enhance and uniformly distribute the unsaturated effect, four perforated burettes are installed onto the ceramic disks and stretch upwards to the mid-height of the specimen. This paper describes the design concepts, illustrates this unsaturated calibration chamber, and presents preliminary test results.

  20. Direct Reading Particle Counters: Calibration Verification and Multiple Instrument Agreement via Bump Testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jankovic, John; Zontek, Tracy L.; Ogle, Burton R.

    We examined the calibration records of two direct reading instruments designated as condensation particle counters in order to determine the number of times they were found to be out of tolerance at the annual manufacturer's recalibration. Both instruments were found to be out of tolerance more times than within tolerance, and it was concluded that annual calibration alone was insufficient to provide operational confidence in an instrument's response. Thus, a method based on subsequent agreement with data gathered from a newly calibrated instrument was developed to confirm operational readiness between annual calibrations, hereafter referred to as bump testing. The method consists of measuring source particles produced by a gas grille spark igniter in a gallon-size jar. Sampling from this chamber with a newly calibrated instrument to determine the calibrated response over the particle concentration range of interest serves as a reference. Agreement between this reference response and subsequent responses at later dates implies that the instrument is performing as it was at the time of calibration. Side-by-side sampling allows the level of agreement between two or more instruments to be determined. This is useful when simultaneously collected data are compared for differences, i.e., background versus process aerosol concentrations. A reference set of data was obtained using the spark igniter. The generation system was found to be reproducible and suitable to form the basis of calibration verification. Finally, the bump test is simple enough to be performed periodically throughout the calibration year or prior to field monitoring.

  1. Direct Reading Particle Counters: Calibration Verification and Multiple Instrument Agreement via Bump Testing

    DOE PAGES

    Jankovic, John; Zontek, Tracy L.; Ogle, Burton R.; ...

    2015-01-27

    We examined the calibration records of two direct reading instruments designated as condensation particle counters in order to determine the number of times they were found to be out of tolerance at the annual manufacturer's recalibration. Both instruments were found to be out of tolerance more times than within tolerance, and it was concluded that annual calibration alone was insufficient to provide operational confidence in an instrument's response. Thus, a method based on subsequent agreement with data gathered from a newly calibrated instrument was developed to confirm operational readiness between annual calibrations, hereafter referred to as bump testing. The method consists of measuring source particles produced by a gas grille spark igniter in a gallon-size jar. Sampling from this chamber with a newly calibrated instrument to determine the calibrated response over the particle concentration range of interest serves as a reference. Agreement between this reference response and subsequent responses at later dates implies that the instrument is performing as it was at the time of calibration. Side-by-side sampling allows the level of agreement between two or more instruments to be determined. This is useful when simultaneously collected data are compared for differences, i.e., background versus process aerosol concentrations. A reference set of data was obtained using the spark igniter. The generation system was found to be reproducible and suitable to form the basis of calibration verification. Finally, the bump test is simple enough to be performed periodically throughout the calibration year or prior to field monitoring.

  2. EO-1 Hyperion reflectance time series at calibration and validation sites: stability and sensitivity to seasonal dynamics

    Treesearch

    Petya K. Entcheva Campbell; Elizabeth M. Middleton; Kurt J. Thome; Raymond F. Kokaly; Karl Fred Huemmrich; David Lagomasino; Kimberly A. Novick; Nathaniel A. Brunsell

    2013-01-01

    This study evaluated Earth Observing 1 (EO-1) Hyperion reflectance time series at established calibration sites to assess the instrument stability and suitability for monitoring vegetation functional parameters. Our analysis using three pseudo-invariant calibration sites in North America indicated that the reflectance time series are devoid of apparent spectral trends...

  3. Application of the Shiono and Knight Method in asymmetric compound channels with different side slopes of the internal wall

    NASA Astrophysics Data System (ADS)

    Alawadi, Wisam; Al-Rekabi, Wisam S.; Al-Aboodi, Ali H.

    2018-03-01

    The Shiono and Knight Method (SKM) is widely used to predict the lateral distribution of depth-averaged velocity and boundary shear stress for flows in compound channels. Three calibrating coefficients need to be estimated when applying the SKM, namely the eddy viscosity coefficient (λ), friction factor (f) and secondary flow coefficient (k). There are several tested methods that can satisfactorily be used to estimate λ and f; however, calibrating the secondary flow coefficient k so that secondary flow effects are accounted for correctly is still problematic. In this paper, the calibration of the secondary flow coefficients is established by employing two approaches to estimate correct values of k for simulating an asymmetric compound channel with different side slopes of the internal wall. The first approach is based on Abril and Knight (2004), who suggest fixed values for the main channel and floodplain regions. In the second approach, the equations developed by Devi and Khatua (2017), which relate the variation of the secondary flow coefficients to the relative depth (β) and width ratio (α), are used. The results indicate that the calibration method developed by Devi and Khatua (2017) is a better choice for calibrating the secondary flow coefficients than the first approach, which assumes a fixed value of k for different flow depths. The results also indicate that the boundary condition based on shear force continuity can successfully be used for simulating rectangular compound channels, while continuity of the depth-averaged velocity and its gradient is the accepted boundary condition in simulations of trapezoidal compound channels. However, the SKM performance in predicting the boundary shear stress over the shear layer region may not be improved by only imposing suitably calibrated values of the secondary flow coefficients, because of the difficulties of modelling the complex interaction that develops between the flows in the main channel and on the floodplain in this region.
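    For reference, the SKM depth-averaged momentum balance is commonly quoted in a form similar to the following (a hedged sketch; the notation used in the paper may differ):

    \rho g H S_0 - \frac{f}{8}\,\rho U_d^2 \sqrt{1+\frac{1}{s^2}} + \frac{\partial}{\partial y}\left[\rho \lambda H^2 \left(\frac{f}{8}\right)^{1/2} U_d \frac{\partial U_d}{\partial y}\right] = \Gamma,

    where U_d is the depth-averaged velocity, H the local depth, S_0 the bed slope and s the side slope; the secondary-flow term \Gamma is the quantity that the coefficient k parameterizes, for example as \Gamma = k \rho g H S_0 in the fixed-value approach attributed to Abril and Knight (2004).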

  4. Clusters of Monoisotopic Elements for Calibration in (TOF) Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Kolářová, Lenka; Prokeš, Lubomír; Kučera, Lukáš; Hampl, Aleš; Peňa-Méndez, Eladia; Vaňhara, Petr; Havel, Josef

    2017-03-01

    Precise calibration in TOF MS requires suitable and reliable standards, which are not always available for high masses. We evaluated inorganic clusters of the monoisotopic elements gold and phosphorus (Au_n^+/Au_n^- and P_n^+/P_n^-) as an alternative to peptides or proteins for the external and internal calibration of mass spectra in various experimental and instrumental scenarios. Monoisotopic gold or phosphorus clusters can be easily generated in situ from suitable precursors by laser desorption/ionization (LDI) or matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS). Their use offers numerous advantages, including simplicity of preparation, biological inertness, and exact mass determination even at lower mass resolution. We used citrate-stabilized gold nanoparticles to generate gold calibration clusters, and red phosphorus powder to generate phosphorus clusters. Both elements can be added to samples to perform internal calibration up to mass-to-charge ratios (m/z) of 10-15,000 without significantly interfering with the analyte. We demonstrated the use of the gold and phosphorus clusters in the MS analysis of complex biological samples, including microbial standards and total extracts of mouse embryonic fibroblasts. We believe that clusters of monoisotopic elements could be used as generally applicable calibrants for complex biological samples.

  5. Identification of suitable fundus images using automated quality assessment methods.

    PubMed

    Şevik, Uğur; Köse, Cemal; Berber, Tolga; Erdöl, Hidayet

    2014-04-01

    Retinal image quality assessment (IQA) is a crucial process for automated retinal image analysis systems to obtain an accurate and successful diagnosis of retinal diseases. Consequently, the first step in a good retinal image analysis system is measuring the quality of the input image. We present an approach for finding medically suitable retinal images for retinal diagnosis. We used a three-class grading system that consists of good, bad, and outlier classes. We created a retinal image quality dataset with a total of 216 consecutive images called the Diabetic Retinopathy Image Database. We identified the suitable images within the good images for automatic retinal image analysis systems using a novel method. Subsequently, we evaluated our retinal image suitability approach using the Digital Retinal Images for Vessel Extraction and Standard Diabetic Retinopathy Database Calibration level 1 public datasets. The results were measured through the F1 metric, which is a harmonic mean of precision and recall metrics. The highest F1 scores of the IQA tests were 99.60%, 96.50%, and 85.00% for good, bad, and outlier classes, respectively. Additionally, the accuracy of our suitable image detection approach was 98.08%. Our approach can be integrated into any automatic retinal analysis system with sufficient performance scores.
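    For reference, the F1 metric quoted above is the harmonic mean of precision and recall,

    F_1 = \frac{2 \cdot \mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}},

    so a high score requires both few false positives and few false negatives.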

  6. Cloud cover determination in polar regions from satellite imagery

    NASA Technical Reports Server (NTRS)

    Barry, R. G.; Maslanik, J. A.; Key, J. R.

    1987-01-01

    This work is devoted to defining the spectral and spatial characteristics of clouds and surface conditions in the polar regions, and to creating calibrated, geometrically correct data sets suitable for quantitative analysis. Ways are explored in which this information can be applied to cloud classification, either as new methods or as extensions to existing classification schemes. A methodology is developed that uses automated techniques to merge Advanced Very High Resolution Radiometer (AVHRR) and Scanning Multichannel Microwave Radiometer (SMMR) data, and to apply first-order calibration and zenith angle corrections to the AVHRR imagery. Cloud cover and surface types are manually interpreted, and manual methods are used to define relatively pure training areas that describe the textural and multispectral characteristics of clouds over several surface conditions. The effects of viewing angle and bidirectional reflectance differences are studied for several classes, and the effectiveness of some key components of existing classification schemes is tested.

  7. [Binding properties of components removable from dental base plate, analysed by Fourier-Transform Surface Plasmon Resonance (FT-SPR) method].

    PubMed

    Bakó, József; Kelemen, Máté; Szalóki, Melinda; Vitályos, Géza; Radics, Tünde; Hegedüs, Csaba

    2015-03-01

    In parallel with the emergence of new dental materials, the number of allergic diseases is continuously increasing. Extremely small quantities of allergens are capable of inducing an allergic reaction. It is therefore particularly important to examine these materials as antigens and investigate their binding properties to proteins (e.g. formaldehyde, methacrylic acid, benzoyl peroxide). Fourier-Transform Surface Plasmon Resonance spectroscopy (FT-SPR) is a suitable examination method for this type of investigation. FT-SPR measurement is performed at a fixed angle of incident light, and reflectivity is measured over a range of wavelengths in the near infrared. The advantages of this method are its outstanding sensitivity, label-free detection capability and the possibility of real-time testing. Formaldehyde and methacrylic acid are among the most common dental allergens. In our study we examined these molecules by FT-SPR spectroscopy. The aim of this work was to investigate the suitability of the method for detecting these materials, with special focus on the analysis and evaluation of concentration-dependent measurements. Formaldehyde and methacrylic acid solutions of different concentrations (0.01%-0.2%) were measured. Individual spectra were recorded for all of the solutions, and calibration curves were calculated for both materials so that an unknown concentration could be determined. The results confirmed that the method is theoretically capable of detecting concentration changes on the hundred-thousandth scale in the solution flowing above the SPR chip. The concentration-dependent studies proved that the method can measure these materials directly and can provide an appropriate calibration for quantitative determination. These experiments show the broad applicability of the FT-SPR method, which can greatly facilitate the mapping and understanding of biomolecular interactions in the future.

  8. Open-field test site

    NASA Astrophysics Data System (ADS)

    Gyoda, Koichi; Shinozuka, Takashi

    1995-06-01

    An open-field test site with measurement equipment, a turntable, antenna positioners, and auxiliary measurement equipment was remodelled at the CRL north site. This paper introduces the configuration, specifications and characteristics of this new open-field test site. Measured 3-m and 10-m site attenuations are in good agreement with theoretical values, which means that this site is suitable for 3-m and 10-m method EMI/EMC measurements. The site is expected to be effective for antenna measurement, antenna calibration, and studies on EMI/EMC measurement methods.

  9. Spectral characterization of near-infrared acousto-optic tunable filter (AOTF) hyperspectral imaging systems using standard calibration materials.

    PubMed

    Bürmen, Miran; Pernuš, Franjo; Likar, Boštjan

    2011-04-01

    In this study, we propose and evaluate a method for spectral characterization of acousto-optic tunable filter (AOTF) hyperspectral imaging systems in the near-infrared (NIR) spectral region from 900 nm to 1700 nm. The proposed spectral characterization method is based on the SRM-2035 standard reference material, exhibiting distinct spectral features, which enables robust non-rigid matching of the acquired and reference spectra. The matching is performed by simultaneously optimizing the parameters of the AOTF tuning curve, spectral resolution, baseline, and multiplicative effects. In this way, the tuning curve (frequency-wavelength characteristics) and the corresponding spectral resolution of the AOTF hyperspectral imaging system can be characterized simultaneously. Also, the method enables simple spectral characterization of the entire imaging plane of hyperspectral imaging systems. The results indicate that the method is accurate and efficient and can easily be integrated with systems operating in diffuse reflection or transmission modes. Therefore, the proposed method is suitable for characterization, calibration, or validation of AOTF hyperspectral imaging systems. © 2011 Society for Applied Spectroscopy

  10. Bayesian calibration of terrestrial ecosystem models: a study of advanced Markov chain Monte Carlo methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Dan; Ricciuto, Daniel M.; Walker, Anthony P.

    Calibration of terrestrial ecosystem models is important but challenging. Bayesian inference implemented by Markov chain Monte Carlo (MCMC) sampling provides a comprehensive framework to estimate model parameters and associated uncertainties using their posterior distributions. The effectiveness and efficiency of the method strongly depend on the MCMC algorithm used. In this work, a differential evolution adaptive Metropolis (DREAM) algorithm is used to estimate posterior distributions of 21 parameters for the data assimilation linked ecosystem carbon (DALEC) model using 14 years of daily net ecosystem exchange data collected at the Harvard Forest Environmental Measurement Site eddy-flux tower. The calibration of DREAM results in a better model fit and predictive performance compared to the popular adaptive Metropolis (AM) scheme. Moreover, DREAM indicates that two parameters controlling autumn phenology have multiple modes in their posterior distributions while AM only identifies one mode. The application suggests that DREAM is very suitable to calibrate complex terrestrial ecosystem models, where the uncertain parameter size is usually large and existence of local optima is always a concern. In addition, this effort justifies the assumptions of the error model used in Bayesian calibration according to the residual analysis. Here, the result indicates that a heteroscedastic, correlated, Gaussian error model is appropriate for the problem, and the consequent constructed likelihood function can alleviate the underestimation of parameter uncertainty that is usually caused by using uncorrelated error models.
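    As a hedged, simplified illustration of MCMC-based parameter calibration (a plain random-walk Metropolis sampler, not the DREAM or adaptive Metropolis algorithms studied here), the sketch below draws posterior samples for an assumed `log_posterior`; a real application would replace the toy posterior with the prior-plus-likelihood of the ecosystem model.

```python
import numpy as np

def metropolis(log_posterior, theta0, n_steps=20000, step=0.1, seed=0):
    # Plain random-walk Metropolis: propose a Gaussian step, accept with the Metropolis ratio.
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    logp = log_posterior(theta)
    chain = np.empty((n_steps, theta.size))
    for i in range(n_steps):
        proposal = theta + step * rng.standard_normal(theta.size)
        logp_new = log_posterior(proposal)
        if np.log(rng.uniform()) < logp_new - logp:
            theta, logp = proposal, logp_new
        chain[i] = theta
    return chain

def log_posterior(theta):
    # Toy stand-in posterior (two correlated parameters), used only to show the mechanics.
    cov = np.array([[1.0, 0.6], [0.6, 1.0]])
    return -0.5 * theta @ np.linalg.solve(cov, theta)

chain = metropolis(log_posterior, theta0=[0.0, 0.0])
print("posterior means:", chain[5000:].mean(axis=0))   # discard burn-in before summarizing
```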

  11. Integrating dynamic stopping, transfer learning and language models in an adaptive zero-training ERP speller.

    PubMed

    Kindermans, Pieter-Jan; Tangermann, Michael; Müller, Klaus-Robert; Schrauwen, Benjamin

    2014-06-01

    Most BCIs have to undergo a calibration session in which data is recorded to train decoders with machine learning. Only recently zero-training methods have become a subject of study. This work proposes a probabilistic framework for BCI applications which exploit event-related potentials (ERPs). For the example of a visual P300 speller we show how the framework harvests the structure suitable to solve the decoding task by (a) transfer learning, (b) unsupervised adaptation, (c) language model and (d) dynamic stopping. A simulation study compares the proposed probabilistic zero framework (using transfer learning and task structure) to a state-of-the-art supervised model on n = 22 subjects. The individual influence of the involved components (a)-(d) are investigated. Without any need for a calibration session, the probabilistic zero-training framework with inter-subject transfer learning shows excellent performance--competitive to a state-of-the-art supervised method using calibration. Its decoding quality is carried mainly by the effect of transfer learning in combination with continuous unsupervised adaptation. A high-performing zero-training BCI is within reach for one of the most popular BCI paradigms: ERP spelling. Recording calibration data for a supervised BCI would require valuable time which is lost for spelling. The time spent on calibration would allow a novel user to spell 29 symbols with our unsupervised approach. It could be of use for various clinical and non-clinical ERP-applications of BCI.

  12. Bayesian calibration of terrestrial ecosystem models: a study of advanced Markov chain Monte Carlo methods

    DOE PAGES

    Lu, Dan; Ricciuto, Daniel M.; Walker, Anthony P.; ...

    2017-09-27

    Calibration of terrestrial ecosystem models is important but challenging. Bayesian inference implemented by Markov chain Monte Carlo (MCMC) sampling provides a comprehensive framework to estimate model parameters and associated uncertainties using their posterior distributions. The effectiveness and efficiency of the method strongly depend on the MCMC algorithm used. In this work, a differential evolution adaptive Metropolis (DREAM) algorithm is used to estimate posterior distributions of 21 parameters for the data assimilation linked ecosystem carbon (DALEC) model using 14 years of daily net ecosystem exchange data collected at the Harvard Forest Environmental Measurement Site eddy-flux tower. The calibration of DREAM results in a better model fit and predictive performance compared to the popular adaptive Metropolis (AM) scheme. Moreover, DREAM indicates that two parameters controlling autumn phenology have multiple modes in their posterior distributions while AM only identifies one mode. The application suggests that DREAM is very suitable to calibrate complex terrestrial ecosystem models, where the uncertain parameter size is usually large and existence of local optima is always a concern. In addition, this effort justifies the assumptions of the error model used in Bayesian calibration according to the residual analysis. Here, the result indicates that a heteroscedastic, correlated, Gaussian error model is appropriate for the problem, and the consequent constructed likelihood function can alleviate the underestimation of parameter uncertainty that is usually caused by using uncorrelated error models.

  13. Integrating dynamic stopping, transfer learning and language models in an adaptive zero-training ERP speller

    NASA Astrophysics Data System (ADS)

    Kindermans, Pieter-Jan; Tangermann, Michael; Müller, Klaus-Robert; Schrauwen, Benjamin

    2014-06-01

    Objective. Most BCIs have to undergo a calibration session in which data is recorded to train decoders with machine learning. Only recently zero-training methods have become a subject of study. This work proposes a probabilistic framework for BCI applications which exploit event-related potentials (ERPs). For the example of a visual P300 speller we show how the framework harvests the structure suitable to solve the decoding task by (a) transfer learning, (b) unsupervised adaptation, (c) language model and (d) dynamic stopping. Approach. A simulation study compares the proposed probabilistic zero framework (using transfer learning and task structure) to a state-of-the-art supervised model on n = 22 subjects. The individual influence of the involved components (a)-(d) are investigated. Main results. Without any need for a calibration session, the probabilistic zero-training framework with inter-subject transfer learning shows excellent performance—competitive to a state-of-the-art supervised method using calibration. Its decoding quality is carried mainly by the effect of transfer learning in combination with continuous unsupervised adaptation. Significance. A high-performing zero-training BCI is within reach for one of the most popular BCI paradigms: ERP spelling. Recording calibration data for a supervised BCI would require valuable time which is lost for spelling. The time spent on calibration would allow a novel user to spell 29 symbols with our unsupervised approach. It could be of use for various clinical and non-clinical ERP-applications of BCI.

  14. The calibration analysis of soil infiltration formula in farmland scale

    NASA Astrophysics Data System (ADS)

    Qian, Tao; Han, Na Na; Chang, Shuan Ling

    2018-06-01

    Soil infiltration characteristics are an important basis for estimating farmland-scale parameters. This study is based on 12 groups of double-ring infiltration tests conducted in the test field of the west campus of Tianjin Agricultural University. Using calibration theory combined with statistics, a calibration analysis of the Philip infiltration formula was carried out and the spatial variation characteristics of the calibration factor were analyzed. The results show that, in the study area, the calibration factor αA calculated from the stable soil infiltration rate A gives the best calibration effect and is suitable for calibrating the infiltration formula in this area; the coefficient of variation of αA is 0.3234, indicating a certain degree of spatial variability.
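    Assuming the record refers to the two-term Philip infiltration formula, cumulative infiltration can be written as

    I(t) = S\,t^{1/2} + A\,t,

    where S is the sorptivity and A the stable (steady) infiltration rate; on this reading, the calibration factor \alpha_A rescales the A term so that the formula reproduces the local field measurements.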

  15. The application of PGNAA borehole logging for copper grade estimation at Chuquicamata mine.

    PubMed

    Charbucinski, J; Duran, O; Freraut, R; Heresi, N; Pineyro, I

    2004-05-01

    The field trials of a prompt gamma neutron activation analysis (PGNAA) spectrometric logging method and instrumentation (SIROLOG) for copper grade estimation in production holes of a porphyry-type copper ore mine, Chuquicamata in Chile, are described. Examples of data analysis, calibration procedures and copper grade profiles are provided. The field tests have proved the suitability of the PGNAA logging system for in situ quality control of copper ore.

  16. Simultaneous estimation of ramipril, acetylsalicylic acid and atorvastatin calcium by chemometrics assisted UV-spectrophotometric method in capsules.

    PubMed

    Sankar, A S Kamatchi; Vetrichelvan, Thangarasu; Venkappaya, Devashya

    2011-09-01

    In the present work, three different spectrophotometric methods for simultaneous estimation of ramipril, aspirin and atorvastatin calcium in raw materials and in formulations are described. Overlapping spectral data were quantitatively resolved using chemometric methods, viz. inverse least squares (ILS), principal component regression (PCR) and partial least squares (PLS). Calibrations were constructed using the absorption data matrix corresponding to the concentration data matrix. The linearity range was found to be 1-5, 10-50 and 2-10 μg mL⁻¹ for ramipril, aspirin and atorvastatin calcium, respectively. The absorbance matrix was obtained by measuring the zero-order absorbance in the wavelength range between 210 and 320 nm. A training set of the concentration data corresponding to the ramipril, aspirin and atorvastatin calcium mixtures was designed statistically to maximize the information content from the spectra and to minimize the error of the multivariate calibrations. By applying the respective algorithms for PLS1, PCR and ILS to the measured spectra of the calibration set, a suitable model was obtained. This model was selected on the basis of RMSECV and RMSEP values and then applied to the prediction set and the capsule formulation. Mean recoveries for the commercial formulation set, together with the figures of merit (calibration sensitivity, selectivity, limit of detection, limit of quantification and analytical sensitivity), were estimated. Validity of the proposed approaches was successfully assessed for analyses of the drugs in the various prepared physical mixtures and formulations.

  17. A validated method for the quantitation of 1,1-difluoroethane using a gas in equilibrium method of calibration.

    PubMed

    Avella, Joseph; Lehrer, Michael; Zito, S William

    2008-10-01

    1,1-Difluoroethane (DFE), also known as Freon 152A, is a member of a class of compounds known as halogenated hydrocarbons. A number of these compounds have gained notoriety because of their ability to induce rapid onset of intoxication after inhalation exposure. Abuse of DFE has necessitated the development of methods for its detection and quantitation in postmortem and human performance specimens. Furthermore, methodologies applicable to research studies are required, as there have been limited toxicokinetic and toxicodynamic reports published on DFE. This paper describes a method for the quantitation of DFE using a gas chromatography-flame-ionization headspace technique that employs solventless standards for calibration. Two calibration curves using 0.5 mL whole blood calibrators were developed, ranging from 0.225-1.350 mg/L (curve A) to 9.0-180.0 mg/L (curve B). These were evaluated for linearity (0.9992 and 0.9995), limit of detection of 0.018 mg/L, limit of quantitation of 0.099 mg/L (recovery 111.9%, CV 9.92%), and upper limit of linearity of 27,000.0 mg/L. The combined-curve recovery result for a 98.0 mg/L DFE control prepared using an alternate technique was 102.2% with a CV of 3.09%. No matrix interference was observed in DFE-enriched blood, urine or brain specimens, nor did analysis of variance detect any significant differences (alpha = 0.01) in the area under the curve of blood, urine or brain specimens at three identical DFE concentrations. The method is suitable for use in forensic laboratories because validation was performed on instrumentation routinely used in forensic labs and because of the ease with which the calibration range can be adjusted. Perhaps more importantly, it is also useful for research-oriented studies because the removal of solvent from standard preparation eliminates the possibility of solvent-induced changes to the gas/liquid partitioning of DFE or chromatographic interference due to the presence of solvent in specimens.
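    As a hedged sketch of how such a calibration curve can be screened for linearity and detection limits, the snippet below applies the common 3.3σ/slope and 10σ/slope rules, which are not necessarily the validation procedure used by the authors; the calibrator levels merely span the reported low-curve range, and the responses are synthetic values generated only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
conc = np.array([0.225, 0.45, 0.675, 0.90, 1.125, 1.35])        # illustrative calibrators, mg/L
response = 0.095 * conc + rng.normal(0.0, 0.002, conc.size)     # synthetic peak-area ratios

slope, intercept = np.polyfit(conc, response, 1)                # least-squares calibration line
pred = slope * conc + intercept
resid_sd = np.sqrt(np.sum((response - pred) ** 2) / (conc.size - 2))
r2 = np.corrcoef(conc, response)[0, 1] ** 2

lod = 3.3 * resid_sd / slope                                    # limit of detection
loq = 10.0 * resid_sd / slope                                   # limit of quantitation
print(f"r^2 = {r2:.4f}, LOD = {lod:.3f} mg/L, LOQ = {loq:.3f} mg/L")
```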

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    V Yashchuk; R Conley; E Anderson

    Verification of the reliability of metrology data from high quality X-ray optics requires that adequate methods for test and calibration of the instruments be developed. For such verification of optical surface profilometers in the spatial frequency domain, a modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) gratings and arrays has been suggested [1,2] and proven to be an effective calibration method for a number of interferometric microscopes, a phase shifting Fizeau interferometer, and a scatterometer [5]. Here we describe the details of the development of binary pseudo-random multilayer (BPRML) test samples suitable for characterization of scanning (SEM) and transmission (TEM) electron microscopes. We discuss the results of TEM measurements with BPRML test samples fabricated from a WiSi2/Si multilayer coating with pseudo-randomly distributed layers. In particular, we demonstrate that significant information about the metrological reliability of the TEM measurements can be extracted even when the fundamental frequency of the BPRML sample is smaller than the Nyquist frequency of the measurements. The measurements demonstrate a number of problems related to the interpretation of the SEM and TEM data. Note that similar BPRML test samples can be used to characterize X-ray microscopes. Corresponding work with X-ray microscopes is in progress.

  19. Calibration of imaging parameters for space-borne airglow photography using city light positions

    NASA Astrophysics Data System (ADS)

    Hozumi, Yuta; Saito, Akinori; Ejiri, Mitsumu K.

    2016-09-01

    A new method for calibrating the imaging parameters of photographs taken from the International Space Station (ISS) is presented in this report. Airglow in the mesosphere and the F-region ionosphere was captured on the limb of the Earth with a digital single-lens reflex camera from the ISS by astronauts. To utilize the photographs as scientific data, imaging parameters such as the angle of view, exact position, and orientation of the camera must be determined because they are not measured at the time of imaging. A new calibration method using the city light positions shown in the photographs was developed to determine these imaging parameters with an accuracy suitable for airglow study. Applying the pinhole camera model, the apparent city light positions on the photograph are matched with the actual city light locations on Earth, which are derived from the global nighttime stable light map data obtained by the Defense Meteorological Satellite Program satellite. The correct imaging parameters are determined in an iterative process by matching the apparent positions on the image with the actual city light locations. We applied this calibration method to photographs taken on August 26, 2014, and confirmed that the result is correct. The precision of the calibration was evaluated by comparing the results from six different photographs with the same imaging parameters. The precision in determining the camera position and orientation is estimated to be ±2.2 km and ±0.08°, respectively. The 0.08° difference in orientation yields a 2.9-km difference at a tangential point at 90 km altitude. The airglow structures in the photographs were mapped to geographical points using the calibrated imaging parameters and compared with a simultaneous observation by the Visible and near-Infrared Spectral Imager of the Ionosphere, Mesosphere, Upper Atmosphere, and Plasmasphere mapping mission installed on the ISS. The comparison shows good agreement and supports the validity of the calibration. This calibration technique makes it possible to utilize photographs taken on low-Earth-orbit satellites at nighttime as a reference for airglow and aurora structures.
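    The sketch below is a hedged, simplified version of the parameter refinement step: a pinhole projection of known ground positions is compared with apparent pixel positions, and the camera orientation and position are adjusted by non-linear least squares. The projection model, parameterization, focal length, and all data here are synthetic illustrations, not the paper's actual processing chain.

```python
import numpy as np
from scipy.optimize import least_squares

def rotation(angles):
    # Simple ZYX Euler rotation matrix (illustrative parameterization of the camera orientation).
    a, b, c = angles
    ca, sa, cb, sb, cc, sc = np.cos(a), np.sin(a), np.cos(b), np.sin(b), np.cos(c), np.sin(c)
    Rz = np.array([[ca, -sa, 0.0], [sa, ca, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cb, 0.0, sb], [0.0, 1.0, 0.0], [-sb, 0.0, cb]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cc, -sc], [0.0, sc, cc]])
    return Rz @ Ry @ Rx

def project(params, world_pts, focal_px=3000.0):
    # Pinhole model: rotate/translate points into the camera frame, then divide by depth.
    angles, cam_pos = params[:3], params[3:]
    pc = (world_pts - cam_pos) @ rotation(angles).T
    return focal_px * pc[:, :2] / pc[:, 2:3]

def residuals(params, world_pts, pix_obs):
    return (project(params, world_pts) - pix_obs).ravel()

# Synthetic example: recover a small orientation/position perturbation from "city light" points.
rng = np.random.default_rng(0)
world_pts = rng.uniform(-50.0, 50.0, (30, 3)) + np.array([0.0, 0.0, 400.0])
true_params = np.array([0.02, -0.01, 0.03, 1.0, -2.0, 0.5])
pix_obs = project(true_params, world_pts) + rng.normal(0.0, 0.3, (30, 2))
fit = least_squares(residuals, np.zeros(6), args=(world_pts, pix_obs))
print("recovered parameters:", np.round(fit.x, 3))
```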

  20. Process analytical technology in continuous manufacturing of a commercial pharmaceutical product.

    PubMed

    Vargas, Jenny M; Nielsen, Sarah; Cárdenas, Vanessa; Gonzalez, Anthony; Aymat, Efrain Y; Almodovar, Elvin; Classe, Gustavo; Colón, Yleana; Sanchez, Eric; Romañach, Rodolfo J

    2018-03-01

    The implementation of process analytical technology and continuous manufacturing at an FDA approved commercial manufacturing site is described. In this direct compaction process the blends produced were monitored with a Near Infrared (NIR) spectroscopic calibration model developed with partial least squares (PLS) regression. The authors understand that this is the first study where the continuous manufacturing (CM) equipment was used as a gravimetric reference method for the calibration model. A principal component analysis (PCA) model was also developed to identify the powder blend, and determine whether it was similar to the calibration blends. An air diagnostic test was developed to assure that powder was present within the interface when the NIR spectra were obtained. The air diagnostic test as well the PCA and PLS calibration model were integrated into an industrial software platform that collects the real time NIR spectra and applies the calibration models. The PCA test successfully detected an equipment malfunction. Variographic analysis was also performed to estimate the sampling analytical errors that affect the results from the NIR spectroscopic method during commercial production. The system was used to monitor and control a 28 h continuous manufacturing run, where the average drug concentration determined by the NIR method was 101.17% of label claim with a standard deviation of 2.17%, based on 12,633 spectra collected. The average drug concentration for the tablets produced from these blends was 100.86% of label claim with a standard deviation of 0.4%, for 500 tablets analyzed by Fourier Transform Near Infrared (FT-NIR) transmission spectroscopy. The excellent agreement between the mean drug concentration values in the blends and tablets produced provides further evidence of the suitability of the validation strategy that was followed. Copyright © 2018 Elsevier B.V. All rights reserved.

  1. Measurement error corrected sodium and potassium intake estimation using 24-hour urinary excretion.

    PubMed

    Huang, Ying; Van Horn, Linda; Tinker, Lesley F; Neuhouser, Marian L; Carbone, Laura; Mossavar-Rahmani, Yasmin; Thomas, Fridtjof; Prentice, Ross L

    2014-02-01

    Epidemiological studies of the association of sodium and potassium intake with cardiovascular disease risk have almost exclusively relied on self-reported dietary data. Here, 24-hour urinary excretion assessments are used to correct the dietary self-report data for measurement error under the assumption that 24-hour urine recovery provides a biomarker that differs from usual intake according to a classical measurement model. Under this assumption, dietary self-reports underestimate sodium by 0% to 15%, overestimate potassium by 8% to 15%, and underestimate sodium/potassium ratio by ≈20% using food frequency questionnaires, 4-day food records, or three 24-hour dietary recalls in Women's Health Initiative studies. Calibration equations are developed by linear regression of log-transformed 24-hour urine assessments on corresponding log-transformed self-report assessments and several study subject characteristics. For each self-report method, the calibration equations turned out to depend on race and age and strongly on body mass index. After adjustment for temporal variation, calibration equations using food records or recalls explained 45% to 50% of the variation in (log-transformed) 24-hour urine assessments for sodium, 60% to 70% of the variation for potassium, and 55% to 60% of the variation for sodium/potassium ratio. These equations may be suitable for use in epidemiological disease association studies among postmenopausal women. The corresponding signals from food frequency questionnaire data were weak, but calibration equations for the ratios of sodium and potassium/total energy explained ≈35%, 50%, and 45% of log-biomarker variation for sodium, potassium, and their ratio, respectively, after the adjustment for temporal biomarker variation and may be suitable for cautious use in epidemiological studies. Clinical Trial Registration- URL: www.clinicaltrials.gov. Unique identifier: NCT00000611.
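    A schematic form of such a calibration equation, consistent with the description above (the exact covariates, stratification, and fitted coefficients are given in the paper), is

    \log W_i = \beta_0 + \beta_1 \log Q_i + \beta_2\,\mathrm{BMI}_i + \beta_3\,\mathrm{age}_i + \boldsymbol{\gamma}^{\top}\mathbf{z}_i + \epsilon_i,

    where W_i is the 24-hour urinary excretion biomarker, Q_i the corresponding self-report assessment, and \mathbf{z}_i other subject characteristics; calibrated intake estimates are then obtained from the fitted regression.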

  2. Existing methods for improving the accuracy of digital-to-analog converters

    NASA Astrophysics Data System (ADS)

    Eielsen, Arnfinn A.; Fleming, Andrew J.

    2017-09-01

    The performance of digital-to-analog converters is principally limited by errors in the output voltage levels. Such errors are known as element mismatch and are quantified by the integral non-linearity. Element mismatch limits the achievable accuracy and resolution in high-precision applications as it causes gain and offset errors, as well as harmonic distortion. In this article, five existing methods for mitigating the effects of element mismatch are compared: physical level calibration, dynamic element matching, noise-shaping with digital calibration, large periodic high-frequency dithering, and large stochastic high-pass dithering. These methods are suitable for improving accuracy when using digital-to-analog converters that use multiple discrete output levels to reconstruct time-varying signals. The methods improve linearity and therefore reduce harmonic distortion and can be retrofitted to existing systems with minor hardware variations. The performance of each method is compared theoretically and confirmed by simulations and experiments. Experimental results demonstrate that three of the five methods provide significant improvements in the resolution and accuracy when applied to a general-purpose digital-to-analog converter. As such, these methods can directly improve performance in a wide range of applications including nanopositioning, metrology, and optics.
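    The toy simulation below is a hedged illustration of one of these ideas, large periodic high-frequency dithering: a triangular dither spanning several levels is added before the (mismatched) quantizer, and a low-pass average, standing in for the analog reconstruction filter, removes the out-of-band dither while averaging level-dependent errors over many codes. All values (number of levels, mismatch magnitude, dither amplitude and frequency) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_levels = 256
levels = np.arange(n_levels) + rng.normal(0.0, 0.3, n_levels)   # ideal codes + element mismatch (LSB)

def dac(code):
    # Non-ideal DAC: integer codes map to slightly wrong output levels.
    return levels[np.clip(code, 0, n_levels - 1)]

fs = 100_000
t = np.arange(fs) / fs
signal = 128.0 + 60.0 * np.sin(2 * np.pi * 2.0 * t)             # slow signal, in LSB units
tri = 2.0 * np.abs(2.0 * ((5000.0 * t) % 1.0) - 1.0) - 1.0      # unit triangular wave at ~5 kHz
dither = 8.0 * tri                                              # large dither, +/- 8 LSB

out_plain = dac(np.round(signal).astype(int))
out_dither = dac(np.round(signal + dither).astype(int))

# A moving average stands in for the reconstruction (low-pass) filter.
kernel = np.ones(501) / 501.0
err_plain = np.convolve(out_plain - signal, kernel, mode="same")
err_dither = np.convolve(out_dither - signal, kernel, mode="same")
print("rms in-band error without dither:", round(float(np.std(err_plain[1000:-1000])), 4), "LSB")
print("rms in-band error with dither   :", round(float(np.std(err_dither[1000:-1000])), 4), "LSB")
```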

  3. Process Diagnostics and Monitoring Using the Multipole Resonance Probe (MRP)

    NASA Astrophysics Data System (ADS)

    Harhausen, J.; Awakowicz, P.; Brinkmann, R. P.; Foest, R.; Lapke, M.; Musch, T.; Mussenbrock, T.; Oberrath, J.; Ohl, A.; Rolfes, I.; Schulz, Ch.; Storch, R.; Styrnoll, T.

    2011-10-01

    In this contribution we present the application of the MRP in an industrial plasma ion assisted deposition (PIAD) chamber (Leybold optics SYRUS-pro). The MRP is a novel plasma diagnostic which is suitable for an industrial environment - which means that the proposed method is robust, calibration free, and economical, and can be used for ideal and reactive plasmas alike. In order to employ the MRP as process diagnostics we mounted the probe on a manipulator to obtain spatially resolved information on the electron density and temperature. As monitoring tool the MRP is installed at a fixed position. Even during the deposition process it provides stable measurement results while other diagnostic methods, e.g. the Langmuir probe, may suffer from dielectric coatings. Funded by the German Ministry for Education and Research (BMBF, Fkz. 13N10462).

  4. Priority design parameters of industrialized optical fiber sensors in civil engineering

    NASA Astrophysics Data System (ADS)

    Wang, Huaping; Jiang, Lizhong; Xiang, Ping

    2018-03-01

    Considering the mechanical effects and the different paths for transferring deformation, optical fiber sensors commonly used in civil engineering have been systematically classified. Based on strain transfer theory, the relationship between the strain transfer coefficient and the allowable testing error is established. The proposed relationship is regarded as the optimal control equation used to obtain the optimal values for sensors that satisfy the requirement of measurement precision. Furthermore, specific optimization design methods and priority design parameters of the classified sensors are presented. This research indicates that (1) the strain transfer theory-based optimization design method is well suited to sensors that depend on interfacial shear stress to transfer the deformation; (2) the priority design parameters are the bonded (sensing) length, interfacial bond strength, elastic modulus and radius of the protective layer, and thickness of the adhesive layer; (3) the optimization design of sensors with anchor pieces at both ends is independent of strain transfer theory, as the strain transfer coefficient can be conveniently calibrated by test, and this kind of sensor has no obvious priority design parameters. An improved calibration test is put forward to enhance the accuracy of the calibration coefficient of end-expanding sensors. By considering the practical state of the sensors and the testing accuracy, comprehensive and systematic analyses of optical fiber sensors are provided from the perspective of mechanical actions, which can scientifically guide the application design and calibration testing of industrialized optical fiber sensors.

  5. NONPOINT SOURCE MODEL CALIBRATION IN HONEY CREEK WATERSHED

    EPA Science Inventory

    The U.S. EPA Non-Point Source Model has been applied and calibrated to a fairly large (187 sq. mi.) agricultural watershed in the Lake Erie Drainage basin of north central Ohio. Hydrologic and chemical routing algorithms have been developed. The model is evaluated for suitability...

  6. Human wound photogrammetry with low-cost hardware based on automatic calibration of geometry and color

    NASA Astrophysics Data System (ADS)

    Jose, Abin; Haak, Daniel; Jonas, Stephan; Brandenburg, Vincent; Deserno, Thomas M.

    2015-03-01

    Photographic documentation and image-based wound assessment is frequently performed in medical diagnostics, patient care, and clinical research. To support quantitative assessment, photographic imaging is based on expensive and high-quality hardware and still needs appropriate registration and calibration. Using inexpensive consumer hardware such as smartphone-integrated cameras, calibration of geometry, color, and contrast is challenging. Some methods involve color calibration using a reference pattern such as a standard color card, which is located manually in the photographs. In this paper, we adapt the lattice detection algorithm by Park et al. from real-world scenes to medicine. At first, the algorithm extracts and clusters feature points according to their local intensity patterns. Groups of similar points are fed into a selection process, which tests for suitability as a lattice grid. The group most likely to form the meshes of a lattice is selected, and from it a template for an initial lattice cell is extracted. Then, a Markov random field is modeled. Using mean-shift belief propagation, the detection of the 2D lattice is solved iteratively as a spatial tracking problem. Least-squares geometric calibration of projective distortions and non-linear color calibration in RGB space are supported by 35 corner points and 24 color patches, respectively. The method is tested on 37 photographs taken from the German Calciphylaxis registry, where non-standardized photographic documentation is collected nationwide from all contributing trial sites. In all images, the reference card location is correctly identified. At least 28 of the 35 lattice points were detected, outperforming the SIFT-based approach previously applied. Based on these coordinates, robust geometry and color registration is performed, making the photographs comparable for quantitative analysis.
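
    For intuition, the geometric part of such a calibration reduces to a least-squares projective (homography) fit between detected card corners and their nominal positions. The sketch below uses the standard DLT formulation with illustrative point coordinates; it is not the paper's implementation.

    ```python
    import numpy as np

    def fit_homography(src, dst):
        """Least-squares projective (homography) fit via the DLT algorithm.

        src, dst: (N, 2) arrays of corresponding points (N >= 4), e.g. detected
        lattice corners and their nominal reference-card coordinates.
        """
        A = []
        for (x, y), (u, v) in zip(src, dst):
            A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        A = np.asarray(A)
        # The homography is the null-space direction of A (smallest singular vector).
        _, _, vt = np.linalg.svd(A)
        H = vt[-1].reshape(3, 3)
        return H / H[2, 2]

    def warp_points(H, pts):
        """Apply a homography to (N, 2) points."""
        pts_h = np.hstack([pts, np.ones((len(pts), 1))])
        mapped = pts_h @ H.T
        return mapped[:, :2] / mapped[:, 2:3]

    # Illustrative use: map detected corners onto the nominal card grid.
    detected = np.array([[10.2, 12.1], [98.7, 15.3], [95.1, 101.8], [8.4, 99.0]])
    nominal = np.array([[0, 0], [90, 0], [90, 90], [0, 90]], dtype=float)
    H = fit_homography(detected, nominal)
    print(np.round(warp_points(H, detected), 2))   # reproduces the nominal grid
    ```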

  7. Error Budget for a Calibration Demonstration System for the Reflected Solar Instrument for the Climate Absolute Radiance and Refractivity Observatory

    NASA Technical Reports Server (NTRS)

    Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan

    2013-01-01

    A goal of the Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission is to observe high-accuracy, long-term climate change trends over decadal time scales. The key to such a goal is improving the accuracy of SI-traceable absolute calibration across infrared and reflected solar wavelengths, allowing climate change to be separated from the limit of natural variability. The advances required to reach the on-orbit absolute accuracy that allows climate change observations to survive data gaps exist at NIST in the laboratory, but it still needs to be demonstrated that these advances can be transferred successfully to NASA and/or instrument-vendor capabilities for spaceborne instruments. The current work describes the radiometric calibration error budget for the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. The goal of the CDS is to allow the testing and evaluation of calibration approaches, alternate design and/or implementation approaches, and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. The resulting SI-traceable error budget for reflectance retrieval using solar irradiance as a reference and methods for laboratory-based, absolute calibration suitable for climate-quality data collections are given. Key components in the error budget are geometry differences between the solar and earth views, knowledge of attenuator behavior when viewing the sun, and sensor behavior such as detector linearity and noise behavior. Methods for demonstrating this error budget are also presented.
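
    To make the budgeting arithmetic concrete, uncorrelated uncertainty components such as those named above are commonly rolled up in quadrature (root sum of squares). The component names and values in this sketch are placeholders, not CLARREO numbers.

    ```python
    import math

    # Hypothetical standard-uncertainty components, in percent (k=1); illustrative only.
    components = {
        "solar/earth view geometry": 0.10,
        "attenuator knowledge":      0.12,
        "detector linearity":        0.05,
        "detector noise":            0.03,
    }

    # Uncorrelated components combine in quadrature (root-sum-square).
    combined = math.sqrt(sum(u**2 for u in components.values()))
    expanded = 2.0 * combined  # k=2 expanded uncertainty

    print(f"combined (k=1): {combined:.3f} %,  expanded (k=2): {expanded:.3f} %")
    ```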

  8. Calibrated photostimulated luminescence is an effective approach to identify irradiated orange during storage

    NASA Astrophysics Data System (ADS)

    Jo, Yunhee; Sanyal, Bhaskar; Chung, Namhyeok; Lee, Hyun-Gyu; Park, Yunji; Park, Hae-Jun; Kwon, Joong-Ho

    2015-06-01

    Photostimulated luminescence (PSL) has been employed as a fast screening method for various irradiated foods. In this study the potential of PSL was evaluated for identifying oranges irradiated with gamma rays, electron beam, and X-rays (0-2 kGy) and stored under different conditions for 6 weeks. The effects of light conditions (natural light, artificial light, and dark) and storage temperatures (4 and 20 °C) on PSL photon counts (PCs) during post-irradiation periods were studied. Non-irradiated samples always showed negative values of PCs, while irradiated oranges exhibited intermediate results in the first PSL measurements; the irradiated samples nevertheless had much higher PCs. The PCs of all the samples declined as the storage time increased. Calibrated second PSL measurements showed a PSL ratio <10 for the irradiated samples after 3 weeks of irradiation, confirming their irradiation status under all the storage conditions. Calibrated PSL and sample storage in the dark at 4 °C were found to be the most suitable approaches for identifying irradiated oranges during storage.

  9. Apparatus and method for monitoring the intensities of charged particle beams

    DOEpatents

    Varma, Matesh N.; Baum, John W.

    1982-11-02

    Charged particle beam monitoring means (40) are disposed in the path of a charged particle beam (44) in an experimental device (10). The monitoring means comprise a beam monitoring component (42) which is operable to prevent passage of a portion of beam (44), while concomitantly permitting passage of another portion thereof (46) for incidence in an experimental chamber (18), and providing a signal (I.sub.m) indicative of the intensity of the beam portion which is not passed. Calibration means (36) are disposed in the experimental chamber in the path of the said another beam portion and are operable to provide a signal (I.sub.f) indicative of the intensity thereof. Means (41 and 43) are provided to determine the ratio (R) between said signals whereby, after suitable calibration, the calibration means may be removed from the experimental chamber and the intensity of the said another beam portion determined by monitoring of the monitoring means signal, per se.

  10. Certification and uncertainty evaluation of the certified reference materials of poly(ethylene glycol) for molecular mass fractions by using supercritical fluid chromatography.

    PubMed

    Takahashi, Kayori; Kishine, Kana; Matsuyama, Shigetomo; Saito, Takeshi; Kato, Haruhisa; Kinugasa, Shinichi

    2008-07-01

    Poly(ethylene glycol) (PEG) is a useful water-soluble polymer that has attracted considerable interest in medical and biological science applications as well as in polymer physics. Through the use of a well-calibrated evaporative light-scattering detector coupled with high-performance supercritical fluid chromatography, we are able to determine exactly not only the average mass but also all of the molecular mass fractions of PEG samples needed for certified reference materials issued by the National Metrology Institute of Japan. In addition, the experimental uncertainty was determined in accordance with the Guide to the expression of uncertainty in measurement (GUM). This reference material can be used to calibrate measuring instruments, to control measurement precision, and to confirm the validity of measurement methods when determining molecular mass distributions and average molecular masses. In particular, it is suitable for calibration against both masses and intensities in matrix-assisted laser desorption/ionization time-of-flight mass spectrometry.

  11. Bore-sight calibration of the profile laser scanner using a large size exterior calibration field

    NASA Astrophysics Data System (ADS)

    Koska, Bronislav; Křemen, Tomáš; Štroner, Martin

    2014-10-01

    The bore-sight calibration procedure and results for a profile laser scanner using a large-size exterior calibration field are presented in the paper. The task is part of the Autonomous Mapping Airship (AMA) project, which aims to create a surveying system with specific properties suitable for effective surveying of medium-sized areas (units to tens of square kilometers per day). As the project name suggests, an airship is used as the carrier. This vehicle has some specific properties. The most important are a high carrying capacity (15 kg), long flight time (3 hours), high operating safety, and special flight characteristics such as flight stability (in terms of vibrations) and the possibility of flying at low speed. The high carrying capacity enables the use of high-quality sensors such as a professional infrared (IR) camera (FLIR SC645), a high-end visible-spectrum (VIS) digital camera and optics, a tactical-grade INS/GPS sensor (iMAR iTracerRT-F200), and a profile laser scanner (SICK LD-LRS1000). The calibration method is based on direct laboratory measurement of the coordinate offsets (lever arm) and in-flight determination of the rotation offsets (bore-sights). The bore-sight determination is based on minimizing the squared distances of individual points from measured planar surfaces.
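
    A minimal sketch of that kind of estimation follows: three bore-sight angles are recovered by non-linear least squares on point-to-plane distances of georeferenced scan points. The planes, trajectory, and point data are synthetic stand-ins and the lever arm is ignored; this is not the AMA processing chain.

    ```python
    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation as R

    rng = np.random.default_rng(0)

    # --- Synthetic stand-in data (illustrative only) ---
    true_bore = np.array([0.5, -0.3, 1.2])               # deg, to be recovered
    n_pts = 200
    planes = [(np.array([0.0, 0.0, 1.0]), 0.0),          # ground plane z = 0
              (np.array([1.0, 0.0, 0.0]), 50.0)]         # wall x = 50
    plane_id = rng.integers(0, 2, n_pts)
    R_nav = R.random(n_pts, random_state=1).as_matrix()  # INS/GPS attitudes
    t_nav = rng.uniform(-20, 20, (n_pts, 3))             # INS/GPS positions

    # Build scanner-frame points that lie exactly on the assigned planes.
    R_true = R.from_euler("xyz", true_bore, degrees=True).as_matrix()
    p_scan = np.empty((n_pts, 3))
    for i in range(n_pts):
        n, d = planes[plane_id[i]]
        w = rng.uniform(-30, 30, 3)
        w += (d - w @ n) * n                             # project onto the plane
        p_scan[i] = R_true.T @ R_nav[i].T @ (w - t_nav[i])

    def residuals(bore_deg):
        """Signed point-to-plane distances for a candidate bore-sight rotation."""
        R_bore = R.from_euler("xyz", bore_deg, degrees=True).as_matrix()
        p_world = np.einsum("nij,nj->ni", R_nav, p_scan @ R_bore.T) + t_nav
        normals = np.array([planes[k][0] for k in plane_id])
        offsets = np.array([planes[k][1] for k in plane_id])
        return np.sum(p_world * normals, axis=1) - offsets

    bore_hat = least_squares(residuals, x0=np.zeros(3)).x
    print("recovered bore-sight angles [deg]:", np.round(bore_hat, 3))
    ```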

  12. Powder X-ray diffraction method for the quantification of cocrystals in the crystallization mixture.

    PubMed

    Padrela, Luis; de Azevedo, Edmundo Gomes; Velaga, Sitaram P

    2012-08-01

    The solid state purity of cocrystals critically affects their performance. Thus, it is important to accurately quantify the purity of cocrystals in the final crystallization product. The aim of this study was to develop a powder X-ray diffraction (PXRD) quantification method for investigating the purity of cocrystals. The method developed was employed to study the formation of indomethacin-saccharin (IND-SAC) cocrystals by mechanochemical methods. Pure IND-SAC cocrystals were geometrically mixed with a 1:1 (w/w) mixture of indomethacin/saccharin in various proportions. An accurately measured amount (550 mg) of the mixture was used for the PXRD measurements. The most intense, non-overlapping, characteristic diffraction peak of IND-SAC was used to construct the calibration curve in the range 0-100% (w/w). This calibration model was validated and used to monitor the formation of IND-SAC cocrystals by liquid-assisted grinding (LAG). The IND-SAC cocrystal calibration curve showed excellent linearity (R(2) = 0.9996) over the entire concentration range, displaying limit of detection (LOD) and limit of quantification (LOQ) values of 1.23% (w/w) and 3.74% (w/w), respectively. Validation results showed excellent correlations between actual and predicted concentrations of IND-SAC cocrystals (R(2) = 0.9981). The accuracy and reliability of the PXRD quantification method depend on the methods of sample preparation and handling. The crystallinity of the IND-SAC cocrystals was higher when larger amounts of methanol were used in the LAG method. The PXRD quantification method is suitable and reliable for verifying the purity of cocrystals in the final crystallization product.
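
    As a generic illustration of how such figures of merit are obtained, the sketch below fits a straight calibration line to peak intensity versus cocrystal content and derives LOD and LOQ from the residual standard deviation (the common 3.3*sigma/slope and 10*sigma/slope estimates). The intensity values are invented, not the paper's data.

    ```python
    import numpy as np

    # Illustrative data: % (w/w) cocrystal content vs. integrated peak intensity (a.u.).
    conc = np.array([0, 10, 25, 50, 75, 100], dtype=float)
    intensity = np.array([0.8, 108.0, 262.0, 515.0, 770.0, 1019.0])

    # Ordinary least-squares calibration line.
    slope, intercept = np.polyfit(conc, intensity, 1)
    resid = intensity - (slope * conc + intercept)
    s_resid = np.sqrt(np.sum(resid**2) / (len(conc) - 2))   # residual standard deviation
    r2 = 1 - np.sum(resid**2) / np.sum((intensity - intensity.mean())**2)

    # ICH-style detection/quantification limits from the calibration residuals.
    lod = 3.3 * s_resid / slope
    loq = 10.0 * s_resid / slope

    print(f"slope={slope:.2f}, R^2={r2:.4f}, LOD={lod:.2f} % w/w, LOQ={loq:.2f} % w/w")
    ```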

  13. Strategy for the absolute neutron emission measurement on ITER.

    PubMed

    Sasao, M; Bertalot, L; Ishikawa, M; Popovichev, S

    2010-10-01

    An accuracy of 10% is demanded of the absolute fusion measurement on ITER. To achieve this accuracy, a functional combination of several types of neutron measurement subsystem, cross-calibration among them, and in situ calibration are needed. Neutron transport calculations show that a suitable calibration source is a DT/DD neutron generator with a source strength higher than 10^10 n/s (neutrons per second) for DT and 10^8 n/s for DD. It will take at least eight weeks with this source to calibrate the flux monitors, profile monitors, and the activation system.

  14. Detection of tire tread particles using laser-induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Prochazka, David; Bilík, Martin; Prochazková, Petra; Klus, Jakub; Pořízka, Pavel; Novotný, Jan; Novotný, Karel; Ticová, Barbora; Bradáč, Albert; Semela, Marek; Kaiser, Jozef

    2015-06-01

    The objective of this paper is to study the potential of laser-induced breakdown spectroscopy (LIBS) for the detection of tire tread particles. Tire tread particles may represent pollutants; at the same time, their detection can potentially be exploited to identify optically imperceptible braking tracks at road accident sites. The paper describes the general composition of tire treads and the selection of an element suitable for detection using the LIBS method. Subsequently, the applicable spectral line is selected considering interferences with lines of elements that might be present together with the detected particles, and measurement parameters such as incident laser energy, gate delay, and gate width are optimized. In order to eliminate the matrix effect, measurements were performed using 4 types of tires manufactured by 3 different producers. An adhesive tape was used as a sample carrier. The most suitable adhesive tape was selected from 5 commonly available tapes on the basis of their respective LIBS spectra. Calibration standards, i.e. adhesive tapes with different area contents of tire tread particles, were prepared for the selected tire. A calibration line was created on the basis of the aforementioned calibration standards. The linear section of this line was used to determine the detection limit for the selected tire. Considering the insignificant influence of the matrix of the various tire types, the detection limit can be simply rescaled on the basis of the zinc content of a specific tire.

  15. Direct injection GC method for measuring light hydrocarbon emissions from cooling-tower water.

    PubMed

    Lee, Max M; Logan, Tim D; Sun, Kefu; Hurley, N Spencer; Swatloski, Robert A; Gluck, Steve J

    2003-12-15

    A Direct Injection GC method for quantifying low levels of light hydrocarbons (C6 and below) in cooling water has been developed. It is intended to overcome the limitations of the currently available technology. The principle of this method is to use a stripper column in a GC to strip water from the hydrocarbons prior to entering the separation column. No sample preparation is required since the water sample is introduced directly into the GC. Method validation indicates that the Direct Injection GC method offers an analysis time of approximately 15 min with excellent precision and recovery. The calibration studies with ethylene and propylene show that both liquid and gas standards are suitable for routine calibration and calibration verification. The sampling method using zero-headspace traditional VOA (Volatile Organic Analysis) vials and a sample chiller has also been validated. It is apparent that the sampling method is sufficient to minimize the potential for losses of light hydrocarbons, and samples can be held at 4 degrees C for up to 7 days with more than 93% recovery. The Direct Injection GC method also offers <1 ppb (w/v) level method detection limits for ethylene, propylene, and benzene. It is superior to the existing El Paso stripper method. In addition to lower detection limits for ethylene and propylene, the Direct Injection GC method quantifies individual light hydrocarbons in cooling water, provides better recoveries, and requires less maintenance and lower setup costs. Since the instrumentation and supplies are readily available, this technique could easily be established as a standard or alternative method for routine emission monitoring and leak detection of light hydrocarbons in cooling-tower water.

  16. Droplet sizing instrumentation used for icing research: Operation, calibration, and accuracy

    NASA Technical Reports Server (NTRS)

    Hovenac, Edward A.

    1989-01-01

    The accuracy of the Forward Scattering Spectrometer Probe (FSSP) is determined using laboratory tests, wind tunnel comparisons, and computer simulations. Operation in an icing environment is discussed and a new calibration device for the FSSP (the rotating pinhole) is demonstrated to be a valuable tool. Operation of the Optical Array Probe is also presented along with a calibration device (the rotating reticle) which is suitable for performing detailed analysis of that instrument.

  17. Effects of experimental design on calibration curve precision in routine analysis

    PubMed Central

    Pimentel, Maria Fernanda; Neto, Benício de Barros; Saldanha, Teresa Cristina B.

    1998-01-01

    A computational program which compares the efficiencies of different experimental designs with those of maximum precision (D-optimized designs) is described. The program produces confidence interval plots for a calibration curve and provides information about the number of standard solutions, concentration levels and suitable concentration ranges to achieve an optimum calibration. Some examples of the application of this novel computational program are given, using both simulated and real data. PMID:18924816
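
    The confidence band such a program plots for a straight-line calibration can be computed from the standard error of the fitted mean response; a short sketch with simulated standards is given below (the design values are arbitrary, not taken from the paper).

    ```python
    import numpy as np
    from scipy import stats

    # Simulated calibration standards: concentration (x) and instrument response (y).
    rng = np.random.default_rng(1)
    x = np.repeat([1.0, 2.0, 4.0, 6.0, 8.0, 10.0], 2)        # duplicate standards
    y = 0.05 + 0.90 * x + rng.normal(0, 0.05, x.size)

    n = x.size
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    s = np.sqrt(np.sum(resid**2) / (n - 2))                  # residual std. deviation
    Sxx = np.sum((x - x.mean())**2)
    t_crit = stats.t.ppf(0.975, df=n - 2)                    # 95% two-sided

    # Confidence band for the fitted calibration line over the working range.
    x0 = np.linspace(x.min(), x.max(), 50)
    half_width = t_crit * s * np.sqrt(1.0 / n + (x0 - x.mean())**2 / Sxx)

    print("max CI half-width:", half_width.max().round(4),
          "at x =", x0[np.argmax(half_width)].round(2))      # widest at the extremes
    ```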

  18. Neural networks for calibration tomography

    NASA Technical Reports Server (NTRS)

    Decker, Arthur

    1993-01-01

    Artificial neural networks are suitable for performing pattern-to-pattern calibrations. These calibrations are potentially useful for facilities operations in aeronautics, the control of optical alignment, and the like. Computed tomography is compared with neural net calibration tomography for estimating density from its x-ray transform. X-ray transforms are measured, for example, in diffuse-illumination, holographic interferometry of fluids. Computed tomography and neural net calibration tomography are shown to have comparable performance for a 10 degree viewing cone and 29 interferograms within that cone. The system of tomography discussed is proposed as a relevant test of neural networks and other parallel processors intended for using flow visualization data.

  19. A simplified gross primary production and evapotranspiration model for boreal coniferous forests - is a generic calibration sufficient?

    NASA Astrophysics Data System (ADS)

    Minunno, F.; Peltoniemi, M.; Launiainen, S.; Aurela, M.; Lindroth, A.; Lohila, A.; Mammarella, I.; Minkkinen, K.; Mäkelä, A.

    2015-07-01

    The problem of model complexity has been lively debated in environmental sciences as well as in the forest modelling community. Simple models are less input demanding and their calibration involves a lower number of parameters, but they might be suitable only at the local scale. In this work we calibrated a simplified ecosystem process model (PRELES) to data from multiple sites and tested whether PRELES can be used at the regional scale to estimate the carbon and water fluxes of boreal conifer forests. We compared a multi-site (M-S) calibration with site-specific (S-S) calibrations. Model calibrations and evaluations were carried out by means of Bayesian methods; Bayesian calibration (BC) and Bayesian model comparison (BMC) were used to quantify the uncertainty in model parameters and model structure. To evaluate model performance, BMC results were combined with a more classical analysis of model-data mismatch (M-DM). Evapotranspiration (ET) and gross primary production (GPP) measurements collected at 10 sites in Finland and Sweden were used in the study. Calibration results showed that similar estimates were obtained for the parameters to which model outputs are most sensitive. No significant differences were encountered in the predictions of the multi-site and site-specific versions of PRELES, with the exception of a site with agricultural history (Alkkia). Although PRELES predicted GPP better than evapotranspiration, we concluded that the model can be reliably used at the regional scale to simulate the carbon and water fluxes of boreal forests. Our analyses also underlined the importance of using long and carefully collected flux datasets in model calibration. In fact, even a single site can provide model calibrations that can be applied at a wider spatial scale, since it covers a wide range of variability in climatic conditions.
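
    PRELES itself is not reproduced here, but the Bayesian calibration step can be illustrated generically: draw posterior samples of a toy light-response model's parameters against synthetic GPP data with a random-walk Metropolis sampler. The model form, priors, proposal steps, and data below are illustrative assumptions only.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Toy "model": GPP as a saturating function of light, gpp = a * par / (par + b).
    par_obs = np.linspace(50, 800, 60)                     # photosynthetically active radiation
    true_a, true_b, sigma = 8.0, 300.0, 0.4
    gpp_obs = true_a * par_obs / (par_obs + true_b) + rng.normal(0, sigma, par_obs.size)

    def log_posterior(theta):
        a, b = theta
        if a <= 0 or b <= 0:                               # flat priors on (0, inf)
            return -np.inf
        pred = a * par_obs / (par_obs + b)
        return -0.5 * np.sum((gpp_obs - pred)**2) / sigma**2

    # Random-walk Metropolis sampler.
    n_iter, step = 20000, np.array([0.15, 15.0])
    chain = np.empty((n_iter, 2))
    theta = np.array([5.0, 200.0])
    lp = log_posterior(theta)
    for i in range(n_iter):
        prop = theta + rng.normal(0, step)
        lp_prop = log_posterior(prop)
        if np.log(rng.uniform()) < lp_prop - lp:           # accept/reject
            theta, lp = prop, lp_prop
        chain[i] = theta

    posterior = chain[5000:]                               # discard burn-in
    print("posterior means:", posterior.mean(axis=0).round(2),
          " posterior std:", posterior.std(axis=0).round(2))
    ```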

  20. Influence of Primary Gage Sensitivities on the Convergence of Balance Load Iterations

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred

    2012-01-01

    The connection between the convergence of wind tunnel balance load iterations and the existence of the primary gage sensitivities of a balance is discussed. First, basic elements of the two load iteration equations that the iterative method uses in combination with results of a calibration data analysis for the prediction of balance loads are reviewed. Then, the connection between the primary gage sensitivities, the load format, the gage output format, and the convergence characteristics of the load iteration equation choices is investigated. A new criterion is also introduced that may be used to objectively determine if the primary gage sensitivity of a balance gage exists. Then, it is shown that both load iteration equations will converge as long as a suitable regression model is used for the analysis of the balance calibration data, the combined influence of non-linear terms of the regression model is very small, and the primary gage sensitivities of all balance gages exist. The last requirement is fulfilled, e.g., if force balance calibration data is analyzed in force balance format. Finally, it is demonstrated that only one of the two load iteration equation choices, i.e., the iteration equation used by the primary load iteration method, converges if one or more primary gage sensitivities are missing. This situation may occur, e.g., if force balance calibration data is analyzed in direct read format using the original gage outputs. Data from the calibration of a six-component force balance is used to illustrate the connection between the convergence of the load iteration equation choices and the existence of the primary gage sensitivities.
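
    To illustrate why well-conditioned primary sensitivities matter, the sketch below runs a deliberately simplified fixed-point load iteration in which gage outputs are a linear term in the loads plus small non-linear regressors, and the update inverts only the linear (primary-sensitivity) block. The matrices are invented and this is not the NASA iteration scheme itself.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 3                                             # three load components / gages

    # Illustrative regression model: outputs R = C1 @ F + C2 @ q(F),
    # where q(F) collects quadratic terms and C1 holds the primary gage sensitivities.
    C1 = np.array([[2.0, 0.1, 0.0],
                   [0.05, 1.8, 0.1],
                   [0.0, 0.1, 2.2]])
    C2 = 1e-4 * rng.normal(size=(n, 6))

    def q(F):
        """Quadratic regressors: squares and cross-products of the loads."""
        f1, f2, f3 = F
        return np.array([f1*f1, f2*f2, f3*f3, f1*f2, f1*f3, f2*f3])

    F_true = np.array([120.0, -45.0, 80.0])
    R = C1 @ F_true + C2 @ q(F_true)                  # simulated gage outputs

    # Fixed-point iteration: F_{k+1} = C1^{-1} (R - C2 q(F_k)).
    # It converges here because C1 is well conditioned (primary sensitivities exist)
    # and the non-linear contribution is small.
    F = np.linalg.solve(C1, R)                        # first step ignores non-linear terms
    for k in range(10):
        F_new = np.linalg.solve(C1, R - C2 @ q(F))
        if np.max(np.abs(F_new - F)) < 1e-9:
            break
        F = F_new

    print(f"converged after {k+1} iterations, max load error {np.max(np.abs(F - F_true)):.2e}")
    ```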

  1. [Validation of an in-house method for the determination of zinc in serum: Meeting the requirements of ISO 17025].

    PubMed

    Llorente Ballesteros, M T; Navarro Serrano, I; López Colón, J L

    2015-01-01

    The aim of this report is to propose a scheme for validation of an analytical technique according to ISO 17025. According to ISO 17025, the fundamental parameters tested were: selectivity, calibration model, precision, accuracy, uncertainty of measurement, and analytical interference. A protocol has been developed that has been applied successfully to quantify zinc in serum by atomic absorption spectrometry. It is demonstrated that our method is selective, linear, accurate, and precise, making it suitable for use in routine diagnostics. Copyright © 2015 SECA. Published by Elsevier Espana. All rights reserved.

  2. Solid matrix transformation and tracer addition using molten ammonium bifluoride salt as a sample preparation method for laser ablation inductively coupled plasma mass spectrometry.

    PubMed

    Grate, Jay W; Gonzalez, Jhanis J; O'Hara, Matthew J; Kellogg, Cynthia M; Morrison, Samuel S; Koppenaal, David W; Chan, George C-Y; Mao, Xianglei; Zorba, Vassilia; Russo, Richard E

    2017-09-08

    Solid sampling and analysis methods, such as laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS), are challenged by matrix effects and calibration difficulties. Matrix-matched standards for external calibration are seldom available and it is difficult to distribute spikes evenly into a solid matrix as internal standards. While isotopic ratios of the same element can be measured to high precision, matrix-dependent effects in the sampling and analysis process frustrate accurate quantification and elemental ratio determinations. Here we introduce a potentially general solid matrix transformation approach entailing chemical reactions in molten ammonium bifluoride (ABF) salt that enables the introduction of spikes as tracers or internal standards. Proof of principle experiments show that the decomposition of uranium ore in sealed PFA fluoropolymer vials at 230 °C yields, after cooling, new solids suitable for direct solid sampling by LA. When spikes are included in the molten salt reaction, subsequent LA-ICP-MS sampling at several spots indicate that the spikes are evenly distributed, and that U-235 tracer dramatically improves reproducibility in U-238 analysis. Precisions improved from 17% relative standard deviation for U-238 signals to 0.1% for the ratio of sample U-238 to spiked U-235, a factor of over two orders of magnitude. These results introduce the concept of solid matrix transformation (SMT) using ABF, and provide proof of principle for a new method of incorporating internal standards into a solid for LA-ICP-MS. This new approach, SMT-LA-ICP-MS, provides opportunities to improve calibration and quantification in solids based analysis. Looking forward, tracer addition to transformed solids opens up LA-based methods to analytical methodologies such as standard addition, isotope dilution, preparation of matrix-matched solid standards, external calibration, and monitoring instrument drift against external calibration standards.

  3. (Mis)use of (133)Ba as a calibration surrogate for (131)I in clinical activity calibrators.

    PubMed

    Zimmerman, B E; Bergeron, D E

    2016-03-01

    Using NIST-calibrated solutions of (133)Ba and (131)I in the 5 mL NIST ampoule geometry, measurements were made in three NIST-maintained Capintec activity calibrators and the NIST Vinten 671 ionization chamber to evaluate the suitability of using (133)Ba as a calibration surrogate for (131)I. For the Capintec calibrators, the (133)Ba response was a factor of about 300% higher than that of the same amount of (131)I. For the Vinten 671, the (133)Ba response was about 7% higher than that of (131)I. These results demonstrate that (133)Ba is a poor surrogate for (131)I. New calibration factors for these radionuclides in the ampoule geometry for the Vinten 671 and Capintec activity calibrators were also determined. Published by Elsevier Ltd.

  4. Numerical emulation of Thru-Reflection-Line calibration for the de-embedding of Surface Acoustic Wave devices.

    PubMed

    Mencarelli, D; Djafari-Rouhani, B; Pennec, Y; Pitanti, A; Zanotto, S; Stocchi, M; Pierantoni, L

    2018-06-18

    In this contribution, a rigorous numerical calibration is proposed to characterize the excitation of propagating mechanical waves by interdigitated transducers (IDTs). The transition from IDT terminals to phonon waveguides is modeled by means of a general circuit representation that makes use of the Scattering Matrix (SM) formalism. In particular, the three-step calibration approach called Thru-Reflection-Line (TRL), which is a well-established technique in microwave engineering, has been successfully applied to emulate typical experimental conditions. The proposed procedure is suitable for the synthesis/optimization of surface-acoustic-wave (SAW) based devices: the TRL calibration allows the acoustic component, namely a resonator or filter, to be extracted/de-embedded from the outer IDT structure, regardless of the complexity and size of the latter. We report, as a result, the hybrid scattering parameters of the IDT transition to a mechanical waveguide formed by a phononic crystal patterned on a piezoelectric AlN membrane, where the effect of a discontinuity from a periodic to a uniform mechanical waveguide is also characterized. In addition, to ensure the correctness of our numerical calculations, the proposed method has been validated by independent calculations.
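
    Once a TRL (or similar) calibration has characterized the two error boxes, de-embedding is typically carried out by cascading with their inverses in transfer (T) scattering parameters. The sketch below shows only that last step for generic two-ports, with placeholder error boxes used to check the round trip; the TRL extraction of the error boxes itself is not reproduced.

    ```python
    import numpy as np

    def s_to_t(S):
        """2-port S-parameters -> transfer (chain) scattering matrix T,
        with the convention [b1, a1]^T = T [a2, b2]^T."""
        s11, s12, s21, s22 = S[0, 0], S[0, 1], S[1, 0], S[1, 1]
        det = s11 * s22 - s12 * s21
        return (1.0 / s21) * np.array([[-det, s11],
                                       [-s22, 1.0]])

    def t_to_s(T):
        """Inverse conversion back to S-parameters."""
        t11, t12, t21, t22 = T[0, 0], T[0, 1], T[1, 0], T[1, 1]
        det = t11 * t22 - t12 * t21
        return np.array([[t12 / t22, det / t22],
                         [1.0 / t22, -t21 / t22]])

    def deembed(S_meas, S_errA, S_errB):
        """Remove input/output error boxes (e.g. the IDT transitions):
        T_meas = T_A T_dut T_B  =>  T_dut = inv(T_A) T_meas inv(T_B)."""
        T_dut = np.linalg.inv(s_to_t(S_errA)) @ s_to_t(S_meas) @ np.linalg.inv(s_to_t(S_errB))
        return t_to_s(T_dut)

    # Placeholder error boxes and DUT, used only to check the round trip.
    S_A = np.array([[0.1 + 0.05j, 0.9], [0.9, 0.05 - 0.02j]])
    S_B = np.array([[0.08, 0.92 + 0.03j], [0.92 + 0.03j, 0.1j]])
    S_dut = np.array([[0.2, 0.7j], [0.7j, 0.15]])
    S_meas = t_to_s(s_to_t(S_A) @ s_to_t(S_dut) @ s_to_t(S_B))
    print(np.allclose(deembed(S_meas, S_A, S_B), S_dut))   # True
    ```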

  5. Chromatic aberration correction: an enhancement to the calibration of low-cost digital dermoscopes.

    PubMed

    Wighton, Paul; Lee, Tim K; Lui, Harvey; McLean, David; Atkins, M Stella

    2011-08-01

    We present a method for calibrating low-cost digital dermoscopes that corrects for color and inconsistent lighting and also corrects for chromatic aberration. Chromatic aberration is a form of radial distortion that often occurs in inexpensive digital dermoscopes and creates red and blue halo-like effects on edges. Being radial in nature, distortions due to chromatic aberration are not constant across the image, but rather vary in both magnitude and direction. As a result, distortions are not only visually distracting but could also mislead automated characterization techniques. Two low-cost dermoscopes, based on different consumer-grade cameras, were tested. Color is corrected by imaging a reference and applying singular value decomposition to determine the transformation required to ensure accurate color reproduction. Lighting is corrected by imaging a uniform surface and creating lighting correction maps. Chromatic aberration is corrected using a second-order radial distortion model. Our results for color and lighting calibration are consistent with previously published results, while distortions due to chromatic aberration can be reduced by 42-47% in the two systems considered. The disadvantages of inexpensive dermoscopy can be quickly substantially mitigated with a suitable calibration procedure. © 2011 John Wiley & Sons A/S.
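
    The chromatic-aberration step amounts to resampling the red and blue channels with channel-specific second-order radial models so that they register onto the green channel. A minimal sketch with assumed distortion coefficients is shown below; the real coefficients would come from the calibration target, and this is not the paper's implementation.

    ```python
    import numpy as np
    from scipy.ndimage import map_coordinates

    def correct_radial(channel, k, center=None):
        """Resample one color channel with a second-order radial model:
        a pixel at undistorted radius r is looked up at radius r * (1 + k * r^2)
        in the observed (distorted) channel. k is channel specific (assumed here)."""
        h, w = channel.shape
        cy, cx = center if center is not None else ((h - 1) / 2.0, (w - 1) / 2.0)
        yy, xx = np.mgrid[0:h, 0:w].astype(float)
        dy, dx = yy - cy, xx - cx
        r2 = (dy**2 + dx**2) / max(cy, cx)**2          # normalized squared radius
        scale = 1.0 + k * r2
        coords = np.array([cy + dy * scale, cx + dx * scale])
        return map_coordinates(channel, coords, order=1, mode="nearest")

    # Illustrative use: leave green untouched, re-register red and blue against it.
    rgb = np.random.rand(256, 256, 3)                  # stand-in for a dermoscopic image
    k_red, k_blue = 4e-3, -3e-3                        # assumed, from a prior calibration
    corrected = np.dstack([correct_radial(rgb[..., 0], k_red),
                           rgb[..., 1],
                           correct_radial(rgb[..., 2], k_blue)])
    ```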

  6. Autocalibration method for non-stationary CT bias correction.

    PubMed

    Vegas-Sánchez-Ferrero, Gonzalo; Ledesma-Carbayo, Maria J; Washko, George R; Estépar, Raúl San José

    2018-02-01

    Computed tomography (CT) is a widely used imaging modality for screening and diagnosis. However, the deleterious effects of radiation exposure inherent in CT imaging require the development of image reconstruction methods which can reduce exposure levels. The development of iterative reconstruction techniques is now enabling the acquisition of low-dose CT images whose quality is comparable to that of CT images acquired with much higher radiation dosages. However, the characterization and calibration of the CT signal due to changes in dosage and reconstruction approaches is crucial to provide clinically relevant data. Although CT scanners are calibrated as part of the imaging workflow, the calibration is limited to select global reference values and does not consider other inherent factors of the acquisition that depend on the subject scanned (e.g. photon starvation, partial volume effect, beam hardening) and result in a non-stationary noise response. In this work, we analyze the effect of reconstruction biases caused by non-stationary noise and propose an autocalibration methodology to compensate for it. Our contributions are: 1) the derivation of a functional relationship between observed bias and non-stationary noise, 2) a robust and accurate method to estimate the local variance, 3) an autocalibration methodology that does not necessarily rely on a calibration phantom, attenuates the bias caused by noise, and removes the systematic bias observed in devices from different vendors. The validation of the proposed methodology was performed with a physical phantom and clinical CT scans acquired with different configurations (kernels, doses, algorithms including iterative reconstruction). The results confirmed the suitability of the proposed methods for removing the intra-device and inter-device reconstruction biases. Copyright © 2017 Elsevier B.V. All rights reserved.
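
    The paper's own variance estimator is more elaborate, but the basic ingredient, a local (non-stationary) variance map, can be sketched with a sliding-window moment estimate; the window size and the synthetic slice below are arbitrary choices for illustration.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_variance(img, size=9):
        """Sliding-window variance: Var = E[x^2] - (E[x])^2 over a size x size window."""
        mean = uniform_filter(img.astype(float), size)
        mean_sq = uniform_filter(img.astype(float)**2, size)
        return np.maximum(mean_sq - mean**2, 0.0)      # clip tiny negative round-off

    # Synthetic CT-like slice with spatially varying noise (stronger towards the edges).
    rng = np.random.default_rng(0)
    yy, xx = np.mgrid[-1:1:256j, -1:1:256j]
    sigma_map = 10 + 40 * (xx**2 + yy**2)              # HU, grows radially
    img = -50 + sigma_map * rng.standard_normal((256, 256))

    var_map = local_variance(img, size=11)
    print("centre sigma ~", np.sqrt(var_map[128, 128]).round(1),
          " corner sigma ~", np.sqrt(var_map[5, 5]).round(1))
    ```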

  7. Geometric Characterization of Multi-Axis Multi-Pinhole SPECT

    PubMed Central

    DiFilippo, Frank P.

    2008-01-01

    A geometric model and calibration process are developed for SPECT imaging with multiple pinholes and multiple mechanical axes. Unlike the typical situation where pinhole collimators are mounted directly to rotating gamma ray detectors, this geometric model allows for independent rotation of the detectors and pinholes, for the case where the pinhole collimator is physically detached from the detectors. This geometric model is applied to a prototype small animal SPECT device with a total of 22 pinholes and which uses dual clinical SPECT detectors. All free parameters in the model are estimated from a calibration scan of point sources and without the need for a precision point source phantom. For a full calibration of this device, a scan of four point sources with 360° rotation is suitable for estimating all 95 free parameters of the geometric model. After a full calibration, a rapid calibration scan of two point sources with 180° rotation is suitable for estimating the subset of 22 parameters associated with repositioning the collimation device relative to the detectors. The high accuracy of the calibration process is validated experimentally. Residual differences between predicted and measured coordinates are normally distributed with 0.8 mm full width at half maximum and are estimated to contribute 0.12 mm root mean square to the reconstructed spatial resolution. Since this error is small compared to other contributions arising from the pinhole diameter and the detector, the accuracy of the calibration is sufficient for high resolution small animal SPECT imaging. PMID:18293574

  8. Traceable Co-C eutectic points for thermocouple calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jahan, F.; Ballico, M. J.

    2013-09-11

    National Measurement Institute of Australia (NMIA) has developed a miniature crucible design suitable for measurement by both thermocouples and radiation thermometry, and has established an ensemble of five Co-C eutectic-point cells based on this design. The cells in this ensemble have been individually calibrated using both ITS-90 radiation thermometry and thermocouples calibrated on the ITS-90 by the NMIA mini-coil methodology. The assigned ITS-90 temperatures obtained using these different techniques are both repeatable and consistent, despite the use of different furnaces and measurement conditions. The results demonstrate that, if individually calibrated, such cells can be practically used as part of a national traceability scheme for thermocouple calibration, providing a useful intermediate calibration point between Cu and Pd.

  9. LIBS coupled with ICP/OES for the spectral analysis of betel leaves

    NASA Astrophysics Data System (ADS)

    Rehan, I.; Rehan, K.; Sultana, S.; Khan, M. Z.; Muhammad, R.

    2018-05-01

    A laser-induced breakdown spectroscopy (LIBS) system was optimized and applied for the elemental analysis and detection of heavy metals in betel leaves in air. A pulsed Nd:YAG laser (1064 nm) in conjunction with a suitable detector (LIBS 2000+, Ocean Optics, Inc.) with an optical resolution of 0.06 nm was used to record the emission spectra from 200 to 720 nm. Elements like Al, Ba, Ca, Cr, Cu, P, Fe, K, Mg, Mn, Na, S, Sr, and Zn were found to be present in the samples. The abundances of the observed elements were calculated through the normalized calibration curve method, the integrated intensity ratio method, and the calibration-free LIBS approach. Quantitative analyses were accomplished under the assumption of local thermodynamic equilibrium (LTE) and optically thin plasma. The LIBS findings were validated by comparing the results with those obtained using the typical analytical technique of inductively coupled plasma-optical emission spectroscopy (ICP/OES). The limit of detection (LOD) of the LIBS system was also estimated for the heavy metals.
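
    Calibration-free LIBS rests on the Boltzmann plot: under LTE, ln(I*lambda/(g*A)) plotted against the upper-level energy of each line falls on a straight line with slope -1/(k_B*T). The line data below are invented solely to show the arithmetic, not taken from the betel-leaf spectra.

    ```python
    import numpy as np

    K_B_EV = 8.617333e-5                               # Boltzmann constant, eV/K

    # Made-up emission lines of one species: wavelength (nm), upper-level energy (eV),
    # transition probability A (s^-1), upper-level degeneracy g, integrated intensity.
    lam = np.array([390.5, 404.6, 422.7, 445.5])
    E_k = np.array([3.17, 3.33, 2.93, 4.68])
    A   = np.array([1.2e7, 8.5e6, 2.2e7, 4.0e6])
    g   = np.array([5, 7, 3, 5])
    I   = np.array([1450.0, 980.0, 3900.0, 62.0])

    # Boltzmann plot: ln(I * lambda / (g * A)) is linear in E_k with slope -1/(k_B * T).
    y = np.log(I * lam / (g * A))
    slope, intercept = np.polyfit(E_k, y, 1)
    T = -1.0 / (K_B_EV * slope)
    print(f"estimated excitation temperature: {T:.0f} K")
    ```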

  10. Simple solution for a complex problem: proanthocyanidins, galloyl glucoses and ellagitannins fit on a single calibration curve in high performance-gel permeation chromatography.

    PubMed

    Stringano, Elisabetta; Gea, An; Salminen, Juha-Pekka; Mueller-Harvey, Irene

    2011-10-28

    This study was undertaken to explore gel permeation chromatography (GPC) for estimating molecular weights of proanthocyanidin fractions isolated from sainfoin (Onobrychis viciifolia). The results were compared with data obtained by thiolytic degradation of the same fractions. Polystyrene, polyethylene glycol and polymethyl methacrylate standards were not suitable for estimating the molecular weights of underivatized proanthocyanidins. Therefore, a novel HPLC-GPC method was developed based on two serially connected PolarGel-L columns using DMF that contained 5% water, 1% acetic acid and 0.15 M LiBr at 0.7 ml/min and 50 °C. This yielded a single calibration curve for galloyl glucoses (trigalloyl glucose, pentagalloyl glucose), ellagitannins (pedunculagin, vescalagin, punicalagin, oenothein B, gemin A), proanthocyanidins (procyanidin B2, cinnamtannin B1), and several other polyphenols (catechin, epicatechin gallate, epigallocatechin gallate, amentoflavone). These GPC-predicted molecular weights represented a considerable advance over previously reported HPLC-GPC methods for underivatized proanthocyanidins. Copyright © 2011 Elsevier B.V. All rights reserved.

  11. Size-of-source Effect in Infrared Thermometers with Direct Reading of Temperature

    NASA Astrophysics Data System (ADS)

    Manoi, A.; Saunders, P.

    2017-07-01

    The size-of-source effect (SSE) for six infrared (IR) thermometers with direct reading of temperature was measured in this work. The alternative direct method for SSE determination, where the aperture size is fixed and the measurement distance is varied, was used in this study. The experimental equivalence between the usual and the alternative direct methods is presented. The magnitudes of the SSE for different types of IR thermometers were investigated. The maxima of the SSE were found to be up to 5 %, 8 %, and 28 % for focusable, closed-focus, and open-focus thermometers, respectively. At 275°C, an SSE of 28 % corresponds to 52°C, indicating the severe effect on the accuracy of this type of IR thermometer. A method to realize the calibration conditions used by the manufacturer, in terms of aperture size and measurement distance, is discussed and validated by experimental results. This study would be of benefit to users in choosing the best IR thermometer to match their work and for calibration laboratories in selecting the technique most suitable for determining the SSE.

  12. Partial synthesis of ganglioside and lysoganglioside lipoforms as internal standards for MS quantification.

    PubMed

    Gantner, Martin; Schwarzmann, Günter; Sandhoff, Konrad; Kolter, Thomas

    2014-12-01

    Within recent years, ganglioside patterns have been increasingly analyzed by MS. However, internal standards for calibration are only available for gangliosides GM1, GM2, and GM3. For this reason, we prepared homologous internal standards bearing nonnatural fatty acids of the major mammalian brain gangliosides GM1, GD1a, GD1b, GT1b, and GQ1b, and of the tumor-associated gangliosides GM2 and GD2. The fatty acid moieties were incorporated after selective chemical or enzymatic deacylation of bovine brain gangliosides. For modification of the sphingoid bases, we developed a new synthetic method based on olefin cross metathesis. This method was used for the preparation of a lyso-GM1 and a lyso-GM2 standard. The total yield of this method was 8.7% for the synthesis of d17:1-lyso-GM1 from d20:1/18:0-GM1 in four steps. The title compounds are currently used as calibration substances for MS quantification and are also suitable for functional studies. Copyright © 2014 by the American Society for Biochemistry and Molecular Biology, Inc.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yashchuk, Valeriy V; Conley, Raymond; Anderson, Erik H

    Verification of the reliability of metrology data from high quality x-ray optics requires that adequate methods for test and calibration of the instruments be developed. For such verification for optical surface profilometers in the spatial frequency domain, a modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) gratings and arrays has been suggested [Proc. SPIE 7077-7 (2007), Opt. Eng. 47(7), 073602-1-5 (2008)] and proven to be an effective calibration method for a number of interferometric microscopes, a phase shifting Fizeau interferometer, and a scatterometer [Nucl. Instr. and Meth. A 616, 172-82 (2010)]. Here we describe the details of development of binary pseudo-random multilayer (BPRML) test samples suitable for characterization of scanning (SEM) and transmission (TEM) electron microscopes. We discuss the results of TEM measurements with the BPRML test samples fabricated from a WiSi2/Si multilayer coating with pseudo randomly distributed layers. In particular, we demonstrate that significant information about the metrological reliability of the TEM measurements can be extracted even when the fundamental frequency of the BPRML sample is smaller than the Nyquist frequency of the measurements. The measurements demonstrate a number of problems related to the interpretation of the SEM and TEM data. Note that similar BPRML test samples can be used to characterize x-ray microscopes. Corresponding work with x-ray microscopes is in progress.

  14. Calibration and testing of selected portable flowmeters for use on large irrigation systems

    USGS Publications Warehouse

    Luckey, Richard R.; Heimes, Frederick J.; Gaggiani, Neville G.

    1980-01-01

    Existing methods for measuring the discharge of irrigation systems in the High Plains region are not suitable to provide the pumpage data required by the High Plains Regional Aquifer System Analysis. Three portable flowmeters that might be suitable for obtaining fast and accurate discharge measurements on large irrigation systems were tested during 1979 under both laboratory and field conditions: a propeller-type gated-pipe meter, a Doppler meter, and a transient-time meter. The gated-pipe meter was found to be difficult to use and sensitive to particulate matter in the fluid. The Doppler meter, while easy to use, would not function suitably on steel pipe 6 inches or larger in diameter, or on aluminum pipe larger than 8 inches in diameter. The transient-time meter was more difficult to use than the other two meters; however, this instrument provided a high degree of accuracy and reliability under a variety of conditions. Of the three meters tested, only the transient-time meter was found to be suitable for providing reliable discharge measurements on the variety of irrigation systems used in the High Plains region.

  15. Hybrid x-space: a new approach for MPI reconstruction.

    PubMed

    Tateo, A; Iurino, A; Settanni, G; Andrisani, A; Stifanelli, P F; Larizza, P; Mazzia, F; Mininni, R M; Tangaro, S; Bellotti, R

    2016-06-07

    Magnetic particle imaging (MPI) is a new medical imaging technique capable of recovering the distribution of superparamagnetic particles from their measured induced signals. In the literature there are two main MPI reconstruction techniques: measurement-based (MB) and x-space (XS). The MB method is expensive because it requires a long calibration procedure as well as a reconstruction phase that can be numerically costly. On the other hand, the XS method is simpler than MB, but exact knowledge of the field-free point (FFP) motion is essential for its implementation. Our simulation work focuses on the implementation of a new approach for MPI reconstruction, called hybrid x-space (HXS), which combines the two previous methods. Specifically, our approach is based on XS reconstruction because it requires the knowledge of the FFP position and velocity at each time instant. The difference with respect to the original XS formulation is how the FFP velocity is computed: we estimate it from the experimental measurements of the calibration scans, typical of the MB approach. Moreover, a compressive sensing technique is applied in order to reduce the calibration time, setting a smaller number of sampling positions. Simulations highlight that the HXS and XS methods give similar results. Furthermore, an appropriate use of compressive sensing is crucial for obtaining a good balance between time reduction and reconstructed image quality. Our proposal is suitable for open geometry configurations of human-size devices, where incidental factors could make the currents, the fields and the FFP trajectory irregular.
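
    The core x-space operation is velocity normalization followed by gridding: the measured signal is divided by the instantaneous FFP speed and accumulated at the FFP position, with the trajectory taken from calibration scans in the hybrid variant. The 1D toy below uses an analytic trajectory as a stand-in for a calibrated one and an idealized signal model; it only illustrates the normalization and gridding step, not the HXS method.

    ```python
    import numpy as np

    # Toy 1D setup: FFP positions from a "calibration" trajectory (analytic here,
    # standing in for positions estimated from measurement-based calibration scans).
    t = np.linspace(0, 1, 4000, endpoint=False)
    x_ffp = 0.02 * np.sin(2 * np.pi * 25 * t)              # FFP position, m
    v_ffp = np.gradient(x_ffp, t)                          # FFP velocity by finite differences

    # Toy particle distribution and an idealized adiabatic signal s(t) ~ c(x_ffp) * v_ffp.
    x_grid = np.linspace(-0.02, 0.02, 200)
    c_true = np.exp(-((x_grid - 0.005) / 0.003)**2)        # particle concentration (arbitrary)
    s = np.interp(x_ffp, x_grid, c_true) * v_ffp

    # x-space reconstruction: velocity-normalized signal gridded at the FFP position.
    eps = 1e-3 * np.max(np.abs(v_ffp))                     # skip samples near turning points
    valid = np.abs(v_ffp) > eps
    img = np.zeros_like(x_grid)
    hits = np.zeros_like(x_grid)
    idx = np.clip(np.searchsorted(x_grid, x_ffp[valid]), 0, x_grid.size - 1)
    np.add.at(img, idx, s[valid] / v_ffp[valid])
    np.add.at(hits, idx, 1.0)
    img = np.divide(img, hits, out=np.zeros_like(img), where=hits > 0)

    print("recovered peak at x =", x_grid[np.argmax(img)])  # close to 0.005 m in this toy
    ```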

  16. Exploring the Use of Alfven Waves in Magnetometer Calibration at Geosynchronous Orbit

    NASA Technical Reports Server (NTRS)

    Bentley, John; Sheppard, David; RIch, Frederick; Redmon, Robert; Loto'aniu, Paul; Chu, Donald

    2016-01-01

    An Alfven wave is a type of magnetohydrodynamic wave that travels through a conducting fluid under the influence of a magnetic field. Researchers have successfully calculated offset vectors of magnetometers in interplanetary space by optimizing the offset to maximize certain Alfvenic properties of observed waves (Leinweber, Belcher). If suitable Alfven waves can be found in the magnetosphere at geosynchronous altitude, then these techniques could be used to augment the overall calibration plan for magnetometers in this region, such as on the GOES spacecraft, possibly increasing the time between regular maneuvers. Calibration maneuvers may be undesirable because they disrupt the activities of other instruments. Various algorithms to calculate an offset using Alfven waves were considered. A new variation of the Davis-Smith method was derived because it can be mathematically shown that the Davis-Smith method tolerates filtered data, which expands potential applications. The variant developed was designed to find only the offset in the plane normal to the main field, because the overall direction of Earth's magnetic field rarely changes and theory suggests that Alfvenic disturbances occur transverse to the main field. Other variations of the Davis-Smith method encounter problems with data containing waves that propagate in mostly the same direction. A searching algorithm was then designed to look for periods of time with potential Alfven waves in GOES 15 data, based on parameters requiring that disturbances be normal to the main field and not change the field magnitude. Final waves for calculation were hand-selected. These waves produced credible two-dimensional offset vectors when input to the Davis-Smith method. Multiple two-dimensional solutions in different planes can be combined to get a measurement of the complete offset. The resulting three-dimensional offset did not show sufficient precision over several years to be used as a primary calibration method, but it reflected changes in the offset fairly well, suggesting that the method could be helpful in monitoring trends of the offset vector when maneuvers cannot be used.
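
    The Davis-Smith variant itself is not reproduced here, but the underlying principle can be illustrated: Alfvenic fluctuations preserve |B|, so the transverse offset components are the values that make the magnitude of the corrected field most nearly constant. The sketch below does this by direct variance minimization on synthetic data; it is a conceptual illustration, not the derived method, and all field values are invented.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(7)

    # Synthetic Alfvenic interval: transverse rotation about a steady main field, |B| constant.
    n = 800
    phi = np.linspace(0, 3 * np.pi, n) + np.cumsum(rng.normal(0, 0.03, n))
    b_main = np.array([0.3, 0.1, 1.0]) / np.linalg.norm([0.3, 0.1, 1.0])
    e1 = np.cross(b_main, [0.0, 0.0, 1.0]); e1 /= np.linalg.norm(e1)
    e2 = np.cross(b_main, e1)                              # e1, e2 span the transverse plane
    B_true = 90.0 * b_main + 25.0 * (np.cos(phi)[:, None] * e1 + np.sin(phi)[:, None] * e2)
    B_true *= (95.0 / np.linalg.norm(B_true, axis=1))[:, None]   # enforce constant |B|, nT
    offset_true = np.array([3.2, -1.5, 0.8])               # unknown zero-level error, nT
    B_meas = B_true + offset_true + rng.normal(0, 0.2, (n, 3))

    # Estimate only the offset components normal to the mean field; the component along
    # the main field is not constrained by |B| constancy, as noted in the abstract.
    def spread(c):
        offset = c[0] * e1 + c[1] * e2
        return np.var(np.linalg.norm(B_meas - offset, axis=1))

    c_hat = minimize(spread, x0=np.zeros(2), method="Nelder-Mead").x
    offset_hat = c_hat[0] * e1 + c_hat[1] * e2
    print("estimated transverse offset [nT]:", np.round(offset_hat, 2))
    print("true transverse offset      [nT]:",
          np.round(offset_true - (offset_true @ b_main) * b_main, 2))
    ```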

  17. Integrating satellite actual evapotranspiration patterns into distributed model parametrization and evaluation for a mesoscale catchment

    NASA Astrophysics Data System (ADS)

    Demirel, M. C.; Mai, J.; Stisen, S.; Mendiguren González, G.; Koch, J.; Samaniego, L. E.

    2016-12-01

    Distributed hydrologic models are traditionally calibrated and evaluated against observations of streamflow. Spatially distributed remote sensing observations offer a great opportunity to enhance spatial model calibration schemes. For that, it is important to identify the model parameters that can change spatial patterns before the satellite-based hydrologic model calibration. Our study is based on two main pillars: first, we use spatial sensitivity analysis to identify the key parameters controlling the spatial distribution of actual evapotranspiration (AET). Second, we investigate the potential benefits of incorporating spatial patterns from MODIS data to calibrate the mesoscale Hydrologic Model (mHM). This distributed model is selected as it allows for a change in the spatial distribution of key soil parameters through the calibration of pedo-transfer function parameters and includes options for using fully distributed daily Leaf Area Index (LAI) directly as input. In addition, the simulated AET can be estimated at a spatial resolution suitable for comparison with the spatial patterns observed in MODIS data. We introduce a new dynamic scaling function employing remotely sensed vegetation to downscale coarse reference evapotranspiration. In total, 17 of the 47 mHM parameters are identified using both sequential screening and Latin hypercube one-at-a-time sampling methods. The spatial patterns are found to be sensitive to the vegetation parameters, whereas streamflow dynamics are sensitive to the PTF parameters. The results of multi-objective model calibration show that calibrating mHM against observed streamflow does not reduce the spatial errors in AET but improves only the streamflow simulations. We will further examine the results of model calibration using only multiple spatial objective functions that measure the association between observed and simulated AET maps, and another case including spatial and streamflow metrics together.

  18. Applicability of Infrared Photorefraction for Measurement of Accommodation in Awake-Behaving Normal and Strabismic Monkeys

    PubMed Central

    Bossong, Heather; Swann, Michelle; Glasser, Adrian; Das, Vallabh E.

    2010-01-01

    Purpose This study was designed to use infrared photorefraction to measure accommodation in awake-behaving normal and strabismic monkeys and to describe properties of photorefraction calibrations in these monkeys. Methods Ophthalmic trial lenses were used to calibrate the slope of pupil vertical pixel intensity profile measurements that were made with a custom-built infrared photorefractor. Day-to-day variability in photorefraction calibration curves, variability in calibration coefficients due to misalignment of the photorefractor Purkinje image and the center of the pupil, and variability in refractive error due to off-axis measurements were evaluated. Results The linear range of calibration of the photorefractor was found for ophthalmic lenses ranging from –1 D to +4 D. Calibration coefficients were different across the monkeys tested (two strabismic, one normal) but were similar for each monkey over different experimental days. In both normal and strabismic monkeys, small misalignment of the photorefractor Purkinje image with the center of the pupil resulted in only small changes in calibration coefficients that were not statistically significant (P > 0.05). The variability in refractive error due to off-axis measurement was also small in the normal and strabismic monkeys (~1 D to 2 D) as long as the magnitude of misalignment was <10°. Conclusions Remote infrared photorefraction is suitable for measuring accommodation in awake, behaving normal and strabismic monkeys. Specific challenges posed by the strabismic monkeys, such as possible misalignment of the photorefractor Purkinje image and the center of the pupil during either calibration or measurement of accommodation, which may arise due to unsteady fixation or small eye movements including nystagmus, result in only small changes in measured refractive error. PMID:19029024

  19. Comparison of three-way and four-way calibration for the real-time quantitative analysis of drug hydrolysis in complex dynamic samples by excitation-emission matrix fluorescence.

    PubMed

    Yin, Xiao-Li; Gu, Hui-Wen; Liu, Xiao-Lu; Zhang, Shan-Hui; Wu, Hai-Long

    2018-03-05

    Multiway calibration in combination with spectroscopic techniques is an attractive tool for online or real-time monitoring of target analyte(s) in complex samples. However, choosing a suitable multiway calibration method for the resolution of spectroscopic-kinetic data is a challenging problem in practical applications. In this work, for the first time, three-way and four-way fluorescence-kinetic data arrays were generated during the real-time monitoring of the hydrolysis of irinotecan (CPT-11) in human plasma by excitation-emission matrix fluorescence. Alternating normalization-weighted error (ANWE) and alternating penalty trilinear decomposition (APTLD) were used as three-way calibration for the decomposition of the three-way kinetic data array, whereas alternating weighted residual constraint quadrilinear decomposition (AWRCQLD) and alternating penalty quadrilinear decomposition (APQLD) were applied as four-way calibration to the four-way kinetic data array. The quantitative results of the two kinds of calibration models were fully compared from the perspective of predicted real-time concentrations, spiked recoveries of initial concentration, and analytical figures of merit. The comparison study demonstrated that both three-way and four-way calibration models could achieve real-time quantitative analysis of the hydrolysis of CPT-11 in human plasma under certain conditions. However, it was also found that each of them possesses particular advantages and shortcomings in dynamic analysis. The conclusions obtained in this paper can provide helpful guidance for the reasonable selection of multiway calibration models to achieve real-time quantitative analysis of target analyte(s) in complex dynamic systems. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. An Improved Calibration Method for Hydrazine Monitors for the United States Air Force

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korsah, K

    2003-07-07

This report documents the results of Phase 1 of the ''Air Force Hydrazine Detector Characterization and Calibration Project''. A method for calibrating model MDA 7100 hydrazine detectors in the United States Air Force (AF) inventory has been developed. The calibration system consists of a Kintek 491 reference gas generation system, a humidifier/mixer system which combines the dry reference hydrazine gas with humidified diluent or carrier gas to generate the required humidified reference for calibrations, and a gas sampling interface. The Kintek reference gas generation system itself is periodically calibrated using an ORNL-constructed coulometric titration system to verify the hydrazine concentration of the sample atmosphere in the interface module. The Kintek reference gas is then used to calibrate the hydrazine monitors. Thus, coulometric titration is only used to periodically assess the performance of the Kintek reference gas generation system, and is not required for hydrazine monitor calibrations. One advantage of using coulometric titration for verifying the concentration of the reference gas is that it is a primary standard (if used for simple solutions), thereby guaranteeing, in principle, that measurements will be traceable to SI units (i.e., to the mole). The effect of humidity of the reference gas was characterized by using the results of concentrations determined by coulometric titration to develop a humidity correction graph for the Kintek 491 reference gas generation system. Using this calibration method, calibration uncertainty has been reduced by 50% compared to the current method used to calibrate hydrazine monitors in the Air Force inventory and calibration time has also been reduced by more than 20%. Significant findings from studies documented in this report are the following: (1) The Kintek 491 reference gas generation system (generator, humidifier and interface module) can be used to calibrate hydrazine detectors. (2) The Kintek system output concentration is less than the calculated output of the generator alone but can be calibrated as a system by using coulometric titration of gas samples collected with impingers. (3) The calibrated Kintek system output concentration is reproducible even after having been disassembled and moved and reassembled. (4) The uncertainty of the reference gas concentration generated by the Kintek system is less than half the uncertainty of the Zellweger Analytics' (ZA) reference gas concentration and can be easily lowered to one third or less of the ZA method by using lower-uncertainty flow rate or total flow measuring instruments. (5) The largest sources of uncertainty in the current ORNL calibration system are the permeation rate of the permeation tubes and the flow rate of the impinger sampling pump used to collect gas samples for calibrating the Kintek system. Upgrading the measurement equipment, as stated in (4), can reduce both of these. (6) The coulometric titration technique can be used to periodically assess the performance of the Kintek system and determine a suitable recalibration interval. (7) The Kintek system has been used to calibrate two MDA 7100s and an Interscan 4187 in less than one workday. The system can be upgraded (e.g., by automating it) to provide more calibrations per day. (8) The humidity of both the reference gas and the environment of the Chemcassette affect the MDA 7100 hydrazine detector's readings.
However, ORNL believes that the environmental effect is less significant than the effect of the reference gas humidity. (9) The ORNL calibration method based on the Kintek 491 M-B gas standard can correct for the effect of the humidity of the reference gas to produce the same calibration as that of ZA's. Zellweger Analytics calibrations are typically performed at 45%-55% relative humidity. (10) Tests using the Interscan 4187 showed that the instrument was not accurate in its lower (0-100 ppb) range. Subsequent discussions with Kennedy Space Center (KSC) personnel also indicated that the Interscan units were not reproducible when new sensors were used. KSC had discovered that the Interscan units read incorrectly on the low range because of the presence of carbon dioxide. ORNL did not test the carbon dioxide effect, but it was found that the units did not read zero when a test gas containing no hydrazine was sampled. According to the KSC personnel that ORNL had these discussions with, NASA is phasing out the use of these Interscan detectors.« less

  1. Non-convex optimization for self-calibration of direction-dependent effects in radio interferometric imaging

    NASA Astrophysics Data System (ADS)

    Repetti, Audrey; Birdi, Jasleen; Dabbech, Arwa; Wiaux, Yves

    2017-10-01

    Radio interferometric imaging aims to estimate an unknown sky intensity image from degraded observations, acquired through an antenna array. In the theoretical case of a perfectly calibrated array, it has been shown that solving the corresponding imaging problem by iterative algorithms based on convex optimization and compressive sensing theory can be competitive with classical algorithms such as CLEAN. However, in practice, antenna-based gains are unknown and have to be calibrated. Future radio telescopes, such as the Square Kilometre Array, aim at improving imaging resolution and sensitivity by orders of magnitude. At this precision level, the direction-dependency of the gains must be accounted for, and radio interferometric imaging can be understood as a blind deconvolution problem. In this context, the underlying minimization problem is non-convex, and adapted techniques have to be designed. In this work, leveraging recent developments in non-convex optimization, we propose the first joint calibration and imaging method in radio interferometry, with proven convergence guarantees. Our approach, based on a block-coordinate forward-backward algorithm, jointly accounts for visibilities and suitable priors on both the image and the direction-dependent effects (DDEs). As demonstrated in recent works, sparsity remains the prior of choice for the image, while DDEs are modelled as smooth functions of the sky, i.e. spatially band-limited. Finally, we show through simulations the efficiency of our method, for the reconstruction of both images of point sources and complex extended sources. MATLAB code is available on GitHub.
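
    As an illustrative aside (not the authors' implementation), the following NumPy sketch shows the structure of a block-coordinate forward-backward iteration on a toy joint calibration-and-imaging problem: a gradient step followed by soft-thresholding (the sparsity prox) for the image block, and a gradient step followed by a box projection for the gain block. The measurement operator M, the per-measurement gain model and all step sizes are simplified stand-ins for the actual interferometric operator and DDE model.

      import numpy as np

      rng = np.random.default_rng(0)
      n, m = 64, 128
      M = rng.standard_normal((m, n)) / np.sqrt(m)           # stand-in measurement operator
      x_true = np.zeros(n)
      x_true[rng.choice(n, 5, replace=False)] = rng.uniform(1.0, 3.0, 5)   # sparse "sky"
      h_true = 1.0 + 0.1 * rng.standard_normal(m)             # unknown per-measurement gains (DDE stand-in)
      y = h_true * (M @ x_true) + 0.01 * rng.standard_normal(m)

      lam = 0.01                                              # sparsity weight for the image block
      x, h = np.zeros(n), np.ones(m)
      for _ in range(500):
          # image block: gradient step on 0.5*||h*(Mx) - y||^2, then the l1 prox (soft-thresholding)
          r = h * (M @ x) - y
          step_x = 1.0 / (np.linalg.norm(M, 2) ** 2 * np.max(h ** 2) + 1e-12)
          x = x - step_x * (M.T @ (h * r))
          x = np.sign(x) * np.maximum(np.abs(x) - lam * step_x, 0.0)
          # calibration block: gradient step on the gains, then projection onto a box around unity
          v = M @ x
          step_h = 1.0 / (np.max(v ** 2) + 1e-12)
          h = np.clip(h - step_h * ((h * v - y) * v), 0.5, 1.5)

      print("relative image error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
      print("relative gain error :", np.linalg.norm(h - h_true) / np.linalg.norm(h_true))

    The box projection only stands in for the band-limited DDE prior; in this toy setup it mainly breaks the obvious scaling ambiguity between image and gains.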

  2. A simplified gross thrust computing technique for an afterburning turbofan engine

    NASA Technical Reports Server (NTRS)

    Hamer, M. J.; Kurtenbach, F. J.

    1978-01-01

    A simplified gross thrust computing technique extended to the F100-PW-100 afterburning turbofan engine is described. The technique uses measured total and static pressures in the engine tailpipe and ambient static pressure to compute gross thrust. Empirically evaluated calibration factors account for three-dimensional effects, the effects of friction and mass transfer, and the effects of simplifying assumptions for solving the equations. Instrumentation requirements and the sensitivity of computed thrust to transducer errors are presented. NASA altitude facility tests on F100 engines (computed thrust versus measured thrust) are presented, and calibration factors obtained on one engine are shown to be applicable to the second engine by comparing the computed gross thrust. It is concluded that this thrust method is potentially suitable for flight test application and engine maintenance on production engines with a minimum amount of instrumentation.
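
    For orientation only, the sketch below evaluates an ideal, fully expanded convergent-nozzle gross-thrust relation driven by tailpipe total pressure, ambient static pressure and nozzle area, and then applies a single empirical calibration factor. The actual simplified method also uses the measured tailpipe static pressure, and every numerical value here (areas, pressures, the factor K_CAL) is a placeholder, not F100 data.

      import math

      def ideal_gross_thrust(pt_pa, pamb_pa, throat_area_m2, gamma=1.33):
          """Ideal, fully expanded gross thrust of a choked convergent nozzle.

          Depends only on tailpipe total pressure, ambient static pressure and
          throat area; an empirical calibration factor is applied afterwards.
          """
          pr = pamb_pa / pt_pa
          expansion = 1.0 - pr ** ((gamma - 1.0) / gamma)
          coeff = (gamma * math.sqrt(2.0 / (gamma - 1.0))
                   * (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))))
          return coeff * throat_area_m2 * pt_pa * math.sqrt(expansion)

      # Hypothetical operating point: 0.30 m^2 throat, 250 kPa total, 26 kPa ambient.
      K_CAL = 0.96   # empirically evaluated calibration factor (placeholder value)
      fg = K_CAL * ideal_gross_thrust(250e3, 26e3, 0.30)
      print(f"calibrated gross thrust ~ {fg / 1e3:.1f} kN")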

  3. Measurement of non-sugar solids content in Chinese rice wine using near infrared spectroscopy combined with an efficient characteristic variables selection algorithm.

    PubMed

    Ouyang, Qin; Zhao, Jiewen; Chen, Quansheng

    2015-01-01

    The non-sugar solids (NSS) content is one of the most important nutrition indicators of Chinese rice wine. This study proposed a rapid method for the measurement of NSS content in Chinese rice wine using near infrared (NIR) spectroscopy. We also systematically studied efficient spectral variable selection algorithms for building the model. A new algorithm of synergy interval partial least squares with competitive adaptive reweighted sampling (Si-CARS-PLS) was proposed for modeling. The performance of the final model was evaluated using the root mean square error of calibration (RMSEC) and correlation coefficient (Rc) in the calibration set, and similarly tested using the root mean square error of prediction (RMSEP) and correlation coefficient (Rp) in the prediction set. The optimum model by the Si-CARS-PLS algorithm was achieved when 7 PLS factors and 18 variables were included, and the results were as follows: Rc=0.95 and RMSEC=1.12 in the calibration set, Rp=0.95 and RMSEP=1.22 in the prediction set. In addition, the Si-CARS-PLS algorithm showed its superiority when compared with the commonly used algorithms in multivariate calibration. This work demonstrated that the NIR spectroscopy technique combined with a suitable multivariate calibration algorithm has a high potential in rapid measurement of NSS content in Chinese rice wine. Copyright © 2015 Elsevier B.V. All rights reserved.
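
    The sketch below is not Si-CARS-PLS; it only shows, with scikit-learn on synthetic spectra, how the figures of merit reported above (RMSEC and Rc on the calibration set, RMSEP and Rp on the prediction set) are computed for a PLS model with 7 latent variables.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(1)
      n_cal, n_prd, n_wl = 60, 30, 200                   # calibration/prediction samples, "wavelengths"
      B = rng.standard_normal(n_wl)                       # synthetic spectral signature of the analyte
      X_cal = rng.standard_normal((n_cal, n_wl))
      y_cal = X_cal @ B * 0.05 + 10 + 0.2 * rng.standard_normal(n_cal)
      X_prd = rng.standard_normal((n_prd, n_wl))
      y_prd = X_prd @ B * 0.05 + 10 + 0.2 * rng.standard_normal(n_prd)

      pls = PLSRegression(n_components=7).fit(X_cal, y_cal)   # 7 latent variables

      def figures_of_merit(y_true, y_hat):
          rmse = float(np.sqrt(np.mean((y_true - y_hat) ** 2)))
          r = float(np.corrcoef(y_true, y_hat)[0, 1])
          return rmse, r

      rmsec, rc = figures_of_merit(y_cal, pls.predict(X_cal).ravel())
      rmsep, rp = figures_of_merit(y_prd, pls.predict(X_prd).ravel())
      print(f"RMSEC={rmsec:.3f}, Rc={rc:.3f}; RMSEP={rmsep:.3f}, Rp={rp:.3f}")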

  4. Determination of the laser intensity applied to a Ta witness plate from the measured X-ray signal using a pulsed micro-channel plate detector

    DOE PAGES

    Pickworth, L. A.; Rosen, M. D.; Schneider, M. B.; ...

    2017-04-14

    The laser intensity distribution at the surface of a high-Z material, such as Ta, can be deduced from imaging the self-emission of the produced x-ray spot using suitable calibration data. This paper presents a calibration method which uses the measured x-ray emissions from laser spots of different intensities hitting a Ta witness plate. The x-ray emission is measured with a micro-channel plate (MCP) based x-ray framing camera plus filters. Data from different positions on one MCP strip or from different MCP assemblies are normalized to each other using a standard candle laser beam spot at 1×10¹⁴ W/cm² intensity. The distribution of the resulting dataset agrees with results from a pseudo spectroscopic model for laser intensities between 4×10¹³ and 15×10¹³ W/cm². The model is then used to determine the absolute scaling factor between the experimental results from assemblies using two different x-ray filters. The data and model method also allows unique calibration factors for each MCP system and each MCP gain to be compared. We also present simulation results investigating alternate witness plate materials (Ag, Eu and Au).

  5. Microhotplate Temperature Sensor Calibration and BIST.

    PubMed

    Afridi, M; Montgomery, C; Cooper-Balis, E; Semancik, S; Kreider, K G; Geist, J

    2011-01-01

    In this paper we describe a novel long-term microhotplate temperature sensor calibration technique suitable for Built-In Self Test (BIST). The microhotplate thermal resistance (thermal efficiency) and the thermal voltage from an integrated platinum-rhodium thermocouple were calibrated against a freshly calibrated four-wire polysilicon microhotplate-heater temperature sensor (heater) that is not stable over long periods of time when exposed to higher temperatures. To stress the microhotplate, its temperature was raised to around 400 °C and held there for days. The heater was then recalibrated as a temperature sensor, and microhotplate temperature measurements were made based on the fresh calibration of the heater, the first calibration of the heater, the microhotplate thermal resistance, and the thermocouple voltage. This procedure was repeated 10 times over a period of 80 days. The results show that the heater calibration drifted substantially during the period of the test, while the microhotplate thermal resistance and the thermocouple voltage remained stable to within about ±1 °C over the same period. Therefore, the combination of a microhotplate heater-temperature sensor and either the microhotplate thermal resistance or an integrated thin film platinum-rhodium thermocouple can be used to provide a stable, calibrated microhotplate-temperature sensor, and the combination of the three sensors is suitable for implementing BIST functionality. Alternatively, if a stable microhotplate-heater temperature sensor is available, such as a properly annealed platinum heater-temperature sensor, then the thermal resistance of the microhotplate and the electrical resistance of the platinum heater will be sufficient to implement BIST. It is also shown that aluminum- and polysilicon-based temperature sensors, which are not stable enough for measuring high microhotplate temperatures (>220 °C) without impractically frequent recalibration, can be used to measure the silicon substrate temperature if never exposed to temperatures above about 220 °C.

  6. An Item-Driven Adaptive Design for Calibrating Pretest Items. Research Report. ETS RR-14-38

    ERIC Educational Resources Information Center

    Ali, Usama S.; Chang, Hua-Hua

    2014-01-01

    Adaptive testing is advantageous in that it provides more efficient ability estimates with fewer items than linear testing does. Item-driven adaptive pretesting may also offer similar advantages, and verification of such a hypothesis about item calibration was the main objective of this study. A suitability index (SI) was introduced to adaptively…

  7. Multivariate methods on the excitation emission matrix fluorescence spectroscopic data of diesel-kerosene mixtures: a comparative study.

    PubMed

    Divya, O; Mishra, Ashok K

    2007-05-29

    Quantitative determination of kerosene fraction present in diesel has been carried out based on excitation emission matrix fluorescence (EEMF) along with parallel factor analysis (PARAFAC) and N-way partial least squares regression (N-PLS). EEMF is a simple, sensitive and nondestructive method suitable for the analysis of multifluorophoric mixtures. Calibration models consisting of varying compositions of diesel and kerosene were constructed and their validation was carried out using leave-one-out cross validation method. The accuracy of the model was evaluated through the root mean square error of prediction (RMSEP) for the PARAFAC, N-PLS and unfold PLS methods. N-PLS was found to be a better method compared to PARAFAC and unfold PLS method because of its low RMSEP values.

  8. Calibration procedures to test the feasibility of heated fiber optics for measuring soil water content in field conditions.

    NASA Astrophysics Data System (ADS)

    Benítez, Javier; Sayde, Chadi; Rodríguez Sinobas, Leonor; Sánchez, Raúl; Gil, María; Selker, John

    2013-04-01

    This research provides insights into the calibration procedures carried out at the agricultural field of La Nava de Arévalo (Spain). The suitability of heat pulse theory applied to fiber optics for measuring soil water content in field conditions is analyzed here. In addition, the major findings obtained and the weaknesses to be addressed in future studies are highlighted. Within a corn field, in a plot of 500 m² of bare soil, 600 m of fiber optic cable (BruggSteal) were buried in a zig-zag deployment at two depths, 30 cm and 60 cm. Electrical heat pulses of 20 W/m were applied to the stainless steel shield of the fiber optic cable for 2 minutes. The resulting thermal response was captured by means of distributed fiber optic temperature sensing (DFOT), with a spatial and temporal resolution of up to 25 cm and 1 s, respectively. The soil thermal response was then correlated to the soil water content by using undisturbed soil samples and soil moisture sensors (Decagon ECHO 5TM). The process was also modeled using the numerical software Hydrus 2D. In addition, the soil thermal properties were measured in situ using a dual heat pulse probe (Decagon KD2 Pro). Although this is ongoing work, the first results show the suitability of heated fiber optics for measuring soil water content in real field conditions. They also highlight the usefulness of Hydrus 2D as a complementary tool for calibration purposes and for reducing uncertainty in addressing soil spatial variability.

  9. Development and Calibration of a Field-Deployable Microphone Phased Array for Propulsion and Airframe Noise Flyover Measurements

    NASA Technical Reports Server (NTRS)

    Humphreys, William M., Jr.; Lockard, David P.; Khorrami, Mehdi R.; Culliton, William G.; McSwain, Robert G.; Ravetta, Patricio A.; Johns, Zachary

    2016-01-01

    A new aeroacoustic measurement capability has been developed consisting of a large channel-count, field-deployable microphone phased array suitable for airframe noise flyover measurements for a range of aircraft types and scales. The array incorporates up to 185 hardened, weather-resistant sensors suitable for outdoor use. A custom 4-mA current loop receiver circuit with temperature compensation was developed to power the sensors over extended cable lengths with minimal degradation of the signal-to-noise ratio and frequency response. Extensive laboratory calibrations and environmental testing of the sensors were conducted to verify the design's performance specifications. A compact data system combining sensor power, signal conditioning, and digitization was assembled for use with the array. Complementing the data system is a robust analysis system capable of near real-time presentation of beamformed and deconvolved contour plots and integrated spectra obtained from array data acquired during flyover passes. Additional instrumentation systems needed to process the array data were also assembled. These include a commercial weather station and a video monitoring/recording system. A detailed mock-up of the instrumentation suite (phased array, weather station, and data processor) was carried out in the NASA Langley Acoustic Development Laboratory to vet the system performance. The first deployment of the system occurred at Finnegan Airfield at Fort A.P. Hill, where the array was utilized to measure the vehicle noise from a number of sUAS (small Unmanned Aerial System) aircraft. A unique in-situ calibration method for the array microphones using a hovering aerial sound source was attempted for the first time during the deployment.

  10. Long-Term Stability of One-Inch Condenser Microphones Calibrated at the National Institute of Standards and Technology

    PubMed Central

    Wagner, Randall P.; Guthrie, William F.

    2015-01-01

    The devices calibrated most frequently by the acoustical measurement services at the National Institute of Standards and Technology (NIST) over the 50-year period from 1963 to 2012 were one-inch condenser microphones of three specific standard types: LS1Pn, LS1Po, and WS1P. Due to its long history of providing calibrations of such microphones to customers, NIST is in a unique position to analyze data concerning the long-term stability of these devices. This long history has enabled NIST to acquire and aggregate a substantial amount of repeat calibration data for a large number of microphones that belong to various other standards and calibration laboratories. In addition to determining microphone sensitivities at the time of calibration, it is important to have confidence that the microphones do not typically undergo significant drift as compared to the calibration uncertainty during the periods between calibrations. For each of the three microphone types, an average drift rate and approximate 95 % confidence interval were computed by two different statistical methods, and the results from the two methods were found to differ insignificantly in each case. These results apply to typical microphones of these types that are used in a suitable environment and handled with care. The approximate 95 % confidence interval for the average drift rate of Type LS1Pn microphones was −0.004 dB/year to 0.003 dB/year; for Type LS1Po microphones it was −0.016 dB/year to 0.008 dB/year; and for Type WS1P microphones it was −0.004 dB/year to 0.018 dB/year. For each of these microphone types, the average drift rate is not significantly different from zero. This result is consistent with the performance expected of condenser microphones designed for use as transfer standards. In addition, the values that bound the confidence intervals are well within the limits specified for long-term stability in international standards. Even though these results show very good long-term stability historically for these microphone types, it is expected that periodic calibrations will always be done to track the calibration history of individual microphones and check for anomalies indicative of shifts in sensitivity. PMID:26958445
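
    As a rough illustration of this kind of stability analysis (not the two statistical methods used by NIST), the sketch below fits an ordinary least-squares drift line to each hypothetical microphone's repeat-calibration history and pools the slopes into an average drift rate with an approximate 95 % confidence interval.

      import numpy as np

      rng = np.random.default_rng(2)
      # Hypothetical repeat-calibration histories: (years since first calibration, sensitivity in dB)
      mics = []
      for _ in range(25):
          t = np.sort(rng.uniform(0.0, 20.0, rng.integers(4, 9)))
          s = -26.0 + 0.002 * t + 0.01 * rng.standard_normal(t.size)   # small drift plus calibration scatter
          mics.append((t, s))

      # Per-microphone drift rate from an ordinary least-squares line, then the pooled mean and ~95 % interval.
      rates = np.array([np.polyfit(t, s, 1)[0] for t, s in mics])
      mean = rates.mean()
      half = 1.96 * rates.std(ddof=1) / np.sqrt(rates.size)
      print(f"average drift rate {mean:+.4f} dB/year, approx. 95 % CI [{mean - half:+.4f}, {mean + half:+.4f}]")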

  11. Quantification of transformation products of rocket fuel unsymmetrical dimethylhydrazine in soils using SPME and GC-MS.

    PubMed

    Bakaikina, Nadezhda V; Kenessov, Bulat; Ul'yanovskii, Nikolay V; Kosyakov, Dmitry S

    2018-07-01

    Determination of transformation products (TPs) of rocket fuel unsymmetrical dimethylhydrazine (UDMH) in soil is highly important for environmental impact assessment of the launches of heavy space rockets from Kazakhstan, Russia, China and India. The method based on headspace solid-phase microextraction (HS SPME) and gas chromatography-mass spectrometry is advantageous over other known methods due to greater simplicity and cost efficiency. However, accurate quantification of these analytes using HS SPME is limited by the matrix effect. In this research, we proposed using internal standard and standard addition calibrations to achieve proper combination of accuracies of the quantification of key TPs of UDMH and cost efficiency. 1-Trideuteromethyl-1H-1,2,4-triazole (MTA-d3) was used as the internal standard. Internal standard calibration allowed controlling matrix effects during quantification of 1-methyl-1H-1,2,4-triazole (MTA), N,N-dimethylformamide (DMF), and N-nitrosodimethylamine (NDMA) in soils with humus content < 1%. Using SPME at 60 °C for 15 min by 65 µm Carboxen/polydimethylsiloxane fiber, recoveries of MTA, DMF and NDMA for sandy and loamy soil samples were 91-117, 85-123 and 64-132%, respectively. For improving the method accuracy and widening the range of analytes, standard addition and its combination with internal standard calibration were tested and compared on real soil samples. The combined calibration approach provided greatest accuracies for NDMA, DMF, N-methylformamide, formamide, 1H-pyrazole, 3-methyl-1H-pyrazole and 1H-pyrazole. For determination of 1-formyl-2,2-dimethylhydrazine, 3,5-dimethylpyrazole, 2-ethyl-1H-imidazole, 1H-imidazole, 1H-1,2,4-triazole, pyrazines and pyridines, standard addition calibration is more suitable. However, the proposed approach and collected data allow using both approaches simultaneously. Copyright © 2018 Elsevier B.V. All rights reserved.
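
    A minimal sketch of the quantitation arithmetic described above, with purely hypothetical peak areas: the analyte response is first normalized to the MTA-d3 internal standard, and the native concentration is then obtained from a standard-addition fit.

      import numpy as np

      # Hypothetical HS-SPME-GC-MS peak areas for one soil extract; values are illustrative only.
      istd_area = 1.8e5                                    # MTA-d3 internal standard, present in every run
      added_ug_per_kg = np.array([0.0, 20.0, 40.0, 80.0])  # standard additions of the analyte
      analyte_area = np.array([0.9e5, 1.6e5, 2.3e5, 3.7e5])

      # Internal-standard normalization, then a standard-addition fit:
      # response = slope * added + intercept, so the native concentration is intercept / slope.
      y = analyte_area / istd_area
      slope, intercept = np.polyfit(added_ug_per_kg, y, 1)
      print(f"estimated native concentration ~ {intercept / slope:.1f} ug/kg")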

  12. Working towards accreditation by the International Standards Organization 15189 Standard: how to validate an in-house developed method: an example of lead determination in whole blood by electrothermal atomic absorption spectrometry.

    PubMed

    Garcia Hejl, Carine; Ramirez, Jose Manuel; Vest, Philippe; Chianea, Denis; Renard, Christophe

    2014-09-01

    Laboratories working towards accreditation by the International Standards Organization (ISO) 15189 standard are required to demonstrate the validity of their analytical methods. The different guidelines set by various accreditation organizations make it difficult to provide objective evidence that an in-house method is fit for the intended purpose. Moreover, the required performance characteristic tests and acceptance criteria are not always detailed. The laboratory must choose the most suitable validation protocol and set the acceptance criteria. Therefore, we propose a validation protocol to evaluate the performance of an in-house method. As an example, we validated the process for the detection and quantification of lead in whole blood by electrothermal atomic absorption spectrometry. The fundamental parameters tested were selectivity, calibration model, precision, accuracy (and uncertainty of measurement), contamination, stability of the sample, reference interval, and analytical interference. We have developed a protocol that has been applied successfully to quantify lead in whole blood by electrothermal atomic absorption spectrometry (ETAAS). In particular, our method is selective, linear, accurate, and precise, making it suitable for use in routine diagnostics.

  13. Total anthocyanin content determination in intact açaí (Euterpe oleracea Mart.) and palmitero-juçara (Euterpe edulis Mart.) fruit using near infrared spectroscopy (NIR) and multivariate calibration.

    PubMed

    Inácio, Maria Raquel Cavalcanti; de Lima, Kássio Michell Gomes; Lopes, Valquiria Garcia; Pessoa, José Dalton Cruz; de Almeida Teixeira, Gustavo Henrique

    2013-02-15

    The aim of this study was to evaluate the potential of near-infrared reflectance spectroscopy (NIR) and multivariate calibration as a rapid method to determine anthocyanin content in intact fruit (açaí and palmitero-juçara). Several multivariate calibration techniques, including partial least squares (PLS), interval partial least squares, genetic algorithm, successive projections algorithm, and net analyte signal, were compared and validated by establishing figures of merit. Suitable results were obtained with the PLS model (four latent variables and 5-point smoothing), with a detection limit of 6.2 g kg⁻¹, limit of quantification of 20.7 g kg⁻¹, accuracy estimated as a root mean square error of prediction of 4.8 g kg⁻¹, mean selectivity of 0.79 g kg⁻¹, sensitivity of 5.04×10⁻³ g kg⁻¹, precision of 27.8 g kg⁻¹, and signal-to-noise ratio of 1.04×10⁻³ g kg⁻¹. These results suggest that NIR spectroscopy and multivariate calibration can be effectively used to determine anthocyanin content in intact açaí and palmitero-juçara fruit. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. Application of standard addition for the determination of carboxypeptidase activity in Actinomucor elegans bran koji.

    PubMed

    Fu, J; Li, L; Yang, X Q; Zhu, M J

    2011-01-01

    Leucine carboxypeptidase (EC 3.4.16) activity in Actinomucor elegans bran koji was investigated via absorbance at 507 nm after staining with Cd-ninhydrin solution, using calibration curve A, made from a set of standard leucine solutions of known concentration; calibration curve B, made from three sets of standard leucine solutions of known concentration with the addition of three concentrations of inactive crude enzyme extract; and calibration curve C, made from three sets of standard leucine solutions of known concentration with the addition of three concentrations of crude enzyme extract. The results indicated that a pure amino acid standard curve was not a suitable way to determine carboxypeptidase activity in a complicated mixture and probably led to overestimated carboxypeptidase activity. Addition of crude extract to the pure amino acid standard curve gave results significantly different from the pure amino acid standard curve method (p < 0.05). There was no significant difference in enzyme activity (p > 0.05) between addition of active crude extract and addition of inactive crude extract when a proper dilution factor was used. It was concluded that the addition of crude enzyme extract to the calibration was needed to eliminate the interference of free amino acids and related compounds present in the crude enzyme extract.

  15. Calibration of a pitot-static rake

    NASA Technical Reports Server (NTRS)

    Stump, H. P.

    1977-01-01

    A five-element pitot-static rake was tested to confirm its accuracy and determine its suitability for use at Langley in low-speed tunnel calibration, primarily at the full-scale tunnel. The rake was tested at a single airspeed of 74 miles per hour (33 meters per second) and at pitch and yaw angles of 0 to ±20 degrees in 4-degree increments.

  16. SU-F-BRA-08: An Investigation of Well-Chamber Responses for An Electronic Brachytherapy Source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Culberson, W; Micka, J

    Purpose: The aim of this study was to investigate the variation of well-type ionization chamber response between a Xoft Axxent™ electronic brachytherapy (EBT) source and a GE Oncoseed™ 6711 I-125 seed. Methods: A new EBT air-kerma standard has recently been introduced by the National Institute of Standards and Technology (NIST). Historically, the Axxent source strength has been based on a well chamber calibration from an I-125 brachytherapy source due to the lack of a primary standard. Xoft utilizes a calibration procedure that employs a GE 6711 seed calibration as a surrogate standard to represent the air-kerma strength of an Axxent source. This method is based on the premise that the energies of the two sources are similar and thus a conversion factor would be a suitable interim solution until a NIST standard was established. For this investigation, a number of well chambers of the same model type and three different EBT sources were used to determine NIST-traceable calibration coefficients for both the GE 6711 seed and the Axxent source. The ratio of the two coefficients was analyzed for consistency and also to identify any possible correlations with chamber vintage or the sources themselves. Results: For all well chambers studied, the relative standard deviation of the ratio of calibration coefficients between the two standards is less than 1%. No specific trends were found with the well chamber vintage or between the three different EBT sources used. Conclusion: The ratio of well chamber calibration coefficients between a Xoft Axxent™ EBT source and a GE 6711 Oncoseed™ is consistent across well chamber vintage and between sources. The results of this investigation confirm the underlying assumptions and stability of the surrogate standard currently in use by Xoft, and establish a migration path for future implementation of the new NIST air kerma standard. This research is supported in part by Xoft, a subsidiary of iCAD.

  17. TIME CALIBRATED OSCILLOSCOPE SWEEP

    DOEpatents

    Owren, H.M.; Johnson, B.M.; Smith, V.L.

    1958-04-22

    The time calibrator of an electric signal displayed on an oscilloscope is described. In contrast to the conventional technique of using time-calibrated divisions on the face of the oscilloscope, this invention provides means for directly superimposing equal time spaced markers upon a signal displayed upon an oscilloscope. More explicitly, the present invention includes generally a generator for developing a linear saw-tooth voltage and a circuit for combining a high-frequency sinusoidal voltage of a suitable amplitude and frequency with the saw-tooth voltage to produce a resultant sweep deflection voltage having a wave shape which is substantially linear with respect to time between equal time spaced incremental plateau regions occurring once each cycle of the sinusoidal voltage. The foregoing sweep voltage when applied to the horizontal deflection plates in combination with a signal to be observed applied to the vertical deflection plates of a cathode ray oscilloscope produces an image on the viewing screen which is essentially a display of the signal to be observed with respect to time. Intensified spots, or certain other conspicuous indications corresponding to the equal time spaced plateau regions of said sweep voltage, appear superimposed upon said displayed signal, which indications are therefore suitable for direct time calibration purposes.
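
    A numerical rendering of the idea, with arbitrary parameter values: a linear saw-tooth is summed with a sinusoid whose amplitude is chosen so that the combined slope just drops to zero once per sinusoid cycle, producing the equally spaced plateau regions that serve as time markers.

      import numpy as np

      # Sweep parameters (illustrative): a 1 ms ramp with timing markers every 100 us.
      ramp_rate = 1000.0                    # V/s, slope of the linear saw-tooth
      marker_period = 100e-6                # s, desired spacing of the calibration markers
      w = 2 * np.pi / marker_period
      amp = ramp_rate / w                   # choose amplitude so A*w equals the ramp slope

      t = np.linspace(0.0, 1e-3, 10001)
      sweep = ramp_rate * t + amp * np.sin(w * t)    # composite horizontal-deflection voltage

      # The sweep stays monotone, but its slope dips to ~zero once every marker_period,
      # producing the equally spaced plateau regions (intensified spots) described above.
      slope = np.gradient(sweep, t)
      print("minimum sweep slope (V/s):", slope.min())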

  18. Comparison of calibration strategies for optical 3D scanners based on structured light projection using a new evaluation methodology

    NASA Astrophysics Data System (ADS)

    Bräuer-Burchardt, Christian; Ölsner, Sandy; Kühmstedt, Peter; Notni, Gunther

    2017-06-01

    In this paper a new evaluation strategy for optical 3D scanners based on structured light projection is introduced. It can be used for the characterization of the expected measurement accuracy. Compared to the procedure proposed in the VDI/VDE guidelines for optical 3D measurement systems based on area scanning it requires less effort and provides more impartiality. The methodology is suitable for the evaluation of sets of calibration parameters, which mainly determine the quality of the measurement result. It was applied to several calibrations of a mobile stereo camera based optical 3D scanner. The performed calibrations followed different strategies regarding calibration bodies and arrangement of the observed scene. The results obtained by the different calibration strategies are discussed and suggestions concerning future work on this area are given.

  19. Dynamic trends in cardiac surgery: why the logistic EuroSCORE is no longer suitable for contemporary cardiac surgery and implications for future risk models

    PubMed Central

    Hickey, Graeme L.; Grant, Stuart W.; Murphy, Gavin J.; Bhabra, Moninder; Pagano, Domenico; McAllister, Katherine; Buchan, Iain; Bridgewater, Ben

    2013-01-01

    OBJECTIVES Progressive loss of calibration of the original EuroSCORE models has necessitated the introduction of the EuroSCORE II model. Poor model calibration has important implications for clinical decision-making and risk adjustment of governance analyses. The objective of this study was to explore the reasons for the calibration drift of the logistic EuroSCORE. METHODS Data from the Society for Cardiothoracic Surgery in Great Britain and Ireland database were analysed for procedures performed at all National Health Service and some private hospitals in England and Wales between April 2001 and March 2011. The primary outcome was in-hospital mortality. EuroSCORE risk factors, overall model calibration and discrimination were assessed over time. RESULTS A total of 317 292 procedures were included. Over the study period, mean age at surgery increased from 64.6 to 67.2 years. The proportion of procedures that were isolated coronary artery bypass grafts decreased from 67.5 to 51.2%. In-hospital mortality fell from 4.1 to 2.8%, but the mean logistic EuroSCORE increased from 5.6 to 7.6%. The logistic EuroSCORE remained a good discriminant throughout the study period (area under the receiver-operating characteristic curve between 0.79 and 0.85), but calibration (observed-to-expected mortality ratio) fell from 0.76 to 0.37. Inadequate adjustment for decreasing baseline risk affected calibration considerably. DISCUSSIONS Patient risk factors and case-mix in adult cardiac surgery change dynamically over time. Models like the EuroSCORE that are developed using a ‘snapshot’ of data in time do not account for this and can subsequently lose calibration. It is therefore important to regularly revalidate clinical prediction models. PMID:23152436
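
    For readers unfamiliar with the two metrics, the sketch below (synthetic data, hypothetical numbers, scikit-learn assumed available) computes the observed-to-expected mortality ratio used above as the calibration measure and the area under the ROC curve used as the discrimination measure.

      import numpy as np
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(3)
      # Hypothetical registry extract: predicted risk from an additive model and observed in-hospital death.
      n = 5000
      pred_risk = np.clip(rng.beta(2, 30, n), 1e-4, 0.5)       # mean predicted mortality ~6 %
      observed = (rng.random(n) < pred_risk * 0.5).astype(int)  # outcomes at half the predicted rate -> miscalibration

      oe_ratio = observed.mean() / pred_risk.mean()            # calibration: observed-to-expected mortality ratio
      auc = roc_auc_score(observed, pred_risk)                 # discrimination: area under the ROC curve
      print(f"O/E ratio = {oe_ratio:.2f}, AUC = {auc:.2f}")

    The example is built so that discrimination stays good while calibration drifts, mirroring the pattern reported above.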

  1. Selective and sensitive fluorimetric determination of carbendazim in apple and orange after preconcentration with magnetite-molecularly imprinted polymer

    NASA Astrophysics Data System (ADS)

    İlktaç, Raif; Aksuner, Nur; Henden, Emur

    2017-03-01

    In this study, a magnetite-molecularly imprinted polymer has been used for the first time as a selective adsorbent before the fluorimetric determination of carbendazim. The adsorption capacity of the magnetite-molecularly imprinted polymer was found to be 2.31 ± 0.63 mg g⁻¹ (n = 3). The limit of detection (LOD) and limit of quantification (LOQ) of the method were found to be 2.3 and 7.8 μg L⁻¹, respectively. The calibration graph was linear in the range of 10-1000 μg L⁻¹. Rapidity is an important advantage of the method, as the re-binding and recovery processes of carbendazim can be completed within an hour. The same imprinted polymer can be reused for the determination of carbendazim at least ten times without any capacity loss. The proposed method has been successfully applied to determine carbendazim residues in apple and orange, where the recoveries of the spiked samples were found to be in the range of 95.7-103%. Characterization of the adsorbent and the effects of some potential interferences were also evaluated. With the reasonably high capacity and reusability of the adsorbent, its dynamic calibration range, rapidity, simplicity, cost-effectiveness, and suitable LOD and LOQ, the proposed method is an ideal method for the determination of carbendazim.

  2. Gearbox Reliability Collaborative Gearbox 3 Planet Bearing Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keller, Jonathan

    2017-03-24

    The Gearbox Reliability Collaborative gearbox was redesigned to improve its load-sharing characteristics and predicted fatigue life. The most important aspect of the redesign was to replace the cylindrical roller bearings with preloaded tapered roller bearings in the planetary section. Similar to previous work, the strain gages installed on the planet tapered roller bearings were calibrated in a load frame. This report describes the calibration tests and provides the factors necessary to convert the measured units from dynamometer testing to bearing loads, suitable for comparison to engineering models.

  3. Calibration of ultra-high frequency (UHF) partial discharge sensors using FDTD method

    NASA Astrophysics Data System (ADS)

    Ishak, Asnor Mazuan; Ishak, Mohd Taufiq

    2018-02-01

    Ultra-high frequency (UHF) partial discharge sensors are widely used for condition monitoring and defect location in the insulation systems of high-voltage equipment. Designing sensors for specific applications often requires an iterative process of manufacturing, testing and mechanical modifications. This paper demonstrates the use of the finite-difference time-domain (FDTD) technique as a tool to predict the frequency response of UHF PD sensors. Using this approach, the design process can be simplified and parametric studies can be conducted in order to assess the influence of component dimensions and material properties on the sensor response. The modelling approach is validated using a gigahertz transverse electromagnetic (GTEM) calibration system. The use of a transient excitation source is particularly suitable for modeling using FDTD, which is able to simulate the step response output voltage of the sensor, from which the frequency response is obtained using the same post-processing applied to the physical measurement.
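
    A generic post-processing sketch (not the paper's code): the simulated step-response voltage is differentiated to an impulse response and Fourier-transformed to a frequency response. The damped-resonance time series below merely stands in for an actual FDTD output trace, and all parameter values are illustrative.

      import numpy as np

      dt = 5e-12                                         # s, output sampling interval (illustrative)
      t = np.arange(0.0, 20e-9, dt)
      # Stand-in for the FDTD step-response output: a damped resonance excited by a step.
      step_response = 1.0 - np.exp(-t / 2e-9) * np.cos(2 * np.pi * 1.2e9 * t)

      impulse_response = np.gradient(step_response, dt)   # differentiate the step response
      H = np.fft.rfft(impulse_response) * dt              # frequency response (arbitrary overall scale)
      f = np.fft.rfftfreq(impulse_response.size, dt)

      band = (f > 0.1e9) & (f < 3e9)                      # UHF band of interest for PD sensors
      peak = f[band][np.argmax(np.abs(H[band]))]
      print(f"simulated sensor response peaks near {peak / 1e9:.2f} GHz")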

  4. New and improved apparatus and method for monitoring the intensities of charged-particle beams

    DOEpatents

    Varma, M.N.; Baum, J.W.

    1981-01-16

    Charged particle beam monitoring means are disposed in the path of a charged particle beam in an experimental device. The monitoring means comprise a beam monitoring component which is operable to prevent passage of a portion of the beam, while concomitantly permitting passage of another portion thereof for incidence in an experimental chamber, and providing a signal (I_m) indicative of the intensity of the beam portion which is not passed. Calibration means are disposed in the experimental chamber in the path of the said another beam portion and are operable to provide a signal (I_f) indicative of the intensity thereof. Means are provided to determine the ratio (R) between said signals whereby, after suitable calibration, the calibration means may be removed from the experimental chamber and the intensity of the said another beam portion determined by monitoring of the monitoring means signal, per se.
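
    A minimal sketch of the calibration arithmetic implied above, with made-up currents: the ratio R between the calibration-detector and monitor signals is established first, after which the transmitted-beam intensity is inferred from the monitor signal alone.

      # Calibration phase: the calibration detector sits in the transmitted beam.
      i_m_cal = [4.1e-9, 5.3e-9, 6.7e-9]      # monitor readings (A), illustrative values
      i_f_cal = [8.3e-9, 10.6e-9, 13.5e-9]    # calibration-detector readings (A) for the same exposures

      R = sum(f / m for f, m in zip(i_f_cal, i_m_cal)) / len(i_m_cal)   # mean ratio I_f / I_m

      # Measurement phase: the calibration detector has been removed; the transmitted
      # intensity is inferred from the monitor signal alone.
      i_m = 5.9e-9
      print(f"estimated transmitted-beam intensity ~ {R * i_m:.2e} A")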

  5. A flexible 3D laser scanning system using a robotic arm

    NASA Astrophysics Data System (ADS)

    Fei, Zixuan; Zhou, Xiang; Gao, Xiaofei; Zhang, Guanliang

    2017-06-01

    In this paper, we present a flexible 3D scanning system based on a MEMS scanner mounted on an industrial arm with a turntable. This system has 7 degrees of freedom and is able to conduct a full-field scan from any angle, making it suitable for scanning objects with complex shapes. Existing non-contact 3D scanning systems usually use a laser scanner that projects a fixed stripe and is mounted on a coordinate measuring machine (CMM) or an industrial robot. These systems cannot perform path planning without CAD models. The 3D scanning system presented in this paper can scan an object without a CAD model, and we introduce this path planning method in the paper. We also propose a practical approach to calibrating the hand-eye system based on binocular stereo vision and analyze the errors of the hand-eye calibration.

  6. Translating pharmacodynamic biomarkers from bench to bedside: analytical validation and fit-for-purpose studies to qualify multiplex immunofluorescent assays for use on clinical core biopsy specimens.

    PubMed

    Marrero, Allison; Lawrence, Scott; Wilsker, Deborah; Voth, Andrea Regier; Kinders, Robert J

    2016-08-01

    Multiplex pharmacodynamic (PD) assays have the potential to increase sensitivity of biomarker-based reporting for new targeted agents, as well as revealing significantly more information about target and pathway activation than single-biomarker PD assays. Stringent methodology is required to ensure reliable and reproducible results. Common to all PD assays is the importance of reagent validation, assay and instrument calibration, and the determination of suitable response calibrators; however, multiplex assays, particularly those performed on paraffin specimens from tissue blocks, bring format-specific challenges adding a layer of complexity to assay development. We discuss existing multiplex approaches and the development of a multiplex immunofluorescence assay measuring DNA damage and DNA repair enzymes in response to anti-cancer therapeutics and describe how our novel method addresses known issues. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. A New First Break Picking for Three-Component VSP Data Using Gesture Sensor and Polarization Analysis

    PubMed Central

    Li, Huailiang; Tuo, Xianguo; Shen, Tong; Wang, Ruili; Courtois, Jérémie; Yan, Minhao

    2017-01-01

    A new first break picking for three-component (3C) vertical seismic profiling (VSP) data is proposed to improve the estimation accuracy of first arrivals, which adopts gesture detection calibration and polarization analysis based on the eigenvalue of the covariance matrix. This study aims at addressing the problem that calibration is required for VSP data using the azimuth and dip angle of geophones, due to the direction of geophones being random when applied in a borehole, which will further lead to the first break picking possibly being unreliable. Initially, a gesture-measuring module is integrated in the seismometer to rapidly obtain high-precision gesture data (including azimuth and dip angle information). Using re-rotating and re-projecting using earlier gesture data, the seismic dataset of each component will be calibrated to the direction that is consistent with the vibrator shot orientation. It will promote the reliability of the original data when making each component waveform calibrated to the same virtual reference component, and the corresponding first break will also be properly adjusted. After achieving 3C data calibration, an automatic first break picking algorithm based on the autoregressive-Akaike information criterion (AR-AIC) is adopted to evaluate the first break. Furthermore, in order to enhance the accuracy of the first break picking, the polarization attributes of 3C VSP recordings is applied to constrain the scanning segment of AR-AIC picker, which uses the maximum eigenvalue calculation of the covariance matrix. The contrast results between pre-calibration and post-calibration using field data show that it can further improve the quality of the 3C VSP waveform, which is favorable to subsequent picking. Compared to the obtained short-term average to long-term average (STA/LTA) and the AR-AIC algorithm, the proposed method, combined with polarization analysis, can significantly reduce the picking error. Applications of actual field experiments have also confirmed that the proposed method may be more suitable for the first break picking of 3C VSP. Test using synthesized 3C seismic data with low SNR indicates that the first break is picked with an error between 0.75 ms and 1.5 ms. Accordingly, the proposed method can reduce the picking error for 3C VSP data. PMID:28925981
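
    As one concrete example of such a polarization attribute (not necessarily the authors' exact formulation), the sketch below computes rectilinearity from the eigenvalues of the 3C covariance matrix; windows with high rectilinearity would be the ones in which the AR-AIC picker is allowed to search.

      import numpy as np

      def rectilinearity(window_3c):
          """Polarization attribute from the eigenvalues of the 3C covariance matrix.

          window_3c: array of shape (n_samples, 3) holding one analysis window of the three components.
          Returns a value near 1 for linearly polarized (P-wave-like) motion and near 0 for noise.
          """
          cov = np.cov(window_3c.T)
          lam = np.sort(np.linalg.eigvalsh(cov))[::-1]          # lambda1 >= lambda2 >= lambda3
          return 1.0 - (lam[1] + lam[2]) / (2.0 * lam[0] + 1e-20)

      # Illustrative use: compare a pure-noise window with a window containing a linearly polarized arrival.
      rng = np.random.default_rng(4)
      noise = rng.standard_normal((200, 3))
      arrival = np.outer(np.hanning(200) * np.sin(np.linspace(0, 20, 200)), [0.9, 0.3, 0.3])
      print("noise:", round(rectilinearity(noise), 2),
            " arrival:", round(rectilinearity(noise * 0.1 + arrival), 2))

    In practice such an attribute is evaluated in a sliding window, and only segments above a chosen threshold (e.g. 0.7) are passed to the picker.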

  8. High-performance liquid chromatographic method for potency determination of amoxicillin in commercial preparations and for stability studies.

    PubMed Central

    Hsu, M C; Hsu, P W

    1992-01-01

    A reversed-phase column liquid chromatographic method was developed for the assay of amoxicillin and its preparations. The linear calibration range was 0.2 to 2.0 mg/ml (r = 0.9998), and recoveries were generally greater than 99%. The high-performance liquid chromatographic assay results were compared with those obtained from a microbiological assay of bulk drug substance and capsule, injection, and granule formulations containing amoxicillin and degraded amoxicillin. At the 99% confidence level, no significant intermethod differences were noted for the paired results. Commercial formulations were also analyzed, and the results obtained by the proposed method closely agreed with those found by the microbiological method. The results indicated that the proposed method is a suitable substitute for the microbiological method for assays and stability studies of amoxicillin preparations. PMID:1416827

  9. Hindcasting of decadal‐timescale estuarine bathymetric change with a tidal‐timescale model

    USGS Publications Warehouse

    Ganju, Neil K.; Schoellhamer, David H.; Jaffe, Bruce E.

    2009-01-01

    Hindcasting decadal-timescale bathymetric change in estuaries is prone to error due to limited data for initial conditions, boundary forcing, and calibration; computational limitations further hinder efforts. We developed and calibrated a tidal-timescale model to bathymetric change in Suisun Bay, California, over the 1867–1887 period. A general, multiple-timescale calibration ensured robustness over all timescales; two input reduction methods, the morphological hydrograph and the morphological acceleration factor, were applied at the decadal timescale. The model was calibrated to net bathymetric change in the entire basin; average error for bathymetric change over individual depth ranges was 37%. On a model cell-by-cell basis, performance for spatial amplitude correlation was poor over the majority of the domain, though spatial phase correlation was better, with 61% of the domain correctly indicated as erosional or depositional. Poor agreement was likely caused by the specification of initial bed composition, which was unknown during the 1867–1887 period. Cross-sectional bathymetric change between channels and flats, driven primarily by wind wave resuspension, was modeled with higher skill than longitudinal change, which is driven in part by gravitational circulation. The accelerated response of depth may have prevented gravitational circulation from being represented properly. As performance criteria became more stringent in a spatial sense, the error of the model increased. While these methods are useful for estimating basin-scale sedimentation changes, they may not be suitable for predicting specific locations of erosion or deposition. They do, however, provide a foundation for realistic estuarine geomorphic modeling applications.

  10. Analysis of ecstasy tablets: comparison of reflectance and transmittance near infrared spectroscopy.

    PubMed

    Schneider, Ralph Carsten; Kovar, Karl-Artur

    2003-07-08

    Calibration models for the quantitation of commonly used ecstasy substances have been developed using near infrared spectroscopy (NIR) in diffuse reflectance and in transmission mode, applying seized ecstasy tablets for model building and validation. The samples contained amphetamine, N-methyl-3,4-methylenedioxy-amphetamine (MDMA) and N-ethyl-3,4-methylenedioxy-amphetamine (MDE) in different concentrations. All tablets were analyzed using high performance liquid chromatography (HPLC) with diode array detection as the reference method. We evaluated the performance of each NIR measurement method with regard to its ability to predict the content of each tablet with a low root mean square error of prediction (RMSEP). The best calibration models could be generated by using NIR measurement in transmittance mode with wavelength selection and 1/x-transformation of the raw data. The models built in reflectance mode showed higher RMSEPs; for these, wavelength selection, 1/x-transformation and a second-order Savitzky-Golay derivative with five-point smoothing were applied as data pretreatment to obtain the best models. To estimate the influence of inhomogeneities in the illegal tablets, a calibration of the destroyed, i.e. triturated, samples was built and compared with the corresponding data for the whole tablets. The calibrations using these homogenized tablets showed lower RMSEPs. We can conclude that NIR analysis of ecstasy tablets in transmission mode is more suitable than measurement in diffuse reflectance to obtain quantification models for their active ingredients with regard to low errors of prediction. Inhomogeneities in the samples are equalized when measuring the tablets as powdered samples.

  11. Use of the Airborne Visible/Infrared Imaging Spectrometer to calibrate the optical sensor on board the Japanese Earth Resources Satellite-1

    NASA Technical Reports Server (NTRS)

    Green, Robert O.; Conel, James E.; Vandenbosch, Jeannette; Shimada, Masanobu

    1993-01-01

    We describe an experiment to calibrate the optical sensor (OPS) on board the Japanese Earth Resources Satellite-1 with data acquired by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). On 27 Aug. 1992 both the OPS and AVIRIS acquired data concurrently over a calibration target on the surface of Rogers Dry Lake, California. The high spectral resolution measurements of AVIRIS have been convolved to the spectral response curves of the OPS. These data in conjunction with the corresponding OPS digitized numbers have been used to generate the radiometric calibration coefficients for the eight OPS bands. This experiment establishes the suitability of AVIRIS for the calibration of spaceborne sensors in the 400 to 2500 nm spectral region.
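
    The band-convolution step can be illustrated as follows; the radiance spectrum, spectral response curve and mean digital number below are stand-ins, not AVIRIS or OPS data.

      import numpy as np

      wl = np.arange(400.0, 2501.0, 10.0)                        # nm
      radiance = 80.0 * np.exp(-((wl - 550.0) / 400.0) ** 2)     # stand-in high-resolution radiance spectrum
      resp = np.exp(-0.5 * ((wl - 560.0) / 30.0) ** 2)           # stand-in spectral response of one broad band

      # Response-weighted band radiance, then the radiometric calibration coefficient for that band.
      band_radiance = np.trapz(radiance * resp, wl) / np.trapz(resp, wl)
      mean_dn = 143.0                                            # mean digital number over the target (illustrative)
      gain = band_radiance / mean_dn
      print(f"band radiance {band_radiance:.1f}, gain {gain:.3f} radiance units per DN")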

  12. Multiple internal standard normalization for improving HS-SPME-GC-MS quantitation in virgin olive oil volatile organic compounds (VOO-VOCs) profile.

    PubMed

    Fortini, Martina; Migliorini, Marzia; Cherubini, Chiara; Cecchi, Lorenzo; Calamai, Luca

    2017-04-01

    The commercial value of virgin olive oils (VOOs) strongly depends on their classification, also based on the aroma of the oils, usually evaluated by a panel test. Nowadays, a reliable analytical method is still needed to evaluate the volatile organic compounds (VOCs) and support the standard panel test method. To date, the use of HS-SPME sampling coupled to GC-MS is generally accepted for the analysis of VOCs in VOOs. However, VOO is a challenging matrix due to the simultaneous presence of: i) compounds at ppm and ppb concentrations; ii) molecules belonging to different chemical classes and iii) analytes with a wide range of molecular mass. Therefore, HS-SPME-GC-MS quantitation based upon the use of external standard method or of only a single internal standard (ISTD) for data normalization in an internal standard method, may be troublesome. In this work a multiple internal standard normalization is proposed to overcome these problems and improving quantitation of VOO-VOCs. As many as 11 ISTDs were used for quantitation of 71 VOCs. For each of them the most suitable ISTD was selected and a good linearity in a wide range of calibration was obtained. Except for E-2-hexenal, without ISTD or with an unsuitable ISTD, the linear range of calibration was narrower with respect to that obtained by a suitable ISTD, confirming the usefulness of multiple internal standard normalization for the correct quantitation of VOCs profile in VOOs. The method was validated for 71 VOCs, and then applied to a series of lampante virgin olive oils and extra virgin olive oils. In light of our results, we propose the application of this analytical approach for routine quantitative analyses and to support sensorial analysis for the evaluation of positive and negative VOOs attributes. Copyright © 2017 Elsevier B.V. All rights reserved.
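
    A toy sketch of the idea of multiple-internal-standard normalization, with invented peak areas: for a given analyte, the candidate internal standard whose normalized response is most linear against concentration is retained for quantitation.

      import numpy as np

      # Illustrative calibration data: analyte peak areas at known spiked concentrations,
      # acquired together with two candidate internal standards (ISTDs).
      conc = np.array([0.1, 0.5, 1.0, 5.0, 10.0])                 # ppm
      analyte = np.array([2.1e4, 1.0e5, 2.1e5, 1.0e6, 1.9e6])
      istds = {"ISTD-1": np.array([9.0e4, 9.5e4, 8.8e4, 1.3e5, 1.6e5]),
               "ISTD-2": np.array([1.1e5, 1.0e5, 1.05e5, 1.0e5, 0.98e5])}

      def linearity(x, y):
          r = np.corrcoef(x, y)[0, 1]
          return r * r

      # Pick, for this analyte, the ISTD whose normalized response is most linear vs. concentration.
      scores = {name: linearity(conc, analyte / area) for name, area in istds.items()}
      best = max(scores, key=scores.get)
      print("best ISTD:", best, {k: round(v, 4) for k, v in scores.items()})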

  13. A practical approach for linearity assessment of calibration curves under the International Union of Pure and Applied Chemistry (IUPAC) guidelines for an in-house validation of method of analysis.

    PubMed

    Sanagi, M Marsin; Nasir, Zalilah; Ling, Susie Lu; Hermawan, Dadan; Ibrahim, Wan Aini Wan; Naim, Ahmedy Abu

    2010-01-01

    Linearity assessment as required in method validation has always been subject to different interpretations and definitions by various guidelines and protocols. However, there are very limited applicable implementation procedures that can be followed by a laboratory chemist in assessing linearity. Thus, this work proposes a simple method for linearity assessment in method validation by a regression analysis that covers experimental design, estimation of the parameters, outlier treatment, and evaluation of the assumptions according to the International Union of Pure and Applied Chemistry guidelines. The suitability of this procedure was demonstrated by its application to an in-house validation for the determination of plasticizers in plastic food packaging by GC.

  14. Least-Squares Self-Calibration of Imaging Array Data

    NASA Technical Reports Server (NTRS)

    Arendt, R. G.; Moseley, S. H.; Fixsen, D. J.

    2004-01-01

    When arrays are used to collect multiple appropriately-dithered images of the same region of sky, the resulting data set can be calibrated using a least-squares minimization procedure that determines the optimal fit between the data and a model of that data. The model parameters include the desired sky intensities as well as instrument parameters such as pixel-to-pixel gains and offsets. The least-squares solution simultaneously provides the formal error estimates for the model parameters. With a suitable observing strategy, the need for separate calibration observations is reduced or eliminated. We show examples of this calibration technique applied to HST NICMOS observations of the Hubble Deep Fields and simulated SIRTF IRAC observations.
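
    A minimal sketch of this kind of self-calibration on a toy model (sky intensities plus per-pixel offsets only, unit gains, arbitrary dither pattern): the dithered frames are written as one linear system and solved by least squares, with a zero-sum constraint on the offsets to break the sky/offset degeneracy.

      import numpy as np

      rng = np.random.default_rng(5)
      n_sky, n_pix, n_frames = 40, 8, 12
      sky_true = rng.uniform(0.0, 10.0, n_sky)        # sky intensities to recover
      off_true = rng.normal(0.0, 0.5, n_pix)          # unknown pixel offsets

      # Simulated dithered frames: in frame f, pixel p looks at sky position (p + 3*f) mod n_sky.
      sky_idx, pix_idx, obs = [], [], []
      for f in range(n_frames):
          ptr = (np.arange(n_pix) + 3 * f) % n_sky
          for p in range(n_pix):
              sky_idx.append(int(ptr[p]))
              pix_idx.append(p)
              obs.append(sky_true[ptr[p]] + off_true[p] + 0.05 * rng.standard_normal())

      # Linear model d = s(sky_idx) + o(pix_idx): one row per datum, plus a row forcing the
      # offsets to sum to zero (otherwise a constant can be traded between sky and offsets).
      m = len(obs)
      A = np.zeros((m + 1, n_sky + n_pix))
      A[np.arange(m), sky_idx] = 1.0
      A[np.arange(m), n_sky + np.array(pix_idx)] = 1.0
      A[m, n_sky:] = 1.0
      b = np.r_[obs, 0.0]

      sol, *_ = np.linalg.lstsq(A, b, rcond=None)
      print("rms sky reconstruction error:", np.sqrt(np.mean((sol[:n_sky] - sky_true) ** 2)))

    The least-squares solution also yields formal covariances for the fitted sky values and offsets, which is the basis of the error estimates mentioned above.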

  15. A catalogue of photometric sequences (suppl. 3). [for astronomical photograph calibration

    NASA Technical Reports Server (NTRS)

    Argue, A. N.; Miller, E. W.; Warren, W. H., Jr.

    1983-01-01

    In stellar photometry studies, certain difficulties have arisen because of the lack of suitable photometric sequences for calibrating astronomical photographs. In order to eliminate these difficulties, active observers were contacted with a view to drawing up lists of suitable sequences. Replies from 63 authors offering data on 412 sequences were received. Most data were in the UBV system and had been obtained between 1968 and 1973. These were included in the original catalogue. The Catalogue represents a continuation of the earlier Photometric Catalogue compiled by Sharov and Jakimova (1970). A small supplement containing 69 sequences was issued in 1973. Supplement 2 was produced in 1976 and contained 320 sequences. Supplement 3 has now been compiled. It contains 1271 sequences.

  16. An Improved Weighted Partial Least Squares Method Coupled with Near Infrared Spectroscopy for Rapid Determination of Multiple Components and Anti-Oxidant Activity of Pu-Erh Tea.

    PubMed

    Liu, Ze; Xie, Hua-Lin; Chen, Lin; Huang, Jian-Hua

    2018-05-02

    Background: Pu-erh tea is a unique microbially fermented tea whose distinctive chemical constituents and activities are worthy of systematic study. Near infrared spectroscopy (NIR) coupled with suitable chemometrics approaches can rapidly and accurately quantify multiple compounds in samples. Methods: In this study, an improved weighted partial least squares (PLS) algorithm combined with near infrared spectroscopy (NIR) was used to construct a fast calibration model for determining four main components (tea polyphenols, tea polysaccharides, total flavonoids and theanine) and, further, the total antioxidant capacity of pu-erh tea. Results: The final correlation coefficients (R²) for tea polyphenol, tea polysaccharide, total flavonoid and theanine content and for total antioxidant capacity were 0.8288, 0.8403, 0.8415, 0.8537 and 0.8682, respectively. Conclusions: The current study provided a comprehensive assessment of four main ingredients and the activity of pu-erh tea, and demonstrated that NIR spectroscopy technology coupled with multivariate calibration analysis could be successfully applied to pu-erh tea quality assessment.

  17. Determination of the object surface function by structured light: application to the study of spinal deformities

    NASA Astrophysics Data System (ADS)

    Buendía, M.; Salvador, R.; Cibrián, R.; Laguia, M.; Sotoca, J. M.

    1999-01-01

    The projection of structured light is a technique frequently used to determine the surface shape of an object. In this paper, a new procedure is described that efficiently resolves the correspondence between the knots of the projected grid and those obtained on the object when the projection is made. The method is based on the use of three images of the projected grid. In two of them the grid is projected over a flat surface placed, respectively, before and behind the object; both images are used for calibration. In the third image the grid is projected over the object. It is not reliant on accurate determination of the camera and projector pair relative to the grid and object. Once the method is calibrated, we can obtain the surface function by just analysing the projected grid on the object. The procedure is especially suitable for the study of objects without discontinuities or large depth gradients. It can be employed for determining, in a non-invasive way, the patient's back surface function. Symmetry differences permit a quantitative diagnosis of spinal deformities such as scoliosis.

  18. Note: An improved calibration system with phase correction for electronic transformers with digital output.

    PubMed

    Cheng, Han-miao; Li, Hong-bin

    2015-08-01

    The existing electronic transformer calibration systems employing data acquisition cards cannot satisfy some practical applications, because the calibration systems have phase measurement errors when they work in the mode of receiving external synchronization signals. This paper proposes an improved calibration system scheme with phase correction to improve the phase measurement accuracy. We employ NI PCI-4474 to design a calibration system, and the system has the potential to receive external synchronization signals and reach extremely high accuracy classes. Accuracy verification has been carried out in the China Electric Power Research Institute, and results demonstrate that the system surpasses the accuracy class 0.05. Furthermore, this system has been used to test the harmonics measurement accuracy of all-fiber optical current transformers. In the same process, we have used an existing calibration system, and a comparison of the test results is presented. The system after improvement is suitable for the intended applications.

  19. Onsite Calibration of a Precision IPRT Based on Gallium and Gallium-Based Small-Size Eutectic Points

    NASA Astrophysics Data System (ADS)

    Sun, Jianping; Hao, Xiaopeng; Zeng, Fanchao; Zhang, Lin; Fang, Xinyun

    2017-04-01

    Onsite thermometer calibration with temperature scale transfer technology based on fixed points can effectively improve the level of industrial temperature measurement and calibration. The present work performs an onsite calibration of a precision industrial platinum resistance thermometer near room temperature. The calibration is based on a series of small-size eutectic points, including Ga-In (15.7°C), Ga-Sn (20.5°C), Ga-Zn (25.2°C), and a Ga fixed point (29.7°C), developed in a portable multi-point automatic realization apparatus. The temperature plateaus of the Ga-In, Ga-Sn, and Ga-Zn eutectic points and the Ga fixed point lasted for longer than 2 h, and their reproducibility was better than 5 mK. The device is suitable for calibrating non-detachable temperature sensors in advanced environmental laboratories and industrial fields.

  20. Energy calibration issues in nuclear resonant vibrational spectroscopy: observing small spectral shifts and making fast calibrations.

    PubMed

    Wang, Hongxin; Yoda, Yoshitaka; Dong, Weibing; Huang, Songping D

    2013-09-01

    The conventional energy calibration for nuclear resonant vibrational spectroscopy (NRVS) is usually time-consuming. Meanwhile, taking NRVS samples out of the cryostat increases the chance of sample damage, which makes it impossible to carry out an energy calibration during one NRVS measurement. In this study, by manipulating the 14.4 keV beam through the main measurement chamber without removing the NRVS sample, two alternative calibration procedures have been proposed and established: (i) an in situ calibration procedure, which measures the main NRVS sample at stage A and the calibration sample at stage B simultaneously, and calibrates the energies for observing extremely small spectral shifts; for example, the 0.3 meV energy shift between the 100%-(57)Fe-enriched [Fe4S4Cl4](=) and 10%-(57)Fe and 90%-(54)Fe labeled [Fe4S4Cl4](=) has been well resolved; (ii) a quick-switching energy calibration procedure, which reduces each calibration time from 3-4 h to about 30 min. Although the quick-switching calibration is not in situ, it is suitable for normal NRVS measurements.

  1. An innovative iterative thresholding algorithm for tumour segmentation and volumetric quantification on SPECT images: Monte Carlo-based methodology and validation.

    PubMed

    Pacilio, M; Basile, C; Shcherbinin, S; Caselli, F; Ventroni, G; Aragno, D; Mango, L; Santini, E

    2011-06-01

    Positron emission tomography (PET) and single-photon emission computed tomography (SPECT) imaging play an important role in the segmentation of functioning parts of organs or tumours, but an accurate and reproducible delineation is still a challenging task. In this work, an innovative iterative thresholding method for tumour segmentation has been proposed and implemented for a SPECT system. This method, which is based on experimental threshold-volume calibrations, also implements the recovery coefficients (RC) of the imaging system, so it has been called the recovering iterative thresholding method (RIThM). The possibility of employing Monte Carlo (MC) simulations for system calibration was also investigated. The RIThM is an iterative algorithm coded in MATLAB: after an initial rough estimate of the volume of interest, the following calculations are repeated: (i) the corresponding source-to-background ratio (SBR) is measured and corrected by means of the RC curve; (ii) the threshold corresponding to the amended SBR value and the volume estimate is then found using threshold-volume data; (iii) a new volume estimate is obtained by image thresholding. The process continues until convergence. The RIThM was implemented for an Infinia Hawkeye 4 (GE Healthcare) SPECT/CT system, using a Jaszczak phantom and several test objects. Two MC codes were tested to simulate the calibration images: SIMIND and SimSet. For validation, test images consisting of hot spheres and some anatomical structures of the Zubal head phantom were simulated with the SIMIND code. Additional test objects (flasks and vials) were also imaged experimentally. Finally, the RIThM was applied to evaluate three cases of brain metastases and two cases of high-grade gliomas. Comparing the experimental thresholds with those obtained by MC simulations, a maximum difference of about 4% was found, within the errors (±2% and ±5% for volumes ≥5 ml and <5 ml, respectively). Also for the RC data, the comparison showed differences (up to 8%) within the assigned error (±6%). An ANOVA test demonstrated that the calibration results (in terms of thresholds or RCs at various volumes) obtained by MC simulations were indistinguishable from those obtained experimentally. The accuracy in volume determination for the simulated hot spheres was between -9% and 15% in the range 4-270 ml, whereas for volumes less than 4 ml (in the range 1-3 ml) the difference increased abruptly, reaching values greater than 100%. For the Zubal head phantom, errors ranged between 9% and 18%. For the experimental test images, the accuracy level was within ±10% for volumes in the range 20-110 ml. The preliminary application to patients demonstrated the suitability of the method in a clinical setting. The MC-guided delineation of tumour volume may reduce the acquisition time required for the experimental calibration. Analysis of images of several simulated and experimental test objects, the Zubal head phantom and clinical cases demonstrated the robustness, suitability, accuracy and speed of the proposed method. Nevertheless, studies concerning tumours of irregular shape and/or nonuniform distribution of the background activity are still in progress.
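
    The iterative loop (i)-(iii) described above can be illustrated with a short sketch; here the recovery-coefficient curve, the threshold-volume calibration and the starting threshold are hypothetical placeholders standing in for the authors' experimental or Monte Carlo calibration data.

    import numpy as np

    def rc_curve(volume_ml):
        # hypothetical recovery-coefficient calibration (approaches 1 for large volumes)
        return volume_ml / (volume_ml + 2.0)

    def threshold_curve(sbr, volume_ml):
        # hypothetical threshold-volume calibration, as a fraction of the maximum voxel value
        return 0.35 + 0.25 / sbr + 0.5 / (volume_ml + 1.0)

    def segment(image, threshold_frac, voxel_ml):
        mask = image >= threshold_frac * image.max()
        return mask, mask.sum() * voxel_ml

    def rithm(image, background, voxel_ml, thr0=0.4, tol=0.01, max_iter=50):
        mask, volume = segment(image, thr0, voxel_ml)           # initial rough estimate
        for _ in range(max_iter):
            sbr = image[mask].mean() / background               # (i) measure the SBR ...
            sbr_corrected = sbr / rc_curve(volume)              #     ... and amend it with the RC curve
            thr = threshold_curve(sbr_corrected, volume)        # (ii) threshold from threshold-volume data
            mask, new_volume = segment(image, thr, voxel_ml)    # (iii) new volume estimate by thresholding
            if abs(new_volume - volume) < tol * max(volume, 1e-6):
                return new_volume
            volume = new_volume
        return volume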

  2. Multi-camera digital image correlation method with distributed fields of view

    NASA Astrophysics Data System (ADS)

    Malowany, Krzysztof; Malesa, Marcin; Kowaluk, Tomasz; Kujawinska, Malgorzata

    2017-11-01

    A multi-camera digital image correlation (DIC) method and system for measurements of large engineering objects with distributed, non-overlapping areas of interest are described. The data obtained with individual 3D DIC systems are stitched by an algorithm which utilizes the positions of fiducial markers determined simultaneously by the Stereo-DIC units and a laser tracker. The proposed calibration method enables reliable determination of the transformations between the local (3D DIC) and global coordinate systems. The applicability of the method was proven during in-situ measurements of a hall made of arch-shaped (18 m span) self-supporting metal plates. The proposed method is highly recommended for 3D measurements of the shape and displacements of large and complex engineering objects performed from multiple directions, and it provides data of suitable accuracy for further advanced structural integrity analysis of such objects.
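
    As an illustration of the stitching step described above, the rigid transformation between a local Stereo-DIC coordinate system and the global (laser tracker) frame can be estimated from corresponding fiducial-marker positions with a standard SVD-based (Kabsch/Procrustes) fit. This is a generic sketch of that step, not the authors' specific calibration algorithm.

    import numpy as np

    def rigid_transform(local_pts, global_pts):
        """Least-squares R, t with global ≈ R @ local + t; inputs are (N, 3) arrays
        of corresponding fiducial-marker coordinates in the two frames."""
        c_local, c_global = local_pts.mean(axis=0), global_pts.mean(axis=0)
        H = (local_pts - c_local).T @ (global_pts - c_global)      # cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))                     # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = c_global - R @ c_local
        return R, t

    # Points measured by one 3D DIC unit can then be mapped into the global frame with
    # stitched = (R @ local_points.T).T + t before the distributed fields are merged.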

  3. Development of a commercially viable piezoelectric force sensor system for static force measurement

    NASA Astrophysics Data System (ADS)

    Liu, Jun; Luo, Xinwei; Liu, Jingcheng; Li, Min; Qin, Lan

    2017-09-01

    A compensation method for measuring static force with a commercial piezoelectric force sensor is proposed to disprove the theory that piezoelectric sensors and generators can only operate under dynamic force. After studying the model of the piezoelectric force sensor measurement system, the principle of static force measurement using a piezoelectric material or piezoelectric force sensor is analyzed. Then, the distribution law of the decay time constant of the measurement system and the variation law of the measurement system’s output are studied, and a compensation method based on a time interval threshold Δt and an attenuation threshold Δu_th is proposed. By calibrating the system and considering the influences of the environment and the hardware, a suitable Δu_th value is determined, and the system’s output attenuation is compensated based on the Δu_th value to realize the measurement. Finally, a static force measurement system with a piezoelectric force sensor is developed based on the compensation method. The experimental results confirm the successful development of a simple compensation method for static force measurement with a commercial piezoelectric force sensor. In addition, it is established that, contrary to the current perception, a piezoelectric force sensor system can be used to measure static force through further calibration.
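
    One possible reading of the threshold-based compensation described above is sketched below: whenever the sensor output has decayed by more than the attenuation threshold within the chosen time interval, the lost amount is added back to a running compensation term. The decay handling and parameter handling here are illustrative assumptions; the paper's calibration procedure for choosing Δu_th is not reproduced.

    def compensate(samples, dt, du_th):
        """samples: list of (time, raw_output) pairs in increasing time order."""
        compensation = 0.0
        corrected = []
        ref_time, ref_value = samples[0]
        for t, u in samples:
            decay = ref_value - u                      # attenuation since the reference point
            if t - ref_time >= dt and decay >= du_th:
                compensation += decay                  # restore the decayed portion
                ref_time, ref_value = t, u             # open a new reference window
            corrected.append((t, u + compensation))
        return corrected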

  4. Microhotplate Temperature Sensor Calibration and BIST

    PubMed Central

    Afridi, M.; Montgomery, C.; Cooper-Balis, E.; Semancik, S.; Kreider, K. G.; Geist, J.

    2011-01-01

    In this paper we describe a novel long-term microhotplate temperature sensor calibration technique suitable for Built-In Self Test (BIST). The microhotplate thermal resistance (thermal efficiency) and the thermal voltage from an integrated platinum-rhodium thermocouple were calibrated against a freshly calibrated four-wire polysilicon microhotplate-heater temperature sensor (heater) that is not stable over long periods of time when exposed to higher temperatures. To stress the microhotplate, its temperature was raised to around 400 °C and held there for days. The heater was then recalibrated as a temperature sensor, and microhotplate temperature measurements were made based on the fresh calibration of the heater, the first calibration of the heater, the microhotplate thermal resistance, and the thermocouple voltage. This procedure was repeated 10 times over a period of 80 days. The results show that the heater calibration drifted substantially during the period of the test while the microhotplate thermal resistance and the thermocouple voltage remained stable to within about plus or minus 1 °C over the same period. Therefore, the combination of a microhotplate heater-temperature sensor and either the microhotplate thermal resistance or an integrated thin film platinum-rhodium thermocouple can be used to provide a stable, calibrated, microhotplate-temperature sensor, and the combination of the three sensors is suitable for implementing BIST functionality. Alternatively, if a stable microhotplate-heater temperature sensor is available, such as a properly annealed platinum heater-temperature sensor, then the thermal resistance of the microhotplate and the electrical resistance of the platinum heater will be sufficient to implement BIST. It is also shown that aluminum- and polysilicon-based temperature sensors, which are not stable enough for measuring high microhotplate temperatures (>220 °C) without impractically frequent recalibration, can be used to measure the silicon substrate temperature if never exposed to temperatures above about 220 °C. PMID:26989603

  5. Optimization, evaluation and calibration of a cross-strip DOI detector

    NASA Astrophysics Data System (ADS)

    Schmidt, F. P.; Kolb, A.; Pichler, B. J.

    2018-02-01

    This study depicts the evaluation of a SiPM detector with depth of interaction (DOI) capability via a dual-sided readout that is suitable for high-resolution positron emission tomography and magnetic resonance (PET/MR) imaging. Two different 12  ×  12 pixelated LSO scintillator arrays with a crystal pitch of 1.60 mm are examined. One array is 20 mm-long with a crystal separation by the specular reflector Vikuiti enhanced specular reflector (ESR), and the other one is 18 mm-long and separated by the diffuse reflector Lumirror E60 (E60). An improvement in energy resolution from 22.6% to 15.5% for the scintillator array with the E60 reflector is achieved by taking a nonlinear light collection correction into account. The results are FWHM energy resolutions of 14.0% and 15.5%, average FWHM DOI resolutions of 2.96 mm and 1.83 mm, and FWHM coincidence resolving times of 1.09 ns and 1.48 ns for the scintillator array with the ESR and that with the E60 reflector, respectively. The measured DOI signal ratios need to be assigned to an interaction depth inside the scintillator crystal. A linear and a nonlinear method, using the intrinsic scintillator radiation from lutetium, are implemented for an easy to apply calibration and are compared to the conventional method, which exploits a setup with an externally collimated radiation beam. The deviation between the DOI functions of the linear or nonlinear method and the conventional method is determined. The resulting average difference in DOI position is 0.67 mm and 0.45 mm for the nonlinear calibration method for the scintillator arrays with the ESR and the E60 reflector, respectively, whereas the linear calibration method yields 0.51 mm and 0.32 mm, respectively, and, owing to its simplicity, is also applicable in assembled detector systems.

  6. Optimization, evaluation and calibration of a cross-strip DOI detector.

    PubMed

    Schmidt, F P; Kolb, A; Pichler, B J

    2018-02-20

    This study depicts the evaluation of a SiPM detector with depth of interaction (DOI) capability via a dual-sided readout that is suitable for high-resolution positron emission tomography and magnetic resonance (PET/MR) imaging. Two different 12  ×  12 pixelated LSO scintillator arrays with a crystal pitch of 1.60 mm are examined. One array is 20 mm-long with a crystal separation by the specular reflector Vikuiti enhanced specular reflector (ESR), and the other one is 18 mm-long and separated by the diffuse reflector Lumirror E60 (E60). An improvement in energy resolution from 22.6% to 15.5% for the scintillator array with the E60 reflector is achieved by taking a nonlinear light collection correction into account. The results are FWHM energy resolutions of 14.0% and 15.5%, average FWHM DOI resolutions of 2.96 mm and 1.83 mm, and FWHM coincidence resolving times of 1.09 ns and 1.48 ns for the scintillator array with the ESR and that with the E60 reflector, respectively. The measured DOI signal ratios need to be assigned to an interaction depth inside the scintillator crystal. A linear and a nonlinear method, using the intrinsic scintillator radiation from lutetium, are implemented for an easy to apply calibration and are compared to the conventional method, which exploits a setup with an externally collimated radiation beam. The deviation between the DOI functions of the linear or nonlinear method and the conventional method is determined. The resulting average difference in DOI position is 0.67 mm and 0.45 mm for the nonlinear calibration method for the scintillator arrays with the ESR and the E60 reflector, respectively, whereas the linear calibration method yields 0.51 mm and 0.32 mm, respectively, and, owing to its simplicity, is also applicable in assembled detector systems.

  7. Design of system calibration for effective imaging

    NASA Astrophysics Data System (ADS)

    Varaprasad Babu, G.; Rao, K. M. M.

    2006-12-01

    A CCD-based characterization setup comprising a light source, a CCD linear array, electronics for signal conditioning/amplification and a PC interface has been developed to generate images at varying densities and at multiple view angles. This arrangement is used to simulate and evaluate images produced by the super-resolution technique with multiple overlaps and yaw-rotated images at different view angles. The setup also generates images at different densities to analyze the response of the detector port-wise. The light intensity produced by the source needs to be calibrated for proper imaging by the highly sensitive CCD detector over the FOV. One approach is to design a complex integrating-sphere arrangement, which is costly for such applications. Another approach is to provide a suitable intensity feedback correction wherein the current through the lamp is controlled in a closed loop; this method is generally used where the light source is a point source. The third method is to control the exposure time inversely to the lamp variations where the lamp intensity cannot be controlled; here the light intensity at the start of each line is sampled and the correction factor is applied to the full line. The fourth method is to provide correction through a Look Up Table, where the responses of all detectors are normalized through a digital transfer function. The fifth method is a light-line arrangement in which light from multiple fiber-optic cables, derived from a single source, is arranged in a line; this is generally applicable and economical for small widths. In our application, a new method is proposed wherein an inverse multi-density filter is designed, which provides an effective calibration over the full swath even at low light intensities: the light intensity along the length is measured, an inverse density is computed, and a correction filter is generated and implemented in the CCD-based characterization setup. This paper describes certain novel techniques for the design and implementation of system calibration for effective imaging, to produce better-quality data products, especially when handling high-resolution data.
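
    The Look Up Table approach (the fourth method above) amounts to a per-detector gain normalization against a reference exposure; a minimal sketch follows, assuming a flat-field exposure and a dark frame are available. The array sizes, signal levels and helper names are illustrative.

    import numpy as np

    def build_lut(flat_field, dark_frame, target_level=None):
        """Per-detector gain table that equalizes the response of a linear CCD array."""
        response = flat_field - dark_frame              # remove the fixed offset
        if target_level is None:
            target_level = response.mean()              # normalize to the average detector
        return target_level / response                  # per-detector gain correction

    def apply_lut(raw_line, dark_frame, lut):
        return (raw_line - dark_frame) * lut

    rng = np.random.default_rng(1)
    dark = rng.normal(100.0, 2.0, 512)                  # 512-element linear array, synthetic data
    gains = rng.normal(1.0, 0.05, 512)                  # detector-to-detector non-uniformity
    lut = build_lut(dark + 2000.0 * gains, dark)
    corrected = apply_lut(dark + 1500.0 * gains, dark, lut)   # now ~1500 counts on every detector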

  8. The use of the Sonoran Desert as a pseudo-invariant site for optical sensor cross-calibration and long-term stability monitoring

    USGS Publications Warehouse

    Angal, A.; Chander, Gyanesh; Choi, Taeyoung; Wu, Aisheng; Xiong, Xiaoxiong

    2010-01-01

    The Sonoran Desert is a large, flat, pseudo-invariant site near the United States-Mexico border. It is one of the largest and hottest deserts in North America, with an area of 311,000 square km. This site is particularly suitable for calibration purposes because of its high spatial and spectral uniformity and reasonable temporal stability. This study uses measurements from four different sensors, Terra Moderate Resolution Imaging Spectroradiometer (MODIS), Landsat 7 (L7) Enhanced Thematic Mapper Plus (ETM+), Aqua MODIS, and Landsat 5 (L5) Thematic Mapper (TM), to assess the suitability of this site for long-term stability monitoring and to evaluate the “radiometric calibration differences” between spectrally matching bands of all four sensors. In general, the drift in the top-of-atmosphere (TOA) reflectance of each sensor over a span of nine years is within the specified calibration uncertainties. Monthly precipitation measurements of the Sonoran Desert region were obtained from the Global Historical Climatology Network (GHCN), and their effects on the retrieved TOA reflectances were evaluated. To account for the combined uncertainties in the TOA reflectance due to the surface and atmospheric Bi-directional Reflectance Distribution Function (BRDF), a semi-empirical BRDF model has been adopted to monitor and reduce the impact of illumination geometry differences on the retrieved TOA reflectances. To evaluate calibration differences between the MODIS and Landsat sensors, correction for spectral response differences using a hyperspectral sensor is also demonstrated.

  9. Characterization and calibration of a combined laser Raman, fluorescence and coherent Raman spectrometer

    NASA Astrophysics Data System (ADS)

    Lawhead, Carlos; Cooper, Nathan; Anderson, Josiah; Shiver, Tegan; Ujj, Laszlo

    2014-03-01

    Electronic and vibrational spectroscopy are extremely important tools used in material characterization; therefore, a table-top laser spectrometer system was built in the spectroscopy lab of the UWF physics department. The system is based on an injection-seeded nanosecond Nd:YAG laser. The second and third harmonics of the fundamental 1064 nm radiation are used to generate Raman and fluorescence spectra, measured with an MS260i imaging spectrograph equipped with a CCD detector cooled to -85 °C in order to minimize the dark background noise. The wavelength calibration was performed with the emission spectra of standard gas-discharge lamps. A spectral sensitivity calibration is needed before any spectra are recorded because of the table-top nature of the instrument. A variety of intensity standards were investigated to find standards suitable for our table-top setup that do not change the geometry of the system. High-quality measurements of Raman standards were analyzed to test the spectral corrections. Background fluorescence removal methods were used to improve the Raman signal intensity reading on highly fluorescent molecules. This instrument will be used to measure vibrational and electronic spectra of biological molecules.

  10. On-Ground Processing of Yaogan-24 Remote Sensing Satellite Attitude Data and Verification Using Geometric Field Calibration

    PubMed Central

    Wang, Mi; Fan, Chengcheng; Yang, Bo; Jin, Shuying; Pan, Jun

    2016-01-01

    Satellite attitude accuracy is an important factor affecting the geometric processing accuracy of high-resolution optical satellite imagery. To address the problem that the accuracy of the Yaogan-24 remote sensing satellite’s on-board attitude data processing is not high enough to meet its image geometry processing requirements, we developed an approach involving on-ground attitude data processing and verification against the digital orthophoto (DOM) and digital elevation model (DEM) of a geometric calibration field. The approach focuses on three modules: on-ground processing based on a bidirectional filter, overall weighted smoothing and fitting, and evaluation in the geometric calibration field. Our experimental results demonstrate that the proposed on-ground processing method is both robust and feasible, which ensures the reliability of the observation data quality and the convergence and stability of the parameter estimation model. In addition, both the Euler angle and quaternion representations can be used to build a mathematical fitting model, while the orthogonal polynomial fitting model is more suitable for modeling the attitude parameters. Furthermore, compared to the image geometric processing results based on on-board attitude data, the uncontrolled and relative geometric positioning accuracy of the imagery can be improved by about 50%. PMID:27483287

  11. Determination of Stark parameters by cross-calibration in a multi-element laser-induced plasma

    NASA Astrophysics Data System (ADS)

    Liu, Hao; Truscott, Benjamin S.; Ashfold, Michael N. R.

    2016-05-01

    We illustrate a Stark broadening analysis of the electron density Ne and temperature Te in a laser-induced plasma (LIP), using a model free of assumptions regarding local thermodynamic equilibrium (LTE). The method relies on Stark parameters determined also without assuming LTE, which are often unknown and unavailable in the literature. Here, we demonstrate that the necessary values can be obtained in situ by cross-calibration between the spectral lines of different charge states, and even different elements, given determinations of Ne and Te based on appropriate parameters for at least one observed transition. This approach enables essentially free choice between species on which to base the analysis, extending the range over which these properties can be measured and giving improved access to low-density plasmas out of LTE. Because of the availability of suitable tabulated values for several charge states of both Si and C, the example of a SiC LIP is taken to illustrate the consistency and accuracy of the procedure. The cross-calibrated Stark parameters are at least as reliable as values obtained by other means, offering a straightforward route to extending the literature in this area.

  12. Predictive modeling and mapping of Malayan Sun Bear (Helarctos malayanus) distribution using maximum entropy.

    PubMed

    Nazeri, Mona; Jusoff, Kamaruzaman; Madani, Nima; Mahmud, Ahmad Rodzi; Bahman, Abdul Rani; Kumar, Lalit

    2012-01-01

    One of the available tools for mapping the geographical distribution and potentially suitable habitats of species is species distribution modelling. These techniques are very helpful for finding poorly known distributions of species in poorly sampled areas, such as the tropics. Maximum Entropy (MaxEnt) is a recently developed modeling method that can be successfully calibrated using a relatively small number of records. In this research, the MaxEnt model was applied to describe the distribution and identify the key factors shaping the potential distribution of the vulnerable Malayan Sun Bear (Helarctos malayanus) in one of its main remaining habitats in Peninsular Malaysia. The MaxEnt results showed that even though Malayan sun bear habitat is tied to tropical evergreen forests, the species lives within a marginal range of the bio-climatic variables. On the other hand, the current protected area network within Peninsular Malaysia does not cover most of the sun bear's potentially suitable habitats. Assuming that the predicted suitability map reflects the sun bear's actual distribution, future climate change, forest degradation and illegal hunting could potentially severely affect the sun bear's population.

  13. Predictive Modeling and Mapping of Malayan Sun Bear (Helarctos malayanus) Distribution Using Maximum Entropy

    PubMed Central

    Nazeri, Mona; Jusoff, Kamaruzaman; Madani, Nima; Mahmud, Ahmad Rodzi; Bahman, Abdul Rani; Kumar, Lalit

    2012-01-01

    One of the available tools for mapping the geographical distribution and potentially suitable habitats of species is species distribution modelling. These techniques are very helpful for finding poorly known distributions of species in poorly sampled areas, such as the tropics. Maximum Entropy (MaxEnt) is a recently developed modeling method that can be successfully calibrated using a relatively small number of records. In this research, the MaxEnt model was applied to describe the distribution and identify the key factors shaping the potential distribution of the vulnerable Malayan Sun Bear (Helarctos malayanus) in one of its main remaining habitats in Peninsular Malaysia. The MaxEnt results showed that even though Malayan sun bear habitat is tied to tropical evergreen forests, the species lives within a marginal range of the bio-climatic variables. On the other hand, the current protected area network within Peninsular Malaysia does not cover most of the sun bear’s potentially suitable habitats. Assuming that the predicted suitability map reflects the sun bear’s actual distribution, future climate change, forest degradation and illegal hunting could potentially severely affect the sun bear’s population. PMID:23110182

  14. Climate, soil or both? Which variables are better predictors of the distributions of Australian shrub species?

    PubMed Central

    Esperón-Rodríguez, Manuel; Baumgartner, John B.; Beaumont, Linda J.

    2017-01-01

    Background Shrubs play a key role in biogeochemical cycles, prevent soil and water erosion, provide forage for livestock, and are a source of food, wood and non-wood products. However, despite their ecological and societal importance, the influence of different environmental variables on shrub distributions remains unclear. We evaluated the influence of climate and soil characteristics, and whether including soil variables improved the performance of a species distribution model (SDM), Maxent. Methods This study assessed variation in predictions of environmental suitability for 29 Australian shrub species (representing dominant members of six shrubland classes) due to the use of alternative sets of predictor variables. Models were calibrated with (1) climate variables only, (2) climate and soil variables, and (3) soil variables only. Results The predictive power of SDMs differed substantially across species, but generally models calibrated with both climate and soil data performed better than those calibrated only with climate variables. Models calibrated solely with soil variables were the least accurate. We found regional differences in potential shrub species richness across Australia due to the use of different sets of variables. Conclusions Our study provides evidence that predicted patterns of species richness may be sensitive to the choice of predictor set when multiple, plausible alternatives exist, and demonstrates the importance of considering soil properties when modeling availability of habitat for plants. PMID:28652933

  15. Study of Lever-Arm Effect Using Embedded Photogrammetry and On-Board GPS Receiver on Uav for Metrological Mapping Purpose and Proposal of a Free Ground Measurements Calibration Procedure

    NASA Astrophysics Data System (ADS)

    Daakir, M.; Pierrot-Deseilligny, M.; Bosser, P.; Pichard, F.; Thom, C.; Rabot, Y.

    2016-03-01

    Nowadays, Unmanned Aerial Vehicle (UAV) on-board photogrammetry is growing significantly owing to the democratization of drone use in the civilian sector and to changes in the regulations governing the inclusion of UAVs in the airspace, which have become favourable to the development of professional activities. Fields of application of photogrammetry are diverse, for instance architecture, geology, archaeology, mapping and industrial metrology. Our research concerns the latter area. Vinci-Construction-Terrassement is a private company specialized in public earthworks that uses UAVs for metrology applications. This article deals with the maximum accuracy one can achieve with a coupled camera and GPS receiver system for direct georeferencing of Digital Surface Models (DSMs) without relying on Ground Control Point (GCP) measurements, and focuses specifically on the lever-arm calibration. The proposed calibration method consists of two steps. The first step involves the proper calibration of each sensor, i.e. determining the position of the optical center of the camera and of the GPS antenna phase center in a local coordinate system attached to the sensor. The second step is a 3D modelling of the UAV with its embedded sensors through a photogrammetric acquisition; processing this acquisition allows the lever-arm offset to be determined without using GCPs.

  16. An additional study and implementation of tone calibrated technique of modulation

    NASA Technical Reports Server (NTRS)

    Rafferty, W.; Bechtel, L. K.; Lay, N. E.

    1985-01-01

    The Tone Calibrated Technique (TCT) was shown to be theoretically free from an error floor, and is only limited, in practice, by implementation constraints. The concept of the TCT transmission scheme, along with a baseband implementation of a suitable demodulator, is introduced. Two techniques for the generation of the TCT signal are considered: a Manchester source encoding scheme (MTCT) and a subcarrier based technique (STCT). The results of the TCT link computer simulation are summarized. The hardware implementation of the MTCT system is addressed and the digital signal processing design considerations involved in satisfying the modulator/demodulator requirements are outlined. The program findings are discussed and future directions are suggested based on conclusions made regarding the suitability of the TCT system for the transmission channel presently under consideration.

  17. Method to improve the blade tip-timing accuracy of fiber bundle sensor under varying tip clearance

    NASA Astrophysics Data System (ADS)

    Duan, Fajie; Zhang, Jilong; Jiang, Jiajia; Guo, Haotian; Ye, Dechao

    2016-01-01

    Blade vibration measurement based on the blade tip-timing method has become an industry-standard procedure. Fiber bundle sensors are widely used for tip-timing measurement. However, the variation of clearance between the sensor and the blade will bring a tip-timing error to fiber bundle sensors due to the change in signal amplitude. This article presents methods based on software and hardware to reduce the error caused by the tip clearance change. The software method utilizes both the rising and falling edges of the tip-timing signal to determine the blade arrival time, and a calibration process suitable for asymmetric tip-timing signals is presented. The hardware method uses an automatic gain control circuit to stabilize the signal amplitude. Experiments are conducted and the results prove that both methods can effectively reduce the impact of tip clearance variation on the blade tip-timing and improve the accuracy of measurements.
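
    The software idea described above can be sketched briefly: the blade arrival time is taken as the midpoint between the rising- and falling-edge crossings of the tip-timing pulse, which (for a roughly symmetric pulse and a fixed trigger level) is far less sensitive to the amplitude changes caused by varying tip clearance than a single-edge trigger. The pulse shape and trigger level below are illustrative, and the paper's calibration for asymmetric signals is not reproduced.

    import numpy as np

    def edge_crossing(t, signal, level, rising=True):
        """Linearly interpolated time at which the signal crosses 'level'."""
        s = signal - level
        if rising:
            idx = np.where((s[:-1] < 0) & (s[1:] >= 0))[0]
        else:
            idx = np.where((s[:-1] >= 0) & (s[1:] < 0))[0]
        i = idx[0]
        return t[i] + (-s[i] / (s[i + 1] - s[i])) * (t[i + 1] - t[i])

    def arrival_time(t, pulse, level):
        return 0.5 * (edge_crossing(t, pulse, level, True) +
                      edge_crossing(t, pulse, level, False))

    t = np.linspace(0.0, 1.0, 2000)
    pulse = np.exp(-((t - 0.5) / 0.05) ** 2)             # synthetic, symmetric tip-timing pulse
    print(edge_crossing(t, pulse, 0.2), edge_crossing(t, 0.5 * pulse, 0.2))  # single edge shifts with amplitude
    print(arrival_time(t, pulse, 0.2), arrival_time(t, 0.5 * pulse, 0.2))    # midpoint does not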

  18. Habitat modeling for brown trout population in alpine region of Slovenia with focus on determination of preference functions, fuzzy rules and fuzzy sets

    NASA Astrophysics Data System (ADS)

    Santl, Saso; Carf, Masa; Preseren, Tanja; Jenic, Aljaz

    2013-04-01

    Water withdrawals, and the consequent reduction of discharge in river streams for different water uses (hydro power, irrigation, etc.), usually impoverish habitat suitability for the naturally present river fish fauna. In Slovenia, the reduction of suitable habitats resulting from water abstractions frequently impacts local brown trout (Salmo trutta) populations. This is the reason for establishing habitat modeling which can qualitatively and quantitatively support decision making in the determination of environmental flow and other mitigation measures. The paper introduces the applied methodology for habitat modeling, where input data preparation and elaboration with the required accuracy has to be considered. For model development, four (4) representative and heterogeneous sampling sites were chosen. Two (2) sampling sections were located within sections with small hydropower plants and were considered as sections affected by water abstraction. The other two (2) sampling sections were chosen where there are no existing water abstractions. Precise bathymetric mapping of the chosen river sections was performed. Topographic data and series of discharge and water level measurements enabled the establishment of calibrated hydraulic models, which provide data on water velocities and depths for the analyzed discharges. Brief field measurements were also performed to gather the required data on dominant and subdominant substrate size and cover type. Since the accuracy of fish distribution at small scale is very important for habitat modeling, a fish sampling method had to be selected and modified for the existing river microhabitats. The brown trout specimens' locations were collected with two (2) different sampling methods: riverbank observation, which is suitable for adult fish in pools, and electrofishing for locating small fish and fish in riffles or hiding in cover. Ecological and habitat requirements of fish species vary between fish populations as well as between eco- and hydro-morphological types of streams. Therefore, if habitat modeling for brown trout in Slovenia is to be applied, it is necessary to determine the preference requirements of the locally present brown trout populations. For efficient determination of the applied preference functions and the linked fuzzy sets/rules, besides expert determination, calibration against field sampling must also be performed. After this final step, the model is ready to support decision making in the determination of environmental flow and other mitigation measures.

  19. Sugeno-Fuzzy Expert System Modeling for Quality Prediction of Non-Contact Machining Process

    NASA Astrophysics Data System (ADS)

    Sivaraos; Khalim, A. Z.; Salleh, M. S.; Sivakumar, D.; Kadirgama, K.

    2018-03-01

    Modeling can be categorised into four main domains: prediction, optimisation, estimation and calibration. In this paper, the Takagi-Sugeno-Kang (TSK) fuzzy logic method is examined as a prediction modelling method to investigate the taper quality of laser lathing, which seeks to replace traditional lathe machines with 3D laser lathing in order to achieve the desired cylindrical shape of stock materials. Three design parameters were selected: feed rate, cutting speed and depth of cut. A total of twenty-four experiments were conducted, comprising eight sequential runs replicated three times. The TSK fuzzy predictive model achieved an accuracy rate of 99%, which suggests that the model is a suitable and practical method for the non-linear laser lathing process.

  20. Ultrasonic Blood Flow Measurement in Haemodialysis

    PubMed Central

    Sampson, D.; Papadimitriou, M.; Kulatilake, A. E.

    1970-01-01

    A 5-megacycle Doppler flow meter, calibrated in-vitro, was found to give a linear response to blood flow in the ranges commonly encountered in haemodialysis. With this, blood flow through artificial kidneys could be measured simply and with a clinically acceptable error. The method is safe, as blood lines do not have to be punctured or disconnected and hence there is no risk of introducing infection. Besides its value as a research tool the flow meter is useful in evaluating new artificial kidneys. Suitably modified it could form the basis of an arterial flow alarm system. PMID:5416812

  1. Advances in measuring techniques for turbine cooling test rigs

    NASA Technical Reports Server (NTRS)

    Pollack, F. G.

    1972-01-01

    Surface temperature distribution measurements for turbine vanes and blades were obtained by measuring the infrared energy emitted by the airfoil. The IR distribution can be related to temperature distribution by suitable calibration methods and the data presented in the form of isotherm maps. Both IR photographic and real time electro-optical methods are being investigated. The methods can be adapted to rotating as well as stationary targets, and both methods can utilize computer processing. Pressure measurements on rotating components are made with a rotating system incorporating 10 miniature transducers. A mercury wetted slip ring assembly was used to supply excitation power and as a signal transfer device. The system was successfully tested up to speeds of 9000 rpm and is now being adapted to measure rotating blade airflow quantities in a spin rig and a research engine.

  2. Development of a fast screening and confirmatory method by liquid chromatography-quadrupole-time-of-flight mass spectrometry for glucuronide-conjugated methyltestosterone metabolite in tilapia.

    PubMed

    Amarasinghe, Kande; Chu, Pak-Sin; Evans, Eric; Reimschuessel, Renate; Hasbrouck, Nicholas; Jayasuriya, Hiranthi

    2012-05-23

    This paper describes the development of a fast method to screen and confirm methyltestosterone 17-O-glucuronide (MT-glu) in tilapia bile. The method consists of solid-phase extraction (SPE) followed by high-performance liquid chromatography-mass spectrometry. The system used was an Agilent 6530 Q-TOF with an Agilent Jet stream electrospray ionization interface. The glucuronide detected in the bile was characterized as MT-glu by comparison with a chemically synthesized standard. MT-glu was detected in bile for up to 7 days after dosing. Semiquantification was done with matrix-matched calibration curves, because MT-glu showed signal suppression due to matrix effects. This method provides a suitable tool to monitor the illegal use of methyltestosterone in tilapia culture.

  3. Calibration free beam hardening correction for cardiac CT perfusion imaging

    NASA Astrophysics Data System (ADS)

    Levi, Jacob; Fahmi, Rachid; Eck, Brendan L.; Fares, Anas; Wu, Hao; Vembar, Mani; Dhanantwari, Amar; Bezerra, Hiram G.; Wilson, David L.

    2016-03-01

    Myocardial perfusion imaging using CT (MPI-CT) and coronary CTA have the potential to make CT an ideal noninvasive gate-keeper for invasive coronary angiography. However, beam hardening artifacts (BHA) prevent accurate blood flow calculation in MPI-CT. BH correction (BHC) methods require either energy-sensitive CT, which is not widely available, or, typically, a calibration-based method. We developed a calibration-free, automatic BHC (ABHC) method suitable for MPI-CT. The algorithm works with any BHC method and iteratively determines the model parameters using a proposed BHA-specific cost function. In this work, we use the polynomial BHC extended to three materials. The image is segmented into soft tissue, bone, and iodine images, based on mean HU and temporal enhancement. Forward projections of the bone and iodine images are obtained, and in each iteration a polynomial correction is applied. The corrections are then back-projected and combined to obtain the current iteration's BHC image. This process is iterated until the cost is minimized. We evaluate the algorithm on simulated and physical phantom images and on preclinical MPI-CT data. The scans were obtained on a prototype spectral detector CT (SDCT) scanner (Philips Healthcare). Mono-energetic reconstructed images were used as the reference. In the simulated phantom, BH streak artifacts were reduced from 12 ± 2 HU to 1 ± 1 HU and cupping was reduced by 81%. Similarly, in the physical phantom, BH streak artifacts were reduced from 48 ± 6 HU to 1 ± 5 HU and cupping was reduced by 86%. In preclinical MPI-CT images, BHA was reduced from 28 ± 6 HU to less than 4 ± 4 HU at peak enhancement. The results suggest that the algorithm can be used to reduce BHA in conventional CT and improve MPI-CT accuracy.

  4. Detection of diethylene glycol adulteration in propylene glycol--method validation through a multi-instrument collaborative study.

    PubMed

    Li, Xiang; Arzhantsev, Sergey; Kauffman, John F; Spencer, John A

    2011-04-05

    Four portable NIR instruments from the same manufacturer that were nominally identical were programmed with a PLS model for the detection of diethylene glycol (DEG) contamination in propylene glycol (PG)-water mixtures. The model was developed on one spectrometer and used on other units after a calibration transfer procedure that used piecewise direct standardization. Although quantitative results were produced, in practice the instrument interface was programmed to report in Pass/Fail mode. The Pass/Fail determinations were made within 10s and were based on a threshold that passed a blank sample with 95% confidence. The detection limit was then established as the concentration at which a sample would fail with 95% confidence. For a 1% DEG threshold one false negative (Type II) and eight false positive (Type I) errors were found in over 500 samples measured. A representative test set produced standard errors of less than 2%. Since the range of diethylene glycol for economically motivated adulteration (EMA) is expected to be above 1%, the sensitivity of field calibrated portable NIR instruments is sufficient to rapidly screen out potentially problematic materials. Following method development, the instruments were shipped to different sites around the country for a collaborative study with a fixed protocol to be carried out by different analysts. NIR spectra of replicate sets of calibration transfer, system suitability and test samples were all processed with the same chemometric model on multiple instruments to determine the overall analytical precision of the method. The combined results collected for all participants were statistically analyzed to determine a limit of detection (2.0% DEG) and limit of quantitation (6.5%) that can be expected for a method distributed to multiple field laboratories. Published by Elsevier B.V.
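
    The decision logic described above (a pass threshold that accepts a blank with 95% confidence, and a detection limit defined as the concentration that fails with 95% confidence) can be sketched as below, assuming normally distributed model predictions; the replicate values are invented for illustration and do not come from the study.

    import numpy as np
    from scipy.stats import norm

    blank_preds = np.array([0.05, -0.10, 0.12, 0.02, -0.03, 0.08, 0.00, -0.06])  # hypothetical %DEG predictions of blanks
    sigma = blank_preds.std(ddof=1)

    threshold = blank_preds.mean() + norm.ppf(0.95) * sigma    # a blank passes with 95% confidence
    detection_limit = threshold + norm.ppf(0.95) * sigma       # this level fails with 95% confidence,
                                                               # assuming similar scatter at low concentration
    print(f"threshold = {threshold:.2f} %DEG, detection limit = {detection_limit:.2f} %DEG")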

  5. Calibration methods and performance evaluation for pnCCDs in experiments with FEL radiation

    NASA Astrophysics Data System (ADS)

    Kimmel, N.; Andritschke, R.; Englert, L.; Epp, S.; Hartmann, A.; Hartmann, R.; Hauser, G.; Holl, P.; Ordavo, I.; Richter, R.; Strüder, L.; Ullrich, J.

    2011-06-01

    Measurement campaigns of the Max-Planck Advanced Study Group (ASG) in cooperation with the Center for Free Electron Laser Science (CFEL) at DESY-FLASH and SLAC-LCLS have established pnCCDs as universal photon imaging spectrometers in the energy range from 90 eV to 2 keV. In the CFEL-ASG multi-purpose chamber (CAMP), pnCCD detector modules are an integral part of the design with the ability to detect photons at very small scattering angles. In order to fully exploit the spectroscopic and intensity imaging capability of pnCCDs, it is essential to translate the unprocessed raw data into units of photon counts for any given position on the detection area. We have studied the performance of pnCCDs in FEL experiments and laboratory test setups for the range of signal intensities from a few X-ray photons per signal frame to 100 or more photons with an energy of 2 keV per pixel. Based on these measurement results, we were able to characterize the response of pnCCDs over the experimentally relevant photon energy and intensity range. The obtained calibration results are directly relevant for the physics data analysis. The accumulated knowledge of the detector performance was implemented in guidelines for detector calibration methods which are suitable for the specific requirements in photon science experiments at Free Electron Lasers. We discuss the achievable accuracy of photon energy and photon count measurements before and after the application of calibration data. Charge spreading due to illumination of small spots with high photon rates is discussed with respect to the charge handling capacity of a pixel and the effect of the charge spreading process on the resulting signal patterns.

  6. Suitability of satellite derived and gridded sea surface temperature data sets for calibrating high-resolution marine proxy records

    NASA Astrophysics Data System (ADS)

    Ouellette, G., Jr.; DeLong, K. L.

    2016-02-01

    High-resolution proxy records of sea surface temperature (SST) are increasingly being produced using trace element and isotope variability within the skeletal materials of marine organisms such as corals, mollusks, sclerosponges, and coralline algae. Translating the geochemical variations within these organisms into records of SST requires calibration against SST observations using linear regression methods, preferably with in situ SST records that span several years. However, locations with such records are sparse; therefore, calibration is often accomplished using gridded SST data products such as the Hadley Center's HadSST (5º) and interpolated HadISST (1º) data sets, NOAA's extended reconstructed SST data set (ERSST; 2º), optimum interpolation SST (OISST; 1º), and the Kaplan SST data set (5º). From these data products, the SST used for proxy calibration is obtained for the single grid cell that includes the proxy's study site. The gridded data sets are based on the International Comprehensive Ocean-Atmosphere Data Set (ICOADS), and each uses different methods of interpolation to produce globally and temporally complete data products, except for HadSST, which is not interpolated but quality controlled. This study compares SST for a single site from these gridded data products, from a high-resolution satellite-based SST data set from NOAA (Pathfinder; 4 km), from in situ SST data and from coral Sr/Ca variability at our study site in Haiti, to assess differences between these SST records with a focus on seasonal variability. Our results indicate substantial differences, on the order of 1-3°C, in the seasonal variability captured for the same site among these data sets. This analysis suggests that, of the data products, high-resolution satellite SST best captured seasonal variability at the study site. Unfortunately, satellite SST records are limited to the past few decades. If satellite SST are to be used to calibrate proxy records, collecting modern, living samples is desirable.

  7. Characterization of oscillator circuits for monitoring the density-viscosity of liquids by means of piezoelectric MEMS microresonators

    NASA Astrophysics Data System (ADS)

    Toledo, J.; Ruiz-Díez, V.; Pfusterschmied, G.; Schmid, U.; Sánchez-Rojas, J. L.

    2017-06-01

    Real-time monitoring of the physical properties of liquids, such as lubricants, is a very important issue for the automotive industry. For example, contamination of lubricating oil by diesel soot has a significant impact on engine wear. Resonant microstructures are regarded as a precise and compact solution for tracking the viscosity and density of lubricant oils. In this work, we report a piezoelectric resonator, designed to resonate in the 4th order out-of-plane vibration mode (15-mode), together with the interface circuit and calibration process for the monitoring of oil dilution with diesel fuel. In order to determine the resonance parameters of interest, i.e. the resonant frequency and quality factor, an interface circuit was implemented and included within a closed-loop scheme. Two types of oscillator circuits were tested, a phase-locked loop based on instrumentation and a more compact version based on discrete electronics, showing similar resolution. Another objective of this work is the assessment of a calibration method for piezoelectric MEMS resonators in simultaneous density and viscosity sensing. An advanced calibration model, based on a Taylor series of the hydrodynamic function, was established as a suitable method for determining the density and viscosity with the lowest calibration error. Our results demonstrate the performance of the resonator in different oil samples with viscosities up to 90 mPa·s. At the highest value, the quality factor measured at 25°C was around 22. The best resolution obtained was 2.4 × 10⁻⁶ g/ml for the density and 2.7 × 10⁻³ mPa·s for the viscosity, in pure lubricant oil SAE 0W30 at 90°C. Furthermore, the density and viscosity values estimated with the MEMS resonator were compared to those obtained with a commercial density-viscosity meter, reaching a mean calibration error in the best scenario of around 0.08% for the density and 3.8% for the viscosity.

  8. Evaluation of a liquid chromatography method for compound-specific δ13C analysis of plant carbohydrates in alkaline media.

    PubMed

    Rinne, Katja T; Saurer, Matthias; Streit, Kathrin; Siegwolf, Rolf T W

    2012-09-30

    Isotope analysis of carbohydrates is important for improved understanding of plant carbon metabolism and plant physiological response to the environment. High-performance liquid chromatography/isotope ratio mass spectrometry (HPLC/IRMS) for direct compound-specific δ13C measurements of soluble carbohydrates has recently been developed, but the still challenging sample preparation and the fact that no single method is capable of separating all compounds of interest hinder its widespread application. Here we tested in detail a chromatography method in alkaline media. We examined the most suitable chromatographic conditions for HPLC/IRMS analysis of carbohydrates in aqueous conifer needle extracts using a CarboPac PA20 anion-exchange column with NaOH eluent, paying specific attention to compound yields, carbon isotope fractionation processes and the reproducibility of the method. Furthermore, we adapted and calibrated sample preparation methods for HPLC/IRMS analysis. OnGuard II cartridges were used for sample purification. Good peak separation and highly linear and reproducible concentration and δ13C measurements were obtained. The alkaline eluent was observed to induce isomerization of hexoses, detected as reduced yields and 13C fractionation of the affected compounds. A reproducible pre-purification method providing ~100% yield for the carbohydrate compounds of interest was calibrated. The good level of peak separation obtained in this study is reflected in the good precision and linearity of the concentration and δ13C results. The data provided crucial information on the behaviour of sugars in LC analysis with alkaline media. The observations highlight the importance of applying compound-matched standard solutions for the detection and correction of instrumental biases in concentration and δ13C analysis performed under identical chromatographic conditions. The calibrated pre-purification method is well suited for studies with complex matrices that preclude the use of a spiked internal standard for the detection of procedural losses. Copyright © 2012 John Wiley & Sons, Ltd.

  9. Development and validation of an ultra-performance liquid chromatography quadrupole time of flight mass spectrometry method for rapid quantification of free amino acids in human urine.

    PubMed

    Joyce, Richard; Kuziene, Viktorija; Zou, Xin; Wang, Xueting; Pullen, Frank; Loo, Ruey Leng

    2016-01-01

    An ultra-performance liquid chromatography quadrupole time of flight mass spectrometry (UPLC-qTOF-MS) method using hydrophilic interaction liquid chromatography was developed and validated for the simultaneous quantification of 18 free amino acids in urine, with a total acquisition time, including column re-equilibration, of less than 18 min per sample. This method involves simple sample preparation steps consisting of a 15-fold dilution with acetonitrile to give a final composition of 25 % aqueous and 75 % acetonitrile, without the need for any derivatization. The dynamic range of our calibration curve is approximately two orders of magnitude (120-fold from the lowest calibration curve point) with good linearity (r² ≥ 0.995 for all amino acids). Good separation of all amino acids as well as good intra- and inter-day accuracy (<15 %) and precision (<15 %) were observed using three quality control samples at concentrations in the low, medium and high range of the calibration curve. The limits of detection (LOD) and lower limits of quantification of our method ranged from approximately 1-300 nM and 0.01-0.5 µM, respectively. The amino acids in the prepared urine samples were found to be stable for 72 h at 4 °C, after one freeze-thaw cycle and for up to 4 weeks at -80 °C. We applied this method to quantify the content of 18 free amino acids in 646 urine samples from a dietary intervention study and were able to quantify all 18 free amino acids in these samples whenever they were present at a level above the LOD. We found our method to be reproducible (accuracy and precision were typically <10 % for QCL, QCM and QCH), and the relatively high sample throughput of this method potentially makes it a suitable alternative for the analysis of urine samples in a clinical setting.
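
    For reference, a common way to obtain linearity, LOD and LLOQ figures for such a calibration curve is sketched below, using ICH-style working definitions based on the residual standard deviation and the slope; the calibration points are invented, and the study may have used a different convention.

    import numpy as np

    conc = np.array([0.1, 0.25, 0.5, 1.0, 2.5, 5.0, 10.0, 12.0])                 # µM, hypothetical
    area = np.array([1.1e3, 2.4e3, 5.2e3, 1.0e4, 2.6e4, 5.1e4, 9.9e4, 1.21e5])   # peak areas, hypothetical

    slope, intercept = np.polyfit(conc, area, 1)
    residual_sd = np.std(area - (slope * conc + intercept), ddof=2)
    r2 = np.corrcoef(conc, area)[0, 1] ** 2

    lod = 3.3 * residual_sd / slope      # limit of detection
    lloq = 10.0 * residual_sd / slope    # lower limit of quantification
    print(f"r^2 = {r2:.4f}, LOD ≈ {lod:.3f} µM, LLOQ ≈ {lloq:.3f} µM")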

  10. Temperature corrected-calibration of GRACE's accelerometer

    NASA Astrophysics Data System (ADS)

    Encarnacao, J.; Save, H.; Siemes, C.; Doornbos, E.; Tapley, B. D.

    2017-12-01

    Since April 2011, the thermal control of the accelerometers on board the GRACE satellites has been turned off. The time series of the along-track bias clearly show a drastic change in the behaviour of this parameter, while the calibration model has remained unchanged throughout the entire mission lifetime. In an effort to improve the quality of the gravity field models produced at CSR in a future mission-long re-processing of GRACE data, we quantify the added value of different calibration strategies. In one approach, the temperature effects that distort the raw accelerometer measurements collected without thermal control are corrected using the housekeeping temperature readings. In this way, one single calibration strategy can be applied consistently over the whole mission lifetime, since it is valid for the thermal conditions both before and after April 2011. Finally, we illustrate that the resulting calibrated accelerations are suitable for neutral thermospheric density studies.
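
    One simple way to realize a temperature-corrected calibration of the kind described above is to regress the accelerometer bias against the housekeeping temperature readings and remove the fitted temperature-driven part. The bias-plus-trend model below is an illustrative assumption, not the calibration model actually used at CSR.

    import numpy as np

    def temperature_corrected(time, raw_accel, housekeeping_temp):
        """Fit bias(t) ≈ c0 + c1·t + c2·T(t) and subtract the temperature-driven term."""
        A = np.column_stack([np.ones_like(time), time, housekeeping_temp])
        c, *_ = np.linalg.lstsq(A, raw_accel, rcond=None)
        temperature_part = c[2] * (housekeeping_temp - housekeeping_temp.mean())
        return raw_accel - temperature_part, c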

  11. Earth Radiation Budget Experiment scanner radiometric calibration results

    NASA Technical Reports Server (NTRS)

    Lee, Robert B., III; Gibson, M. A.; Thomas, Susan; Meekins, Jeffrey L.; Mahan, J. R.

    1990-01-01

    The Earth Radiation Budget Experiment (ERBE) scanning radiometers are producing measurements of the incoming solar, earth/atmosphere-reflected solar, and earth/atmosphere-emitted radiation fields with measurement precisions and absolute accuracies approaching 1 percent. ERBE uses thermistor bolometers as the detection elements in the narrow-field-of-view scanning radiometers. The scanning radiometers can sense radiation in the shortwave, longwave, and total broadband spectral regions of 0.2 to 5.0, 5.0 to 50.0, and 0.2 to 50.0 micrometers, respectively. Detailed models of the radiometers' response functions were developed in order to design the most suitable calibration techniques. These models guided the design of in-flight calibration procedures as well as the development and characterization of a vacuum-calibration chamber and the blackbody source which provided the absolute basis upon which the total and longwave radiometers were characterized. The flight calibration instrumentation for the narrow-field-of-view scanning radiometers is presented and evaluated.

  12. Performance of a laser frequency comb calibration system with a high-resolution solar echelle spectrograph

    NASA Astrophysics Data System (ADS)

    Doerr, H.-P.; Kentischer, T. J.; Steinmetz, T.; Probst, R. A.; Franz, M.; Holzwarth, R.; Udem, Th.; Hänsch, T. W.; Schmidt, W.

    2012-09-01

    Laser frequency combs (LFC) provide a direct link between the radio frequency (RF) and the optical frequency regime. The comb-like spectrum of an LFC is formed by exactly equidistant laser modes, whose absolute optical frequencies are controlled by RF references such as atomic clocks or GPS receivers. While LFCs are nowadays routinely used in metrological and spectroscopic fields, their application in astronomy was delayed until recently, when systems became available with a mode spacing and wavelength coverage suitable for the calibration of astronomical spectrographs. We developed an LFC-based calibration system for the high-resolution echelle spectrograph at the German Vacuum Tower Telescope (VTT), located at the Teide observatory, Tenerife, Canary Islands. To characterize the calibration performance of the instrument, we use an all-fiber setup where sunlight and calibration light are fed to the spectrograph by the same single-mode fiber, eliminating systematic effects related to variable grating illumination.

  13. Note: An improved calibration system with phase correction for electronic transformers with digital output

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Han-miao, E-mail: chenghanmiao@hust.edu.cn; Li, Hong-bin, E-mail: lihongbin@hust.edu.cn; State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Wuhan 430074

    The existing electronic transformer calibration systems employing data acquisition cards cannot satisfy some practical applications, because the calibration systems have phase measurement errors when they work in the mode of receiving external synchronization signals. This paper proposes an improved calibration system scheme with phase correction to improve the phase measurement accuracy. We employ an NI PCI-4474 to design a calibration system, and the system has the potential to receive external synchronization signals and reach extremely high accuracy classes. Accuracy verification has been carried out at the China Electric Power Research Institute, and the results demonstrate that the system surpasses the accuracy class 0.05. Furthermore, this system has been used to test the harmonics measurement accuracy of all-fiber optical current transformers. In the same process, we have used an existing calibration system, and a comparison of the test results is presented. The improved system is suitable for the intended applications.

  14. General calibration methodology for a combined Horton-SCS infiltration scheme in flash flood modeling

    NASA Astrophysics Data System (ADS)

    Gabellani, S.; Silvestro, F.; Rudari, R.; Boni, G.

    2008-12-01

    Flood forecasting undergoes constant evolution, becoming more and more demanding of the models used for hydrologic simulations. The advantages of developing distributed or semi-distributed models are now clear, and the importance of using continuous distributed modeling is emerging. A proper schematization of the infiltration process is vital to these types of models. Many popular infiltration schemes, reliable and easy to implement, are too simplistic for the development of continuous hydrologic models. On the other hand, the unavailability of detailed and descriptive information on soil properties often limits the implementation of complete infiltration schemes. In this work, a combination of the Soil Conservation Service Curve Number method (SCS-CN) and a method derived from the Horton equation is proposed in order to overcome the inherent limits of the two schemes. The SCS-CN method is easily applicable over large areas, but has structural limitations. Horton-like methods use parameters that, although measurable at a point, are difficult to estimate reliably at the catchment scale. The objective of this work is to overcome these limits by proposing a calibration procedure which maintains the wide applicability of the SCS-CN method as well as the continuous description of the infiltration process given by a suitably modified Horton equation. The estimation of the parameters of the modified Horton method is carried out using a formal analogy with the SCS-CN method under specific conditions. Some applications at the catchment scale within a distributed model are presented.
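    For readers unfamiliar with the two building blocks being combined, the sketch below evaluates the textbook SCS-CN event runoff relation and the classic Horton infiltration-capacity decay; parameter values are illustrative only, and the coupling and calibration-by-analogy step of the paper is not reproduced.

```python
import numpy as np

def scs_cn_runoff(p_mm, cn, lam=0.2):
    """Event runoff (mm) from the standard SCS Curve Number relation."""
    s = 25400.0 / cn - 254.0        # potential maximum retention (mm)
    ia = lam * s                    # initial abstraction
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

def horton_capacity(t_h, f0, fc, k):
    """Classic Horton infiltration capacity (mm/h) after t_h hours."""
    return fc + (f0 - fc) * np.exp(-k * t_h)

# Illustrative parameter values only
print("SCS-CN runoff for a 60 mm storm, CN=75:", round(scs_cn_runoff(60.0, 75), 2), "mm")
t = np.linspace(0.0, 6.0, 7)
print("Horton capacity over 6 h (mm/h):", np.round(horton_capacity(t, f0=60.0, fc=10.0, k=1.5), 1))
```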

  15. Limits to lichenometry

    NASA Astrophysics Data System (ADS)

    Rosenwinkel, Swenja; Korup, Oliver; Landgraf, Angela; Dzhumabaeva, Atyrgul

    2015-12-01

    Lichenometry is a straightforward and inexpensive method for dating Holocene rock surfaces. The rationale is that the diameter of the largest lichen scales with the age of the originally fresh rock surface that it colonised. The success of the method depends on finding the largest lichen diameters, a suitable lichen-growth model, and a robust calibration curve. Recent critique of the method motivates us to revisit the accuracy and uncertainties of lichenometry. Specifically, we test how well lichenometry is capable of resolving the ages of different lobes of large active rock glaciers in the Kyrgyz Tien Shan. We use a bootstrapped quantile regression to calibrate local growth curves of Xanthoria elegans, Aspicilia tianshanica, and Rhizocarpon geographicum, and report a nonlinear decrease in dating accuracy with increasing lichen diameter. A Bayesian analysis of variance demonstrates that our calibration allows credible discrimination between rock-glacier lobes of different ages, despite the uncertainties tied to sample size and to correctly identifying the largest lichen thalli. Our results also show that calibration error grows with lichen size, so that the separability of rock-glacier lobes of different ages decreases, while the tendency to assign coeval ages increases. The abundant young (<200 yr) specimens of fast-growing X. elegans contrast with the fewer, slow-growing, but older (200-1500 yr) R. geographicum and A. tianshanica, and record either a regional reactivation of lobes in the past 200 years, or simply a censoring effect of lichen mortality during early phases of colonisation. The high variance of lichen sizes captures the activity of rock-glacier lobes, which is difficult to explain by regional climatic cooling or earthquake triggers alone. We therefore caution against inferring palaeoclimatic conditions from the topographic position of rock-glacier lobes. We conclude that lichenometry works better as a tool for establishing a relative, rather than an absolute, chronology of rock-glacier lobes in the northern Tien Shan.
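    A bootstrapped quantile-regression calibration of a lichen growth curve, as named in the abstract, could be sketched as follows with statsmodels; the linear growth form, the 0.95 quantile and all data are assumptions for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Hypothetical calibration surfaces: known age (yr) and largest lichen diameter (mm)
age = rng.uniform(20.0, 500.0, 120)
diameter = 0.35 * age + rng.normal(0.0, 8.0, age.size)
data = pd.DataFrame({"age": age, "diameter": diameter})

# Bootstrapped 0.95-quantile regression of diameter on age
slopes = []
for _ in range(200):
    sample = data.sample(frac=1.0, replace=True, random_state=int(rng.integers(1_000_000)))
    fit = smf.quantreg("diameter ~ age", sample).fit(q=0.95)
    slopes.append(fit.params["age"])

print(f"0.95-quantile growth rate: {np.mean(slopes):.3f} +/- {np.std(slopes):.3f} mm/yr (bootstrap)")
```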

  16. A calibration method for fringe reflection technique based on the analytical phase-slope description

    NASA Astrophysics Data System (ADS)

    Wu, Yuxiang; Yue, Huimin; Pan, Zhipeng; Liu, Yong

    2018-05-01

    The fringe reflection technique (FRT) has been one of the most popular methods for measuring the shape of specular surfaces in recent years. Existing FRT system calibration methods usually comprise two parts: camera calibration and geometric calibration. In geometric calibration, calibrating the position of the liquid crystal display (LCD) screen is one of the most difficult steps among all the calibration procedures, and its accuracy is affected by factors such as imaging aberration, plane mirror flatness, and LCD screen pixel size accuracy. In this paper, based on the derivation of an analytical phase-slope description of the FRT, we present a novel calibration method with no requirement to calibrate the position of the LCD screen. Moreover, the system can be arbitrarily arranged, and the imaging system can be either telecentric or non-telecentric. In our experiment measuring a 5000 mm radius spherical mirror, the proposed calibration method achieves a 2.5 times smaller measurement error than the geometric calibration method. In the wafer surface measurement experiment, the result obtained with the proposed calibration method is closer to the interferometer result than that of the geometric calibration method.

  17. Simultaneous quantitative analysis of nine vitamin D compounds in human blood using LC-MS/MS.

    PubMed

    Abu Kassim, Nur Sofiah; Gomes, Fabio P; Shaw, Paul Nicholas; Hewavitharana, Amitha K

    2016-01-01

    It has been suggested that each member of the family of vitamin D compounds may have different function(s). Therefore, selective quantification of each compound is important in clinical research. The development and validation of a method for simultaneous determination of 12 vitamin D compounds in human blood using precolumn derivatization followed by LC-MS/MS is described. Internal standard calibration with 12 stable isotope-labeled analogs was used to correct for matrix effects in the MS detector. Nine vitamin D compounds were quantifiable in blood samples, with detection limits at femtomole levels. Serum (compared with plasma) was found to be a more suitable sample type, and protein precipitation (compared with saponification) a more effective extraction method for the vitamin D assay.

  18. Apparatus and method for acoustic monitoring of steam quality and flow

    DOEpatents

    Sinha, Dipen N.; Pantea, Cristian

    2016-09-13

    An apparatus and method for noninvasively monitoring steam quality and flow in pipes or conduits bearing flowing steam are described. By measuring the acoustic vibrations generated in steam-carrying conduits by the flowing steam, either by direct contact with the pipe or remotely, converting the measured acoustic vibrations into a frequency spectrum characteristic of the natural resonance vibrations of the pipe, and monitoring the amplitude and/or the frequency of one or more chosen resonance frequencies, changes in the steam quality in the pipe are determined. The steam flow rate and the steam quality are inversely related, and changes in the steam flow rate are calculated from changes in the steam quality once suitable calibration curves are obtained.

  19. Radiometric calibration updates to the Landsat collection

    USGS Publications Warehouse

    Micijevic, Esad; Haque, Md. Obaidul; Mishra, Nischal

    2016-01-01

    The Landsat Project is planning to implement a new collection management strategy for Landsat products generated at the U.S. Geological Survey (USGS) Earth Resources Observation and Science (EROS) Center. The goal of the initiative is to identify a collection of consistently geolocated and radiometrically calibrated images across the entire Landsat archive that is readily suitable for time-series analyses. In order to perform an accurate land change analysis, the data from all Landsat sensors must be on the same radiometric scale. Landsat 7 Enhanced Thematic Mapper Plus (ETM+) is calibrated to a radiance standard, and all previous sensors are cross-calibrated to its radiometric scale. Landsat 8 Operational Land Imager (OLI) is calibrated to both radiance and reflectance standards independently. The Landsat 8 OLI reflectance calibration is considered to be the most accurate. To improve the radiometric calibration accuracy of historical data, the Landsat 1-7 sensors also need to be cross-calibrated to the OLI reflectance scale. Results of that effort, as well as other calibration updates including the absolute and relative radiometric calibration and saturated pixel replacement for Landsat 8 OLI and the absolute calibration for the Landsat 4 and 5 Thematic Mappers (TM), will be implemented into Landsat products during the archive reprocessing campaign planned within the new collection management strategy. This paper reports on the planned radiometric calibration updates to the solar reflective bands of the new Landsat collection.

  20. Extending neutron autoradiography technique for boron concentration measurements in hard tissues.

    PubMed

    Provenzano, Lucas; Olivera, María Silvina; Saint Martin, Gisela; Rodríguez, Luis Miguel; Fregenal, Daniel; Thorp, Silvia I; Pozzi, Emiliano C C; Curotto, Paula; Postuma, Ian; Altieri, Saverio; González, Sara J; Bortolussi, Silva; Portu, Agustina

    2018-07-01

    The neutron autoradiography technique using polycarbonate nuclear track detectors (NTD) has been extended to quantify the boron concentration in hard tissues, an application of special interest in Boron Neutron Capture Therapy (BNCT). Chemical and mechanical processing methods to prepare the thin tissue sections required by this technique were explored. Four different decalcification methods governed by slow and fast kinetics were tested in boron-loaded bones; due to the significant loss of boron content, this approach was discarded. By contrast, mechanical manipulation to obtain bone powder and tissue sections tens of microns thick proved reproducible and suitable, ensuring proper conservation of the boron content in the samples. A calibration curve relating the ¹⁰B concentration of a bone sample to the track density in a Lexan NTD is presented. Bone powder embedded in boric acid solutions with known boron concentrations between 0 and 100 ppm was used as a standard material. The samples, contained in slim Lexan cases, were exposed to a neutron fluence of 10¹² cm⁻² at the thermal column central facility of the RA-3 reactor (Argentina). The revealed tracks in the NTD were counted with image processing software. The effect of track overlapping was studied and the corresponding corrections were implemented in the presented calibration curve. Stochastic simulations of the track densities produced by the products of the ¹⁰B thermal neutron capture reaction for different boron concentrations in bone were performed and compared with the experimental results. The remarkable agreement between the two curves supports the suitability of the obtained experimental calibration curve. The neutron autoradiography technique was finally applied to determine the boron concentration in pulverized and compact bone samples from a sheep experimental model. The results obtained for both types of samples agreed with boron measurements carried out by ICP-OES within experimental uncertainties. The fact that the histological structure of the bone sections remains preserved allows for future boron microdistribution analysis. Copyright © 2018 Elsevier Ltd. All rights reserved.
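    The track-overlap correction and linear track-density calibration could look roughly like the sketch below, assuming a simple Poisson saturation model for overlap (the authors' actual correction and simulation approach may differ); all densities and the mean track area are hypothetical.

```python
import numpy as np

def true_track_density(observed, mean_track_area):
    """Invert a simple Poisson-overlap saturation model:
        observed = (1 - exp(-A * true)) / A,  A = mean track area (cm^2).
    """
    return -np.log(1.0 - mean_track_area * observed) / mean_track_area

# Hypothetical standards: boron concentration (ppm) vs observed track density (tracks/cm^2)
boron_ppm = np.array([0.0, 10.0, 25.0, 50.0, 100.0])
observed = np.array([1.0e3, 9.5e4, 2.3e5, 4.4e5, 8.1e5])

corrected = true_track_density(observed, mean_track_area=2.0e-7)
slope, intercept = np.polyfit(boron_ppm, corrected, 1)
print(f"calibration: track density = {slope:.3e} * [10B] + {intercept:.3e}")
```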

  1. Irradiation dose detection of irradiated milk powder using visible and near-infrared spectroscopy and chemometrics.

    PubMed

    Kong, W W; Zhang, C; Liu, F; Gong, A P; He, Y

    2013-08-01

    The objective of this study was to examine the possibility of applying visible and near-infrared spectroscopy to the quantitative detection of the irradiation dose of irradiated milk powder. A total of 150 samples were used: 100 for the calibration set and 50 for the validation set. The samples were irradiated at 5 different dose levels in the range 0 to 6.0 kGy. Six different pretreatment methods were compared. The prediction results of full spectra given by linear and nonlinear calibration methods suggested that Savitzky-Golay smoothing and the first derivative were suitable pretreatment methods in this study. Regression coefficient analysis was applied to select effective wavelengths (EW). Fewer than 10 EW were selected, which is useful for the development of portable detection instruments or sensors. Partial least squares, extreme learning machine, and least squares support vector machine models were used. The best prediction performance was achieved by the EW-extreme learning machine model with first-derivative spectra, with a correlation coefficient of 0.97 and a root mean square error of prediction of 0.844. This study provides a new approach for the fast detection of the irradiation dose of milk powder. The results could be helpful for quality detection and safety monitoring of milk powder. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
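    The preprocessing-plus-calibration chain named in the abstract (Savitzky-Golay smoothing with a first derivative followed by a multivariate calibration model) can be sketched as below; synthetic spectra stand in for the real data, and PLS is shown in place of the extreme learning machine for brevity.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic Vis/NIR spectra (150 samples x 200 wavelengths) whose band depth scales with dose (kGy)
dose = rng.uniform(0.0, 6.0, 150)
wavelengths = np.linspace(400.0, 1000.0, 200)
spectra = (np.outer(dose, np.exp(-((wavelengths - 700.0) / 80.0) ** 2))
           + rng.normal(0.0, 0.05, (150, 200)))

# Savitzky-Golay smoothing combined with a first derivative, as in the pretreatment step
pretreated = savgol_filter(spectra, window_length=11, polyorder=2, deriv=1, axis=1)

X_cal, X_val, y_cal, y_val = train_test_split(pretreated, dose, test_size=1/3, random_state=0)
pls = PLSRegression(n_components=5).fit(X_cal, y_cal)
pred = pls.predict(X_val).ravel()
rmsep = np.sqrt(np.mean((pred - y_val) ** 2))
print(f"r = {np.corrcoef(pred, y_val)[0, 1]:.3f}, RMSEP = {rmsep:.3f} kGy")
```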

  2. Calibration of δ13C and δ18O measurements in CO2 using Off-axis Integrated Cavity Output Spectrometer (ICOS)

    NASA Astrophysics Data System (ADS)

    Joseph, Jobin; Külls, Christoph

    2014-05-01

    The δ13C and δ18O of CO2 have enormous potential as tracers to study and quantify the interaction between the water and carbon cycles. Isotope ratio mass spectrometry (IRMS), the conventional method for stable isotope measurements, has many limitations that make it impractical to deploy in remote areas for online or in-situ sampling. Laser-based absorption spectroscopy approaches such as Cavity Ring Down Spectroscopy (CRDS) and Integrated Cavity Output Spectroscopy (ICOS) have been developed for online measurements of stable isotopes with considerably lower power requirements and precision comparable to IRMS. In this research project, we introduce a new calibration system for an Off-Axis ICOS instrument (Los Gatos Research CCIA-36d) over a wide range of CO2 concentrations (800 ppm - 25,000 ppm), a typical CO2 flux range at the plant-soil continuum. The calibration compensates for the concentration dependency of the δ13C and δ18O measurements, and was performed using various CO2 standards with known CO2 concentrations and δ13C and δ18O values. A mathematical model was developed after the calibration procedure as a correction factor for the concentration dependency of the δ13C and δ18O measurements. The temperature dependency of the δ13C and δ18O measurements was investigated and no significant influence was found. Simultaneous calibration of δ13C and δ18O is achieved using this calibration system with an overall accuracy of ~0.75 ± 0.24 ‰ for δ13C and ~0.81 ± 0.26 ‰ for δ18O. This calibration procedure makes Off-Axis ICOS suitable for measuring CO2 concentration, δ13C and δ18O at the atmosphere-plant-soil continuum.
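    A minimal sketch of the concentration-dependence correction described above, assuming the offset from a reference gas can be modelled as a polynomial in log(CO2 concentration) (an illustrative form, not the authors' model):

```python
import numpy as np

# Hypothetical reference gas measured at several CO2 concentrations (ppm)
conc_ppm = np.array([800.0, 2000.0, 5000.0, 10000.0, 25000.0])
true_d13c = -10.5                                  # assumed reference value (per mil)
measured_d13c = np.array([-12.4, -11.6, -11.0, -10.7, -10.55])

# Fit the concentration-dependent offset as a polynomial in log(concentration)
offset = measured_d13c - true_d13c
coeffs = np.polyfit(np.log(conc_ppm), offset, 2)

def correct_d13c(measured, concentration_ppm):
    """Remove the empirically modelled concentration dependence (illustrative only)."""
    return measured - np.polyval(coeffs, np.log(concentration_ppm))

print("corrected standards (per mil):", np.round(correct_d13c(measured_d13c, conc_ppm), 2))
```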

  3. Using a down-scaled bioclimate envelope model to determine long-term temporal connectivity of Garry oak (Quercus garryana) habitat in western North America: implications for protected area planning.

    PubMed

    Pellatt, Marlow G; Goring, Simon J; Bodtker, Karin M; Cannon, Alex J

    2012-04-01

    Under the Canadian Species at Risk Act (SARA), Garry oak (Quercus garryana) ecosystems are listed as "at-risk" and act as an umbrella for over one hundred species that are endangered to some degree. Understanding Garry oak responses to future climate scenarios at scales relevant to protected area managers is essential to effectively manage existing protected area networks and to guide the selection of temporally connected migration corridors, additional protected areas, and to maintain Garry oak populations over the next century. We present Garry oak distribution scenarios using two random forest models calibrated with down-scaled bioclimatic data for British Columbia, Washington, and Oregon based on 1961-1990 climate normals. The suitability models are calibrated using either both precipitation and temperature variables or using only temperature variables. We compare suitability predictions from four General Circulation Models (GCMs) and present CGCM2 model results under two emissions scenarios. For each GCM and emissions scenario we apply the two Garry oak suitability models and use the suitability models to determine the extent and temporal connectivity of climatically suitable Garry oak habitat within protected areas from 2010 to 2099. The suitability models indicate that while 164 km² of the total protected area network in the region (47,990 km²) contains recorded Garry oak presence, 1635 and 1680 km² of climatically suitable Garry oak habitat is currently under some form of protection. Of this suitable protected area, only between 6.6 and 7.3% will be "temporally connected" between 2010 and 2099 based on the CGCM2 model. These results highlight the need for public and private protected area organizations to work cooperatively in the development of corridors to maintain temporal connectivity in climatically suitable areas for the future of Garry oak ecosystems.
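    A random-forest suitability model of the kind described above can be sketched as follows; the bioclimatic predictors, the presence rule and the future-climate shift are all invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)

# Hypothetical down-scaled bioclimate predictors per grid cell:
# mean annual temperature (degC), summer precipitation (mm), minimum winter temperature (degC)
n = 2000
climate = np.column_stack([
    rng.uniform(2.0, 14.0, n),
    rng.uniform(50.0, 600.0, n),
    rng.uniform(-20.0, 6.0, n),
])
# Invented presence rule: warm, dry, mild-winter cells host Garry oak
presence = ((climate[:, 0] > 9.0) & (climate[:, 1] < 250.0) & (climate[:, 2] > -6.0)).astype(int)

model = RandomForestClassifier(n_estimators=300, random_state=0).fit(climate, presence)

# Suitability under a crude warming/drying shift is the predicted presence probability
future = climate + np.array([2.5, -30.0, 2.0])
suitability = model.predict_proba(future)[:, 1]
print("cells newly suitable under the shifted climate:", int(np.sum((suitability > 0.5) & (presence == 0))))
```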

  4. Development of departmental standard for traceability of measured activity for I-131 therapy capsules used in nuclear medicine.

    PubMed

    Ravichandran, Ramamoorthy; Binukumar, Jp

    2011-01-01

    International Basic Safety Standards (International Atomic Energy Agency, IAEA) provide guidance levels for diagnostic procedures in nuclear medicine, indicating the maximum usual activity for various diagnostic tests in terms of activities of injected radioactive formulations. An accuracy of ±10% in the activities of administered radiopharmaceuticals is recommended for the expected outcome in diagnostic and therapeutic nuclear medicine procedures. It is recommended that the long-term stability of isotope calibrators used in nuclear medicine be checked periodically using a long-lived check source, such as Cs-137, of suitable activity. In view of the unavailability of such a radioactive source, we developed methods to maintain the traceability of these instruments for certifying measured activities for human use. Two re-entrant chambers (HDR 1000 and the Selectron Source Dosimetry System, SSDS) with I-125 and Ir-192 calibration factors in the Department of Radiotherapy were used to measure iodine-131 (I-131) therapy capsules and establish traceability to the Mark V isotope calibrator of the Department of Nuclear Medicine. Special nylon jigs were fabricated to keep the I-131 capsule holder in position. Measured activities in all the chambers showed good agreement. The accuracy of the SSDS chamber in measuring Ir-192 activities over the last 5 years was within 0.5%, validating its role as a departmental standard for measuring activity. This approach was adopted because the mean energies of I-131 and Ir-192 are comparable.

  5. Accurate quantitation standards of glutathione via traceable sulfur measurement by inductively coupled plasma optical emission spectrometry and ion chromatography

    PubMed Central

    Rastogi, L.; Dash, K.; Arunachalam, J.

    2013-01-01

    The quantitative analysis of glutathione (GSH) is important in different fields such as medicine, biology, and biotechnology. Accurate quantitative measurements of this analyte have been hampered by the lack of well characterized reference standards. The proposed procedure is intended to provide an accurate and definitive method for the quantitation of GSH for reference measurements. Measurement of the stoichiometric sulfur content in purified GSH offers an approach for its quantitation, and calibration against an appropriately characterized reference material (CRM) for sulfur provides a methodology for certifying GSH quantity that is traceable to the SI (International System of Units). The inductively coupled plasma optical emission spectrometry (ICP-OES) approach negates the need for any sample digestion. The sulfur content of the purified GSH is quantitatively converted into sulfate ions by microwave-assisted UV digestion in the presence of hydrogen peroxide prior to ion chromatography (IC) measurements. The measurement of sulfur by ICP-OES and IC (as sulfate) using the "high performance" methodology could be useful for characterizing primary calibration standards and certified reference materials with low uncertainties. The relative expanded uncertainties (% U) expressed at the 95% confidence level varied from 0.1% to 0.3% for the ICP-OES analyses, and between 0.2% and 1.2% for IC. The described methods are more suitable for characterizing primary calibration standards and certifying reference materials of GSH than for routine measurements. PMID:29403814

  6. A new calibration methodology for thorax and upper limbs motion capture in children using magneto and inertial sensors.

    PubMed

    Ricci, Luca; Formica, Domenico; Sparaci, Laura; Lasorsa, Francesca Romana; Taffoni, Fabrizio; Tamilia, Eleonora; Guglielmelli, Eugenio

    2014-01-09

    Recent advances in wearable sensor technologies for motion capture have produced devices, mainly based on magneto and inertial measurement units (M-IMU), that are now suitable for out-of-the-lab use with children. In fact, their reduced size and weight and their wireless connectivity meet the requirement of minimal obtrusiveness and give scientists the possibility to analyze children's motion in daily-life contexts. Typical use of M-IMU motion capture systems is based on attaching a sensing unit to each body segment of interest. The correct use of this setup requires a specific calibration methodology that allows measurements from the sensors' frames of reference to be mapped into useful kinematic information in the human limbs' frames of reference. The present work addresses this specific issue, presenting a calibration protocol to capture the kinematics of the upper limbs and thorax in typically developing (TD) children. The proposed method allows the construction, on each body segment, of a meaningful system of coordinates that is representative of real physiological motions and that is referred to as a functional frame (FF). We also present a novel cost function for the Levenberg-Marquardt algorithm, to retrieve the rotation matrices between each sensor frame (SF) and the corresponding FF. Reported results on a group of 40 children suggest that the method is repeatable and reliable, opening the way to the extensive use of this technology for out-of-the-lab motion capture in children.

  7. A Penning discharge source for extreme ultraviolet calibration

    NASA Technical Reports Server (NTRS)

    Finley, David S.; Jelinsky, Patrick; Bowyer, Stuart; Malina, Roger F.

    1986-01-01

    A Penning discharge lamp for use in the calibration of instruments and components for the extreme ultraviolet has been developed. This source is sufficiently light and compact to make it suitable for mounting on the movable slit assembly of a grazing incidence Rowland circle monochromator. Because this is a continuous discharge source, it is suitable for use with photon counting detectors. Line radiation is provided both by the gas and by atoms sputtered off the interchangeable metal cathodes. Usable lines are produced by species as highly ionized as Ne IV and Al V. The wavelength coverage provided is such that a good density of emission lines is available down to wavelengths as short as 100 Å. This source fills the gap between 100 and 300 Å, which is inadequately covered by the other available compact continuous radiation sources.

  8. Quantitation without Calibration: Response Profile as an Indicator of Target Amount.

    PubMed

    Debnath, Mrittika; Farace, Jessica M; Johnson, Kristopher D; Nesterova, Irina V

    2018-06-21

    Quantitative assessment of biomarkers is essential in numerous contexts, from decision-making in clinical situations to food quality monitoring to the interpretation of life-science research findings. However, appropriate quantitation techniques are not as widely addressed as detection methods. One of the major challenges in biomarker quantitation is the need for a calibration correlating a measured signal to a target amount. This step complicates the methodologies and makes them less sustainable. In this work we address the issue via a new strategy: relying on the position of the response profile rather than on an absolute signal value to assess the target amount. To enable this capability we develop a target-probe binding mechanism based on a negative cooperativity effect. A proof-of-concept example demonstrates that the model is suitable for quantitative analysis of nucleic acids over a wide concentration range. The general principles of the platform will be applicable to a variety of biomarkers such as nucleic acids, proteins, peptides, and others.
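    The core idea, that with negative cooperativity the position of the response profile (rather than its absolute height) encodes the target amount, can be illustrated with a simple Hill-type model (an assumption, not the authors' binding mechanism):

```python
import numpy as np

def binding_fraction(probe, target, kd=1.0, n=0.5):
    """Hill-type response with negative cooperativity (n < 1); illustrative model only."""
    return probe ** n / (probe ** n + (kd * target) ** n)

probe = np.logspace(-3, 3, 400)      # titrated probe amount (arbitrary units)
for target in (0.1, 1.0, 10.0):
    response = binding_fraction(probe, target)
    midpoint = probe[np.argmin(np.abs(response - 0.5))]
    print(f"target = {target:5.1f}  ->  response-profile midpoint at probe ~ {midpoint:.2f}")
```

    With n < 1 the transition is broad but its midpoint tracks the target amount, which is the property that allows quantitation from profile position rather than from an absolute signal.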

  9. Total 25-Hydroxyvitamin D Determination by an Entry Level Triple Quadrupole Instrument: Comparison between Two Commercial Kits

    PubMed Central

    Cocci, Andrea; Zuppi, Cecilia; Persichilli, Silvia

    2013-01-01

    Objective. 25-hydroxyvitamin D2/D3 (25-OHD2/D3) determination is a reliable biomarker for vitamin D status. Liquid chromatography-tandem mass spectrometry was recently proposed as a reference method for vitamin D status evaluation. The aim of this work is to compare two commercial kits (Chromsystems and PerkinElmer) for 25-OHD2/D3 determination on our entry-level LC-MS/MS. Design and Methods. The Chromsystems kit adds an online trap column to an HPLC column and provides atmospheric pressure chemical ionization, an isotopically labeled internal standard, and 4 calibrator points. The PerkinElmer kit uses a solvent extraction and protein precipitation method. This kit can be used with or without derivatization, with electrospray and atmospheric pressure chemical ionization, respectively. For each analyte, there are isotopically labeled internal standards and 7 deuterated calibrator points. Results. Performance characteristics are acceptable for both methods. The mean bias between methods calculated on 70 samples was 1.9 ng/mL. Linear regression analysis gave an R² of 0.94. 25-OHD2 is detectable only with the PerkinElmer kit in the derivatized assay option. Conclusion. Both methods are suitable for routine use. The Chromsystems kit minimizes manual sample preparation, requiring only protein precipitation, but, with our system, 25-OHD2 is not detectable. The PerkinElmer kit without derivatization does not guarantee acceptable performance with our LC-MS/MS system, as the sample is not purified online. Derivatization provides sufficient sensitivity for 25-OHD2 detection. PMID:23555079
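    Method-comparison statistics of the type reported (mean bias and regression R² on 70 paired samples) can be reproduced on hypothetical paired data as follows; the Bland-Altman-style limits of agreement are an addition for illustration, not a result from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical paired 25-OHD3 results (ng/mL) from two kits on the same 70 sera
kit_a = rng.uniform(8.0, 60.0, 70)
kit_b = kit_a + rng.normal(1.9, 3.0, 70)          # assumed mean bias of ~1.9 ng/mL

reg = stats.linregress(kit_a, kit_b)
diff = kit_b - kit_a
bias = diff.mean()
loa = (bias - 1.96 * diff.std(), bias + 1.96 * diff.std())
print(f"slope={reg.slope:.2f}, intercept={reg.intercept:.2f}, R^2={reg.rvalue ** 2:.3f}")
print(f"mean bias={bias:.2f} ng/mL, 95% limits of agreement {loa[0]:.1f} to {loa[1]:.1f} ng/mL")
```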

  10. Application of Laser Induced Breakdown Spectroscopy to the identification of emeralds from different synthetic processes

    NASA Astrophysics Data System (ADS)

    Agrosì, G.; Tempesta, G.; Scandale, E.; Legnaioli, S.; Lorenzetti, G.; Pagnotta, S.; Palleschi, V.; Mangone, A.; Lezzerini, M.

    2014-12-01

    Laser Induced Breakdown Spectroscopy can make a useful contribution in the mineralogical field, where quantitative chemical analyses (including the evaluation of light elements) can play a key role in studies of the origin of emeralds. In particular, chemical analyses allow determination of the trace elements, known as fingerprints, that can be used to study their provenance. Because it requires no sample preparation, the technique is particularly suitable for gemstones, which obviously must be studied in a non-destructive way. In this paper, the LIBS technique was applied to distinguish synthetic emeralds grown by the Biron hydrothermal method from those grown by the Chatham flux method. The analyses were performed by collinear double-pulse LIBS, which gives a signal enhancement useful for quantitative chemical analysis while guaranteeing minimal sample damage. In this way, a considerable improvement in the detection limit of the trace elements was obtained, whose determination is essential for establishing the origin of an emerald gemstone. The trace elements V, Cr, and Fe and their relative amounts allowed the correct attribution of the manufacturer. Two different methods for quantitative analysis were used in this study: the standard Calibration-Free LIBS (CF-LIBS) method and its recent evolution, the One Point Calibration LIBS (OPC-LIBS). This is the first approach to evaluating emerald origin by means of the LIBS technique.

  11. Calibration method for a large-scale structured light measurement system.

    PubMed

    Wang, Peng; Wang, Jianmei; Xu, Jing; Guan, Yong; Zhang, Guanglie; Chen, Ken

    2017-05-10

    The structured light method is an effective non-contact measurement approach. The calibration greatly affects the measurement precision of structured light systems. To construct a large-scale structured light system with high accuracy, a large-scale and precise calibration gauge is always required, which leads to an increased cost. To this end, in this paper, a calibration method with a planar mirror is proposed to reduce the calibration gauge size and cost. An out-of-focus camera calibration method is also proposed to overcome the defocusing problem caused by the shortened distance during the calibration procedure. The experimental results verify the accuracy of the proposed calibration method.

  12. Validation of a deformable image registration technique for cone beam CT-based dose verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moteabbed, M., E-mail: mmoteabbed@partners.org; Sharp, G. C.; Wang, Y.

    2015-01-15

    Purpose: As radiation therapy evolves toward more adaptive techniques, image guidance plays an increasingly important role, not only in patient setup but also in monitoring the delivered dose and adapting the treatment to patient changes. This study aimed to validate a method for evaluation of delivered intensity modulated radiotherapy (IMRT) dose based on multimodal deformable image registration (DIR) for prostate treatments. Methods: A pelvic phantom was scanned with CT and cone-beam computed tomography (CBCT). Both images were digitally deformed using two realistic patient-based deformation fields. The original CT was then registered to the deformed CBCT, resulting in a secondary deformed CT. The registration quality was assessed as the ability of the DIR method to recover the artificially induced deformations. The primary and secondary deformed CT images as well as the vector fields were compared to evaluate the efficacy of the registration method and its suitability for dose calculation. PLASTIMATCH, a free and open-source software package, was used for deformable image registration. A B-spline algorithm with optimized parameters was used to achieve the best registration quality. Geometric image evaluation was performed through voxel-based Hounsfield unit (HU) and vector field comparison. For dosimetric evaluation, IMRT treatment plans were created and optimized on the original CT image and recomputed on the two warped images to be compared. The dose volume histograms were compared for the warped structures that were identical in both warped images. This procedure was repeated for the phantom with full, half-full, and empty bladder. Results: The results indicated mean HU differences of up to 120 between registered and ground-truth deformed CT images. However, when the CBCT intensities were calibrated using a region of interest (ROI)-based calibration curve, these differences were reduced by up to 60%. Similarly, the mean differences in average vector field lengths decreased from 10.1 to 2.5 mm when CBCT was calibrated prior to registration. The results showed no dependence on the level of bladder filling. In comparison with the dose calculated on the primary deformed CT, differences in mean dose averaged over all organs were 0.2% and 3.9% for dose calculated on the secondary deformed CT with and without CBCT calibration, respectively, and 0.5% for dose calculated directly on the calibrated CBCT, for the full-bladder scenario. Gamma analysis for a distance to agreement of 2 mm and 2% of the prescribed dose indicated a pass rate of 100% for both cases involving calibrated CBCT and on average 86% without CBCT calibration. Conclusions: Using deformable registration of the planning CT images to evaluate the IMRT dose based on daily CBCTs was found feasible. The proposed method will provide an accurate dose distribution using planning CT and pretreatment CBCT data, avoiding the additional uncertainties introduced by CBCT inhomogeneity and artifacts. This is a necessary initial step toward future image-guided adaptive radiotherapy of the prostate.
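    The ROI-based CBCT intensity calibration mentioned in the Results could be sketched as a simple linear HU mapping fitted to paired ROI means (an illustrative stand-in; the actual calibration curve used in the study is not specified here):

```python
import numpy as np

# Hypothetical paired ROI mean intensities from the CBCT and the registered CT (reference)
cbct_roi_means = np.array([-950.0, -120.0, -60.0, 35.0, 210.0, 820.0])
ct_roi_means = np.array([-1000.0, -100.0, -50.0, 40.0, 250.0, 900.0])

# Linear CBCT -> CT-equivalent HU mapping fitted to the ROI pairs
slope, intercept = np.polyfit(cbct_roi_means, ct_roi_means, 1)

def calibrate_cbct(cbct_hu):
    """Apply the ROI-derived linear HU correction to CBCT voxel values."""
    return slope * np.asarray(cbct_hu) + intercept

print(f"HU mapping: CT = {slope:.3f} * CBCT + {intercept:.1f}")
print("example voxels:", np.round(calibrate_cbct([-900.0, 0.0, 500.0]), 1))
```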

  13. Accelerometer Method and Apparatus for Integral Display and Control Functions

    NASA Technical Reports Server (NTRS)

    Bozeman, Richard J., Jr. (Inventor)

    1996-01-01

    Method and apparatus for detecting mechanical vibrations and outputting a signal in response thereto. An accelerometer package having integral display and control functions is suitable for mounting upon the machinery to be monitored. Display circuitry provides signals to a bar graph display which may be used to monitor machine conditions over a period of time. Control switches may be set which correspond to elements in the bar graph to provide an alert if vibration signals increase in amplitude over a selected trip point. The circuitry is shock mounted within the accelerometer housing. The method provides for outputting a broadband analog accelerometer signal, integrating this signal to produce a velocity signal, integrating and calibrating the velocity signal before application to a display driver, and selecting a trip point at which a digitally compatible output signal is generated.
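    The signal chain described (integrate a broadband acceleration signal to velocity, apply a calibration factor, compare against a trip point) can be sketched as follows; the sampling rate, sensitivity and trip point are arbitrary illustrative values.

```python
import numpy as np

def acceleration_to_velocity(accel_g, fs_hz, sensitivity=1.0):
    """Trapezoidal integration of a detrended acceleration signal to velocity.
    `sensitivity` plays the role of a calibration factor applied after integration."""
    accel = np.asarray(accel_g) * 9.80665              # g -> m/s^2
    accel = accel - accel.mean()                       # crude detrend before integration
    dt = 1.0 / fs_hz
    return sensitivity * np.concatenate([[0.0], np.cumsum((accel[1:] + accel[:-1]) * 0.5 * dt)])

# Hypothetical 100 Hz vibration signal and a simple trip-point check
fs = 100.0
t = np.arange(0.0, 2.0, 1.0 / fs)
accel = 0.05 * np.sin(2 * np.pi * 30.0 * t)            # 0.05 g at 30 Hz
vel = acceleration_to_velocity(accel, fs)
trip_point = 2.0e-3                                     # m/s, chosen arbitrarily
print(f"peak velocity = {vel.max():.4f} m/s, alert = {bool(vel.max() > trip_point)}")
```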

  14. Determination of platinum in waste platinum-loaded carbon catalyst samples using microwave-assisted sample digestion and ICP-OES

    NASA Astrophysics Data System (ADS)

    Ma, Yinbiao; Wei, Xiaojuan

    2017-04-01

    A novel method for the determination of platinum in waste platinum-loaded carbon catalyst samples by inductively coupled plasma optical emission spectrometry was established, with the samples digested in a microwave oven with aqua regia. Experimental conditions, including the influence of the sample digestion method, digestion time, digestion temperature and interfering ions on the determination, were investigated. Under the optimized conditions, the linear range of the calibration graph for Pt was 0-200.00 mg L⁻¹, and the recovery was 95.67%-104.29%. The relative standard deviation (RSD) for Pt was 1.78%. The same samples were also analyzed by atomic absorption spectrometry, with consistent results, showing that the proposed method is suitable for the determination of platinum in waste platinum-loaded carbon catalyst samples.

  15. Accelerometer Method and Apparatus for Integral Display and Control Functions

    NASA Technical Reports Server (NTRS)

    Bozeman, Richard J., Jr. (Inventor)

    1998-01-01

    Method and apparatus for detecting mechanical vibrations and outputting a signal in response thereto is discussed. An accelerometer package having integral display and control functions is suitable for mounting upon the machinery to be monitored. Display circuitry provides signals to a bar graph display which may be used to monitor machine conditions over a period of time. Control switches may be set which correspond to elements in the bar graph to provide an alert if vibration signals increase in amplitude over a selected trip point. The circuitry is shock mounted within the accelerometer housing. The method provides for outputting a broadband analog accelerometer signal, integrating this signal to produce a velocity signal, integrating and calibrating the velocity signal before application to a display driver, and selecting a trip point at which a digitally compatible output signal is generated.

  16. Regionalisation of a distributed method for flood quantiles estimation: Revaluation of local calibration hypothesis to enhance the spatial structure of the optimised parameter

    NASA Astrophysics Data System (ADS)

    Odry, Jean; Arnaud, Patrick

    2016-04-01

    The SHYREG method (Aubert et al., 2014) associates a stochastic rainfall generator and a rainfall-runoff model to produce rainfall and flood quantiles on a 1 km² mesh covering the whole French territory. The rainfall generator is based on the description of rainy events by descriptive variables following probability distributions and is characterised by high stability. This stochastic generator is fully regionalised, and the rainfall-runoff transformation is calibrated with a single parameter. Thanks to the stability of the approach, calibration can be performed against only the flood quantiles associated with observed frequencies, which can be extracted from relatively short time series. The aggregation of SHYREG flood quantiles to the catchment scale is performed using a single areal reduction factor technique over the whole territory. Past studies demonstrated the accuracy of SHYREG flood quantile estimation for catchments where flow data are available (Arnaud et al., 2015). Nevertheless, the parameter of the rainfall-runoff model is calibrated independently for each target catchment. As a consequence, this parameter plays a corrective role and compensates for approximations and modelling errors, which makes it difficult to identify its proper spatial pattern. It is an inherent objective of the SHYREG approach to be completely regionalised in order to provide a complete and accurate flood quantile database throughout France. Consequently, it appears necessary to identify the model configuration in which the calibrated parameter can be regionalised with acceptable performance. The re-evaluation of some of the method's hypotheses is a necessary step before regionalisation. In particular, the inclusion or modification of the spatial variability of imposed parameters (such as production and transfer reservoir size, base-flow addition and the quantile aggregation function) should lead to more realistic values of the single calibrated parameter. The objective of the work presented here is to develop a SHYREG evaluation scheme focusing on both local and regional performance. Indeed, it is necessary to maintain the accuracy of at-site flood quantile estimation while identifying a configuration leading to a satisfactory spatial pattern of the calibrated parameter. This ability to be regionalised can be appraised by combining common regionalisation techniques with split-sample validation tests on a set of around 1,500 catchments representing the whole diversity of French physiography. Also, the presence of many nested catchments and a size-based split-sample validation make it possible to assess the relevance of the spatial structure of the calibrated parameter inside the largest catchments. The application of this multi-objective evaluation leads to the selection of a version of SHYREG more suitable for regionalisation. References: Arnaud, P., Cantet, P., Aubert, Y., 2015. Relevance of an at-site flood frequency analysis method for extreme events based on stochastic simulation of hourly rainfall. Hydrological Sciences Journal, in press. DOI:10.1080/02626667.2014.965174. Aubert, Y., Arnaud, P., Ribstein, P., Fine, J.A., 2014. The SHYREG flow method - application to 1605 basins in metropolitan France. Hydrological Sciences Journal, 59(5): 993-1005. DOI:10.1080/02626667.2014.902061

  17. Quantitative bioanalysis of strontium in human serum by inductively coupled plasma-mass spectrometry

    PubMed Central

    Somarouthu, Srikanth; Ohh, Jayoung; Shaked, Jonathan; Cunico, Robert L; Yakatan, Gerald; Corritori, Suzana; Tami, Joe; Foehr, Erik D

    2015-01-01

    Aim: A bioanalytical method using inductively coupled plasma-mass spectrometry to measure endogenous levels of strontium in human serum was developed and validated. Results & methodology: This article details the experimental procedures used for the method development and validation, demonstrating the application of the inductively coupled plasma-mass spectrometry method for quantification of strontium in human serum samples. The assay was validated for specificity, linearity, accuracy, precision, recovery and stability. Significant endogenous levels of strontium are present in human serum samples, ranging from 19 to 96 ng/ml with a mean of 34.6 ± 15.2 ng/ml (SD). Discussion & conclusion: Calibration procedures and sample pretreatment were simplified for high-throughput analysis. The validation demonstrates that the method is sensitive, selective for quantification of strontium (⁸⁸Sr) and suitable for routine clinical testing of strontium in human serum samples. PMID:28031925

  18. Geometrical Characterisation of a 2D Laser System and Calibration of a Cross-Grid Encoder by Means of a Self-Calibration Methodology

    PubMed Central

    Torralba, Marta; Díaz-Pérez, Lucía C.

    2017-01-01

    This article presents a self-calibration procedure and the experimental results for the geometrical characterisation of a 2D laser system operating along a large working range (50 mm × 50 mm) with submicrometre uncertainty. Its purpose is to correct the geometric errors of the 2D laser system setup generated when positioning the two laser heads and the plane mirrors used as reflectors. The non-calibrated artefact used in this procedure is a commercial grid encoder that is also a measuring instrument. Therefore, the self-calibration procedure also allows the determination of the geometrical errors of the grid encoder, including its squareness error. The precision of the proposed algorithm is tested using virtual data. Actual measurements are subsequently registered, and the algorithm is applied. Once the laser system is characterised, the error of the grid encoder is calculated along the working range, resulting in an expanded submicrometre calibration uncertainty (k = 2) for the X and Y axes. The results of the grid encoder calibration are comparable to the errors provided by the calibration certificate for its main central axes. It is, therefore, possible to confirm the suitability of the self-calibration methodology proposed in this article. PMID:28858239

  19. Item Banks for Substance Use from the Patient-Reported Outcomes Measurement Information System (PROMIS®): Severity of Use and Positive Appeal of Use*

    PubMed Central

    Pilkonis, Paul A.; Yu, Lan; Dodds, Nathan E.; Johnston, Kelly L.; Lawrence, Suzanne; Hilton, Thomas F.; Daley, Dennis C.; Patkar, Ashwin A.; McCarty, Dennis

    2015-01-01

    Background Two item banks for substance use were developed as part of the Patient-Reported Outcomes Measurement Information System (PROMIS®): severity of substance use and positive appeal of substance use. Methods Qualitative item analysis (including focus groups, cognitive interviewing, expert review, and item revision) reduced an initial pool of more than 5,300 items for substance use to 119 items included in field testing. Items were written in a first-person, past-tense format, with 5 response options reflecting frequency or severity. Both 30-day and 3-month time frames were tested. The calibration sample of 1,336 respondents included 875 individuals from the general population (ascertained through an internet panel) and 461 patients from addiction treatment centers participating in the National Drug Abuse Treatment Clinical Trials Network. Results Final banks of 37 and 18 items were calibrated for severity of substance use and positive appeal of substance use, respectively, using the two-parameter graded response model from item response theory (IRT). Initial calibrations were similar for the 30-day and 3-month time frames, and final calibrations used data combined across the time frames, making the items applicable with either interval. Seven-item static short forms were also developed from each item bank. Conclusions Test information curves showed that the PROMIS item banks provided substantial information in a broad range of severity, making them suitable for treatment, observational, and epidemiological research in both clinical and community settings. PMID:26423364

  20. A projector calibration method for monocular structured light system based on digital image correlation

    NASA Astrophysics Data System (ADS)

    Feng, Zhixin

    2018-02-01

    Projector calibration is crucial for a camera-projector three-dimensional (3-D) structured light measurement system, which has one camera and one projector. In this paper, a novel projector calibration method based on digital image correlation is proposed. In the method, the projector is viewed as an inverse camera, and a planar calibration board with feature points is used to calibrate the projector. During the calibration process, a random speckle pattern is projected onto the calibration board at different orientations to establish the correspondences between projector images and camera images. Thereby, a dataset for projector calibration is generated. The projector can then be calibrated using a well-established camera calibration algorithm. The experimental results confirm that the proposed method is accurate and reliable for projector calibration.
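    Treating the projector as an inverse camera means that, once board-point-to-projector-pixel correspondences are available (obtained via the speckle/DIC step in the abstract), a standard camera calibration routine can be reused. The sketch below feeds synthetic correspondences to OpenCV's calibrateCamera purely to illustrate that final step; it does not reproduce the DIC matching itself.

```python
import numpy as np
import cv2

# Planar board with a 9 x 7 grid of feature points at 20 mm pitch
board = np.zeros((9 * 7, 3), np.float32)
board[:, :2] = np.mgrid[0:9, 0:7].T.reshape(-1, 2) * 20.0

# Synthetic "projector view" of the board for 8 orientations, generated with assumed intrinsics
K_true = np.array([[1400.0, 0.0, 640.0], [0.0, 1400.0, 400.0], [0.0, 0.0, 1.0]])
obj_points, img_points = [], []
for i in range(8):
    rvec = np.array([[0.10 * i], [0.05 * i], [0.0]])
    tvec = np.array([[-80.0 + 5.0 * i], [-60.0], [600.0 + 10.0 * i]])
    projected, _ = cv2.projectPoints(board, rvec, tvec, K_true, None)
    obj_points.append(board)
    img_points.append(projected.astype(np.float32))

# Calibrate the "inverse camera" (the projector) exactly as one would a camera
rms, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, (1280, 800), None, None)
print("reprojection RMS (pixels):", rms)
print("recovered projector intrinsics:\n", np.round(K, 1))
```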

  1. Characterization of Material Response During Arc-Jet Testing with Optical Methods Status and Perspectives

    NASA Technical Reports Server (NTRS)

    Winter, Michael

    2012-01-01

    The characterization of ablation and recession of heat shield materials during arc-jet testing is an important step towards understanding the governing processes during these tests and therefore for a successful extrapolation of ground test data to flight. The behavior of ablative heat shield materials in a ground-based arc-jet facility is usually monitored through measurement of temperature distributions (across the surface and in depth) and through measurement of the final surface recession. These measurements are then used to calibrate/validate material thermal response codes, which have mathematical models with reasonably good fidelity to the physics and chemistry of ablation, and codes thus calibrated are used for predicting material behavior in flight environments. However, these thermal measurements only indirectly characterize the pyrolysis processes within an ablative material, and pyrolysis is the main effect during ablation. Quantification of the pyrolysis chemistry would therefore provide more definitive and useful data for validation of the material response codes. Information on the chemical products of ablation, at various levels of detail, can be obtained using optical methods. Suitable optical methods to measure the shape and composition of these layers (with emphasis on the blowing layer) during arc-jet testing are: 1) optical emission spectroscopy (OES), 2) filtered imaging, 3) laser-induced fluorescence (LIF), and 4) absorption spectroscopy. Several attempts have been made to optically measure the material response of ablative materials during arc-jet testing. Most recently, NH and OH have been identified in the boundary layer of a PICA ablator. These species are suitable candidates for detection through PLIF, which would enable a spatially resolved characterization of the blowing layer in terms of both its shape and composition. The recent emission spectroscopy data will be presented, and future experiments for a qualitative and quantitative characterization of the material response of ablative materials during arc-jet testing will be discussed.

  2. Comparison of green sample preparation techniques in the analysis of pyrethrins and pyrethroids in baby food by liquid chromatography-tandem mass spectrometry.

    PubMed

    Petrarca, Mateus Henrique; Ccanccapa-Cartagena, Alexander; Masiá, Ana; Godoy, Helena Teixeira; Picó, Yolanda

    2017-05-12

    A new selective and sensitive liquid chromatography triple quadrupole mass spectrometry method was developed for the simultaneous analysis of natural pyrethrin and synthetic pyrethroid residues in baby food. In this study, two sample preparation methods based on ultrasound-assisted dispersive liquid-liquid microextraction (UA-DLLME) and salting-out assisted liquid-liquid extraction (SALLE) were optimized and then compared against the performance criteria. Appropriate linearity in solvent and matrix-based calibrations, and suitable recoveries (75-120%) and precision (RSD values ≤ 16%), were achieved for the selected analytes by either sample preparation procedure. Both methods provided the analytical selectivity required for the monitoring of the insecticides in fruit-, cereal- and milk-based baby foods. SALLE, recognized for its cost-effectiveness and simple, fast execution, provided a lower enrichment factor; consequently, higher limits of quantification (LOQs) were obtained, some of them too high to meet the strict legislation regarding baby food. Nonetheless, the combination of ultrasound and DLLME also resulted in a high-sample-throughput and environmentally friendly method, whose LOQs were lower than the default maximum residue limit (MRL) of 10 μg kg⁻¹ set by the European Community for baby foods. In the commercial baby foods analyzed, cyhalothrin and etofenprox were detected in different samples, demonstrating the suitability of the proposed method for baby food control. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Establishment of the Ph. Eur. erythropoietin chemical reference substance batch 1.

    PubMed

    Burns, C; Bristow, A F; Buchheit, K H; Daas, A; Wierer, M; Costanzo, A

    2015-01-01

    The Erythropoietin (EPO) European Pharmacopoeia (Ph. Eur.) Biological Reference Preparation (BRP) batch 3 was calibrated in 2006 by in vivo bioassay and was used as a reference preparation for these assays as well as for the physicochemical methods in the Ph. Eur. monograph Erythropoietin concentrated solution (1316). In order to avoid the frequent replacement of this standard and thus reduce the use of animals, a new EPO Chemical Reference Substance (CRS) was established to be used solely for the physicochemical methods. Here we report the outcome of a collaborative study aimed at demonstrating the suitability of the candidate CRS (cCRS) as a reference for the physicochemical methods in the Ph. Eur. monograph. Results from the study demonstrated that for the physicochemical methods currently required in the monograph (capillary zone electrophoresis (CZE), polyacrylamide gel electrophoresis (PAGE)/immunoblotting and peptide mapping), the cCRS is essentially identical to the existing BRP. However, data also indicated that, for the physicochemical methods under consideration for inclusion in a revised monograph (test for oxidised forms and glycan mapping), the suitability of the cCRS as a reference needs to be confirmed with additional work. Further to completion of the study, the Ph. Eur. Commission adopted the cCRS as "Erythropoietin for physicochemical tests CRS batch 1" to be used for CZE, PAGE/immunoblotting and peptide mapping.

  4. The role of adequate reference materials in density measurements in hemodialysis

    NASA Astrophysics Data System (ADS)

    Furtado, A.; Moutinho, J.; Moura, S.; Oliveira, F.; Filipe, E.

    2015-02-01

    In hemodialysis, oscillation-type density meters are used to measure the density of the acid component of the dialysate solutions used in the treatment of kidney patients. An incorrect density determination of this solution used in hemodialysis treatments can cause several adverse events in patients. Therefore, despite the tight control of density meter calibration results by Fresenius Medical Care (FME), this study shows the benefits of mimicking the matrix usually measured in order to produce suitable reference materials for density meter calibrations.

  5. Validation of vision-based obstacle detection algorithms for low-altitude helicopter flight

    NASA Technical Reports Server (NTRS)

    Suorsa, Raymond; Sridhar, Banavar

    1991-01-01

    A validation facility in use at the NASA Ames Research Center is described, aimed at testing vision-based obstacle detection and range estimation algorithms suitable for low-level helicopter flight. The facility is capable of processing hundreds of frames of calibrated multicamera 6 degree-of-freedom motion image sequences, generating calibrated multicamera laboratory images using convenient window-based software, and viewing range estimation results from different algorithms along with truth data using powerful window-based visualization software.

  6. Adulteration of diesel/biodiesel blends by vegetable oil as determined by Fourier transform (FT) near infrared spectrometry and FT-Raman spectroscopy.

    PubMed

    Oliveira, Flavia C C; Brandão, Christian R R; Ramalho, Hugo F; da Costa, Leonardo A F; Suarez, Paulo A Z; Rubim, Joel C

    2007-03-28

    In this work it is shown that the routine ASTM methods (ASTM 4052, ASTM D 445, ASTM D 4737, ASTM D 93, and ASTM D 86) recommended by the ANP (the Brazilian National Agency for Petroleum, Natural Gas and Biofuels) to determine the quality of diesel/biodiesel blends are not suitable for preventing the adulteration of B2 or B5 blends with vegetable oils. Considering past and current problems with fuel adulteration in Brazil, we have investigated the application of vibrational spectroscopy (Fourier transform (FT) near-infrared spectrometry and FT-Raman) to identify adulteration of B2 and B5 blends with vegetable oils. Partial least squares regression (PLS), principal component regression (PCR), and artificial neural network (ANN) calibration models were designed and their relative performances evaluated by external validation using the F-test. The PCR, PLS, and ANN calibration models based on FT near-infrared spectrometry and FT-Raman spectroscopy were designed using 120 samples. A further 62 samples were used in the validation and external validation, for a total of 182 samples. The results show that, among the designed calibration models, ANN/FT-Raman presented the best accuracy (0.028%, w/w) for the samples used in the external validation.

  7. Technical note: An empirical method for absolute calibration of coccolith thickness

    NASA Astrophysics Data System (ADS)

    González-Lemos, Saúl; Guitián, José; Fuertes, Miguel-Ángel; Flores, José-Abel; Stoll, Heather M.

    2018-02-01

    As major calcifiers in the open ocean, coccolithophores play a key role in the marine carbon cycle. Because they may be sensitive to changing CO2 and ocean acidification, there is significant interest in quantifying past and present variations in their cellular calcification by quantifying the thickness of the coccoliths or calcite plates that cover their cells. Polarized light microscopy has emerged as a key tool for quantifying the thickness of these calcite plates, but the reproducibility and accuracy of such determinations have been limited by the absence of suitable calibration materials in the thickness range of coccoliths (0-4 µm). Here, we describe the fabrication of a calcite wedge with a constant slope over this thickness range, and the independent determination of calcite thickness along the wedge profile. We show how the calcite wedge provides more robust calibrations in the 0 to 1.55 µm range than previous approaches using rhabdoliths. We show the particular advantages of the calcite wedge approach for developing equations to relate thickness to the interference colors that arise in calcite in the thickness range between 1.55 and 4 µm. The calcite wedge approach can be applied to develop equations relevant to the particular light spectra and intensity of any polarized light microscope system and could significantly improve inter-laboratory data comparability.
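
    A hedged sketch of how a wedge-based calibration can be applied in the monotonic 0-1.55 µm range: known thicknesses along the wedge are paired with measured grey levels, and the inverted relation converts grey levels of coccolith images back to thickness. The brightness model and numbers below are synthetic and are not the authors' calibration equations.

```python
# Synthetic sketch of a wedge-based thickness calibration for the monotonic
# 0-1.55 um range: wedge thicknesses are paired with measured grey levels and
# the inverted relation converts image grey levels back to thickness.
import numpy as np
from scipy.interpolate import interp1d

thickness_um = np.linspace(0.0, 1.55, 50)                     # known thickness along the wedge
grey = 255.0 * np.sin(np.pi * thickness_um / (2 * 1.55))**2   # toy brightness model (monotonic)

to_thickness = interp1d(grey, thickness_um, bounds_error=False, fill_value="extrapolate")

measured_grey = np.array([40.0, 120.0, 200.0])                # grey levels from coccolith images
print(to_thickness(measured_grey))                            # estimated thicknesses in um
```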

  8. Expansion of the Scope of AOAC First Action Method 2012.25--Single-Laboratory Validation of Triphenylmethane Dye and Leuco Metabolite Analysis in Shrimp, Tilapia, Catfish, and Salmon by LC-MS/MS.

    PubMed

    Andersen, Wendy C; Casey, Christine R; Schneider, Marilyn J; Turnipseed, Sherri B

    2015-01-01

    Prior to conducting a collaborative study of AOAC First Action 2012.25 LC-MS/MS analytical method for the determination of residues of three triphenylmethane dyes (malachite green, crystal violet, and brilliant green) and their metabolites (leucomalachite green and leucocrystal violet) in seafood, a single-laboratory validation of method 2012.25 was performed to expand the scope of the method to other seafood matrixes including salmon, catfish, tilapia, and shrimp. The validation included the analysis of fortified and incurred residues over multiple weeks to assess analyte stability in matrix at -80°C, a comparison of calibration methods over the range 0.25 to 4 μg/kg, study of matrix effects for analyte quantification, and qualitative identification of targeted analytes. Method accuracy ranged from 88 to 112% with 13% RSD or less for samples fortified at 0.5, 1.0, and 2.0 μg/kg. Analyte identification and determination limits were determined by procedures recommended both by the U. S. Food and Drug Administration and the European Commission. Method detection limits and decision limits ranged from 0.05 to 0.24 μg/kg and 0.08 to 0.54 μg/kg, respectively. AOAC First Action Method 2012.25 with an extracted matrix calibration curve and internal standard correction is suitable for the determination of triphenylmethane dyes and leuco metabolites in salmon, catfish, tilapia, and shrimp by LC-MS/MS at a residue determination level of 0.5 μg/kg or below.
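
    As a minimal illustration of the quantification scheme named in the conclusion (an extracted-matrix calibration curve with internal-standard correction), the sketch below builds a linear calibration from response ratios and back-calculates an unknown. Concentrations and peak areas are invented.

```python
# Extracted-matrix calibration with internal-standard correction (invented numbers).
import numpy as np

conc = np.array([0.25, 0.5, 1.0, 2.0, 4.0])            # ug/kg, fortified-matrix calibrants
analyte_area = np.array([510., 1020., 2090., 4150., 8300.])
istd_area = np.array([10000., 9900., 10100., 10050., 9950.])

ratio = analyte_area / istd_area                        # response ratio corrects for drift/losses
slope, intercept = np.polyfit(conc, ratio, 1)           # linear calibration curve

def quantify(sample_area, sample_istd_area):
    """Back-calculate the concentration (ug/kg) of an unknown from its peak areas."""
    return (sample_area / sample_istd_area - intercept) / slope

print(f"estimated concentration: {quantify(1500.0, 9800.0):.2f} ug/kg")
```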

  9. Quantifying uncertainties in streamflow predictions through signature based inference of hydrological model parameters

    NASA Astrophysics Data System (ADS)

    Fenicia, Fabrizio; Reichert, Peter; Kavetski, Dmitri; Albert, Carlo

    2016-04-01

    The calibration of hydrological models based on signatures (e.g. Flow Duration Curves - FDCs) is often advocated as an alternative to model calibration based on the full time series of system responses (e.g. hydrographs). Signature based calibration is motivated by various arguments. From a conceptual perspective, calibration on signatures is a way to filter out errors that are difficult to represent when calibrating on the full time series. Such errors may for example occur when observed and simulated hydrographs are shifted, either on the "time" axis (i.e. left or right), or on the "streamflow" axis (i.e. above or below). These shifts may be due to errors in the precipitation input (time or amount), and if not properly accounted for in the likelihood function, may cause biased parameter estimates (e.g. estimated model parameters that do not reproduce the recession characteristics of a hydrograph). From a practical perspective, signature based calibration is seen as a possible solution for making predictions in ungauged basins. Where streamflow data are not available, it may in fact be possible to reliably estimate streamflow signatures. Previous research has for example shown how FDCs can be reliably estimated at ungauged locations based on climatic and physiographic influence factors. Typically, the goal of signature based calibration is not the prediction of the signatures themselves, but the prediction of the system responses. Ideally, the prediction of system responses should be accompanied by a reliable quantification of the associated uncertainties. Previous approaches for signature based calibration, however, do not allow reliable estimates of streamflow predictive distributions. Here, we illustrate how the Bayesian approach can be employed to obtain reliable streamflow predictive distributions based on signatures. A case study is presented, where a hydrological model is calibrated on FDCs and additional signatures. We propose an approach where the likelihood function for the signatures is derived from the likelihood for streamflow (rather than using an "ad-hoc" likelihood for the signatures as done in previous approaches). This likelihood is not easily tractable analytically and we therefore cannot apply "simple" MCMC methods. This numerical problem is solved using Approximate Bayesian Computation (ABC). Our results indicate that the proposed approach is suitable for producing reliable streamflow predictive distributions based on calibration to signature data. Moreover, our results provide indications on which signatures are more appropriate to represent the information content of the hydrograph.
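
    To make the ABC step concrete, here is a toy rejection-sampling sketch: a parameter is drawn from a prior, a stand-in model is run, a flow-duration-curve signature is computed, and draws whose signature lies close to the observed one are retained. The model, prior and tolerance are invented; the study's likelihood-based ABC scheme is more sophisticated.

```python
# Toy ABC rejection sampler for signature-based calibration (not the study's scheme).
import numpy as np

rng = np.random.default_rng(1)

def toy_model(k, n=500):
    """Stand-in rainfall-runoff model: an exponential filter of random forcing."""
    forcing = rng.gamma(2.0, 1.0, size=n)
    q = np.zeros(n)
    for t in range(1, n):
        q[t] = (1.0 - k) * q[t - 1] + k * forcing[t]
    return q

def fdc_signature(q, quantiles=(0.1, 0.5, 0.9)):
    """Flow-duration-curve signature: a few exceedance quantiles of the flow series."""
    return np.quantile(q, quantiles)

obs_signature = fdc_signature(toy_model(0.3))        # pretend this came from observations

accepted = []
for _ in range(2000):
    k = rng.uniform(0.01, 0.99)                      # draw from the prior
    distance = np.linalg.norm(fdc_signature(toy_model(k)) - obs_signature)
    if distance < 0.3:                               # ABC tolerance (arbitrary)
        accepted.append(k)

print(f"accepted {len(accepted)} draws, posterior mean k ~ {np.mean(accepted):.2f}")
```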

  10. Preparation of high purity plutonium oxide for radiochemistry instrument calibration standards and working standards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, A.S.; Stalnaker, N.D.

    1997-04-01

    Due to the lack of suitable high level National Institute of Standards and Technology (NIST) traceable plutonium solution standards from the NIST or commercial vendors, the CST-8 Radiochemistry team at Los Alamos National Laboratory (LANL) has prepared instrument calibration standards and working standards from a well-characterized plutonium oxide. All the aliquoting steps were performed gravimetrically. When a {sup 241}Am standardized solution obtained from a commercial vendor was compared to these calibration solutions, the results agreed to within 0.04% for the total alpha activity. The aliquots of the plutonium standard solutions and dilutions were sealed in glass ampules for long term storage.

  11. New BRDF Model for Desert and Gobi Using Equivalent Mirror Plane Method, Establishment and Validation

    NASA Astrophysics Data System (ADS)

    Li, Y.; Rong, Z.

    2017-12-01

    The surface Bidirectional Reflectance Distribution Function (BRDF) is a key parameter that affects the vicarious calibration accuracy of visible-channel remote sensing instruments. In the past 30 years, many studies have been made and a variety of models have been established. Among them, the Ross-Li model has been highly approved and widely used. Unfortunately, the model does not suit desert and Gobi surfaces very well, because the scattering kernel it contains requires factors such as plant height and plant spacing. A new BRDF model for surfaces without vegetation, mainly intended for use in remote sensing vicarious calibration, is established here; it is called the Equivalent Mirror Plane (EMP) BRDF. It is used to characterize the bidirectional reflectance of near-Lambertian surfaces. The accuracy of the EMP BRDF model is validated against the directional reflectance data measured on the Dunhuang Gobi and compared to the Ross-Li model. Results show that the regression accuracy of the new model is 0.828, which is similar to that of the Ross-Li model (0.825). Because of its simple form (it contains only four polynomial terms) and simple principle (it is derived from the Fresnel reflection principle and does not include any vegetation parameters), it is more suitable for near-Lambertian surfaces such as Gobi, desert, the lunar surface and reference panels. Results also showed that the new model maintains high accuracy and stability with sparse observations, which is very important for the retrieval requirements of daily-updated BRDF remote sensing products.

  12. Uncertainty of future projections of species distributions in mountainous regions.

    PubMed

    Tang, Ying; Winkler, Julie A; Viña, Andrés; Liu, Jianguo; Zhang, Yuanbin; Zhang, Xiaofeng; Li, Xiaohong; Wang, Fang; Zhang, Jindong; Zhao, Zhiqiang

    2018-01-01

    Multiple factors introduce uncertainty into projections of species distributions under climate change. The uncertainty introduced by the choice of baseline climate information used to calibrate a species distribution model and to downscale global climate model (GCM) simulations to a finer spatial resolution is a particular concern for mountainous regions, as the spatial resolution of climate observing networks is often insufficient to detect the steep climatic gradients in these areas. Using the maximum entropy (MaxEnt) modeling framework together with occurrence data on 21 understory bamboo species distributed across the mountainous geographic range of the Giant Panda, we examined the differences in projected species distributions obtained from two contrasting sources of baseline climate information, one derived from spatial interpolation of coarse-scale station observations and the other derived from fine-spatial resolution satellite measurements. For each bamboo species, the MaxEnt model was calibrated separately for the two datasets and applied to 17 GCM simulations downscaled using the delta method. Greater differences in the projected spatial distributions of the bamboo species were observed for the models calibrated using the different baseline datasets than between the different downscaled GCM simulations for the same calibration. In terms of the projected future climatically-suitable area by species, quantification using a multi-factor analysis of variance suggested that the sum of the variance explained by the baseline climate dataset used for model calibration and the interaction between the baseline climate data and the GCM simulation via downscaling accounted for, on average, 40% of the total variation among the future projections. Our analyses illustrate that the combined use of gridded datasets developed from station observations and satellite measurements can help estimate the uncertainty introduced by the choice of baseline climate information to the projected changes in species distribution.

  13. Uncertainty of future projections of species distributions in mountainous regions

    PubMed Central

    Tang, Ying; Viña, Andrés; Liu, Jianguo; Zhang, Yuanbin; Zhang, Xiaofeng; Li, Xiaohong; Wang, Fang; Zhang, Jindong; Zhao, Zhiqiang

    2018-01-01

    Multiple factors introduce uncertainty into projections of species distributions under climate change. The uncertainty introduced by the choice of baseline climate information used to calibrate a species distribution model and to downscale global climate model (GCM) simulations to a finer spatial resolution is a particular concern for mountainous regions, as the spatial resolution of climate observing networks is often insufficient to detect the steep climatic gradients in these areas. Using the maximum entropy (MaxEnt) modeling framework together with occurrence data on 21 understory bamboo species distributed across the mountainous geographic range of the Giant Panda, we examined the differences in projected species distributions obtained from two contrasting sources of baseline climate information, one derived from spatial interpolation of coarse-scale station observations and the other derived from fine-spatial resolution satellite measurements. For each bamboo species, the MaxEnt model was calibrated separately for the two datasets and applied to 17 GCM simulations downscaled using the delta method. Greater differences in the projected spatial distributions of the bamboo species were observed for the models calibrated using the different baseline datasets than between the different downscaled GCM simulations for the same calibration. In terms of the projected future climatically-suitable area by species, quantification using a multi-factor analysis of variance suggested that the sum of the variance explained by the baseline climate dataset used for model calibration and the interaction between the baseline climate data and the GCM simulation via downscaling accounted for, on average, 40% of the total variation among the future projections. Our analyses illustrate that the combined use of gridded datasets developed from station observations and satellite measurements can help estimate the uncertainty introduced by the choice of baseline climate information to the projected changes in species distribution. PMID:29320501

  14. Two imaging techniques for 3D quantification of pre-cementation space for CAD/CAM crowns.

    PubMed

    Rungruanganunt, Patchanee; Kelly, J Robert; Adams, Douglas J

    2010-12-01

    Internal three-dimensional (3D) "fit" of prostheses to prepared teeth is likely more important clinically than "fit" judged only at the level of the margin (i.e. marginal "opening"). This work evaluates two techniques for quantitatively defining 3D "fit", both using pre-cementation space impressions: X-ray microcomputed tomography (micro-CT) and quantitative optical analysis. Both techniques are of interest for comparison of CAD/CAM system capabilities and for documenting "fit" as part of clinical studies. Pre-cementation space impressions were taken of a single zirconia coping on its die using a low viscosity poly(vinyl siloxane) impression material. Calibration specimens of this material were fabricated between the measuring platens of a micrometre. Both calibration curves and pre-cementation space impression data sets were obtained by examination using micro-CT and quantitative optical analysis. Regression analysis was used to compare calibration curves with calibration sets. Micro-CT calibration data showed tighter 95% confidence intervals and was able to measure over a wider thickness range than for the optical technique. Regions of interest (e.g., lingual, cervical) were more easily analysed with optical image analysis and this technique was more suitable for extremely thin impression walls (<10-15μm). Specimen preparation is easier for micro-CT and segmentation parameters appeared to capture dimensions accurately. Both micro-CT and the optical method can be used to quantify the thickness of pre-cementation space impressions. Each has advantages and limitations but either technique has the potential for use as part of clinical studies or CAD/CAM protocol optimization. Copyright © 2010 Elsevier Ltd. All rights reserved.

  15. Method development and validation for simultaneous quantification of 15 drugs of abuse and prescription drugs and 7 of their metabolites in whole blood relevant in the context of driving under the influence of drugs--usefulness of multi-analyte calibration.

    PubMed

    Steuer, Andrea E; Forss, Anna-Maria; Dally, Annika M; Kraemer, Thomas

    2014-11-01

    In the context of driving under the influence of drugs (DUID), not only common drugs of abuse may have an influence, but also medications with similar mechanisms of action. Simultaneous quantification of a variety of drugs and medications relevant in this context allows faster and more effective analyses. Therefore, multi-analyte approaches have gained more and more popularity in recent years. Usually, calibration curves for such procedures contain a mixture of all analytes, which might lead to mutual interferences. In this study we investigated whether the use of such mixtures leads to reliable results for authentic samples containing only one or two analytes. Five hundred microliters of whole blood were extracted by routine solid-phase extraction (SPE, HCX). Analysis was performed on an ABSciex 3200 QTrap instrument with ESI+ in scheduled MRM mode. The method was fully validated according to international guidelines including selectivity, recovery, matrix effects, accuracy and precision, stabilities, and limit of quantification. The selected SPE provided recoveries >60% for all analytes except 6-monoacetylmorphine (MAM) with coefficients of variation (CV) below 15% or 20% for quality controls (QC) LOW and HIGH, respectively. Ion suppression >30% was found for benzoylecgonine, hydrocodone, hydromorphone, MDA, oxycodone, and oxymorphone at QC LOW, however CVs were always below 10% (n=6 different whole blood samples). Accuracy and precision criteria were fulfilled for all analytes except for MAM. Systematic investigation of accuracy determined for QC MED in a multi-analyte mixture compared to samples containing only single analytes revealed no relevant differences for any analyte, indicating that a multi-analyte calibration is suitable for the presented method. Comparison of approximately 60 samples to a former GC-MS method showed good correlation. The newly validated method was successfully applied to more than 1600 routine samples and 3 proficiency tests. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  16. Design and construction of high-frequency magnetic probe system on the HL-2A tokamak

    NASA Astrophysics Data System (ADS)

    Liang, S. Y.; Ji, X. Q.; Sun, T. F.; Xu, Yuan; Lu, J.; Yuan, B. S.; Ren, L. L.; Yang, Q. W.

    2017-12-01

    A high-frequency magnetic probe system has been designed, calibrated and constructed on the HL-2A tokamak. To investigate the factors which affect the probe frequency response, the inductance and capacitance in the probe system are analyzed using an equivalent circuit. Suitable coil dimensions and turn numbers, as well as the length of the transmission cable, are optimized based on theory and detailed tests during calibration. To deal with the frequency-response limitation and bake-out, a grooved ceramic technique is used and the probe is wound with a bare copper wire. A cascade filter is manufactured with a suitable bandwidth as well as a good phase consistency between channels. The system has been used in experiments to measure high-frequency (≤300 kHz) magnetohydrodynamic fluctuations, which meets the requirements of physical analysis on HL-2A.

  17. Effects of Energy Development on Hydrologic Response: a Multi-Scale Modeling Approach

    NASA Astrophysics Data System (ADS)

    Vithanage, J.; Miller, S. N.; Berendsen, M.; Caffrey, P. A.; Bellis, J.; Schuler, R.

    2013-12-01

    Potential impacts of energy development on surface hydrology in western Wyoming were assessed using spatially explicit hydrological models. Currently there are proposals to develop over 800 new oil and gas wells in the 218,000-acre LaBarge development area that abuts the Wyoming Range and contributes runoff to the Upper Green River (approximately 1 well per 2 square miles). The intensity of development raises questions relating to impacts on the hydrological cycle, water quality, erosion and sedimentation. We developed landscape management scenarios relating to current disturbance and proposed actions put forth by the energy operators to provide inputs to spatially explicit hydrologic models. Differences between the scenarios were derived to quantify the changes and analyse the impacts on the project area. To perform this research, the Automated Watershed Assessment Tool (AGWA) was enhanced by adding different management practices suitable for the region, including the reclamation of disturbed lands over time. The AGWA interface was used to parameterize and execute two hydrologic models: the Soil and Water Assessment Tool (SWAT) and the KINEmatic Runoff and EROSion model (KINEROS2). We used freely available data including SSURGO soils, Multi-Resolution Landscape Consortium (MRLC) land cover, and 10m resolution terrain data to derive suitable initial parameters for the models. The SWAT model was manually calibrated using an innovative method at the monthly level; observed daily rainfall and temperature inputs were used as a function of elevation considering the local climate effects. Calibration at a higher temporal resolution was not possible due to a lack of adequate climate and runoff data. The Nash-Sutcliffe efficiencies of two calibrated watersheds at the monthly scale exceeded 0.95. Results of the AGWA/SWAT simulations indicate a range of sensitivity to disturbance due to heterogeneous soil and terrain characteristics over a simulated time period of 10 years. The KINEROS2 model, a fully distributed physically based event model, was used to simulate runoff and erosion in areas identified by SWAT as being of particular concern due to their vulnerability. Results were used to find the most suitable locations for placing the well pads and infrastructure that limited overall degradation and downstream delivery of excess water and sediment. Results are highly relevant to land managers interested in optimizing the placement of roads, well pads and other infrastructure that results in disturbance and can be used to design monitoring and mitigation plans post development.
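
    For reference, the Nash-Sutcliffe efficiency reported for the calibrated watersheds is one minus the ratio of the residual sum of squares to the total variance of the observations about their mean; a short sketch with placeholder monthly flows is given below.

```python
# Nash-Sutcliffe efficiency for a monthly calibration (placeholder flows).
import numpy as np

def nash_sutcliffe(observed, simulated):
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

obs = np.array([12.0, 30.0, 55.0, 40.0, 18.0, 9.0])    # monthly runoff, arbitrary units
sim = np.array([13.5, 28.0, 52.0, 43.0, 17.0, 10.0])
print(f"NSE = {nash_sutcliffe(obs, sim):.3f}")          # values near 1 indicate a good fit
```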

  18. Determination of psilocybin in Psilocybe semilanceata by capillary zone electrophoresis.

    PubMed

    Pedersen-Bjergaard, S; Sannes, E; Rasmussen, K E; Tønnesen, F

    1997-07-04

    A capillary zone electrophoretic (CZE) method was developed for the rapid determination of psilocybin in Psilocybe semilanceata. Following a simple two step extraction with 3.0+2.0 ml methanol, the hallucinogenic compound was effectively separated from matrix components by CZE utilizing a 10 mM borate-phosphate running buffer adjusted to pH 11.5. The identity of psilocybin was confirmed by migration time information and by UV spectra, while quantitation was accomplished utilizing barbital as internal standard. The calibration curve for psilocybin was linear within 0.01-1 mg/ml, while intra-day and inter-day variations of quantitative data were 0.5 and 2.5% R.S.D., respectively. In addition to psilocybin, the method was also suitable for the determination of the structurally related compound baeocystin.

  19. Proposed low-energy absolute calibration of nuclear recoils in a dual-phase noble element TPC using D-D neutron scattering kinematics

    NASA Astrophysics Data System (ADS)

    Verbus, J. R.; Rhyne, C. A.; Malling, D. C.; Genecov, M.; Ghosh, S.; Moskowitz, A. G.; Chan, S.; Chapman, J. J.; de Viveiros, L.; Faham, C. H.; Fiorucci, S.; Huang, D. Q.; Pangilinan, M.; Taylor, W. C.; Gaitskell, R. J.

    2017-04-01

    We propose a new technique for the calibration of nuclear recoils in large noble element dual-phase time projection chambers used to search for WIMP dark matter in the local galactic halo. This technique provides an in situ measurement of the low-energy nuclear recoil response of the target media using the measured scattering angle between multiple neutron interactions within the detector volume. The low-energy reach and reduced systematics of this calibration have particular significance for the low-mass WIMP sensitivity of several leading dark matter experiments. Multiple strategies for improving this calibration technique are discussed, including the creation of a new type of quasi-monoenergetic neutron source with a minimum possible peak energy of 272 keV. We report results from a time-of-flight-based measurement of the neutron energy spectrum produced by an Adelphi Technology, Inc. DD108 neutron generator, confirming its suitability for the proposed nuclear recoil calibration.

  20. The Majorana Demonstrator calibration system

    DOE PAGES

    Abgrall, N.; Arnquist, I. J.; Avignone, III, F. T.; ...

    2017-08-08

    The Majorana Collaboration is searching for the neutrinoless double-beta decay of the nucleus 76Ge. The Majorana Demonstrator is an array of germanium detectors deployed with the aim of implementing background reduction techniques suitable for a 1-ton 76Ge-based search. The ultra low-background conditions require regular calibrations to verify proper function of the detectors. Radioactive line sources can be deployed around the cryostats containing the detectors for regular energy calibrations. When measuring in low-background mode, these line sources have to be stored outside the shielding so they do not contribute to the background. The deployment and the retraction of the source are designed to be controlled by the data acquisition system and do not require any direct human interaction. In this study, we detail the design requirements and implementation of the calibration apparatus, which provides the event rates needed to define the pulse-shape cuts and energy calibration used in the final analysis as well as data that can be compared to simulations.

  1. The MAJORANA DEMONSTRATOR calibration system

    NASA Astrophysics Data System (ADS)

    Abgrall, N.; Arnquist, I. J.; Avignone, F. T., III; Barabash, A. S.; Bertrand, F. E.; Boswell, M.; Bradley, A. W.; Brudanin, V.; Busch, M.; Buuck, M.; Caldwell, T. S.; Christofferson, C. D.; Chu, P.-H.; Cuesta, C.; Detwiler, J. A.; Dunagan, C.; Efremenko, Yu.; Ejiri, H.; Elliott, S. R.; Fu, Z.; Gehman, V. M.; Gilliss, T.; Giovanetti, G. K.; Goett, J.; Green, M. P.; Gruszko, J.; Guinn, I. S.; Guiseppe, V. E.; Haufe, C. R.; Henning, R.; Hoppe, E. W.; Howe, M. A.; Jasinski, B. R.; Keeter, K. J.; Kidd, M. F.; Konovalov, S. I.; Kouzes, R. T.; Lopez, A. M.; MacMullin, J.; Martin, R. D.; Massarczyk, R.; Meijer, S. J.; Mertens, S.; Orrell, J. L.; O'Shaughnessy, C.; Poon, A. W. P.; Radford, D. C.; Rager, J.; Reine, A. L.; Rielage, K.; Robertson, R. G. H.; Shanks, B.; Shirchenko, M.; Suriano, A. M.; Tedeschi, D.; Trimble, J. E.; Varner, R. L.; Vasilyev, S.; Vetter, K.; Vorren, K.; White, B. R.; Wilkerson, J. F.; Wiseman, C.; Xu, W.; Yu, C.-H.; Yumatov, V.; Zhitnikov, I.; Zhu, B. X.

    2017-11-01

    The MAJORANA Collaboration is searching for the neutrinoless double-beta decay of the nucleus 76Ge. The MAJORANA DEMONSTRATOR is an array of germanium detectors deployed with the aim of implementing background reduction techniques suitable for a 1-ton 76Ge-based search. The ultra low-background conditions require regular calibrations to verify proper function of the detectors. Radioactive line sources can be deployed around the cryostats containing the detectors for regular energy calibrations. When measuring in low-background mode, these line sources have to be stored outside the shielding so they do not contribute to the background. The deployment and the retraction of the source are designed to be controlled by the data acquisition system and do not require any direct human interaction. In this paper, we detail the design requirements and implementation of the calibration apparatus, which provides the event rates needed to define the pulse-shape cuts and energy calibration used in the final analysis as well as data that can be compared to simulations.

  2. The Majorana Demonstrator calibration system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abgrall, N.; Arnquist, I. J.; Avignone, III, F. T.

    The Majorana Collaboration is searching for the neutrinoless double-beta decay of the nucleus 76Ge. The Majorana Demonstrator is an array of germanium detectors deployed with the aim of implementing background reduction techniques suitable for a 1-ton 76Ge-based search. The ultra low-background conditions require regular calibrations to verify proper function of the detectors. Radioactive line sources can be deployed around the cryostats containing the detectors for regular energy calibrations. When measuring in low-background mode, these line sources have to be stored outside the shielding so they do not contribute to the background. The deployment and the retraction of the source are designed to be controlled by the data acquisition system and do not require any direct human interaction. In this study, we detail the design requirements and implementation of the calibration apparatus, which provides the event rates needed to define the pulse-shape cuts and energy calibration used in the final analysis as well as data that can be compared to simulations.

  3. Automatic multi-camera calibration for deployable positioning systems

    NASA Astrophysics Data System (ADS)

    Axelsson, Maria; Karlsson, Mikael; Rudner, Staffan

    2012-06-01

    Surveillance with automated positioning and tracking of subjects and vehicles in 3D is desired in many defence and security applications. Camera systems with stereo or multiple cameras are often used for 3D positioning. In such systems, accurate camera calibration is needed to obtain a reliable 3D position estimate. There is also a need for automated camera calibration to facilitate fast deployment of semi-mobile multi-camera 3D positioning systems. In this paper we investigate a method for automatic calibration of the extrinsic camera parameters (relative camera pose and orientation) of a multi-camera positioning system. It is based on estimation of the essential matrix between each camera pair using the 5-point method for intrinsically calibrated cameras. The method is compared to a manual calibration method using real HD video data from a field trial with a multicamera positioning system. The method is also evaluated on simulated data from a stereo camera model. The results show that the reprojection error of the automated camera calibration method is close to or smaller than the error for the manual calibration method and that the automated calibration method can replace the manual calibration.
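
    For readers unfamiliar with the pairwise step underlying such automatic extrinsic calibration, the sketch below estimates an essential matrix with OpenCV's 5-point/RANSAC routine and recovers the relative pose from synthetic, intrinsically calibrated correspondences. The intrinsics, scene geometry and point counts are invented; this is not the paper's implementation.

```python
# Sketch of the pairwise step: estimate the essential matrix with the 5-point
# method (RANSAC) and recover relative pose, using synthetic correspondences
# that obey a genuine two-view geometry. Intrinsics and scene are invented.
import numpy as np
import cv2

K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])                      # assumed HD intrinsics
dist = np.zeros(5)                                   # assume no lens distortion

rng = np.random.default_rng(2)
pts3d = np.column_stack([rng.uniform(-5, 5, 100),
                         rng.uniform(-3, 3, 100),
                         rng.uniform(8, 20, 100)])   # points in front of both cameras
rvec_true = np.array([0.0, 0.1, 0.0])                # small rotation of camera 2
tvec_true = np.array([1.0, 0.0, 0.0])                # baseline along x

pts1, _ = cv2.projectPoints(pts3d, np.zeros(3), np.zeros(3), K, dist)
pts2, _ = cv2.projectPoints(pts3d, rvec_true, tvec_true, K, dist)
pts1, pts2 = pts1.reshape(-1, 2), pts2.reshape(-1, 2)

E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                               prob=0.999, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
print("recovered rotation:\n", R)
print("recovered translation direction:", t.ravel())  # defined only up to scale
```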

  4. A Novel Multi-Camera Calibration Method based on Flat Refractive Geometry

    NASA Astrophysics Data System (ADS)

    Huang, S.; Feng, M. C.; Zheng, T. X.; Li, F.; Wang, J. Q.; Xiao, L. F.

    2018-03-01

    Multi-camera calibration plays an important role in many fields. In this paper, we present a novel multi-camera calibration method based on flat refractive geometry. All cameras can acquire calibration images of a transparent glass calibration board (TGCB) at the same time. The use of a TGCB leads to a refraction phenomenon which can generate calibration error. The theory of flat refractive geometry is employed to eliminate this error, so the new method accounts for the refraction introduced by the TGCB. Moreover, the bundle adjustment method is used to minimize the reprojection error and obtain optimized calibration results. Finally, four-camera calibration results on real data show that the mean value and standard deviation of the reprojection error of our method are 4.3411e-05 and 0.4553 pixel, respectively. The experimental results show that the proposed method is accurate and reliable.

  5. Estimating seasonal evapotranspiration from temporal satellite images

    USGS Publications Warehouse

    Singh, Ramesh K.; Liu, Shu-Guang; Tieszen, Larry L.; Suyker, Andrew E.; Verma, Shashi B.

    2012-01-01

    Estimating seasonal evapotranspiration (ET) has many applications in water resources planning and management, including hydrological and ecological modeling. Availability of satellite remote sensing images is limited due to the satellite repeat cycle or cloud cover. This study was conducted to determine the suitability of different methods, namely cubic spline, fixed, and linear, for estimating seasonal ET from temporal remotely sensed images. The Mapping Evapotranspiration at high Resolution with Internalized Calibration (METRIC) model, in conjunction with the wet METRIC (wMETRIC), a modified version of the METRIC model, was used to estimate ET on the days of satellite overpass using eight Landsat images during the 2001 crop growing season in the Midwest USA. The model-estimated daily ET was in good agreement (R2 = 0.91) with the eddy covariance tower-measured daily ET. The standard error of daily ET was 0.6 mm (20%) at three validation sites in Nebraska, USA. There was no statistically significant difference (P > 0.05) among the cubic spline, fixed, and linear methods for computing seasonal (July–December) ET from temporal ET estimates. Overall, the cubic spline resulted in the lowest standard error of 6 mm (1.67%) for seasonal ET. However, further testing of this method for multiple years is necessary to determine its suitability.
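
    A minimal sketch of the cubic-spline option evaluated in the study: daily ET on overpass dates is interpolated to every day of the season and summed. The overpass days and ET values below are placeholders, not the Nebraska data.

```python
# Cubic-spline interpolation of overpass-day ET to daily values, then seasonal sum.
import numpy as np
from scipy.interpolate import CubicSpline

overpass_doy = np.array([182, 198, 214, 230, 246, 278, 310, 350])   # day of year (placeholders)
et_overpass = np.array([6.1, 5.8, 5.2, 4.5, 3.6, 2.1, 1.0, 0.6])    # mm/day on overpass days

spline = CubicSpline(overpass_doy, et_overpass)
days = np.arange(overpass_doy[0], overpass_doy[-1] + 1)
daily_et = spline(days)                       # interpolated daily ET between overpasses

seasonal_et = daily_et.sum()                  # approximate July-December total
print(f"seasonal ET: {seasonal_et:.0f} mm")
```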

  6. Measurement of Sediment Deposition Rates using an Optical Backscatter Sensor

    NASA Astrophysics Data System (ADS)

    Ridd, P.; Day, G.; Thomas, S.; Harradence, J.; Fox, D.; Bunt, J.; Renagi, O.; Jago, C.

    2001-02-01

    An optical method for measuring siltation of sediment has been developed using an optical fibre backscatter (OBS) nephelometer. Sediment settling upon the optical fibre sensor causes an increase in the backscatter reading which can be related to the settled sediment surface density (SSSD), measured in units of mg cm-2. Calibration and laboratory tests indicate that the resolution of SSSD measurements is 0.01 mg cm-2 and the accuracy is 5% in still water. In moving water it is more difficult to determine the accuracy of the method because other methods with suitable resolution are unavailable. However, indirect methods, using measurements of changing suspended sediment concentration in a ring flume, indicate that the OBS method under-predicts deposition. Time series of siltation from three field sites are presented. This sensor offers considerable advances over other methods of measuring settling because time series of settling may be taken, and thus settling events may be related to other hydrodynamic parameters such as wave climate and currents.

  7. Wavelength calibration with PMAS at 3.5 m Calar Alto Telescope using a tunable astro-comb

    NASA Astrophysics Data System (ADS)

    Chavez Boggio, J. M.; Fremberg, T.; Bodenmüller, D.; Sandin, C.; Zajnulina, M.; Kelz, A.; Giannone, D.; Rutowska, M.; Moralejo, B.; Roth, M. M.; Wysmolek, M.; Sayinc, H.

    2018-05-01

    On-sky tests conducted with an astro-comb using the Potsdam Multi-Aperture Spectrograph (PMAS) at the 3.5 m Calar Alto Telescope are reported. The proposed astro-comb approach is based on cascaded four-wave mixing between two lasers propagating through dispersion-optimized nonlinear fibers. This approach allows for a line spacing that can be continuously tuned over a broad range (from tens of GHz to beyond 1 THz), making it suitable for calibration of low-, medium- and high-resolution spectrographs. The astro-comb provides 300 calibration lines and its line spacing is tracked with a wavemeter having 0.3 pm absolute accuracy. First, we assess the accuracy of the Neon calibration by measuring the astro-comb lines with (Neon-calibrated) PMAS. The results are compared with the expected line positions from the wavemeter measurement, showing an offset of ∼5-20 pm (4%-16% of one resolution element). This may be the footprint of the accuracy limits of the actual Neon calibration. Then, the astro-comb performance as a calibrator is assessed through measurements of the Ca triplet from the stellar objects HD3765 and HD219538 as well as of the sky line spectrum, showing the advantage of the proposed astro-comb for wavelength calibration at any resolution.

  8. Tunable lasers for water vapor measurements and other lidar applications

    NASA Technical Reports Server (NTRS)

    Gammon, R. W.; Mcilrath, T. J.; Wilkerson, T. D.

    1977-01-01

    A tunable dye laser suitable for differential absorption (DIAL) measurements of water vapor in the troposphere was constructed. A multi-pass absorption cell for calibration was also constructed for use in atmospheric DIAL measurements of water vapor.

  9. A calibration method of infrared LVF based spectroradiometer

    NASA Astrophysics Data System (ADS)

    Liu, Jiaqing; Han, Shunli; Liu, Lei; Hu, Dexin

    2017-10-01

    In this paper, a calibration method for an LVF-based spectroradiometer is presented, covering both spectral calibration and radiometric calibration. The spectral calibration process is as follows: first, the relationship between the stepping motor's step number and the transmission wavelength is derived by theoretical calculation, including a non-linearity correction of the LVF; second, a line-to-line method is used to correct the theoretical wavelength; finally, 3.39 μm and 10.69 μm lasers are used to validate the spectral calibration, showing that the sought accuracy of 0.1% or better is achieved. A new sub-region multi-point calibration method is used for the radiometric calibration to improve accuracy; results show that the sought accuracy of 1% or better is achieved.
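
    To illustrate what a sub-region, multi-point radiometric calibration can look like, the sketch below splits a blackbody calibration range into sub-regions and fits a separate counts-to-radiance line in each; the instrument model, region boundaries and noise are invented, and the paper's actual scheme may differ.

```python
# Illustrative sub-region, multi-point radiometric calibration: the blackbody
# range is split into sub-regions and a separate counts-to-radiance line is
# fitted in each, instead of one global fit. All numbers are invented.
import numpy as np

rng = np.random.default_rng(3)
bb_radiance = np.linspace(0.5, 10.0, 21)                     # blackbody radiances (arb. units)
counts = 850.0 * bb_radiance + 120.0 + rng.normal(0, 5, 21)  # simulated detector counts

n_regions = 3
region_id = np.minimum(np.arange(21) * n_regions // 21, n_regions - 1)

fits = []
for r in range(n_regions):
    sel = region_id == r
    gain, offset = np.polyfit(counts[sel], bb_radiance[sel], 1)   # counts -> radiance
    fits.append((counts[sel].min(), counts[sel].max(), gain, offset))

def counts_to_radiance(c):
    """Apply the fit of the sub-region whose count range contains (or is nearest to) c."""
    lo, hi, gain, offset = min(
        fits, key=lambda f: 0.0 if f[0] <= c <= f[1] else min(abs(c - f[0]), abs(c - f[1])))
    return gain * c + offset

print(counts_to_radiance(3500.0))
```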

  10. Optics-Only Calibration of a Neural-Net Based Optical NDE Method for Structural Health Monitoring

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.

    2004-01-01

    A calibration process is presented that uses optical measurements alone to calibrate a neural-net based NDE method. The method itself detects small changes in the vibration mode shapes of structures. The optics-only calibration process confirms previous work showing that the sensitivity to vibration-amplitude changes can be as small as 10 nanometers. A more practical value in an NDE service laboratory is shown to be 50 nanometers. Both model-generated and experimental calibrations are demonstrated using two implementations of the calibration technique. The implementations are based on previously published demonstrations of the NDE method and an alternative calibration procedure that depends on comparing neural-net and point-sensor measurements. The optics-only calibration method, unlike the alternative method, does not require modifications of the structure being tested or the creation of calibration objects. The calibration process can be used to test improvements in the NDE process and to develop a vibration-mode independence of damage-detection sensitivity. The calibration effort was intended to support NASA's objective to promote safety in the operation of ground test facilities and in aviation safety in general, by allowing the detection of the gradual onset of structural changes and damage.

  11. Self-Calibration of CMB Polarimeters

    NASA Astrophysics Data System (ADS)

    Keating, Brian

    2013-01-01

    Precision measurements of the polarization of the cosmic microwave background (CMB) radiation, especially experiments seeking to detect the odd-parity "B-modes", have far-reaching implications for cosmology. To detect the B-modes generated during inflation the flux response and polarization angle of these experiments must be calibrated to exquisite precision. While suitable flux calibration sources abound, polarization angle calibrators are deficient in many respects. Man-made polarized sources are often not located in the antenna's far-field, have spectral properties that are radically different from the CMB's, are cumbersome to implement and may be inherently unstable over the (long) duration these searches require to detect the faint signature of the inflationary epoch. Astrophysical sources suffer from time, frequency and spatial variability, are not visible from all CMB observatories, and none are understood with sufficient accuracy to calibrate future CMB polarimeters seeking to probe inflationary energy scales of ~1000 TeV. CMB TB and EB modes, expected to identically vanish in the standard cosmological model, can be used to calibrate CMB polarimeters. By enforcing the observed EB and TB power spectra to be consistent with zero, CMB polarimeters can be calibrated to levels not possible with man-made or astrophysical sources. All of this can be accomplished without any loss of observing time using a calibration source which is spectrally identical to the CMB B-modes. The calibration procedure outlined here can be used for any CMB polarimeter.

  12. Toward Worldwide Hepcidin Assay Harmonization: Identification of a Commutable Secondary Reference Material.

    PubMed

    van der Vorm, Lisa N; Hendriks, Jan C M; Laarakkers, Coby M; Klaver, Siem; Armitage, Andrew E; Bamberg, Alison; Geurts-Moespot, Anneke J; Girelli, Domenico; Herkert, Matthias; Itkonen, Outi; Konrad, Robert J; Tomosugi, Naohisa; Westerman, Mark; Bansal, Sukhvinder S; Campostrini, Natascia; Drakesmith, Hal; Fillet, Marianne; Olbina, Gordana; Pasricha, Sant-Rayn; Pitts, Kelly R; Sloan, John H; Tagliaro, Franco; Weykamp, Cas W; Swinkels, Dorine W

    2016-07-01

    Absolute plasma hepcidin concentrations measured by various procedures differ substantially, complicating interpretation of results and rendering reference intervals method dependent. We investigated the degree of equivalence achievable by harmonization and the identification of a commutable secondary reference material to accomplish this goal. We applied technical procedures to achieve harmonization developed by the Consortium for Harmonization of Clinical Laboratory Results. Eleven plasma hepcidin measurement procedures (5 mass spectrometry based and 6 immunochemical based) quantified native individual plasma samples (n = 32) and native plasma pools (n = 8) to assess analytical performance and current and achievable equivalence. In addition, 8 types of candidate reference materials (3 concentrations each, n = 24) were assessed for their suitability, most notably in terms of commutability, to serve as secondary reference material. Absolute hepcidin values and reproducibility (intrameasurement procedure CVs 2.9%-8.7%) differed substantially between measurement procedures, but all were linear and correlated well. The current equivalence (intermeasurement procedure CV 28.6%) between the methods was mainly attributable to differences in calibration and could thus be improved by harmonization with a common calibrator. Linear regression analysis and standardized residuals showed that a candidate reference material consisting of native lyophilized plasma with cryolyoprotectant was commutable for all measurement procedures. Mathematically simulated harmonization with this calibrator resulted in a maximum achievable equivalence of 7.7%. The secondary reference material identified in this study has the potential to substantially improve equivalence between hepcidin measurement procedures and contributes to the establishment of a traceability chain that will ultimately allow standardization of hepcidin measurement results. © 2016 American Association for Clinical Chemistry.

  13. Solution to the Problem of Calibration of Low-Cost Air Quality Measurement Sensors in Networks.

    PubMed

    Miskell, Georgia; Salmond, Jennifer A; Williams, David E

    2018-04-27

    We provide a simple, remote, continuous calibration technique suitable for application in a hierarchical network featuring a few well-maintained, high-quality instruments ("proxies") and a larger number of low-cost devices. The ideas are grounded in a clear definition of the purpose of a low-cost network, defined here as providing reliable information on air quality at small spatiotemporal scales. The technique assumes linearity of the sensor signal. It derives running slope and offset estimates by matching mean and standard deviations of the sensor data to values derived from proxies over the same time. The idea is extremely simple: choose an appropriate proxy and an averaging-time that is sufficiently long to remove the influence of short-term fluctuations but sufficiently short that it preserves the regular diurnal variations. The use of running statistical measures rather than cross-correlation of sites means that the method is robust against periods of missing data. Ideas are first developed using simulated data and then demonstrated using field data, at hourly and 1 min time-scales, from a real network of low-cost semiconductor-based sensors. Despite the almost naïve simplicity of the method, it was robust for both drift detection and calibration correction applications. We discuss the use of generally available geographic and environmental data as well as microscale land-use regression as means to enhance the proxy estimates and to generalize the ideas to other pollutants with high spatial variability, such as nitrogen dioxide and particulates. These improvements can also be used to minimize the required number of proxy sites.
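
    The running gain/offset rule described above translates almost directly into code: over a trailing window, the gain is the ratio of proxy to sensor standard deviations and the offset aligns the means. The window length, pollutant and synthetic data below are placeholders.

```python
# Running mean/std matching: gain = std(proxy)/std(sensor), offset aligns the means.
import numpy as np

def running_gain_offset(sensor, proxy, window):
    """Per-sample gain and offset over a trailing window of `window` samples."""
    gain = np.full(sensor.size, np.nan)
    offset = np.full(sensor.size, np.nan)
    for i in range(window, sensor.size + 1):
        s, p = sensor[i - window:i], proxy[i - window:i]
        g = p.std() / s.std()
        gain[i - 1] = g
        offset[i - 1] = p.mean() - g * s.mean()
    return gain, offset

rng = np.random.default_rng(4)
true_signal = 20 + 10 * np.sin(np.linspace(0, 20, 500))       # diurnal-like pattern
proxy = true_signal + rng.normal(0, 1, 500)                   # well-maintained reference site
sensor = 0.6 * true_signal + 7 + rng.normal(0, 1, 500)        # drifting low-cost device

gain, offset = running_gain_offset(sensor, proxy, window=72)  # e.g. 72 hourly values
corrected = gain * sensor + offset                            # remotely calibrated signal
```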

  14. Wavelength calibration of dispersive near-infrared spectrometer using relative k-space distribution with low coherence interferometer

    NASA Astrophysics Data System (ADS)

    Kim, Ji-hyun; Han, Jae-Ho; Jeong, Jichai

    2016-05-01

    The commonly employed calibration methods for laboratory-made spectrometers have several disadvantages, including poor calibration when the number of characteristic spectral peaks is low. Therefore, we present a wavelength calibration method using relative k-space distribution with low coherence interferometer. The proposed method utilizes an interferogram with a perfect sinusoidal pattern in k-space for calibration. Zero-crossing detection extracts the k-space distribution of a spectrometer from the interferogram in the wavelength domain, and a calibration lamp provides information about absolute wavenumbers. To assign wavenumbers, wavelength-to-k-space conversion is required for the characteristic spectrum of the calibration lamp with the extracted k-space distribution. Then, the wavelength calibration is completed by inverse conversion of the k-space into wavelength domain. The calibration performance of the proposed method was demonstrated with two experimental conditions of four and eight characteristic spectral peaks. The proposed method elicited reliable calibration results in both cases, whereas the conventional method of third-order polynomial curve fitting failed to determine wavelengths in the case of four characteristic peaks. Moreover, for optical coherence tomography imaging, the proposed method could improve axial resolution due to higher suppression of sidelobes in point spread function than the conventional method. We believe that our findings can improve not only wavelength calibration accuracy but also resolution for optical coherence tomography.
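
    A minimal sketch of the zero-crossing step described above: because the interferogram is sinusoidal in k, its zero crossings are equally spaced in k, so locating them on the pixel axis yields a relative k value for every pixel. The synthetic chirped interferogram and the assumed pixel-to-k nonlinearity below are illustrative only.

```python
# Sketch of the zero-crossing step: the interferogram is sinusoidal in k, so its
# zero crossings are equally spaced in k; locating them on the pixel axis gives
# the relative k of every pixel. The chirped interferogram below is synthetic.
import numpy as np

n_pixels = 2048
pixel = np.arange(n_pixels)
# Assume a mildly nonlinear pixel-to-k mapping, which is what we want to recover.
k_true = 2 * np.pi * (1.0 + 0.1 * (pixel / n_pixels) ** 2) * pixel / n_pixels
interferogram = np.cos(40.0 * k_true)                 # pure sinusoid in k, chirped in pixel

# Zero crossings: sign changes between neighbouring pixels, refined by linear interpolation.
idx = np.where(interferogram[:-1] * interferogram[1:] < 0)[0]
frac = interferogram[idx] / (interferogram[idx] - interferogram[idx + 1])
crossing_pixel = idx + frac

# Consecutive crossings are half a fringe period apart in k, i.e. equally spaced.
relative_k = 0.5 * np.arange(crossing_pixel.size)     # in units of one fringe period
k_of_pixel = np.interp(pixel, crossing_pixel, relative_k)   # relative k-space distribution
print(k_of_pixel[:5], k_of_pixel[-5:])
```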

  15. Verification of the ISO calibration method for field pyranometers under tropical sky conditions

    NASA Astrophysics Data System (ADS)

    Janjai, Serm; Tohsing, Korntip; Pattarapanitchai, Somjet; Detkhon, Pasakorn

    2017-02-01

    Field pyranometers need to be calibrated annually, and the International Organization for Standardization (ISO) has defined a standard method (ISO 9847) for calibrating these pyranometers. According to this standard method for outdoor calibration, a field pyranometer has to be compared to a reference pyranometer for a period of 2 to 14 days, depending on sky conditions. In this work, the ISO 9847 standard method was verified under tropical sky conditions. To verify the standard method, calibration of field pyranometers was conducted at a tropical site located in Nakhon Pathom (13.82° N, 100.04° E), Thailand under various sky conditions. The sky conditions were monitored using a sky camera. The calibration results obtained for different calibration periods under various sky conditions were analyzed. It was found that the calibration periods given by this standard method could be reduced without significant change in the final calibration result. In addition, recommendations and a discussion on the use of this standard method in the tropics are also presented.
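
    As a hedged illustration of the comparison at the core of this type of outdoor calibration, the sketch below derives a field pyranometer's sensitivity from the ratio of its summed readings to the reference pyranometer's summed readings over the same accepted intervals; the reference sensitivity, the readings and the omission of ISO 9847's data-screening rules are all assumptions.

```python
# Hedged sketch of the core comparison in an ISO 9847-style outdoor calibration:
# the field pyranometer's sensitivity follows from the ratio of its summed output
# to the reference's summed output over the same accepted intervals. Readings and
# the reference sensitivity are invented; ISO data-screening rules are omitted.
import numpy as np

v_reference = np.array([8.52, 8.61, 8.40, 8.73, 8.66])   # mV, reference pyranometer
v_field = np.array([7.95, 8.03, 7.84, 8.15, 8.07])       # mV, field pyranometer, same intervals

sens_reference = 8.60e-3                                  # mV per W m-2 (assumed certificate value)
sens_field = sens_reference * v_field.sum() / v_reference.sum()
print(f"field pyranometer sensitivity: {1e3 * sens_field:.3f} uV per W m-2")
```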

  16. Modelling carbon oxidation in pulp mill activated sludge systems: calibration of Activated Sludge Model No 3.

    PubMed

    Barañao, P A; Hall, E R

    2004-01-01

    Activated Sludge Model No 3 (ASM3) was chosen to model an activated sludge system treating effluents from a mechanical pulp and paper mill. The high COD concentration and the high content of readily biodegradable substrates of the wastewater make this model appropriate for this system. ASM3 was calibrated based on batch respirometric tests using fresh wastewater and sludge from the treatment plant, and on analytical measurements of COD, TSS and VSS. The model, developed for municipal wastewater, was found suitable for fitting a variety of respirometric batch tests, performed at different temperatures and food to microorganism ratios (F/M). Therefore, a set of calibrated parameters, as well as the wastewater COD fractions, was estimated for this industrial wastewater. The majority of the calibrated parameters were in the range of those found in the literature.

  17. Radiometric and geometric analysis of hyperspectral imagery acquired from an unmanned aerial vehicle

    DOE PAGES

    Hruska, Ryan; Mitchell, Jessica; Anderson, Matthew; ...

    2012-09-17

    During the summer of 2010, an Unmanned Aerial Vehicle (UAV) hyperspectral in-flight calibration and characterization experiment of the Resonon PIKA II imaging spectrometer was conducted at the U.S. Department of Energy’s Idaho National Laboratory (INL) UAV Research Park. The purpose of the experiment was to validate the radiometric calibration of the spectrometer and determine the georegistration accuracy achievable from the on-board global positioning system (GPS) and inertial navigation sensors (INS) under operational conditions. In order for low-cost hyperspectral systems to compete with larger systems flown on manned aircraft, they must be able to collect data suitable for quantitative scientific analysis. The results of the in-flight calibration experiment indicate an absolute average agreement of 96.3%, 93.7% and 85.7% for calibration tarps of 56%, 24%, and 2.5% reflectivity, respectively. The achieved planimetric accuracy was 4.6 meters (based on RMSE).

  18. Node-to-node field calibration of wireless distributed air pollution sensor network.

    PubMed

    Kizel, Fadi; Etzion, Yael; Shafran-Nathan, Rakefet; Levy, Ilan; Fishbain, Barak; Bartonova, Alena; Broday, David M

    2018-02-01

    Low-cost air quality sensors offer high-resolution spatiotemporal measurements that can be used for air resources management and exposure estimation. Yet, such sensors require frequent calibration to provide reliable data, since even after a laboratory calibration they might not report correct values when they are deployed in the field, due to interference with other pollutants, as a result of sensitivity to environmental conditions and due to sensor aging and drift. Field calibration has been suggested as a means for overcoming these limitations, with the common strategy involving periodical collocations of the sensors at an air quality monitoring station. However, the cost and complexity involved in relocating numerous sensor nodes back and forth, and the loss of data during the repeated calibration periods make this strategy inefficient. This work examines an alternative approach, a node-to-node (N2N) calibration, where only one sensor in each chain is directly calibrated against the reference measurements and the rest of the sensors are calibrated sequentially one against the other while they are deployed and collocated in pairs. The calibration can be performed multiple times as a routine procedure. This procedure minimizes the total number of sensor relocations, and enables calibration while simultaneously collecting data at the deployment sites. We studied N2N chain calibration and the propagation of the calibration error analytically, computationally and experimentally. The in-situ N2N calibration is shown to be generic and applicable for different pollutants, sensing technologies, sensor platforms, chain lengths, and sensor order within the chain. In particular, we show that chain calibration of three nodes, each calibrated for a week, propagate calibration errors that are similar to those found in direct field calibration. Hence, N2N calibration is shown to be suitable for calibration of distributed sensor networks. Copyright © 2017 Elsevier Ltd. All rights reserved.
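
    The chain logic lends itself to a compact sketch: the first node is regressed against the reference, and each subsequent node is regressed against the previously calibrated one. For simplicity the synthetic example below collocates all pairs over one common period and assumes linear sensors, which is a simplification of the deployment-time procedure described in the paper.

```python
# Sketch of node-to-node (N2N) chain calibration with linear sensors and one
# shared collocation period (a simplification of the deployed procedure).
import numpy as np

rng = np.random.default_rng(5)
truth = 30 + 15 * np.sin(np.linspace(0, 12, 300)) + rng.normal(0, 1, 300)   # e.g. O3, ppb

def raw_reading(signal, gain, offset, noise=1.0):
    """Simulate an uncalibrated low-cost sensor with its own gain, offset and noise."""
    return gain * signal + offset + rng.normal(0, noise, signal.size)

sensors = [raw_reading(truth, g, o) for g, o in [(0.8, 5.0), (1.2, -3.0), (0.9, 8.0)]]

calibrated = []
reference = truth                       # node 1 is collocated with the monitoring station
for raw in sensors:
    slope, intercept = np.polyfit(raw, reference, 1)   # least-squares linear calibration
    corrected = slope * raw + intercept
    calibrated.append(corrected)
    reference = corrected               # the freshly calibrated node anchors the next pair

for i, series in enumerate(calibrated, 1):
    rmse = np.sqrt(np.mean((series - truth) ** 2))
    print(f"node {i}: RMSE against truth = {rmse:.2f} ppb")
```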

  19. Calibration and Measurement in Turbulence Research by the Hot-Wire Method

    NASA Technical Reports Server (NTRS)

    Kovasznay, Laszlo

    1947-01-01

    The problem of turbulence in aerodynamics is at present being attacked both theoretically and experimentally. In view of the fact, however, that purely theoretical considerations have not thus far led to satisfactory results, the experimental treatment of the problem is of great importance. Among the different measuring procedures, the hot-wire methods are so far recognized as the most suitable for investigating the turbulence structure. The several disadvantages of these methods, however, in particular those arising from the temperature lag of the wire, can greatly impair the measurements and may easily render questionable the entire value of the experiment. The name turbulence is applied to that flow condition in which, at any point of the stream, the magnitude and direction of the velocity fluctuate arbitrarily about a well definable mean value. This fluctuation imparts a certain whirling characteristic to the flow.

  20. Comparison of NAVSTAR satellite L band ionospheric calibrations with Faraday rotation measurements

    NASA Technical Reports Server (NTRS)

    Royden, H. N.; Miller, R. B.; Buennagel, L. A.

    1984-01-01

    It is pointed out that interplanetary navigation at the Jet Propulsion Laboratory (JPL) is performed by analyzing measurements derived from the radio link between spacecraft and earth and, near the target, onboard optical measurements. For precise navigation, corrections for ionospheric effects must be applied, because the earth's ionosphere degrades the accuracy of the radiometric data. These corrections are based on ionospheric total electron content (TEC) determinations. The determinations are based on the measurement of the Faraday rotation of linearly polarized VHF signals from geostationary satellites. Problems arise in connection with the steadily declining number of satellites which are suitable for Faraday rotation measurements. For this reason, alternate methods of determining ionospheric electron content are being explored. One promising method involves the use of satellites of the NAVSTAR Global Positioning System (GPS). The results of a comparative study regarding this method are encouraging.

  1. Research on camera on orbit radial calibration based on black body and infrared calibration stars

    NASA Astrophysics Data System (ADS)

    Wang, YuDu; Su, XiaoFeng; Zhang, WanYing; Chen, FanSheng

    2018-05-01

    Affected by the launch process and the space environment, the response of a space camera inevitably degrades, so it is necessary for a space camera to undergo spaceborne radiometric calibration. In this paper, we propose a calibration method based on accurate infrared standard stars to increase infrared radiation measurement precision. As stars can be considered point targets, we use them as the radiometric calibration source and establish a Taylor-expansion method and an energy extrapolation model based on the WISE and 2MASS catalogs. We then update the calibration results obtained from the blackbody. Finally, a calibration mechanism is designed and the design is verified by on-orbit tests. The experimental calibration results show that the irradiance extrapolation error is about 3% and the accuracy of the calibration method is about 10%, indicating that the method can satisfy the requirements of on-orbit calibration.

  2. A Comparison of Two Balance Calibration Model Building Methods

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard; Ulbrich, Norbert

    2007-01-01

    Simulated strain-gage balance calibration data is used to compare the accuracy of two balance calibration model building methods for different noise environments and calibration experiment designs. The first building method obtains a math model for the analysis of balance calibration data after applying a candidate math model search algorithm to the calibration data set. The second building method uses stepwise regression analysis in order to construct a model for the analysis. Four balance calibration data sets were simulated in order to compare the accuracy of the two math model building methods. The simulated data sets were prepared using the traditional One Factor At a Time (OFAT) technique and the Modern Design of Experiments (MDOE) approach. Random and systematic errors were introduced in the simulated calibration data sets in order to study their influence on the math model building methods. Residuals of the fitted calibration responses and other statistical metrics were compared in order to evaluate the calibration models developed with different combinations of noise environment, experiment design, and model building method. Overall, predicted math models and residuals of both math model building methods show very good agreement. Significant differences in model quality were attributable to noise environment, experiment design, and their interaction. Generally, the addition of systematic error significantly degraded the quality of calibration models developed from OFAT data by either method, but MDOE experiment designs were more robust with respect to the introduction of a systematic component of the unexplained variance.

  3. Imaging workflow and calibration for CT-guided time-domain fluorescence tomography

    PubMed Central

    Tichauer, Kenneth M.; Holt, Robert W.; El-Ghussein, Fadi; Zhu, Qun; Dehghani, Hamid; Leblond, Frederic; Pogue, Brian W.

    2011-01-01

    In this study, several key optimization steps are outlined for a non-contact, time-correlated single photon counting small animal optical tomography system, using simultaneous collection of both fluorescence and transmittance data. The system is presented for time-domain image reconstruction in vivo, illustrating the sensitivity from single photon counting and the calibration steps needed to accurately process the data. In particular, laser time- and amplitude-referencing, detector and filter calibrations, and collection of a suitable instrument response function are all presented in the context of time-domain fluorescence tomography and a fully automated workflow is described. Preliminary phantom time-domain reconstructed images demonstrate the fidelity of the workflow for fluorescence tomography based on signal from multiple time gates. PMID:22076264

  4. Prediction of geomagnetic reversals using low-dimensional dynamical models and advanced data assimilation: a feasibility study

    NASA Astrophysics Data System (ADS)

    Fournier, A.; Morzfeld, M.; Hulot, G.

    2013-12-01

    For a suitable choice of parameters, the system of three ordinary differential equations (ODE) presented by Gissinger [1] was shown to exhibit chaotic reversals whose statistics compared well with those from the paleomagnetic record. In order to further assess the geophysical relevance of this low-dimensional model, we resort to data assimilation methods to calibrate it using reconstructions of the fluctuation of the virtual axial dipole moment spanning the past 2 million years. Moreover, we test to what extent a properly calibrated model could possibly be used to predict a reversal of the geomagnetic field. We calibrate the ODE model to the geomagnetic field over the past 2 Ma using the SINT data set of Valet et al. [2]. To this end, we consider four data assimilation algorithms: the ensemble Kalman filter (EnKF), a variational method and two Monte Carlo (MC) schemes, prior importance sampling and implicit sampling. We observe that EnKF performs poorly and that prior importance sampling is inefficient. We obtain the most accurate reconstructions of the geomagnetic data using implicit sampling with five data points per assimilation sweep (of duration 5 kyr). The variational scheme performs equally well, but it does not provide us with quantitative information about the uncertainty of the estimates, which makes this method difficult to use for robust prediction under uncertainty. A calibration of the model using the PADM2M data set of Ziegler et al. [3] confirms these findings. We study the predictive capability of the ODE model using statistics computed from synthetic data experiments. For each experiment, we produce 2 Myr of synthetic data (with error levels similar to the ones found in real data), then calibrate the model to this record and then check if this calibrated model can correctly and reliably predict a reversal within the next 10 kyr (say). By performing 100 such experiments, we can assess how reliably our calibrated model can predict a (non-) reversal. It is found that the 5 kyr ahead predictions of reversals produced by the model appear to be accurate and reliable. These encouraging results prompted us to also test predictions of the five reversals of the SINT (and PADM2M) data set, using a similarly calibrated model. Results will be presented and discussed. [1] Gissinger, C., 2012, A new deterministic model for chaotic reversals, European Physical Journal B, 85:137 [2] Valet, J.-P., Meynadier, L. and Guyodo, Y., 2005, Geomagnetic field strength and reversal rate over the past 2 Million years, Nature, 435, 802-805. [3] Ziegler, L. B., Constable, C. G., Johnson, C. L. and Tauxe, L., 2011, PADM2M: a penalized maximum likelihood model of the 0-2 Ma paleomagnetic axial dipole moment, Geophysical Journal International, 184, 1069-1089.

  5. Ensemble modeling to predict habitat suitability for a large-scale disturbance specialist

    PubMed Central

    Latif, Quresh S; Saab, Victoria A; Dudley, Jonathan G; Hollenbeck, Jeff P

    2013-01-01

    To conserve habitat for disturbance specialist species, ecologists must identify where individuals will likely settle in newly disturbed areas. Habitat suitability models can predict which sites at new disturbances will most likely attract specialists. Without validation data from newly disturbed areas, however, the best approach for maximizing predictive accuracy can be unclear. We predicted habitat suitability for nesting Black-backed Woodpeckers (Picoides arcticus; a burned-forest specialist) at 20 recently (≤6 years postwildfire) burned locations in Montana (northwestern U.S.A.) using models calibrated with data from three locations in Washington, Oregon, and Idaho. We developed 8 models using three techniques (weighted logistic regression, Maxent, and Mahalanobis D2 models) and various combinations of four environmental variables describing burn severity, the north–south orientation of topographic slope, and prefire canopy cover. After translating model predictions into binary classifications (0 = low suitability to unsuitable, 1 = high to moderate suitability), we compiled “ensemble predictions,” consisting of the number of models (0–8) predicting any given site as highly suitable. The suitability status for 40% of the area burned by eastside Montana wildfires was consistent across models and therefore robust to uncertainty in the relative accuracy of particular models and in alternative ecological hypotheses they described. Ensemble predictions exhibited two desirable properties: (1) a positive relationship with apparent rates of nest occurrence at calibration locations and (2) declining model agreement outside surveyed environments consistent with our reduced confidence in novel (i.e., “no-analogue”) environments. Areas of disagreement among models suggested where future surveys could help validate and refine models for an improved understanding of Black-backed Woodpecker nesting habitat relationships. Ensemble predictions presented here can help guide managers attempting to balance salvage logging with habitat conservation in burned-forest landscapes where Black-backed Woodpecker nest location data are not immediately available. Ensemble modeling represents a promising tool for guiding conservation of large-scale disturbance specialists. PMID:24340177

  6. Geometric calibration of Colour and Stereo Surface Imaging System of ESA's Trace Gas Orbiter

    NASA Astrophysics Data System (ADS)

    Tulyakov, Stepan; Ivanov, Anton; Thomas, Nicolas; Roloff, Victoria; Pommerol, Antoine; Cremonese, Gabriele; Weigel, Thomas; Fleuret, Francois

    2018-01-01

    There are many geometric calibration methods for "standard" cameras. These methods, however, cannot be used for the calibration of telescopes with large focal lengths and complex off-axis optics. Moreover, specialized calibration methods for such telescopes are scarce in the literature. We describe the calibration method that we developed for the Colour and Stereo Surface Imaging System (CaSSIS) telescope, on board the ExoMars Trace Gas Orbiter (TGO). Although our method is described in the context of CaSSIS, with camera-specific experiments, it is general and can be applied to other telescopes. We further encourage re-use of the proposed method by making our calibration code and data available online.

  7. Absolute calibration of the Jenoptik CHM15k-x ceilometer and its applicability for quantitative aerosol monitoring

    NASA Astrophysics Data System (ADS)

    Geiß, Alexander; Wiegner, Matthias

    2014-05-01

    The knowledge of the spatiotemporal distribution of atmospheric aerosols and its optical characterization is essential for the understanding of the radiation budget, air quality, and climate. For this purpose, lidar is an excellent system as it is an active remote sensing technique. As multi-wavelength research lidars with depolarization channels are quite complex and costly, increasing attention is paid to so-called ceilometers. They are simple one-wavelength backscatter lidars with low pulse energy for eye-safe operation. As maintenance costs are low and continuous and unattended measurements can be performed, they are suitable for long-term aerosol monitoring in a network. However, the signal-to-noise ratio is low, and the signals are not calibrated. The only optical property that can be derived from a ceilometer is the particle backscatter coefficient, but even this quantity requires a calibration of the signals. With four years of measurements from a Jenoptik ceilometer CHM15k-x, we developed two methods for an absolute calibration of this system. The advantage of our approach is that only a few days with favorable meteorological conditions, on which Rayleigh calibration and comparison with our research lidar are possible, are required to estimate the lidar constant. This method enables us to derive the particle backscatter coefficient at 1064 nm, and we retrieved for the first time profiles in near real-time within an accuracy of 10 %. If an appropriate lidar ratio is assumed, the aerosol optical depth of, e.g., the mixing layer can be determined with an accuracy depending on the accuracy of the lidar ratio estimate. Even for 'simple' applications, e.g. assessment of the mixing layer height, cloud detection, detection of elevated aerosol layers, the particle backscatter coefficient has significant advantages over the measured (uncalibrated) attenuated backscatter. The possibility of continuous operation under nearly any meteorological condition with temporal resolution in the order of 30 seconds also makes it possible to apply time-height-tracking methods for detecting mixing layer heights. The combination of methods for edge detection (e.g. wavelet covariance transform, gradient method, variance method) and edge tracking techniques is used to increase the reliability of the layer detection and attribution. Thus, a feature mask of aerosols and clouds can be derived. Four years of measurements constitute an excellent basis for a climatology including a homogeneous time series of mixing layer heights, aerosol layers and cloud base heights of the troposphere. With the low overlap region of 180 m of the Jenoptik CHM15k-x, even very narrow mixing layers, typical of winter conditions, can be considered.

  8. Development of Air Speed Nozzles

    NASA Technical Reports Server (NTRS)

    Zahm, A F

    1920-01-01

    This report describes the development of a suitable speed nozzle for the first few thousand airplanes made by the United States during the recent war in Europe, intended also to furnish a basis for more mature instruments in the future. The requirements for the project were to provide a suitable pressure collector for aircraft speed meters and to develop a speed nozzle which would be waterproof, powerful, unaffected by slight pitch and yaw, rugged and easy to manufacture, and uniform in structure and reading, so as not to require individual calibration.

  9. Construction of a Cr3C2-C Peritectic Point Cell for Thermocouple Calibration

    NASA Astrophysics Data System (ADS)

    Ogura, Hideki; Deuze, Thierry; Morice, Ronan; Ridoux, Pascal; Filtz, Jean-Remy

    The melting points of Cr3C2-C peritectic (1826°C) and Cr7C3-Cr3C2 eutectic (1742°C) alloys as materials for high-temperature fixed point cells are investigated for use in thermocouple calibration. Pretests are performed to establish a suitable procedure for constructing contact thermometry cells based on such chromium-carbon mixtures. Two cells are constructed following two different possible procedures. The above two melting points are successfully observed for one of these cells using tungsten-rhenium alloy thermocouples.

  10. Calibration of mass spectrometric peptide mass fingerprint data without specific external or internal calibrants

    PubMed Central

    Wolski, Witold E; Lalowski, Maciej; Jungblut, Peter; Reinert, Knut

    2005-01-01

    Background Peptide Mass Fingerprinting (PMF) is a widely used mass spectrometry (MS) method of analysis of proteins and peptides. It relies on the comparison between experimentally determined and theoretical mass spectra. The PMF process requires calibration, usually performed with external or internal calibrants of known molecular masses. Results We have introduced two novel MS calibration methods. The first method utilises the local similarity of peptide maps generated after separation of complex protein samples by two-dimensional gel electrophoresis. It computes a multiple peak-list alignment of the data set using a modified Minimum Spanning Tree (MST) algorithm. The second method exploits the idea that hundreds of MS samples are measured in parallel on one sample support. It improves the calibration coefficients by applying a two-dimensional Thin Plate Splines (TPS) smoothing algorithm. We studied the novel calibration methods utilising data generated by three different MALDI-TOF-MS instruments. We demonstrate that a PMF data set can be calibrated without resorting to external calibrants or relying on widely occurring internal calibrants. The methods developed here were implemented in R and are part of the BioConductor package mscalib available from . Conclusion The MST calibration algorithm is well suited to calibrate MS spectra of protein samples resulting from two-dimensional gel electrophoretic separation. The TPS based calibration algorithm might be used to correct systematic mass measurement errors observed for large MS sample supports. As compared to other methods, our combined MS spectra calibration strategy increases the peptide/protein identification rate by an additional 5-15%. PMID:16102175

  11. Tidal, Residual, Intertidal Mudflat (TRIM) Model and its Applications to San Francisco Bay, California

    USGS Publications Warehouse

    Cheng, R.T.; Casulli, V.; Gartner, J.W.

    1993-01-01

    A numerical model using a semi-implicit finite-difference method for solving the two-dimensional shallow-water equations is presented. The gradient of the water surface elevation in the momentum equations and the velocity divergence in the continuity equation are finite-differenced implicitly, while the remaining terms are finite-differenced explicitly. The convective terms are treated using an Eulerian-Lagrangian method. The combination of the semi-implicit finite-difference solution for gravity wave propagation and the Eulerian-Lagrangian treatment of the convective terms renders the numerical model unconditionally stable. When the baroclinic forcing is included, a salt transport equation is coupled to the momentum equations, and the numerical method is subject to a weak stability condition. The method of solution and the properties of the numerical model are given. This numerical model is particularly suitable for applications to coastal plain estuaries and tidal embayments in which tidal currents are dominant, and tidally generated residual currents are important. The model is applied to San Francisco Bay, California, where extensive historical tides and current-meter data are available. The model calibration is considered by comparing time-series of the field data and of the model results. Alternatively, and perhaps more meaningfully, the model is calibrated by comparing the harmonic constants of tides and tidal currents derived from field data with those derived from the model. The model is further verified by comparing the model results with an independent data set representing the wet season. The strengths and the weaknesses of the model are assessed based on the results of model calibration and verification. Using the model results, the properties of tides and tidal currents in San Francisco Bay are characterized and discussed. Furthermore, using the numerical model, estimates of San Francisco Bay's volume, surface area, mean water depth, tidal prisms, and tidal excursions at spring and neap tides are computed. Additional applications of the model reveal, qualitatively, the spatial distribution of residual variables. © 1993 Academic Press. All rights reserved.
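    A minimal one-dimensional analogue of the semi-implicit idea is sketched below: treating the surface gradient and the velocity divergence implicitly couples the free-surface equation into a tridiagonal system that remains stable for large time steps. The grid size, depth, and boundary treatment are illustrative assumptions and do not reproduce the TRIM configuration.

    ```python
    import numpy as np

    # Linearized 1-D shallow-water equations on a staggered grid:
    #   du/dt = -g d(eta)/dx,   d(eta)/dt = -H du/dx
    # Discretizing the surface gradient and the divergence implicitly yields a
    # tridiagonal Helmholtz-type system for eta at the new time level.

    g, H = 9.81, 10.0            # gravity (m/s^2), assumed mean depth (m)
    nx, dx, dt = 100, 100.0, 30.0
    eta = np.zeros(nx)            # free surface at cell centers
    eta[40:60] = 0.5              # initial hump
    u = np.zeros(nx + 1)          # velocities at cell faces; u[0] = u[-1] = 0 (walls)

    c = g * H * (dt / dx) ** 2
    A = (np.eye(nx) * (1.0 + 2.0 * c)
         + np.diag([-c] * (nx - 1), 1)
         + np.diag([-c] * (nx - 1), -1))
    A[0, 0] = A[-1, -1] = 1.0 + c   # closed-boundary rows (no flux through walls)

    for _ in range(200):
        rhs = eta - H * dt / dx * (u[1:] - u[:-1])
        eta = np.linalg.solve(A, rhs)                  # implicit free-surface step
        u[1:-1] -= g * dt / dx * (eta[1:] - eta[:-1])  # velocity update with new eta

    print("max |eta| after 200 steps:", np.abs(eta).max())
    ```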

  12. Practicable methods for histological section thickness measurement in quantitative stereological analyses.

    PubMed

    Matenaers, Cyrill; Popper, Bastian; Rieger, Alexandra; Wanke, Rüdiger; Blutke, Andreas

    2018-01-01

    The accuracy of quantitative stereological analysis tools such as the (physical) disector method substantially depends on the precise determination of the thickness of the analyzed histological sections. One conventional method for measurement of histological section thickness is to re-embed the section of interest vertically to its original section plane. The section thickness is then measured in a subsequently prepared histological section of this orthogonally re-embedded sample. However, the orthogonal re-embedding (ORE) technique is quite work- and time-intensive and may produce inaccurate section thickness measurement values due to unintentional slightly oblique (non-orthogonal) positioning of the re-embedded sample-section. Here, an improved ORE method is presented, allowing for determination of the factual section plane angle of the re-embedded section, and correction of measured section thickness values for oblique (non-orthogonal) sectioning. For this, the analyzed section is mounted flat on a foil of known thickness (calibration foil) and both the section and the calibration foil are then vertically (re-)embedded. The section angle of the re-embedded section is then calculated from the deviation of the measured section thickness of the calibration foil and its factual thickness, using basic geometry. To find a practicable, fast, and accurate alternative to ORE, the suitability of spectral reflectance (SR) measurement for determination of plastic section thicknesses was evaluated. Using a commercially available optical reflectometer (F20, Filmetrics®, USA), the thicknesses of 0.5 μm thick semi-thin Epon (glycid ether)-sections and of 1-3 μm thick plastic sections (glycolmethacrylate/ methylmethacrylate, GMA/MMA), as regularly used in physical disector analyses, could be measured precisely within a few seconds. Compared to the measured section thicknesses determined by ORE, SR measures displayed less than 1% deviation. Our results prove the applicability of SR to efficiently provide accurate section thickness measurements as a prerequisite for reliable estimates of dependent quantitative stereological parameters.
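    The geometric correction described above can be written out explicitly. The sketch below assumes that an oblique cut inflates apparent thicknesses by 1/cos of the sectioning angle; the numerical values are purely illustrative.

    ```python
    import math

    def corrected_section_thickness(t_section_meas: float,
                                    t_foil_meas: float,
                                    t_foil_true: float):
        """Correct a measured section thickness for oblique re-sectioning.

        An oblique cut inflates apparent thicknesses by 1/cos(alpha), so the
        calibration foil gives cos(alpha) = t_foil_true / t_foil_meas and the
        corrected section thickness is t_section_meas * cos(alpha).
        Returns (corrected thickness, sectioning angle in degrees).
        """
        cos_alpha = t_foil_true / t_foil_meas
        alpha_deg = math.degrees(math.acos(min(1.0, cos_alpha)))
        return t_section_meas * cos_alpha, alpha_deg

    # Illustrative values (micrometres): a 23.5 um foil measured as 24.6 um
    # implies a ~17 degree deviation; a 1.10 um measured section corrects to ~1.05 um.
    print(corrected_section_thickness(1.10, 24.6, 23.5))
    ```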

  13. Setup calibration and optimization for comparative digital holography

    NASA Astrophysics Data System (ADS)

    Baumbach, Torsten; Osten, Wolfgang; Kebbel, Volker; von Kopylow, Christoph; Jueptner, Werner

    2004-08-01

    With increasing globalization many enterprises decide to produce the components of their products at different locations all over the world. Consequently, new technologies and strategies for quality control are required. In this context the remote comparison of objects with regard to their shape or response on certain loads is getting more and more important for a variety of applications. For such a task the novel method of comparative digital holography is a suitable tool with interferometric sensitivity. With this technique the comparison in shape or deformation of two objects does not require the presence of both objects at the same place. In contrast to the well known incoherent techniques based on inverse fringe projection this new approach uses a coherent mask for the illumination of the sample object. The coherent mask is created by digital holography to enable the instant access to the complete optical information of the master object at any wanted place. The reconstruction of the mask is done by a spatial light modulator (SLM). The transmission of the digital master hologram to the place of comparison can be done via digital telecommunication networks. Contrary to other interferometric techniques this method enables the comparison of objects with different microstructure. In continuation of earlier reports our investigations are focused here on the analysis of the constraints of the setup with respect to the quality of the hologram reconstruction with a spatial light modulator. For successful measurements the selection of the appropriate reconstruction method and the adequate optical set-up is mandatory. In addition, the use of a SLM for the reconstruction requires the knowledge of its properties for the accomplishment of this method. The investigation results for the display properties such as display curvature, phase shift and the consequences for the technique will be presented. The optimization and the calibration of the set-up and its components lead to improved results in comparative digital holography with respect to the resolution. Examples of measurements before and after the optimization and calibration will be presented.

  15. Validation of an LC-MS/MS method to measure tacrolimus in rat kidney and liver tissue and its application to human kidney biopsies.

    PubMed

    Noll, Benjamin D; Coller, Janet K; Somogyi, Andrew A; Morris, Raymond G; Russ, Graeme R; Hesselink, Dennis A; Van Gelder, Teun; Sallustio, Benedetta C

    2013-10-01

    Tacrolimus (TAC) has a narrow therapeutic index and high interindividual and intraindividual pharmacokinetic variability, necessitating therapeutic drug monitoring to individualize dosage. Recent evidence suggests that intragraft TAC concentrations may better predict transplant outcomes. This study aimed to develop a method for the quantification of TAC in small biopsy-sized samples of rat kidney and liver tissue, which could be applied to clinical biopsy samples from kidney transplant recipients. Kidneys and livers were harvested from Mrp2-deficient TR- Wistar rats administered TAC (4 mg·kg·d for 14 days, n = 8) or vehicle (n = 10). Tissue samples (0.20-1.00 mg of dry weight) were solubilized enzymatically and underwent liquid-liquid extraction before analysis by liquid chromatography-tandem mass spectrometry. TAC-free tissue was used in the calibrator and quality control samples. Analyte detection was accomplished using positive electrospray ionization (TAC: m/z 821.5 → 768.6; internal standard ascomycin m/z 809.3 → 756.4). Calibration curves (0.04-2.6 μg/L) were linear (R > 0.99, n = 10), with interday and intraday calibrator coefficients of variation and bias <17% at the lower limit of quantification and <15% at all other concentrations (n = 6-10). Extraction efficiencies for TAC and ascomycin were approximately 70%, and matrix effects were minimal. Rat kidney TAC concentrations were higher (range 109-190 pg/mg tissue) than those in the liver (range 22-53 pg/mg of tissue), with median tissue/blood concentration ratios of 72.0 and 17.6, respectively. In 2 transplant patients, kidney TAC concentrations ranged from 119 to 285 pg/mg of tissue and were approximately 20 times higher than whole blood trough TAC concentrations. The method displayed precision and accuracy suitable for application to TAC measurement in human kidney biopsy tissue.

  16. An investigation into force-moment calibration techniques applicable to a magnetic suspension and balance system. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Eskins, Jonathan

    1988-01-01

    The problem of determining the forces and moments acting on a wind tunnel model suspended in a Magnetic Suspension and Balance System is addressed. Two calibration methods were investigated for three types of model cores, i.e., Alnico, Samarium-Cobalt, and a superconducting solenoid. Both methods involve calibrating the currents in the electromagnetic array against known forces and moments. The first is a static calibration method using calibration weights and a system of pulleys. The other method, dynamic calibration, involves oscillating the model and using its inertia to provide calibration forces and moments. Static calibration data, found to produce the most reliable results, is presented for three degrees of freedom at 0, 15, and -10 deg angle of attack. Theoretical calculations are hampered by the inability to represent iron-cored electromagnets. Dynamic calibrations, despite being quicker and easier to perform, are not as accurate as static calibrations. Data for dynamic calibrations at 0 and 15 deg is compared with the relevant static data acquired. Distortion of oscillation traces is cited as a major source of error in dynamic calibrations.

  17. Research on auto-calibration technology of the image plane's center of 360-degree and all round looking camera

    NASA Astrophysics Data System (ADS)

    Zhang, Shaojun; Xu, Xiping

    2015-10-01

    Because it is well suited to automatic analysis and judgment of the carrier's ambient environment through image recognition algorithms, the 360-degree all-round-looking camera is commonly applied in the opto-electronic radar of robots and smart cars. To ensure stable and consistent image processing results in mass production, the centers of the image planes of different cameras must coincide, which requires calibrating the position of the image plane's center. Both the traditional mechanical calibration method and the electronic adjustment mode, in which offsets are entered manually, suffer from reliance on the human eye, inefficiency, and a wide error distribution. In this paper, an approach for auto-calibration of the image plane of this camera is presented. The camera produces a ring-shaped image bounded by two concentric circles, a smaller inner circle and a larger outer circle, and the technique exploits exactly this characteristic. By recognizing the two circles with the Hough transform algorithm and calculating their center position, we obtain the accurate center of the image, and hence the deviation between the optical axis and the center of the image sensor. The program then configures the image sensor chip over the I2C bus automatically, so the center of the image plane can be adjusted automatically and accurately. The technique has been applied in practice; it improves productivity and guarantees consistent product quality.
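    A hedged sketch of the circle-detection step is given below using OpenCV's Hough transform. The radius ranges and Hough parameters are placeholders that would need tuning, the file name is hypothetical, and the I2C write-back is only indicated in a comment; this is not the authors' implementation.

    ```python
    import cv2

    def detect_circle(gray, r_min, r_max):
        """Return (x, y, r) of the strongest Hough circle with radius in [r_min, r_max]."""
        circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.5,
                                   minDist=gray.shape[0],   # at most one circle per call
                                   param1=100, param2=60,
                                   minRadius=r_min, maxRadius=r_max)
        if circles is None:
            raise RuntimeError(f"no circle found with radius in [{r_min}, {r_max}]")
        return circles[0][0]

    def find_ring_center(image_path, inner=(50, 200), outer=(210, 600)):
        """Estimate the ring-image center as the mean of the inner and outer
        circle centers. Radius ranges and Hough parameters are placeholders."""
        gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        if gray is None:
            raise FileNotFoundError(image_path)
        gray = cv2.medianBlur(gray, 5)
        x1, y1, _ = detect_circle(gray, *inner)
        x2, y2, _ = detect_circle(gray, *outer)
        cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
        # The offset (cx - width/2, cy - height/2) is what would be written back
        # to the sensor over the I2C bus to re-center the image plane.
        return cx, cy

    # Example with a hypothetical file name:
    # print(find_ring_center("ring_image.png"))
    ```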

  18. The calibration methods for Multi-Filter Rotating Shadowband Radiometer: a review

    NASA Astrophysics Data System (ADS)

    Chen, Maosi; Davis, John; Tang, Hongzhao; Ownby, Carolyn; Gao, Wei

    2013-09-01

    The continuous, over two-decade data record from the Multi-Filter Rotating Shadowband Radiometer (MFRSR) is ideal for climate research, which requires timely and accurate information on important atmospheric components such as gases, aerosols, and clouds. Except for parameters derived from MFRSR measurement ratios, which are not impacted by calibration error, most applications require accurate calibration factor(s), angular correction, and spectral response function(s) from calibration. Although a laboratory lamp (or reference) calibration can provide all the information needed to convert the instrument readings to actual radiation, in situ calibration methods are implemented routinely (daily) to fill the gaps between lamp calibrations. In this paper, the basic structure and the data collection and pretreatment of the MFRSR are described. The laboratory lamp calibration and its limitations are summarized. The cloud screening algorithms for MFRSR data are presented. The in situ calibration methods (the standard Langley method and its variants, the ratio-Langley method, the general method, Alexandrov's comprehensive method, and Chen's multi-channel method) are outlined. None of these methods suits all situations because each assumes that some property, such as aerosol optical depth (AOD), total optical depth (TOD), precipitable water vapor (PWV), the effective size of aerosol particles, or the Angstrom coefficient, is invariant over time; such invariance is not universal and in some cases rarely holds. In practice, daily calibration factors derived from these methods should be smoothed to restrain errors.
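    For orientation, the standard Langley method referred to above reduces to a linear fit of the logarithm of the signal against air mass, extrapolated to zero air mass. A minimal sketch with synthetic data follows; the Earth-Sun distance correction and cloud screening are omitted.

    ```python
    import numpy as np

    def langley_calibration(air_mass: np.ndarray, signal: np.ndarray):
        """Standard Langley method: fit ln(V) = ln(V0) - m * tau over a stable
        morning or afternoon and extrapolate to zero air mass.
        Returns (V0, tau)."""
        slope, intercept = np.polyfit(air_mass, np.log(signal), 1)
        return np.exp(intercept), -slope

    # Synthetic demonstration: V0 = 1.50, tau = 0.12, 2% multiplicative noise.
    rng = np.random.default_rng(0)
    m = np.linspace(2.0, 6.0, 25)
    v = 1.50 * np.exp(-0.12 * m) * (1.0 + 0.02 * rng.standard_normal(m.size))
    print(langley_calibration(m, v))   # approximately (1.50, 0.12)
    ```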

  19. Small format digital photogrammetry for applications in the earth sciences

    NASA Astrophysics Data System (ADS)

    Rieke-Zapp, Dirk

    2010-05-01

    Photogrammetry is often considered one of the most precise and versatile surveying techniques. The same camera and analysis software can be used for measurements from sub-millimetre to kilometre scale. Such a measurement device is well suited for application by earth scientists working in the field. In this case a small toolset and a straightforward setup best fit the needs of the operator. While a digital camera is typically already part of the field equipment of an earth scientist, the main focus of the field work is often not surveying. A lack of photogrammetric training at the same time requires an easy-to-learn, straightforward surveying technique. A photogrammetric method was developed, aimed primarily at earth scientists, for taking accurate measurements in the field while minimizing the extra bulk and weight of the required equipment. The work included several challenges. A) Definition of an upright coordinate system without heavy and bulky tools like a total station or GNSS sensor. B) Optimization of image acquisition and geometric stability of the image block. C) Identification of a small camera suitable for precise measurements in the field. D) Optimization of the workflow from image acquisition to preparation of images for stereo measurements. E) Introduction of students and non-photogrammetrists to the workflow. Wooden spheres were used as target points in the field. They were more rugged than the ping pong balls used in a previous setup and available in different sizes. Distances between three spheres were introduced as scale information in a photogrammetric adjustment. The distances were measured with a laser distance meter accurate to 1 mm (1 sigma). The vertical angle between the spheres was measured with the same laser distance meter. The precision of the measurement was 0.3° (1 sigma), which is sufficient, i.e., better than inclination measurements with a geological compass. The upright coordinate system is important to measure the dip angle of geologic features in outcrop. The planimetric coordinate system would be arbitrary, but may easily be oriented to compass north by introducing a compass direction measurement. Wooden spheres and a Leica disto D3 laser distance meter added less than 0.150 kg to the field equipment, considering that a suitable digital camera was already part of it. Identification of a small digital camera suitable for precise measurements was a major part of this work. A group of cameras was calibrated several times over different periods of time on a testfield. Further evaluation involved an accuracy assessment in the field, comparing distances between signalized points calculated from a photogrammetric setup with coordinates derived from a total station survey. The smallest camera in the test required calibration on the job, as the interior orientation changed significantly between testfield calibration and use in the field. We attribute this to the fact that the lens was retracted when the camera was switched off. Fairly stable camera geometry in a compact-size camera with a lens retracting system was accomplished for the Sigma DP1 and DP2 cameras. While the pixel count of these cameras was less than that of the Ricoh, the pixel pitch in the Sigma cameras was much larger. Hence, the same mechanical movement would have a smaller per-pixel effect for the Sigma cameras than for the Ricoh camera. A large pixel pitch may therefore compensate for some camera instability, explaining why cameras with large sensors and larger pixel pitch typically yield better accuracy in object space. Both Sigma cameras weigh approximately 0.250 kg and may even be suitable for use with ultralight aerial vehicles (UAVs), which have payload restrictions of 0.200 to 0.300 kg. A set of other cameras that were available was also tested on a calibration field and on location, showing once again that it is difficult to infer geometric stability from camera specifications. Image acquisition with geometrically stable cameras was fairly straightforward, covering the area of interest with stereo pairs for analysis. We limited our tests to setups with three to five images to minimize the amount of post-processing. The laser dot of the laser distance meter was not visible to the naked eye for distances farther than 5-7 m, which also limited the maximum stereo area that may be covered with this technique. Extrapolating the setup to fairly large areas showed no significant decrease in the accuracy accomplished in object space. Working with a Sigma SD14 SLR camera on a 6 x 18 x 20 m3 volume, the maximum length measurement error ranged between 20 and 30 mm, depending on image setup and analysis. For smaller outcrops even the compact cameras yielded maximum length measurement errors in the mm range, which was considered sufficient for measurements in the earth sciences. In many cases the resolution per pixel, rather than accuracy, was the limiting factor of image analysis. A field manual was developed to guide novice users and students in this technique. The technique does not trade precision for ease of use; therefore successful users of the presented method easily grow into more advanced photogrammetric methods for high-precision applications. Originally, camera calibration was not part of the methodology for the novice operators. The recent introduction of Camera Calibrator, a low-cost, well-automated software package for camera calibration, allowed beginners to calibrate their camera within a couple of minutes. The complete set of calibration parameters can be applied in ERDAS LPS software, easing the workflow. Image orientation was performed in LPS 9.2 software, which was also used for further image analysis.

  20. Simple transfer calibration method for a Cimel Sun-Moon photometer: calculating lunar calibration coefficients from Sun calibration constants.

    PubMed

    Li, Zhengqiang; Li, Kaitao; Li, Donghui; Yang, Jiuchun; Xu, Hua; Goloub, Philippe; Victori, Stephane

    2016-09-20

    The Cimel new technologies allow both daytime and nighttime aerosol optical depth (AOD) measurements. Although the daytime AOD calibration protocols are well established, accurate and simple nighttime calibration is still a challenging task. Standard lunar-Langley and intercomparison calibration methods both require specific conditions in terms of atmospheric stability and site condition. Additionally, the lunar irradiance model also has some known limits on its uncertainty. This paper presents a simple calibration method that transfers the direct-Sun calibration constant, V0,Sun, to the lunar irradiance calibration coefficient, CMoon. Our approach is a pure calculation method, independent of site limits, e.g., Moon phase. The method is also not affected by the lunar irradiance model limitations, which is the largest error source of traditional calibration methods. Besides, this new transfer calibration approach is easy to use in the field since CMoon can be obtained directly once V0,Sun is known. Error analysis suggests that the average uncertainty of CMoon over the 440-1640 nm bands obtained with the transfer method is 2.4%-2.8%, depending on the V0,Sun approach (Langley or intercomparison), which is comparable with that of lunar-Langley approach, theoretically. In this paper, the Sun-Moon transfer and the Langley methods are compared based on site measurements in Beijing, and the day-night measurement continuity and performance are analyzed.

  1. Optimized star sensors laboratory calibration method using a regularization neural network.

    PubMed

    Zhang, Chengfen; Niu, Yanxiong; Zhang, Hao; Lu, Jiazhen

    2018-02-10

    High-precision ground calibration is essential to ensure the performance of star sensors. However, the complex distortion and multi-error coupling have brought great difficulties to traditional calibration methods, especially for large field of view (FOV) star sensors. Although increasing the complexity of models is an effective way to improve the calibration accuracy, it significantly increases the demand for calibration data. In order to achieve high-precision calibration of star sensors with large FOV, a novel laboratory calibration method based on a regularization neural network is proposed. A multi-layer structure neural network is designed to represent the mapping of the star vector and the corresponding star point coordinate directly. To ensure the generalization performance of the network, regularization strategies are incorporated into the net structure and the training algorithm. Simulation and experimental results demonstrate that the proposed method can achieve high precision with less calibration data and without any other a priori information. Compared with traditional methods, the calibration error of the star sensor decreased by about 30%. The proposed method can satisfy the precision requirement for large FOV star sensors.
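    To make the idea concrete, the sketch below trains a small multilayer network with an L2 (weight-decay) penalty to map measured star-point coordinates to reference unit vectors. The architecture, synthetic data, and penalty weight are illustrative assumptions, not the authors' network or training scheme.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Synthetic stand-in for calibration data: star-point coordinates (u, v) on
    # the detector and the corresponding reference star unit vectors (x, y, z).
    rng = np.random.default_rng(1)
    uv = rng.uniform(-1.0, 1.0, size=(2000, 2))            # normalized coordinates
    f = 1.5                                                 # assumed focal length
    vec = np.column_stack([uv[:, 0], uv[:, 1], np.full(len(uv), f)])
    vec /= np.linalg.norm(vec, axis=1, keepdims=True)       # ideal pinhole mapping
    uv_meas = uv + 0.002 * rng.standard_normal(uv.shape)    # add measurement noise

    # Multi-layer network with L2 weight decay (`alpha`) as the regularizer.
    net = MLPRegressor(hidden_layer_sizes=(64, 64), alpha=1e-3,
                       max_iter=3000, random_state=0)
    net.fit(uv_meas, vec)

    pred = net.predict(uv_meas)
    pred /= np.linalg.norm(pred, axis=1, keepdims=True)
    err = np.degrees(np.arccos(np.clip(np.sum(pred * vec, axis=1), -1.0, 1.0)))
    print(f"mean angular error: {err.mean() * 3600:.1f} arcsec")
    ```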

  2. Detection and quantification of adulteration in sandalwood oil through near infrared spectroscopy.

    PubMed

    Kuriakose, Saji; Thankappan, Xavier; Joe, Hubert; Venkataraman, Venkateswaran

    2010-10-01

    The confirmation of authenticity of essential oils and the detection of adulteration are problems of increasing importance in the perfume, pharmaceutical, flavor, and fragrance industries. This is especially true for 'value added' products like sandalwood oil. A methodical study is conducted here to demonstrate the potential use of Near Infrared (NIR) spectroscopy along with multivariate calibration models like principal component regression (PCR) and partial least square regression (PLSR) as rapid analytical techniques for the qualitative and quantitative determination of adulterants in sandalwood oil. After suitable pre-processing of the NIR raw spectral data, the models are built up by cross-validation. The lowest Root Mean Square Error of Cross-Validation and Calibration (RMSECV and RMSEC % v/v) are used as a decision-support criterion to fix the optimal number of factors. The coefficient of determination (R(2)) and the Root Mean Square Error of Prediction (RMSEP % v/v) in the prediction sets are used as the evaluation parameters (R(2) = 0.9999 and RMSEP = 0.01355). The overall result leads to the conclusion that NIR spectroscopy with chemometric techniques could be successfully used as a rapid, simple, instant and non-destructive method for the detection of adulterants, even at 1% of low-grade oils, in high-quality sandalwood oil.
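    As a generic illustration of the chemometric workflow (not the authors' exact pipeline), the sketch below cross-validates a PLS regression to choose the number of latent variables by minimum RMSECV; the spectra and concentrations are synthetic placeholders.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    # Synthetic placeholder spectra: 80 samples x 200 wavelengths, with the
    # adulterant concentration encoded in two broad bands plus noise.
    rng = np.random.default_rng(2)
    conc = rng.uniform(0.0, 20.0, 80)                         # % adulterant
    wl = np.linspace(1100, 2500, 200)
    band = np.exp(-((wl - 1700) / 60.0) ** 2) + 0.5 * np.exp(-((wl - 2100) / 80.0) ** 2)
    X = np.outer(conc, band) + 0.05 * rng.standard_normal((80, 200))

    # Choose the number of PLS factors by minimizing RMSECV.
    def rmsecv(n_components: int) -> float:
        pred = cross_val_predict(PLSRegression(n_components=n_components), X, conc, cv=10)
        return float(np.sqrt(np.mean((pred.ravel() - conc) ** 2)))

    errors = {n: rmsecv(n) for n in range(1, 11)}
    best = min(errors, key=errors.get)
    print(f"optimal factors: {best}, RMSECV = {errors[best]:.3f} %")
    ```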

  3. Accelerated fatigue testing of dentin-composite bond with continuously increasing load.

    PubMed

    Li, Kai; Guo, Jiawen; Li, Yuping; Heo, Young Cheul; Chen, Jihua; Xin, Haitao; Fok, Alex

    2017-06-01

    The aim of this study was to evaluate an accelerated fatigue test method that used a continuously increasing load for testing the dentin-composite bond strength. Dentin-composite disks (ϕ5mm×2mm) made from bovine incisor roots were subjected to cyclic diametral compression with a continuously increasing load amplitude. Two different load profiles, linear and nonlinear with respect to the number of cycles, were considered. The data were then analyzed by using a probabilistic failure model based on the Weakest-Link Theory and the classical stress-life function, before being transformed to simulate clinical data of direct restorations. All the experimental data could be well fitted with a 2-parameter Weibull function. However, a calibration was required for the effective stress amplitude to account for the difference between static and cyclic loading. Good agreement was then obtained between theory and experiments for both load profiles. The in vitro model also successfully simulated the clinical data. The method presented will allow tooth-composite interfacial fatigue parameters to be determined more efficiently. With suitable calibration, the in vitro model can also be used to assess composite systems in a more clinically relevant manner. Copyright © 2017 The Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
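    A minimal sketch of fitting the 2-parameter Weibull function to failure data is given below, with the location parameter fixed at zero and synthetic failure loads standing in for measurements; it illustrates only the distribution fit, not the stress-life calibration used in the study.

    ```python
    import numpy as np
    from scipy import stats

    # Synthetic failure loads (N) standing in for dentin-composite fatigue data.
    rng = np.random.default_rng(3)
    failure_load = stats.weibull_min.rvs(c=4.0, scale=120.0, size=60, random_state=rng)

    # Fit a 2-parameter Weibull by fixing the location parameter at zero.
    shape, loc, scale = stats.weibull_min.fit(failure_load, floc=0)
    print(f"Weibull modulus (shape) = {shape:.2f}, characteristic load = {scale:.1f} N")

    # Probability of failure at a given load, from the fitted distribution.
    print("P(failure at 100 N) =", round(stats.weibull_min.cdf(100.0, shape, loc, scale), 3))
    ```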

  4. Fibrinolysis standards: a review of the current status.

    PubMed

    Thelwell, C

    2010-07-01

    Biological standards are used to calibrate measurements of components of the fibrinolytic system, either for assigning potency values to therapeutic products, or to determine levels in human plasma as an indicator of thrombotic risk. Traditionally WHO International Standards are calibrated in International Units based on consensus values from collaborative studies. The International Unit is defined by the response activity of a given amount of the standard in a bioassay, independent of the method used. Assay validity is based on the assumption that both standard and test preparation contain the same analyte, and the response in an assay is a true function of this analyte. This principle is reflected in the diversity of source materials used to prepare fibrinolysis standards, which has depended on the contemporary preparations they were employed to measure. With advancing recombinant technology, and improved analytical techniques, a reference system based on reference materials and associated reference methods has been recommended for future fibrinolysis standards. Careful consideration and scientific judgement must however be applied when deciding on an approach to develop a new standard, with decisions based on the suitability of a standard to serve its purpose, and not just to satisfy a metrological ideal. 2010 The International Association for Biologicals. Published by Elsevier Ltd. All rights reserved.

  5. Radiometer calibration methods and resulting irradiance differences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, Aron; Sengupta, Manajit; Andreas, Afshin

    Accurate solar radiation measured by radiometers depends on instrument performance specifications, installation method, calibration procedure, measurement conditions, maintenance practices, location, and environmental conditions. This study addresses the effect of different calibration methodologies and resulting differences provided by radiometric calibration service providers such as the National Renewable Energy Laboratory (NREL) and manufacturers of radiometers. Some of these methods calibrate radiometers indoors and some outdoors. To establish or understand the differences in calibration methodologies, we processed and analyzed field-measured data from radiometers deployed for 10 months at NREL's Solar Radiation Research Laboratory. These different methods of calibration resulted in a difference of +/-1% to +/-2% in solar irradiance measurements. Analyzing these differences will ultimately assist in determining the uncertainties of the field radiometer data and will help develop a consensus on a standard for calibration. Further advancing procedures for precisely calibrating radiometers to world reference standards that reduce measurement uncertainties will help the accurate prediction of the output of planned solar conversion projects and improve the bankability of financing solar projects.

  6. Technical Note: Procedure for the calibration and validation of kilo-voltage cone-beam CT models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vilches-Freixas, Gloria; Létang, Jean Michel; Rit,

    2016-09-15

    Purpose: The aim of this work is to propose a general and simple procedure for the calibration and validation of kilo-voltage cone-beam CT (kV CBCT) models against experimental data. Methods: The calibration and validation of the CT model is a two-step procedure: the source model then the detector model. The source is described by the direction dependent photon energy spectrum at each voltage while the detector is described by the pixel intensity value as a function of the direction and the energy of incident photons. The measurements for the source consist of a series of dose measurements in air performed at each voltage with varying filter thicknesses and materials in front of the x-ray tube. The measurements for the detector are acquisitions of projection images using the same filters and several tube voltages. The proposed procedure has been applied to calibrate and assess the accuracy of simple models of the source and the detector of three commercial kV CBCT units. If the CBCT system models had been calibrated differently, the current procedure would have been exclusively used to validate the models. Several high-purity attenuation filters of aluminum, copper, and silver combined with a dosimeter which is sensitive to the range of voltages of interest were used. A sensitivity analysis of the model has also been conducted for each parameter of the source and the detector models. Results: Average deviations between experimental and theoretical dose values are below 1.5% after calibration for the three x-ray sources. The predicted energy deposited in the detector agrees with experimental data within 4% for all imaging systems. Conclusions: The authors developed and applied an experimental procedure to calibrate and validate any model of the source and the detector of a CBCT unit. The present protocol has been successfully applied to three x-ray imaging systems. The minimum requirements in terms of material and equipment would make its implementation suitable in most clinical environments.
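    One generic way to calibrate a source model of this kind is to estimate the weights of a discretized spectrum from the measured transmission curve by non-negative least squares. The sketch below uses made-up energies, attenuation coefficients, and filter thicknesses; it is an assumed illustration, not the authors' procedure.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Discretized energy bins and made-up aluminum attenuation coefficients (1/cm).
    energies = np.array([40.0, 60.0, 80.0, 100.0, 120.0])     # keV (illustrative)
    mu_al = np.array([1.50, 0.75, 0.50, 0.40, 0.35])           # placeholder values

    # Filter thicknesses used for the calibration measurements (cm).
    thickness = np.array([0.0, 0.1, 0.2, 0.4, 0.8, 1.6, 3.2])

    # Forward model: signal(t) = sum_i w_i * exp(-mu_i * t).
    A = np.exp(-np.outer(thickness, mu_al))

    # Simulated "measurements" from a known spectrum, with 1% noise.
    w_true = np.array([0.1, 0.3, 0.3, 0.2, 0.1])
    rng = np.random.default_rng(4)
    measured = A @ w_true * (1.0 + 0.01 * rng.standard_normal(len(thickness)))

    # Recover non-negative spectral weights from the transmission curve.
    w_est, residual = nnls(A, measured)
    print("estimated weights:", np.round(w_est / w_est.sum(), 3))
    ```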

  7. Determination of the platinum - Group elements (PGE) and gold (Au) in manganese nodule reference samples by nickel sulfide fire-assay and Te coprecipitation with ICP-MS

    USGS Publications Warehouse

    Balaram, V.; Mathur, R.; Banakar, V.K.; Hein, J.R.; Rao, C.R.M.; Gnaneswara, Rao T.; Dasaram, B.

    2006-01-01

    Platinum group elements (PGE) and Au data in polymetallic oceanic ferromanganese nodule reference samples and crust samples obtained by inductively coupled plasma mass spectrometry (ICP-MS), after separation and pre-concentration by nickel sulfide fire-assay and Te coprecipitation, are presented. By optimizing several critical parameters such as flux composition, matrix matching calibration, etc., best experimental conditions were established to develop a method suitable for routine analysis of manganese nodule samples for PGE and Au. Calibrations were performed using international PGE reference materials, WMG-1 and WMS-1. This improved procedure offers extremely low detection limits in the range of 0.004 to 0.016 ng/g. The results obtained in this study for the reference materials compare well with previously published data wherever available. New PGE data are also provided on some international manganese nodule reference materials. The analytical methodology described here can be used for the routine analysis of manganese nodule and crust samples in marine geochemical studies.

  8. Spectral deconvolution and operational use of stripping ratios in airborne radiometrics.

    PubMed

    Allyson, J D; Sanderson, D C

    2001-01-01

    Spectral deconvolution using stripping ratios for a set of pre-defined energy windows is the simplest means of reducing the most important part of gamma-ray spectral information. In this way, the effective interferences between the measured peaks are removed, leading, through a calibration, to clear estimates of radionuclide inventory. While laboratory measurements of stripping ratios are relatively easy to acquire, with detectors placed above small-scale calibration pads of known radionuclide concentrations, the extrapolation to measurements at altitudes where airborne survey detectors are used brings difficulties such as air-path attenuation and greater uncertainties in knowing ground-level inventories. Stripping ratios are altitude dependent, and laboratory measurements using various absorbers to simulate the air-path have been used with some success. Full-scale measurements from an aircraft require a suitable location where radionuclide concentrations vary little over the field of view of the detector (which may be hundreds of metres). Monte Carlo simulations offer the potential of full-scale reproduction of gamma-ray transport and detection mechanisms. Investigations have been made to evaluate stripping ratios using experimental and Monte Carlo methods.
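    The window-stripping step itself is a small linear correction. The sketch below applies an assumed 3x3 stripping-ratio matrix to example window count rates; the off-diagonal values are placeholders, not calibrated or altitude-corrected ratios.

    ```python
    import numpy as np

    # Rows/columns ordered as (K, U, Th) windows. Entry S[i, j] is the fraction of
    # counts from window j's source nuclide registered in window i.
    # Off-diagonal values are placeholders, not calibrated stripping ratios.
    S = np.array([
        [1.00, 0.80, 0.45],   # U ("gamma") and Th ("beta") spill into the K window
        [0.00, 1.00, 0.28],   # Th spill into the U window ("alpha")
        [0.00, 0.05, 1.00],   # small upward spill of U into the Th window ("a")
    ])

    observed = np.array([250.0, 90.0, 60.0])   # example window count rates (cps)

    # Stripped (interference-corrected) count rates: solve S @ true = observed.
    stripped = np.linalg.solve(S, observed)
    print(np.round(stripped, 1))
    ```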

  9. Astronomical calibration of the geological timescale: closing the middle Eocene gap

    NASA Astrophysics Data System (ADS)

    Westerhold, T.; Röhl, U.; Frederichs, T.; Bohaty, S. M.; Zachos, J. C.

    2015-09-01

    To explore cause and consequences of past climate change, very accurate age models such as those provided by the astronomical timescale (ATS) are needed. Beyond 40 million years the accuracy of the ATS critically depends on the correctness of orbital models and radioisotopic dating techniques. Discrepancies in the age dating of sedimentary successions and the lack of suitable records spanning the middle Eocene have prevented development of a continuous astronomically calibrated geological timescale for the entire Cenozoic Era. We now solve this problem by constructing an independent astrochronological stratigraphy based on Earth's stable 405 kyr eccentricity cycle between 41 and 48 million years ago (Ma) with new data from deep-sea sedimentary sequences in the South Atlantic Ocean. This new link completes the Paleogene astronomical timescale and confirms the intercalibration of radioisotopic and astronomical dating methods back through the Paleocene-Eocene Thermal Maximum (PETM, 55.930 Ma) and the Cretaceous-Paleogene boundary (66.022 Ma). Coupling of the Paleogene 405 kyr cyclostratigraphic frameworks across the middle Eocene further paves the way for extending the ATS into the Mesozoic.

  10. Real-time 3D motion tracking for small animal brain PET

    NASA Astrophysics Data System (ADS)

    Kyme, A. Z.; Zhou, V. W.; Meikle, S. R.; Fulton, R. R.

    2008-05-01

    High-resolution positron emission tomography (PET) imaging of conscious, unrestrained laboratory animals presents many challenges. Some form of motion correction will normally be necessary to avoid motion artefacts in the reconstruction. The aim of the current work was to develop and evaluate a motion tracking system potentially suitable for use in small animal PET. This system is based on the commercially available stereo-optical MicronTracker S60 which we have integrated with a Siemens Focus-220 microPET scanner. We present measured performance limits of the tracker and the technical details of our implementation, including calibration and synchronization of the system. A phantom study demonstrating motion tracking and correction was also performed. The system can be calibrated with sub-millimetre accuracy, and small lightweight markers can be constructed to provide accurate 3D motion data. A marked reduction in motion artefacts was demonstrated in the phantom study. The techniques and results described here represent a step towards a practical method for rigid-body motion correction in small animal PET. There is scope to achieve further improvements in the accuracy of synchronization and pose measurements in future work.

  11. Analysis of polar and non-polar VOCs from ambient and source matrices: Development of a new canister autosampler which meets TO-15 QA/QC criteria

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burnett, M.L.W.; Neal, D.; Uchtman, R.

    1997-12-31

    Approximately 108 of the Hazardous Air Pollutants (HAPs) specified in the 1990 Clean Air Act Amendments are classified as volatile organic compounds (VOCs). Of the 108 VOCs, nearly 35% are oxygenated or polar compounds. While more than one sample introduction technique exists for the analysis of these air toxics, SUMMA® canister sampling is suitable for the most complete range of analytes. A broad concentration range of polar and non-polar species can be analyzed from canisters. A new canister autosampler, the Tekmar AUTOCan™ Elite autosampler, has been developed which incorporates the autosampler and concentrator into a single unit. Analysis of polar and non-polar VOCs has been performed. This paper demonstrates adherence to the technical acceptance objectives outlined in the TO-15 methodology including initial calibration, daily calibration, blank analysis, method detection limits and laboratory control samples. The analytical system consists of a Tekmar AUTOCan™ Elite autosampler interfaced to a Hewlett Packard® 5890/5972 MSD.

  12. Multi-elemental analysis of aqueous geological samples by inductively coupled plasma-optical emission spectrometry

    USGS Publications Warehouse

    Todorov, Todor I.; Wolf, Ruth E.; Adams, Monique

    2014-01-01

    Typically, 27 major, minor, and trace elements are determined in natural waters, acid mine drainage, extraction fluids, and leachates of geological and environmental samples by inductively coupled plasma-optical emission spectrometry (ICP-OES). At the discretion of the analyst, additional elements may be determined after suitable method modifications and performance data are established. Samples are preserved in 1–2 percent nitric acid (HNO3) at sample collection or as soon as possible after collection. The aqueous samples are aspirated into the ICP-OES discharge, where the elemental emission signals are measured simultaneously for 27 elements. Calibration is performed with a series of matrix-matched, multi-element solution standards.

  13. Solid state TL detectors for in vivo dosimetry in brachytherapy.

    PubMed

    Gambarini, G; Borroni, M; Grisotto, S; Maucione, A; Cerrotta, A; Fallai, C; Carrara, M

    2012-12-01

    In vivo dosimetry provides information about the actual dose delivered to the patient treated with radiotherapy and can be adopted within a routine treatment quality assurance protocol. The aim of this study was to evaluate the feasibility of performing in vivo rectal dosimetry by placing thermoluminescence detectors directly on the transrectal ultrasound probe adopted for on-line treatment planning of high dose rate brachytherapy boosts of prostate cancer patients. A suitable protocol for TLD calibration has been set up. In vivo measurements were found to be in good agreement with the calculated doses, showing that the proposed method is feasible and returns accurate results. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. Double-cavity radiometer for high-flux density solar radiation measurements.

    PubMed

    Parretta, A; Antonini, A; Armani, M; Nenna, G; Flaminio, G; Pellegrino, M

    2007-04-20

    A radiometric method has been developed that is suitable for both total power and flux density profile measurements of concentrated solar radiation. The high-flux density radiation is collected by a first optical cavity, integrated, and driven to a second optical cavity, where, after attenuation, it is measured by a conventional radiometer operating under a stationary irradiation regime. The attenuation factor is regulated by properly selecting the aperture areas in the two cavities. The radiometer has been calibrated with a pulsed solar simulator at concentration levels of hundreds of suns. An optical model and a ray-tracing study have also been developed and validated, with which the capabilities of the radiometer have been explored in detail.

  15. [Studies on measurement of oral mucosal color with non-contact spectrum colorimeter].

    PubMed

    Ohata, Yohei

    2006-03-01

    Color inspection plays an important role in the diagnosis of oral mucosal lesions. However, it is sometimes difficult to diagnose by color because color is always evaluated subjectively. In order to measure color objectively and quantitatively, we decided to use a newly developed spectrum colorimeter for the oral mucosa. To keep the angle and distance constant, a special stick was utilized. Various experiments were performed and suitable conditions for accurate colorimetric measurement were determined, including room temperature with a cooling fan, the onset time of the device, calibration timing, and the angle between the light and the measured surface. The reproducibility of this method was confirmed by measuring the color of the buccal mucosa in healthy persons.

  16. Long-term hydrological simulation based on the Soil Conservation Service curve number

    NASA Astrophysics Data System (ADS)

    Mishra, Surendra Kumar; Singh, Vijay P.

    2004-05-01

    Presenting a critical review of daily flow simulation models based on the Soil Conservation Service curve number (SCS-CN), this paper introduces a more versatile model based on the modified SCS-CN method, which specializes into seven cases. The proposed model was applied to the Hemavati watershed (area = 600 km²) in India and was found to yield satisfactory results in both calibration and validation. The model conserved monthly and annual runoff volumes satisfactorily. A sensitivity analysis of the model parameters was performed, including the effect of variation in storm duration. Finally, to investigate the model components, all seven variants of the modified version were tested for their suitability.
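
    For readers unfamiliar with the underlying method, the sketch below shows the textbook SCS-CN rainfall-runoff relation on which such models build; it is not the modified formulation proposed in the paper, and the example values are invented.

    def scs_cn_runoff(P_mm, CN, lam=0.2):
        """Direct surface runoff Q (mm) from storm rainfall P (mm) using the SCS-CN method.

        S  = 25400/CN - 254               potential maximum retention (mm)
        Ia = lam * S                      initial abstraction
        Q  = (P - Ia)^2 / (P - Ia + S)    for P > Ia, otherwise 0
        """
        S = 25400.0 / CN - 254.0
        Ia = lam * S
        if P_mm <= Ia:
            return 0.0
        return (P_mm - Ia) ** 2 / (P_mm - Ia + S)

    print(scs_cn_runoff(P_mm=60.0, CN=75))   # runoff depth (mm) for a 60 mm storm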

  17. Analytical study of electrophoretic characterization of kidney cells. [conducted during the Apollo Soyuz Test Project

    NASA Technical Reports Server (NTRS)

    Knox, R. J.

    1978-01-01

    Embryonic kidney cells were studied as a follow-up to the MA-011 Electrophoresis Technology Experiment which was conducted during the Apollo Soyuz Test Project (ASTP). The postflight analysis of the performance of the ASTP zone electrophoresis experiment involving embryonic kidney cells is reported. The feasibility of producing standard particles for electrophoresis was also studied. This work was undertaken in response to a need for standardization of methods for producing, calibrating, and storing electrophoretic particle standards which could be employed in performance tests of various types of electrophoresis equipment. Promising procedures were tested for their suitability in the production of standard test particles from red blood cells.

  18. High Impedance Comparator for Monitoring Water Resistivity.

    ERIC Educational Resources Information Center

    Holewinski, Paul K.

    1984-01-01

    A high-impedance comparator suitable for monitoring the resistivity of a deionized or distilled water line supplying water in the 50 kΩ·cm to 2 MΩ·cm range is described. Includes information on required circuits (with diagrams), sensor probe assembly, and calibration techniques. (JN)

  19. 21 CFR 1271.200 - Equipment.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... prevent the introduction, transmission, or spread of communicable diseases, equipment used in the manufacture of HCT/Ps must be of appropriate design for its use and must be suitably located and installed to... result in the introduction, transmission, or spread of communicable diseases. (c) Calibration of...

  20. 21 CFR 1271.200 - Equipment.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... prevent the introduction, transmission, or spread of communicable diseases, equipment used in the manufacture of HCT/Ps must be of appropriate design for its use and must be suitably located and installed to... result in the introduction, transmission, or spread of communicable diseases. (c) Calibration of...

  1. 21 CFR 1271.200 - Equipment.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... prevent the introduction, transmission, or spread of communicable diseases, equipment used in the manufacture of HCT/Ps must be of appropriate design for its use and must be suitably located and installed to... result in the introduction, transmission, or spread of communicable diseases. (c) Calibration of...

  2. 21 CFR 1271.200 - Equipment.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... prevent the introduction, transmission, or spread of communicable diseases, equipment used in the manufacture of HCT/Ps must be of appropriate design for its use and must be suitably located and installed to... result in the introduction, transmission, or spread of communicable diseases. (c) Calibration of...

  3. A storm-based CSLE incorporating the modified SCS-CN method for soil loss prediction on the Chinese Loess Plateau

    NASA Astrophysics Data System (ADS)

    Shi, Wenhai; Huang, Mingbin

    2017-04-01

    The Chinese Loess Plateau is one of the most erodible areas in the world. In order to reduce soil and water losses, suitable conservation practices need to be designed. For this purpose, there is an increasing demand for an appropriate model that can accurately predict storm-based surface runoff and soil losses on the Loess Plateau. The Chinese Soil Loss Equation (CSLE) has been widely used in this region to assess soil losses from different land use types. However, the CSLE was intended only to predict the mean annual gross soil loss. In this study, a CSLE was proposed that would be storm-based and that introduced a new rainfall-runoff erosivity factor. A dataset was compiled that comprised measurements of soil losses during individual storms from three runoff-erosion plots in each of three different watersheds in the gully region of the Plateau for 3-7 years in three different time periods (1956-1959; 1973-1980; 2010-2013). The accuracy of the soil loss predictions made by the new storm-based CSLE was determined using the data for the six plots in two of the watersheds measured during 165 storm-runoff events. The performance of the storm-based CSLE was further compared with the performance of the storm-based Revised Universal Soil Loss Equation (RUSLE) for the same six plots. During the calibration (83 storms) and validation (82 storms) of the storm-based CSLE, the model efficiency, E, was 87.7% and 88.9%, respectively, while the root mean square error (RMSE) was 2.7 and 2.3 t ha⁻¹, indicating a high degree of accuracy. Furthermore, the storm-based CSLE performed better than the storm-based RUSLE (E: 75.8% and 70.3%; RMSE: 3.8 and 3.7 t ha⁻¹, for the calibration and validation storms, respectively). The storm-based CSLE was then used to predict the soil losses from the three experimental plots in the third watershed. For these predictions, the model parameter values, previously determined by the calibration based on the data from the initial six plots, were used in the storm-based CSLE. In addition, the surface runoff used by the storm-based CSLE was either obtained from measurements or from the values predicted by the modified Soil Conservation Service Curve Number (SCS-CN) method. When using the measured runoff, the storm-based CSLE had an E of 76.6%, whereas the use of the predicted runoff gave an E of 76.4%. The high E values indicated that the storm-based CSLE incorporating the modified SCS-CN method could accurately predict storm-event-based soil losses resulting from both sheet and rill erosion at the field scale on the Chinese Loess Plateau. This approach could be applicable to other areas of the world once the model parameters have been suitably calibrated.
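
    The abstract reports a model efficiency (E) and an RMSE for calibration and validation; a minimal sketch of how these goodness-of-fit statistics are commonly computed (assuming the Nash-Sutcliffe form for E) is given below, with invented example values.

    import numpy as np

    def nash_sutcliffe(observed, simulated):
        """Nash-Sutcliffe model efficiency E (1.0 = perfect agreement)."""
        observed, simulated = np.asarray(observed), np.asarray(simulated)
        return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

    def rmse(observed, simulated):
        """Root mean square error in the same units as the data (e.g. t/ha)."""
        observed, simulated = np.asarray(observed), np.asarray(simulated)
        return np.sqrt(np.mean((observed - simulated) ** 2))

    obs = [1.2, 4.5, 0.8, 10.3, 2.2]   # hypothetical storm soil losses, t/ha
    sim = [1.0, 5.1, 0.9,  9.2, 2.6]
    print(f"E = {100 * nash_sutcliffe(obs, sim):.1f} %, RMSE = {rmse(obs, sim):.2f} t/ha")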

  4. Finding trap stiffness of optical tweezers using digital filters.

    PubMed

    Almendarez-Rangel, Pedro; Morales-Cruzado, Beatriz; Sarmiento-Gómez, Erick; Pérez-Gutiérrez, Francisco G

    2018-02-01

    Obtaining trap stiffness and calibration of the position detection system is the basis of a force measurement using optical tweezers. Both calibration quantities can be calculated using several experimental methods available in the literature. In most cases, stiffness determination and detection system calibration are performed separately, often requiring procedures in very different conditions, and thus confidence of calibration methods is not assured due to possible changes in the environment. In this work, a new method to simultaneously obtain both the detection system calibration and trap stiffness is presented. The method is based on the calculation of the power spectral density of positions through digital filters to obtain the harmonic contributions of the position signal. This method has the advantage of calculating both trap stiffness and photodetector calibration factor from the same dataset in situ. It also provides a direct method to avoid unwanted frequencies that could greatly affect calibration procedure, such as electric noise, for example.
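
    As context for the power-spectrum route to trap stiffness, the sketch below (assuming numpy and scipy) simulates an overdamped bead in a harmonic trap, estimates its position power spectral density, and fits a Lorentzian to recover the corner frequency and the stiffness. This is the generic textbook procedure, not the specific digital-filter implementation of the paper, and all parameter values are invented.

    import numpy as np
    from scipy.signal import welch
    from scipy.optimize import curve_fit

    # --- simulate an overdamped bead in a harmonic trap (Ornstein-Uhlenbeck) for the demo ---
    fs = 20000.0                                      # sampling rate, Hz
    kT, gamma, kappa_true = 4.11e-21, 6e-9, 1e-5      # J, N s/m, N/m
    dt, n = 1.0 / fs, 100000
    rng = np.random.default_rng(0)
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = x[i-1] - (kappa_true / gamma) * x[i-1] * dt \
               + np.sqrt(2 * kT / gamma * dt) * rng.standard_normal()

    # --- power spectral density and Lorentzian fit S(f) = D / (2 pi^2 (fc^2 + f^2)) ---
    f, Pxx = welch(x, fs=fs, nperseg=4096)
    mask = (f > 5) & (f < 5000)

    def lorentzian(f, D, fc):
        return D / (2 * np.pi**2 * (fc**2 + f**2))

    (D, fc), _ = curve_fit(lorentzian, f[mask], Pxx[mask], p0=[1e-12, 100.0])
    kappa = 2 * np.pi * gamma * fc        # trap stiffness from the corner frequency
    print(f"corner frequency = {fc:.1f} Hz, stiffness = {kappa:.2e} N/m (true {kappa_true:.2e})")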

  5. Improvement of Gaofen-3 Absolute Positioning Accuracy Based on Cross-Calibration

    PubMed Central

    Deng, Mingjun; Li, Jiansong

    2017-01-01

    The Chinese Gaofen-3 (GF-3) mission was launched in August 2016, equipped with a full polarimetric synthetic aperture radar (SAR) sensor in the C-band, with a resolution of up to 1 m. The absolute positioning accuracy of GF-3 is of great importance, and in-orbit geometric calibration is a key technology for improving absolute positioning accuracy. Conventional geometric calibration is used to accurately calibrate the geometric calibration parameters of the image (internal delay and azimuth shifts) using high-precision ground control data, which are highly dependent on the control data of the calibration field, but it remains costly and labor-intensive to monitor changes in GF-3’s geometric calibration parameters. Based on the positioning consistency constraint of the conjugate points, this study presents a geometric cross-calibration method for the rapid and accurate calibration of GF-3. The proposed method can accurately calibrate geometric calibration parameters without using corner reflectors and high-precision digital elevation models, thus improving absolute positioning accuracy of the GF-3 image. GF-3 images from multiple regions were collected to verify the absolute positioning accuracy after cross-calibration. The results show that this method can achieve a calibration accuracy as high as that achieved by the conventional field calibration method. PMID:29240675

  6. Characterization of the Sonoran desert as a radiometric calibration target for Earth observing sensors

    USGS Publications Warehouse

    Angal, Amit; Chander, Gyanesh; Xiong, Xiaoxiong; Choi, Tae-young; Wu, Aisheng

    2011-01-01

    To provide highly accurate quantitative measurements of the Earth's surface, a comprehensive calibration and validation of the satellite sensors is required. The NASA Moderate Resolution Imaging Spectroradiometer (MODIS) Characterization Support Team, in collaboration with United States Geological Survey, Earth Resources Observation and Science Center, has previously demonstrated the use of African desert sites to monitor the long-term calibration stability of Terra MODIS and Landsat 7 (L7) Enhanced Thematic Mapper plus (ETM+). The current study focuses on evaluating the suitability of the Sonoran Desert test site for post-launch long-term radiometric calibration as well as cross-calibration purposes. Due to the lack of historical and on-going in situ ground measurements, the Sonoran Desert is not usually used for absolute calibration. An in-depth evaluation (spatial, temporal, and spectral stability) of this site using well calibrated L7 ETM+ measurements and local climatology data has been performed. The Sonoran Desert site produced spatial variability of about 3 to 5% in the reflective solar regions, and the temporal variations of the site after correction for view-geometry impacts were generally around 3%. The results demonstrate that, barring the impacts due to occasional precipitation, the Sonoran Desert site can be effectively used for cross-calibration and long-term stability monitoring of satellite sensors, thus, providing a good test site in the western hemisphere.

  7. Radiometric calibration of the Earth observing system's imaging sensors

    NASA Technical Reports Server (NTRS)

    Slater, P. N.

    1987-01-01

    Philosophy, requirements, and methods of calibration of multispectral space sensor systems as applicable to the Earth Observing System (EOS) are discussed. Vicarious methods for calibration of low spatial resolution systems, with respect to the Advanced Very High Resolution Radiometer (AVHRR), are then summarized. Finally, a theoretical introduction is given to a new vicarious method of calibration using the ratio of diffuse-to-global irradiance at the Earth's surfaces as the key input. This may provide an additional independent method for in-flight calibration.

  8. Configurations and calibration methods for passive sampling techniques.

    PubMed

    Ouyang, Gangfeng; Pawliszyn, Janusz

    2007-10-19

    Passive sampling technology has developed very quickly in the past 15 years, and is widely used for the monitoring of pollutants in different environments. The design and quantification of passive sampling devices require an appropriate calibration method. Current calibration methods that exist for passive sampling, including equilibrium extraction, linear uptake, and kinetic calibration, are presented in this review. A number of state-of-the-art passive sampling devices that can be used for aqueous and air monitoring are introduced according to their calibration methods.
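
    A minimal sketch of the kinetic (first-order uptake) calibration model mentioned in the review, written with generic symbols; the parameter values are invented and no specific sampler geometry is modelled.

    import numpy as np

    def sampler_uptake(C_env, k_e, K_sw_Vs, t):
        """Mass accumulated by a passive sampler under first-order uptake kinetics.

        M(t) = C_env * K_sw_Vs * (1 - exp(-k_e * t))
        C_env   : ambient concentration (e.g. ng/L)
        K_sw_Vs : sampler/water partition coefficient times sampler volume (L)
        k_e     : exchange rate constant (1/day)
        """
        return C_env * K_sw_Vs * (1.0 - np.exp(-k_e * t))

    def ambient_concentration(M_t, k_e, K_sw_Vs, t):
        """Invert the uptake model to estimate the ambient concentration from sampler mass."""
        return M_t / (K_sw_Vs * (1.0 - np.exp(-k_e * t)))

    # Linear-uptake regime (k_e * t << 1) versus near-equilibrium after long exposure
    for t in (1.0, 7.0, 60.0):   # days
        M = sampler_uptake(C_env=2.0, k_e=0.05, K_sw_Vs=3.0, t=t)
        print(f"t = {t:5.1f} d  accumulated mass = {M:.3f} ng")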

  9. Application of composite small calibration objects in traffic accident scene photogrammetry.

    PubMed

    Chen, Qiang; Xu, Hongguo; Tan, Lidong

    2015-01-01

    In order to address the difficulty of arranging large calibration objects and the low measurement accuracy of small calibration objects in traffic accident scene photogrammetry, a photogrammetric method based on a composite of small calibration objects is proposed. Several small calibration objects are placed around the traffic accident scene, and the coordinate system of the composite calibration object is given based on one of them. By maintaining the relative position and coplanar relationship of the small calibration objects, the local coordinate system of each small calibration object is transformed into the coordinate system of the composite calibration object. The two-dimensional direct linear transformation method is improved based on minimizing the reprojection error of the calibration points of all objects. A rectified image is obtained using the nonlinear optimization method. The increased accuracy of traffic accident scene photogrammetry using a composite small calibration object is demonstrated through the analysis of field experiments and case studies.
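
    For orientation, the sketch below shows a basic two-dimensional direct linear transformation (a homography estimated from point correspondences via SVD) together with a reprojection-error check. This is the standard single-object formulation, not the improved composite-object method of the paper, and the coordinates are invented.

    import numpy as np

    def dlt_homography(world_xy, image_xy):
        """Estimate the 3x3 homography H mapping planar world points to image points (basic 2D DLT)."""
        A = []
        for (X, Y), (u, v) in zip(world_xy, image_xy):
            A.append([-X, -Y, -1,  0,  0,  0, u * X, u * Y, u])
            A.append([ 0,  0,  0, -X, -Y, -1, v * X, v * Y, v])
        _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
        H = Vt[-1].reshape(3, 3)
        return H / H[2, 2]

    def reprojection_error(H, world_xy, image_xy):
        """Mean distance (pixels) between measured image points and projected world points."""
        pts = np.hstack([np.asarray(world_xy, float), np.ones((len(world_xy), 1))])
        proj = (H @ pts.T).T
        proj = proj[:, :2] / proj[:, 2:3]
        return np.mean(np.linalg.norm(proj - np.asarray(image_xy, float), axis=1))

    # Hypothetical calibration-object corners (metres) and their measured pixel coordinates
    world = [(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)]
    image = [(102, 98), (410, 120), (395, 430), (95, 415), (250, 265)]
    H = dlt_homography(world, image)
    print("reprojection error:", reprojection_error(H, world, image), "px")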

  10. LEAP: An Innovative Direction Dependent Ionospheric Calibration Scheme for Low Frequency Arrays

    NASA Astrophysics Data System (ADS)

    Rioja, María J.; Dodson, Richard; Franzen, Thomas M. O.

    2018-05-01

    The ambitious scientific goals of the SKA require a matching capability for calibration of atmospheric propagation errors, which contaminate the observed signals. We demonstrate a scheme for correcting the direction-dependent ionospheric and instrumental phase effects at the low frequencies and with the wide fields of view planned for SKA-Low. It leverages bandwidth smearing to filter out signals from off-axis directions, allowing the measurement of the direction-dependent antenna-based gains in the visibility domain; by doing this towards multiple directions it is possible to calibrate across wide fields of view. This strategy removes the need for a global sky model; therefore, all directions are independent. We use MWA results at 88 and 154 MHz under various weather conditions to characterise the performance and applicability of the technique. We conclude that this method is suitable to measure and correct for temporal fluctuations and direction-dependent spatial ionospheric phase distortions on a wide range of scales: both larger and smaller than the array size. The latter are the most intractable and pose a major challenge for future instruments. Moreover, this scheme is an embarrassingly parallel process, as multiple directions can be processed independently and simultaneously. This is an important consideration for the SKA, where the current planned architecture is one of compute-islands with limited interconnects. The current implementation of the algorithm and on-going developments are discussed.

  11. Parameter Calibration of GTN Damage Model and Formability Analysis of 22MnB5 in Hot Forming Process

    NASA Astrophysics Data System (ADS)

    Ying, Liang; Liu, Wenquan; Wang, Dantong; Hu, Ping

    2017-11-01

    Hot forming of high strength steel at elevated temperatures is an attractive technology for achieving lightweighting of the vehicle body. The mechanical behavior of boron steel 22MnB5 strongly depends on temperature, which makes the process design more difficult. In this paper, the Gurson-Tvergaard-Needleman (GTN) model is used to study the formability of 22MnB5 sheet at different temperatures. Firstly, the rheological behavior of 22MnB5 is analyzed through a series of hot tensile tests over a temperature range of 600-800 °C. Then, a detailed procedure to calibrate the damage parameters is given, based on the response surface methodology and a genetic algorithm. The GTN model, together with the calibrated damage parameters, is then implemented to simulate the deformation and damage evolution of 22MnB5 in a high-temperature Nakazima test. The capability of the GTN model as a suitable tool to evaluate sheet formability is confirmed by comparing experimental and calculated results. Finally, as a practical application, the forming limit diagram of 22MnB5 at 700 °C is constructed using the Nakazima simulation and the Marciniak-Kuczynski (M-K) model, respectively. The simulation integrating the GTN model shows higher reliability when the predicted results of these two approaches are compared with the experimental ones.

  12. Signal inference with unknown response: calibration-uncertainty renormalized estimator.

    PubMed

    Dorn, Sebastian; Enßlin, Torsten A; Greiner, Maksim; Selig, Marco; Boehm, Vanessa

    2015-01-01

    The calibration of a measurement device is crucial for every scientific experiment, where a signal has to be inferred from data. We present CURE, the calibration-uncertainty renormalized estimator, to reconstruct a signal and simultaneously the instrument's calibration from the same data without knowing the exact calibration, but its covariance structure. The idea of the CURE method, developed in the framework of information field theory, is to start with an assumed calibration to successively include more and more portions of calibration uncertainty into the signal inference equations and to absorb the resulting corrections into renormalized signal (and calibration) solutions. Thereby, the signal inference and calibration problem turns into a problem of solving a single system of ordinary differential equations and can be identified with common resummation techniques used in field theories. We verify the CURE method by applying it to a simplistic toy example and compare it against existent self-calibration schemes, Wiener filter solutions, and Markov chain Monte Carlo sampling. We conclude that the method is able to keep up in accuracy with the best self-calibration methods and serves as a noniterative alternative to them.

  13. Quantitative aspects of microchip isotachophoresis for high precision determination of main components in pharmaceuticals.

    PubMed

    Hradski, Jasna; Chorváthová, Mária Drusková; Bodor, Róbert; Sabo, Martin; Matejčík, Štefan; Masár, Marián

    2016-12-01

    Although microchip electrophoresis (MCE) is intended to provide reliable quantitative data, so far only limited attention has been paid to these important aspects. This study gives a general overview of the key aspects to be followed to achieve high-precision determinations using isotachophoresis (ITP) on a microchip with conductivity detection. From the application point of view, a procedure for the determination of acetate, a main component in the pharmaceutical preparation buserelin acetate, was developed. Our results document that run-to-run fluctuations in the sample injection volume limit the reproducibility of quantitation based on external calibration. The use of a suitable internal standard (succinate in this study) improved the precision of the acetate determination six- to eight-fold. The robustness of the procedure was studied in terms of the impact of fluctuations in various experimental parameters (driving current, concentration of the leading ions, pH of the leading electrolyte and buffer impurities) on the precision of the ITP determination. The use of computer simulation programs provided a means to assess the ITP experiments using well-defined theoretical models. The long-term validity of the calibration curves on two microchips and two MCE instruments was verified. This favors ITP over other microchip electrophoresis techniques when chip-to-chip or instrument-to-instrument transfer of the analytical method is required. The recovery values in the range of 98-101 % indicate very accurate determination of acetate in buserelin acetate, which is used in the treatment of hormone-dependent tumors. This study showed that microchip ITP is suitable for reliable determination of main components in pharmaceutical preparations.

  14. Surface Renewal: Micrometeorological Measurements Avoiding the Sonic Anemometer

    NASA Astrophysics Data System (ADS)

    Suvocarev, K.; Reba, M. L.; Runkle, B.

    2016-12-01

    Surface renewal (SR) is a micrometeorological technique that has been suggested as an inexpensive alternative to eddy covariance (EC). While it was originally dependent on a calibration coefficient (α), a recent approach by Castellví (2004) showed that SR can be used as a stand-alone method where α is estimated using similarity theory. This "self-calibration" method is suitable for measuring different scalar fluxes under all stability conditions (Castellví et al., 2008). According to the same authors, SR does not demand a sonic anemometer, as only the horizontal wind speed is necessary to arrive at α values. Therefore, it is more affordable and applicable in both the roughness and inertial sub-layers, which makes this method less stringent with respect to fetch requirements (Castellví, 2012). The SR method has not yet been tested when the equipment is reduced to scalar measurements and a simple anemometer (RM Young 5103 Wind Monitor Sensor). Here, our objective was to test this approach on temperature, H2O, CO2 and CH4 time series. When EC is taken as the reference for comparison, our initial results show that all fluxes measured by SR are higher than the corresponding reference fluxes. The degree of overestimation is in the range of typical values reported in the SR literature. Still, more research is needed to improve understanding of this bias, as the correlation between the flux measurements is very high. The SR method seems promising for avoiding the use of sonic anemometry (and the related errors) while maintaining fewer fetch requirements and the possibility of yielding observations from all wind directions.

  15. Structured light system calibration method with optimal fringe angle.

    PubMed

    Li, Beiwen; Zhang, Song

    2014-11-20

    For structured light system calibration, one popular approach is to treat the projector as an inverse camera. This is usually performed by projecting horizontal and vertical sequences of patterns to establish one-to-one mapping between camera points and projector points. However, for a well-designed system, either horizontal or vertical fringe images are not sensitive to depth variation and thus yield inaccurate mapping. As a result, the calibration accuracy is jeopardized if a conventional calibration method is used. To address this limitation, this paper proposes a novel calibration method based on optimal fringe angle determination. Experiments demonstrate that our calibration approach can increase the measurement accuracy up to 38% compared to the conventional calibration method with a calibration volume of 300 mm (H) × 250 mm (W) × 500 mm (D).

  16. Cross-Calibration of Secondary Electron Multiplier in Noble Gas Analysis

    NASA Astrophysics Data System (ADS)

    Santato, Alessandro; Hamilton, Doug; Deerberg, Michael; Wijbrans, Jan; Kuiper, Klaudia; Bouman, Claudia

    2015-04-01

    The latest generation of multi-collector noble gas mass spectrometers has decisively improved the precision in isotopic ratio analysis [1, 2] and helped the scientific community to address new questions [3]. Measuring numerous isotopes simultaneously has two significant advantages: firstly, any fluctuations in signal intensity have no effect on the isotope ratio; secondly, the analysis time is reduced. This particular point becomes very important in static vacuum mass spectrometry where, during the analysis, the signal intensity decays and at the same time the background increases. However, when multi-collector analysis is utilized, it is necessary to pay special attention to the cross calibration of the detectors. This is a key point in order to have accurate and reproducible isotopic ratios. In isotope ratio mass spectrometry, with regard to the type of detector (i.e. Faraday or Secondary Electron Multiplier, SEM), analytical technique (TIMS, MC-ICP-MS or IRMS) and isotope system of interest, several techniques are currently applied to cross-calibrate the detectors. Specifically, the gain of the Faraday cups is generally stable and only the associated amplifier must be calibrated. For example, on the Thermo Scientific instrument control systems, the 10¹¹ and 10¹² ohm amplifiers can easily be calibrated through a fully software controlled procedure by inputting a constant electric signal to each amplifier sequentially [4]. On the other hand, the yield of the SEMs can drift up to 0.2%/hour and other techniques such as peak hopping, standard-sample bracketing and multi-dynamic measurement must be used. Peak hopping allows the detectors to be calibrated by measuring an ion beam of constant intensity across the detectors, whereas standard-sample bracketing corrects the drift of the detectors through the analysis of a reference standard of a known isotopic ratio. If at least one isotopic pair of the sample is known, multi-dynamic measurement can be used; in this case the known isotopic ratio is measured on different pairs of detectors and the true value of the isotopic ratio of interest can be determined by a specific equation. In noble gas analysis, due to the decay of the ion beam during the measurement as well as the special isotopic systematics of the gases themselves, the cross-calibration of the SEM using these techniques becomes more complex and other methods should be investigated. In this work we present a comparison between different approaches to cross-calibrate multiple SEMs in noble gas analysis in order to evaluate the most suitable and reliable method. References: [1] Mark et al. (2009) Geochem. Geophys. Geosyst. 10, 1-9. [2] Mark et al. (2011) Geochim. Cosmochim. 75, 7494-7501. [3] Phillips and Matchan (2013) Geochimica et Cosmochimica Acta 121, 229-239. [4] Koornneef et al. (2014) Journal of Analytical Atomic Spectrometry 28, 749-754.
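
    A toy illustration of the peak-hopping idea discussed above: the same ion beam measured on each detector yields relative gain (yield) factors, which are then used to correct a measured isotope ratio. The counts and detector names are invented, and the beam is assumed constant (in practice a decay or drift correction would be applied first).

    import numpy as np

    # Hypothetical peak-hopping measurement: the same ion beam cycled across three SEMs
    counts = {"SEM1": 100000.0, "SEM2": 98200.0, "SEM3": 101500.0}

    reference = "SEM1"
    gain = {det: c / counts[reference] for det, c in counts.items()}   # relative yields

    # Correct a multi-collector isotope ratio measured as SEM2 / SEM3
    raw_ratio = 0.2954
    corrected_ratio = raw_ratio * (gain["SEM3"] / gain["SEM2"])
    print(f"relative gains: {gain}")
    print(f"corrected ratio: {corrected_ratio:.4f}")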

  17. Potential of satellite-derived ecosystem functional attributes to anticipate species range shifts

    NASA Astrophysics Data System (ADS)

    Alcaraz-Segura, Domingo; Lomba, Angela; Sousa-Silva, Rita; Nieto-Lugilde, Diego; Alves, Paulo; Georges, Damien; Vicente, Joana R.; Honrado, João P.

    2017-05-01

    In a world facing rapid environmental changes, anticipating their impacts on biodiversity is of utmost relevance. Remotely-sensed Ecosystem Functional Attributes (EFAs) are promising predictors for Species Distribution Models (SDMs) by offering an early and integrative response of vegetation performance to environmental drivers. Species of high conservation concern would benefit the most from a better ability to anticipate changes in habitat suitability. Here we illustrate how yearly projections from SDMs based on EFAs could reveal short-term changes in potential habitat suitability, anticipating mid-term shifts predicted by climate-change-scenario models. We fitted two sets of SDMs for 41 plant species of conservation concern in the Iberian Peninsula: one calibrated with climate variables for baseline conditions and projected under two climate-change-scenarios (future conditions); and the other calibrated with EFAs for 2001 and projected annually from 2001 to 2013. Range shifts predicted by climate-based models for future conditions were compared to the 2001-2013 trends from EFAs-based models. Projections of EFAs-based models estimated changes (mostly contractions) in habitat suitability that anticipated, for the majority (up to 64%) of species, the mid-term shifts projected by traditional climate-change-scenario forecasting, and showed greater agreement with the business-as-usual scenario than with the sustainable-development one. This study shows how satellite-derived EFAs can be used as meaningful essential biodiversity variables in SDMs to provide early-warnings of range shifts and predictions of short-term fluctuations in suitable conditions for multiple species.

  18. Comparing electronic probes for volumetric water content of low-density feathermoss

    USGS Publications Warehouse

    Overduin, P.P.; Yoshikawa, K.; Kane, D.L.; Harden, J.W.

    2005-01-01

    Purpose - Feathermoss is ubiquitous in the boreal forest and across various land-cover types of the arctic and subarctic. A variety of affordable commercial sensors for soil moisture content measurement have recently become available and are in use in such regions, often in conjunction with fire-susceptibility or ecological studies. Few come supplied with calibrations suitable or suggested for soils high in organics. This study aims to test seven of these sensors for use in feathermoss, seeking calibrations between sensor output and volumetric water content. Design/methodology/approach - Measurements from seven sensors installed in live, dead and burned feathermoss samples, drying in a controlled manner, were compared to moisture content measurements. Empirical calibrations of sensor output to water content were determined. Findings - Almost all of the sensors tested were suitable for measuring the moss sample water content, and a unique calibration for each sensor for this material is presented. Differences in sensor design lead to changes in sensitivity as a function of volumetric water content, affecting the spatial averaging over the soil measurement volume. Research limitations/implications - The wide range of electromagnetic sensors available includes frequency and time domain designs with variations in wave guide and sensor geometry, the location of sensor electronics and operating frequency. Practical implications - This study provides information for extending the use of electromagnetic sensors to feathermoss. Originality/value - A comparison of volumetric water content sensor mechanics and design is of general interest to researchers measuring soil water content. In particular, researchers working in wetlands, boreal forests and tundra regions will be able to apply these results. © Emerald Group Publishing Limited.

  19. Simultaneous calibration phantom commission and geometry calibration in cone beam CT

    NASA Astrophysics Data System (ADS)

    Xu, Yuan; Yang, Shuai; Ma, Jianhui; Li, Bin; Wu, Shuyu; Qi, Hongliang; Zhou, Linghong

    2017-09-01

    Geometry calibration is a vital step for describing the geometry of a cone beam computed tomography (CBCT) system and is a prerequisite for CBCT reconstruction. In current methods, calibration phantom commission and geometry calibration are divided into two independent tasks. Small errors in ball-bearing (BB) positioning in the phantom-making step will severely degrade the quality of phantom calibration. To solve this problem, we propose an integrated method to simultaneously realize geometry phantom commission and geometry calibration. Instead of assuming the accuracy of the geometry phantom, the integrated method considers BB centers in the phantom as an optimized parameter in the workflow. Specifically, an evaluation phantom and the corresponding evaluation contrast index are used to evaluate geometry artifacts for optimizing the BB coordinates in the geometry phantom. After utilizing particle swarm optimization, the CBCT geometry and BB coordinates in the geometry phantom are calibrated accurately and are then directly used for the next geometry calibration task in other CBCT systems. To evaluate the proposed method, both qualitative and quantitative studies were performed on simulated and realistic CBCT data. The spatial resolution of reconstructed images using dental CBCT can reach up to 15 line pairs cm⁻¹. The proposed method is also superior to the Wiesent method in experiments. This paper shows that the proposed method is attractive for simultaneous and accurate geometry phantom commission and geometry calibration.
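
    As a generic illustration of the optimization step, the sketch below implements a minimal particle swarm optimizer of the kind that could tune BB coordinates by minimizing an artifact-based cost. The cost function here is only a stand-in for the evaluation contrast index used in the paper.

    import numpy as np

    def pso_minimize(cost, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
        """Minimal particle swarm optimizer: returns the best parameter vector and its cost."""
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds, float).T
        dim = lo.size
        x = rng.uniform(lo, hi, size=(n_particles, dim))          # positions
        v = np.zeros_like(x)                                      # velocities
        pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
        gbest = pbest[np.argmin(pbest_cost)].copy()
        for _ in range(n_iter):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            costs = np.array([cost(p) for p in x])
            improved = costs < pbest_cost
            pbest[improved], pbest_cost[improved] = x[improved], costs[improved]
            gbest = pbest[np.argmin(pbest_cost)].copy()
        return gbest, pbest_cost.min()

    # Stand-in cost: in the real problem this would re-evaluate the geometry-artifact index
    # of the reconstructed evaluation phantom for each candidate set of BB coordinates.
    target = np.array([1.2, -0.4, 3.0])
    cost = lambda p: float(np.sum((p - target) ** 2))
    best, best_cost = pso_minimize(cost, bounds=[(-5, 5)] * 3)
    print(best, best_cost)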

  20. Development and calibration of a load sensing cervical distractor capable of withstanding autoclave sterilization.

    PubMed

    Demetropoulos, C K; Truumees, E; Herkowitz, H N; Yang, K H

    2005-05-01

    In surgery of the cervical spine, a Caspar pin distractor is often used to apply a tensile load to the spine in order to open up the disc space. This is often done in order to place a graft or other interbody fusion device in the spine. Ideally a tight interference fit is achieved. If the spine is over distracted, allowing for a large graft, there is an increased risk of subsidence into the endplate. If there is too little distraction, there is an increased risk of graft dislodgement or pseudoarthrosis. Generally, graft height is selected from preoperative measurements and observed distraction without knowing the intraoperative compressive load. This device was designed to give the surgeon an assessment of this applied load. Instrumentation of the device involved the application of strain gauges and the selection of materials that would survive standard autoclave sterilization. The device was calibrated, sterilized and once again calibrated to demonstrate its suitability for surgical use. Results demonstrate excellent linearity in the calibration, and no difference was detected in the pre- and post-sterilization calibrations.

  1. Novel Calibration Technique for a Coulometric Evolved Vapor Analyzer for Measuring Water Content of Materials

    NASA Astrophysics Data System (ADS)

    Bell, S. A.; Miao, P.; Carroll, P. A.

    2018-04-01

    Evolved vapor coulometry is a measurement technique that selectively detects water and is used to measure water content of materials. The basis of the measurement is the quantitative electrolysis of evaporated water entrained in a carrier gas stream. Although this measurement has a fundamental principle—based on Faraday's law which directly relates electrolysis current to amount of substance electrolyzed—in practice it requires calibration. Commonly, reference materials of known water content are used, but the variety of these is limited, and they are not always available for suitable values, materials, with SI traceability, or with well-characterized uncertainty. In this paper, we report development of an alternative calibration approach using as a reference the water content of humid gas of defined dew point traceable to the SI via national humidity standards. The increased information available through this new type of calibration reveals a variation of the instrument performance across its range not visible using the conventional approach. The significance of this is discussed along with details of the calibration technique, example results, and an uncertainty evaluation.
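
    The Faraday-law relation underlying the coulometric detection can be written out directly; the short sketch below converts an integrated electrolysis charge into a mass of water, assuming the standard value of z = 2 electrons per water molecule.

    # Faraday's law for the electrolysis of water: each H2O molecule requires 2 electrons,
    # so  m = (Q / (z * F)) * M   with z = 2 and M = 18.015 g/mol.
    F = 96485.332          # C/mol, Faraday constant
    M_WATER = 18.015       # g/mol
    Z = 2                  # electrons per water molecule

    def water_mass_from_charge(charge_coulomb):
        """Mass of water (micrograms) electrolyzed for a given integrated charge."""
        return charge_coulomb / (Z * F) * M_WATER * 1e6

    # Example: 1 C of integrated electrolysis current corresponds to roughly 93 ug of water
    print(f"{water_mass_from_charge(1.0):.1f} ug")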

  2. Simplified method for creating a density-absorbed dose calibration curve for the low dose range from Gafchromic EBT3 film.

    PubMed

    Gotanda, Tatsuhiro; Katsuda, Toshizo; Gotanda, Rumi; Kuwano, Tadao; Akagawa, Takuya; Tanki, Nobuyoshi; Tabuchi, Akihiko; Shimono, Tetsunori; Kawaji, Yasuyuki

    2016-01-01

    Radiochromic film dosimeters have a disadvantage in comparison with an ionization chamber in that the dosimetry process is time-consuming for creating a density-absorbed dose calibration curve. The purpose of this study was the development of a simplified method of creating a density-absorbed dose calibration curve from radiochromic film within a short time. This simplified method was performed using Gafchromic EBT3 film with a low energy dependence and step-shaped Al filter. The simplified method was compared with the standard method. The density-absorbed dose calibration curves created using the simplified and standard methods exhibited approximately similar straight lines, and the gradients of the density-absorbed dose calibration curves were -32.336 and -33.746, respectively. The simplified method can obtain calibration curves within a much shorter time compared to the standard method. It is considered that the simplified method for EBT3 film offers a more time-efficient means of determining the density-absorbed dose calibration curve within a low absorbed dose range such as the diagnostic range.
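
    A minimal sketch of the calibration-curve step common to both methods: a straight line is fitted to film response versus known absorbed dose and then inverted for dosimetry. The numbers are invented and do not reproduce the gradients reported in the paper; the sign convention simply follows the negative gradients quoted there.

    import numpy as np

    # Hypothetical calibration points: absorbed dose (mGy) vs net film response (arbitrary units)
    dose     = np.array([0.0, 2.0, 5.0, 10.0, 20.0])
    response = np.array([0.0, -65.0, -165.0, -335.0, -670.0])

    gradient, offset = np.polyfit(dose, response, 1)     # response = gradient * dose + offset

    def dose_from_response(r):
        """Invert the linear response-dose calibration curve."""
        return (r - offset) / gradient

    print(f"gradient = {gradient:.3f} per mGy")
    print(f"response -250 -> dose = {dose_from_response(-250.0):.2f} mGy")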

  3. Simplified method for creating a density-absorbed dose calibration curve for the low dose range from Gafchromic EBT3 film

    PubMed Central

    Gotanda, Tatsuhiro; Katsuda, Toshizo; Gotanda, Rumi; Kuwano, Tadao; Akagawa, Takuya; Tanki, Nobuyoshi; Tabuchi, Akihiko; Shimono, Tetsunori; Kawaji, Yasuyuki

    2016-01-01

    Radiochromic film dosimeters have a disadvantage in comparison with an ionization chamber in that the dosimetry process is time-consuming for creating a density-absorbed dose calibration curve. The purpose of this study was the development of a simplified method of creating a density-absorbed dose calibration curve from radiochromic film within a short time. This simplified method was performed using Gafchromic EBT3 film with a low energy dependence and step-shaped Al filter. The simplified method was compared with the standard method. The density-absorbed dose calibration curves created using the simplified and standard methods exhibited approximately similar straight lines, and the gradients of the density-absorbed dose calibration curves were −32.336 and −33.746, respectively. The simplified method can obtain calibration curves within a much shorter time compared to the standard method. It is considered that the simplified method for EBT3 film offers a more time-efficient means of determining the density-absorbed dose calibration curve within a low absorbed dose range such as the diagnostic range. PMID:28144120

  4. Neuromusculoskeletal model self-calibration for on-line sequential bayesian moment estimation

    NASA Astrophysics Data System (ADS)

    Bueno, Diana R.; Montano, L.

    2017-04-01

    Objective. Neuromusculoskeletal models involve many subject-specific physiological parameters that need to be adjusted to adequately represent muscle properties. Traditionally, neuromusculoskeletal models have been calibrated with a forward-inverse dynamic optimization which is time-consuming and unfeasible for rehabilitation therapy. Non self-calibration algorithms have been applied to these models. To the best of our knowledge, the algorithm proposed in this work is the first on-line calibration algorithm for muscle models that allows a generic model to be adjusted to different subjects in a few steps. Approach. In this paper we propose a reformulation of the traditional muscle models that is able to sequentially estimate the kinetics (net joint moments), and also its full self-calibration (subject-specific internal parameters of the muscle from a set of arbitrary uncalibrated data), based on the unscented Kalman filter. The nonlinearity of the model as well as its calibration problem have obliged us to adopt the sum of Gaussians filter suitable for nonlinear systems. Main results. This sequential Bayesian self-calibration algorithm achieves a complete muscle model calibration using as input only a dataset of uncalibrated sEMG and kinematics data. The approach is validated experimentally using data from the upper limbs of 21 subjects. Significance. The results show the feasibility of neuromusculoskeletal model self-calibration. This study will contribute to a better understanding of the generalization of muscle models for subject-specific rehabilitation therapies. Moreover, this work is very promising for rehabilitation devices such as electromyography-driven exoskeletons or prostheses.

  5. Computer Generated Hologram System for Wavefront Measurement System Calibration

    NASA Technical Reports Server (NTRS)

    Olczak, Gene

    2011-01-01

    Computer Generated Holograms (CGHs) have been used for some time to calibrate interferometers that require nulling optics. A typical scenario is the testing of aspheric surfaces with an interferometer placed near the paraxial center of curvature. Existing CGH technology suffers from a reduced capacity to calibrate middle and high spatial frequencies. The root cause of this shortcoming is as follows: the CGH is not placed at an image conjugate of the asphere due to limitations imposed by the geometry of the test and the allowable size of the CGH. This innovation provides a calibration system where the imaging properties in calibration can be made comparable to the test configuration. Thus, if the test is designed to have good imaging properties, then middle and high spatial frequency errors in the test system can be well calibrated. The improved imaging properties are provided by a rudimentary auxiliary optic as part of the calibration system. The auxiliary optic is simple to characterize and align to the CGH. Use of the auxiliary optic also reduces the size of the CGH required for calibration and the density of the lines required for the CGH. The resulting CGH is less expensive than the existing technology and has reduced write error and alignment error sensitivities. This CGH system is suitable for any kind of calibration using an interferometer when high spatial resolution is required. It is especially well suited for tests that include segmented optical components or large apertures.

  6. Metrological characterization methods for confocal chromatic line sensors and optical topography sensors

    NASA Astrophysics Data System (ADS)

    Seppä, Jeremias; Niemelä, Karri; Lassila, Antti

    2018-05-01

    The increasing use of chromatic confocal technology for, e.g. fast, in-line optical topography, and measuring thickness, roughness and profiles implies a need for the characterization of various aspects of the sensors. Single-point, line and matrix versions of chromatic confocal technology, encoding depth information into wavelength, have been developed. Of these, line sensors are particularly suitable for in-line process measurement. Metrological characterization and development of practical methods for calibration and checking is needed for new optical methods and devices. Compared to, e.g. tactile methods, optical topography measurement techniques have limitations related to light wavelength and coherence, optical properties of the sample including reflectivity, specularity, roughness and colour, and definition of optical versus mechanical surfaces. In this work, metrological characterization methods for optical line sensors were developed for scale magnification and linearity, sensitivity to sample properties, and dynamic characteristics. An accurate depth scale calibration method using a single prototype groove depth sample was developed for a line sensor and validated with laser-interferometric sample tracking, attaining (sub)micrometre level or better than 0.1% scale accuracy. Furthermore, the effect of different surfaces and materials on the measurement and depth scale was studied, in particular slope angle, specularity and colour. In addition, dynamic performance, noise, lateral scale and resolution were measured using the developed methods. In the case of the LCI1200 sensor used in this study, which has a 11.3 mm  ×  2.8 mm measurement range, the instrument depth scale was found to depend only minimally on sample colour, whereas measuring steeply sloped specular surfaces in the peripheral measurement area, in the worst case, caused a somewhat larger relative sample-dependent change (1%) in scale.

  7. Determination of microbial phenolic acids in human faeces by UPLC-ESI-TQ MS.

    PubMed

    Sánchez-Patán, Fernando; Monagas, María; Moreno-Arribas, M Victoria; Bartolomé, Begoña

    2011-03-23

    The aim of the present work was to develop a reproducible, sensitive, and rapid UPLC-ESI-TQ MS analytical method for determination of microbial phenolic acids and other related compounds in faeces. A total of 47 phenolic compounds including hydroxyphenylpropionic, hydroxyphenylacetic, hydroxycinnamic, hydroxybenzoic, and hydroxymandelic acids and simple phenols were considered. To prepare an optimum pool standard solution, analytes were classified in 5 different groups with different starting concentrations according to their MS response. The developed UPLC method allowed a high resolution of the pool standard solution within an 18 min injection run time. The LOD of phenolic compounds ranged from 0.001 to 0.107 μg/mL and LOQ from 0.003 to 0.233 μg/mL. The method precision met acceptance criteria (<15% RSD) for all analytes, and accuracy was >80%. The method was applied to faecal samples collected before and after the intake of a flavan-3-ol supplement by a healthy volunteer. Both external and internal calibration methods were considered for quantification purposes, using 4-hydroxybenzoic-2,3,4,5-d4 acid as internal standard. For most analytes and samples, the level of microbial phenolic acids did not differ by using one or another calibration method. The results revealed an increase in protocatechuic, syringic, benzoic, p-coumaric, phenylpropionic, 3-hydroxyphenylacetic, and 3-hydroxyphenylpropionic acids, although differences due to the intake were only significant for the latter compound. In conclusion, the UPLC-DAD-ESI-TQ MS method developed is suitable for targeted analysis of microbial-derived phenolic metabolites in faecal samples from human intervention or in vitro fermentation studies, which requires high sensitivity and throughput.
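
    As a simple illustration of the internal-standard quantification mentioned above, the sketch below derives a response factor from a spiked calibration standard and applies it to a sample. Peak areas and concentrations are invented, and only a single-level response factor is shown rather than a full internal calibration curve.

    # Internal-standard quantification (single-level response factor for brevity)
    def response_factor(area_std, conc_std, area_is, conc_is):
        """RF = (A_analyte / A_IS) / (C_analyte / C_IS) from a calibration standard."""
        return (area_std / area_is) / (conc_std / conc_is)

    def quantify(area_sample, area_is_sample, conc_is, rf):
        """Analyte concentration in the sample from the peak-area ratio and the response factor."""
        return (area_sample / area_is_sample) / rf * conc_is

    # Hypothetical peak areas for 3-hydroxyphenylpropionic acid against a deuterated IS
    rf = response_factor(area_std=52000, conc_std=0.50, area_is=48000, conc_is=0.50)
    c = quantify(area_sample=31000, area_is_sample=45000, conc_is=0.50, rf=rf)
    print(f"RF = {rf:.3f}, sample concentration = {c:.3f} ug/mL")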

  8. High-Temperature Thermal Conductivity Measurement Apparatus Based on Guarded Hot Plate Method

    NASA Astrophysics Data System (ADS)

    Turzo-Andras, E.; Magyarlaki, T.

    2017-10-01

    An alternative calibration procedure has been applied using apparatus built in-house, created to optimize thermal conductivity measurements. The new approach compared to those of usual measurement procedures of thermal conductivity by guarded hot plate (GHP) consists of modified design of the apparatus, modified position of the temperature sensors and new conception in the calculation method, applying the temperature at the inlet section of the specimen instead of the temperature difference across the specimen. This alternative technique is suitable for eliminating the effect of thermal contact resistance arising between a rigid specimen and the heated plate, as well as accurate determination of the specimen temperature and of the heat loss at the lateral edge of the specimen. This paper presents an overview of the specific characteristics of the newly developed "high-temperature thermal conductivity measurement apparatus" based on the GHP method, as well as how the major difficulties are handled in the case of this apparatus, as compared to the common GHP method that conforms to current international standards.
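
    For reference, the steady-state relation at the heart of the guarded hot plate method is shown in the short sketch below. This is the standard textbook form, not the modified calculation using the specimen inlet temperature described in the abstract, and the example values are invented.

    def thermal_conductivity(power_w, thickness_m, area_m2, delta_t_k):
        """Steady-state guarded hot plate relation: lambda = Q * d / (A * dT)."""
        return power_w * thickness_m / (area_m2 * delta_t_k)

    # Example: 0.72 W through a 50 mm specimen over a 0.09 m^2 metering area with a 10 K difference
    lam = thermal_conductivity(power_w=0.72, thickness_m=0.050, area_m2=0.09, delta_t_k=10.0)
    print(f"lambda = {lam:.3f} W/(m K)")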

  9. Comparison of retention models for polymers 1. Poly(ethylene glycol)s.

    PubMed

    Bashir, Mubasher A; Radke, Wolfgang

    2006-10-27

    The suitability of three different retention models to predict the retention times of poly(ethylene glycol)s (PEGs) in gradient and isocratic chromatography was investigated. The models investigated were the linear solvent strength model (LSSM) and the quadratic solvent strength model (QSSM). In addition, a model describing the retention behaviour of polymers was extended to account for gradient elution (PM). It was found that all models are able to predict gradient retention volumes properly, provided the extraction of the analyte-specific parameters is also performed from gradient experiments. The LSSM and QSSM cannot, in principle, describe retention behaviour under critical or SEC conditions. Since the PM is designed to cover all three modes of polymer chromatography, it is therefore superior to the other models. However, the determination of the analyte-specific parameters, which are needed to calibrate the retention behaviour, strongly depends on the suitable selection of initial experiments. A useful strategy for a purposeful selection of these calibration experiments is proposed.
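
    As background, the linear solvent strength relation usually assumed for the LSSM (ln k = ln k_w - S*phi) can be used to predict isocratic retention once the analyte parameters are known; the sketch below illustrates this with invented parameters for a single PEG oligomer.

    import numpy as np

    def lssm_retention_time(phi, ln_kw, S, t0):
        """Isocratic retention time under the linear solvent strength model.

        ln k = ln k_w - S * phi ;  t_R = t0 * (1 + k)
        phi   : volume fraction of strong solvent
        ln_kw : ln of the retention factor in pure weak solvent
        S     : solvent-strength parameter of the analyte
        t0    : column dead time (min)
        """
        k = np.exp(ln_kw - S * phi)
        return t0 * (1.0 + k)

    # Hypothetical parameters for one PEG oligomer, as they might be extracted from gradient runs
    for phi in (0.30, 0.40, 0.50):
        print(f"phi = {phi:.2f}  t_R = {lssm_retention_time(phi, ln_kw=8.0, S=20.0, t0=1.5):.2f} min")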

  10. A Practically Validated Intelligent Calibration Circuit Using Optimized ANN for Flow Measurement by Venturi

    NASA Astrophysics Data System (ADS)

    Venkata, Santhosh Krishnan; Roy, Binoy Krishna

    2016-03-01

    Design of an intelligent flow measurement technique using a venturi flow meter is reported in this paper. The objectives of the present work are: (1) to extend the linearity range of measurement to 100 % of the full-scale input range, (2) to make the measurement technique adaptive to variations in discharge coefficient, diameter ratio of venturi nozzle and pipe (β), liquid density, and liquid temperature, and (3) to achieve objectives (1) and (2) using an optimized neural network. The output of the venturi flow meter is a differential pressure. It is converted to voltage by a suitable data conversion unit. A suitable optimized artificial neural network (ANN) is added in place of the conventional calibration circuit. The ANN is trained and tested with simulated data considering variations in discharge coefficient, diameter ratio between venturi nozzle and pipe, liquid density, and liquid temperature. The proposed technique is then subjected to practical data for validation. Results show that the proposed technique has fulfilled the objectives.
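
    For context, the classical venturi relation that the calibration circuit effectively has to invert is shown below; the sketch uses the textbook discharge equation with invented example values, not the specific ANN mapping developed in the paper.

    import math

    def venturi_flow(dp_pa, d_throat_m, d_pipe_m, rho, cd=0.98):
        """Volumetric flow rate (m^3/s) from venturi differential pressure.

        Q = Cd * A_throat * sqrt( 2 * dP / (rho * (1 - beta^4)) ),  beta = d_throat / d_pipe
        """
        beta = d_throat_m / d_pipe_m
        a_throat = math.pi * d_throat_m ** 2 / 4.0
        return cd * a_throat * math.sqrt(2.0 * dp_pa / (rho * (1.0 - beta ** 4)))

    # Example: 25 kPa across a 50 mm throat in a 100 mm pipe carrying water
    q = venturi_flow(dp_pa=25e3, d_throat_m=0.05, d_pipe_m=0.10, rho=1000.0)
    print(f"Q = {q * 3600:.1f} m^3/h")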

  11. A new systematic calibration method of ring laser gyroscope inertial navigation system

    NASA Astrophysics Data System (ADS)

    Wei, Guo; Gao, Chunfeng; Wang, Qi; Wang, Qun; Xiong, Zhenyu; Long, Xingwu

    2016-10-01

    The inertial navigation system (INS) is the core component of both military and civil navigation systems. Before an INS is put into application, it must be calibrated in the laboratory in order to compensate for repeatability errors caused by manufacturing. The discrete calibration method cannot fulfill the requirements of high-accuracy calibration of a mechanically dithered ring laser gyroscope navigation system with shock absorbers. This paper analyzes the theory of error excitation and separation in detail and presents a new systematic calibration method for ring laser gyroscope inertial navigation systems. Error models and equations of the calibrated Inertial Measurement Unit are given. Proper rotation arrangement orders are then described in order to establish the linear relationships between the change of velocity errors and the calibrated parameter errors. Experiments were set up to compare the systematic errors calculated from the filtering calibration result with those obtained from the discrete calibration result. The largest position error and velocity error of the filtering calibration result are only 0.18 miles and 0.26 m/s, compared with 2 miles and 1.46 m/s for the discrete calibration result. These results validate the new systematic calibration method and demonstrate its importance for the optimal design and accuracy improvement of calibration of mechanically dithered ring laser gyroscope inertial navigation systems.

  12. Calibrating the stress-time curve of a combined finite-discrete element method to a Split Hopkinson Pressure Bar experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osthus, Dave; Godinez, Humberto C.; Rougier, Esteban

    We present a generic method for automatically calibrating a computer code to an experiment, with uncertainty, for a given “training” set of computer code runs. The calibration technique is general and probabilistic, meaning the calibration uncertainty is represented in the form of a probability distribution. We demonstrate the calibration method by calibrating a combined Finite-Discrete Element Method (FDEM) to a Split Hopkinson Pressure Bar (SHPB) experiment with a granite sample. The probabilistic calibration method combines runs of an FDEM computer simulation for a range of “training” settings and experimental uncertainty to develop a statistical emulator. The process allows for calibration of input parameters and produces output quantities with uncertainty estimates for settings where simulation results are desired. Input calibration and FDEM fitted results are presented. We find that the maximum shear strength σ_t^max and, to a lesser extent, the maximum tensile strength σ_n^max govern the behavior of the stress-time curve before and around the peak, while the specific energy in Mode II (shear), E_t, largely governs the post-peak behavior of the stress-time curve. Good agreement is found between the calibrated FDEM and the SHPB experiment. Interestingly, we find the SHPB experiment to be rather uninformative for calibrating the softening-curve shape parameters (a, b, and c). This work stands as a successful demonstration of how a general probabilistic calibration framework can automatically calibrate FDEM parameters to an experiment.

  13. Calibrating the stress-time curve of a combined finite-discrete element method to a Split Hopkinson Pressure Bar experiment

    DOE PAGES

    Osthus, Dave; Godinez, Humberto C.; Rougier, Esteban; ...

    2018-05-01

    We present a generic method for automatically calibrating a computer code to an experiment, with uncertainty, for a given “training” set of computer code runs. The calibration technique is general and probabilistic, meaning the calibration uncertainty is represented in the form of a probability distribution. We demonstrate the calibration method by calibrating a combined Finite-Discrete Element Method (FDEM) to a Split Hopkinson Pressure Bar (SHPB) experiment with a granite sample. The probabilistic calibration method combines runs of an FDEM computer simulation for a range of “training” settings and experimental uncertainty to develop a statistical emulator. The process allows for calibration of input parameters and produces output quantities with uncertainty estimates for settings where simulation results are desired. Input calibration and FDEM fitted results are presented. We find that the maximum shear strength σ_t^max and, to a lesser extent, the maximum tensile strength σ_n^max govern the behavior of the stress-time curve before and around the peak, while the specific energy in Mode II (shear), E_t, largely governs the post-peak behavior of the stress-time curve. Good agreement is found between the calibrated FDEM and the SHPB experiment. Interestingly, we find the SHPB experiment to be rather uninformative for calibrating the softening-curve shape parameters (a, b, and c). This work stands as a successful demonstration of how a general probabilistic calibration framework can automatically calibrate FDEM parameters to an experiment.

  14. Shortwave Radiometer Calibration Methods Comparison and Resulting Solar Irradiance Measurement Differences: A User Perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, Aron; Sengupta, Manajit; Andreas, Afshin

    Banks financing solar energy projects require assurance that these systems will produce the energy predicted. Furthermore, utility planners and grid system operators need to understand the impact of the variable solar resource on solar energy conversion system performance. Accurate solar radiation data sets reduce the expense associated with mitigating performance risk and assist in understanding the impacts of solar resource variability. The accuracy of solar radiation measured by radiometers depends on the instrument performance specification, installation method, calibration procedure, measurement conditions, maintenance practices, location, and environmental conditions. This study addresses the effect of different calibration methods provided by radiometric calibration service providers, such as NREL and manufacturers of radiometers, on the resulting calibration responsivity. Some of these radiometers are calibrated indoors and some outdoors. To establish or understand the differences in calibration methodology, we processed and analyzed field-measured data from these radiometers. This study investigates calibration responsivities provided by NREL's broadband outdoor radiometer calibration (BORCAL) and a few prominent manufacturers. The BORCAL method provides the outdoor calibration responsivity of pyranometers and pyrheliometers at 45 degree solar zenith angle, and as a function of solar zenith angle determined by clear-sky comparisons with reference irradiance. The BORCAL method also employs a thermal offset correction to the calibration responsivity of single-black thermopile detectors used in pyranometers. Indoor calibrations of radiometers by their manufacturers are performed using a stable artificial light source in a side-by-side comparison between the test radiometer under calibration and a reference radiometer of the same type. In both methods, the reference radiometer calibrations are traceable to the World Radiometric Reference (WRR). These different methods of calibration demonstrated +1% to +2% differences in solar irradiance measurement. Analyzing these differences will ultimately help determine the uncertainty of the field radiometer data and guide the development of a consensus standard for calibration. Further advancing procedures for precisely calibrating radiometers to world reference standards that reduce measurement uncertainty will allow more accurate prediction of solar output and improve the bankability of solar projects.

  15. Calibration Method to Eliminate Zeroth Order Effect in Lateral Shearing Interferometry

    NASA Astrophysics Data System (ADS)

    Fang, Chao; Xiang, Yang; Qi, Keqi; Chen, Dawei

    2018-04-01

    In this paper, a calibration method is proposed which eliminates the zeroth order effect in lateral shearing interferometry. An analytical expression of the calibration error function is deduced, and the relationship between the phase-restoration error and calibration error is established. The analytical results show that the phase-restoration error introduced by the calibration error is proportional to the phase shifting error and zeroth order effect. The calibration method is verified using simulations and experiments. The simulation results show that the phase-restoration error is approximately proportional to the phase shift error and zeroth order effect, when the phase shifting error is less than 2° and the zeroth order effect is less than 0.2. The experimental result shows that compared with the conventional method with 9-frame interferograms, the calibration method with 5-frame interferograms achieves nearly the same restoration accuracy.
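
    For background on how a phase map is recovered from phase-stepped interferograms, the sketch below applies the standard five-frame (Hariharan) demodulation formula to synthetic frames. It is a generic illustration of 5-frame phase restoration, not the zeroth-order calibration proposed in the paper, and the fringe pattern and phase steps are assumptions.

```python
# Generic 5-frame (Hariharan) phase-shifting demodulation on synthetic frames.
import numpy as np

def hariharan_phase(frames):
    """frames: array (5, H, W) recorded at nominal phase steps of pi/2."""
    I1, I2, I3, I4, I5 = frames
    return np.arctan2(2.0 * (I2 - I4), 2.0 * I3 - I1 - I5)

# Synthetic test: a tilted wavefront sampled at five phase steps.
H, W = 64, 64
y, x = np.mgrid[0:H, 0:W]
true_phase = 0.1 * x + 0.05 * y
steps = np.array([-np.pi, -np.pi / 2, 0.0, np.pi / 2, np.pi])
frames = np.array([1.0 + 0.8 * np.cos(true_phase + d) for d in steps])

wrapped = hariharan_phase(frames)
residual = np.angle(np.exp(1j * (wrapped - true_phase)))  # wrapped phase error
print("max |phase error| [rad]:", np.abs(residual).max())
```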

  16. Using qualitative comparative analysis in a systematic review of a complex intervention.

    PubMed

    Kahwati, Leila; Jacobs, Sara; Kane, Heather; Lewis, Megan; Viswanathan, Meera; Golin, Carol E

    2016-05-04

    Systematic reviews evaluating complex interventions often encounter substantial clinical heterogeneity in intervention components and implementation features making synthesis challenging. Qualitative comparative analysis (QCA) is a non-probabilistic method that uses mathematical set theory to study complex phenomena; it has been proposed as a potential method to complement traditional evidence synthesis in reviews of complex interventions to identify key intervention components or implementation features that might explain effectiveness or ineffectiveness. The objective of this study was to describe our approach in detail and examine the suitability of using QCA within the context of a systematic review. We used data from a completed systematic review of behavioral interventions to improve medication adherence to conduct two substantive analyses using QCA. The first analysis sought to identify combinations of nine behavior change techniques/components (BCTs) found among effective interventions, and the second analysis sought to identify combinations of five implementation features (e.g., agent, target, mode, time span, exposure) found among effective interventions. For each substantive analysis, we reframed the review's research questions to be designed for use with QCA, calibrated sets (i.e., transformed raw data into data used in analysis), and identified the necessary and/or sufficient combinations of BCTs and implementation features found in effective interventions. Our application of QCA for each substantive analysis is described in detail. We extended the original review findings by identifying seven combinations of BCTs and four combinations of implementation features that were sufficient for improving adherence. We found reasonable alignment between several systematic review steps and processes used in QCA except that typical approaches to study abstraction for some intervention components and features did not support a robust calibration for QCA. QCA was suitable for use within a systematic review of medication adherence interventions and offered insights beyond the single dimension stratifications used in the original completed review. Future prospective use of QCA during a review is needed to determine the optimal way to efficiently integrate QCA into existing approaches to evidence synthesis of complex interventions.
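
    The "calibrated sets" step can be illustrated with the direct method of fuzzy-set calibration commonly used in QCA, sketched below with purely hypothetical anchor values; the review's actual conditions and calibration choices are not reproduced here.

```python
# Minimal sketch of the "direct method" of fuzzy-set calibration used in QCA.
import numpy as np

def calibrate(x, full_out, crossover, full_in):
    """Map raw values to fuzzy-set membership in [0, 1] via log-odds,
    anchoring full_in near 0.95, crossover at 0.5, and full_out near 0.05."""
    x = np.asarray(x, dtype=float)
    log_odds = np.where(
        x >= crossover,
        3.0 * (x - crossover) / (full_in - crossover),
        3.0 * (x - crossover) / (crossover - full_out),
    )
    return 1.0 / (1.0 + np.exp(-log_odds))

# Example: calibrate a hypothetical "effect size" condition (anchors assumed).
raw = np.array([0.05, 0.15, 0.30, 0.50])
print(calibrate(raw, full_out=0.0, crossover=0.2, full_in=0.4))
```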

  17. Ionizable Nitroxides for Studying Local Electrostatic Properties of Lipid Bilayers and Protein Systems by EPR

    PubMed Central

    Voinov, Maxim A.; Smirnov, Alex I.

    2016-01-01

    Electrostatic interactions are known to play one of the major roles in the myriad of biochemical and biophysical processes. In this Chapter we describe biophysical methods to probe local electrostatic potentials of proteins and lipid bilayer systems that are based on the observation of reversible protonation of nitroxides by EPR. Two types of the electrostatic probes are discussed. The first one includes methanethiosulfonate derivatives of protonatable nitroxides that could be used for highly specific covalent modification of the cysteine’s sulfhydryl groups. Such spin labels are very similar in magnetic parameters and chemical properties to conventional MTSL making them suitable for studying local electrostatic properties of protein-lipid interfaces. The second type of EPR probes is designed as spin-labeled phospholipids having a protonatable nitroxide tethered to the polar head group. The probes of both types report on their ionization state through changes in magnetic parameters and a degree of rotational averaging, thus allowing one to determine the electrostatic contribution to the interfacial pKa of the nitroxide and, therefore, the local electrostatic potential. Due to their small molecular volume these probes cause a minimal perturbation to the protein or lipid system, while covalent attachment secures the position of the reporter nitroxides. Experimental procedures to characterize and calibrate these probes by EPR and also the methods to analyze the EPR spectra by least-squares simulations are also outlined. The ionizable nitroxide labels and the nitroxide-labeled phospholipids described so far cover an exceptionally wide pH range from ca. 2.5 to 7.0 pH units making them suitable to study a broad range of biophysical phenomena especially at the negatively charged lipid bilayer surfaces. The rationale for selecting a proper electrostatically neutral interface for calibrating such probes and an example of studying the surface potential of a lipid bilayer are also described. PMID:26477252

  18. Combining satellite, aerial and ground measurements to assess forest carbon stocks in Democratic Republic of Congo

    NASA Astrophysics Data System (ADS)

    Beaumont, Benjamin; Bouvy, Alban; Stephenne, Nathalie; Mathoux, Pierre; Bastin, Jean-François; Baudot, Yves; Akkermans, Tom

    2015-04-01

    Monitoring tropical forest carbon stock changes has been a rising topic in the recent years as a result of REDD+ mechanisms negotiations. Such monitoring will be mandatory for each project/country willing to benefit from these financial incentives in the future. Aerial and satellite remote sensing technologies offer cost advantages in implementing large scale forest inventories. Despite the recent progress made in the use of airborne LiDAR for carbon stocks estimation, no widely operational and cost effective method has yet been delivered for central Africa forest monitoring. Within the Maï Ndombe region of Democratic Republic of Congo, the EO4REDD project develops a method combining satellite, aerial and ground measurements. This combination is done in three steps: [1] mapping and quantifying forest cover changes using an object-based semi-automatic change detection (deforestation and forest degradation) methodology based on very high resolution satellite imagery (RapidEye), [2] developing an allometric linear model for above ground biomass measurements based on dendrometric parameters (tree crown areas and heights) extracted from airborne stereoscopic image pairs and calibrated using ground measurements of individual trees on a data set of 18 one hectare plots and [3] relating these two products to assess carbon stocks changes at a regional scale. Given the high accuracies obtained in [1] (> 80% for deforestation and 77% for forest degradation) and the suitable model (R² of 0.7) obtained in [2], which could still be improved with a larger calibration sample, EO4REDD products can be seen as a valid and replicable option for carbon stocks monitoring in tropical forests. Further improvements are planned to strengthen the cost effectiveness value and the REDD+ suitability in the second phase of EO4REDD. This second phase will include [A] specific model developments per forest type; [B] measurements of afforestation, reforestation and natural regeneration processes and [C] study of Sentinel satellite data series potential use.
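
    Step [2] essentially amounts to regressing above-ground biomass on crown area and tree height derived from the stereo imagery. The sketch below shows such a fit on synthetic plot data; the coefficients, noise level, and model form are assumptions for illustration, not the EO4REDD allometric model.

```python
# Illustrative fit of a linear above-ground-biomass (AGB) model from crown
# area and tree height; the numbers below are synthetic, not EO4REDD data.
import numpy as np

rng = np.random.default_rng(1)
n = 200
crown_area = rng.uniform(5.0, 80.0, n)      # m^2, e.g. from stereo imagery
height = rng.uniform(5.0, 40.0, n)          # m, e.g. from stereo imagery
# Hypothetical "ground truth" AGB (kg) with noise, for demonstration only.
agb = 2.5 * crown_area * height + rng.normal(0.0, 300.0, n)

# Design matrix: intercept, crown area, height, and their product.
X = np.column_stack([np.ones(n), crown_area, height, crown_area * height])
beta, *_ = np.linalg.lstsq(X, agb, rcond=None)

pred = X @ beta
ss_res = np.sum((agb - pred) ** 2)
ss_tot = np.sum((agb - agb.mean()) ** 2)
print("coefficients:", np.round(beta, 3))
print("R^2 =", round(1.0 - ss_res / ss_tot, 3))
```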

  19. Theoretical foundation, methods, and criteria for calibrating human vibration models using frequency response functions

    PubMed Central

    Dong, Ren G.; Welcome, Daniel E.; McDowell, Thomas W.; Wu, John Z.

    2015-01-01

    While simulations of the measured biodynamic responses of the whole human body or body segments to vibration are conventionally interpreted as summaries of biodynamic measurements, and the resulting models are considered quantitative, this study looked at these simulations from a different angle: model calibration. The specific aims of this study are to review and clarify the theoretical basis for model calibration, to help formulate the criteria for calibration validation, and to help appropriately select and apply calibration methods. In addition to established vibration theory, a novel theorem of mechanical vibration is also used to enhance the understanding of the mathematical and physical principles of the calibration. Based on this enhanced understanding, a set of criteria was proposed and used to systematically examine the calibration methods. Besides theoretical analyses, a numerical testing method is also used in the examination. This study identified the basic requirements for each calibration method to obtain a unique calibration solution. This study also confirmed that the solution becomes more robust if more than sufficient calibration references are provided. Practically, however, as more references are used, more inconsistencies can arise among the measured data for representing the biodynamic properties. To help account for the relative reliabilities of the references, a baseline weighting scheme is proposed. The analyses suggest that the best choice of calibration method depends on the modeling purpose, the model structure, and the availability and reliability of representative reference data. PMID:26740726
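
    As a minimal example of calibrating a lumped-parameter vibration model to a frequency response function, the sketch below fits the mass, damping, and stiffness of a single-degree-of-freedom model to a synthetic FRF by least squares. The model structure, parameter values, and noise are illustrative assumptions, not the multi-segment biodynamic models examined in the study.

```python
# Sketch: calibrating a single-DOF mass-spring-damper model to a "measured"
# frequency response function (FRF); the reference FRF here is synthetic.
import numpy as np
from scipy.optimize import least_squares

def receptance(params, w):
    m, c, k = params
    return 1.0 / (k - m * w ** 2 + 1j * c * w)

w = 2 * np.pi * np.linspace(5, 100, 200)          # angular frequency, rad/s
true = np.array([2.0, 40.0, 5.0e4])               # m [kg], c [N s/m], k [N/m]
rng = np.random.default_rng(2)
H_meas = receptance(true, w) * (1 + 0.02 * rng.normal(size=w.size))

def residuals(params):
    diff = receptance(params, w) - H_meas
    return np.concatenate([diff.real, diff.imag])  # real-valued residual vector

fit = least_squares(residuals, x0=[1.0, 10.0, 1.0e4],
                    bounds=([1e-3, 1e-3, 1.0], [100.0, 1e4, 1e7]))
print("calibrated [m, c, k]:", np.round(fit.x, 2))
```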

  20. [Determination of soluble solids content in Nanfeng Mandarin by Vis/NIR spectroscopy and UVE-ICA-LS-SVM].

    PubMed

    Sun, Tong; Xu, Wen-Li; Hu, Tian; Liu, Mu-Hua

    2013-12-01

    The objective of the present research was to assess soluble solids content (SSC) of Nanfeng mandarin by visible/near infrared (Vis/NIR) spectroscopy combined with a new variable selection method, simplify the prediction model and improve the performance of the prediction model for SSC of Nanfeng mandarin. A total of 300 Nanfeng mandarin samples were used; the numbers of Nanfeng mandarin samples in the calibration, validation and prediction sets were 150, 75 and 75, respectively. Vis/NIR spectra of Nanfeng mandarin samples were acquired by a QualitySpec spectrometer in the wavelength range of 350-1000 nm. Uninformative variables elimination (UVE) was used to eliminate wavelength variables that carried little information on SSC, then independent component analysis (ICA) was used to extract independent components (ICs) from the spectra with the uninformative wavelength variables removed. At last, least squares support vector machine (LS-SVM) was used to develop calibration models for SSC of Nanfeng mandarin using the extracted ICs, and 75 prediction samples that had not been used for model development were used to evaluate the performance of the SSC model of Nanfeng mandarin. The results indicate that Vis/NIR spectroscopy combined with UVE-ICA-LS-SVM is suitable for assessing SSC of Nanfeng mandarin, and the precision of prediction is high. UVE-ICA is an effective method to eliminate uninformative wavelength variables, extract important spectral information, simplify the prediction model and improve the performance of the prediction model. The SSC model developed by UVE-ICA-LS-SVM is superior to that developed by PLS, PCA-LS-SVM or ICA-LS-SVM, and the coefficients of determination and root mean square errors in the calibration, validation and prediction sets were 0.978, 0.230%, 0.965, 0.301% and 0.967, 0.292%, respectively.
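
    A rough analogue of the UVE-ICA-LS-SVM chain can be assembled from scikit-learn parts, as sketched below. A simple correlation screen stands in for UVE and an RBF-kernel SVR stands in for LS-SVM, and the spectra are simulated, so this only illustrates the workflow, not the authors' exact models or data.

```python
# Workflow sketch: variable screening -> ICA -> kernel SVM regression.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_samples, n_wavelengths = 300, 200
X = rng.normal(size=(n_samples, n_wavelengths))                 # simulated "spectra"
y = 2.0 * X[:, 50] - 1.5 * X[:, 120] + 0.1 * rng.normal(size=n_samples)  # toy "SSC"

# Step 1 (stand-in for UVE): drop wavelengths weakly correlated with the response.
corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_wavelengths)])
X_sel = X[:, corr > 0.1]

# Steps 2-3: extract independent components, then regress with a kernel SVM.
model = make_pipeline(StandardScaler(),
                      FastICA(n_components=min(8, X_sel.shape[1]), random_state=0),
                      SVR(kernel="rbf", C=10.0))
X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.25, random_state=0)
model.fit(X_tr, y_tr)
print("R^2 on held-out samples:", round(model.score(X_te, y_te), 3))
```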

  1. Application of Composite Small Calibration Objects in Traffic Accident Scene Photogrammetry

    PubMed Central

    Chen, Qiang; Xu, Hongguo; Tan, Lidong

    2015-01-01

    In order to address the difficulty of arranging large calibration objects and the low measurement accuracy of small calibration objects in traffic accident scene photogrammetry, a photogrammetric method based on a composite of small calibration objects is proposed. Several small calibration objects are placed around the traffic accident scene, and the coordinate system of the composite calibration object is given based on one of them. By maintaining the relative position and coplanar relationship of the small calibration objects, the local coordinate system of each small calibration object is transformed into the coordinate system of the composite calibration object. The two-dimensional direct linear transformation method is improved based on minimizing the reprojection error of the calibration points of all objects. A rectified image is obtained using the nonlinear optimization method. The increased accuracy of traffic accident scene photogrammetry using a composite small calibration object is demonstrated through the analysis of field experiments and case studies. PMID:26011052
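
    The two-dimensional direct linear transformation at the core of this approach can be sketched as a planar homography estimated by DLT from point correspondences. The example below uses synthetic marker corners and omits the composite-target coordinate bookkeeping and the nonlinear refinement of the reprojection error described above.

```python
# Minimal planar DLT sketch: estimate the homography mapping image points to
# ground-plane coordinates and report the residual against the known points.
import numpy as np

def dlt_homography(src, dst):
    """src, dst: (N, 2) corresponding points, N >= 4. Returns 3x3 H."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return Vt[-1].reshape(3, 3)          # null-space vector, up to scale

def apply_h(H, pts):
    p = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return p[:, :2] / p[:, 2:3]

# Synthetic example: four marker corners observed under a known homography.
ground = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
H_true = np.array([[1.2, 0.1, 5.0], [0.05, 0.9, 3.0], [1e-3, 2e-3, 1.0]])
image = apply_h(H_true, ground)

H_est = dlt_homography(image, ground)    # image -> ground plane
recovered = apply_h(H_est, image)
print("RMS residual:", np.sqrt(np.mean((recovered - ground) ** 2)))
```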

  2. Calibration and accuracy analysis of a focused plenoptic camera

    NASA Astrophysics Data System (ADS)

    Zeller, N.; Quint, F.; Stilla, U.

    2014-08-01

    In this article we introduce new methods for the calibration of depth images from focused plenoptic cameras and validate the results. We start with a brief description of the concept of a focused plenoptic camera and how from the recorded raw image a depth map can be estimated. For this camera, an analytical expression of the depth accuracy is derived for the first time. In the main part of the paper, methods to calibrate a focused plenoptic camera are developed and evaluated. The optical imaging process is calibrated by using a method which is already known from the calibration of traditional cameras. For the calibration of the depth map two new model based methods, which make use of the projection concept of the camera are developed. These new methods are compared to a common curve fitting approach, which is based on Taylor-series-approximation. Both model based methods show significant advantages compared to the curve fitting method. They need less reference points for calibration than the curve fitting method and moreover, supply a function which is valid in excess of the range of calibration. In addition the depth map accuracy of the plenoptic camera was experimentally investigated for different focal lengths of the main lens and is compared to the analytical evaluation.

  3. Densities of some molten fluoride salt mixtures suitable for heat storage in space power applications

    NASA Technical Reports Server (NTRS)

    Misra, Ajay K.

    1988-01-01

    Liquid densities were determined for a number of fluoride salt mixtures suitable for heat storage in space power applications, using a procedure that consisted of measuring the loss of weight of an inert bob in the melt. The density apparatus was calibrated with pure LiF and NaF at different temperatures. Density data for safe binary and ternary fluoride salt eutectics and congruently melting intermediate compounds are presented. In addition, a comparison was made between the volumetric heat storage capacity of different salt mixtures.

  4. An Automated Method of MFRSR Calibration for Aerosol Optical Depth Analysis with Application to an Asian Dust Outbreak over the United States.

    NASA Astrophysics Data System (ADS)

    Augustine, John A.; Cornwall, Christopher R.; Hodges, Gary B.; Long, Charles N.; Medina, Carlos I.; Deluisi, John J.

    2003-02-01

    Over the past decade, networks of Multifilter Rotating Shadowband Radiometers (MFRSR) and automated sun photometers have been established in the United States to monitor aerosol properties. The MFRSR alternately measures diffuse and global irradiance in six narrow spectral bands and a broadband channel of the solar spectrum, from which the direct normal component for each may be inferred. Its 500-nm channel mimics sun photometer measurements and thus is a source of aerosol optical depth information. Automatic data reduction methods are needed because of the high volume of data produced by the MFRSR. In addition, these instruments are often not calibrated for absolute irradiance and must be periodically calibrated for optical depth analysis using the Langley method. This process involves extrapolation to the signal the MFRSR would measure at the top of the atmosphere (I0). Here, an automated clear-sky identification algorithm is used to screen MFRSR 500-nm measurements for suitable calibration data. The clear-sky MFRSR measurements are subsequently used to construct a set of calibration Langley plots from which a mean I0 is computed. This calibration I0 may be subsequently applied to any MFRSR 500-nm measurement within the calibration period to retrieve aerosol optical depth. This method is tested on a 2-month MFRSR dataset from the Table Mountain NOAA Surface Radiation Budget Network (SURFRAD) station near Boulder, Colorado. The resultant I0 is applied to two Asian dust-related high air pollution episodes that occurred within the calibration period on 13 and 17 April 2001. Computed aerosol optical depths for 17 April range from approximately 0.30 to 0.40, and those for 13 April vary from background levels to >0.30. Errors in these retrievals were estimated to range from ±0.01 to ±0.05, depending on the solar zenith angle. The calculations are compared with independent MFRSR-based aerosol optical depth retrievals at the Pawnee National Grasslands, 85 km to the northeast of Table Mountain, and to sun-photometer-derived aerosol optical depths at the National Renewable Energy Laboratory in Golden, Colorado, 50 km to the south. Both the Table Mountain and Golden stations are situated within a few kilometers of the Front Range of the Rocky Mountains, whereas the Pawnee station is on the eastern plains of Colorado. Time series of aerosol optical depth from Pawnee and Table Mountain stations compare well for 13 April when, according to the Naval Aerosol Analysis and Prediction System, an upper-level Asian dust plume enveloped most of Colorado. Aerosol optical depths at the Golden station for that event are generally greater than those at Table Mountain and Pawnee, possibly because of the proximity of Golden to Denver's urban aerosol plume. The dust over Colorado was primarily surface based on 17 April. On that day, aerosol optical depths at Table Mountain and Golden are similar but are 2 times the magnitude of those at Pawnee. This difference is attributed to meteorological conditions that favored air stagnation in the planetary boundary layer along the Front Range, and a west-to-east gradient in aerosol concentration. The magnitude and timing of the aerosol optical depth measurements at Table Mountain for these events are found to be consistent with independent measurements made at NASA Aerosol Robotic Network (AERONET) stations at Missoula, Montana, and at Bondville, Illinois.
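
    The Langley calibration itself reduces to a straight-line fit of ln(signal) against airmass and extrapolation to zero airmass. The sketch below shows that step on synthetic clear-sky data; the automated clear-sky screening and the Rayleigh/ozone subtraction needed to isolate the aerosol component are omitted, and the numbers are assumptions.

```python
# Langley-plot sketch: regress ln(signal) on airmass, extrapolate to m = 0.
import numpy as np

rng = np.random.default_rng(4)
airmass = np.linspace(2.0, 6.0, 40)                   # typical Langley range
tau_total, I0_true = 0.25, 1.234                      # synthetic "truth"
signal = I0_true * np.exp(-tau_total * airmass) \
         * (1 + 0.005 * rng.normal(size=airmass.size))

slope, intercept = np.polyfit(airmass, np.log(signal), deg=1)
I0 = np.exp(intercept)
print(f"extrapolated I0 = {I0:.4f}, retrieved total optical depth = {-slope:.4f}")

# With I0 fixed, a later 500-nm measurement V at airmass m gives a total
# optical depth ln(I0/V)/m; aerosol optical depth follows after removing the
# Rayleigh (and gas absorption) contributions.
V, m = 0.55, 1.5
print("example total optical depth:", round(np.log(I0 / V) / m, 4))
```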

  5. Accuracy improvement in a calibration test bench for accelerometers by a vision system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D’Emilia, Giulio, E-mail: giulio.demilia@univaq.it; Di Gasbarro, David, E-mail: david.digasbarro@graduate.univaq.it; Gaspari, Antonella, E-mail: antonella.gaspari@graduate.univaq.it

    2016-06-28

    A procedure is described in this paper for the accuracy improvement of calibration of low-cost accelerometers in a prototype rotary test bench, driven by a brushless servo-motor and operating in a low frequency range of vibrations (0 to 5 Hz). Vibration measurements by a vision system based on a low frequency camera have been carried out, in order to reduce the uncertainty of the real acceleration evaluation at the installation point of the sensor to be calibrated. A preliminary test device has been realized and operated in order to evaluate the metrological performances of the vision system, showing a satisfactory behavior if the measurement uncertainty is taken into account. A combination of suitable settings of the control parameters of the motion control system and of the information gained by the vision system made it possible to fit the information about the reference acceleration at the installation point to the needs of the procedure for static and dynamic calibration of three-axis accelerometers.

  6. Quantitation of active pharmaceutical ingredients and excipients in powder blends using designed multivariate calibration models by near-infrared spectroscopy.

    PubMed

    Li, Weiyong; Worosila, Gregory D

    2005-05-13

    This research note demonstrates the simultaneous quantitation of a pharmaceutical active ingredient and three excipients in a simulated powder blend containing acetaminophen, Prosolv and Crospovidone. An experimental design approach was used in generating a 5-level (%, w/w) calibration sample set that included 125 samples. The samples were prepared by weighing suitable amounts of powder into separate 20-mL scintillation vials and were mixed manually. Partial least squares (PLS) regression was used in calibration model development. The models generated accurate results for quantitation of Crospovidone (at 5%, w/w) and magnesium stearate (at 0.5%, w/w). Further testing of the models demonstrated that the 2-level models were as effective as the 5-level ones, which reduced the calibration sample number to 50. The models had a small bias for quantitation of acetaminophen (at 30%, w/w) and Prosolv (at 64.5%, w/w) in the blend. The implication of the bias is discussed.
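
    A PLS calibration of this kind can be sketched in a few lines with scikit-learn; the simulated spectra and concentrations below are stand-ins for the acetaminophen/Prosolv/Crospovidone blend data, and the cross-validated error is reported only to show the mechanics.

```python
# Hedged sketch of a multivariate PLS calibration on simulated NIR-like data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(5)
n_samples, n_channels = 125, 300
conc = rng.uniform(0.0, 1.0, size=(n_samples, 3))       # 3 "components" (w/w)
pure = rng.normal(size=(3, n_channels))                  # pure-component "spectra"
X = conc @ pure + 0.01 * rng.normal(size=(n_samples, n_channels))

pls = PLSRegression(n_components=5)
pred = cross_val_predict(pls, X, conc, cv=10)
rmsecv = np.sqrt(np.mean((pred - conc) ** 2, axis=0))
print("RMSECV per component:", np.round(rmsecv, 4))
```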

  7. The Kelvin and Temperature Measurements

    PubMed Central

    Mangum, B. W.; Furukawa, G. T.; Kreider, K. G.; Meyer, C. W.; Ripple, D. C.; Strouse, G. F.; Tew, W. L.; Moldover, M. R.; Johnson, B. Carol; Yoon, H. W.; Gibson, C. E.; Saunders, R. D.

    2001-01-01

    The International Temperature Scale of 1990 (ITS-90) is defined from 0.65 K upwards to the highest temperature measurable by spectral radiation thermometry, the radiation thermometry being based on the Planck radiation law. When it was developed, the ITS-90 represented thermodynamic temperatures as closely as possible. Part I of this paper describes the realization of contact thermometry up to 1234.93 K, the temperature range in which the ITS-90 is defined in terms of calibration of thermometers at 15 fixed points and vapor pressure/temperature relations which are phase equilibrium states of pure substances. The realization is accomplished by using fixed-point devices, containing samples of the highest available purity, and suitable temperature-controlled environments. All components are constructed to achieve the defining equilibrium states of the samples for the calibration of thermometers. The high quality of the temperature realization and measurements is well documented. Various research efforts are described, including research to improve the uncertainty in thermodynamic temperatures by measuring the velocity of sound in gas up to 800 K, research in applying noise thermometry techniques, and research on thermocouples. Thermometer calibration services and high-purity samples and devices suitable for “on-site” thermometer calibration that are available to the thermometry community are described. Part II of the paper describes the realization of temperature above 1234.93 K for which the ITS-90 is defined in terms of the calibration of spectroradiometers using reference blackbody sources that are at the temperature of the equilibrium liquid-solid phase transition of pure silver, gold, or copper. The realization of temperature from absolute spectral or total radiometry over the temperature range from about 60 K to 3000 K is also described. The dissemination of the temperature scale using radiation thermometry from NIST to the customer is achieved by calibration of blackbody sources, tungsten-strip lamps, and pyrometers. As an example of the research efforts in absolute radiometry, which impacts the NIST spectral irradiance and radiance scales, results with filter radiometers and a high-temperature blackbody are summarized. PMID:27500019

  8. Comparison of TLD calibration methods for  192Ir dosimetry

    PubMed Central

    Butler, Duncan J.; Wilfert, Lisa; Ebert, Martin A.; Todd, Stephen P.; Hayton, Anna J.M.; Kron, Tomas

    2013-01-01

    For the purpose of dose measurement using a high-dose rate 192Ir source, four methods of thermoluminescent dosimeter (TLD) calibration were investigated. Three of the four calibration methods used the 192Ir source. Dwell times were calculated to deliver 1 Gy to the TLDs irradiated either in air or water. Dwell time calculations were confirmed by direct measurement using an ionization chamber. The fourth method of calibration used 6 MV photons from a medical linear accelerator, and an energy correction factor was applied to account for the difference in sensitivity of the TLDs in 192Ir and 6 MV. The results of the four TLD calibration methods are presented in terms of the results of a brachytherapy audit where seven Australian centers irradiated three sets of TLDs in a water phantom. The results were in agreement within estimated uncertainties when the TLDs were calibrated with the 192Ir source. Calibrating TLDs in a phantom similar to that used for the audit proved to be the most practical method and provided the greatest confidence in measured dose. When calibrated using 6 MV photons, the TLD results were consistently higher than the 192Ir-calibrated TLDs, suggesting this method does not fully correct for the response of the TLDs when irradiated in the audit phantom. PACS number: 87 PMID:23318392

  9. Methods for Calibration of Prout-Tompkins Kinetics Parameters Using EZM Iteration and GLO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wemhoff, A P; Burnham, A K; de Supinski, B

    2006-11-07

    This document contains information regarding the standard procedures used to calibrate chemical kinetics parameters for the extended Prout-Tompkins model to match experimental data. Two methods for calibration are mentioned: EZM calibration and GLO calibration. EZM calibration matches kinetics parameters to three data points, while GLO calibration slightly adjusts kinetic parameters to match multiple points. Information is provided regarding the theoretical approach and application procedure for both of these calibration algorithms. It is recommended that for the calibration process, the user begin with EZM calibration to provide a good estimate, and then fine-tune the parameters using GLO. Two examples have been provided to guide the reader through a general calibrating process.
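
    For illustration only, the sketch below fits the Arrhenius parameters of a generic Prout-Tompkins-style autocatalytic rate law to synthetic isothermal reaction-progress data at two temperatures. It shows the general idea of kinetics calibration by least squares; it is not the extended Prout-Tompkins formulation used at LLNL, nor the EZM or GLO algorithms themselves, and all parameter values are assumptions.

```python
# Generic autocatalytic kinetics fit: dalpha/dt = k * alpha^m * (1-alpha)^n,
# with k = exp(logA - E/(R*T)); calibrated to synthetic isothermal data.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

R = 8.314  # J/(mol K)

def alpha_curve(logA, E, T, t_eval, m=0.5, n=1.0, alpha0=1e-2):
    k = np.exp(logA - E / (R * T))
    rhs = lambda t, a: k * np.clip(a, 1e-9, 1.0) ** m * np.clip(1.0 - a, 0.0, 1.0) ** n
    sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), [alpha0], t_eval=t_eval, rtol=1e-8)
    return sol.y[0]

# Synthetic isothermal "experiments" (true logA = 20, E = 120 kJ/mol).
t = np.linspace(0.0, 20000.0, 80)
temps = (480.0, 520.0)
rng = np.random.default_rng(6)
data = [alpha_curve(20.0, 1.2e5, T, t) + 0.01 * rng.normal(size=t.size) for T in temps]

def residuals(p):
    return np.concatenate([alpha_curve(p[0], p[1], T, t) - d for T, d in zip(temps, data)])

fit = least_squares(residuals, x0=[18.0, 1.0e5], x_scale=[1.0, 1.0e4])
print("calibrated logA and E [J/mol]:", fit.x)
```

    Using data at more than one temperature is what makes the frequency factor and activation energy separately identifiable; a single isotherm constrains only their combination in k.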

  10. A calibration method based on virtual large planar target for cameras with large FOV

    NASA Astrophysics Data System (ADS)

    Yu, Lei; Han, Yangyang; Nie, Hong; Ou, Qiaofeng; Xiong, Bangshu

    2018-02-01

    In order to obtain high precision in camera calibration, a target should be large enough to cover the whole field of view (FOV). For cameras with large FOV, using a small target will seriously reduce the precision of calibration. However, using a large target causes many difficulties in making, carrying and employing the large target. In order to solve this problem, a calibration method based on the virtual large planar target (VLPT), which is virtually constructed with multiple small targets (STs), is proposed for cameras with large FOV. In the VLPT-based calibration method, first, the positions and directions of STs are changed several times to obtain a number of calibration images. Secondly, the VLPT of each calibration image is created by finding the virtual point corresponding to the feature points of the STs. Finally, intrinsic and extrinsic parameters of the camera are calculated by using the VLPTs. Experiment results show that the proposed method can not only achieve the similar calibration precision as those employing a large target, but also have good stability in the whole measurement area. Thus, the difficulties to accurately calibrate cameras with large FOV can be perfectly tackled by the proposed method with good operability.

  11. A Full-Envelope Air Data Calibration and Three-Dimensional Wind Estimation Method Using Global Output-Error Optimization and Flight-Test Techniques

    NASA Technical Reports Server (NTRS)

    Taylor, Brian R.

    2012-01-01

    A novel, efficient air data calibration method is proposed for aircraft with limited envelopes. This method uses output-error optimization on three-dimensional inertial velocities to estimate calibration and wind parameters. Calibration parameters are based on assumed calibration models for static pressure, angle of attack, and flank angle. Estimated wind parameters are the north, east, and down components. The only assumptions needed for this method are that the inertial velocities and Euler angles are accurate, the calibration models are correct, and that the steady-state component of wind is constant throughout the maneuver. A two-minute maneuver was designed to excite the aircraft over the range of air data calibration parameters and de-correlate the angle-of-attack bias from the vertical component of wind. Simulation of the X-48B (The Boeing Company, Chicago, Illinois) aircraft was used to validate the method, ultimately using data derived from wind-tunnel testing to simulate the un-calibrated air data measurements. Results from the simulation were accurate and robust to turbulence levels comparable to those observed in flight. Future experiments are planned to evaluate the proposed air data calibration in a flight environment.
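
    A stripped-down, two-dimensional version of the output-error idea is sketched below: an airspeed scale factor, a flank-angle bias, and a constant wind are estimated by matching predicted ground velocities to inertial ("GPS/INS") velocities over a heading sweep. The planar kinematics, calibration model, and noise levels are simplifying assumptions, not the X-48B models or the full three-dimensional method described above.

```python
# Simplified 2-D output-error air data calibration with constant wind.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(7)
n = 120
psi = np.linspace(0.0, 2 * np.pi, n)            # heading sweep during the maneuver
V_true = 60.0                                   # true airspeed, m/s
beta_true = np.zeros(n)                         # true flank angle, rad
wind_true = np.array([5.0, -3.0])               # [north, east] wind, m/s

# "Measured" air data with a 5% scale error and a 1-degree flank-angle bias.
V_meas = V_true / 1.05 + 0.2 * rng.normal(size=n)
beta_meas = beta_true - np.deg2rad(1.0) + np.deg2rad(0.1) * rng.normal(size=n)

# Reference inertial velocities (e.g., from GPS/INS).
vn = V_true * np.cos(psi + beta_true) + wind_true[0]
ve = V_true * np.sin(psi + beta_true) + wind_true[1]

def residuals(p):
    k_v, b_beta, wn, we = p
    V = k_v * V_meas
    beta = beta_meas + b_beta
    pred_n = V * np.cos(psi + beta) + wn
    pred_e = V * np.sin(psi + beta) + we
    return np.concatenate([pred_n - vn, pred_e - ve])

fit = least_squares(residuals, x0=[1.0, 0.0, 0.0, 0.0])
print("scale, flank bias [deg], wind N/E:",
      fit.x[0].round(3), np.rad2deg(fit.x[1]).round(2), fit.x[2:].round(2))
```

    The heading sweep plays the same role as the two-minute maneuver described above: it de-correlates the constant wind from the air data calibration parameters.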

  12. Simple and cost-effective liquid chromatography-mass spectrometry method to measure dabrafenib quantitatively and six metabolites semi-quantitatively in human plasma.

    PubMed

    Vikingsson, Svante; Dahlberg, Jan-Olof; Hansson, Johan; Höiom, Veronica; Gréen, Henrik

    2017-06-01

    Dabrafenib is an inhibitor of BRAF V600E used for treating metastatic melanoma but a majority of patients experience adverse effects. Methods to measure the levels of dabrafenib and major metabolites during treatment are needed to allow development of individualized dosing strategies to reduce the burden of such adverse events. In this study, an LC-MS/MS method capable of measuring dabrafenib quantitatively and six metabolites semi-quantitatively is presented. The method is fully validated with regard to dabrafenib in human plasma in the range 5-5000 ng/mL. The analytes were separated on a C18 column after protein precipitation and detected in positive electrospray ionization mode using a Xevo TQ triple quadrupole mass spectrometer. As no commercial reference standards are available, the calibration curve of dabrafenib was used for semi-quantification of dabrafenib metabolites. Compared to earlier methods the presented method represents a simpler and more cost-effective approach suitable for clinical studies. Graphical abstract Combined multi reaction monitoring transitions of dabrafenib and metabolites in a typical case sample.

  13. Determination of aliskiren in human serum quantities by HPLC-tandem mass spectrometry appropriate for pediatric trials.

    PubMed

    Burckhardt, Bjoern B; Ramusovic, Sergej; Tins, Jutta; Laeer, Stephanie

    2013-04-01

    The orally active direct renin inhibitor aliskiren is approved for the treatment of essential hypertension in adults. Analytical methods utilized in clinical studies on efficacy and safety have not been fully described in the literature but need a large sample volume ranging from 200 to 700 μL, rendering them unsuitable particularly for pediatric applications. In the assay presented only 100 μL of serum is needed for mixed-mode solid-phase extraction. The chromatographic separation was performed on Xselect(TM) C18 CSH columns with mobile phase consisting of methanol-water-formic acid (75:25:0.005, v/v/v) and a flow rate of 0.4 mL/min. Running in positive electrospray ionization and multiple reaction monitoring the mass spectrometer was set to analyze precursor ion 552.2 m/z [M + H](+) to product ion 436.2 m/z during a total run time of 5 min. The method covers a linear calibration range of 0.146-1200 ng/mL. Intra-run and inter-run precisions were 0.4-7.2 and 0.6-12.9%. Mean recovery was at least 89%. Selectivity, accuracy and stability results comply with current European Medicines Agency and Food and Drug Administration guidelines. This successfully validated LC-MS/MS method with a wide linear calibration range requiring small serum amounts is suitable for pharmacokinetic investigations of aliskiren in pediatrics, adults and the elderly. Copyright © 2012 John Wiley & Sons, Ltd.

  14. Gas chromatography-electron ionization-mass spectrometry quantitation of valproic acid and gabapentin, using dried plasma spots, for therapeutic drug monitoring in in-home medical care.

    PubMed

    Ikeda, Kayo; Ikawa, Kazuro; Yokoshige, Satoko; Yoshikawa, Satoshi; Morikawa, Norifumi

    2014-12-01

    A simple and sensitive gas chromatography-electron ionization-mass spectrometry (GC-EI-MS) method using dried plasma spot testing cards was developed for determination of valproic acid and gabapentin concentrations in human plasma from patients receiving in-home medical care. We have proposed that a simple, easy and dry sampling method is suitable for in-home medical patients for therapeutic drug monitoring. Therefore, in the present study, we used recently developed commercially available easy handling cards: Whatman FTA DMPK-A and Bond Elut DMS. In-home medical care patients can collect plasma using these simple kits. The spots of plasma on the cards were extracted into methanol and then evaporated to dryness. The residues were trimethylsilylated using N-methyl-N-trimethylsilyltrifluoroacetamide. For GC-EI-MS analysis, the calibration curves on both cards were linear from 10 to 200 µg/mL for valproic acid, and from 0.5 to 10 µg/mL for gabapentin. Intra- and interday precisions in plasma were both ≤13.0% (coefficient of variation), and the accuracy was between 87.9 and 112% for both cards within the calibration curves. The limits of quantification were 10 µg/mL for valproic acid and 0.5 µg/mL for gabapentin on both cards. We believe that the present method will be useful for in-home medical care. Copyright © 2014 John Wiley & Sons, Ltd.

  15. Statistical Calibration and Validation of a Homogeneous Ventilated Wall-Interference Correction Method for the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Walker, Eric L.

    2005-01-01

    Wind tunnel experiments will continue to be a primary source of validation data for many types of mathematical and computational models in the aerospace industry. The increased emphasis on accuracy of data acquired from these facilities requires understanding of the uncertainty of not only the measurement data but also any correction applied to the data. One of the largest and most critical corrections made to these data is due to wall interference. In an effort to understand the accuracy and suitability of these corrections, a statistical validation process for wall interference correction methods has been developed. This process is based on the use of independent cases which, after correction, are expected to produce the same result. Comparison of these independent cases with respect to the uncertainty in the correction process establishes a domain of applicability based on the capability of the method to provide reasonable corrections with respect to customer accuracy requirements. The statistical validation method was applied to the version of the Transonic Wall Interference Correction System (TWICS) recently implemented in the National Transonic Facility at NASA Langley Research Center. The TWICS code generates corrections for solid and slotted wall interference in the model pitch plane based on boundary pressure measurements. Before validation could be performed on this method, it was necessary to calibrate the ventilated wall boundary condition parameters. Discrimination comparisons are used to determine the most representative of three linear boundary condition models which have historically been used to represent longitudinally slotted test section walls. Of the three linear boundary condition models implemented for ventilated walls, the general slotted wall model was the most representative of the data. The TWICS code using the calibrated general slotted wall model was found to be valid to within the process uncertainty for test section Mach numbers less than or equal to 0.60. The scatter among the mean corrected results of the bodies of revolution validation cases was within one count of drag on a typical transport aircraft configuration for Mach numbers at or below 0.80 and two counts of drag for Mach numbers at or below 0.90.

  16. Halogenated peptides as internal standards (H-PINS): introduction of an MS-based internal standard set for liquid chromatography-mass spectrometry.

    PubMed

    Mirzaei, Hamid; Brusniak, Mi-Youn; Mueller, Lukas N; Letarte, Simon; Watts, Julian D; Aebersold, Ruedi

    2009-08-01

    As the application for quantitative proteomics in the life sciences has grown in recent years, so has the need for more robust and generally applicable methods for quality control and calibration. The reliability of quantitative proteomics is tightly linked to the reproducibility and stability of the analytical platforms, which are typically multicomponent (e.g. sample preparation, multistep separations, and mass spectrometry) with individual components contributing unequally to the overall system reproducibility. Variations in quantitative accuracy are thus inevitable, and quality control and calibration become essential for the assessment of the quality of the analyses themselves. Toward this end, the use of internal standards can not only assist in the detection and removal of outlier data acquired by an irreproducible system (quality control) but can also be used for detection of changes in instruments for their subsequent performance and calibration. Here we introduce a set of halogenated peptides as internal standards. The peptides are custom designed to have properties suitable for various quality control assessments, data calibration, and normalization processes. The unique isotope distribution of halogenated peptides makes their mass spectral detection easy and unambiguous when spiked into complex peptide mixtures. In addition, they were designed to elute sequentially over an entire aqueous to organic LC gradient and to have m/z values within the commonly scanned mass range (300-1800 Da). In a series of experiments in which these peptides were spiked into an enriched N-glycosite peptide fraction (i.e. from formerly N-glycosylated intact proteins in their deglycosylated form) isolated from human plasma, we show the utility and performance of these halogenated peptides for sample preparation and LC injection quality control as well as for retention time and mass calibration. Further use of the peptides for signal intensity normalization and retention time synchronization for selected reaction monitoring experiments is also demonstrated.

  17. Application of Genetic Algorithm (GA) Assisted Partial Least Square (PLS) Analysis on Trilinear and Non-trilinear Fluorescence Data Sets to Quantify the Fluorophores in Multifluorophoric Mixtures: Improving Quantification Accuracy of Fluorimetric Estimations of Dilute Aqueous Mixtures.

    PubMed

    Kumar, Keshav

    2018-03-01

    Excitation-emission matrix fluorescence (EEMF) and total synchronous fluorescence spectroscopy (TSFS) are the 2 fluorescence techniques that are commonly used for the analysis of multifluorophoric mixtures. These 2 fluorescence techniques are conceptually different and provide certain advantages over each other. The manual analysis of such highly correlated large volume of EEMF and TSFS towards developing a calibration model is difficult. Partial least square (PLS) analysis can analyze the large volume of EEMF and TSFS data sets by finding important factors that maximize the correlation between the spectral and concentration information for each fluorophore. However, often the application of PLS analysis on entire data sets does not provide a robust calibration model and requires application of suitable pre-processing step. The present work evaluates the application of genetic algorithm (GA) analysis prior to PLS analysis on EEMF and TSFS data sets towards improving the precision and accuracy of the calibration model. The GA algorithm essentially combines the advantages provided by stochastic methods with those provided by deterministic approaches and can find the set of EEMF and TSFS variables that perfectly correlate well with the concentration of each of the fluorophores present in the multifluorophoric mixtures. The utility of the GA assisted PLS analysis is successfully validated using (i) EEMF data sets acquired for dilute aqueous mixture of four biomolecules and (ii) TSFS data sets acquired for dilute aqueous mixtures of four carcinogenic polycyclic aromatic hydrocarbons (PAHs) mixtures. In the present work, it is shown that by using the GA it is possible to significantly improve the accuracy and precision of the PLS calibration model developed for both EEMF and TSFS data set. Hence, GA must be considered as a useful pre-processing technique while developing an EEMF and TSFS calibration model.
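
    A toy version of GA-assisted variable selection ahead of PLS is sketched below: binary chromosomes select variables, and the cross-validated R² of a PLS model on the selected variables is the fitness. The data are simulated and the GA operators are deliberately minimal, so this only illustrates the idea, not the authors' implementation or their EEMF/TSFS data sets.

```python
# Toy genetic-algorithm variable selection ahead of PLS regression.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
n_samples, n_vars = 80, 60
X = rng.normal(size=(n_samples, n_vars))
y = X[:, 5] + 0.8 * X[:, 20] - 0.5 * X[:, 40] + 0.05 * rng.normal(size=n_samples)

def fitness(mask):
    if mask.sum() < 2:
        return -np.inf
    pls = PLSRegression(n_components=min(2, int(mask.sum())))
    return cross_val_score(pls, X[:, mask], y, cv=5, scoring="r2").mean()

pop_size, n_gen, p_mut = 30, 25, 0.02
pop = rng.random((pop_size, n_vars)) < 0.3         # boolean chromosomes

for _ in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]   # truncation selection
    children = []
    while len(children) < pop_size - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, n_vars)
        child = np.concatenate([a[:cut], b[cut:]])             # one-point crossover
        child ^= rng.random(n_vars) < p_mut                    # bit-flip mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected variables:", np.flatnonzero(best))
```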

  18. Halogenated Peptides as Internal Standards (H-PINS)

    PubMed Central

    Mirzaei, Hamid; Brusniak, Mi-Youn; Mueller, Lukas N.; Letarte, Simon; Watts, Julian D.; Aebersold, Ruedi

    2009-01-01

    As the application for quantitative proteomics in the life sciences has grown in recent years, so has the need for more robust and generally applicable methods for quality control and calibration. The reliability of quantitative proteomics is tightly linked to the reproducibility and stability of the analytical platforms, which are typically multicomponent (e.g. sample preparation, multistep separations, and mass spectrometry) with individual components contributing unequally to the overall system reproducibility. Variations in quantitative accuracy are thus inevitable, and quality control and calibration become essential for the assessment of the quality of the analyses themselves. Toward this end, the use of internal standards can not only assist in the detection and removal of outlier data acquired by an irreproducible system (quality control) but can also be used for detection of changes in instruments for their subsequent performance and calibration. Here we introduce a set of halogenated peptides as internal standards. The peptides are custom designed to have properties suitable for various quality control assessments, data calibration, and normalization processes. The unique isotope distribution of halogenated peptides makes their mass spectral detection easy and unambiguous when spiked into complex peptide mixtures. In addition, they were designed to elute sequentially over an entire aqueous to organic LC gradient and to have m/z values within the commonly scanned mass range (300–1800 Da). In a series of experiments in which these peptides were spiked into an enriched N-glycosite peptide fraction (i.e. from formerly N-glycosylated intact proteins in their deglycosylated form) isolated from human plasma, we show the utility and performance of these halogenated peptides for sample preparation and LC injection quality control as well as for retention time and mass calibration. Further use of the peptides for signal intensity normalization and retention time synchronization for selected reaction monitoring experiments is also demonstrated. PMID:19411281

  19. SU-E-T-223: Computed Radiography Dose Measurements of External Radiotherapy Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aberle, C; Kapsch, R

    2015-06-15

    Purpose: To obtain quantitative, two-dimensional dose measurements of external radiotherapy beams with a computed radiography (CR) system and to derive volume correction factors for ionization chambers in small fields. Methods: A commercial Kodak ACR2000i CR system with Kodak Flexible Phosphor Screen HR storage foils was used. Suitable measurement conditions and procedures were established. Several corrections were derived, including image fading, length-scale corrections and long-term stability corrections. Dose calibration curves were obtained for cobalt, 4 MV, 8 MV and 25 MV photons, and for 10 MeV, 15 MeV and 18 MeV electrons in a water phantom. Inherent measurement inhomogeneities were studied as well as directional dependence of the response. Finally, 2D scans with ionization chambers were directly compared to CR measurements, and volume correction factors were derived. Results: Dose calibration curves (0.01 Gy to 7 Gy) were obtained for multiple photon and electron beam qualities. For each beam quality, the calibration curves can be described by a single fit equation over the whole dose range. The energy dependence of the dose response was determined. The length scale on the images was adjusted scan-by-scan, typically by 2 percent horizontally and by 3 percent vertically. The remaining inhomogeneities after the system’s standard calibration procedure were corrected for. After correction, the homogeneity is on the order of a few percent. The storage foils can be rotated by up to 30 degrees without a significant effect on the measured signal. First results on the determination of volume correction factors were obtained. Conclusion: With CR, quantitative, two-dimensional dose measurements with a high spatial resolution (sub-mm) can be obtained over a large dose range. In order to make use of these advantages, several calibrations, corrections and supporting measurements are needed. This work was funded by the European Metrology Research Programme (EMRP) project HLT09 MetrExtRT Metrology for Radiotherapy using Complex Radiation Fields.

  20. A fast calibration method for 3-D tracking of ultrasound images using a spatial localizer.

    PubMed

    Pagoulatos, N; Haynor, D R; Kim, Y

    2001-09-01

    We have developed a fast calibration method for computing the position and orientation of 2-D ultrasound (US) images in 3-D space where a position sensor is mounted on the US probe. This calibration is required in the fields of 3-D ultrasound and registration of ultrasound with other imaging modalities. Most of the existing calibration methods require a complex and tedious experimental procedure. Our method is simple and it is based on a custom-built phantom. Thirty N-fiducials (markers in the shape of the letter "N") embedded in the phantom provide the basis for our calibration procedure. We calibrated a 3.5-MHz sector phased-array probe with a magnetic position sensor, and we studied the accuracy and precision of our method. A typical calibration procedure requires approximately 2 min. We conclude that we can achieve accurate and precise calibration using a single US image, provided that a large number (approximately ten) of N-fiducials are captured within the US image, enabling a representative sampling of the imaging plane.

  1. Self-calibration method for rotating laser positioning system using interscanning technology and ultrasonic ranging.

    PubMed

    Wu, Jun; Yu, Zhijing; Zhuge, Jingchang

    2016-04-01

    A rotating laser positioning system (RLPS) is an efficient measurement method for large-scale metrology. Due to multiple transmitter stations, which consist of a measurement network, the position relationship of these stations must be first calibrated. However, with such auxiliary devices such as a laser tracker, scale bar, and complex calibration process, the traditional calibration methods greatly reduce the measurement efficiency. This paper proposes a self-calibration method for RLPS, which can automatically obtain the position relationship. The method is implemented through interscanning technology by using a calibration bar mounted on the transmitter station. Each bar is composed of three RLPS receivers and one ultrasonic sensor whose coordinates are known in advance. The calibration algorithm is mainly based on multiplane and distance constraints and is introduced in detail through a two-station mathematical model. The repeated experiments demonstrate that the coordinate measurement uncertainty of spatial points by using this method is about 0.1 mm, and the accuracy experiments show that the average coordinate measurement deviation is about 0.3 mm compared with a laser tracker. The accuracy can meet the requirements of most applications, while the calibration efficiency is significantly improved.

  2. Biodiesel content determination in diesel fuel blends using near infrared (NIR) spectroscopy and support vector machines (SVM).

    PubMed

    Alves, Julio Cesar L; Poppi, Ronei J

    2013-01-30

    This work verifies the potential of support vector machine (SVM) algorithm applied to near infrared (NIR) spectroscopy data to develop multivariate calibration models for determination of biodiesel content in diesel fuel blends that are more effective and appropriate for analytical determinations of this type of fuel nowadays, providing the usual extended analytical range with required accuracy. Considering the difficulty to develop suitable models for this type of determination in an extended analytical range and that, in practice, biodiesel/diesel fuel blends are nowadays most often used between 0 and 30% (v/v) of biodiesel content, a calibration model is suggested for the range 0-35% (v/v) of biodiesel in diesel blends. The possibility of using a calibration model for the range 0-100% (v/v) of biodiesel in diesel fuel blends was also investigated and the difficulty in obtaining adequate results for this full analytical range is discussed. The SVM models are compared with those obtained with PLS models. The best result was obtained by the SVM model using the spectral region 4400-4600 cm(-1) providing the RMSEP value of 0.11% in 0-35% biodiesel content calibration model. This model provides the determination of biodiesel content in agreement with the accuracy required by ABNT NBR and ASTM reference methods and without interference due to the presence of vegetable oil in the mixture. The best SVM model fit performance for the relationship studied is also verified by providing similar prediction results with the use of 4400-6200 cm(-1) spectral range while the PLS results are much worse over this spectral region. Copyright © 2012 Elsevier B.V. All rights reserved.

  3. Comparison of Blackbody Sources for Low-Temperature IR Calibration

    NASA Astrophysics Data System (ADS)

    Ljungblad, S.; Holmsten, M.; Josefson, L. E.; Klason, P.

    2015-12-01

    Radiation thermometers are traditionally mostly used in high-temperature applications. They are, however, becoming more common in different applications at room temperature or below, in applications such as monitoring frozen food and evaluating heat leakage in buildings. To measure temperature accurately with a pyrometer, calibration is essential. A problem with traditional, commercially available, blackbody sources is that ice is often formed on the surface when measuring temperatures below 0°C. This is due to the humidity of the surrounding air and, as ice does not have the same emissivity as the blackbody source, it biases the measurements. An alternative to a traditional blackbody source has been tested by SP Technical Research Institute of Sweden. The objective is to find a cost-efficient method of calibrating pyrometers by comparison at the level of accuracy required for the intended use. A disc-shaped blackbody with a surface pyramid pattern is placed in a climatic chamber with an opening for field of view of the pyrometer. The temperature of the climatic chamber is measured with two platinum resistance thermometers in the air in the vicinity of the disc. As a rule, frost will form only if the deposition surface is colder than the surrounding air, and, as this is not the case when the air of the climatic chamber is cooled, there should be no frost or ice formed on the blackbody surface. To test the disc-shaped blackbody source, a blackbody cavity immersed in a conventional stirred liquid bath was used as a reference blackbody source. Two different pyrometers were calibrated by comparison using the two different blackbody sources, and the results were compared. The results of the measurements show that the disc works as intended and is suitable as a blackbody radiation source.

  4. The Geoscience Spaceborne Imaging Spectroscopy Technical Committees Calibration and Validation Workshop

    NASA Technical Reports Server (NTRS)

    Ong, Cindy; Mueller, Andreas; Thome, Kurtis; Pierce, Leland E.; Malthus, Timothy

    2016-01-01

    Calibration is the process of quantitatively defining a system's responses to known, controlled signal inputs, and validation is the process of assessing, by independent means, the quality of the data products derived from those system outputs [1]. Similar to other Earth observation (EO) sensors, the calibration and validation of spaceborne imaging spectroscopy sensors is a fundamental underpinning activity. Calibration and validation determine the quality and integrity of the data provided by spaceborne imaging spectroscopy sensors and have enormous downstream impacts on the accuracy and reliability of products generated from these sensors. At least five imaging spectroscopy satellites are planned to be launched within the next five years, with the two most advanced scheduled to be launched in the next two years [2]. The launch of these sensors requires the establishment of suitable, standardized, and harmonized calibration and validation strategies to ensure that high-quality data are acquired and comparable between these sensor systems. Such activities are extremely important for the community of imaging spectroscopy users. Recognizing the need to focus on this underpinning topic, the Geoscience Spaceborne Imaging Spectroscopy (previously, the International Spaceborne Imaging Spectroscopy) Technical Committee launched a calibration and validation initiative at the 2013 International Geoscience and Remote Sensing Symposium (IGARSS) in Melbourne, Australia, and a post-conference activity of a vicarious calibration field trip at Lake Lefroy in Western Australia.

  5. Analysis of calibration accuracy of cameras with different target sizes for large field of view

    NASA Astrophysics Data System (ADS)

    Zhang, Jin; Chai, Zhiwen; Long, Changyu; Deng, Huaxia; Ma, Mengchao; Zhong, Xiang; Yu, Huan

    2018-03-01

    Visual measurement plays an increasingly important role in aerospace, shipbuilding and machinery manufacturing, and camera calibration over a large field of view is a critical part of it. A large-scale calibration target is difficult to manufacture and its precision cannot be guaranteed, whereas a small target can be produced with high precision but yields only locally optimal solutions. It is therefore necessary to study the most suitable ratio of target size to camera field of view that still meets the calibration precision requirements of a wide field of view. In this paper, cameras are calibrated with a series of checkerboard and circular calibration targets of different dimensions, with target-size to field-of-view ratios of 9%, 18%, 27%, 36%, 45%, 54%, 63%, 72%, 81% and 90%. The target is placed at different positions in the camera field to obtain the camera parameters for each position. The distribution curves of the mean reprojection error of the reconstructed feature points are then analyzed for the different ratios. The experimental data demonstrate that the calibration precision improves as the ratio of target size to field of view increases, and that the mean reprojection error changes only slightly once the ratio exceeds 45%.
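
    The reprojection mean error analyzed here is a standard output of checkerboard-based calibration. The sketch below is a generic illustration, not the authors' pipeline: it assumes a hypothetical image folder, a 9x6 inner-corner board and a 25 mm square size, and uses OpenCV to estimate the intrinsics and report the mean reprojection error over all feature points.

```python
# Sketch: checkerboard calibration with OpenCV and the mean reprojection error
# over all feature points. The image folder, 9x6 inner-corner board and 25 mm
# square size are assumptions for illustration; this is not the authors' setup.
import glob
import numpy as np
import cv2

board_size = (9, 6)      # inner corners (columns, rows) -- assumed
square_size = 25.0       # mm -- assumed

# 3D object points of one board view (z = 0 plane)
objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square_size

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calib_images/*.png"):        # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, board_size)
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)
    img_points.append(corners)
    image_size = gray.shape[::-1]

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)

# Mean reprojection error: average distance between detected and reprojected corners
total_err, total_pts = 0.0, 0
for op, ip, rvec, tvec in zip(obj_points, img_points, rvecs, tvecs):
    proj, _ = cv2.projectPoints(op, rvec, tvec, K, dist)
    total_err += np.linalg.norm(ip.reshape(-1, 2) - proj.reshape(-1, 2), axis=1).sum()
    total_pts += len(op)
print("mean reprojection error [px]:", total_err / total_pts)
```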

  6. Coupling HYDRUS-1D Code with PA-DDS Algorithms for Inverse Calibration

    NASA Astrophysics Data System (ADS)

    Wang, Xiang; Asadzadeh, Masoud; Holländer, Hartmut

    2017-04-01

    Numerical modelling requires calibration before it can be used to predict future states. A standard approach is inverse calibration, in which multi-objective optimization algorithms are generally used to find a solution, e.g. an optimal set of van Genuchten-Mualem (VGM) parameters for predicting water fluxes in the vadose zone. We coupled HYDRUS-1D with PA-DDS to add a new, robust function for inverse calibration to the model. The PA-DDS method is a recently developed multi-objective optimization algorithm which combines Dynamically Dimensioned Search (DDS) and the Pareto Archived Evolution Strategy (PAES). The results were compared to a standard method (the Marquardt-Levenberg method) implemented in HYDRUS-1D. Calibration performance is evaluated using observed and simulated soil moisture at two soil layers in Southern Abbotsford, British Columbia, Canada, in terms of the root mean squared error (RMSE) and the Nash-Sutcliffe efficiency (NSE). Results showed low RMSE values of 0.014 and 0.017 and strong NSE values of 0.961 and 0.939. Compared with the Marquardt-Levenberg method, we obtained better calibration results for the deeper soil sensors, while the VGM parameters were similar to those reported in previous studies. Both methods are equally computationally efficient, and a direct implementation of PA-DDS into HYDRUS-1D should reduce the computational effort further. Thus, the PA-DDS method is efficient for calibrating recharge in complex vadose zone models with multiple soil layers and is a potential tool for calibrating heat and solute transport. Future work should focus on the effectiveness of PA-DDS for calibrating more complex versions of the model with complex vadose zone settings, with more soil layers, and against measured heat and solute transport. Keywords: Recharge, Calibration, HYDRUS-1D, Multi-objective Optimization
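
    As a point of reference for the two statistics reported above, the following minimal sketch (with placeholder soil-moisture series) computes RMSE and the Nash-Sutcliffe efficiency exactly as they are conventionally defined.

```python
# Sketch: RMSE and Nash-Sutcliffe efficiency between observed and simulated
# soil moisture, as used to evaluate the calibration above. Data are placeholders.
import numpy as np

def rmse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.sqrt(np.mean((sim - obs) ** 2))

def nse(obs, sim):
    """1 is a perfect fit; values <= 0 mean the model is no better than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

observed  = [0.21, 0.24, 0.28, 0.26, 0.22]   # hypothetical volumetric water contents
simulated = [0.20, 0.25, 0.27, 0.27, 0.23]
print(f"RMSE = {rmse(observed, simulated):.3f}, NSE = {nse(observed, simulated):.3f}")
```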

  7. Self-calibration of Cosmic Microwave Background Polarization Experiments

    NASA Astrophysics Data System (ADS)

    Keating, Brian G.; Shimon, Meir; Yadav, Amit P. S.

    2013-01-01

    Precision measurements of the polarization of the cosmic microwave background (CMB) radiation, especially experiments seeking to detect the odd-parity "B-modes," have far-reaching implications for cosmology. To detect the B-modes generated during inflation, the flux response and polarization angle of these experiments must be calibrated to exquisite precision. While suitable flux calibration sources abound, polarization angle calibrators are deficient in many respects. Man-made polarized sources are often not located in the antenna's far-field, have spectral properties that are radically different from the CMB's, are cumbersome to implement, and may be inherently unstable over the (long) duration these searches require to detect the faint signature of the inflationary epoch. Astrophysical sources suffer from time, frequency, and spatial variability, are not visible from all CMB observatories, and none are understood with sufficient accuracy to calibrate future CMB polarimeters seeking to probe inflationary energy scales of 10^15 GeV. Both man-made and astrophysical sources require dedicated observations which detract from the amount of integration time usable for detection of the inflationary B-modes. CMB TB and EB modes, expected to identically vanish in the standard cosmological model, can be used to calibrate CMB polarimeters. By enforcing the observed EB and TB power spectra to be consistent with zero, CMB polarimeters can be calibrated to levels not possible with man-made or astrophysical sources. All of this can be accomplished for any polarimeter without any loss of observing time using a calibration source which is spectrally identical to the CMB B-modes.
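
    For context, the self-calibration idea rests on the standard result that a global miscalibration of the polarization angle by ψ mixes E and B. A worked form of the relevant spectra, and the estimator obtained by forcing the observed EB spectrum to zero (assuming the primordial EB and TB spectra vanish), is sketched below.

```latex
% Effect of a global polarization-angle miscalibration \psi on the observed
% spectra, and the angle estimate obtained by nulling the observed EB spectrum
% (assuming the primordial C_l^{EB} and C_l^{TB} vanish).
\begin{align}
  C_\ell^{EB,\mathrm{obs}} &= \tfrac{1}{2}\,\sin(4\psi)\left(C_\ell^{EE}-C_\ell^{BB}\right)
                              + \cos(4\psi)\,C_\ell^{EB},\\
  C_\ell^{TB,\mathrm{obs}} &= \sin(2\psi)\,C_\ell^{TE} + \cos(2\psi)\,C_\ell^{TB},\\
  \hat{\psi} &\simeq \tfrac{1}{4}\arcsin\!\left(
      \frac{2\,C_\ell^{EB,\mathrm{obs}}}{C_\ell^{EE}-C_\ell^{BB}}\right)
      \quad\text{for } C_\ell^{EB}=C_\ell^{TB}=0 .
\end{align}
```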

  8. In-focal-plane characterization of excitation distribution for quantitative fluorescence microscopy applications

    NASA Astrophysics Data System (ADS)

    Dietrich, Klaus; Brülisauer, Martina; ćaǧin, Emine; Bertsch, Dietmar; Lüthi, Stefan; Heeb, Peter; Stärker, Ulrich; Bernard, André

    2017-06-01

    The applications of fluorescence microscopy span medical diagnostics, bioengineering and biomaterial analytics. Full exploitation of fluorescent microscopy is hampered by imperfections in illumination, detection and filtering. Mainly, errors stem from deviations induced by real-world components inducing spatial or angular variations of propagation properties along the optical path, and they can be addressed through consistent and accurate calibration. For many applications, uniform signal to noise ratio (SNR) over the imaging area is required. Homogeneous SNR can be achieved by quantifying and compensating for the signal bias. We present a method to quantitatively characterize novel reference materials as a calibration reference for biomaterials analytics. The reference materials under investigation comprise thin layers of fluorophores embedded in polymer matrices. These layers are highly homogeneous in their fluorescence response, where cumulative variations do not exceed 1% over the field of view (1.5 x 1.1 mm). An automated and reproducible measurement methodology, enabling sufficient correction for measurement artefacts, is reported. The measurement setup is equipped with an autofocus system, ensuring that the measured film quality is not artificially increased by out-of-focus reduction of the system modulation transfer function. The quantitative characterization method is suitable for analysis of modified bio-materials, especially through patterned protein decoration. The imaging method presented here can be used to statistically analyze protein patterns, thereby increasing both precision and throughput. Further, the method can be developed to include a reference emitter and detector pair on the image surface of the reference object, in order to provide traceable measurements.

  9. Forecasting the detectability of known radial velocity planets with the upcoming CHEOPS mission

    NASA Astrophysics Data System (ADS)

    Yi, Joo Sung; Chen, Jingjing; Kipping, David

    2018-04-01

    The CHaracterizing ExOPlanets Satellite (CHEOPS) mission is planned for launch next year with a major objective being to search for transits of known radial velocity (RV) planets, particularly those orbiting bright stars. Since the RV method is only sensitive to planetary mass, the radii, transit depths and transit signal-to-noise values of each RV planet are, a priori, unknown. Using an empirically calibrated probabilistic mass-radius relation, forecaster, we address this by predicting a catalogue of homogeneous credible intervals for these three key terms for 468 planets discovered via RVs. Of these, we find that the vast majority should be detectable with CHEOPS, including terrestrial bodies, if they have the correct geometric alignment. In particular, we predict that 22 mini-Neptunes and 82 Neptune-sized planets would be suitable for detection and that more than 80 per cent of these will have an apparent magnitude of V < 10, making them highly suitable for follow-up characterization work. Our work aims to assist the CHEOPS team in scheduling efforts and highlights the great value of quantifiable, statistically robust estimates for upcoming exoplanetary missions.

  10. Predicting glycogen concentration in the foot muscle of abalone using near infrared reflectance spectroscopy (NIRS).

    PubMed

    Fluckiger, Miriam; Brown, Malcolm R; Ward, Louise R; Moltschaniwskyj, Natalie A

    2011-06-15

    Near infrared reflectance spectroscopy (NIRS) was used to predict glycogen concentrations in the foot muscle of cultured abalone. NIR spectra of live, shucked and freeze-dried abalones were modelled against chemically measured glycogen data (range: 0.77-40.9% of dry weight (DW)) using partial least squares (PLS) regression. The calibration models were then used to predict glycogen concentrations of test abalone samples and model robustness was assessed from coefficient of determination of the validation (R2(val)) and standard error of prediction (SEP) values. The model for freeze-dried abalone gave the best prediction (R2(val) 0.97, SEP=1.71), making it suitable for quantifying glycogen. Models for live and shucked abalones had R2(val) of 0.86 and 0.90, and SEP of 3.46 and 3.07 respectively, making them suitable for producing estimations of glycogen concentration. As glycogen is a taste-active component associated with palatability in abalone, this study demonstrated the potential of NIRS as a rapid method to monitor the factors associated with abalone quality. Copyright © 2011 Elsevier Ltd. All rights reserved.
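
    The PLS calibration and the validation statistics described above can be reproduced generically as follows; this is a hedged sketch, not the authors' model, with placeholder data files and an assumed number of latent variables.

```python
# Sketch: a generic PLS calibration of NIR spectra against reference glycogen
# values, reporting R^2 of validation and the standard error of prediction (SEP).
# The data files and the number of latent variables are assumptions.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

X = np.load("nir_spectra.npy")        # hypothetical (n_samples, n_wavelengths) array
y = np.load("glycogen_pct_dw.npy")    # hypothetical reference glycogen (% dry weight)

X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=8)   # latent variables -- assumed
pls.fit(X_cal, y_cal)

y_pred = pls.predict(X_val).ravel()
print("R2(val) =", r2_score(y_val, y_pred))
print("SEP     =", np.std(y_val - y_pred, ddof=1))   # spread of prediction residuals
```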

  11. Volumetric calibration of a plenoptic camera.

    PubMed

    Hall, Elise Munz; Fahringer, Timothy W; Guildenbecher, Daniel R; Thurow, Brian S

    2018-02-01

    The volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods is examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.

  12. Novel crystal timing calibration method based on total variation

    NASA Astrophysics Data System (ADS)

    Yu, Xingjian; Isobe, Takashi; Watanabe, Mitsuo; Liu, Huafeng

    2016-11-01

    A novel crystal timing calibration method based on total variation (TV), abbreviated as ‘TV merge’, has been developed for a high-resolution positron emission tomography (PET) system. The proposed method was developed for a system with a large number of crystals and can provide timing calibration at the crystal level. In the proposed method, the timing calibration process was formulated as a linear problem. To robustly optimize the timing resolution, a TV constraint was added to the linear equation. Moreover, to solve the computer memory problem associated with the calculation of the timing calibration factors for systems with a large number of crystals, the merge component was used for obtaining the crystal-level timing calibration values. Compared with other conventional methods, the data measured from a standard cylindrical phantom filled with a radioisotope solution were sufficient for performing a high-precision crystal-level timing calibration. In this paper, both simulation and experimental studies were performed to demonstrate the effectiveness and robustness of the TV merge method. We compare the timing resolutions of a 22Na point source, located in the field of view (FOV) of the brain PET system, with various calibration techniques. After implementing the TV merge method, the timing resolution improved from 3.34 ns at full width at half maximum (FWHM) to 2.31 ns FWHM.

  13. BESTEST-EX | Buildings | NREL

    Science.gov Websites

    BESTEST-EX is a method for testing home energy audit software and associated calibration methods. When completed, the ANSI/RESNET SMOT will specify test procedures for evaluating calibration methods used in conjunction with predicting building energy use.

  14. Calibration and correction procedures for cosmic-ray neutron soil moisture probes located across Australia

    NASA Astrophysics Data System (ADS)

    Hawdon, Aaron; McJannet, David; Wallace, Jim

    2014-06-01

    The cosmic-ray probe (CRP) provides continuous estimates of soil moisture over an area of ~30 ha by counting fast neutrons produced from cosmic rays which are predominantly moderated by water molecules in the soil. This paper describes the setup, measurement correction procedures, and field calibration of CRPs at nine locations across Australia with contrasting soil type, climate, and land cover. These probes form the inaugural Australian CRP network, which is known as CosmOz. CRP measurements require neutron count rates to be corrected for effects of atmospheric pressure, water vapor pressure changes, and variations in incoming neutron intensity. We assess the magnitude and importance of these corrections and present standardized approaches for network-wide analysis. In particular, we present a new approach to correct for incoming neutron intensity variations and test its performance against existing procedures used in other studies. Our field calibration results indicate that a generalized calibration function for relating neutron counts to soil moisture is suitable for all soil types, with the possible exception of very sandy soils with low water content. Using multiple calibration data sets, we demonstrate that the generalized calibration function only applies after accounting for persistent sources of hydrogen in the soil profile. Finally, we demonstrate that by following standardized correction procedures and scaling neutron counting rates of all CRPs to a single reference location, differences in calibrations between sites are related to site biomass. This observation provides a means for estimating biomass at a given location or for deriving coefficients for the calibration function in the absence of field calibration data.
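
    A minimal sketch of the kind of processing described above follows; the Desilets-type shape of the calibration function and the correction coefficients are commonly quoted values, assumed here for illustration, and N0 would come from the field calibration.

```python
# Sketch: the Desilets-type calibration function commonly used to convert
# corrected neutron counts N to soil moisture, plus a simple barometric-pressure
# correction. The coefficients, beta and N0 are assumed values for illustration.
import numpy as np

a0, a1, a2 = 0.0808, 0.372, 0.115          # commonly quoted shape coefficients

def pressure_correction(N_raw, p_hpa, p_ref_hpa=1013.25, beta=0.0076):
    """Scale raw counts to a reference pressure (beta in 1/hPa -- assumed)."""
    return N_raw * np.exp(beta * (p_hpa - p_ref_hpa))

def soil_moisture(N_corrected, N0):
    """Water content from corrected counts; N0 comes from field calibration."""
    return a0 / (N_corrected / N0 - a1) - a2

N = pressure_correction(2450.0, 996.0)     # hypothetical hourly count and pressure
print(soil_moisture(N, N0=3200.0))
```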

  15. High-accuracy self-calibration method for dual-axis rotation-modulating RLG-INS

    NASA Astrophysics Data System (ADS)

    Wei, Guo; Gao, Chunfeng; Wang, Qi; Wang, Qun; Long, Xingwu

    2017-05-01

    The inertial navigation system is the core component of both military and civil navigation systems. Dual-axis rotation modulation can completely eliminate the constant errors of the inertial elements on the three axes and thereby improve system accuracy, but the errors caused by misalignment angles and scale factor error cannot be eliminated through dual-axis rotation modulation. Moreover, discrete calibration methods cannot fulfill the requirements for high-accuracy calibration of a mechanically dithered ring laser gyroscope navigation system with shock absorbers. This paper analyzes the effect of calibration error during one modulation period and presents a new systematic self-calibration method for a dual-axis rotation-modulating RLG-INS, together with a designed self-calibration procedure. The results of the self-calibration simulation experiment prove that this scheme can estimate all the errors in the calibration error model: the calibration precision of the inertial sensors' scale factor error is less than 1 ppm and the misalignment is less than 5″. These results validate the systematic self-calibration method and prove its importance for the accuracy improvement of a dual-axis rotation inertial navigation system with mechanically dithered ring laser gyroscopes.

  16. Uncertainty propagation in the calibration equations for NTC thermistors

    NASA Astrophysics Data System (ADS)

    Liu, Guang; Guo, Liang; Liu, Chunlong; Wu, Qingwen

    2018-06-01

    The uncertainty propagation problem is quite important for temperature measurements, since we rely so much on the sensors and calibration equations. Although uncertainty propagation for platinum resistance or radiation thermometers is well known, there have been few publications concerning negative temperature coefficient (NTC) thermistors. Insight into the propagation characteristics of uncertainty that develop when equations are determined using the Lagrange interpolation or least-squares fitting method is presented here with respect to several of the most common equations used in NTC thermistor calibration. Within this work, analytical expressions of the propagated uncertainties for both fitting methods are derived for the uncertainties in the measured temperature and resistance at each calibration point. High-precision calibration of an NTC thermistor in a precision water bath was performed by means of the comparison method. Results show that, for both fitting methods, the propagated uncertainty is flat in the interpolation region but rises rapidly beyond the calibration range. Also, for temperatures interpolated between calibration points, the propagated uncertainty is generally no greater than that associated with the calibration points. For least-squares fitting, the propagated uncertainty is significantly reduced by increasing the number of calibration points and can be well kept below the uncertainty of the calibration points.
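
    As an illustration of the kind of calibration equation under discussion, the sketch below fits the common Steinhart-Hart form to hypothetical calibration points by linear least squares and interpolates between them; the record's propagated-uncertainty analysis is not reproduced.

```python
# Sketch: least-squares fit of the common Steinhart-Hart calibration equation
# 1/T = A + B*ln(R) + C*(ln R)^3 to hypothetical NTC calibration points, then
# interpolation within the calibrated range.
import numpy as np

# Placeholder calibration data: resistance [ohm] at known temperatures [K]
R = np.array([32650.0, 19900.0, 12490.0, 8057.0, 5327.0])
T = np.array([273.15, 283.15, 293.15, 303.15, 313.15])

lnR = np.log(R)
design = np.column_stack([np.ones_like(lnR), lnR, lnR**3])
coef, *_ = np.linalg.lstsq(design, 1.0 / T, rcond=None)    # [A, B, C]

def temperature(R_meas):
    """Temperature in kelvin from measured resistance via the fitted equation."""
    x = np.log(R_meas)
    return 1.0 / (coef[0] + coef[1] * x + coef[2] * x**3)

print(temperature(10000.0))   # interpolated value inside the calibration range
```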

  17. Authentic Practices as Contexts for Learning to Draw Inferences beyond Correlated Data

    ERIC Educational Resources Information Center

    Dierdorp, Adri; Bakker, Arthur; Eijkelhof, Harrie; van Maanen, Jan

    2011-01-01

    To support 11th-grade students' informal inferential reasoning, a teaching and learning strategy was designed based on authentic practices in which professionals use correlation or linear regression. These practices included identifying suitable physical training programmes, dyke monitoring, and the calibration of measurement instruments. The…

  18. Effect of the Scattered Radiation in Air and Two Types of Slab Phantom (PMMA and the ISO Water Phantom) for Personal Dosimeter Calibration

    NASA Astrophysics Data System (ADS)

    Kamwang, N.; Rungseesumran, T.; Saengchantr, D.; Monthonwattana, S.; Pungkun, V.

    2017-06-01

    The calibration of personal dosimeters to determine the personal dose equivalent, Hp(d), requires the dosimeter to be placed on a suitable phantom in order to provide a reasonable approximation to the backscattering properties of the relevant part of the body. A dosimeter worn on the trunk is usually calibrated on the slab phantom recommended in ICRU 47, a 30 cm (w) x 30 cm (h) x 15 cm (t) PMMA slab, to achieve uniformity in calibration procedures. The International Organization for Standardization, in ISO 4037-3, instead proposes the ISO water slab phantom, which has the same dimensions but PMMA walls of different thickness (2.5 mm front wall, 10 mm other walls) and is filled with water. However, some laboratories still calibrate personal dosimeters in air in terms of the ambient dose equivalent, H*(d). This work studies the effect of the scattered radiation from these two slab phantoms and from air when calibrating two types of OSL dosimeters (XA and LA) and electronic personal dosimeters. X-ray and Cs-137 radiation fields with energies ranging from 33 to 662 keV were used. The results of this study are discussed.

  19. A New Online Calibration Method Based on Lord's Bias-Correction.

    PubMed

    He, Yinhong; Chen, Ping; Li, Yong; Zhang, Shumei

    2017-09-01

    Online calibration techniques have been widely employed to calibrate new items due to their advantages. Method A is the simplest online calibration method and has recently attracted much attention from researchers. However, a key assumption of Method A is that it treats the person-parameter estimates θ̂s (obtained by maximum likelihood estimation [MLE]) as their true values θs; thus, the deviation of the estimated θ̂s from their true values might yield inaccurate item calibration when the deviation is nonignorable. To improve the performance of Method A, a new method, MLE-LBCI-Method A, is proposed. This new method combines a modified Lord's bias-correction method (named maximum likelihood estimation-Lord's bias-correction with iteration [MLE-LBCI]) with the original Method A in an effort to correct the deviation of θ̂s, which may adversely affect the item calibration precision. Two simulation studies were carried out to explore the performance of both MLE-LBCI and MLE-LBCI-Method A under several scenarios. Simulation results showed that MLE-LBCI could make a significant improvement over the ML ability estimates, and MLE-LBCI-Method A did outperform Method A in almost all experimental conditions.

  20. Simultaneous Calibration: A Joint Optimization Approach for Multiple Kinect and External Cameras.

    PubMed

    Liao, Yajie; Sun, Ying; Li, Gongfa; Kong, Jianyi; Jiang, Guozhang; Jiang, Du; Cai, Haibin; Ju, Zhaojie; Yu, Hui; Liu, Honghai

    2017-06-24

    Camera calibration is a crucial problem in many applications, such as 3D reconstruction, structure from motion, object tracking and face alignment. Numerous methods have been proposed to solve the above problem with good performance in the last few decades. However, few methods are targeted at joint calibration of multi-sensors (more than four devices), which normally is a practical issue in the real-time systems. In this paper, we propose a novel method and a corresponding workflow framework to simultaneously calibrate relative poses of a Kinect and three external cameras. By optimizing the final cost function and adding corresponding weights to the external cameras in different locations, an effective joint calibration of multiple devices is constructed. Furthermore, the method is tested in a practical platform, and experiment results show that the proposed joint calibration method can achieve a satisfactory performance in a project real-time system and its accuracy is higher than the manufacturer's calibration.

  2. Calibration procedure of Hukseflux SR25 to Establish the Diffuse Reference for the Outdoor Broadband Radiometer Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reda, Ibrahim M.; Andreas, Afshin M.

    2017-08-01

    Accurate pyranometer calibrations, traceable to internationally recognized standards, are critical for solar irradiance measurements. One calibration method is the component summation method, where the pyranometers are calibrated outdoors under clear sky conditions, and the reference global solar irradiance is calculated as the sum of two reference components, the diffuse horizontal and subtended beam solar irradiances. The beam component is measured with pyrheliometers traceable to the World Radiometric Reference, while there is no internationally recognized reference for the diffuse component. In the absence of such a reference, we present a method to consistently calibrate pyranometers for measuring the diffuse component. The method is based on using a modified shade/unshade method and a pyranometer with less than 0.5 W/m2 thermal offset. The calibration result shows that the responsivity of the Hukseflux SR25 pyranometer equals 10.98 uV/(W/m2) with +/-0.86 percent uncertainty.
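
    The component-summation relation described above reduces to a one-line calculation; the sketch below uses placeholder readings and is only meant to show how the reference irradiance and the responsivity are combined.

```python
# Sketch: component-summation reference irradiance and the resulting pyranometer
# responsivity. All numerical values are placeholders.
import math

def responsivity(V_uV, dhi_wm2, dni_wm2, zenith_deg):
    """Responsivity in uV/(W/m^2): output voltage over the component-sum reference."""
    ghi_ref = dhi_wm2 + dni_wm2 * math.cos(math.radians(zenith_deg))
    return V_uV / ghi_ref

print(responsivity(V_uV=8950.0, dhi_wm2=95.0, dni_wm2=910.0, zenith_deg=45.0))
```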

  3. Radiation calibration for LWIR Hyperspectral Imager Spectrometer

    NASA Astrophysics Data System (ADS)

    Yang, Zhixiong; Yu, Chunchao; Zheng, Wei-jian; Lei, Zhenggang; Yan, Min; Yuan, Xiaochun; Zhang, Peizhong

    2014-11-01

    The radiometric calibration of a LWIR hyperspectral imaging spectrometer is presented. A LWIR interferometric hyperspectral imaging spectrometer prototype (CHIPED-I) was developed in the laboratory to study radiometric calibration. Two-point linear calibration of the spectrometer was carried out using blackbody sources. First, the measured relative intensity is converted to the absolute radiance of the object; then, the radiance of the object is converted to a brightness temperature spectrum by the brightness temperature method. The results indicate that this radiometric calibration method performs well.
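
    A generic two-point linear calibration of the kind described above can be sketched as follows; the signal levels and blackbody radiances are placeholders, and per-pixel calibration works the same way with arrays of full detector size.

```python
# Sketch: generic two-point linear radiometric calibration. Gain and offset per
# spectral sample are derived from hot/cold blackbody views and applied to scene
# data; the signal levels and radiances below are placeholders.
import numpy as np

def two_point_calibration(S_cold, S_hot, L_cold, L_hot):
    """Per-sample gain and offset mapping raw signal S to radiance L."""
    gain = (L_hot - L_cold) / (S_hot - S_cold)
    offset = L_cold - gain * S_cold
    return gain, offset

S_cold, S_hot = np.array([1020.0, 1105.0]), np.array([2380.0, 2505.0])   # raw counts
L_cold, L_hot = np.array([4.1, 4.3]), np.array([9.8, 10.2])              # radiances (assumed units)

gain, offset = two_point_calibration(S_cold, S_hot, L_cold, L_hot)
scene = np.array([1800.0, 1950.0])
print(gain * scene + offset)    # calibrated scene radiance
```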

  4. A proposed standard method for polarimetric calibration and calibration verification

    NASA Astrophysics Data System (ADS)

    Persons, Christopher M.; Jones, Michael W.; Farlow, Craig A.; Morell, L. Denise; Gulley, Michael G.; Spradley, Kevin D.

    2007-09-01

    Accurate calibration of polarimetric sensors is critical to reducing and analyzing phenomenology data, producing uniform polarimetric imagery for deployable sensors, and ensuring predictable performance of polarimetric algorithms. It is desirable to develop a standard calibration method, including verification reporting, in order to increase credibility with customers and foster communication and understanding within the polarimetric community. This paper seeks to facilitate discussions within the community on arriving at such standards. Both the calibration and verification methods presented here are performed easily with common polarimetric equipment, and are applicable to visible and infrared systems with either partial Stokes or full Stokes sensitivity. The calibration procedure has been used on infrared and visible polarimetric imagers over a six year period, and resulting imagery has been presented previously at conferences and workshops. The proposed calibration method involves the familiar calculation of the polarimetric data reduction matrix by measuring the polarimeter's response to a set of input Stokes vectors. With this method, however, linear combinations of Stokes vectors are used to generate highly accurate input states. This allows the direct measurement of all system effects, in contrast with fitting modeled calibration parameters to measured data. This direct measurement of the data reduction matrix allows higher order effects that are difficult to model to be discovered and corrected for in calibration. This paper begins with a detailed tutorial on the proposed calibration and verification reporting methods. Example results are then presented for a LWIR rotating half-wave retarder polarimeter.
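
    As a simplified illustration of the data reduction matrix measurement described above (not the authors' exact procedure, which uses linear combinations of Stokes vectors), the sketch below assumes a small set of known input states and placeholder detector responses and recovers the Stokes vector of a new measurement by a pseudo-inverse.

```python
# Sketch: estimating a data reduction matrix from responses to a small set of
# known input Stokes states, then inverting a new measurement. The input states
# and detector readings are placeholders, not the calibration set of the paper.
import numpy as np

# Known input Stokes vectors as columns: unpolarized, 0 deg, 45 deg, circular
S_in = np.array([[1.0, 1.0, 1.0, 1.0],
                 [0.0, 1.0, 0.0, 0.0],
                 [0.0, 0.0, 1.0, 0.0],
                 [0.0, 0.0, 0.0, 1.0]])

# Measured outputs: one column per input state, one row per analyzer setting
P = np.array([[0.50, 0.95, 0.52, 0.49],
              [0.50, 0.05, 0.51, 0.50],
              [0.50, 0.52, 0.97, 0.48],
              [0.50, 0.49, 0.51, 0.93]])

W = P @ np.linalg.pinv(S_in)        # measurement (system) matrix: P = W S
W_plus = np.linalg.pinv(W)          # data reduction matrix

p_new = np.array([0.50, 0.70, 0.60, 0.55])   # hypothetical new measurement
print(W_plus @ p_new)                         # recovered Stokes vector
```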

  5. Minimizing Gravity Sag of a Large Mirror with an Inverted Hindle-Mount

    NASA Technical Reports Server (NTRS)

    Robinson, David W.; Powers, Edward I. (Technical Monitor)

    2000-01-01

    A method of minimizing the optical distortion from gravity sag on a suspended large autocollimating flat mirror has been devised. This method consists of an inverted nine-point Hindle-Mount. A conventional Hindle-mount is located underneath a sky-viewing mirror and is primarily under compression loads from the weight of the mirror. It is not suitable for the situation where the mirror is viewing the ground, since a mirror would tend to fall out of the mount when in an inverted position. The inverted Hindle-Mount design consists of bonded joints on the backside of the mirror that allow the mirror to be held or suspended above an object to be viewed. This ability is useful in optical setups such as a calibration test where a flat mirror is located above a telescope so that the telescope may view a known optic.

  6. [Determination of aristolochic acid A in Guanxinsuhe preparations by RP-HPLC].

    PubMed

    Li, Lin; Gao, Hui-Min; Wang, Zhi-Min; Wang, Wei-Hao

    2006-01-01

    To establish an RP-HPLC method for the determination of aristolochic acid A in Guanxinsuhe preparations. The instrument used was a Hewlett-Packard 1100 HPLC with an Alltech C18 column (4.6 mm x 250 mm, 5 microm). The mobile phase was methanol-water-acetic acid (68:32:1) and the flow rate was 1.0 mL x min(-1). The UV detection wavelength was 390 nm and the column temperature was 35 degrees C. The extraction solvent for the preparations was a methanol solution containing 10% formic acid. The calibration curve was linear (r = 0.9999) within the range of 0.119-1.89 microg for aristolochic acid A. The average recovery was 99.0% with an RSD of 0.63%. The method showed a good linear relationship and was convenient, quick, accurate, and suitable for the quality control of aristolochic acid A in Guanxinsuhe and other traditional Chinese medicines containing aristolochic acid A.

  7. A comparative study of the use of powder X-ray diffraction, Raman and near infrared spectroscopy for quantification of binary polymorphic mixtures of piracetam.

    PubMed

    Croker, Denise M; Hennigan, Michelle C; Maher, Anthony; Hu, Yun; Ryder, Alan G; Hodnett, Benjamin K

    2012-04-07

    Diffraction and spectroscopic methods were evaluated for quantitative analysis of binary powder mixtures of FII(6.403) and FIII(6.525) piracetam. The two polymorphs of piracetam could be distinguished using powder X-ray diffraction (PXRD), Raman and near-infrared (NIR) spectroscopy. The results demonstrated that Raman and NIR spectroscopy are the most suitable for quantitative analysis of this polymorphic mixture. When the spectra are treated with a combination of multiplicative scatter correction (MSC) and second-derivative data pretreatments, the partial least squares (PLS) regression models gave root mean square errors of calibration (RMSEC) of 0.94% and 0.99%, respectively. FIII(6.525) demonstrated some preferred orientation in PXRD analysis, making PXRD the least preferred method of quantification. Copyright © 2012 Elsevier B.V. All rights reserved.
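
    The MSC and second-derivative pretreatments named above can be sketched generically as follows; the synthetic spectra, window length and polynomial order are assumptions for illustration only.

```python
# Sketch: multiplicative scatter correction against the mean spectrum followed
# by a Savitzky-Golay second derivative, the pretreatment combination named
# above. The synthetic spectra, window length and polynomial order are assumed.
import numpy as np
from scipy.signal import savgol_filter

def msc(spectra):
    """Regress each spectrum on the mean spectrum and remove offset and slope."""
    ref = spectra.mean(axis=0)
    out = np.empty_like(spectra)
    for i, s in enumerate(spectra):
        slope, intercept = np.polyfit(ref, s, 1)
        out[i] = (s - intercept) / slope
    return out

rng = np.random.default_rng(0)
wavelengths = np.linspace(1100, 2500, 700)                    # nm, placeholder grid
band = np.exp(-((wavelengths - 1900.0) / 120.0) ** 2)         # synthetic absorption band
spectra = (rng.uniform(0.8, 1.2, (20, 1)) * band              # multiplicative scatter
           + rng.uniform(-0.05, 0.05, (20, 1))                # additive offset
           + rng.normal(0.0, 0.002, (20, 700)))               # noise

pretreated = savgol_filter(msc(spectra), window_length=15, polyorder=2, deriv=2, axis=1)
print(pretreated.shape)   # ready for PLS regression
```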

  8. Simultaneous HPLC analysis of pseudoephedrine hydrochloride, codeine phosphate, and triprolidine hydrochloride in liquid dosage forms.

    PubMed

    Manassra, Adnan; Khamis, Mustafa; El-Dakiky, Magdy; Abdel-Qader, Zuhair; Al-Rimawi, Fuad

    2010-03-11

    An HPLC method using UV detection is proposed for the simultaneous determination of pseudoephedrine hydrochloride, codeine phosphate, and triprolidine hydrochloride in liquid formulation. A C18 column (250 mm x 4.0 mm) is used as the stationary phase with a mixture of methanol:acetate buffer:acetonitrile (85:5:10, v/v) as the mobile phase. The factors affecting column separation of the analytes were studied. The calibration graphs exhibited a linear concentration range of 0.06-1.0 mg/ml for pseudoephedrine hydrochloride, 0.02-1.0 mg/ml for codeine phosphate, and 0.0025-1.0 mg/ml for triprolidine hydrochloride for a sample size of 5 microl, with correlation coefficients better than 0.999 for all active ingredients studied. The results demonstrate that this method is reliable, reproducible and suitable for routine use, with an analysis time of less than 4 min. Copyright 2009 Elsevier B.V. All rights reserved.

  9. Hydraulic modeling of mussel habitat at a bridge-replacement site, Allegheny River, Pennsylvania, USA

    USGS Publications Warehouse

    Fulton, John W.; Wagner, Chad R.; Rogers, Megan E.; Zimmerman, Gregory F.

    2010-01-01

    Based on the statistical targets established, the hydraulic model results suggest that an additional 2428 m2 or a 30-percent increase in suitable mussel habitat could be generated at the replacement-bridge site when compared to the baseline condition associated with the existing bridge at that same location. The study did not address the influences of substrate, acid mine drainage, sediment loads from tributaries, and surface-water/ground-water exchange on mussel habitat. Future studies could include methods for quantifying (1) channel–substrate composition and distribution using tools such as hydroacoustic echosounders specifically designed and calibrated to identify bed composition and mussel populations, (2) surface-water and ground-water interactions, and (3) a high-streamflow event.

  10. A GPU-Based Implementation of the Firefly Algorithm for Variable Selection in Multivariate Calibration Problems

    PubMed Central

    de Paula, Lauro C. M.; Soares, Anderson S.; de Lima, Telma W.; Delbem, Alexandre C. B.; Coelho, Clarimar J.; Filho, Arlindo R. G.

    2014-01-01

    Several variable selection algorithms in multivariate calibration can be accelerated using Graphics Processing Units (GPU). Among these algorithms, the Firefly Algorithm (FA) is a recently proposed metaheuristic that may be used for variable selection. This paper presents a GPU-based FA (FA-MLR) with multiobjective formulation for variable selection in multivariate calibration problems and compares it with some traditional sequential algorithms in the literature. The advantage of the proposed implementation is demonstrated in an example involving a relatively large number of variables. The results showed that FA-MLR, in comparison with the traditional algorithms, is a more suitable choice and a relevant contribution for the variable selection problem. Additionally, the results also demonstrated that FA-MLR performed on a GPU can be five times faster than its sequential implementation. PMID:25493625

  12. Cider fermentation process monitoring by Vis-NIR sensor system and chemometrics.

    PubMed

    Villar, Alberto; Vadillo, Julen; Santos, Jose I; Gorritxategi, Eneko; Mabe, Jon; Arnaiz, Aitor; Fernández, Luis A

    2017-04-15

    Optimization of a multivariate calibration process has been undertaken for a Visible-Near Infrared (400-1100 nm) sensor system applied to the monitoring of the fermentation process of the cider produced in the Basque Country (Spain). The main parameters that were monitored included alcoholic proof, l-lactic acid content, glucose+fructose and acetic acid content. The multivariate calibration was carried out using a combination of different variable selection techniques, and the most suitable pre-processing strategies were selected based on the spectral characteristics obtained by the sensor system. The variable selection techniques studied in this work include the Martens uncertainty test, interval Partial Least Squares regression (iPLS) and a Genetic Algorithm (GA). This procedure arises from the need to improve the calibration models' prediction ability for cider monitoring. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Custom-oriented wavefront sensor for human eye properties measurements

    NASA Astrophysics Data System (ADS)

    Galetskiy, Sergey; Letfullin, Renat; Dubinin, Alex; Cherezova, Tatyana; Belyakov, Alexey; Kudryashov, Alexis

    2005-12-01

    The problem of correctly measuring human eye aberrations is very important given the increasingly widespread use of LASIK (laser-assisted in situ keratomileusis), a surgical procedure for reducing refractive error in the eye. In this paper we show the capabilities of an aberrometer built in our lab together with Active Optics Ltd to measure such aberrations. We discuss the calibration of the aberrometer and show that the analytical equation based on the thin-lens formula is not valid for ophthalmic calibration purposes. We show that a proper analytical equation suitable for calibration should depend on the square of the distance increment, and we illustrate this both by experiment and by Zemax ray-tracing modelling. The error caused by the inhomogeneous intensity distribution of the beam imaged onto the aberrometer's Shack-Hartmann sensor is also discussed.

  14. The Impact of Indoor and Outdoor Radiometer Calibration on Solar Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, Aron; Sengupta, Manajit; Andreas, Afshin

    2016-06-02

    This study addresses the effect of calibration methodologies on calibration responsivities and the resulting impact on radiometric measurements. The calibration responsivities used in this study are provided by NREL's broadband outdoor radiometer calibration (BORCAL) and a few prominent manufacturers. The BORCAL method provides outdoor calibration responsivity of pyranometers and pyrheliometers at a 45 degree solar zenith angle and responsivity as a function of solar zenith angle determined by clear-sky comparisons to reference irradiance. The BORCAL method also employs a thermal offset correction to the calibration responsivity of single-black thermopile detectors used in pyranometers. Indoor calibrations of radiometers by their manufacturers are performed using a stable artificial light source in a side-by-side comparison of the test radiometer under calibration to a reference radiometer of the same type. These different methods of calibration demonstrated 1 percent to 2 percent differences in solar irradiance measurement. Analyzing these values will ultimately enable a reduction in radiometric measurement uncertainties and assist in developing consensus on a standard for calibration.

  15. An Improved Approach for RSSI-Based only Calibration-Free Real-Time Indoor Localization on IEEE 802.11 and 802.15.4 Wireless Networks.

    PubMed

    Passafiume, Marco; Maddio, Stefano; Cidronali, Alessandro

    2017-03-29

    Assuming a reliable and responsive spatial contextualization service is a must-have in IEEE 802.11 and 802.15.4 wireless networks, a suitable approach consists of the implementation of localization capabilities as an additional application layer to the communication protocol stack. Considering the applicative scenario where satellite-based positioning applications are denied, such as indoor environments, and excluding data packet arrival time measurements due to lack of time resolution, received signal strength indicator (RSSI) measurements, obtained according to IEEE 802.11 and 802.15.4 data access technologies, are the unique data sources suitable for indoor geo-referencing using COTS devices. In the existing literature, many RSSI-based localization systems are introduced and experimentally validated; nevertheless, they require periodic calibrations and significant information fusion from different sensors, which dramatically decreases overall system reliability and effective availability. This motivates the work presented in this paper, which introduces an approach for RSSI-based, calibration-free and real-time indoor localization. While switched-beam array-based hardware (compliant with IEEE 802.15.4 router functionality) has already been presented by the authors, the focus of this paper is the creation of an algorithmic layer for use with the pre-existing hardware, capable of enabling full localization and data contextualization over a standard 802.15.4 wireless sensor network using only RSSI information, without the need for a lengthy offline calibration phase. System validation reports the localization results in a typical indoor site, where the system has shown high accuracy, leading to a sub-metre overall mean error and almost 100% site coverage within 1 m localization error.

  17. A game-theoretic approach for calibration of low-cost magnetometers under noise uncertainty

    NASA Astrophysics Data System (ADS)

    Siddharth, S.; Ali, A. S.; El-Sheimy, N.; Goodall, C. L.; Syed, Z. F.

    2012-02-01

    Pedestrian heading estimation is a fundamental challenge in Global Navigation Satellite System (GNSS)-denied environments. Additionally, heading observability degrades considerably in low-speed modes of operation (e.g. walking), making this problem even more challenging. The goal of this work is to improve the heading solution when hand-held personal/portable devices, such as cell phones, are used for positioning, and to improve heading estimation in GNSS-denied signal environments. Most smart phones are now equipped with self-contained, low-cost, small-size and power-efficient sensors, such as magnetometers, gyroscopes and accelerometers. A magnetometer needs calibration before it can be properly employed for navigation purposes. Magnetometers play an important role in absolute heading estimation and are embedded in many smart phones. Before the users navigate with the phone, a calibration is invoked to ensure an improved signal quality; this signal is used later in the heading estimation. In most magnetometer-calibration approaches, the motion modes needed to achieve a robust calibration are seldom described, and suitable calibration approaches often fail to discuss the stopping criteria for calibration. In this paper, the following three topics, which are important for achieving proper magnetometer-calibration results and, in turn, the most robust heading solution while accounting for device misalignment with respect to the user, are discussed in detail: (a) game-theoretic concepts to attain better filter parameter tuning and robustness under noise uncertainty, (b) the best maneuvers, with a focus on 3D and 2D motion modes and related challenges, and (c) investigation of the calibration termination criteria, leveraging calibration robustness and efficiency.
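
    The paper's game-theoretic tuning is not reproduced here; as background for what a magnetometer calibration routine produces, the sketch below shows a standard least-squares fit of an axis-aligned ellipsoid to raw samples, yielding hard-iron offsets and per-axis scale factors. The input file and the restriction to an axis-aligned model are assumptions.

```python
# Sketch: a standard least-squares fit of an axis-aligned ellipsoid to raw
# magnetometer samples, giving hard-iron offsets and per-axis scale factors.
# This is background for what a calibration routine produces, not the paper's
# game-theoretic method; the input file is a placeholder.
import numpy as np

def calibrate_magnetometer(raw):
    x, y, z = raw[:, 0], raw[:, 1], raw[:, 2]
    # Fit a1*x^2 + a2*y^2 + a3*z^2 + a4*x + a5*y + a6*z = 1 in least squares
    D = np.column_stack([x**2, y**2, z**2, x, y, z])
    a = np.linalg.lstsq(D, np.ones(len(raw)), rcond=None)[0]
    offsets = -a[3:6] / (2.0 * a[0:3])                 # hard-iron bias
    rhs = 1.0 + np.sum(a[0:3] * offsets**2)
    semi_axes = np.sqrt(rhs / a[0:3])
    scales = semi_axes.mean() / semi_axes              # normalize field to a sphere
    return offsets, scales

raw = np.load("mag_samples.npy")        # hypothetical (n, 3) samples from varied orientations
offsets, scales = calibrate_magnetometer(raw)
corrected = (raw - offsets) * scales    # calibrated magnetic field vectors
```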

  18. Development and Validation of a Reversed Phase HPLC Method for Determination of Anacardic Acids in Cashew (Anacardium occidentale) Nut Shell Liquid.

    PubMed

    Oiram Filho, Francisco; Alcântra, Daniel Barbosa; Rodrigues, Tigressa Helena Soares; Alexandre E Silva, Lorena Mara; de Oliveira Silva, Ebenezer; Zocolo, Guilherme Julião; de Brito, Edy Sousa

    2018-04-01

    Cashew nut shell liquid (CNSL) contains phenolic lipids with aliphatic chains that are of commercial interest. In this work, a chromatographic method was developed to monitor and quantify anacardic acids (AnAc) in CNSL. Samples containing AnAc were analyzed on a high-performance liquid chromatograph coupled to a diode array detector, equipped with a reversed phase C18 (150 × 4.6 mm × 5 μm) column using acetonitrile and water as the mobile phase both acidified with acetic acid to pH 3.0 in an isocratic mode (80:20:1). The chromatographic method showed adequate selectivity, as it could clearly separate the different AnAc. To validate this method, AnAc triene was used as an external standard at seven different concentrations varying from 50 to 1,000 μg mL-1. The Student's t-test and F-test were applied to ensure high confidence for the obtained data from the analytical calibration curve. The results were satisfactory with respect to intra-day (relative standard deviation (RSD) = 0.60%) and inter-day (RSD = 0.67%) precision, linearity (y = 2,670.8x - 26,949, r2 > 0.9998), system suitability for retention time (RSD = 1.02%), area under the curve (RSD = 0.24%), selectivity and limits of detection (19.8 μg mg-1) and quantification (60.2 μg mg-1). The developed chromatographic method was applied for the analysis of different CNSL samples, and it was deemed suitable for the quantification of AnAc.

  19. Model Robust Calibration: Method and Application to Electronically-Scanned Pressure Transducers

    NASA Technical Reports Server (NTRS)

    Walker, Eric L.; Starnes, B. Alden; Birch, Jeffery B.; Mays, James E.

    2010-01-01

    This article presents the application of a recently developed statistical regression method to the controlled instrument calibration problem. The statistical method of Model Robust Regression (MRR), developed by Mays, Birch, and Starnes, is shown to improve instrument calibration by reducing the reliance of the calibration on a predetermined parametric (e.g. polynomial, exponential, logarithmic) model. This is accomplished by allowing fits from the predetermined parametric model to be augmented by a certain portion of a fit to the residuals from the initial regression using a nonparametric (locally parametric) regression technique. The method is demonstrated for the absolute scale calibration of silicon-based pressure transducers.
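
    A minimal sketch of the MRR idea described above follows, assuming a polynomial parametric model, a Gaussian-kernel smoother for the residual fit, and a fixed mixing fraction; in practice the degree, bandwidth and mixing fraction would be chosen by cross-validation rather than fixed as here.

```python
# Sketch of the MRR idea: a parametric polynomial fit augmented by a fraction
# lam of a kernel-smoothed fit to its residuals. Degree, bandwidth and lam are
# assumed; in practice they would be chosen by cross-validation.
import numpy as np

def kernel_smooth(x_train, r_train, x_eval, bandwidth):
    """Nadaraya-Watson Gaussian-kernel smoother of the residuals."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / bandwidth) ** 2)
    return (w * r_train).sum(axis=1) / w.sum(axis=1)

def model_robust_fit(x, y, degree=2, bandwidth=0.1, lam=0.5):
    coeffs = np.polyfit(x, y, degree)                 # parametric part
    residuals = y - np.polyval(coeffs, x)             # what the parametric model misses
    def predict(x_new):
        return np.polyval(coeffs, x_new) + lam * kernel_smooth(x, residuals, x_new, bandwidth)
    return predict

# Hypothetical transducer calibration data: sensor output vs. applied pressure
rng = np.random.default_rng(1)
applied = np.linspace(0.0, 1.0, 50)
output = 2.0 * applied + 0.1 * np.sin(6.0 * applied) + rng.normal(0.0, 0.01, 50)

predict = model_robust_fit(output, applied)           # calibrate: output -> pressure
print(predict(np.array([0.5, 1.5])))
```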

  20. Color measurement of plastics - From compounding via pelletizing, up to injection molding and extrusion

    NASA Astrophysics Data System (ADS)

    Botos, J.; Murail, N.; Heidemeyer, P.; Kretschmer, K.; Ulmer, B.; Zentgraf, T.; Bastian, M.; Hochrein, T.

    2014-05-01

    The typical offline color measurement on injection-molded or pressed specimens is a very expensive and time-consuming process. In order to optimize productivity and quality, it is desirable to measure the color already during production. Therefore several systems have been developed to monitor the color, e.g. on melts, strands, pellets, the extrudate or the injection-molded part, already during the process. Different kinds of inline, online and atline methods with their respective advantages and disadvantages will be compared. The criteria are e.g. the testing time, which ranges from real time to some minutes, the required calibration procedure, the spectral resolution and the final measuring precision. The latter ranges from 0.05 to 0.5 in the CIE L*a*b* system depending on the particular measurement system. Due to the high temperatures in typical plastics processes, thermochromism of polymers and dyes has to be taken into account. This effect can influence the color value by some 10% and is barely understood so far. Different suitable methods to compensate for thermochromic effects during compounding or injection molding by using calibration curves or artificial neural networks are presented. Furthermore, it is even possible to control the color during extrusion and compounding almost in real time. The goal is specifically developed software for adjusting the color recipe automatically, with the final objective of closed-loop control.

  1. Flight Calibration of four airspeed systems on a swept-wing airplane at Mach numbers up to 1.04 by the NACA radar-phototheodolite method

    NASA Technical Reports Server (NTRS)

    Thompson, Jim Rogers; Bray, Richard S.; Cooper, George E.

    1950-01-01

    The calibrations of four airspeed systems installed in a North American F-86A airplane have been determined in flight at Mach numbers up to 1.04 by the NACA radar-phototheodolite method. The variation of the static-pressure error per unit indicated impact pressure is presented for three systems typical of those currently in use in flight research, a nose boom and two different wing-tip booms, and for the standard service system installed in the airplane. A limited amount of information on the effect of airplane normal-force coefficient on the static-pressure error is included. The results are compared with available theory and with results from wind-tunnel tests of the airspeed heads alone. Of the systems investigated, a nose-boom installation was found to be most suitable for research use at transonic and low supersonic speeds because it provided the greatest sensitivity of the indicated Mach number to a unit change in true Mach number at very high subsonic speeds, and because it was least sensitive to changes in airplane normal-force coefficient. The static-pressure error of the nose-boom system was small and constant above a Mach number of 1.03 after passage of the fuselage bow shock wave over the airspeed head.

  2. Metrologically Traceable Determination of the Water Content in Biopolymers: INRiM Activity

    NASA Astrophysics Data System (ADS)

    Rolle, F.; Beltramino, G.; Fernicola, V.; Sega, M.; Verdoja, A.

    2017-03-01

    Water content in materials is a key factor affecting many chemical and physical properties. In polymers of biological origin, it influences their stability and mechanical properties as well as their biodegradability. The present work describes the activity carried out at INRiM on the determination of water content in samples of a commercial starch-derived biopolymer widely used in shopping bags (Mater-Bi®). Its water content, together with temperature, is the parameter with the greatest influence on its biodegradability, because of its considerable impact on the microbial activity responsible for biopolymer degradation in the environment. The main scope of the work was the establishment of a metrologically traceable procedure for the determination of water content by using two electrochemical methods, namely coulometric Karl Fischer (cKF) titration and evolved water vapour (EWV) analysis. The obtained results are presented. The most significant operational parameters were considered, and particular attention was devoted to the establishment of metrological traceability of the measurement results by using appropriate calibration procedures, calibrated standards and suitable certified reference materials. Sample homogeneity and oven-drying temperature were found to be the most important influence quantities in the whole water content measurement process. The results of the two methods were in agreement within the stated uncertainties. Further development is foreseen for the application of cKF and EWV to other polymers.

  3. Spectral characterization and calibration of AOTF spectrometers and hyper-spectral imaging system

    NASA Astrophysics Data System (ADS)

    Katrašnik, Jaka; Pernuš, Franjo; Likar, Boštjan

    2010-02-01

    The goal of this article is to present a novel method for spectral characterization and calibration of spectrometers and hyper-spectral imaging systems based on non-collinear acousto-optical tunable filters. The method characterizes the spectral tuning curve (frequency-wavelength characteristic) of the AOTF (Acousto-Optic Tunable Filter) filter by matching the acquired and modeled spectra of the HgAr calibration lamp, which emits line spectrum that can be well modeled via AOTF transfer function. In this way, not only tuning curve characterization and corresponding spectral calibration but also spectral resolution assessment is performed. The obtained results indicated that the proposed method is efficient, accurate and feasible for routine calibration of AOTF spectrometers and hyper-spectral imaging systems and thereby a highly competitive alternative to the existing calibration methods.

  4. Dual-angle, self-calibrating Thomson scattering measurements in RFX-MOD

    NASA Astrophysics Data System (ADS)

    Giudicotti, L.; Pasqualotto, R.; Fassina, A.

    2014-11-01

    In the multipoint Thomson scattering (TS) system of the RFX-MOD experiment the signals from a few spatial positions can be observed simultaneously under two different scattering angles. In addition the detection system uses optical multiplexing by signal delays in fiber optic cables of different length so that the two sets of TS signals can be observed by the same polychromator. Owing to the dependence of the TS spectrum on the scattering angle, it was then possible to implement self-calibrating TS measurements in which the electron temperature Te, the electron density ne and the relative calibration coefficients of spectral channels sensitivity Ci were simultaneously determined by a suitable analysis of the two sets of TS data collected at the two angles. The analysis has shown that, in spite of the small difference in the spectra obtained at the two angles, reliable values of the relative calibration coefficients can be determined by the analysis of good S/N dual-angle spectra recorded in a few tens of plasma shots. This analysis suggests that in RFX-MOD the calibration of the entire set of TS polychromators by means of the similar, dual-laser (Nd:YAG/Nd:YLF) TS technique, should be feasible.

  5. Scene-based nonuniformity correction for airborne point target detection systems.

    PubMed

    Zhou, Dabiao; Wang, Dejiang; Huo, Lijun; Liu, Rang; Jia, Ping

    2017-06-26

    Images acquired by airborne infrared search and track (IRST) systems are often characterized by nonuniform noise. In this paper, a scene-based nonuniformity correction method for infrared focal-plane arrays (FPAs) is proposed based on the constant statistics of the received radiation ratios of adjacent pixels. The gain of each pixel is computed recursively from the ratios between adjacent pixels, which are estimated through a median operation. Then, an elaborate mathematical model describing the error propagation, derived from random noise and the recursive calculation procedure, is established. The proposed method maintains the characteristics of traditional methods in calibrating the whole electro-optics chain, in compensating for temporal drifts, and in not preserving the radiometric accuracy of the system. Moreover, the proposed method is robust since the frame number is the only variable, and is suitable for real-time applications owing to its low computational complexity and simplicity of implementation. The experimental results, on different scenes from a proof-of-concept point target detection system with a long-wave Sofradir FPA, demonstrate the compelling performance of the proposed method.
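
    A minimal sketch of the gain-estimation idea described above, assuming a moving, statistically uniform scene so that the temporal median of adjacent-pixel ratios isolates the gain ratio; the recursion, array sizes and noise levels below are illustrative, not the authors' exact model.

    ```python
    import numpy as np

    def estimate_correction_gains(frames):
        """Estimate per-pixel correction gains from a frame stack of shape (T, H, W).

        Hypothetical sketch: the response ratio of horizontally adjacent pixels is
        taken as the temporal median of their frame-wise ratios; correction gains
        (inverse of the response gains, up to a global scale) are then accumulated
        recursively along each row, starting from the first column.
        """
        frames = np.asarray(frames, dtype=float)
        eps = 1e-12
        # Temporal median of the ratio between each pixel and its left neighbour.
        ratios = np.median(frames[:, :, 1:] / (frames[:, :, :-1] + eps), axis=0)  # (H, W-1)

        h, w = frames.shape[1:]
        corr = np.ones((h, w))
        for j in range(1, w):
            # Dividing by the estimated response ratio cancels the gain of column j.
            corr[:, j] = corr[:, j - 1] / ratios[:, j - 1]
        return corr / corr.mean()          # normalise to unit mean

    # Toy usage on synthetic data with a known fixed-pattern gain.
    rng = np.random.default_rng(0)
    true_gain = 1 + 0.05 * rng.standard_normal((64, 64))
    scene = rng.uniform(100, 200, size=(50, 64, 64))        # statistically uniform scene
    frames = scene * true_gain + rng.normal(0, 1, size=scene.shape)
    corrected = frames * estimate_correction_gains(frames)  # nonuniformity largely removed
    ```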

  6. A Simple RP-HPLC Method for Quantitation of Itopride HCl in Tablet Dosage Form.

    PubMed

    Thiruvengada, Rajan Vs; Mohamed, Saleem Ts; Ramkanth, S; Alagusundaram, M; Ganaprakash, K; Madhusudhana, Chetty C

    2010-10-01

    An isocratic reversed-phase high-performance liquid chromatographic method with ultraviolet detection at 220 nm has been developed for the quantification of itopride hydrochloride in tablet dosage form. The quantification was carried out using a C8 stainless steel column (250 mm × 4.6 mm, 5-μm particle size). The mobile phase comprised two solvents (Solvent A: a buffer of 1.4 mL ortho-phosphoric acid adjusted to pH 3.0 with triethylamine, and Solvent B: acetonitrile) in the ratio 75:25 v/v. The flow rate was 1.0 mL/min with UV detection at 220 nm. The method has been validated and proved to be robust. The calibration curve was linear in the concentration range of 80-120% with a coefficient of correlation of 0.9995. The percentage recovery for itopride HCl was 100.01%. The proposed method was validated for its selectivity, linearity, accuracy, and precision. The method was found to be suitable for the quality control of itopride HCl in tablet dosage formulation.

  7. A Simple RP-HPLC Method for Quantitation of Itopride HCl in Tablet Dosage Form

    PubMed Central

    Thiruvengada, Rajan VS; Mohamed, Saleem TS; Ramkanth, S; Alagusundaram, M; Ganaprakash, K; Madhusudhana, Chetty C

    2010-01-01

    An isocratic reversed-phase high-performance liquid chromatographic method with ultraviolet detection at 220 nm has been developed for the quantification of itopride hydrochloride in tablet dosage form. The quantification was carried out using a C8 stainless steel column (250 mm × 4.6 mm, 5-μm particle size). The mobile phase comprised two solvents (Solvent A: a buffer of 1.4 mL ortho-phosphoric acid adjusted to pH 3.0 with triethylamine, and Solvent B: acetonitrile) in the ratio 75:25 v/v. The flow rate was 1.0 mL/min with UV detection at 220 nm. The method has been validated and proved to be robust. The calibration curve was linear in the concentration range of 80-120% with a coefficient of correlation of 0.9995. The percentage recovery for itopride HCl was 100.01%. The proposed method was validated for its selectivity, linearity, accuracy, and precision. The method was found to be suitable for the quality control of itopride HCl in tablet dosage formulation. PMID:21264104

  8. Monitoring the metering performance of an electronic voltage transformer on-line based on cyber-physics correlation analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Zhu; Li, Hongbin; Tang, Dengping; Hu, Chen; Jiao, Yang

    2017-10-01

    Metering performance is the key parameter of an electronic voltage transformer (EVT), and it requires high accuracy. The conventional off-line calibration method using a standard voltage transformer is not suitable for the key equipment in a smart substation, which needs on-line monitoring. In this article, we propose a method for monitoring the metering performance of an EVT on-line based on cyber-physics correlation analysis. Exploiting the electrical and physical properties of a substation operating in three-phase symmetry, principal component analysis is used to separate the metering deviation caused by primary-side fluctuations from that caused by an EVT anomaly. The characteristic statistics of the measured data during operation are extracted, and the metering performance of the EVT is evaluated by analyzing the changes in these statistics. The experimental results show that the method accurately monitors the metering deviation of a Class 0.2 EVT. The method enables on-line evaluation of the metering performance of an EVT without a standard voltage transformer.
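
    The separation step lends itself to a simple illustration. The sketch below is a hypothetical toy example rather than the authors' algorithm: it applies PCA (via SVD) to three-phase measurements so that the common primary fluctuation is captured by the first principal component, while a drifting phase shows up in the residual statistics.

    ```python
    import numpy as np

    # Hypothetical sketch: secondary voltage amplitudes recorded by three EVTs
    # over time (columns = phases A, B, C). Under three-phase symmetry, the
    # common primary fluctuation dominates the first principal component;
    # per-phase residuals flag an individual EVT drifting.
    rng = np.random.default_rng(1)
    t = np.arange(2000)
    primary = 1.0 + 0.01 * np.sin(2 * np.pi * t / 500)       # common primary fluctuation
    drift = np.zeros((t.size, 3))
    drift[:, 1] = np.linspace(0, 2e-3, t.size)               # phase B slowly drifts (0.2 %)
    x = primary[:, None] * (1 + drift) + 1e-4 * rng.standard_normal((t.size, 3))

    xc = x - x.mean(axis=0)
    u, s, vt = np.linalg.svd(xc, full_matrices=False)        # PCA via SVD
    common = u[:, :1] * s[0] @ vt[:1]                        # first principal component
    residual = xc - common                                   # per-phase anomaly statistics
    print(residual.std(axis=0))                              # phase B stands out
    ```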

  9. Application of Monte Carlo Method for Evaluation of Uncertainties of ITS-90 by Standard Platinum Resistance Thermometer

    NASA Astrophysics Data System (ADS)

    Palenčár, Rudolf; Sopkuliak, Peter; Palenčár, Jakub; Ďuriš, Stanislav; Suroviak, Emil; Halaj, Martin

    2017-06-01

    Evaluation of the uncertainties of temperature measurement by a standard platinum resistance thermometer calibrated at the defining fixed points according to ITS-90 is a problem that can be solved in different ways. The paper presents a procedure based on the propagation of distributions using the Monte Carlo method. The procedure employs the generation of pseudo-random numbers for the input variables of resistances at the defining fixed points, assuming a multivariate Gaussian distribution for the input quantities. This allows the correlations among resistances at the defining fixed points to be taken into account. The assumption of a Gaussian probability density function is acceptable with respect to the several sources of uncertainty of the resistances. In the case of uncorrelated resistances at the defining fixed points, the method is applicable to any probability density function. Validation of the law of propagation of uncertainty using the Monte Carlo method is presented on the example of specific data for a 25 Ω standard platinum resistance thermometer in the temperature range from 0 to 660 °C. Using this example, we demonstrate the suitability of the method by validation of its results.
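
    As an illustration of propagation of distributions with correlated inputs, the sketch below draws correlated Gaussian samples for two fixed-point resistances and pushes them through a placeholder calibration function; the resistances, uncertainties, correlation and the function f() are invented for the example and do not reproduce the ITS-90 equations.

    ```python
    import numpy as np

    # Minimal sketch of propagation of distributions with correlated Gaussian
    # inputs for the SPRT resistance at the measuring temperature R_t and at the
    # triple point of water R_tpw. f() is a placeholder, not the ITS-90 scale.
    rng = np.random.default_rng(42)
    M = 200_000                                   # number of Monte Carlo trials

    mean = np.array([35.1234, 25.5550])           # [R_t, R_tpw] in ohms (illustrative)
    u = np.array([2e-4, 1e-4])                    # standard uncertainties in ohms
    r = 0.8                                       # assumed correlation coefficient
    cov = np.array([[u[0]**2, r*u[0]*u[1]],
                    [r*u[0]*u[1], u[1]**2]])

    samples = rng.multivariate_normal(mean, cov, size=M)
    W = samples[:, 0] / samples[:, 1]             # resistance ratio W = R_t / R_tpw

    def f(W):
        # Placeholder calibration function mapping W to temperature in degrees C.
        return 2.3e3 * (W - 1.0) - 1.5e2 * (W - 1.0)**2

    T = f(W)
    print(f"T = {T.mean():.4f} C, u(T) = {T.std(ddof=1):.4f} C")
    print("95 % coverage interval:", np.percentile(T, [2.5, 97.5]))
    ```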

  10. A straightforward experimental method to evaluate the Lamb-Mössbauer factor of a 57Co/Rh source

    NASA Astrophysics Data System (ADS)

    Spina, G.; Lantieri, M.

    2014-01-01

    In analyzing Mössbauer spectra by means of the integral transmission function, a correct evaluation of the recoilless fs factor of the source at the position of the sample is needed. A novel method to evaluate fs for a 57Co source is proposed. The method uses the standard transmission experimental setup and requires no measurements beyond those that are mandatory in order to center the Mössbauer line and to calibrate the Mössbauer transducer. Firstly, the background counts are evaluated by collecting a standard Multi Channel Scaling (MCS) spectrum of a thick metallic iron foil absorber and two Pulse Height Analysis (PHA) spectra with the same live-time, setting the maximum velocity of the transducer at the same value as for the MCS spectrum. Secondly, fs is evaluated by fitting the collected MCS spectrum through the integral transmission approach. A test of the suitability of the technique is also presented.

  11. An economic passive sampling method to detect particulate pollutants using magnetic measurements.

    PubMed

    Cao, Liwan; Appel, Erwin; Hu, Shouyun; Ma, Mingming

    2015-10-01

    Identifying particulate matter (PM) emitted from industrial processes into the atmosphere is an important issue in environmental research. This paper presents a passive sampling method using simple artificial samplers that maintains the advantages of bio-monitoring but overcomes some of its disadvantages. The samplers were tested in a heavily polluted area (Linfen, China) and compared to results from leaf samples. Spatial variations of magnetic susceptibility from artificial passive samplers and leaf samples show very similar patterns. Scanning electron microscopy suggests that the collected PM is mostly in the range of 2-25 μm; the frequent occurrence of spherical particles indicates that industrial combustion dominates PM emission. Magnetic properties around power plants show different features from those around other industrial plants. This sampling method provides a suitable and economical tool for semi-quantifying the temporal and spatial distribution of air quality; the samplers can be installed in a regular grid and calibrated against the weight of collected PM. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. The Calibration of AVHRR/3 Visible Dual Gain Using Meteosat-8 as a MODIS Calibration Transfer Medium

    NASA Technical Reports Server (NTRS)

    Avey, Lance; Garber, Donald; Nguyen, Louis; Minnis, Patrick

    2007-01-01

    This viewgraph presentation reviews the NOAA-17 AVHRR visible channels calibrated against MET-8/MODIS using dual gain regression methods. The topics include: 1) Motivation; 2) Methodology; 3) Dual Gain Regression Methods; 4) Examples of Regression methods; 5) AVHRR/3 Regression Strategy; 6) Cross-Calibration Method; 7) Spectral Response Functions; 8) MET8/NOAA-17; 9) Example of gain ratio adjustment; 10) Effect of mixed low/high count FOV; 11) Monitor dual gains over time; and 12) Conclusions

  13. Analysis of variation matrix array by bilinear least squares-residual bilinearization (BLLS-RBL) for resolving and quantifying of foodstuff dyes in a candy sample.

    PubMed

    Asadpour-Zeynali, Karim; Maryam Sajjadi, S; Taherzadeh, Fatemeh; Rahmanian, Reza

    2014-04-05

    The bilinear least squares (BLLS) method is one of the most suitable algorithms for second-order calibration. The original BLLS method is not applicable to second-order pH-spectral data when an analyte has more than one spectroscopically active species. Bilinear least squares-residual bilinearization (BLLS-RBL) was developed to achieve the second-order advantage for the analysis of complex mixtures. Although the modified method is useful, the pure profiles cannot be obtained; only linear combinations of them are obtained. Moreover, for the prediction of an analyte in an unknown sample, the original RBL algorithm may diverge instead of converging to the desired analyte concentrations. Therefore, a Gauss-Newton RBL algorithm should be used, which is not as simple as the original protocol. Also, the analyte concentration can be predicted on the basis of each of the equilibrating species of the component of interest, and these predictions are not exactly the same. The aim of the present work is to tackle the non-uniqueness problem in the second-order calibration of monoprotic acid mixtures and the divergence of RBL. Each pH-absorbance matrix was pretreated by subtracting the first spectrum from the other spectra in the data set to produce a full-rank array called the variation matrix. The variation matrices were then analyzed uniquely by the original BLLS-RBL, which is more parsimonious than its modified counterpart. The proposed method was applied to simulated data as well as to the analysis of real data. Sunset yellow and Carmosine, as monoprotic acids, were determined in a candy sample in the presence of unknown interference by this method. Copyright © 2013 Elsevier B.V. All rights reserved.
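
    The pretreatment step is simple to express in code. The sketch below builds the variation matrix exactly as described, subtracting the first spectrum of each pH-absorbance matrix from all subsequent spectra; the Gaussian band shapes and pKa used for the demonstration are invented, not data from the paper.

    ```python
    import numpy as np

    def variation_matrix(D):
        """Build the variation matrix from a pH-absorbance data matrix D.

        D has shape (n_pH, n_wavelengths); rows are spectra recorded at
        increasing pH. Following the described pretreatment, the first spectrum
        is subtracted from all subsequent spectra, leaving only the spectral
        changes driven by the acid-base equilibria.
        """
        D = np.asarray(D, dtype=float)
        return D[1:] - D[0]

    # Illustrative use with a simulated monoprotic acid (hypothetical spectra).
    wl = np.linspace(400, 600, 101)
    acid = np.exp(-0.5 * ((wl - 480) / 25) ** 2)      # spectrum of HA
    base = np.exp(-0.5 * ((wl - 520) / 25) ** 2)      # spectrum of A-
    pH = np.linspace(2, 10, 15)
    alpha = 1 / (1 + 10 ** (5.0 - pH))                # fraction of A- (pKa = 5)
    D = np.outer(1 - alpha, acid) + np.outer(alpha, base)
    V = variation_matrix(D)                           # shape (14, 101)
    ```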

  14. Segmental analysis of amphetamines in hair using a sensitive UHPLC-MS/MS method.

    PubMed

    Jakobsson, Gerd; Kronstrand, Robert

    2014-06-01

    A sensitive and robust ultra-high performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) method was developed and validated for the quantification of amphetamine, methamphetamine, 3,4-methylenedioxyamphetamine and 3,4-methylenedioxymethamphetamine in hair samples. Segmented hair (10 mg) was incubated in 2 M sodium hydroxide (80°C, 10 min) before liquid-liquid extraction with isooctane, followed by centrifugation and evaporation of the organic phase to dryness. The residue was reconstituted in methanol:formate buffer pH 3 (20:80). The total run time was 4 min, and after optimization of the UHPLC-MS/MS parameters, validation included selectivity, matrix effects, recovery, process efficiency, calibration model and range, lower limit of quantification, precision and bias. The calibration curve ranged from 0.02 to 12.5 ng/mg, and the recovery was between 62 and 83%. During validation the bias was less than ±7% and the imprecision was less than 5% for all analytes. In routine analysis, fortified control samples demonstrated an imprecision <13% and control samples made from authentic hair demonstrated an imprecision <26%. The method was applied to samples from a controlled study of amphetamine intake as well as forensic hair samples previously analyzed with an ultra-high performance liquid chromatography time-of-flight mass spectrometry (UHPLC-TOF-MS) screening method. The proposed method was suitable for quantification of these drugs in forensic cases including violent crimes, autopsy cases, drug testing and re-granting of driving licences. This study also demonstrated that if hair samples are divided into several short segments, the time point of intake of a small dose of amphetamine can be estimated, which might be useful when drug-facilitated crimes are investigated. Copyright © 2014 John Wiley & Sons, Ltd.
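
    For a calibration range spanning nearly three orders of magnitude such as 0.02-12.5 ng/mg, a weighted linear fit is a common way of keeping the relative bias acceptable at the low end. The sketch below is a generic, illustrative 1/x-weighted calibration and back-calculation; the concentrations and response ratios are invented and are not data from the study.

    ```python
    import numpy as np

    # Hedged sketch: 1/x-weighted linear calibration for a wide dynamic range,
    # followed by back-calculation of the calibrators and per-level % bias.
    conc = np.array([0.02, 0.05, 0.1, 0.5, 1.0, 5.0, 12.5])              # ng/mg (illustrative)
    response = np.array([0.011, 0.027, 0.052, 0.26, 0.51, 2.54, 6.30])   # analyte/IS area ratio

    w = 1.0 / conc                                                       # 1/x weights
    slope, intercept = np.polyfit(conc, response, 1, w=np.sqrt(w))       # weighted least squares
    back_calc = (response - intercept) / slope
    bias_pct = 100 * (back_calc - conc) / conc
    print(slope, intercept)
    print(np.round(bias_pct, 1))                                         # % bias per level
    ```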

  15. An investigation of the Eigenvalue Calibration Method (ECM) using GASP for non-imaging and imaging detectors

    NASA Astrophysics Data System (ADS)

    Kyne, Gillian; Lara, David; Hallinan, Gregg; Redfern, Michael; Shearer, Andrew

    2016-02-01

    Polarised light from astronomical targets can yield a wealth of information about their source radiation mechanisms and about the geometry of the scattered-light regions. Optical observations of both the linear and circular polarisation components have been impeded by non-optimised instrumentation. The need for suitable observing conditions and the availability of luminous targets are also limiting factors. The science motivation of any instrument adds constraints to its operation, such as high signal-to-noise ratio (SNR) and detector readout speed. These factors in particular leave a wide range of sources that have yet to be observed. The Galway Astronomical Stokes Polarimeter (GASP) has been specifically designed to make observations of these sources. GASP uses a division-of-amplitude polarimeter (DOAP) (Compain and Drevillon, Appl. Opt. 37, 5938-5944, 1998) to measure the four components of the Stokes vector (I, Q, U and V) simultaneously, which eliminates the need for moving parts during observation and offers a real-time, complete measurement of polarisation. Results from the GASP calibration are presented in this work for both a 1D detector system and a pixel-by-pixel analysis on a 2D detector system. Following Compain et al. (Appl. Opt. 38, 3490-3502, 1999), we use the Eigenvalue Calibration Method (ECM) to measure the polarimetric limitations of the instrument for each of the two systems. Consequently, the ECM is able to compensate for systematic errors introduced by the calibration optics, and it also accounts for all optical elements of the polarimeter in the output. Initial laboratory results of the ECM are presented, using APD detectors, where errors of 0.2% and 0.1° were measured for the degree of linear polarisation (DOLP) and polarisation angle (PA), respectively. Channel-to-channel image registration is an important aspect of 2D polarimetry. We present calibration results for the measured Mueller matrix of each sample used by the ECM, obtained when two Andor iXon Ultra 897 detectors were loaned to the project. A set of zenith flat-field images was recorded during an observing campaign at the Palomar 200-inch telescope in November 2012. From these we show the polarimetric errors from the spatial polarimetry, indicating both the stability and absolute accuracy of GASP.

  16. Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units.

    PubMed

    Cai, Qingzhong; Yang, Gongliu; Song, Ningfang; Liu, Yiliang

    2016-06-22

    An inertial navigation system (INS) has been widely used in challenging GPS environments. With the rapid development of modern physics, an atomic gyroscope will come into use in the near future with a predicted accuracy of 5 × 10⁻⁶ °/h or better. However, existing calibration methods and devices cannot satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors and lever arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can realize the estimation of all the parameters using a common dual-axis turntable. Laboratory and sailing tests prove that the position accuracy of a five-day inertial navigation run can be improved by about 8% with the proposed calibration method. The accuracy can be improved by at least 20% when the position accuracy of the atomic gyro INS reaches a level of 0.1 nautical miles/5 d. Compared with the existing calibration methods, the proposed method, with more error sources and higher-order small error parameters calibrated for ultra-high accuracy inertial measurement units (IMUs) using common turntables, has great application potential in future atomic gyro INSs.

  17. IMU-based online kinematic calibration of robot manipulator.

    PubMed

    Du, Guanglong; Zhang, Ping

    2013-01-01

    Robot calibration is a useful diagnostic method for improving positioning accuracy in robot production and maintenance. An online robot self-calibration method based on an inertial measurement unit (IMU) is presented in this paper. The method requires that the IMU is rigidly attached to the robot manipulator, which makes it possible to obtain the orientation of the manipulator from the orientation of the IMU in real time. This paper proposes an efficient approach that incorporates the Factored Quaternion Algorithm (FQA) and a Kalman Filter (KF) to estimate the orientation of the IMU. Then, an Extended Kalman Filter (EKF) is used to estimate the kinematic parameter errors. Using this orientation estimation method results in improved reliability and accuracy in determining the orientation of the manipulator. Compared with existing vision-based self-calibration methods, the great advantage of this method is that it does not need complex steps, such as camera calibration, image capture, and corner detection, which makes the robot calibration procedure more autonomous in a dynamic manufacturing environment. Experimental studies on a GOOGOL GRB3016 robot show that this method has better accuracy, convenience, and effectiveness than vision-based methods.

  18. SU-E-I-38: Improved Metal Artifact Correction Using Adaptive Dual Energy Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, X; Elder, E; Roper, J

    2015-06-15

    Purpose: The empirical dual energy calibration (EDEC) method corrects for beam-hardening artifacts, but shows limited performance on metal artifact correction. In this work, we propose an adaptive dual energy calibration (ADEC) method to correct for metal artifacts. Results: Highly attenuating copper rods cause severe streaking artifacts on standard CT images. EDEC improves the image quality, but cannot eliminate the streaking artifacts. Compared to EDEC, the proposed ADEC method further reduces the streaking resulting from metallic inserts and beam-hardening effects and obtains material decomposition images with significantly improved accuracy. Conclusion: We propose an adaptive dual energy calibration method to correct for metal artifacts. ADEC is evaluated with the Shepp-Logan phantom, and shows superior metal artifact correction performance. In the future, we will further evaluate the performance of the proposed method with phantom and patient data.

  19. An optically stimulated luminescence dosimeter for measuring patient exposure from imaging guidance procedures.

    PubMed

    Ding, George X; Malcolm, Arnold W

    2013-09-07

    There is growing interest in patient exposure resulting from x-ray imaging procedures used in image-guided radiation therapy. This study explores the feasibility of using a commercially available optically stimulated luminescence (OSL) dosimeter, nanoDot, for estimating imaging radiation exposure to patients. The kilovoltage x-ray sources used for kV cone-beam CT (CBCT) imaging acquisition procedures were from a Varian on-board imager (OBI) system. An ionization chamber was used to determine the energy response of the nanoDot dosimeters. The chamber calibration factors for x-ray beam quality, specified by half-value layer, were obtained from an Accredited Dosimetry Calibration Laboratory. Monte Carlo calculated dose distributions were used to validate the dose distributions measured with the nanoDot dosimeters in phantom and in vivo. The range of the energy correction factors for the nanoDot as a function of photon energy and bow-tie filter was found to be 0.88-1.13 for different kVp settings and bow-tie filters. Measurement uncertainties of the nanoDot were approximately 2-4% after applying the energy correction factors. Tests of the nanoDot placed on a RANDO phantom and on a patient's skin showed consistent results. The nanoDot is a suitable dosimeter for in vivo dosimetry due to its small size and manageable energy dependence. A dosimeter placed on a patient's skin has the potential to serve as an experimental method to monitor and estimate patient exposure resulting from a kilovoltage x-ray imaging procedure. Due to its large variation in energy response, the nanoDot is not suitable for measuring radiation doses resulting from mixed beams of megavoltage therapeutic and kilovoltage imaging radiation.

  20. An optically stimulated luminescence dosimeter for measuring patient exposure from imaging guidance procedures

    NASA Astrophysics Data System (ADS)

    Ding, George X.; Malcolm, Arnold W.

    2013-09-01

    There is growing interest in patient exposure resulting from x-ray imaging procedures used in image-guided radiation therapy. This study explores the feasibility of using a commercially available optically stimulated luminescence (OSL) dosimeter, nanoDot, for estimating imaging radiation exposure to patients. The kilovoltage x-ray sources used for kV cone-beam CT (CBCT) imaging acquisition procedures were from a Varian on-board imager (OBI) system. An ionization chamber was used to determine the energy response of the nanoDot dosimeters. The chamber calibration factors for x-ray beam quality, specified by half-value layer, were obtained from an Accredited Dosimetry Calibration Laboratory. Monte Carlo calculated dose distributions were used to validate the dose distributions measured with the nanoDot dosimeters in phantom and in vivo. The range of the energy correction factors for the nanoDot as a function of photon energy and bow-tie filter was found to be 0.88-1.13 for different kVp settings and bow-tie filters. Measurement uncertainties of the nanoDot were approximately 2-4% after applying the energy correction factors. Tests of the nanoDot placed on a RANDO phantom and on a patient's skin showed consistent results. The nanoDot is a suitable dosimeter for in vivo dosimetry due to its small size and manageable energy dependence. A dosimeter placed on a patient's skin has the potential to serve as an experimental method to monitor and estimate patient exposure resulting from a kilovoltage x-ray imaging procedure. Due to its large variation in energy response, the nanoDot is not suitable for measuring radiation doses resulting from mixed beams of megavoltage therapeutic and kilovoltage imaging radiation.

  1. Quantitative monitoring of sucrose, reducing sugar and total sugar dynamics for phenotyping of water-deficit stress tolerance in rice through spectroscopy and chemometrics

    NASA Astrophysics Data System (ADS)

    Das, Bappa; Sahoo, Rabi N.; Pargal, Sourabh; Krishna, Gopal; Verma, Rakesh; Chinnusamy, Viswanathan; Sehgal, Vinay K.; Gupta, Vinod K.; Dash, Sushanta K.; Swain, Padmini

    2018-03-01

    In the present investigation, the changes in sucrose, reducing sugar and total sugar content due to water-deficit stress in rice leaves were modeled using visible, near infrared (VNIR) and shortwave infrared (SWIR) spectroscopy. The objectives of the study were to identify the best vegetation indices and the most suitable multivariate technique based on precise analysis of hyperspectral data (350 to 2500 nm) and of sucrose, reducing sugar and total sugar content measured at different stress levels in 16 different rice genotypes. Spectral data analysis was performed to identify suitable spectral indices and models for sucrose estimation. Novel spectral indices in the near infrared (NIR) range, namely ratio spectral indices (RSI) and normalised difference spectral indices (NDSI), sensitive to sucrose, reducing sugar and total sugar content were identified and subsequently calibrated and validated. The RSI and NDSI models had R2 values of 0.65, 0.71 and 0.67 and RPD values of 1.68, 1.95 and 1.66 for sucrose, reducing sugar and total sugar, respectively, for the validation dataset. Different multivariate spectral models, such as artificial neural network (ANN), multivariate adaptive regression splines (MARS), multiple linear regression (MLR), partial least squares regression (PLSR), random forest regression (RFR) and support vector machine regression (SVMR), were also evaluated. The best performing multivariate models for sucrose, reducing sugars and total sugars were found to be MARS, ANN and MARS, respectively, with RPD values of 2.08, 2.44 and 1.93. The results indicated that VNIR and SWIR spectroscopy combined with multivariate calibration can be used as a reliable alternative to conventional methods for the measurement of sucrose, reducing sugars and total sugars in rice under water-deficit stress, as this technique is fast, economical and noninvasive.
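
    For reference, the two index forms have simple closed forms, RSI = R(λ1)/R(λ2) and NDSI = (R(λ1) − R(λ2))/(R(λ1) + R(λ2)). The sketch below computes them from a synthetic reflectance spectrum; the band pair (1100 nm, 1250 nm) is a placeholder, since the optimal bands reported in the study come from a search over the full spectral range.

    ```python
    import numpy as np

    # Synthetic reflectance spectrum sampled at 1 nm from 350 to 2500 nm.
    wavelengths = np.arange(350, 2501)                    # nm
    reflectance = np.random.default_rng(3).uniform(0.1, 0.6, wavelengths.size)

    def band(wl_nm):
        """Reflectance at the requested wavelength (nearest sample)."""
        return reflectance[np.searchsorted(wavelengths, wl_nm)]

    l1, l2 = 1100, 1250                                   # hypothetical NIR band pair (nm)
    rsi = band(l1) / band(l2)                             # ratio spectral index
    ndsi = (band(l1) - band(l2)) / (band(l1) + band(l2))  # normalised difference spectral index
    print(rsi, ndsi)
    ```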

  2. Demonstration of a novel cavity ring down spectrometer for NO2 measurement during Discover AQ (July, 2011) where column profiles in the troposphere were collected over the mid Atlantic region of the United States

    NASA Astrophysics Data System (ADS)

    Brent, L. C.; Stehr, J. W.; Thorn, W.; Leen, J.; Gupta, M.; Luke, W. T.; Kelley, P.; Ren, X.; He, H.; Arkinson, H.; Weinheimer, A. J.; Pusede, S. E.; Cohen, R. C.; Dickerson, R. R.; Discover AQ science Team

    2011-12-01

    Real-time atmospheric NO2 column profiles from the Mid-Atlantic region, collected during the NASA Discover AQ air campaign, demonstrate that cavity ring-down spectroscopy with an LED light source is a suitable technique for the detection of NO2 in the boundary layer and lower free troposphere. Preliminary results from this air campaign indicate that 0.5 to 30 ppb of NO2 can be observed and that the results were similar to NO2 measurements obtained via laser-induced fluorescence and chemiluminescence. The cavity ring-down instrument is relatively inexpensive, weighs 40 lbs, and relies on a built-in zeroing method to account for drift with respect to time and altitude. Follow-on collaboration with NOAA and NIST will consist of side-by-side ambient air comparison and calibration. In this field experiment, the NOAA-modified Thermo 42s, which uses a UV light source to selectively convert NO2 to NO with chemiluminescent detection, and a NIST Thermo 42I, with a molybdenum NO2-to-NO converter and chemiluminescent detection, will be compared to NO2 measured by the Los Gatos Research cavity ring-down detector. Part of the calibration procedure will include testing for interferences from nitric acid, n-propyl nitrate and HONO. The altitude integral of NO2 concentration provides a column content suitable for comparison to measurements made from space and by remote-sensing spectrometers. These data help in understanding transport and are necessary for drawing policy-relevant conclusions with respect to pollution control.

  3. FY17 Status Report on the Initial Development of a Constitutive Model for Grade 91 Steel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Messner, M. C.; Phan, V. -T.; Sham, T. -L.

    Grade 91 is a candidate structural material for high temperature advanced reactor applications. Existing ASME Section III, Subsection HB, Subpart B simplified design rules based on elastic analysis are set up as conservative screening tools, with the intent to supplement these screening rules with full inelastic analysis when required. The Code provides general guidelines for suitable inelastic models but does not provide constitutive model implementations. This report describes the development of an inelastic constitutive model for Gr. 91 steel aimed at fulfilling the ASME Code requirements and being included in a new Section III Code appendix, HBB-Z. A large database of over 300 experiments on Gr. 91 was collected and converted to a standard XML form. Five families of Gr. 91 material models were identified in the literature. Of these five, two are potentially suitable for use in the ASME Code. These two models were implemented and evaluated against the experimental database. Both models have deficiencies, so the report develops a framework for developing and calibrating an improved model. This required creating a new modeling method for representing changes in material rate sensitivity across the full ASME allowable temperature range for Gr. 91 structural components: room temperature to 650 °C. On top of this rate-sensitivity framework, the report describes the calibration of a model for work hardening and softening in the material using genetic algorithm optimization. Future work will focus on improving this trial model by including the tension/compression asymmetry observed in experiments, which is necessary to capture material ratcheting under zero mean stress, and by improving the optimization and analysis framework.

  4. Using tri-axial accelerometers to identify wild polar bear behaviors

    USGS Publications Warehouse

    Pagano, Anthony M.; Rode, Karyn D.; Cutting, A.; Owen, M.A.; Jensen, S.; Ware, J.V.; Robbins, C.T.; Durner, George M.; Atwood, Todd C.; Obbard, M.E.; Middel, K.R.; Thiemann, G.W.; Williams, T.M.

    2017-01-01

    Tri-axial accelerometers have been used to remotely identify the behaviors of a wide range of taxa. Assigning behaviors to accelerometer data often involves the use of captive animals or surrogate species, as their accelerometer signatures are generally assumed to be similar to those of their wild counterparts. However, this has rarely been tested. Validated accelerometer data are needed for polar bears Ursus maritimus to understand how habitat conditions may influence behavior and energy demands. We used accelerometer and water conductivity data to remotely distinguish 10 polar bear behaviors. We calibrated accelerometer and conductivity data collected from collars with behaviors observed from video-recorded captive polar bears and brown bears U. arctos, and with video from camera collars deployed on free-ranging polar bears on sea ice and on land. We used random forest models to predict behaviors and found strong ability to discriminate the most common wild polar bear behaviors using a combination of accelerometer and conductivity sensor data from captive or wild polar bears. In contrast, models using data from captive brown bears failed to reliably distinguish most active behaviors in wild polar bears. Our ability to discriminate behavior was greatest when species- and habitat-specific data from wild individuals were used to train models. Data from captive individuals may be suitable for calibrating accelerometers, but may provide reduced ability to discriminate some behaviors. The accelerometer calibrations developed here provide a method to quantify polar bear behaviors to evaluate the impacts of declines in Arctic sea ice.
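
    A hedged sketch of the classification step follows; it trains a random forest on summary accelerometer features plus a wet/dry conductivity flag, in the spirit of the approach described above. The feature set, class labels and data are invented placeholders, and scikit-learn is assumed to be available.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Hypothetical feature table: one row per accelerometer burst.
    rng = np.random.default_rng(7)
    n = 600
    X = np.column_stack([
        rng.normal(0, 1, n),      # mean dynamic acceleration, x
        rng.normal(0, 1, n),      # mean dynamic acceleration, y
        rng.normal(0, 1, n),      # mean dynamic acceleration, z
        rng.uniform(0, 1, n),     # overall dynamic body acceleration (ODBA)
        rng.integers(0, 2, n),    # wet/dry flag from the conductivity sensor
    ])
    y = rng.integers(0, 4, n)     # 4 hypothetical behaviour classes (rest, walk, swim, eat)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print(cross_val_score(clf, X, y, cv=5).mean())   # ~chance level on these random labels
    ```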

  5. A holistic calibration method with iterative distortion compensation for stereo deflectometry

    NASA Astrophysics Data System (ADS)

    Xu, Yongjia; Gao, Feng; Zhang, Zonghua; Jiang, Xiangqian

    2018-07-01

    This paper presents a novel holistic calibration method for a stereo deflectometry system to improve the system measurement accuracy. The reconstruction result of stereo deflectometry is integrated with the calculated normal data of the measured surface. The calculation accuracy of the normal data is strongly influenced by the calibration accuracy of the geometrical relationship of the stereo deflectometry system. Conventional calibration approaches introduce form error into the system due to inaccuracies in the imaging model and in distortion elimination. The proposed calibration method compensates for system distortion with an iterative algorithm instead of the conventional mathematical distortion model. The initial values of the system parameters are calculated from the fringe patterns displayed on the system's LCD screen via reflection off a markerless flat mirror. An iterative algorithm is proposed to compensate for system distortion and to optimize the camera imaging parameters and system geometrical relation parameters based on a cost function. Both simulation work and experimental results show the proposed calibration method can significantly improve the calibration and measurement accuracy of a stereo deflectometry system. The PV (peak value) of the measurement error for a flat mirror is reduced from 282 nm with the conventional calibration approach to 69.7 nm with the proposed method.

  6. Multimodal Spatial Calibration for Accurately Registering EEG Sensor Positions

    PubMed Central

    Chen, Shengyong; Xiao, Gang; Li, Xiaoli

    2014-01-01

    This paper proposes a fast and accurate calibration method to calibrate multiple multimodal sensors using a novel photogrammetry system for fast localization of EEG sensors. The EEG sensors are placed on a human head, and the multimodal sensors are installed around the head to simultaneously obtain all EEG sensor positions. A multiple-view calibration process is implemented to obtain the transformations between the views. We first develop an efficient local repair algorithm to improve the depth map, and then a special calibration body is designed. Based on these, accurate and robust calibration results can be achieved. We evaluate the proposed method using the corners of a chessboard calibration plate. Experimental results demonstrate that the proposed method achieves good performance, and it can be further applied to EEG source localization applications on the human brain. PMID:24803954

  7. Volumetric calibration of a plenoptic camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, Elise Munz; Fahringer, Timothy W.; Guildenbecher, Daniel Robert

    Here, the volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods is examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.

  8. Volumetric calibration of a plenoptic camera

    DOE PAGES

    Hall, Elise Munz; Fahringer, Timothy W.; Guildenbecher, Daniel Robert; ...

    2018-02-01

    Here, the volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods is examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.
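
    As a rough illustration of a polynomial mapping function of the kind described above, the sketch below fits a third-order 3D polynomial by least squares from initially reconstructed dot positions to their known target coordinates and then applies it as a correction. The distortion model, dot layout and polynomial order are assumptions for the example, not the authors' implementation.

    ```python
    import numpy as np

    def poly_terms(p, order=3):
        """All 3D monomials up to the given total order, evaluated at points p (N, 3)."""
        x, y, z = p.T
        terms = [np.ones_like(x)]
        for i in range(1, order + 1):
            for a in range(i + 1):
                for b in range(i - a + 1):
                    c = i - a - b
                    terms.append(x**a * y**b * z**c)
        return np.column_stack(terms)

    rng = np.random.default_rng(5)
    recon = rng.uniform(-10, 10, size=(500, 3))                 # initially reconstructed dots
    true = recon + 0.02 * recon**2 * [1, -1, 0.5] + 0.1         # synthetic distortion + offset

    A = poly_terms(recon)
    coeffs, *_ = np.linalg.lstsq(A, true, rcond=None)           # one coefficient column per axis
    corrected = poly_terms(recon) @ coeffs                      # apply the fitted mapping
    print(np.abs(corrected - true).max())                       # residual mapping error
    ```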

  9. Blind calibration of radio interferometric arrays using sparsity constraints and its implications for self-calibration

    NASA Astrophysics Data System (ADS)

    Chiarucci, Simone; Wijnholds, Stefan J.

    2018-02-01

    Blind calibration, i.e. calibration without a priori knowledge of the source model, is robust to the presence of unknown sources such as transient phenomena or (low-power) broad-band radio frequency interference that escaped detection. In this paper, we present a novel method for blind calibration of a radio interferometric array assuming that the observed field contains only a small number of discrete point sources. We show its huge computational advantage over previous blind calibration methods and assess its statistical efficiency and its robustness to noise and to the quality of the initial estimate. We demonstrate the method on actual data from a Low-Frequency Array low-band antenna station, showing that our blind calibration is able to recover the same gain solutions as the regular calibration approach, as expected from theory and simulations. We also discuss the implications of our findings for the robustness of regular self-calibration to poor starting models.

  10. An Accurate Projector Calibration Method Based on Polynomial Distortion Representation

    PubMed Central

    Liu, Miao; Sun, Changku; Huang, Shujun; Zhang, Zonghua

    2015-01-01

    In structured light measurement systems or 3D printing systems, the errors caused by optical distortion of a digital projector always affect the precision performance and cannot be ignored. Existing methods to calibrate the projection distortion rely on a calibration plate and photogrammetry, so the calibration performance is largely affected by the quality of the plate and of the imaging system. This paper proposes a new projector calibration approach that makes use of photodiodes to directly detect the light emitted from a digital projector. By analyzing the output sequence of the photoelectric module, the pixel coordinates can be accurately obtained by a curve-fitting method. A polynomial distortion representation is employed to reduce the residuals of the traditional distortion representation model. Experimental results and performance evaluation show that the proposed calibration method avoids most of the disadvantages of traditional methods and achieves higher accuracy. The proposed method is also practically applicable for evaluating the geometric optical performance of other optical projection systems. PMID:26492247

  11. Novel Calibration Algorithm for a Three-Axis Strapdown Magnetometer

    PubMed Central

    Liu, Yan Xia; Li, Xi Sheng; Zhang, Xiao Juan; Feng, Yi Bo

    2014-01-01

    A complete error calibration model with 12 independent parameters is established by analyzing the error mechanism of a three-axis magnetometer. The model conforms to an ellipsoid constraint: the parameters of the ellipsoid equation are estimated and the ellipsoid coefficient matrix is derived. However, the calibration matrix cannot be determined completely, as there are fewer ellipsoid parameters than calibration model parameters. Mathematically, the calibration matrix derived from the ellipsoid coefficient matrix by matrix decomposition is not unique, and there exists an unknown rotation matrix R between the possible solutions. This paper puts forward a constant intersection angle method (the angle between the geomagnetic field and the gravitational field is fixed) to estimate R. The Tikhonov method is adopted to address the problem that rounding errors or other errors may seriously affect the calculation of R when the condition number of the matrix is very large. The geomagnetic field vector and heading error are further corrected by R. The constant intersection angle method is convenient and practical, as it is free from any additional calibration procedure or coordinate transformation. In addition, the simulation experiment indicates that the heading error declines from ±1° when calibrated by classical ellipsoid fitting to ±0.2° when calibrated by the constant intersection angle method, at a signal-to-noise ratio of 50 dB. The actual experiment shows that the heading error is further reduced from ±0.8° with classical ellipsoid fitting to ±0.3° with the constant intersection angle method. PMID:24831110
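
    A hedged sketch of the ellipsoid-fitting step follows: raw three-axis readings are assumed to lie on an ellipsoid, and a general quadric a·x² + b·y² + c·z² + 2f·yz + 2g·xz + 2h·xy + 2p·x + 2q·y + 2r·z = 1 is fitted by linear least squares. The soft-iron matrix recoverable from this quadric is only defined up to a rotation R, which is what the constant intersection angle method described above is used to resolve; the synthetic distortion parameters below are invented for the example.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    # Synthetic truth: soft-iron matrix, hard-iron offset, unit field directions.
    A_true = np.array([[1.10, 0.05, 0.00],
                       [0.05, 0.95, 0.02],
                       [0.00, 0.02, 1.05]])
    b_true = np.array([0.20, -0.10, 0.05])
    dirs = rng.standard_normal((2000, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    h_raw = dirs @ A_true.T + b_true + 0.002 * rng.standard_normal((2000, 3))

    # Linear least-squares fit of the general quadric coefficients.
    x, y, z = h_raw.T
    D = np.column_stack([x*x, y*y, z*z, 2*y*z, 2*x*z, 2*x*y, 2*x, 2*y, 2*z])
    v, *_ = np.linalg.lstsq(D, np.ones(len(x)), rcond=None)
    a, b, c, f, g, h, p, q, r = v

    Q = np.array([[a, h, g], [h, b, f], [g, f, c]])     # quadratic-form matrix
    center = -np.linalg.solve(Q, [p, q, r])             # hard-iron (offset) estimate
    print("estimated offset:", np.round(center, 3))     # close to b_true
    ```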

  12. A novel calibration method of focused light field camera for 3-D reconstruction of flame temperature

    NASA Astrophysics Data System (ADS)

    Sun, Jun; Hossain, Md. Moinul; Xu, Chuan-Long; Zhang, Biao; Wang, Shi-Min

    2017-05-01

    This paper presents a novel geometric calibration method for a focused light field camera to trace the rays of flame radiance and to reconstruct the three-dimensional (3-D) temperature distribution of a flame. A calibration model is developed to calculate the corner points and their projections for the focused light field camera. The matching of main lens and microlens f-numbers is used as an additional constraint for the calibration. The geometric parameters of the focused light field camera are then obtained using the Levenberg-Marquardt algorithm. Totally focused images, in which all points are in focus, are utilized to validate the proposed calibration method. Calibration results are presented and discussed in detail. The maximum mean relative error of the calibration was found to be less than 0.13%, indicating that the proposed method is capable of calibrating the focused light field camera successfully. The parameters obtained by the calibration are then utilized to trace the rays of flame radiance. A least-squares QR-factorization algorithm with Planck's radiation law is used to reconstruct the 3-D temperature distribution of a flame. Experiments were carried out on an ethylene-air fired combustion test rig to reconstruct the temperature distribution of flames. The flame temperature obtained by the proposed method was then compared with that obtained using a high-precision thermocouple. The difference between the two measurements was found to be no greater than 6.7%. The experimental results demonstrate that the proposed calibration method and the applied measurement technique perform well in the reconstruction of the flame temperature.
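
    To make the final reconstruction step concrete, the sketch below solves a synthetic ray-integral system by QR-based least squares and converts the recovered monochromatic emission to temperature with the inverse of Planck's law. The weight matrix, wavelength and temperature range are placeholders; the real system matrix comes from the calibrated ray tracing described above.

    ```python
    import numpy as np

    h, c, kB = 6.626e-34, 2.998e8, 1.381e-23            # Planck, light speed, Boltzmann (SI)
    lam = 650e-9                                         # wavelength of the imaging band (m)

    def planck(T):
        """Spectral radiance at wavelength lam for temperature T (K)."""
        return 2*h*c**2 / lam**5 / (np.exp(h*c / (lam*kB*T)) - 1.0)

    def inv_planck(B):
        """Temperature (K) from spectral radiance B at wavelength lam."""
        return h*c / (lam*kB) / np.log1p(2*h*c**2 / (lam**5 * B))

    rng = np.random.default_rng(2)
    n_rays, n_voxels = 400, 100
    G = rng.uniform(0, 1, (n_rays, n_voxels))            # ray-voxel weight matrix (placeholder)
    T_true = rng.uniform(1500, 2000, n_voxels)           # flame temperatures (K)
    I = G @ planck(T_true)                               # simulated detector signals

    Q, R = np.linalg.qr(G)                               # least squares via QR factorization
    e = np.linalg.solve(R, Q.T @ I)                      # voxel emission estimates
    print(np.abs(inv_planck(e) - T_true).max())          # reconstruction error (K)
    ```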

  13. Low Frequency Error Analysis and Calibration for High-Resolution Optical Satellite's Uncontrolled Geometric Positioning

    NASA Astrophysics Data System (ADS)

    Wang, Mi; Fang, Chengcheng; Yang, Bo; Cheng, Yufeng

    2016-06-01

    The low-frequency error is a key factor affecting the accuracy of uncontrolled geometric processing of high-resolution optical imagery. To guarantee the geometric quality of the imagery, this paper presents an on-orbit calibration method for the low-frequency error based on a geometric calibration field. Firstly, we introduce the overall flow of low-frequency error on-orbit analysis and calibration, which includes detection of optical-axis angle variation of the star sensors, relative calibration among star sensors, multi-star-sensor information fusion, and low-frequency error model construction and verification. Secondly, we use the optical-axis angle variation detection method to analyze the law of low-frequency error variation. Thirdly, we use relative calibration and information fusion among star sensors to achieve datum unification and high-precision attitude output. Finally, we construct the low-frequency error model and optimally estimate the model parameters based on the DEM/DOM of the geometric calibration field. To evaluate the performance of the proposed calibration method, real data from a certain type of satellite are used. Test results demonstrate that the calibration model in this paper describes the law of low-frequency error variation well. The uncontrolled geometric positioning accuracy of the high-resolution optical image in the WGS-84 coordinate system is obviously improved after the step-wise calibration.

  14. On-Demand Calibration and Evaluation for Electromagnetically Tracked Laparoscope in Augmented Reality Visualization

    PubMed Central

    Liu, Xinyang; Plishker, William; Zaki, George; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj

    2017-01-01

    Purpose: Common camera calibration methods employed in current laparoscopic augmented reality systems require the acquisition of multiple images of an entire checkerboard pattern from various poses. This lengthy procedure prevents performing laparoscope calibration in the operating room (OR). The purpose of this work was to develop a fast calibration method for electromagnetically (EM) tracked laparoscopes, such that calibration can be performed in the OR on demand. Methods: We designed a mechanical tracking mount to uniquely and snugly position an EM sensor at an appropriate location on a conventional laparoscope. A tool named fCalib was developed to calibrate intrinsic camera parameters, distortion coefficients, and extrinsic parameters (the transformation between the scope lens coordinate system and the EM sensor coordinate system) using a single image that shows an arbitrary portion of a special target pattern. For quick evaluation of the calibration result in the OR, we integrated a tube phantom with fCalib and overlaid a virtual representation of the tube on the live video scene. Results: We compared the spatial target registration error between the common OpenCV method and the fCalib method in a laboratory setting. In addition, we compared the calibration re-projection error between the EM tracking-based fCalib and the optical tracking-based fCalib in a clinical setting. Our results suggest that the proposed method is comparable to the OpenCV method. However, changing the environment, e.g., inserting or removing surgical tools, would affect re-projection accuracy for the EM tracking-based approach. Computational time of the fCalib method averaged 14.0 s (range 3.5 s - 22.7 s). Conclusions: We developed and validated a prototype for fast calibration and evaluation of EM-tracked conventional (forward-viewing) laparoscopes. The calibration method achieved acceptable accuracy and was relatively fast and easy to perform in the OR on demand. PMID:27250853

  15. Data consistency criterion for selecting parameters for k-space-based reconstruction in parallel imaging.

    PubMed

    Nana, Roger; Hu, Xiaoping

    2010-01-01

    k-space-based reconstruction in parallel imaging depends on the reconstruction kernel setting, including its support. An optimal choice of the kernel depends on the calibration data, coil geometry and signal-to-noise ratio, as well as the criterion used. In this work, data consistency, imposed by the shift invariance requirement of the kernel, is introduced as a goodness measure of k-space-based reconstruction in parallel imaging and demonstrated. Data consistency error (DCE) is calculated as the sum of squared difference between the acquired signals and their estimates obtained based on the interpolation of the estimated missing data. A resemblance between DCE and the mean square error in the reconstructed image was found, demonstrating DCE's potential as a metric for comparing or choosing reconstructions. When used for selecting the kernel support for generalized autocalibrating partially parallel acquisition (GRAPPA) reconstruction and the set of frames for calibration as well as the kernel support in temporal GRAPPA reconstruction, DCE led to improved images over existing methods. Data consistency error is efficient to evaluate, robust for selecting reconstruction parameters and suitable for characterizing and optimizing k-space-based reconstruction in parallel imaging.
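
    A minimal sketch of the DCE computation follows, assuming the acquired k-space samples and their kernel-based re-estimates are already available; the multicoil array shapes, sampling mask and "reconstructions" below are synthetic stand-ins, not a GRAPPA implementation.

    ```python
    import numpy as np

    def data_consistency_error(acquired, reestimated, mask):
        """Sum of squared differences between acquired k-space samples and their
        re-estimates at the sampled locations (mask is True where sampled)."""
        diff = acquired[mask] - reestimated[mask]
        return float(np.sum(np.abs(diff) ** 2))

    # Toy usage: compare two candidate kernel settings and keep the one with lower DCE.
    rng = np.random.default_rng(4)
    ksp = rng.standard_normal((8, 64, 64)) + 1j * rng.standard_normal((8, 64, 64))
    mask = np.zeros((8, 64, 64), dtype=bool)
    mask[:, ::2, :] = True                                  # acquired every other line
    recon_a = ksp + 0.01 * rng.standard_normal(ksp.shape)   # pretend re-estimate, kernel A
    recon_b = ksp + 0.05 * rng.standard_normal(ksp.shape)   # pretend re-estimate, kernel B
    print(data_consistency_error(ksp, recon_a, mask) < data_consistency_error(ksp, recon_b, mask))
    ```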

  16. Metrological traceability of carbon dioxide measurements in atmosphere and seawater

    NASA Astrophysics Data System (ADS)

    Rolle, F.; Pessana, E.; Sega, M.

    2017-05-01

    The accurate determination of gaseous pollutants is fundamental for monitoring the trends of these analytes in the environment, and the application of metrological concepts to this field is necessary to assure the reliability of the measurement results. In this work, an overview is presented of the activity carried out at Istituto Nazionale di Ricerca Metrologica to establish the metrological traceability of measurements of gaseous atmospheric pollutants, in particular of carbon dioxide (CO2). Two primary methods, gravimetry and dynamic dilution, are used for the preparation of reference standards for composition, which can be used to calibrate sensors and analytical instrumentation. At present, research is carried out to lower the measurement uncertainties of the primary gas mixtures and to extend their application to the oceanic field. The reason for this investigation is the evidence of changes occurring in seawater carbonate chemistry, connected to the rising level of CO2 in the atmosphere. The well-established activity to assure the metrological traceability of CO2 in the atmosphere will be applied to the determination of CO2 in seawater, by developing suitable reference materials for the calibration and control of the sensors during their routine use.

  17. Establishing the 1st Chinese National Standard for inactivated hepatitis A vaccine.

    PubMed

    Gao, Fan; Mao, Qun-Ying; Wang, Yi-Ping; Chen, Pan; Liang, Zheng-Lun

    2016-07-01

    A reference standard calibrated in International Units is needed for the quality control of hepatitis A vaccine. Thus, the National Institutes for Food and Drug Control launched a project to establish a non-adsorbed inactivated hepatitis A vaccine reference as the working standard, calibrated against the 1st International Standard (IS). Two national standard candidates (NSCs) were obtained from two manufacturers and designated as NSC A (lyophilized form) and NSC B (liquid form). Six laboratories participated in the collaborative study and were asked to use their in-house validated enzyme-linked immunosorbent assay methods to determine the hepatitis A vaccine antigen content. Although both exhibited good parallelism and a linear relationship with the IS, NSC B showed better agreement among laboratories than NSC A. Based on the suitability of the candidates, NSC B was selected. The accelerated degradation study showed that NSC B was stable at the storage temperature (≤-70 °C). Therefore, NSC B was approved as the first Chinese national antigen standard for inactivated hepatitis A vaccine, with an assigned antigen content of 70 IU/ml. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  18. Design of an ultra-portable field transfer radiometer supporting automated vicarious calibration

    NASA Astrophysics Data System (ADS)

    Anderson, Nikolaus; Thome, Kurtis; Czapla-Myers, Jeffrey; Biggar, Stuart

    2015-09-01

    The University of Arizona Remote Sensing Group (RSG) began outfitting the radiometric calibration test site (RadCaTS) at Railroad Valley, Nevada, in 2004 for automated vicarious calibration of Earth-observing sensors. RadCaTS was upgraded to use RSG custom 8-band ground-viewing radiometers (GVRs) beginning in 2011, and currently four GVRs are deployed, providing an average reflectance for the test site. This measurement of ground reflectance is the most critical component of vicarious calibration using the reflectance-based method. In order to ensure the quality of these measurements, RSG has been exploring more efficient and accurate methods of on-site calibration evaluation. This work describes the design of, and initial results from, a small portable transfer radiometer for the purpose of GVR calibration validation on site. Prior to deployment, RSG uses high-accuracy laboratory calibration methods in order to provide radiance calibrations with low uncertainties for each GVR. After deployment, a solar-radiation-based calibration has typically been used. That method is highly dependent on a clear, stable atmosphere, requires at least two people to perform, is time-consuming in post-processing, and depends on several large pieces of equipment. In order to provide more regular and more accurate calibration monitoring, the small portable transfer radiometer is designed for quick, one-person operation and on-site field calibration comparison. The radiometer is also suited for laboratory calibration use and thus could serve as a transfer radiometer calibration standard for ground-viewing radiometers of a RadCalNet site.

  19. The analytical calibration in (bio)imaging/mapping of the metallic elements in biological samples--definitions, nomenclature and strategies: state of the art.

    PubMed

    Jurowski, Kamil; Buszewski, Bogusław; Piekoszewski, Wojciech

    2015-01-01

    Nowadays, studies related to the distribution of metallic elements in biological samples are among the most important issues in the field. There are many articles dedicated to specific analytical atomic spectrometry techniques used for mapping/(bio)imaging of metallic elements in various kinds of biological samples. However, in this literature there is a lack of articles dedicated to reviewing calibration strategies and their problems, nomenclature, definitions, and the ways and methods used to obtain quantitative distribution maps. The aim of this article was to characterize the analytical calibration in the (bio)imaging/mapping of metallic elements in biological samples, including (1) nomenclature; (2) definitions; and (3) selected, sophisticated examples of calibration strategies with analytical calibration procedures applied in the different analytical methods currently used to study an element's distribution in biological samples/materials, such as LA-ICP-MS, SIMS, EDS, XRF and others. The main emphasis was placed on the procedures and methodology of the analytical calibration strategy. Additionally, the aim of this work is to systematize the nomenclature for the calibration terms: analytical calibration, analytical calibration method, analytical calibration procedure and analytical calibration strategy. The authors also want to popularize a division of calibration methods different from those hitherto used. This article is the first work in the literature that refers to and emphasizes the many different and complex aspects of analytical calibration problems in studies related to (bio)imaging/mapping of metallic elements in different kinds of biological samples. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. Automatic Calibration Method for Driver’s Head Orientation in Natural Driving Environment

    PubMed Central

    Fu, Xianping; Guan, Xiao; Peli, Eli; Liu, Hongbo; Luo, Gang

    2013-01-01

    Gaze tracking is crucial for studying driver's attention, detecting fatigue, and improving driver assistance systems, but it is difficult in natural driving environments due to nonuniform and highly variable illumination and large head movements. Traditional calibrations that require subjects to follow calibrators are very cumbersome to implement in daily driving situations. A new automatic calibration method for determining head orientation, based on a single camera and utilizing the side mirrors, the rear-view mirror, the instrument board, and different zones of the windshield as calibration points, is presented in this paper. Supported by a self-learning algorithm, the system tracks the head and categorizes the head pose into 12 gaze zones based on facial features. A particle filter is used to estimate the head pose and obtain an accurate gaze zone by updating the calibration parameters. Experimental results show that, after several hours of driving, the automatic calibration method, without the driver's cooperation, can achieve the same accuracy as a manual calibration method. The mean error of the estimated eye gaze was less than 5° in day and night driving. PMID:24639620
