Sample records for simple calibration method

  1. Simple transfer calibration method for a Cimel Sun-Moon photometer: calculating lunar calibration coefficients from Sun calibration constants.

    PubMed

    Li, Zhengqiang; Li, Kaitao; Li, Donghui; Yang, Jiuchun; Xu, Hua; Goloub, Philippe; Victori, Stephane

    2016-09-20

    The Cimel new technologies allow both daytime and nighttime aerosol optical depth (AOD) measurements. Although the daytime AOD calibration protocols are well established, accurate and simple nighttime calibration is still a challenging task. Standard lunar-Langley and intercomparison calibration methods both require specific conditions in terms of atmospheric stability and site conditions. Additionally, the lunar irradiance model has known limits on its uncertainty. This paper presents a simple calibration method that transfers the direct-Sun calibration constant, V0,Sun, to the lunar irradiance calibration coefficient, CMoon. Our approach is a pure calculation method, independent of site limits, e.g., Moon phase. The method is also not affected by the lunar irradiance model limitations, which is the largest error source of traditional calibration methods. In addition, this new transfer calibration approach is easy to use in the field since CMoon can be obtained directly once V0,Sun is known. Error analysis suggests that the average uncertainty of CMoon over the 440-1640 nm bands obtained with the transfer method is 2.4%-2.8%, depending on the V0,Sun approach (Langley or intercomparison), which is theoretically comparable with that of the lunar-Langley approach. In this paper, the Sun-Moon transfer and the Langley methods are compared based on site measurements in Beijing, and the day-night measurement continuity and performance are analyzed.
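
    The transfer relation between V0,Sun and CMoon is not given in the abstract; as context, here is a minimal sketch of the Langley step that yields V0,Sun, assuming the standard Beer-Lambert relation ln V = ln V0 - m*tau and illustrative signal values (not the Beijing data).

      import numpy as np

      # Langley extrapolation sketch: regress ln(signal) on airmass; the intercept
      # at zero airmass gives the top-of-atmosphere constant V0,Sun.
      airmass = np.array([2.0, 3.0, 4.0, 5.0, 6.0])          # illustrative stable-morning airmasses
      voltage = np.array([0.81, 0.74, 0.67, 0.61, 0.55])     # direct-Sun signal, arbitrary units

      slope, intercept = np.polyfit(airmass, np.log(voltage), 1)
      V0_sun = np.exp(intercept)     # extrapolated calibration constant
      tau = -slope                   # total optical depth during the run
      print(f"V0,Sun = {V0_sun:.3f}, tau = {tau:.3f}")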

  2. Improved dewpoint-probe calibration

    NASA Technical Reports Server (NTRS)

    Stephenson, J. G.; Theodore, E. A.

    1978-01-01

    Relatively-simple pressure-control apparatus calibrates dewpoint probes considerably faster than conventional methods, with no loss of accuracy. Technique requires only pressure measurement at each calibration point and single absolute-humidity measurement at beginning of run. Several probes can be calibrated simultaneously and points can be checked above room temperature.

  3. An Accurate Non-Cooperative Method for Measuring Textureless Spherical Target Based on Calibrated Lasers.

    PubMed

    Wang, Fei; Dong, Hang; Chen, Yanan; Zheng, Nanning

    2016-12-09

    Strong demands for accurate non-cooperative target measurement have been arising recently for the tasks of assembling and capturing. Spherical objects are one of the most common targets in these applications. However, the performance of the traditional vision-based reconstruction method was limited for practical use when handling poorly-textured targets. In this paper, we propose a novel multi-sensor fusion system for measuring and reconstructing textureless non-cooperative spherical targets. Our system consists of four simple lasers and a visual camera. This paper presents a complete framework of estimating the geometric parameters of textureless spherical targets: (1) an approach to calibrate the extrinsic parameters between a camera and simple lasers; and (2) a method to reconstruct the 3D position of the laser spots on the target surface and achieve the refined results via an optimized scheme. The experiment results show that our proposed calibration method can obtain a fine calibration result, which is comparable to the state-of-the-art LRF-based methods, and our calibrated system can estimate the geometric parameters with high accuracy in real time.

  4. An Accurate Non-Cooperative Method for Measuring Textureless Spherical Target Based on Calibrated Lasers

    PubMed Central

    Wang, Fei; Dong, Hang; Chen, Yanan; Zheng, Nanning

    2016-01-01

    Strong demands for accurate non-cooperative target measurement have been arising recently for the tasks of assembling and capturing. Spherical objects are one of the most common targets in these applications. However, the performance of the traditional vision-based reconstruction method was limited for practical use when handling poorly-textured targets. In this paper, we propose a novel multi-sensor fusion system for measuring and reconstructing textureless non-cooperative spherical targets. Our system consists of four simple lasers and a visual camera. This paper presents a complete framework of estimating the geometric parameters of textureless spherical targets: (1) an approach to calibrate the extrinsic parameters between a camera and simple lasers; and (2) a method to reconstruct the 3D position of the laser spots on the target surface and achieve the refined results via an optimized scheme. The experiment results show that our proposed calibration method can obtain a fine calibration result, which is comparable to the state-of-the-art LRF-based methods, and our calibrated system can estimate the geometric parameters with high accuracy in real time. PMID:27941705

  5. ON THE CALIBRATION OF DK-02 AND KID DOSIMETERS (in Estonian)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ehvaert, H.

    1963-01-01

    For the periodic calibration of the DK-02 and KID dosimeters, the rotating-stand method, which is more advantageous than the usual method, is recommended. The calibration can be accomplished in a strong gamma field, considerably reducing the time necessary for calibration. Using a point source, the dose becomes a simple function of time and geometrical parameters. The experimental values are in good agreement with theoretical values. (tr-auth)

  6. Evaluation of assigned-value uncertainty for complex calibrator value assignment processes: a prealbumin example.

    PubMed

    Middleton, John; Vaks, Jeffrey E

    2007-04-01

    Errors of calibrator-assigned values lead to errors in the testing of patient samples. The ability to estimate the uncertainties of calibrator-assigned values and other variables minimizes errors in testing processes. International Organization for Standardization guidelines provide simple equations for the estimation of calibrator uncertainty with simple value-assignment processes, but other methods are needed to estimate uncertainty in complex processes. We estimated the assigned-value uncertainty with a Monte Carlo computer simulation of a complex value-assignment process, based on a formalized description of the process, with measurement parameters estimated experimentally. This method was applied to study the uncertainty of a multilevel calibrator value assignment for a prealbumin immunoassay. The simulation results showed that the component of the uncertainty added by the process of value transfer from the reference material CRM470 to the calibrator is smaller than that of the reference material itself (<0.8% vs 3.7%). Varying the process parameters in the simulation model allowed for optimizing the process, while keeping the added uncertainty small. The patient result uncertainty caused by the calibrator uncertainty was also found to be small. This method of estimating uncertainty is a powerful tool that allows for estimation of calibrator uncertainty for optimization of various value assignment processes, with a reduced number of measurements and reagent costs, while satisfying the uncertainty requirements. The new method expands and augments existing methods to allow estimation of uncertainty in complex processes.

  7. On the use of video projectors for three-dimensional scanning

    NASA Astrophysics Data System (ADS)

    Juarez-Salazar, Rigoberto; Diaz-Ramirez, Victor H.; Robledo-Sanchez, Carlos; Diaz-Gonzalez, Gerardo

    2017-08-01

    Structured light projection is one of the most useful methods for accurate three-dimensional scanning. Video projectors are typically used as the illumination source. However, because video projectors are not designed for structured light systems, some considerations such as gamma calibration must be taken into account. In this work, we present a simple method for gamma calibration of video projectors. First, the experimental fringe patterns are normalized. Then, the samples of the fringe patterns are sorted in ascending order. The sample sorting leads to a simple three-parameter sine curve that is fitted using the Gauss-Newton algorithm. The novelty of this method is that the sorting process removes the effect of the unknown phase. Thus, the resulting gamma calibration algorithm is significantly simplified. The feasibility of the proposed method is illustrated in a three-dimensional scanning experiment.
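
    A sketch of the normalize-sort-fit pipeline on simulated fringe samples follows. The three-parameter model form, the synthetic data, and the use of scipy's Levenberg-Marquardt fit (a damped Gauss-Newton variant) are assumptions standing in for the paper's exact formulation and its estimation of gamma.

      import numpy as np
      from scipy.optimize import curve_fit

      def sorted_fringe(p, a, b, c):
          # assumed quantile-curve form of the sorted, normalized fringe samples
          return a + b * np.sin(c * (p - 0.5))

      rng = np.random.default_rng(0)
      p = np.linspace(0.0, 1.0, 400)
      raw = 120.0 + 100.0 * np.sin(6.0 * np.pi * p + rng.uniform(0.0, 2.0 * np.pi))  # camera samples
      raw += rng.normal(0.0, 1.0, p.size)

      norm = (raw - raw.min()) / (raw.max() - raw.min())   # step 1: normalize
      samples = np.sort(norm)                              # step 2: sort; the unknown phase drops out

      popt, _ = curve_fit(sorted_fringe, p, samples, p0=(0.5, 0.5, 3.0))  # step 3: fit
      print("fitted three-parameter sine (a, b, c):", popt)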

  8. Calibration of a horizontally acting force transducer with the use of a simple pendulum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taberner, Andrew J.; Hunter, Ian W.

    This article details the implementation of a method for calibrating horizontally measuring force transducers using a pendulum. The technique exploits the sinusoidal inertial force generated by a suspended mass as it pendulates about a point on the measurement axis of the force transducer. The method is used to calibrate a reconfigurable, custom-made force transducer based on exchangeable cantilevers with stiffness ranging from 10 to 10^4 N/m. In this implementation, the relative combined standard uncertainty in the calibrated transducer stiffness is 0.41% while the repeatability of the calibration technique is 0.46%.
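
    A small-angle sketch of the reference force such a pendulating mass provides: the relation F(t) ~ m*g*theta0*sin(omega*t) is the textbook approximation, and all values below are illustrative assumptions, not the article's setup.

      import numpy as np

      # Pendulum reference-force sketch (small-angle approximation, illustrative values).
      m, L, theta0, g = 0.100, 0.50, 0.05, 9.81    # kg, m, rad, m/s^2

      omega = np.sqrt(g / L)                        # pendulum angular frequency
      t = np.linspace(0.0, 5.0, 1000)
      force = m * g * theta0 * np.sin(omega * t)    # sinusoidal horizontal inertial force on the support

      print(f"peak reference force ~ {m * g * theta0 * 1e3:.2f} mN at {omega / (2 * np.pi):.2f} Hz")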

  9. A simple, accurate, field-portable mixing ratio generator and Rayleigh distillation device

    USDA-ARS?s Scientific Manuscript database

    Routine field calibration of water vapor analyzers has always been a challenging problem for those making long-term flux measurements at remote sites. Automated sampling of standard gases from compressed tanks, the method of choice for CO2 calibration, cannot be used for H2O. Calibrations are typica...

  10. On the calibration process of film dosimetry: OLS inverse regression versus WLS inverse prediction.

    PubMed

    Crop, F; Van Rompaye, B; Paelinck, L; Vakaet, L; Thierens, H; De Wagter, C

    2008-07-21

    The purpose of this study was twofold: to put forward a statistically correct model for film calibration and to optimize this process. A reliable calibration is needed in order to perform accurate reference dosimetry with radiographic (Gafchromic) film. Sometimes, an ordinary least squares simple linear (in the parameters) regression is applied to the dose-optical-density (OD) curve, with the dose as a function of OD (inverse regression) or sometimes OD as a function of dose (inverse prediction). The application of a simple linear regression fit is an invalid method because heteroscedasticity of the data is not taken into account. This could lead to erroneous results originating from the calibration process itself and thus to a lower accuracy. In this work, we compare the ordinary least squares (OLS) inverse regression method with the correct weighted least squares (WLS) inverse prediction method to create calibration curves. We found that the OLS inverse regression method could lead to a prediction bias of up to 7.3 cGy at 300 cGy and total prediction errors of 3% or more for Gafchromic EBT film. Application of the WLS inverse prediction method resulted in a maximum prediction bias of 1.4 cGy and total prediction errors below 2% in a 0-400 cGy range. We developed a Monte-Carlo-based process to optimize calibrations, depending on the needs of the experiment. This type of thorough analysis can lead to a higher accuracy for film dosimetry.
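
    A minimal sketch contrasting the two fits on synthetic, heteroscedastic data follows; the linear net-OD model and the noise parameters are assumptions, not the paper's EBT film response.

      import numpy as np

      rng = np.random.default_rng(0)
      dose = np.repeat(np.arange(0, 401, 50), 5).astype(float)        # cGy, five films per level
      od = 0.002 * dose + rng.normal(0, 0.002 + 1e-5 * dose)          # noise grows with dose (heteroscedastic)

      # OLS inverse regression: fit dose directly as a function of OD.
      b1, b0 = np.polyfit(od, dose, 1)
      dose_ols = b1 * 0.5 + b0                                        # predicted dose at OD = 0.5

      # WLS inverse prediction: fit OD as a function of dose with 1/sigma weights, then invert.
      a1, a0 = np.polyfit(dose, od, 1, w=1.0 / (0.002 + 1e-5 * dose))
      dose_wls = (0.5 - a0) / a1

      print(f"dose at OD=0.5: OLS inverse regression {dose_ols:.1f} cGy, "
            f"WLS inverse prediction {dose_wls:.1f} cGy")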

  11. An Improved Fast Self-Calibration Method for Hybrid Inertial Navigation System under Stationary Condition

    PubMed Central

    Liu, Bingqi; Wei, Shihui; Su, Guohua; Wang, Jiping; Lu, Jiazhen

    2018-01-01

    The navigation accuracy of the inertial navigation system (INS) can be greatly improved when the errors of the inertial measurement unit (IMU), such as gyro drifts and accelerometer biases, are effectively calibrated and compensated. To reduce the requirement for turntable precision in the classical calibration method, a continuous dynamic self-calibration method based on a three-axis rotating frame for the hybrid inertial navigation system is presented. First, by selecting a suitable IMU frame, the error models of the accelerometers and gyros are established. Then, by taking the navigation errors during rolling as the observations, all twenty-one error parameters of the hybrid inertial navigation system (HINS) are identified based on the calculation of the intermediate parameter. An actual experiment verifies that the method can identify all error parameters of the HINS and has accuracy equivalent to that of classical calibration on a high-precision turntable. In addition, this method is rapid, simple and feasible. PMID:29695041

  12. An Improved Fast Self-Calibration Method for Hybrid Inertial Navigation System under Stationary Condition.

    PubMed

    Liu, Bingqi; Wei, Shihui; Su, Guohua; Wang, Jiping; Lu, Jiazhen

    2018-04-24

    The navigation accuracy of the inertial navigation system (INS) can be greatly improved when the errors of the inertial measurement unit (IMU), such as gyro drifts and accelerometer biases, are effectively calibrated and compensated. To reduce the requirement for turntable precision in the classical calibration method, a continuous dynamic self-calibration method based on a three-axis rotating frame for the hybrid inertial navigation system is presented. First, by selecting a suitable IMU frame, the error models of the accelerometers and gyros are established. Then, by taking the navigation errors during rolling as the observations, all twenty-one error parameters of the hybrid inertial navigation system (HINS) are identified based on the calculation of the intermediate parameter. An actual experiment verifies that the method can identify all error parameters of the HINS and has accuracy equivalent to that of classical calibration on a high-precision turntable. In addition, this method is rapid, simple and feasible.

  13. Calibrating the ECCO ocean general circulation model using Green's functions

    NASA Technical Reports Server (NTRS)

    Menemenlis, D.; Fu, L. L.; Lee, T.; Fukumori, I.

    2002-01-01

    Green's functions provide a simple, yet effective, method to test and calibrate General-Circulation-Model(GCM) parameterizations, to study and quantify model and data errors, to correct model biases and trends, and to blend estimates from different solutions and data products.

  14. A fast calibration method for 3-D tracking of ultrasound images using a spatial localizer.

    PubMed

    Pagoulatos, N; Haynor, D R; Kim, Y

    2001-09-01

    We have developed a fast calibration method for computing the position and orientation of 2-D ultrasound (US) images in 3-D space where a position sensor is mounted on the US probe. This calibration is required in the fields of 3-D ultrasound and registration of ultrasound with other imaging modalities. Most of the existing calibration methods require a complex and tedious experimental procedure. Our method is simple and it is based on a custom-built phantom. Thirty N-fiducials (markers in the shape of the letter "N") embedded in the phantom provide the basis for our calibration procedure. We calibrated a 3.5-MHz sector phased-array probe with a magnetic position sensor, and we studied the accuracy and precision of our method. A typical calibration procedure requires approximately 2 min. We conclude that we can achieve accurate and precise calibration using a single US image, provided that a large number (approximately ten) of N-fiducials are captured within the US image, enabling a representative sampling of the imaging plane.

  15. The simple procedure for the fluxgate magnetometers calibration

    NASA Astrophysics Data System (ADS)

    Marusenkov, Andriy

    2014-05-01

    Fluxgate magnetometers are widely used in geophysical investigations, including geomagnetic field monitoring at the global network of geomagnetic observatories as well as electromagnetic sounding of the Earth's crust conductivity. For these tasks the magnetometers have to be calibrated with an appropriate level of accuracy. As a particular case, ways to satisfy the recent requirements on the scaling and orientation errors of 1-second INTERMAGNET magnetometers are considered in this work. The goal of the present study was to choose a simple and reliable calibration method for estimating the scale factors and angular errors of three-axis magnetometers in the field. There are a large number of scalar calibration methods, which use a free rotation of the sensor in the calibration field followed by complicated data-processing procedures for the numerical solution of a high-order equation set. The chosen approach also exploits the Earth's magnetic field as a calibrating signal but, in contrast to other methods, the sensor has to be oriented in particular positions with respect to the total field vector instead of being freely rotated. This allows very simple and straightforward linear computation formulas to be used and, as a result, more reliable estimates of the calibrated parameters to be achieved. The scale factors are estimated by sequentially aligning each component of the sensor in two positions: parallel and anti-parallel to the Earth's magnetic field vector. The non-orthogonality angles between each pair of components are estimated after sequentially aligning the components at angles of +/- 45 and +/- 135 degrees of arc with respect to the total field vector. Owing to this four-position approach, the estimates of the non-orthogonality angles are invariant to the zero offsets and the non-linearity of the transfer functions of the components. Experimental validation of the proposed method by means of a coil calibration system reveals that the achieved accuracy (<0.04% for scale factors and 0.03 degrees of arc for angle errors) is sufficient for many applications, in particular for satisfying the INTERMAGNET requirements for 1-second instruments.
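
    The parallel/anti-parallel step is simple enough to sketch; the relation below (reading = offset +/- k*F) is inferred from the description in the abstract, and the field and reading values are illustrative.

      # Parallel/anti-parallel scale-factor sketch: F is the total field from a
      # reference scalar magnetometer; readings are the component output when the
      # axis is aligned with, then against, the field (illustrative numbers).
      F = 49850.0              # nT, reference total field
      r_parallel = 49912.3     # nT, component reading aligned with the field
      r_antiparallel = -49787.9

      scale_factor = (r_parallel - r_antiparallel) / (2.0 * F)
      zero_offset = (r_parallel + r_antiparallel) / 2.0
      print(f"scale factor = {scale_factor:.5f}, zero offset = {zero_offset:.1f} nT")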

  16. A proposed standard methodology for estimating the wounding capacity of small calibre projectiles or other missiles.

    PubMed

    Berlin, R H; Janzon, B; Rybeck, B; Schantz, B; Seeman, T

    1982-01-01

    A standard methodology for estimating the energy transfer characteristics of small calibre bullets and other fast missiles is proposed, consisting of firings against targets made of soft soap. The target is evaluated by measuring the size of the permanent cavity remaining in it after the shot. The method is very simple to use and does not require access to any sophisticated measuring equipment. It can be applied under all circumstances, even under field conditions. Adequate methods of calibration to ensure good accuracy are suggested. The precision and limitations of the method are discussed.

  17. Evaluation of Automated Model Calibration Techniques for Residential Building Energy Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, J.; Polly, B.; Collis, J.

    2013-09-01

    This simulation study adapts and applies the general framework described in BESTEST-EX (Judkoff et al 2010) for self-testing residential building energy model calibration methods. BEopt/DOE-2.2 is used to evaluate four mathematical calibration methods in the context of monthly, daily, and hourly synthetic utility data for a 1960's-era existing home in a cooling-dominated climate. The home's model inputs are assigned probability distributions representing uncertainty ranges, random selections are made from the uncertainty ranges to define 'explicit' input values, and synthetic utility billing data are generated using the explicit input values. The four calibration methods evaluated in this study are: an ASHRAE 1051-RP-based approach (Reddy and Maor 2006), a simplified simulated annealing optimization approach, a regression metamodeling optimization approach, and a simple output ratio calibration approach. The calibration methods are evaluated for monthly, daily, and hourly cases; various retrofit measures are applied to the calibrated models and the methods are evaluated based on the accuracy of predicted savings, computational cost, repeatability, automation, and ease of implementation.
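
    Of the four methods, the output ratio approach is simple enough to sketch: scale the simulated energy use by the ratio of measured to simulated totals. The monthly values below are placeholders and the single-ratio scaling is a generic illustration, not the study's implementation.

      import numpy as np

      measured_kwh = np.array([980, 870, 760, 640, 720, 910, 1100, 1150, 1020, 800, 770, 940])
      simulated_kwh = np.array([900, 820, 700, 600, 690, 850, 1000, 1060, 950, 760, 730, 880])

      ratio = measured_kwh.sum() / simulated_kwh.sum()   # single calibration ratio
      calibrated_kwh = simulated_kwh * ratio             # calibrated monthly predictions
      print(f"scaling ratio = {ratio:.3f}")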

  18. Evaluation of Automated Model Calibration Techniques for Residential Building Energy Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, Joseph; Polly, Ben; Collis, Jon

    2013-09-01

    This simulation study adapts and applies the general framework described in BESTEST-EX (Judkoff et al 2010) for self-testing residential building energy model calibration methods. BEopt/DOE-2.2 is used to evaluate four mathematical calibration methods in the context of monthly, daily, and hourly synthetic utility data for a 1960's-era existing home in a cooling-dominated climate. The home's model inputs are assigned probability distributions representing uncertainty ranges, random selections are made from the uncertainty ranges to define "explicit" input values, and synthetic utility billing data are generated using the explicit input values. The four calibration methods evaluated in this study are: an ASHRAE 1051-RP-based approach (Reddy and Maor 2006), a simplified simulated annealing optimization approach, a regression metamodeling optimization approach, and a simple output ratio calibration approach. The calibration methods are evaluated for monthly, daily, and hourly cases; various retrofit measures are applied to the calibrated models and the methods are evaluated based on the accuracy of predicted savings, computational cost, repeatability, automation, and ease of implementation.

  19. Method for Accurately Calibrating a Spectrometer Using Broadband Light

    NASA Technical Reports Server (NTRS)

    Simmons, Stephen; Youngquist, Robert

    2011-01-01

    A novel method has been developed for performing very fine calibration of a spectrometer. This process is particularly useful for modern miniature charge-coupled device (CCD) spectrometers where a typical factory wavelength calibration has been performed and a finer, more accurate calibration is desired. Typically, the factory calibration is done with a spectral line source that generates light at known wavelengths, allowing specific pixels in the CCD array to be assigned wavelength values. This method is good to about 1 nm across the spectrometer's wavelength range. This new method appears to be accurate to about 0.1 nm, a factor of ten improvement. White light is passed through an unbalanced Michelson interferometer, producing an optical signal with significant spectral variation. A simple theory can be developed to describe this spectral pattern, so by comparing the actual spectrometer output against this predicted pattern, errors in the wavelength assignment made by the spectrometer can be determined.
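
    A hedged sketch of the predicted channelled spectrum from an unbalanced Michelson, I(lambda) proportional to 1 + cos(2*pi*OPD/lambda); the OPD value and pixel grid are assumptions, not the article's numbers.

      import numpy as np

      opd_nm = 50_000.0                                 # assumed optical path difference
      wavelength_nm = np.linspace(400.0, 800.0, 2048)   # factory wavelength axis of the CCD pixels

      predicted = 0.5 * (1.0 + np.cos(2.0 * np.pi * opd_nm / wavelength_nm))

      # Fringe maxima occur where OPD/lambda is an integer, so each measured maximum
      # pins a pixel to a wavelength lambda_k = OPD / k for integer k.
      k = np.arange(int(opd_nm / 800) + 1, int(opd_nm / 400) + 1)
      maxima_nm = opd_nm / k

      # Count local maxima of the sampled pattern for comparison with the analytic count.
      is_max = (predicted[1:-1] > predicted[:-2]) & (predicted[1:-1] > predicted[2:])
      print(f"{is_max.sum()} maxima on the pixel grid vs {maxima_nm.size} expected analytically")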

  20. A novel calibration method for non-orthogonal shaft laser theodolite measurement system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Bin; Yang, Fengting; Ding, Wen

    2016-03-15

    The non-orthogonal shaft laser theodolite (N-theodolite) is a new kind of large-scale metrological instrument made up of two rotary tables and one collimated laser. There are three axes in an N-theodolite. Following the naming conventions of the traditional theodolite, the rotary axes of the two rotary tables are called the horizontal axis and the vertical axis, respectively, and the collimated laser beam is named the sight axis. The difference from a traditional theodolite is obvious, since the N-theodolite has no orthogonality or intersection accuracy requirements. So the calibration method for the traditional theodolite is no longer suitable for the N-theodolite, while the calibration method currently applied is quite complicated. This paper therefore introduces a novel calibration method for the non-orthogonal shaft laser theodolite measurement system to simplify the procedure and improve the calibration accuracy. A simple two-step process, calibration of intrinsic parameters and of extrinsic parameters, is proposed by the novel method. Experiments have shown its efficiency and accuracy.

  1. Model independent approach to the single photoelectron calibration of photomultiplier tubes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saldanha, R.; Grandi, L.; Guardincerri, Y.

    2017-08-01

    The accurate calibration of photomultiplier tubes is critical in a wide variety of applications in which it is necessary to know the absolute number of detected photons or precisely determine the resolution of the signal. Conventional calibration methods rely on fitting the photomultiplier response to a low intensity light source with analytical approximations to the single photoelectron distribution, often leading to biased estimates due to the inability to accurately model the full distribution, especially at low charge values. In this paper we present a simple statistical method to extract the relevant single photoelectron calibration parameters without making any assumptions about the underlying single photoelectron distribution. We illustrate the use of this method through the calibration of a Hamamatsu R11410 photomultiplier tube and study the accuracy and precision of the method using Monte Carlo simulations. The method is found to have significantly reduced bias compared to conventional methods and works under a wide range of light intensities, making it suitable for simultaneously calibrating large arrays of photomultiplier tubes.
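
    A hedged sketch of how single-photoelectron (SPE) parameters can be recovered from sample statistics alone, via standard compound-Poisson relations between laser-on and laser-off (pedestal) charge spectra; the paper's exact estimators are not reproduced here, and all numbers are illustrative, in arbitrary charge units.

      import numpy as np

      occupancy = 0.10                   # mean photoelectrons per trigger (e.g. from the empty-event fraction)
      mean_on, var_on = 1.00, 12.6       # laser-on charge mean and variance
      mean_off, var_off = 0.00, 1.00     # pedestal (laser-off) mean and variance

      mu_spe = (mean_on - mean_off) / occupancy              # mean SPE charge
      var_spe = (var_on - var_off) / occupancy - mu_spe**2   # SPE variance
      print(f"SPE mean = {mu_spe:.1f}, SPE sigma = {np.sqrt(var_spe):.1f}")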

  2. Reversed inverse regression for the univariate linear calibration and its statistical properties derived using a new methodology

    NASA Astrophysics Data System (ADS)

    Kang, Pilsang; Koo, Changhoi; Roh, Hokyu

    2017-11-01

    Since simple linear regression theory was established at the beginning of the 1900s, it has been used in a variety of fields. Unfortunately, it cannot be used directly for calibration. In practical calibrations, the observed measurements (the inputs) are subject to errors, and hence they vary, thus violating the assumption that the inputs are fixed. Therefore, in the case of calibration, the regression line fitted using the method of least squares is not consistent with the statistical properties of simple linear regression as already established based on this assumption. To resolve this problem, "classical regression" and "inverse regression" have been proposed. However, they do not completely resolve the problem. As a fundamental solution, we introduce "reversed inverse regression" along with a new methodology for deriving its statistical properties. In this study, the statistical properties of this regression are derived using the "error propagation rule" and the "method of simultaneous error equations" and are compared with those of the existing regression approaches. The accuracy of the statistical properties thus derived is investigated in a simulation study. We conclude that the newly proposed regression and methodology constitute the complete regression approach for univariate linear calibrations.

  3. Robot geometry calibration

    NASA Technical Reports Server (NTRS)

    Hayati, Samad; Tso, Kam; Roston, Gerald

    1988-01-01

    Autonomous robot task execution requires that the end effector of the robot be positioned accurately relative to a reference world-coordinate frame. The authors present a complete formulation to identify the actual robot geometric parameters. The method applies to any serial-link manipulator with an arbitrary order and combination of revolute and prismatic joints. A method is also presented to solve the inverse kinematics of the actual robot model, which usually is not a so-called simple robot. Experimental results obtained with a PUMA 560 and simple measurement hardware are presented. As a result of this calibration, a precision move command was designed, integrated into a robot language, RCCL, and used in the NASA Telerobot Testbed.

  4. Different grades MEMS accelerometers error characteristics

    NASA Astrophysics Data System (ADS)

    Pachwicewicz, M.; Weremczuk, J.

    2017-08-01

    The paper presents the calibration results of two MEMS accelerometers of different price and quality grades and discusses the different types of accelerometer errors. The calibration for error determination is performed with reference centrifugal measurements. The design and measurement errors of the centrifuge are discussed as well. It is shown that the error characteristics of the two sensors are very different, and the simple calibration methods presented in the literature cannot be used in both cases.

  5. Autonomous Kinematic Calibration of the Robot Manipulator with a Linear Laser-Vision Sensor

    NASA Astrophysics Data System (ADS)

    Kang, Hee-Jun; Jeong, Jeong-Woo; Shin, Sung-Weon; Suh, Young-Soo; Ro, Young-Schick

    This paper presents a new autonomous kinematic calibration technique using a laser-vision sensor called "Perceptron TriCam Contour". Because the sensor measures by capturing the image of a projected laser line on the surface of the object, we set up a long, straight line of very fine string inside the robot workspace and then allow the sensor mounted on the robot to measure the intersection point of the string and the projected laser line. The data collected by changing the robot configuration and measuring the intersection points are constrained to lie on a single straight line, so that the closed-loop calibration method can be applied. The resulting calibration method is simple and accurate and is also suitable for on-site calibration in an industrial environment. The method is implemented on a Hyundai VORG-35 robot to demonstrate its effectiveness.

  6. User-friendly freehand ultrasound calibration using Lego bricks and automatic registration.

    PubMed

    Xiao, Yiming; Yan, Charles Xiao Bo; Drouin, Simon; De Nigris, Dante; Kochanowska, Anna; Collins, D Louis

    2016-09-01

    As an inexpensive, noninvasive, and portable clinical imaging modality, ultrasound (US) has been widely employed in many interventional procedures for monitoring potential tissue deformation, surgical tool placement, and locating surgical targets. The application requires the spatial mapping between 2D US images and 3D coordinates of the patient. Although positions of the devices (i.e., ultrasound transducer) and the patient can be easily recorded by a motion tracking system, the spatial relationship between the US image and the tracker attached to the US transducer needs to be estimated through an US calibration procedure. Previously, various calibration techniques have been proposed, where a spatial transformation is computed to match the coordinates of corresponding features in a physical phantom and those seen in the US scans. However, most of these methods are difficult for novice users to use. We proposed an ultrasound calibration method by constructing a phantom from simple Lego bricks and applying an automated multi-slice 2D-3D registration scheme without volumetric reconstruction. The method was validated for its calibration accuracy and reproducibility. Our method yields a calibration accuracy of [Formula: see text] mm and a calibration reproducibility of 1.29 mm. We have proposed a robust, inexpensive, and easy-to-use ultrasound calibration method.

  7. Simultaneous determination of potassium guaiacolsulfonate, guaifenesin, diphenhydramine HCl and carbetapentane citrate in syrups by using HPLC-DAD coupled with partial least squares multivariate calibration.

    PubMed

    Dönmez, Ozlem Aksu; Aşçi, Bürge; Bozdoğan, Abdürrezzak; Sungur, Sidika

    2011-02-15

    A simple and rapid analytical procedure was proposed for the determination of chromatographic peaks by means of partial least squares multivariate calibration (PLS) of high-performance liquid chromatography with diode array detection (HPLC-DAD). The method is exemplified with the analysis of quaternary mixtures of potassium guaiacolsulfonate (PG), guaifenesin (GU), diphenhydramine HCl (DP) and carbetapentane citrate (CP) in syrup preparations. In this method, the peak area does not need to be measured directly and predictions are more accurate. Though the chromatographic and spectral peaks of the analytes were heavily overlapped and interferents coeluted with the compounds studied, good recoveries of the analytes could be obtained with HPLC-DAD coupled with PLS calibration. The method was tested by analyzing a synthetic mixture of PG, GU, DP and CP. As a comparison method, a classical HPLC method was used. The proposed methods were applied to syrup samples containing the four drugs and the obtained results were statistically compared with each other. Finally, the main advantages of the HPLC-PLS method over the classical HPLC method are the use of a simple mobile phase, a shorter analysis time, and no need for an internal standard or gradient elution. Copyright © 2010 Elsevier B.V. All rights reserved.

  8. A pipette-based calibration system for fast-scan cyclic voltammetry with fast response times.

    PubMed

    Ramsson, Eric S

    2016-01-01

    Fast-scan cyclic voltammetry (FSCV) is an electrochemical technique that utilizes the oxidation and/or reduction of an analyte of interest to infer rapid changes in concentrations. In order to calibrate the resulting oxidative or reductive current, known concentrations of an analyte must be introduced under controlled settings. Here, I describe a simple and cost-effective method, using a Petri dish and pipettes, for the calibration of carbon fiber microelectrodes (CFMs) using FSCV.

  9. Accurate and simple method for quantification of hepatic fat content using magnetic resonance imaging: a prospective study in biopsy-proven nonalcoholic fatty liver disease.

    PubMed

    Hatta, Tomoko; Fujinaga, Yasunari; Kadoya, Masumi; Ueda, Hitoshi; Murayama, Hiroaki; Kurozumi, Masahiro; Ueda, Kazuhiko; Komatsu, Michiharu; Nagaya, Tadanobu; Joshita, Satoru; Kodama, Ryo; Tanaka, Eiji; Uehara, Tsuyoshi; Sano, Kenji; Tanaka, Naoki

    2010-12-01

    To assess the degree of hepatic fat content, simple and noninvasive methods with high objectivity and reproducibility are required. Magnetic resonance imaging (MRI) is one such candidate, although its accuracy remains unclear. We aimed to validate an MRI method for quantifying hepatic fat content by calibrating MRI reading with a phantom and comparing MRI measurements in human subjects with estimates of liver fat content in liver biopsy specimens. The MRI method was performed by a combination of MRI calibration using a phantom and double-echo chemical shift gradient-echo sequence (double-echo fast low-angle shot sequence) that has been widely used on a 1.5-T scanner. Liver fat content in patients with nonalcoholic fatty liver disease (NAFLD, n = 26) was derived from a calibration curve generated by scanning the phantom. Liver fat was also estimated by optical image analysis. The correlation between the MRI measurements and liver histology findings was examined prospectively. Magnetic resonance imaging measurements showed a strong correlation with liver fat content estimated from the results of light microscopic examination (correlation coefficient 0.91, P < 0.001) regardless of the degree of hepatic steatosis. Moreover, the severity of lobular inflammation or fibrosis did not influence the MRI measurements. This MRI method is simple and noninvasive, has excellent ability to quantify hepatic fat content even in NAFLD patients with mild steatosis or advanced fibrosis, and can be performed easily without special devices.

  10. Calibration of satellite sensors after launch

    NASA Technical Reports Server (NTRS)

    Fraser, R. S.; Kaufman, Y. J.

    1986-01-01

    A simple and accurate method for the postflight calibration of satellite Visible Infrared Spin-Scan Radiometers (VISSR) is presented, and the results of inflight testing are reported. The calibration source for the VISSR, with its effective wavelength of 610 nm, is the radiance of sunlight, measured in calibrated reflectance units, scattered by the atmospheric gas above ocean far from land. Only the lowest 20 percent of the full-scale VISSR response is calibrated. VISSR testing aboard two geostationary operational environmental satellites between 1980 and 1983 showed calibration coefficient variations of only ± 12 percent and ± 2 percent. Good agreement was found between values of aerosol optical thickness measured by VISSR and those measured from the ground.

  11. The research on calibration methods of dual-CCD laser three-dimensional human face scanning system

    NASA Astrophysics Data System (ADS)

    Wang, Jinjiang; Chang, Tianyu; Ge, Baozhen; Tian, Qingguo; Yang, Fengting; Shi, Shendong

    2013-09-01

    In this paper, building on the performance advantages of the two-step method, we combine the stereo matching of binocular stereo vision with active laser scanning to calibrate the system. First, we select a reference camera coordinate system as the world coordinate system and unify the coordinates of the two CCD cameras. We then obtain the new perspective projection matrix (PPM) of each camera after epipolar rectification, which defines the corresponding epipolar equation of the two cameras. Using the trigonometric parallax method, we can measure the space point position after distortion correction and achieve stereo matching calibration between two image points. Experiments verify that this method improves accuracy while guaranteeing system stability. The stereo matching calibration has a simple, low-cost process and simplifies regular maintenance work. It can acquire 3D coordinates using only planar checkerboard calibration, without the need to design a specific standard target or use an electronic theodolite. It was found during the experiments that two-step calibration error and lens distortion lead to stratification of the point cloud data. The proposed calibration method, which combines active line laser scanning and binocular stereo vision, has the advantages of both and a more flexible applicability. Theoretical analysis and experiments show that the method is reasonable.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eken, Tuna; Mayeda, Kevin; Hofstetter, Abraham; Gok, Rengin; Orgulu, Gonca; Turkelli, Niyazi

    A recently developed coda magnitude methodology was applied to selected broadband stations in Turkey for the purpose of testing the coda method in a large, laterally complex region. As found in other, albeit smaller, regions, coda envelope amplitude measurements are significantly less variable than distance-corrected direct wave measurements (i.e., Lg and surface waves), by roughly a factor of 3-to-4. Despite strong lateral crustal heterogeneity in Turkey, it was found that the region could be adequately modeled assuming a simple 1-D, radially symmetric path correction. After calibrating the stations ISP, ISKB and MALT for local and regional distances, single-station moment-magnitude estimates (Mw) derived from the coda spectra were in excellent agreement with those determined from multistation waveform modeling inversions, exhibiting a data standard deviation of 0.17. Though the calibration was validated using large events, the results of the calibration will extend Mw estimates to significantly smaller events which could not otherwise be waveform modeled. The successful application of the method is remarkable considering the significant lateral complexity in Turkey and the simple assumptions used in the coda method.

  13. A new method to calibrate the absolute sensitivity of a soft X-ray streak camera

    NASA Astrophysics Data System (ADS)

    Yu, Jian; Liu, Shenye; Li, Jin; Yang, Zhiwen; Chen, Ming; Guo, Luting; Yao, Li; Xiao, Shali

    2016-12-01

    In this paper, we introduce a new method to calibrate the absolute sensitivity of a soft X-ray streak camera (SXRSC). The calibrations are done in the static mode by using a small laser-produced X-ray source. A calibrated X-ray CCD is used as a secondary standard detector to monitor the X-ray source intensity. In addition, two sets of holographic flat-field grating spectrometers are chosen as the spectral discrimination systems of the SXRSC and the X-ray CCD. The absolute sensitivity of the SXRSC is obtained by comparing the signal counts of the SXRSC to the output counts of the X-ray CCD. Results show that the calibrated spectrum covers the range from 200 eV to 1040 eV. The change of the absolute sensitivity in the vicinity of the carbon K-edge can also be clearly seen. The experimental values agree with the calculated values to within 29% error. Compared with previous calibration methods, the proposed method has several advantages: a wide spectral range, high accuracy, and simple data processing. Our calibration results can be used to make quantitative X-ray flux measurements in laser fusion research.
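
    The sensitivity extraction reduces to a channel-by-channel ratio of SXRSC counts to the flux inferred from the calibrated CCD; the sketch below assumes that reading of the abstract, with purely illustrative numbers.

      import numpy as np

      energy_eV = np.array([250.0, 400.0, 600.0, 800.0, 1000.0])     # illustrative energy grid
      sxrsc_counts = np.array([1.2e5, 2.1e5, 2.9e5, 2.4e5, 1.6e5])   # SXRSC signal counts (illustrative)
      ccd_flux = np.array([3.0e7, 4.2e7, 5.1e7, 4.0e7, 2.6e7])       # photons, from the calibrated X-ray CCD

      sensitivity = sxrsc_counts / ccd_flux   # SXRSC counts per incident photon at each energy
      for e, s in zip(energy_eV, sensitivity):
          print(f"{e:6.0f} eV: {s:.2e} counts/photon")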

  14. Measuring the orthogonality error of coil systems

    USGS Publications Warehouse

    Heilig, B.; Csontos, A.; Pajunpää, K.; White, Tim; St. Louis, B.; Calp, D.

    2012-01-01

    Recently, a simple method was proposed for the determination of pitch angle between two coil axes by means of a total field magnetometer. The method is applicable when the homogeneous volume in the centre of the coil system is large enough to accommodate the total field sensor. Orthogonality of calibration coil systems used for calibrating vector magnetometers can be attained by this procedure. In addition, the method can be easily automated and applied to the calibration of delta inclination–delta declination (dIdD) magnetometers. The method was tested by several independent research groups, having a variety of test equipment, and located at differing geomagnetic observatories, including: Nurmijärvi, Finland; Hermanus, South Africa; Ottawa, Canada; Tihany, Hungary. This paper summarizes the test results, and discusses the advantages and limitations of the method.
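
    One way such a total-field measurement can yield the angle between two coil axes is the law of cosines applied to the superposed coil fields; this sketch assumes that geometry (and that the ambient field is nulled or negligible), with illustrative numbers rather than the paper's procedure.

      import numpy as np

      B1, B2 = 20000.0, 20000.0   # nT, field from coil 1 alone and coil 2 alone at the sensor
      B_both = 28300.0            # nT, total-field reading with both coils energized

      cos_angle = (B_both**2 - B1**2 - B2**2) / (2.0 * B1 * B2)
      angle_deg = np.degrees(np.arccos(cos_angle))
      print(f"angle between coil axes = {angle_deg:.3f} deg (90 deg means orthogonal)")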

  15. Probing fibronectin–antibody interactions using AFM force spectroscopy and lateral force microscopy

    PubMed Central

    Kulik, Andrzej J; Lee, Kyumin; Pyka-Fościak, Grazyna; Nowak, Wieslaw

    2015-01-01

    The first experiment showing the effects of specific interaction forces using lateral force microscopy (LFM) was demonstrated for lectin–carbohydrate interactions some years ago. Such measurements are possible under the assumption that specific forces strongly dominate over the non-specific ones. However, obtaining quantitative results requires the complex and tedious calibration of a torsional force. Here, a new and relatively simple method for the calibration of the torsional force is presented. The proposed calibration method is validated through the measurement of the interaction forces between human fibronectin and its monoclonal antibody. The results obtained using LFM and AFM-based classical force spectroscopies showed similar unbinding forces recorded at similar loading rates. Our studies verify that the proposed lateral force calibration method can be applied to study single molecule interactions. PMID:26114080

  16. Large ensemble modeling of the last deglacial retreat of the West Antarctic Ice Sheet: comparison of simple and advanced statistical techniques

    NASA Astrophysics Data System (ADS)

    Pollard, David; Chang, Won; Haran, Murali; Applegate, Patrick; DeConto, Robert

    2016-05-01

    A 3-D hybrid ice-sheet model is applied to the last deglacial retreat of the West Antarctic Ice Sheet over the last ˜ 20 000 yr. A large ensemble of 625 model runs is used to calibrate the model to modern and geologic data, including reconstructed grounding lines, relative sea-level records, elevation-age data and uplift rates, with an aggregate score computed for each run that measures overall model-data misfit. Two types of statistical methods are used to analyze the large-ensemble results: simple averaging weighted by the aggregate score, and more advanced Bayesian techniques involving Gaussian process-based emulation and calibration, and Markov chain Monte Carlo. The analyses provide sea-level-rise envelopes with well-defined parametric uncertainty bounds, but the simple averaging method only provides robust results with full-factorial parameter sampling in the large ensemble. Results for best-fit parameter ranges and envelopes of equivalent sea-level rise with the simple averaging method agree well with the more advanced techniques. Best-fit parameter ranges confirm earlier values expected from prior model tuning, including large basal sliding coefficients on modern ocean beds.
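
    A minimal sketch of score-weighted ensemble averaging follows; the exponential score-to-weight mapping and the synthetic per-run values are assumptions, since the abstract does not give the exact weighting used.

      import numpy as np

      rng = np.random.default_rng(0)
      sea_level_rise = rng.normal(3.3, 1.0, 625)   # per-run equivalent sea-level rise (m), illustrative
      misfit = rng.gamma(2.0, 1.0, 625)            # per-run aggregate model-data misfit score, illustrative

      weights = np.exp(-misfit)                    # one possible score-to-weight mapping
      weights /= weights.sum()

      mean = np.sum(weights * sea_level_rise)
      var = np.sum(weights * (sea_level_rise - mean) ** 2)
      print(f"weighted ensemble mean = {mean:.2f} m, std = {np.sqrt(var):.2f} m")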

  17. Dynamic calibration of agent-based models using data assimilation.

    PubMed

    Ward, Jonathan A; Evans, Andrew J; Malleson, Nicolas S

    2016-04-01

    A widespread approach to investigating the dynamical behaviour of complex social systems is via agent-based models (ABMs). In this paper, we describe how such models can be dynamically calibrated using the ensemble Kalman filter (EnKF), a standard method of data assimilation. Our goal is twofold. First, we want to present the EnKF in a simple setting for the benefit of ABM practitioners who are unfamiliar with it. Second, we want to illustrate to data assimilation experts the value of using such methods in the context of ABMs of complex social systems and the new challenges these types of model present. We work towards these goals within the context of a simple question of practical value: how many people are there in Leeds (or any other major city) right now? We build a hierarchy of exemplar models that we use to demonstrate how to apply the EnKF and calibrate these using open data of footfall counts in Leeds.
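
    A generic, minimal ensemble Kalman filter analysis step is sketched below; the three-component state, the footfall observation, and all numbers are hypothetical placeholders, not the authors' Leeds models.

      import numpy as np

      rng = np.random.default_rng(0)
      n_ens, n_state = 100, 3
      # X holds one state vector per column; the first component is the observed quantity.
      X = rng.normal([50000.0, 0.0, 0.0], [5000.0, 1.0, 1.0], size=(n_ens, n_state)).T
      H = np.array([[1.0, 0.0, 0.0]])     # observation operator picks the observed component
      R = np.array([[1000.0 ** 2]])       # observation error variance
      y = 62000.0                         # observed footfall count (placeholder)

      P = np.cov(X)                                  # ensemble sample covariance
      K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain

      # Perturbed-observation update applied to each ensemble member.
      for i in range(n_ens):
          y_pert = y + rng.normal(0.0, np.sqrt(R[0, 0]))
          X[:, i] = X[:, i] + K @ (np.array([y_pert]) - H @ X[:, i])

      print("posterior ensemble mean of the observed state:", X[0].mean())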

  18. A simple method for HPLC retention time prediction: linear calibration using two reference substances.

    PubMed

    Sun, Lei; Jin, Hong-Yu; Tian, Run-Tao; Wang, Ming-Juan; Liu, Li-Na; Ye, Liu-Ping; Zuo, Tian-Tian; Ma, Shuang-Cheng

    2017-01-01

    Analysis of related substances in pharmaceutical chemicals and of multiple components in traditional Chinese medicines requires a large number of reference substances to identify the chromatographic peaks accurately, but reference substances are costly. Thus, the relative retention (RR) method has been widely adopted in pharmacopoeias and the literature for characterizing the HPLC behavior of reference substances that are unavailable. The problem is that it is difficult to reproduce the RR on different columns because of the error between measured retention time (tR) and predicted tR in some cases. Therefore, it is useful to develop an alternative and simple method for predicting tR accurately. In the present study, based on the thermodynamic theory of HPLC, a method named linear calibration using two reference substances (LCTRS) was proposed. The method includes three steps: two-point prediction, validation by multiple-point regression, and sequential matching. The tR of compounds on an HPLC column can be calculated from standard retention times and a linear relationship. The method was validated with two medicines on 30 columns. It was demonstrated that the LCTRS method is simple, yet more accurate and more robust on different HPLC columns than the RR method. Hence, quality standards using the LCTRS method are easy to reproduce in different laboratories, with a lower cost of reference substances.
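
    A minimal sketch of the two-point prediction step as it reads from the abstract: the local column's retention times are assumed to relate linearly to the standard retention times, with the line fixed by two reference substances. Compound names and retention times below are made up for illustration.

      # Two reference substances fix the linear mapping from standard to local retention times.
      standard_tR = {"ref_A": 6.2, "ref_B": 18.7, "impurity_1": 9.8, "impurity_2": 14.3}  # min, illustrative
      measured_tR = {"ref_A": 6.6, "ref_B": 19.9}   # the two references run on the local column

      b = (measured_tR["ref_B"] - measured_tR["ref_A"]) / (standard_tR["ref_B"] - standard_tR["ref_A"])
      a = measured_tR["ref_A"] - b * standard_tR["ref_A"]

      for name in ("impurity_1", "impurity_2"):
          predicted = a + b * standard_tR[name]
          print(f"{name}: predicted tR on this column = {predicted:.2f} min")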

  19. Accounting For Gains And Orientations In Polarimetric SAR

    NASA Technical Reports Server (NTRS)

    Freeman, Anthony

    1992-01-01

    Calibration method accounts for characteristics of real radar equipment invalidating standard 2 X 2 complex-amplitude R (receiving) and T (transmitting) matrices. Overall gain in each combination of transmitting and receiving channels assumed different even when only one transmitter and one receiver used. One characterizes departure of polarimetric Synthetic Aperture Radar (SAR) system from simple 2 X 2 model in terms of single parameter used to transform measurements into format compatible with simple 2 X 2 model. Data processed by applicable one of several prior methods based on simple model.

  20. Uncertainty Analysis for Angle Calibrations Using Circle Closure

    PubMed Central

    Estler, W. Tyler

    1998-01-01

    We analyze two types of full-circle angle calibrations: a simple closure in which a single set of unknown angular segments is sequentially compared with an unknown reference angle, and a dual closure in which two divided circles are simultaneously calibrated by intercomparison. In each case, the constraint of circle closure provides auxiliary information that (1) enables a complete calibration process without reference to separately calibrated reference artifacts, and (2) serves to reduce measurement uncertainty. We derive closed-form expressions for the combined standard uncertainties of angle calibrations, following guidelines published by the International Organization for Standardization (ISO) and NIST. The analysis includes methods for the quantitative evaluation of the standard uncertainty of small angle measurement using electronic autocollimators, including the effects of calibration uncertainty and air turbulence. PMID:28009359
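
    A minimal numeric sketch of the simple-closure bookkeeping: each unknown segment a_i is compared with the same unknown reference angle R, giving readings d_i = a_i - R, and closure (the segments sum to 360 degrees) then fixes R and every segment without an external standard. The notation and readings are illustrative assumptions, not values from the paper.

      import numpy as np

      d = np.array([0.0021, -0.0014, 0.0008, -0.0011, 0.0030, -0.0034,
                    0.0007, -0.0002, 0.0016, -0.0019, 0.0004, -0.0006])  # deg, twelve comparison readings

      n = d.size
      R = (360.0 - d.sum()) / n     # reference angle recovered from closure
      segments = d + R              # calibrated segment angles
      print(f"reference angle = {R:.6f} deg, closure check = {segments.sum():.6f} deg")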

  1. Determination of perfluorinated compounds in fish fillet homogenates: method validation and application to fillet homogenates from the Mississippi River.

    PubMed

    Malinsky, Michelle Duval; Jacoby, Cliffton B; Reagen, William K

    2011-01-10

    We report herein a simple protein precipitation extraction-liquid chromatography tandem mass spectrometry (LC/MS/MS) method, validation, and application for the analysis of perfluorinated carboxylic acids (C7-C12), perfluorinated sulfonic acids (C4, C6, and C8), and perfluorooctane sulfonamide (FOSA) in fish fillet tissue. The method combines a rapid homogenization and protein precipitation tissue extraction procedure using stable-isotope internal standard (IS) calibration. Method validation in bluegill (Lepomis macrochirus) fillet tissue evaluated the following: (1) method accuracy and precision in both extracted matrix-matched calibration and solvent (unextracted) calibration, (2) quantitation of mixed branched and linear isomers of perfluorooctanoate (PFOA) and perfluorooctanesulfonate (PFOS) with linear isomer calibration, (3) quantitation of low level (ppb) perfluorinated compounds (PFCs) in the presence of high level (ppm) PFOS, and (4) specificity from matrix interferences. Both calibration techniques produced method accuracy of at least 100±13% with a precision (%RSD) ≤18% for all target analytes. Method accuracy and precision results for fillet samples from nine different fish species taken from the Mississippi River in 2008 and 2009 are also presented. Copyright © 2010 Elsevier B.V. All rights reserved.

  2. Standing on the shoulders of giants: improving medical image segmentation via bias correction.

    PubMed

    Wang, Hongzhi; Das, Sandhitsu; Pluta, John; Craige, Caryne; Altinay, Murat; Avants, Brian; Weiner, Michael; Mueller, Susanne; Yushkevich, Paul

    2010-01-01

    We propose a simple strategy to improve automatic medical image segmentation. The key idea is that without deep understanding of a segmentation method, we can still improve its performance by directly calibrating its results with respect to manual segmentation. We formulate the calibration process as a bias correction problem, which is addressed by machine learning using training data. We apply this methodology on three segmentation problems/methods and show significant improvements for all of them.

  3. An IMU-to-Body Alignment Method Applied to Human Gait Analysis.

    PubMed

    Vargas-Valencia, Laura Susana; Elias, Arlindo; Rocon, Eduardo; Bastos-Filho, Teodiano; Frizera, Anselmo

    2016-12-10

    This paper presents a novel calibration procedure as a simple, yet powerful, method to place and align inertial sensors with body segments. The calibration can be easily replicated without the need of any additional tools. The proposed method is validated in three different applications: a computer mathematical simulation; a simplified joint composed of two semi-spheres interconnected by a universal goniometer; and a real gait test with five able-bodied subjects. Simulation results demonstrate that, after the calibration method is applied, the joint angles are correctly measured independently of previous sensor placement on the joint, thus validating the proposed procedure. In the cases of a simplified joint and a real gait test with human volunteers, the method also performs correctly, although secondary plane errors appear when compared with the simulation results. We believe that such errors are caused by limitations of the current inertial measurement unit (IMU) technology and fusion algorithms. In conclusion, the presented calibration procedure is an interesting option to solve the alignment problem when using IMUs for gait analysis.

  4. Use of eddy-covariance methods to "calibrate" simple estimators of evapotranspiration

    USGS Publications Warehouse

    Sumner, David M.; Geurink, Jeffrey S.; Swancar, Amy

    2017-01-01

    Direct measurement of actual evapotranspiration (ET) provides quantification of this large component of the hydrologic budget, but typically requires long periods of record and large instrumentation and labor costs. Simple surrogate methods of estimating ET, if “calibrated” to direct measurements of ET, provide a reliable means to quantify ET. Eddy-covariance measurements of ET were made for 12 years (2004-2015) at an unimproved bahiagrass (Paspalum notatum) pasture in Florida. These measurements were compared to annual rainfall derived from rain gage data and monthly potential ET (PET) obtained from a long-term (since 1995) U.S. Geological Survey (USGS) statewide, 2-kilometer, daily PET product. The annual proportion of ET to rainfall indicates a strong correlation (r2=0.86) to annual rainfall; the ratio increases linearly with decreasing rainfall. Monthly ET rates correlated closely (r2=0.84) to the USGS PET product. The results indicate that simple surrogate methods of estimating actual ET show positive potential in the humid Florida climate given the ready availability of historical rainfall and PET.
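
    A hedged sketch of the "calibration" idea: regress the annual ET/rainfall ratio on annual rainfall measured at the eddy-covariance site, then apply the fitted line to rain-gage records elsewhere. The numbers below are illustrative, not the 2004-2015 Florida data.

      import numpy as np

      rainfall_mm = np.array([950.0, 1100.0, 1250.0, 1400.0, 1550.0])   # annual rainfall (illustrative)
      et_ratio = np.array([0.93, 0.86, 0.80, 0.74, 0.69])               # annual ET / rainfall (illustrative)

      slope, intercept = np.polyfit(rainfall_mm, et_ratio, 1)

      estimate_mm = 1200.0                                              # rainfall at a site of interest
      et_mm = (slope * estimate_mm + intercept) * estimate_mm           # surrogate annual ET estimate
      print(f"estimated annual ET for {estimate_mm:.0f} mm rainfall: {et_mm:.0f} mm")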

  5. An accurate system for onsite calibration of electronic transformers with digital output.

    PubMed

    Zhi, Zhang; Li, Hong-Bin

    2012-06-01

    Calibration systems with digital output are used to replace conventional calibration systems because of the diversity of measurement principles and the digital-output characteristics of electronic transformers, but limited precision and unpredictable stability restrict their onsite application and even their development. Therefore, fully considering the factors that influence the accuracy of a calibration system and employing a simple but reliable structure, an all-digital calibration system with digital output is proposed in this paper. In complicated calibration environments, precision and dynamic range are guaranteed by an A/D converter with 24-bit resolution, and the synchronization error is limited to the nanosecond level by a novel synchronization method. In addition, an error correction algorithm based on the differential method, using a two-order Hanning convolution window, gives good suppression of frequency fluctuation and inter-harmonic interference. To verify its effectiveness, error calibration was carried out at the State Grid Electric Power Research Institute of China, and the results show that the proposed system can reach the 0.05 precision class. Actual onsite calibration shows that the system has high accuracy and is easy to operate with satisfactory stability.

  6. An accurate system for onsite calibration of electronic transformers with digital output

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhi, Zhang; Li, Hong-Bin

    Because electronic transformers differ in operating principle from conventional transformers and produce digital output, calibration systems with digital output are replacing conventional calibration systems. However, limited precision and unpredictable stability restrict their onsite application and even their further development. Therefore, after fully considering the factors that influence the accuracy of a calibration system and adopting a simple but reliable structure, an all-digital calibration system with digital output is proposed in this paper. In complicated calibration environments, precision and dynamic range are guaranteed by an A/D converter with 24-bit resolution, and the synchronization error is limited to the nanosecond level by a novel synchronization method. In addition, an error correction algorithm based on the differential method, using a second-order Hanning convolution window, provides good suppression of frequency fluctuation and inter-harmonic interference. To verify its effectiveness, an error calibration was carried out at the State Grid Electric Power Research Institute of China; the results show that the proposed system can reach the 0.05 precision class. Actual onsite calibration shows that the system has high accuracy and is easy to operate with satisfactory stability.

  7. An accurate system for onsite calibration of electronic transformers with digital output

    NASA Astrophysics Data System (ADS)

    Zhi, Zhang; Li, Hong-Bin

    2012-06-01

    Because electronic transformers differ in operating principle from conventional transformers and produce digital output, calibration systems with digital output are replacing conventional calibration systems. However, limited precision and unpredictable stability restrict their onsite application and even their further development. Therefore, after fully considering the factors that influence the accuracy of a calibration system and adopting a simple but reliable structure, an all-digital calibration system with digital output is proposed in this paper. In complicated calibration environments, precision and dynamic range are guaranteed by an A/D converter with 24-bit resolution, and the synchronization error is limited to the nanosecond level by a novel synchronization method. In addition, an error correction algorithm based on the differential method, using a second-order Hanning convolution window, provides good suppression of frequency fluctuation and inter-harmonic interference. To verify its effectiveness, an error calibration was carried out at the State Grid Electric Power Research Institute of China; the results show that the proposed system can reach the 0.05 precision class. Actual onsite calibration shows that the system has high accuracy and is easy to operate with satisfactory stability.

  8. Calibration of an analyzing magnet using the 12C(d, p0)13C nuclear reaction with a thick carbon target

    NASA Astrophysics Data System (ADS)

    Andrade, E.; Canto, C. E.; Rocha, M. F.

    2017-09-01

    The absolute energy of an ion beam produced by an accelerator is usually determined by an electrostatic or magnetic analyzer, which in turn must be calibrated. Various methods for accelerator energy calibration are extensively reported in the literature, such as nuclear reaction resonances, neutron thresholds, and time of flight, among others. This work reports on a simple method to calibrate the magnet associated with a vertical 5.5 MV Van de Graaff accelerator. The method is based on bombarding a thick carbon target with deuteron beams and measuring the resulting particle energy spectra with a surface barrier detector. The analyzer magnetic field is measured for each spectrum, and the beam energy is deduced from the best fit of a simulation of the spectrum with the SIMNRA code, which includes the 12C(d,p0)13C nuclear cross sections.
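
    A minimal sketch of how such a calibration might be applied numerically, assuming the textbook analyzer relation E = k·B² for a fixed bending radius and ion species (the abstract does not state this relation, and the field/energy pairs below are invented):

        import numpy as np

        # Hypothetical calibration pairs: analyzer field readings B (tesla) and the
        # deuteron beam energies E (MeV) deduced from SIMNRA fits of the spectra.
        B = np.array([0.310, 0.350, 0.392, 0.430])
        E = np.array([0.800, 1.020, 1.280, 1.540])

        # Least-squares slope through the origin for the assumed model E = k * B**2.
        k = np.sum(E * B**2) / np.sum(B**4)

        def beam_energy(field_tesla):
            """Beam energy predicted from the calibrated analyzer constant."""
            return k * field_tesla**2

        residuals = E - beam_energy(B)
        print(k, residuals)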

  9. Method for lateral force calibration in atomic force microscope using MEMS microforce sensor.

    PubMed

    Dziekoński, Cezary; Dera, Wojciech; Jarząbek, Dariusz M

    2017-11-01

    In this paper we present a simple and direct method for determining the lateral force calibration constant. Our procedure does not require any knowledge of the material or geometrical parameters of the investigated cantilever. We apply a commercially available microforce sensor with advanced electronics to directly measure the friction force applied by the cantilever's tip to the flat surface of the microforce sensor's measuring beam. By Newton's third law, a friction force of equal magnitude tilts the AFM cantilever. Therefore, the torsional (lateral force) signal is compared with the signal from the microforce sensor and the lateral force calibration constant is determined. The method is easy to perform and could be widely used for determining the lateral force calibration constant in many types of atomic force microscopes. Copyright © 2017 Elsevier B.V. All rights reserved.
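
    The calibration constant itself is just the slope relating the torsional photodetector signal to the directly measured friction force. A minimal sketch with invented readings (not the authors' data or code):

        import numpy as np

        # Hypothetical paired readings taken while the AFM tip slides on the MEMS
        # sensor beam: lateral photodetector signal (volts) and the friction force
        # measured directly by the microforce sensor (micronewtons).
        lateral_signal_V = np.array([0.05, 0.11, 0.16, 0.22, 0.27])
        sensor_force_uN  = np.array([0.52, 1.08, 1.61, 2.20, 2.69])

        # By Newton's third law the sensor force equals the friction force that
        # twists the cantilever, so the calibration constant is the slope of F vs V.
        alpha, offset = np.polyfit(lateral_signal_V, sensor_force_uN, 1)

        def lateral_force(signal_V):
            """Convert a torsional photodetector signal into a friction force (uN)."""
            return alpha * signal_V + offset

        print(alpha, lateral_force(0.18))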

  10. General Matrix Inversion Technique for the Calibration of Electric Field Sensor Arrays on Aircraft Platforms

    NASA Technical Reports Server (NTRS)

    Mach, D. M.; Koshak, W. J.

    2007-01-01

    A matrix calibration procedure has been developed that uniquely relates the electric fields measured at the aircraft with the external vector electric field and net aircraft charge. The calibration method can be generalized to any reasonable combination of electric field measurements and aircraft. A calibration matrix is determined for each aircraft that represents the individual instrument responses to the external electric field. The aircraft geometry and configuration of field mills (FMs) uniquely define the matrix. The matrix can then be inverted to determine the external electric field and net aircraft charge from the FM outputs. A distinct advantage of the method is that if one or more FMs need to be eliminated or de-emphasized (e.g., due to a malfunction), it is a simple matter to reinvert the matrix without the malfunctioning FMs. To demonstrate the calibration technique, data are presented from several aircraft programs (ER-2, DC-8, Altus, and Citation).
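
    In practice the retrieval reduces to a linear least-squares (pseudoinverse) problem, and dropping a malfunctioning mill simply removes rows before re-solving. A small sketch with a hypothetical calibration matrix (not taken from the papers):

        import numpy as np

        # Hypothetical calibration matrix M: one row per field mill, columns for the
        # external field components Ex, Ey, Ez and the net aircraft charge Q,
        # so that the mill outputs are v = M @ x.
        M = np.array([
            [ 1.2,  0.1,  0.4, 0.8],
            [-1.1,  0.2,  0.5, 0.8],
            [ 0.1,  1.3, -0.2, 0.7],
            [ 0.1, -1.2, -0.3, 0.7],
            [ 0.0,  0.1,  1.5, 0.9],
        ])

        def retrieve_field(v, use_mills=None):
            """Least-squares retrieval of [Ex, Ey, Ez, Q] from the mill outputs.

            use_mills lets malfunctioning mills be dropped by re-solving with
            only the remaining rows of the calibration matrix."""
            rows = slice(None) if use_mills is None else list(use_mills)
            x, *_ = np.linalg.lstsq(M[rows], np.asarray(v)[rows], rcond=None)
            return x

        v_measured = np.array([6.1, 3.9, 5.2, 1.8, 9.0])           # hypothetical outputs
        print(retrieve_field(v_measured))                          # all five mills
        print(retrieve_field(v_measured, use_mills=[0, 1, 2, 4]))  # mill 3 excluded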

  11. Study of glass hydrometer calibration by hydrostatic weighting

    NASA Astrophysics Data System (ADS)

    Chen, Chaoyun; Wang, Jintao; Li, Zhihao; Zhang, Peiman

    2016-01-01

    Glass hydrometers are simple but effective instruments for measuring the density of liquids. A glass hydrometer calibration system based on the hydrostatic weighing method and Archimedes' law was designed, using a silicon ring as the solid density reference standard and n-tridecane, which has a stable density and low surface tension, as the standard working liquid. The calibration system uses a CCD image measurement system to align the hydrometer scale with the liquid surface, with a positioning accuracy of 0.01 mm. The surface tension of the working liquid is measured with a Wilhelmy plate. From two weighings of the glass hydrometer, in air and in the liquid, the correction value for the scale mark currently at the liquid surface can be calculated. To verify the validity of the hydrostatic weighing principle of the calibration system, a hydrometer covering the density range of 770-790 kg/m3 with a resolution of 0.2 kg/m3 was calibrated. The measurement results were compared with those of the Physikalisch-Technische Bundesanstalt (PTB), verifying the validity of the calibration system.
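
    The buoyancy arithmetic behind the method can be sketched with a textbook Cuckow-type balance; this is only an illustration (the numbers are invented, and the air-buoyancy and surface-tension corrections used in the actual system are omitted):

        # Step 1: density of the working liquid (n-tridecane) from two weighings
        # of the silicon ring standard of known density.
        rho_silicon = 2329.0                             # kg/m^3
        m_ring_air, m_ring_liq = 0.2329000, 0.1573000    # kg, in air and in the liquid
        v_ring = m_ring_air / rho_silicon
        rho_liquid = (m_ring_air - m_ring_liq) / v_ring  # ~756 kg/m^3

        # Step 2: true density at the scale mark from two weighings of the
        # hydrometer, in air and immersed up to that mark in the working liquid.
        m_hyd_air, m_hyd_liq = 0.0783000, 0.0027000      # kg
        rho_at_mark = rho_liquid * m_hyd_air / (m_hyd_air - m_hyd_liq)

        # The correction is the difference from the nominal scale reading.
        nominal_reading = 782.8                          # kg/m^3 at the mark
        print(rho_at_mark, rho_at_mark - nominal_reading)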

  12. Calibration of stereo rigs based on the backward projection process

    NASA Astrophysics Data System (ADS)

    Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui; Zhao, Zixin

    2016-08-01

    High-accuracy 3D measurement based on binocular vision system is heavily dependent on the accurate calibration of two rigidly-fixed cameras. In most traditional calibration methods, stereo parameters are iteratively optimized through the forward imaging process (FIP). However, the results can only guarantee the minimal 2D pixel errors, but not the minimal 3D reconstruction errors. To address this problem, a simple method to calibrate a stereo rig based on the backward projection process (BPP) is proposed. The position of a spatial point can be determined separately from each camera by planar constraints provided by the planar pattern target. Then combined with pre-defined spatial points, intrinsic and extrinsic parameters of the stereo-rig can be optimized by minimizing the total 3D errors of both left and right cameras. An extensive performance study for the method in the presence of image noise and lens distortions is implemented. Experiments conducted on synthetic and real data demonstrate the accuracy and robustness of the proposed method.

  13. Absolute Radiometric Calibration of Narrow-Swath Imaging Sensors with Reference to Non-Coincident Wide-Swath Sensors

    NASA Technical Reports Server (NTRS)

    McCorkel, Joel; Thome, Kurtis; Lockwood, Ronald

    2012-01-01

    An inter-calibration method is developed to provide absolute radiometric calibration of narrow-swath imaging sensors with reference to non-coincident wide-swath sensors. The method predicts at-sensor radiance using non-coincident imagery from the reference sensor and knowledge of spectral reflectance of the test site. The imagery of the reference sensor is restricted to acquisitions that provide similar view and solar illumination geometry to reduce uncertainties due to directional reflectance effects. Spectral reflectance of the test site is found with a simple iterative radiative transfer method using radiance values of a well-understood wide-swath sensor and spectral shape information based on historical ground-based measurements. At-sensor radiance is calculated for the narrow-swath sensor using this spectral reflectance and atmospheric parameters that are also based on historical in situ measurements. Results of the inter-calibration method show agreement at the 2-5 percent level in most spectral regions with the vicarious calibration technique relying on coincident ground-based measurements, referred to as the reflectance-based approach. While the variability of the inter-calibration method based on non-coincident image pairs is significantly larger, results are consistent with techniques relying on in situ measurements. The method is also insensitive to spectral differences between the sensors by transferring to surface spectral reflectance prior to prediction of at-sensor radiance. The utility of this inter-calibration method is made clear by its flexibility to utilize image pairings with acquisition dates differing in excess of 30 days, allowing frequent absolute calibration comparisons between wide- and narrow-swath sensors.

  14. A Focusing Method in the Calibration Process of Image Sensors Based on IOFBs

    PubMed Central

    Fernández, Pedro R.; Lázaro, José L.; Gardel, Alfredo; Cano, Ángel E.; Bravo, Ignacio

    2010-01-01

    A focusing procedure in the calibration process of image sensors based on Incoherent Optical Fiber Bundles (IOFBs) is described, using information extracted from the fibers. The procedure differs from other known focusing methods because of the non-spatial input-output correspondence between fibers, which produces a natural codification of the transmitted image. Measuring focus prior to calibration is essential to guarantee accurate processing and decoding. Four algorithms have been developed to estimate the focus measure: two based on mean grey level and two based on variance. In this paper, a few simple focus measures are defined and compared. Experimental results concerning the focus measure and the accuracy of the developed methods are discussed to demonstrate their effectiveness. PMID:22315526
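
    A variance-based focus measure of the kind described can be sketched as follows; the function names and synthetic fiber intensities are illustrative only, not the authors' algorithms:

        import numpy as np

        def focus_mean_grey(fiber_intensities):
            """Focus measure based on the mean grey level of the fiber signals."""
            return float(np.mean(fiber_intensities))

        def focus_variance(fiber_intensities):
            """Focus measure based on the variance of the fiber signals; a sharply
            focused input produces larger intensity differences between fibers."""
            return float(np.var(fiber_intensities))

        def best_focus_position(positions, images):
            """Pick the lens position whose image maximizes the variance measure."""
            scores = [focus_variance(img) for img in images]
            return positions[int(np.argmax(scores))]

        # Hypothetical sweep over five focus positions, each giving one array of
        # per-fiber intensities.
        rng = np.random.default_rng(0)
        images = [rng.normal(128, s, size=10_000) for s in (5, 12, 30, 14, 6)]
        print(best_focus_position([0, 1, 2, 3, 4], images))  # -> 2, the sharpest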

  15. A novel method to calibrate DOI function of a PET detector with a dual-ended-scintillator readout.

    PubMed

    Shao, Yiping; Yao, Rutao; Ma, Tianyu

    2008-12-01

    The detection of depth-of-interaction (DOI) is a critical detector capability to improve the PET spatial resolution uniformity across the field-of-view and will significantly enhance, in particular, small bore system performance for brain, breast, and small animal imaging. One promising technique of DOI detection is to use dual-ended-scintillator readout that uses two photon sensors to detect scintillation light from both ends of a scintillator array and estimate DOI based on the ratio of signals (similar to Anger logic). This approach needs a careful DOI function calibration to establish accurate relationship between DOI and signal ratios, and to recalibrate if the detection condition is shifted due to the drift of sensor gain, bias variations, or degraded optical coupling, etc. However, the current calibration method that uses coincident events to locate interaction positions inside a single scintillator crystal has severe drawbacks, such as complicated setup, long and repetitive measurements, and being prone to errors from various possible misalignments among the source and detector components. This method is also not practically suitable to calibrate multiple DOI functions of a crystal array. To solve these problems, a new method has been developed that requires only a uniform flood source to irradiate a crystal array without the need to locate the interaction positions, and calculates DOI functions based solely on the uniform probability distribution of interactions over DOI positions without knowledge or assumption of detector responses. Simulation and experiment have been studied to validate the new method, and the results show that the new method, with a simple setup and one single measurement, can provide consistent and accurate DOI functions for the entire array of multiple scintillator crystals. This will enable an accurate, simple, and practical DOI function calibration for the PET detectors based on the design of dual-ended-scintillator readout. In addition, the new method can be generally applied to calibrating other types of detectors that use the similar dual-ended readout to acquire the radiation interaction position.
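
    One way to realize the "uniform probability over DOI" idea, although not necessarily the authors' exact formulation, is to map the measured light ratio onto depth through its empirical cumulative distribution under flood irradiation. A sketch with synthetic data:

        import numpy as np

        def doi_lookup_from_flood(ratios, crystal_length_mm=20.0, n_bins=200):
            """Build a DOI-vs-ratio lookup table from flood data for one crystal.

            Under uniform flood irradiation the true DOI is uniformly distributed
            along the crystal, so the empirical CDF of the measured light ratio
            maps each ratio onto a depth without any detector-response model."""
            r_sorted = np.sort(np.asarray(ratios))
            cdf = (np.arange(r_sorted.size) + 0.5) / r_sorted.size
            grid = np.linspace(r_sorted[0], r_sorted[-1], n_bins)
            depth = np.interp(grid, r_sorted, cdf) * crystal_length_mm
            return grid, depth

        def estimate_doi(signal_a, signal_b, grid, depth):
            """Estimate DOI for a new event from its dual-ended signals."""
            r = (signal_a - signal_b) / (signal_a + signal_b)
            return float(np.interp(r, grid, depth))

        # Hypothetical flood-acquisition ratios for one crystal of the array.
        rng = np.random.default_rng(1)
        flood_ratios = np.tanh(rng.uniform(-1.0, 1.0, 50_000))
        grid, depth = doi_lookup_from_flood(flood_ratios)
        print(estimate_doi(620.0, 400.0, grid, depth))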

  16. A novel method to calibrate DOI function of a PET detector with a dual-ended-scintillator readout

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shao Yiping; Yao Rutao; Ma Tianyu

    The detection of depth-of-interaction (DOI) is a critical detector capability to improve the PET spatial resolution uniformity across the field-of-view and will significantly enhance, in particular, small bore system performance for brain, breast, and small animal imaging. One promising technique of DOI detection is to use dual-ended-scintillator readout that uses two photon sensors to detect scintillation light from both ends of a scintillator array and estimate DOI based on the ratio of signals (similar to Anger logic). This approach needs a careful DOI function calibration to establish accurate relationship between DOI and signal ratios, and to recalibrate if the detection condition is shifted due to the drift of sensor gain, bias variations, or degraded optical coupling, etc. However, the current calibration method that uses coincident events to locate interaction positions inside a single scintillator crystal has severe drawbacks, such as complicated setup, long and repetitive measurements, and being prone to errors from various possible misalignments among the source and detector components. This method is also not practically suitable to calibrate multiple DOI functions of a crystal array. To solve these problems, a new method has been developed that requires only a uniform flood source to irradiate a crystal array without the need to locate the interaction positions, and calculates DOI functions based solely on the uniform probability distribution of interactions over DOI positions without knowledge or assumption of detector responses. Simulation and experiment have been studied to validate the new method, and the results show that the new method, with a simple setup and one single measurement, can provide consistent and accurate DOI functions for the entire array of multiple scintillator crystals. This will enable an accurate, simple, and practical DOI function calibration for the PET detectors based on the design of dual-ended-scintillator readout. In addition, the new method can be generally applied to calibrating other types of detectors that use the similar dual-ended readout to acquire the radiation interaction position.

  17. Electronic test and calibration circuits, a compilation

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A wide variety of simple test calibration circuits are compiled for the engineer and laboratory technician. The majority of circuits were found inexpensive to assemble. Testing electronic devices and components, instrument and system test, calibration and reference circuits, and simple test procedures are presented.

  18. An efficient swarm intelligence approach to feature selection based on invasive weed optimization: Application to multivariate calibration and classification using spectroscopic data

    NASA Astrophysics Data System (ADS)

    Sheykhizadeh, Saheleh; Naseri, Abdolhossein

    2018-04-01

    Variable selection plays a key role in classification and multivariate calibration. Variable selection methods aim to choose, from a large pool of available predictors, a set of variables relevant to estimating analyte concentrations or to achieving better classification results. Many variable selection techniques have been introduced; among them, those based on swarm intelligence optimization have received particular attention over the last few decades since they are mainly inspired by nature. In this work, a simple new variable selection algorithm is proposed based on the invasive weed optimization (IWO) concept. IWO is a bio-inspired metaheuristic that mimics the ecological behavior of weeds in colonizing and finding an appropriate place for growth and reproduction; it has been shown to be very adaptive and robust to environmental changes. In this paper, the first application of IWO, as a very simple and powerful method, to variable selection is reported using different experimental datasets, including FTIR and NIR data, to undertake classification and multivariate calibration tasks. Accordingly, invasive weed optimization - linear discriminant analysis (IWO-LDA) and invasive weed optimization - partial least squares (IWO-PLS) are introduced for multivariate classification and calibration, respectively.

  19. An efficient swarm intelligence approach to feature selection based on invasive weed optimization: Application to multivariate calibration and classification using spectroscopic data.

    PubMed

    Sheykhizadeh, Saheleh; Naseri, Abdolhossein

    2018-04-05

    Variable selection plays a key role in classification and multivariate calibration. Variable selection methods aim to choose, from a large pool of available predictors, a set of variables relevant to estimating analyte concentrations or to achieving better classification results. Many variable selection techniques have been introduced; among them, those based on swarm intelligence optimization have received particular attention over the last few decades since they are mainly inspired by nature. In this work, a simple new variable selection algorithm is proposed based on the invasive weed optimization (IWO) concept. IWO is a bio-inspired metaheuristic that mimics the ecological behavior of weeds in colonizing and finding an appropriate place for growth and reproduction; it has been shown to be very adaptive and robust to environmental changes. In this paper, the first application of IWO, as a very simple and powerful method, to variable selection is reported using different experimental datasets, including FTIR and NIR data, to undertake classification and multivariate calibration tasks. Accordingly, invasive weed optimization - linear discriminant analysis (IWO-LDA) and invasive weed optimization - partial least squares (IWO-PLS) are introduced for multivariate classification and calibration, respectively. Copyright © 2018 Elsevier B.V. All rights reserved.

  20. How To Characterize Individual Nanosize Liposomes with Simple Self-Calibrating Fluorescence Microscopy.

    PubMed

    Mortensen, Kim I; Tassone, Chiara; Ehrlich, Nicky; Andresen, Thomas L; Flyvbjerg, Henrik

    2018-05-09

    Nanosize lipid vesicles are used extensively at the interface between nanotechnology and biology, e.g., as containers for chemical reactions at minute concentrations and vehicles for targeted delivery of pharmaceuticals. Typically, vesicle samples are heterogeneous as regards vesicle size and structural properties. Consequently, vesicles must be characterized individually to ensure correct interpretation of experimental results. Here we do that using dual-color fluorescence labeling of vesicles-of their lipid bilayers and lumens, separately. A vesicle then images as two spots, one in each color channel. A simple image analysis determines the total intensity and width of each spot. These four data all depend on the vesicle radius in a simple manner for vesicles that are spherical, unilamellar, and optimal encapsulators of molecular cargo. This permits identification of such ideal vesicles. They in turn enable calibration of the dual-color fluorescence microscopy images they appear in. Since this calibration is not a separate experiment but an analysis of images of vesicles to be characterized, it eliminates the potential source of error that a separate calibration experiment would have been. Nonideal vesicles in the same images were characterized by how their four data violate the calibrated relationship established for ideal vesicles. In this way, our method yields size, shape, lamellarity, and encapsulation efficiency of each imaged vesicle. Applying this procedure to extruded samples of vesicles, we found that, contrary to common assumptions, only a fraction of vesicles are ideal.

  1. Inverse kinematic solution for near-simple robots and its application to robot calibration

    NASA Technical Reports Server (NTRS)

    Hayati, Samad A.; Roston, Gerald P.

    1986-01-01

    This paper provides an inverse kinematic solution for a class of robot manipulators called near-simple manipulators. The kinematics of these manipulators differ from those of simple robots by small parameter variations. Although most robots are by design simple, in practice, due to manufacturing tolerances, every robot is near-simple. The method in this paper gives an approximate inverse kinematics solution for real-time applications based on the nominal solution for these robots. The validity of the results is tested both by a simulation study and by applying the algorithm to a PUMA robot.

  2. A Simple Method for Estimating Informative Node Age Priors for the Fossil Calibration of Molecular Divergence Time Analyses

    PubMed Central

    Nowak, Michael D.; Smith, Andrew B.; Simpson, Carl; Zwickl, Derrick J.

    2013-01-01

    Molecular divergence time analyses often rely on the age of fossil lineages to calibrate node age estimates. Most divergence time analyses are now performed in a Bayesian framework, where fossil calibrations are incorporated as parametric prior probabilities on node ages. It is widely accepted that an ideal parameterization of such node age prior probabilities should be based on a comprehensive analysis of the fossil record of the clade of interest, but there is currently no generally applicable approach for calculating such informative priors. We provide here a simple and easily implemented method that employs fossil data to estimate the likely amount of missing history prior to the oldest fossil occurrence of a clade, which can be used to fit an informative parametric prior probability distribution on a node age. Specifically, our method uses the extant diversity and the stratigraphic distribution of fossil lineages confidently assigned to a clade to fit a branching model of lineage diversification. Conditioning this on a simple model of fossil preservation, we estimate the likely amount of missing history prior to the oldest fossil occurrence of a clade. The likelihood surface of missing history can then be translated into a parametric prior probability distribution on the age of the clade of interest. We show that the method performs well with simulated fossil distribution data, but that the likelihood surface of missing history can at times be too complex for the distribution-fitting algorithm employed by our software tool. An empirical example of the application of our method is performed to estimate echinoid node ages. A simulation-based sensitivity analysis using the echinoid data set shows that node age prior distributions estimated under poor preservation rates are significantly less informative than those estimated under high preservation rates. PMID:23755303

  3. A simple formulation for deriving effective atomic numbers via electron density calibration from dual-energy CT data in the human body.

    PubMed

    Saito, Masatoshi; Sagara, Shota

    2017-06-01

    The main objective of this study is to propose a simple formulation (which we called DEEDZ) for deriving effective atomic numbers (Z_eff) via electron density (ρ_e) calibration from dual-energy (DE) CT data. We carried out numerical analysis of this DEEDZ method for a large variety of materials with known elemental compositions and mass densities using an available photon cross sections database. The new conversion approach was also applied to previously published experimental DECT data to validate its practical feasibility. We performed numerical analysis of the DEEDZ conversion method for tissue surrogates that have the same chemical compositions and mass densities as a commercial tissue-characterization phantom in order to determine the parameters necessary for the ρ_e and Z_eff calibrations in the DEEDZ conversion. These parameters were then applied to the human-body-equivalent tissues of ICRU Report 46 as objects of interest with unknown ρ_e and Z_eff. The attenuation coefficients of these materials were calculated using the XCOM photon cross sections database. We also applied the DEEDZ conversion to experimental DECT data available in the literature, which were measured for two commercial phantoms of different shapes and sizes using a dual-source CT scanner at 80 kV and 140 kV/Sn. The simulated Z_eff values were in excellent agreement with the reference values for almost all of the ICRU-46 human tissues over the Z_eff range from 5.83 (gallstones-cholesterol) to 16.11 (bone mineral-hydroxyapatite). The relative deviations from the reference Z_eff were within ±0.3% for all materials, except for one outlier, the thyroid, which presented a -3.1% deviation. The reason for this discrepancy is that the thyroid contains a small amount of iodine, an element with a large atomic number (Z = 53). In the experimental case, we confirmed that the simple formulation, with fewer fit parameters, enables Z_eff to be calibrated as accurately as the existing calibration procedure. The DEEDZ conversion method based on the proposed simple formulation could facilitate the construction of ρ_e and Z_eff images from acquired DECT data. © 2017 American Association of Physicists in Medicine.

  4. Simple Parametric Model for Intensity Calibration of Cassini Composite Infrared Spectrometer Data

    NASA Technical Reports Server (NTRS)

    Brasunas, J.; Mamoutkine, A.; Gorius, N.

    2016-01-01

    Accurate intensity calibration of a linear Fourier-transform spectrometer typically requires the unknown science target and the two calibration targets to be acquired under identical conditions. We present a simple model suitable for vector calibration that enables accurate calibration via adjustments of measured spectral amplitudes and phases when these three targets are recorded at different detector or optics temperatures. Our model makes calibration more accurate both by minimizing biases due to changing instrument temperatures that are always present at some level and by decreasing estimate variance through incorporating larger averages of science and calibration interferogram scans.

  5. An IMU-to-Body Alignment Method Applied to Human Gait Analysis

    PubMed Central

    Vargas-Valencia, Laura Susana; Elias, Arlindo; Rocon, Eduardo; Bastos-Filho, Teodiano; Frizera, Anselmo

    2016-01-01

    This paper presents a novel calibration procedure as a simple, yet powerful, method to place and align inertial sensors with body segments. The calibration can be easily replicated without the need of any additional tools. The proposed method is validated in three different applications: a computer mathematical simulation; a simplified joint composed of two semi-spheres interconnected by a universal goniometer; and a real gait test with five able-bodied subjects. Simulation results demonstrate that, after the calibration method is applied, the joint angles are correctly measured independently of previous sensor placement on the joint, thus validating the proposed procedure. In the cases of a simplified joint and a real gait test with human volunteers, the method also performs correctly, although secondary plane errors appear when compared with the simulation results. We believe that such errors are caused by limitations of the current inertial measurement unit (IMU) technology and fusion algorithms. In conclusion, the presented calibration procedure is an interesting option to solve the alignment problem when using IMUs for gait analysis. PMID:27973406

  6. Hand-eye calibration for rigid laparoscopes using an invariant point.

    PubMed

    Thompson, Stephen; Stoyanov, Danail; Schneider, Crispin; Gurusamy, Kurinchi; Ourselin, Sébastien; Davidson, Brian; Hawkes, David; Clarkson, Matthew J

    2016-06-01

    Laparoscopic liver resection has significant advantages over open surgery due to less patient trauma and faster recovery times, yet it can be difficult due to the restricted field of view and lack of haptic feedback. Image guidance provides a potential solution but one current challenge is in accurate "hand-eye" calibration, which determines the position and orientation of the laparoscope camera relative to the tracking markers. In this paper, we propose a simple and clinically feasible calibration method based on a single invariant point. The method requires no additional hardware, can be constructed by theatre staff during surgical setup, requires minimal image processing and can be visualised in real time. Real-time visualisation allows the surgical team to assess the calibration accuracy before use in surgery. In addition, in the laboratory, we have developed a laparoscope with an electromagnetic tracking sensor attached to the camera end and an optical tracking marker attached to the distal end. This enables a comparison of tracking performance. We have evaluated our method in the laboratory and compared it to two widely used methods, "Tsai's method" and "direct" calibration. The new method is of comparable accuracy to existing methods, and we show RMS projected error due to calibration of 1.95 mm for optical tracking and 0.85 mm for EM tracking, versus 4.13 and 1.00 mm respectively, using existing methods. The new method has also been shown to be workable under sterile conditions in the operating room. We have proposed a new method of hand-eye calibration, based on a single invariant point. Initial experience has shown that the method provides visual feedback, satisfactory accuracy and can be performed during surgery. We also show that an EM sensor placed near the camera would provide significantly improved image overlay accuracy.
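
    The single-invariant-point constraint can be written as a stacked linear least-squares problem, much like a pivot calibration; the sketch below solves that constraint for synthetic tracker poses and illustrates only the geometry, not the authors' full hand-eye procedure:

        import numpy as np

        def invariant_point_calibration(rotations, translations):
            """Solve R_i @ p + t_i = q for the fixed offset p (in the tracked-marker
            frame) and the invariant point q (in the tracker frame) by stacking the
            constraints into one linear least-squares system."""
            n = len(rotations)
            A = np.zeros((3 * n, 6))
            b = np.zeros(3 * n)
            for i, (R, t) in enumerate(zip(rotations, translations)):
                A[3*i:3*i+3, :3] = R
                A[3*i:3*i+3, 3:] = -np.eye(3)
                b[3*i:3*i+3] = -np.asarray(t)
            x, *_ = np.linalg.lstsq(A, b, rcond=None)
            return x[:3], x[3:]

        def rot(axis, a):
            """Rotation matrix about the x or z axis by angle a (radians)."""
            c, s = np.cos(a), np.sin(a)
            m = {"x": [[1, 0, 0], [0, c, -s], [0, s, c]],
                 "z": [[c, -s, 0], [s, c, 0], [0, 0, 1]]}[axis]
            return np.array(m, dtype=float)

        # Tiny synthetic check: known p and q, three hypothetical marker poses.
        p_true, q_true = np.array([10.0, 0.0, 5.0]), np.array([100.0, 50.0, 20.0])
        Rs = [np.eye(3), rot("z", 0.7), rot("x", 0.9)]
        ts = [q_true - R @ p_true for R in Rs]
        p_est, q_est = invariant_point_calibration(Rs, ts)
        print(np.allclose(p_est, p_true), np.allclose(q_est, q_true))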

  7. Nitrogen dioxide and kerosene-flame soot calibration of photoacoustic instruments for measurement of light absorption by aerosols

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnott, W. Patrick; Moosmüller, Hans; Walker, John W.

    2000-12-01

    A nitrogen dioxide calibration method is developed to evaluate the theoretical calibration for a photoacoustic instrument used to measure light absorption by atmospheric aerosols at a laser wavelength of 532.0 nm. This method uses high concentrations of nitrogen dioxide so that both a simple extinction and the photoacoustically obtained absorption measurement may be performed simultaneously. Since Rayleigh scattering is much less than absorption for the gas, the agreement between the extinction and absorption coefficients can be used to evaluate the theoretical calibration, so that the laser gas spectra are not needed. Photoacoustic theory is developed to account for strong absorption of the laser beam power in passage through the resonator. Findings are that the photoacoustic absorption based on heat-balance theory for the instrument compares well with absorption inferred from the extinction measurement, and that both are well within values represented by published spectra of nitrogen dioxide. Photodissociation of nitrogen dioxide limits the calibration method to wavelengths longer than 398 nm. Extinction and absorption at 532 and 1047 nm were measured for kerosene-flame soot to evaluate the calibration method, and the single scattering albedo was found to be 0.31 and 0.20 at these wavelengths, respectively.

  8. On using summary statistics from an external calibration sample to correct for covariate measurement error.

    PubMed

    Guo, Ying; Little, Roderick J; McConnell, Daniel S

    2012-01-01

    Covariate measurement error is common in epidemiologic studies. Current methods for correcting measurement error with information from external calibration samples are insufficient to provide valid adjusted inferences. We consider the problem of estimating the regression of an outcome Y on covariates X and Z, where Y and Z are observed, X is unobserved, but a variable W that measures X with error is observed. Information about measurement error is provided in an external calibration sample where data on X and W (but not Y and Z) are recorded. We describe a method that uses summary statistics from the calibration sample to create multiple imputations of the missing values of X in the regression sample, so that the regression coefficients of Y on X and Z and associated standard errors can be estimated using simple multiple imputation combining rules, yielding valid statistical inferences under the assumption of a multivariate normal distribution. The proposed method is shown by simulation to provide better inferences than existing methods, namely the naive method, classical calibration, and regression calibration, particularly for correction for bias and achieving nominal confidence levels. We also illustrate our method with an example using linear regression to examine the relation between serum reproductive hormone concentrations and bone mineral density loss in midlife women in the Michigan Bone Health and Metabolism Study. Existing methods fail to adjust appropriately for bias due to measurement error in the regression setting, particularly when measurement error is substantial. The proposed method corrects this deficiency.

  9. Integrating ecosystems measurements from multiple eddy-covariance sites to a simple model of ecosystem process - Are there possibilities for a uniform model calibration?

    NASA Astrophysics Data System (ADS)

    Minunno, Francesco; Peltoniemi, Mikko; Launiainen, Samuli; Mäkelä, Annikki

    2014-05-01

    Biogeochemical models quantify the material and energy flux exchanges between biosphere, atmosphere and soil; however, there is still considerable uncertainty underpinning model structure and parametrization. The increasing availability of data from multiple sources provides useful information for model calibration and validation at different spatial and temporal scales. We calibrated the simplified ecosystem process model PRELES to data from multiple sites. In this work we had the following objective: to compare a multi-site calibration with site-specific calibrations, in order to test whether PRELES is a model of general applicability and how well one parameterization can predict ecosystem fluxes. Model calibration and evaluation were carried out by means of the Bayesian method; Bayesian calibration (BC) and Bayesian model comparison (BMC) were used to quantify the uncertainty in model parameters and model structure. Evapotranspiration (ET) and gross primary production (GPP) measurements collected at 9 sites in Finland and Sweden were used in the study; half of the dataset was used for model calibrations and half for the comparative analyses. Ten BCs were performed; the model was independently calibrated for each of the nine sites (site-specific calibrations), and a multi-site calibration was achieved using the data from all the sites in one BC. Then 9 BMCs were carried out, one for each site, using output from the multi-site and the site-specific versions of PRELES. Similar estimates were obtained for the parameters to which model outputs are most sensitive. Not surprisingly, the joint posterior distribution achieved through the multi-site calibration was characterized by lower uncertainty, because more data were involved in the calibration process. No significant differences were encountered in the predictions of the multi-site and site-specific versions of PRELES, and after BMC we concluded that the model can be reliably used at regional scale to simulate carbon and water fluxes of boreal forests. Despite being a simple model, PRELES provided good estimates of GPP and ET; only at one site did the multi-site version of PRELES underestimate water fluxes. Our study implies convergence of GPP and water processes in the boreal zone to the extent that their plausible prediction is possible with a simple model using a global parameterization.

  10. Precise and direct method for the measurement of the torsion spring constant of the atomic force microscopy cantilevers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jarząbek, D. M., E-mail: djarz@ippt.pan.pl

    2015-01-15

    A direct method for evaluating the torsional spring constants of atomic force microscope cantilevers is presented in this paper. The method uses a nanoindenter to apply forces on the long axis of the cantilever and at a certain distance from it. The torque vs torsion relation is then evaluated by comparing the results of the indentation experiments at different positions on the cantilever. Next, this relation is used for the precise determination of the torsional spring constant of the cantilever. The statistical analysis shows that the standard deviation of the calibration measurements is approximately 1%. Furthermore, a simple method for calibrating the photodetector's lateral response is proposed. The overall procedure for determining the lateral calibration constant has an accuracy of approximately 10%.
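
    Assuming a simple beam model in which an off-axis load adds a twist term to the apparent compliance, C(d) = C0 + d²/k_phi (an assumption made here for illustration, not the paper's exact analysis), the torsional spring constant follows from a linear fit; the data below are invented:

        import numpy as np

        # Hypothetical nanoindenter data: lateral offsets d from the cantilever's
        # long axis (m) and the apparent compliance C(d) = displacement / force (m/N)
        # measured at each position.
        d = np.array([0.0, 5e-6, 10e-6, 15e-6, 20e-6])
        C = np.array([2.00e-2, 2.05e-2, 2.20e-2, 2.45e-2, 2.80e-2])

        # Fit C versus d**2; the slope is 1 / k_phi under the assumed model.
        slope, C0 = np.polyfit(d**2, C, 1)
        k_phi = 1.0 / slope              # torsional spring constant, N*m/rad

        print(k_phi, C0)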

  11. Computerized tomography calibrator

    NASA Technical Reports Server (NTRS)

    Engel, Herbert P. (Inventor)

    1991-01-01

    A set of interchangeable pieces comprising a computerized tomography calibrator, and a method of use thereof, permits focusing of a computerized tomographic (CT) system. The interchangeable pieces include a plurality of nestable, generally planar mother rings, adapted for the receipt of planar inserts of predetermined sizes, and of predetermined material densities. The inserts further define openings therein for receipt of plural sub-inserts. All pieces are of known sizes and densities, permitting the assembling of different configurations of materials of known sizes and combinations of densities, for calibration (i.e., focusing) of a computerized tomographic system through variation of operating variables thereof. Rather than serving as a phantom, which is intended to be representative of a particular workpiece to be tested, the set of interchangeable pieces permits simple and easy standardized calibration of a CT system. The calibrator and its related method of use further include the use of air or of particular fluids for filling various openings, as part of a selected configuration of the set of pieces.

  12. General Matrix Inversion for the Calibration of Electric Field Sensor Arrays on Aircraft Platforms

    NASA Technical Reports Server (NTRS)

    Mach, D. M.; Koshak, W. J.

    2006-01-01

    We have developed a matrix calibration procedure that uniquely relates the electric fields measured at the aircraft with the external vector electric field and net aircraft charge. Our calibration method is being used with all of our aircraft/electric field sensing combinations and can be generalized to any reasonable combination of electric field measurements and aircraft. We determine a calibration matrix that represents the individual instrument responses to the external electric field. The aircraft geometry and configuration of field mills (FMs) uniquely define the matrix. The matrix can then be inverted to determine the external electric field and net aircraft charge from the FM outputs. A distinct advantage of the method is that if one or more FMs need to be eliminated or de-emphasized (for example, due to a malfunction), it is a simple matter to reinvert the matrix without the malfunctioning FMs. To demonstrate our calibration technique, we present data from several of our aircraft programs (ER-2, DC-8, Altus, Citation).

  13. Exploration of attenuated total reflectance mid-infrared spectroscopy and multivariate calibration to measure immunoglobulin G in human sera.

    PubMed

    Hou, Siyuan; Riley, Christopher B; Mitchell, Cynthia A; Shaw, R Anthony; Bryanton, Janet; Bigsby, Kathryn; McClure, J Trenton

    2015-09-01

    Immunoglobulin G (IgG) is crucial for the protection of the host from invasive pathogens. Due to its importance for human health, tools that enable the monitoring of IgG levels are highly desired. Consequently there is a need for methods to determine the IgG concentration that are simple, rapid, and inexpensive. This work explored the potential of attenuated total reflectance (ATR) infrared spectroscopy as a method to determine IgG concentrations in human serum samples. Venous blood samples were collected from adults and children, and from the umbilical cord of newborns. The serum was harvested and tested using ATR infrared spectroscopy. Partial least squares (PLS) regression provided the basis to develop the new analytical methods. Three PLS calibrations were determined: one for the combined set of the venous and umbilical cord serum samples, the second for only the umbilical cord samples, and the third for only the venous samples. The number of PLS factors was chosen by critical evaluation of Monte Carlo-based cross validation results. The predictive performance for each PLS calibration was evaluated using the Pearson correlation coefficient, scatter plot and Bland-Altman plot, and percent deviations for independent prediction sets. The repeatability was evaluated by standard deviation and relative standard deviation. The results showed that ATR infrared spectroscopy is potentially a simple, quick, and inexpensive method to measure IgG concentrations in human serum samples. The results also showed that it is possible to build a united calibration curve for the umbilical cord and the venous samples. Copyright © 2015 Elsevier B.V. All rights reserved.
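
    A generic sketch of the PLS-plus-Monte-Carlo-cross-validation workflow described above, using scikit-learn and synthetic data in place of the real ATR-IR spectra and IgG reference values:

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import ShuffleSplit, cross_val_score

        # Hypothetical training data: rows are serum spectra (absorbance values),
        # y holds the reference IgG concentrations (g/L).
        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 400))
        y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=60)

        # Monte Carlo (repeated random split) cross-validation to choose the
        # number of PLS factors, in the spirit of the abstract's selection step.
        cv = ShuffleSplit(n_splits=50, test_size=0.3, random_state=1)
        scores = {k: cross_val_score(PLSRegression(n_components=k), X, y, cv=cv).mean()
                  for k in range(1, 11)}
        best_k = max(scores, key=scores.get)

        model = PLSRegression(n_components=best_k).fit(X, y)
        igg_predicted = model.predict(X[:3])   # predictions for three example spectra
        print(best_k, igg_predicted.ravel())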

  14. Radiometric calibration of Landsat Thematic Mapper multispectral images

    USGS Publications Warehouse

    Chavez, P.S.

    1989-01-01

    A main problem encountered in radiometric calibration of satellite image data is correcting for atmospheric effects. Without this correction, an image digital number (DN) cannot be converted to a surface reflectance value. In this paper the accuracy of a calibration procedure, which includes a correction for atmospheric scattering, is tested. Two simple methods, a stand-alone and an in situ sky radiance measurement technique, were used to derive the HAZE DN values for each of the six reflectance Thematic Mapper (TM) bands. The DNs of two Landsat TM images of Phoenix, Arizona were converted to surface reflectances. -from Author

  15. Solar-diffuser panel and ratioing radiometer approach to satellite sensor on-board calibration

    NASA Technical Reports Server (NTRS)

    Slater, Philip N.; Palmer, James M.

    1991-01-01

    The use of a solar-diffuser panel is a desirable approach to the on-board absolute radiometric calibration of satellite multispectral sensors used for earth observation in the solar reflective spectral range. It provides a full aperture, full field, end-to-end calibration near the top of the sensor's dynamic range and across its entire spectral response range. A serious drawback is that the panel's reflectance, and the response of any simple detector used to monitor its reflectance may change with time. This paper briefly reviews some preflight and on-board methods for absolute calibration and introduces the ratioing-radiometer concept in which the radiance of the panel is ratioed with respect to the solar irradiance at the time the multispectral sensor is viewing the panel in its calibration mode.

  16. A curve fitting method for extrinsic camera calibration from a single image of a cylindrical object

    NASA Astrophysics Data System (ADS)

    Winkler, A. W.; Zagar, B. G.

    2013-08-01

    An important step in the process of optical steel coil quality assurance is to measure the proportions of width and radius of steel coils as well as the relative position and orientation of the camera. This work attempts to estimate these extrinsic parameters from single images by using the cylindrical coil itself as the calibration target. Therefore, an adaptive least-squares algorithm is applied to fit parametrized curves to the detected true coil outline in the acquisition. The employed model allows for strictly separating the intrinsic and the extrinsic parameters. Thus, the intrinsic camera parameters can be calibrated beforehand using available calibration software. Furthermore, a way to segment the true coil outline in the acquired images is motivated. The proposed optimization method yields highly accurate results and can be generalized even to measure other solids which cannot be characterized by the identification of simple geometric primitives.

  17. Optogalvanic wavelength calibration for laser monitoring of reactive atmospheric species

    NASA Technical Reports Server (NTRS)

    Webster, C. R.

    1982-01-01

    Laser-based techniques have been successfully employed for monitoring atmospheric species of importance to stratospheric ozone chemistry or tropospheric air quality control. When spectroscopic methods using tunable lasers are used, a simultaneously recorded reference spectrum is required for wavelength calibration. For stable species this is readily achieved by incorporating into the sensing instrument a reference cell containing the species to be monitored. However, when the species of interest is short-lived, this approach is unsuitable. It is proposed that wavelength calibration for short-lived species may be achieved by generating the species of interest in an electrical or RF discharge and using optogalvanic detection as a simple, sensitive, and reliable means of recording calibration spectra. The wide applicability of this method is emphasized. Ultraviolet, visible, or infrared lasers, either CW or pulsed, may be used in aircraft, balloon, or shuttle experiments for sensing atoms, molecules, radicals, or ions.

  18. An innovative method for coordinate measuring machine one-dimensional self-calibration with simplified experimental process.

    PubMed

    Fang, Cheng; Butler, David Lee

    2013-05-01

    In this paper, an innovative method for CMM (Coordinate Measuring Machine) self-calibration is proposed. In contrast to conventional CMM calibration that relies heavily on a high precision reference standard such as a laser interferometer, the proposed calibration method is based on a low-cost artefact which is fabricated with commercially available precision ball bearings. By optimizing the mathematical model and rearranging the data sampling positions, the experimental process and data analysis can be simplified. In mathematical expression, the samples can be minimized by eliminating the redundant equations among those configured by the experimental data array. The section lengths of the artefact are measured at arranged positions, with which an equation set can be configured to determine the measurement errors at the corresponding positions. With the proposed method, the equation set is short of one equation, which can be supplemented by either measuring the total length of the artefact with a higher-precision CMM or calibrating the single point error at the extreme position with a laser interferometer. In this paper, the latter is selected. With spline interpolation, the error compensation curve can be determined. To verify the proposed method, a simple calibration system was set up on a commercial CMM. Experimental results showed that with the error compensation curve uncertainty of the measurement can be reduced to 50%.
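
    The core of the scheme is that each section measurement constrains only a difference of scale errors, leaving the system one equation short until a single reference value is added. A small numerical sketch, assuming the artefact sections are measured end-to-end along the axis (hypothetical values, not the authors' data):

        import numpy as np
        from scipy.interpolate import CubicSpline

        # Hypothetical inputs: nominal section length of the ball-bearing artefact,
        # the measured section lengths at consecutive axis positions, and one error
        # value at the extreme position supplied by a laser interferometer.
        L_nominal = 50.0                                               # mm
        measured  = np.array([50.0008, 50.0011, 49.9995, 50.0003, 50.0010])  # mm
        e_start   = -0.0004                                            # mm, anchor

        # Each measurement gives measured[i] = L_nominal + e[i+1] - e[i], so the
        # errors follow by cumulative summation once the anchor closes the system.
        positions = L_nominal * np.arange(measured.size + 1)           # 0, 50, ... mm
        errors = e_start + np.concatenate(([0.0], np.cumsum(measured - L_nominal)))

        # Spline interpolation gives the continuous error-compensation curve.
        compensation = CubicSpline(positions, errors)
        print(errors)
        print(float(compensation(125.0)))                              # correction at 125 mm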

  19. Calibrating page sized Gafchromic EBT3 films

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crijns, W.; Maes, F.; Heide, U. A. van der

    2013-01-15

    Purpose: The purpose is the development of a novel calibration method for dosimetry with Gafchromic EBT3 films. The method should be applicable for pretreatment verification of volumetric modulated arc, and intensity modulated radiotherapy. Because the exposed area on film can be large for such treatments, lateral scan errors must be taken into account. The correction for the lateral scan effect is obtained from the calibration data itself. Methods: In this work, the film measurements were modeled using their relative scan values (Transmittance, T). Inside the transmittance domain a linear combination and a parabolic lateral scan correction described the observed transmittance values. The linear combination model combined a monomer transmittance state (T_0) and a polymer transmittance state (T_∞) of the film. The dose domain was associated with the observed effects in the transmittance domain through a rational calibration function. On the calibration film only simple static fields were applied and page sized films were used for calibration and measurements (treatment verification). Four different calibration setups were considered and compared with respect to dose estimation accuracy. The first (I) used a calibration table from 32 regions of interest (ROIs) spread on 4 calibration films, the second (II) used 16 ROIs spread on 2 calibration films, the third (III) and fourth (IV) used 8 ROIs spread on a single calibration film. The calibration tables of the setups I, II, and IV contained eight dose levels delivered to different positions on the films, while for setup III only four dose levels were applied. Validation was performed by irradiating film strips with known doses at two different time points over the course of a week. Accuracy of the dose response and the lateral effect correction was estimated using the dose difference and the root mean squared error (RMSE), respectively. Results: A calibration based on two films was the optimal balance between cost effectiveness and dosimetric accuracy. The validation resulted in dose errors of 1%-2% for the two different time points, with a maximal absolute dose error around 0.05 Gy. The lateral correction reduced the RMSE values on the sides of the film to the RMSE values at the center of the film. Conclusions: EBT3 Gafchromic films were calibrated for large field dosimetry with a limited number of page sized films and simple static calibration fields. The transmittance was modeled as a linear combination of two transmittance states, and associated with dose using a rational calibration function. Additionally, the lateral scan effect was resolved in the calibration function itself. This allows the use of page sized films. Only two calibration films were required to estimate both the dose and the lateral response. The calibration films were used over the course of a week, with residual dose errors ≤ 2% or ≤ 0.05 Gy.

  20. Linear Calibration of Radiographic Mineral Density Using Video-Digitizing Methods

    NASA Technical Reports Server (NTRS)

    Martin, R. Bruce; Papamichos, Thomas; Dannucci, Greg A.

    1990-01-01

    Radiographic images can provide quantitative as well as qualitative information if they are subjected to densitometric analysis. Using modern video-digitizing techniques, such densitometry can be readily accomplished using relatively inexpensive computer systems. However, such analyses are made more difficult by the fact that the density values read from the radiograph have a complex, nonlinear relationship to bone mineral content. This article derives the relationship between these variables from the nature of the intermediate physical processes, and presents a simple mathematical method for obtaining a linear calibration function using a step wedge or other standard.

  1. Linear Calibration of Radiographic Mineral Density Using Video-Digitizing Methods

    NASA Technical Reports Server (NTRS)

    Martin, R. Bruce; Papamichos, Thomas; Dannucci, Greg A.

    1990-01-01

    Radiographic images can provide quantitative as well as qualitative information if they are subjected to densitometric analysis. Using modern video-digitizing techniques, such densitometry can be readily accomplished using relatively inexpensive computer systems. However, such analyses are made more difficult by the fact that the density values read from the radiograph have a complex, nonlinear relationship to bone mineral content. This article derives the relationship between these variables from the nature of the intermediate physical processes, and presents a simple mathematical method for obtaining a linear calibration function using a step wedge or other standard.

  2. Development and Interlaboratory Validation of a Simple Screening Method for Genetically Modified Maize Using a ΔΔC(q)-Based Multiplex Real-Time PCR Assay.

    PubMed

    Noguchi, Akio; Nakamura, Kosuke; Sakata, Kozue; Sato-Fukuda, Nozomi; Ishigaki, Takumi; Mano, Junichi; Takabatake, Reona; Kitta, Kazumi; Teshima, Reiko; Kondo, Kazunari; Nishimaki-Mogami, Tomoko

    2016-04-19

    A number of genetically modified (GM) maize events have been developed and approved worldwide for commercial cultivation. A screening method is needed to monitor GM maize approved for commercialization in countries that mandate the labeling of foods containing a specified threshold level of GM crops. In Japan, a screening method has been implemented to monitor approved GM maize since 2001. However, the screening method currently used in Japan is time-consuming and requires generation of a calibration curve and experimental conversion factor (C(f)) value. We developed a simple screening method that avoids the need for a calibration curve and C(f) value. In this method, ΔC(q) values between the target sequences and the endogenous gene are calculated using multiplex real-time PCR, and the ΔΔC(q) value between the analytical and control samples is used as the criterion for determining analytical samples in which the GM organism content is below the threshold level for labeling of GM crops. An interlaboratory study indicated that the method is applicable independently with at least two models of PCR instruments used in this study.
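
    The quantities involved are simple differences of Cq values; the sketch below shows the arithmetic with invented Cq readings and a deliberately simplified decision rule (the validated method defines the exact cut-off):

        def delta_delta_cq(cq_target_sample, cq_endo_sample,
                           cq_target_control, cq_endo_control):
            """ΔΔCq sketch: compare the GM-target signal of an analytical sample
            against a control sample prepared near the labeling threshold."""
            delta_sample  = cq_target_sample - cq_endo_sample    # ΔCq, sample
            delta_control = cq_target_control - cq_endo_control  # ΔCq, control
            return delta_sample - delta_control                  # ΔΔCq

        # Hypothetical Cq values from one multiplex real-time PCR run.
        ddcq = delta_delta_cq(cq_target_sample=31.2, cq_endo_sample=22.4,
                              cq_target_control=29.8, cq_endo_control=22.1)

        # A positive ΔΔCq means less GM target relative to the threshold control,
        # i.e. the sample's GM content is below the labeling threshold.
        print(ddcq, "below threshold" if ddcq > 0 else "requires quantification")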

  3. Accurate mass measurement by matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry. I. Measurement of positive radical ions using porphyrin standard reference materials.

    PubMed

    Griffiths, Nia W; Wyatt, Mark F; Kean, Suzanna D; Graham, Andrew E; Stein, Bridget K; Brenton, A Gareth

    2010-06-15

    A method for the accurate mass measurement of positive radical ions by matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI-TOFMS) is described. Initial use of a conjugated oligomeric calibration material was rejected in favour of a series of meso-tetraalkyl/tetraalkylaryl-functionalised porphyrins, from which the two calibrants required for a particular accurate mass measurement were chosen. While all measurements of monoisotopic species were within +/-5 ppm, and the method was rigorously validated using chemometrics, mean values of five measurements were used for extra confidence in the generation of potential elemental formulae. Potential difficulties encountered when measuring compounds containing multi-isotopic elements are discussed, where the monoisotopic peak is no longer the lowest mass peak, and a simple mass-correction solution can be applied. The method requires no significant expertise to implement, but care and attention is required to obtain valid measurements. The method is operationally simple and will prove useful to the analytical chemistry community. Copyright (c) 2010 John Wiley & Sons, Ltd.
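
    A two-point calibration of this kind can be sketched by assuming the common first-order TOF relation m/z = a·(t − t0)² (an assumption made here for illustration; the paper does not state the calibration law) and solving for a and t0 from the two bracketing calibrants:

        import math

        def two_point_tof_calibration(t1, m1, t2, m2):
            """Fit m/z = a * (t - t0)**2 to two bracketing calibrants."""
            s = (math.sqrt(m2) - math.sqrt(m1)) / (t2 - t1)   # s = sqrt(a)
            t0 = t1 - math.sqrt(m1) / s
            return s * s, t0

        def mass(t, a, t0):
            """Accurate mass of the unknown from its measured flight time."""
            return a * (t - t0) ** 2

        # Hypothetical flight times (us) and known monoisotopic masses (Da) for
        # two porphyrin calibrants chosen to bracket the analyte.
        a, t0 = two_point_tof_calibration(t1=25.10, m1=614.25, t2=27.95, m2=762.40)
        print(mass(26.60, a, t0))   # analyte measured between the two calibrants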

  4. Sensitive analytical method for simultaneous analysis of some vasoconstrictors with highly overlapped analytical signals

    NASA Astrophysics Data System (ADS)

    Nikolić, G. S.; Žerajić, S.; Cakić, M.

    2011-10-01

    Multivariate calibration is a powerful mathematical tool that can be applied in analytical chemistry when analytical signals are highly overlapped. A method based on partial least squares regression is proposed for the simultaneous spectrophotometric determination of adrenergic vasoconstrictors in a decongestive solution containing two active components: phenylephrine hydrochloride and trimazoline hydrochloride. These sympathomimetic agents are frequently associated in pharmaceutical formulations against the common cold. The proposed method, which is simple and rapid, offers the advantages of sensitivity and a wide range of determinations without the need for extraction of the vasoconstrictors. Different parameters were evaluated in order to minimize the number of factors necessary to obtain the calibration matrix by multivariate calibration. The adequate selection of spectral regions proved to be important for limiting the number of factors. In order to simultaneously quantify both hydrochlorides among the excipients, the spectral region between 250 and 290 nm was selected. Recoveries for the vasoconstrictors were 98-101%. The developed method was applied to the assay of two decongestive pharmaceutical preparations.
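
    A minimal sketch of a PLS calibration of this kind is given below, using scikit-learn's PLSRegression; the spectra, concentrations, and the choice of three latent factors are placeholders, not the authors' data.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    # X: absorbance spectra of calibration mixtures restricted to 250-290 nm;
    # Y: known concentrations of the two vasoconstrictors (columns). Values are
    # synthetic placeholders; in practice X comes from the spectrophotometer.
    rng = np.random.default_rng(0)
    X = rng.random((20, 41))           # 20 calibration mixtures x 41 wavelengths
    Y = rng.random((20, 2)) * 10.0     # two components, e.g. in ug/mL

    pls = PLSRegression(n_components=3)         # number of latent factors to optimize
    Y_cv = cross_val_predict(pls, X, Y, cv=5)   # cross-validation guides factor selection

    pls.fit(X, Y)
    unknown = rng.random((1, 41))               # spectrum of a decongestive solution
    print(pls.predict(unknown))                 # simultaneous estimate of both components
    ```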

  5. A novel multivariate approach using science-based calibration for direct coating thickness determination in real-time NIR process monitoring.

    PubMed

    Möltgen, C-V; Herdling, T; Reich, G

    2013-11-01

    This study demonstrates an approach, using science-based calibration (SBC), for direct coating thickness determination on heart-shaped tablets in real time. Near-infrared (NIR) spectra were collected during four full industrial pan coating operations. The tablets were coated with a thin hydroxypropyl methylcellulose (HPMC) film up to a film thickness of 28 μm. SBC permits calibration of the NIR spectral data without costly determined reference values, because it combines a classical (physics-based) estimate of the coating signal with a statistical estimate of the noise. The approach enabled the use of NIR to measure the film thickness increase from around 8 to 28 μm in four independent batches in real time. The developed model provided a spectroscopic limit of detection for the coating thickness of 0.64 ± 0.03 μm root-mean-square (RMS). Commonly used statistical calibration methods, such as Partial Least Squares (PLS), need sufficiently varying reference values for calibration; for thin non-functional coatings this is a challenge because the quality of the model depends on the accuracy of the selected calibration standards. The straightforward SBC approach eliminates many of the problems associated with the conventional statistical methods and offers an alternative for multivariate calibration. Copyright © 2013 Elsevier B.V. All rights reserved.
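
    The sketch below shows one common formulation of an SBC-style estimator, in which the regression vector is built from a physically estimated coating signal and a statistically estimated noise covariance; the exact estimator used by the authors may differ, and all inputs here are synthetic placeholders.

    ```python
    import numpy as np

    def sbc_regression_vector(g, sigma_noise):
        """SBC-style estimator: combine a physically estimated analyte (coating)
        signal g with a statistically estimated noise covariance sigma_noise."""
        sigma_inv = np.linalg.pinv(sigma_noise)
        return sigma_inv @ g / (g @ sigma_inv @ g)

    # Illustrative inputs: g is the expected NIR spectral change per micrometre of
    # HPMC film (from physics / pure-component spectra); sigma_noise is estimated
    # from spectra that contain no coating-thickness variation.
    n_wavelengths = 200
    g = np.random.default_rng(1).random(n_wavelengths) * 1e-3
    noise_spectra = np.random.default_rng(2).normal(0, 1e-4, (50, n_wavelengths))
    sigma_noise = np.cov(noise_spectra, rowvar=False)

    b = sbc_regression_vector(g, sigma_noise)
    new_spectrum = np.random.default_rng(3).random(n_wavelengths)
    print(new_spectrum @ b)   # predicted coating-thickness increment (arbitrary units here)
    ```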

  6. An accurate online calibration system based on combined clamp-shape coil for high voltage electronic current transformers.

    PubMed

    Li, Zhen-hua; Li, Hong-bin; Zhang, Zhi

    2013-07-01

    Electronic transformers are widely used in power systems because of their wide bandwidth and good transient performance. However, as an emerging technology, the failure rate of electronic transformers is higher than that of traditional transformers, so the calibration period needs to be shortened. Traditional calibration methods require that the power of the transmission line be cut off, which complicates operation and incurs outage losses. This paper proposes an online calibration system that can calibrate electronic current transformers without a power outage. In this work, the high-accuracy standard current transformer and the online operation method are the key techniques. Based on a clamp-shape iron-core coil and a clamp-shape air-core coil, a combined clamp-shape coil is designed as the standard current transformer. By analyzing the output characteristics of the two coils, the combined clamp-shape coil can verify its own accuracy, so the accuracy of the online calibration system can be guaranteed. Moreover, by employing the earth-potential working method and using two insulating rods to connect the combined clamp-shape coil to the high-voltage bus, the operation becomes simple and safe. Tests at the China National Center for High Voltage Measurement and field experiments show that the proposed system achieves an accuracy of 0.05 class.

  7. Measurement of Antenna Bore-Sight Gain

    NASA Technical Reports Server (NTRS)

    Fortinberry, Jarrod; Shumpert, Thomas

    2016-01-01

    The absolute or free-field gain of a simple antenna can be approximated using standard antenna theory formulae, or, for a more accurate prediction, numerical methods may be employed to solve for antenna parameters including gain. Both of these methods give reasonable estimates, but in practice antenna gain is usually verified and documented via measurements and calibration. In this paper, a relatively simple, low-cost, yet effective means of determining the bore-sight free-field gain of a VHF/UHF antenna is proposed using the Brewster angle relationship.

  8. Evaluation of commercially available techniques and development of simplified methods for measuring grille airflows in HVAC systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, Iain S.; Wray, Craig P.; Guillot, Cyril

    2003-08-01

    In this report, we discuss the accuracy of flow hoods for residential applications, based on laboratory tests and field studies. The results indicate that commercially available hoods are often inadequate to measure flows in residential systems, and that there can be a wide range of performance between different flow hoods. The errors are due to poor calibrations, sensitivity of existing hoods to grille flow non-uniformities, and flow changes from added flow resistance. We also evaluated several simple techniques for measuring register airflows that could be adopted by the HVAC industry and homeowners as simple diagnostics that are often as accurate as commercially available devices. Our test results also show that current calibration procedures for flow hoods do not account for field application problems. As a result, organizations such as ASHRAE or ASTM need to develop a new standard for flow hood calibration, along with a new measurement standard to address field use of flow hoods.

  9. Bacterial contamination monitor

    NASA Technical Reports Server (NTRS)

    Rich, E.; Macleod, N. H.

    1973-01-01

    Economical, simple, and fast method uses apparatus which detects bacteria by photography. Apparatus contains camera, film assembly, calibrated light bulb, opaque plastic plate with built-in reflecting surface and transparent window section, opaque slide, plate with chemical packages, and cover containing roller attached to handle.

  10. Hand-Eye Calibration in Visually-Guided Robot Grinding.

    PubMed

    Li, Wen-Long; Xie, He; Zhang, Gang; Yan, Si-Jie; Yin, Zhou-Ping

    2016-11-01

    Visually-guided robot grinding is a novel and promising automation technique for blade manufacturing. One common problem encountered in robot grinding is hand-eye calibration, which establishes the pose relationship between the end effector (hand) and the scanning sensor (eye). This paper proposes a new calibration approach for robot belt grinding. The main contribution of this paper is its consideration of both joint parameter errors and pose parameter errors in a hand-eye calibration equation. The objective function of the hand-eye calibration is built and solved, from which 30 compensated values (corresponding to 24 joint parameters and six pose parameters) are easily calculated in closed form. The proposed approach is economical and simple because only a criterion sphere is used to calculate the calibration parameters, avoiding the need for an expensive and complicated tracking process using a laser tracker. The effectiveness of this method is verified using a calibration experiment and a blade grinding experiment. The code used in this approach is attached in the Appendix.

  11. Operational correction and validation of the VIIRS TEB longwave infrared band calibration bias during blackbody temperature changes

    NASA Astrophysics Data System (ADS)

    Wang, Wenhui; Cao, Changyong; Ignatov, Alex; Li, Zhenglong; Wang, Likun; Zhang, Bin; Blonski, Slawomir; Li, Jun

    2017-09-01

    The Suomi NPP VIIRS thermal emissive bands (TEB) have been performing very well since data became available on January 20, 2012. The longwave infrared bands at 11 and 12 μm (M15 and M16) are primarily used for sea surface temperature (SST) retrievals. A long-standing anomaly has been observed during the quarterly warm-up-cool-down (WUCD) events: during such events the daytime SST product becomes anomalous, with a warm bias appearing as a spike on the order of 0.2 K in the SST time series. A previous study (Cao et al., 2017) suggested that the VIIRS TEB calibration anomaly during WUCD is due to a flawed theoretical assumption in the calibration equation and proposed an Ltrace method to address the issue. This paper complements that study and presents the operational implementation and validation of the Ltrace method for M15 and M16. The Ltrace method applies a bias correction during WUCD only; it requires a simple code change and a one-time calibration parameter look-up table update. The method was evaluated using collocated CrIS observations and the SST algorithm. Our results indicate that the method can effectively reduce the WUCD calibration anomaly in M15, with a residual bias of 0.02 K after correction. It works less effectively for M16, with a residual bias of 0.04 K. The Ltrace method may over-correct WUCD calibration biases, especially for M16; however, the residual WUCD biases are small in both bands. Evaluation results using the SST algorithm show that the method can effectively remove the SST anomaly during WUCD events.

  12. Calibration of a subcutaneous amperometric glucose sensor implanted for 7 days in diabetic patients. Part 2. Superiority of the one-point calibration method.

    PubMed

    Choleau, C; Klein, J C; Reach, G; Aussedat, B; Demaria-Pesce, V; Wilson, G S; Gifford, R; Ward, W K

    2002-08-01

    Calibration, i.e., the transformation in real time of the signal I(t) generated by the glucose sensor at time t into an estimate of glucose concentration G(t), is a key issue for the development of a continuous glucose monitoring system. The aim was to compare two calibration procedures. In the one-point calibration, which assumes that the background current I0 is negligible, the sensitivity S is simply determined as the ratio I/G, and G(t) = I(t)/S. The two-point calibration consists of determining a sensor sensitivity S and a background current I0 by plotting two values of the sensor signal versus the concomitant blood glucose concentrations; the subsequent estimation of G(t) is given by G(t) = (I(t) - I0)/S. A glucose sensor was implanted in the abdominal subcutaneous tissue of nine type 1 diabetic patients for 3 (n = 2) or 7 days (n = 7). The one-point calibration was performed a posteriori either once per day before breakfast, twice per day before breakfast and dinner, or three times per day before each meal. The two-point calibration was performed each morning during breakfast. The percentages of points in zones A and B of the Clarke Error Grid were significantly higher when the system was calibrated using the one-point calibration. Using two one-point calibrations per day before meals was virtually as accurate as three one-point calibrations. This study demonstrates the feasibility of a simple method for calibrating a continuous glucose monitoring system.
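
    The two calibration rules compared in this abstract reduce to a few lines of arithmetic; the sketch below implements both as stated, with hypothetical current and glucose values.

    ```python
    def one_point_calibration(i_cal, g_cal):
        """One-point calibration: assumes the background current I0 is negligible."""
        s = i_cal / g_cal                  # sensitivity S = I / G
        return lambda i_t: i_t / s         # G(t) = I(t) / S

    def two_point_calibration(i1, g1, i2, g2):
        """Two-point calibration: estimates both sensitivity S and background current I0."""
        s = (i2 - i1) / (g2 - g1)
        i0 = i1 - s * g1
        return lambda i_t: (i_t - i0) / s  # G(t) = (I(t) - I0) / S

    # Illustrative numbers only (currents in nA, glucose in mmol/L are hypothetical):
    estimate_g = one_point_calibration(i_cal=12.0, g_cal=6.0)
    print(estimate_g(9.0))                 # glucose estimated from a later current reading
    estimate_g2 = two_point_calibration(i1=12.0, g1=6.0, i2=20.0, g2=10.5)
    print(estimate_g2(9.0))
    ```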

  13. Food adulteration analysis without laboratory prepared or determined reference food adulterant values.

    PubMed

    Kalivas, John H; Georgiou, Constantinos A; Moira, Marianna; Tsafaras, Ilias; Petrakis, Eleftherios A; Mousdis, George A

    2014-04-01

    Quantitative analysis of food adulterants is an important health and economic issue, and it needs to be fast and simple. Spectroscopy has significantly reduced analysis time; however, preparing analyte calibration samples that are matrix-matched to the prediction samples is still required, which can be laborious and costly. Reported in this paper is the application of a newly developed pure component Tikhonov regularization (PCTR) process that requires neither laboratory-prepared calibration samples nor reference analysis methods, and hence is a greener calibration method. The PCTR method requires an analyte pure-component spectrum and non-analyte spectra. As a food analysis example, synchronous fluorescence spectra of extra virgin olive oil samples adulterated with sunflower oil are used. Results are shown to be better than those obtained using ridge regression with reference calibration samples. The flexibility of PCTR allows reference samples to be included, and the approach is generic for use with other instrumental methods and food products. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. A simple differential steady-state method to measure the thermal conductivity of solid bulk materials with high accuracy.

    PubMed

    Kraemer, D; Chen, G

    2014-02-01

    Accurate measurements of thermal conductivity are of great importance for materials research and development. Steady-state methods determine thermal conductivity directly from the proportionality between heat flow and an applied temperature difference (Fourier Law). Although theoretically simple, in practice, achieving high accuracies with steady-state methods is challenging and requires rather complex experimental setups due to temperature sensor uncertainties and parasitic heat loss. We developed a simple differential steady-state method in which the sample is mounted between an electric heater and a temperature-controlled heat sink. Our method calibrates for parasitic heat losses from the electric heater during the measurement by maintaining a constant heater temperature close to the environmental temperature while varying the heat sink temperature. This enables a large signal-to-noise ratio which permits accurate measurements of samples with small thermal conductance values without an additional heater calibration measurement or sophisticated heater guards to eliminate parasitic heater losses. Additionally, the differential nature of the method largely eliminates the uncertainties of the temperature sensors, permitting measurements with small temperature differences, which is advantageous for samples with high thermal conductance values and/or with strongly temperature-dependent thermal conductivities. In order to accelerate measurements of more than one sample, the proposed method allows for measuring several samples consecutively at each temperature measurement point without adding significant error. We demonstrate the method by performing thermal conductivity measurements on commercial bulk thermoelectric Bi2Te3 samples in the temperature range of 30-150 °C with an error below 3%.
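
    One plausible way to carry out the heat-loss bookkeeping described above is sketched below: with the heater held at a fixed temperature near ambient, the parasitic loss is roughly constant, so a linear fit of heater power against the heater-sink temperature difference separates the loss term from the sample conductance. The data and geometry are illustrative, not the authors' measurements.

    ```python
    import numpy as np

    # Heater power needed to hold the heater at a fixed temperature near ambient,
    # recorded while the heat-sink temperature is varied (illustrative data).
    delta_T = np.array([0.0, 1.0, 2.0, 3.0, 4.0])               # K, heater minus sink
    power   = np.array([0.020, 0.023, 0.026, 0.029, 0.032])     # W supplied to the heater

    # With the heater held near ambient, the parasitic loss is approximately constant,
    # so P = P_loss + (k * A / L) * dT and a linear fit separates the two terms.
    slope, p_loss = np.polyfit(delta_T, power, 1)

    # Hypothetical sample geometry: cross-section A and thickness L.
    A = 4.0e-6   # m^2
    L = 2.0e-3   # m
    k = slope * L / A
    print(f"parasitic loss ~ {p_loss*1e3:.1f} mW, thermal conductivity ~ {k:.2f} W/(m K)")
    ```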

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eken, T; Mayeda, K; Hofstetter, A

    A recently developed coda magnitude methodology was applied to selected broadband stations in Turkey for the purpose of testing the coda method in a large, laterally complex region. As found in other, albeit smaller, regions, coda envelope amplitude measurements are significantly less variable than distance-corrected direct wave measurements (i.e., Lg and surface waves), by roughly a factor of 3 to 4. Despite strong lateral crustal heterogeneity in Turkey, we found that the region could be adequately modeled assuming a simple 1-D, radially symmetric path correction for 10 narrow frequency bands ranging between 0.02 and 2.0 Hz. For higher frequencies, however, 2-D path corrections will be necessary and will be the subject of a future study. After calibrating the stations ISP, ISKB, and MALT for local and regional distances, single-station moment-magnitude estimates (Mw) derived from the coda spectra were in excellent agreement with those determined from multi-station waveform modeling inversions of long-period data, exhibiting a data standard deviation of 0.17. Though the calibration was validated using large events, the results of the calibration will extend Mw estimates to significantly smaller events which could not otherwise be waveform modeled due to poor signal-to-noise ratio at long periods and sparse station coverage. The successful application of the method is remarkable considering the significant lateral complexity in Turkey and the simple assumptions used in the coda method.

  16. Application of Fluorescence Spectrometry With Multivariate Calibration to the Enantiomeric Recognition of Fluoxetine in Pharmaceutical Preparations.

    PubMed

    Poláček, Roman; Májek, Pavel; Hroboňová, Katarína; Sádecká, Jana

    2016-04-01

    Fluoxetine is the most prescribed antidepressant chiral drug worldwide, and its enantiomers have a different duration of serotonin inhibition. A novel, simple, and rapid method for determining the enantiomeric composition of fluoxetine in pharmaceutical pills is presented. Emission, excitation, and synchronous fluorescence techniques were employed to obtain the spectral data, which were analyzed with multivariate calibration methods, namely principal component regression (PCR) and partial least squares (PLS). The chiral recognition of fluoxetine enantiomers in the presence of β-cyclodextrin was based on the formation of diastereomeric complexes. The results of the multivariate calibration modeling indicated good prediction abilities. The results obtained for tablets were compared with those from chiral HPLC, and no significant differences were shown by Fisher's (F) test and Student's t-test. The smallest residuals between reference or nominal values and predicted values were achieved by multivariate calibration of the synchronous fluorescence spectral data. This conclusion is supported by the calculated figures of merit.

  17. In situ calibration of position detection in an optical trap for active microrheology in viscous materials

    PubMed Central

    Staunton, Jack R.; Blehm, Ben; Devine, Alexus; Tanner, Kandice

    2017-01-01

    In optical trapping, accurate determination of forces requires calibration of the position sensitivity relating displacements to the detector readout via the V-nm conversion factor (β). Inaccuracies in measured trap stiffness (k) and dependent calculations of forces and material properties occur if β is assumed to be constant in optically heterogeneous materials such as tissue, necessitating calibration at each probe. For solid-like samples in which probes are securely positioned, calibration can be achieved by moving the sample with a nanopositioning stage and stepping the probe through the detection beam. However, this method may be applied to samples only under select circumstances. Here, we introduce a simple method to find β in any material by steering the detection laser beam while the probe is trapped. We demonstrate the approach in the yolk of living Danio rerio (zebrafish) embryos and measure the viscoelastic properties over an order of magnitude of stress-strain amplitude. PMID:29519028

  18. Linear regression analysis and its application to multivariate chromatographic calibration for the quantitative analysis of two-component mixtures.

    PubMed

    Dinç, Erdal; Ozdemir, Abdil

    2005-01-01

    A multivariate chromatographic calibration technique was developed for the quantitative analysis of binary mixtures of enalapril maleate (EA) and hydrochlorothiazide (HCT) in tablets in the presence of losartan potassium (LST). The mathematical algorithm of the multivariate chromatographic calibration technique is based on linear regression equations constructed from the relationship between concentration and peak area at a five-wavelength set. The algorithm of this calibration model, which has a simple mathematical content, is briefly described. The approach is a powerful mathematical tool for optimal chromatographic multivariate calibration and for eliminating fluctuations arising from instrumental and experimental conditions; it reduces the multivariate linear regression functions to a univariate data set. The model was validated by analyzing various synthetic binary mixtures and by using the standard addition technique. The developed calibration technique was applied to the analysis of real pharmaceutical tablets containing EA and HCT. The obtained results were compared with those obtained by a classical HPLC method, and the proposed multivariate chromatographic calibration was observed to give better results than classical HPLC.

  19. Evaluation of an empirical monitor output estimation in carbon ion radiotherapy.

    PubMed

    Matsumura, Akihiko; Yusa, Ken; Kanai, Tatsuaki; Mizota, Manabu; Ohno, Tatsuya; Nakano, Takashi

    2015-09-01

    A conventional broad-beam method is applied to carbon ion radiotherapy at Gunma University Heavy Ion Medical Center. In this method, accelerated carbon ions are scattered by various beam-line devices to form a 3D dose distribution. The physical dose per monitor unit (d/MU) at the isocenter therefore depends on the beam-line parameters and must be calibrated by measurement in clinical practice. This study aims to develop a calculation algorithm for d/MU using beam-line parameters. Two major factors, the range shifter dependence and the field aperture effect, are measured with a PinPoint chamber in a water phantom, in a setup identical to that used for monitor calibration in clinical practice. An empirical monitor calibration method based on the measurement results is developed using a simple algorithm in which a linear function expresses the range shifter dependence and a double-Gaussian pencil-beam distribution expresses the field aperture effect. The range shifter dependence and the field aperture effect are evaluated to have errors of 0.2% and 0.5%, respectively. The proposed method estimates d/MU to within 1% of the measurement results. Taking the measurement deviation of about 0.3% into account, this result is sufficiently accurate for clinical applications. An empirical procedure to estimate d/MU with a simple algorithm is thus established; it frees beam time for more treatments, quality assurance, and other research endeavors.
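
    The structure of the empirical model described above (a reference d/MU scaled by a linear range-shifter factor and a field-aperture factor from a double-Gaussian pencil-beam kernel) can be sketched as below; all coefficients and the exact functional details are placeholders, since the abstract does not give them.

    ```python
    import numpy as np

    def range_shifter_factor(thickness_mm, a=1.0, b=-0.004):
        """Linear dependence of d/MU on range-shifter thickness (coefficients are placeholders)."""
        return a + b * thickness_mm

    def field_aperture_factor(radius_mm, w=0.8, sigma1=3.0, sigma2=15.0):
        """Fraction of a double-Gaussian pencil-beam dose kernel integrated over a
        circular aperture of the given radius (weights and sigmas are placeholders)."""
        g = lambda s: 1.0 - np.exp(-radius_mm**2 / (2.0 * s**2))
        return w * g(sigma1) + (1.0 - w) * g(sigma2)

    def d_per_mu(d_ref, rs_thickness_mm, field_radius_mm):
        """Empirical d/MU estimate at the isocenter from beam-line parameters."""
        return d_ref * range_shifter_factor(rs_thickness_mm) * field_aperture_factor(field_radius_mm)

    print(d_per_mu(d_ref=1.00, rs_thickness_mm=30.0, field_radius_mm=50.0))
    ```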

  20. Efficient quantification of water content in edible oils by headspace gas chromatography with vapour phase calibration.

    PubMed

    Xie, Wei-Qi; Gong, Yi-Xian; Yu, Kong-Xian

    2018-06-01

    An automated and accurate headspace gas chromatographic (HS-GC) technique was investigated for rapidly quantifying the water content in edible oils. In this method, multiple headspace extraction (MHE) procedures were used to determine the integrated water content of the edible oil sample. A simple vapour-phase calibration technique with an external vapour standard was used to calibrate both the water content in the gas phase and the total weight of water in the edible oil sample, after which the water in edible oils can be quantified. The data showed that the relative standard deviation of the present HS-GC method in the precision test was less than 1.13%, and the relative differences between the new method and a reference method (the oven-drying method) were no more than 1.62%. The present HS-GC method is automated, accurate, and efficient, and can be a reliable tool for quantifying water content in edible-oil-related products and research. © 2017 Society of Chemical Industry.
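
    A hedged sketch of the MHE bookkeeping is given below, assuming the usual exponential decay of peak areas across consecutive extractions and an external vapour standard that converts total area into micrograms of water; the areas and response factor are invented for illustration.

    ```python
    import numpy as np

    # Peak areas from consecutive headspace extractions of one oil sample (illustrative).
    areas = np.array([1520.0, 1010.0, 672.0, 447.0])
    n = np.arange(len(areas))

    # MHE assumption: areas decay exponentially, A_i = A_1 * exp(-k * i). Fit k and
    # extrapolate the total area that exhaustive extraction would give.
    slope, ln_a1 = np.polyfit(n, np.log(areas), 1)
    k = -slope
    total_area = np.exp(ln_a1) / (1.0 - np.exp(-k))

    # External vapour-phase standard: area per microgram of water vapour injected
    # under identical HS-GC conditions (placeholder response factor).
    area_per_ug_water = 12.0
    water_ug = total_area / area_per_ug_water

    sample_mass_g = 2.000
    print(f"water content ~ {water_ug / (sample_mass_g * 1e4):.3f} % (w/w)")
    ```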

  1. The Use of Partial Least Square Regression and Spectral Data in UV-Visible Region for Quantification of Adulteration in Indonesian Palm Civet Coffee

    PubMed Central

    Yulia, Meinilwita

    2017-01-01

    Asian palm civet coffee, or kopi luwak (Indonesian for coffee and palm civet), is well known as the world's priciest and rarest coffee. To protect the authenticity of luwak coffee and protect consumers from adulteration, it is very important to develop a robust and simple method for determining the adulteration of luwak coffee. In this research, UV-Visible spectra combined with PLSR were evaluated to establish a rapid and simple method for quantifying adulteration in luwak-arabica coffee blends. Several preprocessing methods were tested; most of the preprocessed spectra were effective in improving the quality of the calibration models, with the best PLS calibration model obtained for Savitzky-Golay smoothed spectra, which had the lowest RMSECV (0.039) and the highest RPDcal value (4.64). Using this PLS model, the luwak content was predicted with satisfactory performance, with high RPDp and RER values. PMID:28913348

  2. Inverse models: A necessary next step in ground-water modeling

    USGS Publications Warehouse

    Poeter, E.P.; Hill, M.C.

    1997-01-01

    Inverse models using, for example, nonlinear least-squares regression, provide capabilities that help modelers take full advantage of the insight available from ground-water models. However, lack of information about the requirements and benefits of inverse models is an obstacle to their widespread use. This paper presents a simple ground-water flow problem to illustrate the requirements and benefits of the nonlinear least-squares regression method of inverse modeling and discusses how these attributes apply to field problems. The benefits of inverse modeling include: (1) expedited determination of best-fit parameter values; (2) quantification of (a) the quality of calibration, (b) data shortcomings and needs, and (c) confidence limits on parameter estimates and predictions; and (3) identification of issues that are easily overlooked during nonautomated calibration.

  3. Comparative study between derivative spectrophotometry and multivariate calibration as analytical tools applied for the simultaneous quantitation of Amlodipine, Valsartan and Hydrochlorothiazide

    NASA Astrophysics Data System (ADS)

    Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.

    2013-09-01

    Four simple, accurate and specific methods were developed and validated for the simultaneous estimation of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in commercial tablets. The derivative spectrophotometric methods include Derivative Ratio Zero Crossing (DRZC) and Double Divisor Ratio Spectra-Derivative Spectrophotometry (DDRS-DS) methods, while the multivariate calibrations used are Principal Component Regression (PCR) and Partial Least Squares (PLSs). The proposed methods were applied successfully in the determination of the drugs in laboratory-prepared mixtures and in commercial pharmaceutical preparations. The validity of the proposed methods was assessed using the standard addition technique. The linearity of the proposed methods is investigated in the range of 2-32, 4-44 and 2-20 μg/mL for AML, VAL and HCT, respectively.

  4. An accurate online calibration system based on combined clamp-shape coil for high voltage electronic current transformers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Zhen-hua; Li, Hong-bin; Zhang, Zhi

    Electronic transformers are widely used in power systems because of their wide bandwidth and good transient performance. However, as an emerging technology, the failure rate of electronic transformers is higher than that of traditional transformers, so the calibration period needs to be shortened. Traditional calibration methods require that the power of the transmission line be cut off, which complicates operation and incurs outage losses. This paper proposes an online calibration system that can calibrate electronic current transformers without a power outage. In this work, the high-accuracy standard current transformer and the online operation method are the key techniques. Based on a clamp-shape iron-core coil and a clamp-shape air-core coil, a combined clamp-shape coil is designed as the standard current transformer. By analyzing the output characteristics of the two coils, the combined clamp-shape coil can verify its own accuracy, so the accuracy of the online calibration system can be guaranteed. Moreover, by employing the earth-potential working method and using two insulating rods to connect the combined clamp-shape coil to the high-voltage bus, the operation becomes simple and safe. Tests at the China National Center for High Voltage Measurement and field experiments show that the proposed system achieves an accuracy of 0.05 class.

  5. Methods to produce calibration mixtures for anesthetic gas monitors and how to perform volumetric calculations on anesthetic gases.

    PubMed

    Christensen, P L; Nielsen, J; Kann, T

    1992-10-01

    A simple procedure for making calibration mixtures of oxygen and the anesthetic gases isoflurane, enflurane, and halothane is described. One to ten grams of the anesthetic substance is evaporated in a closed, 11,361-cc glass bottle filled with oxygen gas at atmospheric pressure. The carefully mixed gas is used to calibrate anesthetic gas monitors. By comparison of calculated and measured volumetric results it is shown that at atmospheric conditions the volumetric behavior of anesthetic gas mixtures can be described with reasonable accuracy using the ideal gas law. A procedure is described for calculating the deviation from ideal gas behavior in cases in which this is needed.
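
    The volumetric calculation the abstract refers to can be illustrated with the ideal gas law, as in the sketch below; the anesthetic, its mass, and the temperature are placeholders, and the closed-bottle mole-fraction treatment is an assumption about how the mixture concentration is expressed.

    ```python
    R = 8.314            # J/(mol K)
    T = 293.15           # K, assumed room temperature
    P = 101325.0         # Pa, atmospheric pressure
    V_BOTTLE_L = 11.361  # bottle volume from the abstract, in litres

    def anesthetic_volume_percent(mass_g, molar_mass_g_mol):
        """Volume (mole) percent of anesthetic vapour after evaporating mass_g grams
        in the oxygen-filled bottle, treating both gases as ideal."""
        n_anesthetic = mass_g / molar_mass_g_mol
        n_oxygen = P * (V_BOTTLE_L / 1000.0) / (R * T)   # moles of O2 initially in the bottle
        return 100.0 * n_anesthetic / (n_oxygen + n_anesthetic)

    # Example: 5 g of isoflurane (molar mass ~184.5 g/mol) gives roughly a 5% mixture.
    print(anesthetic_volume_percent(5.0, 184.5))
    ```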

  6. Detector-unit-dependent calibration for polychromatic projections of rock core CT.

    PubMed

    Li, Mengfei; Zhao, Yunsong; Zhang, Peng

    2017-01-01

    Computed tomography (CT) plays an important role in digital rock analysis, a promising new technique for the oil and gas industry, but artifacts in CT images affect the accuracy of the digital rock model. In this study, we propose and demonstrate a novel method to recover detector-unit-dependent functions for polychromatic projection calibration by scanning simple-shaped reference samples. As long as the attenuation coefficients of the reference samples are similar to those of the scanned object, their size and position need not be exactly known. Both simulated and real data were used to verify the proposed method. The results showed that the new method reduces both beam-hardening artifacts and ring artifacts effectively, and the method appears to be quite robust.

  7. Air sampling with solid phase microextraction

    NASA Astrophysics Data System (ADS)

    Martos, Perry Anthony

    There is an increasing need for simple yet accurate air sampling methods. The acceptance of new air sampling methods requires compatibility with conventional chromatographic equipment, and the new methods have to be environmentally friendly and simple to use, yet with equal or better detection limits, accuracy, and precision than standard methods. Solid phase microextraction (SPME) satisfies these conditions. Analyte detection limits, accuracy, and precision of analysis with SPME are typically better than with conventional air sampling methods, yet air sampling with SPME requires no pumps or solvents, is re-usable, is extremely simple to use, is completely compatible with current chromatographic equipment, and requires a small capital investment. The first SPME fiber coating used in this study was poly(dimethylsiloxane) (PDMS), a hydrophobic liquid film, used to sample a large range of airborne hydrocarbons such as benzene and octane. Quantification without an external calibration procedure is possible with this coating. The physical and chemical properties of this coating are well understood and are quite similar to those of the siloxane stationary phase used in capillary columns. The log of the analyte distribution coefficients for PDMS is linearly related to chromatographic retention indices and to the inverse of temperature. Therefore, the chromatogram from the analysis of the PDMS air sampler itself yields the calibration parameters that are used to quantify unknown airborne analyte concentrations (ppbv to ppmv range). The second fiber coating used in this study was PDMS/divinylbenzene (PDMS/DVB), onto which o-(2,3,4,5,6-pentafluorobenzyl)hydroxylamine (PFBHA) was adsorbed for the on-fiber derivatization of gaseous formaldehyde (ppbv range), with and without external calibration. The oxime formed from the reaction can be detected with conventional gas chromatographic detectors. Typical grab sampling times were as small as 5 seconds. With 300 seconds of sampling, the formaldehyde detection limit was 2.1 ppbv, better than any other 5-minute sampling device for formaldehyde. The first-order rate constant for product formation was used to quantify formaldehyde concentrations without a calibration curve. This spot sampler was used to sample the headspace of hair gel, particle board, plant material, and coffee grounds for formaldehyde and other carbonyl compounds, with extremely promising results. The SPME sampling devices were also used for time-weighted average sampling (30 minutes to 16 hours). Finally, the four new SPME air sampling methods were field tested with side-by-side comparisons to standard air sampling methods, demonstrating the considerable utility of SPME as an air sampler.

  8. Measuring water and sediment discharge from a road plot with a settling basin and tipping bucket

    Treesearch

    Thomas A. Black; Charles H. Luce

    2013-01-01

    A simple empirical method quantifies water and sediment production from a forest road surface, and is well suited for calibration and validation of road sediment models. To apply this quantitative method, the hydrologic technician installs bordered plots on existing typical road segments and measures coarse sediment production in a settling tank. When a tipping bucket...

  9. A comparative uncertainty study of the calibration of macrolide antibiotic reference standards using quantitative nuclear magnetic resonance and mass balance methods.

    PubMed

    Liu, Shu-Yu; Hu, Chang-Qin

    2007-10-17

    This study introduces a general method of quantitative nuclear magnetic resonance (qNMR) for the calibration of reference standards of macrolide antibiotics. Several qNMR experimental conditions were optimized, including the delay time, an important parameter for quantification. Three kinds of macrolide antibiotics were used to validate the accuracy of the qNMR method by comparison with results obtained by the high-performance liquid chromatography (HPLC) method. The purities of five common reference standards of macrolide antibiotics were measured by the 1H qNMR method and the mass balance method, and the analysis results of the two methods were compared. qNMR is quick and simple to use, and in the research and development of a new medicine it provides a new and reliable method for purity analysis of the reference standard.
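
    Quantitative 1H NMR purity assignment rests on a standard relationship between integrals, proton counts, molar masses, and weighed masses; a generic sketch is below, with all numerical values (including the choice of internal standard) purely illustrative rather than taken from this study.

    ```python
    def qnmr_purity(i_sample, i_std, n_sample, n_std, m_molar_sample, m_molar_std,
                    mass_sample, mass_std, purity_std):
        """Purity of an analyte from 1H qNMR against an internal standard, using the
        standard relationship between integrals, proton counts, molar masses and weights."""
        return (i_sample / i_std) * (n_std / n_sample) \
             * (m_molar_sample / m_molar_std) * (mass_std / mass_sample) * purity_std

    # Illustrative numbers only: a macrolide (molar mass ~733.9 g/mol, 1-proton signal)
    # against a hypothetical maleic acid internal standard (116.1 g/mol, 2 protons).
    print(qnmr_purity(i_sample=0.376, i_std=1.00, n_sample=1, n_std=2,
                      m_molar_sample=733.9, m_molar_std=116.1,
                      mass_sample=50.0, mass_std=10.0, purity_std=0.999))
    ```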

  10. Probe-Specific Procedure to Estimate Sensitivity and Detection Limits for 19F Magnetic Resonance Imaging.

    PubMed

    Taylor, Alexander J; Granwehr, Josef; Lesbats, Clémentine; Krupa, James L; Six, Joseph S; Pavlovskaya, Galina E; Thomas, Neil R; Auer, Dorothee P; Meersmann, Thomas; Faas, Henryk M

    2016-01-01

    Due to the low fluorine background signal in vivo, 19F is a good marker for studying the fate of exogenous molecules by magnetic resonance imaging (MRI) using equilibrium nuclear spin polarization schemes. Since 19F MRI applications require high sensitivity, it can be important to assess experimental feasibility already at the design stage by estimating the minimum detectable fluorine concentration. Here we propose a simple method for the calibration of MRI hardware, providing sensitivity estimates for a given scanner and coil configuration. An experimental "calibration factor" that accounts for variations in coil configuration and hardware set-up is specified. Once it has been determined in a calibration experiment, the sensitivity of an experiment or, alternatively, the minimum number of required spins or the minimum marker concentration can be estimated without the need for a pilot experiment. The definition of this calibration factor is derived from standard equations for the sensitivity in magnetic resonance, yet the method is not restricted by the limited validity of these equations, since additional instrument-dependent factors are implicitly included during calibration. The method is demonstrated using MR spectroscopy and imaging experiments with different 19F samples, both paramagnetically and susceptibility broadened, to approximate a range of realistic environments.

  11. A novel 360-degree shape measurement using a simple setup with two mirrors and a laser MEMS scanner

    NASA Astrophysics Data System (ADS)

    Jin, Rui; Zhou, Xiang; Yang, Tao; Li, Dong; Wang, Chao

    2017-09-01

    360-degree shape measurement technology plays an important role in the field of three-dimensional optical metrology. Traditional optical 360-degree shape measurement methods are mainly of two kinds: the first places multiple scanners to achieve 360-degree measurements; the second uses a high-precision rotating device to obtain the 360-degree shape model. The former increases the number of scanners and is costly, while the latter's rotating device makes measurement time-consuming. This paper presents a low-cost and fast optical 360-degree shape measurement method that is fully static, fast, and inexpensive. The measuring system consists of two mirrors set at a certain angle, a laser projection system, a stereoscopic calibration block, and two cameras. Most importantly, the laser MEMS scanner achieves precise movement of the laser stripes without any movement mechanism, improving measurement accuracy and efficiency. Moreover, a novel stereo calibration technique presented in this paper achieves point-cloud registration, from which the 360-degree model of an object is obtained. A stereoscopic calibration block with special coded patterns on six sides is used in this stereo calibration method, through which the 360-degree models of objects can be obtained quickly.

  12. Development and Testing of a Simple Calibration Technique for Long-Term Hydrological Impact Assessment (L-THIA) Model

    NASA Astrophysics Data System (ADS)

    Muthukrishnan, S.; Harbor, J.

    2001-12-01

    Hydrological studies are a significant part of engineering and development projects, as well as of geological studies conducted to assess and understand the interactions between hydrology and the environment. Such studies are generally conducted both before a project begins and after it is completed, so that a comprehensive analysis can be made of the project's impact on the local and regional hydrology of the area. A good understanding of the chain of relationships that form the hydro-eco-biological and environmental cycle can be of immense help in maintaining the natural balance as we work toward exploration and exploitation of natural resources and urbanization of undeveloped land. Rainfall-runoff modeling techniques have been of great use here for decades, since they provide fast and efficient means of analyzing the vast amounts of data that are gathered. Though process-based, detailed models are better than simple models, the latter are used more often because of their simplicity, ease of use, and the ready availability of the data needed to run them. The Curve Number (CN) method developed by the United States Department of Agriculture (USDA) is one of the most widely used hydrologic modeling tools in the US and has earned worldwide acceptance as a practical method for evaluating the effects of land use changes on the hydrology of an area. The Long-Term Hydrological Impact Assessment (L-THIA) model is a basic, CN-based, user-oriented model that has gained popularity among watershed planners because of its reliance on readily available data, because it is easy to use (http://www.ecn.purdue.edu/runoff), and because it produces results geared to the general information needs of planners. The L-THIA model was initially developed to study the relative long-term hydrologic impacts of different land use (past/current/future) scenarios, and it has been successful in meeting this goal. However, one weakness of L-THIA, as of other models that focus strictly on surface runoff, is that many users are interested in predictions of runoff that match observations of flow in streams and rivers. To make L-THIA more useful to planners and engineers alike, a simple long-term calibration method based on linear regression of L-THIA-predicted and observed surface runoff has been developed and tested here. The results from Little Eagle Creek (LEC) in Indiana show that such calibrations are successful and valuable. This method can also be used to calibrate other simple rainfall-runoff models.
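
    A minimal sketch of a CN-based runoff estimate followed by the kind of linear-regression calibration described above is given below; the curve number, rainfall, and observed-runoff values are invented, and the regression form is only an assumption about how L-THIA output would be adjusted.

    ```python
    import numpy as np

    def scs_runoff_mm(p_mm, cn):
        """SCS Curve Number runoff (depths in mm): Q = (P - 0.2S)^2 / (P + 0.8S)."""
        s = 25400.0 / cn - 254.0             # potential maximum retention, mm
        ia = 0.2 * s                         # initial abstraction
        return np.where(p_mm > ia, (p_mm - ia) ** 2 / (p_mm + 0.8 * s), 0.0)

    # Long-term calibration in the spirit of the approach described above: regress
    # observed runoff on CN-predicted runoff and use the fit to correct predictions
    # (rainfall and flow values below are illustrative).
    daily_rain = np.array([0.0, 12.0, 30.0, 5.0, 48.0, 22.0])
    predicted = scs_runoff_mm(daily_rain, cn=78.0)
    observed  = np.array([0.0, 0.9, 6.5, 0.0, 16.0, 3.8])

    slope, intercept = np.polyfit(predicted, observed, 1)
    calibrated = slope * scs_runoff_mm(np.array([35.0]), cn=78.0) + intercept
    print(calibrated)
    ```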

  13. Analysis of Natural Toxins by Liquid Chromatography-Chemiluminescence Nitrogen Detection and Application to the Preparation of Certified Reference Materials.

    PubMed

    Thomas, Krista; Wechsler, Dominik; Chen, Yi-Min; Crain, Sheila; Quilliam, Michael A

    2016-09-01

    The implementation of instrumental analytical methods such as LC-MS for routine monitoring of toxins requires the availability of accurate calibration standards. This is a challenge because many toxins are rare, expensive, dangerous to handle, and/or unstable, and simple gravimetric procedures are not reliable for establishing accurate concentrations in solution. NMR has served as one method of qualitative and quantitative characterization of toxin calibration solution Certified Reference Materials (CRMs). LC with chemiluminescence N detection (LC-CLND) was selected as a complementary method for comprehensive characterization of CRMs because it provides a molar response to N. Here we report on our investigation of LC-CLND as a method suitable for quantitative analysis of nitrogenous toxins. It was demonstrated that a wide range of toxins could be analyzed quantitatively by LC-CLND. Furthermore, equimolar responses among diverse structures were established and it was shown that a single high-purity standard such as caffeine could be used for instrument calibration. The limit of detection was approximately 0.6 ng N. Measurement of several of Canada's National Research Council toxin CRMs with caffeine as the calibrant showed precision averaging 2% RSD and accuracy ranging from 97 to 102%. Application of LC-CLND to the production of calibration solution CRMs and the establishment of traceability of measurement results are presented.

  14. The Use of the Time Average Visibility for Analyzing HERA-19 Commissioning Data

    NASA Astrophysics Data System (ADS)

    Gallardo, Samavarti; Benefo, Roshan; La Plante, Paul; Aguirre, James; HERA Collaboration

    2018-01-01

    The Hydrogen Epoch of Reionization Array (HERA) is a radio telescope that will observe large-scale structure throughout the epoch of cosmic reionization. This will allow us to characterize the evolution of the 21 cm power spectrum in order to constrain the timing and morphology of reionization, the properties of the first galaxies, the evolution of large-scale structure, and the early sources of heating. We develop a simple and robust observable for the HERA-19 commissioning data, the Time Average Visibility (TAV). We compare both redundantly and absolutely calibrated visibilities to detailed instrument simulations and to analytical expectations, and explore the signal present in the TAV. The TAV has already been demonstrated as a method to reject poorly performing antennas, and with this work it may be improved to allow a simple cross-check of the calibration solutions without imaging.

  15. Analysis of potential migrants from plastic materials in milk by liquid chromatography-mass spectrometry with liquid-liquid extraction and low-temperature purification.

    PubMed

    Bodai, Zsolt; Szabó, Bálint Sámuel; Novák, Márton; Hámori, Susanne; Nyiri, Zoltán; Rikker, Tamás; Eke, Zsuzsanna

    2014-10-15

    A simple and fast analytical method was developed for the determination of six UV stabilizers (Cyasorb UV-1164, Tinuvin P, Tinuvin 234, Tinuvin 326, Tinuvin 327, and Tinuvin 1577) and five antioxidants (Irgafos 168, Irganox 1010, Irganox 3114, Irganox 3790, and Irganox 565) in milk. For sample preparation liquid-liquid extraction with low-temperature purification combined with centrifugation was used to remove fats, proteins, and sugars. After the cleanup step, the sample was analyzed with high-performance liquid chromatography-tandem mass spectrometry (LC-MS/MS). External standard and matrix calibrations were tested. External calibration proved to be acceptable for Tinuvin P, Tinuvin 234, Tinuvin 326, Tinuvin 327, Irganox 3114, and Irganox 3790. The method was successfully validated with matrix calibration for all compounds. Method detection limits were between 0.25 and 10 μg/kg. Accuracies ranged from 93 to 109%, and intraday precisions were <13%.

  16. Generation of CsI cluster ions for mass calibration in matrix-assisted laser desorption/ionization mass spectrometry.

    PubMed

    Lou, Xianwen; van Dongen, Joost L J; Meijer, E W

    2010-07-01

    A simple method was developed for the generation of cesium iodide (CsI) cluster ions up to m/z over 20,000 in matrix-assisted laser desorption/ionization mass spectrometry (MALDI MS). Calibration ions in both positive and negative ion modes can readily be generated from a single MALDI spot of CsI(3) with 2-[(2E)-3-(4-tert-butylphenyl)-2-methylprop-2-enylidene] malononitrile (DCTB) matrix. The major cluster ion series observed in the positive ion mode is [(CsI)(n)Cs](+), and in the negative ion mode is [(CsI)(n)I](-). In both cluster series, ions spread evenly every 259.81 units. The easy method described here for the production of CsI cluster ions should be useful for MALDI MS calibrations. Copyright 2010 American Society for Mass Spectrometry. Published by Elsevier Inc. All rights reserved.
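
    Because the cluster series are arithmetic in the CsI unit, the calibration masses can be generated directly; the short sketch below does so for both polarities, using standard monoisotopic masses (electron mass neglected).

    ```python
    CS = 132.905   # monoisotopic mass of Cs, u
    I  = 126.904   # monoisotopic mass of I, u

    def positive_cluster_mz(n):
        """m/z of [(CsI)nCs]+ calibration ions (singly charged)."""
        return n * (CS + I) + CS

    def negative_cluster_mz(n):
        """m/z of [(CsI)nI]- calibration ions (singly charged)."""
        return n * (CS + I) + I

    # Successive clusters are spaced by one CsI unit, ~259.81 u, as noted above.
    for n in (1, 10, 50):
        print(n, round(positive_cluster_mz(n), 2), round(negative_cluster_mz(n), 2))
    ```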

  17. Lineal energy calibration of mini tissue-equivalent gas-proportional counters (TEPC)

    NASA Astrophysics Data System (ADS)

    Conte, V.; Moro, D.; Grosswendt, B.; Colautti, P.

    2013-07-01

    Mini TEPCs are cylindrical gas proportional counters with a sensitive volume diameter of 1 mm or less. The lineal energy calibration of these tiny counters can be performed with an external gamma-ray source; however, to do that, a method to obtain a simple and precise spectral mark must first be found, together with the keV/μm value of this mark. A precise method (less than 1% uncertainty) to identify this mark is described here, and the lineal energy value of the mark has been measured for different simulated site sizes by using a 137Cs gamma source and a cylindrical TEPC equipped with a precision internal 244Cm alpha-particle source and filled with a propane-based tissue-equivalent gas mixture. Mini TEPCs can thus be calibrated in terms of lineal energy, by exposing them to 137Cs sources, with an overall uncertainty of about 5%.

  18. A technique for verifying the input response function of neutron time-of-flight scintillation detectors using cosmic rays.

    PubMed

    Bonura, M A; Ruiz, C L; Fehl, D L; Cooper, G W; Chandler, G; Hahn, K D; Nelson, A J; Styron, J D; Torres, J A

    2014-11-01

    An accurate interpretation of DD or DT fusion neutron time-of-flight (nTOF) signals from current-mode detectors employed at the Z facility at Sandia National Laboratories requires that the instrument response functions (IRFs) be deconvolved from the measured nTOF signals. A calibration facility that produces detectable sub-ns radiation pulses is typically used to measure the IRF of such detectors. This work, however, reports on a simple method that uses cosmic radiation to measure the IRF of nTOF detectors operated in pulse-counting mode. The characterizing metrics reported here are the throughput delay and the full width at half maximum. This simple approach yields IRF results consistent with those obtained for the same detectors in 2007 at a LINAC bremsstrahlung accelerator (Idaho State University). In particular, the IRF metrics from these two approaches, and their dependence on the photomultiplier bias, agree to within a few percent. This information may thus be used to verify whether the IRF of a given nTOF detector employed at Z has changed since its original current-mode calibration and warrants re-measurement.

  19. Estimation of the quantification uncertainty from flow injection and liquid chromatography transient signals in inductively coupled plasma mass spectrometry

    NASA Astrophysics Data System (ADS)

    Laborda, Francisco; Medrano, Jesús; Castillo, Juan R.

    2004-06-01

    The quality of the quantitative results obtained from transient signals in high-performance liquid chromatography-inductively coupled plasma mass spectrometry (HPLC-ICPMS) and flow injection-inductively coupled plasma mass spectrometry (FI-ICPMS) was investigated under multielement conditions. Quantification methods were based on multiple-point calibration by simple and weighted linear regression, and on double-point calibration (measurement of the baseline and one standard). An uncertainty model, which includes the main sources of uncertainty in FI-ICPMS and HPLC-ICPMS (signal measurement, sample flow rate, and injection volume), was developed to estimate peak area uncertainties and the statistical weights used in weighted linear regression. The behaviour of the ICPMS instrument was characterized so that it could be considered in the model; it was concluded that the instrument works as a concentration detector when used to monitor transient signals from flow injection or chromatographic separations. Proper quantification by the three calibration methods was achieved when compared to reference materials, and the double-point calibration gave results of the same quality as the multiple-point calibration while shortening the calibration time. Relative expanded uncertainties ranged from 10-20% for concentrations around the LOQ to 5% for concentrations higher than 100 times the LOQ.

  20. Investigations in Marine Chemistry: Salinity II.

    ERIC Educational Resources Information Center

    Schlenker, Richard M.

    Presented is a science activity in which the student investigates methods of calibration of a simple conductivity meter via a hands-on inquiry technique. Conductivity is mathematically compared to salinity using a point slope formula and graphical techniques. Sample solutions of unknown salinity are provided so that the students can sharpen their…

  1. Calibration of an electronic counter and pulse height analyzer for plotting erythrocyte volume spectra.

    DOT National Transportation Integrated Search

    1963-03-01

    A simple technique is presented for calibrating an electronic system used in the plotting of erythrocyte volume spectra. The calibration factors, once obtained, apparently remain applicable for some time. Precise estimates of calibration factors appe...

  2. Hidden Connections between Regression Models of Strain-Gage Balance Calibration Data

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert

    2013-01-01

    Hidden connections between regression models of wind tunnel strain-gage balance calibration data are investigated. These connections become visible whenever balance calibration data is supplied in its design format and both the Iterative and Non-Iterative Method are used to process the data. First, it is shown how the regression coefficients of the fitted balance loads of a force balance can be approximated by using the corresponding regression coefficients of the fitted strain-gage outputs. Then, data from the manual calibration of the Ames MK40 six-component force balance is chosen to illustrate how estimates of the regression coefficients of the fitted balance loads can be obtained from the regression coefficients of the fitted strain-gage outputs. The study illustrates that load predictions obtained by applying the Iterative or the Non-Iterative Method originate from two related regression solutions of the balance calibration data as long as balance loads are given in the design format of the balance, gage outputs behave highly linear, strict statistical quality metrics are used to assess regression models of the data, and regression model term combinations of the fitted loads and gage outputs can be obtained by a simple variable exchange.

  3. Temporal dynamics of sand dune bidirectional reflectance characteristics for absolute radiometric calibration of optical remote sensing data

    NASA Astrophysics Data System (ADS)

    Coburn, Craig A.; Logie, Gordon S. J.

    2018-01-01

    Attempts to use pseudoinvariant calibration sites (PICS) for establishing absolute radiometric calibration of Earth observation (EO) satellites requires high-quality information about the nature of the bidirectional reflectance distribution function (BRDF) of the surfaces used for these calibrations. Past studies have shown that the PICS method is useful for evaluating the trend of sensors over time or for the intercalibration of sensors. The PICS method was not considered until recently for deriving absolute radiometric calibration. This paper presents BRDF data collected by a high-performance portable goniometer system to develop a temporal BRDF model for the Algodones Dunes in California. By sampling the BRDF of the sand surface at similar solar zenith angles to those normally encountered by EO satellites, additional information on the changing nature of the surface can improve models used to provide absolute radiometric correction. The results demonstrated that the BRDF of a reasonably simple sand surface was complex with changes in anisotropy taking place in response to changing solar zenith angles. For the majority of observation and illumination angles, the spectral reflectance anisotropy observed varied between 1% and 5% in patterns that repeat around solar noon.

  4. SU-F-E-19: A Novel Method for TrueBeam Jaw Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corns, R; Zhao, Y; Huang, V

    2016-06-15

    Purpose: A simple jaw calibration method is proposed for Varian TrueBeam using an EPID-encoder combination that gives accurate field sizes and a homogeneous junction dose. This benefits clinical applications such as mono-isocentric half-beam block breast cancer or head and neck cancer treatment with junction/field matching. Methods: We use the EPID imager with a pixel size of 0.392 mm × 0.392 mm to determine the radiation jaw position as measured from radio-opaque markers aligned with the crosshair. We acquire two images with different symmetric field sizes and record each individual jaw's encoder values. A linear relationship between each jaw's position and its encoder value is established, from which we predict the encoder values that produce the jaw positions required by TrueBeam's calibration procedure. During TrueBeam's jaw calibration procedure, we move the jaw with the pendant to set the jaw into position using the predicted encoder value. The overall accuracy is under 0.1 mm. Results: Our in-house software analyses the images and provides sub-pixel accuracy in determining the field centre and radiation edges (50% dose of the profile). We verified that the TrueBeam encoder provides a reliable linear relationship for each individual jaw position (R² > 0.9999), from which the encoder values necessary to set the jaw calibration points (1 cm and 19 cm) are predicted. Junction-matching dose inhomogeneities were improved from >±20% to <±6% using this new calibration protocol. However, one technical challenge exists for junction matching if the collimator walkout is large. Conclusion: Our new TrueBeam jaw calibration method can systematically calibrate the jaws to the crosshair within sub-pixel accuracy and provides both good junction doses and field sizes. This method does not compensate for a larger collimator walkout, but can be used as the underlying foundation for addressing the walkout issue.
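
    A sketch of the encoder prediction step is given below: two EPID-measured jaw positions and their recorded encoder values define a straight line, which is then evaluated at the 1 cm and 19 cm calibration points. The numerical values and units are placeholders.

    ```python
    import numpy as np

    # Two symmetric fields imaged on the EPID give, for one jaw, the measured jaw
    # position (mm from the crosshair, via sub-pixel edge detection) and the encoder
    # readout recorded for each field (values below are illustrative).
    measured_position_mm = np.array([50.0, 150.0])
    encoder_counts       = np.array([20480.0, 61440.0])

    # The abstract reports a highly linear position-encoder relationship, so a
    # straight-line fit through the two points predicts the encoder values that
    # place the jaw at the calibration points required by the TrueBeam procedure.
    slope, intercept = np.polyfit(measured_position_mm, encoder_counts, 1)

    for target_mm in (10.0, 190.0):     # 1 cm and 19 cm calibration points
        print(target_mm, slope * target_mm + intercept)
    ```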

  5. Comparative study between derivative spectrophotometry and multivariate calibration as analytical tools applied for the simultaneous quantitation of Amlodipine, Valsartan and Hydrochlorothiazide.

    PubMed

    Darwish, Hany W; Hassan, Said A; Salem, Maissa Y; El-Zeany, Badr A

    2013-09-01

    Four simple, accurate and specific methods were developed and validated for the simultaneous estimation of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in commercial tablets. The derivative spectrophotometric methods include the Derivative Ratio Zero Crossing (DRZC) and Double Divisor Ratio Spectra-Derivative Spectrophotometry (DDRS-DS) methods, while the multivariate calibrations used are Principal Component Regression (PCR) and Partial Least Squares (PLS). The proposed methods were applied successfully to the determination of the drugs in laboratory-prepared mixtures and in commercial pharmaceutical preparations. The validity of the proposed methods was assessed using the standard addition technique. The linearity of the proposed methods was investigated in the ranges of 2-32, 4-44 and 2-20 μg/mL for AML, VAL and HCT, respectively. Copyright © 2013 Elsevier B.V. All rights reserved.
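
    As a rough illustration of how the two multivariate calibrations behave, the sketch below fits PCR and PLS models to simulated ternary-mixture spectra. The spectral profiles, noise level and mixture design are invented for illustration and do not reproduce the authors' data.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    wavelengths = np.linspace(200, 400, 201)

    def band(center, width):
        # Invented Gaussian absorption band.
        return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

    # Stand-in spectral profiles for the three analytes (AML, VAL, HCT).
    profiles = np.vstack([band(230, 15), band(280, 20), band(320, 12)])

    # Simulated training mixtures, concentrations inside the reported linearity ranges.
    C = rng.uniform([2, 4, 2], [32, 44, 20], size=(60, 3))
    X = C @ profiles + rng.normal(0, 0.01, size=(60, wavelengths.size))

    pcr = make_pipeline(PCA(n_components=3), LinearRegression()).fit(X, C)
    pls = PLSRegression(n_components=3).fit(X, C)

    x_new = np.array([[10.0, 20.0, 8.0]]) @ profiles   # a "laboratory-prepared mixture"
    print("PCR estimate:", pcr.predict(x_new).round(2))
    print("PLS estimate:", pls.predict(x_new).round(2))
    ```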

  6. High-precision method of binocular camera calibration with a distortion model.

    PubMed

    Li, Weimin; Shan, Siyu; Liu, Hui

    2017-03-10

    A high-precision camera calibration method for a binocular stereo vision system based on a multi-view template and alternative bundle adjustment is presented in this paper. The proposed method is carried out by taking several photos, in different orientations, of a specially designed calibration template that has diverse encoded points. The method utilizes an existing monocular camera calibration algorithm to obtain the initialization, which involves a camera model including radial and tangential lens distortion. We created a reference coordinate system based on the left camera coordinates to optimize the intrinsic parameters of the left camera through alternative bundle adjustment and obtain optimal values. Then, optimal intrinsic parameters of the right camera can be obtained through alternative bundle adjustment when we create a reference coordinate system based on the right camera coordinates. We also used all the acquired intrinsic parameters to optimize the extrinsic parameters. Thus, the optimal lens distortion parameters and intrinsic and extrinsic parameters were obtained. Synthetic and real data were used to test the method. The simulation results demonstrate that the maximum mean absolute relative calibration errors are about 3.5e-6 and 1.2e-6 for the focal length and the principal point, respectively, under zero-mean Gaussian noise with 0.05 pixels standard deviation. The real result shows that the reprojection error of our model is about 0.045 pixels with a relative standard deviation of 1.0e-6 over the intrinsic parameters. The proposed method is convenient, cost-efficient, highly precise, and simple to carry out.
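
    For readers who want to experiment with a comparable pipeline, the sketch below uses OpenCV's stock monocular and stereo calibration routines rather than the paper's multi-view template and alternative bundle adjustment; the checkerboard pattern, square size and image paths are assumptions.

    ```python
    import glob
    import cv2
    import numpy as np

    pattern = (9, 6)      # inner-corner grid of an assumed checkerboard
    square = 0.025        # assumed square size in metres
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

    obj_pts, left_pts, right_pts = [], [], []
    for lf, rf in zip(sorted(glob.glob("left/*.png")), sorted(glob.glob("right/*.png"))):
        gl = cv2.imread(lf, cv2.IMREAD_GRAYSCALE)
        gr = cv2.imread(rf, cv2.IMREAD_GRAYSCALE)
        okl, cl = cv2.findChessboardCorners(gl, pattern)
        okr, cr = cv2.findChessboardCorners(gr, pattern)
        if okl and okr:
            obj_pts.append(objp); left_pts.append(cl); right_pts.append(cr)

    # Monocular calibration gives initial intrinsics plus radial/tangential distortion,
    # mirroring the initialization step described in the abstract.
    _, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, gl.shape[::-1], None, None)
    _, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, gr.shape[::-1], None, None)

    # Joint refinement of the extrinsics (R, T) between the two cameras.
    err, K1, d1, K2, d2, R, T, _, _ = cv2.stereoCalibrate(
        obj_pts, left_pts, right_pts, K1, d1, K2, d2, gl.shape[::-1],
        flags=cv2.CALIB_USE_INTRINSIC_GUESS)
    print("stereo reprojection error (px):", err)
    ```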

  7. Simple and accurate quantification of BTEX in ambient air by SPME and GC-MS.

    PubMed

    Baimatova, Nassiba; Kenessov, Bulat; Koziel, Jacek A; Carlsen, Lars; Bektassov, Marat; Demyanenko, Olga P

    2016-07-01

    Benzene, toluene, ethylbenzene and xylenes (BTEX) comprise one of the most ubiquitous and hazardous groups of ambient air pollutants of concern. Application of standard analytical methods for quantification of BTEX is limited by the complexity of sampling and sample preparation equipment, and budget requirements. Methods based on SPME represent a simpler alternative, but still require complex calibration procedures. The objective of this research was to develop a simpler, low-budget, and accurate method for quantification of BTEX in ambient air based on SPME and GC-MS. Standard 20-mL headspace vials were used for field air sampling and calibration. To avoid challenges with obtaining and working with 'zero' air, slope factors of external standard calibration were determined using standard addition and inherently polluted lab air. For the polydimethylsiloxane (PDMS) fiber, differences between the slope factors of calibration plots obtained using lab and outdoor air were below 14%. The PDMS fiber provided higher precision during calibration, while the use of the Carboxen/PDMS fiber resulted in lower detection limits for benzene and toluene. To provide sufficient accuracy, the use of 20-mL vials requires triplicate sampling and analysis. The method was successfully applied to the analysis of 108 ambient air samples from Almaty, Kazakhstan. Average concentrations of benzene, toluene, ethylbenzene and o-xylene were 53, 57, 11 and 14 µg m⁻³, respectively. The developed method can be modified for further quantification of a wider range of volatile organic compounds in air. In addition, the new method is amenable to automation. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. FAST Model Calibration and Validation of the OC5-DeepCwind Floating Offshore Wind System Against Wave Tank Test Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendt, Fabian F; Robertson, Amy N; Jonkman, Jason

    During the course of the Offshore Code Comparison Collaboration, Continued, with Correlation (OC5) project, which focused on the validation of numerical methods through comparison against tank test data, the authors created a numerical FAST model of the 1:50-scale DeepCwind semisubmersible system that was tested at the Maritime Research Institute Netherlands ocean basin in 2013. This paper discusses several model calibration studies that were conducted to identify model adjustments that improve the agreement between the numerical simulations and the experimental test data. These calibration studies cover wind-field-specific parameters (coherence, turbulence), hydrodynamic and aerodynamic modeling approaches, as well as rotor model (blade-pitch and blade-mass imbalances) and tower model (structural tower damping coefficient) adjustments. These calibration studies were conducted based on relatively simple calibration load cases (wave only/wind only). The agreement between the final FAST model and experimental measurements is then assessed based on more-complex combined wind and wave validation cases.

  9. Evaluating fossil calibrations for dating phylogenies in light of rates of molecular evolution: a comparison of three approaches.

    PubMed

    Lukoschek, Vimoksalehi; Scott Keogh, J; Avise, John C

    2012-01-01

    Evolutionary and biogeographic studies increasingly rely on calibrated molecular clocks to date key events. Although there has been significant recent progress in development of the techniques used for molecular dating, many issues remain. In particular, controversies abound over the appropriate use and placement of fossils for calibrating molecular clocks. Several methods have been proposed for evaluating candidate fossils; however, few studies have compared the results obtained by different approaches. Moreover, no previous study has incorporated the effects of nucleotide saturation from different data types in the evaluation of candidate fossils. In order to address these issues, we compared three approaches for evaluating fossil calibrations: the single-fossil cross-validation method of Near, Meylan, and Shaffer (2005. Assessing concordance of fossil calibration points in molecular clock studies: an example using turtles. Am. Nat. 165:137-146), the empirical fossil coverage method of Marshall (2008. A simple method for bracketing absolute divergence times on molecular phylogenies using multiple fossil calibration points. Am. Nat. 171:726-742), and the Bayesian multicalibration method of Sanders and Lee (2007. Evaluating molecular clock calibrations using Bayesian analyses with soft and hard bounds. Biol. Lett. 3:275-279) and explicitly incorporate the effects of data type (nuclear vs. mitochondrial DNA) for identifying the most reliable or congruent fossil calibrations. We used advanced (Caenophidian) snakes as a case study; however, our results are applicable to any taxonomic group with multiple candidate fossils, provided appropriate taxon sampling and sufficient molecular sequence data are available. We found that data type strongly influenced which fossil calibrations were identified as outliers, regardless of which method was used. Despite the use of complex partitioned models of sequence evolution and multiple calibrations throughout the tree, saturation severely compressed basal branch lengths obtained from mitochondrial DNA compared with nuclear DNA. The effects of mitochondrial saturation were not ameliorated by analyzing a combined nuclear and mitochondrial data set. Although removing the third codon positions from the mitochondrial coding regions did not ameliorate saturation effects in the single-fossil cross-validations, it did in the Bayesian multicalibration analyses. Saturation significantly influenced the fossils that were selected as most reliable for all three methods evaluated. Our findings highlight the need to critically evaluate the fossils selected by data with different rates of nucleotide substitution and how data with different evolutionary rates affect the results of each method for evaluating fossils. Our empirical evaluation demonstrates that the advantages of using multiple independent fossil calibrations significantly outweigh any disadvantages.

  10. A fast and simple spectrofluorometric method for the determination of alendronate sodium in pharmaceuticals

    PubMed Central

    Ezzati Nazhad Dolatabadi, Jafar; Hamishehkar, Hamed; de la Guardia, Miguel; Valizadeh, Hadi

    2014-01-01

    Introduction: Alendronate sodium enhances bone formation, increases osteoblast proliferation and maturation, and leads to the inhibition of osteoblast apoptosis. Therefore, a rapid and simple spectrofluorometric method has been developed and validated for its quantitative determination. Methods: The procedure is based on the reaction of the primary amino group of alendronate with o-phthalaldehyde (OPA) in sodium hydroxide solution. Results: The calibration graph was linear over the concentration range of 0.0-2.4 μM, and the limit of detection and limit of quantification of the method were 8.89 and 29 nM, respectively. The enthalpy and entropy of the reaction between alendronate sodium and OPA showed that the reaction is endothermic and entropy favored (ΔH = 154.08 kJ/mol; ΔS = 567.36 J/mol K), which indicates that the OPA-alendronate interaction increases at elevated temperature. Conclusion: This simple method can be used as a practical technique for the analysis of alendronate in various samples. PMID:24790897
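
    The detection and quantification limits quoted above are the kind of figures that drop out of a linear calibration fit. A minimal sketch, assuming an invented fluorescence calibration series and the common 3.3σ/slope and 10σ/slope estimates:

    ```python
    import numpy as np

    # Hypothetical fluorescence readings over the reported 0.0-2.4 uM range.
    conc = np.array([0.0, 0.4, 0.8, 1.2, 1.6, 2.0, 2.4])                    # uM
    signal = np.array([2.1, 151.0, 302.5, 449.8, 601.3, 748.9, 903.2])      # a.u. (invented)

    # Ordinary least-squares line: signal = slope * conc + intercept.
    slope, intercept = np.polyfit(conc, signal, 1)
    residuals = signal - (slope * conc + intercept)
    sigma = residuals.std(ddof=2)       # standard error of the regression

    lod = 3.3 * sigma / slope           # limit of detection (uM)
    loq = 10.0 * sigma / slope          # limit of quantification (uM)
    print(f"slope = {slope:.1f} a.u./uM, LOD = {lod * 1e3:.1f} nM, LOQ = {loq * 1e3:.1f} nM")
    ```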

  11. A candidate reference method for serum potassium measurement by inductively coupled plasma mass spectrometry.

    PubMed

    Yan, Ying; Han, Bingqing; Zeng, Jie; Zhou, Weiyan; Zhang, Tianjiao; Zhang, Jiangtao; Chen, Wenxiang; Zhang, Chuanbao

    2017-08-28

    Potassium is an important serum ion that is frequently assayed in clinical laboratories. Quality assurance requires reference methods; thus, the establishment of a candidate reference method for serum potassium measurements is important. An inductively coupled plasma mass spectrometry (ICP-MS) method was developed. Serum samples were gravimetrically spiked with an aluminum internal standard, digested with 69% ultrapure nitric acid, and diluted to the required concentration. The 39K/27Al ratios were measured by ICP-MS in hydrogen mode. The method was calibrated using 5% nitric acid matrix calibrators, and the calibration function was established using the bracketing method. The correlation coefficients between the measured 39K/27Al ratios and the analyte concentration ratios were >0.9999. The coefficients of variation were 0.40%, 0.68%, and 0.22% for the three serum samples, and the analytical recovery was 99.8%. The accuracy of the measurement was also verified by measuring certified reference materials, SRM909b and SRM956b. Comparison with the routine ion-selective-electrode method and international inter-laboratory comparisons gave satisfactory results. The new ICP-MS method is specific, precise, simple, and low-cost, and it may be used as a candidate reference method for standardizing serum potassium measurements.

  12. Alignment of angular velocity sensors for a vestibular prosthesis.

    PubMed

    Digiovanna, Jack; Carpaneto, Jacopo; Micera, Silvestro; Merfeld, Daniel M

    2012-02-13

    Vestibular prosthetics transmit angular velocities to the nervous system via electrical stimulation. Head-fixed gyroscopes measure angular motion, but the gyroscope coordinate system will not be coincident with the sensory organs the prosthetic replaces. Here we show a simple calibration method to align gyroscope measurements with the anatomical coordinate system. We benchmarked the method with simulated movements and obtained proof of concept with one healthy subject. The method was robust to misalignment, required little data and needed only minimal processing.
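
    The abstract does not spell out the alignment algorithm. One standard way to recover a fixed rotation between two coordinate frames from paired vector measurements is the SVD (Kabsch) solution of Wahba's problem; the sketch below is a minimal illustration on simulated angular velocity data, not the authors' procedure.

    ```python
    import numpy as np

    def rotation_from_vectors(v_src, v_dst):
        """Least-squares rotation R such that v_dst ~ R @ v_src (SVD/Kabsch solution)."""
        H = v_src.T @ v_dst                         # 3x3 cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        return Vt.T @ D @ U.T

    rng = np.random.default_rng(1)

    # Simulated misalignment: 20 degrees about a skew axis (Rodrigues formula).
    angle = np.deg2rad(20.0)
    axis = np.array([0.2, 0.5, 0.84]); axis /= np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]], [axis[2], 0, -axis[0]], [-axis[1], axis[0], 0]])
    R_true = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

    w_anat = rng.normal(size=(50, 3))                                     # anatomical frame
    w_gyro = (R_true.T @ w_anat.T).T + rng.normal(0, 0.02, size=(50, 3))  # noisy gyroscope

    R_est = rotation_from_vectors(w_gyro, w_anat)
    cos_err = np.clip((np.trace(R_est @ R_true.T) - 1) / 2, -1.0, 1.0)
    print(f"residual alignment error: {np.rad2deg(np.arccos(cos_err)):.2f} deg")
    ```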

  13. eSIP: A Novel Solution-Based Sectioned Image Property Approach for Microscope Calibration

    PubMed Central

    Butzlaff, Malte; Weigel, Arwed; Ponimaskin, Evgeni; Zeug, Andre

    2015-01-01

    Fluorescence confocal microscopy represents one of the central tools in modern sciences. Correspondingly, a growing amount of research relies on the development of novel microscopic methods. During the last decade numerous microscopic approaches were developed for the investigation of various scientific questions. Thereby, the former qualitative imaging methods were replaced by advanced quantitative methods to gain more and more information from a given sample. However, modern microscope systems, complex as they are, require very precise and appropriate calibration routines, in particular when quantitative measurements are to be compared over longer time scales or between different setups. Multispectral beads with sub-resolution size are often used to describe the point spread function and thus the optical properties of the microscope. More recently, a fluorescent layer was utilized to describe the axial profile for each pixel, which allows a spatially resolved characterization. However, fabrication of a thin fluorescent layer with matching refractive index is not yet technically solved. Therefore, we propose a novel type of calibration concept for sectioned image property (SIP) measurements which is based on fluorescent solution and makes the calibration concept available to a broader number of users. Compared to the previous approach, additional information can be obtained by application of this extended SIP chart approach, including penetration depth, detected number of photons, and illumination profile shape. Furthermore, due to the fit of the complete profile, our method is less susceptible to noise. Generally, the extended SIP approach represents a simple and highly reproducible method, allowing setup-independent calibration and alignment procedures, which is mandatory for advanced quantitative microscopy. PMID:26244982

  14. Note: A simple image processing based fiducial auto-alignment method for sample registration.

    PubMed

    Robertson, Wesley D; Porto, Lucas R; Ip, Candice J X; Nantel, Megan K T; Tellkamp, Friedjof; Lu, Yinfei; Miller, R J Dwayne

    2015-08-01

    A simple method for the location and auto-alignment of sample fiducials for sample registration using widely available MATLAB/LabVIEW software is demonstrated. The method is robust, easily implemented, and applicable to a wide variety of experiment types for improved reproducibility and increased setup speed. The software uses image processing to locate and measure the diameter and center point of circular fiducials for distance self-calibration and iterative alignment and can be used with most imaging systems. The method is demonstrated to be fast and reliable in locating and aligning sample fiducials, provided here by a nanofabricated array, with accuracy within the optical resolution of the imaging system. The software was further demonstrated to register, load, and sample the dynamically wetted array.

  15. Gas chromatography-electron ionization-mass spectrometry quantitation of valproic acid and gabapentin, using dried plasma spots, for therapeutic drug monitoring in in-home medical care.

    PubMed

    Ikeda, Kayo; Ikawa, Kazuro; Yokoshige, Satoko; Yoshikawa, Satoshi; Morikawa, Norifumi

    2014-12-01

    A simple and sensitive gas chromatography-electron ionization-mass spectrometry (GC-EI-MS) method using dried plasma spot testing cards was developed for determination of valproic acid and gabapentin concentrations in human plasma from patients receiving in-home medical care. We have proposed that a simple, easy and dry sampling method is suitable for in-home medical patients for therapeutic drug monitoring. Therefore, in the present study, we used recently developed commercially available easy handling cards: Whatman FTA DMPK-A and Bond Elut DMS. In-home medical care patients can collect plasma using these simple kits. The spots of plasma on the cards were extracted into methanol and then evaporated to dryness. The residues were trimethylsilylated using N-methyl-N-trimethylsilyltrifluoroacetamide. For GC-EI-MS analysis, the calibration curves on both cards were linear from 10 to 200 µg/mL for valproic acid, and from 0.5 to 10 µg/mL for gabapentin. Intra- and interday precisions in plasma were both ≤13.0% (coefficient of variation), and the accuracy was between 87.9 and 112% for both cards within the calibration curves. The limits of quantification were 10 µg/mL for valproic acid and 0.5 µg/mL for gabapentin on both cards. We believe that the present method will be useful for in-home medical care. Copyright © 2014 John Wiley & Sons, Ltd.

  16. Measuring the Density of a Sugar Solution: A General Chemistry Experiment Using a Student-Prepared Unknown

    ERIC Educational Resources Information Center

    Peterson, Karen I.

    2008-01-01

    The experiment developed in this article addresses the concept of equipment calibration for reducing systematic error. It also suggests simple student-prepared sucrose solutions for which accurate densities are known, but not readily available to students. Densities are measured with simple glassware that has been calibrated using the density of…

  17. 'Click' analytics for 'click' chemistry - A simple method for calibration-free evaluation of online NMR spectra

    NASA Astrophysics Data System (ADS)

    Michalik-Onichimowska, Aleksandra; Kern, Simon; Riedel, Jens; Panne, Ulrich; King, Rudibert; Maiwald, Michael

    2017-04-01

    Driven mostly by the search for chemical syntheses under biocompatible conditions, so-called "click" chemistry rapidly became a growing field of research. The resulting simple one-pot reactions have so far only scarcely been accompanied by adequate optimization via comparably straightforward and robust analysis techniques with short set-up times. Here, we report on a fast and reliable calibration-free online NMR monitoring approach for technical mixtures. It combines a versatile fluidic system, continuous-flow measurement of 1H spectra with a time interval of 20 s per spectrum, and a robust, fully automated algorithm to interpret the obtained data. As a proof-of-concept, the thiol-ene coupling between N-boc cysteine methyl ester and allyl alcohol was conducted in a variety of non-deuterated solvents while its time-resolved behaviour was characterized with step tracer experiments. Overlapping signals in online spectra during thiol-ene coupling could be deconvoluted with a spectral model using indirect hard modeling and were subsequently converted to either molar ratios (using a calibration-free approach) or absolute concentrations (using 1-point calibration). For various solvents the kinetic constant k for the pseudo-first-order reaction was estimated to be 3.9 h⁻¹ at 25 °C. The obtained results were compared with direct integration of non-overlapping signals and showed good agreement with the implemented mass balance.
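
    The rate constant quoted above comes from fitting a pseudo-first-order decay to time-resolved concentrations. A minimal sketch of such a fit, using a simulated concentration trace sampled every 20 s rather than the authors' NMR-derived data:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(2)

    # Simulated reactant concentration trace (mol/L), one point every 20 s,
    # decaying with an assumed k of 3.9 1/h plus small noise.
    t = np.arange(0.0, 1.0, 20.0 / 3600.0)              # hours
    c0_true, k_true = 0.50, 3.9
    c_obs = c0_true * np.exp(-k_true * t) + rng.normal(0, 0.005, t.size)

    # Pseudo-first-order model c(t) = c0 * exp(-k t); fit c0 and k.
    def model(t, c0, k):
        return c0 * np.exp(-k * t)

    (c0_fit, k_fit), _ = curve_fit(model, t, c_obs, p0=(0.4, 1.0))
    print(f"estimated k = {k_fit:.2f} 1/h (simulated truth {k_true})")
    ```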

  18. Small Scale Mass Flow Plug Calibration

    NASA Technical Reports Server (NTRS)

    Sasson, Jonathan

    2015-01-01

    A simple control volume model has been developed to calculate the discharge coefficient through a mass flow plug (MFP) and validated with a calibration experiment. The maximum error of the model in the operating region of the MFP is 0.54%. The model uses the MFP geometry and operating pressure and temperature to couple continuity, momentum, energy, an equation of state, and wall shear. Effects of boundary layer growth and the reduction in cross-sectional flow area are calculated using an integral method. A CFD calibration is shown to be of lower accuracy, with a maximum error of 1.35%, and slower by a factor of 100. Effects of total pressure distortion are taken into account in the experiment. Distortion creates a loss in flow rate and can be characterized by two different distortion descriptors.
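
    The control-volume model itself is not reproduced in the abstract, but the quantity it targets is easy to state: the discharge coefficient is the measured mass flow divided by the ideal one-dimensional isentropic (choked) flow through the plug's minimum area. A hedged sketch with invented geometry and conditions:

    ```python
    import numpy as np

    def ideal_choked_mdot(area_m2, p0_pa, t0_k, gamma=1.4, R=287.05):
        """Ideal 1-D isentropic choked mass flow through the minimum flow area (kg/s)."""
        term = (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
        return area_m2 * p0_pa * np.sqrt(gamma / (R * t0_k)) * term

    # Invented operating point for a small mass flow plug.
    area = 2.0e-3            # m^2, minimum flow area at the plug setting
    p0, t0 = 150e3, 295.0    # Pa and K, measured total pressure and temperature
    mdot_measured = 0.70     # kg/s, from the calibration facility

    cd = mdot_measured / ideal_choked_mdot(area, p0, t0)
    print(f"discharge coefficient Cd = {cd:.3f}")
    ```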

  19. Radiometric Cross-Calibration of GAOFEN-1 Wfv Cameras with LANDSAT-8 Oli and Modis Sensors Based on Radiation and Geometry Matching

    NASA Astrophysics Data System (ADS)

    Li, J.; Wu, Z.; Wei, X.; Zhang, Y.; Feng, F.; Guo, F.

    2018-04-01

    Cross-calibration has the advantages of high precision, low resource requirements and simple implementation, and it has been widely used in recent years. The four wide-field-of-view (WFV) cameras on board the Gaofen-1 satellite provide high spatial resolution and wide combined coverage (4 × 200 km) without onboard calibration. In this paper, the four-band radiometric cross-calibration coefficients of the WFV1 camera were obtained based on radiation and geometry matching, taking the Landsat 8 OLI (Operational Land Imager) sensor as reference. The Scale Invariant Feature Transform (SIFT) feature detection method and a distance and included-angle weighting method were introduced to correct misregistration of the WFV-OLI image pairs. A radiative transfer model was used to eliminate differences between the OLI sensor and the WFV1 camera through the spectral match factor (SMF). The near-infrared band of the WFV1 camera encompasses water vapor absorption bands, thus a look-up table (LUT) of SMF as a function of water vapor amount was established to estimate the water vapor effects. A surface synchronization experiment was designed to verify the reliability of the cross-calibration coefficients, which appear to perform better than the official coefficients published by the China Centre for Resources Satellite Data and Application (CCRSDA).

  20. Wind Tunnel Strain-Gage Balance Calibration Data Analysis Using a Weighted Least Squares Approach

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; Volden, T.

    2017-01-01

    A new approach is presented that uses a weighted least squares fit to analyze wind tunnel strain-gage balance calibration data. The weighted least squares fit is specifically designed to increase the influence of single-component loadings during the regression analysis. The weighted least squares fit also reduces the impact of calibration load schedule asymmetries on the predicted primary sensitivities of the balance gages. A weighting factor between zero and one is assigned to each calibration data point that depends on a simple count of its intentionally loaded load components or gages. The greater the number of a data point's intentionally loaded load components or gages is, the smaller its weighting factor becomes. The proposed approach is applicable to both the Iterative and Non-Iterative Methods that are used for the analysis of strain-gage balance calibration data in the aerospace testing community. The Iterative Method uses a reasonable estimate of the tare corrected load set as input for the determination of the weighting factors. The Non-Iterative Method, on the other hand, uses gage output differences relative to the natural zeros as input for the determination of the weighting factors. Machine calibration data of a six-component force balance is used to illustrate benefits of the proposed weighted least squares fit. In addition, a detailed derivation of the PRESS residuals associated with a weighted least squares fit is given in the appendices of the paper as this information could not be found in the literature. These PRESS residuals may be needed to evaluate the predictive capabilities of the final regression models that result from a weighted least squares fit of the balance calibration data.
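
    The weighting idea translates directly into a standard weighted least-squares solve. The sketch below uses invented two-component loadings and an illustrative weighting rule (weight = 1 divided by the number of intentionally loaded components); the paper derives its own factors and regression models.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Invented balance data: gage output as a linear function of two load components.
    loads = rng.uniform(-1.0, 1.0, size=(200, 2))
    n_loaded = (np.abs(loads) > 0.05).sum(axis=1)        # intentionally loaded components
    outputs = loads @ np.array([2.0, 0.3]) + rng.normal(0, 0.01, 200)

    # Weight between zero and one that shrinks as more components are loaded at once.
    w = 1.0 / n_loaded.clip(min=1)

    # Weighted least squares: solve (X^T W X) b = X^T W y.
    X = np.column_stack([loads, np.ones(len(loads))])
    W = np.diag(w)
    coef = np.linalg.solve(X.T @ W @ X, X.T @ W @ outputs)
    print("fitted sensitivities (weighted):", coef[:2].round(3))
    ```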

  1. On the calibration of continuous, high-precision delta18O and delta2H measurements using an off-axis integrated cavity output spectrometer.

    PubMed

    Wang, Lixin; Caylor, Kelly K; Dragoni, Danilo

    2009-02-01

    The ¹⁸O and ²H of water vapor serve as powerful tracers of hydrological processes. The typical method for determining water vapor δ¹⁸O and δ²H involves cryogenic trapping and isotope ratio mass spectrometry. Even with recent technical advances, these methods cannot resolve vapor composition at high temporal resolutions. In recent years, a few groups have developed continuous laser absorption spectroscopy (LAS) approaches for measuring δ¹⁸O and δ²H which achieve accuracy levels similar to those of lab-based mass spectrometry methods. Unfortunately, most LAS systems need cryogenic cooling and constant calibration to a reference gas, and have substantial power requirements, making them unsuitable for long-term field deployment at remote field sites. A new method called Off-Axis Integrated Cavity Output Spectroscopy (OA-ICOS) has been developed which requires extremely low energy consumption and neither reference gas nor cryogenic cooling. In this report, we develop a relatively simple pumping system coupled to a dew point generator to calibrate an ICOS-based instrument (Los Gatos Research Water Vapor Isotope Analyzer (WVIA) DLT-100) under various pressures using liquid water with known isotopic signatures. Results show that the WVIA can be successfully calibrated using this customized system for different pressure settings, which ensures that this instrument can be combined with other gas-sampling systems. The precision of this instrument and the associated calibration method can reach approximately 0.08‰ for δ¹⁸O and approximately 0.4‰ for δ²H. Compared with conventional mass spectrometry and other LAS-based methods, the OA-ICOS technique provides a promising alternative tool for continuous water vapor isotopic measurements in field deployments. Copyright 2009 John Wiley & Sons, Ltd.

  2. Simultaneous determination of V, Ni and Fe in fuel fly ash using solid sampling high resolution continuum source graphite furnace atomic absorption spectrometry.

    PubMed

    Cárdenas Valdivia, A; Vereda Alonso, E; López Guerrero, M M; Gonzalez-Rodriguez, J; Cano Pavón, J M; García de Torres, A

    2018-03-01

    A green and simple method is proposed in this work for the simultaneous determination of V, Ni and Fe in fuel ash samples by solid sampling high resolution continuum source graphite furnace atomic absorption spectrometry (SS HR CS GFAAS). The application of fast programs in combination with direct solid sampling allows pretreatment steps to be eliminated, involving minimal manipulation of the sample. Iridium-treated platforms were applied throughout the present study, enabling the use of aqueous standards for calibration. Correlation coefficients for the calibration curves were typically better than 0.9931. The concentrations found in the fuel ash samples analysed ranged from 0.66% to 4.2% for V, 0.23-0.7% for Ni and 0.10-0.60% for Fe. Precisions (%RSD) were 5.2%, 10.0% and 9.8% for V, Ni and Fe, respectively, obtained as the average of the %RSD of six replicates of each fuel ash sample. The optimum conditions established were applied to the determination of the target analytes in fuel ash samples. In order to test the accuracy and applicability of the proposed method in the analysis of samples, five ash samples from the combustion of fuel in power stations were analysed. The method accuracy was evaluated by comparing the results obtained using the proposed method with the results obtained by ICP OES after acid digestion. The results showed good agreement between them. The goal of this work has been to develop a fast and simple methodology that permits the use of aqueous standards for straightforward calibration and the simultaneous determination of V, Ni and Fe in fuel ash samples by direct SS HR CS GFAAS. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Calibration of the Infrared Telescope Facility National Science Foundation Camera Jupiter Galileo Data Set

    NASA Astrophysics Data System (ADS)

    Vincent, Mark B.; Chanover, Nancy J.; Beebe, Reta F.; Huber, Lyle

    2005-10-01

    The NASA Infrared Telescope Facility (IRTF) on Mauna Kea, Hawaii, set aside some time on about 500 nights from 1995 to 2002, when the NSFCAM facility infrared camera was mounted and Jupiter was visible, for a standardized set of observations of Jupiter in support of the Galileo mission. The program included observations of Jupiter, nearby reference stars, and dome flats in five filters: narrowband filters centered at 1.58, 2.28, and 3.53 μm, and broader L' and M' bands that probe the atmosphere from the stratosphere to below the main cloud layer. The reference stars were not cross-calibrated against standards. We performed follow-up observations to calibrate these stars and Jupiter in 2003 and 2004. We present a summary of the calibration of the Galileo support monitoring program data set. We present calibrated magnitudes of the six most frequently observed stars, calibrated reflectivities, and brightness temperatures of Jupiter from 1995 to 2004, and a simple method of normalizing the Jovian brightness to the 2004 results. Our study indicates that the NSFCAM's zero-point magnitudes were not stable from 1995 to early 1997, and that the best Jovian calibration possible with this data set is limited to about +/-10%. The raw images and calibration data have been deposited in the Planetary Data System.

  4. Analysis of Flavonoid in Medicinal Plant Extract Using Infrared Spectroscopy and Chemometrics

    PubMed Central

    Retnaningtyas, Yuni; Nuri; Lukman, Hilmia

    2016-01-01

    Infrared (IR) spectroscopy combined with chemometrics has been developed for simple analysis of flavonoid in medicinal plant extracts. Flavonoid was extracted from medicinal plant leaves by ultrasonication and maceration. IR spectra of selected medicinal plant extracts were correlated with flavonoid content using chemometrics. The chemometric method used for calibration analysis was Partial Least Squares (PLS), and the methods used for classification analysis were Linear Discriminant Analysis (LDA), Soft Independent Modelling of Class Analogies (SIMCA), and Support Vector Machines (SVM). In this study, the NIR calibration model showed the best performance, with R² and RMSEC values of 0.9916499 and 2.1521897, respectively, and the accuracy of all classification models (LDA, SIMCA, and SVM) was 100%. The R² and RMSEC of the FTIR calibration model were 0.8653689 and 8.8958149, respectively, while the accuracy of LDA, SIMCA, and SVM was 86.0%, 91.2%, and 77.3%, respectively. The PLS and LDA NIR models were further used to predict unknown flavonoid content in commercial samples. Using these models, the flavonoid contents measured by NIR and UV-Vis spectrophotometry were compared with a paired-samples t-test; the two methods showed no significant difference. PMID:27529051

  5. Multi-scale soil moisture model calibration and validation: An ARS Watershed on the South Fork of the Iowa River

    USDA-ARS?s Scientific Manuscript database

    Soil moisture monitoring with in situ technology is a time consuming and costly endeavor for which a method of increasing the resolution of spatial estimates across in situ networks is necessary. Using a simple hydrologic model, the resolution of an in situ watershed network can be increased beyond...

  6. Large ensemble modeling of last deglacial retreat of the West Antarctic Ice Sheet: comparison of simple and advanced statistical techniques

    NASA Astrophysics Data System (ADS)

    Pollard, D.; Chang, W.; Haran, M.; Applegate, P.; DeConto, R.

    2015-11-01

    A 3-D hybrid ice-sheet model is applied to the last deglacial retreat of the West Antarctic Ice Sheet over the last ~ 20 000 years. A large ensemble of 625 model runs is used to calibrate the model to modern and geologic data, including reconstructed grounding lines, relative sea-level records, elevation-age data and uplift rates, with an aggregate score computed for each run that measures overall model-data misfit. Two types of statistical methods are used to analyze the large-ensemble results: simple averaging weighted by the aggregate score, and more advanced Bayesian techniques involving Gaussian process-based emulation and calibration, and Markov chain Monte Carlo. Results for best-fit parameter ranges and envelopes of equivalent sea-level rise with the simple averaging method agree quite well with the more advanced techniques, but only for a large ensemble with full factorial parameter sampling. Best-fit parameter ranges confirm earlier values expected from prior model tuning, including large basal sliding coefficients on modern ocean beds. Each run is extended 5000 years into the "future" with idealized ramped climate warming. In the majority of runs with reasonable scores, this produces grounding-line retreat deep into the West Antarctic interior, and the analysis provides sea-level-rise envelopes with well defined parametric uncertainty bounds.

  7. Sky camera geometric calibration using solar observations

    DOE PAGES

    Urquhart, Bryan; Kurtz, Ben; Kleissl, Jan

    2016-09-05

    A camera model and associated automated calibration procedure for stationary daytime sky imaging cameras is presented. The specific modeling and calibration needs are motivated by remotely deployed cameras used to forecast solar power production where cameras point skyward and use 180° fisheye lenses. Sun position in the sky and on the image plane provides a simple and automated approach to calibration; special equipment or calibration patterns are not required. Sun position in the sky is modeled using a solar position algorithm (requiring latitude, longitude, altitude and time as inputs). Sun position on the image plane is detected using a simple image processing algorithm. The performance evaluation focuses on the calibration of a camera employing a fisheye lens with an equisolid angle projection, but the camera model is general enough to treat most fixed focal length, central, dioptric camera systems with a photo objective lens. Calibration errors scale with the noise level of the sun position measurement in the image plane, but the calibration is robust across a large range of noise in the sun position. In conclusion, calibration performance on clear days ranged from 0.94 to 1.24 pixels root mean square error.
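
    The equisolid-angle projection named in the abstract has a simple closed form, r = 2 f sin(θ/2), mapping the sun's zenith angle to a radial distance from the principal point. The sketch below applies it with invented camera parameters and a sun position; the axis convention is simplified and no lens imperfections are modelled.

    ```python
    import numpy as np

    def equisolid_project(zenith_rad, azimuth_rad, f_px, cx, cy):
        """Ideal equisolid-angle fisheye projection: r = 2 f sin(theta / 2)."""
        r = 2.0 * f_px * np.sin(zenith_rad / 2.0)
        # Simplified convention: azimuth measured from the image x-axis.
        return cx + r * np.cos(azimuth_rad), cy + r * np.sin(azimuth_rad)

    # Invented camera parameters and a sun position from a solar position algorithm.
    f_px, cx, cy = 520.0, 640.0, 480.0
    sun_zenith, sun_azimuth = np.deg2rad(35.0), np.deg2rad(120.0)

    u, v = equisolid_project(sun_zenith, sun_azimuth, f_px, cx, cy)
    print(f"predicted sun image position: ({u:.1f}, {v:.1f}) px")
    ```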

  8. Accurate and automatic extrinsic calibration method for blade measurement system integrated by different optical sensors

    NASA Astrophysics Data System (ADS)

    He, Wantao; Li, Zhongwei; Zhong, Kai; Shi, Yusheng; Zhao, Can; Cheng, Xu

    2014-11-01

    Fast and precise 3D inspection systems are in great demand in modern manufacturing processes. At present, the available sensors have their own pros and cons, and no single sensor can handle complex inspection tasks accurately and effectively. The prevailing solution is integrating multiple sensors and taking advantage of their strengths. To obtain a holistic 3D profile, the data from different sensors should be registered into a coherent coordinate system. However, for complex-shaped objects with thin-wall features, such as blades, ICP registration can become unstable. Therefore, it is very important to calibrate the extrinsic parameters of each sensor in the integrated measurement system. This paper proposes an accurate and automatic extrinsic parameter calibration method for a blade measurement system integrated from different optical sensors. In this system, a fringe projection sensor (FPS) and a conoscopic holography sensor (CHS) are integrated into a multi-axis motion platform, and the sensors can be moved to any desired position on the object's surface. To simplify the calibration process, a special calibration artifact is designed according to the characteristics of the two sensors. An automatic registration procedure based on correlation and segmentation roughly aligns the artifact datasets obtained by the FPS and CHS without any manual operation or data pre-processing, and the Generalized Gauss-Markov model is then used to estimate the optimal transformation parameters. The experiments show the measurement result for a blade, where several sampled patches are merged into one point cloud, verifying the performance of the proposed method.

  9. The Detection and Quantification of Adulteration in Ground Roasted Asian Palm Civet Coffee Using Near-Infrared Spectroscopy in Tandem with Chemometrics

    NASA Astrophysics Data System (ADS)

    Suhandy, D.; Yulia, M.; Ogawa, Y.; Kondo, N.

    2018-05-01

    In the present research, an evaluation of using near-infrared (NIR) spectroscopy in tandem with full-spectrum partial least squares (FS-PLS) regression for quantification of the degree of adulteration in civet coffee was conducted. A total of 126 ground roasted coffee samples with degrees of adulteration of 0-51% were prepared. Spectral data were acquired using a NIR spectrometer equipped with an integrating sphere for diffuse reflectance measurement in the range of 1300-2500 nm. The samples were divided into two groups: a calibration set (84 samples) and a prediction set (42 samples). The calibration model was developed on the original spectra using FS-PLS regression with full cross-validation. The calibration model exhibited a determination coefficient of R² = 0.96 for calibration and R² = 0.92 for validation. The prediction resulted in a low root mean square error of prediction (RMSEP, 4.67%) and a high ratio of prediction to deviation (RPD, 3.75). In conclusion, the degree of adulteration in civet coffee was quantified successfully using NIR spectroscopy and FS-PLS regression in a non-destructive, economical, precise, and highly sensitive manner, with very simple sample preparation.
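
    The two prediction statistics quoted above are quick to reproduce once reference and predicted values are in hand. A minimal sketch with invented adulteration levels:

    ```python
    import numpy as np

    # Invented reference vs. NIR-predicted adulteration levels (%) for a prediction set.
    y_ref = np.array([0.0, 5.0, 10.0, 17.0, 25.0, 34.0, 42.0, 51.0])
    y_pred = np.array([1.2, 4.1, 11.3, 15.8, 26.9, 32.5, 44.0, 49.2])

    rmsep = np.sqrt(np.mean((y_pred - y_ref) ** 2))   # root mean square error of prediction
    rpd = y_ref.std(ddof=1) / rmsep                   # ratio of prediction to deviation
    print(f"RMSEP = {rmsep:.2f} %, RPD = {rpd:.2f}")
    ```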

  10. Updated radiometric calibration for the Landsat-5 thematic mapper reflective bands

    USGS Publications Warehouse

    Helder, D.L.; Markham, B.L.; Thome, K.J.; Barsi, J.A.; Chander, G.; Malla, R.

    2008-01-01

    The Landsat-5 Thematic Mapper (TM) has been the workhorse of the Landsat system. Launched in 1984, it continues collecting data through the time frame of this paper. Thus, it provides an invaluable link to the past history of the land features of the Earth's surface, and it becomes imperative to provide an accurate radiometric calibration of the reflective bands to the user community. Previous calibration has been based on information obtained from prelaunch, the onboard calibrator, vicarious calibration attempts, and cross-calibration with Landsat-7. Currently, additional data sources are available to improve this calibration. Specifically, improvements in vicarious calibration methods and development of the use of pseudoinvariant sites for trending provide two additional independent calibration sources. The use of these additional estimates has resulted in a consistent calibration approach that ties together all of the available calibration data sources. Results from this analysis indicate that a simple exponential model, or a constant model, may be used for all bands throughout the lifetime of Landsat-5 TM. Where previously the time constants for the exponential models were approximately one year, the updated model has significantly longer time constants in bands 1-3. In contrast, bands 4, 5, and 7 are shown to be best modeled by a constant. The models proposed in this paper indicate calibration knowledge of 5% or better early in life, decreasing to nearly 2% later in life. These models have been implemented at the U.S. Geological Survey Earth Resources Observation and Science (EROS) center and are the default calibration used for all Landsat TM data now distributed through EROS. © 2008 IEEE.

  11. FAST Model Calibration and Validation of the OC5- DeepCwind Floating Offshore Wind System Against Wave Tank Test Data: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendt, Fabian F; Robertson, Amy N; Jonkman, Jason

    During the course of the Offshore Code Comparison Collaboration, Continued, with Correlation (OC5) project, which focused on the validation of numerical methods through comparison against tank test data, the authors created a numerical FAST model of the 1:50-scale DeepCwind semisubmersible system that was tested at the Maritime Research Institute Netherlands ocean basin in 2013. This paper discusses several model calibration studies that were conducted to identify model adjustments that improve the agreement between the numerical simulations and the experimental test data. These calibration studies cover wind-field-specific parameters (coherence, turbulence), hydrodynamic and aerodynamic modeling approaches, as well as rotor model (blade-pitch and blade-mass imbalances) and tower model (structural tower damping coefficient) adjustments. These calibration studies were conducted based on relatively simple calibration load cases (wave only/wind only). The agreement between the final FAST model and experimental measurements is then assessed based on more-complex combined wind and wave validation cases.

  12. A rapid tool for determination of titanium dioxide content in white chickpea samples.

    PubMed

    Sezer, Banu; Bilge, Gonca; Berkkan, Aysel; Tamer, Ugur; Hakki Boyaci, Ismail

    2018-02-01

    Titanium dioxide (TiO2) is a widely used additive in foods. However, there is an ongoing debate in the scientific community about health concerns over TiO2. The main goal of this study is to determine TiO2 content using laser-induced breakdown spectroscopy (LIBS). To this end, different amounts of TiO2 were added to white chickpeas and analyzed using LIBS. A univariate calibration curve was obtained by following Ti emissions at 390.11 nm, and a partial least squares (PLS) calibration curve was obtained by evaluating the whole spectra. The results showed that the Ti calibration curve at 390.11 nm provides successful determination of the Ti level, with an R² of 0.985 and a limit of detection (LOD) of 33.9 ppm, while PLS gave an R² of 0.989 and an LOD of 60.9 ppm. Furthermore, commercial white chickpea samples were used to validate the method, and the validation R² values for the simple calibration and PLS were calculated as 0.989 and 0.951, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Lattice modeling and calibration with turn-by-turn orbit data

    NASA Astrophysics Data System (ADS)

    Huang, Xiaobiao; Sebek, Jim; Martin, Don

    2010-11-01

    A new method that explores turn-by-turn beam position monitor (BPM) data to calibrate lattice models of accelerators is proposed. The turn-by-turn phase space coordinates at one location of the ring are first established using data from two BPMs separated by a simple section with a known transfer matrix, such as a drift space. The phase space coordinates are then tracked with the model to predict positions at other BPMs, which can be compared to measurements. The model is adjusted to minimize the difference between the measured and predicted orbit data. BPM gains and rolls are included as fitting variables. This technique can be applied to either the entire ring or a section of it. We have tested the method experimentally on a part of the SPEAR3 ring.
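
    The first step described above, recovering phase space coordinates from two BPMs separated by a drift, reduces to simple geometry: the angle is the position difference divided by the drift length. A minimal sketch with invented turn-by-turn readings:

    ```python
    import numpy as np

    L = 3.2                                               # m, assumed drift length
    x_bpm1 = np.array([0.52, -0.31, -1.10, 0.18, 1.05])   # mm, turn-by-turn readings
    x_bpm2 = np.array([0.91, -0.78, -1.02, 0.66, 0.80])   # mm (both sets invented)

    x = x_bpm1                        # transverse position at BPM1
    xp = (x_bpm2 - x_bpm1) / L        # angle in mrad (mm divided by m)
    phase_space = np.column_stack([x, xp])
    print(phase_space)

    # These (x, x') pairs would then be tracked through the lattice model's transfer
    # matrices and compared with the measured orbits at the remaining BPMs.
    ```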

  14. Data analysis and calibration for a bulk-refractive-index-compensated surface plasmon resonance affinity sensor

    NASA Astrophysics Data System (ADS)

    Chinowsky, Timothy M.; Yee, Sinclair S.

    2002-02-01

    Surface plasmon resonance (SPR) affinity sensing, the problem of bulk refractive index (RI) interference in SPR sensing, and a sensor developed to overcome this problem are briefly reviewed. The sensor uses a design based on Texas Instruments' Spreeta SPR sensor to simultaneously measure both bulk and surface RI. The bulk RI measurement is then used to compensate the surface measurement and remove the effects of bulk RI interference. To achieve accurate compensation, robust data analysis and calibration techniques are necessary. Simple linear data analysis techniques derived from measurements of the sensor response were found to provide a versatile, low noise method for extracting measurements of bulk and surface refractive index from the raw sensor data. Automatic calibration using RI gradients was used to correct the linear estimates, enabling the sensor to produce accurate data even when the sensor has a complicated nonlinear response which varies with time. The calibration procedure is described, and the factors influencing calibration accuracy are discussed. Data analysis and calibration principles are illustrated with an experiment in which sucrose and detergent solutions are used to produce changes in bulk and surface RI, respectively.

  15. Sensitive method for characterizing liquid helium cooled preamplifier feedback resistors

    NASA Technical Reports Server (NTRS)

    Smeins, L. G.; Arentz, R. F.

    1983-01-01

    It is pointed out that the simple and traditional method of measuring resistance using an electrometer is ineffective since it is limited to a narrow and nonrepresentative range of terminal voltages. The present investigation is concerned with a resistor measurement technique which was developed to select and calibrate the Transimpedance Mode Amplifier (TIA) load resistors on the Infrared Astronomical Satellite (IRAS) for the wide variety of time- and voltage-varying signals which will be processed during the flight. The developed method has great versatility and power, and makes it possible to measure the varied and complex responses of nonideal feedback resistors to IR photo-detector currents. When employed with a stable input coupling capacitor and a narrow-band RMS voltmeter, the five input waveforms thoroughly test and calibrate all the features of interest in a load resistor and its associated TIA circuitry.

  16. Self-calibration of cone-beam CT geometry using 3D–2D image registration

    PubMed Central

    Ouadah, S; Stayman, J W; Gang, G J; Ehtiati, T; Siewerdsen, J H

    2016-01-01

    Robotic C-arms are capable of complex orbits that can increase field of view, reduce artifacts, improve image quality, and/or reduce dose; however, it can be challenging to obtain accurate, reproducible geometric calibration required for image reconstruction for such complex orbits. This work presents a method for geometric calibration for an arbitrary source-detector orbit by registering 2D projection data to a previously acquired 3D image. It also yields a method by which calibration of simple circular orbits can be improved. The registration uses a normalized gradient information similarity metric and the covariance matrix adaptation-evolution strategy optimizer for robustness against local minima and changes in image content. The resulting transformation provides a ‘self-calibration’ of system geometry. The algorithm was tested in phantom studies using both a cone-beam CT (CBCT) test-bench and a robotic C-arm (Artis Zeego, Siemens Healthcare) for circular and non-circular orbits. Self-calibration performance was evaluated in terms of the full-width at half-maximum (FWHM) of the point spread function in CBCT reconstructions, the reprojection error (RPE) of steel ball bearings placed on each phantom, and the overall quality and presence of artifacts in CBCT images. In all cases, self-calibration improved the FWHM—e.g. on the CBCT bench, FWHM = 0.86 mm for conventional calibration compared to 0.65 mm for self-calibration (p < 0.001). Similar improvements were measured in RPE—e.g. on the robotic C-arm, RPE = 0.73 mm for conventional calibration compared to 0.55 mm for self-calibration (p < 0.001). Visible improvement was evident in CBCT reconstructions using self-calibration, particularly about high-contrast, high-frequency objects (e.g. temporal bone air cells and a surgical needle). The results indicate that self-calibration can improve even upon systems with presumably accurate geometric calibration and is applicable to situations where conventional calibration is not feasible, such as complex non-circular CBCT orbits and systems with irreproducible source-detector trajectory. PMID:26961687

  17. Self-calibration of cone-beam CT geometry using 3D-2D image registration

    NASA Astrophysics Data System (ADS)

    Ouadah, S.; Stayman, J. W.; Gang, G. J.; Ehtiati, T.; Siewerdsen, J. H.

    2016-04-01

    Robotic C-arms are capable of complex orbits that can increase field of view, reduce artifacts, improve image quality, and/or reduce dose; however, it can be challenging to obtain accurate, reproducible geometric calibration required for image reconstruction for such complex orbits. This work presents a method for geometric calibration for an arbitrary source-detector orbit by registering 2D projection data to a previously acquired 3D image. It also yields a method by which calibration of simple circular orbits can be improved. The registration uses a normalized gradient information similarity metric and the covariance matrix adaptation-evolution strategy optimizer for robustness against local minima and changes in image content. The resulting transformation provides a ‘self-calibration’ of system geometry. The algorithm was tested in phantom studies using both a cone-beam CT (CBCT) test-bench and a robotic C-arm (Artis Zeego, Siemens Healthcare) for circular and non-circular orbits. Self-calibration performance was evaluated in terms of the full-width at half-maximum (FWHM) of the point spread function in CBCT reconstructions, the reprojection error (RPE) of steel ball bearings placed on each phantom, and the overall quality and presence of artifacts in CBCT images. In all cases, self-calibration improved the FWHM—e.g. on the CBCT bench, FWHM = 0.86 mm for conventional calibration compared to 0.65 mm for self-calibration (p < 0.001). Similar improvements were measured in RPE—e.g. on the robotic C-arm, RPE = 0.73 mm for conventional calibration compared to 0.55 mm for self-calibration (p < 0.001). Visible improvement was evident in CBCT reconstructions using self-calibration, particularly about high-contrast, high-frequency objects (e.g. temporal bone air cells and a surgical needle). The results indicate that self-calibration can improve even upon systems with presumably accurate geometric calibration and is applicable to situations where conventional calibration is not feasible, such as complex non-circular CBCT orbits and systems with irreproducible source-detector trajectory.

  18. A Bayesian modelling method for post-processing daily sub-seasonal to seasonal rainfall forecasts from global climate models and evaluation for 12 Australian catchments

    NASA Astrophysics Data System (ADS)

    Schepen, Andrew; Zhao, Tongtiegang; Wang, Quan J.; Robertson, David E.

    2018-03-01

    Rainfall forecasts are an integral part of hydrological forecasting systems at sub-seasonal to seasonal timescales. In seasonal forecasting, global climate models (GCMs) are now the go-to source for rainfall forecasts. For hydrological applications, however, GCM forecasts are often biased and unreliable in uncertainty spread, and calibration is therefore required before use. There are sophisticated statistical techniques for calibrating monthly and seasonal aggregations of the forecasts. However, calibration of seasonal forecasts at the daily time step typically uses very simple statistical methods or climate analogue methods. These methods generally lack the sophistication to achieve unbiased, reliable and coherent forecasts of daily amounts and seasonal accumulated totals. In this study, we propose and evaluate a Rainfall Post-Processing method for Seasonal forecasts (RPP-S), which is based on the Bayesian joint probability modelling approach for calibrating daily forecasts and the Schaake Shuffle for connecting the daily ensemble members of different lead times. We apply the method to post-process ACCESS-S forecasts for 12 perennial and ephemeral catchments across Australia and for 12 initialisation dates. RPP-S significantly reduces bias in raw forecasts and improves both skill and reliability. RPP-S forecasts are also more skilful and reliable than forecasts derived from ACCESS-S forecasts that have been post-processed using quantile mapping, especially for monthly and seasonal accumulations. Several opportunities to improve the robustness and skill of RPP-S are identified. The new RPP-S post-processed forecasts will be used in ensemble sub-seasonal to seasonal streamflow applications.
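
    The Schaake Shuffle referenced above is, at its core, a rank-reordering step: at each lead time the ensemble members are rearranged so that their ranks match those of a historical template, preserving realistic temporal sequencing. A minimal sketch with invented numbers (not the RPP-S implementation):

    ```python
    import numpy as np

    def schaake_shuffle(forecast, template):
        """Reorder ensemble members (rows) at each lead time (column) so their ranks
        follow the historical template; both arrays are (n_members, n_leads)."""
        shuffled = np.empty_like(forecast)
        for j in range(forecast.shape[1]):
            ranks = np.argsort(np.argsort(template[:, j]))    # rank of each template value
            shuffled[:, j] = np.sort(forecast[:, j])[ranks]   # sorted forecasts placed by rank
        return shuffled

    rng = np.random.default_rng(4)
    fc = rng.gamma(2.0, 3.0, size=(5, 3))     # invented calibrated daily rainfall ensemble
    hist = rng.gamma(2.0, 3.0, size=(5, 3))   # historical analogue sequences
    print(schaake_shuffle(fc, hist).round(1))
    ```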

  19. Effect of cantilever geometry on the optical lever sensitivities and thermal noise method of the atomic force microscope.

    PubMed

    Sader, John E; Lu, Jianing; Mulvaney, Paul

    2014-11-01

    Calibration of the optical lever sensitivities of atomic force microscope (AFM) cantilevers is especially important for determining the force in AFM measurements. These sensitivities depend critically on the cantilever mode used and are known to differ for static and dynamic measurements. Here, we calculate the ratio of the dynamic and static sensitivities for several common AFM cantilevers, whose shapes vary considerably, and experimentally verify these results. The dynamic-to-static optical lever sensitivity ratio is found to range from 1.09 to 1.41 for the cantilevers studied - in stark contrast to the constant value of 1.09 used widely in current calibration studies. This analysis shows that accuracy of the thermal noise method for the static spring constant is strongly dependent on cantilever geometry - neglect of these dynamic-to-static factors can induce errors exceeding 100%. We also discuss a simple experimental approach to non-invasively and simultaneously determine the dynamic and static spring constants and optical lever sensitivities of cantilevers of arbitrary shape, which is applicable to all AFM platforms that have the thermal noise method for spring constant calibration.
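
    The thermal noise method at issue rests on the equipartition relation k = kB·T/⟨q²⟩, with a geometry-dependent factor correcting for mode shape and the dynamic-to-static sensitivity ratio. The sketch below applies that relation to an invented deflection record and folds the correction into a single illustrative factor; the paper's full treatment is more detailed.

    ```python
    import numpy as np

    kB = 1.380649e-23    # J/K
    T = 295.0            # K

    # Invented thermally driven deflection record (m), already converted from
    # photodiode volts using the optical lever sensitivity.
    rng = np.random.default_rng(5)
    q = rng.normal(0.0, 3.0e-10, 100_000)

    # 'chi' stands in for the geometry-dependent correction discussed in the paper
    # (the dynamic-to-static ratio ranges from 1.09 to 1.41 depending on shape).
    chi = 1.09
    k = chi * kB * T / np.mean(q ** 2)
    print(f"estimated spring constant ~ {k:.3f} N/m")
    ```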

  20. Software validation applied to spreadsheets used in laboratories working under ISO/IEC 17025

    NASA Astrophysics Data System (ADS)

    Banegas, J. M.; Orué, M. W.

    2016-07-01

    Several documents deal with software validation. Nevertheless, most are too complex to be applied to validating spreadsheets - surely the most widely used software in laboratories working under ISO/IEC 17025. The method proposed in this work is intended to be applied directly to validate spreadsheets. It includes a systematic way to document requirements, operational aspects regarding validation, and a simple method to keep records of validation results and modification history. This method is currently being used in an accredited calibration laboratory, proving to be practical and efficient.

  1. ITER-like antenna capacitors voltage probes: Circuit/electromagnetic calculations and calibrations.

    PubMed

    Helou, W; Dumortier, P; Durodié, F; Lombard, G; Nicholls, K

    2016-10-01

    The analyses illustrated in this manuscript have been performed in order to provide the required data for the amplitude-and-phase calibration of the D-dot voltage probes used in the ITER-like antenna at the Joint European Torus tokamak. Their equivalent electrical circuit has been extracted and analyzed, and it has been compared to that of voltage probes installed in simple transmission lines. A radio-frequency calibration technique has been formulated and exact mathematical relations have been derived. This technique mixes in an elegant fashion data extracted from measurements and numerical calculations to retrieve the calibration factors. The latter have been compared to previous calibration data with excellent agreement, proving the robustness of the proposed radio-frequency calibration technique. In particular, it has been stressed that it is crucial to take into account environmental parasitic effects. A low-frequency calibration technique has in addition been formulated and analyzed in depth. The equivalence between the radio-frequency and low-frequency techniques has been rigorously demonstrated. The radio-frequency calibration technique is preferable in the case of the ITER-like antenna due to uncertainties in the characteristics of the cables connected at the inputs of the voltage probes. A method to extract the effect of a mismatched data acquisition system has been derived for both calibration techniques. Finally, it has been outlined that, in the case of the ITER-like antenna, the voltage probes can additionally be used to monitor the currents at the inputs of the antenna.

  2. The effect of density gradients on hydrometers

    NASA Astrophysics Data System (ADS)

    Heinonen, Martti; Sillanpää, Sampo

    2003-05-01

    Hydrometers are simple but effective instruments for measuring the density of liquids. In this work, we studied the effect of non-uniform density of liquid on a hydrometer reading. The effect induced by vertical temperature gradients was investigated theoretically and experimentally. A method for compensating for the effect mathematically was developed and tested with experimental data obtained with the MIKES hydrometer calibration system. In the tests, the method was found reliable. However, the reliability depends on the available information on the hydrometer dimensions and density gradients.

  3. CALCULATION OF NONLINEAR CONFIDENCE AND PREDICTION INTERVALS FOR GROUND-WATER FLOW MODELS.

    USGS Publications Warehouse

    Cooley, Richard L.; Vecchia, Aldo V.

    1987-01-01

    A method is derived to efficiently compute nonlinear confidence and prediction intervals on any function of parameters derived as output from a mathematical model of a physical system. The method is applied to the problem of obtaining confidence and prediction intervals for manually-calibrated ground-water flow models. To obtain confidence and prediction intervals resulting from uncertainties in parameters, the calibrated model and information on extreme ranges and ordering of the model parameters within one or more independent groups are required. If random errors in the dependent variable are present in addition to uncertainties in parameters, then calculation of prediction intervals also requires information on the extreme range of error expected. A simple Monte Carlo method is used to compute the quantiles necessary to establish probability levels for the confidence and prediction intervals. Application of the method to a hypothetical example showed that inclusion of random errors in the dependent variable in addition to uncertainties in parameters can considerably widen the prediction intervals.
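
    The quantile step of such a Monte Carlo computation is simple to sketch. The example below is illustrative only (the uniform parameter ranges, the toy prediction function and the 95% level are assumptions, not the authors' ground-water model); it shows how prediction intervals widen once random errors in the dependent variable are added to the parameter uncertainty.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # Sample parameters within their assumed extreme ranges (uniform here).
    k = rng.uniform(0.5, 2.0, n)   # e.g. a hydraulic conductivity multiplier
    s = rng.uniform(0.1, 0.3, n)   # e.g. a storage coefficient

    # "Any function of parameters derived as output from a mathematical
    # model" -- a toy prediction stands in for the ground-water model here.
    prediction = 10.0 / k + 2.0 * s

    # Confidence interval: parameter uncertainty only.
    ci = np.percentile(prediction, [2.5, 97.5])

    # Prediction interval: add random errors in the dependent variable,
    # which widens the interval, as the abstract notes.
    errors = rng.normal(0.0, 0.5, n)          # assumed error magnitude
    pi = np.percentile(prediction + errors, [2.5, 97.5])

    print("95% confidence interval:", ci)
    print("95% prediction interval:", pi)
    ```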

  4. Derivation of flood frequency curves in poorly gauged Mediterranean catchments using a simple stochastic hydrological rainfall-runoff model

    NASA Astrophysics Data System (ADS)

    Aronica, G. T.; Candela, A.

    2007-12-01

    In this paper a Monte Carlo procedure for deriving frequency distributions of peak flows using a semi-distributed stochastic rainfall-runoff model is presented. The rainfall-runoff model used here is a very simple one, with a limited number of parameters, and requires practically no calibration, making it a robust tool for catchments that are partially or poorly gauged. The procedure is based on three modules: a stochastic rainfall generator module, a hydrologic loss module and a flood routing module. In the rainfall generator module the rainfall storm, i.e. the maximum rainfall depth for a fixed duration, is assumed to follow the two-component extreme value (TCEV) distribution whose parameters have been estimated at regional scale for Sicily. The catchment response has been modelled by using the Soil Conservation Service-Curve Number (SCS-CN) method, in a semi-distributed form, for the transformation of total rainfall to effective rainfall, and a simple form of IUH for the flood routing. Here, the SCS-CN method is implemented in probabilistic form with respect to prior-to-storm conditions, allowing the classical iso-frequency assumption between rainfall and peak flow to be relaxed. The procedure is tested on six practical case studies where synthetic FFCs (flood frequency curves) were obtained from the model variable distributions by simulating 5000 flood events, combining 5000 values of total rainfall depth for the storm duration with AMC (antecedent moisture condition) values. The application of this procedure showed how the Monte Carlo simulation technique can reproduce the observed flood frequency curves with reasonable accuracy over a wide range of return periods using a simple and parsimonious approach, limited data input and no calibration of the rainfall-runoff model.
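
    For readers unfamiliar with the SCS-CN transformation used in the hydrologic loss module, the sketch below recalls the standard curve-number relations in millimetre units; it is a generic illustration with an assumed initial-abstraction ratio of 0.2, not the probabilistic, semi-distributed implementation of the paper.

    ```python
    def scs_cn_runoff(rain_mm: float, curve_number: float,
                      ia_ratio: float = 0.2) -> float:
        """Effective rainfall (direct runoff) from total storm rainfall.

        Standard SCS-CN relations (mm units):
            S  = 25400 / CN - 254            potential maximum retention
            Ia = ia_ratio * S                initial abstraction (0.2*S assumed)
            Q  = (P - Ia)^2 / (P - Ia + S)   for P > Ia, else 0
        """
        s = 25400.0 / curve_number - 254.0
        ia = ia_ratio * s
        if rain_mm <= ia:
            return 0.0
        return (rain_mm - ia) ** 2 / (rain_mm - ia + s)

    # Example: an 80 mm storm on a catchment with CN = 75
    # (wetter antecedent moisture conditions would raise the CN).
    print(round(scs_cn_runoff(80.0, 75.0), 1), "mm of effective rainfall")
    ```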

  5. Using the entire history in the analysis of nested case cohort samples.

    PubMed

    Rivera, C L; Lumley, T

    2016-08-15

    Countermatching designs can provide more efficient estimates than simple matching or case-cohort designs in certain situations, such as when good surrogate variables for an exposure of interest are available. We extend pseudolikelihood estimation for the Cox model under countermatching designs to models in which time-varying covariates are considered. We also implement pseudolikelihood with calibrated weights to improve efficiency in nested case-control designs in the presence of time-varying variables. A simulation study is carried out, which considers four different scenarios: a binary time-dependent variable, a continuous time-dependent variable, and each of these with interactions included. Simulation results show that pseudolikelihood with calibrated weights under countermatching offers large gains in efficiency compared to case-cohort. Pseudolikelihood with calibrated weights yielded more efficient estimators than standard pseudolikelihood estimators. Additionally, estimators were more efficient under countermatching than under case-cohort for the situations considered. The methods are illustrated using the Colorado Plateau uranium miners cohort. Furthermore, we present a general method to generate survival times with time-varying covariates. Copyright © 2016 John Wiley & Sons, Ltd.

  6. Calibration of strain-gage installations in aircraft structures for the measurement of flight loads

    NASA Technical Reports Server (NTRS)

    Skopinski, T H; Aiken, William S, Jr; Huston, Wilber B

    1954-01-01

    A general method has been developed for calibrating strain-gage installations in aircraft structures, which permits the measurement in flight of the shear or lift, the bending moment, and the torque or pitching moment on the principal lifting or control surfaces. Although the stress in structural members may not be a simple function of the three loads of interest, a straightforward procedure is given for numerically combining the outputs of several bridges in such a way that the loads may be obtained. Extensions of the basic procedure by means of electrical combination of the strain-gage bridges are described which permit compromises between strain-gage installation time, availability of recording instruments, and data reduction time. The basic principles of strain-gage calibration procedures are illustrated by reference to the data for two aircraft structures of typical construction, one a straight and the other a swept horizontal stabilizer.

  7. Misalignments calibration in small-animal PET scanners based on rotating planar detectors and parallel-beam geometry.

    PubMed

    Abella, M; Vicente, E; Rodríguez-Ruano, A; España, S; Lage, E; Desco, M; Udias, J M; Vaquero, J J

    2012-11-21

    Technological advances have improved the assembly process of PET detectors, resulting in quite small mechanical tolerances. However, in high-spatial-resolution systems, even submillimetric misalignments of the detectors may lead to a notable degradation of image resolution and artifacts. Therefore, the exact characterization of misalignments is critical for optimum reconstruction quality in such systems. This subject has been widely studied for CT and SPECT scanners based on cone beam geometry, but this is not the case for PET tomographs based on rotating planar detectors. The purpose of this work is to analyze misalignment effects in these systems and to propose a robust and easy-to-implement protocol for geometric characterization. The result of the proposed calibration method, which requires no more than a simple calibration phantom, can then be used to generate a correct 3D-sinogram from the acquired list mode data.

  8. Calibration and Temperature Profile of a Tungsten Filament Lamp

    ERIC Educational Resources Information Center

    de Izarra, Charles; Gitton, Jean-Michel

    2010-01-01

    The goal of this work, proposed for undergraduate students and teachers, is the calibration of a tungsten filament lamp from electric measurements that are both simple and precise, allowing the temperature of the tungsten filament to be determined as a function of the current intensity. This calibration procedure was first applied to a conventional filament…

  9. 40 CFR 85.2232 - Calibrations, adjustments-EPA 81.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... readings using the span gas through the probe and through the calibration port shall be made and compared... through the probe. This paragraph does not prevent those who wish to always adjust the analyzer to the... without a calibration port, perform a simple leak check (e.g., cap the probe). Repair any leaks before...

  10. 40 CFR 85.2232 - Calibrations, adjustments-EPA 81.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... readings using the span gas through the probe and through the calibration port shall be made and compared... through the probe. This paragraph does not prevent those who wish to always adjust the analyzer to the... without a calibration port, perform a simple leak check (e.g., cap the probe). Repair any leaks before...

  11. 40 CFR 85.2232 - Calibrations, adjustments-EPA 81.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... readings using the span gas through the probe and through the calibration port shall be made and compared... through the probe. This paragraph does not prevent those who wish to always adjust the analyzer to the... without a calibration port, perform a simple leak check (e.g., cap the probe). Repair any leaks before...

  12. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach

    PubMed Central

    Enns, Eva A.; Cipriano, Lauren E.; Simons, Cyrena T.; Kong, Chung Yin

    2014-01-01

    Background To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single “goodness-of-fit” (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. Methods We demonstrate the Pareto frontier approach in the calibration of two models: a simple, illustrative Markov model and a previously-published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to two possible weighted-sum GOF scoring systems, and compare the health economic conclusions arising from these different definitions of best-fitting. Results For the simple model, outcomes evaluated over the best-fitting input sets according to the two weighted-sum GOF schemes were virtually non-overlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95%CI: 72,500 – 87,600] vs. $139,700 [95%CI: 79,900 - 182,800] per QALY gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95%CI: 64,900 – 156,200] per QALY gained). The TAVR model yielded similar results. Conclusions Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. PMID:24799456
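
    The Pareto-frontier definition quoted above translates directly into a dominance filter. The sketch below is illustrative (it assumes each row of `errors` holds one input set's misfit to every calibration target, with smaller values being better) and simply flags the non-dominated input sets.

    ```python
    import numpy as np

    def pareto_frontier(errors: np.ndarray) -> np.ndarray:
        """Boolean mask of input sets on the Pareto frontier.

        errors[i, j] = goodness-of-fit error of input set i on target j
        (lower is better). A set is on the frontier if no other set is
        at least as good on every target and strictly better on one.
        """
        n = errors.shape[0]
        on_frontier = np.ones(n, dtype=bool)
        for i in range(n):
            others = np.delete(errors, i, axis=0)
            dominated = np.any(
                np.all(others <= errors[i], axis=1)
                & np.any(others < errors[i], axis=1)
            )
            on_frontier[i] = not dominated
        return on_frontier

    # Example with three targets: the third input set is dominated by the first.
    fits = np.array([[0.10, 0.20, 0.15],
                     [0.25, 0.05, 0.30],
                     [0.12, 0.22, 0.18]])
    print(pareto_frontier(fits))   # [ True  True False]
    ```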

  13. A real-time camera calibration system based on OpenCV

    NASA Astrophysics Data System (ADS)

    Zhang, Hui; Wang, Hua; Guo, Huinan; Ren, Long; Zhou, Zuofeng

    2015-07-01

    Camera calibration is one of the essential steps in computer vision research. This paper describes a real-time camera calibration system based on OpenCV, developed and implemented in the VS2008 environment. Experimental results show that the system achieves simple and fast camera calibration with higher precision than MATLAB, requires no manual intervention, and can be widely used in various computer vision systems.
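
    For context, the core of an OpenCV chessboard calibration reduces to a handful of calls. The sketch below is a generic pipeline, not the authors' VS2008 system; the 9x6 board, the 25 mm square size and the image file layout are assumptions.

    ```python
    import glob
    import cv2
    import numpy as np

    PATTERN = (9, 6)            # inner corners per row and column (assumed board)
    SQUARE_MM = 25.0            # physical square size (assumed)

    # Object points: the board's corner grid in its own plane (z = 0).
    objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

    obj_points, img_points, image_size = [], [], None
    for path in glob.glob("calib_images/*.png"):      # assumed file layout
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, PATTERN)
        if not found:
            continue
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)
        image_size = gray.shape[::-1]

    # Intrinsic matrix, distortion coefficients and per-view poses.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, image_size, None, None)
    print("RMS reprojection error:", rms)
    print("Camera matrix:\n", K)
    ```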

  14. Procedure for the Selection and Validation of a Calibration Model I-Description and Application.

    PubMed

    Desharnais, Brigitte; Camirand-Lemyre, Félix; Mireault, Pascal; Skinner, Cameron D

    2017-05-01

    Calibration model selection is required for all quantitative methods in toxicology and more broadly in bioanalysis. This typically involves selecting the equation order (quadratic or linear) and the weighting factor that correctly model the data. A mis-selection of the calibration model will generate lower quality control (QC) accuracy, with errors of up to 154%. Unfortunately, simple tools to perform this selection and tests to validate the resulting model are lacking. We present a stepwise, analyst-independent scheme for selection and validation of calibration models. The success rate of this scheme is on average 40% higher than a traditional "fit and check the QC accuracy" method of selecting the calibration model. Moreover, the process was completely automated through a script (available in Supplemental Data 3) running in RStudio (free, open-source software). The need for weighting was assessed through an F-test using the variances of the upper limit of quantification and lower limit of quantification replicate measurements. When weighting was required, the choice between 1/x and 1/x² was determined by calculating which option generated the smallest spread of weighted normalized variances. Finally, model order was selected through a partial F-test. The chosen calibration model was validated through Cramer-von Mises or Kolmogorov-Smirnov normality testing of the standardized residuals. Performance of the different tests was assessed using 50 simulated data sets per possible calibration model (e.g., linear-no weight, quadratic-no weight, linear-1/x, etc.). This first of two papers describes the tests, procedures and outcomes of the developed procedure using real LC-MS-MS results for the quantification of cocaine and naltrexone. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
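
    As a rough illustration of the first decision in such a scheme - whether a weighting factor is needed at all - the sketch below applies an F-test to LLOQ and ULOQ replicate variances with scipy. The replicate values, equal group sizes and the 95% level are assumptions, and this is a simplification of the published R script rather than a reproduction of it.

    ```python
    import numpy as np
    from scipy import stats

    def needs_weighting(lloq_reps, uloq_reps, alpha=0.05):
        """Two-sided F-test comparing replicate variances at the two extremes
        of the calibration range; significantly unequal variances suggest a
        weighted (1/x or 1/x^2) calibration model is required."""
        lloq_reps = np.asarray(lloq_reps, dtype=float)
        uloq_reps = np.asarray(uloq_reps, dtype=float)
        v_small, v_large = sorted([lloq_reps.var(ddof=1), uloq_reps.var(ddof=1)])
        f_stat = v_large / v_small
        # Equal replicate counts assumed, so both degrees of freedom match.
        df = len(lloq_reps) - 1
        p_value = min(1.0, 2.0 * (1.0 - stats.f.cdf(f_stat, df, df)))
        return f_stat, p_value, p_value < alpha

    # Hypothetical replicate responses at each calibration extreme.
    lloq = [0.051, 0.049, 0.052, 0.048, 0.050]
    uloq = [10.4, 9.7, 10.9, 9.2, 10.6]
    print(needs_weighting(lloq, uloq))   # heteroscedastic, so weighting is needed
    ```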

  15. Simple method for self-referenced and label-free biosensing by using a capillary sensing element.

    PubMed

    Liu, Yun; Chen, Shimeng; Liu, Qiang; Liu, Zigeng; Wei, Peng

    2017-05-15

    We demonstrate a simple method for self-referenced and label-free biosensing based on a capillary sensing element and common optoelectronic devices. The capillary sensing element is illuminated by a light-emitting diode (LED) light source and imaged by a webcam. Part of the gold film deposited on the tubing wall is functionalized so that the excited SPR modes carry the biological information. The end face of the capillary is monitored, and separate regions of interest (ROIs) are selected as the measurement channel and the reference channel. Within the ROIs, the biological information can be accurately extracted from the image by simple image processing. Moreover, temperature fluctuations, bulk RI fluctuations, light-source fluctuations and other factors can be effectively compensated during detection. Our biosensing device has a sensitivity of 1145%/RIU and a resolution better than 5.287 × 10⁻⁴ RIU, considering a 0.79% noise level. We applied it to concanavalin A (Con A) measurement, which showed an approximately linear response to the specific analyte concentration. This simple method provides a new approach for multichannel SPR sensing and reference-compensated calibration of the SPR signal for label-free detection.

  16. Direct Reading Particle Counters: Calibration Verification and Multiple Instrument Agreement via Bump Testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jankovic, John; Zontek, Tracy L.; Ogle, Burton R.

    We examined the calibration records of two direct reading instruments designated as condensation particle counters in order to determine the number of times they were found to be out of tolerance at the annual manufacturer's recalibration. Both instruments were found to be out of tolerance more times than within tolerance, and it was concluded that annual calibration alone was insufficient to provide operational confidence in an instrument's response. Thus, a method based on subsequent agreement with data gathered from a newly calibrated instrument was developed to confirm operational readiness between annual calibrations, hereafter referred to as bump testing. The method consists of measuring source particles produced by a gas grille spark igniter in a gallon-size jar. Sampling from this chamber with a newly calibrated instrument to determine the calibrated response over the particle concentration range of interest serves as a reference. Agreement between this reference response and subsequent responses at later dates implies that the instrument is performing as it was at the time of calibration. Side-by-side sampling allows the level of agreement between two or more instruments to be determined. This is useful when simultaneously collected data are compared for differences, i.e., background with process aerosol concentrations. A reference set of data was obtained using the spark igniter. The generation system was found to be reproducible and suitable to form the basis of calibration verification. Finally, the bump test is simple enough to be performed periodically throughout the calibration year or prior to field monitoring.

  17. Direct Reading Particle Counters: Calibration Verification and Multiple Instrument Agreement via Bump Testing

    DOE PAGES

    Jankovic, John; Zontek, Tracy L.; Ogle, Burton R.; ...

    2015-01-27

    We examined the calibration records of two direct reading instruments designated as condensation particle counters in order to determine the number of times they were found to be out of tolerance at the annual manufacturer's recalibration. Both instruments were found to be out of tolerance more times than within tolerance, and it was concluded that annual calibration alone was insufficient to provide operational confidence in an instrument's response. Thus, a method based on subsequent agreement with data gathered from a newly calibrated instrument was developed to confirm operational readiness between annual calibrations, hereafter referred to as bump testing. The method consists of measuring source particles produced by a gas grille spark igniter in a gallon-size jar. Sampling from this chamber with a newly calibrated instrument to determine the calibrated response over the particle concentration range of interest serves as a reference. Agreement between this reference response and subsequent responses at later dates implies that the instrument is performing as it was at the time of calibration. Side-by-side sampling allows the level of agreement between two or more instruments to be determined. This is useful when simultaneously collected data are compared for differences, i.e., background with process aerosol concentrations. A reference set of data was obtained using the spark igniter. The generation system was found to be reproducible and suitable to form the basis of calibration verification. Finally, the bump test is simple enough to be performed periodically throughout the calibration year or prior to field monitoring.

  18. Absolute radiometric calibration of Landsat using a pseudo invariant calibration site

    USGS Publications Warehouse

    Helder, D.; Thome, K.J.; Mishra, N.; Chander, G.; Xiong, Xiaoxiong; Angal, A.; Choi, Tae-young

    2013-01-01

    Pseudo invariant calibration sites (PICS) have been used for on-orbit radiometric trending of optical satellite systems for more than 15 years. This approach to vicarious calibration has demonstrated a high degree of reliability and repeatability at the level of 1-3% depending on the site, spectral channel, and imaging geometries. A variety of sensors have used this approach for trending because it is broadly applicable and easy to implement. Models to describe the surface reflectance properties, as well as the intervening atmosphere have also been developed to improve the precision of the method. However, one limiting factor of using PICS is that an absolute calibration capability has not yet been fully developed. Because of this, PICS are primarily limited to providing only long term trending information for individual sensors or cross-calibration opportunities between two sensors. This paper builds an argument that PICS can be used more extensively for absolute calibration. To illustrate this, a simple empirical model is developed for the well-known Libya 4 PICS based on observations by Terra MODIS and EO-1 Hyperion. The model is validated by comparing model predicted top-of-atmosphere reflectance values to actual measurements made by the Landsat ETM+ sensor reflective bands. Following this, an outline is presented to develop a more comprehensive and accurate PICS absolute calibration model that can be Système international d'unités (SI) traceable. These initial concepts suggest that absolute calibration using PICS is possible on a broad scale and can lead to improved on-orbit calibration capabilities for optical satellite sensors.

  19. Solution to the Problem of Calibration of Low-Cost Air Quality Measurement Sensors in Networks.

    PubMed

    Miskell, Georgia; Salmond, Jennifer A; Williams, David E

    2018-04-27

    We provide a simple, remote, continuous calibration technique suitable for application in a hierarchical network featuring a few well-maintained, high-quality instruments ("proxies") and a larger number of low-cost devices. The ideas are grounded in a clear definition of the purpose of a low-cost network, defined here as providing reliable information on air quality at small spatiotemporal scales. The technique assumes linearity of the sensor signal. It derives running slope and offset estimates by matching mean and standard deviations of the sensor data to values derived from proxies over the same time. The idea is extremely simple: choose an appropriate proxy and an averaging-time that is sufficiently long to remove the influence of short-term fluctuations but sufficiently short that it preserves the regular diurnal variations. The use of running statistical measures rather than cross-correlation of sites means that the method is robust against periods of missing data. Ideas are first developed using simulated data and then demonstrated using field data, at hourly and 1 min time-scales, from a real network of low-cost semiconductor-based sensors. Despite the almost naïve simplicity of the method, it was robust for both drift detection and calibration correction applications. We discuss the use of generally available geographic and environmental data as well as microscale land-use regression as means to enhance the proxy estimates and to generalize the ideas to other pollutants with high spatial variability, such as nitrogen dioxide and particulates. These improvements can also be used to minimize the required number of proxy sites.
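
    The core of the described correction is a running match of first and second moments between the low-cost sensor and its proxy. The minimal sketch below assumes hourly data with a datetime index, hypothetical column names and a 72-hour window; it illustrates the idea rather than reproducing the authors' implementation.

    ```python
    import pandas as pd

    def proxy_calibrate(df: pd.DataFrame, window: str = "72h") -> pd.Series:
        """Running gain/offset correction of a low-cost sensor against a proxy.

        df needs a datetime index and columns 'sensor' (raw low-cost signal)
        and 'proxy' (estimate derived from a high-quality instrument); the
        column names and the 72-hour window are assumptions.
        """
        roll_sensor = df["sensor"].rolling(window)
        roll_proxy = df["proxy"].rolling(window)

        # Match standard deviations for the running slope, then match means
        # for the running offset (assumes a linear sensor response).
        slope = roll_proxy.std() / roll_sensor.std()
        offset = roll_proxy.mean() - slope * roll_sensor.mean()
        return slope * df["sensor"] + offset

    # Usage sketch:
    # df = pd.read_csv("site.csv", parse_dates=["time"], index_col="time")
    # df["corrected"] = proxy_calibrate(df)
    ```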

  20. Ratio manipulating spectrophotometry versus chemometry as stability indicating methods for cefquinome sulfate determination

    NASA Astrophysics Data System (ADS)

    Yehia, Ali M.; Arafa, Reham M.; Abbas, Samah S.; Amer, Sawsan M.

    2016-01-01

    Spectral resolution of cefquinome sulfate (CFQ) in the presence of its degradation products was studied. Three selective, accurate and rapid spectrophotometric methods were developed for the determination of CFQ in the presence of either its hydrolytic, oxidative or photo-degradation products. The proposed ratio difference, derivative ratio and mean centering methods are ratio-manipulating spectrophotometric methods that were satisfactorily applied for the selective determination of CFQ within a linear range of 5.0-40.0 μg mL⁻¹. Concentration Residuals Augmented Classical Least Squares was applied and evaluated for the determination of the cited drug in the presence of all its degradation products. Traditional Partial Least Squares regression was also applied and benchmarked against the proposed advanced multivariate calibration. Twenty-five experimentally designed synthetic mixtures of three factors at five levels were used to calibrate and validate the multivariate models. Advanced chemometrics succeeded in quantitative and qualitative analyses of CFQ along with its hydrolytic, oxidative and photo-degradation products. The proposed methods were applied successfully to the analysis of different pharmaceutical formulations. These developed methods were simple and cost-effective compared with the manufacturer's RP-HPLC method.

  1. Automatic Calibration of Stereo-Cameras Using Ordinary Chess-Board Patterns

    NASA Astrophysics Data System (ADS)

    Prokos, A.; Kalisperakis, I.; Petsa, E.; Karras, G.

    2012-07-01

    Automation of camera calibration is facilitated by recording coded 2D patterns. Our toolbox for automatic camera calibration using images of simple chess-board patterns is freely available on the Internet. But it is unsuitable for stereo-cameras whose calibration implies recovering camera geometry and their true-to-scale relative orientation. In contrast to all reported methods requiring additional specific coding to establish an object space coordinate system, a toolbox for automatic stereo-camera calibration relying on ordinary chess-board patterns is presented here. First, the camera calibration algorithm is applied to all image pairs of the pattern to extract nodes of known spacing, order them in rows and columns, and estimate two independent camera parameter sets. The actual node correspondences on stereo-pairs remain unknown. Image pairs of a textured 3D scene are exploited for finding the fundamental matrix of the stereo-camera by applying RANSAC to point matches established with the SIFT algorithm. A node is then selected near the centre of the left image; its match on the right image is assumed as the node closest to the corresponding epipolar line. This yields matches for all nodes (since these have already been ordered), which should also satisfy the 2D epipolar geometry. Measures for avoiding mismatching are taken. With automatically estimated initial orientation values, a bundle adjustment is performed constraining all pairs on a common (scaled) relative orientation. Ambiguities regarding the actual exterior orientations of the stereo-camera with respect to the pattern are irrelevant. Results from this automatic method show typical precisions not above 1/4 pixels for 640×480 web cameras.
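
    To make the epipolar step concrete, the fragment below shows a generic OpenCV version of it - SIFT matches on a textured stereo pair, a RANSAC fundamental matrix, and the epipolar line for a node near the left-image centre. The image file names and thresholds are placeholders, and this is not the authors' toolbox.

    ```python
    import cv2
    import numpy as np

    left = cv2.imread("scene_left.png", cv2.IMREAD_GRAYSCALE)    # assumed files
    right = cv2.imread("scene_right.png", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(left, None)
    kp2, des2 = sift.detectAndCompute(right, None)

    # Ratio-test filtering of nearest-neighbour matches.
    matcher = cv2.BFMatcher()
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.75 * n.distance]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # Fundamental matrix via RANSAC; the inlier mask discards mismatches.
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)

    # Epipolar line in the right image for a node near the left-image centre;
    # its chessboard match is taken as the node closest to this line.
    node = np.array([[[320.0, 240.0]]], dtype=np.float32)
    line = cv2.computeCorrespondEpilines(node, 1, F).reshape(3)   # a*x + b*y + c = 0
    print(F)
    print(line)
    ```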

  2. Automatic Calibration of a Distributed Rainfall-Runoff Model, Using the Degree-Day Formulation for Snow Melting, Within DMIP2 Project

    NASA Astrophysics Data System (ADS)

    Frances, F.; Orozco, I.

    2010-12-01

    This work presents the assessment of the TETIS distributed hydrological model in mountain basins of the American and Carson rivers in the Sierra Nevada (USA) at hourly time discretization, as part of the DMIP2 Project. In TETIS, each cell of the spatial grid conceptualizes the water cycle using six interconnected tanks. The relationship between tanks depends on the case, although in most situations simple linear reservoir and flow threshold schemes are used, with excellent results (Vélez et al., 1999; Francés et al., 2002). In particular, within the snow tank, snow melting is based in this work on the simple degree-day method with spatially constant parameters. The TETIS model includes an automatic calibration module based on the SCE-UA algorithm (Duan et al., 1992; Duan et al., 1994), and the model effective parameters are organized following a split structure, as presented by Francés and Benito (1995) and Francés et al. (2007). In this way, calibration in TETIS involves up to 9 correction factors (CFs), which globally correct the different parameter maps instead of each parameter cell value, thus drastically reducing the number of variables to be calibrated. This strategy allows for fast and agile modification of the different hydrological processes while preserving the spatial structure of each parameter map. With the snowmelt submodel, automatic model calibration was carried out in three steps, separating the calibration of rainfall-runoff and snowmelt parameters. In the first step, the automatic calibration of the CFs during the period 05/20/1990 to 07/31/1990 in the American River (without snow influence) gave a Nash-Sutcliffe Efficiency (NSE) index of 0.92. The calibration of the three degree-day parameters was done using all the SNOTEL stations in the American and Carson rivers. Finally, using previous calibrations as initial values, the complete calibration done in the Carson River for the period 10/01/1992 to 07/31/1993 gave an NSE index of 0.86. The temporal and spatial validation using five periods must be considered excellent in both rivers for discharges (NSEs higher than 0.76) and good for snow distribution (daily spatial coverage errors ranging from -10 to 27%). In conclusion, this work demonstrates: 1. The viability of automatic calibration of distributed models, with the corresponding saving of personal time and maximum exploitation of the available information. 2. The good performance of the degree-day snowmelt formulation even at hourly time discretization, in spite of its simplicity.
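
    For reference, the degree-day formulation mentioned above is essentially a one-line model. The sketch below advances a snowpack one hourly step with assumed parameter values; it is generic and does not reproduce the TETIS correction-factor structure.

    ```python
    def degree_day_melt(swe_mm: float, temp_c: float,
                        ddf_mm_per_degc_per_h: float = 0.15,
                        base_temp_c: float = 0.0) -> tuple[float, float]:
        """One hourly step of the simple degree-day snowmelt method.

        Melt is proportional to the excess of air temperature over a base
        temperature and is capped by the available snow water equivalent.
        The degree-day factor and base temperature are assumed values.
        """
        potential_melt = max(0.0, ddf_mm_per_degc_per_h * (temp_c - base_temp_c))
        melt = min(potential_melt, swe_mm)
        return swe_mm - melt, melt   # remaining SWE, melt released to the cell

    swe = 120.0                       # mm of snow water equivalent
    for temp in [-2.0, 1.5, 3.0, 4.5]:
        swe, melt = degree_day_melt(swe, temp)
        print(f"T={temp:+.1f} degC  melt={melt:.2f} mm  SWE={swe:.1f} mm")
    ```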

  3. Hydrograph matching method for measuring model performance

    NASA Astrophysics Data System (ADS)

    Ewen, John

    2011-09-01

    Despite all the progress made over the years on developing automatic methods for analysing hydrographs and measuring the performance of rainfall-runoff models, automatic methods cannot yet match the power and flexibility of the human eye and brain. Very simple approaches are therefore being developed that mimic the way hydrologists inspect and interpret hydrographs, including the way that patterns are recognised, links are made by eye, and hydrological responses and errors are studied and remembered. In this paper, a dynamic programming algorithm originally designed for use in data mining is customised for use with hydrographs. It generates sets of "rays" that are analogous to the visual links made by the hydrologist's eye when linking features or times in one hydrograph to the corresponding features or times in another hydrograph. One outcome from this work is a new family of performance measures called "visual" performance measures. These can measure differences in amplitude and timing, including the timing errors between simulated and observed hydrographs in model calibration. To demonstrate this, two visual performance measures, one based on the Nash-Sutcliffe Efficiency and the other on the mean absolute error, are used in a total of 34 split-sample calibration-validation tests for two rainfall-runoff models applied to the Hodder catchment, northwest England. The customised algorithm, called the Hydrograph Matching Algorithm, is very simple to apply; it is given in a few lines of pseudocode.
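
    Since the two visual measures are built on the Nash-Sutcliffe Efficiency and the mean absolute error, the short sketch below recalls those base measures for a pair of observed and simulated hydrographs (with made-up flow values); the ray-generating Hydrograph Matching Algorithm itself is not reproduced here.

    ```python
    import numpy as np

    def nash_sutcliffe(obs, sim) -> float:
        """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def mean_absolute_error(obs, sim) -> float:
        return float(np.mean(np.abs(np.asarray(obs, float) - np.asarray(sim, float))))

    obs = [1.2, 3.4, 8.9, 6.1, 3.0, 1.8]   # observed flows (m3/s), made up
    sim = [1.0, 2.9, 9.6, 6.8, 3.3, 1.6]   # simulated flows (m3/s), made up
    print("NSE:", round(nash_sutcliffe(obs, sim), 3))
    print("MAE:", round(mean_absolute_error(obs, sim), 3))
    ```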

  4. Absolute calibration of the Jenoptik CHM15k-x ceilometer and its applicability for quantitative aerosol monitoring

    NASA Astrophysics Data System (ADS)

    Geiß, Alexander; Wiegner, Matthias

    2014-05-01

    The knowledge of the spatiotemporal distribution of atmospheric aerosols and their optical characterization is essential for the understanding of the radiation budget, air quality, and climate. For this purpose, lidar is an excellent system as it is an active remote sensing technique. As multi-wavelength research lidars with depolarization channels are quite complex and expensive, increasing attention is paid to so-called ceilometers. They are simple one-wavelength backscatter lidars with low pulse energy for eye-safe operation. As maintenance costs are low and continuous and unattended measurements can be performed, they are suitable for long-term aerosol monitoring in a network. However, the signal-to-noise ratio is low, and the signals are not calibrated. The only optical property that can be derived from a ceilometer is the particle backscatter coefficient, and even this quantity requires a calibration of the signals. With four years of measurements from a Jenoptik CHM15k-x ceilometer, we developed two methods for an absolute calibration of this system. The advantage of our approach is that only a few days with favorable meteorological conditions are required, during which Rayleigh calibration and comparison with our research lidar are possible to estimate the lidar constant. This enables us to derive the particle backscatter coefficient at 1064 nm, and we retrieved for the first time profiles in near real-time within an accuracy of 10%. If an appropriate lidar ratio is assumed, the aerosol optical depth of, e.g., the mixing layer can be determined with an accuracy depending on the accuracy of the lidar ratio estimate. Even for 'simple' applications, e.g. assessment of the mixing layer height, cloud detection, or detection of elevated aerosol layers, the particle backscatter coefficient has significant advantages over the measured (uncalibrated) attenuated backscatter. The possibility of continuous operation under nearly any meteorological condition, with a temporal resolution on the order of 30 seconds, also makes it possible to apply time-height-tracking methods for detecting mixing layer heights. The combination of methods for edge detection (e.g. wavelet covariance transform, gradient method, variance method) and edge tracking techniques is used to increase the reliability of the layer detection and attribution. Thus, a feature mask of aerosols and clouds can be derived. Four years of measurements constitute an excellent basis for a climatology, including a homogeneous time series of mixing layer heights, aerosol layers and cloud base heights in the troposphere. With the low overlap region of 180 m of the Jenoptik CHM15k-x, even very narrow mixing layers, typical of winter conditions, can be resolved.

  5. A simple and sensitive quantitation of N,N-dimethyltryptamine by gas chromatography with surface ionization detection.

    PubMed

    Ishii, A; Seno, H; Suzuki, O; Hattori, H; Kumazawa, T

    1997-01-01

    A simple and sensitive method for determination of N,N-dimethyltryptamine (DMT) by gas chromatography (GC) with surface ionization detection (SID) is presented. Whole blood or urine, containing DMT and gramine (internal standard), was subjected to solid-phase extraction with a Sep-Pak C18 cartridge before analysis by GC-SID. The calibration curve was linear in the DMT range of 1.25-20 ng/mL blood or urine. The detection limit of DMT was about 0.5 ng/mL (10 pg on-column). The recovery of both DMT and gramine spiked in biological fluids was above 86%.

  6. Analysis of Calibration Errors for Both Short and Long Stroke White Light Experiments

    NASA Technical Reports Server (NTRS)

    Pan, Xaiopei

    2006-01-01

    This work will analyze focusing and tilt variations introduced by thermal changes in calibration processes. In particular the accuracy limits are presented for common short- and long-stroke experiments. A new, simple, practical calibration scheme is proposed and analyzed based on the SIM PlanetQuest's Micro-Arcsecond Metrology (MAM) testbed experiments.

  7. Technical Note: Procedure for the calibration and validation of kilo-voltage cone-beam CT models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vilches-Freixas, Gloria; Létang, Jean Michel; Rit,

    2016-09-15

    Purpose: The aim of this work is to propose a general and simple procedure for the calibration and validation of kilo-voltage cone-beam CT (kV CBCT) models against experimental data. Methods: The calibration and validation of the CT model is a two-step procedure: the source model then the detector model. The source is described by the direction dependent photon energy spectrum at each voltage while the detector is described by the pixel intensity value as a function of the direction and the energy of incident photons. The measurements for the source consist of a series of dose measurements in air performed at each voltage with varying filter thicknesses and materials in front of the x-ray tube. The measurements for the detector are acquisitions of projection images using the same filters and several tube voltages. The proposed procedure has been applied to calibrate and assess the accuracy of simple models of the source and the detector of three commercial kV CBCT units. If the CBCT system models had been calibrated differently, the current procedure would have been exclusively used to validate the models. Several high-purity attenuation filters of aluminum, copper, and silver combined with a dosimeter which is sensitive to the range of voltages of interest were used. A sensitivity analysis of the model has also been conducted for each parameter of the source and the detector models. Results: Average deviations between experimental and theoretical dose values are below 1.5% after calibration for the three x-ray sources. The predicted energy deposited in the detector agrees with experimental data within 4% for all imaging systems. Conclusions: The authors developed and applied an experimental procedure to calibrate and validate any model of the source and the detector of a CBCT unit. The present protocol has been successfully applied to three x-ray imaging systems. The minimum requirements in terms of material and equipment would make its implementation suitable in most clinical environments.

  8. Topics in Statistical Calibration

    DTIC Science & Technology

    2014-03-27

    on a parametric bootstrap where, instead of sampling directly from the residuals, samples are drawn from a normal distribution. This procedure will...addition to centering them (Davison and Hinkley, 1997). When there are outliers in the residuals, the bootstrap distribution of x̂0 can become skewed or...based and inversion methods using the linear mixed-effects model. Then, a simple parametric bootstrap algorithm is proposed that can be used to either

  9. Spectral characterization of near-infrared acousto-optic tunable filter (AOTF) hyperspectral imaging systems using standard calibration materials.

    PubMed

    Bürmen, Miran; Pernuš, Franjo; Likar, Boštjan

    2011-04-01

    In this study, we propose and evaluate a method for spectral characterization of acousto-optic tunable filter (AOTF) hyperspectral imaging systems in the near-infrared (NIR) spectral region from 900 nm to 1700 nm. The proposed spectral characterization method is based on the SRM-2035 standard reference material, exhibiting distinct spectral features, which enables robust non-rigid matching of the acquired and reference spectra. The matching is performed by simultaneously optimizing the parameters of the AOTF tuning curve, spectral resolution, baseline, and multiplicative effects. In this way, the tuning curve (frequency-wavelength characteristics) and the corresponding spectral resolution of the AOTF hyperspectral imaging system can be characterized simultaneously. Also, the method enables simple spectral characterization of the entire imaging plane of hyperspectral imaging systems. The results indicate that the method is accurate and efficient and can easily be integrated with systems operating in diffuse reflection or transmission modes. Therefore, the proposed method is suitable for characterization, calibration, or validation of AOTF hyperspectral imaging systems. © 2011 Society for Applied Spectroscopy

  10. Simultaneous determination of Nifuroxazide and Drotaverine hydrochloride in pharmaceutical preparations by bivariate and multivariate spectral analysis

    NASA Astrophysics Data System (ADS)

    Metwally, Fadia H.

    2008-02-01

    The quantitative predictive abilities of the new and simple bivariate spectrophotometric method are compared with the results obtained by the use of multivariate calibration methods [classical least squares (CLS), principal component regression (PCR) and partial least squares (PLS)], using the information contained in the absorption spectra of the appropriate solutions. Mixtures of the two drugs Nifuroxazide (NIF) and Drotaverine hydrochloride (DRO) were resolved by application of the bivariate method. The chemometric approaches were also applied, with prior optimization of the calibration matrix, as they allow the simultaneous inclusion of many spectral wavelengths. The results found by application of the bivariate, CLS, PCR and PLS methods for the simultaneous determination of mixtures of both components containing 2-12 μg mL⁻¹ of NIF and 2-8 μg mL⁻¹ of DRO are reported. Both approaches were satisfactorily applied to the simultaneous determination of NIF and DRO in pure form and in pharmaceutical formulation. The results were in accordance with those given by the EVA Pharma reference spectrophotometric method.

  11. Simplified multiple headspace extraction gas chromatographic technique for determination of monomer solubility in water.

    PubMed

    Chai, X S; Schork, F J; DeCinque, Anthony

    2005-04-08

    This paper reports an improved headspace gas chromatographic (GC) technique for the determination of monomer solubilities in water. The method is based on a multiple headspace extraction GC technique developed previously [X.S. Chai, Q.X. Hou, F.J. Schork, J. Appl. Polym. Sci., in press], but with a major modification of the calibration technique. As a result, only a few iterations of headspace extraction and GC measurement are required, which avoids "exhaustive" headspace extraction and thus shortens the experimental time for each analysis. For highly insoluble monomers, effort must be made to minimize adsorption in the headspace sampling channel, the transport conduit and the capillary column by using a higher operating temperature and a short capillary column in the headspace sampler and GC system. For highly water-soluble monomers, a new calibration method is proposed. The combination of these modifications results in a method that is simple, rapid and automated. While the current focus of the authors is on the determination of monomer solubility in aqueous solutions, the method should be applicable to the determination of the solubility of any organic compound in water.

  12. Development and validation of a reversed-phase fluorescence HPLC method for determination of bucillamine in human plasma using pre-column derivatization with monobromobimane.

    PubMed

    Lee, Kang Choon; Chun, Young Goo; Kim, Insoo; Shin, Beom Soo; Park, Eun-Seok; Yoo, Sun Dong; Youn, Yu Seok

    2009-07-15

    A simple, specific and sensitive derivatization with monobromobimane (mBrB) and the corresponding HPLC-fluorescence quantitation method for the analysis of bucillamine in human plasma was developed and validated. The analytical procedure involves a simple protein precipitation, pre-column fluorescence derivatization, and separation by reversed-phase high performance liquid chromatography (RP-HPLC). The calibration curve showed good linearity over a wide concentration range (50 ng/mL to 10 microg/mL) in human plasma (r(2)=0.9998). The lower limit of quantitation (LLOQ) was 50 ng/mL. The average precision and accuracy at LLOQ were within 6.3% and 107.6%, respectively. This method was successfully applied to a pharmacokinetic study after oral administration of a dose (300 mg) of bucillamine to 20 healthy Korean volunteers.

  13. Bundle Adjustment-Based Stability Analysis Method with a Case Study of a Dual Fluoroscopy Imaging System

    NASA Astrophysics Data System (ADS)

    Al-Durgham, K.; Lichti, D. D.; Detchev, I.; Kuntze, G.; Ronsky, J. L.

    2018-05-01

    A fundamental task in photogrammetry is the temporal stability analysis of a camera/imaging-system's calibration parameters. This is essential to validate the repeatability of the parameter estimation, to detect any behavioural changes in the camera/imaging system, and to ensure precise photogrammetric products. Many stability analysis methods exist in the photogrammetric literature; each has different methodological bases, advantages and disadvantages. This paper presents a simple and rigorous stability analysis method that can be straightforwardly implemented for a single camera or an imaging system with multiple cameras. The basic collinearity model is used to capture differences between two calibration datasets and to establish the stability analysis methodology. Geometric simulation is used as a tool to derive image and object space scenarios. Experiments were performed on real calibration datasets from a dual fluoroscopy (DF; X-ray-based) imaging system. The calibration data consisted of hundreds of images and thousands of image observations from six temporal points over a two-day period for a precise evaluation of the DF system stability. The stability of the DF system was found to be within a range of 0.01 to 0.66 mm in terms of 3D coordinate root-mean-square error (RMSE) for the single-camera analysis, and 0.07 to 0.19 mm for the dual-camera analysis. To the authors' best knowledge, this work is the first to address the topic of DF stability analysis.

  14. Simple determination of fluoride in biological samples by headspace solid-phase microextraction and gas chromatography-tandem mass spectrometry.

    PubMed

    Kwon, Sun-Myung; Shin, Ho-Sang

    2015-08-14

    A simple and convenient method to detect fluoride in biological samples was developed. This method was based on derivatization with 2-(bromomethyl)naphthalene, headspace solid phase microextraction (HS-SPME) in a vial, and gas chromatography-tandem mass spectrometric detection. The HS-SPME parameters were optimized as follows: selection of CAR/PDMS fiber, 0.5% 2-(bromomethyl)naphthalene, 250 mg/L 15-crown-5-ether as a phase transfer catalyst, extraction and derivatization temperature of 95 °C, heating time of 20 min and pH of 7.0. Under the established conditions, the lowest limits of detection were 9 and 11 μg/L in 1.0 ml of plasma and urine, respectively, and the intra- and inter-day relative standard deviation was less than 7.7% at concentrations of 0.1 and 1.0 mg/L. The calibration curve showed good linearity of plasma and urine with r=0.9990 and r=0.9992, respectively. This method is simple, amenable to automation and environmentally friendly. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Determination of triclosan in antiperspirant gels by first-order derivative spectrophotometry.

    PubMed

    Du, Lina; Li, Miao; Jin, Yiguang

    2011-10-01

    A first-order derivative UV spectrophotometric method was developed to determine triclosan, a broad-spectrum antimicrobial agent, in health care products containing fragrances, which, as impurities, could interfere with the determination. Different extraction methods were compared. Triclosan was extracted with chloroform and diluted with ethanol, followed by the derivative spectrophotometric measurement. The interference of fragrances was completely eliminated. The calibration graph was found to be linear in the range of 7.5-45 μg mL⁻¹. The method is simple, rapid, sensitive and suitable for determining triclosan in fragrance-containing health care products.

  16. Experimental calibration procedures for rotating Lorentz-force flowmeters

    DOE PAGES

    Hvasta, M. G.; Slighton, N. T.; Kolemen, E.; ...

    2017-07-14

    Rotating Lorentz-force flowmeters are a novel and useful technology with a range of applications in a variety of different industries. However, calibrating these flowmeters can be challenging, time-consuming, and expensive. In this paper, simple calibration procedures for rotating Lorentz-force flowmeters are presented. These procedures eliminate the need for expensive equipment, numerical modeling, redundant flowmeters, and system down-time. Finally, the calibration processes are explained in a step-by-step manner and compared to experimental results.

  17. Experimental calibration procedures for rotating Lorentz-force flowmeters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hvasta, M. G.; Slighton, N. T.; Kolemen, E.

    Rotating Lorentz-force flowmeters are a novel and useful technology with a range of applications in a variety of different industries. However, calibrating these flowmeters can be challenging, time-consuming, and expensive. In this paper, simple calibration procedures for rotating Lorentz-force flowmeters are presented. These procedures eliminate the need for expensive equipment, numerical modeling, redundant flowmeters, and system down-time. Finally, the calibration processes are explained in a step-by-step manner and compared to experimental results.

  18. Multi-objective calibration and uncertainty analysis of hydrologic models; A comparative study between formal and informal methods

    NASA Astrophysics Data System (ADS)

    Shafii, M.; Tolson, B.; Matott, L. S.

    2012-04-01

    Hydrologic modeling has benefited from significant developments over the past two decades. This has resulted in building of higher levels of complexity into hydrologic models, which eventually makes the model evaluation process (parameter estimation via calibration and uncertainty analysis) more challenging. In order to avoid unreasonable parameter estimates, many researchers have suggested implementation of multi-criteria calibration schemes. Furthermore, for predictive hydrologic models to be useful, proper consideration of uncertainty is essential. Consequently, recent research has emphasized comprehensive model assessment procedures in which multi-criteria parameter estimation is combined with statistically-based uncertainty analysis routines such as Bayesian inference using Markov Chain Monte Carlo (MCMC) sampling. Such a procedure relies on the use of formal likelihood functions based on statistical assumptions, and moreover, the Bayesian inference structured on MCMC samplers requires a considerably large number of simulations. Due to these issues, especially in complex non-linear hydrological models, a variety of alternative informal approaches have been proposed for uncertainty analysis in the multi-criteria context. This study aims at exploring a number of such informal uncertainty analysis techniques in multi-criteria calibration of hydrological models. The informal methods addressed in this study are (i) Pareto optimality which quantifies the parameter uncertainty using the Pareto solutions, (ii) DDS-AU which uses the weighted sum of objective functions to derive the prediction limits, and (iii) GLUE which describes the total uncertainty through identification of behavioral solutions. The main objective is to compare such methods with MCMC-based Bayesian inference with respect to factors such as computational burden, and predictive capacity, which are evaluated based on multiple comparative measures. The measures for comparison are calculated both for calibration and evaluation periods. The uncertainty analysis methodologies are applied to a simple 5-parameter rainfall-runoff model, called HYMOD.
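
    As one concrete example of the informal methods listed, the sketch below runs a bare-bones GLUE pass on a toy rainfall-runoff model; the model function, the NSE-based likelihood, the 0.6 behavioural threshold and the uniform sampling ranges are all assumptions for illustration and do not reproduce the HYMOD setup of the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def run_model(params, forcing):
        """Placeholder for a rainfall-runoff model such as HYMOD."""
        a, b = params
        return a * forcing ** b           # toy response, illustration only

    def nse(obs, sim):
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    forcing = rng.gamma(2.0, 2.0, size=200)                    # synthetic rainfall
    observed = 0.8 * forcing ** 0.9 + rng.normal(0.0, 0.3, 200)

    # 1. Monte Carlo sampling of the parameter space (uniform priors assumed).
    samples = np.column_stack([rng.uniform(0.1, 2.0, 5000),
                               rng.uniform(0.5, 1.5, 5000)])
    likelihood = np.array([nse(observed, run_model(p, forcing)) for p in samples])

    # 2. Keep "behavioural" parameter sets above an assumed NSE threshold.
    behavioural = likelihood > 0.6

    # 3. Prediction limits from the behavioural ensemble (a full GLUE analysis
    #    weights each member by its likelihood; plain percentiles keep the
    #    sketch short).
    ensemble = np.array([run_model(p, forcing) for p in samples[behavioural]])
    lower = np.percentile(ensemble, 5, axis=0)
    upper = np.percentile(ensemble, 95, axis=0)
    print(behavioural.sum(), "behavioural sets; mean 90% band width:",
          float(np.mean(upper - lower)))
    ```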

  19. Multi-Model Combination techniques for Hydrological Forecasting: Application to Distributed Model Intercomparison Project Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ajami, N K; Duan, Q; Gao, X

    2005-04-11

    This paper examines several multi-model combination techniques: the Simple Multi-model Average (SMA), the Multi-Model Super Ensemble (MMSE), Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporated bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.
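
    To make the simplest of the compared techniques concrete, the sketch below computes a Simple Multi-model Average and a skill-weighted, bias-corrected combination over a short calibration period; the member forecasts and the inverse-MSE weighting are made-up illustrations, not the DMIP evaluation itself.

    ```python
    import numpy as np

    obs = np.array([12.0, 18.5, 30.2, 22.1, 15.3])          # calibration flows
    members = np.array([[10.5, 17.0, 33.0, 20.0, 14.0],     # model A
                        [14.2, 21.0, 28.5, 25.3, 17.8],     # model B
                        [11.8, 16.5, 31.0, 21.5, 15.9]])    # model C

    # Simple Multi-model Average (SMA): equal weights, no bias correction.
    sma = members.mean(axis=0)

    # Bias correction step: remove each member's mean error over the
    # calibration period before combining (one simple form of correction).
    bias = (members - obs).mean(axis=1, keepdims=True)
    corrected = members - bias

    # Weighted-average-style weights: inverse mean-squared error, normalised
    # to sum to one (an assumed, simple skill measure).
    mse = ((corrected - obs) ** 2).mean(axis=1)
    weights = (1.0 / mse) / (1.0 / mse).sum()
    weighted = weights @ corrected

    print("SMA:", sma)
    print("weights:", np.round(weights, 3))
    print("bias-corrected weighted combination:", np.round(weighted, 2))
    ```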

  20. Development of new analytical methods for the determination of caffeine content in aqueous solution of green coffee beans.

    PubMed

    Weldegebreal, Blen; Redi-Abshiro, Mesfin; Chandravanshi, Bhagwan Singh

    2017-12-05

    This study was conducted to develop fast and cost-effective methods for the determination of caffeine in green coffee beans. In the present work, direct determination of caffeine in aqueous solution of green coffee beans was performed using FT-IR-ATR and fluorescence spectrophotometry. Caffeine was also directly determined in dimethylformamide solution using NIR spectroscopy with a univariate calibration technique. The percentage of caffeine in the same sample of green coffee beans was determined using the three newly developed methods. The caffeine content of the green coffee beans was found to be 1.52 ± 0.09 (% w/w) using FT-IR-ATR, 1.50 ± 0.14 (% w/w) using NIR and 1.50 ± 0.05 (% w/w) using fluorescence spectroscopy. The means of the three methods were compared by one-way analysis of variance; at the p = 0.05 significance level, the means were not significantly different. The percentage of caffeine in the same sample of green coffee beans was also determined using the literature-reported UV/Vis spectrophotometric method for comparison and found to be 1.40 ± 0.02 (% w/w). New simple, rapid and inexpensive methods were thus developed for the direct determination of caffeine content in aqueous solution of green coffee beans using FT-IR-ATR and fluorescence spectrophotometry. NIR spectrophotometry can also be used as an alternative for caffeine determination, using a reduced amount of organic solvent (dimethylformamide) and a univariate calibration technique. These analytical methods may therefore be recommended for the rapid, simple, safe and cost-effective determination of caffeine in green coffee beans.
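
    The statistical comparison reported here is a standard one-way ANOVA; the sketch below shows the test with scipy on hypothetical replicate values (the paper's raw replicates are not given in the abstract).

    ```python
    from scipy import stats

    # Hypothetical replicate caffeine contents (% w/w) for the three methods.
    ftir_atr = [1.43, 1.55, 1.60, 1.49, 1.53]
    nir      = [1.36, 1.62, 1.48, 1.57, 1.47]
    fluor    = [1.45, 1.52, 1.55, 1.48, 1.50]

    f_stat, p_value = stats.f_oneway(ftir_atr, nir, fluor)
    print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
    # p > 0.05 -> no significant difference between the method means,
    # matching the comparison described in the abstract.
    ```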

  1. CALCULATION OF GAMMA SPECTRA IN A PLASTIC SCINTILLATOR FOR ENERGY CALIBRATION AND DOSE COMPUTATION.

    PubMed

    Kim, Chankyu; Yoo, Hyunjun; Kim, Yewon; Moon, Myungkook; Kim, Jong Yul; Kang, Dong Uk; Lee, Daehee; Kim, Myung Soo; Cho, Minsik; Lee, Eunjoong; Cho, Gyuseong

    2016-09-01

    Plastic scintillation detectors have practical advantages in the field of dosimetry. Energy calibration of measured gamma spectra is important for dose computation, but it is not simple in plastic scintillators because of their distinct characteristics and finite resolution. In this study, the gamma spectra in a polystyrene scintillator were calculated for energy calibration and dose computation. Based on the relationship between the energy resolution and the estimated energy broadening effect in the calculated spectra, the gamma spectra were calculated simply, without many iterations. The calculated spectra were in agreement with the calculation by an existing method and with measurements. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  2. Method for radiometric calibration of an endoscope's camera and light source

    NASA Astrophysics Data System (ADS)

    Rai, Lav; Higgins, William E.

    2008-03-01

    An endoscope is a commonly used instrument for performing minimally invasive visual examination of the tissues inside the body. A physician uses the endoscopic video images to identify tissue abnormalities. The images, however, are highly dependent on the optical properties of the endoscope and its orientation and location with respect to the tissue structure. The analysis of endoscopic video images is, therefore, purely subjective. Studies suggest that the fusion of endoscopic video images (providing color and texture information) with virtual endoscopic views (providing structural information) can be useful for assessing various pathologies for several applications: (1) surgical simulation, training, and pedagogy; (2) the creation of a database for pathologies; and (3) the building of patient-specific models. Such fusion requires both geometric and radiometric alignment of endoscopic video images in the texture space. Inconsistent estimates of texture/color of the tissue surface result in seams when multiple endoscopic video images are combined together. This paper (1) identifies the endoscope-dependent variables to be calibrated for objective and consistent estimation of surface texture/color and (2) presents an integrated set of methods to measure them. Results show that the calibration method can be successfully used to estimate objective color/texture values for simple planar scenes, whereas uncalibrated endoscopes performed very poorly for the same tests.

  3. Quantification of the fluorine containing drug 5-fluorouracil in cancer cells by GaF molecular absorption via high-resolution continuum source molecular absorption spectrometry

    NASA Astrophysics Data System (ADS)

    Krüger, Magnus; Huang, Mao-Dong; Becker-Roß, Helmut; Florek, Stefan; Ott, Ingo; Gust, Ronald

    The development of high-resolution continuum source molecular absorption spectrometry made the quantification of fluorine feasible by measuring the molecular absorption of gallium monofluoride (GaF). Using this new technique, we developed a graphite furnace method to quantify fluorine in organic molecules, taking 5-fluorouracil (5-FU) as an example. The effect of 5-FU on the generation of the diatomic GaF molecule was investigated. Experimental conditions such as the gallium nitrate amount, the temperature program, interfering anions (represented as the corresponding acids) and the calibration for determining 5-FU in standard solution and in cellular matrix samples were investigated and optimized. The sample matrix showed no effect on the sensitivity of the GaF molecular absorption. A simple calibration curve based on an inorganic sodium fluoride solution can conveniently be used for calibration. The described method is sensitive, with an achievable limit of detection of 0.23 ng of 5-FU. To establish the concept of "fluorine as a probe in medicinal chemistry", an exemplary application was selected in which the developed method was successfully demonstrated by performing cellular uptake studies of 5-FU in human colon carcinoma cells.

  4. Validation and Application of a Simple UHPLC–MS-MS Method for the Enantiospecific Determination of Warfarin in Human Urine

    PubMed Central

    Alshogran, Osama Y.; Ocque, Andrew J.; Leblond, François A.; Pichette, Vincent; Nolin, Thomas D.

    2016-01-01

    A simple and rapid liquid chromatographic–tandem mass spectrometric method has been developed and validated for the enantiospecific determination of R- and S-warfarin in human urine. Warfarin enantiomers were extracted from urine using methyl tert-butyl ether. Chromatographic separation of warfarin enantiomers and the internal standard d5-warfarin was achieved using an Astec Chirobiotic V column with a gradient mobile phase at a flow rate of 400 µL/min over 10 min. Detection was performed on a TSQ Quantum Ultra triple quadrupole mass spectrometer equipped with a heated electrospray ionization source. Analytes were detected in negative ionization mode using selected reaction monitoring. Calibration curves were linear with a correlation coefficient of ≥0.996 for both enantiomers over a concentration range of 5–500 ng/mL. The intra- and interday accuracy and precision for both analytes were within ±9.0%. Excellent extraction efficiency and negligible matrix effects were observed. The applicability of the method was demonstrated by successful measurement of warfarin enantiomers in the urine of patients with kidney disease. The method is simple, accurate and reproducible and is currently being used to support warfarin pharmacokinetic studies. PMID:26657732

  5. Analysis of titanium content in titanium tetrachloride solution

    NASA Astrophysics Data System (ADS)

    Bi, Xiaoguo; Dong, Yingnan; Li, Shanshan; Guan, Duojiao; Wang, Jianyu; Tang, Meiling

    2018-03-01

    Strontium titanate, barium titanate and lead titanate are new types of functional ceramic materials with good prospects, and titanium tetrachloride is commonly used in the production of such products; these materials show excellent electrochemical performance and ferroelectric temperature-coefficient effects. In this article, three methods are used to determine the titanium content of titanium tetrachloride solution samples: back titration, replacement titration and gravimetric analysis. The results show that the back titration method has several advantages, for example relatively simple operation, easy judgment of the titration end point, and better accuracy and precision of the analytical results, with a relative standard deviation of no more than 0.2%. It is therefore the most suitable of the conventional analysis methods for routine use in mass production.

  6. An Auto-Calibrating Knee Flexion-Extension Axis Estimator Using Principal Component Analysis with Inertial Sensors.

    PubMed

    McGrath, Timothy; Fineman, Richard; Stirling, Leia

    2018-06-08

    Inertial measurement units (IMUs) have been demonstrated to reliably measure human joint angles—an essential quantity in the study of biomechanics. However, most previous literature proposed IMU-based joint angle measurement systems that required manual alignment or prescribed calibration motions. This paper presents a simple, physically-intuitive method for IMU-based measurement of the knee flexion/extension angle in gait without requiring alignment or discrete calibration, based on computationally-efficient and easy-to-implement Principle Component Analysis (PCA). The method is compared against an optical motion capture knee flexion/extension angle modeled through OpenSim. The method is evaluated using both measured and simulated IMU data in an observational study ( n = 15) with an absolute root-mean-square-error (RMSE) of 9.24∘ and a zero-mean RMSE of 3.49∘. Variation in error across subjects was found, made emergent by the larger subject population than previous literature considers. Finally, the paper presents an explanatory model of RMSE on IMU mounting location. The observational data suggest that RMSE of the method is a function of thigh IMU perturbation and axis estimation quality. However, the effect size for these parameters is small in comparison to potential gains from improved IMU orientation estimations. Results also highlight the need to set relevant datums from which to interpret joint angles for both truth references and estimated data.
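
    The core idea, identifying the dominant rotation axis from gyroscope data with PCA, can be sketched as follows. The snippet is a schematic illustration under simplifying assumptions (synthetic gait-like angular rates, angle obtained by integrating the rate difference about the estimated axes); it is not the authors' exact pipeline, and the signal shapes and sampling rate are hypothetical.

        import numpy as np

        def dominant_axis(gyro):
            """First principal component of 3-axis angular-rate samples (N x 3, rad/s)."""
            centered = gyro - gyro.mean(axis=0)
            eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
            axis = eigvecs[:, np.argmax(eigvals)]          # direction of largest variance
            return axis / np.linalg.norm(axis)

        fs = 100.0                                         # Hz, assumed sampling rate
        rng = np.random.default_rng(0)
        t = np.arange(0, 10, 1 / fs)
        # Synthetic angular rates with one dominant rotation axis per segment (illustrative only)
        thigh_gyro = np.outer(1.5 * np.sin(2 * np.pi * t), [0.10, 0.98, 0.15]) \
                     + 0.05 * rng.standard_normal((t.size, 3))
        shank_gyro = np.outer(3.0 * np.sin(2 * np.pi * t + 0.5), [0.05, 0.99, 0.10]) \
                     + 0.05 * rng.standard_normal((t.size, 3))

        thigh_axis = dominant_axis(thigh_gyro)
        shank_axis = dominant_axis(shank_gyro)

        # Knee flexion/extension rate = difference of the rates projected on each segment's axis
        flexion_rate = shank_gyro @ shank_axis - thigh_gyro @ thigh_axis
        knee_angle = np.cumsum(flexion_rate) / fs * 180.0 / np.pi   # degrees, up to an offset (datum)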

  7. Simulation model calibration and validation : phase II : development of implementation handbook and short course.

    DOT National Transportation Integrated Search

    2006-01-01

    A previous study developed a procedure for microscopic simulation model calibration and validation and evaluated the procedure via two relatively simple case studies using three microscopic simulation models. Results showed that default parameters we...

  8. A simple topography-driven, calibration-free runoff generation model

    NASA Astrophysics Data System (ADS)

    Gao, H.; Birkel, C.; Hrachowitz, M.; Tetzlaff, D.; Soulsby, C.; Savenije, H. H. G.

    2017-12-01

    Determining the amount of runoff generated from rainfall occupies a central place in rainfall-runoff modelling. Moreover, reading landscapes and developing calibration-free runoff generation models that adequately reflect land surface heterogeneities remains the focus of much hydrological research. In this study, we created a new method to estimate runoff generation, the HAND-based Storage Capacity curve (HSC), which uses a topographic index (HAND, Height Above the Nearest Drainage) to identify hydrological similarity and the partially saturated areas of catchments. We then coupled the HSC model with the Mass Curve Technique (MCT) to estimate root zone storage capacity (SuMax), obtaining the calibration-free runoff generation model HSC-MCT. Both models (HSC and HSC-MCT) allow us to estimate runoff generation and simultaneously visualize the spatial dynamics of the saturated area. We tested the two models in the data-rich Bruntland Burn (BB) experimental catchment in Scotland, which has a rare time series of field-mapped saturated area extent. The models were subsequently tested in 323 MOPEX (Model Parameter Estimation Experiment) catchments in the United States, with HBV and TOPMODEL used as benchmarks. We found that the HSC performed better than TOPMODEL, which is based on the topographic wetness index (TWI), in reproducing the spatio-temporal pattern of the observed saturated areas in the BB catchment. The HSC also outperformed HBV and TOPMODEL in the MOPEX catchments for both calibration and validation. Despite having no calibrated parameters, the HSC-MCT model performed comparably well with the calibrated HBV and TOPMODEL, highlighting both the robustness of the HSC model in describing the spatial distribution of the root zone storage capacity and the efficiency of the MCT method in estimating SuMax. Moreover, the HSC-MCT model facilitated effective visualization of the saturated area, which has the potential to be used in broader geoscience studies beyond hydrology.

  9. Thickness Gauging of Single-Layer Conductive Materials with Two-Point Non Linear Calibration Algorithm

    NASA Technical Reports Server (NTRS)

    Fulton, James P. (Inventor); Namkung, Min (Inventor); Simpson, John W. (Inventor); Wincheski, Russell A. (Inventor); Nath, Shridhar C. (Inventor)

    1998-01-01

    A thickness gauging instrument uses a flux-focusing eddy current probe and a two-point nonlinear calibration algorithm. The instrument is small and portable due to the simple interpretation and operational characteristics of the probe. A nonlinear interpolation scheme incorporated into the instrument enables a user to make highly accurate thickness measurements over a fairly wide calibration range from a single side of nonferromagnetic conductive metals. The instrument is very easy to use and can be calibrated quickly.
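
    As an illustration of a two-point nonlinear calibration, the sketch below assumes a simple exponential relationship between probe signal and thickness, so that two reference samples fix both model parameters. The functional form and the numbers are assumptions for illustration only, not the instrument's actual response model.

        import numpy as np

        # Assumed response model: V(t) = A * exp(-t / k), with thickness t in mm.
        # Two calibration points (thickness, probe voltage) -- hypothetical values.
        t1, v1 = 1.0, 2.20
        t2, v2 = 4.0, 0.95

        k = (t2 - t1) / np.log(v1 / v2)      # solve the two-point system for k
        A = v1 * np.exp(t1 / k)              # ...and for A

        def thickness_from_voltage(v):
            """Invert the calibrated model to estimate thickness (mm) from probe voltage."""
            return k * np.log(A / v)

        print(thickness_from_voltage(1.5))   # interpolated thickness between the two references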

  10. Comparative artificial neural network and partial least squares models for analysis of Metronidazole, Diloxanide, Spiramycin and Cliquinol in pharmaceutical preparations.

    PubMed

    Elkhoudary, Mahmoud M; Abdel Salam, Randa A; Hadad, Ghada M

    2014-09-15

    Metronidazole (MNZ) is a widely used antibacterial and amoebicide drug. It is therefore important to develop a rapid and specific analytical method for the determination of MNZ in mixtures with Spiramycin (SPY), Diloxanide (DIX) and Cliquinol (CLQ) in pharmaceutical preparations. This work describes six simple, sensitive and reliable multivariate calibration methods, namely linear and nonlinear artificial neural networks preceded by a genetic algorithm (GA-ANN) or principal component analysis (PCA-ANN), as well as partial least squares (PLS) either alone or preceded by a genetic algorithm (GA-PLS), for the UV spectrophotometric determination of MNZ, SPY, DIX and CLQ in pharmaceutical preparations with no interference from pharmaceutical additives. The results illustrate the problem of nonlinearity and how models such as ANN can handle it. The analytical performance of these methods was statistically validated with respect to linearity, accuracy, precision and specificity. The developed methods demonstrate the ability of these multivariate calibration models to resolve the UV spectra of the four-component mixtures using a simple and widely available UV spectrophotometer. Copyright © 2014 Elsevier B.V. All rights reserved.
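
    A minimal sketch of the PLS part of such a multivariate calibration is shown below, using scikit-learn on synthetic UV spectra. The spectra, component count and concentration ranges are invented for illustration and are not the paper's data or settings.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(1)
        wavelengths = np.linspace(200, 400, 201)

        # Hypothetical pure-component spectra (Gaussian bands) for a four-component mixture
        def band(center, width):
            return np.exp(-((wavelengths - center) / width) ** 2)
        pure = np.vstack([band(250, 15), band(280, 20), band(310, 12), band(340, 18)])

        # Synthetic calibration set: random concentrations, mixture spectra = C @ pure + noise
        C_train = rng.uniform(5, 40, size=(25, 4))
        X_train = C_train @ pure + 0.01 * rng.standard_normal((25, wavelengths.size))

        pls = PLSRegression(n_components=4)
        pls.fit(X_train, C_train)

        # Predict concentrations for a new synthetic mixture
        c_true = np.array([[20.0, 10.0, 30.0, 15.0]])
        x_new = c_true @ pure + 0.01 * rng.standard_normal((1, wavelengths.size))
        print(pls.predict(x_new))   # should be close to c_true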

  11. Comparative artificial neural network and partial least squares models for analysis of Metronidazole, Diloxanide, Spiramycin and Cliquinol in pharmaceutical preparations

    NASA Astrophysics Data System (ADS)

    Elkhoudary, Mahmoud M.; Abdel Salam, Randa A.; Hadad, Ghada M.

    2014-09-01

    Metronidazole (MNZ) is a widely used antibacterial and amoebicide drug. It is therefore important to develop a rapid and specific analytical method for the determination of MNZ in mixtures with Spiramycin (SPY), Diloxanide (DIX) and Cliquinol (CLQ) in pharmaceutical preparations. This work describes six simple, sensitive and reliable multivariate calibration methods, namely linear and nonlinear artificial neural networks preceded by a genetic algorithm (GA-ANN) or principal component analysis (PCA-ANN), as well as partial least squares (PLS) either alone or preceded by a genetic algorithm (GA-PLS), for the UV spectrophotometric determination of MNZ, SPY, DIX and CLQ in pharmaceutical preparations with no interference from pharmaceutical additives. The results illustrate the problem of nonlinearity and how models such as ANN can handle it. The analytical performance of these methods was statistically validated with respect to linearity, accuracy, precision and specificity. The developed methods demonstrate the ability of these multivariate calibration models to resolve the UV spectra of the four-component mixtures using a simple and widely available UV spectrophotometer.

  12. Comparison of the calibration of ionospheric delay in VLBI data by the methods of dual frequency and Faraday rotation

    NASA Technical Reports Server (NTRS)

    Scheid, J. A.

    1985-01-01

    When both S-band and X-band data are recorded for a signal which has passed through the ionosphere, it is possible to calculate the ionospheric contribution to signal delay. In Very Long Baseline Interferometry (VLBI) this method is used to calibrate the ionosphere. In the absence of dual frequency data, the ionospheric content measured by Faraday rotation, using a signal from a geostationary satellite, is mapped to the VLBI observing direction. The purpose here is to compare the ionospheric delay obtained by these two methods. The principal conclusions are: (1) the correlation between delays obtained by these two methods is weak; (2) in mapping Faraday rotation measurements to the VLBI observing direction, a simple mapping algorithm which accounts only for changes in hour angle and elevation angle is better than a more elaborate algorithm which includes solar and geomagnetic effects; (3) fluctuations in the difference in total electron content as seen by two antennas defining a baseline limit the application of Faraday rotation data to VLBI.
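
    The dual-frequency correction rests on the frequency dependence of the ionospheric group delay, delay ≈ 40.3·TEC/(c·f²). A small sketch, with made-up delay values and nominal S/X band frequencies chosen only for illustration:

        # Ionospheric group delay: tau(f) = 40.3 * TEC / (c * f**2)  (tau in s, TEC in electrons/m^2)
        c = 2.998e8               # m/s
        f_s, f_x = 2.3e9, 8.4e9   # nominal S-band and X-band frequencies, Hz

        # Hypothetical ionospheric contributions to the group delay at the two bands (seconds)
        tau_s, tau_x = 12.0e-9, 0.9e-9

        # Solve the two-frequency system for TEC, then for the delay at either band
        tec = c * (tau_s - tau_x) / (40.3 * (1.0 / f_s**2 - 1.0 / f_x**2))
        tau_s_iono = 40.3 * tec / (c * f_s**2)
        print(f"TEC = {tec:.3e} el/m^2, S-band ionospheric delay = {tau_s_iono * 1e9:.2f} ns")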

  13. A porphyrin-based fluorescence method for zinc determination in commercial propolis extracts without sample pretreatment.

    PubMed

    Pierini, Gastón Darío; Pinto, Victor Hugo A; Maia, Clarissa G C; Fragoso, Wallace D; Reboucas, Julio S; Centurión, María Eugenia; Pistonesi, Marcelo Fabián; Di Nezio, María Susana

    2017-11-01

    The quantification of zinc in over-the-counter drugs such as commercial propolis extracts by a molecular fluorescence technique using meso-tetrakis(4-carboxyphenyl)porphyrin (H2TCPP) was developed for the first time. The calibration curve is linear from 6.60 to 100 nmol L−1 of Zn2+. The detection and quantification limits were 6.22 nmol L−1 and 19.0 nmol L−1, respectively. The reproducibility and repeatability, calculated as the percentage variation of the slopes of seven calibration curves, were 6.75% and 4.61%, respectively. Commercial propolis extract samples from four Brazilian states were analyzed, and the results (0.329-0.797 mg/100 mL) obtained with this method are in good agreement with those obtained with the Atomic Absorption Spectroscopy (AAS) technique. The method is simple, fast and of low cost, and allows the analysis of the samples without pretreatment. Moreover, a major advantage is that the Zn-porphyrin complex is fluorescent, which enhances the selectivity and sensitivity of the method. Copyright © 2017 John Wiley & Sons, Ltd.
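
    Detection and quantification limits of this kind are commonly estimated from the calibration slope and the blank standard deviation (for example, the ICH convention LOD ≈ 3.3·σ/S, LOQ ≈ 10·σ/S). A short sketch with invented fluorescence readings, without implying this is the exact convention the authors used:

        import numpy as np

        # Hypothetical calibration data: Zn2+ concentration (nmol/L) vs fluorescence intensity (a.u.)
        conc = np.array([6.6, 20.0, 40.0, 60.0, 80.0, 100.0])
        signal = np.array([105.0, 298.0, 601.0, 905.0, 1195.0, 1502.0])

        slope, intercept = np.polyfit(conc, signal, 1)   # univariate linear calibration
        sigma_blank = 30.0                               # assumed SD of repeated blank readings (a.u.)

        lod = 3.3 * sigma_blank / slope                  # ICH-style estimates
        loq = 10.0 * sigma_blank / slope
        print(f"slope = {slope:.2f} a.u. per nmol/L, LOD ~ {lod:.1f} nmol/L, LOQ ~ {loq:.1f} nmol/L")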

  14. Calibration and diagnostic accuracy of simple flotation, McMaster and FLOTAC for parasite egg counts in sheep.

    PubMed

    Rinaldi, L; Coles, G C; Maurelli, M P; Musella, V; Cringoli, G

    2011-05-11

    The present study was aimed at carrying out a calibration and a comparison of diagnostic accuracy of three faecal egg count (FEC) techniques, simple flotation, McMaster and FLOTAC, in order to find the best flotation solution (FS) for Dicrocoelium dendriticum, Moniezia expansa and gastrointestinal (GI) strongyle eggs, and to evaluate the influence of faecal preservation methods combined with FS on egg counts. Simple flotation failed to give satisfactory results with any samples. Overall, FLOTAC resulted in similar or higher eggs per gram of faeces (EPG) and a lower coefficient of variation (CV) than McMaster. The "gold standard" for D. dendriticum was obtained with FLOTAC when using FS7 (EPG=219, CV=3.9%) and FS8 (EPG=226, CV=5.2%) on fresh faeces. The "gold standard" for M. expansa was obtained with FLOTAC, using FS3 (EPG=122, CV=4.1%) on fresh faeces. The "gold standard" for GI strongyles was obtained with FLOTAC when using FS5 (EPG=320, CV=4%) and FS2 (EPG=298, CV=5%). As regards faecal preservation methods, formalin 5% and 10% or freezing showed performance similar to fresh faeces for eggs of D. dendriticum and M. expansa. However, these methods of preservation were not as successful with GI strongyle eggs. Vacuum packing with storage at +4°C permitted storage of GI strongyle eggs for up to 21 days prior to counting. Where accurate egg counts are required in ovine samples, the optimum method of counting is the use of FLOTAC. In addition, we suggest the use of two solutions that are easy and cheap to purchase and prepare, saturated sodium chloride (FS2) for nematoda and cestoda eggs and saturated zinc sulphate (FS7) for trematoda eggs and nematoda larvae. Copyright © 2010 Elsevier B.V. All rights reserved.

  15. Development of a simple system for simultaneously measuring 6DOF geometric motion errors of a linear guide.

    PubMed

    Qibo, Feng; Bin, Zhang; Cunxing, Cui; Cuifang, Kuang; Yusheng, Zhai; Fenglin, You

    2013-11-04

    A simple method for simultaneously measuring the 6DOF geometric motion errors of the linear guide was proposed. The mechanisms for measuring straightness and angular errors and for enhancing their resolution are described in detail. A common-path method for measuring the laser beam drift was proposed and it was used to compensate the errors produced by the laser beam drift in the 6DOF geometric error measurements. A compact 6DOF system was built. Calibration experiments with certain standard measurement meters showed that our system has a standard deviation of 0.5 µm in a range of ± 100 µm for the straightness measurements, and standard deviations of 0.5", 0.5", and 1.0" in the range of ± 100" for pitch, yaw, and roll measurements, respectively.

  16. A SENSITIVE METHOD FOR THE DETERMINATION OF CARBOXYHAEMOGLOBIN IN A FINGER PRICK SAMPLE OF BLOOD

    PubMed Central

    Commins, B. T.; Lawther, P. J.

    1965-01-01

    About 0·01 ml. of blood taken from a finger prick is dissolved in 10 ml. of 0·04% ammonia solution. The solution is divided into two halves, and oxygen is bubbled through one half to convert any carboxyhaemoglobin into oxyhaemoglobin. The spectra of the two halves are then compared in a spectrophotometer, and the difference between them is used to estimate the carboxyhaemoglobin content of the blood either graphically or by calculation from a simple formula. Calibration is simple and need only be done once. A sample of blood can be analysed in about 20 minutes, which includes the time to collect the sample. The method is sensitive enough to be used for the analysis of solutions of blood containing less than 1% carboxyhaemoglobin. PMID:14278801

  17. Simple and sensitive analysis of blonanserin and blonanserin C in human plasma by liquid chromatography tandem mass spectrometry and its application.

    PubMed

    Zheng, Yunliang; Hu, Xingjiang; Liu, Jian; Wu, Guolan; Zhou, Huili; Zhu, Meixiang; Zhai, You; Wu, Lihua; Shentu, Jianzhong

    2014-01-01

    A highly sensitive, simple, and rapid liquid chromatography tandem mass spectrometry method to simultaneously determine blonanserin and blonanserin C in human plasma with AD-5332 as internal standard (IS) was established. A simple direct protein precipitation method was used for the sample pretreatment, and chromatographic separation was performed on a Waters XBridge C8 (4.6 × 150 mm, 3.5 μm) column. The mobile phase consists of a mixture of 10 mM ammonium formate and 0.1% formic acid in water (A) and 0.1% formic acid in methanol (B). To quantify blonanserin, blonanserin C, and IS, multiple reaction monitoring (MRM) was performed in positive ESI mode. The calibration curve was linear in the concentration range of 0.012-5.78 ng·mL−1 for blonanserin and 0.023-11.57 ng·mL−1 for blonanserin C (r² > 0.9990). The intra- and interday precision of three quality control (QC) levels in plasma were less than 7.5%. Finally, the current simple, sensitive, and accurate LC-MS/MS method was successfully applied to investigate the pharmacokinetics of blonanserin and blonanserin C in healthy Chinese volunteers.

  18. Simple and Sensitive Analysis of Blonanserin and Blonanserin C in Human Plasma by Liquid Chromatography Tandem Mass Spectrometry and Its Application

    PubMed Central

    Zheng, Yunliang; Hu, Xingjiang; Liu, Jian; Wu, Guolan; Zhou, Huili; Zhu, Meixiang; Zhai, You; Wu, Lihua; ShenTu, Jianzhong

    2014-01-01

    A highly sensitive, simple, and rapid liquid chromatography tandem mass spectrometry method to simultaneously determine blonanserin and blonanserin C in human plasma with AD-5332 as internal standard (IS) was established. A simple direct protein precipitation method was used for the sample pretreatment, and chromatographic separation was performed on a Waters XBridge C8 (4.6 × 150 mm, 3.5 μm) column. The mobile phase consists of a mixture of 10 mM ammonium formate and 0.1% formic acid in water (A) and 0.1% formic acid in methanol (B). To quantify blonanserin, blonanserin C, and IS, multiple reaction monitoring (MRM) was performed in positive ESI mode. The calibration curve was linear in the concentration range of 0.012–5.78 ng·mL−1 for blonanserin and 0.023–11.57 ng·mL−1 for blonanserin C (r 2 > 0.9990). The intra- and interday precision of three quality control (QC) levels in plasma were less than 7.5%. Finally, the current simple, sensitive, and accurate LC-MS/MS method was successfully applied to investigate the pharmacokinetics of blonanserin and blonanserin C in healthy Chinese volunteers. PMID:24678425

  19. FlowCal: A user-friendly, open source software tool for automatically converting flow cytometry data from arbitrary to calibrated units

    PubMed Central

    Castillo-Hair, Sebastian M.; Sexton, John T.; Landry, Brian P.; Olson, Evan J.; Igoshin, Oleg A.; Tabor, Jeffrey J.

    2017-01-01

    Flow cytometry is widely used to measure gene expression and other molecular biological processes with single cell resolution via fluorescent probes. Flow cytometers output data in arbitrary units (a.u.) that vary with the probe, instrument, and settings. Arbitrary units can be converted to the calibrated unit molecules of equivalent fluorophore (MEF) using commercially available calibration particles. However, there is no convenient, non-proprietary tool available to perform this calibration. Consequently, most researchers report data in a.u., limiting interpretation. Here, we report a software tool named FlowCal to overcome current limitations. FlowCal can be run using an intuitive Microsoft Excel interface, or customizable Python scripts. The software accepts Flow Cytometry Standard (FCS) files as inputs and is compatible with different calibration particles, fluorescent probes, and cell types. Additionally, FlowCal automatically gates data, calculates common statistics, and produces publication quality plots. We validate FlowCal by calibrating a.u. measurements of E. coli expressing superfolder GFP (sfGFP) collected at 10 different detector sensitivity (gain) settings to a single MEF value. Additionally, we reduce day-to-day variability in replicate E. coli sfGFP expression measurements due to instrument drift by 33%, and calibrate S. cerevisiae mVenus expression data to MEF units. Finally, we demonstrate a simple method for using FlowCal to calibrate fluorescence units across different cytometers. FlowCal should ease the quantitative analysis of flow cytometry data within and across laboratories and facilitate the adoption of standard fluorescence units in synthetic biology and beyond. PMID:27110723
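
    The underlying a.u.-to-MEF conversion is conceptually simple: identify the fluorescence peaks of the calibration beads, fit a standards curve against their manufacturer-assigned MEF values, and apply that curve to the cell data. The sketch below illustrates the idea generically with a log-log linear fit; it is not FlowCal's actual API or calibration model, and the bead values are hypothetical.

        import numpy as np

        # Hypothetical bead peak positions (arbitrary units) and manufacturer-assigned MEF values
        bead_au  = np.array([120.0, 410.0, 1500.0, 5200.0, 18000.0, 61000.0])
        bead_mef = np.array([800.0, 2900.0, 10500.0, 37000.0, 128000.0, 440000.0])

        # Standards curve: assume a straight line in log-log space (a common simplification)
        slope, intercept = np.polyfit(np.log10(bead_au), np.log10(bead_mef), 1)

        def au_to_mef(au):
            """Convert arbitrary-unit fluorescence to MEF using the fitted standards curve."""
            return 10 ** (intercept + slope * np.log10(au))

        cell_au = np.array([350.0, 900.0, 2700.0])   # example cell measurements in a.u.
        print(au_to_mef(cell_au))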

  20. Stoichiometric determination of moisture in edible oils by Mid-FTIR spectroscopy.

    PubMed

    van de Voort, F R; Tavassoli-Kafrani, M H; Curtis, J M

    2016-04-28

    A simple and accurate method for the determination of moisture in edible oils by differential FTIR spectroscopy has been devised based on the stoichiometric reaction of the moisture in oil with toluenesulfonyl isocyanate (TSI) to produce CO2. Calibration standards were devised by gravimetrically spiking dry dioxane with water, followed by the addition of neat TSI and examination of the differential spectra relative to the dry dioxane. In the method, CO2 peak area changes are measured at 2335 cm−1 and were shown to be related to the amount of moisture added, with any CO2 arising from residual moisture in the dry dioxane ratioed out. CO2 volatility issues were determined to be minimal, with the overall SD of dioxane calibrations being ∼18 ppm over a range of 0-1000 ppm. Gravimetrically blended dry and water-saturated oils analysed in a similar manner produced linear CO2 responses with SDs of <15 ppm on average. One set of dry-wet blends was analysed in duplicate by FTIR and by two independent laboratories using coulometric Karl Fischer (KF) procedures. All three methods produced highly linear moisture relationships with SDs of 7, 16 and 28 ppm, respectively, over a range of 200-1500 ppm. Although the absolute moisture values obtained by each method did not exactly coincide, each tracked the expected moisture changes proportionately. The FTIR TSI-H2O method provides a simple and accurate instrumental means of determining moisture in oils rivaling the accuracy and specificity of standard KF procedures and has the potential to be automated. It could also be applied to other hydrophobic matrices and possibly evolve into a more generalized method, if combined with polar aprotic solvent extraction. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Improvement in QEPAS system utilizing a second harmonic based wavelength calibration technique

    NASA Astrophysics Data System (ADS)

    Zhang, Qinduan; Chang, Jun; Wang, Fupeng; Wang, Zongliang; Xie, Yulei; Gong, Weihua

    2018-05-01

    A simple laser wavelength calibration technique based on the second harmonic signal is demonstrated in this paper to improve the performance of a quartz-enhanced photoacoustic spectroscopy (QEPAS) gas sensing system, e.g. its signal-to-noise ratio (SNR), detection limit and long-term stability. A constant current corresponding to the gas absorption line, combined with a sinusoidal signal at frequency f/2, is used to drive the laser (constant driving mode), and a software-based real-time wavelength calibration technique is developed to eliminate the wavelength drift caused by ambient fluctuations. Compared to conventional wavelength modulation spectroscopy (WMS), this method allows a lower filtering bandwidth and an averaging algorithm to be applied to the QEPAS system, improving the SNR and detection limit. In addition, the real-time wavelength calibration technique guarantees that the laser output remains modulated at the gas absorption line. Water vapor is chosen as the target gas to evaluate the performance of the technique against the constant driving mode and a conventional WMS system. The water vapor sensor was made insensitive to incoherent external acoustic noise by the numerical averaging technique. As a result, the SNR of the wavelength-calibrated system is 12.87 times that of the conventional WMS system. The new system achieved a better linear response (R² = 0.9995) in the concentration range from 300 to 2000 ppmv and a minimum detection limit (MDL) of 630 ppbv.

  2. Connections between survey calibration estimators and semiparametric models for incomplete data

    PubMed Central

    Lumley, Thomas; Shaw, Pamela A.; Dai, James Y.

    2012-01-01

    Survey calibration (or generalized raking) estimators are a standard approach to the use of auxiliary information in survey sampling, improving on the simple Horvitz–Thompson estimator. In this paper we relate the survey calibration estimators to the semiparametric incomplete-data estimators of Robins and coworkers, and to adjustment for baseline variables in a randomized trial. The development based on calibration estimators explains the ‘estimated weights’ paradox and provides useful heuristics for constructing practical estimators. We present some examples of using calibration to gain precision without making additional modelling assumptions in a variety of regression models. PMID:23833390
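
    In its simplest form, calibration rescales the design weights so that weighted totals of an auxiliary variable match known population totals; with a single categorical auxiliary this reduces to post-stratification. A small numerical sketch with invented toy numbers, not from the paper:

        import numpy as np

        # Toy sample: design weights and an auxiliary category (0 or 1) for each unit
        d = np.array([10.0, 10.0, 12.0, 8.0, 15.0, 9.0])   # Horvitz-Thompson design weights
        group = np.array([0, 0, 1, 1, 1, 0])                # auxiliary category per unit
        y = np.array([3.2, 4.1, 5.5, 6.0, 5.8, 2.9])        # study variable

        known_totals = {0: 30.0, 1: 34.0}                    # known population counts per category

        # Calibrate: scale the weights within each category to hit the known totals
        w = d.copy()
        for g, total in known_totals.items():
            mask = group == g
            w[mask] *= total / d[mask].sum()

        print("HT estimate of total y:        ", (d * y).sum())
        print("Calibrated estimate of total y:", (w * y).sum())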

  3. Linear modeling of the soil-water partition coefficient normalized to organic carbon content by reversed-phase thin-layer chromatography.

    PubMed

    Andrić, Filip; Šegan, Sandra; Dramićanin, Aleksandra; Majstorović, Helena; Milojković-Opsenica, Dušanka

    2016-08-05

    The soil-water partition coefficient normalized to the organic carbon content (KOC) is one of the crucial properties influencing the fate of organic compounds in the environment. Chromatographic methods are a well-established alternative to the direct sorption techniques used for KOC determination. The present work proposes reversed-phase thin-layer chromatography (RP-TLC) as a simpler yet equally accurate alternative to the officially recommended HPLC technique. Several TLC systems were studied, including octadecyl- (RP18) and cyano- (CN) modified silica layers in combination with methanol-water and acetonitrile-water mixtures as mobile phases. In total, 50 compounds of different molecular shapes, sizes, and abilities to establish specific interactions were selected (phenols, benzodiazepines, triazine herbicides, and polyaromatic hydrocarbons). A calibration set of 29 compounds with known logKOC values determined by sorption experiments was used to build simple univariate calibrations as well as Principal Component Regression (PCR) and Partial Least Squares (PLS) models between logKOC and TLC retention parameters. The models exhibited good statistical performance, indicating that CN layers contribute better to logKOC modeling than RP18 silica. The most promising TLC methods, the officially recommended HPLC method, and four in silico estimation approaches were compared by the non-parametric Sum of Ranking Differences (SRD) approach. The best estimations of logKOC values were achieved by simple univariate calibration of TLC retention data involving CN-silica layers and a moderate content of methanol (40-50% v/v); they ranked far better than the officially recommended HPLC method, which was ranked in the middle. The worst estimates were obtained from in silico computations based on the octanol-water partition coefficient. A Linear Solvation Energy Relationship study revealed that the higher polarity of CN layers relative to RP18, in combination with methanol-water mixtures, is the key to better modeling of logKOC, through a significant reduction of the dipolar and proton-accepting influence of the mobile phase as well as an enhancement of the excess molar refractivity of the chromatographic systems. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Evaluation of Strain-Life Fatigue Curve Estimation Methods and Their Application to a Direct-Quenched High-Strength Steel

    NASA Astrophysics Data System (ADS)

    Dabiri, M.; Ghafouri, M.; Rohani Raftar, H. R.; Björk, T.

    2018-03-01

    Methods to estimate the strain-life curve, which were divided into three categories: simple approximations, artificial neural network-based approaches and continuum damage mechanics models, were examined, and their accuracy was assessed in strain-life evaluation of a direct-quenched high-strength steel. All the prediction methods claim to be able to perform low-cycle fatigue analysis using available or easily obtainable material properties, thus eliminating the need for costly and time-consuming fatigue tests. Simple approximations were able to estimate the strain-life curve with satisfactory accuracy using only monotonic properties. The tested neural network-based model, although yielding acceptable results for the material in question, was found to be overly sensitive to the data sets used for training and showed an inconsistency in estimation of the fatigue life and fatigue properties. The studied continuum damage-based model was able to produce a curve detecting early stages of crack initiation. This model requires more experimental data for calibration than approaches using simple approximations. As a result of the different theories underlying the analyzed methods, the different approaches have different strengths and weaknesses. However, it was found that the group of parametric equations categorized as simple approximations are the easiest for practical use, with their applicability having already been verified for a broad range of materials.
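
    The strain-life curve that these approaches estimate is conventionally written as the Basquin–Coffin–Manson relation; the simple approximations referenced above estimate its four fatigue parameters from monotonic tensile properties. In the usual notation (strain amplitude versus reversals to failure):

        \varepsilon_a = \frac{\Delta\varepsilon}{2} = \frac{\sigma_f'}{E}\,(2N_f)^{b} + \varepsilon_f'\,(2N_f)^{c}

    where σ′f and b are the fatigue strength coefficient and exponent, ε′f and c are the fatigue ductility coefficient and exponent, and E is the elastic modulus.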

  5. Plastic scintillator block as photon beam monitor for EGRET calibration

    NASA Technical Reports Server (NTRS)

    Lin, Y. C.; Hofstadter, R.; Nolan, P. L.; Walker, A. H.; Mattox, J. R.; Hughes, E. B.

    1991-01-01

    The EGRET (Energetic Gamma Ray Experiment Telescope) detector has been calibrated at SLAC (Stanford Linear Accelerator) and, to a lesser degree, at the MIT Bates Linear Accelerator Center. To monitor the photon beams for the calibration, a plastic scintillator block, 5 cm x 5 cm in cross section, 15 cm in length, and viewed by a single photomultiplier tube, was used for the entire beam energy range of 15 MeV to 10 GeV. The design, operation, and method of analysis of the beam intensity are presented. A mathematical framework has been developed to treat the general case of a beam with multiphoton beam pulses and with a background component. A procedure to deal with the fluctuations of the beam intensity over a data-taking period was also developed. The photon beam monitor is physically sturdy, electronically steady, simple to construct, and easy to operate. Its major merits lie in its sheer simplicity of construction and operation and in the wide energy range it can cover.

  6. Calibration system for radon EEC measurements.

    PubMed

    Mostafa, Y A M; Vasyanovich, M; Zhukovsky, M; Zaitceva, N

    2015-06-01

    The measurement of the radon equivalent equilibrium concentration (EECRn) is a very simple and quick technique for estimating radon progeny levels in dwellings or workplaces. The most typical methods of EECRn measurement are alpha radiometry and alpha spectrometry. In such techniques, the influence of alpha particle absorption in the filters and the filter effectiveness should be taken into account. In the authors' work, it is demonstrated that a more precise and less complicated calibration of EECRn-measuring equipment can be conducted by using a gamma spectrometer as a reference measuring device. It was demonstrated that for this calibration technique the systematic error does not exceed 3%. The random error of 214Bi activity measurements is in the range 3-6%. In general, both of these errors can be decreased. The measurements of EECRn by gamma spectrometry and by improved alpha radiometry are in good agreement, but a systematic shift between average values can be observed. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  7. Fast T Wave Detection Calibrated by Clinical Knowledge with Annotation of P and T Waves.

    PubMed

    Elgendi, Mohamed; Eskofier, Bjoern; Abbott, Derek

    2015-07-21

    There are limited studies on the automatic detection of T waves in arrhythmic electrocardiogram (ECG) signals. This is perhaps because there is no available arrhythmia dataset with annotated T waves. There is a growing need to develop numerically-efficient algorithms that can accommodate the new trend of battery-driven ECG devices. Moreover, there is also a need to analyze long-term recorded signals in a reliable and time-efficient manner, therefore improving the diagnostic ability of mobile devices and point-of-care technologies. Here, the T wave annotation of the well-known MIT-BIH arrhythmia database is discussed and provided. Moreover, a simple fast method for detecting T waves is introduced. A typical T wave detection method has been reduced to a basic approach consisting of two moving averages and dynamic thresholds. The dynamic thresholds were calibrated using four clinically known types of sinus node response to atrial premature depolarization (compensation, reset, interpolation, and reentry). The determination of T wave peaks is performed and the proposed algorithm is evaluated on two well-known databases, the QT and MIT-BIH Arrhythmia databases. The detector obtained a sensitivity of 97.14% and a positive predictivity of 99.29% over the first lead of the validation databases (total of 221,186 beats). We present a simple yet very reliable T wave detection algorithm that can be potentially implemented on mobile battery-driven devices. In contrast to complex methods, it can be easily implemented in a digital filter design.
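
    The two-moving-average idea can be sketched as follows: one short average tracks the width of a wave of interest, one longer average tracks the local baseline over a beat, and samples where the short average exceeds the longer one form candidate blocks whose maxima are taken as peaks. The window lengths and the synthetic signal below are assumptions for illustration, not the calibrated values from the paper; in the paper the QRS complexes are handled separately, whereas this sketch reports all candidate blocks.

        import numpy as np

        def moving_average(x, w):
            """Centered moving average with window length w (samples)."""
            return np.convolve(x, np.ones(w) / w, mode="same")

        fs = 250                                   # Hz, assumed sampling rate
        t = np.arange(0, 4, 1 / fs)
        # Synthetic ECG-like signal: narrow R spikes plus broader T bumps (illustrative only)
        ecg = np.zeros_like(t)
        for beat in np.arange(0.4, 4.0, 0.8):
            ecg += 1.0 * np.exp(-((t - beat) / 0.012) ** 2)           # R wave
            ecg += 0.3 * np.exp(-((t - beat - 0.25) / 0.05) ** 2)     # T wave

        feature = np.abs(ecg)                                  # simple amplitude feature
        ma_event = moving_average(feature, int(0.08 * fs))     # ~ T wave duration
        ma_cycle = moving_average(feature, int(0.40 * fs))     # ~ fraction of the cardiac cycle

        blocks = ma_event > ma_cycle                           # candidate regions of interest
        idx = np.flatnonzero(blocks)
        runs = np.split(idx, np.flatnonzero(np.diff(idx) > 1) + 1)
        peaks = [r[np.argmax(feature[r])] for r in runs if r.size > int(0.04 * fs)]
        print(np.array(peaks) / fs)                            # candidate peak times (s)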

  8. Calibrated Bayes Factors Should Not Be Used: A Reply to Hoijtink, van Kooten, and Hulsker.

    PubMed

    Morey, Richard D; Wagenmakers, Eric-Jan; Rouder, Jeffrey N

    2016-01-01

    Hoijtink, van Kooten, and Hulsker (2016) present a method for choosing the prior distribution for an analysis with Bayes factors that is based on controlling error rates, which they advocate as an alternative to our more subjective methods (Morey & Rouder, 2014; Rouder, Speckman, Sun, Morey, & Iverson, 2009; Wagenmakers, Wetzels, Borsboom, & van der Maas, 2011). We show that the method they advocate amounts to a simple significance test, and that the resulting Bayes factors are not interpretable. Additionally, their method fails in common circumstances and has the potential to yield arbitrarily high Type II error rates. After critiquing their method, we outline the position on subjectivity that underlies our advocacy of Bayes factors.

  9. A simplified gross primary production and evapotranspiration model for boreal coniferous forests - is a generic calibration sufficient?

    NASA Astrophysics Data System (ADS)

    Minunno, F.; Peltoniemi, M.; Launiainen, S.; Aurela, M.; Lindroth, A.; Lohila, A.; Mammarella, I.; Minkkinen, K.; Mäkelä, A.

    2015-07-01

    The problem of model complexity has been lively debated in the environmental sciences as well as in the forest modelling community. Simple models are less input demanding and their calibration involves a lower number of parameters, but they might be suitable only at local scale. In this work we calibrated a simplified ecosystem process model (PRELES) to data from multiple sites and tested whether PRELES can be used at regional scale to estimate the carbon and water fluxes of Boreal conifer forests. We compared a multi-site (M-S) calibration with site-specific (S-S) calibrations. Model calibrations and evaluations were carried out by means of Bayesian methods; Bayesian calibration (BC) and Bayesian model comparison (BMC) were used to quantify the uncertainty in model parameters and model structure. To evaluate model performance, BMC results were combined with a more classical analysis of model-data mismatch (M-DM). Evapotranspiration (ET) and gross primary production (GPP) measurements collected at 10 sites in Finland and Sweden were used in the study. Calibration results showed that similar estimates were obtained for the parameters to which model outputs are most sensitive. No significant differences were encountered in the predictions of the multi-site and site-specific versions of PRELES, with the exception of a site with an agricultural history (Alkkia). Although PRELES predicted GPP better than evapotranspiration, we concluded that the model can be reliably used at regional scale to simulate the carbon and water fluxes of Boreal forests. Our analyses also underlined the importance of using long and carefully collected flux datasets in model calibration. In fact, even a single site can provide model calibrations that can be applied at a wider spatial scale, provided it covers a wide range of variability in climatic conditions.

  10. Ratio manipulating spectrophotometry versus chemometry as stability indicating methods for cefquinome sulfate determination.

    PubMed

    Yehia, Ali M; Arafa, Reham M; Abbas, Samah S; Amer, Sawsan M

    2016-01-15

    The spectral resolution of cefquinome sulfate (CFQ) in the presence of its degradation products was studied. Three selective, accurate and rapid spectrophotometric methods were developed for the determination of CFQ in the presence of either its hydrolytic, oxidative or photo-degradation products. The proposed ratio difference, derivative ratio and mean centering methods are ratio-manipulating spectrophotometric methods that were satisfactorily applied for the selective determination of CFQ within a linear range of 5.0-40.0 μg mL−1. Concentration Residuals Augmented Classical Least Squares was applied and evaluated for the determination of the cited drug in the presence of all of its degradation products. Traditional Partial Least Squares regression was also applied and benchmarked against the proposed advanced multivariate calibration. An experimentally designed set of 25 synthetic mixtures of three factors at five levels was used to calibrate and validate the multivariate models. Advanced chemometrics succeeded in the quantitative and qualitative analysis of CFQ along with its hydrolytic, oxidative and photo-degradation products. The proposed methods were applied successfully to the analysis of different pharmaceutical formulations. These developed methods were simple and cost-effective compared with the manufacturer's RP-HPLC method. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. A method to measure internal stray radiation of cryogenic infrared imaging systems under various ambient temperatures

    NASA Astrophysics Data System (ADS)

    Tian, Qijie; Chang, Songtao; Li, Zhou; He, Fengyun; Qiao, Yanfeng

    2017-03-01

    The suppression level of internal stray radiation is a key criterion for infrared imaging systems, especially for high-precision cryogenic infrared imaging systems. To achieve accurate measurement for internal stray radiation of cryogenic infrared imaging systems under various ambient temperatures, a measurement method, which is based on radiometric calibration, is presented in this paper. First of all, the calibration formula is deduced considering the integration time, and the effect of ambient temperature on internal stray radiation is further analyzed in detail. Then, an approach is proposed to measure the internal stray radiation of cryogenic infrared imaging systems under various ambient temperatures. By calibrating the system under two ambient temperatures, the quantitative relation between the internal stray radiation and the ambient temperature can be acquired, and then the internal stray radiation of the cryogenic infrared imaging system under various ambient temperatures can be calculated. Finally, several experiments are performed in a chamber with controllable inside temperatures to evaluate the effectiveness of the proposed method. Experimental results indicate that the proposed method can be used to measure internal stray radiation with high accuracy at various ambient temperatures and integration times. The proposed method has some advantages, such as simple implementation and the capability of high-precision measurement. The measurement results can be used to guide the stray radiation suppression and to test whether the internal stray radiation suppression performance meets the requirement or not.
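
    The two-temperature idea can be sketched numerically: calibrate the detector response against a reference source at each ambient temperature, attribute the residual offset (per unit integration time) to internal stray radiation, and interpolate that offset linearly in ambient temperature. The model, symbols and numbers below are illustrative assumptions, not the paper's calibration formula:

        import numpy as np

        # Assumed linear detector model: DN = t_int * (G * L_target + S(T_amb)) + O,
        # where S(T_amb) is the stray-radiation term we want as a function of ambient temperature.

        def stray_term(dn, t_int, gain, offset, radiance):
            """Back out the stray contribution per unit integration time from one calibration point."""
            return (dn - offset) / t_int - gain * radiance

        gain, offset = 1.0e5, 120.0    # assumed responsivity (DN per radiance unit per s) and fixed offset
        t_int = 0.5                    # s, integration time used for both calibrations
        L_bb = 0.03                    # blackbody radiance (arbitrary units), same for both measurements

        # Hypothetical measured counts at two ambient temperatures for the same blackbody setting
        T_amb = np.array([10.0, 30.0])          # deg C
        dn    = np.array([1800.0, 2100.0])      # DN

        S = stray_term(dn, t_int, gain, offset, L_bb)   # stray term at each ambient temperature
        coeff = np.polyfit(T_amb, S, 1)                 # linear relation S(T_amb)
        print("stray term at 22 degC:", np.polyval(coeff, 22.0))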

  12. MO-D-213-08: Remote Dosimetric Credentialing for Clinical Trials with the Virtual EPID Standard Phantom Audit (VESPA)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehmann, J; University of Sydney, Sydney, NSW; Miri, N

    Purpose: Report on implementation of a Virtual EPID Standard Phantom Audit (VESPA) for IMRT to support credentialing of facilities for clinical trials. Data is acquired by local facility staff and transferred electronically. Analysis is performed centrally. Methods: VESPA is based on published methods and a clinically established IMRT QA procedure, here extended to multi-vendor equipment. Facilities, provided with web-based comprehensive instructions and CT datasets, create IMRT treatment plans. They deliver the treatments directly to their EPID without phantom or couch in the beam. They also deliver a set of simple calibration fields. Collected EPID images are uploaded electronically. In the analysis, the dose is projected back into a virtual phantom and 3D gamma analysis is performed. 2D dose planes and linear dose profiles can be analysed when needed for clarification. Results: Pilot facilities covering a range of planning and delivery systems have performed data acquisition and upload successfully. Analysis showed agreement comparable to local experience with the method. Advantages of VESPA are (1) fast turnaround mainly driven by the facility’s capability to provide the requested EPID images, (2) the possibility of facilities performing the audit in parallel, as there is no need to wait for a phantom, (3) simple and efficient credentialing for international facilities, (4) a large set of data points, and (5) a reduced impact on resources and environment as there is no need to transport heavy phantoms or audit staff. Limitations of the current implementation of VESPA for trials credentialing are that it does not provide absolute dosimetry, so a Level 1 audit is still required, and that it relies on correctly delivered open calibration fields, which are used for system calibration. Conclusion: The implemented EPID-based IMRT audit system promises to dramatically improve credentialing efficiency for clinical trials and wider applications. VESPA for VMAT will follow soon.

  13. A real-time freehand ultrasound calibration system with automatic accuracy feedback and control.

    PubMed

    Chen, Thomas Kuiran; Thurston, Adrian D; Ellis, Randy E; Abolmaesumi, Purang

    2009-01-01

    This article describes a fully automatic, real-time, freehand ultrasound calibration system. The system was designed to be simple and sterilizable, intended for operating-room usage. The calibration system employed an automatic-error-retrieval and accuracy-control mechanism based on a set of ground-truth data. Extensive validations were conducted on a data set of 10,000 images in 50 independent calibration trials to thoroughly investigate the accuracy, robustness, and performance of the calibration system. On average, the calibration accuracy (measured in three-dimensional reconstruction error against a known ground truth) of all 50 trials was 0.66 mm. In addition, the calibration errors converged to submillimeter in 98% of all trials within 12.5 s on average. Overall, the calibration system was able to consistently, efficiently and robustly achieve high calibration accuracy with real-time performance.

  14. Poster - 53: Improving inter-linac DMLC IMRT dose precision by fine tuning of MLC leaf calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakonechny, Keith; Tran, Muoi; Sasaki, David

    Purpose: To develop a method to improve the inter-linac precision of DMLC IMRT dosimetry. Methods: The distance between opposing MLC leaf banks (“gap size”) can be finely tuned on Varian linacs. The dosimetric effect due to small deviations from the nominal gap size (“gap error”) was studied by introducing known errors for several DMLC sliding gap sizes, and for clinical plans based on the TG119 test cases. The plans were delivered on a single Varian linac and the relationship between gap error and the corresponding change in dose was measured. The plans were also delivered on eight Varian 2100 series linacs (at two institutions) in order to quantify the inter-linac variation in dose before and after fine tuning the MLC calibration. Results: The measured dose differences for each field agreed well with the predictions of LoSasso et al. Using the default MLC calibration, the variation in the physical MLC gap size was determined to be less than 0.4 mm between all linacs studied. The dose difference between the linacs with the largest and smallest physical gap was up to 5.4% (spinal cord region of the head and neck TG119 test case). This difference was reduced to 2.5% after fine tuning the MLC gap calibration. Conclusions: The inter-linac dose precision for DMLC IMRT on Varian linacs can be improved using a simple modification of the MLC calibration procedure that involves fine adjustment of the nominal gap size.

  15. Calibration, Monitoring, and Control of Complex Detector Systems

    NASA Astrophysics Data System (ADS)

    Breidenbach, M.

    1981-04-01

    LEP detectors will probably be complex devices with tens of subsystems, some having perhaps tens of thousands of channels. Reasonable design goals for such a detector will include economic use of money and people, rapid and reliable calibration and monitoring of the detector, and simple control and operation of the device. The synchronous operation of an e+e- storage ring, coupled with its relatively low interaction rate, allows the design of simple circuits for time and charge measurements. These circuits, and more importantly, the basic detector channels, can usually be tested and calibrated by signal injection into the detector. Present detectors utilize semi-autonomous controllers which collect such calibration data and calculate statistics as well as control sparse data scans. Straightforward improvements in programming technology should move the entire calibration into these local controllers, so that calibration and testing time will be a constant independent of the number of channels in a system. Considerable programming effort may be saved by emphasizing the similarities of the subsystems, so that the subsystems can be described by a reasonable database and general purpose calibration and test routines can be used. Monitoring of the apparatus will probably continue to be of two classes: "passive" histogramming of channel occupancies and other more complex combinations of the data; and "active" injection of test patterns and calibration signals during a run. The relative importance of active monitoring will increase for the low data rates expected off resonances at high s. Experience at SPEAR and PEP is used to illustrate these approaches.

  16. Fast T Wave Detection Calibrated by Clinical Knowledge with Annotation of P and T Waves

    PubMed Central

    Elgendi, Mohamed; Eskofier, Bjoern; Abbott, Derek

    2015-01-01

    Background There are limited studies on the automatic detection of T waves in arrhythmic electrocardiogram (ECG) signals. This is perhaps because there is no available arrhythmia dataset with annotated T waves. There is a growing need to develop numerically-efficient algorithms that can accommodate the new trend of battery-driven ECG devices. Moreover, there is also a need to analyze long-term recorded signals in a reliable and time-efficient manner, therefore improving the diagnostic ability of mobile devices and point-of-care technologies. Methods Here, the T wave annotation of the well-known MIT-BIH arrhythmia database is discussed and provided. Moreover, a simple fast method for detecting T waves is introduced. A typical T wave detection method has been reduced to a basic approach consisting of two moving averages and dynamic thresholds. The dynamic thresholds were calibrated using four clinically known types of sinus node response to atrial premature depolarization (compensation, reset, interpolation, and reentry). Results The determination of T wave peaks is performed and the proposed algorithm is evaluated on two well-known databases, the QT and MIT-BIH Arrhythmia databases. The detector obtained a sensitivity of 97.14% and a positive predictivity of 99.29% over the first lead of the validation databases (total of 221,186 beats). Conclusions We present a simple yet very reliable T wave detection algorithm that can be potentially implemented on mobile battery-driven devices. In contrast to complex methods, it can be easily implemented in a digital filter design. PMID:26197321

  17. Empirical dual energy calibration (EDEC) for cone-beam computed tomography.

    PubMed

    Stenner, Philip; Berkus, Timo; Kachelriess, Marc

    2007-09-01

    Material-selective imaging using dual energy CT (DECT) relies heavily on well-calibrated material decomposition functions. These require precise knowledge of the detected x-ray spectra, and even if the spectra are exactly known the reliability of DECT will suffer from scattered radiation. We propose an empirical method to determine the proper decomposition function. In contrast to other decomposition algorithms, our empirical dual energy calibration (EDEC) technique requires neither knowledge of the spectra nor of the attenuation coefficients. The desired material-selective raw data p1 and p2 are obtained as functions of the measured attenuation data q1 and q2 (one DECT scan = two raw data sets) by passing them through a polynomial function. The polynomial's coefficients are determined using a general least squares fit based on thresholded images of a calibration phantom. The calibration phantom's dimension should be of the same order of magnitude as the test object, but other than that no assumptions on its exact size or positioning are made. Once the decomposition coefficients are determined, DECT raw data can be decomposed by simply passing them through the polynomial. To demonstrate EDEC, simulations of an oval CTDI phantom, a lung phantom, a thorax phantom and a mouse phantom were carried out. The method was further verified by measuring a physical mouse phantom, a half-and-half-cylinder phantom and a Yin-Yang phantom with a dedicated in vivo dual source micro-CT scanner. The raw data were decomposed into their components, reconstructed, and the pixel values obtained were compared to the theoretical values. The determination of the calibration coefficients with EDEC is very robust and depends only slightly on the type of calibration phantom used. The images of the test phantoms (simulations and measurements) show nearly perfect agreement with the theoretical attenuation (μ) and density values. Since EDEC is an empirical technique, it inherently compensates for scatter components. The empirical dual energy calibration technique is a pragmatic, simple, and reliable calibration approach that produces highly quantitative DECT images.
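
    The calibration step amounts to a linear least squares problem: choose polynomial coefficients so that the decomposed raw data reproduce known material thicknesses (from thresholded images of the calibration phantom) for every measured pair (q1, q2). A schematic sketch under simplified assumptions, with a low-order polynomial basis and a synthetic forward model; this is not the authors' exact basis or fitting details:

        import numpy as np

        def basis(q1, q2):
            """Low-order polynomial basis in the two measured attenuation values."""
            return np.stack([np.ones_like(q1), q1, q2, q1 * q2, q1**2, q2**2], axis=1)

        rng = np.random.default_rng(3)
        # Synthetic calibration data: known material thicknesses t1, t2 (e.g. from thresholded
        # images of the calibration phantom) and simulated dual-energy attenuation data q1, q2.
        t1 = rng.uniform(0, 5, 400)
        t2 = rng.uniform(0, 3, 400)
        q1 = 0.30 * t1 + 0.55 * t2 - 0.01 * t1 * t2 + 0.002 * rng.standard_normal(400)  # assumed forward model
        q2 = 0.22 * t1 + 0.80 * t2 - 0.02 * t1 * t2 + 0.002 * rng.standard_normal(400)

        # Fit one coefficient vector per material by ordinary least squares
        A = basis(q1, q2)
        coef1, *_ = np.linalg.lstsq(A, t1, rcond=None)
        coef2, *_ = np.linalg.lstsq(A, t2, rcond=None)

        # Decompose new dual-energy raw data by evaluating the fitted polynomials
        q1_new, q2_new = np.array([1.2]), np.array([1.5])
        p1 = basis(q1_new, q2_new) @ coef1
        p2 = basis(q1_new, q2_new) @ coef2
        print(p1, p2)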

  18. An Improved Calibration Method for Hydrazine Monitors for the United States Air Force

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korsah, K

    2003-07-07

    This report documents the results of Phase 1 of the ''Air Force Hydrazine Detector Characterization and Calibration Project''. A method for calibrating model MDA 7100 hydrazine detectors in the United States Air Force (AF) inventory has been developed. The calibration system consists of a Kintek 491 reference gas generation system, a humidifier/mixer system which combines the dry reference hydrazine gas with humidified diluent or carrier gas to generate the required humidified reference for calibrations, and a gas sampling interface. The Kintek reference gas generation system itself is periodically calibrated using an ORNL-constructed coulometric titration system to verify the hydrazine concentrationmore » of the sample atmosphere in the interface module. The Kintek reference gas is then used to calibrate the hydrazine monitors. Thus, coulometric titration is only used to periodically assess the performance of the Kintek reference gas generation system, and is not required for hydrazine monitor calibrations. One advantage of using coulometric titration for verifying the concentration of the reference gas is that it is a primary standard (if used for simple solutions), thereby guaranteeing, in principle, that measurements will be traceable to SI units (i.e., to the mole). The effect of humidity of the reference gas was characterized by using the results of concentrations determined by coulometric titration to develop a humidity correction graph for the Kintek 491 reference gas generation system. Using this calibration method, calibration uncertainty has been reduced by 50% compared to the current method used to calibrate hydrazine monitors in the Air Force inventory and calibration time has also been reduced by more than 20%. Significant findings from studies documented in this report are the following: (1) The Kintek 491 reference gas generation system (generator, humidifier and interface module) can be used to calibrate hydrazine detectors. (2) The Kintek system output concentration is less than the calculated output of the generator alone but can be calibrated as a system by using coulometric titration of gas samples collected with impingers. (3) The calibrated Kintek system output concentration is reproducible even after having been disassembled and moved and reassembled. (4) The uncertainty of the reference gas concentration generated by the Kintek system is less than half the uncertainty of the Zellweger Analytics' (ZA) reference gas concentration and can be easily lowered to one third or less of the ZA method by using lower-uncertainty flow rate or total flow measuring instruments. (5) The largest sources of uncertainty in the current ORNL calibration system are the permeation rate of the permeation tubes and the flow rate of the impinger sampling pump used to collect gas samples for calibrating the Kintek system. Upgrading the measurement equipment, as stated in (4), can reduce both of these. (6) The coulometric titration technique can be used to periodically assess the performance of the Kintek system and determine a suitable recalibration interval. (7) The Kintek system has been used to calibrate two MDA 7100s and an Interscan 4187 in less than one workday. The system can be upgraded (e.g., by automating it) to provide more calibrations per day. (8) The humidity of both the reference gas and the environment of the Chemcassette affect the MDA 7100 hydrazine detector's readings. 
However, ORNL believes that the environmental effect is less significant than the effect of the reference gas humidity. (9) The ORNL calibration method based on the Kintek 491 M-B gas standard can correct for the effect of the humidity of the reference gas to produce the same calibration as that of ZA. Zellweger Analytics calibrations are typically performed at 45%-55% relative humidity. (10) Tests using the Interscan 4187 showed that the instrument was not accurate in its lower (0-100 ppb) range. Subsequent discussions with Kennedy Space Center (KSC) personnel also indicated that the Interscan units were not reproducible when new sensors were used. KSC had discovered that the Interscan units read incorrectly on the low range because of the presence of carbon dioxide. ORNL did not test the carbon dioxide effect, but it was found that the units did not read zero when a test gas containing no hydrazine was sampled. According to the KSC personnel with whom ORNL had these discussions, NASA is phasing out the use of these Interscan detectors.

  19. 4SM: A Novel Self-Calibrated Algebraic Ratio Method for Satellite-Derived Bathymetry and Water Column Correction

    PubMed Central

    Morel, Yann G.; Favoretto, Fabio

    2017-01-01

    All empirical water column correction methods have consistently been reported to require existing depth sounding data for the purpose of calibrating a simple depth retrieval model; they yield poor results over very bright or very dark bottoms. In contrast, we set out to (i) use only the relative radiance data in the image along with published data, and several new assumptions; (ii) in order to specify and operate the simplified radiative transfer equation (RTE); (iii) for the purpose of retrieving both the satellite derived bathymetry (SDB) and the water column corrected spectral reflectance over shallow seabeds. Sea truth regressions show that SDB depths retrieved by the method only need tide correction. Therefore it shall be demonstrated that, under such new assumptions, there is no need for (i) formal atmospheric correction; (ii) conversion of relative radiance into calibrated reflectance; or (iii) existing depth sounding data, to specify the simplified RTE and produce both SDB and spectral water column corrected radiance ready for bottom typing. Moreover, the use of the panchromatic band for that purpose is introduced. Altogether, we named this process the Self-Calibrated Supervised Spectral Shallow-sea Modeler (4SM). This approach requires a trained practitioner, though, to produce its results within hours of downloading the raw image. The ideal raw image should be a “near-nadir” view, exhibit homogeneous atmosphere and water column, include some coverage of optically deep waters and bare land, and lend itself to quality removal of haze, atmospheric adjacency effect, and sun/sky glint. PMID:28754028
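
    For readers unfamiliar with the simplified RTE referred to above, the sketch below inverts the standard two-flow shallow-water form for depth in a single band. It is a generic illustration under assumed radiance and attenuation values, not the 4SM algorithm itself.

    ```python
    import numpy as np

    def depth_from_band(L, L_deep, L_bottom, K):
        """Invert the standard two-flow simplified RTE,
            L = L_deep + (L_bottom - L_deep) * exp(-2 * K * z),
        for depth z (m). L is the observed shallow-water radiance, L_deep the
        optically-deep-water radiance, L_bottom the radiance the bottom would
        return at zero depth, and K the diffuse attenuation coefficient (1/m)."""
        ratio = (L - L_deep) / (L_bottom - L_deep)
        return -np.log(np.clip(ratio, 1e-6, 1.0)) / (2.0 * K)

    # Illustrative numbers only: a pixel radiance of 40, deep-water radiance of 20,
    # bottom radiance of 120 and K = 0.08 1/m give a depth of roughly 10 m.
    print(depth_from_band(L=40.0, L_deep=20.0, L_bottom=120.0, K=0.08))
    ```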

  20. 4SM: A Novel Self-Calibrated Algebraic Ratio Method for Satellite-Derived Bathymetry and Water Column Correction.

    PubMed

    Morel, Yann G; Favoretto, Fabio

    2017-07-21

    All empirical water column correction methods have consistently been reported to require existing depth sounding data for the purpose of calibrating a simple depth retrieval model; they yield poor results over very bright or very dark bottoms. In contrast, we set out to (i) use only the relative radiance data in the image along with published data, and several new assumptions; (ii) in order to specify and operate the simplified radiative transfer equation (RTE); (iii) for the purpose of retrieving both the satellite derived bathymetry (SDB) and the water column corrected spectral reflectance over shallow seabeds. Sea truth regressions show that SDB depths retrieved by the method only need tide correction. Therefore it shall be demonstrated that, under such new assumptions, there is no need for (i) formal atmospheric correction; (ii) conversion of relative radiance into calibrated reflectance; or (iii) existing depth sounding data, to specify the simplified RTE and produce both SDB and spectral water column corrected radiance ready for bottom typing. Moreover, the use of the panchromatic band for that purpose is introduced. Altogether, we named this process the Self-Calibrated Supervised Spectral Shallow-sea Modeler (4SM). This approach requires a trained practitioner, though, to produce its results within hours of downloading the raw image. The ideal raw image should be a "near-nadir" view, exhibit homogeneous atmosphere and water column, include some coverage of optically deep waters and bare land, and lend itself to quality removal of haze, atmospheric adjacency effect, and sun/sky glint.

  1. Optical Interferometric Micrometrology

    NASA Technical Reports Server (NTRS)

    Abel, Phillip B.; Lauer, James R.

    1989-01-01

    Resolutions in angstrom and subangstrom range sought for atomic-scale surface probes. Experimental optical micrometrological system built to demonstrate calibration of piezoelectric transducer to displacement sensitivity of few angstroms. Objective to develop relatively simple system producing and measuring translation, across surface of specimen, of stylus in atomic-force or scanning tunneling microscope. Laser interferometer used to calibrate piezoelectric transducer used in atomic-force microscope. Electronic portion of calibration system made of commercially available components.

  2. A method of solving tilt illumination for multiple distance phase retrieval

    NASA Astrophysics Data System (ADS)

    Guo, Cheng; Li, Qiang; Tan, Jiubin; Liu, Shutian; Liu, Zhengjun

    2018-07-01

Multiple distance phase retrieval is a technique that uses a series of intensity patterns to reconstruct a complex-valued image of an object. However, tilt illumination originating from the off-axis displacement of the incident light significantly impairs its imaging quality. To eliminate this effect, we use cross-correlation calibration to estimate the oblique angle of the incident light and a Fourier-based strategy to correct the tilted illumination effect. Compared to other methods, binary and biological objects are both stably reconstructed in simulation and experiment. This work provides a simple but beneficial method to solve the problem of tilt illumination in lens-free multi-distance systems.
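
    As a rough illustration of the cross-correlation calibration idea, the sketch below estimates the lateral shift between two recorded intensity patterns via FFT cross-correlation; dividing that shift by the plane separation gives a tilt estimate. This is a generic shift estimator with assumed names (pixel_pitch, dz), not the authors' exact procedure.

    ```python
    import numpy as np

    def relative_shift(img_a, img_b):
        """Integer-pixel shift between two patterns, from the peak of their FFT cross-correlation."""
        corr = np.fft.ifft2(np.fft.fft2(img_a) * np.conj(np.fft.fft2(img_b))).real
        dims = np.array(corr.shape)
        shift = np.array(np.unravel_index(np.argmax(corr), corr.shape))
        shift[shift > dims // 2] -= dims[shift > dims // 2]   # wrap to signed shifts
        return shift

    # Hypothetical use: two patterns recorded at planes separated by dz (metres);
    # the pattern drift (pixels) times the detector pixel pitch (metres) over dz
    # gives the tangent of the illumination tilt angle:
    # theta = np.arctan(pixel_pitch * np.linalg.norm(relative_shift(I1, I2)) / dz)
    ```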

  3. Simultaneous determination of dextromethorphan HBr and bromhexine HCl in tablets by first-derivative spectrophotometry.

    PubMed

    Tantishaiyakul, V; Poeaknapo, C; Sribun, P; Sirisuppanon, K

    1998-06-01

A rapid, simple and direct assay procedure based on first-derivative spectrophotometry, using zero-crossing and peak-to-base measurements at 234 and 324 nm, respectively, has been developed for the specific determination of dextromethorphan HBr and bromhexine HCl in tablets. Calibration graphs were linear with correlation coefficients of 0.9999 for both analytes. The limits of detection were 0.033 and 0.103 microgram ml(-1) for dextromethorphan HBr and bromhexine HCl, respectively. An HPLC method was developed as the reference method. The results obtained by first-derivative spectrophotometry were in good agreement with those found by the HPLC method.
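
    The zero-crossing measurement can be illustrated with synthetic spectra: at the wavelength where the interfering component's first derivative crosses zero, the derivative amplitude depends only on the analyte. The band positions, widths and sensitivities below are invented for illustration and do not reproduce the published method.

    ```python
    import numpy as np

    wl = np.linspace(200, 350, 1501)                                  # wavelength grid, nm
    gauss = lambda a, mu, s: a * np.exp(-0.5 * ((wl - mu) / s) ** 2)  # synthetic absorption band

    concs = np.array([2.0, 5.0, 10.0, 20.0, 30.0])                    # analyte standards (ug/mL)
    signals = []
    for i, c in enumerate(concs):
        interferent = 0.3 + 0.1 * i                                   # interferent level varies freely
        A = gauss(0.03 * c, 278, 12) + gauss(interferent, 245, 15)    # mixture spectrum
        dA = np.gradient(A, wl)                                       # first-derivative spectrum
        signals.append(dA[np.argmin(np.abs(wl - 245.0))])             # read at interferent zero-crossing

    slope, intercept = np.polyfit(concs, signals, 1)                  # linear calibration graph
    ```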

  4. Water content determination of superdisintegrants by means of ATR-FTIR spectroscopy.

    PubMed

    Szakonyi, G; Zelkó, R

    2012-04-07

Water contents of superdisintegrant pharmaceutical excipients were determined by attenuated total reflectance Fourier transform infrared (ATR-FTIR) spectroscopy using simple linear regression. Water contents of the three investigated common superdisintegrants (crospovidone, croscarmellose sodium, sodium starch glycolate) varied over a wide range (0-24%, w/w). In the case of crospovidone, three different samples from two manufacturers were examined in order to study the effects of different grades on the calibration curves. Water content determinations were based on the strong absorption of water between 3700 and 2800 cm⁻¹, while other spectral changes, associated with the different compaction of samples on the ATR crystal under the same pressure, were followed in the infrared region between 1510 and 1050 cm⁻¹. The calibration curves were constructed using the ratio of absorbance intensities in the two investigated regions. With appropriate baseline correction, the linearity of the calibration curves was maintained over the entire investigated water content range, and the effect of particle size on the calibration was not significant in the case of crospovidones from the same manufacturer. The described method enables the water content determination of powdered hygroscopic materials containing homogeneously distributed water. Copyright © 2012 Elsevier B.V. All rights reserved.

  5. Fragmentation modeling of a resin bonded sand

    NASA Astrophysics Data System (ADS)

    Hilth, William; Ryckelynck, David

    2017-06-01

Cemented sands exhibit a complex mechanical behavior that can lead to sophisticated models with numerous parameters lacking real physical meaning. However, using a rather simple generalized critical state bonded soil model has proven to be a relevant compromise between easy calibration and good results. The constitutive model adopts a non-associated elasto-plastic formulation within the critical state framework. The calibration procedure, using standard laboratory tests, is complemented by the study of a uniaxial compression test observed by tomography. Using finite element simulations, this test is simulated as a non-homogeneous 3D medium. The tomography of the compression sample gives access to 3D displacement fields via image correlation techniques. Unfortunately, these fields have missing experimental data because of the low resolution of the correlations at low displacement magnitudes. We propose a recovery method that reconstructs full 3D displacement fields and 2D boundary displacement fields. These fields are mandatory for the calibration of the constitutive parameters using 3D finite element simulations. The proposed recovery technique is based on a singular value decomposition of the available experimental data. This calibration protocol enables an accurate prediction of the fragmentation of the specimen.

  6. Visualization and quantification of magnetic nanoparticles into vesicular systems by combined atomic and magnetic force microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, C.; Department of Physics, SAPIENZA University of Rome, Piazzale A. Moro 5, 00185, Rome; Corsetti, S.

    2015-06-23

We report a phenomenological approach for the quantification of the diameter of magnetic nanoparticles (MNPs) incorporated in non-ionic surfactant vesicles (niosomes) using magnetic force microscopy (MFM). After a simple specimen preparation, i.e., putting a drop of solution containing MNP-loaded niosomes on flat substrates, topography and MFM phase images are collected. To quantify the diameter of the entrapped MNPs, the method is calibrated on MNPs alone, deposited on the same substrates, by analyzing the MFM signal as a function of the MNP diameter (at fixed tip-sample distance) and of the tip-sample distance (for selected MNPs). After calibration, the effective diameter of the MNPs entrapped in some niosomes is quantitatively deduced from MFM images.

  7. Comparative study between univariate spectrophotometry and multivariate calibration as analytical tools for quantitation of Benazepril alone and in combination with Amlodipine.

    PubMed

    Farouk, M; Elaziz, Omar Abd; Tawakkol, Shereen M; Hemdan, A; Shehata, Mostafa A

    2014-04-05

Four simple, accurate, reproducible, and selective methods have been developed and subsequently validated for the determination of Benazepril (BENZ) alone and in combination with Amlodipine (AML) in pharmaceutical dosage form. The first method is pH-induced difference spectrophotometry, where BENZ can be measured in the presence of AML as it shows maximum absorption at 237 nm and 241 nm in 0.1 N HCl and 0.1 N NaOH, respectively, while AML shows no wavelength shift in either solvent. The second method is the new Extended Ratio Subtraction Method (EXRSM) coupled to the Ratio Subtraction Method (RSM) for determination of both drugs in commercial dosage form. The third and fourth methods are multivariate calibration methods, namely Principal Component Regression (PCR) and Partial Least Squares (PLS). A detailed validation of the methods was performed following the ICH guidelines; the standard curves were found to be linear in the range of 2-30 μg/mL for BENZ in the difference and extended ratio subtraction spectrophotometric methods, and 5-30 μg/mL for AML in the EXRSM method, with well-accepted mean correlation coefficients for each analyte. The intra-day and inter-day precision and accuracy results were well within the acceptable limits. Copyright © 2013 Elsevier B.V. All rights reserved.

  8. Stable Calibration of Raman Lidar Water-Vapor Measurements

    NASA Technical Reports Server (NTRS)

    Leblanc, Thierry; McDermid, Iain S.

    2008-01-01

A method has been devised to ensure stable, long-term calibration of Raman lidar measurements that are used to determine the altitude-dependent mixing ratio of water vapor in the upper troposphere and lower stratosphere. Because the lidar measurements yield a quantity proportional to the mixing ratio, rather than the mixing ratio itself, calibration is necessary to obtain the factor of proportionality. The present method involves the use of calibration data from two sources: (1) absolute calibration data from in situ radiosonde measurements made during occasional campaigns and (2) partial calibration data obtained by use, on a regular schedule, of a lamp that emits in a known spectrum determined in laboratory calibration measurements. In this method, data from the first radiosonde campaign are used to calculate a campaign-averaged absolute lidar calibration factor (t_1) and the corresponding campaign-averaged ratio (L_1) between lamp irradiances at the water-vapor and nitrogen wavelengths. Depending on the scenario considered, this ratio can be assumed to be either constant over a long time (L = L_1) or drifting slowly with time. The absolutely calibrated water-vapor mixing ratio (q) obtained from the ith routine off-campaign lidar measurement is given by q_i = P_i/t_i = L·P_i/P'_i, where P_i is the water-vapor/nitrogen measurement signal ratio, t_i is the unknown and unneeded overall efficiency ratio of the lidar receiver during the ith routine off-campaign measurement run, and P'_i is the water-vapor/nitrogen signal ratio obtained during the lamp run associated with the ith routine off-campaign measurement run. If L is assumed constant, then the lidar calibration is routinely obtained without the need for new radiosonde data. In this case, one uses L = L_1 = P'_1/t_1, where P'_1 is the water-vapor/nitrogen signal ratio obtained during the lamp run associated with the first radiosonde campaign. If L is assumed to drift slowly, then it is necessary to postpone calculation of q_i until after a second radiosonde campaign. In this case, one obtains a new value, L_2, from the second radiosonde campaign, and for the ith routine off-campaign measurement run, one uses an intermediate value of L obtained by simple linear time interpolation between L_1 and L_2.
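
    A minimal sketch of the calibration arithmetic described above, covering both the constant-L and the slowly drifting-L scenarios; the function name and argument handling are illustrative, not the authors' code.

    ```python
    def calibrated_mixing_ratio(P_i, Pprime_i, t_meas, L1, t1, L2=None, t2=None):
        """Water-vapor mixing ratio q_i = L * P_i / P'_i for the i-th routine run.
        P_i      : water-vapor/nitrogen signal ratio of the lidar measurement
        Pprime_i : water-vapor/nitrogen signal ratio of the associated lamp run
        L        : lamp-based calibration constant, either held at L1 (constant
                   scenario) or linearly interpolated in time between the two
                   radiosonde campaigns (t1, L1) and (t2, L2)."""
        if L2 is None:
            L = L1
        else:
            L = L1 + (L2 - L1) * (t_meas - t1) / (t2 - t1)
        return L * P_i / Pprime_i
    ```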

  9. The Comparison Of In-Flight Pitot Static Calibration Method By Using Radio Altimeter As Reference with GPS and Tower Fly By Methods On CN235-100 MPA

    NASA Astrophysics Data System (ADS)

    Derajat; Hariowibowo, Hindawan

    2018-04-01

The newly proposed in-flight pitot-static calibration method was carried out during the development and qualification of the CN235-100 MPA (Military Patrol Aircraft). This method is expected to reduce flight hours, require fewer personnel and no additional special equipment, and involve only simple analysis calculations, and it is therefore expected to minimize operational cost. At the Indonesian Aerospace (IAe) Flight Test Center Division, new flight test techniques and data analysis methods, especially for flight-physics test subjects, continue to be developed as long as they are safe for flight and add value for the industry. For more than 30 years, flight test data engineers at the Flight Test Center Division have worked together with the air crew (test pilots, co-pilots, and flight test engineers) to execute flight test activities with standard procedures for both existing and newly developed test techniques and test data analysis. In this paper, the approximate mathematical model, data reduction, and flight test technique of the in-flight pitot-static calibration using the radio altimeter as reference are described, and the test results are compared with other methods, i.e., the Global Positioning System (GPS) method and the traditional tower fly-by method, which were used previously during this flight test program (Ref. [10]). The flight test data used as a case study are CN235-100 MPA data recorded during the development and qualification flight test program at Cazaux Airport, France, in June-November 2009 (Ref. [2]).

  10. Covariate Measurement Error Correction Methods in Mediation Analysis with Failure Time Data

    PubMed Central

    Zhao, Shanshan

    2014-01-01

Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This paper focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error and error associated with temporal variation. The underlying model with the ‘true’ mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. PMID:25139469

  11. Covariate measurement error correction methods in mediation analysis with failure time data.

    PubMed

    Zhao, Shanshan; Prentice, Ross L

    2014-12-01

    Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. © 2014, The International Biometric Society.

  12. Validation of an Analytical Method for Determination of Benzo[a]pyrene Bread using QuEChERS Method by GC-MS

    PubMed Central

    Eslamizad, Samira; Yazdanpanah, Hassan; Javidnia, Katayon; Sadeghi, Ramezan; Bayat, Mitra; Shahabipour, Sara; Khalighian, Najmeh; Kobarfard, Farzad

    2016-01-01

A fast and simple modified QuEChERS (quick, easy, cheap, effective, rugged and safe) extraction method based on spiked calibration curves and direct sample introduction was developed for determination of Benzo[a]pyrene (BaP) in bread by gas chromatography-mass spectrometry with a single quadrupole in selected ion monitoring mode (GC/MS-SQ-SIM). Sample preparation includes extraction of BaP into acetone followed by cleanup with dispersive solid phase extraction. The use of spiked samples for constructing the calibration curve substantially reduced adverse matrix-related effects. The average recovery of BaP at 6 concentration levels was in the range of 95-120%. The method proved to be reproducible, with relative standard deviations less than 14.5% at all concentration levels. The limit of detection and limit of quantification were 0.3 ng/g and 0.5 ng/g, respectively. A correlation coefficient of 0.997 was obtained for spiked calibration standards over the concentration range of 0.5-20 ng/g. To the best of our knowledge, this is the first time that a QuEChERS method has been used for the analysis of BaP in bread. The developed method was used for determination of BaP in 29 traditional (Sangak) and industrial (Senan) bread samples collected from Tehran in 2014. The results showed that two Sangak samples were contaminated with BaP. Therefore, a comprehensive survey for monitoring of BaP in Sangak bread samples seems to be needed. This is the first report concerning contamination of bread samples with BaP in Iran. PMID:27642317

  13. An assessment of the liquid-gas partitioning behavior of major wastewater odorants using two comparative experimental approaches: liquid sample-based vaporization vs. impinger-based dynamic headspace extraction into sorbent tubes.

    PubMed

    Iqbal, Mohammad Asif; Kim, Ki-Hyun; Szulejko, Jan E; Cho, Jinwoo

    2014-01-01

    The gas-liquid partitioning behavior of major odorants (acetic acid, propionic acid, isobutyric acid, n-butyric acid, i-valeric acid, n-valeric acid, hexanoic acid, phenol, p-cresol, indole, skatole, and toluene (as a reference)) commonly found in microbially digested wastewaters was investigated by two experimental approaches. Firstly, a simple vaporization method was applied to measure the target odorants dissolved in liquid samples with the aid of sorbent tube/thermal desorption/gas chromatography/mass spectrometry. As an alternative method, an impinger-based dynamic headspace sampling method was also explored to measure the partitioning of target odorants between the gas and liquid phases with the same detection system. The relative extraction efficiency (in percent) of the odorants by dynamic headspace sampling was estimated against the calibration results derived by the vaporization method. Finally, the concentrations of the major odorants in real digested wastewater samples were also analyzed using both analytical approaches. Through a parallel application of the two experimental methods, we intended to develop an experimental approach to be able to assess the liquid-to-gas phase partitioning behavior of major odorants in a complex wastewater system. The relative sensitivity of the two methods expressed in terms of response factor ratios (RFvap/RFimp) of liquid standard calibration between vaporization and impinger-based calibrations varied widely from 981 (skatole) to 6,022 (acetic acid). Comparison of this relative sensitivity thus highlights the rather low extraction efficiency of the highly soluble and more acidic odorants from wastewater samples in dynamic headspace sampling.

  14. Development and validation of a fast and simple multi-analyte procedure for quantification of 40 drugs relevant to emergency toxicology using GC-MS and one-point calibration.

    PubMed

    Meyer, Golo M J; Weber, Armin A; Maurer, Hans H

    2014-05-01

    Diagnosis and prognosis of poisonings should be confirmed by comprehensive screening and reliable quantification of xenobiotics, for example by gas chromatography-mass spectrometry (GC-MS) or liquid chromatography-mass spectrometry (LC-MS). The turnaround time should be short enough to have an impact on clinical decisions. In emergency toxicology, quantification using full-scan acquisition is preferable because this allows screening and quantification of expected and unexpected drugs in one run. Therefore, a multi-analyte full-scan GC-MS approach was developed and validated with liquid-liquid extraction and one-point calibration for quantification of 40 drugs relevant to emergency toxicology. Validation showed that 36 drugs could be determined quickly, accurately, and reliably in the range of upper therapeutic to toxic concentrations. Daily one-point calibration with calibrators stored for up to four weeks reduced workload and turn-around time to less than 1 h. In summary, the multi-analyte approach with simple liquid-liquid extraction, GC-MS identification, and quantification over fast one-point calibration could successfully be applied to proficiency tests and real case samples. Copyright © 2013 John Wiley & Sons, Ltd.

  15. Simple high-performance liquid chromatography method for formaldehyde determination in human tissue through derivatization with 2,4-dinitrophenylhydrazine.

    PubMed

    Yilmaz, Bilal; Asci, Ali; Kucukoglu, Kaan; Albayrak, Mevlut

    2016-08-01

A simple high-performance liquid chromatography method has been developed for the determination of formaldehyde in human tissue. Formaldehyde was derivatized with 2,4-dinitrophenylhydrazine, extracted from human tissue with ethyl acetate by liquid-liquid extraction, and analyzed by high-performance liquid chromatography. The calibration curve was linear in the concentration range of 5.0-200 μg/mL. Intra- and interday precision values for formaldehyde in tissue were <6.9%, and accuracy (relative error) was better than 6.5%. The extraction recoveries of formaldehyde from human tissue were between 88 and 98%. The limits of detection and quantification of formaldehyde were 1.5 and 5.0 μg/mL, respectively. Also, this assay was applied to liver samples taken from biopsy material. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Rapid quantitation of atorvastatin in process pharmaceutical powder sample using Raman spectroscopy and evaluation of parameters related to accuracy of analysis.

    PubMed

    Lim, Young-Il; Han, Janghee; Woo, Young-Ah; Kim, Jaejin; Kang, Myung Joo

    2018-07-05

The purpose of this study was to determine the atorvastatin (ATV) content in an in-process pharmaceutical powder sample using Raman spectroscopy. To establish the analysis method, the influence of the type of Raman measurement (back-scattering or transmission mode), preparation of the calibration samples (simple admixing or granulation), sample pre-treatment (pelletization), and spectral pretreatment on the Raman spectra was investigated. The characteristic peak of the active compound was more distinctly detected in transmission Raman mode with a laser spot size of 4 mm than with the back-scattering method. Preparation of calibration samples by wet granulation, identical to the actual manufacturing process, provided unchanged spectral patterns for the in-process sample, with no changes and/or shifts in the spectrum. Pelletization before Raman analysis remarkably improved spectral reproducibility by decreasing the difference in density between the samples. Probabilistic quotient normalization led to accurate and consistent quantification of the ATV content in the calibration samples (standard error of cross-validation: 1.21%). Moreover, the drug content in the granules obtained from five commercial batches was reliably quantified, with no statistical difference (p = 0.09) from that obtained by HPLC assay. From these findings, we suggest that transmission Raman analysis may be a fast and non-invasive method for the quantification of ATV in actual manufacturing processes. Copyright © 2018 Elsevier B.V. All rights reserved.
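
    Probabilistic quotient normalization, mentioned above, is a standard spectral pretreatment; a commonly used (Dieterle-style) form is sketched below. This is a generic implementation, not necessarily the exact preprocessing pipeline used in the study.

    ```python
    import numpy as np

    def probabilistic_quotient_normalization(spectra, reference=None):
        """spectra: (n_samples, n_variables) array of Raman intensities.
        1) total-intensity normalize each spectrum,
        2) compute variable-wise quotients against a reference spectrum
           (median spectrum of the set by default),
        3) divide each spectrum by the median of its quotients."""
        X = spectra / spectra.sum(axis=1, keepdims=True)
        ref = np.median(X, axis=0) if reference is None else reference
        quotients = X / ref
        dilution = np.median(quotients, axis=1, keepdims=True)
        return X / dilution
    ```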

  17. Rapid quantitation of atorvastatin in process pharmaceutical powder sample using Raman spectroscopy and evaluation of parameters related to accuracy of analysis

    NASA Astrophysics Data System (ADS)

    Lim, Young-Il; Han, Janghee; Woo, Young-Ah; Kim, Jaejin; Kang, Myung Joo

    2018-07-01

The purpose of this study was to determine the atorvastatin (ATV) content in an in-process pharmaceutical powder sample using Raman spectroscopy. To establish the analysis method, the influence of the type of Raman measurement (back-scattering or transmission mode), preparation of the calibration samples (simple admixing or granulation), sample pre-treatment (pelletization), and spectral pretreatment on the Raman spectra was investigated. The characteristic peak of the active compound was more distinctly detected in transmission Raman mode with a laser spot size of 4 mm than with the back-scattering method. Preparation of calibration samples by wet granulation, identical to the actual manufacturing process, provided unchanged spectral patterns for the in-process sample, with no changes and/or shifts in the spectrum. Pelletization before Raman analysis remarkably improved spectral reproducibility by decreasing the difference in density between the samples. Probabilistic quotient normalization led to accurate and consistent quantification of the ATV content in the calibration samples (standard error of cross-validation: 1.21%). Moreover, the drug content in the granules obtained from five commercial batches was reliably quantified, with no statistical difference (p = 0.09) from that obtained by HPLC assay. From these findings, we suggest that transmission Raman analysis may be a fast and non-invasive method for the quantification of ATV in actual manufacturing processes.

  18. Calibration-free quantification of interior properties of porous media with x-ray computed tomography.

    PubMed

    Hussein, Esam M A; Agbogun, H M D; Al, Tom A

    2015-03-01

    A method is presented for interpreting the values of x-ray attenuation coefficients reconstructed in computed tomography of porous media, while overcoming the ambiguity caused by the multichromatic nature of x-rays, dilution by void, and material heterogeneity. The method enables determination of porosity without relying on calibration or image segmentation or thresholding to discriminate pores from solid material. It distinguishes between solution-accessible and inaccessible pores, and provides the spatial and frequency distributions of solid-matrix material in a heterogeneous medium. This is accomplished by matching an image of a sample saturated with a contrast solution with that saturated with a transparent solution. Voxels occupied with solid-material and inaccessible pores are identified by the fact that they maintain the same location and image attributes in both images, with voxels containing inaccessible pores appearing empty in both images. Fully porous and accessible voxels exhibit the maximum contrast, while the rest are porous voxels containing mixtures of pore solutions and solid. This matching process is performed with an image registration computer code, and image processing software that requires only simple subtraction and multiplication (scaling) processes. The process is demonstrated in dolomite (non-uniform void distribution, homogeneous solid matrix) and sandstone (nearly uniform void distribution, heterogeneous solid matrix) samples, and its overall performance is shown to compare favorably with a method based on calibration and thresholding. Copyright © 2014 Elsevier Ltd. All rights reserved.
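
    The "simple subtraction and multiplication (scaling)" step described above can be sketched as follows: per-voxel porosity is the attenuation change between the contrast-saturated and transparent-saturated images, scaled by the change expected for a fully porous, solution-accessible voxel. Image registration is assumed to have been done already; the names and parameters are illustrative, not the authors' code.

    ```python
    import numpy as np

    def voxel_porosity(img_contrast, img_transparent, mu_contrast_solution, mu_transparent_solution):
        """Per-voxel accessible porosity from two registered CT images of the same sample.
        Voxels containing only solid or inaccessible pores show no change between images;
        a fully porous, solution-accessible voxel shows the full attenuation difference."""
        delta = img_contrast.astype(float) - img_transparent.astype(float)   # subtraction
        full_change = mu_contrast_solution - mu_transparent_solution         # change for 100% porosity
        return np.clip(delta / full_change, 0.0, 1.0)                        # scaling to [0, 1]
    ```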

  19. New Submersed Chamber for Calibration of Relative Humidity Instruments at HMI/FSB-LPM

    NASA Astrophysics Data System (ADS)

    Sestan, D.; Zvizdic, D.; Sariri, K.

    2018-02-01

This paper gives a detailed description of a new chamber designed for the calibration of relative humidity (RH) instruments at the Laboratory for Process Measurement (HMI/FSB-LPM). Until now, calibrations of RH instruments at the HMI/FSB-LPM have been performed by the comparison method, using a large-volume climatic chamber and a calibrated dew point hygrometer with an additional thermometer. Since 2010, HMI/FSB-LPM, in cooperation with the Centre for Metrology and Accreditation in Finland (MIKES), has developed two primary dew point generators which cover the dew point temperature range between -70 °C and 60 °C. In order to utilize these facilities for calibration of RH instruments, the new chamber was designed, manufactured and installed in the existing system, aiming to extend its range and reduce the related calibration uncertainties. The chamber construction allows its use in a larger-volume thermostatic bath as well as in climatic chambers. Within the scope of this paper, the performance of the new chamber was tested while it was submersed in a thermostatic bath. The chamber can simultaneously accommodate up to three RH sensors. In order to keep the design of the chamber simple, only cylindrical RH sensors detachable from their display units can be calibrated. Possible optimizations are also discussed, and improvements in the design are proposed. By using the new chamber, HMI/FSB-LPM reduced the expanded calibration uncertainties (95% level of confidence, coverage factor k = 2) from 0.6 %rh to 0.25 %rh at 30 %rh (23 °C), and from 0.8 %rh to 0.53 %rh at 70 %rh (23 °C).

  20. Simple determination of aflatoxins in rice by ultra-high performance liquid chromatography coupled to chemical post-column derivatization and fluorescence detection.

    PubMed

    Huertas-Pérez, José Fernando; Arroyo-Manzanares, Natalia; Hitzler, Dominik; Castro-Guerrero, Francisco Germán; Gámiz-Gracia, Laura; García-Campaña, Ana M

    2018-04-15

A fast and simple analytical method was developed and characterized for the determination of aflatoxins (B1, B2, G1 and G2) in rice. The procedure is based on a simple solid-liquid extraction without further clean-up, and analysis by ultra-high performance liquid chromatography coupled with fluorescence detection. The fluorescence emission of aflatoxins B1 and G1 was enhanced by post-column chemical derivatization using pyridinium bromide perbromide. The analytical method was satisfactorily characterized in white and brown rice. Under optimum conditions, external calibration in solvent could be used for quantification purposes, and the limits of quantification were below the maximum contents established by European Union regulation for this contaminant/commodity group combination (0.07-0.14 µg/kg for white rice and 0.20-0.28 µg/kg for brown rice). Recovery studies carried out at three different concentration levels (0.5, 2 and 5 µg/kg) showed values in the range of 84.5-105.3%, and RSDs ≤ 5%. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Determination of chloride in admixtures and aggregates for cement by a simple flow injection potentiometric system.

    PubMed

    Junsomboon, Jaroon; Jakmunee, Jaroon

    2008-07-15

A simple flow injection system has been developed for the determination of water-soluble chloride in admixtures and aggregates for cement, using three 3-way solenoid valves as an electrically controlled injection valve and a simple home-made chloride ion-selective electrode based on Ag/AgCl wire as the sensor. A liquid sample or an extract was injected into a water carrier stream, which was then merged with a 0.1 M KNO(3) stream and flowed through a flow cell where the solution contacted the sensor, producing a potential change recorded as a peak. A calibration graph in the range of 10-100 mg L(-1) was obtained with a detection limit of 2 mg L(-1). Relative standard deviations for 7 replicate injections of 20, 60 and 90 mg L(-1) chloride solutions were 1.0, 1.2 and 0.6%, respectively. A sample throughput of 60 h(-1) was achieved with the consumption of 1 mL each of electrolyte solution and water carrier. The developed method was validated against the British Standard methods.

  2. Rapid screening of selective serotonin re-uptake inhibitors in urine samples using solid-phase microextraction gas chromatography-mass spectrometry.

    PubMed

    Salgado-Petinal, Carmen; Lamas, J Pablo; Garcia-Jares, Carmen; Llompart, Maria; Cela, Rafael

    2005-07-01

    In this paper a solid-phase microextraction-gas chromatography-mass spectrometry (SPME-GC-MS) method is proposed for a rapid analysis of some frequently prescribed selective serotonin re-uptake inhibitors (SSRI)-venlafaxine, fluvoxamine, mirtazapine, fluoxetine, citalopram, and sertraline-in urine samples. The SPME-based method enables simultaneous determination of the target SSRI after simple in-situ derivatization of some of the target compounds. Calibration curves in water and in urine were validated and statistically compared. This revealed the absence of matrix effect and, in consequence, the possibility of quantifying SSRI in urine samples by external water calibration. Intra-day and inter-day precision was satisfactory for all the target compounds (relative standard deviation, RSD, <14%) and the detection limits achieved were <0.4 ng mL(-1) urine. The time required for the SPME step and for GC analysis (30 min each) enables high throughput. The method was applied to real urine samples from different patients being treated with some of these pharmaceuticals. Some SSRI metabolites were also detected and tentatively identified.

  3. A comparative study between different alternatives to prepare gaseous standards for calibrating UV-Ion Mobility Spectrometers.

    PubMed

    Criado-García, Laura; Garrido-Delgado, Rocío; Arce, Lourdes; Valcárcel, Miguel

    2013-07-15

A UV-Ion Mobility Spectrometer is a simple, rapid, inexpensive instrument widely used in environmental analysis, among other fields. The advantageous features of its underlying technology can be of great help in developing reliable, economical methods for determining gaseous compounds from gaseous, liquid and solid samples. Developing an effective method using UV-Ion Mobility Spectrometry (UV-IMS) to determine volatile analytes entails using appropriate gaseous standards for calibrating the spectrometer. In this work, two home-made sample introduction systems (SISs) and a commercial gas generator were used to obtain such gaseous standards. The first home-made SIS was a static head-space system used to measure compounds present in liquid samples, and the other was an exponential dilution set-up used to measure compounds present in gaseous samples. Gaseous compounds generated by each method were determined on-line by UV-IMS. The target analytes chosen for this comparative study were ethanol, acetone, benzene, toluene, ethylbenzene and the xylene isomers. The different alternatives were acceptable in terms of sensitivity, precision and selectivity. Copyright © 2013 Elsevier B.V. All rights reserved.

  4. Analysis of serotonin concentrations in human milk by high-performance liquid chromatography with fluorescence detection.

    PubMed

    Chiba, Takeshi; Maeda, Tomoji; Tairabune, Tomohiko; Tomita, Takashi; Sanbe, Atsushi; Takeda, Rika; Kikuchi, Akihiko; Kudo, Kenzo

    2017-03-25

    Serotonin (5-hydroxytryptamine, 5-HT) plays an important role in milk volume homeostasis in the mammary gland during lactation; 5-HT in milk may also affect infant development. However, there are few reports on 5-HT concentrations in human breast milk. To address this issue, we developed a simple method based on high-performance liquid chromatography with fluorescence detection (HPLC-FD) for measuring 5-HT concentrations in human breast milk. Breast milk samples were provided by four healthy Japanese women. Calibration curves for 5-HT in each sample were prepared with the standard addition method between 5 and 1000 ng/ml, and all had correlation coefficients >0.999. The recovery of 5-HT was 96.1%-101.0%, with a coefficient of variation of 3.39%-8.62%. The range of 5-HT concentrations estimated from the calibration curves was 11.1-51.1 ng/ml. Thus, the HPLC-FD method described here can effectively extract 5-HT from human breast milk with high reproducibility. Copyright © 2017 Elsevier Inc. All rights reserved.
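
    The standard addition calibration mentioned above works by spiking equal aliquots of the milk sample with increasing amounts of 5-HT standard and extrapolating the fitted line back to zero signal; the sample concentration is the magnitude of the x-intercept. The sketch below uses invented numbers for illustration only.

    ```python
    import numpy as np

    added = np.array([0.0, 5.0, 50.0, 250.0, 1000.0])     # 5-HT standard added (ng/mL)
    signal = np.array([0.62, 0.71, 1.55, 5.30, 19.4])      # fluorescence response (a.u.)

    slope, intercept = np.polyfit(added, signal, 1)
    c_sample = intercept / slope                            # ng/mL of 5-HT in the unspiked sample
    print(f"estimated 5-HT concentration: {c_sample:.1f} ng/mL")
    ```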

  5. Development and validation of an LC-UV method for the determination of sulfonamides in animal feeds.

    PubMed

    Kumar, P; Companyó, R

    2012-05-01

    A simple LC-UV method was developed for the determination of residues of eight sulfonamides (sulfachloropyridazine, sulfadiazine, sulfadimidine, sulfadoxine, sulfamethoxypyridazine, sulfaquinoxaline, sulfamethoxazole, and sulfadimethoxine) in six types of animal feed. C18, Oasis HLB, Plexa and Plexa PCX stationary phases were assessed for the clean-up step and the latter was chosen as it showed greater efficiency in the clean-up of interferences. Feed samples spiked with sulfonamides at 2 mg/kg were used to assess the trueness (recovery %) and precision of the method. Mean recovery values ranged from 47% to 66%, intra-day precision (RSD %) from 4% to 15% and inter-day precision (RSD %) from 7% to 18% in pig feed. Recoveries and intra-day precisions were also evaluated in rabbit, hen, cow, chicken and piglet feed matrices. Calibration curves with standards prepared in mobile phase and matrix-matched calibration curves were compared and the matrix effects were ascertained. The limits of detection and quantification in the feeds ranged from 74 to 265 µg/kg and from 265 to 868 µg/kg, respectively. Copyright © 2011 John Wiley & Sons, Ltd.

  6. SU-F-T-584: Investigating Correction Methods for Ion Recombination Effects in OCTAVIUS 1000 SRS Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knill, C; Wayne State University School of Medicine, Detroit, MI; Snyder, M

Purpose: PTW's Octavius 1000 SRS array performs IMRT QA measurements with liquid-filled ionization chambers (LICs). Collection efficiencies of LICs have been shown to change during IMRT delivery as a function of LINAC pulse frequency and pulse dose, which affects QA results. In this study, two methods were developed to correct changes in collection efficiencies during IMRT QA measurements, and the effects of these corrections on QA pass rates were compared. Methods: For the first correction, Matlab software was developed that calculates pulse frequency and pulse dose for each detector, using measurement and DICOM RT Plan files. Pulse information is converted to collection efficiency, and measurements are corrected by multiplying detector dose by ratios of calibration to measured collection efficiencies. For the second correction, the MU/min in the daily 1000 SRS calibration was chosen to match the average MU/min of the VMAT plan. The usefulness of the derived corrections was evaluated using 6MV and 10FFF SBRT RapidArc plans delivered to the OCTAVIUS 4D system using a TrueBeam equipped with an HD-MLC. Effects of the two corrections on QA results were examined by performing 3D gamma analysis comparing predicted to measured dose, with and without corrections. Results: After complex Matlab corrections, average 3D gamma pass rates improved by [0.07%, 0.40%, 1.17%] for 6MV and [0.29%, 1.40%, 4.57%] for 10FFF using [3%/3mm, 2%/2mm, 1%/1mm] criteria. Maximum changes in gamma pass rates were [0.43%, 1.63%, 3.05%] for 6MV and [1.00%, 4.80%, 11.2%] for 10FFF using [3%/3mm, 2%/2mm, 1%/1mm] criteria. On average, pass rates with the simple daily calibration correction were within 1% of those with the complex Matlab correction. Conclusion: Ion recombination effects can potentially be clinically significant for OCTAVIUS 1000 SRS measurements, especially for higher pulse dose unflattened beams when using tighter gamma tolerances. Matching the daily 1000 SRS calibration MU/min to the average planned MU/min is a simple correction that greatly reduces ion recombination effects, improving measurement accuracy and gamma pass rates. This work was supported by PTW.
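
    The first correction described above amounts to a per-detector rescaling; a minimal sketch follows. The collection-efficiency values would come from the pulse-frequency/pulse-dose model built from the measurement and DICOM RT Plan files; the function name and values here are illustrative.

    ```python
    def correct_detector_dose(measured_dose, ce_at_calibration, ce_at_measurement):
        """Rescale one LIC detector's dose by the ratio of its collection efficiency
        under calibration conditions to its collection efficiency under the pulse
        frequency and pulse dose actually delivered during the IMRT measurement."""
        return measured_dose * ce_at_calibration / ce_at_measurement

    # e.g. a detector reading 1.987 Gy with 99.5% efficiency at calibration but only
    # 97.8% during an FFF delivery would be corrected upward by about 1.7%.
    corrected = correct_detector_dose(1.987, 0.995, 0.978)
    ```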

  7. Hydrophone calibration for MERMAID seismic network

    NASA Astrophysics Data System (ADS)

    Joubert, C.; Nolet, G.; Sukhovich, A.; Ogé, A.; Argentino, J.; Hello, Y.

    2013-12-01

The MERMAID float (Mobile Earthquake Recorder in Marine Areas by Independent Divers) is a new oceanic seismometer which has already successfully recorded P-waves from teleseismic events when deployed in the Mediterranean Sea and the Indian Ocean. The frequency band of teleseismic acquisition extends up to 2 Hz. The P-waves are recorded with a Rafos II hydrophone. The hydrophone has a flat frequency response from 5 Hz to 10 kHz, but its behavior below 5 Hz is not documented. In this work we determine the Rafos II response, with the electronic card used for MERMAID, in the frequency band of 0.1 to 2 Hz. A simple and low-cost calibration method at low frequencies was developed and applied to the Rafos II, but it can be used for any hydrophone. In this calibration method a brief pressure increase of 1000 Pa is applied to the hydrophone. We record the response after filtering and digitization by the electronic card. To create the pressure increase, the hydrophone is placed in a calibration chamber filled with water, making sure no air bubbles are present. By opening a solenoid valve connected to a tube with 10 cm of extra water on top of the chamber, an abrupt pressure of 1000 Pa is applied. The output signal is fitted to an empirical response function that characterizes it with four parameters, A, B, α and τ, which control the shape of the signal: h(t) = t^(ατ) e^(-αt) (A + Bt). A represents the magnification, α defines the exponential relaxation of the signal, B models the overshoot and τ allows for a slightly delayed response due to the low-pass filtering in the electronics. A set of 20 experiments is used to characterize the Rafos II instrumental response in association with the electronic card. The method developed here offers a good and simple way to estimate the response at low frequencies. The MERMAID hydrophone response to the step function input of 1000 Pa can be defined by A = 0.36 ± 0.02 mV/Pa, B = -0.08 ± 0.01 mV Pa^(-1) s^(-1), α = 0.28 ± 0.02 s^(-1) and τ = 0.41 ± 0.14 s.
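
    A minimal sketch of fitting the empirical response function quoted above, h(t) = t^(ατ) e^(-αt)(A + Bt), to one recorded step response with non-linear least squares; the synthetic data and initial guesses are illustrative, not the MERMAID dataset.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def h(t, A, B, alpha, tau):
        """Empirical step response: h(t) = t**(alpha*tau) * exp(-alpha*t) * (A + B*t)."""
        return t ** (alpha * tau) * np.exp(-alpha * t) * (A + B * t)

    # Synthetic "recorded" response to the 1000 Pa step (placeholder for real data).
    t = np.linspace(0.05, 20.0, 400)                        # seconds
    rng = np.random.default_rng(0)
    v = h(t, 0.36, -0.08, 0.28, 0.41) + rng.normal(0.0, 0.005, t.size)

    popt, pcov = curve_fit(h, t, v, p0=[0.3, -0.1, 0.3, 0.4])
    A, B, alpha, tau = popt
    print(f"A={A:.3f}, B={B:.3f}, alpha={alpha:.3f} 1/s, tau={tau:.3f} s")
    ```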

  8. Application of validation data for assessing spatial interpolation methods for 8-h ozone or other sparsely monitored constituents.

    PubMed

    Joseph, John; Sharif, Hatim O; Sunil, Thankam; Alamgir, Hasanat

    2013-07-01

    The adverse health effects of high concentrations of ground-level ozone are well-known, but estimating exposure is difficult due to the sparseness of urban monitoring networks. This sparseness discourages the reservation of a portion of the monitoring stations for validation of interpolation techniques precisely when the risk of overfitting is greatest. In this study, we test a variety of simple spatial interpolation techniques for 8-h ozone with thousands of randomly selected subsets of data from two urban areas with monitoring stations sufficiently numerous to allow for true validation. Results indicate that ordinary kriging with only the range parameter calibrated in an exponential variogram is the generally superior method, and yields reliable confidence intervals. Sparse data sets may contain sufficient information for calibration of the range parameter even if the Moran I p-value is close to unity. R script is made available to apply the methodology to other sparsely monitored constituents. Copyright © 2013 Elsevier Ltd. All rights reserved.
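
    A compact sketch of the approach that performed best above: ordinary kriging with an exponential variogram in which only the range is tuned, here by leave-one-out cross-validation. It is a generic illustration in Python (not the study's R script), with station coordinates and values assumed as inputs.

    ```python
    import numpy as np

    def gamma_exp(h, rng_param):
        """Exponential semivariogram with unit sill; only the range matters for OK weights."""
        return 1.0 - np.exp(-h / rng_param)

    def krige(xy, z, x0, rng_param):
        """Ordinary kriging prediction at point x0 from stations xy (n, 2) with values z (n,)."""
        n = len(z)
        d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
        A = np.zeros((n + 1, n + 1))
        A[:n, :n] = gamma_exp(d, rng_param)
        A[:n, n] = A[n, :n] = 1.0                       # unbiasedness constraint
        b = np.append(gamma_exp(np.linalg.norm(xy - x0, axis=1), rng_param), 1.0)
        w = np.linalg.solve(A, b)
        return w[:n] @ z

    def calibrate_range(xy, z, candidates):
        """Pick the range minimizing leave-one-out squared error (the only calibrated parameter)."""
        loo = lambda r: np.mean([(z[i] - krige(np.delete(xy, i, 0), np.delete(z, i), xy[i], r)) ** 2
                                 for i in range(len(z))])
        return min(candidates, key=loo)
    ```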

  9. Analysis of bakery products by laser-induced breakdown spectroscopy.

    PubMed

    Bilge, Gonca; Boyacı, İsmail Hakkı; Eseller, Kemal Efe; Tamer, Uğur; Çakır, Serhat

    2015-08-15

In this study, we focused on the detection of Na in bakery products by using laser-induced breakdown spectroscopy (LIBS) as a quick and simple method. LIBS experiments were performed to examine the Na line at 589 nm in order to quantify NaCl. A series of standard bread sample pellets containing various concentrations of NaCl (0.025-3.5%) were used to construct the calibration curves and to determine the detection limits of the measurements. Calibration graphs were drawn as functions of NaCl and Na concentration and showed good linearity in the ranges of 0.025-3.5% NaCl and 0.01-1.4% Na, with correlation coefficient (R(2)) values greater than 0.98 and 0.96, respectively. The obtained detection limits for NaCl and Na were 175 and 69 ppm, respectively. The experimental studies showed that LIBS is a convenient, rapid and in situ technique for quantifying NaCl concentrations in commercial bakery products. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Simple solution for a complex problem: proanthocyanidins, galloyl glucoses and ellagitannins fit on a single calibration curve in high performance-gel permeation chromatography.

    PubMed

    Stringano, Elisabetta; Gea, An; Salminen, Juha-Pekka; Mueller-Harvey, Irene

    2011-10-28

This study was undertaken to explore gel permeation chromatography (GPC) for estimating molecular weights of proanthocyanidin fractions isolated from sainfoin (Onobrychis viciifolia). The results were compared with data obtained by thiolytic degradation of the same fractions. Polystyrene, polyethylene glycol and polymethyl methacrylate standards were not suitable for estimating the molecular weights of underivatized proanthocyanidins. Therefore, a novel HPLC-GPC method was developed based on two serially connected PolarGel-L columns using DMF that contained 5% water, 1% acetic acid and 0.15 M LiBr at 0.7 ml/min and 50 °C. This yielded a single calibration curve for galloyl glucoses (trigalloyl glucose, pentagalloyl glucose), ellagitannins (pedunculagin, vescalagin, punicalagin, oenothein B, gemin A), proanthocyanidins (procyanidin B2, cinnamtannin B1), and several other polyphenols (catechin, epicatechin gallate, epigallocatechin gallate, amentoflavone). These GPC-predicted molecular weights represented a considerable advance over previously reported HPLC-GPC methods for underivatized proanthocyanidins. Copyright © 2011 Elsevier B.V. All rights reserved.

  11. Employing an Incentive Spirometer to Calibrate Tidal Volumes Estimated from a Smartphone Camera.

    PubMed

    Reyes, Bersain A; Reljin, Natasa; Kong, Youngsun; Nam, Yunyoung; Ha, Sangho; Chon, Ki H

    2016-03-18

    A smartphone-based tidal volume (V(T)) estimator was recently introduced by our research group, where an Android application provides a chest movement signal whose peak-to-peak amplitude is highly correlated with reference V(T) measured by a spirometer. We found a Normalized Root Mean Squared Error (NRMSE) of 14.998% ± 5.171% (mean ± SD) when the smartphone measures were calibrated using spirometer data. However, the availability of a spirometer device for calibration is not realistic outside clinical or research environments. In order to be used by the general population on a daily basis, a simple calibration procedure not relying on specialized devices is required. In this study, we propose taking advantage of the linear correlation between smartphone measurements and V(T) to obtain a calibration model using information computed while the subject breathes through a commercially-available incentive spirometer (IS). Experiments were performed on twelve (N = 12) healthy subjects. In addition to corroborating findings from our previous study using a spirometer for calibration, we found that the calibration procedure using an IS resulted in a fixed bias of -0.051 L and a RMSE of 0.189 ± 0.074 L corresponding to 18.559% ± 6.579% when normalized. Although it has a small underestimation and slightly increased error, the proposed calibration procedure using an IS has the advantages of being simple, fast, and affordable. This study supports the feasibility of developing a portable smartphone-based breathing status monitor that provides information about breathing depth, in addition to the more commonly estimated respiratory rate, on a daily basis.
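
    A minimal sketch of the IS-based calibration step: fit a linear model from smartphone peak-to-peak amplitude to the incentive-spirometer volume settings, then score later estimates against a reference. All numbers are invented for illustration, and the NRMSE shown uses one common definition (RMSE divided by the reference range), which may differ from the study's exact formula.

    ```python
    import numpy as np

    # Calibration breaths through the incentive spirometer (illustrative values)
    amp_cal = np.array([0.8, 1.1, 1.6, 2.0, 2.7])       # smartphone peak-to-peak amplitude (a.u.)
    vt_cal = np.array([0.45, 0.60, 0.90, 1.10, 1.50])    # IS volume settings (L)
    slope, intercept = np.polyfit(amp_cal, vt_cal, 1)     # per-subject calibration model

    # Later breaths: apply the model and compare with a reference measurement
    amp_test = np.array([1.0, 1.8, 2.4])
    vt_ref = np.array([0.55, 1.00, 1.35])
    vt_est = slope * amp_test + intercept
    rmse = np.sqrt(np.mean((vt_est - vt_ref) ** 2))
    nrmse = 100.0 * rmse / (vt_ref.max() - vt_ref.min())  # NRMSE in percent
    ```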

  12. Employing an Incentive Spirometer to Calibrate Tidal Volumes Estimated from a Smartphone Camera

    PubMed Central

    Reyes, Bersain A.; Reljin, Natasa; Kong, Youngsun; Nam, Yunyoung; Ha, Sangho; Chon, Ki H.

    2016-01-01

    A smartphone-based tidal volume (VT) estimator was recently introduced by our research group, where an Android application provides a chest movement signal whose peak-to-peak amplitude is highly correlated with reference VT measured by a spirometer. We found a Normalized Root Mean Squared Error (NRMSE) of 14.998% ± 5.171% (mean ± SD) when the smartphone measures were calibrated using spirometer data. However, the availability of a spirometer device for calibration is not realistic outside clinical or research environments. In order to be used by the general population on a daily basis, a simple calibration procedure not relying on specialized devices is required. In this study, we propose taking advantage of the linear correlation between smartphone measurements and VT to obtain a calibration model using information computed while the subject breathes through a commercially-available incentive spirometer (IS). Experiments were performed on twelve (N = 12) healthy subjects. In addition to corroborating findings from our previous study using a spirometer for calibration, we found that the calibration procedure using an IS resulted in a fixed bias of −0.051 L and a RMSE of 0.189 ± 0.074 L corresponding to 18.559% ± 6.579% when normalized. Although it has a small underestimation and slightly increased error, the proposed calibration procedure using an IS has the advantages of being simple, fast, and affordable. This study supports the feasibility of developing a portable smartphone-based breathing status monitor that provides information about breathing depth, in addition to the more commonly estimated respiratory rate, on a daily basis. PMID:26999152

  13. Are the gyro-ages of field stars underestimated?

    NASA Astrophysics Data System (ADS)

    Kovács, Géza

    2015-09-01

By using the current photometric rotational data on eight Galactic open clusters, we show that the evolutionary stellar model (isochrone) ages of these clusters are tightly correlated with the period shifts applied to the (B - V)0-Prot ridges that optimally align these ridges to the one defined by Praesepe and the Hyades. On the other hand, when the traditional Skumanich-type multiplicative transformation is used, the ridges become far less aligned due to the age-dependent slope change introduced by the period multiplication. Therefore, we employ our simple additive gyro-age calibration on various datasets of Galactic field stars to test its applicability. We show that, in the overall sense, the gyro-ages are systematically greater than the isochrone ages. The difference could exceed several gigayears, depending on the stellar parameters. Although the age overlap between the open clusters used in the calibration and the field star samples is only partial, the systematic difference indicates the limitation of the currently available gyro-age methods and suggests that the rotation of field stars slows down considerably more slowly than we would expect from the simple extrapolation of the stellar rotation rates in open clusters.

  14. Reproducibility and calibration of MMC-based high-resolution gamma detectors

    DOE PAGES

    Bates, C. R.; Pies, C.; Kempf, S.; ...

    2016-07-15

    Here, we describe a prototype γ-ray detector based on a metallic magnetic calorimeter with an energy resolution of 46 eV at 60 keV and a reproducible response function that follows a simple second-order polynomial. The simple detector calibration allows adding high-resolution spectra from different pixels and different cool-downs without loss in energy resolution to determine γ-ray centroids with high accuracy. As an example of an application in nuclear safeguards enabled by such a γ-ray detector, we discuss the non-destructive assay of 242Pu in a mixed-isotope Pu sample.
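
    The "simple second-order polynomial" response mentioned above corresponds to an energy calibration of the kind sketched below, fitted per pixel so that spectra from different pixels and cool-downs can be summed on a common energy axis. Centroid and energy values are placeholders, not the paper's data.

    ```python
    import numpy as np

    centroids = np.array([1250.4, 3010.7, 5121.9])   # measured line centroids (pulse-height units)
    energies = np.array([26.3, 59.5, 98.4])          # corresponding reference gamma energies (keV)

    c2, c1, c0 = np.polyfit(centroids, energies, 2)   # E(x) = c2*x**2 + c1*x + c0
    energy_axis = np.polyval([c2, c1, c0], np.arange(0, 6000))   # map the whole spectrum to keV
    ```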

  15. Validation and Application of a Simple UHPLC-MS-MS Method for the Enantiospecific Determination of Warfarin in Human Urine.

    PubMed

    Alshogran, Osama Y; Ocque, Andrew J; Leblond, François A; Pichette, Vincent; Nolin, Thomas D

    2016-04-01

    A simple and rapid liquid chromatographic-tandem mass spectrometric method has been developed and validated for the enantiospecific determination of R- and S-warfarin in human urine. Warfarin enantiomers were extracted from urine using methyl tert-butyl ether. Chromatographic separation of warfarin enantiomers and the internal standard d5-warfarin was achieved using an Astec Chirobiotic V column with a gradient mobile phase at a flow rate of 400 µL/min over 10 min. Detection was performed on a TSQ Quantum Ultra triple quadrupole mass spectrometer equipped with a heated electrospray ionization source. Analytes were detected in negative ionization mode using selected reaction monitoring. Calibration curves were linear with a correlation coefficient of ≥0.996 for both enantiomers over a concentration range of 5-500 ng/mL. The intra- and interday accuracy and precision for both analytes were within ±9.0%. Excellent extraction efficiency and negligible matrix effects were observed. The applicability of the method was demonstrated by successful measurement of warfarin enantiomers in the urine of patients with kidney disease. The method is simple, accurate and reproducible and is currently being used to support warfarin pharmacokinetic studies. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  16. Use of paired simple and complex models to reduce predictive bias and quantify uncertainty

    NASA Astrophysics Data System (ADS)

    Doherty, John; Christensen, Steen

    2011-12-01

    Modern environmental management and decision-making is based on the use of increasingly complex numerical models. Such models have the advantage of allowing representation of complex processes and heterogeneous system property distributions inasmuch as these are understood at any particular study site. The latter are often represented stochastically, this reflecting knowledge of the character of system heterogeneity at the same time as it reflects a lack of knowledge of its spatial details. Unfortunately, however, complex models are often difficult to calibrate because of their long run times and sometimes questionable numerical stability. Analysis of predictive uncertainty is also a difficult undertaking when using models such as these. Such analysis must reflect a lack of knowledge of spatial hydraulic property details. At the same time, it must be subject to constraints on the spatial variability of these details born of the necessity for model outputs to replicate observations of historical system behavior. In contrast, the rapid run times and general numerical reliability of simple models often promote good calibration and ready implementation of sophisticated methods of calibration-constrained uncertainty analysis. Unfortunately, however, many system and process details on which uncertainty may depend are, by design, omitted from simple models. This can lead to underestimation of the uncertainty associated with many predictions of management interest. The present paper proposes a methodology that attempts to overcome the problems associated with complex models on the one hand and simple models on the other hand, while allowing access to the benefits each of them offers. It provides a theoretical analysis of the simplification process from a subspace point of view, this yielding insights into the costs of model simplification, and into how some of these costs may be reduced. It then describes a methodology for paired model usage through which predictive bias of a simplified model can be detected and corrected, and postcalibration predictive uncertainty can be quantified. The methodology is demonstrated using a synthetic example based on groundwater modeling environments commonly encountered in northern Europe and North America.

  17. Simultaneous determination of umbelliferone and scopoletin in Tibetan medicine Saussurea laniceps and traditional Chinese medicine Radix angelicae pubescentis using excitation-emission matrix fluorescence coupled with second-order calibration method

    NASA Astrophysics Data System (ADS)

    Wang, Li; Wu, Hai-Long; Yin, Xiao-Li; Hu, Yong; Gu, Hui-Wen; Yu, Ru-Qin

    2017-01-01

    A chemometrics-assisted excitation-emission matrix (EEM) fluorescence method is presented for simultaneous determination of umbelliferone and scopoletin in Tibetan medicine Saussurea laniceps (SL) and traditional Chinese medicine Radix angelicae pubescentis (RAP). Using the strategy of combining EEM fluorescence data with a second-order calibration method based on the alternating trilinear decomposition (ATLD) algorithm, the simultaneous quantification of umbelliferone and scopoletin in the two different complex systems was achieved successfully, even in the presence of potential interferents. The pretreatment is simple due to the "second-order advantage" and the use of "mathematical separation" instead of awkward "physical or chemical separation". Satisfactory results have been achieved with the limits of detection (LODs) of umbelliferone and scopoletin being 0.06 ng mL-1 and 0.16 ng mL-1, respectively. The average spike recoveries of umbelliferone and scopoletin are 98.8 ± 4.3% and 102.5 ± 3.3%, respectively. Besides, an HPLC-DAD method was used to further validate the presented strategy, and a t-test indicates that the prediction results of the two methods have no significant differences. Satisfactory experimental results imply that our method is fast, low-cost and sensitive when compared with the HPLC-DAD method.

  18. Development of normalized spectra manipulating spectrophotometric methods for simultaneous determination of Dimenhydrinate and Cinnarizine binary mixture.

    PubMed

    Lamie, Nesrine T; Yehia, Ali M

    2015-01-01

    Simultaneous determination of the Dimenhydrinate (DIM) and Cinnarizine (CIN) binary mixture was carried out with simple procedures. Three ratio-manipulating spectrophotometric methods were proposed, in which a normalized spectrum was utilized as a divisor for simultaneous determination of both drugs with a minimum of manipulation steps. The proposed methods were simultaneous constant center (SCC), simultaneous derivative ratio spectrophotometry (S(1)DD) and the ratio H-point standard addition method (RHPSAM). Peak amplitudes at the isoabsorptive point in the ratio spectra were measured to determine the total concentration of DIM and CIN. For subsequent determination of the DIM concentration, the difference between peak amplitudes at 250 nm and 267 nm was used in SCC, while the peak amplitude at 275 nm of the first-derivative ratio spectra was used in S(1)DD; subtraction of the DIM concentration from the total then provided the CIN concentration. RHPSAM was a dual-wavelength method in which two calibration lines were plotted at 220 nm and 230 nm; the coordinates of the intersection point between the two calibration lines corresponded to the DIM and CIN concentrations. The proposed methods were successfully applied to combined dosage form analysis. Moreover, a statistical comparison between the proposed and reported spectrophotometric methods was carried out. Copyright © 2015 Elsevier B.V. All rights reserved.
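
    The dual-wavelength idea behind the last of these methods can be illustrated, in a deliberately simplified form, by solving a small linear system built from calibration absorptivities at two wavelengths. This is a generic Beer's-law sketch with invented absorptivity values, not the RHPSAM procedure or the actual DIM/CIN calibration data.

        # Generic two-wavelength resolution of a binary mixture via Beer's law.
        # The absorptivity and absorbance values are invented for illustration.
        import numpy as np

        # Calibration absorptivities (AU per ug/mL) at 220 nm and 230 nm.
        K = np.array([[0.042, 0.015],   # 220 nm: [component 1, component 2]
                      [0.018, 0.037]])  # 230 nm

        # Measured absorbances of the mixture at the two wavelengths.
        A = np.array([0.512, 0.448])

        # Solve K @ c = A for the two concentrations (ug/mL).
        c = np.linalg.solve(K, A)
        print(f"c1 = {c[0]:.2f} ug/mL, c2 = {c[1]:.2f} ug/mL")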

  19. (Pre-) calibration of a Reduced Complexity Model of the Antarctic Contribution to Sea-level Changes

    NASA Astrophysics Data System (ADS)

    Ruckert, K. L.; Guan, Y.; Shaffer, G.; Forest, C. E.; Keller, K.

    2015-12-01

    Understanding and projecting future sea-level changes poses nontrivial challenges. Sea-level changes are driven primarily by changes in the density of seawater as well as changes in the size of glaciers and ice sheets. Previous studies have demonstrated that a key source of uncertainties surrounding sea-level projections is the response of the Antarctic ice sheet to warming temperatures. Here we calibrate a previously published and relatively simple model of the Antarctic ice sheet over a hindcast period from the last interglacial period to the present. We apply and compare a range of (pre-) calibration methods, including a Bayesian approach that accounts for heteroskedasticity. We compare the model hindcasts and projections for different levels of model complexity and calibration methods. We compare the projections with the upper bounds from previous studies and find our projections have a narrower range in 2100. Furthermore, we discuss the implications for the design of climate risk management strategies.

  20. Single-Vector Calibration of Wind-Tunnel Force Balances

    NASA Technical Reports Server (NTRS)

    Parker, P. A.; DeLoach, R.

    2003-01-01

    An improved method of calibrating a wind-tunnel force balance involves the use of a unique load application system integrated with formal experimental design methodology. The Single-Vector Force Balance Calibration System (SVS) overcomes the productivity and accuracy limitations of prior calibration methods. A force balance is a complex structural spring element instrumented with strain gauges for measuring three orthogonal components of aerodynamic force (normal, axial, and side force) and three orthogonal components of aerodynamic torque (rolling, pitching, and yawing moments). Force balances remain the state-of-the-art instruments that provide these measurements on a scale model of an aircraft during wind tunnel testing. Ideally, each electrical channel of the balance would respond only to its respective component of load, and it would have no response to other components of load. This is not entirely possible even though balance designs are optimized to minimize these undesirable interaction effects. Ultimately, a calibration experiment is performed to obtain the necessary data to generate a mathematical model and determine the force measurement accuracy. In order to set the independent variables of applied load for the calibration experiment, a high-precision mechanical system is required. Manual deadweight systems have been in use at Langley Research Center (LaRC) since the 1940s. These simple methodologies produce high confidence results, but the process is mechanically complex and labor-intensive, requiring three to four weeks to complete. Over the past decade, automated balance calibration systems have been developed. In general, these systems were designed to automate the tedious manual calibration process, resulting in an even more complex system which deteriorates load application quality. The current calibration approach relies on a one-factor-at-a-time (OFAT) methodology, where each independent variable is incremented individually throughout its full-scale range, while all other variables are held at a constant magnitude. This OFAT approach has been widely accepted because of its inherent simplicity and intuitive appeal to the balance engineer. LaRC has been conducting research in a "modern design of experiments" (MDOE) approach to force balance calibration. Formal experimental design techniques provide an integrated view of the entire calibration process, covering all three major aspects of an experiment: the design of the experiment, the execution of the experiment, and the statistical analyses of the data. In order to overcome the weaknesses in the available mechanical systems and to apply formal experimental techniques, a new mechanical system was required. The SVS enables the complete calibration of a six-component force balance with a series of single force vectors.
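
    At its core, the calibration experiment amounts to regressing measured bridge outputs on combinations of applied loads. The sketch below fits such a model by least squares using linear, squared, and cross-product load terms on synthetic data; the term structure, array sizes, and noise levels are assumptions for illustration, not the SVS or LaRC procedure.

        # Sketch of fitting a force-balance calibration model by least squares
        # on synthetic data; a real calibration uses the measured bridge outputs
        # for each applied load combination from the calibration experiment.
        import numpy as np

        rng = np.random.default_rng(0)
        n_points, n_components = 60, 6

        # Applied loads (normal, axial, side force, roll, pitch, yaw) per point.
        loads = rng.uniform(-1.0, 1.0, size=(n_points, n_components))

        # Design matrix with linear terms plus squared and cross-product terms,
        # which capture the undesirable interaction effects described above.
        pairs = [(i, j) for i in range(n_components) for j in range(i, n_components)]
        X = np.hstack([loads] + [(loads[:, i] * loads[:, j])[:, None] for i, j in pairs])

        # Pretend bridge outputs (one column per strain-gauge channel).
        true_coeffs = rng.normal(0, 0.1, size=(X.shape[1], n_components))
        outputs = X @ true_coeffs + rng.normal(0, 1e-3, size=(n_points, n_components))

        # Least-squares estimate of the calibration coefficient matrix.
        coeffs, *_ = np.linalg.lstsq(X, outputs, rcond=None)
        print("fitted coefficient matrix shape:", coeffs.shape)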

  1. Effect of nonideal square-law detection on static calibration in noise-injection radiometers

    NASA Technical Reports Server (NTRS)

    Hearn, C. P.

    1984-01-01

    The effect of nonideal square-law detection on the static calibration for a class of Dicke radiometers is examined. It is shown that fourth-order curvature in the detection characteristic adds a nonlinear term to the linear calibration relationship normally ascribed to noise-injection, balanced Dicke radiometers. The minimum error, based on an optimum straight-line fit to the calibration curve, is derived in terms of the power series coefficients describing the input-output characteristics of the detector. These coefficients can be determined by simple measurements, and detection nonlinearity is, therefore, quantitatively related to radiometric measurement error.

  2. Feasibility study on the verification of actual beam delivery in a treatment room using EPID transit dosimetry.

    PubMed

    Baek, Tae Seong; Chung, Eun Ji; Son, Jaeman; Yoon, Myonggeun

    2014-12-04

    The aim of this study is to evaluate the ability of transit dosimetry, using a commercial treatment planning system (TPS) and an electronic portal imaging device (EPID) with a simple calibration method, to verify beam delivery based on the detection of large errors in the treatment room. Twenty-four fields of intensity modulated radiotherapy (IMRT) plans were selected from four lung cancer patients and used in the irradiation of an anthropomorphic phantom. The proposed method was evaluated by comparing the calculated dose map from the TPS and the EPID measurement on the same plane using a gamma index method with a 3% dose-difference and 3 mm distance-to-agreement tolerance limit. In a simulation using a homogeneous plastic water phantom, performed to verify the effectiveness of the proposed method, the average passing rate of the transit dose based on the gamma index was high, averaging 94.2% when there was no error during beam delivery. The passing rate of the transit dose for the 24 IMRT fields was lower with the anthropomorphic phantom, averaging 86.8% ± 3.8%, a reduction partially due to the inaccuracy of TPS calculations for inhomogeneity. Compared with the TPS, the absolute value of the transit dose at the beam center differed by -0.38% ± 2.1%. The simulation study indicated that the passing rate of the gamma index was significantly reduced, to less than 40%, when a wrong field was erroneously irradiated to a patient in the treatment room. This feasibility study suggested that transit dosimetry based on calculations with a commercial TPS and EPID measurements with a simple calibration can provide information about large errors in treatment beam delivery.
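
    The passing-rate metric can be illustrated with a brute-force global 2-D gamma evaluation at the 3%/3 mm criteria. This is a simplified stand-in written for illustration (global normalization, nearest-pixel search, synthetic dose maps), not the clinical software used in the study.

        # Brute-force global 2-D gamma analysis with 3% / 3 mm criteria.
        import numpy as np

        def gamma_passing_rate(ref, evl, pixel_mm=1.0, dose_tol=0.03, dist_tol_mm=3.0):
            ny, nx = ref.shape
            yy, xx = np.mgrid[0:ny, 0:nx]
            norm = ref.max()                      # global dose normalization
            search = int(np.ceil(dist_tol_mm / pixel_mm)) + 1
            passed = 0
            for i in range(ny):
                for j in range(nx):
                    y0, y1 = max(0, i - search), min(ny, i + search + 1)
                    x0, x1 = max(0, j - search), min(nx, j + search + 1)
                    dd = (evl[y0:y1, x0:x1] - ref[i, j]) / (dose_tol * norm)
                    dr = np.hypot(yy[y0:y1, x0:x1] - i,
                                  xx[y0:y1, x0:x1] - j) * pixel_mm / dist_tol_mm
                    passed += np.sqrt(dd ** 2 + dr ** 2).min() <= 1.0
            return 100.0 * passed / (ny * nx)

        # Tiny synthetic example: the evaluated map is a slightly scaled reference.
        ref = np.outer(np.hanning(40), np.hanning(40))
        evl = 1.02 * ref
        print(f"passing rate = {gamma_passing_rate(ref, evl):.1f} %")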

  3. The construction of a highly transportable laser ranging station

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The technology of the transportable Laser Ranging Station (TLRS) used in crustal dynamics studies was examined. The TLRS used a single photoelectron beam of limited energy density returned from the Laser Geodynamic Satellite (LAGEOS). Calibration was accomplished by the diversion of a small portion of the outgoing beam attenuated to the same level as the satellite return. Timing for the system was based on a self-calibrating Ortec TD811 100-picosecond time-interval device. The system was contained in a modified, single-chassis recreational vehicle that allowed rapid deployment. The TLRS system was air-mobile only on the largest transport aircraft. A 30 cm simple plano/concave transfer lens telescope aided in beam direction. The TLRS system fulfills the need for an accurate method of obtaining range measurements to the LAGEOS satellite incorporated in a mobile, air transportable, and economical configuration.

  4. LIDAR TS for ITER core plasma. Part III: calibration and higher edge resolution

    NASA Astrophysics Data System (ADS)

    Nielsen, P.; Gowers, C.; Salzmann, H.

    2017-12-01

    Calibration, after initial installation, of the proposed two-wavelength LIDAR Thomson Scattering System requires no access to the front end and does not require a foreign gas fill for Raman scattering. As already described, the variation of the solid angle of collection with scattering position is a simple geometrical variation over the unvignetted region. The additional loss over the vignetted region can easily be estimated and, in the case of a small beam dump located between the Be tiles, it is within the specified accuracy of the density. The only additional calibration is the absolute spectral transmission of the front-end optics. Over time we expect the transmission of the two front-end mirrors to suffer a deterioration mainly due to depositions. The reduction in transmission is likely to be worse towards the blue end of the scattering spectrum. It is therefore necessary to have a method to monitor such changes and to determine their spectral variation. Standard methods use two lasers at different wavelengths with a small time separation. Using the two-wavelength approach, a method has been developed to determine the relative spectral variation of the transmission loss, using simply the measured signals in plasmas with peak temperatures of 4-6 keV. Comparing the calculated line integral of the fitted density over the full chord to the corresponding interferometer data, we also have an absolute calibration. At the outer plasma boundary, the standard resolution of the LIDAR Thomson Scattering System is not sufficient to determine the edge gradient in an H-mode plasma. However, because of the step-like nature of the signal here, it is possible to carry out a deconvolution of the scattered signals, thereby achieving an effective resolution of ~1-2 cm in the outer 10-20 cm.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keller, J; Lockamy, V; Harrison, A

    Purpose: To support radiobiological research with the Xstrahl small animal radiation research platform (SARRP) by developing a simple and effective method using commercially available optically stimulated luminescent dosimeters (OSLDs) that ensures dose output consistency. Methods: The SARRP output is calibrated according to the vendor standards and the TG-61 protocol utilizing an ADCL-calibrated ion chamber and electrometer at 2 cm depth of solid water. A cross calibration is performed by replacing the ion chamber with five OSLDs at the 2 cm depth. The OSLDs are irradiated to 500 cGy with 220 keV at 13 mA (78 s delivery time) with a copper filter for an uncollimated 17×17 cm² aperture. Instead of the absolute dose, the total amount of raw counts is collected from the OSLD reader and used for analysis. This constancy procedure was performed two more times over the course of three weeks with two OSLDs for validity. Results: The average reading for all OSLDs is 494939 with a 1-sigma standard deviation of 5.8%. With an acceptable dose output range of ±10%, the OSLD readings have a counts range of [445445, 544433]. Conclusion: This method of using nanoDot™ OSLDs to perform output constancy checks for the SARRP ensures the output of the device is within 10% from the time of calibration and is convenient as well as time efficient. Because of this, the frequency of output checks can be increased, which can improve the output stability for research with this device. The output trend of the SARRP will continue to be monitored in the future to establish a timeline for constancy checks and recalibration.

  6. Redox-iodometry: a new potentiometric method.

    PubMed

    Gottardi, Waldemar; Pfleiderer, Jörg

    2005-07-01

    A new iodometric method for quantifying aqueous solutions of iodide-oxidizing and iodine-reducing substances, as well as plain iodine/iodide solutions, is presented. It is based on the redox potential of said solutions after reaction with iodide (or iodine) of known initial concentration. Calibration of the system and calculation of unknown concentrations were performed on the basis of developed algorithms and simple GWBASIC programs. The method is distinguished by a short analysis time (2-3 min) and simple instrumentation consisting of a pH/mV meter, platinum and reference electrodes. In general the feasible concentration range encompasses 0.1 to 10(-6) mol/L, although it goes down to 10(-8) mol/L (0.001 mg Cl2/L) for oxidants like active chlorine compounds. The calculated imprecision and inaccuracy of the method were found to be 0.4-0.9% and 0.3-0.8%, respectively, resulting in a total error of 0.5-1.2%. Based on the experiments, average imprecisions of 1.0-1.5% at c(Ox)>10(-5) M, 1.5-3% at 10(-5) to 10(-7) M, and 4-7% at <10(-7) M were found. Redox-iodometry is a simple, precise, and time-saving substitute for the more laborious and expensive iodometric titration method, which, like other well-established colorimetric procedures, is clearly outperformed at low concentrations; this underlines the practical importance of redox-iodometry.

  7. Application of Certain π-Acceptors for the Spectrophotometric Determination of Alendronate Sodium in Pharmaceutical Bulk and Dosage Forms.

    PubMed

    Raza, Asad; Zia-Ul-Haq, Muhammad

    2011-01-01

    Two simple, fast, and accurate spectrophotometric methods for the determination of alendronate sodium are described. The methods are based on charge-transfer complex formation of the drug with the two π-electron acceptors 7,7,8,8-tetracyanoquinodimethane (TCNQ) and 2,3-dichloro-5,6-dicyano-1,4-benzoquinone (DDQ) in acetonitrile and methanol medium. The methods are followed spectrophotometrically by measuring the maximum absorbance at 840 nm and 465 nm, respectively. Under the optimized experimental conditions, the calibration curves showed a linear relationship over the concentration ranges of 2-10 μg mL(-1) and 2-12 μg mL(-1), respectively. The optimal reaction conditions, such as reagent concentration, heating time, and stability of the reaction product, were determined. No significant difference was obtained between the results of the newly proposed methods and the B.P. titrimetric procedures. The charge transfer approach using TCNQ and DDQ procedures described in this paper is simple, fast, accurate, precise, and extraction-free.

  8. Application of Certain π-Acceptors for the Spectrophotometric Determination of Alendronate Sodium in Pharmaceutical Bulk and Dosage Forms

    PubMed Central

    Raza, Asad; Zia-ul-Haq, Muhammad

    2011-01-01

    Two simple, fast, and accurate spectrophotometric methods for the determination of alendronate sodium are described. The methods are based on charge-transfer complex formation of the drug with the two π-electron acceptors 7,7,8,8-tetracyanoquinodimethane (TCNQ) and 2,3-dichloro-5,6-dicyano-1,4-benzoquinone (DDQ) in acetonitrile and methanol medium. The methods are followed spectrophotometrically by measuring the maximum absorbance at 840 nm and 465 nm, respectively. Under the optimized experimental conditions, the calibration curves showed a linear relationship over the concentration ranges of 2–10 μg mL−1 and 2–12 μg mL−1, respectively. The optimal reaction conditions, such as reagent concentration, heating time, and stability of the reaction product, were determined. No significant difference was obtained between the results of the newly proposed methods and the B.P. titrimetric procedures. The charge transfer approach using TCNQ and DDQ procedures described in this paper is simple, fast, accurate, precise, and extraction-free. PMID:21760789

  9. Calibration transfer of a Raman spectroscopic quantification method for the assessment of liquid detergent compositions between two at-line instruments installed at two liquid detergent production plants.

    PubMed

    Brouckaert, D; Uyttersprot, J-S; Broeckx, W; De Beer, T

    2017-09-01

    Calibration transfer of partial least squares (PLS) quantification models is established between two Raman spectrometers located at two liquid detergent production plants. As full recalibration of existing calibration models is time-consuming, labour-intensive and costly, it is investigated whether the use of mathematical correction methods requiring only a handful of standardization samples can overcome the dissimilarities in spectral response observed between both measurement systems. Univariate and multivariate standardization approaches are investigated, ranging from simple slope/bias correction (SBC), local centring (LC) and single wavelength standardization (SWS) to more complex direct standardization (DS) and piecewise direct standardization (PDS). The results of these five calibration transfer methods are compared reciprocally, as well as with regard to a full recalibration. Four PLS quantification models, each predicting the concentration of one of the four main ingredients in the studied liquid detergent composition, are targeted for transfer. Accuracy profiles are established from the original and transferred quantification models for validation purposes. A reliable representation of the calibration models' performance before and after transfer is thus established, based on β-expectation tolerance intervals. For each transferred model, it is investigated whether every future measurement that will be performed in routine use will be close enough to the unknown true value of the sample. From this validation, it is concluded that instrument standardization is successful for three out of four investigated calibration models using multivariate (DS and PDS) transfer approaches. The fourth transferred PLS model could not be validated over the investigated concentration range, due to a lack of precision of the slave instrument. Comparing these transfer results to a full recalibration on the slave instrument allows comparison of the predictive power of both Raman systems and leads to the formulation of guidelines for further standardization projects. It is concluded that it is essential to evaluate the performance of the slave instrument prior to transfer, even when it is theoretically identical to the master apparatus. Copyright © 2017 Elsevier B.V. All rights reserved.
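
    The direct standardization (DS) step can be sketched with a few lines of linear algebra: a transformation matrix estimated from standardization samples measured on both instruments maps slave spectra onto the master's response. The synthetic spectra, sample counts, and distortion model below are assumptions for illustration, not the validated transfer of the paper.

        # Minimal direct-standardization (DS) sketch on synthetic spectra.
        import numpy as np

        rng = np.random.default_rng(1)
        n_std, n_channels = 8, 200

        master_std = rng.normal(size=(n_std, n_channels))           # master spectra
        slave_std = master_std * 0.95 + 0.02 + rng.normal(0, 0.01,  # distorted copy
                                                          size=(n_std, n_channels))

        # Transformation matrix F such that slave_std @ F approximates master_std.
        F = np.linalg.pinv(slave_std) @ master_std

        # A new slave spectrum is corrected before applying the master's PLS model.
        new_slave = rng.normal(size=(1, n_channels)) * 0.95 + 0.02
        corrected = new_slave @ F
        print("corrected spectrum shape:", corrected.shape)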

  10. Calibrating the simple biosphere model for Amazonian tropical forest using field and remote sensing data. I - Average calibration with field data

    NASA Technical Reports Server (NTRS)

    Sellers, Piers J.; Shuttleworth, W. James; Dorman, Jeff L.; Dalcher, Amnon; Roberts, John M.

    1989-01-01

    Using meteorological and hydrological measurements taken in and above the central-Amazon-basin tropical forest, calibration of the Sellers et al. (1986) simple biosphere (SiB) model is described. The SiB model is a one-dimensional soil-vegetation-atmosphere model designed for use within general circulation models (GCMs), representing the vegetation cover by analogy with processes operating within a single representative plant. The experimental systems and the procedures used to obtain field data are described, together with the specification of the physiological parameterization required to provide an average description of the data. It was found that some of the existing literature on stomatal behavior for tropical species is inconsistent with the observed behavior of the complete canopy in Amazonia, and that the rainfall interception store of the canopy is considerably smaller than originally specified in the SiB model.

  11. Model-based monitoring of stormwater runoff quality.

    PubMed

    Birch, Heidi; Vezzaro, Luca; Mikkelsen, Peter Steen

    2013-01-01

    Monitoring of micropollutants (MP) in stormwater is essential to evaluate the impacts of stormwater on the receiving aquatic environment. The aim of this study was to investigate how different strategies for monitoring of stormwater quality (combining a model with field sampling) affect the information obtained about MP discharged from the monitored system. A dynamic stormwater quality model was calibrated using MP data collected by automatic volume-proportional sampling and passive sampling in a storm drainage system on the outskirts of Copenhagen (Denmark) and a 10-year rain series was used to find annual average (AA) and maximum event mean concentrations. Use of this model reduced the uncertainty of predicted AA concentrations compared to a simple stochastic method based solely on data. The predicted AA concentration, obtained by using passive sampler measurements (1 month installation) for calibration of the model, resulted in the same predicted level but with narrower model prediction bounds than by using volume-proportional samples for calibration. This shows that passive sampling allows for a better exploitation of the resources allocated for stormwater quality monitoring.

  12. Analysis and Calibration of Sources of Electronic Error in PSD Sensor Response.

    PubMed

    Rodríguez-Navarro, David; Lázaro-Galilea, José Luis; Bravo-Muñoz, Ignacio; Gardel-Vicente, Alfredo; Tsirigotis, Georgios

    2016-04-29

    In order to obtain very precise measurements of the position of agents located at a considerable distance using a sensor system based on position sensitive detectors (PSD), it is necessary to analyze and mitigate the factors that generate substantial errors in the system's response. These sources of error can be divided into electronic and geometric factors. The former stem from the nature and construction of the PSD as well as the performance, tolerances and electronic response of the system, while the latter are related to the sensor's optical system. Here, we focus solely on the electrical effects, since the study, analysis and correction of these are a prerequisite for subsequently addressing geometric errors. A simple calibration method is proposed, which considers PSD response, component tolerances, temperature variations, signal frequency used, signal to noise ratio (SNR), suboptimal operational amplifier parameters, and analog to digital converter (ADC) quantitation SNRQ, etc. Following an analysis of these effects and calibration of the sensor, it was possible to correct the errors, thus rendering the effects negligible, as reported in the results section.

  13. Analysis and Calibration of Sources of Electronic Error in PSD Sensor Response

    PubMed Central

    Rodríguez-Navarro, David; Lázaro-Galilea, José Luis; Bravo-Muñoz, Ignacio; Gardel-Vicente, Alfredo; Tsirigotis, Georgios

    2016-01-01

    In order to obtain very precise measurements of the position of agents located at a considerable distance using a sensor system based on position sensitive detectors (PSD), it is necessary to analyze and mitigate the factors that generate substantial errors in the system’s response. These sources of error can be divided into electronic and geometric factors. The former stem from the nature and construction of the PSD as well as the performance, tolerances and electronic response of the system, while the latter are related to the sensor’s optical system. Here, we focus solely on the electrical effects, since the study, analysis and correction of these are a prerequisite for subsequently addressing geometric errors. A simple calibration method is proposed, which considers PSD response, component tolerances, temperature variations, signal frequency used, signal to noise ratio (SNR), suboptimal operational amplifier parameters, and analog to digital converter (ADC) quantitation SNRQ, etc. Following an analysis of these effects and calibration of the sensor, it was possible to correct the errors, thus rendering the effects negligible, as reported in the results section. PMID:27136562

  14. Multivariate methods on the excitation emission matrix fluorescence spectroscopic data of diesel-kerosene mixtures: a comparative study.

    PubMed

    Divya, O; Mishra, Ashok K

    2007-05-29

    Quantitative determination of the kerosene fraction present in diesel has been carried out based on excitation emission matrix fluorescence (EEMF) along with parallel factor analysis (PARAFAC) and N-way partial least squares regression (N-PLS). EEMF is a simple, sensitive and nondestructive method suitable for the analysis of multifluorophoric mixtures. Calibration models consisting of varying compositions of diesel and kerosene were constructed and their validation was carried out using the leave-one-out cross-validation method. The accuracy of the model was evaluated through the root mean square error of prediction (RMSEP) for the PARAFAC, N-PLS and unfold-PLS methods. N-PLS was found to be a better method compared to the PARAFAC and unfold-PLS methods because of its low RMSEP values.
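
    A minimal version of the unfold-PLS route, with a leave-one-out estimate of RMSEP, can be sketched as below on synthetic EEM data; a proper PARAFAC or N-PLS model would use a dedicated multiway package. The sample counts, spectral profiles, and noise level are assumptions made for the example.

        # Unfolded-PLS calibration of synthetic EEM data with leave-one-out
        # cross-validation, illustrating how an RMSEP value is obtained.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import LeaveOneOut

        rng = np.random.default_rng(2)
        n_samples, n_ex, n_em = 12, 20, 30

        # Synthetic EEMs: intensity proportional to the kerosene fraction plus noise.
        kerosene_frac = np.linspace(0.0, 0.5, n_samples)
        profile = np.outer(np.hanning(n_ex), np.hanning(n_em)).ravel()
        X = (kerosene_frac[:, None] * profile[None, :]
             + rng.normal(0, 0.01, (n_samples, n_ex * n_em)))

        errors = []
        for train, test in LeaveOneOut().split(X):
            pls = PLSRegression(n_components=2).fit(X[train], kerosene_frac[train])
            errors.append(pls.predict(X[test]).ravel()[0] - kerosene_frac[test][0])

        rmsep = np.sqrt(np.mean(np.square(errors)))
        print(f"RMSEP = {rmsep:.4f}")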

  15. Emerging Techniques for Vicarious Calibration of Visible Through Short Wave Infrared Remote Sensing Systems

    NASA Technical Reports Server (NTRS)

    Ryan, Robert E.

    2006-01-01

    A simple field-portable white-light LED calibration source shows promise for the visible range (420-750 nm): 1) a prototype demonstrated <0.5% drift over a 10-40 C temperature range; 2) additional complexity (more LEDs) will be necessary to extend the spectral range into the NIR and SWIR; 3) long LED lifetimes should provide at least several hundred hours of stability, minimizing the need for expensive calibrations and supporting long-duration field campaigns; and 4) it is an enabling technology for developing autonomous sites.

  16. An X-ray diffraction method for semiquantitative mineralogical analysis of Chilean nitrate ore

    USGS Publications Warehouse

    Jackson, J.C.; Ericksen, G.E.

    1997-01-01

    Computer analysis of X-ray diffraction (XRD) data provides a simple method for determining the semiquantitative mineralogical composition of naturally occurring mixtures of saline minerals. The method herein described was adapted from a computer program for the study of mixtures of naturally occurring clay minerals. The program evaluates the relative intensities of selected diagnostic peaks for the minerals in a given mixture, and then calculates the relative concentrations of these minerals. The method requires precise calibration of XRD data for the minerals to be studied and selection of diffraction peaks that minimize inter-compound interferences. The calculated relative abundances are sufficiently accurate for direct comparison with bulk chemical analyses of naturally occurring saline mineral assemblages.
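
    One simple way to turn diagnostic peak intensities into relative abundances is a reference-intensity-ratio (matrix-flushing) normalization, sketched below. The mineral names, intensities, and RIR values are placeholders; the actual USGS program and its calibration are not reproduced here.

        # Semiquantitative relative abundances from diagnostic XRD peak
        # intensities, using a generic reference-intensity-ratio normalization.
        def relative_abundances(peak_intensity, rir):
            scaled = {m: peak_intensity[m] / rir[m] for m in peak_intensity}
            total = sum(scaled.values())
            return {m: 100.0 * v / total for m, v in scaled.items()}

        intensities = {"halite": 1200.0, "nitratine": 850.0, "darapskite": 400.0}
        rir_values = {"halite": 4.4, "nitratine": 3.1, "darapskite": 1.8}

        for mineral, weight_pct in relative_abundances(intensities, rir_values).items():
            print(f"{mineral:10s} {weight_pct:5.1f} %")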

  17. An X-ray diffraction method for semiquantitative mineralogical analysis of Chilean nitrate ore

    USGS Publications Warehouse

    Jackson, J.C.; Ericksen, G.E.

    1997-01-01

    Computer analysis of X-ray diffraction (XRD) data provides a simple method for determining the semiquantitative mineralogical composition of naturally occurring mixtures of saline minerals. The method herein described was adapted from a computer program for the study of mixtures of naturally occurring clay minerals. The program evaluates the relative intensities of selected diagnostic peaks for the minerals in a given mixture, and then calculates the relative concentrations of these minerals. The method requires precise calibration of XRD data for the minerals to be studied and selection of diffraction peaks that minimize inter-compound interferences. The calculated relative abundances are sufficiently accurate for direct comparison with bulk chemical analyses of naturally occurring saline mineral assemblages.

  18. 3-D imaging of large scale buried structure by 1-D inversion of very early time electromagnetic (VETEM) data

    USGS Publications Warehouse

    Aydmer, A.A.; Chew, W.C.; Cui, T.J.; Wright, D.L.; Smith, D.V.; Abraham, J.D.

    2001-01-01

    A simple and efficient method for large scale three-dimensional (3-D) subsurface imaging of inhomogeneous background is presented. One-dimensional (1-D) multifrequency distorted Born iterative method (DBIM) is employed in the inversion. Simulation results utilizing synthetic scattering data are given. Calibration of the very early time electromagnetic (VETEM) experimental waveforms is detailed along with major problems encountered in practice and their solutions. This discussion is followed by the results of a large scale application of the method to the experimental data provided by the VETEM system of the U.S. Geological Survey. The method is shown to have a computational complexity that is promising for on-site inversion.

  19. Method development towards qualitative and semi-quantitative analysis of multiple pesticides from food surfaces and extracts by desorption electrospray ionization mass spectrometry as a preselective tool for food control.

    PubMed

    Gerbig, Stefanie; Stern, Gerold; Brunn, Hubertus E; Düring, Rolf-Alexander; Spengler, Bernhard; Schulz, Sabine

    2017-03-01

    Direct analysis of fruit and vegetable surfaces is an important tool for in situ detection of food contaminants such as pesticides. We tested three different ways to prepare samples for the qualitative desorption electrospray ionization mass spectrometry (DESI-MS) analysis of 32 pesticides found on nine authentic fruits collected from food control. Best recovery rates for topically applied pesticides (88%) were found by analyzing the surface of a glass slide which had been rubbed against the surface of the food. Pesticide concentration in all samples was at or below the maximum residue level allowed. In addition to the high sensitivity of the method for qualitative analysis, quantitative or, at least, semi-quantitative information is needed in food control. We developed a DESI-MS method for the simultaneous determination of linear calibration curves of multiple pesticides of the same chemical class using normalization to one internal standard (ISTD). The method was first optimized for food extracts and subsequently evaluated for the quantification of pesticides in three authentic food extracts. Next, pesticides and the ISTD were applied directly onto food surfaces, and the corresponding calibration curves were obtained. The determination of linear calibration curves was still feasible, as demonstrated for three different food surfaces. This proof-of-principle method was used to simultaneously quantify two pesticides on an authentic sample, showing that the method developed could serve as a fast and simple preselective tool for disclosure of pesticide regulation violations. Graphical abstract: Multiple pesticide residues were detected and quantified in situ from an authentic set of food items and extracts in a proof-of-principle study.
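
    The internal-standard normalization underlying the calibration curves can be sketched as a simple linear regression of the analyte-to-ISTD signal ratio against concentration; all numbers below are invented for illustration and do not come from the study.

        # Internal-standard-normalized linear calibration and a semi-quantitative
        # estimate for an unknown sample, on invented numbers.
        import numpy as np

        conc = np.array([0.05, 0.1, 0.5, 1.0, 2.0])          # pesticide level (mg/kg)
        analyte_signal = np.array([210, 400, 2100, 4150, 8300.0])
        istd_signal = np.array([1000, 980, 1020, 1010, 995.0])

        ratio = analyte_signal / istd_signal
        slope, intercept = np.polyfit(conc, ratio, 1)

        # Unknown sample: analyte and ISTD intensities measured in the same scan.
        unknown_ratio = 3.1 / 1.0
        estimate = (unknown_ratio - intercept) / slope
        print(f"estimated level ~ {estimate:.2f} mg/kg")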

  20. A surface hydrology model for regional vector borne disease models

    NASA Astrophysics Data System (ADS)

    Tompkins, Adrian; Asare, Ernest; Bomblies, Arne; Amekudzi, Leonard

    2016-04-01

    Small, sun-lit temporary pools that form during the rainy season are important breeding sites for many key mosquito vectors responsible for the transmission of malaria and other diseases. The representation of this surface hydrology in mathematical disease models is challenging, due to the pools' small scale, their dependence on the terrain, and the difficulty of setting soil parameters. Here we introduce a model that represents the temporal evolution of the aggregate statistics of breeding sites in a single pond fractional coverage parameter. The model is based on a simple, geometrical assumption concerning the terrain, and accounts for the processes of surface runoff, pond overflow, infiltration and evaporation. Soil moisture, soil properties and large-scale terrain slope are accounted for using a calibration parameter that sets the equivalent catchment fraction. The model is calibrated and then evaluated using in situ pond measurements in Ghana and ultra-high (10 m) resolution explicit simulations for a village in Niger. Despite the model's simplicity, it is shown to reproduce the variability and mean of the pond aggregate water coverage well for both locations and validation techniques. Example malaria simulations for Uganda will be shown using this new scheme with a generic calibration setting, evaluated using district malaria case data. Possible methods for implementing regional calibration will be briefly discussed.
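
    A toy version of such a scheme can be written as a daily water balance whose stored water is mapped to a fractional coverage through a simple geometric law. The parameter values, the power-law coverage relation, and the forcing below are illustrative assumptions, not the published model or its calibration.

        # Toy daily water balance for aggregate pond fractional coverage.
        import numpy as np

        def pond_fraction(rain_mm, k_runoff=0.3, infil_mm=5.0, evap_mm=6.0,
                          w_max=50.0, exponent=0.5):
            w = 0.0                      # stored water depth equivalent (mm)
            frac = []
            for r in rain_mm:
                w += k_runoff * r        # runoff routed into ponds
                w -= min(w, infil_mm)    # infiltration loss
                w -= min(w, evap_mm)     # evaporation loss
                w = min(w, w_max)        # overflow above maximum storage
                frac.append((w / w_max) ** exponent)
            return np.array(frac)

        rain = np.array([0, 25, 0, 0, 40, 10, 0, 0, 0, 60, 0, 0.0])
        print(np.round(pond_fraction(rain), 2))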

  1. Calibration and standardization of microwave ovens for fixation of brain and peripheral nerve tissue.

    PubMed

    Login, G R; Leonard, J B; Dvorak, A M

    1998-06-01

    Rapid and reproducible fixation of brain and peripheral nerve tissue for light and electron microscopy studies can be done in a microwave oven. In this review we report a standardized nomenclature for diverse fixation techniques that use microwave heating: (1) microwave stabilization, (2) fast and ultrafast primary microwave-chemical fixation, (3) microwave irradiation followed by chemical fixation, (4) primary chemical fixation followed by microwave irradiation, and (5) microwave fixation used in various combinations with freeze fixation. All of these methods are well suited to fix brain tissue for light microscopy. Fast primary microwave-chemical fixation is best for immunoelectron microscopy studies. We also review how the physical characteristics of the microwave frequency and the dimensions of microwave oven cavities can compromise microwave fixation results. A microwave oven can be calibrated for fixation when the following parameters are standardized: irradiation time; water load volume, initial temperature, and placement within the oven; fixative composition, volume, and initial temperature; and specimen container shape and placement within the oven. Using two recently developed calibration tools, the neon bulb array and the agar-saline-Giemsa tissue phantom, we report a simple calibration protocol that identifies regions within a microwave oven for uniform microwave fixation. Copyright 1998 Academic Press.

  2. Semi-automated in vivo solid-phase microextraction sampling and the diffusion-based interface calibration model to determine the pharmacokinetics of methoxyfenoterol and fenoterol in rats.

    PubMed

    Yeung, Joanne Chung Yan; de Lannoy, Inés; Gien, Brad; Vuckovic, Dajana; Yang, Yingbo; Bojko, Barbara; Pawliszyn, Janusz

    2012-09-12

    In vivo solid-phase microextraction (SPME) can be used to sample the circulating blood of animals without the need to withdraw a representative blood sample. In this study, in vivo SPME in combination with liquid-chromatography tandem mass spectrometry (LC-MS/MS) was used to determine the pharmacokinetics of two drug analytes, R,R-fenoterol and R,R-methoxyfenoterol, administered as 5 mg kg(-1) i.v. bolus doses to groups of 5 rats. This research illustrates, for the first time, the feasibility of the diffusion-based calibration interface model for in vivo SPME studies. To provide a constant sampling rate as required for the diffusion-based interface model, partial automation of the SPME sampling of the analytes from the circulating blood was accomplished using an automated blood sampling system. The use of the blood sampling system allowed automation of all SPME sampling steps in vivo, except for the insertion and removal of the SPME probe from the sampling interface. The results from in vivo SPME were compared to the conventional method based on blood withdrawal and sample clean up by plasma protein precipitation. Both whole blood and plasma concentrations were determined by the conventional method. The concentrations of methoxyfenoterol and fenoterol obtained by SPME generally concur with the whole blood concentrations determined by the conventional method indicating the utility of the proposed method. The proposed diffusion-based interface model has several advantages over other kinetic calibration models for in vivo SPME sampling including (i) it does not require the addition of a standard into the sample matrix during in vivo studies, (ii) it is simple and rapid and eliminates the need to pre-load appropriate standard onto the SPME extraction phase and (iii) the calibration constant for SPME can be calculated based on the diffusion coefficient, extraction time, fiber length and radius, and size of the boundary layer. In the current study, the experimental calibration constants of 338.9±30 mm(-3) and 298.5±25 mm(-3) are in excellent agreement with the theoretical calibration constants of 307.9 mm(-3) and 316.0 mm(-3) for fenoterol and methoxyfenoterol respectively. Copyright © 2012 Elsevier B.V. All rights reserved.

  3. A fast and direct spectrophotometric method for the simultaneous determination of methyl paraben and hydroquinone in cosmetic products using successive projections algorithm.

    PubMed

    Esteki, M; Nouroozi, S; Shahsavari, Z

    2016-02-01

    To develop a simple and efficient spectrophotometric technique combined with chemometrics for the simultaneous determination of methyl paraben (MP) and hydroquinone (HQ) in cosmetic products, and specifically, to: (i) evaluate the potential use of the successive projections algorithm (SPA) on derivative spectrophotometric data in order to provide sufficient accuracy and model robustness and (ii) determine MP and HQ concentration in cosmetics without tedious pre-treatments such as derivatization or extraction techniques which are time-consuming and require hazardous solvents. The absorption spectra were measured in the wavelength range of 200-350 nm. Prior to performing chemometric models, the original and first-derivative absorption spectra of binary mixtures were used as calibration matrices. Variables selected by the successive projections algorithm were used to obtain multiple linear regression (MLR) models based on a small subset of wavelengths. The number of wavelengths and the starting vector were optimized, and the comparison of the root mean square error of calibration (RMSEC) and cross-validation (RMSECV) was applied to select effective wavelengths with the least collinearity and redundancy. Principal component regression (PCR) and partial least squares (PLS) were also developed for comparison. The concentrations of the calibration matrix ranged from 0.1 to 20 μg mL(-1) for MP, and from 0.1 to 25 μg mL(-1) for HQ. The constructed models were tested on an external validation data set and finally cosmetic samples. The results indicated that successive projections algorithm-multiple linear regression (SPA-MLR), applied on the first-derivative spectra, achieved the optimal performance for the two compounds when compared with the full-spectrum PCR and PLS. The root mean square error of prediction (RMSEP) was 0.083 and 0.314 for MP and HQ, respectively. To verify the accuracy of the proposed method, a recovery study on real cosmetic samples was carried out with satisfactory results (84-112%). The proposed method, which is an environmentally friendly approach using a minimal amount of solvent, is a simple, fast and low-cost analysis method that can provide high accuracy and robust models. The suggested method does not need any complex extraction procedure which is time-consuming and requires hazardous solvents. © 2015 Society of Cosmetic Scientists and the Société Française de Cosmétologie.
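
    A bare-bones successive-projections-style wavelength selection followed by MLR can be sketched as below on synthetic spectra. The selection loop, data generation, and number of selected wavelengths are simplified assumptions for illustration, not the authors' validated SPA-MLR procedure.

        # Minimal SPA-style wavelength selection followed by multiple linear
        # regression, on synthetic two-component spectra.
        import numpy as np

        def spa_select(X, n_select, start=0):
            """Pick column indices with low collinearity by successive projection."""
            Xp = X.astype(float).copy()
            selected = [start]
            for _ in range(n_select - 1):
                v = Xp[:, selected[-1]]
                # Project all columns onto the orthogonal complement of the last pick.
                Xp = Xp - np.outer(v, v @ Xp) / (v @ v)
                norms = np.linalg.norm(Xp, axis=0)
                norms[selected] = -1.0             # never re-select a wavelength
                selected.append(int(np.argmax(norms)))
            return selected

        rng = np.random.default_rng(3)
        n_samples, n_wl = 20, 120
        conc = rng.uniform(0.1, 20.0, size=(n_samples, 2))        # two analytes
        bands = np.stack([np.hanning(n_wl), np.roll(np.hanning(n_wl), 30)])
        X = conc @ bands + rng.normal(0, 0.01, size=(n_samples, n_wl))

        idx = spa_select(X, n_select=4)
        design = np.column_stack([X[:, idx], np.ones(n_samples)])
        coef, *_ = np.linalg.lstsq(design, conc, rcond=None)
        print("selected wavelength indices:", idx)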

  4. Spectrophotometric determination of ketoprofen and its application in pharmaceutical analysis.

    PubMed

    Kormosh, Zholt; Hunka, Iryna; Basel, Yaroslav

    2009-01-01

    A new, simple, rapid and sensitive spectrophotometric method has been developed for the determination of ketoprofen in pharmaceutical preparations. The method is based on the reaction of ketoprofen with the analytical reagent Astra Phloxin FF at pH 8.0-10.8, followed by extraction of the formed ion associate into toluene and spectrophotometric detection (it has an absorption maximum at 563 nm, epsilon = 7.6 x 10(4) L x mol(-1) x cm(-1)). The calibration plot was linear from 0.8-16.0 microg x mL(-1) of ketoprofen, and the detection limit was 0.037 microg x mL(-1).

  5. Characterization of x-ray framing cameras for the National Ignition Facility using single photon pulse height analysis.

    PubMed

    Holder, J P; Benedetti, L R; Bradley, D K

    2016-11-01

    Single hit pulse height analysis is applied to National Ignition Facility x-ray framing cameras to quantify gain and gain variation in a single micro-channel plate-based instrument. This method allows the separation of gain from detectability in these photon-detecting devices. While pulse heights measured by standard-DC calibration methods follow the expected exponential distribution at the limit of a compound-Poisson process, gain-gated pulse heights follow a more complex distribution that may be approximated as a weighted sum of a few exponentials. We can reproduce this behavior with a simple statistical-sampling model.

  6. Calibration artefacts in radio interferometry - II. Ghost patterns for irregular arrays

    NASA Astrophysics Data System (ADS)

    Wijnholds, S. J.; Grobler, T. L.; Smirnov, O. M.

    2016-04-01

    Calibration artefacts, like the self-calibration bias, usually emerge when data are calibrated using an incomplete sky model. In the first paper of this series, in which we analysed calibration artefacts in data from the Westerbork Synthesis Radio Telescope, we showed that these artefacts take the form of spurious positive and negative sources, which we refer to as ghosts or ghost sources. We also developed a mathematical framework with which we could predict the ghost pattern of an east-west interferometer for a simple two-source test case. In this paper, we extend our analysis to more general array layouts. This provides us with a useful method for the analysis of ghosts that we refer to as extrapolation. Combining extrapolation with a perturbation analysis, we are able to (1) analyse the ghost pattern for a two-source test case with one modelled and one unmodelled source for an arbitrary array layout, (2) explain why some ghosts are brighter than others, (3) define a taxonomy allowing us to classify the different ghosts, (4) derive closed form expressions for the fluxes and positions of the brightest ghosts, and (5) explain the strange two-peak structure with which some ghosts manifest during imaging. We illustrate our mathematical predictions using simulations of the KAT-7 (seven-dish Karoo Array Telescope) array. These results show the explanatory power of our mathematical model. The insights gained in this paper provide a solid foundation to study calibration artefacts in arbitrary, i.e. more complicated than the two-source example discussed here, incomplete sky models or full synthesis observations including direction-dependent effects.

  7. Extensions in Pen Ink Dosimetry: Ultraviolet Calibration Applications for Primary and Secondary Schools

    ERIC Educational Resources Information Center

    Downs, Nathan; Parisi, Alfio; Powell, Samantha; Turner, Joanna; Brennan, Chris

    2010-01-01

    A technique has previously been described for secondary school-aged children to make ultraviolet (UV) dosimeters from highlighter pen ink drawn onto strips of paper. This technique required digital comparison of exposed ink paper strips with unexposed ink paper strips to determine a simple calibration function relating the degree of ink fading to…

  8. Mobile Cubesat Command and Control (Mc3) 3-Meter Dish Calibration and Capabilities

    DTIC Science & Technology

    2014-06-01

    accuracy of this simple calibration is tested by tracking the sun, an easily accessible celestial body. To track the sun, a Systems Tool Kit (STK) ... visually verified. The shadow created by the dish system when it is pointed directly at the sun is symmetrical. If the dish system is not pointed

  9. Financial model calibration using consistency hints.

    PubMed

    Abu-Mostafa, Y S

    2001-01-01

    We introduce a technique for forcing the calibration of a financial model to produce valid parameters. The technique is based on learning from hints. It converts simple curve fitting into genuine calibration, where broad conclusions can be inferred from parameter values. The technique augments the error function of curve fitting with consistency hint error functions based on the Kullback-Leibler distance. We introduce an efficient EM-type optimization algorithm tailored to this technique. We also introduce other consistency hints, and balance their weights using canonical errors. We calibrate the correlated multifactor Vasicek model of interest rates, and apply it successfully to Japanese Yen swaps market and US dollar yield market.

  10. Improving calibration and validation of cosmic-ray neutron sensors in the light of spatial sensitivity

    NASA Astrophysics Data System (ADS)

    Schrön, Martin; Köhli, Markus; Scheiffele, Lena; Iwema, Joost; Bogena, Heye R.; Lv, Ling; Martini, Edoardo; Baroni, Gabriele; Rosolem, Rafael; Weimar, Jannis; Mai, Juliane; Cuntz, Matthias; Rebmann, Corinna; Oswald, Sascha E.; Dietrich, Peter; Schmidt, Ulrich; Zacharias, Steffen

    2017-10-01

    In the last few years the method of cosmic-ray neutron sensing (CRNS) has gained popularity among hydrologists, physicists, and land-surface modelers. The sensor provides continuous soil moisture data, averaged over several hectares and tens of decimeters in depth. However, the signal still may contain unidentified features of hydrological processes, and many calibration datasets are often required in order to find reliable relations between neutron intensity and water dynamics. Recent insights into environmental neutrons accurately described the spatial sensitivity of the sensor and thus allowed one to quantify the contribution of individual sample locations to the CRNS signal. Consequently, data points of calibration and validation datasets are suggested to be averaged using a more physically based weighting approach. In this work, a revised sensitivity function is used to calculate weighted averages of point data. The function differs from the conventional simple exponential in its pronounced sensitivity to the first few meters around the probe and in its dependence on air pressure, air humidity, soil moisture, and vegetation. The approach is extensively tested at six distinct monitoring sites: two sites with multiple calibration datasets and four sites with continuous time series datasets. In all cases, the revised averaging method improved the performance of the CRNS products. The revised approach further helped to reveal hidden hydrological processes which otherwise remained unexplained in the data or were lost in the process of overcalibration. The presented weighting approach increases the overall accuracy of CRNS products and will have an impact on all their applications in agriculture, hydrology, and modeling.
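
    The weighting step itself reduces to a sensitivity-weighted mean of the point samples. The sketch below uses a plain distance-decay weight as a stand-in; the revised function of the paper additionally depends on air pressure, humidity, soil moisture, and vegetation, and its exact form is not reproduced here.

        # Sensitivity-weighted averaging of point soil-moisture samples around
        # a cosmic-ray neutron probe, with a placeholder radial weight.
        import numpy as np

        def weighted_field_average(theta, distance_m, decay_m=100.0):
            w = np.exp(-distance_m / decay_m)      # illustrative weight only
            return np.sum(w * theta) / np.sum(w)

        # Hypothetical point measurements (volumetric soil moisture) and distances.
        theta = np.array([0.21, 0.25, 0.19, 0.30, 0.27])
        distance = np.array([5.0, 25.0, 60.0, 120.0, 180.0])

        print(f"weighted mean = {weighted_field_average(theta, distance):.3f}")
        print(f"simple mean   = {theta.mean():.3f}")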

  11. A fast and simple dose-calibrator-based quality control test for the radionuclidic purity of cyclotron-produced (99m)Tc.

    PubMed

    Tanguay, J; Hou, X; Esquinas, P; Vuckovic, M; Buckley, K; Schaffer, P; Bénard, F; Ruth, T J; Celler, A

    2015-11-07

    Cyclotron production of 99mTc through the (100)Mo(p,2n)99mTc reaction channel is actively being investigated as an alternative to reactor-based (99)Mo generation by nuclear fission of (235)U. Like most radioisotope production methods, cyclotron production of 99mTc will result in creation of unwanted impurities, including Tc and non-Tc isotopes. It is important to measure the amounts of these impurities for release of cyclotron-produced 99mTc (CPTc) for clinical use. Detection of radioactive impurities will rely on measurements of their gamma (γ) emissions. Gamma spectroscopy is not suitable for this purpose because the overwhelming presence of 99mTc and the count-rate limitations of γ spectroscopy systems preclude fast and accurate measurement of small amounts of impurities. In this article we describe a simple and fast method for measuring γ emission rates from radioactive impurities in CPTc. The proposed method is similar to that used to identify (99)Mo breakthrough in generator-produced 99mTc: one dose calibrator (DC) reading of a CPTc source placed in a lead shield is followed by a second reading of the same source in air. Our experimental and theoretical analysis show that the ratio of DC readings in lead to those in air are linearly related to γ emission rates from impurities per MBq of 99mTc over a large range of clinically-relevant production conditions. We show that estimates of the γ emission rates from Tc impurities per MBq of 99mTc can be used to estimate increases in radiation dose (relative to pure 99mTc) to patients injected with CPTc-based radiopharmaceuticals. This enables establishing dosimetry-based clinical-release criteria that can be tested using commercially-available dose calibrators. We show that our approach is highly sensitive to the presence of 93gTc, 93mTc, 94gTc, 94mTc, 95mTc, 95gTc, and 96gTc, in addition to a number of non-Tc impurities.
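
    The core of the test is a linear relation between the lead-to-air reading ratio and the impurity gamma emission rate per MBq of 99mTc. The sketch below fits that line from hypothetical calibration sources and inverts it for a new sample; all numbers are invented and are not the coefficients reported by the authors.

        # Fit and invert a linear relation between the lead/air dose-calibrator
        # reading ratio and the impurity gamma emission rate (invented numbers).
        import numpy as np

        # Calibration sources with known impurity emission rates (gammas/s per MBq).
        impurity_rate = np.array([0.0, 50.0, 100.0, 200.0, 400.0])
        ratio_lead_air = np.array([0.010, 0.014, 0.018, 0.026, 0.042])

        slope, intercept = np.polyfit(impurity_rate, ratio_lead_air, 1)

        # New production run: one reading inside the lead shield, one in air.
        measured_ratio = 12.3 / 560.0
        estimated_rate = (measured_ratio - intercept) / slope
        print(f"estimated impurity emission rate ~ {estimated_rate:.0f} gammas/s per MBq")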

  12. A fast and simple dose-calibrator-based quality control test for the radionuclidic purity of cyclotron-produced 99mTc

    NASA Astrophysics Data System (ADS)

    Tanguay, J.; Hou, X.; Esquinas, P.; Vuckovic, M.; Buckley, K.; Schaffer, P.; Bénard, F.; Ruth, T. J.; Celler, A.

    2015-11-01

    Cyclotron production of 99mTc through the 100Mo(p,2n)99mTc reaction channel is actively being investigated as an alternative to reactor-based 99Mo generation by nuclear fission of 235U. Like most radioisotope production methods, cyclotron production of 99mTc will result in the creation of unwanted impurities, including Tc and non-Tc isotopes. It is important to measure the amounts of these impurities for release of cyclotron-produced 99mTc (CPTc) for clinical use. Detection of radioactive impurities will rely on measurements of their gamma (γ) emissions. Gamma spectroscopy is not suitable for this purpose because the overwhelming presence of 99mTc and the count-rate limitations of γ spectroscopy systems preclude fast and accurate measurement of small amounts of impurities. In this article we describe a simple and fast method for measuring γ emission rates from radioactive impurities in CPTc. The proposed method is similar to that used to identify 99Mo breakthrough in generator-produced 99mTc: one dose calibrator (DC) reading of a CPTc source placed in a lead shield is followed by a second reading of the same source in air. Our experimental and theoretical analyses show that the ratio of DC readings in lead to those in air is linearly related to γ emission rates from impurities per MBq of 99mTc over a large range of clinically-relevant production conditions. We show that estimates of the γ emission rates from Tc impurities per MBq of 99mTc can be used to estimate increases in radiation dose (relative to pure 99mTc) to patients injected with CPTc-based radiopharmaceuticals. This enables establishing dosimetry-based clinical-release criteria that can be tested using commercially-available dose calibrators. We show that our approach is highly sensitive to the presence of 93gTc, 93mTc, 94gTc, 94mTc, 95mTc, 95gTc, and 96gTc, in addition to a number of non-Tc impurities.

  13. A calibration method for fringe reflection technique based on the analytical phase-slope description

    NASA Astrophysics Data System (ADS)

    Wu, Yuxiang; Yue, Huimin; Pan, Zhipeng; Liu, Yong

    2018-05-01

    The fringe reflection technique (FRT) has been one of the most popular methods for measuring the shape of specular surfaces in recent years. The existing system calibration methods of FRT usually contain two parts, which are camera calibration and geometric calibration. In geometric calibration, the liquid crystal display (LCD) screen position calibration is one of the most difficult steps among all the calibration procedures, and its accuracy is affected by factors such as imaging aberration, plane mirror flatness, and LCD screen pixel size accuracy. In this paper, based on the deduction of the FRT analytical phase-slope description, we present a novel calibration method with no requirement to calibrate the position of the LCD screen. On the other hand, the system can be arbitrarily arranged, and the imaging system can be either telecentric or non-telecentric. In our experiment measuring a spherical mirror with a 5000 mm radius, the proposed calibration method achieves a 2.5 times smaller measurement error than the geometric calibration method. In the wafer surface measurement experiment, the measurement result with the proposed calibration method is closer to the interferometer result than that of the geometric calibration method.

  14. Linking big models to big data: efficient ecosystem model calibration through Bayesian model emulation

    NASA Astrophysics Data System (ADS)

    Fer, I.; Kelly, R.; Andrews, T.; Dietze, M.; Richardson, A. D.

    2016-12-01

    Our ability to forecast ecosystems is limited by how well we parameterize ecosystem models. Direct measurements for all model parameters are not always possible, and inverse estimation of these parameters through Bayesian methods is computationally costly. A solution to the computational challenges of Bayesian calibration is to approximate the posterior probability surface using a Gaussian process that emulates the complex process-based model. Here we report the integration of this method within an ecoinformatics toolbox, the Predictive Ecosystem Analyzer (PEcAn), and its application with two ecosystem models: SIPNET and ED2.1. SIPNET is a simple model, allowing application of MCMC methods both to the model itself and to its emulator. We used both approaches to assimilate flux (CO2 and latent heat), soil respiration, and soil carbon data from Bartlett Experimental Forest. This comparison showed that the emulator is reliable in terms of convergence to the posterior distribution. A 10000-iteration MCMC analysis with SIPNET itself required more than two orders of magnitude greater computation time than an MCMC run of the same length with its emulator. This difference would be greater for a more computationally demanding model. Validation of the emulator-calibrated SIPNET against both the assimilated data and out-of-sample data showed improved fit and reduced uncertainty around model predictions. We next applied the validated emulator method to ED2, whose complexity precludes standard Bayesian data assimilation. We used the ED2 emulator to assimilate demographic data from a network of inventory plots. For validation of the calibrated ED2, we compared the model to results from Empirical Succession Mapping (ESM), a novel synthesis of successional patterns in Forest Inventory and Analysis data. Our results revealed that while the pre-assimilation ED2 formulation cannot capture the emergent demographic patterns from the ESM analysis, constraining the model parameters that control demographic processes increased the agreement considerably.
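
    The sketch below illustrates the general emulator workflow (design points, Gaussian-process fit, MCMC on the emulated surface) with a toy stand-in for the expensive model; it is not the PEcAn implementation, and the kernel, design size and proposal scale are arbitrary choices.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_model_loglik(theta):
    # stand-in for running SIPNET/ED2 and scoring it against flux data
    return -0.5 * np.sum((theta - np.array([0.3, 0.7])) ** 2) / 0.05

rng = np.random.default_rng(0)
design = rng.uniform(0.0, 1.0, size=(80, 2))          # parameter design points
loglik = np.array([expensive_model_loglik(t) for t in design])

emulator = GaussianProcessRegressor(kernel=RBF(length_scale=0.3),
                                    normalize_y=True).fit(design, loglik)

theta, chain = np.array([0.5, 0.5]), []               # Metropolis on the emulator
for _ in range(5000):
    prop = theta + rng.normal(scale=0.05, size=2)
    if np.all((prop >= 0.0) & (prop <= 1.0)):
        logr = (emulator.predict(prop[None, :])[0]
                - emulator.predict(theta[None, :])[0])
        if np.log(rng.uniform()) < logr:
            theta = prop
    chain.append(theta.copy())
print(np.mean(chain, axis=0))                         # emulated posterior mean
```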

  15. Optical aberration correction for simple lenses via sparse representation

    NASA Astrophysics Data System (ADS)

    Cui, Jinlin; Huang, Wei

    2018-04-01

    Simple lenses with spherical surfaces are lightweight, inexpensive, highly flexible, and can be easily processed. However, they suffer from optical aberrations that lead to limitations in high-quality photography. In this study, we propose a set of computational photography techniques based on sparse signal representation to remove optical aberrations, thereby allowing the recovery of images captured through a single-lens camera. The primary advantage of the proposed method is that many prior point spread functions calibrated at different depths are successfully used for restoring visual images in a short time, an approach that can be generally applied to nonblind deconvolution methods to solve the problem of excessive processing time caused by the large number of point spread functions. The optical software CODE V is applied to examine the reliability of the proposed method by simulation. The simulation results reveal that the suggested method outperforms the traditional methods. Moreover, the performance of a single-lens camera is significantly enhanced both qualitatively and perceptually. In particular, the prior information obtained with CODE V can be used for processing real images from a single-lens camera, which provides an alternative approach to conveniently and accurately obtain point spread functions of single-lens cameras.

  16. Treatment of Overactive Bladder Syndrome with Urethral Calibration in Women

    PubMed Central

    Sato, Renee L; Matsuura, Grace HK; Wei, David C; Chen, John J

    2013-01-01

    Our objective was to determine whether urethral calibration with Walther's urethral sounds may be an effective treatment for overactive bladder syndrome. The diagnosis of overactive bladder syndrome is a clinical one based on the presence of urgency, with or without urge incontinence, and is usually accompanied by frequency and nocturia in the absence of obvious pathologic or metabolic disease. These symptoms exert a profound effect on the quality of life. Pharmacologic treatment is generally used to relieve symptoms; however, anticholinergic medications may be associated with several undesirable side effects. There are case reports of symptom relief following a relatively quick and simple office procedure known as urethral dilation. It is hypothesized that this may be an effective treatment for the symptoms of overactive bladder. Women with clinical symptoms of overactive bladder were evaluated. Eighty-eight women were randomized to either urethral calibration (Treatment) or placebo (Control) treatment. Women's clinical outcomes at two and eight weeks were assessed and compared between the two treatment arms. Eight weeks after treatment, 31.1% (n=14) of women who underwent urethral calibration were responsive to the treatment versus 9.3% (n=4) of the Control group. Also, 51.1% (n=23) of women within the Treatment group showed at least a partial response versus 20.9% (n=9) of the Control group. Our conclusion is that urethral calibration significantly improves the symptoms of overactive bladder when compared to placebo and may be an effective alternative treatment method. PMID:24167769

  17. Development and Validation of a Simple High Performance Liquid Chromatography/UV Method for Simultaneous Determination of Urinary Uric Acid, Hypoxanthine, and Creatinine in Human Urine.

    PubMed

    Wijemanne, Nimanthi; Soysa, Preethi; Wijesundara, Sulochana; Perera, Hemamali

    2018-01-01

    Uric acid and hypoxanthine are produced in the catabolism of purines. Abnormal urinary levels of these products are associated with many diseases and therefore it is necessary to have a simple and rapid method to detect them. Hence, we report a simple reverse phase high performance liquid chromatography (HPLC/UV) technique, developed and validated for simultaneous analysis of uric acid, hypoxanthine, and creatinine in human urine. Urine was diluted appropriately and eluted on a C-18 column (100 mm × 4.6 mm) with a C-18 precolumn (25 mm × 4.6 mm) in series. Potassium phosphate buffer (20 mM, pH 7.25) at a flow rate of 0.40 mL/min was employed as the solvent and peaks were detected at 235 nm. Tyrosine was used as the internal standard. The experimental conditions offered a good separation of analytes without interference from endogenous substances. The calibration curves were linear for all test compounds with a regression coefficient r² > 0.99. Uric acid, creatinine, tyrosine, and hypoxanthine were eluted at 5.2, 6.1, 7.2, and 8.3 min, respectively. Intraday and interday variability were less than 4.6% for all the analytes investigated and the recovery ranged from 98 to 102%. The proposed HPLC procedure is a simple, rapid, and low-cost method with high accuracy and minimum use of organic solvents. This method was successfully applied for the determination of creatinine, hypoxanthine, and uric acid in human urine.

  18. SCS-CN based time-distributed sediment yield model

    NASA Astrophysics Data System (ADS)

    Tyagi, J. V.; Mishra, S. K.; Singh, Ranvir; Singh, V. P.

    2008-05-01

    A sediment yield model is developed to estimate the temporal rates of sediment yield from rainfall events on natural watersheds. The model utilizes the SCS-CN based infiltration model for computation of the rainfall-excess rate, and the SCS-CN-inspired proportionality concept for computation of sediment excess. For computation of sedimentographs, the sediment excess is routed to the watershed outlet using a single linear reservoir technique. Analytical development of the model shows that the ratio of the potential maximum erosion (A) to the potential maximum retention (S) of the SCS-CN method is constant for a watershed. The model is calibrated and validated on a number of events using the data of seven watersheds from India and the USA. Representative values of the A/S ratio computed for the watersheds from calibration are used for the validation of the model. The encouraging results of the proposed simple four-parameter model exhibit its potential in field application.
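
    For readers unfamiliar with the underlying proportionality concept, the standard SCS-CN relations are recalled below; the sediment analogue is only paraphrased from the abstract (A playing the role of S) and should be checked against the paper.

```latex
% Standard SCS-CN proportionality and runoff (rainfall-excess) equation
\frac{F}{S} \;=\; \frac{Q}{P - I_a},
\qquad
Q \;=\; \frac{(P - I_a)^2}{P - I_a + S},
\qquad
I_a \;=\; \lambda S .
% In the sediment analogue sketched in the abstract, the potential maximum
% erosion A takes a role analogous to S, the analysis shows that A/S is
% constant for a given watershed, and the resulting sediment excess is
% routed through a single linear reservoir to obtain the sedimentograph.
```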

  19. A Self-Calibrating Radar Sensor System for Measuring Vital Signs.

    PubMed

    Huang, Ming-Chun; Liu, Jason J; Xu, Wenyao; Gu, Changzhan; Li, Changzhi; Sarrafzadeh, Majid

    2016-04-01

    Vital signs (i.e., heartbeat and respiration) are crucial physiological signals that are useful in numerous medical applications. The process of measuring these signals should be simple, reliable, and comfortable for patients. In this paper, a noncontact self-calibrating vital signs monitoring system based on the Doppler radar is presented. The system hardware and software were designed with a four-tiered layer structure. To enable accurate vital signs measurement, baseband signals in the radar sensor were modeled and a framework for signal demodulation was proposed. Specifically, a signal model identification method was formulated into a quadratically constrained l1 minimization problem and solved using the upper bound and linear matrix inequality (LMI) relaxations. The performance of the proposed system was comprehensively evaluated using three experimental sets, and the results indicated that this system can be used to effectively measure human vital signs.

  20. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    USGS Publications Warehouse

    Anderson, Ryan; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott M.; Morris, Richard V.; Ehlmann, Bethany L.; Dyar, M. Darby

    2017-01-01

    Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
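
    The sub-model blending idea can be prototyped along the lines below: a full-range PLS model selects the composition regime, and the predictions of low- and high-range sub-models are blended near the split. The split value, blend width and synthetic data are illustrative assumptions, not the ChemCam settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def train_submodels(X, y, split=30.0, n_components=5):
    full = PLSRegression(n_components=n_components).fit(X, y)
    low = PLSRegression(n_components=n_components).fit(X[y <= split], y[y <= split])
    high = PLSRegression(n_components=n_components).fit(X[y >= split], y[y >= split])
    return full, low, high, split

def blended_predict(models, X, blend_width=10.0):
    full, low, high, split = models
    y_full = full.predict(X).ravel()          # full-range model picks the regime
    y_low, y_high = low.predict(X).ravel(), high.predict(X).ravel()
    w = np.clip((y_full - (split - blend_width / 2.0)) / blend_width, 0.0, 1.0)
    return (1.0 - w) * y_low + w * y_high     # smooth hand-off near the split

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 50))                # synthetic "spectra"
y = 60.0 * rng.uniform(size=200)              # synthetic element abundance (wt.%)
X[:, 0] += 0.05 * y                           # weak linear signal channel
print(blended_predict(train_submodels(X, y), X[:5]))
```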

  1. Iridium Oxide pH Sensor Based on Stainless Steel Wire for pH Mapping on Metal Surface

    NASA Astrophysics Data System (ADS)

    Shahrestani, S.; Ismail, M. C.; Kakooei, S.; Beheshti, M.; Zabihiazadboni, M.; Zavareh, M. A.

    2018-03-01

    A simple technique for fabricating an iridium oxide pH sensor is useful in several applications, such as medicine, food processing and engineering materials, where it can detect changes in pH. Generally, the fabrication techniques can be classified into three types: electro-deposited iridium oxide film (EIrOF), activated iridium oxide film (AIROF) and sputtered iridium oxide film (SIROF). This study focuses on electrode fabrication, calibration and testing. Electro-deposition of iridium oxide film via a cyclic voltammetry process is a simple and effective method for fabricating this kind of sensor. The iridium oxide thick film was successfully electrodeposited on the surface of a stainless steel wire with 500 cycles of sweep potential. Further analysis under FESEM shows a detailed image of the iridium oxide film, which has a cauliflower-like microstructure. EDX analysis shows that the predominant elements present are iridium and oxygen, confirming that the process was successful. The iridium oxide based pH sensor showed good performance in comparison with a conventional glass pH sensor when calibrated in buffer solutions of pH 2, 4, 7 and 9. The iridium oxide pH sensor is specifically designed to measure the pH on the surface of a metal plate.

  2. Ground-based atmospheric water vapor monitoring system with spectroscopy of radiation in 20-30 GHz and 50-60 GHz bands

    NASA Astrophysics Data System (ADS)

    Nagasaki, Takeo; Tajima, Osamu; Araki, Kentaro; Ishimoto, Hiroshi

    2016-07-01

    We propose a novel ground-based meteorological monitoring system. In the 20-30 GHz band, our system simultaneously measures a broad absorption peak of water vapor and cloud liquid water. Additional observation in the 50-60 GHz band obtains the radiation of oxygen. Spectral results contain vertical profiles of the physical temperature of atmospheric molecules. We designed a simple method for placing the system atop high buildings and mountains and on decks of ships. There is a simple optical system in front of horn antennas for each frequency band. A focused signal from a reflector is separated into two polarized optical paths by a wire grid. Each signal received by the horn antenna is amplified by low-noise amplifiers. Spectra of each signal are measured as a function of frequency using two analyzers. A blackbody calibration source is maintained at 50 K in a cryostat. The calibration signal is led to each receiver via the wire grid. The input path of the signal is selected by rotation of the wire grid by 90°, because the polarization axis of the reflected path and axis of the transparent path are orthogonal. We developed a prototype receiver and demonstrated its performance using monitoring at the zenith.

  3. A critical assessment of two types of personal UV dosimeters.

    PubMed

    Seckmeyer, Gunther; Klingebiel, Marcus; Riechelmann, Stefan; Lohse, Insa; McKenzie, Richard L; Liley, J Ben; Allen, Martin W; Siani, Anna-Maria; Casale, Giuseppe R

    2012-01-01

    Doses of erythemally weighted irradiances derived from polysulphone (PS) and electronic ultraviolet (EUV) dosimeters have been compared with measurements obtained using a reference spectroradiometer. PS dosimeters showed mean absolute deviations of 26% with a maximum deviation of 44%, while the calibrated EUV dosimeters showed mean absolute deviations of 15% (maximum 33%) around noon during several test days in the northern hemisphere autumn. In the case of EUV dosimeters, measurements with various cut-off filters showed that part of the deviation from the CIE erythema action spectrum was due to a small, but significant sensitivity to visible radiation that varies between devices and which may be avoided by careful preselection. Usually the method of calibrating UV sensors by direct comparison to a reference instrument leads to reliable results. However, in some circumstances the quality of measurements made with simple sensors may be over-estimated. In the extreme case, a simple pyranometer can be used as a UV instrument, providing acceptable results for cloudless skies, but very poor results under cloudy conditions. It is concluded that while UV dosimeters are useful for their design purpose, namely to estimate personal UV exposures, they should not be regarded as an inexpensive replacement for meteorological grade instruments. © 2011 Wiley Periodicals, Inc. Photochemistry and Photobiology © 2011 The American Society of Photobiology.

  4. Estimation of ion competition via correlated responsivity offset in linear ion trap mass spectrometry analysis: theory and practical use in the analysis of cyanobacterial hepatotoxin microcystin-LR in extracts of food additives.

    PubMed

    Urban, Jan; Hrouzek, Pavel; Stys, Dalibor; Martens, Harald

    2013-01-01

    Responsivity is a conversion qualification of a measurement device given by the functional dependence between the input and output quantities. A concentration-response-dependent calibration curve represents the simplest experiment for the measurement of responsivity in mass spectrometry. The cyanobacterial hepatotoxin microcystin-LR content in complex biological matrices of food additives was chosen as a model example of a typical problem. The calibration curves for pure microcystin and its mixtures with extracts of green alga and fish meat were reconstructed from the series of measurements. A novel approach for the quantitative estimation of ion competition in ESI is proposed in this paper. We define the correlated responsivity offset in the intensity values using the approximation of minimal correlation given by the matrix to the target mass values of the analyte. The estimation of the matrix influence enables the approximation of the position of a priori unknown responsivity and was easily evaluated using a simple algorithm. The method itself is directly derived from the basic attributes of the theory of measurements. There is sufficient agreement between the theoretical and experimental values. However, some theoretical issues are discussed to avoid misinterpretations and excessive expectations.

  5. Regionalization of response routine parameters

    NASA Astrophysics Data System (ADS)

    Tøfte, Lena S.; Sultan, Yisak A.

    2013-04-01

    When area-distributed hydrological models are to be calibrated or updated, having fewer calibration parameters is a considerable advantage. Based on, among others, Kirchner, we have developed a simple non-threshold response model for drainage in natural catchments, to be used in the gridded hydrological model ENKI. The new response model takes only the hydrograph into account; it has one state and two parameters, and is adapted to catchments that are dominated by terrain drainage. The method is based on the assumption that in catchments where precipitation, evaporation and snowmelt are negligible, the discharge is entirely determined by the amount of stored water. The catchment can then be characterized as a simple first-order nonlinear dynamical system, where the governing equations can be found directly from measured stream flow fluctuations. This means that the catchment response can be modelled by using hydrograph data from which all periods with rain, snowmelt or evaporation are left out, and fitting these series to a two- or three-parameter equation. A large number of discharge series from catchments in different regions in Norway are analyzed, and parameters are found for all the series. By combining the computed parameters and known catchment characteristics, we try to regionalize the parameters. Then the parameters in the response routine can easily be found also for ungauged catchments, from maps or databases.
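
    A compact version of the recession-analysis step can be sketched as below: keep only time steps where discharge is falling, regress log(-dQ/dt) against log(Q), and read off a two-parameter power-law storage-discharge relation. The variable names and fitting form are illustrative, not the exact ENKI response routine.

```python
import numpy as np

def fit_recession(Q, dt=1.0):
    """Fit -dQ/dt = a * Q**b from recession (falling-limb) steps only."""
    Q = np.asarray(Q, dtype=float)
    dQdt = np.diff(Q) / dt
    Qmid = 0.5 * (Q[1:] + Q[:-1])
    mask = dQdt < 0.0                            # drop rising or flat steps
    b, log_a = np.polyfit(np.log(Qmid[mask]), np.log(-dQdt[mask]), 1)
    return np.exp(log_a), b

# Synthetic rain-free recession from a linear reservoir (b should be near 1)
rng = np.random.default_rng(6)
Q = np.maximum(5.0 * np.exp(-0.05 * np.arange(150))
               + rng.normal(0.0, 0.001, 150), 1e-3)
print(fit_recession(Q))
```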

  6. Estimation of Ion Competition via Correlated Responsivity Offset in Linear Ion Trap Mass Spectrometry Analysis: Theory and Practical Use in the Analysis of Cyanobacterial Hepatotoxin Microcystin-LR in Extracts of Food Additives

    PubMed Central

    Hrouzek, Pavel; Štys, Dalibor; Martens, Harald

    2013-01-01

    Responsivity is a conversion qualification of a measurement device given by the functional dependence between the input and output quantities. A concentration-response-dependent calibration curve represents the simplest experiment for the measurement of responsivity in mass spectrometry. The cyanobacterial hepatotoxin microcystin-LR content in complex biological matrices of food additives was chosen as a model example of a typical problem. The calibration curves for pure microcystin and its mixtures with extracts of green alga and fish meat were reconstructed from the series of measurements. A novel approach for the quantitative estimation of ion competition in ESI is proposed in this paper. We define the correlated responsivity offset in the intensity values using the approximation of minimal correlation given by the matrix to the target mass values of the analyte. The estimation of the matrix influence enables the approximation of the position of a priori unknown responsivity and was easily evaluated using a simple algorithm. The method itself is directly derived from the basic attributes of the theory of measurements. There is sufficient agreement between the theoretical and experimental values. However, some theoretical issues are discussed to avoid misinterpretations and excessive expectations. PMID:23586036

  7. Multivariate Calibration Approach for Quantitative Determination of Cell-Line Cross Contamination by Intact Cell Mass Spectrometry and Artificial Neural Networks.

    PubMed

    Valletta, Elisa; Kučera, Lukáš; Prokeš, Lubomír; Amato, Filippo; Pivetta, Tiziana; Hampl, Aleš; Havel, Josef; Vaňhara, Petr

    2016-01-01

    Cross-contamination of eukaryotic cell lines used in biomedical research represents a highly relevant problem. Analysis of repetitive DNA sequences, such as Short Tandem Repeats (STR), or Simple Sequence Repeats (SSR), is a widely accepted, simple, and commercially available technique to authenticate cell lines. However, it provides only qualitative information that depends on the extent of reference databases for interpretation. In this work, we developed and validated a rapid and routinely applicable method for evaluation of cell culture cross-contamination levels based on mass spectrometric fingerprints of intact mammalian cells coupled with artificial neural networks (ANNs). We used human embryonic stem cells (hESCs) contaminated by either mouse embryonic stem cells (mESCs) or mouse embryonic fibroblasts (MEFs) as a model. We determined the contamination level using a mass spectra database of known calibration mixtures that served as training input for an ANN. The ANN was then capable of correct quantification of the level of contamination of hESCs by mESCs or MEFs. We demonstrate that MS analysis, when linked to proper mathematical instruments, is a tangible tool for unraveling and quantifying heterogeneity in cell cultures. The analysis is applicable in routine scenarios for cell authentication and/or cell phenotyping in general.

  8. Multivariate Calibration Approach for Quantitative Determination of Cell-Line Cross Contamination by Intact Cell Mass Spectrometry and Artificial Neural Networks

    PubMed Central

    Prokeš, Lubomír; Amato, Filippo; Pivetta, Tiziana; Hampl, Aleš; Havel, Josef; Vaňhara, Petr

    2016-01-01

    Cross-contamination of eukaryotic cell lines used in biomedical research represents a highly relevant problem. Analysis of repetitive DNA sequences, such as Short Tandem Repeats (STR), or Simple Sequence Repeats (SSR), is a widely accepted, simple, and commercially available technique to authenticate cell lines. However, it provides only qualitative information that depends on the extent of reference databases for interpretation. In this work, we developed and validated a rapid and routinely applicable method for evaluation of cell culture cross-contamination levels based on mass spectrometric fingerprints of intact mammalian cells coupled with artificial neural networks (ANNs). We used human embryonic stem cells (hESCs) contaminated by either mouse embryonic stem cells (mESCs) or mouse embryonic fibroblasts (MEFs) as a model. We determined the contamination level using a mass spectra database of known calibration mixtures that served as training input for an ANN. The ANN was then capable of correct quantification of the level of contamination of hESCs by mESCs or MEFs. We demonstrate that MS analysis, when linked to proper mathematical instruments, is a tangible tool for unraveling and quantifying heterogeneity in cell cultures. The analysis is applicable in routine scenarios for cell authentication and/or cell phenotyping in general. PMID:26821236

  9. Hall and Seebeck measurements estimate the thickness of a (buried) carrier system: Identifying interface electrons in In-doped SnO2 films

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papadogianni, Alexandra; Bierwagen, Oliver; White, Mark E.

    2015-12-21

    We propose a simple method based on the combination of Hall and Seebeck measurements to estimate the thickness of a carrier system within a semiconductor film. As an example, this method can distinguish “bulk” carriers, with homogeneous depth distribution, from “sheet” carriers, that are accumulated within a thin layer. The thickness of the carrier system is calculated as the ratio of the integral sheet carrier concentration, extracted from Hall measurements, to the volume carrier concentration, derived from the measured Seebeck coefficient of the same sample. For rutile SnO2, the necessary relation of Seebeck coefficient to volume electron concentration in the range of 3 × 10^17 to 3 × 10^20 cm^-3 has been experimentally obtained from a set of single crystalline thin films doped with varying Sb-doping concentrations and unintentionally doped bulk samples, and is given as a “calibration curve.” Using this calibration curve, our method demonstrates the presence of interface electrons in homogeneously deep-acceptor (In) doped SnO2 films on sapphire substrates.
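
    The thickness estimate itself is a one-line calculation, shown below with invented numbers: divide the Hall sheet density by the Seebeck-derived volume density.

```python
# Illustrative numbers only; real values come from the Hall measurement and
# from reading the volume density off the Seebeck "calibration curve".
n_sheet = 2.0e14                     # cm^-2, integral sheet carrier concentration
n_volume = 4.0e19                    # cm^-3, from the measured Seebeck coefficient
thickness_cm = n_sheet / n_volume
print(f"carrier-system thickness ≈ {thickness_cm * 1e7:.0f} nm")
```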

  10. Deviation rectification for dynamic measurement of rail wear based on coordinate sets projection

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Ma, Ziji; Li, Yanfu; Zeng, Jiuzhen; Jin, Tan; Liu, Hongli

    2017-10-01

    Dynamic measurement of rail wear using a laser imaging system suffers from random vibrations in the laser-based imaging sensor which cause distorted rail profiles. In this paper, a simple and effective method for rectifying profile deviation is presented to address this issue. There are two main steps: profile recognition and distortion calibration. According to the constant camera and projector parameters, efficient recognition of measured profiles is achieved by analyzing the geometric difference between normal profiles and distorted ones. For a distorted profile, by constructing coordinate sets projecting from it to the standard one on triple projecting primitives, including the rail head inner line, rail waist curve and rail jaw, iterative extrinsic camera parameter self-compensation is implemented. The distortion is calibrated by projecting the distorted profile onto the x-y plane of a measuring coordinate frame, which is parallel to the rail cross section, to eliminate the influence of random vibrations in the laser-based imaging sensor. As well as evaluating the implementation with comprehensive experiments, we also compare our method with other published works. The results exhibit the effectiveness and superiority of our method for the dynamic measurement of rail wear.

  11. A lattice relaxation algorithm for three-dimensional Poisson-Nernst-Planck theory with application to ion transport through the gramicidin A channel.

    PubMed Central

    Kurnikova, M G; Coalson, R D; Graf, P; Nitzan, A

    1999-01-01

    A lattice relaxation algorithm is developed to solve the Poisson-Nernst-Planck (PNP) equations for ion transport through arbitrary three-dimensional volumes. Calculations of systems characterized by simple parallel plate and cylindrical pore geometries are presented in order to calibrate the accuracy of the method. A study of ion transport through gramicidin A dimer is carried out within this PNP framework. Good agreement with experimental measurements is obtained. Strengths and weaknesses of the PNP approach are discussed. PMID:9929470

  12. Digital phase demodulation for low-coherence interferometry-based fiber-optic sensors

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Strum, R.; Stiles, D.; Long, C.; Rakhman, A.; Blokland, W.; Winder, D.; Riemer, B.; Wendel, M.

    2018-03-01

    We describe a digital phase demodulation scheme for low-coherence interferometry-based fiber-optic sensors employing a simple generation of phase-shifted signals at the interrogation interferometer. The scheme allows a real-time calibration process and offers the capability of measuring large variations (up to the coherence length of the light source) at a bandwidth limited only by the data acquisition system. The proposed phase demodulation method is analytically derived, and its validity and performance are experimentally verified using fiber-optic Fabry-Perot sensors for the measurement of strains and vibrations.
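
    As a point of reference, a generic digital quadrature (arctangent) demodulation step, one common way to recover phase from phase-shifted interferometer outputs, can be written as below; the actual scheme used in the paper may differ.

```python
import numpy as np

def demodulate_phase(I, Q):
    """Recover and unwrap the interferometric phase from two outputs in
    quadrature, I ~ cos(phi) and Q ~ sin(phi)."""
    return np.unwrap(np.arctan2(Q, I))

t = np.linspace(0.0, 1.0, 5000)
phi_true = 8.0 * np.pi * np.sin(2.0 * np.pi * 3.0 * t)    # large dynamic phase
I, Q = np.cos(phi_true), np.sin(phi_true)
print(np.max(np.abs(demodulate_phase(I, Q) - phi_true)))  # ~0 in this noiseless case
```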

  13. Chemometrics enhanced HPLC-DAD performance for rapid quantification of carbamazepine and phenobarbital in human serum samples.

    PubMed

    Vosough, Maryam; Ghafghazi, Shiva; Sabetkasaei, Masoumeh

    2014-02-01

    This paper describes the development and validation of a simple and efficient bioanalytical procedure for the simultaneous determination of phenobarbital and carbamazepine in human serum samples using high performance liquid chromatography with photodiode-array detection (HPLC-DAD) and a fast elution methodology of less than 5 min. Briefly, this method consisted of a simple deproteinization step of serum samples followed by HPLC analysis on a Bonus-RP column using an isocratic mode of elution with acetonitrile/K2HPO4 (pH=7.5) buffer solution (45:55). Due to the presence of serum endogenous components as non-calibrated components in the sample, second-order calibration based on multivariate curve resolution-alternating least squares (MCR-ALS) has been applied on a set of absorbance matrices collected as a function of retention time and wavelength. Acceptable resolution and quantification results were achieved in the presence of matrix interferences and the second-order advantage was fully exploited. The average recoveries for carbamazepine and phenobarbital were 89.7% and 86.1%, and relative standard deviation values were lower than 9%. Additionally, the computed elliptical joint confidence region (EJCR) confirmed the accuracy of the proposed method and indicated the absence of both constant and proportional errors in the predicted concentrations. The developed method enabled the determination of the analytes in different serum samples in the presence of overlapped profiles, while keeping experimental time and extraction steps to a minimum. Finally, the serum concentration levels of carbamazepine in three time intervals were reported for morphine-dependent patients who had received carbamazepine for treating their neuropathic pain. © 2013 Elsevier B.V. All rights reserved.

  14. A simple method for determination of carmine in food samples based on cloud point extraction and spectrophotometric detection.

    PubMed

    Heydari, Rouhollah; Hosseini, Mohammad; Zarabi, Sanaz

    2015-01-01

    In this paper, a simple and cost-effective method was developed for the extraction and pre-concentration of carmine in food samples by using cloud point extraction (CPE) prior to its spectrophotometric determination. Carmine was extracted from aqueous solution using Triton X-100 as the extracting solvent. The effects of the main parameters such as solution pH, surfactant and salt concentrations, incubation time and temperature were investigated and optimized. The calibration graph was linear in the range of 0.04-5.0 μg mL(-1) of carmine in the initial solution with a regression coefficient of 0.9995. The limit of detection (LOD) and limit of quantification were 0.012 and 0.04 μg mL(-1), respectively. The relative standard deviation (RSD) at a low concentration level (0.05 μg mL(-1)) of carmine was 4.8% (n=7). Recovery values at different concentration levels were in the range of 93.7-105.8%. The obtained results demonstrate that the proposed method can be applied satisfactorily to determine carmine in food samples. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Spectrophotometric method for quantitative determination of total anthocyanins and quality characteristics of roselle (Hibiscus sabdariffa).

    PubMed

    Sukwattanasinit, Tasamaporn; Burana-Osot, Jankana; Sotanaphun, Uthai

    2007-11-01

    A simple, rapid and cost-saving method for the determination of total anthocyanins in roselle has been developed. The method was based on pH-differential spectrophotometry. The calibration curve of the major anthocyanin in roselle, delphinidin 3-sambubioside (Dp-3-sam), was constructed by using methyl orange and their correlation factor. The reliability of this developed method was comparable to that of the direct method using standard Dp-3-sam and the HPLC method. Quality characteristics of roselle produced in Thailand were also reported. Its physical quality met the required specifications. The overall chemical quality was surveyed here for the first time and was found to be an important parameter corresponding to the commercial grading of roselle. Total contents of anthocyanins and phenolics were proportional to the antiradical capacity.

  16. Simultaneous determination of all polyphenols in vegetables, fruits, and teas.

    PubMed

    Sakakibara, Hiroyuki; Honda, Yoshinori; Nakagawa, Satoshi; Ashida, Hitoshi; Kanazawa, Kazuki

    2003-01-29

    Polyphenols, which have beneficial effects on health and occur ubiquitously in plant foods, are extremely diverse. We developed a method for simultaneously determining all the polyphenols in foodstuffs, using HPLC and a photodiode array to construct a library comprising retention times, spectra of aglycons, and respective calibration curves for 100 standard chemicals. The food was homogenized in liquid nitrogen, lyophilized, extracted with 90% methanol, and subjected to HPLC without hydrolysis. The recovery was 68-92%, and the variation in reproducibility ranged between 1 and 9%. The HPLC eluted polyphenols with good resolution within 95 min in the following order: simple polyphenols, catechins, anthocyanins, glycosides of flavones, flavonols, isoflavones and flavanones, their aglycons, anthraquinones, chalcones, and theaflavins. All the polyphenols in 63 vegetables, fruits, and teas were then examined in terms of content and class. The present method offers accuracy by avoiding the decomposition of polyphenols during hydrolysis, the ability to determine aglycons separately from glycosides, and information on simple polyphenol levels simultaneously.

  17. Rapid determination of minoxidil in human plasma using ion-pair HPLC.

    PubMed

    Zarghi, A; Shafaati, A; Foroutan, S M; Khoddam, A

    2004-10-29

    A rapid, simple and sensitive ion-pair high-performance liquid chromatography (HPLC) method has been developed for the quantification of minoxidil in plasma. The assay enables the measurement of minoxidil for therapeutic drug monitoring with a minimum detectable limit of 0.5 ng ml(-1). The method involves a simple, one-step extraction procedure, and analytical recovery was complete. The separation was performed on an analytical 150 x 4.6 mm i.d. μBondapak C18 column. The detection wavelength was set at 281 nm. The mobile phase was a mixture of 0.01 M sodium dihydrogen phosphate buffer and acetonitrile (60:40, v/v) containing 2.5 mM sodium dodecyl sulphate, adjusted to pH 3.5, at a flow rate of 1 ml/min. The column temperature was set at 50 degrees C. The calibration curve was linear over the concentration range 2-100 ng ml(-1). The coefficients of variation for inter-day and intra-day assays were found to be less than 8%.

  18. Fast and accurate enzyme activity measurements using a chip-based microfluidic calorimeter.

    PubMed

    van Schie, Morten M C H; Ebrahimi, Kourosh Honarmand; Hagen, Wilfred R; Hagedoorn, Peter-Leon

    2018-03-01

    Recent developments in microfluidic and nanofluidic technologies have resulted in development of new chip-based microfluidic calorimeters with potential use in different fields. One application would be the accurate high-throughput measurement of enzyme activity. Calorimetry is a generic way to measure activity of enzymes, but unlike conventional calorimeters, chip-based calorimeters can be easily automated and implemented in high-throughput screening platforms. However, application of chip-based microfluidic calorimeters to measure enzyme activity has been limited due to problems associated with miniaturization such as incomplete mixing and a decrease in volumetric heat generated. To address these problems we introduced a calibration method and devised a convenient protocol for using a chip-based microfluidic calorimeter. Using the new calibration method, the progress curve of alkaline phosphatase, which has product inhibition for phosphate, measured by the calorimeter was the same as that recorded by UV-visible spectroscopy. Our results may enable use of current chip-based microfluidic calorimeters in a simple manner as a tool for high-throughput screening of enzyme activity with potential applications in drug discovery and enzyme engineering. Copyright © 2017. Published by Elsevier Inc.

  19. Watching elderly and disabled person's physical condition by remotely controlled monorail robot

    NASA Astrophysics Data System (ADS)

    Nagasaka, Yasunori; Matsumoto, Yoshinori; Fukaya, Yasutoshi; Takahashi, Tomoichi; Takeshita, Toru

    2001-10-01

    We are developing a nursing system using robots and cameras. The cameras are mounted on a remote-controlled monorail robot which moves inside a room and watches the elderly. Elderly people at home or in nursing homes need attention at all times, which requires staff to watch them continuously; the purpose of our system is to help those staff and to improve this situation. A host computer controls the monorail robot to move in front of the elderly person using images taken by cameras on the ceiling. A CCD camera is mounted on the monorail robot to take pictures of their facial expressions and movements. The robot sends the images to a host computer that checks whether or not something unusual has happened. We propose a simple calibration method for positioning the monorail robot to track the movements of the elderly and keep their faces at the center of the camera view. We built a small experimental system and evaluated our camera calibration method and image processing algorithm.

  20. [Determination of acacetin in Xiangjuganmao Keli (no sweet) by HPLC].

    PubMed

    Bian, Jia-Hong; Qian, Kun; Xu, Xiang; Shen, Jun

    2006-11-01

    To establish a method for the determination of acacetin in Xiangjuganmao Keli (no sweet). Acacetin in the powdered herb was extracted by ultrasonication with methanol and hydrolyzed with hydrochloric acid. Separation was accomplished on an ODS reversed-phase column (5 microm, 4.6 x 250 mm) with a mobile phase of methanol-water-acetic acid (350:150:2). The detection wavelength was 340 nm. The method was accurate, and the results were stable and reproducible. The calibration curve was linear within the concentration range of 2.00-10.00 microg/ml (r = 0.9998). The average extraction recovery was 99.9% (n = 6), RSD = 0.41% (n = 6). The method is simple, convenient, sensitive, and reproducible for the quality control of Xiangjuganmao Keli (no sweet).

  1. Three different methods for determination of binary mixture of Amlodipine and Atorvastatin using dual wavelength spectrophotometry

    NASA Astrophysics Data System (ADS)

    Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.

    2013-03-01

    Three simple, specific, accurate and precise spectrophotometric methods depending on the proper selection of two wavelengths are developed for the simultaneous determination of Amlodipine besylate (AML) and Atorvastatin calcium (ATV) in tablet dosage forms. The first method is the new Ratio Difference method, the second is the Bivariate method and the third is the Absorbance Ratio method. The calibration curves are linear over the concentration ranges of 4-40 and 8-32 μg/mL for AML and ATV, respectively. These methods are tested by analyzing synthetic mixtures of the above drugs and are applied to commercial pharmaceutical preparations of the subjected drugs. The methods are validated according to the ICH guidelines, and accuracy, precision, repeatability and robustness are found to be within the acceptable limits. The mathematical explanation of the procedures is illustrated.

  2. Titrimetric and photometric methods for determination of hypochlorite in commercial bleaches.

    PubMed

    Jonnalagadda, Sreekanth B; Gengan, Prabhashini

    2010-01-01

    Two methods, a simple titrimetric method and a photometric method, are developed for the determination of hypochlorite, based on its reaction with hydrogen peroxide and titration of the residual peroxide with acidic permanganate. In the titrimetric method, the residual hydrogen peroxide is titrated with standard permanganate solution to estimate the hypochlorite concentration. The photometric method is devised to measure the concentration of the remaining permanganate after the reaction with residual hydrogen peroxide. It employs four ranges of calibration curves to enable accurate determination of hypochlorite. The new photometric method measures hypochlorite in the range 1.90 x 10(-3) to 1.90 x 10(-2) M, with high accuracy and low variance. The concentrations of hypochlorite in diverse commercial bleach samples and in seawater enriched with hypochlorite were estimated using the proposed methods and compared with the arsenite method. The statistical analysis validates the superiority of the proposed method.
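
    The back-titration arithmetic behind the titrimetric variant is summarized below with invented volumes and concentrations; the stoichiometry (hypochlorite reacting 1:1 with hydrogen peroxide, and permanganate consuming peroxide in a 2:5 ratio) is standard chemistry, but the numbers are not from the paper.

```python
# ClO- + H2O2 -> Cl- + O2 + H2O                        (1 : 1)
# 2 MnO4- + 5 H2O2 + 6 H+ -> 2 Mn2+ + 5 O2 + 8 H2O     (1 : 2.5)

c_h2o2, v_h2o2 = 0.100, 0.02500      # mol/L and L of H2O2 added in excess
c_kmno4, v_kmno4 = 0.0200, 0.03180   # mol/L and L of permanganate titrant used
v_sample = 0.01000                   # L of bleach sample taken

n_h2o2_added = c_h2o2 * v_h2o2
n_h2o2_residual = 2.5 * c_kmno4 * v_kmno4      # from the permanganate titration
n_hypochlorite = n_h2o2_added - n_h2o2_residual
print(f"[OCl-] = {n_hypochlorite / v_sample:.4f} mol/L")
```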

  3. An HF coaxial bridge for measuring impedance ratios up to 1 MHz

    NASA Astrophysics Data System (ADS)

    Kucera, J.; Sedlacek, R.; Bohacek, J.

    2012-08-01

    A four-terminal pair coaxial ac bridge developed for calibrating both resistance and capacitance ratios and working in the frequency range from 100 kHz up to 1 MHz is described. A reference inductive voltage divider (IVD) makes it possible to calibrate ratios 1:1 and 10:1 with uncertainty of a few parts in 105. The IVD is calibrated by means of a series-parallel capacitance device (SPCD). Use of the same ac bridge with minimal changes for calibrating the SPCD, IVD and unknown impedances simplifies the whole calibration process. The bridge balance conditions are fulfilled with simple capacitance and resistance decades and by injecting voltage supplied from the auxiliary direct digital synthesizer. Bridge performance was checked on the basis of resistance ratio measurements and also capacitance ratio measurements.

  4. A practical approach for linearity assessment of calibration curves under the International Union of Pure and Applied Chemistry (IUPAC) guidelines for an in-house validation of method of analysis.

    PubMed

    Sanagi, M Marsin; Nasir, Zalilah; Ling, Susie Lu; Hermawan, Dadan; Ibrahim, Wan Aini Wan; Naim, Ahmedy Abu

    2010-01-01

    Linearity assessment as required in method validation has always been subject to different interpretations and definitions by various guidelines and protocols. However, there are very limited applicable implementation procedures that can be followed by a laboratory chemist in assessing linearity. Thus, this work proposes a simple method for linearity assessment in method validation by a regression analysis that covers experimental design, estimation of the parameters, outlier treatment, and evaluation of the assumptions according to the International Union of Pure and Applied Chemistry guidelines. The suitability of this procedure was demonstrated by its application to an in-house validation for the determination of plasticizers in plastic food packaging by GC.
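
    One way to put the recommended regression checks into practice is sketched below: fit an ordinary least-squares line to replicated calibration data and run a lack-of-fit F-test. The example data and the usual p > 0.05 acceptance reading are illustrative, not part of the IUPAC text.

```python
import numpy as np
from scipy import stats

def lack_of_fit_test(x, y):
    """OLS fit y = a + b*x plus a lack-of-fit F-test using replicates."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    b, a = np.polyfit(x, y, 1)
    levels = np.unique(x)
    n, k = len(x), len(levels)
    ss_pure_error = sum(np.sum((y[x == xv] - y[x == xv].mean()) ** 2) for xv in levels)
    ss_residual = np.sum((y - (a + b * x)) ** 2)
    ss_lack_of_fit = ss_residual - ss_pure_error
    F = (ss_lack_of_fit / (k - 2)) / (ss_pure_error / (n - k))
    return a, b, F, stats.f.sf(F, k - 2, n - k)

# Triplicate responses at five calibration levels
x = np.repeat([1.0, 2.0, 5.0, 10.0, 20.0], 3)
y = 0.05 + 0.98 * x + np.random.default_rng(2).normal(scale=0.2, size=x.size)
a, b, F, p = lack_of_fit_test(x, y)
print(f"intercept={a:.3f} slope={b:.3f} F={F:.2f} p={p:.3f}")   # p > 0.05: no lack of fit
```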

  5. Calibration method for a large-scale structured light measurement system.

    PubMed

    Wang, Peng; Wang, Jianmei; Xu, Jing; Guan, Yong; Zhang, Guanglie; Chen, Ken

    2017-05-10

    The structured light method is an effective non-contact measurement approach. The calibration greatly affects the measurement precision of structured light systems. To construct a large-scale structured light system with high accuracy, a large-scale and precise calibration gauge is always required, which leads to an increased cost. To this end, in this paper, a calibration method with a planar mirror is proposed to reduce the calibration gauge size and cost. An out-of-focus camera calibration method is also proposed to overcome the defocusing problem caused by the shortened distance during the calibration procedure. The experimental results verify the accuracy of the proposed calibration method.

  6. When to Make Mountains out of Molehills: The Pros and Cons of Simple and Complex Model Calibration Procedures

    NASA Astrophysics Data System (ADS)

    Smith, K. A.; Barker, L. J.; Harrigan, S.; Prudhomme, C.; Hannaford, J.; Tanguy, M.; Parry, S.

    2017-12-01

    Earth and environmental models are relied upon to investigate system responses that cannot otherwise be examined. In simulating physical processes, models have adjustable parameters which may, or may not, have a physical meaning. Determining the values to assign to these model parameters is an enduring challenge for earth and environmental modellers. Selecting different error metrics by which the model results are compared to observations will lead to different sets of calibrated model parameters, and thus different model results. Furthermore, models may exhibit `equifinal' behaviour, where multiple combinations of model parameters lead to equally acceptable model performance against observations. These decisions in model calibration introduce uncertainty that must be considered when model results are used to inform environmental decision-making. This presentation focusses on the uncertainties that derive from the calibration of a four-parameter lumped catchment hydrological model (GR4J). The GR models contain an inbuilt automatic calibration algorithm that can satisfactorily calibrate against four error metrics in only a few seconds. However, a single, deterministic model result does not provide information on parameter uncertainty. Furthermore, a modeller interested in extreme events, such as droughts, may wish to calibrate against more low-flow-specific error metrics. In a comprehensive assessment, the GR4J model has been run with 500,000 Latin Hypercube sampled parameter sets across 303 catchments in the United Kingdom. These parameter sets have been assessed against six error metrics, including two drought-specific metrics. This presentation compares the two approaches, and demonstrates that the inbuilt automatic calibration can outperform the Latin Hypercube experiment approach in single-metric assessed performance. However, it is also shown that there are many merits to the more comprehensive assessment, which allows for probabilistic model results, multi-objective optimisation, and better tailoring to calibrate the model for specific applications such as drought event characterisation. Modellers and decision-makers may be constrained in their choice of calibration method, so it is important that they recognise the strengths and limitations of their chosen approach.
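
    A stripped-down version of the sampling experiment is sketched below; run_gr4j is a hypothetical stand-in for the real model call, and the parameter bounds are only indicative of commonly used GR4J ranges.

```python
import numpy as np
from scipy.stats import qmc

def nse(sim, obs):
    """Nash-Sutcliffe efficiency of a simulated series against observations."""
    obs = np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def run_gr4j(params, precip, pet):
    # placeholder for the actual GR4J model (x1..x4 are its four parameters)
    x1, x2, x3, x4 = params
    return np.maximum(precip - pet, 0.0) * x3 / (x1 + x3)

lower, upper = [10.0, -5.0, 1.0, 0.5], [2000.0, 5.0, 500.0, 5.0]   # assumed ranges
params = qmc.scale(qmc.LatinHypercube(d=4, seed=42).random(n=5000), lower, upper)

rng = np.random.default_rng(3)
precip, pet = rng.gamma(2.0, 3.0, 365), np.full(365, 2.0)
obs = run_gr4j([300.0, 0.0, 100.0, 1.5], precip, pet) + rng.normal(0.0, 0.1, 365)

scores = np.array([nse(run_gr4j(p, precip, pet), obs) for p in params])
print((scores > 0.7).sum(), "behavioural parameter sets out of", len(params))
```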

  7. Point Cloud Refinement with a Target-Free Intrinsic Calibration of a Mobile Multi-Beam LIDAR System

    NASA Astrophysics Data System (ADS)

    Nouira, H.; Deschaud, J. E.; Goulette, F.

    2016-06-01

    LIDAR sensors are widely used in mobile mapping systems. Mobile mapping platforms allow fast acquisition in cities, for example, which would take much longer with static mapping systems. LIDAR sensors provide reliable and precise 3D information, which can be used in various applications: mapping of the environment; localization of objects; detection of changes. Also, with recent developments, multi-beam LIDAR sensors have appeared and are able to provide a large amount of data with a high level of detail. A mono-beam LIDAR sensor mounted on a mobile platform requires an extrinsic calibration, so that the data acquired and registered in the sensor reference frame can be represented in the body reference frame modeling the mobile system. For a multi-beam LIDAR sensor, we can separate its calibration into two distinct parts: on one hand, there is an extrinsic calibration, in common with mono-beam LIDAR sensors, which gives the transformation between the sensor Cartesian reference frame and the body reference frame. On the other hand, there is an intrinsic calibration, which gives the relations between the beams of the multi-beam sensor. This calibration depends on a model given by the manufacturer, but the model can be non-optimal, which would bring errors and noise into the acquired point clouds. In the literature, some optimizations of the calibration parameters have been proposed, but they need a specific routine or environment, which can be constraining and time-consuming. In this article, we present an automatic method for improving the intrinsic calibration of a multi-beam LIDAR sensor, the Velodyne HDL-32E. The proposed approach does not need any calibration target and only uses information from the acquired point clouds, which makes it simple and fast to use. Also, a corrected model for the Velodyne sensor is proposed. An energy function which penalizes points far from local planar surfaces is used to optimize the different proposed parameters for the corrected model, and we are able to give a confidence value for the calibration parameters found. Optimization results on both synthetic and real data are presented.
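
    The energy function itself can be sketched compactly: for each point, fit a local plane to its nearest neighbours via a singular value decomposition and accumulate the squared point-to-plane distance. The optimizer over the per-beam correction parameters is omitted, and the neighbourhood size is an arbitrary choice.

```python
import numpy as np

def point_to_plane_energy(points, k=20):
    """Sum of squared distances of each point to the plane fitted through
    its k nearest neighbours (smallest-variance direction of the SVD)."""
    points = np.asarray(points, dtype=float)
    energy = 0.0
    for p in points:
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[1:k + 1]]
        centre = nbrs.mean(axis=0)
        _, _, vt = np.linalg.svd(nbrs - centre, full_matrices=False)
        normal = vt[-1]
        energy += float(np.dot(p - centre, normal) ** 2)
    return energy

# Near-planar synthetic cloud: the energy is small and grows with the noise level
pts = np.random.default_rng(4).normal(size=(200, 3)) * [5.0, 5.0, 0.02]
print(point_to_plane_energy(pts))
```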

  8. Development of a commercially viable piezoelectric force sensor system for static force measurement

    NASA Astrophysics Data System (ADS)

    Liu, Jun; Luo, Xinwei; Liu, Jingcheng; Li, Min; Qin, Lan

    2017-09-01

    A compensation method for measuring static force with a commercial piezoelectric force sensor is proposed to disprove the theory that piezoelectric sensors and generators can only operate under dynamic force. After studying the model of the piezoelectric force sensor measurement system, the principle of static force measurement using a piezoelectric material or piezoelectric force sensor is analyzed. Then, the distribution law of the decay time constant of the measurement system and the variation law of the measurement system’s output are studied, and a compensation method based on the time interval threshold Δt and attenuation threshold Δu_th is proposed. By calibrating the system and considering the influences of the environment and the hardware, a suitable Δu_th value is determined, and the system’s output attenuation is compensated based on the Δu_th value to realize the measurement. Finally, a static force measurement system with a piezoelectric force sensor is developed based on the compensation method. The experimental results confirm the successful development of a simple compensation method for static force measurement with a commercial piezoelectric force sensor. In addition, it is established that, contrary to the current perception, a piezoelectric force sensor system can be used to measure static force through further calibration.

  9. A Data-driven Approach for Forecasting Next-day River Discharge

    NASA Astrophysics Data System (ADS)

    Sharif, H. O.; Billah, K. S.

    2017-12-01

    This study focuses on evaluating the performance of the Soil and Water Assessment Tool (SWAT) eco-hydrological model, a simple Auto-Regressive with eXogenous input (ARX) model, and a Gene Expression Programming (GEP)-based model in one-day-ahead forecasting of discharge of a subtropical basin (the upper Kentucky River Basin). The three models were calibrated with daily flow at a US Geological Survey (USGS) stream gauging station not affected by flow regulation for the period 2002-2005. The calibrated models were then validated at the same gauging station as well as at another USGS gauge 88 km downstream for the period 2008-2010. The results suggest that the simple models outperform the sophisticated hydrological model, with GEP having the advantage of being able to generate functional relationships that allow scientific investigation of the complex nonlinear interrelationships among input variables. Unlike SWAT, GEP and, to some extent, ARX are less sensitive to the length of the calibration time series and do not require a spin-up period.
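
    A minimal ARX-style forecaster in the spirit of the abstract is sketched below; the model order, inputs and synthetic data are illustrative assumptions rather than the study's configuration.

```python
import numpy as np

def fit_arx(Q, P):
    """Fit Q[t+1] = a*Q[t] + b*P[t] by ordinary least squares."""
    X = np.column_stack([Q[:-1], P[:-1]])
    coeffs, *_ = np.linalg.lstsq(X, Q[1:], rcond=None)
    return coeffs                                   # [a, b]

def forecast_next_day(coeffs, Q_today, P_today):
    a, b = coeffs
    return a * Q_today + b * P_today

rng = np.random.default_rng(5)
P = rng.gamma(2.0, 3.0, 1000)                       # synthetic daily rainfall
Q = np.zeros(1000)
for t in range(999):                                # synthetic discharge series
    Q[t + 1] = 0.8 * Q[t] + 0.3 * P[t] + rng.normal(0.0, 0.2)

coeffs = fit_arx(Q[:800], P[:800])                  # "calibration" period
print(forecast_next_day(coeffs, Q[800], P[800]), "vs observed", Q[801])
```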

  10. A Simple Accelerometer Calibrator

    NASA Astrophysics Data System (ADS)

    Salam, R. A.; Islamy, M. R. F.; Munir, M. M.; Latief, H.; Irsyam, M.; Khairurrijal

    2016-08-01

    A high probability of earthquakes can lead to a large number of victims, and earthquakes can also trigger other hazards such as tsunamis and landslides. This calls for a system that can detect earthquake occurrence. One possible approach is a vibration sensing system based on an accelerometer. However, the output of such a system is usually given as acceleration data, so a calibrator that lets the accelerometer sense a known vibration is needed. In this study, a simple accelerometer calibrator has been developed using a 12 V DC motor, an optocoupler, a Liquid Crystal Display (LCD) and an AVR 328 microcontroller as the controller. The system uses Pulse Width Modulation (PWM) from the microcontroller to control the motor's rotational speed, and hence the vibration frequency. The vibration frequency was read by the optocoupler and used as feedback to the system. The results show that the system could control the rotational speed and the vibration frequency in accordance with the defined PWM.

  11. Calibration of Response Data Using MIRT Models with Simple and Mixed Structures

    ERIC Educational Resources Information Center

    Zhang, Jinming

    2012-01-01

    It is common to assume during a statistical analysis of a multiscale assessment that the assessment is composed of several unidimensional subtests or that it has simple structure. Under this assumption, the unidimensional and multidimensional approaches can be used to estimate item parameters. These two approaches are equivalent in parameter…

  12. A novel pretreatment method combining sealing technique with direct injection technique applied for improving biosafety.

    PubMed

    Wang, Xinyu; Gao, Jing-Lin; Du, Chaohui; An, Jing; Li, MengJiao; Ma, Haiyan; Zhang, Lina; Jiang, Ye

    2017-01-01

    Biosafety risks in clinical bioanalysis are attracting increasing attention, and a safe, simple and effective sample preparation method is urgently needed. To improve the biosafety of clinical analysis, we used the antiviral drugs adefovir and tenofovir as model compounds and developed a safe pretreatment method combining a sealing technique with a direct injection technique. The inter- and intraday precision (RSD %) of the method were <4%, and the extraction recoveries ranged from 99.4 to 100.7%. The results also showed that standard solutions could be used to prepare the calibration curve instead of spiked plasma, giving more accurate results. Compared with traditional methods, the novel method not only improved the biosafety of the pretreatment significantly, but also offered higher precision, favorable sensitivity and satisfactory recovery. With these highly practical and desirable characteristics, the novel method may become a feasible platform in bioanalysis.

  13. A projector calibration method for monocular structured light system based on digital image correlation

    NASA Astrophysics Data System (ADS)

    Feng, Zhixin

    2018-02-01

    Projector calibration is crucial for a camera-projector three-dimensional (3-D) structured light measurement system, which has one camera and one projector. In this paper, a novel projector calibration method is proposed based on digital image correlation. In the method, the projector is viewed as an inverse camera, and a plane calibration board with feature points is used to calibrate the projector. During calibration, a random speckle pattern is projected onto the calibration board at different orientations to establish correspondences between projector images and camera images, thereby generating a dataset for projector calibration. The projector can then be calibrated using a well-established camera calibration algorithm. The experimental results confirm that the proposed method is accurate and reliable for projector calibration.
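
    Once correspondences have mapped each board feature point to projector pixel coordinates, the "inverse camera" step amounts to a standard camera calibration. A minimal OpenCV-based sketch (the data layout and function name are assumptions) is:

      import numpy as np
      import cv2

      def calibrate_projector(board_points_3d, projector_points_2d, projector_size):
          """board_points_3d: list of (N,3) float32 arrays, one per board pose.
          projector_points_2d: list of (N,1,2) float32 arrays with the same points
          expressed in projector pixel coordinates (e.g. obtained via DIC matching).
          projector_size: (width, height) of the projector image in pixels."""
          rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
              board_points_3d, projector_points_2d, projector_size, None, None)
          return rms, K, dist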

  14. Recent Self-Reported Cannabis Use Is Associated With the Biometrics of Delta-9-Tetrahydrocannabinol.

    PubMed

    Smith, Matthew J; Alden, Eva C; Herrold, Amy A; Roberts, Andrea; Stern, Dan; Jones, Joseph; Barnes, Allan; O'Connor, Kailyn P; Huestis, Marilyn A; Breiter, Hans C

    2018-05-01

    Research typically characterizes cannabis use by self-report of cannabis intake frequency. In an effort to better understand relationships between measures of cannabis use, we evaluated whether Δ-9-tetrahydrocannabinol (THC) and metabolite concentrations (biometrics) were associated with a calibrated timeline followback (TLFB) assessment of cannabis use. Participants were 35 young adult male cannabis users who completed a calibrated TLFB measure of cannabis use over the past 30 days, including time of last use. The calibration required participants to handle four plastic bags of a cannabis substitute (0.25, 0.5, 1.0, and 3.5 grams) to quantify the amount of cannabis consumed. Participants provided blood and urine samples for analysis of THC and metabolites at two independent laboratories. Participants abstained from cannabis use on the day of sample collection. We tested Pearson correlations between the calibrated TLFB measures and cannabis biometrics. Strong correlations were seen between urine and blood biometrics (all r > .73, all p < .001). TLFB measures of times of use and grams of cannabis consumed were significantly related to each biometric, including urine 11-nor-9-carboxy-Δ9-tetrahydrocannabinol (THCCOOH) and blood THC, 11-hydroxy-THC (11-OH-THC), THCCOOH, and THCCOOH-glucuronide (times of use: r = .48-.61, all p < .05; grams: r = .40-.49, all p < .05). This study extends prior work to show that TLFB methods relate significantly to an extended array of cannabis biometrics. The calibration of cannabis intake in grams was associated with each biometric, although the simple TLFB measure of times of use produced the strongest relationships with all five biometrics. These findings suggest that combined self-report and biometric data together convey the complexity of cannabis use, but allow that either calibrated TLFB measures or biometrics may be sufficient for assessment of cannabis use in research.

  15. High-speed spectral calibration by complex FIR filter in phase-sensitive optical coherence tomography.

    PubMed

    Kim, Sangmin; Raphael, Patrick D; Oghalai, John S; Applegate, Brian E

    2016-04-01

    Swept-laser sources offer a number of advantages for Phase-sensitive Optical Coherence Tomography (PhOCT). However, inter- and intra-sweep variability leads to calibration errors that adversely affect phase sensitivity. While there are several approaches to overcoming this problem, our preferred method is to simply calibrate every sweep of the laser. This approach offers high accuracy and phase stability at the expense of a substantial processing burden. In this approach, the Hilbert phase of the interferogram from a reference interferometer provides the instantaneous wavenumber of the laser, but is computationally expensive. Fortunately, the Hilbert transform may be approximated by a Finite Impulse-Response (FIR) filter. Here we explore the use of several FIR filter based Hilbert transforms for calibration, explicitly considering the impact of filter choice on phase sensitivity and OCT image quality. Our results indicate that the complex FIR filter approach is the most robust and accurate among those considered. It provides similar image quality and slightly better phase sensitivity than the traditional FFT-IFFT based Hilbert transform while consuming fewer resources in an FPGA implementation. We also explored utilizing the Hilbert magnitude of the reference interferogram to calculate an ideal window function for spectral amplitude calibration. The ideal window function is designed to carefully control sidelobes on the axial point spread function. We found that after a simple chromatic correction, calculating the window function using the complex FIR filter and the reference interferometer gave similar results to window functions calculated using a mirror sample and the FFT-IFFT Hilbert transform. Hence, the complex FIR filter can enable accurate and high-speed calibration of the magnitude and phase of spectral interferograms.
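
    The per-sweep calibration step can be sketched in a few lines. This is a minimal illustration, not the authors' FPGA implementation: an equiripple FIR approximation of the Hilbert transformer is applied to the reference-interferometer fringe, and the unwrapped analytic phase is taken as the instantaneous wavenumber; the filter length and band edges are illustrative choices.

      import numpy as np
      from scipy.signal import remez, lfilter

      def instantaneous_phase_fir(fringe, numtaps=65):
          # Equiripple Hilbert-transformer FIR (odd numtaps -> type III design);
          # the passband 5%-45% of the sample rate is an illustrative choice.
          h = remez(numtaps, [0.05, 0.45], [1.0], type='hilbert', fs=1.0)
          quadrature = lfilter(h, 1.0, np.asarray(fringe, dtype=float))
          delay = (numtaps - 1) // 2            # group delay of the linear-phase FIR
          inphase = np.roll(fringe, delay)      # approximate alignment (circular shift)
          analytic = inphase + 1j * quadrature
          return np.unwrap(np.angle(analytic))

      # The unwrapped phase is then used to resample each sweep onto a uniform
      # wavenumber grid before the OCT Fourier transform.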

  16. High-speed spectral calibration by complex FIR filter in phase-sensitive optical coherence tomography

    PubMed Central

    Kim, Sangmin; Raphael, Patrick D.; Oghalai, John S.; Applegate, Brian E.

    2016-01-01

    Swept-laser sources offer a number of advantages for Phase-sensitive Optical Coherence Tomography (PhOCT). However, inter- and intra-sweep variability leads to calibration errors that adversely affect phase sensitivity. While there are several approaches to overcoming this problem, our preferred method is to simply calibrate every sweep of the laser. This approach offers high accuracy and phase stability at the expense of a substantial processing burden. In this approach, the Hilbert phase of the interferogram from a reference interferometer provides the instantaneous wavenumber of the laser, but is computationally expensive. Fortunately, the Hilbert transform may be approximated by a Finite Impulse-Response (FIR) filter. Here we explore the use of several FIR filter based Hilbert transforms for calibration, explicitly considering the impact of filter choice on phase sensitivity and OCT image quality. Our results indicate that the complex FIR filter approach is the most robust and accurate among those considered. It provides similar image quality and slightly better phase sensitivity than the traditional FFT-IFFT based Hilbert transform while consuming fewer resources in an FPGA implementation. We also explored utilizing the Hilbert magnitude of the reference interferogram to calculate an ideal window function for spectral amplitude calibration. The ideal window function is designed to carefully control sidelobes on the axial point spread function. We found that after a simple chromatic correction, calculating the window function using the complex FIR filter and the reference interferometer gave similar results to window functions calculated using a mirror sample and the FFT-IFFT Hilbert transform. Hence, the complex FIR filter can enable accurate and high-speed calibration of the magnitude and phase of spectral interferograms. PMID:27446666

  17. Improving the complementary methods to estimate evapotranspiration under diverse climatic and physical conditions

    NASA Astrophysics Data System (ADS)

    Anayah, F. M.; Kaluarachchi, J. J.

    2014-06-01

    Reliable estimation of evapotranspiration (ET) is important for water resources planning and management. Complementary methods, including the complementary relationship areal evapotranspiration (CRAE), advection aridity (AA), and Granger and Gray (GG) methods, have been used to estimate ET because they are simple and practical for estimating regional ET from meteorological data only. However, prior studies have found limitations in these methods, especially in contrasting climates. This study aims to develop a calibration-free universal method using the complementary relationships to compute regional ET in contrasting climatic and physical conditions with meteorological data only. The proposed methodology consists of a systematic sensitivity analysis using the existing complementary methods. This work used 34 global FLUXNET sites where eddy covariance (EC) fluxes of ET are available for validation. A total of 33 alternative model variations from the original complementary methods were proposed. Further analysis using statistical methods and simplified climatic class definitions produced one distinctly improved GG-model-based alternative. The proposed model produced a single-step ET formulation with results equal to or better than recent studies using data-intensive, classical methods. The average root mean square error (RMSE), mean absolute bias (BIAS) and coefficient of determination (R2) across the 34 global sites were 20.57 mm/month, 10.55 mm/month and 0.64, respectively. The proposed model represents a step forward toward predicting ET in large river basins with limited data and requires no calibration.

  18. Development of a calibration equipment for spectrometer qualification

    NASA Astrophysics Data System (ADS)

    Michel, C.; Borguet, B.; Boueé, A.; Blain, P.; Deep, A.; Moreau, V.; François, M.; Maresi, L.; Myszkowiak, A.; Taccola, M.; Versluys, J.; Stockman, Y.

    2017-09-01

    With the development of new spectrometer concepts, calibration facilities must be adapted to characterize their performance correctly. These spectro-imaging performances mainly comprise Modulation Transfer Function, spectral response, resolution and registration, as well as polarization, straylight and radiometric calibration. The challenge of this calibration development is to achieve better performance than the item under test using mostly standard components. Because only the spectrometer subsystem needs to be calibrated, the calibration facility must simulate the geometrical behaviour of the imaging system. A trade-off study indicated that no commercial devices are able to fulfil all the requirements completely, so it was necessary to opt for an in-house telecentric achromatic design. The proposed concept is based on an Offner design, which mainly allows the use of simple spherical mirrors and covers the required spectral range; the spectral range is scanned with a monochromator. Because of the large number of parameters to record, the calibration facility is fully automated. The performance of the calibration system has been verified by analysis and experimentally. Results achieved recently on a free-form grating Offner spectrometer demonstrate the capabilities of this new calibration facility. In this paper, the full calibration facility, developed specifically for a new free-form spectro-imager, is described.

  19. Validated HPLC-UV method for determination of naproxen in human plasma with proven selectivity against ibuprofen and paracetamol.

    PubMed

    Filist, Monika; Szlaska, Iwona; Kaza, Michał; Pawiński, Tomasz

    2016-06-01

    Estimating the influence of interfering compounds present in the biological matrix on the determination of an analyte is one of the most important tasks during bioanalytical method development and validation. Interferences from endogenous components and, if necessary, from major metabolites as well as possible co-administered medications should be evaluated during a selectivity test. This paper describes a simple, rapid and cost-effective HPLC-UV method for the determination of naproxen in human plasma in the presence of two other analgesics, ibuprofen and paracetamol. Sample preparation is based on a simple liquid-liquid extraction procedure with a short, 5 s mixing time. Fenoprofen, which is characterized by a similar structure and properties to naproxen, was first used as the internal standard. The calibration curve is linear in the concentration range of 0.5-80.0 µg/mL, which is suitable for pharmacokinetic studies following a single 220 mg oral dose of naproxen sodium. The method was fully validated according to international guidelines and was successfully applied in a bioequivalence study in humans. Copyright © 2015 John Wiley & Sons, Ltd.

  20. The 'sniffer-patch' technique for detection of neurotransmitter release.

    PubMed

    Allen, T G

    1997-05-01

    A wide variety of techniques have been employed for the detection and measurement of neurotransmitter release from biological preparations. Whilst many of these methods offer impressive levels of sensitivity, few are able to combine sensitivity with the necessary temporal and spatial resolution required to study quantal release from single cells. One detection method that is seeing a revival of interest and has the potential to fill this niche is the so-called 'sniffer-patch' technique. In this article, specific examples of the practical aspects of using this technique are discussed along with the procedures involved in calibrating these biosensors to extend their applications to provide quantitative, in addition to simple qualitative, measurements of quantal transmitter release.

  1. Emissivity correction for interpreting thermal radiation from a terrestrial surface

    NASA Technical Reports Server (NTRS)

    Sutherland, R. A.; Bartholic, J. F.; Gerber, J. F.

    1979-01-01

    A general method of accounting for emissivity in making temperature determinations of graybody surfaces from radiometric data is presented. The method differs from previous treatments in that a simple blackbody calibration and graphical approach is used rather than numerical integrations, which require detailed knowledge of an instrument's spectral characteristics. Also, errors caused by approximating instrumental response with the Stefan-Boltzmann law rather than with an appropriately weighted Planck integral are examined. In the 8-14 micron wavelength interval, it is shown that errors are at most on the order of 3 °C for the extremes of the earth's temperature and emissivity. For more practical limits, however, errors are less than 0.5 °C.
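
    For orientation, the broadband Stefan-Boltzmann approximation whose errors the paper examines can be sketched as follows; this is a generic graybody correction under assumed variable names, not the paper's blackbody-calibration and graphical procedure.

      import numpy as np

      SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

      def graybody_temperature(t_brightness_K, emissivity, t_background_K):
          """Measured radiance = emissivity*sigma*Ts^4 + (1-emissivity)*sigma*Tbg^4;
          solve for the surface temperature Ts."""
          measured = SIGMA * t_brightness_K ** 4
          reflected = (1.0 - emissivity) * SIGMA * t_background_K ** 4
          return ((measured - reflected) / (emissivity * SIGMA)) ** 0.25

      # Example: brightness temperature 300 K, emissivity 0.95, sky at 260 K.
      # print(graybody_temperature(300.0, 0.95, 260.0))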

  2. Calibration of echocardiographic tissue doppler velocity, using simple universally applicable methods

    NASA Astrophysics Data System (ADS)

    Dhutia, Niti M.; Zolgharni, Massoud; Willson, Keith; Cole, Graham; Nowbar, Alexandra N.; Manisty, Charlotte H.; Francis, Darrel P.

    2014-03-01

    Some of the challenges with tissue Doppler measurement include: apparent inconsistency between manufacturers, uncertainty over which part of the trace to make measurements and a lack of calibration of measurements. We develop and test tools to solve these problems in echocardiography laboratories. We designed and constructed an actuator and phantom setup to produce automatic reproducible motion, and used it to compare velocities measured using 3 echocardiographic modalities: M-mode, speckle tracking, and tissue Doppler, against a non-ultrasound, optical gold standard. In the clinical phase, 25 patients underwent M-mode, speckle tracking and tissue Doppler measurements of tissue velocities. In-vitro, the M-mode and speckle tracking velocities were concordant with optical assessment. Of the three possible tissue Doppler measurement conventions (outer, middle and inner line) only the middle line agreed with the optical assessment (discrepancy -0.20 (95% confidence interval -0.44 to 0.03) cm/s, p=0.11; outer +5.19 (4.65 to 5.73) cm/s, p<0.0001; inner -6.26 (-6.87 to -5.65) cm/s, p<0.0001). All 4 studied manufacturers showed a similar pattern. M-mode was therefore chosen as the in-vivo gold standard. Clinical measurements of tissue velocities by speckle tracking and the middle line of the tissue Doppler were concordant with M-mode, while the outer line significantly overestimated (+1.27 (0.96 to 1.59) cm/s, p<0.0001) and the inner line underestimated (-1.81 (-2.11 to -1.52) cm/s, p<0.0001). Echocardiographic velocity measurements can be calibrated by simple, inexpensive tools. We found that the middle of the tissue Doppler trace represents velocity correctly. Echocardiographers requiring velocities to match between different equipment, settings or modalities should use the middle line as the "guideline".

  3. Quantifying Confidence in Model Predictions for Hypersonic Aircraft Structures

    DTIC Science & Technology

    2015-03-01

    of isolating calibrations of models in the network, segmented and simultaneous calibration are compared using the Kullback-Leibler ... value of θ. While not all test-statistics are as simple as measuring goodness or badness of fit, their directional interpretations tend to remain ... data quite well, qualitatively. Quantitative goodness-of-fit tests are problematic because they assume a true empirical CDF is being tested or

  4. Non-contact thrust stand calibration method for repetitively pulsed electric thrusters.

    PubMed

    Wong, Andrea R; Toftul, Alexandra; Polzin, Kurt A; Pearson, J Boise

    2012-02-01

    A thrust stand calibration technique for use in testing repetitively pulsed electric thrusters for in-space propulsion has been developed and tested using a modified hanging pendulum thrust stand. In the implementation of this technique, current pulses are applied to a solenoid to produce a pulsed magnetic field that acts against a permanent magnet mounted to the thrust stand pendulum arm. The force on the magnet is applied in this non-contact manner, with the entire pulsed force transferred to the pendulum arm through a piezoelectric force transducer to provide a time-accurate force measurement. Modeling of the pendulum arm dynamics reveals that after an initial transient in thrust stand motion the quasi-steady average deflection of the thrust stand arm away from the unforced or "zero" position can be related to the average applied force through a simple linear Hooke's law relationship. Modeling demonstrates that this technique is universally applicable except when the pulsing period is increased to the point where it approaches the period of natural thrust stand motion. Calibration data were obtained using a modified hanging pendulum thrust stand previously used for steady-state thrust measurements. Data were obtained for varying impulse bit at constant pulse frequency and for varying pulse frequency. The two data sets exhibit excellent quantitative agreement with each other. The overall error on the linear regression fit used to determine the calibration coefficient was roughly 1%.
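
    The linear Hooke's-law relationship described above reduces to a one-parameter regression; the sketch below (with assumed variable names) fits the calibration coefficient from measured quasi-steady deflections and the average applied force, where average force = impulse bit x pulse frequency.

      import numpy as np

      def calibration_coefficient(impulse_bits_Ns, pulse_freqs_Hz, deflections_m):
          avg_force = np.asarray(impulse_bits_Ns) * np.asarray(pulse_freqs_Hz)  # N
          deflections = np.asarray(deflections_m)
          slope, intercept = np.polyfit(avg_force, deflections, 1)              # m per N
          residuals = deflections - (slope * avg_force + intercept)
          rel_error = np.std(residuals) / np.ptp(deflections)                   # rough fit quality
          return slope, intercept, rel_error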

  5. Thermodynamically consistent model calibration in chemical kinetics

    PubMed Central

    2011-01-01

    Background The dynamics of biochemical reaction systems are constrained by the fundamental laws of thermodynamics, which impose well-defined relationships among the reaction rate constants characterizing these systems. Constructing biochemical reaction systems from experimental observations often leads to parameter values that do not satisfy the necessary thermodynamic constraints. This can result in models that are not physically realizable and may lead to inaccurate, or even erroneous, descriptions of cellular function. Results We introduce a thermodynamically consistent model calibration (TCMC) method that can be effectively used to provide thermodynamically feasible values for the parameters of an open biochemical reaction system. The proposed method formulates the model calibration problem as a constrained optimization problem that takes thermodynamic constraints (and, if desired, additional non-thermodynamic constraints) into account. By calculating thermodynamically feasible values for the kinetic parameters of a well-known model of the EGF/ERK signaling cascade, we demonstrate the qualitative and quantitative significance of imposing thermodynamic constraints on these parameters and the effectiveness of our method for accomplishing this important task. MATLAB software, using the Systems Biology Toolbox 2.1, can be accessed from http://www.cis.jhu.edu/~goutsias/CSS lab/software.html. An SBML file containing the thermodynamically feasible EGF/ERK signaling cascade model can be found in the BioModels database. Conclusions TCMC is a simple and flexible method for obtaining physically plausible values for the kinetic parameters of open biochemical reaction systems. It can be effectively used to recalculate a thermodynamically consistent set of parameter values for existing thermodynamically infeasible biochemical reaction models of cellular function as well as to estimate thermodynamically feasible values for the parameters of new models. Furthermore, TCMC can provide dimensionality reduction, better estimation performance, and lower computational complexity, and can help to alleviate the problem of data overfitting. PMID:21548948
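
    The essence of the constrained calibration can be illustrated with a toy example; this is a minimal sketch under assumed names, not the authors' MATLAB/Systems Biology Toolbox code. For a hypothetical three-reaction cycle, thermodynamic consistency requires the products of forward and reverse rate constants around the loop to be equal, which is a linear equality in log space.

      import numpy as np
      from scipy.optimize import minimize

      def fit_thermodynamically_consistent(simulate, observed, k0):
          """simulate(k) -> model output comparable to 'observed'; k holds six rate
          constants ordered [k1f, k2f, k3f, k1r, k2r, k3r] (hypothetical system)."""
          def cost(logk):
              return np.sum((simulate(np.exp(logk)) - observed) ** 2)

          # Wegscheider-type loop condition: sum(log kf) = sum(log kr).
          loop_constraint = {"type": "eq",
                             "fun": lambda logk: np.sum(logk[:3]) - np.sum(logk[3:])}
          result = minimize(cost, np.log(k0), constraints=[loop_constraint],
                            method="SLSQP")
          return np.exp(result.x)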

  6. Accurate calibration of waveform data measured by the Plasma Wave Experiment on board the ARASE satellite

    NASA Astrophysics Data System (ADS)

    Kitahara, M.; Katoh, Y.; Hikishima, M.; Kasahara, Y.; Matsuda, S.; Kojima, H.; Ozaki, M.; Yagitani, S.

    2017-12-01

    The Plasma Wave Experiment (PWE) is installed on board the ARASE satellite to measure the electric field in the frequency range from DC to 10 MHz, and the magnetic field in the frequency range from a few Hz to 100 kHz, using two dipole wire-probe antennas (WPT) and three magnetic search coils (MSC), respectively. In particular, the Waveform Capture (WFC), one of the receivers of the PWE, can detect electromagnetic field waveforms in the frequency range from a few Hz to 20 kHz. The Software-type Wave Particle Interaction Analyzer (S-WPIA) is installed on the ARASE satellite to measure the energy exchange between plasma waves and particles. Since S-WPIA uses the waveform data measured by WFC to calculate the relative phase angle between the wave magnetic field and the velocity of energetic electrons, high accuracy is required in the calibration of both the amplitude and phase of the waveform data. Generally, the calibration procedure for a signal passed through a receiver consists of three steps: the transformation into spectra, the calibration by the transfer function of the receiver, and the inverse transformation of the calibrated spectra into the time domain. In practice, to reduce sidelobe effects, the raw data are filtered by a window function in the time domain before applying the Fourier transform. However, when the first-order derivative of the phase transfer function of the system is not negligible, the phase of the window function convolved into the calibrated spectra is shifted differently at each frequency, resulting in a discontinuity in the calibrated waveform data in the time domain. To eliminate the effect of this phase shift of the window function, we suggest several methods to calibrate waveform data accurately and carry out simulations assuming simple sinusoidal waves as input signals and using transfer functions of the WPT, MSC, and WFC obtained in pre-flight tests. In consequence, we conclude that the following two methods can reduce the error introduced by the calibration to less than 0.1% of the amplitude of the input waves: (1) a Tukey-type window function with a flat-top region of one-third of the window length, and (2) modification of the window function at each frequency based on the phase shift estimated from the first-order derivative of the transfer functions.
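
    The generic three-step calibration with the Tukey window of option (1) can be sketched as follows; the transfer function here is a placeholder, not the actual WPT/MSC/WFC response, and the variable names are assumptions.

      import numpy as np
      from scipy.signal.windows import tukey

      def calibrate_waveform(raw, transfer_function):
          """raw: real waveform samples; transfer_function: complex receiver gain
          sampled at the np.fft.rfft frequencies (length n//2 + 1)."""
          n = raw.size
          window = tukey(n, alpha=2.0 / 3.0)   # flat top over one-third of the window
          spectrum = np.fft.rfft(raw * window)
          calibrated = spectrum / transfer_function
          return np.fft.irfft(calibrated, n=n)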

  7. Quality assessment of gasoline using comprehensive two-dimensional gas chromatography combined with unfolded partial least squares: A reliable approach for the detection of gasoline adulteration.

    PubMed

    Parastar, Hadi; Mostafapour, Sara; Azimi, Gholamhasan

    2016-01-01

    Comprehensive two-dimensional gas chromatography with flame ionization detection combined with unfolded partial least squares is proposed as a simple, fast and reliable method to assess the quality of gasoline and to detect its potential adulterants. The data for the calibration set are first baseline corrected using a two-dimensional asymmetric least squares algorithm. The number of significant partial least squares components used to build the model is determined from the minimum root-mean-square error of leave-one-out cross-validation, which was 4. In this regard, blends of gasoline with kerosene, white spirit and paint thinner as frequently used adulterants are used to make calibration samples. Appropriate statistical parameters (regression coefficient of 0.996-0.998, root-mean-square error of prediction of 0.005-0.010 and relative error of prediction of 1.54-3.82% for the calibration set) show the reliability of the developed method. In addition, the developed method is externally validated with three samples in the validation set (with a relative error of prediction below 10.0%). Finally, to test the applicability of the proposed strategy for the analysis of real samples, five real gasoline samples collected from gas stations were analyzed; the gasoline proportions were in the range of 70-85%. Also, the relative standard deviations were below 8.5% for different samples in the prediction set. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
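
    The unfolding-plus-PLS step can be sketched as below, assuming baseline-corrected GCxGC-FID chromatogram images and known adulterant fractions for the calibration blends; variable names and the component search range are assumptions.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import LeaveOneOut, cross_val_predict

      def fit_unfolded_pls(chromatograms, adulterant_fractions, max_components=8):
          X = np.array([c.ravel() for c in chromatograms])   # unfold 2D image -> 1D vector
          y = np.asarray(adulterant_fractions)
          best_n, best_rmse = 1, np.inf
          for n in range(1, max_components + 1):
              pred = cross_val_predict(PLSRegression(n_components=n), X, y,
                                       cv=LeaveOneOut())
              rmse = np.sqrt(np.mean((pred.ravel() - y) ** 2))
              if rmse < best_rmse:
                  best_n, best_rmse = n, rmse
          model = PLSRegression(n_components=best_n).fit(X, y)
          return model, best_n, best_rmse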

  8. [Spectrometric assessment of thyroid depth within the radioiodine test].

    PubMed

    Rink, T; Bormuth, F-J; Schroth, H-J; Braun, S; Zimny, M

    2005-01-01

    The aim of this study is the validation of a simple method for evaluating the depth of the target volume within the radioiodine test by analyzing the emitted iodine-131 energy spectrum. In a total of 250 patients (102 with a solitary autonomous nodule, 66 with multifocal autonomy, 29 with disseminated autonomy, 46 with Graves' disease, 6 for reducing goiter volume and 1 with only partly resectable papillary thyroid carcinoma), simultaneous uptake measurements in the Compton scatter (210 +/- 110 keV) and photopeak (364 -45/+55 keV) windows were performed over one minute, 24 hours after application of the 3 MBq test dose, with subsequent calculation of the respective count ratios. Measurements with a water-filled plastic neck phantom were carried out to determine the relationship between these quotients and the average source depth and to obtain a calibration curve for calculating the depth of the target volume in the 250 patients for comparison with the sonographic reference data. Another calibration curve was obtained by evaluating the results of 125 randomly selected patient measurements to calculate the source depth in the other half of the group. The phantom measurements revealed a highly significant correlation (r = 0.99) between the count ratios and the source depth. Using these calibration data, a good relationship (r = 0.81, average deviation 6 mm, corresponding to 22%) between the spectrometric and the sonographic depths was obtained. When using the calibration curve resulting from the 125 patient measurements, the average deviation in the other half of the group was only 3 mm (12%). There was no difference between the disease groups. The described method allows an easy-to-use depth correction of the uptake measurements and provides good results.

  9. Body composition in Nepalese children using isotope dilution: the production of ethnic-specific calibration equations and an exploration of methodological issues.

    PubMed

    Devakumar, Delan; Grijalva-Eternod, Carlos S; Roberts, Sebastian; Chaube, Shiva Shankar; Saville, Naomi M; Manandhar, Dharma S; Costello, Anthony; Osrin, David; Wells, Jonathan C K

    2015-01-01

    Background. Body composition is important as a marker of both current and future health. Bioelectrical impedance analysis (BIA) is a simple and accurate method for estimating body composition, but requires population-specific calibration equations. Objectives. (1) To generate population-specific calibration equations to predict lean mass (LM) from BIA in Nepalese children aged 7-9 years. (2) To explore methodological changes that may extend the range and improve accuracy. Methods. BIA measurements were obtained from 102 Nepalese children (52 girls) using the Tanita BC-418. Isotope dilution with deuterium oxide was used to measure total body water and to estimate LM. Prediction equations for estimating LM from BIA data were developed using linear regression, and estimates were compared with those obtained from the Tanita system. We assessed the effects of flexing the arms of children to extend the range of coverage towards lower weights. We also estimated the potential error if the number of children included in the study was reduced. Findings. Prediction equations were generated, incorporating height, impedance index, weight and sex as predictors (R2 = 93%). The Tanita system tended to underestimate LM, with a mean error of 2.2%, but extending up to 25.8%. Flexing the arms to 90° increased the lower weight range, but produced a small error that was not significant when applied to children <16 kg (p = 0.42). Reducing the number of children increased the error at the tails of the weight distribution. Conclusions. Population-specific isotope calibration of BIA for Nepalese children has high accuracy. Arm position is important and can be used to extend the range of low weight covered. Smaller samples reduce resource requirements, but lead to large errors at the tails of the weight distribution.
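
    A population-specific calibration equation of the kind described can be produced with a simple multiple regression; the sketch below uses assumed variable names and units, and the impedance index is the conventional height squared divided by impedance, not the published equation.

      import numpy as np
      from sklearn.linear_model import LinearRegression

      def fit_bia_calibration(height_cm, impedance_ohm, weight_kg, sex_male, lean_mass_kg):
          height_cm = np.asarray(height_cm, dtype=float)
          impedance_index = height_cm ** 2 / np.asarray(impedance_ohm, dtype=float)  # cm^2/ohm
          X = np.column_stack((height_cm, impedance_index,
                               np.asarray(weight_kg, dtype=float),
                               np.asarray(sex_male, dtype=float)))
          # Reference lean mass comes from deuterium-dilution total body water.
          model = LinearRegression().fit(X, np.asarray(lean_mass_kg, dtype=float))
          r2 = model.score(X, lean_mass_kg)
          return model, r2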

  10. Development and validation of an ultra-performance liquid chromatography quadrupole time of flight mass spectrometry method for rapid quantification of free amino acids in human urine.

    PubMed

    Joyce, Richard; Kuziene, Viktorija; Zou, Xin; Wang, Xueting; Pullen, Frank; Loo, Ruey Leng

    2016-01-01

    An ultra-performance liquid chromatography quadrupole time-of-flight mass spectrometry (UPLC-qTOF-MS) method using hydrophilic interaction liquid chromatography was developed and validated for the simultaneous quantification of 18 free amino acids in urine, with a total acquisition time, including column re-equilibration, of less than 18 min per sample. The method involves simple sample preparation consisting of a 15-fold dilution with acetonitrile to give a final composition of 25% aqueous and 75% acetonitrile, without the need for any derivatization. The dynamic range of the calibration curve is approximately two orders of magnitude (120-fold from the lowest calibration point) with good linearity (r2 ≥ 0.995 for all amino acids). Good separation of all amino acids as well as good intra- and inter-day accuracy (<15%) and precision (<15%) were observed using three quality control samples at concentrations in the low, medium and high range of the calibration curve. The limits of detection (LOD) and lower limits of quantification of the method ranged from approximately 1 to 300 nM and from 0.01 to 0.5 µM, respectively. The amino acids in the prepared urine samples were found to be stable for 72 h at 4 °C, after one freeze-thaw cycle, and for up to 4 weeks at -80 °C. We applied this method to quantify the content of 18 free amino acids in 646 urine samples from a dietary intervention study. We were able to quantify all 18 free amino acids in these urine samples whenever they were present at a level above the LOD. We found the method to be reproducible (accuracy and precision were typically <10% for QCL, QCM and QCH), and the relatively high sample throughput of this method potentially makes it a suitable alternative for the analysis of urine samples in a clinical setting.

  11. Automatic multi-camera calibration for deployable positioning systems

    NASA Astrophysics Data System (ADS)

    Axelsson, Maria; Karlsson, Mikael; Rudner, Staffan

    2012-06-01

    Surveillance with automated positioning and tracking of subjects and vehicles in 3D is desired in many defence and security applications. Camera systems with stereo or multiple cameras are often used for 3D positioning. In such systems, accurate camera calibration is needed to obtain a reliable 3D position estimate. There is also a need for automated camera calibration to facilitate fast deployment of semi-mobile multi-camera 3D positioning systems. In this paper we investigate a method for automatic calibration of the extrinsic camera parameters (relative camera pose and orientation) of a multi-camera positioning system. It is based on estimation of the essential matrix between each camera pair using the 5-point method for intrinsically calibrated cameras. The method is compared to a manual calibration method using real HD video data from a field trial with a multicamera positioning system. The method is also evaluated on simulated data from a stereo camera model. The results show that the reprojection error of the automated camera calibration method is close to or smaller than the error for the manual calibration method and that the automated calibration method can replace the manual calibration.
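
    The pairwise relative-pose step described above can be sketched with OpenCV; matched image points are assumed given, and since the cameras are intrinsically calibrated the points are first normalized so the 5-point solver can be applied with an identity camera matrix.

      import numpy as np
      import cv2

      def relative_pose(points_cam1, points_cam2, K1, K2):
          """points_cam*: (N,2) arrays of matched pixel coordinates;
          K1, K2: 3x3 intrinsic matrices of the two cameras."""
          pts1 = cv2.undistortPoints(points_cam1.reshape(-1, 1, 2).astype(np.float32), K1, None)
          pts2 = cv2.undistortPoints(points_cam2.reshape(-1, 1, 2).astype(np.float32), K2, None)
          # 5-point algorithm with RANSAC on normalized coordinates.
          E, inliers = cv2.findEssentialMat(pts1, pts2, np.eye(3),
                                            method=cv2.RANSAC, prob=0.999, threshold=1e-3)
          _, R, t, _ = cv2.recoverPose(E, pts1, pts2, np.eye(3))
          return R, t   # rotation and unit-norm translation from camera 1 to camera 2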

  12. A Novel Multi-Camera Calibration Method based on Flat Refractive Geometry

    NASA Astrophysics Data System (ADS)

    Huang, S.; Feng, M. C.; Zheng, T. X.; Li, F.; Wang, J. Q.; Xiao, L. F.

    2018-03-01

    Multi-camera calibration plays an important role in many fields. In this paper, we present a novel multi-camera calibration method based on flat refractive geometry. All cameras can acquire calibration images of a transparent glass calibration board (TGCB) at the same time. The use of a TGCB introduces refraction, which can generate calibration errors. The theory of flat refractive geometry is employed to eliminate this error, so the new method accounts for the refraction caused by the TGCB. Moreover, the bundle adjustment method is used to minimize the reprojection error and obtain optimized calibration results. Finally, four-camera calibration results on real data show that the mean value and standard deviation of the reprojection error of our method are 4.3411e-05 and 0.4553 pixel, respectively. The experimental results show that the proposed method is accurate and reliable.

  13. A Simple Spectrophotometer Using Common Materials and a Digital Camera

    ERIC Educational Resources Information Center

    Widiatmoko, Eko; Widayani; Budiman, Maman; Abdullah, Mikrajuddin; Khairurrijal

    2011-01-01

    A simple spectrophotometer was designed using cardboard, a DVD, a pocket digital camera, a tripod and a computer. The DVD was used as a diffraction grating and the camera as a light sensor. The spectrophotometer was calibrated using a reference light prior to use. The spectrophotometer was capable of measuring optical wavelengths with a…

  14. Calibrating the Decline Rate - Peak Luminosity Relation for Type Ia Supernovae

    NASA Astrophysics Data System (ADS)

    Rust, Bert W.; Pruzhinskaya, Maria V.; Thijsse, Barend J.

    2015-08-01

    The correlation between peak luminosity and rate of decline in luminosity for Type I supernovae was first studied by B. W. Rust [Ph.D. thesis, Univ. of Illinois (1974) ORNL-4953] and Yu. P. Pskovskii [Sov. Astron., 21 (1977) 675] in the 1970s. Their work was little noted until Phillips rediscovered the correlation in 1993 [ApJ, 413 (1993) L105] and attempted to derive a calibration relation using a difference quotient approximation Δm15(B) to the decline rate after peak luminosity Mmax(B). Numerical differentiation of data containing measuring errors is a notoriously unstable calculation, but Δm15(B) remains the parameter of choice for most calibration methods developed since 1993. To succeed, it should be computed from good functional fits to the lightcurves, but most workers never exhibit their fits. In the few instances where they have, the fits are not very good. Some of the 9 supernovae in the Phillips study required extinction corrections in their estimates of Mmax(B), and so were not appropriate for establishing a calibration relation. Although the relative uncertainties in his Δm15(B) estimates were comparable to those in his Mmax(B) estimates, he nevertheless used simple linear regression of the latter on the former, rather than major-axis regression (total least squares), which would have been more appropriate. Here we determine some new calibration relations using a sample of nearby "pure" supernovae suggested by M. V. Pruzhinskaya [Astron. Lett., 37 (2011) 663]. Their parent galaxies are all in the NED collection, with good distance estimates obtained by several different methods. We fit each lightcurve with an optimal regression spline obtained by B. J. Thijsse's spline2 [Comp. in Sci. & Eng., 10 (2008) 49]. The fits, which explain more than 99% of the variance in each case, are better than anything heretofore obtained by stretching "template" lightcurves or fitting combinations of standard lightcurves. We use the fits to compute estimates of Δm15(B) and some other calibration parameters suggested by Pskovskii [Sov. Astron., 28 (1984) 858] and compare their utility for cosmological testing.

  15. Automatic Camera Calibration for Cultural Heritage Applications Using Unstructured Planar Objects

    NASA Astrophysics Data System (ADS)

    Adam, K.; Kalisperakis, I.; Grammatikopoulos, L.; Karras, G.; Petsa, E.

    2013-07-01

    As a rule, image-based documentation of cultural heritage relies today on ordinary digital cameras and commercial software. As such projects often involve researchers not familiar with photogrammetry, the question of camera calibration is important. Freely available open-source user-friendly software for automatic camera calibration, often based on simple 2D chess-board patterns, are an answer to the demand for simplicity and automation. However, such tools cannot respond to all requirements met in cultural heritage conservation regarding possible imaging distances and focal lengths. Here we investigate the practical possibility of camera calibration from unknown planar objects, i.e. any planar surface with adequate texture; we have focused on the example of urban walls covered with graffiti. Images are connected pair-wise with inter-image homographies, which are estimated automatically through a RANSAC-based approach after extracting and matching interest points with the SIFT operator. All valid points are identified on all images on which they appear. Provided that the image set includes a "fronto-parallel" view, inter-image homographies with this image are regarded as emulations of image-to-world homographies and allow computing initial estimates for the interior and exterior orientation elements. Following this initialization step, the estimates are introduced into a final self-calibrating bundle adjustment. Measures are taken to discard unsuitable images and verify object planarity. Results from practical experimentation indicate that this method may produce satisfactory results. The authors intend to incorporate the described approach into their freely available user-friendly software tool, which relies on chess-boards, to assist non-experts in their projects with image-based approaches.
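
    The pairwise matching step of this pipeline can be sketched with OpenCV: SIFT interest points are extracted on two views of the planar surface and the inter-image homography is estimated robustly with RANSAC. The ratio-test threshold and reprojection tolerance are illustrative choices.

      import numpy as np
      import cv2

      def pairwise_homography(img1_gray, img2_gray, ratio=0.75):
          sift = cv2.SIFT_create()
          kp1, des1 = sift.detectAndCompute(img1_gray, None)
          kp2, des2 = sift.detectAndCompute(img2_gray, None)
          matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
          good = [m for m, n in matches if m.distance < ratio * n.distance]  # Lowe ratio test
          src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
          dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
          H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
          return H, inlier_mask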

  16. Novel spectrophotometric determination of flumethasone pivalate and clioquinol in their binary mixture and pharmaceutical formulation.

    PubMed

    Abdel-Aleem, Eglal A; Hegazy, Maha A; Sayed, Nour W; Abdelkawy, M; Abdelfatah, Rehab M

    2015-02-05

    This work is concerned with development and validation of three simple, specific, accurate and precise spectrophotometric methods for determination of flumethasone pivalate (FP) and clioquinol (CL) in their binary mixture and ear drops. Method A is a ratio subtraction spectrophotometric one (RSM). Method B is a ratio difference spectrophotometric one (RDSM), while method C is a mean center spectrophotometric one (MCR). The calibration curves are linear over the concentration range of 3-45 μg/mL for FP, and 2-25 μg/mL for CL. The specificity of the developed methods was assessed by analyzing different laboratory prepared mixtures of the FP and CL. The three methods were validated as per ICH guidelines; accuracy, precision and repeatability are found to be within the acceptable limits. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. Three different spectrophotometric methods manipulating ratio spectra for determination of binary mixture of Amlodipine and Atorvastatin

    NASA Astrophysics Data System (ADS)

    Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeiny, Badr A.

    2011-12-01

    Three simple, specific, accurate and precise spectrophotometric methods manipulating ratio spectra are developed for the simultaneous determination of Amlodipine besylate (AM) and Atorvastatin calcium (AT) in tablet dosage forms. The first method is the first derivative of the ratio spectra (1DD), the second is ratio subtraction and the third is mean centering of ratio spectra. The calibration curve is linear over the concentration range of 3-40 and 8-32 μg/ml for AM and AT, respectively. These methods are tested by analyzing synthetic mixtures of the above drugs and they are applied to commercial pharmaceutical preparations of the subjected drugs. The standard deviation is <1.5 in the assay of raw materials and tablets. The methods are validated as per ICH guidelines, and accuracy, precision, repeatability and robustness are found to be within the acceptable limits.

  18. SU-F-J-65: Prediction of Patient Setup Errors and Errors in the Calibration Curve from Prompt Gamma Proton Range Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albert, J; Labarbe, R; Sterpin, E

    2016-06-15

    Purpose: To understand the extent to which prompt gamma camera measurements can be used to predict the residual proton range due to setup errors and errors in the calibration curve. Methods: We generated ten variations on a default calibration curve (CC) and ten corresponding range maps (RM). Starting with the default RM, we chose a square array of N beamlets, which were then rotated by a random angle θ and shifted by a random vector s. We added a 5% distal Gaussian noise to each beamlet in order to introduce discrepancies that exist between the ranges predicted from the prompt gamma measurements and those simulated with Monte Carlo algorithms. For each RM, s and θ, along with an offset u in the CC, were optimized using a simple Euclidean distance between the default ranges and the ranges produced by the given RM. Results: The application of our method led to a maximal overrange of 2.0 mm and underrange of 0.6 mm on average. Compared to the situations where s, θ, and u were ignored, these values were larger: 2.1 mm and 4.3 mm. In order to quantify the need for setup error corrections, we also performed computations in which u was corrected for, but s and θ were not. This yielded 3.2 mm and 3.2 mm. The average computation time for 170 beamlets was 65 seconds. Conclusion: These results emphasize the necessity of correcting for setup errors and errors in the calibration curve. The simplicity and speed of our method make it a good candidate for implementation as a tool for in-room adaptive therapy. This work also demonstrates that prompt gamma range measurements can indeed be useful in the effort to reduce range errors. Given these results, and barring further refinements, this approach is a promising step towards adaptive proton radiotherapy.
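
    The optimization described can be sketched as a four-parameter fit of the in-plane shift s, rotation θ and calibration-curve offset u; the range-map interpolator and the modeling of u as an additive range shift are illustrative assumptions, not details from the abstract.

      import numpy as np
      from scipy.optimize import minimize

      def fit_setup_and_cc_errors(beamlet_xy, default_ranges, range_lookup):
          """beamlet_xy: (N,2) beamlet positions; range_lookup(xy) -> ranges from
          the perturbed range map at the given positions (hypothetical interpolator)."""
          def cost(p):
              sx, sy, theta, u = p
              c, s = np.cos(theta), np.sin(theta)
              rot = np.array([[c, -s], [s, c]])
              xy = beamlet_xy @ rot.T + np.array([sx, sy])
              return np.linalg.norm(range_lookup(xy) + u - default_ranges)

          return minimize(cost, x0=np.zeros(4), method="Nelder-Mead")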

  19. The simple determination method for anthocyanidin aglycones in fruits using ultra-high-performance liquid chromatography.

    PubMed

    Shim, You-Shin; Yoon, Won-Jin; Kim, Dong-Man; Watanabe, Masaki; Park, Hyun-Jin; Jang, Hae Won; Lee, Jangho; Ha, Jaeho

    2015-01-01

    A simple determination method for anthocyanidin aglycones in fruits using ultra-high-performance liquid chromatography (UHPLC) coupled with a heating-block acidic hydrolysis method was validated in terms of precision, accuracy and linearity. The UHPLC separation was performed on a reversed-phase C18 column (particle size 2 μm, i.d. 2 mm, length 100 mm) with a photodiode-array detector. The limits of detection and quantification of the UHPLC analyses were 0.09 and 0.29 mg/kg for delphinidin, 0.08 and 0.24 mg/kg for cyanidin, 0.09 and 0.26 mg/kg for petunidin, 0.14 and 0.42 mg/kg for pelargonidin, 0.16 and 0.48 mg/kg for peonidin and 0.30 and 0.91 mg/kg for malvidin, respectively. The intra- and inter-day precisions of the individual anthocyanidin aglycones were <10.3%. All calibration curves exhibited good linearity (r = 0.999) within the tested ranges. The total run time of the UHPLC method was 8 min. The simple preparation method with UHPLC detection presented herein significantly improved the speed and simplicity of the preparation step for delphinidin, cyanidin, petunidin, pelargonidin, peonidin and malvidin in fruits. In particular, UHPLC exhibited good resolution despite a run time about four times shorter than that of conventional HPLC. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  20. Image-based spectral distortion correction for photon-counting x-ray detectors

    PubMed Central

    Ding, Huanjun; Molloi, Sabee

    2012-01-01

    Purpose: To investigate the feasibility of using an image-based method to correct for distortions induced by various artifacts in the x-ray spectrum recorded with photon-counting detectors for their application in breast computed tomography (CT). Methods: The polyenergetic incident spectrum was simulated with the tungsten anode spectral model using the interpolating polynomials (TASMIP) code and carefully calibrated to match the x-ray tube in this study. Experiments were performed on a Cadmium-Zinc-Telluride (CZT) photon-counting detector with five energy thresholds. Energy bins were adjusted to evenly distribute the recorded counts above the noise floor. BR12 phantoms of various thicknesses were used for calibration. A nonlinear function was selected to fit the count correlation between the simulated and the measured spectra in the calibration process. To evaluate the proposed spectral distortion correction method, an empirical fitting derived from the calibration process was applied on the raw images recorded for polymethyl methacrylate (PMMA) phantoms of 8.7, 48.8, and 100.0 mm. Both the corrected counts and the effective attenuation coefficient were compared to the simulated values for each of the five energy bins. The feasibility of applying the proposed method to quantitative material decomposition was tested using a dual-energy imaging technique with a three-material phantom that consisted of water, lipid, and protein. The performance of the spectral distortion correction method was quantified using the relative root-mean-square (RMS) error with respect to the expected values from simulations or areal analysis of the decomposition phantom. Results: The implementation of the proposed method reduced the relative RMS error of the output counts in the five energy bins with respect to the simulated incident counts from 23.0%, 33.0%, and 54.0% to 1.2%, 1.8%, and 7.7% for 8.7, 48.8, and 100.0 mm PMMA phantoms, respectively. The accuracy of the effective attenuation coefficient of PMMA estimate was also improved with the proposed spectral distortion correction. Finally, the relative RMS error of water, lipid, and protein decompositions in dual-energy imaging was significantly reduced from 53.4% to 6.8% after correction was applied. Conclusions: The study demonstrated that dramatic distortions in the recorded raw image yielded from a photon-counting detector could be expected, which presents great challenges for applying the quantitative material decomposition method in spectral CT. The proposed semi-empirical correction method can effectively reduce these errors caused by various artifacts, including pulse pileup and charge sharing effects. Furthermore, rather than detector-specific simulation packages, the method requires a relatively simple calibration process and knowledge about the incident spectrum. Therefore, it may be used as a generalized procedure for the spectral distortion correction of different photon-counting detectors in clinical breast CT systems. PMID:22482608
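
    The per-bin calibration step can be sketched as a nonlinear fit mapping measured (distorted) counts to the simulated incident counts over the calibration phantoms, which is then applied to raw images. The power-law-plus-linear form and variable names below are assumptions; the paper only states that a nonlinear function was selected.

      import numpy as np
      from scipy.optimize import curve_fit

      def count_model(measured, a, b, c):
          return a * measured + b * measured ** c

      def calibrate_bin(measured_counts, simulated_counts):
          """Fit the per-bin correction using phantom measurements of several thicknesses."""
          popt, _ = curve_fit(count_model, measured_counts, simulated_counts,
                              p0=(1.0, 0.1, 1.2), maxfev=10000)
          return popt

      def correct_image(raw_bin_image, popt):
          return count_model(raw_bin_image, *popt)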

  1. Polarimeter calibration error gets far out of control

    NASA Astrophysics Data System (ADS)

    Chipman, Russell A.

    2015-09-01

    This is a sad story about a polarization calibration error gone amuck. A simple laboratory mistake was mistaken for a new phenomenon. Aggressive management did their job and sold the flawed idea very effectively, and substantial funding followed. Questions were raised, and a government lab tried but could not recreate the breakthrough. The results were unpleasant, and the field of infrared polarimetry developed a bad reputation for several years.

  2. A calibration method of infrared LVF based spectroradiometer

    NASA Astrophysics Data System (ADS)

    Liu, Jiaqing; Han, Shunli; Liu, Lei; Hu, Dexin

    2017-10-01

    In this paper, a calibration method for an LVF-based spectroradiometer is summarized, covering both spectral calibration and radiometric calibration. The spectral calibration proceeds as follows: first, the relationship between the stepping motor's step number and the transmitted wavelength is derived by theoretical calculation, including a non-linearity correction of the LVF; second, a line-to-line method is used to correct the theoretical wavelength; finally, 3.39 μm and 10.69 μm lasers are used for spectral calibration validation, showing that the sought accuracy of 0.1% or better is achieved. A new sub-region, multi-point calibration method is used for the radiometric calibration to improve accuracy; results show that the sought accuracy of 1% or better is achieved.

  3. The effects of numerical-model complexity and observation type on estimated porosity values

    USGS Publications Warehouse

    Starn, Jeffrey; Bagtzoglou, Amvrossios C.; Green, Christopher T.

    2015-01-01

    The relative merits of model complexity and types of observations employed in model calibration are compared. An existing groundwater flow model coupled with an advective transport simulation of the Salt Lake Valley, Utah (USA), is adapted for advective transport, and effective porosity is adjusted until simulated tritium concentrations match concentrations in samples from wells. Two calibration approaches are used: a “complex” highly parameterized porosity field and a “simple” parsimonious model of porosity distribution. The use of an atmospheric tracer (tritium in this case) and apparent ages (from tritium/helium) in model calibration also are discussed. Of the models tested, the complex model (with tritium concentrations and tritium/helium apparent ages) performs best. Although tritium breakthrough curves simulated by complex and simple models are very generally similar, and there is value in the simple model, the complex model is supported by a more realistic porosity distribution and a greater number of estimable parameters. Culling the best quality data did not lead to better calibration, possibly because of processes and aquifer characteristics that are not simulated. Despite many factors that contribute to shortcomings of both the models and the data, useful information is obtained from all the models evaluated. Although any particular prediction of tritium breakthrough may have large errors, overall, the models mimic observed trends.

  4. Simple method for quantification of gadolinium magnetic resonance imaging contrast agents using ESR spectroscopy.

    PubMed

    Takeshita, Keizo; Kinoshita, Shota; Okazaki, Shoko

    2012-01-01

    To develop an estimation method for gadolinium magnetic resonance imaging (MRI) contrast agents, the effect of the concentration of Gd compounds on the ESR spectrum of a nitroxyl radical was examined. A solution of either 4-oxo-2,2,6,6-tetramethylpiperidine-N-oxyl (TEMPONE) or 4-hydroxy-2,2,6,6-tetramethylpiperidine-N-oxyl (TEMPOL) was mixed with a solution of a Gd compound and the ESR spectrum was recorded. Increasing the concentration of the gadolinium-diethylenetriamine pentaacetic acid chelate (Gd-DTPA), an MRI contrast agent, increased the peak-to-peak line widths of the ESR spectra of the nitroxyl radicals, in accordance with a decrease in their signal heights. A linear relationship was observed between the concentration of Gd-DTPA and the line width of the ESR signal, up to approximately 50 mmol/L Gd-DTPA, with a high correlation coefficient. The response of TEMPONE was 1.4 times higher than that of TEMPOL, as evaluated from the slopes of the lines. The response differed slightly among Gd compounds; the slopes of the calibration curves for aqua[N,N-bis[2-[(carboxymethyl)[(methylcarbamoyl)methyl]amino]ethyl]glycinato(3-)]gadolinium hydrate (Gd-DTPA-BMA) (6.22 μT·L/mmol) and gadolinium-1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid chelate (Gd-DOTA) (6.62 μT·L/mmol) were steeper than the slope for Gd-DTPA (5.45 μT·L/mmol), whereas the slope for gadolinium chloride (4.94 μT·L/mmol) was less steep than that for Gd-DTPA. This method is simple to apply. The results indicate that it is useful for rough estimation of the concentration of Gd contrast agents if calibration is carried out with each standard compound. It was also found that a plot of the reciprocal square root of the signal height against the concentration of contrast agent could be useful for the estimation if a constant volume of sample solution is taken and measured at the same position in the ESR cavity every time.

  5. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Ryan B.; Clegg, Samuel M.; Frydenvang, Jens

    We report that accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. Lastly, the sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
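
    The blending step lends itself to a short sketch. The following code is a minimal, hypothetical illustration of the sub-model idea (train PLS models on restricted composition ranges and combine them according to a full-range model); it is not the ChemCam team's implementation, and the data, sub-ranges, and blending weights are invented.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Hypothetical training set: rows are LIBS spectra, y is one element's abundance (wt.%).
rng = np.random.default_rng(0)
X = rng.random((60, 200))
y = rng.uniform(0.0, 30.0, 60)          # full composition range, 0-30 wt.%

low, high = y < 12.0, y > 8.0           # overlapping "low" and "high" composition sub-ranges

full_model = PLSRegression(n_components=5).fit(X, y)            # trained on everything
low_model  = PLSRegression(n_components=5).fit(X[low], y[low])
high_model = PLSRegression(n_components=5).fit(X[high], y[high])

def blended_prediction(x, lo=8.0, hi=12.0):
    """Predict with the sub-model picked (or blended) according to the full-range model."""
    x = np.asarray(x).reshape(1, -1)
    ref = full_model.predict(x)[0, 0]           # full-range model decides the regime
    if ref <= lo:
        return low_model.predict(x)[0, 0]
    if ref >= hi:
        return high_model.predict(x)[0, 0]
    w = (ref - lo) / (hi - lo)                  # linear blend inside the overlap region
    return (1.0 - w) * low_model.predict(x)[0, 0] + w * high_model.predict(x)[0, 0]

print(blended_prediction(X[0]))
```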

  6. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    DOE PAGES

    Anderson, Ryan B.; Clegg, Samuel M.; Frydenvang, Jens; ...

    2016-12-15

    We report that accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. Lastly, the sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.

  7. HPLC determination of four active saponins from Panax notoginseng in rat serum and its application to pharmacokinetic studies.

    PubMed

    Li, Lie; Sheng, Yuxin; Zhang, Jinlan; Wang, Chuanshe; Guo, Dean

    2004-12-01

    Four main active saponins of Panax notoginseng (ginsenosides Rg1, Rb1, Rd and notoginsenoside R1) were determined in rat serum after oral and intravenous administration of total saponins of P. notoginseng (PNS) to rats, using a simple and sensitive high-performance liquid chromatographic method. The serum samples were pretreated with solid-phase extraction before analysis. The calibration curves for the four saponins were linear in the given concentration ranges. The intra-day and inter-day assay coefficients of variation in serum were less than 10.0% and the recoveries of the method were higher than 80.0% at high, middle and low concentrations. This method was applied to study the pharmacokinetics following oral and intravenous administration of PNS. Copyright 2004 John Wiley & Sons, Ltd.

  8. Simultaneous kinetic spectrometric determination of three flavonoid antioxidants in fruit with the aid of chemometrics

    NASA Astrophysics Data System (ADS)

    Sun, Ruiling; Wang, Yong; Ni, Yongnian; Kokot, Serge

    2014-03-01

    A simple, inexpensive and sensitive kinetic spectrophotometric method was developed for the simultaneous determination of three anti-carcinogenic flavonoids, catechin, quercetin and naringenin, in fruit samples. A yellow chelate product was formed between neocuproine and Cu(I), the reduction product of the reaction of the flavonoids with Cu(II), and this enabled quantitative measurements with UV-vis spectrophotometry. The overlapping spectra obtained were resolved with chemometric calibration models, and the best performing method was fast independent component analysis combined with principal component regression (fast-ICA/PCR); the limits of detection were 0.075, 0.057 and 0.063 mg L-1 for catechin, quercetin and naringenin, respectively. The novel method was found to outperform the common HPLC procedure significantly.

  9. Optics-Only Calibration of a Neural-Net Based Optical NDE Method for Structural Health Monitoring

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.

    2004-01-01

    A calibration process is presented that uses optical measurements alone to calibrate a neural-net based NDE method. The method itself detects small changes in the vibration mode shapes of structures. The optics-only calibration process confirms previous work showing that the sensitivity to vibration-amplitude changes can be as small as 10 nanometers. A more practical value in an NDE service laboratory is shown to be 50 nanometers. Both model-generated and experimental calibrations are demonstrated using two implementations of the calibration technique. The implementations are based on previously published demonstrations of the NDE method and an alternative calibration procedure that depends on comparing neural-net and point-sensor measurements. The optics-only calibration method, unlike the alternative method, does not require modifications of the structure being tested or the creation of calibration objects. The calibration process can be used to test improvements in the NDE process and to develop a vibration-mode independence of damage-detection sensitivity. The calibration effort was intended to support NASA's objective to promote safety in the operation of ground test facilities, and aviation safety in general, by allowing the detection of the gradual onset of structural changes and damage.

  10. Assessment of measurement errors and dynamic calibration methods for three different tipping bucket rain gauges

    NASA Astrophysics Data System (ADS)

    Shedekar, Vinayak S.; King, Kevin W.; Fausey, Norman R.; Soboyejo, Alfred B. O.; Harmel, R. Daren; Brown, Larry C.

    2016-09-01

    Three different models of tipping bucket rain gauges (TBRs), viz. HS-TB3 (Hydrological Services Pty Ltd.), ISCO-674 (Isco, Inc.) and TR-525 (Texas Electronics, Inc.), were calibrated in the lab to quantify measurement errors across a range of rainfall intensities (5 mm·h-1 to 250 mm·h-1) and three different volumetric settings. Instantaneous and cumulative values of simulated rainfall were recorded at 1, 2, 5, 10 and 20-min intervals. All three TBR models showed a significant deviation (α = 0.05) in measurements from actual rainfall depths, with increasing underestimation errors at greater rainfall intensities. Simple linear regression equations were developed for each TBR to correct the TBR readings based on measured intensities (R2 > 0.98). Additionally, two dynamic calibration techniques, viz. a quadratic model (R2 > 0.7) and a T vs. 1/Q model (R2 > 0.98), were tested and found to be useful in situations where the volumetric settings of TBRs are unknown. The correction models were successfully applied to correct field-collected rainfall data from the respective TBR models. The calibration parameters of the correction models were found to be highly sensitive to changes in the volumetric calibration of TBRs. Overall, the HS-TB3 model (with a better protected tipping bucket mechanism and consistent measurement errors across a range of rainfall intensities) was found to be the most reliable and consistent for rainfall measurements, followed by the ISCO-674 (with susceptibility to clogging and relatively smaller measurement errors across a range of rainfall intensities) and the TR-525 (with high susceptibility to clogging, frequent changes in volumetric calibration, and highly intensity-dependent measurement errors). The study demonstrated that corrections based on dynamic and volumetric calibration can help minimize, but not completely eliminate, the measurement errors. The findings from this study will be useful for correcting field data from TBRs and may have major implications for field- and watershed-scale hydrologic studies.
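
    A minimal sketch of the intensity-based correction described above: fit a linear model between simulator-applied and gauge-reported intensities, then apply it to field readings. All numbers are hypothetical, not the paper's calibration data.

```python
import numpy as np

# Hypothetical lab calibration points: rainfall intensity applied by the simulator
# vs. intensity reported by one tipping bucket gauge (mm/h); underestimation grows
# with intensity, as the record describes.
actual   = np.array([  5.0,  25.0,  50.0, 100.0, 150.0, 200.0, 250.0])
reported = np.array([  4.9,  24.2,  47.6,  93.0, 137.0, 180.0, 222.0])

# Linear correction model: actual ~ a * reported + b
a, b = np.polyfit(reported, actual, 1)

def correct(tbr_intensity_mm_per_h):
    """Apply the lab-derived linear correction to field TBR readings."""
    return a * np.asarray(tbr_intensity_mm_per_h) + b

field = [12.0, 63.0, 140.0]
print(np.round(correct(field), 1))
```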

  11. Determination of Trace Available Heavy Metals in Soil Using Laser-Induced Breakdown Spectroscopy Assisted with Phase Transformation Method.

    PubMed

    Yi, Rongxing; Yang, Xinyan; Zhou, Ran; Li, Jiaming; Yu, Huiwu; Hao, Zhongqi; Guo, Lianbo; Li, Xiangyou; Lu, Yongfeng; Zeng, Xiaoyan

    2018-05-18

    To detect available heavy metals in soil using laser-induced breakdown spectroscopy (LIBS) and to improve its poor detection sensitivity, a simple and low-cost sample pretreatment method named solid-liquid-solid transformation was proposed. By this method, available heavy metals were extracted from soil through ultrasonic vibration and centrifuging and then deposited on a glass slide. Using this solid-liquid-solid transformation method, available Cd and Pb in soil were detected successfully. The results show that the regression coefficients of the calibration curves for the soil analyses reach more than 0.98. The limits of detection could reach 0.067 and 0.94 ppm for available Cd and Pb in soil under optimized conditions, respectively, which are much better than those obtained by conventional LIBS.

  12. Wavelength calibration of dispersive near-infrared spectrometer using relative k-space distribution with low coherence interferometer

    NASA Astrophysics Data System (ADS)

    Kim, Ji-hyun; Han, Jae-Ho; Jeong, Jichai

    2016-05-01

    The commonly employed calibration methods for laboratory-made spectrometers have several disadvantages, including poor calibration when the number of characteristic spectral peaks is low. Therefore, we present a wavelength calibration method using the relative k-space distribution obtained with a low-coherence interferometer. The proposed method utilizes an interferogram with a perfect sinusoidal pattern in k-space for calibration. Zero-crossing detection extracts the k-space distribution of a spectrometer from the interferogram in the wavelength domain, and a calibration lamp provides information about absolute wavenumbers. To assign wavenumbers, wavelength-to-k-space conversion is required for the characteristic spectrum of the calibration lamp with the extracted k-space distribution. Then, the wavelength calibration is completed by inverse conversion of the k-space into the wavelength domain. The calibration performance of the proposed method was demonstrated under two experimental conditions, with four and with eight characteristic spectral peaks. The proposed method yielded reliable calibration results in both cases, whereas the conventional method of third-order polynomial curve fitting failed to determine wavelengths in the case of four characteristic peaks. Moreover, for optical coherence tomography imaging, the proposed method could improve axial resolution owing to higher suppression of sidelobes in the point spread function than the conventional method. We believe that our findings can improve not only wavelength calibration accuracy but also resolution for optical coherence tomography.
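
    The zero-crossing idea can be sketched in a few lines. The example below is a synthetic illustration, not the authors' code: it assumes an arbitrary non-linear pixel-to-k mapping, recovers the relative k-axis from the interferogram zero crossings, and pins the absolute scale with two assumed lamp lines.

```python
import numpy as np

# Synthetic demonstration (all parameters assumed): a low-coherence interferogram is
# sinusoidal in wavenumber k, so its zero crossings are equally spaced in k even when
# the pixel-to-k mapping of the spectrometer is non-linear.
n_pix = 1024
pix = np.arange(n_pix)
k_true = 7.0e6 + 1.2e6 * (pix / n_pix) + 8.0e4 * (pix / n_pix) ** 2   # unknown mapping, rad/m
interferogram = np.cos(2.0 * 150e-6 * k_true)                         # path difference 2z, z = 150 um

# 1) Zero-crossing detection: crossing positions are uniform in k (spacing pi / (2z)).
idx = np.where(np.diff(np.signbit(interferogram)))[0]
frac = interferogram[idx] / (interferogram[idx] - interferogram[idx + 1])
zc_pix = idx + frac                     # sub-pixel crossing positions
rel_k = np.arange(zc_pix.size)          # relative k index: k = k0 + rel_k * dk (k0, dk unknown)

# 2) Two lamp lines with known wavelengths fix the absolute scale (k0, dk).
lamp_pix = np.array([200.0, 800.0])
lamp_k = k_true[[200, 800]]             # in practice: 2*pi / (known lamp wavelengths)
rel_at_lamp = np.interp(lamp_pix, zc_pix, rel_k)
dk = (lamp_k[1] - lamp_k[0]) / (rel_at_lamp[1] - rel_at_lamp[0])
k0 = lamp_k[0] - dk * rel_at_lamp[0]

# 3) Assign a wavelength to every pixel through the recovered k-axis
#    (pixels beyond the first/last crossing are clamped by np.interp).
k_pix = k0 + dk * np.interp(pix, zc_pix, rel_k)
wavelength_nm = 2 * np.pi / k_pix * 1e9
print(wavelength_nm[[0, 512, 1023]])
```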

  13. Verification of the ISO calibration method for field pyranometers under tropical sky conditions

    NASA Astrophysics Data System (ADS)

    Janjai, Serm; Tohsing, Korntip; Pattarapanitchai, Somjet; Detkhon, Pasakorn

    2017-02-01

    Field pyranometers need to be calibrated annually, and the International Organization for Standardization (ISO) has defined a standard method (ISO 9847) for calibrating these pyranometers. According to this standard method for outdoor calibration, the field pyranometers have to be compared to a reference pyranometer for a period of 2 to 14 days, depending on sky conditions. In this work, the ISO 9847 standard method was verified under tropical sky conditions. To verify the standard method, calibration of field pyranometers was conducted at a tropical site located in Nakhon Pathom (13.82° N, 100.04° E), Thailand, under various sky conditions. The conditions of the sky were monitored by using a sky camera. The calibration results for different time periods used for the calibration under various sky conditions were analyzed. It was found that the calibration periods given by this standard method could be reduced without significant change in the final calibration result. In addition, recommendations and a discussion on the use of this standard method in the tropics are also presented.
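
    For orientation only, the sketch below shows the basic comparison idea behind outdoor calibration against a reference pyranometer (estimating the field instrument's sensitivity from simultaneous readings); it omits the data-selection and series-ratio rules of the actual ISO 9847 procedure, and all values are hypothetical.

```python
import numpy as np

# Hypothetical simultaneous readings (millivolts) from the reference and the field
# pyranometer under stable clear-sky conditions.
v_ref   = np.array([4.10, 5.22, 6.35, 7.41, 8.02, 8.55])
v_field = np.array([3.55, 4.50, 5.49, 6.40, 6.94, 7.38])
s_ref = 8.63e-3                 # reference sensitivity, mV per W/m2 (assumed known)

g = v_ref / s_ref               # irradiance derived from the reference instrument, W/m2
# Least-squares sensitivity of the field instrument, forcing the line through the origin.
s_field = np.sum(v_field * g) / np.sum(g ** 2)

print(f"field pyranometer sensitivity: {s_field * 1e3:.3f} uV per W/m2")
```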

  14. Calibration of a simple and a complex model of global marine biogeochemistry

    NASA Astrophysics Data System (ADS)

    Kriest, Iris

    2017-11-01

    The assessment of the ocean biota's role in climate change is often carried out with global biogeochemical ocean models that contain many components and involve a high level of parametric uncertainty. Because many data that relate to tracers included in a model are only sparsely observed, assessment of model skill is often restricted to tracers that can be easily measured and assembled. Examination of the models' fit to climatologies of inorganic tracers, after the models have been spun up to steady state, is a common but computationally expensive procedure to assess model performance and reliability. Using new tools that have become available for global model assessment and calibration in steady state, this paper examines two different model types - a complex seven-component model (MOPS) and a very simple four-component model (RetroMOPS) - for their fit to dissolved quantities. Before comparing the models, a subset of their biogeochemical parameters has been optimised against annual-mean nutrients and oxygen. Both model types fit the observations almost equally well. The simple model contains only two nutrients: oxygen and dissolved organic phosphorus (DOP). Its misfit and large-scale tracer distributions are sensitive to the parameterisation of DOP production and decay. The spatio-temporal decoupling of nitrogen and oxygen, and processes involved in their uptake and release, renders oxygen and nitrate valuable tracers for model calibration. In addition, the non-conservative nature of these tracers (with respect to their upper boundary condition) introduces the global bias (fixed nitrogen and oxygen inventory) as a useful additional constraint on model parameters. Dissolved organic phosphorus at the surface behaves antagonistically to phosphate, and suggests that observations of this tracer - although difficult to measure - may be an important asset for model calibration.

  15. Using structural equation modeling to construct calibration equations relating PM2.5 mass concentration samplers to the federal reference method sampler

    NASA Astrophysics Data System (ADS)

    Bilonick, Richard A.; Connell, Daniel P.; Talbott, Evelyn O.; Rager, Judith R.; Xue, Tao

    2015-02-01

    The objective of this study was to remove systematic bias among fine particulate matter (PM2.5) mass concentration measurements made by different types of samplers used in the Pittsburgh Aerosol Research and Inhalation Epidemiology Study (PARIES). PARIES is a retrospective epidemiology study that aims to provide a comprehensive analysis of the associations between air quality and human health effects in the Pittsburgh, Pennsylvania, region from 1999 to 2008. Calibration was needed in order to minimize the amount of systematic error in PM2.5 exposure estimation as a result of including data from 97 different PM2.5 samplers at 47 monitoring sites. Ordinary regression often has been used for calibrating air quality measurements from pairs of measurement devices; however, this is only appropriate when one of the two devices (the "independent" variable) is free from random error, which is rarely the case. A group of methods known as "errors-in-variables" (e.g., Deming regression, reduced major axis regression) has been developed to handle calibration between two devices when both are subject to random error, but these methods require information on the relative sizes of the random errors for each device, which typically cannot be obtained from the observed data. When data from more than two devices (or repeats of the same device) are available, the additional information is not used to inform the calibration. A more general approach that often has been overlooked is the use of a measurement error structural equation model (SEM) that allows the simultaneous comparison of three or more devices (or repeats). The theoretical underpinnings of all of these approaches to calibration are described, and the pros and cons of each are discussed. In particular, it is shown that both ordinary regression (when used for calibration) and Deming regression are particular examples of SEMs but with substantial deficiencies. To illustrate the use of SEMs, the 7865 daily average PM2.5 mass concentration measurements made by seven collocated samplers at an urban monitoring site in Pittsburgh, Pennsylvania, were used. These samplers, which included three federal reference method (FRM) samplers, three speciation samplers, and a tapered element oscillating microbalance (TEOM), operated at various times during the 10-year PARIES study period. Because TEOM measurements are known to depend on temperature, the constructed SEM provided calibration equations relating the TEOM to the FRM and speciation samplers as a function of ambient temperature. It was shown that TEOM imprecision and TEOM bias (relative to the FRM) both decreased as temperature increased. It also was shown that the temperature dependency for bias was non-linear and followed a sigmoidal (logistic) pattern. The speciation samplers exhibited only small bias relative to the FRM samplers, although the FRM samplers were shown to be substantially more precise than both the TEOM and the speciation samplers. Comparison of the SEM results to pairwise simple linear regression results showed that the regression results can differ substantially from the correctly-derived calibration equations, especially if the less-precise device is used as the independent variable in the regression.
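
    As a concrete example of the errors-in-variables approaches mentioned above, the sketch below implements Deming regression, which needs the ratio of the two samplers' error variances as outside information (one of the limitations the abstract notes). The collocated readings and the assumed variance ratio are hypothetical.

```python
import numpy as np

def deming_fit(x, y, delta=1.0):
    """Deming (errors-in-variables) regression of y on x.

    delta is the assumed ratio of the y-error variance to the x-error variance;
    it must come from outside knowledge of the two devices' precision.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x.mean(), y.mean()
    sxx = np.mean((x - xm) ** 2)
    syy = np.mean((y - ym) ** 2)
    sxy = np.mean((x - xm) * (y - ym))
    slope = (syy - delta * sxx
             + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    intercept = ym - slope * xm
    return slope, intercept

# Hypothetical collocated daily PM2.5 readings (ug/m3) from an FRM and a TEOM sampler;
# delta > 1 encodes the assumption that the TEOM is less precise than the FRM.
frm  = np.array([ 8.1, 12.4, 15.0, 21.3, 25.7, 30.2, 34.8])
teom = np.array([ 6.9, 10.8, 13.1, 19.0, 23.5, 27.2, 31.6])
print(deming_fit(frm, teom, delta=2.0))
```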

  16. Phase noise measurements of the 400-kW, 2.115-GHz (S-band) transmitter

    NASA Technical Reports Server (NTRS)

    Boss, P.; Hoppe, D.; Bhanji, A.

    1987-01-01

    The measurement theory is described and a test method to perform phase noise verification using off-the-shelf components and instruments is presented. The measurement technique described consists of a double-balanced mixer used as phase detector, followed by a low noise amplifier. An FFT spectrum analyzer is then used to view the modulation components. A simple calibration procedure is outlined that ensures accurate measurements. A block diagram of the configuration is presented as well as actual phase noise data from the 400 kW, 2.115 GHz (S-band) klystron transmitter.

  17. Apparatus and method for measurement of weak optical absorptions by thermally induced laser pulsing

    DOEpatents

    Cremers, D.A.; Keller, R.A.

    1982-06-08

    The thermal lensing phenomenon is used as the basis for measurement of weak optical absorptions when a cell containing the sample to be investigated is inserted into a normally continuous-wave operation laser-pumped dye laser cavity for which the output coupler is deliberately tilted relative to intracavity circulating laser light, and pulsed laser output ensues, the pulsewidth of which can be related to the sample absorptivity by a simple algorithm or calibration curve. A minimum detection limit of less than 10⁻⁵ cm⁻¹ has been demonstrated using this technique.

  18. Apparatus and method for measurement of weak optical absorptions by thermally induced laser pulsing

    DOEpatents

    Cremers, D.A.; Keller, R.A.

    1985-10-01

    The thermal lensing phenomenon is used as the basis for measurement of weak optical absorptions when a cell containing the sample to be investigated is inserted into a normally continuous-wave operation laser-pumped dye laser cavity for which the output coupler is deliberately tilted relative to intracavity circulating laser light, and pulsed laser output ensues, the pulsewidth of which can be related to the sample absorptivity by a simple algorithm or calibration curve. A minimum detection limit of less than 10⁻⁵ cm⁻¹ has been demonstrated using this technique. 6 figs.

  19. Problems in the use of interference filters for spectrophotometric determination of total ozone

    NASA Technical Reports Server (NTRS)

    Basher, R. E.; Matthews, W. A.

    1977-01-01

    An analysis of the use of ultraviolet narrow-band interference filters for total ozone determination is given with reference to the New Zealand filter spectrophotometer under the headings of filter monochromaticity, temperature dependence, orientation dependence, aging, and specification tolerances and nonuniformity. Quantitative details of each problem are given, together with the means used to overcome them in the New Zealand instrument. The tuning of the instrument's filter center wavelengths to a common set of values by tilting the filters is also described, along with a simple calibration method used to adjust and set these center wavelengths.

  20. Apparatus and method for measurement of weak optical absorptions by thermally induced laser pulsing

    DOEpatents

    Cremers, David A.; Keller, Richard A.

    1985-01-01

    The thermal lensing phenomenon is used as the basis for measurement of weak optical absorptions when a cell containing the sample to be investigated is inserted into a normally continuous-wave operation laser-pumped dye laser cavity for which the output coupler is deliberately tilted relative to intracavity circulating laser light, and pulsed laser output ensues, the pulsewidth of which can be related to the sample absorptivity by a simple algorithm or calibration curve. A minimum detection limit of less than 10⁻⁵ cm⁻¹ has been demonstrated using this technique.

  1. Digital phase demodulation for low-coherence interferometry-based fiber-optic sensors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Y.; Strum, R.; Stiles, D.

    In this paper, we describe a digital phase demodulation scheme for low-coherence interferometry-based fiber-optic sensors that employs a simple generation of phase-shifted signals at the interrogation interferometer. The scheme allows a real-time calibration process and offers the capability of measuring large variations (up to the coherence length of the light source) at a bandwidth that is limited only by the data acquisition system. Finally, the proposed phase demodulation method is analytically derived and its validity and performance are experimentally verified using fiber-optic Fabry–Perot sensors for measurement of strains and vibrations.

  2. Digital phase demodulation for low-coherence interferometry-based fiber-optic sensors

    DOE PAGES

    Liu, Y.; Strum, R.; Stiles, D.; ...

    2017-11-20

    In this paper, we describe a digital phase demodulation scheme for low-coherence interferometry-based fiber-optic sensors that employs a simple generation of phase-shifted signals at the interrogation interferometer. The scheme allows a real-time calibration process and offers the capability of measuring large variations (up to the coherence length of the light source) at a bandwidth that is limited only by the data acquisition system. Finally, the proposed phase demodulation method is analytically derived and its validity and performance are experimentally verified using fiber-optic Fabry–Perot sensors for measurement of strains and vibrations.

  3. Research on camera on orbit radial calibration based on black body and infrared calibration stars

    NASA Astrophysics Data System (ADS)

    Wang, YuDu; Su, XiaoFeng; Zhang, WanYing; Chen, FanSheng

    2018-05-01

    Affected by the launch process and the space environment, the response of a space camera is inevitably attenuated, so it is necessary for a space camera to undergo spaceborne radiometric calibration. In this paper, we propose a calibration method based on accurate infrared standard stars to increase the precision of infrared radiation measurements. As stars can be considered point targets, we use them as the radiometric calibration source and establish a Taylor expansion method and an energy extrapolation model based on the WISE and 2MASS catalogs. We then update the calibration results obtained from the black body. Finally, the calibration mechanism is designed, and the design is verified by an on-orbit test. The experimental calibration results show that the irradiance extrapolation error is about 3% and the accuracy of the calibration method is about 10%, which satisfies the requirements of on-orbit calibration.

  4. A Comparison of Two Balance Calibration Model Building Methods

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard; Ulbrich, Norbert

    2007-01-01

    Simulated strain-gage balance calibration data is used to compare the accuracy of two balance calibration model building methods for different noise environments and calibration experiment designs. The first building method obtains a math model for the analysis of balance calibration data after applying a candidate math model search algorithm to the calibration data set. The second building method uses stepwise regression analysis in order to construct a model for the analysis. Four balance calibration data sets were simulated in order to compare the accuracy of the two math model building methods. The simulated data sets were prepared using the traditional One Factor At a Time (OFAT) technique and the Modern Design of Experiments (MDOE) approach. Random and systematic errors were introduced in the simulated calibration data sets in order to study their influence on the math model building methods. Residuals of the fitted calibration responses and other statistical metrics were compared in order to evaluate the calibration models developed with different combinations of noise environment, experiment design, and model building method. Overall, predicted math models and residuals of both math model building methods show very good agreement. Significant differences in model quality were attributable to noise environment, experiment design, and their interaction. Generally, the addition of systematic error significantly degraded the quality of calibration models developed from OFAT data by either method, but MDOE experiment designs were more robust with respect to the introduction of a systematic component of the unexplained variance.

  5. Estimating flow duration curve in the humid tropics: a disaggregation approach in Hawaiian catchments

    NASA Astrophysics Data System (ADS)

    Chris, Leong; Yoshiyuki, Yokoo

    2017-04-01

    Islands, many of which are in developing countries, have little hydrological research data, which contributes to stress on hydrological resources through unmonitored human influence and neglect. As hydrological studies of islands are relatively young, there is a need to understand these stresses and influences through building-block research specifically targeting islands. The flow duration curve (FDC) is a simple, start-up hydrological tool that can be used in initial studies of islands. This study disaggregates the FDC into three sections (top, middle and bottom), and in each section runoff is estimated with simple hydrological models. The study is based on the Hawaiian Islands, with the aim of estimating runoff in ungauged island catchments in the humid tropics. Runoff in the top and middle sections is estimated using the Curve Number (CN) method and the Regime Curve (RC), respectively. The bottom section is presented as a separate study from this one. The results showed that for the majority of the catchments the RC can be used for estimations in the middle section of the FDC. They also showed that, in order for the CN method to make stable estimations, it had to be calibrated. This study identifies simple methodologies that can be useful for making runoff estimations in ungauged island catchments.
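
    For reference, the Curve Number method mentioned above rests on the standard SCS runoff relation Q = (P - 0.2S)^2 / (P + 0.8S) with S = 25400/CN - 254 (depths in mm). A minimal sketch follows, with a hypothetical calibrated CN value (not one from this study).

```python
def scs_runoff_mm(p_mm, cn):
    """SCS Curve Number direct-runoff depth (mm) for a storm of depth p_mm.

    cn is the (possibly calibrated) curve number; 0.2*S is the usual initial abstraction.
    """
    s = 25400.0 / cn - 254.0          # potential maximum retention, mm
    ia = 0.2 * s                      # initial abstraction, mm
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Hypothetical storm depths (mm) and a hypothetical calibrated CN for an island catchment.
for p in (20.0, 50.0, 120.0):
    print(p, round(scs_runoff_mm(p, cn=75.0), 1))
```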

  6. Simultaneous determination of loxoprofen and its diastereomeric alcohol metabolites in human plasma and urine by a simple HPLC-UV detection method.

    PubMed

    Choo, K S; Kim, I W; Jung, J K; Suh, Y G; Chung, S J; Lee, M H; Shim, C K

    2001-06-01

    A simple, reliable HPLC-UV detection method was developed for the simultaneous determination of loxoprofen and its metabolites (i.e. trans- and cis-alcohol metabolites) in human plasma and urine samples. The method involves the addition of a ketoprofen (internal standard) solution in methanol, zinc sulfate solution and acetonitrile to plasma and urine samples, followed by centrifugation. An aliquot of the supernatant was evaporated to dryness, and the residue reconstituted in the mobile phase (acetonitrile:water = 35:65 v/v, pH 3.0). An aliquot of the solution was then directly injected into the HPLC system. Separations were performed on an octadecylsilica column (250x4.5 mm, 5 microm) with a guard column (3.2x1.5 cm, 7 microm) at ambient temperature. Loxoprofen and the metabolites in the eluent were monitored at 220 nm (a.u.f.s. 0.005). Coefficients of variation (CV%) and recoveries for loxoprofen and its metabolites were below 10% and over 96%, respectively, in the 200-15,000 ng ml(-1) range for plasma and the 500-50,000 ng ml(-1) range for urine. Calibration curves for all the compounds in the plasma and urine were linear over the above-mentioned concentration ranges with a common correlation coefficient of 0.999. The detection limit of the present method was 100 ng for all the compounds. These results indicate that the present method is very simple and readily applicable to routine bioavailability studies of these compounds with an acceptable sensitivity.

  7. Quantitative analysis of Sudan dye adulteration in paprika powder using FTIR spectroscopy.

    PubMed

    Lohumi, Santosh; Joshi, Ritu; Kandpal, Lalit Mohan; Lee, Hoonsoo; Kim, Moon S; Cho, Hyunjeong; Mo, Changyeun; Seo, Young-Wook; Rahman, Anisur; Cho, Byoung-Kwan

    2017-05-01

    As adulteration of foodstuffs with Sudan dye, especially paprika- and chilli-containing products, has been reported with some frequency, this issue has become one focal point for addressing food safety. FTIR spectroscopy has been used extensively as an analytical method for quality control and safety determination for food products. Thus, the use of FTIR spectroscopy for rapid determination of Sudan dye in paprika powder was investigated in this study. A net analyte signal (NAS)-based methodology, named HLA/GO (hybrid linear analysis in the literature), was applied to FTIR spectral data to predict Sudan dye concentration. The calibration and validation sets were designed to evaluate the performance of the multivariate method. The obtained results had a high determination coefficient (R2) of 0.98 and low root mean square error (RMSE) of 0.026% for the calibration set, and an R2 of 0.97 and RMSE of 0.05% for the validation set. The model was further validated using a second validation set and through the figures of merit, such as sensitivity, selectivity, and limits of detection and quantification. The proposed technique of FTIR combined with HLA/GO is rapid, simple and low cost, making this approach advantageous when compared with the main alternative methods based on liquid chromatography (LC) techniques.

  8. Simultaneous determination of hydroquinone, catechol and resorcinol by voltammetry using graphene screen-printed electrodes and partial least squares calibration.

    PubMed

    Aragó, Miriam; Ariño, Cristina; Dago, Àngela; Díaz-Cruz, José Manuel; Esteban, Miquel

    2016-11-01

    Catechol (CC), resorcinol (RC) and hydroquinone (HQ) are dihydroxybenzene isomers that usually coexist in different samples and can be determined using voltammetric techniques, taking advantage of their fast response, high sensitivity and selectivity, cheap instrumentation, and simple and timesaving operation modes. However, a strong overlapping of the CC and HQ signals is observed, hindering their accurate analysis. In the present work, the combination of differential pulse voltammetry with graphene screen-printed electrodes (allowing detection limits of 2.7, 1.7 and 2.4 µmol L(-1) for HQ, CC and RC, respectively) and data analysis by partial least squares calibration (giving root mean square errors of prediction, RMSEP values, of 2.6, 4.1 and 2.3 for HQ, CC and RC, respectively) has been proposed as a powerful tool for the quantification of mixtures of these dihydroxybenzene isomers. The commercial availability of the screen-printed devices and the low cost and simplicity of the analysis suggest that the proposed method can be a valuable alternative to chromatographic and electrophoretic methods for the considered species. The method has been applied to the analysis of these isomers in spiked tap water. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Calibration and combination of monthly near-surface temperature and precipitation predictions over Europe

    NASA Astrophysics Data System (ADS)

    Rodrigues, Luis R. L.; Doblas-Reyes, Francisco J.; Coelho, Caio A. S.

    2018-02-01

    A Bayesian method known as Forecast Assimilation (FA) was used to calibrate and combine monthly near-surface temperature and precipitation outputs from seasonal dynamical forecast systems. The simple multimodel (SMM), a method that combines predictions with equal weights, was used as a benchmark. This research focuses on Europe and adjacent regions for predictions initialized in May and November, covering the boreal summer and winter months. The forecast quality of the FA and SMM, as well as of the single seasonal dynamical forecast systems, was assessed using deterministic and probabilistic measures. A non-parametric bootstrap method was used to account for the sampling uncertainty of the forecast quality measures. We show that the FA performs as well as or better than the SMM in regions where the dynamical forecast systems are able to represent the main modes of climate covariability. An illustration is offered for near-surface temperature over the North Atlantic, the Mediterranean Sea and the Middle East in the summer months, which is associated with the well-predicted first mode of climate covariability. However, the main modes of climate covariability are not well represented in most of the situations discussed in this study, as the seasonal dynamical forecast systems have limited skill when predicting the European climate. In these situations, the SMM more often performs better.

  10. Full Flight Envelope Direct Thrust Measurement on a Supersonic Aircraft

    NASA Technical Reports Server (NTRS)

    Conners, Timothy R.; Sims, Robert L.

    1998-01-01

    Direct thrust measurement using strain gages offers advantages over analytically-based thrust calculation methods. For flight test applications, the direct measurement method typically uses a simpler sensor arrangement and minimal data processing compared to analytical techniques, which normally require costly engine modeling and multisensor arrangements throughout the engine. Conversely, direct thrust measurement has historically produced less than desirable accuracy because of difficulty in mounting and calibrating the strain gages and the inability to account for secondary forces that influence the thrust reading at the engine mounts. Consequently, the strain-gage technique has normally been used for simple engine arrangements and primarily in the subsonic speed range. This paper presents the results of a strain gage-based direct thrust-measurement technique developed by the NASA Dryden Flight Research Center and successfully applied to the full flight envelope of an F-15 aircraft powered by two F100-PW-229 turbofan engines. Measurements have been obtained at quasi-steady-state operating conditions at maximum non-augmented and maximum augmented power throughout the altitude range of the vehicle and to a maximum speed of Mach 2.0 and are compared against results from two analytically-based thrust calculation methods. The strain-gage installation and calibration processes are also described.

  11. Simple and rapid analytical method for detection of amino acids in blood using blood spot on filter paper, fast-GC/MS and isotope dilution technique.

    PubMed

    Kawana, Shuichi; Nakagawa, Katsuhiro; Hasegawa, Yuki; Yamaguchi, Seiji

    2010-11-15

    A simple and rapid method for quantitative analysis of amino acids, including valine (Val), leucine (Leu), isoleucine (Ile), methionine (Met) and phenylalanine (Phe), in whole blood has been developed using GC/MS. In this method, whole blood was collected using a filter paper technique, and a 1/8 in. blood spot punch was used for sample preparation. Amino acids were extracted from the sample, and the extracts were purified using cation-exchange resins. The isotope dilution method using ²H₈-Val, ²H₃-Leu, ²H₃-Met and ²H₅-Phe as internal standards was applied. Following propyl chloroformate derivatization, the derivatives were analyzed using fast-GC/MS. The extraction recoveries using these techniques ranged from 69.8% to 87.9%, and analysis time for each sample was approximately 26 min. Calibration curves at concentrations from 0.0 to 1666.7 μmol/l for Val, Leu, Ile and Phe and from 0.0 to 333.3 μmol/l for Met showed good linearity with regression coefficients=1. The method detection limits for Val, Leu, Ile, Met and Phe were 24.2, 16.7, 8.7, 1.5 and 12.9 μmol/l, respectively. This method was applied to blood spot samples obtained from patients with phenylketonuria (PKU), maple syrup urine disease (MSUD), hypermethionine and neonatal intrahepatic cholestasis caused by citrin deficiency (NICCD), and the analysis results showed that the concentrations of amino acids that characterize these diseases were increased. These results indicate that this method provides a simple and rapid procedure for precise determination of amino acids in whole blood. Copyright © 2010 Elsevier B.V. All rights reserved.

  12. Development and validation of new spectrophotometric ratio H-point standard addition method and application to gastrointestinal acting drugs mixtures.

    PubMed

    Yehia, Ali M

    2013-05-15

    A new, simple, specific, accurate and precise spectrophotometric technique utilizing ratio spectra is developed for the simultaneous determination of two different binary mixtures. The developed ratio H-point standard addition method (RHPSAM) successfully resolved the spectral overlap in the itopride hydrochloride (ITO) and pantoprazole sodium (PAN) binary mixture, as well as in the mosapride citrate (MOS) and PAN binary mixture. The theoretical background and advantages of the newly proposed method are presented. The calibration curves are linear over the concentration ranges of 5-60 μg/mL, 5-40 μg/mL and 4-24 μg/mL for ITO, MOS and PAN, respectively. Specificity of the method was investigated and relative standard deviations were less than 1.5. The accuracy, precision and repeatability were also investigated for the proposed method according to ICH guidelines. Copyright © 2013 Elsevier B.V. All rights reserved.

  13. Development and validation of new spectrophotometric ratio H-point standard addition method and application to gastrointestinal acting drugs mixtures

    NASA Astrophysics Data System (ADS)

    Yehia, Ali M.

    2013-05-01

    A new, simple, specific, accurate and precise spectrophotometric technique utilizing ratio spectra is developed for the simultaneous determination of two different binary mixtures. The developed ratio H-point standard addition method (RHPSAM) successfully resolved the spectral overlap in the itopride hydrochloride (ITO) and pantoprazole sodium (PAN) binary mixture, as well as in the mosapride citrate (MOS) and PAN binary mixture. The theoretical background and advantages of the newly proposed method are presented. The calibration curves are linear over the concentration ranges of 5-60 μg/mL, 5-40 μg/mL and 4-24 μg/mL for ITO, MOS and PAN, respectively. Specificity of the method was investigated and relative standard deviations were less than 1.5. The accuracy, precision and repeatability were also investigated for the proposed method according to ICH guidelines.

  14. Rapid determination of free fatty acid content in waste deodorizer distillates using single bounce-attenuated total reflectance-FTIR spectroscopy.

    PubMed

    Naz, Saba; Sherazi, Sayed Tufail Hussain; Talpur, Farah N; Mahesar, Sarfaraz A; Kara, Huseyin

    2012-01-01

    A simple, rapid, economical, and environmentally friendly analytical method was developed for the quantitative assessment of free fatty acids (FFAs) present in deodorizer distillates and crude oils by single bounce-attenuated total reflectance-FTIR spectroscopy. Partial least squares was applied for the calibration model based on the peak region of the carbonyl group (C=O) from 1726 to 1664 cm(-1) associated with the FFAs. The proposed method totally avoided the use of organic solvents or costly standards and could be applied easily in the oil processing industry. The accuracy of the method was checked by comparison to a conventional standard American Oil Chemists' Society (AOCS) titrimetric procedure, which provided good correlation (R = 0.99980), with an SD of +/- 0.05%. Therefore, the proposed method could be used as an alternate to the AOCS titrimetric method for the quantitative determination of FFAs especially in deodorizer distillates.

  15. Validated spectrophotometric methods for simultaneous determination of Omeprazole, Tinidazole and Doxycycline in their ternary mixture

    NASA Astrophysics Data System (ADS)

    Lotfy, Hayam M.; Hegazy, Maha A.; Mowaka, Shereen; Mohamed, Ekram Hany

    2016-01-01

    A comparative study of smart spectrophotometric techniques for the simultaneous determination of Omeprazole (OMP), Tinidazole (TIN) and Doxycycline (DOX) without prior separation steps is presented. These techniques consist of several consecutive steps utilizing zero-order, ratio, or derivative spectra. The proposed techniques comprise nine simple methods, namely direct spectrophotometry, dual wavelength, first derivative-zero crossing, amplitude factor, spectrum subtraction, ratio subtraction, derivative ratio-zero crossing, constant center, and the successive derivative ratio method. The calibration graphs are linear over the concentration ranges of 1-20 μg/mL, 5-40 μg/mL and 2-30 μg/mL for OMP, TIN and DOX, respectively. These methods were tested by analyzing synthetic mixtures of the above drugs and were successfully applied to a commercial pharmaceutical preparation. The methods were validated according to the ICH guidelines; accuracy, precision, and repeatability were found to be within the acceptable limits.

  16. Geometric calibration of Colour and Stereo Surface Imaging System of ESA's Trace Gas Orbiter

    NASA Astrophysics Data System (ADS)

    Tulyakov, Stepan; Ivanov, Anton; Thomas, Nicolas; Roloff, Victoria; Pommerol, Antoine; Cremonese, Gabriele; Weigel, Thomas; Fleuret, Francois

    2018-01-01

    There are many geometric calibration methods for "standard" cameras. These methods, however, cannot be used for the calibration of telescopes with large focal lengths and complex off-axis optics. Moreover, specialized calibration methods for such telescopes are scarce in the literature. We describe the calibration method that we developed for the Colour and Stereo Surface Imaging System (CaSSIS) telescope, on board the ExoMars Trace Gas Orbiter (TGO). Although our method is described in the context of CaSSIS, with camera-specific experiments, it is general and can be applied to other telescopes. We further encourage re-use of the proposed method by making our calibration code and data available online.

  17. Demonstrating Principles of Spectrophotometry by Constructing a Simple, Low-Cost, Functional Spectrophotometer Utilizing the Light Sensor on a Smartphone

    ERIC Educational Resources Information Center

    Hosker, Bill S.

    2018-01-01

    A highly simplified variation on the do-it-yourself spectrophotometer using a smartphone's light sensor as a detector and an app to calculate and display absorbance values was constructed and tested. This simple version requires no need for electronic components or postmeasurement spectral analysis. Calibration graphs constructed from two…
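
    The working principle lends itself to a short sketch: absorbance is computed from the light-sensor readings via the Beer-Lambert relation A = -log10(I/I0) and then related to concentration through a linear calibration graph. The values below are hypothetical, and the snippet only illustrates the principle; it is not the published device or app.

```python
import numpy as np

# Hypothetical light-sensor readings: intensity through a blank (I0) and through
# calibration standards of known concentration (I).
i_blank = 950.0                                   # lux through the blank cuvette
i_standards = np.array([760., 610., 490., 393.])  # lux through the standards
conc_standards = np.array([0.1, 0.2, 0.3, 0.4])   # mmol/L

absorbance = -np.log10(i_standards / i_blank)     # Beer-Lambert absorbance
slope, intercept = np.polyfit(conc_standards, absorbance, 1)   # calibration graph

i_sample = 545.0                                  # reading for an unknown sample
a_sample = -np.log10(i_sample / i_blank)
print("estimated concentration:", (a_sample - intercept) / slope, "mmol/L")
```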

  18. Assessing the Impact of Retreat Mechanisms in a Simple Antarctic Ice Sheet Model Using Bayesian Calibration.

    PubMed

    Ruckert, Kelsey L; Shaffer, Gary; Pollard, David; Guan, Yawen; Wong, Tony E; Forest, Chris E; Keller, Klaus

    2017-01-01

    The response of the Antarctic ice sheet (AIS) to changing climate forcings is an important driver of sea-level changes. Anthropogenic climate change may drive a sizeable AIS tipping point response with subsequent increases in coastal flooding risks. Many studies analyzing flood risks use simple models to project the future responses of AIS and its sea-level contributions. These analyses have provided important new insights, but they are often silent on the effects of potentially important processes such as Marine Ice Sheet Instability (MISI) or Marine Ice Cliff Instability (MICI). These approximations can be well justified and result in more parsimonious and transparent model structures. This raises the question of how this approximation impacts hindcasts and projections. Here, we calibrate a previously published and relatively simple AIS model, which neglects the effects of MICI and regional characteristics, using a combination of observational constraints and a Bayesian inversion method. Specifically, we approximate the effects of missing MICI by comparing our results to those from expert assessments with more realistic models and quantify the bias during the last interglacial when MICI may have been triggered. Our results suggest that the model can approximate the process of MISI and reproduce the projected median melt from some previous expert assessments in the year 2100. Yet, our mean hindcast is roughly 3/4 of the observed data during the last interglacial period and our mean projection is roughly 1/6 and 1/10 of the mean from a model accounting for MICI in the year 2100. These results suggest that missing MICI and/or regional characteristics can lead to a low-bias during warming period AIS melting and hence a potential low-bias in projected sea levels and flood risks.
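
    As a toy illustration of Bayesian calibration of a simple model against observations (not the ice-sheet model, data, or priors used in this study), the sketch below infers one parameter of a linear toy model with a random-walk Metropolis sampler, yielding a posterior distribution rather than a single best-fit value.

```python
import numpy as np

# Synthetic "observations" generated from a linear toy model with known noise.
rng = np.random.default_rng(2)
forcing = np.linspace(0.0, 1.0, 20)
true_sens, sigma = 3.0, 0.2
obs = true_sens * forcing + rng.normal(0.0, sigma, forcing.size)

def log_post(sens):
    """Log-posterior: uniform prior on (0, 10) plus a Gaussian likelihood (up to a constant)."""
    if not 0.0 < sens < 10.0:
        return -np.inf
    resid = obs - sens * forcing
    return -0.5 * np.sum((resid / sigma) ** 2)

samples, current = [], 5.0
for _ in range(5000):                        # random-walk Metropolis
    prop = current + rng.normal(0.0, 0.3)
    if np.log(rng.uniform()) < log_post(prop) - log_post(current):
        current = prop
    samples.append(current)

print("posterior mean sensitivity:", np.mean(samples[1000:]))   # discard burn-in
```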

  19. A microcontroller-based microwave free-space measurement system for permittivity determination of lossy liquid materials.

    PubMed

    Hasar, U C

    2009-05-01

    A microcontroller-based noncontact and nondestructive microwave free-space measurement system for real-time and dynamic determination of the complex permittivity of lossy liquid materials has been proposed. The system comprises two main sections: microwave and electronic. While the microwave section provides for measuring only the amplitudes of reflection coefficients, the electronic section processes these data and determines the complex permittivity using a general-purpose microcontroller. The proposed method eliminates elaborate liquid sample holder preparation and only requires microwave components to perform reflection measurements from one side of the holder. In addition, it explicitly determines the permittivity of lossy liquid samples from reflection measurements at different frequencies without any knowledge of the sample thickness. In order to reduce systematic errors in the system, we propose a simple calibration technique, which employs simple and readily available standards. The measurement system can be a good candidate for industrial applications.

  20. Robust, low-cost data loggers for stream temperature, flow intermittency, and relative conductivity monitoring

    USGS Publications Warehouse

    Chapin, Thomas; Todd, Andrew S.; Zeigler, Matthew P.

    2014-01-01

    Water temperature and streamflow intermittency are critical parameters influencing aquatic ecosystem health. Low-cost temperature loggers have made continuous water temperature monitoring relatively simple, but determining streamflow timing and intermittency using temperature data alone requires significant and subjective data interpretation. Electrical resistance (ER) sensors have recently been developed to overcome the major limitations of temperature-based methods for the assessment of streamflow intermittency. This technical note introduces the STIC (Stream Temperature, Intermittency, and Conductivity logger), a robust, low-cost, simple-to-build instrument that provides long-duration, high-resolution monitoring of both relative conductivity (RC) and temperature. Simultaneously collected temperature and RC data provide unambiguous water temperature and streamflow intermittency information that is crucial for monitoring aquatic ecosystem health and assessing regulatory compliance. With proper calibration, the STIC relative conductivity data can be used to monitor specific conductivity.

  1. Calibration of mass spectrometric peptide mass fingerprint data without specific external or internal calibrants

    PubMed Central

    Wolski, Witold E; Lalowski, Maciej; Jungblut, Peter; Reinert, Knut

    2005-01-01

    Background: Peptide Mass Fingerprinting (PMF) is a widely used mass spectrometry (MS) method of analysis of proteins and peptides. It relies on the comparison between experimentally determined and theoretical mass spectra. The PMF process requires calibration, usually performed with external or internal calibrants of known molecular masses. Results: We have introduced two novel MS calibration methods. The first method utilises the local similarity of peptide maps generated after separation of complex protein samples by two-dimensional gel electrophoresis. It computes a multiple peak-list alignment of the data set using a modified Minimum Spanning Tree (MST) algorithm. The second method exploits the idea that hundreds of MS samples are measured in parallel on one sample support. It improves the calibration coefficients by applying a two-dimensional Thin Plate Splines (TPS) smoothing algorithm. We studied the novel calibration methods utilising data generated by three different MALDI-TOF-MS instruments. We demonstrate that a PMF data set can be calibrated without resorting to external or relying on widely occurring internal calibrants. The methods developed here were implemented in R and are part of the BioConductor package mscalib. Conclusion: The MST calibration algorithm is well suited to calibrate MS spectra of protein samples resulting from two-dimensional gel electrophoretic separation. The TPS-based calibration algorithm might be used to correct systematic mass measurement errors observed for large MS sample supports. As compared to other methods, our combined MS spectra calibration strategy increases the peptide/protein identification rate by an additional 5-15%. PMID:16102175

  2. An investigation into force-moment calibration techniques applicable to a magnetic suspension and balance system. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Eskins, Jonathan

    1988-01-01

    The problem of determining the forces and moments acting on a wind tunnel model suspended in a Magnetic Suspension and Balance System is addressed. Two calibration methods were investigated for three types of model cores, i.e., Alnico, Samarium-Cobalt, and a superconducting solenoid. Both methods involve calibrating the currents in the electromagnetic array against known forces and moments. The first is a static calibration method using calibration weights and a system of pulleys. The other method, dynamic calibration, involves oscillating the model and using its inertia to provide calibration forces and moments. Static calibration data, found to produce the most reliable results, is presented for three degrees of freedom at 0, 15, and -10 deg angle of attack. Theoretical calculations are hampered by the inability to represent iron-cored electromagnets. Dynamic calibrations, despite being quicker and easier to perform, are not as accurate as static calibrations. Data for dynamic calibrations at 0 and 15 deg is compared with the relevant static data acquired. Distortion of oscillation traces is cited as a major source of error in dynamic calibrations.

  3. The calibration methods for Multi-Filter Rotating Shadowband Radiometer: a review

    NASA Astrophysics Data System (ADS)

    Chen, Maosi; Davis, John; Tang, Hongzhao; Ownby, Carolyn; Gao, Wei

    2013-09-01

    The continuous, over two-decade data record from the Multi-Filter Rotating Shadowband Radiometer (MFRSR) is ideal for climate research, which requires timely and accurate information on important atmospheric components such as gases, aerosols, and clouds. Except for parameters derived from MFRSR measurement ratios, which are not impacted by calibration error, most applications require accurate calibration factor(s), angular correction, and spectral response function(s) from calibration. Although a laboratory lamp (or reference) calibration can provide all the information needed to convert the instrument readings to actual radiation, in situ calibration methods are implemented routinely (daily) to fill the gaps between lamp calibrations. In this paper, the basic structure and the data collection and pretreatment of the MFRSR are described. The laboratory lamp calibration and its limitations are summarized. The cloud screening algorithms for MFRSR data are presented. The in situ calibration methods, namely the standard Langley method and its variants, the ratio-Langley method, the general method, Alexandrov's comprehensive method, and Chen's multi-channel method, are outlined. The reason that none of these methods fits all situations is that they assume that some properties, such as aerosol optical depth (AOD), total optical depth (TOD), precipitable water vapor (PWV), the effective size of aerosol particles, or the Angstrom coefficient, are invariant over time. These assumptions are not universally valid, and some of them rarely hold. In practice, daily calibration factors derived from these methods should be smoothed to restrain error.
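
    Of the in situ methods listed, the standard Langley method is the simplest to sketch: the direct-beam signal follows V = V0 * exp(-tau * m), so a straight-line fit of ln(V) against airmass m, extrapolated to m = 0, recovers the calibration factor V0. The data below are synthetic, not MFRSR measurements.

```python
import numpy as np

# Synthetic clear-sky morning: airmass values and simulated direct-beam signal with
# a hypothetical instrument constant V0 and total optical depth tau plus small noise.
m = np.linspace(2.0, 6.0, 25)
v0_true, tau = 1.37, 0.12
noise = 1.0 + 0.005 * np.random.default_rng(1).standard_normal(m.size)
v = v0_true * np.exp(-tau * m) * noise

# Langley regression: ln(V) = ln(V0) - tau * m, so the intercept at m = 0 gives V0.
slope, intercept = np.polyfit(m, np.log(v), 1)
print("estimated V0 =", np.exp(intercept), " estimated total optical depth =", -slope)
```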

  4. Tunable, Flexible, and Efficient Optimization of Control Pulses for Practical Qubits

    NASA Astrophysics Data System (ADS)

    Machnes, Shai; Assémat, Elie; Tannor, David; Wilhelm, Frank K.

    2018-04-01

    Quantum computation places very stringent demands on gate fidelities, and experimental implementations require both the controls and the resultant dynamics to conform to hardware-specific constraints. Superconducting qubits present the additional requirement that pulses must have simple parameterizations, so they can be further calibrated in the experiment to compensate for uncertainties in system parameters. Other quantum technologies, such as sensing, require extremely high fidelities. We present a novel, conceptually simple and easy-to-implement gradient-based optimal control technique named gradient optimization of analytic controls (GOAT), which satisfies all the above requirements, unlike previous approaches. To demonstrate GOAT's capabilities, with emphasis on flexibility and ease of subsequent calibration, we optimize fast coherence-limited pulses for two leading superconducting qubit architectures: flux-tunable transmons and fixed-frequency transmons with tunable couplers.

  5. A simple and green analytical method for determination of glyphosate in commercial formulations and water by diffuse reflectance spectroscopy

    NASA Astrophysics Data System (ADS)

    da Silva, Aline Santana; Fernandes, Flávio Cesar Bedatty; Tognolli, João Olímpio; Pezza, Leonardo; Pezza, Helena Redigolo

    2011-09-01

    This article describes a simple, inexpensive, and environmentally friendly method for the monitoring of glyphosate using diffuse reflectance spectroscopy. The proposed method is based on reflectance measurements of the colored compound produced from the spot test reaction between glyphosate and p-dimethylaminocinnamaldehyde (p-DAC) in acid medium, using a filter paper as solid support. Experimental designs were used to optimize the analytical conditions. All reflectance measurements were carried out at 495 nm. Under optimal conditions, the glyphosate calibration graphs obtained by plotting the optical density of the reflectance signal (AR) against the concentration were linear in the range 50-500 μg mL-1, with a correlation coefficient of 0.9987. The limit of detection (LOD) for glyphosate was 7.28 μg mL-1. The technique was successfully applied to the direct determination of glyphosate in commercial formulations, as well as in water samples (river water, pure water and mineral drinking water) after a previous clean-up or pre-concentration step. Recoveries were in the ranges 93.2-102.6% and 91.3-102.9% for the commercial formulations and water samples, respectively.

  6. A simple and green analytical method for determination of glyphosate in commercial formulations and water by diffuse reflectance spectroscopy.

    PubMed

    da Silva, Aline Santana; Fernandes, Flávio Cesar Bedatty; Tognolli, João Olímpio; Pezza, Leonardo; Pezza, Helena Redigolo

    2011-09-01

    This article describes a simple, inexpensive, and environmentally friendly method for the monitoring of glyphosate using diffuse reflectance spectroscopy. The proposed method is based on reflectance measurements of the colored compound produced from the spot test reaction between glyphosate and p-dimethylaminocinnamaldehyde (p-DAC) in acid medium, using a filter paper as solid support. Experimental designs were used to optimize the analytical conditions. All reflectance measurements were carried out at 495 nm. Under optimal conditions, the glyphosate calibration graphs obtained by plotting the optical density of the reflectance signal (AR) against the concentration were linear in the range 50-500 μg mL(-1), with a correlation coefficient of 0.9987. The limit of detection (LOD) for glyphosate was 7.28 μg mL(-1). The technique was successfully applied to the direct determination of glyphosate in commercial formulations, as well as in water samples (river water, pure water and mineral drinking water) after a previous clean-up or pre-concentration step. Recoveries were in the ranges 93.2-102.6% and 91.3-102.9% for the commercial formulations and water samples, respectively. Copyright © 2011 Elsevier B.V. All rights reserved.
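
    For a univariate method like this, the calibration curve and detection limit follow directly from an ordinary least-squares fit. A hedged sketch is shown below; the numbers are illustrative, and the LOD is computed as 3.3*s/slope, which is one common convention and not necessarily the one used by the authors.

    ```python
    import numpy as np

    # Hypothetical calibration standards (ug/mL) and instrument responses.
    conc = np.array([50, 100, 200, 300, 400, 500], dtype=float)
    signal = np.array([0.11, 0.21, 0.40, 0.62, 0.81, 1.02])

    slope, intercept = np.polyfit(conc, signal, 1)
    predicted = slope * conc + intercept

    # Correlation coefficient of the fit and residual standard deviation.
    r = np.corrcoef(conc, signal)[0, 1]
    s_res = np.sqrt(np.sum((signal - predicted) ** 2) / (len(conc) - 2))

    # One common LOD convention: 3.3 times the residual SD divided by the slope.
    lod = 3.3 * s_res / slope
    ```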

  7. Determination of Organothiophosphate Insecticides in Environmental Water Samples by a Very Simple and Sensitive Spectrofluorimetric Method.

    PubMed

    Bavili Tabrizi, Ahad; Abdollahi, Ali

    2015-10-01

    A simple, rapid and sensitive spectrofluorimetric method was developed for the determination of di-syston, ethion and phorate in environmental water samples. The procedure is based on the oxidation of these pesticides with cerium (IV) to produce cerium (III), whose fluorescence was monitored at 368 ± 3 nm after excitation at 257 ± 3 nm. The variables affecting the oxidation of each pesticide were studied and optimized. Under the experimental conditions used, the calibration graphs were linear over the ranges 0.2-15, 0.1-13 and 0.1-13 ng mL(-1) for di-syston, ethion and phorate, respectively. The limits of detection and quantification were in the ranges 0.034-0.096 and 0.112-0.316 ng mL(-1), respectively. Intra- and inter-day assay precisions, expressed as the relative standard deviation (RSD), were lower than 5.2% and 6.7%, respectively. Good recoveries in the range 86%-108% were obtained for spiked water samples. The proposed method was applied to the determination of the studied pesticides in environmental water samples.

  8. Determination of flavonoids from Orthosiphon stamineus in plasma using a simple HPLC method with ultraviolet detection.

    PubMed

    Loon, Yit Hong; Wong, Jia Woei; Yap, Siew Ping; Yuen, Kah Hay

    2005-02-25

    A simple liquid chromatographic method was developed for the simultaneous determination of flavonoids from Orthosiphon stamineus Benth, namely sinensitin, eupatorin and 3'-hydroxy-5,6,7,4'-tetramethoxyflavone, in plasma. Prior to analysis, the flavonoids and the internal standard (naproxen) were extracted from plasma samples using a 1:1 mixture of ethyl acetate and chloroform. The detection and quantification limits for the three flavonoids were similar, being 3 and 5 ng/ml, respectively. The within-day and between-day accuracy values, expressed as percentages of the true values, for the three flavonoids were between 95 and 107%, while the corresponding precision values, expressed as coefficients of variation, were less than 14%. In addition, the mean recovery values of the extraction procedure for all the flavonoids were between 92 and 114%. The calibration curves were linear over a concentration range of 5-4000 ng/ml. The present method was applied to analyse plasma samples obtained from a pilot study in rats, in which the mean absolute oral bioavailability values for sinensitin, eupatorin and 3'-hydroxy-5,6,7,4'-tetramethoxyflavone were 9.4, 1.0 and 1.5%, respectively.

  9. Development and validation of a simple GC-MS method for the simultaneous determination of 11 anticholinesterase pesticides in blood--clinical and forensic toxicology applications.

    PubMed

    Papoutsis, Ioannis; Mendonis, Marcela; Nikolaou, Panagiota; Athanaselis, Sotirios; Pistos, Constantinos; Maravelias, Constantinos; Spiliopoulou, Chara

    2012-05-01

    Anticholinesterase pesticides are widely used, and as a result they are involved in numerous acute and even fatal poisonings. The aim of this study was the development, optimization, and validation of a simple, rapid, specific, and sensitive gas chromatography-mass spectrometry method for the determination of 11 anticholinesterase pesticides (aldicarb, azinphos methyl, carbofuran, chlorpyrifos, dialifos, diazinon, malathion, methamidophos, methidathion, methomyl, and terbufos) in blood. Only 500 μL of blood was used, and the recoveries after liquid-liquid extraction (toluene/chloroform, 4:1, v/v) were more than 65.6%. The calibration curves were linear (R(2) ≥ 0.996). The limits of detection and quantification were found to be in the ranges 1.00-10.0 and 3.00-30.0 μg/L, respectively. Accuracy, expressed as the %E(r), was found to be between -11.0 and 7.8%. Precision, expressed as the percent relative standard deviation, was found to be <9.4%. The developed method can be applied for the investigation of both forensic and clinical cases of accidental or suicidal poisoning with these pesticides. © 2011 American Academy of Forensic Sciences.

  10. Optimized star sensors laboratory calibration method using a regularization neural network.

    PubMed

    Zhang, Chengfen; Niu, Yanxiong; Zhang, Hao; Lu, Jiazhen

    2018-02-10

    High-precision ground calibration is essential to ensure the performance of star sensors. However, the complex distortion and multi-error coupling have brought great difficulties to traditional calibration methods, especially for large field of view (FOV) star sensors. Although increasing the complexity of models is an effective way to improve the calibration accuracy, it significantly increases the demand for calibration data. In order to achieve high-precision calibration of star sensors with large FOV, a novel laboratory calibration method based on a regularization neural network is proposed. A multi-layer structure neural network is designed to represent the mapping between the star vector and the corresponding star point coordinate directly. To ensure the generalization performance of the network, regularization strategies are incorporated into the net structure and the training algorithm. Simulation and experiment results demonstrate that the proposed method can achieve high precision with less calibration data and without any other a priori information. Compared with traditional methods, the calibration error of the star sensor decreased by about 30%. The proposed method can satisfy the precision requirement for large FOV star sensors.
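
    As a rough illustration of the idea (not the authors' network), a small multilayer perceptron with L2 regularization can be fit to map distorted focal-plane coordinates back to the undistorted ones; in the scikit-learn sketch below, the `alpha` parameter plays the role of the regularization strength, and the data, distortion model, and layer sizes are all hypothetical.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    # Hypothetical calibration data: distorted focal-plane coordinates (inputs)
    # and the corresponding undistorted coordinates (targets).
    xy = rng.uniform(-1.0, 1.0, size=(500, 2))
    distorted = xy + 0.05 * xy * np.sum(xy**2, axis=1, keepdims=True)  # toy radial distortion
    targets = xy

    # Regularized multi-layer network; alpha is the L2 penalty that limits
    # overfitting when calibration data are scarce.
    net = MLPRegressor(hidden_layer_sizes=(32, 32), alpha=1e-3,
                       max_iter=5000, random_state=0)
    net.fit(distorted, targets)

    corrected = net.predict(distorted[:5])   # distortion-corrected coordinates
    ```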

  11. Features calibration of the dynamic force transducers

    NASA Astrophysics Data System (ADS)

    Prilepko, M. Yu., D. Sc.; Lysenko, V. G.

    2018-04-01

    The article discusses calibration methods for dynamic force measuring instruments. The work is motivated by the need to determine the metrological characteristics of dynamic force transducers correctly, taking into account their intended application. Its aim is to justify the choice of a calibration method that determines these metrological characteristics under simulated operating conditions, so that suitability for the intended use can be established. The following tasks are solved: the mathematical model and the main measurement equation for calibrating dynamic force transducers by load weight are formulated, and the main components of the calibration uncertainty budget are defined. A new method for calibrating dynamic force transducers is proposed that uses a reference “force-deformation” converter based on a calibrated elastic element whose deformation is measured with a laser interferometer. The mathematical model and the main measurement equation of the proposed method are constructed. It is shown that a calibration method based on laser-interferometric measurement of the deformation of the calibrated elastic element eliminates, or considerably reduces, the uncertainty budget components inherent in the load-weight method.

  12. Radiometer calibration methods and resulting irradiance differences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, Aron; Sengupta, Manajit; Andreas, Afshin

    Accurate solar radiation measured by radiometers depends on instrument performance specifications, installation method, calibration procedure, measurement conditions, maintenance practices, location, and environmental conditions. This study addresses the effect of different calibration methodologies and resulting differences provided by radiometric calibration service providers such as the National Renewable Energy Laboratory (NREL) and manufacturers of radiometers. Some of these methods calibrate radiometers indoors and some outdoors. To establish or understand the differences in calibration methodologies, we processed and analyzed field-measured data from radiometers deployed for 10 months at NREL's Solar Radiation Research Laboratory. These different methods of calibration resulted in a difference of ±1% to ±2% in solar irradiance measurements. Analyzing these differences will ultimately assist in determining the uncertainties of the field radiometer data and will help develop a consensus on a standard for calibration. Further advancing procedures for precisely calibrating radiometers to world reference standards that reduce measurement uncertainties will help the accurate prediction of the output of planned solar conversion projects and improve the bankability of financing solar projects.

  13. Detection of Posaconazole by Surface-Assisted Laser Desorption/Ionization Mass Spectrometry with Dispersive Liquid-Liquid Microextraction

    NASA Astrophysics Data System (ADS)

    Lin, Sheng-Yu; Chen, Pin-Shiuan; Chang, Sarah Y.

    2015-03-01

    A simple, rapid, and sensitive method for the detection of posaconazole using dispersive liquid-liquid microextraction (DLLME) coupled to surface-assisted laser desorption/ionization mass spectrometric detection (SALDI/MS) was developed. After the DLLME, posaconazole was detected using SALDI/MS with colloidal gold and α-cyano-4-hydroxycinnamic acid (CHCA) as the co-matrix. Under optimal extraction and detection conditions, the calibration curve, which ranged from 1.0 to 100.0 nM for posaconazole, was observed to be linear. The limit of detection (LOD) at a signal-to-noise ratio of 3 was 0.3 nM for posaconazole. This novel method was successfully applied to the determination of posaconazole in human urine samples.

  14. Quasi solution of radiation transport equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pogosbekyan, L.R.; Lysov, D.A.

    There is uncertainty with experimental data as well as with the input data of theoretical calculations. The neutron distribution from the variational principle, which takes into account both theoretical and experimental data, is obtained to increase the accuracy and speed of neutronic calculations. The neutron imbalance in mesh cells and the discrepancy between experimentally measured and calculated functionals of the neutron distribution are simultaneously minimized. A fast-working and simple-to-program iteration method is developed to minimize the objective functional. The method can be used in the core monitoring and control system for (a) power distribution calculations, (b) in- and ex-core detector calibration, (c) macro-cross section or isotope distribution correction by experimental data, and (d) core and detector diagnostics.

  15. Application of the desulfurization of phenothiazines for a sensitive detection method by high-performance liquid chromatography.

    PubMed

    Shimada, K; Mino, T; Nakajima, M; Wakabayashi, H; Yamato, S

    1994-11-04

    A simple and sensitive high-performance liquid chromatographic (HPLC) method for the determination of phenothiazine (PHE) is described. PHE is converted to diphenylamine (DIP) by desulfurization with a Raney nickel catalyst. DIP is highly sensitive to electrochemical detection. The calibration graph for PHE quantification after desulfurization was linear between 0.1 and 2.0 ng per injection. The detection limit (signal-to-noise ratio = 3) of PHE after desulfurization was 10 pg, a twentyfold improvement in sensitivity over direct detection of the parent compound. The proposed desulfurization technique was also applied to other PHE-related compounds. The structural confirmation of the desulfurized product of PHE was carried out by LC-MS using atmospheric pressure chemical ionization.

  16. [Determination of protopine in Corydalis racemosa by HPLC].

    PubMed

    Jiang, Xiazhi; Ye, Jinxia; Zeng, Jianwei; Zou, Xiuhong; Wu, Jinzhong

    2010-09-01

    To develop an HPLC method for determining the content of protopine in Corydalis racemosa, analysis was performed on a Gemini C18 column (4.6 mm x 250 mm, 5 microm) eluted with acetonitrile-water (20:80) containing 0.8% triethylamine and 3% acetic acid as the mobile phase. The flow rate was 1.0 mL x min(-1), and the detection wavelength was 289 nm. The average content of protopine in the herb of C. racemosa was 0.905%. The calibration curve of protopine was linear between 0.124 and 1.36 microg (r = 0.9999). The average recovery was 98.49% with an RSD of 1.9%. This method is simple and reproducible and can be used to determine the content of protopine in C. racemosa.

  17. Identification of important image features for pork and turkey ham classification using colour and wavelet texture features and genetic selection.

    PubMed

    Jackman, Patrick; Sun, Da-Wen; Allen, Paul; Valous, Nektarios A; Mendoza, Fernando; Ward, Paddy

    2010-04-01

    A method to discriminate between various grades of pork and turkey ham was developed using colour and wavelet texture features. Image analysis methods originally developed for predicting the palatability of beef were applied to rapidly identify the ham grade. With high quality digital images of 50-94 slices per ham it was possible to identify the greyscale that best expressed the differences between the various ham grades. The best 10 discriminating image features were then found with a genetic algorithm. Using the best 10 image features, simple linear discriminant analysis models produced 100% correct classifications for both pork and turkey on both calibration and validation sets. 2009 Elsevier Ltd. All rights reserved.

  18. A Self-Adaptive Model-Based Wi-Fi Indoor Localization Method.

    PubMed

    Tuta, Jure; Juric, Matjaz B

    2016-12-06

    This paper presents a novel method for indoor localization, developed with the main aim of making it useful for real-world deployments. Many indoor localization methods exist, yet they have several disadvantages in real-world deployments-some are static, which is not suitable for long-term usage; some require costly human recalibration procedures; and others require special hardware such as Wi-Fi anchors and transponders. Our method is self-calibrating and self-adaptive thus maintenance free and based on Wi-Fi only. We have employed two well-known propagation models-free space path loss and ITU models-which we have extended with additional parameters for better propagation simulation. Our self-calibrating procedure utilizes one propagation model to infer parameters of the space and the other to simulate the propagation of the signal without requiring any additional hardware beside Wi-Fi access points, which is suitable for real-world usage. Our method is also one of the few model-based Wi-Fi only self-adaptive approaches that do not require the mobile terminal to be in the access-point mode. The only input requirements of the method are Wi-Fi access point positions, and positions and properties of the walls. Our method has been evaluated in single- and multi-room environments, with measured mean error of 2-3 and 3-4 m, respectively, which is similar to existing methods. The evaluation has proven that usable localization accuracy can be achieved in real-world environments solely by the proposed Wi-Fi method that relies on simple hardware and software requirements.

  19. A Self-Adaptive Model-Based Wi-Fi Indoor Localization Method

    PubMed Central

    Tuta, Jure; Juric, Matjaz B.

    2016-01-01

    This paper presents a novel method for indoor localization, developed with the main aim of making it useful for real-world deployments. Many indoor localization methods exist, yet they have several disadvantages in real-world deployments—some are static, which is not suitable for long-term usage; some require costly human recalibration procedures; and others require special hardware such as Wi-Fi anchors and transponders. Our method is self-calibrating and self-adaptive thus maintenance free and based on Wi-Fi only. We have employed two well-known propagation models—free space path loss and ITU models—which we have extended with additional parameters for better propagation simulation. Our self-calibrating procedure utilizes one propagation model to infer parameters of the space and the other to simulate the propagation of the signal without requiring any additional hardware beside Wi-Fi access points, which is suitable for real-world usage. Our method is also one of the few model-based Wi-Fi only self-adaptive approaches that do not require the mobile terminal to be in the access-point mode. The only input requirements of the method are Wi-Fi access point positions, and positions and properties of the walls. Our method has been evaluated in single- and multi-room environments, with measured mean error of 2–3 and 3–4 m, respectively, which is similar to existing methods. The evaluation has proven that usable localization accuracy can be achieved in real-world environments solely by the proposed Wi-Fi method that relies on simple hardware and software requirements. PMID:27929453
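
    The free-space path loss model that the method above extends has a simple closed form. The sketch below uses it to predict received signal strength and to pick the grid position that best matches a set of measurements; the transmit power, access point positions, and measured RSSI values are illustrative, and none of the authors' extensions or the ITU indoor model are included.

    ```python
    import numpy as np

    def fspl_db(distance_m, freq_hz):
        """Free-space path loss in dB for a given distance and carrier frequency."""
        c = 3e8
        return (20 * np.log10(distance_m) + 20 * np.log10(freq_hz)
                + 20 * np.log10(4 * np.pi / c))

    def predicted_rssi(tx_power_dbm, ap_pos, point, freq_hz=2.4e9):
        """Predicted RSSI at `point` from an access point at `ap_pos` (free space only)."""
        d = np.linalg.norm(np.asarray(point) - np.asarray(ap_pos))
        return tx_power_dbm - fspl_db(max(d, 0.1), freq_hz)

    # Crude localization by grid search: pick the position whose predicted RSSIs best
    # match the measured ones (a stand-in for the paper's self-adaptive procedure).
    aps = [(0.0, 0.0), (10.0, 0.0), (0.0, 8.0)]       # access point positions, metres
    measured = [-52.0, -68.0, -63.0]                  # measured RSSI per AP, dBm
    grid = [(x, y) for x in np.linspace(0, 10, 51) for y in np.linspace(0, 8, 41)]
    best = min(grid, key=lambda p: sum((predicted_rssi(20.0, ap, p) - m) ** 2
                                       for ap, m in zip(aps, measured)))
    ```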

  20. A microwave imaging-based 3D localization algorithm for an in-body RF source as in wireless capsule endoscopes.

    PubMed

    Chandra, Rohit; Balasingham, Ilangko

    2015-01-01

    A microwave imaging-based technique for 3D localization of an in-body RF source is presented. Such a technique can be useful for localizing an RF source, as in wireless capsule endoscopes, for positioning of any abnormality in the gastrointestinal tract. Microwave imaging is used to determine the dielectric properties (relative permittivity and conductivity) of the tissues that are required for precise localization. A 2D microwave imaging algorithm is used for determination of the dielectric properties. A calibration method is developed to remove the error introduced by applying the 2D imaging algorithm to imaging data from a 3D body. The developed method is tested on a simple 3D heterogeneous phantom through finite-difference time-domain simulations. Additive white Gaussian noise at a signal-to-noise ratio of 30 dB is added to the simulated data to make them more realistic. The developed calibration method improves the imaging and the localization accuracy. Statistics on the localization accuracy are generated by randomly placing the RF source at various positions inside the small intestine of the phantom. The cumulative distribution function of the localization error is plotted. In 90% of the cases, the localization accuracy was found to be within 1.67 cm, demonstrating the capability of the developed method for 3D localization.

  1. Quantification of endocrine disruptors and pesticides in water by gas chromatography-tandem mass spectrometry. Method validation using weighted linear regression schemes.

    PubMed

    Mansilha, C; Melo, A; Rebelo, H; Ferreira, I M P L V O; Pinho, O; Domingues, V; Pinho, C; Gameiro, P

    2010-10-22

    A multi-residue methodology based on a solid phase extraction followed by gas chromatography-tandem mass spectrometry was developed for trace analysis of 32 compounds in water matrices, including estrogens and several pesticides from different chemical families, some of them with endocrine disrupting properties. Matrix standard calibration solutions were prepared by adding known amounts of the analytes to a residue-free sample to compensate matrix-induced chromatographic response enhancement observed for certain pesticides. Validation was done mainly according to the International Conference on Harmonisation recommendations, as well as some European and American validation guidelines with specifications for pesticides analysis and/or GC-MS methodology. As the assumption of homoscedasticity was not met for analytical data, weighted least squares linear regression procedure was applied as a simple and effective way to counteract the greater influence of the greater concentrations on the fitted regression line, improving accuracy at the lower end of the calibration curve. The method was considered validated for 31 compounds after consistent evaluation of the key analytical parameters: specificity, linearity, limit of detection and quantification, range, precision, accuracy, extraction efficiency, stability and robustness. Copyright © 2010 Elsevier B.V. All rights reserved.
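
    Weighted least squares of the kind described above takes only a few lines. The sketch below uses hypothetical calibration data and a 1/x^2 weighting, which is one common empirical scheme; the paper selects its own weighting from the data, which is not reproduced here.

    ```python
    import numpy as np

    # Hypothetical matrix-matched calibration data (ng/mL vs. peak-area ratio).
    x = np.array([0.5, 1, 5, 10, 50, 100], dtype=float)
    y = np.array([0.021, 0.043, 0.22, 0.41, 2.1, 4.3])

    # Empirical weights that down-weight the high-concentration points so the fit
    # is not dominated by them (heteroscedastic noise).
    w = 1.0 / x**2

    # np.polyfit expects weights proportional to 1/sigma, i.e. the square root
    # of the WLS weights.
    slope, intercept = np.polyfit(x, y, 1, w=np.sqrt(w))
    ```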

  2. ATR-FTIR membrane-based sensor for the simultaneous determination of surfactant and oil total indices in industrial degreasing baths.

    PubMed

    Lucena, Rafael; Cárdenas, Soledad; Gallego, Mercedes; Valcárcel, Miguel

    2006-03-01

    Monitoring the exhaustion of alkaline degreasing baths is one of the main aspects in metal mechanizing industrial process control. The global level of surfactant, and mainly grease, can be used as ageing indicators. In this paper, an attenuated total reflection-Fourier transform infrared (ATR-FTIR) membrane-based sensor is presented for the determination of these parameters. The system is based on a micro-liquid-liquid extraction of the analytes through a polymeric membrane from the aqueous to the organic solvent layer which is in close contact with the internal reflection element and continuously monitored. Samples are automatically processed using a simple, robust sequential injection analysis (SIA) configuration, on-line coupled to the instrument. The global signal obtained for both families of compounds are processed via a multivariate calibration technique (partial least squares, PLS). Excellent correlation was obtained for the values given by the proposed method compared to those of the gravimetric reference one with very low error values for both calibration and validation.

  3. Quantitative Modeling of Cerenkov Light Production Efficiency from Medical Radionuclides

    PubMed Central

    Beattie, Bradley J.; Thorek, Daniel L. J.; Schmidtlein, Charles R.; Pentlow, Keith S.; Humm, John L.; Hielscher, Andreas H.

    2012-01-01

    There has been recent and growing interest in applying Cerenkov radiation (CR) for biological applications. Knowledge of the production efficiency and other characteristics of the CR produced by various radionuclides would help in assessing the feasibility of proposed applications and guide the choice of radionuclides. To generate this information we developed models of CR production efficiency based on the Frank-Tamm equation and models of CR distribution based on Monte-Carlo simulations of photon and β particle transport. All models were validated against direct measurements using multiple radionuclides and then applied to a number of radionuclides commonly used in biomedical applications. We show that two radionuclides, Ac-225 and In-111, which have been reported to produce CR in water, do not in fact produce CR directly. We also propose a simple means of using this information to calibrate high sensitivity luminescence imaging systems and show evidence suggesting that this calibration may be more accurate than methods in routine current use. PMID:22363636
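
    The Frank-Tamm relation underlying the production-efficiency model gives the number of Cerenkov photons emitted per unit path length. The sketch below evaluates it for a single electron kinetic energy in water over a fixed wavelength band; the constant refractive index, band limits, and energy are assumptions, and the β-spectrum averaging and particle transport the authors performed are not included.

    ```python
    import numpy as np

    ALPHA = 1.0 / 137.036          # fine-structure constant
    M_E = 0.511                    # electron rest energy, MeV
    N_WATER = 1.33                 # refractive index of water (assumed constant)

    def cerenkov_photons_per_cm(kinetic_mev, lam1_nm=400.0, lam2_nm=700.0, n=N_WATER):
        """Frank-Tamm photon yield per cm of path between two wavelengths."""
        gamma = 1.0 + kinetic_mev / M_E
        beta = np.sqrt(1.0 - 1.0 / gamma**2)
        if beta * n <= 1.0:
            return 0.0                       # below the Cerenkov threshold
        # dN/dx = 2*pi*alpha * (1/lam1 - 1/lam2) * (1 - 1/(beta^2 n^2)), lengths in cm
        lam1, lam2 = lam1_nm * 1e-7, lam2_nm * 1e-7
        return 2 * np.pi * ALPHA * (1 / lam1 - 1 / lam2) * (1 - 1 / (beta**2 * n**2))

    yield_1mev = cerenkov_photons_per_cm(1.0)   # photons per cm in the visible band
    ```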

  4. Limits of detection and decision. Part 3

    NASA Astrophysics Data System (ADS)

    Voigtman, E.

    2008-02-01

    It has been shown that the MARLAP (Multi-Agency Radiological Laboratory Analytical Protocols) method for estimating the Currie detection limit, which is based on 'critical values of the non-centrality parameter of the non-central t distribution', is intrinsically biased, even if no calibration curve or regression is used. This completed the refutation of the method, begun in Part 2. With the field cleared of obstructions, the true theory underlying Currie's limits of decision, detection and quantification, as they apply in a simple linear chemical measurement system (CMS) having heteroscedastic, Gaussian measurement noise and using weighted least squares (WLS) processing, was then derived. Extensive Monte Carlo simulations were performed, on 900 million independent calibration curves, for linear, "hockey stick" and quadratic noise precision models (NPMs). With errorless NPM parameters, all the simulation results were found to be in excellent agreement with the derived theoretical expressions. Even with as much as 30% noise on all of the relevant NPM parameters, the worst absolute errors in the rates of false positives and false negatives were only 0.3%.

  5. CrowdWater - Can people observe what models need?

    NASA Astrophysics Data System (ADS)

    van Meerveld, I. H. J.; Seibert, J.; Vis, M.; Etter, S.; Strobl, B.

    2017-12-01

    CrowdWater (www.crowdwater.ch) is a citizen science project that explores the usefulness of crowd-sourced data for hydrological model calibration and prediction. Hydrological models are usually calibrated based on observed streamflow data but it is likely easier for people to estimate relative stream water levels, such as the water level above or below a rock, than streamflow. Relative stream water levels may, therefore, be a more suitable variable for citizen science projects than streamflow. In order to test this assumption, we held surveys near seven different sized rivers in Switzerland and asked more than 450 volunteers to estimate the water level class based on a picture with a virtual staff gauge. The results show that people can generally estimate the relative water level well, although there were also a few outliers. We also asked the volunteers to estimate streamflow based on the stick method. The median estimated streamflow was close to the observed streamflow but the spread in the streamflow estimates was large and there were very large outliers, suggesting that crowd-based streamflow data is highly uncertain. In order to determine the potential value of water level class data for model calibration, we converted streamflow time series for 100 catchments in the US to stream level class time series and used these to calibrate the HBV model. The model was then validated using the streamflow data. The results of this modeling exercise show that stream level class data are useful for constraining a simple runoff model. Time series of only two stream level classes, e.g. above or below a rock in the stream, were already informative, especially when the class boundary was chosen towards the highest stream levels. There was hardly any improvement in model performance when more than five water level classes were used. This suggests that if crowd-sourced stream level observations are available for otherwise ungauged catchments, these data can be used to constrain a simple runoff model and to generate simulated streamflow time series from the level observations.
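
    Converting a streamflow record into stream level classes, as done for the synthetic experiment described above, only needs a stage estimate and a set of class boundaries. The sketch below uses a hypothetical power-law rating and arbitrary boundaries; the actual CrowdWater processing and the HBV calibration are not reproduced.

    ```python
    import numpy as np

    # Hypothetical streamflow series (m^3/s) and a power-law rating Q = a * h^b,
    # inverted to get stage h from flow Q.
    streamflow = np.array([0.4, 0.9, 2.5, 6.0, 1.2, 0.3, 4.4])
    a, b = 3.0, 1.6
    stage = (streamflow / a) ** (1.0 / b)

    # Class boundaries (m) relative to a reference such as a rock in the stream;
    # np.digitize returns the water level class for each time step.
    boundaries = np.array([0.3, 0.6, 0.9, 1.2])
    level_class = np.digitize(stage, boundaries)
    ```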

  6. Quantifying uncertainties in streamflow predictions through signature based inference of hydrological model parameters

    NASA Astrophysics Data System (ADS)

    Fenicia, Fabrizio; Reichert, Peter; Kavetski, Dmitri; Albert, Carlo

    2016-04-01

    The calibration of hydrological models based on signatures (e.g. Flow Duration Curves - FDCs) is often advocated as an alternative to model calibration based on the full time series of system responses (e.g. hydrographs). Signature based calibration is motivated by various arguments. From a conceptual perspective, calibration on signatures is a way to filter out errors that are difficult to represent when calibrating on the full time series. Such errors may for example occur when observed and simulated hydrographs are shifted, either on the "time" axis (i.e. left or right), or on the "streamflow" axis (i.e. above or below). These shifts may be due to errors in the precipitation input (time or amount), and if not properly accounted for in the likelihood function, may cause biased parameter estimates (e.g. estimated model parameters that do not reproduce the recession characteristics of a hydrograph). From a practical perspective, signature based calibration is seen as a possible solution for making predictions in ungauged basins. Where streamflow data are not available, it may in fact be possible to reliably estimate streamflow signatures. Previous research has for example shown how FDCs can be reliably estimated at ungauged locations based on climatic and physiographic influence factors. Typically, the goal of signature based calibration is not the prediction of the signatures themselves, but the prediction of the system responses. Ideally, the prediction of system responses should be accompanied by a reliable quantification of the associated uncertainties. Previous approaches for signature based calibration, however, do not allow reliable estimates of streamflow predictive distributions. Here, we illustrate how the Bayesian approach can be employed to obtain reliable streamflow predictive distributions based on signatures. A case study is presented, where a hydrological model is calibrated on FDCs and additional signatures. We propose an approach where the likelihood function for the signatures is derived from the likelihood for streamflow (rather than using an "ad-hoc" likelihood for the signatures as done in previous approaches). This likelihood is not easily tractable analytically and we therefore cannot apply "simple" MCMC methods. This numerical problem is solved using Approximate Bayesian Computation (ABC). Our results indicate that the proposed approach is suitable for producing reliable streamflow predictive distributions based on calibration to signature data. Moreover, our results provide indications of which signatures are more appropriate to represent the information content of the hydrograph.
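
    A stripped-down illustration of the ABC idea used here: draw parameters from the prior, run the hydrological model, compute the signature (an FDC), and keep the draws whose signature lies closest to the observed one. The model, priors, and distance below are placeholders, not the authors' setup or likelihood construction.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def run_model(theta, n=365):
        """Placeholder hydrological model: returns a synthetic streamflow series."""
        k, s = theta
        forcing = rng.gamma(2.0, 2.0, size=n)
        return s * (1.0 - np.exp(-forcing / k))

    def fdc(q, probs=np.linspace(0.05, 0.95, 19)):
        """Flow duration curve: flow quantiles at fixed exceedance probabilities."""
        return np.quantile(q, 1.0 - probs)

    observed_fdc = fdc(run_model((3.0, 5.0)))   # stand-in for the observed signature

    # ABC rejection sampling: distance between simulated and observed signatures.
    draws, dists = [], []
    for _ in range(5000):
        theta = (rng.uniform(0.5, 10.0), rng.uniform(1.0, 10.0))
        draws.append(theta)
        dists.append(np.linalg.norm(fdc(run_model(theta)) - observed_fdc))

    # Keep the 1% of draws closest to the observed signature (quantile-based tolerance).
    eps = np.quantile(dists, 0.01)
    posterior_samples = np.array([t for t, d in zip(draws, dists) if d <= eps])
    ```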

  7. Research on the calibration methods of the luminance parameter of radiation luminance meters

    NASA Astrophysics Data System (ADS)

    Cheng, Weihai; Huang, Biyong; Lin, Fangsheng; Li, Tiecheng; Yin, Dejin; Lai, Lei

    2017-10-01

    This paper introduces the standard diffuse-reflection white plate method and the integrating-sphere standard luminance source method for calibrating the luminance parameter of radiation luminance meters, and compares the calibration results of the two methods through principle analysis and experimental verification. After the same radiation luminance meter was calibrated with both methods, the data obtained confirm that the test results of both methods are reliable. The results show that the displayed value obtained with the standard white plate method has smaller errors and better reproducibility, whereas the standard luminance source method is more convenient and better suited to on-site calibration; it also covers a wider range and can test the linearity of the instruments.

  8. Towards free 3D end-point control for robotic-assisted human reaching using binocular eye tracking.

    PubMed

    Maimon-Dror, Roni O; Fernandez-Quesada, Jorge; Zito, Giuseppe A; Konnaris, Charalambos; Dziemian, Sabine; Faisal, A Aldo

    2017-07-01

    Eye movements are the only directly observable behavioural signals that are highly correlated with actions at the task level; they precede body movements and thus reflect action intentions. Moreover, eye movements are preserved in many movement disorders leading to paralysis (or in amputees), including stroke, spinal cord injury, Parkinson's disease, multiple sclerosis, and muscular dystrophy. Despite this benefit, eye tracking is not widely used as a control interface for robotic systems in movement-impaired patients due to poor human-robot interfaces. We demonstrate here how combining 3D gaze tracking using our GT3D binocular eye tracker with a custom-designed 3D head tracking system and calibration method enables continuous 3D end-point control of a robotic arm support system. Users can move their own hand to any location in the workspace by simply looking at the target and winking once. This purely eye-tracking-based system allows the end-user to retain free head movement and yet achieves high spatial end-point accuracy, on the order of 6 cm RMSE in each dimension with a standard deviation of 4 cm. 3D calibration is achieved by moving the robot along a three-dimensional space-filling Peano curve while the user tracks it with their eyes. This results in a fully automated calibration procedure that yields several thousand calibration points, versus standard approaches using a dozen points, resulting in beyond state-of-the-art 3D accuracy and precision.

  9. A novel dual-camera calibration method for 3D optical measurement

    NASA Astrophysics Data System (ADS)

    Gai, Shaoyan; Da, Feipeng; Dai, Xianqiang

    2018-05-01

    A novel dual-camera calibration method is presented. In classic methods, the camera parameters are usually calculated and optimized from the reprojection error. However, for a system designed for 3D optical measurement, this error does not reflect the quality of the 3D reconstruction. In the presented method, a planar calibration plate is used. First, images of the calibration plate are captured from several orientations within the measurement range, and the initial parameters of the two cameras are obtained from these images. Then, the rotation and translation matrices that link the frames of the two cameras are calculated using the Centroid Distance Increment Matrix method, which reduces the degree of coupling between the parameters. Next, the 3D coordinates of the calibration points are reconstructed by the space intersection method. Finally, the reconstruction error is calculated and minimized to optimize the calibration parameters. This error directly indicates the quality of the 3D reconstruction, so it is better suited to assessing dual-camera calibration. The experiments show that the proposed method is convenient and accurate: there is no strict requirement on the position of the calibration plate during the calibration process, and the accuracy is improved significantly.

  10. An operational epidemiological model for calibrating agent-based simulations of pandemic influenza outbreaks.

    PubMed

    Prieto, D; Das, T K

    2016-03-01

    The uncertainty of pandemic influenza viruses continues to pose major preparedness challenges for public health policymakers. Decisions to mitigate influenza outbreaks often involve a tradeoff between the social costs of interventions (e.g., school closure) and the cost of uncontrolled spread of the virus. To achieve a balance, policymakers must assess the impact of mitigation strategies once an outbreak begins and the virus characteristics are known. Agent-based (AB) simulation is a useful tool for building highly granular disease spread models incorporating the epidemiological features of the virus as well as the demographic and social behavioral attributes of tens of millions of affected people. Such disease spread models provide an excellent basis on which various mitigation strategies can be tested before they are adopted and implemented by policymakers. However, to serve as a testbed for mitigation strategies, the AB simulation models must be operational. A critical requirement for operational AB models is that they are amenable to quick and simple calibration. The calibration process works as follows: the AB model accepts information available from the field and uses it to update its parameters such that some of its outputs in turn replicate the field data. In this paper, we present our epidemiological-model-based calibration methodology, which has low computational complexity and is easy to interpret. Our model accepts a field estimate of the basic reproduction number and uses it to update (calibrate) the infection probabilities so that their effect, combined with the effects of the given virus epidemiology, demographics, and social behavior, results in an infection pattern yielding a similar value of the basic reproduction number. We evaluate the accuracy of the calibration methodology by applying it to an AB simulation model mimicking a regional outbreak in the US. The calibrated model is shown to yield infection patterns closely replicating the input estimates of the basic reproduction number. The calibration method is also shown to replicate an initial infection incidence trend for an H1N1 outbreak like that of 2009.
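
    The calibration step described above can be caricatured as a one-dimensional scaling problem: adjust the per-contact infection probability until the simulated basic reproduction number matches the field estimate. The sketch below uses a placeholder simulator and bisection; the contact rates, infectious period, and target value are illustrative, not the authors' model.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def simulate_r0(p_infect, contacts_per_day=12, infectious_days=5, n_seeds=2000):
        """Placeholder AB-style simulation: mean secondary infections per seed case."""
        contacts = rng.poisson(contacts_per_day * infectious_days, size=n_seeds)
        secondary = rng.binomial(contacts, p_infect)
        return secondary.mean()

    def calibrate(target_r0, lo=0.0, hi=0.2, iters=30):
        """Bisection on the infection probability so the simulated R0 matches the target."""
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            if simulate_r0(mid) < target_r0:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    p_star = calibrate(target_r0=1.4)   # field estimate of the basic reproduction number
    ```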

  11. Finding trap stiffness of optical tweezers using digital filters.

    PubMed

    Almendarez-Rangel, Pedro; Morales-Cruzado, Beatriz; Sarmiento-Gómez, Erick; Pérez-Gutiérrez, Francisco G

    2018-02-01

    Obtaining the trap stiffness and calibrating the position detection system are the basis of a force measurement using optical tweezers. Both calibration quantities can be calculated using several experimental methods available in the literature. In most cases, stiffness determination and detection system calibration are performed separately, often requiring procedures under very different conditions, so the reliability of the calibration is not assured because of possible changes in the environment. In this work, a new method to obtain both the detection system calibration and the trap stiffness simultaneously is presented. The method is based on the calculation of the power spectral density of positions through digital filters to obtain the harmonic contributions of the position signal. This method has the advantage of calculating both the trap stiffness and the photodetector calibration factor from the same dataset in situ. It also provides a direct way to avoid unwanted frequencies, such as electrical noise, that could greatly affect the calibration procedure.
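
    For context, the conventional power-spectrum route to trap stiffness fits a Lorentzian to the position PSD and converts the corner frequency via kappa = 2*pi*gamma*f_c. The sketch below shows that baseline calculation on a synthetic bead trace (illustrative bead parameters, an assumed scipy dependency, and a crude linearized fit), not the digital-filter scheme of the paper.

    ```python
    import numpy as np
    from scipy.signal import welch

    # Hypothetical bead parameters and a synthetic position trace (metres).
    fs = 20_000.0                       # sampling rate, Hz
    gamma = 6 * np.pi * 1e-3 * 0.5e-6   # Stokes drag for a 0.5 um radius bead in water
    kappa_true = 2e-5                   # trap stiffness, N/m
    kBT = 4.11e-21                      # thermal energy at room temperature, J

    # Simulate an Ornstein-Uhlenbeck trace as a stand-in for measured positions.
    n = 2**16
    dt = 1.0 / fs
    x = np.zeros(n)
    noise = np.random.default_rng(3).normal(size=n)
    for i in range(1, n):
        x[i] = (x[i - 1] * (1 - dt * kappa_true / gamma)
                + np.sqrt(2 * kBT * dt / gamma) * noise[i])

    # PSD and a linearized Lorentzian fit: 1/S(f) = (pi^2/D) * (fc^2 + f^2),
    # so regressing 1/PSD on f^2 gives fc^2 = intercept / slope.
    f, psd = welch(x, fs=fs, nperseg=4096)
    mask = (f > 10) & (f < 5000)
    slope, intercept = np.polyfit(f[mask]**2, 1.0 / psd[mask], 1)
    fc_est = np.sqrt(intercept / slope)

    kappa_est = 2 * np.pi * gamma * fc_est   # trap stiffness from the corner frequency
    ```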

  12. Improvement of Gaofen-3 Absolute Positioning Accuracy Based on Cross-Calibration

    PubMed Central

    Deng, Mingjun; Li, Jiansong

    2017-01-01

    The Chinese Gaofen-3 (GF-3) mission was launched in August 2016, equipped with a full polarimetric synthetic aperture radar (SAR) sensor in the C-band, with a resolution of up to 1 m. The absolute positioning accuracy of GF-3 is of great importance, and in-orbit geometric calibration is a key technology for improving absolute positioning accuracy. Conventional geometric calibration is used to accurately calibrate the geometric calibration parameters of the image (internal delay and azimuth shifts) using high-precision ground control data, which are highly dependent on the control data of the calibration field, but it remains costly and labor-intensive to monitor changes in GF-3’s geometric calibration parameters. Based on the positioning consistency constraint of the conjugate points, this study presents a geometric cross-calibration method for the rapid and accurate calibration of GF-3. The proposed method can accurately calibrate geometric calibration parameters without using corner reflectors and high-precision digital elevation models, thus improving absolute positioning accuracy of the GF-3 image. GF-3 images from multiple regions were collected to verify the absolute positioning accuracy after cross-calibration. The results show that this method can achieve a calibration accuracy as high as that achieved by the conventional field calibration method. PMID:29240675

  13. Radiometric calibration of the Earth observing system's imaging sensors

    NASA Technical Reports Server (NTRS)

    Slater, P. N.

    1987-01-01

    Philosophy, requirements, and methods of calibration of multispectral space sensor systems as applicable to the Earth Observing System (EOS) are discussed. Vicarious methods for calibration of low-spatial-resolution systems, with reference to the Advanced Very High Resolution Radiometer (AVHRR), are then summarized. Finally, a theoretical introduction is given to a new vicarious method of calibration that uses the ratio of diffuse-to-global irradiance at the Earth's surface as the key input. This may provide an additional independent method for in-flight calibration.

  14. Configurations and calibration methods for passive sampling techniques.

    PubMed

    Ouyang, Gangfeng; Pawliszyn, Janusz

    2007-10-19

    Passive sampling technology has developed very quickly in the past 15 years, and is widely used for the monitoring of pollutants in different environments. The design and quantification of passive sampling devices require an appropriate calibration method. Current calibration methods that exist for passive sampling, including equilibrium extraction, linear uptake, and kinetic calibration, are presented in this review. A number of state-of-the-art passive sampling devices that can be used for aqueous and air monitoring are introduced according to their calibration methods.

  15. Application of composite small calibration objects in traffic accident scene photogrammetry.

    PubMed

    Chen, Qiang; Xu, Hongguo; Tan, Lidong

    2015-01-01

    In order to address the difficulty of arranging large calibration objects and the low measurement accuracy of small calibration objects in traffic accident scene photogrammetry, a photogrammetric method based on a composite of small calibration objects is proposed. Several small calibration objects are placed around the traffic accident scene, and the coordinate system of the composite calibration object is given based on one of them. By maintaining the relative position and coplanar relationship of the small calibration objects, the local coordinate system of each small calibration object is transformed into the coordinate system of the composite calibration object. The two-dimensional direct linear transformation method is improved based on minimizing the reprojection error of the calibration points of all objects. A rectified image is obtained using the nonlinear optimization method. The increased accuracy of traffic accident scene photogrammetry using a composite small calibration object is demonstrated through the analysis of field experiments and case studies.
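
    The two-dimensional direct linear transformation underlying this method maps image coordinates to plane coordinates through a homography. Below is a basic sketch of estimating it from point correspondences with the standard SVD solution; the correspondences are hypothetical, and the paper's refinement, which minimizes the reprojection error over all composite-object points, is not shown.

    ```python
    import numpy as np

    def estimate_homography(img_pts, plane_pts):
        """Direct linear transformation: solve for H mapping image points to plane points."""
        rows = []
        for (u, v), (x, y) in zip(img_pts, plane_pts):
            rows.append([-u, -v, -1, 0, 0, 0, u * x, v * x, x])
            rows.append([0, 0, 0, -u, -v, -1, u * y, v * y, y])
        _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
        return vt[-1].reshape(3, 3)          # null-space vector reshaped to 3x3

    def apply_homography(H, pt):
        u, v = pt
        x, y, w = H @ np.array([u, v, 1.0])
        return x / w, y / w

    # Hypothetical correspondences: image pixels of calibration marks vs. their
    # known ground-plane coordinates (metres).
    img = [(102, 310), (640, 295), (660, 120), (95, 130), (380, 220)]
    plane = [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0), (0.0, 3.0), (2.1, 1.4)]
    H = estimate_homography(img, plane)
    rectified = apply_homography(H, (400, 250))   # map an image point onto the plane
    ```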

  16. Full-wave Nonlinear Inverse Scattering for Acoustic and Electromagnetic Breast Imaging

    NASA Astrophysics Data System (ADS)

    Haynes, Mark Spencer

    Acoustic and electromagnetic full-wave nonlinear inverse scattering techniques are explored in both theory and experiment with the ultimate aim of noninvasively mapping the material properties of the breast. There is evidence that benign and malignant breast tissue have different acoustic and electrical properties and imaging these properties directly could provide higher quality images with better diagnostic certainty. In this dissertation, acoustic and electromagnetic inverse scattering algorithms are first developed and validated in simulation. The forward solvers and optimization cost functions are modified from traditional forms in order to handle the large or lossy imaging scenes present in ultrasonic and microwave breast imaging. An antenna model is then presented, modified, and experimentally validated for microwave S-parameter measurements. Using the antenna model, a new electromagnetic volume integral equation is derived in order to link the material properties of the inverse scattering algorithms to microwave S-parameters measurements allowing direct comparison of model predictions and measurements in the imaging algorithms. This volume integral equation is validated with several experiments and used as the basis of a free-space inverse scattering experiment, where images of the dielectric properties of plastic objects are formed without the use of calibration targets. These efforts are used as the foundation of a solution and formulation for the numerical characterization of a microwave near-field cavity-based breast imaging system. The system is constructed and imaging results of simple targets are given. Finally, the same techniques are used to explore a new self-characterization method for commercial ultrasound probes. The method is used to calibrate an ultrasound inverse scattering experiment and imaging results of simple targets are presented. This work has demonstrated the feasibility of quantitative microwave inverse scattering by way of a self-consistent characterization formalism, and has made headway in the same area for ultrasound.

  17. Ultra-high Performance Liquid Chromatography Tandem Mass-Spectrometry for Simple and Simultaneous Quantification of Cannabinoids

    PubMed Central

    Jamwal, Rohitash; Topletz, Ariel R.; Ramratnam, Bharat; Akhlaghi, Fatemeh

    2017-01-01

    Cannabis is used widely in the United States, both recreationally and for medical purposes. Current methods for analysis of cannabinoids in human biological specimens rely on complex extraction processes and lengthy analysis times. We established a rapid and simple assay for quantification of Δ9-tetrahydrocannabinol (THC), cannabidiol (CBD), 11-hydroxy Δ9-tetrahydrocannabinol (11-OH THC) and 11-nor-9-carboxy-Δ9-tetrahydrocannbinol (THC-COOH) in human plasma by U-HPLC-MS/MS using Δ9-tetrahydrocannabinol-D3 as the internal standard. Chromatographic separation was achieved on an Acquity BEH C18 column using a gradient comprising water (0.1% formic acid) and methanol (0.1% formic acid) over a 6 min run-time. Analytes from 200 µL plasma were extracted using acetonitrile (containing 1% formic acid and THC-D3). Mass spectrometry was performed in positive ionization mode, and the total ion chromatogram was used for quantification of analytes. The assay was validated according to guidelines set forth by the Food and Drug Administration of the United States. An eight-point calibration curve was fitted with quadratic regression (r2>0.99) from 1.56 to 100 ng mL−1, and a lower limit of quantification (LLOQ) of 1.56 ng mL−1 was achieved. Accuracy and precision calculated from six calibration curves were between 85 and 115%, while the mean extraction recovery was >90% for all the analytes. Several plasma phospholipids eluted after the analytes and thus did not interfere with the assay. Bench-top, freeze-thaw, auto-sampler and short-term stability ranged from 92.7 to 106.8% of nominal values. Application of the method was evaluated by quantification of the analytes in human plasma from six subjects. PMID:28192758

  18. Signal inference with unknown response: calibration-uncertainty renormalized estimator.

    PubMed

    Dorn, Sebastian; Enßlin, Torsten A; Greiner, Maksim; Selig, Marco; Boehm, Vanessa

    2015-01-01

    The calibration of a measurement device is crucial for every scientific experiment in which a signal has to be inferred from data. We present CURE, the calibration-uncertainty renormalized estimator, to reconstruct a signal and simultaneously the instrument's calibration from the same data, without knowing the exact calibration but only its covariance structure. The idea of the CURE method, developed in the framework of information field theory, is to start with an assumed calibration and to successively include more and more portions of calibration uncertainty into the signal inference equations, absorbing the resulting corrections into renormalized signal (and calibration) solutions. Thereby, the signal inference and calibration problem turns into a problem of solving a single system of ordinary differential equations and can be identified with common resummation techniques used in field theories. We verify the CURE method by applying it to a simplistic toy example and compare it against existing self-calibration schemes, Wiener filter solutions, and Markov chain Monte Carlo sampling. We conclude that the method is able to keep up in accuracy with the best self-calibration methods and serves as a noniterative alternative to them.

  19. Evaluating the potential for remote bathymetric mapping of a turbid, sand-bed river: 2. Application to hyperspectral image data from the Platte River

    USGS Publications Warehouse

    Legleiter, C.J.; Kinzel, P.J.; Overstreet, B.T.

    2011-01-01

    This study examined the possibility of mapping depth from optical image data in turbid, sediment-laden channels. Analysis of hyperspectral images from the Platte River indicated that depth retrieval in these environments is feasible, but might not be highly accurate. Four methods of calibrating image-derived depth estimates were evaluated. The first involved extracting image spectra at survey point locations throughout the reach. These paired observations of depth and reflectance were subjected to optimal band ratio analysis (OBRA) to relate (R2 = 0.596) a spectrally based quantity to flow depth. Two other methods were based on OBRA of data from individual cross sections. A fourth strategy used ground-based reflectance measurements to derive an OBRA relation (R2 = 0.944) that was then applied to the image. Depth retrieval accuracy was assessed by visually inspecting cross sections and calculating various error metrics. Calibration via field spectroscopy resulted in a shallow bias but provided relative accuracies similar to image-based methods. Reach-aggregated OBRA was marginally superior to calibrations based on individual cross sections, and depth retrieval accuracy varied considerably along each reach. Errors were lower and observed versus predicted regression R2 values higher for a relatively simple, deeper site than a shallower, braided reach; errors were 1/3 and 1/2 the mean depth for the two reaches. Bathymetric maps were coherent and hydraulically reasonable, however, and might be more reliable than implied by numerical metrics. As an example application, linear discriminant analysis was used to produce a series of depth threshold maps for characterizing shallow-water habitat for roosting cranes. © 2011 by the American Geophysical Union.

  20. Evaluating the potential for remote bathymetric mapping of a turbid, sand-bed river: 2. application to hyperspectral image data from the Platte River

    USGS Publications Warehouse

    Legleiter, Carl J.; Kinzel, Paul J.; Overstreet, Brandon T.

    2011-01-01

    This study examined the possibility of mapping depth from optical image data in turbid, sediment-laden channels. Analysis of hyperspectral images from the Platte River indicated that depth retrieval in these environments is feasible, but might not be highly accurate. Four methods of calibrating image-derived depth estimates were evaluated. The first involved extracting image spectra at survey point locations throughout the reach. These paired observations of depth and reflectance were subjected to optimal band ratio analysis (OBRA) to relate (R2 = 0.596) a spectrally based quantity to flow depth. Two other methods were based on OBRA of data from individual cross sections. A fourth strategy used ground-based reflectance measurements to derive an OBRA relation (R2 = 0.944) that was then applied to the image. Depth retrieval accuracy was assessed by visually inspecting cross sections and calculating various error metrics. Calibration via field spectroscopy resulted in a shallow bias but provided relative accuracies similar to image-based methods. Reach-aggregated OBRA was marginally superior to calibrations based on individual cross sections, and depth retrieval accuracy varied considerably along each reach. Errors were lower and observed versus predicted regression R2 values higher for a relatively simple, deeper site than a shallower, braided reach; errors were 1/3 and 1/2 the mean depth for the two reaches. Bathymetric maps were coherent and hydraulically reasonable, however, and might be more reliable than implied by numerical metrics. As an example application, linear discriminant analysis was used to produce a series of depth threshold maps for characterizing shallow-water habitat for roosting cranes.
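
    Optimal band ratio analysis as described above reduces to an exhaustive search over band pairs: regress depth against the log-transformed band ratio and keep the pair with the highest R². Below is a minimal sketch on synthetic spectra; the band count, depths, and reflectance model are all hypothetical.

    ```python
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(4)

    # Hypothetical calibration data: depths (m) and multispectral reflectance
    # at a handful of wavelengths for the same survey points.
    n_pts, n_bands = 200, 6
    depth = rng.uniform(0.2, 2.0, n_pts)
    reflectance = rng.uniform(0.05, 0.3, (n_pts, n_bands))
    reflectance[:, 2] *= np.exp(-0.8 * depth)        # make one band depth-sensitive

    best = None
    for i, j in combinations(range(n_bands), 2):
        x = np.log(reflectance[:, i] / reflectance[:, j])   # OBRA predictor
        slope, intercept = np.polyfit(x, depth, 1)
        r2 = np.corrcoef(x, depth)[0, 1] ** 2
        if best is None or r2 > best[0]:
            best = (r2, i, j, slope, intercept)

    r2, band_i, band_j, slope, intercept = best       # optimal band ratio and its model
    ```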

  1. A simple method for determination of erythritol, maltitol, xylitol, and sorbitol in sugar-free chocolates by capillary electrophoresis with capacitively coupled contactless conductivity detection.

    PubMed

    Coelho, Aline Guadalupe; de Jesus, Dosil Pereira

    2016-11-01

    In this work, a novel and simple analytical method using capillary electrophoresis (CE) with capacitively coupled contactless conductivity detection (C4D) is proposed for the determination of the polyols erythritol, maltitol, xylitol, and sorbitol in sugar-free chocolate. CE separation of the polyols was achieved in less than 6 min, and it was mediated by the interaction between the polyols and the borate ions in the background electrolyte, forming negatively charged borate esters. The extraction of the polyols from the samples was simply obtained using ultra-pure water and ultrasonic energy. Linearity was assessed by calibration curves that showed R2 varying from 0.9920 to 0.9976. The LOQs were 12.4, 15.9, 9.0, and 9.0 μg/g for erythritol, maltitol, xylitol, and sorbitol, respectively. The accuracy of the method was evaluated by recovery tests, and the obtained recoveries varied from 70 to 116% with standard deviations ranging from 0.2 to 19%. The CE-C4D method was successfully applied for the determination of the studied polyols in commercial samples of sugar-free chocolate. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Development of a simple analytical methodology for determination of glucosamine release from modified release matrix tablets.

    PubMed

    Wu, Yunqi; Hussain, Munir; Fassihi, Reza

    2005-06-15

    A simple spectrophotometric method for the determination of glucosamine release from a sustained release (SR) hydrophilic matrix tablet, based on the reaction with ninhydrin, was developed, optimized and validated. The purple color (Ruhemann's purple) resulting from the reaction was stabilized and measured at 570 nm. Optimization of the method was essential, as many procedural parameters influenced the accuracy of the determination, including the ninhydrin concentration, reaction time, pH, reaction temperature, stability period of the purple color, and glucosamine/ninhydrin ratio. Glucosamine tablets (600 mg) with different hydrophilic polymers were formulated and manufactured on a rotary press. Dissolution studies were conducted (USP 26) using deionized water at 37 ± 0.2 °C with a paddle rotation speed of 50 rpm, and samples were removed manually at appropriate time intervals. Under the optimized reaction conditions, which proved to be critical, glucosamine was quantitatively analyzed and a calibration curve in the range of 0.202-2.020 mg (r = 0.9999) was constructed. The recovery of the developed method was 97.8-101.7% (n = 6). Reproducible dissolution profiles were obtained from the dissolution studies performed on different glucosamine tablets. The developed method is easy to use, accurate and highly cost-effective for routine studies relative to HPLC and other techniques.

  3. A simple method for the measurement of reflective foil emissivity

    NASA Astrophysics Data System (ADS)

    Ballico, M. J.; van der Ham, E. W. M.

    2013-09-01

    Reflective metal foil is widely used to reduce radiative heat transfer within the roof space of buildings. Such foils are typically mass-produced by vapor-deposition of a thin metallic coating onto a variety of substrates, ranging from plastic-coated reinforced paper to "bubble-wrap". Although the emissivity of such surfaces is almost negligible in the thermal infrared, typically less than 0.03, an insufficiently thick metal coating, or organic contamination of the surface, can significantly increase this value. To ensure that the quality of the installed insulation is satisfactory, Australian building code AS/NZS 4201.5:1994 requires a practical agreed method for measurement of the emissivity, and the standard ASTM-E408 is implied. Unfortunately this standard is not a "primary method" and requires the use of specified expensive apparatus and calibrated reference materials. At NMIA we have developed a simple primary technique, based on an apparatus to thermally modulate the sample and record the apparent modulation in infra-red radiance with commercially available radiation thermometers. The method achieves an absolute accuracy in the emissivity of approximately 0.004 (k=2). This paper theoretically analyses the equivalence between the thermal emissivity measured in this manner, the effective thermal emissivity in application, and the apparent emissivity measured in accordance with ASTM-E408.

  4. Determination of moxifloxacin in human plasma, plasma ultrafiltrate, and cerebrospinal fluid by a rapid and simple liquid chromatography-tandem mass spectrometry method.

    PubMed

    Pranger, Arianna D; Alffenaar, Jan-Willem C; Wessels, A Mireille A; Greijdanus, Ben; Uges, Donald R A

    2010-04-01

    Moxifloxacin (MFX) is a useful agent in the treatment of multi-drug-resistant tuberculosis (MDR-TB). At Tuberculosis Centre Beatrixoord, a referral center for tuberculosis in the Netherlands, approximately 36% of the patients have received MFX as treatment. Based on the variability of the MFX AUC, the variability of the in vitro susceptibility of M. tuberculosis to MFX, and the variability of penetration into sanctuary sites, measuring the concentration of MFX in plasma and cerebrospinal fluid (CSF) could be recommended. Therefore, a rapid and validated liquid chromatography-tandem mass spectrometry (LC-MS-MS) analytical method with a simple pretreatment procedure was developed for therapeutic drug monitoring of MFX in human plasma and CSF. Because of the potential influence of protein binding on efficacy, we decided to determine both the bound and unbound (ultrafiltrate) fractions of MFX. The calibration curves were linear over the therapeutic range of 0.05 to 5.0 mg/L in plasma and CSF, with CVs in the range of -5.4% to 9.3%. MFX ultrafiltrate samples could be determined with the same method setup for analysis of MFX in CSF. The LC-MS-MS method developed in this study is suitable for monitoring MFX in human plasma, plasma ultrafiltrate, and CSF.

  5. Fast and Simple Analytical Method for Direct Determination of Total Chlorine Content in Polyglycerol by ICP-MS.

    PubMed

    Jakóbik-Kolon, Agata; Milewski, Andrzej; Dydo, Piotr; Witczak, Magdalena; Bok-Badura, Joanna

    2018-02-23

    The fast and simple method for total chlorine determination in polyglycerols using low-resolution inductively coupled plasma mass spectrometry (ICP-MS), without the need for additional equipment and time-consuming sample decomposition, was evaluated. A linear calibration curve for the ³⁵Cl isotope was observed in the concentration range of 20-800 µg/L. The limits of detection and quantification were 15 µg/L and 44 µg/L, respectively. This corresponds to the ability to detect 3 µg/g and determine 9 µg/g of chlorine in polyglycerol under the studied conditions (0.5% matrix: polyglycerol samples diluted or dissolved with water to an overall concentration of 0.5%). Matrix effects as well as the effect of chlorine origin were evaluated. The presence of 0.5% (m/m) of matrix species similar to polyglycerol (polyethylene glycol, PEG) did not influence the chlorine determination for PEGs with average molecular weights (MW) up to 2000 Da. Good precision and accuracy of the chlorine content determination were achieved regardless of its origin (inorganic/organic). High analyte recovery and low relative standard deviation values were observed for real polyglycerol samples spiked with chloride. Additionally, a Combustion Ion Chromatography System was used as a reference method. The results confirmed the high accuracy and precision of the tested method.
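
    The relationship between the solution limits and the limits in the solid sample follows from simple dilution arithmetic; the short sketch below (Python) reproduces the conversion, assuming the 0.5% (m/m) solutions correspond to roughly 5 g of polyglycerol per litre.

```python
# Solution-level limits reported in the abstract
lod_solution_ug_per_L = 15.0
loq_solution_ug_per_L = 44.0

# 0.5% (m/m) sample solutions ~ 5 g of polyglycerol per litre (dilute aqueous assumption)
sample_g_per_L = 5.0

lod_sample_ug_per_g = lod_solution_ug_per_L / sample_g_per_L   # 3 ug/g, as reported
loq_sample_ug_per_g = loq_solution_ug_per_L / sample_g_per_L   # ~8.8 ug/g, reported as 9
print(lod_sample_ug_per_g, loq_sample_ug_per_g)
```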

  6. A simple method for the measurement of reflective foil emissivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballico, M. J.; Ham, E. W. M. van der

    Reflective metal foil is widely used to reduce radiative heat transfer within the roof space of buildings. Such foils are typically mass-produced by vapor-deposition of a thin metallic coating onto a variety of substrates, ranging from plastic-coated reinforced paper to 'bubble-wrap'. Although the emissivity of such surfaces is almost negligible in the thermal infrared, typically less than 0.03, an insufficiently thick metal coating, or organic contamination of the surface, can significantly increase this value. To ensure that the quality of the installed insulation is satisfactory, Australian building code AS/NZS 4201.5:1994 requires a practical agreed method for measurement of the emissivity, and the standard ASTM-E408 is implied. Unfortunately this standard is not a 'primary method' and requires the use of specified expensive apparatus and calibrated reference materials. At NMIA we have developed a simple primary technique, based on an apparatus to thermally modulate the sample and record the apparent modulation in infra-red radiance with commercially available radiation thermometers. The method achieves an absolute accuracy in the emissivity of approximately 0.004 (k=2). This paper theoretically analyses the equivalence between the thermal emissivity measured in this manner, the effective thermal emissivity in application, and the apparent emissivity measured in accordance with ASTM-E408.

  7. Development and validation of an RP-HPLC method for the quantitation of odanacatib in rat and human plasma and its application to a pharmacokinetic study.

    PubMed

    Police, Anitha; Gurav, Sandip; Dhiman, Vinay; Zainuddin, Mohd; Bhamidipati, Ravi Kanth; Rajagopal, Sriram; Mullangi, Ramesh

    2015-11-01

    A simple, specific, sensitive and reproducible high-performance liquid chromatography (HPLC) assay method has been developed and validated for the estimation of odanacatib in rat and human plasma. The bioanalytical procedure involves extraction of odanacatib and itraconazole (internal standard, IS) from a 200 μL plasma aliquot with a simple liquid-liquid extraction process. Chromatographic separation was achieved on a Symmetry Shield RP18 column using an isocratic mobile phase at a flow rate of 0.7 mL/min. The UV detection wavelength was 268 nm. Odanacatib and IS eluted at 5.5 and 8.6 min, respectively, with a total run time of 10 min. Method validation was performed as per US Food and Drug Administration guidelines and the results met the acceptance criteria. The calibration curve was linear over a concentration range of 50.9-2037 ng/mL (r² = 0.994). The intra- and inter-day precisions were in the range of 2.06-5.11 and 5.84-13.1%, respectively, in rat plasma and 2.38-7.90 and 6.39-10.2%, respectively, in human plasma. The validated HPLC method was successfully applied to a pharmacokinetic study in rats. Copyright © 2015 John Wiley & Sons, Ltd.

  8. Evaluation of medium-range ensemble flood forecasting based on calibration strategies and ensemble methods in Lanjiang Basin, Southeast China

    NASA Astrophysics Data System (ADS)

    Liu, Li; Gao, Chao; Xuan, Weidong; Xu, Yue-Ping

    2017-11-01

    Ensemble flood forecasts by hydrological models using numerical weather prediction products as forcing data are becoming more commonly used in operational flood forecasting applications. In this study, a hydrological ensemble flood forecasting system composed of an automatically calibrated Variable Infiltration Capacity model and quantitative precipitation forecasts from the TIGGE dataset is constructed for Lanjiang Basin, Southeast China. The impacts of calibration strategies and ensemble methods on the performance of the system are then evaluated. The hydrological model is optimized by the parallel-programmed ε-NSGA II multi-objective algorithm. According to the solutions from ε-NSGA II, two differently parameterized models are determined to simulate daily flows and peak flows at each of the three hydrological stations. A simple yet effective modular approach is then proposed to combine these daily and peak flows at the same station into one composite series. Five ensemble methods and various evaluation metrics are adopted. The results show that ε-NSGA II can provide an objective determination of parameter estimates, and the parallel program permits a more efficient simulation. It is also demonstrated that the forecasts from ECMWF have more favorable skill scores than those from other Ensemble Prediction Systems. The multimodel ensembles have advantages over all the single-model ensembles, and the multimodel methods weighted on members and skill scores outperform the other methods. Furthermore, the overall performance at the three stations can be satisfactory up to ten days; however, the hydrological errors can degrade the skill score by approximately 2 days, and their influence persists, with a weakening trend, out to a lead time of 10 days. With respect to peak flows selected by the Peaks Over Threshold approach, the ensemble means from single models or multimodels are generally underestimated; thus, although the ensemble mean brings an overall improvement in forecasting flows, for peak values it is more appropriate to take the flood forecasts from each individual member into account.
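
    As a simple illustration of weighting ensemble members by their skill, the sketch below (Python, with hypothetical member forecasts and weights) forms a skill-weighted multimodel mean; the study's actual weighting and verification procedures are more elaborate.

```python
import numpy as np

def weighted_ensemble_mean(forecasts, skill_scores):
    """Combine flow forecasts from several ensemble members into one
    multimodel forecast, weighting each member by its skill score.

    forecasts    : (n_members, n_leadtimes) array of forecast discharges
    skill_scores : (n_members,) array of non-negative skill scores
    """
    forecasts = np.asarray(forecasts, dtype=float)
    w = np.asarray(skill_scores, dtype=float)
    w = w / w.sum()                       # normalize weights to sum to one
    return w @ forecasts                  # weighted mean at each lead time

# Hypothetical three-member ensemble over five lead times (m^3/s)
members = [[820, 860, 900, 940, 910],
           [790, 840, 880, 930, 905],
           [850, 880, 920, 960, 930]]
print(weighted_ensemble_mean(members, skill_scores=[0.62, 0.55, 0.70]))
```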

  9. "Hook"-calibration of GeneChip-microarrays: theory and algorithm.

    PubMed

    Binder, Hans; Preibisch, Stephan

    2008-08-29

    The improvement of microarray calibration methods is an essential prerequisite for quantitative expression analysis. This issue requires the formulation of an appropriate model describing the basic relationship between the probe intensity and the specific transcript concentration in a complex environment of competing interactions, the estimation of the magnitude of these effects and their correction using the intensity information of a given chip and, finally, the development of practicable algorithms which judge the quality of a particular hybridization and estimate the expression degree from the intensity values. We present the so-called hook-calibration method which co-processes the log-difference (delta) and -sum (sigma) of the perfect match (PM) and mismatch (MM) probe-intensities. The MM probes are utilized as an internal reference which is subjected to the same hybridization law as the PM, however with modified characteristics. After sequence-specific affinity correction the method fits the Langmuir-adsorption model to the smoothed delta-versus-sigma plot. The geometrical dimensions of this so-called hook-curve characterize the particular hybridization in terms of simple geometric parameters which provide information about the mean non-specific background intensity, the saturation value, the mean PM/MM-sensitivity gain and the fraction of absent probes. This graphical summary spans a metrics system for expression estimates in natural units such as the mean binding constants and the occupancy of the probe spots. The method is single-chip based, i.e. it separately uses the intensities for each selected chip. The hook-method corrects the raw intensities for the non-specific background hybridization in a sequence-specific manner, for the potential saturation of the probe-spots with bound transcripts and for the sequence-specific binding of specific transcripts. The obtained chip characteristics in combination with the sensitivity-corrected probe-intensity values provide expression estimates scaled in natural units which are given by the binding constants of the particular hybridization.
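
    The central building block of the hook method is a Langmuir-type saturation of probe intensity with specific transcript concentration on top of a non-specific background. The sketch below (Python, synthetic data) fits only that building block with a standard nonlinear least-squares routine; it is not the hook algorithm itself, which operates on the delta-versus-sigma summary of PM and MM intensities.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir_intensity(conc, n_bg, i_max, k):
    """Probe intensity as a non-specific background plus a Langmuir term
    that saturates as the specific transcript concentration increases."""
    return n_bg + i_max * (k * conc) / (1.0 + k * conc)

# Hypothetical spike-in style data: transcript concentration vs probe intensity
conc = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
intensity = np.array([40.0, 95.0, 140.0, 215.0, 300.0, 370.0, 420.0, 450.0])

popt, _ = curve_fit(langmuir_intensity, conc, intensity, p0=[30.0, 500.0, 0.3])
n_bg, i_max, k = popt
print(f"background={n_bg:.1f}, saturation={n_bg + i_max:.1f}, binding constant={k:.3f}")
```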

  10. Structured light system calibration method with optimal fringe angle.

    PubMed

    Li, Beiwen; Zhang, Song

    2014-11-20

    For structured light system calibration, one popular approach is to treat the projector as an inverse camera. This is usually performed by projecting horizontal and vertical sequences of patterns to establish one-to-one mapping between camera points and projector points. However, for a well-designed system, either horizontal or vertical fringe images are not sensitive to depth variation and thus yield inaccurate mapping. As a result, the calibration accuracy is jeopardized if a conventional calibration method is used. To address this limitation, this paper proposes a novel calibration method based on optimal fringe angle determination. Experiments demonstrate that our calibration approach can increase the measurement accuracy up to 38% compared to the conventional calibration method with a calibration volume of 300(H)  mm×250(W)  mm×500(D)  mm.

  11. Stress Degradation Studies on Varenicline Tartrate and Development of a Validated Stability-Indicating HPLC Method

    PubMed Central

    Pujeri, Sudhakar S.; Khader, Addagadde M. A.; Seetharamappa, Jaldappagari

    2012-01-01

    A simple, rapid and stability-indicating reversed-phase liquid chromatographic method was developed for the assay of varenicline tartrate (VRT) in the presence of its degradation products generated from forced decomposition studies. The HPLC separation was achieved on a C18 Inertsil column (250 mm × 4.6 mm i.d., 5 μm particle size) employing a mobile phase consisting of ammonium acetate buffer containing trifluoroacetic acid (0.02 M; pH 4) and acetonitrile in gradient program mode with a flow rate of 1.0 mL min−1. The UV detector was operated at 237 nm while the column temperature was maintained at 40 °C. The developed method was validated as per ICH guidelines with respect to specificity, linearity, precision, accuracy, robustness and limit of quantification. The method was found to be simple, specific, precise and accurate. Selectivity of the proposed method was validated by subjecting the stock solution of VRT to acidic, basic, photolysis, oxidative and thermal degradation. The calibration curve was found to be linear in the concentration range of 0.1–192 μg mL−1 (R2 = 0.9994). The peaks of degradation products did not interfere with that of pure VRT. The utility of the developed method was examined by analyzing the tablets containing VRT. The results of analysis were subjected to statistical analysis. PMID:22396908

  12. Simultaneous calibration phantom commission and geometry calibration in cone beam CT

    NASA Astrophysics Data System (ADS)

    Xu, Yuan; Yang, Shuai; Ma, Jianhui; Li, Bin; Wu, Shuyu; Qi, Hongliang; Zhou, Linghong

    2017-09-01

    Geometry calibration is a vital step for describing the geometry of a cone beam computed tomography (CBCT) system and is a prerequisite for CBCT reconstruction. In current methods, calibration phantom commission and geometry calibration are divided into two independent tasks. Small errors in ball-bearing (BB) positioning in the phantom-making step will severely degrade the quality of phantom calibration. To solve this problem, we propose an integrated method to simultaneously realize geometry phantom commission and geometry calibration. Instead of assuming the accuracy of the geometry phantom, the integrated method considers BB centers in the phantom as an optimized parameter in the workflow. Specifically, an evaluation phantom and the corresponding evaluation contrast index are used to evaluate geometry artifacts for optimizing the BB coordinates in the geometry phantom. After utilizing particle swarm optimization, the CBCT geometry and BB coordinates in the geometry phantom are calibrated accurately and are then directly used for the next geometry calibration task in other CBCT systems. To evaluate the proposed method, both qualitative and quantitative studies were performed on simulated and realistic CBCT data. The spatial resolution of reconstructed images using dental CBCT can reach up to 15 line pairs cm⁻¹. The proposed method is also superior to the Wiesent method in experiments. This paper shows that the proposed method is attractive for simultaneous and accurate geometry phantom commission and geometry calibration.

  13. Simplified method for creating a density-absorbed dose calibration curve for the low dose range from Gafchromic EBT3 film.

    PubMed

    Gotanda, Tatsuhiro; Katsuda, Toshizo; Gotanda, Rumi; Kuwano, Tadao; Akagawa, Takuya; Tanki, Nobuyoshi; Tabuchi, Akihiko; Shimono, Tetsunori; Kawaji, Yasuyuki

    2016-01-01

    Radiochromic film dosimeters have a disadvantage in comparison with an ionization chamber in that the dosimetry process is time-consuming for creating a density-absorbed dose calibration curve. The purpose of this study was the development of a simplified method of creating a density-absorbed dose calibration curve from radiochromic film within a short time. This simplified method was performed using Gafchromic EBT3 film with a low energy dependence and step-shaped Al filter. The simplified method was compared with the standard method. The density-absorbed dose calibration curves created using the simplified and standard methods exhibited approximately similar straight lines, and the gradients of the density-absorbed dose calibration curves were -32.336 and -33.746, respectively. The simplified method can obtain calibration curves within a much shorter time compared to the standard method. It is considered that the simplified method for EBT3 film offers a more time-efficient means of determining the density-absorbed dose calibration curve within a low absorbed dose range such as the diagnostic range.
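
    A density-absorbed dose calibration line of the kind described above can be fitted and inverted in a few lines; the sketch below (Python) uses made-up density readings and doses purely to illustrate the step, not the study's values, units or gradients.

```python
import numpy as np

# Hypothetical film readings: absorbed dose vs measured optical density
dose_mGy = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
density  = np.array([1.020, 0.955, 0.892, 0.830, 0.768, 0.705])

# Linear density-absorbed dose calibration curve: density = gradient * dose + offset
gradient, offset = np.polyfit(dose_mGy, density, 1)   # gradient is negative, as in the study

def dose_from_density(d):
    """Invert the calibration line to estimate absorbed dose from film density."""
    return (d - offset) / gradient

print(f"gradient = {gradient:.4f} per mGy, density 0.90 -> {dose_from_density(0.90):.2f} mGy")
```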

  14. Simplified method for creating a density-absorbed dose calibration curve for the low dose range from Gafchromic EBT3 film

    PubMed Central

    Gotanda, Tatsuhiro; Katsuda, Toshizo; Gotanda, Rumi; Kuwano, Tadao; Akagawa, Takuya; Tanki, Nobuyoshi; Tabuchi, Akihiko; Shimono, Tetsunori; Kawaji, Yasuyuki

    2016-01-01

    Radiochromic film dosimeters have a disadvantage in comparison with an ionization chamber in that the dosimetry process is time-consuming for creating a density-absorbed dose calibration curve. The purpose of this study was the development of a simplified method of creating a density-absorbed dose calibration curve from radiochromic film within a short time. This simplified method was performed using Gafchromic EBT3 film with a low energy dependence and step-shaped Al filter. The simplified method was compared with the standard method. The density-absorbed dose calibration curves created using the simplified and standard methods exhibited approximately similar straight lines, and the gradients of the density-absorbed dose calibration curves were −32.336 and −33.746, respectively. The simplified method can obtain calibration curves within a much shorter time compared to the standard method. It is considered that the simplified method for EBT3 film offers a more time-efficient means of determining the density-absorbed dose calibration curve within a low absorbed dose range such as the diagnostic range. PMID:28144120

  15. A Ricin Forensic Profiling Approach Based on a Complex Set of Biomarkers

    DOE PAGES

    Fredriksson, Sten-Ake; Wunschel, David S.; Lindstrom, Susanne Wiklund; ...

    2018-03-28

    A forensic method for the retrospective determination of preparation methods used for illicit ricin toxin production was developed. The method was based on a complex set of biomarkers, including carbohydrates, fatty acids, and seed storage proteins, in combination with data on ricin and Ricinus communis agglutinin. The analyses were performed on samples prepared from four castor bean plant (R. communis) cultivars by four different sample preparation methods (PM1 – PM4) ranging from simple disintegration of the castor beans to multi-step preparation methods including different protein precipitation methods. Comprehensive analytical data were collected using a range of analytical methods, and robust orthogonal partial least squares-discriminant analysis (OPLS-DA) models were constructed based on the calibration set. Using a decision tree and two OPLS-DA models, the sample preparation methods of the test set samples were determined. The model statistics of the two models were good, and a 100% rate of correct predictions on the test set was achieved.

  16. A Ricin Forensic Profiling Approach Based on a Complex Set of Biomarkers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fredriksson, Sten-Ake; Wunschel, David S.; Lindstrom, Susanne Wiklund

    A forensic method for the retrospective determination of preparation methods used for illicit ricin toxin production was developed. The method was based on a complex set of biomarkers, including carbohydrates, fatty acids, and seed storage proteins, in combination with data on ricin and Ricinus communis agglutinin. The analyses were performed on samples prepared from four castor bean plant (R. communis) cultivars by four different sample preparation methods (PM1 – PM4) ranging from simple disintegration of the castor beans to multi-step preparation methods including different protein precipitation methods. Comprehensive analytical data were collected using a range of analytical methods, and robust orthogonal partial least squares-discriminant analysis (OPLS-DA) models were constructed based on the calibration set. Using a decision tree and two OPLS-DA models, the sample preparation methods of the test set samples were determined. The model statistics of the two models were good, and a 100% rate of correct predictions on the test set was achieved.

  17. Development of high performance liquid chromatography method for miconazole analysis in powder sample

    NASA Astrophysics Data System (ADS)

    Hermawan, D.; Suwandri; Sulaeman, U.; Istiqomah, A.; Aboul-Enein, H. Y.

    2017-02-01

    A simple high performance liquid chromatography (HPLC) method was developed in this study for the analysis of miconazole, an antifungal drug, in a powder sample. The optimized HPLC conditions used a C8 column with a mobile phase of methanol:water (85:15, v/v), a flow rate of 0.8 mL/min, and UV detection at 220 nm. The calibration graph was linear in the range from 10 to 50 mg/L with an r² of 0.9983. The limit of detection (LOD) and limit of quantitation (LOQ) were 2.24 mg/L and 7.47 mg/L, respectively. The present HPLC method is applicable to the determination of miconazole in the powder sample, with a recovery of 101.28% (RSD = 0.96%, n = 3). The developed HPLC method provides a short analysis time, high reproducibility and high sensitivity.

  18. Removal of ring artifacts in microtomography by characterization of scintillator variations.

    PubMed

    Vågberg, William; Larsson, Jakob C; Hertz, Hans M

    2017-09-18

    Ring artifacts reduce image quality in tomography and arise from faulty detector calibration. In microtomography, we have identified that ring artifacts can arise from high-spatial-frequency variations in the scintillator thickness. Such variations are normally removed by a flat-field correction. However, as the spectrum changes, e.g. due to beam hardening, the detector response varies non-uniformly, introducing ring artifacts that persist after flat-field correction. In this paper, we present a method to correct for ring artifacts caused by variations in scintillator thickness, using a simple procedure to characterize the local scintillator response. The method addresses the actual physical cause of the ring artifacts, in contrast to many other ring-artifact removal methods, which rely only on image post-processing. By applying the technique to an experimental phantom tomography, we show that ring artifacts are strongly reduced compared to making only a flat-field correction.

  19. Development and Validation of an Analytical Methodology Based on Liquid Chromatography-Electrospray Tandem Mass Spectrometry for the Simultaneous Determination of Phenolic Compounds in Olive Leaf Extract.

    PubMed

    Cittan, Mustafa; Çelik, Ali

    2018-04-01

    A simple method was validated for the analysis of 31 phenolic compounds using liquid chromatography-electrospray tandem mass spectrometry. The proposed method was successfully applied to the determination of phenolic compounds in an olive leaf extract, and 24 compounds were analyzed quantitatively. Olive biophenols were extracted from olive leaves by microwave-assisted extraction, with acceptable recovery values between 78.1 and 108.7%. Good linearity was obtained, with correlation coefficients above 0.9916 for the calibration curves of the phenolic compounds. The limits of quantification ranged from 0.14 to 3.2 μg g⁻¹. Intra-day and inter-day precision studies indicated that the proposed method was repeatable. As a result, it was confirmed that the proposed method is highly reliable for the determination of phenolic species in olive leaf extracts.

  20. Backward-gazing method for measuring solar concentrators shape errors.

    PubMed

    Coquand, Mathieu; Henault, François; Caliot, Cyril

    2017-03-01

    This paper describes a backward-gazing method for measuring the optomechanical errors of solar concentrating surfaces. It makes use of four cameras placed near the solar receiver and simultaneously recording images of the sun reflected by the optical surfaces. Simple data processing then allows reconstructing the slope and shape errors of the surfaces. The originality of the method is enforced by the use of generalized quad-cell formulas and approximate mathematical relations between the slope errors of the mirrors and their reflected wavefront in the case of sun-tracking heliostats at high-incidence angles. Numerical simulations demonstrate that the measurement accuracy is compliant with standard requirements of solar concentrating optics in the presence of noise or calibration errors. The method is suited to fine characterization of the optical and mechanical errors of heliostats and their facets, or to provide better control for real-time sun tracking.

  1. A kinetic method for the determination of thiourea by its catalytic effect in micellar media

    NASA Astrophysics Data System (ADS)

    Abbasi, Shahryar; Khani, Hossein; Gholivand, Mohammad Bagher; Naghipour, Ali; Farmany, Abbas; Abbasi, Freshteh

    2009-03-01

    A highly sensitive, selective and simple kinetic method was developed for the determination of trace levels of thiourea, based on its catalytic effect on the oxidation of janus green in phosphoric acid media in the presence of Triton X-100 surfactant, without any separation or pre-concentration steps. The reaction was monitored spectrophotometrically by tracing the formation of the green-colored oxidized product of janus green at 617 nm within 15 min of mixing the reagents. The effect of several factors on the reaction rate was investigated. Following the recommended procedure, thiourea could be determined with a linear calibration graph over the 0.03-10.00 μg/mL range. The detection limit of the proposed method is 0.02 μg/mL. Most foreign species do not interfere with the determination. The high sensitivity and selectivity of the proposed method allowed its successful application to fruit juice and industrial waste water.

  2. Operating Experience Review of the INL HTE Gas Monitoring System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    L. C. Cadwallader; K. G. DeWall

    2010-06-01

    This paper describes the operation of several types of gas monitors in use at the Idaho National Laboratory (INL) High Temperature Electrolysis Experiment (HTE) laboratory. The gases monitored are hydrogen, carbon monoxide, carbon dioxide, and oxygen. The operating time, calibration, unwanted alarms, and calibration session durations are described. Some simple statistics are given for the reliability of these monitors, and the results are compared with the operating experiences of other types of monitors.

  3. The Role of Wakes in Modelling Tidal Current Turbines

    NASA Astrophysics Data System (ADS)

    Conley, Daniel; Roc, Thomas; Greaves, Deborah

    2010-05-01

    The eventual proper development of arrays of Tidal Current Turbines (TCT) will require a balance which maximizes power extraction while minimizing environmental impacts. Idealized analytical analogues and simple 2-D models are useful tools for investigating questions of a general nature but do not represent a practical tool for application to realistic cases. Some form of 3-D numerical simulation will be required for such applications, and the current project is designed to develop a numerical decision-making tool for use in planning large-scale TCT projects. The project is predicated on the use of an existing regional ocean modelling framework (the Regional Ocean Modelling System - ROMS) which is modified to enable the user to account for the effects of TCTs. In such a framework, where mixing processes are highly parametrized, the fidelity of the quantitative results is critically dependent on the parameter values utilized. In light of the early stage of TCT development and the lack of field-scale measurements, the calibration of such a model is problematic. In the absence of explicit calibration data sets, the device wake structure has been identified as an efficient feature for model calibration. This presentation will discuss efforts to design an appropriate calibration scheme focused on wake decay; the motivation for this approach, the techniques applied, validation results from simple test cases, and limitations will be presented.

  4. Systems and methods for optically measuring properties of hydrocarbon fuel gases

    DOEpatents

    Adler-Golden, S.; Bernstein, L.S.; Bien, F.; Gersh, M.E.; Goldstein, N.

    1998-10-13

    A system and method for optical interrogation and measurement of a hydrocarbon fuel gas includes a light source generating light at near-visible wavelengths. A cell containing the gas is optically coupled to the light source, whose light is partially transmitted by the sample. A spectrometer disperses the transmitted light and captures an image thereof. The image is captured by a low-cost silicon-based two-dimensional CCD array. The captured spectral image is processed by electronics for determining energy or BTU content and composition of the gas. The innovative optical approach provides a relatively inexpensive, durable, maintenance-free sensor and method which is reliable in the field and relatively simple to calibrate. In view of the above, accurate monitoring is possible at a plurality of locations along the distribution chain leading to more efficient distribution. 14 figs.

  5. [Study on the kinetic fluorimetric determination of tannins in tea].

    PubMed

    Feng, Su-ling; Tang, Jun-ming; Fan, Jing

    2003-04-01

    A simple and highly sensitive kinetic fluorimetric method is proposed for the determination of trace tannins, based on the activation of tannins on the oxidation of pyronine Y by hydrogen peroxide catalyzed by Cu(II) ion. The effects of some experimental conditions were investigated and discussed in detail. The fixed reaction time procedure was used to determine the fluorescence intensity of the system. The calibration curve of tannin was linear in the range of 0.06-0.96 mg.L-1, and the detection limit for tannin was 0.032 mg.L-1. The relative standard deviation for the measurement of 0.32 mg.L-1 tannin (n = 11) was 2.3%. The proposed method has been successfully applied to the determination of tannins in tea. The results obtained were compared with those provided by the Folin-Ciocalteu method.

  6. [Simultaneous determination of eleven components in Ginkgo biloba leaves by high performance liquid chromatography method].

    PubMed

    Lv, Jin-Li; Yang, Biao; Li, Meng-Xuan; Meng, Zhao-Qing; Ma, Shi-Ping; Wang, Zhen-Zhong; Ding, Gang; Huang, Wen-Zhe; Xiao, Wei

    2017-03-01

    To study Ginkgo biloba leaves from different producing areas, we established an HPLC method for the simultaneous determination of seven flavonoid glycosides and four biflavonoids in G. biloba leaves. The analysis was performed on an Agilent ZORBAX SB-C₁₈ column (4.6 mm×250 mm, 5 μm) with acetonitrile and 0.4% phosphoric acid as the mobile phase at a flow rate of 1 mL•min⁻¹ in gradient elution mode, and the detection was carried out at 254 nm. The calibration curves of the seven flavonoid glycosides and four biflavonoids showed good linearity, with good recoveries. The established HPLC method is simple, rapid, accurate, reliable, and sensitive, and can be applied to the identification and quality control of G. biloba leaves. Copyright© by the Chinese Pharmaceutical Association.

  7. A versatile method for the determination of photochemical quantum yields via online UV-Vis spectroscopy.

    PubMed

    Stadler, Eduard; Eibel, Anna; Fast, David; Freißmuth, Hilde; Holly, Christian; Wiech, Mathias; Moszner, Norbert; Gescheidt, Georg

    2018-05-16

    We have developed a simple method for determining the quantum yields of photo-induced reactions. Our setup features a fibre coupled UV-Vis spectrometer, LED irradiation sources, and a calibrated spectrophotometer for precise measurements of the LED photon flux. The initial slope in time-resolved absorbance profiles provides the quantum yield. We show the feasibility of our methodology for the kinetic analysis of photochemical reactions and quantum yield determination. The typical chemical actinometers, ferrioxalate and ortho-nitrobenzaldehyde, as well as riboflavin, a spiro-compound, phosphorus- and germanium-based photoinitiators for radical polymerizations and the frequently utilized photo-switch azobenzene serve as paradigms. The excellent agreement of our results with published data demonstrates the high potential of the proposed method as a convenient alternative to the time-consuming chemical actinometry.
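
    The quantum-yield estimate from the initial slope of an absorbance-time profile amounts to converting dA/dt into a molar conversion rate and dividing by the absorbed photon flux. A minimal sketch of that arithmetic (Python, with hypothetical cell volume, path length, molar absorptivity, and LED photon flux) is given below; the published setup also includes calibration of the photon flux with a spectroradiometer, which is not modelled here.

```python
import numpy as np

def quantum_yield(times_s, absorbance, eps_product, path_cm, volume_L,
                  photon_flux_einstein_s, abs_at_excitation):
    """Estimate a photoreaction quantum yield from the initial slope of an
    absorbance-vs-time profile (all parameter values here are hypothetical).

    eps_product            : molar absorptivity of the monitored species (L mol^-1 cm^-1)
    photon_flux_einstein_s : photon flux entering the cell (mol photons / s)
    abs_at_excitation      : sample absorbance at the irradiation wavelength
    """
    # Initial slope dA/dt from a linear fit to the first few points
    slope = np.polyfit(times_s[:5], absorbance[:5], 1)[0]
    rate_mol_per_s = slope / (eps_product * path_cm) * volume_L     # converted moles/s
    absorbed_photons_per_s = photon_flux_einstein_s * (1.0 - 10 ** (-abs_at_excitation))
    return rate_mol_per_s / absorbed_photons_per_s

t = np.linspace(0, 40, 9)            # s, hypothetical sampling times
a = 0.002 * t + 0.01                 # growing product absorbance, hypothetical
print(quantum_yield(t, a, eps_product=12000.0, path_cm=1.0, volume_L=0.003,
                    photon_flux_einstein_s=2.0e-9, abs_at_excitation=0.8))
```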

  8. Determination of tocopherols and sitosterols in seeds and nuts by QuEChERS-liquid chromatography.

    PubMed

    Delgado-Zamarreño, M Milagros; Fernández-Prieto, Cristina; Bustamante-Rangel, Myriam; Pérez-Martín, Lara

    2016-02-01

    In the present work, a simple, reliable and affordable sample treatment method for the simultaneous analysis of tocopherols and free phytosterols in nuts was developed. Analyte extraction was carried out using the QuEChERS methodology, and analyte separation and detection were accomplished using HPLC-DAD. The use of this methodology for the extraction of naturally occurring substances provides advantages such as speed, simplicity and ease of use. The parameters evaluated for the validation of the developed method included the linearity of the calibration plots, the detection and quantification limits, repeatability, reproducibility and recovery. The proposed method was successfully applied to the analysis of tocopherols and free phytosterols in samples of almonds, cashew nuts, hazelnuts, peanuts, tiger nuts, sunflower seeds and pistachios. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Systems and methods for optically measuring properties of hydrocarbon fuel gases

    DOEpatents

    Adler-Golden, Steven; Bernstein, Lawrence S.; Bien, Fritz; Gersh, Michael E.; Goldstein, Neil

    1998-10-13

    A system and method for optical interrogation and measurement of a hydrocarbon fuel gas includes a light source generating light at near-visible wavelengths. A cell containing the gas is optically coupled to the light source, whose light is partially transmitted by the sample. A spectrometer disperses the transmitted light and captures an image thereof. The image is captured by a low-cost silicon-based two-dimensional CCD array. The captured spectral image is processed by electronics for determining energy or BTU content and composition of the gas. The innovative optical approach provides a relatively inexpensive, durable, maintenance-free sensor and method which is reliable in the field and relatively simple to calibrate. In view of the above, accurate monitoring is possible at a plurality of locations along the distribution chain leading to more efficient distribution.

  10. A new systematic calibration method of ring laser gyroscope inertial navigation system

    NASA Astrophysics Data System (ADS)

    Wei, Guo; Gao, Chunfeng; Wang, Qi; Wang, Qun; Xiong, Zhenyu; Long, Xingwu

    2016-10-01

    The inertial navigation system (INS) has been the core component of both military and civil navigation systems. Before the INS is put into application, it should be calibrated in the laboratory in order to compensate for repeatability errors caused by manufacturing. The discrete calibration method cannot fulfill the requirements for high-accuracy calibration of a mechanically dithered ring laser gyroscope navigation system with shock absorbers. This paper analyzes the theories of error excitation and separation in detail and presents a new systematic calibration method for the ring laser gyroscope inertial navigation system. Error models and equations of the calibrated Inertial Measurement Unit are given. Proper rotation arrangement orders are then described in order to establish the linear relationships between the change of velocity errors and the calibrated parameter errors. Experiments were set up to compare the systematic errors calculated from the filtering calibration result with those obtained from the discrete calibration result. The largest position error and velocity error of the filtering calibration result are only 0.18 miles and 0.26 m/s, compared with 2 miles and 1.46 m/s for the discrete calibration result. These results validate the new systematic calibration method and prove its importance for the optimal design and accuracy improvement of calibration of mechanically dithered ring laser gyroscope inertial navigation systems.

  11. Calibrating the stress-time curve of a combined finite-discrete element method to a Split Hopkinson Pressure Bar experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osthus, Dave; Godinez, Humberto C.; Rougier, Esteban

    We present a generic method for automatically calibrating a computer code to an experiment, with uncertainty, for a given “training” set of computer code runs. The calibration technique is general and probabilistic, meaning the calibration uncertainty is represented in the form of a probability distribution. We demonstrate the calibration method by calibrating a combined Finite-Discrete Element Method (FDEM) to a Split Hopkinson Pressure Bar (SHPB) experiment with a granite sample. The probabilistic calibration method combines runs of an FDEM computer simulation for a range of “training” settings with experimental uncertainty to develop a statistical emulator. The process allows for calibration of input parameters and produces output quantities with uncertainty estimates for settings where simulation results are desired. Input calibration and FDEM fitted results are presented. We find that the maximum shear strength σ_t^max and, to a lesser extent, the maximum tensile strength σ_n^max govern the behavior of the stress-time curve before and around the peak, while the specific energy in Mode II (shear), E_t, largely governs the post-peak behavior of the stress-time curve. Good agreement is found between the calibrated FDEM and the SHPB experiment. Interestingly, we find the SHPB experiment to be rather uninformative for calibrating the softening-curve shape parameters (a, b, and c). This work stands as a successful demonstration of how a general probabilistic calibration framework can automatically calibrate FDEM parameters to an experiment.
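
    As a highly simplified illustration of emulator-based calibration, the sketch below (Python, with invented training runs and an invented experimental summary) fits Gaussian-process emulators to a handful of simulator outputs and then adjusts one input parameter to match the experiment by weighted least squares. The paper's probabilistic framework yields full posterior distributions rather than the point estimate shown here; all names and numbers are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical training design: simulator runs at different shear strengths, each
# summarized by peak stress and time-to-peak taken from the stress-time curve.
train_params = np.array([[60.0], [80.0], [100.0], [120.0], [140.0]])   # strength (MPa)
train_outputs = np.array([[118.0, 41.0], [131.0, 44.0], [142.0, 46.0],
                          [151.0, 48.0], [158.0, 50.0]])               # (peak MPa, us)

# Emulator: one Gaussian process per output quantity, trained on the simulator runs.
kernel = ConstantKernel(1.0) * RBF(length_scale=30.0)
emulators = [GaussianProcessRegressor(kernel=kernel, normalize_y=True)
             .fit(train_params, train_outputs[:, k]) for k in range(2)]

experiment = np.array([147.0, 47.0])    # hypothetical experimental summary
exp_sigma = np.array([4.0, 1.5])        # hypothetical experimental uncertainties

def misfit(theta):
    """Weighted squared mismatch between emulated outputs and the experiment."""
    pred = np.array([em.predict(theta.reshape(1, -1))[0] for em in emulators])
    return np.sum(((pred - experiment) / exp_sigma) ** 2)

res = minimize(misfit, x0=np.array([100.0]), bounds=[(60.0, 140.0)])
print("calibrated strength ~", res.x[0], "MPa")
```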

  12. Calibrating the stress-time curve of a combined finite-discrete element method to a Split Hopkinson Pressure Bar experiment

    DOE PAGES

    Osthus, Dave; Godinez, Humberto C.; Rougier, Esteban; ...

    2018-05-01

    We present a generic method for automatically calibrating a computer code to an experiment, with uncertainty, for a given “training” set of computer code runs. The calibration technique is general and probabilistic, meaning the calibration uncertainty is represented in the form of a probability distribution. We demonstrate the calibration method by calibrating a combined Finite-Discrete Element Method (FDEM) to a Split Hopkinson Pressure Bar (SHPB) experiment with a granite sample. The probabilistic calibration method combines runs of an FDEM computer simulation for a range of “training” settings with experimental uncertainty to develop a statistical emulator. The process allows for calibration of input parameters and produces output quantities with uncertainty estimates for settings where simulation results are desired. Input calibration and FDEM fitted results are presented. We find that the maximum shear strength σ_t^max and, to a lesser extent, the maximum tensile strength σ_n^max govern the behavior of the stress-time curve before and around the peak, while the specific energy in Mode II (shear), E_t, largely governs the post-peak behavior of the stress-time curve. Good agreement is found between the calibrated FDEM and the SHPB experiment. Interestingly, we find the SHPB experiment to be rather uninformative for calibrating the softening-curve shape parameters (a, b, and c). This work stands as a successful demonstration of how a general probabilistic calibration framework can automatically calibrate FDEM parameters to an experiment.

  13. Shortwave Radiometer Calibration Methods Comparison and Resulting Solar Irradiance Measurement Differences: A User Perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, Aron; Sengupta, Manajit; Andreas, Afshin

    Banks financing solar energy projects require assurance that these systems will produce the energy predicted. Furthermore, utility planners and grid system operators need to understand the impact of the variable solar resource on solar energy conversion system performance. Accurate solar radiation data sets reduce the expense associated with mitigating performance risk and assist in understanding the impacts of solar resource variability. The accuracy of solar radiation measured by radiometers depends on the instrument performance specification, installation method, calibration procedure, measurement conditions, maintenance practices, location, and environmental conditions. This study addresses the effect of different calibration methods provided by radiometric calibration service providers, such as NREL and manufacturers of radiometers, on the resulting calibration responsivity. Some of these radiometers are calibrated indoors and some outdoors. To establish or understand the differences in calibration methodology, we processed and analyzed field-measured data from these radiometers. This study investigates calibration responsivities provided by NREL's broadband outdoor radiometer calibration (BORCAL) and a few prominent manufacturers. The BORCAL method provides the outdoor calibration responsivity of pyranometers and pyrheliometers at 45 degree solar zenith angle, and as a function of solar zenith angle determined by clear-sky comparisons with reference irradiance. The BORCAL method also employs a thermal offset correction to the calibration responsivity of single-black thermopile detectors used in pyranometers. Indoor calibrations of radiometers by their manufacturers are performed using a stable artificial light source in a side-by-side comparison between the test radiometer under calibration and a reference radiometer of the same type. In both methods, the reference radiometer calibrations are traceable to the World Radiometric Reference (WRR). These different methods of calibration demonstrated +1% to +2% differences in solar irradiance measurement. Analyzing these differences will ultimately help determine the uncertainty of the field radiometer data and guide the development of a consensus standard for calibration. Further advancing procedures for precisely calibrating radiometers to world reference standards that reduce measurement uncertainty will allow more accurate prediction of solar output and improve the bankability of solar projects.

  14. Effects of Moisture and Particle Size on Quantitative Determination of Total Organic Carbon (TOC) in Soils Using Near-Infrared Spectroscopy.

    PubMed

    Tamburini, Elena; Vincenzi, Fabio; Costa, Stefania; Mantovi, Paolo; Pedrini, Paola; Castaldelli, Giuseppe

    2017-10-17

    Near-Infrared Spectroscopy is a cost-effective and environmentally friendly technique that could represent an alternative to conventional soil analysis methods, including total organic carbon (TOC). Soil fertility and quality are usually measured by traditional methods that involve the use of hazardous and strong chemicals. The effects of physical soil characteristics, such as moisture content and particle size, on spectral signals could be of great interest in order to understand and optimize prediction capability and set up a robust and reliable calibration model, with the future perspective of being applied in the field. Spectra of 46 soil samples were collected. Soil samples were divided into three data sets: unprocessed; only dried; and dried, ground and sieved, in order to evaluate the effects of moisture and particle size on spectral signals. Both separate and combined normalization methods including standard normal variate (SNV), multiplicative scatter correction (MSC) and normalization by closure (NCL), as well as smoothing using first and second derivatives (DV1 and DV2), were applied to a total of seven cases. Pretreatments for model optimization were designed and compared for each data set. The best combination of pretreatments was achieved by applying SNV and DV2 on partial least squares (PLS) modelling. There were no significant differences between the predictions using the three different data sets (p < 0.05). Finally, a unique database including all three data sets was built to include all the sources of sample variability that were tested and used for final prediction. External validation of TOC was carried out on 16 unknown soil samples to evaluate the predictive ability of the final combined calibration model. Hence, we demonstrate that sample preprocessing has a minor influence on the quality of near infrared spectroscopy (NIR) predictions, laying the ground for a direct and fast in situ application of the method. Data can be acquired outside the laboratory since the method is simple and does not need more than a simple band ratio of the spectra.
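
    The SNV plus second-derivative pretreatment followed by PLS regression described above can be prototyped with standard scientific-Python tools; the sketch below uses random placeholder spectra and TOC values simply to show the processing chain, and the window length, polynomial order, and number of latent variables are arbitrary choices, not those of the study.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression

def snv(spectra):
    """Standard normal variate: center and scale each spectrum individually."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

def pretreat(spectra):
    """SNV followed by a Savitzky-Golay second derivative (the SNV + DV2
    combination reported as the best pretreatment in the abstract)."""
    return savgol_filter(snv(spectra), window_length=11, polyorder=2,
                         deriv=2, axis=1)

# Placeholder NIR spectra (n_samples x n_wavelengths) and reference TOC values
rng = np.random.default_rng(0)
spectra = rng.random((46, 400))
toc = rng.random(46) * 3.0

model = PLSRegression(n_components=6)
model.fit(pretreat(spectra), toc)
predicted_toc = model.predict(pretreat(spectra)).ravel()
```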

  15. Calibration Method to Eliminate Zeroth Order Effect in Lateral Shearing Interferometry

    NASA Astrophysics Data System (ADS)

    Fang, Chao; Xiang, Yang; Qi, Keqi; Chen, Dawei

    2018-04-01

    In this paper, a calibration method is proposed which eliminates the zeroth order effect in lateral shearing interferometry. An analytical expression of the calibration error function is deduced, and the relationship between the phase-restoration error and calibration error is established. The analytical results show that the phase-restoration error introduced by the calibration error is proportional to the phase shifting error and zeroth order effect. The calibration method is verified using simulations and experiments. The simulation results show that the phase-restoration error is approximately proportional to the phase shift error and zeroth order effect, when the phase shifting error is less than 2° and the zeroth order effect is less than 0.2. The experimental result shows that compared with the conventional method with 9-frame interferograms, the calibration method with 5-frame interferograms achieves nearly the same restoration accuracy.

  16. An approach to derive some simple empirical equations to calibrate nuclear and acoustic well logging tools.

    PubMed

    Mohammad Al Alfy, Ibrahim

    2018-01-01

    A set of three pads was constructed from primary materials (sand, gravel and cement) to calibrate the gamma-gamma density tool. A simple equation was devised to convert the qualitative cps values to quantitative g/cc values. The neutron-neutron porosity tool measures qualitative cps porosity values; a direct equation was derived to calculate the porosity percentage from these cps values. The cement-bond log illustrates the quantity of cement surrounding the well pipes. Interpreting this log is difficult because of various parameters, such as the drilled well diameter and the internal diameter, thickness and type of the well pipes. An equation was derived to calculate the cement percentage at standard conditions; this equation can be modified according to varying conditions. Copyright © 2017 Elsevier Ltd. All rights reserved.
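
    The paper's equations are empirical and specific to its calibration pads; as a generic illustration of how a qualitative count rate can be tied to a quantitative density, the sketch below (Python, invented pad data) fits an assumed log-linear tool response to three pads of known density and then converts a logged count rate.

```python
import numpy as np

# Hypothetical calibration-pad data: known pad densities (g/cc) vs tool count rates (cps).
pad_density = np.array([1.60, 2.20, 2.65])
pad_cps     = np.array([5200.0, 2400.0, 1500.0])

# Assume a log-linear tool response, density = a + b*ln(cps), and fit a and b to the
# three pads; the published paper derives its own empirical equation from its pads.
b, a = np.polyfit(np.log(pad_cps), pad_density, 1)

def density_from_cps(cps):
    """Convert a qualitative count rate to a quantitative density estimate."""
    return a + b * np.log(cps)

print(density_from_cps(3000.0))   # g/cc for an example logged count rate
```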

  17. Emerging Techniques for Vicarious Calibration of Visible Through Short Wave Infrared Remote Sensing Systems

    NASA Technical Reports Server (NTRS)

    Ryan, Robert E.; Harrington, Gary; Holekamp, Kara; Pagnutti, Mary; Russell, Jeffrey; Frisbie, Troy; Stanley, Thomas

    2007-01-01

    Autonomous visible-to-SWIR ground-based vicarious Cal/Val will be an essential Cal/Val component with such a large number of systems. Radiometrically calibrated spectroradiometers can improve confidence in current ground-truth data through validation of radiometric modeling and validation or replacement of traditional sun photometer measurements. They should also enable a significant reduction in deployed equipment, such as that used in traditional sun photometer approaches. A simple, field-portable, white-light LED calibration source shows promise for the visible range (420-750 nm). A prototype demonstrated <0.5% drift over the 10-40 °C temperature range. Additional complexity (more LEDs) will be necessary for extending the spectral range into the NIR and SWIR. The long lifetimes of LEDs should provide at least several hundred hours of stability, minimizing the need for expensive calibrations and supporting long-duration field campaigns.

  18. Theoretical foundation, methods, and criteria for calibrating human vibration models using frequency response functions

    PubMed Central

    Dong, Ren G.; Welcome, Daniel E.; McDowell, Thomas W.; Wu, John Z.

    2015-01-01

    While simulations of the measured biodynamic responses of the whole human body or body segments to vibration are conventionally interpreted as summaries of biodynamic measurements, and the resulting models are considered quantitative, this study looked at these simulations from a different angle: model calibration. The specific aims of this study are to review and clarify the theoretical basis for model calibration, to help formulate the criteria for calibration validation, and to help appropriately select and apply calibration methods. In addition to established vibration theory, a novel theorem of mechanical vibration is also used to enhance the understanding of the mathematical and physical principles of the calibration. Based on this enhanced understanding, a set of criteria was proposed and used to systematically examine the calibration methods. Besides theoretical analyses, a numerical testing method is also used in the examination. This study identified the basic requirements for each calibration method to obtain a unique calibration solution. This study also confirmed that the solution becomes more robust if more than sufficient calibration references are provided. Practically, however, as more references are used, more inconsistencies can arise among the measured data for representing the biodynamic properties. To help account for the relative reliabilities of the references, a baseline weighting scheme is proposed. The analyses suggest that the best choice of calibration method depends on the modeling purpose, the model structure, and the availability and reliability of representative reference data. PMID:26740726

  19. The phantom robot - Predictive displays for teleoperation with time delay

    NASA Technical Reports Server (NTRS)

    Bejczy, Antal K.; Kim, Won S.; Venema, Steven C.

    1990-01-01

    An enhanced teleoperation technique for time-delayed bilateral teleoperator control is discussed. The control technique selected for time delay is based on the use of a high-fidelity graphics phantom robot that is being controlled in real time (without time delay) against the static task image. Thus, the motion of the phantom robot image on the monitor predicts the motion of the real robot. The real robot's motion will follow the phantom robot's motion on the monitor with the communication time delay implied in the task. Real-time high-fidelity graphics simulation of a PUMA arm is generated and overlaid on the actual camera view of the arm. A simple camera calibration technique is used for calibrated graphics overlay. A preliminary experiment is performed with the predictive display by using a very simple tapping task. The results with this simple task indicate that predictive display enhances the human operator's telemanipulation task performance significantly during free motion when there is a long time delay. It appears, however, that either two-view or stereoscopic predictive displays are necessary for general three-dimensional tasks.

  20. Simple laser vision sensor calibration for surface profiling applications

    NASA Astrophysics Data System (ADS)

    Abu-Nabah, Bassam A.; ElSoussi, Adnane O.; Al Alami, Abed ElRahman K.

    2016-09-01

    Due to the relatively large structures in the Oil and Gas industry, original equipment manufacturers (OEMs) have been implementing custom-designed laser vision sensor (LVS) surface profiling systems as part of quality control in their manufacturing processes. The rough manufacturing environment and the continuous movement and misalignment of these custom-designed tools adversely affect the accuracy of laser-based vision surface profiling applications. Accordingly, Oil and Gas businesses have been raising the demand from the OEMs to implement practical and robust LVS calibration techniques prior to running any visual inspections. This effort introduces an LVS calibration technique representing a simplified version of two known calibration techniques, which are commonly implemented to obtain a calibrated LVS system for surface profiling applications. Both calibration techniques are implemented virtually and experimentally to scan simulated and three-dimensional (3D) printed features of known profiles, respectively. Scanned data is transformed from the camera frame to points in the world coordinate system and compared with the input profiles to validate the introduced calibration technique capability against the more complex approach and preliminarily assess the measurement technique for weld profiling applications. Moreover, the sensitivity to stand-off distances is analyzed to illustrate the practicality of the presented technique.
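
    Once the LVS extrinsics are known, transforming scanned laser-spot coordinates from the camera frame to the world frame is a rigid-body mapping X_world = R·X_cam + t; the sketch below (Python, with hypothetical rotation, translation, and profile points) shows that final step of the workflow described above.

```python
import numpy as np

def camera_to_world(points_cam, R, t):
    """Map 3-D laser-spot coordinates from the camera frame to the world
    frame using the extrinsic calibration (rotation R, translation t):
    X_world = R @ X_cam + t for each scanned point."""
    points_cam = np.asarray(points_cam, dtype=float)
    return points_cam @ R.T + t

# Hypothetical extrinsics from a calibration step and a few scanned profile points (mm)
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])           # 90 degree rotation about the z-axis
t = np.array([150.0, 20.0, 480.0])
profile_cam = np.array([[1.2, -3.4, 250.0],
                        [1.3, -3.1, 251.5]])
print(camera_to_world(profile_cam, R, t))
```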
