Sample records for obtained calibration curves

  1. Apollo 16/AS-511/LM-11 operational calibration curves. Volume 1: Calibration curves for command service module CSM 113

    NASA Technical Reports Server (NTRS)

    Demoss, J. F. (Compiler)

    1971-01-01

    Calibration curves for the Apollo 16 command service module pulse code modulation downlink and onboard display are presented. Subjects discussed are: (1) measurement calibration curve format, (2) measurement identification, (3) multi-mode calibration data summary, (4) pulse code modulation bilevel events listing, and (5) calibration curves for instrumentation downlink and meter link.

  2. A new form of the calibration curve in radiochromic dosimetry. Properties and results.

    PubMed

    Tamponi, Matteo; Bona, Rossana; Poggiu, Angela; Marini, Piergiorgio

    2016-07-01

    This work describes a new form of the calibration curve for radiochromic dosimetry that depends on one fit parameter. Some results are reported to show that the new curve performs as well as those previously used and, more importantly, significantly reduces the dependence on the lot of films, the film orientation on the scanner, and the time after exposure. The form of the response curve makes use of the net optical densities ratio against the dose and has been studied by means of the Beer-Lambert law and a simple modeling of the film. The new calibration curve has been applied to EBT3 films exposed at 6 and 15 MV energy beams of linear accelerators and read-out in transmission mode by means of a flatbed color scanner. Its performance has been compared to that of two established forms of the calibration curve, which use the optical density and the net optical density against the dose. Four series of measurements with four lots of EBT3 films were used to evaluate the precision, accuracy, and dependence on the time after exposure, orientation on the scanner and lot of films. The new calibration curve is roughly subject to the same dose uncertainty, about 2% (1 standard deviation), and has the same accuracy, about 1.5% (dose values between 50 and 450 cGy), as the other calibration curves when films of the same lot are used. Moreover, the new calibration curve, albeit obtained from only one lot of film, shows a good agreement with experimental data from all other lots of EBT3 films used, with an accuracy of about 2% and a relative dose precision of 2.4% (1 standard deviation). The agreement also holds for changes of the film orientation and of the time after exposure. The dose accuracy of this new form of the calibration curve is always equal to or better than those obtained from the two types of curves previously used. The use of the net optical densities ratio considerably reduces the dependence on the lot of films, the landscape/portrait orientation, and the time

  3. A new form of the calibration curve in radiochromic dosimetry. Properties and results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tamponi, Matteo, E-mail: mtamponi@aslsassari.it; B

    Purpose: This work describes a new form of the calibration curve for radiochromic dosimetry that depends on one fit parameter. Some results are reported to show that the new curve performs as well as those previously used and, more importantly, significantly reduces the dependence on the lot of films, the film orientation on the scanner, and the time after exposure. Methods: The form of the response curve makes use of the net optical densities ratio against the dose and has been studied by means of the Beer–Lambert law and a simple modeling of the film. The new calibration curve has been applied to EBT3 films exposed at 6 and 15 MV energy beams of linear accelerators and read-out in transmission mode by means of a flatbed color scanner. Its performance has been compared to that of two established forms of the calibration curve, which use the optical density and the net optical density against the dose. Four series of measurements with four lots of EBT3 films were used to evaluate the precision, accuracy, and dependence on the time after exposure, orientation on the scanner and lot of films. Results: The new calibration curve is roughly subject to the same dose uncertainty, about 2% (1 standard deviation), and has the same accuracy, about 1.5% (dose values between 50 and 450 cGy), as the other calibration curves when films of the same lot are used. Moreover, the new calibration curve, albeit obtained from only one lot of film, shows a good agreement with experimental data from all other lots of EBT3 films used, with an accuracy of about 2% and a relative dose precision of 2.4% (1 standard deviation). The agreement also holds for changes of the film orientation and of the time after exposure. Conclusions: The dose accuracy of this new form of the calibration curve is always equal to or better than those obtained from the two types of curves previously used. The use of the net optical densities ratio considerably reduces the dependence on the lot of films, the landscape/portrait orientation, and the time after exposure.

  4. Bayesian inference of Calibration curves: application to archaeomagnetism

    NASA Astrophysics Data System (ADS)

    Lanos, P.

    2003-04-01

    The range of errors that occur at different stages of the archaeomagnetic calibration process are modelled using a Bayesian hierarchical model. The archaeomagnetic data obtained from archaeological structures such as hearths, kilns or sets of bricks and tiles, exhibit considerable experimental errors and are typically more or less well dated by archaeological context, history or chronometric methods (14C, TL, dendrochronology, etc.). They can also be associated with stratigraphic observations which provide prior relative chronological information. The modelling we describe in this paper allows all these observations, on materials from a given period, to be linked together, and the use of penalized maximum likelihood for smoothing univariate, spherical or three-dimensional time series data allows representation of the secular variation of the geomagnetic field over time. The smooth curve we obtain (which takes the form of a penalized natural cubic spline) provides an adaptation to the effects of variability in the density of reference points over time. Since our model takes account of all the known errors in the archaeomagnetic calibration process, we are able to obtain a functional highest-posterior-density envelope on the new curve. With this new posterior estimate of the curve available to us, the Bayesian statistical framework then allows us to estimate the calendar dates of undated archaeological features (such as kilns) based on one, two or three geomagnetic parameters (inclination, declination and/or intensity). Date estimates are presented in much the same way as those that arise from radiocarbon dating. In order to illustrate the model and inference methods used, we will present results based on German archaeomagnetic data recently published by a German team.

  5. The Value of Hydrograph Partitioning Curves for Calibrating Hydrological Models in Glacierized Basins

    NASA Astrophysics Data System (ADS)

    He, Zhihua; Vorogushyn, Sergiy; Unger-Shayesteh, Katy; Gafurov, Abror; Kalashnikova, Olga; Omorova, Elvira; Merz, Bruno

    2018-03-01

    This study refines the method for calibrating a glacio-hydrological model based on Hydrograph Partitioning Curves (HPCs), and evaluates its value in comparison to multidata set optimization approaches which use glacier mass balance, satellite snow cover images, and discharge. The HPCs are extracted from the observed flow hydrograph using catchment precipitation and temperature gradients. They indicate the periods when the various runoff processes, such as glacier melt or snow melt, dominate the basin hydrograph. The annual cumulative curve of the difference between average daily temperature and melt threshold temperature over the basin, as well as the annual cumulative curve of average daily snowfall on the glacierized areas are used to identify the starting and end dates of snow and glacier ablation periods. Model parameters characterizing different runoff processes are calibrated on different HPCs in a stepwise and iterative way. Results show that the HPC-based method (1) delivers model-internal consistency comparably to the tri-data set calibration method; (2) improves the stability of calibrated parameter values across various calibration periods; and (3) estimates the contributions of runoff components similarly to the tri-data set calibration method. Our findings indicate the potential of the HPC-based approach as an alternative for hydrological model calibration in glacierized basins where other calibration data sets than discharge are often not available or very costly to obtain.

  6. Development of theoretical oxygen saturation calibration curve based on optical density ratio and optical simulation approach

    NASA Astrophysics Data System (ADS)

    Jumadi, Nur Anida; Beng, Gan Kok; Ali, Mohd Alauddin Mohd; Zahedi, Edmond; Morsin, Marlia

    2017-09-01

    The implementation of a surface-based Monte Carlo simulation technique for oxygen saturation (SaO2) calibration curve estimation is demonstrated in this paper. Generally, the calibration curve is estimated either from empirical studies using animals as the experimental subjects or is derived from mathematical equations. However, determining the calibration curve using animals is time-consuming and requires expertise to conduct the experiment. Alternatively, optical simulation techniques have been used widely in the biomedical optics field due to their capability to reproduce real tissue behavior. The mathematical relationship between optical density (OD) and optical density ratio (ODR) associated with SaO2 during systole and diastole is used as the basis for obtaining the theoretical calibration curve. The optical properties corresponding to systolic and diastolic behavior were applied to the tissue model to mimic the optical properties of the tissues. Based on the absorbed ray flux at the detectors, the OD and ODR were successfully calculated. The simulated optical density ratios at every 20 % interval of SaO2 are presented, with a maximum error of 2.17 % when compared with a previous numerical simulation technique (MC model). The findings reveal the potential of the proposed method for extended calibration curve studies using other wavelength pairs.
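
    As a minimal, hypothetical sketch of the ODR-based calibration idea described above (the function names, the simulated ODR values, and the linear fit form SaO2 = intercept + slope × ODR are illustrative assumptions, not the authors' model):

    ```python
    import numpy as np

    def optical_density(i_incident, i_detected):
        """OD = log10(I_incident / I_detected) for one wavelength."""
        return np.log10(i_incident / i_detected)

    def od_ratio(od_red_sys, od_red_dia, od_ir_sys, od_ir_dia):
        """Ratio of net (systole - diastole) optical densities, red over infrared."""
        return (od_red_sys - od_red_dia) / (od_ir_sys - od_ir_dia)

    # Hypothetical simulation output: ODR evaluated at every 20% interval of SaO2.
    sao2 = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])      # %
    odr_sim = np.array([2.71, 2.24, 1.78, 1.31, 0.85, 0.38])   # illustrative values only

    # Empirical calibration curve: SaO2 as a linear function of ODR (assumed form).
    slope, intercept = np.polyfit(odr_sim, sao2, 1)
    print(f"SaO2 ~ {intercept:.1f} + ({slope:.1f}) * ODR")
    ```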

  7. Simplified method for creating a density-absorbed dose calibration curve for the low dose range from Gafchromic EBT3 film.

    PubMed

    Gotanda, Tatsuhiro; Katsuda, Toshizo; Gotanda, Rumi; Kuwano, Tadao; Akagawa, Takuya; Tanki, Nobuyoshi; Tabuchi, Akihiko; Shimono, Tetsunori; Kawaji, Yasuyuki

    2016-01-01

    Radiochromic film dosimeters have a disadvantage in comparison with an ionization chamber in that the dosimetry process is time-consuming for creating a density-absorbed dose calibration curve. The purpose of this study was the development of a simplified method of creating a density-absorbed dose calibration curve from radiochromic film within a short time. This simplified method was performed using Gafchromic EBT3 film with a low energy dependence and step-shaped Al filter. The simplified method was compared with the standard method. The density-absorbed dose calibration curves created using the simplified and standard methods exhibited approximately similar straight lines, and the gradients of the density-absorbed dose calibration curves were -32.336 and -33.746, respectively. The simplified method can obtain calibration curves within a much shorter time compared to the standard method. It is considered that the simplified method for EBT3 film offers a more time-efficient means of determining the density-absorbed dose calibration curve within a low absorbed dose range such as the diagnostic range.
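
    A small sketch of how such a straight-line calibration could be inverted to read absorbed dose from measured film density; the gradient is the value quoted above for the simplified method, while the intercept and the function name are placeholders, since the abstract does not report them:

    ```python
    import numpy as np

    # Straight-line calibration from the simplified method: density = intercept + gradient * dose.
    GRADIENT = -32.336      # gradient quoted in the abstract (simplified method)
    INTERCEPT = 0.0         # hypothetical offset, for illustration only

    def dose_from_density(density):
        """Invert the linear density-absorbed dose calibration curve."""
        return (np.asarray(density) - INTERCEPT) / GRADIENT
    ```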

  8. Simplified method for creating a density-absorbed dose calibration curve for the low dose range from Gafchromic EBT3 film

    PubMed Central

    Gotanda, Tatsuhiro; Katsuda, Toshizo; Gotanda, Rumi; Kuwano, Tadao; Akagawa, Takuya; Tanki, Nobuyoshi; Tabuchi, Akihiko; Shimono, Tetsunori; Kawaji, Yasuyuki

    2016-01-01

    Radiochromic film dosimeters have a disadvantage in comparison with an ionization chamber in that the dosimetry process is time-consuming for creating a density-absorbed dose calibration curve. The purpose of this study was the development of a simplified method of creating a density-absorbed dose calibration curve from radiochromic film within a short time. This simplified method was performed using Gafchromic EBT3 film with a low energy dependence and step-shaped Al filter. The simplified method was compared with the standard method. The density-absorbed dose calibration curves created using the simplified and standard methods exhibited approximately similar straight lines, and the gradients of the density-absorbed dose calibration curves were −32.336 and −33.746, respectively. The simplified method can obtain calibration curves within a much shorter time compared to the standard method. It is considered that the simplified method for EBT3 film offers a more time-efficient means of determining the density-absorbed dose calibration curve within a low absorbed dose range such as the diagnostic range. PMID:28144120

  9. Refinement of moisture calibration curves for nuclear gage : interim report no. 1.

    DOT National Transportation Integrated Search

    1972-01-01

    This study was initiated to determine the correct moisture calibration curves for different nuclear gages. It was found that the Troxler Model 227 had a linear response between count ratio and moisture content. Also, the two calibration curves for th...

  10. Construction of dose response calibration curves for dicentrics and micronuclei for X radiation in a Serbian population.

    PubMed

    Pajic, J; Rakic, B; Jovicic, D; Milovanovic, A

    2014-10-01

    Biological dosimetry using chromosome damage biomarkers is a valuable dose assessment method in cases of radiation overexposure with or without physical dosimetry data. In order to estimate dose by biodosimetry, any biological dosimetry service has to have its own dose response calibration curve. This paper reveals the results obtained after irradiation of blood samples from fourteen healthy male and female volunteers in order to establish biodosimetry in Serbia and produce dose response calibration curves for dicentrics and micronuclei. Taking into account pooled data from all the donors, the resultant fitted curve for dicentrics is: Ydic = 0.0009 (±0.0003) + 0.0421 (±0.0042)×D + 0.0602 (±0.0022)×D²; and for micronuclei: Ymn = 0.0104 (±0.0015) + 0.0824 (±0.0050)×D + 0.0189 (±0.0017)×D². Following establishment of the dose response curve, a validation experiment was carried out with four blood samples. Applied and estimated doses were in good agreement. On this basis, the results reported here give us confidence to apply both calibration curves for future biological dosimetry requirements in Serbia. Copyright © 2014 Elsevier B.V. All rights reserved.
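
    The fitted dose-response curves above are linear-quadratic in dose. A brief sketch, using the dicentric coefficients quoted in the abstract, of how an absorbed dose could be estimated by inverting Y = c + aD + bD² (the inversion step is illustrative; the cited study does not describe its dose-estimation software):

    ```python
    import numpy as np

    # Linear-quadratic coefficients for dicentrics from the abstract: Y = c + a*D + b*D^2.
    c, a, b = 0.0009, 0.0421, 0.0602

    def dose_from_dicentric_yield(y):
        """Solve b*D^2 + a*D + (c - y) = 0 and return the positive root (dose in Gy)."""
        disc = a**2 - 4.0 * b * (c - y)
        return (-a + np.sqrt(disc)) / (2.0 * b)

    print(dose_from_dicentric_yield(0.25))  # dose giving a yield of 0.25 dicentrics per cell
    ```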

  11. The cytokinesis-blocked micronucleus assay: dose-response calibration curve, background frequency in the population and dose estimation.

    PubMed

    Rastkhah, E; Zakeri, F; Ghoranneviss, M; Rajabpour, M R; Farshidpour, M R; Mianji, F; Bayat, M

    2016-03-01

    An in vitro study of the dose responses of human peripheral blood lymphocytes was conducted with the aim of creating calibrated dose-response curves for biodosimetry measuring up to 4 Gy (0.25-4 Gy) of gamma radiation. The cytokinesis-blocked micronucleus (CBMN) assay was employed to obtain the frequencies of micronuclei (MN) per binucleated cell in blood samples from 16 healthy donors (eight males and eight females) in two age ranges of 20-34 and 35-50 years. The data were used to construct the calibration curves for men and women in two age groups, separately. An increase in micronuclei yield with the dose in a linear-quadratic way was observed in all groups. To verify the applicability of the constructed calibration curve, MN yields were measured in peripheral blood lymphocytes of two real overexposed subjects and three irradiated samples with unknown dose, and the results were compared with dose values obtained from measuring dicentric chromosomes. The comparison of the results obtained by the two techniques indicated a good agreement between dose estimates. The average baseline frequency of MN for the 130 healthy non-exposed donors (77 men and 55 women, 20-60 years old divided into four age groups) ranged from 6 to 21 micronuclei per 1000 binucleated cells. Baseline MN frequencies were higher for women and for the older age group. The results presented in this study point out that the CBMN assay is a reliable, easier and valuable alternative method for biological dosimetry.

  12. LAMOST Spectrograph Response Curves: Stability and Application to Flux Calibration

    NASA Astrophysics Data System (ADS)

    Du, Bing; Luo, A.-Li; Kong, Xiao; Zhang, Jian-Nan; Guo, Yan-Xin; Cook, Neil James; Hou, Wen; Yang, Hai-Feng; Li, Yin-Bi; Song, Yi-Han; Chen, Jian-Jun; Zuo, Fang; Wu, Ke-Fei; Wang, Meng-Xin; Wu, Yue; Wang, You-Fen; Zhao, Yong-Heng

    2016-12-01

    The task of flux calibration for Large sky Area Multi-Object Spectroscopic Telescope (LAMOST) spectra is difficult due to many factors, such as the lack of standard stars, flat-fielding for large field of view, and variation of reddening between different stars, especially at low Galactic latitudes. Poor selection, bad spectral quality, or extinction uncertainty of standard stars not only might induce errors to the calculated spectral response curve (SRC) but also might lead to failures in producing final 1D spectra. In this paper, we inspected spectra with Galactic latitude |b| ≥ 60° and reliable stellar parameters, determined through the LAMOST Stellar Parameter Pipeline (LASP), to study the stability of the spectrograph. To guarantee that the selected stars had been observed by each fiber, we selected 37,931 high-quality exposures of 29,000 stars from LAMOST DR2, and more than seven exposures for each fiber. We calculated the SRCs for each fiber for each exposure and calculated the statistics of SRCs for spectrographs with both the fiber variations and time variations. The result shows that the average response curve of each spectrograph (henceforth ASPSRC) is relatively stable, with statistical errors ≤10%. From the comparison between each ASPSRC and the SRCs for the same spectrograph obtained by the 2D pipeline, we find that the ASPSRCs are good enough to use for the calibration. The ASPSRCs have been applied to spectra that were abandoned by the LAMOST 2D pipeline due to the lack of standard stars, increasing the number of LAMOST spectra by 52,181 in DR2. Comparing those same targets with the Sloan Digital Sky Survey (SDSS), the relative flux differences between SDSS spectra and LAMOST spectra with the ASPSRC method are less than 10%, which underlines that the ASPSRC method is feasible for LAMOST flux calibration.

  13. Effects of experimental design on calibration curve precision in routine analysis

    PubMed Central

    Pimentel, Maria Fernanda; Neto, Benício de Barros; Saldanha, Teresa Cristina B.

    1998-01-01

    A computational program which compares the efficiencies of different experimental designs with those of maximum precision (D-optimized designs) is described. The program produces confidence interval plots for a calibration curve and provides information about the number of standard solutions, concentration levels and suitable concentration ranges to achieve an optimum calibration. Some examples of the application of this novel computational program are given, using both simulated and real data. PMID:18924816

  14. Analysis of variation in calibration curves for Kodak XV radiographic film using model-based parameters.

    PubMed

    Hsu, Shu-Hui; Kulasekere, Ravi; Roberson, Peter L

    2010-08-05

    Film calibration is time-consuming work when dose accuracy is essential while working in a range of photon scatter environments. This study uses the single-target single-hit model of film response to fit the calibration curves as a function of calibration method, processor condition, field size and depth. Kodak XV film was irradiated perpendicular to the beam axis in a solid water phantom. Standard calibration films (one dose point per film) were irradiated at 90 cm source-to-surface distance (SSD) for various doses (16-128 cGy), depths (0.2, 0.5, 1.5, 5, 10 cm) and field sizes (5 × 5, 10 × 10 and 20 × 20 cm²). The 8-field calibration method (eight dose points per film) was used as a reference for each experiment, taken at 95 cm SSD and 5 cm depth. The delivered doses were measured using an Attix parallel plate chamber for improved accuracy of dose estimation in the buildup region. Three fitting methods with one to three dose points per calibration curve were investigated for the field sizes of 5 × 5, 10 × 10 and 20 × 20 cm². The inter-day variation of the model parameters (background, saturation and slope) was 1.8%, 5.7%, and 7.7% (1 σ), respectively, using the 8-field method. The saturation parameter ratio of standard to 8-field curves was 1.083 ± 0.005. The slope parameter ratio of standard to 8-field curves ranged from 0.99 to 1.05, depending on field size and depth. The slope parameter ratio decreases with increasing depth below 0.5 cm for the three field sizes. It increases with increasing depths above 0.5 cm. A calibration curve with one to three dose points fitted with the model is possible with 2% accuracy in film dosimetry for various irradiation conditions. The proposed fitting methods may reduce workload while providing energy dependence correction in radiographic film dosimetry. This study is limited to radiographic XV film with a Lumisys scanner.
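
    A minimal sketch of fitting a single-target single-hit response to calibration points, assuming the form OD(D) = background + saturation·(1 − exp(−slope·D)); this matches the three parameters named in the abstract but may differ in detail from the authors' exact parameterization, and the data points are invented for illustration:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def single_hit(dose, background, saturation, slope):
        """Single-target single-hit response: optical density saturates exponentially with dose."""
        return background + saturation * (1.0 - np.exp(-slope * dose))

    # Hypothetical calibration points (dose in cGy, net optical density), for illustration only.
    dose = np.array([16.0, 32.0, 48.0, 64.0, 96.0, 128.0])
    od = np.array([0.21, 0.39, 0.55, 0.68, 0.90, 1.07])

    popt, pcov = curve_fit(single_hit, dose, od, p0=[0.05, 2.0, 0.01])
    background, saturation, slope = popt
    perr = np.sqrt(np.diag(pcov))  # 1-sigma uncertainties of the three model parameters
    ```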

  15. Combined influence of CT random noise and HU-RSP calibration curve nonlinearities on proton range systematic errors

    NASA Astrophysics Data System (ADS)

    Brousmiche, S.; Souris, K.; Orban de Xivry, J.; Lee, J. A.; Macq, B.; Seco, J.

    2017-11-01

    Proton range random and systematic uncertainties are the major factors undermining the advantages of proton therapy, namely, a sharp dose falloff and a better dose conformality for lower doses in normal tissues. The influence of CT artifacts such as beam hardening or scatter can easily be understood and estimated due to their large-scale effects on the CT image, like cupping and streaks. In comparison, the effects of weakly-correlated stochastic noise are more insidious and less attention is paid to them, partly due to the common belief that they only contribute to proton range uncertainties and not to systematic errors thanks to some averaging effects. A new source of systematic errors on the range and relative stopping powers (RSP) has been highlighted and proved not to be negligible compared to the 3.5% uncertainty reference value used for safety margin design. Hence, we demonstrate that the angular points in the HU-to-RSP calibration curve are an intrinsic source of proton range systematic error for typical levels of zero-mean stochastic CT noise. Systematic errors on RSP of up to 1% have been computed for these levels. We also show that the range uncertainty does not generally vary linearly with the noise standard deviation. We define a noise-dependent effective calibration curve that better describes, for a given material, the RSP value that is actually used. The statistics of the RSP and the range continuous slowing down approximation (CSDA) have been analytically derived for the general case of a calibration curve obtained by the stoichiometric calibration procedure. These models have been validated against actual CSDA simulations for homogeneous and heterogeneous synthetic objects as well as on actual patient CTs for prostate and head-and-neck treatment planning situations.

  16. Calibrating the stress-time curve of a combined finite-discrete element method to a Split Hopkinson Pressure Bar experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osthus, Dave; Godinez, Humberto C.; Rougier, Esteban

    We present a generic method for automatically calibrating a computer code to an experiment, with uncertainty, for a given “training” set of computer code runs. The calibration technique is general and probabilistic, meaning the calibration uncertainty is represented in the form of a probability distribution. We demonstrate the calibration method by calibrating a combined Finite-Discrete Element Method (FDEM) to a Split Hopkinson Pressure Bar (SHPB) experiment with a granite sample. The probabilistic calibration method combines runs of a FDEM computer simulation for a range of “training” settings and experimental uncertainty to develop a statistical emulator. The process allows for calibration of input parameters and produces output quantities with uncertainty estimates for settings where simulation results are desired. Input calibration and FDEM fitted results are presented. We find that the maximum shear strength σ_t^max and to a lesser extent maximum tensile strength σ_n^max govern the behavior of the stress-time curve before and around the peak, while the specific energy in Mode II (shear) E_t largely governs the post-peak behavior of the stress-time curve. Good agreement is found between the calibrated FDEM and the SHPB experiment. Interestingly, we find the SHPB experiment to be rather uninformative for calibrating the softening-curve shape parameters (a, b, and c). This work stands as a successful demonstration of how a general probabilistic calibration framework can automatically calibrate FDEM parameters to an experiment.

  17. Calibrating the stress-time curve of a combined finite-discrete element method to a Split Hopkinson Pressure Bar experiment

    DOE PAGES

    Osthus, Dave; Godinez, Humberto C.; Rougier, Esteban; ...

    2018-05-01

    We present a generic method for automatically calibrating a computer code to an experiment, with uncertainty, for a given “training” set of computer code runs. The calibration technique is general and probabilistic, meaning the calibration uncertainty is represented in the form of a probability distribution. We demonstrate the calibration method by calibrating a combined Finite-Discrete Element Method (FDEM) to a Split Hopkinson Pressure Bar (SHPB) experiment with a granite sample. The probabilistic calibration method combines runs of a FDEM computer simulation for a range of “training” settings and experimental uncertainty to develop a statistical emulator. The process allows for calibration of input parameters and produces output quantities with uncertainty estimates for settings where simulation results are desired. Input calibration and FDEM fitted results are presented. We find that the maximum shear strength σ_t^max and to a lesser extent maximum tensile strength σ_n^max govern the behavior of the stress-time curve before and around the peak, while the specific energy in Mode II (shear) E_t largely governs the post-peak behavior of the stress-time curve. Good agreement is found between the calibrated FDEM and the SHPB experiment. Interestingly, we find the SHPB experiment to be rather uninformative for calibrating the softening-curve shape parameters (a, b, and c). This work stands as a successful demonstration of how a general probabilistic calibration framework can automatically calibrate FDEM parameters to an experiment.

  18. Calibration of hydrological models using flow-duration curves

    NASA Astrophysics Data System (ADS)

    Westerberg, I. K.; Guerrero, J.-L.; Younger, P. M.; Beven, K. J.; Seibert, J.; Halldin, S.; Freer, J. E.; Xu, C.-Y.

    2011-07-01

    The degree of belief we have in predictions from hydrologic models will normally depend on how well they can reproduce observations. Calibrations with traditional performance measures, such as the Nash-Sutcliffe model efficiency, are challenged by problems including: (1) uncertain discharge data, (2) variable sensitivity of different performance measures to different flow magnitudes, (3) influence of unknown input/output errors and (4) inability to evaluate model performance when observation time periods for discharge and model input data do not overlap. This paper explores a calibration method using flow-duration curves (FDCs) to address these problems. The method focuses on reproducing the observed discharge frequency distribution rather than the exact hydrograph. It consists of applying limits of acceptability for selected evaluation points (EPs) on the observed uncertain FDC in the extended GLUE approach. Two ways of selecting the EPs were tested - based on equal intervals of discharge and of volume of water. The method was tested and compared to a calibration using the traditional model efficiency for the daily four-parameter WASMOD model in the Paso La Ceiba catchment in Honduras and for Dynamic TOPMODEL evaluated at an hourly time scale for the Brue catchment in Great Britain. The volume method of selecting EPs gave the best results in both catchments with better calibrated slow flow, recession and evaporation than the other criteria. Observed and simulated time series of uncertain discharges agreed better for this method both in calibration and prediction in both catchments. An advantage with the method is that the rejection criterion is based on an estimation of the uncertainty in discharge data and that the EPs of the FDC can be chosen to reflect the aims of the modelling application, e.g. using more/less EPs at high/low flows. While the method appears less sensitive to epistemic input/output errors than previous use of limits of acceptability applied
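
    A compact sketch of the FDC evaluation-point idea described above, assuming synthetic discharge data; the number of evaluation points and the helper names are illustrative, and the limits of acceptability would in practice come from the discharge uncertainty analysis:

    ```python
    import numpy as np

    def flow_duration_curve(q):
        """Return exceedance probabilities and the correspondingly sorted discharges."""
        q_sorted = np.sort(q)[::-1]
        prob = (np.arange(1, len(q) + 1) - 0.5) / len(q)
        return prob, q_sorted

    def volume_based_eps(q, n_eps=10):
        """Pick evaluation-point discharges so each EP bounds an equal share of total flow volume."""
        q_sorted = np.sort(q)[::-1]
        cum_vol = np.cumsum(q_sorted) / np.sum(q_sorted)
        targets = np.linspace(0.0, 1.0, n_eps + 2)[1:-1]   # interior points only
        return q_sorted[np.searchsorted(cum_vol, targets)]

    def within_limits(q_sim_at_eps, lower, upper):
        """GLUE-style limits of acceptability: every simulated FDC value must fall inside its bounds."""
        return np.all((q_sim_at_eps >= lower) & (q_sim_at_eps <= upper))
    ```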

  19. Calibration of hydrological models using flow-duration curves

    NASA Astrophysics Data System (ADS)

    Westerberg, I. K.; Guerrero, J.-L.; Younger, P. M.; Beven, K. J.; Seibert, J.; Halldin, S.; Freer, J. E.; Xu, C.-Y.

    2010-12-01

    The degree of belief we have in predictions from hydrologic models depends on how well they can reproduce observations. Calibrations with traditional performance measures such as the Nash-Sutcliffe model efficiency are challenged by problems including: (1) uncertain discharge data, (2) variable importance of the performance with flow magnitudes, (3) influence of unknown input/output errors and (4) inability to evaluate model performance when observation time periods for discharge and model input data do not overlap. A new calibration method using flow-duration curves (FDCs) was developed which addresses these problems. The method focuses on reproducing the observed discharge frequency distribution rather than the exact hydrograph. It consists of applying limits of acceptability for selected evaluation points (EPs) of the observed uncertain FDC in the extended GLUE approach. Two ways of selecting the EPs were tested - based on equal intervals of discharge and of volume of water. The method was tested and compared to a calibration using the traditional model efficiency for the daily four-parameter WASMOD model in the Paso La Ceiba catchment in Honduras and for Dynamic TOPMODEL evaluated at an hourly time scale for the Brue catchment in Great Britain. The volume method of selecting EPs gave the best results in both catchments with better calibrated slow flow, recession and evaporation than the other criteria. Observed and simulated time series of uncertain discharges agreed better for this method both in calibration and prediction in both catchments without resulting in overpredicted simulated uncertainty. An advantage with the method is that the rejection criterion is based on an estimation of the uncertainty in discharge data and that the EPs of the FDC can be chosen to reflect the aims of the modelling application e.g. using more/less EPs at high/low flows. While the new method is less sensitive to epistemic input/output errors than the normal use of limits of

  20. New approach to calibrating bed load samplers

    USGS Publications Warehouse

    Hubbell, D.W.; Stevens, H.H.; Skinner, J.V.; Beverage, J.P.

    1985-01-01

    Cyclic variations in bed load discharge at a point, which are an inherent part of the process of bed load movement, complicate calibration of bed load samplers and preclude the use of average rates to define sampling efficiencies. Calibration curves, rather than efficiencies, are derived by two independent methods using data collected with prototype versions of the Helley‐Smith sampler in a large calibration facility capable of continuously measuring transport rates across a 9 ft (2.7 m) width. Results from both methods agree. Composite calibration curves, based on matching probability distribution functions of samples and measured rates from different hydraulic conditions (runs), are obtained for six different versions of the sampler. Sampled rates corrected by the calibration curves agree with measured rates for individual runs.

  1. Use of a non-linear method for including the mass uncertainty of gravimetric standards and system measurement errors in the fitting of calibration curves for XRFA freeze-dried UNO₃ standards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pickles, W.L.; McClure, J.W.; Howell, R.H.

    1978-05-01

    A sophisticated nonlinear multiparameter fitting program was used to produce a best fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze dried, 0.2% accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the "Chi-Squared Matrix" or from error relaxation techniques. It was shown that nondispersive XRFA of 0.1 to 1 mg freeze-dried UNO₃ can have an accuracy of 0.2% in 1000 s.
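
    A rough sketch of the fitting idea described above, in which the standard masses are treated as additional fit parameters and the chi-squared combines the system errors and the 0.2% mass errors; the calibration functional form, data values, and optimizer choice are assumptions for illustration only:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical XRF data: response y with system error sigma_y for gravimetric standards of
    # nominal mass m0 (mg), each weighed to 0.2% (sigma_m). All values are illustrative only.
    m0 = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])
    sigma_m = 0.002 * m0
    y = np.array([1.05e3, 2.08e3, 4.20e3, 6.10e3, 8.30e3, 1.02e4])
    sigma_y = 0.01 * y

    def response(m, a, b):
        """Assumed calibration form (quadratic in mass); the report does not state the exact form."""
        return a * m + b * m**2

    def chi2(params):
        a, b = params[:2]
        m = params[2:]                   # standard masses treated as fit parameters
        r_y = (y - response(m, a, b)) / sigma_y
        r_m = (m - m0) / sigma_m         # penalize departures from the weighed masses
        return np.sum(r_y**2) + np.sum(r_m**2)

    start = np.concatenate(([1.0e4, 0.0], m0))
    fit = minimize(chi2, start, method="Nelder-Mead", options={"maxiter": 20000, "maxfev": 20000})
    a_fit, b_fit = fit.x[:2]
    ```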

  2. INFLUENCE OF IRON CHELATION ON R1 AND R2 CALIBRATION CURVES IN GERBIL LIVER AND HEART

    PubMed Central

    Wood, John C.; Aguilar, Michelle; Otto-Duessel, Maya; Nick, Hanspeter; Nelson, Marvin D.; Moats, Rex

    2008-01-01

    MRI is gaining increasing importance for the noninvasive quantification of organ iron burden. Since transverse relaxation rates depend on iron distribution as well as iron concentration, physiologic and pharmacologic processes that alter iron distribution could change MRI calibration curves. This paper compares the effect of three iron chelators, deferoxamine, deferiprone, and deferasirox, on R1 and R2 calibration curves according to two iron loading and chelation strategies. 33 Mongolian gerbils underwent iron loading (iron dextran 500 mg/kg/wk) for 3 weeks followed by 4 weeks of chelation. An additional 56 animals received less aggressive loading (200 mg/kg/week) for 10 weeks, followed by 12 weeks of chelation. R1 and R2 calibration curves were compared to results from 23 iron-loaded animals that had not received chelation. Acute iron loading and chelation biased R1 and R2 from the unchelated reference calibration curves but chelator-specific changes were not observed, suggesting physiologic rather than pharmacologic differences in iron distribution. Long-term deferiprone treatment increased liver R1 by 50% (p<0.01), while long-term deferasirox lowered liver R2 by 30.9% (p<0.0001). The relationship between R1 and R2 and organ iron concentration may depend upon the acuity of iron loading and unloading as well as the iron chelator administered. PMID:18581418

  3. Selecting the correct weighting factors for linear and quadratic calibration curves with least-squares regression algorithm in bioanalytical LC-MS/MS assays and impacts of using incorrect weighting factors on curve stability, data quality, and assay performance.

    PubMed

    Gu, Huidong; Liu, Guowen; Wang, Jian; Aubry, Anne-Françoise; Arnold, Mark E

    2014-09-16

    A simple procedure for selecting the correct weighting factors for linear and quadratic calibration curves with least-squares regression algorithm in bioanalytical LC-MS/MS assays is reported. The correct weighting factor is determined by the relationship between the standard deviation of instrument responses (σ) and the concentrations (x). The weighting factor of 1, 1/x, or 1/x² should be selected if, over the entire concentration range, σ is a constant, σ² is proportional to x, or σ is proportional to x, respectively. For the first time, we demonstrated with detailed scientific reasoning, solid historical data, and convincing justification that 1/x² should always be used as the weighting factor for all bioanalytical LC-MS/MS assays. The impacts of using incorrect weighting factors on curve stability, data quality, and assay performance were thoroughly investigated. It was found that the most stable curve could be obtained when the correct weighting factor was used, whereas other curves using incorrect weighting factors were unstable. It was also found that there was a very insignificant impact on the concentrations reported with calibration curves using incorrect weighting factors as the concentrations were always reported with the passing curves which actually overlapped with or were very close to the curves using the correct weighting factor. However, the use of incorrect weighting factors did impact the assay performance significantly. Finally, the difference between the weighting factors of 1/x² and 1/y² was discussed. All of the findings can be generalized and applied into other quantitative analysis techniques using calibration curves with weighted least-squares regression algorithm.
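
    A short sketch comparing weighting factors on a hypothetical calibration data set; the standards and responses are invented, and the back-calculated bias is used here simply to show how the choice of weighting factor affects low-concentration accuracy:

    ```python
    import numpy as np

    # Hypothetical calibration standards: concentration x and instrument response y.
    x = np.array([1.0, 2.0, 5.0, 10.0, 50.0, 100.0, 500.0, 1000.0])
    y = np.array([0.011, 0.019, 0.052, 0.098, 0.51, 1.02, 4.9, 10.2])

    def weighted_linear_fit(x, y, weights):
        """Weighted least-squares line y = b0 + b1*x; np.polyfit takes sqrt-weights on residuals."""
        b1, b0 = np.polyfit(x, y, 1, w=np.sqrt(weights))
        return b0, b1

    for label, w in [("1", np.ones_like(x)), ("1/x", 1.0 / x), ("1/x^2", 1.0 / x**2)]:
        b0, b1 = weighted_linear_fit(x, y, w)
        bias = 100.0 * ((y - b0) / b1 - x) / x   # % error of back-calculated concentrations
        print(f"w = {label:5s}  slope = {b1:.5f}  intercept = {b0:+.5f}  max |bias| = {np.max(np.abs(bias)):.1f}%")
    ```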

  4. Calibration curves for commercial copper and aluminum alloys using handheld laser-induced breakdown spectroscopy

    DOE PAGES

    Bennett, B. N.; Martin, M. Z.; Leonard, D. N.; ...

    2018-02-13

    Handheld laser-induced breakdown spectroscopy (HH LIBS) was used to study the elemental composition of four copper alloys and four aluminum alloys to produce calibration curves. The HH LIBS instrument used is a SciAps Z-500, commercially available, that contains a class-1 solid-state laser with an output wavelength of 1532 nm, a laser energy of 5 mJ/pulse, and a pulse duration of 5 ns. Test samples were solid specimens comprising copper and aluminum alloys and data were collected from the samples’ surface at three different locations, employing a 12-point-grid pattern for each data set. All three data sets of the spectra were averaged, and the intensity, corrected by subtraction of background, was used to produce the elemental calibration curves. Calibration curves are presented for the matrix elements, copper and aluminum, as well as several minor elements. The surface damage produced by the laser was examined by microscopy. The alloys were tested in air and in a glovebox to evaluate the instrument’s ability to identify the constituents within materials under different environmental conditions. The main objective of using this HH LIBS technology is to determine its capability to fingerprint the presence of certain elements related to subpercent level within materials in real time and in-situ, as a starting point for undertaking future complex material characterization work.

  5. Calibration curves for commercial copper and aluminum alloys using handheld laser-induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Bennett, B. N.; Martin, M. Z.; Leonard, D. N.; Garlea, E.

    2018-03-01

    Handheld laser-induced breakdown spectroscopy (HH LIBS) was used to study the elemental composition of four copper alloys and four aluminum alloys to produce calibration curves. The HH LIBS instrument used is a SciAps Z-500, commercially available, that contains a class-1 solid-state laser with an output wavelength of 1532 nm, laser energy of 5 mJ/pulse, and a pulse duration of 5 ns. Test samples were solid specimens comprising copper and aluminum alloys and data were collected from the samples' surface at three different locations, employing a 12-point-grid pattern for each data set. All three data sets of the spectra were averaged, and the intensity, corrected by subtraction of background, was used to produce the elemental calibration curves. Calibration curves are presented for the matrix elements, copper and aluminum, as well as several minor elements. The surface damage produced by the laser was examined by microscopy. The alloys were tested in air and in a glovebox to evaluate the instrument's ability to identify the constituents within materials under different environmental conditions. The main objective of using this HH LIBS technology is to determine its capability to fingerprint the presence of certain elements related to subpercent level within materials in real time and in situ, as a starting point for undertaking future complex material characterization work.
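
    A minimal sketch of turning averaged, background-subtracted line intensities into a linear calibration curve, in the spirit of the procedure described above; the wavelength windows, concentrations, and intensities are invented for illustration:

    ```python
    import numpy as np

    def net_line_intensity(wavelength, spectra, line_nm, bg_windows):
        """Average the replicate spectra, then subtract a local background from the peak intensity."""
        mean_spec = np.mean(spectra, axis=0)                    # average the three 12-point data sets
        peak = mean_spec[np.argmin(np.abs(wavelength - line_nm))]
        background = np.mean([mean_spec[(wavelength > lo) & (wavelength < hi)].mean()
                              for lo, hi in bg_windows])
        return peak - background

    # Hypothetical calibration: certified concentration (wt%) of one minor element vs net intensity.
    concentration = np.array([0.05, 0.12, 0.35, 0.80])
    net_intensity = np.array([120.0, 300.0, 870.0, 1980.0])
    slope, intercept = np.polyfit(concentration, net_intensity, 1)   # linear calibration curve
    ```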

  6. Calibration curves for commercial copper and aluminum alloys using handheld laser-induced breakdown spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, B. N.; Martin, M. Z.; Leonard, D. N.

    Handheld laser-induced breakdown spectroscopy (HH LIBS) was used to study the elemental composition of four copper alloys and four aluminum alloys to produce calibration curves. The HH LIBS instrument used is a SciAps Z-500, commercially available, that contains a class-1 solid-state laser with an output wavelength of 1532 nm, a laser energy of 5 mJ/pulse, and a pulse duration of 5 ns. Test samples were solid specimens comprising copper and aluminum alloys and data were collected from the samples’ surface at three different locations, employing a 12-point-grid pattern for each data set. All three data sets of the spectra were averaged, and the intensity, corrected by subtraction of background, was used to produce the elemental calibration curves. Calibration curves are presented for the matrix elements, copper and aluminum, as well as several minor elements. The surface damage produced by the laser was examined by microscopy. The alloys were tested in air and in a glovebox to evaluate the instrument’s ability to identify the constituents within materials under different environmental conditions. The main objective of using this HH LIBS technology is to determine its capability to fingerprint the presence of certain elements related to subpercent level within materials in real time and in-situ, as a starting point for undertaking future complex material characterization work.

  7. 7 CFR 42.142 - Curve for obtaining Operating Characteristic (OC) curve information for skip lot sampling and...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... CONDITION OF FOOD CONTAINERS Miscellaneous § 42.142 Curve for obtaining Operating Characteristic (OC) curve...

  8. Nonlinear method for including the mass uncertainty of standards and the system measurement errors in the fitting of calibration curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pickles, W.L.; McClure, J.W.; Howell, R.H.

    1978-01-01

    A sophisticated non-linear multiparameter fitting program has been used to produce a best fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze dried, 0.2% accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the Chi-Squared Matrix or from error relaxation techniques. It has been shown that non-dispersive x-ray fluorescence analysis of 0.1 to 1 mg freeze-dried UNO₃ can have an accuracy of 0.2% in 1000 s.

  9. GIADA: extended calibration activity: the Electrostatic Micromanipulator

    NASA Astrophysics Data System (ADS)

    Sordini, R.; Accolla, M.; Della Corte, V.; Rotundi, A.

    GIADA (Grain Impact Analyser and Dust Accumulator), one of the scientific instruments onboard the ESA Rosetta space mission, is devoted to studying the dynamical properties of dust particles ejected by the short-period comet 67P/Churyumov-Gerasimenko. In preparation for the scientific phase of the mission, we are performing laboratory calibration activities on the GIADA Proto Flight Model (PFM), housed in a clean room in our laboratory. The aim of the calibration activity is to characterize the response curves of the GIADA measurement sub-systems. These curves are then correlated with the calibration curves obtained for the GIADA payload onboard the Rosetta S/C. The calibration activity involves two of the three sub-systems constituting GIADA: the Grain Detection System (GDS) and the Impact Sensor (IS). To get reliable calibration curves, a statistically relevant number of grains has to be dropped or shot into the GIADA instrument. Particle composition, structure, size, optical properties and porosity have been selected in order to obtain realistic cometary dust analogues. For each selected type of grain, we estimated that at least one hundred shots are needed to obtain a calibration curve. In order to manipulate such a large number of particles, we have designed and developed an innovative electrostatic system able to capture, manipulate and shoot particles with sizes in the range 20-500 μm. The Electrostatic Micromanipulator (EM) is installed on a manual handling system composed of X-Y-Z micrometric slides with a 360° rotational stage about Z, mounted on an optical bench. In the present work, we present tests of the EM using ten different materials with dimensions in the range 50-500 μm: the experimental results comply with the requirements.

  10. Biological dosimetry of ionizing radiation: Evaluation of the dose with cytogenetic methodologies by the construction of calibration curves

    NASA Astrophysics Data System (ADS)

    Zafiropoulos, Demetre; Facco, E.; Sarchiapone, Lucia

    2016-09-01

    In case of a radiation accident, it is well known that in the absence of physical dosimetry biological dosimetry based on cytogenetic methods is a unique tool to estimate individual absorbed dose. Moreover, even when physical dosimetry indicates an overexposure, scoring chromosome aberrations (dicentrics and rings) in human peripheral blood lymphocytes (PBLs) at metaphase is presently the most widely used method to confirm dose assessment. The analysis of dicentrics and rings in PBLs after Giemsa staining of metaphase cells is considered the most valid assay for radiation injury. This work shows that applying the fluorescence in situ hybridization (FISH) technique, using telomeric/centromeric peptide nucleic acid (PNA) probes in metaphase chromosomes for radiation dosimetry, could become a fast scoring, reliable and precise method for biological dosimetry after accidental radiation exposures. In both in vitro methods described above, lymphocyte stimulation is needed, and this limits the application in radiation emergency medicine where speed is considered to be a high priority. Using premature chromosome condensation (PCC), irradiated human PBLs (non-stimulated) were fused with mitotic CHO cells, and the yield of excess PCC fragments in Giemsa stained cells was scored. To score dicentrics and rings under PCC conditions, the necessary centromere and telomere detection of the chromosomes was obtained using FISH and specific PNA probes. Of course, a prerequisite for dose assessment in all cases is a dose-effect calibration curve. This work illustrates the various methods used; dose response calibration curves, with 95% confidence limits used to estimate dose uncertainties, have been constructed for conventional metaphase analysis and FISH. We also compare the dose-response curve constructed after scoring of dicentrics and rings using PCC combined with FISH and PNA probes. Also reported are dose response curves showing scored dicentrics and rings per cell, combining

  11. Practical calibration curve of small-type optically stimulated luminescence (OSL) dosimeter for evaluation of entrance skin dose in the diagnostic X-ray region.

    PubMed

    Takegami, Kazuki; Hayashi, Hiroaki; Okino, Hiroki; Kimoto, Natsumi; Maehata, Itsumi; Kanazawa, Yuki; Okazaki, Tohru; Kobayashi, Ikuo

    2015-07-01

    For X-ray diagnosis, the proper management of the entrance skin dose (ESD) is important. Recently, a small-type optically stimulated luminescence dosimeter (nanoDot OSL dosimeter) was made commercially available by Landauer, and it is hoped that it will be used for ESD measurements in clinical settings. Our objectives in the present study were to propose a method for calibrating the ESD measured with the nanoDot OSL dosimeter and to evaluate its accuracy. The reference ESD is assumed to be based on an air kerma with consideration of a well-known back scatter factor. We examined the characteristics of the nanoDot OSL dosimeter using two experimental conditions: a free air irradiation to derive the air kerma, and a phantom experiment to determine the ESD. For evaluation of the ability to measure the ESD, a calibration curve for the nanoDot OSL dosimeter was determined in which the air kerma and/or the ESD measured with an ionization chamber were used as references. As a result, we found that the calibration curve for the air kerma was determined with an accuracy of 5 %. Furthermore, the calibration curve was applied to the ESD estimation. The accuracy of the ESD obtained was estimated to be 15 %. The origin of these uncertainties was examined based on published papers and Monte-Carlo simulation. Most of the uncertainties were caused by the systematic uncertainty of the reading system and the differences in efficiency corresponding to different X-ray energies.

  12. Calibration of thermocouple psychrometers and moisture measurements in porous materials

    NASA Astrophysics Data System (ADS)

    Guz, Łukasz; Sobczuk, Henryk; Połednik, Bernard; Guz, Ewa

    2016-07-01

    The paper presents an in situ method for calibrating Peltier psychrometric sensors, which allows the water potential to be determined. Water potential can be easily recalculated into the moisture content of the porous material. In order to obtain correct water potential results, each probe should be calibrated. NaCl solutions with molar concentrations of 0.4 M, 0.7 M, 1.0 M and 1.4 M were used for calibration, which gave osmotic potentials in the range -1791 kPa to -6487 kPa. Traditionally, the value of the voltage generated on the thermocouples during wet-bulb temperature depression is calculated in order to determine the calibration function for in situ psychrometric sensors. In the new calibration method, the area under the psychrometric curve, together with the Peltier cooling current and its duration, is taken into consideration. During calibration, different cooling currents (3, 5 and 8 mA) were applied for each salt solution, as well as different cooling durations for each current (from 2 to 100 s in 2 s steps). Afterwards, the shape of each psychrometric curve was thoroughly examined and the area under the curve was computed. The experimental results indicate that there is a robust correlation between the area under the psychrometric curve and the water potential. Calibration formulas were derived on the basis of this relationship.
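
    A small sketch of the area-under-the-curve calibration described above: integrate each psychrometric output curve, then regress the known osmotic potentials of the NaCl solutions on the areas. The two intermediate potentials and all areas are invented; only the -1791 and -6487 kPa end points come from the abstract:

    ```python
    import numpy as np

    def curve_area(time_s, output_uv):
        """Area under one psychrometric output curve, used as the calibration signal."""
        return np.trapz(output_uv, time_s)

    # Known osmotic potentials of the NaCl solutions (kPa) vs measured curve areas. Only the
    # -1791 and -6487 kPa end points come from the abstract; the other values are invented.
    potential = np.array([-1791.0, -2900.0, -4500.0, -6487.0])
    area = np.array([310.0, 505.0, 780.0, 1120.0])

    slope, intercept = np.polyfit(area, potential, 1)

    def water_potential(a):
        """Calibration formula for this probe: potential as a linear function of curve area."""
        return intercept + slope * a
    ```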

  13. A BAYESIAN METHOD FOR CALCULATING REAL-TIME QUANTITATIVE PCR CALIBRATION CURVES USING ABSOLUTE PLASMID DNA STANDARDS

    EPA Science Inventory

    In real-time quantitative PCR studies using absolute plasmid DNA standards, a calibration curve is developed to estimate an unknown DNA concentration. However, potential differences in the amplification performance of plasmid DNA compared to genomic DNA standards are often ignore...

  14. Conversion of calibration curves for accurate estimation of molecular weight averages and distributions of polyether polyols by conventional size exclusion chromatography.

    PubMed

    Xu, Xiuqing; Yang, Xiuhan; Martin, Steven J; Mes, Edwin; Chen, Junlan; Meunier, David M

    2018-08-17

    Accurate measurement of molecular weight averages (Mn, Mw, Mz) and molecular weight distributions (MWD) of polyether polyols by conventional SEC (size exclusion chromatography) is not as straightforward as it would appear. Conventional calibration with polystyrene (PS) standards can only provide PS apparent molecular weights which do not provide accurate estimates of polyol molecular weights. Using polyethylene oxide/polyethylene glycol (PEO/PEG) for molecular weight calibration could improve the accuracy, but the retention behavior of PEO/PEG is not stable in THF-based (tetrahydrofuran) SEC systems. In this work, two approaches for calibration curve conversion with narrow PS and polyol molecular weight standards were developed. Equations to convert PS-apparent molecular weight to polyol-apparent molecular weight were developed using both a rigorous mathematical analysis and graphical plot regression method. The conversion equations obtained by the two approaches were in good agreement. Factors influencing the conversion equation were investigated. It was concluded that the separation conditions such as column batch and operating temperature did not have significant impact on the conversion coefficients and a universal conversion equation could be obtained. With this conversion equation, more accurate estimates of molecular weight averages and MWDs for polyether polyols can be achieved from conventional PS-THF SEC calibration. Moreover, no additional experimentation is required to convert historical PS equivalent data to reasonably accurate molecular weight results. Copyright © 2018. Published by Elsevier B.V.
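
    A brief sketch of the calibration-conversion idea: regress log polyol molecular weight on log PS-apparent molecular weight for paired calibrants, then apply the resulting equation slice by slice to a PS-calibrated distribution. The paired values below are invented, and the log-linear form is an assumption for illustration, not necessarily the authors' equation:

    ```python
    import numpy as np

    # Hypothetical paired calibrants eluting at the same retention volume: PS-apparent molecular
    # weight vs. polyol molecular weight (g/mol). Values and the log-linear form are illustrative.
    m_ps = np.array([5.8e2, 1.3e3, 3.1e3, 7.5e3, 1.9e4])
    m_polyol = np.array([4.0e2, 8.5e2, 2.0e3, 4.6e3, 1.1e4])

    # Fit log10(M_polyol) = a + b * log10(M_PS).
    b, a = np.polyfit(np.log10(m_ps), np.log10(m_polyol), 1)

    def ps_to_polyol(m_ps_apparent):
        """Convert a PS-apparent molecular weight to a polyol-apparent molecular weight."""
        return 10.0 ** (a + b * np.log10(m_ps_apparent))

    # Applying ps_to_polyol slice by slice across a PS-calibrated distribution, then recomputing
    # Mn, Mw and Mz from the converted slices, gives polyol-apparent averages and MWDs.
    ```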

  15. Prediction of hydrographs and flow-duration curves in almost ungauged catchments: Which runoff measurements are most informative for model calibration?

    NASA Astrophysics Data System (ADS)

    Pool, Sandra; Viviroli, Daniel; Seibert, Jan

    2017-11-01

    Applications of runoff models usually rely on long and continuous runoff time series for model calibration. However, many catchments around the world are ungauged and estimating runoff for these catchments is challenging. One approach is to perform a few runoff measurements in a previously fully ungauged catchment and to constrain a runoff model by these measurements. In this study we investigated the value of such individual runoff measurements when taken at strategic points in time for applying a bucket-type runoff model (HBV) in ungauged catchments. Based on the assumption that a limited number of runoff measurements can be taken, we sought the optimal sampling strategy (i.e. when to measure the streamflow) to obtain the most informative data for constraining the runoff model. We used twenty gauged catchments across the eastern US, made the assumption that these catchments were ungauged, and applied different runoff sampling strategies. All tested strategies consisted of twelve runoff measurements within one year and ranged from simply using monthly flow maxima to a more complex selection of observation times. In each case the twelve runoff measurements were used to select 100 best parameter sets using a Monte Carlo calibration approach. Runoff simulations using these 'informed' parameter sets were then evaluated for an independent validation period in terms of the Nash-Sutcliffe efficiency of the hydrograph and the mean absolute relative error of the flow-duration curve. Model performance measures were normalized by relating them to an upper and a lower benchmark representing a well-informed and an uninformed model calibration. The hydrographs were best simulated with strategies including high runoff magnitudes as opposed to the flow-duration curves that were generally better estimated with strategies that captured low and mean flows. The choice of a sampling strategy covering the full range of runoff magnitudes enabled hydrograph and flow-duration curve

  16. Calibration of a Fusion Experiment to Investigate the Nuclear Caloric Curve

    NASA Astrophysics Data System (ADS)

    Keeler, Ashleigh

    2017-09-01

    In order to investigate the nuclear equation of state (EoS), the relation between two thermodynamic quantities can be examined. The correlation between the temperature and excitation energy of a nucleus, also known as the caloric curve, has been previously observed in peripheral heavy-ion collisions to exhibit a dependence on the neutron-proton asymmetry. To further investigate this result, fusion reactions (78Kr + 12C and 86Kr + 12C) were measured; the beam energy was varied in the range 15-35 MeV/u in order to vary the excitation energy. The light charged particles (LCPs) evaporated from the compound nucleus were measured in the Si-CsI(Tl)/PD detector array FAUST (Forward Array Using Silicon Technology). The LCPs carry information about the temperature. The calibration of FAUST will be described in this presentation. The silicon detectors have resistive surfaces in perpendicular directions to allow position measurement of the LCPs to better than 200 μm. The resistive nature requires a position-dependent correction to the energy calibration to take full advantage of the energy resolution. The momentum is calculated from the energy of these particles and their position on the detectors. A parameterized formula based on the Bethe-Bloch equation was used to straighten the particle identification (PID) lines measured with the dE-E technique. The energy calibration of the CsI detectors is based on the silicon detector energy calibration and the PID. A precision slotted mask enables the relative positions of the detectors to be determined. DOE Grant: DE-FG02-93ER40773 and REU Grant: PHY - 1659847.

  17. Preliminary calibration of the ACP safeguards neutron counter

    NASA Astrophysics Data System (ADS)

    Lee, T. H.; Kim, H. D.; Yoon, J. S.; Lee, S. Y.; Swinhoe, M.; Menlove, H. O.

    2007-10-01

    The Advanced Spent Fuel Conditioning Process (ACP), a kind of pyroprocess, has been developed at the Korea Atomic Energy Research Institute (KAERI). Since there are no IAEA safeguards criteria for this process, KAERI has developed a neutron coincidence counter to make it possible to perform material control and accounting (MC&A) for its ACP materials, for the purpose of transparency in the peaceful uses of nuclear materials at KAERI. The test results of the ACP Safeguards Neutron Counter (ASNC) show a satisfactory performance for the Doubles count measurement with a low measurement error for its cylindrical sample cavity. The neutron detection efficiency is about 21% with an error of ±1.32% along the axial direction of the cavity. Using two 252Cf neutron sources, we obtained various parameters for the Singles and Doubles rates for the ASNC. The Singles, Doubles, and Triples rates for a 252Cf point source were obtained by using the MCNPX code, and the results for the ft8 cap multiplicity tally option with the values of ɛ, fd, and ft measured with a strong source most closely match the measurement results to within a 1% error. A preliminary calibration curve for the ASNC was generated by using the point model equation relationship between 244Cm and 252Cf, and the calibration coefficient for the non-multiplying sample is 2.78×10⁵ (Doubles counts/s/g 244Cm). The preliminary calibration curves for the ACP samples were also obtained by using an MCNPX simulation. A neutron multiplication influence on an increase of the Doubles rate for a metal ingot and UO2 powder is clearly observed. These calibration curves will be modified and complemented when hot calibration samples become available. To verify the validity of this calibration curve, a measurement of spent fuel standards with a known 244Cm mass will be performed in the near future.
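
    Using the non-multiplying calibration coefficient quoted above, a back-of-the-envelope conversion from a measured Doubles rate to a 244Cm mass is sketched below; the example rate is invented, and the multiplication correction mentioned for metal ingots and UO2 powder is not included.

```python
# Calibration coefficient for a non-multiplying sample, taken from the abstract:
CAL_COEFF = 2.78e5  # Doubles counts per second per gram of 244Cm

def cm244_mass_from_doubles(doubles_rate_cps):
    """Estimate the 244Cm mass (g) from a measured Doubles rate, non-multiplying case."""
    return doubles_rate_cps / CAL_COEFF

print(cm244_mass_from_doubles(1390.0))  # e.g. 1390 Doubles/s -> about 5 mg of 244Cm
```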

  18. Assessment of opacimeter calibration according to International Standard Organization 10155.

    PubMed

    Gomes, J F

    2001-01-01

    This paper compares the calibration method for opacimeters issued by the International Standard Organization (ISO) 10155 with the manual reference method for determination of dust content in stack gases. ISO 10155 requires at least nine operational measurements, corresponding to three operational measurements for each dust emission range within the stack. The procedure is assessed by comparison with previous calibration methods for opacimeters using only two operational measurements from a set of measurements made at stacks from pulp mills. The results show that even though the international standard for opacimeter calibration requires that the calibration curve be obtained using 3 × 3 points, a calibration curve derived using 3 points could be, at times, acceptable in statistical terms, provided that the amplitude of individual measurements is low.
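
    As an illustration of the comparison made in the paper, the sketch below fits a straight-line opacimeter calibration once with all nine points (three per emission range) and once with only three points, and prints both fits. The dust concentrations and signals are invented, not data from the study.

```python
import numpy as np

# Hypothetical stack measurements: opacimeter signal vs. gravimetric dust concentration (mg/m^3)
dust = np.array([20, 22, 25, 60, 63, 66, 110, 115, 120], dtype=float)   # three points per range
signal = 0.004 * dust + 0.02 + np.random.default_rng(1).normal(0, 0.003, dust.size)

slope9, icpt9 = np.polyfit(signal, dust, 1)             # ISO 10155-style: all nine points
slope3, icpt3 = np.polyfit(signal[::4], dust[::4], 1)   # reduced design: one point per range

print(f"9-point fit: dust = {slope9:.1f}*signal + {icpt9:.1f}")
print(f"3-point fit: dust = {slope3:.1f}*signal + {icpt3:.1f}")
```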

  19. Gold Nanoparticle-Aptamer-Based LSPR Sensing of Ochratoxin A at a Widened Detection Range by Double Calibration Curve Method

    NASA Astrophysics Data System (ADS)

    Liu, Boshi; Huang, Renliang; Yu, Yanjun; Su, Rongxin; Qi, Wei; He, Zhimin

    2018-04-01

    Ochratoxin A (OTA) is a type of mycotoxin generated from the metabolism of Aspergillus and Penicillium, and is extremely toxic to humans, livestock, and poultry. However, traditional assays for the detection of OTA are expensive and complicated. Besides the OTA aptamer, OTA itself at high concentrations can also adsorb onto the surface of gold nanoparticles (AuNPs) and thereby inhibit their salt-induced aggregation. We herein report a new OTA assay by applying the localized surface plasmon resonance effect of AuNPs and their aggregates. Results obtained from a single linear calibration curve are not reliable, so we developed a "double calibration curve" method to address this issue and widen the OTA detection range. A number of other analytes were also examined, and the structural properties of analytes that bind with the AuNPs were further discussed. We found that various considerations must be taken into account in the detection of these analytes when applying AuNP aggregation-based methods due to their different binding strengths.

  20. Feasibility analysis on integration of luminous environment measuring and design based on exposure curve calibration

    NASA Astrophysics Data System (ADS)

    Zou, Yuan; Shen, Tianxing

    2013-03-01

    Besides illumination calculation during architectural and luminous environment design, and in order to provide a wider variety of photometric data, this paper presents the combination of luminous environment design with the SM light environment measuring system, which comprises a set of experimental devices, including light-information collection and processing modules, and can supply various types of photometric data. We introduce a simulation method for calibration that mainly involves rebuilding experimental scenes in 3ds Max Design, calibrating this computer-aided design software in the simulated environment under various typical light sources, and fitting the exposure curves of the rendered images. From this analysis, the operating sequence and key points of attention for the simulated calibration are summarized, and connections between the Mental Ray renderer and the SM light environment measuring system are established. The paper thus provides a useful reference for coordinating luminous environment design with the SM light environment measuring system.

  1. A dose-response curve for biodosimetry from a 6 MV electron linear accelerator

    PubMed Central

    Lemos-Pinto, M.M.P.; Cadena, M.; Santos, N.; Fernandes, T.S.; Borges, E.; Amaral, A.

    2015-01-01

    Biological dosimetry (biodosimetry) is based on the investigation of radiation-induced biological effects (biomarkers), mainly dicentric chromosomes, in order to correlate them with radiation dose. To interpret the dicentric score in terms of absorbed dose, a calibration curve is needed. Each curve should be constructed with respect to basic physical parameters, such as the type of ionizing radiation characterized by low or high linear energy transfer (LET) and dose rate. This study was designed to obtain dose calibration curves by scoring of dicentric chromosomes in peripheral blood lymphocytes irradiated in vitro with a 6 MV electron linear accelerator (Mevatron M, Siemens, USA). Two software programs, CABAS (Chromosomal Aberration Calculation Software) and Dose Estimate, were used to generate the curve. The two software programs are discussed; the results obtained were compared with each other and with other published low LET radiation curves. Both software programs resulted in identical linear and quadratic terms for the curve presented here, which was in good agreement with published curves for similar radiation quality and dose rates. PMID:26445334
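
    Dose-response curves for dicentrics induced by low-LET radiation are conventionally fitted to the linear-quadratic form Y = C + αD + βD². The sketch below shows such a fit with scipy on invented yields; it is not the curve or the coefficients produced by CABAS or Dose Estimate.

```python
import numpy as np
from scipy.optimize import curve_fit

dose = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 3.0, 4.0])              # Gy
yield_dic = np.array([0.001, 0.01, 0.03, 0.09, 0.29, 0.58, 1.0])   # dicentrics per cell (invented)

def linear_quadratic(d, c, alpha, beta):
    """Conventional low-LET dicentric dose response: Y = C + alpha*D + beta*D**2."""
    return c + alpha * d + beta * d ** 2

(c, alpha, beta), _ = curve_fit(linear_quadratic, dose, yield_dic, p0=[0.001, 0.02, 0.05])
print(f"Y = {c:.4f} + {alpha:.4f}*D + {beta:.4f}*D^2")
```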

  2. GIADA: extended calibration activities before the comet encounter

    NASA Astrophysics Data System (ADS)

    Accolla, Mario; Sordini, Roberto; Della Corte, Vincenzo; Ferrari, Marco; Rotundi, Alessandra

    2014-05-01

    The Grain Impact Analyzer and Dust Accumulator - GIADA - is one of the payloads on-board the Rosetta Orbiter. Its three detection sub-systems are able to measure the speed, momentum, mass, and optical cross section of single cometary grains, as well as the dust flux ejected by the periodic comet 67P/Churyumov-Gerasimenko. During the hibernation phase of the Rosetta mission, we performed a dedicated extended calibration activity on the GIADA Proto Flight Model (accommodated in a clean room in our laboratory) involving two of the three sub-systems constituting GIADA, i.e. the Grain Detection System (GDS) and the Impact Sensor (IS). Our aim is to obtain a new set of response curves for these two subsystems and to correlate them with the calibration curves obtained in 2002 for the GIADA payload onboard the Rosetta spacecraft, in order to improve the interpretation of the forthcoming scientific data. For the extended calibration we dropped or shot into the GIADA PFM a statistically relevant number of grains (about one hundred) acting as cometary dust analogues. We studied the response of the GDS and IS as a function of grain composition, size and velocity. Different terrestrial materials were selected as cometary analogues according to the more recent knowledge gained through the analyses of Interplanetary Dust Particles and cometary samples returned from comet 81P/Wild 2 (Stardust mission). For each material, we produced grains with sizes ranging from 20 to 500 μm in diameter, which were characterized by FESEM and micro-IR spectroscopy. The grains were then shot into the GIADA PFM with speeds ranging between 1 and 100 m s-1. According to the estimates reported in Fink & Rubin (2012), this range is representative of the dust particle velocities expected at the comet and lies within the GIADA velocity sensitivity (1-100 m s-1 for the GDS and 1-300 m s-1 for GDS+IS). The response curves obtained using the data collected

  3. A curve fitting method for extrinsic camera calibration from a single image of a cylindrical object

    NASA Astrophysics Data System (ADS)

    Winkler, A. W.; Zagar, B. G.

    2013-08-01

    An important step in the process of optical steel coil quality assurance is to measure the proportions of width and radius of steel coils as well as the relative position and orientation of the camera. This work attempts to estimate these extrinsic parameters from single images by using the cylindrical coil itself as the calibration target. To this end, an adaptive least-squares algorithm is applied to fit parametrized curves to the true coil outline detected in the acquired image. The employed model allows for strictly separating the intrinsic and the extrinsic parameters. Thus, the intrinsic camera parameters can be calibrated beforehand using available calibration software. Furthermore, a way to segment the true coil outline in the acquired images is motivated. The proposed optimization method yields highly accurate results and can be generalized even to measure other solids which cannot be characterized by the identification of simple geometric primitives.

  4. Spectral characterization and calibration of AOTF spectrometers and hyper-spectral imaging system

    NASA Astrophysics Data System (ADS)

    Katrašnik, Jaka; Pernuš, Franjo; Likar, Boštjan

    2010-02-01

    The goal of this article is to present a novel method for spectral characterization and calibration of spectrometers and hyper-spectral imaging systems based on non-collinear acousto-optical tunable filters. The method characterizes the spectral tuning curve (frequency-wavelength characteristic) of the AOTF (Acousto-Optic Tunable Filter) by matching the acquired and modeled spectra of an HgAr calibration lamp, which emits a line spectrum that can be well modeled via the AOTF transfer function. In this way, not only tuning curve characterization and the corresponding spectral calibration but also spectral resolution assessment is performed. The obtained results indicated that the proposed method is efficient, accurate and feasible for routine calibration of AOTF spectrometers and hyper-spectral imaging systems, and thereby a highly competitive alternative to the existing calibration methods.

  5. Radiochromic film calibration for the RQT9 quality beam

    NASA Astrophysics Data System (ADS)

    Costa, K. C.; Gomez, A. M. L.; Alonso, T. C.; Mourao, A. P.

    2017-11-01

    When ionizing radiation interacts with matter it deposits energy. Radiation dosimetry is important for medical applications of ionizing radiation due to the increasing demand for diagnostic radiology and radiotherapy. Different dosimetry methods are used, each with its advantages and disadvantages. Film dosimetry records the energy deposition through the darkening of the film emulsion. Radiochromic films have little sensitivity to visible light and respond well to ionizing radiation exposure. The aim of this study is to obtain a calibration curve by irradiating radiochromic film strips, making it possible to relate the darkening of the film to the absorbed dose, in order to measure doses in experiments with a 120 kV X-ray beam in computed tomography (CT). Film strips of GAFCHROMIC XR-QA2 were exposed according to the RQT9 reference radiation, which defines an X-ray beam generated from a voltage of 120 kV. Strips were irradiated at the "Laboratório de Calibração de Dosímetros do Centro de Desenvolvimento da Tecnologia Nuclear" (LCD/CDTN) over a dose range of 5-30 mGy, corresponding to the values commonly used in CT scans. Digital images of the irradiated films were analyzed using the ImageJ software. The darkening response of the film strips was recorded as a function of dose, yielding a numeric darkening value for each specific dose value. From these numerical values, a calibration curve was obtained that correlates the darkening of the film strip with dose values in mGy. The calibration curve equation provides a simple means of obtaining absorbed dose values from digital images of irradiated radiochromic films. With the calibration curve, radiochromic films may be applied to dosimetry in CT experiments using a 120 kV X-ray beam, in order to improve CT image acquisition processes.
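
    A sketch of the last step described above, assuming the ImageJ readings have already been reduced to a mean pixel value per strip: net darkening is computed against an unexposed strip and a simple second-order calibration curve relates it to dose. All pixel values and the quadratic form are illustrative assumptions, not the published curve.

```python
import numpy as np

dose_mGy = np.array([5, 10, 15, 20, 25, 30], dtype=float)
mean_pixel = np.array([41500, 40100, 38900, 37850, 36900, 36050], dtype=float)  # invented ImageJ readings
pixel_unexposed = 43000.0

net_darkening = -np.log10(mean_pixel / pixel_unexposed)   # net optical-density-like quantity

# Second-order calibration curve: dose as a function of net darkening
coeffs = np.polyfit(net_darkening, dose_mGy, 2)
dose_from_film = np.poly1d(coeffs)

print(dose_from_film(-np.log10(38000 / pixel_unexposed)))  # dose estimate for a measured strip
```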

  6. Application of Geodetic VLBI Data to Obtaining Long-Term Light Curves for Astrophysics

    NASA Technical Reports Server (NTRS)

    Kijima, Masachika

    2010-01-01

    The long-term light curve is important to research on binary black holes and disk instability in AGNs. Light curves have been drawn mainly using single-dish data provided by the University of Michigan Radio Observatory and the Metsahovi Radio Observatory; hence, research has thus far been limited to those sources. I attempt to draw light curves using VLBI data for sources that have not been monitored by any single-dish observatory. I developed software, analyzed all geodetic VLBI data available at the IVS Data Centers, and drew the light curves at 8 GHz. In this report, I show the tentative results for two AGNs. I compared two light curves of 4C39.25, drawn from single-dish data and from VLBI data, and confirmed that the two light curves were consistent. Furthermore, I succeeded in drawing the light curve of 0454-234 with VLBI data, a source which has not been monitored by any single-dish observatory. In this report, I suggest that the geodetic VLBI archive data are useful for obtaining long-term light curves at radio bands for astrophysics.

  7. Use of armored RNA as a standard to construct a calibration curve for real-time RT-PCR.

    PubMed

    Donia, D; Divizia, M; Pana', A

    2005-06-01

    Armored Enterovirus RNA was used to standardize a real-time reverse transcription (RT)-PCR for environmental testing. Armored technology is a system for producing a robust and stable RNA standard, trapped within phage proteins, to be used as an internal control. The Armored Enterovirus RNA protected sequence includes 263 bp of highly conserved sequences in the 5' UTR region. In these tests, Armored RNA was used to produce a calibration curve, comparing three different fluorogenic chemistries: the TaqMan system, SYBR Green I, and LUX primers. Three commercial amplification reagent kits used for real-time RT-PCR and several extraction procedures for the protected viral RNA were also evaluated. The highest Armored RNA recovery was obtained by heat treatment, while chemical extraction may decrease the quantity of RNA. The best sensitivity and specificity were obtained with the SYBR Green I technique, since it is reproducible, easy to use, and the cheapest. TaqMan and LUX-primer assays provide good RT-PCR efficiency with the various extraction methods used, but the labelled probe or primer required by these chemistries increases the cost of testing.

  8. Localized normalization for improved calibration curves of manganese and zinc in laser-induced plasma spectroscopy

    NASA Astrophysics Data System (ADS)

    Sabri, Nursalwanie Mohd; Haider, Zuhaib; Tufail, Kashif; Imran, Muhammad; Ali, Jalil

    2017-03-01

    Laser-induced plasma spectroscopy was performed to determine the elemental compositions of manganese and zinc in a potassium bromide (KBr) matrix. This work utilized the Q-switched Nd:YAG laser of the LIBS2500plus system at its fundamental wavelength. The pelletized samples were ablated in air with a maximum laser energy of 650 mJ for gate delays ranging from 0 to 18 µs. Spectra were obtained for five different compositions containing the preferred spectral lines. The spectral line intensity reaches its maximum at a gate delay of 0.83 µs and subsequently decays exponentially with increasing gate delay. The maximum signal-to-background ratios of Mn and Zn were found at gate delays of 7.92 and 7.50 µs, respectively. The initial calibration curves show poor fits, whereas the locally normalized intensities of both spectral lines are markedly more linear. This study gives a better understanding of the plasma emission and the analysis of its spectra. At the request of all authors of the paper, and with the agreement of the Proceedings Editor, an updated version of this article was published on 24 May 2017.

  9. Capillary pressure curves for low permeability chalk obtained by NMR imaging of core saturation profiles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Norgaard, J.V.; Olsen, D.; Springer, N.

    1995-12-31

    A new technique for obtaining water-oil capillary pressure curves, based on NMR imaging of the saturation distribution in flooded cores, is presented. In this technique, a steady state fluid saturation profile is developed by flooding the core at a constant flow rate. At the steady state situation where the saturation distribution no longer changes, the local pressure difference between the wetting and non-wetting phases represents the capillary pressure. The saturation profile is measured using an NMR technique and, for a drainage case, the pressure in the non-wetting phase is calculated numerically. The paper presents the NMR technique and the procedure for calculating the pressure distribution in the sample. Inhomogeneous samples produce irregular saturation profiles, which may be interpreted in terms of variation in permeability, porosity, and capillary pressure. Capillary pressure curves for North Sea chalk obtained by the new technique show good agreement with capillary pressure curves obtained by traditional techniques.

  10. Computational Methodology for Absolute Calibration Curves for Microfluidic Optical Analyses

    PubMed Central

    Chang, Chia-Pin; Nagel, David J.; Zaghloul, Mona E.

    2010-01-01

    Optical fluorescence and absorption are two of the primary techniques used for analytical microfluidics. We provide a thorough yet tractable method for computing the performance of diverse optical micro-analytical systems. Sample sizes range from nano- to many micro-liters and concentrations from nano- to milli-molar. Equations are provided to trace quantitatively the flow of the fundamental entities, namely photons and electrons, and the conversion of energy from the source, through optical components, samples and spectral-selective components, to the detectors and beyond. The equations permit facile computations of calibration curves that relate the concentrations or numbers of molecules measured to the absolute signals from the system. This methodology provides the basis for both detailed understanding and improved design of microfluidic optical analytical systems. It saves prototype turn-around time, and is much simpler and faster to use than ray tracing programs. Over two thousand spreadsheet computations were performed during this study. We found that some design variations produce higher signal levels and, for constant noise levels, lower minimum detection limits. Improvements of more than a factor of 1,000 were realized. PMID:22163573

  11. Growthcurver: an R package for obtaining interpretable metrics from microbial growth curves.

    PubMed

    Sprouffske, Kathleen; Wagner, Andreas

    2016-04-19

    Plate readers can measure the growth curves of many microbial strains in a high-throughput fashion. The hundreds of absorbance readings collected simultaneously for hundreds of samples create technical hurdles for data analysis. Growthcurver summarizes the growth characteristics of microbial growth curve experiments conducted in a plate reader. The data are fitted to a standard form of the logistic equation, and the parameters have clear interpretations on population-level characteristics, like doubling time, carrying capacity, and growth rate. Growthcurver is an easy-to-use R package available for installation from the Comprehensive R Archive Network (CRAN). The source code is available under the GNU General Public License and can be obtained from Github (Sprouffske K, Growthcurver sourcecode, 2016).
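
    Growthcurver itself is an R package, but the standard logistic form it fits can be sketched in Python as below, assuming N(t) = K / (1 + ((K - N0)/N0) e^(-rt)). The data and parameter names are illustrative and do not reflect Growthcurver's API.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, k, n0, r):
    """Standard logistic growth: carrying capacity k, initial size n0, growth rate r."""
    return k / (1 + ((k - n0) / n0) * np.exp(-r * t))

t = np.linspace(0, 24, 49)                                          # hours
od = logistic(t, 1.2, 0.02, 0.45) + np.random.default_rng(3).normal(0, 0.01, t.size)

(k, n0, r), _ = curve_fit(logistic, t, od, p0=[1.0, 0.05, 0.3])
print(f"carrying capacity={k:.2f}, growth rate={r:.2f}/h, doubling time={np.log(2)/r:.2f} h")
```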

  12. Satellite Calibration With LED Detectors at Mud Lake

    NASA Technical Reports Server (NTRS)

    Hiller, Jonathan D.

    2005-01-01

    Earth-monitoring instruments in orbit must be routinely calibrated in order to accurately analyze the data obtained. By comparing radiometric measurements taken on the ground in conjunction with a satellite overpass, calibration curves are derived for an orbiting instrument. A permanent, automated facility is planned for Mud Lake, Nevada (a large, homogeneous, dry lakebed) for this purpose. Because some orbiting instruments have low resolution (250 meters per pixel), inexpensive radiometers using LEDs as sensors are being developed to array widely over the lakebed. LEDs are ideal because they are inexpensive, reliable, and sense over a narrow bandwidth. By obtaining and averaging widespread data, errors are reduced and long-term surface changes can be more accurately observed.

  13. Self-calibrating multiplexer circuit

    DOEpatents

    Wahl, Chris P.

    1997-01-01

    A time domain multiplexer system with automatic determination of acceptable multiplexer output limits, error determination, or correction comprises a time domain multiplexer, a computer, a constant current source capable of at least three distinct current levels, and two series resistances employed for calibration and testing. A two-point linear calibration curve defining acceptable multiplexer voltage limits may be established by the computer by determining the voltage output of the multiplexer in response to accurately known input signals developed from predetermined current levels across the series resistances. Drift in the multiplexer may be detected by the computer when the output voltage limits expected during normal operation are exceeded, or when the relationship defined by the calibration curve is invalidated.
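
    The two-point calibration described in this patent abstract amounts to solving for a gain and offset from two accurately known inputs; a minimal sketch with invented current levels, resistance, and readings is given below.

```python
# Two known calibration inputs developed across a series resistance (values are illustrative)
R_SERIES = 1000.0                              # ohms
i_levels = (1.0e-3, 4.0e-3)                    # constant-current source levels, amperes
v_true = [i * R_SERIES for i in i_levels]      # accurately known input voltages
v_measured = (1.012, 4.031)                    # corresponding multiplexer outputs, volts

# Two-point linear calibration curve: v_true = gain * v_measured + offset
gain = (v_true[1] - v_true[0]) / (v_measured[1] - v_measured[0])
offset = v_true[0] - gain * v_measured[0]

def correct(v_out):
    """Apply the two-point calibration to a raw multiplexer output voltage."""
    return gain * v_out + offset

print(correct(2.5))
```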

  14. Calibrations between the variables of microbial TTI response and ground pork qualities.

    PubMed

    Kim, Eunji; Choi, Dong Yeol; Kim, Hyun Chul; Kim, Keehyuk; Lee, Seung Ju

    2013-10-01

    A time-temperature indicator (TTI) based on a lactic acid bacterium, Weissella cibaria CIFP009, was applied to ground pork packaging. Calibration curves between TTI response and pork qualities were obtained from storage tests at 2°C, 10°C, and 13°C. The curves of the TTI vs. total cell number at different temperatures coincided to the greatest extent, indicating the highest representativeness of calibration, by showing the lowest coefficient of variation (CV = 11%) of the quality variables at a given TTI response (titratable acidity) on the curves, followed by pH (23%), volatile basic nitrogen (VBN) (25%), and thiobarbituric acid-reactive substances (TBARS) (47%). Similarity of Arrhenius activation energy (Ea) could also reflect the representativeness of calibration. The total cell number (104.9 kJ/mol) was found to be the most similar to that of the TTI response (106.2 kJ/mol), followed by pH (113.6 kJ/mol), VBN (77.4 kJ/mol), and TBARS (55.0 kJ/mol). Copyright © 2013 Elsevier Ltd. All rights reserved.
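
    The Arrhenius comparison used above can be reproduced schematically: fit ln(k) against 1/T for rate constants estimated at the three storage temperatures and read Ea from the slope. The rate constants below are invented, not values from the study.

```python
import numpy as np

R = 8.314                                         # J/(mol K)
temps_C = np.array([2.0, 10.0, 13.0])             # storage temperatures from the study
rates = np.array([0.012, 0.055, 0.085])           # hypothetical quality-change rate constants

inv_T = 1.0 / (temps_C + 273.15)
slope, intercept = np.polyfit(inv_T, np.log(rates), 1)   # ln k = ln A - Ea/(R*T)
Ea = -slope * R / 1000.0                                  # activation energy in kJ/mol

print(f"Ea ~ {Ea:.1f} kJ/mol")
```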

  15. Radiance calibration of the High Altitude Observatory white-light coronagraph on Skylab

    NASA Technical Reports Server (NTRS)

    Poland, A. I.; Macqueen, R. M.; Munro, R. H.; Gosling, J. T.

    1977-01-01

    The processing of over 35,000 photographs of the solar corona obtained by the white-light coronagraph on Skylab is described. Calibration of the vast amount of data was complicated by temporal effects of radiation fog and latent image loss. These effects were compensated for by imaging a calibration step wedge on each data frame. Absolute calibration of the wedge was accomplished through comparison with a set of previously calibrated glass opal filters. Analysis employed average characteristic curves derived from measurements of step wedges from many frames within a given camera half-load. The net absolute accuracy of a given radiance measurement is estimated to be 20%.

  16. SU-C-204-02: Improved Patient-Specific Optimization of the Stopping Power Calibration for Proton Therapy Planning Using a Single Proton Radiography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rinaldi, I; Ludwig Maximilian University, Garching, DE; Heidelberg University Hospital, Heidelberg, DE

    2015-06-15

    Purpose: We present an improved method to calculate patient-specific calibration curves to convert X-ray computed tomography (CT) Hounsfield Units (HU) to relative stopping powers (RSP) for proton therapy treatment planning. Methods: By optimizing the HU-RSP calibration curve, the difference between a proton radiographic image and a digitally reconstructed X-ray radiograph (DRR) is minimized. The feasibility of this approach has previously been demonstrated. This scenario assumes that all discrepancies between the proton radiography and the DRR originate from uncertainties in the HU-RSP curve. In reality, external factors cause imperfections in the proton radiography, such as misalignment compared to the DRR and unfaithful representation of geometric structures ("blurring"). We analyze these effects based on synthetic datasets of anthropomorphic phantoms and suggest an extended optimization scheme which explicitly accounts for these effects. Performance of the method has been tested for various simulated irradiation parameters. The ultimate purpose of the optimization is to minimize uncertainties in the HU-RSP calibration curve. We therefore suggest and perform a thorough statistical treatment to quantify the accuracy of the optimized HU-RSP curve. Results: We demonstrate that without extending the optimization scheme, spatial blurring (equivalent to FWHM=3mm convolution) in the proton radiographies can cause up to 10% deviation between the optimized and the ground truth HU-RSP calibration curve. Instead, results obtained with our extended method reach 1% or better correspondence. We have further calculated gamma index maps for different acceptance levels. With DTA=0.5mm and RD=0.5%, a passing ratio of 100% is obtained with the extended method, while an optimization neglecting the effects of spatial blurring reaches only ∼90%. Conclusion: Our contribution underlines the potential of a single proton radiography to generate a patient-specific calibration curve and to

  17. Flight calibration tests of a nose-boom-mounted fixed hemispherical flow-direction sensor

    NASA Technical Reports Server (NTRS)

    Armistead, K. H.; Webb, L. D.

    1973-01-01

    Flight calibrations of a fixed hemispherical flow angle-of-attack and angle-of-sideslip sensor were made from Mach numbers of 0.5 to 1.8. Maneuvers were performed by an F-104 airplane at selected altitudes to compare the measurement of flow angle of attack from the fixed hemispherical sensor with that from a standard angle-of-attack vane. The hemispherical flow-direction sensor measured differential pressure at two angle-of-attack ports and two angle-of-sideslip ports in diametrically opposed positions. Stagnation pressure was measured at a center port. The results of these tests showed that the calibration curves for the hemispherical flow-direction sensor were linear for angles of attack up to 13 deg. The overall uncertainty in determining angle of attack from these curves was plus or minus 0.35 deg or less. A Mach number position error calibration curve was also obtained for the hemispherical flow-direction sensor. The hemispherical flow-direction sensor exhibited a much larger position error than a standard uncompensated pitot-static probe.

  18. 40 CFR 89.323 - NDIR analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... curve. Develop a calibration curve for each range used as follows: (1) Zero the analyzer. (2) Span the... zero response. If it has changed more than 0.5 percent of full scale, repeat the steps given in... coefficients. If any range is within 2 percent of being linear a linear calibration may be used. Include zero...

  19. 40 CFR 89.323 - NDIR analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... curve. Develop a calibration curve for each range used as follows: (1) Zero the analyzer. (2) Span the... zero response. If it has changed more than 0.5 percent of full scale, repeat the steps given in... coefficients. If any range is within 2 percent of being linear a linear calibration may be used. Include zero...

  20. 40 CFR 89.323 - NDIR analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... curve. Develop a calibration curve for each range used as follows: (1) Zero the analyzer. (2) Span the... zero response. If it has changed more than 0.5 percent of full scale, repeat the steps given in... coefficients. If any range is within 2 percent of being linear a linear calibration may be used. Include zero...

  1. 40 CFR 89.323 - NDIR analyzer calibration.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... curve. Develop a calibration curve for each range used as follows: (1) Zero the analyzer. (2) Span the... zero response. If it has changed more than 0.5 percent of full scale, repeat the steps given in... coefficients. If any range is within 2 percent of being linear a linear calibration may be used. Include zero...

  2. Radiometric calibration of the vacuum-ultraviolet spectrograph SUMER on the SOHO spacecraft with the B detector.

    PubMed

    Schühle, U; Curdt, W; Hollandt, J; Feldman, U; Lemaire, P; Wilhelm, K

    2000-01-20

    The Solar Ultraviolet Measurement of Emitted Radiation (SUMER) vacuum-ultraviolet spectrograph was calibrated in the laboratory before the integration of the instrument on the Solar and Heliospheric Observatory (SOHO) spacecraft in 1995. During the scientific operation of the SOHO it has been possible to track the radiometric calibration of the SUMER spectrograph since March 1996 by a strategy that employs various methods to update the calibration status and improve the coverage of the spectral calibration curve. The results for the A Detector were published previously [Appl. Opt. 36, 6416 (1997)]. During three years of operation in space, the B detector was used for two and one-half years. We describe the characteristics of the B detector and present results of the tracking and refinement of the spectral calibration curves with it. Observations of the spectra of the stars alpha and rho Leonis permit an extrapolation of the calibration curves in the range from 125 to 149.0 nm. Using a solar coronal spectrum observed above the solar disk, we can extrapolate the calibration curves by measuring emission line pairs with well-known intensity ratios. The sensitivity ratio of the two photocathode areas can be obtained by registration of many emission lines in the entire spectral range on both KBr-coated and bare parts of the detector's active surface. The results are found to be consistent with the published calibration performed in the laboratory in the wavelength range from 53 to 124 nm. We can extrapolate the calibration outside this range to 147 nm with a relative uncertainty of ±30% (1σ) for wavelengths longer than 125 nm and to 46.5 nm with 50% uncertainty for the short-wavelength range below 53 nm.

  3. SUMS calibration test report

    NASA Technical Reports Server (NTRS)

    Robertson, G.

    1982-01-01

    Calibration was performed on the shuttle upper atmosphere mass spectrometer (SUMS). The results of the calibration and the as-run test procedures are presented. The output data is described, and engineering data conversion factors, tables and curves, and calibration of instrument gauges are included. Static calibration results are given, including instrument sensitivity versus external pressure for N2 and O2, data from each calibration scan, data plots for N2 and O2, sensitivity of SUMS at the inlet for N2 and O2, and the 14/28 ratio for nitrogen and the 16/32 ratio for oxygen.

  4. Non-uniformity calibration for MWIR polarization imagery obtained with integrated microgrid polarimeters

    NASA Astrophysics Data System (ADS)

    Liu, Hai-Zheng; Shi, Ze-Lin; Feng, Bin; Hui, Bin; Zhao, Yao-Hong

    2016-03-01

    Integrating microgrid polarimeters on the focal plane array (FPA) of an infrared detector causes non-uniformity of the polarization response. In order to reduce the effect of polarization non-uniformity, this paper constructs an experimental setup for capturing raw flat-field images and proposes a procedure for acquiring a non-uniformity calibration (NUC) matrix and calibrating raw polarization images. The proposed procedure treats the incident radiation as a polarization vector and provides a calibration matrix for each pixel. Both our matrix calibration and two-point calibration were applied to our mid-wavelength infrared (MWIR) polarization imaging system with integrated microgrid polarimeters. Compared with two-point calibration, our matrix calibration reduces non-uniformity by 30-40% in flat-field tests with polarized illumination. The outdoor scene observation experiment indicates that our calibration can effectively reduce polarization non-uniformity and improve the image quality of our MWIR polarization imaging system.

  5. Dependency of EBT2 film calibration curve on postirradiation time

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Liyun, E-mail: liyunc@isu.edu.tw; Ding, Hueisch-Jy; Ho, Sheng-Yow

    2014-02-15

    Purpose: The Ashland Inc. product EBT2 film model is a widely used quality assurance tool, especially for verification of 2-dimensional dose distributions. In general, the calibration film and the dose measurement film are irradiated, scanned, and calibrated at the same postirradiation time (PIT), 1-2 days after the films are irradiated. However, for a busy clinic or in some special situations, the PIT for the dose measurement film may be different from that of the calibration film. In this case, the measured dose will be incorrect. This paper proposed a film calibration method that includes the effect of PIT. Methods: The dose versus film optical density was fitted to a power function with three parameters. One of these parameters was PIT dependent, while the other two were found to be almost constant with a standard deviation of the mean less than 4%. The PIT-dependent parameter was fitted to another power function of PIT. The EBT2 film model was calibrated using the PDD method with 14 different PITs ranging from 1 h to 2 months. Ten of the fourteen PITs were used for finding the fitting parameters, and the other four were used for testing the model. Results: The verification test shows that the differences between the delivered doses and the film doses calculated with this modeling were mainly within 2% for delivered doses above 60 cGy, and the total uncertainties were generally under 5%. The errors and total uncertainties of film dose calculation were independent of the PIT using the proposed calibration procedure. However, the fitting uncertainty increased with decreasing dose or PIT, but stayed below 1.3% for this study. Conclusions: The EBT2 film dose can be modeled as a function of PIT. For the ease of routine calibration, five PITs were suggested to be used. It is recommended that two PITs be located in the fast developing period (1∼6 h), one in 1∼2 days, one around a week, and one around a month.
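
    A sketch of the kind of model described above, under the stated assumption that dose versus net optical density follows a power function whose single PIT-dependent coefficient itself follows a power function of PIT. The functional form, coefficients, and function name below are hypothetical, not the fitted values from the paper.

```python
import numpy as np

def dose_from_od(net_od, pit_hours, b=1.9, c=0.55, a0=310.0, m=0.12):
    """Hypothetical EBT2-style model: dose = a(PIT) * netOD**b + c * netOD,
    with the PIT-dependent parameter a(PIT) = a0 * PIT**(-m)."""
    a = a0 * pit_hours ** (-m)
    return a * net_od ** b + c * net_od

# The same film darkening read out at two different post-irradiation times
print(dose_from_od(0.35, pit_hours=24))   # dose estimate for a 1-day PIT
print(dose_from_od(0.35, pit_hours=2))    # the same optical density at 2 h implies a higher dose
```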

  6. Energy dispersive X-ray fluorescence (EDXRF) equipment calibration for multielement analysis of soil and rock samples

    NASA Astrophysics Data System (ADS)

    de Moraes, Alex Silva; Tech, Lohane; Melquíades, Fábio Luiz; Bastos, Rodrigo Oliveira

    2014-11-01

    Considering the importance of understanding the behavior of the elements in different natural and/or anthropic processes, this study aimed to verify the accuracy of a multielement analysis method for rock characterization using soil standards as the calibration reference. An EDXRF instrument was used. Analyses were performed on samples doped with known concentrations of Mn, Zn, Rb, Sr and Zr to obtain the calibration curves, and on a certified rock sample to check the accuracy of the analytical curves. Then, a set of rock samples from Rio Bonito, located in Figueira city, Paraná State, Brazil, was analyzed. The concentration values obtained, in ppm, for Mn, Rb, Sr and Zr varied, respectively, from 175 to 1084, 7.4 to 268, 28 to 2247 and 15 to 761.

  7. SU-F-J-65: Prediction of Patient Setup Errors and Errors in the Calibration Curve from Prompt Gamma Proton Range Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albert, J; Labarbe, R; Sterpin, E

    2016-06-15

    Purpose: To understand the extent to which prompt gamma camera measurements can be used to predict the residual proton range due to setup errors and errors in the calibration curve. Methods: We generated ten variations on a default calibration curve (CC) and ten corresponding range maps (RM). Starting with the default RM, we chose a square array of N beamlets, which were then rotated by a random angle θ and shifted by a random vector s. We added 5% distal Gaussian noise to each beamlet in order to introduce the discrepancies that exist between the ranges predicted from the prompt gamma measurements and those simulated with Monte Carlo algorithms. For each RM, s and θ, along with an offset u in the CC, were optimized using a simple Euclidean distance between the default ranges and the ranges produced by the given RM. Results: The application of our method led to a maximal overrange of 2.0 mm and underrange of 0.6 mm on average. Compared to the situations where s, θ, and u were ignored, these values were larger: 2.1 mm and 4.3 mm. In order to quantify the need for setup error corrections, we also performed computations in which u was corrected for, but s and θ were not. This yielded 3.2 mm and 3.2 mm. The average computation time for 170 beamlets was 65 seconds. Conclusion: These results emphasize the necessity of correcting for setup errors and errors in the calibration curve. The simplicity and speed of our method make it a good candidate for implementation as a tool for in-room adaptive therapy. This work also demonstrates that prompt gamma range measurements can indeed be useful in the effort to reduce range errors. Given these results, and barring further refinements, this approach is a promising step towards adaptive proton radiotherapy.

  8. Financial model calibration using consistency hints.

    PubMed

    Abu-Mostafa, Y S

    2001-01-01

    We introduce a technique for forcing the calibration of a financial model to produce valid parameters. The technique is based on learning from hints. It converts simple curve fitting into genuine calibration, where broad conclusions can be inferred from parameter values. The technique augments the error function of curve fitting with consistency hint error functions based on the Kullback-Leibler distance. We introduce an efficient EM-type optimization algorithm tailored to this technique. We also introduce other consistency hints, and balance their weights using canonical errors. We calibrate the correlated multifactor Vasicek model of interest rates, and apply it successfully to Japanese Yen swaps market and US dollar yield market.

  9. Cloned plasmid DNA fragments as calibrators for controlling GMOs: different real-time duplex quantitative PCR methods.

    PubMed

    Taverniers, Isabel; Van Bockstaele, Erik; De Loose, Marc

    2004-03-01

    Analytical real-time PCR technology is a powerful tool for implementation of the GMO labeling regulations enforced in the EU. The quality of analytical measurement data obtained by quantitative real-time PCR depends on the correct use of calibrator and reference materials (RMs). For GMO methods of analysis, the choice of appropriate RMs is currently under debate. So far, genomic DNA solutions from certified reference materials (CRMs) are most often used as calibrators for GMO quantification by means of real-time PCR. However, due to some intrinsic features of these CRMs, errors may be expected in the estimations of DNA sequence quantities. In this paper, two new real-time PCR methods are presented for Roundup Ready soybean, in which two types of plasmid DNA fragments are used as calibrators. Single-target plasmids (STPs) diluted in a background of genomic DNA were used in the first method. Multiple-target plasmids (MTPs) containing both sequences in one molecule were used as calibrators for the second method. Both methods simultaneously detect a promoter 35S sequence as GMO-specific target and a lectin gene sequence as endogenous reference target in a duplex PCR. For the estimation of relative GMO percentages both "delta C(T)" and "standard curve" approaches are tested. Delta C(T) methods are based on direct comparison of measured C(T) values of both the GMO-specific target and the endogenous target. Standard curve methods measure absolute amounts of target copies or haploid genome equivalents. A duplex delta C(T) method with STP calibrators performed at least as well as a similar method with genomic DNA calibrators from commercial CRMs. Besides this, high quality results were obtained with a standard curve method using MTP calibrators. This paper demonstrates that plasmid DNA molecules containing either one or multiple target sequences form perfect alternative calibrators for GMO quantification and are especially suitable for duplex PCR reactions.
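
    For the "delta C(T)" approach mentioned above, the relative GMO content is commonly derived from the difference between the GMO-specific and endogenous-gene C(T) values referenced to a calibrator. A minimal sketch is shown below, assuming 100% amplification efficiency (E = 2) and an invented calibrator value; the function and numbers are illustrative, not the paper's method.

```python
def gmo_percent(ct_gmo, ct_endogenous, delta_ct_calibrator, efficiency=2.0):
    """Relative GMO content from a duplex real-time PCR, delta-Ct style.

    delta_ct_calibrator is the (Ct_gmo - Ct_endo) measured on a 100% GMO calibrator,
    e.g. a plasmid carrying both the p35S and lectin targets (hypothetical value below)."""
    delta_ct_sample = ct_gmo - ct_endogenous
    return 100.0 * efficiency ** -(delta_ct_sample - delta_ct_calibrator)

print(gmo_percent(ct_gmo=29.5, ct_endogenous=22.8, delta_ct_calibrator=0.2))  # ~1% GMO
```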

  10. An Accurate Projector Calibration Method Based on Polynomial Distortion Representation

    PubMed Central

    Liu, Miao; Sun, Changku; Huang, Shujun; Zhang, Zonghua

    2015-01-01

    In structured light measurement systems or 3D printing systems, the errors caused by optical distortion of a digital projector always affect the precision performance and cannot be ignored. Existing methods to calibrate the projection distortion rely on a calibration plate and photogrammetry, so the calibration performance is largely affected by the quality of the plate and the imaging system. This paper proposes a new projector calibration approach that makes use of photodiodes to directly detect the light emitted from a digital projector. By analyzing the output sequence of the photoelectric module, the pixel coordinates can be accurately obtained by the curve fitting method. A polynomial distortion representation is employed to reduce the residuals of the traditional distortion representation model. Experimental results and performance evaluation show that the proposed calibration method is able to avoid most of the disadvantages in traditional methods and achieves a higher accuracy. The proposed method is also practically applicable to evaluating the geometric optical performance of other optical projection systems. PMID:26492247

  11. Curved-flow, rolling-flow, and oscillatory pure-yawing wind-tunnel test methods for determination of dynamic stability derivatives

    NASA Technical Reports Server (NTRS)

    Chambers, J. R.; Grafton, S. B.; Lutze, F. H.

    1981-01-01

    The test capabilities of the Stability Wind Tunnel of the Virginia Polytechnic Institute and State University are described, and calibrations for curved and rolling flow techniques are given. Oscillatory snaking tests to determine pure yawing derivatives are considered. Representative aerodynamic data obtained for a current fighter configuration using the curved and rolling flow techniques are presented. The application of dynamic derivatives obtained in such tests to the analysis of airplane motions in general, and to high angle of attack flight conditions in particular, is discussed.

  12. Experimental Determination of the HPGe Spectrometer Efficiency Calibration Curves for Various Sample Geometry for Gamma Energy from 50 keV to 2000 keV

    NASA Astrophysics Data System (ADS)

    Saat, Ahmad; Hamzah, Zaini; Yusop, Mohammad Fariz; Zainal, Muhd Amiruddin

    2010-07-01

    Detection efficiency of a gamma-ray spectrometry system depends, among other factors, on the energy, the sample and detector geometry, and the volume and density of the samples. In the present study, efficiency calibration curves of the newly acquired (August 2008) HPGe gamma-ray spectrometry system were determined for four sample container geometries, namely a Marinelli beaker, disc, cylindrical beaker and vial, normally used for activity determination of gamma rays from environmental samples. Calibration standards were prepared using known amounts of analytical grade uranium trioxide ore, homogenized in plain flour, in the respective containers. The ore produces gamma rays with energies ranging from 53 keV to 1001 keV. Analytical grade potassium chloride was prepared to determine the detection efficiency for the 1460 keV gamma ray emitted by the potassium isotope K-40. Plots of detection efficiency against gamma-ray energy for the four sample geometries were found to fit smoothly to the general form ε = A·E^a + B·E^b, where ε is the efficiency, E is the energy in keV, and A, B, a and b are constants that depend on the sample geometry. All calibration curves showed the presence of a "knee" at about 180 keV. Comparison of the four geometries showed that the efficiency of the Marinelli beaker is higher than those of the cylindrical beaker and vial, while the disc showed the lowest.
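
    The general efficiency form quoted above can be fitted directly with a nonlinear least-squares routine; the sketch below uses scipy's curve_fit on efficiency points that are generated from the model itself (plus noise) purely for illustration, not measured data from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def efficiency(E, A, a, B, b):
    """General form quoted in the abstract: eps = A*E**a + B*E**b."""
    return A * E ** a + B * E ** b

energy = np.array([59.5, 88.0, 122.1, 185.7, 352.0, 609.3, 1001.0, 1460.8])   # keV
# Invented efficiency points for one counting geometry, generated from the model
eff = efficiency(energy, 0.9, -0.75, -4.0, -1.4)
eff = eff + np.random.default_rng(7).normal(0, 0.0005, eff.size)

popt, _ = curve_fit(efficiency, energy, eff, p0=[1.0, -0.7, -3.0, -1.3], maxfev=20000)
print("A, a, B, b =", np.round(popt, 3))
```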

  13. Venous return curves obtained from graded series of valsalva maneuvers

    NASA Technical Reports Server (NTRS)

    Mastenbrook, S. M., Jr.

    1974-01-01

    The effects of a graded series of Valsalva-like maneuvers on venous return were studied; venous return was measured transcutaneously in the jugular vein of an anesthetized dog, with the animal serving as its own control. At each of five different levels of central venous pressure, the airway pressure which just stopped venous return during each series of maneuvers was determined. It was found that this end-point airway pressure is not a good estimator of the animal's resting central venous pressure prior to the simulated Valsalva maneuver. It was further found that the measured change in right atrial pressure during a Valsalva maneuver is less than the change in airway pressure during the same maneuver, instead of being equal, as had been expected. Relative venous return curves were constructed from the data obtained during the graded series of Valsalva maneuvers.

  14. A new calibration code for the JET polarimeter.

    PubMed

    Gelfusa, M; Murari, A; Gaudio, P; Boboc, A; Brombin, M; Orsitto, F P; Giovannozzi, E

    2010-05-01

    An equivalent model of the JET polarimeter is presented, which overcomes the drawbacks of previous versions of the fitting procedures used to provide calibrated results. First, the signal processing electronics were simulated to confirm that they are still working within the original specifications. Then the effective optical path of both the vertical and lateral chords was implemented to produce the calibration curves. This modelling approach yields a single procedure that can be applied to any manual calibration and remains valid until the following one. The optical model of the chords is then applied to derive the plasma measurements. The results are in good agreement with the estimates of the most advanced full-wave propagation code available and have been benchmarked against other diagnostics. The devised procedure has also proved to work properly for the most recent campaigns and high-current experiments.

  15. Calibration and accuracy analysis of a focused plenoptic camera

    NASA Astrophysics Data System (ADS)

    Zeller, N.; Quint, F.; Stilla, U.

    2014-08-01

    In this article we introduce new methods for the calibration of depth images from focused plenoptic cameras and validate the results. We start with a brief description of the concept of a focused plenoptic camera and of how a depth map can be estimated from the recorded raw image. For this camera, an analytical expression of the depth accuracy is derived for the first time. In the main part of the paper, methods to calibrate a focused plenoptic camera are developed and evaluated. The optical imaging process is calibrated by using a method which is already known from the calibration of traditional cameras. For the calibration of the depth map, two new model-based methods, which make use of the projection concept of the camera, are developed. These new methods are compared to a common curve fitting approach, which is based on a Taylor series approximation. Both model-based methods show significant advantages compared to the curve fitting method. They need fewer reference points for calibration than the curve fitting method and, moreover, supply a function which is valid beyond the calibration range. In addition, the depth map accuracy of the plenoptic camera was experimentally investigated for different focal lengths of the main lens and compared to the analytical evaluation.

  16. Simple solution for a complex problem: proanthocyanidins, galloyl glucoses and ellagitannins fit on a single calibration curve in high performance-gel permeation chromatography.

    PubMed

    Stringano, Elisabetta; Gea, An; Salminen, Juha-Pekka; Mueller-Harvey, Irene

    2011-10-28

    This study was undertaken to explore gel permeation chromatography (GPC) for estimating molecular weights of proanthocyanidin fractions isolated from sainfoin (Onobrychis viciifolia). The results were compared with data obtained by thiolytic degradation of the same fractions. Polystyrene, polyethylene glycol and polymethyl methacrylate standards were not suitable for estimating the molecular weights of underivatized proanthocyanidins. Therefore, a novel HPLC-GPC method was developed based on two serially connected PolarGel-L columns using DMF that contained 5% water, 1% acetic acid and 0.15 M LiBr at 0.7 ml/min and 50 °C. This yielded a single calibration curve for galloyl glucoses (trigalloyl glucose, pentagalloyl glucose), ellagitannins (pedunculagin, vescalagin, punicalagin, oenothein B, gemin A), proanthocyanidins (procyanidin B2, cinnamtannin B1), and several other polyphenols (catechin, epicatechin gallate, epigallocatechin gallate, amentoflavone). These GPC-predicted molecular weights represent a considerable advance over previously reported HPLC-GPC methods for underivatized proanthocyanidins. Copyright © 2011 Elsevier B.V. All rights reserved.

  17. Static SPME sampling of VOCs emitted from indoor building materials: prediction of calibration curves of single compounds for two different emission cells.

    PubMed

    Mocho, Pierre; Desauziers, Valérie

    2011-05-01

    Solid-phase microextraction (SPME) is a powerful technique, easy to implement for on-site static sampling of indoor VOCs emitted by building materials. However, a major constraint lies in the establishment of calibration curves, which requires the complex generation of standard atmospheres. The purpose of this paper is therefore to propose a model to predict the adsorption kinetics (i.e., calibration curves) of four model VOCs. The model is based on Fick's laws for the gas phase and on the equilibrium or the solid diffusion model for the adsorptive phase. Two samplers (the FLEC® and a home-made cylindrical emission cell), coupled to SPME for static sampling of material emissions, were studied. Good agreement between the model and experimental data is observed, and the results show the influence of sampling rate on the mass transfer mode as a function of sample volume. The equilibrium model is suited to the larger-volume sampler (the cylindrical cell), while the solid diffusion model is suited to the small-volume sampler (the FLEC®). The limiting steps of mass transfer are gas-phase diffusion for the cylindrical cell and pore surface diffusion for the FLEC®. In the future, this modeling approach could be a useful tool for the time-saving development of SPME methods to study building material emissions in static sampling mode.

  18. Sediment calibration strategies of Phase 5 Chesapeake Bay watershed model

    USGS Publications Warehouse

    Wu, J.; Shenk, G.W.; Raffensperger, Jeff P.; Moyer, D.; Linker, L.C.; ,

    2005-01-01

    Sediment is a primary constituent of concern for Chesapeake Bay due to its effect on water clarity. Accurate representation of sediment processes and behavior in the Chesapeake Bay watershed model is critical for developing sound load reduction strategies. Sediment calibration remains one of the most difficult components of watershed-scale assessment. This is especially true for the Chesapeake Bay watershed model, given the size of the watershed being modeled and the complexity of the land and stream simulation processes. To obtain the best calibration, the Chesapeake Bay program has developed four different strategies for sediment calibration of the Phase 5 watershed model: 1) comparing observed and simulated sediment rating curves for different parts of the hydrograph; 2) analyzing the change of bed depth over time; 3) relating deposition/scour to total annual sediment loads; and 4) calculating "goodness-of-fit" statistics. These strategies allow a more accurate sediment calibration, and also provide some insightful information on sediment processes and behavior in the Chesapeake Bay watershed.

  19. Light curves of flat-spectrum radio sources (Jenness+, 2010)

    NASA Astrophysics Data System (ADS)

    Jenness, T.; Robson, E. I.; Stevens, J. A.

    2010-05-01

    Calibrated data for 143 flat-spectrum extragalactic radio sources are presented at a wavelength of 850 μm, covering a 5-yr period from 2000 April. The data, obtained at the James Clerk Maxwell Telescope using the Submillimetre Common-User Bolometer Array (SCUBA) camera in pointing mode, were analysed using an automated pipeline process based on the Observatory Reduction and Acquisition Control - Data Reduction (ORAC-DR) system. This paper describes the techniques used to analyse and calibrate the data, and presents the database of results along with a representative sample of the better-sampled light curves. A re-analysis of previously published data from 1997 to 2000 is also presented. The combined catalogue, comprising 10493 flux density measurements, provides a unique and valuable resource for studies of extragalactic radio sources. (2 data files).

  20. Energy calibration of the fly's eye detector

    NASA Technical Reports Server (NTRS)

    Baltrusaitis, R. M.; Cassiday, G. L.; Cooper, R.; Elbert, J. W.; Gerhardy, P. R.; Ko, S.; Loh, E. C.; Mizumoto, Y.; Sokolsky, P.; Steck, D.

    1985-01-01

    The methods used to calibrate the Fly's Eye detector to evaluate the energy of EAS are discussed. The energy of extensive air showers (EAS) as seen by the Fly's Eye detector is obtained from track length integrals of observed shower development curves. The energy of the parent cosmic ray primary is estimated by applying corrections to account for undetected energy in the muon, neutrino and hadronic channels. Absolute values for E depend upon the measurement of shower sizes N_e(x). The following items are necessary to convert apparent optical brightness into intrinsic optical brightness: (1) an assessment of those factors responsible for light production by the relativistic electrons in an EAS and the transmission of light through the atmosphere, (2) calibration of the optical detection system, and (3) a knowledge of the trajectory of the shower.

  1. A simple topography-driven, calibration-free runoff generation model

    NASA Astrophysics Data System (ADS)

    Gao, H.; Birkel, C.; Hrachowitz, M.; Tetzlaff, D.; Soulsby, C.; Savenije, H. H. G.

    2017-12-01

    Determining the amount of runoff generation from rainfall occupies a central place in rainfall-runoff modelling. Moreover, reading landscapes and developing calibration-free runoff generation models that adequately reflect land surface heterogeneities remains the focus of much hydrological research. In this study, we created a new method to estimate runoff generation - the HAND-based Storage Capacity curve (HSC) - which uses a topographic index (HAND, Height Above the Nearest Drainage) to identify hydrological similarity and the extent of saturated areas in catchments. We then coupled the HSC model with the Mass Curve Technique (MCT) method to estimate root zone storage capacity (SuMax), and obtained the calibration-free runoff generation model HSC-MCT. Both models (HSC and HSC-MCT) allow us to estimate runoff generation and simultaneously visualize the spatial dynamics of the saturated area. We tested the two models in the data-rich Bruntland Burn (BB) experimental catchment in Scotland, which has an unusual time series of field-mapped saturated area extent. The models were subsequently tested in 323 MOPEX (Model Parameter Estimation Experiment) catchments in the United States. HBV and TOPMODEL were used as benchmarks. We found that the HSC performed better in reproducing the spatio-temporal pattern of the observed saturated areas in the BB catchment compared with TOPMODEL, which is based on the topographic wetness index (TWI). The HSC also outperformed HBV and TOPMODEL in the MOPEX catchments for both calibration and validation. Despite having no calibrated parameters, the HSC-MCT model performed comparably well with the calibrated HBV and TOPMODEL, highlighting both the robustness of the HSC model in describing the spatial distribution of the root zone storage capacity and the efficiency of the MCT method in estimating SuMax. Moreover, the HSC-MCT model facilitated effective visualization of the saturated area, which has the potential to be used for broader

  2. Towards a global network of gamma-ray detector calibration facilities

    NASA Astrophysics Data System (ADS)

    Tijs, Marco; Koomans, Ronald; Limburg, Han

    2016-09-01

    Gamma-ray logging tools are applied worldwide. At various locations, calibration facilities are used to calibrate these gamma-ray logging systems. Several attempts have been made to cross-correlate well-known calibration pits, but this cross-correlation does not include calibration facilities in Europe or private company calibration facilities. Our aim is to set up a framework that makes it possible to interlink all calibration facilities worldwide by using `tools of opportunity' - tools that have been calibrated in different calibration facilities, whether this usage was on a coordinated basis or by coincidence. To compare the measurements of different tools, it is important to understand the behaviour of the tools in the different calibration pits. Borehole properties, such as diameter, fluid, casing and probe diameter, strongly influence the outcome of gamma-ray borehole logging. Logs need to be properly calibrated and compensated for these borehole properties in order to obtain in-situ grades or to do cross-hole correlation. Some tool providers provide tool-specific correction curves for this purpose. Others rely on reference measurements against sources of known radionuclide concentration and geometry. In this article, we present an attempt to set up a framework for transferring `local' calibrations to be applied `globally'. This framework includes corrections for any geometry and detector size to give absolute concentrations of radionuclides from borehole measurements. This model is used to compare measurements in the calibration pits of Grand Junction, located in the USA; Adelaide (previously known as AMDEL), located in Adelaide, Australia; and Stonehenge, located at Medusa Explorations BV in the Netherlands.

  3. Antenna Calibration and Measurement Equipment

    NASA Technical Reports Server (NTRS)

    Rochblatt, David J.; Cortes, Manuel Vazquez

    2012-01-01

    A document describes the Antenna Calibration & Measurement Equipment (ACME) system that will provide the Deep Space Network (DSN) with instrumentation enabling a trained RF engineer at each complex to perform antenna calibration measurements and to generate antenna calibration data. This data includes continuous-scan auto-bore-based data acquisition with all-sky data gathering in support of 4th order pointing model generation requirements. Other data include antenna subreflector focus, system noise temperature and tipping curves, antenna efficiency, system linearity, and instrument calibration. The ACME system design is based on the on-the-fly (OTF) mapping technique and architecture. ACME has contributed to improving the RF performance of the DSN by approximately a factor of two. It improved the pointing performance of the DSN antennas and the productivity of its personnel and calibration engineers.

  4. Soil specific re-calibration of water content sensors for a field-scale sensor network

    NASA Astrophysics Data System (ADS)

    Gasch, Caley K.; Brown, David J.; Anderson, Todd; Brooks, Erin S.; Yourek, Matt A.

    2015-04-01

    point) were represented in the sensor readings. We anticipate that obtaining water retention curves for field soils will improve the re-calibration accuracy by providing more precise estimates of saturation, field capacity, and wilting point. This approach may serve as an alternative method for sensor calibration in lieu of or to complement pre-installation calibration.

  5. Marine04 Marine radiocarbon age calibration, 26 - 0 ka BP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughen, K; Baille, M; Bard, E

    2004-11-01

    New radiocarbon calibration curves, IntCal04 and Marine04, have been constructed and internationally ratified to replace the terrestrial and marine components of IntCal98. The new calibration datasets extend an additional 2000 years, from 0-26 ka cal BP (Before Present, 0 cal BP = AD 1950), and provide much higher resolution, greater precision and more detailed structure than IntCal98. For the Marine04 curve, dendrochronologically dated tree-ring samples, converted with a box-diffusion model to marine mixed-layer ages, cover the period from 0-10.5 ka cal BP. Beyond 10.5 ka cal BP, high-resolution marine data become available from foraminifera in varved sediments and U/Th-dated corals. The marine records are corrected with site-specific ¹⁴C reservoir age information to provide a single global marine mixed-layer calibration from 10.5-26.0 ka cal BP. A substantial enhancement relative to IntCal98 is the introduction of a random walk model, which takes into account the uncertainty in both the calendar age and the radiocarbon age to calculate the underlying calibration curve. The marine datasets and calibration curve for marine samples from the surface mixed layer (Marine04) are discussed here. The tree-ring datasets, sources of uncertainty, and regional offsets are presented in detail in a companion paper by Reimer et al.

  6. Disease risk curves.

    PubMed

    Hughes, G; Burnett, F J; Havis, N D

    2013-11-01

    Disease risk curves are simple graphical relationships between the probability of need for treatment and evidence related to risk factors. In the context of the present article, our focus is on factors related to the occurrence of disease in crops. Risk is the probability of adverse consequences; specifically in the present context it denotes the chance that disease will reach a threshold level at which crop protection measures can be justified. This article describes disease risk curves that arise when risk is modeled as a function of more than one risk factor, and when risk is modeled as a function of a single factor (specifically the level of disease at an early disease assessment). In both cases, disease risk curves serve as calibration curves that allow the accumulated evidence related to risk to be expressed on a probability scale. When risk is modeled as a function of the level of disease at an early disease assessment, the resulting disease risk curve provides a crop loss assessment model in which the downside is denominated in terms of risk rather than in terms of yield loss.
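
    A disease risk curve of the single-factor kind described above can be sketched as a logistic calibration of early disease severity against the later need for treatment; the data, threshold, and severity scale below are invented purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hedged sketch: a disease risk curve as a logistic calibration of one risk factor
# (early disease severity, % leaf area affected) against whether disease later
# exceeded the treatment threshold. All numbers are invented for illustration.
early = np.array([0.1, 0.3, 0.5, 0.8, 1.0, 1.5, 2.0, 2.5, 3.0, 4.0]).reshape(-1, 1)
exceeded = np.array([0, 0, 0, 0, 1, 0, 1, 1, 1, 1])   # 1 = threshold reached later

model = LogisticRegression().fit(early, exceeded)
grid = np.linspace(0.0, 5.0, 11).reshape(-1, 1)
risk = model.predict_proba(grid)[:, 1]                 # the "disease risk curve"
for severity, p in zip(grid.ravel(), risk):
    print(f"early severity {severity:.1f}% -> P(treatment needed) = {p:.2f}")
```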

  7. Decomposition and correction overlapping peaks of LIBS using an error compensation method combined with curve fitting.

    PubMed

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-09-01

    The laser-induced breakdown spectroscopy (LIBS) technique is an effective method to detect material composition by obtaining the plasma emission spectrum. Overlapping peaks in the spectrum are a fundamental problem in the qualitative and quantitative analysis of LIBS. Based on a curve fitting method, this paper studies an error compensation method to achieve the decomposition and correction of overlapping peaks. The key step is that the fitting residual is fed back into the overlapping peaks and multiple curve fitting passes are performed to obtain a lower residual. For the quantitative experiments on Cu, the Cu-Fe overlapping peaks in the range of 321-327 nm obtained from the LIBS spectrum of five different concentrations of CuSO4·5H2O solution were decomposed and corrected using the curve fitting and error compensation methods. Compared with the curve fitting method alone, the error compensation reduced the fitting residual by about 18.12-32.64% and improved the correlation by about 0.86-1.82%. Then, the calibration curve between the intensity and concentration of Cu was established. It can be seen that the error compensation method exhibits a higher linear correlation between the intensity and concentration of Cu, which can be applied to the decomposition and correction of overlapping peaks in the LIBS spectrum.
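
    The following sketch illustrates the residual-feedback idea on a synthetic two-peak overlap: an initial simultaneous Gaussian fit is refined by repeatedly refitting each peak against the data minus the current estimate of the other. The wavelengths, amplitudes, and iteration scheme are assumptions for illustration, not the authors' exact algorithm.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, a, mu, sig):
    return a * np.exp(-(x - mu) ** 2 / (2.0 * sig ** 2))

def doublet(x, a1, m1, s1, a2, m2, s2):
    return gauss(x, a1, m1, s1) + gauss(x, a2, m2, s2)

# Synthetic overlapping "Cu/Fe" doublet in the 321-327 nm window (invented numbers).
x = np.linspace(321.0, 327.0, 400)
truth = gauss(x, 1.0, 324.75, 0.35) + gauss(x, 0.6, 323.4, 0.4)
y = truth + np.random.default_rng(0).normal(0.0, 0.01, x.size)

# Initial simultaneous fit of both peaks.
p, _ = curve_fit(doublet, x, y, p0=[0.8, 324.7, 0.3, 0.5, 323.5, 0.3])

# "Error compensation": refit each peak against the data minus the other peak,
# feeding the residual back until the total residual stops improving.
for _ in range(5):
    p1, _ = curve_fit(gauss, x, y - gauss(x, *p[3:]), p0=p[:3])
    p2, _ = curve_fit(gauss, x, y - gauss(x, *p1), p0=p[3:])
    p = np.concatenate([p1, p2])

print("sum of squared residuals:", np.sum((y - doublet(x, *p)) ** 2))
```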

  8. A New Approach for Obtaining Cosmological Constraints from Type Ia Supernovae using Approximate Bayesian Computation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jennings, Elise; Wolf, Rachel; Sako, Masao

    2016-11-09

    Cosmological parameter estimation techniques that robustly account for systematic measurement uncertainties will be crucial for the next generation of cosmological surveys. We present a new analysis method, superABC, for obtaining cosmological constraints from Type Ia supernova (SN Ia) light curves using Approximate Bayesian Computation (ABC) without any likelihood assumptions. The ABC method works by using a forward model simulation of the data where systematic uncertainties can be simulated and marginalized over. A key feature of the method presented here is the use of two distinct metrics, the `Tripp' and `Light Curve' metrics, which allow us to compare the simulated data to the observed data set. The Tripp metric takes as input the parameters of models fit to each light curve with the SALT-II method, whereas the Light Curve metric uses the measured fluxes directly without model fitting. We apply the superABC sampler to a simulated data set of ~1000 SNe corresponding to the first season of the Dark Energy Survey Supernova Program. Varying Ωm, w0, α, β and a magnitude offset parameter, with no systematics we obtain Δ(w0) = w0(true) - w0(best fit) = -0.036 ± 0.109 (a ~11% 1σ uncertainty) using the Tripp metric and Δ(w0) = -0.055 ± 0.068 (a ~7% 1σ uncertainty) using the Light Curve metric. Including 1% calibration uncertainties in four passbands, adding 4 more parameters, we obtain Δ(w0) = -0.062 ± 0.132 (a ~14% 1σ uncertainty) using the Tripp metric. Overall we find a 17% increase in the uncertainty on w0 with systematics compared to without. We contrast this with an MCMC approach where systematic effects are approximately included. We find that the MCMC method slightly underestimates the impact of calibration uncertainties for this simulated data set.
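
    The core ABC idea, drawing parameters from the prior, forward-simulating data, and accepting draws whose summary statistic lies within a tolerance of the observed one, can be sketched with a toy Gaussian model; this is not the superABC code, its forward model, or its metrics.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy ABC rejection sampler: infer the mean of Gaussian data without a likelihood.
observed = rng.normal(0.3, 1.0, size=200)
obs_summary = observed.mean()

def simulate(theta, n=200):
    """Forward model: generate a synthetic data set for parameter theta."""
    return rng.normal(theta, 1.0, size=n)

accepted = []
for _ in range(20000):
    theta = rng.uniform(-2.0, 2.0)             # draw from the prior
    sim_summary = simulate(theta).mean()       # summary statistic of simulated data
    if abs(sim_summary - obs_summary) < 0.05:  # distance metric and tolerance
        accepted.append(theta)

print(f"approximate posterior mean = {np.mean(accepted):.3f}, accepted draws = {len(accepted)}")
```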

  9. Obtaining changes in calibration-coil to seismometer output constants using sine waves

    USGS Publications Warehouse

    Ringler, Adam T.; Hutt, Charles R.; Gee, Lind S.; Sandoval, Leo D.; Wilson, David C.

    2013-01-01

    The midband sensitivity of a broadband seismometer is one of the most commonly used parameters from station metadata. Thus, it is critical for station operators to robustly estimate this quantity with a high degree of accuracy. We develop an in situ method for estimating changes in sensitivity using sine‐wave calibrations, assuming the calibration coil and its drive are stable over time and temperature. This approach has been used in the past for passive instruments (e.g., geophones) but has not been applied, to our knowledge, to derive sensitivities of modern force‐feedback broadband seismometers. We are able to detect changes in sensitivity to well within 1%, and our method is capable of detecting these sensitivity changes using any frequency of sine calibration within the passband of the instrument.
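
    A minimal sketch of the underlying idea, assuming the calibration-coil drive is identical in both epochs: fit the amplitude of the known-frequency sine in each calibration record and take the amplitude ratio as the relative sensitivity change. All numbers are invented for illustration.

```python
import numpy as np

def sine_amplitude(t, y, freq):
    """Least-squares amplitude of a sine of known frequency (unknown phase and offset)."""
    design = np.column_stack([np.sin(2 * np.pi * freq * t),
                              np.cos(2 * np.pi * freq * t),
                              np.ones_like(t)])
    coeffs, *_ = np.linalg.lstsq(design, y, rcond=None)
    return np.hypot(coeffs[0], coeffs[1])

# Illustrative numbers only: the same coil drive replayed a year apart.
t = np.arange(0.0, 60.0, 0.01)
rng = np.random.default_rng(2)
ref = 1000.0 * np.sin(2 * np.pi * 0.1 * t + 0.3) + rng.normal(0, 5, t.size)
new = 985.0 * np.sin(2 * np.pi * 0.1 * t + 1.1) + rng.normal(0, 5, t.size)

ratio = sine_amplitude(t, new, 0.1) / sine_amplitude(t, ref, 0.1)
print(f"relative sensitivity change: {(ratio - 1) * 100:+.2f}%")   # roughly -1.5%
```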

  10. 40 CFR 90.321 - NDIR analyzer calibration.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... curve. Develop a calibration curve for each range used as follows: (1) Zero the analyzer. (2) Span the... zero response. If it has changed more than 0.5 percent of full scale, repeat the steps given in... the form of the following equation (1) or (2). Include zero as a data point. Compensation for known...

  11. SU-F-T-368: Improved HPGe Detector Precise Efficiency Calibration with Monte Carlo Simulations and Radioactive Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhai, Y. John

    2016-06-15

    Purpose: To obtain an improved precise gamma efficiency calibration curve of an HPGe (High Purity Germanium) detector with a new comprehensive approach. Methods: Both radioactive sources and Monte Carlo simulation (CYLTRAN) are used to determine the HPGe gamma efficiency for the energy range of 0-8 MeV. The HPGe is a GMX coaxial 280 cm³ N-type 70% gamma detector. Using the Momentum Achromat Recoil Spectrometer (MARS) at the K500 superconducting cyclotron of Texas A&M University, the radioactive nucleus ²⁴Al was produced and separated. This nucleus has positron decays followed by gamma transitions up to 8 MeV from ²⁴Mg excited states, which is used for the HPGe efficiency calibration. Results: With the ²⁴Al gamma energy spectrum up to 8 MeV, the efficiency for the 7.07 MeV γ ray at 4.9 cm distance from the radioactive source ²⁴Al was obtained at a value of 0.194(4)%, by carefully considering various factors such as positron annihilation, peak summing effect, beta detector efficiency and internal conversion effect. The Monte Carlo simulation (CYLTRAN) gave a value of 0.189%, which was in agreement with the experimental measurements. Applying this to different energy points, a precise efficiency calibration curve of the HPGe detector up to 7.07 MeV at 4.9 cm distance from the source ²⁴Al was obtained. Using the same data analysis procedure, the efficiency for the 7.07 MeV gamma ray at 15.1 cm from the source ²⁴Al was obtained at a value of 0.0387(6)%. The MC simulation gave a similar value of 0.0395%. This discrepancy led us to assign an uncertainty of 3% to the efficiency at 15.1 cm up to 7.07 MeV. The MC calculations also reproduced the intensity of the observed single- and double-escape peaks, provided that the effects of positron annihilation-in-flight were incorporated. Conclusion: The precision-improved gamma efficiency calibration curve provides more accurate radiation detection and dose calculation for cancer radiotherapy.

  12. Film calibration for soft x-ray wavelengths

    NASA Astrophysics Data System (ADS)

    Tallents, Gregory J.; Krishnan, J.; Dwivedi, L.; Neely, David; Turcu, I. C. Edmond

    1997-10-01

    The response of photographic film to X-rays from laser plasmas is of practical interest. Film is often used for the ultimate detection of x-rays in crystal and grating spectrometers and in imaging instruments such as pinhole cameras, largely because of its high spatial resolution (approximately 1-10 microns). Characteristic curves at wavelengths of 3 nm and 23 nm are presented for eight x-ray films (Kodak 101-01, 101-07, 104-02, Kodak Industrex CX, Russian UF-SH4, UF-VR2, Ilford Q plates and Shanghai 5F film). The calibrations were obtained from the emission of laser-produced carbon plasmas and a Ne-like Ge X-ray laser.

  13. Calibration of the forward-scattering spectrometer probe - Modeling scattering from a multimode laser beam

    NASA Technical Reports Server (NTRS)

    Hovenac, Edward A.; Lock, James A.

    1993-01-01

    Scattering calculations using a detailed model of the multimode laser beam in the forward-scattering spectrometer probe (FSSP) were carried out using a recently developed extension to Mie scattering theory. From this model, new calibration curves for the FSSP were calculated. The difference between the old calibration curves and the new ones is small for droplet diameters less than 10 microns, but the difference increases to approximately 10 percent at diameters of 50 microns. When using glass beads to calibrate the FSSP, calibration errors can be minimized by using glass beads of many different diameters, over the entire range of the FSSP. If the FSSP is calibrated using one-diameter glass beads, then the new formalism is necessary to extrapolate the calibration over the entire range.

  14. Calibration of the Forward-scattering Spectrometer Probe: Modeling Scattering from a Multimode Laser Beam

    NASA Technical Reports Server (NTRS)

    Hovenac, Edward A.; Lock, James A.

    1993-01-01

    Scattering calculations using a more detailed model of the multimode laser beam in the forward-scattering spectrometer probe (FSSP) were carried out by using a recently developed extension to Mie scattering theory. From this model, new calibration curves for the FSSP were calculated. The difference between the old calibration curves and the new ones is small for droplet diameters less than 10 micrometers, but the difference increases to approximately 10% at diameters of 50 micrometers. When using glass beads to calibrate the FSSP, calibration errors can be minimized, by using glass beads of many different diameters, over the entire range of the FSSP. If the FSSP is calibrated using one-diameter glass beads, then the new formalism is necessary to extrapolate the calibration over the entire range.

  15. Calibration-free optical chemical sensors

    DOEpatents

    DeGrandpre, Michael D.

    2006-04-11

    An apparatus and method for taking absorbance-based chemical measurements are described. In a specific embodiment, an indicator-based pCO2 (partial pressure of CO2) sensor displays sensor-to-sensor reproducibility and measurement stability. These qualities are achieved by: 1) renewing the sensing solution, 2) allowing the sensing solution to reach equilibrium with the analyte, and 3) calculating the response from a ratio of the indicator solution absorbances which are determined relative to a blank solution. Careful solution preparation, wavelength calibration, and stray light rejection also contribute to this calibration-free system. Three pCO2 sensors were calibrated and each had response curves which were essentially identical within the uncertainty of the calibration. Long-term laboratory and field studies showed the response had no drift over extended periods (months). The theoretical response, determined from thermodynamic characterization of the indicator solution, also predicted the observed calibration-free performance.

  16. Calibration Adjustment of the Mid-infrared Analyzer for an Accurate Determination of the Macronutrient Composition of Human Milk.

    PubMed

    Billard, Hélène; Simon, Laure; Desnots, Emmanuelle; Sochard, Agnès; Boscher, Cécile; Riaublanc, Alain; Alexandre-Gouabau, Marie-Cécile; Boquien, Clair-Yves

    2016-08-01

    Human milk composition analysis seems essential to adapt human milk fortification for preterm neonates. The Miris human milk analyzer (HMA), based on mid-infrared methodology, is convenient for a single determination of macronutrients. However, HMA measurements are not fully comparable with reference methods (RMs). The primary aim of this study was to compare HMA results with results from biochemical RMs over a large range of protein, fat, and carbohydrate contents and to establish a calibration adjustment. Human milk was fractionated into protein, fat, and skim milk fractions covering large ranges of protein (0-3 g/100 mL), fat (0-8 g/100 mL), and carbohydrate (5-8 g/100 mL). For each macronutrient, a calibration curve was plotted by linear regression using measurements obtained with the HMA and RMs. For fat, 53 measurements were performed, and the linear regression equation was HMA = 0.79RM + 0.28 (R² = 0.92). For true protein (29 measurements), the linear regression equation was HMA = 0.9RM + 0.23 (R² = 0.98). For carbohydrate (15 measurements), the linear regression equation was HMA = 0.59RM + 1.86 (R² = 0.95). A homogenization step with a disruptor coupled to a sonication step was necessary to obtain better accuracy of the measurements. Good repeatability (coefficient of variation < 7%) and reproducibility (coefficient of variation < 17%) were obtained after calibration adjustment. New calibration curves were developed for the Miris HMA, allowing accurate measurements over large ranges of macronutrient content. This is necessary for reliable use of this device in individualizing nutrition for preterm newborns. © The Author(s) 2015.
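
    Assuming the quoted regressions are used to convert an analyzer reading into a reference-method-equivalent value (the direction of the inversion is our assumption, not stated in the abstract), the adjustment can be sketched as:

```python
# Calibration-adjustment sketch using the regression equations quoted in the abstract
# (HMA = a*RM + b); inverting them maps an analyzer reading onto a
# reference-method-equivalent value. Units are g/100 mL.
coefficients = {            # macronutrient: (a, b) from the abstract
    "fat":          (0.79, 0.28),
    "true_protein": (0.90, 0.23),
    "carbohydrate": (0.59, 1.86),
}

def adjust(reading: float, nutrient: str) -> float:
    a, b = coefficients[nutrient]
    return (reading - b) / a

print(adjust(3.4, "fat"))            # analyzer reads 3.4 -> ~3.95 g/100 mL RM-equivalent
print(adjust(1.3, "true_protein"))   # ~1.19 g/100 mL
```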

  17. Analysis of calibration accuracy of cameras with different target sizes for large field of view

    NASA Astrophysics Data System (ADS)

    Zhang, Jin; Chai, Zhiwen; Long, Changyu; Deng, Huaxia; Ma, Mengchao; Zhong, Xiang; Yu, Huan

    2018-03-01

    Visual measurement plays an increasingly important role in the fields of aerospace, shipbuilding and machinery manufacturing. Camera calibration for a large field of view is a critical part of visual measurement. A large-scale target is difficult to produce and its precision cannot be guaranteed, while a small target can be produced with high precision but yields only locally optimal solutions. Therefore, the most suitable ratio of target size to camera field of view must be studied to ensure that the calibration precision requirement for a wide field of view is met. In this paper, the cameras are calibrated using a series of checkerboard and circular calibration targets of different dimensions. The ratios of the target size to the camera field of view are 9%, 18%, 27%, 36%, 45%, 54%, 63%, 72%, 81% and 90%. The target is placed at different positions in the camera field to obtain the camera parameters for each position. Then, the distribution curves of the mean reprojection error of the reconstructed feature points at the different ratios are analyzed. The experimental data demonstrate that as the ratio of the target size to the camera field of view increases, the calibration precision improves accordingly, and the mean reprojection error changes only slightly once the ratio exceeds 45%.

  18. Quantitative analysis of essential oils in perfume using multivariate curve resolution combined with comprehensive two-dimensional gas chromatography.

    PubMed

    de Godoy, Luiz Antonio Fonseca; Hantao, Leandro Wang; Pedroso, Marcio Pozzobon; Poppi, Ronei Jesus; Augusto, Fabio

    2011-08-05

    The use of multivariate curve resolution (MCR) to build multivariate quantitative models using data obtained from comprehensive two-dimensional gas chromatography with flame ionization detection (GC×GC-FID) is presented and evaluated. The MCR algorithm presents some important features, such as the second-order advantage and the recovery of the instrumental response for each pure component after optimization by an alternating least squares (ALS) procedure. A model to quantify the essential oil of rosemary was built using a calibration set containing only known concentrations of the essential oil and cereal alcohol as solvent. A calibration curve correlating the concentration of the essential oil of rosemary and the instrumental response obtained from the MCR-ALS algorithm was obtained, and this calibration model was applied to predict the concentration of the oil in complex samples (mixtures of the essential oil, pineapple essence and commercial perfume). The values of the root mean square error of prediction (RMSEP) and of the root mean square error of the percentage deviation (RMSPD) obtained were 0.4% (v/v) and 7.2%, respectively. Additionally, a second model was built and used to evaluate the accuracy of the method. A model to quantify the essential oil of lemon grass was built and its concentration was predicted in the validation set and in real perfume samples. The RMSEP and RMSPD obtained were 0.5% (v/v) and 6.9%, respectively, and the concentration of the essential oil of lemon grass in the perfume agreed with the value reported by the manufacturer. The results indicate that the MCR algorithm is adequate to resolve the target chromatogram from the complex sample and to build multivariate models of GC×GC-FID data. Copyright © 2011 Elsevier B.V. All rights reserved.

  19. Comment on "Radiocarbon Calibration Curve Spanning 0 to 50,000 Years B.P. Based on Paired 230Th/234U/238U and 14C Dates on Pristine Corals" by R.G. Fairbanks, R. A. Mortlock, T.-C. Chiu, L. Cao, A. Kaplan, T. P. Guilderson, T. W. Fairbanks, A. L. Bloom, P

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reimer, P J; Baillie, M L; Bard, E

    2005-10-02

    Radiocarbon calibration curves are essential for converting radiocarbon dated chronologies to the calendar timescale. Prior to the 1980s numerous differently derived calibration curves based on radiocarbon ages of known age material were in use, resulting in ''apples and oranges'' comparisons between various records (Klein et al., 1982), further complicated by until then unappreciated inter-laboratory variations (International Study Group, 1982). The solution was to produce an internationally-agreed calibration curve based on carefully screened data with updates at 4-6 year intervals (Klein et al., 1982; Stuiver and Reimer, 1986; Stuiver and Reimer, 1993; Stuiver et al., 1998). The IntCal working group has continued this tradition with the active participation of researchers who produced the records that were considered for incorporation into the current, internationally-ratified calibration curves, IntCal04, SHCal04, and Marine04, for Northern Hemisphere terrestrial, Southern Hemisphere terrestrial, and marine samples, respectively (Reimer et al., 2004; Hughen et al., 2004; McCormac et al., 2004). Fairbanks et al. (2005), accompanied by a more technical paper, Chiu et al. (2005), and an introductory comment, Adkins (2005), recently published a ''calibration curve spanning 0-50,000 years''. Fairbanks et al. (2005) and Chiu et al. (2005) have made a significant contribution to the database on which the IntCal04 and Marine04 calibration curves are based. These authors have now taken the further step of deriving their own radiocarbon calibration extending to 50,000 cal BP, which they claim is superior to that generated by the IntCal working group. In their papers, these authors are strongly critical of the IntCal calibration efforts for what they claim to be inadequate screening and sample pretreatment methods. While these criticisms may ultimately be helpful in identifying a better set of protocols, we feel that there are also several erroneous and misleading

  20. Linearization of Positional Response Curve of a Fiber-optic Displacement Sensor

    NASA Astrophysics Data System (ADS)

    Babaev, O. G.; Matyunin, S. A.; Paranin, V. D.

    2018-01-01

    Currently, the creation of optical measuring instruments and sensors for measuring linear displacement is one of the most relevant problems in the area of instrumentation. Fiber-optic contactless sensors based on the magneto-optical effect are of special interest. They are essentially contactless, non-electrical and have a closed optical channel not subject to contamination. The main problem with this type of sensor is the non-linearity of its positional response curve, due to the hyperbolic nature of the variation in magnetic field intensity induced by moving the magnetic source mounted on the controlled object relative to the sensing element. This paper discusses an algorithmic method of linearizing the positional response curve of fiber-optic displacement sensors in any selected range of the displacements to be measured. The method is divided into two stages: 1 - definition of the calibration function, 2 - measurement and linearization of the positional response curve (including its temperature stabilization). The algorithm under consideration significantly reduces the number of points of the calibration function, which is essential for calibrating the temperature dependence, by using points that may deviate randomly from uniformly spaced grid points. Subsequent interpolation of the deviating points and piecewise linear-plane approximation of the calibration function reduce the microcontroller memory required to store the calibration function and the time required to process the measurement results. The paper also presents experimental results of testing real samples of fiber-optic displacement sensors.
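
    The linearization stage can be sketched as a piecewise-linear inversion of a stored calibration function; the calibration points below are invented, and the real method additionally handles temperature stabilization.

```python
import numpy as np

# Hedged sketch of the linearization idea: store a sparse calibration function
# (sensor output vs true displacement) and invert it at run time by piecewise-linear
# interpolation. Calibration points are invented for illustration.
displacement_mm = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])        # reference positions
sensor_output   = np.array([1.00, 0.62, 0.44, 0.34, 0.27, 0.23])   # hyperbolic-like decay

def linearized_position(reading: float) -> float:
    # np.interp needs increasing x, so interpolate on the reversed (monotonic) arrays.
    return float(np.interp(reading, sensor_output[::-1], displacement_mm[::-1]))

print(linearized_position(0.5))   # ~3.3 mm, linear between the 2 mm and 4 mm points
```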

  1. Calibration of a Modified Andersen Bacterial Aerosol Sampler

    PubMed Central

    May, K. R.

    1964-01-01

    A study of the flow regime in the commercial Andersen sampler revealed defects in the sampling of the larger airborne particles. Satisfactory sampling was obtained by redesigning the hole pattern of the top stages and adding one more stage to extend the range of the instrument. A new, rational hole pattern is suggested for the lower stages. With both patterns a special colony-counting mask can be used to facilitate the assay. A calibration of the modified system is presented which enables particle size distribution curves to be drawn from the colony counts. PMID:14106938

  2. Can hydraulic-modelled rating curves reduce uncertainty in high flow data?

    NASA Astrophysics Data System (ADS)

    Westerberg, Ida; Lam, Norris; Lyon, Steve W.

    2017-04-01

    Flood risk assessments rely on accurate discharge data records. Establishing a reliable rating curve for calculating discharge from stage at a gauging station normally takes years of data collection efforts. Estimation of high flows is particularly difficult as high flows occur rarely and are often practically difficult to gauge. Hydraulically-modelled rating curves can be derived based on as few as two concurrent stage-discharge and water-surface slope measurements at different flow conditions. This means that a reliable rating curve can, potentially, be derived much faster than a traditional rating curve based on numerous stage-discharge gaugings. In this study we compared the uncertainty in discharge data that resulted from these two rating curve modelling approaches. We applied both methods to a Swedish catchment, accounting for uncertainties in the stage-discharge gauging and water-surface slope data for the hydraulic model and in the stage-discharge gauging data and rating-curve parameters for the traditional method. We focused our analyses on high-flow uncertainty and the factors that could reduce this uncertainty. In particular, we investigated which data uncertainties were most important, and at what flow conditions the gaugings should preferably be taken. First results show that the hydraulically-modelled rating curves were more sensitive to uncertainties in the calibration measurements of discharge than of water-surface slope. The uncertainty of the hydraulically-modelled rating curves was lowest within the range of the three calibration stage-discharge gaugings (i.e. between median and two-times median flow), whereas uncertainties were higher outside of this range. For instance, at the highest observed stage of the 24-year stage record, the 90% uncertainty band was -15% to +40% of the official rating curve. Additional gaugings at high flows (i.e. four to five times median flow) would likely substantially reduce those uncertainties. These first results show
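
    For contrast with the hydraulic approach, a traditional rating curve is typically a power law fitted to stage-discharge gaugings; the sketch below fits Q = a(h - h0)^b to invented gaugings and extrapolates to a high flow, the region where the uncertainties discussed above are largest.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hedged sketch of a traditional power-law rating curve, Q = a * (h - h0)**b,
# fitted to a handful of stage-discharge gaugings (numbers invented for illustration).
stage = np.array([0.42, 0.55, 0.70, 0.88, 1.10, 1.35])        # stage h, m
discharge = np.array([0.27, 1.0, 2.3, 4.4, 8.1, 13.0])        # discharge Q, m^3/s

def rating(h, a, h0, b):
    return a * (h - h0) ** b

params, _ = curve_fit(rating, stage, discharge, p0=[10.0, 0.25, 1.5],
                      bounds=([0.1, 0.0, 0.5], [100.0, 0.40, 4.0]))
a, h0, b = params
print(f"Q = {a:.1f} * (h - {h0:.2f})^{b:.2f}")
print("extrapolated high-flow estimate at h = 2.0 m:", round(rating(2.0, *params), 1), "m^3/s")
```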

  3. Direct Breakthrough Curve Prediction From Statistics of Heterogeneous Conductivity Fields

    NASA Astrophysics Data System (ADS)

    Hansen, Scott K.; Haslauer, Claus P.; Cirpka, Olaf A.; Vesselinov, Velimir V.

    2018-01-01

    This paper presents a methodology to predict the shape of solute breakthrough curves in heterogeneous aquifers at early times and/or under high degrees of heterogeneity, both cases in which the classical macrodispersion theory may not be applicable. The methodology relies on the observation that breakthrough curves in heterogeneous media are generally well described by lognormal distributions, and mean breakthrough times can be predicted analytically. The log-variance of solute arrival is thus sufficient to completely specify the breakthrough curves, and this is calibrated as a function of aquifer heterogeneity and dimensionless distance from a source plane by means of Monte Carlo analysis and statistical regression. Using the ensemble of simulated groundwater flow and solute transport realizations employed to calibrate the predictive regression, reliability estimates for the prediction are also developed. Additional theoretical contributions include heuristics for the time until an effective macrodispersion coefficient becomes applicable, and also an expression for its magnitude that applies in highly heterogeneous systems. It is seen that the results here represent a way to derive continuous time random walk transition distributions from physical considerations rather than from empirical field calibration.
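
    A minimal sketch of the parameterization described above: once the mean arrival time and the log-variance are known, the breakthrough curve follows as a lognormal arrival-time density. The numbers below are invented for illustration and are not taken from the paper's regressions.

```python
import numpy as np

# Build a breakthrough curve from a mean arrival time and a log-variance (illustrative values).
mean_arrival = 120.0    # days, e.g. advective travel time to the control plane (assumed)
log_var = 0.4           # log-variance of arrival times (assumed)

# Lognormal parameters from (mean, log-variance): mean = exp(mu + sigma^2/2).
sigma2 = log_var
mu = np.log(mean_arrival) - sigma2 / 2.0

t = np.linspace(1.0, 600.0, 600)
pdf = np.exp(-(np.log(t) - mu) ** 2 / (2.0 * sigma2)) / (t * np.sqrt(2.0 * np.pi * sigma2))
print("modal (peak) arrival time ~", t[np.argmax(pdf)], "days")
```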

  4. Determination of the content of fatty acid methyl esters (FAME) in biodiesel samples obtained by esterification using 1H-NMR spectroscopy.

    PubMed

    Mello, Vinicius M; Oliveira, Flavia C C; Fraga, William G; do Nascimento, Claudia J; Suarez, Paulo A Z

    2008-11-01

    Three different calibration curves based on ¹H-NMR spectroscopy (300 MHz) were used for quantifying the reaction yield during biodiesel synthesis by esterification of fatty acid mixtures with methanol. For this purpose, the integrated intensities of the hydrogens of the ester methoxy group (3.67 ppm) were correlated with the areas related to the various protons of the alkyl chain (olefinic hydrogens: 5.30-5.46 ppm; aliphatic: 2.67-2.78 ppm, 2.30 ppm, 1.96-2.12 ppm, 1.56-1.68 ppm, 1.22-1.42 ppm, 0.98 ppm, and 0.84-0.92 ppm). The first curve was obtained using the peaks of the olefinic hydrogens, the second using the paraffinic protons, and the third using the integrated intensities of all the hydrogens. A total of 35 samples were examined: 25 samples to build the three different calibration curves and ten samples to serve as external validation samples. The results showed no statistical differences among the three methods, and all presented prediction errors less than 2.45% with a coefficient of variation (CV) of 4.66%. 2008 John Wiley & Sons, Ltd.

  5. Towards Robust Self-Calibration for Handheld 3d Line Laser Scanning

    NASA Astrophysics Data System (ADS)

    Bleier, M.; Nüchter, A.

    2017-11-01

    This paper studies self-calibration of a structured light system, which reconstructs 3D information using video from a static consumer camera and a handheld cross-line laser projector. Intersections between the individual laser curves and geometric constraints on the relative position of the laser planes are exploited to achieve dense 3D reconstruction. This is possible without any prior knowledge of the movement of the projector. However, inaccurately extracted laser lines introduce noise in the detected intersection positions and therefore distort the reconstruction result. Furthermore, when scanning objects with specular reflections, such as glossy painted or metallic surfaces, the reflections are often extracted from the camera image as erroneous laser curves. In this paper we investigate how robust estimates of the parameters of the laser planes can be obtained despite noisy detections.

  6. Gaussian decomposition of high-resolution melt curve derivatives for measuring genome-editing efficiency

    PubMed Central

    Zaboikin, Michail; Freter, Carl

    2018-01-01

    We describe a method for measuring genome editing efficiency from in silico analysis of high-resolution melt curve data. The melt curve data derived from amplicons of genome-edited or unmodified target sites were processed to remove the background fluorescent signal emanating from free fluorophore and then corrected for temperature-dependent quenching of fluorescence of double-stranded DNA-bound fluorophore. Corrected data were normalized and numerically differentiated to obtain the first derivatives of the melt curves. These were then mathematically modeled as a sum or superposition of a minimal number of Gaussian components. Using Gaussian parameters determined by modeling of melt curve derivatives of unedited samples, we were able to model melt curve derivatives from genetically altered target sites where the mutant population could be accommodated using an additional Gaussian component. From this, the proportion contributed by the mutant component in the target region amplicon could be accurately determined. Mutant component computations compared well with the mutant frequency determination from next generation sequencing data. The results were also consistent with our earlier studies that used difference curve areas from high-resolution melt curves for determining the efficiency of genome-editing reagents. The advantage of the described method is that it does not require calibration curves to estimate the proportion of mutants in amplicons of genome-edited target sites. PMID:29300734
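
    A minimal sketch of the decomposition step, assuming two Gaussian components (wild type plus mutant) and synthetic derivative data with a 30% mutant contribution; in the described workflow the wild-type parameters come from modeling unedited control samples, which is not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(t, a, mu, sig):
    return a * np.exp(-(t - mu) ** 2 / (2.0 * sig ** 2))

def two_components(t, a1, m1, s1, a2, m2, s2):
    return gauss(t, a1, m1, s1) + gauss(t, a2, m2, s2)

# Synthetic melt-curve derivative with a 30% mutant component (invented for illustration).
temp = np.linspace(70.0, 90.0, 400)
signal = gauss(temp, 0.7, 82.0, 0.8) + gauss(temp, 0.3, 79.5, 0.9)
signal += np.random.default_rng(3).normal(0.0, 0.005, temp.size)

p, _ = curve_fit(two_components, temp, signal,
                 p0=[0.6, 82.0, 1.0, 0.2, 79.0, 1.0])
area_wt = p[0] * p[2]          # a Gaussian's area is proportional to amplitude * sigma
area_mut = p[3] * p[5]
print(f"estimated mutant fraction: {area_mut / (area_wt + area_mut):.2f}")
```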

  7. Simple transfer calibration method for a Cimel Sun-Moon photometer: calculating lunar calibration coefficients from Sun calibration constants.

    PubMed

    Li, Zhengqiang; Li, Kaitao; Li, Donghui; Yang, Jiuchun; Xu, Hua; Goloub, Philippe; Victori, Stephane

    2016-09-20

    The new Cimel technologies allow both daytime and nighttime aerosol optical depth (AOD) measurements. Although the daytime AOD calibration protocols are well established, accurate and simple nighttime calibration is still a challenging task. Standard lunar-Langley and intercomparison calibration methods both require specific conditions in terms of atmospheric stability and site conditions. Additionally, the lunar irradiance model also has some known limits on its uncertainty. This paper presents a simple calibration method that transfers the direct-Sun calibration constant, V0,Sun, to the lunar irradiance calibration coefficient, CMoon. Our approach is a pure calculation method, independent of site limits, e.g., Moon phase. The method is also not affected by the lunar irradiance model limitations, which are the largest error source of traditional calibration methods. In addition, this new transfer calibration approach is easy to use in the field since CMoon can be obtained directly once V0,Sun is known. Error analysis suggests that the average uncertainty of CMoon over the 440-1640 nm bands obtained with the transfer method is 2.4%-2.8%, depending on the V0,Sun approach (Langley or intercomparison), which is theoretically comparable with that of the lunar-Langley approach. In this paper, the Sun-Moon transfer and the Langley methods are compared based on site measurements in Beijing, and the day-night measurement continuity and performance are analyzed.

  8. Calibration improvements to electronically scanned pressure systems and preliminary statistical assessment

    NASA Technical Reports Server (NTRS)

    Everhart, Joel L.

    1996-01-01

    Orifice-to-orifice inconsistencies in data acquired with an electronically-scanned pressure system at the beginning of a wind tunnel experiment forced modifications to the standard instrument calibration procedures. These modifications included a large increase in the number of calibration points, which allowed a critical examination of the calibration curve-fit process and a subsequent post-test reduction of the pressure data. Evaluation of these data has resulted in an improved functional representation of the pressure-voltage signature for electronically-scanned pressure sensors, which can reduce the errors due to calibration curve fit to under 0.10 percent of reading, compared to the manufacturer-specified 0.10 percent of full scale. Application of the improved calibration function allows a more rational selection of the calibration set-point pressures. These pressures should be adjusted to achieve a voltage output which matches the physical shape of the pressure-voltage signature of the sensor. This process is conducted in lieu of the more traditional approach where a calibration pressure is specified and the resulting sensor voltage is recorded. The fifteen calibrations acquired over the two-week duration of the wind tunnel test were further used to perform a preliminary statistical assessment of the variation in the calibration process. The results allowed the estimation of the bias uncertainty for a single instrument calibration, and they form the precursor for more extensive and more controlled studies in the laboratory.

  9. A formulation of tissue- and water-equivalent materials using the stoichiometric analysis method for CT-number calibration in radiotherapy treatment planning.

    PubMed

    Yohannes, Indra; Kolditz, Daniel; Langner, Oliver; Kalender, Willi A

    2012-03-07

    Tissue- and water-equivalent materials (TEMs) are widely used in quality assurance and calibration procedures, both in radiodiagnostics and radiotherapy. In radiotherapy, particularly, the TEMs are often used for computed tomography (CT) number calibration in treatment planning systems. However, currently available TEMs may not be very accurate in the determination of the calibration curves due to their limitation in mimicking radiation characteristics of the corresponding real tissues in both low- and high-energy ranges. Therefore, we are proposing a new formulation of TEMs using a stoichiometric analysis method to obtain TEMs for the calibration purposes. We combined the stoichiometric calibration and the basic data method to compose base materials to develop TEMs matching standard real tissues from ICRU Report 44 and 46. First, the CT numbers of six materials with known elemental compositions were measured to get constants for the stoichiometric calibration. The results of the stoichiometric calibration were used together with the basic data method to formulate new TEMs. These new TEMs were scanned to validate their CT numbers. The electron density and the stopping power calibration curves were also generated. The absolute differences of the measured CT numbers of the new TEMs were less than 4 HU for the soft tissues and less than 22 HU for the bone compared to the ICRU real tissues. Furthermore, the calculated relative electron density and electron and proton stopping powers of the new TEMs differed by less than 2% from the corresponding ICRU real tissues. The new TEMs which were formulated using the proposed technique increase the simplicity of the calibration process and preserve the accuracy of the stoichiometric calibration simultaneously.

  10. Calibrating Wide Field Surveys

    NASA Astrophysics Data System (ADS)

    González Fernández, Carlos; Irwin, M.; Lewis, J.; González Solares, E.

    2017-09-01

    "In this talk I will review the strategies in CASU to calibrate wide field surveys, in particular applied to data taken with the VISTA telescope. These include traditional night-by-night calibrations along with the search for a global, coherent calibration of all the data once observations are finished. The difficulties of obtaining photometric accuracy of a few percent and a good absolute calibration will also be discussed."

  11. X-ray Diffraction Crystal Calibration and Characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michael J. Haugh; Richard Stewart; Nathan Kugland

    2009-06-05

    National Security Technologies' X-ray Laboratory is comprised of a multi-anode Manson type source and a Henke type source that incorporates a dual goniometer and XYZ translation stage. The first goniometer is used to isolate a particular spectral band. The Manson operates up to 10 kV and the Henke up to 20 kV. The Henke rotation stages and translation stages are automated. Procedures have been developed to characterize and calibrate various NIF diagnostics and their components. The diagnostics include X-ray cameras, gated imagers, streak cameras, and other X-ray imaging systems. Components that have been analyzed include filters, filter arrays, grazing incidence mirrors, and various crystals, both flat and curved. Recent efforts on the Henke system are aimed at characterizing and calibrating imaging crystals and curved crystals used as the major component of an X-ray spectrometer. The presentation will concentrate on these results. The work has been done at energies ranging from 3 keV to 16 keV. The major goal was to evaluate the performance quality of the crystal for its intended application. For the imaging crystals we measured the laser beam reflection offset from the X-ray beam and the reflectivity curves. For the curved spectrometer crystal, which was a natural crystal, resolving power was critical. It was first necessary to find sources of crystals that had sufficiently narrow reflectivity curves. It was then necessary to determine which crystals retained their resolving power after being thinned and glued to a curved substrate.

  12. Calibrating Images from the MINERVA Cameras

    NASA Astrophysics Data System (ADS)

    Mercedes Colón, Ana

    2016-01-01

    The MINiature Exoplanet Radial Velocity Array (MINERVA) consists of an array of robotic telescopes located on Mount Hopkins, Arizona with the purpose of performing transit photometry and spectroscopy to find Earth-like planets around Sun-like stars. In order to make photometric observations, it is necessary to perform calibrations on the CCD cameras of the telescopes to take into account possible instrument error on the data. In this project, we developed a pipeline that takes optical images, calibrates them using sky flats, darks, and biases to generate a transit light curve.
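
    The standard reduction such a pipeline performs can be sketched as the usual bias, dark, and flat correction; the array shapes, levels, and frame handling below are placeholders rather than MINERVA specifics.

```python
import numpy as np

# Minimal CCD calibration sketch of a standard bias/dark/flat reduction.
# Frames are synthesized here; a real pipeline would load master calibration frames from disk.
rng = np.random.default_rng(4)
raw   = rng.normal(1200.0, 20.0, (512, 512))   # raw science frame (ADU)
bias  = np.full((512, 512), 300.0)             # master bias (ADU)
dark  = np.full((512, 512), 0.05)              # master dark rate (ADU/s)
flat  = rng.normal(1.0, 0.02, (512, 512))      # master flat, normalized to unity
exptime = 60.0                                 # exposure time, s

calibrated = (raw - bias - dark * exptime) / flat
print("median calibrated level:", np.median(calibrated))
```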

  13. Holographic Entanglement Entropy, SUSY & Calibrations

    NASA Astrophysics Data System (ADS)

    Colgáin, Eoin Ó.

    2018-01-01

    Holographic calculations of entanglement entropy boil down to identifying minimal surfaces in curved spacetimes. This generically entails solving second-order equations. For higher-dimensional AdS geometries, we demonstrate that supersymmetry and calibrations reduce the problem to first-order equations. We note that minimal surfaces corresponding to disks preserve supersymmetry, whereas strips do not.

  14. Calibrant-Free Analyte Quantitation via a Variable Velocity Flow Cell.

    PubMed

    Beck, Jason G; Skuratovsky, Aleksander; Granger, Michael C; Porter, Marc D

    2017-01-17

    In this paper, we describe a novel method for analyte quantitation that does not rely on calibrants, internal standards, or calibration curves but, rather, leverages the relationship between disparate and predictable surface-directed analyte flux to an array of sensing addresses and a measured resultant signal. To reduce this concept to practice, we fabricated two flow cells such that the mean linear fluid velocity, U, was varied systematically over an array of electrodes positioned along the flow axis. This resulted in a predictable variation of the address-directed flux of a redox analyte, ferrocenedimethanol (FDM). The resultant limiting currents measured at a series of these electrodes, and accurately described by a convective-diffusive transport model, provided a means to calculate an "unknown" concentration without the use of calibrants, internal standards, or a calibration curve. Furthermore, the experiment and concentration calculation only takes minutes to perform. Deviation in calculated FDM concentrations from true values was minimized to less than 0.5% when empirically derived values of U were employed.

  15. The All-Sky Automated Survey for Supernovae (ASAS-SN) Light Curve Server v1.0

    NASA Astrophysics Data System (ADS)

    Kochanek, C. S.; Shappee, B. J.; Stanek, K. Z.; Holoien, T. W.-S.; Thompson, Todd A.; Prieto, J. L.; Dong, Subo; Shields, J. V.; Will, D.; Britt, C.; Perzanowski, D.; Pojmański, G.

    2017-10-01

    The All-Sky Automated Survey for Supernovae (ASAS-SN) is working toward imaging the entire visible sky every night to a depth of V ~ 17 mag. The present data covers the sky and spans ~2-5 years with ~100-400 epochs of observation. The data should contain some ~1 million variable sources, and the ultimate goal is to have a database of these observations publicly accessible. We describe here a first step, a simple but unprecedented web interface https://asas-sn.osu.edu/ that provides an up to date aperture photometry light curve for any user-selected sky coordinate. The V band photometry is obtained using a two-pixel (16.″0) radius aperture and is calibrated against the APASS catalog. Because the light curves are produced in real time, this web tool is relatively slow and can only be used for small samples of objects. However, it also imposes no selection bias on the part of the ASAS-SN team, allowing the user to obtain a light curve for any point on the celestial sphere. We present the tool, describe its capabilities, limitations, and known issues, and provide a few illustrative examples.

  16. Effect of Using Extreme Years in Hydrologic Model Calibration Performance

    NASA Astrophysics Data System (ADS)

    Goktas, R. K.; Tezel, U.; Kargi, P. G.; Ayvaz, T.; Tezyapar, I.; Mesta, B.; Kentel, E.

    2017-12-01

    Hydrological models are useful in predicting and developing management strategies for controlling system behaviour. Specifically, they can be used for evaluating streamflow at ungauged catchments, the effect of climate change or best management practices on water resources, or the identification of pollution sources in a watershed. This study is a part of a TUBITAK project named "Development of a geographical information system based decision-making tool for water quality management of Ergene Watershed using pollutant fingerprints". Within the scope of this project, the water resources in the Ergene Watershed are studied first. Streamgages found in the basin are identified and daily streamflow measurements are obtained from the State Hydraulic Works of Turkey. The streamflow data are analysed using box-whisker plots, hydrographs and flow-duration curves, focusing on the identification of extreme periods, dry or wet. Then a hydrological model is developed for the Ergene Watershed using HEC-HMS in the Watershed Modeling System (WMS) environment. The model is calibrated for various time periods, including dry and wet ones, and the calibration performance is evaluated using Nash-Sutcliffe Efficiency (NSE), correlation coefficient, percent bias (PBIAS) and root mean square error. It is observed that the calibration period affects model performance, and that the main purpose of developing the hydrological model should guide the selection of the calibration period. Acknowledgement: This study is funded by The Scientific and Technological Research Council of Turkey (TUBITAK) under Project Number 115Y064.
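
    Two of the performance measures named above, NSE and PBIAS, are straightforward to compute from paired observed and simulated flows; the series below are invented for illustration.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 is no better than the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias: positive values indicate the model underestimates total volume."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

# Invented daily flows (m^3/s) for illustration only.
obs = [2.1, 3.4, 8.9, 15.2, 9.8, 5.0, 3.2, 2.5]
sim = [2.4, 3.1, 7.5, 13.8, 10.5, 5.6, 3.0, 2.2]
print(f"NSE = {nse(obs, sim):.3f}, PBIAS = {pbias(obs, sim):.1f}%")
```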

  17. Aero-Thermal Calibration of the NASA Glenn Icing Research Tunnel (2012 Tests)

    NASA Technical Reports Server (NTRS)

    Pastor-Barsi, Christine; Allen, Arrington E.

    2013-01-01

    A full aero-thermal calibration of the NASA Glenn Icing Research Tunnel (IRT) was completed in 2012 following the major modifications to the facility that included replacement of the refrigeration plant and heat exchanger. The calibration test provided data used to fully document the aero-thermal flow quality in the IRT test section and to construct calibration curves for the operation of the IRT.

  18. Exploring Alternative Characteristic Curve Approaches to Linking Parameter Estimates from the Generalized Partial Credit Model.

    ERIC Educational Resources Information Center

    Roberts, James S.; Bao, Han; Huang, Chun-Wei; Gagne, Phill

    Characteristic curve approaches for linking parameters from the generalized partial credit model were examined for cases in which common (anchor) items are calibrated separately in two groups. Three of these approaches are simple extensions of the test characteristic curve (TCC), item characteristic curve (ICC), and operating characteristic curve…

  19. DEM Calibration Approach: design of experiment

    NASA Astrophysics Data System (ADS)

    Boikov, A. V.; Savelev, R. V.; Payor, V. A.

    2018-05-01

    The problem of calibrating DEM models is considered in the article. It is proposed to divide the models' input parameters into those that require iterative calibration and those that are recommended to be measured directly. A new method for model calibration based on design of experiment for the iteratively calibrated parameters is proposed. The experiment is conducted using a specially designed stand. The results are processed with machine vision algorithms. Approximating functions are obtained and the error of the implemented software and hardware complex is estimated. The prospects of the obtained results are discussed.

  20. A calibration method for patient specific IMRT QA using a single therapy verification film

    PubMed Central

    Shukla, Arvind Kumar; Oinam, Arun S.; Kumar, Sanjeev; Sandhu, I.S.; Sharma, S.C.

    2013-01-01

    Aim The aim of the present study is to develop and verify a single-film calibration procedure for use in intensity-modulated radiation therapy (IMRT) quality assurance. Background Radiographic films have been regularly used in routine commissioning of treatment modalities and verification of treatment planning systems (TPS). Radiation dosimetry based on radiographic films can provide absolute two-dimensional dose distributions and is well suited to IMRT quality assurance. A single therapy verification film provides a quick and reliable method for IMRT verification. Materials and methods A single extended dose range (EDR2) film was used to generate the sensitometric curve relating film optical density to radiation dose. The EDR2 film was exposed with nine 6 cm × 6 cm fields of a 6 MV photon beam from a medical linear accelerator at 5-cm depth in a solid water phantom. The nine regions of the single film were exposed to radiation doses ranging from 10 to 362 cGy. The actual dose measurements inside the field regions were performed using a 0.6 cm³ ionization chamber. The exposed film was processed after irradiation and digitized using a VIDAR film scanner, and the optical density of each region was recorded. Ten IMRT plans of head and neck carcinoma were used for verification using a dynamic IMRT technique, and evaluated using the gamma index method against the TPS-calculated dose distribution. Results A sensitometric curve was generated from the single film exposed in nine field regions and used for quantitative dose verification of IMRT treatments. The radiation scatter factor was observed to decrease exponentially with increasing distance from the centre of each field region. The IMRT plans verified against this calibration curve using the gamma index method were found to be within the acceptance criteria. Conclusion The single-film method proved to be superior to the traditional calibration method and produces fast daily film calibration for highly
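
    A sketch of how such a single-film sensitometric calibration can be applied: fit dose as a smooth function of optical density over the nine exposed regions and invert a measured OD back to dose. The OD values and the choice of a cubic fit are assumptions for illustration only.

```python
import numpy as np

# Hedged sketch of a single-film sensitometric calibration: nine regions of one film,
# each given a known dose, yield (optical density, dose) pairs. OD values are invented.
dose_cGy = np.array([10, 40, 80, 120, 170, 220, 270, 320, 362])
optical_density = np.array([0.12, 0.35, 0.62, 0.85, 1.10, 1.32, 1.51, 1.68, 1.78])

# Fit dose as a cubic polynomial of OD (monotonic over this range), so the calibration
# can be applied directly to optical densities read from the scanned verification film.
coeffs = np.polyfit(optical_density, dose_cGy, 3)
dose_from_od = np.poly1d(coeffs)

print(dose_from_od(0.95))   # a measured OD of 0.95 maps to a dose of roughly 140 cGy
```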

  1. Nuclear Gauge Calibration and Testing Guidelines for Hawaii

    DOT National Transportation Integrated Search

    2006-12-15

    Project proposal brief: AASHTO and ASTM nuclear gauge testing procedures can lead to misleading density and moisture readings for certain Hawaiian soils. Calibration curves need to be established for these unique materials, along with clear standard ...

  2. UNDERFLIGHT CALIBRATION OF SOHO/CDS AND HINODE/EIS WITH EUNIS-07

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Tongjiang; Brosius, Jeffrey W.; Thomas, Roger J.

    2011-12-01

    Flights of Goddard Space Flight Center's Extreme Ultraviolet Normal Incidence Spectrograph (EUNIS) sounding rocket in 2006 and 2007 provided updated radiometric calibrations for Solar and Heliospheric Observatory/Coronal Diagnostic Spectrometer (SOHO/CDS) and Hinode/Extreme Ultraviolet Imaging Spectrometer (Hinode/EIS). EUNIS carried two independent imaging spectrographs covering wavebands of 300-370 A in first order and 170-205 A in second order. After each flight, end-to-end radiometric calibrations of the rocket payload were carried out in the same facility used for pre-launch calibrations of CDS and EIS. During the 2007 flight, EUNIS, SOHO/CDS, and Hinode/EIS observed the same solar locations, allowing the EUNIS calibrations to be directly applied to both CDS and EIS. The measured CDS NIS 1 line intensities calibrated with the standard (version 4) responsivities with the standard long-term corrections are found to be too low by a factor of 1.5 due to the decrease in responsivity. The EIS calibration update is performed in two ways. One uses the direct calibration transfer of the calibrated EUNIS-07 short wavelength (SW) channel. The other uses the insensitive line pairs, in which one member was observed by the EUNIS-07 long wavelength (LW) channel and the other by EIS in either the LW or SW waveband. Measurements from both methods are in good agreement, and confirm (within the measurement uncertainties) the EIS responsivity measured directly before the instrument's launch. The measurements also suggest that the EIS responsivity decreased by a factor of about 1.2 after the first year of operation (although the size of the measurement uncertainties is comparable to this decrease). The shape of the EIS SW response curve obtained by EUNIS-07 is consistent with the one measured in laboratory prior to launch. The absolute value of the quiet-Sun He II 304 A intensity measured by EUNIS-07 is consistent with the radiance measured by CDS NIS in quiet regions near

  3. Dose Calibration of the ISS-RAD Fast Neutron Detector

    NASA Technical Reports Server (NTRS)

    Zeitlin, C.

    2015-01-01

    The ISS-RAD instrument has been fabricated by Southwest Research Institute and delivered to NASA for flight to the ISS in late 2015 or early 2016. ISS-RAD is essentially two instruments that share a common interface to ISS. The two instruments are the Charged Particle Detector (CPD), which is very similar to the MSL-RAD detector on Mars, and the Fast Neutron Detector (FND), which is a boron-loaded plastic scintillator with readout optimized for the 0.5 to 10 MeV energy range. As the FND is completely new, it has been necessary to develop methodology to allow it to be used to measure the neutron dose and dose equivalent. This talk will focus on the methods developed and their implementation using calibration data obtained in quasi-monoenergetic (QMN) neutron fields at the PTB facility in Braunschweig, Germany. The QMN data allow us to determine an approximate response function, from which we estimate dose and dose equivalent contributions per detected neutron as a function of the pulse height. We refer to these as the "pSv per count" curves for dose equivalent and the "pGy per count" curves for dose. The FND is required to provide a dose equivalent measurement with an accuracy of ±10% of the known value in a calibrated AmBe field. Four variants of the analysis method were developed, corresponding to two different approximations of the pSv per count curve, and two different implementations, one for real-time analysis onboard ISS and one for ground analysis. We will show that the preferred method, when applied in either real-time or ground analysis, yields good accuracy for the AmBe field. We find that the real-time algorithm is more susceptible to chance-coincidence background than is the algorithm used in ground analysis, so that the best estimates will come from the latter.
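
    The record estimates dose equivalent by weighting each detected neutron with a "pSv per count" value taken from a pulse-height-dependent conversion curve. A minimal sketch of that bookkeeping is given below; the conversion curve and the pulse-height histogram are synthetic stand-ins, not the FND's calibrated response.

        import numpy as np

        # Hypothetical "pSv per count" conversion curve tabulated against pulse
        # height (channel); the real curves come from the PTB QMN calibration.
        channels = np.arange(256)
        psv_per_count = 0.5 + 0.02 * channels          # toy monotonic curve, pSv/count

        # Hypothetical measured pulse-height histogram (counts per channel).
        counts = np.random.default_rng(0).poisson(lam=5.0, size=channels.size)

        # Dose equivalent estimate: counts weighted by the conversion curve.
        dose_equiv_psv = np.sum(counts * psv_per_count)
        print(f"Estimated neutron dose equivalent: {dose_equiv_psv:.0f} pSv")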

  4. Study of the influence of heat sources on the out-of-pile calibration curve of calorimetric cells used for nuclear energy deposition quantification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Vita, C.; Brun, J.; Reynard-Carette, C.

    2015-07-01

    inside the calorimeter cell head. This discrepancy is higher than in previous experiments because the calorimeter has a high sensitivity. Consequently, a new prototype was created and instrumented with additional heat sources in order to impose an energy deposition on the calorimetric cell structure (in particular in the base) and to improve the calibration step in out-of-pile conditions. In the first part of this paper, a detailed description of the new calorimetric sensor will be given. In the second part, the experimental response of the sensor obtained for several internal heating conditions will be shown, and the influence of these conditions on the calibration curve will be discussed. The response of this prototype will also be presented for different external cooling fluid conditions (in particular flow temperature), and the in-pile and out-of-pile experimental results will be compared. In the last part, these out-of-pile experiments will be complemented by 2D axisymmetric thermal simulations with the CEA code CAST3M using the finite element method. After a comparison between the experimental and numerical work, improvements of the sensor prototype (new heat sources) will be studied. (authors)

  5. On the absolute calibration of SO2 cameras

    USGS Publications Warehouse

    Lübcke, Peter; Bobrowski, Nicole; Illing, Sebastian; Kern, Christoph; Alvarez Nieves, Jose Manuel; Vogel, Leif; Zielcke, Johannes; Delgados Granados, Hugo; Platt, Ulrich

    2013-01-01

    This work investigates the uncertainty of results gained through the two commonly used, but quite different, calibration methods (DOAS and calibration cells). Measurements with three different instruments, an SO2 camera, an NFOV-DOAS system and an Imaging DOAS (I-DOAS), are presented. We compare the calibration-cell approach with the calibration from the NFOV-DOAS system. The respective results are compared with measurements from an I-DOAS to verify the calibration curve over the spatial extent of the image. The results show that calibration cells, while working fine in some cases, can lead to an overestimation of the SO2 CD by up to 60% compared with CDs from the DOAS measurements. Besides these calibration errors, radiative transfer effects (e.g. light dilution, multiple scattering) can significantly influence the results of both instrument types. The measurements presented in this work were taken at Popocatepetl, Mexico, between 1 March 2011 and 4 March 2011. Average SO2 emission rates between 4.00 and 14.34 kg s−1 were observed.

  6. On the long-term stability of calibration standards in different matrices.

    PubMed

    Kandić, A; Vukanac, I; Djurašević, M; Novković, D; Šešlak, B; Milošević, Z

    2012-09-01

    In order to assure Quality Control in accordance with ISO/IEC 17025, it was important, from a metrological point of view, to examine the long-term stability of previously prepared calibration standards. A comprehensive reconsideration of efficiency curves with respect to the ageing of the calibration standards is presented in this paper. The calibration standards were re-used after a period of 5 years, and analysis of the results showed discrepancies in the efficiency values. Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. Obtaining continuous BrAC/BAC estimates in the field: A hybrid system integrating transdermal alcohol biosensor, Intellidrink smartphone app, and BrAC Estimator software tools.

    PubMed

    Luczak, Susan E; Hawkins, Ashley L; Dai, Zheng; Wichmann, Raphael; Wang, Chunming; Rosen, I Gary

    2018-08-01

    Biosensors have been developed to measure transdermal alcohol concentration (TAC), but converting TAC into interpretable indices of blood/breath alcohol concentration (BAC/BrAC) is difficult because of variations that occur in TAC across individuals, drinking episodes, and devices. We have developed mathematical models and the BrAC Estimator software for calibrating and inverting TAC into quantifiable BrAC estimates (eBrAC). The calibration protocol to determine the individualized parameters for a specific individual wearing a specific device requires a drinking session in which BrAC and TAC measurements are obtained simultaneously. This calibration protocol was originally conducted in the laboratory with breath analyzers used to produce the BrAC data. Here we develop and test an alternative calibration protocol using drinking diary data collected in the field with the smartphone app Intellidrink to produce the BrAC calibration data. We compared BrAC Estimator software results for 11 drinking episodes collected by an expert user when using Intellidrink versus breath analyzer measurements as BrAC calibration data. Inversion phase results indicated the Intellidrink calibration protocol produced similar eBrAC curves and captured peak eBrAC to within 0.0003%, time of peak eBrAC to within 18 min, and area under the eBrAC curve to within 0.025% alcohol-hours as the breath analyzer calibration protocol. This study provides evidence that drinking diary data can be used in place of breath analyzer data in the BrAC Estimator software calibration procedure, which can reduce participant and researcher burden and expand the potential software user pool beyond researchers studying participants who can drink in the laboratory. Copyright © 2017. Published by Elsevier Ltd.

  8. Spitzer/JWST Cross Calibration: IRAC Observations of Potential Calibrators for JWST

    NASA Astrophysics Data System (ADS)

    Carey, Sean J.; Gordon, Karl D.; Lowrance, Patrick; Ingalls, James G.; Glaccum, William J.; Grillmair, Carl J.; E Krick, Jessica; Laine, Seppo J.; Fazio, Giovanni G.; Hora, Joseph L.; Bohlin, Ralph

    2017-06-01

    We present observations at 3.6 and 4.5 microns using IRAC on the Spitzer Space Telescope of a set of main sequence A stars and white dwarfs that are potential calibrators across the JWST instrument suite. The stars range in brightness from 4.4 to 15 mag in the K band. The calibration observations use a similar redundancy to the observing strategy for the IRAC primary calibrators (Reach et al. 2005) and the photometry is obtained using identical methods and instrumental photometric corrections as those applied to the IRAC primary calibrators (Carey et al. 2009). The resulting photometry is then compared to the predictions based on spectra from the CALSPEC Calibration Database (http://www.stsci.edu/hst/observatory/crds/calspec.html) and the IRAC bandpasses. These observations are part of an ongoing collaboration between IPAC and STScI investigating absolute calibration in the infrared.

  9. Focal length calibration of an electrically tunable lens by digital holography.

    PubMed

    Wang, Zhaomin; Qu, Weijuan; Yang, Fang; Asundi, Anand Krishna

    2016-02-01

    The electrically tunable lens (ETL) is a novel current-controlled adaptive optical component which can continuously tune its focus over a specific range by changing its surface curvature. To quantitatively characterize its tuning power, we assume the ETL to be a pure phase object and present a novel calibration method that dynamically measures its wavefront by means of digital holographic microscopy (DHM). The least squares method is then used to fit the radius of curvature of the wavefront, and the focal length is obtained by substituting the radius into the Zemax model of the ETL. The behavior curve relating the focal length of the ETL to its driving current is drawn, and a quadratic mathematical model is set up to characterize it. To verify our model, an ETL and offset lens combination is proposed and applied to ETL-based transport of intensity equation (TIE) phase retrieval microscopy. The experimental results demonstrate that the calibration works well in TIE phase retrieval in comparison with the phase measured by DHM.
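
    The record fits a quadratic model between the ETL focal length and its driving current. A minimal sketch of such a least-squares fit follows; the current/focal-length pairs are hypothetical, not the published calibration data.

        import numpy as np

        # Hypothetical calibration points: driving current (mA) and focal length (mm)
        # as would be derived from the DHM wavefront measurements.
        current_ma = np.array([50, 100, 150, 200, 250, 290], dtype=float)
        focal_mm = np.array([180.0, 122.0, 95.0, 80.0, 70.5, 64.0])

        # Quadratic model f(I) = a*I^2 + b*I + c fitted by least squares.
        a, b, c = np.polyfit(current_ma, focal_mm, deg=2)
        print(f"f(I) = {a:.4e}*I^2 + {b:.4e}*I + {c:.2f}")

        # Predict the focal length at an intermediate driving current.
        print(f"f(175 mA) ~= {np.polyval([a, b, c], 175.0):.1f} mm")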

  10. The Importance of Calibration in Clinical Psychology.

    PubMed

    Lindhiem, Oliver; Petersen, Isaac T; Mentch, Lucas K; Youngstrom, Eric A

    2018-02-01

    Accuracy has several elements, not all of which have received equal attention in the field of clinical psychology. Calibration, the degree to which a probabilistic estimate of an event reflects the true underlying probability of the event, has largely been neglected in the field of clinical psychology in favor of other components of accuracy such as discrimination (e.g., sensitivity, specificity, area under the receiver operating characteristic curve). Although it is frequently overlooked, calibration is a critical component of accuracy with particular relevance for prognostic models and risk-assessment tools. With advances in personalized medicine and the increasing use of probabilistic (0% to 100%) estimates and predictions in mental health research, the need for careful attention to calibration has become increasingly important.
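
    A reliability (calibration) curve, comparing predicted probabilities with observed event frequencies, is one standard way to quantify the calibration component of accuracy discussed above. A minimal sketch using scikit-learn's calibration_curve on synthetic predictions follows; the data and the amount of miscalibration are fabricated for illustration.

        import numpy as np
        from sklearn.calibration import calibration_curve

        # Synthetic binary outcomes and slightly miscalibrated predicted risks.
        rng = np.random.default_rng(3)
        p_true = rng.uniform(0, 1, 2000)
        y = rng.binomial(1, p_true)
        p_model = np.clip(p_true + rng.normal(0, 0.08, p_true.size), 0.0, 1.0)

        # Reliability diagram data: observed event frequency per bin of predicted risk.
        frac_pos, mean_pred = calibration_curve(y, p_model, n_bins=10)
        for fp, mp in zip(frac_pos, mean_pred):
            print(f"predicted {mp:.2f} -> observed {fp:.2f}")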

  11. Flight loads measurements obtained from calibrated strain-gage bridges mounted externally on the skin of a low-aspect-ratio wing

    NASA Technical Reports Server (NTRS)

    Eckstrom, C. V.

    1976-01-01

    Flight-test measurements of wing loads (shear, bending moment, and torque) were obtained by means of strain-gage bridges mounted on the exterior surface of a low-aspect-ratio, thin, swept wing of sandwich construction with a structural skin and full-depth honeycomb core. Details concerning the strain-gage bridges, the calibration procedures used, and the flight-test results are presented along with some pressure measurements and theoretical calculations for comparison purposes.

  12. Obtaining Accurate Probabilities Using Classifier Calibration

    ERIC Educational Resources Information Center

    Pakdaman Naeini, Mahdi

    2016-01-01

    Learning probabilistic classification and prediction models that generate accurate probabilities is essential in many prediction and decision-making tasks in machine learning and data mining. One way to achieve this goal is to post-process the output of classification models to obtain more accurate probabilities. These post-processing methods are…

  13. SU-D-18C-02: Feasibility of Using a Short ASL Scan for Calibrating Cerebral Blood Flow Obtained From DSC-MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, P; Chang, T; Huang, K

    2014-06-01

    Purpose: This study aimed to evaluate the feasibility of using a short arterial spin labeling (ASL) scan for calibrating dynamic susceptibility contrast (DSC) MRI in a group of patients with internal carotid artery stenosis. Methods: Six patients with unilateral ICA stenosis were enrolled in the study and scanned on a 3T clinical MRI scanner. The ASL cerebral blood flow (ASL-CBF) maps were calculated by averaging different numbers of dynamic points (N = 1-45) acquired using a Q2TIPS sequence. For the DSC perfusion analysis, an arterial input function was selected to derive the relative cerebral blood flow (rCBF) map and the delay (Tmax) map. A patient-specific CF was calculated from the mean ASL- and DSC-CBF obtained from three different masks: (1) Tmax < 3 s, (2) a gray matter mask combined with mask 1, (3) mask 2 with large vessels removed. One CF value was created for each number of averages by using each of the three masks for calibrating the DSC-CBF map. The CF value of the largest number of averages (NL = 45) was used to determine the acceptable range (<10%, <15%, and <20%) of CF values corresponding to the minimally acceptable number of averages (NS) for each patient. Results: Comparing DSC CBF maps corrected by CF values of NL (CBFL) in the ACA, MCA and PCA territories, all masks resulted in smaller CBF on the ipsilateral side than the contralateral side of the MCA territory (p < .05). The values obtained from mask 1 were significantly different from those of mask 3 (p < .05). Using mask 3, the median values of NS were 4 (<10%), 2 (<15%) and 2 (<20%), with the worst case scenario (maximum NS) of 25, 4, and 4, respectively. Conclusion: This study found that reliable calibration of DSC-CBF can be achieved from a short pulsed ASL scan. We suggest using a mask based on the Tmax threshold, the inclusion of gray matter only, and the exclusion of large vessels for performing the calibration.

  14. Revised landsat-5 thematic mapper radiometric calibration

    USGS Publications Warehouse

    Chander, G.; Markham, B.L.; Barsi, J.A.

    2007-01-01

    Effective April 2, 2007, the radiometric calibration of Landsat-5 (L5) Thematic Mapper (TM) data that are processed and distributed by the U.S. Geological Survey (USGS) Center for Earth Resources Observation and Science (EROS) will be updated. The lifetime gain model that was implemented on May 5, 2003, for the reflective bands (1-5, 7) will be replaced by a new lifetime radiometric-calibration curve that is derived from the instrument's response to pseudoinvariant desert sites and from cross calibration with the Landsat-7 (L7) Enhanced TM Plus (ETM+). Although this calibration update applies to all archived and future L5 TM data, the principal improvements in the calibration are for the data acquired during the first eight years of the mission (1984-1991), where the changes in the instrument-gain values are as much as 15%. The radiometric scaling coefficients for bands 1 and 2 for approximately the first eight years of the mission have also been changed. Users will need to apply these new coefficients to convert the calibrated data product digital numbers to radiance. The scaling coefficients for the other bands have not changed.

  15. A Linear Viscoelastic Model Calibration of Sylgard 184.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, Kevin Nicholas; Brown, Judith Alice

    2017-04-01

    We calibrate a linear thermoviscoelastic model for solid Sylgard 184 (90-10 formulation), a lightly cross-linked, highly flexible isotropic elastomer for use both in Sierra / Solid Mechanics via the Universal Polymer Model as well as in Sierra / Structural Dynamics (Salinas) for use as an isotropic viscoelastic material. Material inputs for the calibration in both codes are provided. The frequency domain master curve of oscillatory shear was obtained from a report from Los Alamos National Laboratory (LANL). However, because the form of that data is different from the constitutive models in Sierra, we also present the mapping of the LANL data onto Sandia’s constitutive models. Finally, blind predictions of cyclic tension and compression out to moderate strains of 40 and 20% respectively are compared with Sandia’s legacy cure schedule material. Although the strain rate of the data is unknown, the linear thermoviscoelastic model accurately predicts the experiments out to moderate strains for the slower strain rates, which is consistent with the expectation that quasistatic test procedures were likely followed. This good agreement comes despite the different cure schedules between the Sandia and LANL data.

  16. Calibration of areal surface topography measuring instruments

    NASA Astrophysics Data System (ADS)

    Seewig, J.; Eifler, M.

    2017-06-01

    The ISO standards related to the calibration of areal surface topography measuring instruments are the ISO 25178-6xx series, which defines the relevant metrological characteristics for the calibration of different measuring principles, and the ISO 25178-7xx series, which defines the actual calibration procedures. As the field of areal measurement is not yet fully standardized, however, there are still open questions to be addressed which are the subject of current research. Based on this, selected research results of the authors in this area are presented. This includes the design and fabrication of areal material measures, for which two examples are presented: the direct laser writing of a stepless material measure for the calibration of the height axis, which is based on the Abbott curve, and the manufacturing of a Siemens star for the determination of the lateral resolution limit. Based on these results, a new definition for the resolution criterion, the small-scale fidelity, which is still under discussion, is also presented. Additionally, a software solution for automated calibration procedures is outlined.

  17. Cross-Calibration between ASTER and MODIS Visible to Near-Infrared Bands for Improvement of ASTER Radiometric Calibration

    PubMed Central

    Tsuchida, Satoshi; Thome, Kurtis

    2017-01-01

    Radiometric cross-calibration between the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and the Terra-Moderate Resolution Imaging Spectroradiometer (MODIS) has been partially used to derive the ASTER radiometric calibration coefficient (RCC) curve as a function of date for the visible to near-infrared bands. However, cross-calibration is not sufficiently accurate, since the effects of the differences in the sensors’ spectral and spatial responses are not fully mitigated. The present study attempts to evaluate radiometric consistency across the two sensors using an improved cross-calibration algorithm to address the spectral and spatial effects and derive cross-calibration-based RCCs, which increases the ASTER calibration accuracy. Overall, radiances measured with ASTER bands 1 and 2 are on average 3.9% and 3.6% greater than those measured on the same scene with their MODIS counterparts, and ASTER band 3N (nadir) is 0.6% smaller than its MODIS counterpart in current radiance/reflectance products. The percentage root mean squared errors (%RMSEs) between the radiances of the two sensors are 3.7, 4.2, and 2.3 for ASTER bands 1, 2, and 3N, respectively, which are slightly greater or smaller than the required ASTER radiometric calibration accuracy (4%). The uncertainty of the cross-calibration is analyzed by elaborating the error budget table to evaluate the International System of Units (SI)-traceability of the results. The use of the derived RCCs will allow further reduction of errors in ASTER radiometric calibration and subsequently improve interoperability across sensors for synergistic applications. PMID:28777329

  18. Calibration of streamflow gauging stations at the Tenderfoot Creek Experimental Forest

    Treesearch

    Scott W. Woods

    2007-01-01

    We used tracer based methods to calibrate eleven streamflow gauging stations at the Tenderfoot Creek Experimental Forest in western Montana. At six of the stations the measured flows were consistent with the existing rating curves. At Lower and Upper Stringer Creek, Upper Sun Creek and Upper Tenderfoot Creek the published flows, based on the existing rating curves,...

  19. First attempts to obtain a reference drift curve for traditional olive grove's plantations following ISO 22866.

    PubMed

    Gil, Emilio; Llorens, Jordi; Gallart, Montserrat; Gil-Ribes, Jesús A; Miranda-Fuentes, Antonio

    2018-06-15

    The current standard for field measurements of spray drift (ISO 22866) is the only official standard for drift measurements in field conditions for all types of crops, including bushes and trees. A series of field trials following all the requirements established in the standard was arranged in a traditional olive grove in Córdoba (south of Spain). The aims of the study were to evaluate the applicability of the current standard procedure to the particular conditions of traditional olive tree plantations, to evaluate the critical requirements for performing the tests, and to obtain a specific drift curve for such an important and specific crop as olive trees in traditional plantations, considering the enormous area covered by this type of crop all around the world. Results showed that the field trials involve a very complex process due to the particular conditions of the crop and the very precise environmental requirements. Furthermore, the trials offered a very low level of repeatability, as the drift values varied significantly from one spray application to the next, with the obtained results being closely related to the wind speed, even when respecting the standard minimum value of 1 m·s−1. The placement of the collectors with respect to the position of the isolated trees was determined to be critical, since it substantially modifies the ground deposit in the first 5 m. Even so, a new drift curve for olive trees in traditional plantations has been defined, providing a useful tool for regulatory purposes. Conclusions indicated that a deep review of the official standard is needed to allow its application to the most relevant orchard/fruit crops. Copyright © 2018 Elsevier B.V. All rights reserved.

  20. Germanium resistance thermometer calibration at superfluid helium temperatures

    NASA Technical Reports Server (NTRS)

    Mason, F. C.

    1985-01-01

    The rapid increase in resistance of high purity semi-conducting germanium with decreasing temperature in the superfluid helium range of temperatures makes this material highly adaptable as a very sensitive thermometer. Also, a germanium thermometer exhibits a highly reproducible resistance versus temperature characteristic curve upon cycling between liquid helium temperatures and room temperature. These two factors combine to make germanium thermometers ideally suited for measuring temperatures in many cryogenic studies at superfluid helium temperatures. One disadvantage, however, is the relatively high cost of calibrated germanium thermometers. In space helium cryogenic systems, many such thermometers are often required, leading to a high cost for calibrated thermometers. The construction of a thermometer calibration cryostat and probe that allows six germanium thermometers to be calibrated at one time, thus effecting substantial savings in the purchase of thermometers, is considered.

  1. Self-Calibrating Pressure Transducer

    NASA Technical Reports Server (NTRS)

    Lueck, Dale E. (Inventor)

    2006-01-01

    A self-calibrating pressure transducer is disclosed. The device uses an embedded zirconia membrane which pumps a determined quantity of oxygen into the device. The associated pressure can be determined, and thus, the transducer pressure readings can be calibrated. The zirconia membrane obtains oxygen from the surrounding environment when possible. Otherwise, an oxygen reservoir or other source is utilized. In another embodiment, a reversible fuel cell assembly is used to pump oxygen and hydrogen into the system. Since a known amount of gas is pumped across the cell, the pressure produced can be determined, and thus, the device can be calibrated. An isolation valve system is used to allow the device to be calibrated in situ. Calibration is optionally automated so that calibration can be continuously monitored. The device is preferably a fully integrated MEMS device. Since the device can be calibrated without removing it from the process, reductions in costs and down time are realized.

  2. An absolute calibration method of an ethyl alcohol biosensor based on wavelength-modulated differential photothermal radiometry

    NASA Astrophysics Data System (ADS)

    Liu, Yi Jun; Mandelis, Andreas; Guo, Xinxin

    2015-11-01

    In this work, laser-based wavelength-modulated differential photothermal radiometry (WM-DPTR) is applied to develop a non-invasive in-vehicle alcohol biosensor. WM-DPTR features unprecedented ethanol-specificity and sensitivity by suppressing baseline variations through a differential measurement near the peak and baseline of the mid-infrared ethanol absorption spectrum. Biosensor signal calibration curves are obtained from WM-DPTR theory and from measurements in human blood serum and ethanol solutions diffused from skin. The results demonstrate that the WM-DPTR-based calibrated alcohol biosensor can achieve high precision and accuracy for the ethanol concentration range of 0-100 mg/dl. The high-performance alcohol biosensor can be incorporated into ignition interlocks that could be fitted as a universal accessory in vehicles in an effort to reduce incidents of drinking and driving.

  3. An absolute calibration method of an ethyl alcohol biosensor based on wavelength-modulated differential photothermal radiometry.

    PubMed

    Liu, Yi Jun; Mandelis, Andreas; Guo, Xinxin

    2015-11-01

    In this work, laser-based wavelength-modulated differential photothermal radiometry (WM-DPTR) is applied to develop a non-invasive in-vehicle alcohol biosensor. WM-DPTR features unprecedented ethanol-specificity and sensitivity by suppressing baseline variations through a differential measurement near the peak and baseline of the mid-infrared ethanol absorption spectrum. Biosensor signal calibration curves are obtained from WM-DPTR theory and from measurements in human blood serum and ethanol solutions diffused from skin. The results demonstrate that the WM-DPTR-based calibrated alcohol biosensor can achieve high precision and accuracy for the ethanol concentration range of 0-100 mg/dl. The high-performance alcohol biosensor can be incorporated into ignition interlocks that could be fitted as a universal accessory in vehicles in an effort to reduce incidents of drinking and driving.

  4. [Evaluation of vaporizers by anesthetic gas monitors corrected with a new method for preparation of calibration gases].

    PubMed

    Kurashiki, T

    1996-11-01

    To resolve the discrepancy in concentrations found among anesthetic gas monitors, the author proposed a new method using a vaporizer as a standard anesthetic gas generator for calibration. In this method, the carrier gas volume is measured by a mass flow meter (SEF-510 + FI-101) installed before the inlet of the vaporizer. The vaporized weight of the volatile anesthetic agent is simultaneously measured by an electronic force balance (E12000S), on which the vaporizer is placed directly. The molar percent of the anesthetic is calculated from these data and converted into the volume percent. The gases discharged from the vaporizer are used to calibrate anesthetic gas monitors. The monitors are normalized by the linear equation describing the relationship between the concentrations of the calibration gases and the readings of the anesthetic gas monitors. Using the normalized monitors, flow rate-concentration performance curves of several anesthetic vaporizers were obtained. The author concludes that this method can serve as a standard in evaluating anesthetic vaporizers.
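
    The method computes the molar percent of anesthetic from the carrier gas volume (mass flow meter) and the vaporized agent mass (balance), then converts it to volume percent. A minimal sketch of that arithmetic is given below; the agent (isoflurane), its molar mass, the ideal-gas molar volume, and all numerical readings are assumptions made for illustration.

        # All numbers are hypothetical; the molar mass (isoflurane) and the ideal-gas
        # molar volume at the flow meter's reference conditions are assumptions.
        MOLAR_MASS_G_PER_MOL = 184.5      # isoflurane
        MOLAR_VOLUME_L = 22.414           # ideal gas at 0 degC, 1 atm

        carrier_flow_l_per_min = 4.0      # from the mass flow meter
        duration_min = 10.0
        vaporized_mass_g = 6.1            # weight loss read from the balance

        n_agent = vaporized_mass_g / MOLAR_MASS_G_PER_MOL
        n_carrier = (carrier_flow_l_per_min * duration_min) / MOLAR_VOLUME_L

        # For ideal gases, volume percent equals mole percent.
        vol_percent = 100.0 * n_agent / (n_agent + n_carrier)
        print(f"Calibration gas concentration ~= {vol_percent:.2f} vol%")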

  5. An update on 'dose calibrator' settings for nuclides used in nuclear medicine.

    PubMed

    Bergeron, Denis E; Cessna, Jeffrey T

    2018-06-01

    Most clinical measurements of radioactivity, whether for therapeutic or imaging nuclides, rely on commercial re-entrant ionization chambers ('dose calibrators'). The National Institute of Standards and Technology (NIST) maintains a battery of representative calibrators and works to link calibration settings ('dial settings') to primary radioactivity standards. Here, we provide a summary of NIST-determined dial settings for 22 radionuclides. We collected previously published dial settings and determined some new ones using either the calibration curve method or the dialing-in approach. The dial settings with their uncertainties are collected in a comprehensive table. In general, current manufacturer-provided calibration settings give activities that agree with National Institute of Standards and Technology standards to within a few percent.

  6. A Bionic Polarization Navigation Sensor and Its Calibration Method.

    PubMed

    Zhao, Huijie; Xu, Wujian

    2016-08-03

    The polarization patterns of skylight which arise due to the scattering of sunlight in the atmosphere can be used by many insects for deriving compass information. Inspired by insects' polarized light compass, scientists have developed a new kind of navigation method. One of the key techniques in this method is the polarimetric sensor which is used to acquire direction information from skylight. In this paper, a polarization navigation sensor is proposed which imitates the working principles of the polarization vision systems of insects. We introduce the optical design and mathematical model of the sensor. In addition, a calibration method based on variable substitution and non-linear curve fitting is proposed. The results obtained from the outdoor experiments provide support for the feasibility and precision of the sensor. The sensor's signal processing can be well described using our mathematical model. A relatively high degree of accuracy in polarization measurement can be obtained without any error compensation.
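
    The calibration described above relies on variable substitution and non-linear curve fitting of the sensor channels against a reference polarization angle. A minimal sketch of such a fit with SciPy follows; the sinusoidal channel model and the synthetic data are assumptions and do not reproduce the sensor's actual mathematical model.

        import numpy as np
        from scipy.optimize import curve_fit

        # Assumed response of one polarization-opponent channel versus the
        # reference polarization angle phi (rad).
        def channel_response(phi, gain, offset, phase):
            return gain * np.cos(2.0 * phi + phase) + offset

        phi_ref = np.deg2rad(np.arange(0, 180, 10))
        rng = np.random.default_rng(1)
        measured = channel_response(phi_ref, 0.9, 0.05, 0.12) \
                   + rng.normal(0, 0.01, phi_ref.size)

        popt, _ = curve_fit(channel_response, phi_ref, measured, p0=[1.0, 0.0, 0.0])
        print("fitted gain, offset, phase:", np.round(popt, 3))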

  7. A Bionic Polarization Navigation Sensor and Its Calibration Method

    PubMed Central

    Zhao, Huijie; Xu, Wujian

    2016-01-01

    The polarization patterns of skylight which arise due to the scattering of sunlight in the atmosphere can be used by many insects for deriving compass information. Inspired by insects’ polarized light compass, scientists have developed a new kind of navigation method. One of the key techniques in this method is the polarimetric sensor which is used to acquire direction information from skylight. In this paper, a polarization navigation sensor is proposed which imitates the working principles of the polarization vision systems of insects. We introduce the optical design and mathematical model of the sensor. In addition, a calibration method based on variable substitution and non-linear curve fitting is proposed. The results obtained from the outdoor experiments provide support for the feasibility and precision of the sensor. The sensor’s signal processing can be well described using our mathematical model. A relatively high degree of accuracy in polarization measurement can be obtained without any error compensation. PMID:27527171

  8. Non-Contact Thrust Stand Calibration Method for Repetitively-Pulsed Electric Thrusters

    NASA Technical Reports Server (NTRS)

    Wong, Andrea R.; Toftul, Alexandra; Polzin, Kurt A.; Pearson, J. Boise

    2011-01-01

    A thrust stand calibration technique for use in testing repetitively-pulsed electric thrusters for in-space propulsion has been developed and tested using a modified hanging pendulum thrust stand. In the implementation of this technique, current pulses are applied to a solenoidal coil to produce a pulsed magnetic field that acts against the magnetic field produced by a permanent magnet mounted to the thrust stand pendulum arm. The force on the magnet is applied in this non-contact manner, with the entire pulsed force transferred to the pendulum arm through a piezoelectric force transducer to provide a time-accurate force measurement. Modeling of the pendulum arm dynamics reveals that after an initial transient in thrust stand motion the quasisteady average deflection of the thrust stand arm away from the unforced or zero position can be related to the average applied force through a simple linear Hooke's law relationship. Modeling demonstrates that this technique is universally applicable except when the pulsing period is increased to the point where it approaches the period of natural thrust stand motion. Calibration data were obtained using a modified hanging pendulum thrust stand previously used for steady-state thrust measurements. Data were obtained for varying impulse bit at constant pulse frequency and for varying pulse frequency. The two data sets exhibit excellent quantitative agreement with each other, as the constants relating average deflection and average thrust match within the errors on the linear regression curve fit of the data. Quantitatively, the error on the calibration coefficient is roughly 1% of the coefficient value.
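
    The calibration reduces to a linear (Hooke's-law) relation between the average applied force and the quasi-steady average deflection, with the calibration coefficient obtained by linear regression. A minimal sketch is shown below; the force/deflection values are hypothetical, not the published calibration data.

        import numpy as np

        # Hypothetical calibration points: average applied force (mN) from the
        # coil/magnet pair and the resulting quasi-steady average deflection (um).
        avg_force_mn = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 8.0])
        avg_deflection_um = np.array([12.4, 25.1, 49.6, 101.0, 150.8, 199.5])

        # Linear (Hooke's-law) fit: deflection = k * force + b.
        k, b = np.polyfit(avg_force_mn, avg_deflection_um, deg=1)
        print(f"calibration coefficient k = {k:.2f} um/mN (intercept {b:.2f} um)")

        # Average thrust inferred from a measured average deflection.
        measured_deflection_um = 83.0
        print(f"average thrust ~= {(measured_deflection_um - b) / k:.2f} mN")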

  9. Steep Gravel Bedload Rating Curves Obtained From Bedload Traps Shift Effective Discharge to Flows Much Higher Than "Bankfull"

    NASA Astrophysics Data System (ADS)

    Bunte, K.; Swingle, K. W.; Abt, S. R.; Cenderelli, D.

    2012-12-01

    Effective discharge (Qeff) is defined as the flow at which the product of flow frequency and bedload transport rate attains its maximum. Qeff is often reported to correspond with bankfull flow (Qbf), where Qeff approximates the 1.5-year recurrence interval flow (Q1.5). Because it transports the majority of all bedload, Qeff is considered a design flow for stream restoration and flow management. This study investigates the relationship between Qeff and Q1.5 for gravel bedload in high-elevation Rocky Mountain streams. Both the flow frequency distribution FQ = a · Qbin^(-b), where Qbin is the flow class, and the bedload transport rating curve QB = c · Q^d can be described by power functions. The exponent of the product FQ · QB = a · c · Q^(d - b) is positive if d - b > 0 and negative if d - b < 0. FQ · QB can only attain a maximum (= Qeff) if either FQ or QB exhibits an inflection point. In snowmelt regimes, low flows prevail for much of the year, while high flows are limited to a few days, and extreme floods are rare. In log-log plotting scale, this distribution causes the long-term flow frequency function FQ to steepen in the vicinity of Q1.5. If the bedload rating curve exponent is small, e.g., d = 3 as is typical of Helley-Smith bedload samples, d - b shifts from >0 to <0, causing FQ · QB to peak and Qeff to be around Q1.5. For measurements thought to be more representative of actual gravel transport obtained using bedload traps and similar devices, large rating curve exponents d of 6-16 are typical. In this case, d - b remains >0, and FQ · QB reaches its maximum near the largest flow on record (Qeff,BT = Qmax). Expression of FQ by negative exponential functions FQ = k · e^(-m · Qbin) smooths the product function FQ · QB, which then displays its maximum as a gentle hump rather than a sharp peak, but without drastically altering Qeff. However, a smooth function FQ · QB allows Qeff to react to small changes in the rating curve exponent d. As d increases from <1 to >10, Qeff
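
    The argument above rests on the shape of the product of the flow-frequency function and the bedload rating curve. The sketch below evaluates that product over a range of flows using the negative-exponential frequency form mentioned in the record and locates its maximum (Qeff) for a small and a large rating exponent d; all parameter values are hypothetical.

        import numpy as np

        # Flow classes (m^3/s); frequency as a negative exponential F_Q = k*exp(-m*q),
        # rating curve Q_B = c*q**d; k, m, c are placeholder values.
        q = np.linspace(0.1, 40.0, 800)
        k, m, c = 100.0, 0.5, 1e-3

        for d in (3.0, 10.0):   # Helley-Smith-like vs bedload-trap-like exponent
            effectiveness = k * np.exp(-m * q) * c * q**d
            q_eff = q[np.argmax(effectiveness)]
            print(f"rating exponent d = {d:4.1f}: Qeff ~= {q_eff:.1f} m^3/s "
                  f"(analytic maximum at d/m = {d / m:.1f})")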

  10. p-Curve and p-Hacking in Observational Research.

    PubMed

    Bruns, Stephan B; Ioannidis, John P A

    2016-01-01

    The p-curve, the distribution of statistically significant p-values of published studies, has been used to make inferences on the proportion of true effects and on the presence of p-hacking in the published literature. We analyze the p-curve for observational research in the presence of p-hacking. We show by means of simulations that even with minimal omitted-variable bias (e.g., unaccounted confounding) p-curves based on true effects and p-curves based on null-effects with p-hacking cannot be reliably distinguished. We also demonstrate this problem using as practical example the evaluation of the effect of malaria prevalence on economic growth between 1960 and 1996. These findings call recent studies into question that use the p-curve to infer that most published research findings are based on true effects in the medical literature and in a wide range of disciplines. p-values in observational research may need to be empirically calibrated to be interpretable with respect to the commonly used significance threshold of 0.05. Violations of randomization in experimental studies may also result in situations where the use of p-curves is similarly unreliable.

  11. p-Curve and p-Hacking in Observational Research

    PubMed Central

    Bruns, Stephan B.; Ioannidis, John P. A.

    2016-01-01

    The p-curve, the distribution of statistically significant p-values of published studies, has been used to make inferences on the proportion of true effects and on the presence of p-hacking in the published literature. We analyze the p-curve for observational research in the presence of p-hacking. We show by means of simulations that even with minimal omitted-variable bias (e.g., unaccounted confounding) p-curves based on true effects and p-curves based on null-effects with p-hacking cannot be reliably distinguished. We also demonstrate this problem using as practical example the evaluation of the effect of malaria prevalence on economic growth between 1960 and 1996. These findings call recent studies into question that use the p-curve to infer that most published research findings are based on true effects in the medical literature and in a wide range of disciplines. p-values in observational research may need to be empirically calibrated to be interpretable with respect to the commonly used significance threshold of 0.05. Violations of randomization in experimental studies may also result in situations where the use of p-curves is similarly unreliable. PMID:26886098

  12. Dutch X-band SLAR calibration

    NASA Technical Reports Server (NTRS)

    Groot, J. S.

    1990-01-01

    In August 1989 the NASA/JPL airborne P/L/C-band DC-8 SAR participated in several remote sensing campaigns in Europe. Amongst other test sites, data were obtained over the Flevopolder test site in the Netherlands on August 16th. The Dutch X-band SLAR was flown on the same date and imaged parts of the same area as the SAR. To calibrate the two imaging radars, a set of 33 calibration devices was deployed; 16 trihedrals were used to calibrate a part of the SLAR data. This short paper outlines the X-band SLAR characteristics, the experimental set-up and the calibration method used to calibrate the SLAR data. Finally some preliminary results are given.

  13. Aero-Thermal Calibration of the NASA Glenn Icing Research Tunnel (2004 and 2005 Tests)

    NASA Technical Reports Server (NTRS)

    Arrington, E. Allen; Pastor, Christine M.; Gonsalez, Jose C.; Curry, Monroe R., III

    2010-01-01

    A full aero-thermal calibration of the NASA Glenn Icing Research Tunnel was completed in 2004 following the replacement of the inlet guide vanes upstream of the tunnel drive system and improvement to the facility total temperature instrumentation. This calibration test provided data used to fully document the aero-thermal flow quality in the IRT test section and to construct calibration curves for the operation of the IRT. The 2004 test was also the first to use the 2-D RTD array, an improved total temperature calibration measurement platform.

  14. Stage-discharge rating curves based on satellite altimetry and modeled discharge in the Amazon basin

    NASA Astrophysics Data System (ADS)

    Paris, Adrien; Dias de Paiva, Rodrigo; Santos da Silva, Joecila; Medeiros Moreira, Daniel; Calmant, Stephane; Garambois, Pierre-André; Collischonn, Walter; Bonnet, Marie-Paule; Seyler, Frederique

    2016-05-01

    In this study, rating curves (RCs) were determined by applying satellite altimetry to a poorly gauged basin. This study demonstrates the synergistic application of remote sensing and watershed modeling to capture the dynamics and quantity of flow in the Amazon River Basin, respectively. Three major advancements for estimating basin-scale patterns in river discharge are described. The first advancement is the preservation of the hydrological meanings of the parameters expressed by Manning's equation to obtain a data set containing the elevations of the river beds throughout the basin. The second advancement is the provision of parameter uncertainties and, therefore, the uncertainties in the rated discharge. The third advancement concerns estimating the discharge while considering backwater effects. We analyzed the Amazon Basin using nearly one thousand series that were obtained from ENVISAT and Jason-2 altimetry for more than 100 tributaries. Discharge values and related uncertainties were obtained from the rain-discharge MGB-IPH model. We used a global optimization algorithm based on the Monte Carlo Markov Chain and Bayesian framework to determine the rating curves. The data were randomly allocated into 80% calibration and 20% validation subsets. A comparison with the validation samples produced a Nash-Sutcliffe efficiency (Ens) of 0.68. When the MGB discharge uncertainties were less than 5%, the Ens value increased to 0.81 (mean). A comparison with the in situ discharge resulted in an Ens value of 0.71 for the validation samples (and 0.77 for calibration). The Ens values at the mouths of the rivers that experienced backwater effects significantly improved when the mean monthly slope was included in the RC. Our RCs were not mission-dependent, and the Ens value was preserved when applying ENVISAT rating curves to Jason-2 altimetry at crossovers. The cease-to-flow parameter of our RCs provided a good proxy for determining river bed elevation. This proxy was validated
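
    Agreement between rated and reference discharge is summarized in the record with the Nash-Sutcliffe efficiency (Ens). The sketch below computes Ens for a stage-discharge rating curve of the form Q = a*(h - z0)**b applied to altimetric water levels; the parameter values and data are invented for illustration.

        import numpy as np

        def nash_sutcliffe(simulated, observed):
            """Ens = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
            observed = np.asarray(observed, dtype=float)
            simulated = np.asarray(simulated, dtype=float)
            return 1.0 - np.sum((observed - simulated) ** 2) / \
                   np.sum((observed - observed.mean()) ** 2)

        # Hypothetical rating curve Q = a*(h - z0)**b applied to water levels h (m);
        # a, b, z0 stand in for the parameters fitted by the MCMC procedure.
        a, b, z0 = 320.0, 1.6, 12.0
        h = np.array([14.2, 15.1, 16.8, 18.0, 17.2, 15.6])
        q_rated = a * (h - z0) ** b

        q_reference = np.array([1100, 1900, 3800, 5500, 4300, 2400])  # e.g., model output
        print(f"Ens = {nash_sutcliffe(q_rated, q_reference):.2f}")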

  15. Uncertainty Analysis of Instrument Calibration and Application

    NASA Technical Reports Server (NTRS)

    Tripp, John S.; Tcheng, Ping

    1999-01-01

    Experimental aerodynamic researchers require estimated precision and bias uncertainties of measured physical quantities, typically at 95 percent confidence levels. Uncertainties of final computed aerodynamic parameters are obtained by propagation of individual measurement uncertainties through the defining functional expressions. In this paper, rigorous mathematical techniques are extended to determine precision and bias uncertainties of any instrument-sensor system. Through this analysis, instrument uncertainties determined through calibration are now expressed as functions of the corresponding measurement for linear and nonlinear univariate and multivariate processes. Treatment of correlated measurement precision error is developed. During laboratory calibration, calibration standard uncertainties are assumed to be an order of magnitude less than those of the instrument being calibrated. Often calibration standards do not satisfy this assumption. This paper applies rigorous statistical methods for inclusion of calibration standard uncertainty and covariance due to the order of their application. The effects of mathematical modeling error on calibration bias uncertainty are quantified. The effects of experimental design on uncertainty are analyzed. The importance of replication is emphasized, and techniques for estimating both bias and precision uncertainties using replication are developed. Statistical tests for stationarity of calibration parameters over time are obtained.
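
    The core operation described above is the propagation of individual measurement uncertainties through a defining functional expression. A minimal first-order (Taylor-series) sketch for uncorrelated inputs follows; the function and the numerical values are arbitrary placeholders.

        import numpy as np

        def f(x1, x2):
            return x1 / x2**2          # placeholder derived quantity y = f(x1, x2)

        x = np.array([250.0, 4.0])     # measured values
        u = np.array([2.5, 0.04])      # standard uncertainties of the measurements

        # Numerical partial derivatives df/dx_i (central differences).
        eps = 1e-6 * np.maximum(np.abs(x), 1.0)
        grads = np.array([
            (f(x[0] + eps[0], x[1]) - f(x[0] - eps[0], x[1])) / (2 * eps[0]),
            (f(x[0], x[1] + eps[1]) - f(x[0], x[1] - eps[1])) / (2 * eps[1]),
        ])

        # Uncorrelated inputs: u_y^2 = sum_i (df/dx_i * u_i)^2.
        u_y = np.sqrt(np.sum((grads * u) ** 2))
        print(f"y = {f(*x):.3f} +/- {u_y:.3f} (k=1)")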

  16. An absolute calibration method of an ethyl alcohol biosensor based on wavelength-modulated differential photothermal radiometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yi Jun; Mandelis, Andreas, E-mail: mandelis@mie.utoronto.ca; Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, Ontario M5S 3G9

    In this work, laser-based wavelength-modulated differential photothermal radiometry (WM-DPTR) is applied to develop a non-invasive in-vehicle alcohol biosensor. WM-DPTR features unprecedented ethanol-specificity and sensitivity by suppressing baseline variations through a differential measurement near the peak and baseline of the mid-infrared ethanol absorption spectrum. Biosensor signal calibration curves are obtained from WM-DPTR theory and from measurements in human blood serum and ethanol solutions diffused from skin. The results demonstrate that the WM-DPTR-based calibrated alcohol biosensor can achieve high precision and accuracy for the ethanol concentration range of 0-100 mg/dl. The high-performance alcohol biosensor can be incorporated into ignition interlocks that could be fitted as a universal accessory in vehicles in an effort to reduce incidents of drinking and driving.

  17. Cross-calibration of liquid and solid QCT calibration standards: corrections to the UCSF normative data

    NASA Technical Reports Server (NTRS)

    Faulkner, K. G.; Gluer, C. C.; Grampp, S.; Genant, H. K.

    1993-01-01

    Quantitative computed tomography (QCT) has been shown to be a precise and sensitive method for evaluating spinal bone mineral density (BMD) and skeletal response to aging and therapy. Precise and accurate determination of BMD using QCT requires a calibration standard to compensate for and reduce the effects of beam-hardening artifacts and scanner drift. The first standards were based on dipotassium hydrogen phosphate (K2HPO4) solutions. Recently, several manufacturers have developed stable solid calibration standards based on calcium hydroxyapatite (CHA) in water-equivalent plastic. Due to differences in attenuating properties of the liquid and solid standards, the calibrated BMD values obtained with each system do not agree. In order to compare and interpret the results obtained on both systems, cross-calibration measurements were performed in phantoms and patients using the University of California San Francisco (UCSF) liquid standard and the Image Analysis (IA) solid standard on the UCSF GE 9800 CT scanner. From the phantom measurements, a highly linear relationship was found between the liquid- and solid-calibrated BMD values. No influence on the cross-calibration due to simulated variations in body size or vertebral fat content was seen, though a significant difference in the cross-calibration was observed between scans acquired at 80 and 140 kVp. From the patient measurements, a linear relationship between the liquid (UCSF) and solid (IA) calibrated values was derived for GE 9800 CT scanners at 80 kVp (IA = [1.15 x UCSF] - 7.32).(ABSTRACT TRUNCATED AT 250 WORDS).
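
    The record ends with the derived linear relationship for GE 9800 scans at 80 kVp, IA = 1.15 x UCSF - 7.32. A minimal sketch of applying and inverting that relation follows; treating BMD as expressed in mg/cm3 is an assumption about units.

        # Cross-calibration relation reported in the record (GE 9800, 80 kVp):
        #   IA = 1.15 * UCSF - 7.32
        def ucsf_to_ia(bmd_ucsf):
            return 1.15 * bmd_ucsf - 7.32

        def ia_to_ucsf(bmd_ia):
            return (bmd_ia + 7.32) / 1.15

        for ucsf in (80.0, 120.0, 160.0):   # illustrative liquid-calibrated BMD values
            ia = ucsf_to_ia(ucsf)
            print(f"UCSF {ucsf:6.1f} -> IA {ia:6.1f} -> back {ia_to_ucsf(ia):6.1f}")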

  18. Development and verification of an innovative photomultiplier calibration system with a 10-fold increase in photometer resolution

    NASA Astrophysics Data System (ADS)

    Jiang, Shyh-Biau; Yeh, Tse-Liang; Chen, Li-Wu; Liu, Jann-Yenq; Yu, Ming-Hsuan; Huang, Yu-Qin; Chiang, Chen-Kiang; Chou, Chung-Jen

    2018-05-01

    In this study, we construct a photomultiplier calibration system. This calibration system can help scientists measure and establish the characteristic curve of photon count versus light intensity. The system uses an innovative 10-fold optical attenuator to enable an optical power meter to calibrate photomultiplier tubes whose resolution is much greater than that of the optical power meter. A simulation is first conducted to validate the feasibility of the system, and then the system construction, including the optical design, circuit design, and software algorithm, is realized. The simulation generally agrees with measurement data from the constructed system, which are further used to establish the characteristic curve of photon count versus light intensity.

  19. Ultrasound data for laboratory calibration of an analytical model to calculate crack depth on asphalt pavements.

    PubMed

    Franesqui, Miguel A; Yepes, Jorge; García-González, Cándida

    2017-08-01

    This article outlines the ultrasound data employed to calibrate in the laboratory an analytical model that permits the calculation of the depth of partial-depth surface-initiated cracks on bituminous pavements using this non-destructive technique. This initial calibration is required so that the model provides sufficient precision during practical application. The ultrasonic pulse transit times were measured on beam samples of different asphalt mixtures (semi-dense asphalt concrete AC-S; asphalt concrete for very thin layers BBTM; and porous asphalt PA). The cracks on the laboratory samples were simulated by means of notches of variable depths. With the data of ultrasound transmission time ratios, curve-fittings were carried out on the analytical model, thus determining the regression parameters and their statistical dispersion. The calibrated models obtained from laboratory datasets were subsequently applied to auscultate the evolution of the crack depth after microwaves exposure in the research article entitled "Top-down cracking self-healing of asphalt pavements with steel filler from industrial waste applying microwaves" (Franesqui et al., 2017) [1].

  20. Common Envelope Light Curves. I. Grid-code Module Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Galaviz, Pablo; Marco, Orsola De; Staff, Jan E.

    The common envelope (CE) binary interaction occurs when a star transfers mass onto a companion that cannot fully accrete it. The interaction can lead to a merger of the two objects or to a close binary. The CE interaction is the gateway of all evolved compact binaries, all stellar mergers, and likely many of the stellar transients witnessed to date. CE simulations are needed to understand this interaction and to interpret stars and binaries thought to be the byproduct of this stage. At this time, simulations are unable to reproduce the few observational data available and several ideas have been put forward to address their shortcomings. The need for more definitive simulation validation is pressing and is already being fulfilled by observations from time-domain surveys. In this article, we present an initial method and its implementation for post-processing grid-based CE simulations to produce the light curve so as to compare simulations with upcoming observations. Here we implemented a zeroth order method to calculate the light emitted from CE hydrodynamic simulations carried out with the 3D hydrodynamic code Enzo used in unigrid mode. The code implements an approach for the computation of luminosity in both optically thick and optically thin regimes and is tested using the first 135 days of the CE simulation of Passy et al., where a 0.8 M⊙ red giant branch star interacts with a 0.6 M⊙ companion. This code is used to highlight two large obstacles that need to be overcome before realistic light curves can be calculated. We explain the nature of these problems and the attempted solutions and approximations in full detail to enable the next step to be identified and implemented. We also discuss our simulation in relation to recent data of transients identified as CE interactions.

  1. Aerodynamic method for obtaining the soil water retention curve

    NASA Astrophysics Data System (ADS)

    Alekseev, V. V.; Maksimov, I. I.

    2013-07-01

    A new method for the rapid plotting of the soil water retention curve (SWRC) has been proposed that considers the soil water as an environment limited by the soil solid phase on one side and by the soil air on the other side. Both contact surfaces have surface energies, which play the main role in water retention. The use of an idealized soil model with consideration for the nonequilibrium thermodynamic laws and the aerodynamic similarity principles allows us to estimate the volumetric specific surface areas of soils and, using the proposed pedotransfer function (PTF), to plot the SWRC. The volumetric specific surface area of the solid phase, the porosity, and the specific free surface energy at the water-air interface are used as the SWRC parameters. Devices for measuring the parameters are briefly described. The differences between the proposed PTF and the experimental data have been analyzed using the statistical processing of the data.

  2. Non-Uniformity Correction Using Nonlinear Characteristic Performance Curves for Calibration

    NASA Astrophysics Data System (ADS)

    Lovejoy, McKenna Roberts

    Infrared imaging is an expansive field with many applications. Advances in infrared technology have led to a greater demand from both commercial and military sectors. However, a known problem with infrared imaging is its non-uniformity. This non-uniformity stems from the fact that each pixel in an infrared focal plane array has its own photoresponse. Many factors such as exposure time, temperature, and amplifier choice affect how the pixels respond to incoming illumination and thus impact image uniformity. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration-based techniques commonly use a linear model to approximate the nonlinear response. This often leaves unacceptable levels of residual non-uniformity. Calibration techniques often have to be repeated during use to continually correct the image. In this dissertation, alternatives to linear NUC algorithms are investigated. The goal of this dissertation is to determine and compare nonlinear non-uniformity correction algorithms. Ideally the results will provide better NUC performance, resulting in less residual non-uniformity, and reduce the need for recalibration. This dissertation will consider new approaches to nonlinear NUC such as higher order polynomials and exponentials. More specifically, a new gain equalization algorithm has been developed. The various nonlinear non-uniformity correction algorithms will be compared with common linear non-uniformity correction algorithms. Performance will be compared based on RMS errors, residual non-uniformity, and the impact quantization has on correction. Performance will be improved by identifying and replacing bad pixels prior to correction. Two bad pixel identification and replacement techniques will be investigated and compared. Performance will be presented in the form of simulation results as well as before and after images taken with short wave infrared cameras. The initial results show, using a third order
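
    As a baseline for the nonlinear methods discussed above, standard calibration-based NUC uses a per-pixel linear (two-point) model computed from two uniform reference exposures. A minimal sketch on a synthetic array follows; the array size, pixel response model, and reference levels are invented, and the residual would be nonzero for a real, nonlinear detector.

        import numpy as np

        rng = np.random.default_rng(2)
        shape = (64, 64)

        # Synthetic per-pixel photoresponse: gain and offset vary across the array.
        gain = 1.0 + 0.1 * rng.standard_normal(shape)
        offset = 5.0 + 2.0 * rng.standard_normal(shape)

        def raw(scene_level):
            """What the focal plane array reports for a uniform scene."""
            return gain * scene_level + offset

        # Two-point (linear) calibration from two uniform reference levels.
        t1, t2 = 100.0, 200.0
        r1, r2 = raw(t1), raw(t2)
        g_hat = (t2 - t1) / (r2 - r1)      # per-pixel gain correction
        o_hat = t1 - g_hat * r1            # per-pixel offset correction

        corrected = g_hat * raw(150.0) + o_hat
        print("residual non-uniformity (std):", float(np.std(corrected)))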

  3. VSHEC—A program for the automatic spectrum calibration

    NASA Astrophysics Data System (ADS)

    Zlokazov, V. B.; Utyonkov, V. K.; Tsyganov, Yu. S.

    2013-02-01

    Calibration is the transformation of the output channels of a measuring device into physical values (energies, times, angles, etc.). If dealt with manually, it is a labor- and time-consuming procedure even if only a few detectors are used. However, the situation changes appreciably if a calibration of multi-detector systems is required, where the number of registering devices extends to hundreds (Tsyganov et al. (2004) [1]). The calibration is aggravated by the fact that the needed pivotal channel numbers should be determined from peak-like distributions. But a peak distribution is an informal pattern, so a pattern-recognition procedure should be employed to avoid operator interference. Automatic calibration is the determination of the calibration curve parameters on the basis of a list of reference quantities and data which are partially characterized by these quantities (energies, angles, etc.). The program allows the physicist to perform the calibration of the spectrometric detectors for both cases: that of one tract and that of many. Program summary: Program title: VSHEC Catalogue identifier: AENN_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENN_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 6403 No. of bytes in distributed program, including test data, etc.: 325847 Distribution format: tar.gz Programming language: DELPHI-5 and higher. Computer: Any IBM PC compatible. Operating system: Windows XX. Classification: 2.3, 4.9. Nature of problem: Automatic conversion of detector channels into their energy equivalents. Solution method: Automatic decomposition of a spectrum into geometric figures such as peaks and an envelope of peaks from below, estimation of peak centers and search for the maximum peak center subsequence which matches the
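
    The internals of VSHEC are not given in the abstract, but the final step it automates, turning pivotal channel numbers into an energy scale, is a straightforward least-squares fit. The snippet below is a minimal sketch of that step only, with hypothetical peak channels and reference energies; the hard part the program actually solves (automatic peak recognition and matching) is assumed to be already done.

```python
import numpy as np

# Hypothetical pivotal channels found by a peak search, and the reference
# line energies (keV) they were matched to; values are illustrative only.
channels = np.array([812.4, 1003.7, 1187.2, 1490.8])
energies = np.array([5156.6, 6404.0, 7686.8, 9730.0])

# Linear calibration E = a*channel + b by least squares.
a, b = np.polyfit(channels, energies, 1)

def channel_to_energy(ch):
    """Convert a detector channel number into its energy equivalent."""
    return a * ch + b

residuals = energies - channel_to_energy(channels)
print(f"E(ch) = {a:.4f}*ch + {b:.2f} keV, max residual {np.abs(residuals).max():.2f} keV")
```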

  4. Color calibration and color-managed medical displays: does the calibration method matter?

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Rehm, Kelly; Silverstein, Louis D.; Dallas, William J.; Fan, Jiahua; Krupinski, Elizabeth A.

    2010-02-01

    Our laboratory has investigated the efficacy of a suite of color calibration and monitor profiling packages which employ a variety of color measurement sensors. Each of the methods computes gamma correction tables for the red, green and blue color channels of a monitor that attempt to: a) match a desired luminance range and tone reproduction curve; and b) maintain a target neutral point across the range of grey values. All of the methods examined here produce International Color Consortium (ICC) profiles that describe the color rendering capabilities of the monitor after calibration. Color profiles incorporate a transfer matrix that establishes the relationship between RGB driving levels and the International Commission on Illumination (CIE) XYZ (tristimulus) values of the resulting on-screen color; the matrix is developed by displaying color patches of known RGB values on the monitor and measuring the tristimulus values with a sensor. The number and chromatic distribution of color patches varies across methods and is usually not under user control. In this work we examine the effect of employing differing calibration and profiling methods on rendition of color images. A series of color patches encoded in sRGB color space were presented on the monitor using color-management software that utilized the ICC profile produced by each method. The patches were displayed on the calibrated monitor and measured with a Minolta CS200 colorimeter. Differences in intended and achieved luminance and chromaticity were computed using the CIE DE2000 color-difference metric, in which a value of ΔE = 1 is generally considered to be approximately one just noticeable difference (JND) in color. We observed between one and 17 JNDs for individual colors, depending on calibration method and target.
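
    To make the profile terminology concrete, the sketch below applies an sRGB-style 3x3 transfer matrix to convert linearized RGB driving levels to CIE XYZ and then to L*a*b*. The matrix and white point are the published sRGB/D65 values, standing in for the matrix a profiling package would measure, and the color difference is computed with the simple CIE76 Euclidean metric rather than the CIEDE2000 formula used in the study.

```python
import numpy as np

# sRGB (D65) RGB -> XYZ transfer matrix, used as an illustrative stand-in for
# the matrix stored in a monitor's ICC profile.
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])

def srgb_to_xyz(rgb):
    """rgb in [0, 1]; apply the sRGB linearization, then the 3x3 matrix."""
    rgb = np.asarray(rgb, dtype=float)
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    return M @ lin

def xyz_to_lab(xyz, white=np.array([0.9505, 1.0, 1.0890])):
    """CIE XYZ -> L*a*b* relative to the D65 white point."""
    f = lambda t: np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    fx, fy, fz = f(xyz / white)
    return np.array([116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)])

intended = xyz_to_lab(srgb_to_xyz([0.50, 0.20, 0.70]))
measured = xyz_to_lab(srgb_to_xyz([0.51, 0.21, 0.68]))   # e.g. colorimeter reading
# CIE76 Euclidean difference; the study itself uses the more elaborate CIEDE2000.
dE76 = np.linalg.norm(intended - measured)
print(f"deltaE*ab = {dE76:.2f}")
```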

  5. Larger Optics and Improved Calibration Techniques for Small Satellite Observations with the ERAU OSCOM System

    NASA Astrophysics Data System (ADS)

    Bilardi, S.; Barjatya, A.; Gasdia, F.

    OSCOM, Optical tracking and Spectral characterization of CubeSats for Operational Missions, is a system capable of providing time-resolved satellite photometry using commercial-off-the-shelf (COTS) hardware and custom tracking and analysis software. This system has acquired photometry of objects as small as CubeSats using a Celestron 11” RASA and an inexpensive CMOS machine vision camera. For satellites with known shapes, these light curves can be used to verify a satellite’s attitude and the state of its deployed solar panels or antennae. While the OSCOM system can successfully track satellites and produce light curves, there is ongoing improvement towards increasing its automation while supporting additional mounts and telescopes. A newly acquired Celestron 14” Edge HD can be used with a Starizona Hyperstar to increase the SNR for small objects as well as extend beyond the limiting magnitude of the 11” RASA. OSCOM currently corrects instrumental brightness measurements for satellite range and observatory site average atmospheric extinction, but calibrated absolute brightness is required to determine information about satellites other than their spin rate, such as surface albedo. A calibration method that automatically detects and identifies background stars can use their catalog magnitudes to calibrate the brightness of the satellite in the image. We present a photometric light curve from both the 14” Edge HD and 11” RASA optical systems as well as plans for a calibration method that will perform background star photometry to efficiently determine calibrated satellite brightness in each frame.
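
    A calibration of the kind described, using catalog magnitudes of background stars to put the satellite brightness on an absolute scale, reduces in its simplest form to a per-frame zero point. The sketch below shows that reduction with invented fluxes and magnitudes; it is not the OSCOM pipeline, just the underlying arithmetic.

```python
import numpy as np

def zero_point_from_field_stars(star_fluxes, star_catalog_mags):
    """Estimate the per-frame photometric zero point from background stars.

    star_fluxes       : background-subtracted instrumental fluxes (counts/s)
    star_catalog_mags : corresponding catalog magnitudes
    """
    zp = star_catalog_mags + 2.5 * np.log10(star_fluxes)
    return np.median(zp)            # median is robust to a mis-identified star

def calibrated_satellite_mag(sat_flux, zp):
    """Apparent magnitude of the satellite in the same frame."""
    return zp - 2.5 * np.log10(sat_flux)

# Hypothetical frame: three identified catalog stars and one satellite detection.
flux_stars = np.array([15200.0, 8300.0, 41000.0])
mag_stars = np.array([9.8, 10.5, 8.7])
zp = zero_point_from_field_stars(flux_stars, mag_stars)
print(f"zero point = {zp:.2f}, satellite mag = {calibrated_satellite_mag(2700.0, zp):.2f}")
```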

  6. Calibrator device for the extrusion of cable coatings

    NASA Astrophysics Data System (ADS)

    Garbacz, Tomasz; Dulebová, Ľudmila; Spišák, Emil; Dulebová, Martina

    2016-05-01

    This paper presents selected results of theoretical and experimental research on a new calibration device (calibrator) used to produce coatings of electric cables. The aim of this study is to present the design of the calibration equipment and a new calibration machine, which is an important element of the modernized extrusion lines for coating cables. As a result of the extrusion process of PVC modified with blowing agents, an extrudate in the form of an electrical cable was obtained. The conditions of the extrusion process were properly selected, which made it possible to obtain a product with a solid external surface and a cellular core.

  7. A Comparison of Radiometric Calibration Techniques for Lunar Impact Flashes

    NASA Technical Reports Server (NTRS)

    Suggs, R.

    2016-01-01

    Video observations of lunar impact flashes have been made by a number of researchers since the late 1990s, and the problem of determination of the impact energies has been approached in different ways (Bellot Rubio et al., 2000 [1]; Bouley et al., 2012 [2]; Suggs et al., 2014 [3]; Rembold and Ryan, 2015 [4]; Ortiz et al., 2015 [5]). The wide spectral response of the unfiltered video cameras in use for all published measurements necessitates color correction for the standard filter magnitudes available for the comparison stars. An estimate of the color of the impact flash is also needed to correct it to the chosen passband. Magnitudes corrected to standard filters are then used to determine the luminous energy in the filter passband according to the stellar atmosphere calibrations of Bessell et al., 1998 [6]. Figure 1 illustrates the problem. The camera passband is the wide black curve and the blue, green, red, and magenta curves show the bandpasses of the Johnson-Cousins B, V, R, and I filters for which we have calibration star magnitudes. The blackbody curve of an impact flash of temperature 2800 K (Nemtchinov et al., 1998 [7]) is the dashed line. This paper compares the various photometric calibration techniques and how they address the color corrections necessary for the calculation of luminous energy (radiometry) of impact flashes. This issue has significant implications for determination of luminous efficiency, predictions of impact crater sizes for observed flashes, and the flux of meteoroids in the 10s of grams to kilogram size range.

  8. An extended CFD model to predict the pumping curve in low pressure plasma etch chamber

    NASA Astrophysics Data System (ADS)

    Zhou, Ning; Wu, Yuanhao; Han, Wenbin; Pan, Shaowu

    2014-12-01

    A continuum-based CFD model is extended with a slip-wall approximation and a rarefaction effect on viscosity, in an attempt to predict the pumping flow characteristics in low pressure plasma etch chambers. The flow regime inside the chamber ranges from slip flow (Kn ≈ 0.01) up to free molecular (Kn = 10). The momentum accommodation coefficient and the parameters for the Kn-modified viscosity are first calibrated against one measured pumping curve. Then the validity of this calibrated CFD model is demonstrated in comparison with additional pumping curves measured in chambers of different geometry configurations. A more detailed comparison against a DSMC model for flow conductance over slits with contraction and expansion sections is also discussed.

  9. Problems in archaeomagnetic reference curves elaboration in the prehistoric past.

    NASA Astrophysics Data System (ADS)

    Avramova, M.; Kovacheva, M.; Boyadziev, Y.

    2012-04-01

    The most important task of archaeomagnetic studies is the reconstruction of the geomagnetic field secular variation in the past for a given territory. The obtained reference curves would be precise only when a large number of well-dated archaeological sites from different time periods are included as input data. The Sofia Palaeomagnetic laboratory is the first one in the Balkans to accumulate a large number of data spanning the time interval of 3000 BP to 8000 BP. Many archaeological sites in Bulgaria are multilevel settlements with clear stratigraphy. Commonly the prehistoric sites are dated according to the relative chronology, the type of archaeological artifacts found, and 14C dates, the latter not always being available. The biggest difficulty is that the radiocarbon dates are usually not well constrained and are often contradictory to the vertical stratigraphy. The transformation of conventional 14C dates to absolute dates BC depends strongly on which part of the dendrochronological calibration curve they relate to. Besides, multiple dating intervals are often obtained, and the calibrated intervals are usually very large (from 100 to 300 years or more). However, when archaeological discoveries of multilevel prehistoric sites are combined with archaeomagnetic investigations, the corresponding archaeomagnetic profiles can be obtained. Currently we have archaeomagnetic data for fifteen prehistoric multilevel sites - two of them Neolithic, nine Eneolithic, and four from the Bronze Age. The next step is to juxtapose these stratigraphic profiles with the absolute time scale, keeping in mind that not all sites and layers have 14C dates. Thanks to the abundance of 14C dates for most of the Bulgarian prehistoric sites, multilayer sites connected with the absolute chronology exist and can be used as reference profiles. The time frames of each horizon for such a reference site are based on series of 14C dates (not single measurements), the analysis of the thickness of layers, type of material

  10. Stable Calibration of Raman Lidar Water-Vapor Measurements

    NASA Technical Reports Server (NTRS)

    Leblanc, Thierry; McDermid, Iain S.

    2008-01-01

    A method has been devised to ensure stable, long-term calibration of Raman lidar measurements that are used to determine the altitude-dependent mixing ratio of water vapor in the upper troposphere and lower stratosphere. Because the lidar measurements yield a quantity proportional to the mixing ratio, rather than the mixing ratio itself, calibration is necessary to obtain the factor of proportionality. The present method involves the use of calibration data from two sources: (1) absolute calibration data from in situ radiosonde measurements made during occasional campaigns and (2) partial calibration data obtained by use, on a regular schedule, of a lamp that emits in a known spectrum determined in laboratory calibration measurements. In this method, data from the first radiosonde campaign are used to calculate a campaign-averaged absolute lidar calibration factor (t_1) and the corresponding campaign-averaged ratio (L_1) between lamp irradiances at the water-vapor and nitrogen wavelengths. Depending on the scenario considered, this ratio can be assumed to be either constant over a long time (L = L_1) or drifting slowly with time. The absolutely calibrated water-vapor mixing ratio (q) obtained from the ith routine off-campaign lidar measurement is given by q_i = P_i/t_i = L*P_i/P′_i, where P_i is the water-vapor/nitrogen measurement signal ratio, t_i is the unknown and unneeded overall efficiency ratio of the lidar receiver during the ith routine off-campaign measurement run, and P′_i is the water-vapor/nitrogen signal ratio obtained during the lamp run associated with the ith routine off-campaign measurement run. If L is assumed constant, then the lidar calibration is routinely obtained without the need for new radiosonde data. In this case, one uses L = L_1 = P′_1/t_1, where P′_1 is the water-vapor/nitrogen signal ratio obtained during the lamp run associated
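
    The calibration scheme above can be written in a few lines once the campaign constant is in hand. The sketch below evaluates q_i = L*P_i/P′_i for a single routine run with invented numbers, to show how the unknown receiver efficiency cancels out of the routine measurements.

```python
def calibrated_mixing_ratio(P_i, P_lamp_i, L):
    """Water-vapor mixing ratio from routine lidar run i (sketch of the scheme).

    P_i      : water-vapor/nitrogen signal ratio of routine measurement i
    P_lamp_i : same ratio measured during the associated lamp run (P'_i)
    L        : lamp irradiance ratio fixed at the calibration campaign (L = L_1)
    The unknown receiver efficiency t_i cancels because t_i = P_lamp_i / L.
    """
    return L * P_i / P_lamp_i

# Hypothetical numbers: the campaign gives L_1 = 0.85; a later routine run
# measures P_i = 0.042 with an associated lamp-run ratio P'_i = 0.90.
q_i = calibrated_mixing_ratio(0.042, 0.90, 0.85)
print(f"calibrated mixing ratio ~ {q_i:.4f} (in the units set by the campaign)")
```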

  11. Kinetic approach for the enzymatic determination of levodopa and carbidopa assisted by multivariate curve resolution-alternating least squares.

    PubMed

    Grünhut, Marcos; Garrido, Mariano; Centurión, Maria E; Fernández Band, Beatriz S

    2010-07-12

    A combination of kinetic spectroscopic monitoring and multivariate curve resolution-alternating least squares (MCR-ALS) was proposed for the enzymatic determination of levodopa (LVD) and carbidopa (CBD) in pharmaceuticals. The enzymatic reaction process was carried out in a reverse stopped-flow injection system and monitored by UV-vis spectroscopy. The spectra (292-600 nm) were recorded throughout the reaction and were analyzed by multivariate curve resolution-alternating least squares. A small calibration matrix containing nine mixtures was used in the model construction. Additionally, to evaluate the prediction ability of the model, a set with six validation mixtures was used. The lack of fit obtained was 4.3%, the explained variance 99.8% and the overall prediction error 5.5%. Tablets of commercial samples were analyzed and the results were validated by the pharmacopeia method (high-performance liquid chromatography). No significant differences were found (alpha=0.05) between the reference values and the ones obtained with the proposed method. It is important to note that a unique chemometric model made it possible to determine both analytes simultaneously.

  12. Determination of calibration parameters of a VRX CT system using an "Amoeba" algorithm.

    PubMed

    Jordan, Lawrence M; Dibianca, Frank A; Melnyk, Roman; Choudhary, Apoorva; Shukla, Hemant; Laughter, Joseph; Gaber, M Waleed

    2004-01-01

    Efforts to improve the spatial resolution of CT scanners have focused mainly on reducing the source and detector element sizes, ignoring losses from the size of the secondary-ionization charge "clouds" created by the detected x-ray photons, i.e., the "physics limit." This paper focuses on implementing a technique called "projective compression," which allows further reduction in effective cell size while overcoming the physics limit as well. Projective compression signifies detector geometries in which the apparent cell size is smaller than the physical cell size, allowing large resolution boosts. A realization of this technique has been developed with a dual-arm "variable-resolution x-ray" (VRX) detector. Accurate values of the geometrical parameters are needed to convert VRX outputs to formats ready for optimal image reconstruction by standard CT techniques. The required calibrating data are obtained by scanning a rotating pin and fitting a theoretical parametric curve (using a multi-parameter minimization algorithm) to the resulting pin sinogram. Excellent fits are obtained for both detector-arm sections with an average (maximum) fit deviation of ~0.05 (0.1) detector cell width. Fit convergence and sensitivity to starting conditions are considered. Pre- and post-optimization reconstructions of the alignment pin and a biological subject reconstruction after calibration are shown.
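
    The geometric model actually fitted for the VRX detector is not spelled out in the abstract; the sketch below illustrates the same workflow on a toy three-parameter pin-sinogram model, minimizing the sum of squared deviations with the Nelder-Mead ("Amoeba") simplex from scipy on synthetic data. The model, parameters, and data are invented for the example.

```python
import numpy as np
from scipy.optimize import minimize

def pin_sinogram_model(view_angles, params):
    """Toy pin trace: detector position of the pin versus rotation angle.

    params = (r, phi0, u0): pin distance from the rotation centre, phase, and
    the detector coordinate of the rotation centre (the real VRX geometry has
    more parameters per detector arm).
    """
    r, phi0, u0 = params
    return u0 + r * np.sin(view_angles + phi0)

def fit_pin_trace(view_angles, measured_positions, start):
    cost = lambda p: np.sum((pin_sinogram_model(view_angles, p)
                             - measured_positions) ** 2)
    # "Amoeba" = Nelder-Mead downhill simplex
    res = minimize(cost, start, method="Nelder-Mead")
    return res.x, res.fun

# Synthetic pin scan with noise
rng = np.random.default_rng(1)
angles = np.linspace(0.0, 2.0 * np.pi, 360)
truth = (55.0, 0.4, 256.0)
data = pin_sinogram_model(angles, truth) + 0.05 * rng.standard_normal(angles.size)
fit, sse = fit_pin_trace(angles, data, start=(50.0, 0.0, 250.0))
print("fitted (r, phi0, u0):", np.round(fit, 3))
```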

  13. Updates on the Performance and Calibration of HST/STIS

    NASA Astrophysics Data System (ADS)

    Lockwood, Sean A.; Monroe, TalaWanda R.; Ogaz, Sara; Branton, Doug; Carlberg, Joleen K.; Debes, John H.; Jedrzejewski, Robert I.; Proffitt, Charles R.; Riley, Allyssa; Sohn, Sangmo Tony; Sonnentrucker, Paule; Walborn, Nolan R.; Welty, Daniel

    2018-06-01

    The Space Telescope Imaging Spectrograph (STIS) on the Hubble Space Telescope (HST) has been in orbit for 21 years and continues to produce high quality scientific results using a diverse complement of operating modes. These include spatially resolved spectroscopy in the UV and optical, high spatial resolution echelle spectroscopy in the UV, and solar-blind imaging in the UV. In addition, STIS possesses unique visible-light coronagraphic modes that keep the instrument at the forefront of exoplanet and debris-disk research. As the instrument's characteristics evolve over its lifetime, the instrument team at the Space Telescope Science Institute monitors its performance and works towards improving the quality of its data products. Here we present updates on the status of the STIS CCD and FUV & NUV MAMA detectors, as well as changes to the CalSTIS reduction pipeline. We also discuss progress toward the recalibration of the E140M/1425 echelle mode. The E140M grating blaze function shapes have changed since the flux calibration was carried out following SM4, which limits the relative photometric flux accuracy of some spectral orders, with errors of up to 5-10% at the edges. In Cycle 25 a special calibration program was executed to obtain updated sensitivity curves for the E140M/1425 setting.

  14. First Absolutely Calibrated Localized Measurements of Ion Velocity in the MST in Locked and Rotating Plasmas

    NASA Astrophysics Data System (ADS)

    Baltzer, M.; Craig, D.; den Hartog, D. J.; Nornberg, M. D.; Munaretto, S.

    2015-11-01

    An Ion Doppler Spectrometer (IDS) is used on MST for high time-resolution passive and active measurements of impurity ion emission. Absolutely calibrated measurements of flow are difficult because the spectrometer records data within 0.3 nm of the C+5 line of interest, and commercial calibration lamps do not produce lines in this narrow range. A novel optical system was designed to absolutely calibrate the IDS. The device uses a UV LED to produce a broad emission curve in the desired region. A Fabry-Perot etalon filters this light, cutting transmittance peaks into the pattern of the LED emission. An optical train of fused silica lenses focuses the light into the IDS at f/4. A holographic diffuser blurs the light cone to increase homogeneity. Using this light source, the absolute Doppler shift of ion emissions can be measured in MST plasmas. In combination with charge exchange recombination spectroscopy, localized ion velocities can now be measured. Previously, a time-averaged measurement along the chord bisecting the poloidal plane was used to calibrate the IDS; the quality of these central-chord calibrations can be characterized with our absolute calibration. Calibration errors may also be quantified and minimized by optimizing the curve-fitting process. Preliminary measurements of toroidal velocity in locked and rotating plasmas will be shown. This work has been supported by the US DOE.

  15. Multimodal Spatial Calibration for Accurately Registering EEG Sensor Positions

    PubMed Central

    Chen, Shengyong; Xiao, Gang; Li, Xiaoli

    2014-01-01

    This paper proposes a fast and accurate calibration method to calibrate multiple multimodal sensors using a novel photogrammetry system for fast localization of EEG sensors. The EEG sensors are placed on the human head, and multimodal sensors are installed around the head to simultaneously obtain all EEG sensor positions. A multiple-view calibration process is implemented to obtain the transformations between views. We first develop an efficient local repair algorithm to improve the depth map, and then a special calibration body is designed. Based on these, accurate and robust calibration results can be achieved. We evaluate the proposed method using the corners of a chessboard calibration plate. Experimental results demonstrate that the proposed method achieves good performance and can be further applied to EEG source localization applications on the human brain. PMID:24803954

  16. The curvature of sensitometric curves for Kodak XV-2 film irradiated with photon and electron beams.

    PubMed

    van Battum, L J; Huizenga, H

    2006-07-01

    Sensitometric curves of Kodak XV-2 film, obtained over a period of ten years with various types of equipment, have been analyzed for both photon and electron beams. The sensitometric slope in the dataset varies by more than a factor of 2, which is attributed mainly to variations in developer conditions. In the literature, the single-hit equation has been proposed as a model for the sensitometric curve, with the sensitivity and maximum optical density as parameters. In this work, the single-hit equation has been translated into a polynomial-like function with the sensitometric slope and curvature as parameters. The model has been applied to fit the sensitometric data. If each single sensitometric curve is fitted separately, a large variation is observed for both fit parameters. When the sensitometric curves are fitted simultaneously, it appears that all curves can be fitted adequately with a sensitometric curvature that is related to the sensitometric slope. When each curve is fitted separately, measurement uncertainty apparently hides this relation. This relation appears to depend only on the type of densitometer used. No significant differences between beam energies or beam modalities are observed. Using the intrinsic relation between slope and curvature in fitting sensitometric data, e.g., for pretreatment verification of intensity-modulated radiotherapy, will increase the accuracy of the sensitometric curve. A calibration at a single dose point, together with a predetermined densitometer-dependent parameter ODmax, will be adequate to find the actual relation between optical density and dose.
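
    As a worked illustration of the slope/curvature relation discussed above: expanding the single-hit model netOD = ODmax*(1 - exp(-alpha*D)) to second order gives netOD ≈ s*D - c*D², with s = ODmax*alpha and c = s²/(2*ODmax), so the curvature is tied to the slope once the densitometer-dependent ODmax is fixed. The sketch below fits the single-hit form to invented sensitometric data and reports the implied slope and curvature; it is not the paper's simultaneous multi-curve fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def single_hit(dose, od_max, alpha):
    """Single-hit model: net optical density versus dose."""
    return od_max * (1.0 - np.exp(-alpha * dose))

# Hypothetical sensitometric data (dose in cGy)
dose = np.array([0.0, 25.0, 50.0, 100.0, 200.0, 300.0, 400.0])
net_od = np.array([0.0, 0.11, 0.21, 0.39, 0.68, 0.90, 1.07])

(od_max, alpha), _ = curve_fit(single_hit, dose, net_od, p0=(2.0, 0.005))
slope = od_max * alpha                  # first-order (sensitometric) slope
curvature = slope ** 2 / (2 * od_max)   # second-order term tied to the slope
print(f"ODmax={od_max:.2f}, slope={slope:.4f}/cGy, curvature={curvature:.6f}/cGy^2")
```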

  17. Optical Mass Displacement Tracking: A simplified field calibration method for the electro-mechanical seismometer.

    NASA Astrophysics Data System (ADS)

    Burk, D. R.; Mackey, K. G.; Hartse, H. E.

    2016-12-01

    We have developed a simplified field calibration method for use in seismic networks that still employ the classical electro-mechanical seismometer. Smaller networks may not always have the financial capability to purchase and operate modern, state-of-the-art equipment. Therefore these networks generally operate a modern, low-cost digitizer paired with an existing electro-mechanical seismometer. These systems are typically poorly calibrated. The station calibration is difficult to estimate because coil loading, digitizer input impedance, and amplifier gain differences vary by station and digitizer model. Therefore, it is necessary to calibrate the station channel as a complete system, taking into account all components from instrument, to amplifier, to even the digitizer. Routine calibrations at the smaller networks are not always consistent, because existing calibration techniques require either specialized equipment or significant technical expertise. To improve station data quality at the small network, we developed a calibration method that utilizes open source software and a commonly available laser position sensor. Using a signal generator and a small excitation coil, we force the mass of the instrument to oscillate at various frequencies across its operating range. We then compare the channel voltage output to the laser-measured mass displacement to determine the instrument voltage sensitivity at each frequency point. Using the standard equations of forced motion, a representation of the calibration curve as a function of voltage per unit of ground velocity is calculated. A computer algorithm optimizes the curve and then translates the instrument response into a Seismic Analysis Code (SAC) poles & zeros format. Results have been demonstrated to fall within a few percent of a standard laboratory calibration. This method is an effective and affordable option for networks that employ electro-mechanical seismometers, and it is currently being deployed in
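
    One plausible way to turn the per-frequency voltage and laser displacement readings into a velocity sensitivity is via the standard damped-oscillator forced-motion relation, assuming the sensor's natural frequency and damping are known. The sketch below does exactly that with invented sweep data; it only illustrates the conversion, not the authors' curve optimization or the SAC poles-and-zeros export.

```python
import numpy as np

def velocity_sensitivity(freq_hz, v_peak, z_peak, f0=1.0, h=0.7):
    """Channel sensitivity in V per (m/s) of ground velocity at each test frequency.

    freq_hz : drive frequencies of the excitation coil (Hz)
    v_peak  : peak channel output voltage at each frequency
    z_peak  : laser-measured peak mass displacement (m) at each frequency
    f0, h   : assumed natural frequency (Hz) and damping fraction of the sensor
    Uses the standard forced-motion relation between relative mass displacement
    and ground displacement for a damped oscillator.
    """
    w, w0 = 2 * np.pi * np.asarray(freq_hz), 2 * np.pi * f0
    # ground displacement amplitude that would produce the observed mass motion
    u_ground = z_peak * np.sqrt((w0**2 - w**2) ** 2 + (2 * h * w0 * w) ** 2) / w**2
    return v_peak / (w * u_ground)          # volts per m/s of ground velocity

# Hypothetical sweep of a 1 Hz electro-mechanical sensor
freqs = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
volts = np.array([0.8, 2.9, 4.1, 4.3, 4.2])                   # channel output
disp = np.array([2.1e-3, 4.6e-3, 3.3e-3, 1.4e-3, 0.7e-3])     # laser reading (m)
print(np.round(velocity_sensitivity(freqs, volts, disp), 1))
```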

  18. A Robust Bayesian Random Effects Model for Nonlinear Calibration Problems

    PubMed Central

    Fong, Y.; Wakefield, J.; De Rosa, S.; Frahm, N.

    2013-01-01

    In the context of a bioassay or an immunoassay, calibration means fitting a curve, usually nonlinear, through the observations collected on a set of samples containing known concentrations of a target substance, and then using the fitted curve and observations collected on samples of interest to predict the concentrations of the target substance in these samples. Recent technological advances have greatly improved our ability to quantify minute amounts of substance from a tiny volume of biological sample. This has in turn led to a need to improve statistical methods for calibration. In this paper, we focus on developing calibration methods robust to dependent outliers. We introduce a novel normal mixture model with dependent error terms to model the experimental noise. In addition, we propose a re-parameterization of the five parameter logistic nonlinear regression model that allows us to better incorporate prior information. We examine the performance of our methods with simulation studies and show that they lead to a substantial increase in performance measured in terms of mean squared error of estimation and a measure of the average prediction accuracy. A real data example from the HIV Vaccine Trials Network Laboratory is used to illustrate the methods. PMID:22551415
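
    The paper re-parameterizes the five-parameter logistic (5PL) model for its Bayesian treatment; as a plain-vanilla baseline, the sketch below fits the standard 5PL by nonlinear least squares to hypothetical standards and then inverts the fitted curve to calibrate an unknown sample (no robust error model and no prior information, unlike the method described above).

```python
import numpy as np
from scipy.optimize import curve_fit, brentq

def five_pl(conc, a, d, c, b, g):
    """Five-parameter logistic: a = lower asymptote, d = upper asymptote,
    c = inflection concentration, b = slope, g = asymmetry."""
    return d + (a - d) / (1.0 + (conc / c) ** b) ** g

# Hypothetical standards (known concentrations) and measured responses
std_conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0, 300.0])
response = np.array([12.0, 15.0, 26.0, 60.0, 140.0, 260.0, 350.0, 390.0])

p0 = (10.0, 400.0, 10.0, 1.0, 1.0)
bounds = ([0.0, 100.0, 0.01, 0.1, 0.1], [100.0, 1000.0, 1000.0, 10.0, 10.0])
popt, _ = curve_fit(five_pl, std_conc, response, p0=p0, bounds=bounds)

def predict_conc(y, popt, lo=0.01, hi=1e4):
    """Invert the fitted curve for an unknown sample's response."""
    return brentq(lambda x: five_pl(x, *popt) - y, lo, hi)

print(f"unknown with response 200 -> concentration ~ {predict_conc(200.0, popt):.1f}")
```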

  19. Linking Parameters Estimated with the Generalized Graded Unfolding Model: A Comparison of the Accuracy of Characteristic Curve Methods

    ERIC Educational Resources Information Center

    Anderson Koenig, Judith; Roberts, James S.

    2007-01-01

    Methods for linking item response theory (IRT) parameters are developed for attitude questionnaire responses calibrated with the generalized graded unfolding model (GGUM). One class of IRT linking methods derives the linking coefficients by comparing characteristic curves, and three of these methods---test characteristic curve (TCC), item…

  20. Calibration of solid state nuclear track detectors at high energy ion beams for cosmic radiation measurements: HAMLET results

    NASA Astrophysics Data System (ADS)

    Szabó, J.; Pálfalvi, J. K.

    2012-12-01

    The MATROSHKA experiments and the related HAMLET project funded by the European Commission aimed to study the dose burden of the crew working on the International Space Station (ISS). During these experiments a human phantom equipped with several thousands of radiation detectors was exposed to cosmic rays inside and outside the ISS. Besides the measurements realized in Earth orbit, the HAMLET project included also a ground-based program of calibration and intercomparison of the different detectors applied by the participating groups using high-energy ion beams. The Space Dosimetry Group of the Centre for Energy Research (formerly Atomic Energy Research Institute) participated in these experiments with passive solid state nuclear track detectors (SSNTDs). The paper presents the results of the calibration experiments performed in the years 2008-2011 at the Heavy Ion Medical Accelerator (HIMAC) of the National Institute of Radiological Sciences (NIRS), Chiba, Japan. The data obtained serve as an update and improvement of the previous calibration curves, which are necessary for the evaluation of the SSNTDs exposed in unknown space radiation fields.

  1. GIFTS SM EDU Radiometric and Spectral Calibrations

    NASA Technical Reports Server (NTRS)

    Tian, J.; Reisse, R. a.; Johnson, D. G.; Gazarik, J. J.

    2007-01-01

    The Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) Sensor Module (SM) Engineering Demonstration Unit (EDU) is a high resolution spectral imager designed to measure infrared (IR) radiance using a Fourier transform spectrometer (FTS). The GIFTS instrument gathers measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The raw interferogram measurements are radiometrically and spectrally calibrated to produce radiance spectra, which are further processed to obtain atmospheric profiles via retrieval algorithms. This paper describes the processing algorithms involved in the calibration. The calibration procedures can be subdivided into three categories: the pre-calibration stage, the calibration stage, and finally, the post-calibration stage. Detailed derivations for each stage are presented in this paper.

  2. Effect of nonideal square-law detection on static calibration in noise-injection radiometers

    NASA Technical Reports Server (NTRS)

    Hearn, C. P.

    1984-01-01

    The effect of nonideal square-law detection on the static calibration for a class of Dicke radiometers is examined. It is shown that fourth-order curvature in the detection characteristic adds a nonlinear term to the linear calibration relationship normally ascribed to noise-injection, balanced Dicke radiometers. The minimum error, based on an optimum straight-line fit to the calibration curve, is derived in terms of the power series coefficients describing the input-output characteristics of the detector. These coefficients can be determined by simple measurements, and detection nonlinearity is, therefore, quantitatively related to radiometric measurement error.
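
    The mechanism described, a fourth-order term in the detector characteristic bending the otherwise linear calibration, can be checked numerically. For a zero-mean Gaussian noise voltage of power P, the even moments give a mean detector output of a2*P + 3*a4*P², so the quartic term contributes a quadratic dependence on input power. The sketch below verifies this by Monte Carlo with made-up coefficients; it is only an illustration, not the paper's derivation of the minimum straight-line-fit error.

```python
import numpy as np

rng = np.random.default_rng(0)
a2, a4 = 1.0, 0.02            # hypothetical 2nd- and 4th-order detector terms

def mean_detector_output(noise_power, n=2_000_000):
    """Average output of a nonideal square-law detector for Gaussian noise input."""
    v = rng.normal(0.0, np.sqrt(noise_power), n)     # zero-mean Gaussian voltage
    return np.mean(a2 * v**2 + a4 * v**4)

powers = np.array([0.5, 1.0, 2.0, 4.0])
simulated = np.array([mean_detector_output(p) for p in powers])
# Ideal square law would give a2*P; the 4th-order term adds 3*a4*P**2.
predicted = a2 * powers + 3 * a4 * powers**2
print(np.round(simulated, 3))
print(np.round(predicted, 3))
```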

  3. Wavelength selection-based nonlinear calibration for transcutaneous blood glucose sensing using Raman spectroscopy

    PubMed Central

    Dingari, Narahara Chari; Barman, Ishan; Kang, Jeon Woong; Kong, Chae-Ryon; Dasari, Ramachandra R.; Feld, Michael S.

    2011-01-01

    While Raman spectroscopy provides a powerful tool for noninvasive and real time diagnostics of biological samples, its translation to the clinical setting has been impeded by the lack of robustness of spectroscopic calibration models and the size and cumbersome nature of conventional laboratory Raman systems. Linear multivariate calibration models employing full spectrum analysis are often misled by spurious correlations, such as system drift and covariations among constituents. In addition, such calibration schemes are prone to overfitting, especially in the presence of external interferences that may create nonlinearities in the spectra-concentration relationship. To address both of these issues, we incorporate residue error plot-based wavelength selection and nonlinear support vector regression (SVR). Wavelength selection is used to eliminate uninformative regions of the spectrum, while SVR is used to model the curved effects such as those created by tissue turbidity and temperature fluctuations. Using glucose detection in tissue phantoms as a representative example, we show that even a substantial reduction in the number of wavelengths analyzed using SVR leads to calibration models with prediction accuracy equivalent to linear full-spectrum analysis. Further, with clinical datasets obtained from human subject studies, we also demonstrate the prospective applicability of the selected wavelength subsets without sacrificing prediction accuracy, which has extensive implications for calibration maintenance and transfer. Additionally, such wavelength selection could substantially reduce the collection time of serial Raman acquisition systems. Given the reduced footprint of serial Raman systems in relation to conventional dispersive Raman spectrometers, we anticipate that the incorporation of wavelength selection in such hardware designs will enhance the possibility of miniaturized clinical systems for disease diagnosis in the near future. PMID:21895336
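
    As a rough sketch of the nonlinear calibration stage only (the residue-error-plot wavelength selection itself is taken as given), the code below trains a support vector regression on a pre-selected wavelength subset of synthetic spectra and reports the prediction error on held-out samples; all data, indices, and hyperparameters are invented for the example.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-in for Raman spectra (n samples x p wavelengths) and glucose
# concentrations; the informative signal lives in a small wavelength subset.
rng = np.random.default_rng(0)
n, p = 120, 400
informative = np.arange(150, 170)                   # "selected" wavelength indices
glucose = 80.0 + 60.0 * rng.random(n)               # mg/dL, hypothetical
X = rng.standard_normal((n, p))
X[:, informative] += 0.05 * glucose[:, None]        # weak concentration signature
X += 0.1 * rng.standard_normal((n, p))              # drift/interference noise

# Nonlinear calibration on the selected wavelength subset only
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=1.0))
train, test = slice(0, 90), slice(90, None)
model.fit(X[train][:, informative], glucose[train])
pred = model.predict(X[test][:, informative])
rmsep = np.sqrt(np.mean((pred - glucose[test]) ** 2))
print(f"RMSEP on held-out samples: {rmsep:.1f} mg/dL")
```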

  4. Calibration strategies for the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Gaug, Markus; Berge, David; Daniel, Michael; Doro, Michele; Förster, Andreas; Hofmann, Werner; Maccarone, Maria C.; Parsons, Dan; de los Reyes Lopez, Raquel; van Eldik, Christopher

    2014-08-01

    The Central Calibration Facilities workpackage of the Cherenkov Telescope Array (CTA) observatory for very high energy gamma ray astronomy defines the overall calibration strategy of the array, develops dedicated hardware and software for the overall array calibration and coordinates the calibration efforts of the different telescopes. The latter include LED-based light pulsers, and various methods and instruments to achieve a calibration of the overall optical throughput. On the array level, methods for the inter-telescope calibration and the absolute calibration of the entire observatory are being developed. Additionally, the atmosphere above the telescopes, used as a calorimeter, will be monitored constantly with state-of-the-art instruments to obtain a full molecular and aerosol profile up to the stratosphere. The aim is to provide a maximal uncertainty of 10% on the reconstructed energy-scale, obtained through various independent methods. Different types of LIDAR in combination with all-sky-cameras will provide the observatory with an online, intelligent scheduling system, which, if the sky is partially covered by clouds, gives preference to sources observable under good atmospheric conditions. Wide-field optical telescopes and Raman Lidars will provide online information about the height-resolved atmospheric extinction, throughout the field-of-view of the cameras, allowing for the correction of the reconstructed energy of each gamma-ray event. The aim is to maximize the duty cycle of the observatory, in terms of usable data, while reducing the dead time introduced by calibration activities to an absolute minimum.

  5. Optical Rotation Curves and Linewidths for Tully-Fisher Applications

    NASA Astrophysics Data System (ADS)

    Courteau, Stephane

    1997-12-01

    We present optical long-slit rotation curves for 304 northern Sb-Sc UGC galaxies from a sample designed for Tully-Fisher (TF) applications. Matching r-band photometry exists for each galaxy. We describe the procedures of rotation curve (RC) extraction and construction of optical profiles analogous to 21 cm integrated linewidths. More than 20% of the galaxies were observed twice or more, allowing for a proper determination of systematic errors. Various measures of maximum rotational velocity to be used as input in the TF relation are tested on the basis of their repeatability, minimization of TF scatter, and match with 21 cm linewidths. The best measure of TF velocity, V2.2, is given at the location of the peak rotational velocity of a pure exponential disk. An alternative measure to V2.2, which makes no assumption about the luminosity profile or the shape of the rotation curve, is Vhist, the 20% width of the velocity histogram, though the match with 21 cm linewidths is not as good. We show that optical TF calibrations yield internal scatter comparable to, if not smaller than, the best calibrations based on single-dish 21 cm radio linewidths. Even though resolved H I RCs are more extended than their optical counterparts, a tight match between optical and radio linewidths exists since the bulk of the H I surface density is enclosed within the optical radius. We model the 304 RCs presented here plus a sample of 958 curves from Mathewson et al. (1992, APJS, 81, 413) with various fitting functions. An arctan function provides an adequate simple fit (not accounting for non-circular motions and spiral arms). More elaborate, empirical models may yield a better match at the expense of strong covariances. We caution against physical or "universal" parametrizations for TF applications.
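
    The arctan fitting function mentioned above is simple enough to show in full. The sketch below fits V(r) = (2/pi)*v0*arctan(r/r_t) to an invented folded rotation curve and evaluates it at 2.2 disk scale lengths, roughly the location of the peak rotational velocity of a pure exponential disk, as a stand-in for V2.2; the data and parameter values are not from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def arctan_rc(r, v0, r_t):
    """Simple arctan rotation-curve model: V(r) = (2/pi) * v0 * arctan(r / r_t)."""
    return (2.0 / np.pi) * v0 * np.arctan(r / r_t)

# Hypothetical folded rotation curve (radius in disk scale lengths, V in km/s)
r = np.array([0.2, 0.5, 1.0, 1.5, 2.2, 3.0, 4.0])
v = np.array([40.0, 95.0, 150.0, 175.0, 190.0, 196.0, 200.0])

(v0, r_t), _ = curve_fit(arctan_rc, r, v, p0=(200.0, 1.0))

# V2.2: model velocity at 2.2 disk scale lengths (near the peak of a pure
# exponential disk's rotation curve)
v22 = arctan_rc(2.2, v0, r_t)
print(f"v0 = {v0:.0f} km/s, r_t = {r_t:.2f}, V2.2 = {v22:.0f} km/s")
```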

  6. Repopulation of calibrations with samples from the target site: effect of the size of the calibration.

    NASA Astrophysics Data System (ADS)

    Guerrero, C.; Zornoza, R.; Gómez, I.; Mataix-Solera, J.; Navarro-Pedreño, J.; Mataix-Beneyto, J.; García-Orenes, F.

    2009-04-01

    Near infrared (NIR) reflectance spectroscopy offers important advantages because it is a non-destructive technique, the sample pre-treatments needed are minimal, and the spectrum of a sample is obtained in less than 1 minute without the need for chemical reagents. For these reasons, NIR is a fast and cost-effective method. Moreover, NIR allows the analysis of several constituents or parameters simultaneously from the same spectrum once it is obtained. For this, a necessary step is the development of soil spectral libraries (sets of samples analysed and scanned) and calibrations (using multivariate techniques). The calibrations should contain the variability of the soils of the target site in which the calibration is to be used. Often this premise is not easy to fulfil, especially in recently developed libraries. A classical way to solve this problem is through the repopulation of libraries and the subsequent recalibration of the models. In this work we studied the changes in the accuracy of the predictions as a consequence of the successive addition of samples during repopulation. In general, calibrations with a high number of samples and high diversity are desired. However, we hypothesized that calibrations with fewer samples (smaller size) will more easily absorb the spectral characteristics of the target site. Thus, we suspect that the size of the calibration (model) that will be repopulated could be important. For this reason we also studied this effect on the accuracy of predictions of the repopulated models. In this study we used those spectra of our library which contained data on soil Kjeldahl Nitrogen (NKj) content (nearly 1500 samples). First, the spectra from the target site were removed from the spectral library. Then, different quantities of samples from the library were selected (representing 5, 10, 25, 50, 75 and 100% of the total library). These samples were used to develop calibrations with different sizes (%) of samples. We used partial least

  7. Analysis of the curve of Spee and the curve of Wilson in adult Indian population: A three-dimensional measurement study.

    PubMed

    Surendran, Sowmya Velekkatt; Hussain, Sharmila; Bhoominthan, S; Nayar, Sanjna; Jayesh, Ragavendra

    2016-01-01

    When reconstructing occlusal curvatures, dentists often use a 4-inch-radius arc as a rough standard based on Monson's spherical theory. The use of an identical radius for the curve of Spee for all patients may not be appropriate because each patient is individually different. To evaluate the validity of applying this theory to the Indian population, the present study was undertaken. This study is an attempt to evaluate the curve of Spee and the curve of Wilson in a young Indian population using three-dimensional analysis. The study compared the radius and depth of the right and left, maxillary and mandibular curves of Spee and the radius of the maxillary and mandibular curves of Wilson in males and females. The cusp tips of the canines, the buccal cusp tips of the premolars and molars, and the palatal/lingual cusp tips of the second molars of 60 maxillary and 60 mandibular casts were obtained. Three-dimensional (x, y, z) coordinates of the cusp tips of the molars, premolars, and canines of the right and left sides of the maxilla and mandible were obtained with a three-dimensional coordinate measuring machine. The radius and depth of the right and left, maxillary and mandibular curves of Spee and the radius of the maxillary and mandibular curves of Wilson were measured by means of the computer software Metrologic-XG. Pearson's correlation test and the independent t-test were used to test statistical significance (α=.05). The values of the curve of Spee and the curve of Wilson in the Indian population obtained from this study were higher than the 4-inch (100 mm) radius proposed by Monson. These findings suggest ethnic differences in the radius of the curve of Spee and the curve of Wilson.

  8. Hydrological modelling of the Mara River Basin, Kenya: Dealing with uncertain data quality and calibrating using river stage

    NASA Astrophysics Data System (ADS)

    Hulsman, P.; Bogaard, T.; Savenije, H. H. G.

    2016-12-01

    In hydrology and water resources management, discharge is the main time series for model calibration. Rating curves are needed to derive discharge from continuously measured water levels. However, assuring their quality is demanding due to dynamic changes and problems in accurately deriving discharge at high flows. This is valid everywhere, but even more so in the African socio-economic context. To cope with these uncertainties, this study proposes to use water levels instead of discharge data for calibration. Uncertainties in rainfall measurements, especially their spatial heterogeneity, also need to be considered. In this study, the semi-distributed rainfall-runoff model FLEX-Topo was applied to the Mara River Basin. In this model, seven sub-basins were distinguished, along with four hydrological response units, each with a unique model structure based on the expected dominant flow processes. Parameter and process constraints were applied to exclude unrealistic results. To calibrate the model, water levels were back-calculated from modelled discharges, using cross-section data and the Strickler formula with the lumped calibration parameter k·s^(1/2), and compared to measured water levels. The model simulated the water depths well for the entire basin and the Nyangores sub-basin in the north. However, the calibrated and observed rating curves differed significantly at the basin outlet, probably due to uncertainties in the measured discharge, whereas at Nyangores they were almost identical. To assess the effect of rainfall uncertainties on the hydrological model, the representative rainfall in each sub-basin was estimated with three different methods: 1) single station, 2) average precipitation, 3) areal sub-division using Thiessen polygons. All three methods gave on average similar results, but method 1 resulted in more flashy responses, method 2 dampened the water levels due to averaging of the rainfall, and method 3 was a combination of both. In conclusion, in the case of unreliable rating curves
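
    The back-calculation of water levels from modelled discharge is, for a given cross-section, a one-dimensional root-finding problem in the Strickler/Manning relation. The sketch below assumes a rectangular section and a lumped calibration parameter c = k*sqrt(s); the width, parameter value, and discharge are invented, and the real study uses surveyed cross-section data rather than a rectangle.

```python
import numpy as np
from scipy.optimize import brentq

def discharge(h, width, c):
    """Strickler/Manning discharge for a rectangular section of the given width.

    c is the lumped calibration parameter k*sqrt(s) mentioned in the abstract.
    """
    area = width * h
    radius = area / (width + 2.0 * h)          # hydraulic radius
    return c * area * radius ** (2.0 / 3.0)

def stage_from_discharge(q, width, c, h_max=20.0):
    """Back-calculate the water depth that reproduces a modelled discharge."""
    return brentq(lambda h: discharge(h, width, c) - q, 1e-6, h_max)

# Hypothetical: 40 m wide section, calibrated c = 2.1; modelled Q = 85 m^3/s
h = stage_from_discharge(85.0, 40.0, 2.1)
print(f"simulated water depth ~ {h:.2f} m")
```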

  9. Patient-specific calibration of cone-beam computed tomography data sets for radiotherapy dose calculations and treatment plan assessment.

    PubMed

    MacFarlane, Michael; Wong, Daniel; Hoover, Douglas A; Wong, Eugene; Johnson, Carol; Battista, Jerry J; Chen, Jeff Z

    2018-03-01

    In this work, we propose a new method of calibrating cone beam computed tomography (CBCT) data sets for radiotherapy dose calculation and plan assessment. The motivation for this patient-specific calibration (PSC) method is to develop an efficient, robust, and accurate CBCT calibration process that is less susceptible to deformable image registration (DIR) errors. Instead of mapping the CT numbers voxel-by-voxel with traditional DIR calibration methods, the PSC method generates correlation plots between deformably registered planning CT and CBCT voxel values, for each image slice. A linear calibration curve specific to each slice is then obtained by least-squares fitting, and applied to the CBCT slice's voxel values. This allows each CBCT slice to be corrected using DIR without altering the patient geometry through regional DIR errors. A retrospective study was performed on 15 head-and-neck cancer patients, each having routine CBCTs and a middle-of-treatment re-planning CT (reCT). The original treatment plan was re-calculated on the patient's reCT image set (serving as the gold standard) as well as the image sets produced by voxel-to-voxel DIR, density-overriding, and the new PSC calibration methods. Dose accuracy of each calibration method was compared to the reference reCT data set using common dose-volume metrics and 3D gamma analysis. A phantom study was also performed to assess the accuracy of the DIR and PSC CBCT calibration methods compared with planning CT. Compared with the gold standard using reCT, the average dose metric differences were ≤ 1.1% for all three methods (PSC: -0.3%; DIR: -0.7%; density-override: -1.1%). The average gamma pass rates with thresholds 3%, 3 mm were also similar among the three techniques (PSC: 95.0%; DIR: 96.1%; density-override: 94.4%). An automated patient-specific calibration method was developed which yielded strong dosimetric agreement with the results obtained using a re-planning CT for head-and-neck patients.
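
    A minimal sketch of the per-slice linear mapping described above: for each slice of a (registered) planning-CT/CBCT pair, a least-squares line relating CBCT voxel values to CT numbers is fitted and applied back to the CBCT slice, so the CBCT geometry is preserved and only its values are rescaled. The volumes here are random stand-ins, not clinical data, and the real method works from correlation plots of deformably registered images.

```python
import numpy as np

def patient_specific_calibration(ct_volume, cbct_volume):
    """Slice-by-slice linear mapping of CBCT voxel values onto planning-CT numbers.

    ct_volume, cbct_volume : (n_slices, H, W) arrays, assumed already deformably
    registered so corresponding voxels can be paired within each slice.
    Returns the calibrated CBCT volume (geometry unchanged, values rescaled).
    """
    calibrated = np.empty_like(cbct_volume, dtype=float)
    for z in range(cbct_volume.shape[0]):
        x = cbct_volume[z].ravel()
        y = ct_volume[z].ravel()
        slope, intercept = np.polyfit(x, y, 1)     # least-squares line per slice
        calibrated[z] = slope * cbct_volume[z] + intercept
    return calibrated

# Hypothetical 3-slice example with a slice-dependent gain/offset error
rng = np.random.default_rng(0)
ct = rng.integers(-1000, 1500, size=(3, 64, 64)).astype(float)
cbct = (0.9 * ct + np.array([40.0, 60.0, 80.0])[:, None, None]
        + 25.0 * rng.standard_normal(ct.shape))
fixed = patient_specific_calibration(ct, cbct)
print(np.round([np.mean(np.abs(fixed[z] - ct[z])) for z in range(3)], 1))
```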

  10. Quantifying and Reducing Curve-Fitting Uncertainty in Isc

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    2015-06-14

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
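
    The baseline fit that the abstract starts from, an ordinary straight-line regression of the I-V points near short circuit with the intercept taken as Isc and its standard error as the fit uncertainty, can be written in a few lines. The sketch below does this for a synthetic I-V sweep; it deliberately reproduces only the naive uncertainty, not the Bayesian window selection or model-discrepancy analysis the paper develops, and the 10% window is an arbitrary choice.

```python
import numpy as np

def isc_from_linear_fit(v, i, window=0.1):
    """Straight-line fit of the I-V points with V <= window*Voc to estimate Isc.

    Returns (isc, isc_std): the current-axis intercept and its standard
    uncertainty from ordinary least-squares statistics.
    """
    voc = v.max()
    mask = v <= window * voc
    coeffs, cov = np.polyfit(v[mask], i[mask], 1, cov=True)
    isc = np.polyval(coeffs, 0.0)
    isc_std = np.sqrt(cov[1, 1])          # variance of the intercept term
    return isc, isc_std

# Hypothetical I-V sweep of a PV device with measurement noise
v = np.linspace(0.0, 0.65, 66)
i_true = 5.0 * (1.0 - (np.exp(v / 0.032) - 1.0) / (np.exp(0.65 / 0.032) - 1.0))
rng = np.random.default_rng(3)
i_meas = i_true + 0.005 * rng.standard_normal(v.size)

isc, u = isc_from_linear_fit(v, i_meas)
print(f"Isc = {isc:.3f} A +/- {u:.3f} A (fit window = 10% of Voc)")
```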

  11. Calibration of the Concorde radiation detection instrument and measurements at SST altitude.

    DOT National Transportation Integrated Search

    1971-06-01

    Performance tests were carried out on a solar cosmic radiation detection instrument developed for the Concorde SST. The instrument calibration curve (log dose-rate vs instrument reading) was reasonably linear from 0.004 to 1 rem/hr for both gamma rad...

  12. Apparatus for in-situ calibration of instruments that measure fluid depth

    DOEpatents

    Campbell, M.D.

    1994-01-11

    The present invention provides a method and apparatus for in-situ calibration of distance measuring equipment. The method comprises obtaining a first distance measurement in a first location, then obtaining at least one other distance measurement in at least one other location of a precisely known distance from the first location, and calculating a calibration constant. The method is applied specifically to calculating a calibration constant for obtaining fluid level and embodied in an apparatus using a pressure transducer and a spacer of precisely known length. The calibration constant is used to calculate the depth of a fluid from subsequent single pressure measurements at any submerged position. 8 figures.

  13. Apparatus for in-situ calibration of instruments that measure fluid depth

    DOEpatents

    Campbell, Melvin D.

    1994-01-01

    The present invention provides a method and apparatus for in-situ calibration of distance measuring equipment. The method comprises obtaining a first distance measurement in a first location, then obtaining at least one other distance measurement in at least one other location of a precisely known distance from the first location, and calculating a calibration constant. The method is applied specifically to calculating a calibration constant for obtaining fluid level and embodied in an apparatus using a pressure transducer and a spacer of precisely known length. The calibration constant is used to calculate the depth of a fluid from subsequent single pressure measurements at any submerged position.
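
    The in-situ calibration described in both patent records reduces to simple arithmetic: two readings separated by a spacer of precisely known length fix the depth-per-reading constant, which then converts any later single reading into a fluid depth. The numbers in the sketch below are invented for illustration.

```python
def calibration_constant(p1, p2, spacer_length):
    """Depth per unit transducer reading, from two readings taken at positions
    separated by a spacer of precisely known length."""
    return spacer_length / (p2 - p1)

def depth_from_reading(p, p_surface, k):
    """Depth of the transducer below the fluid surface from a single reading."""
    return k * (p - p_surface)

# Hypothetical readings: lowering the transducer by a 0.500 m spacer changes
# the reading from 142.0 to 191.0 counts; the surface reading is 38.0 counts.
k = calibration_constant(142.0, 191.0, 0.500)     # metres per count
print(f"k = {k:.5f} m/count, depth = {depth_from_reading(250.0, 38.0, k):.2f} m")
```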

  14. VizieR Online Data Catalog: SN 2007on and SN 2011iv light curves (Gall+, 2018)

    NASA Astrophysics Data System (ADS)

    Gall, C.; Stritzinger, M. D.; Ashall, C.; Baron, E.; Burns, C. R.; Hoeflich, P.; Hsiao, E. Y.; Mazzali, P. A.; Phillips, M. M.; Filippenko, A. V.; Anderson, J. P.; Benetti, S.; Brown, P. J.; Campillay, A.; Challis, P.; Contreras, C.; Elias de La Rosa, N.; Folatelli, G.; Foley, R. J.; Fraser, M.; Holmbo, S.; Marion, G. H.; Morrell, N.; Pan, Y.-C.; Pignata, G.; Suntzeff, N. B.; Taddia, F.; Torres Robledo, S.; Valenti, S.

    2017-11-01

    Detailed optical and NIR light curves of SN 2007on obtained by the first phase of the Carnegie Supernova Project (CSP-I, 2004-2009; Hamuy et al., 2006PASP..118....2H) were published by Stritzinger et al. (2011, Cat. J/AJ/142/156). UV uvw2-, uvm2-, and uvw1-band imaging of both SN 2007on and SN 2011iv were obtained with Swift (+ UVOT). Photometry of SN 2007on and SN 2011iv was computed following the method described in detail by Brown et al. (2014Ap&SS.354...89B), who use the calibration published by Breeveld et al. (2011, AIPCS, 1358, 373). The Swift UVOT images and photometry are also available as part of the Swift Optical Ultraviolet Supernova Archive (SOUSA; Brown et al. 2014Ap&SS.354...89B). Optical ugriBV-band imaging of SN 2007on and SN 2011iv was obtained with the Henrietta Swope 1.0m telescope (+ SITe3 direct CCD camera) located at the Las Campanas Observatory (LCO). The NIR YJH-band imaging of SN 2007on was obtained with the Swope (+ RetroCam) and the Irenee du Pont 2.5m (+ WIRC: Wide Field Infrared Camera) telescopes (Stritzinger et al., Cat. J/AJ/142/156), while in the case of SN 2011iv all NIR YJH-band imaging was taken with RetroCam attached to the Irenee du Pont telescope. The optical local sequence is calibrated relative to Landolt (1992AJ....104..372L) (BV) and Smith et al. (2002AJ....123.2121S) (ugri) standard-star fields observed over multiple photometric nights. The NIR J-band and H-band local sequences were calibrated relative to the Persson et al. (1998AJ....116.2475P) standard stars, while the Y-band local sequence was calibrated relative to standard Y-band magnitudes computed using a combination of stellar atmosphere models (Castelli & Kurucz, 2003, IAUSymp, 210, A20) with the J-Ks colours of the Persson et al. standard-star catalogue (Hamuy et al., 2006PASP..118....2H). (5 data files).

  15. Air data position-error calibration using state reconstruction techniques

    NASA Technical Reports Server (NTRS)

    Whitmore, S. A.; Larson, T. J.; Ehernberger, L. J.

    1984-01-01

    During the highly maneuverable aircraft technology (HiMAT) flight test program recently completed at NASA Ames Research Center's Dryden Flight Research Facility, numerous problems were experienced in airspeed calibration. This necessitated the use of state reconstruction techniques to arrive at a position-error calibration. For the HiMAT aircraft, most of the calibration effort was expended on flights in which the air data pressure transducers were not performing accurately. Following discovery of this problem, the air data transducers of both aircraft were wrapped in heater blankets to correct the problem. Additional calibration flights were performed, and from the resulting data a satisfactory position-error calibration was obtained. This calibration and data obtained before installation of the heater blankets were used to develop an alternate calibration method. The alternate approach took advantage of high-quality inertial data that was readily available. A linearized Kalman filter (LKF) was used to reconstruct the aircraft's wind-relative trajectory; the trajectory was then used to separate transducer measurement errors from the aircraft position error. This calibration method is accurate and inexpensive. The LKF technique has an inherent advantage of requiring that no flight maneuvers be specially designed for airspeed calibrations. It is of particular use when the measurements of the wind-relative quantities are suspected to have transducer-related errors.

  16. Calibration Errors in Interferometric Radio Polarimetry

    NASA Astrophysics Data System (ADS)

    Hales, Christopher A.

    2017-08-01

    Residual calibration errors are difficult to predict in interferometric radio polarimetry because they depend on the observational calibration strategy employed, encompassing the Stokes vector of the calibrator and parallactic angle coverage. This work presents analytic derivations and simulations that enable examination of residual on-axis instrumental leakage and position-angle errors for a suite of calibration strategies. The focus is on arrays comprising alt-azimuth antennas with common feeds over which parallactic angle is approximately uniform. The results indicate that calibration schemes requiring parallactic angle coverage in the linear feed basis (e.g., the Atacama Large Millimeter/submillimeter Array) need only observe over 30°, beyond which no significant improvements in calibration accuracy are obtained. In the circular feed basis (e.g., the Very Large Array above 1 GHz), 30° is also appropriate when the Stokes vector of the leakage calibrator is known a priori, but this rises to 90° when the Stokes vector is unknown. These findings illustrate and quantify concepts that were previously obscure rules of thumb.

  17. Results from Source-Based and Detector-Based Calibrations of a CLARREO Calibration Demonstration System

    NASA Technical Reports Server (NTRS)

    Angal, Amit; Mccorkel, Joel; Thome, Kurt

    2016-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission is formulated to determine long-term climate trends using SI-traceable measurements. The CLARREO mission will include instruments operating in the reflected solar (RS) wavelength region from 320 nm to 2300 nm. The Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS) is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO and facilitates testing and evaluation of calibration approaches. The basis of CLARREO and SOLARIS calibration is the Goddard Laser for Absolute Measurement of Response (GLAMR), which provides a radiance-based calibration at reflective solar wavelengths using continuously tunable lasers. SI-traceability is achieved via detector-based standards that, in GLAMR's case, are a set of NIST-calibrated transfer radiometers. A portable version of SOLARIS, Suitcase SOLARIS, is used to evaluate GLAMR's calibration accuracies. The calibration of Suitcase SOLARIS using GLAMR agrees with that obtained from source-based results of the Remote Sensing Group (RSG) at the University of Arizona to better than 5 (k = 2) in the 720-860 nm spectral range. The differences are within the uncertainties of the NIST-calibrated FEL lamp-based approach of RSG and give confidence that GLAMR is operating at 5 (k = 2) absolute uncertainties. Limitations of the Suitcase SOLARIS instrument are also discussed, and the next edition of the SOLARIS instrument (Suitcase SOLARIS-2) is expected to provide an improved mechanism to further assess GLAMR and CLARREO calibration approaches.

  18. Multivariate curve resolution-assisted determination of pseudoephedrine and methamphetamine by HPLC-DAD in water samples.

    PubMed

    Vosough, Maryam; Mohamedian, Hadi; Salemi, Amir; Baheri, Tahmineh

    2015-02-01

    In the present study, a simple strategy based on solid-phase extraction (SPE) with a cation exchange sorbent (Finisterre SCX) followed by fast high-performance liquid chromatography (HPLC) with diode array detection coupled with chemometrics tools has been proposed for the determination of methamphetamine and pseudoephedrine in ground water and river water. At first, the HPLC and SPE conditions were optimized and the analytical performance of the method was determined. In the case of ground water, determination of the analytes was successfully performed through univariate calibration curves. For the river water sample, multivariate curve resolution-alternating least squares was implemented and the second-order advantage was achieved in samples containing uncalibrated interferences and uncorrected background signals. The calibration curves showed good linearity (r2 > 0.994). The limits of detection for pseudoephedrine and methamphetamine were 0.06 and 0.08 μg/L and the average recovery values were 104.7 and 102.3% in river water, respectively. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
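    The univariate calibration step mentioned above can be illustrated with a short sketch (all numbers are invented, not taken from the paper): a straight-line calibration is fitted to spiked standards and a limit of detection is estimated from the residual scatter of the fit. The 3.3·s/slope convention used here is one common choice and is only an assumption about how such an LOD might be computed.

```python
# Illustrative univariate calibration curve with an LOD estimate (made-up data).
import numpy as np

conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])     # spiked standards, ug/L (assumed)
area = np.array([1.1, 2.0, 4.1, 10.3, 20.6, 41.0])    # HPLC-DAD peak areas (assumed)

slope, intercept = np.polyfit(conc, area, 1)
pred = slope * conc + intercept
residual_sd = np.sqrt(np.sum((area - pred) ** 2) / (conc.size - 2))
r2 = 1.0 - np.sum((area - pred) ** 2) / np.sum((area - area.mean()) ** 2)
lod = 3.3 * residual_sd / slope                        # one common LOD convention

print(f"slope={slope:.3f}, r2={r2:.4f}, LOD ~ {lod:.2f} ug/L")
```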

  19. An innovative method for coordinate measuring machine one-dimensional self-calibration with simplified experimental process.

    PubMed

    Fang, Cheng; Butler, David Lee

    2013-05-01

    In this paper, an innovative method for CMM (Coordinate Measuring Machine) self-calibration is proposed. In contrast to conventional CMM calibration that relies heavily on a high precision reference standard such as a laser interferometer, the proposed calibration method is based on a low-cost artefact which is fabricated with commercially available precision ball bearings. By optimizing the mathematical model and rearranging the data sampling positions, the experimental process and data analysis can be simplified. In mathematical terms, the number of samples can be minimized by eliminating the redundant equations among those configured from the experimental data array. The section lengths of the artefact are measured at arranged positions, with which an equation set can be configured to determine the measurement errors at the corresponding positions. With the proposed method, the equation set is short of one equation, which can be supplemented by either measuring the total length of the artefact with a higher-precision CMM or calibrating the single point error at the extreme position with a laser interferometer. In this paper, the latter is selected. With spline interpolation, the error compensation curve can be determined. To verify the proposed method, a simple calibration system was set up on a commercial CMM. Experimental results showed that, with the error compensation curve, the measurement uncertainty can be reduced to 50% of its original value.
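    A heavily simplified sketch of the self-calibration idea is given below; it is not the paper's actual equation set. Position errors are assumed to enter section measurements only through the difference of endpoint errors, the chain of equations is closed with a single externally calibrated point (standing in for the laser-interferometer measurement at the extreme position), and a spline through the solved errors supplies the compensation curve. All numbers are invented.

```python
# Simplified illustration of one-dimensional self-calibration plus spline compensation.
import numpy as np
from scipy.interpolate import CubicSpline

positions = np.linspace(0.0, 500.0, 6)                        # mm, sampling positions (assumed)
true_err = 1e-3 * np.array([0.0, 1.5, 2.4, 2.0, 0.8, -0.5])   # mm, unknown in a real calibration

# Each section "measurement" picks up the difference of the endpoint errors plus noise.
section_diffs = np.diff(true_err) + np.random.normal(0.0, 1e-4, 5)

# Taking the first position as the zero-error reference, cumulative sums recover the rest.
solved = np.concatenate(([0.0], np.cumsum(section_diffs)))

# The missing equation: a single externally calibrated value pins the extreme position.
solved[-1] = true_err[-1]

compensation = CubicSpline(positions, solved)                 # error compensation curve
print("interpolated error at 320 mm ~", compensation(320.0), "mm")
```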

  20. [Development of an experimental apparatus for energy calibration of a CdTe detector by means of diagnostic X-ray equipment].

    PubMed

    Fukuda, Ikuma; Hayashi, Hiroaki; Takegami, Kazuki; Konishi, Yuki

    2013-09-01

    Diagnostic X-ray equipment was used to develop an experimental apparatus for calibrating a CdTe detector. Powder-type samples were irradiated with collimated X-rays. On excitation of the atoms, characteristic X-rays were emitted. We prepared Nb2O5, SnO2, La2O3, Gd2O3, and WO3 metal oxide samples. Experiments using the diagnostic X-ray equipment were carried out to verify the practicality of our apparatus. First, we verified that the collimators incorporated in the apparatus worked well. Second, the X-ray spectra were measured using the prepared samples. Finally, we analyzed the spectra, which indicated that the energy calibration curve had been obtained with an accuracy of ±0.06 keV. The developed apparatus could be used conveniently, suggesting that it is useful for the practical training of beginners and researchers.
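    The energy calibration described above amounts to a straight-line fit of peak position against known characteristic X-ray energies. The sketch below uses approximate Kα1 literature energies for the five elements named in the abstract; the peak channel numbers are invented, and the linear detector response is an assumption.

```python
# Sketch of the energy-calibration step: channel-vs-energy straight-line fit.
import numpy as np

# element:              Nb     Sn     La     Gd     W
energy_keV = np.array([16.6, 25.3, 33.4, 43.0, 59.3])          # approx. Kα1 energies
peak_channel = np.array([208.0, 317.0, 418.0, 538.0, 742.0])   # assumed peak-fit results

gain, offset = np.polyfit(peak_channel, energy_keV, 1)
print(f"E[keV] ~ {gain:.4f} * channel + {offset:.3f}")
```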

  1. Automatic exposure control calibration and optimisation for abdomen, pelvis and lumbar spine imaging with an Agfa computed radiography system.

    PubMed

    Moore, C S; Wood, T J; Avery, G; Balcam, S; Needler, L; Joshi, H; Saunderson, J R; Beavis, A W

    2016-11-07

    The use of three physical image quality metrics, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and mean effective noise equivalent quanta (eNEQm) have recently been examined by our group for their appropriateness in the calibration of an automatic exposure control (AEC) device for chest radiography with an Agfa computed radiography (CR) imaging system. This study uses the same methodology but investigates AEC calibration for abdomen, pelvis and spine CR imaging. AEC calibration curves were derived using a simple uniform phantom (equivalent to 20 cm water) to ensure each metric was held constant across the tube voltage range. Each curve was assessed for its clinical appropriateness by generating computer simulated abdomen, pelvis and spine images (created from real patient CT datasets) with appropriate detector air kermas for each tube voltage, and grading these against reference images which were reconstructed at detector air kermas correct for the constant detector dose indicator (DDI) curve currently programmed into the AEC device. All simulated images contained clinically realistic projected anatomy and were scored by experienced image evaluators. Constant DDI and CNR curves did not provide optimized performance but constant eNEQm and SNR did, with the latter being the preferred calibration metric given that it is easier to measure in practice. This result was consistent with the previous investigation for chest imaging with AEC devices. Medical physicists may therefore use a simple and easily accessible uniform water equivalent phantom to measure the SNR image quality metric described here when calibrating AEC devices for abdomen, pelvis and spine imaging with Agfa CR systems, in the confidence that clinical image quality will be sufficient for the required clinical task. However, to ensure appropriate levels of detector air kerma the advice of expert image evaluators must be sought.

  2. Automatic exposure control calibration and optimisation for abdomen, pelvis and lumbar spine imaging with an Agfa computed radiography system

    NASA Astrophysics Data System (ADS)

    Moore, C. S.; Wood, T. J.; Avery, G.; Balcam, S.; Needler, L.; Joshi, H.; Saunderson, J. R.; Beavis, A. W.

    2016-11-01

    The use of three physical image quality metrics, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and mean effective noise equivalent quanta (eNEQm) have recently been examined by our group for their appropriateness in the calibration of an automatic exposure control (AEC) device for chest radiography with an Agfa computed radiography (CR) imaging system. This study uses the same methodology but investigates AEC calibration for abdomen, pelvis and spine CR imaging. AEC calibration curves were derived using a simple uniform phantom (equivalent to 20 cm water) to ensure each metric was held constant across the tube voltage range. Each curve was assessed for its clinical appropriateness by generating computer simulated abdomen, pelvis and spine images (created from real patient CT datasets) with appropriate detector air kermas for each tube voltage, and grading these against reference images which were reconstructed at detector air kermas correct for the constant detector dose indicator (DDI) curve currently programmed into the AEC device. All simulated images contained clinically realistic projected anatomy and were scored by experienced image evaluators. Constant DDI and CNR curves did not provide optimized performance but constant eNEQm and SNR did, with the latter being the preferred calibration metric given that it is easier to measure in practice. This result was consistent with the previous investigation for chest imaging with AEC devices. Medical physicists may therefore use a simple and easily accessible uniform water equivalent phantom to measure the SNR image quality metric described here when calibrating AEC devices for abdomen, pelvis and spine imaging with Agfa CR systems, in the confidence that clinical image quality will be sufficient for the required clinical task. However, to ensure appropriate levels of detector air kerma the advice of expert image evaluators must be sought.

  3. Assessing and calibrating the ATR-FTIR approach as a carbonate rock characterization tool

    NASA Astrophysics Data System (ADS)

    Henry, Delano G.; Watson, Jonathan S.; John, Cédric M.

    2017-01-01

    ATR-FTIR (attenuated total reflectance Fourier transform infrared) spectroscopy can be used as a rapid and economical tool for qualitative identification of carbonates, calcium sulphates, oxides and silicates, as well as for quantitatively estimating the concentration of minerals. Over 200 powdered samples with known concentrations of two, three, four and five phase mixtures were made, and a suite of calibration curves was then derived that can be used to quantify the minerals. The calibration curves in this study have R2 values that range from 0.93 to 0.99, an RMSE (root mean square error) of 1-5 wt.% and a maximum error of 3-10 wt.%. The calibration curves were used on 35 geological samples that have previously been studied using XRD (X-ray diffraction). The identification of the minerals using ATR-FTIR is comparable with XRD and the quantitative results have an RMSD (root mean square deviation) of 14% and 12% for calcite and dolomite respectively when compared to XRD results. ATR-FTIR is a rapid technique (identification and quantification take < 5 min) that involves virtually no cost if the machine is available. It is a common tool in most analytical laboratories, but it also has the potential to be deployed on a rig for real-time data acquisition of the mineralogy of cores and rock chips at the surface, since it requires no special sample preparation and offers rapid data collection and easy analysis.

  4. On the calibration process of film dosimetry: OLS inverse regression versus WLS inverse prediction.

    PubMed

    Crop, F; Van Rompaye, B; Paelinck, L; Vakaet, L; Thierens, H; De Wagter, C

    2008-07-21

    The purpose of this study was both to put forward a statistically correct model for film calibration and to optimize this process. A reliable calibration is needed in order to perform accurate reference dosimetry with radiochromic (Gafchromic) film. Sometimes, an ordinary least squares simple linear (in the parameters) regression is applied to the dose-optical-density (OD) curve with the dose as a function of OD (inverse regression) or sometimes OD as a function of dose (inverse prediction). The application of a simple linear regression fit is an invalid method because heteroscedasticity of the data is not taken into account. This could lead to erroneous results originating from the calibration process itself and thus to a lower accuracy. In this work, we compare the ordinary least squares (OLS) inverse regression method with the correct weighted least squares (WLS) inverse prediction method to create calibration curves. We found that the OLS inverse regression method could lead to a prediction bias of up to 7.3 cGy at 300 cGy and total prediction errors of 3% or more for Gafchromic EBT film. Application of the WLS inverse prediction method resulted in a maximum prediction bias of 1.4 cGy and total prediction errors below 2% in a 0-400 cGy range. We developed a Monte-Carlo-based process to optimize calibrations, depending on the needs of the experiment. This type of thorough analysis can lead to a higher accuracy for film dosimetry.
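    The contrast between the two fitting strategies can be sketched in a few lines (synthetic data and an assumed noise model, not the paper's films or its Monte Carlo optimization): the unweighted inverse regression fits dose directly on net optical density, while the weighted inverse prediction fits the response as a function of dose with 1/sigma weights and then inverts the fitted curve.

```python
# Conceptual sketch: OLS inverse regression vs. WLS inverse prediction (synthetic data).
import numpy as np

dose = np.array([0, 50, 100, 150, 200, 250, 300, 350, 400], float)        # cGy
sigma = 0.002 * (1 + dose / 200.0)                                        # assumed heteroscedastic OD noise
net_od = 0.002 * dose + np.random.normal(0.0, sigma)

# OLS "inverse regression": dose as a function of OD, equal weights.
ols = np.polyfit(net_od, dose, 1)

# WLS "inverse prediction": OD as a function of dose with 1/sigma weights, then invert.
wls = np.polyfit(dose, net_od, 1, w=1.0 / sigma)
def dose_from_od(od):
    return (od - wls[1]) / wls[0]

print("OLS dose at OD=0.4 :", np.polyval(ols, 0.4), "cGy")
print("WLS dose at OD=0.4 :", dose_from_od(0.4), "cGy")
```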

  5. An investigation of automatic exposure control calibration for chest imaging with a computed radiography system.

    PubMed

    Moore, C S; Wood, T J; Avery, G; Balcam, S; Needler, L; Beavis, A W; Saunderson, J R

    2014-05-07

    The purpose of this study was to examine the use of three physical image quality metrics in the calibration of an automatic exposure control (AEC) device for chest radiography with a computed radiography (CR) imaging system. The metrics assessed were signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and mean effective noise equivalent quanta (eNEQm), all measured using a uniform chest phantom. Subsequent calibration curves were derived to ensure each metric was held constant across the tube voltage range. Each curve was assessed for its clinical appropriateness by generating computer simulated chest images with correct detector air kermas for each tube voltage, and grading these against reference images which were reconstructed at detector air kermas correct for the constant detector dose indicator (DDI) curve currently programmed into the AEC device. All simulated chest images contained clinically realistic projected anatomy and anatomical noise and were scored by experienced image evaluators. Constant DDI and CNR curves do not appear to provide optimized performance across the diagnostic energy range. Conversely, constant eNEQm and SNR do appear to provide optimized performance, with the latter being the preferred calibration metric given that it is easier to measure in practice. Medical physicists may use the SNR image quality metric described here when setting up and optimizing AEC devices for chest radiography CR systems with a degree of confidence that resulting clinical image quality will be adequate for the required clinical task. However, this must be done with close cooperation of expert image evaluators, to ensure appropriate levels of detector air kerma.

  6. An investigation of automatic exposure control calibration for chest imaging with a computed radiography system

    NASA Astrophysics Data System (ADS)

    Moore, C. S.; Wood, T. J.; Avery, G.; Balcam, S.; Needler, L.; Beavis, A. W.; Saunderson, J. R.

    2014-05-01

    The purpose of this study was to examine the use of three physical image quality metrics in the calibration of an automatic exposure control (AEC) device for chest radiography with a computed radiography (CR) imaging system. The metrics assessed were signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and mean effective noise equivalent quanta (eNEQm), all measured using a uniform chest phantom. Subsequent calibration curves were derived to ensure each metric was held constant across the tube voltage range. Each curve was assessed for its clinical appropriateness by generating computer simulated chest images with correct detector air kermas for each tube voltage, and grading these against reference images which were reconstructed at detector air kermas correct for the constant detector dose indicator (DDI) curve currently programmed into the AEC device. All simulated chest images contained clinically realistic projected anatomy and anatomical noise and were scored by experienced image evaluators. Constant DDI and CNR curves do not appear to provide optimized performance across the diagnostic energy range. Conversely, constant eNEQm and SNR do appear to provide optimized performance, with the latter being the preferred calibration metric given that it is easier to measure in practice. Medical physicists may use the SNR image quality metric described here when setting up and optimizing AEC devices for chest radiography CR systems with a degree of confidence that resulting clinical image quality will be adequate for the required clinical task. However, this must be done with close cooperation of expert image evaluators, to ensure appropriate levels of detector air kerma.

  7. Integrated calibration sphere and calibration step fixture for improved coordinate measurement machine calibration

    DOEpatents

    Clifford, Harry J [Los Alamos, NM

    2011-03-22

    A method and apparatus for mounting a calibration sphere to a calibration fixture for Coordinate Measurement Machine (CMM) calibration and qualification is described, decreasing the time required for such qualification, thus allowing the CMM to be used more productively. A number of embodiments are disclosed that allow for new and retrofit manufacture to perform as integrated calibration sphere and calibration fixture devices. This invention renders unnecessary the removal of a calibration sphere prior to CMM measurement of calibration features on calibration fixtures, thereby greatly reducing the time spent qualifying a CMM.

  8. Historical Cost Curves for Hydrogen Masers and Cesium Beam Frequency and Timing Standards

    NASA Technical Reports Server (NTRS)

    Remer, D. S.; Moore, R. C.

    1985-01-01

    Historical cost curves were developed for hydrogen masers and cesium beam standards used for frequency and timing calibration in the Deep Space Network. These curves may be used to calculate the cost of future hydrogen masers or cesium beam standards in either future or current dollars. Cesium beam standards have been decreasing in cost by about 2.3% per year since 1966, and hydrogen masers by about 0.8% per year since 1978, relative to the National Aeronautics and Space Administration inflation index.

  9. Calibration of thin-foil manganin gauge in ALOX material

    NASA Astrophysics Data System (ADS)

    Benham, R. A.; Weirick, L. J.; Lee, L. M.

    1996-05-01

    The purpose of this program was to develop a calibration curve (stress as a function of change in gauge resistance/gauge resistance) and to obtain gauge repeatability data for Micro-Measurements stripped manganin thin-foiled gauges up to 6.1 GPa in ALOX (42% by volume alumina in Epon 828 epoxy) material. A light-gas gun was used to drive an ALOX impactor into the ALOX target containing four gauges in a centered diamond arrangement. Tilt and velocity of the impactor were measured along with the gauge outputs. Impact stresses from 0.5 to 6.1 GPa were selected in increments of 0.7 GPa with duplicate tests done at 0.5, 3.3 and 6.1 GPa. A total of twelve tests was conducted using ALOX. Three initial tests were done using polymethyl methacrylate (PMMA) as the impactor and target at an impact pressure of 3.0 GPa for comparison of gauge output with analysis and literature values. The installed gauge, stripped of its backing, has a nominal thickness of 5 μm. The thin gauge and high speed instrumentation allowed higher time resolution measurements than can be obtained with manganin wire.

  10. Sensor calibration of polymeric Hopkinson bars for dynamic testing of soft materials

    NASA Astrophysics Data System (ADS)

    Martarelli, Milena; Mancini, Edoardo; Lonzi, Barbara; Sasso, Marco

    2018-02-01

    Split Hopkinson pressure bar (SHPB) testing is one of the most common techniques for the estimation of the constitutive behaviour of metallic materials. In this paper, the characterisation of soft rubber-like materials has been addressed by means of polymeric bars, thanks to their reduced mechanical impedance. Due to their visco-elastic nature, polymeric bars are more sensitive to temperature changes than metallic bars, and due to their low conductance, the strain gauges used to measure the propagating wave in an SHPB may be exposed to significant heating. Consequently, a calibration procedure has been proposed to estimate quantitatively the temperature influence on strain gauge output. Furthermore, the calibration is used to determine the elastic modulus of the polymeric bars, which is an important parameter for the synchronisation of the propagation waves measured at the input and output bar strain gauge stations, and for the correct determination of stress and strain evolution within the specimen. An example of the application has been reported in order to demonstrate the effectiveness of the technique. Different tests at different strain rates have been carried out on samples made of nitrile butadiene rubber (NBR) from the same injection moulding batch. Thanks to the correct synchronisation of the propagation waves measured by the strain gauges and the application of the calibrated coefficients, the mechanical behaviour of the NBR material is obtained in terms of strain-rate-strain and stress-strain engineering curves.

  11. The advantages of absorbed-dose calibration factors.

    PubMed

    Rogers, D W

    1992-01-01

    A formalism for clinical external beam dosimetry based on use of ion chamber absorbed-dose calibration factors is outlined in the context and notation of the AAPM TG-21 protocol. It is shown that basing clinical dosimetry on absorbed-dose calibration factors ND leads to considerable simplification and reduced uncertainty in dose measurement. In keeping with a protocol which is used in Germany, a quantity kQ is defined which relates an absorbed-dose calibration factor in a beam of quality Q0 to that in a beam of quality Q. For 38 cylindrical ion chambers, two sets of values are presented for ND/NX and Ngas/ND and for kQ for photon beams with beam quality specified by the TPR20(10) ratio. One set is based on TG-21's protocol to allow the new formalism to be used while maintaining equivalence to the TG-21 protocol. To demonstrate the magnitude of the overall error in the TG-21 protocol, the other set uses corrected versions of the TG-21 equations and the more consistent physical data of the IAEA Code of Practice. Comparisons are made to procedures based on air-kerma or exposure calibration factors and it is shown that accuracy and simplicity are gained by avoiding the determination of Ngas from NX. It is also shown that the kQ approach simplifies the use of plastic phantoms in photon beams since kQ values change by less than 0.6% compared to those in water although an overall correction factor of 0.973 is needed to go from absorbed dose in water calibration factors to those in PMMA or polystyrene. Values of kQ calculated using the IAEA Code of Practice are presented but are shown to be anomalous because of the way the effective point of measurement changes for 60Co beams. In photon beams the major difference between the IAEA Code of Practice and the corrected AAPM TG-21 protocol is shown to be the Prepl correction factor. Calculated kQ curves and three parameter equations for them are presented for each wall material and are shown to represent accurately the kQ curve
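    As a compact illustration of the formalism summarized above (notation simplified; this is a sketch, not a transcription of the protocol's equations), the dose to water in a beam of quality Q follows from the corrected chamber reading M, the absorbed-dose calibration factor obtained in the reference quality Q0, and the beam-quality conversion factor kQ:

```latex
% Sketch only: simplified notation for the kQ formalism described in the abstract.
\[
  D_w^{Q} \;=\; M \, N_{D}^{Q_0} \, k_Q ,
  \qquad
  k_Q \;=\; \frac{N_{D}^{Q}}{N_{D}^{Q_0}} .
\]
```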

  12. Calibrations of the LHD Thomson scattering system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamada, I., E-mail: yamadai@nifs.ac.jp; Funaba, H.; Yasuhara, R.

    2016-11-15

    The Thomson scattering diagnostic systems are widely used for the measurements of absolute local electron temperatures and densities of fusion plasmas. In order to obtain accurate and reliable temperature and density data, careful calibrations of the system are required. We have tried several calibration methods since the second LHD experiment campaign in 1998. We summarize the current status of the calibration methods for the electron temperature and density measurements by the LHD Thomson scattering diagnostic system. Future plans are briefly discussed.

  13. Calibrations of the LHD Thomson scattering system.

    PubMed

    Yamada, I; Funaba, H; Yasuhara, R; Hayashi, H; Kenmochi, N; Minami, T; Yoshikawa, M; Ohta, K; Lee, J H; Lee, S H

    2016-11-01

    The Thomson scattering diagnostic systems are widely used for the measurements of absolute local electron temperatures and densities of fusion plasmas. In order to obtain accurate and reliable temperature and density data, careful calibrations of the system are required. We have tried several calibration methods since the second LHD experiment campaign in 1998. We summarize the current status of the calibration methods for the electron temperature and density measurements by the LHD Thomson scattering diagnostic system. Future plans are briefly discussed.

  14. SU-E-T-96: Energy Dependence of the New GafChromic- EBT3 Film's Dose Response-Curve.

    PubMed

    Chiu-Tsao, S; Massillon-Jl, G; Domingo-Muñoz, I; Chan, M

    2012-06-01

    To study and compare the dose response curves of the new GafChromic EBT3 film for megavoltage and kilovoltage x-ray beams, with different spatial resolution. Two sets of EBT3 films (lot#A101711-02) were exposed to each x-ray beam (6MV, 15MV and 50kV) at 8 dose values (50-3200cGy). The megavoltage beams were calibrated per the AAPM TG-51 protocol while the kilovoltage beam was calibrated following TG-61 using an ionization chamber calibrated at NIST. Each film piece was scanned three consecutive times in the center of an Epson 10000XL flatbed scanner in transmission mode, landscape orientation, 48-bit color, at two separate spatial resolutions of 75 and 300 dpi. The data were analyzed using ImageJ and, for each scanned image, a region of interest (ROI) of 2×2 cm² at the field center was selected to obtain the mean pixel value with its standard deviation in the ROI. For each energy, dose value and spatial resolution, the average netOD and its associated uncertainty were determined. The Student's t-test was performed to evaluate the statistical differences between the netOD/dose values of the three energy modalities, with different color channels and spatial resolutions. The dose response curves for the three energy modalities were compared in three color channels at 75 and 300 dpi. Weak energy dependence was found. For doses above 100cGy, no statistical differences were observed between 6 and 15MV beams, regardless of spatial resolution. However, statistical differences were observed between 50kV and the megavoltage beams. The degree of energy dependence (from MV to 50kV) was found to be a function of color channel, dose level and spatial resolution. The dose response curves for GafChromic EBT3 films were found to be weakly dependent on the energy of the photon beams from 6MV to 50kV. The degree of energy dependence varies with color channel, dose and spatial resolution. GafChromic EBT3 films were supplied by Ashland Corp. This work was partially supported by DGAPA

  15. EXPERIMENTAL MEASUREMENT AND INTERPRETATION OF VOLT-AMPERE CURVES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gingrich, J.E.; Warner, C.; Weeks, C.C.

    1962-07-01

    Cylindrical and parallel-plane cesium vapor thermionic converters were used for obtaining volt-ampere curves for systematic variations of emitter, collector, and cesium reservoir temperatures, with electrode spacings ranging from a few to many mean free paths, and with space charge conditions varying from electron-rich to ion-rich. The resulting curves exhibit much variety. The saturation currents agree well with the data of Houston and Aamodt for the space charge neutralized, few-mean-free-path cases. "Apparent" saturation currents for space charge limited cases were observed and were always less than the currents predicted by Houston and Aamodt. Several discontinuities in slope were observed in the reverse current portion of the curves and these have tentatively been identified with volume ionization of atoms in both the ground and excited states. Similar processes may be important for obtaining the ignited mode. The methods used to measure static and dynamic volt-ampere curves are described. The use of a controlled-current load has yielded a "negative resistance" region in the curves which show the ignited mode. The curves obtained with poor current control do not show this phenomenon. Extinction is considered from the standpoint of Kaufmann's criterion for stability. (auth)

  16. Calibration of the COBE FIRAS instrument

    NASA Technical Reports Server (NTRS)

    Fixsen, D. J.; Cheng, E. S.; Cottingham, D. A.; Eplee, R. E., Jr.; Hewagama, T.; Isaacman, R. B.; Jensen, K. A.; Mather, J. C.; Massa, D. L.; Meyer, S. S.

    1994-01-01

    The Far-Infrared Absolute Spectrophotometer (FIRAS) instrument on the Cosmic Background Explorer (COBE) satellite was designed to accurately measure the spectrum of the cosmic microwave background radiation (CMBR) in the frequency range 1-95/cm with an angular resolution of 7 deg. We describe the calibration of this instrument, including the method of obtaining calibration data, reduction of data, the instrument model, fitting the model to the calibration data, and application of the resulting model solution to sky observations. The instrument model fits well for calibration data that resemble sky conditions. The method of propagating detector noise through the calibration process to yield a covariance matrix of the calibrated sky data is described. The final uncertainties are variable both in frequency and position, but for a typical calibrated sky 2.6 deg square pixel and 0.7/cm spectral element the random detector noise limit is of the order of a few times 10^-7 ergs/sq cm/s/sr cm for 2-20/cm, and the difference between the sky and the best-fit cosmic blackbody can be measured with a gain uncertainty of less than 3%.

  17. Determination of calibration parameters of a VRX CT system using an “Amoeba” algorithm

    PubMed Central

    Jordan, Lawrence M.; DiBianca, Frank A.; Melnyk, Roman; Choudhary, Apoorva; Shukla, Hemant; Laughter, Joseph; Gaber, M. Waleed

    2008-01-01

    Efforts to improve the spatial resolution of CT scanners have focused mainly on reducing the source and detector element sizes, ignoring losses from the size of the secondary-ionization charge "clouds" created by the detected x-ray photons, i.e., the "physics limit." This paper focuses on implementing a technique called "projective compression," which allows further reduction in effective cell size while overcoming the physics limit as well. Projective compression signifies detector geometries in which the apparent cell size is smaller than the physical cell size, allowing large resolution boosts. A realization of this technique has been developed with a dual-arm "variable-resolution x-ray" (VRX) detector. Accurate values of the geometrical parameters are needed to convert VRX outputs to formats ready for optimal image reconstruction by standard CT techniques. The required calibrating data are obtained by scanning a rotating pin and fitting a theoretical parametric curve (using a multi-parameter minimization algorithm) to the resulting pin sinogram. Excellent fits are obtained for both detector-arm sections with an average (maximum) fit deviation of ~0.05 (0.1) detector cell width. Fit convergence and sensitivity to starting conditions are considered. Pre- and post-optimization reconstructions of the alignment pin and a biological subject reconstruction after calibration are shown. PMID:19430581
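    The pin-sinogram fit lends itself to a compact sketch. The toy below (simplified geometry and synthetic data; the real VRX parametric curve has more parameters) fits a sinusoidal pin trace with a Nelder-Mead ("amoeba") simplex search, the same family of minimizer named in the title.

```python
# Toy sinogram fit with a Nelder-Mead ("amoeba") simplex search (synthetic data).
import numpy as np
from scipy.optimize import minimize

angles = np.linspace(0.0, 2.0 * np.pi, 180, endpoint=False)
true = dict(center=256.0, radius=80.0, phase=0.4)                  # assumed "true" geometry
measured = true["center"] + true["radius"] * np.sin(angles + true["phase"])
measured += np.random.normal(0.0, 0.3, angles.size)               # detector-cell noise

def sse(p):
    center, radius, phase = p
    model = center + radius * np.sin(angles + phase)
    return np.sum((measured - model) ** 2)

fit = minimize(sse, x0=[250.0, 70.0, 0.0], method="Nelder-Mead")
print("fitted center/radius/phase:", fit.x)
```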

  18. Quantifying and Reducing Curve-Fitting Uncertainty in Isc: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
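    A minimal version of the local straight-line fit near short-circuit current is sketched below with synthetic I-V points; the window choice and noise level are assumptions, and the quoted uncertainty is the regression uncertainty only, which is exactly the quantity the report warns can understate the total uncertainty when model discrepancy is ignored.

```python
# Straight-line extrapolation to V = 0 for Isc, with the fit's own uncertainty (synthetic data).
import numpy as np

v = np.linspace(0.0, 0.1, 11)                                   # volts, points near V = 0 (assumed window)
i = 5.0 - 0.8 * v + np.random.normal(0.0, 2e-3, v.size)         # amps, synthetic measurements

coef, cov = np.polyfit(v, i, 1, cov=True)
isc = np.polyval(coef, 0.0)                                     # intercept = Isc estimate
isc_sigma = np.sqrt(cov[1, 1])                                  # regression uncertainty of the intercept
print(f"Isc = {isc:.4f} A +/- {isc_sigma:.4f} A (fit only; excludes model discrepancy)")
```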

  19. Practical application of electromyogram radiotelemetry: the suitability of applying laboratory-acquired calibration data to field data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geist, David R.; Brown, Richard S.; Lepla, Ken

    One of the practical problems with quantifying the amount of energy used by fish implanted with electromyogram (EMG) radio transmitters is that the signals emitted by the transmitter provide only a relative index of activity unless they are calibrated to the swimming speed of the fish. Ideally calibration would be conducted for each fish before it is released, but this is often not possible and calibration curves derived from more than one fish are used to interpret EMG signals from individuals which have not been calibrated. We tested the validity of this approach by comparing EMG data within three groups of three wild juvenile white sturgeon Acipenser transmontanus implanted with the same EMG radio transmitter. We also tested an additional six fish which were implanted with separate EMG transmitters. Within each group, a single EMG radio transmitter usually did not produce similar results in different fish. Grouping EMG signals among fish produced less accurate results than having individual EMG-swim speed relationships for each fish. It is unknown whether these differences were a result of different swimming performances among individual fish or inconsistencies in the placement or function of the EMG transmitters. In either case, our results suggest that caution should be used when applying calibration curves from one group of fish to another group of uncalibrated fish.

  20. Interim Calibration Report for the SMMR Simulator

    NASA Technical Reports Server (NTRS)

    Gloersen, P.; Cavalieri, D.

    1979-01-01

    The calibration data obtained during the fall 1978 Nimbus-G underflight mission with the scanning multichannel microwave radiometer (SMMR) simulator on board the NASA CV-990 aircraft were analyzed and an interim calibration algorithm was developed. Data selected for the analysis consisted of in flight sky, first-year sea ice, and open water observations, as well as ground based observations of fixed targets with varied temperatures of selected instrument components. For most of the SMMR channels, a good fit to the selected data set was obtained with the algorithm.

  1. The influence of the spectral emissivity of flat-plate calibrators on the calibration of IR thermometers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cárdenas-García, D.; Méndez-Lango, E.

    Flat Calibrators (FC) are an option for calibration of infrared thermometers (IT) with a fixed large target. FCs are neither blackbodies, nor gray-bodies; their spectral emissivity is lower than one and depends on wavelength. Nevertheless they are used as gray-bodies with a nominal emissivity value. FCs can be calibrated radiometrically using as reference a calibrated IR thermometer (RT). If an FC will be used to calibrate ITs that work in the same spectral range as the RT then its calibration is straightforward: the actual FC spectral emissivity is not required. This result is valid for any given fixed emissivity assessed to the FC. On the other hand, when the RT working spectral range does not match with that of the ITs to be calibrated with the FC then it is required to know the FC spectral emissivity as part of the calibration process. For this purpose, at CENAM, we developed an experimental setup to measure spectral emissivity in the infrared spectral range, based on a Fourier transform infrared spectrometer. Not all laboratories have emissivity measurement capability in the appropriate wavelength and temperature ranges to obtain the spectral emissivity. Thus, we present an estimation of the error introduced when the spectral range of the RT used to calibrate an FC and the spectral ranges of the ITs to be calibrated with the FC do not match. Some examples are developed for the cases when RT and IT spectral ranges are [8,13] μm and [8,14] μm respectively.

  2. Setup and Calibration of SLAC's Peripheral Monitoring Stations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cooper, C.

    2004-09-03

    measured). Detector response for both detectors is dependent upon the energy of the incident radiation; this trend had to be accounted for in the calibration of the BF3 detector. Energy dependence did not have to be taken into consideration when calibrating the GM detectors since GM detector response is only dependent on radiation energy below 100 keV; SLAC only produces a spectrum of gamma radiation above 100 keV. For the GM detector, calibration consisted of bringing a 137Cs source and a NIST-calibrated RADCAL Radiation Monitor Controller (model 9010) out to the field; the absolute dose rate was determined by the RADCAL device while simultaneously irradiating the GM detector to obtain a scaler reading corresponding to counts per minute. Detector response was then calculated. Calibration of the BF3 detector was done using NIST-certified neutron sources of known emission rates and energies. Five neutron sources (238PuBe, 238PuB, 238PuF4, 238PuLi and 252Cf) with different energies were used to account for the energy dependence of the response. The actual neutron dose rate was calculated by date-correcting NIST source data and considering the direct dose rate and scattered dose rate. Once the total dose rate (sum of the direct and scattered dose rates) was known, the response vs. energy curve was plotted. The first station calibrated (PMS6) was calibrated with these five neutron sources; all subsequent stations were calibrated with one neutron source and the energy dependence was assumed to be the same.

  3. Crash prediction modeling for curved segments of rural two-lane two-way highways in Utah.

    DOT National Transportation Integrated Search

    2015-10-01

    This report contains the results of the development of crash prediction models for curved segments of rural two-lane two-way highways in the state of Utah. The modeling effort included the calibration of the predictive model found in the Highway ...

  4. Quasi-Static Calibration Method of a High-g Accelerometer

    PubMed Central

    Wang, Yan; Fan, Jinbiao; Zu, Jing; Xu, Peng

    2017-01-01

    To solve the problem of resonance during quasi-static calibration of high-g accelerometers, we deduce the relationship between the minimum excitation pulse width and the resonant frequency of the calibrated accelerometer according to the second-order mathematical model of the accelerometer, and improve the quasi-static calibration theory. We establish a quasi-static calibration testing system, which uses a gas gun to generate high-g acceleration signals and applies a laser interferometer to reproduce the impact acceleration. These signals are used to drive the calibrated accelerometer. By comparing the excitation acceleration signal and the output responses of the calibrated accelerometer to the excitation signals, the impact sensitivity of the calibrated accelerometer is obtained. As indicated by the calibration test results, this calibration system produces excitation acceleration signals with a pulse width of less than 1000 μs and realizes the quasi-static calibration of high-g accelerometers with a resonant frequency above 20 kHz, with a calibration error of 3%. PMID:28230743

  5. Adapting Shape Parameters for Cubic Bezier Curves

    NASA Technical Reports Server (NTRS)

    Isacoff, D.; Bailey, M. J.

    1985-01-01

    Bezier curves are an established tool in Computer Aided Geometric Design. One of the drawbacks of the Bezier method is that the curves often bear little resemblance to their control polygons. As a result, it becomes increasingly difficult to obtain anything but a rough outline of the desired shape. One possible solution is to manipulate the curve itself instead of the control polygon. The standard cubic Bezier curve form has introduced into it two shape parameters, gamma 1 and gamma 2. These parameters give the user the ability to manipulate the curve while the control polygon retains its original form, thereby providing a more intuitive feel for the changes to the curve necessary to achieve the desired shape.
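    A small sketch of the idea is given below. The exact way the paper attaches gamma 1 and gamma 2 to the cubic Bezier form is not spelled out in the abstract, so the placement of the shape parameters here (scaling the interior control points about the chord midpoint while the drawn polygon is untouched) is purely an assumption for illustration.

```python
# Cubic Bezier evaluation with two illustrative shape parameters (assumed formulation).
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t, gamma1=1.0, gamma2=1.0):
    """Evaluate a cubic Bezier; gamma1/gamma2 scale the interior points about the
    chord midpoint so the control polygon drawn by the user stays unchanged."""
    mid = 0.5 * (p0 + p3)
    q1 = mid + gamma1 * (p1 - mid)        # assumed shape-parameter action
    q2 = mid + gamma2 * (p2 - mid)
    t = np.asarray(t)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * q1
            + 3 * (1 - t) * t ** 2 * q2 + t ** 3 * p3)

pts = cubic_bezier(np.array([0.0, 0.0]), np.array([1.0, 2.0]),
                   np.array([3.0, 2.0]), np.array([4.0, 0.0]),
                   np.linspace(0.0, 1.0, 50), gamma1=1.5, gamma2=0.5)
```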

  6. Auto calibration of a cone-beam-CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gross, Daniel; Heil, Ulrich; Schulze, Ralf

    2012-10-15

    Purpose: This paper introduces a novel autocalibration method for cone-beam CTs (CBCT) or flat-panel CTs, assuming a perfect rotation. The method is based on ellipse-fitting. Autocalibration refers to accurate recovery of the geometric alignment of a CBCT device from projection images alone, without any manual measurements. Methods: The authors use test objects containing small arbitrarily positioned radio-opaque markers. No information regarding the relative positions of the markers is used. In practice, the authors use three to eight metal ball bearings (diameter of 1 mm), e.g., positioned roughly in a vertical line such that their projection image curves on the detector preferably form large ellipses over the circular orbit. From this ellipse-to-curve mapping and also from its inversion the authors derive an explicit formula. Nonlinear optimization based on this mapping enables them to determine the six relevant parameters of the system up to the device rotation angle, which is sufficient to define the geometry of a CBCT machine assuming a perfect rotational movement. These parameters also include out-of-plane rotations. The authors evaluate their method by simulation based on data used in two similar approaches [L. Smekal, M. Kachelriess, E. Stepina, and W. A. Kalender, 'Geometric misalignment and calibration in cone-beam tomography,' Med. Phys. 31(12), 3242-3266 (2004); K. Yang, A. L. C. Kwan, D. F. Miller, and J. M. Boone, 'A geometric calibration method for cone beam CT systems,' Med. Phys. 33(6), 1695-1706 (2006)]. This allows a direct comparison of accuracy. Furthermore, the authors present real-world 3D reconstructions of a dry human spine segment and an electronic device. The reconstructions were computed from projections taken with a commercial dental CBCT device having two different focus-to-detector distances that were both calibrated with their method. The authors compare their reconstruction with a reconstruction computed by the manufacturer of the CBCT

  7. Sensitivity calibration procedures in optical-CT scanning of BANG 3 polymer gel dosimeters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Y.; Wuu, Cheng-Shie; Maryanski, Marek J.

    2010-02-15

    The dose response of the BANG 3 polymer gel dosimeter (MGS Research Inc., Madison, CT) was studied using the OCTOPUS laser CT scanner (MGS Research Inc., Madison, CT). Six 17 cm diameter and 12 cm high Barex cylinders, and 18 small glass vials were used to house the gel. The gel phantoms were irradiated with 6 and 10 MV photons, as well as 12 and 16 MeV electrons using a Varian Clinac 2100EX. Three calibration methods were used to obtain the dose response curves: (a) Optical density measurements on the 18 glass vials irradiated with graded doses from 0 to 4 Gy using 6 or 10 MV large field irradiations; (b) optical-CT scanning of Barex cylinders irradiated with graded doses (0.5, 1, 1.5, and 2 Gy) from four adjacent 4×4 cm² photon fields or 6×6 cm² electron fields; and (c) percent depth dose (PDD) comparison of optical-CT scans with ion chamber measurements for 6×6 cm², 12 and 16 MeV electron fields. The dose response of the BANG 3 gel was found to be linear and energy independent within the uncertainties of the experimental methods (about 3%). The slopes of the linearly fitted dose response curves (dose sensitivities) from the four field irradiations (0.0752 ± 3%, 0.0756 ± 3%, 0.0767 ± 3%, and 0.0759 ± 3% cm⁻¹ Gy⁻¹) and the PDD matching methods (0.0768 ± 3% and 0.0761 ± 3% cm⁻¹ Gy⁻¹) agree within 2.2%, indicating a good reproducibility of the gel dose response within phantoms of the same geometry. The dose sensitivities from the glass vial approach are different from those of the cylindrical Barex phantoms by more than 30%, owing probably to the difference in temperature inside the two types of phantoms during gel formation and irradiation, and possible oxygen contamination of the glass vial walls. The dose response curve obtained from the PDD matching approach with the 16 MeV electron field was used to calibrate the gel phantom irradiated with the 12 MeV, 6×6 cm² electron field. Three-dimensional dose

  8. Camera calibration based on the back projection process

    NASA Astrophysics Data System (ADS)

    Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui

    2015-12-01

    Camera calibration plays a crucial role in 3D measurement tasks of machine vision. In typical calibration processes, camera parameters are iteratively optimized in the forward imaging process (FIP). However, the results can only guarantee the minimum of 2D projection errors on the image plane, but not the minimum of 3D reconstruction errors. In this paper, we propose a universal method for camera calibration, which uses the back projection process (BPP). In our method, a forward projection model is used to obtain initial intrinsic and extrinsic parameters with a popular planar checkerboard pattern. Then, the extracted image points are projected back into 3D space and compared with the ideal point coordinates. Finally, the estimation of the camera parameters is refined by a non-linear function minimization process. The proposed method can obtain a more accurate calibration result, which is more physically useful. Simulation and practical data are given to demonstrate the accuracy of the proposed method.
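    A toy version of the back-projection idea is sketched below (single unknown parameter, fronto-parallel planar target at a known depth, no lens distortion; all of these are simplifying assumptions rather than the paper's full model): image points are back-projected to the known target plane and the focal length is chosen to minimize the 3D discrepancy instead of the usual 2D reprojection error.

```python
# Toy back-projection refinement of a single intrinsic parameter (synthetic data).
import numpy as np
from scipy.optimize import minimize_scalar

# Known planar target points (world frame, Z = 0) and their simulated image pixels.
world = np.array([[x, y, 0.0] for x in range(5) for y in range(4)], float) * 30.0   # mm
f_true, cx, cy, depth = 800.0, 320.0, 240.0, 1000.0                                 # assumed geometry
pixels = np.c_[f_true * world[:, 0] / depth + cx,
               f_true * world[:, 1] / depth + cy]

def back_projection_error(f):
    # Back-project each pixel to the known target depth and compare in 3D space.
    X = (pixels[:, 0] - cx) * depth / f
    Y = (pixels[:, 1] - cy) * depth / f
    return np.sum((X - world[:, 0]) ** 2 + (Y - world[:, 1]) ** 2)

res = minimize_scalar(back_projection_error, bounds=(500.0, 1200.0), method="bounded")
print("refined focal length ~", res.x)
```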

  9. A New Approach to the Internal Calibration of Reverberation-Mapping Spectra

    NASA Astrophysics Data System (ADS)

    Fausnaugh, M. M.

    2017-02-01

    We present a new procedure for the internal (night-to-night) calibration of timeseries spectra, with specific applications to optical AGN reverberation mapping data. The traditional calibration technique assumes that the narrow [O iii] λ5007 emission-line profile is constant in time; given a reference [O iii] λ5007 line profile, nightly spectra are aligned by fitting for a wavelength shift, a flux rescaling factor, and a change in the spectroscopic resolution. We propose the following modifications to this procedure: (1) we stipulate a constant spectral resolution for the final calibrated spectra, (2) we employ a more flexible model for changes in the spectral resolution, and (3) we use a Bayesian modeling framework to assess uncertainties in the calibration. In a test case using data for MCG+08-11-011, these modifications result in a calibration precision of ˜1 millimagnitude, which is approximately a factor of five improvement over the traditional technique. At this level, other systematic issues (e.g., the nightly sensitivity functions and Feii contamination) limit the final precision of the observed light curves. We implement this procedure as a python package (mapspec), which we make available to the community.

  10. Progress in obtaining an absolute calibration of a total deuterium-tritium neutron yield diagnostic based on copper activation

    NASA Astrophysics Data System (ADS)

    Ruiz, C. L.; Chandler, G. A.; Cooper, G. W.; Fehl, D. L.; Hahn, K. D.; Leeper, R. J.; McWatters, B. R.; Nelson, A. J.; Smelser, R. M.; Snow, C. S.; Torres, J. A.

    2012-10-01

    The 350-keV Cockcroft-Walton accelerator at Sandia National Laboratories' Ion Beam facility is being used to calibrate absolutely a total DT neutron yield diagnostic based on the 63Cu(n,2n)62Cu(β+) reaction. These investigations have led to first-order uncertainties approaching 5% or better. The experiments employ the associated-particle technique. Deuterons at 175 keV impinge on a 2.6 μm thick erbium tritide target, producing 14.1 MeV neutrons from the T(d,n)4He reaction. The alpha particles emitted are measured at two angles relative to the beam direction and used to infer the neutron flux on a copper sample. The induced 62Cu activity is then measured and related to the neutron flux. This method is known as the F-factor technique. A description of the associated-particle method, the copper sample geometries employed, and the present estimates of the uncertainties in the F-factor obtained are given.

  11. Progress in obtaining an absolute calibration of a total deuterium-tritium neutron yield diagnostic based on copper activation.

    PubMed

    Ruiz, C L; Chandler, G A; Cooper, G W; Fehl, D L; Hahn, K D; Leeper, R J; McWatters, B R; Nelson, A J; Smelser, R M; Snow, C S; Torres, J A

    2012-10-01

    The 350-keV Cockcroft-Walton accelerator at Sandia National Laboratories' Ion Beam facility is being used to calibrate absolutely a total DT neutron yield diagnostic based on the 63Cu(n,2n)62Cu(β+) reaction. These investigations have led to first-order uncertainties approaching 5% or better. The experiments employ the associated-particle technique. Deuterons at 175 keV impinge on a 2.6 μm thick erbium tritide target, producing 14.1 MeV neutrons from the T(d,n)4He reaction. The alpha particles emitted are measured at two angles relative to the beam direction and used to infer the neutron flux on a copper sample. The induced 62Cu activity is then measured and related to the neutron flux. This method is known as the F-factor technique. A description of the associated-particle method, the copper sample geometries employed, and the present estimates of the uncertainties in the F-factor obtained are given.

  12. Derivation of flood frequency curves in poorly gauged Mediterranean catchments using a simple stochastic hydrological rainfall-runoff model

    NASA Astrophysics Data System (ADS)

    Aronica, G. T.; Candela, A.

    2007-12-01

    Summary: In this paper a Monte Carlo procedure for deriving frequency distributions of peak flows using a semi-distributed stochastic rainfall-runoff model is presented. The rainfall-runoff model used here is a very simple one, with a limited number of parameters, and practically does not require any calibration, resulting in a robust tool for those catchments which are partially or poorly gauged. The procedure is based on three modules: a stochastic rainfall generator module, a hydrologic loss module and a flood routing module. In the rainfall generator module the rainfall storm, i.e. the maximum rainfall depth for a fixed duration, is assumed to follow the two-component extreme value (TCEV) distribution whose parameters have been estimated at regional scale for Sicily. The catchment response has been modelled by using the Soil Conservation Service-Curve Number (SCS-CN) method, in a semi-distributed form, for the transformation of total rainfall to effective rainfall, and a simple form of IUH for the flood routing. Here, the SCS-CN method is implemented in probabilistic form with respect to prior-to-storm conditions, allowing the classical iso-frequency assumption between rainfall and peak flow to be relaxed. The procedure is tested on six practical case studies where synthetic FFCs (flood frequency curves) were obtained starting from model variable distributions by simulating 5000 flood events, combining 5000 values of total rainfall depth for the storm duration with antecedent moisture conditions (AMC). The application of this procedure showed how the Monte Carlo simulation technique can reproduce the observed flood frequency curves with reasonable accuracy over a wide range of return periods using a simple and parsimonious approach, limited data input and without any calibration of the rainfall-runoff model.
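    A toy Monte Carlo sketch of the derived flood frequency curve is given below. The rainfall distribution, CN values, catchment constants and the lumped peak-flow conversion are all invented simplifications (the paper uses a regional TCEV rainfall model, probabilistic AMC and a semi-distributed SCS-CN scheme with IUH routing); only the overall structure, sampling storms, converting them to runoff and ranking the peaks, is meant to match.

```python
# Toy Monte Carlo flood frequency curve with a lumped SCS-CN loss model (all values assumed).
import numpy as np

rng = np.random.default_rng(0)
n = 5000                                                         # number of simulated storms

rain = rng.gumbel(loc=40.0, scale=15.0, size=n)                  # storm depth, mm (assumed distribution)
cn = rng.choice([65.0, 75.0, 85.0], size=n, p=[0.3, 0.5, 0.2])   # CN sampled over AMC classes (assumed)

s = 25400.0 / cn - 254.0                                         # SCS-CN potential retention, mm
ia = 0.2 * s                                                     # initial abstraction, mm
runoff = np.where(rain > ia, (rain - ia) ** 2 / (rain - ia + s), 0.0)   # effective rainfall, mm

area_km2, tc_hr = 50.0, 3.0                                      # assumed catchment area and time of concentration
peak = 0.278 * runoff * area_km2 / tc_hr                         # crude peak discharge, m^3/s

# Empirical flood frequency curve from Weibull plotting positions.
sorted_peaks = np.sort(peak)[::-1]
return_period = (n + 1) / np.arange(1, n + 1)
print("~100-year peak ~", sorted_peaks[int((n + 1) / 100) - 1], "m^3/s")
```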

  13. Generalized Calibration of the Polarimetric Albedo Scale of Asteroids

    NASA Astrophysics Data System (ADS)

    Lupishko, D. F.

    2018-03-01

    Six different calibrations of the polarimetric albedo scale of asteroids have been published so far. Each of them contains its particular random and systematic errors and yields its values of geometric albedo. On the one hand, this complicates their analysis and comparison; on the other hand, it becomes more and more difficult to decide which of the proposed calibrations should be used. Moreover, in recent years, new databases on the albedo of asteroids obtained from the radiometric surveys of the sky with the orbital space facilities (the InfraRed Astronomical Satellite (IRAS), the Japanese astronomical satellite AKARI (which means "light"), the Wide-field Infrared Survey Explorer (WISE), and the Near-Earth Object Wide-field Survey Explorer (NEOWISE)) have appeared; and the database on the diameters and albedos of asteroids obtained from their occultations of stars has substantially increased. Here, we critically review the currently available calibrations and propose a new generalized calibration derived from the interrelations between the slope h and the albedo and between Pmin and the albedo. This calibration is based on all of the available series of the asteroid albedos and the most complete data on the polarization parameters of asteroids. The generalized calibration yields the values of the polarimetric albedo of asteroids in the system unified with the radiometric albedos and the albedos obtained from occultations of stars by asteroids. This, in turn, removes the difficulties in their comparison, joint analysis, etc.

  14. The Sloan Digital Sky Survey-II: Photometry and Supernova Ia Light Curves from the 2005 Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holtzman, Jon A.; /New Mexico State U.; Marriner, John

    2010-08-26

    We present ugriz light curves for 146 spectroscopically confirmed or spectroscopically probable Type Ia supernovae from the 2005 season of the SDSS-II Supernova survey. The light curves have been constructed using a photometric technique that we call scene modeling, which is described in detail here; the major feature is that supernova brightnesses are extracted from a stack of images without spatial resampling or convolution of the image data. This procedure produces accurate photometry along with accurate estimates of the statistical uncertainty, and can be used to derive photometry taken with multiple telescopes. We discuss various tests of this technique that demonstrate its capabilities. We also describe the methodology used for the calibration of the photometry, and present calibrated magnitudes and fluxes for all of the spectroscopic SNe Ia from the 2005 season.

  15. Mexican national pyranometer network calibration

    NASA Astrophysics Data System (ADS)

    Valdes, M.; Villarreal, L.; Estevez, H.; Riveros, D.

    2013-12-01

    In order to take advantage of solar radiation as an alternative energy source, it is necessary to evaluate its spatial and temporal availability. The Mexican National Meteorological Service (SMN) has a network of 136 meteorological stations, each equipped with a pyranometer for measuring global solar radiation. Some of these stations had not been calibrated in several years. The Mexican Department of Energy (SENER), in order to count on a reliable evaluation of the solar resource, funded this project to calibrate the SMN pyranometer network and validate the data. The calibration of the 136 pyranometers by the intercomparison method recommended by the World Meteorological Organization (WMO) requires lengthy observations and specific environmental conditions such as clear skies and a stable atmosphere, circumstances that determine the site and season of the calibration. The Solar Radiation Section of the Instituto de Geofísica of the Universidad Nacional Autónoma de México is a Regional Center of the WMO and is certified to carry out the calibration procedures and issue certificates. We are responsible for the recalibration of the pyranometer network of the SMN. A continuous-emission solar simulator with exposed areas of 30 cm diameter was acquired to reduce the calibration time and remove the dependence on atmospheric conditions. We present the results of the calibration of 10 thermopile pyranometers and one photovoltaic cell by the intercomparison method, with more than 10000 observations each, and those obtained with the solar simulator.

  16. Calibration Experiments for a Computer Vision Oyster Volume Estimation System

    ERIC Educational Resources Information Center

    Chang, G. Andy; Kerns, G. Jay; Lee, D. J.; Stanek, Gary L.

    2009-01-01

    Calibration is a technique commonly used in science and engineering research in which measurement tools are calibrated to obtain more accurate measurements. It is an important technique in various industries. In many situations, calibration is an application of linear regression, and is a good topic to be included when explaining and…
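
    A minimal sketch of that idea, calibration as an application of linear regression: fit a straight line to known standards and invert it for an unknown reading. The concentration and response values below are purely illustrative.

        import numpy as np

        # Known standards (x) and instrument readings (y) -- illustrative values only.
        concentration = np.array([0.0, 10.0, 20.0, 40.0, 80.0])   # e.g. ppm
        response = np.array([0.02, 0.21, 0.39, 0.82, 1.61])       # e.g. absorbance

        # Fit response = slope * concentration + intercept (the calibration curve).
        slope, intercept = np.polyfit(concentration, response, 1)

        # Invert the calibration curve to estimate an unknown sample.
        unknown_response = 0.55
        estimated = (unknown_response - intercept) / slope
        print(f"slope={slope:.4f}, intercept={intercept:.4f}, estimate={estimated:.1f} ppm")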

  17. Calibration of Hurricane Imaging Radiometer C-Band Receivers

    NASA Technical Reports Server (NTRS)

    Biswas, Sayak K.; Cecil, Daniel J.; James, Mark W.

    2017-01-01

    The laboratory calibration of the airborne Hurricane Imaging Radiometer's C-band multi-frequency receivers is described here. The method used to obtain the values of receiver front-end loss, internal cold-load brightness temperature, and injected noise diode temperature is presented, along with the expected RMS uncertainty in the final calibration.

  18. Camera calibration: active versus passive targets

    NASA Astrophysics Data System (ADS)

    Schmalz, Christoph; Forster, Frank; Angelopoulou, Elli

    2011-11-01

    Traditionally, most camera calibrations rely on a planar target with well-known marks. However, the localization error of the marks in the image is a source of inaccuracy. We propose the use of high-resolution digital displays as active calibration targets to obtain more accurate calibration results for all types of cameras. The display shows a series of coded patterns to generate correspondences between world points and image points. This has several advantages. No special calibration hardware is necessary because suitable displays are practically ubiquitous. The method is fully automatic, and no identification of marks is necessary. For a coding scheme based on phase shifting, the localization accuracy is approximately independent of the camera's focus settings. Most importantly, higher accuracy can be achieved compared to passive targets, such as printed checkerboards. A rigorous evaluation is performed to substantiate this claim. Our active target method is compared to standard calibrations using a checkerboard target. We perform camera calibrations with different combinations of displays, cameras, and lenses, as well as with simulated images, and find markedly lower reprojection errors when using active targets. For example, in a stereo reconstruction task, the accuracy of a system calibrated with an active target is five times better.
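
    For reference, a minimal sketch of the conventional passive-target baseline that the active-display method is compared against: OpenCV's checkerboard calibration, with the RMS reprojection error as the figure of merit. The image file pattern and board geometry are assumptions.

        import glob
        import cv2
        import numpy as np

        pattern_size = (9, 6)  # inner corners per row and column (assumed board)
        objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)

        obj_points, img_points, image_size = [], [], None
        for fname in glob.glob("checkerboard_*.png"):  # hypothetical file names
            gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
            image_size = gray.shape[::-1]
            found, corners = cv2.findChessboardCorners(gray, pattern_size)
            if found:
                corners = cv2.cornerSubPix(
                    gray, corners, (11, 11), (-1, -1),
                    (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
                obj_points.append(objp)
                img_points.append(corners)

        # rms is the reprojection error used to compare calibration quality.
        rms, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
            obj_points, img_points, image_size, None, None)
        print("RMS reprojection error:", rms)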

  19. Detection and quantification of a toxic salt substitute (LiCl) by using laser induced breakdown spectroscopy (LIBS).

    PubMed

    Sezer, Banu; Velioglu, Hasan Murat; Bilge, Gonca; Berkkan, Aysel; Ozdinc, Nese; Tamer, Ugur; Boyaci, Ismail Hakkı

    2018-01-01

    The use of Li salts in foods has been prohibited due to their negative effects on the central nervous system; however, they might still be used, especially in meat products, as Na substitutes. Lithium can be toxic and even lethal at higher concentrations and is not approved in foods. The present study focuses on Li analysis in meatballs by using laser induced breakdown spectroscopy (LIBS). Meatball samples were analyzed using LIBS and flame atomic absorption spectroscopy. Calibration curves were obtained by utilizing the Li emission lines at 610 nm and 670 nm for univariate calibration. The results showed that the Li calibration curve at 670 nm provided successful determination of Li, with an R² of 0.965 and a limit of detection (LOD) of 4.64 ppm. While the calibration curve obtained using the emission line at 610 nm gave an R² of 0.991 and an LOD of 22.6 ppm, the calibration curve obtained at 670 nm for concentrations below 1300 ppm gave an R² of 0.965 and an LOD of 4.64 ppm.
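
    A minimal sketch of a univariate calibration curve of the kind described above, with the limit of detection estimated as LOD ≈ 3.3·σ/slope (σ taken from the fit residuals). The Li concentrations and emission intensities are illustrative, not the paper's data.

        import numpy as np

        li_ppm = np.array([0, 100, 300, 600, 900, 1300], dtype=float)
        intensity = np.array([0.05, 0.52, 1.48, 2.95, 4.38, 6.40])  # assumed line intensities (a.u.)

        slope, intercept = np.polyfit(li_ppm, intensity, 1)
        residuals = intensity - (slope * li_ppm + intercept)
        sigma = residuals.std(ddof=2)          # two fitted parameters

        r2 = 1 - (residuals ** 2).sum() / ((intensity - intensity.mean()) ** 2).sum()
        lod = 3.3 * sigma / slope
        print(f"R^2 = {r2:.3f}, LOD = {lod:.1f} ppm")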

  20. AstroImageJ: Image Processing and Photometric Extraction for Ultra-precise Astronomical Light Curves

    NASA Astrophysics Data System (ADS)

    Collins, Karen A.; Kielkopf, John F.; Stassun, Keivan G.; Hessman, Frederic V.

    2017-02-01

    ImageJ is a graphical user interface (GUI) driven, public domain, Java-based, software package for general image processing traditionally used mainly in life sciences fields. The image processing capabilities of ImageJ are useful and extendable to other scientific fields. Here we present AstroImageJ (AIJ), which provides an astronomy specific image display environment and tools for astronomy specific image calibration and data reduction. Although AIJ maintains the general purpose image processing capabilities of ImageJ, AIJ is streamlined for time-series differential photometry, light curve detrending and fitting, and light curve plotting, especially for applications requiring ultra-precise light curves (e.g., exoplanet transits). AIJ reads and writes standard Flexible Image Transport System (FITS) files, as well as other common image formats, provides FITS header viewing and editing, and is World Coordinate System aware, including an automated interface to the astrometry.net web portal for plate solving images. AIJ provides research grade image calibration and analysis tools with a GUI driven approach, and easily installed cross-platform compatibility. It enables new users, even at the level of undergraduate student, high school student, or amateur astronomer, to quickly start processing, modeling, and plotting astronomical image data with one tightly integrated software package.

  1. Neck Muscle Moment Arms Obtained In-Vivo from MRI: Effect of Curved and Straight Modeled Paths.

    PubMed

    Suderman, Bethany L; Vasavada, Anita N

    2017-08-01

    Musculoskeletal models of the cervical spine commonly represent neck muscles with straight paths. However, straight lines do not best represent the natural curvature of muscle paths in the neck, because the paths are constrained by bone and soft tissue. The purpose of this study was to estimate moment arms of curved and straight neck muscle paths using different moment arm calculation methods: tendon excursion, geometric, and effective torque. Curved and straight muscle paths were defined for two subject-specific cervical spine models derived from in vivo magnetic resonance images (MRI). Modeling neck muscle paths with curvature provides significantly different moment arm estimates than straight paths for 10 of 15 neck muscles (p < 0.05, repeated measures two-way ANOVA). Moment arm estimates were also found to be significantly different among moment arm calculation methods for 11 of 15 neck muscles (p < 0.05, repeated measures two-way ANOVA). In particular, using straight lines to model muscle paths can lead to overestimating neck extension moment. However, moment arm methods for curved paths should be investigated further, as different methods of calculating moment arm can provide different estimates.
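
    One of the calculation methods named above, tendon excursion, defines the moment arm as the derivative of musculotendon length with respect to joint angle, r(θ) = dL/dθ (sign conventions vary). A minimal numerical sketch with hypothetical length-angle samples:

        import numpy as np

        theta_deg = np.linspace(-30, 30, 13)               # neck joint angle (deg)
        theta_rad = np.deg2rad(theta_deg)
        muscle_length = 0.12 + 0.018 * np.sin(theta_rad)   # assumed curved-path length (m)

        moment_arm = np.gradient(muscle_length, theta_rad) # numerical dL/dθ, in metres
        print(moment_arm)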

  2. Traceable Co-C eutectic points for thermocouple calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jahan, F.; Ballico, M. J.

    2013-09-11

    National Measurement Institute of Australia (NMIA) has developed a miniature crucible design suitable for measurement by both thermocouples and radiation thermometry, and has established an ensemble of five Co-C eutectic-point cells based on this design. The cells in this ensemble have been individually calibrated using both ITS-90 radiation thermometry and thermocouples calibrated on the ITS-90 by the NMIA mini-coil methodology. The assigned ITS-90 temperatures obtained using these different techniques are both repeatable and consistent, despite the use of different furnaces and measurement conditions. The results demonstrate that, if individually calibrated, such cells can be practically used as part of a national traceability scheme for thermocouple calibration, providing a useful intermediate calibration point between Cu and Pd.

  3. Ensuring the consistency of Flow Duration Curve reconstructions: the 'quantile solidarity' approach

    NASA Astrophysics Data System (ADS)

    Poncelet, Carine; Andreassian, Vazken; Oudin, Ludovic

    2015-04-01

    Flow Duration Curves (FDCs) are a hydrologic tool describing the distribution of streamflows at a catchment outlet. FDCs are usually used for calibration of hydrological models, managing water quality, and classifying catchments, among other purposes. For gauged catchments, empirical FDCs can be computed from streamflow records. For ungauged catchments, on the other hand, FDCs cannot be obtained from streamflow records and must therefore be obtained in another manner, for example through reconstructions. Regression-based reconstructions are methods relying on the evaluation of quantiles separately from catchments' attributes (climatic or physical features). The advantage of this category of methods is that it is informative about the processes and it is non-parametric. However, the large number of parameters required can cause unwanted artifacts, typically reconstructions that do not always produce increasing quantiles. In this paper we propose a new approach named Quantile Solidarity (QS), which is applied under strict proxy-basin test conditions (Klemes, 1986) to a set of 600 French catchments. Half of the catchments are considered as gauged and used to calibrate the regression and compute its residuals. The QS approach consists of a three-step regionalization scheme, which first links quantile values to physical descriptors, then reduces the number of regression parameters, and finally exploits the spatial correlation of the residuals. The innovation is the use of the parameters' continuity across the quantiles to dramatically reduce the number of parameters. The second half of the catchments is used as an independent validation set, over which we show that the QS approach ensures strictly increasing FDC reconstructions in ungauged conditions. Reference: V. KLEMEŠ (1986) Operational testing of hydrological simulation models, Hydrological Sciences Journal, 31:1, 13-24
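
    For gauged catchments, the empirical FDC mentioned above is simply the sorted streamflow record paired with exceedance probabilities. A minimal sketch with illustrative flows and Weibull plotting positions:

        import numpy as np

        flows = np.array([12.3, 5.1, 30.2, 8.7, 2.4, 19.6, 4.0, 1.1, 7.5, 15.8])  # m^3/s, illustrative

        sorted_flows = np.sort(flows)[::-1]            # descending
        n = sorted_flows.size
        exceedance = np.arange(1, n + 1) / (n + 1)     # Weibull plotting position

        for p, q in zip(exceedance, sorted_flows):
            print(f"P(exceedance) = {p:.2f}   Q = {q:.1f} m^3/s")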

  4. Calibration of Solar Radio Spectrometer of the Purple Mountain Observatory

    NASA Astrophysics Data System (ADS)

    Lei, LU; Si-ming, LIU; Qi-wu, SONG; Zong-jun, NING

    2015-10-01

    Calibration is a basic and important task in solar radio spectral observations. It not only provides the solar radio flux, an important physical quantity for solar observations, but also removes the flat field of the radio spectrometer so that the radio spectrogram can be displayed clearly. In this paper, we first introduce the basic method of calibration based on the data of the solar radio spectrometer of Purple Mountain Observatory. We then analyze the variation of the calibration coefficients and give the calibrated results for a few flares. These results are compared with those of the Nobeyama solar radio polarimeter and the hard X-ray observations of the RHESSI (Reuven Ramaty High Energy Solar Spectroscopic Imager) satellite; they are shown to be consistent with the characteristics of typical solar flare light curves. In particular, the analysis of the correlation between the variation of the radio flux and the variation of the hard X-ray flux in the pulsing phase of a flare indicates that these observations can be used to study the relevant radiation mechanism, as well as the related energy release and particle acceleration processes.

  5. Evaluation of factors affecting CGMS calibration.

    PubMed

    Buckingham, Bruce A; Kollman, Craig; Beck, Roy; Kalajian, Andrea; Fiallo-Scharer, Rosanna; Tansey, Michael J; Fox, Larry A; Wilson, Darrell M; Weinzimer, Stuart A; Ruedy, Katrina J; Tamborlane, William V

    2006-06-01

    The optimal number/timing of calibrations entered into the CGMS (Medtronic MiniMed, Northridge, CA) continuous glucose monitoring system have not been previously described. Fifty subjects with Type 1 diabetes mellitus (10-18 years old) were hospitalized in a clinical research center for approximately 24 h on two separate days. CGMS and OneTouch Ultra meter (LifeScan, Milpitas, CA) data were obtained. The CGMS was retrospectively recalibrated using the Ultra data varying the number and timing of calibrations. Resulting CGMS values were compared against laboratory reference values. There was a modest improvement in accuracy with increasing number of calibrations. The median relative absolute deviation (RAD) was 14%, 15%, 13%, and 13% when using three, four, five, and seven calibration values, respectively (P < 0.001). Corresponding percentages of CGMS-reference pairs meeting the International Organisation for Standardisation criteria were 66%, 67%, 71%, and 72% (P < 0.001). Nighttime accuracy improved when daytime calibrations (pre-lunch and pre-dinner) were removed leaving only two calibrations at 9 p.m. and 6 a.m. (median difference, -2 vs. -9 mg/dL, P < 0.001; median RAD, 12% vs. 15%, P = 0.001). Accuracy was better on visits where the average absolute rate of glucose change at the times of calibration was lower. On visits with average absolute rates <0.5, 0.5 to <1.0, 1.0 to <1.5, and >or=1.5 mg/dL/min, median RAD values were 13% versus 14% versus 17% versus 19%, respectively (P = 0.05). Although accuracy is slightly improved with more calibrations, the timing of the calibrations appears more important. Modifying the algorithm to put less weight on daytime calibrations for nighttime values and calibrating during times of relative glucose stability may have greater impact on accuracy.
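
    A minimal sketch of the accuracy metric reported above, the relative absolute deviation (RAD) between sensor readings and laboratory reference values; the paired glucose values are illustrative.

        import numpy as np

        cgms = np.array([110, 95, 160, 210, 78, 130], dtype=float)        # mg/dL
        reference = np.array([120, 100, 150, 230, 85, 125], dtype=float)  # mg/dL

        rad = np.abs(cgms - reference) / reference
        print(f"median RAD = {np.median(rad) * 100:.1f}%")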

  6. Evaluation of Factors Affecting CGMS Calibration

    PubMed Central

    2006-01-01

    Background The optimal number/timing of calibrations entered into the Continuous Glucose Monitoring System (“CGMS”; Medtronic MiniMed, Northridge, CA) have not been previously described. Methods Fifty subjects with T1DM (10–18y) were hospitalized in a clinical research center for ~24h on two separate days. CGMS and OneTouch® Ultra® Meter (“Ultra”; LifeScan, Milpitas, CA) data were obtained. The CGMS was retrospectively recalibrated using the Ultra data varying the number and timing of calibrations. Resulting CGMS values were compared against laboratory reference values. Results There was a modest improvement in accuracy with increasing number of calibrations. The median relative absolute deviation (RAD) was 14%, 15%, 13% and 13% when using 3, 4, 5 and 7 calibration values, respectively (p<0.001). Corresponding percentages of CGMS-reference pairs meeting the ISO criteria were 66%, 67%, 71% and 72% (p<0.001). Nighttime accuracy improved when daytime calibrations (pre-lunch and pre-dinner) were removed leaving only two calibrations at 9p.m. and 6a.m. (median difference: −2 vs. −9mg/dL, p<0.001; median RAD: 12% vs. 15%, p=0.001). Accuracy was better on visits where the average absolute rate of glucose change at the times of calibration was lower. On visits with average absolute rates <0.5, 0.5-<1.0, 1.0-<1.5 and ≥1.5mg/dL/min, median RAD values were 13% vs. 14% vs. 17% vs. 19%, respectively (p=0.05). Conclusions Although accuracy is slightly improved with more calibrations, the timing of the calibrations appears more important. Modifying the algorithm to put less weight on daytime calibrations for nighttime values and calibrating during times of relative glucose stability may have greater impact on accuracy. PMID:16800753

  7. Calibration of the NASA GRC 16 In. Mass-Flow Plug

    NASA Technical Reports Server (NTRS)

    Davis, David O.; Friedlander, David J.; Saunders, J. David; Frate, Franco C.; Foster, Lancert E.

    2012-01-01

    The results of an experimental calibration of the NASA Glenn Research Center 16 in. Mass-Flow Plug (MFP) are presented and compared to a previously obtained calibration of a 15 in. Mass-Flow Plug. An ASME low-beta, long-radius nozzle was used as the calibration reference. The discharge coefficient for the ASME nozzle was obtained by numerically simulating the flow through the nozzle from the WIND-US code. The results showed agreement between the 15 in. and 16 in. MFPs for area ratios (MFP to pipe area ratio) greater than 0.6 but deviate at area ratios below this value for reasons that are not fully understood. A general uncertainty analysis was also performed and indicates that large uncertainties in the calibration are present for low MFP area ratios.

  8. Automatic multi-camera calibration for deployable positioning systems

    NASA Astrophysics Data System (ADS)

    Axelsson, Maria; Karlsson, Mikael; Rudner, Staffan

    2012-06-01

    Surveillance with automated positioning and tracking of subjects and vehicles in 3D is desired in many defence and security applications. Camera systems with stereo or multiple cameras are often used for 3D positioning. In such systems, accurate camera calibration is needed to obtain a reliable 3D position estimate. There is also a need for automated camera calibration to facilitate fast deployment of semi-mobile multi-camera 3D positioning systems. In this paper we investigate a method for automatic calibration of the extrinsic camera parameters (relative camera pose and orientation) of a multi-camera positioning system. It is based on estimation of the essential matrix between each camera pair using the 5-point method for intrinsically calibrated cameras. The method is compared to a manual calibration method using real HD video data from a field trial with a multicamera positioning system. The method is also evaluated on simulated data from a stereo camera model. The results show that the reprojection error of the automated camera calibration method is close to or smaller than the error for the manual calibration method and that the automated calibration method can replace the manual calibration.
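
    A minimal sketch of the pairwise step described above: estimate the essential matrix between two intrinsically calibrated cameras with the 5-point method (inside RANSAC) and recover their relative pose. The intrinsic matrix and the synthetic scene are assumptions made only so the snippet runs.

        import cv2
        import numpy as np

        K = np.array([[1200.0, 0.0, 960.0],
                      [0.0, 1200.0, 540.0],
                      [0.0, 0.0, 1.0]])              # assumed intrinsics

        # Synthetic scene: 3D points seen by two cameras with a known relative pose.
        rng = np.random.default_rng(0)
        X = rng.uniform([-2, -1, 4], [2, 1, 8], size=(60, 3))
        R_true, _ = cv2.Rodrigues(np.array([0.0, 0.15, 0.0]))
        t_true = np.array([[1.0], [0.0], [0.0]])

        def project(points, R, t):
            cam = (R @ points.T + t).T
            pix = (K @ cam.T).T
            return (pix[:, :2] / pix[:, 2:3]).astype(np.float32)

        pts1 = project(X, np.eye(3), np.zeros((3, 1)))
        pts2 = project(X, R_true, t_true)

        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                       prob=0.999, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
        print("recovered rotation:\n", R, "\nunit translation:", t.ravel())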

  9. Calibration of a conodont apatite-based Ordovician 87Sr/86Sr curve to biostratigraphy and geochronology: Implications for stratigraphic resolution

    USGS Publications Warehouse

    Saltzman, M. R.; Edwards, C. T.; Leslie, S. A.; Dwyer, Gary S.; Bauer, J. A.; Repetski, John E.; Harris, A. G.; Bergstrom, S. M.

    2014-01-01

    The Ordovician 87Sr/86Sr isotope seawater curve is well established and shows a decreasing trend until the mid-Katian. However, uncertainties in calibration of this curve to biostratigraphy and geochronology have made it difficult to determine how the rates of 87Sr/86Sr decrease may have varied, which has implications for both the stratigraphic resolution possible using Sr isotope stratigraphy and efforts to model the effects of Ordovician geologic events. We measured 87Sr/86Sr in conodont apatite in North American Ordovician sections that are well studied for conodont biostratigraphy, primarily in Nevada, Oklahoma, the Appalachian region, and Ohio Valley. Our results indicate that conodont apatite may provide an accurate medium for Sr isotope stratigraphy and strengthen previous reports that point toward a significant increase in the rate of fall in seawater 87Sr/86Sr during the Middle Ordovician Darriwilian Stage. Our 87Sr/86Sr results suggest that Sr isotope stratigraphy will be most useful as a high-resolution tool for global correlation in the mid-Darriwilian to mid-Sandbian, when the maximum rate of fall in 87Sr/86Sr is estimated at ∼5.0–10.0 × 10⁻⁵ per m.y. Variable preservation of conodont elements limits the precision for individual stratigraphic horizons. Replicate conodont analyses from the same sample differ by an average of ∼4.0 × 10⁻⁵ (the 2σ standard deviation is 6.2 × 10⁻⁵), which in the best case scenario allows for subdivision of Ordovician time intervals characterized by the highest rates of fall in 87Sr/86Sr at a maximum resolution of ∼0.5–1.0 m.y. Links between the increased rate of fall in 87Sr/86Sr beginning in the mid-late Darriwilian (Phragmodus polonicus to Pygodus serra conodont zones) and geologic events continue to be investigated, but the coincidence with a long-term rise in sea level (Sauk-Tippecanoe megasequence boundary) and tectonic events (Taconic orogeny) in North America provides a plausible

  10. UAV Cameras: Overview and Geometric Calibration Benchmark

    NASA Astrophysics Data System (ADS)

    Cramer, M.; Przybilla, H.-J.; Zurhorst, A.

    2017-08-01

    Different UAV platforms and sensors are already used in mapping, many of them equipped with (sometimes modified) cameras as known from the consumer market. Even though these systems normally fulfil their requested mapping accuracy, the question arises: which system performs best? This calls for a benchmark to check selected UAV-based camera systems in well-defined, reproducible environments. Such a benchmark is attempted in this work. Nine different cameras used on UAV platforms, representing typical camera classes, are considered. The focus here is on geometry, which is tightly linked to the process of geometric calibration of the system. In most applications the calibration is performed in-situ, i.e. calibration parameters are obtained as part of the project data itself. This is often motivated by the fact that consumer cameras do not keep a constant geometry and thus cannot be regarded as metric cameras. Still, some of the commercial systems are quite stable over time, as has been proven by repeated (terrestrial) calibration runs. Already (pre-)calibrated systems may offer advantages, especially when the block geometry of the project does not allow for a stable and sufficient in-situ calibration. Especially for such scenarios, close-to-metric UAV cameras may have advantages. Empirical airborne test flights in a calibration field have shown how block geometry influences the estimated calibration parameters and how consistently the parameters from lab calibration can be reproduced.

  11. Light-curve Analysis of Neon Novae

    NASA Astrophysics Data System (ADS)

    Hachisu, Izumi; Kato, Mariko

    2016-01-01

    We analyzed light curves of five neon novae, QU Vul, V351 Pup, V382 Vel, V693 CrA, and V1974 Cyg, and determined their white dwarf (WD) masses and distance moduli on the basis of theoretical light curves composed of free-free and photospheric emission. For QU Vul, we obtained a distance of d ≈ 2.4 kpc, reddening of E(B − V) ≈ 0.55, and WD mass of M_WD = 0.82–0.96 M⊙. This suggests that an oxygen-neon WD lost a mass of more than ≈0.1 M⊙ since its birth. For V351 Pup, we obtained d ≈ 5.5 kpc, E(B − V) ≈ 0.45, and M_WD = 0.98–1.1 M⊙. For V382 Vel, we obtained d ≈ 1.6 kpc, E(B − V) ≈ 0.15, and M_WD = 1.13–1.28 M⊙. For V693 CrA, we obtained d ≈ 7.1 kpc, E(B − V) ≈ 0.05, and M_WD = 1.15–1.25 M⊙. For V1974 Cyg, we obtained d ≈ 1.8 kpc, E(B − V) ≈ 0.30, and M_WD = 0.95–1.1 M⊙. For comparison, we added the carbon-oxygen nova V1668 Cyg to our analysis and obtained d ≈ 5.4 kpc, E(B − V) ≈ 0.30, and M_WD = 0.98–1.1 M⊙. In QU Vul, photospheric emission contributes 0.4–0.8 mag at most to the optical light curve compared with free-free emission only. In V351 Pup and V1974 Cyg, photospheric emission contributes very little (0.2–0.4 mag at most) to the optical light curve. In V382 Vel and V693 CrA, free-free emission dominates the continuum spectra, and photospheric emission does not contribute to the optical magnitudes. We also discuss the maximum magnitude versus rate of decline relation for these novae based on the universal decline law.

  12. Hot-wire calibration in subsonic/transonic flow regimes

    NASA Technical Reports Server (NTRS)

    Nagabushana, K. A.; Ash, Robert L.

    1995-01-01

    A different approach for calibrating hot-wires, which simplifies the calibration procedure and reduces the tunnel run-time by an order of magnitude, was sought. In general, it is accepted that the directly measurable quantities in any flow are velocity, density, and total temperature. Very few facilities have the capability of varying the total temperature over an adequate range. However, if the overheat temperature parameter, a_w, is used to calibrate the hot-wire, then the directly measurable quantity, voltage, will be a function of the flow variables and the overheat parameter, i.e., E = f(u, p, a_w, T_w), where a_w contains the needed total temperature information. In this report, various methods of evaluating sensitivities with different dependent and independent variables to calibrate a 3-wire hot-wire probe using a constant temperature anemometer (CTA) in subsonic/transonic flow regimes are presented. The advantage of using a_w as the independent variable instead of the total temperature, t_o, or the overheat temperature parameter, tau, is that while running a calibration test it is not necessary to know the recovery factor or the coefficients in a wire resistance-to-temperature relationship for a given probe. It was deduced that the method employing the relationship E = f(u, p, a_w) should result in the most accurate calibration of hot-wire probes. Any other method would require additional measurements. Also, this method allows calibration and determination of accurate temperature fluctuation information even in atmospheric wind tunnels, where there is at present no ability to obtain any temperature sensitivity information. This technique greatly simplifies the calibration process for hot-wires, provides the required calibration information needed in obtaining temperature fluctuations, and reduces both the tunnel run-time and the test matrix required to calibrate hot-wires. Some of the results using the above techniques are presented

  13. Effects of dilution rates, animal species and instruments on the spectrophotometric determination of sperm counts.

    PubMed

    Rondeau, M; Rouleau, M

    1981-06-01

    Using semen from bull, boar, and stallion as well as different spectrophotometers, we established calibration curves relating the optical density of a sperm sample to the sperm count obtained on the hemacytometer. The results show that, for a given spectrophotometer, the calibration curve is not characteristic of the animal species we studied. The differences in size of the spermatozoa are probably too small to account for the anticipated specificity of the calibration curve. Furthermore, the fact that different dilution rates must be used, because of the vastly different concentrations of spermatozoa characteristic of those species, has no effect on the calibration curves, since the effect of the dilution rate is shown to be artefactual. On the other hand, for a given semen, the calibration curve varies depending upon the spectrophotometer used. However, if two instruments have the same characteristics in terms of spectral bandwidth, their calibration curves are not statistically different.

  14. Photometric Calibration of Consumer Video Cameras

    NASA Technical Reports Server (NTRS)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    analyze. The light source used to generate the calibration images is an artificial variable star comprising a Newtonian collimator illuminated by a light source modulated by a rotating variable neutral- density filter. This source acts as a point source, the brightness of which varies at a known rate. A video camera to be calibrated is aimed at this source. Fixed neutral-density filters are inserted in or removed from the light path as needed to make the video image of the source appear to fluctuate between dark and saturated bright. The resulting video-image data are analyzed by use of custom software that determines the integrated signal in each video frame and determines the system response curve (measured output signal versus input brightness). These determinations constitute the calibration, which is thereafter used in automatic, frame-by-frame processing of the data from the video images to be analyzed.

  15. Identifying influential data points in hydrological model calibration and their impact on streamflow predictions

    NASA Astrophysics Data System (ADS)

    Wright, David; Thyer, Mark; Westra, Seth

    2015-04-01

    Highly influential data points are those that have a disproportionately large impact on model performance, parameters, and predictions. However, in current hydrological modelling practice the relative influence of individual data points on hydrological model calibration is not commonly evaluated. This presentation illustrates and evaluates several influence diagnostic tools that hydrological modellers can use to assess the relative influence of data. The feasibility and importance of including influence detection diagnostics as a standard tool in hydrological model calibration is discussed. Two classes of influence diagnostics are evaluated: (1) computationally demanding numerical "case deletion" diagnostics; and (2) computationally efficient analytical diagnostics, based on Cook's distance. These diagnostics are compared against hydrologically oriented diagnostics that describe changes in the model parameters (measured through the Mahalanobis distance), performance (objective function displacement), and predictions (mean and maximum streamflow). These influence diagnostics are applied to two case studies: a stage/discharge rating curve model, and a conceptual rainfall-runoff model (GR4J). Removing a single data point from the calibration resulted in differences to mean flow predictions of up to 6% for the rating curve model, and differences to mean and maximum flow predictions of up to 10% and 17%, respectively, for the hydrological model. When using the Nash-Sutcliffe efficiency in calibration, the computationally cheaper Cook's distance metrics produce similar results to the case-deletion metrics at a fraction of the computational cost. However, Cook's distance is adapted from linear regression, with inherent assumptions about the data, and is therefore less flexible than case deletion. Influential point detection diagnostics show great potential to improve current hydrological modelling practices by identifying highly influential data points. The findings of this
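
    A minimal sketch of the cheaper analytical diagnostic named above, Cook's distance, computed for each point of a simple log-log rating-curve-style regression; the stage/discharge pairs are illustrative, with the last point deliberately made suspect.

        import numpy as np
        import statsmodels.api as sm

        stage = np.array([0.4, 0.7, 1.1, 1.5, 2.0, 2.6, 3.1])         # m
        discharge = np.array([1.2, 3.0, 7.5, 14.0, 25.0, 42.0, 95.0]) # m^3/s

        X = sm.add_constant(np.log(stage))
        fit = sm.OLS(np.log(discharge), X).fit()

        cooks_d, _ = fit.get_influence().cooks_distance
        for i, d in enumerate(cooks_d):
            print(f"point {i}: Cook's distance = {d:.3f}")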

  16. Calibration and validation of a general infiltration model

    NASA Astrophysics Data System (ADS)

    Mishra, Surendra Kumar; Ranjan Kumar, Shashi; Singh, Vijay P.

    1999-08-01

    A general infiltration model proposed by Singh and Yu (1990) was calibrated and validated using a split sampling approach for 191 sets of infiltration data observed in the states of Minnesota and Georgia in the USA. Of the five model parameters, fc (the final infiltration rate), So (the available storage space) and exponent n were found to be more predictable than the other two parameters: m (exponent) and a (proportionality factor). A critical examination of the general model revealed that it is related to the Soil Conservation Service (1956) curve number (SCS-CN) method and its parameter So is equivalent to the potential maximum retention of the SCS-CN method and is, in turn, found to be a function of soil sorptivity and hydraulic conductivity. The general model was found to describe infiltration rate with time varying curve number.
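
    A minimal sketch of the SCS-CN relationship that the general model is shown to reduce to: potential maximum retention S from the curve number CN, and direct runoff Q from rainfall P (all in mm); the rainfall and CN values are illustrative.

        def scs_cn_runoff(p_mm: float, cn: float, ia_ratio: float = 0.2) -> float:
            s = 25400.0 / cn - 254.0      # potential maximum retention (mm)
            ia = ia_ratio * s             # initial abstraction
            if p_mm <= ia:
                return 0.0
            return (p_mm - ia) ** 2 / (p_mm - ia + s)

        print(scs_cn_runoff(p_mm=60.0, cn=75.0))   # direct runoff in mm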

  17. Results of the 1996 JPL Balloon Flight Solar Cell Calibration Program

    NASA Technical Reports Server (NTRS)

    Anspaugh, B. E.; Weiss, R. S.

    1996-01-01

    The 1996 solar cell calibration balloon flight campaign was completed, with the first flight on June 30, 1996 and a second flight on August 8, 1996. All objectives of the flight program were met. Sixty-four modules were carried to an altitude of 120,000 ft (36.6 km). Full I-V curves were measured on 22 of these modules, and output at a fixed load was measured on 42 modules. These data were corrected to 28 °C and to 1 AU (1.496 x 10^8 km). The calibrated cells have been returned to the participants and can now be used as reference standards in simulator testing of cells and arrays.

  18. MIRO Continuum Calibration for Asteroid Mode

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon

    2011-01-01

    MIRO (Microwave Instrument for the Rosetta Orbiter) is a lightweight, uncooled, dual-frequency heterodyne radiometer. The MIRO encountered asteroid Steins in 2008, and during the flyby, MIRO used the Asteroid Mode to measure the emission spectrum of Steins. The Asteroid Mode is one of the seven modes of the MIRO operation, and is designed to increase the length of time that a spectral line is in the MIRO pass-band during a flyby of an object. This software is used to calibrate the continuum measurement of Steins emission power during the asteroid flyby. The MIRO raw measurement data need to be calibrated in order to obtain physically meaningful data. This software calibrates the MIRO raw measurements in digital units to the brightness temperature in Kelvin. The software uses two calibration sequences that are included in the Asteroid Mode. One sequence is at the beginning of the mode, and the other at the end. The first six frames contain the measurement of a cold calibration target, while the last six frames measure a warm calibration target. The targets have known temperatures and are used to provide reference power and gain, which can be used to convert MIRO measurements into brightness temperature. The software was developed to calibrate MIRO continuum measurements from Asteroid Mode. The software determines the relationship between the raw digital unit measured by MIRO and the equivalent brightness temperature by analyzing data from calibration frames. The found relationship is applied to non-calibration frames, which are the measurements of an object of interest such as asteroids and other planetary objects that MIRO encounters during its operation. This software characterizes the gain fluctuations statistically and determines which method to estimate gain between calibration frames. For example, if the fluctuation is lower than a statistically significant level, the averaging method is used to estimate the gain between the calibration frames. If the
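
    A minimal sketch of the two-point (cold/warm target) calibration described above: the calibration frames yield a gain and offset that convert raw counts into brightness temperature. The counts and target temperatures are illustrative, and the statistical gain-fluctuation test is omitted.

        import numpy as np

        t_cold, t_warm = 80.0, 300.0     # assumed target temperatures (K)
        counts_cold = np.array([1012, 1009, 1015, 1011, 1013, 1010], dtype=float)  # first six frames
        counts_warm = np.array([2490, 2487, 2493, 2489, 2491, 2488], dtype=float)  # last six frames

        gain = (counts_warm.mean() - counts_cold.mean()) / (t_warm - t_cold)  # counts per K
        offset = counts_cold.mean() - gain * t_cold

        def to_brightness_temperature(raw_counts):
            return (raw_counts - offset) / gain

        print(to_brightness_temperature(1800.0))   # K, for a non-calibration frame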

  19. A combined microphone and camera calibration technique with application to acoustic imaging.

    PubMed

    Legg, Mathew; Bradley, Stuart

    2013-10-01

    We present a calibration technique for an acoustic imaging microphone array, combined with a digital camera. Computer vision and acoustic time of arrival data are used to obtain microphone coordinates in the camera reference frame. Our new method allows acoustic maps to be plotted onto the camera images without the need for additional camera alignment or calibration. Microphones and cameras may be placed in an ad-hoc arrangement and, after calibration, the coordinates of the microphones are known in the reference frame of a camera in the array. No prior knowledge of microphone positions, inter-microphone spacings, or air temperature is required. This technique is applied to a spherical microphone array and a mean difference of 3 mm was obtained between the coordinates obtained with this calibration technique and those measured using a precision mechanical method.

  20. Systematic Calibration for a Backpacked Spherical Photogrammetry Imaging System

    NASA Astrophysics Data System (ADS)

    Rau, J. Y.; Su, B. W.; Hsiao, K. W.; Jhan, J. P.

    2016-06-01

    A spherical camera can observe the environment with almost a 720-degree field of view in one shot, which is useful for augmented reality, environment documentation, and mobile mapping applications. This paper aims to develop a spherical photogrammetry imaging system for the purpose of 3D measurement through a backpacked mobile mapping system (MMS). The equipment used includes a Ladybug-5 spherical camera, a tactical-grade positioning and orientation system (POS), i.e. SPAN-CPT, an odometer, etc. This research aims to directly apply the photogrammetric space intersection technique for 3D mapping from a spherical image stereo-pair. For this purpose, several systematic calibration procedures are required, including lens distortion calibration, relative orientation calibration, boresight calibration for direct georeferencing, and spherical image calibration. Lens distortion is severe in the Ladybug-5 camera's six original images. For spherical image mosaicking from these six original images, we propose using their relative orientation and correcting their lens distortion at the same time. However, the constructed spherical image still contains systematic error, which will reduce the 3D measurement accuracy. Later, for direct georeferencing purposes, we establish a ground control field for boresight/lever-arm calibration. Then, we can apply the calibrated parameters to obtain the exterior orientation parameters (EOPs) of all spherical images. In the end, the 3D positioning accuracy after space intersection is evaluated, including EOPs obtained by the structure-from-motion method.

  1. Calibration of Viking imaging system pointing, image extraction, and optical navigation measure

    NASA Technical Reports Server (NTRS)

    Breckenridge, W. G.; Fowler, J. W.; Morgan, E. M.

    1977-01-01

    Pointing control and knowledge accuracy of Viking Orbiter science instruments is controlled by the scan platform. Calibration of the scan platform and the imaging system was accomplished through mathematical models. The calibration procedure and results obtained for the two Viking spacecraft are described. Included are both ground and in-flight scan platform calibrations, and the additional calibrations unique to optical navigation.

  2. Rotation Period of Blanco 1 Members from KELT Light Curves: Comparing Rotation-Ages to Various Stellar Chronometers at 100 Myr

    NASA Astrophysics Data System (ADS)

    Cargile, Phillip; James, D. J.; Pepper, J.; Kuhn, R.; Siverd, R. J.; Stassun, K. G.

    2012-01-01

    The age of a star is one of its most fundamental properties, and yet tragically it is also the one property that is not directly measurable in observations. We must therefore rely on age estimates based on mostly model-dependent or empirical methods. Moreover, there remains a critical need for direct comparison of different age-dating techniques using the same stars analyzed in a consistent fashion. One commonly employed chronometer uses stellar rotation rates to measure stellar ages, i.e., gyrochronology. Although this technique is one of the better-understood chronometers, its calibration relies heavily on the solar datum, as well as on benchmark open clusters with reliable ages, and it also lacks a comprehensive comparative analysis against other stellar chronometers. The age of the nearby (? pc) open cluster Blanco 1 has been estimated using various techniques, including being one of only 7 clusters with an LDB age measurement, making it a unique and powerful comparative laboratory for stellar chronometry, including gyrochronology. Here, we present preliminary results from our light-curve analysis of solar-type stars in Blanco 1 in order to identify and measure rotation periods of cluster members. The light-curve data were obtained during the engineering and calibration phase of the KELT-South survey. The large area on the sky and low number of contaminating field stars make Blanco 1 an ideal target for the extremely wide field and large pixel scale of the KELT telescope. We apply a period-finding technique using the Lomb-Scargle periodogram and FAP statistics to measure significant rotation periods in the KELT-South light curves for confirmed Blanco 1 members. These new rotation periods allow us to test and inform rotation evolution models for stellar ages at ? Myr, determine a rotation-age for Blanco 1 using gyrochronology, and compare this rotation-age to other age measurements for this cluster.
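
    A minimal sketch of the period search described above: a Lomb-Scargle periodogram and its false-alarm probability for an unevenly sampled light curve (astropy). The synthetic light curve stands in for the KELT photometry.

        import numpy as np
        from astropy.timeseries import LombScargle

        rng = np.random.default_rng(1)
        t = np.sort(rng.uniform(0, 60, 500))        # days, irregular sampling
        true_period = 4.3                           # days (assumed)
        mag = 0.02 * np.sin(2 * np.pi * t / true_period) + rng.normal(0, 0.005, t.size)

        ls = LombScargle(t, mag)
        frequency, power = ls.autopower(maximum_frequency=5.0)
        best_period = 1.0 / frequency[np.argmax(power)]
        fap = ls.false_alarm_probability(power.max())
        print(f"best period = {best_period:.2f} d, FAP = {fap:.2e}")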

  3. Spectral irradiance calibration in the infrared. I - Ground-based and IRAS broadband calibrations

    NASA Technical Reports Server (NTRS)

    Cohen, Martin; Walker, Russell G.; Barlow, Michael J.; Deacon, John R.

    1992-01-01

    Absolutely calibrated versions of realistic model atmosphere calculations for Sirius and Vega by Kurucz (1991) are presented and used as a basis to offer a new absolute calibration of infrared broad and narrow filters. In-band fluxes for Vega are obtained and defined to be zero magnitude at all wavelengths shortward of 20 microns. Existing infrared photometry is used differentially to establish an absolute scale of the new Sirius model, yielding an angular diameter within 1 sigma of the mean determined interferometrically by Hanbury Brown et al. (1974). The use of Sirius as a primary infrared stellar standard beyond the 20 micron region is suggested. Isophotal wavelengths and monochromatic flux densities for both Vega and Sirius are tabulated.

  4. SCALA: In situ calibration for integral field spectrographs

    NASA Astrophysics Data System (ADS)

    Lombardo, S.; Küsters, D.; Kowalski, M.; Aldering, G.; Antilogus, P.; Bailey, S.; Baltay, C.; Barbary, K.; Baugh, D.; Bongard, S.; Boone, K.; Buton, C.; Chen, J.; Chotard, N.; Copin, Y.; Dixon, S.; Fagrelius, P.; Feindt, U.; Fouchez, D.; Gangler, E.; Hayden, B.; Hillebrandt, W.; Hoffmann, A.; Kim, A. G.; Leget, P.-F.; McKay, L.; Nordin, J.; Pain, R.; Pécontal, E.; Pereira, R.; Perlmutter, S.; Rabinowitz, D.; Reif, K.; Rigault, M.; Rubin, D.; Runge, K.; Saunders, C.; Smadja, G.; Suzuki, N.; Taubenberger, S.; Tao, C.; Thomas, R. C.; Nearby Supernova Factory

    2017-11-01

    Aims: The scientific yield of current and future optical surveys is increasingly limited by systematic uncertainties in the flux calibration. This is the case for type Ia supernova (SN Ia) cosmology programs, where an improved calibration directly translates into improved cosmological constraints. Current methodology rests on models of stars. Here we aim to obtain flux calibration that is traceable to state-of-the-art detector-based calibration. Methods: We present the SNIFS Calibration Apparatus (SCALA), a color (relative) flux calibration system developed for the SuperNova integral field spectrograph (SNIFS), operating at the University of Hawaii 2.2 m (UH 88) telescope. Results: By comparing the color trend of the illumination generated by SCALA during two commissioning runs, and to previous laboratory measurements, we show that we can determine the light emitted by SCALA with a long-term repeatability better than 1%. We describe the calibration procedure necessary to control for system aging. We present measurements of the SNIFS throughput as estimated by SCALA observations. Conclusions: The SCALA calibration unit is now fully deployed at the UH 88 telescope, and with it color-calibration between 4000 Å and 9000 Å is stable at the percent level over a one-year baseline.

  5. Calibration procedure for Slocum glider deployed optical instruments.

    PubMed

    Cetinić, Ivona; Toro-Farmer, Gerardo; Ragan, Matthew; Oberg, Carl; Jones, Burton H

    2009-08-31

    Recent developments in the field of autonomous underwater vehicles allow the wide usage of these platforms as part of scientific experiments, monitoring campaigns, and more. The vehicles are often equipped with sensors measuring temperature, conductivity, chlorophyll a fluorescence (Chl a), colored dissolved organic matter (CDOM) fluorescence, phycoerythrin (PE) fluorescence, and the spectral volume scattering function at 117 degrees, providing users with high-resolution, real-time data. However, calibration of these instruments can be problematic. Most in situ calibrations are performed by deploying complementary instrument packages or water samplers in the proximity of the glider. Laboratory calibrations of the mounted sensors are difficult due to the placement of the instruments within the body of the vehicle. For the laboratory calibrations of the Slocum glider instruments we developed a small calibration chamber where we can perform precise calibrations of the optical instruments aboard our glider, as well as sensors from other deployment platforms. These procedures enable us to obtain pre- and post-deployment calibrations for optical fluorescence instruments, which may differ due to biofouling and other physical damage that can occur during long-term glider deployments. We found that biofouling caused significant changes in the calibration scaling factors of the fluorescence sensors, suggesting the need for consistent and repetitive calibrations for gliders as proposed in this paper.

  6. Advancing Absolute Calibration for JWST and Other Applications

    NASA Astrophysics Data System (ADS)

    Rieke, George; Bohlin, Ralph; Boyajian, Tabetha; Carey, Sean; Casagrande, Luca; Deustua, Susana; Gordon, Karl; Kraemer, Kathleen; Marengo, Massimo; Schlawin, Everett; Su, Kate; Sloan, Greg; Volk, Kevin

    2017-10-01

    We propose to exploit the unique optical stability of the Spitzer telescope, along with that of IRAC, to (1) transfer the accurate absolute calibration obtained with MSX on very bright stars directly to two reference stars within the dynamic range of the JWST imagers (and of other modern instrumentation); (2) establish a second accurate absolute calibration based on the absolutely calibrated spectrum of the sun, transferred onto the astronomical system via alpha Cen A; and (3) provide accurate infrared measurements for the 11 (of 15) highest priority stars with no such data but with accurate interferometrically measured diameters, allowing us to optimize determinations of effective temperatures using the infrared flux method and thus to extend the accurate absolute calibration spectrally. This program is integral to plans for an accurate absolute calibration of JWST and will also provide a valuable Spitzer legacy.

  7. Absolute calibration of sniffer probes on Wendelstein 7-X

    NASA Astrophysics Data System (ADS)

    Moseev, D.; Laqua, H. P.; Marsen, S.; Stange, T.; Braune, H.; Erckmann, V.; Gellert, F.; Oosterbeek, J. W.

    2016-08-01

    Here we report the first measurements of the power levels of stray radiation in the vacuum vessel of Wendelstein 7-X using absolutely calibrated sniffer probes. The absolute calibration is achieved by using calibrated sources of stray radiation and the implicit measurement of the quality factor of the Wendelstein 7-X empty vacuum vessel. Normalized absolute calibration coefficients agree with the cross-calibration coefficients that are obtained by the direct measurements, indicating that the measured absolute calibration coefficients and stray radiation levels in the vessel are valid. Close to the launcher, the stray radiation in the empty vessel reaches power levels up to 340 kW/m2 per MW injected beam power. Furthest away from the launcher, i.e., half a toroidal turn, still 90 kW/m2 per MW injected beam power is measured.

  8. Absolute calibration of sniffer probes on Wendelstein 7-X.

    PubMed

    Moseev, D; Laqua, H P; Marsen, S; Stange, T; Braune, H; Erckmann, V; Gellert, F; Oosterbeek, J W

    2016-08-01

    Here we report the first measurements of the power levels of stray radiation in the vacuum vessel of Wendelstein 7-X using absolutely calibrated sniffer probes. The absolute calibration is achieved by using calibrated sources of stray radiation and the implicit measurement of the quality factor of the Wendelstein 7-X empty vacuum vessel. Normalized absolute calibration coefficients agree with the cross-calibration coefficients that are obtained by the direct measurements, indicating that the measured absolute calibration coefficients and stray radiation levels in the vessel are valid. Close to the launcher, the stray radiation in the empty vessel reaches power levels up to 340 kW/m² per MW injected beam power. Furthest away from the launcher, i.e., half a toroidal turn, still 90 kW/m² per MW injected beam power is measured.

  9. Comparison of infusion pumps calibration methods

    NASA Astrophysics Data System (ADS)

    Batista, Elsa; Godinho, Isabel; do Céu Ferreira, Maria; Furtado, Andreia; Lucas, Peter; Silva, Claudia

    2017-12-01

    Nowadays, several types of infusion pump are commonly used for drug delivery, such as syringe pumps and peristaltic pumps. These instruments present different measuring features and capacities according to their use and therapeutic application. In order to ensure the metrological traceability of these flow and volume measuring equipment, it is necessary to use suitable calibration methods and standards. Two different calibration methods can be used to determine the flow error of infusion pumps. One is the gravimetric method, considered as a primary method, commonly used by National Metrology Institutes. The other calibration method, a secondary method, relies on an infusion device analyser (IDA) and is typically used by hospital maintenance offices. The suitability of the IDA calibration method was assessed by testing several infusion instruments at different flow rates using the gravimetric method. In addition, a measurement comparison between Portuguese Accredited Laboratories and hospital maintenance offices was performed under the coordination of the Portuguese Institute for Quality, the National Metrology Institute. The obtained results were directly related to the used calibration method and are presented in this paper. This work has been developed in the framework of the EURAMET projects EMRP MeDD and EMPIR 15SIP03.
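
    A minimal sketch of the gravimetric (primary) method mentioned above: weigh the delivered liquid over a timed interval and compare the resulting volumetric flow with the pump's set flow rate. The numbers are illustrative, and buoyancy/evaporation corrections are omitted.

        mass_g = 4.98                # collected mass of water (g)
        duration_min = 30.0          # collection time (min)
        density_g_per_ml = 0.9982    # water density near 20 °C

        volume_ml = mass_g / density_g_per_ml
        measured_flow = volume_ml * 60.0 / duration_min   # mL/h
        set_flow = 10.0                                    # mL/h set on the pump (assumed)
        error_percent = 100.0 * (measured_flow - set_flow) / set_flow
        print(f"measured {measured_flow:.2f} mL/h, error {error_percent:+.1f}%")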

  10. Calibration of z-axis linearity for arbitrary optical topography measuring instruments

    NASA Astrophysics Data System (ADS)

    Eifler, Matthias; Seewig, Jörg; Hering, Julian; von Freymann, Georg

    2015-05-01

    The calibration of the height axis of optical topography measurement instruments is essential for reliable topography measurements. A state-of-the-art technique for the calibration of the linearity and amplification of the z-axis is the use of step height artefacts. However, a proper calibration requires numerous step heights at different positions within the measurement range. The procedure is extensive and uses artificial surface structures that are not related to real measurement tasks. Given these limitations, approaches should be developed that work for arbitrary topography measurement devices and require little effort. Hence, we propose calibration artefacts which are based on the 3D Abbott curve and image desired surface characteristics. Further, real geometric structures are used as the initial point of the calibration artefact. Based on these considerations, an algorithm is introduced which transforms an arbitrary measured surface into a measurement artefact for z-axis linearity. The method works both for profiles and topographies. To account for the effects of manufacturing, measurement, and evaluation, an iterative approach is chosen. The mathematical impact of these processes can be calculated with morphological signal processing. The artefact is manufactured with 3D laser lithography and characterized with different optical measurement devices. The introduced calibration routine can calibrate the entire z-axis range within one measurement and minimizes the required effort. With the results it is possible to locate potential linearity deviations and to adjust the z-axis. Results of different optical measurement principles are compared in order to evaluate the capabilities of the new artefact.

  11. Principal curve detection in complicated graph images

    NASA Astrophysics Data System (ADS)

    Liu, Yuncai; Huang, Thomas S.

    2001-09-01

    Finding principal curves in an image is an important low-level processing step in computer vision and pattern recognition. Principal curves are those curves in an image that represent boundaries or contours of objects of interest. In general, a principal curve should be smooth, with a certain length constraint, and should allow either smooth or sharp turning. In this paper, we present a method that can efficiently detect principal curves in complicated map images. For a given feature image, obtained from edge detection of an intensity image or a thinning operation on a pictorial map image, the feature image is first converted to a graph representation. In the graph image domain, the operation of principal curve detection is performed to identify useful image features. Shortest-path and directional-deviation schemes are used in our algorithm of principal curve detection, which proves to be very efficient when working with real graph images.

  12. Importance of Calibration Method in Central Blood Pressure for Cardiac Structural Abnormalities.

    PubMed

    Negishi, Kazuaki; Yang, Hong; Wang, Ying; Nolan, Mark T; Negishi, Tomoko; Pathan, Faraz; Marwick, Thomas H; Sharman, James E

    2016-09-01

    Central blood pressure (CBP) independently predicts cardiovascular risk, but calibration methods may affect the accuracy of central systolic blood pressure (CSBP). Standard central systolic blood pressure (Stan-CSBP) from peripheral waveforms is usually derived with calibration using brachial SBP and diastolic BP (DBP). However, calibration using oscillometric mean arterial pressure (MAP) and DBP (MAP-CSBP) is purported to provide a more accurate representation of true invasive CSBP. This study sought to determine which derived CSBP could more accurately discriminate cardiac structural abnormalities. A total of 349 community-based patients with risk factors (71 ± 5 years, 161 males) had CSBP measured by brachial oscillometry (Mobil-O-Graph, IEM GmbH, Stolberg, Germany) using 2 calibration methods: MAP-CSBP and Stan-CSBP. Left ventricular hypertrophy (LVH) and left atrial dilatation (LAD) were measured based on standard guidelines. MAP-CSBP was higher than Stan-CSBP (149±20 vs. 128±15 mm Hg, P < 0.0001). Although they were modestly correlated (rho = 0.74, P < 0.001), the Bland-Altman plot demonstrated a large bias (21 mm Hg) and wide limits of agreement (24 mm Hg). In receiver operating characteristic (ROC) curve analyses, MAP-CSBP significantly better discriminated LVH compared with Stan-CSBP (area under the curve (AUC) 0.66 vs. 0.59, P = 0.0063) and brachial SBP (0.62, P = 0.027). Continuous net reclassification improvement (NRI) (P < 0.001) and integrated discrimination improvement (IDI) (P < 0.001) corroborated the superior discrimination of LVH by MAP-CSBP. Similarly, MAP-CSBP better distinguished LAD than Stan-CSBP (AUC 0.63 vs. 0.56, P = 0.005) and conventional brachial SBP (0.58, P = 0.006), whereas Stan-CSBP provided no better discrimination than conventional brachial BP (P = 0.09). CSBP is calibration dependent, and when oscillometric MAP and DBP are used, the derived CSBP is a better discriminator for cardiac structural abnormalities.

  13. Calibration of GafChromic XR-RV3 radiochromic film for skin dose measurement using standardized x-ray spectra and a commercial flatbed scanner

    PubMed Central

    McCabe, Bradley P.; Speidel, Michael A.; Pike, Tina L.; Van Lysel, Michael S.

    2011-01-01

    Purpose: In this study, newly formulated XR-RV3 GafChromic® film was calibrated with National Institute of Standards and Technology (NIST) traceability for measurement of patient skin dose during fluoroscopically guided interventional procedures. Methods: The film was calibrated free-in-air to air kerma levels between 15 and 1100 cGy using four moderately filtered x-ray beam qualities (60, 80, 100, and 120 kVp). The calibration films were scanned with a commercial flatbed document scanner. Film reflective density-to-air kerma calibration curves were constructed for each beam quality, with both the orange and white sides facing the x-ray source. A method to correct for nonuniformity in scanner response (up to 25% depending on position) was developed to enable dose measurement with large films. The response of XR-RV3 film under patient backscattering conditions was examined using on-phantom film exposures and Monte Carlo simulations. Results: The response of XR-RV3 film to a given air kerma depended on kVp and film orientation. For a 200 cGy air kerma exposure with the orange side of the film facing the source, the film response increased by 20% from 60 to 120 kVp. At 500 cGy, the increase was 12%. When 500 cGy exposures were performed with the white side facing the x-ray source, the film response increased by 4.0% (60 kVp) to 9.9% (120 kVp) compared to the orange-facing orientation. On-phantom film measurements and Monte Carlo simulations show that using a NIST-traceable free-in-air calibration curve to determine air kerma in the presence of backscatter results in an error from 2% up to 8% depending on beam quality. The combined uncertainty in the air kerma measurement from the calibration curves and scanner nonuniformity correction was ±7.1% (95% C.I.). The film showed notable stability. Calibrations of film and scanner separated by 1 yr differed by 1.0%. Conclusions: XR-RV3 radiochromic film response to a given air kerma shows dependence on beam quality and film

  14. Calibration of GafChromic XR-RV3 radiochromic film for skin dose measurement using standardized x-ray spectra and a commercial flatbed scanner.

    PubMed

    McCabe, Bradley P; Speidel, Michael A; Pike, Tina L; Van Lysel, Michael S

    2011-04-01

    In this study, newly formulated XR-RV3 GafChromic film was calibrated with National Institute of Standards and Technology (NIST) traceability for measurement of patient skin dose during fluoroscopically guided interventional procedures. The film was calibrated free-in-air to air kerma levels between 15 and 1100 cGy using four moderately filtered x-ray beam qualities (60, 80, 100, and 120 kVp). The calibration films were scanned with a commercial flatbed document scanner. Film reflective density-to-air kerma calibration curves were constructed for each beam quality, with both the orange and white sides facing the x-ray source. A method to correct for nonuniformity in scanner response (up to 25% depending on position) was developed to enable dose measurement with large films. The response of XR-RV3 film under patient backscattering conditions was examined using on-phantom film exposures and Monte Carlo simulations. The response of XR-RV3 film to a given air kerma depended on kVp and film orientation. For a 200 cGy air kerma exposure with the orange side of the film facing the source, the film response increased by 20% from 60 to 120 kVp. At 500 cGy, the increase was 12%. When 500 cGy exposures were performed with the white side facing the x-ray source, the film response increased by 4.0% (60 kVp) to 9.9% (120 kVp) compared to the orange-facing orientation. On-phantom film measurements and Monte Carlo simulations show that using a NIST-traceable free-in-air calibration curve to determine air kerma in the presence of backscatter results in an error from 2% up to 8% depending on beam quality. The combined uncertainty in the air kerma measurement from the calibration curves and scanner nonuniformity correction was +/- 7.1% (95% C.I.). The film showed notable stability. Calibrations of film and scanner separated by 1 yr differed by 1.0%. XR-RV3 radiochromic film response to a given air kerma shows dependence on beam quality and film orientation. The presence of
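
    Records 13 and 14 describe constructing a reflective density-to-air kerma calibration curve for each beam quality and then inverting it to obtain dose. A hedged sketch of that general workflow follows; the functional form, data points, and parameter values are assumptions for illustration, not the published fit.

    ```python
    # Illustrative sketch only: fit a density-to-air-kerma calibration curve for one
    # beam quality, then invert it numerically to read air kerma from a measured density.
    import numpy as np
    from scipy.optimize import brentq, curve_fit

    air_kerma = np.array([15, 50, 100, 200, 400, 700, 1100], dtype=float)   # cGy
    net_density = np.array([0.05, 0.14, 0.24, 0.38, 0.55, 0.70, 0.83])      # net reflective density

    def response(k, a, b, c):
        # saturating term plus a small linear term: density grows sub-linearly with air kerma
        return a * k / (b + k) + c * k

    popt, _ = curve_fit(response, air_kerma, net_density, p0=(1.0, 500.0, 1e-4))

    # invert the fitted curve numerically to read air kerma from a measured density
    measured_density = 0.45
    kerma = brentq(lambda k: response(k, *popt) - measured_density, 1.0, 5000.0)
    print(f"estimated air kerma ~ {kerma:.0f} cGy")
    ```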

  15. Characteristics of time-activity curves obtained from dynamic 11C-methionine PET in common primary brain tumors.

    PubMed

    Nomura, Yuichi; Asano, Yoshitaka; Shinoda, Jun; Yano, Hirohito; Ikegame, Yuka; Kawasaki, Tomohiro; Nakayama, Noriyuki; Maruyama, Takashi; Muragaki, Yoshihiro; Iwama, Toru

    2018-07-01

    The aim of this study was to assess whether dynamic PET with 11C-methionine (MET) (MET-PET) is useful in the diagnosis of brain tumors. One hundred sixty patients with brain tumors (139 gliomas, 9 meningiomas, 4 hemangioblastomas and 8 primary central nervous system lymphomas [PCNSL]) underwent dynamic MET-PET with a 3-dimensional acquisition mode, and the maximum tumor MET-standardized uptake value (MET-SUV) was measured consecutively to construct a time-activity curve (TAC). Furthermore, receiver operating characteristic (ROC) curves were generated from the time-to-peak (TTP) and the slope of the curve in the late phase (SLOPE). The TAC patterns of MET-SUVs (MET-TACs) could be divided into four characteristic types when MET dynamics were analyzed by dividing the MET-TAC into three phases. MET-SUVs were significantly higher in early and late phases in glioblastoma compared to anaplastic astrocytoma, diffuse astrocytoma and the normal frontal cortex (P < 0.05). The SLOPE in the late phase was significantly lower in tumors that included an oligodendroglial component compared to astrocytic tumors (P < 0.001). When we set the cutoff of the SLOPE in the late phase to -0.04 h-1 for the differentiation of tumors that included an oligodendroglial component from astrocytic tumors, the diagnostic performance was 74.2% sensitivity and 64.9% specificity. The area under the ROC curve was 0.731. The results of this study show that quantification of the MET-TAC for each brain tumor identified by a dynamic MET-PET study could be helpful in the non-invasive discrimination of brain tumor subtypes, in particular gliomas.
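
    The late-phase SLOPE used above is simply the fitted linear rate of change of the MET-SUV over the late portion of the time-activity curve. A small illustration follows (Python); the time points, SUV values, and the interpretation text are assumptions layered on the published -0.04 h-1 cutoff.

    ```python
    # Sketch under assumed data: estimate the late-phase SLOPE of a MET time-activity
    # curve by linear regression and compare it with the reported cutoff.
    import numpy as np

    t_hours = np.array([0.42, 0.50, 0.58, 0.67, 0.75])   # hypothetical late-phase time points
    suv = np.array([3.10, 3.05, 3.02, 2.99, 2.95])        # hypothetical maximum tumor MET-SUVs

    slope, intercept = np.polyfit(t_hours, suv, 1)        # SUV change per hour
    print(f"late-phase SLOPE = {slope:.3f} h-1")
    if slope < -0.04:
        print("below the cutoff: the pattern reported for tumors with an oligodendroglial component")
    else:
        print("above the cutoff: the pattern reported for astrocytic tumors")
    ```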

  16. 7 CFR 42.141 - Obtaining Operating Characteristic (OC) curve information for skip lot sampling and inspection.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ..., and read the new Percent of Lots Expected to be Accepted, Pas, which results when using these skip lot... point, proceed vertically to the curve and then horizontally to the left to the vertical axis. From this...

  17. 7 CFR 42.141 - Obtaining Operating Characteristic (OC) curve information for skip lot sampling and inspection.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ..., and read the new Percent of Lots Expected to be Accepted, Pas, which results when using these skip lot... point, proceed vertically to the curve and then horizontally to the left to the vertical axis. From this...

  18. Polarimetric SAR calibration experiment using active radar calibrators

    NASA Astrophysics Data System (ADS)

    Freeman, Anthony; Shen, Yuhsyen; Werner, Charles L.

    1990-03-01

    Active radar calibrators are used to derive both the amplitude and phase characteristics of a multichannel polarimetric SAR from the complex image data. Results are presented from an experiment carried out using the NASA/JPL DC-8 aircraft SAR over a calibration site at Goldstone, California. As part of the experiment, polarimetric active radar calibrators (PARCs) with adjustable polarization signatures were deployed. Experimental results demonstrate that the PARCs can be used to calibrate polarimetric SAR images successfully. Restrictions on the application of the PARC calibration procedure are discussed.

  19. Polarimetric SAR calibration experiment using active radar calibrators

    NASA Technical Reports Server (NTRS)

    Freeman, Anthony; Shen, Yuhsyen; Werner, Charles L.

    1990-01-01

    Active radar calibrators are used to derive both the amplitude and phase characteristics of a multichannel polarimetric SAR from the complex image data. Results are presented from an experiment carried out using the NASA/JPL DC-8 aircraft SAR over a calibration site at Goldstone, California. As part of the experiment, polarimetric active radar calibrators (PARCs) with adjustable polarization signatures were deployed. Experimental results demonstrate that the PARCs can be used to calibrate polarimetric SAR images successfully. Restrictions on the application of the PARC calibration procedure are discussed.

  20. Calibration of hydrometers

    NASA Astrophysics Data System (ADS)

    Lorefice, Salvatore; Malengo, Andrea

    2006-10-01

    After a brief description of the different methods employed in periodic calibration of hydrometers used in most cases to measure the density of liquids in the range between 500 kg m-3 and 2000 kg m-3, particular emphasis is given to the multipoint procedure based on hydrostatic weighing, also known as Cuckow's method. The features of the calibration apparatus and the procedure used at the INRiM (formerly IMGC-CNR) density laboratory have been considered to assess all relevant contributions involved in the calibration of different kinds of hydrometers. The uncertainty is strongly dependent on the kind of hydrometer; in particular, the results highlight the importance of the density of the reference buoyant liquid, the temperature of calibration and the skill of the operator in reading the scale to the overall assessment of the uncertainty. It is also interesting to realize that for high-resolution hydrometers (division of 0.1 kg m-3), the uncertainty contribution of the density of the reference liquid is the main source of the total uncertainty, but its contribution falls to about 50% for hydrometers with a division of 0.5 kg m-3 and becomes somewhat negligible for hydrometers with a division of 1 kg m-3, for which the reading uncertainty is the predominant part of the total uncertainty. At present the best INRiM result is obtained with commercially available hydrometers having a scale division of 0.1 kg m-3, for which the relative uncertainty is about 12 × 10-6.
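
    The dominance pattern described above (reference-liquid density dominating for 0.1 kg m-3 divisions, reading uncertainty dominating for 1 kg m-3 divisions) follows directly from combining the contributions in quadrature. A toy illustration with invented uncertainty magnitudes, not INRiM's actual budget:

    ```python
    # Toy uncertainty budget: combine the main contributions in quadrature and show
    # which term dominates for different scale divisions (all magnitudes invented).
    import math

    def combined_u(u_ref_liquid, u_temperature, u_reading):
        # standard uncertainties combined in quadrature (all in kg m-3)
        return math.sqrt(u_ref_liquid**2 + u_temperature**2 + u_reading**2)

    u_ref, u_temp = 0.020, 0.005
    for division, u_read in [(0.1, 0.008), (0.5, 0.020), (1.0, 0.080)]:
        u_total = combined_u(u_ref, u_temp, u_read)
        ref_share = (u_ref / u_total) ** 2
        print(f"division {division} kg m-3: u = {u_total:.3f} kg m-3, "
              f"reference-liquid share = {ref_share:.0%}")
    ```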

  1. Hybrid dynamic radioactive particle tracking (RPT) calibration technique for multiphase flow systems

    NASA Astrophysics Data System (ADS)

    Khane, Vaibhav; Al-Dahhan, Muthanna H.

    2017-04-01

    The radioactive particle tracking (RPT) technique has been utilized to measure three-dimensional hydrodynamic parameters for multiphase flow systems. An analytical solution to the inverse problem of the RPT technique, i.e. finding the instantaneous tracer positions based upon instantaneous counts received in the detectors, is not possible. Therefore, a calibration to obtain a counts-distance map is needed. The conventional RPT calibration method has major shortcomings that limit its applicability in practice. In this work, the design and development of a novel dynamic RPT calibration technique are carried out to overcome the shortcomings of the conventional RPT calibration method. The dynamic RPT calibration technique has been implemented around a test reactor 1 foot in diameter and 1 foot in height using Cobalt-60 as the isotope tracer particle. Two sets of experiments have been carried out to test the capability of the novel dynamic RPT calibration. In the first set of experiments, a manual calibration apparatus has been used to hold a tracer particle at known static locations. In the second set of experiments, the tracer particle was moved vertically downwards along a straight line path in a controlled manner. The reconstructed tracer particle positions were compared with the actual known positions and the reconstruction errors were estimated. The obtained results revealed that the dynamic RPT calibration technique is capable of identifying tracer particle positions with a reconstruction error between 1 and 5.9 mm for the conditions studied, which could be improved depending on various factors outlined here.

  2. Radiometric calibration of hyper-spectral imaging spectrometer based on optimizing multi-spectral band selection

    NASA Astrophysics Data System (ADS)

    Sun, Li-wei; Ye, Xin; Fang, Wei; He, Zhen-lei; Yi, Xiao-long; Wang, Yu-peng

    2017-11-01

    A hyper-spectral imaging spectrometer has high spatial and spectral resolution. Its radiometric calibration requires knowledge of the sources used at high spectral resolution. In order to satisfy this source requirement, an on-orbit radiometric calibration method is designed in this paper. The calibration chain is based on accurate spectral inversion of the calibration light source. A genetic algorithm program is used to optimize the channel design of the transfer radiometer while accounting for the degradation of the halogen lamp, thus achieving high-accuracy inversion of the spectral curve over the whole working time. The experimental results show the average root mean squared error is 0.396%, the maximum root mean squared error is 0.448%, and the relative errors at all wavelengths are within 1% in the spectral range from 500 nm to 900 nm during 100 h of operating time. The design lays a foundation for the high-accuracy calibration of imaging spectrometers.

  3. A detector interferometric calibration experiment for high precision astrometry

    NASA Astrophysics Data System (ADS)

    Crouzier, A.; Malbet, F.; Henault, F.; Léger, A.; Cara, C.; LeDuigou, J. M.; Preis, O.; Kern, P.; Delboulbe, A.; Martin, G.; Feautrier, P.; Stadler, E.; Lafrasse, S.; Rochat, S.; Ketchazo, C.; Donati, M.; Doumayrou, E.; Lagage, P. O.; Shao, M.; Goullioud, R.; Nemati, B.; Zhai, C.; Behar, E.; Potin, S.; Saint-Pe, M.; Dupont, J.

    2016-11-01

    Context. Exoplanet science has made staggering progress in the last two decades, due to the relentless exploration of new detection methods and refinement of existing ones. Yet astrometry offers a unique and untapped potential of discovery of habitable-zone low-mass planets around all the solar-like stars of the solar neighborhood. To fulfill this goal, astrometry must be paired with high precision calibration of the detector. Aims: We present a way to calibrate a detector for high accuracy astrometry. An experimental testbed combining an astrometric simulator and an interferometric calibration system is used to validate both the hardware needed for the calibration and the signal processing methods. The objective is an accuracy of 5 × 10-6 pixel on the location of a Nyquist sampled polychromatic point spread function. Methods: The interferometric calibration system produced modulated Young fringes on the detector. The Young fringes were parametrized as products of time and space dependent functions, based on various pixel parameters. The minimization of function parameters was done iteratively, until convergence was obtained, revealing the pixel information needed for the calibration of astrometric measurements. Results: The calibration system yielded the pixel positions to an accuracy estimated at 4 × 10-4 pixel. After including the pixel position information, an astrometric accuracy of 6 × 10-5 pixel was obtained, for a PSF motion over more than five pixels. In the static mode (small jitter motion of less than 1 × 10-3 pixel), a photon noise limited precision of 3 × 10-5 pixel was reached.

  4. Medical color displays and their color calibration: investigations of various calibration methods, tools, and potential improvement in color difference ΔE

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Hashmi, Syed F.; Dallas, William J.; Krupinski, Elizabeth A.; Rehm, Kelly; Fan, Jiahua

    2010-08-01

    Our laboratory has investigated the efficacy of a suite of color calibration and monitor profiling packages which employ a variety of color measurement sensors. Each of the methods computes gamma correction tables for the red, green and blue color channels of a monitor that attempt to: a) match a desired luminance range and tone reproduction curve; and b) maintain a target neutral point across the range of grey values. All of the methods examined here produce International Color Consortium (ICC) profiles that describe the color rendering capabilities of the monitor after calibration. Color profiles incorporate a transfer matrix that establishes the relationship between RGB driving levels and the International Commission on Illumination (CIE) XYZ (tristimulus) values of the resulting on-screen color; the matrix is developed by displaying color patches of known RGB values on the monitor and measuring the tristimulus values with a sensor. The number and chromatic distribution of color patches varies across methods and is usually not under user control. In this work we examine the effect of employing differing calibration and profiling methods on rendition of color images. A series of color patches encoded in sRGB color space were presented on the monitor using color-management software that utilized the ICC profile produced by each method. The patches were displayed on the calibrated monitor and measured with a Minolta CS200 colorimeter. Differences in intended and achieved luminance and chromaticity were computed using the CIE DE2000 color-difference metric, in which a value of ΔE = 1 is generally considered to be approximately one just noticeable difference (JND) in color. We observed between one and 17 JNDs for individual colors, depending on calibration method and target. As an extension of this fundamental work [1], we further improved our calibration method by defining concrete calibration parameters for the display, using the NEC wide gamut puck, and making sure

  5. Extracting information from S-curves of language change

    PubMed Central

    Ghanbarnejad, Fakhteh; Gerlach, Martin; Miotto, José M.; Altmann, Eduardo G.

    2014-01-01

    It is well accepted that the adoption of innovations is described by S-curves (slow start, accelerating period and slow end). In this paper, we analyse how much information on the dynamics of innovation spreading can be obtained from a quantitative description of S-curves. We focus on the adoption of linguistic innovations for which detailed databases of written texts from the last 200 years allow for an unprecedented statistical precision. Combining data analysis with simulations of simple models (e.g. the Bass dynamics on complex networks), we identify signatures of endogenous and exogenous factors in the S-curves of adoption. We propose a measure to quantify the strength of these factors and three different methods to estimate it from S-curves. We obtain cases in which the exogenous factors are dominant (in the adoption of German orthographic reforms and of one irregular verb) and cases in which endogenous factors are dominant (in the adoption of conventions for romanization of Russian names and in the regularization of most studied verbs). These results show that the shape of the S-curve is not universal and contains information on the adoption mechanism. PMID:25339692
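
    As a concrete example of the quantitative description of an S-curve that such analyses start from, the sketch below fits a two-parameter logistic curve to a synthetic adoption time series. The data and parameter values are invented; the paper's models and estimators are more elaborate.

    ```python
    # Minimal sketch: fit a logistic S-curve (rate and midpoint) to synthetic adoption data.
    import numpy as np
    from scipy.optimize import curve_fit

    years = np.arange(1800, 2000, 10, dtype=float)
    true = 1.0 / (1.0 + np.exp(-0.08 * (years - 1900)))
    rng = np.random.default_rng(0)
    adoption = np.clip(true + rng.normal(0, 0.02, years.size), 0, 1)

    def logistic(t, rate, t_mid):
        return 1.0 / (1.0 + np.exp(-rate * (t - t_mid)))

    (rate, t_mid), _ = curve_fit(logistic, years, adoption, p0=(0.05, 1900.0))
    print(f"adoption rate = {rate:.3f} per year, midpoint year = {t_mid:.1f}")
    ```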

  6. Spectral multivariate calibration without laboratory prepared or determined reference analyte values.

    PubMed

    Ottaway, Josh; Farrell, Jeremy A; Kalivas, John H

    2013-02-05

    An essential part of calibration is establishing the analyte calibration reference samples. These samples must characterize the sample matrix and measurement conditions (chemical, physical, instrumental, and environmental) of any sample to be predicted. Calibration usually requires measuring spectra for numerous reference samples in addition to determining the corresponding analyte reference values. Both tasks are typically time-consuming and costly. This paper reports on a method named pure component Tikhonov regularization (PCTR) that does not require laboratory prepared or determined reference values. Instead, an analyte pure component spectrum is used in conjunction with nonanalyte spectra for calibration. Nonanalyte spectra can be from different sources including pure component interference samples, blanks, and constant analyte samples. The approach is also applicable to calibration maintenance when the analyte pure component spectrum is measured in one set of conditions and nonanalyte spectra are measured in new conditions. The PCTR method balances the trade-offs between calibration model shrinkage and the degree of orthogonality to the nonanalyte content (model direction) in order to obtain accurate predictions. Using visible and near-infrared (NIR) spectral data sets, the PCTR results are comparable to those obtained using ridge regression (RR) with reference calibration sets. The flexibility of PCTR also allows including reference samples if such samples are available.
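
    For context, the ridge regression baseline that PCTR is compared against can be sketched in a few lines on synthetic spectra. This illustrates the reference-sample-based calibration the paper seeks to avoid, not the PCTR algorithm itself; all names and values are invented.

    ```python
    # Sketch of a reference-sample ridge-regression calibration on synthetic spectra.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(1)
    n_samples, n_wavelengths = 40, 200
    concentrations = rng.uniform(0, 1, n_samples)
    pure_component = np.exp(-0.5 * ((np.arange(n_wavelengths) - 80) / 15.0) ** 2)
    spectra = np.outer(concentrations, pure_component) + rng.normal(0, 0.01, (n_samples, n_wavelengths))

    model = Ridge(alpha=1.0).fit(spectra[:30], concentrations[:30])      # 30 reference samples
    rmsep = np.sqrt(np.mean((model.predict(spectra[30:]) - concentrations[30:]) ** 2))
    print("RMSEP on held-out samples:", rmsep)
    ```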

  7. A new method for automated dynamic calibration of tipping-bucket rain gauges

    USGS Publications Warehouse

    Humphrey, M.D.; Istok, J.D.; Lee, J.Y.; Hevesi, J.A.; Flint, A.L.

    1997-01-01

    Existing methods for dynamic calibration of tipping-bucket rain gauges (TBRs) can be time consuming and labor intensive. A new automated dynamic calibration system has been developed to calibrate TBRs with minimal effort. The system consists of a programmable pump, datalogger, digital balance, and computer. Calibration is performed in two steps: 1) pump calibration and 2) rain gauge calibration. Pump calibration ensures precise control of water flow rates delivered to the rain gauge funnel; rain gauge calibration ensures precise conversion of bucket tip times to actual rainfall rates. Calibration of the pump and one rain gauge for 10 selected pump rates typically requires about 8 h. Data files generated during rain gauge calibration are used to compute rainfall intensities and amounts from a record of bucket tip times collected in the field. The system was tested using 5 types of commercial TBRs (15.2-, 20.3-, and 30.5-cm diameters; 0.1-, 0.2-, and 1.0-mm resolutions) and using 14 TBRs of a single type (20.3-cm diameter; 0.1-mm resolution). Ten pump rates ranging from 3 to 154 mL min-1 were used to calibrate the TBRs and represented rainfall rates between 6 and 254 mm h-1 depending on the rain gauge diameter. All pump calibration results were very linear with R2 values greater than 0.99. All rain gauges exhibited large nonlinear underestimation errors (between 5% and 29%) that decreased with increasing rain gauge resolution and increased with increasing rainfall rate, especially for rates greater than 50 mm h-1. Calibration curves of bucket tip time against the reciprocal of the true pump rate for all rain gauges also were linear with R2 values of 0.99. Calibration data for the 14 rain gauges of the same type were very similar, as indicated by slope values that were within 14% of each other and ranged from about 367 to 417 s mm h-1. The developed system can calibrate TBRs efficiently, accurately, and virtually unattended and could be modified for use with other
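
    The core of the described calibration is a linear regression of bucket tip interval against the reciprocal of the true pump rate, which is then inverted in the field to turn tip times into rainfall intensities. A hedged numerical sketch follows; the gauge diameter is taken from the abstract, all other numbers are invented.

    ```python
    # Sketch with invented numbers: calibrate tip interval vs. 1/(pump rate), then convert
    # a field tip interval to rainfall intensity through the funnel area.
    import numpy as np

    pump_rate = np.array([3, 10, 25, 50, 100, 154], dtype=float)       # mL/min delivered to the funnel
    tip_interval = np.array([820.0, 247.0, 99.5, 50.2, 25.4, 16.8])    # s per bucket tip

    slope, intercept = np.polyfit(1.0 / pump_rate, tip_interval, 1)    # tip_interval = slope/rate + intercept

    funnel_area_cm2 = np.pi * (20.3 / 2) ** 2                          # 20.3 cm diameter gauge
    measured_interval = 40.0                                            # s between tips in the field
    equiv_rate_ml_min = slope / (measured_interval - intercept)         # invert the calibration
    intensity_mm_h = equiv_rate_ml_min * 60.0 / funnel_area_cm2 * 10.0  # mL/min -> mm/h over the funnel
    print(f"rainfall intensity ~ {intensity_mm_h:.1f} mm/h")
    ```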

  8. Estimating the Area Under ROC Curve When the Fitted Binormal Curves Demonstrate Improper Shape.

    PubMed

    Bandos, Andriy I; Guo, Ben; Gur, David

    2017-02-01

    The "binormal" model is the most frequently used tool for parametric receiver operating characteristic (ROC) analysis. The binormal ROC curves can have "improper" (non-concave) shapes that are unrealistic in many practical applications, and several tools (eg, PROPROC) have been developed to address this problem. However, due to the general robustness of binormal ROCs, the improperness of the fitted curves might carry little consequence for inferences about global summary indices, such as the area under the ROC curve (AUC). In this work, we investigate the effect of severe improperness of fitted binormal ROC curves on the reliability of AUC estimates when the data arise from an actually proper curve. We designed theoretically proper ROC scenarios that induce severely improper shape of fitted binormal curves in the presence of well-distributed empirical ROC points. The binormal curves were fitted using maximum likelihood approach. Using simulations, we estimated the frequency of severely improper fitted curves, bias of the estimated AUC, and coverage of 95% confidence intervals (CIs). In Appendix S1, we provide additional information on percentiles of the distribution of AUC estimates and bias when estimating partial AUCs. We also compared the results to a reference standard provided by empirical estimates obtained from continuous data. We observed up to 96% of severely improper curves depending on the scenario in question. The bias in the binormal AUC estimates was very small and the coverage of the CIs was close to nominal, whereas the estimates of partial AUC were biased upward in the high specificity range and downward in the low specificity range. Compared to a non-parametric approach, the binormal model led to slightly more variable AUC estimates, but at the same time to CIs with more appropriate coverage. The improper shape of the fitted binormal curve, by itself, ie, in the presence of a sufficient number of well-distributed points, does not imply

  9. Results of the 1999 JPL Balloon Flight Solar Cell Calibration Program

    NASA Technical Reports Server (NTRS)

    Anspaugh, B. E.; Mueller, R. L.; Weiss, R. S.

    2000-01-01

    The 1999 solar cell calibration balloon flight campaign consisted of two flights, which occurred on June 14, 1999, and July 6, 1999. All objectives of the flight program were met. Fifty-seven modules were carried to an altitude of approximately equal to 120,000 ft (36.6 km). Full I-V curves were measured on five of these modules, and output at a fixed load was measured on forty-three modules (forty-five cells), with some modules repeated on the second flight. This data was corrected to 28 C and to 1 AU (1.496 × 10^8 km). The calibrated cells have been returned to their owners and can now be used as reference standards in simulator testing of cells and arrays.
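
    The correction to 28 C and 1 AU mentioned above amounts to a linear temperature adjustment plus an inverse-square scaling of the measured current with Sun distance. A hedged sketch follows; the temperature coefficient and measured values are invented, not JPL's calibration constants.

    ```python
    # Sketch of the kind of correction described: adjust a measured cell current to 28 C
    # and scale it to 1 AU by the inverse-square law. All numbers are illustrative.
    AU_KM = 1.496e8

    def correct_to_reference(i_meas_mA, cell_temp_C, sun_dist_km, alpha_mA_per_C=0.03):
        i_temp = i_meas_mA + alpha_mA_per_C * (28.0 - cell_temp_C)   # bring to 28 C
        return i_temp * (sun_dist_km / AU_KM) ** 2                   # scale irradiance to 1 AU

    print(f"{correct_to_reference(152.0, 35.0, 1.52e8):.1f} mA at 28 C and 1 AU")
    ```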

  10. Bayesian model calibration of ramp compression experiments on Z

    NASA Astrophysics Data System (ADS)

    Brown, Justin; Hund, Lauren

    2017-06-01

    Bayesian model calibration (BMC) is a statistical framework to estimate inputs for a computational model in the presence of multiple uncertainties, making it well suited to dynamic experiments which must be coupled with numerical simulations to interpret the results. Often, dynamic experiments are diagnosed using velocimetry and this output can be modeled using a hydrocode. Several calibration issues unique to this type of scenario including the functional nature of the output, uncertainty of nuisance parameters within the simulation, and model discrepancy identifiability are addressed, and a novel BMC process is proposed. As a proof of concept, we examine experiments conducted on Sandia National Laboratories' Z-machine which ramp compressed tantalum to peak stresses of 250 GPa. The proposed BMC framework is used to calibrate the cold curve of Ta (with uncertainty), and we conclude that the procedure results in simple, fast, and valid inferences. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  11. Frequency analysis of a step dynamic pressure calibrator.

    PubMed

    Choi, In-Mook; Yang, Inseok; Yang, Tae-Heon

    2012-09-01

    A dynamic high pressure standard is becoming more essential in the fields of mobile engines, space science, and especially the area of defense such as long-range missile development. However, a complication arises when a dynamic high pressure sensor is compared with a reference dynamic pressure gauge calibrated in static mode. Also, it is difficult to determine a reference dynamic pressure signal from the calibrator because a dynamic high pressure calibrator generates unnecessary oscillations in a positive-going pressure step method. A dynamic high pressure calibrator, using a quick-opening ball valve, generates a fast step pressure change within 1 ms; however, the calibrator also generates a big impulse force that can lead to a short life-time of the system and to oscillating characteristics in response to the dynamic sensor to be calibrated. In this paper, unnecessary additional resonant frequencies besides those of the step function are characterized using frequency analysis. Accordingly, the main sources of resonance are described. In order to remove unnecessary frequencies, the post processing results, obtained by a filter, are given; also, a method for the modification of the dynamic calibration system is proposed.
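
    The frequency analysis referred to here can be illustrated by taking the spectrum of the settled portion of a synthetic step-plus-ringing signal and locating the resonance peak; the signal parameters below are invented.

    ```python
    # Minimal sketch on a synthetic signal: find the unwanted resonance superposed on a
    # ~1 ms positive-going pressure step via an FFT of the post-step segment.
    import numpy as np

    fs = 100_000.0                                   # sample rate, Hz
    t = np.arange(0, 0.02, 1.0 / fs)
    step = np.where(t > 0.001, 1.0, 0.0)             # ~1 ms step
    ringing = 0.2 * np.sin(2 * np.pi * 3200 * t) * np.exp(-t / 0.005)
    signal = step + np.where(t > 0.001, ringing, 0.0)

    post = signal[t > 0.002]                         # settled portion after the step
    spectrum = np.abs(np.fft.rfft(post - post.mean()))
    freqs = np.fft.rfftfreq(post.size, 1.0 / fs)
    print(f"dominant resonance ~ {freqs[np.argmax(spectrum[1:]) + 1]:.0f} Hz")
    ```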

  12. Frequency analysis of a step dynamic pressure calibrator

    NASA Astrophysics Data System (ADS)

    Choi, In-Mook; Yang, Inseok; Yang, Tae-Heon

    2012-09-01

    A dynamic high pressure standard is becoming more essential in the fields of mobile engines, space science, and especially the area of defense such as long-range missile development. However, a complication arises when a dynamic high pressure sensor is compared with a reference dynamic pressure gauge calibrated in static mode. Also, it is difficult to determine a reference dynamic pressure signal from the calibrator because a dynamic high pressure calibrator generates unnecessary oscillations in a positive-going pressure step method. A dynamic high pressure calibrator, using a quick-opening ball valve, generates a fast step pressure change within 1 ms; however, the calibrator also generates a big impulse force that can lead to a short life-time of the system and to oscillating characteristics in response to the dynamic sensor to be calibrated. In this paper, unnecessary additional resonant frequencies besides those of the step function are characterized using frequency analysis. Accordingly, the main sources of resonance are described. In order to remove unnecessary frequencies, the post processing results, obtained by a filter, are given; also, a method for the modification of the dynamic calibration system is proposed.

  13. Calibration of an electronic counter and pulse height analyzer for plotting erythrocyte volume spectra.

    DOT National Transportation Integrated Search

    1963-03-01

    A simple technique is presented for calibrating an electronic system used in the plotting of erythrocyte volume spectra. The calibration factors, once obtained, apparently remain applicable for some time. Precise estimates of calibration factors appe...

  14. Calibrating and training of neutron based NSA techniques with less SNM standards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geist, William H; Swinhoe, Martyn T; Bracken, David S

    2010-01-01

    Accessing special nuclear material (SNM) standards for the calibration of and training on nondestructive assay (NDA) instruments has become increasingly difficult in light of enhanced safeguards and security regulations. Limited or nonexistent access to SNM has affected neutron based NDA techniques more than gamma ray techniques because the effects of multiplication require a range of masses to accurately measure the detector response. Neutron based NDA techniques can also be greatly affected by the matrix and impurity characteristics of the item. The safeguards community has been developing techniques for calibrating instrumentation and training personnel with dwindling numbers of SNM standards. Monte Carlo methods have become increasingly important for design and calibration of instrumentation. Monte Carlo techniques have the ability to accurately predict the detector response for passive techniques. The Monte Carlo results are usually benchmarked to neutron source measurements such as californium. For active techniques, the modeling becomes more difficult because of the interaction of the interrogation source with the detector and nuclear material; and the results cannot be simply benchmarked with neutron sources. A Monte Carlo calculated calibration curve for a training course in Indonesia of material test reactor (MTR) fuel elements assayed with an active well coincidence counter (AWCC) will be presented as an example. Performing training activities with reduced amounts of nuclear material makes it difficult to demonstrate how the multiplication and matrix properties of the item affect the detector response and limits the knowledge that can be obtained with hands-on training. A neutron pulse simulator (NPS) has been developed that can produce a pulse stream representative of a real pulse stream output from a detector measuring SNM. The NPS has been used by the International Atomic Energy Agency (IAEA) for detector testing and training applications at

  15. Energy calibration of CALET onboard the International Space Station

    NASA Astrophysics Data System (ADS)

    Asaoka, Y.; Akaike, Y.; Komiya, Y.; Miyata, R.; Torii, S.; Adriani, O.; Asano, K.; Bagliesi, M. G.; Bigongiari, G.; Binns, W. R.; Bonechi, S.; Bongi, M.; Brogi, P.; Buckley, J. H.; Cannady, N.; Castellini, G.; Checchia, C.; Cherry, M. L.; Collazuol, G.; Di Felice, V.; Ebisawa, K.; Fuke, H.; Guzik, T. G.; Hams, T.; Hareyama, M.; Hasebe, N.; Hibino, K.; Ichimura, M.; Ioka, K.; Ishizaki, W.; Israel, M. H.; Javaid, A.; Kasahara, K.; Kataoka, J.; Kataoka, R.; Katayose, Y.; Kato, C.; Kawanaka, N.; Kawakubo, Y.; Kitamura, H.; Krawczynski, H. S.; Krizmanic, J. F.; Kuramata, S.; Lomtadze, T.; Maestro, P.; Marrocchesi, P. S.; Messineo, A. M.; Mitchell, J. W.; Miyake, S.; Mizutani, K.; Moiseev, A. A.; Mori, K.; Mori, M.; Mori, N.; Motz, H. M.; Munakata, K.; Murakami, H.; Nakagawa, Y. E.; Nakahira, S.; Nishimura, J.; Okuno, S.; Ormes, J. F.; Ozawa, S.; Pacini, L.; Palma, F.; Papini, P.; Penacchioni, A. V.; Rauch, B. F.; Ricciarini, S.; Sakai, K.; Sakamoto, T.; Sasaki, M.; Shimizu, Y.; Shiomi, A.; Sparvoli, R.; Spillantini, P.; Stolzi, F.; Takahashi, I.; Takayanagi, M.; Takita, M.; Tamura, T.; Tateyama, N.; Terasawa, T.; Tomida, H.; Tsunesada, Y.; Uchihori, Y.; Ueno, S.; Vannuccini, E.; Wefel, J. P.; Yamaoka, K.; Yanagita, S.; Yoshida, A.; Yoshida, K.; Yuda, T.

    2017-05-01

    In August 2015, the CALorimetric Electron Telescope (CALET), designed for long exposure observations of high energy cosmic rays, docked with the International Space Station (ISS) and shortly thereafter began to collect data. CALET will measure the cosmic ray electron spectrum over the energy range of 1 GeV to 20 TeV with a very high resolution of 2% above 100 GeV, based on a dedicated instrument incorporating an exceptionally thick 30 radiation-length calorimeter with both total absorption and imaging (TASC and IMC) units. Each TASC readout channel must be carefully calibrated over the extremely wide dynamic range of CALET that spans six orders of magnitude in order to obtain a degree of calibration accuracy matching the resolution of energy measurements. These calibrations consist of calculating the conversion factors between ADC units and energy deposits, ensuring linearity over each gain range, and providing a seamless transition between neighboring gain ranges. This paper describes these calibration methods in detail, along with the resulting data and associated accuracies. The results presented in this paper show that a sufficient accuracy was achieved for the calibrations of each channel in order to obtain a suitable resolution over the entire dynamic range of the electron spectrum measurement.

  16. Calibration methods influence quantitative material decomposition in photon-counting spectral CT

    NASA Astrophysics Data System (ADS)

    Curtis, Tyler E.; Roeder, Ryan K.

    2017-03-01

    Photon-counting detectors and nanoparticle contrast agents can potentially enable molecular imaging and material decomposition in computed tomography (CT). Material decomposition has been investigated using both simulated and acquired data sets. However, the effect of calibration methods on material decomposition has not been systematically investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on quantitative material decomposition. A commercially available photon-counting spectral micro-CT (MARS Bioimaging) was used to acquire images with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material basis matrix values were determined using multiple linear regression models and material decomposition was performed using a maximum a posteriori estimator. The accuracy of quantitative material decomposition was evaluated by the root mean squared error (RMSE), specificity, sensitivity, and area under the curve (AUC). An increased maximum concentration (range) in the calibration significantly improved RMSE, specificity and AUC. The effects of an increased number of concentrations in the calibration were not statistically significant for the conditions in this study. The overall results demonstrated that the accuracy of quantitative material decomposition in spectral CT is significantly influenced by calibration methods, which must therefore be carefully considered for the intended diagnostic imaging application.
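
    The calibration step described, determining material basis values by linear regression of multi-bin signals against known concentrations and then decomposing an unknown measurement, can be sketched as follows. The data are simulated for a single agent, and an ordinary least-squares decomposition stands in for the maximum a posteriori estimator used in the study.

    ```python
    # Sketch with simulated data: build a per-bin basis vector from calibration vials,
    # then decompose an unknown multi-bin measurement by least squares.
    import numpy as np

    concentrations = np.array([0.0, 2.0, 4.0, 8.0, 16.0])             # mg/mL in calibration vials
    true_response = np.array([0.9, 1.3, 1.8, 1.1, 0.7])               # per-bin sensitivity (5 energy bins)
    rng = np.random.default_rng(2)
    measurements = np.outer(concentrations, true_response) + rng.normal(0, 0.05, (5, 5))

    # basis vector: per-bin signal per unit concentration (regression slope of each bin)
    basis, *_ = np.linalg.lstsq(concentrations[:, None], measurements, rcond=None)

    unknown = 6.0 * true_response + rng.normal(0, 0.05, 5)             # unknown vial, ~6 mg/mL
    estimate, *_ = np.linalg.lstsq(basis.T, unknown, rcond=None)
    print(f"decomposed concentration ~ {estimate[0]:.2f} mg/mL")
    ```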

  17. Application of Composite Small Calibration Objects in Traffic Accident Scene Photogrammetry

    PubMed Central

    Chen, Qiang; Xu, Hongguo; Tan, Lidong

    2015-01-01

    In order to address the difficulty of arranging large calibration objects and the low measurement accuracy of small calibration objects in traffic accident scene photogrammetry, a photogrammetric method based on a composite of small calibration objects is proposed. Several small calibration objects are placed around the traffic accident scene, and the coordinate system of the composite calibration object is given based on one of them. By maintaining the relative position and coplanar relationship of the small calibration objects, the local coordinate system of each small calibration object is transformed into the coordinate system of the composite calibration object. The two-dimensional direct linear transformation method is improved based on minimizing the reprojection error of the calibration points of all objects. A rectified image is obtained using the nonlinear optimization method. The increased accuracy of traffic accident scene photogrammetry using a composite small calibration object is demonstrated through the analysis of field experiments and case studies. PMID:26011052

  18. Application of composite small calibration objects in traffic accident scene photogrammetry.

    PubMed

    Chen, Qiang; Xu, Hongguo; Tan, Lidong

    2015-01-01

    In order to address the difficulty of arranging large calibration objects and the low measurement accuracy of small calibration objects in traffic accident scene photogrammetry, a photogrammetric method based on a composite of small calibration objects is proposed. Several small calibration objects are placed around the traffic accident scene, and the coordinate system of the composite calibration object is given based on one of them. By maintaining the relative position and coplanar relationship of the small calibration objects, the local coordinate system of each small calibration object is transformed into the coordinate system of the composite calibration object. The two-dimensional direct linear transformation method is improved based on minimizing the reprojection error of the calibration points of all objects. A rectified image is obtained using the nonlinear optimization method. The increased accuracy of traffic accident scene photogrammetry using a composite small calibration object is demonstrated through the analysis of field experiments and case studies.

  19. Wavelength calibration of dispersive near-infrared spectrometer using relative k-space distribution with low coherence interferometer

    NASA Astrophysics Data System (ADS)

    Kim, Ji-hyun; Han, Jae-Ho; Jeong, Jichai

    2016-05-01

    The commonly employed calibration methods for laboratory-made spectrometers have several disadvantages, including poor calibration when the number of characteristic spectral peaks is low. Therefore, we present a wavelength calibration method using relative k-space distribution with low coherence interferometer. The proposed method utilizes an interferogram with a perfect sinusoidal pattern in k-space for calibration. Zero-crossing detection extracts the k-space distribution of a spectrometer from the interferogram in the wavelength domain, and a calibration lamp provides information about absolute wavenumbers. To assign wavenumbers, wavelength-to-k-space conversion is required for the characteristic spectrum of the calibration lamp with the extracted k-space distribution. Then, the wavelength calibration is completed by inverse conversion of the k-space into wavelength domain. The calibration performance of the proposed method was demonstrated with two experimental conditions of four and eight characteristic spectral peaks. The proposed method elicited reliable calibration results in both cases, whereas the conventional method of third-order polynomial curve fitting failed to determine wavelengths in the case of four characteristic peaks. Moreover, for optical coherence tomography imaging, the proposed method could improve axial resolution due to higher suppression of sidelobes in point spread function than the conventional method. We believe that our findings can improve not only wavelength calibration accuracy but also resolution for optical coherence tomography.

  20. Absolute Radiometric Calibration of EUNIS-06

    NASA Technical Reports Server (NTRS)

    Thomas, R. J.; Rabin, D. M.; Kent, B. J.; Paustian, W.

    2007-01-01

    The Extreme-Ultraviolet Normal-Incidence Spectrometer (EUNIS) is a sounding-rocket payload that obtains imaged high-resolution spectra of individual solar features, providing information about the Sun's corona and upper transition region. Shortly after its successful initial flight last year, a complete end-to-end calibration was carried out to determine the instrument's absolute radiometric response over its longwave bandpass of 300-370 Å. The measurements were done at the Rutherford-Appleton Laboratory (RAL) in England, using the same vacuum facility and EUV radiation source used in the pre-flight calibrations of both SOHO/CDS and Hinode/EIS, as well as in three post-flight calibrations of our SERTS sounding rocket payload, the precursor to EUNIS. The unique radiation source provided by the Physikalisch-Technische Bundesanstalt (PTB) had been calibrated to an absolute accuracy of 7% (1-sigma) at 12 wavelengths covering our bandpass directly against the Berlin electron storage ring BESSY, which is itself a primary radiometric source standard. Scans of the EUNIS aperture were made to determine the instrument's absolute spectral sensitivity to ±25%, considering all sources of error, and demonstrate that EUNIS-06 was the most sensitive solar EUV spectrometer yet flown. The results will be matched against prior calibrations which relied on combining measurements of individual optical components, and on comparisons with theoretically predicted 'insensitive' line ratios. Coordinated observations were made during the EUNIS-06 flight by SOHO/CDS and EIT that will allow re-calibrations of those instruments as well. In addition, future EUNIS flights will provide similar calibration updates for TRACE, Hinode/EIS, and STEREO/SECCHI/EUVI.

  1. Calibration of the ROSAT HRI Spectral Response

    NASA Technical Reports Server (NTRS)

    Prestwich, Andrea

    1998-01-01

    The ROSAT High Resolution Imager has a limited (2-band) spectral response. This spectral capability can give X-ray hardness ratios on spatial scales of 5 arcseconds. The spectral response of the center of the detector was calibrated before the launch of ROSAT, but the gain decreases with time and also is a function of position on the detector. To complicate matters further, the satellite is "wobbled", possibly moving a source across several spatial gain states. These difficulties have prevented the spectral response of the ROSAT HRI from being used for scientific measurements. We have used Bright Earth data and in-flight calibration sources to map the spatial and temporal gain changes, and written software which will allow ROSAT users to generate a calibrated XSPEC response matrix and hence determine a calibrated hardness ratio. In this report, we describe the calibration procedure and show how to obtain a response matrix. In Section 2 we give an overview of the calibration procedure, and in Section 3 we give a summary of HRI spatial and temporal gain variations. Section 4 describes the routines used to determine the gain distribution of a source. In Sections 5 and 6, we describe in detail how the Bright Earth database and calibration sources are used to derive a corrected response matrix for a given observation. Finally, Section 7 describes how to use the software.

  2. Camera Calibration with Radial Variance Component Estimation

    NASA Astrophysics Data System (ADS)

    Mélykuti, B.; Kruck, E. J.

    2014-11-01

    Camera calibration plays an increasingly important role. Besides real digital aerial survey cameras, the photogrammetric market is dominated by a large number of non-metric digital cameras mounted on UAVs or other low-weight flying platforms. The in-flight calibration of those systems plays a significant role in enhancing the geometric accuracy of survey photos. Photo measurements are expected to be more precise in the center of images than along the edges or in the corners. The accuracy of photo measurements has been analyzed with statistical methods as a function of the distance of points from the image center. This test provides a curve for the measurement precision as a function of the photo radius. A large number of camera types have been tested with well-distributed point measurements across image space. The results of the tests show a functional connection between accuracy and radial distance and provide a method to check and enhance the geometric capability of the cameras in this respect.

  3. Developing new extension of GafChromic RTQA2 film to patient quality assurance field using a plan-based calibration method

    NASA Astrophysics Data System (ADS)

    Peng, Jiayuan; Zhang, Zhen; Wang, Jiazhou; Xie, Jiang; Chen, Junchao; Hu, Weigang

    2015-10-01

    GafChromic RTQA2 film is a type of radiochromic film designed for light field and radiation field alignment. The aim of this study is to extend the application of RTQA2 film to the measurement of patient specific quality assurance (QA) fields as a 2D relative dosimeter. Pre-irradiated and post-irradiated RTQA2 films were scanned in reflection mode using a flatbed scanner. A plan-based calibration (PBC) method utilized the mapping information of the calculated dose image and film grayscale image to create a dose versus pixel value calibration model. This model was used to calibrate the film grayscale image to the film relative dose image. The dose agreement between calculated and film dose images were analyzed by gamma analysis. To evaluate the feasibility of this method, eight clinically approved RapidArc cases (one abdomen cancer and seven head-and-neck cancer patients) were tested using this method. Moreover, three MLC gap errors and two MLC transmission errors were introduced to eight Rapidarc cases respectively to test the robustness of this method. The PBC method could overcome the film lot and post-exposure time variations of RTQA2 film to get a good 2D relative dose calibration result. The mean gamma passing rate of eight patients was 97.90%  ±  1.7%, which showed good dose consistency between calculated and film dose images. In the error test, the PBC method could over-calibrate the film, which means some dose error in the film would be falsely corrected to keep the dose in film consistent with the dose in the calculated dose image. This would then lead to a false negative result in the gamma analysis. In these cases, the derivative curve of the dose calibration curve would be non-monotonic which would expose the dose abnormality. By using the PBC method, we extended the application of more economical RTQA2 film to patient specific QA. The robustness of the PBC method has been improved by analyzing the monotonicity of the derivative of the
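
    The plan-based calibration idea, pairing each film pixel's grayscale value with the calculated dose at the same location and fitting a grayscale-to-dose model from those pairs, can be sketched conceptually as below. The images are synthetic and a simple linear model is used; this is not the authors' implementation.

    ```python
    # Conceptual sketch: derive a grayscale-to-dose calibration from the mapping between a
    # film grayscale image and the co-registered calculated dose image (synthetic data).
    import numpy as np

    rng = np.random.default_rng(3)
    calc_dose = rng.uniform(20, 220, size=(64, 64))                          # cGy, from the planning system
    gray = 40_000 - 90.0 * calc_dose + rng.normal(0, 150, calc_dose.shape)   # film darkens with dose

    coeffs = np.polyfit(gray.ravel(), calc_dose.ravel(), 1)   # dose as a linear function of grayscale here
    film_dose = np.polyval(coeffs, gray)                      # calibrated film relative dose image

    print("mean |film - calculated| dose difference:",
          np.abs(film_dose - calc_dose).mean(), "cGy")
    ```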

  4. Evaluation of Strain-Life Fatigue Curve Estimation Methods and Their Application to a Direct-Quenched High-Strength Steel

    NASA Astrophysics Data System (ADS)

    Dabiri, M.; Ghafouri, M.; Rohani Raftar, H. R.; Björk, T.

    2018-03-01

    Methods to estimate the strain-life curve, which were divided into three categories: simple approximations, artificial neural network-based approaches and continuum damage mechanics models, were examined, and their accuracy was assessed in strain-life evaluation of a direct-quenched high-strength steel. All the prediction methods claim to be able to perform low-cycle fatigue analysis using available or easily obtainable material properties, thus eliminating the need for costly and time-consuming fatigue tests. Simple approximations were able to estimate the strain-life curve with satisfactory accuracy using only monotonic properties. The tested neural network-based model, although yielding acceptable results for the material in question, was found to be overly sensitive to the data sets used for training and showed an inconsistency in estimation of the fatigue life and fatigue properties. The studied continuum damage-based model was able to produce a curve detecting early stages of crack initiation. This model requires more experimental data for calibration than approaches using simple approximations. As a result of the different theories underlying the analyzed methods, the different approaches have different strengths and weaknesses. However, it was found that the group of parametric equations categorized as simple approximations are the easiest for practical use, with their applicability having already been verified for a broad range of materials.
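
    A classic example of the "simple approximations" category, estimating the strain-life curve from monotonic properties alone, is Manson's universal-slopes equation; the sketch below shows it generically. The constants are the textbook ones and the material values are invented, so this is an illustration of the approach rather than one of the specific methods evaluated in the paper.

    ```python
    # Manson's universal-slopes estimate of the strain-life curve from monotonic properties
    # (ultimate strength, elastic modulus, reduction of area). Material values are invented.
    import math

    def universal_slopes_strain_amplitude(N_f, sigma_u_MPa, E_MPa, reduction_of_area):
        ductility = math.log(1.0 / (1.0 - reduction_of_area))
        delta_eps = 3.5 * (sigma_u_MPa / E_MPa) * N_f ** -0.12 + ductility ** 0.6 * N_f ** -0.6
        return delta_eps / 2.0                   # strain amplitude is half the strain range

    for N in (1e3, 1e4, 1e5, 1e6):
        eps_a = universal_slopes_strain_amplitude(N, sigma_u_MPa=960, E_MPa=210_000,
                                                  reduction_of_area=0.55)
        print(f"N_f = {N:.0e}: strain amplitude ~ {eps_a:.4%}")
    ```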

  5. The Calibration of the Slotted Section for Precision Microwave Measurements

    DTIC Science & Technology

    1952-03-01

    Calibration Curve for Lossless Structures; B. The Correction Relations for Dissipative Structures; C. The Effect of an Error in the Variable Short... A discussion of probe effects and a method of correction for large insertion depths are given in the literature. This report is concerned solely with error sources... The presence of the slot in the slotted section introduces effects: (a) the slot loads the waveguide

  6. Geomorphological origin of recession curves

    NASA Astrophysics Data System (ADS)

    Biswal, Basudev; Marani, Marco

    2010-12-01

    We identify a previously undetected link between the river network morphology and key recession curves properties through a conceptual-physical model of the drainage process of the riparian unconfined aquifer. We show that the power-law exponent, α, of -dQ/dt vs. Q curves is related to the power-law exponent of N(l) vs. G(l) curves (which we show to be connected to Hack's law), where l is the downstream distance from the channel heads, N(l) is the number of channel reaches exactly located at a distance l from their channel head, and G(l) is the total length of the network located at a distance greater or equal to l from channel heads. Using Digital Terrain Models and daily discharge observations from 67 US basins we find that geomorphologic α estimates match well the values obtained from recession curves analyses. Finally, we argue that the link between recession flows and network morphology points to an important role of low-flow discharges in shaping the channel network.
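
    The exponent α discussed above is obtained in practice from a log-log fit of -dQ/dt against Q over recession periods. A minimal sketch on a synthetic recession with a known exponent:

    ```python
    # Sketch on synthetic discharge data: estimate alpha in -dQ/dt = k * Q**alpha
    # by a log-log linear fit (the synthetic recession below has alpha = 1.5).
    import numpy as np

    t = np.arange(0, 30.0, 1.0)                       # days of a single recession
    Q = (0.05 * t + 1.0) ** (-2.0)                    # synthetic recession discharge

    dQdt = np.gradient(Q, t)
    mask = dQdt < 0
    slope, intercept = np.polyfit(np.log(Q[mask]), np.log(-dQdt[mask]), 1)
    print(f"estimated alpha ~ {slope:.2f}")
    ```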

  7. Recession curve analysis for groundwater levels: case study in Latvia

    NASA Astrophysics Data System (ADS)

    Gailuma, A.; VÄ«tola, I.; Abramenko, K.; Lauva, D.; Vircavs, V.; Veinbergs, A.; Dimanta, Z.

    2012-04-01

    Recession curve analysis is a powerful and effective technique in many research areas related to hydrogeology where observations have to be made, such as water filtration and absorption of moisture, irrigation and drainage, planning of hydroelectric power production and chemical leaching (elution of chemical substances), as well as in other areas. The analysis of recession curves of surface runoff hydrographs, performed to understand the after-effects of the interaction between precipitation and surface runoff, has proven itself in practice. The same method of recession curve analysis can be applied to observations of groundwater levels. Hydrographs for recession curve analysis were prepared manually for observation wells (MG2, BG2 and AG1) in agricultural monitoring sites in Latvia. Within this study, data for declining periods, split by month, were extracted from the available groundwater level monitoring data. The drop-down curves were moved together manually (by shifting the date) until the best match was found, thereby obtaining monthly drop-down curves representing each month separately. The monthly curves were then combined and joined manually to obtain a characteristic drop-down curve of the year for each well. In the recession curve analysis, rising segments were cut out of the initial curve, leaving only the declining parts; consequently, the curve follows the groundwater flow more closely, with the impact of rain or drought periods removed. In other words, the drop-down curve is the part of the hydrograph data in which discharge dominates, without the influence of precipitation. Using recession curve analysis theory, the ready-made tool "A Visual Basic Spreadsheet Macro for Recession Curve Analysis" (K. Posavec et al., Ground Water 44, no. 5: 764-767, 2006) was used for data selection and the fitting of logarithmic functions, as well as

  8. Transformation-invariant and nonparametric monotone smooth estimation of ROC curves.

    PubMed

    Du, Pang; Tang, Liansheng

    2009-01-30

    When a new diagnostic test is developed, it is of interest to evaluate its accuracy in distinguishing diseased subjects from non-diseased subjects. The accuracy of the test is often evaluated by receiver operating characteristic (ROC) curves. Smooth ROC estimates are often preferable for continuous test results when the underlying ROC curves are in fact continuous. Nonparametric and parametric methods have been proposed by various authors to obtain smooth ROC curve estimates. However, there are certain drawbacks with the existing methods. Parametric methods need specific model assumptions. Nonparametric methods do not always satisfy the inherent properties of the ROC curves, such as monotonicity and transformation invariance. In this paper we propose a monotone spline approach to obtain smooth monotone ROC curves. Our method ensures important inherent properties of the underlying ROC curves, which include monotonicity, transformation invariance, and boundary constraints. We compare the finite sample performance of the newly proposed ROC method with other ROC smoothing methods in large-scale simulation studies. We illustrate our method through a real life example. Copyright (c) 2008 John Wiley & Sons, Ltd.

  9. Solution-based calibration strategy for laser ablation-inductively coupled plasma-mass spectrometry using desolvating nebulizer system

    NASA Astrophysics Data System (ADS)

    Zhang, Guoxia; Li, Qing; Zhu, Yan; Wang, Zheng

    2018-07-01

    An additional quantification strategy using a desolvating nebulizer system (DNS) for solution-based calibration was developed. For quantitative analysis, laser ablation (LA) and DNS-generated aerosols were coupled using a "Y" connector and introduced into the inductively coupled plasma (ICP). These aerosols were also observed by scanning electron microscopy following collection on a silicon chip. Internal standards (108Ag, 64Cu, 89Y) were used to correct for the different aerosol transport efficiencies between the DNS and LA. The correlation coefficients of the calibration curves for all elements ranged from 0.9986 to 0.9999. Standard reference materials (NIST 610-616 and GBW08407-08411) were used to demonstrate the accuracy and precision of the method. The results were in good agreement with certified values, and the relative standard deviation (RSD) of most elements was <3%. The limits of detection (LODs) for 50Cr, 55Mn, 59Co, 60Ni, 66Zn, 89Y, 110Cd, 139La, 140Ce, 146Nd, 147Sm, 157Gd, 163Dy, 166Er, and 208Pb were 23, 3, 3, 19, 31, 4, 12, 0.4, 0.9, 0.1, 0.2, 2, 0.3, 0.4, and 21 ng/g, respectively, which were significantly better than those obtained by other methods. Further, this approach was applied for the analysis of multiple elements in biological tissues, and the results were in good agreement with those obtained using solution-based inductively coupled plasma-mass spectrometry (ICP-MS).
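
    The internal-standard correction at the heart of this strategy normalizes the analyte signal to a simultaneously measured internal-standard signal before building the calibration curve. A toy numerical sketch follows; the isotope choice echoes the abstract, but all counts and concentrations are invented.

    ```python
    # Illustration with made-up intensities: internal-standard-normalized calibration curve
    # from solution standards, applied to an ablated sample normalized the same way.
    import numpy as np

    conc_standards = np.array([0.0, 10.0, 50.0, 100.0])                    # ng/g in calibration solutions
    analyte_counts = np.array([120.0, 5200.0, 25100.0, 50300.0])           # analyte signal
    internal_std_counts = np.array([9800.0, 10050.0, 9900.0, 10100.0])     # e.g. 89Y internal standard

    ratio = analyte_counts / internal_std_counts
    slope, intercept = np.polyfit(conc_standards, ratio, 1)                # normalized calibration curve

    sample_ratio = 14300.0 / 9750.0                                        # ablated sample, same normalization
    print(f"sample concentration ~ {(sample_ratio - intercept) / slope:.1f} ng/g")
    ```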

  10. OT calibration and service maintenance manual.

    DOT National Transportation Integrated Search

    2012-01-01

    The machine conditions, as well as the values of the calibration and control parameters, may determine the quality of each test result obtained. In order to keep consistency and accuracy, the conditions, performance and measurements of an OT must be...

  11. Gap Test Calibrations And Their Scaling

    NASA Astrophysics Data System (ADS)

    Sandusky, Harold

    2012-03-01

    Common tests for measuring the threshold for shock initiation are the NOL large scale gap test (LSGT) with a 50.8-mm diameter donor/gap and the expanded large scale gap test (ELSGT) with a 95.3-mm diameter donor/gap. Despite the same specifications for the explosive donor and polymethyl methacrylate (PMMA) gap in both tests, calibration of shock pressure in the gap versus distance from the donor scales by a factor of 1.75, not the 1.875 difference in their sizes. Recently reported model calculations suggest that the scaling discrepancy results from the viscoelastic properties of PMMA in combination with different methods for obtaining shock pressure. This is supported by the consistent scaling of these donors when calibrated in water-filled aquariums. Calibrations and their scaling are compared for other donors with PMMA gaps and for various donors in water.

  12. Calibration of the Urbana lidar system

    NASA Technical Reports Server (NTRS)

    Cerny, T.; Sechrist, C. F., Jr.

    1980-01-01

    A method for calibrating data obtained by the Urbana sodium lidar system is presented. First, an expression relating the number of photocounts originating from a specific altitude range to the sodium concentration is developed. This relation is then simplified by normalizing the sodium photocounts with photocounts originating from the Rayleigh region of the atmosphere. To evaluate the calibration expression, the laser linewidth must be known. Therefore, a method for measuring the laser linewidth using a Fabry-Perot interferometer is given. The laser linewidth was found to be 6 ± 2.5 pm. Problems due to photomultiplier tube overloading are discussed. Finally, calibrated data are presented. The sodium column abundance exhibits something close to a sinusoidal variation throughout the year, with the winter months showing an enhancement of a factor of 5 to 7 over the summer months.

  13. Calibration of Photon Sources for Brachytherapy

    NASA Astrophysics Data System (ADS)

    Rijnders, Alex

    Source calibration has to be considered an essential part of the quality assurance program in a brachytherapy department. Not only will it ensure that the source strength value used for dose calculation agrees within some predetermined limits with the value stated on the source certificate, but it will also ensure traceability to international standards. At present, calibration is most often still given in terms of reference air kerma rate, although calibration in terms of absorbed dose to water would be closer to the user's interest. It can be expected that in the near future several standard laboratories will be able to offer this latter service, and dosimetry protocols will have to be adapted accordingly. In-air measurement using ionization chambers (e.g. a Baldwin-Farmer ionization chamber for 192Ir high dose rate (HDR) or pulsed dose rate (PDR) sources) is still considered the method of choice for high energy source calibration, but because of their ease of use and reliability, well type chambers are becoming more popular and are nowadays often recommended as the standard equipment. For low energy sources, well type chambers are in practice the only equipment available for calibration. Care should be taken that the chamber is calibrated at the standard laboratory for the same source type and model as used in the clinic, and using the same measurement conditions and setup. Several standard laboratories have difficulty providing these calibration facilities, especially for the low energy seed sources (125I and 103Pd). Should a user not be able to obtain properly calibrated equipment to verify the brachytherapy sources used in his department, then, at least for sources that are replaced on a regular basis, a consistency check program should be set up to ensure a minimal level of quality control before these sources are used for patient treatment.

  14. Calibration of PMIS pavement performance prediction models.

    DOT National Transportation Integrated Search

    2012-02-01

    Improve the accuracy of TxDOT's existing pavement performance prediction models through calibrating these models using actual field data obtained from the Pavement Management Information System (PMIS). : Ensure logical performance superiority patte...

  15. Cross calibration of the Landsat-7 ETM+ and EO-1 ALI sensor

    USGS Publications Warehouse

    Chander, G.; Meyer, D.J.; Helder, D.L.

    2004-01-01

    As part of the Earth Observing-1 (EO-1) Mission, the Advanced Land Imager (ALI) demonstrates a potential technological direction for Landsat Data Continuity Missions. To evaluate ALI's capabilities in this role, a cross-calibration methodology has been developed using image pairs from the Landsat-7 (L7) Enhanced Thematic Mapper Plus (ETM+) and EO-1 (ALI) to verify the radiometric calibration of ALI with respect to the well-calibrated L7 ETM+ sensor. Results have been obtained using two different approaches. The first approach involves calibration of nearly simultaneous surface observations based on image statistics from areas observed simultaneously by the two sensors. The second approach uses vicarious calibration techniques to compare the predicted top-of-atmosphere radiance derived from ground reference data collected during the overpass to the measured radiance obtained from the sensor. The results indicate that the relative sensor chip assembly gains agree with the ETM+ visible and near-infrared bands to within 2% and the shortwave infrared bands to within 4%.

  16. Calibration Uncertainties in the Droplet Measurement Technologies Cloud Condensation Nuclei Counter

    NASA Astrophysics Data System (ADS)

    Hibert, Kurt James

    average surface pressure at Grand Forks, ND. The supersaturation calibration uncertainty is 2.3, 3.1, and 4.4 % for calibrations done at 700, 840, and 980 hPa respectively. The supersaturation calibration change with pressure is on average 0.047 % supersaturation per 100 hPa. The supersaturation calibrations done at UND are 42-45 % lower than supersaturation calibrations done at DMT approximately 1 year previously. Performance checks confirmed that all major leaks developed during shipping were fixed before conducting the supersaturation calibrations. Multiply-charged particles passing through the Electrostatic Classifier may have influenced DMT's activation curves, which is likely part of the supersaturation calibration difference. Furthermore, the fitting method used to calculate the activation size and the limited calibration points are likely significant sources of error in DMT's supersaturation calibration. While the DMT CCN counter's calibration uncertainties are relatively small, and the pressure dependence is easily accounted for, the calibration methodology used by different groups can be very important. The insights gained from the careful calibration of the DMT CCN counter indicate that calibration of scientific instruments using complex methodology is not trivial.

  17. An Improved Interferometric Calibration Method Based on Independent Parameter Decomposition

    NASA Astrophysics Data System (ADS)

    Fan, J.; Zuo, X.; Li, T.; Chen, Q.; Geng, X.

    2018-04-01

    Interferometric SAR is sensitive to earth surface undulation. The accuracy of the interferometric parameters plays a significant role in generating a precise digital elevation model (DEM). Interferometric calibration aims to obtain a high-precision global DEM by calculating the interferometric parameters using ground control points (GCPs). However, interferometric parameters are always calculated jointly, making them difficult to decompose precisely. In this paper, we propose an interferometric calibration method based on independent parameter decomposition (IPD). Firstly, the parameters related to the interferometric SAR measurement are determined based on the three-dimensional reconstruction model. Secondly, the sensitivity of the interferometric parameters is quantitatively analyzed after the geometric parameters are completely decomposed. Finally, each interferometric parameter is calculated based on IPD and an interferometric calibration model is established. We take Weinan in Shaanxi province as an example and choose 4 TerraDEM-X image pairs to carry out the interferometric calibration experiment. The results show that the elevation accuracy of all SAR images is better than 2.54 m after interferometric calibration. Furthermore, the proposed method can achieve DEM product accuracy better than 2.43 m in the flat area and 6.97 m in the mountainous area, which demonstrates the correctness and effectiveness of the proposed IPD-based interferometric calibration method. The results provide a technical basis for topographic mapping at 1:50,000 and even larger scales in flat and mountainous areas.

  18. Calibration of the NASA Glenn Research Center 16 in. Mass-Flow Plug

    NASA Technical Reports Server (NTRS)

    Davis, David O.; Friedlander, David J.; Saunders, J. David; Frate, Franco C.; Foster, Lancert E.

    2014-01-01

    The results of an experimental calibration of the NASA Glenn Research Center 16 in. Mass-Flow Plug (MFP) are presented and compared to a previously obtained calibration of a 15 in. Mass-Flow Plug. An ASME low-beta, long-radius nozzle was used as the calibration reference. The discharge coefficient for the ASME nozzle was obtained by numerically simulating the flow through the nozzle with the WIND-US code. The results showed agreement between the 15 and 16 in. MFPs for area ratios (MFP to pipe area ratio) greater than 0.6 but deviation at area ratios below this value for reasons that are not fully understood. A general uncertainty analysis was also performed and indicates that large uncertainties in the calibration are present for low MFP area ratios.

  19. Automatic alignment method for calibration of hydrometers

    NASA Astrophysics Data System (ADS)

    Lee, Y. J.; Chang, K. H.; Chon, J. C.; Oh, C. Y.

    2004-04-01

    This paper presents a new method to automatically align specific scale-marks for the calibration of hydrometers. A hydrometer calibration system adopting the new method consists of a vision system, a stepping motor, and software to control the system. The vision system is composed of a CCD camera and a frame grabber, and is used to acquire images. The stepping motor moves the camera, which is attached to the vessel containing a reference liquid, along the hydrometer. The operating program has two main functions: to process images from the camera to find the position of the horizontal plane and to control the stepping motor for the alignment of the horizontal plane with a particular scale-mark. Any system adopting this automatic alignment method is a convenient and precise means of calibrating a hydrometer. The performance of the proposed method is illustrated by comparing the calibration results using the automatic alignment method with those obtained using the manual method.

  20. Extracting information from S-curves of language change.

    PubMed

    Ghanbarnejad, Fakhteh; Gerlach, Martin; Miotto, José M; Altmann, Eduardo G

    2014-12-06

    It is well accepted that the adoption of innovations is described by S-curves (slow start, accelerating period and slow end). In this paper, we analyse how much information on the dynamics of innovation spreading can be obtained from a quantitative description of S-curves. We focus on the adoption of linguistic innovations for which detailed databases of written texts from the last 200 years allow for unprecedented statistical precision. Combining data analysis with simulations of simple models (e.g. the Bass dynamics on complex networks), we identify signatures of endogenous and exogenous factors in the S-curves of adoption. We propose a measure to quantify the strength of these factors and three different methods to estimate it from S-curves. We obtain cases in which the exogenous factors are dominant (in the adoption of German orthographic reforms and of one irregular verb) and cases in which endogenous factors are dominant (in the adoption of conventions for romanization of Russian names and in the regularization of most studied verbs). These results show that the shape of the S-curve is not universal and contains information on the adoption mechanism. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
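
    As a minimal illustration of what fitting an S-curve involves (assuming a plain logistic form and synthetic data, not the Bass-network analysis of the paper), consider:

      import numpy as np
      from scipy.optimize import curve_fit

      def logistic(t, t0, k):
          # Classic S-curve: slow start, accelerating middle, slow saturation.
          return 1.0 / (1.0 + np.exp(-k * (t - t0)))

      # Hypothetical adoption fractions of a linguistic innovation, by year.
      years = np.arange(1900, 1960, dtype=float)
      frac = logistic(years, 1930.0, 0.25)
      frac += np.random.default_rng(1).normal(0.0, 0.02, years.size)

      (t0, k), _ = curve_fit(logistic, years, frac, p0=(1925.0, 0.1))
      print(t0, k)   # midpoint year and growth rate of the fitted S-curve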

  1. Design of multiplex calibrant plasmids, their use in GMO detection and the limit of their applicability for quantitative purposes owing to competition effects.

    PubMed

    Debode, Frédéric; Marien, Aline; Janssen, Eric; Berben, Gilbert

    2010-03-01

    Five double-target multiplex plasmids to be used as calibrants for GMO quantification were constructed. They were composed of two modified targets associated in tandem in the same plasmid: (1) a part of the soybean lectin gene and (2) a part of the transgenic construction of the GTS40-3-2 event. Modifications were performed in such a way that each target could be amplified with the same primers as those for the original target from which they were derived but such that each was specifically detected with an appropriate probe. Sequence modifications were done to keep the parameters of the new target as similar as possible to those of its original sequence. The plasmids were designed to be used either in separate reactions or in multiplex reactions. Evidence is given that with each of the five different plasmids used in separate wells as a calibrant for a different copy number, a calibration curve can be built. When the targets were amplified together (in multiplex) and at different concentrations inside the same well, the calibration curves showed that there was a competition effect between the targets, and this limits the range of copy numbers for calibration to a maximum of 2 orders of magnitude. Another possible application of multiplex plasmids is discussed.
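
    The calibration-curve idea here is the standard real-time PCR dilution series; a minimal sketch (hypothetical Cq values, not the authors' data) of building such a curve and reading back a copy number:

      import numpy as np

      # Hypothetical quantification-cycle (Cq) values for a serial dilution
      # of a single-target calibrant plasmid.
      copies = np.array([1e2, 1e3, 1e4, 1e5, 1e6])
      cq = np.array([33.1, 29.8, 26.4, 23.1, 19.7])

      slope, intercept = np.polyfit(np.log10(copies), cq, 1)
      efficiency = 10 ** (-1.0 / slope) - 1.0   # close to 1.0 for ~100% PCR efficiency

      unknown_cq = 27.5
      estimated_copies = 10 ** ((unknown_cq - intercept) / slope)
      print(estimated_copies, efficiency)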

  2. Waveguide Calibrator for Multi-Element Probe Calibration

    NASA Technical Reports Server (NTRS)

    Sommerfeldt, Scott D.; Blotter, Jonathan D.

    2007-01-01

    A calibrator, referred to as the spider design, can be used to calibrate probes incorporating multiple acoustic sensing elements. The application is an acoustic energy density probe, although the calibrator can be used for other types of acoustic probes. The calibrator relies on the use of acoustic waveguide technology to produce the same acoustic field at each of the sensing elements. As a result, the sensing elements can be separated from each other, but still calibrated through use of the acoustic waveguides. Standard calibration techniques involve placement of an individual microphone into a small cavity with a known, uniform pressure to perform the calibration. If a cavity is manufactured with sufficient size to insert the energy density probe, it has been found that a uniform pressure field can only be created at very low frequencies, due to the size of the probe. The size of the energy density probe prevents one from having the same pressure at each microphone in a cavity, due to the wave effects. The "spider" design probe is effective in calibrating multiple microphones separated from each other. The spider design ensures that the same wave effects exist for each microphone, each with an individual sound path. The calibrator's speaker is mounted at one end of a 14-cm-long, 4.1-cm-diameter plane-wave tube. This length was chosen so that the first evanescent cross mode of the plane-wave tube would be attenuated by about 90 dB, thus leaving just the plane wave at the termination plane of the tube. The tube terminates with a small, acrylic plate with five holes placed symmetrically about the axis of the speaker. Four ports are included for the four microphones on the probe. The fifth port is included for the pre-calibrated reference microphone. The ports in the acrylic plate are in turn connected to the probe sensing elements via flexible PVC tubes. These five tubes are the same length, so the acoustic wave effects are the same in each tube. The

  3. Results of the 2001 JPL Balloon Flight Solar Cell Calibration Program

    NASA Technical Reports Server (NTRS)

    Anspaugh, B. E.; Mueller, R. L.

    2002-01-01

    The 2001 solar cell calibration balloon flight campaign consisted of two flights, which occurred on June 26, 2001, and July 4, 2001. Fifty-nine modules were carried to an altitude of approximately 120,000 ft (36.6 km). Full I-V curves were measured on nineteen of these modules, and output at a fixed load was measured on thirty-two modules (forty-six cells), with some modules repeated on the second flight. Nine modules were flown for temperature measurement only. The data from the fixed load cells on the first flight was not usable. The temperature dependence of the first-flight data was erratic and we were unable to find a way to extract accurate calibration values. The I-V data from the first flight was good, however, and all data from the second flight was also good. The data was corrected to 28 C and to 1 AU (1.496 × 10^8 km). The calibrated cells have been returned to their owners and can now be used as reference standards in simulator testing of cells and arrays.
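
    The corrections mentioned (referring measurements to 1 AU and 28 C) are, in essence, an inverse-square distance scaling and a linear temperature adjustment; a hedged sketch with a hypothetical temperature coefficient, not values from the flight report:

      def correct_to_standard(i_measured_mA, distance_AU, temp_C,
                              alpha_mA_per_C=0.05, t_ref_C=28.0):
          # Refer a measured short-circuit current to 1 AU and 28 C.
          # Solar intensity falls off as 1/d^2, so multiplying by d^2 (d in AU)
          # scales the reading to 1 AU; alpha is an assumed, illustrative
          # temperature coefficient.
          i_1au = i_measured_mA * distance_AU ** 2
          return i_1au + alpha_mA_per_C * (t_ref_C - temp_C)

      print(correct_to_standard(152.3, distance_AU=1.016, temp_C=35.0))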

  4. A Novel Multi-Camera Calibration Method based on Flat Refractive Geometry

    NASA Astrophysics Data System (ADS)

    Huang, S.; Feng, M. C.; Zheng, T. X.; Li, F.; Wang, J. Q.; Xiao, L. F.

    2018-03-01

    Multi-camera calibration plays an important role in many fields. In this paper, we present a novel multi-camera calibration method based on flat refractive geometry. All cameras can acquire calibration images of a transparent glass calibration board (TGCB) at the same time. The use of a TGCB leads to a refractive phenomenon that can generate calibration error. The theory of flat refractive geometry is employed to eliminate this error, so the new method accounts for the refraction introduced by the TGCB. Moreover, the bundle adjustment method is used to minimize the reprojection error and obtain optimized calibration results. Finally, the four-camera calibration results on real data show that the mean value and standard deviation of the reprojection error of our method are 4.3411e-05 and 0.4553 pixels, respectively. The experimental results show that the proposed method is accurate and reliable.

  5. Results of the 2000 JPL Balloon Flight Solar Cell Calibration Program

    NASA Technical Reports Server (NTRS)

    Anspaugh, B. E.; Mueller, R. L.; Weiss, R. S.

    2001-01-01

    The 2000 solar cell calibration balloon flight campaign consisted of two flights, which occurred on June 27, 2000, and July 5, 2000. All objectives of the flight program were met. Sixty-two modules were carried to an altitude of approximately 120,000 ft (36.6 km). Full I-V curves were measured on sixteen of these modules, and output at a fixed load was measured on thirty-seven modules (forty-six cells), with some modules repeated on the second flight. Nine modules were flown for temperature measurement only. This data was corrected to 28 C and to 1 AU (1.496 × 10^8 km). The calibrated cells have been returned to their owners and can now be used as reference standards in simulator testing of cells and arrays.

  6. Preparation of the calibration unit for LINC-NIRVANA

    NASA Astrophysics Data System (ADS)

    Labadie, Lucas; de Bonis, Fulvio; Egner, Sebastian; Herbst, Tom; Bizenberger, Peter; Kürster, Martin; Delboulé, Alain

    2008-07-01

    We present in this paper the status of the calibration unit for the interferometric infrared imager LINC-NIRVANA that will be installed on the Large Binocular Telescope, Arizona. LINC-NIRVANA will combine high angular resolution (~10 mas in J) and wide field-of-view (up to 2'×2') thanks to the combined use of interferometry and MCAO. The goal of the calibration unit is to provide calibration tools for the different sub-systems of the instrument. We give an overview of the different tasks that are foreseen as well as of the preliminary detailed design. We show some interferometric results obtained with specific fiber splitters optimized for LINC-NIRVANA. The different components of the calibration unit will be used either during the integration phase on site or during the science exploitation phase of the instrument.

  7. Calibration of the ROSAT HRI Spectral Response

    NASA Technical Reports Server (NTRS)

    Prestwich, Andrea H.; Silverman, John; McDowell, Jonathan; Callanan, Paul; Snowden, Steve

    2000-01-01

    The ROSAT High Resolution Imager has a limited (2-band) spectral response. This spectral capability can give X-ray hardness ratios on spatial scales of 5 arcseconds. The spectral response of the center of the detector was calibrated before the launch of ROSAT, but the gain decreases with time and also is a function of position on the detector. To complicate matters further, the satellite is 'wobbled', possibly moving a source across several spatial gain states. These difficulties have prevented the spectral response of the ROSAT High Resolution Imager (HRI) from being used for scientific measurements. We have used Bright Earth data and in-flight calibration sources to map the spatial and temporal gain changes, and written software which will allow ROSAT users to generate a calibrated XSPEC (an X-ray spectral fitting package) response matrix and hence determine a calibrated hardness ratio. In this report, we describe the calibration procedure and show how to obtain a response matrix. In Section 2 we give an overview of the calibration procedure, and in Section 3 we give a summary of HRI spatial and temporal gain variations. Section 4 describes the routines used to determine the gain distribution of a source. In Sections 5 and 6, we describe in detail how the Bright Earth database and calibration sources are used to derive a corrected response matrix for a given observation. Finally, Section 7 describes how to use the software.

  8. Hidden Connections between Regression Models of Strain-Gage Balance Calibration Data

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert

    2013-01-01

    Hidden connections between regression models of wind tunnel strain-gage balance calibration data are investigated. These connections become visible whenever balance calibration data is supplied in its design format and both the Iterative and Non-Iterative Method are used to process the data. First, it is shown how the regression coefficients of the fitted balance loads of a force balance can be approximated by using the corresponding regression coefficients of the fitted strain-gage outputs. Then, data from the manual calibration of the Ames MK40 six-component force balance is chosen to illustrate how estimates of the regression coefficients of the fitted balance loads can be obtained from the regression coefficients of the fitted strain-gage outputs. The study illustrates that load predictions obtained by applying the Iterative or the Non-Iterative Method originate from two related regression solutions of the balance calibration data as long as balance loads are given in the design format of the balance, gage outputs behave highly linear, strict statistical quality metrics are used to assess regression models of the data, and regression model term combinations of the fitted loads and gage outputs can be obtained by a simple variable exchange.

  9. Calibrating the IXPE observatory from ground to space

    NASA Astrophysics Data System (ADS)

    Muleri, Fabio; Baldini, Luca; Baumgartner, Wayne; Evangelista, Yuri; Fabiani, Sergio; Kolodziejczak, Jeffery; Latronico, Luca; Lefevre, Carlo; O'Dell, Stephen L.; Ramsey, Brian; Sgrò, Carmelo; Soffitta, Paolo; Tennant, Allyn; Weisskopf, Martin C.

    2017-08-01

    The Imaging X-ray Polarimetry Explorer (IXPE) will be the next SMEX mission launched by NASA in 2021 in collaboration with the Italian Space Agency (ASI). IXPE will perform groundbreaking measurements of imaging polarization in X-rays for a number of different classes of sources with three identical telescopes, finally (re)opening a window in the high energy Universe after more than 40 years since the first pioneering results. The unprecedented sensitivity of IXPE to polarization poses peculiar requirements on the payload calibration, e.g. the use of polarized and completely unpolarized radiation, both on ground and in orbit, and cannot rely on a systematic comparison with results obtained by previous observatories. In this paper, we will present the IXPE calibration plan, describing both the calibrations that will be performed on the detectors at INAF-IAPS in Rome (Italy) and the calibration of the mirror and detector assemblies which will be carried out at Marshall Space Flight Center in Huntsville, Alabama. On-orbit calibrations, performed with calibration sources mounted on a filter wheel and placed in front of each detector when necessary, will be presented as well.

  10. Calibrating the IXPE Observatory from Ground to Space

    NASA Technical Reports Server (NTRS)

    Muleri, Fabio; Baldini, Luca; Baumgartner, Wayne; Evangelista, Yuri; Fabiani, Sergio; Kolodziejczak, Jeffery; Latronico, Luca; Lefevre, Carlo; O'Dell, Stephen L.; Ramsey, Brian; hide

    2017-01-01

    The Imaging X-ray Polarimetry Explorer (IXPE) will be the next SMEX mission launched by NASA in 2021 in collaboration with the Italian Space Agency (ASI). IXPE will perform groundbreaking measurements of imaging polarization in X-rays for a number of different classes of sources with three identical telescopes, finally (re)opening a window in the high energy Universe after more than 40 years since the first pioneering results. The unprecedented sensitivity of IXPE to polarization poses peculiar requirements on the payload calibration, e.g. the use of polarized and completely unpolarized radiation, both on ground and in orbit, and cannot rely on a systematic comparison with results obtained by previous observatories. In this paper, we will present the IXPE calibration plan, describing both the calibrations that will be performed on the detectors at INAF-IAPS in Rome (Italy) and the calibration of the mirror and detector assemblies which will be carried out at Marshall Space Flight Center in Huntsville, Alabama. On-orbit calibrations, performed with calibration sources mounted on a filter wheel and placed in front of each detector when necessary, will be presented as well.

  11. Holomorphic curves in surfaces of general type.

    PubMed Central

    Lu, S S; Yau, S T

    1990-01-01

    This note answers some questions on holomorphic curves and their distribution in an algebraic surface of positive index. More specifically, we exploit the existence of natural negatively curved "pseudo-Finsler" metrics on a surface S of general type whose Chern numbers satisfy c1^2 > 2c2 to show that a holomorphic map of a Riemann surface to S whose image is not in any rational or elliptic curve must satisfy a distance decreasing property with respect to these metrics. We show as a consequence that such a map extends over isolated punctures. So assuming that the Riemann surface is obtained from a compact one of genus q by removing a finite number of points, then the map is actually algebraic and defines a compact holomorphic curve in S. Furthermore, the degree of the curve with respect to a fixed polarization is shown to be bounded above by a multiple of q - 1 irrespective of the map. PMID:11607050

  12. Calibration of Galileo signals for time metrology.

    PubMed

    Defraigne, Pascale; Aerts, Wim; Cerretto, Giancarlo; Cantoni, Elena; Sleewaegen, Jean-Marie

    2014-12-01

    Using global navigation satellite system (GNSS) signals for accurate timing and time transfer requires knowledge of all electrical delays of the signals inside the receiving system. GNSS stations dedicated to timing or time transfer are classically calibrated only for Global Positioning System (GPS) signals. This paper proposes a procedure to determine the hardware delays of a GNSS receiving station for Galileo signals, once the delays of the GPS signals are known. This approach makes use of the broadcast satellite inter-signal biases, and is based on the ionospheric delay measured from dual-frequency combinations of GPS and Galileo signals. The uncertainty of the hardware delays determined in this way is estimated at 3.7 ns for each isolated code in the L5 frequency band, and 4.2 ns for the ionosphere-free combination of E1 with a code of the L5 frequency band. For the calibration of a time transfer link between two stations, another approach can be used, based on the difference between the common-view time transfer results obtained with calibrated GPS data and with uncalibrated Galileo data. It is shown that the results obtained with this approach or with the ionospheric method are equivalent.

  13. Calibration strategies for the direct determination of Ca, K, and Mg in commercial samples of powdered milk and solid dietary supplements using laser-induced breakdown spectroscopy (LIBS).

    PubMed

    Dos Santos Augusto, Amanda; Barsanelli, Paulo Lopes; Pereira, Fabiola Manhas Verbi; Pereira-Filho, Edenir Rodrigues

    2017-04-01

    This study describes the application of laser-induced breakdown spectroscopy (LIBS) for the direct determination of Ca, K and Mg in powdered milk and solid dietary supplements. The following two calibration strategies were applied: (i) use of the samples to calculate calibration models (milk) and (ii) use of sample mixtures (supplements) to obtain a calibration curve. In both cases, reference values obtained from inductively coupled plasma optical emission spectroscopy (ICP OES) after acid digestion were used. The emission line selection from LIBS spectra was accomplished by analysing the regression coefficients of partial least squares (PLS) regression models, and wavelengths of 534.947, 766.490 and 285.213 nm were chosen for Ca, K and Mg, respectively. In the case of the determination of Ca in supplements, it was necessary to perform a dilution (10-fold) of the standards and samples to minimize matrix interference. The average accuracy for powdered milk ranged from 60% to 168% for Ca, 77% to 152% for K and 76% to 131% for Mg. In the case of dietary supplements, standard error of prediction (SEP) varied from 295 (Mg) to 3782 mg kg-1 (Ca). The proposed method presented an analytical frequency of around 60 samples per hour and the step of sample manipulation was drastically reduced, with no generation of toxic chemical residues. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Fabrication, characterization, and modeling of comixed films for NXS calibration targets [Fabrication and metrology of the NXS calibration targets

    DOE PAGES

    Jaquez, Javier; Farrell, Mike; Huang, Haibo; ...

    2016-08-01

    In 2014/2015 at the Omega laser facility, several experiments took place to calibrate the National Ignition Facility (NIF) X-ray spectrometer (NXS), which is used for high-resolution time-resolved spectroscopic experiments at NIF. The spectrometer allows experimentalists to measure the X-ray energy emitted from high-energy targets, which is used to understand key data such as mixing of materials in highly compressed fuel. The purpose of the experiments at Omega was to obtain information on the instrument performance and to deliver an absolute photometric calibration of the NXS before it was deployed at NIF. The X-ray emission sources fabricated for instrument calibration were 1-mm fused silica spheres with precisely known alloy composition coatings of Si/Ag/Mo, Ti/Cr/Ag, Cr/Ni/Zn, and Zn/Zr, which have emission in the 2- to 18-keV range. Critical to the spectrometer calibration is a known atomic composition of elements with low uncertainty for each calibration sphere. This study discusses the setup, fabrication, and precision metrology of these spheres as well as some interesting findings on the ternary magnetron-sputtered alloy structure.

  15. Fabrication, characterization, and modeling of comixed films for NXS calibration targets [Fabrication and metrology of the NXS calibration targets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jaquez, Javier; Farrell, Mike; Huang, Haibo

    In 2014/2015 at the Omega laser facility, several experiments took place to calibrate the National Ignition Facility (NIF) X-ray spectrometer (NXS), which is used for high-resolution time-resolved spectroscopic experiments at NIF. The spectrometer allows experimentalists to measure the X-ray energy emitted from high-energy targets, which is used to understand key data such as mixing of materials in highly compressed fuel. The purpose of the experiments at Omega was to obtain information on the instrument performance and to deliver an absolute photometric calibration of the NXS before it was deployed at NIF. The X-ray emission sources fabricated for instrument calibration were 1-mm fused silica spheres with precisely known alloy composition coatings of Si/Ag/Mo, Ti/Cr/Ag, Cr/Ni/Zn, and Zn/Zr, which have emission in the 2- to 18-keV range. Critical to the spectrometer calibration is a known atomic composition of elements with low uncertainty for each calibration sphere. This study discusses the setup, fabrication, and precision metrology of these spheres as well as some interesting findings on the ternary magnetron-sputtered alloy structure.

  16. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization

    NASA Astrophysics Data System (ADS)

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj

    2015-03-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious. They require acquisition of multiple images of a target pattern in its entirety to produce satisfactory results. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE) as a measure of camera calibration accuracy for our optical tracking-based AR system was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising to be applied to our augmented reality visualization system for laparoscopic surgery.
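
    For comparison, the conventional multi-image baseline mentioned above can be reproduced with OpenCV's standard routine; the sketch below assumes a 9x6 chessboard target and a hypothetical folder of captured frames, and is not the rdCalib single-image method.

      import glob
      import cv2
      import numpy as np

      pattern = (9, 6)   # inner corners of the chessboard target
      objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
      objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

      obj_points, img_points, size = [], [], None
      for path in glob.glob("calib_images/*.png"):      # hypothetical image set
          gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
          found, corners = cv2.findChessboardCorners(gray, pattern)
          if found:
              obj_points.append(objp)
              img_points.append(corners)
              size = gray.shape[::-1]

      # Intrinsic matrix and lens distortion estimated from all detected views.
      rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points,
                                                       size, None, None)
      print(rms, K)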

  17. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization

    PubMed Central

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj

    2017-01-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious. They require acquisition of multiple images of a target pattern in its entirety to produce satisfactory results. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE) as a measure of camera calibration accuracy for our optical tracking-based AR system was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising to be applied to our augmented reality visualization system for laparoscopic surgery. PMID:28943703

  18. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization.

    PubMed

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D; Shekhar, Raj

    2015-03-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious. They require acquisition of multiple images of a target pattern in its entirety to produce satisfactory results. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE) as a measure of camera calibration accuracy for our optical tracking-based AR system was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising to be applied to our augmented reality visualization system for laparoscopic surgery.

  19. Use of the Airborne Visible/Infrared Imaging Spectrometer to calibrate the optical sensor on board the Japanese Earth Resources Satellite-1

    NASA Technical Reports Server (NTRS)

    Green, Robert O.; Conel, James E.; Vandenbosch, Jeannette; Shimada, Masanobu

    1993-01-01

    We describe an experiment to calibrate the optical sensor (OPS) on board the Japanese Earth Resources Satellite-1 with data acquired by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). On 27 Aug. 1992 both the OPS and AVIRIS acquired data concurrently over a calibration target on the surface of Rogers Dry Lake, California. The high spectral resolution measurements of AVIRIS have been convolved to the spectral response curves of the OPS. These data in conjunction with the corresponding OPS digitized numbers have been used to generate the radiometric calibration coefficients for the eight OPS bands. This experiment establishes the suitability of AVIRIS for the calibration of spaceborne sensors in the 400 to 2500 nm spectral region.
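
    The key operation described, convolving a high-resolution spectrum with a broadband sensor's relative spectral response to predict its band radiance, can be sketched as follows (all wavelengths, spectra, and the digital number are hypothetical):

      import numpy as np

      def band_average(wl_nm, radiance, rsr_wl_nm, rsr):
          # Band-average the spectrum with one band's relative spectral response.
          rsr_i = np.interp(wl_nm, rsr_wl_nm, rsr, left=0.0, right=0.0)
          return np.trapz(radiance * rsr_i, wl_nm) / np.trapz(rsr_i, wl_nm)

      wl = np.arange(400.0, 2500.0, 10.0)                 # nm
      spectrum = 100.0 * np.exp(-wl / 1500.0)             # illustrative radiance
      band_wl = np.arange(520.0, 601.0, 10.0)
      band_rsr = np.exp(-0.5 * ((band_wl - 560.0) / 25.0) ** 2)

      predicted = band_average(wl, spectrum, band_wl, band_rsr)
      gain = predicted / 512.0   # hypothetical mean digital number over the target
      print(predicted, gain)     # band radiance and radiometric calibration coefficient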

  20. Self-calibration performance in stereoscopic PIV acquired in a transonic wind tunnel

    DOE PAGES

    Beresh, Steven J.; Wagner, Justin L.; Smith, Barton L.

    2016-03-16

    Three stereoscopic PIV experiments have been examined to test the effectiveness of self-calibration under varied circumstances. Our measurements taken in a streamwise plane yielded a robust self-calibration that returned common results regardless of the specific calibration procedure, but measurements in the crossplane exhibited substantial velocity bias errors whose nature was sensitive to the particulars of the self-calibration approach. Self-calibration is complicated by thick laser sheets and large stereoscopic camera angles and further exacerbated by small particle image diameters and high particle seeding density. In spite of the different answers obtained by varied self-calibrations, each implementation locked onto an apparently valid solution with small residual disparity and converged adjustment of the calibration plane. Thus, the convergence of self-calibration on a solution with small disparity is not sufficient to indicate negligible velocity error due to the stereo calibration.

  1. Calibration of force actuators on an adaptive secondary prototype.

    PubMed

    Ricci, Davide; Riccardi, Armando; Zanotti, Daniela

    2008-07-10

    In the context of the Large Binocular Telescope project, we present the results of force actuator calibrations performed on an adaptive secondary prototype called P45, a thin deformable glass with magnets glued onto its back. Electromagnetic actuators, controlled in a closed loop with a system of internal metrology based on capacitive sensors, continuously deform its shape to correct the distortions of the wavefront. Calibrations of the force actuators are needed because of the differences between driven forces and measured forces. We describe the calibration procedures and the results, obtained with errors of less than 1.5%.

  2. Alkali trace elements in Gale crater, Mars, with ChemCam: Calibration update and geological implications

    NASA Astrophysics Data System (ADS)

    Payré, V.; Fabre, C.; Cousin, A.; Sautter, V.; Wiens, R. C.; Forni, O.; Gasnault, O.; Mangold, N.; Meslin, P.-Y.; Lasue, J.; Ollila, A.; Rapin, W.; Maurice, S.; Nachon, M.; Le Deit, L.; Lanza, N.; Clegg, S.

    2017-03-01

    The Chemistry Camera (ChemCam) instrument onboard Curiosity can detect minor and trace elements such as lithium, strontium, rubidium, and barium. Their abundances can provide some insights about Mars' magmatic history and sedimentary processes. We focus on developing new quantitative models for these elements by using a new laboratory database (more than 400 samples) that displays diverse compositions that are more relevant for Gale crater than the previous ChemCam database. These models are based on univariate calibration curves. For each element, the best model is selected depending on the results obtained by using the ChemCam calibration targets onboard Curiosity. New quantifications of Li, Sr, Rb, and Ba in Gale samples have been obtained for the first 1000 Martian days. Comparing these data in alkaline and magnesian rocks with the felsic and mafic clasts from the Martian meteorite NWA7533—from approximately the same geologic period—we observe a similar behavior: Sr, Rb, and Ba are more concentrated in soluble- and incompatible-element-rich mineral phases (Si, Al, and alkali-rich). Correlations between these trace elements and potassium in materials analyzed by ChemCam reveal a strong affinity with K-bearing phases such as feldspars, K-phyllosilicates, and potentially micas in igneous and sedimentary rocks. However, lithium is found in comparable abundances in alkali-rich and magnesium-rich Gale rocks. This very soluble element can be associated with both alkali and Mg-Fe phases such as pyroxene and feldspar. These observations of Li, Sr, Rb, and Ba mineralogical associations highlight their substitution with potassium and their incompatibility in magmatic melts.

  3. VizieR Online Data Catalog: SNLS and SDSS SN surveys photometric calibration (Betoule+, 2013)

    NASA Astrophysics Data System (ADS)

    Betoule, M.; Marriner, J.; Regnault, N.; Cuillandre, J.-C.; Astier, P.; Guy, J.; Balland, C.; El Hage, P.; Hardin, D.; Kessler, R.; Le Guillou, L.; Mosher, J.; Pain, R.; Rocci, P.-F.; Sako, M.; Schahmaneche, K.

    2012-11-01

    We present a joint photometric calibration for the SNLS and the SDSS supernova surveys. Our main deliverables are catalogs of natural AB magnitudes for a large set of selected tertiary standard stars covering the science fields of both surveys. Those catalogs are calibrated to the AB flux scale through observations of 5 primary spectrophotometric standard stars, for which HST-STIS spectra are available in the CALSPEC database. The estimates of the uncertainties associated with this calibration are delivered as a single covariance matrix. We also provide a model of the transmission efficiency of the SNLS photometric instrument MegaCam. Those transmission functions are required for the interpretation of MegaCam natural magnitudes in terms of physical fluxes. Similar curves for the SDSS photometric instrument have been published in Doi et al. (2010AJ....139.1628D). Lastly, we release the measured magnitudes of the five CALSPEC standard stars in the magnitude system of the tertiary catalogs. This makes it possible to update the calibration of the tertiary catalogs if CALSPEC spectra for the primary standards are revised. (11 data files).

  4. Calibration of neutron detectors on the Joint European Torus.

    PubMed

    Batistoni, Paola; Popovichev, S; Conroy, S; Lengar, I; Čufar, A; Abhangi, M; Snoj, L; Horton, L

    2017-10-01

    The present paper describes the findings of the calibration of the neutron yield monitors on the Joint European Torus (JET) performed in 2013 using a 252 Cf source deployed inside the torus by the remote handling system, with particular regard to the calibration of fission chambers which provide the time resolved neutron yield from JET plasmas. The experimental data obtained in toroidal, radial, and vertical scans are presented. These data are first analysed following an analytical approach adopted in the previous neutron calibrations at JET. In this way, a calibration function for the volumetric plasma source is derived which allows us to understand the importance of the different plasma regions and of different spatial profiles of neutron emissivity on fission chamber response. Neutronics analyses have also been performed to calculate the correction factors needed to derive the plasma calibration factors taking into account the different energy spectrum and angular emission distribution of the calibrating (point) 252 Cf source, the discrete positions compared to the plasma volumetric source, and the calibration circumstances. All correction factors are presented and discussed. We discuss also the lessons learnt which are the basis for the on-going 14 MeV neutron calibration at JET and for ITER.

  5. Vicarious calibration of GOES imager visible channel using the moon

    USGS Publications Warehouse

    Wu, X.; Stone, T.C.; Yu, F.; Han, D.

    2006-01-01

    In this paper, we study the feasibility of a method for vicarious calibration of the GOES Imager visible channel using the Moon. The measured Moon irradiances from 26 unclipped Moon images, which exhausted all the potential Moon appearances between July 1998 and December 2005, together with the seven scheduled Moon observations obtained after November 2005, were compared with the USGS lunar model results to estimate the degradation rate of the GOES-10 Imager visible channel. A total of nine methods of determining the space count and identifying lunar pixels were employed in this study to measure the GOES-10 Moon irradiance. Our results show that the selected-mean space count combined with masking to identify lunar pixels appears to be the best method. Eight of the nine resulting degradation rates range from 4.5%/year to 5.0%/year during the nearly nine years of data, which are consistent with most other degradation rates obtained for GOES-10 based on different references. In particular, the degradation rate from the Moon-based calibration (4.5%/year) agrees very well with the MODIS-based calibration (4.4%/year) over the same period, confirming the capability of relative and absolute calibration based on the Moon. Finally, our estimate of lunar calibration precision as applied to GOES-10 is 3.5%.
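
    A degradation rate of this kind is typically estimated by fitting the trend of measured-to-modeled irradiance ratios against time; a minimal sketch with invented numbers (not the GOES-10 data), assuming an exponential decay:

      import numpy as np

      # Hypothetical ratios of measured to modeled lunar irradiance vs. time.
      years_since_launch = np.array([0.5, 1.8, 3.1, 4.6, 6.0, 7.3])
      ratio = np.array([0.97, 0.91, 0.85, 0.79, 0.74, 0.69])

      # Exponential decay, ratio = A * exp(-r * t), fitted in log space.
      rate, offset = np.polyfit(years_since_launch, -np.log(ratio), 1)
      print(100.0 * (1.0 - np.exp(-rate)))   # approximate degradation in % per year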

  6. Bayesian calibration for electrochemical thermal model of lithium-ion cells

    NASA Astrophysics Data System (ADS)

    Tagade, Piyush; Hariharan, Krishnan S.; Basu, Suman; Verma, Mohan Kumar Singh; Kolake, Subramanya Mayya; Song, Taewon; Oh, Dukjin; Yeo, Taejung; Doo, Seokgwang

    2016-07-01

    The pseudo-two-dimensional electrochemical thermal (P2D-ECT) model contains many parameters that are difficult to evaluate experimentally. Estimation of these model parameters is challenging due to the computational cost and the transient nature of the model. Due to a lack of complete physical understanding, this issue is aggravated at extreme conditions such as low temperature (LT) operation. This paper presents a Bayesian calibration framework for estimation of the P2D-ECT model parameters. The framework uses a matrix variate Gaussian process representation to obtain a computationally tractable formulation for calibration of the transient model. Performance of the framework is investigated for calibration of the P2D-ECT model across a range of temperatures (333 K to 263 K) and operating protocols. In the absence of complete physical understanding, the framework also quantifies structural uncertainty in the calibrated model. This information is used by the framework to test the validity of new physical phenomena before incorporation in the model. This capability is demonstrated by introducing temperature dependence of Bruggeman's coefficient and lithium plating formation at LT. With the incorporation of new physics, the calibrated P2D-ECT model accurately predicts the cell voltage with high confidence. The accurate predictions are used to obtain new insights into low temperature lithium ion cell behavior.

  7. Numerical simulations of flow fields through conventionally controlled wind turbines & wind farms

    NASA Astrophysics Data System (ADS)

    Emre Yilmaz, Ali; Meyers, Johan

    2014-06-01

    In the current study, an Actuator-Line Model (ALM) is implemented in our in-house pseudo-spectral LES solver SP-WIND, including a turbine controller. Below rated wind speed, turbines are controlled by a standard-torque-controller aiming at maximum power extraction from the wind. Above rated wind speed, the extracted power is limited by a blade pitch controller which is based on a proportional-integral type control algorithm. This model is used to perform a series of single turbine and wind farm simulations using the NREL 5MW turbine. First of all, we focus on below-rated wind speed, and investigate the effect of the farm layout on the controller calibration curves. These calibration curves are expressed in terms of nondimensional torque and rotational speed, using the mean turbine-disk velocity as reference. We show that this normalization leads to calibration curves that are independent of wind speed, but the calibration curves do depend on the farm layout, in particular for tightly spaced farms. Compared to turbines in a lone-standing set-up, turbines in a farm experience a different wind distribution over the rotor due to the farm boundary-layer interaction. We demonstrate this for fully developed wind-farm boundary layers with aligned turbine arrangements at different spacings (5D, 7D, 9D). Further we also compare calibration curves obtained from full farm simulations with calibration curves that can be obtained at a much lower cost using a minimal flow unit.
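
    The below-rated torque law and one plausible normalization of the resulting calibration curve can be sketched as follows; the gain, disk velocity, and rotor radius are illustrative values, and the exact nondimensionalization used in the study may differ.

      import numpy as np

      def generator_torque(omega_rad_s, k_opt):
          # Standard below-rated law: torque proportional to rotor speed squared,
          # which tracks the optimal tip-speed ratio in steady inflow.
          return k_opt * omega_rad_s ** 2

      def nondimensionalize(torque, omega, disk_velocity, rho, rotor_radius):
          # Normalize torque and rotational speed with the mean turbine-disk velocity.
          area = np.pi * rotor_radius ** 2
          torque_star = torque / (0.5 * rho * area * rotor_radius * disk_velocity ** 2)
          omega_star = omega * rotor_radius / disk_velocity
          return torque_star, omega_star

      T = generator_torque(omega_rad_s=1.0, k_opt=2.3e6)   # illustrative gain
      print(nondimensionalize(T, 1.0, disk_velocity=8.0, rho=1.225, rotor_radius=63.0))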

  8. Hybrid Geometric Calibration Method for Multi-Platform Spaceborne SAR Image with Sparse Gcps

    NASA Astrophysics Data System (ADS)

    Lv, G.; Tang, X.; Ai, B.; Li, T.; Chen, Q.

    2018-04-01

    Geometric calibration provides high-accuracy geometric coordinates for spaceborne SAR images by refining the geometric parameters of the Range-Doppler model using ground control points (GCPs). However, it is very difficult to obtain GCPs covering large-scale areas, especially in mountainous regions. In addition, the traditional calibration method is only used for single-platform SAR images and cannot support hybrid geometric calibration for multi-platform images. To solve the above problems, a hybrid geometric calibration method for multi-platform spaceborne SAR images with sparse GCPs is proposed in this paper. First, we calibrate the master image that contains GCPs. Secondly, a point tracking algorithm is used to obtain the tie points (TPs) between the master and slave images. Finally, we calibrate the slave images using the TPs as GCPs. We take the Beijing-Tianjin-Hebei region as an example to study the hybrid geometric calibration method using 3 TerraSAR-X images, 3 TanDEM-X images and 5 GF-3 images covering more than 235 kilometers in the north-south direction. Geometric calibration of all images is completed using only 5 GCPs. GPS data extracted from a GNSS receiver are used to assess the planimetric accuracy after calibration. The results after geometric calibration with sparse GCPs show that the geometric positioning accuracy is 3 m for TSX/TDX images and 7.5 m for GF-3 images.

  9. Spectral calibration of EBT3 and HD-V2 radiochromic film response at high dose using 20 MeV proton beams

    NASA Astrophysics Data System (ADS)

    Feng, Yiwei; Tiedje, Henry F.; Gagnon, Katherine; Fedosejevs, Robert

    2018-04-01

    Radiochromic film is used extensively in many medical, industrial, and scientific applications. In particular, the film is used in analysis of proton generation and in high intensity laser-plasma experiments where very high dose levels can be obtained. The present study reports calibration of the dose response of Gafchromic EBT3 and HD-V2 radiochromic films up to high exposure densities. A 2D scanning confocal densitometer system is employed to carry out accurate optical density measurements up to optical density 5 on the exposed films at the peak spectral absorption wavelengths. Various wavelengths from 400 to 740 nm are also scanned to extend the practical dose range of such films by measuring the response at wavelengths removed from the peak response wavelengths. Calibration curves for the optical density versus exposure dose are determined and can be used for quantitative evaluation of measured doses based on the measured optical densities. It was found that blue and UV wavelengths allowed the largest dynamic range though at some trade-off with overall accuracy.
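
    A calibration curve of the kind produced here maps measured net optical density to dose; the sketch below fits one commonly used functional form (a linear-plus-power term) to hypothetical points, which is an assumption and not the film response reported in the study.

      import numpy as np
      from scipy.optimize import curve_fit

      def dose_from_netod(net_od, a, b, n):
          # A commonly used radiochromic calibration form: D = a*netOD + b*netOD**n.
          return a * net_od + b * net_od ** n

      # Hypothetical calibration points: net optical density vs. delivered dose (Gy).
      net_od = np.array([0.05, 0.12, 0.25, 0.40, 0.60, 0.85])
      dose = np.array([0.5, 1.5, 4.0, 8.0, 15.0, 30.0])

      (a, b, n), _ = curve_fit(dose_from_netod, net_od, dose, p0=(10.0, 30.0, 2.5))
      print(dose_from_netod(0.30, a, b, n))   # dose estimate for a measured netOD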

  10. Empirical solution of Green-Ampt equation using soil conservation service - curve number values

    NASA Astrophysics Data System (ADS)

    Grimaldi, S.; Petroselli, A.; Romano, N.

    2012-09-01

    The Soil Conservation Service - Curve Number (SCS-CN) method is a popular and widely used rainfall-runoff model for quantifying the total stream-flow volume generated by storm rainfall, but it is not appropriate for sub-daily resolutions. In order to overcome this drawback, the Green-Ampt (GA) infiltration equation is considered and an empirical solution is proposed and evaluated. The procedure, named CN4GA (Curve Number for Green-Ampt), aims to calibrate the Green-Ampt model parameters by distributing in time the global information provided by the SCS-CN method. The proposed procedure is evaluated by analysing observed rainfall-runoff events; results show that CN4GA seems to provide better agreement with the observed hydrographs than the classic SCS-CN method.
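
    The two models being combined have compact standard forms; a sketch of each (textbook equations, not the CN4GA calibration procedure itself):

      def scs_cn_runoff(p_mm, cn, ia_ratio=0.2):
          # SCS-CN direct runoff depth Q for storm rainfall P (both in mm).
          s = 25400.0 / cn - 254.0          # potential maximum retention (mm)
          ia = ia_ratio * s                 # initial abstraction
          return 0.0 if p_mm <= ia else (p_mm - ia) ** 2 / (p_mm - ia + s)

      def green_ampt_rate(F_mm, K_mm_h, psi_mm, delta_theta):
          # Green-Ampt infiltration capacity: f = K * (1 + psi * d_theta / F).
          return K_mm_h * (1.0 + psi_mm * delta_theta / F_mm)

      print(scs_cn_runoff(p_mm=60.0, cn=75.0))
      print(green_ampt_rate(F_mm=10.0, K_mm_h=10.0, psi_mm=110.0, delta_theta=0.3))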

  11. A Comparison of Two Balance Calibration Model Building Methods

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard; Ulbrich, Norbert

    2007-01-01

    Simulated strain-gage balance calibration data is used to compare the accuracy of two balance calibration model building methods for different noise environments and calibration experiment designs. The first building method obtains a math model for the analysis of balance calibration data after applying a candidate math model search algorithm to the calibration data set. The second building method uses stepwise regression analysis in order to construct a model for the analysis. Four balance calibration data sets were simulated in order to compare the accuracy of the two math model building methods. The simulated data sets were prepared using the traditional One Factor At a Time (OFAT) technique and the Modern Design of Experiments (MDOE) approach. Random and systematic errors were introduced in the simulated calibration data sets in order to study their influence on the math model building methods. Residuals of the fitted calibration responses and other statistical metrics were compared in order to evaluate the calibration models developed with different combinations of noise environment, experiment design, and model building method. Overall, predicted math models and residuals of both math model building methods show very good agreement. Significant differences in model quality were attributable to noise environment, experiment design, and their interaction. Generally, the addition of systematic error significantly degraded the quality of calibration models developed from OFAT data by either method, but MDOE experiment designs were more robust with respect to the introduction of a systematic component of the unexplained variance.

  12. 14 MeV calibration of JET neutron detectors—phase 1: calibration and characterization of the neutron source

    NASA Astrophysics Data System (ADS)

    Batistoni, P.; Popovichev, S.; Cufar, A.; Ghani, Z.; Giacomelli, L.; Jednorog, S.; Klix, A.; Lilley, S.; Laszynska, E.; Loreti, S.; Packer, L.; Peacock, A.; Pillon, M.; Price, R.; Rebai, M.; Rigamonti, D.; Roberts, N.; Tardocchi, M.; Thomas, D.; Contributors, JET

    2018-02-01

    In view of the planned DT operations at JET, a calibration of the JET neutron monitors at 14 MeV neutron energy is needed using a 14 MeV neutron generator deployed inside the vacuum vessel by the JET remote handling system. The target accuracy of this calibration is ±10%, as also required by ITER, where a precise neutron yield measurement is important, e.g. for tritium accountancy. To achieve this accuracy, the 14 MeV neutron generator selected as the calibration source has been fully characterised and calibrated prior to the in-vessel calibration of the JET monitors. This paper describes the measurements performed using different types of neutron detectors, spectrometers, calibrated long counters and activation foils, which allowed us to obtain the neutron emission rate and the anisotropy of the neutron generator, i.e. the neutron flux and energy spectrum dependence on emission angle, and to derive the absolute emission rate in 4π sr. The use of high resolution diamond spectrometers made it possible to resolve the complex features of the neutron energy spectra resulting from the mixed D/T beam ions reacting with the D/T nuclei present in the neutron generator target. As the neutron generator is not a stable neutron source, several monitoring detectors were attached to it by means of an ad hoc mechanical structure to continuously monitor the neutron emission rate during the in-vessel calibration. These monitoring detectors, two diamond diodes and activation foils, have been calibrated in terms of neutrons/counts within ±5% total uncertainty. A neutron source routine has been developed, able to produce the neutron spectra resulting from all possible reactions occurring with the D/T ions in the beam impinging on the Ti D/T target. The neutron energy spectra calculated by combining the source routine with a MCNP model of the neutron generator have been validated by the measurements. These numerical tools will be key in analysing the results from the in-vessel calibration.

  13. Decision curve analysis and external validation of the postoperative Karakiewicz nomogram for renal cell carcinoma based on a large single-center study cohort.

    PubMed

    Zastrow, Stefan; Brookman-May, Sabine; Cong, Thi Anh Phuong; Jurk, Stanislaw; von Bar, Immanuel; Novotny, Vladimir; Wirth, Manfred

    2015-03-01

    To predict the outcome of patients with renal cell carcinoma (RCC) who undergo surgical therapy, risk models and nomograms are valuable tools. External validation on independent datasets is crucial for evaluating the accuracy and generalizability of these models. The objective of the present study was to externally validate the postoperative nomogram developed by Karakiewicz et al. for prediction of cancer-specific survival. A total of 1,480 consecutive patients with a median follow-up of 82 months (IQR 46-128) were included in this analysis, with 268 RCC-specific deaths. Nomogram-estimated survival probabilities were compared with survival probabilities of the actual cohort, and concordance indices were calculated. Calibration plots and decision curve analyses were used for evaluating calibration and clinical net benefit of the nomogram. Concordance between predictions of the nomogram and survival rates of the cohort was 0.911 after 12 months, 0.909 after 24 months, and 0.896 after 60 months. Comparison of predicted probabilities and actual survival estimates with calibration plots showed an overestimation of tumor-specific survival by the nomogram predictions for high-risk patients, although calibration was reasonable for the probability ranges of interest. Decision curve analysis showed a positive net benefit of nomogram predictions for our patient cohort. The postoperative Karakiewicz nomogram provides good concordance in this external cohort and is reasonably calibrated. It may overestimate tumor-specific survival in high-risk patients, which should be kept in mind when counseling patients. A positive net benefit of nomogram predictions was demonstrated.
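
    Decision curve analysis reduces to a simple net-benefit computation at each threshold probability. The snippet below shows the standard net-benefit formula on illustrative arrays; the cohort data and the nomogram itself are not reproduced, and the toy risk model is an assumption of this sketch.

    ```python
    import numpy as np

    def net_benefit(event: np.ndarray, risk: np.ndarray, threshold: float) -> float:
        """Net benefit of treating patients whose predicted risk exceeds `threshold`.

        NB = TP/n - FP/n * pt/(1 - pt), the usual decision-curve formula.
        """
        n = len(event)
        treat = risk >= threshold
        tp = np.sum(treat & (event == 1))
        fp = np.sum(treat & (event == 0))
        return tp / n - fp / n * threshold / (1.0 - threshold)

    # Illustrative data: 1 = cancer-specific death within the horizon, 0 = otherwise
    rng = np.random.default_rng(1)
    risk = rng.uniform(0, 1, 1000)
    event = (rng.uniform(0, 1, 1000) < risk).astype(int)   # well-calibrated toy model
    for pt in (0.1, 0.2, 0.4):
        print(f"threshold {pt:.1f}: net benefit {net_benefit(event, risk, pt):+.3f}")
    ```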

  14. Implicit multiplane 3D camera calibration matrices for stereo image processing

    NASA Astrophysics Data System (ADS)

    McKee, James W.; Burgett, Sherrie J.

    1997-12-01

    By implicit camera calibration, we mean the process of calibrating cameras without explicitly computing their physical parameters. We introduce a new implicit model based on a generalized mapping between an image plane and multiple, parallel calibration planes (usually between four and seven planes). This paper presents a method of computing a relationship between a point on a three-dimensional (3D) object and its corresponding two-dimensional (2D) coordinate in a camera image. This relationship is expanded to form a mapping of points in 3D space to points in image (camera) space and vice versa that requires only matrix multiplication operations. This paper presents the rationale behind the selection of the forms of four matrices and the algorithms to calculate the parameters for the matrices. Two of the matrices are used to map 3D points in object space to 2D points on the CCD camera image plane. The other two matrices are used to map 2D points on the image plane to points on user-defined planes in 3D object space. The mappings include compensation for lens distortion and measurement errors. The number of parameters used can be increased, in a straightforward fashion, to calculate and use as many parameters as needed to obtain a user-desired accuracy. Previous methods of camera calibration use a fixed number of parameters, which can limit the obtainable accuracy, and most require the solution of nonlinear equations. The procedure presented can be used to calibrate a single camera to make 2D measurements or calibrate stereo cameras to make 3D measurements. Positional accuracy of better than 3 parts in 10,000 has been achieved. The algorithms in this paper were developed and are implemented in MATLAB (a registered trademark of The MathWorks, Inc.). We have developed a system to analyze the path of optical fiber during high speed payout (unwinding) of optical fiber off a bobbin. This requires recording and analyzing high speed (5 microsecond exposure time

  15. On the Long-Term Stability of Microwave Radiometers Using Noise Diodes for Calibration

    NASA Technical Reports Server (NTRS)

    Brown, Shannon T.; Desai, Shailen; Lu, Wenwen; Tanner, Alan B.

    2007-01-01

    Results are presented from the long-term monitoring and calibration of the National Aeronautics and Space Administration Jason Microwave Radiometer (JMR) on the Jason-1 ocean altimetry satellite and the ground-based Advanced Water Vapor Radiometers (AWVRs) developed for the Cassini Gravity Wave Experiment. Both radiometers retrieve the wet tropospheric path delay (PD) of the atmosphere and use internal noise diodes (NDs) for gain calibration. The JMR is the first radiometer to be flown in space that uses NDs for calibration. External calibration techniques are used to derive a time series of ND brightness for both instruments spanning more than four years. For the JMR, an optimal estimator is used to find the set of calibration coefficients that minimize the root-mean-square difference between the JMR brightness temperatures and the on-Earth hot and cold references. For the AWVR, continuous tip curves are used to derive the ND brightness. For the JMR and AWVR, both of which contain three redundant NDs per channel, it was observed that some NDs were very stable, whereas others experienced jumps and drifts in their effective brightness. Over the four-year time period, the ND stability ranged from 0.2% to 3% among the diodes for both instruments. The presented recalibration methodology demonstrates that long-term calibration stability can be achieved with frequent recalibration of the diodes using external calibration techniques. The JMR PD drift compared to ground truth over the four years since launch was reduced from 3.9 to -0.01 mm/year with the recalibrated ND time series. The JMR brightness temperature calibration stability is estimated to be 0.25 K over ten days.

  16. Bedload Rating and Flow Competence Curves Vary With Watershed and Bed Material Parameters

    NASA Astrophysics Data System (ADS)

    Bunte, K.; Abt, S. R.

    2003-12-01

    Bedload transport rating curves and flow competence curves (largest bedload size for specified flow) are usually not known for streams unless a large number of bedload samples has been collected and analyzed. However, this information is necessary for assessing instream flow needs and stream responses to watershed effects. This study therefore analyzed whether bedload transport rating and flow competence curves were related to stream parameters. Bedload transport rating curves and flow competence curves were obtained from extensive bedload sampling in six gravel- and cobble-bed mountain streams. Samples were collected using bedload traps and a large net sampler, both of which provide steep and relatively well-defined bedload rating and flow competence curves due to a long sampling duration, a large sampler opening and a large sampler capacity. The sampled streams have snowmelt regimes, steep (1-9%) gradients, and watersheds that are mainly forested and relatively undisturbed with basin area sizes of 8 to 105 km2. The channels are slightly incised and can contain flows of more than 1.5 times bankfull with little overbank flow. Exponents of bedload rating and flow competence curves obtained from these measurements were found to systematically increase with basin area size and decrease with the degree of channel armoring. By contrast, coefficients of bedload rating and flow competence curves decreased with basin size and increased with armoring. All of these relationships were well-defined (0.86 < r2 < 0.99). Data sets from other studies in coarse-bedded streams fit the indicated trend if the sampling device used allows measuring bedload transport rates over a wide range and if bedload supply is somewhat low. The existence of a general positive trend between bedload rating curve exponents and basin area, and a negative trend between coefficients and basin area, is confirmed by a large data set of bedload rating curves obtained from Helley-Smith samples. However, in

  17. A statistical method for estimating wood thermal diffusivity and probe geometry using in situ heat response curves from sap flow measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Xingyuan; Miller, Gretchen R.; Rubin, Yoram

    2012-09-13

    The heat pulse method is widely used to measure water flux through plants; it works by inferring the velocity of water through a porous medium from the speed at which a heat pulse is propagated through the system. No systematic, non-destructive calibration procedure exists to determine the site-specific parameters necessary for calculating sap velocity, e.g., wood thermal diffusivity and probe spacing. Such parameter calibration is crucial to obtain the correct transpiration flux density from the sap flow measurements at the plant scale, and consequently, to up-scale tree-level water fluxes to canopy and landscape scales. The purpose of this study is to present a statistical framework for estimating the wood thermal diffusivity and probe spacing simultaneously from in situ heat response curves collected by the implanted probes of a heat ratio apparatus. Conditioned on the time traces of wood temperature following a heat pulse, the parameters are inferred using a Bayesian inversion technique, based on the Markov chain Monte Carlo sampling method. The primary advantage of the proposed methodology is that it does not require known probe spacing or any further intrusive sampling of sapwood. The Bayesian framework also enables direct quantification of uncertainty in estimated sap flow velocity. Experiments using synthetic data show that repeated tests using the same apparatus are essential to obtain reliable and accurate solutions. When applied to field conditions, these tests are conducted during different seasons and automated using the existing data logging system. The seasonality of wood thermal diffusivity is obtained as a by-product of the parameter estimation process, and it is shown to be affected by both moisture content and temperature. Empirical factors are often introduced to account for the influence of non-ideal probe geometry on the estimation of heat pulse velocity, and they are estimated in this study as well. The proposed methodology can be
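
    For intuition, the forward model behind such heat response curves is the instantaneous line-source solution, and the two parameters of interest enter it directly. The sketch below fits thermal diffusivity and probe spacing to a synthetic zero-flow temperature trace by nonlinear least squares; it is not the paper's Bayesian MCMC framework, and the ideal line-source model, known heat input, and zero sap velocity are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    Q = 3.0   # assumed known heat input term q/(rho*c), degC*cm^2 (illustrative value)

    def temp_rise(t, kappa, x):
        """Ideal line-source response at distance x (cm) for diffusivity kappa (cm^2/s)."""
        return Q / (4.0 * np.pi * kappa * t) * np.exp(-x**2 / (4.0 * kappa * t))

    # Synthetic "measured" trace: kappa = 0.0025 cm^2/s, probe spacing 0.6 cm, plus noise
    t = np.linspace(5, 120, 200)
    noisy = temp_rise(t, 0.0025, 0.6) + np.random.default_rng(2).normal(0, 0.01, t.size)

    (kappa_hat, x_hat), cov = curve_fit(temp_rise, t, noisy, p0=(0.002, 0.5))
    print(f"kappa = {kappa_hat:.4f} cm^2/s, probe spacing = {x_hat:.3f} cm")
    ```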

  18. Exploiting Task Constraints for Self-Calibrated Brain-Machine Interface Control Using Error-Related Potentials

    PubMed Central

    Iturrate, Iñaki; Grizou, Jonathan; Omedes, Jason; Oudeyer, Pierre-Yves; Lopes, Manuel; Montesano, Luis

    2015-01-01

    This paper presents a new approach to self-calibrating BCIs for reaching tasks using error-related potentials. The proposed method exploits task constraints to simultaneously calibrate the decoder and control the device, by using a robust likelihood function and an ad-hoc planner to cope with the large uncertainty resulting from the unknown task and decoder. The method has been evaluated in closed-loop online experiments with 8 users using a previously proposed BCI protocol for reaching tasks over a grid. The results show that it is possible to have usable BCI control from the beginning of the experiment without any prior calibration. Furthermore, comparisons with simulations and previous results obtained using standard calibration suggest that both the quality of recorded signals and the performance of the system were comparable to those obtained with a standard calibration approach. PMID:26131890

  19. SU-F-T-220: Validation of Hounsfield Unit-To-Stopping Power Ratio Calibration Used for Dose Calculation in Proton Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polf, J; Chung, H; Langen, K

    Purpose: To validate the stoichiometric Hounsfield Unit (HU) to Stopping Power Ratio (SPR) calibration used to commission a commercial treatment planning system (TPS) for proton radiotherapy dose calculation. Methods and Materials: The water equivalent thickness (WET) of several individual pig tissues (lung, fat, muscle, liver, intestine, rib, femur), mixed tissue samples (muscle/rib, ice/femur, rib/air cavity/muscle), and an intact pig head were measured with a multi-layer ionization chamber (MLIC). A CT scan of each sample was obtained and imported into a commercial TPS. The WET calculated by the TPS for each tissue sample was compared to the measured WET value to determine the accuracy of the HU-to-SPR calibration curve used by the TPS to calculate dose. Results: The WET values calculated by the TPS showed good agreement (< 2.0%) with the measured values for bone and all soft tissues except fat (3.1% difference). For the mixed tissue samples and the intact pig head measurements, the difference in the TPS and measured WET values all agreed to within 3.5%. In addition, SPR values were calculated from the measured WET of each tissue, and compared to SPR values of reference tissues from ICRU 46 used to generate the HU-to-SPR calibration for the TPS. Conclusion: For clinical scenarios where the beam passes through multiple tissue types and its path is dominated by soft tissues, we believe using an uncertainty of 3.5% of the planned beam range is acceptable to account for uncertainties in the TPS WET determination.

  20. An updated Holocene sea-level curve for the Delaware coast

    USGS Publications Warehouse

    Nikitina, D.L.; Pizzuto, J.E.; Schwimmer, R.A.; Ramsey, K.W.

    2000-01-01

    We present an updated Holocene sea-level curve for the Delaware coast based on new calibrations of 16 previously published radiocarbon dates (Kraft, 1976; Belknap and Kraft, 1977) and 22 new radiocarbon dates of basal peat deposits. A review of published and unpublished 137Cs and 210Pb analyses, and tide gauge data provide the basis for evaluating shorter-term (10² yr) sea-level trends. Paleosea-level elevations for the new basal peat samples were determined from the present vertical zonation of marsh plants relative to mean high water along the Delaware coast and the composition of plant fossils and foraminifera. Current trends in tidal range along the Delaware coast were used to reduce elevations from different locations to a common vertical datum of mean high water at Breakwater Harbor, Delaware. The updated curve is similar to Belknap and Kraft's [J. Sediment. Petrol., 47 (1977) 610-629] original sea-level curve from 12,000 to about 2000 yr BP. The updated curve documents a rate of sea-level rise of 0.9 mm/yr from 1250 yr BP to present (based on 11 dates), in good agreement with other recent sea-level curves from the northern and central U.S. Atlantic coast, while the previous curve documents rates of about 1.3 mm/yr (based on 4 dates). The precision of both estimates, however, is very low, so the significance of these differences is uncertain. A review of 210Pb and 137Cs analyses from salt marshes of Delaware indicates average marsh accretion rates of 3 mm/yr for the last 100 yr, in good agreement with shorter-term estimates of sea-level rise from tide gauge records. © 2000 Elsevier Science B.V.

  1. Gap Test Calibrations and Their Scaling

    NASA Astrophysics Data System (ADS)

    Sandusky, Harold

    2011-06-01

    Common tests for measuring the threshold for shock initiation are the NOL large scale gap test (LSGT) with a 50.8-mm diameter donor/gap and the expanded large scale gap test (ELSGT) with a 95.3-mm diameter donor/gap. Despite the same specifications for the explosive donor and polymethyl methacrylate (PMMA) gap in both tests, calibration of shock pressure in the gap versus distance from the donor scales by a factor of 1.75, not the 1.875 difference in their sizes. Recently reported model calculations suggest that the scaling discrepancy results from the viscoelastic properties of PMMA in combination with different methods for obtaining shock pressure. This is supported by the consistent scaling of these donors when calibrated in water-filled aquariums. Calibrations with water gaps will be provided and compared with PMMA gaps. Scaling for other donor systems will also be provided. Shock initiation data with water gaps will be reviewed.

  2. ModABa Model: Annual Flow Duration Curves Assessment in Ephemeral Basins

    NASA Astrophysics Data System (ADS)

    Pumo, Dario; Viola, Francesco; Noto, Leonardo V.

    2013-04-01

    streamflow component is directly derived from the precipitation duration curve through a simple filter model. The fast component of streamflow is considered to be formed by two contributions: the entire amount of rainfall falling onto the impervious portion of the basin and the excess of rainfall over a fixed threshold, defining heavy rain events, falling onto the permeable portion. The two obtained FDCs are then overlapped, providing a unique non-zero FDC for the total streamflow. Finally, once the probability that the river is dry and the non-zero FDC are known, the annual FDC of the daily total streamflow is derived by applying the theory of total probability. The model is calibrated on a small catchment with ephemeral streamflows using a long period of daily precipitation, temperature and streamflow measurements, and it is subsequently validated in the same basin using two different time periods. The high model performance obtained in both validation periods demonstrates that the model, once calibrated, is able to accurately reproduce the empirical FDC starting from easily derivable parameters arising from a basic ecohydrological knowledge of the basin and commonly available climatic data such as daily precipitation and temperature. In this sense, the model proves to be a valid tool for streamflow prediction in ungauged basins.

  3. Maximum likelihood estimation in calibrating a stereo camera setup.

    PubMed

    Muijtjens, A M; Roos, J M; Arts, T; Hasman, A

    1999-02-01

    Motion and deformation of the cardiac wall may be measured by following the positions of implanted radiopaque markers in three dimensions, using two x-ray cameras simultaneously. Regularly, calibration of the position measurement system is obtained by registration of the images of a calibration object, containing 10-20 radiopaque markers at known positions. Unfortunately, an accidental change of the position of a camera after calibration requires complete recalibration. Alternatively, redundant information in the measured image positions of stereo pairs can be used for calibration. Thus, a separate calibration procedure can be avoided. In the current study a model is developed that describes the geometry of the camera setup by five dimensionless parameters. Maximum Likelihood (ML) estimates of these parameters were obtained in an error analysis. It is shown that the ML estimates can be found by application of a nonlinear least squares procedure. Compared to the standard unweighted least squares procedure, the ML method resulted in more accurate estimates without noticeable bias. The accuracy of the ML method was investigated in relation to the object aperture. The reconstruction problem appeared well conditioned as long as the object aperture is larger than 0.1 rad. The angle between the two viewing directions appeared to be the parameter that was most likely to cause major inaccuracies in the reconstruction of the 3-D positions of the markers. Hence, attempts to improve the robustness of the method should primarily focus on reduction of the error in this parameter.

  4. Calibrating Reach Distance to Visual Targets

    ERIC Educational Resources Information Center

    Mon-Williams, Mark; Bingham, Geoffrey P.

    2007-01-01

    The authors investigated the calibration of reach distance by gradually distorting the haptic feedback obtained when participants grasped visible target objects. The authors found that the modified relationship between visually specified distance and reach distance could be captured by a straight-line mapping function. Thus, the relation could be…

  5. Calibration of 4π NaI(Tl) detectors with coincidence summing correction using new numerical procedure and ANGLE4 software

    NASA Astrophysics Data System (ADS)

    Badawi, Mohamed S.; Jovanovic, Slobodan I.; Thabet, Abouzeid A.; El-Khatib, Ahmed M.; Dlabac, Aleksandar D.; Salem, Bohaysa A.; Gouda, Mona M.; Mihaljevic, Nikola N.; Almugren, Kholud S.; Abbas, Mahmoud I.

    2017-03-01

    The 4π NaI(Tl) γ-ray detectors consist of a well cavity with a cylindrical cross section and an enclosing measurement geometry with a large detection angle. This leads to an exceptionally high efficiency and a significant coincidence summing effect, much larger than for a single cylindrical or coaxial detector, especially in very low activity measurements. In the present work, the detection effective solid angle, in addition to both the full-energy peak and total efficiencies of well-type detectors, was calculated by the new numerical simulation method (NSM) and the ANGLE4 software. To obtain the coincidence summing correction factors through these methods, the coincident emission of photons was modeled mathematically, based on analytical equations and complex integrations over the radioactive volumetric sources, including the self-attenuation factor. The full-energy peak efficiencies and correction factors were measured using 152Eu; an exact adjustment of the detector efficiency curve is required, because neglecting the coincidence summing effect makes the results inconsistent. The efficiency calibration and the coincidence summing corrections therefore have to be treated jointly. The full-energy peak and total efficiencies from the two methods typically agree to within 10%. The discrepancy between the simulation, ANGLE4, and the measured full-energy peak efficiencies after correction for the coincidence summing effect did not exceed 14% on average. Therefore, this technique can be easily applied in establishing the efficiency calibration curves of well-type detectors.

  6. Construction of estimated flow- and load-duration curves for Kentucky using the Water Availability Tool for Environmental Resources (WATER)

    USGS Publications Warehouse

    Unthank, Michael D.; Newson, Jeremy K.; Williamson, Tanja N.; Nelson, Hugh L.

    2012-01-01

    Flow- and load-duration curves were constructed from the model outputs of the U.S. Geological Survey's Water Availability Tool for Environmental Resources (WATER) application for streams in Kentucky. The WATER application was designed to access multiple geospatial datasets to generate more than 60 years of statistically based streamflow data for Kentucky. The WATER application enables a user to graphically select a site on a stream and generate an estimated hydrograph and flow-duration curve for the watershed upstream of that point. The flow-duration curves are constructed by calculating the exceedance probability of the modeled daily streamflows. User-defined water-quality criteria and (or) sampling results can be loaded into the WATER application to construct load-duration curves that are based on the modeled streamflow results. Estimates of flow and streamflow statistics were derived from TOPographically Based Hydrological MODEL (TOPMODEL) simulations in the WATER application. A modified TOPMODEL code, SDP-TOPMODEL (Sinkhole Drainage Process-TOPMODEL), was used to simulate daily mean discharges over the period of record for 5 karst and 5 non-karst watersheds in Kentucky in order to verify the calibrated model. A statistical evaluation of the model's verification simulations shows that calibration criteria, established by previous WATER application reports, were met, thus ensuring the model's ability to provide acceptably accurate estimates of discharge at gaged and ungaged sites throughout Kentucky. Flow-duration curves are constructed in the WATER application by calculating the exceedance probability of the modeled daily flow values. The flow-duration intervals are expressed as a percentage, with zero corresponding to the highest stream discharge in the streamflow record. Load-duration curves are constructed by applying the loading equation (Load = Flow × Water-quality criterion) at each flow interval.
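
    The construction described above is straightforward to sketch: sort the modeled daily flows, compute exceedance probabilities, and multiply each flow by the water-quality criterion to obtain the allowable load at that exceedance level. A minimal illustration follows (not the WATER application itself; the unit-conversion factor and synthetic record are assumptions of this sketch).

    ```python
    import numpy as np

    def duration_curve(daily_flow_cfs, criterion_mg_per_L=None):
        """Return (exceedance %, sorted flow) and, optionally, the allowable-load curve.

        Load = Flow * criterion, converted here to kg/day assuming flow in cfs and a
        concentration criterion in mg/L (1 cfs ~ 2.4466e6 L/day).
        """
        q = np.sort(np.asarray(daily_flow_cfs))[::-1]                 # descending flows
        exceed = 100.0 * np.arange(1, q.size + 1) / (q.size + 1)      # Weibull plotting position
        if criterion_mg_per_L is None:
            return exceed, q
        load_kg_day = q * criterion_mg_per_L * 2.4466                 # cfs * mg/L -> kg/day
        return exceed, q, load_kg_day

    # Illustrative synthetic record of daily flows (~60 years)
    rng = np.random.default_rng(3)
    flows = rng.lognormal(mean=4.0, sigma=1.0, size=365 * 60)
    exceed, q, load = duration_curve(flows, criterion_mg_per_L=0.5)
    print(f"Q50 = {np.interp(50, exceed, q):.0f} cfs, allowable load at Q50 = "
          f"{np.interp(50, exceed, load):.0f} kg/day")
    ```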

  7. Extinction, seeing and sky transparency monitoring at the Observatorio Astrofísico de Javalambre for J-PAS and J-PLUS calibration and scheduling

    NASA Astrophysics Data System (ADS)

    Vázquez Ramió, H.; Díaz-Martín, M. C.; Varela, J.; Ederoclite, A.; Maícas, N.; Lamadrid, J. L.; Abril, J.; Iglesias-Marzoa, R.; Rodríguez, S.; Tilve, V.; Cenarro, A. J.; Antón Bravo, J. L.; Bello Ferrer, R.; Cristóbal-Hornillos, D.; Guillén Civera, L.; Hernández-Fuertes, J.; Jiménez Mejías, D.; Lasso-Cabrera, N. M.; López Alegre, G.; López Sainz, A.; Luis-Simoes, R. M.; Marín-Franch, A.; Moles, M.; Rueda-Teruel, F.; Rueda-Teruel, S.; Suárez López, O.; Yanes-Díaz, A.

    2015-05-01

    The Javalambre-Physics of the Accelerating Universe Astrophysical Survey (J-PAS; see Benítez et al. 2014) and the Javalambre-Photometric Local Universe Survey (J-PLUS) will be conducted at the brand-new Observatorio Astrofísico de Javalambre (OAJ) in Teruel, Spain. J-PLUS is planned to start in the first half of 2015, while J-PAS first light is expected during 2015. Besides the two main telescopes (with 2.5 m and 80 cm apertures), several smaller facilities are present at the OAJ, devoted to site characterization and supporting measurements used to calibrate the J-PAS and J-PLUS photometry and to feed the OAJ Sequencer with the integrated seeing and the sky transparency. These instruments are: i) an extinction monitor, an 11" telescope estimating the atmospheric extinction and ultimately the OAJ extinction curve, which is the initial step of the J-PAS overall photometric calibration procedure; ii) an 8" telescope implementing the Differential Image Motion Monitor (DIMM) technique to obtain the integrated seeing; and iii) an All-Sky Transmission MONitor (ASTMON), a roughly all-sky instrument providing the sky transparency as well as the sky brightness and the atmospheric extinction.

  8. Drift-insensitive distributed calibration of probe microscope scanner in nanometer range: Virtual mode

    NASA Astrophysics Data System (ADS)

    Lapshin, Rostislav V.

    2016-08-01

    A method of distributed calibration of a probe microscope scanner is suggested. The main idea consists in a search for a net of local calibration coefficients (LCCs) in the process of automatic measurement of a standard surface, whereby each point of the movement space of the scanner can be characterized by a unique set of scale factors. Feature-oriented scanning (FOS) methodology is used as a basis for implementation of the distributed calibration, permitting the negative influence of thermal drift, creep and hysteresis on the obtained results to be excluded in situ. Possessing the calibration database makes it possible to correct, in a single procedure, all the spatial systematic distortions caused by nonlinearity, nonorthogonality and spurious crosstalk couplings of the microscope scanner piezomanipulators. To provide high precision of spatial measurements in the nanometer range, the calibration is carried out using natural standards - the constants of the crystal lattice. One of the useful modes of the developed calibration method is a virtual mode. In the virtual mode, instead of measuring a real surface of the standard, the calibration program performs a surface image "measurement" of the standard, which was obtained earlier using conventional raster scanning. The application of the virtual mode permits simulation of the calibration process and detailed analysis of raster distortions occurring in both conventional and counter surface scanning. Moreover, the mode allows the thermal drift and creep velocities acting during surface scanning to be estimated. Virtual calibration makes possible automatic characterization of a surface by the method of scanning probe microscopy (SPM).

  9. Growth curves of carcass traits obtained by ultrasonography in three lines of Nellore cattle selected for body weight.

    PubMed

    Coutinho, C C; Mercadante, M E Z; Jorge, A M; Paz, C C P; El Faro, L; Monteiro, F M

    2015-10-30

    The effect of selection for postweaning weight was evaluated on the growth curve parameters for both growth and carcass traits. Records of 2404 Nellore animals from three selection lines were analyzed: two selection lines for high postweaning weight, selection (NeS) and traditional (NeT), and a control line (NeC) in which animals were selected for postweaning weight close to the average. Body weight (BW), hip height (HH), rib eye area (REA), back fat thickness (BFT), and rump fat thickness (RFT) were measured, with records collected from animals 8 to 20 (males) and 11 to 26 (females) months of age. The parameters A (asymptotic value) and k (growth rate) were estimated using the nonlinear model procedure of the Statistical Analysis System program, which included the fixed effect of line (NeS, NeT, and NeC), with the objective of evaluating differences in the estimated parameters between lines. Selected animals (NeS and NeT) showed higher growth rates than control line animals (NeC) for all traits. The line effect on curve parameters was significant (P < 0.001) for BW, HH, and REA in males, and for BFT and RFT in females. Selection for postweaning weight was effective in altering growth curves, resulting in animals with higher growth potential.
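
    The abstract names only the asymptote A and the rate k, not the functional form; as an illustration, the Brody growth function commonly used for beef cattle can be fitted by nonlinear least squares as sketched below. The age-weight data and the choice of the Brody form are assumptions of this sketch, not the paper's records or model.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def brody(t, A, b, k):
        """Brody growth function: body weight at age t (months)."""
        return A * (1.0 - b * np.exp(-k * t))

    # Illustrative data (age in months, body weight in kg)
    age = np.array([8, 10, 12, 14, 16, 18, 20, 22, 24, 26], dtype=float)
    bw  = np.array([254, 279, 299, 317, 332, 345, 356, 366, 374, 381], dtype=float)

    (A, b, k), _ = curve_fit(brody, age, bw, p0=(450.0, 0.9, 0.05))
    print(f"A = {A:.0f} kg (asymptotic weight), k = {k:.3f} per month")
    ```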

  10. MODIS calibration

    NASA Technical Reports Server (NTRS)

    Barker, John L.

    1992-01-01

    The MODIS/MCST (MODIS Characterization Support Team) Status Report contains an outline of the calibration strategy, handbook, and plan. It also contains an outline of the MODIS/MCST action item from the 4th EOS Cal/Val Meeting, for which the objective was to locate potential MODIS calibration targets on the Earth's surface that are radiometrically homogeneous on a scale of 3 by 3 km. As appendices, draft copies of the handbook table of contents, the calibration plan table of contents, and a detailed agenda for the MODIS calibration working group are included.

  11. Excimer Potential Curves

    DTIC Science & Technology

    1980-01-01

    For such lasers see A. V. Phelps, JILA Report 110, "Tunable Gas Lasers Using Ground State Dissociation" (1972), and references therein. This requires highly... the possibility of using GaXe as a laser if the Ga can be obtained from dissociation of GaI3. Consequently the GaKr curves should also be of intrinsic interest... The interest in the group IIIB-rare gas systems arises from the possibility of their use as visible laser systems. In order to judge

  12. Depth estimation and camera calibration of a focused plenoptic camera for visual odometry

    NASA Astrophysics Data System (ADS)

    Zeller, Niclas; Quint, Franz; Stilla, Uwe

    2016-08-01

    This paper presents new and improved methods of depth estimation and camera calibration for visual odometry with a focused plenoptic camera. For depth estimation we adapt an algorithm previously used in structure-from-motion approaches to work with images of a focused plenoptic camera. In the raw image of a plenoptic camera, scene patches are recorded in several micro-images under slightly different angles. This leads to a multi-view stereo problem. To reduce the complexity, we divide this into multiple binocular stereo problems. For each pixel with sufficient gradient we estimate a virtual (uncalibrated) depth based on local intensity error minimization. The estimated depth is characterized by the variance of the estimate and is subsequently updated with the estimates from other micro-images. Updating is performed in a Kalman-like fashion. The result of depth estimation in a single image of the plenoptic camera is a probabilistic depth map, where each depth pixel consists of an estimated virtual depth and a corresponding variance. Since the resulting image of the plenoptic camera contains two planes, the optical image and the depth map, camera calibration is divided into two separate sub-problems. The optical path is calibrated based on a traditional calibration method. For calibrating the depth map we introduce two novel model-based methods, which define the relation between the virtual depth, estimated from the light-field image, and the metric object distance. These two methods are compared to a well known curve fitting approach. Both model-based methods show significant advantages compared to the curve fitting method. For visual odometry we fuse the probabilistic depth map gained from one shot of the plenoptic camera with the depth data gained by finding stereo correspondences between subsequent synthesized intensity images of the plenoptic camera. These images can be synthesized totally focused and thus finding stereo correspondences is enhanced
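
    The Kalman-like update mentioned above amounts to inverse-variance weighting of successive virtual-depth hypotheses. A minimal sketch of that fusion step (the notation and example numbers are ours, not the paper's):

    ```python
    def fuse(depth_a, var_a, depth_b, var_b):
        """Fuse two depth hypotheses by inverse-variance (Kalman-style) weighting."""
        w = var_b / (var_a + var_b)            # weight of hypothesis a
        depth = w * depth_a + (1.0 - w) * depth_b
        var = (var_a * var_b) / (var_a + var_b)
        return depth, var

    # A pixel seen in three micro-images yields three noisy virtual depths
    estimates = [(2.10, 0.30), (1.95, 0.10), (2.30, 0.50)]
    d, v = estimates[0]
    for d_new, v_new in estimates[1:]:
        d, v = fuse(d, v, d_new, v_new)
    print(f"fused virtual depth = {d:.2f}, variance = {v:.3f}")
    ```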

  13. Calibration of the Wedge Prism

    Treesearch

    Charles B. Briscoe

    1957-01-01

    Since the introduction of plotless cruising in this country by Grosenbaugh and the later suggestion of using a wedge prism as an angle gauge by Bruce this method of determining basal area has been widely adopted in the South. One of the factors contributing to the occasionally unsatisfactory results obtained is failure to calibrate the prism used. As noted by Bruce the...

  14. A calibration method based on virtual large planar target for cameras with large FOV

    NASA Astrophysics Data System (ADS)

    Yu, Lei; Han, Yangyang; Nie, Hong; Ou, Qiaofeng; Xiong, Bangshu

    2018-02-01

    In order to obtain high precision in camera calibration, a target should be large enough to cover the whole field of view (FOV). For cameras with a large FOV, using a small target seriously reduces the precision of calibration, while using a large target causes many difficulties in making, carrying and employing it. In order to solve this problem, a calibration method based on the virtual large planar target (VLPT), which is virtually constructed from multiple small targets (STs), is proposed for cameras with large FOV. In the VLPT-based calibration method, first, the positions and directions of the STs are changed several times to obtain a number of calibration images. Secondly, the VLPT of each calibration image is created by finding the virtual points corresponding to the feature points of the STs. Finally, the intrinsic and extrinsic parameters of the camera are calculated by using the VLPTs. Experimental results show that the proposed method not only achieves calibration precision similar to that obtained with a large target, but also has good stability over the whole measurement area. Thus, the difficulties of accurately calibrating cameras with a large FOV can be effectively addressed by the proposed method with good operability.

  15. Active motion on curved surfaces

    NASA Astrophysics Data System (ADS)

    Castro-Villarreal, Pavel; Sevilla, Francisco J.

    2018-05-01

    A theoretical analysis of active motion on curved surfaces is presented in terms of a generalization of the telegrapher equation. Such a generalized equation is explicitly derived as the polar approximation of the hierarchy of equations obtained from the corresponding Fokker-Planck equation of active particles diffusing on curved surfaces. The general solution to the generalized telegrapher equation is given for a pulse with vanishing current as initial data. Expressions for the probability density and the mean squared geodesic displacement are given in the limit of weak curvature. As an explicit example of the formulated theory, the case of active motion on the sphere is presented, where oscillations observed in the mean squared geodesic displacement are explained.

  16. Calibration methodology application of kerma area product meters in situ: Preliminary results

    NASA Astrophysics Data System (ADS)

    Costa, N. A.; Potiens, M. P. A.

    2014-11-01

    The kerma-area product (KAP) is a useful quantity for establishing reference levels for conventional X-ray examinations. It can be obtained from measurements carried out with a KAP meter, a plane-parallel transmission ionization chamber mounted on the X-ray system. A KAP meter can be calibrated in the laboratory or in situ, where it is used. It is important to use a reference KAP meter in order to obtain reliable patient dose quantities. The Patient Dose Calibrator (PDC) is a new instrument from Radcal that measures KAP. It was manufactured following the IEC 60580 recommendations, the international standard for KAP meters. This study aimed to calibrate KAP meters using the PDC in situ. Previous studies and the quality control program of the PDC have shown that it performs well in characterization tests of ionization chamber dosimeters and that it has low energy dependence. Three types of KAP meters were calibrated on four different diagnostic X-ray units. The voltages used in the first two calibrations were 50 kV, 70 kV, 100 kV and 120 kV; the other two used 50 kV, 70 kV and 90 kV, a consequence of the equipment's limitations. The field sizes used for the calibration were 10 cm, 20 cm and 30 cm. The calibrations were performed in three different cities in order to analyze the reproducibility of the PDC. The results gave the calibration coefficient for each KAP meter and showed that the PDC can be used as a reference instrument to calibrate clinical KAP meters.

  17. Calibrating page sized Gafchromic EBT3 films

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crijns, W.; Maes, F.; Heide, U. A. van der

    2013-01-15

    Purpose: The purpose is the development of a novel calibration method for dosimetry with Gafchromic EBT3 films. The method should be applicable for pretreatment verification of volumetric modulated arc and intensity modulated radiotherapy. Because the exposed area on film can be large for such treatments, lateral scan errors must be taken into account. The correction for the lateral scan effect is obtained from the calibration data itself. Methods: In this work, the film measurements were modeled using their relative scan values (transmittance, T). Inside the transmittance domain, a linear combination and a parabolic lateral scan correction described the observed transmittance values. The linear combination model combined a monomer transmittance state (T0) and a polymer transmittance state (T∞) of the film. The dose domain was associated with the observed effects in the transmittance domain through a rational calibration function. On the calibration film only simple static fields were applied, and page sized films were used for calibration and measurements (treatment verification). Four different calibration setups were considered and compared with respect to dose estimation accuracy. The first (I) used a calibration table from 32 regions of interest (ROIs) spread on 4 calibration films, the second (II) used 16 ROIs spread on 2 calibration films, and the third (III) and fourth (IV) used 8 ROIs spread on a single calibration film. The calibration tables of setups I, II, and IV contained eight dose levels delivered to different positions on the films, while for setup III only four dose levels were applied. Validation was performed by irradiating film strips with known doses at two different time points over the course of a week. Accuracy of the dose response and the lateral effect correction was estimated using the dose difference and the root mean squared error (RMSE), respectively. Results: A calibration based on two films was the
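
    To illustrate the idea of a transmittance-domain model inverted by a rational calibration function, the sketch below fits a two-state mixture of T0 and T∞ with an assumed saturating mixing fraction dose/(dose + d_half), then inverts it to map transmittance to dose. The specific functional form, parameter names, and calibration points are assumptions of this sketch, not the paper's model or data.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def transmittance(dose, T0, Tinf, d_half):
        """Linear combination of monomer (T0) and polymer (Tinf) transmittance states;
        the mixing fraction dose/(dose + d_half) is an assumed rational form."""
        frac = dose / (dose + d_half)
        return (1.0 - frac) * T0 + frac * Tinf

    def dose_from_T(T, T0, Tinf, d_half):
        """Invert the model: rational calibration function mapping transmittance to dose."""
        return d_half * (T0 - T) / (T - Tinf)

    # Illustrative calibration points (dose in cGy, measured transmittance)
    dose = np.array([0, 50, 100, 200, 300, 450], dtype=float)
    T = np.array([0.92, 0.78, 0.69, 0.58, 0.52, 0.46])

    (T0, Tinf, d_half), _ = curve_fit(transmittance, dose, T, p0=(0.9, 0.3, 200.0))
    print(f"T0 = {T0:.3f}, Tinf = {Tinf:.3f}, d_half = {d_half:.0f} cGy")
    print(f"dose at T = 0.60 -> {dose_from_T(0.60, T0, Tinf, d_half):.0f} cGy")
    ```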

  18. Camera calibration method of binocular stereo vision based on OpenCV

    NASA Astrophysics Data System (ADS)

    Zhong, Wanzhen; Dong, Xiaona

    2015-10-01

    Camera calibration, an important part of binocular stereo vision research, is the essential foundation of 3D reconstruction of a spatial object. In this paper, a camera calibration method based on OpenCV (the open source computer vision library) is presented to improve the process and obtain higher precision and efficiency. First, the camera model in OpenCV and a camera calibration algorithm are presented, with particular attention to the influence of radial and decentering lens distortion. Then, a camera calibration procedure is designed to compute the camera parameters and calculate the calibration errors. A high-accuracy profile extraction algorithm and a checkerboard with 48 corners were also used in this step. Finally, the results of the calibration program are presented, demonstrating the high efficiency and accuracy of the proposed approach. The results meet the requirements of robot binocular stereo vision.
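
    A minimal OpenCV calibration sketch along these lines (chessboard corner detection followed by cv2.calibrateCamera); the 8×6 inner-corner board matches the 48 corners mentioned above, but the image path, square size, and refinement settings are placeholders.

    ```python
    import glob
    import cv2
    import numpy as np

    pattern = (8, 6)                        # inner corners per row/column (48 total)
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)  # square size = 1 unit

    obj_points, img_points, img_size = [], [], None
    for fname in glob.glob("calib_images/*.png"):       # placeholder path
        gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
        img_size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if not found:
            continue
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
        obj_points.append(objp)
        img_points.append(corners)

    # Intrinsics, distortion (radial + tangential), and per-view extrinsics
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, img_size, None, None)
    print("RMS reprojection error:", rms)
    print("Camera matrix:\n", K)
    ```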

  19. Landsat 4 Thematic Mapper calibration update

    USGS Publications Warehouse

    Helder, Dennis L.; Malla, Rimy; Mettler, Cory J.; Markham, Brian L.; Micijevic, Esad

    2012-01-01

    The Landsat 4 Thematic Mapper (TM) collected imagery of the Earth's surface from 1982 to 1993. Although largely overshadowed by Landsat 5 which was launched in 1984, Landsat 4 TM imagery extends the TM-based record of the Earth back to 1982 and also substantially supplements the image archive collected by Landsat 5. To provide a consistent calibration record for the TM instruments, Landsat 4 TM was cross-calibrated to Landsat 5 using nearly simultaneous overpass imagery of pseudo-invariant calibration sites (PICS) in the time period of 1988-1990. To determine if the radiometric gain of Landsat 4 had changed over its lifetime, time series from two PICS locations (a Saharan site known as Libya 4 and a site in southwest North America, commonly referred to as the Sonoran Desert site) were developed. The results indicated that Landsat 4 had been very stable over its lifetime, with no discernible degradation in sensor performance in all reflective bands except band 1. In contrast, band 1 exhibited a 12% decay in responsivity over the lifetime of the instrument. Results from this paper have been implemented at USGS EROS, which enables users of Landsat TM data sets to obtain consistently calibrated data from Landsat 4 and 5 TM as well as Landsat 7 ETM+ instruments.
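
    The lifetime-trend check described above reduces to fitting a linear trend to a PICS time series and expressing the slope as a percent change over the record. A toy sketch with synthetic numbers (not Landsat data, and not the cross-calibration procedure itself):

    ```python
    import numpy as np

    # Synthetic PICS time series: mean TOA reflectance vs. years since launch
    years = np.linspace(0, 11, 60)
    rng = np.random.default_rng(4)
    reflectance = 0.28 * (1.0 - 0.011 * years) + rng.normal(0, 0.004, years.size)

    slope, intercept = np.polyfit(years, reflectance, 1)
    decay_percent = 100.0 * slope * years[-1] / intercept
    print(f"estimated responsivity change over the record: {decay_percent:.1f}%")
    ```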

  20. A new polarimetric active radar calibrator and calibration technique

    NASA Astrophysics Data System (ADS)

    Tang, Jianguo; Xu, Xiaojian

    2015-10-01

    Polarimetric active radar calibrator (PARC) is one of the most important calibrators with high radar cross section (RCS) for polarimetry measurement. In this paper, a new double-antenna polarimetric active radar calibrator (DPARC) is proposed, which consists of two rotatable antennas with wideband electromagnetic polarization filters (EMPF) to achieve lower cross-polarization for transmission and reception. With two antennas which are rotatable around the radar line of sight (LOS), the DPARC provides a variety of standard polarimetric scattering matrices (PSM) through the rotation combination of receiving and transmitting polarization, which are useful for polarimatric calibration in different applications. In addition, a technique based on Fourier analysis is proposed for calibration processing. Numerical simulation results are presented to demonstrate the superior performance of the proposed DPARC and processing technique.

  1. IMU-based online kinematic calibration of robot manipulator.

    PubMed

    Du, Guanglong; Zhang, Ping

    2013-01-01

    Robot calibration is a useful diagnostic method for improving the positioning accuracy in robot production and maintenance. An online robot self-calibration method based on inertial measurement unit (IMU) is presented in this paper. The method requires that the IMU is rigidly attached to the robot manipulator, which makes it possible to obtain the orientation of the manipulator with the orientation of the IMU in real time. This paper proposed an efficient approach which incorporates Factored Quaternion Algorithm (FQA) and Kalman Filter (KF) to estimate the orientation of the IMU. Then, an Extended Kalman Filter (EKF) is used to estimate kinematic parameter errors. Using this proposed orientation estimation method will result in improved reliability and accuracy in determining the orientation of the manipulator. Compared with the existing vision-based self-calibration methods, the great advantage of this method is that it does not need the complex steps, such as camera calibration, images capture, and corner detection, which make the robot calibration procedure more autonomous in a dynamic manufacturing environment. Experimental studies on a GOOGOL GRB3016 robot show that this method has better accuracy, convenience, and effectiveness than vision-based methods.

  2. IMU-Based Online Kinematic Calibration of Robot Manipulator

    PubMed Central

    2013-01-01

    Robot calibration is a useful diagnostic method for improving the positioning accuracy in robot production and maintenance. An online robot self-calibration method based on inertial measurement unit (IMU) is presented in this paper. The method requires that the IMU is rigidly attached to the robot manipulator, which makes it possible to obtain the orientation of the manipulator with the orientation of the IMU in real time. This paper proposed an efficient approach which incorporates Factored Quaternion Algorithm (FQA) and Kalman Filter (KF) to estimate the orientation of the IMU. Then, an Extended Kalman Filter (EKF) is used to estimate kinematic parameter errors. Using this proposed orientation estimation method will result in improved reliability and accuracy in determining the orientation of the manipulator. Compared with the existing vision-based self-calibration methods, the great advantage of this method is that it does not need the complex steps, such as camera calibration, images capture, and corner detection, which make the robot calibration procedure more autonomous in a dynamic manufacturing environment. Experimental studies on a GOOGOL GRB3016 robot show that this method has better accuracy, convenience, and effectiveness than vision-based methods. PMID:24302854

  3. A stoichiometric calibration method for dual energy computed tomography

    NASA Astrophysics Data System (ADS)

    Bourque, Alexandra E.; Carrier, Jean-François; Bouchard, Hugo

    2014-04-01

    The accuracy of radiotherapy dose calculation relies crucially on patient composition data. The computed tomography (CT) calibration methods based on the stoichiometric calibration of Schneider et al (1996 Phys. Med. Biol. 41 111-24) are the most reliable to determine electron density (ED) with commercial single energy CT scanners. Along with the recent developments in dual energy CT (DECT) commercial scanners, several methods were published to determine ED and the effective atomic number (EAN) for polyenergetic beams without the need for CT calibration curves. This paper intends to show that with a rigorous definition of the EAN, the stoichiometric calibration method can be successfully adapted to DECT with significant accuracy improvements with respect to the literature without the need for spectrum measurements or empirical beam hardening corrections. Using a theoretical framework of ICRP human tissue compositions and the XCOM photon cross sections database, the revised stoichiometric calibration method yields Hounsfield unit (HU) predictions within less than ±1.3 HU of the theoretical HU calculated from XCOM data averaged over the spectra used (e.g., 80 kVp, 100 kVp, 140 kVp and 140/Sn kVp). A fit of mean excitation energy (I-value) data as a function of EAN is provided in order to determine the ion stopping power of human tissues from ED-EAN measurements. Analysis of the calibration phantom measurements with the Siemens SOMATOM Definition Flash dual source CT scanner shows that the present formalism yields mean absolute errors of (0.3 ± 0.4)% and (1.6 ± 2.0)% on ED and EAN, respectively. For ion therapy, the mean absolute errors for calibrated I-values and proton stopping powers (216 MeV) are (4.1 ± 2.7)% and (0.5 ± 0.4)%, respectively. In all clinical situations studied, the uncertainties in ion ranges in water for therapeutic energies are found to be less than 1.3 mm, 0.7 mm and 0.5 mm for protons, helium and carbon ions respectively, using a generic

  4. The DFMS sensor of ROSINA onboard Rosetta: A computer-assisted approach to resolve mass calibration, flux calibration, and fragmentation issues

    NASA Astrophysics Data System (ADS)

    Dhooghe, Frederik; De Keyser, Johan; Altwegg, Kathrin; Calmonte, Ursina; Fuselier, Stephen; Hässig, Myrtha; Berthelier, Jean-Jacques; Mall, Urs; Gombosi, Tamas; Fiethe, Björn

    2014-05-01

    Rosetta will rendezvous with comet 67P/Churyumov-Gerasimenko in May 2014. The Rosetta Orbiter Spectrometer for Ion and Neutral Analysis (ROSINA) instrument comprises three sensors: the pressure sensor (COPS) and two mass spectrometers (RTOF and DFMS). The double focusing mass spectrometer DFMS is optimized for mass resolution and consists of an ion source, a mass analyser and a detector package operated in analogue mode. The magnetic sector of the analyser provides the mass dispersion needed for use with the position-sensitive microchannel plate (MCP) detector. Ions that hit the MCP release electrons that are recorded digitally using a linear electron detector array with 512 pixels. Raw data for a given commanded mass are obtained as ADC counts as a function of pixel number. We have developed a computer-assisted approach to address the problem of calibrating such raw data. Mass calibration: Ion identification is based on their mass-over-charge (m/Z) ratio and requires an accurate correlation of pixel number and m/Z. The m/Z scale depends on the commanded mass and the magnetic field and can be described by an offset of the pixel associated with the commanded mass from the centre of the detector array and a scaling factor. Mass calibration is aided by the built-in gas calibration unit (GCU), which allows one to inject a known gas mixture into the instrument. In a first, fully automatic step of the mass calibration procedure, the calibration uses all GCU spectra and extracts information about the mass peak closest to the centre pixel, since those peaks can be identified unambiguously. This preliminary mass-calibration relation can then be applied to all spectra. Human-assisted identification of additional mass peaks further improves the mass calibration. Ion flux calibration: ADC counts per pixel are converted to ion counts per second using the overall gain, the individual pixel gain, and the total data accumulation time. DFMS can perform an internal scan to determine
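
    The pixel-to-m/Z step described above can be illustrated with a toy two-parameter mapping calibrated from identified GCU peaks. The locally linear dispersion assumed here, the peak pixel positions, and the species are placeholders for illustration only; the actual DFMS dispersion relation is not reproduced.

    ```python
    CENTER_PIXEL = 256          # centre of the 512-pixel detector array

    def mz_scale(pixel, commanded_mz, offset, rel_scale):
        """Map a detector pixel to m/Z around the commanded mass, assuming a locally
        linear dispersion: m/Z changes by rel_scale (fraction of the commanded m/Z)
        per pixel, with the commanded mass offset from the array centre."""
        return commanded_mz * (1.0 + rel_scale * (pixel - CENTER_PIXEL - offset))

    # Calibrate offset/scale from two identified GCU peaks in a spectrum commanded at m/Z 28
    known_peaks = {28.0061: 251.0, 27.9949: 239.4}    # m/Z (N2+, CO+) -> observed peak pixel
    (mz1, p1), (mz2, p2) = known_peaks.items()
    rel_scale = (mz1 - mz2) / (28.0 * (p1 - p2))
    offset = p1 - CENTER_PIXEL - (mz1 / 28.0 - 1.0) / rel_scale
    print(f"offset = {offset:.1f} px, scale = {rel_scale:.2e} per px")
    print("m/Z at pixel 300:", round(mz_scale(300, 28.0, offset, rel_scale), 4))
    ```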

  5. Comparison of the uncertainties of several European low-dose calibration facilities

    NASA Astrophysics Data System (ADS)

    Dombrowski, H.; Cornejo Díaz, N. A.; Toni, M. P.; Mihelic, M.; Röttger, A.

    2018-04-01

    The typical uncertainty of a low-dose rate calibration of a detector, which is calibrated in a dedicated secondary national calibration laboratory, is investigated, including measurements in the photon field of metrology institutes. Calibrations at low ambient dose equivalent rates (at the level of the natural ambient radiation) are needed when environmental radiation monitors are to be characterised. The uncertainties of calibration measurements in conventional irradiation facilities above ground are compared with those obtained in a low-dose rate irradiation facility located deep underground. Four laboratories quantitatively evaluated the uncertainties of their calibration facilities, in particular for calibrations at low dose rates (250 nSv/h and 1 μSv/h). For the first time, typical uncertainties of European calibration facilities are documented in a comparison and the main sources of uncertainty are revealed. All sources of uncertainties are analysed, including the irradiation geometry, scattering, deviations of real spectra from standardised spectra, etc. As a fundamental metrological consequence, no instrument calibrated in such a facility can have a lower total uncertainty in subsequent measurements. For the first time, the need to perform calibrations at very low dose rates (< 100 nSv/h) deep underground is underpinned on the basis of quantitative data.

  6. Calibration of time of flight detectors using laser-driven neutron source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mirfayzi, S. R.; Kar, S., E-mail: s.kar@qub.ac.uk; Ahmed, H.

    2015-07-15

    Calibration of three scintillators (EJ232Q, BC422Q, and EJ410) in a time-of-flight arrangement using a laser-driven neutron source is presented. The three plastic scintillator detectors were calibrated with gamma insensitive bubble detector spectrometers, which were absolutely calibrated over a wide range of neutron energies, from sub-MeV to 20 MeV. A typical set of data obtained simultaneously by the detectors is shown, measuring the neutron spectrum emitted from a petawatt laser irradiated thin foil.

  7. Calibration of time of flight detectors using laser-driven neutron source.

    PubMed

    Mirfayzi, S R; Kar, S; Ahmed, H; Krygier, A G; Green, A; Alejo, A; Clarke, R; Freeman, R R; Fuchs, J; Jung, D; Kleinschmidt, A; Morrison, J T; Najmudin, Z; Nakamura, H; Norreys, P; Oliver, M; Roth, M; Vassura, L; Zepf, M; Borghesi, M

    2015-07-01

    Calibration of three scintillators (EJ232Q, BC422Q, and EJ410) in a time-of-flight arrangement using a laser-driven neutron source is presented. The three plastic scintillator detectors were calibrated with gamma insensitive bubble detector spectrometers, which were absolutely calibrated over a wide range of neutron energies, from sub-MeV to 20 MeV. A typical set of data obtained simultaneously by the detectors is shown, measuring the neutron spectrum emitted from a petawatt laser irradiated thin foil.

  8. Calibration of time of flight detectors using laser-driven neutron source

    NASA Astrophysics Data System (ADS)

    Mirfayzi, S. R.; Kar, S.; Ahmed, H.; Krygier, A. G.; Green, A.; Alejo, A.; Clarke, R.; Freeman, R. R.; Fuchs, J.; Jung, D.; Kleinschmidt, A.; Morrison, J. T.; Najmudin, Z.; Nakamura, H.; Norreys, P.; Oliver, M.; Roth, M.; Vassura, L.; Zepf, M.; Borghesi, M.

    2015-07-01

    Calibration of three scintillators (EJ232Q, BC422Q, and EJ410) in a time-of-flight arrangement using a laser-driven neutron source is presented. The three plastic scintillator detectors were calibrated with gamma-insensitive bubble detector spectrometers, which were absolutely calibrated over a wide range of neutron energies ranging from sub-MeV to 20 MeV. A typical set of data obtained simultaneously by the detectors is shown, measuring the neutron spectrum emitted from a petawatt-laser-irradiated thin foil.

  9. Empirical expression for DC magnetization curve of immobilized magnetic nanoparticles for use in biomedical applications

    NASA Astrophysics Data System (ADS)

    Elrefai, Ahmed L.; Sasayama, Teruyoshi; Yoshida, Takashi; Enpuku, Keiji

    2018-05-01

    We studied the magnetization (M-H) curve of immobilized magnetic nanoparticles (MNPs) used for biomedical applications. First, we performed numerical simulations of the DC M-H curve over a wide range of MNP parameters. Based on the simulation results, we obtained an empirical expression for the DC M-H curve. The empirical expression was compared with the measured M-H curves of various MNP samples, and quantitative agreement was obtained between them. The basic parameters of an MNP sample can also be estimated from the comparison. Therefore, the empirical expression is useful for analyzing the M-H curve of immobilized MNPs for specific biomedical applications.

  10. A weakly nonlinear theory for wave-vortex interactions in curved channel flow

    NASA Technical Reports Server (NTRS)

    Singer, Bart A.; Erlebacher, Gordon; Zang, Thomas A.

    1992-01-01

    A weakly nonlinear theory is developed to study the interaction of Tollmien-Schlichting (TS) waves and Dean vortices in curved channel flow. The predictions obtained from the theory agree well with results obtained from direct numerical simulations of curved channel flow, especially for low amplitude disturbances. Some discrepancies in the results of a previous theory with direct numerical simulations are resolved.

  11. Trend analysis of Terra/ASTER/VNIR radiometric calibration coefficient through onboard and vicarious calibrations as well as cross calibration with MODIS

    NASA Astrophysics Data System (ADS)

    Arai, Kohei

    2012-07-01

    Radiometric Calibration Coefficients (RCC) derived from more than 11 years of onboard and vicarious calibrations are compared, together with a cross comparison to the well-calibrated MODIS RCC. Fault Tree Analysis (FTA) is also conducted to clarify possible causes of the RCC degradation, together with a sensitivity analysis for vicarious calibration. One suspected cause of RCC degradation is identified through FTA. Test-site dependency of the vicarious calibration is quite obvious, because the vicarious-calibration RCC is sensitive to surface reflectance measurement accuracy rather than to atmospheric optical depth. The results from cross calibration with MODIS support the significant sensitivity of vicarious calibration to surface reflectance measurements.

  12. Accuracy evaluation of optical distortion calibration by digital image correlation

    NASA Astrophysics Data System (ADS)

    Gao, Zeren; Zhang, Qingchuan; Su, Yong; Wu, Shangquan

    2017-11-01

    Due to its convenience of operation, the camera calibration algorithm based on a plane template is widely used in image measurement, computer vision and other fields. How to select a suitable distortion model is always a problem to be solved. Therefore, there is an urgent need for an experimental evaluation of the accuracy of camera distortion calibrations. This paper presents an experimental method for evaluating camera distortion calibration accuracy, which is easy to implement, has high precision, and is suitable for a variety of commonly used lenses. First, we use the digital image correlation method to calculate the in-plane rigid-body displacement field of an image displayed on a liquid crystal display before and after translation, as captured with a camera. Next, we use a calibration board to calibrate the camera and obtain calibration parameters, which are then used to correct the calculation points of the images before and after deformation. The displacement fields before and after correction are compared to analyze the distortion calibration results. Experiments were carried out to evaluate the performance of two commonly used industrial camera lenses for four commonly used distortion models.

  13. A validated method for the quantitation of 1,1-difluoroethane using a gas in equilibrium method of calibration.

    PubMed

    Avella, Joseph; Lehrer, Michael; Zito, S William

    2008-10-01

    1,1-Difluoroethane (DFE), also known as Freon 152A, is a member of a class of compounds known as halogenated hydrocarbons. A number of these compounds have gained notoriety because of their ability to induce rapid onset of intoxication after inhalation exposure. Abuse of DFE has necessitated development of methods for its detection and quantitation in postmortem and human performance specimens. Furthermore, methodologies applicable to research studies are required, as there have been limited toxicokinetic and toxicodynamic reports published on DFE. This paper describes a method for the quantitation of DFE using a gas chromatography-flame-ionization headspace technique that employs solventless standards for calibration. Two calibration curves using 0.5 mL whole blood calibrators, which ranged from A: 0.225-1.350 to B: 9.0-180.0 mg/L, were developed. These were evaluated for linearity (0.9992 and 0.9995), limit of detection of 0.018 mg/L, limit of quantitation of 0.099 mg/L (recovery 111.9%, CV 9.92%), and upper limit of linearity of 27,000.0 mg/L. Combined-curve recovery results for a 98.0 mg/L DFE control that was prepared using an alternate technique were 102.2% with a CV of 3.09%. No matrix interference was observed in DFE-enriched blood, urine or brain specimens, nor did analysis of variance detect any significant differences (alpha = 0.01) in the area under the curve of blood, urine or brain specimens at three identical DFE concentrations. The method is suitable for use in forensic laboratories because validation was performed on instrumentation routinely used in forensic labs and due to the ease with which the calibration range can be adjusted. Perhaps more importantly, it is also useful for research-oriented studies because the removal of solvent from standard preparation eliminates the possibility of solvent-induced changes to the gas/liquid partitioning of DFE or chromatographic interference due to the presence of solvent in specimens.
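
    As a rough illustration of the calibration-curve bookkeeping described above (linearity, LOD, LOQ), here is a generic linear-calibration sketch in Python. The calibrator values are invented, and the 3.3·s/slope and 10·s/slope conventions are a common ICH-style choice, not necessarily the procedure the authors used.

    ```python
    import numpy as np

    # Illustrative calibrator data (concentration in mg/L vs. detector response);
    # values are made up, not the paper's.
    conc = np.array([0.225, 0.45, 0.675, 0.90, 1.125, 1.35])
    resp = np.array([0.051, 0.098, 0.152, 0.197, 0.251, 0.304])

    # Ordinary least-squares line and correlation coefficient (linearity check)
    slope, intercept = np.polyfit(conc, resp, 1)
    r = np.corrcoef(conc, resp)[0, 1]

    # One common convention (ICH-style): LOD = 3.3*s/slope, LOQ = 10*s/slope,
    # with s the residual standard deviation of the fit.
    resid = resp - (slope * conc + intercept)
    s = resid.std(ddof=2)
    lod, loq = 3.3 * s / slope, 10 * s / slope

    print(f"r^2 = {r**2:.4f}, LOD = {lod:.3f} mg/L, LOQ = {loq:.3f} mg/L")
    ```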

  14. A new systematic calibration method of ring laser gyroscope inertial navigation system

    NASA Astrophysics Data System (ADS)

    Wei, Guo; Gao, Chunfeng; Wang, Qi; Wang, Qun; Xiong, Zhenyu; Long, Xingwu

    2016-10-01

    The inertial navigation system (INS) has been the core component of both military and civil navigation systems. Before the INS is put into application, it is supposed to be calibrated in the laboratory in order to compensate for repeatability errors caused by manufacturing. The discrete calibration method cannot fulfill the requirements of high-accuracy calibration of a mechanically dithered ring laser gyroscope navigation system with shock absorbers. This paper analyzes the theory of error excitation and separation in detail and presents a new systematic calibration method for ring laser gyroscope inertial navigation systems. Error models and equations of the calibrated Inertial Measurement Unit are given. Proper rotation arrangement orders are then designed in order to establish the linear relationships between the change of velocity errors and the calibrated parameter errors. Experiments have been set up to compare the systematic errors calculated from the filtering calibration results with those obtained from the discrete calibration results. The largest position and velocity errors of the filtering calibration result are only 0.18 miles and 0.26 m/s, compared with 2 miles and 1.46 m/s for the discrete calibration result. These results validate the new systematic calibration method and prove its importance for the optimal design and accuracy improvement of the calibration of mechanically dithered ring laser gyroscope inertial navigation systems.

  15. Calibration of the dietary data obtained from the Brazilian center of the Natural History of HPV Infection in Men study: the HIM Study.

    PubMed

    Teixeira, Juliana Araujo; Baggio, Maria Luiza; Fisberg, Regina Mara; Marchioni, Dirce Maria Lobo

    2010-12-01

    The objective of this study was to estimate the calibration regressions for the dietary data that were measured using the quantitative food frequency questionnaire (QFFQ) in the Natural History of HPV Infection in Men (HIM) Study in Brazil. A sample of 98 individuals from the HIM study answered one QFFQ and three 24-hour recalls (24HR) at interviews. The calibration was performed using linear regression analysis in which the 24HR was the dependent variable and the QFFQ was the independent variable. Age, body mass index, physical activity, income and schooling were used as adjustment variables in the models. The geometric means of the 24HR and the calibration-corrected QFFQ were statistically equal. The dispersion graphs between the instruments demonstrate increased correlation after the correction, although there is greater dispersion of the points where the explanatory power of the models is worse. Identification of the calibration regressions for the dietary data of the HIM study will make it possible to estimate the effect of diet on HPV infection, corrected for the measurement error of the QFFQ.
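
    A minimal sketch of the regression-calibration step described above, assuming a data frame with one row per participant; the file name and column names (recall24, qffq, age, bmi, activity, income, schooling) are hypothetical placeholders, not the study's variables.

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data frame: one row per participant with the QFFQ estimate,
    # the mean of the three 24-hour recalls, and the adjustment covariates.
    df = pd.read_csv("him_diet_sample.csv")  # columns: recall24, qffq, age, bmi, activity, income, schooling

    # Calibration model: 24HR (reference measure) regressed on the QFFQ plus covariates.
    model = smf.ols("recall24 ~ qffq + age + bmi + activity + income + schooling", data=df).fit()
    print(model.params)

    # Calibration-corrected QFFQ intake = fitted value from the regression.
    df["qffq_calibrated"] = model.predict(df)
    ```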

  16. A Visual Servoing-Based Method for ProCam Systems Calibration

    PubMed Central

    Berry, Francois; Aider, Omar Ait; Mosnier, Jeremie

    2013-01-01

    Projector-camera systems are currently used in a wide field of applications, such as 3D reconstruction and augmented reality, and can provide accurate measurements, depending on the configuration and calibration. Frequently, the calibration task is divided into two steps: camera calibration followed by projector calibration. The latter still poses certain problems that are not easy to solve, such as the difficulty in obtaining a set of 2D–3D points to compute the projection matrix between the projector and the world. Existing methods are either not sufficiently accurate or not flexible. We propose an easy and automatic method to calibrate such systems that consists in projecting a calibration pattern and superimposing it automatically on a known printed pattern. The projected pattern is provided by a virtual camera observing a virtual pattern in an OpenGL model. The projector displays what the virtual camera visualizes. Thus, the projected pattern can be controlled and superimposed on the printed one with the aid of visual servoing. Our experimental results compare favorably with those of other methods considering both usability and accuracy. PMID:24084121

  17. More Unusual Light Curves from Kepler

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2017-03-01

    Twenty-three new objects have been added to the growing collection of stars observed to have unusual dips in their light curves. A recent study examines these stars and the potential causes of their strange behavior. An influx of data: The primary Kepler mission provided light curves for over 100,000 stars, and its continuation K2 is observing another 20,000 stars every three months. As we enter an era where these enormous photometric data sets become commonplace (Gaia will obtain photometry for millions of stars, and LSST billions), it's crucial that we understand the different categories of variability observed in these stars. The authors find three different types of light curves among their 23 unusual stars: scallop-shell curves show many undulations; persistent flux-dip class curves have discrete triangularly shaped flux dips; transient, narrow dip class curves have only one dip that is variable in depth. The authors speculate a common cause for the scallop-shell and persistent flux-dip stars, and a different cause for the transient flux-dip stars. [Stauffer et al. 2017] After filtering out the stars with planets, those in binary systems, those with circumstellar disks, and those with starspots, a number of oddities remain: a menagerie of stars with periodic variability that can't be accounted for in these categories. Some of these stars are now famous (for instance, Boyajian's star); some are lesser known. But by continuing to build up this sample of stars with unusual light curves, we have a better chance of understanding the sources of variability. Building the menagerie: To this end, a team of scientists led by John Stauffer (Spitzer Science Center at Caltech) has recently hunted for more additions to this sample in the K2 data set. In particular, they searched through the light curves from stars in the ρ Oph and Upper Scorpius star-forming region, a data set that makes up the largest collection of high-quality light curves for low-mass, pre

  18. An Overview of Suomi NPP VIIRS Calibration Maneuvers

    NASA Technical Reports Server (NTRS)

    Butler, James J.; Xiong, Xiaoxiong; Barnes, Robert A.; Patt, Frederick S.; Sun, Junqiang; Chiang, Kwofu

    2012-01-01

    The first Visible Infrared Imager Radiometer Suite (VIIRS) instrument was successfully launched on-board the Suomi National Polar-orbiting Partnership (SNPP) spacecraft on October 28, 2011. Suomi NPP VIIRS observations are made in 22 spectral bands, from the visible (VIS) to the long-wave infrared (LWIR), and are used to produce 22 Environmental Data Records (EDRs) with a broad range of scientific applications. The quality of these VIIRS EDRs strongly depends on the quality of its calibrated and geo-located Sensor Data Records (SDRs). Built with a strong heritage to the NASA's EOS MODerate resolution Imaging Spectroradiometer (MODIS) instrument, the VIIRS is calibrated on-orbit using a similar set of on-board calibrators (OBC), including a solar diffuser (SD) and solar diffuser stability monitor (SDSM) system for the reflective solar bands (RSB) and a blackbody (BB) for the thermal emissive bands (TEB). On-orbit maneuvers of the SNPP spacecraft provide additional calibration and characterization data from the VIIRS instrument which cannot be obtained pre-launch and are required to produce the highest quality SDRs. These include multi-orbit yaw maneuvers for the characterization of SD and SDSM screen transmission, quasi-monthly roll maneuvers to acquire lunar observations to track sensor degradation in the visible through shortwave infrared, and a driven pitch-over maneuver to acquire multiple scans of deep space to determine TEB response versus scan angle (RVS). This paper provides an overview of these three SNPP calibration maneuvers. Discussions are focused on their potential calibration and science benefits, pre-launch planning activities, and on-orbit scheduling and implementation strategies. Results from calibration maneuvers performed during the Intensive Calibration and Validation (ICV) period for the VIIRS sensor are illustrated. Also presented in this paper are lessons learned regarding the implementation of calibration spacecraft maneuvers on follow

  19. An overview of Suomi NPP VIIRS calibration maneuvers

    NASA Astrophysics Data System (ADS)

    Butler, James J.; Xiong, Xiaoxiong; Barnes, Robert A.; Patt, Frederick S.; Sun, Junqiang; Chiang, Kwofu

    2012-09-01

    The first Visible Infrared Imager Radiometer Suite (VIIRS) instrument was successfully launched on-board the Suomi National Polar-orbiting Partnership (SNPP) spacecraft on October 28, 2011. Suomi NPP VIIRS observations are made in 22 spectral bands, from the visible (VIS) to the long-wave infrared (LWIR), and are used to produce 22 Environmental Data Records (EDRs) with a broad range of scientific applications. The quality of these VIIRS EDRs strongly depends on the quality of its calibrated and geo-located Sensor Data Records (SDRs). Built with a strong heritage to the NASA's EOS MODerate resolution Imaging Spectroradiometer (MODIS) instrument, the VIIRS is calibrated on-orbit using a similar set of on-board calibrators (OBC), including a solar diffuser (SD) and solar diffuser stability monitor (SDSM) system for the reflective solar bands (RSB) and a blackbody (BB) for the thermal emissive bands (TEB). On-orbit maneuvers of the SNPP spacecraft provide additional calibration and characterization data from the VIIRS instrument which cannot be obtained pre-launch and are required to produce the highest quality SDRs. These include multi-orbit yaw maneuvers for the characterization of SD and SDSM screen transmission, quasi-monthly roll maneuvers to acquire lunar observations to track sensor degradation in the visible through shortwave infrared, and a driven pitch-over maneuver to acquire multiple scans of deep space to determine TEB response versus scan angle (RVS). This paper provides an overview of these three SNPP calibration maneuvers. Discussions are focused on their potential calibration and science benefits, pre-launch planning activities, and on-orbit scheduling and implementation strategies. Results from calibration maneuvers performed during the Intensive Calibration and Validation (ICV) period for the VIIRS sensor are illustrated. Also presented in this paper are lessons learned regarding the implementation of calibration spacecraft maneuvers on follow

  20. In-flight calibration verification of spaceborne remote sensing instruments

    NASA Astrophysics Data System (ADS)

    LaBaw, Clayton C.

    1990-07-01

    The need to verify the performance of unattended instrumentation has been recognized since scientists began sending these instruments into hostile environments to acquire data. The sea floor and the stratosphere have been explored, and the quality and accuracy of the data obtained verified by calibrating the instrumentation in the laboratory, both prior and subsequent to deployment. The inability to make the latter measurements on deep-space missions makes the calibration verification of these instruments a unique problem.

  1. Energy calibration of organic scintillation detectors for gamma rays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gu Jiahui; Xiao Genlai; Liu Jingyi

    1988-10-01

    An experimental method of calibrating organic detectors is described. A NaI(Tl) detector has the advantages of high detection efficiency, good energy resolution, and a definite position of the back-scattering peak. The precise position of the Compton edge can be determined by coincidence measurement between the pulse of an organic scintillation detector and the pulse of the back-scattering peak from the NaI(Tl) detector. It can be used to calibrate organic scintillation detectors of various sizes and shapes simply and reliably. The home-made plastic and organic liquid scintillation detectors are calibrated, and positions of the Compton edge as a function of gamma-ray energies are obtained.
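
    The Compton-edge positions used in this kind of calibration follow directly from Compton kinematics, E_C = 2E_gamma^2 / (m_e c^2 + 2E_gamma). A quick computation for a few commonly used gamma lines (the particular sources listed here are illustrative, not necessarily those used in the paper):

    ```python
    # Compton edge (maximum electron recoil energy) from Compton kinematics:
    #   E_C = 2*E_gamma**2 / (m_e*c**2 + 2*E_gamma)
    ME_C2 = 0.511  # electron rest energy, MeV

    def compton_edge(e_gamma_mev):
        return 2.0 * e_gamma_mev**2 / (ME_C2 + 2.0 * e_gamma_mev)

    # Common calibration gamma lines (MeV); illustrative choices.
    for source, e in [("137Cs", 0.662), ("22Na", 1.275), ("60Co", 1.332)]:
        print(f"{source}: E_gamma = {e:.3f} MeV -> Compton edge = {compton_edge(e):.3f} MeV")
    ```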

  2. Improved Radial Velocity Precision with a Tunable Laser Calibrator

    NASA Astrophysics Data System (ADS)

    Cramer, Claire; Brown, S.; Dupree, A. K.; Lykke, K. R.; Smith, A.; Szentgyorgyi, A.

    2010-01-01

    We present radial velocities obtained using a novel laser-based wavelength calibration technique. We have built a prototype laser calibrator for the Hectochelle spectrograph at the MMT 6.5 m telescope. The Hectochelle is a high-dispersion, fiber-fed, multi-object spectrograph capable of recording up to 240 spectra simultaneously with a resolving power of 40000. The standard wavelength calibration method makes use of spectra from thorium-argon hollow cathode lamps shining directly onto the fibers. The difference in light path between calibration and science light as well as the uneven distribution of spectral lines are believed to introduce errors of up to several hundred m/s in the wavelength scale. Our tunable laser wavelength calibrator solves these problems. The laser is bright enough for use with a dome screen, allowing the calibration light path to better match the science light path. Further, the laser is tuned in regular steps across a spectral order to generate a calibration spectrum, creating a comb of evenly-spaced lines on the detector. Using the solar spectrum reflected from the atmosphere to record the same spectrum in every fiber, we show that laser wavelength calibration brings radial velocity uncertainties down below 100 m/s. We present these results as well as an application of tunable laser calibration to stellar radial velocities determined with the infrared Ca triplet in globular clusters M15 and NGC 7492. We also suggest how the tunable laser could be useful for other instruments, including single-object, cross-dispersed echelle spectrographs, and adapted for infrared spectroscopy.

  3. Updating the HST/ACS G800L Grism Calibration

    NASA Astrophysics Data System (ADS)

    Hathi, Nimish P.; Pirzkal, Norbert; Grogin, Norman A.; Chiaberge, Marco; ACS Team

    2018-06-01

    We present results from our ongoing work on obtaining newly derived trace and wavelength calibrations of the HST/ACS G800L grism and comparing them to previous set of calibrations. Past calibration efforts were based on 2003 observations. New observations of an emission line Wolf-Rayet star (WR96) were recently taken in HST Cycle 25 (PID: 15401). These observations are used to analyze and measure various grism properties, including wavelength calibration, spectral trace/tilt, length/size of grism orders, and spacing between various grism orders. To account for the field dependence, we observe WR96 at 3 different observing positions over the HST/ACS field of view. The three locations are the center of chip 1, the center of chip 2, and the center of the WFC1A-2K subarray (center of WFC Amp A on chip 1). This new data will help us to evaluate any differences in the G800L grism properties compared to previous calibration data, and to apply improved data analysis techniques to update these old measurements.

  4. Alkali trace elements in Gale crater, Mars, with ChemCam: Calibration update and geological implications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Payre, Valerie; Fabre, Cecile; Cousin, Agnes

    The Chemistry Camera (ChemCam) instrument onboard Curiosity can detect minor and trace elements such as lithium, strontium, rubidium, and barium. Their abundances can provide some insights about Mars' magmatic history and sedimentary processes. We focus on developing new quantitative models for these elements by using a new laboratory database (more than 400 samples) that displays diverse compositions that are more relevant for Gale crater than the previous ChemCam database. These models are based on univariate calibration curves. For each element, the best model is selected depending on the results obtained by using the ChemCam calibration targets onboard Curiosity. New quantifications of Li, Sr, Rb, and Ba in Gale samples have been obtained for the first 1000 Martian days. Comparing these data in alkaline and magnesian rocks with the felsic and mafic clasts from the Martian meteorite NWA7533—from approximately the same geologic period—we observe a similar behavior: Sr, Rb, and Ba are more concentrated in soluble- and incompatible-element-rich mineral phases (Si, Al, and alkali-rich). Correlations between these trace elements and potassium in materials analyzed by ChemCam reveal a strong affinity with K-bearing phases such as feldspars, K-phyllosilicates, and potentially micas in igneous and sedimentary rocks. However, lithium is found in comparable abundances in alkali-rich and magnesium-rich Gale rocks. This very soluble element can be associated with both alkali and Mg-Fe phases such as pyroxene and feldspar. Here, these observations of Li, Sr, Rb, and Ba mineralogical associations highlight their substitution with potassium and their incompatibility in magmatic melts.

  5. Alkali trace elements in Gale crater, Mars, with ChemCam: Calibration update and geological implications

    DOE PAGES

    Payre, Valerie; Fabre, Cecile; Cousin, Agnes; ...

    2017-03-20

    The Chemistry Camera (ChemCam) instrument onboard Curiosity can detect minor and trace elements such as lithium, strontium, rubidium, and barium. Their abundances can provide some insights about Mars' magmatic history and sedimentary processes. We focus on developing new quantitative models for these elements by using a new laboratory database (more than 400 samples) that displays diverse compositions that are more relevant for Gale crater than the previous ChemCam database. These models are based on univariate calibration curves. For each element, the best model is selected depending on the results obtained by using the ChemCam calibration targets onboard Curiosity. New quantifications of Li, Sr, Rb, and Ba in Gale samples have been obtained for the first 1000 Martian days. Comparing these data in alkaline and magnesian rocks with the felsic and mafic clasts from the Martian meteorite NWA7533—from approximately the same geologic period—we observe a similar behavior: Sr, Rb, and Ba are more concentrated in soluble- and incompatible-element-rich mineral phases (Si, Al, and alkali-rich). Correlations between these trace elements and potassium in materials analyzed by ChemCam reveal a strong affinity with K-bearing phases such as feldspars, K-phyllosilicates, and potentially micas in igneous and sedimentary rocks. However, lithium is found in comparable abundances in alkali-rich and magnesium-rich Gale rocks. This very soluble element can be associated with both alkali and Mg-Fe phases such as pyroxene and feldspar. Here, these observations of Li, Sr, Rb, and Ba mineralogical associations highlight their substitution with potassium and their incompatibility in magmatic melts.

  6. Dimensional accuracy of aluminium extrusions in mechanical calibration

    NASA Astrophysics Data System (ADS)

    Raknes, Christian Arne; Welo, Torgeir; Paulsen, Frode

    2018-05-01

    Reducing dimensional variations in the extrusion process without increasing cost is challenging due to the nature of the process itself. An alternative approach—also from a cost perspective—is to use extruded profiles with standard tolerances and utilize downstream processes, thus calibrating the part within tolerance limits that are not achievable directly from the extrusion process. In this paper, two mechanical calibration strategies for the extruded product are investigated, utilizing the forming lines of the manufacturer. The first calibration strategy is based on global, longitudinal stretching in combination with local bending, while the second strategy utilizes the principle of transversal stretching and local bending of the cross-section. An extruded U-profile is used to make a comparison between the two methods using numerical analyses. The FEA program ABAQUS is used in combination with Design of Experiments (DOE) to provide response surfaces. DOE is conducted with a two-level fractional factorial design to collect the appropriate data. The aim is to find the main factors affecting the dimensional accuracy of the final part obtained by the two calibration methods. The results show that both calibration strategies effectively reduce cross-sectional variations from standard extrusion tolerances. It is concluded that mechanical calibration is a viable, low-cost alternative for aluminium parts that demand high dimensional accuracy, e.g. due to fit-up or welding requirements.

  7. Airport Landside - Volume III : ALSIM Calibration and Validation.

    DOT National Transportation Integrated Search

    1982-06-01

    This volume discusses calibration and validation procedures applied to the Airport Landside Simulation Model (ALSIM), using data obtained at Miami, Denver and LaGuardia Airports. Criteria for the selection of a validation methodology are described. T...

  8. A novel calibration method of focused light field camera for 3-D reconstruction of flame temperature

    NASA Astrophysics Data System (ADS)

    Sun, Jun; Hossain, Md. Moinul; Xu, Chuan-Long; Zhang, Biao; Wang, Shi-Min

    2017-05-01

    This paper presents a novel geometric calibration method for a focused light field camera to trace the rays of flame radiance and to reconstruct the three-dimensional (3-D) temperature distribution of a flame. A calibration model is developed to calculate the corner points and their projections for the focused light field camera. The characteristic of matching main lens and microlens f-numbers is used as an additional constraint for the calibration. Geometric parameters of the focused light field camera are then obtained using the Levenberg-Marquardt algorithm. Totally focused images, in which all the points are in focus, are utilized to validate the proposed calibration method. Calibration results are presented and discussed in detail. The maximum mean relative error of the calibration is found to be less than 0.13%, indicating that the proposed method is capable of calibrating the focused light field camera successfully. The parameters obtained by the calibration are then utilized to trace the rays of flame radiance. A least-squares QR-factorization algorithm with Planck's radiation law is used to reconstruct the 3-D temperature distribution of a flame. Experiments were carried out on an ethylene-air fired combustion test rig to reconstruct the temperature distribution of flames. The flame temperature obtained by the proposed method was then compared with that obtained using a high-precision thermocouple. The difference between the two measurements was found to be no greater than 6.7%. Experimental results demonstrated that the proposed calibration method and the applied measurement technique perform well in the reconstruction of the flame temperature.

  9. Empirical dual energy calibration (EDEC) for cone-beam computed tomography.

    PubMed

    Stenner, Philip; Berkus, Timo; Kachelriess, Marc

    2007-09-01

    Material-selective imaging using dual energy CT (DECT) relies heavily on well-calibrated material decomposition functions. These require precise knowledge of the detected x-ray spectra, and even if these are exactly known, the reliability of DECT will suffer from scattered radiation. We propose an empirical method to determine the proper decomposition function. In contrast to other decomposition algorithms, our empirical dual energy calibration (EDEC) technique requires neither knowledge of the spectra nor of the attenuation coefficients. The desired material-selective raw data p1 and p2 are obtained as functions of the measured attenuation data q1 and q2 (one DECT scan = two raw data sets) by passing them through a polynomial function. The polynomial's coefficients are determined using a general least-squares fit based on thresholded images of a calibration phantom. The calibration phantom's dimensions should be of the same order of magnitude as the test object, but other than that no assumptions on its exact size or positioning are made. Once the decomposition coefficients are determined, DECT raw data can be decomposed by simply passing them through the polynomial. To demonstrate EDEC, simulations of an oval CTDI phantom, a lung phantom, a thorax phantom and a mouse phantom were carried out. The method was further verified by measuring a physical mouse phantom, a half-and-half-cylinder phantom and a Yin-Yang phantom with a dedicated in vivo dual source micro-CT scanner. The raw data were decomposed into their components, reconstructed, and the pixel values obtained were compared to the theoretical values. The determination of the calibration coefficients with EDEC is very robust and depends only slightly on the type of calibration phantom used. The images of the test phantoms (simulations and measurements) show a nearly perfect agreement with the theoretical μ values and density values. Since EDEC is an empirical technique, it inherently compensates for scatter
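
    A minimal sketch of the empirical decomposition idea described above: fit the coefficients of a polynomial in the dual-energy attenuation pair (q1, q2) to a material-selective target by linear least squares, then decompose new raw data by evaluating the polynomial. The polynomial order, the synthetic data and the stand-in target model are assumptions for illustration only.

    ```python
    import numpy as np

    def design_matrix(q1, q2, order=3):
        """All monomials q1**i * q2**j with i + j <= order."""
        cols = [q1**i * q2**j for i in range(order + 1) for j in range(order + 1 - i)]
        return np.column_stack(cols)

    # q1, q2: measured low/high-kVp attenuation raw data (flattened); p1_target:
    # the material-selective raw data implied by the thresholded calibration-phantom
    # images (all hypothetical arrays here).
    rng = np.random.default_rng(0)
    q1 = rng.uniform(0, 2, 5000)
    q2 = rng.uniform(0, 2, 5000)
    p1_target = 1.8 * q1 - 0.9 * q2 + 0.05 * q1 * q2   # stand-in for the forward-projected phantom

    A = design_matrix(q1, q2)
    coeffs, *_ = np.linalg.lstsq(A, p1_target, rcond=None)

    # Decomposition afterwards is just evaluating the polynomial on new raw data.
    p1_decomposed = design_matrix(q1, q2) @ coeffs
    ```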

  10. Compression of contour data through exploiting curve-to-curve dependence

    NASA Technical Reports Server (NTRS)

    Yalabik, N.; Cooper, D. B.

    1975-01-01

    An approach to exploiting curve-to-curve dependencies in order to achieve high data compression is presented. An existing approach to along-curve compression through the use of cubic spline approximation is taken and extended by investigating the additional compressibility achievable through exploiting curve-to-curve structure. One of the models under investigation is reported on.

  11. Re-calibration of coronary risk prediction: an example of the Seven Countries Study.

    PubMed

    Puddu, Paolo Emilio; Piras, Paolo; Kromhout, Daan; Tolonen, Hanna; Kafatos, Anthony; Menotti, Alessandro

    2017-12-14

    We aimed at performing a calibration and re-calibration process using six standard risk factors from Northern European (NE, N = 2360) or Southern European (SE, N = 2789) middle-aged men of the Seven Countries Study, whose parameters and data were fully known, to establish whether re-calibration gave the right answer. The Greenwood-Nam-D'Agostino technique as modified by Demler (GNDD) in 2015 produced chi-squared statistics using 10 deciles of observed/expected CHD mortality risk, corresponding to the Hosmer-Lemeshow chi-squared employed for multiple logistic equations whereby binary data are used. Instead of the number of events, the GNDD test uses survival probabilities of observed and predicted events. The exercise applied, in five different ways, the parameters of the NE predictive model to SE (and vice versa) and compared the outcome of the simulated re-calibration with the real data. Good re-calibration could be obtained only when risk factor coefficients were substituted, being similar in magnitude and not significantly different between NE and SE. In all other ways, a good re-calibration could not be obtained. This is enough to argue for an overall re-evaluation of most investigations that, without GNDD or another proper technique for statistically assessing the potential differences, concluded that re-calibration is a fair method and might therefore be used, with no specific caution.

  12. A rapid tool for determination of titanium dioxide content in white chickpea samples.

    PubMed

    Sezer, Banu; Bilge, Gonca; Berkkan, Aysel; Tamer, Ugur; Hakki Boyaci, Ismail

    2018-02-01

    Titanium dioxide (TiO2) is a widely used additive in foods. However, in the scientific community there is an ongoing debate on health concerns about TiO2. The main goal of this study is to determine TiO2 content by using laser-induced breakdown spectroscopy (LIBS). To this end, different amounts of TiO2 were added to white chickpeas and analyzed by using LIBS. A univariate calibration curve was obtained by following Ti emissions at 390.11 nm, and a partial least squares (PLS) calibration curve was obtained by evaluating the whole spectra. The results showed that the Ti calibration curve at 390.11 nm provides successful determination of the Ti level with an R² of 0.985 and a limit of detection (LOD) of 33.9 ppm, while PLS has an R² of 0.989 and an LOD of 60.9 ppm. Furthermore, commercial white chickpea samples were used to validate the method, and the validation R² values for the simple calibration and PLS were calculated as 0.989 and 0.951, respectively.
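
    A hedged sketch of the two calibration routes mentioned above, univariate (integrated Ti emission near 390.11 nm vs. concentration) and multivariate (PLS on the full spectrum), using scikit-learn; the synthetic spectra, band limits and component count are placeholders, not the paper's data or settings.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    # Hypothetical stand-ins for the paper's data: spiked TiO2 levels and LIBS spectra.
    rng = np.random.default_rng(1)
    wavelengths = np.linspace(380, 400, 500)
    ti_ppm = np.repeat([0, 100, 250, 500, 1000], 4).astype(float)
    line = np.exp(-0.5 * ((wavelengths - 390.11) / 0.05) ** 2)
    spectra = (ti_ppm[:, None] * line[None, :] * 1e-3
               + rng.normal(0, 0.01, (ti_ppm.size, wavelengths.size)))

    # Univariate calibration: integrated intensity of the Ti line vs. concentration.
    band = (wavelengths > 389.9) & (wavelengths < 390.3)
    intensity = spectra[:, band].sum(axis=1)
    slope, intercept = np.polyfit(ti_ppm, intensity, 1)

    # Multivariate calibration: PLS on the whole spectrum, cross-validated predictions.
    pls = PLSRegression(n_components=3)
    pred = cross_val_predict(pls, spectra, ti_ppm, cv=5)
    print(slope, intercept, np.corrcoef(ti_ppm, pred.ravel())[0, 1])
    ```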

  13. ITER-relevant calibration technique for soft x-ray spectrometer.

    PubMed

    Rzadkiewicz, J; Książek, I; Zastrow, K-D; Coffey, I H; Jakubowska, K; Lawson, K D

    2010-10-01

    The ITER-oriented JET research program brings new requirements for the low-Z impurity monitoring, in particular for the Be—the future main wall component of JET and ITER. Monitoring based on Bragg spectroscopy requires an absolute sensitivity calibration, which is challenging for large tokamaks. This paper describes both “component-by-component” and “continua” calibration methods used for the Be IV channel (75.9 Å) of the Bragg rotor spectrometer deployed on JET. The calibration techniques presented here rely on multiorder reflectivity calculations and measurements of continuum radiation emitted from helium plasmas. These offer excellent conditions for the absolute photon flux calibration due to their low level of impurities. It was found that the component-by-component method gives results that are four times higher than those obtained by means of the continua method. A better understanding of this discrepancy requires further investigations.

  14. Estimation of Uncertainties in Stage-Discharge Curve for an Experimental Himalayan Watershed

    NASA Astrophysics Data System (ADS)

    Kumar, V.; Sen, S.

    2016-12-01

    Various water resource projects developed on rivers originating from the Himalayan region, the "Water Tower of Asia", play an important role in downstream development. Flow measurements at the desired river site are very critical for river engineers and hydrologists for water resources planning and management, flood forecasting, reservoir operation and flood inundation studies. However, accurate discharge assessment of these mountainous rivers is costly, tedious and frequently dangerous to operators during flood events. Currently, in India, discharge estimation is linked to the stage-discharge relationship known as a rating curve. This relationship is affected by a high degree of uncertainty. Estimating the uncertainty of a rating curve remains a relevant challenge because it is not easy to parameterize. The main sources of rating curve uncertainty are errors due to incorrect discharge measurement, variation in hydraulic conditions and depth measurement. In this study our objective is to obtain the best rating curve parameters that fit the limited record of observations and to estimate the uncertainties at different depths obtained from the rating curve. The rating curve parameters of the standard power law are estimated for three different streams of the Aglar watershed, located in the lesser Himalayas, by a maximum-likelihood estimator. Quantification of uncertainties in the developed rating curves is obtained from the estimated variances and covariances of the rating curve parameters. Results showed that the uncertainties varied with catchment behavior, with errors between 0.006 and 1.831 m³/s. Discharge uncertainty in the Aglar watershed streams depends significantly on the extent of extrapolation outside the range of observed water levels. Extrapolation analysis confirmed that extrapolation by more than 15% for maximum discharges and 5% for minimum discharges is not recommended for these mountainous gauging sites.
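
    A minimal sketch of fitting the standard power-law rating curve Q = a(h - h0)^b and propagating the parameter covariance to a discharge uncertainty. Nonlinear least squares is used here as a stand-in for the maximum-likelihood estimator (the two coincide under Gaussian errors); all gauging values are invented.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def rating_curve(h, a, h0, b):
        """Standard power-law rating curve Q = a * (h - h0)**b."""
        return a * np.clip(h - h0, 1e-6, None) ** b

    # Hypothetical stage (m) / discharge (m^3/s) gaugings for one stream.
    stage = np.array([0.32, 0.41, 0.55, 0.68, 0.80, 0.95, 1.10])
    discharge = np.array([0.08, 0.18, 0.45, 0.85, 1.35, 2.20, 3.30])

    popt, pcov = curve_fit(rating_curve, stage, discharge, p0=(3.0, 0.2, 1.8))
    perr = np.sqrt(np.diag(pcov))          # parameter standard errors

    def discharge_sigma(h, popt, pcov, eps=1e-6):
        """First-order propagation of the parameter covariance to a discharge
        uncertainty at stage h (Jacobian by finite differences)."""
        J = np.array([(rating_curve(h, *(popt + eps * np.eye(3)[i])) -
                       rating_curve(h, *popt)) / eps for i in range(3)])
        return float(np.sqrt(J @ pcov @ J))

    print(popt, perr, discharge_sigma(0.9, popt, pcov))
    ```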

  15. Calibration and validation of projection lithography in chemically amplified resist systems using fluorescence imaging

    NASA Astrophysics Data System (ADS)

    Mason, Michael D.; Ray, Krishanu; Feke, Gilbert D.; Grober, Robert D.; Pohlers, Gerd; Cameron, James F.

    2003-05-01

    Coumarin 6 (C6), a pH-sensitive fluorescent molecule, was doped into commercial resist systems to demonstrate a cost-effective fluorescence microscopy technique for detecting latent photoacid images in exposed chemically amplified resist films. The fluorescence image contrast is optimized by carefully selecting optical filters to match the spectroscopic properties of C6 in the resist matrices. We demonstrate the potential of this technique for two specific non-invasive applications. First, a fast, convenient fluorescence technique is demonstrated for determination of quantum yields of photo-acid generation. Since the Ka of C6 in the 193 nm resist system lies within the range of acid concentrations that can be photogenerated, we have used this technique to evaluate the acid generation efficiency of various photo-acid generators (PAGs). The technique is based on doping the resist formulations containing the candidate PAGs with C6, coating one wafer per PAG, patterning the wafer with a dose ramp and spectroscopically imaging the wafers. The fluorescence of each pattern in the dose ramp is measured as a single image and analyzed with the optical titration model. Second, a nondestructive in-line diagnostic technique is developed for the focus calibration and validation of a projection lithography system. Our experimental results show excellent correlation between the fluorescence images and scanning electron microscope analysis of developed features. This technique has successfully been applied in both deep-UV resists, e.g., Shipley UVIIHS resist, and 193 nm resists, e.g., Shipley Vema-type resist. This method of focus calibration has also been extended to samples with feature sizes below the diffraction limit, where the pitch between adjacent features is on the order of 300 nm. Image capture, data analysis, and focus latitude verification are all computer controlled from a single hardware/software platform. Typical focus calibration curves can be obtained within several

  16. VizieR Online Data Catalog: SNe II light curves & spectra from the CfA (Hicken+, 2017)

    NASA Astrophysics Data System (ADS)

    Hicken, M.; Friedman, A. S.; Blondin, S.; Challis, P.; Berlind, P.; Calkins, M.; Esquerdo, G.; Matheson, T.; Modjaz, M.; Rest, A.; Kirshner, R. P.

    2018-01-01

    Since all of the optical photometry reported here was produced as part of the CfA3 and CfA4 processing campaigns, see Hicken+ (2009, J/ApJ/700/331) and Hicken+ (2012, J/ApJS/200/12) for greater details on the instruments, observations, photometry pipeline, calibration, and host-galaxy subtraction used to create the CfA SN II light curves. (8 data files).

  17. Non-contact AFM measurement of the Hamaker constants of solids: Calibrating cantilever geometries.

    PubMed

    Fronczak, Sean G; Browne, Christopher A; Krenek, Elizabeth C; Beaudoin, Stephen P; Corti, David S

    2018-05-01

    Surface effects arising from roughness and deformation can negatively affect the results of AFM contact experiments. Using the non-contact portion of an AFM deflection curve is therefore desirable for estimating the Hamaker constant, A, of a solid material. A previously validated non-contact quasi-dynamic method for estimating A is revisited, in which the cantilever tip is now always represented by an "effective sphere". In addition to simplifying this previous method, accurate estimates of A can still be obtained even though precise knowledge of the nanoscale geometric features of the cantilever tip is no longer required. The tip's "effective" radius of curvature, R_eff, is determined from a "calibration" step, in which the tip's deflection at first contact with the surface is measured for a substrate with a known Hamaker constant. After R_eff is known for a given tip, estimates of A for other surfaces of interest are then determined. An experimental study was conducted to validate the new method, and the obtained results are in good agreement with predictions from the Lifshitz approximation, when available. Since R_eff accounts for all geometric uncertainties of the tip through a single fitted parameter, no visual fitting of the tip shape was required.

  18. Novel crystal timing calibration method based on total variation

    NASA Astrophysics Data System (ADS)

    Yu, Xingjian; Isobe, Takashi; Watanabe, Mitsuo; Liu, Huafeng

    2016-11-01

    A novel crystal timing calibration method based on total variation (TV), abbreviated as ‘TV merge’, has been developed for a high-resolution positron emission tomography (PET) system. The proposed method was developed for a system with a large number of crystals; it can provide timing calibration at the crystal level. In the proposed method, the timing calibration process was formulated as a linear problem. To robustly optimize the timing resolution, a TV constraint was added to the linear equation. Moreover, to solve the computer memory problem associated with the calculation of the timing calibration factors for systems with a large number of crystals, the merge component was used for obtaining the crystal-level timing calibration values. Compared with other conventional methods, the data measured from a standard cylindrical phantom filled with a radioisotope solution were sufficient for performing a high-precision crystal-level timing calibration. In this paper, both simulation and experimental studies were performed to demonstrate the effectiveness and robustness of the TV merge method. We compared the timing resolutions of a ²²Na point source, which was located in the field of view (FOV) of the brain PET system, with various calibration techniques. After implementing the TV merge method, the timing resolution improved from 3.34 ns full width at half maximum (FWHM) to 2.31 ns FWHM.
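
    A rough sketch of the linear-problem-plus-TV formulation described above, using cvxpy: per-crystal timing offsets t are estimated from pairwise coincidence time differences with an L1 total-variation penalty over the crystal index. The pairing model, penalty weight and data are all assumptions for illustration, not the authors' implementation or the merge step.

    ```python
    import numpy as np
    import cvxpy as cp

    # Hypothetical setup: t[i] is the unknown timing offset of crystal i; each
    # coincidence measurement gives the difference t[i] - t[j] plus noise.
    rng = np.random.default_rng(2)
    n_crystals = 200
    t_true = np.cumsum(rng.normal(0, 0.05, n_crystals))       # slowly varying offsets (ns)
    pairs = rng.integers(0, n_crystals, size=(5000, 2))
    b = t_true[pairs[:, 0]] - t_true[pairs[:, 1]] + rng.normal(0, 0.2, len(pairs))

    # Design matrix of the linear problem A t = b (one row per coincidence pair).
    A = np.zeros((len(pairs), n_crystals))
    A[np.arange(len(pairs)), pairs[:, 0]] = 1.0
    A[np.arange(len(pairs)), pairs[:, 1]] = -1.0

    # Least squares with a total-variation penalty over the crystal index,
    # plus a zero-mean constraint to fix the global time origin.
    t = cp.Variable(n_crystals)
    lam = 1.0
    objective = cp.Minimize(cp.sum_squares(A @ t - b) + lam * cp.norm1(cp.diff(t)))
    cp.Problem(objective, [cp.sum(t) == 0]).solve()

    # Residual spread of the recovered offsets relative to the true ones.
    print(np.std((t.value - t.value.mean()) - (t_true - t_true.mean())))
    ```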

  19. High-precision method of binocular camera calibration with a distortion model.

    PubMed

    Li, Weimin; Shan, Siyu; Liu, Hui

    2017-03-10

    A high-precision camera calibration method for binocular stereo vision system based on a multi-view template and alternative bundle adjustment is presented in this paper. The proposed method could be achieved by taking several photos on a specially designed calibration template that has diverse encoded points in different orientations. In this paper, the method utilized the existing algorithm used for monocular camera calibration to obtain the initialization, which involves a camera model, including radial lens distortion and tangential distortion. We created a reference coordinate system based on the left camera coordinate to optimize the intrinsic parameters of left camera through alternative bundle adjustment to obtain optimal values. Then, optimal intrinsic parameters of the right camera can be obtained through alternative bundle adjustment when we create a reference coordinate system based on the right camera coordinate. We also used all intrinsic parameters that were acquired to optimize extrinsic parameters. Thus, the optimal lens distortion parameters and intrinsic and extrinsic parameters were obtained. Synthetic and real data were used to test the method. The simulation results demonstrate that the maximum mean absolute relative calibration errors are about 3.5e-6 and 1.2e-6 for the focal length and the principal point, respectively, under zero-mean Gaussian noise with 0.05 pixels standard deviation. The real result shows that the reprojection error of our model is about 0.045 pixels with the relative standard deviation of 1.0e-6 over the intrinsic parameters. The proposed method is convenient, cost-efficient, highly precise, and simple to carry out.
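
    For orientation, here is a baseline stereo calibration sketch with OpenCV using the same radial-plus-tangential distortion model: monocular calibration of each camera first, then a stereo step with intrinsics held fixed. This is a standard chessboard workflow, not the authors' multi-view encoded template or alternative bundle adjustment; file paths, pattern size and square size are assumptions.

    ```python
    import glob
    import cv2
    import numpy as np

    PATTERN = (9, 6)                       # inner chessboard corners (assumed)
    objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * 25.0  # 25 mm squares

    obj_pts, pts_l, pts_r = [], [], []
    for fl, fr in zip(sorted(glob.glob("left/*.png")), sorted(glob.glob("right/*.png"))):
        gl = cv2.imread(fl, cv2.IMREAD_GRAYSCALE)
        gr = cv2.imread(fr, cv2.IMREAD_GRAYSCALE)
        okl, cl = cv2.findChessboardCorners(gl, PATTERN)
        okr, cr = cv2.findChessboardCorners(gr, PATTERN)
        if okl and okr:
            obj_pts.append(objp); pts_l.append(cl); pts_r.append(cr)

    size = gl.shape[::-1]
    # Monocular calibrations first (intrinsics + radial/tangential distortion) ...
    _, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, pts_l, size, None, None)
    _, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, pts_r, size, None, None)
    # ... then the stereo step refines the extrinsics with intrinsics held fixed.
    rms, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
        obj_pts, pts_l, pts_r, K1, d1, K2, d2, size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    print("stereo reprojection RMS:", rms)
    ```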

  20. Curve Number Application in Continuous Runoff Models: An Exercise in Futility?

    NASA Astrophysics Data System (ADS)

    Lamont, S. J.; Eli, R. N.

    2006-12-01

    The suitability of applying the NRCS (Natural Resource Conservation Service) Curve Number (CN) to continuous runoff prediction is examined by studying the dependence of CN on several hydrologic variables in the context of a complex nonlinear hydrologic model. The continuous watershed model Hydrologic Simulation Program-FORTRAN (HSPF) was employed using a simple theoretical watershed in two numerical procedures designed to investigate the influence of soil type, soil depth, storm depth, storm distribution, and initial abstraction ratio value on the calculated CN value. This study stems from a concurrent project involving the design of a hydrologic modeling system to support the Cumulative Hydrologic Impact Assessments (CHIA) of over 230 coal-mined watersheds throughout West Virginia. Because of the large number of watersheds and limited availability of data necessary for HSPF calibration, it was initially proposed that predetermined CN values be used as a surrogate for those HSPF parameters controlling direct runoff. A soil physics model was developed to relate CN values to those HSPF parameters governing soil moisture content and infiltration behavior, with the remaining HSPF parameters being adopted from previous calibrations on real watersheds. A numerical procedure was then adopted to back-calculate CN values from the theoretical watershed using antecedent moisture conditions equivalent to the NRCS Antecedent Runoff Condition (ARC) II. This procedure used the direct runoff produced from a cyclic synthetic storm event time series input to HSPF. A second numerical method of CN determination, using real time series rainfall data, was used to provide a comparison to those CN values determined using the synthetic storm event time series. It was determined that the calculated CN values resulting from both numerical methods demonstrated a nonlinear dependence on all of the computational variables listed above. It was concluded that the use of the Curve Number as a
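
    For reference, the CN-to-runoff relation the study back-calculates is the standard NRCS form (depths in inches): S = 1000/CN - 10, Ia = λS with λ = 0.2, and Q = (P - Ia)² / (P - Ia + S) for P > Ia. A short sketch of the forward calculation and the usual closed-form back-calculation of CN from an observed (P, Q) pair:

    ```python
    def cn_runoff(p_in, cn, lam=0.2):
        """NRCS curve-number runoff (depths in inches): S = 1000/CN - 10, Ia = lam*S."""
        s = 1000.0 / cn - 10.0
        ia = lam * s
        return 0.0 if p_in <= ia else (p_in - ia) ** 2 / (p_in - ia + s)

    def cn_from_pq(p_in, q_in):
        """Back-calculate CN from an observed rainfall/runoff pair (Ia = 0.2*S form)."""
        s = 5.0 * (p_in + 2.0 * q_in - (4.0 * q_in**2 + 5.0 * p_in * q_in) ** 0.5)
        return 1000.0 / (s + 10.0)

    p = 3.0                        # storm depth, inches
    q = cn_runoff(p, cn=75.0)      # direct runoff for CN 75
    print(q, cn_from_pq(p, q))     # back-calculation recovers CN = 75
    ```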

  1. SU-F-T-274: Modified Dose Calibration Methods for IMRT QA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, W; Westlund, S

    2016-06-15

    Purpose: To investigate IMRT QA uncertainties caused by dose calibration and modify widely used dose calibration procedures to improve IMRT QA accuracy and passing rate. Methods: IMRT QA dose measurement is calibrated using a calibration factor (CF) that is the ratio between the measured value and the expected value corresponding to the reference fields delivered on a phantom. Two IMRT QA phantoms were used for this study: a 30×30×30 cm³ solid water cube phantom (Cube), and the PTW Octavius phantom. CF was obtained by delivering 100 MUs to the phantoms with different reference fields ranging from 3×3 cm² to 20×20 cm². For Cube, CFs were obtained using the following beam arrangements: 2-AP Field - chamber at dmax, 2-AP Field - chamber at isocenter, 4-beam box - chamber at isocenter, and 8 equally spaced fields with chamber at isocenter. The same plans were delivered on Octavius and CFs were derived for the dose at the isocenter using the above beam arrangements. The Octavius plans were evaluated with PTW-VeriSoft (Gamma criteria of 3%/3mm). Results: Four head and neck IMRT plans were included in this study. For point dose measurement with Cube, the CFs with 4-Field gave the best agreement between measurement and calculation, within 4%, for large field plans. All the measurement results agreed within 2% for a small field plan. Compared with other calibration field sizes, field sizes of 5×5 to 15×15 cm² were more accurate. For Octavius, 4-Field calibration increased the passing rate by up to 10% compared to AP calibration. The passing rate also increased by up to 4% with the increase of field size from 3×3 to 20×20 cm². Conclusion: IMRT QA results are correlated with the calibration methods used. Dose calibration using a 4-beam box with field sizes from 5×5 to 20×20 cm² can improve IMRT QA accuracy and passing rate.
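
    A minimal sketch of the calibration-factor bookkeeping described in the abstract: CF is the mean ratio of measured to expected dose over the reference-field deliveries, and plan measurements are divided by CF before comparison. All numbers are illustrative.

    ```python
    import numpy as np

    # Reference-field deliveries (e.g. 4-beam box, chamber at isocentre):
    # measured chamber dose vs. expected dose, in cGy. Values are illustrative.
    measured_ref = np.array([101.8, 102.1, 101.5, 102.4])
    expected_ref = np.array([100.0, 100.0, 100.0, 100.0])

    cf = np.mean(measured_ref / expected_ref)        # calibration factor

    # Apply to an IMRT QA point measurement before comparing with the plan dose.
    measured_plan, expected_plan = 186.3, 184.0
    corrected = measured_plan / cf
    print(f"CF = {cf:.3f}, deviation = {100*(corrected/expected_plan - 1):+.2f}%")
    ```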

  2. Standardization of gamma-glutamyltransferase assays by intermethod calibration. Effect on determining common reference limits.

    PubMed

    Steinmetz, Josiane; Schiele, Françoise; Gueguen, René; Férard, Georges; Henny, Joseph

    2007-01-01

    The improvement in the consistency of gamma-glutamyltransferase (GGT) activity results among different assays after calibration with a common material was estimated. We evaluated whether this harmonization could lead to reference limits common to different routine methods. Seven laboratories measured GGT activity using their own routine analytical system, both according to the manufacturer's recommendation and after calibration with a multi-enzyme calibrator [value assigned by the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) reference procedure]. All samples were re-measured using the IFCC reference procedure. Two groups of subjects were selected in each laboratory: a group of healthy men aged 18-25 years without long-term medication and with alcohol consumption less than 44 g/day, and a group of subjects with elevated GGT activity. The day-to-day coefficients of variation were less than 2.9% in each laboratory. The means obtained in the group of healthy subjects without common calibration (range of the means 16-23 U/L) were significantly different from those obtained by the IFCC procedure in five laboratories. After calibration, the means remained significantly different from the IFCC procedure results in only one laboratory. For three calibrated methods, the slope values of linear regression vs. the IFCC procedure were not different from the value 1. The results obtained with these three methods for healthy subjects (n=117) were gathered and reference limits were calculated. These were 11-49 U/L (2.5th-97.5th percentiles). The calibration also improved the consistency of elevated results when compared to the IFCC procedure. The common calibration improved the level of consistency between different routine methods. It made it possible to define common reference limits, which are quite similar to those proposed by the IFCC. This approach should lead to a real benefit in terms of prevention, screening, diagnosis, therapeutic monitoring and for

  3. New technique for calibrating hydrocarbon gas flowmeters

    NASA Technical Reports Server (NTRS)

    Singh, J. J.; Puster, R. L.

    1984-01-01

    A technique for measuring calibration correction factors for hydrocarbon mass flowmeters is described. It is based on the Nernst theorem for matching the partial pressure of oxygen in the combustion products of the test hydrocarbon, burned in oxygen-enriched air, with that in normal air. It is applied to a widely used type of commercial thermal mass flowmeter for a number of hydrocarbons. The calibration correction factors measured using this technique are in good agreement with the values obtained by other independent procedures. The technique is successfully applied to the measurement of differences as low as one percent of the effective hydrocarbon content of the natural gas test samples.

  4. LCC: Light Curves Classifier

    NASA Astrophysics Data System (ADS)

    Vo, Martin

    2017-08-01

    Light Curves Classifier uses data mining and machine learning to obtain and classify desired objects. This task can be accomplished by attributes of light curves or any time series, including shapes, histograms, or variograms, or by other available information about the inspected objects, such as color indices, temperatures, and abundances. After specifying features which describe the objects to be searched, the software trains on a given training sample, and can then be used for unsupervised clustering for visualizing the natural separation of the sample. The package can also be used for automatic tuning of the parameters of the methods used (for example, the number of hidden neurons or binning ratio). Trained classifiers can be used for filtering outputs from astronomical databases or data stored locally. The Light Curve Classifier can also be used for simple downloading of light curves and all available information on queried stars. It can natively connect to OgleII, OgleIII, ASAS, CoRoT, Kepler, Catalina and MACHO, and new connectors or descriptors can be implemented. In addition to direct usage of the package and the command line UI, the program can be used through a web interface. Users can create jobs for "training" methods on given objects, querying databases and filtering outputs by trained filters. Preimplemented descriptors, classifiers and connectors can be picked by simple clicks and their parameters can be tuned by giving ranges of these values. All combinations are then calculated and the best one is used for creating the filter. Natural separation of the data can be visualized by unsupervised clustering.

  5. Urban stormwater capture curve using three-parameter mixed exponential probability density function and NRCS runoff curve number method.

    PubMed

    Kim, Sangdan; Han, Suhee

    2010-01-01

    Most related literature regarding designing urban non-point-source management systems assumes that precipitation event-depths follow the 1-parameter exponential probability density function to reduce the mathematical complexity of the derivation process. However, the method of expressing the rainfall is the most important factor for analyzing stormwater; thus, a better mathematical expression, which represents the probability distribution of rainfall depths, is suggested in this study. Also, the rainfall-runoff calculation procedure required for deriving a stormwater-capture curve is altered by the U.S. Natural Resources Conservation Service (Washington, D.C.) (NRCS) runoff curve number method to consider the nonlinearity of the rainfall-runoff relation and, at the same time, obtain a more verifiable and representative curve for design when applying it to urban drainage areas with complicated land-use characteristics, such as occurs in Korea. The result of developing the stormwater-capture curve from the rainfall data in Busan, Korea, confirms that the methodology suggested in this study provides a better solution than the pre-existing one.
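
    A rough Monte Carlo sketch of the idea: sample event depths from a two-component ("3-parameter") mixed exponential, transform them to runoff with the NRCS curve number method (metric form S = 25400/CN - 254), and read off the fraction of long-term runoff captured for a range of capture depths. The distribution parameters, CN and capture depths are all assumptions, and this is not the paper's analytical derivation.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Three-parameter mixed exponential for event rainfall depth (mm):
    # f(x) = w/b1 * exp(-x/b1) + (1-w)/b2 * exp(-x/b2); parameters are illustrative.
    w, b1, b2 = 0.7, 6.0, 25.0
    n = 100_000
    depths = np.where(rng.random(n) < w, rng.exponential(b1, n), rng.exponential(b2, n))

    def cn_runoff_mm(p, cn=85.0, lam=0.2):
        """NRCS curve-number runoff in mm: S = 25400/CN - 254, Ia = lam*S."""
        s = 25400.0 / cn - 254.0
        ia = lam * s
        return np.where(p > ia, (p - ia) ** 2 / (p - ia + s), 0.0)

    runoff = cn_runoff_mm(depths)

    # Capture curve: fraction of long-term runoff volume captured by a facility
    # that can store/treat up to `cap` mm of runoff per event.
    for cap in (5, 10, 20, 40):
        captured = np.minimum(runoff, cap).sum() / runoff.sum()
        print(f"capture depth {cap:>2} mm -> {100*captured:5.1f}% of runoff captured")
    ```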

  6. Does the Budyko curve reflect a maximum power state of hydrological systems? A backward analysis

    NASA Astrophysics Data System (ADS)

    Westhoff, Martijn; Zehe, Erwin; Archambeau, Pierre; Dewals, Benjamin

    2016-04-01

    Almost all catchments plot within a small envelope around the Budyko curve. This apparent behaviour suggests that organizing principles may play a role in the evolution of catchments. In this paper we applied the thermodynamic principle of maximum power as the organizing principle. In a top-down approach we derived mathematical formulations of the relation between relative wetness and gradients driving runoff and evaporation for a simple one-box model. We did this in an inverse manner such that, when the conductances are optimized with the maximum power principle, the steady state behaviour of the model leads exactly to a point on the asymptotes of the Budyko curve. Subsequently, we added dynamics in forcing and actual evaporation, causing the Budyko curve to deviate from the asymptotes. Despite the simplicity of the model, catchment observations compare reasonably well with the Budyko curves subject to observed dynamics in rainfall and actual evaporation. Thus, by constraining the model (optimized with the maximum power principle) with the asymptotes of the Budyko curve, we were able to derive more realistic values of the aridity and evaporation index without any parameter calibration. Future work should focus on better representing the boundary conditions of real catchments and eventually adding more complexity to the model.
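
    For orientation only, the classical empirical Budyko (1974) curve relating the evaporation index E/P to the aridity index PET/P can be sketched as below; this is the empirical curve itself, not the authors' one-box maximum-power model.

      import numpy as np

      def budyko_evaporation_index(aridity):
          """E/P as a function of the aridity index PET/P (Budyko, 1974)."""
          phi = np.asarray(aridity, dtype=float)
          return np.sqrt(phi * np.tanh(1.0 / phi) * (1.0 - np.exp(-phi)))

      phi = np.linspace(0.1, 5.0, 50)
      # The curve approaches the energy limit E/P = PET/P for small PET/P and the
      # water limit E/P = 1 for large PET/P, i.e. the two Budyko asymptotes.
      print(budyko_evaporation_index(phi)[:5])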

  7. Does the Budyko curve reflect a maximum-power state of hydrological systems? A backward analysis

    NASA Astrophysics Data System (ADS)

    Westhoff, M.; Zehe, E.; Archambeau, P.; Dewals, B.

    2016-01-01

    Almost all catchments plot within a small envelope around the Budyko curve. This apparent behaviour suggests that organizing principles may play a role in the evolution of catchments. In this paper we applied the thermodynamic principle of maximum power as the organizing principle. In a top-down approach we derived mathematical formulations of the relation between relative wetness and gradients driving run-off and evaporation for a simple one-box model. We did this in an inverse manner such that, when the conductances are optimized with the maximum-power principle, the steady-state behaviour of the model leads exactly to a point on the asymptotes of the Budyko curve. Subsequently, we added dynamics in forcing and actual evaporation, causing the Budyko curve to deviate from the asymptotes. Despite the simplicity of the model, catchment observations compare reasonably well with the Budyko curves subject to observed dynamics in rainfall and actual evaporation. Thus by constraining the model that has been optimized with the maximum-power principle with the asymptotes of the Budyko curve, we were able to derive more realistic values of the aridity and evaporation index without any parameter calibration. Future work should focus on better representing the boundary conditions of real catchments and eventually adding more complexity to the model.

  8. Simple laser vision sensor calibration for surface profiling applications

    NASA Astrophysics Data System (ADS)

    Abu-Nabah, Bassam A.; ElSoussi, Adnane O.; Al Alami, Abed ElRahman K.

    2016-09-01

    Due to the relatively large structures in the Oil and Gas industry, original equipment manufacturers (OEMs) have been implementing custom-designed laser vision sensor (LVS) surface profiling systems as part of quality control in their manufacturing processes. The rough manufacturing environment and the continuous movement and misalignment of these custom-designed tools adversely affect the accuracy of laser-based vision surface profiling applications. Accordingly, Oil and Gas businesses have been raising the demand from the OEMs to implement practical and robust LVS calibration techniques prior to running any visual inspections. This effort introduces an LVS calibration technique representing a simplified version of two known calibration techniques, which are commonly implemented to obtain a calibrated LVS system for surface profiling applications. Both calibration techniques are implemented virtually and experimentally to scan simulated and three-dimensional (3D) printed features of known profiles, respectively. Scanned data is transformed from the camera frame to points in the world coordinate system and compared with the input profiles to validate the introduced calibration technique capability against the more complex approach and preliminarily assess the measurement technique for weld profiling applications. Moreover, the sensitivity to stand-off distances is analyzed to illustrate the practicality of the presented technique.

  9. A proposed standard method for polarimetric calibration and calibration verification

    NASA Astrophysics Data System (ADS)

    Persons, Christopher M.; Jones, Michael W.; Farlow, Craig A.; Morell, L. Denise; Gulley, Michael G.; Spradley, Kevin D.

    2007-09-01

    Accurate calibration of polarimetric sensors is critical to reducing and analyzing phenomenology data, producing uniform polarimetric imagery for deployable sensors, and ensuring predictable performance of polarimetric algorithms. It is desirable to develop a standard calibration method, including verification reporting, in order to increase credibility with customers and foster communication and understanding within the polarimetric community. This paper seeks to facilitate discussions within the community on arriving at such standards. Both the calibration and verification methods presented here are performed easily with common polarimetric equipment, and are applicable to visible and infrared systems with either partial Stokes or full Stokes sensitivity. The calibration procedure has been used on infrared and visible polarimetric imagers over a six year period, and resulting imagery has been presented previously at conferences and workshops. The proposed calibration method involves the familiar calculation of the polarimetric data reduction matrix by measuring the polarimeter's response to a set of input Stokes vectors. With this method, however, linear combinations of Stokes vectors are used to generate highly accurate input states. This allows the direct measurement of all system effects, in contrast with fitting modeled calibration parameters to measured data. This direct measurement of the data reduction matrix allows higher order effects that are difficult to model to be discovered and corrected for in calibration. This paper begins with a detailed tutorial on the proposed calibration and verification reporting methods. Example results are then presented for a LWIR rotating half-wave retarder polarimeter.
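
    A hedged sketch of the generic linear-algebra step behind this kind of calibration (measure the polarimeter response to a set of known input Stokes vectors, solve for the measurement matrix in the least-squares sense, and use its pseudo-inverse as the data reduction matrix) is given below; the instrument matrix and noise level are hypothetical, and the sketch omits the paper's use of linear combinations of states.

      import numpy as np

      # Known input Stokes vectors as columns: unpolarized, +Q, -Q, +U, +V.
      S_in = np.array([[1.0, 1.0,  1.0, 1.0, 1.0],
                       [0.0, 1.0, -1.0, 0.0, 0.0],
                       [0.0, 0.0,  0.0, 1.0, 0.0],
                       [0.0, 0.0,  0.0, 0.0, 1.0]])

      rng = np.random.default_rng(1)
      W_true = rng.normal(size=(4, 4))                            # unknown instrument measurement matrix
      M = W_true @ S_in + rng.normal(scale=1e-3, size=(4, 5))     # measured responses to the input states

      W_est = M @ np.linalg.pinv(S_in)                            # least-squares estimate of W
      DRM = np.linalg.pinv(W_est)                                 # data reduction matrix

      s_scene = np.array([1.0, 0.3, -0.1, 0.05])                  # an arbitrary scene Stokes vector
      print(DRM @ (W_true @ s_scene))                             # recovered Stokes vector, close to s_scene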

  10. RF Reference Switch for Spaceflight Radiometer Calibration

    NASA Technical Reports Server (NTRS)

    Knuble, Joseph

    2013-01-01

    The goal of this technology is to provide improved calibration and measurement sensitivity to the Soil Moisture Active Passive Mission (SMAP) radiometer. While RF switches have been used in the past to calibrate microwave radiometers, the switch used on SMAP employs several techniques uniquely tailored to the instrument requirements and passive remote sensing in general to improve radiometer performance. Measurement error and sensitivity are improved by employing techniques to reduce thermal gradients within the device, reduce insertion loss during antenna observations, increase insertion loss temporal stability, and increase rejection of radar and RFI (radio-frequency interference) signals during calibration. The two legs of the single-pole double-throw reference switch employ three PIN diodes per leg in a parallel-shunt configuration to minimize insertion loss and increase stability while exceeding rejection requirements at 1,413 MHz. The high-speed packaged diodes are selected to minimize junction capacitance and resistance while ensuring the parallel devices have very similar I-V curves. Switch rejection is improved by adding high-impedance quarter-wave tapers before and after the diodes, along with replacing the ground via of one diode per leg with an open circuit stub. Errors due to thermal gradients in the switch are reduced by embedding the 50-ohm reference load within the switch, along with using a 0.25-in. (approximately equal to 0.6-cm) aluminum prebacked substrate. Previous spaceflight microwave radiometers did not embed the reference load and thermocouple directly within the calibration switch. In doing so, the SMAP switch reduces error caused by thermal gradients between the load and switch. Thermal issues are further reduced by moving the custom, high-speed regulated driver circuit to a physically separate PWB (printed wiring board). Regarding RF performance, previous spaceflight reference switches have not employed high-impedance tapers to improve

  11. Accuracy, calibration and clinical performance of the EuroSCORE: can we reduce the number of variables?

    PubMed

    Ranucci, Marco; Castelvecchio, Serenella; Menicanti, Lorenzo; Frigiola, Alessandro; Pelissero, Gabriele

    2010-03-01

    The European system for cardiac operative risk evaluation (EuroSCORE) is currently used in many institutions and is considered a reference tool in many countries. We hypothesised that too many variables were included in the EuroSCORE using limited patient series. We tested different models using a limited number of variables. A total of 11150 adult patients undergoing cardiac operations at our institution (2001-2007) were retrospectively analysed. The 17 risk factors composing the EuroSCORE were separately analysed and ranked for accuracy of prediction of hospital mortality. Seventeen models were created by progressively including one factor at a time. The models were compared for accuracy with a receiver operating characteristics (ROC) analysis and area under the curve (AUC) evaluation. Calibration was tested with Hosmer-Lemeshow statistics. Clinical performance was assessed by comparing the predicted with the observed mortality rates. The best accuracy (AUC 0.76) was obtained using a model including only age, left ventricular ejection fraction, serum creatinine, emergency operation and non-isolated coronary operation. The EuroSCORE AUC (0.75) was not significantly different. Calibration and clinical performance were better in the five-factor model than in the EuroSCORE. Only in high-risk patients were 12 factors needed to achieve a good performance. Including many factors in multivariable logistic models increases the risk for overfitting, multicollinearity and human error. A five-factor model offers the same level of accuracy but demonstrated better calibration and clinical performance. Models with a limited number of factors may work better than complex models when applied to a limited number of patients. Copyright (c) 2009 European Association for Cardio-Thoracic Surgery. Published by Elsevier B.V. All rights reserved.
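
    As a rough sketch of the kind of model-reduction exercise described (adding candidate risk factors one at a time to a logistic model and tracking the ROC AUC), the following uses synthetic data with scikit-learn; unlike the study, the factors are simply taken in column order rather than ranked by univariate accuracy, and nothing here reproduces the authors' patient series.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      # Synthetic stand-in for a registry: 17 candidate factors, a rare binary outcome.
      X, y = make_classification(n_samples=2000, n_features=17, n_informative=5,
                                 weights=[0.97], random_state=0)

      for k in range(1, X.shape[1] + 1):
          model = LogisticRegression(max_iter=1000)
          auc = cross_val_score(model, X[:, :k], y, cv=5, scoring="roc_auc").mean()
          print(f"{k:2d} factors: cross-validated AUC = {auc:.3f}")
      # The AUC typically plateaus after a handful of informative factors, which is the
      # point the abstract makes about five-factor versus seventeen-factor models.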

  12. Planck 2013 results. VIII. HFI photometric calibration and mapmaking

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Armitage-Caplan, C.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bertincourt, B.; Bielewicz, P.; Bobin, J.; Bock, J. J.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bridges, M.; Bucher, M.; Burigana, C.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chary, R.-R.; Chen, X.; Chiang, H. C.; Chiang, L.-Y.; Christensen, P. R.; Church, S.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.-M.; Désert, F.-X.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Dupac, X.; Efstathiou, G.; Enßlin, T. A.; Eriksen, H. K.; Filliard, C.; Finelli, F.; Forni, O.; Frailis, M.; Franceschi, E.; Galeotta, S.; Ganga, K.; Giard, M.; Giardino, G.; Giraud-Héraud, Y.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Hansen, F. K.; Hanson, D.; Harrison, D.; Helou, G.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lamarre, J.-M.; Lasenby, A.; Laureijs, R. J.; Lawrence, C. R.; Le Jeune, M.; Lellouch, E.; Leonardi, R.; Leroy, C.; Lesgourgues, J.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maffei, B.; Mandolesi, N.; Maris, M.; Marshall, D. J.; Martin, P. G.; Martínez-González, E.; Masi, S.; Massardi, M.; Matarrese, S.; Matthai, F.; Maurin, L.; Mazzotta, P.; McGehee, P.; Meinhold, P. R.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Moreno, R.; Morgante, G.; Mortlock, D.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Osborne, S.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paladini, R.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Pearson, T. J.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Roudier, G.; Rusholme, B.; Santos, D.; Savini, G.; Scott, D.; Shellard, E. P. S.; Spencer, L. D.; Starck, J.-L.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sunyaev, R.; Sureau, F.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Tavagnacco, D.; Techene, S.; Terenzi, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L. A.; Wandelt, B. D.; Yvon, D.; Zacchei, A.; Zonca, A.

    2014-11-01

    This paper describes the methods used to produce photometrically calibrated maps from the Planck High Frequency Instrument (HFI) cleaned, time-ordered information. HFI observes the sky over a broad range of frequencies, from 100 to 857 GHz. To obtain the best calibration accuracy over such a large range, two different photometric calibration schemes have to be used. The 545 and 857 GHz data are calibrated by comparing flux-density measurements of Uranus and Neptune with models of their atmospheric emission. The lower frequencies (below 353 GHz) are calibrated using the solar dipole. A component of this anisotropy is time-variable, owing to the orbital motion of the satellite in the solar system. Photometric calibration is thus tightly linked to mapmaking, which also addresses low-frequency noise removal. By comparing observations taken more than one year apart in the same configuration, we have identified apparent gain variations with time. These variations are induced by non-linearities in the read-out electronics chain. We have developed an effective correction to limit their effect on calibration. We present several methods to estimate the precision of the photometric calibration. We distinguish relative uncertainties (between detectors, or between frequencies) and absolute uncertainties. Absolute uncertainties lie in the range from 0.54% to 10% from 100 to 857 GHz. We describe the pipeline used to produce the maps from the HFI timelines, based on the photometric calibration parameters, and the scheme used to set the zero level of the maps a posteriori. We also discuss the cross-calibration between HFI and the SPIRE instrument on board Herschel. Finally we summarize the basic characteristics of the set of HFI maps included in the 2013 Planck data release.

  13. Calibration of Watershed Lag Time Equation for Philippine Hydrology using RADARSAT Digital Elevation Models

    NASA Astrophysics Data System (ADS)

    Cipriano, F. R.; Lagmay, A. M. A.; Horritt, M.; Mendoza, J.; Sabio, G.; Punay, K. N.; Taniza, H. J.; Uichanco, C.

    2015-12-01

    Widespread flooding is a major problem in the Philippines. The country experiences heavy rainfall throughout the year, and several areas are prone to flood hazards because of the country's unique topography. Human casualties and destruction of infrastructure are just some of the damages caused by flooding, and the Philippine government has undertaken various efforts to mitigate these hazards. One of the solutions was to create flood hazard maps of different floodplains and use them to predict the possible catastrophic results of different rain scenarios. To produce these maps with accurate output, different input parameters were needed, one of which is the calculation of hydrological components from topographic data. This paper presents how a calibrated lag time (TL) equation was obtained using measurable catchment parameters. Lag time is an essential input in flood mapping and is defined as the duration between the peak rainfall and the peak discharge of the watershed. The lag time equation involves three measurable parameters, namely, watershed length (L), maximum potential retention (S) derived from the curve number, and watershed slope (Y), all of which were available from RADARSAT Digital Elevation Models (DEM). This approach was based on a similar method developed by CH2M Hill and Horritt for Taiwan, which has meteorological and hydrological characteristics similar to those of the Philippines. Rainfall data from fourteen water level sensors covering 67 storms from all the regions in the country were used to estimate the actual lag time. These sensors were chosen by using a screening process that considers the distance of the sensors from the sea, the availability of recorded data, and the catchment size. The actual lag time values were plotted against the values obtained from the Natural Resource Conservation Management handbook lag time equation. Regression analysis was used to obtain the final calibrated equation that would be used to calculate the lag time
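
    The uncalibrated handbook lag-time equation referred to above has the standard NRCS form in US customary units; a minimal sketch follows (the Philippine recalibration adjusts the coefficients, which are not reproduced here, and the inputs below are illustrative).

      # Standard NRCS watershed lag equation: L in feet, CN dimensionless, slope Y in percent.
      def nrcs_lag_time(L_ft, CN, Y_percent):
          """Watershed lag time in hours."""
          S = 1000.0 / CN - 10.0                                   # maximum potential retention (in)
          return (L_ft ** 0.8) * (S + 1.0) ** 0.7 / (1900.0 * Y_percent ** 0.5)

      print(nrcs_lag_time(L_ft=15000, CN=75, Y_percent=4.0))       # lag of a mid-size catchment, in hours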

  14. Calibrating thermal behavior of electronics

    DOEpatents

    Chainer, Timothy J.; Parida, Pritish R.; Schultz, Mark D.

    2017-07-11

    A method includes: determining, during a calibration process, a relationship between indirect thermal data for a processor and a measured temperature associated with the processor; obtaining the indirect thermal data for the processor during actual operation of the processor; and determining an actual significant temperature associated with the processor during the actual operation, using the indirect thermal data obtained during actual operation and the relationship.
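
    A simple illustration of the idea in this and the following two records (fit a relationship between an indirect thermal signal and a measured reference temperature during calibration, then apply that relationship during operation) is sketched below; the linear form, the indirect signal, and the readings are all assumptions for illustration only.

      import numpy as np

      indirect = np.array([10.0, 20.0, 30.0, 40.0, 50.0])     # indirect thermal data (arbitrary units)
      measured = np.array([35.2, 44.8, 55.1, 64.7, 75.3])     # reference temperature during calibration (deg C)

      slope, intercept = np.polyfit(indirect, measured, 1)    # the calibration relationship

      def estimate_temperature(indirect_reading):
          """Estimate the processor temperature during actual operation from the indirect data alone."""
          return slope * indirect_reading + intercept

      print(estimate_temperature(33.0))                       # roughly 58 deg C for this toy fit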

  15. Calibrating thermal behavior of electronics

    DOEpatents

    Chainer, Timothy J.; Parida, Pritish R.; Schultz, Mark D.

    2016-05-31

    A method includes: determining, during a calibration process, a relationship between indirect thermal data for a processor and a measured temperature associated with the processor; obtaining the indirect thermal data for the processor during actual operation of the processor; and determining an actual significant temperature associated with the processor during the actual operation, using the indirect thermal data obtained during actual operation and the relationship.

  16. Calibrating thermal behavior of electronics

    DOEpatents

    Chainer, Timothy J.; Parida, Pritish R.; Schultz, Mark D.

    2017-01-03

    A method includes: determining, during a calibration process, a relationship between indirect thermal data for a processor and a measured temperature associated with the processor; obtaining the indirect thermal data for the processor during actual operation of the processor; and determining an actual significant temperature associated with the processor during the actual operation, using the indirect thermal data obtained during actual operation and the relationship.

  17. Multiplexed MRM-Based Protein Quantitation Using Two Different Stable Isotope-Labeled Peptide Isotopologues for Calibration.

    PubMed

    LeBlanc, André; Michaud, Sarah A; Percy, Andrew J; Hardie, Darryl B; Yang, Juncong; Sinclair, Nicholas J; Proudfoot, Jillaine I; Pistawka, Adam; Smith, Derek S; Borchers, Christoph H

    2017-07-07

    When quantifying endogenous plasma proteins for fundamental and biomedical research, as well as for clinical applications, precise, reproducible, and robust assays are required. Targeted detection of peptides in a bottom-up strategy is the most common and precise mass spectrometry-based quantitation approach when combined with the use of stable isotope-labeled peptides. However, when measuring proteins in plasma, the unknown endogenous levels prevent the implementation of the best calibration strategies, since no blank matrix is available. Consequently, several alternative calibration strategies are employed by different laboratories. In this study, these methods were compared to a new approach using two different stable isotope-labeled standard (SIS) peptide isotopologues for each endogenous peptide to be quantified, enabling an external calibration curve as well as the quality control samples to be prepared in pooled human plasma without interference from endogenous peptides. This strategy improves the analytical performance of the assay and enables the accuracy of the assay to be monitored, which can also facilitate method development and validation.
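
    A generic sketch of the internal-standard calibration arithmetic involved (regress the peak-area ratio of the spiked standard to a constant internal standard against concentration, then read the endogenous level off the curve from its own area ratio) is shown below; the values are illustrative, and the two-isotopologue bookkeeping of the paper is simplified to a single ratio.

      import numpy as np

      conc = np.array([5.0, 10.0, 25.0, 50.0, 100.0])           # spiked standard peptide (fmol/uL)
      area_ratio = np.array([0.11, 0.21, 0.52, 1.02, 2.05])     # spiked standard / internal standard

      slope, intercept = np.polyfit(conc, area_ratio, 1)         # linear calibration curve

      endogenous_ratio = 0.78                                    # endogenous peptide / internal standard
      endogenous_conc = (endogenous_ratio - intercept) / slope
      print(f"endogenous concentration ~ {endogenous_conc:.1f} fmol/uL")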

  18. ESR/Alanine gamma-dosimetry in the 10-30 Gy range.

    PubMed

    Fainstein, C; Winkler, E; Saravi, M

    2000-05-01

    We report the alanine dosimeter preparation, the procedures for using the ESR dosimetry method, and the resulting calibration curve for gamma irradiation in the range of 10-30 Gy. We use the calibration curve to measure the irradiation dose in the gamma irradiation of human blood, as required in blood transfusion therapy. The ESR/alanine results are compared against those obtained using the thermoluminescent dosimetry (TLD) method.

  19. Magnetic luminescent nanoparticles as internal calibration for an immunoassay for ricin

    NASA Astrophysics Data System (ADS)

    Dosev, Dosi; Nichkova, Mikaela; Ma, Zhi-Ya; Gee, Shirley J.; Hammock, Bruce D.; Kennedy, Ian M.

    2008-02-01

    Fluorescence techniques rely on measurement of relative fluorescence units and require calibration to obtain reliable and comparable quantitative data. Fluorescent immunoassays are a very sensitive and convenient method of choice for rapid detection of biotoxins, such as ricin. Here we present the application of magnetic luminescent nanoparticles (MLNPs) with a magnetic core of Fe3O4 and a fluorescent shell of Eu:Gd2O3 as carriers for a nanobead immunoassay for the detection of ricin with internal calibration. A sandwich immunoassay for ricin was performed on the surface of the MLNPs. The particles were functionalized with capture polyclonal antibodies. Anti-ricin antibodies labeled with Alexa Fluor dye were used as the detecting antibodies. After magnetic extraction, the amount of ricin bound to the particle surface was quantified and related to the fluorescence signal of the nanoparticles. In this new platform, the MLNPs have three main functions: (1) a probe for the specific extraction of the target analyte from the sample; (2) a carrier in the quantitative immunoassay with magnetic separation; and (3) an internal standard in the fluorescence measurement of the dye reporter. The MLNPs serve as an internal control for the total analysis, including extraction and assay performance. This approach eliminates the experimental error inherent in particle extraction and measurement of absolute organic dye fluorescence intensities. All fluorescence measurements were performed in a microplate reader. The standard curve for ricin had a dynamic range from 20 ng/ml to 100 μg/ml with a detection limit of 5 ng/ml. The configuration that has been developed can be easily adapted to a high-throughput miniaturized system.

  20. Calibration of ground-based microwave radiometers - Accuracy assessment and recommendations for network users

    NASA Astrophysics Data System (ADS)

    Pospichal, Bernhard; Küchler, Nils; Löhnert, Ulrich; Crewell, Susanne; Czekala, Harald; Güldner, Jürgen

    2016-04-01

    Ground-based microwave radiometers (MWR) are becoming widely used in atmospheric remote sensing and are starting to be routinely operated by national weather services and other institutions. However, common standards for the calibration of these radiometers and detailed knowledge of their error characteristics are needed in order to assimilate the data into models. Intercomparisons of calibrations by different MWRs have rarely been done. Therefore, two calibration experiments, in Lindenberg (2014) and Meckenheim (2015), were performed within the framework of TOPROF (COST Action ES1303) in order to assess uncertainties and differences between various instruments. In addition, a series of experiments was carried out in Oklahoma in autumn 2014. The focus lay on the performance of the two main instrument types which are currently used operationally. These are the MP-Profiler series by Radiometrics Corporation as well as the HATPRO series by Radiometer Physics GmbH (RPG). Both instrument types operate in two frequency bands, one along the 22 GHz water vapour line, the other at the lower wing of the 60 GHz oxygen absorption complex. The goal was to establish protocols for providing quality-controlled (QC) MWR data and their uncertainties. To this end, standardized calibration procedures for MWR were developed and recommendations for radiometer users were compiled. We focus here mainly on data types, integration times, and optimal settings for calibration intervals, both for absolute (liquid nitrogen, tipping curve) and relative (hot load, noise diode) calibrations. Besides the recommendations for ground-based MWR operators, we will present methods to determine the accuracy of the calibration as well as means for automatic data quality control. In addition, some results from the intercomparison of different radiometers will be discussed.

  1. Supervised Detection of Anomalous Light Curves in Massive Astronomical Catalogs

    NASA Astrophysics Data System (ADS)

    Nun, Isadora; Pichara, Karim; Protopapas, Pavlos; Kim, Dae-Won

    2014-09-01

    The development of synoptic sky surveys has led to a massive amount of data for which the resources needed for analysis are beyond human capabilities. In order to process this information and to extract all possible knowledge, machine learning techniques become necessary. Here we present a new methodology to automatically discover unknown variable objects in large astronomical catalogs. With the aim of taking full advantage of all the information we have about known objects, our method is based on a supervised algorithm. In particular, we train a random forest classifier using known variability classes of objects and obtain votes for each of the objects in the training set. We then model this voting distribution with a Bayesian network and obtain the joint voting distribution among the training objects. Consequently, an unknown object is considered an outlier insofar as it has a low joint probability. By leaving out one of the classes on the training set, we perform a validity test and show that when the random forest classifier attempts to classify unknown light curves (the class left out), it votes with an unusual distribution among the classes. This rare voting is detected by the Bayesian network and expressed as a low joint probability. Our method is suitable for exploring massive data sets given that the training process is performed offline. We tested our algorithm on 20 million light curves from the MACHO catalog and generated a list of anomalous candidates. After analysis, we divided the candidates into two main classes of outliers: artifacts and intrinsic outliers. Artifacts were principally due to air mass variation, seasonal variation, bad calibration, or instrumental errors and were consequently removed from our outlier list and added to the training set. After retraining, we selected about 4000 objects, which we passed to a post-analysis stage by performing a cross-match with all publicly available catalogs. Within these candidates we identified certain known
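
    A hedged sketch of the core idea (train a random forest on known classes, take its class-vote distribution for each object, and flag objects whose vote vector is improbable under the training objects' vote vectors) is given below; a Gaussian kernel density estimate stands in for the paper's Bayesian network, and the features and labels are synthetic.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from scipy.stats import gaussian_kde

      X, y = make_classification(n_samples=500, n_features=8, n_classes=3,
                                 n_informative=5, random_state=0)
      clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

      votes_train = clf.predict_proba(X)                 # per-class vote fractions for known objects
      kde = gaussian_kde(votes_train[:, :-1].T)          # drop one column: fractions sum to one

      # Score new objects: a few training-like objects plus a few far-off ones.
      rng = np.random.default_rng(1)
      X_new = np.vstack([X[:5], rng.normal(4.0, 1.0, size=(5, 8))])
      log_density = kde.logpdf(clf.predict_proba(X_new)[:, :-1].T)
      print(log_density)   # lower values indicate more unusual vote patterns, i.e. outlier candidates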

  2. Multi-q pattern classification of polarization curves

    NASA Astrophysics Data System (ADS)

    Fabbri, Ricardo; Bastos, Ivan N.; Neto, Francisco D. Moura; Lopes, Francisco J. P.; Gonçalves, Wesley N.; Bruno, Odemir M.

    2014-02-01

    Several experimental measurements are expressed in the form of one-dimensional profiles, for which there is a scarcity of methodologies able to classify the pertinence of a given result to a specific group. The polarization curves that evaluate the corrosion kinetics of electrodes in corrosive media are applications where the behavior is chiefly analyzed from profiles. Polarization curves are indeed a classic method to determine the global kinetics of metallic electrodes, but the strongly nonlinear responses of different metals and alloys can overlap, and discrimination becomes a challenging problem. Moreover, even finding a typical curve from replicated tests requires subjective judgment. In this paper, we used the so-called multi-q approach, based on Tsallis statistics, in a classification engine to separate the multiple polarization curve profiles of two stainless steels. We collected 48 experimental polarization curves in an aqueous chloride medium for two stainless steel types with different resistance against localized corrosion. Multi-q pattern analysis was then carried out over a wide potential range, from the cathodic up to the anodic region. Excellent classification rates were obtained: 90%, 80%, and 83% for the low (cathodic), high (anodic), and combined potential ranges, respectively, using only 2% of the original profile data. These results show the potential of the proposed approach towards efficient, robust, systematic, and automatic classification of highly nonlinear profile curves.

  3. Implementation Learning and Forgetting Curve to Scheduling in Garment Industry

    NASA Astrophysics Data System (ADS)

    Muhamad Badri, Huda; Deros, Baba Md; Syahri, M.; Saleh, Chairul; Fitria, Aninda

    2016-02-01

    The learning curve shows the relationship between time and the cumulative number of units produced, giving a mathematical description of the performance of workers carrying out repetitive work. The problem addressed in this study is the difference in worker performance before and after the break, which affects the company's production scheduling. The study was conducted in the garment industry, and its aim is to predict the company's production schedule using the learning curve and the forgetting curve. By implementing the learning curve and the forgetting curve, this paper contributes to improving worker performance, in line with the increase in maximum output: in the 3 productive hours before the break the maximum output is 15 units, with a learning curve percentage of 93.24% for the company. Meanwhile, for the forgetting curve, the maximum output in the 3 productive hours after the break is 11 units, with a forgetting curve percentage of 92.96%. The resulting 26 units per productive working day are then used as the basis for production scheduling.
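
    The underlying Wright learning-curve model (the time to produce the n-th unit falls as T_n = T_1 * n^b, with b = log(learning rate)/log 2) can be sketched as below; the 93.24% rate is taken from the abstract, while the first-unit time is an assumed value for illustration only.

      import math

      def unit_time(n, t_first, learning_rate):
          """Time to produce the n-th unit under Wright's learning-curve model."""
          b = math.log(learning_rate) / math.log(2)
          return t_first * n ** b

      def units_within(minutes, t_first, learning_rate):
          """How many consecutive units fit in the given time window."""
          elapsed, n = 0.0, 0
          while True:
              t = unit_time(n + 1, t_first, learning_rate)
              if elapsed + t > minutes:
                  return n
              elapsed, n = elapsed + t, n + 1

      # Units producible in a 3 h session, assuming a 14 min first unit and a 93.24% learning rate.
      print(units_within(180, t_first=14.0, learning_rate=0.9324))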

  4. Evaluation of plasmid and genomic DNA calibrants used for the quantification of genetically modified organisms.

    PubMed

    Caprioara-Buda, M; Meyer, W; Jeynov, B; Corbisier, P; Trapmann, S; Emons, H

    2012-07-01

    The reliable quantification of genetically modified organisms (GMOs) by real-time PCR requires, besides thoroughly validated quantitative detection methods, sustainable calibration systems. The latter establish the anchor points for the measured value and the measurement unit. In this paper, the suitability of two types of DNA calibrants, i.e. plasmid DNA and genomic DNA extracted from plant leaves, for the certification of the GMO content in reference materials as a copy number ratio between two targeted DNA sequences was investigated. The PCR efficiencies and coefficients of determination of the calibration curves, as well as the measured copy number ratios for three powder certified reference materials (CRMs), namely ERM-BF415e (NK603 maize), ERM-BF425c (356043 soya), and ERM-BF427c (98140 maize), originally certified for their mass fraction of GMO, were compared for both types of calibrants. In all three systems investigated, the PCR efficiencies of plasmid DNA were slightly closer to the PCR efficiencies observed for the genomic DNA extracted from seed powders than to those of the genomic DNA extracted from leaves. Although the mean DNA copy number ratios for each CRM overlapped within their uncertainties, the DNA copy number ratios were significantly different for the two types of calibrants. Based on these observations, both plasmid and leaf genomic DNA calibrants would be technically suitable as anchor points for the calibration of the real-time PCR methods applied in this study. However, the most suitable approach to establish a sustainable traceability chain is to fix a reference system based on plasmid DNA.
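
    The PCR efficiency referred to above follows from the slope of the standard curve of quantification cycle against log10 copy number; a minimal sketch with illustrative numbers is:

      import numpy as np

      copies = np.array([1e2, 1e3, 1e4, 1e5, 1e6])               # calibrant dilution series
      ct = np.array([33.1, 29.7, 26.4, 23.0, 19.6])              # measured quantification cycles

      slope, intercept = np.polyfit(np.log10(copies), ct, 1)
      efficiency = 10 ** (-1.0 / slope) - 1.0                    # 1.0 corresponds to 100 % efficiency
      r_squared = np.corrcoef(np.log10(copies), ct)[0, 1] ** 2   # coefficient of determination

      print(f"slope {slope:.2f}, efficiency {efficiency:.2%}, R^2 {r_squared:.4f}")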

  5. Calibration plots for risk prediction models in the presence of competing risks.

    PubMed

    Gerds, Thomas A; Andersen, Per K; Kattan, Michael W

    2014-08-15

    A predicted risk of 17% can be called reliable if it can be expected that the event will occur in about 17 of 100 patients who all received a predicted risk of 17%. Statistical models can predict the absolute risk of an event such as cardiovascular death in the presence of competing risks such as death due to other causes. For personalized medicine and patient counseling, it is necessary to check that the model is calibrated in the sense that it provides reliable predictions for all subjects. There are three often encountered practical problems when the aim is to display or test whether a risk prediction model is well calibrated. The first is lack of independent validation data, the second is right censoring, and the third is that when the risk scale is continuous, the estimation problem is as difficult as density estimation. To deal with all three problems, we propose to estimate calibration curves for competing risks models based on jackknife pseudo-values that are combined with a nearest-neighborhood smoother and a cross-validation approach. Copyright © 2014 John Wiley & Sons, Ltd.

  6. Updated radiometric calibration for the Landsat-5 thematic mapper reflective bands

    USGS Publications Warehouse

    Helder, D.L.; Markham, B.L.; Thome, K.J.; Barsi, J.A.; Chander, G.; Malla, R.

    2008-01-01

    The Landsat-5 Thematic Mapper (TM) has been the workhorse of the Landsat system. Launched in 1984, it continues collecting data through the time frame of this paper. Thus, it provides an invaluable link to the past history of the land features of the Earth's surface, and it becomes imperative to provide an accurate radiometric calibration of the reflective bands to the user community. Previous calibration has been based on information obtained from prelaunch, the onboard calibrator, vicarious calibration attempts, and cross-calibration with Landsat-7. Currently, additional data sources are available to improve this calibration. Specifically, improvements in vicarious calibration methods and development of the use of pseudoinvariant sites for trending provide two additional independent calibration sources. The use of these additional estimates has resulted in a consistent calibration approach that ties together all of the available calibration data sources. Results from this analysis indicate that a simple exponential or a constant model may be used for all bands throughout the lifetime of Landsat-5 TM. Where previously time constants for the exponential models were approximately one year, the updated model has significantly longer time constants in bands 1-3. In contrast, bands 4, 5, and 7 are shown to be best modeled by a constant. The models proposed in this paper indicate calibration knowledge of 5% or better early in life, decreasing to nearly 2% later in life. These models have been implemented at the U.S. Geological Survey Earth Resources Observation and Science (EROS) and are the default calibration used for all Landsat TM data now distributed through EROS. © 2008 IEEE.

  7. Chandra Observations of SN 1987A: The Soft X-Ray Light Curve Revisited

    NASA Technical Reports Server (NTRS)

    Helder, E. A.; Broos, P. S.; Dewey, D.; Dwek, E.; McCray, R.; Park, S.; Racusin, J. L.; Zhekov, S. A.; Burrows, D. N.

    2013-01-01

    We report on the present stage of SN 1987A as observed by the Chandra X-Ray Observatory. We reanalyze published Chandra observations and add three more epochs of Chandra data to get a consistent picture of the evolution of the X-ray fluxes in several energy bands. We discuss the implications of several calibration issues for Chandra data. Using the most recent Chandra calibration files, we find that the 0.5-2.0 keV band fluxes of SN 1987A have increased by approximately 6 x 10^-13 erg s^-1 cm^-2 per year since 2009. This is in contrast with our previous result that the 0.5-2.0 keV light curve showed a sudden flattening in 2009. Based on our new analysis, we conclude that the forward shock is still in full interaction with the equatorial ring.

  8. Resistance Curves in the Tensile and Compressive Longitudinal Failure of Composites

    NASA Technical Reports Server (NTRS)

    Camanho, Pedro P.; Catalanotti, Giuseppe; Davila, Carlos G.; Lopes, Claudio S.; Bessa, Miguel A.; Xavier, Jose C.

    2010-01-01

    This paper presents a new methodology to measure the crack resistance curves associated with fiber-dominated failure modes in polymer-matrix composites. These crack resistance curves not only characterize the fracture toughness of the material, but are also the basis for the identification of the parameters of the softening laws used in the analytical and numerical simulation of fracture in composite materials. The method proposed is based on the identification of the crack tip location by the use of Digital Image Correlation and the calculation of the J-integral directly from the test data using a simple expression derived for cross-ply composite laminates. It is shown that the results obtained using the proposed methodology yield crack resistance curves similar to those obtained using FEM-based methods in compact tension carbon-epoxy specimens. However, it is also shown that the Digital Image Correlation based technique can be used to extract crack resistance curves in compact compression tests for which FEM-based techniques are inadequate.

  9. Giada improved calibration of measurement subsystems

    NASA Astrophysics Data System (ADS)

    Della Corte, V.; Rotundi, A.; Sordini, R.; Accolla, M.; Ferrari, M.; Ivanovski, S.; Lucarelli, F.; Mazzotta Epifani, E.; Palumbo, P.

    2014-12-01

    GIADA (Grain Impact Analyzer and Dust Accumulator) is an in-situ instrument devoted to measuring the dynamical properties of the dust grains emitted by the comet. An extended calibration activity using the GIADA Flight Spare Model has been carried out, taking into account the knowledge gained through the analyses of IDPs and cometary samples returned from comet 81P/Wild 2. GIADA consists of three measurement subsystems: the Grain Detection System, an optical device measuring the optical cross-section of individual dust grains; the Impact Sensor, an aluminum plate connected to 5 piezo-sensors measuring the momentum of single impacting dust grains; and the Micro Balance System, measuring the cumulative deposition in time of dust grains smaller than 10 μm. The results of the analyses of data acquired with the GIADA PFM, and the comparison with calibration data acquired during the pre-launch campaign, allowed us to improve GIADA performances and capabilities. We will report the results of the following main activities: a) definition of a correlation between the 2 GIADA models (the PFM housed in the laboratory and the In-Flight Model on board ROSETTA); b) characterization of the sub-system performances (signal elaboration, sensitivities, space environment effects); c) new calibration measurements and related curves obtained by means of the PFM using realistic cometary dust analogues. Acknowledgements: GIADA was built by a consortium led by the Univ. Napoli "Parthenope" & INAF-Oss. Astr. Capodimonte, IT, in collaboration with the Inst. de Astrofisica de Andalucia, ES, Selex-ES s.p.a. and SENER. GIADA is presently managed & operated by Ist. di Astrofisica e Planetologia Spaziali-INAF, IT. GIADA was funded and managed by the Agenzia Spaziale Italiana, IT, with the support of the Spanish Ministry of Education and Science MEC, ES. GIADA was developed from a University of Kent, UK, PI proposal; sci. & tech. contributions were given by CISAS, IT, Lab. d'Astr. Spat., FR, and institutions from UK, IT, FR, DE and USA. We thank

  10. Automated Gravimetric Calibration to Optimize the Accuracy and Precision of TECAN Freedom EVO Liquid Handler

    PubMed Central

    Bessemans, Laurent; Jully, Vanessa; de Raikem, Caroline; Albanese, Mathieu; Moniotte, Nicolas; Silversmet, Pascal; Lemoine, Dominique

    2016-01-01

    High-throughput screening technologies are increasingly integrated into the formulation development process of biopharmaceuticals. The performance of liquid handling systems is dependent on the ability to deliver accurate and precise volumes of specific reagents to ensure process quality. We have developed an automated gravimetric calibration procedure to adjust the accuracy and evaluate the precision of the TECAN Freedom EVO liquid handling system. Volumes from 3 to 900 µL using calibrated syringes and fixed tips were evaluated with various solutions, including aluminum hydroxide and phosphate adjuvants, β-casein, sucrose, sodium chloride, and phosphate-buffered saline. The methodology to set up liquid class pipetting parameters for each solution was to split the process into three steps: (1) screening of predefined liquid classes, including different pipetting parameters; (2) adjustment of accuracy parameters based on a calibration curve; and (3) confirmation of the adjustment. The running of appropriate pipetting scripts, data acquisition, and reporting, up to the creation of a new liquid class in EVOware, was fully automated. The calibration and confirmation of the robotic system was simple, efficient, and precise and could accelerate data acquisition for a wide range of biopharmaceutical applications. PMID:26905719
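
    A minimal sketch of the gravimetric accuracy-adjustment step in (2) (weigh dispenses of requested volumes, convert mass to volume with the liquid density, fit delivered against requested volume, and invert the fit to correct future commands) is given below; the density and balance readings are illustrative, and nothing here reflects EVOware's actual liquid class parameters.

      import numpy as np

      requested_uL = np.array([10.0, 50.0, 100.0, 500.0, 900.0])
      mean_mass_mg = np.array([9.6, 48.9, 98.0, 492.5, 887.0])    # mean balance readings per target volume
      density_mg_per_uL = 0.998                                    # water-like liquid

      delivered_uL = mean_mass_mg / density_mg_per_uL
      slope, intercept = np.polyfit(requested_uL, delivered_uL, 1)

      def corrected_command(target_uL):
          """Volume to request so that the delivered volume hits the target."""
          return (target_uL - intercept) / slope

      print(corrected_command(100.0))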

  11. Melting Curve of Molecular Crystal GeI4

    NASA Astrophysics Data System (ADS)

    Fuchizaki, Kazuhiro; Hamaya, Nozomu

    2014-07-01

    In situ synchrotron x-ray diffraction measurements were carried out to determine the melting curve of the molecular crystal GeI4. We found that the melting line rises rapidly with pressure up to about 3 GPa, at which point it abruptly breaks. Such a strongly nonlinear shape of the melting curve can be approximately captured by the Kumari-Dass-Kechin equation. The parameters involved in the equation could be determined from the equation of state for the crystalline phase, which was also established in the present study. The melting curve predicted from the equation approaches the actual melting curve as the degree of approximation involved in obtaining the equation is improved. However, the treatment is justifiable only if the slope of the melting curve is everywhere continuous. We believe that this is not the case for GeI4's melting line at the breakpoint, as inferred from the nature of the breakdown of the Kraut-Kennedy and Magalinskii-Zubov relationships. The breakpoint may then be a triple point among the crystalline phase and two possible liquid phases.

  12. Spectroradiometric calibration of the Thematic Mapper and Multispectral Scanner system

    NASA Technical Reports Server (NTRS)

    Slater, P. N.; Palmer, J. M. (Principal Investigator)

    1985-01-01

    The results of analyses of Thematic Mapper (TM) images acquired on July 8 and October 28, 1984, and of a check of the calibration of the 1.22-m integrating sphere at Santa Barbara Research Center (SBRC) are described. The results obtained from the in-flight calibration attempts disagree with the pre-flight calibrations for bands 2 and 4. Considerable effort was expended in an attempt to explain the disagreement. The difficult point to explain is that the difference between the radiances predicted by the radiative transfer code (the code radiances) and the radiances predicted by the pre-flight calibration (the pre-flight radiances) fluctuates with spectral band. Because the spectral quantities measured at White Sands show little change with spectral band, these fluctuations are not anticipated. Analyses of other targets at White Sands, such as clouds, cloud shadows, and water surfaces, tend to support the pre-flight and internal calibrator calibrations. The source of the disagreement has not been identified. It could be due to: (1) a computational error in the data reduction; (2) an incorrect assumption in the input to the radiative transfer code; or (3) incorrect operation of the field equipment.

  13. Estimation of suspended-sediment rating curves and mean suspended-sediment loads

    USGS Publications Warehouse

    Crawford, Charles G.

    1991-01-01

    A simulation study was done to evaluate: (1) the accuracy and precision of parameter estimates for the bias-corrected, transformed-linear and non-linear models obtained by the method of least squares; and (2) the accuracy of mean suspended-sediment loads calculated by the flow-duration, rating-curve method using model parameters obtained by the alternative methods. Parameter estimates obtained by least squares for the bias-corrected, transformed-linear model were considerably more precise than those obtained for the non-linear or weighted non-linear model. The accuracy of parameter estimates obtained for the bias-corrected, transformed-linear and weighted non-linear models was similar and was much greater than the accuracy obtained by non-linear least squares. The improved parameter estimates obtained by the bias-corrected, transformed-linear or weighted non-linear model yield estimates of mean suspended-sediment load calculated by the flow-duration, rating-curve method that are more accurate and precise than those obtained for the non-linear model.
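
    A minimal sketch of the bias-corrected, transformed-linear rating-curve fit evaluated in the study (regress log concentration on log discharge, then multiply the back-transformed prediction by exp(s^2/2), where s^2 is the residual variance of the log fit) is shown below with illustrative data:

      import numpy as np

      Q = np.array([5.0, 12.0, 30.0, 80.0, 150.0, 400.0])        # discharge (m^3/s)
      C = np.array([15.0, 40.0, 110.0, 300.0, 650.0, 1900.0])    # suspended-sediment concentration (mg/L)

      b, ln_a = np.polyfit(np.log(Q), np.log(C), 1)               # transformed-linear fit ln C = ln a + b ln Q
      resid = np.log(C) - (ln_a + b * np.log(Q))
      s2 = resid.var(ddof=2)                                      # residual variance of the log fit

      def rating_curve(q):
          """Bias-corrected concentration estimate for discharge q."""
          return np.exp(ln_a + s2 / 2.0) * q ** b

      print(rating_curve(np.array([20.0, 200.0])))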

  14. Using Peano Curves to Construct Laplacians on Fractals

    NASA Astrophysics Data System (ADS)

    Molitor, Denali; Ott, Nadia; Strichartz, Robert

    2015-12-01

    We describe a new method to construct Laplacians on fractals using a Peano curve from the circle onto the fractal, extending an idea that has been used in the case of certain Julia sets. The Peano curve allows us to visualize eigenfunctions of the Laplacian by graphing the pullback to the circle. We study in detail three fractals: the pentagasket, the octagasket and the magic carpet. We also use the method for two nonfractal self-similar sets, the torus and the equilateral triangle, obtaining appealing new visualizations of eigenfunctions on the triangle. In contrast to the many familiar pictures of approximations to standard Peano curves, which do not show self-intersections, our descriptions of approximations to the Peano curves have self-intersections that play a vital role in constructing graph approximations to the fractal with explicit graph Laplacians that give the fractal Laplacian in the limit.

  15. Trends in scale and shape of survival curves.

    PubMed

    Weon, Byung Mook; Je, Jung Ho

    2012-01-01

    The ageing of the population is an issue in wealthy countries worldwide because of increasing costs for health care and welfare. Survival curves taken from demographic life tables may help shed light on the hypotheses that humans are living longer and that human populations are growing older. We describe a methodology that enables us to obtain separate measurements of scale and shape variances in survival curves. Specifically, 'living longer' is associated with the scale variance of survival curves, whereas 'growing older' is associated with the shape variance. We show how the scale and shape of survival curves have changed over time during recent decades, based on period and cohort female life tables for selected wealthy countries. Our methodology will be useful for performing better tracking of ageing statistics and it is possible that this methodology can help identify the causes of current trends in human ageing.
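
    To illustrate the scale/shape decomposition in simple terms, a plain stretched-exponential (Weibull-type) survival function s(x) = exp(-(x/alpha)^beta) separates "living longer" (a larger characteristic life alpha) from "growing older" (a larger shape beta, giving a more rectangular curve); the authors' methodology is a refinement of this kind of form, and the parameters below are illustrative only.

      import numpy as np

      def survival(age, alpha, beta):
          """Stretched-exponential survival fraction at a given age."""
          return np.exp(-(age / alpha) ** beta)

      ages = np.linspace(0, 110, 12)
      print(survival(ages, alpha=75, beta=6))     # earlier-period-like curve
      print(survival(ages, alpha=85, beta=9))     # later-period-like curve: longer and more rectangular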

  16. Quantitative spatial distribution of sirolimus and polymers in drug-eluting stents using confocal Raman microscopy.

    PubMed

    Balss, K M; Llanos, G; Papandreou, G; Maryanoff, C A

    2008-04-01

    Raman spectroscopy was used to differentiate each component found in the CYPHER Sirolimus-eluting Coronary Stent. The unique spectral features identified for each component were then used to develop three separate calibration curves to describe the solid phase distribution found on drug-polymer coated stents. The calibration curves were obtained by analyzing confocal Raman spectral depth profiles from a set of 16 unique formulations of drug-polymer coatings sprayed onto stents and planar substrates. The sirolimus model was linear from 0 to 100 wt % of drug. The individual polymer calibration curves for poly(ethylene-co-vinyl acetate) [PEVA] and poly(n-butyl methacrylate) [PBMA] were also linear from 0 to 100 wt %. The calibration curves were tested on three independent drug-polymer coated stents. The sirolimus calibration predicted the drug content within 1 wt % of the laboratory assay value. The polymer calibrations predicted the content within 7 wt % of the formulation solution content. Attenuated total reflectance Fourier transform infrared (ATR-FTIR) spectra from five formulations confirmed a linear response to changes in sirolimus and polymer content. Copyright 2007 Wiley Periodicals, Inc.

  17. TWSTFT Link Calibration Report

    DTIC Science & Technology

    2015-09-01

    Annex II: TWSTFT link calibration with a GPS calibrator. Calibration reference: CI-888-2015 (final version, 1 September 2015). TWSTFT link calibration report -- calibration of the Lab(k)-PTB UTC ... Abstract: This report includes the calibration results of the Lab(k)-PTB TWSTFT link and closure measurements of the BIPM

  18. A novel dual-camera calibration method for 3D optical measurement

    NASA Astrophysics Data System (ADS)

    Gai, Shaoyan; Da, Feipeng; Dai, Xianqiang

    2018-05-01

    A novel dual-camera calibration method is presented. In the classic methods, the camera parameters are usually calculated and optimized from the reprojection error. However, for a system designed for 3D optical measurement, this error does not reflect the result of the 3D reconstruction. In the presented method, a planar calibration plate is used. First, images of the calibration plate are captured from several orientations in the measurement range, and the initial parameters of the two cameras are obtained from these images. Then, the rotation and translation matrices that link the frames of the two cameras are calculated using the Centroid Distance Increment Matrix method, which reduces the degree of coupling between the parameters. Next, the 3D coordinates of the calibration points are reconstructed by the space intersection method. Finally, the reconstruction error is calculated and minimized to optimize the calibration parameters. This error directly indicates the quality of the 3D reconstruction, and it is thus more suitable for assessing the quality of dual-camera calibration. The experiments show that the proposed method is convenient and accurate. There is no strict requirement on the calibration plate position in the calibration process, and the accuracy is improved significantly by the proposed method.

  19. In-flight calibration of SCIAMACHY's polarization sensitivity

    NASA Astrophysics Data System (ADS)

    Liebing, Patricia; Krijger, Matthijs; Snel, Ralph; Bramstedt, Klaus; Noël, Stefan; Bovensmann, Heinrich; Burrows, John P.

    2018-01-01

    This paper describes the in-flight calibration of the polarization response of the SCIAMACHY polarization measurement devices (PMDs) and a selected region of its science channels. With the lack of polarized calibration sources it is not possible to obtain such a calibration from dedicated calibration measurements. Instead, the earthshine itself, together with a simplified radiative transfer model (RTM), is used to derive time-dependent and measurement-configuration-dependent polarization sensitivities. The results are compared to an instrument model that describes the degradation of the instrument as a result of a slow buildup of contaminant layers on its elevation and azimuth scan mirrors. This comparison reveals significant differences between the model prediction and the data, suggesting an unforeseen change between on-ground and in-flight calibration in at least one of the polarization-sensitive components of the optical bench. The possibility of mechanisms other than scan mirror contamination contributing to the degradation of the instrument will be discussed. The data are consistent with a polarization phase shift occurring in the beam split prism used to divert the light coming from the telescope to the different channels and polarization measurement devices. The extension of the instrument degradation model with a linear retarder enables the determination of the relevant parameters to describe this phase shift and ultimately results in a significant improvement of the polarization measurements as well as the polarization response correction of measured radiances.

  20. Validity Assessment of Low-risk SCORE Function and SCORE Function Calibrated to the Spanish Population in the FRESCO Cohorts.

    PubMed

    Baena-Díez, José Miguel; Subirana, Isaac; Ramos, Rafael; Gómez de la Cámara, Agustín; Elosua, Roberto; Vila, Joan; Marín-Ibáñez, Alejandro; Guembe, María Jesús; Rigo, Fernando; Tormo-Díaz, María José; Moreno-Iribas, Conchi; Cabré, Joan Josep; Segura, Antonio; Lapetra, José; Quesada, Miquel; Medrano, María José; González-Diego, Paulino; Frontera, Guillem; Gavrila, Diana; Ardanaz, Eva; Basora, Josep; García, José María; García-Lareo, Manel; Gutiérrez-Fuentes, José Antonio; Mayoral, Eduardo; Sala, Joan; Dégano, Irene R; Francès, Albert; Castell, Conxa; Grau, María; Marrugat, Jaume

    2018-04-01

    To assess the validity of the original low-risk SCORE function, without and with high-density lipoprotein cholesterol, and of the SCORE function calibrated to the Spanish population. Pooled analysis with individual data from 12 Spanish population-based cohort studies. We included 30 919 individuals aged 40 to 64 years with no history of cardiovascular disease at baseline, who were followed up for 10 years for the causes of death included in the SCORE project. The validity of the risk functions was analyzed with the area under the ROC curve (discrimination) and the Hosmer-Lemeshow test (calibration), respectively. Follow-up comprised 286 105 persons/y. Ten-year cardiovascular mortality was 0.6%. The ratio between estimated and observed cases was 9.1, 6.5, and 9.1 in men and 3.3, 1.3, and 1.9 in women for the original low-risk SCORE function without and with high-density lipoprotein cholesterol and the calibrated SCORE, respectively; differences between predicted and observed mortality were statistically significant with the Hosmer-Lemeshow test (P < .001 in both sexes and with all functions). The area under the ROC curve with the original SCORE was 0.68 in men and 0.69 in women. All versions of the SCORE function available in Spain significantly overestimate the cardiovascular mortality observed in the Spanish population. Despite the acceptable discrimination capacity, prediction of the number of fatal cardiovascular events (calibration) was significantly inaccurate. Copyright © 2017 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.

  1. On the reduction of occultation light curves. [stellar occultations by planets]

    NASA Technical Reports Server (NTRS)

    Wasserman, L.; Veverka, J.

    1973-01-01

    The two basic methods of reducing occultation light curves - curve fitting and inversion - are reviewed and compared. It is shown that the curve fitting methods have severe problems of nonuniqueness. In addition, in the case of occultation curves dominated by spikes, it is not clear that such solutions are meaningful. The inversion method does not suffer from these drawbacks. Methods of deriving temperature profiles from refractivity profiles are then examined. It is shown that, although the temperature profiles are sensitive to small errors in the refractivity profile, accurate temperatures can be obtained, particularly at the deeper levels of the atmosphere. The ambiguities that arise when the occultation curve straddles the turbopause are briefly discussed.

  2. SAR antenna calibration techniques

    NASA Technical Reports Server (NTRS)

    Carver, K. R.; Newell, A. C.

    1978-01-01

    Calibration of SAR antennas requires a measurement of gain, elevation and azimuth pattern shape, boresight error, cross-polarization levels, and phase vs. angle and frequency. For spaceborne SAR antennas of SEASAT size operating at C-band or higher, some of these measurements can become extremely difficult using conventional far-field antenna test ranges. Near-field scanning techniques offer an alternative approach and for C-band or X-band SARs, give much improved accuracy and precision as compared to that obtainable with a far-field approach.

  3. Importance of nasal clipping in screening investigations of flow volume curve.

    PubMed

    Yanev, I

    1992-01-01

    Comparative analysis of some basic lung indices obtained from a screening investigation of the flow volume curve by using two techniques, with a nose clip and without a nose clip, was made on a cohort of 86 workers in a factory shop for the production of bearings. We found no statistically significant differences between the indices obtained by the two techniques. Our study showed that the FVC and FEV1 obtained in workers without using nose clips were equal to or better than those obtained using nose clips in 60% of the workers. The reproducibility of the two methods was similar. The analysis of the data has shown that the flow volume curve investigation gives better results when performed without a nose clip, especially in industrial conditions.

  4. Precise calibration of few-cycle laser pulses with atomic hydrogen

    NASA Astrophysics Data System (ADS)

    Wallace, W. C.; Kielpinski, D.; Litvinyuk, I. V.; Sang, R. T.

    2017-12-01

    Interaction of atoms and molecules with strong electric fields is a fundamental process in many fields of research, particularly in the emerging field of attosecond science. Therefore, understanding the physics underpinning those interactions is of significant interest to the scientific community. One crucial step in this understanding is accurate knowledge of the few-cycle laser field driving the process. Atomic hydrogen (H), the simplest of all atomic species, plays a key role in benchmarking strong-field processes. Its wide-spread use as a testbed for theoretical calculations allows the comparison of approximate theoretical models against nearly-perfect numerical solutions of the three-dimensional time-dependent Schrödinger equation. Until recently, relatively little experimental data in atomic H was available for comparison to these models, mostly due to the difficulty in the construction and use of atomic H sources. Here, we review our most recent experimental results from atomic H interaction with few-cycle laser pulses and how they have been used to calibrate important laser pulse parameters such as peak intensity and the carrier-envelope phase (CEP). Quantitative agreement between experimental data and theoretical predictions for atomic H has been obtained at the 10% uncertainty level, allowing for accurate laser intensity calibration at the 1% level. Using this calibration in atomic H, both accurate CEP data and an intensity calibration standard have been obtained in Ar, Kr, and Xe; such gases are in common use for strong-field experiments. This calibration standard can be used by any laboratory using few-cycle pulses in the 10^14 W cm^-2 intensity regime centered at 800 nm wavelength to accurately calibrate their peak laser intensity to within a few percent.

  5. Multispectral scanner flight model (F-1) radiometric calibration and alignment handbook

    NASA Technical Reports Server (NTRS)

    1981-01-01

    This handbook on the calibration of the MSS-D flight model (F-1) provides both the relevant data and a summary description of how the data were obtained for the system radiometric calibration, system relative spectral response, and the filter response characteristics for all 24 channels of the four-band MSS-D F-1 scanner. The calibration test procedure and resulting test data required to establish the reference light levels of the MSS-D internal calibration system are discussed. The final set of data ("nominal" calibration wedges for all 24 channels) for the internal calibration system is given. The system relative spectral response measurements for all 24 channels of MSS-D F-1 are included. These data are the spectral response of the complete scanner, which are the composite of the spectral responses of the scan mirror, primary and secondary telescope mirrors, fiber optics, optical filters, and detectors. Unit level test data on the measurements of the individual channel optical transmission filters are provided. Measured performance is compared to specification values.

  6. Impacts of uncertainties in weather and streamflow observations in calibration and evaluation of an elevation distributed HBV-model

    NASA Astrophysics Data System (ADS)

    Engeland, K.; Steinsland, I.; Petersen-Øverleir, A.; Johansen, S.

    2012-04-01

    The aim of this study is to assess the uncertainties in streamflow simulations when uncertainties in both observed inputs (precipitation and temperature) and streamflow observations used in the calibration of the hydrological model are explicitly accounted for. To achieve this goal we applied the elevation distributed HBV model operating on daily time steps to a small high-elevation catchment in southern Norway where the seasonal snow cover is important. The uncertainties in precipitation inputs were quantified using conditional simulation. This procedure accounts for the uncertainty related to the density of the precipitation network, but neglects uncertainties related to measurement bias/errors and possible elevation gradients in precipitation. The uncertainties in temperature inputs were quantified using a Bayesian temperature interpolation procedure where the temperature lapse rate is re-estimated every day. The uncertainty in the lapse rate was accounted for, whereas the sampling uncertainty related to network density was neglected. For every day, a random sample of precipitation and temperature inputs was drawn to be applied as inputs to the hydrologic model. The uncertainties in observed streamflow were assessed based on the uncertainties in the rating curve model. A Bayesian procedure was applied to estimate the probability of rating curve models with 1 to 3 segments and the uncertainties in their parameters. This method neglects uncertainties related to errors in observed water levels. Note that one rating curve was drawn to make one realisation of a whole time series of streamflow; thus, the rating curve errors lead to a systematic bias in the streamflow observations. All these uncertainty sources were linked together in both calibration and evaluation of the hydrologic model using a DREAM based MCMC routine. Effects of having less information (e.g. missing one streamflow measurement for defining the rating curve or missing one precipitation station
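
    As a rough illustration of the rating-curve step described above, the sketch below draws one power-law rating-curve parameter set per realisation and converts a hypothetical stage series to streamflow, so each realisation carries a systematic bias over the whole series; the single-segment form Q = a(h - h0)^b and every number are assumptions for illustration, not the study's Bayesian multi-segment model.

    ```python
    # Hedged sketch: propagating rating-curve parameter uncertainty into the
    # streamflow series used for model calibration. One parameter set is drawn
    # per realisation so the error acts as a systematic bias over the whole
    # series. All values are illustrative only.
    import numpy as np

    rng = np.random.default_rng(42)
    stage = np.linspace(0.4, 1.5, 365)             # hypothetical daily stage (m)

    def rating_curve(h, a, h0, b):
        """Single-segment power-law rating curve Q = a*(h - h0)**b."""
        return a * np.clip(h - h0, 0.0, None) ** b

    n_real = 100
    flows = []
    for _ in range(n_real):
        # Draw one parameter set per realisation (illustrative distributions).
        a = rng.normal(5.0, 0.3)
        h0 = rng.normal(0.2, 0.02)
        b = rng.normal(1.8, 0.1)
        flows.append(rating_curve(stage, a, h0, b))

    flows = np.array(flows)                        # (n_real, 365) ensemble
    print("mean flow spread (m3/s):", flows.std(axis=0).mean())
    ```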

  7. Graphical evaluation of complexometric titration curves.

    PubMed

    Guinon, J L

    1985-04-01

    A graphical method, based on logarithmic concentration diagrams, for construction, without any calculations, of complexometric titration curves is examined. The titration curves obtained for different kinds of unidentate, bidentate and quadridentate ligands clearly show why only chelating ligands are usually used in titrimetric analysis. The method has also been applied to two practical cases where unidentate ligands are used: (a) the complexometric determination of mercury(II) with halides and (b) the determination of cyanide with silver, which involves both a complexation and a precipitation system; for this purpose construction of the diagrams for the HgCl₂/HgCl⁺/Hg²⁺ and Ag(CN)₂⁻/AgCN/CN⁻ systems is considered in detail.

  8. High pressure melting curve of platinum up to 35 GPa

    NASA Astrophysics Data System (ADS)

    Patel, Nishant N.; Sunder, Meenakshi

    2018-04-01

    The melting curve of platinum (Pt) has been measured up to 35 GPa using our laboratory based laser heated diamond anvil cell (LHDAC) facility. The laser speckle method has been employed to detect the onset of melting. The high pressure melting curve of Pt obtained in the present study has been compared with previously reported experimental and theoretical results. The melting curve measured agrees well, within experimental error, with the results of Kavner et al. The experimental data fitted with the Simon equation give (∂Tm/∂P) ≈ 25 K/GPa at P ≈ 1 MPa.
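
    For reference, one common parameterisation of the Simon (Simon-Glatzel) melting relation mentioned above is written out below, together with its slope near ambient pressure; the exact functional form and parameter symbols used in the study may differ.

    ```latex
    % Common parameterisation of the Simon melting equation; T_0 is the
    % ambient-pressure melting temperature and a, c are fit parameters.
    \[
      T_m(P) = T_0 \left( 1 + \frac{P}{a} \right)^{1/c},
      \qquad
      \left.\frac{\partial T_m}{\partial P}\right|_{P \to 0} = \frac{T_0}{a\,c}.
    \]
    ```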

  9. Interpretation of OAO-2 ultraviolet light curves of beta Doradus

    NASA Technical Reports Server (NTRS)

    Hutchinson, J. L.; Lillie, C. F.; Hill, S. J.

    1975-01-01

    Middle-ultraviolet light curves of beta Doradus, obtained by OAO-2, are presented along with other evidence indicating that the small additional bumps observed on the rising branches of these curves have their origin in shock-wave phenomena in the upper atmosphere of this classical Cepheid. A simple piston-driven spherical hydrodynamic model of the atmosphere is developed to explain the bumps, and the calculations are compared with observations. The model is found to be consistent with the shapes of the light curves as well as with measurements of the H-alpha radial velocities.

  10. Simultaneous calibration phantom commission and geometry calibration in cone beam CT

    NASA Astrophysics Data System (ADS)

    Xu, Yuan; Yang, Shuai; Ma, Jianhui; Li, Bin; Wu, Shuyu; Qi, Hongliang; Zhou, Linghong

    2017-09-01

    Geometry calibration is a vital step for describing the geometry of a cone beam computed tomography (CBCT) system and is a prerequisite for CBCT reconstruction. In current methods, calibration phantom commission and geometry calibration are divided into two independent tasks. Small errors in ball-bearing (BB) positioning in the phantom-making step will severely degrade the quality of phantom calibration. To solve this problem, we propose an integrated method to simultaneously realize geometry phantom commission and geometry calibration. Instead of assuming the accuracy of the geometry phantom, the integrated method considers BB centers in the phantom as an optimized parameter in the workflow. Specifically, an evaluation phantom and the corresponding evaluation contrast index are used to evaluate geometry artifacts for optimizing the BB coordinates in the geometry phantom. After utilizing particle swarm optimization, the CBCT geometry and BB coordinates in the geometry phantom are calibrated accurately and are then directly used for the next geometry calibration task in other CBCT systems. To evaluate the proposed method, both qualitative and quantitative studies were performed on simulated and realistic CBCT data. The spatial resolution of reconstructed images using dental CBCT can reach up to 15 line pairs cm^-1. The proposed method is also superior to the Wiesent method in experiments. This paper shows that the proposed method is attractive for simultaneous and accurate geometry phantom commission and geometry calibration.
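
    The sketch below shows a minimal particle swarm optimisation loop of the generic kind invoked above, applied to a toy quadratic objective that merely stands in for the image-based evaluation contrast index; it is not the authors' implementation, and all constants, bounds and the objective are assumptions.

    ```python
    # Hedged sketch: a minimal particle swarm optimisation (PSO) loop of the kind
    # that could jointly refine BB coordinates and geometry parameters. The toy
    # objective (a shifted sphere function) stands in for an image-based index.
    import numpy as np

    def pso(objective, dim, n_particles=30, iters=200, bounds=(-1.0, 1.0), seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = bounds
        x = rng.uniform(lo, hi, (n_particles, dim))       # positions
        v = np.zeros_like(x)                              # velocities
        pbest = x.copy()
        pbest_val = np.array([objective(p) for p in x])
        gbest = pbest[pbest_val.argmin()].copy()
        w, c1, c2 = 0.7, 1.5, 1.5                         # inertia, cognitive, social weights
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            vals = np.array([objective(p) for p in x])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = x[improved], vals[improved]
            gbest = pbest[pbest_val.argmin()].copy()
        return gbest, pbest_val.min()

    # Toy stand-in for the geometry/BB-coordinate objective.
    target = np.array([0.3, -0.2, 0.1, 0.05])
    best, best_val = pso(lambda p: np.sum((p - target) ** 2), dim=4)
    print(best, best_val)
    ```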

  11. Spectroradiometric calibration of the thematic mapper and multispectral scanner system

    NASA Technical Reports Server (NTRS)

    Slater, P. N. (Principal Investigator); Palmer, J. M.

    1983-01-01

    The results obtained for the absolute calibration of TM bands 2, 3, and 4 are presented. The results are based on TM image data collected simultaneously with ground and atmospheric data at White Sands, New Mexico. Also discussed are the results of a moments analysis to determine the equivalent bandpasses, effective central wavelengths and normalized responses of the TM and MSS spectral bands; the calibration of the BaSO4 plate used at White Sands; and future plans.

  12. A Novel Protocol for Model Calibration in Biological Wastewater Treatment

    PubMed Central

    Zhu, Ao; Guo, Jianhua; Ni, Bing-Jie; Wang, Shuying; Yang, Qing; Peng, Yongzhen

    2015-01-01

    Activated sludge models (ASMs) have been widely used for process design, operation and optimization in wastewater treatment plants. However, it is still a challenge to achieve an efficient calibration for reliable application by using the conventional approaches. Hereby, we propose a novel calibration protocol, i.e. the Numerical Optimal Approaching Procedure (NOAP), for the systematic calibration of ASMs. The NOAP consists of three key steps in an iterative scheme flow: i) global factors sensitivity analysis for factors fixing; ii) pseudo-global parameter correlation analysis for non-identifiable factors detection; and iii) formation of a parameter subset through estimation using a genetic algorithm. The validity and applicability are confirmed using experimental data obtained from two independent wastewater treatment systems, including a sequencing batch reactor and a continuous stirred-tank reactor. The results indicate that the NOAP can effectively determine the optimal parameter subset and successfully perform model calibration and validation for these two different systems. The proposed NOAP is expected to be used for automatic calibration of ASMs and could potentially be applied to other ordinary differential equation models. PMID:25682959
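
    A hedged sketch of the final estimation step: a global evolutionary optimiser fits a reduced parameter subset of a toy one-state decay model to noisy batch data. scipy's differential_evolution is used here only as a convenient stand-in for the genetic algorithm named above, and the model, parameter bounds and data are invented, far simpler than a full activated sludge model.

    ```python
    # Hedged sketch: fitting a reduced parameter subset of a process model to
    # measured data with an evolutionary optimiser. The one-state first-order
    # decay model and the synthetic observations are illustrative only.
    import numpy as np
    from scipy.integrate import odeint
    from scipy.optimize import differential_evolution

    t = np.linspace(0, 10, 50)                       # h, hypothetical batch test
    true_k, true_s0 = 0.45, 80.0
    obs = true_s0 * np.exp(-true_k * t)
    obs += np.random.default_rng(1).normal(0, 1.5, t.size)   # noisy observations

    def model(params):
        k, s0 = params
        def dsdt(s, _t):
            return -k * s                            # first-order substrate removal
        return odeint(dsdt, s0, t).ravel()

    def sse(params):
        return np.sum((model(params) - obs) ** 2)    # sum of squared errors

    result = differential_evolution(sse, bounds=[(0.01, 2.0), (10.0, 200.0)], seed=1)
    print("estimated k, S0:", result.x)
    ```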

  13. Respiration monitoring by Electrical Bioimpedance (EBI) Technique in a group of healthy males. Calibration equations.

    NASA Astrophysics Data System (ADS)

    Balleza, M.; Vargas, M.; Kashina, S.; Huerta, M. R.; Delgadillo, I.; Moreno, G.

    2017-01-01

    Several research groups have proposed electrical impedance tomography (EIT) in order to analyse lung ventilation. With the use of 16 electrodes, EIT is capable of obtaining a set of transversal section images of the thorax. In previous works, we have obtained from EIT images an alternating impedance signal corresponding to respiration. Then, in order to transform those impedance changes into a measurable volume signal, a set of calibration equations was obtained. However, the EIT technique is still too expensive for attending outpatients in basic hospitals. For that reason, we propose the use of the electrical bioimpedance (EBI) technique to monitor respiration behaviour. The aim of this study was to obtain a set of calibration equations to transform EBI impedance changes determined at 4 different frequencies into a measurable volume signal. A group of 8 healthy males was assessed. The results showed a high goodness of fit for the group calibration equations. The volume determinations obtained by EBI were then compared with those obtained by our gold standard. Therefore, although EBI does not provide complete information about the impedance vectors of the lung compared with EIT, it is possible to monitor respiration with it.
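
    A minimal sketch, assuming a linear relation at a single excitation frequency, of how a calibration equation mapping an EBI impedance excursion to a reference volume could be fitted by least squares; the paired values are invented placeholders, not the study's measurements, and in practice one equation would be fitted per frequency.

    ```python
    # Hedged sketch: deriving a linear calibration equation that maps an EBI
    # impedance excursion (ohm) to a ventilated volume (L) measured by a
    # reference device. The paired values are invented placeholders.
    import numpy as np

    delta_z = np.array([0.9, 1.4, 1.8, 2.3, 2.9, 3.4])        # impedance change (ohm)
    volume = np.array([0.45, 0.70, 0.92, 1.18, 1.47, 1.71])   # reference volume (L)

    slope, intercept = np.polyfit(delta_z, volume, 1)
    pred = slope * delta_z + intercept
    r2 = 1 - np.sum((volume - pred) ** 2) / np.sum((volume - volume.mean()) ** 2)
    print(f"V = {slope:.3f} * dZ + {intercept:.3f}  (R^2 = {r2:.3f})")
    ```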

  14. Geometrical Calibration of the Photo-Spectral System and Digital Maps Retrieval

    NASA Astrophysics Data System (ADS)

    Bruchkouskaya, S.; Skachkova, A.; Katkovski, L.; Martinov, A.

    2013-12-01

    Imaging systems for remote sensing of the Earth are required to demonstrate high metric accuracy of the picture which can be provided through preliminary geometrical calibration of optical systems. Being defined as a result of the geometrical calibration, parameters of internal and external orientation of the cameras are needed while solving such problems of image processing, as orthotransformation, geometrical correction, geographical coordinate fixing, scale adjustment and image registration from various channels and cameras, creation of image mosaics of filmed territories, and determination of geometrical characteristics of objects in the images. The geometrical calibration also helps to eliminate image deformations arising due to manufacturing defects and errors in installation of camera elements and photo receiving matrices as well as those resulted from lens distortions. A Photo-Spectral System (PhSS), which is intended for registering reflected radiation spectra of underlying surfaces in a wavelength range from 350 nm to 1050 nm and recording images of high spatial resolution, has been developed at the A.N. Sevchenko Research Institute of Applied Physical Problems of the Belarusian State University. The PhSS has undergone flight tests over the territory of Belarus onboard the Antonov AN-2 aircraft with the aim to obtain visible range images of the underlying surface. Then we performed the geometrical calibration of the PhSS and carried out the correction of images obtained during the flight tests. Furthermore, we have plotted digital maps of the terrain using the stereo pairs of images acquired from the PhSS and evaluated the accuracy of the created maps. Having obtained the calibration parameters, we apply them for correction of the images from another identical PhSS device, which is located at the Russian Orbital Segment of the International Space Station (ROS ISS), aiming to retrieve digital maps of the terrain with higher accuracy.

  15. SPRT Calibration Uncertainties and Internal Quality Control at a Commercial SPRT Calibration Facility

    NASA Astrophysics Data System (ADS)

    Wiandt, T. J.

    2008-06-01

    The Hart Scientific Division of the Fluke Corporation operates two accredited standard platinum resistance thermometer (SPRT) calibration facilities, one at the Hart Scientific factory in Utah, USA, and the other at a service facility in Norwich, UK. The US facility is accredited through National Voluntary Laboratory Accreditation Program (NVLAP), and the UK facility is accredited through UKAS. Both provide SPRT calibrations using similar equipment and procedures, and at similar levels of uncertainty. These uncertainties are among the lowest available commercially. To achieve and maintain low uncertainties, it is required that the calibration procedures be thorough and optimized. However, to minimize customer downtime, it is also important that the instruments be calibrated in a timely manner and returned to the customer. Consequently, subjecting the instrument to repeated calibrations or extensive repeated measurements is not a viable approach. Additionally, these laboratories provide SPRT calibration services involving a wide variety of SPRT designs. These designs behave differently, yet predictably, when subjected to calibration measurements. To this end, an evaluation strategy involving both statistical process control and internal consistency measures is utilized to provide confidence in both the instrument calibration and the calibration process. This article describes the calibration facilities, procedure, uncertainty analysis, and internal quality assurance measures employed in the calibration of SPRTs. Data will be reviewed and generalities will be presented. Finally, challenges and considerations for future improvements will be discussed.

  16. Calibration Plans for the Multi-angle Imaging SpectroRadiometer (MISR)

    NASA Astrophysics Data System (ADS)

    Bruegge, C. J.; Duval, V. G.; Chrien, N. L.; Diner, D. J.

    1993-01-01

    The EOS Multi-angle Imaging SpectroRadiometer (MISR) will study the ecology and climate of the Earth through acquisition of global multi-angle imagery. The MISR employs nine discrete cameras, each a push-broom imager. Of these, four point forward, four point aft and one views the nadir. Absolute radiometric calibration will be obtained pre-flight using high quantum efficiency (HQE) detectors and an integrating sphere source. After launch, instrument calibration will be provided using HQE detectors in conjunction with deployable diffuse calibration panels. The panels will be deployed at time intervals of one month and used to direct sunlight into the cameras, filling their fields-of-view and providing through-the-optics calibration. Additional techniques will be utilized to reduce systematic errors, and provide continuity as the methodology changes with time. For example, radiation-resistant photodiodes will also be used to monitor panel radiant exitance. These data will be acquired throughout the five-year mission, to maintain calibration in the latter years when it is expected that the HQE diodes will have degraded. During the mission, it is planned that the MISR will conduct semi-annual ground calibration campaigns, utilizing field measurements and higher resolution sensors (aboard aircraft or in-orbit platforms) to provide a check of the on-board hardware. These ground calibration campaigns are limited in number, but are believed to be the key to the long-term maintenance of MISR radiometric calibration.

  17. Calibrating and Evaluating Boomless Spray Systems for Applying Forest Herbicides

    Treesearch

    Michael A. Wehr; Russell W. Johnson; Robert L. Sajdak

    1985-01-01

    Describes a testing procedure used to calibrate and evaluate agricultural boomless spray systems. The tests allow the user to obtain dependable and satisfactory results when the systems are used in actual forest situations.

  18. VizieR Online Data Catalog: BV(RI)c light curves of FF Vul (Samec+, 2016)

    NASA Astrophysics Data System (ADS)

    Samec, R. G.; Nyaude, R.; Caton, D.; van Hamme, W.

    2017-02-01

    The present BVRcIc light curves were taken by DC with the Dark Sky Observatory 0.81m reflector at Phillips Gap, North Carolina. These were taken on 2015 September 12, 13, 14 and 15, and October 15, with a thermoelectrically cooled (-40°C) 2K × 2K Apogee Alta camera. Additional observations were obtained remotely with the SARA north 0.91m reflector at KPNO on 2015 September 20 and October 11, with the ARC 2K × 2K camera cooled to -110°C. Individual observations were taken at both sites with standard Johnson-Cousins filters, and included 444 field images in B, 451 in V, 443 in Rc, and 445 in Ic. The standard error was ~7 mmag in each of B, V, Rc and Ic. Nightly images were calibrated with 25 bias frames, five flat frames in each filter, and ten 300 s dark frames. The exposure times were 40-50 s in B, 25-30 s in V, and 15-25 s in Rc and Ic. Our observations are listed in Table 1. (1 data file).

  19. An Open-Source Auto-Calibration Routine Supporting the Stormwater Management Model

    NASA Astrophysics Data System (ADS)

    Tiernan, E. D.; Hodges, B. R.

    2017-12-01

    The stormwater management model (SWMM) is a clustered model that relies on subcatchment-averaged parameter assignments to correctly capture catchment stormwater runoff behavior. Model calibration is considered a critical step for SWMM performance, an arduous task that most stormwater management designers undertake manually. This research presents an open-source, automated calibration routine that increases the efficiency and accuracy of the model calibration process. The routine makes use of a preliminary sensitivity analysis to reduce the dimensions of the parameter space, at which point a multi-objective genetic algorithm (a modified Non-dominated Sorting Genetic Algorithm II) determines the Pareto front for the objective functions within the parameter space. The solutions on this Pareto front represent the optimized parameter value sets for the catchment behavior that could not have been reasonably obtained through manual calibration.
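
    To illustrate the multi-objective idea, the sketch below extracts the non-dominated (Pareto) set from a batch of candidate parameter vectors scored on two calibration objectives; it shows only the "first front" concept underlying NSGA-II, is not the routine's actual code, and the random scores are placeholders.

    ```python
    # Hedged sketch: extracting the non-dominated (Pareto) front from candidate
    # parameter vectors scored on two calibration objectives (e.g. errors on
    # peak flow and on runoff volume), both to be minimised.
    import numpy as np

    def pareto_front(objectives):
        """Return a boolean mask of non-dominated rows (all objectives minimised)."""
        n = objectives.shape[0]
        nondominated = np.ones(n, dtype=bool)
        for i in range(n):
            # A row j dominates row i if it is no worse everywhere and better somewhere.
            dominates_i = np.all(objectives <= objectives[i], axis=1) & \
                          np.any(objectives < objectives[i], axis=1)
            if dominates_i.any():
                nondominated[i] = False
        return nondominated

    rng = np.random.default_rng(3)
    scores = rng.random((50, 2))                 # 50 candidate sets, 2 objectives
    mask = pareto_front(scores)
    print("Pareto-optimal candidates:", np.where(mask)[0])
    ```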

  20. The on-orbit calibration of the Fermi Large Area Telescope

    DOE PAGES

    Abdo, A. A.; Ackermann, M.; Ajello, M.; ...

    2009-09-06

    The Large Area Telescope (LAT) on-board the Fermi Gamma-ray Space Telescope began its on-orbit operations on June 23, 2008. Calibrations, defined in a generic sense, correspond to synchronization of trigger signals, optimization of delays for latching data, determination of detector thresholds, gains and responses, evaluation of the perimeter of the South Atlantic Anomaly (SAA), measurements of live time, of absolute time, and internal and spacecraft boresight alignments. In this work, we describe on-orbit calibration results obtained using known astrophysical sources, galactic cosmic rays, and charge injection into the front-end electronics of each detector. Instrument response functions will be described in a separate publication. This paper demonstrates the stability of calibrations and describes minor changes observed since launch. Lastly, these results have been used to calibrate the LAT datasets to be publicly released in August 2009.

  1. Calibration of the Oscillating Screen Viscometer

    NASA Technical Reports Server (NTRS)

    Berg, Robert F.; Moldover, Michael R.

    1993-01-01

    We have devised a calibration procedure for the oscillating screen viscometer which can provide the accuracy needed for the flight measurement of viscosity near the liquid-vapor critical point of xenon. The procedure, which makes use of the viscometer's wide bandwidth and hydrodynamic similarity, allows the viscometer to be self-calibrating. To demonstrate the validity of this procedure we measured the oscillator's transfer function under a wide variety of conditions. We obtained data using CO2 at temperatures spanning a temperature range of 35 K and densities varying by a factor of 165, thereby encountering viscosity variations as great as 50%. In contrast the flight experiment will be performed over a temperature range of 29 K and at only a single density, and the viscosity is expected to change by less than 40%. The measurements show that, after excluding data above 10 Hz (where frequency-dependent corrections are poorly modeled) and making a plausible adjustment to the viscosity value used at high density, the viscometer's behavior is fully consistent with the use of hydrodynamic similarity for calibration. Achieving this agreement required understanding a 1% anelastic effect present in the oscillator's torsion fiber.

  2. Energy Calibration of the Scintillating Optical Fiber Calorimeter Chamber (SOFCAL)

    NASA Technical Reports Server (NTRS)

    Christl, M. C.; Fountain, W. F.; Parnell, T.; Roberts, F. E.; Gregory, J. C.; Johnson, J.; Takahashi, Y.

    1997-01-01

    The Scintillating Optical Fiber Calorimeter (SOFCAL) detector is designed to make direct measurements of the primary cosmic ray spectrum from ~200 GeV/amu to 20 TeV/amu. The primary particles are resolved into groups according to their charge (p, He, CNO, Medium Z, Heavy Z) using both active and passive components integrated into the detector. The principal part of SOFCAL is a thin ionization calorimeter that measures the electromagnetic cascades that result from these energetic particles interacting in the detector. The calorimeter is divided into two sections: a thin passive emulsion/x-ray film calorimeter, and a fiber calorimeter that uses crossing layers of small scintillating optical fibers to sample the energy deposition of the cascades. The energy determination is made by fitting the fiber data to transition curves generated by Monte Carlo simulations. The fiber data must first be calibrated using the electron counts from the emulsion plates in the calorimeter for a small number of events. The technique and results of this calibration will be presented together with samples of the data from a balloon flight.

  3. Calibrating damping rates with LEGACY linewidths

    NASA Astrophysics Data System (ADS)

    Houdek, Günter

    2017-10-01

    Linear damping rates of radial oscillation modes in selected Kepler stars are estimated with the help of a nonadiabatic stability analysis. The convective fluxes are obtained from a nonlocal, time-dependent convection model. The mixing-length parameter is calibrated to the surface-convection-zone depth of a stellar model obtained from fitting adiabatic frequencies to the LEGACY* observations, and two of the three nonlocal convection parameters are calibrated to the corresponding LEGACY* linewidth measurements. The atmospheric structure in the 1D stability analysis adopts a temperature-optical-depth relation derived from 3D hydrodynamical simulations. Results from 3D simulations are also used to calibrate the turbulent pressure and to guide the functional form of the depth-dependence of the anisotropy of the turbulent velocity field in the 1D stability computations.

  4. An evaluation of the accuracy of geomagnetic data obtained from an unattended, automated, quasi-absolute station

    USGS Publications Warehouse

    Herzog, D.C.

    1990-01-01

    A comparison is made of geomagnetic calibration data obtained from a high-sensitivity proton magnetometer enclosed within an orthogonal bias coil system, with data obtained from standard procedures at a mid-latitude U.S. Geological Survey magnetic observatory using a quartz horizontal magnetometer, a Ruska magnetometer, and a total field magnetometer. The orthogonal coil arrangement is used with the proton magnetometer to provide Deflected-Inclination-Deflected-Declination (DIDD) data from which quasi-absolute values of declination, horizontal intensity, and vertical intensity can be derived. Vector magnetometers provide the ordinate values to yield baseline calibrations for both the DIDD and standard observatory processes. Results obtained from a prototype system over a period of several months indicate that the DIDD unit can furnish adequate absolute field values for maintaining observatory calibration data, thus providing baseline control for unattended, remote stations. © 1990.

  5. SAR calibration technology review

    NASA Technical Reports Server (NTRS)

    Walker, J. L.; Larson, R. W.

    1981-01-01

    Synthetic Aperture Radar (SAR) calibration technology including a general description of the primary calibration techniques and some of the factors which affect the performance of calibrated SAR systems are reviewed. The use of reference reflectors for measurement of the total system transfer function along with an on-board calibration signal generator for monitoring the temporal variations of the receiver to processor output is a practical approach for SAR calibration. However, preliminary error analysis and previous experimental measurements indicate that reflectivity measurement accuracies of better than 3 dB will be difficult to achieve. This is not adequate for many applications and, therefore, improved end-to-end SAR calibration techniques are required.

  6. Calibration of a modified temperature-light intensity logger for quantifying water electrical conductivity

    NASA Astrophysics Data System (ADS)

    Gillman, M. A.; Lamoureux, S. F.; Lafrenière, M. J.

    2017-09-01

    The Stream Temperature, Intermittency, and Conductivity (STIC) electrical conductivity (EC) logger as presented by Chapin et al. (2014) serves as an inexpensive (~50 USD) means to assess relative EC in freshwater environments. This communication demonstrates the calibration of the STIC logger for quantifying EC, and provides examples from a month-long field deployment in the High Arctic. Calibration models followed multiple nonlinear regression and produced calibration curves with high coefficient of determination values (R2 = 0.995 - 0.998; n = 5). Percent error of mean predicted specific conductance at 25°C (SpC) relative to known SpC ranged in magnitude from -0.6% to 13% (mean = -1.4%), and mean absolute percent error (MAPE) ranged from 2.1% to 13% (mean = 5.3%). Across all tested loggers we found good accuracy and precision, with both error metrics increasing with increasing SpC values. During 10 month-long field deployments, there were no logger failures and full data recovery was achieved. Point SpC measurements at the location of STIC loggers recorded via a more expensive commercial electrical conductivity logger followed similar trends to STIC SpC records, with 1:1.05 and 1:1.08 relationships between the STIC and commercial logger SpC values. These results demonstrate that STIC loggers calibrated to quantify EC are an economical means to increase the spatiotemporal resolution of water quality investigations.
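
    A small sketch of the two error metrics quoted above, the signed percent error and the mean absolute percent error (MAPE), for predicted versus known specific conductance; the readings are hypothetical check-standard values, not the study's data.

    ```python
    # Hedged sketch: signed percent error and MAPE for a calibrated logger
    # against known specific conductance values (hypothetical readings).
    import numpy as np

    known_spc = np.array([100.0, 250.0, 500.0, 1000.0, 2000.0])      # uS/cm
    predicted_spc = np.array([101.5, 246.0, 512.0, 1030.0, 2110.0])  # uS/cm

    percent_error = 100.0 * (predicted_spc - known_spc) / known_spc
    mape = np.mean(np.abs(percent_error))
    print("percent error:", np.round(percent_error, 1))
    print(f"MAPE = {mape:.1f}%")
    ```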

  7. Quantification of calcium using localized normalization on laser-induced breakdown spectroscopy data

    NASA Astrophysics Data System (ADS)

    Sabri, Nursalwanie Mohd; Haider, Zuhaib; Tufail, Kashif; Aziz, Safwan; Ali, Jalil; Wahab, Zaidan Abdul; Abbas, Zulkifly

    2017-03-01

    This paper focuses on localized normalization for improved calibration curves in laser-induced breakdown spectroscopy (LIBS) measurements. The calibration curves have been obtained using five samples consisting of different concentrations of calcium (Ca) in a potassium bromide (KBr) matrix. The work utilized a Q-switched Nd:YAG laser installed in a LIBS2500plus system, operated at the fundamental wavelength with a laser energy of 650 mJ. The optimum gate delay can be obtained from the signal-to-background ratio (SBR) of the Ca II 315.9 and 317.9 nm lines; the optimum conditions are those giving high spectral intensity and SBR. The highest intensities of the ionic Ca emission lines were observed at a gate delay of 0.83 µs, while the SBR gives an optimized gate delay of 5.42 µs for both Ca II spectral lines. The calibration curves consist of three parts: the original intensity from the LIBS experiment, and the normalized and locally normalized spectral line intensities. The R2 values of the calibration curves plotted using locally normalized intensities of the Ca I 610.3, 612.2 and 616.2 nm spectral lines are 0.96329, 0.97042, and 0.96131, respectively. The improvement in the calibration curves, as reflected in the regression coefficient, allows more accurate analysis in LIBS. At the request of all authors of the paper, and with the agreement of the Proceedings Editor, an updated version of this article was published on 24 May 2017.
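
    A brief sketch of the quantities used above: a signal-to-background ratio for line and gate-delay selection, and a linear calibration curve with its coefficient of determination. The intensities, concentrations and the particular SBR convention shown are assumptions for illustration, not the reported measurements.

    ```python
    # Hedged sketch: a signal-to-background ratio (SBR) and a linear calibration
    # curve with its R^2. One common SBR convention is used; others exist.
    import numpy as np

    peak_intensity = 5400.0          # counts at the Ca line centre (hypothetical)
    background = 800.0               # nearby continuum level (hypothetical)
    sbr = (peak_intensity - background) / background
    print(f"SBR = {sbr:.2f}")

    conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])             # % Ca in KBr (hypothetical)
    intensity = np.array([0.12, 0.22, 0.46, 0.88, 1.79])   # locally normalised intensity

    slope, intercept = np.polyfit(conc, intensity, 1)
    fit = slope * conc + intercept
    r2 = 1 - np.sum((intensity - fit) ** 2) / np.sum((intensity - intensity.mean()) ** 2)
    print(f"calibration: I = {slope:.3f}*C + {intercept:.3f}, R^2 = {r2:.4f}")
    ```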

  8. Calibration and validation of wearable monitors.

    PubMed

    Bassett, David R; Rowlands, Alex; Trost, Stewart G

    2012-01-01

    Wearable monitors are increasingly being used to objectively monitor physical activity in research studies within the field of exercise science. Calibration and validation of these devices are vital to obtaining accurate data. This article is aimed primarily at the physical activity measurement specialist, although the end user who is conducting studies with these devices also may benefit from knowing about this topic. Initially, wearable physical activity monitors should undergo unit calibration to ensure interinstrument reliability. The next step is to simultaneously collect both raw signal data (e.g., acceleration) from the wearable monitors and rates of energy expenditure, so that algorithms can be developed to convert the direct signals into energy expenditure. This process should use multiple wearable monitors and a large and diverse subject group and should include a wide range of physical activities commonly performed in daily life (from sedentary to vigorous). New methods of calibration now use "pattern recognition" approaches to train the algorithms on various activities, and they provide estimates of energy expenditure that are much better than those previously available with the single-regression approach. Once a method of predicting energy expenditure has been established, the next step is to examine its predictive accuracy by cross-validating it in other populations. In this article, we attempt to summarize the best practices for calibration and validation of wearable physical activity monitors. Finally, we conclude with some ideas for future research that will move the field of physical activity measurement forward.

  9. Innovative methodology for intercomparison of radionuclide calibrators using short half-life in situ prepared radioactive sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oliveira, P. A.; Santos, J. A. M., E-mail: joao.santos@ipoporto.min-saude.pt; Serviço de Física Médica do Instituto Português de Oncologia do Porto Francisco Gentil, EPE, Porto

    2014-07-15

    Purpose: An original radionuclide calibrator method for activity determination is presented. The method could be used for intercomparison surveys for short half-life radioactive sources used in Nuclear Medicine, such as 99mTc or most positron emission tomography radiopharmaceuticals. Methods: By evaluation of the resulting net optical density (netOD) using a standardized scanning method of irradiated Gafchromic XRQA2 film, a comparison of the netOD measurement with a previously determined calibration curve can be made and the difference between the tested radionuclide calibrator and a radionuclide calibrator used as reference device can be calculated. To estimate the total expected measurement uncertainties, a careful analysis of the methodology, for the case of 99mTc, was performed: reproducibility determination, scanning conditions, and possible fadeout effects. Since every factor of the activity measurement procedure can influence the final result, the method also evaluates correct syringe positioning inside the radionuclide calibrator. Results: As an alternative to using a calibrated source sent to the surveyed site, which requires a relatively long half-life of the nuclide, or sending a portable calibrated radionuclide calibrator, the proposed method uses a source prepared in situ. An indirect activity determination is achieved by the irradiation of a radiochromic film using 99mTc under strictly controlled conditions, and cumulated activity calculation from the initial activity and total irradiation time. The irradiated Gafchromic film and the irradiator, without the source, can then be sent to a National Metrology Institute for evaluation of the results. Conclusions: The methodology described in this paper was shown to have good potential for accurate (3%) radionuclide calibrator intercomparison studies for 99mTc between Nuclear Medicine centers without source transfer and can easily be adapted to other short half
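
    For context, the net optical density referred to above is commonly defined in radiochromic film dosimetry as shown below, in terms of the transmitted intensities (or scanner pixel values proportional to them) of the exposed and unexposed film; conventions differ slightly between protocols, so this is a generic definition rather than necessarily the authors' exact one.

    ```latex
    % Generic net optical density definition used in radiochromic film dosimetry;
    % I_0 is the light level with no film, I_unexp and I_exp the transmitted
    % intensities for the unexposed and exposed film.
    \[
      \mathrm{netOD}
      = \mathrm{OD}_{\mathrm{exp}} - \mathrm{OD}_{\mathrm{unexp}}
      = \log_{10}\!\frac{I_0}{I_{\mathrm{exp}}} - \log_{10}\!\frac{I_0}{I_{\mathrm{unexp}}}
      = \log_{10}\!\frac{I_{\mathrm{unexp}}}{I_{\mathrm{exp}}}
    \]
    ```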

  10. Cassini RADAR End of Mission Calibration and Preliminary Ring Results

    NASA Astrophysics Data System (ADS)

    West, R. D.; Janssen, M.; Zhang, Z.; Cuzzi, J. N.; Anderson, Y.; Hamilton, G.

    2017-12-01

    The Cassini mission is in the midst of its last year of observations. Part of the mission plan includes orbits that bring the spacecraft close to Saturn's rings prior to deorbiting into Saturn's atmosphere. First, a series of F-ring orbits crossed the ring plane just outside of the F-ring, and then a series of Proximal orbits crossed the ring plane inside of the D-ring - just above the cloud tops. The Cassini RADAR instrument collected active and passive data of the rings in 5 observations, of Saturn in one observation, and passive-only data in an additional 4 observations. These observations provided a unique opportunity to obtain backscatter measurements and relatively high-resolution brightness temperature measurements from Saturn and the rings. Such measurements were never before possible from the spacecraft or the Earth because of the large range involved. Before the F-ring orbits began, and again during the last rings scan, the radar collected calibration data to aid calibration of the rings measurements and to provide an updated timeline of the radar calibration over the whole mission. This presentation will cover preliminary processing results from the radar rings scans and from the calibration data sets. Ultimately, these ring scan measurements will provide a 1-D profile of backscatter obtained at 2.2 cm wavelength that will complement similar passive profiles obtained at optical, infrared, and microwave wavelengths. Such measurements will further constrain and inform models of the ring particle composition and structure, and the local vertical structure of the rings. This work is supported by the NASA Cassini Program at JPL - CalTech.

  11. Novel quantitative calibration approach for multi-configuration electromagnetic induction (EMI) systems using data acquired at multiple elevations

    NASA Astrophysics Data System (ADS)

    Tan, Xihe; Mester, Achim; von Hebel, Christian; van der Kruk, Jan; Zimmermann, Egon; Vereecken, Harry; van Waasen, Stefan

    2017-04-01

    Electromagnetic induction (EMI) systems offer a great potential to obtain highly resolved layered electrical conductivity models of the shallow subsurface. State-of-the-art inversion procedures require quantitative calibration of EMI data, especially for short-offset EMI systems where significant data shifts are often observed. These shifts are caused by external influences such as the presence of the operator, zero-leveling procedures, the field setup used to move the EMI system and/or cables close by. Calibrations can be performed by using collocated electrical resistivity measurements or taking soil samples, however, these two methods take a lot of time in the field. To improve the calibration in a fast and concise way, we introduce a novel on-site calibration method using a series of apparent electrical conductivity (ECa) values acquired at multiple elevations for a multi-configuration EMI system. No additional instrument or pre-knowledge of the subsurface is needed to acquire quantitative ECa data. By using this calibration method, we correct each coil configuration, i.e., transmitter and receiver coil separation and the horizontal or vertical coplanar (HCP or VCP) coil orientation with a unique set of calibration parameters. A multi-layer soil structure at the corresponding measurement location is inverted together with the calibration parameters using full-solution Maxwell equations for the forward modelling within the shuffled complex evolution (SCE) algorithm to find the optimum solution under a user-defined parameter space. Synthetic data verified the feasibility for calibrating HCP and VCP measurements of a custom made six-coil EMI system with coil offsets between 0.35 m and 1.8 m for quantitative data inversions. As a next step, we applied the calibration approach on acquired experimental data from a bare soil test field (Selhausen, Germany) for the considered EMI system. The obtained calibration parameters were applied to measurements over a 30 m

  12. When calibration is not enough

    NASA Astrophysics Data System (ADS)

    Kingsley, Jeffrey R.; Johnson, Leslie

    1999-12-01

    When added CD (Critical Dimension) capacity is needed there are several routes that can be taken -- add shifts and people to existing equipment, obtain additional equipment and staff, or use an outside service provider for peak and emergency work. In all but the first scenario the qualification of the 'new' equipment, and correlation to the existing measurements, is key to meaningful results. In many cases simply calibrating the new tool with the same reference material or standard used to calibrate the existing tools will provide the level of agreement required. In fact, calibrating instruments using different standards can provide an acceptable level of agreement in cases where accuracy is a second tier consideration. However, there are also situations where factors outside of calibration can influence the results. In this study CD measurements from a mask sample being used to qualify an outside service provider showed good agreement for the narrower linewidths, but significant deviation occurred with increasing CD. In the course of a root cause investigation, it was found that there are a variety of factors that may influence the agreement found between two tools. What are these 'other factors' and how are they found? In the present case the results of a 'round robin' consensus from a variety of tools were used to initially determine which tool needed to be investigated. The instrument parameters felt to be the most important causes of the disagreement were identified and experiments were run to test their influence. The factors investigated as the cause of the disagreement included (1) Type of detector and location with respect to sample, (2) Beam Voltage, (3) Scan Rotation/Sample Orientation issues and (4) Edge Detection Algorithm.

  13. Time-Intensity Curves Obtained after Microbubble Injection Can Be Used to Differentiate Responders from Nonresponders among Patients with Clinically Active Crohn Disease after 6 Weeks of Pharmacologic Treatment.

    PubMed

    Quaia, Emilio; Sozzi, Michele; Angileri, Roberta; Gennari, Antonio Giulio; Cova, Maria Assunta

    2016-11-01

    Purpose To assess whether contrast material-enhanced ultrasonography (US) can be used to differentiate responders from nonresponders among patients with clinically active Crohn disease after 6 weeks of pharmacologic treatment. Materials and Methods This prospective study was approved by our ethics committee, and written informed consent was obtained from all patients. Fifty consecutive patients (26 men and 24 women; mean age, 34.76 years ± 9) with a proved diagnosis of active Crohn disease who were scheduled to begin therapy with biologics (infliximab or adalimumab) were included, with enrollment from June 1, 2013, to June 1, 2015. In each patient, the terminal ileal loop was imaged with contrast-enhanced US before the beginning and at the end of week 6 of pharmacologic treatment. Time-intensity curves obtained in responders (those with a decrease in the Crohn disease endoscopic index of severity score of 25-44 before treatment to 10-15 after treatment, an inflammatory score <7, and/or a decrease ≥70 in the Crohn disease activity index score compared with baseline) and nonresponders were compared with Mann-Whitney test. Results Responders (n = 31) and nonresponders (n = 19) differed (P < .05) in the percent change of peak enhancement (-40.78 ± 62.85 vs 53.21 ± 72.5; P = .0001), wash-in (-34.8 ± 67.72 vs 89.44 ± 145.32; P = .001) and washout (-5.64 ± 130.71 vs 166.83 ± 204.44; P = .002) rate, wash-in perfusion index (-42.29 ± 59.21 vs 50.96 ± 71.13; P = .001), area under the time-intensity curve (AUC; -46.17 ± 48.42 vs 41.78 ± 87.64; P = .001), AUC during wash-in (-43.93 ± 54.29 vs 39.79 ± 70.85; P = .001), and AUC during washout (-49.36 ± 47.42 vs 42.65 ± 97.09; P = .001). Responders and nonresponders did not differ in the percent change of rise time (5.09 ± 49.13 vs 6.24 ± 48.06; P = .93) and time to peak enhancement (8.82 ± 54.5 vs 10.21 ± 43.25; P = .3). Conclusion Analysis of time-intensity curves obtained after injection of microbubble
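
    A minimal sketch of the statistical comparison described above, the Mann-Whitney test applied to percent changes of one perfusion parameter in the two groups; the sample values are invented, not patient data.

    ```python
    # Hedged sketch: comparing percent changes of a perfusion parameter between
    # responders and non-responders with the Mann-Whitney test.
    import numpy as np
    from scipy.stats import mannwhitneyu

    responders = np.array([-55.0, -42.3, -60.1, -38.7, -49.5, -30.2])     # % change (invented)
    nonresponders = np.array([48.0, 61.2, 35.5, 72.9, 20.4, 55.1])        # % change (invented)

    stat, p_value = mannwhitneyu(responders, nonresponders, alternative="two-sided")
    print(f"U = {stat:.1f}, p = {p_value:.4f}")
    ```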

  14. Experimental verification of self-calibration radiometer based on spontaneous parametric downconversion

    NASA Astrophysics Data System (ADS)

    Gao, Dongyang; Zheng, Xiaobing; Li, Jianjun; Hu, Youbo; Xia, Maopeng; Salam, Abdul; Zhang, Peng

    2018-03-01

    Based on the spontaneous parametric downconversion process, we propose a novel self-calibration radiometer scheme which can self-calibrate the degradation of its own response and ultimately monitor the fluctuation of a target radiation. The monitoring results are independent of the radiometer's own degradation and are not linked to the primary standard detector scale. The principle and feasibility of the proposed scheme were verified by observing a bromine-tungsten lamp. A relative standard deviation of 0.39% was obtained for the stable bromine-tungsten lamp. The results confirm the principle of the proposed scheme, which could make a significant breakthrough in the self-calibration issue on space platforms.

  15. SWIR Calibration of Spectralon Reflectance Factor

    NASA Technical Reports Server (NTRS)

    Georgiev, Georgi T.; Butler, James J.; Cooksey, Catherine; Ding, Leibo; Thome, Kurtis J.

    2011-01-01

    Satellite instruments operating in the reflective solar wavelength region require accurate and precise determination of the Bidirectional Reflectance Factor (BRF) of laboratory-based diffusers used in their pre-flight and on-orbit radiometric calibrations. BRF measurements are required throughout the reflected-solar spectrum from the ultraviolet through the shortwave infrared. Spectralon diffusers are commonly used as a reflectance standard for bidirectional and hemispherical geometries. The Diffuser Calibration Laboratory (DCaL) at NASA's Goddard Space Flight Center is a secondary calibration facility with reflectance measurements traceable to those made by the Spectral Tri-function Automated Reference Reflectometer (STARR) facility at the National Institute of Standards and Technology (NIST). For more than two decades, the DCaL has provided numerous NASA projects with BRF data in the ultraviolet (UV), visible (VIS) and the Near InfraRed (NIR) spectral regions. Presented in this paper are measurements of BRF from 1475 nm to 1625 nm obtained using an indium gallium arsenide detector and a tunable coherent light source. The sample was a 2 inch diameter, 99% white Spectralon target. The BRF results are discussed and compared to empirically generated data from a model based on NIST certified values of 6° directional/hemispherical spectral reflectance factors from 900 nm to 2500 nm. Employing a new NIST capability for measuring bidirectional reflectance using a cooled, extended InGaAs detector, BRF calibration measurements of the same sample were also made using NIST's STARR from 1475 nm to 1625 nm at an incident angle of 0° and at viewing angles of 40°, 45°, and 50°. The total combined uncertainty for BRF in this ShortWave Infrared (SWIR) range is less than 1%. This measurement capability will evolve into a BRF calibration service in the SWIR region in support of NASA remote sensing missions. Keywords: BRF, BRDF, Calibration, Spectralon, Reflectance, Remote Sensing.

  16. Line fiducial material and thickness considerations for ultrasound calibration

    NASA Astrophysics Data System (ADS)

    Ameri, Golafsoun; McLeod, A. J.; Baxter, John S. H.; Chen, Elvis C. S.; Peters, Terry M.

    2015-03-01

    Ultrasound calibration is a necessary procedure in many image-guided interventions, relating the position of tools and anatomical structures in the ultrasound image to a common coordinate system. This is a necessary component of augmented reality environments in image-guided interventions as it allows for a 3D visualization where other surgical tools outside the imaging plane can be found. Accuracy of ultrasound calibration fundamentally affects the total accuracy of this interventional guidance system. Many ultrasound calibration procedures have been proposed based on a variety of phantom materials and geometries. These differences lead to differences in representation of the phantom on the ultrasound image which subsequently affect the ability to accurately and automatically segment the phantom. For example, taut wires are commonly used as line fiducials in ultrasound calibration. However, at large depths or oblique angles, the fiducials appear blurred and smeared in ultrasound images making it hard to localize their cross-section with the ultrasound image plane. Intuitively, larger diameter phantoms with lower echogenicity are more accurately segmented in ultrasound images in comparison to highly reflective thin phantoms. In this work, an evaluation of a variety of calibration phantoms with different geometrical and material properties for the phantomless calibration procedure was performed. The phantoms used in this study include braided wire, plastic straws, and polyvinyl alcohol cryogel tubes with different diameters. Conventional B-mode and synthetic aperture images of the phantoms at different positions were obtained. The phantoms were automatically segmented from the ultrasound images using an ellipse fitting algorithm, the centroid of which is subsequently used as a fiducial for calibration. Calibration accuracy was evaluated for these procedures based on the leave-one-out target registration error. It was shown that larger diameter phantoms with lower
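
    A hedged sketch of the leave-one-out target registration error used above to score calibrations: each fiducial is withheld in turn, a rigid transform is fitted to the remaining pairs (a Kabsch/SVD solution), and the error at the withheld point is recorded. The point sets below are synthetic stand-ins for segmented fiducial centroids, not phantom data.

    ```python
    # Hedged sketch: leave-one-out target registration error (TRE) for a
    # point-based calibration using a least-squares rigid (Kabsch/SVD) fit.
    import numpy as np

    def rigid_fit(src, dst):
        """Least-squares rigid transform (R, t) mapping src points onto dst."""
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_c).T @ (dst - dst_c)                 # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
        R = Vt.T @ D @ U.T
        t = dst_c - R @ src_c
        return R, t

    rng = np.random.default_rng(7)
    pts_img = rng.uniform(0, 50, (8, 3))                    # fiducials in image space
    R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
    pts_world = pts_img @ R_true.T + np.array([10.0, -5.0, 2.0])
    pts_world += rng.normal(0, 0.3, pts_world.shape)        # localisation noise

    tre = []
    for i in range(len(pts_img)):
        keep = np.arange(len(pts_img)) != i                 # withhold fiducial i
        R, t = rigid_fit(pts_img[keep], pts_world[keep])
        tre.append(np.linalg.norm(R @ pts_img[i] + t - pts_world[i]))
    print(f"mean leave-one-out TRE = {np.mean(tre):.2f} (same units as input)")
    ```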

  17. Calibration of HST wide field camera for quantitative analysis of faint galaxy images

    NASA Technical Reports Server (NTRS)

    Ratnatunga, Kavan U.; Griffiths, Richard E.; Casertano, Stefano; Neuschaefer, Lyman W.; Wyckoff, Eric W.

    1994-01-01

    We present the methods adopted to optimize the calibration of images obtained with the Hubble Space Telescope (HST) Wide Field Camera (WFC) (1991-1993). Our main goal is to improve quantitative measurement of faint images, with special emphasis on the faint (I approximately 20-24 mag) stars and galaxies observed as a part of the Medium-Deep Survey. Several modifications to the standard calibration procedures have been introduced, including improved bias and dark images, and a new supersky flatfield obtained by combining a large number of relatively object-free Medium-Deep Survey exposures of random fields. The supersky flat has a pixel-to-pixel rms error of about 2.0% in F555W and of 2.4% in F785LP; large-scale variations are smaller than 1% rms. Overall, our modifications improve the quality of faint images with respect to the standard calibration by about a factor of five in photometric accuracy and about 0.3 mag in sensitivity, corresponding to about a factor of two in observing time. The relevant calibration images have been made available to the scientific community.

  18. A holistic calibration method with iterative distortion compensation for stereo deflectometry

    NASA Astrophysics Data System (ADS)

    Xu, Yongjia; Gao, Feng; Zhang, Zonghua; Jiang, Xiangqian

    2018-07-01

    This paper presents a novel holistic calibration method for a stereo deflectometry system to improve the system measurement accuracy. The reconstruction result of stereo deflectometry is integrated with the calculated normal data of the measured surface. The calculation accuracy of the normal data is seriously influenced by the calibration accuracy of the geometrical relationship of the stereo deflectometry system. Conventional calibration approaches introduce form error to the system due to an inaccurate imaging model and distortion elimination. The proposed calibration method compensates system distortion based on an iterative algorithm instead of the conventional distortion mathematical model. The initial values of the system parameters are calculated from the fringe patterns displayed on the systemic LCD screen through a reflection of a markless flat mirror. An iterative algorithm is proposed to compensate system distortion and optimize camera imaging parameters and system geometrical relation parameters based on a cost function. Both simulation work and experimental results show the proposed calibration method can significantly improve the calibration and measurement accuracy of a stereo deflectometry system. The PV (peak value) of the measurement error of a flat mirror can be reduced to 69.7 nm by applying the proposed method, from 282 nm obtained with the conventional calibration approach.

  19. Light curve variations of the eclipsing binary V367 Cygni

    NASA Astrophysics Data System (ADS)

    Akan, M. C.

    1987-07-01

    The long-period eclipsing binary star V367 Cygni has been observed photoelectrically in two colours, B and V, in 1984, 1985, and 1986. These new light curves of the system are discussed and compared, with respect to light variability, with the earlier ones presented by Heiser (1962). Using some of the previously published photoelectric light curves and the present ones, several times of primary minima have been derived to calculate the light elements. Any attempt to obtain a photometric solution of the binary is complicated by the peculiar nature of the light curve caused by the presence of circumstellar matter in the system. Despite this difficulty, however, some approaches are being carried out to solve the light curves, which are briefly discussed.

  20. SWIR calibration of Spectralon reflectance factor

    NASA Astrophysics Data System (ADS)

    Georgiev, Georgi T.; Butler, James J.; Cooksey, Catherine; Ding, Leibo; Thome, Kurtis J.

    2011-11-01

    Satellite instruments operating in the reflective solar wavelength region require accurate and precise determination of the Bidirectional Reflectance Factor (BRF) of laboratory-based diffusers used in their pre-flight and on-orbit radiometric calibrations. BRF measurements are required throughout the reflected-solar spectrum from the ultraviolet through the shortwave infrared. Spectralon diffusers are commonly used as a reflectance standard for bidirectional and hemispherical geometries. The Diffuser Calibration Laboratory (DCaL) at NASA's Goddard Space Flight Center is a secondary calibration facility with reflectance measurements traceable to those made by the Spectral Tri-function Automated Reference Reflectometer (STARR) facility at the National Institute of Standards and Technology (NIST). For more than two decades, the DCaL has provided numerous NASA projects with BRF data in the ultraviolet (UV), visible (VIS) and the Near InfraRed (NIR) spectral regions. Presented in this paper are measurements of BRF from 1475 nm to 1625 nm obtained using an indium gallium arsenide detector and a tunable coherent light source. The sample was a 50.8 mm (2 in) diameter, 99% white Spectralon target. The BRF results are discussed and compared to empirically generated data from a model based on NIST certified values of 6° directional-hemispherical spectral reflectance factors from 900 nm to 2500 nm. Employing a new NIST capability for measuring bidirectional reflectance using a cooled, extended InGaAs detector, BRF calibration measurements of the same sample were also made using NIST's STARR from 1475 nm to 1625 nm at an incident angle of 0° and at a viewing angle of 45°. The total combined uncertainty for BRF in this ShortWave Infrared (SWIR) range is less than 1%. This measurement capability will evolve into a BRF calibration service in the SWIR region in support of NASA remote sensing missions.