NASA Technical Reports Server (NTRS)
Grubbs, Guy II; Michell, Robert; Samara, Marilia; Hampton, Don; Jahn, Jorg-Micha
2016-01-01
A technique is presented for the periodic and systematic calibration of ground-based optical imagers. It is important to have a common system of units (Rayleighs or photon flux) for cross comparison as well as self-comparison over time. With the advancement in technology, the sensitivity of these imagers has improved so that stars can be used for more precise calibration. Background subtraction, flat fielding, star mapping, and other common techniques are combined in deriving a calibration technique appropriate for a variety of ground-based imager installations. Spectral (4278, 5577, and 8446 Å) ground-based imager data with multiple fields of view (19, 47, and 180 deg) are processed and calibrated using the techniques developed. The calibration techniques applied result in intensity measurements in agreement between different imagers using identical spectral filtering, and the intensity at each wavelength observed is within the expected range of auroral measurements. The application of these star calibration techniques, which convert raw imager counts into units of photon flux, makes it possible to do quantitative photometry. The computed photon fluxes, in units of Rayleighs, can be used for absolute photometry between instruments or as input parameters for auroral electron transport models.
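As a sketch of the counts-to-Rayleighs conversion the abstract describes, the following hypothetical Python fragment (the function names, star values, and single-sensitivity-factor assumption are ours, not the paper's) derives a counts-per-Rayleigh factor from stars of known flux and applies it after background subtraction and flat fielding:

```python
# Hypothetical sketch: convert background-subtracted, flat-fielded imager
# counts to Rayleighs using a sensitivity factor derived from calibration
# stars of known photon flux.

def star_sensitivity(star_counts, star_flux_rayleighs):
    """Mean counts-per-Rayleigh over a set of calibration stars."""
    ratios = [c / f for c, f in zip(star_counts, star_flux_rayleighs)]
    return sum(ratios) / len(ratios)

def calibrate_pixel(raw_counts, background, flat_field, sensitivity):
    """Background-subtract, flat-field, and scale a pixel value to Rayleighs."""
    corrected = (raw_counts - background) / flat_field
    return corrected / sensitivity

s = star_sensitivity([1200.0, 1180.0], [100.0, 98.0])  # synthetic star data
print(calibrate_pixel(2400.0, 400.0, 1.0, s))          # ~166.4 Rayleighs
```

In practice the sensitivity would be fitted per pixel and per filter; the single scalar here only illustrates the unit conversion.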
A TRMM-Calibrated Infrared Technique for Global Rainfall Estimation
NASA Technical Reports Server (NTRS)
Negri, Andrew J.; Adler, Robert F.
2002-01-01
The development of a satellite infrared (IR) technique for estimating convective and stratiform rainfall and its application in studying the diurnal variability of rainfall on a global scale is presented. The Convective-Stratiform Technique (CST), calibrated by coincident, physically retrieved rain rates from the Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR), is applied over the global tropics during 2001. The technique is calibrated separately over land and ocean, making ingenious use of the IR data from the TRMM Visible/Infrared Scanner (VIRS) before application to global geosynchronous satellite data. The low sampling rate of TRMM PR imposes limitations on calibrating IR-based techniques; however, our research shows that PR observations can be applied to improve IR-based techniques significantly by selecting adequate calibration areas and calibration length. The diurnal cycle of rainfall, as well as the division between convective and stratiform rainfall will be presented. The technique is validated using available data sets and compared to other global rainfall products such as Global Precipitation Climatology Project (GPCP) IR product, calibrated with TRMM Microwave Imager (TMI) data. The calibrated CST technique has the advantages of high spatial resolution (4 km), filtering of non-raining cirrus clouds, and the stratification of the rainfall into its convective and stratiform components, the latter being important for the calculation of vertical profiles of latent heating.
Financial model calibration using consistency hints.
Abu-Mostafa, Y S
2001-01-01
We introduce a technique for forcing the calibration of a financial model to produce valid parameters. The technique is based on learning from hints. It converts simple curve fitting into genuine calibration, where broad conclusions can be inferred from parameter values. The technique augments the error function of curve fitting with consistency hint error functions based on the Kullback-Leibler distance. We introduce an efficient EM-type optimization algorithm tailored to this technique. We also introduce other consistency hints, and balance their weights using canonical errors. We calibrate the correlated multifactor Vasicek model of interest rates, and apply it successfully to the Japanese Yen swaps market and the US Dollar yield market.
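A minimal illustration of the augmented error function described above, assuming a discrete probability representation and an arbitrary hint weight (both our simplifications, not the paper's formulation):

```python
import math

# Illustrative sketch (names and values are ours): augment a curve-fitting
# error with a Kullback-Leibler consistency-hint penalty.

def kl_divergence(p, q):
    """Discrete KL distance between two probability vectors."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def augmented_error(fit_error, hint_weight, p_model, p_target):
    """Curve-fit error plus the weighted consistency-hint error."""
    return fit_error + hint_weight * kl_divergence(p_model, p_target)

# A model distribution that deviates from the hint's target adds a penalty:
print(augmented_error(0.10, 0.5, [0.6, 0.4], [0.5, 0.5]))
```

The optimizer would minimize this combined objective, so parameter sets that fit the curve but violate the consistency hint are penalized.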
A TRMM-Calibrated Infrared Technique for Global Rainfall Estimation
NASA Technical Reports Server (NTRS)
Negri, Andrew J.; Adler, Robert F.; Xu, Li-Ming
2003-01-01
This paper presents the development of a satellite infrared (IR) technique for estimating convective and stratiform rainfall and its application in studying the diurnal variability of rainfall on a global scale. The Convective-Stratiform Technique (CST), calibrated by coincident, physically retrieved rain rates from the Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR), is applied over the global tropics during summer 2001. The technique is calibrated separately over land and ocean, making ingenious use of the IR data from the TRMM Visible/Infrared Scanner (VIRS) before application to global geosynchronous satellite data. The low sampling rate of TRMM PR imposes limitations on calibrating IR-based techniques; however, our research shows that PR observations can be applied to improve IR-based techniques significantly by selecting adequate calibration areas and calibration length. The diurnal cycle of rainfall, as well as the division between convective and stratiform rainfall will be presented. The technique is validated using available data sets and compared to other global rainfall products such as Global Precipitation Climatology Project (GPCP) IR product, calibrated with TRMM Microwave Imager (TMI) data. The calibrated CST technique has the advantages of high spatial resolution (4 km), filtering of non-raining cirrus clouds, and the stratification of the rainfall into its convective and stratiform components, the latter being important for the calculation of vertical profiles of latent heating.
Bleeker, H J; Lewin, P A
2000-01-01
A new calibration technique for PVDF ultrasonic hydrophone probes is described. Current implementation of the technique allows determination of hydrophone frequency response between 2 and 100 MHz and is based on the comparison of theoretically predicted and experimentally determined pressure-time waveforms produced by a focused, circular source. The simulation model was derived from the time domain algorithm that solves the nonlinear KZK (Khokhlov-Zabolotskaya-Kuznetsov) equation describing acoustic wave propagation. The calibration technique data were experimentally verified using independent calibration procedures in the frequency range from 2 to 40 MHz using a combined time delay spectrometry and reciprocity approach or calibration data provided by the National Physical Laboratory (NPL), UK. The results of verification indicated good agreement between the results obtained using KZK and the above-mentioned independent calibration techniques from 2 to 40 MHz, with a maximum discrepancy of 18% at 30 MHz. The frequency responses obtained using different hydrophone designs, including several membrane and needle probes, are presented, and it is shown that the technique developed provides a desirable tool for independent verification of primary calibration techniques such as those based on optical interferometry. Fundamental limitations of the presented calibration method are also examined.
NASA Astrophysics Data System (ADS)
McCann, C.; Repasky, K. S.; Morin, M.; Lawrence, R. L.; Powell, S. L.
2016-12-01
Compact, cost-effective, flight-based hyperspectral imaging systems can provide scientifically relevant data over large areas for a variety of applications such as ecosystem studies, precision agriculture, and land management. To fully realize this capability, unsupervised classification techniques based on radiometrically-calibrated data that cluster based on biophysical similarity rather than simply spectral similarity are needed. An automated technique to produce high-resolution, large-area, radiometrically-calibrated hyperspectral data sets based on the Landsat surface reflectance data product as a calibration target was developed and applied to three subsequent years of data covering approximately 1850 hectares. The radiometrically-calibrated data allow inter-comparison of the temporal series. Advantages of the radiometric calibration technique include the need for minimal site access, no ancillary instrumentation, and automated processing. Fitting the reflectance spectra of each pixel using a set of biophysically relevant basis functions reduces the data from 80 spectral bands to 9 parameters, providing noise reduction and data compression. Examination of histograms of these parameters allows for determination of natural splitting into biophysically similar clusters. This method creates clusters that are similar in terms of biophysical parameters, not simply spectral proximity. Furthermore, this method can be applied to other data sets, such as urban scenes, by developing other physically meaningful basis functions. The ability to use hyperspectral imaging for a variety of important applications requires the development of data processing techniques that can be automated. The radiometric calibration combined with the histogram-based unsupervised classification technique presented here provides one potential avenue for managing the big data associated with hyperspectral imaging.
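The basis-function reduction from 80 bands to a few parameters can be sketched as an ordinary least-squares projection; the basis functions below are arbitrary placeholders, not the biophysically motivated set used in the study:

```python
import numpy as np

# Hedged sketch: reduce a per-pixel reflectance spectrum to a few coefficients
# by least-squares projection onto basis functions (placeholder bases here).

def fit_basis(spectrum, bases):
    """Return coefficients c minimizing ||bases @ c - spectrum||^2."""
    coeffs, *_ = np.linalg.lstsq(bases, spectrum, rcond=None)
    return coeffs

wavelengths = np.linspace(0.4, 0.9, 80)          # 80 spectral bands (microns)
bases = np.column_stack([np.ones_like(wavelengths),
                         wavelengths,
                         np.exp(-wavelengths)])  # 3 placeholder basis functions
true_c = np.array([0.2, 0.5, -0.1])
spectrum = bases @ true_c                        # noiseless synthetic spectrum
print(fit_basis(spectrum, bases))                # recovers [0.2, 0.5, -0.1]
```

Histogramming such per-pixel coefficients over a scene is then what reveals the natural splits into clusters.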
Improvement in QEPAS system utilizing a second harmonic based wavelength calibration technique
NASA Astrophysics Data System (ADS)
Zhang, Qinduan; Chang, Jun; Wang, Fupeng; Wang, Zongliang; Xie, Yulei; Gong, Weihua
2018-05-01
A simple laser wavelength calibration technique, based on the second harmonic signal, is demonstrated in this paper to improve the performance of a quartz-enhanced photoacoustic spectroscopy (QEPAS) gas sensing system, e.g. its signal-to-noise ratio (SNR), detection limit, and long-term stability. A constant current corresponding to the gas absorption line, combined with an f/2 sinusoidal modulation signal, is used to drive the laser (constant driving mode), and a software-based real-time wavelength calibration technique is developed to eliminate the wavelength drift due to ambient fluctuations. Compared to conventional wavelength modulation spectroscopy (WMS), this method allows a lower filtering bandwidth and an averaging algorithm to be applied to the QEPAS system, improving the SNR and detection limit. In addition, the real-time wavelength calibration technique guarantees that the laser output is modulated steadily at the gas absorption line. Water vapor is chosen as the target gas to evaluate the performance against the constant driving mode and a conventional WMS system. The water vapor sensor was made insensitive to incoherent external acoustic noise by the numerical averaging technique. As a result, the SNR of the wavelength-calibration-based system is 12.87 times that of the conventional WMS system. The new system achieved a better linear response (R² = 0.9995) over the concentration range from 300 to 2000 ppmv and a minimum detection limit (MDL) of 630 ppbv.
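A toy version of the real-time wavelength calibration idea, assuming the 2f signal peaks at the absorption line center (the scan values and function are illustrative, not the authors' implementation):

```python
# Hedged sketch: locate the second-harmonic (2f) signal peak across a slow
# scan of the laser drive current, giving the set point that keeps the laser
# centered on the gas absorption line despite ambient drift.

def line_center_current(scan_currents_mA, second_harmonic):
    """Drive current at which the 2f signal peaks, i.e. the absorption line."""
    peak_idx = max(range(len(second_harmonic)), key=second_harmonic.__getitem__)
    return scan_currents_mA[peak_idx]

currents = [58.0, 59.0, 60.0, 61.0, 62.0]   # synthetic scan (mA)
sig_2f = [0.1, 0.4, 0.9, 0.5, 0.2]          # synthetic 2f amplitudes
print(line_center_current(currents, sig_2f))  # 60.0
```

A real system would refine the peak location by curve fitting and update the constant drive current periodically during operation.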
Radiometrically accurate scene-based nonuniformity correction for array sensors.
Ratliff, Bradley M; Hayat, Majeed M; Tyo, J Scott
2003-10-01
A novel radiometrically accurate scene-based nonuniformity correction (NUC) algorithm is described. The technique combines absolute calibration with a recently reported algebraic scene-based NUC algorithm. The technique is based on the following principle: First, detectors that are along the perimeter of the focal-plane array are absolutely calibrated; then the calibration is transported to the remaining uncalibrated interior detectors through the application of the algebraic scene-based algorithm, which utilizes pairs of image frames exhibiting arbitrary global motion. The key advantage of this technique is that it can obtain radiometric accuracy during NUC without disrupting camera operation. Accurate estimates of the bias nonuniformity can be achieved with relatively few frames, which can be fewer than ten frame pairs. Advantages of this technique are discussed, and a thorough performance analysis is presented with use of simulated and real infrared imagery.
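The transport of calibration from perimeter detectors to interior ones can be illustrated in one dimension; this sketch assumes an exact one-pixel global shift and noiseless frames, far simpler than the algorithm's general arbitrary-motion case:

```python
# Simplified 1-D sketch of the algebraic idea: with a one-pixel global shift
# between two frames, detector i in frame_a sees the same scene sample that
# detector i-1 saw in frame_b, so bias differences can be chained from an
# absolutely calibrated edge (perimeter) detector inward.

def estimate_biases(frame_a, frame_b, edge_bias):
    """frame_b views frame_a's scene shifted by one pixel; chain from pixel 0."""
    biases = [edge_bias]
    for i in range(1, len(frame_a)):
        # readings of the shared scene sample differ only by the bias difference
        biases.append(biases[-1] + frame_a[i] - frame_b[i - 1])
    return biases

scene = [10.0, 12.0, 11.0, 15.0, 13.0]
true_bias = [0.0, 2.0, -1.0, 0.5]
frame_a = [scene[i] + true_bias[i] for i in range(4)]      # detectors see scene[0..3]
frame_b = [scene[i + 1] + true_bias[i] for i in range(4)]  # scene shifted one pixel
print(estimate_biases(frame_a, frame_b, edge_bias=0.0))    # recovers true_bias
```

With noisy data the differences would be averaged over many frame pairs, which is why accuracy improves with even a handful of pairs.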
Vicarious calibrations of HICO data acquired from the International Space Station.
Gao, Bo-Cai; Li, Rong-Rong; Lucke, Robert L; Davis, Curtiss O; Bevilacqua, Richard M; Korwan, Daniel R; Montes, Marcos J; Bowles, Jeffrey H; Corson, Michael R
2012-05-10
The Hyperspectral Imager for the Coastal Ocean (HICO) presently onboard the International Space Station (ISS) is an imaging spectrometer designed for remote sensing of coastal waters. The instrument is not equipped with any onboard spectral and radiometric calibration devices. Here we describe vicarious calibration techniques that have been used in converting the HICO raw digital numbers to calibrated radiances. The spectral calibration is based on matching atmospheric water vapor and oxygen absorption bands and extraterrestrial solar lines. The radiometric calibration is based on comparisons between HICO and the EOS/MODIS data measured over homogeneous desert areas and on spectral reflectance properties of coral reefs and water clouds. Improvements to the present vicarious calibration techniques are possible as we gain more in-depth understanding of the HICO laboratory calibration data and the ISS HICO data in the future.
Calibration of a polarimetric imaging SAR
NASA Technical Reports Server (NTRS)
Sarabandi, K.; Pierce, L. E.; Ulaby, F. T.
1991-01-01
Calibration of polarimetric imaging Synthetic Aperture Radars (SARs) using point calibration targets is discussed. The four-port network calibration technique is used to describe the radar error model. The polarimetric ambiguity function of the SAR is then found using a single point target, namely a trihedral corner reflector. Based on this, an estimate for the backscattering coefficient of the terrain is found by a deconvolution process. A radar image taken by the JPL Airborne SAR (AIRSAR) is used for verification of the deconvolution calibration method. The calibrated responses of point targets in the image are compared both with theory and with the POLCAL technique. Also, the responses of distributed targets are compared using the deconvolution and POLCAL techniques.
Vaccarono, Mattia; Bechini, Renzo; Chandrasekar, Chandra V.; ...
2016-11-08
The stability of weather radar calibration is a mandatory aspect for quantitative applications, such as rainfall estimation, short-term weather prediction and initialization of numerical atmospheric and hydrological models. Over the years, calibration monitoring techniques based on external sources have been developed, specifically calibration using the Sun and calibration based on ground clutter returns. In this paper, these two techniques are integrated and complemented with a self-consistency procedure and an intercalibration technique. The aim of the integrated approach is to implement a robust method for online monitoring, able to detect significant changes in the radar calibration. The physical consistency of polarimetric radar observables is exploited using the self-consistency approach, based on the expected correspondence between dual-polarization power and phase measurements in rain. This technique allows a reference absolute value to be provided for the radar calibration, from which any deviations may be detected using the other procedures. In particular, the ground clutter calibration is implemented on both polarization channels (horizontal and vertical) for each radar scan, allowing the polarimetric variables to be monitored and hardware failures to be promptly recognized. The Sun calibration allows the calibration and sensitivity of the radar receiver to be monitored, in addition to the antenna pointing accuracy. It is applied using observations collected during the standard operational scans but requires long integration times (several days) in order to accumulate a sufficient amount of useful data. Finally, an intercalibration technique is developed and performed to compare colocated measurements collected in rain by two radars in overlapping regions. The integrated approach is applied to the C-band weather radar network in northwestern Italy during July–October 2014.
The set of methods considered appears suitable to establish an online tool to monitor the stability of the radar calibration with an accuracy of about 2 dB. In conclusion, this is considered adequate to automatically detect any unexpected change in the radar system requiring further data analysis or on-site measurements.
A new polarimetric active radar calibrator and calibration technique
NASA Astrophysics Data System (ADS)
Tang, Jianguo; Xu, Xiaojian
2015-10-01
Polarimetric active radar calibrator (PARC) is one of the most important calibrators with high radar cross section (RCS) for polarimetry measurement. In this paper, a new double-antenna polarimetric active radar calibrator (DPARC) is proposed, which consists of two rotatable antennas with wideband electromagnetic polarization filters (EMPF) to achieve lower cross-polarization for transmission and reception. With two antennas which are rotatable around the radar line of sight (LOS), the DPARC provides a variety of standard polarimetric scattering matrices (PSM) through the rotation combination of receiving and transmitting polarization, which are useful for polarimetric calibration in different applications. In addition, a technique based on Fourier analysis is proposed for calibration processing. Numerical simulation results are presented to demonstrate the superior performance of the proposed DPARC and processing technique.
Microscope self-calibration based on micro laser line imaging and soft computing algorithms
NASA Astrophysics Data System (ADS)
Apolinar Muñoz Rodríguez, J.
2018-06-01
A technique to perform microscope self-calibration via micro laser line and soft computing algorithms is presented. In this technique, the microscope vision parameters are computed by means of soft computing algorithms based on laser line projection. To implement the self-calibration, a microscope vision system is constructed by means of a CCD camera and a 38 μm laser line. From this arrangement, the microscope vision parameters are represented via Bezier approximation networks, which are accomplished through the laser line position. In this procedure, a genetic algorithm determines the microscope vision parameters by means of laser line imaging. Also, the approximation networks compute the three-dimensional vision by means of the laser line position. Additionally, the soft computing algorithms re-calibrate the vision parameters when the microscope vision system is modified during the vision task. The proposed self-calibration improves accuracy of the traditional microscope calibration, which is accomplished via external references to the microscope system. The capability of the self-calibration based on soft computing algorithms is determined by means of the calibration accuracy and the micro-scale measurement error. This contribution is corroborated by an evaluation based on the accuracy of the traditional microscope calibration.
Simple laser vision sensor calibration for surface profiling applications
NASA Astrophysics Data System (ADS)
Abu-Nabah, Bassam A.; ElSoussi, Adnane O.; Al Alami, Abed ElRahman K.
2016-09-01
Due to the relatively large structures in the Oil and Gas industry, original equipment manufacturers (OEMs) have been implementing custom-designed laser vision sensor (LVS) surface profiling systems as part of quality control in their manufacturing processes. The rough manufacturing environment and the continuous movement and misalignment of these custom-designed tools adversely affect the accuracy of laser-based vision surface profiling applications. Accordingly, Oil and Gas businesses have been raising the demand from the OEMs to implement practical and robust LVS calibration techniques prior to running any visual inspections. This effort introduces an LVS calibration technique representing a simplified version of two known calibration techniques, which are commonly implemented to obtain a calibrated LVS system for surface profiling applications. Both calibration techniques are implemented virtually and experimentally to scan simulated and three-dimensional (3D) printed features of known profiles, respectively. Scanned data is transformed from the camera frame to points in the world coordinate system and compared with the input profiles to validate the introduced calibration technique capability against the more complex approach and preliminarily assess the measurement technique for weld profiling applications. Moreover, the sensitivity to stand-off distances is analyzed to illustrate the practicality of the presented technique.
Calibration of cathode strip gains in multiwire drift chambers of the GlueX experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berdnikov, V. V.; Somov, S. V.; Pentchev, L.
A technique for calibrating cathode strip gains in the multiwire drift chambers of the GlueX experiment is described. The accuracy of the technique is estimated using Monte Carlo generated data with known gain coefficients in the strip signal channels. One of the four detector sections has been calibrated using cosmic rays. Results of calibrating the drift chambers in the accelerator beam, after their inclusion in the GlueX experimental setup, are presented.
Underwater 3D Surface Measurement Using Fringe Projection Based Scanning Devices
Bräuer-Burchardt, Christian; Heinze, Matthias; Schmidt, Ingo; Kühmstedt, Peter; Notni, Gunther
2015-01-01
In this work we show the principle of optical 3D surface measurements based on the fringe projection technique for underwater applications. The challenges of underwater use of this technique are shown and discussed in comparison with the classical application. We describe an extended camera model which takes refraction effects into account as well as a proposal of an effective, low-effort calibration procedure for underwater optical stereo scanners. This calibration technique combines a classical air calibration based on the pinhole model with ray-based modeling and requires only a few underwater recordings of an object of known length and a planar surface. We demonstrate a new underwater 3D scanning device based on the fringe projection technique. It has a weight of about 10 kg and the maximal water depth for application of the scanner is 40 m. It covers an underwater measurement volume of 250 mm × 200 mm × 120 mm. The surface of the measurement objects is captured with a lateral resolution of 150 μm in a third of a second. Calibration evaluation results are presented and examples of first underwater measurements are given. PMID:26703624
A calibration method for fringe reflection technique based on the analytical phase-slope description
NASA Astrophysics Data System (ADS)
Wu, Yuxiang; Yue, Huimin; Pan, Zhipeng; Liu, Yong
2018-05-01
The fringe reflection technique (FRT) has been one of the most popular methods for measuring the shape of specular surfaces in recent years. The existing system calibration methods of FRT usually contain two parts, which are camera calibration and geometric calibration. In geometric calibration, the liquid crystal display (LCD) screen position calibration is one of the most difficult steps among all the calibration procedures, and its accuracy is affected by factors such as the imaging aberration, the plane mirror flatness, and the LCD screen pixel size accuracy. In this paper, based on the deduction of an FRT analytical phase-slope description, we present a novel calibration method with no requirement to calibrate the position of the LCD screen. On the other hand, the system can be arbitrarily arranged, and the imaging system can be either telecentric or non-telecentric. In our experiment measuring a sphere mirror with a 5000 mm radius, the proposed calibration method achieves a 2.5 times smaller measurement error than the geometric calibration method. In the wafer surface measurement experiment, the result obtained with the proposed calibration method is closer to the interferometer result than that of the geometric calibration method.
Dinç, Erdal; Ozdemir, Abdil
2005-01-01
A multivariate chromatographic calibration technique was developed for the quantitative analysis of binary mixtures of enalapril maleate (EA) and hydrochlorothiazide (HCT) in tablets in the presence of losartan potassium (LST). The mathematical algorithm of the multivariate chromatographic calibration technique is based on linear regression equations constructed from the relationship between concentration and peak area at a five-wavelength set. The algorithm of this mathematical calibration model, which has a simple mathematical content, is briefly described. This approach is a powerful mathematical tool for optimal chromatographic multivariate calibration and for eliminating fluctuations arising from instrumental and experimental conditions. The multivariate chromatographic calibration reduces the multivariate linear regression functions to a univariate data set. The validation of the model was carried out by analyzing various synthetic binary mixtures and by using the standard addition technique. The developed calibration technique was applied to the analysis of real pharmaceutical tablets containing EA and HCT. The results obtained were compared with those obtained by a classical HPLC method. It was observed that the proposed multivariate chromatographic calibration gives better results than classical HPLC.
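A minimal numeric sketch of the reduction to univariate calibration lines (synthetic peak areas at two wavelengths rather than the five-wavelength set, and our own function names):

```python
import numpy as np

# Hedged sketch of the idea: fit a peak-area-vs-concentration regression line
# at each detection wavelength, then reduce to one concentration estimate by
# averaging the per-wavelength predictions (all values are illustrative).

def fit_lines(concentrations, peak_areas):
    """peak_areas: shape (n_standards, n_wavelengths). Returns (slopes, intercepts)."""
    c = np.asarray(concentrations)
    slopes, intercepts = [], []
    for areas in np.asarray(peak_areas).T:
        m, b = np.polyfit(c, areas, 1)
        slopes.append(m)
        intercepts.append(b)
    return np.array(slopes), np.array(intercepts)

def predict(area_row, slopes, intercepts):
    """Invert each calibration line and average across wavelengths."""
    return float(np.mean((np.asarray(area_row) - intercepts) / slopes))

conc = [1.0, 2.0, 4.0, 8.0]                                      # standards
areas = [[10.0, 5.0], [20.0, 10.0], [40.0, 20.0], [80.0, 40.0]]  # 2 wavelengths
m, b = fit_lines(conc, areas)
print(predict([30.0, 15.0], m, b))   # ~3.0
```

Averaging across wavelengths is what suppresses instrumental fluctuations affecting any single detection channel.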
Calibration of a horizontally acting force transducer with the use of a simple pendulum
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taberner, Andrew J.; Hunter, Ian W.; BioInstrumentation Laboratory, Department of Mechanical Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, Massachusetts 02139 and Institute for Soldier Nanotechnologies, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, Massachusetts 02139
This article details the implementation of a method for calibrating horizontally measuring force transducers using a pendulum. The technique exploits the sinusoidal inertial force generated by a suspended mass as it pendulates about a point on the measurement axis of the force transducer. The method is used to calibrate a reconfigurable, custom-made force transducer based on exchangeable cantilevers with stiffness ranging from 10 to 10^4 N/m. In this implementation, the relative combined standard uncertainty in the calibrated transducer stiffness is 0.41%, while the repeatability of the calibration technique is 0.46%.
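Under a small-angle approximation, the sinusoidal force a pendulating mass applies along the measurement axis can be estimated as below; this is our simplified model for illustration, not the authors' analysis:

```python
import math

# Illustrative small-angle sketch: a mass pendulating about a point on the
# transducer axis applies a near-sinusoidal horizontal force whose amplitude
# is set by the mass and swing angle, giving a calculable calibration signal.

def pendulum_force(mass_kg, angle_rad, t, length_m, g=9.81):
    """Approximate horizontal force (N) at time t for small swing angles."""
    omega = math.sqrt(g / length_m)          # pendulum angular frequency
    theta = angle_rad * math.cos(omega * t)  # swing angle at time t
    return mass_kg * g * theta               # small-angle horizontal component

# Peak force for a 100 g mass swinging 0.05 rad on a 0.25 m pendulum:
print(pendulum_force(0.1, 0.05, 0.0, 0.25))
```

Comparing the transducer's recorded sinusoid against this computed force would yield the calibration factor.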
In-situ technique for checking the calibration of platinum resistance thermometers
NASA Technical Reports Server (NTRS)
Daryabeigi, Kamran; Dillon-Townes, Lawrence A.
1987-01-01
The applicability of the self-heating technique for checking the calibration of platinum resistance thermometers located inside wind tunnels was investigated. This technique is based on a steady state measurement of resistance increase versus joule heating. This method was found to be undesirable, mainly because of the fluctuations of flow variables during any wind tunnel testing.
Automated response matching for organic scintillation detector arrays
NASA Astrophysics Data System (ADS)
Aspinall, M. D.; Joyce, M. J.; Cave, F. D.; Plenteda, R.; Tomanin, A.
2017-07-01
This paper identifies a digitizer technology with unique features that facilitates feedback control for the realization of a software-based technique for automatically calibrating detector responses. Three such auto-calibration techniques have been developed and are described along with an explanation of the main configuration settings and potential pitfalls. Automating this process increases repeatability, simplifies user operation, enables remote and periodic system calibration where consistency across detectors' responses are critical.
NASA Astrophysics Data System (ADS)
Jumadi, Nur Anida; Beng, Gan Kok; Ali, Mohd Alauddin Mohd; Zahedi, Edmond; Morsin, Marlia
2017-09-01
The implementation of a surface-based Monte Carlo simulation technique for estimating the oxygen saturation (SaO2) calibration curve is demonstrated in this paper. Generally, the calibration curve is estimated either empirically, using animals as experimental subjects, or derived from mathematical equations. However, determining the calibration curve using animals is time consuming and requires expertise to conduct the experiment. Alternatively, optical simulation techniques have been used widely in the biomedical optics field due to their capability to exhibit real tissue behavior. The mathematical relationship between optical density (OD) and optical density ratio (ODR) associated with SaO2 during systole and diastole is used as the basis for obtaining the theoretical calibration curve. The optical properties corresponding to systolic and diastolic behavior were applied to the tissue model to mimic the optical properties of the tissues. Based on the absorbed ray flux at the detectors, the OD and ODR were successfully calculated. The simulated optical density ratios at every 20% interval of SaO2 are presented, with a maximum error of 2.17% when compared with a previous numerical simulation technique (MC model). The findings reveal the potential of the proposed method for extended calibration curve studies using other wavelength pairs.
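The OD/ODR relationship underlying the calibration curve can be sketched as follows, with synthetic systolic/diastolic flux values standing in for the Monte Carlo outputs:

```python
import math

# Hedged sketch: compute the optical density (OD) at each wavelength from the
# systolic/diastolic detected flux, then the optical density ratio (ODR) that
# a calibration curve maps to SaO2 (flux values are illustrative, not simulated).

def optical_density(flux_diastole, flux_systole):
    """OD from the pulsatile change in detected flux."""
    return math.log10(flux_diastole / flux_systole)

def odr(od_red, od_infrared):
    """Optical density ratio between the two wavelengths."""
    return od_red / od_infrared

od_r = optical_density(1.00, 0.95)    # red channel (synthetic fluxes)
od_ir = optical_density(1.00, 0.97)   # infrared channel (synthetic fluxes)
print(odr(od_r, od_ir))
```

Repeating this for tissue models at known SaO2 values, 0% to 100% in steps, is what traces out the theoretical calibration curve.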
New technique for calibrating hydrocarbon gas flowmeters
NASA Technical Reports Server (NTRS)
Singh, J. J.; Puster, R. L.
1984-01-01
A technique for measuring calibration correction factors for hydrocarbon mass flowmeters is described. It is based on the Nernst theorem for matching the partial pressure of oxygen in the combustion products of the test hydrocarbon, burned in oxygen-enriched air, with that in normal air. It is applied to a widely used type of commercial thermal mass flowmeter for a number of hydrocarbons. The calibration correction factors measured using this technique are in good agreement with the values obtained by other independent procedures. The technique is successfully applied to the measurement of differences as low as one percent of the effective hydrocarbon content of the natural gas test samples.
Wind Tunnel Force Balance Calibration Study - Interim Results
NASA Technical Reports Server (NTRS)
Rhew, Ray D.
2012-01-01
Wind tunnel force balance calibration is performed using a variety of methods and does not have a directly traceable standard, such as the standards (weights, voltmeters) used in most calibration practices. These calibration methods and practices include, but are not limited to, the loading schedule, the load application hardware, manual and automatic systems, and re-leveling versus non-re-leveling. A study of the balance calibration techniques used by NASA was undertaken to develop metrics for reviewing and comparing results using sample calibrations. The study also includes balances of different designs, both single- and multi-piece. The calibration systems include manual and automatic systems provided by NASA and its vendors. The results to date are presented along with the techniques for comparing them. In addition, future planned calibrations and investigations based on the results are described.
NASA Astrophysics Data System (ADS)
Kowalewski, M. G.; Janz, S. J.
2015-02-01
Methods of absolute radiometric calibration of backscatter ultraviolet (BUV) satellite instruments are compared as part of an effort to minimize pre-launch calibration uncertainties. An internally illuminated integrating sphere source has been used for the Shuttle Solar BUV, Total Ozone Mapping Spectrometer, Ozone Monitoring Instrument, and Global Ozone Monitoring Experiment 2 using standardized procedures traceable to national standards. These sphere-based spectral responsivities agree to within the derived combined standard uncertainty of 1.87% relative to calibrations performed using an external diffuser illuminated by standard irradiance sources, the customary spectral radiance responsivity calibration method for BUV instruments. The combined standard uncertainty for these calibration techniques as implemented at the NASA Goddard Space Flight Center's Radiometric Calibration and Development Laboratory is shown to be less than 2% at 250 nm when using a single traceable calibration standard.
Hybrid dynamic radioactive particle tracking (RPT) calibration technique for multiphase flow systems
NASA Astrophysics Data System (ADS)
Khane, Vaibhav; Al-Dahhan, Muthanna H.
2017-04-01
The radioactive particle tracking (RPT) technique has been utilized to measure three-dimensional hydrodynamic parameters of multiphase flow systems. An analytical solution to the inverse problem of the RPT technique, i.e. finding the instantaneous tracer positions from the instantaneous counts received in the detectors, is not possible. Therefore, a calibration to obtain a counts-distance map is needed. There are major shortcomings in the conventional RPT calibration method that limit its applicability in practical applications. In this work, a novel dynamic RPT calibration technique was designed and developed to overcome these shortcomings. The dynamic RPT calibration technique was implemented around a test reactor, 1 foot in diameter and 1 foot in height, using Cobalt-60 as an isotope tracer particle. Two sets of experiments were carried out to test the capability of the novel dynamic RPT calibration. In the first set of experiments, a manual calibration apparatus was used to hold the tracer particle at known static locations. In the second set, the tracer particle was moved vertically downwards along a straight-line path in a controlled manner. The reconstructed tracer particle positions were compared with the actual known positions and the reconstruction errors were estimated. The results revealed that the dynamic RPT calibration technique is capable of identifying tracer particle positions with a reconstruction error between 1 and 5.9 mm for the conditions studied, which could be improved depending on various factors outlined here.
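As an illustration of the counts-distance map and the inverse problem, the following Python sketch (not from the paper) reconstructs a tracer position by matching measured counts to an idealized inverse-square detector response over a search grid. The detector positions, the response constant, and the grid resolution are all invented for the example; a real RPT calibration measures the response rather than assuming a model.

```python
# Hypothetical detector positions (m) around a small test column
detectors = [(0.3, 0.0, 0.1), (0.0, 0.3, 0.2), (-0.3, 0.0, 0.1), (0.0, -0.3, 0.2)]

def expected_counts(pos, k=1e4):
    """Counts-distance map: an idealized inverse-square response per detector."""
    return [k / ((pos[0] - d[0])**2 + (pos[1] - d[1])**2 + (pos[2] - d[2])**2)
            for d in detectors]

def reconstruct(counts, grid):
    """Pick the grid point whose predicted counts best match (least squares)."""
    def misfit(p):
        return sum((c - e)**2 for c, e in zip(counts, expected_counts(p)))
    return min(grid, key=misfit)

# 10 cm search grid and a true tracer position that lies on the grid
grid = [(x / 10, y / 10, z / 10)
        for x in range(-2, 3) for y in range(-2, 3) for z in range(0, 3)]
true_pos = (0.1, -0.1, 0.2)
est = reconstruct(expected_counts(true_pos), grid)
```

In practice the map is tabulated from measured counts at known tracer locations (the calibration step the paper automates), and the search uses a much finer grid or a continuous optimizer.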
Calibrating and training of neutron based NSA techniques with less SNM standards
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geist, William H; Swinhoe, Martyn T; Bracken, David S
2010-01-01
Accessing special nuclear material (SNM) standards for the calibration of and training on nondestructive assay (NDA) instruments has become increasingly difficult in light of enhanced safeguards and security regulations. Limited or nonexistent access to SNM has affected neutron based NDA techniques more than gamma ray techniques because the effects of multiplication require a range of masses to accurately measure the detector response. Neutron based NDA techniques can also be greatly affected by the matrix and impurity characteristics of the item. The safeguards community has been developing techniques for calibrating instrumentation and training personnel with dwindling numbers of SNM standards. Monte Carlo methods have become increasingly important for design and calibration of instrumentation. Monte Carlo techniques have the ability to accurately predict the detector response for passive techniques. The Monte Carlo results are usually benchmarked to neutron source measurements such as californium. For active techniques, the modeling becomes more difficult because of the interaction of the interrogation source with the detector and nuclear material; and the results cannot be simply benchmarked with neutron sources. A Monte Carlo calculated calibration curve for a training course in Indonesia of material test reactor (MTR) fuel elements assayed with an active well coincidence counter (AWCC) will be presented as an example. Performing training activities with reduced amounts of nuclear material makes it difficult to demonstrate how the multiplication and matrix properties of the item affects the detector response and limits the knowledge that can be obtained with hands-on training. A neutron pulse simulator (NPS) has been developed that can produce a pulse stream representative of a real pulse stream output from a detector measuring SNM.
The NPS has been used by the International Atomic Energy Agency (IAEA) for detector testing and training applications at the Agency due to the lack of appropriate SNM standards. This paper will address the effect of reduced access to SNM for calibration and training of neutron NDA applications along with the advantages and disadvantages of some solutions that do not use standards, such as the Monte Carlo techniques and the NPS.
NASA Technical Reports Server (NTRS)
Bhartia, P. K.; Taylor, S.; Mcpeters, R. D.; Wellemeyer, C.
1995-01-01
The concept of the well-known Langley plot technique, used for the calibration of ground-based instruments, has been generalized for application to satellite instruments. In polar regions, near summer solstice, the solar backscattered ultraviolet (SBUV) instrument on the Nimbus 7 satellite samples the same ozone field at widely different solar zenith angles. These measurements are compared to assess the long-term drift in the instrument calibration. Although the technique provides only a relative wavelength-to-wavelength calibration, it can be combined with existing techniques to determine the drift of the instrument at any wavelength. Using this technique, we have generated a 12-year data set of ozone vertical profiles from SBUV with an estimated accuracy of +/- 5% at 1 mbar and +/- 2% at 10 mbar (95% confidence) over 12 years. Since the method is insensitive to true changes in the atmospheric ozone profile, it can also be used to compare the calibrations of similar SBUV instruments launched without temporal overlap.
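The generalized technique above builds on the classic ground-based Langley plot, in which the log of the measured signal is regressed against airmass and extrapolated to zero airmass to recover the instrument's exoatmospheric response. A minimal Python sketch of that classic fit, using synthetic Beer-Lambert data (the values are invented for illustration), is:

```python
import math

def langley_extrapolate(airmasses, signals):
    """Fit ln(signal) = ln(S0) - tau*m by least squares; return (S0, tau)."""
    n = len(airmasses)
    y = [math.log(s) for s in signals]
    mx, my = sum(airmasses) / n, sum(y) / n
    slope = (sum((m - mx) * (v - my) for m, v in zip(airmasses, y))
             / sum((m - mx) ** 2 for m in airmasses))
    intercept = my - slope * mx
    return math.exp(intercept), -slope  # exoatmospheric signal S0, optical depth tau

# Synthetic measurements obeying Beer-Lambert with S0 = 1000 counts, tau = 0.3
ms = [1.0, 1.5, 2.0, 3.0, 4.0]
sig = [1000.0 * math.exp(-0.3 * m) for m in ms]
s0, tau = langley_extrapolate(ms, sig)
```

The satellite generalization exploits the wide range of solar zenith angles over a stable polar ozone field in place of the ground observer's varying airmass.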
Dither Gyro Scale Factor Calibration: GOES-16 Flight Experience
NASA Technical Reports Server (NTRS)
Reth, Alan D.; Freesland, Douglas C.; Krimchansky, Alexander
2018-01-01
This poster is a sequel to a paper presented at the 34th Annual AAS Guidance and Control Conference in 2011, which first introduced dither-based calibration of gyro scale factors. The dither approach uses very small excitations, avoiding the need to take instruments offline during gyro scale factor calibration. In 2017, the dither calibration technique was successfully used to estimate gyro scale factors on the GOES-16 satellite. On-orbit dither calibration results were compared to more traditional methods using large angle spacecraft slews about each gyro axis, requiring interruption of science. The results demonstrate that the dither technique can estimate gyro scale factors to better than 2000 ppm during normal science observations.
NASA Technical Reports Server (NTRS)
McCorkel, Joel; Thome, Kurtis; Lockwood, Ronald
2012-01-01
An inter-calibration method is developed to provide absolute radiometric calibration of narrow-swath imaging sensors with reference to non-coincident wide-swath sensors. The method predicts at-sensor radiance using non-coincident imagery from the reference sensor and knowledge of the spectral reflectance of the test site. The imagery of the reference sensor is restricted to acquisitions that provide similar view and solar illumination geometry to reduce uncertainties due to directional reflectance effects. Spectral reflectance of the test site is found with a simple iterative radiative transfer method using radiance values from a well-understood wide-swath sensor and spectral shape information based on historical ground-based measurements. At-sensor radiance is calculated for the narrow-swath sensor using this spectral reflectance and atmospheric parameters that are also based on historical in situ measurements. Results of the inter-calibration method show agreement at the 2-5 percent level in most spectral regions with the vicarious calibration technique relying on coincident ground-based measurements, referred to as the reflectance-based approach. While the variability of the inter-calibration method based on non-coincident image pairs is significantly larger, the results are consistent with techniques relying on in situ measurements. The method is also insensitive to spectral differences between the sensors because it transfers to surface spectral reflectance prior to predicting at-sensor radiance. The utility of this inter-calibration method is made clear by its flexibility to utilize image pairings with acquisition dates differing by more than 30 days, allowing frequent absolute calibration comparisons between wide- and narrow-swath sensors.
Dynamic photogrammetric calibration of industrial robots
NASA Astrophysics Data System (ADS)
Maas, Hans-Gerd
1997-07-01
Today's developments in industrial robots focus on aims such as gains in flexibility, improved interaction between robots, and reduction of down-times. A very important method for achieving these goals is off-line programming. In contrast to conventional teach-in robot programming techniques, where sequences of actions are defined step-by-step via remote control on the real object, off-line programming techniques design complete robot (inter-)action programs in a CAD/CAM environment. This places high requirements on the geometric accuracy of a robot. While the repeatability of robot poses in teach-in mode is often better than 0.1 mm, the absolute pose accuracy of industrial robots is usually much worse due to tolerances, eccentricities, elasticities, play, wear, load, temperature and insufficient knowledge of the model parameters for the transformation from poses into robot axis angles. This fact necessitates robot calibration techniques, including the formulation of a robot model describing the kinematics and dynamics of the robot, and a measurement technique to provide reference data. Digital photogrammetry, as an accurate, economical technique with realtime potential, offers itself for this purpose. The paper analyzes the requirements posed on a measurement technique by industrial robot calibration tasks. After an overview of measurement techniques used for robot calibration in the past, a photogrammetric robot calibration system based on off-the-shelf low-cost hardware components is shown and results of pilot studies are discussed. Besides aspects of accuracy, reliability and self-calibration in a fully automatic dynamic photogrammetric system, realtime capabilities are discussed. In the pilot studies, standard deviations of 0.05 - 0.25 mm in the three coordinate directions could be achieved over a robot work range of 1.7 m x 1.5 m x 1.0 m. The realtime capabilities of the technique make it possible to go beyond kinematic robot calibration and perform dynamic robot calibration, as well as photogrammetric on-line control of a robot in action.
Yurko, Joseph P.; Buongiorno, Jacopo; Youngblood, Robert
2015-05-28
System codes for simulation of safety performance of nuclear plants may contain parameters whose values are not known very accurately. New information from tests or operating experience is incorporated into safety codes by a process known as calibration, which reduces uncertainty in the output of the code and thereby improves its support for decision-making. The work reported here implements several improvements on classic calibration techniques afforded by modern analysis techniques. The key innovation has come from development of code surrogate model (or code emulator) construction and prediction algorithms. Use of a fast emulator makes the calibration processes used here with Markov Chain Monte Carlo (MCMC) sampling feasible. This study uses Gaussian Process (GP) based emulators, which have been used previously to emulate computer codes in the nuclear field. The present work describes the formulation of an emulator that incorporates GPs into a factor analysis-type or pattern recognition-type model. This "function factorization" Gaussian Process (FFGP) model allows overcoming limitations present in standard GP emulators, thereby improving both accuracy and speed of the emulator-based calibration process. Calibration of a friction-factor example using a Method of Manufactured Solution is performed to illustrate key properties of the FFGP based process.
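The emulator-based calibration workflow described above (GP emulation of an expensive code, then MCMC sampling against data) can be sketched in miniature. The snippet below is an assumption-laden toy, not the paper's FFGP method: the "code" is a hypothetical one-parameter friction-factor-like curve, the GP uses a plain RBF kernel with hand-picked hyperparameters, and a simple Metropolis sampler stands in for the study's calibration machinery.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(a, b, ls=0.1):
    """Squared-exponential kernel matrix between 1-D point sets a and b."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def code(c):
    """Stand-in for an expensive system code: one scalar output per parameter c."""
    return 0.079 / c  # hypothetical friction-factor-like curve

# 1) Build a GP emulator of code(c) from a handful of training runs.
c_train = np.linspace(0.1, 1.0, 15)
y_train = code(c_train)
K = rbf(c_train, c_train) + 1e-8 * np.eye(c_train.size)
alpha = np.linalg.solve(K, y_train)

def emulator(c):
    """Fast GP mean prediction replacing calls to the expensive code."""
    return float(rbf(np.atleast_1d(c), c_train) @ alpha)

# 2) Metropolis MCMC on c against a noisy observation, using only the emulator.
obs, sigma = code(0.4) + 0.001, 0.01

def log_post(c):
    if not 0.1 <= c <= 1.0:
        return -np.inf  # flat prior on [0.1, 1]
    return -0.5 * ((obs - emulator(c)) / sigma) ** 2

c_cur, lp_cur, chain = 0.5, log_post(0.5), []
for _ in range(4000):
    c_prop = c_cur + 0.05 * rng.standard_normal()
    lp_prop = log_post(c_prop)
    if np.log(rng.random()) < lp_prop - lp_cur:
        c_cur, lp_cur = c_prop, lp_prop
    chain.append(c_cur)
post_mean = float(np.mean(chain[1000:]))  # posterior mean after burn-in, near 0.4
```

The FFGP model in the paper replaces the single-output RBF emulator here with a factorized GP over multiple outputs; the MCMC step is conceptually the same.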
Requirements for Calibration in Noninvasive Glucose Monitoring by Raman Spectroscopy
Lipson, Jan; Bernhardt, Jeff; Block, Ueyn; Freeman, William R.; Hofmeister, Rudy; Hristakeva, Maya; Lenosky, Thomas; McNamara, Robert; Petrasek, Danny; Veltkamp, David; Waydo, Stephen
2009-01-01
Background: In the development of noninvasive glucose monitoring technology, it is highly desirable to derive a calibration that relies on neither person-dependent calibration information nor supplementary calibration points furnished by an existing invasive measurement technique (universal calibration). Method: By appropriate experimental design and associated analytical methods, we establish the sufficiency of multiple factors required to permit such a calibration. Factors considered are the discrimination of the measurement technique, stabilization of the experimental apparatus, physics- and physiology-based measurement techniques for normalization, the sufficiency of the size of the data set, and appropriate exit criteria to establish the predictive value of the algorithm. Results: For noninvasive glucose measurements using Raman spectroscopy, the sufficiency of the scale of data was demonstrated by adding new data into an existing calibration algorithm and requiring that (a) the prediction error be preserved or improved without significant re-optimization, (b) the complexity of the model for optimum estimation not rise with the addition of subjects, and (c) the estimation for persons whose data were removed entirely from the training set be no worse than the estimates for the remainder of the population. Using these criteria, we established empirical guidelines for the number of subjects (30) and skin sites (387) for a preliminary universal calibration. We obtained a median absolute relative difference for our entire data set of 30 mg/dl, with 92% of the data in the Clarke A and B ranges. Conclusions: Because Raman spectroscopy has high discrimination for glucose, a data set of practical dimensions appears to be sufficient for universal calibration.
Improvements based on reducing the variance of blood perfusion are expected to reduce the prediction errors substantially, and the inclusion of supplementary calibration points for the wearable device under development will be permissible and beneficial. PMID:20144354
Polarized Power Spectra from HERA-19 Commissioning Data: Effect of Calibration Techniques
NASA Astrophysics Data System (ADS)
Chichura, Paul; Igarashi, Amy; Fox Fortino, Austin; Kohn, Saul; Aguirre, James; HERA Collaboration
2018-01-01
Studying the Epoch of Reionization (EoR) is crucial for cosmologists, as it not only provides information about the first generation of stars and galaxies but may also help answer a number of fundamental astrophysical questions. The Hydrogen Epoch of Reionization Array (HERA) is doing this by examining emission from the 21 cm hyperfine transition of neutral hydrogen, which has been identified as a promising probe of reionization. Currently, HERA is still in its commissioning phase; 37 of the planned 350 dishes have been constructed, and analysis has begun on data received from the first 19 dishes built. With the creation of fully polarized power spectra, we investigate how different data calibration techniques affect the power spectra and whether ordering these techniques in different ways affects the results. These calibration techniques include using non-imaging redundant measurements within the array to calibrate, as well as more traditional approaches based on imaging and calibrating to a model of the sky. We explore the degree to which the different calibration schemes affect leakage of foreground emission into regions of Fourier space where the EoR power spectrum is expected to be measurable.
NASA Astrophysics Data System (ADS)
Abe, O. E.; Otero Villamide, X.; Paparini, C.; Radicella, S. M.; Nava, B.; Rodríguez-Bouza, M.
2017-04-01
Global Navigation Satellite Systems (GNSS) have become a powerful tool used in surveying and mapping, air and maritime navigation, ionospheric/space weather research and other applications. However, in some cases their maximum efficiency cannot be attained because of errors associated with the system measurements, caused mainly by the dispersive nature of the ionosphere. The ionosphere is commonly represented by the total number of electrons along the signal path, known as the Total Electron Content (TEC). There are many methods to estimate TEC, but their outputs are not uniform, which could be due to the way the biases inside the observables (measurements) are characterized, and sometimes to the influence of the mapping function. Errors in TEC estimation can lead to wrong conclusions, which could be critical in safety-of-life applications. This work investigated the performance of Ciraolo's and Gopi's GNSS-TEC calibration techniques during 5 geomagnetically quiet and disturbed days in October 2013, at grid points located at low and middle latitudes. The data used were obtained from GNSS ground-based receivers located at Borriana in Spain (40°N, 0°E; mid latitude) and Accra in Ghana (5.50°N, -0.20°E; low latitude). The calibrated TEC results are compared with TEC obtained from the European Geostationary Navigation Overlay Service Processing Set (EGNOS PS) TEC algorithm, which is taken as the reference data set. The TEC derived from Global Ionospheric Maps (GIM) through the International GNSS Service (IGS) was also examined at the same grid points. The results show that Ciraolo's calibration technique (based on carrier-phase measurements only) estimates TEC better at middle latitudes than Gopi's technique (based on code and carrier-phase measurements). At the same time, Gopi's calibration was found more reliable at low latitudes than Ciraolo's technique. In addition, the TEC derived from IGS GIM appears much more reliable in the middle-latitude than in the low-latitude region.
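The standard dual-frequency relation underlying GNSS-TEC estimation can be sketched as follows. This is the generic textbook formulation, not the Ciraolo or Gopi implementation, and the hypothetical pseudoranges here ignore the satellite/receiver inter-frequency biases that those calibration techniques are specifically designed to estimate and remove.

```python
import math

F1, F2 = 1575.42e6, 1227.60e6  # GPS L1/L2 carrier frequencies (Hz)

def slant_tec(p1_m, p2_m):
    """Raw slant TEC (electrons/m^2) from dual-frequency pseudoranges (m).
    Code biases, which a calibration technique must remove, are ignored."""
    return (p2_m - p1_m) / (40.3 * (1.0 / F2**2 - 1.0 / F1**2))

def to_tecu(tec):
    """Convert electrons/m^2 to TEC units (1 TECU = 1e16 electrons/m^2)."""
    return tec / 1e16

def vertical_tec(stec_tecu, elev_deg, hm=350e3, re=6371e3):
    """Thin-shell mapping from slant to vertical TEC at shell height hm."""
    s = math.cos(math.radians(elev_deg)) * re / (re + hm)
    return stec_tecu * math.sqrt(1.0 - s * s)

# Hypothetical observation: 5 m of differential ionospheric code delay
stec = to_tecu(slant_tec(2.00e7, 2.00e7 + 5.0))  # roughly 48 TECU
vtec = vertical_tec(stec, 45.0)
```

The choice of mapping function (the thin-shell model above is only one option) is one of the sources of disagreement between TEC estimation methods that the abstract mentions.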
Development of Long-term Datasets from Satellite BUV Instruments: The "Soft" Calibration Approach
NASA Technical Reports Server (NTRS)
Bhartia, Pawan K.; Taylor, Steven; Jaross, Glen
2005-01-01
The first BUV instrument was launched in April 1970 on NASA's Nimbus-4 satellite. More than a dozen instruments, broadly based on the same principle but using very different technologies, have been launched in the last 35 years on NASA, NOAA, Japanese and European satellites. In this paper we describe the basic principles of the "soft" calibration approach that we have successfully applied to the data from many of these instruments to produce a consistent long-term record of total ozone, ozone profiles and aerosols. This approach uses accurate radiative transfer models and assumed or known properties of the atmosphere in the ultraviolet to derive calibration parameters. Although the accuracy of the results inevitably depends on how well the assumed atmospheric properties are known, the technique has several built-in cross-checks that improve its robustness. To develop further confidence in the data, the soft calibration technique can be combined with data collected from a few well-calibrated ground-based instruments. We use examples from past and present BUV instruments to show how the method works.
Technique for Radiometer and Antenna Array Calibration with Two Antenna Noise Diodes
NASA Technical Reports Server (NTRS)
Srinivasan, Karthik; Limaye, Ashutosh; Laymon, Charles; Meyer, Paul
2011-01-01
This paper presents a new technique to calibrate a microwave radiometer and phased-array antenna system. This calibration technique uses a radiated noise source in addition to an injected noise source. The plane of reference for the calibration is the face of the antenna, so the technique can effectively calibrate out gain fluctuations in the active phased-array antenna. The paper gives the mathematical formulation of the technique and discusses the improvements the method brings over existing calibration techniques.
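A generic two-point (two-noise-state) radiometer calibration, of the kind that injected and radiated noise sources make possible, can be sketched as below. The counts and noise temperatures are invented for illustration and do not come from the paper; the paper's contribution is placing the reference plane at the antenna face, not this basic linear model.

```python
def fit_gain_offset(v1, t1, v2, t2):
    """Solve the linear radiometer model v = g*T + o from two known noise states."""
    g = (v2 - v1) / (t2 - t1)   # gain (counts per kelvin)
    o = v1 - g * t1             # offset (counts)
    return g, o

def brightness_temperature(v, g, o):
    """Invert the calibrated model to recover scene brightness temperature."""
    return (v - o) / g

# Hypothetical counts for two noise-diode states at known effective temperatures
g, o = fit_gain_offset(1200.0, 100.0, 2200.0, 300.0)  # counts at 100 K and 300 K
t_scene = brightness_temperature(1450.0, g, o)
```

Periodically refitting g and o against the two noise states is what tracks the gain fluctuations the abstract refers to.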
NASA Astrophysics Data System (ADS)
Chibunichev, A. G.; Kurkov, V. M.; Smirnov, A. V.; Govorov, A. V.; Mikhalin, V. A.
2016-10-01
Nowadays, aerial survey technology using systems based on unmanned aerial vehicles (UAVs) is becoming more popular. UAVs cannot physically carry professional aerial cameras, so consumer digital cameras are used instead. Such cameras usually have rolling, lamellar or global shutters. Quite often, manufacturers and users of such aerial systems do not perform camera calibration; in this case, self-calibration techniques are used. However, this approach is not supported by extensive theoretical and practical research. In this paper we compare the results of phototriangulation based on laboratory, test-field and self-calibration. For these investigations we use the Zaoksky test area as an experimental field, which provides a dense network of targeted and natural control points. Racurs PHOTOMOD and Agisoft PhotoScan software were used in the evaluation. The results of the investigations, conclusions and practical recommendations are presented in this article.
NASA Technical Reports Server (NTRS)
Kruse, Fred A.; Dwyer, John L.
1993-01-01
The Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) measures reflected light in 224 contiguous spectral bands in the 0.4 to 2.45 micron region of the electromagnetic spectrum. Numerous studies have used these data for mineralogic identification and mapping based on the presence of diagnostic spectral features. Quantitative mapping requires conversion of the AVIRIS data to physical units (usually reflectance) so that analysis results can be compared and validated with field and laboratory measurements. This study evaluated two different techniques for calibrating AVIRIS data to ground reflectance, an empirically based method and an atmospheric-model-based method, to determine their effects on quantitative scientific analyses. Expert system analysis and linear spectral unmixing were applied to both calibrated data sets to determine the effect of the calibration on mineral identification and quantitative mapping results. Comparison of the image-map results and image reflectance spectra indicates that the model-calibrated data can be used with automated mapping techniques to produce accurate maps showing the spatial distribution and abundance of surface mineralogy. This has positive implications for future operational mapping using AVIRIS or similar imaging spectrometer data sets without requiring a priori knowledge.
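Empirically based reflectance calibration of imaging spectrometer data is commonly implemented as the empirical line method: a per-band linear fit between image digital numbers and field-measured reflectance of a dark and a bright ground target. Whether this is the exact variant used in the study is an assumption; the sketch below uses invented values.

```python
def empirical_line(dn_dark, ref_dark, dn_bright, ref_bright):
    """Per-band gain/offset mapping digital numbers to surface reflectance,
    fitted from two field-measured calibration targets."""
    gain = (ref_bright - ref_dark) / (dn_bright - dn_dark)
    offset = ref_dark - gain * dn_dark
    return gain, offset

# Hypothetical single-band values: dark target (DN 500, 5% reflectance)
# and bright target (DN 3500, 65% reflectance)
gain, offset = empirical_line(500.0, 0.05, 3500.0, 0.65)
reflectance = gain * 2000.0 + offset  # convert an arbitrary pixel DN
```

The model-based alternative replaces the two ground targets with a radiative transfer calculation of the atmospheric path, which is why it needs no a priori field measurements.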
NASA Astrophysics Data System (ADS)
Ding, Xiang; Li, Fei; Zhang, Jiyan; Liu, Wenli
2016-10-01
Raman spectrometers are usually calibrated periodically to ensure the measurement accuracy of the Raman shift. A combination of a monocrystalline silicon chip and a low-pressure discharge lamp is proposed as a candidate reference standard for the Raman shift. A high-precision calibration technique is developed to accurately determine the standard value of silicon's Raman shift around 520 cm-1. The technique is described and illustrated by measuring a silicon chip against three atomic spectral lines of a neon lamp. A commercial Raman spectrometer is employed and its Raman shift error characteristics are investigated. Error sources are evaluated based on theoretical analysis and experiments, including the sample factor, the instrumental factor, the laser factor and random factors. Experimental results show that the expanded uncertainty of silicon's Raman shift around 520 cm-1 can achieve 0.3 cm-1 (k=2), which is more accurate than most currently used reference materials. The results are validated by comparison measurements between three Raman spectrometers. It is proved that the technique can remarkably enhance the accuracy of the Raman shift, making it possible to use the silicon chip and the lamp to calibrate Raman spectrometers.
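The core arithmetic of calibrating a wavelength axis against lamp lines and converting to Raman shift can be sketched as follows. The detector pixel positions are invented for the example (the assumed true mapping is 500 nm + 0.1 nm/pixel, and a simple linear fit is assumed), while the neon wavelengths and the shift formula are standard.

```python
def calibrate_axis(pixels, known_nm):
    """Linear pixel-to-wavelength fit from atomic reference lines (e.g., a neon lamp)."""
    n = len(pixels)
    mx, my = sum(pixels) / n, sum(known_nm) / n
    a = (sum((p - mx) * (w - my) for p, w in zip(pixels, known_nm))
         / sum((p - mx) ** 2 for p in pixels))
    return a, my - a * mx  # nm per pixel, intercept

def raman_shift_cm1(lam_laser_nm, lam_nm):
    """Raman shift in cm^-1 from excitation and scattered wavelengths in nm."""
    return 1e7 / lam_laser_nm - 1e7 / lam_nm

# Hypothetical pixel positions observed for three neon emission lines
a, b = calibrate_axis([400.56, 852.49, 1402.25], [540.056, 585.249, 640.225])

# Silicon peak observed at a hypothetical pixel, with 532 nm excitation
shift = raman_shift_cm1(532.0, a * 471.6 + b)  # close to silicon's ~520.7 cm^-1 line
```

Real spectrometer axes are usually fitted with a low-order polynomial rather than a straight line, and the uncertainty budget in the paper accounts for exactly the residuals such a fit leaves behind.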
Link calibration against receiver calibration: an assessment of GPS time transfer uncertainties
NASA Astrophysics Data System (ADS)
Rovera, G. D.; Torre, J.-M.; Sherwood, R.; Abgrall, M.; Courde, C.; Laas-Bourez, M.; Uhrich, P.
2014-10-01
We present a direct comparison between two different techniques for the relative calibration of time transfer between remote time scales when using the signals transmitted by the Global Positioning System (GPS). Relative calibration estimates the delay of equipment or the delay of a time transfer link with respect to reference equipment. It is based on the circulation of some travelling GPS equipment between the stations in the network, against which the local equipment is measured. Two techniques can be considered: first a station calibration by the computation of the hardware delays of the local GPS equipment; second the computation of a global hardware delay offset for the time transfer between the reference points of two remote time scales. This last technique is called a ‘link’ calibration, with respect to the other one, which is a ‘receiver’ calibration. The two techniques require different measurements on site, which change the uncertainty budgets, and we discuss this and related issues. We report on one calibration campaign organized during Autumn 2013 between Observatoire de Paris (OP), Paris, France, Observatoire de la Côte d'Azur (OCA), Calern, France, and NERC Space Geodesy Facility (SGF), Herstmonceux, United Kingdom. The travelling equipment comprised two GPS receivers of different types, along with the required signal generator and distribution amplifier, and one time interval counter. We show the different ways to compute uncertainty budgets, leading to improvement factors of 1.2 to 1.5 on the hardware delay uncertainties when comparing the relative link calibration to the relative receiver calibration.
NASA Astrophysics Data System (ADS)
Garavaglia, F.; Seyve, E.; Gottardi, F.; Le Lay, M.; Gailhard, J.; Garçon, R.
2014-12-01
MORDOR is a conceptual hydrological model extensively used in Électricité de France (EDF, the French electric utility company) operational applications: (i) hydrological forecasting, (ii) flood risk assessment, (iii) water balance and (iv) climate change studies. MORDOR is a lumped, reservoir-type, elevation-based model with hourly or daily areal rainfall and air temperature as the driving input data. The principal hydrological processes represented are evapotranspiration, direct and indirect runoff, groundwater, snow accumulation and melt, and routing. The model has been used intensively at EDF for more than 20 years, in particular for modeling French mountainous watersheds. For parameter calibration we propose and test alternative multi-criteria techniques based on two specific approaches: automatic calibration using single-objective functions and a priori parameter calibration founded on hydrological watershed features. The automatic calibration approach uses single-objective functions, based on the Kling-Gupta efficiency, to quantify the agreement between simulated and observed runoff, focusing on four different runoff samples: (i) the time series sample, (ii) the annual hydrological regime, (iii) monthly cumulative distribution functions and (iv) recession sequences. The primary purpose of this study is to analyze the definition and sensitivity of MORDOR parameters by testing different calibration techniques in order to: (i) simplify the model structure, (ii) increase the calibration-validation performance of the model and (iii) reduce the equifinality problem of the calibration process. We propose an alternative calibration strategy that reaches these goals. The analysis is illustrated by calibrating the MORDOR model to daily data for 50 watersheds located in French mountainous regions.
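A single-objective function based on the Kling-Gupta efficiency, as used in the automatic calibration approach, can be computed as below. This is the standard KGE definition (correlation, variability ratio, bias ratio), not EDF's exact implementation, and the runoff series are invented.

```python
import math

def kge(sim, obs):
    """Kling-Gupta efficiency: 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2)."""
    n = len(sim)
    ms, mo = sum(sim) / n, sum(obs) / n
    ss = math.sqrt(sum((x - ms) ** 2 for x in sim) / n)
    so = math.sqrt(sum((x - mo) ** 2 for x in obs) / n)
    r = sum((a - ms) * (b - mo) for a, b in zip(sim, obs)) / (n * ss * so)
    alpha, beta = ss / so, ms / mo  # variability ratio, bias ratio
    return 1 - math.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

# Invented daily runoff (m^3/s) and a simulation with a 10% multiplicative bias
obs = [2.0, 3.0, 5.0, 4.0, 3.0]
sim = [1.1 * x for x in obs]
score = kge(sim, obs)  # penalized for bias even though correlation is perfect
```

An automatic calibration maximizes such a score over the model's parameter space, once per runoff sample (time series, regime, distribution, recessions) in the multi-criteria scheme described above.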
Calibration of CR-39-based thoron progeny device.
Fábián, F; Csordás, A; Shahrokhi, A; Somlai, J; Kovács, T
2014-07-01
Radon isotopes and their progenies play a proven and significant role in respiratory tumour formation. In most cases, the radiological effect of one of the radon isotopes (thoron) and its progenies has been neglected, together with its measurement techniques; however, recent surveys have shown that thoron is to be expected in dwellings and workplaces in Europe. Detectors based on different track-detector measurement technologies have recently become widespread for measuring thoron progenies; however, their calibration is not yet fully elaborated. This study deals with the calibration of the track-detector measurement method suitable for measuring thoron progenies, using devices and measurement techniques capable of measuring several progenies (Pylon AB5 and WLx, Sarad EQF 3220). The calibration factor values for the thoron progeny monitors, the measurement uncertainty, reproducibility and other parameters were determined using the calibration chamber. In the future, the effects of different parameters (aerosol distribution, etc.) will be determined.
Effects of line fiducial parameters and beamforming on ultrasound calibration
Ameri, Golafsoun; Baxter, John S. H.; McLeod, A. Jonathan; Peters, Terry M.; Chen, Elvis C. S.
2017-01-01
Ultrasound (US)-guided interventions are often enhanced via integration with an augmented reality environment, a necessary component of which is US calibration. Calibration requires the segmentation of fiducials, i.e., a phantom, in US images. Fiducial localization error (FLE) can decrease US calibration accuracy, which fundamentally affects the total accuracy of the interventional guidance system. Here, we investigate the effects of US image reconstruction techniques as well as phantom material and geometry on US calibration. It was shown that the FLE was reduced by 29% with synthetic transmit aperture imaging compared with conventional B-mode imaging in a Z-bar calibration, resulting in a 10% reduction of calibration error. In addition, an evaluation of a variety of calibration phantoms with different geometrical and material properties was performed. The phantoms included braided wire, plastic straws, and polyvinyl alcohol cryogel tubes with different diameters. It was shown that these properties have a significant effect on calibration error, which is a variable based on US beamforming techniques. These results would have important implications for calibration procedures and their feasibility in the context of image-guided procedures. PMID:28331886
Improving integrity of on-line grammage measurement with traceable basic calibration.
Kangasrääsiö, Juha
2010-07-01
The automatic control of grammage (basis weight) in paper and board production is based on on-line grammage measurement, and the automatic control of other quality variables, such as moisture, ash content and coat weight, may also rely on it. The integrity of Kr-85-based on-line grammage measurement systems was studied by performing basic calibrations with traceably calibrated plastic reference standards. The calibrations were performed according to the EN ISO/IEC 17025 standard, which is a requirement for calibration laboratories. The observed relative measurement errors were up to 3.3% in first-time calibrations at the 95% confidence level. With the traceable basic calibration method, however, these errors can be reduced to under 0.5%, thus improving the integrity of on-line grammage measurements. A standardised algorithm, based on experience from the performed calibrations, is also proposed to ease the adjustment of the different grammage measurement systems. The calibration technique can in principle be applied to all beta-radiation-based grammage measurements. 2010 ISA. Published by Elsevier Ltd. All rights reserved.
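The beta-gauge principle underlying such instruments can be sketched numerically: transmitted Kr-85 intensity falls off with basis weight roughly per the Beer-Lambert law, and calibration ties the readout to traceable standards. The following sketch is schematic only; the mass attenuation coefficient is an invented illustrative value, not one from the paper.

```python
import math

def grammage_from_transmission(i_meas, i_zero, mu_mass):
    """Basis weight (g/m^2) from beta-gauge transmission via Beer-Lambert.

    i_meas: intensity with the sheet in the measuring gap,
    i_zero: open-gap (zero-sheet) intensity,
    mu_mass: mass attenuation coefficient in m^2/g (instrument-specific).
    """
    return math.log(i_zero / i_meas) / mu_mass

# Illustrative numbers (not from the paper): mu ~ 2.5e-3 m^2/g for Kr-85 beta
w = grammage_from_transmission(60.7, 100.0, 2.5e-3)  # roughly 200 g/m^2
```

A traceable basic calibration, in this picture, amounts to adjusting `mu_mass` (or a correction curve on top of it) so that readings of reference standards of known grammage agree within the stated uncertainty.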
An Optical Frequency Comb Tied to GPS for Laser Frequency/Wavelength Calibration
Stone, Jack A.; Egan, Patrick
2010-01-01
Optical frequency combs can be employed over a broad spectral range to calibrate laser frequency or vacuum wavelength. This article describes procedures and techniques utilized in the Precision Engineering Division of NIST (National Institute of Standards and Technology) for comb-based calibration of laser wavelength, including a discussion of ancillary measurements such as determining the mode order. The underlying purpose of these calibrations is to provide traceable standards in support of length measurement. The relative uncertainty needed to fulfill this goal is typically 10⁻⁸ and never below 10⁻¹², very modest requirements compared to the capabilities of comb-based frequency metrology. In this accuracy range the Global Positioning System (GPS) serves as an excellent frequency reference that can provide the traceable underpinning of the measurement. This article describes techniques that can be used to completely characterize measurement errors in a GPS-based comb system and thus achieve full confidence in measurement results. PMID:27134794
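The comb relation such calibrations rest on is f = n·f_rep + f_0 + f_beat, and recovering the integer mode order n from a coarse frequency estimate (e.g., a wavelength meter good to a fraction of f_rep) is the ancillary measurement mentioned above. A minimal sketch with illustrative parameter values, not NIST's actual comb settings:

```python
def comb_mode_frequency(n, f_rep, f_ceo, f_beat):
    """Optical frequency of comb line n plus the measured laser-comb beat."""
    return n * f_rep + f_ceo + f_beat

def mode_order(f_approx, f_rep, f_ceo, f_beat):
    """Recover the integer mode order from a coarse frequency estimate."""
    return round((f_approx - f_ceo - f_beat) / f_rep)

C = 299_792_458.0                         # speed of light, m/s
f_rep, f_ceo, f_beat = 250e6, 20e6, 35e6  # Hz, illustrative values
lam_approx = 632.991e-9                   # coarse HeNe wavelength estimate, m

n = mode_order(C / lam_approx, f_rep, f_ceo, f_beat)
f_laser = comb_mode_frequency(n, f_rep, f_ceo, f_beat)
```

As long as the coarse estimate is accurate to better than half the repetition rate, the rounding step pins down n unambiguously and the comb delivers the full frequency accuracy of the GPS-disciplined reference.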
Spelleken, E; Crowe, S B; Sutherland, B; Challens, C; Kairn, T
2018-03-01
Gafchromic EBT3 film is widely used for patient-specific quality assurance of complex treatment plans. Film dosimetry techniques commonly involve the use of transmission scanning to produce TIFF files, which are analysed using a non-linear calibration relationship between dose and red-channel net optical density (netOD). Numerous film calibration techniques featured in the literature have not been independently verified or evaluated. A range of previously published film dosimetry techniques were re-evaluated to identify whether these methods produce better results than the commonly used non-linear netOD method. EBT3 film was irradiated at calibration doses between 0 and 4000 cGy, and 25 pieces of film were irradiated at 200 cGy to evaluate uniformity. The film was scanned using two different scanners: the Epson Perfection V800 and the Epson Expression 10000XL. Calibration curves, uncertainty in the fit of the curve, overall uncertainty and uniformity were calculated following the methods described by the different calibration techniques. It was found that protocols based on a conventional film dosimetry technique produced results that were accurate and uniform to within 1%, while some of the unconventional techniques produced much higher uncertainties (> 25% for some techniques). Some of the uncommon methods produced reliable results at standard treatment doses (< 400 cGy); however, none could be recommended as an efficient or accurate replacement for a common film analysis technique using transmission scanning, red-channel analysis, netOD and a non-linear calibration curve for measuring doses up to 4000 cGy with EBT3 film.
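The conventional red-channel pipeline the study favours can be sketched as follows. This is a minimal illustration assuming the widely used D = a·netOD + b·netOD^n functional form, with synthetic data rather than real scans; the exponent n is held fixed here so the fit reduces to linear least squares, whereas in practice n is often fitted as well.

```python
import numpy as np

def net_od(pv_exposed, pv_unexposed):
    """Red-channel net optical density from scanner pixel values."""
    return np.log10(np.asarray(pv_unexposed) / np.asarray(pv_exposed))

def fit_dose_curve(netod, dose, n=2.5):
    """Fit the common D = a*netOD + b*netOD**n model (n fixed, so the
    problem is linear in the coefficients a and b)."""
    A = np.column_stack([netod, netod ** n])
    (a, b), *_ = np.linalg.lstsq(A, dose, rcond=None)
    return a, b

# Synthetic calibration data (illustrative, not from the paper)
netod = np.array([0.05, 0.10, 0.20, 0.30, 0.40])
dose = 1000 * netod + 4000 * netod ** 2.5        # cGy
a, b = fit_dose_curve(netod, dose)
```

Once fitted, the same (a, b) pair converts the netOD of a measurement film into dose; the paper's comparison essentially asks whether any alternative scanning or channel-analysis scheme beats this baseline.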
NASA Astrophysics Data System (ADS)
Golmohammadi, A.; Jafarpour, B.; M Khaninezhad, M. R.
2017-12-01
Calibration of heterogeneous subsurface flow models leads to ill-posed nonlinear inverse problems, where too many unknown parameters are estimated from limited response measurements. When the underlying parameters form complex (non-Gaussian) structured spatial connectivity patterns, classical variogram-based geostatistical techniques cannot describe the underlying connectivity patterns. Modern pattern-based geostatistical methods that incorporate higher-order spatial statistics are more suitable for describing such complex spatial patterns. Moreover, when the underlying unknown parameters are discrete (geologic facies distribution), conventional model calibration techniques that are designed for continuous parameters cannot be applied directly. In this paper, we introduce a novel pattern-based model calibration method to reconstruct discrete and spatially complex facies distributions from dynamic flow response data. To reproduce complex connectivity patterns during model calibration, we impose a feasibility constraint to ensure that the solution follows the expected higher-order spatial statistics. For model calibration, we adopt a regularized least-squares formulation, involving data mismatch, pattern connectivity, and feasibility constraint terms. Using an alternating directions optimization algorithm, the regularized objective function is divided into a continuous model calibration problem, followed by mapping the solution onto the feasible set. The feasibility constraint to honor the expected spatial statistics is implemented using a supervised machine learning algorithm. The two steps of the model calibration formulation are repeated until the convergence criterion is met. Several numerical examples are used to evaluate the performance of the developed method.
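The alternating-directions loop described above, a continuous calibration step followed by mapping onto the feasible (discrete-facies) set, can be illustrated on a toy linear problem. Here nearest-value snapping stands in for the paper's learned pattern-based projection, and the forward model, facies values and data are all synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model: data d = G m + noise, with m a two-facies 1D field
true_m = np.array([1.0] * 10 + [5.0] * 10)
G = rng.normal(size=(30, 20))
d = G @ true_m + 0.01 * rng.normal(size=30)

facies_values = np.array([1.0, 5.0])  # known discrete facies properties

def project_to_facies(m):
    """Feasibility step: snap each cell to the nearest facies value
    (a stand-in for the paper's machine-learned pattern projection)."""
    return facies_values[np.argmin(np.abs(m[:, None] - facies_values), axis=1)]

m = np.full(20, facies_values.mean())
for _ in range(20):
    m_feas = project_to_facies(m)
    # Continuous step: least squares on the data misfit, regularized
    # toward the current feasible estimate (alternating-directions flavor)
    A = np.vstack([G, np.sqrt(0.1) * np.eye(20)])
    rhs = np.concatenate([d, np.sqrt(0.1) * m_feas])
    m, *_ = np.linalg.lstsq(A, rhs, rcond=None)

recovered = project_to_facies(m)
```

The real method replaces the linear forward model with a flow simulator and the snapping step with a supervised classifier trained on the expected higher-order spatial statistics, but the alternation between continuous update and feasibility projection is the same.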
NASA Astrophysics Data System (ADS)
Verbus, J. R.; Rhyne, C. A.; Malling, D. C.; Genecov, M.; Ghosh, S.; Moskowitz, A. G.; Chan, S.; Chapman, J. J.; de Viveiros, L.; Faham, C. H.; Fiorucci, S.; Huang, D. Q.; Pangilinan, M.; Taylor, W. C.; Gaitskell, R. J.
2017-04-01
We propose a new technique for the calibration of nuclear recoils in large noble element dual-phase time projection chambers used to search for WIMP dark matter in the local galactic halo. This technique provides an in situ measurement of the low-energy nuclear recoil response of the target media using the measured scattering angle between multiple neutron interactions within the detector volume. The low-energy reach and reduced systematics of this calibration have particular significance for the low-mass WIMP sensitivity of several leading dark matter experiments. Multiple strategies for improving this calibration technique are discussed, including the creation of a new type of quasi-monoenergetic neutron source with a minimum possible peak energy of 272 keV. We report results from a time-of-flight-based measurement of the neutron energy spectrum produced by an Adelphi Technology, Inc. DD108 neutron generator, confirming its suitability for the proposed nuclear recoil calibration.
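For a heavy target nucleus, two-body elastic kinematics tie the measured neutron scattering angle to the nuclear recoil energy via E_R ≈ E_n · 2·m_n·M/(m_n + M)² · (1 − cos θ), which is why the inter-vertex scattering angle yields an in situ energy scale. A sketch for neutron-xenon scattering, using approximate masses; the 272 keV value is the quasi-monoenergetic source energy quoted above:

```python
import math

M_N = 939.565                 # neutron mass, MeV/c^2
M_XE = 131.293 * 931.494      # Xe-131 nuclear mass, MeV/c^2 (approximate)

def recoil_energy_kev(e_n_kev, theta_deg):
    """Nuclear recoil energy for elastic n-Xe scattering; theta is the
    neutron scattering angle (lab angle ~ CM angle for a heavy target)."""
    coupling = 2.0 * M_N * M_XE / (M_N + M_XE) ** 2
    return e_n_kev * coupling * (1.0 - math.cos(math.radians(theta_deg)))

# A 272 keV neutron scattering through 90 degrees deposits a few keV,
# right in the low-mass-WIMP signal region:
er = recoil_energy_kev(272.0, 90.0)
```

Since the angle is reconstructed from the two interaction positions, the recoil energy is known independently of the detector's light and charge yields, which is precisely what makes the calibration free of the usual yield systematics.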
Virtual environment assessment for laser-based vision surface profiling
NASA Astrophysics Data System (ADS)
ElSoussi, Adnane; Al Alami, Abed ElRahman; Abu-Nabah, Bassam A.
2015-03-01
Oil and gas businesses have been raising demand on original equipment manufacturers (OEMs) to implement a reliable metrology method for assessing the surface profiles of welds before and after grinding. This mandates a departure from the commonly used surface measurement gauges, which are not only operator-dependent but also limited to discrete measurements along the weld. Owing to their potential accuracy and speed, laser-based vision surface profiling systems have been progressively adopted as part of manufacturing quality control. This effort presents a virtual environment that lends itself to developing and evaluating existing laser vision sensor (LVS) calibration and measurement techniques. A combination of two known calibration techniques is implemented to deliver a calibrated LVS system. System calibration is implemented virtually and experimentally to scan simulated and 3D-printed features of known profiles, respectively. Scanned data are inverted and compared with the input profiles to validate the virtual environment's capability for LVS surface profiling and to preliminarily assess the measurement technique for weld profiling applications. Moreover, this effort brings 3D scanning capability a step closer to robust quality control applications in a manufacturing environment.
An ionospheric occultation inversion technique based on epoch difference
NASA Astrophysics Data System (ADS)
Lin, Jian; Xiong, Jing; Zhu, Fuying; Yang, Jian; Qiao, Xuejun
2013-09-01
Among ionospheric radio occultation (IRO) electron density profile (EDP) retrievals, the Abel-based calibrated TEC inversion (CTI) is the most widely used technique. To eliminate the contribution from altitudes above the RO satellite, it is necessary to use calibrated TEC to retrieve the EDP, which introduces error due to the coplanarity assumption. In this paper, a new technique based on epoch-difference inversion (EDI) is proposed for the first time to eliminate this error. Comparisons between CTI and EDI were carried out using both simulated and real COSMIC data. The following conclusions can be drawn: the EDI technique can successfully retrieve EDPs without non-occultation-side measurements and shows better performance than the CTI method, especially for lower-orbit missions; regardless of which technique is used, the inversion results at higher altitudes are better than those at lower altitudes, which can be explained theoretically.
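The calibrated-TEC inversion that EDI is benchmarked against can be illustrated with the classical onion-peeling discretization: with spherically symmetric shells and straight-line rays, the TEC-versus-tangent-height system is lower triangular and solves shell by shell from the top down. The sketch below uses an invented shell geometry and a Chapman-like synthetic profile, not COSMIC data:

```python
import numpy as np

# Shell boundaries from 400 km down to 100 km altitude (Earth R = 6371 km)
R = 6371.0 + np.arange(400.0, 99.0, -20.0)   # outer radii of shells, km
r_tan = R[1:]                                 # tangent radius of each ray
ne_true = np.exp(-((R[1:] - 6371.0 - 250.0) / 60.0) ** 2)  # synthetic EDP

# Geometry matrix: chord length of ray i inside shell j (straight rays)
n = len(r_tan)
L = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1):
        outer = np.sqrt(R[j] ** 2 - r_tan[i] ** 2)
        inner = np.sqrt(max(R[j + 1] ** 2 - r_tan[i] ** 2, 0.0))
        L[i, j] = 2.0 * (outer - inner)       # both sides of tangent point

tec = L @ ne_true                             # simulated calibrated TEC
ne_retrieved = np.linalg.solve(L, tec)        # onion peeling: triangular solve
```

EDI's contribution, in this picture, is to form the observable from differences between epochs so that the contribution from above the receiving satellite cancels without requiring the non-occultation-side (coplanar) calibration arc at all.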
An Accurate Temperature Correction Model for Thermocouple Hygrometers
Savage, Michael J.; Cass, Alfred; de Jager, James M.
1982-01-01
Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38°C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25°C, if the calibration slopes are corrected for temperature. PMID:16662241
NASA Technical Reports Server (NTRS)
Gasiewski, Albin J.
1992-01-01
This technique for electronically rotating the polarization basis of an orthogonal-linear polarization radiometer is based on the measurement of the first three feedhorn Stokes parameters, followed by the transformation of the measured Stokes vector into a rotated coordinate frame. The technique requires an accurate measurement of the cross-correlation between the two orthogonal feedhorn modes, for which an innovative polarized calibration load was developed. The experimental portion of this investigation consisted of a proof-of-concept demonstration of electronic polarization basis rotation (EPBR) using a ground-based 90-GHz dual orthogonal-linear polarization radiometer. Practical calibration algorithms for ground-, aircraft-, and space-based instruments were identified and tested. The theoretical effort consisted of radiative transfer modeling using the planar-stratified numerical model described in Gasiewski and Staelin (1990).
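Because total intensity is rotation-invariant while Q and U mix at twice the rotation angle, the basis rotation at the heart of EPBR is a simple linear transform of the measured Stokes brightness temperatures. A schematic sketch with invented brightness temperatures (not data from the 90-GHz instrument):

```python
import numpy as np

def rotate_polarization_basis(t_i, t_q, t_u, phi_deg):
    """Rotate the first three Stokes brightness temperatures (I, Q, U)
    into a basis rotated by phi: I is invariant, Q and U mix at 2*phi."""
    c2 = np.cos(2.0 * np.radians(phi_deg))
    s2 = np.sin(2.0 * np.radians(phi_deg))
    return t_i, c2 * t_q + s2 * t_u, -s2 * t_q + c2 * t_u

# Vertical/horizontal brightness temperatures and the cross-correlation
# channel (whose calibration motivates the polarized load), in kelvin:
tv, th, tu = 250.0, 220.0, 5.0
t_i, t_q, t_u = tv + th, tv - th, tu
_, t_q45, _ = rotate_polarization_basis(t_i, t_q, t_u, 45.0)
```

At a 45-degree rotation the transformed Q channel is just the original U channel, which shows why an accurate cross-correlation (third Stokes) measurement is indispensable for electronic basis rotation.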
Calibrationless parallel magnetic resonance imaging: a joint sparsity model.
Majumdar, Angshul; Chaudhury, Kunal Narayan; Ward, Rabab
2013-12-05
State-of-the-art parallel MRI techniques either explicitly or implicitly require certain parameters to be estimated, e.g., the sensitivity maps for SENSE and SMASH, or the interpolation weights for GRAPPA and SPIRiT. All of these techniques are therefore sensitive to the calibration (parameter estimation) stage. In this work, we propose a parallel MRI technique that does not require any calibration yet yields reconstruction results on par with (or even better than) state-of-the-art methods in parallel MRI. The proposed method requires solving non-convex analysis- and synthesis-prior joint-sparsity problems, and this work also derives the algorithms for solving them. Experimental validation was carried out on two datasets: an eight-channel brain dataset and an eight-channel Shepp-Logan phantom. Two sampling methods were used: variable-density random sampling and non-Cartesian radial sampling. An acceleration factor of 4 was used for the brain data and a factor of 6 for the phantom. The reconstruction results were quantitatively evaluated using the normalised mean squared error between the reconstructed images and the originals; qualitative evaluation was based on the reconstructed images themselves. We compared our work with four state-of-the-art parallel imaging techniques: two calibrated methods, CS SENSE and l1SPIRiT, and two calibration-free techniques, Distributed CS and SAKE. Our method yields better reconstruction results than all of them.
NASA Astrophysics Data System (ADS)
Nouri, N. M.; Mostafapour, K.; Kamran, M.
2018-02-01
In a closed water-tunnel circuit, a multi-component strain gauge force and moment sensor (also known as a balance) is generally used to measure the hydrodynamic forces and moments acting on scaled models. These balances are periodically calibrated by static loading, and their performance and accuracy depend significantly on the rig and the method of calibration. In this research, a new calibration rig was designed and constructed to calibrate multi-component internal strain gauge balances. The calibration rig has six degrees of freedom and six different component-loading structures that can be applied separately or synchronously. The system was designed around the applicability of formal experimental design techniques, using gravity for balance loading and for balance positioning and alignment relative to gravity. To evaluate the calibration rig, a six-component internal balance developed by the Iran University of Science and Technology was calibrated using response surface methodology. According to the results, the calibration rig met all design criteria. The rig provides the means by which various formal experimental design techniques can be implemented, and its simplicity saves time and money in the design of experiments and in balance calibration while simultaneously increasing the accuracy of these activities.
Optics-Only Calibration of a Neural-Net Based Optical NDE Method for Structural Health Monitoring
NASA Technical Reports Server (NTRS)
Decker, Arthur J.
2004-01-01
A calibration process is presented that uses optical measurements alone to calibrate a neural-net-based NDE method. The method itself detects small changes in the vibration mode shapes of structures. The optics-only calibration process confirms previous work showing that the sensitivity to vibration-amplitude changes can be as small as 10 nanometers; a more practical value in an NDE service laboratory is shown to be 50 nanometers. Both model-generated and experimental calibrations are demonstrated using two implementations of the calibration technique. The implementations are based on previously published demonstrations of the NDE method and on an alternative calibration procedure that depends on comparing neural-net and point-sensor measurements. The optics-only calibration method, unlike the alternative method, does not require modifications of the structure being tested or the creation of calibration objects. The calibration process can be used to test improvements in the NDE process and to develop a vibration-mode independence of damage-detection sensitivity. The calibration effort was intended to support NASA's objective to promote safety in the operation of ground test facilities, and aviation safety in general, by allowing the detection of the gradual onset of structural changes and damage.
An on-line calibration technique for improved blade-by-blade tip clearance measurement
NASA Astrophysics Data System (ADS)
Sheard, A. G.; Westerman, G. C.; Killeen, B.
A description of a capacitance-based tip clearance measurement system which integrates a novel technique for calibrating the capacitance probe in situ is presented. The on-line calibration system allows the capacitance probe to be calibrated immediately prior to use, providing substantial operational advantages and maximizing measurement accuracy. The possible error sources when it is used in service are considered, and laboratory studies of performance to ascertain their magnitude are discussed. The 1.2-mm diameter FM capacitance probe is demonstrated to be insensitive to variations in blade tip thickness from 1.25 to 1.45 mm. Over typical compressor blading the probe's range was four times the variation in blade to blade clearance encountered in engine run components.
Antenna Calibration and Measurement Equipment
NASA Technical Reports Server (NTRS)
Rochblatt, David J.; Cortes, Manuel Vazquez
2012-01-01
A document describes the Antenna Calibration & Measurement Equipment (ACME) system that will provide the Deep Space Network (DSN) with instrumentation enabling a trained RF engineer at each complex to perform antenna calibration measurements and to generate antenna calibration data. This data includes continuous-scan auto-bore-based data acquisition with all-sky data gathering in support of 4th-order pointing model generation requirements. Other data includes antenna subreflector focus, system noise temperature and tipping curves, antenna efficiency, system linearity reports, and instrument calibration. The ACME system design is based on the on-the-fly (OTF) mapping technique and architecture. ACME has contributed to the improved RF performance of the DSN by approximately a factor of two, and it has improved the pointing performance of the DSN antennas and the productivity of its personnel and calibration engineers.
Knaack, Jennifer S; Zhou, Yingtao; Abney, Carter W; Prezioso, Samantha M; Magnuson, Matthew; Evans, Ronald; Jakubowski, Edward M; Hardy, Katelyn; Johnson, Rudolph C
2012-11-20
We have developed a novel immunomagnetic scavenging technique for extracting cholinesterase inhibitors from aqueous matrixes using biological targeting and antibody-based extraction. The technique was characterized using the organophosphorus nerve agent VX. The limit of detection for VX in high-performance liquid chromatography (HPLC)-grade water, defined as the lowest calibrator concentration, was 25 pg/mL in a small, 500 μL sample. The method was characterized over the course of 22 sample sets containing calibrators, blanks, and quality control samples. Method precision, expressed as the mean relative standard deviation, was less than 9.2% for all calibrators. Quality control sample accuracy was 102% and 100% of the mean for VX spiked into HPLC-grade water at concentrations of 2.0 and 0.25 ng/mL, respectively. This method successfully was applied to aqueous extracts from soil, hamburger, and finished tap water spiked with VX. Recovery was 65%, 81%, and 100% from these matrixes, respectively. Biologically based extractions of organophosphorus compounds represent a new technique for sample extraction that provides an increase in extraction specificity and sensitivity.
An additional study and implementation of tone calibrated technique of modulation
NASA Technical Reports Server (NTRS)
Rafferty, W.; Bechtel, L. K.; Lay, N. E.
1985-01-01
The Tone Calibrated Technique (TCT) was shown to be theoretically free from an error floor and to be limited in practice only by implementation constraints. The concept of the TCT transmission scheme is introduced along with a baseband implementation of a suitable demodulator. Two techniques for the generation of the TCT signal are considered: a Manchester source encoding scheme (MTCT) and a subcarrier-based technique (STCT). The results of the TCT link computer simulation are summarized. The hardware implementation of the MTCT system is addressed, and the digital signal processing design considerations involved in satisfying the modulator/demodulator requirements are outlined. The program findings are discussed, and future directions are suggested based on conclusions regarding the suitability of the TCT system for the transmission channel presently under consideration.
NASA Astrophysics Data System (ADS)
Bell, S. A.; Miao, P.; Carroll, P. A.
2018-04-01
Evolved vapor coulometry is a measurement technique that selectively detects water and is used to measure water content of materials. The basis of the measurement is the quantitative electrolysis of evaporated water entrained in a carrier gas stream. Although this measurement has a fundamental principle—based on Faraday's law which directly relates electrolysis current to amount of substance electrolyzed—in practice it requires calibration. Commonly, reference materials of known water content are used, but the variety of these is limited, and they are not always available for suitable values, materials, with SI traceability, or with well-characterized uncertainty. In this paper, we report development of an alternative calibration approach using as a reference the water content of humid gas of defined dew point traceable to the SI via national humidity standards. The increased information available through this new type of calibration reveals a variation of the instrument performance across its range not visible using the conventional approach. The significance of this is discussed along with details of the calibration technique, example results, and an uncertainty evaluation.
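The Faraday's-law step is direct: each water molecule requires two electrons for electrolysis, so the integrated electrolysis current maps straight to water mass. A minimal sketch of that conversion only (the paper's point is precisely that real instruments still need calibration on top of this idealized relation):

```python
FARADAY = 96485.332   # Faraday constant, C/mol
M_WATER = 18.015      # molar mass of water, g/mol

def water_mass_from_charge(charge_coulombs):
    """Mass of water (g) electrolyzed for a given integrated current,
    via Faraday's law with 2 electrons per H2O molecule."""
    return charge_coulombs * M_WATER / (2.0 * FARADAY)

# 10 C of integrated electrolysis current corresponds to just under 1 mg
m = water_mass_from_charge(10.0)
```

The humid-gas calibration described above effectively measures the instrument's deviation from this ideal response across its range, using dew-point-defined water amounts traceable to national humidity standards.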
ITER-relevant calibration technique for soft x-ray spectrometer.
Rzadkiewicz, J; Książek, I; Zastrow, K-D; Coffey, I H; Jakubowska, K; Lawson, K D
2010-10-01
The ITER-oriented JET research programme brings new requirements for low-Z impurity monitoring, in particular for Be, the future main-wall material of JET and ITER. Monitoring based on Bragg spectroscopy requires an absolute sensitivity calibration, which is challenging for large tokamaks. This paper describes both the "component-by-component" and the "continua" calibration methods used for the Be IV channel (75.9 Å) of the Bragg rotor spectrometer deployed on JET. The calibration techniques presented here rely on multiorder reflectivity calculations and on measurements of continuum radiation emitted from helium plasmas, which offer excellent conditions for absolute photon flux calibration owing to their low level of impurities. It was found that the component-by-component method gives results four times higher than those obtained by the continua method. A better understanding of this discrepancy requires further investigation.
Study on rapid valid acidity evaluation of apple by fiber optic diffuse reflectance technique
NASA Astrophysics Data System (ADS)
Liu, Yande; Ying, Yibin; Fu, Xiaping; Jiang, Xuesong
2004-03-01
Issues related to the nondestructive evaluation of valid acidity in intact apples by Fourier transform near-infrared (FTNIR, 800-2631 nm) spectroscopy were addressed. A relationship was established between the diffuse reflectance spectra recorded with a bifurcated optic fiber and the valid acidity. The data were analysed by multivariate calibration techniques such as partial least squares (PLS) analysis and principal component regression (PCR). A total of 120 Fuji apples were tested, 80 of which formed the calibration data set. The influence of data preprocessing and of different spectral treatments was also investigated. Models based on smoothed spectra were slightly worse than models based on derivative spectra, and the best result was obtained with a segment length of 5 and a gap size of 10. Depending on the data preprocessing and multivariate calibration technique, the best prediction model, obtained by PLS analysis, had a correlation coefficient of 0.871, a low RMSEP (0.0677), a low RMSEC (0.056) and a small difference between RMSEP and RMSEC. The results point to the feasibility of FTNIR spectral analysis for predicting fruit valid acidity non-destructively. The ratio of the data standard deviation to the root mean square error of prediction (SDR) should preferably exceed 3 for a calibration model; the models obtained here do not meet that demand for practical application, so further study is required for better calibration and prediction.
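The figures of merit quoted above (RMSEC, RMSEP, SDR) are straightforward to compute once model predictions are in hand. The sketch below uses invented reference and predicted values, not the paper's apple data:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error between reference and predicted values
    (RMSEC on the calibration set, RMSEP on the prediction set)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def sdr(y_true, y_pred):
    """Ratio of reference-data standard deviation to RMSEP; values above
    roughly 3 are usually taken to indicate a usable NIR calibration."""
    return float(np.std(y_true) / rmse(y_true, y_pred))

# Invented validation-set values (valid acidity, arbitrary units)
y_val = np.array([3.8, 4.0, 4.2, 4.4, 4.6])
y_hat = np.array([3.85, 3.95, 4.25, 4.35, 4.65])
quality = sdr(y_val, y_hat)
```

A small gap between RMSEC and RMSEP signals that the model is not overfitted, while the SDR measures whether the prediction error is small relative to the natural spread of the property being predicted.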
Vicarious Calibration of EO-1 Hyperion
NASA Technical Reports Server (NTRS)
McCorkel, Joel; Thome, Kurt; Lawrence, Ong
2012-01-01
The Hyperion imaging spectrometer on the Earth Observing-1 satellite is the first high-spatial-resolution imaging spectrometer to routinely acquire science-grade data from orbit. Data gathered with this instrument need to be quantitative and accurate in order to derive meaningful information about ecosystem properties and processes. Comprehensive, long-term ecological studies also require these data to be comparable over time, between coexisting sensors, and between generations of follow-on sensors. One method to assess the radiometric calibration is the reflectance-based approach, a common technique used for several other Earth science sensors covering similar spectral regions. This work presents results of the radiometric calibration of Hyperion based on the reflectance-based approach to vicarious calibration implemented by the University of Arizona during 2001-2005. These results show repeatability at the 2% level and accuracy at the 3-5% level for spectral regions not affected by strong atmospheric absorption. Knowledge of the stability of the Hyperion calibration from lunar observations allows an average absolute calibration based on the reflectance-based results to be determined that is applicable for the lifetime of Hyperion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Penfold, S; Miller, A
2015-06-15
Purpose: Stoichiometric calibration of Hounsfield Units (HUs) for conversion to proton relative stopping powers (RStPs) is vital for accurate dose calculation in proton therapy. However, proton dose distributions depend not only on RStP but also on the relative scattering power (RScP) of patient tissues. RScP is approximated from material density, but a stoichiometric calibration of HU-density tables is commonly neglected. The purpose of this work was to quantify the difference in calculated dose of a commercial TPS when using HU-density tables based on tissue substitute materials versus stoichiometrically calibrated ICRU tissues. Methods: Two HU-density calibration tables were generated based on scans of the CIRS electron density phantom. The first table was based directly on measured HU and manufacturer-quoted density of tissue substitute materials. The second was based on the same CT scan of the CIRS phantom followed by a stoichiometric calibration of ICRU44 tissue materials. The research version of Pinnacle³ proton therapy was used to compute dose in a patient CT data set utilizing both HU-density tables. Results: The two HU-density tables showed significant differences for bone tissues, the difference increasing with increasing HU. Differences in the density calibration table translated to a difference in calculated RScP of −2.5% for ICRU skeletal muscle and 9.2% for ICRU femur. Dose-volume histogram analysis of a parallel-opposed proton therapy prostate plan showed that the difference in calculated dose was negligible when using the two different HU-density calibration tables. Conclusion: The impact of the HU-density calibration technique on proton therapy dose calculation was assessed. While differences were found in the calculated RScP of bony tissues, the difference in dose distribution for realistic treatment scenarios was found to be insignificant.
One-calibrant kinetic calibration for on-site water sampling with solid-phase microextraction.
Ouyang, Gangfeng; Cui, Shufen; Qin, Zhipei; Pawliszyn, Janusz
2009-07-15
The existing solid-phase microextraction (SPME) kinetic calibration technique, which uses the desorption of preloaded standards to calibrate the extraction of the analytes, requires that the physicochemical properties of the standard be similar to those of the analyte, which has limited the application of the technique. In this study, a new method, termed the one-calibrant kinetic calibration technique, which can use the desorption of a single standard to calibrate all extracted analytes, was proposed. The theoretical considerations were validated by passive water sampling in the laboratory and rapid water sampling in the field. To mimic variations in the environment, such as temperature, turbulence, and analyte concentration, the flow-through system for the generation of standard aqueous polycyclic aromatic hydrocarbon (PAH) solutions was modified. The experimental results of the passive samplings in the flow-through system illustrated that the effect of the environmental variables was successfully compensated by the kinetic calibration technique, and all extracted analytes could be calibrated through the desorption of a single calibrant. On-site water sampling with rotated SPME fibers also illustrated the feasibility of the new technique for rapid on-site sampling of hydrophobic organic pollutants in water. This technique will accelerate the application of the kinetic calibration method and will also be useful for other microextraction techniques.
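The identity behind kinetic calibration is that extraction and desorption are assumed symmetric first-order processes, so n/n_e + q/q0 = 1 at any sampling time; the fraction of preloaded calibrant remaining on the fiber therefore corrects the extracted amount up to its equilibrium value. A sketch with invented numbers (the fiber constant K_fs, coating volume V_f and amounts are all illustrative, not values from the paper):

```python
def equilibrium_amount(n_extracted, q_remaining, q_loaded):
    """Kinetic calibration: with symmetric extraction/desorption,
    n/n_e + q/q0 = 1, so the remaining calibrant fraction corrects
    the extracted amount n to its equilibrium value n_e."""
    return n_extracted / (1.0 - q_remaining / q_loaded)

def concentration(n_extracted, q_remaining, q_loaded, k_fs, v_f):
    """Sample concentration from the fiber-sample distribution constant
    K_fs and the fiber coating volume V_f (n_e = K_fs * V_f * C)."""
    return equilibrium_amount(n_extracted, q_remaining, q_loaded) / (k_fs * v_f)

# Illustrative: 40% of the calibrant desorbed during sampling, 2 ng extracted
n_e = equilibrium_amount(2.0, 0.6, 1.0)
c = concentration(2.0, 0.6, 1.0, 5000.0, 5e-4)
```

The one-calibrant extension removes the requirement that the calibrant match each analyte's properties, so a single desorption measurement can serve all extracted compounds.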
Multiplexed absorption tomography with calibration-free wavelength modulation spectroscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Weiwei; Kaminski, Clemens F., E-mail: cfk23@cam.ac.uk
2014-04-14
We propose a multiplexed absorption tomography technique, which uses calibration-free wavelength modulation spectroscopy with tunable semiconductor lasers for the simultaneous imaging of temperature and species concentration in harsh combustion environments. Compared with its commonly used direct absorption spectroscopy (DAS) counterpart, the present variant enjoys better signal-to-noise ratios and requires no baseline fitting, a particularly desirable feature for high-pressure applications, where adjacent absorption features overlap and interfere severely. We present proof-of-concept numerical demonstrations of the technique using realistic phantom models of harsh combustion environments and show that the proposed technique outperforms currently available tomography techniques based on DAS.
NASA Astrophysics Data System (ADS)
Shahbazi, M.; Sattari, M.; Homayouni, S.; Saadatseresht, M.
2012-07-01
Recent advances in positioning techniques have made it possible to develop Mobile Mapping Systems (MMS) for detection and 3D localization of various objects from a moving platform. At the same time, automatic traffic sign recognition from an equipped mobile platform has recently become a challenging issue for both intelligent transportation and municipal database collection. However, there are several inevitable problems inherent in all recognition methods that rely entirely on passive chromatic or grayscale images. This paper presents the implementation and evaluation of an operational MMS. Distinct from others, the developed MMS comprises one range camera based on Photonic Mixer Device (PMD) technology and one standard 2D digital camera. The system benefits from certain algorithms to detect, recognize and localize the traffic signs by fusing the shape, color and object information from both range and intensity images. In the calibration stage, a self-calibration method based on integrated bundle adjustment via a joint setup with the digital camera is applied for PMD camera calibration. As a result, an improvement of 83% in the RMS of range error and 72% in the RMS of coordinate residuals for the PMD camera, over that achieved with basic calibration, is realized in independent accuracy assessments. Furthermore, conventional photogrammetric techniques based on controlled network adjustment are utilized for platform calibration. Likewise, the well-known Extended Kalman Filter (EKF) is applied to integrate the navigation sensors, namely GPS and INS. The overall acquisition system, along with the proposed techniques, achieves 90% true-positive recognition and an average 3D positioning accuracy of 12 centimetres.
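The GPS/INS integration step can be illustrated with the linear special case of the EKF. This is a minimal scalar sketch, assuming a 1D platform where the INS supplies a velocity for the prediction step and the GPS supplies position fixes for the update step; all noise values and measurements are illustrative, not from the paper.

```python
def kalman_step(x, P, u, z, q=0.01, r=4.0, dt=1.0):
    """One predict/update cycle of a scalar Kalman filter.
    x, P : position estimate and its variance
    u    : velocity from the INS (prediction step)
    z    : position fix from the GPS (update step)
    q, r : process and measurement noise variances (illustrative values)."""
    # Predict: dead-reckon with the INS velocity
    x_pred = x + u * dt
    P_pred = P + q
    # Update: blend in the GPS fix, weighted by the Kalman gain
    K = P_pred / (P_pred + r)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0
for z in [1.1, 2.0, 2.9, 4.2]:   # noisy GPS fixes; true motion is 1 m/s
    x, P = kalman_step(x, P, u=1.0, z=z)
```

The estimate tracks the true trajectory while the variance P shrinks, which is the behaviour the full (nonlinear) EKF generalizes to multi-state GPS/INS models.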
Using HEC-HMS: Application to Karkheh river basin
USDA-ARS?s Scientific Manuscript database
This paper aims to facilitate the use of HEC-HMS model using a systematic event-based technique for manual calibration of soil moisture accounting and snowmelt degree-day parameters. Manual calibration, which helps ensure the HEC-HMS parameter values are physically-relevant, is often a time-consumin...
Standardization of Laser Methods and Techniques for Vibration Measurements and Calibrations
NASA Astrophysics Data System (ADS)
von Martens, Hans-Jürgen
2010-05-01
The realization and dissemination of the SI units of motion quantities (vibration and shock) have been based on laser interferometer methods specified in international documentary standards. New and refined laser methods and techniques developed by national metrology institutes and by leading manufacturers in the past two decades have been swiftly specified as standard methods for inclusion in the ISO 16063 series of international documentary standards. A survey of ISO standards for the calibration of vibration and shock transducers demonstrates the extended ranges and improved accuracy (measurement uncertainty) of laser methods and techniques for vibration and shock measurements and calibrations. The first standard for the calibration of laser vibrometers by laser interferometry, or by a reference accelerometer calibrated by laser interferometry (ISO 16063-41), is at the Draft International Standard (DIS) stage and may be issued by the end of 2010. The standard methods with refined techniques have proved to achieve wider measurement ranges and smaller measurement uncertainties than those specified in the ISO standards. The applicability of different standardized interferometer methods to vibrations at high frequencies was recently demonstrated up to 347 kHz (acceleration amplitudes up to 350 km/s²). The amplitude results of the different interferometer methods, applied simultaneously, deviated from each other by less than 1% in all cases.
McLaskey, Gregory C.; Lockner, David A.; Kilgore, Brian D.; Beeler, Nicholas M.
2015-01-01
We describe a technique to estimate the seismic moment of acoustic emissions and other extremely small seismic events. Unlike previous calibration techniques, it does not require modeling of the wave propagation, sensor response, or signal conditioning. Rather, this technique calibrates the recording system as a whole and uses a ball impact as a reference source or empirical Green's function. To correctly apply this technique, we develop mathematical expressions that link the seismic moment $M_{0}$ of internal seismic sources (i.e., earthquakes and acoustic emissions) to the impulse, or change in momentum $\Delta p$, of externally applied seismic sources (i.e., meteor impacts or, in this case, ball impact). We find that, at low frequencies, moment and impulse are linked by a constant, which we call the force-moment-rate scale factor $C_{F\dot{M}} = M_{0}/\Delta p$. This constant is equal to twice the speed of sound in the material from which the seismic sources were generated. Next, we demonstrate the calibration technique on two different experimental rock mechanics facilities. The first example is a saw-cut cylindrical granite sample that is loaded in a triaxial apparatus at 40 MPa confining pressure. The second example is a 2 m long fault cut in a granite sample and deformed in a large biaxial apparatus at lower stress levels. Using the empirical calibration technique, we are able to determine absolute source parameters including the seismic moment, corner frequency, stress drop, and radiated energy of these magnitude −2.5 to −7 seismic events.
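The low-frequency link $M_{0} = 2c\,\Delta p$ lends itself to a short worked example. All numbers below (wave speed, ball mass, impact and rebound speeds) are hypothetical, and the conversion to moment magnitude uses the standard Hanks-Kanamori relation in SI units, which is not part of the abstract.

```python
import math

c = 5000.0               # speed of sound in granite, m/s (illustrative)
m = 0.001                # ball mass, kg (hypothetical 1 g ball)
v_in, v_out = 1.0, 0.5   # impact and rebound speeds, m/s (hypothetical)

# Impulse of the ball impact: total change in momentum, including rebound
delta_p = m * (v_in + v_out)      # N*s

# Low-frequency link between impulse and seismic moment: M0 = 2*c*delta_p
M0 = 2.0 * c * delta_p            # N*m

# Standard Hanks-Kanamori moment magnitude, for comparison with the
# magnitude -2.5 to -7 events quoted in the abstract
Mw = (2.0 / 3.0) * (math.log10(M0) - 9.1)
```

With these hypothetical values the reference source has $M_{0} = 15$ N·m, i.e. a magnitude near −5, squarely in the range of the laboratory events being calibrated.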
Technique for Radiometer and Antenna Array Calibration with a Radiated Noise Diode
NASA Technical Reports Server (NTRS)
Srinivasan, Karthik; Limaye, Ashutosh; Laymon, Charles; Meyer, Paul
2009-01-01
This paper presents a new technique to calibrate a microwave radiometer and antenna array system. This calibration technique uses a radiated noise source in addition to two calibration sources internal to the radiometer. The method accurately calibrates antenna arrays with embedded active devices (such as amplifiers) which are used extensively in active phased array antennas.
Chotimah, Chusnul; Sudjadi; Riyanto, Sugeng; Rohman, Abdul
2015-01-01
Purpose: Analysis of drugs in a multicomponent system is officially carried out using chromatographic techniques; however, these are laborious and involve sophisticated instrumentation. Therefore, UV-VIS spectrophotometry coupled with multivariate calibration by partial least squares (PLS) was developed for quantitative analysis of metamizole, thiamin and pyridoxin in the presence of cyanocobalamine without any separation step. Methods: Calibration and validation samples were prepared. The calibration model was built from a series of sample mixtures containing these drugs in known proportions. Cross validation of the calibration samples using the leave-one-out technique was used to identify the smaller set of components that provides the greatest predictive ability. The calibration model was evaluated by the coefficient of determination (R2) and the root mean square error of calibration (RMSEC). Results: The coefficient of determination (R2) for the relationship between actual and predicted values for all studied drugs was higher than 0.99, indicating good accuracy. The RMSEC values obtained were relatively low, indicating good precision. The accuracy and precision of the developed method showed no significant difference compared to those obtained by the official HPLC method. Conclusion: The developed method (UV-VIS spectrophotometry in combination with PLS) was successfully used for analysis of metamizole, thiamin and pyridoxin in tablet dosage form. PMID:26819934
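The idea of resolving overlapping spectra without a separation step can be sketched numerically. The paper uses PLS; the toy below substitutes the simpler classical least-squares (Beer-Lambert) calibration to keep the code short, with entirely hypothetical spectra and concentrations, and computes the same figures of merit (R2 and RMSEC).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pure-component spectra (3 drugs x 20 wavelengths)
K = rng.random((3, 20))

# Calibration mixtures with known concentrations (Beer-Lambert: A = c @ K)
c_true = rng.random((10, 3))
A = c_true @ K + rng.normal(0.0, 1e-3, (10, 20))   # small spectral noise

# Classical least-squares calibration: recover the concentrations of all
# three components simultaneously from the mixture spectra. (The paper
# uses PLS; CLS is a simpler stand-in for the same idea of resolving
# overlapping bands without a separation step.)
c_pred, *_ = np.linalg.lstsq(K.T, A.T, rcond=None)
c_pred = c_pred.T

# Figures of merit used in the abstract
rmsec = float(np.sqrt(np.mean((c_pred - c_true) ** 2)))
r2 = 1.0 - np.sum((c_pred - c_true) ** 2) / np.sum((c_true - c_true.mean(0)) ** 2)
```

PLS would additionally compress the spectra onto a few latent components chosen by leave-one-out cross validation, which matters when the pure spectra are unknown or collinear.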
Differential Binary Encoding Method for Calibrating Image Sensors Based on IOFBs
Fernández, Pedro R.; Lázaro-Galilea, José Luis; Gardel, Alfredo; Espinosa, Felipe; Bravo, Ignacio; Cano, Ángel
2012-01-01
Image transmission using incoherent optical fiber bundles (IOFBs) requires prior calibration to obtain the spatial in-out fiber correspondence necessary to reconstruct the image captured by the pseudo-sensor. This information is recorded in a Look-Up Table called the Reconstruction Table (RT), used later for reordering the fiber positions and reconstructing the original image. This paper presents a very fast method, based on scanning the image with spaces encoded by a weighted binary code, to obtain the in-out correspondence. The results demonstrate that this technique yields a remarkable reduction in processing time, and the image reconstruction quality is very good compared to previous techniques based, for example, on spot or line scanning. PMID:22666023
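The speed advantage of coded scanning can be illustrated with a toy encoder/decoder. Assuming, as a simplification of the paper's weighted binary code, that pattern k lights every input column whose index has bit k set, each fiber recovers its input position from log2(width) frames instead of one frame per column as in spot or line scanning.

```python
def encode_patterns(width, n_bits):
    """Binary-coded scan: pattern k lights every column whose index has
    bit k set, so each column broadcasts its own index over n_bits frames."""
    return [[(col >> k) & 1 for col in range(width)] for k in range(n_bits)]

def decode_fiber(bits):
    """Recover the input-column index of one fiber from the n_bits on/off
    observations it recorded during the scan (its Reconstruction Table entry)."""
    return sum(b << k for k, b in enumerate(bits))

width, n_bits = 16, 4
patterns = encode_patterns(width, n_bits)

# A fiber that actually sits in front of column 11 records these bits:
observed = [patterns[k][11] for k in range(n_bits)]
reconstruction_table_entry = decode_fiber(observed)
```

Repeating the same scheme along the other axis gives the full 2D in-out correspondence stored in the RT.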
Inflight Radiometric Calibration of New Horizons' Multispectral Visible Imaging Camera (MVIC)
NASA Technical Reports Server (NTRS)
Howett, C. J. A.; Parker, A. H.; Olkin, C. B.; Reuter, D. C.; Ennico, K.; Grundy, W. M.; Graps, A. L.; Harrison, K. P.; Throop, H. B.; Buie, M. W.;
2016-01-01
We discuss two semi-independent calibration techniques used to determine the inflight radiometric calibration for the New Horizons Multispectral Visible Imaging Camera (MVIC). The first calibration technique compares the measured number of counts (DN) observed from a number of well calibrated stars to those predicted using the component-level calibration. The ratio of these values provides a multiplicative factor that allows a conversion from the preflight calibration to the more accurate inflight one, for each detector. The second calibration technique is a channel-wise relative radiometric calibration for MVIC's blue, near-infrared and methane color channels using Hubble and New Horizons observations of Charon and scaling from the red channel stellar calibration. Both calibration techniques produce very similar results (better than 7% agreement), providing strong validation for the techniques used. Since the stellar calibration described here can be performed without a color target in the field of view and covers all of MVIC's detectors, this calibration was used to provide the radiometric keyword values delivered by the New Horizons project to the Planetary Data System (PDS). These keyword values allow each observation to be converted from counts to physical units; a description of how these keyword values were generated is included. Finally, mitigation techniques adopted for the gain drift observed in the near-infrared detector and one of the panchromatic framing cameras are also discussed.
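The first technique reduces to a per-detector multiplicative factor. This is a minimal sketch with hypothetical star counts; the real pipeline compares measured DN against counts predicted from the component-level calibration for a set of well calibrated stars.

```python
# Measured counts for a few well calibrated stars, next to the counts
# predicted from the component-level (preflight) calibration.
# All numbers are hypothetical.
measured  = [1520.0, 830.0, 2410.0]
predicted = [1600.0, 860.0, 2550.0]

# Per-star ratios; their mean is the multiplicative factor that maps the
# preflight calibration onto the inflight one for this detector.
ratios = [m / p for m, p in zip(measured, predicted)]
factor = sum(ratios) / len(ratios)

def dn_to_physical(dn, preflight_conversion):
    """Convert counts to physical units with the inflight correction applied."""
    return dn * preflight_conversion / factor
```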
Strain Gauge Balance Calibration and Data Reduction at NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Ferris, A. T. Judy
1999-01-01
This paper will cover the standard force balance calibration and data reduction techniques used at Langley Research Center. It will cover balance axes definition, balance type, calibration instrumentation, traceability of standards to NIST, calibration loading procedures, balance calibration mathematical model, calibration data reduction techniques, balance accuracy reporting, and calibration frequency.
Wu, Defeng; Chen, Tianfei; Li, Aiguo
2016-08-30
A robot-based three-dimensional (3D) measurement system is presented. In the presented system, a structured light vision sensor is mounted on the arm of an industrial robot. Measurement accuracy is one of the most important aspects of any 3D measurement system. To improve the measurement accuracy of the structured light vision sensor, a novel calibration approach is proposed. The approach is based on a number of fixed concentric circles manufactured into a calibration target; the concentric circles are employed to determine the true projected centres of the circles. Then, a calibration point generation procedure is used with the help of the calibrated robot. When enough calibration points are ready, the radial alignment constraint (RAC) method is adopted to calibrate the camera model. A multilayer perceptron neural network (MLPNN) is then employed to identify the calibration residuals after the application of the RAC method. Therefore, the hybrid pinhole model and the MLPNN are used to represent the real camera model. Using a standard ball to validate the effectiveness of the presented technique, the experimental results demonstrate that the proposed calibration approach can achieve a highly accurate model of the structured light vision sensor.
A Bayesian alternative for multi-objective ecohydrological model specification
NASA Astrophysics Data System (ADS)
Tang, Yating; Marshall, Lucy; Sharma, Ashish; Ajami, Hoori
2018-01-01
Recent studies have identified the importance of vegetation processes in terrestrial hydrologic systems. Process-based ecohydrological models combine hydrological, physical, biochemical and ecological processes of the catchments, and as such are generally more complex and parametric than conceptual hydrological models. Thus, appropriate calibration objectives and model uncertainty analysis are essential for ecohydrological modeling. In recent years, Bayesian inference has become one of the most popular tools for quantifying the uncertainties in hydrological modeling with the development of Markov chain Monte Carlo (MCMC) techniques. The Bayesian approach offers an appealing alternative to traditional multi-objective hydrologic model calibrations by defining proper prior distributions that can be considered analogous to the ad-hoc weighting often prescribed in multi-objective calibration. Our study aims to develop appropriate prior distributions and likelihood functions that minimize the model uncertainties and bias within a Bayesian ecohydrological modeling framework based on a traditional Pareto-based model calibration technique. In our study, a Pareto-based multi-objective optimization and a formal Bayesian framework are implemented in a conceptual ecohydrological model that combines a hydrological model (HYMOD) and a modified Bucket Grassland Model (BGM). Simulations focused on one objective (streamflow/LAI) and multiple objectives (streamflow and LAI) with different emphasis defined via the prior distribution of the model error parameters. Results show more reliable outputs for both predicted streamflow and LAI using Bayesian multi-objective calibration with specified prior distributions for error parameters based on results from the Pareto front in the ecohydrological modeling. 
The methodology implemented here provides insight into the usefulness of multi-objective Bayesian calibration for ecohydrologic systems and the importance of appropriate prior distributions in such approaches.
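The role of prior-defined error parameters as an alternative to ad-hoc objective weighting can be sketched with a toy two-objective problem. Everything below is illustrative, not from the study: a one-parameter "catchment", Gaussian likelihoods, and a plain Metropolis sampler in place of the study's MCMC scheme. The error scales s_flow and s_lai play the role of the weights.

```python
import math, random

random.seed(1)

# Toy "catchment": two observed responses of a single parameter a (true a = 2)
xs = [1.0, 2.0, 3.0, 4.0]
obs_flow = [2.1, 4.0, 5.9, 8.2]   # objective 1 (e.g. streamflow), model a*x
obs_lai  = [1.0, 2.1, 2.9, 4.1]   # objective 2 (e.g. LAI), model a*x/2

def log_post(a, s_flow=0.2, s_lai=0.2):
    """Joint Gaussian log-likelihood of both objectives plus a flat prior.
    The error scales act as the prior-defined weights that replace ad-hoc
    multi-objective weighting."""
    lp = 0.0
    for x, f, l in zip(xs, obs_flow, obs_lai):
        lp += -0.5 * ((f - a * x) / s_flow) ** 2
        lp += -0.5 * ((l - a * x / 2.0) / s_lai) ** 2
    return lp

# Plain Metropolis sampler over the single parameter a
a, samples = 1.0, []
for _ in range(5000):
    prop = a + random.gauss(0.0, 0.1)
    if random.random() < math.exp(min(0.0, log_post(prop) - log_post(a))):
        a = prop
    samples.append(a)

a_mean = sum(samples[1000:]) / len(samples[1000:])   # posterior mean after burn-in
```

Shrinking s_flow relative to s_lai would pull the posterior toward the streamflow fit, which is exactly the weighting behaviour the prior distributions control.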
Precision calibration of the silicon doping level in gallium arsenide epitaxial layers
NASA Astrophysics Data System (ADS)
Mokhov, D. V.; Berezovskaya, T. N.; Kuzmenkov, A. G.; Maleev, N. A.; Timoshnev, S. N.; Ustinov, V. M.
2017-10-01
An approach to precision calibration of the silicon doping level in gallium arsenide epitaxial layers is discussed that is based on studying the dependence of the carrier density in the test GaAs layer on the silicon-source temperature using the Hall-effect and CV profiling techniques. The parameters are measured by standard or certified measuring techniques and approved measuring instruments. It is demonstrated that the use of CV profiling for controlling the carrier density in the test GaAs layer, with thorough optimization of the measuring procedure, ensures the highest accuracy and reliability of doping level calibration in the epitaxial layers, with a relative error of no larger than 2.5%.
Landsat-8 Operational Land Imager On-Orbit Radiometric Calibration
NASA Technical Reports Server (NTRS)
Markham, Brian L.; Barsi, Julia A.
2017-01-01
The Operational Land Imager (OLI), the VIS/NIR/SWIR sensor on Landsat-8, has been successfully acquiring Earth imagery for more than four years. The OLI incorporates two on-board radiometric calibration systems, one diffuser based and one lamp based, each with multiple sources. For each system one source is treated as primary and used frequently, and the other source(s) are used less frequently to assist in tracking any degradation in the primary sources. In addition, via a spacecraft maneuver, the OLI instrument views the moon once a lunar cycle (approx. 29 days). The integrated lunar irradiances from these acquisitions are compared to the output of a lunar irradiance model. The results from all these techniques, combined with cross calibrations with other sensors and ground-based vicarious measurements, are used to monitor the OLI's stability and correct for any changes observed. To date, the various techniques have only detected significant changes in the shortest-wavelength OLI band, centered at 443 nm, and these are currently being adjusted for in the operational processing.
Model Robust Calibration: Method and Application to Electronically-Scanned Pressure Transducers
NASA Technical Reports Server (NTRS)
Walker, Eric L.; Starnes, B. Alden; Birch, Jeffery B.; Mays, James E.
2010-01-01
This article presents the application of a recently developed statistical regression method to the controlled instrument calibration problem. The statistical method of Model Robust Regression (MRR), developed by Mays, Birch, and Starnes, is shown to improve instrument calibration by reducing the reliance of the calibration on a predetermined parametric (e.g. polynomial, exponential, logarithmic) model. This is accomplished by allowing fits from the predetermined parametric model to be augmented by a certain portion of a fit to the residuals from the initial regression using a nonparametric (locally parametric) regression technique. The method is demonstrated for the absolute scale calibration of silicon-based pressure transducers.
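The MRR idea of augmenting a parametric fit with a portion of a nonparametric fit to its residuals can be sketched as follows. The data, the straight-line parametric model, the moving-average smoother (standing in for the local regression used in MRR), and the mixing parameter lam are all illustrative assumptions, not the paper's actual models.

```python
# Toy calibration data: a transducer with a mild nonlinearity that a
# straight-line parametric model misses. Numbers are illustrative.
xs = [i / 10.0 for i in range(21)]
ys = [2.0 * x + 0.3 * x * x for x in xs]

# Step 1: ordinary least-squares fit of the chosen parametric model y = b*x
b = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
resid = [y - b * x for x, y in zip(xs, ys)]

# Step 2: nonparametric fit to the residuals (a small moving average
# stands in for the locally parametric regression of MRR)
def smooth(r, w=2):
    out = []
    for i in range(len(r)):
        lo, hi = max(0, i - w), min(len(r), i + w + 1)
        out.append(sum(r[lo:hi]) / (hi - lo))
    return out

resid_fit = smooth(resid)

# Step 3: augment the parametric fit with a portion lam of the residual fit
lam = 0.5   # mixing parameter, chosen by cross-validation in the method
fitted = [b * x + lam * rf for x, rf in zip(xs, resid_fit)]
```

The augmented fit captures the curvature the straight line misses, so its residual sum of squares drops below that of the purely parametric calibration.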
Adsorption losses from urine-based cannabinoid calibrators during routine use.
Blanc, J A; Manneh, V A; Ernst, R; Berger, D E; de Keczer, S A; Chase, C; Centofanti, J M; DeLizza, A J
1993-08-01
The major metabolite of cannabis found in urine, 11-nor-delta 9-tetrahydrocannabinol-9-carboxylic acid (delta 9-THC), is the compound most often used to calibrate cannabinoid immunoassays. The hydrophobic delta 9-THC molecule is known to adsorb to solid surfaces. This loss of analyte from calibrator solutions can lead to inaccuracy in the analytical system. Because the calibrators remain stable when not used, analyte loss is most probably caused by handling techniques. In an effort to develop an effective means of overcoming adsorption losses, we quantified cannabinoid loss from calibrators during the testing process. In studying handling of these solutions, we found noticeable, significant losses attributable to both the kind of pipette used for transfer and the contact surface-to-volume ratio of calibrator solution in the analyzer cup. Losses were quantified by immunoassay and by radioactive tracer. We suggest handling techniques that can minimize adsorption of delta 9-THC to surfaces. Using the appropriate pipette and maintaining a minimum surface-to-volume ratio in the analyzer cup effectively reduces analyte loss.
Mathematical calibration procedure of a capacitive sensor-based indexed metrology platform
NASA Astrophysics Data System (ADS)
Brau-Avila, A.; Santolaria, J.; Acero, R.; Valenzuela-Galvan, M.; Herrera-Jimenez, V. M.; Aguilar, J. J.
2017-03-01
The demand for faster and more reliable measuring tasks for the control and quality assurance of modern production systems has created new challenges for the field of coordinate metrology. Thus, the search for new solutions in coordinate metrology systems and the need for the development of existing ones still persists. One example of such a system is the portable coordinate measuring machine (PCMM), the use of which in industry has considerably increased in recent years, mostly due to its flexibility for accomplishing in-line measuring tasks as well as its reduced cost and operational advantages compared to traditional coordinate measuring machines. Nevertheless, PCMMs have a significant drawback derived from the techniques applied in the verification and optimization procedures of their kinematic parameters. These techniques are based on the capture of data with the measuring instrument from a calibrated gauge object, fixed successively in various positions so that most of the instrument measuring volume is covered, which results in time-consuming, tedious and expensive verification and optimization procedures. In this work the mathematical calibration procedure of a capacitive sensor-based indexed metrology platform (IMP) is presented. This calibration procedure is based on the readings and geometric features of six capacitive sensors and their targets with nanometer resolution. The final goal of the IMP calibration procedure is to optimize the geometric features of the capacitive sensors and their targets in order to use the optimized data in the verification procedures of PCMMs.
New calibration technique for KCD-based megavoltage imaging
NASA Astrophysics Data System (ADS)
Samant, Sanjiv S.; Zheng, Wei; DiBianca, Frank A.; Zeman, Herbert D.; Laughter, Joseph S.
1999-05-01
In megavoltage imaging, current commercial electronic portal imaging devices (EPIDs), despite having the advantage of immediate digital imaging over film, suffer from poor image contrast and spatial resolution. The feasibility of using a kinestatic charge detector (KCD) as an EPID to provide superior image contrast and spatial resolution for portal imaging has already been demonstrated in a previous paper. The KCD system had the additional advantage of requiring an extremely low dose per acquired image, allowing superior images to be reconstructed from a single linac pulse per image pixel. The KCD-based images utilized a dose two orders of magnitude lower than that for EPIDs and film. Compared with current commercial EPIDs and film, the prototype KCD system exhibited promising image quality, despite being handicapped by the use of a relatively simple image calibration technique and by the performance limits of medical linacs on the maximum linac pulse frequency and energy flux per pulse delivered. This image calibration technique fixed relative image pixel values based on a linear interpolation of extrema provided by an air-water calibration, and accounted only for channel-to-channel variations. The counterpart of this for area detectors is the standard flat fielding method. A comprehensive calibration protocol has been developed. The new technique additionally corrects for geometric distortions due to variations in the scan velocity, and for timing artifacts caused by mis-synchronization between the linear accelerator and the data acquisition system (DAS). The role of variations in energy flux (2-3%) in imaging is demonstrated to be insignificant for the images considered. The methodology is presented, and the results are discussed for simulated images. The protocol also allows for significant improvements in the signal-to-noise ratio (SNR) by increasing the dose using multiple images, without having to increase the linac pulse frequency or energy flux per pulse.
The application of this protocol to a KCD system under construction is expected shortly.
NASA Astrophysics Data System (ADS)
Herman, Matthew R.; Nejadhashemi, A. Pouyan; Abouali, Mohammad; Hernandez-Suarez, Juan Sebastian; Daneshvar, Fariborz; Zhang, Zhen; Anderson, Martha C.; Sadeghi, Ali M.; Hain, Christopher R.; Sharifi, Amirreza
2018-01-01
As the global demand for freshwater resources continues to rise, it has become increasingly important to ensure the sustainability of these resources. This is accomplished through management strategies that often utilize monitoring and hydrological models. However, monitoring at large scales is not feasible, and model applications therefore become challenging, especially when spatially distributed datasets, such as evapotranspiration, are needed to understand model performance. Due to these limitations, most hydrological models are calibrated only against data from site/point observations, such as streamflow. Therefore, the main focus of this paper is to examine whether the incorporation of remotely sensed, spatially distributed datasets can improve the overall performance of the model. In this study, actual evapotranspiration (ETa) data were obtained from two different sets of satellite-based remote sensing data. One dataset estimates ETa based on the Simplified Surface Energy Balance (SSEBop) model, while the other estimates ETa based on the Atmosphere-Land Exchange Inverse (ALEXI) model. The hydrological model used in this study is the Soil and Water Assessment Tool (SWAT), which was calibrated against spatially distributed ETa and single-point streamflow records for the Honeyoey Creek-Pine Creek Watershed, located in Michigan, USA. Two different techniques, multi-variable and genetic algorithm, were used to calibrate the SWAT model. Using the aforementioned datasets, the performance of the hydrological model in estimating ETa was improved with both calibration techniques, achieving Nash-Sutcliffe efficiency (NSE) values >0.5 (0.73-0.85), percent bias (PBIAS) values within ±25% (±21.73%), and root mean squared error - observations standard deviation ratio (RSR) values <0.7 (0.39-0.52).
However, the genetic algorithm technique was more effective for the ETa calibration while significantly reducing the model performance for estimating streamflow (NSE: 0.32-0.52, PBIAS: ±32.73%, and RSR: 0.63-0.82). Meanwhile, using the multi-variable technique, the model performance for estimating streamflow was maintained with a high level of accuracy (NSE: 0.59-0.61, PBIAS: ±13.70%, and RSR: 0.63-0.64) while the evapotranspiration estimates were improved. Results from this assessment show that incorporating remotely sensed, spatially distributed data can improve hydrological model performance if coupled with the right calibration technique.
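The three performance statistics quoted above have standard definitions that are straightforward to compute. A small sketch with made-up observed/simulated series follows; note that the sign convention for PBIAS varies between authors, so treat the one below as an assumption.

```python
import math

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect; > 0.5 counts as satisfactory here."""
    mean_o = sum(obs) / len(obs)
    return 1.0 - (sum((o - s) ** 2 for o, s in zip(obs, sim))
                  / sum((o - mean_o) ** 2 for o in obs))

def pbias(obs, sim):
    """Percent bias; with this convention, positive means underestimation."""
    return 100.0 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)

def rsr(obs, sim):
    """RMSE divided by the standard deviation of the observations."""
    mean_o = sum(obs) / len(obs)
    rmse = math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))
    return rmse / math.sqrt(sum((o - mean_o) ** 2 for o in obs) / len(obs))

obs = [3.0, 5.0, 9.0, 6.0, 2.0]   # hypothetical observations
sim = [2.8, 5.4, 8.5, 6.3, 2.1]   # hypothetical model output
```

Against the thresholds quoted in the abstract (NSE > 0.5, |PBIAS| within 25%, RSR < 0.7), this toy simulation would be judged satisfactory.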
Webster, Victoria A; Nieto, Santiago G; Grosberg, Anna; Akkus, Ozan; Chiel, Hillel J; Quinn, Roger D
2016-10-01
In this study, new techniques for approximating the contractile properties of cells in biohybrid devices using Finite Element Analysis (FEA) have been investigated. Many current techniques for modeling biohybrid devices use individual cell forces to simulate the cellular contraction. However, such techniques result in long simulation runtimes. In this study we investigated the effect of the use of thermal contraction on simulation runtime. The thermal contraction model was significantly faster than models using individual cell forces, making it beneficial for rapidly designing or optimizing devices. Three techniques, Stoney's Approximation, a Modified Stoney's Approximation, and a Thermostat Model, were explored for calibrating the thermal expansion/contraction parameters (TECPs) needed to simulate cellular contraction using thermal contraction. The TECP values were calibrated using published data on the deflections of muscular thin films (MTFs). Using these techniques, TECP values that suitably approximate experimental deflections can be determined from experimental data obtained from cardiomyocyte MTFs. Furthermore, a sensitivity analysis was performed in order to investigate the contribution of individual variables, such as elastic modulus and layer thickness, to the final calibrated TECP for each calibration technique. Additionally, the TECP values are applicable to other types of biohybrid devices: two non-MTF models were simulated based on devices reported in the existing literature. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
García-Moreno, Angel-Iván; González-Barbosa, José-Joel; Ramírez-Pedraza, Alfonso; Hurtado-Ramos, Juan B.; Ornelas-Rodriguez, Francisco-Javier
2016-04-01
Computer-based reconstruction models can be used to approximate urban environments. These models are usually based on several mathematical approximations and the use of different sensors, which implies dependency on many variables. The sensitivity analysis presented in this paper is used to weigh the relative importance of each uncertainty contributor in the calibration of a panoramic camera-LiDAR system. Both sensors are used for three-dimensional urban reconstruction. Simulated and experimental tests were conducted. For the simulated tests we analyze and compare the calibration parameters using the Monte Carlo and Latin hypercube sampling techniques. Sensitivity to each variable involved in the calibration was computed by the Sobol method, which is based on analysis of the variance breakdown, and by the Fourier amplitude sensitivity test method, which is based on Fourier analysis. Sensitivity analysis is an essential tool in simulation modeling and for performing error propagation assessments.
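A variance-based first-order index of the kind computed by the Sobol method can be estimated with a plain Monte Carlo sketch. The two-input toy model and the Saltelli-style estimator below are illustrative assumptions, not the paper's camera-LiDAR simulation.

```python
import numpy as np

rng = np.random.default_rng(42)

def model(x):
    """Toy 'calibration' response: one dominant and one weak input."""
    return 4.0 * x[:, 0] + 0.5 * x[:, 1]

n, d = 20000, 2
A = rng.random((n, d))      # two independent sample matrices
B = rng.random((n, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

# First-order Sobol indices: replace column i of A with the one from B
# and measure how much the output follows that single input.
S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    S.append(float(np.mean(fB * (model(ABi) - fA)) / var))
```

For this linear toy model the analytic indices are 16/16.25 ≈ 0.985 and 0.25/16.25 ≈ 0.015, so the estimator should rank the first input as the dominant uncertainty contributor.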
NASA Astrophysics Data System (ADS)
Pupillo, G.; Naldi, G.; Bianchi, G.; Mattana, A.; Monari, J.; Perini, F.; Poloni, M.; Schiaffino, M.; Bolli, P.; Lingua, A.; Aicardi, I.; Bendea, H.; Maschio, P.; Piras, M.; Virone, G.; Paonessa, F.; Farooqui, Z.; Tibaldi, A.; Addamo, G.; Peverini, O. A.; Tascone, R.; Wijnholds, S. J.
2015-06-01
One of the most challenging aspects of the new-generation Low-Frequency Aperture Array (LFAA) radio telescopes is instrument calibration. The operational LOw-Frequency ARray (LOFAR) instrument and the future LFAA element of the Square Kilometre Array (SKA) require advanced calibration techniques to reach the expected outstanding performance. In this framework, a small array, called the Medicina Array Demonstrator (MAD), has been designed and installed in Italy to provide a test bench for antenna characterization and calibration techniques based on a flying artificial test source. A radio-frequency tone is transmitted through a dipole antenna mounted on a micro Unmanned Aerial Vehicle (UAV) (hexacopter) and received by each element of the array. A modern digital FPGA-based back-end is responsible for both data acquisition and data reduction. A simple amplitude and phase equalization algorithm is exploited for array calibration owing to the high stability and accuracy of the developed artificial test source. Both the measured embedded element patterns and calibrated array patterns are found to be in good agreement with the simulated data. The successful measurement campaign has demonstrated that a UAV-mounted test source provides a means to accurately validate and calibrate the fully polarized response of an antenna/array in operating conditions, thus including effects such as mutual coupling between the array elements and the contribution of the environment to the antenna patterns. A similar system can therefore find a future application in the SKA-LFAA context.
A method for soil moisture probes calibration and validation of satellite estimates.
Holzman, Mauro; Rivas, Raúl; Carmona, Facundo; Niclòs, Raquel
2017-01-01
Optimization of field techniques is crucial to ensure high-quality soil moisture data. The aim of this work is to present a sampling method for undisturbed soil and soil water content used to calibrate soil moisture probes, in the context of validating the SMOS (Soil Moisture and Ocean Salinity) mission MIRAS Level 2 soil moisture product in the Pampean Region of Argentina. The method avoids soil alteration and is recommended for calibrating probes by soil type under a free drying process at ambient temperature. A detailed explanation of the field and laboratory procedures used to obtain reference soil moisture is given. The calibration results reflected accurate operation of the Delta-T ThetaProbe ML2x probes in most of the analyzed cases (RMSE and bias ≤ 0.05 m³/m³). Post-calibration results indicated that the accuracy improves significantly when applying calibration adjustments based on soil type (RMSE ≤ 0.022 m³/m³, bias ≤ -0.010 m³/m³).
•A sampling method that provides high-quality soil water content data for probe calibration is described.
•Calibration based on soil type is important.
•A single calibration process for similar soil types may be suitable in practical terms, depending on the required accuracy level.
Technique for calibrating angular measurement devices when calibration standards are unavailable
NASA Technical Reports Server (NTRS)
Finley, Tom D.
1991-01-01
A calibration technique is proposed that will allow the calibration of certain angular measurement devices without requiring the use of an absolute standard. The technique assumes that the device to be calibrated has deterministic bias errors. A comparison device that meets the same requirements must be available. The two devices are compared; one device is then rotated with respect to the other, and a second comparison is performed. If the data are reduced using the described technique, the individual errors of the two devices can be determined.
NASA Technical Reports Server (NTRS)
Evans, Keith D.; Demoz, Belay B.; Cadirola, Martin P.; Melfi, S. H.; Whiteman, David N.; Schwemmer, Geary K.; Starr, David OC.; Schmidlin, F. J.; Feltz, Wayne
2000-01-01
The NASA/Goddard Space Flight Center Scanning Raman Lidar has made measurements of water vapor and aerosols for almost ten years. Calibration of the water vapor data has typically been performed by comparison with another water vapor sensor, such as radiosondes. We present a new method for water vapor calibration that requires only low clouds and surface pressure and temperature measurements. A sensitivity study was performed, and the cloud-base algorithm agrees with the radiosonde calibration to within 10-15%. Knowledge of the true atmospheric lapse rate is required to obtain more accurate cloud-base temperatures. Analyses of water vapor and aerosol measurements made in the vicinity of Hurricane Bonnie are discussed.
A pipette-based calibration system for fast-scan cyclic voltammetry with fast response times.
Ramsson, Eric S
2016-01-01
Fast-scan cyclic voltammetry (FSCV) is an electrochemical technique that utilizes the oxidation and/or reduction of an analyte of interest to infer rapid changes in concentrations. In order to calibrate the resulting oxidative or reductive current, known concentrations of an analyte must be introduced under controlled settings. Here, I describe a simple and cost-effective method, using a Petri dish and pipettes, for the calibration of carbon fiber microelectrodes (CFMs) using FSCV.
Self-calibration of photometric redshift scatter in weak-lensing surveys
Zhang, Pengjie; Pen, Ue-Li; Bernstein, Gary
2010-06-11
Photo-z errors, especially catastrophic errors, are a major uncertainty for precision weak lensing cosmology. We find that the shear-(galaxy number) density and density-density cross correlation measurements between photo-z bins, available from the same lensing surveys, contain valuable information for self-calibration of the scattering probabilities between the true-z and photo-z bins. The self-calibration technique we propose does not rely on cosmological priors nor parameterization of the photo-z probability distribution function, and preserves all of the cosmological information available from shear-shear measurement. We estimate the calibration accuracy through the Fisher matrix formalism. We find that, for advanced lensing surveys such as the planned stage IV surveys, the rate of photo-z outliers can be determined with statistical uncertainties of 0.01-1% for z < 2 galaxies. Among the several sources of calibration error that we identify and investigate, the galaxy distribution bias is likely the most dominant systematic error, whereby photo-z outliers have different redshift distributions and/or bias than non-outliers from the same bin. This bias affects all photo-z calibration techniques based on correlation measurements. As a result, galaxy bias variations of O(0.1) produce biases in photo-z outlier rates similar to the statistical errors of our method, so this galaxy distribution bias may bias the reconstructed scatters at the several-σ level, but is unlikely to completely invalidate the self-calibration technique.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anh Bui; Nam Dinh; Brian Williams
In addition to the validation data plan, the development of advanced techniques for calibration and validation of complex multiscale, multiphysics nuclear reactor simulation codes is a main objective of the CASL VUQ plan. Advanced modeling of LWR systems normally involves a range of physico-chemical models describing multiple interacting phenomena, such as thermal hydraulics, reactor physics, coolant chemistry, etc., which occur over a wide range of spatial and temporal scales. To a large extent, the accuracy of (and uncertainty in) overall model predictions is determined by the correctness of various sub-models, which are not based on conservation laws, but empirically derived from measurement data. Such sub-models normally require extensive calibration before the models can be applied to analysis of real reactor problems. This work demonstrates a case study of calibration of a common model of subcooled flow boiling, which is an important multiscale, multiphysics phenomenon in LWR thermal hydraulics. The calibration process is based on a new strategy of model-data integration, in which all sub-models are simultaneously analyzed and calibrated using multiple sets of data of different types. Specifically, both data on large-scale distributions of void fraction and fluid temperature and data on small-scale physics of wall evaporation were used simultaneously in this work's calibration. In a departure from the traditional (or common-sense) practice of tuning/calibrating complex models, a modern calibration technique based on statistical modeling and Bayesian inference was employed, which allowed simultaneous calibration of multiple sub-models (and related parameters) using different datasets. Quality of data (relevancy, scalability, and uncertainty) could be taken into consideration in the calibration process.
This work presents a step forward in the development and realization of the “CIPS Validation Data Plan” at the Consortium for Advanced Simulation of LWRs to enable quantitative assessment of the CASL modeling of the Crud-Induced Power Shift (CIPS) phenomenon, in particular, and the CASL advanced predictive capabilities, in general. This report is prepared for the Department of Energy's Consortium for Advanced Simulation of LWRs program's VUQ Focus Area.
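As an illustration of the kind of statistical calibration described above, the following is a minimal sketch, not the CASL implementation, of Bayesian calibration of a single empirical model parameter by random-walk Metropolis sampling. The sub-model, data, noise level, and prior are all invented for the example:

```python
import numpy as np

rng = np.random.default_rng(42)

# synthetic "measurement data" from a toy sub-model y = k * x with noise
k_true, sigma = 2.5, 0.1
x = np.linspace(0.1, 1.0, 20)
y_obs = k_true * x + rng.normal(0.0, sigma, x.size)

def log_posterior(k):
    """Gaussian likelihood plus a broad uniform prior on k in [0, 10]."""
    if not 0.0 <= k <= 10.0:
        return -np.inf
    resid = y_obs - k * x
    return -0.5 * np.sum(resid**2) / sigma**2

# random-walk Metropolis over the calibration parameter k
samples, k = [], 1.0
lp = log_posterior(k)
for _ in range(20_000):
    k_prop = k + rng.normal(0.0, 0.05)
    lp_prop = log_posterior(k_prop)
    if np.log(rng.random()) < lp_prop - lp:
        k, lp = k_prop, lp_prop
    samples.append(k)

k_post = np.mean(samples[5000:])   # posterior mean after burn-in
```

The posterior mean recovers the true parameter to within the data noise; the same machinery extends to several sub-model parameters sampled jointly against multiple datasets.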
Traceable Co-C eutectic points for thermocouple calibration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jahan, F.; Ballico, M. J.
2013-09-11
National Measurement Institute of Australia (NMIA) has developed a miniature crucible design suitable for measurement by both thermocouples and radiation thermometry, and has established an ensemble of five Co-C eutectic-point cells based on this design. The cells in this ensemble have been individually calibrated using both ITS-90 radiation thermometry and thermocouples calibrated on the ITS-90 by the NMIA mini-coil methodology. The assigned ITS-90 temperatures obtained using these different techniques are both repeatable and consistent, despite the use of different furnaces and measurement conditions. The results demonstrate that, if individually calibrated, such cells can be practically used as part of a national traceability scheme for thermocouple calibration, providing a useful intermediate calibration point between Cu and Pd.
NASA Astrophysics Data System (ADS)
Lu, Yuzhen; Lu, Renfu
2017-05-01
Three-dimensional (3-D) shape information is valuable for fruit quality evaluation. This study was aimed at developing phase analysis techniques for reconstruction of the 3-D surface of fruit from the pattern images acquired by a structured-illumination reflectance imaging (SIRI) system. Phase-shifted sinusoidal patterns, distorted by the fruit geometry, were acquired and processed through phase demodulation, phase unwrapping and other post-processing procedures to obtain phase difference maps relative to the phase of a reference plane. The phase maps were then transformed into height profiles and 3-D shapes in a world coordinate system based on phase-to-height and in-plane calibrations. A reference plane-based approach, coupled with curve fitting using polynomials of order 3 or higher, was utilized for phase-to-height calibration, achieving root-mean-squared errors (RMSEs) of 0.027-0.033 mm over a height measurement range of 0-91 mm. The 3rd-order polynomial curve fit was further tested on two reference blocks of known height, resulting in relative errors of 3.75% and 4.16%. In-plane calibration was performed by solving a linear system formed by a number of control points on a calibration object, which yielded an RMSE of 0.311 mm. Tests of the calibrated system for reconstructing the surface of apple samples showed that surface concavities (i.e., stem/calyx regions) could be easily discriminated from bruises in the phase difference maps, reconstructed height profiles and the 3-D shape of apples. This study has laid a foundation for using SIRI for 3-D shape measurement, and thus expanded the capability of the technique for quality evaluation of horticultural products. Further research is needed to utilize the phase analysis techniques for stem/calyx detection of apples, and to optimize the phase demodulation and unwrapping algorithms for faster and more reliable detection.
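A phase-to-height calibration of the kind described above reduces, at each pixel, to fitting a polynomial that maps measured phase difference to known gauge height. A minimal sketch with synthetic data might look like the following; the phase-height mapping and noise level are invented for illustration:

```python
import numpy as np

# synthetic calibration data: known gauge heights and the phase
# differences a fringe system might measure at one pixel (invented mapping)
heights = np.linspace(0.0, 91.0, 25)                      # mm
rng = np.random.default_rng(1)
phase = 0.08 * heights - 2e-4 * heights**2 + rng.normal(0, 1e-3, heights.size)

# 3rd-order polynomial phase-to-height calibration
coeffs = np.polyfit(phase, heights, deg=3)
height_from_phase = np.poly1d(coeffs)

# calibration residual (RMSE) over the 0-91 mm range
rmse = np.sqrt(np.mean((height_from_phase(phase) - heights) ** 2))
```

Once fitted, `height_from_phase` converts any unwrapped phase-difference map into a height profile; in practice the fit is repeated per pixel or per image region against a set of reference planes at known heights.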
A tunable laser system for precision wavelength calibration of spectra
NASA Astrophysics Data System (ADS)
Cramer, Claire
2010-02-01
We present a novel laser-based wavelength calibration technique that improves the precision of astronomical spectroscopy and solves a calibration problem inherent to multi-object spectroscopy. We have tested a prototype with the Hectochelle spectrograph at the MMT 6.5 m telescope. The Hectochelle is a high-dispersion, fiber-fed, multi-object spectrograph capable of recording up to 240 spectra simultaneously with a resolving power of 40,000. The standard wavelength calibration method uses spectra from ThAr hollow-cathode lamps shining directly onto the fibers. The difference in light path between calibration and science light and the uneven distribution of spectral lines are believed to introduce errors of up to several hundred m/s in the wavelength scale. Our tunable laser wavelength calibrator is bright enough for use with a dome screen, allowing the calibration light path to better match the science light path. Further, the laser is tuned in regular steps across a spectral order, creating a comb of evenly spaced lines on the detector. Using the solar spectrum reflected from the atmosphere to record the same spectrum in every fiber, we show that laser wavelength calibration brings radial velocity uncertainties down below 100 m/s. We also present results from studies of globular clusters, and explain how the calibration technique can aid in stellar age determinations, studies of young stars, and searches for dark matter clumping in the galactic halo.
2005-07-09
This final report summarizes the progress during the Phase I SBIR project entitled Embedded Electro-Optic Sensor Network for the On-Site Calibration... network based on an electro-optic field-detection technique (the Electro-optic Sensor Network, or ESN) for the performance evaluation of phased
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, J; Penfold, S; Royal Adelaide Hospital, Adelaide, SA
2015-06-15
Purpose: To investigate the robustness of dual-energy CT (DECT) and single-energy CT (SECT) proton stopping power calibration techniques and quantify the associated errors when imaging a phantom differing in chemical composition from that used during stopping power calibration. Methods: The CIRS tissue substitute phantom was scanned in a CT-simulator at 90 kV and 140 kV. This image set was used to generate a DECT proton SPR calibration based on a relationship between effective atomic number and mean excitation energy. A SECT proton SPR calibration based only on Hounsfield units (HUs) was also generated. DECT and SECT scans of a second phantom of known density and chemical composition were performed. The SPR of the second phantom was calculated with the DECT approach (SPR-DECT), the SECT approach (SPR-SECT) and finally the known density and chemical composition of the phantom (SPR-ref). The DECT and SECT image sets were imported into the Pinnacle³ research release of the proton therapy treatment planning system. The difference in dose when exposed to a common pencil beam distribution was investigated. Results: SPR-DECT was found to be in better agreement with SPR-ref than SPR-SECT. The mean difference in SPR for all materials was 0.51% for DECT and 6.89% for SECT. With the exception of Teflon, SPR-DECT was found to agree with SPR-ref to within 1%. Significant differences in calculated dose were found when using the DECT image set or the SECT image set. Conclusion: The DECT calibration technique was found to be more robust in situations where the physical properties of the test materials differed from the materials used during SPR calibration. Furthermore, it was demonstrated that the DECT and SECT SPR calibration techniques can result in significantly different calculated dose distributions.
Simon, Aaron B.; Dubowitz, David J.; Blockley, Nicholas P.; Buxton, Richard B.
2016-01-01
Calibrated blood oxygenation level dependent (BOLD) imaging is a multimodal functional MRI technique designed to estimate changes in cerebral oxygen metabolism from measured changes in cerebral blood flow and the BOLD signal. This technique addresses fundamental ambiguities associated with quantitative BOLD signal analysis; however, its dependence on biophysical modeling creates uncertainty in the resulting oxygen metabolism estimates. In this work, we developed a Bayesian approach to estimating the oxygen metabolism response to a neural stimulus and used it to examine the uncertainty that arises in calibrated BOLD estimation due to the presence of unmeasured model parameters. We applied our approach to estimate the CMRO2 response to a visual task using the traditional hypercapnia calibration experiment as well as to estimate the metabolic response to both a visual task and hypercapnia using the measurement of baseline apparent R2′ as a calibration technique. Further, in order to examine the effects of cerebral spinal fluid (CSF) signal contamination on the measurement of apparent R2′, we examined the effects of measuring this parameter with and without CSF-nulling. We found that the two calibration techniques provided consistent estimates of the metabolic response on average, with a median R2′-based estimate of the metabolic response to CO2 of 1.4%, and R2′- and hypercapnia-calibrated estimates of the visual response of 27% and 24%, respectively. However, these estimates were sensitive to different sources of estimation uncertainty. The R2′-calibrated estimate was highly sensitive to CSF contamination and to uncertainty in unmeasured model parameters describing flow-volume coupling, capillary bed characteristics, and the iso-susceptibility saturation of blood. The hypercapnia-calibrated estimate was relatively insensitive to these parameters but highly sensitive to the assumed metabolic response to CO2.
Broadband interferometric characterisation of nano-positioning stages with sub-10 pm resolution
NASA Astrophysics Data System (ADS)
Li, Zhi; Brand, Uwe; Wolff, Helmut; Koenders, Ludger; Yacoot, Andrew; Puranto, Prabowo
2017-06-01
A traceable calibration setup for investigating the quasi-static and dynamic performance of nano-positioning stages is detailed; it utilizes a differential plane-mirror interferometer with a double-pass configuration from the National Physical Laboratory (NPL). An NPL-developed FPGA-based interferometric data acquisition and decoding system has been used to enable traceable quasi-static calibration of nano-positioning stages with high resolution. A lock-in-based modulation technique is further introduced to quantitatively calibrate the dynamic response of moving stages with a bandwidth up to 100 kHz and picometer resolution. First experimental results have shown that the calibration setup can achieve, under nearly open-air conditions, a noise floor lower than 10 pm/√Hz. A pico-positioning stage, used for nanoindentation with indentation depths down to a few picometers, has been characterized with this calibration setup.
Current profilers and current meters: compass and tilt sensors errors and calibration
NASA Astrophysics Data System (ADS)
Le Menn, M.; Lusven, A.; Bongiovanni, E.; Le Dû, P.; Rouxel, D.; Lucas, S.; Pacaud, L.
2014-08-01
Current profilers and current meters have a magnetic compass and tilt sensors for relating measurements to a terrestrial reference frame. As compasses are sensitive to their magnetic environment, they must be calibrated in the configuration in which they will be used. A calibration platform for magnetic compasses and tilt sensors was built, based on a method developed in 2007, to correct angular errors and guarantee a measurement uncertainty for instruments mounted in mooring cages. As mooring cages can weigh up to 800 kg, it was necessary to find a suitable place to set up this platform, to map the magnetic field in the area, and to dimension the platform to withstand these loads. The platform was calibrated using a GPS positioning technique, and it has a table that can be tilted to calibrate the tilt sensors. The measurement uncertainty of the system was evaluated. Sinusoidal corrections based on the anomalies created by soft and hard magnetic materials were tested, as well as manufacturers' calibration methods.
Querol, Jorge; Tarongí, José Miguel; Forte, Giuseppe; Gómez, José Javier; Camps, Adriano
2017-05-10
MERITXELL is a ground-based multisensor instrument that includes a multiband dual-polarization radiometer, a GNSS reflectometer, and several optical sensors. Its main goals are twofold: to test data fusion techniques, and to develop Radio-Frequency Interference (RFI) detection, localization and mitigation techniques. The former is necessary to retrieve complementary data useful for developing geophysical models with improved accuracy, whereas the latter aims at solving one of the most important problems of microwave radiometry. This paper describes the hardware design, the instrument control architecture, the calibration of the radiometer, and several captures of RFI signals taken with MERITXELL in an urban environment. The multiband radiometer has a dual linear polarization total-power radiometer topology, and it covers the L-, S-, C-, X-, K-, Ka-, and W-bands. Its back-end stage is based on a spectrum analyzer structure that allows real-time signal processing, while the rest of the sensors are controlled by a host computer where the off-line processing takes place. The calibration of the radiometer is performed using the hot-cold load procedure, together with the tipping-curves technique in the case of the five upper frequency bands. Finally, some captures of RFI signals are shown for most of the radiometric bands under analysis, which evidence the problem of RFI in microwave radiometry and the limitations it imposes on external calibration.
On the Use of Deep Convective Clouds to Calibrate AVHRR Data
NASA Technical Reports Server (NTRS)
Doelling, David R.; Nguyen, Louis; Minnis, Patrick
2004-01-01
Remote sensing of cloud and radiation properties from National Oceanic and Atmospheric Administration (NOAA) Advanced Very High Resolution Radiometer (AVHRR) satellites requires constant monitoring of the visible sensors. NOAA satellites do not have onboard visible calibration and need to be calibrated vicariously in order to determine the calibration and the degradation rate. Deep convective clouds are extremely bright and cold, sit at the tropopause, have a nearly Lambertian reflectance, and provide predictable albedos. The use of deep convective clouds as calibration targets is developed into a calibration technique and applied to NOAA-16 and NOAA-17. The technique computes the relative gain drift over the lifespan of the satellite. This technique is validated by comparing the gain drifts derived from inter-calibration of coincident AVHRR and Moderate Resolution Imaging Spectroradiometer (MODIS) radiances. A ray-matched technique, which uses collocated, coincident, and co-angled pixel satellite radiance pairs, is used to intercalibrate MODIS and AVHRR. The deep convective cloud calibration technique was found to be independent of solar zenith angle, by using well-calibrated Visible Infrared Scanner (VIRS) radiances onboard the Tropical Rainfall Measuring Mission (TRMM) satellite, which precesses through all solar zenith angles in 23 days.
Hardware in the Loop Performance Assessment of LIDAR-Based Spacecraft Pose Determination.
Opromolla, Roberto; Fasano, Giancarmine; Rufino, Giancarlo; Grassi, Michele
2017-09-24
In this paper an original, easy to reproduce, semi-analytic calibration approach is developed for hardware-in-the-loop performance assessment of pose determination algorithms processing point cloud data, collected by imaging a non-cooperative target with LIDARs. The laboratory setup includes a scanning LIDAR, a monocular camera, a scaled-replica of a satellite-like target, and a set of calibration tools. The point clouds are processed by uncooperative model-based algorithms to estimate the target relative position and attitude with respect to the LIDAR. Target images, acquired by a monocular camera operated simultaneously with the LIDAR, are processed applying standard solutions to the Perspective-n-Points problem to get high-accuracy pose estimates which can be used as a benchmark to evaluate the accuracy attained by the LIDAR-based techniques. To this aim, a precise knowledge of the extrinsic relative calibration between the camera and the LIDAR is essential, and it is obtained by implementing an original calibration approach which does not need ad-hoc homologous targets (e.g., retro-reflectors) easily recognizable by the two sensors. The pose determination techniques investigated by this work are of interest to space applications involving close-proximity maneuvers between non-cooperative platforms, e.g., on-orbit servicing and active debris removal.
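The model-based pose estimation described above ultimately has to recover a rotation and translation that best align measured 3-D points with the target model. When point correspondences are assumed known, this reduces to the classic SVD-based (Kabsch) absolute-orientation step, sketched below on invented data; the papers' algorithms additionally solve the harder correspondence problem:

```python
import numpy as np

def estimate_pose(model_pts, measured_pts):
    """Least-squares rigid transform (R, t) such that
    measured ≈ R @ model + t, via the SVD-based Kabsch method.
    Assumes point correspondences are already known."""
    cm = model_pts.mean(axis=0)
    cs = measured_pts.mean(axis=0)
    H = (model_pts - cm).T @ (measured_pts - cs)    # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                              # proper rotation
    t = cs - R @ cm
    return R, t

# invented target model and a known pose to recover
rng = np.random.default_rng(0)
model = rng.random((50, 3))
angle = np.radians(30.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -0.2, 1.0])
measured = model @ R_true.T + t_true

R_est, t_est = estimate_pose(model, measured)
```

With noiseless correspondences the pose is recovered exactly; with LIDAR noise the same closed form gives the least-squares optimum.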
Evaluation of Automated Model Calibration Techniques for Residential Building Energy Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robertson, J.; Polly, B.; Collis, J.
2013-09-01
This simulation study adapts and applies the general framework described in BESTEST-EX (Judkoff et al 2010) for self-testing residential building energy model calibration methods. BEopt/DOE-2.2 is used to evaluate four mathematical calibration methods in the context of monthly, daily, and hourly synthetic utility data for a 1960's-era existing home in a cooling-dominated climate. The home's model inputs are assigned probability distributions representing uncertainty ranges, random selections are made from the uncertainty ranges to define 'explicit' input values, and synthetic utility billing data are generated using the explicit input values. The four calibration methods evaluated in this study are: an ASHRAE 1051-RP-based approach (Reddy and Maor 2006), a simplified simulated annealing optimization approach, a regression metamodeling optimization approach, and a simple output ratio calibration approach. The calibration methods are evaluated for monthly, daily, and hourly cases; various retrofit measures are applied to the calibrated models and the methods are evaluated based on the accuracy of predicted savings, computational cost, repeatability, automation, and ease of implementation.
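Of the four methods, the simulated-annealing approach is the easiest to sketch: perturb the uncertain model inputs, keep perturbations that reduce the error against the utility data, and occasionally accept worse ones at a decreasing "temperature". The energy model, inputs, and billing data below are toy inventions, not BEopt/DOE-2.2:

```python
import math
import random

random.seed(3)

# toy monthly "utility data" generated by a surrogate with two hidden inputs
def monthly_use(insulation, efficiency):
    # invented surrogate: use falls as insulation and efficiency improve
    return [30.0 * m / insulation / efficiency for m in range(1, 13)]

true_bills = monthly_use(insulation=2.0, efficiency=0.8)

def error(params):
    sim = monthly_use(*params)
    return sum((s - b) ** 2 for s, b in zip(sim, true_bills))

# simulated annealing over the two uncertain inputs
params = [1.0, 1.0]                       # initial guess
e = error(params)
temp = 100.0
for step in range(20_000):
    cand = [max(0.1, p + random.gauss(0.0, 0.05)) for p in params]
    e_cand = error(cand)
    if e_cand < e or random.random() < math.exp((e - e_cand) / temp):
        params, e = cand, e_cand
    temp *= 0.9995                        # geometric cooling schedule

calibrated_error = error(params)
```

Note the toy surrogate is deliberately degenerate (only the product of the two inputs matters), which mirrors a real hazard of utility-bill calibration: many input combinations can reproduce the same bills.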
NASA Astrophysics Data System (ADS)
Chinowsky, Timothy M.; Yee, Sinclair S.
2002-02-01
Surface plasmon resonance (SPR) affinity sensing, the problem of bulk refractive index (RI) interference in SPR sensing, and a sensor developed to overcome this problem are briefly reviewed. The sensor uses a design based on Texas Instruments' Spreeta SPR sensor to simultaneously measure both bulk and surface RI. The bulk RI measurement is then used to compensate the surface measurement and remove the effects of bulk RI interference. To achieve accurate compensation, robust data analysis and calibration techniques are necessary. Simple linear data analysis techniques derived from measurements of the sensor response were found to provide a versatile, low noise method for extracting measurements of bulk and surface refractive index from the raw sensor data. Automatic calibration using RI gradients was used to correct the linear estimates, enabling the sensor to produce accurate data even when the sensor has a complicated nonlinear response which varies with time. The calibration procedure is described, and the factors influencing calibration accuracy are discussed. Data analysis and calibration principles are illustrated with an experiment in which sucrose and detergent solutions are used to produce changes in bulk and surface RI, respectively.
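The compensation idea can be sketched in a few lines: a linear bulk-coupling coefficient is calibrated from a blank interval where only bulk RI varies, then subtracted from the sensing channel. All signal magnitudes and the coefficient below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
bulk = np.cumsum(rng.normal(0.0, 1e-5, n))          # drifting bulk RI (RIU)
surface = np.where(np.arange(n) < 50, 0.0, 5e-5)    # binding step after sample 50
alpha_true = 0.9                                    # bulk sensitivity of sensing channel
sensing = surface + alpha_true * bulk + rng.normal(0, 1e-7, n)
reference = bulk + rng.normal(0, 1e-7, n)

# Calibrate the bulk coupling on the first 50 samples, where only bulk RI varies.
A = np.vstack([reference[:50], np.ones(50)]).T
(alpha, offset), *_ = np.linalg.lstsq(A, sensing[:50], rcond=None)

# Remove the bulk contribution, leaving the surface binding signal.
compensated = sensing - (alpha * reference + offset)
```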
NASA Astrophysics Data System (ADS)
Webb, Mathew A.; Hall, Andrew; Kidd, Darren; Minansy, Budiman
2016-05-01
Assessment of local spatial climatic variability is important in the planning of planting locations for horticultural crops. This study investigated three regression-based calibration methods (i.e. traditional versus two optimized methods) to relate short-term 12-month data series from 170 temperature loggers and 4 weather station sites with data series from nearby long-term Australian Bureau of Meteorology climate stations. The techniques trialled to interpolate climatic temperature variables, such as frost risk, growing degree days (GDDs) and chill hours, were regression kriging (RK), regression trees (RTs) and random forests (RFs). All three calibration methods produced accurate results, with the RK-based calibration method delivering the most accurate validation measures: coefficients of determination (R^2) of 0.92, 0.97 and 0.95 and root-mean-square errors (RMSEs) of 1.30, 0.80 and 1.31 °C, for daily minimum, daily maximum and hourly temperatures, respectively. Compared with the traditional method of calibration using direct linear regression between short-term and long-term stations, the RK-based calibration method improved R^2 and reduced RMSE by at least 5% and 0.47 °C for daily minimum temperature, 1% and 0.23 °C for daily maximum temperature, and 3% and 0.33 °C for hourly temperature. Spatial modelling indicated insignificant differences between the interpolation methods, with the RK technique tending to be the slightly better method due to the high degree of spatial autocorrelation between logger sites.
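The traditional baseline, direct linear regression of a short-term logger on a nearby long-term station together with the R^2 and RMSE validation measures, can be sketched as follows (temperatures and coefficients are synthetic, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(3)
station = rng.uniform(-2.0, 35.0, 365)                    # long-term station daily minima (deg C)
logger = 1.5 + 0.95 * station + rng.normal(0, 1.0, 365)   # co-located short-term logger

# Fit logger = a * station + b by least squares.
a, b = np.polyfit(station, logger, 1)
pred = a * station + b

# Validation measures reported in the study: R^2 and RMSE.
ss_res = np.sum((logger - pred) ** 2)
ss_tot = np.sum((logger - logger.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
rmse = np.sqrt(np.mean((logger - pred) ** 2))
```

The optimized methods (RK, RT, RF) replace this single linear map with spatially aware regressors, but are validated with the same two statistics.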
ITER-like antenna capacitors voltage probes: Circuit/electromagnetic calculations and calibrations.
Helou, W; Dumortier, P; Durodié, F; Lombard, G; Nicholls, K
2016-10-01
The analyses illustrated in this manuscript were performed to provide the data required for the amplitude-and-phase calibration of the D-dot voltage probes used in the ITER-like antenna at the Joint European Torus tokamak. The probes' equivalent electrical circuit has been extracted and analyzed, and compared to that of voltage probes installed in simple transmission lines. A radio-frequency calibration technique has been formulated and exact mathematical relations have been derived. This technique elegantly combines data extracted from measurements with numerical calculations to retrieve the calibration factors. The latter have been compared to previous calibration data with excellent agreement, proving the robustness of the proposed radio-frequency calibration technique. In particular, it has been stressed that it is crucial to take into account environmental parasitic effects. A low-frequency calibration technique has additionally been formulated and analyzed in depth. The equivalence between the radio-frequency and low-frequency techniques has been rigorously demonstrated. The radio-frequency calibration technique is preferable in the case of the ITER-like antenna due to uncertainties in the characteristics of the cables connected at the inputs of the voltage probes. A method to extract the effect of a mismatched data acquisition system has been derived for both calibration techniques. Finally, it has been outlined that in the case of the ITER-like antenna the voltage probes can additionally be used to monitor the currents at the inputs of the antenna.
A Method to Test Model Calibration Techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Judkoff, Ron; Polly, Ben; Neymark, Joel
This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique: 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.
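Because the "truth" is synthetic, the three figures of merit are straightforward to compute. A sketch with invented numbers (the function name and the CV(RMSE)-style fit metric are illustrative choices, not the paper's exact definitions):

```python
import math

def figures_of_merit(true_savings, pred_savings, true_params, cal_params,
                     bills, model_bills):
    """1) relative savings error, 2) worst-case closure on true inputs,
    3) coefficient of variation of RMSE against the utility bills."""
    savings_err = abs(pred_savings - true_savings) / abs(true_savings)
    param_closure = max(abs(c - t) / abs(t)
                        for c, t in zip(cal_params, true_params))
    cvrmse = math.sqrt(sum((b - m) ** 2 for b, m in zip(bills, model_bills))
                       / len(bills)) / (sum(bills) / len(bills))
    return savings_err, param_closure, cvrmse

s_err, p_close, cv = figures_of_merit(
    true_savings=1200.0, pred_savings=1100.0,
    true_params=[1.8, 300.0], cal_params=[1.9, 280.0],
    bills=[900, 850, 700, 500, 400, 380],
    model_bills=[880, 870, 690, 520, 390, 400])
```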
Litzenberg, Dale W; Gallagher, Ian; Masi, Kathryn J; Lee, Choonik; Prisciandaro, Joann I; Hamstra, Daniel A; Ritter, Timothy; Lam, Kwok L
2013-08-01
To present and characterize a measurement technique to quantify the calibration accuracy of an electromagnetic tracking system to radiation isocenter. This technique was developed as a quality assurance method for electromagnetic tracking systems used in a multi-institutional clinical hypofractionated prostate study. In this technique, the electromagnetic tracking system is calibrated to isocenter with the manufacturer's recommended technique, using laser-based alignment. A test patient is created with a transponder at isocenter whose position is measured electromagnetically. Four portal images of the transponder are taken with collimator rotations of 45°, 135°, 225°, and 315°, at each of four gantry angles (0°, 90°, 180°, 270°) using a 3×6 cm² radiation field. In each image, the center of the copper-wrapped iron core of the transponder is determined. All measurements are made relative to this transponder position to remove gantry and imager sag effects. For each of the 16 images, the 50% collimation edges are identified and used to find a ray representing the rotational axis of each collimation edge. The 16 collimator rotation rays from four gantry angles pass through and bound the radiation isocenter volume. The center of the bounded region, relative to the transponder, is calculated and then transformed to tracking system coordinates using the transponder position, allowing the tracking system's calibration offset from radiation isocenter to be found. All image analysis and calculations are automated with in-house software for user-independent accuracy. Three different tracking systems at two different sites were evaluated for this study. The magnitude of the calibration offset was always less than the manufacturer's stated accuracy of 0.2 cm using their standard clinical calibration procedure, and ranged from 0.014 to 0.175 cm. On three systems in clinical use, the magnitude of the offset was found to be 0.053±0.036, 0.121±0.023, and 0.093±0.013 cm. 
The method presented here provides an independent technique to verify the calibration of an electromagnetic tracking system to radiation isocenter. The calibration accuracy of the system was better than the 0.2 cm accuracy stated by the manufacturer. However, it should not be assumed to be zero, especially for stereotactic radiation therapy treatments where planning target volume margins are very small.
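Locating the center of the region bounded by the collimation-edge rays reduces to a small linear least-squares problem: find the point minimizing the summed squared distance to the rays. A sketch with synthetic rays (the image-analysis steps that produce the rays are omitted):

```python
import numpy as np

def nearest_point_to_rays(origins, directions):
    """Point minimizing summed squared distance to rays (a_i, unit d_i).

    Solves sum_i (I - d_i d_i^T) p = sum_i (I - d_i d_i^T) a_i.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for a, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        M = np.eye(3) - np.outer(d, d)   # projector onto the plane normal to d
        A += M
        b += M @ a
    return np.linalg.solve(A, b)

# Sixteen rays through a common point, e.g. radiation isocenter with a small
# true offset from the tracking-system origin (cm).
rng = np.random.default_rng(4)
iso = np.array([0.05, -0.02, 0.01])
dirs = rng.normal(size=(16, 3))
origins = iso - dirs * rng.uniform(5.0, 10.0, 16)[:, None]  # points back along each ray

est = nearest_point_to_rays(origins, dirs)
```

With noisy, nearly intersecting rays the same solve returns the least-squares center of the bounded volume.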
Hybrid PSO-ASVR-based method for data fitting in the calibration of infrared radiometer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Sen; Li, Chengwei, E-mail: heikuanghit@163.com
2016-06-15
The present paper describes a hybrid particle swarm optimization-adaptive support vector regression (PSO-ASVR)-based method for data fitting in the calibration of infrared radiometers. The proposed hybrid PSO-ASVR-based method is based on PSO in combination with Adaptive Processing and Support Vector Regression (SVR). The optimization technique involves setting parameters in the ASVR fitting procedure, which significantly improves the fitting accuracy. However, its use in the calibration of infrared radiometers has not yet been widely explored. Bearing this in mind, the PSO-ASVR-based method, which is based on statistical learning theory, is successfully used here to obtain the relationship between the radiation of a standard source and the response of an infrared radiometer. The main advantages of this method are the flexible adjustment mechanism in data processing and the optimization mechanism in the kernel parameter setting of SVR. Numerical examples and applications to the calibration of infrared radiometers are performed to verify the performance of the PSO-ASVR-based method compared to conventional data fitting methods.
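The PSO component can be sketched in isolation: a minimal swarm minimizing the residual of a hypothetical radiometer calibration curve. The curve, bounds, and swarm constants are stand-ins; the SVR stage is not reproduced here:

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, iters=100, seed=5):
    """Minimal particle swarm optimizer for a scalar objective f(x)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, (n_particles, len(bounds)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Inertia plus cognitive (personal best) and social (global best) pulls.
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Hypothetical calibration curve: radiometer response y = a * T**4 + c.
T = np.linspace(300.0, 1000.0, 40)
y = 2.0e-9 * T ** 4 + 0.5

def residual(p):
    a, c = p
    return np.sum((a * T ** 4 + c - y) ** 2)

(a_fit, c_fit), err = pso_minimize(residual, [(1e-10, 1e-8), (0.0, 2.0)])
```

In the hybrid method the objective would instead score an SVR fit, with the swarm searching over its kernel parameters.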
An atlas of selected calibrated stellar spectra
NASA Technical Reports Server (NTRS)
Walker, Russell G.; Cohen, Martin
1992-01-01
Five hundred fifty-six stars in the IRAS PSC-2 that are suitable as stellar radiometric standards and are brighter than 1 Jy at 25 microns were identified. In addition, 123 stars were identified that meet all of our criteria for calibration standards but lack a luminosity class. An approach to absolute stellar calibration of broadband infrared filters, based upon new models of Vega and Sirius due to Kurucz (1992), is presented. A general technique used to assemble continuous wide-band calibrated infrared spectra is described; an absolutely calibrated 1-35 micron spectrum of alpha(Tau) is constructed and the method is independently validated using new and carefully designed observations. The absolute calibration of the IRAS Low Resolution Spectrometer (LRS) database is investigated by comparing the observed spectrum of alpha(Tau) with that assumed in the original LRS calibration scheme. Neglect of the SiO fundamental band in alpha(Tau) has led to the presence of a specious 'emission' feature in all LRS spectra near 8.5 microns, and to an incorrect spectral slope between 8 and 12 microns. Finally, some of the properties of asteroids that affect their utility as calibration objects for the middle and far infrared region are examined. A technique is developed to determine, from IRAS multiwaveband observations, the basic physical parameters needed by various asteroid thermal models while minimizing the number of assumptions required.
NASA Technical Reports Server (NTRS)
Whiteman, David N.; Venable, Demetrius; Landulfo, Eduardo
2012-01-01
In a recent publication, LeBlanc and McDermid proposed a hybrid calibration technique for Raman water vapor lidar involving a tungsten lamp and radiosondes. Measurements made with the lidar telescope viewing the calibration lamp were used to stabilize the lidar calibration determined by comparison with radiosonde. The technique provided a significantly more stable calibration constant than radiosondes used alone. The technique involves the use of a calibration lamp in a fixed position in front of the lidar receiver aperture. We examine this configuration and find that it likely does not properly sample the full lidar system optical efficiency. While the technique is a useful addition to the use of radiosondes alone for lidar calibration, it is important to understand the scenarios under which it will not provide an accurate quantification of system optical efficiency changes. We offer examples of these scenarios.
Le, Huy Q.; Molloi, Sabee
2011-01-01
Purpose: Energy resolving detectors provide more than one spectral measurement in one image acquisition. The purpose of this study is to investigate, with simulation, the ability to decompose four materials using energy discriminating detectors and least squares minimization techniques. Methods: Three least squares parameter estimation decomposition techniques were investigated for four-material breast imaging tasks in the image domain. The first technique treats the voxel as if it consisted of fractions of all the materials. The second method assumes that a voxel primarily contains one material and divides the decomposition process into segmentation and quantification tasks. The third is similar to the second method but a calibration was used. The simulated computed tomography (CT) system consisted of an 80 kVp spectrum and a CdZnTe (CZT) detector that could resolve the x-ray spectrum into five energy bins. A postmortem breast specimen was imaged with flat panel CT to provide a model for the digital phantoms. Hydroxyapatite (HA) (50, 150, 250, 350, 450, and 550 mg∕ml) and iodine (4, 12, 20, 28, 36, and 44 mg∕ml) contrast elements were embedded into the glandular region of the phantoms. Calibration phantoms consisted of a 30∕70 glandular-to-adipose tissue ratio with embedded HA (100, 200, 300, 400, and 500 mg∕ml) and iodine (5, 15, 25, 35, and 45 mg∕ml). The x-ray transport process was simulated where the Beer–Lambert law, Poisson process, and CZT absorption efficiency were applied. Qualitative and quantitative evaluations of the decomposition techniques were performed and compared. The effect of breast size was also investigated. Results: The first technique decomposed iodine adequately but failed for other materials. The second method separated the materials but was unable to quantify the materials. With the addition of a calibration, the third technique provided good separation and quantification of hydroxyapatite, iodine, glandular, and adipose tissues. 
Quantification with this technique was accurate with errors of 9.83% and 6.61% for HA and iodine, respectively. Calibration at one point (one breast size) showed increased errors as the mismatch in breast diameters between calibration and measurement increased. A four-point calibration successfully decomposed breast diameter spanning the entire range from 8 to 20 cm. For a 14 cm breast, errors were reduced from 5.44% to 1.75% and from 6.17% to 3.27% with the multipoint calibration for HA and iodine, respectively. Conclusions: The results of the simulation study showed that a CT system based on CZT detectors in conjunction with least squares minimization technique can be used to decompose four materials. The calibrated least squares parameter estimation decomposition technique performed the best, separating and accurately quantifying the concentrations of hydroxyapatite and iodine. PMID:21361193
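The first technique, in which every voxel is a mixture of all four basis materials, is a linear least-squares problem per voxel. The sketch below uses invented attenuation coefficients; the near-collinearity of the glandular and adipose columns hints at why the uncalibrated approaches struggle to separate soft tissues:

```python
import numpy as np

# Rows: 5 energy bins; columns: attenuation-like coefficients for
# hydroxyapatite, iodine, glandular, adipose (hypothetical numbers).
A = np.array([
    [3.10, 9.50, 0.38, 0.30],
    [2.20, 6.80, 0.30, 0.25],
    [1.60, 4.90, 0.26, 0.22],
    [1.20, 3.60, 0.23, 0.20],
    [0.95, 2.70, 0.21, 0.19],
])

x_true = np.array([0.15, 0.02, 0.60, 0.23])      # material fractions in one voxel
rng = np.random.default_rng(6)
y = A @ x_true + rng.normal(0, 1e-6, 5)          # measured attenuation per energy bin

# Overdetermined system: 5 bins, 4 materials -> least squares per voxel.
x_est, *_ = np.linalg.lstsq(A, y, rcond=None)
```

With realistic noise the soft-tissue columns amplify errors strongly, which is consistent with the paper's finding that segmentation and calibration steps are needed for accurate quantification.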
NASA Astrophysics Data System (ADS)
Lin, Tsungpo
Performance engineers face a major challenge in modeling and simulation of after-market power systems due to system degradation and measurement errors. Currently, most of the power generation industry utilizes deterministic data matching to calibrate the model and cascade system degradation, which causes significant calibration uncertainty and with it the risk of providing performance guarantees. In this research work, a maximum-likelihood based simultaneous data reconciliation and model calibration (SDRMC) is used for power system modeling and simulation. By replacing the current deterministic data matching with SDRMC one can reduce the calibration uncertainty and mitigate the error propagation to the performance simulation. A modeling and simulation environment for a complex power system with certain degradation has been developed. In this environment multiple data sets are imported when carrying out simultaneous data reconciliation and model calibration. Calibration uncertainties are estimated through error analyses and propagated to the performance simulation using the principle of error propagation. System degradation is then quantified by performance comparison between the calibrated model and its expected new-and-clean status. To mitigate smearing effects caused by gross errors, gross error detection (GED) is carried out in two stages. The first stage is a screening stage, in which serious gross errors are eliminated in advance. The GED techniques used in the screening stage are based on multivariate data analysis (MDA), including multivariate data visualization and principal component analysis (PCA). Subtle gross errors are treated at the second stage, in which serial bias compensation or a robust M-estimator is engaged. To achieve better efficiency in the combined scheme of least-squares-based data reconciliation and hypothesis-testing-based GED, the Levenberg-Marquardt (LM) algorithm is utilized as the optimizer. 
To reduce the computation time and stabilize the problem solving for a complex power system such as a combined cycle power plant, meta-modeling using response surface equations (RSEs) and system/process decomposition are incorporated into the simultaneous scheme of SDRMC. The goal of this research work is to reduce the calibration uncertainties and, thus, the risks of providing performance guarantees arising from uncertainties in performance simulation.
NASA Technical Reports Server (NTRS)
Racette, Paul; Lang, Roger; Zhang, Zhao-Nan; Zacharias, David; Krebs, Carolyn A. (Technical Monitor)
2002-01-01
Radiometers must be periodically calibrated because the receiver response fluctuates. Many techniques exist to correct for the time-varying response of a radiometer receiver. An analytical technique has been developed that uses generalized least squares regression (LSR) to predict the performance of a wide variety of calibration algorithms. The total measurement uncertainty, including the uncertainty of the calibration, can be computed using LSR. The uncertainties of the calibration samples used in the regression are based upon treating the receiver fluctuations as non-stationary processes. Signals originating from the different sources of emission are treated as simultaneously existing random processes; thus, the radiometer output is a series of samples obtained from these random processes. The samples are treated as random variables, but because the underlying processes are non-stationary, the statistics of the samples are treated as non-stationary as well. The statistics of the calibration samples depend upon the time for which the samples are to be applied: the statistics of the random variables are equated to the mean statistics of the non-stationary processes over the interval defined by the time of the calibration sample and when it is applied. This analysis opens the opportunity for experimental investigation into the underlying properties of receiver non-stationarity through the use of multiple calibration references. In this presentation we discuss the application of LSR to the analysis of various calibration algorithms, requirements for experimental verification of the theory, and preliminary results from analyzing experimental measurements.
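A minimal two-reference example shows the regression step: repeated looks at hot and cold references yield a least-squares gain and offset, which are then applied to a scene measurement. All numbers are invented, and the sketch ignores the non-stationary weighting that the full LSR treatment adds:

```python
import numpy as np

rng = np.random.default_rng(7)
T_cold, T_hot = 80.0, 300.0                 # reference brightness temperatures (K)
gain_true, offset_true = 12.0, 1500.0       # counts/K, counts

# Repeated looks at each reference, with additive receiver noise.
T_ref = np.array([T_cold, T_hot] * 20)
counts = gain_true * T_ref + offset_true + rng.normal(0, 5.0, T_ref.size)

# Least-squares regression of counts on reference temperature.
gain, offset = np.polyfit(T_ref, counts, 1)

# Calibrate a (noise-free, for clarity) scene measurement.
scene_counts = gain_true * 150.0 + offset_true
T_scene = (scene_counts - offset) / gain
```

Generalized LSR extends this by weighting the calibration samples according to how the receiver statistics drift between the calibration time and the time of application.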
Improved RF Measurements of SRF Cavity Quality Factors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holzbauer, J. P.; Contreras, C.; Pischalnikov, Y.
SRF cavity quality factors can be accurately measured using RF-power-based techniques only when the cavity is very close to critically coupled. This limitation stems from systematic errors driven by non-ideal RF components; when the cavity is not close to critically coupled, these systematic effects limit the accuracy of the measurements. Combining the complex base-band envelopes of the cavity RF signals with a trombone in the circuit allows the relative calibration of the RF signals to be extracted from the data and the systematic effects to be characterized and suppressed. The improved calibration allows accurate measurements to be made over a much wider range of couplings. Demonstration of these techniques during testing of a single-spoke resonator with a coupling factor near 7 will be presented, along with recommendations for applying these techniques.
High-Speed Imaging Optical Pyrometry for Study of Boron Nitride Nanotube Generation
NASA Technical Reports Server (NTRS)
Inman, Jennifer A.; Danehy, Paul M.; Jones, Stephen B.; Lee, Joseph W.
2014-01-01
A high-speed imaging optical pyrometry system is designed for making in-situ measurements of boron temperature during the boron nitride nanotube synthesis process. Spectrometer measurements show molten boron emission to be essentially graybody in nature, lacking spectral emission fine structure over the visible range of the electromagnetic spectrum. Camera calibration experiments are performed and compared with theoretical calculations to quantitatively establish the relationship between observed signal intensity and temperature. The one-color pyrometry technique described herein involves measuring temperature based upon the absolute signal intensity observed through a narrowband spectral filter, while the two-color technique uses the ratio of the signals through two spectrally separated filters. The present study calibrated both the one- and two-color techniques at temperatures between 1,173 K and 1,591 K using a pco.dimax HD CMOS-based camera along with three such filters having transmission peaks near 550 nm, 632.8 nm, and 800 nm.
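Under the graybody assumption, emissivity cancels in the two-color ratio, and Wien's approximation gives temperature in closed form. A sketch using two of the filter wavelengths named above (the intensity scale and the emissivity value are arbitrary):

```python
import math

C2 = 1.438776877e-2   # second radiation constant h*c/k_B (m K)

def wien_intensity(lam, T, eps=0.3):
    """Graybody spectral intensity in the Wien approximation (arbitrary scale)."""
    return eps * lam ** -5 * math.exp(-C2 / (lam * T))

def two_color_temperature(sig1, sig2, lam1, lam2):
    """Temperature from the ratio of two narrowband signals; emissivity cancels."""
    ratio = sig1 / sig2
    return C2 * (1.0 / lam2 - 1.0 / lam1) / math.log(ratio * (lam1 / lam2) ** 5)

lam1, lam2 = 632.8e-9, 800.0e-9   # two of the filters used in the study (m)
T_true = 1400.0
s1 = wien_intensity(lam1, T_true)
s2 = wien_intensity(lam2, T_true)
T_est = two_color_temperature(s1, s2, lam1, lam2)
```

The one-color technique instead relies on the absolute signal level through a single filter, which is why it needs the camera-calibration experiments described above.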
Bayesian Treed Calibration: An Application to Carbon Capture With AX Sorbent
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konomi, Bledar A.; Karagiannis, Georgios; Lai, Kevin
2017-01-02
In cases where field or experimental measurements are not available, computer models can model real physical or engineering systems to reproduce their outcomes. They are usually calibrated in light of experimental data to create a better representation of the real system. Statistical methods based on Gaussian processes for calibration and prediction have been especially important when the computer models are expensive and experimental data limited. In this paper, we develop Bayesian treed calibration (BTC) as an extension of standard Gaussian process calibration methods to deal with non-stationary computer models and/or their discrepancy from the field (or experimental) data. Our proposed method partitions both the calibration and observable input space, based on a binary tree partitioning, into sub-regions where existing model calibration methods can be applied to connect a computer model with the real system. The estimation of the parameters in the proposed model is carried out using Markov chain Monte Carlo (MCMC) computational techniques. Different strategies have been applied to improve mixing. We illustrate our method in two artificial examples and a real application that concerns the capture of carbon dioxide with AX amine-based sorbents. The source code and the examples analyzed in this paper are available as part of the supplementary materials.
Waytowich, Nicholas R.; Lawhern, Vernon J.; Bohannon, Addison W.; Ball, Kenneth R.; Lance, Brent J.
2016-01-01
Recent advances in signal processing and machine learning techniques have enabled the application of Brain-Computer Interface (BCI) technologies to fields such as medicine, industry, and recreation; however, BCIs still suffer from the requirement of frequent calibration sessions due to the intra- and inter-individual variability of brain-signals, which makes calibration suppression through transfer learning an area of increasing interest for the development of practical BCI systems. In this paper, we present an unsupervised transfer method (spectral transfer using information geometry, STIG), which ranks and combines unlabeled predictions from an ensemble of information geometry classifiers built on data from individual training subjects. The STIG method is validated in both off-line and real-time feedback analysis during a rapid serial visual presentation task (RSVP). For detection of single-trial, event-related potentials (ERPs), the proposed method can significantly outperform existing calibration-free techniques as well as outperform traditional within-subject calibration techniques when limited data is available. This method demonstrates that unsupervised transfer learning for single-trial detection in ERP-based BCIs can be achieved without the requirement of costly training data, representing a step-forward in the overall goal of achieving a practical user-independent BCI system. PMID:27713685
HoloHands: games console interface for controlling holographic optical manipulation
NASA Astrophysics Data System (ADS)
McDonald, C.; McPherson, M.; McDougall, C.; McGloin, D.
2013-03-01
The increasing number of applications for holographic manipulation techniques has sparked the development of more accessible control interfaces. Here, we describe a holographic optical tweezers experiment which is controlled by gestures that are detected by a Microsoft Kinect. We demonstrate that this technique can be used to calibrate the tweezers using the Stokes drag method and compare this to automated calibrations. We also show that multiple particle manipulation can be handled. This is a promising new line of research for gesture-based control which could find applications in a wide variety of experimental situations.
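The Stokes drag calibration mentioned above can be sketched with its underlying arithmetic; the viscosity, bead size, speed, and displacement below are invented illustrative values, not figures from the experiment:

```python
import math

# Invented illustrative values, not figures from the experiment.
eta = 8.9e-4   # dynamic viscosity of water near room temperature, Pa*s
r = 1.0e-6     # bead radius, m
v = 20e-6      # relative fluid/stage speed, m/s
x = 150e-9     # observed bead displacement from the trap center, m

# Stokes drag on a sphere, F = 6*pi*eta*r*v, balances the trap's
# linear restoring force k*x, which gives the trap stiffness k.
drag = 6 * math.pi * eta * r * v   # N
k = drag / x                       # trap stiffness, N/m
print(k * 1e6)                     # stiffness expressed in pN/um
```

At these values the stiffness comes out at a few pN/um, typical for optical tweezers.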
GOME Total Ozone and Calibration Error Derived Using Version 8 TOMS Algorithm
NASA Technical Reports Server (NTRS)
Gleason, J.; Wellemeyer, C.; Qin, W.; Ahn, C.; Gopalan, A.; Bhartia, P.
2003-01-01
The Global Ozone Monitoring Experiment (GOME) is a hyper-spectral satellite instrument measuring the ultraviolet backscatter at relatively high spectral resolution. GOME radiances have been slit averaged to emulate measurements of the Total Ozone Mapping Spectrometer (TOMS) made at discrete wavelengths and processed using the new TOMS Version 8 Ozone Algorithm. Compared to Differential Optical Absorption Spectroscopy (DOAS) techniques based on local structure in the Huggins Bands, the TOMS algorithm uses differential absorption between a pair of wavelengths including the local structure as well as the background continuum. This makes the TOMS Algorithm more sensitive to ozone, but it also makes the algorithm more sensitive to instrument calibration errors. While calibration adjustments are not needed for fitting techniques like the DOAS employed in GOME algorithms, some adjustment is necessary when applying the TOMS Algorithm to GOME. Using spectral discrimination at near-ultraviolet wavelength channels unabsorbed by ozone, the GOME wavelength-dependent calibration drift is estimated and then checked using pair justification. In addition, the day-one calibration offset is estimated based on the residuals of the Version 8 TOMS Algorithm. The estimated drift in the 2b detector of GOME is small through the first four years and then increases rapidly to +5% in normalized radiance at 331 nm relative to 385 nm by mid-2000. The 1b detector appears to be quite well behaved throughout this time period.
Borysov, Stanislav S.; Forchheimer, Daniel; Haviland, David B.
2014-10-29
Here we present a theoretical framework for the dynamic calibration of the higher eigenmode parameters (stiffness and optical lever inverse responsivity) of a cantilever. The method is based on the tip–surface force reconstruction technique and does not require any prior knowledge of the eigenmode shape or the particular form of the tip–surface interaction. The calibration method proposed requires a single-point force measurement by using a multimodal drive and its accuracy is independent of the unknown physical amplitude of a higher eigenmode.
Calibration of polarimetric radar systems with good polarization isolation
NASA Technical Reports Server (NTRS)
Sarabandi, Kamal; Ulaby, Fawwaz T.; Tassoudji, M. Ali
1990-01-01
A practical technique is proposed for calibrating single-antenna polarimetric radar systems using a metal sphere plus any second target with a strong cross-polarized radar cross section. This technique assumes perfect isolation between antenna ports. It is shown that all magnitudes and phases (relative to one of the like-polarized linear polarization configurations) of the radar transfer function can be calibrated without knowledge of the scattering matrix of the second target. Comparison of the values measured (using this calibration technique) for a tilted cylinder at X-band with theoretical values shows agreement within ±0.3 dB in magnitude and ±5 degrees in phase. The radar overall cross-polarization isolation was 25 dB. The technique is particularly useful for calibrating a radar under field conditions, because it does not require the careful alignment of calibration targets.
Raghavan, Karthik; Feldman, Marc D; Porterfield, John E; Larson, Erik R; Jenkins, J Travis; Escobedo, Daniel; Pearce, John A; Valvano, Jonathan W
2011-06-01
This paper presents the design, construction and testing of a device to measure pressure-volume loops in the left ventricle of conscious, ambulatory rats. Pressure is measured with a standard sensor, but volume is derived from data collected from a tetrapolar electrode catheter using a novel admittance technique. There are two main advantages of the admittance technique to measure volume. First, the contribution from the adjacent muscle can be instantaneously removed. Second, the admittance technique incorporates the nonlinear relationship between the electric field generated by the catheter and the blood volume. A low power instrument weighing 27 g was designed, which takes pressure-volume loops every 2 min and runs for 24 h. Pressure-volume data are transmitted wirelessly to a base station. The device was first validated on 13 rats with an acute preparation with 2D echocardiography used to measure true volume. From an accuracy standpoint, the admittance technique is superior to both the conductance technique calibrated with hypertonic saline injections, and calibrated with cuvettes. The device was then tested on six rats with 24 h chronic preparation. Stability of animal preparation and careful calibration are important factors affecting the success of the device.
Li, Zhigang; Wang, Xiaoxu; Zheng, Yuquan; Li, Futian
2017-06-10
High-accuracy absolute detector-based spectroradiometric calibration techniques traceable to cryogenic absolute radiometers have progressed rapidly in recent decades under the impetus of atmospheric quantitative spectral remote sensing. A high-brightness spectrally tunable radiant source using a supercontinuum fiber laser and a digital micromirror device (DMD) has been developed to meet the demands of spectroradiometric calibration for ground-based, aeronautics-based, and aerospace-based remote sensing instruments and of spectral simulations of natural scenes such as the sun and atmosphere. Using a supercontinuum fiber laser as the radiant source, the spectral radiance of the spectrally tunable source is 20 times higher than that of sources based on conventional emitters such as tungsten halogen lamps, xenon lamps, or LEDs, and its stability is better than ±0.3%/h. Using a DMD, the spectrally tunable radiant source possesses two working modes: in narrow-band mode it is calibrated by an absolute detector, and in broad-band mode it can calibrate remote sensing instruments. The uncertainty of the spectral radiance of the spectrally tunable radiant source is estimated at less than 1.87% at 350 nm to 0.85% at 750 nm, a clear improvement over standard lamp-based calibration alone.
3D aquifer characterization using stochastic streamline calibration
NASA Astrophysics Data System (ADS)
Jang, Minchul
2007-03-01
In this study, a new inverse approach, stochastic streamline calibration, is proposed. Using both a streamline concept and a stochastic technique, stochastic streamline calibration optimizes an identified field to fit given observation data in an exceptionally fast and stable fashion. In stochastic streamline calibration, streamlines are adopted as the basic elements not only for describing fluid flow but also for identifying the permeability distribution. Based on the streamline-based inversion of Agarwal et al. [Agarwal B, Blunt MJ. Streamline-based method with full-physics forward simulation for history matching performance data of a North Sea field. SPE J 2003;8(2):171-80] and Wang and Kovscek [Wang Y, Kovscek AR. Streamline approach for history matching production data. SPE J 2000;5(4):353-62], permeability is modified along streamlines rather than at individual gridblocks. Permeabilities in the gridblocks through which a streamline passes are adjusted by a multiplicative factor such that the flow and transport properties of the streamline are matched. This enables the inverse process to achieve fast convergence. In addition, equipped with a stochastic module, the proposed technique calibrates the identified field in a stochastic manner while incorporating spatial information into the field. This prevents the inverse process from becoming stuck in local minima and helps the search for a globally optimized solution. Simulation results indicate that stochastic streamline calibration identifies an unknown permeability field exceptionally quickly. More notably, the identified permeability distribution reflects realistic geological features, which had not been achieved in the original work of Agarwal et al. owing to the large modifications along streamlines required to match production data only. The model constructed by stochastic streamline calibration forecast plume transport similar to that of a reference model. We can therefore expect the proposed approach to be applied to the construction of aquifer models and the forecasting of aquifer performance.
Calibration Experiments for a Computer Vision Oyster Volume Estimation System
ERIC Educational Resources Information Center
Chang, G. Andy; Kerns, G. Jay; Lee, D. J.; Stanek, Gary L.
2009-01-01
Calibration is a technique that is commonly used in science and engineering research that requires calibrating measurement tools for obtaining more accurate measurements. It is an important technique in various industries. In many situations, calibration is an application of linear regression, and is a good topic to be included when explaining and…
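The point that calibration is often an application of linear regression can be made concrete with a minimal sketch; the paired reference values and instrument readings below are invented:

```python
import numpy as np

# Invented paired data: known reference values and the raw readings of
# the instrument being calibrated.
reference = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
raw_reading = np.array([12.1, 21.9, 32.2, 41.8, 52.0])

# Fit raw_reading = slope * reference + intercept by least squares.
slope, intercept = np.polyfit(reference, raw_reading, 1)

def calibrate(reading):
    """Invert the fitted line to turn a raw reading into a calibrated value."""
    return (reading - intercept) / slope

print(calibrate(32.2))  # approximately recovers the 30.0 reference value
```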
NASA Astrophysics Data System (ADS)
Łazarek, Łukasz; Antończak, Arkadiusz J.; Wójcik, Michał R.; Kozioł, Paweł E.; Stepak, Bogusz; Abramski, Krzysztof M.
2014-08-01
Laser-induced breakdown spectroscopy (LIBS) is a fast, fully optical method that needs little or no sample preparation. In this technique, qualitative and quantitative analysis is based on comparison. The determination of composition generally relies on the construction of a calibration curve, namely the LIBS signal versus the concentration of the analyte. Typically, certified reference materials with known elemental composition are used to calibrate the system. Nevertheless, differences in overall composition between such samples and the complex inorganic materials under study can significantly affect accuracy. There are also intermediate factors that can cause imprecision in measurements, such as optical absorption, surface structure, and thermal conductivity. This paper presents a calibration procedure performed with specially prepared pellets of the tested materials, whose composition was previously determined. We also propose post-processing methods that mitigate the matrix effects and allow for a reliable and accurate analysis. This technique was implemented for the determination of trace elements in industrial copper concentrates standardized by conventional atomic absorption spectroscopy with a flame atomizer. A series of copper flotation concentrate samples was analyzed for the content of three elements: silver, cobalt, and vanadium. It has been shown that the described technique can be used for qualitative and quantitative analyses of complex inorganic materials, such as copper flotation concentrates.
Image-guided Navigation of Single-element Focused Ultrasound Transducer
Kim, Hyungmin; Chiu, Alan; Park, Shinsuk; Yoo, Seung-Schik
2014-01-01
The spatial specificity and controllability of focused ultrasound (FUS), in addition to its ability to modify the excitability of neural tissue, allows for the selective and reversible neuromodulation of the brain function, with great potential in neurotherapeutics. Intra-operative magnetic resonance imaging (MRI) guidance (in short, MRg) has limitations due to its complicated examination logistics, such as fixation through skull screws to mount the stereotactic frame, simultaneous sonication in the MRI environment, and restrictions in choosing MR-compatible materials. In order to overcome these limitations, an image-guidance system based on optical tracking and pre-operative imaging data is developed, separating the imaging acquisition for guidance and sonication procedure for treatment. Techniques to define the local coordinates of the focal point of sonication are presented. First, mechanical calibration detects the concentric rotational motion of a rigid-body optical tracker, attached to a straight rod mimicking the sonication path, pivoted at the virtual FUS focus. The spatial error presented in the mechanical calibration was compensated further by MRI-based calibration, which estimates the spatial offset between the navigated focal point and the ground-truth location of the sonication focus obtained from a temperature-sensitive MR sequence. MRI-based calibration offered a significant decrease in spatial errors (1.9±0.8 mm; 57% reduction) compared to the mechanical calibration method alone (4.4±0.9 mm). Using the presented method, pulse-mode FUS was applied to the motor area of the rat brain, and successfully stimulated the motor cortex. The presented techniques can be readily adapted for the transcranial application of FUS to intact human brain. PMID:25232203
NASA Technical Reports Server (NTRS)
Strekalov, Dmitry V.; Yu, Nam; Thompson, Robert J.
2012-01-01
The most accurate astronomical data are available from space-based observations that are not impeded by the Earth's atmosphere. Such measurements may require spectral samples taken as long as decades apart, with 1 cm/s velocity precision integrated over a broad wavelength range. This raises the requirements specifically for instruments used in astrophysics research missions: their stringent wavelength resolution and accuracy must be maintained over years and possibly decades. Therefore, a stable and broadband optical calibration technique compatible with spaceflight becomes essential. Space-based spectroscopic instruments need to be calibrated in situ, which puts forth specific requirements for the calibration sources, mainly concerning their mass, power consumption, and reliability. A high-precision, high-resolution reference wavelength comb source for astronomical and astrophysics spectroscopic observations has been developed that is deployable in space. The optical comb will be used for wavelength calibration of spectrographs and will enable Doppler measurements to better than 10 cm/s precision, one hundred times better than the current state-of-the-art.
In-flight radiometric calibration of the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS)
NASA Technical Reports Server (NTRS)
Conel, James E.; Green, Robert O.; Alley, Ronald E.; Bruegge, Carol J.; Carrere, Veronique; Margolis, Jack S.; Vane, Gregg; Chrien, Thomas G.; Slater, Philip N.; Biggard, Stuart F.
1988-01-01
A reflectance-based method was used to provide an analysis of the in-flight radiometric performance of AVIRIS. Field spectral reflectance measurements of the surface and extinction measurements of the atmosphere using solar radiation were used as input to atmospheric radiative transfer calculations. Five separate codes were used in the analysis. Four include multiple scattering, and the computed radiances from these for flight conditions were in good agreement. Code-generated radiances were compared with AVIRIS-predicted radiances based on two laboratory calibrations (pre- and post-season of flight) for a uniform highly reflecting natural dry lake target. For one spectrometer (C), the pre- and post-season calibration factors were found to give identical results, and to be in agreement with the atmospheric models that include multiple scattering. This positive result validates the field and laboratory calibration technique. Results for the other spectrometers (A, B and D) were widely at variance with the models no matter which calibration factors were used. Potential causes of these discrepancies are discussed.
Cider fermentation process monitoring by Vis-NIR sensor system and chemometrics.
Villar, Alberto; Vadillo, Julen; Santos, Jose I; Gorritxategi, Eneko; Mabe, Jon; Arnaiz, Aitor; Fernández, Luis A
2017-04-15
Optimization of a multivariate calibration process has been undertaken for a Visible-Near Infrared (400-1100 nm) sensor system applied to monitoring the fermentation of the cider produced in the Basque Country (Spain). The main parameters monitored were alcoholic proof, l-lactic acid, glucose+fructose, and acetic acid content. The multivariate calibration was carried out using a combination of different variable selection techniques, and the most suitable pre-processing strategies were selected based on the spectral characteristics obtained by the sensor system. The variable selection techniques studied in this work include the Martens uncertainty test, interval partial least squares regression (iPLS), and a genetic algorithm (GA). This procedure arises from the need to improve the prediction ability of the calibration models for cider monitoring.
Investigation of laser Doppler anemometry in developing a velocity-based measurement technique
NASA Astrophysics Data System (ADS)
Jung, Ki Won
2009-12-01
Acoustic properties of porous materials, such as the characteristic impedance and the complex propagation constant, have traditionally been characterized with pressure-based measurement techniques using microphones. Although microphone techniques have evolved since their introduction, the most general form employs two microphones to characterize the acoustic field of one continuous medium. The shortcomings of determining the acoustic field from only two microphones can be overcome by using many microphones; however, this requires a careful and intricate calibration procedure. This dissertation uses laser Doppler anemometry (LDA) to establish a new measurement technique which resolves issues that microphone techniques have. First, it is based on a single sensor, so calibration is unnecessary when only the overall ratio of the acoustic field is required for the characterization of a system; this includes measurements of the characteristic impedance and the complex propagation constant. Second, it can handle measurements at multiple positions without calibrating the signal at each position. Third, it can measure the three-dimensional components of velocity even in a system with complex geometry. Fourth, it adapts flexibly to different apparatus, as long as the apparatus is transparent. LDA is known to possess several disadvantages, such as the requirement of a transparent apparatus, high cost, and the necessity of seeding particles. The technique based on LDA combined with a curve-fitting algorithm is validated through measurements on three systems. First, the complex propagation constant of the air is measured in a rigidly terminated cylindrical pipe, which has very low dissipation. Second, the radiation impedance of an open-ended pipe is measured.
These two parameters can be characterized by the ratio of acoustic field measured at multiple locations. Third, the power dissipated in a variable RLC load is measured. The three experiments validate the LDA technique proposed. The utility of the LDA method is then extended to the measurement of the complex propagation constant of the air inside a 100 ppi reticulated vitreous carbon (RVC) sample. Compared to measurements in the available studies, the measurement with the 100 ppi RVC sample supports the LDA technique in that it can achieve a low uncertainty in the determined quantity. This dissertation concludes with using the LDA technique for modal decomposition of the plane wave mode and the (1,1) mode that are driven simultaneously. This modal decomposition suggests that the LDA technique surpasses microphone-based techniques, because they are unable to determine the acoustic field based on an acoustic model with unconfined propagation constants for each modal component.
Abundances of isotopologues and calibration of CO2 greenhouse gas measurements
NASA Astrophysics Data System (ADS)
Tans, Pieter P.; Crotwell, Andrew M.; Thoning, Kirk W.
2017-07-01
We have developed a method to calculate the fractional distribution of CO2 across all of its component isotopologues based on measured δ13C and δ18O values. The fractional distribution can be used with known total CO2 to calculate the amount of substance fraction (mole fraction) of each component isotopologue in air individually. The technique is applicable to any molecule where isotopologue-specific values are desired. We used it with a new CO2 calibration system to account for isotopic differences among the primary CO2 standards that define the WMO X2007 CO2-in-air calibration scale and between the primary standards and standards in subsequent levels of the calibration hierarchy. The new calibration system uses multiple laser spectroscopic techniques to measure mole fractions of the three major CO2 isotopologues (16O12C16O, 16O13C16O, and 16O12C18O) individually. The three measured values are then combined into total CO2 (accounting for the rare unmeasured isotopologues), δ13C, and δ18O values. The new calibration system significantly improves our ability to transfer the WMO CO2 calibration scale with low uncertainty through our role as the World Meteorological Organization Global Atmosphere Watch Central Calibration Laboratory for CO2. Our current estimates for reproducibility of the new calibration system are ±0.01 µmol mol-1 CO2, ±0.2 ‰ δ13C, and ±0.2 ‰ δ18O, all at 68 % confidence interval (CI).
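The fractional-distribution idea can be sketched as follows, assuming standard delta notation, the commonly quoted VPDB (13C/12C) and VSMOW (18O/16O) reference ratios, and neglect of the rare 17O-bearing isotopologues; the paper accounts for rare isotopologues, the delta scale for 18O of CO2 may differ, and the total CO2 value here is invented:

```python
# Commonly quoted reference ratios; treated here as assumptions.
R13_VPDB = 0.0111802    # 13C/12C of the VPDB standard
R18_VSMOW = 0.0020052   # 18O/16O of the VSMOW standard

def isotopologue_fractions(delta13C, delta18O):
    """Fractions of 16O12C16O, 16O13C16O, 16O12C18O (rare species neglected)."""
    r13 = R13_VPDB * (1.0 + delta13C / 1000.0)    # sample 13C/12C
    r18 = R18_VSMOW * (1.0 + delta18O / 1000.0)   # sample 18O/16O
    # Ratios relative to 16O12C16O: 16O13C16O goes as r13, and
    # 16O12C18O as 2*r18 because either oxygen position can hold 18O.
    denom = 1.0 + r13 + 2.0 * r18
    return 1.0 / denom, r13 / denom, 2.0 * r18 / denom

total_co2 = 400.0  # total CO2 in umol/mol, an invented example value
f626, f636, f628 = isotopologue_fractions(-8.0, 0.0)
print(total_co2 * f626, total_co2 * f636, total_co2 * f628)  # umol/mol of each
```

Multiplying the fractions by the known total CO2 gives the amount fraction of each isotopologue individually, as the abstract describes.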
ASD FieldSpec Calibration Setup and Techniques
NASA Technical Reports Server (NTRS)
Olive, Dan
2001-01-01
This paper describes the Analytical Spectral Devices (ASD) FieldSpec calibration setup and techniques. The topics include: 1) ASD FieldSpec FR Spectroradiometer; 2) Components of Calibration; 3) Equipment List; 4) Spectral Setup; 5) Spectral Calibration; 6) Radiometric and Linearity Setup; 7) Radiometric Setup; 8) Datasets Required; 9) Data Files; and 10) Field of View Measurement. This paper is in viewgraph form.
Novel crystal timing calibration method based on total variation
NASA Astrophysics Data System (ADS)
Yu, Xingjian; Isobe, Takashi; Watanabe, Mitsuo; Liu, Huafeng
2016-11-01
A novel crystal timing calibration method based on total variation (TV), abbreviated as ‘TV merge’, has been developed for a high-resolution positron emission tomography (PET) system. The proposed method was developed for a system with a large number of crystals and can provide timing calibration at the crystal level. In the proposed method, the timing calibration process is formulated as a linear problem, and a TV constraint is added to the linear equation to robustly optimize the timing resolution. Moreover, to address the computer memory problem associated with calculating the timing calibration factors for systems with a large number of crystals, a merge component is used to obtain the crystal-level timing calibration values. In contrast to other conventional methods, data measured from a standard cylindrical phantom filled with a radioisotope solution are sufficient for performing a high-precision crystal-level timing calibration. In this paper, both simulation and experimental studies were performed to demonstrate the effectiveness and robustness of the TV merge method. We compare the timing resolutions of a 22Na point source, located in the field of view (FOV) of the brain PET system, obtained with various calibration techniques. After implementing the TV merge method, the timing resolution improved from 3.34 ns full width at half maximum (FWHM) to 2.31 ns FWHM.
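A toy sketch of the formulation, not the authors' implementation: per-crystal timing offsets are recovered from pairwise coincidence offset measurements by least squares with a small TV penalty on neighboring crystals, solved here by plain (sub)gradient descent on invented data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50                                              # number of crystals (toy size)
true = np.repeat([0.0, 0.4, -0.2], [20, 15, 15])    # piecewise-constant offsets, ns

# Simulated coincidences: each row (i, j) yields a measured offset
# difference t_i - t_j corrupted by timing noise.
pairs = rng.integers(0, n, size=(2000, 2))
meas = true[pairs[:, 0]] - true[pairs[:, 1]] + rng.normal(scale=0.05, size=2000)

x = np.zeros(n)                                     # per-crystal offset estimates
lam, step = 0.02, 1.0
for _ in range(2000):
    # Gradient of the least-squares data term sum (x_i - x_j - meas)^2 / 2.
    resid = x[pairs[:, 0]] - x[pairs[:, 1]] - meas
    grad = np.zeros(n)
    np.add.at(grad, pairs[:, 0], resid)
    np.add.at(grad, pairs[:, 1], -resid)
    # Subgradient of the TV penalty lam * sum |x_{i+1} - x_i|.
    d = np.sign(np.diff(x))
    grad[:-1] -= lam * d
    grad[1:] += lam * d
    x -= step * grad / len(meas)
    x -= x.mean()                                   # fix the arbitrary global offset

err = np.max(np.abs(x - (true - true.mean())))
print(err)  # should be well below 0.1 ns
```

The global offset is unobservable from pairwise differences, hence the mean-centering step; the TV term plays the regularizing role the abstract describes, here on a simple 1-D crystal ordering.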
Mishra, Nischal; Haque, Md. Obaidul; Leigh, Larry; Aaron, David; Helder, Dennis; Markham, Brian L
2014-01-01
This study evaluates the radiometric consistency between Landsat-8 Operational Land Imager (OLI) and Landsat 7 Enhanced Thematic Mapper Plus (ETM+) using cross calibration techniques. Two approaches are used, one based on cross calibration between the two sensors using simultaneous image pairs, acquired during an underfly event on 29–30 March 2013. The other approach is based on using time series of image statistics acquired by these two sensors over the Libya 4 pseudo invariant calibration site (PICS) (+28.55°N, +23.39°E). Analyses from these approaches show that the reflectance calibration of OLI is generally within ±3% of the ETM+ radiance calibration for all the reflective bands from visible to short wave infrared regions when the ChKur solar spectrum is used to convert the ETM+ radiance to reflectance. Similar results are obtained comparing the OLI radiance calibration directly with the ETM+ radiance calibration and the results in these two different physical units (radiance and reflectance) agree to within ±2% for all the analogous bands. These results will also be useful to tie all the Landsat heritage sensors from Landsat 1 MultiSpectral Scanner (MSS) through Landsat-8 OLI to a consistent radiometric scale.
An Integrated Calibration Technique for Stereo Vision Systems (PREPRINT)
2010-03-01
An integrated calibration technique for stereo vision systems has been developed. To demonstrate and evaluate this calibration technique, multiple Wii Remotes (Wiimotes) from Nintendo were used to form stereo vision systems to perform 3D motion capture in real time. This integrated technique is a two-step process. Many researchers have successfully dealt with the problem of camera calibration by taking images from a 2D
An improved error assessment for the GEM-T1 gravitational model
NASA Technical Reports Server (NTRS)
Lerch, F. J.; Marsh, J. G.; Klosko, S. M.; Pavlis, E. C.; Patel, G. B.; Chinn, D. S.; Wagner, C. A.
1988-01-01
Several tests were designed to determine the correct error variances for the Goddard Earth Model (GEM)-T1 gravitational solution which was derived exclusively from satellite tracking data. The basic method employs both wholly independent and dependent subset data solutions and produces a full field coefficient estimate of the model uncertainties. The GEM-T1 errors were further analyzed using a method based upon eigenvalue-eigenvector analysis which calibrates the entire covariance matrix. Dependent satellite and independent altimetric and surface gravity data sets, as well as independent satellite deep resonance information, confirm essentially the same error assessment. These calibrations (utilizing each of the major data subsets within the solution) yield very stable calibration factors which vary by approximately 10 percent over the range of tests employed. Measurements of gravity anomalies obtained from altimetry were also used directly as observations to show that GEM-T1 is calibrated. The mathematical representation of the covariance error in the presence of unmodeled systematic error effects in the data is analyzed and an optimum weighting technique is developed for these conditions. This technique yields an internal self-calibration of the error model, a process which GEM-T1 is shown to approximate.
33 CFR 104.215 - Vessel Security Officer (VSO).
Code of Federal Regulations, 2011 CFR
2011-07-01
... procedures, including scenario-based response training; (4) Crowd management and control techniques; (5) Operations of security equipment and systems; and (6) Testing and calibration of security equipment and...
Radiometric Calibration of the Earth Observing System's Imaging Sensors
NASA Technical Reports Server (NTRS)
Slater, Philip N. (Principal Investigator)
1997-01-01
The work on the grant was mainly directed toward developing new, accurate, redundant methods for the in-flight, absolute radiometric calibration of satellite multispectral imaging systems and refining the accuracy of methods already in use. Initially the work was in preparation for the calibration of MODIS and HIRIS (before the development of that sensor was canceled), with the realization that it would be applicable to most imaging multi- or hyper-spectral sensors provided their spatial or spectral resolutions were not too coarse. The work on the grant involved three different ground-based, in-flight calibration methods: reflectance-based, radiance-based, and the diffuse-to-global irradiance ratio used with the reflectance-based method. This continuing research had the dual advantage of: (1) developing several independent methods to create the redundancy that is essential for the identification and, hopefully, the elimination of systematic errors; and (2) refining the measurement techniques and algorithms that can be used not only for improving calibration accuracy but also for the reverse process of retrieving ground reflectances from calibrated remote-sensing data. The grant also provided the support necessary to embark on other projects, such as the ratioing radiometer approach to on-board calibration (this has been further developed by SBRS as the 'solar diffuser stability monitor' and is incorporated into the most important on-board calibration system for MODIS). Another example of work that was a spin-off from the grant funding was a study of solar diffuser materials. Journal citations, titles, and abstracts of publications authored by faculty, staff, and students are also attached.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Randeniya, S; Mirkovic, D; Titt, U
2014-06-01
Purpose: In intensity modulated proton therapy (IMPT), energy dependent, protons per monitor unit (MU) calibration factors are important parameters that determine absolute dose values from energy deposition data obtained from Monte Carlo (MC) simulations. Purpose of this study was to assess the sensitivity of MC-computed absolute dose distributions to the protons/MU calibration factors in IMPT. Methods: A “verification plan” (i.e., treatment beams applied individually to water phantom) of a head and neck patient plan was calculated using MC technique. The patient plan had three beams; one posterior-anterior (PA); two anterior oblique. Dose prescription was 66 Gy in 30 fractions. Ofmore » the total MUs, 58% was delivered in PA beam, 25% and 17% in other two. Energy deposition data obtained from the MC simulation were converted to Gy using energy dependent protons/MU calibrations factors obtained from two methods. First method is based on experimental measurements and MC simulations. Second is based on hand calculations, based on how many ion pairs were produced per proton in the dose monitor and how many ion pairs is equal to 1 MU (vendor recommended method). Dose distributions obtained from method one was compared with those from method two. Results: Average difference of 8% in protons/MU calibration factors between method one and two converted into 27 % difference in absolute dose values for PA beam; although dose distributions preserved the shape of 3D dose distribution qualitatively, they were different quantitatively. For two oblique beams, significant difference in absolute dose was not observed. Conclusion: Results demonstrate that protons/MU calibration factors can have a significant impact on absolute dose values in IMPT depending on the fraction of MUs delivered. When number of MUs increases the effect due to the calibration factors amplify. In determining protons/MU calibration factors, experimental method should be preferred in MC dose calculations. 
Research supported by National Cancer Institute grant P01CA021239.
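The conversion step the abstract describes, from MC energy deposition to absolute dose via energy-dependent protons/MU factors, reduces to a weighted sum over energy layers. A minimal sketch; all layer values, factors, and MU weights below are hypothetical, not the study's data:

```python
# Hedged sketch: converting Monte Carlo energy-deposition output to absolute
# dose using energy-dependent protons/MU calibration factors.

def absolute_dose(edep_per_proton, protons_per_mu, mu_per_layer):
    """Sum dose contributions over energy layers.

    edep_per_proton : Gy deposited per source proton, per energy layer
    protons_per_mu  : calibration factor (protons delivered per MU), per layer
    mu_per_layer    : monitor units delivered in each layer
    """
    return sum(e * p * mu
               for e, p, mu in zip(edep_per_proton, protons_per_mu, mu_per_layer))

# Two calibration-factor sets that differ by 8% shift the summed dose by the
# same factor when the difference is uniform across layers (hypothetical data).
edep = [1.0e-11, 1.2e-11, 0.9e-11]          # Gy/proton per layer
factors_a = [9.0e8, 8.5e8, 8.0e8]           # protons/MU, method 1
factors_b = [f * 1.08 for f in factors_a]   # method 2, 8% higher
mus = [120.0, 50.0, 35.0]                   # MU per layer

d1 = absolute_dose(edep, factors_a, mus)
d2 = absolute_dose(edep, factors_b, mus)
```

In the study's case the sensitivity is further amplified by how the MUs are distributed across beams and layers; this sketch only shows the basic propagation of a calibration-factor difference into dose.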
Cernuda, Carlos; Lughofer, Edwin; Klein, Helmut; Forster, Clemens; Pawliczek, Marcin; Brandstetter, Markus
2017-01-01
During the production process of beer, it is of utmost importance to guarantee a high consistency of the beer quality. For instance, the bitterness is an essential quality parameter which has to be controlled within the specifications at the beginning of the production process in the unfermented beer (wort) as well as in final products such as beer and beer mix beverages. Nowadays, analytical techniques for quality control in beer production are mainly based on manual supervision, i.e., samples are taken from the process and analyzed in the laboratory. This typically requires significant lab technician effort while covering only a small fraction of the samples, which leads to significant costs for beer breweries and companies. Fourier transform mid-infrared (FT-MIR) spectroscopy was used in combination with nonlinear multivariate calibration techniques to overcome (i) the time-consuming off-line analyses in beer production and (ii) already known limitations of standard linear chemometric methods, like partial least squares (PLS), for important quality parameters Speers et al. (J I Brewing. 2003;109(3):229-235), Zhang et al. (J I Brewing. 2012;118(4):361-367) such as bitterness, citric acid, total acids, free amino nitrogen, final attenuation, or foam stability. The calibration models are established with enhanced nonlinear techniques based (i) on a new piece-wise linear version of PLS, employing fuzzy rules for locally partitioning the latent variable space, and (ii) on extensions of support vector regression variants (ε-PLSSVR and ν-PLSSVR), to overcome high computation times in high-dimensional problems and time-intensive, often inappropriate settings of the kernel parameters. Furthermore, we introduce a new model selection scheme based on bagged ensembles in order to improve robustness and thus the predictive quality of the final models.
The approaches are tested on real-world calibration data sets for wort and beer mix beverages, and successfully compared to linear methods, showing a clear outperformance in most cases and meeting the model quality requirements defined by the experts at the beer company. Figure: Workflow for calibration of nonlinear model ensembles from FT-MIR spectra in beer production.
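The bagged-ensemble idea, fitting base calibration models on bootstrap resamples and averaging their predictions for robustness, can be sketched as follows. Plain least-squares regression stands in for the paper's fuzzy-PLS and PLS-SVR base learners, and the "spectra" are synthetic:

```python
# Hedged sketch of a bagged ensemble of calibration models: fit many
# regressors on bootstrap resamples of the calibration set and average
# their predictions. Ordinary least squares is a stand-in base learner.
import numpy as np

rng = np.random.default_rng(0)

def fit_bagged(X, y, n_models=25):
    models = []
    n = len(y)
    for _ in range(n_models):
        idx = rng.integers(0, n, size=n)             # bootstrap resample
        Xb = np.column_stack([X[idx], np.ones(n)])   # add intercept column
        coef, *_ = np.linalg.lstsq(Xb, y[idx], rcond=None)
        models.append(coef)
    return models

def predict_bagged(models, X):
    Xb = np.column_stack([X, np.ones(len(X))])
    return np.mean([Xb @ c for c in models], axis=0)

# Synthetic "spectra": 5 features, target depends linearly on two of them.
X = rng.normal(size=(200, 5))
y = 3.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(scale=0.1, size=200)
models = fit_bagged(X, y)
pred = predict_bagged(models, X)
```

The averaging step is what damps the variance of any single resample's model; the paper additionally uses the ensemble for model selection, which this sketch omits.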
Standardized Photometric Calibrations for Panchromatic SSA Sensors
NASA Astrophysics Data System (ADS)
Castro, P.; Payne, T.; Battle, A.; Cole, Z.; Moody, J.; Gregory, S.; Dao, P.
2016-09-01
Panchromatic sensors used for Space Situational Awareness (SSA) have no standardized method for transforming the net flux detected by a CCD without a spectral filter into an exo-atmospheric magnitude in a standard magnitude system. Each SSA data provider appears to have its own method for computing the visual magnitude based on panchromatic brightness, making cross-comparisons impossible. We provide a procedure to standardize the calibration of panchromatic sensors for the purposes of SSA. A technique based on theoretical modeling is presented that derives standard panchromatic magnitudes from the Johnson-Cousins photometric system defined by Arlo Landolt. We verify this technique using observations of Landolt standard stars and a Vega-like star to determine empirical panchromatic magnitudes and compare these to synthetically derived panchromatic magnitudes. We also investigate color terms caused by differences in the quantum efficiency (QE) between the Landolt standard system and panchromatic systems. We evaluate calibrated panchromatic satellite photometry by observing several GEO satellites and standard stars using three different sensors. We explore the effect of satellite color terms by comparing the satellite signatures. In order to remove other variables affecting the satellite photometry, two of the sensors are at the same site using different CCDs. The third sensor is geographically separate from the first two, allowing for a definitive test of calibrated panchromatic satellite photometry.
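The synthetic-magnitude step, integrating a spectrum against the sensor's QE curve and referencing it to a standard spectrum observed through the same QE, can be sketched as below. The QE curve and both spectra are made-up placeholders, not Landolt data:

```python
# Hedged sketch: a synthetic panchromatic magnitude relative to a reference
# (Vega-like) spectrum, both integrated against the same detector QE.
import numpy as np

def synthetic_pan_magnitude(wl, flux, qe, flux_ref):
    """m_pan = -2.5 log10( int(F*QE dlambda) / int(F_ref*QE dlambda) )."""
    def integrate(y):
        # trapezoidal rule over the wavelength grid
        return 0.5 * np.sum((y[1:] + y[:-1]) * np.diff(wl))
    return -2.5 * np.log10(integrate(flux * qe) / integrate(flux_ref * qe))

wl = np.linspace(400.0, 900.0, 200)           # nm, hypothetical band
qe = np.exp(-((wl - 650.0) / 180.0) ** 2)     # made-up QE curve
flux_ref = 1.0 / wl**2                        # made-up reference spectrum
flux = 0.01 * flux_ref                        # a target 100x fainter
m = synthetic_pan_magnitude(wl, flux, qe, flux_ref)
```

Color terms arise exactly because the real QE in the integral differs between sensors; a redder target spectrum changes the ratio of the two integrals even at fixed V magnitude.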
A Review on Microdialysis Calibration Methods: the Theory and Current Related Efforts.
Kho, Chun Min; Enche Ab Rahim, Siti Kartini; Ahmad, Zainal Arifin; Abdullah, Norazharuddin Shah
2017-07-01
Microdialysis is a sampling technique first introduced in the late 1950s. Although originally designed to study endogenous compounds in the animal brain, it was later adapted for use in other organs. Additionally, microdialysis is not only able to collect unbound concentrations of compounds from tissue sites; it can also be used to deliver exogenous compounds to a designated area. Due to its versatility, the microdialysis technique is widely employed in a number of areas, including biomedical research. However, for most in vivo studies, the concentration of a substance obtained directly from microdialysis does not accurately describe the concentration of the substance on-site. In order to relate the results collected from microdialysis to the actual in vivo condition, a calibration method is required. To date, various microdialysis calibration methods have been reported, each providing valuable insight into the technique itself and its applications. This paper provides a critical review of the various calibration methods used in microdialysis applications, starting with a detailed description of the microdialysis technique itself. The various calibration methods are reviewed in detail, with examples of related work including clinical efforts, together with the advantages and disadvantages of each method.
Rapid calibrated high-resolution hyperspectral imaging using tunable laser source
NASA Astrophysics Data System (ADS)
Nguyen, Lam K.; Margalith, Eli
2009-05-01
We present a novel hyperspectral imaging technique based on tunable laser technology. Replacing the broadband source and tunable filters of a typical NIR imaging instrument with a tunable laser source provides several advantages, including high spectral resolution, highly variable fields of view, fast scan rates, high signal-to-noise ratio, and the ability to use optical fiber for efficient and flexible sample illumination. With this technique, high-resolution, calibrated hyperspectral images over the NIR range can be acquired in seconds. The performance of the system will be demonstrated on two example applications: detecting melamine contamination in wheat gluten and separating bovine protein from wheat protein in cattle feed.
Calibrating a novel multi-sensor physical activity measurement system.
John, D; Liu, S; Sasaki, J E; Howe, C A; Staudenmayer, J; Gao, R X; Freedson, P S
2011-09-01
Advancing the field of physical activity (PA) monitoring requires the development of innovative multi-sensor measurement systems that are feasible in the free-living environment. The use of novel analytical techniques to combine and process these multiple sensor signals is equally important. This paper describes a novel multi-sensor 'integrated PA measurement system' (IMS), the lab-based methodology used to calibrate the IMS, techniques used to predict multiple variables from the sensor signals, and proposes design changes to improve the feasibility of deploying the IMS in the free-living environment. The IMS consists of hip and wrist acceleration sensors, two piezoelectric respiration sensors on the torso, and an ultraviolet radiation sensor to obtain contextual information (indoors versus outdoors) of PA. During lab-based calibration of the IMS, data were collected on participants performing a PA routine consisting of seven different ambulatory and free-living activities while wearing a portable metabolic unit (criterion measure) and the IMS. Data analyses on the first 50 adult participants are presented. These analyses were used to determine if the IMS can be used to predict the variables of interest. Finally, physical modifications for the IMS that could enhance the feasibility of free-living use are proposed and refinement of the prediction techniques is discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tripathi, Markandey M.; Krishnan, Sundar R.; Srinivasan, Kalyan K.
Chemiluminescence emissions from OH*, CH*, C2*, and CO2* formed within the reaction zone of premixed flames depend upon the fuel-air equivalence ratio in the burning mixture. In the present paper, a new partial least squares regression (PLS-R) based multivariate sensing methodology is investigated and compared with an OH*/CH* intensity ratio-based calibration model for sensing equivalence ratio in atmospheric methane-air premixed flames. Five replications of spectral data at nine different equivalence ratios ranging from 0.73 to 1.48 were used in the calibration of both models. During model development, the PLS-R model was initially validated against the calibration data set using the leave-one-out cross validation technique. Since the PLS-R model used the entire raw spectral intensities, it did not need the nonlinear background subtraction of CO2* emission that is required for typical OH*/CH* intensity ratio calibrations. An unbiased spectral data set (not used in the PLS-R model development), for 28 different equivalence ratio conditions ranging from 0.71 to 1.67, was used to predict equivalence ratios using the PLS-R and the intensity ratio calibration models. It was found that the equivalence ratios predicted with the PLS-R based multivariate calibration model matched the experimentally measured equivalence ratios within 7%; whereas, the OH*/CH* intensity ratio calibration grossly underpredicted equivalence ratios in comparison to measured equivalence ratios, especially under rich conditions (Φ > 1.2). The practical implications of the chemiluminescence-based multivariate equivalence ratio sensing methodology are also discussed.
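The PLS-R calibration on raw spectral intensities can be illustrated with a minimal NIPALS PLS1 fit on synthetic chemiluminescence-like spectra. The spectra and equivalence ratios below are simulated, not the paper's measurements:

```python
# Hedged sketch: one-response PLS regression (NIPALS PLS1) mapping full raw
# spectra to equivalence ratio, without any background subtraction.
import numpy as np

rng = np.random.default_rng(0)

def pls1_fit(X, y, n_comp=3):
    """Minimal NIPALS PLS1; returns means and regression coefficients."""
    Xm, ym = X.mean(axis=0), y.mean()
    Xc, yc = X - Xm, y - ym
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)
        t = Xc @ w                  # scores
        tt = t @ t
        p = Xc.T @ t / tt           # X loadings
        q = (yc @ t) / tt           # y loading
        Xc -= np.outer(t, p)        # deflate
        yc -= q * t
        W.append(w); P.append(p); Q.append(q)
    W, P = np.array(W).T, np.array(P).T
    B = W @ np.linalg.solve(P.T @ W, np.array(Q))
    return Xm, ym, B

def pls1_predict(model, X):
    Xm, ym, B = model
    return (X - Xm) @ B + ym

# Synthetic spectra: two fixed emission profiles whose weights depend on
# the equivalence ratio phi, plus measurement noise.
phi = np.linspace(0.7, 1.5, 40)
s1, s2 = rng.normal(size=50), rng.normal(size=50)
X = np.outer(phi, s1) + np.outer(phi**2, s2) + rng.normal(scale=0.01, size=(40, 50))

model = pls1_fit(X, phi, n_comp=3)
pred = pls1_predict(model, X)
```

Because the regression uses every spectral channel, broadband background simply becomes part of the latent structure, which is the practical advantage the abstract notes over the OH*/CH* ratio approach.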
NASA Astrophysics Data System (ADS)
Ratliff, Bradley M.; LeMaster, Daniel A.
2012-06-01
Pixel-to-pixel response nonuniformity is a common problem that affects nearly all focal plane array sensors. This results in a frame-to-frame fixed pattern noise (FPN) that causes an overall degradation in collected data. FPN is often compensated for through the use of blackbody calibration procedures; however, FPN is a particularly challenging problem because the detector responsivities drift relative to one another in time, requiring that the sensor be recalibrated periodically. The calibration process is obstructive to sensor operation and is therefore only performed at discrete intervals in time. Thus, any drift that occurs between calibrations (along with error in the calibration sources themselves) causes varying levels of residual calibration error to be present in the data at all times. Polarimetric microgrid sensors are particularly sensitive to FPN due to the spatial differencing involved in estimating the Stokes vector images. While many techniques exist in the literature to estimate FPN for conventional video sensors, few have been proposed to address the problem in microgrid imaging sensors. Here we present a scene-based nonuniformity correction technique for microgrid sensors that is able to reduce residual fixed pattern noise while preserving radiometry under a wide range of conditions. The algorithm requires a low number of temporal data samples to estimate the spatial nonuniformity and is computationally efficient. We demonstrate the algorithm's performance using real data from the AFRL PIRATE and University of Arizona LWIR microgrid sensors.
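A scene-based estimate of fixed pattern noise can be sketched with a deliberately simple variant: average, over frames, the difference between each pixel and a local spatial mean. Moving scene content tends to average out while the fixed pattern persists. This is a generic illustration, not the algorithm of the paper (which additionally preserves radiometry and handles microgrid polarization channels):

```python
# Hedged sketch of scene-based fixed-pattern-noise (offset) estimation.
import numpy as np

rng = np.random.default_rng(1)

def estimate_fpn_offsets(frames):
    """Per-pixel offset estimate: temporal mean of (pixel - 3x3 local mean).
    Drifting scene structure averages out; the fixed pattern does not."""
    diffs = []
    for f in frames:
        p = np.pad(f, 1, mode='edge')
        local = sum(p[i:i + f.shape[0], j:j + f.shape[1]]
                    for i in range(3) for j in range(3)) / 9.0
        diffs.append(f - local)
    return np.mean(diffs, axis=0)

# Synthetic data: a smooth scene drifting over time plus per-pixel offsets.
H, W = 32, 32
yy, xx = np.mgrid[0:H, 0:W]
fpn = rng.normal(scale=2.0, size=(H, W))
scenes = [10.0 * np.sin(0.2 * (xx + t)) for t in range(20)]
frames = [s + fpn for s in scenes]

offsets = estimate_fpn_offsets(frames)
corrected = frames[0] - offsets
```

Even this crude estimator substantially reduces the residual pattern on the synthetic data; the paper's contribution is doing so with few temporal samples while keeping the Stokes-vector radiometry intact.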
NASA Technical Reports Server (NTRS)
Martos, Borja; Kiszely, Paul; Foster, John V.
2011-01-01
As part of the NASA Aviation Safety Program (AvSP), a novel pitot-static calibration method was developed to allow rapid in-flight calibration for subscale aircraft while flying within confined test areas. This approach uses Global Positioning System (GPS) technology coupled with modern system identification methods that rapidly compute optimal pressure error models over a range of airspeeds with defined confidence bounds. This method has been demonstrated in subscale flight tests and has shown small 2-σ error bounds with a significant reduction in test time compared to other methods. The current research was motivated by the desire to further evaluate and develop this method for full-scale aircraft. A goal of this research was to develop an accurate calibration method that enables reductions in test equipment and flight time, thus reducing costs. The approach involved analysis of data acquisition requirements, development of efficient flight patterns, and analysis of pressure error models based on system identification methods. Flight tests were conducted at The University of Tennessee Space Institute (UTSI) utilizing an instrumented Piper Navajo research aircraft. In addition, the UTSI engineering flight simulator was used to investigate test maneuver requirements and handling qualities issues associated with this technique. This paper provides a summary of piloted simulation and flight test results that illustrates the performance and capabilities of the NASA calibration method. Discussion of maneuver requirements and data analysis methods is included as well as recommendations for piloting technique.
Hart, Michael L.; Drakopoulos, Michael; Reinhard, Christina; Connolley, Thomas
2013-01-01
A complete calibration method to characterize a static planar two-dimensional detector for use in X-ray diffraction at an arbitrary wavelength is described. This method is based upon geometry describing the point of intersection between a cone’s axis and its elliptical conic section. This point of intersection is neither the ellipse centre nor one of the ellipse focal points, but some other point which lies in between. The presented solution is closed form, algebraic and non-iterative in its application, and gives values for the X-ray beam energy, the sample-to-detector distance, the location of the beam centre on the detector surface and the detector tilt relative to the incident beam. Previous techniques have tended to require prior knowledge of either the X-ray beam energy or the sample-to-detector distance, whilst other techniques have been iterative. The new calibration procedure is performed by collecting diffraction data, in the form of diffraction rings from a powder standard, at known displacements of the detector along the beam path. PMID:24068840
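The displacement idea behind this calibration can be sketched in a tilt-free simplification: a powder ring's radius grows linearly with detector displacement along the beam, so a line fit yields both the diffraction angle (hence beam energy) and the sample-to-detector distance. This sketch ignores detector tilt and the off-center ellipse geometry that the paper's closed-form solution handles; all numbers are hypothetical:

```python
# Hedged sketch: distance and wavelength from one ring measured at several
# known detector displacements, assuming a tilt-free (circular-ring) detector.
import math

def calibrate_rings(offsets, radii, d_spacing):
    """Least-squares fit r = slope*z + intercept.
    slope = tan(2*theta) gives the wavelength via Bragg's law;
    intercept/slope is the sample-to-detector distance at z = 0."""
    n = len(offsets)
    sx, sy = sum(offsets), sum(radii)
    sxx = sum(x * x for x in offsets)
    sxy = sum(x * y for x, y in zip(offsets, radii))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    distance = intercept / slope
    theta = math.atan(slope) / 2.0
    wavelength = 2.0 * d_spacing * math.sin(theta)   # Bragg's law
    return distance, wavelength

# Hypothetical geometry: d-spacing 2.0 A, wavelength 0.7 A, 150 mm distance.
d_hkl, wl_true, D_true = 2.0, 0.7, 150.0
two_theta = 2.0 * math.asin(wl_true / (2.0 * d_hkl))
offsets = [0.0, 10.0, 20.0, 30.0]                    # mm along the beam
radii = [(D_true + z) * math.tan(two_theta) for z in offsets]

D_est, wl_est = calibrate_rings(offsets, radii, d_hkl)
```

The appeal of the displacement approach survives the simplification: neither the energy nor the distance needs to be known in advance, because both fall out of the fit.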
A convenient technique for polarimetric calibration of single-antenna radar systems
NASA Technical Reports Server (NTRS)
Sarabandi, Kamal; Ulaby, Fawwaz T.
1990-01-01
A practical technique for calibrating single-antenna polarimetric radar systems is introduced. This technique requires only a single calibration target such as a conducting sphere or a trihedral corner reflector to calibrate the radar system, both in amplitude and phase, for all linear polarization configurations. By using a metal sphere, which is orientation independent, error in calibration measurement is minimized while simultaneously calibrating the crosspolarization channels. The antenna system and two orthogonal channels (in free space) are modeled as a four-port passive network. Upon using the reciprocity relations for the passive network and assuming the crosscoupling terms of the antenna to be equal, the crosstalk factors of the antenna system and the transmit and receive channel imbalances can be obtained from measurement of the backscatter from a metal sphere. For an X-band radar system with crosspolarization isolation of 25 dB, comparison of values measured for a sphere and a cylinder with theoretical values shows agreement within 0.4 dB in magnitude and 5 deg in phase. An effective polarization isolation of 50 dB is achieved using this calibration technique.
Waytowich, Nicholas R.; Lawhern, Vernon J.; Bohannon, Addison W.; ...
2016-09-22
Recent advances in signal processing and machine learning techniques have enabled the application of Brain-Computer Interface (BCI) technologies to fields such as medicine, industry, and recreation; however, BCIs still suffer from the requirement of frequent calibration sessions due to the intra- and inter-individual variability of brain-signals, which makes calibration suppression through transfer learning an area of increasing interest for the development of practical BCI systems. In this paper, we present an unsupervised transfer method (spectral transfer using information geometry, STIG), which ranks and combines unlabeled predictions from an ensemble of information geometry classifiers built on data from individual training subjects. The STIG method is validated in both off-line and real-time feedback analysis during a rapid serial visual presentation (RSVP) task. For detection of single-trial, event-related potentials (ERPs), the proposed method can significantly outperform existing calibration-free techniques as well as traditional within-subject calibration techniques when limited data are available. This method demonstrates that unsupervised transfer learning for single-trial detection in ERP-based BCIs can be achieved without the requirement of costly training data, representing a step forward in the overall goal of achieving a practical user-independent BCI system.
Digital Signal Processing Techniques for the GIFTS SM EDU
NASA Technical Reports Server (NTRS)
Tian, Jialin; Reisse, Robert A.; Gazarik, Michael J.
2007-01-01
The Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) Sensor Module (SM) Engineering Demonstration Unit (EDU) is a high resolution spectral imager designed to measure infrared (IR) radiance using a Fourier transform spectrometer (FTS). The GIFTS instrument employs three Focal Plane Arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The raw interferogram measurements are radiometrically and spectrally calibrated to produce radiance spectra, which are further processed to obtain atmospheric profiles via retrieval algorithms. This paper describes several digital signal processing (DSP) techniques involved in the development of the calibration model. In the first stage, the measured raw interferograms must undergo a series of processing steps that include filtering, decimation, and detector nonlinearity correction. The digital filtering is achieved by employing a linear-phase even-length FIR complex filter that is designed based on the optimum equiripple criteria. Next, the detector nonlinearity effect is compensated for using a set of pre-determined detector response characteristics. In the next stage, a phase correction algorithm is applied to the decimated interferograms. This is accomplished by first estimating the phase function from the spectral phase response of the windowed interferogram, and then correcting the entire interferogram based on the estimated phase function. In the calibration stage, we first compute the spectral responsivity based on the previous results and the ideal Planck blackbody spectra at the given temperatures, from which, the calibrated ambient blackbody (ABB), hot blackbody (HBB), and scene spectra can be obtained. In the post-calibration stage, we estimate the Noise Equivalent Spectral Radiance (NESR) from the calibrated ABB and HBB spectra. 
The NESR is generally considered as a measure of the instrument noise performance, and can be estimated as the standard deviation of calibrated radiance spectra from multiple scans. To obtain an estimate of the FPA performance, we developed an efficient method of generating pixel performance assessments. In addition, a random pixel selection scheme is developed based on the pixel performance evaluation. This allows us to perform the calibration procedures on a random pixel population that is a good statistical representation of the entire FPA. The design and implementation of each individual component will be discussed in detail.
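The blackbody calibration stage can be illustrated with the standard two-point form: a responsivity from the ambient and hot blackbody views, then inversion of the linear detector response for the scene. This is a simplified real-valued sketch; the actual FTS calibration operates on complex spectra, and all numbers below are hypothetical:

```python
# Hedged sketch: two-point (ambient/hot blackbody) radiometric calibration.

def calibrate_scene(S_scene, S_abb, S_hbb, B_abb, B_hbb):
    """responsivity = (S_hbb - S_abb) / (B_hbb - B_abb);
    L_scene = (S_scene - S_abb) / responsivity + B_abb."""
    responsivity = (S_hbb - S_abb) / (B_hbb - B_abb)
    return (S_scene - S_abb) / responsivity + B_abb

# Hypothetical linear detector: signal = gain * radiance + offset.
gain, offset = 2.0, 5.0
B_abb, B_hbb, L_scene = 10.0, 50.0, 30.0        # Planck radiances (made-up)
S_abb = gain * B_abb + offset
S_hbb = gain * B_hbb + offset
S_scene = gain * L_scene + offset

L_est = calibrate_scene(S_scene, S_abb, S_hbb, B_abb, B_hbb)
```

Estimating the NESR as described in the abstract then amounts to taking the standard deviation of `L_est` over repeated scans of a stable blackbody view.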
NASA Astrophysics Data System (ADS)
Tamaru, S.; Kubota, H.; Yakushiji, K.; Fukushima, A.; Yuasa, S.
2017-11-01
This work presents a technique to calibrate the spin torque oscillator (STO) measurement system by utilizing the whiteness of shot noise. The raw shot noise spectrum in a magnetic tunnel junction based STO in the microwave frequency range is obtained by first subtracting the baseline noise, and then excluding the field dependent mag-noise components reflecting the thermally excited spin wave resonances. As the shot noise is guaranteed to be completely white, the total gain of the signal path should be proportional to the shot noise spectrum obtained by the above procedure, which allows for an accurate gain calibration of the system and a quantitative determination of each noise power. The power spectral density of the shot noise as a function of bias voltage obtained by this technique was compared with a theoretical calculation, which showed excellent agreement when the Fano factor was assumed to be 0.99.
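The whiteness argument can be sketched directly: after baseline subtraction, whatever frequency dependence remains in the shot-noise spectrum must be the gain of the signal path. The mag-noise exclusion step is omitted here, and all spectra are synthetic:

```python
# Hedged sketch: using the whiteness of shot noise to extract the relative
# frequency-dependent gain of a measurement chain.
import numpy as np

def relative_gain(measured, baseline):
    """Shot noise is white, so the baseline-subtracted spectrum is
    proportional to the signal-path gain; normalize to mean 1."""
    shot = measured - baseline
    return shot / shot.mean()

# Synthetic example: a frequency-dependent gain shapes a flat shot-noise
# floor sitting on top of an additive baseline.
f = np.linspace(2e9, 10e9, 200)            # Hz, hypothetical band
true_gain = 1.0 + 0.5 * np.sin(f / 1e9)    # made-up gain ripple
baseline = 0.3 * np.ones_like(f)
shot_level = 4.0                           # flat (white) shot-noise PSD
measured = baseline + true_gain * shot_level

g = relative_gain(measured, baseline)
```

Dividing any subsequently measured spectrum by this gain curve then yields calibrated noise powers, which is how the paper compares the bias dependence of the shot-noise PSD against theory.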
A water-vapor radiometer error model. [for ionosphere in geodetic microwave techniques
NASA Technical Reports Server (NTRS)
Beckman, B.
1985-01-01
The water-vapor radiometer (WVR) is used to calibrate unpredictable delays in the wet component of the troposphere in geodetic microwave techniques such as very-long-baseline interferometry (VLBI) and Global Positioning System (GPS) tracking. Based on experience with Jet Propulsion Laboratory (JPL) instruments, the current level of accuracy in wet-troposphere calibration limits the accuracy of local vertical measurements to 5-10 cm. The goal for the near future is 1-3 cm. Although the WVR is currently the best calibration method, many instruments are prone to systematic error. In this paper, a treatment of WVR data is proposed and evaluated. This treatment reduces the effect of WVR systematic errors by estimating parameters that specify an assumed functional form for the error. The assumed form of the treatment is evaluated by comparing the results of two similar WVR's operating near each other. Finally, the observability of the error parameters is estimated by covariance analysis.
Yandayan, T; Geckeler, R D; Aksulu, M; Akgoz, S A; Ozgur, B
2016-05-01
The application of advanced error-separating shearing techniques to the precise calibration of autocollimators with Small Angle Generators (SAGs) was carried out for the first time. The experimental realization was achieved using the High Precision Small Angle Generator (HPSAG) of TUBITAK UME under classical dimensional metrology laboratory environmental conditions. The standard uncertainty value of 5 mas (24.2 nrad) reached by classical calibration method was improved to the level of 1.38 mas (6.7 nrad). Shearing techniques, which offer a unique opportunity to separate the errors of devices without recourse to any external standard, were first adapted by Physikalisch-Technische Bundesanstalt (PTB) to the calibration of autocollimators with angle encoders. It has been demonstrated experimentally in a clean room environment using the primary angle standard of PTB (WMT 220). The application of the technique to a different type of angle measurement system extends the range of the shearing technique further and reveals other advantages. For example, the angular scales of the SAGs are based on linear measurement systems (e.g., capacitive nanosensors for the HPSAG). Therefore, SAGs show different systematic errors when compared to angle encoders. In addition to the error-separation of HPSAG and the autocollimator, detailed investigations on error sources were carried out. Apart from determination of the systematic errors of the capacitive sensor used in the HPSAG, it was also demonstrated that the shearing method enables the unique opportunity to characterize other error sources such as errors due to temperature drift in long term measurements. This proves that the shearing technique is a very powerful method for investigating angle measuring systems, for their improvement, and for specifying precautions to be taken during the measurements.
Low-emittance tuning of storage rings using normal mode beam position monitor calibration
NASA Astrophysics Data System (ADS)
Wolski, A.; Rubin, D.; Sagan, D.; Shanks, J.
2011-07-01
We describe a new technique for low-emittance tuning of electron and positron storage rings. This technique is based on calibration of the beam position monitors (BPMs) using excitation of the normal modes of the beam motion, and has benefits over conventional methods. It is relatively fast and straightforward to apply, it can be as easily applied to a large ring as to a small ring, and the tuning for low emittance becomes completely insensitive to BPM gain and alignment errors that can be difficult to determine accurately. We discuss the theory behind the technique, present some simulation results illustrating that it is highly effective and robust for low-emittance tuning, and describe the results of some initial experimental tests on the CesrTA storage ring.
Evaluation of a physically based quasi-linear and a conceptually based nonlinear Muskingum methods
NASA Astrophysics Data System (ADS)
Perumal, Muthiah; Tayfur, Gokmen; Rao, C. Madhusudana; Gurarslan, Gurhan
2017-03-01
Two variants of the Muskingum flood routing method formulated to account for the nonlinearity of the channel routing process are investigated in this study. These variant methods are: (1) the three-parameter conceptual Nonlinear Muskingum (NLM) method advocated by Gill in 1978, and (2) the Variable Parameter McCarthy-Muskingum (VPMM) method recently proposed by Perumal and Price in 2013. The VPMM method does not require the rigorous calibration and validation procedures required in the case of the NLM method, due to established relationships of its parameters with flow and channel characteristics based on hydrodynamic principles. The parameters of the conceptual nonlinear storage equation used in the NLM method were calibrated using Artificial Intelligence Application (AIA) techniques, such as the Genetic Algorithm (GA), the Differential Evolution (DE), the Particle Swarm Optimization (PSO) and the Harmony Search (HS). The calibration was carried out on a given set of hypothetical flood events obtained by routing a given inflow hydrograph in a set of 40 km long prismatic channel reaches using the Saint-Venant (SV) equations. The validation of the calibrated NLM method was investigated using a different set of hypothetical flood hydrographs obtained in the same set of channel reaches used for the calibration studies. Both sets of solutions obtained in the calibration and validation cases using the NLM method were compared with the corresponding solutions of the VPMM method based on some pertinent evaluation measures. The results of the study reveal that the physically based VPMM method is capable of accounting for the nonlinear characteristics of flood wave movement better than the conceptually based NLM method, which requires the use of tedious calibration and validation procedures.
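A minimal sketch of the three-parameter nonlinear Muskingum storage relation S = K[xI + (1-x)O]^m with explicit-Euler routing follows. The parameter values and hydrograph are illustrative, not calibrated values from the study:

```python
# Hedged sketch: routing a hydrograph through the nonlinear Muskingum
# storage relation S = K * (x*I + (1-x)*O)**m.

def route_nlm(inflow, K, x, m, dt=1.0):
    """At each step, invert the storage relation for the outflow O, then
    update storage from continuity dS/dt = I - O (explicit Euler)."""
    S = K * inflow[0] ** m              # start at steady state (O = I)
    outflow = []
    for I in inflow:
        O = ((S / K) ** (1.0 / m) - x * I) / (1.0 - x)
        O = max(O, 0.0)                 # guard against the initial dip
        outflow.append(O)
        S += dt * (I - O)
    return outflow

# Illustrative triangular inflow hydrograph (m3/s), hourly steps.
inflow = [10, 10, 20, 40, 60, 50, 40, 30, 20, 10, 10, 10, 10, 10]
outflow = route_nlm(inflow, K=5.0, x=0.2, m=1.2)
```

The calibration problem the abstract describes is exactly the search for (K, x, m) that make such routed outflows match benchmark hydrographs, which is why GA, DE, PSO, and HS are brought in.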
SAR calibration technology review
NASA Technical Reports Server (NTRS)
Walker, J. L.; Larson, R. W.
1981-01-01
Synthetic Aperture Radar (SAR) calibration technology including a general description of the primary calibration techniques and some of the factors which affect the performance of calibrated SAR systems are reviewed. The use of reference reflectors for measurement of the total system transfer function along with an on-board calibration signal generator for monitoring the temporal variations of the receiver to processor output is a practical approach for SAR calibration. However, preliminary error analysis and previous experimental measurements indicate that reflectivity measurement accuracies of better than 3 dB will be difficult to achieve. This is not adequate for many applications and, therefore, improved end-to-end SAR calibration techniques are required.
A combined microphone and camera calibration technique with application to acoustic imaging.
Legg, Mathew; Bradley, Stuart
2013-10-01
We present a calibration technique for an acoustic imaging microphone array, combined with a digital camera. Computer vision and acoustic time of arrival data are used to obtain microphone coordinates in the camera reference frame. Our new method allows acoustic maps to be plotted onto the camera images without the need for additional camera alignment or calibration. Microphones and cameras may be placed in an ad-hoc arrangement and, after calibration, the coordinates of the microphones are known in the reference frame of a camera in the array. No prior knowledge of microphone positions, inter-microphone spacings, or air temperature is required. This technique is applied to a spherical microphone array and a mean difference of 3 mm was obtained between the coordinates obtained with this calibration technique and those measured using a precision mechanical method.
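The time-of-arrival step can be sketched as classic multilateration: with known source positions and measured source-to-microphone distances (speed of sound times time of flight), the microphone coordinates follow from a linearized least-squares solve. The source geometry below is a hypothetical placeholder, and the paper's joint camera/microphone estimation is not reproduced:

```python
# Hedged sketch: locating one microphone from distances to known sources.
import numpy as np

def locate_mic(sources, distances):
    """Linearized multilateration: subtracting the first sphere equation
    |m - s_i|^2 = r_i^2 from the others gives a linear system in m."""
    s0, r0 = sources[0], distances[0]
    A = 2.0 * (sources[1:] - s0)
    b = (r0**2 - distances[1:]**2
         + np.sum(sources[1:]**2, axis=1) - np.sum(s0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Hypothetical geometry: five loudspeaker positions (m) and a microphone.
sources = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0], [1.0, 1.0, 1.0]])
mic_true = np.array([0.3, -0.2, 0.5])
distances = np.linalg.norm(sources - mic_true, axis=1)

mic_est = locate_mic(sources, distances)
```

The paper's contribution is solving this jointly for all microphones in the camera's reference frame, with the sound speed (hence air temperature) treated as unknown; this sketch assumes the distances are already known.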
NASA Astrophysics Data System (ADS)
Tol, Paul; van Hees, Richard; van Kempen, Tim; Krijger, Matthijs; Cadot, Sidney; Aben, Ilse; Ludewig, Antje; Dingjan, Jos; Persijn, Stefan; Hoogeveen, Ruud
2016-10-01
The Tropospheric Monitoring Instrument (TROPOMI) on-board the Sentinel-5 Precursor satellite is an Earth-observing spectrometer with bands in the ultraviolet, visible, near infrared and short-wave infrared (SWIR). It provides daily global coverage of atmospheric trace gases relevant for tropospheric air quality and climate research. Three new techniques will be presented that are unique to the TROPOMI-SWIR spectrometer. The retrieval of methane and CO columns from the data of the SWIR band requires for each detector pixel an accurate instrument spectral response function (ISRF), i.e. the normalized signal as a function of wavelength. A new determination method for Earth-observing instruments has been used in the on-ground calibration, based on measurements with a SWIR optical parametric oscillator (OPO) that was scanned over the whole TROPOMI-SWIR spectral range. The calibration algorithm derives the ISRF without needing the absolute wavelength during the measurement. The same OPO has also been used to determine the two-dimensional stray-light distribution for each SWIR pixel with a dynamic range of 7 orders of magnitude. This was achieved by combining measurements at several exposure times and taking saturation into account. The correction algorithm and data are designed to remove the mean stray-light distribution and a reflection that moves relative to the direct image, within the strict constraints of the available time for the L01b processing. A third new technique is an alternative calibration of the SWIR absolute radiance and irradiance using a black body at the temperature of melting silver. Unlike a standard FEL lamp, this source does not have to be calibrated itself, because the temperature is very stable and well known. Measurement methods, data analyses, correction algorithms and limitations of the new techniques will be presented.
Refinement of pressure calibration for multi-anvil press experiments
NASA Astrophysics Data System (ADS)
Ono, S.
2016-12-01
Accurate characterization of the pressure and temperature environment in high-pressure apparatuses is of essential importance when applying laboratory data to the study of the Earth's interior. Synchrotron X-ray sources have recently made in situ pressure calibration a common technique in high-pressure experiments; however, this technique cannot be used in laboratory-based experiments, so conventional pressure calibration remains of great interest for understanding the Earth's interior. Several high-pressure phase transitions used as pressure calibrants in laboratory-based multi-anvil experiments have been investigated. Precise determinations of the phase boundaries of CaGeO3 [1], Fe2SiO4 [2], SiO2, and Zr [3] were performed with multi-anvil press or diamond anvil cell apparatuses combined with the synchrotron X-ray diffraction technique. The transition pressures in CaGeO3 (garnet-perovskite), Fe2SiO4 (alpha-gamma), and SiO2 (coesite-stishovite) were in general agreement with those reported by previous studies. However, significant discrepancies in the slopes, dP/dT, of these transitions between our study and previous ones were confirmed. In the case of the Zr study [3], our experimental results elucidate the inconsistency in the transition pressure between the omega and beta phases of Zr observed in previous studies. [1] Ono et al. (2011) Phys. Chem. Minerals, 38, 735-740. [2] Ono et al. (2013) Phys. Chem. Minerals, 40, 811-816. [3] Ono & Kikegawa (2015) J. Solid State Chem., 225, 110-113.
Analytical model for real time, noninvasive estimation of blood glucose level.
Adhyapak, Anoop; Sidley, Matthew; Venkataraman, Jayanti
2014-01-01
The paper presents an analytical model to estimate blood glucose level from measurements made noninvasively and in real time by an antenna strapped to a patient's wrist. The RIT ETA Lab research group has shown promising evidence that an antenna's resonant frequency can track, in real time, changes in glucose concentration. Based on an in vitro study of blood samples from diabetic patients, the paper presents a modified Cole-Cole model that incorporates a factor representing the change in glucose level. A calibration technique based on the antenna's input impedance is discussed, and the results show good estimates as compared to glucose meter readings. An alternate calibration methodology has been developed, based on the shift in the antenna's resonant frequency, using an equivalent circuit model containing a shunt capacitor to represent the shift in resonant frequency with changing glucose levels. Work in progress includes optimizing the technique with a larger sample of patients.
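For readers unfamiliar with the Cole-Cole dispersion model that the abstract builds on, a minimal sketch follows. The parameter values and the `glucose_factor` multiplier here are purely illustrative placeholders, not the paper's fitted blood data or its actual modification.

```python
import numpy as np

def cole_cole(freq_hz, eps_inf=4.0, delta_eps=70.0, tau=8.4e-12,
              alpha=0.1, sigma_s=0.9, glucose_factor=1.0):
    """Single-pole Cole-Cole complex permittivity of a tissue-like
    medium:  eps(w) = eps_inf + d_eps/(1 + (j*w*tau)^(1-alpha))
                      + sigma_s/(j*w*eps0).
    `glucose_factor` scales the dispersion amplitude only as an
    illustration of adding a glucose-dependent term."""
    eps0 = 8.854e-12                 # vacuum permittivity, F/m
    w = 2 * np.pi * freq_hz
    dispersion = glucose_factor * delta_eps / (1 + (1j * w * tau) ** (1 - alpha))
    conduction = sigma_s / (1j * w * eps0)
    return eps_inf + dispersion + conduction

eps = cole_cole(2.4e9)   # complex permittivity near 2.4 GHz
```

A shift in `glucose_factor` perturbs the complex permittivity and hence the resonant frequency of an antenna loaded by the medium, which is the effect the calibration tracks.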
Pulsating stars and the distance scale
NASA Astrophysics Data System (ADS)
Macri, Lucas
2017-09-01
I present an overview of the latest results from the SH0ES project, which obtained homogeneous Hubble Space Telescope (HST) photometry in the optical and near-infrared for ˜ 3500 and ˜ 2300 Cepheids, respectively, across 19 supernova hosts and 4 calibrators to determine the value of H0 with a total uncertainty of 2.4%. I discuss the current 3.4σ "tension" between this local measurement and predictions of H0 based on observations of the CMB and the assumption of "standard" ΛCDM. I review ongoing efforts to reach σ(H0) = 1%, including recent advances on the absolute calibration of Milky Way Cepheid period-luminosity relations (PLRs) using a novel astrometric technique with HST. Lastly, I highlight recent results from another collaboration on the development of new statistical techniques to detect, classify and phase extragalactic Miras using noisy and sparsely-sampled observations. I present preliminary Mira PLRs at various wavelengths based on the application of these techniques to a survey of M33.
The Geostationary Lightning Mapper: Its Performance and Calibration
NASA Astrophysics Data System (ADS)
Christian, H. J., Jr.
2015-12-01
The Geostationary Lightning Mapper (GLM) has been developed as an operational instrument on the GOES-R series of spacecraft. The GLM is a unique instrument, unlike other meteorological instruments both in how it operates and in the information content it provides. Instrumentally, it is an event detector rather than an imager. While processing almost a billion pixels per second with 14 bits of resolution, the event detection process reduces the required telemetry bandwidth by a factor of almost 10(exp 5), keeping the telemetry requirements modest and enabling efficient ground processing that leads to rapid data distribution to operational users. The GLM was designed to detect about 90 percent of the total lightning flashes within its almost hemispherical field of view. Based on laboratory calibration, we expect the on-orbit detection efficiency to be closer to 85%, making it the highest performing, large-area-coverage total lightning detector. It has a number of unique design features that will enable it to have near-uniform spatial resolution over most of its field of view and to operate with minimal impact on performance during solar eclipses. The GLM has no dedicated on-orbit calibration system, so the ground-based calibration provides the basis for the predicted radiometric performance. A number of problems were encountered during the calibration of Flight Model 1. The issues arose from GLM design features, including its wide field of view, fast lens, the narrow-band interference filters located in both object and collimated space, and the fact that the GLM is inherently an event detector yet the calibration procedures required calibration of both images and events. The GLM calibration techniques were based on those developed for the Lightning Imaging Sensor calibration, but there are enough differences between the sensors that the initial GLM calibration suggested it is significantly more sensitive than its design parameters.
The calibration discrepancies have been resolved and will be discussed. Absolute calibration will be verified on-orbit using vicarious cloud reflections. In addition to details of the GLM calibration, the presentation will address the unique design of the GLM, its features, capabilities and performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cohen, O.
The development of the Zeeman–Doppler Imaging (ZDI) technique has provided synoptic observations of surface magnetic fields of low-mass stars. This led the stellar astrophysics community to adopt modeling techniques that have been used in solar physics with solar magnetograms. However, many of these techniques have been neglected by the solar community due to their failure to reproduce solar observations. Nevertheless, some of these techniques are still used to simulate the coronae and winds of solar analogs. Here we present a comparative study between two MHD models for the solar corona and solar wind. The first type of model is a polytropic wind model, and the second is the physics-based AWSOM model. We show that while the AWSOM model consistently reproduces many solar observations, the polytropic model fails to reproduce many of them, and in the cases where it does, its solutions are unphysical. Our recommendation is that polytropic models, which are used to estimate mass-loss rates and other parameters of solar analogs, must first be calibrated with solar observations. Alternatively, these models can be calibrated with models that capture more detailed physics of the solar corona (such as the AWSOM model) and that can reproduce solar observations in a consistent manner. Without such a calibration, the results of the polytropic models cannot be validated, and they risk being wrongly used by others.
On the Long-Term Stability of Microwave Radiometers Using Noise Diodes for Calibration
NASA Technical Reports Server (NTRS)
Brown, Shannon T.; Desai, Shailen; Lu, Wenwen; Tanner, Alan B.
2007-01-01
Results are presented from the long-term monitoring and calibration of the National Aeronautics and Space Administration Jason Microwave Radiometer (JMR) on the Jason-1 ocean altimetry satellite and the ground-based Advanced Water Vapor Radiometers (AWVRs) developed for the Cassini Gravity Wave Experiment. Both radiometers retrieve the wet tropospheric path delay (PD) of the atmosphere and use internal noise diodes (NDs) for gain calibration. The JMR is the first radiometer flown in space that uses NDs for calibration. External calibration techniques are used to derive a time series of ND brightness for both instruments spanning more than four years. For the JMR, an optimal estimator is used to find the set of calibration coefficients that minimizes the root-mean-square difference between the JMR brightness temperatures and the on-Earth hot and cold references. For the AWVR, continuous tip curves are used to derive the ND brightness. For the JMR and AWVR, both of which contain three redundant NDs per channel, it was observed that some NDs were very stable, whereas others experienced jumps and drifts in their effective brightness. Over the four-year period, the ND stability ranged from 0.2% to 3% among the diodes for both instruments. The presented recalibration methodology demonstrates that long-term calibration stability can be achieved with frequent recalibration of the diodes using external calibration techniques. The JMR PD drift compared to ground truth over the four years since launch was reduced from 3.9 to -0.01 mm/year with the recalibrated ND time series. The JMR brightness temperature calibration stability is estimated to be 0.25 K over ten days.
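The idea of referencing a noise diode against external hot and cold calibration points can be illustrated with a standard two-point radiometer calibration. This is a simplification of the JMR optimal estimator, and the names and numbers below are illustrative only:

```python
def nd_brightness(c_cold, c_hot, c_nd_on, t_cold, t_hot):
    """Two-point radiometer calibration sketch: solve the linear
    gain/offset from counts observed on cold and hot references of
    known brightness temperature, then convert the counts measured
    with the noise diode on into an effective ND temperature."""
    gain = (c_hot - c_cold) / (t_hot - t_cold)   # counts per kelvin
    offset = c_cold - gain * t_cold              # counts at 0 K
    return (c_nd_on - offset) / gain             # ND brightness, K

# Illustrative numbers: cold sky (2.7 K) and an ambient load (300 K).
t_nd = nd_brightness(c_cold=1000.0, c_hot=1600.0, c_nd_on=1300.0,
                     t_cold=2.7, t_hot=300.0)
```

Tracking `t_nd` over repeated external calibrations is what reveals the jumps and drifts in effective diode brightness described above.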
Calibration of High Heat Flux Sensors at NIST
Murthy, A. V.; Tsai, B. K.; Gibson, C. E.
1997-01-01
An ongoing program at the National Institute of Standards and Technology (NIST) is aimed at improving and standardizing heat-flux sensor calibration methods. The calibration needs of U.S. science and industry currently exceed the NIST capability of 40 kW/m2 irradiance. To extend this capability, as well as to meet lower-level non-radiative heat-flux calibration needs of science and industry, three different types of calibration facilities are under development at NIST: convection, conduction, and radiation. This paper describes the research activities associated with the NIST Radiation Calibration Facility. Two different techniques, transfer and absolute, are presented. The transfer calibration technique employs a transfer standard, calibrated with reference to a radiometric standard, for calibrating sensors using a graphite-tube blackbody. Plans for an absolute calibration facility include the use of a spherical blackbody and a cooled aperture and sensor-housing assembly to calibrate sensors in a low-convection environment. PMID:27805156
Augmented classical least squares multivariate spectral analysis
Haaland, David M.; Melgaard, David K.
2004-02-03
A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.
Augmented Classical Least Squares Multivariate Spectral Analysis
Haaland, David M.; Melgaard, David K.
2005-07-26
A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.
Augmented Classical Least Squares Multivariate Spectral Analysis
Haaland, David M.; Melgaard, David K.
2005-01-11
A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.
Empirical dual energy calibration (EDEC) for cone-beam computed tomography.
Stenner, Philip; Berkus, Timo; Kachelriess, Marc
2007-09-01
Material-selective imaging using dual energy CT (DECT) relies heavily on well-calibrated material decomposition functions. These require precise knowledge of the detected x-ray spectra, and even if they are exactly known, the reliability of DECT will suffer from scattered radiation. We propose an empirical method to determine the proper decomposition function. In contrast to other decomposition algorithms, our empirical dual energy calibration (EDEC) technique requires neither knowledge of the spectra nor of the attenuation coefficients. The desired material-selective raw data p1 and p2 are obtained as functions of the measured attenuation data q1 and q2 (one DECT scan = two raw data sets) by passing them through a polynomial function. The polynomial's coefficients are determined using a general least-squares fit based on thresholded images of a calibration phantom. The calibration phantom's dimensions should be of the same order of magnitude as the test object, but other than that no assumptions about its exact size or positioning are made. Once the decomposition coefficients are determined, DECT raw data can be decomposed by simply passing them through the polynomial. To demonstrate EDEC, simulations of an oval CTDI phantom, a lung phantom, a thorax phantom and a mouse phantom were carried out. The method was further verified by measuring a physical mouse phantom, a half-and-half-cylinder phantom and a Yin-Yang phantom with a dedicated in vivo dual-source micro-CT scanner. The raw data were decomposed into their components, reconstructed, and the pixel values obtained were compared to the theoretical values. The determination of the calibration coefficients with EDEC is very robust and depends only slightly on the type of calibration phantom used. The images of the test phantoms (simulations and measurements) show a nearly perfect agreement with the theoretical μ values and density values.
Since EDEC is an empirical technique, it inherently compensates for scatter components. The empirical dual energy calibration technique is a pragmatic, simple, and reliable calibration approach that produces highly quantitative DECT images.
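The decomposition step described above, fitting a polynomial in (q1, q2) by least squares and then applying it to new raw data, can be sketched as follows. The polynomial order and the synthetic ground truth are assumptions for illustration; in EDEC the target values would come from thresholded images of the calibration phantom.

```python
import numpy as np

def fit_edec_coeffs(q1, q2, p_true, order=2):
    """Fit a decomposition polynomial
        p(q1, q2) = sum_{i+j<=order} c_ij * q1^i * q2^j
    to known material-selective values by ordinary least squares."""
    terms = [(i, j) for i in range(order + 1)
                    for j in range(order + 1 - i)]
    A = np.column_stack([q1**i * q2**j for i, j in terms])
    coeffs, *_ = np.linalg.lstsq(A, p_true, rcond=None)
    return terms, coeffs

def apply_edec(q1, q2, terms, coeffs):
    """Decompose measured dual-energy data by evaluating the polynomial."""
    A = np.column_stack([q1**i * q2**j for i, j in terms])
    return A @ coeffs

# Synthetic check: if the true mapping is itself a low-order
# polynomial, the fit should recover it essentially exactly.
rng = np.random.default_rng(0)
q1 = rng.uniform(0, 2, 200)
q2 = rng.uniform(0, 2, 200)
p_true = 0.5 + 1.2 * q1 - 0.7 * q2 + 0.3 * q1 * q2
terms, coeffs = fit_edec_coeffs(q1, q2, p_true)
p_hat = apply_edec(q1, q2, terms, coeffs)
```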
Robinson, Andrew P; Tipping, Jill; Cullen, David M; Hamilton, David; Brown, Richard; Flynn, Alex; Oldfield, Christopher; Page, Emma; Price, Emlyn; Smith, Andrew; Snee, Richard
2016-12-01
Patient-specific absorbed dose calculations for molecular radiotherapy require accurate activity quantification. This is commonly derived from Single-Photon Emission Computed Tomography (SPECT) imaging using a calibration factor relating detected counts to known activity in a phantom insert. A series of phantom inserts, based on the mathematical models underlying many clinical dosimetry calculations, has been produced using 3D printing techniques. SPECT/CT data for the phantom inserts have been used to calculate new organ-specific calibration factors for (99m)Tc and (177)Lu. The measured calibration factors are compared to values predicted by calculations using a Gaussian kernel. Measured SPECT calibration factors for 3D-printed organs display a clear dependence on organ shape for (99m)Tc and (177)Lu. The observed variation in calibration factor is reproduced by the Gaussian kernel-based calculation over a two-order-of-magnitude change in insert volume for (99m)Tc and (177)Lu. The new organ-specific calibration factors show 24%, 11% and 8% reductions in absorbed dose for the liver, spleen and kidneys, respectively. Non-spherical calibration factors from 3D-printed phantom inserts can significantly improve the accuracy of whole-organ activity quantification for molecular radiotherapy, providing a crucial step towards individualised activity quantification and patient-specific dosimetry. 3D-printed inserts are found to provide a cost-effective and efficient way for clinical centres to access more realistic phantom data.
Development of an in situ calibration technique for combustible gas detectors
NASA Technical Reports Server (NTRS)
Shumar, J. W.; Wynveen, R. A.; Lance, N., Jr.; Lantz, J. B.
1977-01-01
This paper describes the development of an in situ calibration procedure for combustible gas detectors (CGD). The CGD will be a necessary device for future space vehicles as many subsystems in the Environmental Control/Life Support System utilize or produce hydrogen (H2) gas. Existing calibration techniques are time-consuming and require support equipment such as an environmental chamber and calibration gas supply. The in situ calibration procedure involves utilization of a water vapor electrolysis cell for the automatic in situ generation of a H2/air calibration mixture within the flame arrestor of the CGD. The development effort concluded with the successful demonstration of in situ span calibrations of a CGD.
NASA Astrophysics Data System (ADS)
Golobokov, M.; Danilevich, S.
2018-04-01
To assess calibration reliability and to automate that assessment, procedures for data collection and for a simulation study of the thermal imager calibration procedure have been elaborated. The existing calibration techniques do not always provide high reliability. A new method for analyzing existing calibration techniques and developing efficient new ones has been suggested and tested. A type of software has been studied that generates instrument calibration reports automatically, monitors their proper configuration, processes measurement results, and assesses instrument validity. The use of such software reduces the man-hours spent finalizing calibration data by a factor of 2 to 5 and eliminates a whole class of typical operator errors.
NASA Astrophysics Data System (ADS)
Liu, Yonghuai; Rodrigues, Marcos A.
2000-03-01
This paper describes research on the application of machine vision techniques to a real-time automatic inspection task for air filter components in a manufacturing line. A novel calibration algorithm is proposed, based on a special camera setup in which defective items show a large calibration error. The algorithm makes full use of rigid constraints derived from the analysis of geometric properties of reflected correspondence vectors that have been synthesized into a single coordinate frame, and it provides a closed-form solution for the estimation of all parameters. For a comparative study of performance, we also developed another algorithm for this special camera setup using epipolar geometry. A number of experiments using synthetic data have shown that the proposed algorithm is generally more accurate and robust than the epipolar-geometry-based algorithm and that the geometric properties of reflected correspondence vectors provide effective constraints for the calibration of rigid body transformations.
Note: Ultrasonic gas flowmeter based on optimized time-of-flight algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, X. F.; Tang, Z. A.
2011-04-15
A new digital-signal-processor-based single-path ultrasonic gas flowmeter is designed, constructed, and experimentally tested. To achieve high-accuracy measurements, an optimized ultrasound drive method incorporating both amplitude modulation and phase modulation of the transmit-receive technique is used to stimulate the transmitter. Based on regularities among the received envelope zero-crossings, different signal-to-noise ratio situations of the received signal are discriminated and appropriate time-of-flight algorithms are applied to calculate the flow rate. Experimental results from the dry calibration indicate that the designed flowmeter prototype meets the zero-flow verification test requirements of the American Gas Association Report No. 9. Furthermore, results from the flow calibration show that the proposed prototype measures flow rate accurately in practical experiments, and the nominal errors after FWME adjustment are below 0.8% throughout the calibration range.
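The transit-time principle underlying such a single-path meter can be illustrated with the textbook velocity relation; the paper's optimized zero-crossing time-of-flight estimation that produces the two transit times is not reproduced, and the geometry values below are illustrative.

```python
import math

def flow_velocity(t_up, t_down, path_len, angle_deg):
    """Transit-time flow velocity for a single acoustic path of length
    L inclined at angle theta to the flow axis:
        v = L / (2 cos(theta)) * (t_up - t_down) / (t_up * t_down)
    The sound-speed dependence cancels out of this expression."""
    theta = math.radians(angle_deg)
    return path_len * (t_up - t_down) / (2 * math.cos(theta) * t_up * t_down)

# Round-trip check: synthesize transit times for c = 343 m/s,
# v = 5 m/s, a 0.1 m path at 45 degrees, then recover v.
c, v, L, deg = 343.0, 5.0, 0.1, 45.0
ax = v * math.cos(math.radians(deg))        # axial velocity component
t_up, t_down = L / (c - ax), L / (c + ax)   # against / with the flow
v_est = flow_velocity(t_up, t_down, L, deg)
```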
Weighted least squares techniques for improved received signal strength based localization.
Tarrío, Paula; Bernardos, Ana M; Casar, José R
2011-01-01
The practical deployment of wireless positioning systems requires minimizing the calibration procedures while improving the location estimation accuracy. Received Signal Strength localization techniques using propagation channel models are the simplest alternative, but they are usually designed under the assumption that the radio propagation model is to be perfectly characterized a priori. In practice, this assumption does not hold and the localization results are affected by the inaccuracies of the theoretical, roughly calibrated or just imperfect channel models used to compute location. In this paper, we propose the use of weighted multilateration techniques to gain robustness with respect to these inaccuracies, reducing the dependency of having an optimal channel model. In particular, we propose two weighted least squares techniques based on the standard hyperbolic and circular positioning algorithms that specifically consider the accuracies of the different measurements to obtain a better estimation of the position. These techniques are compared to the standard hyperbolic and circular positioning techniques through both numerical simulations and an exhaustive set of real experiments on different types of wireless networks (a wireless sensor network, a WiFi network and a Bluetooth network). The algorithms not only produce better localization results with a very limited overhead in terms of computational cost but also achieve a greater robustness to inaccuracies in channel modeling.
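A minimal sketch of weighted least squares multilateration of the kind described, here solved iteratively by Gauss-Newton on range residuals, follows. The anchor layout and weights are illustrative, the weights in practice coming from the per-measurement accuracies implied by the RSS channel model; the paper's hyperbolic and circular formulations differ in detail.

```python
import numpy as np

def wls_position(anchors, ranges, weights, iters=50):
    """Estimate a 2-D position by minimizing
        sum_i w_i * (||p - a_i|| - d_i)^2
    with Gauss-Newton iterations, where a_i are anchor positions and
    d_i are range estimates (e.g. derived from RSS)."""
    anchors = np.asarray(anchors, float)
    p = anchors.mean(axis=0)          # initial guess: anchor centroid
    for _ in range(iters):
        diffs = p - anchors
        dists = np.linalg.norm(diffs, axis=1)
        J = diffs / dists[:, None]    # Jacobian of ||p - a_i|| w.r.t. p
        r = dists - ranges            # weighted range residuals
        W = np.diag(weights)
        step, *_ = np.linalg.lstsq(J.T @ W @ J, -(J.T @ W @ r), rcond=None)
        p = p + step
        if np.linalg.norm(step) < 1e-10:
            break
    return p

# Noiseless check: exact ranges from a known point are inverted exactly.
anchors = [(0, 0), (10, 0), (0, 10), (10, 10)]
true_p = np.array([3.0, 4.0])
ranges = np.linalg.norm(np.asarray(anchors, float) - true_p, axis=1)
p_hat = wls_position(anchors, ranges, weights=[1, 1, 1, 1])
```

With noisy ranges, down-weighting the less reliable measurements (e.g. from distant anchors) is what gives the weighted variant its robustness over the unweighted solution.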
Weighted Least Squares Techniques for Improved Received Signal Strength Based Localization
Tarrío, Paula; Bernardos, Ana M.; Casar, José R.
2011-01-01
The practical deployment of wireless positioning systems requires minimizing the calibration procedures while improving the location estimation accuracy. Received Signal Strength localization techniques using propagation channel models are the simplest alternative, but they are usually designed under the assumption that the radio propagation model is to be perfectly characterized a priori. In practice, this assumption does not hold and the localization results are affected by the inaccuracies of the theoretical, roughly calibrated or just imperfect channel models used to compute location. In this paper, we propose the use of weighted multilateration techniques to gain robustness with respect to these inaccuracies, reducing the dependency of having an optimal channel model. In particular, we propose two weighted least squares techniques based on the standard hyperbolic and circular positioning algorithms that specifically consider the accuracies of the different measurements to obtain a better estimation of the position. These techniques are compared to the standard hyperbolic and circular positioning techniques through both numerical simulations and an exhaustive set of real experiments on different types of wireless networks (a wireless sensor network, a WiFi network and a Bluetooth network). The algorithms not only produce better localization results with a very limited overhead in terms of computational cost but also achieve a greater robustness to inaccuracies in channel modeling. PMID:22164092
Calibration of limited-area ensemble precipitation forecasts for hydrological predictions
NASA Astrophysics Data System (ADS)
Diomede, Tommaso; Marsigli, Chiara; Montani, Andrea; Nerozzi, Fabrizio; Paccagnella, Tiziana
2015-04-01
The main objective of this study is to investigate the impact of calibration for limited-area ensemble precipitation forecasts, to be used for driving discharge predictions up to 5 days in advance. A reforecast dataset spanning 30 years, based on the Consortium for Small Scale Modeling Limited-Area Ensemble Prediction System (COSMO-LEPS), was used for testing the calibration strategy. Three calibration techniques were applied: quantile-to-quantile mapping, linear regression, and analogs. The performance of these methodologies was evaluated in terms of statistical scores for the precipitation forecasts operationally provided by COSMO-LEPS in the years 2003-2007 over Germany, Switzerland, and the Emilia-Romagna region (northern Italy). The analog-based method seemed preferable because of its capability to correct position errors and spread deficiencies. A suitable spatial domain for the analog search can help to handle model spatial errors as systematic errors. However, the performance of the analog-based method may degrade in cases where only a limited training dataset is available. A sensitivity test on the length of the training dataset over which the analog search is performed has been carried out. The quantile-to-quantile mapping and linear regression methods were less effective, mainly because the forecast-analysis relation was not strong for the available training dataset. A comparison between calibration based on the deterministic reforecast and calibration based on the full operational ensemble used as the training dataset has been considered, with the aim of evaluating whether reforecasts are really worthwhile for calibration, given their remarkable computational cost. The verification of the calibration process was then performed by coupling ensemble precipitation forecasts with a distributed rainfall-runoff model.
This test was carried out for a medium-sized catchment located in Emilia-Romagna, showing a beneficial impact of the analog-based method on the reduction of missed events for discharge predictions.
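A minimal sketch of the quantile-to-quantile mapping idea, assuming a simple empirical-quantile implementation (the operational COSMO-LEPS calibration is more involved): each new forecast value is assigned its quantile within the forecast climatology, and the calibrated value is the observation climatology read at that same quantile.

```python
import numpy as np

def quantile_map(forecast, train_fcst, train_obs):
    """Quantile-to-quantile mapping: replace a forecast value by the
    observed value at the same climatological quantile, estimated
    from a training set of past forecasts and matching observations."""
    train_fcst = np.sort(np.asarray(train_fcst, float))
    train_obs = np.asarray(train_obs, float)
    # Empirical quantile of the new forecast in the forecast climatology.
    q = np.searchsorted(train_fcst, forecast) / len(train_fcst)
    q = np.clip(q, 0.0, 1.0)
    # Read the observation climatology at the same quantile.
    return np.quantile(train_obs, q)

# Toy example: forecasts systematically double the observed rainfall,
# so a 60 mm forecast maps back to roughly 30 mm.
train_fcst = np.arange(0.0, 100.0) * 2.0   # 0, 2, ..., 198 mm
train_obs = np.arange(0.0, 100.0)          # 0, 1, ...,  99 mm
corrected = quantile_map(60.0, train_fcst, train_obs)
```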
Calibration and assessment of full-field optical strain measurement procedures and instrumentation
NASA Astrophysics Data System (ADS)
Kujawinska, Malgorzata; Patterson, E. A.; Burguete, R.; Hack, E.; Mendels, D.; Siebert, T.; Whelan, Maurice
2006-09-01
There are no international standards or norms for the use of optical techniques for full-field strain measurement. In this paper the rationale and design of a reference material and a set of standardized materials for the calibration and evaluation of optical systems for full-field strain measurement are outlined. A classification system for the steps in the measurement process is also proposed; it allows the development of a unified approach to diagnostic testing of components in an optical strain-measurement system based on any optical technique. The results described arise from a European study known as SPOTS, whose objectives were to begin to fill the gap caused by the lack of standards.
Statistical photocalibration of photodetectors for radiometry without calibrated light sources
NASA Astrophysics Data System (ADS)
Yielding, Nicholas J.; Cain, Stephen C.; Seal, Michael D.
2018-01-01
Calibration of CCD arrays for identifying bad pixels and achieving nonuniformity correction is commonly accomplished using dark frames. This kind of calibration does not achieve radiometric calibration of the array, since only the relative response of the detectors is computed. For that, a second calibration is sometimes performed by viewing sources with known radiances. This process can be used to calibrate photodetectors as long as a well-characterized calibration source is available. A previous attempt at creating a procedure for calibrating a photodetector using the underlying Poisson nature of photodetection required calculating the skewness of the photodetector measurements. Reliance on this third moment meant that thousands of samples could be required in some cases to compute it. Here a photocalibration procedure is defined that requires only the first and second moments of the measurements. The technique is applied to image data containing a known light source so that the accuracy of the technique can be assessed. It is shown that the algorithm can achieve accuracy within nearly 2.7% of the predicted number of photons using only 100 frames of image data.
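The first-and-second-moment idea can be illustrated for an idealized offset-free Poisson detector, where the per-pixel variance-to-mean ratio of repeated frames equals the gain in counts per photoelectron. This is a hedged sketch of the general principle, not the paper's estimator; read noise is neglected and the names and numbers are illustrative.

```python
import numpy as np

def poisson_gain(frames, offset=0.0):
    """Estimate detector gain (counts per photoelectron) from the
    first two moments of repeated frames of a static scene.  For
    offset-corrected Poisson data, var = gain * mean per pixel, so a
    regression of variance on mean across pixels yields the gain."""
    frames = np.asarray(frames, float) - offset
    mean = frames.mean(axis=0)
    var = frames.var(axis=0, ddof=1)
    # Regression through the origin: gain = sum(v*m) / sum(m^2).
    return float((var * mean).sum() / (mean**2).sum())

# Synthetic check: Poisson photons scaled by a known gain of 4.
rng = np.random.default_rng(1)
gain_true = 4.0
photons = rng.poisson(lam=200.0, size=(500, 64))  # 500 frames, 64 pixels
frames = gain_true * photons
gain_est = poisson_gain(frames)
```

Once the gain is known, raw counts divide through to photon numbers, which is the conversion that makes absolute photometry possible.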
Hazardous Environment Robotics
NASA Technical Reports Server (NTRS)
1996-01-01
Jet Propulsion Laboratory (JPL) developed video overlay calibration and demonstration techniques for ground-based telerobotics. Through a technology sharing agreement with JPL, Deneb Robotics added this as an option to its robotics software, TELEGRIP. The software is used for remotely operating robots in nuclear and hazardous environments in industries including automotive and medical. The option allows the operator to utilize video to calibrate 3-D computer models with the actual environment, and thus plan and optimize robot trajectories before the program is automatically generated.
NASA Astrophysics Data System (ADS)
Feeley, J.; Zajic, J.; Metcalf, A.; Baucom, T.
2009-12-01
The National Polar-orbiting Operational Environmental Satellite System (NPOESS) Preparatory Project (NPP) Calibration and Validation (Cal/Val) team is planning post-launch activities to calibrate the NPP sensors and validate Sensor Data Records (SDRs). The IPO has developed a web-based data collection and visualization tool in order to effectively collect, coordinate, and manage the calibration and validation tasks for the OMPS, ATMS, CrIS, and VIIRS instruments. This tool is accessible to the multi-institutional Cal/Val teams consisting of the Prime Contractor and Government Cal/Val leads along with the NASA NPP Mission team, and is used for mission planning and identification/resolution of conflicts between sensor activities. Visualization techniques aid in displaying task dependencies, including prerequisites and exit criteria, allowing for the identification of a critical path. This presentation will highlight how the information is collected, displayed, and used to coordinate the diverse instrument calibration/validation teams.
Nimbus-7 TOMS Version 7 Calibration
NASA Technical Reports Server (NTRS)
Wellemeyer, C. G.; Taylor, S. L.; Jaross, G.; DeLand, M. T.; Seftor, C. J.; Labow, G.; Swissler, T. J.; Cebula, R. P.
1996-01-01
This report describes an improved instrument characterization used for the Version 7 processing of the Nimbus-7 Total Ozone Mapping Spectrometer (TOMS) data record. An improved internal calibration technique referred to as spectral discrimination is used to provide long-term calibration precision of +/- 1%/decade in total column ozone amount. A revised wavelength scale results in a day one calibration that agrees with other satellite and ground-based measurements of total ozone, while a wavelength independent adjustment of the initial radiometric calibration constants provides good agreement with surface reflectivity measured by other satellite-borne ultraviolet measurements. The impact of other aspects of the Nimbus-7 TOMS instrument performance is also discussed. The Version 7 data should be used in all future studies involving the Nimbus-7 TOMS measurements of ozone. The data are available through the NASA Goddard Space Flight Center's Distributed Active Archive Center (DAAC).
Synthetic aperture imaging in ultrasound calibration
NASA Astrophysics Data System (ADS)
Ameri, Golafsoun; Baxter, John S. H.; McLeod, A. Jonathan; Jayaranthe, Uditha L.; Chen, Elvis C. S.; Peters, Terry M.
2014-03-01
Ultrasound calibration allows for ultrasound images to be incorporated into a variety of interventional applications. Traditional Z-bar calibration procedures rely on wired phantoms with an a priori known geometry. The line fiducials produce small, localized echoes which are then segmented from an array of ultrasound images from different tracked probe positions. In conventional B-mode ultrasound, the wires at greater depths appear blurred and are difficult to segment accurately, limiting the accuracy of ultrasound calibration. This paper presents a novel ultrasound calibration procedure that takes advantage of synthetic aperture imaging to reconstruct high resolution ultrasound images at arbitrary depths. In these images, line fiducials are much more readily and accurately segmented, leading to decreased calibration error. The proposed calibration technique is compared to one based on B-mode ultrasound. The fiducial localization error was improved from 0.21 mm in conventional B-mode images to 0.15 mm in synthetic aperture images, corresponding to an improvement of 29%. This resulted in an overall reduction of calibration error from a target registration error of 2.00 mm to 1.78 mm, an improvement of 11%. Synthetic aperture images display greatly improved segmentation capabilities due to their improved resolution and interpretability, resulting in improved calibration.
NASA Technical Reports Server (NTRS)
Taylor, Brian R.
2012-01-01
A novel, efficient air data calibration method is proposed for aircraft with limited envelopes. This method uses output-error optimization on three-dimensional inertial velocities to estimate calibration and wind parameters. Calibration parameters are based on assumed calibration models for static pressure, angle of attack, and flank angle. Estimated wind parameters are the north, east, and down components. The only assumptions needed for this method are that the inertial velocities and Euler angles are accurate, the calibration models are correct, and that the steady-state component of wind is constant throughout the maneuver. A two-minute maneuver was designed to excite the aircraft over the range of air data calibration parameters and de-correlate the angle-of-attack bias from the vertical component of wind. Simulation of the X-48B (The Boeing Company, Chicago, Illinois) aircraft was used to validate the method, ultimately using data derived from wind-tunnel testing to simulate the un-calibrated air data measurements. Results from the simulation were accurate and robust to turbulence levels comparable to those observed in flight. Future experiments are planned to evaluate the proposed air data calibration in a flight environment.
A dust-parallax distance of 19 megaparsecs to the supermassive black hole in NGC 4151.
Hönig, Sebastian F; Watson, Darach; Kishimoto, Makoto; Hjorth, Jens
2014-11-27
The active galaxy NGC 4151 has a crucial role as one of only two active galactic nuclei for which black hole mass measurements based on emission line reverberation mapping can be calibrated against other dynamical techniques. Unfortunately, effective calibration requires accurate knowledge of the distance to NGC 4151, which is not at present available. Recently reported distances range from 4 to 29 megaparsecs. Strong peculiar motions make a redshift-based distance very uncertain, and the geometry of the galaxy and its nucleus prohibit accurate measurements using other techniques. Here we report a dust-parallax distance to NGC 4151 of 19.0(+2.4)(-2.6) megaparsecs. The measurement is based on an adaptation of a geometric method that uses the emission line regions of active galaxies. Because these regions are too small to be imaged with present technology, we use instead the ratio of the physical and angular sizes of the more extended hot-dust emission as determined from time delays and infrared interferometry. This distance leads to an approximately 1.4-fold increase in the dynamical black hole mass, implying a corresponding correction to emission line reverberation masses of black holes if they are calibrated against the two objects with additional dynamical masses.
Mattingly, G. E.
1992-01-01
Critical measurement performance of fluid flowmeters requires proper and quantified verification data. These data should be generated using calibration and traceability techniques established for these verification purposes. In these calibration techniques, the calibration facility should be well-characterized and its components and performance properly traced to pertinent higher standards. The use of this calibrator to calibrate flowmeters should be appropriately established and the manner in which the calibrated flowmeter is used should be specified in accord with the conditions of the calibration. These three steps: 1) characterizing the calibration facility itself, 2) using the characterized facility to calibrate a flowmeter, and 3) using the calibrated flowmeter to make a measurement are described and the pertinent equations are given for an encoded-stroke, piston displacement-type calibrator and a pulsed output flowmeter. It is concluded that, given these equations and proper instrumentation of this type of calibrator, very high levels of performance can be attained and, in turn, these can be used to achieve high fluid flow rate measurement accuracy with pulsed output flowmeters. PMID:28053444
Cross calibration of the Landsat-7 ETM+ and EO-1 ALI sensor
Chander, G.; Meyer, D.J.; Helder, D.L.
2004-01-01
As part of the Earth Observer 1 (EO-1) Mission, the Advanced Land Imager (ALI) demonstrates a potential technological direction for Landsat Data Continuity Missions. To evaluate ALI's capabilities in this role, a cross-calibration methodology has been developed using image pairs from the Landsat-7 (L7) Enhanced Thematic Mapper Plus (ETM+) and EO-1 (ALI) to verify the radiometric calibration of ALI with respect to the well-calibrated L7 ETM+ sensor. Results have been obtained using two different approaches. The first approach involves calibration of nearly simultaneous surface observations based on image statistics from areas observed simultaneously by the two sensors. The second approach uses vicarious calibration techniques to compare the predicted top-of-atmosphere radiance derived from ground reference data collected during the overpass to the measured radiance obtained from the sensor. The results indicate that the gains of the individual sensor chip assemblies agree with the ETM+ visible and near-infrared bands to within 2% and the shortwave infrared bands to within 4%.
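The first approach, cross-calibration from image statistics over simultaneously observed areas, reduces in its simplest form to a ratio-of-means gain estimate. The sketch below uses hypothetical radiance values; it is an illustration of the idea, not the paper's processing chain.

```python
def relative_gain(ref_means, test_means):
    """Per-band relative gain of a test sensor with respect to a reference,
    from mean radiances over coincident homogeneous target areas."""
    ratios = [t / r for r, t in zip(ref_means, test_means)]
    return sum(ratios) / len(ratios)

# Hypothetical band-averaged radiances over four coincident targets:
etm_means = [42.0, 61.5, 88.2, 120.4]
ali_means = [x * 1.02 for x in etm_means]   # simulate a 2% high bias
gain = relative_gain(etm_means, ali_means)
```

A gain of 1.02 here corresponds to the 2% level of agreement quoted for the visible and near-infrared bands.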
Data Assimilation - Advances and Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Brian J.
2014-07-30
This presentation provides an overview of data assimilation (model calibration) for complex computer experiments. Calibration refers to the process of probabilistically constraining uncertain physics/engineering model inputs to be consistent with observed experimental data. An initial probability distribution for these parameters is updated using the experimental information. Utilization of surrogate models and empirical adjustment for model form error in code calibration form the basis for the statistical methodology considered. The role of probabilistic code calibration in supporting code validation is discussed. Incorporation of model form uncertainty in rigorous uncertainty quantification (UQ) analyses is also addressed. Design criteria used within a batch sequential design algorithm are introduced for efficiently achieving predictive maturity and improved code calibration. Predictive maturity refers to obtaining stable predictive inference with calibrated computer codes. These approaches allow for augmentation of initial experiment designs for collecting new physical data. A standard framework for data assimilation is presented and techniques for updating the posterior distribution of the state variables based on particle filtering and the ensemble Kalman filter are introduced.
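The ensemble Kalman filter update mentioned above can be illustrated for the simplest case, a scalar state observed directly. This is a generic textbook sketch (stochastic, perturbed-observation form), not the presentation's own code, and all numbers are synthetic.

```python
import random
import statistics

def enkf_update(ensemble, y_obs, obs_var):
    """One EnKF analysis step for a scalar state with direct observation
    (H = 1), using the stochastic (perturbed-observation) form."""
    p = statistics.pvariance(ensemble)            # forecast error variance
    k = p / (p + obs_var)                         # Kalman gain
    return [x + k * (y_obs + random.gauss(0.0, obs_var ** 0.5) - x)
            for x in ensemble]

random.seed(7)
prior = [random.gauss(0.0, 1.0) for _ in range(4000)]   # forecast ensemble
posterior = enkf_update(prior, y_obs=2.0, obs_var=1.0)
```

With equal prior and observation variances the posterior mean lands roughly halfway between the prior mean and the observation, and the ensemble spread contracts, which is the behavior the analysis step is designed to produce.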
Improved GPS-based time link calibration involving ROA and PTB.
Esteban, Héctor; Palacio, Juan; Galindo, Francisco Javier; Feldmann, Thorsten; Bauch, Andreas; Piester, Dirk
2010-03-01
The calibration of time transfer links is mandatory in the context of international collaboration for the realization of International Atomic Time. In this paper, we present the results of the calibration of the GPS time transfer link between the Real Instituto y Observatorio de la Armada (ROA) and the Physikalisch-Technische Bundesanstalt (PTB) by means of a traveling geodetic-type GPS receiver and an evaluation of the achieved type A and B uncertainty. The time transfer results were achieved by using CA, P3, and also carrier phase PPP comparison techniques. We finally use these results to re-calibrate the two-way satellite time and frequency transfer (TWSTFT) link between ROA and PTB, using one month of data. We show that a TWSTFT link can be calibrated by means of GPS time comparisons with an uncertainty below 2 ns, and that potentially even sub-nanosecond uncertainty can be achieved. This is a novel and cost-effective approach compared with the more common calibration using a traveling TWSTFT station.
Allen, Andrew J.; Zhang, Fan; Kline, R. Joseph; ...
2017-03-07
The certification of a new standard reference material for small-angle scattering [NIST Standard Reference Material (SRM) 3600: Absolute Intensity Calibration Standard for Small-Angle X-ray Scattering (SAXS)], based on glassy carbon, is presented. Creation of this SRM relies on the intrinsic primary calibration capabilities of the ultra-small-angle X-ray scattering technique. This article describes how the intensity calibration has been achieved and validated in the certified Q range, Q = 0.008–0.25 Å–1, together with the purpose, use and availability of the SRM. The intensity calibration afforded by this robust and stable SRM should be applicable universally to all SAXS instruments that employ a transmission measurement geometry, working with a wide range of X-ray energies or wavelengths. The validation of the SRM SAXS intensity calibration using small-angle neutron scattering (SANS) is discussed, together with the prospects for including SANS in a future renewal certification.
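Using such an intensity standard in practice amounts to deriving a single scale factor that puts raw measured intensities onto the certified absolute scale. The sketch below uses hypothetical intensity values and assumes the two data sets share a Q grid; it illustrates the idea only.

```python
def absolute_scale_factor(i_measured, i_certified):
    """Scale factor putting measured SAXS intensities on an absolute scale,
    computed as the mean pointwise ratio certified/measured over the
    certified Q range (assumes a common Q grid)."""
    ratios = [c / m for m, c in zip(i_measured, i_certified)]
    return sum(ratios) / len(ratios)

# Hypothetical certified intensities and raw instrument data on the same
# Q grid; the raw data are assumed to differ by one unknown factor.
i_cert = [31.2, 24.8, 15.5, 9.1, 4.3]
i_raw = [x / 3.7 for x in i_cert]
scale = absolute_scale_factor(i_raw, i_cert)
```

Multiplying the raw curve by the recovered factor reproduces the certified intensities, which is the operational content of an absolute intensity calibration.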
Hernandez, Silvia R; Kergaravat, Silvina V; Pividori, Maria Isabel
2013-03-15
An approach based on the electrochemical detection of the horseradish peroxidase enzymatic reaction by means of square wave voltammetry was developed for the determination of phenolic compounds in environmental samples. First, a systematic optimization procedure of three factors involved in the enzymatic reaction was carried out using response surface methodology through a central composite design. Second, the enzymatic electrochemical detection coupled with a multivariate calibration method based on the partial least-squares technique was optimized for the determination of a mixture of five phenolic compounds, i.e. phenol, p-aminophenol, p-chlorophenol, hydroquinone and pyrocatechol. The calibration and validation sets were built and assessed. In the calibration model, the LODs for the phenolic compounds ranged from 0.6 to 1.4 × 10⁻⁶ mol L⁻¹. Recoveries for prediction samples were higher than 85%. These compounds were analyzed simultaneously in spiked samples and in water samples collected close to tanneries and landfills. Published by Elsevier B.V.
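The multivariate calibration step can be illustrated with a classical least-squares mixture model, a simplified stand-in for the partial least-squares technique the authors actually used. The two-analyte sensitivity matrix and concentrations below are hypothetical.

```python
def fit_concentrations(S, y):
    """Solve the 2-analyte linear calibration y = S·c in the least-squares
    sense via the normal equations (a classical stand-in for PLS when the
    sensitivity matrix S is well conditioned)."""
    n = len(S)
    sts = [[sum(S[k][i] * S[k][j] for k in range(n)) for j in range(2)]
           for i in range(2)]
    sty = [sum(S[k][i] * y[k] for k in range(n)) for i in range(2)]
    det = sts[0][0] * sts[1][1] - sts[0][1] * sts[1][0]
    return [(sts[1][1] * sty[0] - sts[0][1] * sty[1]) / det,
            (sts[0][0] * sty[1] - sts[1][0] * sty[0]) / det]

# Hypothetical sensitivities of two analytes at three measurement channels:
S = [[1.0, 0.2],
     [0.5, 0.8],
     [0.1, 1.1]]
c_true = [2.0, 0.5]
y = [sum(S[k][j] * c_true[j] for j in range(2)) for k in range(3)]
c_est = fit_concentrations(S, y)
```

PLS earns its keep over this plain least-squares solve when the channels are collinear or noisy, which is exactly the situation with overlapping voltammetric signals from five phenolic compounds.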
NASA Astrophysics Data System (ADS)
Haag, Justin M.; Van Gorp, Byron E.; Mouroulis, Pantazis; Thompson, David R.
2017-09-01
The airborne Portable Remote Imaging Spectrometer (PRISM) instrument is based on a fast (F/1.8) Dyson spectrometer operating at 350-1050 nm and a two-mirror telescope combined with a Teledyne HyViSI 6604A detector array. Raw PRISM data contain electronic and optical artifacts that must be removed prior to radiometric calibration. We provide an overview of the process transforming raw digital numbers to calibrated radiance values. Electronic panel artifacts are first corrected using empirical relationships developed from laboratory data. The instrument spectral response functions (SRF) are reconstructed using a measurement-based optimization technique. Removal of SRF effects from the data improves retrieval of true spectra, particularly in the typically low-signal near-ultraviolet and near-infrared regions. As a final step, radiometric calibration is performed using corrected measurements of an object of known radiance. Implementation of the complete calibration procedure maximizes data quality in preparation for subsequent processing steps, such as atmospheric removal and spectral signature classification.
Technique for Radiometer and Antenna Array Calibration - TRAAC
NASA Technical Reports Server (NTRS)
Meyer, Paul; Sims, William; Varnavas, Kosta; McCracken, Jeff; Srinivasan, Karthik; Limaye, Ashutosh; Laymon, Charles; Richeson, James
2012-01-01
Highly sensitive receivers are used to detect minute amounts of emitted electromagnetic energy. Calibration of these receivers is vital to the accuracy of the measurements. Traditional calibration techniques depend on references internal to the receivers for calibrating the observed electromagnetic energy. Such methods can correct only measurement errors introduced by the receiver itself. The disadvantage of these existing methods is that they cannot account for errors introduced by devices, such as antennas, used for capturing electromagnetic radiation. This severely limits the types of antennas that can be used to make measurements with a high degree of accuracy. Complex antenna systems, such as electronically steerable antennas (also known as phased arrays), while offering potentially significant advantages, suffer from a lack of a reliable and accurate calibration technique. The proximity of antenna elements in an array results in interaction between the electromagnetic fields radiated (or received) by the individual elements. This phenomenon is called mutual coupling. The new calibration method uses a known noise source as a calibration load to determine the instantaneous characteristics of the antenna. The noise source is emitted from one element of the antenna array and received by all the other elements due to mutual coupling. This received noise is used as a calibration standard to monitor the stability of the antenna electronics.
Air data position-error calibration using state reconstruction techniques
NASA Technical Reports Server (NTRS)
Whitmore, S. A.; Larson, T. J.; Ehernberger, L. J.
1984-01-01
During the highly maneuverable aircraft technology (HiMAT) flight test program recently completed at NASA Ames Research Center's Dryden Flight Research Facility, numerous problems were experienced in airspeed calibration. This necessitated the use of state reconstruction techniques to arrive at a position-error calibration. For the HiMAT aircraft, most of the calibration effort was expended on flights in which the air data pressure transducers were not performing accurately. Following discovery of this problem, the air data transducers of both aircraft were wrapped in heater blankets to correct the problem. Additional calibration flights were performed, and from the resulting data a satisfactory position-error calibration was obtained. This calibration and data obtained before installation of the heater blankets were used to develop an alternate calibration method. The alternate approach took advantage of high-quality inertial data that was readily available. A linearized Kalman filter (LKF) was used to reconstruct the aircraft's wind-relative trajectory; the trajectory was then used to separate transducer measurement errors from the aircraft position error. This calibration method is accurate and inexpensive. The LKF technique has an inherent advantage of requiring that no flight maneuvers be specially designed for airspeed calibrations. It is of particular use when the measurements of the wind-relative quantities are suspected to have transducer-related errors.
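For the special case of a constant wind and trusted inertial velocities, the core of such a calibration reduces to a least-squares estimate of the wind from velocity residuals. This is a drastic simplification of both the output-error method and the LKF reconstruction described above; all values are synthetic.

```python
import random

def estimate_constant_wind(v_inertial, v_air):
    """Least-squares estimate of a constant wind vector from paired inertial
    and air-data-derived velocity samples, using v_inertial = v_air + wind."""
    n = len(v_inertial)
    return tuple(sum(vi[k] - va[k] for vi, va in zip(v_inertial, v_air)) / n
                 for k in range(3))

random.seed(3)
wind_true = (5.0, -3.0, 0.5)          # hypothetical north, east, down (m/s)
v_air = [(random.uniform(40, 60), random.uniform(-5, 5), random.uniform(-2, 2))
         for _ in range(200)]
v_inertial = [tuple(va[k] + wind_true[k] + random.gauss(0.0, 0.1)
                    for k in range(3)) for va in v_air]
wind_est = estimate_constant_wind(v_inertial, v_air)
```

The full problem is harder precisely because the air-data measurements carry unknown calibration errors (e.g., an angle-of-attack bias) that correlate with the wind components, which is why the flight maneuver must be designed to de-correlate them.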
Fusion-based multi-target tracking and localization for intelligent surveillance systems
NASA Astrophysics Data System (ADS)
Rababaah, Haroun; Shirkhodaie, Amir
2008-04-01
In this paper, we have presented two approaches addressing visual target tracking and localization in complex urban environment. The two techniques presented in this paper are: fusion-based multi-target visual tracking, and multi-target localization via camera calibration. For multi-target tracking, the data fusion concepts of hypothesis generation/evaluation/selection, target-to-target registration, and association are employed. An association matrix is implemented using RGB histograms for associated tracking of multi-targets of interest. Motion segmentation of targets of interest (TOI) from the background was achieved by a Gaussian Mixture Model. Foreground segmentation, on the other hand, was achieved by the Connected Components Analysis (CCA) technique. The tracking of individual targets was estimated by fusing two sources of information, the centroid with the spatial gating, and the RGB histogram association matrix. The localization problem is addressed through an effective camera calibration technique using edge modeling for grid mapping (EMGM). A two-stage image pixel to world coordinates mapping technique is introduced that performs coarse and fine location estimation of moving TOIs. In coarse estimation, an approximate neighborhood of the target position is estimated based on the nearest 4-neighbor method, and in fine estimation, we use Euclidean interpolation to localize the position within the estimated four neighbors. Both techniques were tested and showed reliable results for tracking and localization of targets of interest in complex urban environments.
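An RGB-histogram association score of the kind used in the association matrix can be sketched with histogram intersection. The bin count, similarity measure, and pixel data below are illustrative choices, not details taken from the paper.

```python
def rgb_histogram(pixels, bins=8):
    """Normalized per-channel histogram of (r, g, b) pixels in 0..255."""
    h = [0.0] * (3 * bins)
    for px in pixels:
        for c in range(3):
            h[c * bins + min(px[c] * bins // 256, bins - 1)] += 1.0
    total = 3 * len(pixels)
    return [v / total for v in h]

def intersection(h1, h2):
    """Histogram-intersection similarity in [0, 1]."""
    return sum(min(a, b) for a, b in zip(h1, h2))

# Hypothetical tracked targets: one red-dominant, one blue-dominant.
red_target = [(200, 30, 20)] * 50 + [(180, 50, 40)] * 50
blue_target = [(20, 30, 200)] * 100
new_detection = [(195, 35, 25)] * 100

# One row of an association matrix: score the detection against each track.
scores = [intersection(rgb_histogram(new_detection), rgb_histogram(t))
          for t in (red_target, blue_target)]
```

Assigning the detection to the track with the highest score, typically gated by spatial proximity as described above, completes the association step.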
NASA Astrophysics Data System (ADS)
Mason, Michael D.; Ray, Krishanu; Feke, Gilbert D.; Grober, Robert D.; Pohlers, Gerd; Cameron, James F.
2003-05-01
Coumarin 6 (C6), a pH-sensitive fluorescent molecule, was doped into commercial resist systems to demonstrate a cost-effective fluorescence microscopy technique for detecting latent photoacid images in exposed chemically amplified resist films. The fluorescence image contrast is optimized by carefully selecting optical filters to match the spectroscopic properties of C6 in the resist matrices. We demonstrate the potential of this technique for two specific non-invasive applications. First, a fast, convenient fluorescence technique is demonstrated for determination of quantum yields of photo-acid generation. Since the Ka of C6 in the 193 nm resist system lies within the range of acid concentrations that can be photogenerated, we have used this technique to evaluate the acid generation efficiency of various photo-acid generators (PAGs). The technique is based on doping the resist formulations containing the candidate PAGs with C6, coating one wafer per PAG, patterning the wafer with a dose ramp and spectroscopically imaging the wafers. The fluorescence of each pattern in the dose ramp is measured as a single image and analyzed with the optical titration model. Second, a nondestructive in-line diagnostic technique is developed for the focus calibration and validation of a projection lithography system. Our experimental results show excellent correlation between the fluorescence images and scanning electron microscope analysis of developed features. This technique has successfully been applied in both deep UV resists, e.g., Shipley UVIIHS resist, and 193 nm resists, e.g., Shipley Vema-type resist. This method of focus calibration has also been extended to samples with feature sizes below the diffraction limit, where the pitch between adjacent features is on the order of 300 nm. Image capture, data analysis, and focus latitude verification are all computer controlled from a single hardware/software platform. Typical focus calibration curves can be obtained within several minutes.
NASA Astrophysics Data System (ADS)
Cucchetti, E.; Eckart, M. E.; Peille, P.; Porter, F. S.; Pajot, F.; Pointecouteau, E.
2018-04-01
With its array of 3840 Transition Edge Sensors (TESs), the Athena X-ray Integral Field Unit (X-IFU) will provide spatially resolved high-resolution spectroscopy (2.5 eV up to 7 keV) from 0.2 to 12 keV, with an absolute energy scale accuracy of 0.4 eV. Slight changes in the TES operating environment can cause significant variations in its energy response function, which may result in systematic errors in the absolute energy scale. We plan to monitor such changes at pixel level via onboard X-ray calibration sources and correct the energy scale accordingly using a linear or quadratic interpolation of gain curves obtained during ground calibration. However, this may not be sufficient to meet the 0.4 eV accuracy required for the X-IFU. In this contribution, we introduce a new two-parameter gain correction technique, based on both the pulse-height estimate of a fiducial line and the baseline value of the pixels. Using gain functions that simulate ground calibration data, we show that this technique can accurately correct deviations in detector gain due to changes in TES operating conditions such as heat sink temperature, bias voltage, thermal radiation loading and linear amplifier gain. We also address potential optimisations of the onboard calibration source and compare the performance of this new technique with those previously used.
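The interpolation of ground-calibration gain curves can be sketched for the single-parameter (fiducial-line) case; the two-parameter technique introduced in the contribution additionally uses the pixel baseline. The gain curves and energies below are hypothetical.

```python
def interpolated_gain(ph, ph_fid, e_fid, curve_a, curve_b):
    """Linear interpolation between two ground-calibration gain curves.

    The weight w is chosen so the interpolated curve maps the measured
    fiducial pulse height ph_fid onto its known line energy e_fid; the same
    weight is then applied at every other pulse height ph.
    """
    ea, eb = curve_a(ph_fid), curve_b(ph_fid)
    w = (e_fid - ea) / (eb - ea)
    return (1.0 - w) * curve_a(ph) + w * curve_b(ph)

# Hypothetical gain curves bracketing a true in-flight gain of 1.05 keV/unit:
curve_a = lambda ph: 1.00 * ph
curve_b = lambda ph: 1.10 * ph
e = interpolated_gain(10.0, ph_fid=6.0, e_fid=6.3,
                      curve_a=curve_a, curve_b=curve_b)
```

For these linear curves the fiducial line pins the interpolation weight at 0.5, so a pulse height of 10 maps to 10.5 keV-equivalent units; real gain curves are nonlinear, which is one reason a single fiducial parameter may not reach the 0.4 eV requirement.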
Instrumental Response Model and Detrending for the Dark Energy Camera
Bernstein, G. M.; Abbott, T. M. C.; Desai, S.; ...
2017-09-14
We describe the model for mapping from sky brightness to the digital output of the Dark Energy Camera (DECam) and the algorithms adopted by the Dark Energy Survey (DES) for inverting this model to obtain photometric measures of celestial objects from the raw camera output. This calibration aims for fluxes that are uniform across the camera field of view and across the full angular and temporal span of the DES observations, approaching the accuracy limits set by shot noise for the full dynamic range of DES observations. The DES pipeline incorporates several substantive advances over standard detrending techniques, including principal-components-based sky and fringe subtraction; correction of the "brighter-fatter" nonlinearity; use of internal consistency in on-sky observations to disentangle the influences of quantum efficiency, pixel-size variations, and scattered light in the dome flats; and pixel-by-pixel characterization of instrument spectral response, through combination of internal-consistency constraints with auxiliary calibration data. This article provides conceptual derivations of the detrending/calibration steps, and the procedures for obtaining the necessary calibration data. Other publications will describe the implementation of these concepts for the DES operational pipeline, the detailed methods, and the validation that the techniques can bring DECam photometry and astrometry within approximately 2 mmag and approximately 3 mas, respectively, of fundamental atmospheric and statistical limits. The DES techniques should be broadly applicable to wide-field imagers.
NASA Astrophysics Data System (ADS)
Jena, S.
2015-12-01
The overexploitation of groundwater has resulted in the abandonment of many shallow tube wells in the river basin in Eastern India. For the sustainability of groundwater resources, basin-scale modelling of groundwater flow is essential for the efficient planning and management of the water resources. The main intent of this study is to develop a 3-D groundwater flow model of the study basin using the Visual MODFLOW package and successfully calibrate and validate it using 17 years of observed data. The sensitivity analysis was carried out to quantify the susceptibility of the aquifer system to river bank seepage, recharge from rainfall and agriculture practices, horizontal and vertical hydraulic conductivities, and specific yield. To quantify the impact of parameter uncertainties, the Sequential Uncertainty Fitting Algorithm (SUFI-2) and Markov chain Monte Carlo (MCMC) techniques were implemented. Results from the two techniques were compared and the advantages and disadvantages were analysed. The Nash-Sutcliffe coefficient (NSE) and coefficient of determination (R2) were adopted as two criteria during calibration and validation of the developed model. NSE and R2 values of the groundwater flow model for the calibration and validation periods were in the acceptable range. Also, the MCMC technique was able to provide more reasonable results than SUFI-2. The calibrated and validated model will be useful to identify the aquifer properties, analyse the groundwater flow dynamics and the change in groundwater levels in future forecasts.
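The two goodness-of-fit criteria named above have standard definitions, sketched here with hypothetical observed and simulated groundwater heads.

```python
def nash_sutcliffe(obs, sim):
    """NSE = 1 - SSE / sum of squared deviations from the observed mean.
    1 is a perfect fit; values below 0 are worse than predicting the mean."""
    mo = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    return 1.0 - sse / sum((o - mo) ** 2 for o in obs)

def r_squared(obs, sim):
    """Squared Pearson correlation between observed and simulated series."""
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(obs, sim))
    vo = sum((o - mo) ** 2 for o in obs)
    vs = sum((s - ms) ** 2 for s in sim)
    return cov * cov / (vo * vs)

# Hypothetical observed vs simulated groundwater heads (m):
obs = [12.1, 11.8, 11.5, 11.9, 12.4, 12.0]
sim = [12.0, 11.9, 11.6, 11.8, 12.3, 12.1]
```

Note the two criteria measure different things: R2 is insensitive to a constant bias in the simulation, while NSE penalizes it, which is why both are commonly reported together.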
Boncyk, Wayne C.; Markham, Brian L.; Barker, John L.; Helder, Dennis
1996-01-01
The Landsat-7 Image Assessment System (IAS), part of the Landsat-7 Ground System, will calibrate and evaluate the radiometric and geometric performance of the Enhanced Thematic Mapper Plus (ETM+) instrument. The IAS incorporates new instrument radiometric artifact correction and absolute radiometric calibration techniques which overcome some limitations to calibration accuracy inherent in historical calibration methods. Knowledge of ETM+ instrument characteristics gleaned from analysis of archival Thematic Mapper in-flight data and from ETM+ prelaunch tests allow the determination and quantification of the sources of instrument artifacts. This a priori knowledge will be utilized in IAS algorithms designed to minimize the effects of the noise sources before calibration, in both ETM+ image and calibration data.
Swarm Optimization-Based Magnetometer Calibration for Personal Handheld Devices
Ali, Abdelrahman; Siddharth, Siddharth; Syed, Zainab; El-Sheimy, Naser
2012-01-01
Inertial Navigation Systems (INS) consist of accelerometers, gyroscopes and a processor that generates position and orientation solutions by integrating the specific forces and rotation rates. In addition to the accelerometers and gyroscopes, magnetometers can be used to derive the user heading based on Earth's magnetic field. Unfortunately, the measurements of the magnetic field obtained with low-cost sensors are usually corrupted by several errors, including manufacturing defects and external electromagnetic fields. Consequently, proper calibration of the magnetometer is required to achieve high-accuracy heading measurements. In this paper, a Particle Swarm Optimization (PSO)-based calibration algorithm is presented to estimate the bias and scale factor values of low-cost magnetometers. The main advantage of this technique is its use of artificial intelligence, which requires neither error modeling nor awareness of the nonlinearity. Furthermore, the proposed algorithm can help in the development of Pedestrian Navigation Devices (PNDs) when combined with inertial sensors and GPS/Wi-Fi for indoor navigation and Location Based Services (LBS) applications.
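The bias-and-scale-factor estimation can be sketched with a generic global-best PSO that minimizes the deviation of corrected field magnitudes from the known local field strength. The measurement model, field magnitude, and all swarm settings below are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: a field of constant magnitude B0 in random directions,
# corrupted by per-axis bias and scale factor (assumed error model).
B0 = 50.0                                    # field magnitude, microtesla
true_bias = np.array([8.0, -5.0, 3.0])
true_scale = np.array([1.10, 0.95, 1.05])
v = rng.normal(size=(300, 3))
h = B0 * v / np.linalg.norm(v, axis=1, keepdims=True)    # true field samples
meas = h * true_scale + true_bias                        # corrupted readings

def cost(p):
    """Mean squared deviation of corrected magnitudes from B0, p = (bias, scale)."""
    b, s = p[:3], p[3:]
    mag = np.linalg.norm((meas - b) / s, axis=1)
    return np.mean((mag - B0) ** 2)

# Minimal global-best PSO over the 6 calibration parameters.
n, dim, w, c1, c2 = 40, 6, 0.7, 1.5, 1.5
lo = np.array([-20.0] * 3 + [0.5] * 3)       # assumed search bounds
hi = np.array([20.0] * 3 + [1.5] * 3)
pos = rng.uniform(lo, hi, (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_f = np.array([cost(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([cost(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print(gbest.round(2))   # should approach [8, -5, 3, 1.10, 0.95, 1.05]
```

Because the cost depends only on corrected magnitudes, no attitude reference is needed, which is the practical appeal of magnitude-based magnetometer calibration.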
Long-Term Stability Assessment of Sonoran Desert for Vicarious Calibration of GOES-R
NASA Astrophysics Data System (ADS)
Kim, W.; Liang, S.; Cao, C.
2012-12-01
Vicarious calibration refers to calibration techniques that do not depend on onboard calibration devices. Although sensors and onboard calibration devices undergo rigorous validation before launch, sensor performance often degrades after launch due to exposure to the harsh space environment and the aging of devices. Such in-flight changes can be identified and adjusted through vicarious calibration activities, in which sensor degradation is measured in reference to exterior calibration sources such as the Sun, the Moon, and the Earth's surface. The Sonoran Desert is one of the best calibration sites in North America available for vicarious calibration of the GOES-R satellite. To accurately calibrate sensors onboard the GOES-R satellite (e.g., the Advanced Baseline Imager (ABI)), the temporal stability of the Sonoran Desert site needs to be assessed precisely. However, short-/mid-term variations in top-of-atmosphere (TOA) reflectance caused by meteorological variables such as water vapor amount and aerosol loading are often difficult to remove, complicating the use of TOA reflectance time series for the stability assessment of the site. In this paper, we address the normalization of TOA reflectance time series using a time series analysis algorithm: the seasonal trend decomposition procedure based on LOESS (STL) (Cleveland et al., 1990). The algorithm is essentially a collection of smoothing filters that decomposes a time series into three additive components: seasonal, trend, and remainder. Since this non-linear technique is capable of extracting seasonal patterns in the presence of trend changes, the seasonal variation can be effectively identified in time series of remote sensing data subject to various environmental changes.
Experiments performed with Landsat 5 TM data show that the decomposition applied to the Sonoran Desert area produces normalized series with much less uncertainty than those of traditional BRDF models, which leads to a more accurate stability assessment.
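The additive decomposition idea can be illustrated with a simplified stand-in for STL: a centered moving-average trend and a period-mean seasonal term (STL itself uses iterated LOESS smoothers). The synthetic series below mimics a reflectance record with an annual cycle and a slow drift:

```python
import numpy as np

def decompose_additive(y, period):
    """Split y into trend + seasonal + remainder (simplified STL stand-in)."""
    y = np.asarray(y, float)
    k = period // 2
    kernel = np.ones(period) / period
    trend = np.convolve(y, kernel, mode="same")   # moving-average trend
    trend[:k] = trend[k]                          # crude edge handling
    trend[-k:] = trend[-k - 1]
    detrended = y - trend
    # Seasonal component: mean of detrended values at each phase of the cycle.
    seasonal = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal = np.tile(seasonal, len(y) // period + 1)[: len(y)]
    seasonal -= seasonal.mean()                   # force zero-mean seasonal term
    remainder = y - trend - seasonal
    return trend, seasonal, remainder

# Synthetic TOA-reflectance-like series: annual cycle plus slow drift.
t = np.arange(120)                                # 10 years, monthly samples
y = 0.3 + 0.001 * t + 0.05 * np.sin(2 * np.pi * t / 12)
trend, seasonal, remainder = decompose_additive(y, 12)
```

The three components sum back to the original series exactly; in the stability-assessment setting, the trend component is the quantity of interest once the seasonal cycle has been removed.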
Franck, D; de Carlan, L; Pierrat, N; Broggio, D; Lamart, S
2007-01-01
Although great efforts have been made to improve the physical phantoms used to calibrate in vivo measurement systems, these phantoms represent a single average counting geometry and usually contain a uniform distribution of the radionuclide over the tissue substitute. As a matter of fact, significant corrections must be made to phantom-based calibration factors in order to obtain absolute calibration efficiencies applicable to a given individual. The importance of these corrections is particularly crucial when considering in vivo measurements of low energy photons emitted by radionuclides deposited in the lung such as actinides. Thus, it was desirable to develop a method for calibrating in vivo measurement systems that is more sensitive to these types of variability. Previous works have demonstrated the possibility of such a calibration using the Monte Carlo technique. Our research programme extended such investigations to the reconstruction of numerical anthropomorphic phantoms based on personal physiological data obtained by computed tomography. New procedures based on a new graphical user interface (GUI) for development of computational phantoms for Monte Carlo calculations and data analysis are being developed to take advantage of recent progress in image-processing codes. This paper presents the principal features of this new GUI. Results of calculations and comparison with experimental data are also presented and discussed in this work.
Avila, Agustín Brau; Mazo, Jorge Santolaria; Martín, Juan José Aguilar
2014-01-01
In recent years, the use of Portable Coordinate Measuring Machines (PCMMs) in industry has increased considerably, mostly due to their flexibility in accomplishing in-line measuring tasks as well as their reduced cost and operational advantages compared to traditional coordinate measuring machines (CMMs). However, their operation has a significant drawback derived from the techniques applied in the verification and optimization of their kinematic parameters. These techniques are based on capturing data with the measuring instrument from a calibrated gauge object, fixed successively in various positions so that most of the instrument measuring volume is covered, which results in time-consuming, tedious and expensive verification procedures. In this work the mechanical design of an indexed metrology platform (IMP) is presented. The aim of the IMP is to increase the final accuracy and to radically simplify the calibration, identification and verification of the geometrical parameters of PCMMs. The IMP allows the calibrated gauge object to be fixed while the measuring instrument is moved in such a way that most of the instrument working volume is covered, reducing the time and operator fatigue required to carry out these procedures. PMID:24451458
An Alternative Calibration of CR-39 Detectors for Radon Detection Beyond the Saturation Limit
Franci, Daniele; Aureli, Tommaso; Cardellini, Francesco
2016-12-01
Time-integrated measurements of indoor radon levels are commonly carried out using solid-state nuclear track detectors (SSNTDs), due to the numerous advantages offered by this radiation detection technique. However, the use of SSNTDs also presents some problems that may affect the accuracy of the results. The effect of overlapping tracks often results in underestimation of the detected track density, which reduces the counting efficiency with increasing radon exposure. This article addresses the effect of overlapping tracks by proposing an alternative calibration technique based on measuring the fraction of the detector surface covered by alpha tracks. The method has been tested against a set of Monte Carlo data and then applied to a set of experimental data collected at the radon chamber of the Istituto Nazionale di Metrologia delle Radiazioni Ionizzanti, at the ENEA centre in Casaccia, using CR-39 detectors. The method is shown to extend the detectable range of radon exposure far beyond the intrinsic limit imposed by the standard calibration based on track density.
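The appeal of a covered-fraction calibration can be seen from a simple overlap model (an illustration, not the authors' exact formulation): if tracks of mean area a fall at random on the detector, the covered surface fraction follows the Boolean model F = 1 - exp(-n*a), which remains invertible for the true track density n even when distinct-track counting has saturated:

```python
import numpy as np

a = 1e-6                       # assumed mean single-track area, cm^2

def covered_fraction(n):
    """Covered surface fraction for true track density n (Boolean model)."""
    return 1.0 - np.exp(-n * a)

def density_from_fraction(F):
    """Invert the Boolean model to recover the true track density."""
    return -np.log(1.0 - F) / a

n_true = 2.0e6                 # tracks per cm^2, well into the overlap regime
F = covered_fraction(n_true)
print(F, density_from_fraction(F) / n_true)
```

At this density roughly 86% of the surface is covered, so individual-track counting would badly undercount, yet the covered fraction still maps one-to-one onto exposure.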
NASA Technical Reports Server (NTRS)
Malina, Roger F.; Jelinsky, Patrick; Bowyer, Stuart
1986-01-01
The calibration facilities and techniques for the Extreme Ultraviolet Explorer (EUVE) from 44 to 2500 A are described. Key elements include newly designed radiation sources and a collimated monochromatic EUV beam. Sample results for the calibration of the EUVE filters, detectors, gratings, collimators, and optics are summarized.
A variable acceleration calibration system
NASA Astrophysics Data System (ADS)
Johnson, Thomas H.
2011-12-01
A variable acceleration calibration system that applies loads using gravitational and centripetal acceleration serves as an alternative, efficient and cost-effective method for calibrating internal wind tunnel force balances. Two proof-of-concept variable acceleration calibration systems were designed, fabricated and tested. The NASA UT-36 force balance served as the test balance for the calibration experiments. The variable acceleration calibration systems are shown to be capable of performing three-component calibration experiments with an approximate applied load error on the order of 1% of the full-scale calibration loads. Sources of error are identified using experimental design methods and a propagation of uncertainty analysis. Three types of uncertainty are identified for the systems and are attributed to prediction error, calibration error and pure error. Angular velocity uncertainty is shown to be the largest identified source of prediction error. The calibration uncertainties using a production variable acceleration based system are shown to be potentially equivalent to current methods. A production-quality system could be realized using lighter materials and more precise instrumentation. Further research is needed to account for balance deflection, forcing effects due to vibration, and large tare loads. A gyroscope measurement technique is shown to be capable of resolving the balance deflection angle calculation. Long-term research objectives include a demonstration of a six-degree-of-freedom calibration and a large-capacity balance calibration.
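The dominance of angular velocity uncertainty follows directly from first-order uncertainty propagation through the centripetal load equation F = m*omega^2*r. The sketch below uses invented values for mass, radius, and spin rate; only the propagation formula is generic:

```python
# Hedged sketch: centripetal load on a calibration mass and first-order
# propagation of angular-velocity uncertainty into that load.
m = 2.0         # calibration mass, kg (assumed)
r = 0.5         # radius from spin axis, m (assumed)
omega = 10.0    # angular velocity, rad/s (assumed)
s_omega = 0.05  # angular velocity standard uncertainty, rad/s (assumed)

F = m * omega ** 2 * r                 # applied centripetal force, N
dF_domega = 2.0 * m * omega * r        # sensitivity coefficient dF/domega
s_F = abs(dF_domega) * s_omega         # propagated uncertainty in F

print(F, s_F, 100.0 * s_F / F)         # force, its uncertainty, relative (%)
```

Because F scales with omega squared, the relative load uncertainty is twice the relative angular-velocity uncertainty, which is why omega measurement drives the prediction error budget.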
NASA Astrophysics Data System (ADS)
Wentworth, Mami Tonoe
Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties; the objective is to quantify uncertainties in parameters, models and measurements, and to propagate them through the model so that one can make a predictive estimate with quantified uncertainties. Two aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, meaning that they cannot be uniquely determined from the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impact on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is a part of nuclear reactor models.
We employ this simple heat model to illustrate verification techniques for model calibration. For Bayesian model calibration, we employ adaptive Metropolis algorithms to construct densities for input parameters in the heat model and the HIV model. To quantify the uncertainty in the parameters, we employ two MCMC algorithms: Delayed Rejection Adaptive Metropolis (DRAM) [33] and Differential Evolution Adaptive Metropolis (DREAM) [66, 68]. The densities obtained using these methods are compared to those obtained through direct numerical evaluation of Bayes' formula. We also combine uncertainties in input parameters and measurement errors to construct predictive estimates for a model response. A significant emphasis is on the development and illustration of techniques to verify the accuracy of sampling-based Metropolis algorithms. We verify the accuracy of DRAM and DREAM by comparing chains, densities and correlations obtained using DRAM, DREAM and the direct evaluation of Bayes' formula. We also perform similar analysis for credible and prediction intervals for responses. Once the parameters are estimated, we employ the energy statistics test [63, 64] to compare the densities obtained by the different methods for the HIV model; the energy statistics are used to test the equality of distributions. We also consider parameter selection and verification techniques for models having one or more parameters that are noninfluential in the sense that they minimally impact model outputs. We illustrate these techniques for a dynamic HIV model but note that the parameter selection and verification framework is applicable to a wide range of biological and physical models.
To accommodate the nonlinear input-to-output relations, which are typical for such models, we focus on global sensitivity analysis techniques, including those based on partial correlations, Sobol indices based on second-order model representations, and Morris indices, as well as a parameter selection technique based on standard errors. A significant objective is to provide verification strategies to assess the accuracy of these techniques, which we illustrate in the context of the HIV model. Finally, we examine active subspace methods as an alternative to parameter subset selection techniques. The objective of active subspace methods is to determine the subspace of inputs that most strongly affects the model response, and thereby to reduce the dimension of the input space. The major difference between the two approaches is that parameter selection identifies influential parameters, whereas subspace selection identifies a linear combination of parameters that significantly impacts the model responses. We employ the active subspace methods discussed in [22] for the HIV model and present a verification that the active subspace successfully reduces the input dimensions.
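The sampling-based calibration machinery underlying DRAM and DREAM can be illustrated with a plain random-walk Metropolis sampler on a toy linear model (the toy model and all numbers are assumptions; DRAM adds delayed rejection and covariance adaptation, and DREAM adds differential-evolution proposals):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy calibration problem: y = theta * x + noise, flat prior, Gaussian likelihood.
x = np.linspace(0.0, 1.0, 40)
theta_true, sigma = 2.0, 0.1
y = theta_true * x + rng.normal(0.0, sigma, x.size)

def log_post(theta):
    """Log posterior up to a constant (flat prior, Gaussian likelihood)."""
    return -0.5 * np.sum((y - theta * x) ** 2) / sigma ** 2

# Random-walk Metropolis chain over the scalar parameter theta.
chain = np.empty(5000)
theta, lp = 1.0, log_post(1.0)
step = 0.1                                     # proposal standard deviation
for i in range(chain.size):
    prop = theta + step * rng.normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:    # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain[i] = theta

posterior_mean = chain[1000:].mean()           # discard burn-in
print(round(posterior_mean, 2))
```

The verification idea described above amounts to checking that the density of such a chain matches the posterior obtained by evaluating Bayes' formula directly on a parameter grid.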
Parameter estimation for groundwater models under uncertain irrigation data
Demissie, Yonas; Valocchi, Albert J.; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen
2015-01-01
The success of groundwater modeling is strongly influenced by the accuracy of the model parameters used to characterize the subsurface system. However, the presence of uncertainty, and possibly bias, in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of the generalized least-squares method, with the weight of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of ordinary least-squares (OLS) and IUWLS calibration methods under different levels of irrigation-data uncertainty and calibration conditions. The results from the OLS method show the presence of statistically significant (p < 0.05) bias in estimated parameters and model predictions that persists despite calibrating the models to different calibration data and sample sizes. However, by directly accounting for the irrigation pumping uncertainties during the calibration procedure, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration process.
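The core contrast between OLS and input-uncertainty weighting can be sketched with a diagonal-covariance generalized least-squares fit on a toy linear problem (this generic sketch is not the authors' IUWLS formulation, in which the weights are also updated iteratively during optimization):

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy inverse problem y = X @ theta + noise, where some observations carry
# extra variance from uncertain pumping input (assumed structure).
n = 200
X = np.column_stack([np.ones(n), rng.uniform(0.0, 1.0, n)])
theta_true = np.array([1.0, 3.0])
extra = np.zeros(n)
extra[:50] = 0.5                      # rows affected by uncertain pumping data
y = X @ theta_true + rng.normal(0.0, 0.1 + extra)

def gls(X, y, var):
    """Generalized least squares with diagonal observation covariance `var`."""
    W = 1.0 / var                     # inverse-variance weights
    A = X.T @ (W[:, None] * X)
    b = X.T @ (W * y)
    return np.linalg.solve(A, b)

theta_ols = gls(X, y, np.ones(n))                 # OLS: equal weights
theta_wls = gls(X, y, (0.1 + extra) ** 2)         # input-uncertainty weights
print(theta_ols.round(2), theta_wls.round(2))
```

Downweighting the pumping-affected rows lowers the variance of the estimate; when the input errors are also biased rather than zero-mean, weighting is what keeps that bias from propagating into the fitted parameters.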
A Kinematic Calibration Process for Flight Robotic Arms
NASA Technical Reports Server (NTRS)
Collins, Curtis L.; Robinson, Matthew L.
2013-01-01
The Mars Science Laboratory (MSL) robotic arm is ten times more massive than any Mars robotic arm before it, yet with similar accuracy and repeatability positioning requirements. In order to assess and validate these requirements, a higher-fidelity model and calibration processes were needed. Kinematic calibration of robotic arms is a common and necessary process to ensure good positioning performance. Most methodologies assume a rigid arm, high-accuracy data collection, and some kind of optimization of kinematic parameters. A new detailed kinematic and deflection model of the MSL robotic arm was formulated in the design phase and used to update the initial positioning and orientation accuracy and repeatability requirements. This model included a higher-fidelity link stiffness matrix representation, as well as a link level thermal expansion model. In addition, it included an actuator backlash model. Analytical results highlighted the sensitivity of the arm accuracy to its joint initialization methodology. Because of this, a new technique for initializing the arm joint encoders through hardstop calibration was developed. This involved selecting arm configurations to use in Earth-based hardstop calibration that had corresponding configurations on Mars with the same joint torque to ensure repeatability in the different gravity environment. The process used to collect calibration data for the arm included the use of multiple weight stand-in turrets with enough metrology targets to reconstruct the full six-degree-of-freedom location of the rover and tool frames. The follow-on data processing of the metrology data utilized a standard differential formulation and linear parameter optimization technique.
Li, Zhen-hua; Li, Hong-bin; Zhang, Zhi
2013-07-01
Electronic transformers are widely used in power systems because of their wide bandwidth and good transient performance. However, as an emerging technology, the failure rate of electronic transformers is higher than that of traditional transformers, so the calibration period needs to be shortened. Traditional calibration methods require that the power of the transmission line be cut off, which results in complicated operation and power-off losses. This paper proposes an online calibration system that can calibrate electronic current transformers without a power outage. In this work, the high-accuracy standard current transformer and the online operation method are the key techniques. Based on a clamp-shape iron-core coil and a clamp-shape air-core coil, a combined clamp-shape coil is designed as the standard current transformer. By analyzing the output characteristics of the two coils, the combined clamp-shape coil enables verification of the accuracy, so the accuracy of the online calibration system can be guaranteed. Moreover, by employing the earth-potential working method and using two insulating rods to connect the combined clamp-shape coil to the high-voltage bus, the operation becomes simple and safe. Tests at the China National Center for High Voltage Measurement and field experiments show that the proposed system achieves an accuracy of up to class 0.05.
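A current-transformer calibration of this kind reduces to comparing the fundamental phasors of the standard channel and the channel under test. The generic extraction below (all waveform parameters invented) recovers ratio error and phase displacement from sampled currents via a single DFT bin:

```python
import numpy as np

# Sampled waveforms: standard CT vs. transformer under test (invented values).
fs, f0, ncyc = 10000, 50, 10
t = np.arange(int(fs / f0) * ncyc) / fs
i_std = 100.0 * np.sin(2 * np.pi * f0 * t)                      # standard, A
i_tst = 100.05 * np.sin(2 * np.pi * f0 * t + np.deg2rad(0.01))  # test, A

def fundamental(x):
    """Complex phasor of the f0 component via the DFT bin at index ncyc."""
    X = np.fft.rfft(x)
    return X[ncyc]                 # bin ncyc holds f0 when ncyc full cycles fit

ps, pt = fundamental(i_std), fundamental(i_tst)
ratio_err = (abs(pt) - abs(ps)) / abs(ps) * 100.0    # ratio error, percent
phase_err = np.degrees(np.angle(pt) - np.angle(ps))  # phase displacement, deg
print(round(ratio_err, 3), round(phase_err, 3))      # 0.05 0.01
```

Class 0.05 roughly corresponds to keeping the ratio error within 0.05% at rated current, which is what the combined clamp-shape coil must beat as a reference.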
Radiometric analysis of the longwave infrared channel of the Thematic Mapper on LANDSAT 4 and 5
NASA Technical Reports Server (NTRS)
Schott, John R.; Volchok, William J.; Biegel, Joseph D.
1986-01-01
The first objective was to evaluate the postlaunch radiometric calibration of the LANDSAT Thematic Mapper (TM) band 6 data. The second objective was to determine to what extent surface temperatures could be computed from the TM band 6 data using atmospheric propagation models. To accomplish this, ground truth data were compared to a single TM-4 band 6 data set. This comparison indicated satisfactory agreement over a narrow temperature range. The atmospheric propagation model (modified LOWTRAN 5A) was used to predict surface temperature values based on the radiance at the spacecraft. The aircraft data were calibrated using a multi-altitude profile calibration technique which had been extensively tested in previous studies. This aircraft calibration permitted measurement of surface temperatures based on the radiance reaching the aircraft. When these temperature values are evaluated, an error in the satellite's ability to predict surface temperatures can be estimated. This study indicated that, by carefully accounting for various sensor calibration and atmospheric propagation effects, the expected error (1 standard deviation) in surface temperature would be 0.9 K. This assumes no error in surface emissivity and no sampling error due to target location. These results indicate that the satellite calibration is within nominal limits, to within this study's ability to measure error.
Fragmentation modeling of a resin bonded sand
NASA Astrophysics Data System (ADS)
Hilth, William; Ryckelynck, David
2017-06-01
Cemented sands exhibit a complex mechanical behavior that can lead to sophisticated models with numerous parameters lacking real physical meaning. However, a rather simple generalized critical-state bonded soil model has proven to be a relevant compromise between easy calibration and good results. The constitutive model formulation considers a non-associated elasto-plastic formulation within the critical-state framework. The calibration procedure, using standard laboratory tests, is complemented by the study of a uniaxial compression test observed by tomography. Using finite element simulations, this test is simulated considering a non-homogeneous 3D medium. Tomography of the compression sample gives access to 3D displacement fields through image correlation techniques. Unfortunately, these fields have missing experimental data because of the low resolution of correlations for low displacement magnitudes. We propose a recovery method that reconstructs full 3D displacement fields and 2D boundary displacement fields, which are required for the calibration of the constitutive parameters using 3D finite element simulations. The proposed recovery technique is based on a singular value decomposition of the available experimental data. This calibration protocol enables an accurate prediction of the fragmentation of the specimen.
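The SVD-based recovery idea can be sketched on a toy problem: if displacement snapshots stacked as columns form a nearly low-rank matrix, missing entries can be filled by repeatedly projecting onto a truncated SVD while keeping observed values fixed. This iterated-imputation sketch is one standard way to use an SVD for gap filling, not necessarily the authors' exact scheme:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy stand-in: exact rank-3 "displacement" data with 30% of entries missing
# (as where image correlation fails at low displacement magnitudes).
m, n, rank = 60, 20, 3
U = rng.normal(size=(m, rank))
V = rng.normal(size=(rank, n))
D = U @ V                                   # complete displacement data
mask = rng.random((m, n)) < 0.3             # True = missing entry
X = np.where(mask, np.nan, D)               # observed data with gaps

filled = np.where(mask, 0.0, X)             # initial guess for the gaps
for _ in range(200):
    Uk, s, Vk = np.linalg.svd(filled, full_matrices=False)
    low_rank = (Uk[:, :rank] * s[:rank]) @ Vk[:rank]   # rank-k projection
    filled = np.where(mask, low_rank, X)    # keep observed entries fixed

err = np.abs(filled - D)[mask].max()
print(err)
```

With exact low-rank data and this fraction of gaps, the alternating projection converges to the true field; with noisy correlation data the truncation rank becomes a regularization choice.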
Spectral Radiance of a Large-Area Integrating Sphere Source
Walker, James H.; Thompson, Ambler
1995-01-01
The radiance and irradiance calibration of large field-of-view scanning and imaging radiometers for remote sensing and surveillance applications has resulted in the development of novel calibration techniques. One of these techniques is the employment of large-area integrating sphere sources as radiance or irradiance secondary standards. To assist the National Aeronautics and Space Administration's space-based ozone measurement program, the spectral radiance of a commercially available large-area internally illuminated integrating sphere source was characterized in the wavelength region from 230 nm to 400 nm at the National Institute of Standards and Technology. Spectral radiance determinations and spatial mappings of the source indicate that carefully designed large-area integrating sphere sources can be measured with a 1 % to 2 % expanded uncertainty (two standard deviation estimate) in the near ultraviolet, with spatial nonuniformities of 0.6 % or smaller across a 20 cm diameter exit aperture. A method is proposed for the calculation of the final radiance uncertainties of the source which includes the field of view of the instrument being calibrated. PMID:29151725
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walker, Iain S.; Wray, Craig P.; Guillot, Cyril
2003-08-01
In this report, we discuss the accuracy of flow hoods for residential applications, based on laboratory tests and field studies. The results indicate that commercially available hoods are often inadequate to measure flows in residential systems, and that there can be a wide range of performance between different flow hoods. The errors are due to poor calibrations, sensitivity of existing hoods to grille flow non-uniformities, and flow changes from added flow resistance. We also evaluated several simple techniques for measuring register airflows that could be adopted by the HVAC industry and homeowners as simple diagnostics that are often as accurate as commercially available devices. Our test results also show that current calibration procedures for flow hoods do not account for field application problems. As a result, organizations such as ASHRAE or ASTM need to develop a new standard for flow hood calibration, along with a new measurement standard to address field use of flow hoods.
NASA Astrophysics Data System (ADS)
Deng, Xiao; Ma, Tianyu; Lecomte, Roger; Yao, Rutao
2011-10-01
To expand the availability of SPECT for biomedical research, we developed a SPECT imaging system on an existing animal PET detector by adding a slit-slat collimator. As the detector crystals are pixelated, the relative slat-to-crystal position (SCP) in the axial direction affects the photon flux distribution onto the crystals. Accurate knowledge of the SCP is important to the axial resolution and sensitivity of the system. This work presents a method for optimizing the SCP in system design and for determining the SCP in system geometrical calibration. The optimization was achieved by finding the SCP that provides higher spatial resolution in terms of the average root-mean-square (RMS) width of the axial point spread function (PSF) without loss of sensitivity. The calibration was based on the least-square-error method, which minimizes the difference between the measured and modeled axial point spread projections. The uniqueness and accuracy of the calibration results were validated through a singular value decomposition (SVD) based approach. Both the optimization and calibration techniques were evaluated with Monte Carlo (MC) simulated data. We showed that the average RMS width was improved by about 15% with the optimal SCP as compared to the least-optimal SCP, and that system sensitivity was not affected by the SCP. The SCP error achieved by the proposed calibration method was less than 0.04 mm. The calibrated SCP value was used in MC simulation to generate the system matrix used for image reconstruction. The images of simulated phantoms showed the expected resolution performance and were artifact free. We conclude that the proposed optimization and calibration method is effective for slit-slat collimator based SPECT systems.
NASA Astrophysics Data System (ADS)
Ortolano, Gaetano; Visalli, Roberto; Godard, Gaston; Cirrincione, Rosolino
2018-06-01
We present a new ArcGIS®-based tool developed in the Python programming language for calibrating EDS/WDS X-ray element maps, with the aim of acquiring quantitative information of petrological interest. The calibration procedure is based on a multiple linear regression technique that takes into account interdependence among elements and is constrained by the stoichiometry of minerals. The procedure requires an appropriate number of spot analyses for use as internal standards and provides several test indexes for a rapid check of calibration accuracy. The code is based on an earlier image-processing tool designed primarily for classifying minerals in X-ray element maps; the original Python code has now been enhanced to yield calibrated maps of mineral end-members or the chemical parameters of each classified mineral. The semi-automated procedure can be used to extract a dataset that is automatically stored within queryable tables. As a case study, the software was applied to an amphibolite-facies garnet-bearing micaschist. The calibrated images obtained for both anhydrous (i.e., garnet and plagioclase) and hydrous (i.e., biotite) phases show a good fit with corresponding electron microprobe analyses. This new GIS-based tool package can thus find useful application in petrology and materials science research. Moreover, the huge quantity of data extracted opens new opportunities for the development of a thin-section microchemical database that, using a GIS platform, can be linked with other major global geoscience databases.
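The stoichiometry-constrained multiple regression at the heart of the calibration can be sketched as fitting element concentration against several count-rate channels at the spot-analysis locations, then applying the fitted coefficients pixel-wise to the map. The channels, coefficients, and noise level below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(11)

# Internal standards: spot analyses with known wt% and measured count rates.
# Cross-terms between channels stand in for interdependence among elements.
n_spots = 25
counts = rng.uniform(100.0, 1000.0, (n_spots, 3))      # e.g. Fe, Mg, Ca channels
beta_true = np.array([0.5, 0.020, -0.003, 0.001])      # intercept + 3 slopes
A = np.column_stack([np.ones(n_spots), counts])
wt = A @ beta_true + rng.normal(0.0, 0.05, n_spots)    # microprobe wt% values

beta, *_ = np.linalg.lstsq(A, wt, rcond=None)          # calibration fit

# Apply the calibration pixel-wise to a (tiny) simulated element map.
pix = rng.uniform(100.0, 1000.0, (4, 4, 3))            # per-pixel count rates
wt_map = beta[0] + pix @ beta[1:]                      # calibrated wt% map
print(beta.round(3))
```

Comparing predicted against measured wt% at held-out spots gives the kind of accuracy check the tool's test indexes provide.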
Absolute calibration of Doppler coherence imaging velocity images
NASA Astrophysics Data System (ADS)
Samuell, C. M.; Allen, S. L.; Meyer, W. H.; Howard, J.
2017-08-01
A new technique has been developed for absolutely calibrating a Doppler Coherence Imaging Spectroscopy interferometer for measuring plasma ion and neutral velocities. An optical model of the interferometer is used to generate zero-velocity reference images for the plasma spectral line of interest from a calibration source some spectral distance away. Validation of this technique using a tunable diode laser demonstrated an accuracy better than 0.2 km/s over an extrapolation range of 3.5 nm, a two-order-of-magnitude improvement over linear approaches. While a well-characterized and very stable interferometer is required, this technique opens up the possibility of calibrated velocity measurements in difficult viewing geometries and for complex spectral line shapes.
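The velocity scale such a calibration anchors is the first-order Doppler relation v = c * (lambda - lambda_0) / lambda_0; the line and shift below are assumed values for illustration:

```python
# Converting a measured Doppler wavelength shift to line-of-sight velocity.
C = 2.99792458e8          # speed of light, m/s

def doppler_velocity(lam_meas_nm, lam0_nm):
    """First-order Doppler velocity from measured and rest wavelengths."""
    return C * (lam_meas_nm - lam0_nm) / lam0_nm

lam0 = 468.6              # assumed plasma line rest wavelength, nm
shift = 0.0016            # assumed measured shift, nm
v = doppler_velocity(lam0 + shift, lam0)
print(round(v / 1000.0, 2))   # line-of-sight velocity, km/s
```

A 0.2 km/s accuracy therefore corresponds to resolving wavelength shifts at the sub-picometre level, which is why the stability of the interferometer model matters.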
Absolute reactivity calibration of accelerator-driven systems after RACE-T experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jammes, C. C.; Imel, G. R.; Geslot, B.
2006-07-01
The RACE-T experiments held in November 2005 at the ENEA-Casaccia research center near Rome allowed us to improve our knowledge of experimental techniques for absolute reactivity calibration at either the startup or shutdown phases of accelerator-driven systems. Various experimental techniques for assessing a subcritical level were inter-compared across three different subcritical configurations, SC0, SC2 and SC3, of about -0.5, -3 and -6 dollars, respectively. The area-ratio method, based on the use of a pulsed neutron source, proved the most effective. When the reactivity estimate is expressed in dollar units, the uncertainties obtained with the area-ratio method were less than 1% for every subcritical configuration. The sensitivity to measurement location was slightly more than 1% and always less than 4%. Finally, it is noteworthy that the source jerk technique, using a transient caused by the pulsed neutron source shutdown, provides results in good agreement with those obtained from the area-ratio technique. (authors)
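The area-ratio (Sjostrand) method estimates reactivity in dollars as the negative ratio of the prompt-decay area to the delayed-neutron area of the pulse response. The sketch below uses an invented detector response; only the area-ratio relation itself is the standard method:

```python
import numpy as np

# Idealized pulsed-neutron-source response over one pulse period: a prompt
# decaying exponential on top of a (here constant) delayed background.
t = np.linspace(0.0, 0.02, 2001)            # one pulse period, s
dt = t[1] - t[0]
prompt = 5.0e4 * np.exp(-t / 0.002)         # prompt decay, counts/s (invented)
delayed = 2.0e4 * np.ones_like(t)           # delayed background, counts/s
counts = prompt + delayed                   # measured detector response

A_delayed = delayed.sum() * dt              # delayed area (known background)
A_prompt = counts.sum() * dt - A_delayed    # prompt area above background
rho_dollars = -A_prompt / A_delayed         # Sjostrand area-ratio estimate
print(round(rho_dollars, 2))
```

Expressing reactivity in dollars this way cancels the detector efficiency and source strength, which is why the method's uncertainties can stay below 1%.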
Fingerprinting and quantification of GMOs in the agro-food sector.
Taverniers, I; Van Bockstaele, E; De Loose, M
2003-01-01
Most strategies for analyzing GMOs in plants and derived food and feed products are based on the polymerase chain reaction (PCR) technique. In conventional PCR methods, a 'known' sequence between two specific primers is amplified. By contrast, the 'anchor PCR' technique amplifies unknown sequences adjacent to a known sequence. Because T-DNA/plant border sequences are amplified, anchor PCR is the perfect tool for unique identification of transgenes, including non-authorized GMOs. In this work, anchor PCR was applied to characterize the 'transgene locus' and to clarify the complete molecular structure of at least six different commercial transgenic plants. Based on sequences of T-DNA/plant border junctions obtained by anchor PCR, event-specific primers were developed. The junction fragments, together with endogenous reference gene targets, were cloned in plasmids. The latter were then used as event-specific calibrators in real-time PCR, a new technique for the accurate relative quantification of GMOs. We demonstrate here the importance of anchor PCR for identification and the usefulness of plasmid DNA calibrators in quantification strategies for GMOs throughout the agro-food sector.
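Relative quantification against such event-specific plasmid calibrators is commonly carried out with a delta-delta-Ct scheme. A hedged sketch, assuming equal amplification efficiencies for the event and reference targets (the paper's exact quantification model is not reproduced here):

```python
def gmo_percent(ct_event_sample, ct_ref_sample,
                ct_event_cal, ct_ref_cal, cal_gmo_percent,
                efficiency=2.0):
    """Relative GMO quantification by the delta-delta-Ct method against a
    plasmid calibrator carrying both the event-specific junction fragment
    and the endogenous reference gene (illustrative sketch).

    ct_*            : threshold cycles for sample and calibrator reactions
    cal_gmo_percent : known GMO content of the calibrator
    efficiency      : amplification factor per cycle (2.0 = ideal doubling)
    """
    d_sample = ct_event_sample - ct_ref_sample   # delta-Ct of the sample
    d_cal = ct_event_cal - ct_ref_cal            # delta-Ct of the calibrator
    return cal_gmo_percent * efficiency ** -(d_sample - d_cal)
```

For example, a sample whose event target lags the reference by two more cycles than the calibrator's contains one quarter of the calibrator's GMO content.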
Calibration Method for IATS and Application in Multi-Target Monitoring Using Coded Targets
NASA Astrophysics Data System (ADS)
Zhou, Yueyin; Wagner, Andreas; Wunderlich, Thomas; Wasmeier, Peter
2017-06-01
The technique of Image Assisted Total Stations (IATS) has been studied for over ten years and is composed of two major parts: one is the calibration procedure, which establishes the relationship between the camera system and the theodolite system; the other is automatic target detection on the image by various methods of photogrammetry or computer vision. Several calibration methods have been developed, mostly using prototypes with an add-on camera rigidly mounted on the total station. However, these prototypes are not commercially available. This paper proposes a calibration method based on the Leica MS50, which has two built-in cameras, each with a resolution of 2560 × 1920 px: an overview camera and a telescope (on-axis) camera. Our work in this paper is based on the on-axis camera, which uses the 30-times magnification of the telescope. The calibration consists of 7 parameters to estimate. We use coded targets, which are common tools in photogrammetry for orientation, to detect different targets in IATS images instead of prisms and traditional ATR functions. We test and verify the efficiency and stability of this monitoring method with multiple targets.
Self-Calibration Approach for Mixed Signal Circuits in Systems-on-Chip
NASA Astrophysics Data System (ADS)
Jung, In-Seok
MOSFET scaling has served industry very well for a few decades by providing improvements in transistor performance, power, and cost. However, modern systems-on-chip (SoCs) incur high test complexity and cost due to several issues such as limited pin count and the integration of analog and digital mixed circuits. Self-calibration is therefore an excellent and promising method to improve yield and to reduce manufacturing cost by simplifying the test complexity, because process variation effects can be addressed by means of a self-calibration technique. Since prior published calibration techniques were developed for specific targeted applications, they are not easily reused for other applications. To solve the aforementioned issues, in this dissertation, several novel self-calibration design techniques for mixed-signal circuits are proposed for an analog-to-digital converter (ADC) to reduce mismatch error and improve performance. ADCs are essential components in SoCs, and the proposed self-calibration approach also compensates for process variations. The proposed novel self-calibration approach targets the successive approximation (SA) ADC. First, the offset error of the comparator in the SA-ADC is reduced using the proposed approach by enabling a capacitor array at the input nodes for better matching. In addition, auxiliary capacitors for each capacitor of the DAC in the SA-ADC are controlled by a synthesized digital controller to minimize the mismatch error of the DAC. Since the proposed technique is applied during foreground operation, the power overhead in the SA-ADC case is minimal because the calibration circuit is deactivated during normal operation. Another benefit of the proposed technique is that the offset voltage of the comparator is continuously adjusted at every bit-decision step, because not only the inherent offset voltage of the comparator but also the mismatch of the DAC are compensated simultaneously.
The synthesized digital calibration control circuit operates in foreground mode, and the controller has been highly optimized for low power and better performance with a simplified structure. In addition, to increase the sampling clock frequency of the proposed self-calibration approach, a novel variable clock period method is proposed. To achieve high-speed SAR operation, a variable clock time technique is used to reduce not only peak current but also die area. The technique removes conversion time waste and readily extends the SAR operation speed. To verify and demonstrate the proposed techniques, a prototype charge-redistribution SA-ADC with the proposed self-calibration is implemented in a 130 nm standard CMOS process. The prototype circuit's silicon area is 0.0715 mm^2 and it consumes 4.62 mW with a 1.2 V power supply.
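The successive-approximation conversion that this calibration targets can be sketched behaviourally. The comparator_offset parameter below is an illustrative stand-in for the residual offset that the capacitor-array calibration drives toward zero; it is not part of the dissertation's circuit description:

```python
def sar_convert(vin, vref, n_bits, comparator_offset=0.0):
    """Behavioural sketch of a successive-approximation ADC conversion.
    Each cycle the DAC trial voltage is compared against the input; the
    bit is kept if the input (plus any uncalibrated comparator offset)
    is at or above the trial level.
    """
    code = 0
    for bit in range(n_bits - 1, -1, -1):       # MSB first
        trial = code | (1 << bit)               # tentatively set this bit
        v_dac = vref * trial / (1 << n_bits)    # ideal DAC output
        if vin + comparator_offset >= v_dac:
            code = trial                        # keep the bit
    return code
```

With a perfect comparator the loop returns floor(vin/vref · 2^n); a residual offset shifts the transfer curve, which is exactly the error the foreground calibration is meant to remove.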
Zhang, Qian; Wang, Lei; Liu, Zengjun; Zhang, Yiming
2016-09-19
The calibration of an inertial measurement unit (IMU) is a key technique for improving the precision of the inertial navigation system (INS) of a missile, especially the calibration of the accelerometer scale factor. Traditional calibration methods are generally based on a high-accuracy turntable; however, this leads to high costs, and the calibration results are not suited to the actual operating environment. In the wake of developments in multi-axis rotational INS (RINS) with optical inertial sensors, self-calibration is utilized as an effective way to calibrate the IMU on the missile, and the calibration results are more accurate in practical application. However, the introduction of multi-axis RINS causes additional calibration errors, including non-orthogonality errors from mechanical processing and non-horizontal errors of the operating environment, meaning that the multi-axis gimbals cannot be regarded as a high-accuracy turntable. For application on missiles, in this paper, after analyzing the relationship between the calibration error of the accelerometer scale factor and the non-orthogonality and non-horizontal angles, an innovative calibration procedure using the signals of a fiber optic gyro and a photoelectric encoder is proposed. Laboratory and vehicle experiment results validate the theory and prove that the proposed method relaxes the orthogonality requirement of the rotation axes and eliminates the strict application condition of the system.
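The paper's encoder-aided procedure is not reproduced here, but the underlying idea of calibrating an accelerometer scale factor against a known reference can be illustrated with the textbook two-position (up/down) gravity scheme, which the turntable-based methods generalize:

```python
def accel_scale_and_bias(a_up, a_down, g=9.80665):
    """Two-position calibration of one accelerometer axis (textbook
    sketch, not the paper's RINS procedure). With the sensitive axis
    aligned with gravity, the readings in the two orientations are
      a_up   = +scale * g + bias
      a_down = -scale * g + bias
    so scale and bias follow from their sum and difference."""
    scale = (a_up - a_down) / (2.0 * g)
    bias = (a_up + a_down) / 2.0
    return scale, bias
```

Non-orthogonality and non-horizontality errors enter precisely because the axis is never perfectly aligned with gravity, which is what the proposed gyro/encoder-aided procedure compensates for.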
Hemsley, Victoria S; Smyth, Timothy J; Martin, Adrian P; Frajka-Williams, Eleanor; Thompson, Andrew F; Damerell, Gillian; Painter, Stuart C
2015-10-06
An autonomous underwater vehicle (Seaglider) has been used to estimate marine primary production (PP) using a combination of irradiance and fluorescence vertical profiles. This method provides estimates for depth-resolved and temporally evolving PP on fine spatial scales in the absence of ship-based calibrations. We describe techniques to correct for known issues associated with long autonomous deployments such as sensor calibration drift and fluorescence quenching. Comparisons were made between the Seaglider, stable isotope ((13)C), and satellite estimates of PP. The Seaglider-based PP estimates were comparable to both satellite estimates and stable isotope measurements.
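A common way to turn fluorescence-derived chlorophyll and irradiance profiles into depth-resolved PP is a saturating photosynthesis-irradiance curve. The sketch below uses the exponential (Webb-type) form with placeholder parameter values; these are illustrative assumptions, not the parameters of the Seaglider study:

```python
import numpy as np

def primary_production(chl, par, p_max=5.0, alpha=0.04):
    """Depth-resolved primary production from chlorophyll and irradiance
    profiles via a saturating photosynthesis-irradiance curve (sketch).

    chl   : chlorophyll-a profile (mg m^-3), e.g. fluorescence-derived
    par   : photosynthetically available radiation at the same depths
    p_max : maximum chlorophyll-specific production rate (assumed)
    alpha : initial slope of the P-E curve (assumed)
    """
    chl = np.asarray(chl, dtype=float)
    par = np.asarray(par, dtype=float)
    # Production rises linearly at low light and saturates at chl * p_max.
    return chl * p_max * (1.0 - np.exp(-alpha * par / p_max))
```

In darkness the expression is zero, and at saturating irradiance it approaches chl·p_max, matching the limiting behaviour expected of a P-E curve.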
NASA Astrophysics Data System (ADS)
Robson, E. I.; Stevens, J. A.; Jenness, T.
2001-11-01
Calibrated data for 65 flat-spectrum extragalactic radio sources are presented at a wavelength of 850μm, covering a three-year period from 1997 April. The data, obtained from the James Clerk Maxwell Telescope using the SCUBA camera in pointing mode, were analysed using an automated pipeline process based on the Observatory Reduction and Acquisition Control-Data Reduction (orac-dr) system. This paper describes the techniques used to analyse and calibrate the data, and presents the data base of results along with a representative sample of the better-sampled light curves.
NASA Technical Reports Server (NTRS)
Ryan, Robert E.; Harrington, Gary; Holekamp, Kara; Pagnutti, Mary; Russell, Jeffrey; Frisbie, Troy; Stanley, Thomas
2007-01-01
Autonomous visible-to-SWIR ground-based vicarious Cal/Val will be an essential Cal/Val component with such a large number of systems. Radiometrically calibrated spectroradiometers can improve confidence in current ground truth data through validation of radiometric modeling and validation or replacement of traditional sun photometer measurements. They should also enable significant reductions in deployed equipment, such as that used in traditional sun photometer approaches. A simple, field-portable, white-light LED calibration source shows promise for the visible range (420-750 nm); a prototype demonstrated <0.5% drift over a 10-40 C temperature range. Additional complexity (more LEDs) will be necessary to extend the spectral range into the NIR and SWIR. Long LED lifetimes should provide at least several hundred hours of stability, minimizing the need for expensive calibrations and supporting long-duration field campaigns.
Automatic Phase Calibration for RF Cavities using Beam-Loading Signals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edelen, J. P.; Chase, B. E.
Precise calibration of the cavity phase signals is necessary for the operation of any particle accelerator. For many systems this requires human-in-the-loop adjustments based on measurements of the beam parameters downstream. Some recent work has developed a scheme for the calibration of the cavity phase using beam measurements and beam-loading; however, this scheme is still a multi-step process that requires heavy automation or a human in the loop. In this paper we analyze a new scheme that uses only RF signals reacting to beam-loading to calculate the phase of the beam relative to the cavity. This technique could be used in slow control loops to provide real-time adjustment of the cavity phase calibration without human intervention, thereby increasing the stability and reliability of the accelerator.
Absolute calibration for complex-geometry biomedical diffuse optical spectroscopy
NASA Astrophysics Data System (ADS)
Mastanduno, Michael A.; Jiang, Shudong; El-Ghussein, Fadi; diFlorio-Alexander, Roberta; Pogue, Brian W.; Paulsen, Keith D.
2013-03-01
We have presented methodology to calibrate data in NIRS/MRI imaging versus an absolute reference phantom and results in both phantoms and healthy volunteers. This method directly calibrates data to a diffusion-based model, takes advantage of patient specific geometry from MRI prior information, and generates an initial guess without the need for a large data set. This method of calibration allows for more accurate quantification of total hemoglobin, oxygen saturation, water content, scattering, and lipid concentration as compared with other, slope-based methods. We found the main source of error in the method to be derived from incorrect assignment of reference phantom optical properties rather than initial guess in reconstruction. We also present examples of phantom and breast images from a combined frequency domain and continuous wave MRI-coupled NIRS system. We were able to recover phantom data within 10% of expected contrast and within 10% of the actual value using this method and compare these results with slope-based calibration methods. Finally, we were able to use this technique to calibrate and reconstruct images from healthy volunteers. Representative images are shown and discussion is provided for comparison with existing literature. These methods work towards fully combining the synergistic attributes of MRI and NIRS for in-vivo imaging of breast cancer. Complete software and hardware integration in dual modality instruments is especially important due to the complexity of the technology and success will contribute to complex anatomical and molecular prognostic information that can be readily obtained in clinical use.
Load Modeling and Calibration Techniques for Power System Studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chassin, Forrest S.; Mayhorn, Ebony T.; Elizondo, Marcelo A.
2011-09-23
Load modeling is the most uncertain area in power system simulations. Having an accurate load model is important for power system planning and operation. Here, a review of load modeling and calibration techniques is given. This paper is not comprehensive, but covers some of the techniques most commonly found in the literature. The advantages and disadvantages of each technique are outlined.
NASA Technical Reports Server (NTRS)
Tawfik, Hazem
1991-01-01
A relatively simple, inexpensive, and generic technique that could be used in both laboratories and some operational site environments is introduced at the Robotics Applications and Development Laboratory (RADL) at Kennedy Space Center (KSC). In addition, this report gives a detailed explanation of the setup procedure, data collection, and analysis using this new technique, which was developed at the State University of New York at Farmingdale. The technique was used to evaluate the repeatability, accuracy, and overshoot of the Unimate Industrial Robot, PUMA 500. The data were statistically analyzed to provide insight into the performance of the systems and components of the robot. The same technique was also used to check the forward kinematics against the inverse kinematics of RADL's PUMA robot. Recommendations were made for RADL to use this technique for laboratory calibration of its existing robots, such as the ASEA, the high-speed controller, and the Automated Radiator Inspection Device (ARID). Recommendations were also made to develop and establish other calibration techniques more suitable for site calibration environments and robot certification.
Facilities and Techniques for X-Ray Diagnostic Calibration in the 100-eV to 100-keV Energy Range
NASA Astrophysics Data System (ADS)
Gaines, J. L.; Wittmayer, F. J.
1986-08-01
The Lawrence Livermore National Laboratory (LLNL) has been a pioneer in the field of x-ray diagnostic calibration for more than 20 years. We have built steady state x-ray sources capable of supplying fluorescent lines of high spectral purity in the 100-eV to 100-keV energy range, and these sources have been used in the calibration of x-ray detectors, mirrors, crystals, filters, and film. This paper discusses our calibration philosophy and techniques, and describes some of our x-ray sources. Examples of actual calibration data are presented as well.
New Wavenumber Calibration Tables From Heterodyne Frequency Measurements
Maki, Arthur G.; Wells, Joseph S.
1992-01-01
This new calibration atlas is based on frequency rather than wavelength calibration techniques for absolute references. Since only a limited number of absolute frequency measurements is possible, additional data from alternate methodologies are used for difference-frequency measurements within each band investigated by the frequency measurement techniques. Data from these complementary techniques include the best Fourier transform measurements available. Included in the text relating to the atlas are a description of the heterodyne frequency measurement techniques and details of the analysis, including the Hamiltonians and the least-squares fitting and calculation. Also included are other relevant considerations such as intensities and lineshape parameters. A 390-entry bibliography containing all data sources used, followed by a section on errors, concludes the text portion. The primary calibration molecules are the linear triatomics carbonyl sulfide and nitrous oxide, which cover portions of the infrared spectrum ranging from 488 to 3120 cm−1. Some gaps in the coverage afforded by OCS and N2O are partially covered by NO, CO, and CS2. An additional region from 4000 to 4400 cm−1 is also included. The tabular portion of the atlas is too lengthy to include in an archival journal. Furthermore, different users have different requirements for such an atlas. In an effort to satisfy most users, we have made two different options available. The first is NIST Special Publication 821, which has a spectral map/facing table format. The spectral maps (as well as the facing tables) are calculated from molecular constants derived for the work. A complete list of all of the molecular transitions that went into making the maps is too long (perhaps by a factor of 4 or 5) to include in the facing tables.
The second option for those not interested in maps (or perhaps to supplement Special Publication 821) is the complete list (tables-only) which is available in computerized format as NIST Standard Reference Database #39, Wavelength Calibration Tables. PMID:28053441
Chai, X S; Schork, F J; DeCinque, Anthony
2005-04-08
This paper reports an improved headspace gas chromatographic (GC) technique for the determination of monomer solubilities in water. The method is based on a multiple headspace extraction GC technique developed previously [X.S. Chai, Q.X. Hou, F.J. Schork, J. Appl. Polym. Sci., in press], but with a major modification of the calibration technique. As a result, only a few iterations of headspace extraction and GC measurement are required, which avoids 'exhaustive' headspace extraction and thus shortens the experimental time for each analysis. For highly insoluble monomers, effort must be made to minimize adsorption in the headspace sampling channel, transport conduit, and capillary column by using a higher operating temperature and a short capillary column in the headspace sampler and GC system. For highly water-soluble monomers, a new calibration method is proposed. The combination of these modifications results in a method that is simple, rapid, and automated. While the current focus of the authors is on the determination of monomer solubility in aqueous solutions, the method should be applicable to the determination of the solubility of any organic compound in water.
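The multiple-headspace-extraction principle the method builds on is that successive extractions deplete the analyte geometrically, A_i = A_1·q^(i-1), so the 'exhaustive' total sum can be extrapolated from a few extractions rather than measured directly. A minimal sketch of that extrapolation (not the paper's specific calibration):

```python
import numpy as np

def mhe_total_area(peak_areas):
    """Extrapolate the exhaustive-extraction total from a few multiple
    headspace extraction (MHE) peak areas.  Successive extractions decay
    geometrically, so ln(A_i) is linear in i; fitting that line gives the
    depletion ratio q and first-extraction area A_1, and the total is the
    geometric-series sum A_1 / (1 - q)."""
    a = np.asarray(peak_areas, dtype=float)
    i = np.arange(len(a))
    slope, intercept = np.polyfit(i, np.log(a), 1)   # log-linear fit
    q = np.exp(slope)        # per-extraction depletion ratio
    a1 = np.exp(intercept)   # fitted first-extraction area
    return a1 / (1.0 - q)
```

For areas halving each extraction (100, 50, 25, 12.5) the extrapolated total is 200, the value an infinite series of extractions would accumulate.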
Bisi, Maria Cristina; Stagni, Rita; Caroselli, Alessio; Cappello, Angelo
2015-08-01
Inertial sensors are becoming widely used for the assessment of human movement in both clinical and research applications, thanks to their usability outside the laboratory. This work proposes a method for calibrating anatomical landmark positions in the wearable sensor reference frame with an easy-to-use, portable, and low-cost device. An off-the-shelf camera, a stick, and a pattern attached to the inertial sensor compose the device. The proposed technique is referred to as the video Calibrated Anatomical System Technique (vCAST). The absolute orientation of a synthetic femur was tracked both using the vCAST together with an inertial sensor and using stereo-photogrammetry as a reference. Anatomical landmark calibration showed a mean absolute error of 0.6±0.5 mm; these errors are smaller than those affecting the in-vivo identification of anatomical landmarks. The roll, pitch, and yaw anatomical frame orientations showed root mean square errors close to the accuracy limit of the wearable sensor used (1°), highlighting the reliability of the proposed technique. In conclusion, the present paper proposes and preliminarily verifies the performance of a method (vCAST) for calibrating anatomical landmark positions in the wearable sensor reference frame; the technique requires little time, is highly portable, easy to implement, and usable outside the laboratory. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
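The core geometric step of such a landmark calibration is expressing a camera-observed point (the stick tip at the landmark) in the sensor-fixed pattern frame, using the pattern pose estimated from the image. A sketch under the simplifying assumption that the pattern and sensor frames coincide (in practice a fixed pattern-to-sensor transform would also be applied):

```python
import numpy as np

def landmark_in_sensor_frame(R_cp, t_cp, p_cam):
    """Express a camera-observed landmark in the sensor-fixed pattern
    frame.  (R_cp, t_cp) is the pattern pose in the camera frame, as
    estimated from a calibration image, and p_cam is the stick-tip
    position in the camera frame:  p_pattern = R_cp^T (p_cam - t_cp)."""
    return R_cp.T @ (np.asarray(p_cam, dtype=float) - np.asarray(t_cp, dtype=float))
```

Inverting the rigid transform this way is what makes the calibration independent of where the camera happens to stand for each snapshot.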
SWAT: Model use, calibration, and validation
USDA-ARS?s Scientific Manuscript database
SWAT (Soil and Water Assessment Tool) is a comprehensive, semi-distributed river basin model that requires a large number of input parameters which complicates model parameterization and calibration. Several calibration techniques have been developed for SWAT including manual calibration procedures...
Bayesian analysis of physiologically based toxicokinetic and toxicodynamic models.
Hack, C Eric
2006-04-17
Physiologically based toxicokinetic (PBTK) and toxicodynamic (TD) models of bromate in animals and humans would improve our ability to accurately estimate toxic doses in humans based on available animal studies. These mathematical models are often highly parameterized and must be calibrated in order for the model predictions of internal dose to adequately fit the experimentally measured doses. Highly parameterized models are difficult to calibrate, and it is difficult to obtain accurate estimates of uncertainty or variability in model parameters with commonly used frequentist calibration methods, such as maximum likelihood estimation (MLE) or least-squares error approaches. The Bayesian approach called Markov chain Monte Carlo (MCMC) analysis can be used to successfully calibrate these complex models. Prior knowledge about the biological system and associated model parameters is easily incorporated in this approach in the form of prior parameter distributions, and the distributions are refined or updated using experimental data to generate posterior distributions of parameter estimates. The goal of this paper is to give the non-mathematician a brief description of the Bayesian approach and Markov chain Monte Carlo analysis, how this technique is used in risk assessment, and the issues associated with this approach.
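The workhorse of MCMC calibration is a sampler such as random-walk Metropolis: propose a perturbed parameter, accept or reject by the posterior ratio, and the visited values converge to the posterior distribution. A minimal, generic sketch (not the paper's PBTK-specific implementation):

```python
import numpy as np

def metropolis(log_post, theta0, n_steps, step=1.0, seed=0):
    """Minimal random-walk Metropolis sampler of a 1-D posterior.
    log_post: function returning the unnormalised log posterior
    theta0:   starting parameter value
    Returns the chain of sampled parameter values."""
    rng = np.random.default_rng(seed)
    theta, lp = theta0, log_post(theta0)
    chain = []
    for _ in range(n_steps):
        prop = theta + rng.normal(0.0, step)     # random-walk proposal
        lp_prop = log_post(prop)
        # Accept with probability min(1, posterior ratio)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta)
    return np.array(chain)
```

Sampling a standard normal log-posterior, the post-burn-in chain mean and standard deviation approach 0 and 1; in a PBTK setting, log_post would combine the prior with the likelihood of the measured internal doses.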
Lucio, Francesco; Calamia, Elisa; Russi, Elvio; Marchetto, Flavio
2013-01-01
When using an electronic portal imaging device (EPID) for dosimetric verification, the calibration of the sensitive area is of paramount importance. Two calibration methods are generally adopted: one empirical, based on an external reference dosimeter or on multiple narrow-beam irradiations, and one based on simulation of the EPID response. In this paper we present an alternative approach based on an intercalibration procedure that is independent of external dosimeters and of simulations, and is quick and easy to perform. Each element of a detector matrix is characterized by a different gain; the aim of the calibration procedure is to relate the gain of each element to a reference one. The method we used to compute the relative gains is based on recursive acquisitions with the EPID placed in different positions, assuming a constant beam fluence for subsequent deliveries. By applying an established procedure and analysis algorithm, the EPID calibration was repeated under several working conditions. Data show that both the photon energy and the presence of a medium between the source and the detector affect the calibration coefficients by less than 1%. The calibration coefficients were then applied to the acquired images, comparing the EPID dose images with films. Measurements were performed with an open field, placing the film at the level of the EPID. The standard deviation of the distribution of the point-to-point differences is 0.6%. An approach of this type to EPID calibration has many advantages with respect to the standard methods: it does not need an external dosimeter, it is not tied to the irradiation technique, and it is easy to implement in clinical practice. Moreover, it can be applied for either transit or non-transit dosimetry, solving the problem of EPID calibration independently of the dose reconstruction method. PACS number: 87.56.‐v PMID:24257285
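The intercalibration idea, shifted acquisitions of a constant-fluence beam chaining each element's gain to a reference element, can be sketched in one dimension. This is a simplified illustration of the principle, not the paper's full recursive procedure:

```python
import numpy as np

def relative_gains(reading_pos0, reading_pos1):
    """Relate every detector element's gain to element 0 from two
    acquisitions of the same constant-fluence beam, with the EPID
    shifted by exactly one element between deliveries (1-D sketch).

    With readings r0[k] = g[k] * F[k] and r1[k] = g[k] * F[k+1],
    element k+1 at position 0 and element k at position 1 sample the
    same fluence, so g[k+1] / g[k] = r0[k+1] / r1[k]; chaining these
    ratios yields all gains relative to element 0."""
    n = len(reading_pos0)
    g = np.ones(n)
    for k in range(n - 1):
        g[k + 1] = g[k] * reading_pos0[k + 1] / reading_pos1[k]
    return g / g[0]
```

Because only ratios of readings of the same fluence enter, the beam profile itself cancels out, which is why no external dosimeter is needed.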
Madou, Marc; Zoval, Jim; Jia, Guangyao; Kido, Horacio; Kim, Jitae; Kim, Nahui
2006-01-01
In this paper, centrifuge-based microfluidic platforms are reviewed and compared with other popular microfluidic propulsion methods. The underlying physical principles of centrifugal pumping in microfluidic systems are presented and the various centrifuge fluidic functions, such as valving, decanting, calibration, mixing, metering, heating, sample splitting, and separation, are introduced. Those fluidic functions have been combined with analytical measurement techniques, such as optical imaging, absorbance, and fluorescence spectroscopy and mass spectrometry, to make the centrifugal platform a powerful solution for medical and clinical diagnostics and high throughput screening (HTS) in drug discovery. Applications of a compact disc (CD)-based centrifuge platform analyzed in this review include two-point calibration of an optode-based ion sensor, an automated immunoassay platform, multiple parallel screening assays, and cellular-based assays. The use of modified commercial CD drives for high-resolution optical imaging is discussed as well. From a broader perspective, we compare the technical barriers involved in applying microfluidics for sensing and diagnostic use and in applying such techniques to HTS. The latter poses fewer challenges, which explains why HTS products based on a CD fluidic platform are already commercially available, whereas we might have to wait longer to see commercial CD-based diagnostics.
High Purity Pion Beam at TRIUMF
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kettell, S.; Kettell, S.; Aguilar-Arevalo, A.
An extension of the TRIUMF M13 low-energy pion channel designed to suppress positrons based on an energy-loss technique is described. A source of beam channel momentum calibration from the decay {pi}{sup +} {yields} e{sup +}{nu} is also described.
Dosimetry of 192Ir sources used for endovascular brachytherapy
NASA Astrophysics Data System (ADS)
Reynaert, N.; Van Eijkeren, M.; Taeymans, Y.; Thierens, H.
2001-02-01
An in-phantom calibration technique for 192Ir sources used for endovascular brachytherapy is presented. Three different source lengths were investigated. The calibration was performed in a solid phantom using a Farmer-type ionization chamber at source to detector distances ranging from 1 cm to 5 cm. The dosimetry protocol for medium-energy x-rays extended with a volume-averaging correction factor was used to convert the chamber reading to dose to water. The air kerma strength of the sources was determined as well. EGS4 Monte Carlo calculations were performed to determine the depth dose distribution at distances ranging from 0.6 mm to 10 cm from the source centre. In this way we were able to convert the absolute dose rate at 1 cm distance to the reference point chosen at 2 mm distance. The Monte Carlo results were confirmed by radiochromic film measurements, performed with a double-exposure technique. The dwell times to deliver a dose of 14 Gy at the reference point were determined and compared with results given by the source supplier (CORDIS). They determined the dwell times from a Sievert integration technique based on the source activity. The results from both methods agreed to within 2% for the 12 sources that were evaluated. A Visual Basic routine that superimposes dose distributions, based on the Monte Carlo calculations and the in-phantom calibration, onto intravascular ultrasound images is presented. This routine can be used as an online treatment planning program.
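The dwell-time determination described above amounts to converting the calibrated dose rate at 1 cm to the 2 mm reference point via the Monte Carlo depth-dose ratio, then dividing the prescribed dose by that rate. A sketch with illustrative numbers (the actual rates and ratios are source-specific and not quoted in the abstract):

```python
def dwell_time(dose_gy, dose_rate_1cm, mc_ratio_2mm_over_1cm):
    """Dwell time (s) to deliver a prescribed dose at the 2 mm reference
    point, given the in-phantom calibrated dose rate at 1 cm (Gy/s) and
    a Monte Carlo depth-dose ratio D(2 mm)/D(1 cm) (sketch)."""
    dose_rate_ref = dose_rate_1cm * mc_ratio_2mm_over_1cm
    return dose_gy / dose_rate_ref
```

For example, a 14 Gy prescription with a hypothetical 0.02 Gy/s at 1 cm and a depth-dose ratio of 20 gives a 35 s dwell time.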
Hybrid x-space: a new approach for MPI reconstruction.
Tateo, A; Iurino, A; Settanni, G; Andrisani, A; Stifanelli, P F; Larizza, P; Mazzia, F; Mininni, R M; Tangaro, S; Bellotti, R
2016-06-07
Magnetic particle imaging (MPI) is a new medical imaging technique capable of recovering the distribution of superparamagnetic particles from their measured induced signals. In the literature there are two main MPI reconstruction techniques: measurement-based (MB) and x-space (XS). The MB method is expensive because it requires a long calibration procedure as well as a reconstruction phase that can be numerically costly. On the other hand, the XS method is simpler than MB, but exact knowledge of the field-free point (FFP) motion is essential for its implementation. Our simulation work focuses on the implementation of a new approach for MPI reconstruction, called hybrid x-space (HXS), representing a combination of the previous methods. Specifically, our approach is based on XS reconstruction because it requires knowledge of the FFP position and velocity at each time instant. The difference with respect to the original XS formulation is how the FFP velocity is computed: we estimate it from the experimental measurements of the calibration scans, typical of the MB approach. Moreover, a compressive sensing technique is applied in order to reduce the calibration time, using fewer sampling positions. Simulations highlight that the HXS and XS methods give similar results. Furthermore, appropriate use of compressive sensing is crucial for obtaining a good balance between time reduction and reconstructed image quality. Our proposal is suitable for open-geometry configurations of human-size devices, where incidental factors could make the currents, the fields, and the FFP trajectory irregular.
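The x-space step common to XS and HXS, normalising the induced signal by the FFP speed and binning it at the instantaneous FFP position, can be sketched in one dimension. Here the velocity is estimated by finite differences of measured FFP positions, in the spirit of the hybrid approach (a simplified illustration, not the paper's implementation):

```python
import numpy as np

def xspace_grid(signal, ffp_pos, t, grid_edges):
    """1-D x-space style gridding (sketch).
    signal:  induced signal samples over time
    ffp_pos: measured/estimated FFP positions at the same instants
    t:       sample times
    grid_edges: spatial bin edges of the reconstruction grid
    Returns the velocity-normalised signal averaged per spatial bin."""
    v = np.gradient(ffp_pos, t)          # FFP velocity via finite differences
    norm = signal / np.abs(v)            # velocity-normalised signal
    acc, _ = np.histogram(ffp_pos, bins=grid_edges, weights=norm)
    cnt, _ = np.histogram(ffp_pos, bins=grid_edges)
    # Average the accumulated values in each visited bin
    return np.divide(acc, cnt, out=np.zeros_like(acc, dtype=float),
                     where=cnt > 0)
```

For a constant-velocity trajectory over a linear image, each bin recovers the mean image value of the positions it contains.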
Tunable laser techniques for improving the precision of observational astronomy
NASA Astrophysics Data System (ADS)
Cramer, Claire E.; Brown, Steven W.; Lykke, Keith R.; Woodward, John T.; Bailey, Stephen; Schlegel, David J.; Bolton, Adam S.; Brownstein, Joel; Doherty, Peter E.; Stubbs, Christopher W.; Vaz, Amali; Szentgyorgyi, Andrew
2012-09-01
Improving the precision of observational astronomy requires not only new telescopes and instrumentation, but also advances in observing protocols, calibrations and data analysis. The Laser Applications Group at the National Institute of Standards and Technology in Gaithersburg, Maryland has been applying advances in detector metrology and tunable laser calibrations to problems in astronomy since 2007. Using similar measurement techniques, we have addressed a number of seemingly disparate issues: precision flux calibration for broad-band imaging, precision wavelength calibration for high-resolution spectroscopy, and precision PSF mapping for fiber spectrographs of any resolution. In each case, we rely on robust, commercially-available laboratory technology that is readily adapted to use at an observatory. In this paper, we give an overview of these techniques.
Zhang, Jiarui; Zhang, Yingjie; Chen, Bo
2017-12-20
The three-dimensional measurement system with a binary defocusing technique is widely applied in diverse fields, and its measurement accuracy is mainly determined by the accuracy of the out-of-focus projector calibration. In this paper, a high-precision out-of-focus projector calibration method based on distortion correction on the projection plane and a nonlinear optimization algorithm is proposed. To this end, the paper experimentally establishes that the projector has noticeable distortions outside its focus plane. Based on this observation, the proposed method uses a high-order radial and tangential lens distortion representation on the projection plane to correct the calibration residuals caused by projection distortion. The final accurate parameters of the out-of-focus projector were obtained using a nonlinear optimization algorithm with good initial values, which were provided by coarsely calibrating the parameters of the out-of-focus projector on the focal and projection planes. The experimental results demonstrate that the proposed method can accurately calibrate an out-of-focus projector, regardless of the amount of defocusing.
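A high-order radial plus tangential representation of the kind described is conventionally the Brown-Conrady model; a sketch applying it to normalised projection-plane coordinates (the coefficient names k1..k3, p1, p2 are the conventional ones, assumed rather than taken from the paper):

```python
def distort(xn, yn, k1, k2, k3, p1, p2):
    """Apply a Brown-Conrady style radial + tangential lens distortion
    to normalised coordinates (xn, yn) (sketch).
    k1..k3: radial coefficients; p1, p2: tangential coefficients."""
    r2 = xn * xn + yn * yn                              # squared radius
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    xd = xn * radial + 2.0 * p1 * xn * yn + p2 * (r2 + 2.0 * xn * xn)
    yd = yn * radial + p1 * (r2 + 2.0 * yn * yn) + 2.0 * p2 * xn * yn
    return xd, yd
```

With all coefficients zero the mapping is the identity, and the calibration's nonlinear optimization amounts to fitting these coefficients (together with the intrinsics) so the residuals on the projection plane vanish.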
Updating the HST/ACS G800L Grism Calibration
NASA Astrophysics Data System (ADS)
Hathi, Nimish P.; Pirzkal, Norbert; Grogin, Norman A.; Chiaberge, Marco; ACS Team
2018-06-01
We present results from our ongoing work on obtaining newly derived trace and wavelength calibrations of the HST/ACS G800L grism and comparing them to the previous set of calibrations. Past calibration efforts were based on 2003 observations. New observations of an emission line Wolf-Rayet star (WR96) were recently taken in HST Cycle 25 (PID: 15401). These observations are used to analyze and measure various grism properties, including wavelength calibration, spectral trace/tilt, length/size of grism orders, and spacing between the various grism orders. To account for the field dependence, we observe WR96 at 3 different observing positions over the HST/ACS field of view. The three locations are the center of chip 1, the center of chip 2, and the center of the WFC1A-2K subarray (center of WFC Amp A on chip 1). These new data will help us evaluate any differences in the G800L grism properties compared to previous calibration data, and allow us to apply improved data analysis techniques to update these old measurements.
DOT National Transportation Integrated Search
1963-03-01
A simple technique is presented for calibrating an electronic system used in the plotting of erythrocyte volume spectra. The calibration factors, once obtained, apparently remain applicable for some time. Precise estimates of calibration factors appe...
The Majorana Demonstrator calibration system
Abgrall, N.; Arnquist, I. J.; Avignone, III, F. T.; ...
2017-08-08
The Majorana Collaboration is searching for the neutrinoless double-beta decay of the nucleus 76Ge. The Majorana Demonstrator is an array of germanium detectors deployed with the aim of implementing background reduction techniques suitable for a 1-ton 76Ge-based search. The ultra low-background conditions require regular calibrations to verify proper function of the detectors. Radioactive line sources can be deployed around the cryostats containing the detectors for regular energy calibrations. When measuring in low-background mode, these line sources have to be stored outside the shielding so they do not contribute to the background. The deployment and the retraction of the source are designed to be controlled by the data acquisition system and do not require any direct human interaction. In this study, we detail the design requirements and implementation of the calibration apparatus, which provides the event rates needed to define the pulse-shape cuts and energy calibration used in the final analysis as well as data that can be compared to simulations.
McGinitie, Teague M; Ebrahimi-Najafabadi, Heshmatollah; Harynuk, James J
2014-02-21
A new method for calibrating thermodynamic data to be used in the prediction of analyte retention times is presented. The method allows thermodynamic data collected on one column to be used in making predictions across columns of the same stationary phase but with varying geometries. This calibration is essential because slight variations in column inner diameter and stationary-phase film thickness, between columns or as a column ages, will adversely affect the accuracy of predictions. The calibration technique uses a Grob standard mixture along with a Nelder-Mead simplex algorithm and a previously developed model of GC retention times, based on a three-parameter thermodynamic model, to estimate both the inner diameter and the stationary-phase film thickness. The calibration method is highly successful, with the predicted retention times for a set of alkanes, ketones, and alcohols having an average error of 1.6 s across three columns.
Technique for the metrology calibration of a Fourier transform spectrometer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spencer, Locke D.; Naylor, David A
2008-11-10
A method is presented for using a Fourier transform spectrometer (FTS) to calibrate the metrology of a second FTS. This technique is particularly useful when the second FTS is inside a cryostat or otherwise inaccessible.
A fast combination calibration of foreground and background for pipelined ADCs
NASA Astrophysics Data System (ADS)
Kexu, Sun; Lenian, He
2012-06-01
This paper describes a fast digital calibration scheme for pipelined analog-to-digital converters (ADCs). The proposed method corrects the nonlinearity caused by finite opamp gain and capacitor mismatch in multiplying digital-to-analog converters (MDACs). The considered calibration technique combines the advantages of both foreground and background calibration schemes. In this combination calibration algorithm, a novel parallel background calibration with signal-shifted correlation is proposed, and its calibration cycle is very short. The details of this technique are described using the example of a 14-bit 100 Msample/s pipelined ADC. The high convergence speed of this background calibration is achieved by three means. First, a modified 1.5-bit stage is proposed in order to allow the injection of a large pseudo-random dither without missing codes. Second, before correlating the signal, it is shifted according to the input signal so that the correlation error converges quickly. Finally, the front pipeline stages are calibrated simultaneously rather than stage by stage to reduce the calibration tracking constants. Simulation results confirm that the combination calibration has a fast startup process and a short background calibration cycle of 2 × 2²¹ conversions.
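The correlation step rests on standard pseudo-random (PN) dither gain estimation: inject a PN dither into the stage, then correlate the digital output with the dither sequence so that the uncorrelated input signal averages out. A toy numerical sketch, not the authors' circuit (the dither amplitude 0.01, gain values, and sample count are made up):

```python
import random

def estimate_gain(true_gain, dither_amp=0.01, n=1_000_000, seed=7):
    """Estimate a stage gain by correlating its output with an injected
    pseudo-random +/-1 dither; the input signal is uncorrelated with the
    dither and averages out of the correlation."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        d = rng.choice((-1.0, 1.0))            # PN dither bit
        x = rng.uniform(-1.0, 1.0)             # input signal sample
        y = true_gain * (x + dither_amp * d)   # stage output with dither
        acc += y * d                           # correlate output with dither
    return acc / (dither_amp * n)              # converges toward true_gain

g = estimate_gain(1.98)  # an ideal radix-2 stage would give 2.00
```

The slow convergence of this plain correlator (millions of conversions) is exactly what the paper's signal shifting and parallel stage calibration are designed to accelerate.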
Waugh, C J; Rosenberg, M J; Zylstra, A B; Frenje, J A; Séguin, F H; Petrasso, R D; Glebov, V Yu; Sangster, T C; Stoeckl, C
2015-05-01
Neutron time of flight (nTOF) detectors are used routinely to measure the absolute DD neutron yield at OMEGA. To check the DD yield calibration of these detectors, originally calibrated using indium activation systems, which in turn were cross-calibrated to NOVA nTOF detectors in the early 1990s, a direct in situ calibration method using CR-39 range filter proton detectors has been successfully developed. By measuring DD neutron and proton yields from a series of exploding pusher implosions at OMEGA, a yield calibration coefficient of 1.09 ± 0.02 (relative to the previous coefficient) was determined for the 3 m nTOF detector. In addition, comparison of these and other shots indicates that a significant reduction in charged particle flux anisotropies is achieved when bang time occurs significantly (on the order of 500 ps) after the trailing edge of the laser pulse. This is an important observation, as the main source of the yield calibration error is particle anisotropies caused by field effects. The results indicate that the CR-39-nTOF in situ calibration method can serve as a valuable technique for calibrating and reducing the uncertainty in the DD absolute yield calibration of nTOF detector systems on OMEGA, the National Ignition Facility, and the Laser Megajoule facility.
Calibration of the SBUV version 8.6 ozone data product
NASA Astrophysics Data System (ADS)
DeLand, M. T.; Taylor, S. L.; Huang, L. K.; Fisher, B. L.
2012-11-01
This paper describes the calibration process for the Solar Backscatter Ultraviolet (SBUV) Version 8.6 (V8.6) ozone data product. Eight SBUV instruments have flown on NASA and NOAA satellites since 1970, and a continuous data record is available since November 1978. The accuracy of ozone trends determined from these data depends on the calibration and long-term characterization of each instrument. V8.6 calibration adjustments are determined at the radiance level, and do not rely on comparison of retrieved ozone products with other instruments. The primary SBUV instrument characterization is based on prelaunch laboratory tests and dedicated on-orbit calibration measurements. We supplement these results with "soft" calibration techniques using carefully chosen subsets of radiance data and information from the retrieval algorithm output to validate each instrument's calibration. The estimated long-term uncertainty in albedo is approximately ±0.8-1.2% (1σ) for most of the instruments. The overlap between these instruments and the Shuttle SBUV (SSBUV) data allows us to intercalibrate the SBUV instruments to produce a coherent V8.6 data set covering more than 32 yr. The estimated long-term uncertainty in albedo is less than 3% over this period.
Time-resolved quantitative-phase microscopy of laser-material interactions using a wavefront sensor.
Gallais, Laurent; Monneret, Serge
2016-07-15
We report on a simple and efficient technique based on a wavefront sensor to obtain time-resolved amplitude and phase images of laser-material interactions. The main interest of the technique is to obtain quantitative self-calibrated phase measurements in one shot at the femtosecond time-scale, with high spatial resolution. The technique is used for direct observation and quantitative measurement of the Kerr effect in a fused silica substrate and free electron generation by photo-ionization processes in an optical coating.
ALMA High Frequency Techniques
NASA Astrophysics Data System (ADS)
Meyer, J. D.; Mason, B.; Impellizzeri, V.; Kameno, S.; Fomalont, E.; Chibueze, J.; Takahashi, S.; Remijan, A.; Wilson, C.; ALMA Science Team
2015-12-01
The purpose of the ALMA High Frequency Campaign is to improve the quality and efficiency of science observing in Bands 8, 9, and 10 (385-950 GHz), the highest frequencies available to the ALMA project. To this end, we outline observing modes which we have demonstrated to improve high frequency calibration for the 12m array and the ACA, and we present the calibration of the total power antennas at these frequencies. Band-to-band (B2B) transfer and bandwidth switching (BWSW), techniques which improve the speed and accuracy of calibration at the highest frequencies, are most necessary in Bands 8, 9, and 10 due to the rarity of strong calibrators. These techniques successfully enable increased signal-to-noise on the calibrator sources (and better calibration solutions) by measuring the calibrators at lower frequencies (B2B) or in wider bandwidths (BWSW) compared to the science target. We have also demonstrated the stability of the bandpass shape to better than 2.4% for 1 hour, hidden behind random noise, in Band 9. Finally, total power observing using the dual sideband receivers in Bands 9 and 10 requires the separation of the two sidebands; this procedure has been demonstrated in Band 9 and is undergoing further testing in Band 10.
NASA Astrophysics Data System (ADS)
Macander, M. J.; Frost, G. V., Jr.
2015-12-01
Regional-scale mapping of vegetation and other ecosystem properties has traditionally relied on medium-resolution remote sensing such as Landsat (30 m) and MODIS (250 m). Yet, the burgeoning availability of high-resolution (<=2 m) imagery and ongoing advances in computing power and analysis tools raise the prospect of performing ecosystem mapping at fine spatial scales over large study domains. Here we demonstrate cutting-edge mapping approaches over a ~35,000 km² study area on Alaska's North Slope using calibrated and atmospherically-corrected mosaics of high-resolution WorldView-2 and GeoEye-1 imagery: (1) an a priori spectral approach incorporating the Satellite Imagery Automatic Mapper (SIAM) algorithms; (2) image segmentation techniques; and (3) texture metrics. The SIAM spectral approach classifies radiometrically-calibrated imagery into general vegetation density categories and non-vegetated classes. The SIAM classes were developed globally, and their applicability in arctic tundra environments has not been previously evaluated. Image segmentation, or object-based image analysis, automatically partitions high-resolution imagery into homogeneous image regions that can then be analyzed based on spectral, textural, and contextual information. We applied eCognition software to delineate waterbodies and vegetation classes, in combination with other techniques. Texture metrics were evaluated to determine the feasibility of using high-resolution imagery to algorithmically characterize periglacial surface forms (e.g., ice-wedge polygons), which are an important physical characteristic of permafrost-dominated regions but which cannot be distinguished by medium-resolution remote sensing. These advanced mapping techniques yield products that can provide essential information supporting a broad range of ecosystem science and land-use planning applications in northern Alaska and elsewhere in the circumpolar Arctic.
Uplink Array Calibration via Far-Field Power Maximization
NASA Technical Reports Server (NTRS)
Vilnrotter, V.; Mukai, R.; Lee, D.
2006-01-01
Uplink antenna arrays have the potential to greatly increase the Deep Space Network's high-data-rate uplink capabilities as well as useful range, and to provide additional uplink signal power during critical spacecraft emergencies. While techniques for calibrating an array of receive antennas have been addressed previously, proven concepts for uplink array calibration have yet to be demonstrated. This article describes a method of utilizing the Moon as a natural far-field reflector for calibrating a phased array of uplink antennas. Using this calibration technique, the radio frequency carriers transmitted by each antenna of the array are optimally phased to ensure that the uplink power received by the spacecraft is maximized.
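Phasing an array against a measured far-field power can be sketched as coordinate ascent: scan one antenna's phase at a time and keep the setting that maximizes the received power. The N = 4 unit-amplitude carriers, the phase offsets, and the 64-step scan below are illustrative, not the DSN procedure:

```python
import cmath
import math

def received_power(offsets, corrections):
    """Power of the coherent sum of N unit-amplitude carriers."""
    field = sum(cmath.exp(1j * (o + c)) for o, c in zip(offsets, corrections))
    return abs(field) ** 2

def calibrate(offsets, steps=64, sweeps=3):
    """Coordinate ascent: scan each antenna's phase in turn and keep the
    setting that maximizes the measured far-field power."""
    corr = [0.0] * len(offsets)
    for _ in range(sweeps):
        for k in range(len(corr)):
            corr[k] = max((2 * math.pi * i / steps for i in range(steps)),
                          key=lambda p: received_power(
                              offsets, corr[:k] + [p] + corr[k + 1:]))
    return corr

offsets = [0.3, 1.7, -2.1, 0.9]        # made-up instrumental phases (radians)
corr = calibrate(offsets)
power = received_power(offsets, corr)  # approaches N**2 = 16 when phased up
```

Each scan aligns one carrier with the sum of the others, so the array converges to the fully phased state; the residual loss is set by the phase-step granularity.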
World-Wide Standardized Seismograph Network: a data users guide
Peterson, Jon R.; Hutt, Charles R.
2014-01-01
The purpose of this report, which is based on an unpublished draft prepared in the 1970s, is to provide seismologists with the information they may need to use the WWSSN data set as it becomes available in a more easily accessible and convenient format on the Internet. The report includes a description of the WWSSN network, station facilities, operations and instrumentation, a derivation of the instrument transfer functions, tables of transfer functions, a description of calibration techniques, and a description of a method used to determine important instrument constants using recorded calibration data.
Some aspects of robotics calibration, design and control
NASA Technical Reports Server (NTRS)
Tawfik, Hazem
1990-01-01
The main objective is to introduce techniques in the areas of testing and calibration, design, and control of robotic systems. A statistical technique is described that analyzes a robot's performance and provides quantitative three-dimensional evaluation of its repeatability, accuracy, and linearity. Based on this analysis, a corrective action should be taken to compensate for any existing errors and enhance the robot's overall accuracy and performance. A comparison between robotics simulation software packages that were commercially available (SILMA, IGRIP) and that of Kennedy Space Center (ROBSIM) is also included. These computer codes simulate the kinematics and dynamics patterns of various robot arm geometries to help the design engineer in sizing and building the robot manipulator and control system. A brief discussion on an adaptive control algorithm is provided.
Odegård, M; Mansfeld, J; Dundas, S H
2001-08-01
Calibration materials for microanalysis of Ti minerals have been prepared by direct fusion of synthetic and natural materials by resistance heating in high-purity graphite electrodes. Synthetic materials were FeTiO3 and TiO2 reagents doped with minor and trace elements; CRMs for ilmenite, rutile, and a Ti-rich magnetite were used as natural materials. Problems occurred during fusion of Fe2O3-rich materials, because at atmospheric pressure Fe2O3 decomposes into Fe3O4 and O2 at 1462 degrees C. An alternative fusion technique under pressure was tested, but the resulting materials were characterized by extensive segregation and development of separate phases. Fe2O3-rich materials were therefore fused below this temperature, resulting in a form of sintering, without conversion of the materials into amorphous glasses. The fused materials were studied by optical microscopy and EPMA, and tested as calibration materials by inductively coupled plasma mass spectrometry equipped with laser ablation for sample introduction (LA-ICP-MS). It was demonstrated that calibration curves based on materials of rutile composition generally coincide, within normal analytical uncertainty, with calibration curves based on materials of ilmenite composition. It is, therefore, concluded that LA-ICP-MS analysis of Ti minerals can advantageously be based exclusively on calibration materials prepared for rutile, thereby avoiding the special fusion problems related to oxide mixtures of ilmenite composition. The sintered materials were in good overall agreement with homogeneous glass materials, indicating that sintered mineral concentrates might also be a useful alternative for instrument calibration in other situations, e.g. as an alternative to pressed powders.
Optogalvanic wavelength calibration for laser monitoring of reactive atmospheric species
NASA Technical Reports Server (NTRS)
Webster, C. R.
1982-01-01
Laser-based techniques have been successfully employed for monitoring atmospheric species of importance to stratospheric ozone chemistry or tropospheric air quality control. When spectroscopic methods using tunable lasers are used, a simultaneously recorded reference spectrum is required for wavelength calibration. For stable species this is readily achieved by incorporating into the sensing instrument a reference cell containing the species to be monitored. However, when the species of interest is short-lived, this approach is unsuitable. It is proposed that wavelength calibration for short-lived species may be achieved by generating the species of interest in an electrical or RF discharge and using optogalvanic detection as a simple, sensitive, and reliable means of recording calibration spectra. The wide applicability of this method is emphasized. Ultraviolet, visible, or infrared lasers, either CW or pulsed, may be used in aircraft, balloon, or shuttle experiments for sensing atoms, molecules, radicals, or ions.
NASA Astrophysics Data System (ADS)
Borisov, A. A.; Deryabina, N. A.; Markovskij, D. V.
2017-12-01
Instantaneous power is a key parameter of ITER. Its monitoring with an accuracy of a few percent is an urgent and challenging aspect of neutron diagnostics. In a series of works published in Problems of Atomic Science and Technology, Series: Thermonuclear Fusion under a common title, a step-by-step neutronics analysis was given to substantiate a calibration technique for the DT and DD modes of ITER. A Gauss quadrature scheme, optimal for processing "expensive" experiments, is used for numerical integration of 235U and 238U detector responses to point sources of 14-MeV neutrons. This approach allows controlling the integration accuracy in relation to the number of coordinate mesh points and thus minimizing the number of irradiations at a given uncertainty of the full monitor response. In previous works, responses of the divertor and blanket monitors to isotropic point sources of DT and DD neutrons in the plasma profile and to models of real sources were calculated within the ITER model using the MCNP code. The neutronics analyses have allowed formulating the basic principles of calibration that are optimal for achieving the maximum accuracy at the minimum duration of in situ experiments at the reactor. In this work, scenarios of the preliminary and basic experimental ITER runs are suggested on the basis of those principles. It is proposed to calibrate the monitors only with DT neutrons and use correction factors to the DT mode calibration for the DD mode. It is reasonable to perform full calibration only with 235U chambers and calibrate 238U chambers by responses of the 235U chambers during reactor operation (cross-calibration). The divertor monitor can be calibrated using both direct measurement of responses at the Gauss positions of a point source and simplified techniques based on the concepts of equivalent ring sources and inverse response distributions, which will considerably reduce the number of measurements.
It is shown that the monitor based on the average responses of the horizontal and vertical neutron chambers remains spatially stable as the source moves and can be used in addition to the staff monitor at neutron fluxes in the detectors four orders of magnitude lower than on the first wall, where the staff detectors are located. Owing to the low background, the neutron-chamber detectors do not need in-reactor calibration, because their calibration reduces to determining the absolute detector efficiency for 14-MeV neutrons, which is a routine out-of-reactor procedure.
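The numerical integration step uses a standard Gauss-Legendre rule, which is why so few irradiation points are needed. A sketch of a 3-point rule applied to a hypothetical smooth detector-response curve (the polynomial response is a stand-in, not an ITER quantity):

```python
import math

# 3-point Gauss-Legendre nodes and weights on [-1, 1]
NODES = [-math.sqrt(3.0 / 5.0), 0.0, math.sqrt(3.0 / 5.0)]
WEIGHTS = [5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0]

def gauss3(f, a, b):
    """Integrate f over [a, b] with the 3-point Gauss rule
    (exact for polynomials of degree <= 5)."""
    half, mid = 0.5 * (b - a), 0.5 * (a + b)
    return half * sum(w * f(mid + half * t) for w, t in zip(WEIGHTS, NODES))

# hypothetical detector response vs. source coordinate (polynomial stand-in)
response = lambda s: 1.0 + 0.3 * s - 0.05 * s ** 3
total = gauss3(response, 0.0, 4.0)   # only three "irradiation points" needed
```

Because a 3-point rule integrates any sufficiently smooth response almost exactly, only three source positions per coordinate are required, which is the sense in which the quadrature minimizes the number of irradiations.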
A two-step A/D conversion and column self-calibration technique for low noise CMOS image sensors.
Bae, Jaeyoung; Kim, Daeyun; Ham, Seokheon; Chae, Youngcheol; Song, Minkyu
2014-07-04
In this paper, a 120 frames per second (fps) low noise CMOS Image Sensor (CIS) based on a Two-Step Single Slope ADC (TS SS ADC) and a column self-calibration technique is proposed. The TS SS ADC is suitable for high speed video systems because its conversion speed is much faster (by more than 10 times) than that of the Single Slope ADC (SS ADC). However, mismatch errors arise between the coarse block and the fine block due to the two-step operation of the TS SS ADC. In general, this makes it difficult to implement the TS SS ADC beyond a 10-bit resolution. To correct these errors, a new 4-input comparator is discussed and a high resolution TS SS ADC is proposed. Further, a feedback circuit that enables column self-calibration to reduce the Fixed Pattern Noise (FPN) is also described. The proposed chip has been fabricated with 0.13 μm Samsung CIS technology and supports VGA resolution. The pixel is based on the 4-TR Active Pixel Sensor (APS). The high frame rate of 120 fps is achieved at VGA resolution. The measured FPN is 0.38 LSB, and the measured dynamic range is about 64.6 dB.
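The speed advantage of a two-step single-slope conversion comes from splitting one long ramp into a coarse ramp and a fine ramp. An idealized behavioral sketch (the 5+5-bit split and threshold logic are illustrative, and none of the coarse/fine mismatch effects the paper corrects are modeled):

```python
def two_step_ss_adc(vin, vref=1.0, coarse_bits=5, fine_bits=5):
    """Two-step single-slope conversion: a coarse ramp locates the segment,
    a fine ramp resolves the residue, so the ramp runs 2**5 + 2**5 cycles
    instead of 2**10 for a plain single-slope ADC."""
    coarse_lsb = vref / (1 << coarse_bits)
    fine_lsb = coarse_lsb / (1 << fine_bits)
    # coarse phase: step the ramp in coarse LSBs until it would pass vin
    coarse = 0
    while (coarse + 1) * coarse_lsb <= vin and coarse < (1 << coarse_bits) - 1:
        coarse += 1
    # fine phase: resolve the residue left inside the coarse segment
    residue = vin - coarse * coarse_lsb
    fine = min(int(residue / fine_lsb), (1 << fine_bits) - 1)
    return (coarse << fine_bits) | fine

code = two_step_ss_adc(0.5)   # mid-scale input -> mid-scale 10-bit code
```

Any gain or offset mismatch between the two ramps shows up exactly at the coarse-segment boundaries, which is the error the paper's 4-input comparator and calibration loop target.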
Non-parametric and least squares Langley plot methods
NASA Astrophysics Data System (ADS)
Kiedron, P. W.; Michalsky, J. J.
2016-01-01
Langley plots are used to calibrate sun radiometers primarily for the measurement of the aerosol component of the atmosphere that attenuates (scatters and absorbs) incoming direct solar radiation. In principle, the calibration of a sun radiometer is a straightforward application of the Bouguer-Lambert-Beer law V = V0 exp(-τ·m), where a plot of the log voltage ln(V) versus air mass m yields a straight line with intercept ln(V0). This ln(V0) subsequently can be used to solve for τ for any measurement of V and calculation of m. This calibration works well at some high mountain sites, but the application of the Langley plot calibration technique is more complicated at other, more interesting, locales. This paper is concerned with ferreting out calibrations at difficult sites and examining and comparing a number of conventional and non-conventional methods for obtaining successful Langley plots. The 11 techniques discussed indicate that both least squares and various non-parametric techniques produce satisfactory calibrations with no significant differences among them when the time series of ln(V0)'s are smoothed and interpolated with median and mean moving window filters.
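The least-squares variant of the Langley fit is simply a straight line through (m, ln V): the slope gives -τ and the intercept gives ln(V0). A minimal sketch on synthetic, noise-free data (the V0 and τ values are made up):

```python
import math

def langley_fit(airmass, voltage):
    """Least-squares Langley fit: ln(V) = ln(V0) - tau * m recovers the
    extraterrestrial constant V0 and the total optical depth tau."""
    y = [math.log(v) for v in voltage]
    n = len(airmass)
    mbar = sum(airmass) / n
    ybar = sum(y) / n
    slope = (sum((m - mbar) * (yi - ybar) for m, yi in zip(airmass, y))
             / sum((m - mbar) ** 2 for m in airmass))
    v0 = math.exp(ybar - slope * mbar)
    return v0, -slope   # tau is the negative of the fitted slope

# synthetic clear, stable morning: V0 = 2.0 V, tau = 0.12 (made-up values)
m = [1.5, 2.0, 3.0, 4.0, 5.0]
v = [2.0 * math.exp(-0.12 * mi) for mi in m]
v0, tau = langley_fit(m, v)
```

At difficult sites the atmosphere drifts during the air-mass sweep, so single fits like this scatter, which is why the paper smooths the resulting time series of ln(V0) with moving-window filters.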
NASA Astrophysics Data System (ADS)
Liu, Yande; Ying, Yibin; Lu, Huishan; Fu, Xiaping
2005-11-01
A new method is proposed to eliminate varying background and noise simultaneously for multivariate calibration of Fourier transform near-infrared (FT-NIR) spectral signals. An ideal spectrum signal prototype was constructed based on the FT-NIR spectrum of fruit sugar content measurement. The performances of wavelet-based threshold de-noising approaches via different combinations of wavelet base functions were compared. Three families of wavelet base functions (Daubechies, Symlets, and Coiflets) were applied to evaluate the performance of those wavelet bases and threshold selection rules in a series of experiments. The experimental results show that the best de-noising performance is reached with the Daubechies 4 or Symlet 4 wavelet base functions. Based on the optimized parameters, wavelet regression models for the sugar content of pear were also developed and resulted in a smaller prediction error than a traditional Partial Least Squares Regression (PLSR) model.
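As a minimal illustration of the threshold de-noising idea, the sketch below applies soft thresholding to the detail band of a one-level Haar transform in plain NumPy. This is only the general principle: the paper used full decompositions with Daubechies-4/Symlet-4 bases, and the signal and threshold here are made up.

```python
import numpy as np

def haar_denoise(signal, thresh):
    """One-level Haar wavelet soft-threshold de-noising (illustrative)."""
    x = np.asarray(signal, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail band (noise-dominated)
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft threshold
    out = np.empty_like(x)
    out[0::2] = (a + d) / np.sqrt(2)       # inverse Haar transform
    out[1::2] = (a - d) / np.sqrt(2)
    return out

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 4 * np.pi, 256))
noisy = clean + 0.2 * rng.standard_normal(256)
denoised = haar_denoise(noisy, thresh=0.3)
```

Thresholding the detail coefficients suppresses high-frequency noise while leaving the smooth signal content, carried by the approximation band, largely intact.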
An Investigation of a Photographic Technique of Measuring High Surface Temperatures
NASA Technical Reports Server (NTRS)
Siviter, James H., Jr.; Strass, H. Kurt
1960-01-01
A photographic method of temperature determination has been developed to measure elevated temperatures of surfaces. The technique presented herein minimizes calibration procedures and permits wide variation in emulsion developing techniques. The present work indicates that the lower limit of applicability is approximately 1,400 F when conventional cameras, emulsions, and moderate exposures are used. The upper limit is determined by the calibration technique and the accuracy required.
NASA Technical Reports Server (NTRS)
2008-01-01
Commodity-free calibration is a reaction rate calibration technique that does not require the addition of any commodities. This technique is a specific form of the reaction rate technique, in which all of the necessary reactants, other than the sample being analyzed, are either inherent in the analyzing system or specifically added or provided to the system for a reason other than calibration. After introduction, the component of interest is exposed to other reactants or flow paths already present in the system. The instrument detector records one of the following to determine the rate of reaction: the increase in the response of the reaction product, a decrease in the signal of the analyte response, or a decrease in the signal from the inherent reactant. With these data, the initial concentration of the analyte is calculated. This type of system can analyze and calibrate simultaneously, reduce the risk of false positives and exposure to toxic vapors, and improve accuracy. Moreover, having an excess of the reactant already present in the system eliminates the need to add commodities, which further reduces cost, logistics problems, and potential contamination. Also, the calculations involved can be simplified compared to those of the reaction rate technique. We conducted tests with hypergols as an initial investigation into the feasibility of the technique.
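A toy version of the reaction-rate idea can be sketched as follows, assuming hypothetical first-order kinetics with made-up gain and rate-constant values; the actual hypergol chemistry tested in the paper is not modeled here.

```python
import numpy as np

# Hypothetical first-order kinetics: the product signal grows as
#   S(t) = G * C0 * (1 - exp(-k*t)),
# where the rate constant k is fixed by a reactant already present in
# the system, so the initial analyte concentration C0 can be recovered
# without injecting any calibration commodity.
G, k, C0_true = 5.0, 0.8, 2.5          # assumed gain, rate, concentration
t = np.linspace(0.0, 6.0, 30)
S = G * C0_true * (1.0 - np.exp(-k * t))   # simulated detector record

# With G and k known from the system, estimate C0 by least squares.
basis = G * (1.0 - np.exp(-k * t))
C0_est = np.dot(basis, S) / np.dot(basis, basis)
```

Because the kinetics are fixed by the system itself, the fit simultaneously serves as analysis and calibration, which is the point of the commodity-free approach.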
Analysis of calibration materials to improve dual-energy CT scanning for petrophysical applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ayyalasomavaiula, K.; McIntyre, D.; Jain, J.
2011-01-01
Dual-energy CT scanning is a rapidly emerging imaging technique employed in non-destructive evaluation of various materials. Although CT (Computerized Tomography) has been used for characterizing rocks and for visualizing and quantifying multiphase flow through rocks for over 25 years, most scanning is done at a voltage setting above 100 kV to take advantage of the Compton scattering (CS) effect, which responds to density changes. Below 100 kV the photoelectric effect (PE) is dominant, which responds to the effective atomic number (Zeff), directly related to the photoelectric factor. Using the combination of the two effects helps in better characterization of reservoir rocks. The most common technique for dual-energy CT scanning relies on homogeneous calibration standards to produce the most accurate decoupled data. However, the use of calibration standards with impurities increases the probability of error in the reconstructed data and results in poor rock characterization. This work combines ICP-OES (inductively coupled plasma optical emission spectroscopy) and LIBS (laser-induced breakdown spectroscopy) analytical techniques to quantify the type and level of impurities in a set of commercially purchased calibration standards used in dual-energy scanning. Zeff values for the calibration standards, with and without the impurity data, were calculated using a weighted linear combination of the various elements present and used in calculating Zeff with the dual-energy technique. Results show a 2 to 5% difference in predicted Zeff values, which may affect the corresponding log calibrations. The effect that these techniques have on improving material identification data is discussed and analyzed. The workflow developed in this paper will translate into more accurate material-identification estimates for unknown samples and improve calibration of well logging tools.
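The weighted-combination step can be illustrated with the commonly used power-law mixture rule for effective atomic number, Zeff = (sum_i w_i * Z_i^m)^(1/m), with an exponent m of about 2.94 in the photoelectric regime. The bulk Zeff and the 3% iron impurity fraction below are made-up illustrative values, not the paper's measured data.

```python
# Power-law mixture rule for effective atomic number; the exponent
# ~2.94 is the value commonly used for the photoelectric regime.
M_EXP = 2.94

def z_eff(fractions_z):
    """fractions_z: list of (electron_fraction, Z) pairs summing to 1."""
    total = sum(w * z ** M_EXP for w, z in fractions_z)
    return total ** (1.0 / M_EXP)

pure_standard = [(1.0, 11.78)]                 # nominal bulk Zeff (assumed)
with_impurity = [(0.97, 11.78), (0.03, 26.0)]  # hypothetical 3% iron content

z_pure = z_eff(pure_standard)
z_impure = z_eff(with_impurity)
```

Because the exponent is large, even a small high-Z impurity pulls the computed Zeff noticeably upward, which is why uncharacterized impurities in the calibration standards degrade the decoupled dual-energy data.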
Full Flight Envelope Direct Thrust Measurement on a Supersonic Aircraft
NASA Technical Reports Server (NTRS)
Conners, Timothy R.; Sims, Robert L.
1998-01-01
Direct thrust measurement using strain gages offers advantages over analytically-based thrust calculation methods. For flight test applications, the direct measurement method typically uses a simpler sensor arrangement and minimal data processing compared to analytical techniques, which normally require costly engine modeling and multisensor arrangements throughout the engine. Conversely, direct thrust measurement has historically produced less than desirable accuracy because of difficulty in mounting and calibrating the strain gages and the inability to account for secondary forces that influence the thrust reading at the engine mounts. Consequently, the strain-gage technique has normally been used for simple engine arrangements and primarily in the subsonic speed range. This paper presents the results of a strain gage-based direct thrust-measurement technique developed by the NASA Dryden Flight Research Center and successfully applied to the full flight envelope of an F-15 aircraft powered by two F100-PW-229 turbofan engines. Measurements have been obtained at quasi-steady-state operating conditions at maximum non-augmented and maximum augmented power throughout the altitude range of the vehicle and to a maximum speed of Mach 2.0 and are compared against results from two analytically-based thrust calculation methods. The strain-gage installation and calibration processes are also described.
A nonlinear propagation model-based phase calibration technique for membrane hydrophones.
Cooling, Martin P; Humphrey, Victor F
2008-01-01
A technique for the phase calibration of membrane hydrophones in the frequency range up to 80 MHz is described. This is achieved by comparing measurements and numerical simulation of a nonlinearly distorted test field. The field prediction is obtained using a finite-difference model that solves the nonlinear Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation in the frequency domain. The measurements are made in the far field of a 3.5 MHz focusing circular transducer, in which it is demonstrated that, for the high drive level used, spatial averaging effects due to the hydrophone's finite receive area are negligible. The method provides a phase calibration of the hydrophone under test without the need for a device serving as a phase response reference, but it requires prior knowledge of the amplitude sensitivity at the fundamental frequency. The technique is demonstrated using a 50-μm thick bilaminar membrane hydrophone, for which the results obtained show functional agreement with predictions of a hydrophone response model. Further validation of the results is obtained by applying the response to the measurement of the high-amplitude waveforms generated by a modern biomedical ultrasonic imaging system. It is demonstrated that full deconvolution of the calculated complex frequency response of a nonideal hydrophone results in physically realistic measurements of the transmitted waveforms.
Gao, Zhiyuan; Yang, Congjie; Xu, Jiangtao; Nie, Kaiming
2015-11-06
This paper presents a dynamic range (DR) enhanced readout technique with a two-step time-to-digital converter (TDC) for high-speed linear CMOS image sensors. A multi-capacitor, self-regulated capacitive trans-impedance amplifier (CTIA) structure is employed to extend the dynamic range. The gain of the CTIA is auto-adjusted by switching different capacitors onto the integration node asynchronously according to the output voltage. A column-parallel ADC based on a two-step TDC is utilized to improve the conversion rate. The conversion is divided into a coarse phase and a fine phase. An error calibration scheme is also proposed to correct quantization errors caused by propagation delay skew within -Tclk to +Tclk. A linear CMOS image sensor pixel array is designed in a 0.13 μm CMOS process to verify this DR-enhanced high-speed readout technique. Post-simulation results indicate that the dynamic range of the readout circuit is 99.02 dB and the ADC achieves 60.22 dB SNDR and 9.71-bit ENOB at a conversion rate of 2 MS/s after calibration, a 14.04 dB and 2.4-bit improvement over the SNDR and ENOB without calibration.
GNSS VTEC calibration using satellite altimetry and LEO data
NASA Astrophysics Data System (ADS)
Alizadeh, M. Mahdi; Schuh, Harald
2015-04-01
Among the different systems remote sensing the ionosphere, space geodetic techniques have turned into a promising tool for monitoring and modeling ionospheric parameters. Because the ionosphere is a dispersive medium, signals travelling through it provide information about the parameters of the ionosphere in terms of Total Electron Content (TEC) or electron density along the ray path. The classical input data for development of Global Ionosphere Maps (GIM) of the Vertical Total Electron Content (VTEC) are obtained from dual-frequency Global Navigation Satellite Systems (GNSS) ground-based observations. Nevertheless, because GNSS ground stations are inhomogeneously distributed, with poor coverage over the oceans (notably the southern Pacific and southern Atlantic) and parts of Africa, the precision of VTEC maps is rather low in these areas. From long-term analyses it is believed that the International GNSS Service (IGS) VTEC maps have an accuracy of 1-2 TECU in areas well covered with GNSS receivers; conversely, in areas with poor coverage the accuracy can be degraded by a factor of up to five. On the other hand, dual-frequency satellite altimetry missions (such as Jason-1&2) provide direct VTEC values exactly over the oceans, and Low Earth Orbiting (LEO) satellites such as Formosat-3/COSMIC (F/C) provide a large number of globally distributed occultation measurements per day, which can be used to obtain VTEC values. Combining these data with the ground-based data improves the accuracy and reliability of the VTEC maps by closing observation gaps that arise when using ground-based data only. In this approach an essential step is the evaluation and calibration of the different data sources used for the combination procedure.
This study investigates the compatibility of calibrated TEC observables derived from GNSS dual-frequency data, recorded at global ground-based station networks, with space-based TEC values from satellite altimetry and F/C observations. In the current procedure the ground-based GNSS observations have been used to develop a GNSS-only GIM, using the parameter estimation technique. The VTEC values extracted from these models have been quantified and calibrated with the raw altimetry and LEO measurements. The calibrated values have been consequently used for developing the combined GIMs of the VTEC.
An Investigation of Acoustic Cavitation Produced by Pulsed Ultrasound
1987-12-01
Naval Postgraduate School thesis (Monterey, California) by Robert L. Bruce, Jr. Only fragments of the abstract survive extraction; the recoverable content includes a section of PVDF hydrophone sensitivity calibration curves and a description of the test and calibration technique noting that the reciprocity technique was chosen for hydrophone calibration.
Uncertainty Analysis of Instrument Calibration and Application
NASA Technical Reports Server (NTRS)
Tripp, John S.; Tcheng, Ping
1999-01-01
Experimental aerodynamic researchers require estimated precision and bias uncertainties of measured physical quantities, typically at 95 percent confidence levels. Uncertainties of final computed aerodynamic parameters are obtained by propagation of individual measurement uncertainties through the defining functional expressions. In this paper, rigorous mathematical techniques are extended to determine precision and bias uncertainties of any instrument-sensor system. Through this analysis, instrument uncertainties determined through calibration are now expressed as functions of the corresponding measurement for linear and nonlinear univariate and multivariate processes. Treatment of correlated measurement precision error is developed. During laboratory calibration, calibration standard uncertainties are assumed to be an order of magnitude less than those of the instrument being calibrated; often calibration standards do not satisfy this assumption. This paper applies rigorous statistical methods for inclusion of calibration standard uncertainty and covariance due to the order of their application. The effects of mathematical modeling error on calibration bias uncertainty are quantified. The effects of experimental design on uncertainty are analyzed. The importance of replication is emphasized, and techniques for estimation of both bias and precision uncertainties using replication are developed. Statistical tests for stationarity of calibration parameters over time are obtained.
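The propagation step described above can be sketched with first-order (Jacobian-based) uncertainty propagation, u_y^2 = J Σ J^T, which also accommodates the correlated-error case through off-diagonal covariance terms. The dynamic-pressure example and its uncertainty values are illustrative, not drawn from the paper.

```python
import numpy as np

def propagate(jacobian, cov):
    """First-order propagation: variance of y = f(x) is J @ Cov @ J.T."""
    j = np.atleast_2d(jacobian)
    return (j @ cov @ j.T).item()

# Example: dynamic pressure q = 0.5 * rho * v**2 at rho = 1.2, v = 50.
rho, v = 1.2, 50.0
J = np.array([0.5 * v**2, rho * v])      # [dq/drho, dq/dv]
cov = np.array([[0.01**2, 0.0],          # u_rho = 0.01, u_v = 0.5,
                [0.0,     0.5**2]])      # assumed uncorrelated here
u_q = np.sqrt(propagate(J, cov))
```

Correlated measurement error enters simply by populating the off-diagonal entries of `cov`, which is where the ordering effects of sequentially applied calibration standards show up.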
Improved dewpoint-probe calibration
NASA Technical Reports Server (NTRS)
Stephenson, J. G.; Theodore, E. A.
1978-01-01
Relatively-simple pressure-control apparatus calibrates dewpoint probes considerably faster than conventional methods, with no loss of accuracy. Technique requires only pressure measurement at each calibration point and single absolute-humidity measurement at beginning of run. Several probes can be calibrated simultaneously and points can be checked above room temperature.
NASA Astrophysics Data System (ADS)
Lu, Zenghai; Matcher, Stephen J.
2013-03-01
We report on a new calibration technique that permits accurate extraction of the sample Jones matrix, and hence the fast-axis orientation, using fiber-based polarization-sensitive optical coherence tomography (PS-OCT) built entirely from non-polarization-maintaining fiber such as SMF-28. In this technique, two quarter waveplates are used to completely specify the parameters of the system fibers in the sample arm so that the Jones matrix of the sample can be determined directly. The device was validated by measurements of a quarter waveplate and an equine tendon sample with a single-mode-fiber-based swept-source PS-OCT system.
Smith, Allan W.; Lorentz, Steven R.; Stone, Thomas C.; Datla, Raju V.
2012-01-01
The need to understand and monitor climate change has led to proposed radiometric accuracy requirements for space-based remote sensing instruments that are very stringent and currently outside the capabilities of many Earth orbiting instruments. A major problem is quantifying changes in sensor performance that occur from launch and during the mission. To address this problem on-orbit calibrators and monitors have been developed, but they too can suffer changes from launch and the harsh space environment. One solution is to use the Moon as a calibration reference source. Already the Moon has been used to remove post-launch drift and to cross-calibrate different instruments, but further work is needed to develop a new model with low absolute uncertainties capable of climate-quality absolute calibration of Earth observing instruments on orbit. To this end, we are proposing an Earth-based instrument suite to measure the absolute lunar spectral irradiance to an uncertainty of 0.5 % (k=1) over the spectral range from 320 nm to 2500 nm with a spectral resolution of approximately 0.3 %. Absolute measurements of lunar radiance will also be acquired to facilitate calibration of high spatial resolution sensors. The instruments will be deployed at high elevation astronomical observatories and flown on high-altitude balloons in order to mitigate the effects of the Earth’s atmosphere on the lunar observations. Periodic calibrations using instrumentation and techniques available from NIST will ensure traceability to the International System of Units (SI) and low absolute radiometric uncertainties. PMID:26900523
NASA Technical Reports Server (NTRS)
Rothermel, Jeffry; Chambers, Diana M.; Jarzembski, Maurice A.; Srivastava, Vandana; Bowdle, David A.; Jones, William D.
1996-01-01
Two continuous-wave (CW) focused CO2 Doppler lidars (9.1 and 10.6 micrometers) were developed for airborne in situ aerosol backscatter measurements. The complex process of reliably calibrating these systems, with different signal processors, for accurate derivation of atmospheric backscatter coefficients is documented. Lidar calibration for absolute backscatter measurement for both lidars is based on range response over the lidar sample volume, not solely at focus. Both lidars were calibrated with a new technique using well-characterized aerosols as radiometric standard targets and related to conventional hard-target calibration. A digital signal processor (DSP), a surface-acoustic-wave spectrum analyzer, and a manually tuned spectrum analyzer were used as signal analyzers. The DSP signals were analyzed with an innovative method of correcting for systematic noise fluctuation; the noise statistics exhibit the chi-square distribution predicted by theory. System parametric studies and detailed calibration improved the accuracy of conversion from the measured signal-to-noise ratio to absolute backscatter. The minimum backscatter sensitivity is approximately 3 x 10(exp -12)/m/sr at 9.1 micrometers and approximately 9 x 10(exp -12)/m/sr at 10.6 micrometers. Sample measurements are shown for a flight over the remote Pacific Ocean in 1990 as part of the NASA Global Backscatter Experiment (GLOBE) survey missions, the first time to our knowledge that 9.1-10.6 micrometer lidar intercomparisons were made. Measurements at 9.1 micrometers, a potential wavelength for space-based lidar remote-sensing applications, are to our knowledge the first based on the rare-isotope 12C(18)O2 gas.
NASA Astrophysics Data System (ADS)
Ruiz, C. L.; Chandler, G. A.; Cooper, G. W.; Fehl, D. L.; Hahn, K. D.; Leeper, R. J.; McWatters, B. R.; Nelson, A. J.; Smelser, R. M.; Snow, C. S.; Torres, J. A.
2012-10-01
The 350-keV Cockcroft-Walton accelerator at Sandia National Laboratories' Ion Beam facility is being used to absolutely calibrate a total DT neutron yield diagnostic based on the 63Cu(n,2n)62Cu(β+) reaction. These investigations have led to first-order uncertainties approaching 5% or better. The experiments employ the associated-particle technique. Deuterons at 175 keV impinge on a 2.6 μm thick erbium tritide target, producing 14.1 MeV neutrons from the T(d,n)4He reaction. The emitted alpha particles are measured at two angles relative to the beam direction and used to infer the neutron flux on a copper sample. The induced 62Cu activity is then measured and related to the neutron flux. This method is known as the F-factor technique. A description of the associated-particle method, the copper sample geometries employed, and the present estimates of the uncertainties in the F-factor are given.
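The essential accounting behind the associated-particle idea, that each T(d,n)4He reaction emits exactly one neutron and one alpha, can be sketched as below. This is a deliberately simplified version assuming isotropic emission and a small-angle detector; the actual F-factor analysis measures alphas at two angles and applies further kinematic and geometric corrections. All detector numbers are illustrative.

```python
import math

# One alpha per neutron, so the total neutron yield follows from the
# alpha count scaled by the inverse of the alpha detector's
# solid-angle fraction (isotropy assumed for simplicity).
n_alpha = 1.2e5                  # alphas counted (illustrative)
r, dist = 0.5e-2, 50.0e-2        # detector radius and distance (m)

omega = math.pi * r**2 / dist**2            # small-angle solid angle (sr)
yield_n = n_alpha * 4.0 * math.pi / omega   # inferred total neutron yield
```

Dividing the inferred yield by the copper sample's subtended solid-angle fraction then gives the neutron fluence used to interpret the induced 62Cu activity.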
Bortolussi, Silva; Ciani, Laura; Postuma, Ian; Protti, Nicoletta; Reversi, Luca; Bruschi, Piero; Ferrari, Cinzia; Cansolino, Laura; Panza, Luigi; Ristori, Sandra; Altieri, Saverio
2014-06-01
The possibility to measure boron concentration with high precision in tissues that will be irradiated represents a fundamental step for a safe and effective BNCT treatment. In Pavia, two techniques have been used for this purpose: a quantitative method based on charged-particle spectrometry and boron biodistribution imaging based on neutron autoradiography. A quantitative method to determine boron concentration by neutron autoradiography has recently been set up and calibrated for the measurement of biological samples, both solid and liquid, in the frame of the feasibility study of BNCT. This technique was calibrated and the obtained results were cross-checked against those of α spectrometry in order to validate them. The comparisons were performed using tissues taken from animals treated with different boron administration protocols. Subsequently, quantitative neutron autoradiography was employed to measure osteosarcoma cell samples treated with BPA and with new boronated formulations. © 2013 Published by Elsevier Ltd.
Magnetically launched flyer plate technique for probing electrical conductivity of compressed copper
NASA Astrophysics Data System (ADS)
Cochrane, K. R.; Lemke, R. W.; Riford, Z.; Carpenter, J. H.
2016-03-01
The electrical conductivity of materials under extremes of temperature and pressure is of crucial importance for a wide variety of phenomena, including planetary modeling, inertial confinement fusion, and pulsed-power-based dynamic materials experiments. There is a dearth of experimental techniques and data for highly compressed materials, even at known states such as along the principal isentrope and Hugoniot, where many pulsed power experiments occur. We present a method for developing, calibrating, and validating material conductivity models as used in magnetohydrodynamic (MHD) simulations. The difficulty in calibrating a conductivity model is in knowing where the model should be modified. Our method isolates those regions that will have an impact. It also quantitatively prioritizes which regions will have the most beneficial impact. Finally, it tracks the quantitative improvements to the conductivity model during each incremental adjustment. In this paper, we use an experiment on the Sandia National Laboratories Z machine to isentropically launch multiple flyer plates and, with the MHD code ALEGRA and the optimization code DAKOTA, calibrate the conductivity such that we match an experimental figure of merit to ±1%.
Umchid, S.; Gopinath, R.; Srinivasan, K.; Lewin, P. A.; Daryoush, A. S.; Bansal, L.; El-Sherif, M.
2009-01-01
The primary objective of this work was to develop and optimize the calibration techniques for ultrasonic hydrophone probes used in acoustic field measurements up to 100 MHz. A dependable, 100 MHz calibration method was necessary to examine the behavior of a sub-millimeter spatial resolution fiber optic (FO) sensor and assess the need for such a sensor as an alternative tool for high frequency characterization of ultrasound fields. Also, it was of interest to investigate the feasibility of using FO probes in high intensity fields such as those employed in HIFU (High Intensity Focused Ultrasound) applications. In addition to the development and validation of a novel, 100 MHz calibration technique the innovative elements of this research include implementation and testing of a prototype FO sensor with an active diameter of about 10 μm that exhibits uniform sensitivity over the considered frequency range and does not require any spatial averaging corrections up to about 75 MHz. The results of the calibration measurements are presented and it is shown that the optimized calibration technique allows the sensitivity of the hydrophone probes to be determined as a virtually continuous function of frequency and is also well suited to verify the uniformity of the FO sensor frequency response. As anticipated, the overall uncertainty of the calibration was dependent on frequency and determined to be about ±12% (±1 dB) up to 40 MHz, ±20% (±1.5 dB) from 40 to 60 MHz and ±25% (±2 dB) from 60 to 100 MHz. The outcome of this research indicates that once fully developed and calibrated, the combined acousto-optic system will constitute a universal reference tool in the wide, 100 MHz bandwidth. PMID:19110289
Assessment of MODIS On-Orbit Calibration Using a Deep Convective Cloud Technique
NASA Technical Reports Server (NTRS)
Mu, Qiaozhen; Wu, Aisheng; Chang, Tiejun; Angal, Amit; Link, Daniel; Xiong, Xiaoxiong; Doelling, David R.; Bhatt, Rajendra
2016-01-01
The MODerate Resolution Imaging Spectroradiometer (MODIS) sensors onboard the Terra and Aqua satellites are calibrated on-orbit with a solar diffuser (SD) for the reflective solar bands (RSB). The MODIS sensors are operating beyond their designed lifetime and hence present a major challenge to maintain calibration accuracy. The degradation of the onboard SD is tracked by a solar diffuser stability monitor (SDSM) over a wavelength range from 0.41 to 0.94 micrometers. Therefore, any degradation of the SD beyond 0.94 micrometers cannot be captured by the SDSM. The uncharacterized degradation at wavelengths beyond this limit could adversely affect the Level 1B (L1B) product. To reduce the calibration uncertainties caused by the SD degradation, invariant Earth-scene targets are used to monitor and calibrate the MODIS L1B product. The use of deep convective clouds (DCCs) is one such method and is particularly significant for the short-wave infrared (SWIR) bands in assessing their long-term calibration stability. In this study, we use the DCC technique to assess the performance of the Terra and Aqua MODIS Collection-6 L1B for RSB 1, 3-7, and 26, with spectral coverage from 0.47 to 2.13 micrometers. Results show relatively stable trends in Terra and Aqua MODIS reflectance for most bands. Careful attention needs to be paid to Aqua band 1 and Terra bands 3 and 26, as their trends are larger than 1% during the study period. We also check the feasibility of using the DCC technique to assess the stability of MODIS bands 17-19. The assessment test on response versus scan angle (RVS) calibration shows a substantial trend difference for Aqua band 1 between different angles of incidence (AOIs). The DCC technique can be used to improve the RVS calibration in the future.
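The core of the DCC method, selecting very cold pixels as near-invariant bright targets and trending the mode of their reflectance distribution, can be sketched on synthetic data. The 205 K brightness-temperature threshold is a commonly used DCC selection value; all data below are simulated, not MODIS measurements.

```python
import numpy as np

# Synthetic scene: brightness temperatures and band reflectances.
rng = np.random.default_rng(1)
bt = rng.uniform(180.0, 300.0, 10000)           # 11-um brightness temps (K)
refl = 0.9 + 0.05 * rng.standard_normal(10000)  # band reflectance

# DCC selection: pixels colder than the threshold.
dcc = refl[bt < 205.0]

# Track the mode of the DCC reflectance histogram (robust against
# outliers, unlike the mean) as the calibration-stability metric.
hist, edges = np.histogram(dcc, bins=50)
imax = int(np.argmax(hist))
mode = 0.5 * (edges[imax] + edges[imax + 1])
```

Repeating this monthly and plotting the mode against time reveals uncompensated sensor drift; a stable instrument yields a flat trend.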
NASA Astrophysics Data System (ADS)
Kleshnin, Mikhail; Orlova, Anna; Kirillin, Mikhail; Golubiatnikov, German; Turchin, Ilya
2017-07-01
A new approach to optically measuring blood oxygen saturation was developed and implemented. This technique is based on an original three-stage algorithm for reconstructing the relative concentrations of biological chromophores (hemoglobin, water, lipids) from the measured spectra of diffusely scattered light at different distances from the probing radiation source. Numerical experiments and validation of the proposed technique on a biological phantom have shown high reconstruction accuracy and the possibility of correctly calculating hemoglobin oxygenation in the presence of additive noise and calibration errors. The obtained results of animal studies agree with previously published results of other research groups and demonstrate the possibility of applying the developed technique to monitor oxygen saturation in tumor tissue.
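The final oxygenation step reduces to unmixing chromophore concentrations and forming a ratio, StO2 = [HbO2] / ([HbO2] + [Hb]). A two-wavelength sketch is shown below; the extinction-coefficient matrix and concentrations are placeholder values, not tabulated hemoglobin spectra, and the paper's full three-stage multi-distance reconstruction is not reproduced.

```python
import numpy as np

# Placeholder extinction coefficients: rows are wavelengths,
# columns are chromophores [Hb, HbO2].
eps = np.array([[0.8, 3.0],
                [3.2, 0.7]])

# Synthetic absorption at the two wavelengths for assumed
# concentrations [Hb] = 0.4, [HbO2] = 1.6 (arbitrary units).
mu_a = eps @ np.array([0.4, 1.6])

# Unmix chromophore concentrations, then form the saturation ratio.
hb, hbo2 = np.linalg.solve(eps, mu_a)
sto2 = hbo2 / (hb + hbo2)
```

Choosing wavelengths where the two columns of `eps` differ strongly keeps the linear system well conditioned, which is what makes the reconstruction robust to the additive noise and calibration errors mentioned above.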
NASA Technical Reports Server (NTRS)
Murphy, J.; Butlin, T.; Duff, P.; Fitzgerald, A.
1984-01-01
Observations of raw image data, raw radiometric calibration data, and background measurements extracted from the raw data streams on high-density tape reveal major shortcomings in a technique proposed by the Canada Centre for Remote Sensing in 1982 for the radiometric correction of TM data. Results are presented that correlate measurements of the DC background with variations in both image data background and calibration samples. The effect on both raw data and data corrected using the earlier proposed technique is explained, and the correction required for these factors, as a function of individual scan line number for each detector, is described. How the revised technique can be incorporated into an operational environment is demonstrated.
NASA Technical Reports Server (NTRS)
Refaat, Tamer F.; Singh, Upendra N.; Petros, Mulugeta; Remus, Ruben; Yu, Jirong
2015-01-01
Double-pulsed 2-micron integrated path differential absorption (IPDA) lidar is well suited for atmospheric CO2 remote sensing. The IPDA lidar technique relies on wavelength differentiation between strong and weak absorbing features of the gas normalized to the transmitted energy. In the double-pulse case, each shot of the transmitter produces two successive laser pulses separated by a short interval. Calibration of the transmitted pulse energies is required for accurate CO2 measurement. Design and calibration of a 2-micron double-pulse laser energy monitor is presented. The design is based on an InGaAs pin quantum detector. A high-speed photo-electromagnetic quantum detector was used for laser-pulse profile verification. Both quantum detectors were calibrated using a reference pyroelectric thermal detector. Calibration included comparing the three detection technologies in the single-pulsed mode, then comparing the quantum detectors in the double-pulsed mode. In addition, a self-calibration feature of the 2-micron IPDA lidar is presented. This feature allows one to monitor the transmitted laser energy, through residual scattering, with a single detection channel. This reduces the CO2 measurement uncertainty. IPDA lidar ground validation for CO2 measurement is presented for both calibrated energy monitor and self-calibration options. The calibrated energy monitor resulted in a lower CO2 measurement bias, while self-calibration resulted in a better CO2 temporal profiling when compared to the in situ sensor.
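As a sketch of why the transmitted-energy calibration matters, the IPDA retrieval normalizes each received return to its transmitted pulse energy before taking the on/off-line ratio. A minimal, assumed formulation (not the instrument's actual processing; all numbers are invented):

```python
import math

# Illustrative sketch of a double-pulse IPDA column measurement: the
# differential absorption optical depth (DAOD) is half the log of the
# energy-normalized off-line/on-line return ratio. Any bias in the
# transmitted-energy monitor enters this ratio directly.

def differential_optical_depth(E_on_tx, E_off_tx, P_on_rx, P_off_rx):
    """One-way DAOD from transmitted pulse energies (E) and received
    returns (P) at the on-line and off-line wavelengths."""
    return 0.5 * math.log((P_off_rx / E_off_tx) / (P_on_rx / E_on_tx))

# Hypothetical pulse energies (mJ) and received returns (arbitrary units):
daod = differential_optical_depth(E_on_tx=90.0, E_off_tx=100.0,
                                  P_on_rx=0.36, P_off_rx=1.0)
print(round(daod, 3))  # 0.458
```

The CO2 dry-air mixing ratio would then follow from the DAOD and the weighting function along the path; that step is omitted here.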
Optical Measurement Technique for Space Column Characterization
NASA Technical Reports Server (NTRS)
Barrows, Danny A.; Watson, Judith J.; Burner, Alpheus W.; Phelps, James E.
2004-01-01
A simple optical technique for the structural characterization of lightweight space columns is presented. The technique is useful for determining the coefficient of thermal expansion during cool down as well as the induced strain during tension and compression testing. The technique is based upon object-to-image plane scaling and does not require any photogrammetric calibrations or computations. Examples of the measurement of the coefficient of thermal expansion are presented for several lightweight space columns. Examples of strain measured during tension and compression testing are presented along with comparisons to results obtained with Linear Variable Differential Transformer (LVDT) position transducers.
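The object-to-image plane scaling idea reduces to a short calculation; in fact, for strain the scale factor cancels entirely, which is why no photogrammetric calibration is needed. A minimal sketch (pixel separations invented):

```python
# Illustrative sketch: strain of a space column from the change in image
# separation of two targets. The object-to-image scale factor divides out,
# so only pixel separations are needed. Numbers are hypothetical.

def strain_from_pixels(pixels_now, pixels_ref):
    """Engineering strain from target-pair separations in the image plane."""
    return (pixels_now - pixels_ref) / pixels_ref

# Target pair imaged at 2000.0 px before loading, 2001.0 px under tension:
print(strain_from_pixels(2001.0, 2000.0))  # 0.0005
```

The same ratio applied to separations measured before and after cool-down, divided by the temperature change, would give the coefficient of thermal expansion.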
Videogrammetric Model Deformation Measurement Technique
NASA Technical Reports Server (NTRS)
Burner, A. W.; Liu, Tian-Shu
2001-01-01
The theory, methods, and applications of the videogrammetric model deformation (VMD) measurement technique used at NASA for wind tunnel testing are presented. The VMD technique, based on non-topographic photogrammetry, can determine static and dynamic aeroelastic deformation and attitude of a wind-tunnel model. Hardware of the system includes a video-rate CCD camera, a computer with an image acquisition frame grabber board, illumination lights, and retroreflective or painted targets on a wind tunnel model. Custom software includes routines for image acquisition, target-tracking/identification, target centroid calculation, camera calibration, and deformation calculations. Applications of the VMD technique at five large NASA wind tunnels are discussed.
A new technique for measuring gas conversion factors for hydrocarbon mass flowmeters
NASA Technical Reports Server (NTRS)
Singh, J. J.; Sprinkle, D. R.
1983-01-01
A technique for measuring calibration conversion factors for hydrocarbon mass flowmeters was developed and applied to a widely used type of commercial thermal mass flowmeter for hydrocarbon gases. The conversion factors for two common hydrocarbons measured using this technique are in good agreement with the empirical values cited by the manufacturer, and similar agreement can be expected for other hydrocarbons. The technique is based on the Nernst theorem: the partial pressure of oxygen in the combustion product gases is matched with that in normal air. It is simple, quick, and relatively safe, particularly for toxic hydrocarbons.
NASA Astrophysics Data System (ADS)
Mohanty, B.; Jena, S.; Panda, R. K.
2016-12-01
Overexploitation of groundwater has led to the abandonment of several shallow tube wells in the study basin in Eastern India. For the sustainability of groundwater resources, basin-scale modelling of groundwater flow is indispensable for effective planning and management of water resources. The aim of this study was to develop a 3-D groundwater flow model of the study basin using the Visual MODFLOW Flex 2014.2 package and to calibrate and validate the model using 17 years of observed data. A sensitivity analysis was carried out to quantify the susceptibility of the aquifer system to river bank seepage, recharge from rainfall and agricultural practices, horizontal and vertical hydraulic conductivities, and specific yield. To quantify the impact of parameter uncertainties, the Sequential Uncertainty Fitting algorithm (SUFI-2) and Markov chain Monte Carlo (MCMC) techniques were implemented; results from the two techniques were compared and their advantages and disadvantages analysed. The Nash-Sutcliffe efficiency (NSE), coefficient of determination (R2), mean absolute error (MAE), mean percent deviation (Dv), and root mean squared error (RMSE) were adopted as model evaluation criteria. All five criteria were within acceptable ranges during both calibration and validation, and the MCMC technique provided more reasonable results than SUFI-2. The calibrated and validated model will be useful for identifying aquifer properties, analysing groundwater flow dynamics, and forecasting changes in groundwater levels.
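Two of the evaluation criteria listed above are straightforward to compute. A minimal sketch (observed and simulated groundwater levels are invented for demonstration):

```python
import math

# Illustrative sketch: Nash-Sutcliffe efficiency (NSE) and RMSE between
# observed and simulated groundwater levels. Values are hypothetical.

def nse(obs, sim):
    """NSE: 1 is a perfect fit; 0 means no better than the observed mean."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def rmse(obs, sim):
    """Root mean squared error, in the units of the observations."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

obs = [10.2, 9.8, 9.5, 9.9, 10.4]   # observed heads (m)
sim = [10.0, 9.9, 9.6, 9.8, 10.3]   # simulated heads (m)
print(round(nse(obs, sim), 3), round(rmse(obs, sim), 3))  # 0.837 0.126
```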
Proof of Concept for an Approach to a Finer Resolution Inventory
Chris J. Cieszewski; Kim Iles; Roger C. Lowe; Michal Zasada
2005-01-01
This report presents a proof of concept for a statistical framework to develop a timely, accurate, and unbiased fiber supply assessment in the State of Georgia, U.S.A. The proposed approach is based on using various data sources and modeling techniques to calibrate satellite image-based statewide stand lists, which provide initial estimates for a State inventory on a...
USDA-ARS?s Scientific Manuscript database
Directed soil sampling based on geospatial measurements of apparent soil electrical conductivity (ECa) is a potential means of characterizing the spatial variability of any soil property that influences ECa including soil salinity, water content, texture, bulk density, organic matter, and cation exc...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lai, Canhai; Xu, Zhijie; Pan, Wenxiao
2016-01-01
To quantify the predictive confidence of a solid sorbent-based carbon capture design, a hierarchical validation methodology, consisting of basic unit problems of increasing physical complexity coupled with filtered model-based geometric upscaling, has been developed and implemented. This paper describes the computational fluid dynamics (CFD) multi-phase reactive flow simulations and the associated data flows among the different unit problems performed within this hierarchical validation approach. The bench-top experiments used in this calibration and validation effort were carefully designed to follow the desired simple-to-complex unit problem hierarchy, with corresponding data acquisition to support model parameter calibration at each unit problem level. A Bayesian calibration procedure is employed, and the posterior model parameter distributions obtained at one unit-problem level are used as prior distributions for the same parameters in the next-tier simulations. Overall, the results demonstrate that the multiphase reactive flow models within MFIX can capture the bed pressure, temperature, CO2 capture capacity, and kinetics with quantitative accuracy. The CFD modeling methodology and associated uncertainty quantification techniques presented herein offer a solid framework for estimating the predictive confidence in the virtual scale-up of a larger carbon capture device.
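The posterior-as-prior chaining described above can be illustrated with a toy conjugate-normal model. This is only a sketch of the idea; the actual work performs a full Bayesian calibration over CFD model parameters, and all numbers here are invented:

```python
# Illustrative sketch: the posterior for a parameter at one unit-problem
# level becomes the prior at the next tier. A conjugate-normal update keeps
# the arithmetic in closed form. All values are hypothetical.

def normal_update(prior_mean, prior_var, obs_mean, obs_var):
    """Conjugate update of a normal prior with one normal observation."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + obs_mean / obs_var)
    return post_mean, post_var

# Tier 1 (basic unit problem) calibrates a kinetic rate parameter:
m1, v1 = normal_update(prior_mean=1.0, prior_var=0.5, obs_mean=1.4, obs_var=0.1)
# Tier 2 (more complex unit problem) reuses the tier-1 posterior as its prior:
m2, v2 = normal_update(m1, v1, obs_mean=1.3, obs_var=0.2)
```

Each tier tightens the parameter distribution (v2 < v1), which is what carries calibration information up the simple-to-complex hierarchy.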
Philip Ye, X; Liu, Lu; Hayes, Douglas; Womac, Alvin; Hong, Kunlun; Sokhansanj, Shahab
2008-10-01
The objectives of this research were to determine the variation of chemical composition across botanical fractions of corn stover, and to probe the potential of Fourier transform near-infrared (FT-NIR) techniques both for qualitatively classifying separated corn stover fractions and for quantitatively analyzing chemical composition, by developing calibration models that predict composition from FT-NIR spectra. The large compositional variation needed for wide calibration ranges, as required by a reliable calibration model, was achieved by manually separating the corn stover samples into six botanical fractions; their chemical compositions were determined by conventional wet chemical analyses, confirming that composition varies significantly among botanical fractions. In descending order of total saccharide content, the fractions are husk, sheath, pith, rind, leaf, and node. Based on FT-NIR spectra acquired from the biomass, Soft Independent Modeling of Class Analogy (SIMCA) was employed for qualitative classification of the fractions, and partial least squares (PLS) regression was used for quantitative compositional analysis. SIMCA successfully classified the botanical fractions of corn stover. The developed PLS model yielded root mean square errors of prediction (RMSEP, % w/w) of 0.92, 1.03, 0.17, 0.27, 0.21, 1.12, and 0.57 for glucan, xylan, galactan, arabinan, mannan, lignin, and ash, respectively. The results show the potential of FT-NIR techniques combined with multivariate analysis to help biomass feedstock suppliers, bioethanol manufacturers, and bio-power producers better manage bioenergy feedstocks and enhance bioconversion.
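The RMSEP figure of merit quoted for the PLS model is itself simple to compute once predictions are in hand. A minimal sketch (the reference and predicted glucan values below are invented, not the paper's data):

```python
import math

# Illustrative sketch: root mean square error of prediction (RMSEP) between
# wet-chemistry reference values and FT-NIR model predictions. Hypothetical.

def rmsep(reference, predicted):
    """RMSEP in the same units as the reference values (% w/w here)."""
    n = len(reference)
    return math.sqrt(sum((r - p) ** 2 for r, p in zip(reference, predicted)) / n)

glucan_ref = [36.1, 34.8, 30.2, 28.5, 37.4]   # % w/w, wet-chemistry values
glucan_nir = [35.4, 35.6, 29.5, 29.4, 36.6]   # % w/w, FT-NIR predictions
print(round(rmsep(glucan_ref, glucan_nir), 2))  # 0.78
```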
Microhotplate Temperature Sensor Calibration and BIST.
Afridi, M; Montgomery, C; Cooper-Balis, E; Semancik, S; Kreider, K G; Geist, J
2011-01-01
In this paper we describe a novel long-term microhotplate temperature sensor calibration technique suitable for Built-In Self Test (BIST). The microhotplate thermal resistance (thermal efficiency) and the thermal voltage from an integrated platinum-rhodium thermocouple were calibrated against a freshly calibrated four-wire polysilicon microhotplate-heater temperature sensor (heater), which is not stable over long periods when exposed to higher temperatures. To stress the microhotplate, its temperature was raised to around 400 °C and held there for days. The heater was then recalibrated as a temperature sensor, and microhotplate temperature measurements were made based on the fresh calibration of the heater, the first calibration of the heater, the microhotplate thermal resistance, and the thermocouple voltage. This procedure was repeated 10 times over a period of 80 days. The results show that the heater calibration drifted substantially during the test, while the microhotplate thermal resistance and the thermocouple voltage remained stable to within about ±1 °C over the same period. Therefore, the combination of a microhotplate heater-temperature sensor and either the microhotplate thermal resistance or an integrated thin-film platinum-rhodium thermocouple can provide a stable, calibrated microhotplate temperature sensor, and the combination of the three sensors is suitable for implementing BIST functionality. Alternatively, if a stable microhotplate-heater temperature sensor is available, such as a properly annealed platinum heater-temperature sensor, then the thermal resistance of the microhotplate and the electrical resistance of the platinum heater are sufficient to implement BIST.
It is also shown that aluminum- and polysilicon-based temperature sensors, which are not stable enough for measuring high microhotplate temperatures (>220 °C) without impractically frequent recalibration, can be used to measure the silicon substrate temperature if never exposed to temperatures above about 220 °C.
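A minimal sketch of the thermal-resistance route to a BIST temperature reading, assuming the common first-order relation T_hotplate = T_substrate + R_th x P_heater (this linear model and all numbers are assumptions for illustration, not the paper's calibration data):

```python
# Illustrative sketch: recovering the microhotplate temperature from the
# substrate temperature, the heater drive, and a stable thermal resistance
# R_th (degC per mW). All values are hypothetical.

def hotplate_temperature(t_substrate_c, heater_v, heater_i_ma, r_th_c_per_mw):
    """Microhotplate temperature via a first-order thermal model;
    V * mA gives the dissipated heater power in mW."""
    power_mw = heater_v * heater_i_ma
    return t_substrate_c + r_th_c_per_mw * power_mw

# Hypothetical values: 25 degC substrate, 4 V at 10 mA, R_th = 9 degC/mW:
print(hotplate_temperature(25.0, 4.0, 10.0, 9.0))  # 385.0
```

In the BIST scheme, the substrate temperature would come from a low-temperature sensor (such as the aluminum or polysilicon sensors mentioned above), while R_th is the quantity shown to stay stable over the 80-day test.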
Calibration and verification of thermographic cameras for geometric measurements
NASA Astrophysics Data System (ADS)
Lagüela, S.; González-Jorge, H.; Armesto, J.; Arias, P.
2011-03-01
Infrared thermography is a technique with an increasing degree of development and a growing range of applications. Quality assessment of measurements performed with thermal cameras should be achieved through metrological calibration and verification. Infrared cameras acquire both temperature and geometric information, yet calibration and verification procedures, typically using black bodies, are common only for the thermal data. The geometric information is important for many fields, such as architecture, civil engineering, and industry. This work presents a calibration procedure that enables photogrammetric restitution, together with a portable artefact to verify the geometric accuracy, repeatability, and drift of thermographic cameras; these results allow this information to be incorporated into companies' quality control processes. A grid based on burning lamps is used for the geometric calibration of the thermographic cameras. The artefact designed for geometric verification consists of five Delrin spheres and seven cubes of different sizes, with metrological traceability obtained from a coordinate measuring machine. Two sets of targets with different reflectivity are fixed to the spheres and cubes to make data processing and photogrammetric restitution possible; reflectivity was chosen as the distinguishing material property because both thermographic and visible cameras can detect it. Two thermographic cameras, from FLIR and NEC, and one visible camera from JAI are calibrated, verified, and compared using the calibration grids and the standard artefact. The calibration system based on burning lamps is shown to be capable of performing the internal orientation of the thermal cameras. Verification results show repeatability better than 1 mm in all cases, and better than 0.5 mm for the visible camera.
As expected, accuracy is also higher for the visible camera, and the geometric comparison between the thermographic cameras shows slightly better results for the NEC camera.
Information Retrieval from SAGE II and MFRSR Multi-Spectral Extinction Measurements
NASA Technical Reports Server (NTRS)
Lacis, Andrew A.; Hansen, James E. (Technical Monitor)
2001-01-01
Direct beam spectral extinction measurements of solar radiation contain important information on atmospheric composition in a form that is essentially free from multiple scattering contributions that otherwise tend to complicate the data analysis and information retrieval. Such direct beam extinction measurements are available from the solar occultation satellite-based measurements made by the Stratospheric Aerosol and Gas Experiment (SAGE II) instrument and by ground-based Multi-Filter Rotating Shadowband Radiometers (MFRSRs). The SAGE II data provide cross-sectional slices of the atmosphere twice per orbit at seven wavelengths between 385 and 1020 nm with approximately 1 km vertical resolution, while the MFRSR data provide atmospheric column measurements at six wavelengths between 415 and 940 nm at one-minute time intervals. We apply the same retrieval technique of simultaneous least-squares fitting to the observed spectral extinctions to retrieve aerosol optical depth, effective radius and variance, and ozone, nitrogen dioxide, and water vapor amounts from the SAGE II and MFRSR measurements. The retrieval technique utilizes a physical model approach based on laboratory measurements of ozone and nitrogen dioxide extinction, line-by-line and numerical k-distribution calculations for water vapor absorption, and Mie scattering constraints on aerosol spectral extinction properties. The SAGE II measurements have the advantage of being self-calibrating, in that deep space provides an effective zero point for the relative spectral extinctions. The MFRSR measurements require periodic clear-day Langley regression calibration events to maintain accurate knowledge of instrument calibration.
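The simultaneous least-squares fit to spectral extinctions can be illustrated with a stripped-down linear version: observed optical depths fitted as a combination of two known component spectra. This is far simpler than the physical-model retrieval described above, and the component shapes and amounts below are invented:

```python
# Illustrative sketch: a two-component linear least-squares fit of observed
# spectral optical depths via the 2x2 normal equations. The "aerosol" and
# "ozone" spectral shapes and the true amounts are hypothetical.

def two_component_fit(tau_obs, shape_a, shape_b):
    """Amounts (c_a, c_b) minimizing sum((tau - c_a*a - c_b*b)^2)."""
    saa = sum(a * a for a in shape_a)
    sbb = sum(b * b for b in shape_b)
    sab = sum(a * b for a, b in zip(shape_a, shape_b))
    say = sum(a * y for a, y in zip(shape_a, tau_obs))
    sby = sum(b * y for b, y in zip(shape_b, tau_obs))
    det = saa * sbb - sab * sab
    return (say * sbb - sby * sab) / det, (sby * saa - say * sab) / det

# Synthetic observation built from known amounts (0.2 aerosol, 0.3 ozone):
aerosol = [1.00, 0.80, 0.60, 0.45]   # relative extinction per channel
ozone   = [0.10, 0.60, 0.30, 0.05]
tau     = [0.2 * a + 0.3 * b for a, b in zip(aerosol, ozone)]
amt_a, amt_b = two_component_fit(tau, aerosol, ozone)
print(round(amt_a, 3), round(amt_b, 3))  # 0.2 0.3
```

The actual retrieval adds more species, nonlinear Mie-constrained aerosol parameters, and measurement weighting, but the normal-equations core is the same.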
Utilization of advanced calibration techniques in stochastic rock fall analysis of quarry slopes
NASA Astrophysics Data System (ADS)
Preh, Alexander; Ahmadabadi, Morteza; Kolenprat, Bernd
2016-04-01
In order to study rock fall dynamics, a research project was conducted by the Vienna University of Technology and the Austrian Central Labour Inspectorate (Federal Ministry of Labour, Social Affairs and Consumer Protection). The project included 277 full-scale drop tests at three quarries in Austria, recording the key parameters of the rock fall trajectories. The boulders ranged from 0.18 to 1.8 m in diameter and from 0.009 to 8.1 Mg in mass, and the site geology included strong rock of igneous, metamorphic, and volcanic types. In this paper the test results are used for calibration and validation of a new stochastic computer model. It is demonstrated that the error of the model (i.e. the difference between observed and simulated results) has a lognormal distribution. For two selected parameters, advanced calibration techniques, including Markov chain Monte Carlo, maximum likelihood, and root mean square error (RMSE) minimization, are utilized to minimize the error. Validation of the model based on cross-validation reveals that, in general, reasonable stochastic approximations of the rock fall trajectories are obtained in all dimensions, including runout, bounce heights, and velocities. The approximations are compared to the measured data in terms of median, 95th-percentile, and maximum values. The comparisons indicate that approximate first-order predictions, using a single set of input parameters, are possible and can aid practical hazard and risk assessment.
A New Online Calibration Method Based on Lord's Bias-Correction.
He, Yinhong; Chen, Ping; Li, Yong; Zhang, Shumei
2017-09-01
Online calibration techniques have been widely employed to calibrate new items due to their advantages. Method A is the simplest online calibration method and has recently attracted much attention from researchers. However, a key assumption of Method A is that it treats the person-parameter estimates θ̂_s (obtained by maximum likelihood estimation [MLE]) as their true values θ_s; the deviation of the estimates θ̂_s from their true values can therefore yield inaccurate item calibration when the deviation is nonignorable. To improve the performance of Method A, a new method, MLE-LBCI-Method A, is proposed. This new method combines a modified Lord's bias-correction method (maximum likelihood estimation-Lord's bias-correction with iteration [MLE-LBCI]) with the original Method A in an effort to correct the deviation of θ̂_s, which may adversely affect item calibration precision. Two simulation studies were carried out to explore the performance of both MLE-LBCI and MLE-LBCI-Method A under several scenarios. Simulation results showed that MLE-LBCI significantly improved the ML ability estimates, and MLE-LBCI-Method A outperformed Method A in almost all experimental conditions.
Daytime sky polarization calibration limitations
NASA Astrophysics Data System (ADS)
Harrington, David M.; Kuhn, Jeffrey R.; Ariste, Arturo López
2017-01-01
The daytime sky has recently been demonstrated as a useful calibration tool for deriving polarization cross-talk properties of large astronomical telescopes. The Daniel K. Inouye Solar Telescope and other large telescopes under construction can benefit from precise polarimetric calibration of large mirrors. Several atmospheric phenomena and instrumental errors potentially limit the technique's accuracy. At the 3.67-m AEOS telescope on Haleakala, we performed a large observing campaign with the HiVIS spectropolarimeter to identify limitations and develop algorithms for extracting consistent calibrations. Effective sampling of the telescope optical configurations and filtering of data for several derived parameters provide robustness to the derived Mueller matrix calibrations. Second-order scattering models of the sky show that this method is relatively insensitive to multiple-scattering in the sky, provided calibration observations are done in regions of high polarization degree. The technique is also insensitive to assumptions about telescope-induced polarization, provided the mirror coatings are highly reflective. Zemax-derived polarization models show agreement between the functional dependence of polarization predictions and the corresponding on-sky calibrations.
NASA Astrophysics Data System (ADS)
Tian, Jialin; Smith, William L.; Gazarik, Michael J.
2008-12-01
The ultimate remote sensing benefits of high-resolution infrared radiance spectrometers will be realized with their geostationary satellite implementation in the form of imaging spectrometers. This will enable dynamic features of the atmosphere's thermodynamic fields and of pollutant and greenhouse gas constituents to be observed, for revolutionary improvements in weather forecasts and more accurate air quality and climate predictions. As an important step toward this application objective, the Geostationary Imaging Fourier Transform Spectrometer (GIFTS) Engineering Demonstration Unit (EDU) was successfully developed under the NASA New Millennium Program, 2000-2006. The GIFTS-EDU instrument employs three focal plane arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. GIFTS calibration is achieved using internal blackbody references at ambient (260 K) and hot (286 K) temperatures. In this paper, we introduce a refined calibration technique that utilizes Principal Component (PC) analysis to compensate for instrument distortions and artifacts, thereby enhancing the absolute calibration accuracy. The method is applied to data collected during the GIFTS Ground Based Measurement (GBM) experiment, together with simultaneous observations by the accurately calibrated Atmospheric Emitted Radiance Interferometer (AERI), both zenith viewing the sky through the same external scene mirror at ten-minute intervals throughout a cloudless day at Logan, Utah, on September 13, 2006. The calibrated GIFTS radiances are produced using the first four PC scores in the GIFTS-AERI regression model. Temperature and moisture profiles retrieved from the PC-calibrated GIFTS radiances are verified against radiosonde measurements collected throughout the GIFTS sky measurement period.
Using the GIFTS GBM calibration model, we also compute calibrated radiances from data collected during the moon tracking and viewing experiments, from which we derive the lunar surface temperature and emissivity associated with the moon-viewing measurements.
Robot calibration with a photogrammetric on-line system using reseau scanning cameras
NASA Astrophysics Data System (ADS)
Diewald, Bernd; Godding, Robert; Henrich, Andreas
1994-03-01
Testing and calibration of industrial robots are becoming increasingly important for the manufacturers and users of such systems. Demanding applications involving off-line programming techniques, or the use of robots as measuring machines, are impossible without a preceding robot calibration. At the LPA an efficient calibration technique has been developed: instead of modeling the kinematic behavior of a robot, the new method describes the pose deviations within a user-defined section of the robot's working space. High-precision determination of the 3D coordinates of defined path positions is necessary for calibration and can be done by digital photogrammetric systems. For the calibration of a robot at the LPA, a digital photogrammetric system with three Rollei Reseau Scanning Cameras was used. This system allows automatic measurement of a large number of robot poses with high accuracy.
Spatial calibration of an optical see-through head mounted display
Gilson, Stuart J.; Fitzgibbon, Andrew W.; Glennerster, Andrew
2010-01-01
We present a method for calibrating an optical see-through Head Mounted Display (HMD) using techniques usually applied to camera calibration (photogrammetry). Using a camera placed inside the HMD to take pictures simultaneously of a tracked object and of features in the HMD display, we exploit established camera calibration techniques to recover both the intrinsic and extrinsic properties of the HMD (width, height, focal length, optic centre, and principal ray of the display). Our method gives low re-projection errors and, unlike existing methods, involves no time-consuming and error-prone human measurements, nor any prior estimates of the HMD geometry.
Light-Field Correction for Spatial Calibration of Optical See-Through Head-Mounted Displays.
Itoh, Yuta; Klinker, Gudrun
2015-04-01
A critical requirement for AR applications with Optical See-Through Head-Mounted Displays (OST-HMD) is to project 3D information correctly into the current viewpoint of the user, more particularly, according to the user's eye position. Recently proposed interaction-free calibration methods [16], [17] automatically estimate this projection by tracking the user's eye position, thereby freeing users from tedious manual calibrations. However, these methods are still prone to systematic calibration errors, which stem from eye- and HMD-related factors not represented in the conventional eye-HMD model used for HMD calibration. This paper investigates one of these factors: the optical elements of OST-HMDs distort incoming world-light rays before they reach the eye, just as corrective glasses do. Any OST-HMD requires an optical element to display a virtual screen, and each such element has its own distortions. Since users see a distorted world through the element, ignoring this distortion degrades the projection quality. We propose a light-field correction method, based on a machine learning technique, which compensates for the world-scene distortion caused by OST-HMD optics. We demonstrate that our method reduces the systematic error and significantly increases the calibration accuracy of interaction-free calibration.
Initial Radiometric Calibration of the AWiFS using Vicarious Calibration Techniques
NASA Technical Reports Server (NTRS)
Pagnutti, Mary; Thome, Kurtis; Aaron, David; Leigh, Larry
2006-01-01
NASA SSC maintains four ASD FieldSpec FR spectroradiometers, used as laboratory transfer radiometers and for ground surface reflectance measurements during V&V field collection activities. Radiometric calibration is performed against a NIST-calibrated integrating sphere, which serves as a source of known spectral radiance. Spectral calibration uses laser and pen-lamp illumination of the integrating sphere. Environmental testing includes temperature stability tests performed in an environmental chamber.
Design and characterization of a nano-Newton resolution thrust stand
NASA Astrophysics Data System (ADS)
Soni, J.; Roy, S.
2013-09-01
The paper describes the design, calibration, and characterization of a thrust stand capable of nano-Newton resolution. A low uncertainty calibration method is proposed and demonstrated. A passive eddy current based damper, which is non-contact and vacuum compatible, is employed. Signal analysis techniques are used to perform noise characterization, and potential sources are identified. Calibrated system noise floor suggests thrust measurement resolution of the order of 10 nN is feasible under laboratory conditions. Force measurement from this balance for a standard macroscale dielectric barrier discharge (DBD) plasma actuator is benchmarked with a commercial precision balance of 9.8 μN resolution and is found to be in good agreement. Published results of a microscale DBD plasma actuator force measurement and low pressure characterization of conventional plasma actuators are presented for completeness.
Planar temperature measurement in compressible flows using laser-induced iodine fluorescence
NASA Technical Reports Server (NTRS)
Hartfield, Roy J., Jr.; Hollo, Steven D.; Mcdaniel, James C.
1991-01-01
A laser-induced iodine fluorescence technique that is suitable for the planar measurement of temperature in cold nonreacting compressible air flows is investigated analytically and demonstrated in a known flow field. The technique is based on the temperature dependence of the broadband fluorescence from iodine excited by the 514-nm line of an argon-ion laser. Temperatures ranging from 165 to 245 K were measured in the calibration flow field. This technique makes complete, spatially resolved surveys of temperature practical in highly three-dimensional, low-temperature compressible flows.
Calibration, reconstruction, and rendering of cylindrical millimeter-wave image data
NASA Astrophysics Data System (ADS)
Sheen, David M.; Hall, Thomas E.
2011-05-01
Cylindrical millimeter-wave imaging systems and technology have been under development at the Pacific Northwest National Laboratory (PNNL) for several years. This technology has been commercialized, and systems are currently being deployed widely across the United States and internationally. These systems are effective at screening for concealed items of all types; however, new sensor designs, image reconstruction techniques, and image rendering algorithms could potentially improve performance. At PNNL, a number of specific techniques have been developed recently to improve cylindrical imaging methods including wideband techniques, combining data from full 360-degree scans, polarimetric imaging techniques, calibration methods, and 3-D data visualization techniques. Many of these techniques exploit the three-dimensionality of the cylindrical imaging technique by optimizing the depth resolution of the system and using this information to enhance detection. Other techniques, such as polarimetric methods, exploit scattering physics of the millimeter-wave interaction with concealed targets on the body. In this paper, calibration, reconstruction, and three-dimensional rendering techniques will be described that optimize the depth information in these images and the display of the images to the operator.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pickles, W.L.; McClure, J.W.; Howell, R.H.
1978-01-01
A sophisticated nonlinear multiparameter fitting program has been used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2%-accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the chi-squared matrix or from error relaxation techniques. It has been shown that non-dispersive x-ray fluorescence analysis of 0.1 to 1 mg of freeze-dried UNO3 can have an accuracy of 0.2% in 1000 sec.
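The VA02A routine itself is not generally available; the central idea, fitting the standard masses alongside the calibration-curve parameters while weighting by both the system (counting) errors and the mass errors, can be sketched with SciPy. A linear count-vs-mass response and all numerical values below are assumptions for illustration:

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic gravimetric standards: nominal masses (mg) with 0.2% uncertainty
m_nom = np.array([0.1, 0.25, 0.5, 0.75, 1.0])
sigma_m = 0.002 * m_nom
a_true, b_true = 5000.0, 30.0           # counts per mg, background (assumed)
sigma_c = 20.0                          # counting-statistics error (counts)
rng = np.random.default_rng(0)
counts = a_true * m_nom + b_true + rng.normal(0, sigma_c, m_nom.size)

def residuals(p):
    # p = [a, b, m_1..m_n]: the masses are fitted along with the curve parameters
    a, b = p[0], p[1]
    m_fit = p[2:]
    r_counts = (counts - (a * m_fit + b)) / sigma_c   # weighted by system error
    r_mass = (m_fit - m_nom) / sigma_m                # weighted by mass error
    return np.concatenate([r_counts, r_mass])

p0 = np.concatenate([[4000.0, 0.0], m_nom])
fit = least_squares(residuals, p0)
a_hat, b_hat = fit.x[0], fit.x[1]
```

Minimizing this combined residual vector is equivalent to the consistent chi-squared weighting the abstract describes: standards with well-known masses pull the curve strongly, poorly known ones less so.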
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pickles, W.L.; McClure, J.W.; Howell, R.H.
1978-05-01
A sophisticated nonlinear multiparameter fitting program was used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2%-accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the chi-squared matrix or from error relaxation techniques. It was shown that nondispersive XRFA of 0.1 to 1 mg of freeze-dried UNO3 can have an accuracy of 0.2% in 1000 s.
Determining Equilibrium Position For Acoustical Levitation
NASA Technical Reports Server (NTRS)
Barmatz, M. B.; Aveni, G.; Putterman, S.; Rudnick, J.
1989-01-01
Equilibrium position and orientation of acoustically-levitated weightless object determined by calibration technique on Earth. From calibration data, possible to calculate equilibrium position and orientation in presence of Earth gravitation. Sample not levitated acoustically during calibration. Technique relies on Boltzmann-Ehrenfest adiabatic-invariance principle. One converts resonant-frequency-shift data into data on normalized acoustical potential energy. Minimum of energy occurs at equilibrium point. From gradients of acoustical potential energy, one calculates acoustical restoring force or torque on objects as function of deviation from equilibrium position or orientation.
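The conversion from resonant-frequency-shift data to a normalized potential and a restoring force can be sketched numerically in one dimension. All data below are synthetic and the Boltzmann-Ehrenfest proportionality constant is left normalized:

```python
import numpy as np

# Hypothetical 1-D calibration scan: resonant-frequency shift recorded as the
# (unlevitated) sample is stepped along the chamber axis.
z = np.linspace(-10.0, 10.0, 201)              # position, mm
df_over_f = -1e-5 * (z / 8.0) ** 2             # synthetic shift data

# Boltzmann-Ehrenfest: normalized acoustic potential energy ~ -(delta f / f)
U = -df_over_f
z_eq = z[np.argmin(U)]                         # equilibrium at the potential minimum

# Restoring force (arbitrary units) from the gradient of the potential
F = -np.gradient(U, z)
```

For this synthetic well the minimum sits at z = 0, and the computed force is negative for positive displacements and positive for negative ones, i.e. restoring toward equilibrium, exactly the quantity the technique extracts from calibration data.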
Calibration techniques and strategies for the present and future LHC electromagnetic calorimeters
NASA Astrophysics Data System (ADS)
Aleksa, M.
2018-02-01
This document describes the different calibration strategies and techniques applied by the two general-purpose experiments at the LHC, ATLAS and CMS, and discusses them, underlining their respective strengths and weaknesses from the author's point of view. The resulting performance of both calorimeters is described and compared on the basis of selected physics results. Future upgrade plans for the High-Luminosity LHC (HL-LHC) are briefly introduced, and planned calibration strategies for the upgraded detectors are shown.
NASA Technical Reports Server (NTRS)
Prasad, C. B.; Prabhakaran, R.; Tompkins, S.
1987-01-01
The first step in the extension of the semidestructive hole-drilling technique for residual stress measurement to orthotropic composite materials is the determination of the three calibration constants. Attention is presently given to an experimental determination of these calibration constants for a highly orthotropic, unidirectionally-reinforced graphite fiber-reinforced polyimide composite. A comparison of the measured values with theoretically obtained ones shows agreement to be good, in view of the many possible sources of experimental variation.
A Review of Calibration Transfer Practices and Instrument Differences in Spectroscopy.
Workman, Jerome J
2018-03-01
Calibration transfer for use with spectroscopic instruments, particularly for near-infrared, infrared, and Raman analysis, has been the subject of multiple articles, research papers, book chapters, and technical reviews. There has been a myriad of approaches published and claims made for resolving the problems associated with transferring calibrations; however, the capability of attaining identical results over time from two or more instruments using an identical calibration still eludes technologists. Calibration transfer, in a precise definition, refers to a series of analytical approaches or chemometric techniques used to attempt to apply a single spectral database, and the calibration model developed using that database, for two or more instruments, with statistically retained accuracy and precision. Ideally, one would develop a single calibration for any particular application, and move it indiscriminately across instruments and achieve identical analysis or prediction results. There are many technical aspects involved in such precision calibration transfer, related to the measuring instrument reproducibility and repeatability, the reference chemical values used for the calibration, the multivariate mathematics used for calibration, and sample presentation repeatability and reproducibility. Ideally, a multivariate model developed on a single instrument would provide a statistically identical analysis when used on other instruments following transfer. This paper reviews common calibration transfer techniques, mostly related to instrument differences, and the mathematics of the uncertainty between instruments when making spectroscopic measurements of identical samples. It does not specifically address calibration maintenance or reference laboratory differences.
Stress in recrystallized quartz by electron backscatter diffraction mapping
NASA Astrophysics Data System (ADS)
Llana-Fúnez, S.
2017-07-01
The long-term state of stress at middle and lower crustal depths can be estimated through the study of the microstructure of exhumed rocks from active and/or ancient shear zones. Constitutive equations for deformation mechanisms in experimentally deformed rocks relate differential stress to the size of recrystallized grains. Cross et al. (2017) take advantage of electron backscatter diffraction mapping to systematically separate new recrystallized grains from host grains on the basis of the measurable lattice distortion within the grains. They produce the first calibrated piezometer for quartz with this technique, reproducing within error a previous calibration based on optical microscopy.
Calibration of a universal indicated turbulence system
NASA Technical Reports Server (NTRS)
Chapin, W. G.
1977-01-01
Theoretical and experimental work on a Universal Indicated Turbulence Meter is described. A mathematical transfer function from turbulence input to output indication was developed. A random ergodic process and a Gaussian turbulence distribution were assumed. A calibration technique based on this transfer function was developed. The computer contains a variable gain amplifier to make the system output independent of average velocity. The range over which this independence holds was determined. An optimum dynamic response was obtained for the tubulation between the system pitot tube and pressure transducer by making dynamic response measurements for orifices of various lengths and diameters at the source end.
NASA Astrophysics Data System (ADS)
Czapla-Myers, J.
2013-12-01
Landsat 8 was successfully launched from Vandenberg Air Force Base in California on 11 February 2013, and was placed into the orbit previously occupied by Landsat 5. Landsat 8 is the latest platform in the 40-year history of the Landsat series of satellites, and it contains two instruments that operate in the solar-reflective and the thermal infrared regimes. The Operational Land Imager (OLI) is a pushbroom sensor that contains eight multispectral bands spanning 400 to 2300 nm, and one panchromatic band. The spatial resolution of the multispectral bands is 30 m and that of the panchromatic band is 15 m, both similar to previous Landsat sensors. The 12-bit radiometric resolution of OLI improves upon the 8-bit resolution of the Enhanced Thematic Mapper Plus (ETM+) onboard Landsat 7. An important requirement for the Landsat program is the long-term radiometric continuity of its sensors. Ground-based vicarious techniques have been used for over 20 years to determine the absolute radiometric calibration of sensors that encompass a wide variety of spectral and spatial characteristics. This work presents the early radiometric calibration results of Landsat 8 OLI that were obtained using the traditional reflectance-based approach. University of Arizona personnel used five sites in Arizona, California, and Nevada to collect ground-based data. In addition, a unique set of in situ data was collected in March 2013, when Landsat 7 and Landsat 8 were observing the same site within minutes of each other. The tandem overfly schedule occurred while Landsat 8 was shifting to the WRS-2 orbital grid, and lasted only a few days. The ground-based data also include results obtained using the University of Arizona's Radiometric Calibration Test Site (RadCaTS), which is an automated suite of instruments located at Railroad Valley, Nevada.
The results presented in this work include a comparison to the L1T at-sensor spectral radiance and the top-of-atmosphere reflectance, both of which are standard products available from the US Geological Survey.
Comparison of Calibration Techniques for Low-Cost Air Quality Monitoring
NASA Astrophysics Data System (ADS)
Malings, C.; Ramachandran, S.; Tanzer, R.; Kumar, S. P. N.; Hauryliuk, A.; Zimmerman, N.; Presto, A. A.
2017-12-01
Assessing the intra-city spatial distribution and temporal variability of air quality can be facilitated by a dense network of monitoring stations. However, the cost of implementing such a network can be prohibitive if high-quality but high-cost monitoring systems are used. To this end, the Real-time Affordable Multi-Pollutant (RAMP) sensor package has been developed at the Center for Atmospheric Particle Studies of Carnegie Mellon University, in collaboration with SenSevere LLC. This self-contained unit can measure up to five gases out of CO, SO2, NO, NO2, O3, VOCs, and CO2, along with temperature and relative humidity. Responses of individual gas sensors can vary greatly even when exposed to the same ambient conditions. Those of VOC sensors in particular were observed to vary by a factor of 8, which suggests that each sensor requires its own calibration model. Accordingly, we apply and compare two different calibration methods to data collected by RAMP sensors collocated with a reference monitor station. The first method, random forest (RF) modeling, is a rule-based method which maps sensor responses to pollutant concentrations by implementing a trained sequence of decision rules. RF modeling has previously been used for other RAMP gas sensors by the group, and has produced precise calibrated measurements. However, RF models can only predict pollutant concentrations within the range observed in the training data collected during the collocation period. The second method, Gaussian process (GP) modeling, is a probabilistic Bayesian technique whereby broad prior estimates of pollutant concentrations are updated using sensor responses to generate more refined posterior predictions, as well as allowing predictions beyond the range of the training data. The accuracy and precision of these techniques are assessed and compared on VOC data collected during the summer of 2017 in Pittsburgh, PA.
By combining pollutant data gathered by each RAMP sensor and applying appropriate calibration techniques, the potentially noisy or biased responses of individual sensors can be mapped to pollutant concentration values which are comparable to those of reference instruments.
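The two calibration approaches can be sketched with scikit-learn stand-ins on synthetic collocation data. The sensor model, feature set, and all numbers below are assumptions; the actual RAMP calibration models are more involved:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic collocation period: raw sensor response + temperature vs. a
# reference monitor (hypothetical linear sensor with temperature cross-sensitivity)
rng = np.random.default_rng(1)
n = 300
temp = rng.uniform(5, 35, n)
true_conc = rng.uniform(0, 50, n)                            # reference (ppb)
raw = 0.8 * true_conc + 0.5 * temp + rng.normal(0, 1.0, n)   # drifting sensor
X = np.column_stack([raw, temp])

# Rule-based calibration: random forest trained on the collocation data
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, true_conc)

# Probabilistic calibration: Gaussian process with an explicit noise term,
# which also yields uncertainty and can extrapolate beyond the training range
gp = GaussianProcessRegressor(kernel=RBF(10.0) + WhiteKernel(1.0),
                              normalize_y=True).fit(X, true_conc)

rf_pred = rf.predict(X)
gp_pred, gp_std = gp.predict(X, return_std=True)
```

The GP's per-prediction standard deviation (`gp_std`) is what distinguishes it operationally from the RF: outside the training range the RF saturates, while the GP reverts toward its prior with inflated uncertainty.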
Validation of a deformable image registration technique for cone beam CT-based dose verification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moteabbed, M., E-mail: mmoteabbed@partners.org; Sharp, G. C.; Wang, Y.
2015-01-15
Purpose: As radiation therapy evolves toward more adaptive techniques, image guidance plays an increasingly important role, not only in patient setup but also in monitoring the delivered dose and adapting the treatment to patient changes. This study aimed to validate a method for evaluation of delivered intensity modulated radiotherapy (IMRT) dose based on multimodal deformable image registration (DIR) for prostate treatments. Methods: A pelvic phantom was scanned with CT and cone-beam computed tomography (CBCT). Both images were digitally deformed using two realistic patient-based deformation fields. The original CT was then registered to the deformed CBCT, resulting in a secondary deformed CT. The registration quality was assessed as the ability of the DIR method to recover the artificially induced deformations. The primary and secondary deformed CT images as well as vector fields were compared to evaluate the efficacy of the registration method and its suitability for dose calculation. PLASTIMATCH, a free and open-source software package, was used for deformable image registration. A B-spline algorithm with optimized parameters was used to achieve the best registration quality. Geometric image evaluation was performed through voxel-based Hounsfield unit (HU) and vector field comparison. For dosimetric evaluation, IMRT treatment plans were created and optimized on the original CT image and recomputed on the two warped images to be compared. The dose volume histograms were compared for the warped structures that were identical in both warped images. This procedure was repeated for the phantom with full, half full, and empty bladder. Results: The results indicated mean HU differences of up to 120 between registered and ground-truth deformed CT images. However, when the CBCT intensities were calibrated using a region of interest (ROI)-based calibration curve, these differences were reduced by up to 60%. 
Similarly, the mean differences in average vector field lengths decreased from 10.1 to 2.5 mm when CBCT was calibrated prior to registration. The results showed no dependence on the level of bladder filling. In comparison with the dose calculated on the primary deformed CT, differences in mean dose averaged over all organs were 0.2% and 3.9% for dose calculated on the secondary deformed CT with and without CBCT calibration, respectively, and 0.5% for dose calculated directly on the calibrated CBCT, for the full-bladder scenario. Gamma analysis for the distance to agreement of 2 mm and 2% of prescribed dose indicated a pass rate of 100% for both cases involving calibrated CBCT and on average 86% without CBCT calibration. Conclusions: Using deformable registration on the planning CT images to evaluate the IMRT dose based on daily CBCTs was found feasible. The proposed method will provide an accurate dose distribution using planning CT and pretreatment CBCT data, avoiding the additional uncertainties introduced by CBCT inhomogeneity and artifacts. This is a necessary initial step toward future image-guided adaptive radiotherapy of the prostate.
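The ROI-based intensity calibration step can be sketched as fitting a curve that maps CBCT ROI means to the corresponding planning-CT HU values and applying it voxelwise. The paper does not specify the curve form; a linear fit and all HU values below are assumptions:

```python
import numpy as np

# Paired ROI means (hypothetical values): the same regions sampled on the
# planning CT (reference HU) and on the CBCT (shifted/scaled intensities).
ct_hu = np.array([-1000.0, -100.0, 0.0, 60.0, 300.0, 1000.0])
cbct_hu = np.array([-950.0, -80.0, 30.0, 95.0, 340.0, 1080.0])

# Linear ROI-based calibration curve mapping CBCT intensity to CT HU
slope, intercept = np.polyfit(cbct_hu, ct_hu, 1)

def calibrate(cbct_voxels):
    """Apply the ROI-derived curve to CBCT voxel intensities."""
    return slope * cbct_voxels + intercept

corrected = calibrate(cbct_hu)
residual = np.abs(corrected - ct_hu)
```

Correcting CBCT intensities toward CT-equivalent HU before registration and dose calculation is what reduced the HU and vector-field discrepancies reported above.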
Homemade Equipment for the Teaching of Electrochemistry at Advanced Level. Part II.
ERIC Educational Resources Information Center
Chan, K. M.
1985-01-01
Provides a detailed description for the construction of equipment needed to investigate acid/base equilibria through the measurement of pH and potentiometric titrations. Suggested experiments and calibration techniques are explained. This information helps to solve the problems of inadequate, expensive equipment required for A-level chemistry…
Reconstruction method for fringe projection profilometry based on light beams.
Li, Xuexing; Zhang, Zhijiang; Yang, Chen
2016-12-01
A novel reconstruction method for fringe projection profilometry, based on light beams, is proposed and verified by experiments. Commonly used calibration techniques require either projector calibration parameters or reference planes placed at many known positions. Introducing projector calibration can reduce the accuracy of the reconstruction result, and setting reference planes at many known positions is time-consuming. Therefore, in this paper, a reconstruction method that needs no projector parameters is proposed, and only two reference planes are introduced. A series of light beams determined by the subpixel point-to-point map on the two reference planes, combined with their reflected light beams determined by the camera model, are used to calculate the 3D coordinates of reconstruction points. Furthermore, the bundle adjustment strategy and the complementary gray-code phase-shifting method are utilized to ensure accuracy and stability. Qualitative and quantitative comparisons as well as experimental tests demonstrate the performance of the proposed approach; the measurement accuracy can reach about 0.0454 mm.
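The geometric core, intersecting an illuminating beam fixed by its two reference-plane crossings with the camera's back-projected ray, reduces to finding the closest point between two 3-D lines. A sketch with hypothetical coordinates (the plane positions, camera center, and ray direction below are invented for illustration):

```python
import numpy as np

def beam_intersection(p1, d1, p2, d2):
    """Midpoint of the common perpendicular between two 3-D lines."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b                  # zero only for parallel lines
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    return 0.5 * ((p1 + s * d1) + (p2 + t * d2))

# Illuminating beam fixed by the subpixel point-to-point map: its crossing
# points on the two reference planes (hypothetical planes z = 0 and z = 100)
q_near = np.array([10.0, 5.0, 0.0])
q_far = np.array([12.0, 6.0, 100.0])

# Reflected beam from the calibrated camera model (hypothetical values)
cam_center = np.array([0.0, 0.0, 0.0])
ray_dir = np.array([11.0, 5.5, 50.0])

point_3d = beam_intersection(q_near, q_far - q_near, cam_center, ray_dir)
```

Because measurement noise means the two lines rarely intersect exactly, taking the midpoint of the common perpendicular is the usual robust choice; bundle adjustment then refines these points jointly.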
Morales, Jesús; Martínez, Jorge L.; Mandow, Anthony; Reina, Antonio J.; Pequeño-Boter, Alejandro; García-Cerezo, Alfonso
2014-01-01
Many applications, like mobile robotics, can profit from acquiring dense, wide-ranging and accurate 3D laser data. Off-the-shelf 2D scanners are commonly customized with an extra rotation as a low-cost, lightweight and low-power-demanding solution. Moreover, aligning the extra rotation axis with the optical center allows the 3D device to maintain the same minimum range as the 2D scanner and avoids offsets in computing Cartesian coordinates. The paper proposes a practical procedure to estimate construction misalignments based on a single scan taken from an arbitrary position in an unprepared environment that contains planar surfaces of unknown dimensions. Inherited measurement limitations from low-cost 2D devices prevent the estimation of very small translation misalignments, so the calibration problem reduces to obtaining boresight parameters. The distinctive approach with respect to previous plane-based intrinsic calibration techniques is the iterative maximization of both the flatness and the area of visible planes. Calibration results are presented for a case study. The method is currently being applied as the final stage in the production of a commercial 3D rangefinder. PMID:25347585
Prediction of valid acidity in intact apples with Fourier transform near infrared spectroscopy.
Liu, Yan-De; Ying, Yi-Bin; Fu, Xia-Ping
2005-03-01
To develop nondestructive acidity prediction for intact Fuji apples, the potential of Fourier transform near infrared (FT-NIR) method with fiber optics in interactance mode was investigated. Interactance in the 800 nm to 2619 nm region was measured for intact apples, harvested from early to late maturity stages. Spectral data were analyzed by two multivariate calibration techniques including partial least squares (PLS) and principal component regression (PCR) methods. A total of 120 Fuji apples were tested and 80 of them were used to form a calibration data set. The influences of different data preprocessing and spectra treatments were also quantified. Calibration models based on smoothing spectra were slightly worse than those based on derivative spectra, and the best result was obtained when the segment length was 5 nm and the gap size was 10 points. Depending on data preprocessing and the PLS method, the best prediction model yielded a coefficient of determination (r2) of 0.759, a root mean square error of prediction (RMSEP) of 0.0677, and a root mean square error of calibration (RMSEC) of 0.0562. The results indicated the feasibility of FT-NIR spectral analysis for predicting apple valid acidity in a nondestructive way.
Prediction of valid acidity in intact apples with Fourier transform near infrared spectroscopy*
Liu, Yan-de; Ying, Yi-bin; Fu, Xia-ping
2005-01-01
To develop nondestructive acidity prediction for intact Fuji apples, the potential of Fourier transform near infrared (FT-NIR) method with fiber optics in interactance mode was investigated. Interactance in the 800 nm to 2619 nm region was measured for intact apples, harvested from early to late maturity stages. Spectral data were analyzed by two multivariate calibration techniques including partial least squares (PLS) and principal component regression (PCR) methods. A total of 120 Fuji apples were tested and 80 of them were used to form a calibration data set. The influences of different data preprocessing and spectra treatments were also quantified. Calibration models based on smoothing spectra were slightly worse than those based on derivative spectra, and the best result was obtained when the segment length was 5 nm and the gap size was 10 points. Depending on data preprocessing and the PLS method, the best prediction model yielded a coefficient of determination (r2) of 0.759, a root mean square error of prediction (RMSEP) of 0.0677, and a root mean square error of calibration (RMSEC) of 0.0562. The results indicated the feasibility of FT-NIR spectral analysis for predicting apple valid acidity in a nondestructive way. PMID:15682498
40 CFR 86.1308-84 - Dynamometer and engine equipment specifications.
Code of Federal Regulations, 2011 CFR
2011-07-01
... technique involves the calibration of a master load cell (i.e., dynamometer case load cell). This... hydraulically actuated precalibrated master load cell. This calibration is then transferred to the flywheel torque measuring device. The technique involves the following steps: (i) A master load cell shall be...
Subnanosecond GPS-based clock synchronization and precision deep-space tracking
NASA Technical Reports Server (NTRS)
Dunn, C. E.; Lichten, S. M.; Jefferson, D. C.; Border, J. S.
1992-01-01
Interferometric spacecraft tracking is accomplished by the Deep Space Network (DSN) by comparing the arrival time of electromagnetic spacecraft signals at ground antennas separated by baselines on the order of 8000 km. Clock synchronization errors within and between DSN stations directly impact the attainable tracking accuracy, with a 0.3-nsec error in clock synchronization resulting in an 11-nrad angular position error. This level of synchronization is currently achieved by observing a quasar which is angularly close to the spacecraft just after the spacecraft observations. By determining the differential arrival times of the random quasar signal at the stations, clock offsets and propagation delays within the atmosphere and within the DSN stations are calibrated. Recent developments in time transfer techniques may allow medium accuracy (50-100 nrad) spacecraft tracking without near-simultaneous quasar-based calibrations. Solutions are presented for a worldwide network of Global Positioning System (GPS) receivers in which the formal errors for DSN clock offset parameters are less than 0.5 nsec. Comparisons of clock rate offsets derived from GPS measurements and from very long baseline interferometry (VLBI), as well as the examination of clock closure, suggest that these formal errors are a realistic measure of GPS-based clock offset precision and accuracy. Incorporating GPS-based clock synchronization measurements into a spacecraft differential ranging system would allow tracking without near-simultaneous quasar observations. The impact on individual spacecraft navigation-error sources due to elimination of quasar-based calibrations is presented. System implementation, including calibration of station electronic delays, is discussed.
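The error budget quoted above can be checked directly: for an interferometric pair, a clock-synchronization offset maps to an angular error of roughly c·Δt divided by the baseline. A minimal sketch, with the 8000 km baseline taken from the text:

```python
# Angular (plane-of-sky) position error produced by a clock-synchronization
# offset between two DSN stations observing the same spacecraft signal.
C = 299_792_458.0        # speed of light, m/s
BASELINE = 8.0e6         # m (~8000 km DSN baseline, from the abstract)

def angular_error_rad(clock_offset_s: float) -> float:
    """Angular error (rad) ~ c * dt / baseline for an interferometric pair."""
    return C * clock_offset_s / BASELINE

err_nrad = angular_error_rad(0.3e-9) * 1e9   # 0.3 ns offset, in nanoradians
```

This reproduces the abstract's figure: a 0.3 ns synchronization error corresponds to roughly 11 nrad of angular position error, which is why sub-nanosecond GPS-based clock offsets (formal errors below 0.5 ns) support the 50-100 nrad medium-accuracy tracking goal.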
Comparison of various techniques for calibration of AIS data
NASA Technical Reports Server (NTRS)
Roberts, D. A.; Yamaguchi, Y.; Lyon, R. J. P.
1986-01-01
The Airborne Imaging Spectrometer (AIS) samples a region which is strongly influenced by decreasing solar irradiance at longer wavelengths and strong atmospheric absorptions. Four techniques, the Log Residual, the Least Upper Bound Residual, the Flat Field Correction, and calibration using field reflectance measurements, were investigated as means for removing these two features. Of the four techniques, field reflectance calibration proved to be superior in terms of noise and normalization. Of the other three techniques, the Log Residual was superior when applied to areas which did not contain one dominant cover type. In heavily vegetated areas, the Log Residual proved to be ineffective. After removing anomalously bright data values, the Least Upper Bound Residual proved to be almost as effective as the Log Residual in sparsely vegetated areas and much more effective in heavily vegetated areas. Of all the techniques, the Flat Field Correction was the noisiest.
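Two of these normalizations can be sketched in NumPy on a toy cube. The Log Residual is implemented here as the common double mean-centering in log space, and the Flat Field Correction as division by the mean spectrum of a bright, spectrally bland area; the data, cube dimensions, and choice of "flat" pixels are all synthetic assumptions:

```python
import numpy as np

# Toy AIS-like cube: (pixels, bands) of measured signal = reflectance x illumination
rng = np.random.default_rng(3)
illum = np.linspace(2.0, 0.5, 50)              # falling solar irradiance with band
refl = rng.uniform(0.1, 0.6, (100, 50))        # surface reflectance
cube = refl * illum

# Log Residual: remove multiplicative illumination and per-pixel brightness
# terms by double-centering in log space (geometric means over pixels and bands)
log_cube = np.log(cube)
log_res = (log_cube
           - log_cube.mean(axis=1, keepdims=True)   # per-pixel (topographic) term
           - log_cube.mean(axis=0, keepdims=True)   # per-band (illumination) term
           + log_cube.mean())

# Flat Field Correction: divide every spectrum by that of a spectrally
# featureless bright area; here pixels 0..9 stand in for that area
flat = cube[:10].mean(axis=0)
flat_field = cube / flat
```

Both corrections cancel the band-dependent irradiance exactly in this multiplicative toy model; the noisiness of the Flat Field Correction noted in the abstract comes from dividing by a single noisy field spectrum rather than a scene-wide mean.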
Calibration of the CMS hadron calorimeter in Run 2
NASA Astrophysics Data System (ADS)
Chadeeva, M.; Lychkovskaya, N.
2018-03-01
Various calibration techniques for the CMS hadron calorimeter in Run 2 and the results of calibration using 2016 collision data are presented. The radiation-damage corrections and the intercalibration of different channels using the phi-symmetry technique for the barrel, endcap, and forward calorimeter regions are described, as well as the intercalibration of the outer hadron calorimeter with muons. The achieved intercalibration precision is within 3%. The in situ energy scale calibration is performed in the barrel and endcap regions using isolated charged hadrons, and in the forward calorimeter using the Z → ee process. The impact of pileup and the technique developed to correct for it are also discussed. The achieved uncertainty of the response to hadrons is 3.4% in the barrel and 2.6% in the endcap region (in the pseudorapidity range |η| < 2) and is dominated by the systematic uncertainty due to pileup contributions.
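The phi-symmetry idea rests on the fact that, averaged over many minimum-bias events, the energy deposited at fixed pseudorapidity should not depend on phi, so per-channel deviations from the ring mean measure relative miscalibration. A minimal sketch with entirely synthetic gains and event energies (not the CMS implementation):

```python
import numpy as np

# One eta ring with 72 phi channels; each channel has an unknown gain error.
rng = np.random.default_rng(4)
n_phi = 72
true_gain = rng.normal(1.0, 0.05, n_phi)             # unknown channel gains
events = 10.0 * true_gain[:, None] * rng.exponential(1.0, (n_phi, 5000))

mean_e = events.mean(axis=1)                         # <E> per channel in the ring
corr = mean_e.mean() / mean_e                        # intercalibration constants

# Applying the constants flattens the effective gain around the ring
effective = true_gain * corr
residual_spread = np.std(effective) / np.mean(effective)
```

With enough events per channel the residual channel-to-channel spread collapses from the injected 5% toward the statistical floor, the same mechanism that yields the sub-3% intercalibration precision quoted above.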
NASA Astrophysics Data System (ADS)
Dykema, John A.; Anderson, James G.
2006-06-01
A methodology to achieve spectral thermal radiance measurements from space with demonstrable on-orbit traceability to the International System of Units (SI) is described. This technique results in measurements of infrared spectral radiance R(ν̃), with spectral index ν̃ in cm⁻¹, with a relative combined uncertainty u_c[R(ν̃)] of 0.0015 (k = 1) for the average mid-infrared radiance emitted by the Earth. This combined uncertainty, expressed in brightness temperature units, is equivalent to ±0.1 K at 250 K at 750 cm⁻¹. This measurement goal is achieved by utilizing a new method for infrared scale realization combined with an instrument design optimized to minimize component uncertainties and admit tests of radiometric performance. The SI traceability of the instrument scale is established by evaluation against source-based and detector-based infrared scales in defined laboratory protocols before launch. A novel strategy is executed to ensure fidelity of on-orbit calibration to the pre-launch scale. This strategy for on-orbit validation relies on the overdetermination of instrument calibration. The pre-launch calibration against scales derived from physically independent paths to the base SI units provides the foundation for a critical analysis of the overdetermined on-orbit calibration to establish an SI-traceable estimate of the combined measurement uncertainty. Redundant calibration sources and built-in diagnostic tests to assess component measurement uncertainties verify the SI traceability of the instrument calibration over the mission lifetime. This measurement strategy can be realized by a practical instrument, a prototype Fourier-transform spectrometer under development for deployment on a small satellite. The measurement record resulting from the methodology described here meets the observational requirements for climate monitoring and climate model testing and improvement.
NASA Technical Reports Server (NTRS)
Wielicki, B. A. (Principal Investigator); Barkstrom, B. R. (Principal Investigator); Charlock, T. P.; Baum, B. A.; Green, R. N.; Minnis, P.; Smith, G. L.; Coakley, J. A.; Randall, D. R.; Lee, R. B., III
1995-01-01
The theoretical bases for the Release 1 algorithms that will be used to process satellite data for investigation of the Clouds and Earth's Radiant Energy System (CERES) are described. The architecture for software implementation of the methodologies is outlined. Volume 2 details the techniques used to geolocate and calibrate the CERES scanning radiometer measurements of shortwave and longwave radiance, to invert the radiances to top-of-the-atmosphere (TOA) and surface fluxes following the Earth Radiation Budget Experiment (ERBE) approach, and to average the fluxes over various time and spatial scales to produce an ERBE-like product. Spacecraft ephemeris and sensor telemetry are used with calibration coefficients to produce a chronologically ordered data product called bidirectional scan (BDS) radiances. A spatially organized instrument Earth scan product is developed for the cloud-processing subsystem. The ERBE-like inversion subsystem converts BDS radiances to unfiltered instantaneous TOA and surface fluxes. The TOA fluxes are determined by using established ERBE techniques. Hourly TOA fluxes are computed from the instantaneous values by using ERBE methods. Hourly surface fluxes are estimated from TOA fluxes by using simple parameterizations based on recent research. The averaging process produces daily, monthly-hourly, and monthly means of TOA and surface fluxes at various scales. This product provides a continuation of the ERBE record.
Characterizing nonconstant instrumental variance in emerging miniaturized analytical techniques.
Noblitt, Scott D; Berg, Kathleen E; Cate, David M; Henry, Charles S
2016-04-07
Measurement variance is a crucial aspect of quantitative chemical analysis. Variance directly affects important analytical figures of merit, including detection limit, quantitation limit, and confidence intervals. Most reported analyses for emerging analytical techniques implicitly assume constant variance (homoskedasticity) by using unweighted regression calibrations. Despite the assumption of constant variance, it is known that most instruments exhibit heteroskedasticity, where variance changes with signal intensity. Ignoring nonconstant variance results in suboptimal calibrations, invalid uncertainty estimates, and incorrect detection limits. Three techniques where homoskedasticity is often assumed were covered in this work to evaluate whether heteroskedasticity had a significant quantitative impact: naked-eye, distance-based detection using paper-based analytical devices (PADs); cathodic stripping voltammetry (CSV) with disposable carbon-ink electrode devices; and microchip electrophoresis (MCE) with conductivity detection. Despite these techniques representing a wide range of chemistries and precision, heteroskedastic behavior was confirmed for each. The general variance forms were analyzed, and recommendations for accounting for nonconstant variance are discussed. Monte Carlo simulations of instrument responses were performed to quantify the benefits of weighted regression, and the sensitivity to uncertainty in the variance function was tested. Results show that heteroskedasticity should be considered during development of new techniques; even moderate uncertainty (30%) in the variance function still results in weighted regression outperforming unweighted regression. We recommend utilizing the power model of variance because it is easy to apply, requires little additional experimentation, and produces higher-precision results and more reliable uncertainty estimates than assuming homoskedasticity.
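The central recommendation, weighting by a power model of variance (sigma = a·signal^b), can be illustrated with a small Monte Carlo in NumPy. The calibration design, true slope, and variance-function parameters below are assumptions, not values from the paper:

```python
import numpy as np

# Power-model heteroskedasticity: sigma = a * signal**b (b = 1 gives constant
# relative error). Compare slope scatter for weighted vs. unweighted fits.
rng = np.random.default_rng(5)
conc = np.repeat([1.0, 2.0, 5.0, 10.0, 20.0, 50.0], 10)   # calibration levels
a, b = 0.05, 1.0                                           # assumed variance model
sigma = a * (3.0 * conc) ** b                              # per-point noise SD

def slope_scatter(n_trials=300):
    sw, su = [], []
    for _ in range(n_trials):
        y = 3.0 * conc + rng.normal(0.0, sigma)            # true slope = 3
        sw.append(np.polyfit(conc, y, 1, w=1.0 / sigma)[0])  # weighted (w = 1/sigma)
        su.append(np.polyfit(conc, y, 1)[0])                 # unweighted
    return np.std(sw), np.std(su)

std_weighted, std_unweighted = slope_scatter()
```

Note that `np.polyfit` expects weights of 1/sigma (not 1/sigma²); with correct weights the weighted estimator is the minimum-variance linear unbiased fit, so its slope scatter is visibly smaller, which is the precision gain the paper quantifies.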
Studying the Diurnal Cycle of Convection Using a TRMM-Calibrated Infrared Rain Algorithm
NASA Technical Reports Server (NTRS)
Negri, Andrew J.
2005-01-01
The development of a satellite infrared (IR) technique for estimating convective and stratiform rainfall and its application in studying the diurnal variability of rainfall on a global scale is presented. The Convective-Stratiform Technique (CST), calibrated by coincident, physically retrieved rain rates from the Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR), is applied over the global tropics. The technique makes use of the IR data from the TRMM Visible/Infrared Scanner (VIRS) before application to global geosynchronous satellite data. The calibrated CST technique has the advantages of high spatial resolution (4 km), filtering of nonraining cirrus clouds, and the stratification of the rainfall into its convective and stratiform components, the last being important for the calculation of vertical profiles of latent heating. The diurnal cycle of rainfall, as well as the division between convective and stratiform rainfall, will be presented. The technique is validated using available data sets and compared to other global rainfall products such as the Global Precipitation Climatology Project (GPCP) IR product, calibrated with TRMM Microwave Imager (TMI) data. Results from five years of PR data will show the global-tropical partitioning of convective and stratiform rainfall.
The recalibration of the IUE scientific instrument
NASA Technical Reports Server (NTRS)
Imhoff, Catherine L.; Oliversen, Nancy A.; Nichols-Bohlin, Joy; Casatella, Angelo; Lloyd, Christopher
1988-01-01
The IUE instrument was recalibrated because of long time-scale changes in the scientific instrument, a better understanding of the performance of the instrument, improved sets of calibration data, and improved analysis techniques. Calibrations completed or planned include intensity transfer functions (ITF), low-dispersion absolute calibrations, high-dispersion ripple corrections and absolute calibrations, improved geometric mapping of the ITFs to spectral images, studies to improve the signal-to-noise ratio, enhanced absolute calibrations employing corrections for time, temperature, and aperture dependence, and photometric and geometric calibrations for the FES.
Calibration and validation of TRUST MRI for the estimation of cerebral blood oxygenation
Lu, Hanzhang; Xu, Feng; Grgac, Ksenija; Liu, Peiying; Qin, Qin; van Zijl, Peter
2011-01-01
Recently, a T2-Relaxation-Under-Spin-Tagging (TRUST) MRI technique was developed to quantitatively estimate blood oxygen saturation fraction (Y) via the measurement of pure blood T2. This technique has shown promise for normalization of fMRI signals, for the assessment of oxygen metabolism, and in studies of cognitive aging and multiple sclerosis. However, a human validation study has not been conducted. In addition, the calibration curve used to convert blood T2 to Y has not accounted for the effects of hematocrit (Hct). In the present study, we first conducted experiments on blood samples under physiologic conditions, and the Carr-Purcell-Meiboom-Gill (CPMG) T2 was determined for a range of Y and Hct values. The data were fitted to a two-compartment exchange model to allow the characterization of a three-dimensional plot that can serve to calibrate the in vivo data. Next, in a validation study in humans, we showed that arterial Y estimated using TRUST MRI was 0.837±0.036 (N=7) during the inhalation of 14% O2, in excellent agreement with the gold-standard values of 0.840±0.036 based on pulse oximetry. These data suggest that the availability of this calibration plot should enhance the applicability of TRUST MRI for non-invasive assessment of cerebral blood oxygenation. PMID:21590721
NASA Astrophysics Data System (ADS)
Venable, Demetrius D.; Whiteman, David N.; Calhoun, Monique N.; Dirisu, Afusat O.; Connell, Rasheen M.; Landulfo, Eduardo
2011-08-01
We have investigated a technique that allows for the independent determination of the water vapor mixing ratio calibration factor for a Raman lidar system. This technique utilizes a procedure whereby a light source of known spectral characteristics is scanned across the aperture of the lidar system's telescope and the overall optical efficiency of the system is determined. Direct analysis of the temperature-dependent differential scattering cross sections for vibration and vibration-rotation transitions (convolved with narrowband filters), along with the measured efficiency of the system, leads to a theoretical determination of the water vapor mixing ratio calibration factor. A calibration factor was also obtained experimentally from lidar measurements and radiosonde data. The theoretical and experimentally determined values agree within 5%. We report on the sensitivity of the water vapor mixing ratio calibration factor to uncertainties in the parameters that characterize the narrowband transmission filters, the temperature-dependent differential scattering cross section, and the variability of the system efficiency ratios as the lamp is scanned across the aperture of the telescope used in the Howard University Raman Lidar system.
Concentration Independent Calibration of β-γ Coincidence Detector Using 131mXe and 133Xe
DOE Office of Scientific and Technical Information (OSTI.GOV)
McIntyre, Justin I.; Cooper, Matthew W.; Carman, April J.
Absolute efficiency calibration of radiometric detectors is frequently difficult and requires careful detector modeling and accurate knowledge of the radioactive source used. In the past we have calibrated the β-γ coincidence detector of the Automated Radioxenon Sampler/Analyzer (ARSA) using a variety of sources and techniques, which have proven to be less than desirable [1]. A superior technique has been developed that uses the conversion-electron (CE) and x-ray coincidence of 131mXe to provide a more accurate absolute gamma efficiency of the detector. The 131mXe is injected directly into the beta cell of the coincident counting system, and no knowledge of absolute source strength is required. In addition, 133Xe is used to provide a second, independent means to obtain the absolute efficiency calibration. These two data points provide the information necessary for calculating the detector efficiency and can be used in conjunction with other noble gas isotopes to completely characterize and calibrate the ARSA nuclear detector. In this paper we discuss the techniques and results that we have obtained.
Consideration of VT5 etch-based OPC modeling
NASA Astrophysics Data System (ADS)
Lim, ChinTeong; Temchenko, Vlad; Kaiser, Dieter; Meusel, Ingo; Schmidt, Sebastian; Schneider, Jens; Niehoff, Martin
2008-03-01
Including etch-based empirical data during OPC model calibration is a desirable yet controversial choice for OPC modeling, especially for processes with large litho-to-etch biasing. While many OPC software tools now provide this functionality, few have been implemented in manufacturing because of various risk considerations, such as compromised resist and optical effects prediction, etch model accuracy, and runtime concerns. The conventional method of applying rules alongside a resist model is popular but requires lengthy code generation to produce a leaner OPC input. This work discusses the risk factors and their considerations, together with the techniques used within Mentor Calibre VT5 etch-based modeling at the sub-90 nm technology node. Various strategies are discussed with the aim of better handling large etch bias offsets without adding complexity to the final OPC package. Finally, results are presented to assess the advantages and limitations of the chosen method.
NASA Technical Reports Server (NTRS)
Marks, David A.; Wolff, David B.; Silberstein, David S.; Tokay, Ali; Pippitt, Jason L.; Wang, Jianxin
2008-01-01
Since the Tropical Rainfall Measuring Mission (TRMM) satellite launch in November 1997, the TRMM Satellite Validation Office (TSVO) at NASA Goddard Space Flight Center (GSFC) has been performing quality control and estimating rainfall from the KPOL S-band radar at Kwajalein, Republic of the Marshall Islands. Over this period, KPOL has incurred many episodes of calibration and antenna pointing angle uncertainty. To address these issues, the TSVO has applied the Relative Calibration Adjustment (RCA) technique to eight years of KPOL radar data to produce Ground Validation (GV) Version 7 products. This application has significantly improved stability in KPOL reflectivity distributions needed for Probability Matching Method (PMM) rain rate estimation and for comparisons to the TRMM Precipitation Radar (PR). In years with significant calibration and angle corrections, the statistical improvement in PMM distributions is dramatic. The intent of this paper is to show improved stability in corrected KPOL reflectivity distributions by using the PR as a stable reference. Inter-month fluctuations in mean reflectivity differences between the PR and corrected KPOL are on the order of 1-2 dB, and inter-year mean reflectivity differences fluctuate by approximately 1 dB. This represents a marked improvement in stability with confidence comparable to the established calibration and uncertainty boundaries of the PR. The practical application of the RCA method has salvaged eight years of radar data that would have otherwise been unusable, and has made possible a high-quality database of tropical ocean-based reflectivity measurements and precipitation estimates for the research community.
NASA Technical Reports Server (NTRS)
Held, D.; Werner, C.; Wall, S.
1983-01-01
The absolute amplitude calibration of the spaceborne Seasat SAR data set is presented based on previous relative calibration studies. A scale factor making it possible to express the perceived radar brightness of a scene in units of sigma-zero is established. The system components are analyzed for error contribution, and the calibration techniques are introduced for each stage. These include: A/D converter saturation tests; prevention of clipping in the processing step; and converting the digital image into the units of received power. Experimental verification was performed by screening and processing the data of the lava flow surrounding the Pisgah Crater in Southern California, for which previous C-130 airborne scatterometer data were available. The average backscatter difference between the two data sets is estimated to be 2 dB in the brighter, and 4 dB in the dimmer regions. For the SAR a calculated uncertainty of 3 dB is expected.
Poláček, Roman; Májek, Pavel; Hroboňová, Katarína; Sádecká, Jana
2016-04-01
Fluoxetine is the most prescribed chiral antidepressant drug worldwide. Its enantiomers differ in their duration of serotonin inhibition. A novel, simple, and rapid method for determining the enantiomeric composition of fluoxetine in pharmaceutical pills is presented. Specifically, emission, excitation, and synchronous fluorescence techniques were employed to obtain the spectral data, which were analyzed with multivariate calibration methods, namely principal component regression (PCR) and partial least squares (PLS). The chiral recognition of fluoxetine enantiomers in the presence of β-cyclodextrin was based on the formation of diastereomeric complexes. The multivariate calibration models showed good prediction abilities. The results obtained for tablets were compared with those from chiral HPLC, and no significant differences were shown by Fisher's (F) test and Student's t-test. The smallest residuals between reference or nominal values and predicted values were achieved by multivariate calibration of the synchronous fluorescence spectral data. This conclusion is supported by the calculated figures of merit.
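The PCR half of the multivariate calibration described above can be sketched compactly: project mean-centered spectra onto their leading principal components, then regress the composition on the scores. The spectra below are synthetic two-component mixtures standing in for the fluorescence data; the pure-component profiles, fractions, and component count are illustrative assumptions, not the authors' data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "spectra": each sample mixes two pure-component profiles
# whose fractions sum to 1 (a stand-in for enantiomeric composition).
wavelengths = np.linspace(0, 1, 50)
pure_a = np.exp(-((wavelengths - 0.4) / 0.1) ** 2)
pure_b = np.exp(-((wavelengths - 0.6) / 0.1) ** 2)

frac_a = rng.uniform(0, 1, 30)                    # calibration targets
spectra = (np.outer(frac_a, pure_a)
           + np.outer(1 - frac_a, pure_b)
           + rng.normal(0, 0.01, (30, 50)))       # small measurement noise

# Principal component regression: SVD of the mean-centered data gives the
# principal components; keep k of them and regress fractions on the scores.
mean = spectra.mean(axis=0)
Xc = spectra - mean
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
scores = Xc @ Vt[:k].T
coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(30), scores]),
                           frac_a, rcond=None)

def predict(new_spectra):
    t = (new_spectra - mean) @ Vt[:k].T
    return coef[0] + t @ coef[1:]

# Predict the composition of a held-out synthetic mixture (25% component A)
test_spec = 0.25 * pure_a + 0.75 * pure_b
print(predict(test_spec[None, :]))
```

A PLS sketch would differ only in how the latent directions are chosen (maximizing covariance with the target rather than spectral variance); the score-then-regress structure is the same.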
NASA Technical Reports Server (NTRS)
Moore, Alvah S., Jr.; Mauldin, L. E., III; Stump, Charles W.; Reagan, John A.; Fabert, Milton G.
1989-01-01
The calibration of the Halogen Occultation Experiment (HALOE) sun sensor is described. This system consists of two energy-balancing silicon detectors which provide coarse azimuth and elevation control signals and a silicon photodiode array which provides top and bottom solar edge data for fine elevation control. All three detectors were calibrated on a mountaintop near Tucson, Ariz., using the Langley plot technique. The conventional Langley plot technique was modified to allow calibration of the two coarse detectors, which operate wideband. A brief description of the test setup is given. The HALOE instrument is a gas correlation radiometer that is now being developed for the Upper Atmospheric Research Satellite.
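The Langley plot technique mentioned above rests on the Beer-Lambert relation ln V = ln V0 − τm: regressing the log of the detector signal against relative airmass m and extrapolating to m = 0 yields the exoatmospheric calibration constant V0. The numbers below are illustrative placeholders, not HALOE data.

```python
import numpy as np

# Langley plot: ln V = ln V0 - tau * m, with m the relative airmass.
# The intercept of the regression line at m = 0 gives the exoatmospheric
# constant V0; the slope gives the optical depth tau.
true_V0, true_tau = 1000.0, 0.12

m = np.linspace(1.0, 5.0, 15)                  # airmass values during the scan
rng = np.random.default_rng(2)
V = true_V0 * np.exp(-true_tau * m) * (1 + rng.normal(0, 0.002, m.size))

slope, intercept = np.polyfit(m, np.log(V), 1)  # coefficients: [slope, intercept]
V0_est, tau_est = np.exp(intercept), -slope
print(f"V0 ~ {V0_est:.1f}, tau ~ {tau_est:.3f}")
```

The modification the abstract describes for the wideband coarse detectors amounts to accounting for the wavelength dependence of τ across the passband before applying this single-wavelength form.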
Improved Radial Velocity Precision with a Tunable Laser Calibrator
NASA Astrophysics Data System (ADS)
Cramer, Claire; Brown, S.; Dupree, A. K.; Lykke, K. R.; Smith, A.; Szentgyorgyi, A.
2010-01-01
We present radial velocities obtained using a novel laser-based wavelength calibration technique. We have built a prototype laser calibrator for the Hectochelle spectrograph at the MMT 6.5 m telescope. The Hectochelle is a high-dispersion, fiber-fed, multi-object spectrograph capable of recording up to 240 spectra simultaneously with a resolving power of 40000. The standard wavelength calibration method makes use of spectra from thorium-argon hollow cathode lamps shining directly onto the fibers. The difference in light path between calibration and science light as well as the uneven distribution of spectral lines are believed to introduce errors of up to several hundred m/s in the wavelength scale. Our tunable laser wavelength calibrator solves these problems. The laser is bright enough for use with a dome screen, allowing the calibration light path to better match the science light path. Further, the laser is tuned in regular steps across a spectral order to generate a calibration spectrum, creating a comb of evenly-spaced lines on the detector. Using the solar spectrum reflected from the atmosphere to record the same spectrum in every fiber, we show that laser wavelength calibration brings radial velocity uncertainties down below 100 m/s. We present these results as well as an application of tunable laser calibration to stellar radial velocities determined with the infrared Ca triplet in globular clusters M15 and NGC 7492. We also suggest how the tunable laser could be useful for other instruments, including single-object, cross-dispersed echelle spectrographs, and adapted for infrared spectroscopy.
Transverse Pupil Shifts for Adaptive Optics Non-Common Path Calibration
NASA Technical Reports Server (NTRS)
Bloemhof, Eric E.
2011-01-01
A simple new way of obtaining absolute wavefront measurements with a laboratory Fizeau interferometer was recently devised. In that case, the observed wavefront map is the difference of two cavity surfaces, those of the mirror under test and of an unknown reference surface on the Fizeau's transmission flat. The absolute surface of each can be determined by applying standard wavefront reconstruction techniques to two grids of absolute surface height differences of the mirror under test, obtained from pairs of measurements made with slight transverse shifts in X and Y. Adaptive optics systems typically provide an actuated periscope between the wavefront sensor (WFS) and the common-mode optics, used for lateral registration of the deformable mirror (DM) to the WFS. This periscope permits independent adjustment of either the pupil or the focal spot incident on the WFS. It would be used to give the required lateral pupil motion between common and non-common segments, analogous to the lateral shifts of the two phase contributions in the lab Fizeau. The technique is based on a completely new approach to the calibration of phase. It offers unusual flexibility with regard to the transverse spatial frequency scales probed, and gives results quite quickly, making use of no auxiliary equipment other than that built into the adaptive optics system. The new technique may be applied to provide novel calibration information about other optical systems in which the beam may be shifted transversely in a controlled way.
A novel camera localization system for extending three-dimensional digital image correlation
NASA Astrophysics Data System (ADS)
Sabato, Alessandro; Reddy, Narasimha; Khan, Sameer; Niezrecki, Christopher
2018-03-01
The monitoring of civil, mechanical, and aerospace structures is important, especially as these systems approach or surpass their design life. Often, Structural Health Monitoring (SHM) relies on sensing techniques for condition assessment. Advancements in camera technology and optical sensors have made three-dimensional (3D) Digital Image Correlation (DIC) a valid technique for extracting structural deformations and geometry profiles. Prior to making stereophotogrammetry measurements, a calibration has to be performed to obtain the vision system's extrinsic and intrinsic parameters; that is, the position of the cameras relative to each other (i.e., separation distance, camera angle, etc.) must be determined. Typically, cameras are placed on a rigid bar to prevent any relative motion between them. This constraint limits the utility of the 3D-DIC technique, especially as it is applied to monitor large structures and from various fields of view. In this preliminary study, the design of a multi-sensor system is proposed to extend 3D-DIC's capability and allow for easier calibration and measurement. The suggested system relies on a MEMS-based Inertial Measurement Unit (IMU) and a 77 GHz radar sensor for measuring the orientation and relative distance of the stereo cameras. The feasibility of the proposed combined IMU-radar system is evaluated through laboratory tests, demonstrating its ability to determine the cameras' positions in space for performing accurate 3D-DIC calibration and measurements.
NASA Astrophysics Data System (ADS)
Apel, W. D.; Arteaga-Velázquez, J. C.; Bähren, L.; Bezyazeekov, P. A.; Bekk, K.; Bertaina, M.; Biermann, P. L.; Blümer, J.; Bozdog, H.; Brancus, I. M.; Budnev, N. M.; Cantoni, E.; Chiavassa, A.; Daumiller, K.; de Souza, V.; di Pierro, F.; Doll, P.; Engel, R.; Falcke, H.; Fedorov, O.; Fuchs, B.; Gemmeke, H.; Gress, O. A.; Grupen, C.; Haungs, A.; Heck, D.; Hiller, R.; Hörandel, J. R.; Horneffer, A.; Huber, D.; Huege, T.; Isar, P. G.; Kampert, K.-H.; Kang, D.; Kazarina, Y.; Kleifges, M.; Korosteleva, E. E.; Kostunin, D.; Krömer, O.; Kuijpers, J.; Kuzmichev, L. A.; Link, K.; Lubsandorzhiev, N.; Łuczak, P.; Ludwig, M.; Mathes, H. J.; Melissas, M.; Mirgazov, R. R.; Monkhoev, R.; Morello, C.; Oehlschläger, J.; Osipova, E. A.; Pakhorukov, A.; Palmieri, N.; Pankov, L.; Pierog, T.; Prosin, V. V.; Rautenberg, J.; Rebel, H.; Roth, M.; Rubtsov, G. I.; Rühle, C.; Saftoiu, A.; Schieler, H.; Schmidt, A.; Schoo, S.; Schröder, F. G.; Sima, O.; Toma, G.; Trinchero, G. C.; Weindl, A.; Wischnewski, R.; Wochele, J.; Zabierowski, J.; Zagorodnikov, A.; Zensus, J. A.; Tunka-Rex; Lopes Collaborations
2016-12-01
The radio technique is a promising method for the detection of cosmic-ray air showers with energies around 100 PeV and higher using an array of radio antennas. Since the amplitude of the radio signal can be measured absolutely and increases with the shower energy, radio measurements can be used to determine the air-shower energy on an absolute scale. We show that calibrated measurements of radio detectors operated in coincidence with host experiments measuring air showers by other techniques can be used to compare the energy scales of those host experiments. Using two approaches, first via direct amplitude measurements and second via comparison of measurements with air-shower simulations, we compare the energy scales of the air-shower experiments Tunka-133 and KASCADE-Grande, using their radio extensions Tunka-Rex and LOPES, respectively. Due to the consistent amplitude calibration of Tunka-Rex and LOPES achieved by using the same reference source, this comparison reaches an accuracy of approximately 10%, limited by some shortcomings of LOPES, which was a prototype experiment for the digital radio technique for air showers. In particular, we show that the energy scales of cosmic-ray measurements by the independently calibrated experiments KASCADE-Grande and Tunka-133 are consistent with each other at this level.
NASA Astrophysics Data System (ADS)
Kakihara, H.; Yabuki, M.; Kitafuji, F.; Tsuda, T.; Tsukamoto, M.; Hasegawa, T.; Hashiguchi, H.; Yamamoto, M.
2017-12-01
Atmospheric water vapor plays an important role in atmospheric chemistry and meteorology, with implications for climate change and severe weather. The Raman lidar technique is useful for observing water vapor with high spatiotemporal resolution. However, the calibration factor must be determined before observations. Because the calibration factor is generally evaluated by comparing Raman-signal results with those of independent measurement techniques (e.g., radiosonde), it is difficult to apply this technique at lidar sites where radiosonde observations cannot be carried out. In this study, we propose a new calibration technique for water-vapor Raman lidar using global navigation satellite system (GNSS)-derived precipitable water vapor (PWV) and the Japan Meteorological Agency meso-scale model (MSM). The analysis was accomplished by fitting the GNSS-PWV to integrated water-vapor profiles that combine the MSM with the results of the lidar observations. The maximum height of the lidar signal applicable to this method was set below 2.0 km in consideration of the signal noise caused mainly by low clouds; MSM data were employed at the higher altitudes where the lidar data cannot be applied. This method can therefore be applied to lidar signals restricted to low altitude ranges by weather conditions or lidar specifications. For example, Raman lidar using a laser operating in the ultraviolet C (UV-C) region has the advantage of daytime observation, since there is no solar background radiation in that band; the observation range is, however, limited to altitudes below 1-3 km because of strong ozone absorption in the UV-C region. The new calibration technique will allow the utilization of various types of Raman lidar systems and provide many opportunities for calibration. We demonstrated the potential of this method using the UV-C Raman lidar and GNSS observation data at the Shigaraki MU radar observatory (34°51'N, 136°06'E; 385 m a.s.l.) of the Research Institute for Sustainable Humanosphere (RISH), Kyoto University, Japan, in June 2016. Differences in the calibration factor between the proposed method and the conventional method were 0.7% under optimal conditions such as clear skies and low ozone concentrations.
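The fitting idea behind this GNSS-based calibration can be sketched in a few lines: scale the uncalibrated lidar mixing-ratio profile by a factor C so that its column integral, plus the model-supplied water column above the usable lidar range, matches the GNSS-derived PWV. All profiles and numbers below are synthetic placeholders with arbitrary units, not RISH observations.

```python
import numpy as np

# Synthetic stand-ins for the quantities involved (placeholders only).
z = np.linspace(0.0, 2.0, 100)       # usable lidar range [km]
q_raw = np.exp(-z / 2.0)             # uncalibrated water-vapor signal ratio
rho = np.exp(-z / 8.0)               # normalized air-density weighting

pwv_gnss = 30.0                      # GNSS-derived total PWV [mm]
pwv_model_above = 8.0                # MSM water column above 2 km [mm]

# The calibrated mixing ratio is C * q_raw; choose C so that the lidar
# part of the integrated column matches GNSS PWV minus the model part.
f = q_raw * rho
column_per_unit_C = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z))  # trapezoid

C = (pwv_gnss - pwv_model_above) / column_per_unit_C
print(f"calibration factor C = {C:.3f}")
```

In the actual method the model contribution and the lidar profile are combined before fitting, but the scaling step reduces to this closed-form ratio once the column above the lidar's maximum height is taken from the MSM.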
Overview of intercalibration of satellite instruments
Chander, G.; Hewison, T.J.; Fox, N.; Wu, X.; Xiong, X.; Blackwell, W.J.
2013-01-01
Inter-calibration of satellite instruments is critical for detection and quantification of changes in the Earth’s environment, weather forecasting, understanding climate processes, and monitoring climate and land cover change. These applications use data from many satellites; for the data to be inter-operable, the instruments must be cross-calibrated. To meet the stringent needs of such applications requires that instruments provide reliable, accurate, and consistent measurements over time. Robust techniques are required to ensure that observations from different instruments can be normalized to a common scale that the community agrees on. The long-term reliability of this process needs to be sustained in accordance with established reference standards and best practices. Furthermore, establishing physical meaning to the information through robust Système International d'unités (SI) traceable Calibration and Validation (Cal/Val) is essential to fully understand the parameters under observation. The processes of calibration, correction, stability monitoring, and quality assurance need to be underpinned and evidenced by comparison with “peer instruments” and, ideally, highly calibrated in-orbit reference instruments. Inter-calibration between instruments is a central pillar of the Cal/Val strategies of many national and international satellite remote sensing organizations. Inter-calibration techniques as outlined in this paper not only provide a practical means of identifying and correcting relative biases in radiometric calibration between instruments but also enable potential data gaps between measurement records in a critical time series to be bridged. 
Use of a robust set of internationally agreed upon and coordinated inter-calibration techniques will lead to significant improvement in the consistency between satellite instruments and facilitate accurate monitoring of the Earth’s climate at uncertainty levels needed to detect and attribute the mechanisms of change. This paper summarizes the state-of-the-art of post-launch radiometric calibration of remote sensing satellite instruments, through inter-calibration.
External calibration of polarimetric radar images using distributed targets
NASA Technical Reports Server (NTRS)
Yueh, Simon H.; Nghiem, S. V.; Kwok, R.
1992-01-01
A new technique is presented for calibrating polarimetric synthetic aperture radar (SAR) images using only the responses from natural distributed targets. The model for polarimetric radars is assumed to be X = cRST, where X is the measured scattering matrix corresponding to the target scattering matrix S distorted by the system matrices T and R (in general T does not equal R^t). To allow polarimetric calibration using only distributed targets and corner reflectors, van Zyl assumed a reciprocal polarimetric radar model with T = R^t; when applied to JPL SAR data, a heuristic symmetrization procedure is used by POLCAL to compensate the phase difference between the measured HV and VH responses and then average the two. This heuristic approach causes some non-removable cross-polarization responses for corner reflectors, which can be avoided by a rigorous symmetrization method based on reciprocity. After the radar is made reciprocal, a new algorithm based on the responses from distributed targets with reflection symmetry is developed to estimate the cross-talk parameters. The new algorithm never experiences convergence problems and is also found to converge faster than the existing routines implemented for POLCAL. When the new technique is applied to the JPL polarimetric data, symmetrization and cross-talk removal are performed on a line-by-line (azimuth) basis. After the cross-talk is removed from the entire image, phase and amplitude calibrations are carried out by selecting distributed targets either with azimuthal symmetry along the look direction or with well-known volume and surface scattering mechanisms, to estimate the relative phase and amplitude responses of the horizontal and vertical channels.
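The distortion model X = cRST can be made concrete with a short numpy roundtrip: once calibration has produced estimates of R, T, and c, the target scattering matrix is recovered by inverting the model. The matrices below are small synthetic examples, not SAR data, and the direct inversion stands in for the full distributed-target estimation described above.

```python
import numpy as np

rng = np.random.default_rng(3)

# Distortion model for a polarimetric radar: X = c * R @ S @ T, where S is
# the 2x2 target scattering matrix and R, T are the receive/transmit
# distortion matrices (in general T != R.T). Synthetic example values:
S = np.array([[1.0, 0.10],
              [0.12, 0.80]])                       # "true" target matrix
R = np.eye(2) + 0.05 * rng.normal(size=(2, 2))     # receive distortion
T = np.eye(2) + 0.05 * rng.normal(size=(2, 2))     # transmit distortion
c = 2.0

X = c * R @ S @ T                                  # measured matrix

# Given calibrated R, T, and c, invert the model to recover the target:
S_hat = np.linalg.inv(R) @ X @ np.linalg.inv(T) / c
print(np.allclose(S_hat, S))  # True
```

The calibration problem the paper solves is estimating R, T, and c themselves from distributed-target statistics and reciprocity; the inversion step above is what those estimates are ultimately used for.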
The next generation of low-cost personal air quality sensors for quantitative exposure monitoring
NASA Astrophysics Data System (ADS)
Piedrahita, R.; Xiang, Y.; Masson, N.; Ortega, J.; Collier, A.; Jiang, Y.; Li, K.; Dick, R.; Lv, Q.; Hannigan, M.; Shang, L.
2014-03-01
Advances in embedded systems and low-cost gas sensors are enabling a new wave of low-cost air quality monitoring tools. Our team has been engaged in the development of low-cost wearable air quality monitors (M-Pods) using the Arduino platform. The M-Pods use commercially available metal oxide semiconductor (MOx) sensors to measure CO, O3, NO2, and total VOCs, and NDIR sensors to measure CO2. MOx sensors are low in cost and show high sensitivity near ambient levels; however, they display non-linear output signals and have cross-sensitivity effects. Thus, a quantification system was developed to convert the MOx sensor signals into concentrations. Two deployments were conducted at a regulatory monitoring station in Denver, Colorado. M-Pod concentrations were determined using laboratory calibration techniques and co-location calibrations, in which we placed the M-Pods near regulatory monitors and derived calibration function coefficients using the regulatory monitors as the standard. The form of the calibration function was derived from laboratory experiments. We discuss various techniques used to estimate measurement uncertainties. A separate user study was also conducted to assess personal exposure and M-Pod reliability. In this study, 10 M-Pods were calibrated via co-location multiple times over 4 weeks and sensor drift was analyzed, the result being a calibration function that included drift. We found that co-location calibrations perform better than laboratory calibrations; lab calibrations suffer from bias and difficulty in covering the necessary parameter space. During co-location calibrations, median standard errors ranged between 4.0-6.1 ppb for O3, 6.4-8.4 ppb for NO2, 0.28-0.44 ppm for CO, and 16.8 ppm for CO2. Median signal-to-noise (S/N) ratios were lower for the M-Pod sensors than for the regulatory instruments: for NO2, 3.6 compared to 23.4; for O3, 1.4 compared to 1.6; for CO, 1.1 compared to 10.0; and for CO2, 42.2 compared to 300-500.
The user study provided trends and location-specific information on pollutants, and effected change in user behavior. The study demonstrated the utility of the M-Pod as a tool to assess personal exposure.
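A co-location calibration of the kind described above reduces to a regression of the reference-monitor concentration on the raw sensor signal plus covariates, here a temperature term for cross-sensitivity and an elapsed-time term for drift. The model form and every coefficient below are illustrative assumptions on synthetic data, not the M-Pod calibration function itself.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic co-location record: a MOx-style raw signal that responds to the
# reference concentration, with temperature cross-sensitivity and slow drift.
n = 500
t = np.linspace(0, 28, n)                           # days deployed
ref = 30 + 20 * np.sin(t) + rng.normal(0, 2, n)     # reference monitor [ppb]
temp = 20 + 5 * np.sin(t / 3)                       # temperature [C]
raw = 0.8 * ref + 1.5 * temp + 0.6 * t + rng.normal(0, 1, n)

# Calibration: regress the reference concentration on the raw signal,
# temperature, and elapsed time (the drift term).
A = np.column_stack([np.ones(n), raw, temp, t])
coef, *_ = np.linalg.lstsq(A, ref, rcond=None)
pred = A @ coef
rmse = np.sqrt(np.mean((pred - ref) ** 2))
print(f"co-location calibration RMSE: {rmse:.2f} ppb")
```

Including the drift term is what allows a single calibration function to remain valid across a multi-week deployment, mirroring the drift analysis described in the study.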
Rezende, L F C; Arenque-Musa, B C; Moura, M S B; Aidar, S T; Von Randow, C; Menezes, R S C; Ometto, J P B H
2016-06-01
The semiarid region of northeastern Brazil, the Caatinga, is extremely important due to its biodiversity and endemism. Measurements of plant physiology are crucial to the calibration of Dynamic Global Vegetation Models (DGVMs) that are currently used to simulate the responses of vegetation to global changes. In field work carried out in an area of preserved Caatinga forest located in Petrolina, Pernambuco, measurements of carbon assimilation (in response to light and CO2) were performed on 11 individuals of Poincianella microphylla, a native species that is abundant in this region. These data were used to calibrate the maximum carboxylation velocity (Vcmax) used in the INLAND model. The calibration techniques used were Multiple Linear Regression (MLR) and data-mining techniques, namely Classification And Regression Trees (CART) and K-MEANS. The results were compared to those of the uncalibrated model. Simulated Gross Primary Productivity (GPP) reached 72% of observed GPP when using the calibrated Vcmax values, whereas the uncalibrated approach accounted for 42% of observed GPP. Thus, this work shows the benefits of calibrating DGVMs using field ecophysiological measurements, especially in areas where field data are scarce or non-existent, such as the Caatinga.
NASA Astrophysics Data System (ADS)
Klaas, D. K. S. Y.; Imteaz, M. A.; Sudiayem, I.; Klaas, E. M. E.; Klaas, E. C. M.
2017-10-01
In groundwater modelling, robust parameterisation of sub-surface parameters is crucial to obtaining agreeable model performance. Pilot points are an alternative in the parameterisation step for correctly configuring the distribution of parameters in a model. However, the methodologies given in current studies are considered less practical to apply under real catchment conditions. In this study, a practical approach using the geometric features of pilot points and the distribution of hydraulic gradient over the catchment area is proposed to efficiently configure the pilot point distribution in the calibration step of a groundwater model. A new pilot point distribution technique, the Head Zonation-based (HZB) technique, which is based on the hydraulic gradient distribution of groundwater flow, is presented. Seven models with seven zone ratios (1, 5, 10, 15, 20, 25 and 30) using the HZB technique were constructed for an eogenetic karst catchment on Rote Island, Indonesia, and their performances were assessed. This study also offers insights into the trade-off between restricting and maximising the number of pilot points, and proposes a new methodology for selecting pilot point properties and distribution method in the development of a physically-based groundwater model.
Fantínová, K; Fojtík, P; Malátová, I
2016-09-01
Rapid measurement techniques are required for large-scale emergency monitoring of people. In vivo measurement of the bremsstrahlung radiation produced by incorporated pure-beta emitters can offer a rapid technique for the determination of such radionuclides in the human body. This work presents a method for the calibration of spectrometers based on the use of the UPh-02T (so-called IGOR) phantom and specific ⁹⁰Sr/⁹⁰Y sources, which can account for recent as well as previous contaminations. The process of whole- and partial-body counter calibration, in combination with the application of a Monte Carlo code, readily extends to other pure-beta emitters and various exposure scenarios.
Hasegawa, M; Tajima, O; Chinone, Y; Hazumi, M; Ishidoshiro, K; Nagai, M
2011-05-01
We present a novel system to calibrate millimeter-wave polarimeters for cosmic microwave background (CMB) polarization measurements. This technique is an extension of the conventional metal mirror rotation approach; however, it employs cryogenically-cooled blackbody absorbers. The primary advantage of this system is that it can generate a slightly polarized signal (∼100 mK) in the laboratory; this is at a similar level to that measured by ground-based CMB polarization experiments observing a ∼10 K sky. It is important to reproduce the observing conditions in the laboratory for reliable characterization of polarimeters before deployment. In this paper, we present the design and principle of the system and demonstrate its use with a coherent-type polarimeter used for an actual CMB polarization experiment. This technique can also be applied to incoherent-type polarimeters and is very promising for next-generation CMB polarization experiments.
Jiménez, Roberto; Torralba, Marta; Yagüe-Fabra, José A.; Ontiveros, Sinué; Tosello, Guido
2017-01-01
The dimensional verification of miniaturized components with 3D complex geometries is particularly challenging. Computed Tomography (CT) can represent a suitable alternative solution to micro metrology tools based on optical and tactile techniques. However, the establishment of CT systems’ traceability when measuring 3D complex geometries is still an open issue. In this work, an alternative method for the measurement uncertainty assessment of 3D complex geometries by using CT is presented. The method is based on the micro-CT system Maximum Permissible Error (MPE) estimation, determined experimentally by using several calibrated reference artefacts. The main advantage of the presented method is that a previous calibration of the component by a more accurate Coordinate Measuring System (CMS) is not needed. In fact, such CMS would still hold all the typical limitations of optical and tactile techniques, particularly when measuring miniaturized components with complex 3D geometries and their inability to measure inner parts. To validate the presented method, the most accepted standard currently available for CT sensors, the Verein Deutscher Ingenieure/Verband Deutscher Elektrotechniker (VDI/VDE) guideline 2630-2.1 is applied. Considering the high number of influence factors in CT and their impact on the measuring result, two different techniques for surface extraction are also considered to obtain a realistic determination of the influence of data processing on uncertainty. The uncertainty assessment of a workpiece used for micro mechanical material testing is firstly used to confirm the method, due to its feasible calibration by an optical CMS. Secondly, the measurement of a miniaturized dental file with 3D complex geometry is carried out. The estimated uncertainties are eventually compared with the component’s calibration and the micro manufacturing tolerances to demonstrate the suitability of the presented CT calibration procedure. 
The 2U/T ratios resulting from the validation workpiece are, respectively, 0.27 (VDI) and 0.35 (MPE), assuring tolerances in the range of ± 20–30 µm. For the dental file, the EN < 1 analysis is favorable in the majority of cases (70.4%) and 2U/T is equal to 0.31 for sub-mm measurands (L < 1 mm and tolerance intervals of ± 40–80 µm).
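The 2U/T capability ratio reported above compares the expanded measurement uncertainty U against the full tolerance interval width T. A minimal sketch of that check, with a hypothetical U chosen so the result matches the abstract's 0.27 VDI case:

```python
# Sketch: the 2U/T measurement-capability ratio (U = expanded
# uncertainty, T = full tolerance interval width). The uncertainty
# value below is hypothetical.

def two_u_over_t(expanded_uncertainty_um, tol_minus_um, tol_plus_um):
    """Return 2U/T, with T the full width of the tolerance interval."""
    t = tol_plus_um - tol_minus_um
    return 2.0 * expanded_uncertainty_um / t

# e.g. U = 5.4 um against a +/-20 um tolerance -> T = 40 um -> 0.27
ratio = two_u_over_t(5.4, -20.0, 20.0)
```

A smaller ratio means the measurement process consumes less of the tolerance budget; guidelines typically require 2U/T below some fraction (often 0.25 to 0.5) before a measurement is deemed fit for verifying that tolerance.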
SHORT COMMUNICATION: An image processing approach to calibration of hydrometers
NASA Astrophysics Data System (ADS)
Lorefice, S.; Malengo, A.
2004-06-01
The usual method adopted for multipoint calibration of glass hydrometers is based on the measurement of buoyancy by hydrostatic weighing when the hydrometer is immersed in a reference liquid up to the scale mark to be calibrated. An image processing approach is proposed by the authors to align the relevant scale mark with the reference liquid surface level. The method uses image analysis with a data processing technique and takes into account the perspective error. For this purpose a CCD camera with a pixel matrix of 604H × 576V and a lens of 16 mm focal length were used. High accuracy in the hydrometer reading was obtained, as the resulting reading uncertainty was lower than 0.02 mm, about a fifth of the usual figure for visual reading by an operator.
Calibration of LR-115 for ²²²Rn monitoring taking into account the plateout effect.
Da Silva, A A R; Yoshimura, E M
2003-01-01
The dose received by people exposed to indoor radon is mainly due to radon progeny. This fact points to the establishment of techniques that assess either radon and progeny together, or only the radon progeny concentration. In this work a low-cost and easy-to-use methodology is presented to determine the total indoor alpha emission concentration. It is based on passive detection using LR-115 and CR-39 detectors, taking into account the plateout effect. A calibration of the LR-115 track density response was done by indoor exposure in controlled environments and in dwellings, places where ²²²Rn and progeny concentrations were measured with CR-39. The calibration factor obtained showed a strong dependence on ambient conditions: (0.69 ± 0.04) cm for controlled environments and (0.43 ± 0.03) cm for dwellings.
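Once a calibration factor is in hand, converting a measured track density into a concentration is a single division. The function and the numbers below are a generic, hypothetical sketch of that step; the paper's exact units and factor definition are not reproduced here.

```python
# Sketch: converting a passive-detector track density into a
# concentration via an environment-specific calibration factor.
# Units and values are hypothetical placeholders.

def concentration(track_density, exposure_time, cal_factor):
    """C = (track density / exposure time) / calibration factor.

    track_density : registered tracks per unit detector area
    exposure_time : exposure duration, in the unit the factor assumes
    cal_factor    : detector response per unit concentration
    """
    return (track_density / exposure_time) / cal_factor

# e.g. 138 tracks per unit area over 100 time units, factor 0.69
c = concentration(138.0, 100.0, 0.69)
```

The abstract's key point survives in this form: using the controlled-environment factor (0.69) in a dwelling (where 0.43 applies) would bias the inferred concentration by roughly 60%, so the factor must match the ambient conditions.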
NASA Astrophysics Data System (ADS)
Shafii, M.; Tolson, B.; Matott, L. S.
2012-04-01
Hydrologic modeling has benefited from significant developments over the past two decades. This has resulted in higher levels of complexity being built into hydrologic models, which eventually makes the model evaluation process (parameter estimation via calibration and uncertainty analysis) more challenging. In order to avoid unreasonable parameter estimates, many researchers have suggested implementation of multi-criteria calibration schemes. Furthermore, for predictive hydrologic models to be useful, proper consideration of uncertainty is essential. Consequently, recent research has emphasized comprehensive model assessment procedures in which multi-criteria parameter estimation is combined with statistically-based uncertainty analysis routines such as Bayesian inference using Markov Chain Monte Carlo (MCMC) sampling. Such a procedure relies on the use of formal likelihood functions based on statistical assumptions; moreover, Bayesian inference structured on MCMC samplers requires a considerably large number of simulations. Due to these issues, especially in complex non-linear hydrological models, a variety of alternative informal approaches have been proposed for uncertainty analysis in the multi-criteria context. This study aims at exploring a number of such informal uncertainty analysis techniques in multi-criteria calibration of hydrological models. The informal methods addressed in this study are (i) Pareto optimality, which quantifies the parameter uncertainty using the Pareto solutions; (ii) DDS-AU, which uses the weighted sum of objective functions to derive the prediction limits; and (iii) GLUE, which describes the total uncertainty through identification of behavioral solutions. The main objective is to compare such methods with MCMC-based Bayesian inference with respect to factors such as computational burden and predictive capacity, which are evaluated based on multiple comparative measures.
The comparative measures are calculated for both the calibration and evaluation periods. The uncertainty analysis methodologies are applied to a simple 5-parameter rainfall-runoff model called HYMOD.
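The GLUE idea mentioned above can be sketched in a few lines: each sampled parameter set is scored with an informal likelihood, and only "behavioral" sets above a threshold are retained to form the predictive ensemble. The Nash-Sutcliffe likelihood, the 0.5 threshold, and the toy series below are illustrative assumptions, not the study's setup.

```python
# Sketch of GLUE behavioral selection: score candidate simulations
# with a likelihood measure (here Nash-Sutcliffe efficiency) and keep
# those above a threshold. Model outputs and threshold are illustrative.

def nash_sutcliffe(obs, sim):
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    sst = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / sst

def glue_behavioral(candidate_sims, obs, threshold=0.5):
    """Return (index, NSE) pairs for simulations deemed behavioral."""
    scored = [(i, nash_sutcliffe(obs, sim))
              for i, sim in enumerate(candidate_sims)]
    return [(i, nse) for i, nse in scored if nse >= threshold]

obs = [1.0, 2.0, 3.0, 4.0]
sims = [
    [1.1, 2.1, 2.9, 3.9],   # close fit -> behavioral
    [3.0, 3.0, 3.0, 3.0],   # poor fit -> rejected
]
behavioral = glue_behavioral(sims, obs)
```

In a full GLUE application the retained likelihoods are normalised into weights and used to construct prediction limits at each time step, which is where the comparison with formal MCMC-based intervals becomes meaningful.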
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, R; Jee, K; Sharp, G
Purpose: Proton radiography, which images patients with the same type of particles used for their treatment, is a promising approach for image guidance and for reducing range uncertainties. This study aimed to realize quality proton radiography by measuring dose rate functions (DRFs) in the time domain using a single flat panel and retrieving water equivalent path length (WEPL) from them. Methods: An amorphous silicon flat panel (PaxScan™ 4030CB, Varian Medical Systems, Inc., Palo Alto, CA) was placed behind phantoms to measure DRFs from a proton beam modulated by the modulator wheel. To retrieve WEPL and relative stopping power (RSP), calibration models based on the intensity of DRFs only, the root mean square (RMS) of DRFs only, and the intensity-weighted RMS were tested. The quality of the obtained WEPL images (in terms of spatial resolution and level of detail) and the accuracy of the WEPL were compared. Results: RSPs for most of the Gammex phantom inserts were retrieved within ±1% error by the calibration models based on the RMS and the intensity-weighted RMS. The mean percentage error for all inserts was reduced from 1.08% to 0.75% by matching intensity in the calibration model. In specific cases, such as the insert with a titanium rod, the calibration model based on RMS only fails while that based on intensity-weighted RMS remains valid. The quality of the retrieved WEPL images was significantly improved for calibration models including intensity matching. Conclusion: For the first time, a flat panel, which is readily available in the beamline for image guidance, was tested to acquire quality proton radiography with WEPL accurately retrieved from it. This technique is promising for image-guided proton therapy as well as for patient-specific RSP determination to reduce beam range uncertainties.
Video-guided calibration of an augmented reality mobile C-arm.
Chen, Xin; Naik, Hemal; Wang, Lejing; Navab, Nassir; Fallavollita, Pascal
2014-11-01
The augmented reality (AR) fluoroscope augments an X-ray image by video and provides the surgeon with a real-time in situ overlay of the anatomy. The overlay alignment is crucial for diagnostic and intra-operative guidance, so precise calibration of the AR fluoroscope is required. The first and most complex step of the calibration procedure is the determination of the X-ray source position. Currently, this is achieved using a biplane phantom with movable metallic rings on its top layer and fixed X-ray opaque markers on its bottom layer. The metallic rings must be moved to positions where at least two pairs of rings and markers are isocentric in the X-ray image. This "trial and error" calibration process requires acquisition of many X-ray images, a task that is both time-consuming and radiation intensive. An improved process was developed and tested for C-arm calibration. Video guidance was used to drive the calibration procedure to minimize both X-ray exposure and the time involved. For this, a homography between the X-ray and video images is estimated. This homography is valid for the plane at which the metallic rings are positioned and is employed to guide the calibration procedure. Eight users having varying calibration experience (i.e., 2 experts, 2 semi-experts, 4 novices) were asked to participate in the evaluation. The video-guided technique reduced the number of intra-operative X-ray calibration images by 89% and decreased the total time required by 59%. A video-based C-arm calibration method has been developed that improves the usability of the AR fluoroscope with a friendlier interface, reduced calibration time and clinically acceptable radiation doses.
NASA Astrophysics Data System (ADS)
Brown, Anthony M.
2018-01-01
Recent advances in unmanned aerial vehicle (UAV) technology have made UAVs an attractive possibility as an airborne calibration platform for astronomical facilities. This is especially true for arrays of telescopes spread over a large area, such as the Cherenkov Telescope Array (CTA). In this paper, the feasibility of using UAVs to calibrate CTA is investigated. Assuming a UAV at 1 km altitude above CTA, operating on astronomically clear nights with stratified, low atmospheric dust content, appropriate thermal protection for the calibration light source and an onboard photodiode to monitor its absolute light intensity, inter-calibration of CTA's telescopes of the same size class is found to be achievable with a 6-8% uncertainty. For cross-calibration of different telescope size classes, a systematic uncertainty of 8-10% is found to be achievable. Importantly, equipping the UAV with a multi-wavelength calibration light source affords us the ability to monitor the wavelength-dependent degradation of CTA telescopes' optical systems, allowing us not only to maintain this 6-10% uncertainty after the first few years of telescope deployment, but also to accurately account for the effect of multi-wavelength degradation on the cross-calibration of CTA by other techniques, namely with images of air showers and local muons. A UAV-based system thus provides CTA with several independent and complementary methods of cross-calibrating the optical throughput of individual telescopes. Furthermore, housing environmental sensors on the UAV system allows us not only to minimise the systematic uncertainty associated with the atmospheric transmission of the calibration signal, but also to map the dust content above CTA and to monitor the temperature, humidity and pressure profiles of the first kilometre of atmosphere above CTA with each UAV flight.
Yohannes, Indra; Kolditz, Daniel; Langner, Oliver; Kalender, Willi A
2012-03-07
Tissue- and water-equivalent materials (TEMs) are widely used in quality assurance and calibration procedures, both in radiodiagnostics and radiotherapy. In radiotherapy, particularly, TEMs are often used for computed tomography (CT) number calibration in treatment planning systems. However, currently available TEMs may not be very accurate in the determination of the calibration curves due to their limitations in mimicking the radiation characteristics of the corresponding real tissues in both the low- and high-energy ranges. Therefore, we are proposing a new formulation of TEMs using a stoichiometric analysis method to obtain TEMs for calibration purposes. We combined the stoichiometric calibration and the basic data method to compose base materials to develop TEMs matching standard real tissues from ICRU Reports 44 and 46. First, the CT numbers of six materials with known elemental compositions were measured to obtain constants for the stoichiometric calibration. The results of the stoichiometric calibration were used together with the basic data method to formulate new TEMs. These new TEMs were scanned to validate their CT numbers. The electron density and stopping power calibration curves were also generated. The absolute differences of the measured CT numbers of the new TEMs were less than 4 HU for the soft tissues and less than 22 HU for the bone compared to the ICRU real tissues. Furthermore, the calculated relative electron density and electron and proton stopping powers of the new TEMs differed by less than 2% from those of the corresponding ICRU real tissues. The new TEMs, which were formulated using the proposed technique, increase the simplicity of the calibration process while preserving the accuracy of the stoichiometric calibration.
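One building block of stoichiometric CT calibrations like the one above is computing a material's electron density relative to water from its mass composition. The sketch below uses textbook Z and A values and a hypothetical polyethylene-like plastic; it is an illustration of the quantity, not the paper's procedure.

```python
# Sketch: relative electron density from elemental mass fractions,
# rho_e,rel = rho * sum(w_i * Z_i / A_i) / (rho_w * sum_water).
# Compositions and the example material are illustrative.

# Atomic number Z and atomic mass A for a few elements
ZA = {'H': (1, 1.008), 'C': (6, 12.011), 'O': (8, 15.999)}

def electrons_per_gram(weights):
    """sum(w_i * Z_i / A_i); Avogadro's number cancels in the ratio."""
    return sum(w * ZA[el][0] / ZA[el][1] for el, w in weights.items())

WATER = {'H': 0.112, 'O': 0.888}  # mass fractions of H2O

def relative_electron_density(density, weights, water_density=1.0):
    return (density * electrons_per_gram(weights)) / \
           (water_density * electrons_per_gram(WATER))

# Hypothetical polyethylene-like plastic, density 0.94 g/cm^3
rho_e_rel = relative_electron_density(0.94, {'H': 0.144, 'C': 0.856})
```

In the full stoichiometric method, the measured CT numbers of known materials fix the scanner-specific constants, and quantities like this one populate the CT-number-to-electron-density (or stopping power) calibration curve.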
Cao, Jianping; Xiong, Jianyin; Wang, Lixin; Xu, Ying; Zhang, Yinping
2016-09-06
Solid-phase microextraction (SPME) is regarded as a nonexhaustive sampling technique with a smaller extraction volume and a shorter extraction time than traditional sampling techniques and is hence widely used. The SPME sampling process is affected by the convection or diffusion effect along the coating surface, but this factor has seldom been studied. This paper derives an analytical model to characterize SPME sampling for semivolatile organic compounds (SVOCs) as well as for volatile organic compounds (VOCs) by considering the surface mass transfer process. Using this model, the chemical concentrations in a sample matrix can be conveniently calculated. In addition, the model can be used to determine the characteristic parameters (partition coefficient and diffusion coefficient) for typical SPME chemical samplings (SPME calibration). Experiments using SPME samplings of two typical SVOCs, dibutyl phthalate (DBP) in sealed chamber and di(2-ethylhexyl) phthalate (DEHP) in ventilated chamber, were performed to measure the two characteristic parameters. The experimental results demonstrated the effectiveness of the model and calibration method. Experimental data from the literature (VOCs sampled by SPME) were used to further validate the model. This study should prove useful for relatively rapid quantification of concentrations of different chemicals in various circumstances with SPME.
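The two characteristic parameters named above (a partition coefficient and a kinetic parameter) can be illustrated with a simple first-order uptake model of the kind commonly used for passive samplers. This is a deliberately simplified stand-in for the paper's full mass-transfer model; the parameter values are hypothetical.

```python
import math

# Sketch: first-order SPME-style uptake, m(t) = K * Vf * C * (1 - exp(-t/tau)).
# K (partition coefficient), Vf (fiber coating volume) and tau (kinetic
# time constant) are the characteristic parameters; values are hypothetical.

def extracted_mass(c_sample, k_partition, v_fiber, t, tau):
    """Mass extracted by the coating after sampling time t."""
    return k_partition * v_fiber * c_sample * (1.0 - math.exp(-t / tau))

def infer_concentration(mass, k_partition, v_fiber, t, tau):
    """Invert the uptake model to recover the sample concentration."""
    return mass / (k_partition * v_fiber * (1.0 - math.exp(-t / tau)))

# Round-trip with hypothetical values: C = 2.0, K = 100, Vf = 0.5, t = 30, tau = 10
m = extracted_mass(2.0, 100.0, 0.5, 30.0, 10.0)
c_back = infer_concentration(m, 100.0, 0.5, 30.0, 10.0)
```

Calibration, in this picture, is the determination of K and tau from controlled chamber exposures; once known, a single measured mass yields the unknown concentration via the inverse relation.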
NASA Technical Reports Server (NTRS)
Bailey, G. B.; Dwyer, J. L.; Meyer, D. J.
1988-01-01
Airborne Visible and Infrared Imaging Spectrometer (AVIRIS) data collected over a geologically diverse field site and over a nearby calibration site were analyzed and interpreted in efforts to document radiometric and geometric characteristics of AVIRIS, quantify and correct for detrimental sensor phenomena, and evaluate the utility of AVIRIS data for discriminating rock types and identifying their constituent mineralogy. AVIRIS data acquired for these studies exhibit a variety of detrimental artifacts and have lower signal-to-noise ratios than expected in the longer wavelength bands. Artifacts are both inherent in the image data and introduced during ground processing, but most may be corrected by appropriate processing techniques. Poor signal-to-noise characteristics of this AVIRIS data set limited the usefulness of the data for lithologic discrimination and mineral identification. Various data calibration techniques, based on field-acquired spectral measurements, were applied to the AVIRIS data. Major absorption features of hydroxyl-bearing minerals were resolved in the spectra of the calibrated AVIRIS data, and the presence of hydroxyl-bearing minerals at the corresponding ground locations was confirmed by field data.
NASA Astrophysics Data System (ADS)
Takahashi, Tomoko; Thornton, Blair
2017-12-01
This paper reviews methods to compensate for matrix effects and self-absorption during quantitative analysis of the composition of solids measured using Laser Induced Breakdown Spectroscopy (LIBS) and their applications to in-situ analysis. Methods to reduce matrix and self-absorption effects on calibration curves are first introduced. The conditions under which calibration curves are applicable to quantification of the composition of solid samples, and their limitations, are discussed. While calibration-free LIBS (CF-LIBS), which corrects matrix effects theoretically based on the Boltzmann distribution law and the Saha equation, has been applied in a number of studies, several requirements must be satisfied for the calculation of chemical compositions to be valid. Also, peaks of all elements contained in the target need to be detected, which is a bottleneck for in-situ analysis of unknown materials. Multivariate analysis techniques are gaining momentum in LIBS analysis. Among the available techniques, principal component regression (PCR) analysis and partial least squares (PLS) regression analysis, which can extract composition-related information from all spectral data, are widely established methods and have been applied to various fields, including in-situ applications in air and planetary exploration. Artificial neural networks (ANNs), with which non-linear effects can be modelled, have also been investigated as a quantitative method and their applications are introduced. The ability to make quantitative estimates based on LIBS signals is seen as a key element for the technique to gain wider acceptance as an analytical method, especially in in-situ applications. In order to accelerate this process, it is recommended that accuracy be described using common figures of merit which express the overall normalised accuracy, such as the normalised root mean square error (NRMSE), when comparing the accuracy obtained from different setups and analytical methods.
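The NRMSE figure of merit recommended above is straightforward to compute. The sketch below normalises by the observed range, one of several conventions in use; the convention should be stated alongside any reported value. The data are hypothetical.

```python
import math

# Sketch: normalised root mean square error (NRMSE), here normalised
# by the observed range. Other conventions divide by the mean or the
# standard deviation; state which one is used when reporting.

def nrmse(observed, predicted):
    n = len(observed)
    rmse = math.sqrt(sum((o - p) ** 2
                         for o, p in zip(observed, predicted)) / n)
    return rmse / (max(observed) - min(observed))

# Hypothetical reference vs predicted concentrations
obs = [10.0, 20.0, 30.0, 40.0]
pred = [12.0, 18.0, 33.0, 39.0]
score = nrmse(obs, pred)
```

Because the normalisation removes the concentration scale, NRMSE values from different elements, setups, or regression methods (calibration curves, PLS, ANNs) can be compared on a common footing, which is precisely the point the review makes.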
Skrdla, Peter J; Zhang, Dan
2014-03-01
The crystalline citrate salt (CS) of a developmental pharmaceutical compound, MK-Q, was investigated in this work from two different, but related, perspectives. In the first part of the paper, the apparent disproportionation kinetics were surveyed using two different slurry systems, one containing water and the other a pH 6.9 phosphate buffer, using time-dependent measurements of the solution pH or by acquiring online Raman spectra of the solids. While the CS is generally stable when stored as a solid under ambient conditions of temperature and humidity, its low pHmax (<3) facilitates rapid disproportionation in aqueous solution, particularly at higher pH values. The rate of disappearance of the CS was found to obey first-order (Noyes-Whitney/dissolution rate-limited) kinetics; however, the formation of the crystalline product form in the slurry system was observed to exhibit kinetics consistent with a heterogeneous nucleation-and-growth mechanism. In the second part of the paper, more sensitive offline measurements made using XRPD, DSC and FT-Raman spectroscopy were applied to the characterization of binary physical mixtures of the CS and free base (FB) crystalline forms of MK-Q to obtain a calibration curve for each technique. It was found that all calibration plots exhibited good linearity of response, with the limit of detection (LOD) for each technique estimated to be ≤7 wt% FB. While additional calibration curves would need to be constructed to allow for accurate quantitation in various slurry systems, the general feasibility of these techniques for detecting low levels of CS disproportionation is demonstrated.
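An LOD like the ≤7 wt% figure above is commonly estimated from a linear calibration curve as 3.3 times the residual standard deviation divided by the slope (the ICH-style convention); the paper's exact procedure may differ, and the slope and residual values below are hypothetical.

```python
# Sketch: limit of detection from a linear calibration curve using the
# common LOD = 3.3 * sigma / slope convention. Slope and residual
# standard deviation are hypothetical.

def lod_from_calibration(slope, residual_sd, k=3.3):
    """LOD in concentration units, given signal = slope * concentration."""
    return k * residual_sd / slope

# e.g. signal = 0.12 * (wt% FB) with residual SD of 0.25 signal units
lod_wt_percent = lod_from_calibration(slope=0.12, residual_sd=0.25)
```

With these illustrative numbers the LOD comes out just under 7 wt%, showing how a steeper calibration slope or lower residual scatter directly improves detectability of the free base.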
Absolute calorimetric calibration of low energy brachytherapy sources
NASA Astrophysics Data System (ADS)
Stump, Kurt E.
In the past decade there has been a dramatic increase in the use of permanent radioactive source implants in the treatment of prostate cancer. A small radioactive source encapsulated in a titanium shell is used in this type of treatment. The radioisotopes used are generally ¹²⁵I or ¹⁰³Pd. Both of these isotopes have relatively short half-lives, 59.4 days and 16.99 days, respectively, and have low-energy emissions and a low dose rate. These factors make these sources well suited for this application, but the calibration of these sources poses significant metrological challenges. The current standard calibration technique involves the measurement of ionization in air to determine the source air-kerma strength. While this has proved to be an improvement over previous techniques, the method has been shown to be metrologically impure and may not be the ideal means of calibrating these sources. Calorimetric methods have long been viewed as the most fundamental means of determining source strength for a radiation source, because calorimetry provides a direct measurement of source energy. However, due to the low energy and low power of the sources described above, current calorimetric methods are inadequate. This thesis presents work oriented toward developing novel methods to provide direct and absolute measurements of source power for low-energy, low-dose-rate brachytherapy sources. The method is the first use of an actively temperature-controlled radiation absorber using the electrical substitution method to determine the total contained source power of these sources. The instrument described operates at cryogenic temperatures. The method employed provides a direct measurement of source power. The work presented here is focused upon building a metrological foundation upon which to establish power-based calibrations of clinical-strength sources. To that end, instrument performance has been assessed for these source strengths.
The intent is to establish the limits of the current instrument to direct further work in this field. It has been found that for sources with powers above approximately 2 μW the instrument is able to determine the source power to within 7% of the value expected from the current source strength standard. For lower power sources, the agreement is still within the uncertainty of the power measurement, but the calorimeter noise dominates. Thus, to provide absolute calibration of lower power sources, additional measures must be taken. The conclusion of this thesis describes these measures and how they will improve the factors that limit the current instrument. The results of the work presented in this thesis establish the methodology of active radiometric calorimetry for the absolute calibration of radioactive sources. The method is an improvement over previous techniques in that the source measurements do not rely upon the thermal properties of the materials used or the heat flow pathways. The initial work presented here will help to shape future refinements of this technique to allow lower power sources to be calibrated with high precision and high accuracy.
Empirical transfer functions for stations in the Central California seismological network
Bakun, W.H.; Dratler, Jay
1976-01-01
A sequence of calibration signals, composed of a station identification code, a transient from the release of the seismometer mass at rest from a known displacement from the equilibrium position, and a transient from a known step in voltage applied to the amplifier input, is generated by the automatic daily calibration system (ADCS) now operational in the U.S. Geological Survey central California seismographic network. Documentation is presented for a sequence of interactive programs that compute, from the calibration data, the complex transfer functions for the seismographic system (ground motion through digitizer), the electronics (amplifier through digitizer), and the seismometer alone. The analysis utilizes the Fourier transform technique originally suggested by Espinosa et al. (1962). Section I is a general description of seismographic calibration. Section II contrasts the 'Fourier transform' and 'least-squares' techniques for analyzing transient calibration signals. Theoretical considerations underlying the Fourier transform technique used here are described in Section III. Section IV is a detailed description of the sequence of calibration signals generated by the ADCS. Section V is a brief 'cookbook' description of the calibration programs; Section VI contains a detailed sample program execution. Section VII suggests uses for the resulting empirical transfer functions. Supplemental interactive programs for generating smooth response functions, suitable for reducing seismic data to ground motion, are also documented in Section VII. Appendices A and B contain complete listings of the Fortran source codes, while Appendix C is an update containing preliminary results obtained from an analysis of some of the calibration signals from stations in the seismographic network near Oroville, California.
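The core of the Fourier transform technique above is forming the empirical transfer function as the ratio of the transforms of the recorded output transient and the known input transient, H(f) = Y(f)/X(f). A minimal sketch, using a plain DFT for clarity (a real implementation would use an FFT, windowing, and noise handling); the impulse-like test signals are illustrative.

```python
import cmath

# Sketch: empirical transfer function as the ratio of output and input
# discrete Fourier transforms, H[k] = Y[k] / X[k]. A plain O(n^2) DFT
# is used for clarity; signals below are illustrative.

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

def transfer_function(input_signal, output_signal, eps=1e-12):
    xin, xout = dft(input_signal), dft(output_signal)
    # Bins where the input has no energy carry no information
    return [o / i if abs(i) > eps else complex(float('nan'), 0.0)
            for i, o in zip(xin, xout)]

# A system that simply doubles its input has H[k] = 2 at every bin
h = transfer_function([1.0, 0.0, 0.0, 0.0], [2.0, 0.0, 0.0, 0.0])
```

For a calibration transient (mass release or voltage step), the same ratio evaluated over the usable frequency band gives amplitude and phase response directly, which is what the documented programs produce for each station.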
Virtual Reality Calibration for Telerobotic Servicing
NASA Technical Reports Server (NTRS)
Kim, W.
1994-01-01
A virtual reality calibration technique has been developed that matches a virtual environment of simulated graphics models, in 3-D geometry and perspective, with actual camera views of the remote-site task environment. It enables high-fidelity preview/predictive displays with calibrated graphics overlaid on live video.
NASA Astrophysics Data System (ADS)
Thalman, R.; Baeza-Romero, M. T.; Ball, S. M.; Borrás, E.; Daniels, M. J. S.; Goodall, I. C. A.; Henry, S. B.; Karl, T.; Keutsch, F. N.; Kim, S.; Mak, J.; Monks, P. S.; Muñoz, A.; Orlando, J.; Peppe, S.; Rickard, A. R.; Ródenas, M.; Sánchez, P.; Seco, R.; Su, L.; Tyndall, G.; Vázquez, M.; Vera, T.; Waxman, E.; Volkamer, R.
2014-08-01
The α-dicarbonyl compounds glyoxal (CHOCHO) and methyl glyoxal (CH3C(O)CHO) are produced in the atmosphere by the oxidation of hydrocarbons, and emitted directly from pyrogenic sources. Measurements of ambient concentrations inform about the rate of hydrocarbon oxidation, oxidative capacity, and secondary organic aerosol (SOA) formation. We present results from a comprehensive instrument comparison effort at two simulation chamber facilities in the US and Europe that included 9 instruments and 7 different measurement techniques: Broadband Cavity Enhanced Absorption Spectroscopy (BBCEAS), Cavity Enhanced Differential Optical Absorption Spectroscopy (CE-DOAS), White-cell DOAS, Fourier Transform Infra-Red Spectroscopy (FTIR, two separate instruments), Laser Induced Phosphorescence (LIP), Solid Phase Micro Extraction (SPME), and Proton Transfer Reaction Mass Spectrometry (PTR-ToF-MS, two separate instruments; methyl glyoxal only, as no significant response was observed for glyoxal). Experiments at the National Center for Atmospheric Research (NCAR) compare 3 independent sources of calibration as a function of temperature (293 K to 330 K). Calibrations from absorption cross-section spectra at UV-visible and IR wavelengths are found to agree within 2% for glyoxal and 4% for methyl glyoxal at all temperatures; further calibrations based on ion-molecule rate constant calculations agreed within 5% for methyl glyoxal at all temperatures. At the EUropean PHOtoREactor (EUPHORE) all measurements are calibrated from the same UV-visible spectra (either directly or indirectly), thus minimizing potential systematic bias. We find excellent linearity under idealized conditions (pure glyoxal or methyl glyoxal, R2 > 0.96), and in complex gas mixtures characteristic of dry photochemical smog systems (o-xylene/NOx and isoprene/NOx, R2 > 0.95; R2 ~ 0.65 for offline SPME measurements of methyl glyoxal).
The correlations are more variable in humid ambient air mixtures (RH > 45%) for methyl glyoxal (0.58 < R2 < 0.68) than for glyoxal (0.79 < R2 < 0.99). The intercepts of the correlations were mostly insignificant; slopes varied by less than 5% for instruments that also measure NO2. For glyoxal and methyl glyoxal the slopes varied by less than 12% and 17%, respectively (both 3-sigma), between inherently calibrated instruments (i.e., calibration from knowledge of the absorption cross-section). We find a larger variability among in situ techniques that employ external calibration sources (75% to 90%, 3-sigma), and/or techniques that employ offline analysis. Our inter-comparison reveals differences among the precision and detection limits reported in the literature, and enables comparison on a common basis by observing a common airmass. Finally, we evaluate the influence of interfering species (e.g., NO2, O3 and H2O) of relevance in field and laboratory applications. Techniques now exist to conduct fast and accurate measurements of glyoxal at ambient concentrations, and of methyl glyoxal under simulated conditions. However, techniques to measure methyl glyoxal at ambient concentrations remain a challenge, and further development would be desirable.
The development of an electrochemical technique for in situ calibrating of combustible gas detectors
NASA Technical Reports Server (NTRS)
Shumar, J. W.; Lantz, J. B.; Schubert, F. H.
1976-01-01
A program to determine the feasibility of performing in situ calibration of combustible gas detectors was successfully completed. Several possible techniques for performing the in situ calibration were proposed. The most promising approach involved the use of a miniature water vapor electrolysis cell to generate hydrogen within the flame arrestor of the combustible gas detector being calibrated. A preliminary breadboard of the in situ calibration hardware was designed, fabricated, and assembled. The breadboard equipment consisted of a commercially available combustible gas detector, modified to incorporate a water vapor electrolysis cell, and the instrumentation required for controlling the water vapor electrolysis and for controlling and calibrating the combustible gas detector. The results showed that operating the water vapor electrolysis cell at a given current density for a specific time period produced a hydrogen concentration plateau within the flame arrestor of the combustible gas detector.
Infrared Stokes polarimetry and spectropolarimetry
NASA Astrophysics Data System (ADS)
Kudenov, Michael William
In this work, three methods of measuring the polarization state of light in the thermal infrared (3-12 μm) are modeled, simulated, calibrated, and experimentally verified in the laboratory. The first utilizes the method of channeled spectropolarimetry (CP) to encode the Stokes polarization parameters onto the optical power spectrum. This channeled spectral technique is implemented with the use of two Yttrium Vanadate (YVO4) crystal retarders. A basic mathematical model for the system is presented, showing that all the Stokes parameters are directly present in the interferogram. Theoretical results are compared with real data from the system, an improved model is provided to simulate the effects of absorption within the crystal, and a modified calibration technique is introduced to account for this absorption. Lastly, effects due to interferometer instabilities on the reconstructions, including non-uniform sampling and interferogram translations, are investigated and techniques are employed to mitigate them. Second is the method of prismatic imaging polarimetry (PIP), which can be envisioned as the monochromatic application of channeled spectropolarimetry. Unlike CP, PIP encodes the 2-dimensional Stokes parameters in a scene onto spatial carrier frequencies. Moreover, the calibration techniques derived in the infrared for CP are extremely similar to those of the PIP. Consequently, the PIP technique is implemented with a set of four YVO4 crystal prisms. A mathematical model for the polarimeter is presented in which diattenuation due to Fresnel effects and dichroism in the crystal are included. An improved polarimetric calibration technique is introduced to remove the diattenuation effects, along with the relative radiometric calibration required for the PIP operating with a thermal background and large detector offsets. 
Data demonstrating emission polarization are presented from various blackbodies, which are compared to data from our Fourier transform infrared spectropolarimeter. Additionally, limitations in the PIP technique with regard to the spectral bandwidth and F/# of the imaging system are analyzed. A model able to predict the carrier frequency's fringe visibility is produced and experimentally verified, further reinforcing the PIP's limitations. The last technique is significantly different from CP or PIP and involves the simulation and calibration of a thermal infrared division-of-amplitude imaging Stokes polarimeter. For the first time, application of microbolometer focal plane array (FPA) technology to polarimetry is demonstrated. The sensor utilizes a wire-grid beamsplitter with imaging systems positioned at each output to analyze two orthogonal linear polarization states simultaneously. Combined with a form-birefringent wave plate, the system is capable of snapshot imaging polarimetry of any one Stokes parameter (S1, S2, or S3). Radiometric and polarimetric calibration procedures for the instrument are provided and the reduction matrices from the calibration are compared to rigorous coupled wave analysis (RCWA) and raytracing simulations. The design and optimization of the sensor's wire-grid beamsplitter and wave plate are presented, along with their corresponding prescriptions. Polarimetric calibration error due to the spectrally broadband nature of the instrument is also reviewed. Image registration techniques for the sensor are discussed and data from the instrument are presented, demonstrating a microbolometer's ability to measure the small intensity variations corresponding to polarized emission in natural environments.
ASCAL: A Microcomputer Program for Estimating Logistic IRT Item Parameters.
ERIC Educational Resources Information Center
Vale, C. David; Gialluca, Kathleen A.
ASCAL is a microcomputer-based program for calibrating items according to the three-parameter logistic model of item response theory. It uses a modified multivariate Newton-Raphson procedure for estimating item parameters. This study evaluated that procedure using Monte Carlo simulation techniques. The current version of ASCAL was then compared to…
NASA Astrophysics Data System (ADS)
Bruggeman, M.; Baeten, P.; De Boeck, W.; Carchon, R.
1996-02-01
Neutron coincidence counting is commonly used for the non-destructive assay of plutonium-bearing waste and for safeguards verification measurements. A major drawback of conventional coincidence counting is that a valid calibration is needed to convert a neutron coincidence count rate to a 240Pu equivalent mass (240Pu-eq). In waste assay, calibrations are made for representative waste matrices and source distributions. The actual waste, however, may have quite different matrices and source distributions than the calibration samples, which often biases the assay result. This paper presents a new neutron multiplicity sensitive coincidence counting technique that includes an auto-calibration of the neutron detection efficiency. The coincidence counting principle is based on the recording of one- and two-dimensional Rossi-alpha distributions triggered respectively by pulse pairs and by pulse triplets. Rossi-alpha distributions allow an easy discrimination between real and accidental coincidences and are intended to be measured by a PC-based fast time interval analyser. The Rossi-alpha distributions can be expressed in terms of a limited number of factorial moments of the neutron multiplicity distributions. The presented technique allows an unbiased measurement of the 240Pu-eq mass. The presented theory, referred to as Time Interval Analysis (TIA), is complementary to the Time Correlation Analysis (TCA) theories developed in the past, but is theoretically much simpler and allows a straightforward calculation of deadtime corrections and error propagation. Analytical expressions are derived for the Rossi-alpha distributions as a function of the factorial moments of the efficiency-dependent multiplicity distributions. The validity of the proposed theory is demonstrated and verified via Monte Carlo simulations of pulse trains and the subsequent analysis of the simulated data.
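The one-dimensional Rossi-alpha distribution described in this abstract can be sketched as a histogram of the delays between each trigger pulse and all subsequent pulses inside a fixed window: real coincidences pile up at short delays on top of a flat accidental background. The pulse train below is synthetic and the window and bin settings are illustrative assumptions, not values from the paper.

```python
import numpy as np

def rossi_alpha(times, window=100.0, nbins=20):
    """One-dimensional Rossi-alpha distribution: histogram of the delays
    of all subsequent pulses within `window` after each trigger pulse."""
    times = np.sort(np.asarray(times, dtype=float))
    delays = []
    for i, t0 in enumerate(times):
        # collect every later pulse that falls inside the time window
        j = i + 1
        while j < len(times) and times[j] - t0 <= window:
            delays.append(times[j] - t0)
            j += 1
    hist, edges = np.histogram(delays, bins=nbins, range=(0.0, window))
    return hist, edges

# hypothetical pulse train: correlated pairs on top of a random background
rng = np.random.default_rng(0)
bg = rng.uniform(0, 1e4, 200)                 # accidental (uncorrelated) pulses
parents = rng.uniform(0, 1e4, 50)
pairs = np.concatenate([parents, parents + rng.exponential(10.0, 50)])
hist, edges = rossi_alpha(np.concatenate([bg, pairs]), window=100.0)
```

In the real technique the analogous two-dimensional distribution (triggered by pulse triplets) is recorded as well, and both are fitted in terms of the factorial moments of the multiplicity distribution.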
Intrinsic coincident linear polarimetry using stacked organic photovoltaics.
Roy, S Gupta; Awartani, O M; Sen, P; O'Connor, B T; Kudenov, M W
2016-06-27
Polarimetry has widespread applications within atmospheric sensing, telecommunications, biomedical imaging, and target detection. Several existing methods of imaging polarimetry trade off the sensor's spatial resolution for polarimetric resolution, and often have some form of spatial registration error. To mitigate these issues, we have developed a system using oriented polymer-based organic photovoltaics (OPVs) that can preferentially absorb linearly polarized light. Additionally, the OPV cells can be made semitransparent, enabling multiple detectors to be cascaded along the same optical axis. Since each device performs a partial polarization measurement of the same incident beam, high temporal resolution is maintained with the potential for inherent spatial registration. In this paper, a Mueller matrix model of the stacked OPV design is provided. Based on this model, a calibration technique is developed and presented. This calibration technique and model are validated with experimental data, taken with a cascaded three cell OPV Stokes polarimeter, capable of measuring incident linear polarization states. Our results indicate polarization measurement error of 1.2% RMS and an average absolute radiometric accuracy of 2.2% for the demonstrated polarimeter.
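A minimal sketch of the Mueller-calculus measurement model behind a stacked polarimeter of this kind: each cell acts as a partial linear diattenuator whose photocurrent is the top row of its Mueller matrix dotted with the incident linear Stokes vector, and calibration amounts to inverting the stacked measurement matrix. The orientations and diattenuation values below are illustrative assumptions, and inter-cell transmission losses are ignored for clarity.

```python
import numpy as np

def analyzer_vector(theta, d):
    """Top row of the Mueller matrix of a partial linear diattenuator with
    diattenuation d at angle theta: the cell's intensity response to
    (S0, S1, S2)."""
    return 0.5 * np.array([1.0, d * np.cos(2 * theta), d * np.sin(2 * theta)])

# three cascaded cells at hypothetical orientations and diattenuations
angles = np.deg2rad([0.0, 45.0, 90.0])
diatt = [0.6, 0.6, 0.6]
W = np.vstack([analyzer_vector(a, d) for a, d in zip(angles, diatt)])

# calibration: invert the measurement matrix once, then recover Stokes
W_inv = np.linalg.inv(W)
s_true = np.array([1.0, 0.3, -0.2])   # incident linear Stokes (S0, S1, S2)
measurements = W @ s_true             # ideal cell photocurrents
s_est = W_inv @ measurements
```

In practice the rows of `W` come from the calibration procedure rather than a closed form, but the inversion step is the same.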
Huang, Hui; Liu, Li; Ngadi, Michael O; Gariépy, Claude; Prasher, Shiv O
2014-01-01
Marbling is an important quality attribute of pork. Detection of pork marbling usually involves subjective scoring, which adds cost and reduces efficiency for the processor. In this study, the ability to predict pork marbling using near-infrared (NIR) hyperspectral imaging (900-1700 nm) and appropriate image processing techniques was studied. Near-infrared images were collected from pork after marbling evaluation according to the current standard chart from the National Pork Producers Council. Image analysis techniques (Gabor filter, wide line detector, and spectral averaging) were applied to extract texture, line, and spectral features, respectively, from the NIR images. Samples were grouped into calibration and validation sets. Wavelength selection was performed on the calibration set by a stepwise regression procedure. Prediction models of pork marbling scores were built using multiple linear regression based on derivatives of mean spectra and line features at key wavelengths. The results showed that the derivatives of both texture and spectral features produced good results, with validation correlation coefficients of 0.90 and 0.86, respectively, using wavelengths of 961, 1186, and 1220 nm. The results revealed the great potential of the Gabor filter for analyzing NIR images of pork for effective and efficient objective evaluation of pork marbling.
Firefly as a novel swarm intelligence variable selection method in spectroscopy.
Goodarzi, Mohammad; dos Santos Coelho, Leandro
2014-12-10
A critical step in multivariate calibration is wavelength selection, which is used to build models with better prediction performance when applied to spectral data. Many feature selection techniques have been developed to date. Among them, those based on swarm intelligence optimization methodologies are particularly interesting, since they simulate animal and insect behavior to, e.g., find the shortest path between a food source and the nest. Decisions are made collectively, leading to a more robust search that is less prone to falling into local minima during the optimization cycle. This paper presents a novel feature selection approach for spectroscopic data, leading to more robust calibration models. The performance of the firefly algorithm, a swarm intelligence paradigm, was evaluated and compared with the genetic algorithm and particle swarm optimization. All three techniques were coupled with partial least squares (PLS) and applied to three spectroscopic data sets. They demonstrated improved prediction results in comparison to a PLS model built using all wavelengths. The results show that the firefly algorithm, as a novel swarm paradigm, selects fewer wavelengths while the prediction performance of the resulting PLS model stays the same.
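The core firefly update — each firefly moves toward every brighter (lower-objective) one with attractiveness decaying as exp(-γr²), plus a shrinking random walk — can be sketched as below. The toy objective stands in for the cross-validated calibration error that a wavelength-selection wrapper would evaluate; all parameter values are illustrative assumptions.

```python
import numpy as np

def firefly_minimize(objective, bounds, n=15, iters=80,
                     alpha=0.2, beta0=1.0, gamma=0.1, seed=0):
    """Minimal firefly algorithm: fireflies move toward brighter ones with
    attractiveness beta0*exp(-gamma*r^2) plus a decaying random walk."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, (n, len(lo)))
    f = np.array([objective(p) for p in x])
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if f[j] < f[i]:          # firefly j is brighter: move i toward j
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    x[i] = x[i] + beta * (x[j] - x[i]) \
                           + alpha * (rng.random(len(lo)) - 0.5)
                    x[i] = np.clip(x[i], lo, hi)
                    f[i] = objective(x[i])
        alpha *= 0.97                     # cool the random walk
    best = int(np.argmin(f))
    return x[best], f[best]

# toy 2-parameter objective standing in for a calibration error surface
obj = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
best_x, best_f = firefly_minimize(obj, [(-5.0, 5.0), (-5.0, 5.0)])
```

For actual wavelength selection the position vector would encode which wavelengths enter the PLS model, with the objective being the model's prediction error.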
Chen, Y. M.; Lin, P.; He, Y.; He, J. Q.; Zhang, J.; Li, X. L.
2016-01-01
A novel strategy based on near-infrared hyperspectral imaging and chemometrics was explored for rapidly quantifying the collision strength index of ethylene-vinyl acetate copolymer (EVAC) coverings in the field. Reflectance spectra of the EVAC coverings were obtained with a near-infrared hyperspectral imager, and collision analysis equipment was used to measure the collision intensity of the EVAC materials. Preprocessing algorithms were applied before calibration. The random frog and successive projections (SP) algorithms were used to extract fingerprint wavebands. A correlation model between the significant spectral curves, which reflect the cross-linking attributes of the inner organic molecules, and the degree of collision strength was built using support vector machine regression (SVMR). The SP-SVMR model attained a residual predictive deviation of 3.074, squared correlation coefficients of 93.48% and 93.05%, and root mean square errors of 1.963 and 2.091 for the calibration and validation sets, respectively, exhibiting the best forecast performance. The results indicate that near-infrared hyperspectral imaging combined with chemometrics can be used to rapidly determine the degree of collision strength of EVAC. PMID:26875544
Laser Energy Monitor for Double-Pulsed 2-Micrometer IPDA Lidar Application
NASA Technical Reports Server (NTRS)
Refaat, Tamer F.; Petros, Mulugeta; Remus, Ruben; Yu, Jirong; Singh, Upendra N.
2014-01-01
Integrated path differential absorption (IPDA) lidar is a remote sensing technique for monitoring different atmospheric species. The technique relies on wavelength differentiation between strong and weak absorbing features, normalized to the transmitted energy. 2-micron double-pulsed IPDA lidar is best suited for atmospheric carbon dioxide measurements. In this case, the transmitter produces two successive laser pulses separated by a short interval (200 microseconds) at a low repetition rate (10 Hz). Conventional laser energy monitors, based on thermal detectors, are suitable for low-repetition-rate single-pulse lasers. Due to the short pulse interval in double-pulsed lasers, thermal energy monitors underestimate the total transmitted energy, which leads to measurement biases and errors in the double-pulsed IPDA technique. The design and calibration of a 2-micron double-pulse laser energy monitor is presented. The design is based on high-speed, extended-range InGaAs pin quantum detectors capable of separating the two pulse events. Pulse integration is applied to convert the detected pulse power into energy. Results are compared to a photo-electro-magnetic (PEM) detector for impulse response verification. Calibration included comparing the three detection technologies in single-pulsed mode, then comparing the pin and PEM detectors in double-pulsed mode. Energy monitor linearity is also addressed.
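The pulse-integration step described here can be sketched numerically: digitize the detector output, window each of the two pulse events separately, and sum power samples times the sample period to get per-pulse energy. The sample rate and Gaussian pulse shapes below are assumptions for illustration, not instrument values.

```python
import numpy as np

# hypothetical digitized detector record: two laser pulses 200 us apart
fs = 100e6                                   # assumed 100 MS/s sample rate
dt = 1.0 / fs
t = np.arange(0.0, 400e-6, dt)
pulse = lambda t0, p0, w: p0 * np.exp(-0.5 * ((t - t0) / w) ** 2)
power = pulse(50e-6, 1.0, 1e-6) + pulse(250e-6, 0.8, 1e-6)   # watts

# integrate each pulse window separately to obtain per-pulse energy (joules)
window1 = (t >= 0.0) & (t < 200e-6)
window2 = (t >= 200e-6) & (t < 400e-6)
e1 = np.sum(power[window1]) * dt
e2 = np.sum(power[window2]) * dt
total_energy = e1 + e2
```

A thermal detector would report something closer to a single merged reading for both events, which is exactly the underestimate the abstract describes.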
Non-Uniformity Correction Using Nonlinear Characteristic Performance Curves for Calibration
NASA Astrophysics Data System (ADS)
Lovejoy, McKenna Roberts
Infrared imaging is an expansive field with many applications. Advances in infrared technology have led to greater demand from both commercial and military sectors. However, a known problem with infrared imaging is its non-uniformity, which stems from the fact that each pixel in an infrared focal plane array has its own photoresponse. Many factors, such as exposure time, temperature, and amplifier choice, affect how the pixels respond to incoming illumination and thus impact image uniformity. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration-based techniques commonly use a linear model to approximate the nonlinear response, which often leaves unacceptable levels of residual non-uniformity, and calibration often has to be repeated during use to continually correct the image. In this dissertation, alternatives to linear NUC algorithms are investigated. The goal is to determine and compare nonlinear non-uniformity correction algorithms; ideally the results will provide better NUC performance, with less residual non-uniformity, and reduce the need for recalibration. New approaches to nonlinear NUC, such as higher-order polynomials and exponentials, are considered. More specifically, a new gain equalization algorithm has been developed. The various nonlinear non-uniformity correction algorithms are compared with common linear non-uniformity correction algorithms. Performance is compared based on RMS errors, residual non-uniformity, and the impact quantization has on correction. Performance is improved by identifying and replacing bad pixels prior to correction; two bad pixel identification and replacement techniques are investigated and compared. Performance is presented in the form of simulation results as well as before-and-after images taken with short-wave infrared cameras. 
The initial results show, using a third-order polynomial with 16-bit precision, significant improvement over the one- and two-point correction algorithms. All algorithms have been implemented in software with satisfactory results, and the third-order gain equalization non-uniformity correction algorithm has been implemented in hardware.
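The per-pixel polynomial idea can be sketched as follows: simulate a mildly nonlinear photoresponse, fit a third-order polynomial mapping counts back to flux from a few uniform calibration sources, and check the residual non-uniformity on a new scene. The array size, response model, and flux levels are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (8, 8)
# assumed per-pixel photoresponse: counts = a*flux + b*flux^2 + offset
a = rng.uniform(0.8, 1.2, shape)
b = rng.uniform(-0.002, 0.002, shape)
off = rng.uniform(-10.0, 10.0, shape)
response = lambda flux: a * flux + b * flux**2 + off

# fit a third-order polynomial (counts -> flux) per pixel from uniform sources
cal_flux = np.linspace(10.0, 100.0, 6)
counts = np.stack([response(f) for f in cal_flux])        # shape (6, 8, 8)
coeffs = np.empty(shape + (4,))
for i in range(shape[0]):
    for j in range(shape[1]):
        coeffs[i, j] = np.polyfit(counts[:, i, j], cal_flux, 3)

# correct a new uniform scene; residual non-uniformity should shrink
raw = response(55.0)
corrected = np.array([[np.polyval(coeffs[i, j], raw[i, j])
                       for j in range(shape[1])]
                      for i in range(shape[0])])
```

A one- or two-point correction would fit only offset (and gain) here, leaving the quadratic term uncorrected; the cubic absorbs most of it.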
Sub-nanosecond clock synchronization and precision deep space tracking
NASA Technical Reports Server (NTRS)
Dunn, Charles; Lichten, Stephen; Jefferson, David; Border, James S.
1992-01-01
Interferometric spacecraft tracking is accomplished at the NASA Deep Space Network (DSN) by comparing the arrival time of electromagnetic spacecraft signals to ground antennas separated by baselines on the order of 8000 km. Clock synchronization errors within and between DSN stations directly impact the attainable tracking accuracy, with a 0.3 ns error in clock synchronization resulting in an 11 nrad angular position error. This level of synchronization is currently achieved by observing a quasar which is angularly close to the spacecraft just after the spacecraft observations. By determining the differential arrival times of the random quasar signal at the stations, clock synchronization and propagation delays within the atmosphere and within the DSN stations are calibrated. Recent developments in time transfer techniques may allow medium accuracy (50-100 nrad) spacecraft observations without near-simultaneous quasar-based calibrations. Solutions are presented for a global network of GPS receivers in which the formal errors in clock offset parameters are less than 0.5 ns. Comparisons of clock rate offsets derived from GPS measurements and from very long baseline interferometry and the examination of clock closure suggest that these formal errors are a realistic measure of GPS-based clock offset precision and accuracy. Incorporating GPS-based clock synchronization measurements into a spacecraft differential ranging system would allow tracking without near-simultaneous quasar observations. The impact on individual spacecraft navigation error sources due to elimination of quasar-based calibrations is presented. System implementation, including calibration of station electronic delays, is discussed.
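The quoted sensitivity can be reproduced with one line of arithmetic: a differential clock error Δt maps to an angular position error of roughly c·Δt divided by the baseline length, using the numbers given in the abstract.

```python
C = 299_792_458.0      # speed of light (m/s)
baseline = 8.0e6       # DSN baseline from the abstract (~8000 km, in m)
clock_err = 0.3e-9     # clock synchronization error (s)

# small-angle approximation: delay error over baseline gives angular error
angular_err_rad = C * clock_err / baseline
angular_err_nrad = angular_err_rad * 1e9   # close to the 11 nrad quoted
```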
NASA Technical Reports Server (NTRS)
Voss, Kenneth J.; McLean, Scott; Lewis, Marlon; Johnson, Carol; Flora, Stephanie; Feinholz, Michael; Yarbrough, Mark; Trees, Charles; Twardowski, Mike; Clark, Dennis
2010-01-01
Vicarious calibration of ocean color satellites involves the use of accurate surface measurements of water-leaving radiance to update and improve the system calibration of ocean color satellite sensors. An experiment was performed to compare a free-fall technique with the established MOBY measurement. We found in the laboratory that the radiance and irradiance instruments compared well within their estimated uncertainties for various spectral sources. The spectrally averaged differences between the NIST values for the sources and the instruments were less than 2.5% for the radiance sensors and less than 1.5% for the irradiance sensors. In the field, the sensors measuring the above-surface downwelling irradiance performed nearly as well as they had in the laboratory, with an average difference of less than 2%. While the water-leaving radiance, L(sub w), calculated from each instrument agreed in almost all cases within the combined instrument uncertainties (approximately 7%), there was a relative bias between the two instrument classes/techniques that varied spectrally. The spectrally averaged (400 nm to 600 nm) difference between the two instrument classes/techniques was 3.1%. However, the spectral variation resulted in the free-fall instruments being 0.2% lower at 450 nm and 5.9% higher at 550 nm. Based on the analysis of one matchup, the bias in L(sub w) was similar to that observed for L(sub u)(1 m) with both systems, indicating the difference did not come from propagating L(sub u)(1 m) to L(sub w).
Geometric artifacts reduction for cone-beam CT via L0-norm minimization without dedicated phantoms.
Gong, Changcheng; Cai, Yufang; Zeng, Li
2018-01-01
For cone-beam computed tomography (CBCT), transversal shifts of the rotation center are inevitable and result in geometric artifacts in CT images. In this work, we propose a novel geometric calibration method for CBCT, which can also be used in micro-CT. The symmetry property of the sinogram is used for the first calibration, and then the L0-norm of the gradient image of the reconstructed image is used as the cost function to be minimized for the second calibration. An iterative search method is adopted to pursue the local minimum of the L0-norm minimization problem. The transversal shift value is updated with a fixed step size within a search range determined by the first calibration. In addition, a graphics processing unit (GPU)-based FDK algorithm and acceleration techniques are designed to accelerate the calibration process of the presented method. In simulation experiments, the mean absolute difference (MAD) and the standard deviation (SD) of the transversal shift value were less than 0.2 pixels between the noise-free and noisy projection images, indicating highly accurate calibration with the new method. In real data experiments, the smaller entropies of the corrected images also indicated that higher-resolution images were acquired using the corrected projection data and that textures were well preserved. Study results also support the feasibility of applying the proposed method to other imaging modalities.
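The search principle — geometric misalignment smears edges, so the shift that minimizes an L0 surrogate of the gradient image is the correct one — can be illustrated with a toy stand-in. The real method reconstructs each candidate with a GPU FDK algorithm; here a disc phantom blurred in proportion to the residual shift plays that role, and every numeric value is invented for illustration.

```python
import numpy as np

def l0_gradient(img, eps=1e-3):
    """Surrogate L0 norm of the gradient image: count of non-negligible
    finite differences (smeared edges produce more of them)."""
    gx = np.diff(img, axis=0)
    gy = np.diff(img, axis=1)
    return np.count_nonzero(np.abs(gx) > eps) + np.count_nonzero(np.abs(gy) > eps)

def toy_reconstruction(shift_error):
    """Stand-in for FDK reconstruction: a disc phantom blurred in
    proportion to the residual rotation-center shift."""
    y, x = np.mgrid[-32:32, -32:32]
    img = (x**2 + y**2 < 20**2).astype(float)
    width = 1 + int(round(abs(shift_error)))
    kernel = np.ones(width) / width
    return np.apply_along_axis(lambda r: np.convolve(r, kernel, "same"), 1, img)

# iterative grid search for the transversal shift over a coarse range
true_shift = 3.0
candidates = np.arange(-5.0, 6.0, 1.0)
costs = [l0_gradient(toy_reconstruction(true_shift - c)) for c in candidates]
best_shift = candidates[int(np.argmin(costs))]
```

In the paper's scheme this grid would then be refined with a smaller step size around the minimum found by the first (sinogram-symmetry) calibration.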
Wiens, Curtis N.; Artz, Nathan S.; Jang, Hyungseok; McMillan, Alan B.; Reeder, Scott B.
2017-01-01
Purpose To develop an externally calibrated parallel imaging technique for three-dimensional multispectral imaging (3D-MSI) in the presence of metallic implants. Theory and Methods A fast, ultrashort echo time (UTE) calibration acquisition is proposed to enable externally calibrated parallel imaging techniques near metallic implants. The proposed calibration acquisition uses a broadband radiofrequency (RF) pulse to excite the off-resonance induced by the metallic implant, fully phase-encoded imaging to prevent in-plane distortions, and UTE to capture rapidly decaying signal. The performance of the externally calibrated parallel imaging reconstructions was assessed using phantoms and in vivo examples. Results Phantom and in vivo comparisons to self-calibrated parallel imaging acquisitions show that significant reductions in acquisition times can be achieved using externally calibrated parallel imaging with comparable image quality. Acquisition time reductions are particularly large for fully phase-encoded methods such as spectrally resolved fully phase-encoded three-dimensional (3D) fast spin-echo (SR-FPE), in which scan time reductions of up to 8 min were obtained. Conclusion A fully phase-encoded acquisition with broadband excitation and UTE enabled externally calibrated parallel imaging for 3D-MSI, eliminating the need for repeated calibration regions at each frequency offset. Significant reductions in acquisition time can be achieved, particularly for fully phase-encoded methods like SR-FPE. PMID:27403613
Automatic Calibration of a Semi-Distributed Hydrologic Model Using Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Bekele, E. G.; Nicklow, J. W.
2005-12-01
Hydrologic simulation models need to be calibrated and validated before they are used for operational predictions. Spatially-distributed hydrologic models generally have a large number of parameters to capture the various physical characteristics of a hydrologic system. Manual calibration of such models is a tedious and daunting task, and its success depends on the subjective assessment of a particular modeler, which includes knowledge of the basic approaches and interactions in the model. In order to alleviate these shortcomings, an automatic calibration model, which employs an evolutionary optimization technique known as Particle Swarm Optimizer (PSO) for parameter estimation, is developed. PSO is a heuristic search algorithm inspired by the social behavior of bird flocking and fish schooling. The newly-developed calibration model is integrated with the U.S. Department of Agriculture's Soil and Water Assessment Tool (SWAT). SWAT is a physically-based, semi-distributed hydrologic model that was developed to predict the long-term impacts of land management practices on water, sediment, and agricultural chemical yields in large complex watersheds with varying soils, land use, and management conditions. SWAT was calibrated for streamflow and sediment concentration. The calibration process involves parameter specification, whereby sensitive model parameters are identified, and parameter estimation. In order to reduce the number of parameters to be calibrated, parameterization was performed. The methodology is applied to a demonstration watershed known as Big Creek, located in southern Illinois. Application results show the effectiveness of the approach, and model predictions are significantly improved.
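A minimal particle swarm optimizer of the kind used for such automatic calibration can be sketched in a few lines. Here a two-parameter quadratic stands in for the streamflow/sediment error metric that a SWAT calibration run would evaluate, and all swarm settings are illustrative assumptions.

```python
import numpy as np

def pso(objective, bounds, n_particles=20, iters=60,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer. `bounds` is a sequence of
    (low, high) pairs, one per calibrated model parameter."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # inertia + pull toward personal best + pull toward global best
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, float(objective(gbest))

# hypothetical 2-parameter error surface standing in for a SWAT run
sse = lambda p: (p[0] - 0.4) ** 2 + (p[1] - 120.0) ** 2 / 1e4
best, best_f = pso(sse, [(0.0, 1.0), (0.0, 500.0)])
```

In the real system each objective evaluation is a full SWAT simulation compared against observed streamflow and sediment records, which is why the swarm's modest evaluation budget matters.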
ERIC Educational Resources Information Center
Downs, Nathan; Parisi, Alfio; Powell, Samantha; Turner, Joanna; Brennan, Chris
2010-01-01
A technique has previously been described for secondary school-aged children to make ultraviolet (UV) dosimeters from highlighter pen ink drawn onto strips of paper. This technique required digital comparison of exposed ink paper strips with unexposed ink paper strips to determine a simple calibration function relating the degree of ink fading to…
Overview of hypersonic CFD code calibration studies
NASA Technical Reports Server (NTRS)
Miller, Charles G.
1987-01-01
The topics are presented in viewgraph form and include the following: definitions of computational fluid dynamics (CFD) code validation; the climate in hypersonics at LaRC when the first 'designed' CFD code calibration study was initiated; methodology from the experimentalist's perspective; hypersonic facilities; measurement techniques; and CFD code calibration studies.
The CCD Photometric Calibration Cookbook
NASA Astrophysics Data System (ADS)
Palmer, J.; Davenhall, A. C.
This cookbook presents simple recipes for the photometric calibration of CCD frames. Using these recipes you can calibrate the brightness of objects measured in CCD frames into magnitudes in standard photometric systems, such as the Johnson-Morgan UBV system. The recipes use standard software available at all Starlink sites. The topics covered include: selecting standard stars, measuring instrumental magnitudes, and calibrating instrumental magnitudes into a standard system. The recipes are appropriate for use with data acquired with optical CCDs and filters, operated in standard ways, and describe the usual calibration technique of observing standard stars. The software is robust and reliable, but the techniques are usually not suitable where very high accuracy is required. In addition to the recipes and scripts, sufficient background material is presented to explain the procedures and techniques used. The treatment is deliberately practical rather than theoretical, in keeping with the aim of providing advice on the actual calibration of observations. This cookbook is aimed firmly at people who are new to astronomical photometry. Typical readers might have a set of photometric observations to reduce (perhaps observed by a colleague) or be planning a programme of photometric observations, perhaps for the first time. No prior knowledge of astronomical photometry is assumed. The cookbook is not aimed at experts in astronomical photometry. Many finer points are omitted for clarity and brevity. Also, in order to make the most accurate possible calibration of high-precision photometry, it is usually necessary to use bespoke software tailored to the observing programme and photometric system being used.
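The usual transformation from instrumental to standard magnitudes (ignoring colour terms) is m_std = m_inst + zp - k·X, with the zero point zp and atmospheric extinction coefficient k fitted by least squares from standard-star observations. The numbers below are synthetic standards constructed for illustration, not real catalogue values.

```python
import numpy as np

# synthetic standard-star observations: instrumental magnitude, airmass,
# and catalogue magnitude in the standard system (illustrative values)
m_inst = np.array([12.31, 13.05, 11.87, 12.66, 13.42])
airmass = np.array([1.10, 1.45, 1.20, 1.80, 1.30])
m_cat = np.array([11.99, 12.66, 11.53, 12.20, 13.06])

# transformation (no colour term): m_cat - m_inst = zp - k * X
A = np.column_stack([np.ones_like(airmass), -airmass])
zp, k = np.linalg.lstsq(A, m_cat - m_inst, rcond=None)[0]

def calibrate(m_instrumental, X):
    """Convert an instrumental magnitude at airmass X to the standard system."""
    return m_instrumental + zp - k * X

m_std = calibrate(12.50, 1.25)
```

With real data a colour term proportional to, e.g., (B-V) is usually added to the fit, exactly as the cookbook's recipes describe.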
NASA Technical Reports Server (NTRS)
Bard, Edouard; Hamelin, Bruno; Fairbanks, Richard G.; Zindler, Alan
1990-01-01
Uranium-thorium ages obtained by mass spectrometry from corals raised off the island of Barbados confirm the high precision of this technique over at least the past 30,000 years. Comparison of the U-Th ages with C-14 ages obtained on the Holocene samples shows that the U-Th ages are accurate, because they accord with the dendrochronological calibration. Before 9,000 yr BP, the C-14 ages are systematically younger than the U-Th ages, with a maximum difference of about 3500 yr at about 20,000 yr BP. The U-Th technique thus provides a way of calibrating the radiocarbon timescale beyond the range of dendrochronological calibration.
Analysis of Electric Vehicle DC High Current Conversion Technology
NASA Astrophysics Data System (ADS)
Yang, Jing; Bai, Jing-fen; Lin, Fan-tao; Lu, Da
2017-05-01
Against the background of electric vehicles, this paper explains the need for accurate metering of the electric energy of electric vehicle power batteries and analyzes their charging and discharging characteristics. A DC large-current converter is needed to achieve accurate calibration of power-battery electric energy metering. Several measuring methods based on shunts and on the magnetic induction principle are analyzed in detail. A charge/discharge calibration system principle for power batteries is proposed, and the ripple content and harmonic content on the AC and DC sides of the power batteries are simulated and analyzed. By comparing the different measurement principles, suitable DC large-current measurement methods for power batteries are proposed, and prospects for DC large-current measurement techniques are discussed.
Light curves of flat-spectrum radio sources (Jenness+, 2010)
NASA Astrophysics Data System (ADS)
Jenness, T.; Robson, E. I.; Stevens, J. A.
2010-05-01
Calibrated data for 143 flat-spectrum extragalactic radio sources are presented at a wavelength of 850um covering a 5-yr period from 2000 April. The data, obtained at the James Clerk Maxwell Telescope using the Submillimetre Common-User Bolometer Array (SCUBA) camera in pointing mode, were analysed using an automated pipeline process based on the Observatory Reduction and Acquisition Control - Data Reduction (ORAC-DR) system. This paper describes the techniques used to analyse and calibrate the data, and presents the data base of results along with a representative sample of the better-sampled light curves. A re-analysis of previously published data from 1997 to 2000 is also presented. The combined catalogue, comprising 10493 flux density measurements, provides a unique and valuable resource for studies of extragalactic radio sources. (2 data files).
NASA Technical Reports Server (NTRS)
Mach, D. M.; Koshak, W. J.
2007-01-01
A matrix calibration procedure has been developed that uniquely relates the electric fields measured at the aircraft with the external vector electric field and net aircraft charge. The calibration method can be generalized to any reasonable combination of electric field measurements and aircraft. A calibration matrix is determined for each aircraft that represents the individual instrument responses to the external electric field. The aircraft geometry and configuration of field mills (FMs) uniquely define the matrix. The matrix can then be inverted to determine the external electric field and net aircraft charge from the FM outputs. A distinct advantage of the method is that if one or more FMs need to be eliminated or deemphasized (e.g., due to a malfunction), it is a simple matter to reinvert the matrix without the malfunctioning FMs. To demonstrate the calibration technique, data are presented from several aircraft programs (ER-2, DC-8, Altus, and Citation).
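The inversion step described above can be sketched as a least-squares solve: each field mill output is modeled as a linear combination of the three external field components and the net aircraft charge, and dropping a malfunctioning mill is just deleting its row before re-inverting. A minimal stdlib-Python illustration with a hypothetical 5-mill calibration matrix (not a real aircraft's matrix):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a square system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def least_squares(M, v):
    """Solve M x ~ v via the normal equations M^T M x = M^T v."""
    n = len(M[0])
    A = [[sum(M[r][i] * M[r][j] for r in range(len(M))) for j in range(n)]
         for i in range(n)]
    b = [sum(M[r][i] * v[r] for r in range(len(M))) for i in range(n)]
    return solve(A, b)

# Hypothetical 5-mill calibration matrix (rows: mills; columns: Ex, Ey, Ez, Q).
M = [[1.0, 0.2, 0.1, 1.0],
     [0.1, 1.0, 0.3, 1.0],
     [0.2, 0.1, 1.0, 1.0],
     [0.5, 0.5, 0.2, 1.0],
     [0.3, 0.7, 0.6, 1.0]]
truth = [2.0, -1.0, 0.5, 3.0]  # external field components + net charge
outputs = [sum(m * t for m, t in zip(row, truth)) for row in M]

x_all = least_squares(M, outputs)
# Mill 2 malfunctions: drop its row and output, then simply re-invert.
x_drop = least_squares(M[:1] + M[2:], outputs[:1] + outputs[2:])
```

With more mills than unknowns, the system stays solvable after dropping a row, which is exactly the flexibility the abstract highlights.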
Two imaging techniques for 3D quantification of pre-cementation space for CAD/CAM crowns.
Rungruanganunt, Patchanee; Kelly, J Robert; Adams, Douglas J
2010-12-01
Internal three-dimensional (3D) "fit" of prostheses to prepared teeth is likely more important clinically than "fit" judged only at the level of the margin (i.e. marginal "opening"). This work evaluates two techniques for quantitatively defining 3D "fit", both using pre-cementation space impressions: X-ray microcomputed tomography (micro-CT) and quantitative optical analysis. Both techniques are of interest for comparison of CAD/CAM system capabilities and for documenting "fit" as part of clinical studies. Pre-cementation space impressions were taken of a single zirconia coping on its die using a low viscosity poly(vinyl siloxane) impression material. Calibration specimens of this material were fabricated between the measuring platens of a micrometre. Both calibration curves and pre-cementation space impression data sets were obtained by examination using micro-CT and quantitative optical analysis. Regression analysis was used to compare calibration curves with calibration sets. Micro-CT calibration data showed tighter 95% confidence intervals and was able to measure over a wider thickness range than for the optical technique. Regions of interest (e.g., lingual, cervical) were more easily analysed with optical image analysis and this technique was more suitable for extremely thin impression walls (<10-15μm). Specimen preparation is easier for micro-CT and segmentation parameters appeared to capture dimensions accurately. Both micro-CT and the optical method can be used to quantify the thickness of pre-cementation space impressions. Each has advantages and limitations but either technique has the potential for use as part of clinical studies or CAD/CAM protocol optimization.
In situ calibration of inductively coupled plasma-atomic emission and mass spectroscopy
Braymen, Steven D.
1996-06-11
A method and apparatus for in situ addition calibration of an inductively coupled plasma atomic emission spectrometer or mass spectrometer using a precision gas metering valve to introduce a volatile calibration gas of an element of interest directly into an aerosol particle stream. The present in situ calibration technique is suitable for various remote, on-site sampling systems such as laser ablation or nebulization.
Internal Water Vapor Photoacoustic Calibration
NASA Technical Reports Server (NTRS)
Pilgrim, Jeffrey S.
2009-01-01
Water vapor absorption is ubiquitous in the infrared wavelength range where photoacoustic trace gas detectors operate. This technique allows for discontinuous wavelength tuning by temperature-jumping a laser diode from one range to another within a time span suitable for photoacoustic calibration. The use of an internal calibration eliminates the need for external calibrated reference gases. Commercial applications include an improvement of photoacoustic spectrometers in all fields of use.
Instrument intercomparison of glyoxal, methyl glyoxal and NO2 under simulated atmospheric conditions
NASA Astrophysics Data System (ADS)
Thalman, R.; Baeza-Romero, M. T.; Ball, S. M.; Borrás, E.; Daniels, M. J. S.; Goodall, I. C. A.; Henry, S. B.; Karl, T.; Keutsch, F. N.; Kim, S.; Mak, J.; Monks, P. S.; Muñoz, A.; Orlando, J.; Peppe, S.; Rickard, A. R.; Ródenas, M.; Sánchez, P.; Seco, R.; Su, L.; Tyndall, G.; Vázquez, M.; Vera, T.; Waxman, E.; Volkamer, R.
2015-04-01
The α-dicarbonyl compounds glyoxal (CHOCHO) and methyl glyoxal (CH3C(O)CHO) are produced in the atmosphere by the oxidation of hydrocarbons and emitted directly from pyrogenic sources. Measurements of ambient concentrations inform about the rate of hydrocarbon oxidation, oxidative capacity, and secondary organic aerosol (SOA) formation. We present results from a comprehensive instrument comparison effort at two simulation chamber facilities in the US and Europe that included nine instruments, and seven different measurement techniques: broadband cavity enhanced absorption spectroscopy (BBCEAS), cavity-enhanced differential optical absorption spectroscopy (CE-DOAS), white-cell DOAS, Fourier transform infrared spectroscopy (FTIR, two separate instruments), laser-induced phosphorescence (LIP), solid-phase micro extraction (SPME), and proton transfer reaction mass spectrometry (PTR-ToF-MS, two separate instruments; for methyl glyoxal only because no significant response was observed for glyoxal). Experiments at the National Center for Atmospheric Research (NCAR) compare three independent sources of calibration as a function of temperature (293-330 K). Calibrations from absorption cross-section spectra at UV-visible and IR wavelengths are found to agree within 2% for glyoxal, and 4% for methyl glyoxal at all temperatures; further calibrations based on ion-molecule rate constant calculations agreed within 5% for methyl glyoxal at all temperatures. At the European Photoreactor (EUPHORE) all measurements are calibrated from the same UV-visible spectra (either directly or indirectly), thus minimizing potential systematic bias. We find excellent linearity under idealized conditions (pure glyoxal or methyl glyoxal, R2 > 0.96), and in complex gas mixtures characteristic of dry photochemical smog systems (o-xylene/NOx and isoprene/NOx, R2 > 0.95; R2 ∼ 0.65 for offline SPME measurements of methyl glyoxal). 
The correlations are more variable in humid ambient air mixtures (RH > 45%) for methyl glyoxal (0.58 < R2 < 0.68) than for glyoxal (0.79 < R2 < 0.99). The intercepts of correlations were insignificant for the most part (below the instruments' experimentally determined detection limits); slopes further varied by less than 5% for instruments that could also simultaneously measure NO2. For glyoxal and methyl glyoxal the slopes varied by less than 12 and 17% (both 3-σ) between direct absorption techniques (i.e., calibration from knowledge of the absorption cross section). We find a larger variability among in situ techniques that employ external calibration sources (75-90%, 3-σ), and/or techniques that employ offline analysis. Our intercomparison reveals existing differences in reports about precision and detection limits in the literature, and enables comparison on a common basis by observing a common air mass. Finally, we evaluate the influence of interfering species (e.g., NO2, O3 and H2O) of relevance in field and laboratory applications. Techniques now exist to conduct fast and accurate measurements of glyoxal at ambient concentrations, and methyl glyoxal under simulated conditions. However, techniques to measure methyl glyoxal at ambient concentrations remain a challenge, and would be desirable.
The phantom robot - Predictive displays for teleoperation with time delay
NASA Technical Reports Server (NTRS)
Bejczy, Antal K.; Kim, Won S.; Venema, Steven C.
1990-01-01
An enhanced teleoperation technique for time-delayed bilateral teleoperator control is discussed. The control technique selected for time delay is based on the use of a high-fidelity graphics phantom robot that is being controlled in real time (without time delay) against the static task image. Thus, the motion of the phantom robot image on the monitor predicts the motion of the real robot. The real robot's motion will follow the phantom robot's motion on the monitor with the communication time delay implied in the task. Real-time high-fidelity graphics simulation of a PUMA arm is generated and overlaid on the actual camera view of the arm. A simple camera calibration technique is used for calibrated graphics overlay. A preliminary experiment is performed with the predictive display by using a very simple tapping task. The results with this simple task indicate that predictive display enhances the human operator's telemanipulation task performance significantly during free motion when there is a long time delay. It appears, however, that either two-view or stereoscopic predictive displays are necessary for general three-dimensional tasks.
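The predictive-display idea above reduces to a simple simulation: the phantom tracks operator commands instantly, while the real arm receives each command only after the round-trip delay. A minimal sketch (one-dimensional positions, hypothetical command stream; not the PUMA control code):

```python
from collections import deque

class DelayedRobot:
    """Real arm that receives commands after a fixed round-trip delay."""
    def __init__(self, delay_steps):
        self.buffer = deque([0.0] * delay_steps, maxlen=delay_steps)
        self.position = 0.0

    def step(self, command):
        delayed = self.buffer[0]      # command issued delay_steps ago
        self.buffer.append(command)   # enqueue the new command
        self.position = delayed
        return self.position

# Operator commands; the phantom mirrors them instantly, the real arm lags.
commands = [0.1 * i for i in range(10)]
robot = DelayedRobot(delay_steps=3)
phantom_track = []
real_track = []
for c in commands:
    phantom_track.append(c)           # predicted motion, no delay
    real_track.append(robot.step(c))  # actual motion, delayed
```

The phantom trajectory is what the operator sees and reacts to; the real trajectory reproduces it shifted by the communication delay, as the abstract describes.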
Harrison, Peter M C; Collins, Tom; Müllensiefen, Daniel
2017-06-15
Modern psychometric theory provides many useful tools for ability testing, such as item response theory, computerised adaptive testing, and automatic item generation. However, these techniques have yet to be integrated into mainstream psychological practice. This is unfortunate, because modern psychometric techniques can bring many benefits, including sophisticated reliability measures, improved construct validity, avoidance of exposure effects, and improved efficiency. In the present research we therefore use these techniques to develop a new test of a well-studied psychological capacity: melodic discrimination, the ability to detect differences between melodies. We calibrate and validate this test in a series of studies. Studies 1 and 2 respectively calibrate and validate an initial test version, while Studies 3 and 4 calibrate and validate an updated test version incorporating additional easy items. The results support the new test's viability, with evidence for strong reliability and construct validity. We discuss how these modern psychometric techniques may also be profitably applied to other areas of music psychology and psychological science in general.
Wu, Hongpeng; Dong, Lei; Zheng, Huadan; Yu, Yajun; Ma, Weiguang; Zhang, Lei; Yin, Wangbao; Xiao, Liantuan; Jia, Suotang; Tittel, Frank K.
2017-01-01
Quartz-enhanced photoacoustic spectroscopy (QEPAS) is a sensitive gas detection technique which requires frequent calibration and has a long response time. Here we report beat frequency (BF) QEPAS that can be used for ultra-sensitive calibration-free trace-gas detection and fast spectral scan applications. The resonance frequency and Q-factor of the quartz tuning fork (QTF) as well as the trace-gas concentration can be obtained simultaneously by detecting the beat frequency signal generated when the transient response signal of the QTF is demodulated at its non-resonance frequency. Hence, BF-QEPAS avoids a calibration process and permits continuous monitoring of a targeted trace gas. Three semiconductor lasers were selected as the excitation source to verify the performance of the BF-QEPAS technique. The BF-QEPAS method is capable of measuring lower trace-gas concentration levels with shorter averaging times as compared to conventional PAS and QEPAS techniques and determines the electrical QTF parameters precisely.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clegg, Samuel M; Barefield, James E; Wiens, Roger C
2008-01-01
Quantitative analysis with LIBS traditionally employs calibration curves that are complicated by the chemical matrix effects. These chemical matrix effects influence the LIBS plasma and the ratio of elemental composition to elemental emission line intensity. Consequently, LIBS calibration typically requires a priori knowledge of the unknown, in order for a series of calibration standards similar to the unknown to be employed. In this paper, three new Multivariate Analysis (MVA) techniques are employed to analyze the LIBS spectra of 18 disparate igneous and highly-metamorphosed rock samples. Partial Least Squares (PLS) analysis is used to generate a calibration model from which unknown samples can be analyzed. Principal Components Analysis (PCA) and Soft Independent Modeling of Class Analogy (SIMCA) are employed to generate a model and predict the rock type of the samples. These MVA techniques appear to exploit the matrix effects associated with the chemistries of these 18 samples.
NASA Technical Reports Server (NTRS)
Murphy, J.; Butlin, T.; Duff, P.; Fitzgerald, A.
1984-01-01
A technique for the radiometric correction of LANDSAT-4 Thematic Mapper data was proposed by the Canada Center for Remote Sensing. Subsequent detailed observations of raw image data, raw radiometric calibration data and background measurements extracted from the raw data stream on High Density Tape highlighted major shortcomings in the proposed method which if left uncorrected, can cause severe radiometric striping in the output product. Results are presented which correlate measurements of the DC background with variations in both image data background and calibration samples. The effect on both raw data and on data corrected using the earlier proposed technique is explained, and the correction required for these factors as a function of individual scan line number for each detector is described. It is shown how the revised technique can be incorporated into an operational environment.
Configurations and calibration methods for passive sampling techniques.
Ouyang, Gangfeng; Pawliszyn, Janusz
2007-10-19
Passive sampling technology has developed very quickly in the past 15 years, and is widely used for the monitoring of pollutants in different environments. The design and quantification of passive sampling devices require an appropriate calibration method. Current calibration methods that exist for passive sampling, including equilibrium extraction, linear uptake, and kinetic calibration, are presented in this review. A number of state-of-the-art passive sampling devices that can be used for aqueous and air monitoring are introduced according to their calibration methods.
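The three calibration regimes named in the review can be illustrated with a one-compartment uptake model: linear uptake at short times, equilibrium at long times, and kinetic calibration in between. A minimal sketch with illustrative parameter values (not from any specific sampler):

```python
import math

def sampler_uptake(C_env, K, k, t):
    """One-compartment uptake: amount approaches the equilibrium value K*C_env."""
    return K * C_env * (1.0 - math.exp(-k * t))

K, k = 50.0, 0.05            # partition coefficient, exchange rate (1/h)
C_true = 2.0                 # ambient concentration (arbitrary units)
n_24h = sampler_uptake(C_true, K, k, 24.0)

# Kinetic calibration inverts the uptake model to recover the concentration
# from the amount accumulated after a known deployment time.
C_est = n_24h / (K * (1.0 - math.exp(-k * 24.0)))
```

At small k*t the model reduces to the linear-uptake (sampling-rate) regime, and at large k*t to equilibrium extraction, so the one equation covers all three calibration methods discussed in the review.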
Yang, Hao; Xu, Xiangyang; Neumann, Ingo
2014-11-19
Terrestrial laser scanning technology (TLS) is a technique for rapidly acquiring three-dimensional information. In this paper we investigate the health assessment of concrete structures with a Finite Element Method (FEM) model based on TLS. The work focuses on the benefits of 3D TLS in the generation and calibration of FEM models, in order to build a convenient, efficient and intelligent model that can be widely used for the detection and assessment of bridges, buildings, subways and other objects. After comparing the finite element simulation with surface-based measurement data from TLS, the FEM model is found to be acceptable, with an error of less than 5%. The benefit of TLS lies mainly in the possibility of a surface-based validation of the results predicted by the FEM model.
Particle swarm optimization algorithm based low cost magnetometer calibration
NASA Astrophysics Data System (ADS)
Ali, A. S.; Siddharth, S.; Syed, Z.; El-Sheimy, N.
2011-12-01
Inertial Navigation Systems (INS) consist of accelerometers, gyroscopes and a microprocessor that together provide inertial digital data from which position and orientation are obtained by integrating the specific forces and rotation rates. In addition to the accelerometers and gyroscopes, magnetometers can be used to derive the absolute user heading based on Earth's magnetic field. Unfortunately, the measurements of the magnetic field obtained with low cost sensors are corrupted by several errors, including manufacturing defects and external electro-magnetic fields. Consequently, proper calibration of the magnetometer is required to achieve high accuracy heading measurements. In this paper, a Particle Swarm Optimization (PSO) based calibration algorithm is presented to estimate the values of the bias and scale factor of a low cost magnetometer. The main advantage of this technique is the use of artificial intelligence, which does not need any error modeling or awareness of the nonlinearity. The estimated bias and scale factor errors from the proposed algorithm improve the heading accuracy, and the results are statistically significant. The technique can also help in the development of Pedestrian Navigation Devices (PNDs) when combined with INS and GPS/Wi-Fi, especially in indoor environments.
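A PSO-based bias/scale-factor calibration of the kind described can be sketched by minimizing the deviation of corrected readings from the known local field magnitude. This is a simplified two-axis version with synthetic data and illustrative PSO constants (the paper's problem is three-axis and uses real sensor data):

```python
import math
import random

random.seed(1)
B = 50.0  # local field magnitude (e.g. uT), assumed known

# Simulate raw readings corrupted by bias and scale-factor errors.
true_bias = (8.0, -5.0)
true_scale = (1.2, 0.9)
headings = [2 * math.pi * i / 36 for i in range(36)]
raw = [(true_scale[0] * B * math.cos(h) + true_bias[0],
        true_scale[1] * B * math.sin(h) + true_bias[1]) for h in headings]

def cost(p):
    """Squared deviation of corrected field magnitude from B, summed over readings."""
    bx, by, sx, sy = p
    if abs(sx) < 1e-9 or abs(sy) < 1e-9:
        return float("inf")
    err = 0.0
    for mx, my in raw:
        mag2 = ((mx - bx) / sx) ** 2 + ((my - by) / sy) ** 2
        err += (mag2 - B * B) ** 2
    return err

# Minimal particle swarm: positions, velocities, personal/global bests.
n, iters = 30, 150
lo, hi = [-20, -20, 0.5, 0.5], [20, 20, 1.5, 1.5]
pos = [[random.uniform(l, h) for l, h in zip(lo, hi)] for _ in range(n)]
vel = [[0.0] * 4 for _ in range(n)]
pbest = [p[:] for p in pos]
pcost = [cost(p) for p in pos]
g = min(range(n), key=lambda i: pcost[i])
gbest, gcost = pbest[g][:], pcost[g]
initial_gcost = gcost
for _ in range(iters):
    for i in range(n):
        for d in range(4):
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.5 * random.random() * (pbest[i][d] - pos[i][d])
                         + 1.5 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        c = cost(pos[i])
        if c < pcost[i]:
            pbest[i], pcost[i] = pos[i][:], c
            if c < gcost:
                gbest, gcost = pos[i][:], c
```

The global best can only improve, and the true parameters drive the cost to (numerically) zero, which is what makes the formulation well-posed.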
Keller, Scott B; Dudley, Jonathan A; Binzel, Katherine; Jasensky, Joshua; de Pedro, Hector Michael; Frey, Eric W; Urayama, Paul
2008-10-15
Time-gated techniques are useful for the rapid sampling of excited-state (fluorescence) emission decays in the time domain. Gated detectors coupled with bright, economical, nanosecond-pulsed light sources like flashlamps and nitrogen lasers are an attractive combination for bioanalytical and biomedical applications. Here we present a calibration approach for lifetime determination that is noniterative and that does not assume a negligible instrument response function (i.e., a negligible excitation pulse width) as do most current rapid lifetime determination approaches. Analogous to a transducer-based sensor, signals from fluorophores of known lifetime (0.5-12 ns) serve as calibration references. A fast avalanche photodiode and a GHz-bandwidth digital oscilloscope are used to detect transient emission from reference samples excited using a nitrogen laser. We find that the normalized time-integrated emission signal is proportional to the lifetime, which can be determined with good reproducibility (typically <100 ps) even for data with poor signal-to-noise ratios (approximately 20). Results are in good agreement with simulations. Additionally, a new time-gating scheme for fluorescence lifetime imaging applications is proposed. In conclusion, a calibration-based approach is a valuable analysis tool for the rapid determination of lifetime in applications using time-gated detection and finite pulse width excitation.
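The key observation above — that the normalized time-integrated emission signal is proportional to lifetime — can be illustrated with simulated mono-exponential decays: compute the integral-over-peak for reference lifetimes, fit a calibration line, then read an unknown lifetime off that line. This is a simplified sketch with delta-pulse excitation (the paper's point is that the same calibration also absorbs a finite instrument response, which this toy simulation does not model):

```python
import math

def normalized_integral(times, signal):
    """Time-integrated emission divided by peak amplitude."""
    dt = times[1] - times[0]
    return sum(signal) * dt / max(signal)

dt = 0.05
times = [i * dt for i in range(2000)]  # 0-100 ns window

def decay(tau, amp=1.0):
    return [amp * math.exp(-t / tau) for t in times]

# Reference fluorophores of known lifetime calibrate the integral signal.
refs = [0.5, 2.0, 5.0, 12.0]
S = [normalized_integral(times, decay(tau)) for tau in refs]

# Least-squares line S = a*tau + b fitted to the references.
n = len(refs)
mx, my = sum(refs) / n, sum(S) / n
a = sum((x - mx) * (y - my) for x, y in zip(refs, S)) / sum((x - mx) ** 2 for x in refs)
b = my - a * mx

def lifetime_from_signal(s):
    """Invert the calibration line to estimate an unknown lifetime."""
    return (s - b) / a

unknown = normalized_integral(times, decay(3.3, amp=0.4))
tau_est = lifetime_from_signal(unknown)
```

The amplitude cancels in the normalization, and the small sampling bias of the integral is absorbed into the fitted intercept, which is the essence of the calibration-based (rather than model-fitting) approach.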
NASA Astrophysics Data System (ADS)
Sheykhizadeh, Saheleh; Naseri, Abdolhossein
2018-04-01
Variable selection plays a key role in classification and multivariate calibration. Variable selection methods aim to choose, from a large pool of available predictors, a set of variables relevant to estimating analyte concentrations or to achieving better classification results. Many variable selection techniques have been introduced; among them, those based on swarm intelligence optimization have received particular attention in recent decades because they are mainly inspired by nature. In this work, a simple new variable selection algorithm is proposed based on the invasive weed optimization (IWO) concept. IWO is a bio-inspired metaheuristic mimicking the ecological behavior of weeds in colonizing and finding a suitable place for growth and reproduction; it has been shown to be very adaptive and robust to environmental changes. In this paper, the first application of IWO, as a very simple and powerful method, to variable selection is reported, using different experimental datasets including FTIR and NIR data, to undertake classification and multivariate calibration tasks. Accordingly, invasive weed optimization - linear discriminant analysis (IWO-LDA) and invasive weed optimization - partial least squares (IWO-PLS) are introduced for multivariate classification and calibration, respectively.
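The IWO loop itself is compact: each weed spreads a number of seeds proportional to its fitness rank, seeds disperse with a standard deviation that shrinks over iterations, and competitive exclusion keeps only the fittest. A minimal sketch on a toy continuous objective (in the paper the objective would instead score a candidate variable subset by cross-validated calibration or classification performance; all constants here are illustrative):

```python
import random

random.seed(7)

def sphere(x):
    """Toy objective standing in for a calibration-error score."""
    return sum(v * v for v in x)

def iwo_minimize(f, dim, iters=100, pop_max=20, smin=1, smax=5,
                 sigma0=1.0, sigmaf=0.01):
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(6)]
    for it in range(iters):
        # Dispersal width decays over iterations (nonlinear schedule).
        sigma = sigmaf + (sigma0 - sigmaf) * ((iters - it) / iters) ** 2
        costs = [f(w) for w in pop]
        best, worst = min(costs), max(costs)
        offspring = []
        for w, c in zip(pop, costs):
            # Fitter weeds spread more seeds (linear ranking).
            ratio = (worst - c) / (worst - best) if worst > best else 1.0
            seeds = smin + int(round(ratio * (smax - smin)))
            for _ in range(seeds):
                offspring.append([v + random.gauss(0, sigma) for v in w])
        pop += offspring
        # Competitive exclusion: keep only the fittest weeds.
        pop.sort(key=f)
        pop = pop[:pop_max]
    return pop[0], f(pop[0])

best, best_cost = iwo_minimize(sphere, dim=4)
```

For variable selection, the real-valued weed position would be thresholded or rounded into a binary include/exclude mask before scoring, following the usual metaheuristic wrapper pattern.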
Lee, Jeong Wan
2008-01-01
This paper proposes a field calibration technique for aligning a wind direction sensor to true north. The technique uses synchronized measurements of images captured by a camera and the output voltage of the wind direction sensor. The true wind direction is evaluated through image processing of the captured picture of the sensor in a least-squares sense, and the evaluated true value is then compared with the measured output voltage of the sensor. This technique solves the misalignment problem of the wind direction sensor that arises when installing a meteorological mast. Uncertainty analyses for the proposed technique are presented and the calibration accuracy is discussed. Finally, the technique was applied to a real meteorological mast at the Daegwanryung test site, and statistical analysis of the experimental data estimated the stable misalignment and its uncertainty level, confirming that the misalignment error from true north can be reduced to within the stated confidence level.
Microhotplate Temperature Sensor Calibration and BIST
Afridi, M.; Montgomery, C.; Cooper-Balis, E.; Semancik, S.; Kreider, K. G.; Geist, J.
2011-01-01
In this paper we describe a novel long-term microhotplate temperature sensor calibration technique suitable for Built-In Self Test (BIST). The microhotplate thermal resistance (thermal efficiency) and the thermal voltage from an integrated platinum-rhodium thermocouple were calibrated against a freshly calibrated four-wire polysilicon microhotplate-heater temperature sensor (heater) that is not stable over long periods of time when exposed to higher temperatures. To stress the microhotplate, its temperature was raised to around 400 °C and held there for days. The heater was then recalibrated as a temperature sensor, and microhotplate temperature measurements were made based on the fresh calibration of the heater, the first calibration of the heater, the microhotplate thermal resistance, and the thermocouple voltage. This procedure was repeated 10 times over a period of 80 days. The results show that the heater calibration drifted substantially during the period of the test while the microhotplate thermal resistance and the thermocouple voltage remained stable to within about plus or minus 1 °C over the same period. Therefore, the combination of a microhotplate heater-temperature sensor and either the microhotplate thermal resistance or an integrated thin film platinum-rhodium thermocouple can be used to provide a stable, calibrated, microhotplate-temperature sensor, and the combination of the three sensors is suitable for implementing BIST functionality. Alternatively, if a stable microhotplate-heater temperature sensor is available, such as a properly annealed platinum heater-temperature sensor, then the thermal resistance of the microhotplate and the electrical resistance of the platinum heater will be sufficient to implement BIST.
It is also shown that aluminum- and polysilicon-based temperature sensors, which are not stable enough for measuring high microhotplate temperatures (>220 °C) without impractically frequent recalibration, can be used to measure the silicon substrate temperature if never exposed to temperatures above about 220 °C.
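The thermal-resistance calibration described above can be sketched as a linear model, temperature rise = thermal resistance x drive power, fitted once against a freshly calibrated heater sensor and then reused after the heater drifts. A minimal illustration with hypothetical calibration data (not the paper's measurements):

```python
# Hypothetical fresh-calibration data: heater power (mW) vs measured
# microhotplate temperature (deg C) at 25 deg C ambient.
ambient = 25.0
power = [2.0, 5.0, 10.0, 15.0]
temp = [45.0, 75.0, 125.0, 175.0]

# Thermal resistance (deg C per mW): least-squares slope through ambient.
r_th = sum(p * (t - ambient) for p, t in zip(power, temp)) / sum(p * p for p in power)

def hotplate_temperature(p_mw):
    """Estimate temperature from drive power via the stable thermal resistance."""
    return ambient + r_th * p_mw
```

Because the thermal resistance stays stable while the heater's own temperature calibration drifts, estimating temperature from drive power in this way is what enables the BIST check described in the abstract.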
Cho, H-M; Ding, H; Ziemer, B P; Molloi, S
2014-12-07
Accurate energy calibration is critical for the application of energy-resolved photon-counting detectors in spectral imaging. The aim of this study is to investigate the feasibility of energy response calibration and characterization of a photon-counting detector using x-ray fluorescence. A comprehensive Monte Carlo simulation study was performed using Geant4 Application for Tomographic Emission (GATE) to investigate the optimal technique for x-ray fluorescence calibration. Simulations were conducted using a 100 kVp tungsten-anode spectrum with 2.7 mm Al filter for a single pixel cadmium telluride (CdTe) detector with 3 × 3 mm2 detection area. The angular dependence of x-ray fluorescence and scatter background was investigated by varying the detection angle from 20° to 170° with respect to the beam direction. The effects of the detector material, shape, and size on the recorded x-ray fluorescence were investigated. The fluorescent material size effect was considered with and without the container for the fluorescent material. In order to provide validation for the simulation result, the angular dependence of x-ray fluorescence from five fluorescent materials was experimentally measured using a spectrometer. Finally, eleven of the fluorescent materials were used for energy calibration of a CZT-based photon-counting detector. The optimal detection angle was determined to be approximately at 120° with respect to the beam direction, which showed the highest fluorescence to scatter ratio (FSR) with a weak dependence on the fluorescent material size. The feasibility of x-ray fluorescence for energy calibration of photon-counting detectors in the diagnostic x-ray energy range was verified by successfully calibrating the energy response of a CZT-based photon-counting detector. The results of this study can be used as a guideline to implement the x-ray fluorescence calibration method for photon-counting detectors in a typical imaging laboratory.
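The calibration step itself reduces to a linear fit: known fluorescence-line energies against the channel centroids at which the detector records them. A minimal sketch using approximate K-alpha line energies (Cu, Mo, Sn, W) and hypothetical channel centroids from an idealized linear detector (not the paper's measured data):

```python
# (approximate K-alpha energy in keV, hypothetical measured channel centroid)
lines = [(8.05, 161.0), (17.48, 349.6), (25.27, 505.4), (59.32, 1186.4)]

energies = [e for e, _ in lines]
channels = [c for _, c in lines]
n = len(lines)
mc = sum(channels) / n
me = sum(energies) / n

# Least-squares line: energy = gain * channel + offset.
gain = sum((c - mc) * (e - me) for c, e in zip(channels, energies)) / \
       sum((c - mc) ** 2 for c in channels)
offset = me - gain * mc

def channel_to_kev(ch):
    """Map a pulse-height channel to photon energy via the fitted calibration."""
    return gain * ch + offset
```

With enough well-spaced fluorescence lines across the diagnostic energy range, the same fit also exposes any nonlinearity in the detector's energy response.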
Experience with novel technologies for direct measurement of atmospheric NO2
NASA Astrophysics Data System (ADS)
Hueglin, Christoph; Hundt, Morten; Mueller, Michael; Schwarzenbach, Beat; Tuzson, Bela; Emmenegger, Lukas
2017-04-01
Nitrogen dioxide (NO2) is an air pollutant that has a large impact on human health and ecosystems, and it plays a key role in the formation of ozone and secondary particulate matter. Consequently, legal limit values for NO2 are set in the EU and elsewhere, and atmospheric observation networks typically include NO2 in their measurement programmes. Atmospheric NO2 is principally measured by chemiluminescence detection, an indirect measurement technique that requires conversion of NO2 into nitrogen monoxide (NO) and finally calculation of NO2 from the difference between total nitrogen oxides (NOx) and NO. Consequently, NO2 measurements with the chemiluminescence method have a relatively high measurement uncertainty and can be biased depending on the selectivity of the applied NO2 conversion method. In recent years, technologies for direct and selective measurement of NO2 have become available, e.g. cavity attenuated phase shift spectroscopy (CAPS), cavity enhanced laser absorption spectroscopy and quantum cascade laser absorption spectrometry (QCLAS). These technologies offer clear advantages over the indirect chemiluminescence method. We tested the above-mentioned direct measurement techniques for NO2 over extended time periods at atmospheric measurement stations and report on our experience, including comparisons with co-located chemiluminescence instruments equipped with molybdenum as well as photolytic NO2 converters. A still open issue related to the direct measurement of NO2 is instrument calibration. Accurate and traceable reference standards and NO2 calibration gases are needed. We present results from the application of different calibration strategies based on the use of static NO2 calibration gases as well as dynamic NO2 calibration gases produced by permeation and by gas-phase titration (GPT).
Calibration and Evaluation of Ultrasound Thermography using Infrared Imaging
Hsiao, Yi-Sing; Deng, Cheri X.
2015-01-01
Real-time monitoring of the spatiotemporal evolution of tissue temperature is important to ensure safe and effective treatment in thermal therapies including hyperthermia and thermal ablation. Ultrasound thermography has been proposed as a non-invasive technique for temperature measurement, and accurate calibration of the temperature-dependent ultrasound signal changes against temperature is required. Here we report a method that uses infrared (IR) thermography for calibration and validation of ultrasound thermography. Using phantoms and cardiac tissue specimens subjected to high-intensity focused ultrasound (HIFU) heating, we simultaneously acquired ultrasound and IR imaging data from the same surface plane of a sample. The commonly used echo time shift-based method was chosen to compute ultrasound thermometry. We first correlated the ultrasound echo time shifts with IR-measured temperatures for material-dependent calibration and found that the calibration coefficient was positive for fat-mimicking phantom (1.49 ± 0.27) but negative for tissue-mimicking phantom (−0.59 ± 0.08) and cardiac tissue (−0.69 ± 0.18 °C·mm/ns). We then obtained the estimation error of the ultrasound thermometry by comparing against the IR-measured temperature and revealed that the error increased with decreased size of the heated region. Consistent with previous findings, the echo time shifts were no longer linearly dependent on temperature beyond 45–50 °C in cardiac tissues. Unlike previous studies where thermocouples or water-bath techniques were used to evaluate the performance of ultrasound thermography, our results show that high-resolution IR thermography provides a useful tool that can be applied to evaluate and understand the limitations of ultrasound thermography methods. PMID:26547634
Calibration and Evaluation of Ultrasound Thermography Using Infrared Imaging.
Hsiao, Yi-Sing; Deng, Cheri X
2016-02-01
Real-time monitoring of the spatiotemporal evolution of tissue temperature is important to ensure safe and effective treatment in thermal therapies including hyperthermia and thermal ablation. Ultrasound thermography has been proposed as a non-invasive technique for temperature measurement, and accurate calibration of the temperature-dependent ultrasound signal changes against temperature is required. Here we report a method that uses infrared thermography for calibration and validation of ultrasound thermography. Using phantoms and cardiac tissue specimens subjected to high-intensity focused ultrasound heating, we simultaneously acquired ultrasound and infrared imaging data from the same surface plane of a sample. The commonly used echo time shift-based method was chosen to compute ultrasound thermometry. We first correlated the ultrasound echo time shifts with infrared-measured temperatures for material-dependent calibration and found that the calibration coefficient was positive for fat-mimicking phantom (1.49 ± 0.27) but negative for tissue-mimicking phantom (-0.59 ± 0.08) and cardiac tissue (-0.69 ± 0.18°C-mm/ns). We then obtained the estimation error of the ultrasound thermometry by comparing against the infrared-measured temperature and revealed that the error increased with decreased size of the heated region. Consistent with previous findings, the echo time shifts were no longer linearly dependent on temperature beyond 45°C-50°C in cardiac tissues. Unlike previous studies in which thermocouples or water bath techniques were used to evaluate the performance of ultrasound thermography, our results indicate that high-resolution infrared thermography is a useful tool that can be applied to evaluate and understand the limitations of ultrasound thermography methods. Copyright © 2016 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
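The calibration coefficients reported above plug into the standard echo-shift thermometry relation: the local temperature change is the material coefficient (°C·mm/ns) times the axial gradient of the echo time shift. A minimal sketch, assuming illustrative depths and shifts rather than the study's data:

```python
# Sketch: temperature change from the axial gradient of echo time shifts.

def temperature_change(depths_mm, shifts_ns, k):
    """Finite-difference gradient of the echo time shift, scaled by k."""
    dT = []
    for i in range(1, len(depths_mm)):
        grad = (shifts_ns[i] - shifts_ns[i - 1]) / (depths_mm[i] - depths_mm[i - 1])
        dT.append(k * grad)
    return dT

# Tissue-like coefficient (negative: sound speed rises with temperature).
k_tissue = -0.69  # degC * mm / ns

depths = [10.0, 11.0, 12.0, 13.0]   # mm, hypothetical imaging depths
shifts = [0.0, -2.0, -5.0, -5.0]    # ns, hypothetical echo time shifts

dT = temperature_change(depths, shifts, k_tissue)
```

The sign convention is the point of the abstract's finding: a fat-mimicking material would use a positive k, flipping the estimated temperature change for the same shifts.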
Koulikov, Serguei; Assonov, Sergey; Fajgelj, Ales; Tans, Pieter
2018-07-01
The manuscript explores some advantages and limitations of laser-based optical spectroscopy aimed at achieving robust, high-reproducibility ¹³C¹⁶O₂/¹²C¹⁶O₂ ratio determinations on the VPDB-CO₂ δ¹³C scale by measuring the absorbance of line pairs of ¹³C¹⁶O₂ and ¹²C¹⁶O₂. In particular, the sensitivities of spectroscopic lines to both pressure (P) and temperature (T) are discussed. Based on the considerations and estimations presented, a reproducibility of the ¹³C¹⁶O₂/¹²C¹⁶O₂ ratio determinations of about 10⁻⁶ may be achieved. Thus one may establish an optical spectroscopic measurement technique for robust, high-precision ¹³C¹⁶O₂/¹²C¹⁶O₂ ratio measurements aimed at very low uncertainty. (Notably, creating such an optical instrument and developing technical solutions is beyond the scope of this paper.) The total combined uncertainty will also include the uncertainty component(s) related to the accuracy of calibration on the VPDB-CO₂ δ¹³C scale. Addressing high-accuracy calibrations is presently not straightforward: absolute numerical values of ¹³C/¹²C for the VPDB-CO₂ scale are not well known. Traditional stable isotope mass spectrometry uses calibrations versus CO₂ evolved from the primary carbonate reference materials, which can hardly be used for calibrating commercial optical stable isotope analysers. In contrast to mass spectrometry, the major advantage of the laser-based spectrometric technique detailed in this paper is its high robustness. Therefore one can introduce a new spectrometric δ¹³C characterisation method which, once well calibrated on the VPDB-CO₂ scale, may not require any further (re-)calibrations. This can be used for characterisation of δ¹³C in CO₂-in-air mixtures with high precision and also with high accuracy.
If this technique can be realised with the estimated long-term reproducibility (order of 10⁻⁶), it could potentially serve as a more convenient Optical Transfer Standard (OTS), characterising large amounts of CO₂ gas mixtures on the VPDB-CO₂ δ¹³C scale without having to compare to carbonate-evolved CO₂. Furthermore, if the OTS method proves successful, it might be considered for re-defining the VPDB-CO₂ δ¹³C scale as the ratio of selected CO₂ spectroscopic absorbance lines measured at pre-defined T and P conditions. The approach can also be expanded to δ¹⁸O characterisation (using ¹⁶O¹²C¹⁸O and ¹⁶O¹²C¹⁶O absorbance lines) of CO₂ gas mixtures and potentially to other isotope ratios of other gases. Copyright © 2018 Elsevier B.V. All rights reserved.
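For reference, the δ¹³C value such an instrument would report is defined by the measured isotope ratio relative to the VPDB standard. A minimal sketch, assuming the commonly quoted approximate VPDB ratio and a made-up sample ratio:

```python
# Sketch: converting a calibrated 13C/12C ratio to a delta value (per mil)
# on the VPDB-CO2 scale.

R_VPDB = 0.011180  # approximate 13C/12C ratio of the VPDB standard

def delta13C_permil(r_sample, r_standard=R_VPDB):
    """delta13C = (R_sample / R_standard - 1) * 1000, in per mil."""
    return (r_sample / r_standard - 1.0) * 1000.0

r_sample = 0.011085  # hypothetical calibrated isotope ratio
d13c = delta13C_permil(r_sample)
```

The target reproducibility of 10⁻⁶ on the raw ratio corresponds to roughly 0.09 per mil on this delta scale, which is why the absolute accuracy of the VPDB anchor, not the spectroscopy, becomes the limiting uncertainty.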
NASA Technical Reports Server (NTRS)
Prasad, C. B.; Prabhakaran, R.; Tompkins, S.
1987-01-01
The hole-drilling technique for the measurement of residual stresses using electrical resistance strain gages has been widely used for isotropic materials and has been adopted by the ASTM as a standard method. For thin isotropic plates, with a hole drilled through the thickness, the idealized hole-drilling calibration constants are obtained by making use of the well-known Kirsch's solution. In this paper, an analogous attempt is made to theoretically determine the three idealized hole-drilling calibration constants for thin orthotropic materials by employing Savin's (1961) complex stress function approach.
NASA Technical Reports Server (NTRS)
Romanofsky, Robert R.; Shalkhauser, Kurt A.
1989-01-01
The design and evaluation of a novel fixturing technique for characterizing millimeter wave solid state devices is presented. The technique utilizes a cosine-tapered ridge guide fixture and a one-tier de-embedding procedure to produce accurate and repeatable device level data. Advanced features of this technique include nondestructive testing, full waveguide bandwidth operation, universality of application, and rapid, yet repeatable, chip-level characterization. In addition, only one set of calibration standards is required regardless of the device geometry.
All-Digital Time-Domain CMOS Smart Temperature Sensor with On-Chip Linearity Enhancement.
Chen, Chun-Chi; Chen, Chao-Lieh; Lin, Yi
2016-01-30
This paper proposes the first all-digital on-chip linearity enhancement technique for improving the accuracy of the time-domain complementary metal-oxide semiconductor (CMOS) smart temperature sensor. To facilitate on-chip application and intellectual property reuse, an all-digital time-domain smart temperature sensor was implemented using 90 nm Field Programmable Gate Arrays (FPGAs). Although the inverter-based temperature sensor has a smaller circuit area and lower complexity, two-point calibration must be used to achieve an acceptable inaccuracy. With the help of a calibration circuit, the influence of process variations was greatly reduced for one-point calibration support, reducing test costs and time. However, the sensor response still exhibited a large curvature, which substantially affected the accuracy of the sensor. Thus, an on-chip linearity-enhanced circuit is proposed to linearize the curve and achieve a new linearity-enhanced output. The sensor was implemented on eight different Xilinx FPGAs using 118 slices per sensor in each FPGA to demonstrate the benefits of the linearization. Compared with the unlinearized version, the maximal inaccuracy of the linearized version decreased from 5 °C to 2.5 °C after one-point calibration in a range of -20 °C to 100 °C. The sensor consumed 95 μW at a 1 kSa/s sampling rate. The proposed linearity enhancement technique significantly improves temperature-sensing accuracy, avoiding costly curvature compensation while remaining fully synthesizable for future Very Large Scale Integration (VLSI) systems.
All-Digital Time-Domain CMOS Smart Temperature Sensor with On-Chip Linearity Enhancement
Chen, Chun-Chi; Chen, Chao-Lieh; Lin, Yi
2016-01-01
This paper proposes the first all-digital on-chip linearity enhancement technique for improving the accuracy of the time-domain complementary metal-oxide semiconductor (CMOS) smart temperature sensor. To facilitate on-chip application and intellectual property reuse, an all-digital time-domain smart temperature sensor was implemented using 90 nm Field Programmable Gate Arrays (FPGAs). Although the inverter-based temperature sensor has a smaller circuit area and lower complexity, two-point calibration must be used to achieve an acceptable inaccuracy. With the help of a calibration circuit, the influence of process variations was greatly reduced for one-point calibration support, reducing test costs and time. However, the sensor response still exhibited a large curvature, which substantially affected the accuracy of the sensor. Thus, an on-chip linearity-enhanced circuit is proposed to linearize the curve and achieve a new linearity-enhanced output. The sensor was implemented on eight different Xilinx FPGAs using 118 slices per sensor in each FPGA to demonstrate the benefits of the linearization. Compared with the unlinearized version, the maximal inaccuracy of the linearized version decreased from 5 °C to 2.5 °C after one-point calibration in a range of −20 °C to 100 °C. The sensor consumed 95 μW at a 1 kSa/s sampling rate. The proposed linearity enhancement technique significantly improves temperature-sensing accuracy, avoiding costly curvature compensation while remaining fully synthesizable for future Very Large Scale Integration (VLSI) systems. PMID:26840316
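The two calibration ideas in the abstract, one-point gain normalization plus a fixed curvature correction, can be sketched with a hypothetical quadratic response model. The response coefficients and the chip gain below are illustrative assumptions, not the sensor's actual characteristics:

```python
# Sketch: one-point calibration removes per-chip gain spread; a shared
# curvature correction then linearizes the (assumed quadratic) response.

def raw_output(temp_c, gain):
    # Hypothetical chip response: nominal curve scaled by a per-chip gain.
    return gain * (1000.0 + 5.0 * temp_c - 0.01 * temp_c ** 2)

CAL_TEMP = 25.0  # single calibration point, degC

def one_point_gain(measured_at_cal):
    """Scale factor mapping this chip's output onto the nominal response."""
    return raw_output(CAL_TEMP, 1.0) / measured_at_cal

def linearize(normalized):
    """Invert the shared quadratic 1000 + 5*T - 0.01*T^2 for T."""
    # Solve 0.01*T^2 - 5*T + (normalized - 1000) = 0, taking the root
    # that lies in the operating range.
    disc = 25.0 - 4 * 0.01 * (normalized - 1000.0)
    return (5.0 - disc ** 0.5) / (2 * 0.01)

# A chip with 8% gain error, read at 60 degC:
g = 1.08
k = one_point_gain(raw_output(CAL_TEMP, g))
temp_est = linearize(k * raw_output(60.0, g))
```

This mirrors why one-point calibration alone is insufficient: the gain factor cancels the process variation, but only the inversion step removes the curvature that dominates the residual error.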
Atmospheric stellar parameters from cross-correlation functions
NASA Astrophysics Data System (ADS)
Malavolta, L.; Lovis, C.; Pepe, F.; Sneden, C.; Udry, S.
2017-08-01
The increasing number of spectra gathered by spectroscopic sky surveys and transiting exoplanet follow-up has pushed the community to develop automated tools for atmospheric stellar parameter determination. Here we present a novel approach that allows the measurement of temperature (Teff), metallicity ([Fe/H]) and gravity (log g) within a few seconds and in a completely automated fashion. Rather than performing comparisons with spectral libraries, our technique is based on the determination of several cross-correlation functions (CCFs) obtained by including spectral features with different sensitivity to the photospheric parameters. We use literature stellar parameters of high signal-to-noise (SNR), high-resolution HARPS spectra of FGK main-sequence stars to calibrate Teff, [Fe/H] and log g as a function of CCF parameters. Our technique is validated using low-SNR spectra obtained with the same instrument. For FGK stars we achieve a precision of σ(Teff) = 50 K, σ(log g) = 0.09 dex and σ([Fe/H]) = 0.035 dex at SNR = 50, while the precision for observations with SNR ≳ 100 and the overall accuracy are constrained by the literature values used to calibrate the CCFs. Our approach can easily be extended to other instruments with similar spectral range and resolution, to other spectral ranges, or to stars other than FGK dwarfs if a large sample of reference stars is available for the calibration. Additionally, we provide the mathematical formulation to convert synthetic equivalent widths to CCF parameters as an alternative to direct calibration. We have made our tool publicly available.
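The calibration step amounts to regressing literature parameters against CCF observables measured from reference spectra, then applying the fit to new stars. A minimal single-predictor sketch for Teff; the CCF area ratios and literature temperatures are invented illustrations, not the paper's calibration sample:

```python
# Sketch: calibrate Teff as a function of a CCF observable using
# reference stars with literature parameters, then predict for a new star.

def fit_line(xs, ys):
    """Closed-form ordinary least-squares fit y = a*x + b."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

# Hypothetical reference stars: a temperature-sensitive CCF area ratio
# (e.g. CCF from temperature-sensitive lines vs the full line list)
# paired with literature Teff values.
area_ratio = [0.30, 0.42, 0.55, 0.61, 0.74]
teff_lit   = [6350, 6050, 5700, 5560, 5200]

slope, intercept = fit_line(area_ratio, teff_lit)

def teff_from_ccf(ratio):
    """Apply the calibration to a new star's CCF area ratio."""
    return slope * ratio + intercept
```

The real method fits all three parameters against several CCF quantities jointly, but the structure is the same: cheap observables in, calibrated parameters out, with accuracy bounded by the literature values used to train the fit.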
NASA Astrophysics Data System (ADS)
Smets, Quentin; Verreck, Devin; Verhulst, Anne S.; Rooyackers, Rita; Merckling, Clément; Van De Put, Maarten; Simoen, Eddy; Vandervorst, Wilfried; Collaert, Nadine; Thean, Voon Y.; Sorée, Bart; Groeseneken, Guido; Heyns, Marc M.
2014-05-01
Promising predictions are made for the III-V tunnel field-effect transistor (TFET), but there is still uncertainty on the parameters used in the band-to-band tunneling models. Therefore, two simulators are calibrated in this paper: the first uses a semi-classical tunneling model based on Kane's formalism, and the second is a quantum mechanical simulator implemented with an envelope function formalism. The calibration is done for In0.53Ga0.47As using several p+/intrinsic/n+ diodes with different intrinsic region thicknesses. The dopant profile is determined by SIMS and capacitance-voltage measurements. Error bars are used based on statistical and systematic uncertainties in the measurement techniques. The obtained parameters are in close agreement with theoretically predicted values and validate the semi-classical and quantum mechanical models. Finally, the models are applied to predict the input characteristics of In0.53Ga0.47As n- and p-line TFETs, with the n-line TFET showing competitive performance compared to the MOSFET.
NASA Technical Reports Server (NTRS)
Navard, Sharon E.
1989-01-01
In recent years there has been a push within NASA to use statistical techniques to improve the quality of production. Two areas where statistics are used are in establishing product and process quality control of flight hardware and in evaluating the uncertainty of calibration of instruments. The Flight Systems Quality Engineering branch is responsible for developing and assuring the quality of all flight hardware; the statistical process control methods employed are reviewed and evaluated. The Measurement Standards and Calibration Laboratory performs the calibration of all instruments used on-site at JSC as well as those used by all off-site contractors. These calibrations must be performed in such a way as to be traceable to national standards maintained by the National Institute of Standards and Technology, and they must meet a four-to-one ratio of the instrument specifications to calibrating standard uncertainty. In some instances this ratio is not met, and in these cases it is desirable to compute the exact uncertainty of the calibration and determine ways of reducing it. A particular example where this problem is encountered is with a machine which does automatic calibrations of force. The process of force calibration using the United Force Machine is described in detail. The sources of error are identified and quantified when possible. Suggestions for improvement are made.
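The four-to-one requirement and the exact-uncertainty fallback described above can be sketched directly: combine the standard's uncertainty components by root-sum-square, then compare the instrument tolerance against the result. The tolerance and component values below are hypothetical:

```python
# Sketch: test uncertainty ratio (TUR) check for a calibration, with the
# combined standard uncertainty computed by root-sum-square of independent
# components.

import math

def combined_uncertainty(components):
    """Root-sum-square combination of independent uncertainty components."""
    return math.sqrt(sum(u * u for u in components))

def tur(uut_tolerance, standard_uncertainty):
    """Ratio of unit-under-test tolerance to calibrating-standard uncertainty."""
    return uut_tolerance / standard_uncertainty

# Hypothetical force calibration: +/-0.5 N instrument tolerance; uncertainty
# components from the reference cell, readout resolution, and alignment.
tol = 0.5
components = [0.08, 0.05, 0.10]

u_c = combined_uncertainty(components)
ratio = tur(tol, u_c)
meets_four_to_one = ratio >= 4.0
```

When the ratio falls below 4:1, as in this made-up case, the abstract's remedy applies: quantify each component explicitly and look for the dominant one to reduce.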
Techniques for precise energy calibration of particle pixel detectors
NASA Astrophysics Data System (ADS)
Kroupa, M.; Campbell-Ricketts, T.; Bahadori, A.; Empl, A.
2017-03-01
We demonstrate techniques to improve the accuracy of the energy calibration of Timepix pixel detectors, used for the measurement of energetic particles. The typical signal from such particles spreads among many pixels due to charge sharing effects. As a consequence, the deposited energy in each pixel cannot be reconstructed unless the detector is calibrated, limiting the usability of such signals for calibration. To avoid this shortcoming, we calibrate using low energy X-rays. However, charge sharing effects still occur, resulting in part of the energy being deposited in adjacent pixels and possibly lost. This systematic error in the calibration process results in an error of about 5% in the energy measurements of calibrated devices. We use FLUKA simulations to assess the magnitude of charge sharing effects, allowing a corrected energy calibration to be performed on several Timepix pixel detectors and resulting in substantial improvement in energy deposition measurements. Next, we address shortcomings in calibration associated with the huge range (from kiloelectron-volts to megaelectron-volts) of energy deposited per pixel which result in a nonlinear energy response over the full range. We introduce a new method to characterize the non-linear response of the Timepix detectors at high input energies. We demonstrate improvement using a broad range of particle types and energies, showing that the new method reduces the energy measurement errors, in some cases by more than 90%.
Techniques for precise energy calibration of particle pixel detectors.
Kroupa, M; Campbell-Ricketts, T; Bahadori, A; Empl, A
2017-03-01
We demonstrate techniques to improve the accuracy of the energy calibration of Timepix pixel detectors, used for the measurement of energetic particles. The typical signal from such particles spreads among many pixels due to charge sharing effects. As a consequence, the deposited energy in each pixel cannot be reconstructed unless the detector is calibrated, limiting the usability of such signals for calibration. To avoid this shortcoming, we calibrate using low energy X-rays. However, charge sharing effects still occur, resulting in part of the energy being deposited in adjacent pixels and possibly lost. This systematic error in the calibration process results in an error of about 5% in the energy measurements of calibrated devices. We use FLUKA simulations to assess the magnitude of charge sharing effects, allowing a corrected energy calibration to be performed on several Timepix pixel detectors and resulting in substantial improvement in energy deposition measurements. Next, we address shortcomings in calibration associated with the huge range (from kiloelectron-volts to megaelectron-volts) of energy deposited per pixel which result in a nonlinear energy response over the full range. We introduce a new method to characterize the non-linear response of the Timepix detectors at high input energies. We demonstrate improvement using a broad range of particle types and energies, showing that the new method reduces the energy measurement errors, in some cases by more than 90%.
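A common way to model the nonlinear per-pixel response of Timepix-family detectors is a surrogate function that is linear at high energy with a 1/(E − t) knee near threshold; inverting it recovers deposited energy from the measured time-over-threshold (TOT). A sketch under that assumption, with illustrative rather than calibrated parameter values:

```python
# Sketch: surrogate response TOT(E) = a*E + b - c/(E - t) and its
# closed-form inversion (a quadratic in E) to recover deposited energy.

import math

def tot_from_energy(e_kev, a, b, c, t):
    """Forward surrogate model: linear term plus near-threshold knee."""
    return a * e_kev + b - c / (e_kev - t)

def energy_from_tot(tot, a, b, c, t):
    """Solve a*E^2 + (b - tot - a*t)*E + (t*(tot - b) - c) = 0 for E > t."""
    B = b - tot - a * t
    C = t * (tot - b) - c
    disc = B * B - 4 * a * C
    return (-B + math.sqrt(disc)) / (2 * a)

# Illustrative parameters: gain a (TOT/keV), offset b, knee c, threshold t.
a, b, c, t = 2.0, 30.0, 150.0, 4.0

e_true = 50.0
tot = tot_from_energy(e_true, a, b, c, t)
e_recovered = energy_from_tot(tot, a, b, c, t)
```

The abstract's high-energy correction addresses the opposite end of this curve, where the simple linear asymptote itself breaks down and an additional characterization of the response is needed.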
Studies in Software Cost Model Behavior: Do We Really Understand Cost Model Performance?
NASA Technical Reports Server (NTRS)
Lum, Karen; Hihn, Jairus; Menzies, Tim
2006-01-01
While there exists extensive literature on software cost estimation techniques, industry practice continues to rely upon standard regression-based algorithms. These software effort models are typically calibrated or tuned to local conditions using local data. This paper cautions that current approaches to model calibration often produce sub-optimal models because of the large variance problem inherent in cost data and because they include far more effort multipliers than the data support. Building optimal models requires that a wider range of models be considered, while correctly calibrating these models requires rejection rules that prune variables and records and use multiple criteria for evaluating model performance. The main contribution of this paper is to document a standard method that integrates formal model identification, estimation, and validation. It also documents what we call the large variance problem, a leading cause of cost model brittleness or instability.
NASA Astrophysics Data System (ADS)
Kaufman, Lloyd; Williamson, Samuel J.; Costa Ribeiro, P.
1988-02-01
Recently developed small arrays of SQUID-based magnetic sensors can, if appropriately placed, locate the position of a confined biomagnetic source without moving the array. The authors present a technique with a relative accuracy of about 2 percent for calibrating such sensors having detection coils with the geometry of a second-order gradiometer. The effects of calibration error and magnetic noise on the accuracy of locating an equivalent current dipole source in the human brain are investigated for 5- and 7-sensor probes and for a pair of 7-sensor probes. With a noise level of 5 percent of peak signal, uncertainties of about 20 percent in source strength and depth for a 5-sensor probe are reduced to 8 percent for a pair of 7-sensor probes, and uncertainties of about 15 mm in lateral position are reduced to 1 mm, for the configuration considered.
Sentinel-1 Precise Orbit Calibration and Validation
NASA Astrophysics Data System (ADS)
Monti Guarnieri, Andrea; Mancon, Simone; Tebaldini, Stefano
2015-05-01
In this paper, we propose a model-based procedure to calibrate and validate Sentinel-1 orbit products using the Multi-Squint (MS) phase. The technique calibrates an interferometric pair geometry by refining the slave orbit with reference to the orbit of a master image. Accordingly, we state the geometric model of the InSAR phase as a function of the positioning errors of targets and slave track, and the MS phase model as the derivative of the InSAR phase geometric model with respect to the squint angle. In this paper we focus on the TOPSAR acquisition modes of Sentinel-1 (IW and EW), assuming at most a linear error in the known slave trajectory. In particular, we describe a dedicated methodology to prevent InSAR phase artifacts on data acquired in the TOPSAR acquisition mode. Experimental results obtained from interferometric pairs acquired by the Sentinel-1 sensor are presented.
Spatial calibration of a tokamak neutral beam diagnostic using in situ neutral beam emission
Chrystal, Colin; Burrell, Keith H.; Grierson, Brian A.; ...
2015-10-20
Neutral beam injection is used in tokamaks to heat, apply torque, drive non-inductive current, and diagnose plasmas. Neutral beam diagnostics need accurate spatial calibrations to benefit from the measurement localization provided by the neutral beam. A new technique has been developed that uses in-situ measurements of neutral beam emission to determine the spatial location of the beam and the associated diagnostic views. This technique was developed to improve the charge exchange recombination diagnostic (CER) at the DIII-D tokamak and uses measurements of the Doppler shift and Stark splitting of neutral beam emission made by that diagnostic. These measurements contain information about the geometric relation between the diagnostic views and the neutral beams when they are injecting power. This information is combined with standard spatial calibration measurements to create an integrated spatial calibration that provides a more complete description of the neutral beam-CER system. The integrated spatial calibration results are very similar to the standard calibration results and derived quantities from CER measurements are unchanged within their measurement errors. Lastly, the methods developed to perform the integrated spatial calibration could be useful for tokamaks with limited physical access.
NASA Technical Reports Server (NTRS)
Axholt, Magnus; Skoglund, Martin; Peterson, Stephen D.; Cooper, Matthew D.; Schoen, Thomas B.; Gustafsson, Fredrik; Ynnerman, Anders; Ellis, Stephen R.
2010-01-01
Augmented Reality (AR) is a technique by which computer-generated signals synthesize impressions that are made to coexist with the surrounding real world as perceived by the user. Human smell, taste, touch and hearing can all be augmented, but most commonly AR refers to the human vision being overlaid with information otherwise not readily available to the user. A correct calibration is important on an application level, ensuring that e.g. data labels are presented at correct locations, but also on a system level, to enable display techniques such as stereoscopy to function properly [SOURCE]. Thus, calibration methodology, vital to AR, is an important research area. While great achievements have already been made, some properties of current calibration methods for augmenting vision do not translate from their traditional use in automated camera calibration to their use with a human operator. This paper uses a Monte Carlo simulation of a standard direct linear transformation camera calibration to investigate how user-introduced head orientation noise affects the parameter estimation during a calibration procedure for an optical see-through head-mounted display.
Spatial calibration of a tokamak neutral beam diagnostic using in situ neutral beam emission
NASA Astrophysics Data System (ADS)
Chrystal, C.; Burrell, K. H.; Grierson, B. A.; Pace, D. C.
2015-10-01
Neutral beam injection is used in tokamaks to heat, apply torque, drive non-inductive current, and diagnose plasmas. Neutral beam diagnostics need accurate spatial calibrations to benefit from the measurement localization provided by the neutral beam. A new technique has been developed that uses in situ measurements of neutral beam emission to determine the spatial location of the beam and the associated diagnostic views. This technique was developed to improve the charge exchange recombination (CER) diagnostic at the DIII-D tokamak and uses measurements of the Doppler shift and Stark splitting of neutral beam emission made by that diagnostic. These measurements contain information about the geometric relation between the diagnostic views and the neutral beams when they are injecting power. This information is combined with standard spatial calibration measurements to create an integrated spatial calibration that provides a more complete description of the neutral beam-CER system. The integrated spatial calibration results are very similar to the standard calibration results and derived quantities from CER measurements are unchanged within their measurement errors. The methods developed to perform the integrated spatial calibration could be useful for tokamaks with limited physical access.
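The geometric content of the Doppler-shift measurement can be sketched in isolation: for a known beam speed, the observed shift of an emission line fixes the cosine of the angle between the viewing chord and the beam velocity. The wavelength, beam speed, and angle below are illustrative, not DIII-D calibration values:

```python
# Sketch: recover the view-to-beam angle from a Doppler-shifted beam
# emission line, via lambda_obs = lambda0 * (1 + v/c * cos(theta)).

import math

C_M_S = 2.998e8  # speed of light, m/s

def doppler_angle_deg(lambda_obs_nm, lambda0_nm, beam_speed_m_s):
    """Invert the first-order Doppler relation for the view angle theta."""
    cos_theta = (lambda_obs_nm / lambda0_nm - 1.0) * C_M_S / beam_speed_m_s
    return math.degrees(math.acos(cos_theta))

# D-alpha line and an ~80 keV deuterium beam (~2.8e6 m/s), with a
# hypothetical viewing chord at 60 degrees to the beam.
lam0 = 656.1  # nm
v = 2.8e6     # m/s
lam_obs = lam0 * (1 + v / C_M_S * math.cos(math.radians(60.0)))

angle = doppler_angle_deg(lam_obs, lam0, v)
```

Combined with the Stark-splitting measurement, which is sensitive to a different projection of the geometry, this is the kind of constraint the integrated spatial calibration folds in alongside the standard survey data.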
Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras.
Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki
2016-06-24
Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system.
Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras
Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki
2016-01-01
Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system. PMID:27347961
Ding, Xiaorong; Yan, Bryan P; Zhang, Yuan-Ting; Liu, Jing; Zhao, Ni; Tsang, Hon Ki
2017-09-14
The cuffless technique enables continuous blood pressure (BP) measurement in an unobtrusive manner, and thus has the potential to revolutionize conventional cuff-based approaches. This study extends the pulse transit time (PTT) based cuffless BP measurement method by introducing a new indicator, the photoplethysmogram (PPG) intensity ratio (PIR). The performance of the models with PTT and PIR was comprehensively evaluated in comparison with six models based on PTT alone. The validation was conducted on 33 subjects with and without hypertension, at rest and under various maneuvers with induced BP changes, and over an extended calibration interval. The results showed that, compared to the PTT models, the proposed methods achieved better accuracy in each subject group at rest and over a 24-hour calibration interval. Although the BP estimation errors under dynamic maneuvers and over the extended calibration interval increased significantly for all methods, the proposed methods still outperformed the compared methods in the latter situation. These findings suggest that an additional BP-related indicator other than PTT has added value for improving the accuracy of cuffless BP measurement. This study also offers insights into future research in cuffless BP measurement for tracking dynamic BP changes and over extended periods of time.
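A common PTT-only baseline of the kind this study compares against models systolic BP as a/PTT² + b, with the coefficients fixed at an initial cuff calibration. A minimal sketch; this is not the paper's PIR-augmented model, and all numbers are illustrative:

```python
# Sketch: two-point cuff calibration of SBP = a / PTT^2 + b, then
# cuffless estimation from subsequent PTT readings.

def calibrate(ptt1_s, sbp1, ptt2_s, sbp2):
    """Solve SBP = a / PTT^2 + b from two cuff reference points."""
    x1, x2 = 1.0 / ptt1_s ** 2, 1.0 / ptt2_s ** 2
    a = (sbp1 - sbp2) / (x1 - x2)
    return a, sbp1 - a * x1

def estimate_sbp(ptt_s, a, b):
    """Cuffless SBP estimate from a measured pulse transit time."""
    return a / ptt_s ** 2 + b

# Hypothetical cuff references: 120 mmHg at PTT = 250 ms, 132 mmHg at 220 ms.
a, b = calibrate(0.25, 120.0, 0.22, 132.0)
sbp_now = estimate_sbp(0.24, a, b)  # estimate at a new PTT reading
```

The study's point is precisely that such fixed coefficients drift: the PIR term supplies information about vascular tone that a one-time PTT calibration cannot track.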
NASA Astrophysics Data System (ADS)
Ungar, S.
2017-12-01
Over the past 3 years, the Earth Observing-1 (EO-1) Hyperion imaging spectrometer was used to slowly scan the lunar surface at a rate that results in up to 32X oversampling, effectively increasing the SNR. Several strategies, including comparison against the USGS RObotic Lunar Observatory (ROLO) model, are being employed to estimate the absolute and relative accuracy of the measurement set. There is an existing need to resolve discrepancies as high as 10% between ROLO and solar-based calibration of current NASA EOS assets. Although the EO-1 mission was decommissioned at the end of March 2017, the development of a well-characterized exoatmospheric spectral radiometric database, for a range of lunar phase angles surrounding the fully illuminated moon, continues. Initial studies include a comprehensive analysis of the existing 17-year collection of more than 200 monthly lunar acquisitions. Specific lunar surface areas, such as a lunar mare, are being characterized as potential "lunar calibration sites" in terms of their radiometric stability in the presence of lunar nutation and libration. Site-specific Hyperion-derived lunar spectral reflectances are being compared against spectrographic measurements made during the Apollo program. Techniques developed through this activity can be employed by future high-quality orbiting imaging spectrometers (such as HyspIRI and EnMAP) to further refine calibration accuracies. These techniques will enable the consistent cross calibration of existing and future earth-observing systems (spectral and multispectral), including those that do not have lunar viewing capability. When direct lunar viewing is not an option for an earth-observing asset, orbiting imaging spectrometers can serve as transfer radiometers, relating that asset's sensor response to lunar values through near-contemporaneous observations of well-characterized, stable CEOS test sites. 
Analysis of this dataset will lead to the development of strategies to ensure more accurate cross calibrations when employing the more capable, future imaging spectrometers.
Wiens, Curtis N; Artz, Nathan S; Jang, Hyungseok; McMillan, Alan B; Reeder, Scott B
2017-06-01
To develop an externally calibrated parallel imaging technique for three-dimensional multispectral imaging (3D-MSI) in the presence of metallic implants. A fast, ultrashort echo time (UTE) calibration acquisition is proposed to enable externally calibrated parallel imaging techniques near metallic implants. The proposed calibration acquisition uses a broadband radiofrequency (RF) pulse to excite the off-resonance induced by the metallic implant, fully phase-encoded imaging to prevent in-plane distortions, and UTE to capture rapidly decaying signal. The performance of the externally calibrated parallel imaging reconstructions was assessed using phantoms and in vivo examples. Phantom and in vivo comparisons to self-calibrated parallel imaging acquisitions show that significant reductions in acquisition times can be achieved using externally calibrated parallel imaging with comparable image quality. Acquisition time reductions are particularly large for fully phase-encoded methods such as spectrally resolved fully phase-encoded three-dimensional (3D) fast spin-echo (SR-FPE), in which scan time reductions of up to 8 min were obtained. A fully phase-encoded acquisition with broadband excitation and UTE enabled externally calibrated parallel imaging for 3D-MSI, eliminating the need for repeated calibration regions at each frequency offset. Significant reductions in acquisition time can be achieved, particularly for fully phase-encoded methods like SR-FPE. Magn Reson Med 77:2303-2309, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
NASA Astrophysics Data System (ADS)
Parkes, Stephen; Wang, Lixin; McCabe, Matthew
2015-04-01
In recent years there has been an increasing amount of water vapor stable isotope data collected using in-situ instrumentation. A number of papers have characterized the performance of these in-situ analyzers and suggested methods for calibrating raw measurements. The cross-sensitivity of the isotopic measurements on the mixing ratio has been shown to be a major uncertainty and a variety of techniques have been suggested to characterize this inaccuracy. However, most of these are based on relating isotopic ratios to water vapor mixing ratios from in-situ analyzers when the mixing ratio is varied and the isotopic composition kept constant. An additional correction for the span of the isotopic ratio scale is then applied by measuring different isotopic standards. Here we argue that the water vapor cross-sensitivity arises from different instrument responses (span and offset) of the parent H2O isotope and the heavier isotopes, rather than spectral overlap that could cause a true variation in the isotopic ratio with mixing ratio. This is especially relevant for commercial laser optical instruments where absorption lines are well resolved. Thus, the cross-sensitivity determined using more conventional techniques is dependent on the isotopic ratio of the standard used for the characterization, although errors are expected to be small. Consequently, the cross-sensitivity should be determined by characterizing the span and zero offset of each isotope mixing ratio. In fact, this technique makes the span correction for the isotopic ratio redundant. In this work we model the impact of changes in the span and offset of the heavy and light isotopes and illustrate the impact on the cross-sensitivity of the isotopic ratios on water vapor. This clearly shows the importance of determining the zero offset for the two isotopes. 
The cross-sensitivity of the isotopic ratios on water vapor is then characterized by determining the instrument response for the individual isotopes for a number of different in-situ analyzers that employ different optical methods. We compare this simplified calibration technique to more conventional characterization of both the cross-sensitivity determined in isotopic ratio space and the isotopic ratio span. Utilizing this simplified calibration approach with improved software control can lead to a significant reduction in time spent calibrating in-situ instrumentation or enable an increase in calibration frequency as required to minimize measurement uncertainty.
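The argument above, that cross-sensitivity is better handled by characterizing the span and zero offset of each isotope mixing ratio separately, can be sketched as follows. The gain/offset pairs are hypothetical instrument characterizations and the use of the VSMOW reference ratio is an illustrative convention, not the analyzers' actual calibration constants:

```python
R_VSMOW_18O = 2.0052e-3   # 18O/16O ratio of the VSMOW reference water

def calibrate_channel(raw_mixing_ratio, gain, offset):
    """Span (gain) and zero (offset) correction applied per isotopologue,
    as argued in the text, rather than in isotope-ratio space."""
    return gain * raw_mixing_ratio + offset

def delta18O(raw_h2o, raw_h218o, cal_h2o, cal_h218o):
    """Form the isotope ratio from the two separately calibrated channels
    and express it in per-mil notation."""
    light = calibrate_channel(raw_h2o, *cal_h2o)
    heavy = calibrate_channel(raw_h218o, *cal_h218o)
    return (heavy / light / R_VSMOW_18O - 1.0) * 1000.0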
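The argument above, that cross-sensitivity is better handled by characterizing the span and zero offset of each isotope mixing ratio separately, can be sketched as follows. The gain/offset pairs are hypothetical instrument characterizations and the use of the VSMOW reference ratio is an illustrative convention, not the analyzers' actual calibration constants:

```python
R_VSMOW_18O = 2.0052e-3   # 18O/16O ratio of the VSMOW reference water

def calibrate_channel(raw_mixing_ratio, gain, offset):
    """Span (gain) and zero (offset) correction applied per isotopologue,
    as argued in the text, rather than in isotope-ratio space."""
    return gain * raw_mixing_ratio + offset

def delta18O(raw_h2o, raw_h218o, cal_h2o, cal_h218o):
    """Form the isotope ratio from the two separately calibrated channels
    and express it in per-mil notation."""
    light = calibrate_channel(raw_h2o, *cal_h2o)
    heavy = calibrate_channel(raw_h218o, *cal_h218o)
    return (heavy / light / R_VSMOW_18O - 1.0) * 1000.0
```

With a unit gain and zero offset on both channels, a sample at exactly the reference ratio returns 0 per mil; a nonzero zero offset on either channel shifts the apparent delta as the mixing ratio changes, which is precisely the cross-sensitivity mechanism discussed.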
NASA Astrophysics Data System (ADS)
Odry, Jean; Arnaud, Patrick
2016-04-01
The SHYREG method (Aubert et al., 2014) associates a stochastic rainfall generator and a rainfall-runoff model to produce rainfall and flood quantiles on a 1 km2 mesh covering the whole French territory. The rainfall generator is based on the description of rainy events by descriptive variables following probability distributions and is characterised by high stability. This stochastic generator is fully regionalised, and the rainfall-runoff transformation is calibrated with a single parameter. Thanks to the stability of the approach, calibration can be performed against only the flood quantiles associated with observed frequencies, which can be extracted from relatively short time series. The aggregation of SHYREG flood quantiles to the catchment scale is performed using a single areal reduction factor technique applied over the whole territory. Past studies demonstrated the accuracy of SHYREG flood quantile estimation for catchments where flow data are available (Arnaud et al., 2015). Nevertheless, the parameter of the rainfall-runoff model is calibrated independently for each target catchment. As a consequence, this parameter plays a corrective role and compensates for approximations and modelling errors, which makes it difficult to identify its proper spatial pattern. It is an inherent objective of the SHYREG approach to be completely regionalised in order to provide a complete and accurate flood quantile database throughout France. Consequently, it appears necessary to identify the model configuration in which the calibrated parameter can be regionalised with acceptable performance. The re-evaluation of some of the method's hypotheses is a necessary step before regionalisation. In particular, the inclusion or modification of the spatial variability of imposed parameters (such as production and transfer reservoir sizes, base flow addition and the quantile aggregation function) should lead to more realistic values of the single calibrated parameter. 
The objective of the work presented here is to develop a SHYREG evaluation scheme focusing on both local and regional performance. Indeed, it is necessary to maintain the accuracy of at-site flood quantile estimation while identifying a configuration that leads to a satisfactory spatial pattern of the calibrated parameter. This ability to be regionalised can be appraised by combining common regionalisation techniques with split-sample validation tests on a set of around 1,500 catchments representing the whole diversity of French physiography. Also, the presence of many nested catchments and a size-based split-sample validation make it possible to assess the relevance of the calibrated parameter's spatial structure inside the largest catchments. The application of this multi-objective evaluation leads to the selection of a version of SHYREG more suitable for regionalisation. References: Arnaud, P., Cantet, P., Aubert, Y., 2015. Relevance of an at-site flood frequency analysis method for extreme events based on stochastic simulation of hourly rainfall. Hydrological Sciences Journal: in press. DOI:10.1080/02626667.2014.965174 Aubert, Y., Arnaud, P., Ribstein, P., Fine, J.A., 2014. The SHYREG flow method: application to 1605 basins in metropolitan France. Hydrological Sciences Journal, 59(5): 993-1005. DOI:10.1080/02626667.2014.902061
Advanced analysis techniques for uranium assay
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geist, W. H.; Ensslin, Norbert; Carrillo, L. A.
2001-01-01
Uranium has a negligible passive neutron emission rate, making its assay practicable only with an active interrogation method. Active interrogation uses external neutron sources to induce fission events in the uranium in order to determine the mass. This technique requires careful calibration with standards that are representative of the items to be assayed. The samples to be measured are not always well represented by the available standards, which often leads to large biases. A technique of active multiplicity counting is being developed to reduce some of these assay difficulties. Active multiplicity counting uses the measured doubles and triples count rates to determine the neutron multiplication and the product of the source-sample coupling (C) and the 235U mass (m). Since the 235U mass always appears in the multiplicity equations as the product Cm, the coupling needs to be determined before the mass can be known. A relationship has been developed that relates the coupling to the neutron multiplication. The relationship is based on both an analytical derivation and empirical observations. To determine a scaling constant present in this relationship, known standards must be used. Evaluation of experimental data revealed an improvement over the traditional calibration-curve analysis method of fitting the doubles count rate to the 235U mass. Active multiplicity assay appears to relax the requirement that the calibration standards and unknown items have the same chemical form and geometry.
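The closing step of the analysis above can be sketched schematically. The linear coupling-multiplication form and the scaling constant k below are hypothetical placeholders standing in for the paper's analytically and empirically derived relationship, which is not reproduced in the abstract:

```python
def coupling_from_multiplication(M, k):
    """Hypothetical illustrative form: coupling assumed proportional to the
    measured neutron multiplication M, with scaling constant k determined
    from known standards (the paper's actual relationship differs)."""
    return k * M

def mass_235u(Cm_product, M, k):
    """The multiplicity equations yield the product C*m; dividing by the
    modelled coupling C recovers the 235U mass m."""
    C = coupling_from_multiplication(M, k)
    return Cm_product / C
```

The point of the sketch is the division step: because m never appears alone in the multiplicity equations, any model relating C to M, once scaled against standards, suffices to separate the mass from the coupling.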
Three-dimensional rotational micro-angiography
NASA Astrophysics Data System (ADS)
Patel, Vikas
Computed tomography (CT) is the state of the art for 3D imaging, in which projection images are acquired around the patient and used to reconstruct a volume. However, commercial CT systems suffer from low spatial resolution (0.5-2 lp/mm). Micro-CT (microCT) systems offer high-resolution 3D reconstruction (>10 lp/mm) but are currently limited to small objects, e.g., small animals. To achieve artifact-free reconstructions, geometric calibration of the rotating-object cone-beam microCT (CBmicroCT) system is performed using new techniques that use only the projection images of the object, i.e., no calibration objects are required. Translations (up to 0.2 mm) occurring during the acquisition in the horizontal direction are detected, quantified, and corrected based on sinogram analysis. The parameters describing the physical axis of rotation determined using our image-based method (aligning anti-posed images) agree well (within 0.1 mm and 0.3 degrees) with those determined using other techniques that use calibration objects. Geometric calibrations of rotational angiography (RA) systems (clinical cone-beam CT systems with fluoroscopic capabilities provided by flat-panel detectors (FPD)) are performed using a simple single projection technique (SPT), which aligns a known 3D model of a calibration phantom with the projection data. The calibration parameters obtained by the SPT are found to be reproducible (angles within 0.2° and x- and y-translations within 2 mm) over 7 months. The spatial resolution of the RA systems is found to be virtually unaffected by such small geometric variations. Finally, using our understanding of the geometric calibrations, we have developed methods to combine relatively low-resolution RA acquisitions (2-3 lp/mm) with high-resolution microCT acquisitions (using a high-resolution micro-angiographic fluoroscope (MAF) attached to the RA gantry) to produce the first-ever 3D rotational micro-angiography (3D-RmicroA) system on a clinical gantry. 
Images of a rabbit with a coronary stent placed in an artery were obtained and reconstructed. To eliminate artifacts due to image truncation, lower-dose (compared to the MAF acquisition) full-FOV (FFOV) FPD RA sequences are also obtained. To ensure high-quality, high-resolution reconstruction, the high-resolution images from the MAF are aligned spatially with the lower-dose FPD images (average correlation coefficient before and after alignment: 0.65 and 0.97, respectively), and the pixel values in the FPD image data are scaled (using linear regression) to match those of the MAF. Greater detail, without any visible truncation artifacts, is seen in 3D-RmicroA (MAF-FPD) images than in those of the FPD alone. The FWHM of line profiles of stent struts (100 micron diameter) is approximately 192 +/- 21 microns for the 3D-RmicroA data and 313 +/- 38 microns for the FPD data. Thus, with the RmicroA system, we have essentially developed a high-resolution CBmicroCT system for clinical use.
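The sinogram-based detection of horizontal translations described above can be sketched in a minimal form. For a rigid object, the centre of mass of each projection must trace a sinusoid in view angle, so residuals from a fitted sinusoid flag per-view shifts. The sketch assumes parallel-beam geometry and a synthetic Gaussian sinogram for simplicity; the actual system is cone-beam and the authors' implementation is not reproduced here:

```python
import numpy as np

def detect_shifts(sinogram, angles):
    """For a rigid object, the per-view centre of mass must follow
    c + a*cos(theta) + b*sin(theta); residuals from that least-squares fit
    flag horizontal translations during acquisition, in detector pixels."""
    det = np.arange(sinogram.shape[1])
    com = (sinogram * det).sum(axis=1) / sinogram.sum(axis=1)
    A = np.column_stack([np.cos(angles), np.sin(angles), np.ones_like(angles)])
    coef, *_ = np.linalg.lstsq(A, com, rcond=None)
    return com - A @ coef

# synthetic, shift-free sinogram: a Gaussian profile tracing a clean sinusoid
angles = np.linspace(0.0, 2.0 * np.pi, 180, endpoint=False)
centers = 50.0 + 10.0 * np.cos(angles)
det = np.arange(101, dtype=float)
sino = np.exp(-(det[None, :] - centers[:, None]) ** 2 / 8.0)
residuals = detect_shifts(sino, angles)
```

For the consistent synthetic sinogram the residuals vanish; a view acquired with the object shifted horizontally would show a residual equal to the shift in detector pixels.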
NASA Astrophysics Data System (ADS)
Abu Anas, Emran Mohammad; Kim, Jae Gon; Lee, Soo Yeol; Kamrul Hasan, Md
2011-10-01
X-ray flat-panel detectors are increasingly popular in 3D cone beam volume CT machines. Due to deficiencies in the semiconductor array manufacturing process, the cone beam projection data are often corrupted by different types of abnormalities, which cause severe ring and radiant artifacts in a cone beam reconstruction image and, as a result, degrade diagnostic image quality. In this paper, a novel technique is presented for the correction of errors in 2D cone beam projections due to abnormalities often observed in 2D x-ray flat-panel detectors. Template images are derived from the responses of the detector pixels using their statistical properties, and then an effective non-causal derivative-based detection algorithm in 2D space is presented for detecting defective and mis-calibrated detector elements separately. An image inpainting-based 3D correction scheme is proposed for estimating the responses of defective detector elements, and the responses of mis-calibrated detector elements are corrected using a normalization technique. For real-time implementation, a simplification of the proposed off-line method is also suggested. Finally, the proposed algorithms are tested using different real cone beam volume CT images, and the experimental results demonstrate that the proposed methods can remove ring and radiant artifacts from cone beam volume CT images more effectively than other techniques reported in the literature.
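The detect-then-inpaint idea can be illustrated with a deliberately simplified stand-in: flag detector elements that deviate from their local neighbourhood median by a robust threshold, then replace them with that median. The 3x3 neighbourhood, the MAD-based scale estimate and the threshold value are all assumptions of this sketch, not the paper's template/derivative-based algorithm:

```python
import numpy as np

def correct_defective(proj, thresh=5.0):
    """Flag elements deviating from their 3x3 neighbour median by more than
    `thresh` robust sigmas, then inpaint them with that median.
    Simplified stand-in for the paper's detection/inpainting scheme."""
    H, W = proj.shape
    padded = np.pad(proj, 1, mode='edge')
    neighbours = [padded[i:i + H, j:j + W]
                  for i in range(3) for j in range(3) if (i, j) != (1, 1)]
    med = np.median(np.stack(neighbours), axis=0)   # median of 8 neighbours
    resid = proj - med
    sigma = 1.4826 * np.median(np.abs(resid)) + 1e-12  # robust (MAD) scale
    bad = np.abs(resid) > thresh * sigma
    out = proj.copy()
    out[bad] = med[bad]   # inpaint defective elements with the local median
    return out, bad

flat = np.ones((8, 8))
flat[3, 4] = 100.0        # one stuck-bright detector element
fixed, bad = correct_defective(flat)
```

On the flat test projection, only the stuck element is flagged and its value is restored from the surrounding pixels; on real data such a per-projection correction is what suppresses the ring artifacts the rogue column would otherwise produce.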
USDA-ARS?s Scientific Manuscript database
The progressive improvement of computer science and development of auto-calibration techniques means that calibration of simulation models is no longer a major challenge for watershed planning and management. Modelers now increasingly focus on challenges such as improved representation of watershed...
Calibration of remotely sensed proportion or area estimates for misclassification error
Raymond L. Czaplewski; Glenn P. Catts
1992-01-01
Classifications of remotely sensed data contain misclassification errors that bias areal estimates. Monte Carlo techniques were used to compare two statistical methods that correct or calibrate remotely sensed areal estimates for misclassification bias using reference data from an error matrix. The inverse calibration estimator was consistently superior to the...
Nuclear medicine image registration by spatially noncoherent interferometry.
Scheiber, C; Malet, Y; Sirat, G; Grucker, D
2000-02-01
This article introduces a technique for obtaining high-resolution body contour data in the same coordinate frame as that of a rotating gamma camera, using a miniature range finder, the conoscope, mounted on the camera gantry. One potential application of the technique is accurate coregistration in longitudinal brain SPECT studies, using the face of the patient (or "mask"), instead of SPECT slices, to coregister subsequent acquisitions involving the brain. Conoscopic holography is an interferometry technique that relies on spatially incoherent light interference in birefringent crystals. In this study, the conoscope was used to measure the absolute distance (Z) between a light source reflected from the skin and its observation plane. This light was emitted by a 0.2-mW laser diode. A scanning system was used to image the face during SPECT acquisition. The system consisted of a motor-driven mirror (Y axis) and the gamma-camera gantry (1 profile was obtained for each rotation step, X axis). The system was calibrated to place the conoscopic measurements and SPECT slices in the same coordinate frame. Through a simple and robust calibration of the system, the SE for measurements performed on geometric shapes was less than 2 mm, i.e., less than the actual pixel size of the SPECT data. Biometric measurements of an anthropomorphic brain phantom were within 3%-5% of actual values. The mask data were used to register images of a brain phantom and of a volunteer's brain, respectively. The rigid transformation that allowed the merging of masks by visual inspection was applied to the 2 sets of SPECT slices to perform the fusion of the data. At the cost of an additional low-cost setup integrated into the gamma-camera gantry, real-time data about the surface of the head were obtained. 
As with all other surface-based techniques (as opposed to volume-based techniques), this method allows data to be matched independently of the dataset of interest and facilitates further registration of data from any other source. The main advantage of this technique compared with other optically based methods is the robustness of the calibration procedure and the compactness of the sensor, a result of the colinearity of the projected and the reflected (diffused) beams of the conoscope. Taking into account the experimental nature of this preliminary work, significant improvements in the accuracy and speed of measurements (up to 1000 points/s) are expected.
Rectifying calibration error of Goldmann applanation tonometer is easy!
Choudhari, Nikhil S; Moorthy, Krishna P; Tungikar, Vinod B; Kumar, Mohan; George, Ronnie; Rao, Harsha L; Senthil, Sirisha; Vijaya, Lingam; Garudadri, Chandra Sekhar
2014-11-01
Purpose: The Goldmann applanation tonometer (GAT) is the current gold-standard tonometer. However, its calibration error is common and can go unnoticed in clinics, and repair by the manufacturer has limitations. The purpose of this report is to describe a self-taught technique for rectifying the calibration error of GAT. Materials and Methods: Twenty-nine slit-lamp-mounted Haag-Streit Goldmann tonometers (Model AT 900 C/M; Haag-Streit, Switzerland) were included in this cross-sectional interventional pilot study. The technique for rectifying the calibration error of the tonometer involved cleaning and lubrication of the instrument, followed by alignment of the weights when lubrication alone did not suffice. We followed the South East Asia Glaucoma Interest Group's definition of calibration error tolerance (acceptable GAT calibration error within ±2, ±3 and ±4 mm Hg at the 0-, 20- and 60-mm Hg testing levels, respectively). Results: Twelve of 29 (41.3%) GATs were out of calibration. The ranges of positive and negative calibration error at the clinically most important 20-mm Hg testing level were 0.5 to 20 mm Hg and -0.5 to -18 mm Hg, respectively. Cleaning and lubrication alone sufficed to rectify the calibration error of 11 (91.6%) faulty instruments. Only one (8.3%) faulty GAT required alignment of the counterweight. Conclusions: Rectification of the calibration error of GAT is possible in-house. Cleaning and lubrication of GAT can be carried out even by eye care professionals and may suffice to rectify calibration error in the majority of faulty instruments. Such an exercise may drastically reduce the downtime of the gold-standard tonometer.
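The SEAGIG tolerance rule quoted above is simple enough to express directly; the helper name below is ours, but the tolerance values come straight from the text:

```python
def within_seagig_tolerance(error_mmhg, test_level_mmhg):
    """SEAGIG calibration-error tolerance as quoted in the text:
    +/-2, +/-3 and +/-4 mm Hg at the 0-, 20- and 60-mm Hg levels."""
    tolerance = {0: 2.0, 20: 3.0, 60: 4.0}[test_level_mmhg]
    return abs(error_mmhg) <= tolerance
```

By this rule, the worst instruments reported in the study (errors of +20 and -18 mm Hg at the 20-mm Hg level) fail by a wide margin.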
Segmented Gamma Scanner for Small Containers of Uranium Processing Waste- 12295
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morris, K.E.; Smith, S.K.; Gailey, S.
2012-07-01
The Segmented Gamma Scanner (SGS) is commonly utilized in the assay of 55-gallon drums containing radioactive waste. Successfully deployed calibration methods include measurement of vertical line source standards in representative matrices and mathematical efficiency calibrations. The SGS technique can also be utilized to assay smaller containers, such as those used for criticality safety in uranium processing facilities. For such an application, a Can SGS system is aptly suited for the identification and quantification of radionuclides present in fuel processing wastes. Additionally, since the significant presence of uranium lumping can confound even a simple 'pass/fail' measurement regimen, high-resolution gamma spectroscopy allows for the use of lump-detection techniques. In this application a lump correction is not required, but a differential peak approach is applied simply to identify the presence of U-235 lumps. The Can SGS is similar to current drum SGSs but differs in the methodology for vertical segmentation. In a current drum SGS, the drum is placed on a rotator at a fixed vertical position while the detector, collimator, and transmission source are moved vertically to effect vertical segmentation. For the Can SGS, segmentation is done more efficiently by raising and lowering the rotator platform upon which the small container is positioned, which also reduces the complexity of the system mechanism. The application of the Can SGS introduces new challenges to traditional calibration and verification approaches. In this paper, we revisit SGS calibration methodology in the context of smaller waste containers as applied to fuel processing wastes. Specifically, we discuss solutions to the challenges introduced by requiring source standards to fit within the confines of the small containers and by the unavailability of high-enriched uranium source standards. 
We also discuss the implementation of a previously used technique for identifying the presence of uranium lumping. The SGS technique is a well-accepted NDA technique applicable to containers of almost any size. It assumes a homogeneous matrix and activity distribution throughout the entire container, an assumption that is at odds with the detection of lumps within the assay item typical of uranium-processing waste. This fact, in addition to the difficulty of constructing small reference standards of uranium-bearing materials, required the methodology used for performing an efficiency curve calibration to be altered. The solution discussed in this paper is demonstrated to provide good results for both the segment activity and the full-container activity when measuring heterogeneous source distributions. The application of this approach will need to be based on process knowledge of the assay items, as biases can be introduced if it is used with homogeneous, or nearly homogeneous, activity distributions. The bias will need to be quantified for each combination of container geometry and SGS scanning settings. One recommended approach for using the heterogeneous calibration discussed here is to assay each item using a homogeneous calibration initially. Review of the segment activities compared with the full-container activity will signal the presence of a non-uniform activity distribution, as the segment activity will be grossly disproportionate to the full-container activity. Upon seeing this result, the assay should either be reanalyzed or repeated using the heterogeneous calibration. (authors)
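The recommended screening step, comparing segment activities against the container total, might be sketched as follows; the disproportion threshold `ratio_limit` is an assumed illustrative value, not one given in the paper:

```python
def flag_heterogeneous(segment_activities, ratio_limit=3.0):
    """Flag a non-uniform axial activity distribution when any segment
    carries a grossly disproportionate share of the container total.
    ratio_limit is an assumed screening value for illustration."""
    mean_activity = sum(segment_activities) / len(segment_activities)
    return any(a > ratio_limit * mean_activity for a in segment_activities)
```

A container whose activity is concentrated in one segment trips the flag and would be re-assayed with the heterogeneous calibration, while a uniformly loaded container does not.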
An information theoretic approach to use high-fidelity codes to calibrate low-fidelity codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, Allison, E-mail: lewis.allison10@gmail.com; Smith, Ralph; Williams, Brian
For many simulation models, it can be prohibitively expensive or physically infeasible to obtain a complete set of experimental data to calibrate model parameters. In such cases, one can alternatively employ validated higher-fidelity codes to generate simulated data, which can be used to calibrate the lower-fidelity code. In this paper, we employ an information-theoretic framework to determine the reduction in parameter uncertainty that is obtained by evaluating the high-fidelity code at a specific set of design conditions. These conditions are chosen sequentially, based on the amount of information that they contribute to the low-fidelity model parameters. The goal is to employ Bayesian experimental design techniques to minimize the number of high-fidelity code evaluations required to accurately calibrate the low-fidelity model. We illustrate the performance of this framework using heat and diffusion examples, a 1-D kinetic neutron diffusion equation, and a particle transport model, and include initial results from the integration of the high-fidelity thermal-hydraulics code Hydra-TH with a low-fidelity exponential model for the friction correlation factor.
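The sequential selection idea can be sketched for the simplest tractable case, a linear-Gaussian surrogate, where the expected information gain of a candidate design point has a closed form. The linear-Gaussian assumption and the `sensitivity` mapping are illustrative simplifications, not the paper's framework:

```python
import numpy as np

def next_design_point(candidates, Sigma, noise_var, sensitivity):
    """Greedy Bayesian design: pick the condition x whose high-fidelity
    evaluation maximizes the expected information gain
    0.5 * ln(1 + g(x)^T Sigma g(x) / noise_var),
    where Sigma is the current parameter covariance and g = sensitivity(x).
    Linear-Gaussian form assumed for illustration only."""
    def gain(x):
        g = sensitivity(x)
        return 0.5 * np.log(1.0 + g @ Sigma @ g / noise_var)
    return max(candidates, key=gain)

Sigma = np.diag([1.0, 0.01])       # toy prior covariance of two parameters
g = lambda x: np.array([x, 1.0])   # toy sensitivity of the observable to them
best = next_design_point([0.1, 1.0, 2.0], Sigma, 0.1, g)
```

In the toy setup the gain grows with the sensitivity to the uncertain first parameter, so the greedy rule selects the design condition that exercises it most strongly.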
Calibration of Ge gamma-ray spectrometers for complex sample geometries and matrices
NASA Astrophysics Data System (ADS)
Semkow, T. M.; Bradt, C. J.; Beach, S. E.; Haines, D. K.; Khan, A. J.; Bari, A.; Torres, M. A.; Marrantino, J. C.; Syed, U.-F.; Kitto, M. E.; Hoffman, T. J.; Curtis, P.
2015-11-01
A comprehensive study of the efficiency calibration and calibration verification of Ge gamma-ray spectrometers was performed using semi-empirical, computational Monte-Carlo (MC), and transfer methods. The aim of this study was to evaluate the accuracy of the quantification of gamma-emitting radionuclides in complex matrices normally encountered in environmental and food samples. A wide range of gamma energies from 59.5 to 1836.0 keV and geometries from a 10-mL jar to 1.4-L Marinelli beaker were studied on four Ge spectrometers with the relative efficiencies between 102% and 140%. Density and coincidence summing corrections were applied. Innovative techniques were developed for the preparation of artificial complex matrices from materials such as acidified water, polystyrene, ethanol, sugar, and sand, resulting in the densities ranging from 0.3655 to 2.164 g cm-3. They were spiked with gamma activity traceable to international standards and used for calibration verifications. A quantitative method of tuning MC calculations to experiment was developed based on a multidimensional chi-square paraboloid.
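As one concrete element of such an efficiency calibration, a conventional log-log polynomial peak-efficiency curve can be fitted to calibration-point data. The quadratic order and the synthetic data below are illustrative choices; this is not the paper's semi-empirical or Monte-Carlo-tuned model:

```python
import numpy as np

def fit_efficiency_curve(energies_kev, efficiencies, order=2):
    """Fit the conventional log-log polynomial peak-efficiency curve,
    ln(eff) = sum_i a_i (ln E)^i, and return it as a callable."""
    coeffs = np.polyfit(np.log(energies_kev), np.log(efficiencies), order)
    return lambda e_kev: np.exp(np.polyval(coeffs, np.log(e_kev)))

# synthetic check: data generated from a known quadratic in ln(E)
true_coeffs = [-0.2, 1.0, -3.0]
energies = np.array([59.5, 122.0, 344.0, 662.0, 1173.0, 1332.0, 1836.0])
effs = np.exp(np.polyval(true_coeffs, np.log(energies)))
curve = fit_efficiency_curve(energies, effs)
```

In practice each geometry/matrix combination gets its own curve, with density and coincidence-summing corrections applied to the calibration points before fitting, as described in the abstract.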
Wang, Lixin; Caylor, Kelly K; Dragoni, Danilo
2009-02-01
The δ18O and δ2H of water vapor serve as powerful tracers of hydrological processes. The typical method for determining water vapor δ18O and δ2H involves cryogenic trapping and isotope ratio mass spectrometry. Even with recent technical advances, these methods cannot resolve vapor composition at high temporal resolutions. In recent years, a few groups have developed continuous laser absorption spectroscopy (LAS) approaches for measuring δ18O and δ2H which achieve accuracy levels similar to those of lab-based mass spectrometry methods. Unfortunately, most LAS systems need cryogenic cooling and constant calibration to a reference gas, and have substantial power requirements, making them unsuitable for long-term field deployment at remote field sites. A new method called Off-Axis Integrated Cavity Output Spectroscopy (OA-ICOS) has been developed which requires extremely low energy consumption and neither reference gas nor cryogenic cooling. In this report, we develop a relatively simple pumping system coupled to a dew point generator to calibrate an ICOS-based instrument (Los Gatos Research Water Vapor Isotope Analyzer (WVIA) DLT-100) under various pressures using liquid water with known isotopic signatures. Results show that the WVIA can be successfully calibrated using this customized system for different pressure settings, which ensures that this instrument can be combined with other gas-sampling systems. The precision of this instrument and the associated calibration method can reach approximately 0.08 per mil for δ18O and approximately 0.4 per mil for δ2H. Compared with conventional mass spectrometry and other LAS-based methods, the OA-ICOS technique provides a promising alternative tool for continuous water vapor isotopic measurements in field deployments. Copyright 2009 John Wiley & Sons, Ltd.
In situ calibration of inductively coupled plasma-atomic emission and mass spectroscopy
Braymen, S.D.
1996-06-11
A method and apparatus are disclosed for in situ addition calibration of an inductively coupled plasma atomic emission spectrometer or mass spectrometer using a precision gas metering valve to introduce a volatile calibration gas of an element of interest directly into an aerosol particle stream. The present in situ calibration technique is suitable for various remote, on-site sampling systems such as laser ablation or nebulization. 5 figs.
Development of a Machine-Vision System for Recording of Force Calibration Data
NASA Astrophysics Data System (ADS)
Heamawatanachai, Sumet; Chaemthet, Kittipong; Changpan, Tawat
This paper presents the development of a new system for recording force calibration data using machine vision technology. A real-time camera and computer system were used to capture images of the readings from the instruments during calibration. The measurement images were then transformed and translated into numerical data using an optical character recognition (OCR) technique. These numerical data, along with the raw images, were automatically saved as the calibration database files. With this new system, the human error of manual recording is eliminated. Verification experiments were performed by using this system to record the measurement results from an amplifier (DMP 40) with a load cell (HBM-Z30-10kN). NIMT's 100-kN deadweight force standard machine (DWM-100kN) was used to generate the test forces. The experiments were set up in three categories: 1) dynamic condition (recording during load changes), 2) static condition (recording during fixed load), and 3) full calibration experiments in accordance with ISO 376:2011. The dynamic-condition experiment gave >94% of captured images without overlapping digits, and the static-condition experiment gave >98% of images without overlapping. All measurement images without overlapping were translated into numbers by the developed program with 100% accuracy, and the full calibration experiments also gave 100% accurate results. Moreover, in case of an incorrect translation of any result, it is possible to trace back to the raw calibration image to check and correct it. This machine-vision-based system and program should therefore be appropriate for recording force calibration data.
Vicarious calibration of the Geostationary Ocean Color Imager.
Ahn, Jae-Hyun; Park, Young-Je; Kim, Wonkook; Lee, Boram; Oh, Im Sang
2015-09-07
Measurements of ocean color from the Geostationary Ocean Color Imager (GOCI), with moderate spatial resolution and high temporal frequency, demonstrate high value for a number of oceanographic applications. This study proposes and evaluates the vicarious calibration of GOCI needed to achieve the level of radiometric accuracy desired for ocean color studies. Previous studies reported that GOCI retrievals of normalized water-leaving radiance (nLw) are biased high for all visible bands due to the lack of vicarious calibration. The vicarious calibration approach described here relies on assumed constant aerosol characteristics over open-ocean sites to accurately estimate atmospheric radiances for the two near-infrared (NIR) bands. The vicarious calibration of the visible bands is performed using in situ nLw measurements and the satellite-estimated atmospheric radiance from the two NIR bands over case-1 waters. Prior to this analysis, the in situ nLw spectra in the NIR are corrected by a spectrum optimization technique based on the NIR similarity spectrum assumption. The vicarious calibration gain factors derived for all GOCI bands (except 865 nm) significantly improve agreement of retrieved remote-sensing reflectance (Rrs) with in situ measurements. These gain factors are independent of angular geometry and possible temporal variability. To further increase confidence in the calibration gain factors, a large data set of shipboard measurements and AERONET-OC data is used in the validation process. It is shown that the absolute percentage difference of the atmospheric correction results from the vicariously calibrated GOCI system is reduced by ~6.8%.
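The per-band gain-factor computation can be sketched as below. The top-of-atmosphere radiance budget is deliberately simplified to path radiance plus diffusely transmitted water-leaving radiance, and every number is hypothetical; the actual processing involves a full NIR-based atmospheric correction.

```python
# Sketch of vicarious gain factors: the ratio of the radiance the sensor
# *should* have measured (modeled path radiance + diffusely transmitted
# in situ nLw) to the radiance it actually reported.  Numbers are invented.

def vicarious_gain(l_path, t_diffuse, nlw_insitu, l_observed):
    """Gain g such that g * L_observed matches the simulated TOA radiance."""
    l_target = l_path + t_diffuse * nlw_insitu  # simplified TOA budget
    return l_target / l_observed

bands = {  # wavelength nm: (L_path, t_diffuse, in situ nLw, observed L)
    412: (6.10, 0.82, 1.10, 7.21),
    443: (5.40, 0.84, 1.30, 6.70),
    555: (2.90, 0.88, 0.55, 3.45),
}
gains = {band: vicarious_gain(*vals) for band, vals in bands.items()}
```

Multiplying each band's observed radiances by its gain then removes the systematic high bias described in the abstract.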
NASA Technical Reports Server (NTRS)
Amer, Tahani; Tripp, John; Tcheng, Ping; Burkett, Cecil; Sealey, Bradley
2004-01-01
This paper presents the calibration results and uncertainty analysis of a high-precision reference pressure measurement system currently used in wind tunnels at the NASA Langley Research Center (LaRC). Sensors, calibration standards, and measurement instruments are subject to errors due to aging, drift with time, environmental effects, transportation, the mathematical model, the calibration experimental design, and other factors. Errors occur at every link in the chain of measurements and data reduction, from the sensor to the final computed results. At each link of the chain, bias and precision uncertainties must be separately estimated for facility use, and they are combined to produce overall calibration and prediction confidence intervals for the instrument, typically at a 95% confidence level. The uncertainty analysis and calibration experimental designs used herein, based on techniques developed at LaRC, employ replicated experimental designs for efficiency, separate estimation of bias and precision uncertainties, and detection of significant parameter drift with time. Final results are presented, including calibration confidence intervals and prediction intervals given as functions of the applied inputs rather than as a fixed percentage of the full-scale value. System uncertainties are propagated beginning with the initial reference pressure standard to the calibrated instrument as a working standard in the facility. Among the parameters that can affect the overall results are operating temperature, atmospheric pressure, humidity, and facility vibration. Effects of factors such as initial zeroing and temperature are investigated, and the effects of the identified parameters on system performance and accuracy are discussed.
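The bias/precision combination described above can be sketched in a few lines. The replicate readings and the bias limit are invented, and the fixed coverage factor of 2.0 stands in for the appropriate Student-t value of the actual replicated design.

```python
import math
import statistics

# Root-sum-square combination of a bias (systematic) limit with a precision
# (random) limit estimated from replicated calibration readings, at ~95%
# confidence.  Readings and bias limit below are hypothetical.

def combined_uncertainty(replicates, bias_limit, t95=2.0):
    s = statistics.stdev(replicates)      # sample standard deviation
    precision_limit = t95 * s             # ~95% precision (random) limit
    return math.sqrt(bias_limit**2 + precision_limit**2)

readings = [100.02, 99.98, 100.05, 99.97, 100.01]  # psi, invented replicates
u95 = combined_uncertainty(readings, bias_limit=0.04)
```

Note that, as in the paper, the result is an absolute interval in the units of the measurand, not a fixed percentage of full scale.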
NASA Astrophysics Data System (ADS)
Evans, Aaron H.
Thermal remote sensing is a powerful tool for measuring the spatial variability of evapotranspiration due to the cooling effect of vaporization. The residual method is a popular technique that calculates evapotranspiration by subtracting sensible heat from available energy. Estimating sensible heat requires the aerodynamic surface temperature, which is difficult to retrieve accurately. Methods such as SEBAL/METRIC correct for this problem by calibrating the relationship between sensible heat and retrieved surface temperature. The disadvantages of these calibrations are that 1) the user must manually identify extremely dry and wet pixels in the image and 2) each calibration is applicable only over a limited spatial extent. Producing larger maps is operationally limited by the time required to manually calibrate multiple spatial extents over multiple days. This dissertation develops techniques that automatically detect dry and wet pixels. LANDSAT imagery is used because it resolves dry pixels. Calibrations using 1) only dry pixels and 2) dry pixels plus wet pixels are developed. Snapshots of retrieved evaporative fraction and actual evapotranspiration are compared to eddy covariance measurements for five study areas in Florida: 1) Big Cypress, 2) Disney Wilderness, 3) the Everglades, 4) near Gainesville, FL, and 5) Kennedy Space Center. The sensitivity of evaporative fraction to temperature, available energy, roughness length, and wind speed is tested. A technique for temporally interpolating evapotranspiration by fusing LANDSAT and MODIS is developed and tested. The automated algorithm is successful at detecting wet and dry pixels (if they exist). Including wet pixels in the calibration and assuming constant atmospheric conductance significantly improved results for all study areas except Big Cypress and Gainesville.
Evaporative fraction is not very sensitive to instantaneous available energy, but it is sensitive to temperature when wet pixels are included, because temperature is required for estimating wet-pixel evapotranspiration. Data fusion techniques only slightly outperformed linear interpolation. The eddy covariance comparison and temporal interpolation produced acceptable bias error for most cases, suggesting that automated calibration and interpolation could be used to predict monthly or annual ET. Maps demonstrating spatial patterns of evapotranspiration at field scale were successfully produced, but only for limited spatial extents. A framework has been established for producing larger maps by creating a mosaic of smaller individual maps.
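A much-simplified sketch of the automated endmember idea follows: pick "wet" and "cold" versus "hot" and "dry" anchors from the tails of the surface-temperature distribution and interpolate evaporative fraction between them. The real SEBAL/METRIC calibration fits the near-surface temperature difference used in the sensible heat flux rather than evaporative fraction directly, and the percentile thresholds and temperatures here are invented.

```python
import random

# Automated endmember selection from the surface-temperature histogram:
# the coldest percentile stands in for wet pixels (EF = 1) and the hottest
# percentile for dry pixels (EF = 0).  Thresholds/temperatures are invented.

def calibrate_evaporative_fraction(ts_pixels, dry_pct=0.99, wet_pct=0.01):
    ts_sorted = sorted(ts_pixels)
    n = len(ts_sorted)
    ts_wet = ts_sorted[int(wet_pct * (n - 1))]   # cold/wet anchor: EF = 1
    ts_dry = ts_sorted[int(dry_pct * (n - 1))]   # hot/dry anchor: EF = 0

    def ef(ts):  # linear interpolation between anchors, clipped to [0, 1]
        frac = (ts_dry - ts) / (ts_dry - ts_wet)
        return max(0.0, min(1.0, frac))

    return ef

random.seed(0)
ts_field = [300.0 + 10.0 * random.random() for _ in range(1000)]  # kelvin
ef = calibrate_evaporative_fraction(ts_field)
```

Once calibrated, `ef` can be applied per pixel and multiplied by available energy to map actual evapotranspiration.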
Cross Calibration of TOMS, SBUV/2 and SCIAMACHY Radiances from Ground Observations
NASA Technical Reports Server (NTRS)
Hilsenrath, Ernest; Bhartia, P. K.; Bojkov, B.; Kowaleski, M.; Labow, G.; Ahmad, Z.
2002-01-01
We have shown that validation of radiances is a very effective means of correcting the absolute accuracy and long-term drifts of backscatter-type satellite measurements. This method bypasses the algorithms used for both satellite and ground-based measurements that are normally used to validate and correct the satellite data. A new method for satellite validation is planned that will complement measurements from the existing ground-based networks. This method will employ very accurate comparisons between ground-based zenith sky radiances and satellite nadir radiances. These comparisons will rely heavily on the experience derived from the Shuttle SBUV (SSBUV) program, which provided a reference standard of radiance measurements for SBUV/2, TOMS, and GOME. This new measurement program, called 'Skyrad', employs two well-established capabilities at the Goddard Space Flight Center: 1) the SSBUV calibration facilities and 2) the radiative transfer codes used for the TOMS and SBUV/2 algorithms and their subsequent refinements. Radiative transfer calculations show that ground-based zenith sky and satellite nadir backscatter ultraviolet comparisons can be made very accurately under certain viewing conditions. The Skyrad instruments (SSBUV, Brewer spectrophotometers, and possibly others) will be calibrated and maintained to a precision of a few tenths of a percent. Skyrad data will then enable long-term calibration of upcoming satellite instruments such as QuikTOMS, SBUV/2s, and SCIAMACHY with a high degree of precision. This technique can be further employed to monitor the performance of future instruments such as GOME-2, OMI, and OMPS. Additional information is included in the original extended abstract.
NASA Astrophysics Data System (ADS)
Dreißigacker, Anne; Köhler, Eberhard; Fabel, Oliver; van Gasselt, Stephan
2014-05-01
At the Planetary Sciences and Remote Sensing research group at Freie Universität Berlin, an SCD-based X-ray fluorescence spectrometer (XRF-S) is being developed for deployment on planetary orbiters to conduct direct, passive, energy-dispersive measurements of the X-ray fluorescence of planetary surfaces induced by solar X-rays and high-energy particles. Because the Sun is a highly variable radiation source, the intensity of solar X-ray radiation has to be monitored constantly to allow for comparison and signal calibration of the X-ray radiation from lunar surface materials. Measurements are obtained by indirectly monitoring incident solar X-rays emitted from a calibration sample. This has the additional advantage of minimizing the risk of detector overload and damage during extreme solar events, such as high-energy solar flares and particle storms, since only the sample targets receive the higher radiation load directly (the monitor never points directly towards the Sun). Quantitative data are obtained, and can subsequently be analysed, through synchronous measurement of the fluorescence of the Moon's surface by the XRF-S main instrument and of the emitted X-ray fluorescence of the calibration samples by the XRF-S-ISM (Indirect Solar Monitor). We are currently developing requirements for three sample tiles for onboard correction and calibration of XRF-S, each with an area of 3-9 cm2 and a maximum weight of 45 g. This includes the development of design concepts, determination of techniques for sample manufacturing, manufacturing and testing of prototypes, and statistical analysis of measurement characteristics and quantification of error sources for the advanced prototypes and final samples. Apart from using natural rock samples as calibration samples, we are currently investigating manufacturing techniques including laser sintering of rock glass on metals, SiO2-stabilized mineral powders, and artificial volcanic glass.
High-precision measurements of the chemical composition of the final samples (EPMA, various energy-dispersive XRF) will serve as the calibration standard for XRF-S. Development is funded by the German Aerospace Agency under grant 50 JR 1303.
Design of transonic airfoil sections using a similarity theory
NASA Technical Reports Server (NTRS)
Nixon, D.
1978-01-01
A study of the available methods for transonic airfoil and wing design indicates that the most powerful technique is the numerical optimization procedure. However, the computer time for this method is relatively large because of the amount of computation required in the searches during optimization. The optimization method requires that base and calibration solutions be computed to determine a minimum drag direction. The design space is then computationally searched in this direction; it is these searches that dominate the computation time. A recent similarity theory allows certain transonic flows to be calculated rapidly from the base and calibration solutions. In this paper the application of the similarity theory to design problems is examined with the object of at least partially eliminating the costly searches of the design optimization method. An example of an airfoil design is presented.
Boboc, A; Bieg, B; Felton, R; Dalley, S; Kravtsov, Yu
2015-09-01
In this paper, we present the implementation of a new calibration for the JET real-time polarimeter based on the complex amplitude ratio technique, together with a new self-validation mechanism for the data. This allowed easy integration of the polarimetry measurements into the JET plasma density control (gas feedback control) as well as the machine protection systems (neutral beam injection heating safety interlocks). The new addition was used successfully during the 2014 JET campaign, and it is envisaged that it will operate routinely from the 2015 campaign onwards in any plasma condition (including ITER-relevant scenarios). This mode of operation has elevated the importance of polarimetry as a diagnostic tool in view of future fusion experiments.
Absolute stellar photometry on moderate-resolution FPA images
Stone, T.C.
2009-01-01
An extensive database of star (and Moon) images has been collected by the ground-based RObotic Lunar Observatory (ROLO) as part of the US Geological Survey program for lunar calibration. The stellar data are used to derive nightly atmospheric corrections for the observations from extinction measurements, and absolute calibration of the ROLO sensors is based on observations of Vega and published reference flux and spectrum data. The ROLO telescopes were designed for imaging the Moon at moderate resolution, thus imposing some limitations for the stellar photometry. Attaining accurate stellar photometry with the ROLO image data has required development of specialized processing techniques. A key consideration is consistency in discriminating the star core signal from the off-axis point spread function. The analysis and processing methods applied to the ROLO stellar image database are described.
NASA Astrophysics Data System (ADS)
Baresel, Björn; Bucher, Hugo; Brosse, Morgane; Bagherpour, Borhan; Schaltegger, Urs
2015-04-01
To construct a revised, high-resolution calibrated time scale for the Permian-Triassic boundary (PTB), we use (1) high-precision U-Pb zircon age determinations of a unique succession of volcanic ash layers interbedded with deep-water fossiliferous sediments in the Nanpanjiang Basin (South China), combined with (2) accurate quantitative biochronology based on ammonoids, conodonts, radiolarians, and foraminifera, and (3) tracers of marine bioproductivity (carbon isotopes) across the PTB. The unprecedented precision of the single-grain chemical abrasion isotope-dilution thermal ionization mass spectrometry (CA-ID-TIMS) dating technique at the sub-per-mil level (radio-isotopic calibration of the PTB at the <100 ka level) now allows calibrating magmatic and biological timescales at a resolution adequate for both groups of processes. Using these alignments allows (1) positioning the PTB in different depositional settings and (2) resolving the age contradictions generated by the misleading use of the first occurrence (FO) of the conodont Hindeodus parvus, whose diachronous first occurrences are arbitrarily used for placing the base of the Triassic. This new age framework provides the basis for a combined calibration of chemostratigraphic records with high-resolution biochronozones of the Late Permian and Early Triassic. Here, we present new single-grain U-Pb zircon data for volcanic ash layers from two deep marine sections (Dongpan and Penglaitan), revealing stratigraphically consistent dates over several volcanic ash layers bracketing the PTB. These analyses define weighted mean 206Pb/238U ages of 251.956±0.033 Ma (Dongpan) and 252.062±0.043 Ma (Penglaitan) for the last Permian ash bed. By calibration with detailed litho- and biostratigraphy, new U-Pb ages of 251.953±0.038 Ma (Dongpan) and 251.907±0.033 Ma (Penglaitan) are established for the onset of the Triassic.
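Weighted mean ages of the kind quoted above are conventionally obtained by inverse-variance averaging of the single-grain dates, which can be sketched as follows. The grain dates and uncertainties below are invented; only the formula reflects the technique.

```python
import math

# Inverse-variance weighted mean of single-grain dates, the standard way
# individual 206Pb/238U analyses are combined into an age +/- uncertainty.
# Grain values are hypothetical.

def weighted_mean(dates):
    """dates: list of (age_Ma, one_sigma_Ma); returns (mean, one_sigma)."""
    weights = [1.0 / sigma**2 for _, sigma in dates]
    mean = sum(w * age for w, (age, _) in zip(weights, dates)) / sum(weights)
    sigma = math.sqrt(1.0 / sum(weights))   # uncertainty of the weighted mean
    return mean, sigma

grains = [(251.94, 0.08), (251.97, 0.06), (251.95, 0.07), (251.96, 0.05)]
mean, sigma = weighted_mean(grains)
```

The more precise grains dominate the mean, and the combined uncertainty is smaller than any single grain's, which is how sub-0.05 Ma precision on a ~252 Ma age becomes possible.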
NASA Technical Reports Server (NTRS)
Haney, Conor; Doelling, David; Minnis, Patrick; Bhatt, Rajendra; Scarino, Benjamin; Gopalan, Arun
2016-01-01
The Deep Space Climate Observatory (DSCOVR), launched on 11 February 2015, is a satellite positioned near the Lagrange-1 (L1) point, carrying several instruments that monitor space weather as well as Earth-view sensors designed for climate studies. The Earth Polychromatic Imaging Camera (EPIC) onboard DSCOVR continuously views the sun-illuminated portion of the Earth with spectral coverage in the UV, VIS, and NIR bands. Although the EPIC instrument does not have any onboard calibration capability, its constant view of the sunlit Earth disk provides a unique opportunity for simultaneous viewing with several other satellite instruments. This arrangement allows the EPIC sensor to be inter-calibrated using other well-characterized satellite instruments as reference standards. Two such instruments with onboard calibration are MODIS, flown on Aqua and Terra, and VIIRS, onboard Suomi-NPP. The MODIS and VIIRS reference calibrations are transferred to the EPIC instrument using both all-sky ocean and deep convective cloud (DCC) ray-matched EPIC and MODIS/VIIRS radiance pairs. An automated navigation correction routine was developed to more accurately align the EPIC and MODIS/VIIRS granules, which dramatically reduced the uncertainty of the resulting calibration gain based on the EPIC and MODIS/VIIRS radiance pairs. The SCIAMACHY-based spectral band adjustment factors (SBAF) applied to the MODIS/VIIRS radiances were found to successfully adjust the reference radiances to the spectral response of the specific EPIC channel for overlapping spectral channels. The SBAF was also found to be effective for the non-overlapping EPIC channel 10. Lastly, both ray-matching techniques found no discernible trends for EPIC channel 7 over the year of publicly released EPIC data.
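Deriving a calibration gain from ray-matched radiance pairs can be sketched as a least-squares fit of reference radiances against uncalibrated counts, here forced through zero for simplicity. The pairs are synthetic; actual processing uses large populations of angularly matched all-sky ocean and DCC pixels.

```python
# Zero-offset least-squares gain from ray-matched (count, reference-radiance)
# pairs: g = sum(c*r) / sum(c*c).  Pairs below are synthetic illustrations
# of matched EPIC counts vs. MODIS/VIIRS reference radiances.

def ray_matched_gain(pairs):
    """pairs: list of (epic_counts, reference_radiance); gain through origin."""
    num = sum(count * rad for count, rad in pairs)
    den = sum(count * count for count, _ in pairs)
    return num / den

pairs = [(120.0, 60.4), (250.0, 124.8), (400.0, 199.5), (310.0, 155.2)]
gain = ray_matched_gain(pairs)
calibrated = [gain * count for count, _ in pairs]  # counts -> radiance units
```

With an offset term retained in the regression, its closeness to zero serves as the stray-light diagnostic mentioned in the related EPIC abstracts.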
Learning a common dictionary for subject-transfer decoding with resting calibration.
Morioka, Hiroshi; Kanemura, Atsunori; Hirayama, Jun-ichiro; Shikauchi, Manabu; Ogawa, Takeshi; Ikeda, Shigeyuki; Kawanabe, Motoaki; Ishii, Shin
2015-05-01
Brain signals measured over a series of experiments have inherent variability because of different physical and mental conditions among multiple subjects and sessions. Such variability complicates the analysis of data from multiple subjects and sessions in a consistent way, and degrades the performance of subject-transfer decoding in a brain-machine interface (BMI). To accommodate the variability in brain signals, we propose 1) a method for extracting spatial bases (or a dictionary) shared by multiple subjects, by employing a signal-processing technique of dictionary learning modified to compensate for variations between subjects and sessions, and 2) an approach to subject-transfer decoding that uses the resting-state activity of a previously unseen target subject as calibration data for compensating for variations, eliminating the need for a standard calibration based on task sessions. Applying our methodology to a dataset of electroencephalography (EEG) recordings during a selective visual-spatial attention task from multiple subjects and sessions, where the variability compensation was essential for reducing the redundancy of the dictionary, we found that the extracted common brain activities were reasonable in the light of neuroscience knowledge. The applicability to subject-transfer decoding was confirmed by improved performance over existing decoding methods. These results suggest that analyzing multisubject brain activities on common bases by the proposed method enables information sharing across subjects with low-burden resting calibration, and is effective for practical use of BMI in variable environments.
NASA Astrophysics Data System (ADS)
Repetti, Audrey; Birdi, Jasleen; Dabbech, Arwa; Wiaux, Yves
2017-10-01
Radio interferometric imaging aims to estimate an unknown sky intensity image from degraded observations acquired through an antenna array. In the theoretical case of a perfectly calibrated array, it has been shown that solving the corresponding imaging problem by iterative algorithms based on convex optimization and compressive sensing theory can be competitive with classical algorithms such as CLEAN. However, in practice, antenna-based gains are unknown and have to be calibrated. Future radio telescopes, such as the Square Kilometre Array, aim at improving imaging resolution and sensitivity by orders of magnitude. At this precision level, the direction-dependency of the gains must be accounted for, and radio interferometric imaging can be understood as a blind deconvolution problem. In this context, the underlying minimization problem is non-convex, and adapted techniques have to be designed. In this work, leveraging recent developments in non-convex optimization, we propose the first joint calibration and imaging method in radio interferometry with proven convergence guarantees. Our approach, based on a block-coordinate forward-backward algorithm, jointly accounts for visibilities and suitable priors on both the image and the direction-dependent effects (DDEs). As demonstrated in recent works, sparsity remains the prior of choice for the image, while DDEs are modelled as smooth functions of the sky, i.e. spatially band-limited. Finally, we show through simulations the efficiency of our method for the reconstruction of both images of point sources and complex extended sources. MATLAB code is available on GitHub.
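The forward-backward building block can be illustrated on the convex imaging sub-problem alone, minimizing 0.5*||Ax - y||^2 + lam*||x||_1 by a gradient step followed by soft-thresholding (the proximal step for the l1 prior). The full method alternates such blocks over the image and DDE variables; here the operator, data, and parameters are all made up, and Python stands in for the authors' MATLAB.

```python
# Toy forward-backward (proximal gradient) iteration for sparse recovery:
# x <- soft(x - step * A^T (A x - y), step * lam).  A, y, and all
# parameters are invented; only the image block of the method is shown.

def soft(v, t):
    """Soft-thresholding: the proximal operator of t * ||.||_1."""
    return [max(abs(vi) - t, 0.0) * (1.0 if vi >= 0 else -1.0) for vi in v]

def forward_backward(A, y, lam=0.1, step=0.2, iters=200):
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        residual = [sum(A[i][j] * x[j] for j in range(n)) - y[i]
                    for i in range(m)]                       # A x - y
        grad = [sum(A[i][j] * residual[i] for i in range(m))
                for j in range(n)]                           # A^T residual
        x = soft([x[j] - step * grad[j] for j in range(n)], step * lam)
    return x

A = [[1.0, 0.2, 0.0],
     [0.0, 1.0, 0.2],
     [0.2, 0.0, 1.0]]
y = [1.0, 0.2, 1.2]      # generated by the sparse truth x = [1, 0, 1]
x_hat = forward_backward(A, y)
```

The iterate recovers the sparse support (the middle coefficient stays near zero) while the l1 penalty mildly shrinks the nonzero entries, which is the behavior the sparsity prior is chosen for.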
A Consistent EPIC Visible Channel Calibration Using VIIRS and MODIS as a Reference.
NASA Astrophysics Data System (ADS)
Haney, C.; Doelling, D. R.; Minnis, P.; Bhatt, R.; Scarino, B. R.; Gopalan, A.
2017-12-01
The Earth Polychromatic Imaging Camera (EPIC) aboard the Deep Space Climate Observatory (DSCOVR) satellite constantly images the sunlit disk of the Earth from the Lagrange-1 (L1) point in 10 spectral channels spanning the UV, VIS, and NIR. Recently, the DSCOVR EPIC team publicly released the version 2 dataset, which implements improved navigation, stray-light correction, and flat-fielding of the CCD array. The EPIC 2-year data record must be well calibrated for consistent cloud, aerosol, trace gas, land use, and other retrievals. Because EPIC lacks onboard calibrators, the observations made by the EPIC channels must be calibrated vicariously using coincident measurements from radiometrically stable instruments that have onboard calibration systems. MODIS and VIIRS are the best-suited instruments for this task, as they contain similar spectral bands that are well calibrated onboard using solar diffusers and lunar tracking. We previously calibrated the EPIC version 1 dataset using EPIC and VIIRS angularly matched radiance pairs over both all-sky ocean and deep convective clouds (DCC). We noted that the EPIC images required navigation adjustments, and that the EPIC stray-light correction provided an offset term closer to zero based on the linear regression of the EPIC and VIIRS ray-matched radiance pairs. We will evaluate the EPIC version 2 navigation and stray-light improvements using the same techniques. In addition, we will monitor the EPIC channel calibration over the two years for any temporal degradation or anomalous behavior. These two calibration methods will be further validated using desert and DCC invariant Earth targets. The radiometric characterization of the selected invariant targets is performed using multiple years of MODIS and VIIRS measurements. Results of these studies will be shown at the conference.
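A spectral band adjustment factor (SBAF) of the kind used in this inter-calibration can be sketched as the ratio of a scene spectrum convolved with the target band response to the same spectrum convolved with the reference band response. The spectrum and the two triangular responses below are invented, not real EPIC/VIIRS SRFs or SCIAMACHY spectra.

```python
# Sketch of an SBAF: ratio of SRF-weighted scene values for the target
# (EPIC-like) and reference (VIIRS/MODIS-like) bands, on a common uniform
# wavelength grid.  All spectra and responses are hypothetical.

def band_average(spectrum, srf):
    """SRF-weighted scene value, assuming a common uniform wavelength grid."""
    return sum(s * r for s, r in zip(spectrum, srf)) / sum(srf)

def sbaf(spectrum, srf_target, srf_reference):
    return band_average(spectrum, srf_target) / band_average(spectrum, srf_reference)

wl = list(range(540, 581, 5))                        # nm grid
scene = [0.40 + 0.002 * (w - 540) for w in wl]       # slowly sloping spectrum
srf_target = [0, 0.2, 0.8, 1.0, 0.8, 0.2, 0, 0, 0]   # hypothetical target band
srf_ref    = [0, 0, 0, 0.3, 0.9, 1.0, 0.9, 0.3, 0]   # hypothetical reference band
factor = sbaf(scene, srf_target, srf_ref)
adjusted_radiance = 85.0 * factor   # reference radiance scaled to target band
```

Multiplying the reference radiance by this factor makes the two sensors comparable despite their slightly different band centers, which is the role the SCIAMACHY-based SBAFs play above.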
Report of the panel on international programs
NASA Technical Reports Server (NTRS)
Anderson, Allen Joel; Fuchs, Karl W.; Ganeka, Yasuhiro; Gaur, Vinod; Green, Andrew A.; Siegfried, W.; Lambert, Anthony; Rais, Jacub; Reighber, Christopher; Seeger, Herman
1991-01-01
The panel recommends that NASA participate and take an active role in the continuous monitoring of existing regional networks; the realization of high-resolution geopotential and topographic missions; the establishment of interconnections between the reference frames defined by different space techniques; the development and implementation of automation for all ground-to-space observing systems; calibration and validation experiments for measuring techniques and data; the establishment of international space-based networks for real-time transmission of high-density space data in standardized formats; tracking and support for non-NASA missions; and the extension of state-of-the-art observing and analysis techniques to developing nations.
On techniques for angle compensation in nonideal iris recognition.
Schuckers, Stephanie A C; Schmid, Natalia A; Abhyankar, Aditya; Dorairaj, Vivekanand; Boyce, Christopher K; Hornak, Lawrence A
2007-10-01
The popularity of the iris biometric has grown considerably over the past two to three years. Most research has been focused on the development of new iris processing and recognition algorithms for frontal view iris images. However, a few challenging directions in iris research have been identified, including processing of a nonideal iris and iris at a distance. In this paper, we describe two nonideal iris recognition systems and analyze their performance. The word "nonideal" is used in the sense of compensating for off-angle occluded iris images. The system is designed to process nonideal iris images in two steps: 1) compensation for off-angle gaze direction and 2) processing and encoding of the rotated iris image. Two approaches are presented to account for angular variations in the iris images. In the first approach, we use Daugman's integrodifferential operator as an objective function to estimate the gaze direction. After the angle is estimated, the off-angle iris image undergoes geometric transformations involving the estimated angle and is further processed as if it were a frontal view image. The encoding technique developed for a frontal image is based on the application of the global independent component analysis. The second approach uses an angular deformation calibration model. The angular deformations are modeled, and calibration parameters are calculated. The proposed method consists of a closed-form solution, followed by an iterative optimization procedure. The images are projected on the plane closest to the base calibrated plane. Biorthogonal wavelets are used for encoding to perform iris recognition. We use a special dataset of the off-angle iris images to quantify the performance of the designed systems. A series of receiver operating characteristics demonstrate various effects on the performance of the nonideal-iris-based recognition system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Y; Sharp, G
2014-06-15
Purpose: Gain calibration for X-ray imaging systems with movable flat panel detectors (FPD) and intrinsic crosshairs is a challenge due to the geometry dependence of the heel effect and crosshair artifact. This study aims to develop a gain correction method for such systems by implementing the multi-acquisition gain image correction (MAGIC) technique. Methods: Raw flat-field images containing crosshair shadows and the heel effect were acquired in 4 different FPD positions with fixed exposure parameters. The crosshair region was automatically detected and substituted with interpolated values from nearby exposed regions, generating a conventional single-image gain map for each FPD position. Large-kernel-based correction was applied to these images to correct the heel effect. A mask filter was used to invalidate the original crosshair regions previously filled with the interpolated values. A final, seamless gain map was created from the processed images by either the sequential filling (SF) or selective averaging (SA) techniques developed in this study. Quantitative evaluation was performed based on the detective quantum efficiency improvement factor (DQEIF) for gain-corrected images using the conventional and proposed techniques. Results: Qualitatively, the MAGIC technique was found to be more effective in eliminating crosshair artifacts than the conventional single-image method. The mean DQEIF over the range of frequencies from 0.5 to 3.5 mm-1 was 1.09±0.06, 2.46±0.32, and 3.34±0.36 in the crosshair-artifact region and 2.35±0.31, 2.33±0.31, and 3.09±0.34 in the normal region for the conventional, MAGIC-SF, and MAGIC-SA techniques, respectively. Conclusion: The introduced MAGIC technique is appropriate for gain calibration of an imaging system with a moving FPD and an intrinsic crosshair. The technique showed advantages over a conventional single-image-based technique by successfully reducing residual crosshair artifacts and yielding higher image quality with respect to DQE.
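The selective-averaging (SA) combination can be illustrated in one dimension: several flat-field acquisitions of the same detector, each with a differently placed masked (crosshair) region, are averaged pixel-wise over their valid samples and then normalized into a gain map. The "images" and masks below are invented and much smaller than real FPD frames.

```python
# Toy selective-averaging gain map: average each pixel over the acquisitions
# in which it is NOT masked by the crosshair, then normalize so the gain map
# flattens the detector response.  Data are invented 1-D "flat fields".

def selective_average_gain(acquisitions, masks):
    n = len(acquisitions[0])
    combined = []
    for j in range(n):
        valid = [img[j] for img, m in zip(acquisitions, masks) if not m[j]]
        combined.append(sum(valid) / len(valid))   # assumes each pixel is
                                                   # unmasked in >= 1 frame
    mean_signal = sum(combined) / n
    return [mean_signal / v for v in combined]     # multiply raw frames by this

flats = [
    [100, 104,  50,  98, 102],   # crosshair shadow at index 2
    [101,  48, 103,  97, 101],   # detector shifted: shadow at index 1
    [ 99, 103, 101,  49, 103],   # shadow at index 3
]
masks = [
    [0, 0, 1, 0, 0],
    [0, 1, 0, 0, 0],
    [0, 0, 0, 1, 0],
]
gain_map = selective_average_gain(flats, masks)
```

Because the shadow lands on different pixels at each FPD position, every pixel has at least one clean sample, so no interpolated values survive into the final gain map, which is the point of the multi-acquisition approach.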
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Zhen-hua; Li, Hong-bin; Zhang, Zhi
Electronic transformers are widely used in power systems because of their wide bandwidth and good transient performance. However, as an emerging technology, the failure rate of electronic transformers is higher than that of traditional transformers, so the calibration period needs to be shortened. Traditional calibration methods require that the power to the transmission line be cut off, which results in complicated operation and power-off losses. This paper proposes an online calibration system that can calibrate electronic current transformers without a power cut. In this work, the high-accuracy standard current transformer and the online operation method are the key techniques. Based on a clamp-shape iron-core coil and a clamp-shape air-core coil, a combined clamp-shape coil is designed as the standard current transformer. By analyzing the output characteristics of the two coils, the combined clamp-shape coil can achieve verification of the accuracy, so the accuracy of the online calibration system can be guaranteed. Moreover, by employing the earth-potential working method and using two insulating rods to connect the combined clamp-shape coil to the high-voltage bus, the operation becomes simple and safe. Tests at the China National Center for High Voltage Measurement and field experiments show that the proposed system has a high accuracy of up to 0.05 class.
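The accuracy-class check at the heart of such a calibration reduces to a ratio-error computation: the percentage deviation of the transformer under test from the clamp-on standard must stay within the class limit. The current readings below are invented examples.

```python
# Ratio-error check against a 0.05 accuracy class: the transformer under
# test must agree with the standard coil to within 0.05 %.  The paired
# (test, standard) current readings in amperes are hypothetical.

def ratio_error_percent(i_test, i_standard):
    return (i_test - i_standard) / i_standard * 100.0

CLASS_LIMIT = 0.05  # percent, for a 0.05-class instrument

readings = [(500.12, 500.00), (799.70, 800.00), (1000.3, 1000.0)]
errors = [ratio_error_percent(t, s) for t, s in readings]
within_class = all(abs(e) <= CLASS_LIMIT for e in errors)
```

In practice this comparison is repeated across the rated current range (and supplemented by a phase-error check) before the instrument is declared within class.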