Sample records for calibration techniques applicable

  1. Simple laser vision sensor calibration for surface profiling applications

    NASA Astrophysics Data System (ADS)

    Abu-Nabah, Bassam A.; ElSoussi, Adnane O.; Al Alami, Abed ElRahman K.

    2016-09-01

    Due to the relatively large structures in the Oil and Gas industry, original equipment manufacturers (OEMs) have been implementing custom-designed laser vision sensor (LVS) surface profiling systems as part of quality control in their manufacturing processes. The rough manufacturing environment and the continuous movement and misalignment of these custom-designed tools adversely affect the accuracy of laser-based vision surface profiling applications. Accordingly, Oil and Gas businesses have been raising the demand from the OEMs to implement practical and robust LVS calibration techniques prior to running any visual inspections. This effort introduces an LVS calibration technique representing a simplified version of two known calibration techniques, which are commonly implemented to obtain a calibrated LVS system for surface profiling applications. Both calibration techniques are implemented virtually and experimentally to scan simulated and three-dimensional (3D) printed features of known profiles, respectively. Scanned data is transformed from the camera frame to points in the world coordinate system and compared with the input profiles to validate the introduced calibration technique capability against the more complex approach and preliminarily assess the measurement technique for weld profiling applications. Moreover, the sensitivity to stand-off distances is analyzed to illustrate the practicality of the presented technique.
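
    The final step described above, mapping scanned points from the camera frame into the world coordinate system, is the part of the pipeline that the calibration ultimately supplies. Below is a minimal illustrative sketch, assuming a calibrated 4x4 homogeneous camera-to-world transform has already been estimated; the matrix and point values are placeholders, not the paper's calibration result.

    ```python
    # Minimal sketch: map LVS scan points from the camera frame to the world
    # frame with a calibrated homogeneous transform (illustrative names only).
    import numpy as np

    def camera_to_world(points_cam, T_world_cam):
        """points_cam: (N, 3) points in the camera frame.
        T_world_cam: (4, 4) homogeneous camera-to-world transform from calibration."""
        homo = np.hstack([points_cam, np.ones((points_cam.shape[0], 1))])  # (N, 4)
        return (T_world_cam @ homo.T).T[:, :3]

    # Example: express a scanned profile in world coordinates for comparison
    # against a known reference profile.
    T = np.eye(4)
    T[:3, 3] = [0.0, 0.0, 250.0]                    # assumed stand-off offset (mm)
    scan = np.array([[1.2, -0.4, 10.1], [1.3, -0.4, 10.0]])
    print(camera_to_world(scan, T))
    ```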

  2. One-calibrant kinetic calibration for on-site water sampling with solid-phase microextraction.

    PubMed

    Ouyang, Gangfeng; Cui, Shufen; Qin, Zhipei; Pawliszyn, Janusz

    2009-07-15

    The existing solid-phase microextraction (SPME) kinetic calibration technique, which uses the desorption of preloaded standards to calibrate the extraction of the analytes, requires that the physicochemical properties of the standard be similar to those of the analyte, which has limited the application of the technique. In this study, a new method, termed the one-calibrant kinetic calibration technique, which can use the desorption of a single standard to calibrate all extracted analytes, was proposed. The theoretical considerations were validated by passive water sampling in the laboratory and rapid water sampling in the field. To mimic environmental variability, such as in temperature, turbulence, and analyte concentration, the flow-through system for the generation of standard aqueous polycyclic aromatic hydrocarbon (PAH) solutions was modified. The experimental results of the passive samplings in the flow-through system illustrated that the effect of the environmental variables was successfully compensated with the kinetic calibration technique, and all extracted analytes could be calibrated through the desorption of a single calibrant. On-site water sampling with rotated SPME fibers also illustrated the feasibility of the new technique for rapid on-site sampling of hydrophobic organic pollutants in water. This technique will accelerate the application of the kinetic calibration method and should also be useful for other microextraction techniques.
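
    The kinetic calibration that the abstract builds on rests on the symmetry between analyte absorption and standard desorption, n/n_e = 1 - q/q0. The sketch below shows only that core relation; the rescaling that lets a single calibrant stand in for all analytes is specific to the paper and is not reproduced here. K_fs, V_f, and the numerical values are illustrative assumptions.

    ```python
    # Minimal sketch of SPME kinetic calibration (isotropy of absorption and
    # desorption): n / n_e = 1 - q / q0, so n_e = n / (1 - q / q0), and for a
    # large sample volume the concentration follows from C = n_e / (K_fs * V_f).
    def kinetic_calibration(n_extracted, q_remaining, q_preloaded, K_fs, V_f):
        fraction_desorbed = 1.0 - q_remaining / q_preloaded
        n_equilibrium = n_extracted / fraction_desorbed   # extrapolate to equilibrium
        return n_equilibrium / (K_fs * V_f)               # sample concentration

    # Illustrative values only: amounts in ng, fiber coating volume in mL.
    print(kinetic_calibration(n_extracted=2.5, q_remaining=6.0, q_preloaded=10.0,
                              K_fs=5000.0, V_f=5e-4))
    ```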

  3. A TRMM-Calibrated Infrared Technique for Global Rainfall Estimation

    NASA Technical Reports Server (NTRS)

    Negri, Andrew J.; Adler, Robert F.

    2002-01-01

    The development of a satellite infrared (IR) technique for estimating convective and stratiform rainfall and its application in studying the diurnal variability of rainfall on a global scale is presented. The Convective-Stratiform Technique (CST), calibrated by coincident, physically retrieved rain rates from the Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR), is applied over the global tropics during 2001. The technique is calibrated separately over land and ocean, making ingenious use of the IR data from the TRMM Visible/Infrared Scanner (VIRS) before application to global geosynchronous satellite data. The low sampling rate of TRMM PR imposes limitations on calibrating IR-based techniques; however, our research shows that PR observations can be applied to improve IR-based techniques significantly by selecting adequate calibration areas and calibration length. The diurnal cycle of rainfall, as well as the division between convective and stratiform rainfall will be presented. The technique is validated using available data sets and compared to other global rainfall products such as Global Precipitation Climatology Project (GPCP) IR product, calibrated with TRMM Microwave Imager (TMI) data. The calibrated CST technique has the advantages of high spatial resolution (4 km), filtering of non-raining cirrus clouds, and the stratification of the rainfall into its convective and stratiform components, the latter being important for the calculation of vertical profiles of latent heating.
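
    As a generic illustration of how coincident PR rain rates can calibrate an IR technique, the sketch below bins IR brightness temperatures against PR rain rates to build a lookup table that could later be applied to geosynchronous IR data. It omits the CST's convective/stratiform screening and is not the published algorithm; all numbers are synthetic.

    ```python
    # Minimal sketch of IR/PR matching: store the mean PR rain rate per IR
    # brightness-temperature bin as a calibration lookup table.
    import numpy as np

    def build_ir_rain_lookup(tb_kelvin, pr_rain_mm_h, edges):
        idx = np.digitize(tb_kelvin, edges)
        return np.array([pr_rain_mm_h[idx == k].mean() if np.any(idx == k) else 0.0
                         for k in range(1, len(edges))])

    edges = np.arange(190.0, 260.0, 10.0)              # IR Tb bins (K)
    tb = np.array([195.0, 205.0, 208.0, 221.0, 245.0, 252.0])
    rr = np.array([12.0, 8.0, 7.5, 3.0, 0.4, 0.0])     # coincident PR rain rates (mm/h)
    print(build_ir_rain_lookup(tb, rr, edges))         # mean rain rate per Tb bin
    ```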

  4. A TRMM-Calibrated Infrared Technique for Global Rainfall Estimation

    NASA Technical Reports Server (NTRS)

    Negri, Andrew J.; Adler, Robert F.; Xu, Li-Ming

    2003-01-01

    This paper presents the development of a satellite infrared (IR) technique for estimating convective and stratiform rainfall and its application in studying the diurnal variability of rainfall on a global scale. The Convective-Stratiform Technique (CST), calibrated by coincident, physically retrieved rain rates from the Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR), is applied over the global tropics during summer 2001. The technique is calibrated separately over land and ocean, making ingenious use of the IR data from the TRMM Visible/Infrared Scanner (VIRS) before application to global geosynchronous satellite data. The low sampling rate of TRMM PR imposes limitations on calibrating IR-based techniques; however, our research shows that PR observations can be applied to improve IR-based techniques significantly by selecting adequate calibration areas and calibration length. The diurnal cycle of rainfall, as well as the division between convective and stratiform rainfall will be presented. The technique is validated using available data sets and compared to other global rainfall products such as Global Precipitation Climatology Project (GPCP) IR product, calibrated with TRMM Microwave Imager (TMI) data. The calibrated CST technique has the advantages of high spatial resolution (4 km), filtering of non-raining cirrus clouds, and the stratification of the rainfall into its convective and stratiform components, the latter being important for the calculation of vertical profiles of latent heating.

  5. Calibration Experiments for a Computer Vision Oyster Volume Estimation System

    ERIC Educational Resources Information Center

    Chang, G. Andy; Kerns, G. Jay; Lee, D. J.; Stanek, Gary L.

    2009-01-01

    Calibration of measurement tools to obtain more accurate measurements is a technique commonly used in science and engineering research. It is an important technique in various industries. In many situations, calibration is an application of linear regression, and is a good topic to be included when explaining and…

  6. A Review on Microdialysis Calibration Methods: the Theory and Current Related Efforts.

    PubMed

    Kho, Chun Min; Enche Ab Rahim, Siti Kartini; Ahmad, Zainal Arifin; Abdullah, Norazharuddin Shah

    2017-07-01

    Microdialysis is a sampling technique first introduced in the late 1950s. Although the technique was originally designed to study endogenous compounds in the animal brain, it was later modified for use in other organs. Additionally, microdialysis is not only able to collect unbound concentrations of compounds from tissue sites; it can also be used to deliver exogenous compounds to a designated area. Due to its versatility, the microdialysis technique is widely employed in a number of areas, including biomedical research. However, for most in vivo studies, the concentration of a substance obtained directly from microdialysis does not accurately describe the concentration of that substance on-site. In order to relate the results collected from microdialysis to the actual in vivo condition, a calibration method is required. To date, various microdialysis calibration methods have been reported, each capable of providing valuable insights into the technique itself and its applications. This paper provides a critical review of the various calibration methods used in microdialysis applications, starting with a detailed description of the microdialysis technique itself. It reviews each calibration method in detail, presents examples of related work, including clinical efforts, and discusses the advantages and disadvantages of each method.

  7. Application of the Langley plot method to the calibration of the solar backscattered ultraviolet instrument on the Nimbus 7 satellite

    NASA Technical Reports Server (NTRS)

    Bhartia, P. K.; Taylor, S.; Mcpeters, R. D.; Wellemeyer, C.

    1995-01-01

    The concept of the well-known Langley plot technique, used for the calibration of ground-based instruments, has been generalized for application to satellite instruments. In polar regions, near summer solstice, the solar backscattered ultraviolet (SBUV) instrument on the Nimbus 7 satellite samples the same ozone field at widely different solar zenith angles. These measurements are compared to assess the long-term drift in the instrument calibration. Although the technique provides only a relative wavelength-to-wavelength calibration, it can be combined with existing techniques to determine the drift of the instrument at any wavelength. Using this technique, we have generated a 12-year data set of ozone vertical profiles from SBUV with an estimated accuracy of +/- 5% at 1 mbar and +/- 2% at 10 mbar (95% confidence) over 12 years. Since the method is insensitive to true changes in the atmospheric ozone profile, it can also be used to compare the calibrations of similar SBUV instruments launched without temporal overlap.

  8. Hybrid dynamic radioactive particle tracking (RPT) calibration technique for multiphase flow systems

    NASA Astrophysics Data System (ADS)

    Khane, Vaibhav; Al-Dahhan, Muthanna H.

    2017-04-01

    The radioactive particle tracking (RPT) technique has been utilized to measure three-dimensional hydrodynamic parameters for multiphase flow systems. An analytical solution to the inverse problem of the RPT technique, i.e. finding the instantaneous tracer positions based upon the instantaneous counts received in the detectors, is not possible. Therefore, a calibration to obtain a counts-distance map is needed. The conventional RPT calibration method has major shortcomings that limit its applicability in practical applications. In this work, the design and development of a novel dynamic RPT calibration technique are carried out to overcome these shortcomings. The dynamic RPT calibration technique has been implemented around a test reactor 1 foot in diameter and 1 foot in height using a Cobalt-60 isotope tracer particle. Two sets of experiments have been carried out to test the capability of the novel dynamic RPT calibration. In the first set of experiments, a manual calibration apparatus was used to hold the tracer particle at known static locations. In the second set of experiments, the tracer particle was moved vertically downwards along a straight-line path in a controlled manner. The obtained reconstruction results for the tracer particle position were compared with the actual known positions and the reconstruction errors were estimated. The results revealed that the dynamic RPT calibration technique is capable of identifying tracer particle positions with a reconstruction error between 1 and 5.9 mm for the conditions studied, which could be improved depending on various factors outlined here.
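
    A toy illustration of the reconstruction step that such a calibration enables is sketched below: the calibration yields a table of detector count vectors at known tracer positions, and an unknown position is recovered by matching against that table. The nearest-neighbour search and the numbers are simplifications for illustration, not the authors' reconstruction algorithm.

    ```python
    # Minimal sketch of RPT position reconstruction from a calibration
    # ("counts map") table; practical systems use weighted searches or
    # regression rather than raw nearest neighbour.
    import numpy as np

    def reconstruct_position(measured_counts, calib_positions, calib_counts):
        """calib_positions: (M, 3), calib_counts: (M, D), measured_counts: (D,)."""
        err = np.linalg.norm(calib_counts - measured_counts, axis=1)
        return calib_positions[np.argmin(err)]

    # Toy calibration map: 2 known positions, 3 detectors (counts are made up).
    positions = np.array([[0.0, 0.0, 10.0], [0.0, 0.0, 20.0]])
    counts = np.array([[900.0, 850.0, 400.0], [500.0, 480.0, 700.0]])
    print(reconstruct_position(np.array([880.0, 860.0, 410.0]), positions, counts))
    ```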

  9. Calibration of an electronic counter and pulse height analyzer for plotting erythrocyte volume spectra.

    DOT National Transportation Integrated Search

    1963-03-01

    A simple technique is presented for calibrating an electronic system used in the plotting of erythrocyte volume spectra. The calibration factors, once obtained, apparently remain applicable for some time. Precise estimates of calibration factors appe...

  10. Comments on: Accuracy of Raman Lidar Water Vapor Calibration and its Applicability to Long-Term Measurements

    NASA Technical Reports Server (NTRS)

    Whiteman, David N.; Venable, Demetrius; Landulfo, Eduardo

    2012-01-01

    In a recent publication, LeBlanc and McDermid proposed a hybrid calibration technique for Raman water vapor lidar involving a tungsten lamp and radiosondes. Measurements made with the lidar telescope viewing the calibration lamp were used to stabilize the lidar calibration determined by comparison with radiosonde. The technique provided a significantly more stable calibration constant than radiosondes used alone. The technique involves the use of a calibration lamp in a fixed position in front of the lidar receiver aperture. We examine this configuration and find that such a configuration likely does not properly sample the full lidar system optical efficiency. While the technique is a useful addition to the use of radiosondes alone for lidar calibration, it is important to understand the scenarios under which it will not provide an accurate quantification of system optical efficiency changes. We offer examples of these scenarios.

  11. A new polarimetric active radar calibrator and calibration technique

    NASA Astrophysics Data System (ADS)

    Tang, Jianguo; Xu, Xiaojian

    2015-10-01

    The polarimetric active radar calibrator (PARC) is one of the most important calibrators with a high radar cross section (RCS) for polarimetry measurement. In this paper, a new double-antenna polarimetric active radar calibrator (DPARC) is proposed, which consists of two rotatable antennas with wideband electromagnetic polarization filters (EMPF) to achieve lower cross-polarization for transmission and reception. With two antennas that are rotatable around the radar line of sight (LOS), the DPARC provides a variety of standard polarimetric scattering matrices (PSM) through rotation combinations of the receiving and transmitting polarizations, which are useful for polarimetric calibration in different applications. In addition, a technique based on Fourier analysis is proposed for calibration processing. Numerical simulation results are presented to demonstrate the superior performance of the proposed DPARC and processing technique.
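
    A simplified, textbook-style model of why rotating the two antennas generates a family of standard scattering matrices is sketched below: the calibrator's effective matrix is treated as the outer product of the transmit and receive polarization unit vectors. This is an assumption-level illustration, not the paper's EMPF-based model or its Fourier calibration processing.

    ```python
    # Simplified DPARC model: receive antenna at angle theta_r projects the
    # incident field, transmit antenna at angle theta_t re-radiates it, so the
    # effective scattering matrix is (up to the amplifier gain g) S = g * t r^T.
    import numpy as np

    def dparc_psm(theta_r_deg, theta_t_deg, gain=1.0):
        r = np.array([np.cos(np.radians(theta_r_deg)), np.sin(np.radians(theta_r_deg))])
        t = np.array([np.cos(np.radians(theta_t_deg)), np.sin(np.radians(theta_t_deg))])
        return gain * np.outer(t, r)

    print(dparc_psm(0.0, 0.0))     # co-polarized (HH-only) response
    print(dparc_psm(0.0, 90.0))    # cross-polarized response
    ```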

  12. SAR calibration technology review

    NASA Technical Reports Server (NTRS)

    Walker, J. L.; Larson, R. W.

    1981-01-01

    Synthetic Aperture Radar (SAR) calibration technology including a general description of the primary calibration techniques and some of the factors which affect the performance of calibrated SAR systems are reviewed. The use of reference reflectors for measurement of the total system transfer function along with an on-board calibration signal generator for monitoring the temporal variations of the receiver to processor output is a practical approach for SAR calibration. However, preliminary error analysis and previous experimental measurements indicate that reflectivity measurement accuracies of better than 3 dB will be difficult to achieve. This is not adequate for many applications and, therefore, improved end-to-end SAR calibration techniques are required.

  13. Generic Techniques for the Calibration of Robots with Application of the 3-D Fixtures and Statistical Technique on the PUMA 500 and ARID Robots

    NASA Technical Reports Server (NTRS)

    Tawfik, Hazem

    1991-01-01

    A relatively simple, inexpensive, and generic technique that could be used in both laboratories and some operation site environments is introduced at the Robotics Applications and Development Laboratory (RADL) at Kennedy Space Center (KSC). In addition, this report gives a detailed explanation of the set up procedure, data collection, and analysis using this new technique that was developed at the State University of New York at Farmingdale. The technique was used to evaluate the repeatability, accuracy, and overshoot of the Unimate Industrial Robot, PUMA 500. The data were statistically analyzed to provide an insight into the performance of the systems and components of the robot. Also, the same technique was used to check the forward kinematics against the inverse kinematics of RADL's PUMA robot. Recommendations were made for RADL to use this technique for laboratory calibration of the currently existing robots such as the ASEA, high speed controller, Automated Radiator Inspection Device (ARID) etc. Also, recommendations were made to develop and establish other calibration techniques that will be more suitable for site calibration environment and robot certification.

  14. Non-parametric and least squares Langley plot methods

    NASA Astrophysics Data System (ADS)

    Kiedron, P. W.; Michalsky, J. J.

    2016-01-01

    Langley plots are used to calibrate sun radiometers primarily for the measurement of the aerosol component of the atmosphere that attenuates (scatters and absorbs) incoming direct solar radiation. In principle, the calibration of a sun radiometer is a straightforward application of the Bouguer-Lambert-Beer law V = V0·e^(-τ·m), where a plot of the logarithm of voltage, ln(V), versus air mass m yields a straight line with intercept ln(V0). This ln(V0) subsequently can be used to solve for τ for any measurement of V and calculation of m. This calibration works well at some high mountain sites, but the application of the Langley plot calibration technique is more complicated at other, more interesting, locales. This paper is concerned with ferreting out calibrations at difficult sites and examining and comparing a number of conventional and non-conventional methods for obtaining successful Langley plots. The 11 techniques discussed indicate that both least squares and various non-parametric techniques produce satisfactory calibrations, with no significant differences among them, when the time series of ln(V0) values are smoothed and interpolated with median and mean moving-window filters.
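
    For concreteness, a minimal least-squares Langley calibration consistent with the Bouguer-Lambert-Beer relation above can be sketched as follows; the synthetic data and values are illustrative only.

    ```python
    # Minimal sketch of a least-squares Langley calibration: regress ln(V)
    # against air mass m; the intercept is ln(V0), and the optical depth for a
    # later measurement follows from tau = (ln(V0) - ln(V)) / m.
    import numpy as np

    def langley_intercept(air_mass, voltage):
        slope, ln_v0 = np.polyfit(air_mass, np.log(voltage), 1)
        return ln_v0, -slope            # -slope is the mean optical depth tau

    # Synthetic clear-morning data with V0 = 2.0 and tau = 0.12 (illustrative).
    m = np.linspace(2.0, 6.0, 20)
    v = 2.0 * np.exp(-0.12 * m)
    ln_v0, tau = langley_intercept(m, v)
    print(np.exp(ln_v0), tau)           # ~2.0, ~0.12
    ```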

  15. In-situ technique for checking the calibration of platinum resistance thermometers

    NASA Technical Reports Server (NTRS)

    Daryabeigi, Kamran; Dillon-Townes, Lawrence A.

    1987-01-01

    The applicability of the self-heating technique for checking the calibration of platinum resistance thermometers located inside wind tunnels was investigated. This technique is based on a steady state measurement of resistance increase versus joule heating. This method was found to be undesirable, mainly because of the fluctuations of flow variables during any wind tunnel testing.

  16. Uncertainty Analysis of Instrument Calibration and Application

    NASA Technical Reports Server (NTRS)

    Tripp, John S.; Tcheng, Ping

    1999-01-01

    Experimental aerodynamic researchers require estimated precision and bias uncertainties of measured physical quantities, typically at 95 percent confidence levels. Uncertainties of final computed aerodynamic parameters are obtained by propagation of individual measurement uncertainties through the defining functional expressions. In this paper, rigorous mathematical techniques are extended to determine precision and bias uncertainties of any instrument-sensor system. Through this analysis, instrument uncertainties determined through calibration are now expressed as functions of the corresponding measurement for linear and nonlinear univariate and multivariate processes. Treatment of correlated measurement precision error is developed. During laboratory calibration, calibration standard uncertainties are assumed to be an order of magnitude less than those of the instrument being calibrated. Often calibration standards do not satisfy this assumption. This paper applies rigorous statistical methods for inclusion of calibration standard uncertainty and covariance due to the order of their application. The effects of mathematical modeling error on calibration bias uncertainty are quantified. The effects of experimental design on uncertainty are analyzed. The importance of replication is emphasized, and techniques for estimating both bias and precision uncertainties using replication are developed. Statistical tests for stationarity of calibration parameters over time are obtained.
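
    The propagation step the abstract refers to is, at first order and for uncorrelated inputs, u_y^2 = sum_i (df/dx_i)^2 u_i^2. The sketch below implements that first-order propagation numerically; it omits the paper's treatment of correlated precision errors and calibration-standard covariance, and the example function and numbers are illustrative assumptions.

    ```python
    # Minimal sketch of first-order uncertainty propagation through a defining
    # functional expression y = f(x1, ..., xn), using central differences for
    # the sensitivities (uncorrelated inputs assumed).
    import numpy as np

    def propagate(f, x, u, eps=1e-6):
        """x: nominal inputs, u: standard uncertainties of the inputs."""
        x = np.asarray(x, dtype=float)
        u = np.asarray(u, dtype=float)
        sens = np.empty_like(x)
        for i in range(x.size):
            dx = np.zeros_like(x)
            dx[i] = eps * max(abs(x[i]), 1.0)
            sens[i] = (f(x + dx) - f(x - dx)) / (2.0 * dx[i])
        return np.sqrt(np.sum((sens * u) ** 2))

    # Example: dynamic pressure q = 0.5 * rho * V^2 with illustrative inputs.
    q = lambda p: 0.5 * p[0] * p[1] ** 2
    print(propagate(q, x=[1.225, 50.0], u=[0.005, 0.2]))
    ```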

  17. Detection Angle Calibration of Pressure-Sensitive Paints

    NASA Technical Reports Server (NTRS)

    Bencic, Timothy J.

    2000-01-01

    Uses of the pressure-sensitive paint (PSP) techniques in areas other than external aerodynamics continue to expand. The NASA Glenn Research Center has become a leader in the application of the global technique to non-conventional aeropropulsion applications including turbomachinery testing. The use of the global PSP technique in turbomachinery applications often requires detection of the luminescent paint in confined areas. With the limited viewing usually available, highly oblique illumination and detection angles are common in the confined areas in these applications. This paper will describe the results of pressure, viewing and excitation angle dependence calibrations using three popular PSP formulations to get a better understanding of the errors associated with these non-traditional views.

  18. Underwater 3D Surface Measurement Using Fringe Projection Based Scanning Devices

    PubMed Central

    Bräuer-Burchardt, Christian; Heinze, Matthias; Schmidt, Ingo; Kühmstedt, Peter; Notni, Gunther

    2015-01-01

    In this work we show the principle of optical 3D surface measurements based on the fringe projection technique for underwater applications. The challenges of underwater use of this technique are shown and discussed in comparison with the classical application. We describe an extended camera model which takes refraction effects into account as well as a proposal of an effective, low-effort calibration procedure for underwater optical stereo scanners. This calibration technique combines a classical air calibration based on the pinhole model with ray-based modeling and requires only a few underwater recordings of an object of known length and a planar surface. We demonstrate a new underwater 3D scanning device based on the fringe projection technique. It has a weight of about 10 kg and the maximal water depth for application of the scanner is 40 m. It covers an underwater measurement volume of 250 mm × 200 mm × 120 mm. The surface of the measurement objects is captured with a lateral resolution of 150 μm in a third of a second. Calibration evaluation results are presented and examples of first underwater measurements are given. PMID:26703624

  19. Chemical Contaminant and Decontaminant Test Methodology Source Document. Second Edition

    DTIC Science & Technology

    2012-07-01

    Performance is assessed as described in "A Statistical Overview on Univariate Calibration, Inverse Regression, and Detection Limits: Application to Gas Chromatography/Mass Spectrometry Technique." Science Applications International Corporation, Gunpowder, MD 21010-0068, July 2012. Approved for public release; distribution is unlimited.

  20. A combined microphone and camera calibration technique with application to acoustic imaging.

    PubMed

    Legg, Mathew; Bradley, Stuart

    2013-10-01

    We present a calibration technique for an acoustic imaging microphone array, combined with a digital camera. Computer vision and acoustic time of arrival data are used to obtain microphone coordinates in the camera reference frame. Our new method allows acoustic maps to be plotted onto the camera images without the need for additional camera alignment or calibration. Microphones and cameras may be placed in an ad-hoc arrangement and, after calibration, the coordinates of the microphones are known in the reference frame of a camera in the array. No prior knowledge of microphone positions, inter-microphone spacings, or air temperature is required. This technique is applied to a spherical microphone array and a mean difference of 3 mm was obtained between the coordinates obtained with this calibration technique and those measured using a precision mechanical method.
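
    As an illustration of how time-of-arrival data can fix a microphone's coordinates in the camera frame, the sketch below solves ||s_k - x|| = c·t_k for one microphone by nonlinear least squares, with the sound-source positions assumed known from computer vision. The setup, the fixed speed of sound, and the numbers are assumptions for illustration; the paper's joint calibration, which also avoids assuming the air temperature, is more involved.

    ```python
    # Minimal sketch: locate one microphone from time-of-arrival (TOA) data and
    # known source positions by nonlinear least squares.
    import numpy as np
    from scipy.optimize import least_squares

    C_SOUND = 343.0  # m/s, assumed known here (no temperature estimation)

    def locate_microphone(sources, toas, x0=np.zeros(3)):
        residual = lambda x: np.linalg.norm(sources - x, axis=1) - C_SOUND * toas
        return least_squares(residual, x0).x

    sources = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
                        [0.0, 0.0, 1.0], [1.0, 1.0, 1.0]])
    true_mic = np.array([0.3, 0.2, 0.1])
    toas = np.linalg.norm(sources - true_mic, axis=1) / C_SOUND  # synthetic TOAs
    print(locate_microphone(sources, toas))                      # ~ [0.3, 0.2, 0.1]
    ```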

  1. An Investigation of a Photographic Technique of Measuring High Surface Temperatures

    NASA Technical Reports Server (NTRS)

    Siviter, James H., Jr.; Strass, H. Kurt

    1960-01-01

    A photographic method of temperature determination has been developed to measure elevated temperatures of surfaces. The technique presented herein minimizes calibration procedures and permits wide variation in emulsion developing techniques. The present work indicates that the lower limit of applicability is approximately 1,400 F when conventional cameras, emulsions, and moderate exposures are used. The upper limit is determined by the calibration technique and the accuracy required.

  2. Linear regression analysis and its application to multivariate chromatographic calibration for the quantitative analysis of two-component mixtures.

    PubMed

    Dinç, Erdal; Ozdemir, Abdil

    2005-01-01

    A multivariate chromatographic calibration technique was developed for the quantitative analysis of binary mixtures of enalapril maleate (EA) and hydrochlorothiazide (HCT) in tablets in the presence of losartan potassium (LST). The mathematical algorithm of the multivariate chromatographic calibration technique is based on linear regression equations constructed from the relationship between concentration and peak area at a five-wavelength set. The algorithm of this calibration model, which has a simple mathematical content, is briefly described. This approach is a powerful mathematical tool for optimum chromatographic multivariate calibration and for eliminating fluctuations coming from instrumental and experimental conditions. The multivariate chromatographic calibration reduces the multivariate linear regression functions to a univariate data set. The validation of the model was carried out by analyzing various synthetic binary mixtures and using the standard addition technique. The developed calibration technique was applied to the analysis of real pharmaceutical tablets containing EA and HCT. The obtained results were compared with those obtained by a classical HPLC method. It was observed that the proposed multivariate chromatographic calibration gives better results than classical HPLC.
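
    A minimal sketch of the calibration idea is given below: fit a straight line relating concentration to peak area at each wavelength from standards, then reduce an unknown's multivariate response to a single estimate by inverting each line and averaging. The exact reduction used in the published model differs, and the data are illustrative.

    ```python
    # Minimal sketch of multi-wavelength chromatographic calibration by
    # per-wavelength linear regression (illustrative data only).
    import numpy as np

    conc = np.array([5.0, 10.0, 20.0, 30.0])            # standards (ug/mL)
    areas = np.outer(conc, [1.8, 2.1, 2.4, 2.0, 1.6])   # peak areas at 5 wavelengths
    coeffs = [np.polyfit(conc, areas[:, j], 1) for j in range(areas.shape[1])]

    unknown_areas = np.array([27.0, 31.5, 36.0, 30.0, 24.0])
    estimates = [(a - c[1]) / c[0] for a, c in zip(unknown_areas, coeffs)]
    print(np.mean(estimates))                            # ~15 ug/mL
    ```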

  3. Classification of high-resolution multi-swath hyperspectral data using Landsat 8 surface reflectance data as a calibration target and a novel histogram based unsupervised classification technique to determine natural classes from biophysically relevant fit parameters

    NASA Astrophysics Data System (ADS)

    McCann, C.; Repasky, K. S.; Morin, M.; Lawrence, R. L.; Powell, S. L.

    2016-12-01

    Compact, cost-effective, flight-based hyperspectral imaging systems can provide scientifically relevant data over large areas for a variety of applications such as ecosystem studies, precision agriculture, and land management. To fully realize this capability, unsupervised classification techniques based on radiometrically-calibrated data that cluster based on biophysical similarity rather than simply spectral similarity are needed. An automated technique to produce high-resolution, large-area, radiometrically-calibrated hyperspectral data sets based on the Landsat surface reflectance data product as a calibration target was developed and applied to three subsequent years of data covering approximately 1850 hectares. The radiometrically-calibrated data allow inter-comparison of the temporal series. Advantages of the radiometric calibration technique include the need for minimal site access, no ancillary instrumentation, and automated processing. Fitting the reflectance spectra of each pixel using a set of biophysically relevant basis functions reduces the data from 80 spectral bands to 9 parameters, providing noise reduction and data compression. Examination of histograms of these parameters allows for determination of natural splitting into biophysically similar clusters. This method creates clusters that are similar in terms of biophysical parameters, not simply spectral proximity. Furthermore, this method can be applied to other data sets, such as urban scenes, by developing other physically meaningful basis functions. The ability to use hyperspectral imaging for a variety of important applications requires the development of data processing techniques that can be automated. The radiometric calibration combined with the histogram-based unsupervised classification technique presented here provides one potential avenue for managing the big data associated with hyperspectral imaging.
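
    The data-reduction step can be illustrated with a linear least-squares fit of each pixel's reflectance spectrum onto a small basis, keeping only the fitted coefficients for later histogram-based clustering. The basis functions below are illustrative placeholders, not the biophysically relevant set used in the study.

    ```python
    # Minimal sketch: reduce an 80-band reflectance spectrum to a few basis
    # coefficients by linear least squares (placeholder basis functions).
    import numpy as np

    wavelengths = np.linspace(400.0, 900.0, 80)                 # nm, 80 bands
    basis = np.column_stack([
        np.ones_like(wavelengths),                              # flat albedo term
        (wavelengths - 400.0) / 500.0,                          # linear slope
        1.0 / (1.0 + np.exp(-(wavelengths - 700.0) / 10.0)),    # red-edge-like step
    ])

    def reduce_pixel(reflectance):
        coeffs, *_ = np.linalg.lstsq(basis, reflectance, rcond=None)
        return coeffs                                           # 80 bands -> 3 numbers

    pixel = 0.05 + 0.3 * basis[:, 2] + 0.01 * np.random.default_rng(0).normal(size=80)
    print(reduce_pixel(pixel))
    ```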

  4. A Synthesis of Star Calibration Techniques for Ground-Based Narrowband Electron-Multiplying Charge-Coupled Device Imagers Used in Auroral Photometry

    NASA Technical Reports Server (NTRS)

    Grubbs, Guy II; Michell, Robert; Samara, Marilia; Hampton, Don; Jahn, Jorg-Micha

    2016-01-01

    A technique is presented for the periodic and systematic calibration of ground-based optical imagers. It is important to have a common system of units (Rayleighs or photon flux) for cross comparison as well as self-comparison over time. With the advancement in technology, the sensitivity of these imagers has improved so that stars can be used for more precise calibration. Background subtraction, flat fielding, star mapping, and other common techniques are combined in deriving a calibration technique appropriate for a variety of ground-based imager installations. Spectral (4278, 5577, and 8446 Å) ground-based imager data with multiple fields of view (19, 47, and 180 deg) are processed and calibrated using the techniques developed. The calibration techniques applied result in intensity measurements in agreement between different imagers using identical spectral filtering, and the intensity at each wavelength observed is within the expected range of auroral measurements. The application of these star calibration techniques, which convert raw imager counts into units of photon flux, makes it possible to do quantitative photometry. The computed photon fluxes, in units of Rayleighs, can be used for the absolute photometry between instruments or as input parameters for auroral electron transport models.

  5. Studying the Diurnal Cycle of Convection Using a TRMM-Calibrated Infrared Rain Algorithm

    NASA Technical Reports Server (NTRS)

    Negri, Andrew J.

    2005-01-01

    The development of a satellite infrared (IR) technique for estimating convective and stratiform rainfall and its application in studying the diurnal variability of rainfall on a global scale is presented. The Convective-Stratiform Technique (CST), calibrated by coincident, physically retrieved rain rates from the Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR), is applied over the global tropics. The technique makes use of the IR data from the TRMM Visible/Infrared Scanner (VIRS) before application to global geosynchronous satellite data. The calibrated CST technique has the advantages of high spatial resolution (4 km), filtering of nonraining cirrus clouds, and the stratification of the rainfall into its convective and stratiform components, the last being important for the calculation of vertical profiles of latent heating. The diurnal cycle of rainfall, as well as the division between convective and stratiform rainfall will be presented. The technique is validated using available data sets and compared to other global rainfall products such as Global Precipitation Climatology Project (GPCP) IR product, calibrated with TRMM Microwave Imager (TMI) data. Results from five years of PR data will show the global-tropical partitioning of convective and stratiform rainfall.

  6. Wind Tunnel Force Balance Calibration Study - Interim Results

    NASA Technical Reports Server (NTRS)

    Rhew, Ray D.

    2012-01-01

    Wind tunnel force balance calibration is performed utilizing a variety of different methods and does not have a directly traceable standard such as the standards used for most calibration practices (weights and voltmeters). These different calibration methods and practices include, but are not limited to, the loading schedule, the load application hardware, manual and automatic systems, and re-leveling and non-re-leveling. A study of the balance calibration techniques used by NASA was undertaken to develop metrics for reviewing and comparing results using sample calibrations. The study also includes balances of different designs, single and multi-piece. The calibration systems include the manual and automatic systems provided by NASA and its vendors. The results to date will be presented along with the techniques for comparing the results. In addition, future planned calibrations and investigations based on the results will be provided.

  7. Extensions in Pen Ink Dosimetry: Ultraviolet Calibration Applications for Primary and Secondary Schools

    ERIC Educational Resources Information Center

    Downs, Nathan; Parisi, Alfio; Powell, Samantha; Turner, Joanna; Brennan, Chris

    2010-01-01

    A technique has previously been described for secondary school-aged children to make ultraviolet (UV) dosimeters from highlighter pen ink drawn onto strips of paper. This technique required digital comparison of exposed ink paper strips with unexposed ink paper strips to determine a simple calibration function relating the degree of ink fading to…

  8. Model Robust Calibration: Method and Application to Electronically-Scanned Pressure Transducers

    NASA Technical Reports Server (NTRS)

    Walker, Eric L.; Starnes, B. Alden; Birch, Jeffery B.; Mays, James E.

    2010-01-01

    This article presents the application of a recently developed statistical regression method to the controlled instrument calibration problem. The statistical method of Model Robust Regression (MRR), developed by Mays, Birch, and Starnes, is shown to improve instrument calibration by reducing the reliance of the calibration on a predetermined parametric (e.g. polynomial, exponential, logarithmic) model. This is accomplished by allowing fits from the predetermined parametric model to be augmented by a certain portion of a fit to the residuals from the initial regression using a nonparametric (locally parametric) regression technique. The method is demonstrated for the absolute scale calibration of silicon-based pressure transducers.
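
    A minimal sketch of the Model Robust Regression idea described above: fit the parametric model, then add back a fraction of a nonparametric fit to its residuals. The kernel smoother, the fixed mixing fraction, and the synthetic data below are illustrative simplifications of the published method.

    ```python
    # Minimal sketch of Model Robust Regression (MRR): parametric fit plus a
    # fraction lambda of a kernel-smoothed fit to the residuals.
    import numpy as np

    def kernel_smooth(x_train, r_train, x_eval, bandwidth):
        w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / bandwidth) ** 2)
        return (w @ r_train) / w.sum(axis=1)

    def mrr_fit_predict(x, y, x_new, degree=1, lam=0.5, bandwidth=0.5):
        p = np.polyfit(x, y, degree)                      # parametric portion
        residuals = y - np.polyval(p, x)
        correction = kernel_smooth(x, residuals, x_new, bandwidth)
        return np.polyval(p, x_new) + lam * correction    # MRR prediction

    x = np.linspace(0.0, 5.0, 40)
    y = 2.0 * x + 0.3 * np.sin(3.0 * x)                   # mildly nonlinear response
    print(mrr_fit_predict(x, y, np.array([1.0, 2.5, 4.0])))
    ```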

  9. Tunable laser techniques for improving the precision of observational astronomy

    NASA Astrophysics Data System (ADS)

    Cramer, Claire E.; Brown, Steven W.; Lykke, Keith R.; Woodward, John T.; Bailey, Stephen; Schlegel, David J.; Bolton, Adam S.; Brownstein, Joel; Doherty, Peter E.; Stubbs, Christopher W.; Vaz, Amali; Szentgyorgyi, Andrew

    2012-09-01

    Improving the precision of observational astronomy requires not only new telescopes and instrumentation, but also advances in observing protocols, calibrations and data analysis. The Laser Applications Group at the National Institute of Standards and Technology in Gaithersburg, Maryland has been applying advances in detector metrology and tunable laser calibrations to problems in astronomy since 2007. Using similar measurement techniques, we have addressed a number of seemingly disparate issues: precision flux calibration for broad-band imaging, precision wavelength calibration for high-resolution spectroscopy, and precision PSF mapping for fiber spectrographs of any resolution. In each case, we rely on robust, commercially-available laboratory technology that is readily adapted to use at an observatory. In this paper, we give an overview of these techniques.

  10. Photo-reconnaissance applications of computer processing of images.

    NASA Technical Reports Server (NTRS)

    Billingsley, F. C.

    1971-01-01

    An imaging processing technique is developed for enhancement and calibration of imaging experiments. The technique is shown to be useful not only for the original application but also when applied to images from a wide variety of sources.

  11. Internal Water Vapor Photoacoustic Calibration

    NASA Technical Reports Server (NTRS)

    Pilgrim, Jeffrey S.

    2009-01-01

    Water vapor absorption is ubiquitous in the infrared wavelength range where photoacoustic trace gas detectors operate. This technique allows for discontinuous wavelength tuning by temperature-jumping a laser diode from one range to another within a time span suitable for photoacoustic calibration. The use of an internal calibration eliminates the need for external calibrated reference gases. Commercial applications include an improvement of photoacoustic spectrometers in all fields of use.

  12. Virtual environment assessment for laser-based vision surface profiling

    NASA Astrophysics Data System (ADS)

    ElSoussi, Adnane; Al Alami, Abed ElRahman; Abu-Nabah, Bassam A.

    2015-03-01

    Oil and gas businesses have been raising the demand from original equipment manufacturers (OEMs) to implement a reliable metrology method in assessing surface profiles of welds before and after grinding. This certainly mandates the deviation from the commonly used surface measurement gauges, which are not only operator dependent, but also limited to discrete measurements along the weld. Due to its potential accuracy and speed, the use of laser-based vision surface profiling systems has been progressively rising as part of manufacturing quality control. This effort presents a virtual environment that lends itself for developing and evaluating existing laser vision sensor (LVS) calibration and measurement techniques. A combination of two known calibration techniques is implemented to deliver a calibrated LVS system. System calibration is implemented virtually and experimentally to scan simulated and 3D printed features of known profiles, respectively. Scanned data is inverted and compared with the input profiles to validate the virtual environment capability for LVS surface profiling and preliminarily assess the measurement technique for weld profiling applications. Moreover, this effort brings 3D scanning capability a step closer towards robust quality control applications in a manufacturing environment.

  13. Universal test fixture for monolithic mm-wave integrated circuits calibrated with an augmented TRD algorithm

    NASA Technical Reports Server (NTRS)

    Romanofsky, Robert R.; Shalkhauser, Kurt A.

    1989-01-01

    The design and evaluation of a novel fixturing technique for characterizing millimeter wave solid state devices is presented. The technique utilizes a cosine-tapered ridge guide fixture and a one-tier de-embedding procedure to produce accurate and repeatable device level data. Advanced features of this technique include nondestructive testing, full waveguide bandwidth operation, universality of application, and rapid, yet repeatable, chip-level characterization. In addition, only one set of calibration standards is required regardless of the device geometry.

  14. Self-Calibration Approach for Mixed Signal Circuits in Systems-on-Chip

    NASA Astrophysics Data System (ADS)

    Jung, In-Seok

    MOSFET scaling has served industry very well for a few decades by providing improvements in transistor performance, power, and cost. However, modern systems-on-chip (SoCs) require high test complexity and cost due to several issues such as limited pin count and the integration of analog and digital mixed circuits. Therefore, self-calibration is an excellent and promising method to improve yield and to reduce manufacturing cost by simplifying the test complexity, because it is possible to address process variation effects by means of a self-calibration technique. Since prior published calibration techniques were developed for specific targeted applications, they are not easily utilized for other applications. In order to solve the aforementioned issues, in this dissertation, several novel self-calibration design techniques in mixed-signal circuits are proposed for an analog-to-digital converter (ADC) to reduce mismatch error and improve performance. ADCs are essential components in SoCs, and the proposed self-calibration approach also compensates for process variations. The proposed novel self-calibration approach targets the successive approximation (SA) ADC. First, the offset error of the comparator in the SA-ADC is reduced using the proposed approach by enabling the capacitor array at the input nodes for better matching. In addition, the auxiliary capacitors for each capacitor of the DAC in the SA-ADC are controlled by a synthesized digital controller to minimize the mismatch error of the DAC. Since the proposed technique is applied during foreground operation, the power overhead in the SA-ADC case is minimal because the calibration circuit is deactivated during normal operation. Another benefit of the proposed technique is that the offset voltage of the comparator is continuously adjusted for every step used to decide a one-bit code, because not only the inherent offset voltage of the comparator but also the mismatch of the DAC are compensated simultaneously. The synthesized digital calibration control circuit operates in foreground mode, and the controller has been highly optimized for low power and better performance with a simplified structure. In addition, in order to increase the sampling clock frequency of the proposed self-calibration approach, a novel variable clock period method is proposed. To achieve high-speed SAR operation, a variable clock time technique is used to reduce not only peak current but also die area. The technique removes conversion time waste and easily extends the SAR operation speed. To verify and demonstrate the proposed techniques, a prototype charge-redistribution SA-ADC with the proposed self-calibration is implemented in a 130 nm standard CMOS process. The prototype circuit's silicon area is 0.0715 mm² and it consumes 4.62 mW with a 1.2 V power supply.

  15. Application of advanced shearing techniques to the calibration of autocollimators with small angle generators and investigation of error sources.

    PubMed

    Yandayan, T; Geckeler, R D; Aksulu, M; Akgoz, S A; Ozgur, B

    2016-05-01

    The application of advanced error-separating shearing techniques to the precise calibration of autocollimators with Small Angle Generators (SAGs) was carried out for the first time. The experimental realization was achieved using the High Precision Small Angle Generator (HPSAG) of TUBITAK UME under classical dimensional metrology laboratory environmental conditions. The standard uncertainty value of 5 mas (24.2 nrad) reached by the classical calibration method was improved to the level of 1.38 mas (6.7 nrad). Shearing techniques, which offer a unique opportunity to separate the errors of devices without recourse to any external standard, were first adapted by Physikalisch-Technische Bundesanstalt (PTB) to the calibration of autocollimators with angle encoders. It has been demonstrated experimentally in a clean room environment using the primary angle standard of PTB (WMT 220). The application of the technique to a different type of angle measurement system extends the range of the shearing technique further and reveals other advantages. For example, the angular scales of the SAGs are based on linear measurement systems (e.g., capacitive nanosensors for the HPSAG). Therefore, SAGs show different systematic errors when compared to angle encoders. In addition to the error-separation of HPSAG and the autocollimator, detailed investigations on error sources were carried out. Apart from determination of the systematic errors of the capacitive sensor used in the HPSAG, it was also demonstrated that the shearing method enables the unique opportunity to characterize other error sources such as errors due to temperature drift in long term measurements. This proves that the shearing technique is a very powerful method for investigating angle measuring systems, for their improvement, and for specifying precautions to be taken during the measurements.

  16. Application of the Langley plot for calibration of sun sensors for the Halogen Occultation Experiment (HALOE)

    NASA Technical Reports Server (NTRS)

    Moore, Alvah S., Jr.; Mauldin, L. E., III; Stump, Charles W.; Reagan, John A.; Fabert, Milton G.

    1989-01-01

    The calibration of the Halogen Occultation Experiment (HALOE) sun sensor is described. This system consists of two energy-balancing silicon detectors which provide coarse azimuth and elevation control signals and a silicon photodiode array which provides top and bottom solar edge data for fine elevation control. All three detectors were calibrated on a mountaintop near Tucson, Ariz., using the Langley plot technique. The conventional Langley plot technique was modified to allow calibration of the two coarse detectors, which operate wideband. A brief description of the test setup is given. The HALOE instrument is a gas correlation radiometer that is now being developed for the Upper Atmospheric Research Satellite.

  17. Amelioration de la precision d'un bras robotise pour une application d'ebavurage

    NASA Astrophysics Data System (ADS)

    Mailhot, David

    Process automation is an increasingly preferred solution for tasks that are complex, tedious, or even dangerous for humans. Flexibility, low cost, and compactness make industrial robots very attractive for automation. Even though many developments have been made to enhance robot performance, robots still cannot meet some industries' requirements. For instance, the aerospace industry requires very tight tolerances on a large variety of parts, which is not what robots were designed for at first. When it comes to robotic deburring, robot imprecision is a major problem that needs to be addressed before the process can be implemented in production. This master's thesis explores different calibration techniques for a robot's dimensions that could overcome the problem and make the robotic deburring application possible. Calibration techniques that are easy to implement in a production environment are simulated and compared. A calibration technique for the tool's dimensions is simulated and implemented to evaluate its potential. The most efficient technique is used within the application. Finally, the production environment and requirements are explained. The remaining imprecision is compensated by the use of a force/torque sensor integrated with the robot's controller and by the use of a camera. Many tests are made to define the best parameters for deburring a specific feature on a chosen part. Concluding tests are shown and demonstrate the potential of robotic deburring. Keywords: robotic calibration, robotic arm, robotic precision, robotic deburring

  18. HoloHands: games console interface for controlling holographic optical manipulation

    NASA Astrophysics Data System (ADS)

    McDonald, C.; McPherson, M.; McDougall, C.; McGloin, D.

    2013-03-01

    The increasing number of applications for holographic manipulation techniques has sparked the development of more accessible control interfaces. Here, we describe a holographic optical tweezers experiment which is controlled by gestures that are detected by a Microsoft Kinect. We demonstrate that this technique can be used to calibrate the tweezers using the Stokes drag method and compare this to automated calibrations. We also show that multiple particle manipulation can be handled. This is a promising new line of research for gesture-based control which could find applications in a wide variety of experimental situations.
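
    The Stokes drag calibration mentioned above balances the trap force against viscous drag on the bead, so the trap stiffness follows from k = 6·π·η·r·v / x, where x is the measured displacement from the trap centre. A worked illustration with assumed numbers:

    ```python
    # Minimal sketch of a Stokes drag calibration of optical tweezers:
    # drag force F = 6*pi*eta*r*v, stiffness k = F / x at equilibrium.
    import math

    def trap_stiffness(radius_m, speed_m_s, displacement_m, viscosity_pa_s=8.9e-4):
        drag_force = 6.0 * math.pi * viscosity_pa_s * radius_m * speed_m_s
        return drag_force / displacement_m            # N/m

    # 1 um radius bead dragged at 50 um/s, displaced 100 nm from the trap centre
    # (illustrative values; water viscosity assumed).
    print(trap_stiffness(1e-6, 50e-6, 100e-9))        # ~8.4e-6 N/m
    ```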

  19. Calibrating and training of neutron based NSA techniques with less SNM standards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geist, William H; Swinhoe, Martyn T; Bracken, David S

    2010-01-01

    Accessing special nuclear material (SNM) standards for the calibration of and training on nondestructive assay (NDA) instruments has become increasingly difficult in light of enhanced safeguards and security regulations. Limited or nonexistent access to SNM has affected neutron based NDA techniques more than gamma ray techniques because the effects of multiplication require a range of masses to accurately measure the detector response. Neutron based NDA techniques can also be greatly affected by the matrix and impurity characteristics of the item. The safeguards community has been developing techniques for calibrating instrumentation and training personnel with dwindling numbers of SNM standards. Monte Carlo methods have become increasingly important for design and calibration of instrumentation. Monte Carlo techniques have the ability to accurately predict the detector response for passive techniques. The Monte Carlo results are usually benchmarked to neutron source measurements such as californium. For active techniques, the modeling becomes more difficult because of the interaction of the interrogation source with the detector and nuclear material; and the results cannot be simply benchmarked with neutron sources. A Monte Carlo calculated calibration curve for a training course in Indonesia of material test reactor (MTR) fuel elements assayed with an active well coincidence counter (AWCC) will be presented as an example. Performing training activities with reduced amounts of nuclear material makes it difficult to demonstrate how the multiplication and matrix properties of the item affects the detector response and limits the knowledge that can be obtained with hands-on training. A neutron pulse simulator (NPS) has been developed that can produce a pulse stream representative of a real pulse stream output from a detector measuring SNM. The NPS has been used by the International Atomic Energy Agency (IAEA) for detector testing and training applications at the Agency due to the lack of appropriate SNM standards. This paper will address the effect of reduced access to SNM for calibration and training of neutron NDA applications along with the advantages and disadvantages of some solutions that do not use standards, such as the Monte Carlo techniques and the NPS.

  20. Overview of intercalibration of satellite instruments

    USGS Publications Warehouse

    Chander, G.; Hewison, T.J.; Fox, N.; Wu, X.; Xiong, X.; Blackwell, W.J.

    2013-01-01

    Inter-calibration of satellite instruments is critical for detection and quantification of changes in the Earth’s environment, weather forecasting, understanding climate processes, and monitoring climate and land cover change. These applications use data from many satellites; for the data to be inter-operable, the instruments must be cross-calibrated. To meet the stringent needs of such applications requires that instruments provide reliable, accurate, and consistent measurements over time. Robust techniques are required to ensure that observations from different instruments can be normalized to a common scale that the community agrees on. The long-term reliability of this process needs to be sustained in accordance with established reference standards and best practices. Furthermore, establishing physical meaning to the information through robust Système International d'unités (SI) traceable Calibration and Validation (Cal/Val) is essential to fully understand the parameters under observation. The processes of calibration, correction, stability monitoring, and quality assurance need to be underpinned and evidenced by comparison with “peer instruments” and, ideally, highly calibrated in-orbit reference instruments. Inter-calibration between instruments is a central pillar of the Cal/Val strategies of many national and international satellite remote sensing organizations. Inter-calibration techniques as outlined in this paper not only provide a practical means of identifying and correcting relative biases in radiometric calibration between instruments but also enable potential data gaps between measurement records in a critical time series to be bridged. Use of a robust set of internationally agreed upon and coordinated inter-calibration techniques will lead to significant improvement in the consistency between satellite instruments and facilitate accurate monitoring of the Earth’s climate at uncertainty levels needed to detect and attribute the mechanisms of change. This paper summarizes the state-of-the-art of post-launch radiometric calibration of remote sensing satellite instruments, through inter-calibration.

  1. Robot calibration with a photogrammetric on-line system using reseau scanning cameras

    NASA Astrophysics Data System (ADS)

    Diewald, Bernd; Godding, Robert; Henrich, Andreas

    1994-03-01

    The possibility for testing and calibration of industrial robots becomes more and more important for manufacturers and users of such systems. Exacting applications in connection with the off-line programming techniques or the use of robots as measuring machines are impossible without a preceding robot calibration. At the LPA an efficient calibration technique has been developed. Instead of modeling the kinematic behavior of a robot, the new method describes the pose deviations within a user-defined section of the robot's working space. High-precision determination of 3D coordinates of defined path positions is necessary for calibration and can be done by digital photogrammetric systems. For the calibration of a robot at the LPA a digital photogrammetric system with three Rollei Reseau Scanning Cameras was used. This system allows an automatic measurement of a large number of robot poses with high accuracy.

  2. Optimum data weighting and error calibration for estimation of gravitational parameters

    NASA Technical Reports Server (NTRS)

    Lerch, Francis J.

    1989-01-01

    A new technique was developed for the weighting of data from satellite tracking systems in order to obtain an optimum least-squares solution and an error calibration for the solution parameters. Data sets from optical, electronic, and laser systems on 17 satellites in the Goddard Earth Model-T1 (GEM-T1) were employed toward application of this technique for gravity field parameters. GEM-T2 (31 satellites) was also recently computed as a direct application of the method and is summarized. The method employs subset solutions of the data, adjusting the weights so that the subset solutions agree with the complete solution to within their error estimates. With the adjusted weights the process provides for an automatic calibration of the error estimates for the solution parameters. The data weights derived are generally much smaller than corresponding weights obtained from nominal values of observation accuracy or residuals. Independent tests show significant improvement for solutions with optimal weighting. The technique is general and may be applied to orbit parameters, station coordinates, or parameters other than the gravity model.
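
    A toy illustration of the underlying idea (iterating data-subset weights until each subset's weighted misfit matches its number of observations, so the formal errors become self-calibrated) is sketched below. The simple variance-component style update and the synthetic linear problem are assumptions for illustration, not the GEM estimation procedure.

    ```python
    # Minimal sketch of iterative data weighting: rescale each subset's weight
    # from its weighted least-squares residuals until the weights reflect the
    # subsets' actual noise levels.
    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.normal(size=(60, 3))                 # design matrix, 3 parameters
    x_true = np.array([1.0, -2.0, 0.5])
    true_sigma = np.array([0.05, 0.40])          # noise of data subsets 0 and 1
    subset = np.repeat([0, 1], 30)
    y = A @ x_true + rng.normal(size=60) * true_sigma[subset]

    w = np.ones(2)                               # start from equal (wrong) weights
    for _ in range(20):
        sqrt_w = np.sqrt(w[subset])
        x_hat = np.linalg.lstsq(A * sqrt_w[:, None], y * sqrt_w, rcond=None)[0]
        r = y - A @ x_hat
        for k in (0, 1):                         # set each weight from its subset misfit
            mask = subset == k
            w[k] = mask.sum() / np.sum(r[mask] ** 2)

    print(1.0 / np.sqrt(w))                      # recovered noise, roughly [0.05, 0.40]
    ```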

  3. Active/passive scanning. [airborne multispectral laser scanners for agricultural and water resources applications

    NASA Technical Reports Server (NTRS)

    Woodfill, J. R.; Thomson, F. J.

    1979-01-01

    The paper deals with the design, construction, and applications of an active/passive multispectral scanner combining lasers with conventional passive remote sensors. An application investigation was first undertaken to identify remote sensing applications where active/passive scanners (APS) would provide improvement over current means. Calibration techniques and instrument sensitivity are evaluated to provide predictions of the APS's capability to meet user needs. A preliminary instrument design was developed from the initial conceptual scheme. A design review settled the issues of worthwhile applications, calibration approach, hardware design, and laser complement. Next, a detailed mechanical design was drafted and construction of the APS commenced. The completed APS was tested and calibrated in the laboratory, then installed in a C-47 aircraft and ground tested. Several flight tests completed the test program.

  4. Optical trapping

    PubMed Central

    Neuman, Keir C.; Block, Steven M.

    2006-01-01

    Since their invention just over 20 years ago, optical traps have emerged as a powerful tool with broad-reaching applications in biology and physics. Capabilities have evolved from simple manipulation to the application of calibrated forces on—and the measurement of nanometer-level displacements of—optically trapped objects. We review progress in the development of optical trapping apparatus, including instrument design considerations, position detection schemes and calibration techniques, with an emphasis on recent advances. We conclude with a brief summary of innovative optical trapping configurations and applications. PMID:16878180

  5. Mabs monograph air blast instrumentation, 1943 - 1993. Measurement techniques and instrumentation. Volume 3. Air blast structural target and gage calibration. Technical report, 17 September 1993-31 May 1994, FLD04

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reisler, R.E.; Keefer, J.H.; Ethridge, N.H.

    1995-08-01

    Structural response measurement techniques and instrumentation developed by Military Applications of Blast Simulators (MABS) participating countries for field tests over the period 1943 through 1993 are summarized. Electronic and non-electronic devices deployed on multi-ton nuclear and high-explosive events are presented with calibration techniques. The country and the year the gage was introduced are included with the description. References for each are also provided.

  6. Using HEC-HMS: Application to Karkheh river basin

    USDA-ARS?s Scientific Manuscript database

    This paper aims to facilitate the use of HEC-HMS model using a systematic event-based technique for manual calibration of soil moisture accounting and snowmelt degree-day parameters. Manual calibration, which helps ensure the HEC-HMS parameter values are physically-relevant, is often a time-consumin...

  7. Radiometrically accurate scene-based nonuniformity correction for array sensors.

    PubMed

    Ratliff, Bradley M; Hayat, Majeed M; Tyo, J Scott

    2003-10-01

    A novel radiometrically accurate scene-based nonuniformity correction (NUC) algorithm is described. The technique combines absolute calibration with a recently reported algebraic scene-based NUC algorithm. The technique is based on the following principle: First, detectors that are along the perimeter of the focal-plane array are absolutely calibrated; then the calibration is transported to the remaining uncalibrated interior detectors through the application of the algebraic scene-based algorithm, which utilizes pairs of image frames exhibiting arbitrary global motion. The key advantage of this technique is that it can obtain radiometric accuracy during NUC without disrupting camera operation. Accurate estimates of the bias nonuniformity can be achieved with relatively few frames, which can be fewer than ten frame pairs. Advantages of this technique are discussed, and a thorough performance analysis is presented with use of simulated and real infrared imagery.
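
    To make the bias-transport idea concrete, here is a deliberately simplified sketch (my own illustration, not the published algorithm): it assumes noise-free frames related by a known integer global shift and propagates the bias map outward from the absolutely calibrated perimeter one frame pair at a time.

    ```python
    import numpy as np

    def propagate_bias(frame1, frame2, shift, bias, known):
        """One propagation step.  For a scene value seen by pixel (i, j) in frame1
        and by pixel (i+dy, j+dx) in frame2,
            y1[i, j]       = s + bias[i, j]
            y2[i+dy, j+dx] = s + bias[i+dy, j+dx],
        so the bias of the shifted pixel follows from the already-known bias."""
        dy, dx = shift
        H, W = frame1.shape
        bias, known = bias.copy(), known.copy()
        for i in range(H):
            for j in range(W):
                ti, tj = i + dy, j + dx
                if 0 <= ti < H and 0 <= tj < W and known[i, j] and not known[ti, tj]:
                    bias[ti, tj] = bias[i, j] + (frame2[ti, tj] - frame1[i, j])
                    known[ti, tj] = True
        return bias, known

    # Usage sketch: start with `known` true only on the perimeter (where biases were
    # measured absolutely), then apply the step over several frame pairs with
    # different shifts until every interior pixel has been reached.
    ```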

  8. Anatomical calibration for wearable motion capture systems: Video calibrated anatomical system technique.

    PubMed

    Bisi, Maria Cristina; Stagni, Rita; Caroselli, Alessio; Cappello, Angelo

    2015-08-01

Inertial sensors are becoming widely used for the assessment of human movement in both clinical and research applications, thanks to their usability out of the laboratory. This work proposes a method for calibrating anatomical landmark position in the wearable sensor reference frame with an easy-to-use, portable and low-cost device. An off-the-shelf camera, a stick and a pattern attached to the inertial sensor compose the device. The proposed technique is referred to as the video Calibrated Anatomical System Technique (vCAST). The absolute orientation of a synthetic femur was tracked both using the vCAST together with an inertial sensor and using stereo-photogrammetry as reference. Anatomical landmark calibration showed a mean absolute error of 0.6±0.5 mm: these errors are smaller than those affecting the in-vivo identification of anatomical landmarks. The roll, pitch and yaw anatomical frame orientations showed root mean square errors close to the accuracy limit of the wearable sensor used (1°), highlighting the reliability of the proposed technique. In conclusion, the present paper proposes and preliminarily verifies the performance of a method (vCAST) for calibrating anatomical landmark position in the wearable sensor reference frame: the technique requires little time, is highly portable, easy to implement and usable outside the laboratory. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.

  9. Performance evaluation of GNSS-TEC estimation techniques at the grid point in middle and low latitudes during different geomagnetic conditions

    NASA Astrophysics Data System (ADS)

    Abe, O. E.; Otero Villamide, X.; Paparini, C.; Radicella, S. M.; Nava, B.; Rodríguez-Bouza, M.

    2017-04-01

Global Navigation Satellite Systems (GNSS) have become a powerful tool used in surveying and mapping, air and maritime navigation, ionospheric/space weather research and other applications. However, in some cases their maximum efficiency cannot be attained due to uncorrelated errors associated with the system measurements, caused mainly by the dispersive nature of the ionosphere. The ionosphere has been represented using the total number of electrons along the signal path at a particular height, known as the Total Electron Content (TEC). There are many methods to estimate TEC, but their outputs are not uniform, which could be due to differences in how the biases inside the observables (measurements) are characterized and, in some cases, to the influence of the mapping function. Errors in TEC estimation can lead to wrong conclusions, which is especially critical for safety-of-life applications. This work investigated the performance of Ciraolo's and Gopi's GNSS-TEC calibration techniques, under five geomagnetically quiet and disturbed conditions in October 2013, at grid points located in low and middle latitudes. The data used were obtained from GNSS ground-based receivers located at Borriana in Spain (40°N, 0°E; middle latitude) and Accra in Ghana (5.50°N, -0.20°E; low latitude). The calibrated TEC results are compared with the TEC obtained from the European Geostationary Navigation Overlay System Processing Set (EGNOS PS) TEC algorithm, which is taken as the reference. The TEC derived from Global Ionospheric Maps (GIM) of the International GNSS Service (IGS) was also examined at the same grid points. The results show that Ciraolo's calibration technique (based on carrier-phase measurements only) estimates TEC better at middle latitude than Gopi's technique (based on code and carrier-phase measurements), while Gopi's calibration was found more reliable at low latitude than Ciraolo's technique. In addition, the TEC derived from the IGS GIM appears to be more reliable in the middle-latitude than in the low-latitude region.
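
    For readers unfamiliar with GNSS-TEC estimation, the sketch below shows the standard dual-frequency geometry-free combination that such calibration techniques start from; the constants are the usual GPS L1/L2 values, and the (large) differential code biases are deliberately left uncorrected, since removing them is precisely where the compared calibration techniques differ.

    ```python
    import numpy as np

    F1, F2 = 1.57542e9, 1.22760e9   # GPS L1 and L2 carrier frequencies (Hz)
    K = 40.3                        # ionospheric refraction constant (m^3 s^-2)

    def slant_tec_from_code(P1, P2):
        """Slant TEC (electrons/m^2) from the geometry-free code combination
        P2 - P1; satellite/receiver differential code biases are NOT removed."""
        return (F1**2 * F2**2) / (K * (F1**2 - F2**2)) * (P2 - P1)

    def to_tecu(stec):
        """1 TECU = 1e16 electrons/m^2."""
        return stec / 1e16

    def slant_to_vertical(stec, elev_rad, hm=350e3, Re=6371e3):
        """Single-layer mapping function at an assumed shell height hm."""
        sin_zp = Re * np.cos(elev_rad) / (Re + hm)
        return stec * np.sqrt(1.0 - sin_zp**2)
    ```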

  10. An integrated approach to monitoring the calibration stability of operational dual-polarization radars

    DOE PAGES

    Vaccarono, Mattia; Bechini, Renzo; Chandrasekar, Chandra V.; ...

    2016-11-08

The stability of weather radar calibration is a mandatory aspect for quantitative applications, such as rainfall estimation, short-term weather prediction and initialization of numerical atmospheric and hydrological models. Over the years, calibration monitoring techniques based on external sources have been developed, specifically calibration using the Sun and calibration based on ground clutter returns. In this paper, these two techniques are integrated and complemented with a self-consistency procedure and an intercalibration technique. The aim of the integrated approach is to implement a robust method for online monitoring, able to detect significant changes in the radar calibration. The physical consistency of polarimetric radar observables is exploited using the self-consistency approach, based on the expected correspondence between dual-polarization power and phase measurements in rain. This technique allows a reference absolute value to be provided for the radar calibration, from which any deviations may be detected using the other procedures. In particular, the ground clutter calibration is implemented on both polarization channels (horizontal and vertical) for each radar scan, allowing the polarimetric variables to be monitored and hardware failures to be promptly recognized. The Sun calibration allows monitoring of the calibration and sensitivity of the radar receiver, in addition to the antenna pointing accuracy. It is applied using observations collected during the standard operational scans but requires long integration times (several days) in order to accumulate a sufficient amount of useful data. Finally, an intercalibration technique is developed and performed to compare colocated measurements collected in rain by two radars in overlapping regions. The integrated approach is applied to the C-band weather radar network in northwestern Italy during July–October 2014. The set of methods considered appears suitable for establishing an online tool to monitor the stability of the radar calibration with an accuracy of about 2 dB. In conclusion, this is considered adequate to automatically detect any unexpected change in the radar system requiring further data analysis or on-site measurements.
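
    As a rough illustration of the self-consistency idea (not the procedure or coefficients used by the authors), the sketch below predicts the specific differential phase from Zh and Zdr along a rain ray with a placeholder power law, integrates it, and converts the mismatch with the measured differential-phase span into an approximate Zh calibration correction.

    ```python
    import numpy as np

    def estimate_zh_correction(zh_dbz, zdr_db, phidp_deg, dr_km,
                               a=1.5e-5, b=0.85, c=-0.4):
        """Self-consistency sketch.  a, b, c are placeholder coefficients; real
        values depend on radar band and the assumed drop-shape model.
            Kdp_pred = a * Zh_lin**b * 10**(c * Zdr)   [deg/km]
        Returns the correction (dB) to add to Zh so that the predicted and
        measured differential-phase spans agree."""
        zh_lin = 10.0 ** (np.asarray(zh_dbz) / 10.0)
        kdp_pred = a * zh_lin**b * 10.0 ** (c * np.asarray(zdr_db))
        dphi_pred = 2.0 * np.sum(kdp_pred * dr_km)   # two-way phase accumulation
        dphi_meas = phidp_deg[-1] - phidp_deg[0]
        return 10.0 / b * np.log10(dphi_meas / max(dphi_pred, 1e-6))
    ```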

  11. Calibration of strontium-90 eye applicator using a strontium external beam standard.

    PubMed

    Siddle, D; Langmack, K

    1999-07-01

    Four techniques for measuring the dose rate from Sr-90 concave eye plaques are presented. The techniques involve calibrating a concave eye plaque against a Sr-90 teletherapy unit using X-Omat film, radiochromic film, black LiF TLD discs and LiF chips. The mean dose rate predicted by these dosimeters is 7.5 cGy s(-1). The dose rate quoted by the manufacturer is 33% lower than this value, which is consistent with discrepancies reported by other authors. Calibration against a 6 MV linear accelerator was also carried out using each of the above dosimetric devices, and appropriate sensitivity correction factors have been presented.

  12. Calibration of strontium-90 eye applicator using a strontium external beam standard

    NASA Astrophysics Data System (ADS)

    Siddle, D.; Langmack, K.

    1999-07-01

    Four techniques for measuring the dose rate from Sr-90 concave eye plaques are presented. The techniques involve calibrating a concave eye plaque against a Sr-90 teletherapy unit using X-Omat film, radiochromic film, black LiF TLD discs and LiF chips. The mean dose rate predicted by these dosimeters is 7.5 cGy s-1. The dose rate quoted by the manufacturer is 33% lower than this value, which is consistent with discrepancies reported by other authors. Calibration against a 6 MV linear accelerator was also carried out using each of the above dosimetric devices, and appropriate sensitivity correction factors have been presented.

  13. Innovative self-calibration method for accelerometer scale factor of the missile-borne RINS with fiber optic gyro.

    PubMed

    Zhang, Qian; Wang, Lei; Liu, Zengjun; Zhang, Yiming

    2016-09-19

The calibration of an inertial measurement unit (IMU) is a key technique for improving the precision of the inertial navigation system (INS) of a missile, especially the calibration of the accelerometer scale factor. The traditional calibration method is generally based on a high-accuracy turntable; however, it is costly and its results are not representative of the actual operating environment. With the development of multi-axis rotational INS (RINS) with optical inertial sensors, self-calibration has become an effective way to calibrate the IMU on a missile, and the calibration results are more accurate in practical application. However, the introduction of a multi-axis RINS causes additional calibration errors, including non-orthogonality errors from mechanical processing and non-horizontal errors of the operating environment, which means that the multi-axis gimbals cannot be regarded as a high-accuracy turntable. For this missile application, after analyzing the relationship between the calibration error of the accelerometer scale factor and the non-orthogonality and non-horizontal angles, an innovative calibration procedure using the signals of the fiber optic gyro and a photoelectric encoder is proposed. Laboratory and vehicle experiment results validate the theory and show that the proposed method relaxes the orthogonality requirement on the rotation axes and eliminates the strict application conditions of the system.

  14. A calibration rig for multi-component internal strain gauge balance using the new design-of-experiment (DOE) approach

    NASA Astrophysics Data System (ADS)

    Nouri, N. M.; Mostafapour, K.; Kamran, M.

    2018-02-01

In a closed water-tunnel circuit, the multi-component strain gauge force and moment sensor (also known as a balance) is generally used to measure hydrodynamic forces and moments acting on scaled models. These balances are periodically calibrated by static loading. Their performance and accuracy depend significantly on the rig and the method of calibration. In this research, a new calibration rig was designed and constructed to calibrate multi-component internal strain gauge balances. The calibration rig has six degrees of freedom and six different component-loading structures that can be applied separately and synchronously. The system was designed based on the applicability of formal experimental design techniques, using gravity for balance loading and for balance positioning and alignment relative to gravity. To evaluate the calibration rig, a six-component internal balance developed by Iran University of Science and Technology was calibrated using response surface methodology. According to the results, the calibration rig met all design criteria. This rig provides the means by which various formal experimental design techniques can be implemented. The simplicity of the rig saves time and money in the design of experiments and in balance calibration while simultaneously increasing the accuracy of these activities.
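
    Although the record does not give the regression details, a formal-experimental-design calibration of this kind typically ends in a least-squares fit of each bridge output against the applied loads; the sketch below (an assumption-laden illustration, not the authors' model) fits a full quadratic response surface with interaction terms.

    ```python
    import numpy as np
    from itertools import combinations_with_replacement

    def quadratic_design_matrix(loads):
        """Intercept, linear, pure-quadratic and two-factor interaction terms
        in the applied load components (loads: shape (n_points, n_components))."""
        n, k = loads.shape
        cols = [np.ones(n)] + [loads[:, i] for i in range(k)]
        cols += [loads[:, i] * loads[:, j]
                 for i, j in combinations_with_replacement(range(k), 2)]
        return np.column_stack(cols)

    def fit_balance_calibration(applied_loads, bridge_outputs):
        """Least-squares fit of each strain-gauge bridge output as a quadratic
        response surface of the applied loads; a full RSM analysis would also
        test term significance and lack of fit."""
        X = quadratic_design_matrix(applied_loads)
        coeffs, *_ = np.linalg.lstsq(X, bridge_outputs, rcond=None)
        return coeffs   # shape: (n_terms, n_bridges)
    ```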

  15. A Precise Calibration Technique for Measuring High Gas Temperatures

    NASA Technical Reports Server (NTRS)

    Gokoglu, Suleyman A.; Schultz, Donald F.

    2000-01-01

A technique was developed for direct measurement of gas temperatures in the range of 2050 K to 2700 K with improved accuracy and reproducibility. The technique utilized the low emittance of certain fibrous materials, and the uncertainty of the technique was limited by the uncertainty in the melting points of the materials, i.e., +/-15 K. The materials were pure, thin, metal-oxide fibers whose diameters varied from 60 microns to 400 microns in the experiments. The sharp increase in the emittance of the fibers upon melting was utilized as an indication of reaching a known gas temperature. The accuracy of the technique was confirmed both by the calculated low emittance values of transparent fibers, of order 0.01, up to a few degrees below their melting point and by the fiber-diameter independence of the results. The melting-point temperature was approached in increments no larger than 4 K, which was accomplished by controlled increases of reactant flow rates in hydrogen-air and/or hydrogen-oxygen flames. As examples of applications of the technique, the gas-temperature measurements were used: (a) for assessing the uncertainty in inferring gas temperatures from thermocouple measurements, and (b) for calibrating an IR camera to measure gas temperatures. The technique offers an excellent calibration reference for other gas-temperature measurement methods, improving their accuracy and reliably extending their temperature range of applicability.

  16. A Precise Calibration Technique for Measuring High Gas Temperatures

    NASA Technical Reports Server (NTRS)

    Gokoglu, Suleyman A.; Schultz, Donald F.

    1999-01-01

A technique was developed for direct measurement of gas temperatures in the range of 2050 K - 2700 K with improved accuracy and reproducibility. The technique utilized the low emittance of certain fibrous materials, and the uncertainty of the technique was limited by the uncertainty in the melting points of the materials, i.e., +/- 15 K. The materials were pure, thin, metal-oxide fibers whose diameters varied from 60 microns to 400 microns in the experiments. The sharp increase in the emittance of the fibers upon melting was utilized as an indication of reaching a known gas temperature. The accuracy of the technique was confirmed both by the calculated low emittance values of transparent fibers, of order 0.01, up to a few degrees below their melting point and by the fiber-diameter independence of the results. The melting-point temperature was approached in increments no larger than 4 K, which was accomplished by controlled increases of reactant flow rates in hydrogen-air and/or hydrogen-oxygen flames. As examples of applications of the technique, the gas-temperature measurements were used (a) for assessing the uncertainty in inferring gas temperatures from thermocouple measurements, and (b) for calibrating an IR camera to measure gas temperatures. The technique offers an excellent calibration reference for other gas-temperature measurement methods, improving their accuracy and reliably extending their temperature range of applicability.

  17. Multiplexed absorption tomography with calibration-free wavelength modulation spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Weiwei; Kaminski, Clemens F., E-mail: cfk23@cam.ac.uk

    2014-04-14

    We propose a multiplexed absorption tomography technique, which uses calibration-free wavelength modulation spectroscopy with tunable semiconductor lasers for the simultaneous imaging of temperature and species concentration in harsh combustion environments. Compared with the commonly used direct absorption spectroscopy (DAS) counterpart, the present variant enjoys better signal-to-noise ratios and requires no baseline fitting, a particularly desirable feature for high-pressure applications, where adjacent absorption features overlap and interfere severely. We present proof-of-concept numerical demonstrations of the technique using realistic phantom models of harsh combustion environments and prove that the proposed techniques outperform currently available tomography techniques based on DAS.

  18. Specifying and calibrating instrumentations for wideband electronic power measurements. [in switching circuits

    NASA Technical Reports Server (NTRS)

    Lesco, D. J.; Weikle, D. H.

    1980-01-01

    The wideband electric power measurement related topics of electronic wattmeter calibration and specification are discussed. Tested calibration techniques are described in detail. Analytical methods used to determine the bandwidth requirements of instrumentation for switching circuit waveforms are presented and illustrated with examples from electric vehicle type applications. Analog multiplier wattmeters, digital wattmeters and calculating digital oscilloscopes are compared. The instrumentation characteristics which are critical to accurate wideband power measurement are described.

  19. Self-calibrating d-scan: measuring ultrashort laser pulses on-target using an arbitrary pulse compressor.

    PubMed

    Alonso, Benjamín; Sola, Íñigo J; Crespo, Helder

    2018-02-19

    In most applications of ultrashort pulse lasers, temporal compressors are used to achieve a desired pulse duration in a target or sample, and precise temporal characterization is important. The dispersion-scan (d-scan) pulse characterization technique usually involves using glass wedges to impart variable, well-defined amounts of dispersion to the pulses, while measuring the spectrum of a nonlinear signal produced by those pulses. This works very well for broadband few-cycle pulses, but longer, narrower bandwidth pulses are much more difficult to measure this way. Here we demonstrate the concept of self-calibrating d-scan, which extends the applicability of the d-scan technique to pulses of arbitrary duration, enabling their complete measurement without prior knowledge of the introduced dispersion. In particular, we show that the pulse compressors already employed in chirped pulse amplification (CPA) systems can be used to simultaneously compress and measure the temporal profile of the output pulses on-target in a simple way, without the need of additional diagnostics or calibrations, while at the same time calibrating the often-unknown differential dispersion of the compressor itself. We demonstrate the technique through simulations and experiments under known conditions. Finally, we apply it to the measurement and compression of 27.5 fs pulses from a CPA laser.

  1. Embedded calibration system for the DIII-D Langmuir probe analog fiber optic links

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watkins, J. G.; Rajpal, R.; Mandaliya, H.

    2012-10-15

This paper describes a generally applicable technique for simultaneously measuring the offset and gain of the 64 analog fiber optic data links used for the DIII-D fixed Langmuir probes by embedding a reference voltage waveform in the transmitted optical signal before every tokamak shot. The calibrated data channels allow calibration of the power supply control fiber optic links as well. The array of fiber optic links and the embedded calibration system described here make possible the use of superior modern data acquisition electronics in the control room.
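
    The per-channel correction implied by an embedded reference waveform reduces to a straight-line fit; the sketch below is a minimal illustration under the assumption of a linear link response (the function names and interfaces are invented for the example).

    ```python
    import numpy as np

    def channel_gain_offset(ref_true_volts, ref_measured):
        """Fit measured = gain * true + offset for one fiber-optic data link,
        using the reference waveform recorded before the shot."""
        gain, offset = np.polyfit(ref_true_volts, ref_measured, 1)
        return gain, offset

    def correct(raw_data, gain, offset):
        """Apply the inverse of the fitted link response to probe data."""
        return (raw_data - offset) / gain
    ```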

  2. Application of Genetic Algorithm (GA) Assisted Partial Least Square (PLS) Analysis on Trilinear and Non-trilinear Fluorescence Data Sets to Quantify the Fluorophores in Multifluorophoric Mixtures: Improving Quantification Accuracy of Fluorimetric Estimations of Dilute Aqueous Mixtures.

    PubMed

    Kumar, Keshav

    2018-03-01

Excitation-emission matrix fluorescence (EEMF) and total synchronous fluorescence spectroscopy (TSFS) are two fluorescence techniques that are commonly used for the analysis of multifluorophoric mixtures. These two fluorescence techniques are conceptually different and provide certain advantages over each other. The manual analysis of such highly correlated, large-volume EEMF and TSFS data sets towards developing a calibration model is difficult. Partial least square (PLS) analysis can analyze large volumes of EEMF and TSFS data by finding important factors that maximize the correlation between the spectral and concentration information for each fluorophore. However, the application of PLS analysis to entire data sets often does not provide a robust calibration model and requires a suitable pre-processing step. The present work evaluates the application of genetic algorithm (GA) analysis prior to PLS analysis of EEMF and TSFS data sets towards improving the precision and accuracy of the calibration model. The GA essentially combines the advantages provided by stochastic methods with those provided by deterministic approaches and can find the set of EEMF and TSFS variables that correlate well with the concentration of each of the fluorophores present in the multifluorophoric mixtures. The utility of the GA-assisted PLS analysis is successfully validated using (i) EEMF data sets acquired for dilute aqueous mixtures of four biomolecules and (ii) TSFS data sets acquired for dilute aqueous mixtures of four carcinogenic polycyclic aromatic hydrocarbons (PAHs). In the present work, it is shown that by using the GA it is possible to significantly improve the accuracy and precision of the PLS calibration models developed for both the EEMF and TSFS data sets. Hence, GA should be considered a useful pre-processing technique when developing EEMF and TSFS calibration models.
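
    A minimal sketch of GA-based variable selection feeding a PLS model is given below; the GA settings, fitness definition and use of scikit-learn are illustrative assumptions and not the authors' exact procedure.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    def fitness(mask, X, y, n_components=3):
        """Cross-validated error of a PLS model built on the selected variables."""
        if mask.sum() < n_components:
            return np.inf
        pls = PLSRegression(n_components=n_components)
        scores = cross_val_score(pls, X[:, mask], y, cv=5,
                                 scoring="neg_mean_squared_error")
        return -scores.mean()

    def ga_select(X, y, pop_size=30, n_gen=40, p_mut=0.02):
        """Tiny genetic algorithm: binary chromosomes mark which spectral
        variables (EEMF or TSFS intensities) enter the PLS calibration model."""
        n_var = X.shape[1]
        pop = rng.random((pop_size, n_var)) < 0.3           # random initial subsets
        for _ in range(n_gen):
            fit = np.array([fitness(ind, X, y) for ind in pop])
            parents = pop[np.argsort(fit)[: pop_size // 2]] # truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = parents[rng.integers(len(parents), size=2)]
                cut = rng.integers(1, n_var)
                child = np.concatenate([a[:cut], b[cut:]])  # one-point crossover
                child ^= rng.random(n_var) < p_mut          # bit-flip mutation
                children.append(child)
            pop = np.vstack([parents, children])
        fit = np.array([fitness(ind, X, y) for ind in pop])
        return pop[np.argmin(fit)]                          # best variable subset
    ```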

  3. Segmented Gamma Scanner for Small Containers of Uranium Processing Waste- 12295

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, K.E.; Smith, S.K.; Gailey, S.

    2012-07-01

The Segmented Gamma Scanner (SGS) is commonly utilized in the assay of 55-gallon drums containing radioactive waste. Successfully deployed calibration methods include measurement of vertical line source standards in representative matrices and mathematical efficiency calibrations. The SGS technique can also be utilized to assay smaller containers, such as those used for criticality safety in uranium processing facilities. For such an application, a Can SGS System is aptly suited for the identification and quantification of radionuclides present in fuel processing wastes. Additionally, since the significant presence of uranium lumping can confound even a simple 'pass/fail' measurement regimen, the high-resolution gamma spectroscopy allows for the use of lump-detection techniques. In this application a lump correction is not required, but a differential peak approach is used simply to identify the presence of U-235 lumps. The Can SGS is similar to current drum SGSs, but differs in the methodology for vertical segmentation. In the current drum SGS, the drum is placed on a rotator at a fixed vertical position while the detector, collimator, and transmission source are moved vertically to effect vertical segmentation. For the Can SGS, segmentation is more efficiently done by raising and lowering the rotator platform upon which the small container is positioned. This also reduces the complexity of the system mechanism. The application of the Can SGS introduces new challenges to traditional calibration and verification approaches. In this paper, we revisit SGS calibration methodology in the context of smaller waste containers, and as applied to fuel processing wastes. Specifically, we discuss solutions to the challenges introduced by requiring source standards to fit within the confines of the small containers and the unavailability of high-enriched uranium source standards. We also discuss the implementation of a previously used technique for identifying the presence of uranium lumping. The SGS technique is a well-accepted NDA technique applicable to containers of almost any size. It assumes a homogeneous matrix and activity distribution throughout the entire container, an assumption that is at odds with the detection of lumps within the assay item typical of uranium-processing waste. This fact, in addition to the difficulty in constructing small reference standards of uranium-bearing materials, required the methodology used for performing an efficiency curve calibration to be altered. The solution discussed in this paper is demonstrated to provide good results for both the segment activity and the full container activity when measuring heterogeneous source distributions. The application of this approach will need to be based on process knowledge of the assay items, as biases can be introduced if used with homogeneous, or nearly homogeneous, activity distributions. The bias will need to be quantified for each combination of container geometry and SGS scanning settings. One recommended approach for using the heterogeneous calibration discussed here is to assay each item using a homogeneous calibration initially. Review of the segment activities compared to the full container activity will signal the presence of a non-uniform activity distribution, as the segment activity will be grossly disproportionate to the full container activity. Upon seeing this result, the assay should either be reanalyzed or repeated using the heterogeneous calibration. (authors)

  4. Beat frequency quartz-enhanced photoacoustic spectroscopy for fast and calibration-free continuous trace-gas monitoring

    PubMed Central

    Wu, Hongpeng; Dong, Lei; Zheng, Huadan; Yu, Yajun; Ma, Weiguang; Zhang, Lei; Yin, Wangbao; Xiao, Liantuan; Jia, Suotang; Tittel, Frank K.

    2017-01-01

    Quartz-enhanced photoacoustic spectroscopy (QEPAS) is a sensitive gas detection technique which requires frequent calibration and has a long response time. Here we report beat frequency (BF) QEPAS that can be used for ultra-sensitive calibration-free trace-gas detection and fast spectral scan applications. The resonance frequency and Q-factor of the quartz tuning fork (QTF) as well as the trace-gas concentration can be obtained simultaneously by detecting the beat frequency signal generated when the transient response signal of the QTF is demodulated at its non-resonance frequency. Hence, BF-QEPAS avoids a calibration process and permits continuous monitoring of a targeted trace gas. Three semiconductor lasers were selected as the excitation source to verify the performance of the BF-QEPAS technique. The BF-QEPAS method is capable of measuring lower trace-gas concentration levels with shorter averaging times as compared to conventional PAS and QEPAS techniques and determines the electrical QTF parameters precisely. PMID:28561065

  5. The Fringe-Imaging Skin Friction Technique PC Application User's Manual

    NASA Technical Reports Server (NTRS)

    Zilliac, Gregory G.

    1999-01-01

    A personal computer application (CXWIN4G) has been written which greatly simplifies the task of extracting skin friction measurements from interferograms of oil flows on the surface of wind tunnel models. Images are first calibrated, using a novel approach to one-camera photogrammetry, to obtain accurate spatial information on surfaces with curvature. As part of the image calibration process, an auxiliary file containing the wind tunnel model geometry is used in conjunction with a two-dimensional direct linear transformation to relate the image plane to the physical (model) coordinates. The application then applies a nonlinear regression model to accurately determine the fringe spacing from interferometric intensity records as required by the Fringe Imaging Skin Friction (FISF) technique. The skin friction is found through application of a simple expression that makes use of lubrication theory to relate fringe spacing to skin friction.
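
    For orientation, the lubrication-theory step that links fringe spacing to skin friction is often quoted in the form below (a simplified constant-shear form in my own notation, not necessarily the exact expression coded in CXWIN4G):

    ```latex
    % Thin-oil-film similarity solution for constant wall shear \tau_w and run time t:
    %   h(x, t) = \mu_{oil}\, x / (\tau_w t)
    % One interference fringe corresponds to a film-thickness change
    %   \Delta h = \lambda / (2 n_o \cos\theta_r),
    % so a measured fringe spacing \Delta x gives
    \tau_w \;\approx\; \frac{2\, n_o \cos\theta_r\, \mu_{oil}\, \Delta x}{\lambda\, t}
    ```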

  6. A simplified gross thrust computing technique for an afterburning turbofan engine

    NASA Technical Reports Server (NTRS)

    Hamer, M. J.; Kurtenbach, F. J.

    1978-01-01

    A simplified gross thrust computing technique extended to the F100-PW-100 afterburning turbofan engine is described. The technique uses measured total and static pressures in the engine tailpipe and ambient static pressure to compute gross thrust. Empirically evaluated calibration factors account for three-dimensional effects, the effects of friction and mass transfer, and the effects of simplifying assumptions for solving the equations. Instrumentation requirements and the sensitivity of computed thrust to transducer errors are presented. NASA altitude facility tests on F100 engines (computed thrust versus measured thrust) are presented, and calibration factors obtained on one engine are shown to be applicable to the second engine by comparing the computed gross thrust. It is concluded that this thrust method is potentially suitable for flight test application and engine maintenance on production engines with a minimum amount of instrumentation.

  7. MT3DMS: Model use, calibration, and validation

    USGS Publications Warehouse

    Zheng, C.; Hill, Mary C.; Cao, G.; Ma, R.

    2012-01-01

    MT3DMS is a three-dimensional multi-species solute transport model for solving advection, dispersion, and chemical reactions of contaminants in saturated groundwater flow systems. MT3DMS interfaces directly with the U.S. Geological Survey finite-difference groundwater flow model MODFLOW for the flow solution and supports the hydrologic and discretization features of MODFLOW. MT3DMS contains multiple transport solution techniques in one code, which can often be important, including in model calibration. Since its first release in 1990 as MT3D for single-species mass transport modeling, MT3DMS has been widely used in research projects and practical field applications. This article provides a brief introduction to MT3DMS and presents recommendations about calibration and validation procedures for field applications of MT3DMS. The examples presented suggest the need to consider alternative processes as models are calibrated and suggest opportunities and difficulties associated with using groundwater age in transport model calibration.

  8. Uncertainty Estimate for the Outdoor Calibration of Solar Pyranometers: A Metrologist Perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reda, I.; Myers, D.; Stoffel, T.

    2008-12-01

Pyranometers are used outdoors to measure solar irradiance. By design, this type of radiometer can measure the total hemispheric (global) or diffuse (sky) irradiance when the detector is unshaded or shaded from the sun disk, respectively. These measurements are used in a variety of applications including solar energy conversion, atmospheric studies, agriculture, and materials science. Proper calibration of pyranometers is essential to ensure measurement quality. This paper describes a step-by-step method for calculating and reporting the uncertainty of the calibration, using the guidelines of the ISO 'Guide to the Expression of Uncertainty in Measurement' or GUM, applied to the pyranometer calibration procedures used at the National Renewable Energy Laboratory (NREL). The NREL technique characterizes the responsivity of a pyranometer as a function of the zenith angle, as well as reporting a single calibration responsivity value for a zenith angle of 45°. The uncertainty analysis shows that a lower uncertainty can be achieved by using the response function of a pyranometer determined as a function of zenith angle, in lieu of just using the average value at 45°. By presenting the contribution of each uncertainty source to the total uncertainty, users will be able to troubleshoot and improve their calibration process. The uncertainty analysis method can also be used to determine the uncertainty of different calibration techniques and applications, such as deriving the uncertainty of field measurements.
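
    As a generic illustration of the GUM propagation used here (placeholder numbers, not NREL values), the responsivity and its expanded uncertainty can be computed as follows.

    ```python
    import numpy as np

    def responsivity(V_uV, G_ref):
        """Pyranometer responsivity in uV per W/m^2: output voltage divided by
        the reference irradiance."""
        return V_uV / G_ref

    def combined_uncertainty(sensitivities, std_uncertainties, k=2.0):
        """GUM-style propagation: root-sum-square of sensitivity-weighted input
        uncertainties, expanded with coverage factor k (k = 2 for ~95%)."""
        u_c = np.sqrt(np.sum((np.asarray(sensitivities)
                              * np.asarray(std_uncertainties)) ** 2))
        return u_c, k * u_c

    # Usage sketch with made-up numbers: R = V/G, so dR/dV = 1/G and dR/dG = -V/G^2.
    R = responsivity(V_uV=8.2e3, G_ref=1000.0)
    u_c, U = combined_uncertainty(sensitivities=[1.0 / 1000.0, -8.2e3 / 1000.0**2],
                                  std_uncertainties=[5.0, 4.0])
    ```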

  9. A Single-Vector Force Calibration Method Featuring the Modern Design of Experiments

    NASA Technical Reports Server (NTRS)

    Parker, P. A.; Morton, M.; Draper, N.; Line, W.

    2001-01-01

    This paper proposes a new concept in force balance calibration. An overview of the state-of-the-art in force balance calibration is provided with emphasis on both the load application system and the experimental design philosophy. Limitations of current systems are detailed in the areas of data quality and productivity. A unique calibration loading system integrated with formal experimental design techniques has been developed and designated as the Single-Vector Balance Calibration System (SVS). This new concept addresses the limitations of current systems. The development of a quadratic and cubic calibration design is presented. Results from experimental testing are compared and contrasted with conventional calibration systems. Analyses of data are provided that demonstrate the feasibility of this concept and provide new insights into balance calibration.

  10. Evaluating and improving the performance of thin film force sensors within body and device interfaces.

    PubMed

    Likitlersuang, Jirapat; Leineweber, Matthew J; Andrysek, Jan

    2017-10-01

    Thin film force sensors are commonly used within biomechanical systems, and at the interface of the human body and medical and non-medical devices. However, limited information is available about their performance in such applications. The aims of this study were to evaluate and determine ways to improve the performance of thin film (FlexiForce) sensors at the body/device interface. Using a custom apparatus designed to load the sensors under simulated body/device conditions, two aspects were explored relating to sensor calibration and application. The findings revealed accuracy errors of 23.3±17.6% for force measurements at the body/device interface with conventional techniques of sensor calibration and application. Applying a thin rigid disc between the sensor and human body and calibrating the sensor using compliant surfaces was found to substantially reduce measurement errors to 2.9±2.0%. The use of alternative calibration and application procedures is recommended to gain acceptable measurement performance from thin film force sensors in body/device applications. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.

  11. Calibrated LCD/TFT stimulus presentation for visual psychophysics in fMRI.

    PubMed

    Strasburger, H; Wüstenberg, T; Jäncke, L

    2002-11-15

    Standard projection techniques using liquid crystal (LCD) or thin-film transistor (TFT) technology show drastic distortions in luminance and contrast characteristics across the screen and across grey levels. Common luminance measurement and calibration techniques are not applicable in the vicinity of MRI scanners. With the aid of a fibre optic, we measured screen luminances for the full space of screen position and image grey values and on that basis developed a compensation technique that involves both luminance homogenisation and position-dependent gamma correction. By the technique described, images displayed to a subject in functional MRI can be specified with high precision by a matrix of desired luminance values rather than by local grey value.
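
    The compensation described amounts to a per-position inverse lookup from desired luminance to grey value; the sketch below assumes a luminance cube measured on a grid of screen patches and grey levels (the data layout and names are invented for the example).

    ```python
    import numpy as np

    def build_inverse_lut(grey_levels, lum_cube):
        """lum_cube[i, j, k] is the measured luminance of screen patch (i, j) at
        grey level grey_levels[k] (assumed monotonically increasing in k).
        Returns a lookup mapping a desired luminance to the grey value that
        produces it at that patch, clipped to a range every patch can reach."""
        lmax_common = lum_cube[:, :, -1].min()   # homogenise to the dimmest patch
        def grey_for(i, j, desired_lum):
            desired = np.clip(desired_lum, lum_cube[i, j, 0], lmax_common)
            return np.interp(desired, lum_cube[i, j], grey_levels)
        return grey_for, lmax_common

    # Usage sketch: replacing a single global gamma curve with this per-patch
    # inverse lookup corrects both the spatial inhomogeneity and the grey-level
    # nonlinearity at the same time.
    ```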

  12. Standardization of Laser Methods and Techniques for Vibration Measurements and Calibrations

    NASA Astrophysics Data System (ADS)

    von Martens, Hans-Jürgen

    2010-05-01

The realization and dissemination of the SI units of motion quantities (vibration and shock) have been based on laser interferometer methods specified in international documentary standards. New and refined laser methods and techniques developed by national metrology institutes and by leading manufacturers in the past two decades have been swiftly specified as standard methods for inclusion in the ISO 16063 series of international documentary standards. A survey of ISO Standards for the calibration of vibration and shock transducers demonstrates the extended ranges and improved accuracy (measurement uncertainty) of laser methods and techniques for vibration and shock measurements and calibrations. The first standard for the calibration of laser vibrometers by laser interferometry, or by a reference accelerometer calibrated by laser interferometry (ISO 16063-41), is at the Draft International Standard (DIS) stage and may be issued by the end of 2010. The standard methods with refined techniques proved to achieve wider measurement ranges and smaller measurement uncertainties than those specified in the ISO Standards. The applicability of different standardized interferometer methods to vibrations at high frequencies was recently demonstrated up to 347 kHz (acceleration amplitudes up to 350 km/s2). The relative deviations between the amplitude measurement results of the different interferometer methods, applied simultaneously, were less than 1% in all cases.

  13. A Review of Calibration Transfer Practices and Instrument Differences in Spectroscopy.

    PubMed

    Workman, Jerome J

    2018-03-01

    Calibration transfer for use with spectroscopic instruments, particularly for near-infrared, infrared, and Raman analysis, has been the subject of multiple articles, research papers, book chapters, and technical reviews. There has been a myriad of approaches published and claims made for resolving the problems associated with transferring calibrations; however, the capability of attaining identical results over time from two or more instruments using an identical calibration still eludes technologists. Calibration transfer, in a precise definition, refers to a series of analytical approaches or chemometric techniques used to attempt to apply a single spectral database, and the calibration model developed using that database, for two or more instruments, with statistically retained accuracy and precision. Ideally, one would develop a single calibration for any particular application, and move it indiscriminately across instruments and achieve identical analysis or prediction results. There are many technical aspects involved in such precision calibration transfer, related to the measuring instrument reproducibility and repeatability, the reference chemical values used for the calibration, the multivariate mathematics used for calibration, and sample presentation repeatability and reproducibility. Ideally, a multivariate model developed on a single instrument would provide a statistically identical analysis when used on other instruments following transfer. This paper reviews common calibration transfer techniques, mostly related to instrument differences, and the mathematics of the uncertainty between instruments when making spectroscopic measurements of identical samples. It does not specifically address calibration maintenance or reference laboratory differences.

  14. Analysis of calibration materials to improve dual-energy CT scanning for petrophysical applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ayyalasomavaiula, K.; McIntyre, D.; Jain, J.

    2011-01-01

Dual-energy CT scanning is a rapidly emerging imaging technique employed in the non-destructive evaluation of various materials. Although CT (computerized tomography) has been used for characterizing rocks and for visualizing and quantifying multiphase flow through rocks for over 25 years, most scanning is done at voltage settings above 100 kV to take advantage of the Compton scattering (CS) effect, which responds to density changes. Below 100 kV the photoelectric effect (PE) is dominant, which responds to the effective atomic number (Zeff), directly related to the photoelectric factor. Using the combination of the two effects helps in better characterization of reservoir rocks. The most common technique for dual-energy CT scanning relies on homogeneous calibration standards to produce the most accurate decoupled data. However, the use of calibration standards with impurities increases the probability of error in the reconstructed data and results in poor rock characterization. This work combines ICP-OES (inductively coupled plasma optical emission spectroscopy) and LIBS (laser-induced breakdown spectroscopy) analytical techniques to quantify the type and level of impurities in a set of commercially purchased calibration standards used in dual-energy scanning. Zeff values for the calibration standards, with and without the impurity data, were calculated using the weighted linear combination of the various elements present and were used in calculating Zeff with the dual-energy technique. Results show a 2 to 5% difference in predicted Zeff values, which may affect the corresponding log calibrations. The effect that these techniques have on improving material identification data is discussed and analyzed. The workflow developed in this paper will translate to more accurate material identification estimates for unknown samples and improve the calibration of well logging tools.

  15. Field Calibration of Wind Direction Sensor to the True North and Its Application to the Daegwanryung Wind Turbine Test Sites

    PubMed Central

    Lee, Jeong Wan

    2008-01-01

This paper proposes a field calibration technique for aligning a wind direction sensor to true north. The proposed technique uses synchronized measurements of images captured by a camera and the output voltage of the wind direction sensor. The true wind direction was evaluated through image processing of the captured pictures of the sensor in the least-squares sense, and the evaluated true value was then compared with the measured output voltage of the sensor. This technique solves the misalignment problem of the wind direction sensor that arises when installing a meteorological mast. Uncertainty analyses are presented for the proposed technique and the calibration accuracy is discussed. Finally, the proposed technique was applied to the real meteorological mast at the Daegwanryung test site, and statistical analysis of the experimental tests yielded estimates of the stable misalignment value and its uncertainty level. Strictly speaking, it is confirmed that the misalignment error relative to true north can be expected to remain within the stated credibility level. PMID:27873957
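
    A simple way to express the alignment step is as the circular mean of the differences between image-derived and sensor-derived directions, which is the natural least-squares-type estimate for angles; the sketch below assumes a linear voltage-to-angle conversion and is only an illustration of the idea, not the authors' processing chain.

    ```python
    import numpy as np

    def sensor_direction_from_voltage(v, v_min, v_max):
        """Assumed linear mapping of the vane output voltage to 0-360 degrees."""
        return 360.0 * (np.asarray(v) - v_min) / (v_max - v_min)

    def north_offset(theta_image_deg, theta_sensor_deg):
        """Misalignment of the sensor zero relative to true north, taken as the
        circular mean of the angular differences to the image-derived reference."""
        d = np.deg2rad(np.asarray(theta_image_deg) - np.asarray(theta_sensor_deg))
        return np.rad2deg(np.arctan2(np.sin(d).mean(), np.cos(d).mean()))
    ```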

  16. Calibration of complex polarimetric SAR imagery using backscatter correlations

    NASA Technical Reports Server (NTRS)

    Klein, Jeffrey D.

    1992-01-01

A new technique for the calibration of multipolarization synthetic aperture radar (SAR) imagery is described. If scatterer reciprocity and a lack of correlation between co- and cross-polarized radar echoes (for azimuthally symmetric distributed targets) are assumed, the effects of signal leakage between the radar data channels can be removed without the use of known ground targets. If known targets are available, all data channels may be calibrated relative to one another and radiometrically as well. The method is verified with simulations and application to airborne SAR data.

  17. Development of calibration techniques for ultrasonic hydrophone probes in the frequency range from 1 to 100 MHz

    PubMed Central

    Umchid, S.; Gopinath, R.; Srinivasan, K.; Lewin, P. A.; Daryoush, A. S.; Bansal, L.; El-Sherif, M.

    2009-01-01

    The primary objective of this work was to develop and optimize the calibration techniques for ultrasonic hydrophone probes used in acoustic field measurements up to 100 MHz. A dependable, 100 MHz calibration method was necessary to examine the behavior of a sub-millimeter spatial resolution fiber optic (FO) sensor and assess the need for such a sensor as an alternative tool for high frequency characterization of ultrasound fields. Also, it was of interest to investigate the feasibility of using FO probes in high intensity fields such as those employed in HIFU (High Intensity Focused Ultrasound) applications. In addition to the development and validation of a novel, 100 MHz calibration technique the innovative elements of this research include implementation and testing of a prototype FO sensor with an active diameter of about 10 μm that exhibits uniform sensitivity over the considered frequency range and does not require any spatial averaging corrections up to about 75 MHz. The results of the calibration measurements are presented and it is shown that the optimized calibration technique allows the sensitivity of the hydrophone probes to be determined as a virtually continuous function of frequency and is also well suited to verify the uniformity of the FO sensor frequency response. As anticipated, the overall uncertainty of the calibration was dependent on frequency and determined to be about ±12% (±1 dB) up to 40 MHz, ±20% (±1.5 dB) from 40 to 60 MHz and ±25% (±2 dB) from 60 to 100 MHz. The outcome of this research indicates that once fully developed and calibrated, the combined acousto-optic system will constitute a universal reference tool in the wide, 100 MHz bandwidth. PMID:19110289

  18. Evaluation of commercially available techniques and development of simplified methods for measuring grille airflows in HVAC systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, Iain S.; Wray, Craig P.; Guillot, Cyril

    2003-08-01

In this report, we discuss the accuracy of flow hoods for residential applications, based on laboratory tests and field studies. The results indicate that commercially available hoods are often inadequate to measure flows in residential systems, and that there can be a wide range of performance between different flow hoods. The errors are due to poor calibrations, sensitivity of existing hoods to grille flow non-uniformities, and flow changes from added flow resistance. We also evaluated several simple techniques for measuring register airflows that could be adopted by the HVAC industry and homeowners as simple diagnostics that are often as accurate as commercially available devices. Our test results also show that current calibration procedures for flow hoods do not account for field application problems. As a result, organizations such as ASHRAE or ASTM need to develop a new standard for flow hood calibration, along with a new measurement standard to address field use of flow hoods.

  19. Infrared Stokes polarimetry and spectropolarimetry

    NASA Astrophysics Data System (ADS)

    Kudenov, Michael William

In this work, three methods of measuring the polarization state of light in the thermal infrared (3-12 μm) are modeled, simulated, calibrated and experimentally verified in the laboratory. The first utilizes the method of channeled spectropolarimetry (CP) to encode the Stokes polarization parameters onto the optical power spectrum. This channeled spectral technique is implemented with the use of two Yttrium Vanadate (YVO4) crystal retarders. A basic mathematical model for the system is presented, showing that all the Stokes parameters are directly present in the interferogram. Theoretical results are compared with real data from the system, an improved model is provided to simulate the effects of absorption within the crystal, and a modified calibration technique is introduced to account for this absorption. Lastly, effects due to interferometer instabilities on the reconstructions, including non-uniform sampling and interferogram translations, are investigated and techniques are employed to mitigate them. Second is the method of prismatic imaging polarimetry (PIP), which can be envisioned as the monochromatic application of channeled spectropolarimetry. Unlike CP, PIP encodes the 2-dimensional Stokes parameters in a scene onto spatial carrier frequencies. However, the calibration techniques derived in the infrared for CP are extremely similar to those of the PIP. Consequently, the PIP technique is implemented with a set of four YVO4 crystal prisms. A mathematical model for the polarimeter is presented in which diattenuation due to Fresnel effects and dichroism in the crystal are included. An improved polarimetric calibration technique is introduced to remove the diattenuation effects, along with the relative radiometric calibration required for the BPIP operating with a thermal background and large detector offsets. Data demonstrating emission polarization are presented from various blackbodies, which are compared to data from our Fourier transform infrared spectropolarimeter. Additionally, limitations in the PIP technique with regard to the spectral bandwidth and F/# of the imaging system are analyzed. A model able to predict the carrier frequency's fringe visibility is produced and experimentally verified, further reinforcing the PIP's limitations. The last technique is significantly different from CP or PIP and involves the simulation and calibration of a thermal infrared division of amplitude imaging Stokes polarimeter. For the first time, application of microbolometer focal plane array (FPA) technology to polarimetry is demonstrated. The sensor utilizes a wire-grid beamsplitter with imaging systems positioned at each output to analyze two orthogonal linear polarization states simultaneously. Combined with a form birefringent wave plate, the system is capable of snapshot imaging polarimetry in any one Stokes vector (S1, S2 or S3). Radiometric and polarimetric calibration procedures for the instrument are provided and the reduction matrices from the calibration are compared to rigorous coupled wave analysis (RCWA) and raytracing simulations. The design and optimization of the sensor's wire-grid beam splitter and wave plate are presented, along with their corresponding prescriptions. Polarimetric calibration error due to the spectrally broadband nature of the instrument is also overviewed.
Image registration techniques for the sensor are discussed and data from the instrument are presented, demonstrating a microbolometer's ability to measure the small intensity variations corresponding to polarized emission in natural environments.

  20. Research relative to weather radar measurement techniques

    NASA Technical Reports Server (NTRS)

    Smith, Paul L.

    1992-01-01

    Research relative to weather radar measurement techniques, which involves some investigations related to measurement techniques applicable to meteorological radar systems in Thailand, is reported. A major part of the activity was devoted to instruction and discussion with Thai radar engineers, technicians, and meteorologists concerning the basic principles of radar meteorology and applications to specific problems, including measurement of rainfall and detection of wind shear/microburst hazards. Weather radar calibration techniques were also considered during this project. Most of the activity took place during two visits to Thailand, in December 1990 and February 1992.

  1. Optimum data weighting and error calibration for estimation of gravitational parameters

    NASA Technical Reports Server (NTRS)

    Lerch, F. J.

    1989-01-01

    A new technique was developed for the weighting of data from satellite tracking systems in order to obtain an optimum least squares solution and an error calibration for the solution parameters. Data sets from optical, electronic, and laser systems on 17 satellites in GEM-T1 (Goddard Earth Model, 36x36 spherical harmonic field) were employed toward application of this technique for gravity field parameters. Also, GEM-T2 (31 satellites) was recently computed as a direct application of the method and is summarized here. The method employs subset solutions of the data associated with the complete solution and uses an algorithm to adjust the data weights by requiring the differences of parameters between solutions to agree with their error estimates. With the adjusted weights the process provides for an automatic calibration of the error estimates for the solution parameters. The data weights derived are generally much smaller than corresponding weights obtained from nominal values of observation accuracy or residuals. Independent tests show significant improvement for solutions with optimal weighting as compared to the nominal weighting. The technique is general and may be applied to orbit parameters, station coordinates, or other parameters than the gravity model.

  2. Optical See-Through Head Mounted Display Direct Linear Transformation Calibration Robustness in the Presence of User Alignment Noise

    NASA Technical Reports Server (NTRS)

    Axholt, Magnus; Skoglund, Martin; Peterson, Stephen D.; Cooper, Matthew D.; Schoen, Thomas B.; Gustafsson, Fredrik; Ynnerman, Anders; Ellis, Stephen R.

    2010-01-01

    Augmented Reality (AR) is a technique by which computer-generated signals synthesize impressions that are made to coexist with the surrounding real world as perceived by the user. Human smell, taste, touch and hearing can all be augmented, but most commonly AR refers to the human vision being overlaid with information otherwise not readily available to the user. A correct calibration is important on an application level, ensuring that e.g. data labels are presented at correct locations, but also on a system level to enable display techniques such as stereoscopy to function properly [SOURCE]. Thus, vital to AR, calibration methodology is an important research area. While great achievements have already been made, there are some properties in current calibration methods for augmenting vision which do not translate from their traditional use in automated camera calibration to use with a human operator. This paper uses a Monte Carlo simulation of a standard direct linear transformation camera calibration to investigate how user-introduced head-orientation noise affects the parameter estimation during a calibration procedure of an optical see-through head mounted display.
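
    As a rough, hypothetical illustration of the Monte Carlo idea (not the authors' implementation), the following Python sketch estimates a projection matrix by direct linear transformation from synthetic alignment points, perturbs the 2D alignments with Gaussian jitter standing in for head-orientation noise, and reports the spread of the resulting reprojection errors; the camera matrix, point cloud, and noise level are all invented.

      import numpy as np

      rng = np.random.default_rng(0)

      def dlt(X, x):
          """Direct linear transformation: estimate a 3x4 projection matrix P
          from 3D points X (N,3) and their 2D images x (N,2), N >= 6."""
          rows = []
          for (Xw, Yw, Zw), (u, v) in zip(X, x):
              rows.append([Xw, Yw, Zw, 1, 0, 0, 0, 0, -u*Xw, -u*Yw, -u*Zw, -u])
              rows.append([0, 0, 0, 0, Xw, Yw, Zw, 1, -v*Xw, -v*Yw, -v*Zw, -v])
          _, _, Vt = np.linalg.svd(np.asarray(rows))
          return Vt[-1].reshape(3, 4)

      def project(P, X):
          Xh = np.hstack([X, np.ones((len(X), 1))])
          p = (P @ Xh.T).T
          return p[:, :2] / p[:, 2:]

      # Synthetic "true" camera and calibration points (all values illustrative).
      P_true = np.array([[800.0, 0.0, 320.0, 10.0],
                         [0.0, 800.0, 240.0, 20.0],
                         [0.0, 0.0, 1.0, 2.0]])
      X = rng.uniform(-1.0, 1.0, size=(20, 3))
      x_ideal = project(P_true, X)

      # Monte Carlo: head-orientation noise is mimicked as Gaussian jitter (in
      # pixels) on the user's 2D alignments; inspect the reprojection-error spread.
      sigma_px = 2.0
      errors = []
      for _ in range(1000):
          x_noisy = x_ideal + rng.normal(0.0, sigma_px, size=x_ideal.shape)
          P_hat = dlt(X, x_noisy)
          errors.append(np.sqrt(np.mean((project(P_hat, X) - x_ideal) ** 2)))
      print("mean / 95th-percentile reprojection error [px]:",
            np.mean(errors), np.percentile(errors, 95))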

  3. Radiometric Calibration Techniques for Signal-of-Opportunity Reflectometers

    NASA Technical Reports Server (NTRS)

    Piepmeier, Jeffrey R.; Shah, Rashmi; Deshpande, Manohar; Johnson, Carey

    2014-01-01

    Bi-static reflection measurements utilizing global navigation satellite system (GNSS) or other signals of opportunity (SoOp) can be used to sense ocean and terrestrial surface properties. End-to-end calibration of GNSS-R has been performed using a well-characterized reflection surface (e.g., water), a direct path antenna, and receiver gain characterization. We propose an augmented approach using on-board receiver electronics for radiometric calibration of SoOp reflectometers utilizing direct and reflected signal receiving antennas. The method calibrates receiver and correlator gains and offsets utilizing a reference switch and common noise source. On-board electronic calibration sources, such as reference switches, noise diodes and loop-back circuits, have shown great utility in stabilizing total power and correlation microwave radiometer and scatterometer receiver electronics in L-band spaceborne instruments. Application to SoOp instruments is likely to bring several benefits. For example, application to provide short and long time scale calibration stability of the direct path channel, especially in low signal-to-noise ratio configurations, is directly analogous to the microwave radiometer problem. The direct path channel is analogous to the loopback path in a scatterometer to provide a reference of the transmitted power, although the receiver is independent from the reflected path channel. Thus, a common noise source can be used to measure the gain ratio of the two paths. Using these techniques, long-term (days to weeks) calibration stability better than 0.1 has been achieved for spaceborne L-band scatterometers and radiometers. Similar long-term stability would likely be needed for a spaceborne reflectometer mission to measure terrestrial properties such as soil moisture.

  4. Simulating muscular thin films using thermal contraction capabilities in finite element analysis tools.

    PubMed

    Webster, Victoria A; Nieto, Santiago G; Grosberg, Anna; Akkus, Ozan; Chiel, Hillel J; Quinn, Roger D

    2016-10-01

    In this study, new techniques for approximating the contractile properties of cells in biohybrid devices using Finite Element Analysis (FEA) have been investigated. Many current techniques for modeling biohybrid devices use individual cell forces to simulate the cellular contraction. However, such techniques result in long simulation runtimes. In this study we investigated the effect of the use of thermal contraction on simulation runtime. The thermal contraction model was significantly faster than models using individual cell forces, making it beneficial for rapidly designing or optimizing devices. Three techniques, Stoney's Approximation, a Modified Stoney's Approximation, and a Thermostat Model, were explored for calibrating thermal expansion/contraction parameters (TECPs) needed to simulate cellular contraction using thermal contraction. The TECP values were calibrated by using published data on the deflections of muscular thin films (MTFs). Using these techniques, TECP values that suitably approximate experimental deflections can be determined by using experimental data obtained from cardiomyocyte MTFs. Furthermore, a sensitivity analysis was performed in order to investigate the contribution of individual variables, such as elastic modulus and layer thickness, to the final calibrated TECP for each calibration technique. Additionally, the TECP values are applicable to other types of biohybrid devices. Copyright © 2016 Elsevier Ltd. All rights reserved.
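
    The following Python sketch shows one way a Stoney-type calibration of the thermal expansion/contraction parameter (TECP) could be set up: a measured film curvature is converted to an equivalent film stress with Stoney's approximation, and the TECP is chosen so that a prescribed temperature change reproduces that stress in the contractile layer. It is a simplified sketch under assumed isotropic, biaxial-stress conditions; all material properties and the curvature value are illustrative, not taken from the paper.

      # Sketch of a Stoney-based route to a thermal expansion/contraction
      # parameter (TECP): convert a measured muscular-thin-film curvature into an
      # equivalent film stress, then pick the TECP that reproduces that stress for
      # a unit temperature change in the FEA model. Values below are illustrative.

      def stoney_film_stress(E_s, nu_s, t_s, t_f, curvature):
          """Film stress from substrate curvature via Stoney's approximation."""
          return E_s * t_s**2 * curvature / (6.0 * t_f * (1.0 - nu_s))

      def tecp_from_stress(sigma_f, E_f, nu_f, dT):
          """TECP (1/K) such that a biaxial thermal strain alpha*dT produces
          sigma_f = E_f / (1 - nu_f) * alpha * dT in the contractile layer."""
          return sigma_f * (1.0 - nu_f) / (E_f * dT)

      E_s, nu_s, t_s = 1.5e6, 0.49, 15e-6   # substrate: Pa, -, m (illustrative, PDMS-like)
      E_f, nu_f, t_f = 1.0e6, 0.49, 5e-6    # contractile layer: Pa, -, m
      radius_of_curvature = 2.0e-3          # m, from the MTF deflection images
      dT = 1.0                              # K, magnitude of the imposed temperature change
                                            # (the sign used in the FEA sets contraction)

      sigma_f = stoney_film_stress(E_s, nu_s, t_s, t_f, 1.0 / radius_of_curvature)
      alpha = tecp_from_stress(sigma_f, E_f, nu_f, dT)
      print(f"film stress ~ {sigma_f:.3g} Pa, calibrated TECP ~ {alpha:.3g} 1/K")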

  5. Technique for Radiometer and Antenna Array Calibration with Two Antenna Noise Diodes

    NASA Technical Reports Server (NTRS)

    Srinivasan, Karthik; Limaye, Ashutosh; Laymon, Charles; Meyer, Paul

    2011-01-01

    This paper presents a new technique to calibrate a microwave radiometer and phased array antenna system. The calibration technique uses a radiated noise source in addition to an injected noise source. The plane of reference for this calibration technique is the face of the antenna, so it can effectively calibrate the gain fluctuations in active phased array antennas. The paper gives the mathematical formulation for the technique and discusses the improvements brought by the method over existing calibration techniques.

  6. Availability of High Quality TRMM Ground Validation Data from Kwajalein, RMI: A Practical Application of the Relative Calibration Adjustment Technique

    NASA Technical Reports Server (NTRS)

    Marks, David A.; Wolff, David B.; Silberstein, David S.; Tokay, Ali; Pippitt, Jason L.; Wang, Jianxin

    2008-01-01

    Since the Tropical Rainfall Measuring Mission (TRMM) satellite launch in November 1997, the TRMM Satellite Validation Office (TSVO) at NASA Goddard Space Flight Center (GSFC) has been performing quality control and estimating rainfall from the KPOL S-band radar at Kwajalein, Republic of the Marshall Islands. Over this period, KPOL has incurred many episodes of calibration and antenna pointing angle uncertainty. To address these issues, the TSVO has applied the Relative Calibration Adjustment (RCA) technique to eight years of KPOL radar data to produce Ground Validation (GV) Version 7 products. This application has significantly improved stability in KPOL reflectivity distributions needed for Probability Matching Method (PMM) rain rate estimation and for comparisons to the TRMM Precipitation Radar (PR). In years with significant calibration and angle corrections, the statistical improvement in PMM distributions is dramatic. The intent of this paper is to show improved stability in corrected KPOL reflectivity distributions by using the PR as a stable reference. Inter-month fluctuations in mean reflectivity differences between the PR and corrected KPOL are on the order of 1-2 dB, and inter-year mean reflectivity differences fluctuate by approximately 1 dB. This represents a marked improvement in stability with confidence comparable to the established calibration and uncertainty boundaries of the PR. The practical application of the RCA method has salvaged eight years of radar data that would have otherwise been unusable, and has made possible a high-quality database of tropical ocean-based reflectivity measurements and precipitation estimates for the research community.

  7. Calibration of passive remote observing optical and microwave instrumentation; Proceedings of the Meeting, Orlando, FL, Apr. 3-5, 1991

    NASA Technical Reports Server (NTRS)

    Guenther, Bruce W. (Editor)

    1991-01-01

    Various papers on the calibration of passive remote observing optical and microwave instrumentation are presented. Individual topics addressed include: on-board calibration device for a wide field-of-view instrument, calibration for the medium-resolution imaging spectrometer, cryogenic radiometers and intensity-stabilized lasers for EOS radiometric calibrations, radiometric stability of the Shuttle-borne solar backscatter ultraviolet spectrometer, ratioing radiometer for use with a solar diffuser, requirements of a solar diffuser and measurements of some candidate materials, reflectance stability analysis of Spectralon diffuse calibration panels, stray light effects on calibrations using a solar diffuser, radiometric calibration of SPOT 23 HRVs, surface and aerosol models for use in radiative transfer codes. Also addressed are: calibrated intercepts for solar radiometers used in remote sensor calibration, radiometric calibration of an airborne multispectral scanner, in-flight calibration of a helicopter-mounted Daedalus multispectral scanner, technique for improving the calibration of large-area sphere sources, remote colorimetry and its applications, spatial sampling errors for a satellite-borne scanning radiometer, calibration of EOS multispectral imaging sensors and solar irradiance variability.

  8. A novel data reduction technique for single slanted hot-wire measurements used to study incompressible compressor tip leakage flows

    NASA Astrophysics Data System (ADS)

    Berdanier, Reid A.; Key, Nicole L.

    2016-03-01

    The single slanted hot-wire technique has been used extensively as a method for measuring three velocity components in turbomachinery applications. The cross-flow orientation of probes with respect to the mean flow in rotating machinery results in detrimental prong interference effects when using multi-wire probes. As a result, the single slanted hot-wire technique is often preferred. Typical data reduction techniques solve a set of nonlinear equations determined by curve fits to calibration data. A new method is proposed which utilizes a look-up table method applied to a simulated triple-wire sensor with application to turbomachinery environments having subsonic, incompressible flows. Specific discussion regarding corrections for temperature and density changes present in a multistage compressor application is included, and additional consideration is given to the experimental error which accompanies each data reduction process. Hot-wire data collected from a three-stage research compressor with two rotor tip clearances are used to compare the look-up table technique with the traditional nonlinear equation method. The look-up table approach yields velocity errors of less than 5 % for test conditions deviating by more than 20 °C from calibration conditions (on par with the nonlinear solver method), while requiring less than 10 % of the computational processing time.
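
    As a toy illustration of the look-up-table idea (not the authors' data-reduction code), the sketch below simulates an orthogonal triple-wire sensor with a Jorgensen-type effective velocity and King's law, tabulates its voltages over a grid of candidate velocity vectors, and inverts a measurement by nearest-neighbour search; the response coefficients, grid limits, and test velocity are all assumed.

      import numpy as np

      # Toy look-up-table inversion for a simulated orthogonal triple-wire sensor.
      # Wire responses follow King's law on a Jorgensen effective velocity; all
      # coefficients (A, B, n, k2, h2) are illustrative, not calibration values.
      A, B, n = 1.2, 0.8, 0.45
      k2, h2 = 0.04, 1.2          # tangential and binormal sensitivities (squared)

      def wire_voltages(u, v, w):
          """Simulated voltages of three mutually orthogonal wires."""
          ueff2 = np.array([u*u + k2*v*v + h2*w*w,    # wire normal to x
                            v*v + k2*w*w + h2*u*u,    # wire normal to y
                            w*w + k2*u*u + h2*v*v])   # wire normal to z
          return np.sqrt(A + B * ueff2**(n / 2.0))    # King's law: E^2 = A + B*Ueff^n

      # Build the table over a grid of candidate velocity vectors.
      U = np.linspace(5.0, 60.0, 40)
      ang = np.deg2rad(np.linspace(-30.0, 30.0, 25))
      table_vel, table_E = [], []
      for s in U:
          for a in ang:                      # pitch
              for b in ang:                  # yaw
                  vel = s * np.array([np.cos(a)*np.cos(b), np.sin(b), np.sin(a)*np.cos(b)])
                  table_vel.append(vel)
                  table_E.append(wire_voltages(*vel))
      table_vel = np.array(table_vel)
      table_E = np.array(table_E)

      def invert(E_measured):
          """Nearest-neighbour look-up: return the table velocity whose simulated
          voltages best match the measured voltages."""
          i = np.argmin(np.sum((table_E - E_measured) ** 2, axis=1))
          return table_vel[i]

      true_vel = np.array([30.0, 3.0, -2.0])
      print("recovered:", invert(wire_voltages(*true_vel)), "true:", true_vel)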

  9. Tone calibration technique: A digital signaling scheme for mobile applications

    NASA Technical Reports Server (NTRS)

    Davarian, F.

    1986-01-01

    Residual carrier modulation is conventionally used in a communication link to assist the receiver with signal demodulation and detection. Although suppressed carrier modulation has a slight power advantage over the residual carrier approach in systems enjoying a high level of stability, it lacks sufficient robustness to be used in channels severely contaminated by noise, interference and propagation effects. In mobile links, in particular, the vehicle motion and multipath waveform propagation affect the received carrier in an adverse fashion. A residual carrier scheme that uses a pilot carrier to calibrate a mobile channel against multipath fading anomalies is described. The benefits of this scheme, known as tone calibration technique, are described. A brief study of the system performance in the presence of implementation anomalies is also given.
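
    The baseband Python sketch below illustrates the general pilot-tone idea behind such a scheme (it is not the tone calibration technique as specified in the paper): a pilot inserted at a known frequency offset experiences the same multiplicative fading as the data, a narrowband estimate of that fading is formed from the pilot, and the data are corrected by dividing the estimate out before detection. The signal layout, fading model, and filter length are invented for the example.

      import numpy as np

      rng = np.random.default_rng(1)
      fs, f_pilot, nsym, sps = 8000.0, 1000.0, 400, 20   # Hz, Hz, symbols, samples/symbol
      t = np.arange(nsym * sps) / fs

      # BPSK data at baseband plus a pilot tone at +f_pilot (illustrative layout).
      bits = rng.integers(0, 2, nsym) * 2 - 1
      data = np.repeat(bits, sps).astype(complex)
      pilot = np.exp(2j * np.pi * f_pilot * t)
      tx = data + pilot

      # Slow multiplicative (flat) fading plus receiver noise.
      fade = (1.0 + 0.6 * np.cos(2 * np.pi * 3.0 * t)) * np.exp(1j * 0.8 * np.sin(2 * np.pi * 2.0 * t))
      rx = fade * tx + 0.05 * (rng.standard_normal(t.size) + 1j * rng.standard_normal(t.size))

      # Estimate the fading from the pilot: shift it to DC and low-pass (moving average).
      win = np.ones(4 * sps) / (4 * sps)
      fade_hat = np.convolve(rx * np.exp(-2j * np.pi * f_pilot * t), win, mode="same")

      # Calibrate the data against the fading estimate and detect.
      corrected = rx * np.conj(fade_hat) / (np.abs(fade_hat) ** 2 + 1e-12)
      detected = np.sign(np.real(corrected.reshape(nsym, sps).mean(axis=1)))
      print("bit errors:", int(np.sum(detected != bits)))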

  10. Temperature gradient scale length measurement: A high accuracy application of electron cyclotron emission without calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Houshmandyar, S., E-mail: houshmandyar@austin.utexas.edu; Phillips, P. E.; Rowan, W. L.

    2016-11-15

    Calibration is a crucial procedure in electron temperature (Te) inference from a typical electron cyclotron emission (ECE) diagnostic on tokamaks. Although the calibration provides an important multiplying factor for an individual ECE channel, the parameter ΔTe/Te is independent of any calibration. Since an ECE channel measures the cyclotron emission for a particular flux surface, a non-perturbing change in the toroidal magnetic field changes the view of that channel. Hence the calibration-free parameter is a measure of the Te gradient. The BT-jog technique presented here employs this parameter and the raw ECE signals for direct measurement of the electron temperature gradient scale length.
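
    A worked example of the calibration-free idea, with invented numbers, is sketched below: because a fixed-frequency channel views the radius where the field matches its resonance, a small field jog dB/B shifts the viewing radius by roughly R dB/B, and the fractional change of the raw signal then yields the local temperature gradient scale length with no calibration factor.

      # Worked example (illustrative numbers, not device data): a fixed-frequency
      # ECE channel sees the radius where f ~ B, and B ~ 1/R, so a small jog dB/B
      # moves the viewing radius by dR ~ R * dB/B.  The raw, uncalibrated signal V
      # is proportional to Te at that radius, so
      #     dV/V = dTe/Te ~ -(dR / L_Te),  with  L_Te = -Te / (dTe/dR),
      # and L_Te follows from two raw measurements without any calibration factor.
      R = 2.0                    # m, viewing radius of the channel before the jog
      dB_over_B = 0.01           # +1 % toroidal-field jog
      dV_over_V = -0.05          # measured fractional change of the raw ECE signal

      dR = R * dB_over_B         # ~ +0.02 m shift of the resonance location
      L_Te = -dR / dV_over_V     # gradient scale length estimate
      print(f"dR = {dR*100:.1f} cm  ->  L_Te ~ {L_Te:.2f} m")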

  11. Videogrammetric Model Deformation Measurement Technique

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Liu, Tian-Shu

    2001-01-01

    The theory, methods, and applications of the videogrammetric model deformation (VMD) measurement technique used at NASA for wind tunnel testing are presented. The VMD technique, based on non-topographic photogrammetry, can determine static and dynamic aeroelastic deformation and attitude of a wind-tunnel model. Hardware of the system includes a video-rate CCD camera, a computer with an image acquisition frame grabber board, illumination lights, and retroreflective or painted targets on a wind tunnel model. Custom software includes routines for image acquisition, target-tracking/identification, target centroid calculation, camera calibration, and deformation calculations. Applications of the VMD technique at five large NASA wind tunnels are discussed.

  12. A New Approach to the Internal Calibration of Reverberation-Mapping Spectra

    NASA Astrophysics Data System (ADS)

    Fausnaugh, M. M.

    2017-02-01

    We present a new procedure for the internal (night-to-night) calibration of time-series spectra, with specific applications to optical AGN reverberation mapping data. The traditional calibration technique assumes that the narrow [O iii] λ5007 emission-line profile is constant in time; given a reference [O iii] λ5007 line profile, nightly spectra are aligned by fitting for a wavelength shift, a flux rescaling factor, and a change in the spectroscopic resolution. We propose the following modifications to this procedure: (1) we stipulate a constant spectral resolution for the final calibrated spectra, (2) we employ a more flexible model for changes in the spectral resolution, and (3) we use a Bayesian modeling framework to assess uncertainties in the calibration. In a test case using data for MCG+08-11-011, these modifications result in a calibration precision of ~1 millimagnitude, which is approximately a factor of five improvement over the traditional technique. At this level, other systematic issues (e.g., the nightly sensitivity functions and Fe II contamination) limit the final precision of the observed light curves. We implement this procedure as a Python package (mapspec), which we make available to the community.

  13. Quantitative methods for compensation of matrix effects and self-absorption in Laser Induced Breakdown Spectroscopy signals of solids

    NASA Astrophysics Data System (ADS)

    Takahashi, Tomoko; Thornton, Blair

    2017-12-01

    This paper reviews methods to compensate for matrix effects and self-absorption during quantitative analysis of compositions of solids measured using Laser Induced Breakdown Spectroscopy (LIBS) and their applications to in-situ analysis. Methods to reduce matrix and self-absorption effects on calibration curves are first introduced. The conditions where calibration curves are applicable to quantification of compositions of solid samples and their limitations are discussed. While calibration-free LIBS (CF-LIBS), which corrects matrix effects theoretically based on the Boltzmann distribution law and Saha equation, has been applied in a number of studies, requirements need to be satisfied for the calculation of chemical compositions to be valid. Also, peaks of all elements contained in the target need to be detected, which is a bottleneck for in-situ analysis of unknown materials. Multivariate analysis techniques are gaining momentum in LIBS analysis. Among the available techniques, principal component regression (PCR) analysis and partial least squares (PLS) regression analysis, which can extract related information to compositions from all spectral data, are widely established methods and have been applied to various fields including in-situ applications in air and for planetary explorations. Artificial neural networks (ANNs), where non-linear effects can be modelled, have also been investigated as a quantitative method and their applications are introduced. The ability to make quantitative estimates based on LIBS signals is seen as a key element for the technique to gain wider acceptance as an analytical method, especially in in-situ applications. In order to accelerate this process, it is recommended that the accuracy should be described using common figures of merit which express the overall normalised accuracy, such as the normalised root mean square errors (NRMSEs), when comparing the accuracy obtained from different setups and analytical methods.
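
    Since the recommendation hinges on a common normalised figure of merit, the short helper below shows one common definition of the NRMSE (RMSE normalised by the range of the reference values); the normalisation convention is an assumption, and the concentration values are illustrative.

      import numpy as np

      def nrmse(predicted, reference):
          """Normalised root mean square error. Here the RMSE is normalised by the
          range of the reference values; other conventions (e.g. mean-normalised)
          are also in use, so the choice should be stated when reporting accuracy."""
          predicted = np.asarray(predicted, dtype=float)
          reference = np.asarray(reference, dtype=float)
          rmse = np.sqrt(np.mean((predicted - reference) ** 2))
          return rmse / (reference.max() - reference.min())

      # Example: certified vs. LIBS-predicted concentrations (wt%, illustrative).
      certified = [1.2, 3.4, 5.0, 7.8, 10.1]
      predicted = [1.4, 3.1, 5.3, 7.5, 10.6]
      print(f"NRMSE = {nrmse(predicted, certified):.3f}")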

  14. An investigation into force-moment calibration techniques applicable to a magnetic suspension and balance system. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Eskins, Jonathan

    1988-01-01

    The problem of determining the forces and moments acting on a wind tunnel model suspended in a Magnetic Suspension and Balance System is addressed. Two calibration methods were investigated for three types of model cores, i.e., Alnico, Samarium-Cobalt, and a superconducting solenoid. Both methods involve calibrating the currents in the electromagnetic array against known forces and moments. The first is a static calibration method using calibration weights and a system of pulleys. The other method, dynamic calibration, involves oscillating the model and using its inertia to provide calibration forces and moments. Static calibration data, found to produce the most reliable results, is presented for three degrees of freedom at 0, 15, and -10 deg angle of attack. Theoretical calculations are hampered by the inability to represent iron-cored electromagnets. Dynamic calibrations, despite being quicker and easier to perform, are not as accurate as static calibrations. Data for dynamic calibrations at 0 and 15 deg is compared with the relevant static data acquired. Distortion of oscillation traces is cited as a major source of error in dynamic calibrations.

  15. Calibration of low-temperature ac susceptometers with a copper cylinder standard

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, D.-X.; Skumryev, V.

    2010-02-15

    A high-quality low-temperature ac susceptometer is calibrated by comparing the measured ac susceptibility of a copper cylinder with its eddy-current ac susceptibility accurately calculated. Different from conventional calibration techniques that compare the measured results with the known property of a standard sample at certain fixed temperature T, field amplitude Hm, and frequency f, to get a magnitude correction factor, here, the electromagnetic properties of the copper cylinder are unknown and are determined during the calibration of the ac susceptometer in the entire T, Hm, and f range. It is shown that the maximum magnitude error and the maximum phase error of the susceptometer are less than 0.7% and 0.3 deg., respectively, in the region T=5-300 K and f=111-1111 Hz at Hm=800 A/m, after a magnitude correction by a constant factor as done in a conventional calibration. However, the magnitude and phase errors can reach 2% and 4.3 deg. at 10 000 and 11 Hz, respectively. Since the errors are reproducible, a large portion of them may be further corrected after a calibration, the procedure for which is given. Conceptual discussions concerning the error sources, comparison with other calibration methods, and applications of ac susceptibility techniques are presented.

  16. Photo-reconnaissance applications of computer processing of images.

    NASA Technical Reports Server (NTRS)

    Billingsley, F. C.

    1972-01-01

    Discussion of image processing techniques for enhancement and calibration of Jet Propulsion Laboratory imaging experiment pictures returned from NASA space vehicles such as Ranger, Mariner and Surveyor. Particular attention is given to data transmission, resolution vs recognition, and color aspects of digital data processing. The effectiveness of these techniques in applications to images from a wide variety of sources is noted. It is anticipated that the use of computer processing for enhancement of imagery will increase with the improvement and cost reduction of these techniques in the future.

  17. A Universal Tare Load Prediction Algorithm for Strain-Gage Balance Calibration Data Analysis

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2011-01-01

    An algorithm is discussed that may be used to estimate tare loads of wind tunnel strain-gage balance calibration data. The algorithm was originally developed by R. Galway of IAR/NRC Canada and has been described in the literature for the iterative analysis technique. Basic ideas of Galway's algorithm, however, are universally applicable and work for both the iterative and the non-iterative analysis technique. A recent modification of Galway's algorithm is presented that improves the convergence behavior of the tare load prediction process if it is used in combination with the non-iterative analysis technique. The modified algorithm allows an analyst to use an alternate method for the calculation of intermediate non-linear tare load estimates whenever Galway's original approach does not lead to a convergence of the tare load iterations. It is also shown in detail how Galway's algorithm may be applied to the non-iterative analysis technique. Hand load data from the calibration of a six-component force balance is used to illustrate the application of the original and modified tare load prediction method. During the analysis of the data both the iterative and the non-iterative analysis technique were applied. Overall, predicted tare loads for combinations of the two tare load prediction methods and the two balance data analysis techniques showed excellent agreement as long as the tare load iterations converged. The modified algorithm, however, appears to have an advantage over the original algorithm when absolute voltage measurements of gage outputs are processed using the non-iterative analysis technique. In these situations only the modified algorithm converged because it uses an exact solution of the intermediate non-linear tare load estimate for the tare load iteration.

  18. Optical interferometry and Gaia parallaxes for a robust calibration of the Cepheid distance scale

    NASA Astrophysics Data System (ADS)

    Kervella, Pierre; Mérand, Antoine; Gallenne, Alexandre; Trahin, Boris; Borgniet, Simon; Pietrzynski, Grzegorz; Nardetto, Nicolas; Gieren, Wolfgang

    2018-04-01

    We present the modeling tool we developed to incorporate multi-technique observations of Cepheids in a single pulsation model: the Spectro-Photo-Interferometry of Pulsating Stars (SPIPS). The combination of angular diameters from optical interferometry, radial velocities and photometry with the coming Gaia DR2 parallaxes of nearby Galactic Cepheids will soon enable us to calibrate the projection factor of the classical Parallax-of-Pulsation method. This will extend its applicability to Cepheids too distant for accurate Gaia parallax measurements, and allow us to precisely calibrate the Leavitt law's zero point. As an example application, we present the SPIPS model of the long-period Cepheid RS Pup that provides a measurement of its projection factor, using the independent distance estimated from its light echoes.

  19. Raman lidar characterization using a reference lamp

    NASA Astrophysics Data System (ADS)

    Landulfo, Eduardo; da Costa, Renata F.; Rodrigues, Patricia F.; da Silva Lopes, Fábio J.

    2014-10-01

    The determination of the amount of water vapor in the atmosphere using lidar is a calibration-dependent technique. Different collocated instruments, such as radiosondes and microwave radiometers, are used for this purpose. When no collocated instruments are available, an independent lamp-mapping calibration technique can be used. Aiming to establish an independent technique for the calibration of the six-channel Nd:YAG Raman lidar system located at the Center for Lasers and Applications (CLA), São Paulo, Brazil, an optical characterization of the system was first performed using a reference tungsten lamp. This characterization is useful for identifying possible distortions in the interference filters and telescope mirror, as well as stray-light contamination. In this paper we show three lamp-mapping characterizations (01/16/2014, 01/22/2014, 04/09/2014). The first is used to demonstrate how the technique detects stray light, the second how it is sensitive to the position of the filters, and the third demonstrates a well-optimized optical system.

  20. Workcell calibration for effective offline programming

    NASA Technical Reports Server (NTRS)

    Stiles, Roger D.; Jones, Clyde S.

    1989-01-01

    In the application of graphics systems for off-line programming (OLP) of robotic systems, the inevitability of errors in the model representation of real-world situations requires that a method to map these differences be incorporated as an integral part of the overall system programming procedures. This paper discusses several proven robot-to-positioner calibration techniques necessary to reflect real-world parameters in a work-cell model. Particular attention is given to the procedures used to adjust a graphics model to an acceptable degree of accuracy for integration of OLP for the Space Shuttle Main Engine welding automation. Consideration is given to the levels of calibration, requirements, special considerations for coordinated motion, and calibration procedures.

  1. Spectral Transfer Learning Using Information Geometry for a User-Independent Brain-Computer Interface

    PubMed Central

    Waytowich, Nicholas R.; Lawhern, Vernon J.; Bohannon, Addison W.; Ball, Kenneth R.; Lance, Brent J.

    2016-01-01

    Recent advances in signal processing and machine learning techniques have enabled the application of Brain-Computer Interface (BCI) technologies to fields such as medicine, industry, and recreation; however, BCIs still suffer from the requirement of frequent calibration sessions due to the intra- and inter-individual variability of brain-signals, which makes calibration suppression through transfer learning an area of increasing interest for the development of practical BCI systems. In this paper, we present an unsupervised transfer method (spectral transfer using information geometry, STIG), which ranks and combines unlabeled predictions from an ensemble of information geometry classifiers built on data from individual training subjects. The STIG method is validated in both off-line and real-time feedback analysis during a rapid serial visual presentation task (RSVP). For detection of single-trial, event-related potentials (ERPs), the proposed method can significantly outperform existing calibration-free techniques as well as outperform traditional within-subject calibration techniques when limited data is available. This method demonstrates that unsupervised transfer learning for single-trial detection in ERP-based BCIs can be achieved without the requirement of costly training data, representing a step-forward in the overall goal of achieving a practical user-independent BCI system. PMID:27713685

  2. Spectral Transfer Learning Using Information Geometry for a User-Independent Brain-Computer Interface.

    PubMed

    Waytowich, Nicholas R; Lawhern, Vernon J; Bohannon, Addison W; Ball, Kenneth R; Lance, Brent J

    2016-01-01

    Recent advances in signal processing and machine learning techniques have enabled the application of Brain-Computer Interface (BCI) technologies to fields such as medicine, industry, and recreation; however, BCIs still suffer from the requirement of frequent calibration sessions due to the intra- and inter-individual variability of brain-signals, which makes calibration suppression through transfer learning an area of increasing interest for the development of practical BCI systems. In this paper, we present an unsupervised transfer method (spectral transfer using information geometry, STIG), which ranks and combines unlabeled predictions from an ensemble of information geometry classifiers built on data from individual training subjects. The STIG method is validated in both off-line and real-time feedback analysis during a rapid serial visual presentation task (RSVP). For detection of single-trial, event-related potentials (ERPs), the proposed method can significantly outperform existing calibration-free techniques as well as outperform traditional within-subject calibration techniques when limited data is available. This method demonstrates that unsupervised transfer learning for single-trial detection in ERP-based BCIs can be achieved without the requirement of costly training data, representing a step-forward in the overall goal of achieving a practical user-independent BCI system.

  3. New calibration technique for water-vapor Raman lidar combined with the GNSS precipitable water vapor and the Meso-Scale Model

    NASA Astrophysics Data System (ADS)

    Kakihara, H.; Yabuki, M.; Kitafuji, F.; Tsuda, T.; Tsukamoto, M.; Hasegawa, T.; Hashiguchi, H.; Yamamoto, M.

    2017-12-01

    Atmospheric water vapor plays an important role in atmospheric chemistry and meteorology, with implications for climate change and severe weather. The Raman lidar technique is useful for observing water vapor with high spatiotemporal resolution. However, the calibration factor must be determined before observations. Because the calibration factor is generally evaluated by comparing Raman-signal results with those of independent measurement techniques (e.g., radiosonde), it is difficult to apply this technique to lidar sites where radiosonde observations cannot be carried out. In this study, we propose a new calibration technique for water-vapor Raman lidar using global navigation satellite system (GNSS)-derived precipitable water vapor (PWV) and the Japan Meteorological Agency meso-scale model (MSM). The analysis was accomplished by fitting the GNSS-PWV to integrated water-vapor profiles combining the MSM and the results of the lidar observations. The maximum height of the lidar signal applicable to this method was determined to be within 2.0 km by considering the signal noise mainly caused by low clouds. The MSM data were employed at higher altitudes where the lidar data cannot be applied. This method can be applied even when the usable lidar signal is limited to a low height range by weather conditions and lidar specifications. For example, Raman lidar using a laser operating in the ultraviolet C (UV-C) region has the advantage of daytime observation since there is no solar background radiation in the system. The observation range is, however, limited to altitudes lower than 1-3 km because of strong ozone absorption in the UV-C region. The new calibration technique will allow the utilization of various types of Raman lidar systems and provide many opportunities for calibration. We demonstrated the potential of this method by using the UV-C Raman lidar and GNSS observation data at the Shigaraki MU radar observatory (34°51'N, 136°06'E; 385 m a.s.l.) of the Research Institute for Sustainable Humanosphere (RISH), Kyoto University, Japan, in June 2016. Differences in the calibration factor between the proposed method and the conventional method were 0.7% under optimal conditions such as clear skies and low ozone concentrations.
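
    A schematic of the fitting idea, with invented profiles, is sketched below: the uncalibrated lidar water-vapor mixing ratio is scaled by a factor C so that its column contribution, plus the MSM water-vapor column above the usable lidar range, matches the GNSS-derived PWV. The treatment of units and the profile shapes are simplifying assumptions, not the authors' processing chain.

      import numpy as np

      # Schematic of the calibration-factor fit (illustrative profiles, not data).
      z = np.arange(0.0, 2000.0, 30.0)                     # usable lidar range [m]
      w_uncal = 0.8 * np.exp(-z / 2500.0)                  # S_H2O/S_N2 ratio (arb. units)
      rho_air = 1.2 * np.exp(-z / 8500.0)                  # air density [kg m^-3]
      rho_water = 1000.0                                   # kg m^-3

      pwv_gnss = 32.0        # mm, GNSS-derived precipitable water vapour
      pwv_msm_above = 8.0    # mm, MSM water-vapour column above the lidar range

      # Column water that C * w_uncal would contribute (w treated as a mass mixing
      # ratio in kg/kg once multiplied by the calibration factor C):
      col_uncal_mm = np.trapz(w_uncal * rho_air, z) / rho_water * 1000.0

      C = (pwv_gnss - pwv_msm_above) / col_uncal_mm
      w_calibrated = C * w_uncal                           # calibrated mixing-ratio profile
      print(f"calibration factor C = {C:.4f}")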

  4. Experimental Investigations of Non-Stationary Properties In Radiometer Receivers Using Measurements of Multiple Calibration References

    NASA Technical Reports Server (NTRS)

    Racette, Paul; Lang, Roger; Zhang, Zhao-Nan; Zacharias, David; Krebs, Carolyn A. (Technical Monitor)

    2002-01-01

    Radiometers must be periodically calibrated because the receiver response fluctuates. Many techniques exist to correct for the time varying response of a radiometer receiver. An analytical technique has been developed that uses generalized least squares regression (LSR) to predict the performance of a wide variety of calibration algorithms. The total measurement uncertainty including the uncertainty of the calibration can be computed using LSR. The uncertainties of the calibration samples used in the regression are based upon treating the receiver fluctuations as non-stationary processes. Signals originating from the different sources of emission are treated as simultaneously existing random processes. Thus, the radiometer output is a series of samples obtained from these random processes. The samples are treated as random variables but because the underlying processes are non-stationary the statistics of the samples are treated as non-stationary. The statistics of the calibration samples depend upon the time for which the samples are to be applied. The statistics of the random variables are equated to the mean statistics of the non-stationary processes over the interval defined by the time of calibration sample and when it is applied. This analysis opens the opportunity for experimental investigation into the underlying properties of receiver non stationarity through the use of multiple calibration references. In this presentation we will discuss the application of LSR to the analysis of various calibration algorithms, requirements for experimental verification of the theory, and preliminary results from analyzing experiment measurements.

  5. In situ calibration of the foil detector for an infrared imaging video bolometer using a carbon evaporation technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mukai, K., E-mail: mukai.kiyofumi@LHD.nifs.ac.jp; Peterson, B. J.; SOKENDAI

    The InfraRed imaging Video Bolometer (IRVB) is a useful diagnostic for the multi-dimensional measurement of plasma radiation profiles. For the application of IRVB measurement to the neutron environment in fusion plasma devices such as the Large Helical Device (LHD), in situ calibration of the thermal characteristics of the foil detector is required. Laser irradiation tests of sample foils show that the reproducibility and uniformity of the carbon coating for the foil were improved using a vacuum evaporation method. Also, the principle of the in situ calibration system was justified.

  6. Rapid calibrated high-resolution hyperspectral imaging using tunable laser source

    NASA Astrophysics Data System (ADS)

    Nguyen, Lam K.; Margalith, Eli

    2009-05-01

    We present a novel hyperspectral imaging technique based on tunable laser technology. By replacing the broadband source and tunable filters of a typical NIR imaging instrument, several advantages are realized, including: high spectral resolution, highly variable field-of-views, fast scan-rates, high signal-to-noise ratio, and the ability to use optical fiber for efficient and flexible sample illumination. With this technique, high-resolution, calibrated hyperspectral images over the NIR range can be acquired in seconds. The performance of system features will be demonstrated on two example applications: detecting melamine contamination in wheat gluten and separating bovine protein from wheat protein in cattle feed.

  7. Computational Virtual Reality (VR) as a human-computer interface in the operation of telerobotic systems

    NASA Technical Reports Server (NTRS)

    Bejczy, Antal K.

    1995-01-01

    This presentation focuses on the application of computer graphics or 'virtual reality' (VR) techniques as a human-computer interface tool in the operation of telerobotic systems. VR techniques offer very valuable task realization aids for planning, previewing and predicting robotic actions, operator training, and for visual perception of non-visible events like contact forces in robotic tasks. The utility of computer graphics in telerobotic operation can be significantly enhanced by high-fidelity calibration of virtual reality images to actual TV camera images. This calibration will even permit the creation of artificial (synthetic) views of task scenes for which no TV camera views are available.

  8. Non-invasive technique to measure biogeochemical parameters (pH and O2) in a microenvironment: Design and applications

    NASA Astrophysics Data System (ADS)

    Li, Biting; Seliman, Ayman; Pales, Ashley; Liang, Weizhen; Sams, Allison; Darnault, Christophe; Devol, Timothy

    2017-04-01

    The primary objectives of this research are to calibrate pH and O2 sensor foils and then to test them in applications. Potentially, this project can be utilized to monitor the fate and transport of radionuclides in porous media. Information on physical and chemical parameters (e.g., pH and O2) is crucial when determining contaminants' behavior and transport in the environment. As a non-invasive method, an optical imaging technique using a DSLR camera can capture data from the foil when it fluoresces and gives high temporal and spatial resolution during the experimental period. The calibration procedures were carried out in a row of cuvettes. The preliminary experiments could measure pH values in the range from 4.5 to 7.5 and O2 concentrations from 0 mg/L to 20.74 mg/L. Applications of the sensor foils have involved nano zero-valent and acid rain experiments in order to obtain a gradient of parameter changes.

  9. Bayesian Treed Calibration: An Application to Carbon Capture With AX Sorbent

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konomi, Bledar A.; Karagiannis, Georgios; Lai, Kevin

    2017-01-02

    In cases where field or experimental measurements are not available, computer models can model real physical or engineering systems to reproduce their outcomes. They are usually calibrated in light of experimental data to create a better representation of the real system. Statistical methods, based on Gaussian processes, for calibration and prediction have been especially important when the computer models are expensive and experimental data limited. In this paper, we develop the Bayesian treed calibration (BTC) as an extension of standard Gaussian process calibration methods to deal with non-stationary computer models and/or their discrepancy from the field (or experimental) data. Our proposed method partitions both the calibration and observable input space, based on a binary tree partitioning, into sub-regions where existing model calibration methods can be applied to connect a computer model with the real system. The estimation of the parameters in the proposed model is carried out using Markov chain Monte Carlo (MCMC) computational techniques. Different strategies have been applied to improve mixing. We illustrate our method in two artificial examples and a real application that concerns the capture of carbon dioxide with AX amine-based sorbents. The source code and the examples analyzed in this paper are available as part of the supplementary materials.

  10. A Novel Miniature Wide-band Radiometer for Space Applications

    NASA Astrophysics Data System (ADS)

    Sykulska-Lawrence, H. M.

    2016-12-01

    The design, development and testing of a novel miniaturised infrared radiometer are described. The instrument opens up new possibilities in planetary science for deployment on smaller platforms - such as unmanned aerial vehicles and microprobes - to enable study of a planet's radiation balance, as well as terrestrial volcano plumes and trace gases in planetary atmospheres, using low-cost long-term observations. Thus a key enabling development is that of miniaturised, low-power and well-calibrated instrumentation. The talk reports advances in miniature technology to perform high-accuracy visible/IR remote sensing measurements. The infrared radiometer is akin to those widely used for remote sensing for earth and space applications, which are currently either large instruments on orbiting platforms or medium-sized payloads on balloons. We use MEMS microfabrication techniques to shrink a conventional design, while combining the calibration benefits of large (>1 kg) radiometers with the flexibility and portability of a <10 g device. The instrument measures broadband (0.2 to 100 µm) upward and downward radiation fluxes, showing improvements in calibration stability and accuracy, with built-in calibration capability incorporating traceability to temperature standards such as ITS-90. The miniature instrument described here was derived from a concept developed for a European Space Agency study, Dalomis (Proc. of 'i-SAIRAS 2005', Munich, 2005), which involved dropping multiple probes into the atmosphere of Venus from a balloon to sample numerous parts of the complex weather systems on the planet. Data from such an in-situ instrument would complement information from a satellite remote sensing instrument or balloon radiosonde. Moreover, the addition of an internal calibration standard facilitates comparisons between datasets. One of the main challenges for a reduced-size device is calibration. We use an in-situ method whereby a blackbody source is integrated within the device and a micromirror switches the input to the detector between the measured signal and the calibration target. Achieving two well-calibrated radiometer channels within a small (<10 g) payload is made possible by using modern micromachining techniques.

  11. Laser-induced incandescence calibration via gravimetric sampling

    NASA Technical Reports Server (NTRS)

    Choi, M. Y.; Vander Wal, R. L.; Zhou, Z.

    1996-01-01

    Absolute calibration of laser-induced incandescence (LII) is demonstrated via comparison of LII signal intensities with gravimetrically determined soot volume fractions. This calibration technique does not rely upon calculated or measured optical characteristics of soot. The variation of the LII signal with gravimetrically measured soot volume fractions ranging from 0.078 to 1.1 ppm established the linearity of the calibration. With the high spatial and temporal resolution capabilities of laser-induced incandescence (LII), the spatial and temporal fluctuations of the soot field within a gravimetric chimney were characterized. Radial uniformity of the soot volume fraction, fv, was demonstrated with sufficient averaging of the single laser-shot LII images of the soot field, thus confirming the validity of the calibration method for imaging applications. As an illustration, instantaneous soot volume fractions within a Re = 5000 ethylene/air diffusion flame measured via planar LII were established quantitatively with this calibration.
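
    A minimal sketch of how such a linear relation becomes a calibration constant is given below: a least-squares slope through the origin maps LII signal to soot volume fraction, which can then be applied pixel-by-pixel to a planar LII image. The signal values are invented; only the volume-fraction range follows the abstract.

      import numpy as np

      # Gravimetric soot volume fractions (ppm) spanning roughly the reported range,
      # with made-up mean LII signals (arbitrary units) for illustration.
      fv_grav = np.array([0.078, 0.25, 0.50, 0.80, 1.10])       # ppm
      S_lii = np.array([210.0, 680.0, 1340.0, 2180.0, 2950.0])  # a.u.

      # Least-squares slope through the origin: fv = C * S.
      C = np.sum(S_lii * fv_grav) / np.sum(S_lii ** 2)
      print(f"calibration constant C = {C:.3e} ppm per count")

      # Applying it to a planar LII image converts signal counts to fv pixel-by-pixel.
      lii_image = np.random.default_rng(2).uniform(0.0, 3000.0, size=(4, 4))
      fv_image = C * lii_image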

  12. Definition and sensitivity of the conceptual MORDOR rainfall-runoff model parameters using different multi-criteria calibration strategies

    NASA Astrophysics Data System (ADS)

    Garavaglia, F.; Seyve, E.; Gottardi, F.; Le Lay, M.; Gailhard, J.; Garçon, R.

    2014-12-01

    MORDOR is a conceptual hydrological model extensively used in Électricité de France (EDF, French electric utility company) operational applications: (i) hydrological forecasting, (ii) flood risk assessment, (iii) water balance and (iv) climate change studies. MORDOR is a lumped, reservoir, elevation-based model with hourly or daily areal rainfall and air temperature as the driving input data. The principal hydrological processes represented are evapotranspiration, direct and indirect runoff, ground water, snow accumulation and melt, and routing. The model has been intensively used at EDF for more than 20 years, in particular for modeling French mountainous watersheds. In the matter of parameter calibration we propose and test alternative multi-criteria techniques based on two specific approaches: automatic calibration using single-objective functions and a priori parameter calibration founded on hydrological watershed features. The automatic calibration approach uses single-objective functions, based on Kling-Gupta efficiency, to quantify the agreement between the simulated and observed runoff, focusing on four different runoff samples: (i) time-series sample, (ii) annual hydrological regime, (iii) monthly cumulative distribution functions and (iv) recession sequences. The primary purpose of this study is to analyze the definition and sensitivity of MORDOR parameters by testing different calibration techniques in order to: (i) simplify the model structure, (ii) increase the calibration-validation performance of the model and (iii) reduce the equifinality problem of the calibration process. We propose an alternative calibration strategy that reaches these goals. The analysis is illustrated by calibrating the MORDOR model to daily data for 50 watersheds located in French mountainous regions.

  13. Study on rapid valid acidity evaluation of apple by fiber optic diffuse reflectance technique

    NASA Astrophysics Data System (ADS)

    Liu, Yande; Ying, Yibin; Fu, Xiaping; Jiang, Xuesong

    2004-03-01

    Some issues related to the nondestructive evaluation of valid acidity in intact apples by means of the Fourier transform near infrared (FTNIR) (800-2631 nm) method were addressed. A relationship was established between the diffuse reflectance spectra recorded with a bifurcated optic fiber and the valid acidity. The data were analyzed by multivariate calibration methods such as partial least squares (PLS) analysis and the principal component regression (PCR) technique. A total of 120 Fuji apples were tested and 80 of them were used to form a calibration data set. The influence of data preprocessing and different spectral treatments was also investigated. Models based on smoothed spectra were slightly worse than models based on derivative spectra, and the best result was obtained when the segment length was 5 and the gap size was 10. Depending on the data preprocessing and multivariate calibration technique, the best prediction model had a correlation coefficient of 0.871, a low RMSEP (0.0677), a low RMSEC (0.056) and a small difference between RMSEP and RMSEC by PLS analysis. The results point out the feasibility of FTNIR spectral analysis for predicting fruit valid acidity non-destructively. The ratio of the data standard deviation to the root mean square error of prediction (SDR) should preferably be at least 3 for calibration models; however, the results here cannot meet the demands of actual application. Therefore, further study is required for better calibration and prediction.

  14. Particle astronomy with a superconducting magnet.

    NASA Technical Reports Server (NTRS)

    Buffington, A.

    1972-01-01

    The magnetic spectrometer measures deflections of charged particles moving in a magnetic field and provides a direct means of determining the rigidity of charged primary cosmic rays up to about 100 GV/c rigidity. The underlying concepts of the method are reviewed, and factors delineating the applicable momentum range and accuracy are described along with calibration techniques. Previous experiments employing this technique are summarized, and prospects for future applications are evaluated with emphasis on separate measurement of electron and positron spectra and on isotopic separation.

  15. Development of an Expert Judgement Elicitation and Calibration Methodology for Risk Analysis in Conceptual Vehicle Design

    NASA Technical Reports Server (NTRS)

    Unal, Resit; Keating, Charles; Conway, Bruce; Chytka, Trina

    2004-01-01

    A comprehensive expert-judgment elicitation methodology to quantify input parameter uncertainty and analysis tool uncertainty in a conceptual launch vehicle design analysis has been developed. The ten-phase methodology seeks to obtain expert judgment opinion for quantifying uncertainties as a probability distribution so that multidisciplinary risk analysis studies can be performed. The calibration and aggregation techniques presented as part of the methodology are aimed at improving individual expert estimates, and provide an approach to aggregate multiple expert judgments into a single probability distribution. The purpose of this report is to document the methodology development and its validation through application to a reference aerospace vehicle. A detailed summary of the application exercise, including calibration and aggregation results is presented. A discussion of possible future steps in this research area is given.

  16. Calibration of Complex Subsurface Reaction Models Using a Surrogate-Model Approach

    EPA Science Inventory

    Application of model assessment techniques to complex subsurface reaction models involves numerous difficulties, including non-trivial model selection, parameter non-uniqueness, and excessive computational burden. To overcome these difficulties, this study introduces SAMM (Simult...

  17. Calibration of neural networks using genetic algorithms, with application to optimal path planning

    NASA Technical Reports Server (NTRS)

    Smith, Terence R.; Pitney, Gilbert A.; Greenwood, Daniel

    1987-01-01

    Genetic algorithms (GA) are used to search the synaptic weight space of artificial neural systems (ANS) for weight vectors that optimize some network performance function. GAs do not suffer from some of the architectural constraints involved with other techniques and it is straightforward to incorporate terms into the performance function concerning the metastructure of the ANS. Hence GAs offer a remarkably general approach to calibrating ANS. GAs are applied to the problem of calibrating an ANS that finds optimal paths over a given surface. This problem involves training an ANS on a relatively small set of paths and then examining whether the calibrated ANS is able to find good paths between arbitrary start and end points on the surface.
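
    To make the weight-vector-as-genome idea concrete, the toy Python sketch below uses a simple genetic algorithm (truncation selection, uniform crossover, Gaussian mutation) to calibrate the weights of a tiny 2-2-1 network on the XOR problem; it does not reproduce the paper's path-planning setup, and all GA settings are arbitrary choices.

      import numpy as np

      rng = np.random.default_rng(3)

      # Toy problem: calibrate a 2-2-1 network's weights (the "genome") to XOR.
      X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
      y = np.array([0.0, 1.0, 1.0, 0.0])
      N_W = 2 * 2 + 2 + 2 * 1 + 1          # weights + biases = 9 genes

      def forward(w, X):
          W1 = w[0:4].reshape(2, 2); b1 = w[4:6]
          W2 = w[6:8].reshape(2, 1); b2 = w[8]
          h = np.tanh(X @ W1 + b1)
          return 1.0 / (1.0 + np.exp(-(h @ W2).ravel() - b2))

      def fitness(w):
          # Negative mean squared error: higher is better.
          return -np.mean((forward(w, X) - y) ** 2)

      pop = rng.normal(0.0, 1.0, size=(60, N_W))
      for gen in range(300):
          scores = np.array([fitness(w) for w in pop])
          parents = pop[np.argsort(scores)[-20:]]               # truncation selection
          children = []
          for _ in range(len(pop) - len(parents)):
              pa, pb = parents[rng.integers(20)], parents[rng.integers(20)]
              mask = rng.random(N_W) < 0.5                       # uniform crossover
              children.append(np.where(mask, pa, pb) + rng.normal(0.0, 0.2, N_W))  # mutation
          pop = np.vstack([parents, np.array(children)])

      best = pop[np.argmax([fitness(w) for w in pop])]
      print("network outputs on XOR:", np.round(forward(best, X), 2))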

  18. Application of Ensemble Detection and Analysis to Modeling Uncertainty in Non Stationary Process

    NASA Technical Reports Server (NTRS)

    Racette, Paul

    2010-01-01

    Characterization of non stationary and nonlinear processes is a challenge in many engineering and scientific disciplines. Climate change modeling and projection, retrieving information from Doppler measurements of hydrometeors, and modeling calibration architectures and algorithms in microwave radiometers are example applications that can benefit from improvements in the modeling and analysis of non stationary processes. Analyses of measured signals have traditionally been limited to a single measurement series. Ensemble Detection is a technique whereby mixing calibrated noise produces an ensemble measurement set. The collection of ensemble data sets enables new methods for analyzing random signals and offers powerful new approaches to studying and analyzing non stationary processes. Derived information contained in the dynamic stochastic moments of a process will enable many novel applications.

  19. Application of Least-Squares Adjustment Technique to Geometric Camera Calibration and Photogrammetric Flow Visualization

    NASA Technical Reports Server (NTRS)

    Chen, Fang-Jenq

    1997-01-01

    Flow visualization produces data in the form of two-dimensional images. If the optical components of a camera system are perfect, the transformation equations between the two-dimensional image and the three-dimensional object space are linear and easy to solve. However, real camera lenses introduce nonlinear distortions that affect the accuracy of transformation unless proper corrections are applied. An iterative least-squares adjustment algorithm is developed to solve the nonlinear transformation equations incorporated with distortion corrections. Experimental applications demonstrate that a relative precision on the order of 40,000 is achievable without tedious laboratory calibrations of the camera.

  20. Calibration approach for fluorescence lifetime determination for applications using time-gated detection and finite pulse width excitation.

    PubMed

    Keller, Scott B; Dudley, Jonathan A; Binzel, Katherine; Jasensky, Joshua; de Pedro, Hector Michael; Frey, Eric W; Urayama, Paul

    2008-10-15

    Time-gated techniques are useful for the rapid sampling of excited-state (fluorescence) emission decays in the time domain. Gated detectors coupled with bright, economical, nanosecond-pulsed light sources like flashlamps and nitrogen lasers are an attractive combination for bioanalytical and biomedical applications. Here we present a calibration approach for lifetime determination that is noniterative and that does not assume a negligible instrument response function (i.e., a negligible excitation pulse width) as do most current rapid lifetime determination approaches. Analogous to a transducer-based sensor, signals from fluorophores of known lifetime (0.5-12 ns) serve as calibration references. A fast avalanche photodiode and a GHz-bandwidth digital oscilloscope are used to detect transient emission from reference samples excited using a nitrogen laser. We find that the normalized time-integrated emission signal is proportional to the lifetime, which can be determined with good reproducibility (typically <100 ps) even for data with poor signal-to-noise ratios (approximately 20). Results are in good agreement with simulations. Additionally, a new time-gating scheme for fluorescence lifetime imaging applications is proposed. In conclusion, a calibration-based approach is a valuable analysis tool for the rapid determination of lifetime in applications using time-gated detection and finite pulse width excitation.
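
    A tiny sketch of the calibration-curve idea follows: reference fluorophores of known lifetime define a linear map from the normalised time-integrated signal to lifetime, which is then inverted for an unknown sample. The signal values are synthetic and the linear coefficients are invented; only the 0.5-12 ns reference range follows the abstract.

      import numpy as np

      # Reference lifetimes (ns) spanning roughly the reported 0.5-12 ns range and
      # synthetic normalised time-integrated signals assumed linear in lifetime.
      tau_ref = np.array([0.5, 2.0, 4.0, 8.0, 12.0])            # ns
      s_norm_ref = 0.031 * tau_ref + 0.012                      # pretend measurements
      s_norm_ref += np.random.default_rng(4).normal(0.0, 0.002, tau_ref.size)

      # Calibration: linear fit  s_norm = a * tau + b,  inverted for unknowns.
      a, b = np.polyfit(tau_ref, s_norm_ref, 1)

      def lifetime_from_signal(s_norm):
          return (s_norm - b) / a

      print(f"unknown sample with s_norm = 0.150 -> tau ~ {lifetime_from_signal(0.150):.2f} ns")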

  1. Concentration variance decay during magma mixing: a volcanic chronometer.

    PubMed

    Perugini, Diego; De Campos, Cristina P; Petrelli, Maurizio; Dingwell, Donald B

    2015-09-21

    The mixing of magmas is a common phenomenon in explosive eruptions. Concentration variance is a useful metric of this process and its decay (CVD) with time is an inevitable consequence during the progress of magma mixing. In order to calibrate this petrological/volcanological clock we have performed a time-series of high temperature experiments of magma mixing. The results of these experiments demonstrate that compositional variance decays exponentially with time. With this calibration the CVD rate (CVD-R) becomes a new geochronometer for the time lapse from initiation of mixing to eruption. The resultant novel technique is fully independent of the typically unknown advective history of mixing - a notorious uncertainty which plagues the application of many diffusional analyses of magmatic history. Using the calibrated CVD-R technique we have obtained mingling-to-eruption times for three explosive volcanic eruptions from Campi Flegrei (Italy) in the range of tens of minutes. These in turn imply ascent velocities of 5-8 meters per second. We anticipate the routine application of the CVD-R geochronometer to the eruptive products of active volcanoes in future in order to constrain typical "mixing to eruption" time lapses such that monitoring activities can be targeted at relevant timescales and signals during volcanic unrest.
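
    A minimal numerical sketch of the chronometer, with invented values, is given below: time-series mixing experiments fix the exponential decay rate of the concentration variance, and inverting the decay law converts a variance ratio measured in the erupted products into a mixing-to-eruption time.

      import numpy as np

      # Calibration experiments: normalised concentration variance vs. mixing time
      # (synthetic values following an exponential decay, for illustration only).
      t_exp = np.array([0.0, 5.0, 10.0, 20.0, 40.0])            # minutes
      var_norm = np.array([1.00, 0.62, 0.37, 0.14, 0.02])       # sigma^2(t) / sigma^2(0)

      # Fit the decay rate k in  sigma^2(t) = sigma^2(0) * exp(-k t)  (log-linear fit).
      k = -np.polyfit(t_exp, np.log(var_norm), 1)[0]

      # Chronometer: invert the decay law for a variance ratio measured in the
      # natural (erupted) products.
      var_ratio_sample = 0.08
      t_mixing_to_eruption = np.log(1.0 / var_ratio_sample) / k
      print(f"k = {k:.3f} 1/min,  mixing-to-eruption time ~ {t_mixing_to_eruption:.1f} min")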

  2. Evaluation of ISCCP multisatellite radiance calibration for geostationary imager visible channels using the moon

    USGS Publications Warehouse

    Stone, Thomas C.; William B. Rossow,; Joseph Ferrier,; Laura M. Hinkelman,

    2013-01-01

    Since 1983, the International Satellite Cloud Climatology Project (ISCCP) has collected Earth radiance data from the succession of geostationary and polar-orbiting meteorological satellites operated by weather agencies worldwide. Meeting the ISCCP goals of global coverage and decade-length time scales requires consistent and stable calibration of the participating satellites. For the geostationary imager visible channels, ISCCP calibration provides regular periodic updates from regressions of radiances measured from coincident and collocated observations taken by Advanced Very High Resolution Radiometer instruments. As an independent check of the temporal stability and intersatellite consistency of ISCCP calibrations, we have applied lunar calibration techniques to geostationary imager visible channels using images of the Moon found in the ISCCP data archive. Lunar calibration enables using the reflected light from the Moon as a stable and consistent radiometric reference. Although the technique has general applicability, limitations of the archived image data have restricted the current study to Geostationary Operational Environmental Satellite and Geostationary Meteorological Satellite series. The results of this lunar analysis confirm that ISCCP calibration exhibits negligible temporal trends in sensor response but have revealed apparent relative biases between the satellites at various levels. However, these biases amount to differences of only a few percent in measured absolute reflectances. Since the lunar analysis examines only the lower end of the radiance range, the results suggest that the ISCCP calibration regression approach does not precisely determine the intercept or the zero-radiance response level. We discuss the impact of these findings on the development of consistent calibration for multisatellite global data sets.

  3. Abundances of isotopologues and calibration of CO2 greenhouse gas measurements

    NASA Astrophysics Data System (ADS)

    Tans, Pieter P.; Crotwell, Andrew M.; Thoning, Kirk W.

    2017-07-01

    We have developed a method to calculate the fractional distribution of CO2 across all of its component isotopologues based on measured δ13C and δ18O values. The fractional distribution can be used with known total CO2 to calculate the amount of substance fraction (mole fraction) of each component isotopologue in air individually. The technique is applicable to any molecule where isotopologue-specific values are desired. We used it with a new CO2 calibration system to account for isotopic differences among the primary CO2 standards that define the WMO X2007 CO2-in-air calibration scale and between the primary standards and standards in subsequent levels of the calibration hierarchy. The new calibration system uses multiple laser spectroscopic techniques to measure mole fractions of the three major CO2 isotopologues (16O12C16O, 16O13C16O, and 16O12C18O) individually. The three measured values are then combined into total CO2 (accounting for the rare unmeasured isotopologues), δ13C, and δ18O values. The new calibration system significantly improves our ability to transfer the WMO CO2 calibration scale with low uncertainty through our role as the World Meteorological Organization Global Atmosphere Watch Central Calibration Laboratory for CO2. Our current estimates for reproducibility of the new calibration system are ±0.01 µmol mol-1 CO2, ±0.2 ‰ δ13C, and ±0.2 ‰ δ18O, all at 68 % confidence interval (CI).
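
    The distribution calculation described above can be sketched as follows; the reference isotope ratios and the 17O mass-dependent exponent are nominal literature values used here as assumptions, not the specific constants adopted for the WMO scale realization.

    ```python
    # Hedged sketch: convert measured delta values into approximate isotopologue
    # fractions of CO2. Reference ratios and the 17O relation are assumptions.
    R13_REF = 0.011180       # 13C/12C of the delta13C reference (assumed nominal value)
    R18_REF = 0.0020052      # 18O/16O reference ratio (assumed nominal value)
    R17_REF = 0.0003799      # 17O/16O reference ratio (assumed nominal value)
    LAMBDA = 0.528           # mass-dependent exponent linking 17O to 18O (assumed)

    def isotopologue_fractions(delta13C, delta18O):
        """Approximate fractions of the main CO2 isotopologues from delta values (per mil)."""
        r13 = R13_REF * (1.0 + delta13C / 1000.0)
        r18 = R18_REF * (1.0 + delta18O / 1000.0)
        r17 = R17_REF * (1.0 + delta18O / 1000.0) ** LAMBDA

        c12 = 1.0 / (1.0 + r13)                  # atom fraction of 12C
        o16 = 1.0 / (1.0 + r17 + r18)            # atom fraction of 16O
        return {
            "16O12C16O": c12 * o16 * o16,
            "16O13C16O": (r13 * c12) * o16 * o16,
            "16O12C18O": 2.0 * c12 * o16 * (r18 * o16),   # factor 2: either O may be 18O
        }

    fracs = isotopologue_fractions(delta13C=-8.5, delta18O=0.0)
    x_total = 400e-6                              # total CO2 amount fraction (illustrative)
    x_636 = x_total * fracs["16O13C16O"]          # amount fraction of 16O13C16O
    ```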

  4. Accounting For Nonlinearity In A Microwave Radiometer

    NASA Technical Reports Server (NTRS)

    Stelzried, Charles T.

    1991-01-01

    Simple mathematical technique found to account adequately for nonlinear component of response of microwave radiometer. Five prescribed temperatures measured to obtain quadratic calibration curve. Temperature assumed to vary quadratically with reading. Concept not limited to radiometric application; applicable to other measuring systems in which relationships between quantities to be determined and readings of instruments differ slightly from linearity.
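
    The following is a minimal sketch of the quadratic-calibration idea; the readings and temperatures are made-up placeholders, not instrument data.

    ```python
    # Fit temperature as a quadratic function of the instrument reading using a
    # few prescribed calibration temperatures (illustrative values only).
    import numpy as np

    readings = np.array([0.12, 0.34, 0.55, 0.78, 0.97])      # instrument readings (a.u.)
    temps = np.array([80.0, 150.0, 220.0, 295.0, 370.0])      # prescribed temperatures (K)

    coeffs = np.polyfit(readings, temps, deg=2)                # T ~ a*r**2 + b*r + c
    calibrate = np.poly1d(coeffs)

    print(calibrate(0.50))                                     # temperature for a new reading
    ```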

  5. Nonlinear propagation model for ultrasound hydrophones calibration in the frequency range up to 100 MHz.

    PubMed

    Radulescu, E G; Wójcik, J; Lewin, P A; Nowicki, A

    2003-06-01

    To facilitate the implementation and verification of the new ultrasound hydrophone calibration techniques described in the companion paper (in this issue), a nonlinear propagation model was developed. A brief outline of the theoretical considerations is presented and the model's advantages and disadvantages are discussed. The results of simulations yielding spatial and temporal acoustic pressure amplitude are also presented and compared with those obtained using KZK and Field II models. Excellent agreement between all models is evidenced. The applicability of the model in discrete wideband calibration of hydrophones is documented in the companion paper in this volume.

  6. In situ calibration of an infrared imaging video bolometer in the Large Helical Device

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mukai, K., E-mail: mukai.kiyofumi@LHD.nifs.ac.jp; Peterson, B. J.; Pandya, S. N.

    The InfraRed imaging Video Bolometer (IRVB) is a powerful diagnostic to measure multi-dimensional radiation profiles in plasma fusion devices. In the Large Helical Device (LHD), four IRVBs have been installed with different fields of view to reconstruct three-dimensional profiles using a tomography technique. For the application of the measurement to plasma experiments using deuterium gas in LHD in the near future, the long-term effect of the neutron irradiation on the heat characteristics of an IRVB foil should be taken into account by regular in situ calibration measurements. Therefore, in this study, an in situ calibration system was designed.

  7. Technique for Radiometer and Antenna Array Calibration with a Radiated Noise Diode

    NASA Technical Reports Server (NTRS)

    Srinivasan, Karthik; Limaye, Ashutosh; Laymon, Charles; Meyer, Paul

    2009-01-01

    This paper presents a new technique to calibrate a microwave radiometer and antenna array system. This calibration technique uses a radiated noise source in addition to two calibration sources internal to the radiometer. The method accurately calibrates antenna arrays with embedded active devices (such as amplifiers) which are used extensively in active phased array antennas.
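
    A rough sketch of the general idea, under the assumption that the two internal references give a linear count-to-temperature calibration and the radiated noise diode provides an additional reference seen through the antenna path; all values and the loss-factor interpretation are illustrative assumptions, not the paper's method in detail.

    ```python
    # Two internal references calibrate the receiver; a radiated noise source
    # viewed through the array adds a reference that includes the antenna path.
    t_hot, t_cold = 370.0, 295.0          # K, internal reference temperatures (assumed)
    c_hot, c_cold = 8200.0, 6900.0        # counts on each internal reference (assumed)

    gain = (t_hot - t_cold) / (c_hot - c_cold)    # K per count
    offset = t_hot - gain * c_hot

    t_nd_expected = 150.0                 # K, nominal noise-diode excess temperature (assumed)
    c_nd_on, c_nd_off = 9100.0, 7100.0    # counts with the radiated diode on/off (assumed)
    t_nd_measured = gain * (c_nd_on - c_nd_off)

    # Deviation from the nominal value is attributed to the antenna/array path.
    array_factor = t_nd_measured / t_nd_expected
    ```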

  8. Spectral Transfer Learning Using Information Geometry for a User-Independent Brain-Computer Interface

    DOE PAGES

    Waytowich, Nicholas R.; Lawhern, Vernon J.; Bohannon, Addison W.; ...

    2016-09-22

    Recent advances in signal processing and machine learning techniques have enabled the application of Brain-Computer Interface (BCI) technologies to fields such as medicine, industry, and recreation; however, BCIs still suffer from the requirement of frequent calibration sessions due to the intra- and inter-individual variability of brain signals, which makes calibration suppression through transfer learning an area of increasing interest for the development of practical BCI systems. In this paper, we present an unsupervised transfer method (spectral transfer using information geometry, STIG), which ranks and combines unlabeled predictions from an ensemble of information geometry classifiers built on data from individual training subjects. The STIG method is validated in both off-line and real-time feedback analysis during a rapid serial visual presentation (RSVP) task. For detection of single-trial, event-related potentials (ERPs), the proposed method can significantly outperform existing calibration-free techniques as well as traditional within-subject calibration techniques when limited data are available. This method demonstrates that unsupervised transfer learning for single-trial detection in ERP-based BCIs can be achieved without the requirement of costly training data, representing a step forward in the overall goal of achieving a practical user-independent BCI system.

  9. Spectral Transfer Learning Using Information Geometry for a User-Independent Brain-Computer Interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waytowich, Nicholas R.; Lawhern, Vernon J.; Bohannon, Addison W.

    Recent advances in signal processing and machine learning techniques have enabled the application of Brain-Computer Interface (BCI) technologies to fields such as medicine, industry, and recreation; however, BCIs still suffer from the requirement of frequent calibration sessions due to the intra- and inter-individual variability of brain signals, which makes calibration suppression through transfer learning an area of increasing interest for the development of practical BCI systems. In this paper, we present an unsupervised transfer method (spectral transfer using information geometry, STIG), which ranks and combines unlabeled predictions from an ensemble of information geometry classifiers built on data from individual training subjects. The STIG method is validated in both off-line and real-time feedback analysis during a rapid serial visual presentation (RSVP) task. For detection of single-trial, event-related potentials (ERPs), the proposed method can significantly outperform existing calibration-free techniques as well as traditional within-subject calibration techniques when limited data are available. This method demonstrates that unsupervised transfer learning for single-trial detection in ERP-based BCIs can be achieved without the requirement of costly training data, representing a step forward in the overall goal of achieving a practical user-independent BCI system.

  10. New technology and techniques for x-ray mirror calibration at PANTER

    NASA Astrophysics Data System (ADS)

    Freyberg, Michael J.; Budau, Bernd; Burkert, Wolfgang; Friedrich, Peter; Hartner, Gisela; Misaki, Kazutami; Mühlegger, Martin

    2008-07-01

    The PANTER X-ray Test Facility has been utilized successfully for developing and calibrating X-ray astronomical instrumentation for observatories such as ROSAT, Chandra, XMM-Newton, Swift, etc. Future missions like eROSITA, SIMBOL-X, or XEUS require improved spatial resolution and broader energy band pass, both for optics and for cameras. Calibration campaigns at PANTER have made use of flight spare instrumentation for space applications; here we report on a new dedicated CCD camera for on-ground calibration, called TRoPIC. As the CCD is similar to ones used for eROSITA (pn-type, back-illuminated, 75 μm pixel size, frame store mode, 450 μm wafer thickness, etc.), it can serve as a prototype for eROSITA camera development. New techniques enable and enhance the analysis of measurements of eROSITA shells or silicon pore optics. Specifically, we show how sub-pixel resolution can be utilized to improve spatial resolution and subsequently the characterization of mirror shell quality and of point spread function parameters in particular, which is also relevant for position reconstruction of astronomical sources in orbit.

  11. Inflight Radiometric Calibration of New Horizons' Multispectral Visible Imaging Camera (MVIC)

    NASA Technical Reports Server (NTRS)

    Howett, C. J. A.; Parker, A. H.; Olkin, C. B.; Reuter, D. C.; Ennico, K.; Grundy, W. M.; Graps, A. L.; Harrison, K. P.; Throop, H. B.; Buie, M. W.

    2016-01-01

    We discuss two semi-independent calibration techniques used to determine the inflight radiometric calibration for the New Horizons Multi-spectral Visible Imaging Camera (MVIC). The first calibration technique compares the measured number of counts (DN) observed from a number of well calibrated stars to those predicted using the component-level calibration. The ratio of these values provides a multiplicative factor that allows a conversion from the preflight calibration to the more accurate inflight one, for each detector. The second calibration technique is a channel-wise relative radiometric calibration for MVIC's blue, near-infrared and methane color channels using Hubble and New Horizons observations of Charon and scaling from the red channel stellar calibration. Both calibration techniques produce very similar results (better than 7% agreement), providing strong validation for the techniques used. Since the stellar calibration described here can be performed without a color target in the field of view and covers all of MVIC's detectors, this calibration was used to provide the radiometric keyword values delivered by the New Horizons project to the Planetary Data System (PDS). These keyword values allow each observation to be converted from counts to physical units; a description of how these keyword values were generated is included. Finally, mitigation techniques adopted for the gain drift observed in the near-infrared detector and one of the panchromatic framing cameras are also discussed.
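
    A minimal sketch of the stellar-calibration step as described (ratio of measured to predicted counts); the star counts and responsivity below are placeholders, not actual MVIC values.

    ```python
    # Derive an inflight/preflight correction factor from stellar observations.
    import numpy as np

    measured_dn = np.array([15230.0, 8890.0, 22110.0])    # observed counts from calibration stars
    predicted_dn = np.array([14050.0, 8310.0, 20560.0])   # counts predicted from preflight calibration

    ratios = measured_dn / predicted_dn
    cal_factor = ratios.mean()                             # inflight/preflight multiplicative factor
    cal_uncert = ratios.std(ddof=1) / np.sqrt(ratios.size)

    # Apply to a science observation (responsivity value assumed for illustration):
    preflight_resp = 1.0e-3                                # physical units per DN (assumed)
    radiance = 1.8e4 * preflight_resp / cal_factor         # counts converted to physical units
    ```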

  12. A Novel Miniature Wide-band Radiometer for Space Applications

    NASA Astrophysics Data System (ADS)

    Sykulska-Lawrence, Hanna

    2016-10-01

    Design, development and testing of a novel miniaturised infrared radiometer is described. The instrument opens up new possibilities in planetary science of deployment on smaller platforms - such as unmanned aerial vehicles and microprobes - to enable study of a planet's radiation balance, as well as terrestrial volcano plumes and trace gases in planetary atmospheres, using low-cost long-term observations. Thus a key enabling development is that of miniaturised, low-power and well-calibrated instrumentation. The paper reports advances in miniature technology to perform high accuracy visible/IR remote sensing measurements. The infrared radiometer is akin to those widely used for remote sensing for earth and space applications, which are currently either large instruments on orbiting platforms or medium-sized payloads on balloons. We use MEMS microfabrication techniques to shrink a conventional design, while combining the calibration benefits of large (>1 kg) radiometers with the flexibility and portability of a <10 g device. The instrument measures broadband (0.2 to 100 µm) upward and downward radiation fluxes, with built-in calibration capability, incorporating traceability to temperature standards such as ITS-90. The miniature instrument described here was derived from a concept developed for a European Space Agency study, Dalomis (Proc. of 'i-SAIRAS 2005', Munich, 2005), which involved dropping multiple probes into the atmosphere of Venus from a balloon to sample numerous parts of the complex weather systems on the planet. Data from such an in-situ instrument would complement information from a satellite remote sensing instrument or balloon radiosonde. Moreover, the addition of an internal calibration standard facilitates comparisons between datasets. One of the main challenges for a reduced size device is calibration. We use an in-situ method whereby a blackbody source is integrated within the device and a micromirror switches the input to the detector between the measured signal and the calibration target. Achieving two well-calibrated radiometer channels within a small (<10 g) payload is made possible by using micromachining techniques.

  13. Calibrated Noise Measurements with Induced Receiver Gain Fluctuations

    NASA Technical Reports Server (NTRS)

    Racette, Paul; Walker, David; Gu, Dazhen; Rajola, Marco; Spevacek, Ashly

    2011-01-01

    The lack of well-developed techniques for modeling changing statistical moments in our observations has stymied the application of stochastic process theory in science and engineering. These limitations were encountered when modeling the performance of radiometer calibration architectures and algorithms in the presence of nonstationary receiver fluctuations. Analyses of measured signals have traditionally been limited to a single measurement series, whereas in a radiometer that samples a set of noise references, the data collection can be treated as an ensemble set of measurements of the receiver state. Noise Assisted Data Analysis (NADA) is a growing field of study with significant potential for aiding the understanding and modeling of nonstationary processes. Typically, NADA entails adding noise to a signal to produce an ensemble set on which statistical analysis is performed. Alternatively, as in radiometric measurements, mixing a signal with calibrated noise provides, through the calibration process, the means to detect deviations from the stationary assumption and thereby a measurement tool to characterize the signal's nonstationary properties. Data sets comprised of calibrated noise measurements have been limited to those collected with naturally occurring fluctuations in the radiometer receiver. To examine the application of NADA using calibrated noise, a Receiver Gain Modulation Circuit (RGMC) was designed and built to modulate the gain of a radiometer receiver using an external signal. In 2010, an RGMC was installed and operated at the National Institute of Standards and Technology (NIST) using their Noise Figure Radiometer (NFRad) and national standard noise references. The data collected are the first known set of calibrated noise measurements from a receiver with an externally modulated gain. As an initial step, sinusoidal and step-function signals were used to modulate the receiver gain, to evaluate the circuit characteristics and to study the performance of a variety of calibration algorithms. The receiver noise temperature and time-bandwidth product of the NFRad are calculated from the data. Statistical analysis using temporal-dependent calibration algorithms reveals that the naturally occurring fluctuations in the receiver are stationary over long intervals (hundreds of seconds); however, the receiver exhibits local nonstationarity over the interval during which one set of reference measurements is collected. A variety of calibration algorithms have been applied to the data to assess the algorithms' performance with the gain fluctuation signals. This presentation will describe the RGMC, the experiment design, and a comparative analysis of calibration algorithms.

  14. Remote sensing of evapotranspiration using automated calibration: Development and testing in the state of Florida

    NASA Astrophysics Data System (ADS)

    Evans, Aaron H.

    Thermal remote sensing is a powerful tool for measuring the spatial variability of evapotranspiration due to the cooling effect of vaporization. The residual method is a popular technique which calculates evapotranspiration by subtracting sensible heat from available energy. Estimating sensible heat requires the aerodynamic surface temperature, which is difficult to retrieve accurately. Methods such as SEBAL/METRIC correct for this problem by calibrating the relationship between sensible heat and retrieved surface temperature. Disadvantages of these calibrations are that (1) the user must manually identify extremely dry and wet pixels in the image, and (2) each calibration is only applicable over a limited spatial extent. Producing larger maps is operationally limited by the time required to manually calibrate multiple spatial extents over multiple days. This dissertation develops techniques which automatically detect dry and wet pixels. LANDSAT imagery is used because it resolves dry pixels. Calibrations using (1) only dry pixels and (2) including wet pixels are developed. Snapshots of retrieved evaporative fraction and actual evapotranspiration are compared to eddy covariance measurements for five study areas in Florida: (1) Big Cypress, (2) Disney Wilderness, (3) the Everglades, (4) near Gainesville, FL, and (5) Kennedy Space Center. The sensitivity of evaporative fraction to temperature, available energy, roughness length and wind speed is tested. A technique for temporally interpolating evapotranspiration by fusing LANDSAT and MODIS is developed and tested. The automated algorithm is successful at detecting wet and dry pixels (if they exist). Including wet pixels in the calibration and assuming constant atmospheric conductance significantly improved results for all but Big Cypress and Gainesville. Evaporative fraction is not very sensitive to instantaneous available energy, but it is sensitive to temperature when wet pixels are included because temperature is required for estimating wet-pixel evapotranspiration. Data fusion techniques only slightly outperformed linear interpolation. Eddy covariance comparison and temporal interpolation produced acceptable bias error for most cases, suggesting automated calibration and interpolation could be used to predict monthly or annual ET. Maps demonstrating spatial patterns of evapotranspiration at field scale were successfully produced, but only for limited spatial extents. A framework has been established for producing larger maps by creating a mosaic of smaller individual maps.
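
    A simplified sketch of the automated dry/wet-pixel idea (closer to a temperature-interpolation shortcut than to the full SEBAL/METRIC iteration); the thresholds and the constant-conductance assumption are illustrative, not the dissertation's exact algorithm.

    ```python
    # Pick candidate dry (hot, sparsely vegetated) and wet (cold) pixels from the
    # scene, anchor the extremes, and interpolate evaporative fraction in Ts.
    import numpy as np

    def evaporative_fraction(ts, ndvi, rn_minus_g):
        """ts: surface temperature (K); ndvi: vegetation index; rn_minus_g: available energy (W/m2)."""
        dry = (ts > np.percentile(ts, 99)) & (ndvi < 0.2)   # automated dry-pixel candidates
        wet = ts < np.percentile(ts, 1)                     # automated wet-pixel candidates

        t_dry, t_wet = ts[dry].mean(), ts[wet].mean()

        # At the dry extreme nearly all available energy goes to sensible heat (EF ~ 0);
        # at the wet extreme sensible heat is ~0 (EF ~ 1). Interpolate linearly in Ts.
        ef = np.clip((t_dry - ts) / (t_dry - t_wet), 0.0, 1.0)
        le = ef * rn_minus_g                                # latent heat flux, W/m2
        return ef, le

    # Example with synthetic rasters
    ts = np.random.normal(300.0, 5.0, (100, 100))
    ndvi = np.random.uniform(0.05, 0.8, (100, 100))
    rn_minus_g = np.full((100, 100), 450.0)
    ef, le = evaporative_fraction(ts, ndvi, rn_minus_g)
    ```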

  15. Strain Gauge Balance Calibration and Data Reduction at NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Ferris, A. T. Judy

    1999-01-01

    This paper will cover the standard force balance calibration and data reduction techniques used at Langley Research Center. It will cover balance axes definition, balance type, calibration instrumentation, traceability of standards to NIST, calibration loading procedures, balance calibration mathematical model, calibration data reduction techniques, balance accuracy reporting, and calibration frequency.

  16. Dynamic edge warping - An experimental system for recovering disparity maps in weakly constrained systems

    NASA Technical Reports Server (NTRS)

    Boyer, K. L.; Wuescher, D. M.; Sarkar, S.

    1991-01-01

    Dynamic edge warping (DEW), a technique for recovering reasonably accurate disparity maps from uncalibrated stereo image pairs, is presented. No precise knowledge of the epipolar camera geometry is assumed. The technique is embedded in a system including structural stereopsis on the front end and robust estimation in digital photogrammetry on the other end, for the purpose of self-calibrating stereo image pairs. Once the relative camera orientation is known, the epipolar geometry is computed and the system can use this information to refine its representation of the object space. Such a system will find application in the autonomous extraction of terrain maps from stereo aerial photographs, for which camera position and orientation are unknown a priori, and for online autonomous calibration maintenance for robotic vision applications, in which the cameras are subject to vibration and other physical disturbances after calibration. This work thus forms a component of an intelligent system that begins with a pair of images and, having only vague knowledge of the conditions under which they were acquired, produces an accurate, dense, relative depth map. The resulting disparity map can also be used directly in some high-level applications involving qualitative scene analysis, spatial reasoning, and perceptual organization of the object space. The system as a whole substitutes high-level information and constraints for precise geometric knowledge in driving and constraining the early correspondence process.

  17. Determination of perfluorinated compounds in fish fillet homogenates: method validation and application to fillet homogenates from the Mississippi River.

    PubMed

    Malinsky, Michelle Duval; Jacoby, Cliffton B; Reagen, William K

    2011-01-10

    We report herein a simple protein precipitation extraction-liquid chromatography tandem mass spectrometry (LC/MS/MS) method, validation, and application for the analysis of perfluorinated carboxylic acids (C7-C12), perfluorinated sulfonic acids (C4, C6, and C8), and perfluorooctane sulfonamide (FOSA) in fish fillet tissue. The method combines a rapid homogenization and protein precipitation tissue extraction procedure using stable-isotope internal standard (IS) calibration. Method validation in bluegill (Lepomis macrochirus) fillet tissue evaluated the following: (1) method accuracy and precision in both extracted matrix-matched calibration and solvent (unextracted) calibration, (2) quantitation of mixed branched and linear isomers of perfluorooctanoate (PFOA) and perfluorooctanesulfonate (PFOS) with linear isomer calibration, (3) quantitation of low level (ppb) perfluorinated compounds (PFCs) in the presence of high level (ppm) PFOS, and (4) specificity from matrix interferences. Both calibration techniques produced method accuracy of at least 100±13% with a precision (%RSD) ≤18% for all target analytes. Method accuracy and precision results for fillet samples from nine different fish species taken from the Mississippi River in 2008 and 2009 are also presented.

  18. Development of buried wire gages for measurement of wall shear stress in Blastane experiments

    NASA Technical Reports Server (NTRS)

    Murthy, S. V.; Steinle, F. W.

    1986-01-01

    Buried Wire Gages operated from a Constant Temperature Anemometer System are among the special types of instrumentation to be used in the Boundary Layer Apparatus for Subsonic and Transonic flow Affected by Noise Environment (BLASTANE). These Gages are of a new type and need to be adapted for specific applications. Methods were developed to fabricate Gage inserts and mount those in the BLASTANE Instrumentation Plugs. A large number of Gages were prepared and operated from a Constant Temperature Anemometer System to derive some of the calibration constants for application to fluid-flow wall shear-stress measurements. The final stage of the calibration was defined, but could not be accomplished because of non-availability of a suitable flow simulating apparatus. This report provides a description of the Buried Wire Gage technique, an explanation of the method evolved for making proper Gages and the calibration constants, namely Temperature Coefficient of Resistance and Conduction Loss Factor.

  19. Medicina array demonstrator: calibration and radiation pattern characterization using a UAV-mounted radio-frequency source

    NASA Astrophysics Data System (ADS)

    Pupillo, G.; Naldi, G.; Bianchi, G.; Mattana, A.; Monari, J.; Perini, F.; Poloni, M.; Schiaffino, M.; Bolli, P.; Lingua, A.; Aicardi, I.; Bendea, H.; Maschio, P.; Piras, M.; Virone, G.; Paonessa, F.; Farooqui, Z.; Tibaldi, A.; Addamo, G.; Peverini, O. A.; Tascone, R.; Wijnholds, S. J.

    2015-06-01

    One of the most challenging aspects of the new-generation Low-Frequency Aperture Array (LFAA) radio telescopes is instrument calibration. The operational LOw-Frequency ARray (LOFAR) instrument and the future LFAA element of the Square Kilometre Array (SKA) require advanced calibration techniques to reach the expected outstanding performance. In this framework, a small array, called Medicina Array Demonstrator (MAD), has been designed and installed in Italy to provide a test bench for antenna characterization and calibration techniques based on a flying artificial test source. A radio-frequency tone is transmitted through a dipole antenna mounted on a micro Unmanned Aerial Vehicle (UAV) (hexacopter) and received by each element of the array. A modern digital FPGA-based back-end is responsible for both data-acquisition and data-reduction. A simple amplitude and phase equalization algorithm is exploited for array calibration owing to the high stability and accuracy of the developed artificial test source. Both the measured embedded element patterns and calibrated array patterns are found to be in good agreement with the simulated data. The successful measurement campaign has demonstrated that a UAV-mounted test source provides a means to accurately validate and calibrate the full-polarized response of an antenna/array in operating conditions, including consequently effects like mutual coupling between the array elements and contribution of the environment to the antenna patterns. A similar system can therefore find a future application in the SKA-LFAA context.

  20. Application of new techniques in the calibration of the TROPOMI-SWIR instrument (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Tol, Paul; van Hees, Richard; van Kempen, Tim; Krijger, Matthijs; Cadot, Sidney; Aben, Ilse; Ludewig, Antje; Dingjan, Jos; Persijn, Stefan; Hoogeveen, Ruud

    2016-10-01

    The Tropospheric Monitoring Instrument (TROPOMI) on-board the Sentinel-5 Precursor satellite is an Earth-observing spectrometer with bands in the ultraviolet, visible, near infrared and short-wave infrared (SWIR). It provides daily global coverage of atmospheric trace gases relevant for tropospheric air quality and climate research. Three new techniques will be presented that are unique for the TROPOMI-SWIR spectrometer. The retrieval of methane and CO columns from the data of the SWIR band requires for each detector pixel an accurate instrument spectral response function (ISRF), i.e. the normalized signal as a function of wavelength. A new determination method for Earth-observing instruments has been used in the on-ground calibration, based on measurements with a SWIR optical parametric oscillator (OPO) that was scanned over the whole TROPOMI-SWIR spectral range. The calibration algorithm derives the ISRF without needing the absolute wavelength during the measurement. The same OPO has also been used to determine the two-dimensional stray-light distribution for each SWIR pixel with a dynamic range of 7 orders of magnitude. This was achieved by combining measurements at several exposure times and taking saturation into account. The correction algorithm and data are designed to remove the mean stray-light distribution and a reflection that moves relative to the direct image, within the strict constraints of the available time for the L01b processing. A third new technique is an alternative calibration of the SWIR absolute radiance and irradiance using a black body at the temperature of melting silver. Unlike a standard FEL lamp, this source does not have to be calibrated itself, because the temperature is very stable and well known. Measurement methods, data analyses, correction algorithms and limitations of the new techniques will be presented.

  1. Data Assimilation - Advances and Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Brian J.

    2014-07-30

    This presentation provides an overview of data assimilation (model calibration) for complex computer experiments. Calibration refers to the process of probabilistically constraining uncertain physics/engineering model inputs to be consistent with observed experimental data. An initial probability distribution for these parameters is updated using the experimental information. Utilization of surrogate models and empirical adjustment for model form error in code calibration form the basis for the statistical methodology considered. The role of probabilistic code calibration in supporting code validation is discussed. Incorporation of model form uncertainty in rigorous uncertainty quantification (UQ) analyses is also addressed. Design criteria used within a batch sequential design algorithm are introduced for efficiently achieving predictive maturity and improved code calibration. Predictive maturity refers to obtaining stable predictive inference with calibrated computer codes. These approaches allow for augmentation of initial experiment designs for collecting new physical data. A standard framework for data assimilation is presented and techniques for updating the posterior distribution of the state variables based on particle filtering and the ensemble Kalman filter are introduced.

  2. NIST Standard Reference Material 3600: Absolute Intensity Calibration Standard for Small-Angle X-ray Scattering

    DOE PAGES

    Allen, Andrew J.; Zhang, Fan; Kline, R. Joseph; ...

    2017-03-07

    The certification of a new standard reference material for small-angle scattering [NIST Standard Reference Material (SRM) 3600: Absolute Intensity Calibration Standard for Small-Angle X-ray Scattering (SAXS)], based on glassy carbon, is presented. Creation of this SRM relies on the intrinsic primary calibration capabilities of the ultra-small-angle X-ray scattering technique. This article describes how the intensity calibration has been achieved and validated in the certified Q range, Q = 0.008–0.25 Å-1, together with the purpose, use and availability of the SRM. The intensity calibration afforded by this robust and stable SRM should be applicable universally to all SAXS instruments that employ a transmission measurement geometry, working with a wide range of X-ray energies or wavelengths. As a result, the validation of the SRM SAXS intensity calibration using small-angle neutron scattering (SANS) is discussed, together with the prospects for including SANS in a future renewal certification.

  3. NIST Standard Reference Material 3600: Absolute Intensity Calibration Standard for Small-Angle X-ray Scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allen, Andrew J.; Zhang, Fan; Kline, R. Joseph

    The certification of a new standard reference material for small-angle scattering [NIST Standard Reference Material (SRM) 3600: Absolute Intensity Calibration Standard for Small-Angle X-ray Scattering (SAXS)], based on glassy carbon, is presented. Creation of this SRM relies on the intrinsic primary calibration capabilities of the ultra-small-angle X-ray scattering technique. This article describes how the intensity calibration has been achieved and validated in the certified Q range, Q = 0.008–0.25 Å-1, together with the purpose, use and availability of the SRM. The intensity calibration afforded by this robust and stable SRM should be applicable universally to all SAXS instruments that employ a transmission measurement geometry, working with a wide range of X-ray energies or wavelengths. As a result, the validation of the SRM SAXS intensity calibration using small-angle neutron scattering (SANS) is discussed, together with the prospects for including SANS in a future renewal certification.

  4. NIST Standard Reference Material 3600: Absolute Intensity Calibration Standard for Small-Angle X-ray Scattering.

    PubMed

    Allen, Andrew J; Zhang, Fan; Kline, R Joseph; Guthrie, William F; Ilavsky, Jan

    2017-04-01

    The certification of a new standard reference material for small-angle scattering [NIST Standard Reference Material (SRM) 3600: Absolute Intensity Calibration Standard for Small-Angle X-ray Scattering (SAXS)], based on glassy carbon, is presented. Creation of this SRM relies on the intrinsic primary calibration capabilities of the ultra-small-angle X-ray scattering technique. This article describes how the intensity calibration has been achieved and validated in the certified Q range, Q = 0.008-0.25 Å-1, together with the purpose, use and availability of the SRM. The intensity calibration afforded by this robust and stable SRM should be applicable universally to all SAXS instruments that employ a transmission measurement geometry, working with a wide range of X-ray energies or wavelengths. The validation of the SRM SAXS intensity calibration using small-angle neutron scattering (SANS) is discussed, together with the prospects for including SANS in a future renewal certification.
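
    A minimal sketch of how such an intensity standard is typically used: the ratio of the certified absolute intensity to the intensity measured on the local instrument, over the certified Q range, gives a scale factor for converting subsequent measurements to absolute units. The arrays are placeholders; the certified SRM 3600 curve must come from NIST.

    ```python
    # Derive an absolute-intensity scale factor from a glassy-carbon standard.
    import numpy as np

    def absolute_scale_factor(q, i_measured, q_cert, i_cert, qmin=0.008, qmax=0.25):
        """Median ratio of certified to measured intensity over the certified Q range (1/Angstrom)."""
        mask = (q >= qmin) & (q <= qmax)
        i_cert_interp = np.interp(q[mask], q_cert, i_cert)   # certified curve on the measured Q grid
        return np.median(i_cert_interp / i_measured[mask])

    # Usage (arrays supplied by the user's own reduction and the certified data):
    # scale = absolute_scale_factor(q, i_glassy_carbon, q_srm, i_srm)
    # i_sample_absolute = scale * i_sample_raw               # sample data now on an absolute scale
    ```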

  5. Field Data Collection: an Essential Element in Remote Sensing Applications

    NASA Technical Reports Server (NTRS)

    Pettinger, L. R.

    1971-01-01

    Field data collected in support of remote sensing projects are generally used for the following purposes: (1) calibration of remote sensing systems, (2) evaluation of experimental applications of remote sensing imagery on small test sites, and (3) designing and evaluating operational regional resource studies and inventories which are conducted using the remote sensing imagery obtained. Field data may be used to help develop a technique for a particular application, or to aid in the application of that technique to a resource evaluation or inventory problem for a large area. Scientists at the Forestry Remote Sensing Laboratory have utilized field data for both purposes. How meaningful field data has been collected in each case is discussed.

  6. A process for creating multimetric indices for large-scale aquatic surveys

    EPA Science Inventory

    Differences in sampling and laboratory protocols, differences in techniques used to evaluate metrics, and differing scales of calibration and application prohibit the use of many existing multimetric indices (MMIs) in large-scale bioassessments. We describe an approach to develop...

  7. Technique for calibrating angular measurement devices when calibration standards are unavailable

    NASA Technical Reports Server (NTRS)

    Finley, Tom D.

    1991-01-01

    A calibration technique is proposed that will allow the calibration of certain angular measurement devices without requiring the use of an absolute standard. The technique assumes that the device to be calibrated has deterministic bias errors. A comparison device that meets the same requirements must be available. The two devices are compared; one device is then rotated with respect to the other, and a second comparison is performed. If the data are reduced using the described technique, the individual errors of the two devices can be determined.

  8. The Oxford Probe: an open access five-hole probe for aerodynamic measurements

    NASA Astrophysics Data System (ADS)

    Hall, B. F.; Povey, T.

    2017-03-01

    The Oxford Probe is an open access five-hole probe designed for experimental aerodynamic measurements. The open access probe can be manufactured by the end user via additive manufacturing (metal or plastic). The probe geometry, drawings, calibration maps, and software are available under a creative commons license. The purpose is to widen access to aerodynamic measurement techniques in education and research environments. There are many situations in which the open access probe will allow results of comparable accuracy to a well-calibrated commercial probe. We discuss the applications and limitations of the probe, and compare the calibration maps for 16 probes manufactured in different materials and at different scales, but with the same geometrical design.

  9. Design and calibration of zero-additional-phase SPIDER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baum, Peter; Riedle, Eberhard

    2005-09-01

    Zero-additional-phase spectral phase interferometry for direct electric field reconstruction (ZAP-SPIDER) is a novel technique for measuring the temporal shape and phase of ultrashort optical pulses directly at the interaction point of a spectroscopic experiment. The scheme is suitable for an extremely wide wavelength region from the ultraviolet to the near infrared. We present a comprehensive description of the experimental setup and design guidelines to effectively apply the technique to various wavelengths and pulse durations. The calibration of the setup and procedures to check the consistency of the measurement are discussed in detail. We show experimental data for various center wavelengths and pulse durations down to 7 fs to verify the applicability to a wide range of pulse parameters.

  10. Single-Vector Calibration of Wind-Tunnel Force Balances

    NASA Technical Reports Server (NTRS)

    Parker, P. A.; DeLoach, R.

    2003-01-01

    An improved method of calibrating a wind-tunnel force balance involves the use of a unique load application system integrated with formal experimental design methodology. The Single-Vector Force Balance Calibration System (SVS) overcomes the productivity and accuracy limitations of prior calibration methods. A force balance is a complex structural spring element instrumented with strain gauges for measuring three orthogonal components of aerodynamic force (normal, axial, and side force) and three orthogonal components of aerodynamic torque (rolling, pitching, and yawing moments). Force balances remain the state-of-the-art instruments that provide these measurements on a scale model of an aircraft during wind tunnel testing. Ideally, each electrical channel of the balance would respond only to its respective component of load and would have no response to other components of load. This is not entirely possible even though balance designs are optimized to minimize these undesirable interaction effects. Ultimately, a calibration experiment is performed to obtain the necessary data to generate a mathematical model and determine the force measurement accuracy. In order to set the independent variables of applied load for the calibration experiment, a high-precision mechanical system is required. Manual deadweight systems have been in use at Langley Research Center (LaRC) since the 1940s. These simple methodologies produce high-confidence results, but the process is mechanically complex and labor-intensive, requiring three to four weeks to complete. Over the past decade, automated balance calibration systems have been developed. In general, these systems were designed to automate the tedious manual calibration process, resulting in an even more complex system which degrades load application quality. The current calibration approach relies on a one-factor-at-a-time (OFAT) methodology, where each independent variable is incremented individually throughout its full-scale range while all other variables are held at a constant magnitude. This OFAT approach has been widely accepted because of its inherent simplicity and intuitive appeal to the balance engineer. LaRC has been conducting research in a "modern design of experiments" (MDOE) approach to force balance calibration. Formal experimental design techniques provide an integrated view of the entire calibration process covering all three major aspects of an experiment: the design of the experiment, the execution of the experiment, and the statistical analysis of the data. In order to overcome the weaknesses in the available mechanical systems and to apply formal experimental techniques, a new mechanical system was required. The SVS enables the complete calibration of a six-component force balance with a series of single force vectors.

  11. Proceedings of the Third Airborne Imaging Spectrometer Data Analysis Workshop

    NASA Technical Reports Server (NTRS)

    Vane, Gregg (Editor)

    1987-01-01

    Summaries of 17 papers presented at the workshop are published. After an overview of the imaging spectrometer program, time was spent discussing AIS calibration, performance, information extraction techniques, and the application of high spectral resolution imagery to problems of geology and botany.

  12. Resolving small signal measurements in experimental plasma environments using calibrated subtraction of noise signals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fimognari, P. J., E-mail: PJFimognari@XanthoTechnologies.com; Demers, D. R.; Chen, X.

    2014-11-15

    The performance of many diagnostic and control systems within fusion and other fields of research is often detrimentally affected by spurious noise signals. This is particularly true for those (such as radiation or particle detectors) working with very small signals. Common sources of radiated and conducted noise in experimental fusion environments include the plasma itself and instrumentation. The noise complicates data analysis, as illustrated by noise on signals measured with the heavy ion beam probe (HIBP) installed on the Madison Symmetric Torus. The noise is time-varying and often exceeds the secondary ion beam current (in contrast with previous applications). Analysis of the noise identifies the dominant source as photoelectric emission from the detectors induced by ultraviolet light from the plasma. This has led to the development of a calibrated subtraction technique, which largely removes the undesired temporal noise signals from the data. The advantages of the technique for small-signal measurement applications are demonstrated through improvements realized on HIBP fluctuation measurements.

  13. Spectral Radiance of a Large-Area Integrating Sphere Source

    PubMed Central

    Walker, James H.; Thompson, Ambler

    1995-01-01

    The radiance and irradiance calibration of large field-of-view scanning and imaging radiometers for remote sensing and surveillance applications has resulted in the development of novel calibration techniques. One of these techniques is the employment of large-area integrating sphere sources as radiance or irradiance secondary standards. To assist the National Aeronautics and Space Administration's space-based ozone measurement program, the spectral radiance of a commercially available large-area internally illuminated integrating sphere source was characterized in the wavelength region from 230 nm to 400 nm at the National Institute of Standards and Technology. Spectral radiance determinations and spatial mappings of the source indicate that carefully designed large-area integrating sphere sources can be measured with a 1 % to 2 % expanded uncertainty (two standard deviation estimate) in the near ultraviolet with spatial nonuniformities of 0.6 % or smaller across a 20 cm diameter exit aperture. A method is proposed for the calculation of the final radiance uncertainties of the source which includes the field of view of the instrument being calibrated. PMID:29151725

  14. Instrumental Response Model and Detrending for the Dark Energy Camera

    DOE PAGES

    Bernstein, G. M.; Abbott, T. M. C.; Desai, S.; ...

    2017-09-14

    We describe the model for mapping from sky brightness to the digital output of the Dark Energy Camera (DECam) and the algorithms adopted by the Dark Energy Survey (DES) for inverting this model to obtain photometric measures of celestial objects from the raw camera output. This calibration aims for fluxes that are uniform across the camera field of view and across the full angular and temporal span of the DES observations, approaching the accuracy limits set by shot noise for the full dynamic range of DES observations. The DES pipeline incorporates several substantive advances over standard detrending techniques, including principal-components-based sky and fringe subtraction; correction of the "brighter-fatter" nonlinearity; use of internal consistency in on-sky observations to disentangle the influences of quantum efficiency, pixel-size variations, and scattered light in the dome flats; and pixel-by-pixel characterization of instrument spectral response, through combination of internal-consistency constraints with auxiliary calibration data. This article provides conceptual derivations of the detrending/calibration steps, and the procedures for obtaining the necessary calibration data. Other publications will describe the implementation of these concepts for the DES operational pipeline, the detailed methods, and the validation that the techniques can bring DECam photometry and astrometry within ≈2 mmag and ≈3 mas, respectively, of fundamental atmospheric and statistical limits. In conclusion, the DES techniques should be broadly applicable to wide-field imagers.
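
    A toy sketch of the principal-components idea behind the sky/fringe-subtraction step mentioned above; it illustrates the concept only and is not the DES pipeline implementation.

    ```python
    # Decompose a stack of (source-masked) sky frames into principal components,
    # fit a science frame's background as a linear combination of the leading
    # components, and subtract the modeled sky.
    import numpy as np

    def pca_sky_model(sky_frames, n_components=4):
        """sky_frames: (n_frames, n_pixels) array of flattened sky exposures."""
        mean_sky = sky_frames.mean(axis=0)
        _, _, vt = np.linalg.svd(sky_frames - mean_sky, full_matrices=False)
        return mean_sky, vt[:n_components]            # mean sky + leading components

    def subtract_sky(image, mean_sky, components):
        resid = image - mean_sky
        coeffs, *_ = np.linalg.lstsq(components.T, resid, rcond=None)
        return resid - components.T @ coeffs          # science frame minus modeled sky

    rng = np.random.default_rng(0)
    sky_frames = rng.normal(100.0, 1.0, (20, 4096))   # synthetic flattened sky exposures
    mean_sky, comps = pca_sky_model(sky_frames)
    clean = subtract_sky(sky_frames[0] + 5.0, mean_sky, comps)
    ```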

  15. Non-contact thrust stand calibration method for repetitively pulsed electric thrusters.

    PubMed

    Wong, Andrea R; Toftul, Alexandra; Polzin, Kurt A; Pearson, J Boise

    2012-02-01

    A thrust stand calibration technique for use in testing repetitively pulsed electric thrusters for in-space propulsion has been developed and tested using a modified hanging pendulum thrust stand. In the implementation of this technique, current pulses are applied to a solenoid to produce a pulsed magnetic field that acts against a permanent magnet mounted to the thrust stand pendulum arm. The force on the magnet is applied in this non-contact manner, with the entire pulsed force transferred to the pendulum arm through a piezoelectric force transducer to provide a time-accurate force measurement. Modeling of the pendulum arm dynamics reveals that after an initial transient in thrust stand motion the quasi-steady average deflection of the thrust stand arm away from the unforced or "zero" position can be related to the average applied force through a simple linear Hooke's law relationship. Modeling demonstrates that this technique is universally applicable except when the pulsing period is increased to the point where it approaches the period of natural thrust stand motion. Calibration data were obtained using a modified hanging pendulum thrust stand previously used for steady-state thrust measurements. Data were obtained for varying impulse bit at constant pulse frequency and for varying pulse frequency. The two data sets exhibit excellent quantitative agreement with each other. The overall error on the linear regression fit used to determine the calibration coefficient was roughly 1%.
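
    A minimal sketch of the calibration relationship described (a linear, Hooke's-law fit of average applied force against quasi-steady deflection); the numbers are illustrative placeholders.

    ```python
    # Fit the effective spring constant, then convert a measured deflection to thrust.
    import numpy as np

    avg_force = np.array([0.5, 1.0, 2.0, 4.0, 8.0])            # mN, pulsed-average applied force
    avg_defl = np.array([0.021, 0.043, 0.085, 0.171, 0.339])    # mm, quasi-steady deflection

    slope, intercept = np.polyfit(avg_defl, avg_force, deg=1)   # calibration coefficient (mN/mm)

    measured_defl = 0.120                                       # mm, during thruster operation
    average_thrust = slope * measured_defl + intercept          # mN
    print(f"average thrust: {average_thrust:.2f} mN")
    ```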

  16. Instrumental Response Model and Detrending for the Dark Energy Camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernstein, G. M.; Abbott, T. M. C.; Desai, S.

    We describe the model for mapping from sky brightness to the digital output of the Dark Energy Camera (DECam) and the algorithms adopted by the Dark Energy Survey (DES) for inverting this model to obtain photometric measures of celestial objects from the raw camera output. This calibration aims for fluxes that are uniform across the camera field of view and across the full angular and temporal span of the DES observations, approaching the accuracy limits set by shot noise for the full dynamic range of DES observations. The DES pipeline incorporates several substantive advances over standard detrending techniques, including principal-components-based sky and fringe subtraction; correction of the "brighter-fatter" nonlinearity; use of internal consistency in on-sky observations to disentangle the influences of quantum efficiency, pixel-size variations, and scattered light in the dome flats; and pixel-by-pixel characterization of instrument spectral response, through combination of internal-consistency constraints with auxiliary calibration data. This article provides conceptual derivations of the detrending/calibration steps, and the procedures for obtaining the necessary calibration data. Other publications will describe the implementation of these concepts for the DES operational pipeline, the detailed methods, and the validation that the techniques can bring DECam photometry and astrometry within ≈2 mmag and ≈3 mas, respectively, of fundamental atmospheric and statistical limits. In conclusion, the DES techniques should be broadly applicable to wide-field imagers.

  17. A New Probe of Dust Attenuation in Star-Forming Galaxies

    NASA Astrophysics Data System (ADS)

    Leitherer, Claus

    2017-08-01

    We propose to develop, calibrate and test a new technique to measure dust attenuation in star-forming galaxies. The technique utilizes the strong stellar-wind emission lines in Wolf-Rayet stars, which are routinely observed in galaxy spectra locally and up to redshift 3. The He II 1640 and 4686 features are recombination lines whose intrinsic ratio is almost exclusively determined by atomic physics. Therefore it can serve as a stellar dust probe in the same way as the nebular hydrogen-line ratio can be used to measure the reddening of the gas phase. Archival spectra of Wolf-Rayet stars will be analyzed to calibrate the method, and panchromatic FOS and STIS spectra of nearby star-forming galaxies will be used as a first application. The new technique allows us to study stellar and nebular attenuation in galaxies separately and to test its effects at different stellar age and mass regimes.

  18. Improved RF Measurements of SRF Cavity Quality Factors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holzbauer, J. P.; Contreras, C.; Pischalnikov, Y.

    SRF cavity quality factors can be accurately measured using RF-power-based techniques only when the cavity is very close to critically coupled. This limitation arises from systematic errors driven by non-ideal RF components. When the cavity is not close to critically coupled, these systematic effects limit the accuracy of the measurements. Combining the complex base-band envelopes of the cavity RF signals with a trombone in the circuit allows the relative calibration of the RF signals to be extracted from the data and systematic effects to be characterized and suppressed. The improved calibration allows accurate measurements to be made over a much wider range of couplings. Demonstration of these techniques during testing of a single-spoke resonator with a coupling factor of near 7 will be presented, along with recommendations for application of these techniques.

  19. Concentration variance decay during magma mixing: a volcanic chronometer

    PubMed Central

    Perugini, Diego; De Campos, Cristina P.; Petrelli, Maurizio; Dingwell, Donald B.

    2015-01-01

    The mixing of magmas is a common phenomenon in explosive eruptions. Concentration variance is a useful metric of this process and its decay (CVD) with time is an inevitable consequence during the progress of magma mixing. In order to calibrate this petrological/volcanological clock we have performed a time-series of high temperature experiments of magma mixing. The results of these experiments demonstrate that compositional variance decays exponentially with time. With this calibration the CVD rate (CVD-R) becomes a new geochronometer for the time lapse from initiation of mixing to eruption. The resultant novel technique is fully independent of the typically unknown advective history of mixing – a notorious uncertainty which plagues the application of many diffusional analyses of magmatic history. Using the calibrated CVD-R technique we have obtained mingling-to-eruption times for three explosive volcanic eruptions from Campi Flegrei (Italy) in the range of tens of minutes. These in turn imply ascent velocities of 5-8 meters per second. We anticipate the routine application of the CVD-R geochronometer to the eruptive products of active volcanoes in future in order to constrain typical “mixing to eruption” time lapses such that monitoring activities can be targeted at relevant timescales and signals during volcanic unrest. PMID:26387555

  20. ASTM clustering for improving coal analysis by near-infrared spectroscopy.

    PubMed

    Andrés, J M; Bona, M T

    2006-11-15

    Multivariate analysis techniques have been applied to near-infrared (NIR) spectra of coals to investigate the relationship between nine coal properties (moisture (%), ash (%), volatile matter (%), fixed carbon (%), heating value (kcal/kg), carbon (%), hydrogen (%), nitrogen (%) and sulphur (%)) and the corresponding predictor variables. In this work, a whole set of coal samples was grouped into six more homogeneous clusters following the ASTM reference method for classification prior to the application of calibration methods to each coal set. The results obtained showed a considerable improvement in the determination error compared with the calibration for the whole sample set. For some groups, the established calibrations approached the quality required by the ASTM/ISO norms for laboratory analysis. To predict property values for a new coal sample, it is necessary to assign that sample to its respective group. Thus, the discrimination and classification ability of coal samples by Diffuse Reflectance Infrared Fourier Transform Spectroscopy (DRIFTS) in the NIR range was also studied by applying Soft Independent Modelling of Class Analogy (SIMCA) and Linear Discriminant Analysis (LDA) techniques. Modelling of the groups by SIMCA led to overlapping models that cannot discriminate for unique classification. On the other hand, the application of Linear Discriminant Analysis improved the classification of the samples, but not enough to be satisfactory for every group considered.
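
    A hedged sketch of the two-step group-then-calibrate idea, using LDA for group assignment and PLS regression standing in for whichever multivariate calibration was actually used; all data shapes and values are synthetic placeholders.

    ```python
    # Classify a spectrum into a group, then apply that group's own calibration model.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(1)
    X = rng.normal(size=(120, 200))          # NIR spectra (samples x wavelengths), synthetic
    groups = rng.integers(0, 6, size=120)    # ASTM-style cluster labels, synthetic
    y = rng.normal(size=120)                 # property, e.g. heating value, synthetic

    classifier = LinearDiscriminantAnalysis().fit(X, groups)

    # One calibration model per group
    models = {g: PLSRegression(n_components=5).fit(X[groups == g], y[groups == g])
              for g in np.unique(groups)}

    x_new = X[:1]
    g_new = classifier.predict(x_new)[0]     # assign the new sample to a group
    y_pred = models[g_new].predict(x_new)    # then apply that group's calibration
    ```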

  1. Technical note: Aerosol light absorption measurements with a carbon analyser - Calibration and precision estimates

    NASA Astrophysics Data System (ADS)

    Ammerlaan, B. A. J.; Holzinger, R.; Jedynska, A. D.; Henzing, J. S.

    2017-09-01

    Equivalent Black Carbon (EBC) and Elemental Carbon (EC) are different mass metrics to quantify the amount of combustion aerosol, and each metric has its own measurement technique. In state-of-the-art carbon analysers, optical measurements are used to correct for organic carbon that is not evolving because of pyrolysis. These optical measurements are sometimes also used in the same way as in absorption photometers. Here, we use the transmission measurements of our carbon analyser for simultaneous determination of the elemental carbon concentration and the absorption coefficient. We use MAAP data from the CESAR observatory, the Netherlands, to correct for aerosol-filter interactions by linking the attenuation coefficient from the carbon analyser to the absorption coefficient measured by the MAAP. Application of the calibration to an independent data set of MAAP and OC/EC observations for the same location shows that the calibration is applicable to other observation periods. Because light absorption properties of the aerosol and elemental carbon are measured simultaneously, variation in the mass absorption efficiency (MAE) can be studied. We further show that the absorption coefficients and MAE in this set-up are determined within a precision of 10% and 12%, respectively. The precisions could be improved to 4% and 8% when the light transmission signal in the carbon analyser is very stable.
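
    A minimal sketch of the calibration chain described: the attenuation coefficient from the carbon analyser is regressed against MAAP absorption, and the calibrated absorption together with the EC mass gives the MAE. All numbers are synthetic placeholders.

    ```python
    # Calibrate attenuation against a reference absorption photometer, then derive MAE.
    import numpy as np

    atn_coeff = np.array([12.0, 25.0, 41.0, 60.0, 78.0])    # Mm-1, from carbon-analyser transmission
    abs_maap = np.array([4.1, 8.7, 14.5, 21.0, 27.6])       # Mm-1, MAAP absorption coefficient

    c, d = np.polyfit(atn_coeff, abs_maap, deg=1)           # attenuation -> absorption calibration

    atn_new = 35.0                                          # Mm-1, new carbon-analyser measurement
    b_abs = c * atn_new + d                                 # calibrated absorption coefficient, Mm-1

    ec_mass = 1.6                                           # ug m-3, elemental carbon from the same analysis
    mae = b_abs / ec_mass                                   # m2 g-1 (Mm-1 per ug m-3 equals m2 g-1)
    ```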

  2. In situ pre-growth calibration using reflectance as a control strategy for MOCVD fabrication of device structures

    NASA Astrophysics Data System (ADS)

    Breiland, William G.; Hou, Hong Q.; Chui, Herman C.; Hammons, Burrel E.

    1997-04-01

    In situ normal incidence reflectance, combined with a virtual interface model, is being used routinely on a commercial metal organic chemical vapor deposition reactor to measure growth rates of compound semiconductor films. The technique serves as a pre-growth calibration tool analogous to the use of reflection high-energy electron diffraction in molecular beam epitaxy as well as a real-time monitor throughout the run. An application of the method to the growth of a vertical cavity surface emitting laser (VCSEL) device structure is presented. All necessary calibration information can be obtained using a single run lasting less than 1 h. Working VCSEL devices are obtained on the first try after calibration. Repeated runs have yielded ±0.3% reproducibility of the Fabry-Perot cavity wavelength over the course of more than 100 runs.

  3. A High Precision Approach to Calibrate a Structured Light Vision Sensor in a Robot-Based Three-Dimensional Measurement System.

    PubMed

    Wu, Defeng; Chen, Tianfei; Li, Aiguo

    2016-08-30

    A robot-based three-dimensional (3D) measurement system is presented, in which a structured light vision sensor is mounted on the arm of an industrial robot. Measurement accuracy is one of the most important aspects of any 3D measurement system. To improve the measuring accuracy of the structured light vision sensor, a novel sensor calibration approach is proposed. The approach is based on a number of fixed concentric circles manufactured in a calibration target; the concentric circles are employed to determine the real projected centres of the circles. Then, a calibration point generation procedure is used with the help of the calibrated robot. When enough calibration points are ready, the radial alignment constraint (RAC) method is adopted to calibrate the camera model. A multilayer perceptron neural network (MLPNN) is then employed to identify the calibration residuals left after application of the RAC method, so the hybrid of the pinhole model and the MLPNN represents the real camera model. A standard ball is used to validate the effectiveness of the presented technique, and the experimental results demonstrate that the proposed calibration approach can achieve a highly accurate model of the structured light vision sensor.
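
    A hedged sketch of the hybrid-model idea (pinhole model plus neural-network residual correction): a toy pinhole projection stands in for the RAC-calibrated camera, and a small multilayer perceptron (scikit-learn's MLPRegressor, an assumed tool choice) learns the leftover systematic error at the calibration points. The points, noise levels and network sizes are invented for illustration, not taken from the paper.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    world_pts = rng.uniform(-100, 100, size=(500, 3))          # calibration points (mm)

    def pinhole_project(pts):
        """Toy pinhole model standing in for the RAC-calibrated camera."""
        f = 800.0                                               # focal length in pixels
        return f * pts[:, :2] / (pts[:, 2:3] + 1000.0) + 512.0  # principal point (512, 512)

    observed_uv = pinhole_project(world_pts) + rng.normal(0, 0.3, size=(500, 2))  # "measured"
    residuals = observed_uv - pinhole_project(world_pts)        # what the pinhole model misses

    # The MLP learns the systematic part of the residual as a function of image position.
    mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
    mlp.fit(pinhole_project(world_pts), residuals)

    corrected_uv = pinhole_project(world_pts) + mlp.predict(pinhole_project(world_pts))
    ```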

  4. Atomic Resonance Radiation Energetics Investigation as a Diagnostic Method for Non-Equilibrium Hypervelocity Flows

    NASA Technical Reports Server (NTRS)

    Meyer, Scott A.; Bershader, Daniel; Sharma, Surendra P.; Deiwert, George S.

    1996-01-01

    Absorption measurements with a tunable vacuum ultraviolet light source have been proposed as a concentration diagnostic for atomic oxygen, and the viability of this technique is assessed in light of recent measurements. The instrumentation, as well as initial calibration measurements, have been reported previously. We report here additional calibration measurements performed to study the resonance broadening line shape for atomic oxygen. The application of this diagnostic is evaluated by considering the range of suitable test conditions and requirements, and by identifying issues that remain to be addressed.

  5. Advanced Ecosystem Mapping Techniques for Large Arctic Study Domains Using Calibrated High-Resolution Imagery

    NASA Astrophysics Data System (ADS)

    Macander, M. J.; Frost, G. V., Jr.

    2015-12-01

    Regional-scale mapping of vegetation and other ecosystem properties has traditionally relied on medium-resolution remote sensing such as Landsat (30 m) and MODIS (250 m). Yet, the burgeoning availability of high-resolution (<=2 m) imagery and ongoing advances in computing power and analysis tools raise the prospect of performing ecosystem mapping at fine spatial scales over large study domains. Here we demonstrate cutting-edge mapping approaches over a ~35,000 km² study area on Alaska's North Slope using calibrated and atmospherically-corrected mosaics of high-resolution WorldView-2 and GeoEye-1 imagery: (1) an a priori spectral approach incorporating the Satellite Imagery Automatic Mapper (SIAM) algorithms; (2) image segmentation techniques; and (3) texture metrics. The SIAM spectral approach classifies radiometrically-calibrated imagery to general vegetation density categories and non-vegetated classes. The SIAM classes were developed globally and their applicability in arctic tundra environments has not been previously evaluated. Image segmentation, or object-based image analysis, automatically partitions high-resolution imagery into homogeneous image regions that can then be analyzed based on spectral, textural, and contextual information. We applied eCognition software to delineate waterbodies and vegetation classes, in combination with other techniques. Texture metrics were evaluated to determine the feasibility of using high-resolution imagery to algorithmically characterize periglacial surface forms (e.g., ice-wedge polygons), which are an important physical characteristic of permafrost-dominated regions but which cannot be distinguished by medium-resolution remote sensing. These advanced mapping techniques yield products that provide essential information supporting a broad range of ecosystem science and land-use planning applications in northern Alaska and elsewhere in the circumpolar Arctic.
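
    As a small, hedged example of the kind of texture metric mentioned above, the sketch below computes a local standard deviation layer over a synthetic calibrated band with SciPy; the window size and the image itself are arbitrary stand-ins, not the metrics actually evaluated in the study.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    rng = np.random.default_rng(9)
    img = rng.normal(size=(512, 512))                     # stand-in for a calibrated band
    img += np.sin(np.arange(512) / 8.0)[None, :]          # weak periodic surface pattern

    win = 15                                              # window size in pixels (assumed)
    mean = uniform_filter(img, win)
    mean_sq = uniform_filter(img * img, win)
    local_std = np.sqrt(np.clip(mean_sq - mean * mean, 0, None))   # texture layer
    ```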

  6. Tenth Biennial Coherent Laser Radar Technology and Applications Conference

    NASA Technical Reports Server (NTRS)

    Kavaya, Michael J. (Compiler)

    1999-01-01

    The tenth conference on coherent laser radar technology and applications is the latest in a series beginning in 1980 which provides a forum for exchange of information on recent events, current status, and future directions of coherent laser radar (or lidar or ladar) technology and applications. This conference emphasizes the latest advancements in the coherent laser radar field, including theory, modeling, components, systems, instrumentation, measurements, calibration, data processing techniques, operational uses, and comparisons with other remote sensing technologies.

  7. Automatic and robust extrinsic camera calibration for high-accuracy mobile mapping

    NASA Astrophysics Data System (ADS)

    Goeman, Werner; Douterloigne, Koen; Bogaert, Peter; Pires, Rui; Gautama, Sidharta

    2012-10-01

    A mobile mapping system (MMS) is the geoinformation community's answer to the exponentially growing demand for various geospatial data captured by multiple sensors with increasingly higher accuracies. As mobile mapping technology is pushed to explore its use for various applications on water, rail, or road, the need emerges for an external sensor calibration procedure that is portable, fast and easy to perform. This way, sensors can be mounted and demounted depending on the application requirements without the need for time-consuming calibration procedures. A new methodology is presented to provide a high-quality external calibration of cameras which is automatic, robust and foolproof. The MMS uses an Applanix POSLV420, which is a tightly coupled GPS/INS positioning system. The cameras used are Point Grey color video cameras synchronized with the GPS/INS system. The method uses a portable, standard ranging pole which needs to be positioned on a known ground control point. For calibration, a well-studied absolute orientation problem needs to be solved. Here, a mutual-information-based image registration technique is studied for automatic alignment of the ranging pole. Finally, a few benchmarking tests are done under various lighting conditions, which prove the methodology's robustness by showing absolute stereo measurement accuracies of a few centimeters.

  8. Characterization of neutron calibration fields at the TINT's 50 Ci americium-241/beryllium neutron irradiator

    NASA Astrophysics Data System (ADS)

    Liamsuwan, T.; Channuie, J.; Ratanatongchai, W.

    2015-05-01

    Reliable measurement of neutron radiation is important for monitoring and protection in workplaces where neutrons are present. Although Thailand has been familiar with applications of neutron sources and neutron beams for many decades, there is no calibration facility dedicated to neutron measuring devices available in the country. Recently, Thailand Institute of Nuclear Technology (TINT) has set up a multi-purpose irradiation facility equipped with a 50 Ci americium-241/beryllium neutron irradiator. The facility is planned to be used for research, nuclear analytical techniques and, among other applications, calibration of neutron measuring devices. In this work, the neutron calibration fields were investigated in terms of neutron energy spectra and dose equivalent rates using Monte Carlo simulations, an in-house developed neutron spectrometer and commercial survey meters. The characterized neutron fields can generate neutron dose equivalent rates ranging from 156 μSv/h to 3.5 mSv/h with nearly 100% of the dose contributed by neutrons of energies larger than 0.01 MeV. The gamma contamination was below 4.2-7.5%, depending on the irradiation configuration. It is possible to use the described neutron fields for calibration testing and routine quality assurance of neutron dose rate meters and passive dosemeters commonly used in radiation protection dosimetry.

  9. A Summary of Lightpipe Radiation Thermometry Research at NIST

    PubMed Central

    Tsai, Benjamin K.

    2006-01-01

    During the last 10 years, research in light-pipe radiation thermometry has significantly reduced the uncertainties for temperature measurements in semiconductor processing. The National Institute of Standards and Technology (NIST) has improved the calibration of lightpipe radiation thermometers (LPRTs), the characterization procedures for LPRTs, the in situ calibration of LPRTs using thin-film thermocouple (TFTC) test wafers, and the application of model-based corrections to improve LPRT spectral radiance temperatures. Collaboration with industry on implementing techniques and ideas established at NIST has led to improvements in temperature measurements in semiconductor processing. LPRTs have been successfully calibrated at NIST for rapid thermal processing (RTP) applications using a sodium heat-pipe blackbody between 700 °C and 900 °C with an uncertainty of about 0.3 °C (k = 1) traceable to the International Temperature Scale of 1990. Employing appropriate effective emissivity models, LPRTs have been used to determine the wafer temperature in the NIST RTP Test Bed with an uncertainty of 3.5 °C. Using a TFTC wafer for calibration, the LPRT can measure the wafer temperature in the NIST RTP Test Bed with an uncertainty of 2.3 °C. Collaborations with industry in characterizing and calibrating LPRTs will be summarized, and future directions for LPRT research will be discussed. PMID:27274914
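
    As generic background to radiance-temperature measurement (not the NIST calibration procedure itself), the sketch below evaluates and inverts Planck's law at a single wavelength; the 0.95 um operating wavelength is an assumption chosen only for illustration.

    ```python
    import numpy as np

    C1 = 1.191042972e-16   # W m^2 / sr, first radiation constant for spectral radiance (2hc^2)
    C2 = 1.438776877e-2    # m K, second radiation constant (hc/k)
    wavelength = 0.95e-6   # m, assumed operating wavelength of the radiation thermometer

    def planck_radiance(T):
        """Spectral radiance of a blackbody at temperature T (K), in W m^-3 sr^-1."""
        return C1 / wavelength ** 5 / (np.exp(C2 / (wavelength * T)) - 1.0)

    def radiance_temperature(L):
        """Invert Planck's law for temperature given spectral radiance L."""
        return C2 / (wavelength * np.log(C1 / (wavelength ** 5 * L) + 1.0))

    T = 900.0 + 273.15
    print(radiance_temperature(planck_radiance(T)) - 273.15)   # recovers ~900 degrees C
    ```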

  10. A Summary of Lightpipe Radiation Thermometry Research at NIST.

    PubMed

    Tsai, Benjamin K

    2006-01-01

    During the last 10 years, research in light-pipe radiation thermometry has significantly reduced the uncertainties for temperature measurements in semiconductor processing. The National Institute of Standards and Technology (NIST) has improved the calibration of lightpipe radiation thermometers (LPRTs), the characterization procedures for LPRTs, the in situ calibration of LPRTs using thin-film thermocouple (TFTC) test wafers, and the application of model-based corrections to improve LPRT spectral radiance temperatures. Collaboration with industry on implementing techniques and ideas established at NIST has led to improvements in temperature measurements in semiconductor processing. LPRTs have been successfully calibrated at NIST for rapid thermal processing (RTP) applications using a sodium heat-pipe blackbody between 700 °C and 900 °C with an uncertainty of about 0.3 °C (k = 1) traceable to the International Temperature Scale of 1990. Employing appropriate effective emissivity models, LPRTs have been used to determine the wafer temperature in the NIST RTP Test Bed with an uncertainty of 3.5 °C. Using a TFTC wafer for calibration, the LPRT can measure the wafer temperature in the NIST RTP Test Bed with an uncertainty of 2.3 °C. Collaborations with industry in characterizing and calibrating LPRTs will be summarized, and future directions for LPRT research will be discussed.

  11. On the Use of Deep Convective Clouds to Calibrate AVHRR Data

    NASA Technical Reports Server (NTRS)

    Doelling, David R.; Nguyen, Louis; Minnis, Patrick

    2004-01-01

    Remote sensing of cloud and radiation properties from National Oceanic and Atmospheric Administration (NOAA) Advanced Very High Resolution Radiometer (AVHRR) satellites requires constant monitoring of the visible sensors. NOAA satellites do not have onboard visible calibration and need to be calibrated vicariously in order to determine the calibration and the degradation rate. Deep convective clouds are extremely bright and cold, are at the tropopause, have nearly a Lambertian reflectance, and provide predictable albedos. The use of deep convective clouds as calibration targets is developed into a calibration technique and applied to NOAA-16 and NOAA-17. The technique computes the relative gain drift over the life-span of the satellite. This technique is validated by comparing the gain drifts derived from inter-calibration of coincident AVHRR and Moderate-Resolution Imaging Spectroradiometer (MODIS) radiances. A ray-matched technique, which uses collocated, coincident, and co-angled pixel satellite radiance pairs, is used to intercalibrate MODIS and AVHRR. The deep convective cloud calibration technique was found to be independent of solar zenith angle by using well-calibrated Visible Infrared Scanner (VIRS) radiances onboard the Tropical Rainfall Measuring Mission (TRMM) satellite, which precesses through all solar zenith angles in 23 days.
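
    A minimal sketch of trending a relative gain from repeated bright-target responses, in the spirit of the deep convective cloud approach; the counts, time base and simple linear drift model are assumptions for illustration, not the paper's actual processing.

    ```python
    import numpy as np

    months = np.arange(60)                                   # months since launch (synthetic)
    dcc_counts = 850.0 * (1 - 0.002) ** months \
                 + np.random.default_rng(2).normal(0, 3, 60)  # mean DCC response per month

    # Relative gain g(t): ratio of the current mean bright-target response to the response
    # at launch; a straight-line fit gives a degradation rate per month.
    rel_gain = dcc_counts / dcc_counts[0]
    slope, intercept = np.polyfit(months, rel_gain, 1)
    print(f"estimated drift: {slope * 100:.3f} % per month")
    ```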

  12. Calibration and validation of projection lithography in chemically amplified resist systems using fluorescence imaging

    NASA Astrophysics Data System (ADS)

    Mason, Michael D.; Ray, Krishanu; Feke, Gilbert D.; Grober, Robert D.; Pohlers, Gerd; Cameron, James F.

    2003-05-01

    Coumarin 6 (C6), a pH-sensitive fluorescent molecule, was doped into commercial resist systems to demonstrate a cost-effective fluorescence microscopy technique for detecting latent photoacid images in exposed chemically amplified resist films. The fluorescence image contrast is optimized by carefully selecting optical filters to match the spectroscopic properties of C6 in the resist matrices. We demonstrate the potential of this technique for two specific non-invasive applications. First, a fast, convenient fluorescence technique is demonstrated for determination of quantum yields of photoacid generation. Since the Ka of C6 in the 193 nm resist system lies within the range of acid concentrations that can be photogenerated, we have used this technique to evaluate the acid generation efficiency of various photoacid generators (PAGs). The technique is based on doping the resist formulations containing the candidate PAGs with C6, coating one wafer per PAG, patterning the wafer with a dose ramp, and spectroscopically imaging the wafers. The fluorescence of each pattern in the dose ramp is measured as a single image and analyzed with the optical titration model. Second, a nondestructive in-line diagnostic technique is developed for the focus calibration and validation of a projection lithography system. Our experimental results show excellent correlation between the fluorescence images and scanning electron microscope analysis of developed features. This technique has been successfully applied in both deep-UV resists (e.g., Shipley UVIIHS resist) and 193 nm resists (e.g., Shipley Vema-type resist). The method of focus calibration has also been extended to samples with feature sizes below the diffraction limit, where the pitch between adjacent features is on the order of 300 nm. Image capture, data analysis, and focus latitude verification are all computer controlled from a single hardware/software platform. Typical focus calibration curves can be obtained within several minutes.

  13. ITER-like antenna capacitors voltage probes: Circuit/electromagnetic calculations and calibrations.

    PubMed

    Helou, W; Dumortier, P; Durodié, F; Lombard, G; Nicholls, K

    2016-10-01

    The analyses illustrated in this manuscript have been performed in order to provide the required data for the amplitude-and-phase calibration of the D-dot voltage probes used in the ITER-like antenna at the Joint European Torus tokamak. Their equivalent electrical circuit has been extracted and analyzed, and it has been compared to the one of voltage probes installed in simple transmission lines. A radio-frequency calibration technique has been formulated and exact mathematical relations have been derived. This technique mixes in an elegant fashion data extracted from measurements and numerical calculations to retrieve the calibration factors. The latter have been compared to previous calibration data with excellent agreement proving the robustness of the proposed radio-frequency calibration technique. In particular, it has been stressed that it is crucial to take into account environmental parasitic effects. A low-frequency calibration technique has been in addition formulated and analyzed in depth. The equivalence between the radio-frequency and low-frequency techniques has been rigorously demonstrated. The radio-frequency calibration technique is preferable in the case of the ITER-like antenna due to uncertainties on the characteristics of the cables connected at the inputs of the voltage probes. A method to extract the effect of a mismatched data acquisition system has been derived for both calibration techniques. Finally it has been outlined that in the case of the ITER-like antenna voltage probes can be in addition used to monitor the currents at the inputs of the antenna.

  14. A Method to Test Model Calibration Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.
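
    A hedged sketch of the three figures of merit named above, computed against surrogate "truth" of the kind a simulation program would generate; every number below is a synthetic placeholder.

    ```python
    import numpy as np

    true_savings, predicted_savings = 1200.0, 1350.0                 # kWh/yr (synthetic)
    true_params = np.array([0.35, 2.1, 13.0])                        # "true" model inputs
    calibrated_params = np.array([0.33, 2.4, 12.1])                  # tuned model inputs
    bills_true = np.array([900, 820, 640, 500, 450, 430,             # surrogate monthly bills
                           470, 520, 610, 720, 830, 880], float)
    bills_model = bills_true + np.random.default_rng(3).normal(0, 25, 12)

    savings_error = (predicted_savings - true_savings) / true_savings            # merit 1
    param_closure = np.abs(calibrated_params - true_params) / true_params        # merit 2
    cv_rmse = np.sqrt(np.mean((bills_model - bills_true) ** 2)) / bills_true.mean()  # merit 3

    print(savings_error, param_closure, cv_rmse)
    ```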

  15. A Method to Test Model Calibration Techniques: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.

  16. Simplified multiple headspace extraction gas chromatographic technique for determination of monomer solubility in water.

    PubMed

    Chai, X S; Schork, F J; DeCinque, Anthony

    2005-04-08

    This paper reports an improved headspace gas chromatographic (GC) technique for determination of monomer solubilities in water. The method is based on a multiple headspace extraction GC technique developed previously [X.S. Chai, Q.X. Hou, F.J. Schork, J. Appl. Polym. Sci., in press], but with a major modification of the calibration technique. As a result, only a few iterations of headspace extraction and GC measurement are required, which avoids "exhaustive" headspace extraction and thus shortens the experimental time for each analysis. For highly insoluble monomers, effort must be made to minimize adsorption in the headspace sampling channel, transportation conduit and capillary column by using a higher operating temperature and a short capillary column in the headspace sampler and GC system. For highly water-soluble monomers, a new calibration method is proposed. The combination of these modifications results in a method that is simple, rapid and automated. While the current focus of the authors is on the determination of monomer solubility in aqueous solutions, the method should be applicable to determining the solubility of any organic compound in water.
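
    A hedged sketch of the multiple-headspace-extraction bookkeeping implied above: successive extraction peak areas decay geometrically, so a fit of ln(area) versus extraction number lets the total area (and hence total analyte) be extrapolated from only a few steps. The areas below are synthetic and this is not the authors' exact calibration scheme.

    ```python
    import numpy as np

    areas = np.array([1520.0, 1130.0, 842.0, 628.0])       # first few MHE peak areas (synthetic)
    n = np.arange(1, areas.size + 1)

    slope, intercept = np.polyfit(n, np.log(areas), 1)     # ln(A_i) = intercept + slope * i
    ratio = np.exp(slope)                                   # common ratio A_{i+1} / A_i
    total_area = areas[0] / (1.0 - ratio)                   # geometric-series sum of all steps

    print(f"ratio={ratio:.3f}, extrapolated total area={total_area:.0f}")
    ```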

  17. Evaluation of platinum resistance thermometers

    NASA Technical Reports Server (NTRS)

    Daryabeigi, Kamran; Dillon-Townes, Lawrence A.

    1988-01-01

    An evaluation procedure for the characterization of industrial platinum resistance thermometers (PRTs) for use in the temperature range -120 to 160 C was investigated. This evaluation procedure consisted of calibration, thermal stability and hysteresis testing of four surface-measuring PRTs. Five different calibration schemes were investigated for these sensors. The IPTS-68 formulation produced the most accurate result, yielding an average sensor systematic error of 0.02 C and a random error of 0.1 C. The sensors were checked for thermal stability by successive thermal cycling between room temperature, 160 C, and the boiling point of nitrogen. All the PRTs suffered from instability and hysteresis. The applicability of the self-heating technique as an in situ method for checking the calibration of PRTs located inside wind tunnels was investigated.

  18. Method of assaying uranium with prompt fission and thermal neutron borehole logging adjusted by borehole physical characteristics. [Patent application]

    DOEpatents

    Barnard, R.W.; Jensen, D.H.

    1980-11-05

    Uranium formations are assayed by prompt fission neutron logging techniques. The uranium in the formation is proportional to the ratio of epithermal counts to thermal or epithermal dieaway. Various calibration factors enhance the accuracy of the measurement.

  19. Simulating Reflex Induced Changes in the Acoustic Impedance of the Ear.

    ERIC Educational Resources Information Center

    Sirlin, Mindy W.; Levitt, Harry

    1991-01-01

    A simple procedure for measuring changes in the acoustic impedance of the ear is described. The technique has several applications, including simulation using a standard coupler of changes in real ear impedance produced by the acoustic reflex, and calibration of response time of an otoadmittance meter. (Author/DB)

  20. Diameter Growth Models for Inventory Applications

    Treesearch

    Ronald E. McRoberts; Christopher W. Woodall; Veronica C. Lessard; Margaret R. Holdaway

    2002-01-01

    Distance-independent, individual-tree diameter growth models were constructed to update information for forest inventory plots measured in previous years. The models are nonlinear in the parameters and were calibrated using weighted nonlinear least squares techniques and forest inventory plot data. Analyses of residuals indicated that model predictions compare favorably to...

  1. Advanced millimeter wave imaging systems

    NASA Technical Reports Server (NTRS)

    Schuchardt, J. M.; Gagliano, J. A.; Stratigos, J. A.; Webb, L. L.; Newton, J. M.

    1980-01-01

    Unique techniques are being utilized to develop self-contained imaging radiometers operating at single and multiple frequencies near 35, 95 and 183 GHz. These techniques include medium to large antennas for high spatial resolution, low-loss open structures for RF confinement and calibration, wide bandwidths for good sensitivity, plus total automation of unit operation and data collection. Applications include detection of severe storms, imaging of motor vehicles, and remote sensing of changes in material properties.

  2. Psychophysical contrast calibration

    PubMed Central

    To, Long; Woods, Russell L; Goldstein, Robert B; Peli, Eli

    2013-01-01

    Electronic displays and computer systems offer numerous advantages for clinical vision testing. Laboratory and clinical measurements of various functions and in particular of (letter) contrast sensitivity require accurately calibrated display contrast. In the laboratory this is achieved using expensive light meters. We developed and evaluated a novel method that uses only psychophysical responses of a person with normal vision to calibrate the luminance contrast of displays for experimental and clinical applications. Our method combines psychophysical techniques (1) for detection (and thus elimination or reduction) of display saturating nonlinearities; (2) for luminance (gamma function) estimation and linearization without use of a photometer; and (3) to measure without a photometer the luminance ratios of the display’s three color channels that are used in a bit-stealing procedure to expand the luminance resolution of the display. Using a photometer we verified that the calibration achieved with this procedure is accurate for both LCD and CRT displays enabling testing of letter contrast sensitivity to 0.5%. Our visual calibration procedure enables clinical, internet and home implementation and calibration verification of electronic contrast testing. PMID:23643843
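
    A minimal sketch of the gamma-linearization step: once an effective display gamma has been estimated (psychophysically in the paper; here simply assumed to be 2.2), an inverse lookup maps a desired linear luminance to the gray level to draw. The power-law response model and 8-bit resolution are assumptions of this example.

    ```python
    import numpy as np

    gamma_est = 2.2                                    # assumed result of the estimation step
    levels = np.arange(256) / 255.0
    luminance = levels ** gamma_est                    # normalized display response model

    def linearize(target_luminance):
        """Return the 8-bit gray level whose modeled luminance is closest to the target."""
        return int(np.argmin(np.abs(luminance - target_luminance)))

    # e.g. the gray level giving 50% of maximum luminance on this display model
    print(linearize(0.5))
    ```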

  3. Vicarious Calibration of EO-1 Hyperion

    NASA Technical Reports Server (NTRS)

    McCorkel, Joel; Thome, Kurt; Lawrence, Ong

    2012-01-01

    The Hyperion imaging spectrometer on the Earth Observing-1 satellite is the first high-spatial resolution imaging spectrometer to routinely acquire science-grade data from orbit. Data gathered with this instrument need to be quantitative and accurate in order to derive meaningful information about ecosystem properties and processes. Also, comprehensive and long-term ecological studies require these data to be comparable over time, between coexisting sensors and between generations of follow-on sensors. One method to assess the radiometric calibration is the reflectance-based approach, a common technique used for several other earth science sensors covering similar spectral regions. This work presents results of radiometric calibration of Hyperion based on the reflectance-based approach of vicarious calibration implemented by the University of Arizona during 2001-2005. These results show repeatability at the 2% level and accuracy at the 3-5% level for spectral regions not affected by strong atmospheric absorption. Knowledge of the stability of the Hyperion calibration from moon observations allows an average absolute calibration based on the reflectance-based results to be determined that is applicable for the lifetime of Hyperion.

  4. A novel method for determining calibration and behavior of PVDF ultrasonic hydrophone probes in the frequency range up to 100 MHz.

    PubMed

    Bleeker, H J; Lewin, P A

    2000-01-01

    A new calibration technique for PVDF ultrasonic hydrophone probes is described. Current implementation of the technique allows determination of hydrophone frequency response between 2 and 100 MHz and is based on the comparison of theoretically predicted and experimentally determined pressure-time waveforms produced by a focused, circular source. The simulation model was derived from the time domain algorithm that solves the nonlinear KZK (Khokhlov-Zabolotskaya-Kuznetsov) equation describing acoustic wave propagation. The calibration technique data were experimentally verified using independent calibration procedures in the frequency range from 2 to 40 MHz using a combined time delay spectrometry and reciprocity approach or calibration data provided by the National Physical Laboratory (NPL), UK. The results of verification indicated good agreement between the results obtained using KZK and the above-mentioned independent calibration techniques from 2 to 40 MHz, with the maximum discrepancy of 18% at 30 MHz. The frequency responses obtained using different hydrophone designs, including several membrane and needle probes, are presented, and it is shown that the technique developed provides a desirable tool for independent verification of primary calibration techniques such as those based on optical interferometry. Fundamental limitations of the presented calibration method are also examined.

  5. Laboratory calibration of the calcium carbonate clumped isotope thermometer in the 25-250 °C temperature range

    NASA Astrophysics Data System (ADS)

    Kluge, Tobias; John, Cédric M.; Jourdan, Anne-Lise; Davis, Simon; Crawshaw, John

    2015-05-01

    Many fields of Earth sciences benefit from the knowledge of mineral formation temperatures. For example, carbonates are extensively used for reconstruction of the Earth's past climatic variations by determining ocean, lake, and soil paleotemperatures. Furthermore, diagenetic minerals and their formation or alteration temperature may provide information about the burial history of important geological units and can have practical applications, for instance, for reconstructing the geochemical and thermal histories of hydrocarbon reservoirs. Carbonate clumped isotope thermometry is a relatively new technique that can provide the formation temperature of carbonate minerals without requiring a priori knowledge of the isotopic composition of the initial solution. It is based on the temperature-dependent abundance of the rare 13C-18O bonds in carbonate minerals, specified as a Δ47 value. The clumped isotope thermometer has been calibrated experimentally from 1 °C to 70 °C. However, higher temperatures that are relevant to geological processes have so far not been directly calibrated in the laboratory. In order to close this calibration gap and to provide a robust basis for the application of clumped isotopes to high-temperature geological processes we precipitated CaCO3 (mainly calcite) in the laboratory between 23 and 250 °C. We used two different precipitation techniques: first, minerals were precipitated from a CaCO3 supersaturated solution at atmospheric pressure (23-91 °C), and, second, from a solution resulting from the mixing of CaCl2 and NaHCO3 in a pressurized reaction vessel at a pressure of up to 80 bar (25-250 °C).

  6. Novel Hyperspectral Sun Photometer for Satellite Remote Sensing Data Radiometric Calibration and Atmospheric Aerosol Studies

    NASA Technical Reports Server (NTRS)

    Pagnutti, Mary; Ryan, Robert E.; Holekamp, Kara; Harrington, Gary; Frisbie, Troy

    2006-01-01

    A simple, cost-effective hyperspectral sun photometer was developed for vicarious radiometric calibration of remote sensing systems, air quality monitoring, and potentially in-situ planetary climatological studies. The device was constructed solely from off-the-shelf components and was designed to be easily deployable in support of short-term verification and validation data collects. This sun photometer not only provides the same data products as existing multi-band sun photometers but also requires a simpler setup and less data acquisition time and allows for a more direct calibration approach. Fielding this instrument has also enabled Stennis Space Center (SSC) Applied Sciences Directorate personnel to cross-calibrate existing sun photometers. This innovative research will position SSC personnel to perform air quality assessments in support of the NASA Applied Sciences Program's National Applications program element as well as to develop techniques to evaluate aerosols in a Martian or other planetary atmosphere.

  7. Optogalvanic wavelength calibration for laser monitoring of reactive atmospheric species

    NASA Technical Reports Server (NTRS)

    Webster, C. R.

    1982-01-01

    Laser-based techniques have been successfully employed for monitoring atmospheric species of importance to stratospheric ozone chemistry or tropospheric air quality control. When spectroscopic methods using tunable lasers are used, a simultaneously recorded reference spectrum is required for wavelength calibration. For stable species this is readily achieved by incorporating into the sensing instrument a reference cell containing the species to be monitored. However, when the species of interest is short-lived, this approach is unsuitable. It is proposed that wavelength calibration for short-lived species may be achieved by generating the species of interest in an electrical or RF discharge and using optogalvanic detection as a simple, sensitive, and reliable means of recording calibration spectra. The wide applicability of this method is emphasized. Ultraviolet, visible, or infrared lasers, either CW or pulsed, may be used in aircraft, balloon, or shuttle experiments for sensing atoms, molecules, radicals, or ions.

  8. A Fast and Robust Extrinsic Calibration for RGB-D Camera Networks.

    PubMed

    Su, Po-Chang; Shen, Ju; Xu, Wanxin; Cheung, Sen-Ching S; Luo, Ying

    2018-01-15

    From object tracking to 3D reconstruction, RGB-Depth (RGB-D) camera networks play an increasingly important role in many vision and graphics applications. Practical applications often use sparsely-placed cameras to maximize visibility, while using as few cameras as possible to minimize cost. In general, it is challenging to calibrate sparse camera networks due to the lack of shared scene features across different camera views. In this paper, we propose a novel algorithm that can accurately and rapidly calibrate the geometric relationships across an arbitrary number of RGB-D cameras on a network. Our work has a number of novel features. First, to cope with the wide separation between different cameras, we establish view correspondences by using a spherical calibration object. We show that this approach outperforms other techniques based on planar calibration objects. Second, instead of modeling camera extrinsic calibration using rigid transformation, which is optimal only for pinhole cameras, we systematically test different view transformation functions including rigid transformation, polynomial transformation and manifold regression to determine the most robust mapping that generalizes well to unseen data. Third, we reformulate the celebrated bundle adjustment procedure to minimize the global 3D reprojection error so as to fine-tune the initial estimates. Finally, our scalable client-server architecture is computationally efficient: the calibration of a five-camera system, including data capture, can be done in minutes using only commodity PCs. Our proposed framework is compared with other state-of-the-art systems using both quantitative measurements and visual alignment results of the merged point clouds.
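
    For the rigid-transformation option mentioned above, a short sketch of the classic SVD (Kabsch) construction that recovers a rotation and translation from corresponding 3D points, such as sphere centres seen by two cameras; the point sets are synthetic, and this is not the paper's full calibration pipeline.

    ```python
    import numpy as np

    def rigid_transform(src, dst):
        """Find R, t minimizing sum ||R @ src_i + t - dst_i||^2 over corresponding points."""
        c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
        H = (src - c_src).T @ (dst - c_dst)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflection
        R = Vt.T @ D @ U.T
        t = c_dst - R @ c_src
        return R, t

    rng = np.random.default_rng(4)
    pts_cam_a = rng.uniform(-1, 1, size=(20, 3))            # sphere centres in camera A frame
    R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
    pts_cam_b = pts_cam_a @ R_true.T + np.array([0.5, 0.1, 2.0])   # same centres in camera B
    R_est, t_est = rigid_transform(pts_cam_a, pts_cam_b)
    ```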

  9. A Fast and Robust Extrinsic Calibration for RGB-D Camera Networks †

    PubMed Central

    Shen, Ju; Xu, Wanxin; Luo, Ying

    2018-01-01

    From object tracking to 3D reconstruction, RGB-Depth (RGB-D) camera networks play an increasingly important role in many vision and graphics applications. Practical applications often use sparsely-placed cameras to maximize visibility, while using as few cameras as possible to minimize cost. In general, it is challenging to calibrate sparse camera networks due to the lack of shared scene features across different camera views. In this paper, we propose a novel algorithm that can accurately and rapidly calibrate the geometric relationships across an arbitrary number of RGB-D cameras on a network. Our work has a number of novel features. First, to cope with the wide separation between different cameras, we establish view correspondences by using a spherical calibration object. We show that this approach outperforms other techniques based on planar calibration objects. Second, instead of modeling camera extrinsic calibration using rigid transformation, which is optimal only for pinhole cameras, we systematically test different view transformation functions including rigid transformation, polynomial transformation and manifold regression to determine the most robust mapping that generalizes well to unseen data. Third, we reformulate the celebrated bundle adjustment procedure to minimize the global 3D reprojection error so as to fine-tune the initial estimates. Finally, our scalable client-server architecture is computationally efficient: the calibration of a five-camera system, including data capture, can be done in minutes using only commodity PCs. Our proposed framework is compared with other state-of-the-art systems using both quantitative measurements and visual alignment results of the merged point clouds. PMID:29342968

  10. Calibrating the ChemCam LIBS for Carbonate Minerals on Mars

    DOE R&D Accomplishments Database

    Wiens, Roger C.; Clegg, Samuel M.; Ollila, Ann M.; Barefield, James E.; Lanza, Nina; Newsom, Horton E.

    2009-01-01

    The ChemCam instrument suite on board the NASA Mars Science Laboratory (MSL) rover includes the first LIBS instrument for extraterrestrial applications. Here we examine carbonate minerals in a simulated martian environment using the LIBS technique in order to better understand the in situ signature of these materials on Mars. Both chemical composition and rock type are determined using multivariate analysis (MVA) techniques. Composition is confirmed using scanning electron microscopy (SEM) techniques. Our initial results suggest that ChemCam can recognize and differentiate between carbonate materials on Mars.

  11. OpenDA Open Source Generic Data Assimilation Environment and its Application in Process Models

    NASA Astrophysics Data System (ADS)

    El Serafy, Ghada; Verlaan, Martin; Hummel, Stef; Weerts, Albrecht; Dhondia, Juzer

    2010-05-01

    Data Assimilation techniques are essential elements in state-of-the-art development of models and their optimization with data in the field of groundwater, surface water and soil systems. They are essential tools in calibration of complex modelling systems and improvement of model forecasts. OpenDA is a new and generic open source data assimilation environment for application to a choice of physical process models, applied to case-dependent domains. OpenDA was introduced recently when the developers of Costa, an open-source TU Delft project [http://www.costapse.org; Van Velzen and Verlaan; 2007] and those of the DATools from the former WL|Delft Hydraulics [El Serafy et al 2007; Weerts et al. 2009] decided to join forces. OpenDA makes use of a set of interfaces that describe the interaction between models, observations and data assimilation algorithms. It focuses on flexible applications in portable systems for modelling geophysical processes. It provides a generic interfacing protocol that allows combination of the implemented data assimilation techniques with, in principle, any time-stepping model describing a process (atmospheric processes, 3D circulation, 2D water level, sea surface temperature, soil systems, groundwater etc.). Presently, OpenDA features filtering techniques and calibration techniques. The presentation will give an overview of the OpenDA and the results of some of its practical applications. Application of data assimilation in portable operational forecasting systems—the DATools assimilation environment, El Serafy G.Y., H. Gerritsen, S. Hummel, A. H. Weerts, A.E. Mynett and M. Tanaka (2007), Journal of Ocean Dynamics, DOI 10.1007/s10236-007-0124-3, pp.485-499. COSTA a problem solving environment for data assimilation applied for hydrodynamical modelling, Van Velzen and Verlaan (2007), Meteorologische Zeitschrift, Volume 16, Number 6, December 2007, pp. 777-793(17). Application of generic data assimilation tools (DATools) for flood forecasting purposes, A.H. Weerts, G.Y.H. El Serafy, S. Hummel, J. Dhondia, and H. Gerritsen (2009), accepted by Geoscience & Computers.

  12. Calibration of polarimetric radar systems with good polarization isolation

    NASA Technical Reports Server (NTRS)

    Sarabandi, Kamal; Ulaby, Fawwaz T.; Tassoudji, M. Ali

    1990-01-01

    A practical technique is proposed for calibrating single-antenna polarimetric radar systems using a metal sphere plus any second target with a strong cross-polarized radar cross section. This technique assumes perfect isolation between antenna ports. It is shown that all magnitudes and phases (relative to one of the like-polarized linear polarization configurations) of the radar transfer function can be calibrated without knowledge of the scattering matrix of the second target. Comparison of the values measured (using this calibration technique) for a tilted cylinder at X-band with theoretical values shows agreement within ±0.3 dB in magnitude and ±5 degrees in phase. The radar overall cross-polarization isolation was 25 dB. The technique is particularly useful for calibrating a radar under field conditions, because it does not require the careful alignment of calibration targets.

  13. Alzheimer's Disease Assessment: A Review and Illustrations Focusing on Item Response Theory Techniques.

    PubMed

    Balsis, Steve; Choudhury, Tabina K; Geraci, Lisa; Benge, Jared F; Patrick, Christopher J

    2018-04-01

    Alzheimer's disease (AD) affects neurological, cognitive, and behavioral processes. Thus, to accurately assess this disease, researchers and clinicians need to combine and incorporate data across these domains. This presents not only distinct methodological and statistical challenges but also unique opportunities for the development and advancement of psychometric techniques. In this article, we describe relatively recent research using item response theory (IRT) that has been used to make progress in assessing the disease across its various symptomatic and pathological manifestations. We focus on applications of IRT to improve scoring, test development (including cross-validation and adaptation), and linking and calibration. We conclude by describing potential future multidimensional applications of IRT techniques that may improve the precision with which AD is measured.

  14. New calibration technique for KCD-based megavoltage imaging

    NASA Astrophysics Data System (ADS)

    Samant, Sanjiv S.; Zheng, Wei; DiBianca, Frank A.; Zeman, Herbert D.; Laughter, Joseph S.

    1999-05-01

    In megavoltage imaging, current commercial electronic portal imaging devices (EPIDs), despite having the advantage of immediate digital imaging over film, suffer from poor image contrast and spatial resolution. The feasibility of using a kinestatic charge detector (KCD) as an EPID to provide superior image contrast and spatial resolution for portal imaging has already been demonstrated in a previous paper. The KCD system had the additional advantage of requiring an extremely low dose per acquired image, allowing for superior imaging to be reconstructed from a single linac pulse per image pixel. The KCD-based images utilized a dose two orders of magnitude lower than that for EPIDs and film. Compared with the current commercial EPIDs and film, the prototype KCD system exhibited promising image qualities, despite being handicapped by the use of a relatively simple image calibration technique, and the performance limits of medical linacs on the maximum linac pulse frequency and energy flux per pulse delivered. This image calibration technique fixed relative image pixel values based on a linear interpolation of extrema provided by an air-water calibration, and accounted only for channel-to-channel variations. The counterpart of this for area detectors is the standard flat fielding method. A comprehensive calibration protocol has been developed. The new technique additionally corrects for geometric distortions due to variations in the scan velocity, and timing artifacts caused by mis-synchronization between the linear accelerator and the data acquisition system (DAS). The role of variations in energy flux (2 - 3%) on imaging is demonstrated to be insignificant for the images considered. The methodology is presented, and the results are discussed for simulated images. It also allows for significant improvements in the signal-to-noise ratio (SNR) by increasing the dose using multiple images without having to increase the linac pulse frequency or energy flux per pulse. The application of this protocol to a KCD system under construction is expected shortly.

  15. Uncertainty quantification for constitutive model calibration of brain tissue.

    PubMed

    Brewick, Patrick T; Teferra, Kirubel

    2018-05-31

    The results of a study comparing model calibration techniques for Ogden's constitutive model that describes the hyperelastic behavior of brain tissue are presented. One and two-term Ogden models are fit to two different sets of stress-strain experimental data for brain tissue using both least squares optimization and Bayesian estimation. For the Bayesian estimation, the joint posterior distribution of the constitutive parameters is calculated by employing Hamiltonian Monte Carlo (HMC) sampling, a type of Markov Chain Monte Carlo method. The HMC method is enriched in this work to intrinsically enforce the Drucker stability criterion by formulating a nonlinear parameter constraint function, which ensures the constitutive model produces physically meaningful results. Through application of the nested sampling technique, 95% confidence bounds on the constitutive model parameters are identified, and these bounds are then propagated through the constitutive model to produce the resultant bounds on the stress-strain response. The behavior of the model calibration procedures and the effect of the characteristics of the experimental data are extensively evaluated. It is demonstrated that increasing model complexity (i.e., adding an additional term in the Ogden model) improves the accuracy of the best-fit set of parameters while also increasing the uncertainty via the widening of the confidence bounds of the calibrated parameters. Despite some similarity between the two data sets, the resulting distributions are noticeably different, highlighting the sensitivity of the calibration procedures to the characteristics of the data. For example, the amount of uncertainty reported on the experimental data plays an essential role in how data points are weighted during the calibration, and this significantly affects how the parameters are calibrated when combining experimental data sets from disparate sources. Published by Elsevier Ltd.
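
    A sketch of the deterministic (least-squares) side of such a calibration: fitting a one-term incompressible Ogden model to uniaxial stress-stretch data with SciPy. The data, the parameter values and the bounds are illustrative assumptions, and the Bayesian/HMC machinery of the paper is not reproduced here.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def ogden_1term(stretch, mu, alpha):
        """Nominal uniaxial stress of an incompressible one-term Ogden solid
        (W = 2*mu/alpha**2 * (l1**alpha + l2**alpha + l3**alpha - 3) convention)."""
        return 2.0 * mu / alpha * (stretch ** (alpha - 1.0) - stretch ** (-alpha / 2.0 - 1.0))

    stretch = np.linspace(1.0, 1.3, 25)
    stress = ogden_1term(stretch, 1.0e3, -4.7) \
             + np.random.default_rng(5).normal(0, 5.0, stretch.size)   # "experimental" data (Pa)

    # Bounds keep alpha away from zero, where the model is singular.
    (mu_fit, alpha_fit), _ = curve_fit(ogden_1term, stretch, stress,
                                       p0=(500.0, -2.0), bounds=([1.0, -20.0], [1.0e5, -0.1]))
    print(mu_fit, alpha_fit)
    ```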

  16. Calibration of a polarimetric imaging SAR

    NASA Technical Reports Server (NTRS)

    Sarabandi, K.; Pierce, L. E.; Ulaby, F. T.

    1991-01-01

    Calibration of polarimetric imaging Synthetic Aperture Radars (SAR's) using point calibration targets is discussed. The four-port network calibration technique is used to describe the radar error model. The polarimetric ambiguity function of the SAR is then found using a single point target, namely a trihedral corner reflector. Based on this, an estimate for the backscattering coefficient of the terrain is found by a deconvolution process. A radar image taken by the JPL Airborne SAR (AIRSAR) is used for verification of the deconvolution calibration method. The calibrated responses of point targets in the image are compared with both theory and the POLCAL technique. Also, the responses of a distributed target are compared using the deconvolution and POLCAL techniques.

  17. User-friendly freehand ultrasound calibration using Lego bricks and automatic registration.

    PubMed

    Xiao, Yiming; Yan, Charles Xiao Bo; Drouin, Simon; De Nigris, Dante; Kochanowska, Anna; Collins, D Louis

    2016-09-01

    As an inexpensive, noninvasive, and portable clinical imaging modality, ultrasound (US) has been widely employed in many interventional procedures for monitoring potential tissue deformation, surgical tool placement, and locating surgical targets. The application requires the spatial mapping between 2D US images and 3D coordinates of the patient. Although positions of the devices (i.e., ultrasound transducer) and the patient can be easily recorded by a motion tracking system, the spatial relationship between the US image and the tracker attached to the US transducer needs to be estimated through an US calibration procedure. Previously, various calibration techniques have been proposed, where a spatial transformation is computed to match the coordinates of corresponding features in a physical phantom and those seen in the US scans. However, most of these methods are difficult to use for novel users. We proposed an ultrasound calibration method by constructing a phantom from simple Lego bricks and applying an automated multi-slice 2D-3D registration scheme without volumetric reconstruction. The method was validated for its calibration accuracy and reproducibility. Our method yields a calibration accuracy of [Formula: see text] mm and a calibration reproducibility of 1.29 mm. We have proposed a robust, inexpensive, and easy-to-use ultrasound calibration method.

  18. Characterization of the ionosphere above the Murchison Radio Observatory using the Murchison Widefield Array

    NASA Astrophysics Data System (ADS)

    Jordan, C. H.; Murray, S.; Trott, C. M.; Wayth, R. B.; Mitchell, D. A.; Rahimi, M.; Pindor, B.; Procopio, P.; Morgan, J.

    2017-11-01

    We detail new techniques for analysing ionospheric activity, using Epoch of Reionization data sets obtained with the Murchison Widefield Array, calibrated by the `real-time system' (RTS). Using the high spatial- and temporal-resolution information of the ionosphere provided by the RTS calibration solutions over 19 nights of observing, we find four distinct types of ionospheric activity, and have developed a metric to provide an `at a glance' value for data quality under differing ionospheric conditions. For each ionospheric type, we analyse variations of this metric as we reduce the number of pierce points, revealing that a modest number of pierce points is required to identify the intensity of ionospheric activity; it is possible to calibrate in real-time, providing continuous information of the phase screen. We also analyse temporal correlations, determine diffractive scales, examine the relative fractions of time occupied by various types of ionospheric activity and detail a method to reconstruct the total electron content responsible for the ionospheric data we observe. These techniques have been developed to be instrument agnostic, useful for application on LOw Frequency ARray and Square Kilometre Array-Low.

  19. High Accuracy Transistor Compact Model Calibrations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hembree, Charles E.; Mar, Alan; Robertson, Perry J.

    2015-09-01

    Typically, transistors are modeled by the application of calibrated nominal and range models. These models consist of differing parameter values that describe the location and the upper and lower limits of a distribution of some transistor characteristic such as current capacity. Correspondingly, when using this approach, high degrees of accuracy of the transistor models are not expected, since the set of models is a surrogate for a statistical description of the devices. The use of these types of models describes expected performances considering the extremes of process or transistor deviations. In contrast, circuits that have very stringent accuracy requirements require modeling techniques with higher accuracy. Since these accurate models have low error in transistor descriptions, they can be used to describe part-to-part variations as well as to give an accurate description of a single circuit instance. Thus, models that meet these stipulations also enable quantification of margins with respect to a functional threshold and of the uncertainties in these margins. Given this need, new high-accuracy model calibration techniques for bipolar junction transistors have been developed and are described in this report.

  20. Complete elliptical ring geometry provides energy and instrument calibration for synchrotron-based two-dimensional X-ray diffraction

    PubMed Central

    Hart, Michael L.; Drakopoulos, Michael; Reinhard, Christina; Connolley, Thomas

    2013-01-01

    A complete calibration method to characterize a static planar two-dimensional detector for use in X-ray diffraction at an arbitrary wavelength is described. This method is based upon geometry describing the point of intersection between a cone’s axis and its elliptical conic section. This point of intersection is neither the ellipse centre nor one of the ellipse focal points, but some other point which lies in between. The presented solution is closed form, algebraic and non-iterative in its application, and gives values for the X-ray beam energy, the sample-to-detector distance, the location of the beam centre on the detector surface and the detector tilt relative to the incident beam. Previous techniques have tended to require prior knowledge of either the X-ray beam energy or the sample-to-detector distance, whilst other techniques have been iterative. The new calibration procedure is performed by collecting diffraction data, in the form of diffraction rings from a powder standard, at known displacements of the detector along the beam path. PMID:24068840
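
    One ingredient of such a geometry calibration, sketched under simplifying assumptions: fitting a general conic to the (x, y) points of a powder diffraction ring and recovering the ellipse centre. The ring coordinates are synthetic, and the paper's closed-form solution for beam energy, distance and tilt is not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    theta = rng.uniform(0, 2 * np.pi, 400)
    x = 1024 + 300 * np.cos(theta) + rng.normal(0, 0.5, 400)   # synthetic ring points (pixels)
    y = 980 + 220 * np.sin(theta) + rng.normal(0, 0.5, 400)

    # General conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0, solved as the null space
    # of the design matrix (right singular vector of the smallest singular value).
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    a, b, c, d, e, f = np.linalg.svd(D)[2][-1]

    # Ellipse centre: the point where the gradient of the conic vanishes.
    cx, cy = np.linalg.solve([[2 * a, b], [b, 2 * c]], [-d, -e])
    print(cx, cy)
    ```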

  1. Hardware in the Loop Performance Assessment of LIDAR-Based Spacecraft Pose Determination

    PubMed Central

    Fasano, Giancarmine; Grassi, Michele

    2017-01-01

    In this paper an original, easy to reproduce, semi-analytic calibration approach is developed for hardware-in-the-loop performance assessment of pose determination algorithms processing point cloud data, collected by imaging a non-cooperative target with LIDARs. The laboratory setup includes a scanning LIDAR, a monocular camera, a scaled-replica of a satellite-like target, and a set of calibration tools. The point clouds are processed by uncooperative model-based algorithms to estimate the target relative position and attitude with respect to the LIDAR. Target images, acquired by a monocular camera operated simultaneously with the LIDAR, are processed applying standard solutions to the Perspective-n-Points problem to get high-accuracy pose estimates which can be used as a benchmark to evaluate the accuracy attained by the LIDAR-based techniques. To this aim, a precise knowledge of the extrinsic relative calibration between the camera and the LIDAR is essential, and it is obtained by implementing an original calibration approach which does not need ad-hoc homologous targets (e.g., retro-reflectors) easily recognizable by the two sensors. The pose determination techniques investigated by this work are of interest to space applications involving close-proximity maneuvers between non-cooperative platforms, e.g., on-orbit servicing and active debris removal. PMID:28946651
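
    A hedged sketch of the monocular benchmark step: with known 3D marker coordinates on the target and their image projections, OpenCV's Perspective-n-Point solver returns the reference pose. The marker layout, intrinsics and ground-truth pose below are invented so the round trip can be checked; they are not the laboratory setup described above.

    ```python
    import numpy as np
    import cv2

    # Known 3D marker coordinates on the target (metres) and assumed camera intrinsics (pixels).
    object_pts = np.array([[0, 0, 0], [0.2, 0, 0], [0.2, 0.2, 0], [0, 0.2, 0],
                           [0.1, 0.1, 0.05], [0.05, 0.15, 0.1]], dtype=np.float64)
    K = np.array([[800.0, 0, 320.0], [0, 800.0, 240.0], [0, 0, 1.0]])
    dist = np.zeros(5)

    # Simulate the observed projections from a ground-truth pose, then recover that pose.
    rvec_true = np.array([0.1, -0.2, 0.05])
    tvec_true = np.array([0.05, -0.1, 1.5])
    image_pts, _ = cv2.projectPoints(object_pts, rvec_true, tvec_true, K, dist)

    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
    R, _ = cv2.Rodrigues(rvec)      # benchmark rotation matrix; tvec is the translation
    print(ok, tvec.ravel())
    ```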

  2. Hardware in the Loop Performance Assessment of LIDAR-Based Spacecraft Pose Determination.

    PubMed

    Opromolla, Roberto; Fasano, Giancarmine; Rufino, Giancarlo; Grassi, Michele

    2017-09-24

    In this paper an original, easy to reproduce, semi-analytic calibration approach is developed for hardware-in-the-loop performance assessment of pose determination algorithms processing point cloud data, collected by imaging a non-cooperative target with LIDARs. The laboratory setup includes a scanning LIDAR, a monocular camera, a scaled-replica of a satellite-like target, and a set of calibration tools. The point clouds are processed by uncooperative model-based algorithms to estimate the target relative position and attitude with respect to the LIDAR. Target images, acquired by a monocular camera operated simultaneously with the LIDAR, are processed applying standard solutions to the Perspective-n-Points problem to get high-accuracy pose estimates which can be used as a benchmark to evaluate the accuracy attained by the LIDAR-based techniques. To this aim, a precise knowledge of the extrinsic relative calibration between the camera and the LIDAR is essential, and it is obtained by implementing an original calibration approach which does not need ad-hoc homologous targets (e.g., retro-reflectors) easily recognizable by the two sensors. The pose determination techniques investigated by this work are of interest to space applications involving close-proximity maneuvers between non-cooperative platforms, e.g., on-orbit servicing and active debris removal.

  3. Quality evaluation of frozen guava and yellow passion fruit pulps by NIR spectroscopy and chemometrics.

    PubMed

    Alamar, Priscila D; Caramês, Elem T S; Poppi, Ronei J; Pallone, Juliana A L

    2016-07-01

    The present study investigated the application of near infrared spectroscopy as a green, quick, and efficient alternative to analytical methods currently used to evaluate the quality (moisture, total sugars, acidity, soluble solids, pH and ascorbic acid) of frozen guava and passion fruit pulps. Fifty samples were analyzed by near infrared spectroscopy (NIR) and reference methods. Partial least squares regression (PLSR) was used to develop calibration models to relate the NIR spectra and the reference values. Reference methods indicated adulteration by water addition in 58% of guava pulp samples and 44% of yellow passion fruit pulp samples. The PLS models produced low values of root mean square error of calibration (RMSEC) and root mean square error of prediction (RMSEP), and coefficients of determination above 0.7. Moisture and total sugars presented the best calibration models (RMSEP of 0.240 and 0.269, respectively, for guava pulp; RMSEP of 0.401 and 0.413, respectively, for passion fruit pulp), which enables the application of these models to determine adulteration in guava and yellow passion fruit pulp by water or sugar addition. The models constructed for calibration of quality parameters of frozen fruit pulps in this study indicate that NIR spectroscopy coupled with the multivariate calibration technique could be applied to determine the quality of guava and yellow passion fruit pulp. Copyright © 2016 Elsevier Ltd. All rights reserved.
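
    As a concrete illustration of the kind of PLS calibration and error metrics described above, the following sketch uses scikit-learn with synthetic spectra; the component count, data split and values are placeholders, not values from the study.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

def rmse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred).ravel()
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# X: spectra (samples x wavelengths), y: a reference value such as moisture.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 700))
y = X[:, :10].sum(axis=1) + rng.normal(scale=0.1, size=50)

X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=5).fit(X_cal, y_cal)
rmsec = rmse(y_cal, pls.predict(X_cal))  # root mean square error of calibration
rmsep = rmse(y_val, pls.predict(X_val))  # root mean square error of prediction
r2 = pls.score(X_val, y_val)             # coefficient of determination
print(rmsec, rmsep, r2)
```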

  4. Calibration of the Root Zone Water Quality Model and Application of Data Assimilation Techniques to Estimate Profile Soil Moisture

    USDA-ARS?s Scientific Manuscript database

    Estimation of soil moisture has received considerable attention in the areas of hydrology, agriculture, meteorology and environmental studies because of its role in the partitioning water and energy at the land surface. In this study, the USDA, Agricultural Research Service, Root Zone Water Quality ...

  5. Evaluating the role of evapotranspiration remote sensing data in improving hydrological modeling predictability

    NASA Astrophysics Data System (ADS)

    Herman, Matthew R.; Nejadhashemi, A. Pouyan; Abouali, Mohammad; Hernandez-Suarez, Juan Sebastian; Daneshvar, Fariborz; Zhang, Zhen; Anderson, Martha C.; Sadeghi, Ali M.; Hain, Christopher R.; Sharifi, Amirreza

    2018-01-01

    As global demand for freshwater resources continues to rise, it has become increasingly important to ensure the sustainability of these resources. This is accomplished through management strategies that often rely on monitoring and the use of hydrological models. However, monitoring at large scales is not feasible, and model applications therefore become challenging, especially when spatially distributed datasets, such as evapotranspiration, are needed to understand model performance. Due to these limitations, most hydrological models are calibrated only against data obtained from site/point observations, such as streamflow. Therefore, the main focus of this paper is to examine whether the incorporation of remotely sensed and spatially distributed datasets can improve the overall performance of the model. In this study, actual evapotranspiration (ETa) data were obtained from two different sets of satellite-based remote sensing data. One dataset estimates ETa based on the Simplified Surface Energy Balance (SSEBop) model, while the other estimates ETa based on the Atmosphere-Land Exchange Inverse (ALEXI) model. The hydrological model used in this study is the Soil and Water Assessment Tool (SWAT), which was calibrated against spatially distributed ETa and single-point streamflow records for the Honeyoey Creek-Pine Creek Watershed, located in Michigan, USA. Two different techniques, multi-variable and genetic algorithm, were used to calibrate the SWAT model. Using the aforementioned datasets, the performance of the hydrological model in estimating ETa was improved with both calibration techniques, achieving Nash-Sutcliffe efficiency (NSE) values >0.5 (0.73-0.85), percent bias (PBIAS) values within ±25% (±21.73%), and root mean squared error - observations standard deviation ratio (RSR) values <0.7 (0.39-0.52). However, the genetic algorithm technique was more effective for the ETa calibration while significantly reducing the model performance for estimating streamflow (NSE: 0.32-0.52, PBIAS: ±32.73%, and RSR: 0.63-0.82). Meanwhile, using the multi-variable technique, the model performance for estimating streamflow was maintained with a high level of accuracy (NSE: 0.59-0.61, PBIAS: ±13.70%, and RSR: 0.63-0.64) while the evapotranspiration estimates were improved. Results from this assessment show that incorporating remotely sensed, spatially distributed data can improve hydrological model performance when coupled with the right calibration technique.
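
    The NSE, PBIAS and RSR values quoted above follow standard definitions; a small sketch (not the authors' code) for computing them from paired observed and simulated series is given below with invented numbers.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias (positive values indicate underestimation with this sign convention)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

def rsr(obs, sim):
    """RMSE divided by the standard deviation of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.sqrt(np.sum((obs - sim) ** 2)) / np.sqrt(np.sum((obs - obs.mean()) ** 2))

obs = np.array([3.1, 2.7, 4.0, 5.2, 3.3])   # observed monthly streamflow (invented)
sim = np.array([2.9, 2.9, 3.8, 5.5, 3.1])   # simulated values (invented)
print(nse(obs, sim), pbias(obs, sim), rsr(obs, sim))
```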

  6. A nonlinear propagation model-based phase calibration technique for membrane hydrophones.

    PubMed

    Cooling, Martin P; Humphrey, Victor F

    2008-01-01

    A technique for the phase calibration of membrane hydrophones in the frequency range up to 80 MHz is described. This is achieved by comparing measurements and numerical simulation of a nonlinearly distorted test field. The field prediction is obtained using a finite-difference model that solves the nonlinear Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation in the frequency domain. The measurements are made in the far field of a 3.5 MHz focusing circular transducer, where it is demonstrated that, for the high drive level used, spatial averaging effects due to the hydrophone's finite receive area are negligible. The method provides a phase calibration of the hydrophone under test without the need for a device serving as a phase response reference, but it requires prior knowledge of the amplitude sensitivity at the fundamental frequency. The technique is demonstrated using a 50 µm thick bilaminar membrane hydrophone, for which the results obtained show functional agreement with predictions of a hydrophone response model. Further validation of the results is obtained by application of the response to the measurement of the high amplitude waveforms generated by a modern biomedical ultrasonic imaging system. It is demonstrated that full deconvolution of the calculated complex frequency response of a nonideal hydrophone results in physically realistic measurements of the transmitted waveforms.

  7. Multi-Model Combination techniques for Hydrological Forecasting: Application to Distributed Model Intercomparison Project Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ajami, N K; Duan, Q; Gao, X

    2005-04-11

    This paper examines several multi-model combination techniques: the Simple Multi-model Average (SMA), the Multi-Model Super Ensemble (MMSE), Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporated bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.
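
    For the two simplest schemes named above, a hedged sketch follows: an equal-weight Simple Multi-model Average and a weighted average whose weights are fitted to observations by non-negative least squares. The weighting details are illustrative and not necessarily DMIP's exact formulation; the data are invented.

```python
import numpy as np
from scipy.optimize import nnls

def simple_multimodel_average(predictions):
    """Equal-weight average; predictions has shape (n_models, n_times)."""
    return np.mean(predictions, axis=0)

def weighted_average(predictions, observations):
    """Non-negative least-squares weights, normalised to sum to one."""
    w, _ = nnls(np.asarray(predictions, float).T, np.asarray(observations, float))
    w = w / w.sum() if w.sum() > 0 else np.full(len(w), 1.0 / len(w))
    return w, w @ predictions

preds = np.array([[1.0, 2.0, 3.0, 2.5],   # model A streamflow (invented)
                  [1.2, 1.8, 3.4, 2.2],   # model B
                  [0.8, 2.3, 2.9, 2.7]])  # model C
obs = np.array([1.1, 2.0, 3.2, 2.4])
print(simple_multimodel_average(preds))
weights, combined = weighted_average(preds, obs)
print(weights, combined)
```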

  8. Pulsating stars and the distance scale

    NASA Astrophysics Data System (ADS)

    Macri, Lucas

    2017-09-01

    I present an overview of the latest results from the SH0ES project, which obtained homogeneous Hubble Space Telescope (HST) photometry in the optical and near-infrared for ~3500 and ~2300 Cepheids, respectively, across 19 supernova hosts and 4 calibrators to determine the value of H0 with a total uncertainty of 2.4%. I discuss the current 3.4σ "tension" between this local measurement and predictions of H0 based on observations of the CMB and the assumption of "standard" ΛCDM. I review ongoing efforts to reach σ(H0) = 1%, including recent advances on the absolute calibration of Milky Way Cepheid period-luminosity relations (PLRs) using a novel astrometric technique with HST. Lastly, I highlight recent results from another collaboration on the development of new statistical techniques to detect, classify and phase extragalactic Miras using noisy and sparsely-sampled observations. I present preliminary Mira PLRs at various wavelengths based on the application of these techniques to a survey of M33.

  9. ASD FieldSpec Calibration Setup and Techniques

    NASA Technical Reports Server (NTRS)

    Olive, Dan

    2001-01-01

    This paper describes the Analytical Spectral Devices (ASD) Fieldspec Calibration Setup and Techniques. The topics include: 1) ASD Fieldspec FR Spectroradiometer; 2) Components of Calibration; 3) Equipment list; 4) Spectral Setup; 5) Spectral Calibration; 6) Radiometric and Linearity Setup; 7) Radiometric setup; 8) Datasets Required; 9) Data files; and 10) Field of View Measurement. This paper is in viewgraph form.

  10. An Integrated Calibration Technique for Stereo Vision Systems (PREPRINT)

    DTIC Science & Technology

    2010-03-01

    An integrated calibration technique for stereo vision systems has been developed. To demonstrate and evaluate this calibration technique, multiple Wii Remotes (Wiimotes) from Nintendo were used to form stereo vision systems to perform 3D motion capture in real time. This integrated technique is a two-step process.

  11. Design of system calibration for effective imaging

    NASA Astrophysics Data System (ADS)

    Varaprasad Babu, G.; Rao, K. M. M.

    2006-12-01

    A CCD-based characterization setup comprising a light source, a CCD linear array, electronics for signal conditioning/amplification, and a PC interface has been developed to generate images at varying densities and at multiple view angles. This arrangement is used to simulate and evaluate images by the Super Resolution technique with multiple overlaps and yaw-rotated images at different view angles. This setup also generates images at different densities to analyze the response of the detector port-wise separately. The light intensity produced by the source needs to be calibrated for proper imaging by the high sensitive CCD detector over the FOV. One approach is to design a complex integrating sphere arrangement, which is costly for such applications. Another approach is to provide a suitable intensity feedback correction wherein the current through the lamp is controlled in a closed-loop arrangement. This method is generally used in applications where the light source is a point source. The third method is to control the time of exposure inversely to the lamp variations where the lamp intensity cannot be controlled. In this method, the light intensity at the start of each line is sampled and the correction factor is applied for the full line. The fourth method is to provide correction through a look-up table, where the response of all the detectors is normalized through the digital transfer function. The fifth method is to have a light line arrangement where light from a single source is distributed through multiple fiber optic cables arranged in a line. This is generally applicable and economical for low-width cases. In our application, a new method is used wherein an inverse multi-density filter is designed, providing effective calibration over the full swath even at low light intensities. The light intensity along the length is measured, an inverse density is computed, and a correction filter is generated and implemented in the CCD-based characterization setup. This paper describes novel techniques for the design and implementation of system calibration for effective imaging, producing better-quality data products, especially when handling high-resolution data.
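
    The look-up-table style correction mentioned above amounts to a per-detector gain normalisation derived from a measured flat-field response. The sketch below is a generic illustration of that idea with invented numbers; it is not the authors' inverse multi-density filter implementation.

```python
import numpy as np

def build_gain_lut(flat_response, eps=1e-6):
    """Per-pixel multiplicative gains that flatten a measured flat-field line."""
    flat = np.asarray(flat_response, float)
    return flat.mean() / np.maximum(flat, eps)

def apply_gain_lut(raw_line, gains):
    return np.asarray(raw_line, float) * gains

# Simulated non-uniform illumination across a linear CCD array.
n = 2048
true_scene = np.full(n, 100.0)
illumination = 1.0 - 0.3 * np.linspace(-1.0, 1.0, n) ** 2   # brighter at centre
measured_flat = 100.0 * illumination                        # flat-field exposure
gains = build_gain_lut(measured_flat)
corrected = apply_gain_lut(true_scene * illumination, gains)
print(corrected.min(), corrected.max())   # nearly constant across the line
```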

  12. Financial model calibration using consistency hints.

    PubMed

    Abu-Mostafa, Y S

    2001-01-01

    We introduce a technique for forcing the calibration of a financial model to produce valid parameters. The technique is based on learning from hints. It converts simple curve fitting into genuine calibration, where broad conclusions can be inferred from parameter values. The technique augments the error function of curve fitting with consistency hint error functions based on the Kullback-Leibler distance. We introduce an efficient EM-type optimization algorithm tailored to this technique. We also introduce other consistency hints, and balance their weights using canonical errors. We calibrate the correlated multifactor Vasicek model of interest rates, and apply it successfully to Japanese Yen swaps market and US dollar yield market.
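
    A toy illustration of the general idea, a curve-fitting loss augmented with a Kullback-Leibler consistency hint, is sketched below. The model, data and weighting are entirely synthetic placeholders and this is not the paper's Vasicek calibration or its EM-type optimizer.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import softmax

def kl(p, q, eps=1e-12):
    """Kullback-Leibler distance between two discrete distributions."""
    p, q = np.clip(p, eps, None), np.clip(q, eps, None)
    return float(np.sum(p * np.log(p / q)))

maturities = np.array([1.0, 2.0, 5.0, 10.0])
observed_yields = np.array([0.010, 0.013, 0.018, 0.022])
empirical_dist = np.array([0.2, 0.5, 0.3])   # e.g. a rate-change histogram

def model_yields(theta):
    a, b = theta[:2]
    return a + b * np.log1p(maturities)      # toy yield-curve model

def model_dist(theta):
    return softmax(theta[2:])                # toy model-implied distribution

def loss(theta, lam=0.1):
    fit = np.mean((model_yields(theta) - observed_yields) ** 2)   # curve fitting
    hint = kl(model_dist(theta), empirical_dist)                  # consistency hint
    return fit + lam * hint

res = minimize(loss, x0=np.zeros(5), method="Nelder-Mead")
print(res.x, loss(res.x))
```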

  13. Application of remotely sensed multispectral data to automated analysis of marshland vegetation. Inference to the location of breeding habitats of the salt marsh mosquito (Aedes Sollicitans)

    NASA Technical Reports Server (NTRS)

    Cibula, W. G.

    1976-01-01

    The techniques used for the automated classification of marshland vegetation and for the color-coded display of remotely acquired data to facilitate the control of mosquito breeding are presented. A multispectral scanner system and its mode of operation are described, and the computer processing techniques are discussed. The procedures for the selection of calibration sites are explained. Three methods for displaying color-coded classification data are presented.

  14. HIGH-FIDELITY RADIO ASTRONOMICAL POLARIMETRY USING A MILLISECOND PULSAR AS A POLARIZED REFERENCE SOURCE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Straten, W., E-mail: vanstraten.willem@gmail.com

    2013-01-15

    A new method of polarimetric calibration is presented in which the instrumental response is derived from regular observations of PSR J0437-4715 based on the assumption that the mean polarized emission from this millisecond pulsar remains constant over time. The technique is applicable to any experiment in which high-fidelity polarimetry is required over long timescales; it is demonstrated by calibrating 7.2 years of high-precision timing observations of PSR J1022+1001 made at the Parkes Observatory. Application of the new technique followed by arrival time estimation using matrix template matching yields post-fit residuals with an uncertainty-weighted standard deviation of 880 ns, two timesmore » smaller than that of arrival time residuals obtained via conventional methods of calibration and arrival time estimation. The precision achieved by this experiment yields the first significant measurements of the secular variation of the projected semimajor axis, the precession of periastron, and the Shapiro delay; it also places PSR J1022+1001 among the 10 best pulsars regularly observed as part of the Parkes Pulsar Timing Array (PPTA) project. It is shown that the timing accuracy of a large fraction of the pulsars in the PPTA is currently limited by the systematic timing error due to instrumental polarization artifacts. More importantly, long-term variations of systematic error are correlated between different pulsars, which adversely affects the primary objectives of any pulsar timing array experiment. These limitations may be overcome by adopting the techniques presented in this work, which relax the demand for instrumental polarization purity and thereby have the potential to reduce the development cost of next-generation telescopes such as the Square Kilometre Array.« less

  15. Flash X-ray with image enhancement applied to combustion events

    NASA Astrophysics Data System (ADS)

    White, K. J.; McCoy, D. G.

    1983-10-01

    Flow visualization of interior ballistic processes by use of X-rays has placed more stringent requirements on flash X-ray techniques. The problem of improving radiographic contrast of propellants in X-ray transparent chambers was studied by devising techniques for evaluating, measuring and reducing the effects of scattering from both the test object and structures in the test area. X-ray film and processing is reviewed and techniques for evaluating and calibrating these are outlined. Finally, after X-ray techniques were optimized, the application of image enhancement processing which can improve image quality is described. This technique was applied to X-ray studies of the combustion of very high burning rate (VHBR) propellants and stick propellant charges.

  16. Self-Calibration and Laser Energy Monitor Validations for a Double-Pulsed 2-Micron CO2 Integrated Path Differential Absorption Lidar Application

    NASA Technical Reports Server (NTRS)

    Refaat, Tamer F.; Singh, Upendra N.; Petros, Mulugeta; Remus, Ruben; Yu, Jirong

    2015-01-01

    Double-pulsed 2-micron integrated path differential absorption (IPDA) lidar is well suited for atmospheric CO2 remote sensing. The IPDA lidar technique relies on wavelength differentiation between strong and weak absorbing features of the gas normalized to the transmitted energy. In the double-pulse case, each shot of the transmitter produces two successive laser pulses separated by a short interval. Calibration of the transmitted pulse energies is required for accurate CO2 measurement. Design and calibration of a 2-micron double-pulse laser energy monitor is presented. The design is based on an InGaAs pin quantum detector. A high-speed photo-electromagnetic quantum detector was used for laser-pulse profile verification. Both quantum detectors were calibrated using a reference pyroelectric thermal detector. Calibration included comparing the three detection technologies in the single-pulsed mode, then comparing the quantum detectors in the double-pulsed mode. In addition, a self-calibration feature of the 2-micron IPDA lidar is presented. This feature allows one to monitor the transmitted laser energy, through residual scattering, with a single detection channel. This reduces the CO2 measurement uncertainty. IPDA lidar ground validation for CO2 measurement is presented for both calibrated energy monitor and self-calibration options. The calibrated energy monitor resulted in a lower CO2 measurement bias, while self-calibration resulted in a better CO2 temporal profiling when compared to the in situ sensor.
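
    The role of the calibrated energy monitor can be seen in the generic two-wavelength IPDA relation, in which the received returns are normalised by the transmitted pulse energies. The sketch below states that standard relation with invented numbers; it is not the instrument's processing code.

```python
import math

def differential_optical_depth(P_on, P_off, E_on, E_off):
    """One-way differential absorption optical depth for one on/off pulse pair.

    P_on, P_off: received return signals at the strong and weak absorption
    wavelengths; E_on, E_off: transmitted pulse energies from the energy monitor.
    """
    return 0.5 * math.log((P_off / P_on) * (E_on / E_off))

# Illustrative numbers only.
print(differential_optical_depth(P_on=0.62, P_off=1.00, E_on=1.02, E_off=1.00))
```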

  17. Light-Field Correction for Spatial Calibration of Optical See-Through Head-Mounted Displays.

    PubMed

    Itoh, Yuta; Klinker, Gudrun

    2015-04-01

    A critical requirement for AR applications with Optical See-Through Head-Mounted Displays (OST-HMD) is to project 3D information correctly into the current viewpoint of the user - more particularly, according to the user's eye position. Recently-proposed interaction-free calibration methods [16], [17] automatically estimate this projection by tracking the user's eye position, thereby freeing users from tedious manual calibrations. However, these methods are still prone to systematic calibration errors. Such errors stem from eye-/HMD-related factors and are not represented in the conventional eye-HMD model used for HMD calibration. This paper investigates one of these factors - the fact that optical elements of OST-HMDs distort incoming world-light rays before they reach the eye, just as corrective glasses do. Any OST-HMD requires an optical element to display a virtual screen. Each such optical element has different distortions. Since users see a distorted world through the element, ignoring this distortion degrades the projection quality. We propose a light-field correction method, based on a machine learning technique, which compensates for the world-scene distortion caused by OST-HMD optics. We demonstrate that our method reduces the systematic error and significantly increases the calibration accuracy of the interaction-free calibration.

  18. A convenient technique for polarimetric calibration of single-antenna radar systems

    NASA Technical Reports Server (NTRS)

    Sarabandi, Kamal; Ulaby, Fawwaz T.

    1990-01-01

    A practical technique for calibrating single-antenna polarimetric radar systems is introduced. This technique requires only a single calibration target such as a conducting sphere or a trihedral corner reflector to calibrate the radar system, both in amplitude and phase, for all linear polarization configurations. By using a metal sphere, which is orientation independent, error in calibration measurement is minimized while simultaneously calibrating the crosspolarization channels. The antenna system and two orthogonal channels (in free space) are modeled as a four-port passive network. Upon using the reciprocity relations for the passive network and assuming the crosscoupling terms of the antenna to be equal, the crosstalk factors of the antenna system and the transmit and receive channel imbalances can be obtained from measurement of the backscatter from a metal sphere. For an X-band radar system with crosspolarization isolation of 25 dB, comparison of values measured for a sphere and a cylinder with theoretical values shows agreement within 0.4 dB in magnitude and 5 deg in phase. An effective polarization isolation of 50 dB is achieved using this calibration technique.

  19. Foreword to the Special Issue on the 11th Specialist Meeting on Microwave Radiometry and Remote Sensing Applications (MicroRad 2010)

    NASA Technical Reports Server (NTRS)

    Le Vine, David M; Jackson, Thomas J.; Kim, Edward J.; Lang, Roger H.

    2011-01-01

    The Specialist Meeting on Microwave Radiometry and Remote Sensing of the Environment (MicroRad 2010) was held in Washington, DC from March 1 to 4, 2010. The objective of MicroRad 2010 was to provide an open forum to report and discuss recent advances in the field of microwave radiometry, particularly with application to remote sensing of the environment. The meeting was highly successful, with more than 200 registrations representing 48 countries. There were 80 oral presentations and more than 100 posters. MicroRad has become a venue for the microwave radiometry community to present new research results, instrument designs, and applications to an audience that is conversant in these issues. The meeting was divided into 16 sessions (listed in order of presentation): 1) SMOS Mission; 2) Future Passive Microwave Remote Sensing Missions; 3) Theory and Physical Principles of Electromagnetic Models; 4) Field Experiment Results; 5) Soil Moisture and Vegetation; 6) Snow and Cryosphere; 7) Passive/Active Microwave Remote Sensing Synergy; 8) Oceans; 9) Atmospheric Sounding and Assimilation; 10) Clouds and Precipitation; 11) Instruments and Advanced Techniques I; 12) Instruments and Advanced Techniques II; 13) Cross Calibration of Satellite Radiometers; 14) Calibration Theory and Methodology; 15) New Technologies for Microwave Radiometry; 16) Radio Frequency Interference.

  20. Developments in laser Doppler blood perfusion monitoring

    NASA Astrophysics Data System (ADS)

    Leahy, Martin J.; de Mul, Frits F. M.; Nilsson, Gert E.; Maniewski, Roman; Liebert, Adam

    2003-03-01

    This paper reviews the development and use of laser Doppler perfusion monitors and imagers. Despite their great success and almost universal applicability in microcirculation research, they have had great difficulty in converting to widespread clinical application. The enormous interest in microvascular blood perfusion coupled with the 'ease of use' of the technique has led to 2000+ publications citing its use. However, useful results can only be achieved with an understanding of the basic principles of the instrumentation and its application in the various clinical disciplines. The basic technical background is explored and definitions of blood perfusion and laser Doppler perfusion are established. The calibration method is then described together with potential routes to standardisation. A guide to the limitations in application of the technique gives the user a clear indication of what can be achieved in new studies as well as possible inadequacy in some published investigations. Finally some clinical applications have found acceptability and these will be explored.

  1. An efficient swarm intelligence approach to feature selection based on invasive weed optimization: Application to multivariate calibration and classification using spectroscopic data

    NASA Astrophysics Data System (ADS)

    Sheykhizadeh, Saheleh; Naseri, Abdolhossein

    2018-04-01

    Variable selection plays a key role in classification and multivariate calibration. Variable selection methods aim to choose a set of variables, from a large pool of available predictors, that is relevant to estimating analyte concentrations or to achieving better classification results. Many variable selection techniques have been introduced, among which those based on swarm intelligence optimization have received particular attention over the last few decades, since they are mainly inspired by nature. In this work, a simple new variable selection algorithm is proposed based on the invasive weed optimization (IWO) concept. IWO is a bio-inspired metaheuristic mimicking the ecological behavior of weeds in colonizing and finding an appropriate place for growth and reproduction; it has been shown to be very adaptive and robust to environmental changes. In this paper, the first application of IWO, as a very simple and powerful method, to variable selection is reported using different experimental datasets, including FTIR and NIR data, to undertake classification and multivariate calibration tasks. Accordingly, invasive weed optimization-linear discriminant analysis (IWO-LDA) and invasive weed optimization-partial least squares (IWO-PLS) are introduced for multivariate classification and calibration, respectively.

  2. An efficient swarm intelligence approach to feature selection based on invasive weed optimization: Application to multivariate calibration and classification using spectroscopic data.

    PubMed

    Sheykhizadeh, Saheleh; Naseri, Abdolhossein

    2018-04-05

    Variable selection plays a key role in classification and multivariate calibration. Variable selection methods aim to choose a set of variables, from a large pool of available predictors, that is relevant to estimating analyte concentrations or to achieving better classification results. Many variable selection techniques have been introduced, among which those based on swarm intelligence optimization have received particular attention over the last few decades, since they are mainly inspired by nature. In this work, a simple new variable selection algorithm is proposed based on the invasive weed optimization (IWO) concept. IWO is a bio-inspired metaheuristic mimicking the ecological behavior of weeds in colonizing and finding an appropriate place for growth and reproduction; it has been shown to be very adaptive and robust to environmental changes. In this paper, the first application of IWO, as a very simple and powerful method, to variable selection is reported using different experimental datasets, including FTIR and NIR data, to undertake classification and multivariate calibration tasks. Accordingly, invasive weed optimization-linear discriminant analysis (IWO-LDA) and invasive weed optimization-partial least squares (IWO-PLS) are introduced for multivariate classification and calibration, respectively. Copyright © 2018 Elsevier B.V. All rights reserved.

  3. Dynamic photogrammetric calibration of industrial robots

    NASA Astrophysics Data System (ADS)

    Maas, Hans-Gerd

    1997-07-01

    Today's developments in industrial robots focus on aims such as increased flexibility, improved interaction between robots and reduced down-times. A very important method to achieve these goals is the use of off-line programming techniques. In contrast to conventional teach-in robot programming techniques, where sequences of actions are defined step-by-step via remote control on the real object, off-line programming techniques design complete robot (inter-)action programs in a CAD/CAM environment. This poses high requirements on the geometric accuracy of a robot. While the repeatability of robot poses in the teach-in mode is often better than 0.1 mm, the absolute pose accuracy potential of industrial robots is usually much worse due to tolerances, eccentricities, elasticities, play, wear-out, load, temperature and insufficient knowledge of model parameters for the transformation from poses into robot axis angles. This fact necessitates robot calibration techniques, including the formulation of a robot model describing kinematics and dynamics of the robot, and a measurement technique to provide reference data. Digital photogrammetry, as an accurate, economical technique with real-time potential, offers itself for this purpose. The paper analyzes the requirements posed on a measurement technique by industrial robot calibration tasks. After an overview of measurement techniques used for robot calibration purposes in the past, a photogrammetric robot calibration system based on off-the-shelf, low-cost hardware components will be shown and results of pilot studies will be discussed. Besides aspects of accuracy, reliability and self-calibration in a fully automatic dynamic photogrammetric system, real-time capabilities are discussed. In the pilot studies, standard deviations of 0.05-0.25 mm in the three coordinate directions could be achieved over a robot work range of 1.7 m x 1.5 m x 1.0 m. The real-time capabilities of the technique allow going beyond kinematic robot calibration and performing dynamic robot calibration as well as photogrammetric on-line control of a robot in action.

  4. Non-Contact Thrust Stand Calibration Method for Repetitively-Pulsed Electric Thrusters

    NASA Technical Reports Server (NTRS)

    Wong, Andrea R.; Toftul, Alexandra; Polzin, Kurt A.; Pearson, J. Boise

    2011-01-01

    A thrust stand calibration technique for use in testing repetitively-pulsed electric thrusters for in-space propulsion has been developed and tested using a modified hanging pendulum thrust stand. In the implementation of this technique, current pulses are applied to a solenoidal coil to produce a pulsed magnetic field that acts against the magnetic field produced by a permanent magnet mounted to the thrust stand pendulum arm. The force on the magnet is applied in this non-contact manner, with the entire pulsed force transferred to the pendulum arm through a piezoelectric force transducer to provide a time-accurate force measurement. Modeling of the pendulum arm dynamics reveals that after an initial transient in thrust stand motion the quasi-steady average deflection of the thrust stand arm away from the unforced or zero position can be related to the average applied force through a simple linear Hooke's law relationship. Modeling demonstrates that this technique is universally applicable except when the pulsing period is increased to the point where it approaches the period of natural thrust stand motion. Calibration data were obtained using a modified hanging pendulum thrust stand previously used for steady-state thrust measurements. Data were obtained for varying impulse bit at constant pulse frequency and for varying pulse frequency. The two data sets exhibit excellent quantitative agreement with each other as the constant relating average deflection and average thrust match within the errors on the linear regression curve fit of the data. Quantitatively, the error on the calibration coefficient is roughly 1% of the coefficient value.
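
    The calibration step described above reduces to a linear regression of quasi-steady average deflection against known average applied force. A minimal sketch with invented numbers follows; it shows how the Hooke's-law coefficient and a subsequent thrust estimate would be obtained.

```python
import numpy as np

# Known average applied forces and measured quasi-steady average deflections.
applied_avg_force_mN = np.array([1.0, 2.0, 4.0, 6.0, 8.0])
measured_avg_deflection_um = np.array([5.2, 10.1, 19.8, 30.4, 40.1])

# deflection = (1/k) * force + offset: fit slope and intercept, then invert.
slope, intercept = np.polyfit(applied_avg_force_mN, measured_avg_deflection_um, 1)
k = 1.0 / slope                                   # calibration coefficient, mN per um
residuals = measured_avg_deflection_um - (slope * applied_avg_force_mN + intercept)
print(k, intercept, np.std(residuals))

# Average thrust inferred from a later measured average deflection of 25 um.
print(k * (25.0 - intercept))
```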

  5. 3D-calibration of three- and four-sensor hot-film probes based on collocated sonic using neural networks

    NASA Astrophysics Data System (ADS)

    Kit, Eliezer; Liberzon, Dan

    2016-09-01

    High resolution measurements of turbulence in the atmospheric boundary layer (ABL) are critical to the understanding of physical processes and parameterization of important quantities, such as the turbulent kinetic energy dissipation. Low spatio-temporal resolution of standard atmospheric instruments, sonic anemometers and LIDARs, limits their suitability for fine-scale measurements of ABL. The use of miniature hot-films is an alternative technique, although such probes require frequent calibration, which is logistically untenable in field setups. Accurate and faithful calibration is crucial for multi-hot-film applications in atmospheric studies, because the ability to conduct calibration in situ ultimately determines the quality of the turbulence measurements. Kit et al (2010 J. Atmos. Ocean. Technol. 27 23-41) described a novel methodology for calibration of hot-film probes using a collocated sonic anemometer combined with a neural network (NN) approach. An important step in the algorithm is the generation of a calibration set for NN training by appropriate low-pass filtering of the high-resolution voltages measured by the hot-film sensors and the low-resolution velocities acquired by the sonic. In Kit et al (2010 J. Atmos. Ocean. Technol. 27 23-41), Kit and Grits (2011 J. Atmos. Ocean. Technol. 28 104-10) and Vitkin et al (2014 Meas. Sci. Technol. 25 75801), the authors reported on successful use of this approach for in situ calibration, but also on the method's limitations and restricted range of applicability. In their earlier work, a jet facility and a probe consisting of two orthogonal x-hot-films were used for calibration and for full dataset generation. In the current work, a comprehensive laboratory study of 3D-calibration of two multi-hot-film probes (triple- and four-sensor) using a grid flow was conducted. The probes were embedded in a collocated sonic, and their relative pitch and yaw orientation to the mean flow was changed by means of motorized traverses. The study demonstrated that NN-calibration is a powerful tool for calibration of multi-sensor 3D-hot film probes embedded in a collocated sonic, and can be employed in long-lasting field campaigns.
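
    A hedged sketch of the calibration idea follows: hot-film voltages are low-pass filtered and downsampled to the sonic rate, a neural network is trained to map the filtered voltages to the sonic velocity components, and the trained network is then applied to the full-rate voltages. The sensor count, sampling rates and network size below are illustrative, not those of the cited studies.

```python
import numpy as np
from scipy.signal import decimate
from sklearn.neural_network import MLPRegressor

fs_hot, fs_sonic = 200, 20          # Hz, illustrative sampling rates
q = fs_hot // fs_sonic              # decimation factor

rng = np.random.default_rng(1)
volts = rng.normal(size=(fs_hot * 60, 3))         # 3 hot-film channels, 60 s
sonic_uvw = rng.normal(size=(fs_sonic * 60, 3))   # collocated sonic, u/v/w

# Low-pass filter and downsample the voltages to the sonic rate (training set).
volts_lp = np.column_stack([decimate(volts[:, i], q) for i in range(3)])

net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0)
net.fit(volts_lp, sonic_uvw)                      # the in situ "calibration"

# Apply the calibration to the full-rate voltages to recover fine-scale velocity.
uvw_highrate = net.predict(volts)
print(uvw_highrate.shape)
```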

  6. Evaluation of Piecewise Polynomial Equations for Two Types of Thermocouples

    PubMed Central

    Chen, Andrew; Chen, Chiachung

    2013-01-01

    Thermocouples are the most frequently used sensors for temperature measurement because of their wide applicability, long-term stability and high reliability. However, one of the major utilization problems is the linearization of the transfer relation between temperature and output voltage of thermocouples. The linear calibration equation and its modules could be improved by using regression analysis to help solve this problem. In this study, two types of thermocouple and five temperature ranges were selected to evaluate the fitting agreement of different-order polynomial equations. Two quantitative criteria, the average of the absolute error values |e|ave and the standard deviation of the calibration equation estd, were used to evaluate the accuracy and precision of these calibration equations. The optimal order of polynomial equations differed with the temperature range. The accuracy and precision of the calibration equation could be improved significantly with an adequate higher-degree polynomial equation. The technique could be applied with hardware modules to serve as an intelligent sensor for temperature measurement. PMID:24351627
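
    A small sketch of the evaluation described above: polynomial calibration equations of increasing order are fitted to voltage-temperature pairs and compared using the two criteria, the average absolute error |e|ave and the standard deviation of the calibration equation estd. The data values are invented.

```python
import numpy as np

voltage_mV = np.array([0.0, 0.4, 0.8, 1.2, 1.6, 2.0, 2.4, 2.8, 3.2, 3.6])
temp_C     = np.array([0.0, 9.8, 19.9, 29.7, 40.1, 50.2, 59.8, 70.3, 80.1, 89.9])

for order in (1, 2, 3):
    coeffs = np.polyfit(voltage_mV, temp_C, order)
    e = temp_C - np.polyval(coeffs, voltage_mV)
    e_ave = np.mean(np.abs(e))                                  # |e|ave
    e_std = np.sqrt(np.sum(e**2) / (len(e) - (order + 1)))      # residual std deviation
    print(f"order {order}: |e|ave = {e_ave:.3f} C, e_std = {e_std:.3f} C")
```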

  7. Novel Hyperspectral Sun Photometer for Satellite Remote Sensing Data Radiometric Calibration and Atmospheric Aerosol Studies

    NASA Technical Reports Server (NTRS)

    Pagnutti, Mary; Ryan, Robert E.; Holekamp, Kara; Harrington, Gary; Frisbie, Troy

    2006-01-01

    A simple, cost-effective hyperspectral sun photometer for radiometric vicarious remote sensing system calibration, air quality monitoring, and potentially in-situ planetary climatological studies was developed. The device was constructed solely from off-the-shelf components and was designed to be easily deployable for support of short-term verification and validation data collects. This sun photometer not only provides the same data products as existing multi-band sun photometers but also offers the potential for hyperspectral optical depth and diffuse-to-global products. As compared to traditional sun photometers, this device requires a simpler setup and less data acquisition time, and allows for a more direct calibration approach. Fielding this instrument has also enabled Stennis Space Center (SSC) Applied Sciences Directorate personnel to cross-calibrate existing sun photometers. This innovative research will position SSC personnel to perform air quality assessments in support of the NASA Applied Sciences Program's National Applications program element as well as to develop techniques to evaluate aerosols in a Martian or other planetary atmosphere.

  8. Contact-free calibration of an asymmetric multi-layer interferometer for the surface force balance

    NASA Astrophysics Data System (ADS)

    Balabajew, Marco; van Engers, Christian D.; Perkin, Susan

    2017-12-01

    The Surface Force Balance (SFB, also known as Surface Force Apparatus, SFA) has provided important insights into many phenomena within the field of colloid and interface science. The technique relies on using white light interferometry to measure the distance between surfaces with sub-nanometer resolution. Up until now, the determination of the distance between the surfaces required a so-called "contact calibration," an invasive procedure during which the surfaces are brought into mechanical contact. This requirement for a contact calibration limits the range of experimental systems that can be investigated with SFB, for example, it precludes experiments with substrates that would be irreversibly modified or damaged by mechanical contact. Here we present a non-invasive method to measure absolute distances without performing a contact calibration. The method can be used for both "symmetric" and "asymmetric" systems. We foresee many applications for this general approach including, most immediately, experiments using single layer graphene electrodes in the SFB which may be damaged when brought into mechanical contact.

  9. Application of Fluorescence Spectrometry With Multivariate Calibration to the Enantiomeric Recognition of Fluoxetine in Pharmaceutical Preparations.

    PubMed

    Poláček, Roman; Májek, Pavel; Hroboňová, Katarína; Sádecká, Jana

    2016-04-01

    Fluoxetine is the most prescribed chiral antidepressant drug worldwide. Its enantiomers have a different duration of serotonin inhibition. A novel, simple and rapid method for determination of the enantiomeric composition of fluoxetine in pharmaceutical pills is presented. Specifically, emission, excitation, and synchronous fluorescence techniques were employed to obtain the spectral data, which were investigated with multivariate calibration methods, namely principal component regression (PCR) and partial least squares (PLS). The chiral recognition of fluoxetine enantiomers in the presence of β-cyclodextrin was based on diastereomeric complexes. The results of the multivariate calibration modeling indicated good prediction abilities. The obtained results for tablets were compared with those from chiral HPLC, and no significant differences were shown by Fisher's (F) test and Student's t-test. The smallest residuals between reference or nominal values and predicted values were achieved by multivariate calibration of synchronous fluorescence spectral data. This conclusion is supported by calculated values of the figure of merit.

  10. SU-E-T-118: Dose Verification for Accuboost Applicators Using TLD, Ion Chamber and Gafchromic Film Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chisela, W; Yao, R; Dorbu, G

    Purpose: To verify dose delivered with HDR Accuboost applicators using TLD, ion chamber and Gafchromic film measurements and to examine applicator leakage. Methods: A microSelectron HDR unit was used to deliver a dose of 50cGy to the mid-plane of a 62mm thick solid water phantom using dwell times from Monte Carlo pre-calculated nomograms for the 60mm Round, 70mm Round and 60mm Skin-Dose Optimized (SDO) applicators, respectively. GafChromic EBT3+ film was embedded in the phantom midplane horizontally to measure dose distribution. Absolute dose was also measured with TLDs and an ADCL calibrated parallel-plate ion chamber placed in the film plane at field center for each applicator. The film was calibrated using a 6MV x-ray beam. TLDs were calibrated in a Cs-137 source at the UW-Madison calibration laboratory. Radiation leakage through the tungsten alloy shell was measured with a film wrapped around the outside surface of a 60mm Round applicator. Results: Measured maximum doses at field center are consistently lower than predicted by 5.8% for TLD, 8.8% for ion chamber, and 2.6% for EBT3+ film on average, with measurement uncertainties of 2.2%, 0.3%, and 2.9% for TLD, chamber, film respectively. The total standard uncertainties for ion chamber and Gafchromic film measurement are 4.9% and 4.6% respectively [1]. The area defined by the applicator aperture was covered by 80% of maximum dose for 62mm compression thickness. When 100cGy is delivered to mid-plane with a 60mm Round applicator, surface dose ranges from 60cGy to a maximum of 145cGy, which occurs at source entrance to the applicator. Conclusion: Measured doses by all three techniques are consistently lower than predicted in our measurements. For a compression thickness of 62 mm, the field size defined by the applicator is only covered by 80% of prescribed dose. Radiation leakage of up to 145cGy was found at the source entrance of applicators.

  11. Systematic Calibration for a Backpacked Spherical Photogrammetry Imaging System

    NASA Astrophysics Data System (ADS)

    Rau, J. Y.; Su, B. W.; Hsiao, K. W.; Jhan, J. P.

    2016-06-01

    A spherical camera can observe the environment with almost a 720-degree field of view in one shot, which is useful for augmented reality, environment documentation, or mobile mapping applications. This paper aims to develop a spherical photogrammetry imaging system for the purpose of 3D measurement through a backpacked mobile mapping system (MMS). The equipment used includes a Ladybug-5 spherical camera, a tactical-grade positioning and orientation system (POS), i.e. SPAN-CPT, and an odometer. This research aims to directly apply the photogrammetric space intersection technique for 3D mapping from a spherical image stereo-pair. For this purpose, several systematic calibration procedures are required, including lens distortion calibration, relative orientation calibration, boresight calibration for direct georeferencing, and spherical image calibration. Lens distortion is severe in the Ladybug-5 camera's six original images. For spherical image mosaicking from these six original images, we propose using their relative orientation and correcting their lens distortion at the same time. However, the constructed spherical image still contains systematic error, which will reduce the 3D measurement accuracy. Later, for direct georeferencing purposes, we need to establish a ground control field for boresight/lever-arm calibration. Then, we can apply the calibrated parameters to obtain the exterior orientation parameters (EOPs) of all spherical images. In the end, the 3D positioning accuracy after space intersection will be evaluated, including EOPs obtained by the structure-from-motion method.

  12. Measurements in liquid fuel sprays

    NASA Technical Reports Server (NTRS)

    Chigier, N.

    1984-01-01

    Techniques for studying the events directly preceding combustion in the liquid fuel sprays are being used to provide information as a function of space and time on droplet size, shape, number density, position, angle of flight and velocity. Spray chambers were designed and constructed for: (1) air-assist liquid fuel research sprays; (2) high pressure and temperature chamber for pulsed diesel fuel sprays; and (3) coal-water slurry sprays. Recent results utilizing photography, cinematography, and calibration of the Malvern particle sizer are reported. Systems for simultaneous measurement of velocity and particle size distributions using laser Doppler anemometry interferometry and the application of holography in liquid fuel sprays are being calibrated.

  13. MONTE CARLO SIMULATION OF THE BREMSSTRAHLUNG RADIATION FOR THE MEASUREMENT OF AN INTERNAL CONTAMINATION WITH PURE-BETA EMITTERS IN VIVO.

    PubMed

    Fantínová, K; Fojtík, P; Malátová, I

    2016-09-01

    Rapid measurement techniques are required for a large-scale emergency monitoring of people. In vivo measurement of the bremsstrahlung radiation produced by incorporated pure-beta emitters can offer a rapid technique for the determination of such radionuclides in the human body. This work presents a method for the calibration of spectrometers, based on the use of UPh-02T (so-called IGOR) phantom and specific (90)Sr/(90)Y sources, which can account for recent as well as previous contaminations. The process of the whole- and partial-body counter calibration in combination with application of a Monte Carlo code offers readily extension also to other pure-beta emitters and various exposure scenarios. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  14. A relative-intensity two-color phosphor thermography system

    NASA Technical Reports Server (NTRS)

    Merski, N. Ronald

    1991-01-01

    The NASA LaRC has developed a relative-intensity two-color phosphor thermography system. This system has become a standard technique for acquiring aerothermodynamic data in LaRC Hypersonic Facilities Complex (HFC). The relative intensity theory and its application to the LaRC phosphor thermography system is discussed along with the investment casting technique which is critical to the utilization of the phosphor method for aerothermodynamic studies. Various approaches to obtaining quantitative heat transfer data using thermographic phosphors are addressed and comparisons between thin-film data and thermographic phosphor data on an orbiter-like configuration are presented. In general, data from these two techniques are in good agreement. A discussion is given on the application of phosphors to integration heat transfer data reduction techniques (the thin film method) and preliminary heat transfer data obtained on a calibration sphere using thin-film equations are presented. Finally, plans for a new phosphor system which uses target recognition software are discussed.

  15. Recent flight-test results of optical airdata techniques

    NASA Technical Reports Server (NTRS)

    Bogue, Rodney K.

    1993-01-01

    Optical techniques for measuring airdata parameters were demonstrated with promising results on high performance fighter aircraft. These systems can measure the airspeed vector, and some are not as dependent on special in-flight calibration processes as current systems. Optical concepts for measuring freestream static temperature and density are feasible for in-flight applications. The best feature of these concepts is that the air data measurements are obtained nonintrusively, and for the most part well into the freestream region of the flow field about the aircraft. Current requirements for measuring air data at high angle of attack, and future need to measure the same information at hypersonic flight conditions place strains on existing techniques. Optical technology advances show outstanding potential for application in future programs and promise to make common use of optical concepts a reality. Results from several flight-test programs are summarized, and the technology advances required to make optical airdata techniques practical are identified.

  16. Enhanced anatomical calibration in human movement analysis.

    PubMed

    Donati, Marco; Camomilla, Valentina; Vannozzi, Giuseppe; Cappozzo, Aurelio

    2007-07-01

    The representation of human movement requires knowledge of both movement and morphology of bony segments. The determination of subject-specific morphology data and their registration with movement data is accomplished through an anatomical calibration procedure (calibrated anatomical systems technique: CAST). This paper describes a novel approach to this calibration (UP-CAST) which, as compared with normally used techniques, achieves better repeatability and a shorter application time, and can be effectively performed by non-skilled examiners. Instead of the manual location of prominent bony anatomical landmarks, the description of which is affected by subjective interpretation, a large number of unlabelled points is acquired over prominent parts of the subject's bone, using a wand fitted with markers. A digital model of a template-bone is then submitted to isomorphic deformation and re-orientation to optimally match the above-mentioned points. The locations of anatomical landmarks are automatically made available. The UP-CAST was validated considering the femur as a paradigmatic case. Intra- and inter-examiner repeatability of the identification of anatomical landmarks was assessed both in vivo, using average weight subjects, and on bare bones. Accuracy of the identification was assessed using the anatomical landmark locations manually located on bare bones as reference. The repeatability of this method was markedly higher than that reported in the literature and obtained using conventional palpation (ranges: 0.9-7.6 mm and 13.4-17.9, respectively). Accuracy assessment yielded, on average, a maximal error of 11 mm. Results suggest that the principal source of variability resides in the discrepancy between the subject's and the template bone morphology and not in the inter-examiner differences. The UP-CAST anatomical calibration could be considered a promising alternative to conventional calibration, contributing to more repeatable 3D human movement analysis.

  17. Application of Temperature Sensitivities During Iterative Strain-Gage Balance Calibration Analysis

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2011-01-01

    A new method is discussed that may be used to correct wind tunnel strain-gage balance load predictions for the influence of residual temperature effects at the location of the strain-gages. The method was designed for the iterative analysis technique that is used in the aerospace testing community to predict balance loads from strain-gage outputs during a wind tunnel test. The new method implicitly applies temperature corrections to the gage outputs during the load iteration process. Therefore, it can use uncorrected gage outputs directly as input for the load calculations. The new method is applied in several steps. First, balance calibration data is analyzed in the usual manner assuming that the balance temperature was kept constant during the calibration. Then, the temperature difference relative to the calibration temperature is introduced as a new independent variable for each strain-gage output. Therefore, sensors must exist near the strain-gages so that the required temperature differences can be measured during the wind tunnel test. In addition, the format of the regression coefficient matrix needs to be extended so that it can support the new independent variables. In the next step, the extended regression coefficient matrix of the original calibration data is modified by using the manufacturer specified temperature sensitivity of each strain-gage as the regression coefficient of the corresponding temperature difference variable. Finally, the modified regression coefficient matrix is converted to a data reduction matrix that the iterative analysis technique needs for the calculation of balance loads. Original calibration data and modified check load data of NASA's MC60D balance are used to illustrate the new method.
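
    A greatly simplified linear analogue of the idea (the actual method works on the full iterative regression and data-reduction matrices) is sketched below: gage outputs are modelled as load sensitivities times loads plus a temperature-sensitivity term, so solving for the loads implicitly removes the residual temperature effect. All matrices and sensitivities are invented.

```python
import numpy as np

S = np.array([[2.0, 0.1],       # gage output per unit load (2 gages, 2 load components)
              [0.2, 1.5]])
k_T = np.array([0.05, -0.03])   # temperature sensitivity of each gage output

def loads_from_outputs(outputs, dT):
    """Solve S @ loads + k_T * dT = outputs for the loads."""
    return np.linalg.solve(S, np.asarray(outputs, float) - k_T * dT)

true_loads = np.array([10.0, 5.0])
dT = 8.0                                    # temperature difference from calibration
outputs = S @ true_loads + k_T * dT         # what the gages actually report
print(loads_from_outputs(outputs, dT))      # recovers [10, 5]
print(np.linalg.solve(S, outputs))          # ignoring dT gives biased loads
```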

  18. Fiber optic and laser sensors IV; Proceedings of the Meeting, Cambridge, MA, Sept. 22-24, 1986

    NASA Technical Reports Server (NTRS)

    De Paula, Ramon P. (Editor); Udd, Eric (Editor)

    1987-01-01

    The conference presents papers on industrial uses of fiber optic sensors, point and distributed polarimetric optical fiber sensors, fiber optic electric field sensor technology, micromachined resonant structures, single-mode fibers for sensing applications, and measurement techniques for magnetic field gradient detection. Consideration is also given to electric field meter and temperature measurement techniques for the power industry, the calibration of high-temperature fiber-optic microbend pressure transducers, and interferometric sensors for dc measurands. Other topics include the recognition of colors and collision avoidance in robotics using optical fiber sensors, the loss compensation of intensity-modulating fiber-optic sensors, and an embedded optical fiber strain sensor for composite structure applications.

  19. Calibration of High Heat Flux Sensors at NIST

    PubMed Central

    Murthy, A. V.; Tsai, B. K.; Gibson, C. E.

    1997-01-01

    An ongoing program at the National Institute of Standards and Technology (NIST) is aimed at improving and standardizing heat-flux sensor calibration methods. The current calibration needs of U.S. science and industry exceed the current NIST capability of 40 kW/m2 irradiance. In achieving this goal, as well as meeting lower-level non-radiative heat flux calibration needs of science and industry, three different types of calibration facilities currently are under development at NIST: convection, conduction, and radiation. This paper describes the research activities associated with the NIST Radiation Calibration Facility. Two different techniques, transfer and absolute, are presented. The transfer calibration technique employs a transfer standard calibrated with reference to a radiometric standard for calibrating the sensors using a graphite tube blackbody. Plans for an absolute calibration facility include the use of a spherical blackbody and a cooled aperture and sensor-housing assembly to calibrate the sensors in a low convective environment. PMID:27805156

  20. State updating and calibration period selection to improve dynamic monthly streamflow forecasts for an environmental flow management application

    NASA Astrophysics Data System (ADS)

    Gibbs, Matthew S.; McInerney, David; Humphrey, Greer; Thyer, Mark A.; Maier, Holger R.; Dandy, Graeme C.; Kavetski, Dmitri

    2018-02-01

    Monthly to seasonal streamflow forecasts provide useful information for a range of water resource management and planning applications. This work focuses on improving such forecasts by considering the following two aspects: (1) state updating to force the models to match observations from the start of the forecast period, and (2) selection of a shorter calibration period that is more representative of the forecast period, compared to a longer calibration period traditionally used. The analysis is undertaken in the context of using streamflow forecasts for environmental flow water management of an open channel drainage network in southern Australia. Forecasts of monthly streamflow are obtained using a conceptual rainfall-runoff model combined with a post-processor error model for uncertainty analysis. This model set-up is applied to two catchments, one with stronger evidence of non-stationarity than the other. A range of metrics are used to assess different aspects of predictive performance, including reliability, sharpness, bias and accuracy. The results indicate that, for most scenarios and metrics, state updating improves predictive performance for both observed rainfall and forecast rainfall sources. Using the shorter calibration period also improves predictive performance, particularly for the catchment with stronger evidence of non-stationarity. The results highlight that a traditional approach of using a long calibration period can degrade predictive performance when there is evidence of non-stationarity. The techniques presented can form the basis for operational monthly streamflow forecasting systems and provide support for environmental decision-making.

  1. Novel Applications of Rapid Prototyping in Gamma-ray and X-ray Imaging

    PubMed Central

    Miller, Brian W.; Moore, Jared W.; Gehm, Michael E.; Furenlid, Lars R.; Barrett, Harrison H.

    2010-01-01

    Advances in 3D rapid-prototyping printers, 3D modeling software, and casting techniques allow for the fabrication of cost-effective, custom components in gamma-ray and x-ray imaging systems. Applications extend to new fabrication methods for custom collimators, pinholes, calibration and resolution phantoms, mounting and shielding components, and imaging apertures. Details of the fabrication process for these components are presented, specifically the 3D printing process, cold casting with a tungsten epoxy, and lost-wax casting in platinum. PMID:22984341

  2. A multi-model fusion strategy for multivariate calibration using near and mid-infrared spectra of samples from brewing industry

    NASA Astrophysics Data System (ADS)

    Tan, Chao; Chen, Hui; Wang, Chao; Zhu, Wanping; Wu, Tong; Diao, Yuanbo

    2013-03-01

    Near- and mid-infrared (NIR/MIR) spectroscopy techniques have gained great acceptance in industry due to their multiple applications and versatility. However, successful application often depends heavily on the construction of accurate and stable calibration models. For this purpose, a simple multi-model fusion strategy is proposed. It combines the Kohonen self-organizing map (KSOM), mutual information (MI) and partial least squares (PLS), and is therefore named KMICPLS. It works as follows: first, the original training set is fed into a KSOM for unsupervised clustering of the samples, from which a series of training subsets is constructed. Thereafter, on each training subset, an MI spectrum is calculated and only the variables with MI values above the mean are retained; a candidate PLS model is then constructed on these variables. Finally, a fixed number of PLS models are selected to produce a consensus model. Two NIR/MIR spectral datasets from the brewing industry are used for the experiments. The results confirm its superior performance compared with two reference algorithms, i.e., conventional PLS and genetic algorithm-PLS (GAPLS). It builds more accurate and stable calibration models without increasing complexity, and can be generalized to other NIR/MIR applications.
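
    A rough sketch of the fusion idea follows (not the authors' code): cluster the training spectra into subsets, keep the variables with above-average mutual information in each subset, fit a PLS sub-model, and average the sub-model predictions. KMeans stands in here for the Kohonen SOM used in the paper, and the component counts and cluster numbers are illustrative; each subset is assumed to contain enough samples.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.cross_decomposition import PLSRegression
from sklearn.feature_selection import mutual_info_regression

def fit_consensus(X, y, n_subsets=5, n_components=6, random_state=0):
    # KMeans substitutes for the KSOM clustering step of the paper
    labels = KMeans(n_clusters=n_subsets, n_init=10,
                    random_state=random_state).fit_predict(X)
    models = []
    for k in range(n_subsets):
        Xk, yk = X[labels == k], y[labels == k]
        mi = mutual_info_regression(Xk, yk, random_state=random_state)
        keep = mi > mi.mean()                      # retain informative variables only
        n_comp = max(1, min(n_components, int(keep.sum()), len(yk) - 1))
        pls = PLSRegression(n_components=n_comp).fit(Xk[:, keep], yk)
        models.append((keep, pls))
    return models

def predict_consensus(models, X):
    preds = [pls.predict(X[:, keep]).ravel() for keep, pls in models]
    return np.mean(preds, axis=0)                  # consensus = average of sub-models
```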

  3. Binary pressure-sensitive paint measurements using miniaturised, colour, machine vision cameras

    NASA Astrophysics Data System (ADS)

    Quinn, Mark Kenneth

    2018-05-01

    Recent advances in machine vision technology and capability have led to machine vision cameras becoming applicable for scientific imaging. This study aims to demonstrate the applicability of machine vision colour cameras for the measurement of dual-component pressure-sensitive paint (PSP). The presence of a second luminophore component in the PSP mixture significantly reduces its inherent temperature sensitivity, increasing its applicability at low speeds. All of the devices tested are smaller than the cooled CCD cameras traditionally used and most are of significantly lower cost, thereby increasing the accessibility of such technology and techniques. Comparisons between three machine vision cameras, a three CCD camera, and a commercially available specialist PSP camera are made on a range of parameters, and a detailed PSP calibration is conducted in a static calibration chamber. The findings demonstrate that colour machine vision cameras can be used for quantitative, dual-component, pressure measurements. These results give rise to the possibility of performing on-board dual-component PSP measurements in wind tunnels or on real flight/road vehicles.

  4. Sensor calibration of polymeric Hopkinson bars for dynamic testing of soft materials

    NASA Astrophysics Data System (ADS)

    Martarelli, Milena; Mancini, Edoardo; Lonzi, Barbara; Sasso, Marco

    2018-02-01

    Split Hopkinson pressure bar (SHPB) testing is one of the most common techniques for the estimation of the constitutive behaviour of metallic materials. In this paper, the characterisation of soft rubber-like materials has been addressed by means of polymeric bars thanks to their reduced mechanical impedance. Due to their visco-elastic nature, polymeric bars are more sensitive to temperature changes than metallic bars, and due to their low conductance, the strain gauges used to measure the propagating wave in an SHPB may be exposed to significant heating. Consequently, a calibration procedure has been proposed to estimate quantitatively the temperature influence on strain gauge output. Furthermore, the calibration is used to determine the elastic modulus of the polymeric bars, which is an important parameter for the synchronisation of the propagation waves measured at the input and output bar strain gauge stations, and for the correct determination of stress and strain evolution within the specimen. An example of the application has been reported in order to demonstrate the effectiveness of the technique. Different tests at different strain rates have been carried out on samples made of nitrile butadiene rubber (NBR) from the same injection moulding batch. Thanks to the correct synchronisation of the propagation waves measured by the strain gauges and the application of the calibrated coefficients, the mechanical behaviour of the NBR material is obtained in terms of strain rate-strain and stress-strain engineering curves.
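
    The calibrated gauge signals ultimately feed the standard one-wave SHPB data reduction; the sketch below shows those textbook relations (not the paper's temperature-calibration procedure itself), assuming synchronised reflected and transmitted strain histories as inputs.

```python
import numpy as np

def shpb_stress_strain(eps_r, eps_t, dt, E_bar, A_bar, A_spec, L_spec, c0):
    """Standard one-wave SHPB analysis: engineering strain, strain rate and stress.

    eps_r, eps_t   : reflected / transmitted strain histories (1D arrays)
    dt             : sampling interval [s]
    E_bar, A_bar   : bar elastic modulus [Pa] and cross-section [m^2]
    A_spec, L_spec : specimen cross-section [m^2] and length [m]
    c0             : wave speed in the bar, sqrt(E_bar / rho_bar) [m/s]
    """
    strain_rate = -2.0 * c0 / L_spec * eps_r       # specimen strain rate
    strain = np.cumsum(strain_rate) * dt           # time integration of the strain rate
    stress = E_bar * (A_bar / A_spec) * eps_t      # force balance at the output bar
    return strain, strain_rate, stress
```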

  5. Calibration and validation of TRUST MRI for the estimation of cerebral blood oxygenation

    PubMed Central

    Lu, Hanzhang; Xu, Feng; Grgac, Ksenija; Liu, Peiying; Qin, Qin; van Zijl, Peter

    2011-01-01

    Recently, a T2-Relaxation-Under-Spin-Tagging (TRUST) MRI technique was developed to quantitatively estimate blood oxygen saturation fraction (Y) via the measurement of pure blood T2. This technique has shown promise for normalization of fMRI signals, for the assessment of oxygen metabolism, and in studies of cognitive aging and multiple sclerosis. However, a human validation study has not been conducted. In addition, the calibration curve used to convert blood T2 to Y has not accounted for the effects of hematocrit (Hct). In the present study, we first conducted experiments on blood samples under physiologic conditions, and the Carr-Purcell-Meiboom-Gill (CPMG) T2 was determined for a range of Y and Hct values. The data were fitted to a two-compartment exchange model to allow the characterization of a three-dimensional plot that can serve to calibrate the in vivo data. Next, in a validation study in humans, we showed that arterial Y estimated using TRUST MRI was 0.837±0.036 (N=7) during the inhalation of 14% O2, which was in excellent agreement with the gold-standard Y values of 0.840±0.036 based on Pulse-Oximetry. These data suggest that the availability of this calibration plot should enhance the applicability of TRUST MRI for non-invasive assessment of cerebral blood oxygenation. PMID:21590721

  6. Development of an in situ calibration technique for combustible gas detectors

    NASA Technical Reports Server (NTRS)

    Shumar, J. W.; Wynveen, R. A.; Lance, N., Jr.; Lantz, J. B.

    1977-01-01

    This paper describes the development of an in situ calibration procedure for combustible gas detectors (CGD). The CGD will be a necessary device for future space vehicles as many subsystems in the Environmental Control/Life Support System utilize or produce hydrogen (H2) gas. Existing calibration techniques are time-consuming and require support equipment such as an environmental chamber and calibration gas supply. The in situ calibration procedure involves utilization of a water vapor electrolysis cell for the automatic in situ generation of a H2/air calibration mixture within the flame arrestor of the CGD. The development effort concluded with the successful demonstration of in situ span calibrations of a CGD.

  7. Automation is an Effective Way to Improve Quality of Verification (Calibration) of Measuring Instruments

    NASA Astrophysics Data System (ADS)

    Golobokov, M.; Danilevich, S.

    2018-04-01

    In order to assess calibration reliability and to automate that assessment, procedures for data collection and a simulation study of a thermal imager calibration procedure have been developed. The existing calibration techniques do not always provide high reliability. A new method for analyzing existing calibration techniques and developing new, efficient ones has been suggested and tested. Software has also been studied that generates instrument calibration reports automatically, monitors their proper configuration, processes measurement results and assesses instrument validity. The use of such software reduces the man-hours spent on finalizing calibration data by a factor of 2 to 5 and eliminates a whole set of typical operator errors.

  8. Effects of including surface depressions in the application of the Precipitation-Runoff Modeling System in the Upper Flint River Basin, Georgia

    USGS Publications Warehouse

    Viger, Roland J.; Hay, Lauren E.; Jones, John W.; Buell, Gary R.

    2010-01-01

    This report documents an extension of the Precipitation Runoff Modeling System that accounts for the effect of a large number of water-holding depressions in the land surface on the hydrologic response of a basin. Several techniques for developing the inputs needed by this extension also are presented. These techniques include the delineation of the surface depressions, the generation of volume estimates for the surface depressions, and the derivation of model parameters required to describe these surface depressions. This extension is valuable for applications in basins where surface depressions are too small or numerous to conveniently model as discrete spatial units, but where the aggregated storage capacity of these units is large enough to have a substantial effect on streamflow. In addition, this report documents several new model concepts that were evaluated in conjunction with the depression storage functionality, including: 'hydrologically effective' imperviousness, rates of hydraulic conductivity, and daily streamflow routing. All of these techniques are demonstrated as part of an application in the Upper Flint River Basin, Georgia. Simulated solar radiation, potential evapotranspiration, and water balances match observations well, with small errors for the first two simulated data in June and August because of differences in temperatures from the calibration and evaluation periods for those months. Daily runoff simulations show increasing accuracy with streamflow and a good fit overall. Including surface depression storage in the model has the effect of decreasing daily streamflow for all but the lowest flow values. The report discusses the choices and resultant effects involved in delineating and parameterizing these features. The remaining enhancements to the model and its application provide a more realistic description of basin geography and hydrology that serve to constrain the calibration process to more physically realistic parameter values.

  9. Energy response calibration of photon-counting detectors using x-ray fluorescence: a feasibility study.

    PubMed

    Cho, H-M; Ding, H; Ziemer, B P; Molloi, S

    2014-12-07

    Accurate energy calibration is critical for the application of energy-resolved photon-counting detectors in spectral imaging. The aim of this study is to investigate the feasibility of energy response calibration and characterization of a photon-counting detector using x-ray fluorescence. A comprehensive Monte Carlo simulation study was performed using Geant4 Application for Tomographic Emission (GATE) to investigate the optimal technique for x-ray fluorescence calibration. Simulations were conducted using a 100 kVp tungsten-anode spectrum with a 2.7 mm Al filter for a single pixel cadmium telluride (CdTe) detector with a 3 × 3 mm2 detection area. The angular dependence of x-ray fluorescence and scatter background was investigated by varying the detection angle from 20° to 170° with respect to the beam direction. The effects of the detector material, shape, and size on the recorded x-ray fluorescence were investigated. The fluorescent material size effect was considered with and without the container for the fluorescent material. In order to provide validation for the simulation result, the angular dependence of x-ray fluorescence from five fluorescent materials was experimentally measured using a spectrometer. Finally, eleven of the fluorescent materials were used for energy calibration of a CZT-based photon-counting detector. The optimal detection angle was determined to be approximately at 120° with respect to the beam direction, which showed the highest fluorescence to scatter ratio (FSR) with a weak dependence on the fluorescent material size. The feasibility of x-ray fluorescence for energy calibration of photon-counting detectors in the diagnostic x-ray energy range was verified by successfully calibrating the energy response of a CZT-based photon-counting detector. The results of this study can be used as a guideline to implement the x-ray fluorescence calibration method for photon-counting detectors in a typical imaging laboratory.
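
    The final calibration step implied above reduces to a linear map from detector channel to photon energy once the fluorescence peak centroids are located. A minimal sketch follows; the line energies and peak channels are made-up placeholders, not values from the study.

```python
import numpy as np

# Known fluorescence line energies [keV] of hypothetical calibration materials
line_energy_kev = np.array([22.2, 32.2, 40.1, 59.3, 74.2])
# Measured peak centroids [ADC channel] for the same lines (placeholder values)
peak_channel = np.array([118.0, 171.0, 213.0, 315.0, 394.0])

# Linear energy response: E = gain * channel + offset
gain, offset = np.polyfit(peak_channel, line_energy_kev, deg=1)
print(f"gain = {gain:.4f} keV/channel, offset = {offset:.2f} keV")

def channel_to_energy(ch):
    """Convert any recorded channel number to energy in keV."""
    return gain * ch + offset
```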

  10. Energy response calibration of photon-counting detectors using x-ray fluorescence: a feasibility study

    NASA Astrophysics Data System (ADS)

    Cho, H.-M.; Ding, H.; Ziemer, BP; Molloi, S.

    2014-12-01

    Accurate energy calibration is critical for the application of energy-resolved photon-counting detectors in spectral imaging. The aim of this study is to investigate the feasibility of energy response calibration and characterization of a photon-counting detector using x-ray fluorescence. A comprehensive Monte Carlo simulation study was performed using Geant4 Application for Tomographic Emission (GATE) to investigate the optimal technique for x-ray fluorescence calibration. Simulations were conducted using a 100 kVp tungsten-anode spectrum with a 2.7 mm Al filter for a single pixel cadmium telluride (CdTe) detector with a 3 × 3 mm2 detection area. The angular dependence of x-ray fluorescence and scatter background was investigated by varying the detection angle from 20° to 170° with respect to the beam direction. The effects of the detector material, shape, and size on the recorded x-ray fluorescence were investigated. The fluorescent material size effect was considered with and without the container for the fluorescent material. In order to provide validation for the simulation result, the angular dependence of x-ray fluorescence from five fluorescent materials was experimentally measured using a spectrometer. Finally, eleven of the fluorescent materials were used for energy calibration of a CZT-based photon-counting detector. The optimal detection angle was determined to be approximately at 120° with respect to the beam direction, which showed the highest fluorescence to scatter ratio (FSR) with a weak dependence on the fluorescent material size. The feasibility of x-ray fluorescence for energy calibration of photon-counting detectors in the diagnostic x-ray energy range was verified by successfully calibrating the energy response of a CZT-based photon-counting detector. The results of this study can be used as a guideline to implement the x-ray fluorescence calibration method for photon-counting detectors in a typical imaging laboratory.

  11. Energy response calibration of photon-counting detectors using X-ray fluorescence: a feasibility study

    PubMed Central

    Cho, H-M; Ding, H; Ziemer, BP; Molloi, S

    2014-01-01

    Accurate energy calibration is critical for the application of energy-resolved photon-counting detectors in spectral imaging. The aim of this study is to investigate the feasibility of energy response calibration and characterization of a photon-counting detector using X-ray fluorescence. A comprehensive Monte Carlo simulation study was performed using Geant4 Application for Tomographic Emission (GATE) to investigate the optimal technique for X-ray fluorescence calibration. Simulations were conducted using a 100 kVp tungsten-anode spectrum with a 2.7 mm Al filter for a single pixel cadmium telluride (CdTe) detector with a 3 × 3 mm2 detection area. The angular dependence of X-ray fluorescence and scatter background was investigated by varying the detection angle from 20° to 170° with respect to the beam direction. The effects of the detector material, shape, and size on the recorded X-ray fluorescence were investigated. The fluorescent material size effect was considered with and without the container for the fluorescent material. In order to provide validation for the simulation result, the angular dependence of X-ray fluorescence from five fluorescent materials was experimentally measured using a spectrometer. Finally, eleven of the fluorescent materials were used for energy calibration of a CZT-based photon-counting detector. The optimal detection angle was determined to be approximately at 120° with respect to the beam direction, which showed the highest fluorescence to scatter ratio (FSR) with a weak dependence on the fluorescent material size. The feasibility of X-ray fluorescence for energy calibration of photon-counting detectors in the diagnostic X-ray energy range was verified by successfully calibrating the energy response of a CZT-based photon-counting detector. The results of this study can be used as a guideline to implement the X-ray fluorescence calibration method for photon-counting detectors in a typical imaging laboratory. PMID:25369288

  12. LIF Density Measurement Calibration Using a Reference Cell

    NASA Technical Reports Server (NTRS)

    Domonkos, Matthew T.; Williams, George J., Jr.; Lyons, Valerie J. (Technical Monitor)

    2002-01-01

    Flight qualification of ion thrusters typically requires testing on the order of 10,000 hours. Extensive knowledge of wear mechanisms and rates is necessary to establish design confidence prior to long duration tests. Consequently, real-time erosion rate measurements offer the potential both to reduce development costs and to enhance knowledge of the dependency of component wear on operating conditions. Several previous studies have used laser induced fluorescence (LIF) to measure real-time, in situ erosion rates of ion thruster accelerator grids. Those studies provided only relative measurements of the erosion rate. In the present investigation, a molybdenum tube was resistively heated such that the evaporation rate yielded densities within the tube on the order of those expected from accelerator grid erosion. A pulsed UV laser was used to pump the ground state molybdenum at 345.64nm, and the non-resonant fluorescence at 550-nm was collected using a bandpass filter and a photomultiplier tube or intensified CCD array. The sensitivity of the fluorescence was evaluated to determine the limitations of the calibration technique. The suitability of the diagnostic calibration technique was assessed for application to ion engine erosion rate measurements.

  13. The 'sniffer-patch' technique for detection of neurotransmitter release.

    PubMed

    Allen, T G

    1997-05-01

    A wide variety of techniques have been employed for the detection and measurement of neurotransmitter release from biological preparations. Whilst many of these methods offer impressive levels of sensitivity, few are able to combine sensitivity with the necessary temporal and spatial resolution required to study quantal release from single cells. One detection method that is seeing a revival of interest and has the potential to fill this niche is the so-called 'sniffer-patch' technique. In this article, specific examples of the practical aspects of using this technique are discussed along with the procedures involved in calibrating these biosensors to extend their applications to provide quantitative, in addition to simple qualitative, measurements of quantal transmitter release.

  14. Image-guided Navigation of Single-element Focused Ultrasound Transducer

    PubMed Central

    Kim, Hyungmin; Chiu, Alan; Park, Shinsuk; Yoo, Seung-Schik

    2014-01-01

    The spatial specificity and controllability of focused ultrasound (FUS), in addition to its ability to modify the excitability of neural tissue, allows for the selective and reversible neuromodulation of the brain function, with great potential in neurotherapeutics. Intra-operative magnetic resonance imaging (MRI) guidance (in short, MRg) has limitations due to its complicated examination logistics, such as fixation through skull screws to mount the stereotactic frame, simultaneous sonication in the MRI environment, and restrictions in choosing MR-compatible materials. In order to overcome these limitations, an image-guidance system based on optical tracking and pre-operative imaging data is developed, separating the imaging acquisition for guidance and sonication procedure for treatment. Techniques to define the local coordinates of the focal point of sonication are presented. First, mechanical calibration detects the concentric rotational motion of a rigid-body optical tracker, attached to a straight rod mimicking the sonication path, pivoted at the virtual FUS focus. The spatial error presented in the mechanical calibration was compensated further by MRI-based calibration, which estimates the spatial offset between the navigated focal point and the ground-truth location of the sonication focus obtained from a temperature-sensitive MR sequence. MRI-based calibration offered a significant decrease in spatial errors (1.9±0.8 mm; 57% reduction) compared to the mechanical calibration method alone (4.4±0.9 mm). Using the presented method, pulse-mode FUS was applied to the motor area of the rat brain, and successfully stimulated the motor cortex. The presented techniques can be readily adapted for the transcranial application of FUS to intact human brain. PMID:25232203

  15. Evaluation of the Applicability of Solar and Lamp Radiometric Calibrations of a Precision Sun Photometer Operating Between 300 and 1025 nm

    NASA Technical Reports Server (NTRS)

    Schmid, Beat; Spyak, Paul R.; Biggar, Stuart F.; Joerg, Sekler; Ingold, Thomas; Maetzler, Christian; Kaempfer, Niklaus

    2000-01-01

    Over a period of 3 years, a precision Sun photometer (SPM) operating between 300 and 1025 nm was calibrated four times at three different high-mountain sites in Switzerland, Germany, and the United States by means of the Langley-plot technique. We found that for atmospheric window wavelengths the total error (2 sigma; statistical plus systematic errors) of the calibration constants V0(lambda), the SPM voltage in the absence of any attenuating atmosphere, can be kept below 1.60% in the UV-A and blue, 0.9% in the mid-visible, and 0.6% in the near-infrared spectral region. For SPM channels within strong water-vapor or ozone absorption bands a modified Langley-plot technique was used to determine V0(lambda) with a lower accuracy. Within the same period of time, we calibrated the SPM five times using irradiance standard lamps in the optical labs of the Physikalisch-Meteorologisches Observatorium Davos and World Radiation Center, Switzerland, and of the Remote Sensing Group of the Optical Sciences Center, University of Arizona, Tucson, Arizona. The lab calibration method requires knowledge of the extraterrestrial spectral irradiance. When we refer the standard lamp results to the World Radiation Center extraterrestrial solar irradiance spectrum, they agree with the Langley results within 2% at 6 of 13 SPM wavelengths. The largest disagreement (4.4%) is found for the channel centered at 610 nm. The results of these intercomparisons change significantly when the lamp results are referred to two different extraterrestrial solar irradiance spectra that have recently become available.
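
    For orientation, the Langley-plot extrapolation referred to above amounts to a linear regression of ln(V) against airmass and extrapolation to zero airmass. The sketch below shows the generic technique, not the authors' processing chain; the data are synthetic.

```python
import numpy as np

def langley_v0(airmass, voltage):
    """Return (V0, tau): extrapolated exoatmospheric signal and optical depth."""
    slope, intercept = np.polyfit(airmass, np.log(voltage), deg=1)
    return np.exp(intercept), -slope        # ln V = ln V0 - tau * m

# Synthetic Langley run with V0 = 2.0 V and tau = 0.12
m = np.linspace(2.0, 6.0, 20)
v = 2.0 * np.exp(-0.12 * m) * (1 + np.random.default_rng(1).normal(0, 0.002, m.size))
print(langley_v0(m, v))                     # approximately (2.0, 0.12)
```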

  16. Statistical photocalibration of photodetectors for radiometry without calibrated light sources

    NASA Astrophysics Data System (ADS)

    Yielding, Nicholas J.; Cain, Stephen C.; Seal, Michael D.

    2018-01-01

    Calibration of CCD arrays for identifying bad pixels and achieving nonuniformity correction is commonly accomplished using dark frames. This kind of calibration technique does not achieve radiometric calibration of the array since only the relative response of the detectors is computed. For this, a second calibration is sometimes utilized by looking at sources with known radiances. This process can be used to calibrate photodetectors as long as a calibration source is available and is well-characterized. A previous attempt at creating a procedure for calibrating a photodetector using the underlying Poisson nature of the photodetection required calculations of the skewness of the photodetector measurements. Reliance on the third moment of the measurements meant that thousands of samples would be required in some cases to compute that moment. A photocalibration procedure is defined that requires only the first and second moments of the measurements. The technique is applied to image data containing a known light source so that the accuracy of the technique can be assessed. It is shown that the algorithm can achieve an accuracy of nearly 2.7% in the predicted number of photons using only 100 frames of image data.
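
    The general mean/variance idea behind such a procedure can be sketched as follows (the paper's exact estimator may differ): for Poisson-limited photodetection, the temporal variance of each pixel grows linearly with its mean, so the slope of a variance-versus-mean fit gives the conversion gain and mean/gain gives the photon count.

```python
import numpy as np

def poisson_gain_calibration(frames):
    """frames: (n_frames, ny, nx) repeated exposures of a static scene.

    Returns the estimated conversion gain and a per-pixel photon-count map.
    """
    mean = frames.mean(axis=0).ravel()
    var = frames.var(axis=0, ddof=1).ravel()
    gain, offset = np.polyfit(mean, var, deg=1)   # var = gain * mean + offset
    photons = frames.mean(axis=0) / gain          # estimated photons per pixel
    return gain, photons
```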

  17. Laser-Induced Fluorescence Helps Diagnose Plasma Processes

    NASA Technical Reports Server (NTRS)

    Beattie, J. R.; Mattosian, J. N.; Gaeta, C. J.; Turley, R. S.; Williams, J. D.; Williamson, W. S.

    1994-01-01

    Technique developed to provide in situ monitoring of rates of ion sputter erosion of accelerator electrodes in ion thrusters also used for ground-based applications to monitor, calibrate, and otherwise diagnose plasma processes in fabrication of electronic and optical devices. Involves use of laser-induced-fluorescence measurements, which provide information on rates of ion etching, inferred rates of sputter deposition, and concentrations of contaminants.

  18. Improved Radial Velocity Precision with a Tunable Laser Calibrator

    NASA Astrophysics Data System (ADS)

    Cramer, Claire; Brown, S.; Dupree, A. K.; Lykke, K. R.; Smith, A.; Szentgyorgyi, A.

    2010-01-01

    We present radial velocities obtained using a novel laser-based wavelength calibration technique. We have built a prototype laser calibrator for the Hectochelle spectrograph at the MMT 6.5 m telescope. The Hectochelle is a high-dispersion, fiber-fed, multi-object spectrograph capable of recording up to 240 spectra simultaneously with a resolving power of 40000. The standard wavelength calibration method makes use of spectra from thorium-argon hollow cathode lamps shining directly onto the fibers. The difference in light path between calibration and science light as well as the uneven distribution of spectral lines are believed to introduce errors of up to several hundred m/s in the wavelength scale. Our tunable laser wavelength calibrator solves these problems. The laser is bright enough for use with a dome screen, allowing the calibration light path to better match the science light path. Further, the laser is tuned in regular steps across a spectral order to generate a calibration spectrum, creating a comb of evenly-spaced lines on the detector. Using the solar spectrum reflected from the atmosphere to record the same spectrum in every fiber, we show that laser wavelength calibration brings radial velocity uncertainties down below 100 m/s. We present these results as well as an application of tunable laser calibration to stellar radial velocities determined with the infrared Ca triplet in globular clusters M15 and NGC 7492. We also suggest how the tunable laser could be useful for other instruments, including single-object, cross-dispersed echelle spectrographs, and adapted for infrared spectroscopy.

  19. Self-Calibrated In-Process Photogrammetry for Large Raw Part Measurement and Alignment before Machining

    PubMed Central

    Mendikute, Alberto; Zatarain, Mikel; Bertelsen, Álvaro; Leizea, Ibai

    2017-01-01

    Photogrammetry methods are being used more and more as a 3D technique for large scale metrology applications in industry. Optical targets are placed on an object and images are taken around it, where measuring traceability is provided by precise off-process pre-calibrated digital cameras and scale bars. According to the 2D target image coordinates, target 3D coordinates and camera views are jointly computed. One of the applications of photogrammetry is the measurement of raw part surfaces prior to its machining. For this application, post-process bundle adjustment has usually been adopted for computing the 3D scene. With that approach, a high computation time is observed, leading in practice to time consuming and user dependent iterative review and re-processing procedures until an adequate set of images is taken, limiting its potential for fast, easy-to-use, and precise measurements. In this paper, a new efficient procedure is presented for solving the bundle adjustment problem in portable photogrammetry. In-process bundle computing capability is demonstrated on a consumer grade desktop PC, enabling quasi real time 2D image and 3D scene computing. Additionally, a method for the self-calibration of camera and lens distortion has been integrated into the in-process approach due to its potential for highest precision when using low cost non-specialized digital cameras. Measurement traceability is set only by scale bars available in the measuring scene, avoiding the uncertainty contribution of off-process camera calibration procedures or the use of special purpose calibration artifacts. The developed self-calibrated in-process photogrammetry has been evaluated both in a pilot case scenario and in industrial scenarios for raw part measurement, showing a total in-process computing time typically below 1 s per image up to a maximum of 2 s during the last stages of the computed industrial scenes, along with a relative precision of 1/10,000 (e.g., 0.1 mm error in 1 m) with an error RMS below 0.2 pixels at image plane, ranging at the same performance reported for portable photogrammetry with precise off-process pre-calibrated cameras. PMID:28891946
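
    At the heart of any bundle adjustment lies the minimisation of reprojection error. The compact sketch below is a generic single-camera formulation, far simpler than the in-process, self-calibrating solver described above; the pinhole model with a single focal length is an illustrative assumption.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(params, points_3d):
    """Pinhole projection with pose (rotation vector, translation) and focal length."""
    rvec, tvec, f = params[:3], params[3:6], params[6]
    cam = Rotation.from_rotvec(rvec).apply(points_3d) + tvec   # world -> camera frame
    return f * cam[:, :2] / cam[:, 2:3]                        # perspective division

def residuals(params, points_3d, observed_2d):
    return (project(params, points_3d) - observed_2d).ravel()

def refine_pose(points_3d, observed_2d, initial_params):
    """Refine pose and focal length by least-squares reprojection-error minimisation."""
    sol = least_squares(residuals, initial_params, method="lm",
                        args=(points_3d, observed_2d))
    return sol.x, np.sqrt(np.mean(sol.fun ** 2))   # refined parameters, RMS residual
```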

  20. Self-Calibrated In-Process Photogrammetry for Large Raw Part Measurement and Alignment before Machining.

    PubMed

    Mendikute, Alberto; Yagüe-Fabra, José A; Zatarain, Mikel; Bertelsen, Álvaro; Leizea, Ibai

    2017-09-09

    Photogrammetry methods are being used more and more as a 3D technique for large scale metrology applications in industry. Optical targets are placed on an object and images are taken around it, where measuring traceability is provided by precise off-process pre-calibrated digital cameras and scale bars. According to the 2D target image coordinates, target 3D coordinates and camera views are jointly computed. One of the applications of photogrammetry is the measurement of raw part surfaces prior to its machining. For this application, post-process bundle adjustment has usually been adopted for computing the 3D scene. With that approach, a high computation time is observed, leading in practice to time consuming and user dependent iterative review and re-processing procedures until an adequate set of images is taken, limiting its potential for fast, easy-to-use, and precise measurements. In this paper, a new efficient procedure is presented for solving the bundle adjustment problem in portable photogrammetry. In-process bundle computing capability is demonstrated on a consumer grade desktop PC, enabling quasi real time 2D image and 3D scene computing. Additionally, a method for the self-calibration of camera and lens distortion has been integrated into the in-process approach due to its potential for highest precision when using low cost non-specialized digital cameras. Measurement traceability is set only by scale bars available in the measuring scene, avoiding the uncertainty contribution of off-process camera calibration procedures or the use of special purpose calibration artifacts. The developed self-calibrated in-process photogrammetry has been evaluated both in a pilot case scenario and in industrial scenarios for raw part measurement, showing a total in-process computing time typically below 1 s per image up to a maximum of 2 s during the last stages of the computed industrial scenes, along with a relative precision of 1/10,000 (e.g. 0.1 mm error in 1 m) with an error RMS below 0.2 pixels at image plane, ranging at the same performance reported for portable photogrammetry with precise off-process pre-calibrated cameras.

  1. The Characterization of a Piston Displacement-Type Flowmeter Calibration Facility and the Calibration and Use of Pulsed Output Type Flowmeters

    PubMed Central

    Mattingly, G. E.

    1992-01-01

    Critical measurement performance of fluid flowmeters requires proper and quantified verification data. These data should be generated using calibration and traceability techniques established for these verification purposes. In these calibration techniques, the calibration facility should be well-characterized and its components and performance properly traced to pertinent higher standards. The use of this calibrator to calibrate flowmeters should be appropriately established and the manner in which the calibrated flowmeter is used should be specified in accord with the conditions of the calibration. These three steps: 1) characterizing the calibration facility itself, 2) using the characterized facility to calibrate a flowmeter, and 3) using the calibrated flowmeter to make a measurement are described and the pertinent equations are given for an encoded-stroke, piston displacement-type calibrator and a pulsed output flowmeter. It is concluded that, given these equations and proper instrumentation of this type of calibrator, very high levels of performance can be attained and, in turn, these can be used to achieve high fluid flow rate measurement accuracy with pulsed output flowmeters. PMID:28053444

  2. MO-D-BRD-04: NIST Air-Kerma Standard for Electronic Brachytherapy Calibrations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitch, M.

    Electronic brachytherapy (eBT) has seen a surge of manufacturers entering the US market for use in radiation therapy. In addition to the established interstitial, intraluminal, and intracavitary applications of eBT, many centers are now using eBT to treat skin lesions. It is important for medical physicists working with electronic brachytherapy sources to understand the basic physics principles of the sources themselves as well as the variety of applications for which they are being used. The calibration of the sources is different from vendor to vendor and the traceability of calibrations has evolved as new sources came to market. In 2014, a new air-kerma-based standard was introduced by the National Institute of Standards and Technology (NIST) to measure the output of an eBT source. Eventually commercial treatment planning systems should accommodate this new standard and provide NIST traceability to the end user. The calibration and commissioning of an eBT system is unique to its application and typically entails a list of procedural recommendations by the manufacturer. Commissioning measurements are performed using a variety of methods, some of which are modifications of existing AAPM Task Group protocols. A medical physicist should be familiar with the different AAPM Task Group recommendations for applicability to eBT and how to properly adapt them to their needs. In addition to the physical characteristics of an eBT source, the photon energy is substantially lower than from HDR Ir-192 sources. Consequently, tissue-specific dosimetry and radiobiological considerations are necessary when comparing these brachytherapy modalities and when making clinical decisions as a radiation therapy team. In this session, the physical characteristics and calibration methodologies of eBT sources will be presented as well as radiobiology considerations and other important clinical considerations. Learning Objectives: To understand the basic principles of electronic brachytherapy and the various applications for which it is being used. To understand the physics of the calibration and commissioning for electronic brachytherapy sources. To understand the unique radiobiology and clinical implementation of electronic brachytherapy systems for skin and IORT techniques. Xoft, Inc. contributed funding toward development of the NIST electronic brachytherapy facility (Michael Mitch). The University of Wisconsin (Wesley Culberson) has received research support funding from Xoft, Inc. Zoubir Ouhib has received partial funding from Elekta Esteya.

  3. Application of a weighted-averaging method for determining paleosalinity: a tool for restoration of south Florida's estuaries

    USGS Publications Warehouse

    Wingard, G.L.; Hudley, J.W.

    2012-01-01

    A molluscan analogue dataset is presented in conjunction with a weighted-averaging technique as a tool for estimating past salinity patterns in south Florida’s estuaries and developing targets for restoration based on these reconstructions. The method, here referred to as cumulative weighted percent (CWP), was tested using modern surficial samples collected in Florida Bay from sites located near fixed water monitoring stations that record salinity. The results were calibrated using species weighting factors derived from examining species occurrence patterns. A comparison of the resulting calibrated species-weighted CWP (SW-CWP) to the observed salinity at the water monitoring stations, averaged over a 3-year period, indicates that on average the SW-CWP estimate is within two salinity units of the observed salinity. The SW-CWP reconstructions were conducted on a core from near the mouth of Taylor Slough to illustrate the application of the method.

  4. Technique for Radiometer and Antenna Array Calibration - TRAAC

    NASA Technical Reports Server (NTRS)

    Meyer, Paul; Sims, William; Varnavas, Kosta; McCracken, Jeff; Srinivasan, Karthik; Limaye, Ashutosh; Laymon, Charles; Richeson, James

    2012-01-01

    Highly sensitive receivers are used to detect minute amounts of emitted electromagnetic energy. Calibration of these receivers is vital to the accuracy of the measurements. Traditional calibration techniques depend on a calibration reference internal to the receiver for calibrating the observed electromagnetic energy. Such methods can only correct measurement errors introduced by the receiver itself. The disadvantage of these existing methods is that they cannot account for errors introduced by devices, such as antennas, used for capturing electromagnetic radiation. This severely limits the types of antennas that can be used to make measurements with a high degree of accuracy. Complex antenna systems, such as electronically steerable antennas (also known as phased arrays), while offering potentially significant advantages, suffer from a lack of a reliable and accurate calibration technique. The proximity of antenna elements in an array results in interaction between the electromagnetic fields radiated (or received) by the individual elements. This phenomenon is called mutual coupling. The new calibration method uses a known noise source as a calibration load to determine the instantaneous characteristics of the antenna. The noise source is emitted from one element of the antenna array and received by all the other elements due to mutual coupling. This received noise is used as a calibration standard to monitor the stability of the antenna electronics.

  5. Air data position-error calibration using state reconstruction techniques

    NASA Technical Reports Server (NTRS)

    Whitmore, S. A.; Larson, T. J.; Ehernberger, L. J.

    1984-01-01

    During the highly maneuverable aircraft technology (HiMAT) flight test program recently completed at NASA Ames Research Center's Dryden Flight Research Facility, numerous problems were experienced in airspeed calibration. This necessitated the use of state reconstruction techniques to arrive at a position-error calibration. For the HiMAT aircraft, most of the calibration effort was expended on flights in which the air data pressure transducers were not performing accurately. Following discovery of this problem, the air data transducers of both aircraft were wrapped in heater blankets to correct the problem. Additional calibration flights were performed, and from the resulting data a satisfactory position-error calibration was obtained. This calibration and data obtained before installation of the heater blankets were used to develop an alternate calibration method. The alternate approach took advantage of high-quality inertial data that was readily available. A linearized Kalman filter (LKF) was used to reconstruct the aircraft's wind-relative trajectory; the trajectory was then used to separate transducer measurement errors from the aircraft position error. This calibration method is accurate and inexpensive. The LKF technique has an inherent advantage of requiring that no flight maneuvers be specially designed for airspeed calibrations. It is of particular use when the measurements of the wind-relative quantities are suspected to have transducer-related errors.

  6. The Status of the Tropical Rainfall Measuring Mission (TRMM) after 2 Years in Orbit

    NASA Technical Reports Server (NTRS)

    Kummerow, C.; Simpson, J.; Thiele, O.; Barnes, W.; Chang, A. T. C.; Stocker, E.; Adler, R. F.; Hou, A.; Kakar, R.; Wentz, F.

    1999-01-01

    The Tropical Rainfall Measuring Mission (TRMM) satellite was launched on November 27, 1997, and data from all the instruments first became available approximately 30 days after launch. Since then, much progress has been made in the calibration of the sensors, the improvement of the rainfall algorithms, in related modeling applications and in new datasets tailored specifically for these applications. This paper reports the latest results regarding the calibration of the TRMM Microwave Imager, (TMI), Precipitation Radar (PR) and Visible and Infrared Sensor (VIRS). For the TMI, a new product is in place that corrects for a still unknown source of radiation leaking in to the TMI receiver. The PR calibration has been adjusted upward slightly (by 0.6 dBZ) to better match ground reference targets, while the VIRS calibration remains largely unchanged. In addition to the instrument calibration, great strides have been made with the rainfall algorithms as well, with the new rainfall products agreeing with each other to within less than 20% over monthly zonally averaged statistics. The TRMM Science Data and Information System (TSDIS) has responded equally well by making a number of new products, including real-time and fine resolution gridded rainfall fields available to the modeling community. The TRMM Ground Validation (GV) program is also responding with improved radar calibration techniques and rainfall algorithms to provide more accurate GV products which will be further enhanced with the new multiparameter 10 cm radar being developed for TRMM validation and precipitation studies. Progress in these various areas has, in turn, led to exciting new developments in the modeling area where Data Assimilation, and Weather Forecast models are showing dramatic improvements after the assimilation of observed rainfall fields.

  7. Mobile Soil Moisture Sensing in High Elevations: Applications of the Cosmic Ray Neutron Sensor Technique in Heterogeneous Terrain

    NASA Astrophysics Data System (ADS)

    Franz, T. E.; Avery, W. A.; Wahbi, A.; Dercon, G.; Heng, L.; Strauss, P.

    2017-12-01

    The use of the Cosmic Ray Neutron Sensor (CRNS) for the detection of field-scale soil moisture (~20 ha) has been the subject of a multitude of research applications over the past decade. One exciting area within agriculture aims to provide soil moisture and soil property information for irrigation scheduling. The CRNS technology exists in both a stationary and a mobile form. The use of a mobile CRNS opens possibilities for application in many diverse environments. This work details the use of a mobile "backpack" CRNS device in high-elevation, heterogeneous terrain in the alpine mountains of western Austria. This research demonstrates the utilization of established calibration and validation techniques associated with the use of the CRNS within difficult-to-reach landscapes that are inaccessible or impractical for both the stationary CRNS and more traditional soil moisture sensing technology. Field work was conducted during the summer of 2016 in the Rauris valley of the Austrian Alps at three field sites located at different representative elevations within the same Rauris watershed. Calibrations of the "backpack" CRNS were performed at each site along with data validation via in-situ Time Domain Reflectometry (TDR) and gravimetric soil sampling. Validation data show that the relationship between in-situ soil moisture data determined via TDR and soil sampling and soil moisture data determined via the mobile CRNS is strong (RMSE <2.5 % volumetric). The efficacy of this technique in remote alpine landscapes shows great potential for use in early warning systems for landslides and flooding, watershed hydrology, and high elevation agricultural water management.
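
    CRNS field calibrations of this kind typically rely on a Desilets-type relation between the corrected neutron count rate and soil water content. The sketch below uses the commonly published generic shape coefficients, which are assumptions here and not necessarily the values used in this study.

```python
# Generic Desilets-type CRNS calibration sketch
A0, A1, A2 = 0.0808, 0.372, 0.115   # commonly quoted shape coefficients (assumed)

def n0_from_calibration(neutron_count, theta_gravimetric):
    """Solve the site calibration constant N0 from one co-located soil sampling."""
    return neutron_count / (A0 / (theta_gravimetric + A2) + A1)

def soil_moisture(neutron_count, n0):
    """Gravimetric water content from a corrected neutron count rate."""
    return A0 / (neutron_count / n0 - A1) - A2

n0 = n0_from_calibration(neutron_count=2800.0, theta_gravimetric=0.25)
print(soil_moisture(2650.0, n0))    # moisture rises as the count rate drops
```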

  8. Curved-flow, rolling-flow, and oscillatory pure-yawing wind-tunnel test methods for determination of dynamic stability derivatives

    NASA Technical Reports Server (NTRS)

    Chambers, J. R.; Grafton, S. B.; Lutze, F. H.

    1981-01-01

    The test capabilities of the Stability Wind Tunnel of the Virginia Polytechnic Institute and State University are described, and calibrations for curved and rolling flow techniques are given. Oscillatory snaking tests to determine pure yawing derivatives are considered. Representative aerodynamic data obtained for a current fighter configuration using the curved and rolling flow techniques are presented. The application of dynamic derivatives obtained in such tests to the analysis of airplane motions in general, and to high angle of attack flight conditions in particular, is discussed.

  9. Characteristics of Coplanar Waveguide on Sapphire for High Temperature Applications (25 to 400 degrees C)

    NASA Technical Reports Server (NTRS)

    Ponchak, George E.; Jordan, Jennifer L.; Scardelletti, Maximilian; Stalker, Amy R.

    2007-01-01

    This paper presents the characteristics of coplanar waveguide transmission lines fabricated on R-plane sapphire substrates as a function of temperature across the temperature range of 25 to 400 C. Effective permittivity and attenuation are measured on a high-temperature probe station. Two techniques are used to obtain the transmission line characteristics: a Thru-Reflect-Line (TRL) calibration, which yields the propagation coefficient, and resonant stubs. To a first-order fit of the data, both the effective permittivity and the attenuation increase linearly with temperature.
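
    For reference, effective permittivity and attenuation follow directly from the complex propagation coefficient that a TRL calibration yields. The relations below are generic; the line parameters and the example gamma value are placeholders, not measured values from the paper.

```python
import numpy as np

C0 = 299_792_458.0   # speed of light in vacuum [m/s]

def cpw_characteristics(gamma, freq_hz):
    """gamma = alpha + j*beta [1/m]; returns (effective permittivity, attenuation in dB/m)."""
    alpha, beta = gamma.real, gamma.imag
    eps_eff = (beta * C0 / (2 * np.pi * freq_hz)) ** 2   # from the phase constant
    atten_db_per_m = 20.0 / np.log(10.0) * alpha         # ~8.686 dB per neper
    return eps_eff, atten_db_per_m

print(cpw_characteristics(gamma=2.0 + 1j * 700.0, freq_hz=10e9))
```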

  10. An implantable blood pressure and flow transmitter.

    NASA Technical Reports Server (NTRS)

    Rader, R. D.; Meehan, J. P.; Henriksen, J. K. C.

    1973-01-01

    A miniature totally implantable FM/FM telemetry system has been developed to simultaneously measure blood pressure and blood flow, thus providing an appreciation of the hemodynamics of the circulation to the entire body or to a particular organ. Developed for work with animal subjects, the telemetry system's transmission time is controlled by an RF signal that permits an operating life of several months. Pressure is detected by a miniature intravascular transducer and flow is detected by an extravascular interferometric ultrasonic technique. Both pressure and flow are calibrated prior to implanting. The pressure calibration can be checked after the implanting by cannulation; flow calibration can be verified only at the end of the experiment by determining the voltage output from the implanted sensing system as a function of several measured flow rates. The utility of this device has been established by its use in investigating canine renal circulation during exercise, emotional encounters, administration of drugs, and application of accelerative forces.

  11. Geometrical pose and structural estimation from a single image for automatic inspection of filter components

    NASA Astrophysics Data System (ADS)

    Liu, Yonghuai; Rodrigues, Marcos A.

    2000-03-01

    This paper describes research on the application of machine vision techniques to a real time automatic inspection task of air filter components in a manufacturing line. A novel calibration algorithm is proposed based on a special camera setup where defective items would show a large calibration error. The algorithm makes full use of rigid constraints derived from the analysis of geometrical properties of reflected correspondence vectors which have been synthesized into a single coordinate frame and provides a closed form solution to the estimation of all parameters. For a comparative study of performance, we also developed another algorithm based on this special camera setup using epipolar geometry. A number of experiments using synthetic data have shown that the proposed algorithm is generally more accurate and robust than the epipolar geometry based algorithm and that the geometric properties of reflected correspondence vectors provide effective constraints to the calibration of rigid body transformations.

  12. A review of calibrated blood oxygenation level-dependent (BOLD) methods for the measurement of task-induced changes in brain oxygen metabolism

    PubMed Central

    Blockley, Nicholas P.; Griffeth, Valerie E. M.; Simon, Aaron B.; Buxton, Richard B.

    2013-01-01

    The dynamics of the blood oxygenation level-dependent (BOLD) response are dependent on changes in cerebral blood flow, cerebral blood volume and the cerebral metabolic rate of oxygen consumption. Furthermore, the amplitude of the response is dependent on the baseline physiological state, defined by the haematocrit, oxygen extraction fraction and cerebral blood volume. As a result of this complex dependence, the accurate interpretation of BOLD data and robust intersubject comparisons when the baseline physiology is varied are difficult. The calibrated BOLD technique was developed to address these issues. However, the methodology is complex and its full promise has not yet been realised. In this review, the theoretical underpinnings of calibrated BOLD, and issues regarding this theory that are still to be resolved, are discussed. Important aspects of practical implementation are reviewed and reported applications of this methodology are presented. PMID:22945365
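
    The calibrated BOLD framework reviewed above rests on a Davis-type signal model; its generic form is written here for orientation (the exponent values are illustrative and vary between studies, and are not taken from this review):

```latex
\[
\frac{\Delta S}{S_0} \;=\; M\left[\,1 -
  \left(\frac{\mathrm{CMRO_2}}{\mathrm{CMRO_2}|_0}\right)^{\beta}
  \left(\frac{\mathrm{CBF}}{\mathrm{CBF}|_0}\right)^{\alpha-\beta}\right],
\]
% where M is the calibration parameter measured with a hypercapnia or hyperoxia
% experiment, alpha (~0.2-0.4) couples blood volume to flow, and beta (~1.3-1.5)
% captures the field-strength-dependent susceptibility effect.
```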

  13. Effect of Correlated Precision Errors on Uncertainty of a Subsonic Venturi Calibration

    NASA Technical Reports Server (NTRS)

    Hudson, S. T.; Bordelon, W. J., Jr.; Coleman, H. W.

    1996-01-01

    An uncertainty analysis performed in conjunction with the calibration of a subsonic venturi for use in a turbine test facility produced some unanticipated results that may have a significant impact in a variety of test situations. Precision uncertainty estimates using the preferred propagation techniques in the applicable American National Standards Institute/American Society of Mechanical Engineers standards were an order of magnitude larger than precision uncertainty estimates calculated directly from a sample of results (discharge coefficient) obtained at the same experimental set point. The differences were attributable to the effect of correlated precision errors, which previously have been considered negligible. An analysis explaining this phenomenon is presented. The article is not meant to document the venturi calibration, but rather to give a real example of results where correlated precision terms are important. The significance of the correlated precision terms could apply to many test situations.
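
    The propagation expression at issue is the standard precision-uncertainty equation including covariance terms; a generic statement (not copied from the article) is:

```latex
\[
s_r^{2} \;=\; \sum_{i=1}^{J}\left(\frac{\partial r}{\partial x_i}\right)^{2} s_{x_i}^{2}
\;+\; 2\sum_{i=1}^{J-1}\sum_{k=i+1}^{J}
      \frac{\partial r}{\partial x_i}\,\frac{\partial r}{\partial x_k}\, s_{x_i x_k},
\]
% where s_{x_i x_k} is the covariance of the precision errors in x_i and x_k.
% Neglecting these covariance terms is what produced the order-of-magnitude
% discrepancy described in the abstract.
```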

  14. Link calibration against receiver calibration: an assessment of GPS time transfer uncertainties

    NASA Astrophysics Data System (ADS)

    Rovera, G. D.; Torre, J.-M.; Sherwood, R.; Abgrall, M.; Courde, C.; Laas-Bourez, M.; Uhrich, P.

    2014-10-01

    We present a direct comparison between two different techniques for the relative calibration of time transfer between remote time scales when using the signals transmitted by the Global Positioning System (GPS). Relative calibration estimates the delay of equipment or the delay of a time transfer link with respect to reference equipment. It is based on the circulation of some travelling GPS equipment between the stations in the network, against which the local equipment is measured. Two techniques can be considered: first a station calibration by the computation of the hardware delays of the local GPS equipment; second the computation of a global hardware delay offset for the time transfer between the reference points of two remote time scales. This last technique is called a ‘link’ calibration, with respect to the other one, which is a ‘receiver’ calibration. The two techniques require different measurements on site, which change the uncertainty budgets, and we discuss this and related issues. We report on one calibration campaign organized during Autumn 2013 between Observatoire de Paris (OP), Paris, France, Observatoire de la Côte d'Azur (OCA), Calern, France, and NERC Space Geodesy Facility (SGF), Herstmonceux, United Kingdom. The travelling equipment comprised two GPS receivers of different types, along with the required signal generator and distribution amplifier, and one time interval counter. We show the different ways to compute uncertainty budgets, leading to improvement factors of 1.2 to 1.5 on the hardware delay uncertainties when comparing the relative link calibration to the relative receiver calibration.

  15. Artifact correction and absolute radiometric calibration techniques employed in the Landsat 7 image assessment system

    USGS Publications Warehouse

    Boncyk, Wayne C.; Markham, Brian L.; Barker, John L.; Helder, Dennis

    1996-01-01

    The Landsat-7 Image Assessment System (IAS), part of the Landsat-7 Ground System, will calibrate and evaluate the radiometric and geometric performance of the Enhanced Thematic Mapper Plus (ETM +) instrument. The IAS incorporates new instrument radiometric artifact correction and absolute radiometric calibration techniques which overcome some limitations to calibration accuracy inherent in historical calibration methods. Knowledge of ETM + instrument characteristics gleaned from analysis of archival Thematic Mapper in-flight data and from ETM + prelaunch tests allow the determination and quantification of the sources of instrument artifacts. This a priori knowledge will be utilized in IAS algorithms designed to minimize the effects of the noise sources before calibration, in both ETM + image and calibration data.

  16. Applications of inductively coupled plasma mass spectrometry and laser ablation inductively coupled plasma mass spectrometry in materials science

    NASA Astrophysics Data System (ADS)

    Becker, Johanna Sabine

    2002-12-01

    Inductively coupled plasma mass spectrometry (ICP-MS) and laser ablation ICP-MS (LA-ICP-MS) have been applied as the most important inorganic mass spectrometric techniques having multielemental capability for the characterization of solid samples in materials science. ICP-MS is used for the sensitive determination of trace and ultratrace elements in digested solutions of solid samples or of process chemicals (ultrapure water, acids and organic solutions) for the semiconductor industry with detection limits down to sub-picogram per liter levels. Whereas ICP-MS on solid samples (e.g. high-purity ceramics) sometimes requires time-consuming sample preparation for its application in materials science, and the risk of contamination is a serious drawback, a fast, direct determination of trace elements in solid materials without any sample preparation by LA-ICP-MS is possible. The detection limits for the direct analysis of solid samples by LA-ICP-MS have been determined for many elements down to the nanogram per gram range. A deterioration of detection limits was observed for elements where interferences with polyatomic ions occur. The inherent interference problem can often be solved by applying a double-focusing sector field mass spectrometer at higher mass resolution or by collision-induced reactions of polyatomic ions with a collision gas using an ICP-MS fitted with collision cell. The main problem of LA-ICP-MS is quantification if no suitable standard reference materials with a similar matrix composition are available. The calibration problem in LA-ICP-MS can be solved using on-line solution-based calibration, and different procedures, such as external calibration and standard addition, have been discussed with respect to their application in materials science. The application of isotope dilution in solution-based calibration for trace metal determination in small amounts of noble metals has been developed as a new calibration strategy. This review discusses new analytical developments and possible applications of ICP-MS and LA-ICP-MS for the quantitative determination of trace elements and in surface analysis for materials science.
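
    Of the calibration strategies listed above, standard addition is the simplest to illustrate: the signal is regressed against the spiked concentration and extrapolated to zero signal, and the magnitude of the x-intercept is the unknown concentration. The sketch below is a generic analytical-chemistry calculation with placeholder numbers, not data from an ICP-MS run.

```python
import numpy as np

def standard_addition(added_conc, signal):
    """added_conc, signal: 1D arrays including the unspiked sample (added_conc = 0)."""
    slope, intercept = np.polyfit(added_conc, signal, deg=1)
    return intercept / slope            # concentration in the original sample

added = np.array([0.0, 1.0, 2.0, 4.0])          # spike levels in ng/g (placeholders)
counts = np.array([1520.0, 2980.0, 4530.0, 7490.0])
print(standard_addition(added, counts))          # ~1.0 ng/g in the unspiked sample
```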

  17. Poster - 16: Time-resolved diode dosimetry for in vivo proton therapy range verification: calibration through numerical modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toltz, Allison; Hoesl, Michaela; Schuemann, Jan

    Purpose: A method to refine the implementation of an in vivo, adaptive proton therapy range verification methodology was investigated. Simulation experiments and in-phantom measurements were compared to validate the calibration procedure of a time-resolved diode dosimetry technique. Methods: A silicon diode array system has been developed and experimentally tested in phantom for passively scattered proton beam range verification by correlating properties of the detector signal to the water equivalent path length (WEPL). The implementation of this system requires a set of calibration measurements to establish a beam-specific diode response to WEPL fit for the selected ‘scout’ beam in a solid water phantom. This process is both tedious, as it necessitates a separate set of measurements for every ‘scout’ beam that may be appropriate to the clinical case, and inconvenient due to limited access to the clinical beamline. The diode response to WEPL relationship for a given ‘scout’ beam may instead be determined within a simulation environment, facilitating the applicability of this dosimetry technique. Measurements for three ‘scout’ beams were compared against detector response simulated with Monte Carlo methods using the Tool for Particle Simulation (TOPAS). Results: Detector response in water equivalent plastic was successfully validated against simulation for spread-out Bragg peaks of range 10 cm, 15 cm, and 21 cm (168 MeV, 177 MeV, and 210 MeV) with an adjusted R² of 0.998. Conclusion: Feasibility has been shown for performing calibration of detector response for a given ‘scout’ beam through simulation for the time-resolved diode dosimetry technique.

  18. Calibration Method for IATS and Application in Multi-Target Monitoring Using Coded Targets

    NASA Astrophysics Data System (ADS)

    Zhou, Yueyin; Wagner, Andreas; Wunderlich, Thomas; Wasmeier, Peter

    2017-06-01

    The technique of Image Assisted Total Stations (IATS) has been studied for over ten years and is composed of two major parts: one is the calibration procedure which establishes the relationship between the camera system and the theodolite system; the other is the automatic target detection on the image by various methods of photogrammetry or computer vision. Several calibration methods have been developed, mostly using prototypes with an add-on camera rigidly mounted on the total station. However, these prototypes are not commercially available. This paper proposes a calibration method based on the Leica MS50, which has two built-in cameras each with a resolution of 2560 × 1920 px: an overview camera and a telescope (on-axis) camera. Our work in this paper is based on the on-axis camera which uses the 30-times magnification of the telescope. The calibration involves estimating 7 parameters. We use coded targets, which are common tools in photogrammetry for orientation, to detect different targets in IATS images instead of prisms and traditional ATR functions. We test and verify the efficiency and stability of this monitoring method in multi-target monitoring.

  19. Relative radiometric calibration for multispectral remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Ren, Hsuan

    2006-10-01

    Our environment has been changed continuously by natural causes or human activities. In order to identify what has changed during a certain time period, we need to spend enormous resources to collect all kinds of data and analyze them. With remote sensing images, change detection has become an efficient and inexpensive technique. It has wide applications including disaster management, agricultural analysis, environmental monitoring and military reconnaissance. To detect the changes between two remote sensing images collected at different times, radiometric calibration is one of the most important processes. Under different weather and atmospheric conditions, even the same material may produce distinct radiance spectra in the two images. In this case, such pixels will be misclassified as changes and the false alarm rate will increase. To achieve absolute calibration, i.e., to convert the radiance to a reflectance spectrum, information about the atmospheric conditions or ground reference materials with known reflectance spectra is needed but rarely available. In this paper, we present relative radiometric calibration methods which transform the image pair to a common atmospheric effect, rather than removing it as in absolute calibration, so that information on atmospheric conditions is not required. A SPOT image pair is used in experiments to demonstrate the performance.
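
    A minimal sketch of one common relative-normalization scheme is given below: a per-band linear mapping from the subject image to the reference image, fitted on pixels assumed unchanged between dates. The synthetic images, the pseudo-invariant-pixel selection and the simple regression are our own illustrative choices and not necessarily the paper's methods.

```python
# Hedged sketch: relative radiometric normalization of an image pair by a linear fit
# over pixels assumed unchanged between acquisition dates. All data are synthetic.
import numpy as np

rng = np.random.default_rng(6)
reference = rng.uniform(0, 255, size=(100, 100))                     # date-1 band
subject = 0.8 * reference + 12 + rng.normal(0, 2, reference.shape)   # date-2 band, different atmosphere

# Crude selection of pixels assumed unchanged (real methods use masks or statistical tests)
unchanged = np.abs((subject - subject.mean()) - (reference - reference.mean())) < 30

gain, offset = np.polyfit(subject[unchanged], reference[unchanged], deg=1)
normalized = gain * subject + offset   # subject image expressed in the reference's radiometry
print(round(gain, 3), round(offset, 2))
```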

  20. Calibration of cathode strip gains in multiwire drift chambers of the GlueX experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berdnikov, V. V.; Somov, S. V.; Pentchev, L.

    A technique for calibrating cathode strip gains in multiwire drift chambers of the GlueX experiment is described. The accuracy of the technique is estimated based on Monte Carlo generated data with known gain coefficients in the strip signal channels. One of the four detector sections has been calibrated using cosmic rays. Results of drift chamber calibration on the accelerator beam upon inclusion in the GlueX experimental setup are presented.

  1. Accuracy and efficiency of published film dosimetry techniques using a flat-bed scanner and EBT3 film.

    PubMed

    Spelleken, E; Crowe, S B; Sutherland, B; Challens, C; Kairn, T

    2018-03-01

    Gafchromic EBT3 film is widely used for patient-specific quality assurance of complex treatment plans. Film dosimetry techniques commonly involve the use of transmission scanning to produce TIFF files, which are analysed using a non-linear calibration relationship between the dose and the red-channel net optical density (netOD). Numerous film calibration techniques featured in the literature have not been independently verified or evaluated. A range of previously published film dosimetry techniques were re-evaluated to identify whether these methods produce better results than the commonly used non-linear netOD method. EBT3 film was irradiated at calibration doses between 0 and 4000 cGy, and 25 pieces of film were irradiated at 200 cGy to evaluate uniformity. The film was scanned using two different scanners: the Epson Perfection V800 and the Epson Expression 10000XL. Calibration curves, uncertainty in the fit of the curve, overall uncertainty and uniformity were calculated following the methods described by the different calibration techniques. It was found that protocols based on a conventional film dosimetry technique produced results that were accurate and uniform to within 1%, while some of the unconventional techniques produced much higher uncertainties (> 25% for some techniques). Some of the uncommon methods produced reliable results at standard treatment doses (< 400 cGy); however, none could be recommended as an efficient or accurate replacement for a common film analysis technique which uses transmission scanning, red colour channel analysis, netOD and a non-linear calibration curve for measuring doses up to 4000 cGy when using EBT3 film.
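
    A minimal sketch of the common red-channel netOD workflow with a non-linear calibration fit of the form D = a·netOD + b·netOD^n is shown below; the pixel values and fitted coefficients are synthetic placeholders, not measured EBT3 data.

```python
# Hedged sketch of film dosimetry calibration: red-channel net optical density plus a
# non-linear dose fit D = a*netOD + b*netOD**n. All numbers are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def net_od(pv_exposed, pv_unexposed):
    """Net optical density from red-channel pixel values of exposed and unexposed film."""
    return np.log10(pv_unexposed / pv_exposed)

def dose_model(x, a, b, n):
    return a * x + b * x ** n

doses = np.array([0, 50, 100, 200, 400, 800, 1600, 3200, 4000], dtype=float)   # cGy
pv_cal = np.array([43000, 40500, 38500, 35500, 31000, 26000, 21000, 16500, 15500], dtype=float)
pv_unexposed = 43000.0

netod = net_od(pv_cal, pv_unexposed)
params, _ = curve_fit(dose_model, netod, doses, p0=(1000.0, 4000.0, 2.5))

# Apply the fitted calibration curve to a new measurement
print(dose_model(net_od(30000.0, pv_unexposed), *params))
```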

  2. Oximetry using multispectral imaging: theory and application

    NASA Astrophysics Data System (ADS)

    MacKenzie, Lewis E.; Harvey, Andrew R.

    2018-06-01

    Multispectral imaging (MSI) is a technique for measurement of blood oxygen saturation in vivo that can be applied using various imaging modalities to provide new insights into physiology and disease development. This tutorial aims to provide a thorough introduction to the theory and application of MSI oximetry for researchers new to the field, whilst also providing detailed information for more experienced researchers. The optical theory underlying two-wavelength oximetry, three-wavelength oximetry, pulse oximetry, and multispectral oximetry algorithms is described in detail. The varied challenges of applying MSI oximetry to in vivo applications are outlined and discussed, covering: the optical properties of blood and tissue, optical paths in blood vessels, tissue auto-fluorescence, oxygen diffusion, and common oximetry artefacts. Essential image processing techniques for MSI are discussed, in particular, image acquisition, image registration strategies, and blood vessel line profile fitting. Calibration and validation strategies for MSI are discussed, including comparison techniques, physiological interventions, and phantoms. The optical principles and unique imaging capabilities of various cutting-edge MSI oximetry techniques are discussed, including photoacoustic imaging, spectroscopic optical coherence tomography, and snapshot MSI.
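
    As a hedged illustration of the two-wavelength case (the standard Beer-Lambert textbook form, with symbols chosen by us rather than taken from the tutorial), oxygen saturation S follows from the ratio of optical densities at an oxygen-sensitive wavelength λ1 and an isosbestic wavelength λ2:

```latex
% Two-wavelength oximetry sketch (illustrative symbols): S = oxygen saturation,
% c = total haemoglobin concentration, d = optical path length.
\[ OD(\lambda) = \log_{10}\frac{I_0(\lambda)}{I(\lambda)}
             = \left[ S\,\varepsilon_{HbO_2}(\lambda) + (1-S)\,\varepsilon_{Hb}(\lambda) \right] c\, d \]
\[ R = \frac{OD(\lambda_1)}{OD(\lambda_2)}
   \quad\Longrightarrow\quad
   S = \frac{R\,\varepsilon_{iso}(\lambda_2) - \varepsilon_{Hb}(\lambda_1)}
            {\varepsilon_{HbO_2}(\lambda_1) - \varepsilon_{Hb}(\lambda_1)} \]
```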

  3. Automatic Calibration of a Semi-Distributed Hydrologic Model Using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Bekele, E. G.; Nicklow, J. W.

    2005-12-01

    Hydrologic simulation models need to be calibrated and validated before using them for operational predictions. Spatially-distributed hydrologic models generally have a large number of parameters to capture the various physical characteristics of a hydrologic system. Manual calibration of such models is a very tedious and daunting task, and its success depends on the subjective assessment of a particular modeler, which includes knowledge of the basic approaches and interactions in the model. In order to alleviate these shortcomings, an automatic calibration model, which employs an evolutionary optimization technique known as the Particle Swarm Optimizer (PSO) for parameter estimation, is developed. PSO is a heuristic search algorithm that is inspired by the social behavior of bird flocking or fish schooling. The newly-developed calibration model is integrated into the U.S. Department of Agriculture's Soil and Water Assessment Tool (SWAT). SWAT is a physically-based, semi-distributed hydrologic model that was developed to predict the long term impacts of land management practices on water, sediment and agricultural chemical yields in large complex watersheds with varying soils, land use, and management conditions. SWAT was calibrated for streamflow and sediment concentration. The calibration process involves parameter specification, whereby sensitive model parameters are identified, and parameter estimation. In order to reduce the number of parameters to be calibrated, parameterization was performed. The methodology is applied to a demonstration watershed known as Big Creek, which is located in southern Illinois. Application results show the effectiveness of the approach, and model predictions are significantly improved.
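
    A minimal sketch of PSO-based calibration against a Nash-Sutcliffe objective is given below; a toy two-parameter rainfall-runoff surrogate stands in for SWAT, and the data, bounds and PSO constants are our own assumptions.

```python
# Hedged sketch of particle swarm optimization for hydrologic calibration; the toy
# linear-store model below is a placeholder, not SWAT.
import numpy as np

rng = np.random.default_rng(0)
rain = rng.gamma(2.0, 5.0, size=365)              # synthetic daily rainfall

def toy_model(params, rain):
    runoff_coeff, recession = params
    flow, store = np.zeros_like(rain), 0.0
    for i, r in enumerate(rain):
        store += runoff_coeff * r
        flow[i] = recession * store
        store -= flow[i]
    return flow

def nse(sim, obs):
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = toy_model((0.4, 0.3), rain) + rng.normal(0, 0.2, rain.size)   # synthetic observations

def objective(params):
    return 1.0 - nse(toy_model(params, rain), obs)   # minimize 1 - NSE

def pso(objective, lb, ub, n_particles=20, n_iter=100, w=0.7, c1=1.5, c2=1.5):
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    x = rng.uniform(lb, ub, size=(n_particles, lb.size))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, lb.size))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lb, ub)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, objective(gbest)

print(pso(objective, lb=[0.01, 0.01], ub=[1.0, 1.0]))   # best parameters and their 1 - NSE
```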

  4. Measuring the photodetector frequency response for ultrasonic applications by a heterodyne system with difference- frequency servo control.

    PubMed

    Koch, Christian

    2010-05-01

    A technique for the calibration of photodiodes in ultrasonic measurement systems using standard and cost-effective optical and electronic components is presented. A heterodyne system was realized using two commercially available distributed feedback lasers, and the required frequency stability and resolution were ensured by a difference-frequency servo control scheme. The frequency-sensitive element generating the error signal for the servo loop comprised a delay-line discriminator constructed from electronic elements. Measurements were carried out at up to 450 MHz, and the uncertainties of about 5% (k = 2) can be further reduced by improved radio frequency power measurement without losing the feature of using only simple elements. The technique, initially developed to determine the frequency response of photodetectors used in ultrasonic applications, can be transferred to other fields of optical measurement.

  5. Uptake and release of polar compounds in SDB-RPS Empore disks; implications for their use as passive samplers.

    PubMed

    Shaw, Melanie; Eaglesham, Geoff; Mueller, Jochen F

    2009-03-01

    Demand for sensitive monitoring tools to detect trace levels of pollutants in aquatic environments has led to investigation of sorbents to complement the suite of passive sampling phases currently in use. Styrenedivinylbenzene-reverse phase sulfonated (SDB-RPS) sorbents have a high affinity for polar organic compounds such as herbicides. However, the applicability of the performance reference compound (PRC) concept as an in situ calibration method for passive samplers that use this or similar sampling phases has yet to be validated. In this study, laboratory-based calibration experiments were conducted to compare the uptake kinetics of several key pesticides with the release of three pre-loaded PRCs in Chemcatchers using SDB-RPS Empore disks deployed with a membrane and without (naked). For compounds with log K(OW) values ranging from 1.8 to 4.0, uptake into samplers with a membrane and without was linear over 30 d and 10 d, respectively. While uptake was linear and reproducible, PRC loss was not linear, meaning that the dissipation rates of these PRCs cannot be used to account for the effect of field exposure conditions on uptake rates. An alternative in situ calibration technique using PRC-loaded polydimethylsiloxane (PDMS) disks deployed alongside the Empore disk samplers as a surrogate calibration phase has been tested in the current study and shows promise for future applications.
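
    For context, the kinetic (isotropic-exchange) calibration underlying the PRC concept can be sketched as follows; this is the standard SPME relation from the literature, with symbols chosen by us (n = amount extracted, n_e = K_fs·V_f·C_s the equilibrium amount, q/q_0 the remaining fraction of pre-loaded calibrant):

```latex
% Hedged sketch of isotropic-exchange kinetic calibration (symbols ours).
\[ \frac{n}{n_e} = 1 - e^{-a t}, \qquad \frac{q}{q_0} = e^{-a t}
   \quad\Longrightarrow\quad \frac{n}{n_e} + \frac{q}{q_0} = 1 \]
% One desorption measurement then yields the sample concentration:
\[ C_s = \frac{n}{\left(1 - q/q_0\right) K_{fs}\, V_f} \]
```

    The non-linear PRC loss reported above suggests that this exchange symmetry does not hold for the SDB-RPS phase, which is why a PDMS surrogate calibration phase is tested instead.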

  6. The Application of FT-IR Spectroscopy for Quality Control of Flours Obtained from Polish Producers

    PubMed Central

    Ceglińska, Alicja; Reder, Magdalena; Ciemniewska-Żytkiewicz, Hanna

    2017-01-01

    Samples of wheat, spelt, rye, and triticale flours produced by different Polish mills were studied by both classic chemical methods and FT-IR MIR spectroscopy. An attempt was made to statistically correlate FT-IR spectral data with reference data with regard to content of various components, for example, proteins, fats, ash, and fatty acids, as well as properties such as moisture, falling number, and energetic value. This correlation resulted in calibrated and validated statistical models for versatile evaluation of unknown flour samples. The calibration data set was used to construct calibration models using CSR and PLS with leave-one-out cross-validation. The calibrated models were validated with a validation data set. The results obtained confirmed that application of statistical models based on MIR spectral data is a robust, accurate, precise, rapid, inexpensive, and convenient methodology for determination of flour characteristics, as well as for detection of content of selected flour ingredients. The obtained models' characteristics were as follows: R2 = 0.97, PRESS = 2.14; R2 = 0.96, PRESS = 0.69; R2 = 0.95, PRESS = 1.27; R2 = 0.94, PRESS = 0.76, for content of proteins, lipids, ash, and moisture level, respectively. Best results of CSR models were obtained for protein, ash, and crude fat (R2 = 0.86, 0.82, and 0.78, respectively). PMID:28243483
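
    A minimal sketch of a PLS calibration with leave-one-out cross-validation and a PRESS statistic is shown below; the spectra and protein values are synthetic stand-ins for the FT-IR data, and scikit-learn is assumed to be available.

```python
# Hedged sketch: PLS calibration of a flour property from spectra, validated with
# leave-one-out cross-validation and summarized by R2 and PRESS. Data are synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(5)
n_samples, n_wavenumbers = 40, 300
spectra = rng.normal(size=(n_samples, n_wavenumbers))
protein = 2.0 * spectra[:, 50] + spectra[:, 120] + rng.normal(0, 0.1, n_samples)  # toy link

pls = PLSRegression(n_components=5)
predicted = cross_val_predict(pls, spectra, protein, cv=LeaveOneOut()).ravel()

press = np.sum((protein - predicted) ** 2)
r2 = 1.0 - press / np.sum((protein - protein.mean()) ** 2)
print(f"R2 = {r2:.3f}, PRESS = {press:.2f}")
```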

  7. The Role of Wakes in Modelling Tidal Current Turbines

    NASA Astrophysics Data System (ADS)

    Conley, Daniel; Roc, Thomas; Greaves, Deborah

    2010-05-01

    The eventual proper development of arrays of Tidal Current Turbines (TCT) will require a balance which maximizes power extraction while minimizing environmental impacts. Idealized analytical analogues and simple 2-D models are useful tools for investigating questions of a general nature but do not represent a practical tool for application to realistic cases. Some form of 3-D numerical simulations will be required for such applications and the current project is designed to develop a numerical decision-making tool for use in planning large scale TCT projects. The project is predicated on the use of an existing regional ocean modelling framework (the Regional Ocean Modelling System - ROMS) which is modified to enable the user to account for the effects of TCTs. In such a framework where mixing processes are highly parametrized, the fidelity of the quantitative results is critically dependent on the parameter values utilized. In light of the early stage of TCT development and the lack of field scale measurements, the calibration of such a model is problematic. In the absence of explicit calibration data sets, the device wake structure has been identified as an efficient feature for model calibration. This presentation will discuss efforts to design an appropriate calibration scheme focused on wake decay; the motivation for this approach, the techniques applied, validation results from simple test cases, and limitations will be presented.

  8. A calibration method for the higher modes of a micro-mechanical cantilever

    NASA Astrophysics Data System (ADS)

    Shatil, N. R.; Homer, M. E.; Picco, L.; Martin, P. G.; Payton, O. D.

    2017-05-01

    Micro-mechanical cantilevers are increasingly being used as a characterisation tool in both material and biological sciences. New non-destructive applications are being developed that rely on the information encoded within the cantilever's higher oscillatory modes, such as atomic force microscopy techniques that measure the non-topographic properties of a sample. However, these methods require the spring constants of the cantilever at higher modes to be known in order to quantify their results. Here, we show how to calibrate the micro-mechanical cantilever and find the effective spring constant of any mode. The method is uncomplicated to implement, using only the properties of the cantilever and the fundamental mode that are straightforward to measure.

  9. Handy Microscopic Close-Range Videogrammetry

    NASA Astrophysics Data System (ADS)

    Esmaeili, F.; Ebadi, H.

    2017-09-01

    The modeling of small-scale objects is used in different applications such as medicine, industry, and cultural heritage. In this paper, the capability of modeling small-scale objects by imaging with handheld USB digital microscopes and videogrammetry techniques is implemented and evaluated. Use of this equipment with convergent imaging of the scene provides an appropriate set of images for generating three-dimensional models. The results of measurements made with a microscope micrometer calibration ruler demonstrate that self-calibration of a handheld camera-microscope set can achieve a three-dimensional detail extraction precision of about 0.1 millimeters on small-scale scenes.

  10. Calibration techniques and results in the soft X-ray and extreme ultraviolet for components of the Extreme Ultraviolet Explorer Satellite

    NASA Technical Reports Server (NTRS)

    Malina, Roger F.; Jelinsky, Patrick; Bowyer, Stuart

    1986-01-01

    The calibration facilities and techniques for the Extreme Ultraviolet Explorer (EUVE) from 44 to 2500 A are described. Key elements include newly designed radiation sources and a collimated monochromatic EUV beam. Sample results for the calibration of the EUVE filters, detectors, gratings, collimators, and optics are summarized.

  11. A 2.4-GHz Energy-Efficient Transmitter for Wireless Medical Applications.

    PubMed

    Qi Zhang; Peng Feng; Zhiqing Geng; Xiaozhou Yan; Nanjian Wu

    2011-02-01

    A 2.4-GHz energy-efficient transmitter (TX) for wireless medical applications is presented in this paper. It consists of four blocks: a phase-locked loop (PLL) synthesizer with a direct frequency presetting technique, a class-B power amplifier, a digital processor, and nonvolatile memory (NVM). The frequency presetting technique can accurately preset the carrier frequency of the voltage-controlled oscillator and reduce the lock-in time of the PLL synthesizer, further increasing the data rate of communication with low power consumption. The digital processor automatically compensates preset frequency variation with process, voltage, and temperature. The NVM stores the presetting signals and calibration data so that the TX can avoid the repetitive calibration process and save energy in practical applications. The design is implemented in a 0.18-μm radio-frequency complementary metal-oxide semiconductor process and the active area is 1.3 mm². The TX achieves 0-dBm output power with a maximum data rate of 4 Mb/s/2 Mb/s and dissipates 2.7-mA/5.4-mA current from a 1.8-V power supply for on-off keying/frequency-shift keying modulation, respectively. The corresponding energy efficiency is 1.2 nJ/b·mW and 4.8 nJ/b·mW when normalized to the transmitting power.

  12. Comparison of spectral radiance responsivity calibration techniques used for backscatter ultraviolet satellite instruments

    NASA Astrophysics Data System (ADS)

    Kowalewski, M. G.; Janz, S. J.

    2015-02-01

    Methods of absolute radiometric calibration of backscatter ultraviolet (BUV) satellite instruments are compared as part of an effort to minimize pre-launch calibration uncertainties. An internally illuminated integrating sphere source has been used for the Shuttle Solar BUV, Total Ozone Mapping Spectrometer, Ozone Mapping Instrument, and Global Ozone Monitoring Experiment 2 using standardized procedures traceable to national standards. These sphere-based spectral responsivities agree to within the derived combined standard uncertainty of 1.87% relative to calibrations performed using an external diffuser illuminated by standard irradiance sources, the customary spectral radiance responsivity calibration method for BUV instruments. The combined standard uncertainty for these calibration techniques as implemented at the NASA Goddard Space Flight Center’s Radiometric Calibration and Development Laboratory is shown to be less than 2% at 250 nm when using a single traceable calibration standard.

  13. Vicarious calibrations of HICO data acquired from the International Space Station.

    PubMed

    Gao, Bo-Cai; Li, Rong-Rong; Lucke, Robert L; Davis, Curtiss O; Bevilacqua, Richard M; Korwan, Daniel R; Montes, Marcos J; Bowles, Jeffrey H; Corson, Michael R

    2012-05-10

    The Hyperspectral Imager for the Coastal Ocean (HICO) presently onboard the International Space Station (ISS) is an imaging spectrometer designed for remote sensing of coastal waters. The instrument is not equipped with any onboard spectral and radiometric calibration devices. Here we describe vicarious calibration techniques that have been used in converting the HICO raw digital numbers to calibrated radiances. The spectral calibration is based on matching atmospheric water vapor and oxygen absorption bands and extraterrestrial solar lines. The radiometric calibration is based on comparisons between HICO and the EOS/MODIS data measured over homogeneous desert areas and on spectral reflectance properties of coral reefs and water clouds. Improvements to the present vicarious calibration techniques are possible as we gain more in-depth understanding of the HICO laboratory calibration data and the ISS HICO data in the future.

  14. Examples of Current and Future Uses of Neural-Net Image Processing for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.

    2004-01-01

    Feed forward artificial neural networks are very convenient for performing correlated interpolation of pairs of complex noisy data sets as well as detecting small changes in image data. Image-to-image, image-to-variable and image-to-index applications have been tested at Glenn. Early demonstration applications are summarized including image-directed alignment of optics, tomography, flow-visualization control of wind-tunnel operations and structural-model-trained neural networks. A practical application is reviewed that employs neural-net detection of structural damage from interference fringe patterns. Both sensor-based and optics-only calibration procedures are available for this technique. These accomplishments have generated the knowledge necessary to suggest some other applications for NASA and Government programs. A tomography application is discussed to support Glenn's Icing Research tomography effort. The self-regularizing capability of a neural net is shown to predict the expected performance of the tomography geometry and to augment fast data processing. Other potential applications involve the quantum technologies. It may be possible to use a neural net as an image-to-image controller of an optical tweezers being used for diagnostics of isolated nano structures. The image-to-image transformation properties also offer the potential for simulating quantum computing. Computer resources are detailed for implementing the black box calibration features of the neural nets.

  15. Calibration of a horizontally acting force transducer with the use of a simple pendulum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taberner, Andrew J.; Hunter, Ian W.; BioInstrumentation Laboratory, Department of Mechanical Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, Massachusetts 02139 and Institute for Soldier Nanotechnologies, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, Massachusetts 02139

    This article details the implementation of a method for calibrating horizontally measuring force transducers using a pendulum. The technique exploits the sinusoidal inertial force generated by a suspended mass as it pendulates about a point on the measurement axis of the force transducer. The method is used to calibrate a reconfigurable, custom-made force transducer based on exchangeable cantilevers with stiffness ranging from 10 to 10⁴ N/m. In this implementation, the relative combined standard uncertainty in the calibrated transducer stiffness is 0.41% while the repeatability of the calibration technique is 0.46%.
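
    As a hedged small-angle sketch (notation ours), the horizontal force that a suspended mass m on a string of length L exerts along the transducer axis, when released at angular amplitude θ0, is sinusoidal with an amplitude traceable to m, g and θ0:

```latex
% Small-angle approximation with theta(t) = theta_0 cos(omega t); higher-order terms neglected.
\[ F_x(t) \approx m\, g\, \theta_0 \cos(\omega t), \qquad \omega = \sqrt{g/L} \]
```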

  16. A Review of LIDAR Radiometric Processing: From Ad Hoc Intensity Correction to Rigorous Radiometric Calibration.

    PubMed

    Kashani, Alireza G; Olsen, Michael J; Parrish, Christopher E; Wilson, Nicholas

    2015-11-06

    In addition to precise 3D coordinates, most light detection and ranging (LIDAR) systems also record "intensity", loosely defined as the strength of the backscattered echo for each measured point. To date, LIDAR intensity data have proven beneficial in a wide range of applications because they are related to surface parameters, such as reflectance. While numerous procedures have been introduced in the scientific literature, and even commercial software, to enhance the utility of intensity data through a variety of "normalization", "correction", or "calibration" techniques, the current situation is complicated by a lack of standardization, as well as confusing, inconsistent use of terminology. In this paper, we first provide an overview of basic principles of LIDAR intensity measurements and applications utilizing intensity information from terrestrial, airborne topographic, and airborne bathymetric LIDAR. Next, we review effective parameters on intensity measurements, basic theory, and current intensity processing methods. We define terminology adopted from the most commonly-used conventions based on a review of current literature. Finally, we identify topics in need of further research. Ultimately, the presented information helps lay the foundation for future standards and specifications for LIDAR radiometric calibration.

  17. First International Symposium on Strain Gauge Balances. Pt. 1

    NASA Technical Reports Server (NTRS)

    Tripp, John S. (Editor); Tcheng, Ping (Editor)

    1999-01-01

    The first International Symposium on Strain Gauge Balances was sponsored and held at NASA Langley Research Center during October 22-25, 1996. The symposium provided an open international forum for presentation, discussion, and exchange of technical information among wind tunnel test technique specialists and strain gauge balance designers. The Symposium also served to initiate organized professional activities among the participating and relevant international technical communities. Over 130 delegates from 15 countries were in attendance. The program opened with a panel discussion, followed by technical paper sessions, and guided tours of the National Transonic Facility (NTF) wind tunnel, a local commercial balance fabrication facility, and the LaRC balance calibration laboratory. The opening panel discussion addressed "Future Trends in Balance Development and Applications." Forty-six technical papers were presented in 11 technical sessions covering the following areas: calibration, automatic calibration, data reduction, facility reports, design, accuracy and uncertainty analysis, strain gauges, instrumentation, balance design, thermal effects, finite element analysis, applications, and special balances. At the conclusion of the Symposium, a steering committee representing most of the nations and several U.S. organizations attending the Symposium was established to initiate planning for a second international balance symposium, to be held in 1999 in the UK.

  18. A travelling standard for radiopharmaceutical production centres in Italy

    NASA Astrophysics Data System (ADS)

    Capogni, M.; de Felice, P.; Fazio, A.

    Short-lived radionuclides (γ, β+ and/or β- emitters) such as 18F and 99mTc, which are particularly useful for nuclear medicine applications, both diagnostic and therapeutic, can be produced with high specific activity in a small biomedical cyclotron or by a radionuclide generator. While [18F]Fludeoxyglucose ([18F]FDG) is a widely used radiopharmaceutical for positron emission tomography, the development of innovative diagnostic techniques and therapies involves the use of new radio-labelled molecules and emerging radionuclides, such as 64Cu and 124I. During the last 3 years, an extensive supply of [18F]FDG was started by many production sites in Italy, and new radiopharmaceuticals are being studied for future nuclear medical applications. Therefore, a special nuclear medicine research programme for primary standard development and transferral to the end-users has been carried out by the ENEA-INMRI. Because of the short half-lives of these nuclides, a portable well-type ionisation chamber was established as a secondary travelling standard. This device has been calibrated and transported to the radiopharmaceutical production centres in Italy where the local instrumentation, typically radionuclide calibrators, has been calibrated by a simple comparison, with an uncertainty level lower than 2%.

  19. First International Symposium on Strain Gauge Balances. Part 2

    NASA Technical Reports Server (NTRS)

    Tripp, John S (Editor); Tcheng, Ping (Editor)

    1999-01-01

    The first International Symposium on Strain Gauge Balances was sponsored and held at NASA Langley Research Center during October 22-25, 1996. The symposium provided an open international forum for presentation, discussion, and exchange of technical information among wind tunnel test technique specialists and strain gauge balance designers. The Symposium also served to initiate organized professional activities among the participating and relevant international technical communities. Over 130 delegates from 15 countries were in attendance. The program opened with a panel discussion, followed by technical paper sessions, and guided tours of the National Transonic Facility (NTF) wind tunnel, a local commercial balance fabrication facility, and the LaRC balance calibration laboratory. The opening panel discussion addressed "Future Trends in Balance Development and Applications." Forty-six technical papers were presented in 11 technical sessions covering the following areas: calibration, automatic calibration, data reduction, facility reports, design, accuracy and uncertainty analysis, strain gauges, instrumentation, balance design, thermal effects, finite element analysis, applications, and special balances. At the conclusion of the Symposium, a steering committee representing most of the nations and several U.S. organizations attending the Symposium was established to initiate planning for a second international balance symposium, to be held in 1999 in the UK.

  20. Absolute efficiency calibration of 6LiF-based solid state thermal neutron detectors

    NASA Astrophysics Data System (ADS)

    Finocchiaro, Paolo; Cosentino, Luigi; Lo Meo, Sergio; Nolte, Ralf; Radeck, Desiree

    2018-03-01

    The demand for new thermal neutron detectors as an alternative to 3He tubes in research, industrial, safety and homeland security applications is growing. These needs have triggered research and development activities on new generations of thermal neutron detectors, characterized by reasonable efficiency and gamma rejection comparable to 3He tubes. In this paper we show the state of the art of a promising low-cost technique, based on commercial solid state silicon detectors coupled with thin neutron converter layers of 6LiF deposited onto carbon fiber substrates. A few configurations were studied with the GEANT4 simulation code, and the intrinsic efficiency of the corresponding detectors was calibrated at the PTB Thermal Neutron Calibration Facility. The results show that the measured intrinsic detection efficiency is well reproduced by the simulations, therefore validating the simulation tool in view of new designs. These neutron detectors have also been tested at neutron beam facilities like ISIS (Rutherford Appleton Laboratory, UK) and n_TOF (CERN) where a few samples are already in operation for beam flux and 2D profile measurements. Forthcoming applications are foreseen for the online monitoring of spent nuclear fuel casks in interim storage sites.

  1. Hydrologic calibration of paired watersheds using a MOSUM approach

    DOE PAGES

    Ssegane, H.; Amatya, D. M.; Muwamba, A.; ...

    2015-01-09

    Paired watershed studies have historically been used to quantify hydrologic effects of land use and management practices by concurrently monitoring two neighboring watersheds (a control and a treatment) during the calibration (pre-treatment) and post-treatment periods. This study characterizes seasonal water table and flow response to rainfall during the calibration period and tests a change detection technique of moving sums of recursive residuals (MOSUM) to select calibration periods for each control-treatment watershed pair when the regression coefficients for daily water table elevation (WTE) were most stable to reduce regression model uncertainty. The control and treatment watersheds included 1–3 year intensively managed loblolly pine (Pinus taeda L.) with natural understory, same age loblolly pine intercropped with switchgrass (Panicum virgatum), 14–15 year thinned loblolly pine with natural understory (control), and switchgrass only. Although monitoring during the calibration period spanned 2009 to 2012, silvicultural operational practices that occurred during this period such as harvesting of existing stand and site preparation for pine and switchgrass establishment may have acted as external factors, potentially shifting hydrologic calibration relationships between control and treatment watersheds. Results indicated that MOSUM was able to detect significant changes in regression parameters for WTE due to silvicultural operations. This approach also minimized uncertainty of calibration relationships which could otherwise mask marginal treatment effects. All calibration relationships developed using this MOSUM method were quantifiable, strong, and consistent with Nash–Sutcliffe Efficiency (NSE) greater than 0.97 for WTE and NSE greater than 0.92 for daily flow, indicating its applicability for choosing calibration periods of paired watershed studies.
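
    A minimal sketch of the MOSUM idea on synthetic control/treatment water-table series is given below; the window length, regression form and simulated shift are our own illustrative choices, not the study's data or code.

```python
# Hedged sketch: moving sums of recursive residuals (MOSUM) for detecting a shift in a
# control-vs-treatment calibration regression. All series are synthetic.
import numpy as np

def recursive_residuals(X, y, start=None):
    n, k = X.shape
    start = start or k + 1
    w = []
    for t in range(start, n):
        beta, *_ = np.linalg.lstsq(X[:t], y[:t], rcond=None)
        xtx_inv = np.linalg.inv(X[:t].T @ X[:t])
        f = 1.0 + X[t] @ xtx_inv @ X[t]
        w.append((y[t] - X[t] @ beta) / np.sqrt(f))
    return np.array(w)

def mosum(w, h):
    sigma = w.std(ddof=1)
    csum = np.cumsum(np.insert(w, 0, 0.0))
    return (csum[h:] - csum[:-h]) / (sigma * np.sqrt(h))

rng = np.random.default_rng(1)
n = 400
control = rng.normal(0.0, 1.0, n)                         # control-watershed WTE (standardized)
treatment = 0.2 + 0.9 * control + rng.normal(0.0, 0.1, n)
treatment[n // 2:] += 0.5                                 # simulated operational/treatment shift
X = np.column_stack([np.ones(n), control])

stat = mosum(recursive_residuals(X, treatment), h=30)
print(int(np.argmax(np.abs(stat))))   # the statistic peaks near the simulated change point
```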

  2. Improved Neural Signal Classification in a Rapid Serial Visual Presentation Task Using Active Learning.

    PubMed

    Marathe, Amar R; Lawhern, Vernon J; Wu, Dongrui; Slayback, David; Lance, Brent J

    2016-03-01

    The application space for brain-computer interface (BCI) technologies is rapidly expanding with improvements in technology. However, most real-time BCIs require extensive individualized calibration prior to use, and systems often have to be recalibrated to account for changes in the neural signals due to a variety of factors including changes in human state, the surrounding environment, and task conditions. Novel approaches to reduce calibration time or effort will dramatically improve the usability of BCI systems. Active Learning (AL) is an iterative semi-supervised learning technique for learning in situations in which data may be abundant, but labels for the data are difficult or expensive to obtain. In this paper, we apply AL to a simulated BCI system for target identification using data from a rapid serial visual presentation (RSVP) paradigm to minimize the amount of training samples needed to initially calibrate a neural classifier. Our results show AL can produce similar overall classification accuracy with significantly less labeled data (in some cases less than 20%) when compared to alternative calibration approaches. In fact, AL classification performance matches performance of 10-fold cross-validation (CV) in over 70% of subjects when training with less than 50% of the data. To our knowledge, this is the first work to demonstrate the use of AL for offline electroencephalography (EEG) calibration in a simulated BCI paradigm. While AL itself is not often amenable for use in real-time systems, this work opens the door to alternative AL-like systems that are more amenable for BCI applications and thus enables future efforts for developing highly adaptive BCI systems.
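
    A minimal pool-based uncertainty-sampling sketch of the AL loop is given below, with synthetic data and a logistic-regression classifier standing in for the EEG features and neural classifier of the study (scikit-learn assumed available).

```python
# Hedged sketch of active learning by uncertainty sampling: label only the samples the
# current classifier is least sure about. Data are synthetic, not RSVP/EEG recordings.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

labeled = list(rng.choice(len(X), size=20, replace=False))       # small seed calibration set
pool = [i for i in range(len(X)) if i not in labeled]

clf = LogisticRegression(max_iter=1000)
for _ in range(100):                                             # budget of 100 extra labels
    clf.fit(X[labeled], y[labeled])
    probs = clf.predict_proba(X[pool])[:, 1]
    query = pool[int(np.argmin(np.abs(probs - 0.5)))]            # most uncertain pool sample
    labeled.append(query)
    pool.remove(query)

print(clf.score(X, y))   # accuracy after labeling only a small fraction of the data
```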

  3. Respiratory monitoring by inductive plethysmography in unrestrained subjects using position sensor-adjusted calibration.

    PubMed

    Brüllmann, Gregor; Fritsch, Karsten; Thurnheer, Robert; Bloch, Konrad E

    2010-01-01

    Portable respiratory inductive plethysmography (RIP) is promising for noninvasive monitoring of breathing patterns in unrestrained subjects. However, its use has been hampered by requiring recalibration after changes in body position. To facilitate RIP application in unrestrained subjects, we developed a technique for adjustment of RIP calibration using position sensor feedback. Five healthy subjects and 12 patients with lung disease were monitored by portable RIP with sensors incorporated within a body garment. Unrestrained individuals were studied during 40-60 min while supine, sitting and upright/walking. Position was changed repeatedly every 5-10 min. Initial qualitative diagnostic calibration followed by volume scaling in absolute units during 20 breaths in different positions by flow meter provided position-specific volume-motion coefficients for RIP. These were applied during subsequent monitoring in corresponding positions according to feedback from 4 accelerometers placed at the chest and thigh. Accuracy of RIP was evaluated by face mask pneumotachography. Position sensor feedback allowed accurate adjustment of RIP calibration during repeated position changes in subjects and patients as reflected in a minor mean difference (bias) in breath-by-breath tidal volumes estimated by RIP and flow meter of 0.02 liters (not significant) and limits of agreement (+/-2 SD) of +/-19% (2,917 comparisons). An average of 10 breaths improved precision of RIP (limits of agreement +/-14%). RIP calibration incorporating position sensor feedback greatly enhances the application of RIP as a valuable, unobtrusive tool to investigate respiratory physiology and ventilatory limitation in unrestrained healthy subjects and patients with lung disease during everyday activities including position changes.
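
    A hedged sketch of how position-sensor feedback can switch between position-specific volume-motion coefficients is given below; the coefficient values and the posture rule are illustrative placeholders, not those of the study.

```python
# Hedged sketch: apply position-specific RIP volume-motion coefficients selected from an
# accelerometer-derived posture. Coefficients and the posture rule are illustrative only.
COEFFS = {  # (a, b) scale ribcage (RC) and abdomen (AB) excursions to litres, per posture
    "supine": (0.55, 0.45),
    "sitting": (0.60, 0.40),
    "upright": (0.65, 0.35),
}

def posture_from_accelerometer(chest_angle_deg: float) -> str:
    # Crude illustrative rule: chest near horizontal -> supine, otherwise sitting/upright.
    if chest_angle_deg < 30:
        return "supine"
    return "sitting" if chest_angle_deg < 70 else "upright"

def tidal_volume(rc_excursion: float, ab_excursion: float, chest_angle_deg: float) -> float:
    a, b = COEFFS[posture_from_accelerometer(chest_angle_deg)]
    return a * rc_excursion + b * ab_excursion

print(tidal_volume(0.4, 0.5, chest_angle_deg=80.0))   # litres, for an upright posture
```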

  4. The influence of temperature calibration on the OC-EC results from a dual-optics thermal carbon analyzer

    NASA Astrophysics Data System (ADS)

    Pavlovic, J.; Kinsey, J. S.; Hays, M. D.

    2014-09-01

    Thermal-optical analysis (TOA) is a widely used technique that fractionates carbonaceous aerosol particles into organic and elemental carbon (OC and EC), or carbonate. Thermal sub-fractions of evolved OC and EC are also used for source identification and apportionment; thus, oven temperature accuracy during TOA analysis is essential. Evidence now indicates that the "actual" sample (filter) temperature and the temperature measured by the built-in oven thermocouple (or set-point temperature) can differ by as much as 50 °C. This difference can affect the OC-EC split point selection and consequently the OC and EC fraction and sub-fraction concentrations being reported, depending on the sample composition and in-use TOA method and instrument. The present study systematically investigates the influence of an oven temperature calibration procedure for TOA. A dual-optical carbon analyzer that simultaneously measures transmission and reflectance (TOT and TOR) is used, functioning under the conditions of both the National Institute of Occupational Safety and Health Method 5040 (NIOSH) and Interagency Monitoring of Protected Visual Environment (IMPROVE) protocols. The application of the oven calibration procedure to our dual-optics instrument significantly changed NIOSH 5040 carbon fractions (OC and EC) and the IMPROVE OC fraction. In addition, the well-known OC-EC split difference between NIOSH and IMPROVE methods is even further perturbed following the instrument calibration. Further study is needed to determine if the widespread application of this oven temperature calibration procedure will indeed improve accuracy and our ability to compare among carbonaceous aerosol studies that use TOA.

  5. Simple constant-current-regulated power supply

    NASA Technical Reports Server (NTRS)

    Priebe, D. H. E.; Sturman, J. C.

    1977-01-01

    Supply incorporates soft-start circuit that slowly ramps current up to set point at turn-on. Supply consists of full-wave rectifier, regulating pass transistor, current feedback circuit, and quad single-supply operational-amplifier circuit providing control. Technique is applicable to any system requiring constant dc current, such as vacuum tube equipment, heaters, or battery chargers; it has been used to supply constant current for instrument calibration.

  6. Determination of thickness of thin turbid painted over-layers using micro-scale spatially offset Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Conti, Claudia; Realini, Marco; Colombo, Chiara; Botteon, Alessandra; Bertasa, Moira; Striova, Jana; Barucci, Marco; Matousek, Pavel

    2016-12-01

    We present a method for estimating the thickness of thin turbid layers using defocusing micro-spatially offset Raman spectroscopy (micro-SORS). The approach, applicable to highly turbid systems, enables one to predict depths in excess of those accessible with conventional Raman microscopy. The technique can be used, for example, to establish the paint layer thickness on cultural heritage objects, such as panel canvases, mural paintings, painted statues and decorated objects. Other applications include analysis in polymer, biological and biomedical disciplines, catalytic and forensic sciences where highly turbid overlayers are often present and where invasive probing may not be possible or is undesirable. The method comprises two stages: (i) a calibration step for training the method on a well characterized sample set with a known thickness, and (ii) a prediction step where the prediction of layer thickness is carried out non-invasively on samples of unknown thickness of the same chemical and physical make-up as the calibration set. An illustrative example of a practical deployment of this method is the analysis of larger areas of paintings. In this case, first, a calibration would be performed on a fragment of painting of a known thickness (e.g. derived from cross-sectional analysis) and subsequently the analysis of thickness across larger areas of painting could then be carried out non-invasively. The performance of the method is compared with that of the more established optical coherence tomography (OCT) technique on an identical sample set. This article is part of the themed issue "Raman spectroscopy in art and archaeology".

  7. Solution to the Problem of Calibration of Low-Cost Air Quality Measurement Sensors in Networks.

    PubMed

    Miskell, Georgia; Salmond, Jennifer A; Williams, David E

    2018-04-27

    We provide a simple, remote, continuous calibration technique suitable for application in a hierarchical network featuring a few well-maintained, high-quality instruments ("proxies") and a larger number of low-cost devices. The ideas are grounded in a clear definition of the purpose of a low-cost network, defined here as providing reliable information on air quality at small spatiotemporal scales. The technique assumes linearity of the sensor signal. It derives running slope and offset estimates by matching mean and standard deviations of the sensor data to values derived from proxies over the same time. The idea is extremely simple: choose an appropriate proxy and an averaging-time that is sufficiently long to remove the influence of short-term fluctuations but sufficiently short that it preserves the regular diurnal variations. The use of running statistical measures rather than cross-correlation of sites means that the method is robust against periods of missing data. Ideas are first developed using simulated data and then demonstrated using field data, at hourly and 1 min time-scales, from a real network of low-cost semiconductor-based sensors. Despite the almost naïve simplicity of the method, it was robust for both drift detection and calibration correction applications. We discuss the use of generally available geographic and environmental data as well as microscale land-use regression as means to enhance the proxy estimates and to generalize the ideas to other pollutants with high spatial variability, such as nitrogen dioxide and particulates. These improvements can also be used to minimize the required number of proxy sites.
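
    A minimal sketch of the described mean/standard-deviation matching over running windows is shown below, on synthetic data; the 72-hour window and the sensor/proxy series are our own illustrative choices.

```python
# Hedged sketch: running gain/offset correction of a linear low-cost sensor by matching
# its rolling mean and standard deviation to a reference-grade proxy. Data are synthetic.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
t = pd.date_range("2018-01-01", periods=24 * 60, freq="60min")
truth = 20 + 10 * np.sin(2 * np.pi * np.arange(len(t)) / 24) + rng.normal(0, 2, len(t))

proxy = pd.Series(truth + rng.normal(0, 1, len(t)), index=t)             # well-maintained site
sensor = pd.Series(0.6 * truth + 5 + rng.normal(0, 1, len(t)), index=t)  # drifted low-cost unit

window = "72h"   # long enough to smooth short-term fluctuations, short enough to keep the diurnal cycle
gain = proxy.rolling(window).std() / sensor.rolling(window).std()
offset = proxy.rolling(window).mean() - gain * sensor.rolling(window).mean()
corrected = gain * sensor + offset

print(corrected.tail(3).round(1))
```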

  8. Ultrasound data for laboratory calibration of an analytical model to calculate crack depth on asphalt pavements.

    PubMed

    Franesqui, Miguel A; Yepes, Jorge; García-González, Cándida

    2017-08-01

    This article outlines the ultrasound data employed to calibrate in the laboratory an analytical model that permits the calculation of the depth of partial-depth surface-initiated cracks on bituminous pavements using this non-destructive technique. This initial calibration is required so that the model provides sufficient precision during practical application. The ultrasonic pulse transit times were measured on beam samples of different asphalt mixtures (semi-dense asphalt concrete AC-S; asphalt concrete for very thin layers BBTM; and porous asphalt PA). The cracks on the laboratory samples were simulated by means of notches of variable depths. With the data of ultrasound transmission time ratios, curve-fittings were carried out on the analytical model, thus determining the regression parameters and their statistical dispersion. The calibrated models obtained from laboratory datasets were subsequently applied to auscultate the evolution of the crack depth after microwaves exposure in the research article entitled "Top-down cracking self-healing of asphalt pavements with steel filler from industrial waste applying microwaves" (Franesqui et al., 2017) [1].
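
    For orientation, a commonly used geometric path model for surface-crack depth measurement with ultrasonic transit times is sketched below (constant wave velocity and transducers placed symmetrically about the crack are assumed); this illustrative form is not necessarily the exact analytical model calibrated in the article.

```latex
% t_0 = transit time of the direct surface path of length x; t_c = transit time of the
% path diffracted around the tip of a crack of depth d (symbols ours).
\[ \frac{t_c}{t_0} = \frac{\sqrt{x^2 + 4d^2}}{x}
   \quad\Longrightarrow\quad
   d = \frac{x}{2}\sqrt{\left(\frac{t_c}{t_0}\right)^2 - 1} \]
```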

  9. Hybrid PSO-ASVR-based method for data fitting in the calibration of infrared radiometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Sen; Li, Chengwei, E-mail: heikuanghit@163.com

    2016-06-15

    The present paper describes a hybrid particle swarm optimization-adaptive support vector regression (PSO-ASVR)-based method for data fitting in the calibration of infrared radiometer. The proposed hybrid PSO-ASVR-based method is based on PSO in combination with Adaptive Processing and Support Vector Regression (SVR). The optimization technique involves setting parameters in the ASVR fitting procedure, which significantly improves the fitting accuracy. However, its use in the calibration of infrared radiometer has not yet been widely explored. Bearing this in mind, the PSO-ASVR-based method, which is based on the statistical learning theory, is successfully used here to get the relationship between the radiation of a standard source and the response of an infrared radiometer. Main advantages of this method are the flexible adjustment mechanism in data processing and the optimization mechanism in a kernel parameter setting of SVR. Numerical examples and applications to the calibration of infrared radiometer are performed to verify the performance of PSO-ASVR-based method compared to conventional data fitting methods.

  10. Using Bayesian Model Averaging (BMA) to calibrate probabilistic surface temperature forecasts over Iran

    NASA Astrophysics Data System (ADS)

    Soltanzadeh, I.; Azadi, M.; Vakili, G. A.

    2011-07-01

    Using Bayesian Model Averaging (BMA), an attempt was made to obtain calibrated probabilistic numerical forecasts of 2-m temperature over Iran. The ensemble employs three limited area models (WRF, MM5 and HRM), with WRF used with five different configurations. Initial and boundary conditions for MM5 and WRF are obtained from the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS) and for HRM the initial and boundary conditions come from analysis of the Global Model Europe (GME) of the German Weather Service. The resulting ensemble of seven members was run for a period of 6 months (from December 2008 to May 2009) over Iran. The 48-h raw ensemble outputs were calibrated using the BMA technique for 120 days, using a 40-day training sample of forecasts and the corresponding verification data. The calibrated probabilistic forecasts were assessed using rank histograms and attribute diagrams. Results showed that application of BMA improved the reliability of the raw ensemble. Using the weighted ensemble mean as a deterministic forecast, it was found that the deterministic-style BMA forecasts usually performed better than the best member's deterministic forecast.
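
    A minimal sketch of Gaussian BMA weight estimation by expectation-maximization over a training window is given below, in the spirit of the approach described above; the synthetic three-member ensemble and the common-variance simplification are our own assumptions.

```python
# Hedged sketch: EM estimation of BMA weights and a common spread for an ensemble of
# bias-corrected temperature forecasts. Data are synthetic, not the WRF/MM5/HRM ensemble.
import numpy as np

def bma_em(F, y, n_iter=200):
    """F: (n_times, n_members) forecasts; y: (n_times,) verifying observations."""
    n, k = F.shape
    w = np.full(k, 1.0 / k)
    sigma2 = np.var(y - F.mean(axis=1))
    for _ in range(n_iter):
        dens = np.exp(-0.5 * (y[:, None] - F) ** 2 / sigma2) / np.sqrt(2 * np.pi * sigma2)
        z = w * dens                          # E-step: member responsibilities
        z /= z.sum(axis=1, keepdims=True)
        w = z.mean(axis=0)                    # M-step: weights and common variance
        sigma2 = np.sum(z * (y[:, None] - F) ** 2) / n
    return w, np.sqrt(sigma2)

rng = np.random.default_rng(2)
obs = rng.normal(15.0, 5.0, 300)                                     # synthetic 2-m temperatures
members = obs[:, None] + rng.normal(0.0, [1.0, 2.0, 3.0], (300, 3))  # members of varying skill

weights, sigma = bma_em(members, obs)
print(weights.round(2), round(float(sigma), 2))   # sharper members should receive larger weights
```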

  11. Absolute calibration of Doppler coherence imaging velocity images

    NASA Astrophysics Data System (ADS)

    Samuell, C. M.; Allen, S. L.; Meyer, W. H.; Howard, J.

    2017-08-01

    A new technique has been developed for absolutely calibrating a Doppler Coherence Imaging Spectroscopy interferometer for measuring plasma ion and neutral velocities. An optical model of the interferometer is used to generate zero-velocity reference images for the plasma spectral line of interest from a calibration source some spectral distance away. Validation of this technique using a tunable diode laser demonstrated an accuracy better than 0.2 km/s over an extrapolation range of 3.5 nm; a two order of magnitude improvement over linear approaches. While a well-characterized and very stable interferometer is required, this technique opens up the possibility of calibrated velocity measurements in difficult viewing geometries and for complex spectral line-shapes.

  12. Polarized Power Spectra from HERA-19 Commissioning Data: Effect of Calibration Techniques

    NASA Astrophysics Data System (ADS)

    Chichura, Paul; Igarashi, Amy; Fox Fortino, Austin; Kohn, Saul; Aguirre, James; HERA Collaboration

    2018-01-01

    Studying the Epoch of Reionization (EoR) is crucial for cosmologists as it not only provides information about the first generation of stars and galaxies, but it may also help answer any number of fundamental astrophysical questions. The Hydrogen Epoch of Reionization Array (HERA) is doing this by examining emission from the 21cm hyperfine transition of neutral hydrogen, which has been identified as a promising probe of reionization. Currently, HERA is still in its commissioning phase; 37 of the planned 350 dishes have been constructed and analysis has begun for data received from the first 19 dishes built. With the creation of fully polarized power spectra, we investigate how different data calibration techniques affect the power spectra and whether or not ordering these techniques in different ways affects the results. These calibration techniques include both non-imaging redundant calibration using measurements within the array, and more traditional approaches based on imaging and calibrating to a model of the sky. We explore the degree to which the different calibration schemes affect leakage of foreground emission into regions of Fourier space where the EoR power spectrum is expected to be measurable.

  13. All-Digital Time-Domain CMOS Smart Temperature Sensor with On-Chip Linearity Enhancement.

    PubMed

    Chen, Chun-Chi; Chen, Chao-Lieh; Lin, Yi

    2016-01-30

    This paper proposes the first all-digital on-chip linearity enhancement technique for improving the accuracy of the time-domain complementary metal-oxide semiconductor (CMOS) smart temperature sensor. To facilitate on-chip application and intellectual property reuse, an all-digital time-domain smart temperature sensor was implemented using 90 nm Field Programmable Gate Arrays (FPGAs). Although the inverter-based temperature sensor has a smaller circuit area and lower complexity, two-point calibration must be used to achieve an acceptable inaccuracy. With the help of a calibration circuit, the influence of process variations was reduced greatly for one-point calibration support, reducing the test costs and time. However, the sensor response still exhibited a large curvature, which substantially affected the accuracy of the sensor. Thus, an on-chip linearity-enhanced circuit is proposed to linearize the curve and achieve a new linearity-enhanced output. The sensor was implemented on eight different Xilinx FPGAs, using 118 slices per sensor in each FPGA, to demonstrate the benefits of the linearization. Compared with the unlinearized version, the maximal inaccuracy of the linearized version decreased from 5 °C to 2.5 °C after one-point calibration in a range of -20 °C to 100 °C. The sensor consumed 95 μW at 1 kSa/s. The proposed linearity enhancement technique significantly improves temperature sensing accuracy, avoiding costly curvature compensation while it is fully synthesizable for future Very Large Scale Integration (VLSI) systems.

  14. All-Digital Time-Domain CMOS Smart Temperature Sensor with On-Chip Linearity Enhancement

    PubMed Central

    Chen, Chun-Chi; Chen, Chao-Lieh; Lin, Yi

    2016-01-01

    This paper proposes the first all-digital on-chip linearity enhancement technique for improving the accuracy of the time-domain complementary metal-oxide semiconductor (CMOS) smart temperature sensor. To facilitate on-chip application and intellectual property reuse, an all-digital time-domain smart temperature sensor was implemented using 90 nm Field Programmable Gate Arrays (FPGAs). Although the inverter-based temperature sensor has a smaller circuit area and lower complexity, two-point calibration must be used to achieve an acceptable inaccuracy. With the help of a calibration circuit, the influence of process variations was reduced greatly for one-point calibration support, reducing the test costs and time. However, the sensor response still exhibited a large curvature, which substantially affected the accuracy of the sensor. Thus, an on-chip linearity-enhanced circuit is proposed to linearize the curve and achieve a new linearity-enhanced output. The sensor was implemented on eight different Xilinx FPGAs, using 118 slices per sensor in each FPGA, to demonstrate the benefits of the linearization. Compared with the unlinearized version, the maximal inaccuracy of the linearized version decreased from 5 °C to 2.5 °C after one-point calibration in a range of −20 °C to 100 °C. The sensor consumed 95 μW at 1 kSa/s. The proposed linearity enhancement technique significantly improves temperature sensing accuracy, avoiding costly curvature compensation while it is fully synthesizable for future Very Large Scale Integration (VLSI) systems. PMID:26840316

  15. A multi-model fusion strategy for multivariate calibration using near and mid-infrared spectra of samples from brewing industry.

    PubMed

    Tan, Chao; Chen, Hui; Wang, Chao; Zhu, Wanping; Wu, Tong; Diao, Yuanbo

    2013-03-15

    Near and mid-infrared (NIR/MIR) spectroscopy techniques have gained great acceptance in the industry due to their multiple applications and versatility. However, successful application often depends heavily on the construction of accurate and stable calibration models. For this purpose, a simple multi-model fusion strategy is proposed. It is the combination of a Kohonen self-organizing map (KSOM), mutual information (MI) and partial least squares (PLS) and is therefore named KMICPLS. It works as follows: First, the original training set is fed into a KSOM for unsupervised clustering of samples, from which a series of training subsets are constructed. Thereafter, on each of the training subsets, an MI spectrum is calculated and only the variables with higher MI values than the mean value are retained, based on which a candidate PLS model is constructed. Finally, a fixed number of PLS models are selected to produce a consensus model. Two NIR/MIR spectral datasets from the brewing industry are used for experiments. The results confirm its superior performance to two reference algorithms, i.e., the conventional PLS and genetic algorithm-PLS (GAPLS). It can build more accurate and stable calibration models without increasing the complexity, and can be generalized to other NIR/MIR applications. Copyright © 2012 Elsevier B.V. All rights reserved.
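
    The fusion recipe lends itself to a short scikit-learn sketch: cluster the training samples, keep only variables whose mutual information with the response exceeds the cluster mean, fit one PLS model per subset, and average the predictions. KMeans is used below as a plain stand-in for the Kohonen SOM, and the spectra are synthetic, so this is only a sketch of the workflow, not the published KMICPLS code.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.feature_selection import mutual_info_regression

    rng = np.random.default_rng(1)
    X = rng.normal(size=(120, 200))            # synthetic "spectra"
    y = X[:, 10] - 0.5 * X[:, 50] + 0.1 * rng.normal(size=120)

    # Step 1: unsupervised clustering of the training samples
    # (KMeans stands in here for the Kohonen SOM used in the paper).
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

    models = []
    for k in np.unique(labels):
        Xk, yk = X[labels == k], y[labels == k]
        # Step 2: keep only variables whose mutual information with y exceeds the mean.
        mi = mutual_info_regression(Xk, yk, random_state=0)
        keep = mi > mi.mean()
        # Step 3: fit a candidate PLS model on the retained variables.
        pls = PLSRegression(n_components=min(5, keep.sum(), len(yk) - 1)).fit(Xk[:, keep], yk)
        models.append((keep, pls))

    # Step 4: consensus prediction = average over the candidate PLS models.
    X_new = rng.normal(size=(10, 200))
    y_hat = np.mean([pls.predict(X_new[:, keep]).ravel() for keep, pls in models], axis=0)
    print(y_hat)
    ```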

  16. Improvement in QEPAS system utilizing a second harmonic based wavelength calibration technique

    NASA Astrophysics Data System (ADS)

    Zhang, Qinduan; Chang, Jun; Wang, Fupeng; Wang, Zongliang; Xie, Yulei; Gong, Weihua

    2018-05-01

    A simple laser wavelength calibration technique based on the second harmonic signal is demonstrated in this paper to improve the performance of a quartz enhanced photoacoustic spectroscopy (QEPAS) gas sensing system, e.g. improving the signal-to-noise ratio (SNR), detection limit and long-term stability. A constant current corresponding to the gas absorption line, combined with a sinusoidal signal at f/2, is used to drive the laser (constant driving mode), and a software-based real-time wavelength calibration technique is developed to eliminate the wavelength drift caused by ambient fluctuations. Compared to conventional wavelength modulation spectroscopy (WMS), this method allows a lower filtering bandwidth and an averaging algorithm to be applied to the QEPAS system, improving the SNR and detection limit. In addition, the real-time wavelength calibration technique guarantees that the laser output is modulated steadily at the gas absorption line. Water vapor is chosen as the target gas to evaluate the performance of the new system against the constant driving mode and a conventional WMS system. The water vapor sensor was made insensitive to incoherent external acoustic noise by the numerical averaging technique. As a result, the SNR of the wavelength-calibration-based system is 12.87 times that of the conventional WMS system. The new system achieved a better linear response (R2 = 0.9995) over the concentration range from 300 to 2000 ppmv, and achieved a minimum detection limit (MDL) of 630 ppbv.
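
    A minimal sketch of the software side of such a calibration is shown below: the controller periodically sweeps the drive current over a shallow range, fits a parabola around the maximum of the (simulated) 2f signal to locate line centre, and re-centres the constant drive current there. The 2f line shape, tuning coefficient and drift statistics are invented for illustration and do not describe the authors' sensor.

    ```python
    import numpy as np

    # Idealised 2f line shape: proportional to the (negated) second derivative of a
    # Lorentzian, so it peaks when the laser sits exactly on the absorption line.
    def two_f_signal(detuning):
        return (2.0 - 6.0 * detuning**2) / (1.0 + detuning**2) ** 3

    TUNING = 0.05                 # assumed detuning units per mA of laser current
    rng = np.random.default_rng(3)

    i_line = 80.0                 # true line-centre current (mA), drifts with ambient conditions
    i_set = 80.0                  # current the controller believes is line centre

    for step in range(5):
        i_line += rng.normal(0.0, 0.5)                       # slow ambient drift
        scan = i_set + np.linspace(-10.0, 10.0, 81)          # periodic shallow current scan
        sig = two_f_signal((scan - i_line) * TUNING)
        sig += rng.normal(0.0, 0.01, scan.size)              # measurement noise

        # Parabolic fit around the maximum gives a sub-sample estimate of line centre.
        k = np.argmax(sig)
        sl = slice(max(k - 3, 0), min(k + 4, scan.size))
        a, b, _ = np.polyfit(scan[sl], sig[sl], 2)
        i_set = -b / (2.0 * a)                               # re-centre the constant drive current

        print(f"step {step}: true centre {i_line:6.2f} mA, corrected set point {i_set:6.2f} mA")
    ```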

  17. Experience with novel technologies for direct measurement of atmospheric NO2

    NASA Astrophysics Data System (ADS)

    Hueglin, Christoph; Hundt, Morten; Mueller, Michael; Schwarzenbach, Beat; Tuzson, Bela; Emmenegger, Lukas

    2017-04-01

    Nitrogen dioxide (NO2) is an air pollutant that has a large impact on human health and ecosystems, and it plays a key role in the formation of ozone and secondary particulate matter. Consequently, legal limit values for NO2 are set in the EU and elsewhere, and atmospheric observation networks typically include NO2 in their measurement programmes. Atmospheric NO2 is principally measured by chemiluminescence detection, an indirect measurement technique that requires conversion of NO2 into nitrogen monoxide (NO) and finally calculation of NO2 from the difference between total nitrogen oxides (NOx) and NO. As a result, NO2 measurements with the chemiluminescence method have a relatively high measurement uncertainty and can be biased depending on the selectivity of the applied NO2 conversion method. In recent years, technologies for direct and selective measurement of NO2 have become available, e.g. cavity attenuated phase shift spectroscopy (CAPS), cavity enhanced laser absorption spectroscopy and quantum cascade laser absorption spectrometry (QCLAS). These technologies offer clear advantages over the indirect chemiluminescence method. We tested the above-mentioned direct measurement techniques for NO2 over extended time periods at atmospheric measurement stations and report on our experience, including comparisons with co-located chemiluminescence instruments equipped with molybdenum as well as photolytic NO2 converters. An open issue that remains for the direct measurement of NO2 is instrument calibration. Accurate and traceable reference standards and NO2 calibration gases are needed. We present results from the application of different calibration strategies based on the use of static NO2 calibration gases as well as dynamic NO2 calibration gases produced by permeation and by gas-phase titration (GPT).

  18. Load Modeling and Calibration Techniques for Power System Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chassin, Forrest S.; Mayhorn, Ebony T.; Elizondo, Marcelo A.

    2011-09-23

    Load modeling is the most uncertain area in power system simulations. Having an accurate load model is important for power system planning and operation. Here, a review of load modeling and calibration techniques is given. This paper is not comprehensive, but covers some of the techniques most commonly found in the literature. The advantages and disadvantages of each technique are outlined.

  19. A calibration method for fringe reflection technique based on the analytical phase-slope description

    NASA Astrophysics Data System (ADS)

    Wu, Yuxiang; Yue, Huimin; Pan, Zhipeng; Liu, Yong

    2018-05-01

    The fringe reflection technique (FRT) has become one of the most popular methods to measure the shape of specular surfaces in recent years. The existing system calibration methods for FRT usually contain two parts, which are camera calibration and geometric calibration. In geometric calibration, the liquid crystal display (LCD) screen position calibration is one of the most difficult steps among all the calibration procedures, and its accuracy is affected by factors such as imaging aberration, plane mirror flatness, and LCD screen pixel size accuracy. In this paper, based on the derivation of an analytical phase-slope description of the FRT, we present a novel calibration method that does not require calibrating the position of the LCD screen. Moreover, the system can be arbitrarily arranged, and the imaging system can be either telecentric or non-telecentric. In our experiment measuring a spherical mirror with a 5000 mm radius, the proposed calibration method achieves a measurement error 2.5 times smaller than that of the geometric calibration method. In the wafer surface measuring experiment, the measurement result with the proposed calibration method is closer to the interferometer result than that of the geometric calibration method.

  20. Facilities and Techniques for X-Ray Diagnostic Calibration in the 100-eV to 100-keV Energy Range

    NASA Astrophysics Data System (ADS)

    Gaines, J. L.; Wittmayer, F. J.

    1986-08-01

    The Lawrence Livermore National Laboratory (LLNL) has been a pioneer in the field of x-ray diagnostic calibration for more than 20 years. We have built steady state x-ray sources capable of supplying fluorescent lines of high spectral purity in the 100-eV to 100-keV energy range, and these sources have been used in the calibration of x-ray detectors, mirrors, crystals, filters, and film. This paper discusses our calibration philosophy and techniques, and describes some of our x-ray sources. Examples of actual calibration data are presented as well.

  1. Application of borehole geophysics to water-resources investigations

    USGS Publications Warehouse

    Keys, W.S.; MacCary, L.M.

    1971-01-01

    This manual is intended to be a guide for hydrologists using borehole geophysics in ground-water studies. The emphasis is on the application and interpretation of geophysical well logs, and not on the operation of a logger. It describes in detail those logging techniques that have been utilized within the Water Resources Division of the U.S. Geological Survey, and those used in petroleum investigations that have potential application to hydrologic problems. Most of the logs described can be made by commercial logging service companies, and many can be made with small water-well loggers. The general principles of each technique and the rules of log interpretation are the same, regardless of differences in instrumentation. Geophysical well logs can be interpreted to determine the lithology, geometry, resistivity, formation factor, bulk density, porosity, permeability, moisture content, and specific yield of water-bearing rocks, and to define the source, movement, and chemical and physical characteristics of ground water. Numerous examples of logs are used to illustrate applications and interpretation in various ground-water environments. The interrelations between various types of logs are emphasized, and the following aspects are described for each of the important logging techniques: Principles and applications, instrumentation, calibration and standardization, radius of investigation, and extraneous effects.

  2. New technique for calibrating hydrocarbon gas flowmeters

    NASA Technical Reports Server (NTRS)

    Singh, J. J.; Puster, R. L.

    1984-01-01

    A technique for measuring calibration correction factors for hydrocarbon mass flowmeters is described. It is based on the Nernst theorem for matching the partial pressure of oxygen in the combustion products of the test hydrocarbon, burned in oxygen-enriched air, with that in normal air. It is applied to a widely used type of commercial thermal mass flowmeter for a number of hydrocarbons. The calibration correction factors measured using this technique are in good agreement with the values obtained by other independent procedures. The technique is successfully applied to the measurement of differences as low as one percent of the effective hydrocarbon content of the natural gas test samples.

  3. Application of validation data for assessing spatial interpolation methods for 8-h ozone or other sparsely monitored constituents.

    PubMed

    Joseph, John; Sharif, Hatim O; Sunil, Thankam; Alamgir, Hasanat

    2013-07-01

    The adverse health effects of high concentrations of ground-level ozone are well-known, but estimating exposure is difficult due to the sparseness of urban monitoring networks. This sparseness discourages the reservation of a portion of the monitoring stations for validation of interpolation techniques precisely when the risk of overfitting is greatest. In this study, we test a variety of simple spatial interpolation techniques for 8-h ozone with thousands of randomly selected subsets of data from two urban areas with monitoring stations sufficiently numerous to allow for true validation. Results indicate that ordinary kriging with only the range parameter calibrated in an exponential variogram is the generally superior method, and yields reliable confidence intervals. Sparse data sets may contain sufficient information for calibration of the range parameter even if the Moran I p-value is close to unity. R script is made available to apply the methodology to other sparsely monitored constituents. Copyright © 2013 Elsevier Ltd. All rights reserved.
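
    As an illustration of the winning configuration, the sketch below implements ordinary kriging with an exponential variogram in plain NumPy and calibrates only the range parameter by leave-one-out error over a small grid, using synthetic station data. It is a sketch of the general approach, not the R script released with the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    # Synthetic "monitoring stations": coordinates (km) and 8-h ozone values (ppb).
    xy = rng.uniform(0.0, 50.0, size=(25, 2))
    z = 40.0 + 0.3 * xy[:, 0] + rng.normal(0.0, 2.0, 25)

    def exp_variogram(h, range_param, sill=1.0):
        return sill * (1.0 - np.exp(-h / range_param))

    def ok_predict(xy_obs, z_obs, xy_new, range_param):
        """Ordinary kriging prediction with an exponential variogram."""
        n = len(z_obs)
        d_obs = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
        A = np.ones((n + 1, n + 1))
        A[:n, :n] = exp_variogram(d_obs, range_param)
        A[n, n] = 0.0
        d_new = np.linalg.norm(xy_obs - xy_new, axis=-1)
        b = np.append(exp_variogram(d_new, range_param), 1.0)
        w = np.linalg.solve(A, b)          # kriging weights plus Lagrange multiplier
        return np.dot(w[:n], z_obs)

    def loo_rmse(range_param):
        errs = [ok_predict(np.delete(xy, i, 0), np.delete(z, i), xy[i], range_param) - z[i]
                for i in range(len(z))]
        return float(np.sqrt(np.mean(np.square(errs))))

    # Calibrate only the range parameter, as in the study, by leave-one-out error.
    ranges = np.array([2.0, 5.0, 10.0, 20.0, 40.0])
    best = ranges[np.argmin([loo_rmse(r) for r in ranges])]
    print("selected range (km):", best, " LOO RMSE:", round(loo_rmse(best), 2))
    print("prediction at (25, 25):", round(ok_predict(xy, z, np.array([25.0, 25.0]), best), 1))
    ```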

  4. Statistical Inference of a RANS closure for a Jet-in-Crossflow simulation

    NASA Astrophysics Data System (ADS)

    Heyse, Jan; Edeling, Wouter; Iaccarino, Gianluca

    2016-11-01

    The jet-in-crossflow is found in several engineering applications, such as discrete film cooling for turbine blades, where a coolant injected through holes in the blade's surface protects the component from the hot gases leaving the combustion chamber. Experimental measurements using MRI techniques have been completed for a single-hole injection into a turbulent crossflow, providing a full 3D averaged velocity field. For such flows of engineering interest, Reynolds-Averaged Navier-Stokes (RANS) turbulence closure models are often the only viable computational option. However, RANS models are known to provide poor predictions in the region close to the injection point. Since these models are calibrated on simple canonical flow problems, the obtained closure coefficient estimates are unlikely to extrapolate well to more complex flows. We will therefore calibrate the parameters of a RANS model using statistical inference techniques informed by the experimental jet-in-crossflow data. The obtained probabilistic parameter estimates can in turn be used to compute flow fields with quantified uncertainty. Stanford Graduate Fellowship in Science and Engineering.
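
    The calibration step can be illustrated with a toy Bayesian inference sketch: a random-walk Metropolis sampler draws a closure coefficient from the posterior implied by synthetic velocity "measurements" and a stand-in forward model. The exponential forward model, noise level and prior below are placeholders chosen for brevity; they are not the RANS model or the MRI data of the abstract.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    # Stand-in "forward model": velocity deficit as a function of a closure coefficient c.
    def forward(c, x):
        return np.exp(-c * x)

    x_obs = np.linspace(0.2, 2.0, 12)
    c_true, sigma = 0.9, 0.02
    y_obs = forward(c_true, x_obs) + rng.normal(0.0, sigma, x_obs.size)

    def log_post(c):
        if not (0.0 < c < 5.0):                     # uniform prior on (0, 5)
            return -np.inf
        r = y_obs - forward(c, x_obs)
        return -0.5 * np.sum(r**2) / sigma**2

    # Random-walk Metropolis sampling of the posterior over the coefficient.
    chain, c_cur, lp_cur = [], 1.5, log_post(1.5)
    for _ in range(20000):
        c_prop = c_cur + rng.normal(0.0, 0.05)
        lp_prop = log_post(c_prop)
        if np.log(rng.uniform()) < lp_prop - lp_cur:
            c_cur, lp_cur = c_prop, lp_prop
        chain.append(c_cur)

    samples = np.array(chain[5000:])                # discard burn-in
    print(f"posterior mean {samples.mean():.3f} +/- {samples.std():.3f} (true {c_true})")
    ```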

  5. SWAT: Model use, calibration, and validation

    USDA-ARS?s Scientific Manuscript database

    SWAT (Soil and Water Assessment Tool) is a comprehensive, semi-distributed river basin model that requires a large number of input parameters which complicates model parameterization and calibration. Several calibration techniques have been developed for SWAT including manual calibration procedures...

  6. Instrument inter-comparison of glyoxal, methyl glyoxal and NO2 under simulated atmospheric conditions

    NASA Astrophysics Data System (ADS)

    Thalman, R.; Baeza-Romero, M. T.; Ball, S. M.; Borrás, E.; Daniels, M. J. S.; Goodall, I. C. A.; Henry, S. B.; Karl, T.; Keutsch, F. N.; Kim, S.; Mak, J.; Monks, P. S.; Muñoz, A.; Orlando, J.; Peppe, S.; Rickard, A. R.; Ródenas, M.; Sánchez, P.; Seco, R.; Su, L.; Tyndall, G.; Vázquez, M.; Vera, T.; Waxman, E.; Volkamer, R.

    2014-08-01

    The α-dicarbonyl compounds glyoxal (CHOCHO) and methyl glyoxal (CH3C(O)CHO) are produced in the atmosphere by the oxidation of hydrocarbons, and emitted directly from pyrogenic sources. Measurements of ambient concentrations inform about the rate of hydrocarbon oxidation, oxidative capacity, and secondary organic aerosol (SOA) formation. We present results from a comprehensive instrument comparison effort at 2 simulation chamber facilities in the US and Europe that included 9 instruments, and 7 different measurement techniques: Broadband Cavity Enhanced Absorption Spectroscopy (BBCEAS), Cavity Enhanced Differential Optical Absorption Spectroscopy (CE-DOAS), White-cell DOAS, Fourier Transform Infra-Red Spectroscopy (FTIR, two separate instruments), Laser Induced Phosphorescence (LIP), Solid Phase Micro Extraction (SPME), and Proton Transfer Reaction Mass Spectrometry (PTR-ToF-MS, two separate instruments; methyl glyoxal only, as no significant response was observed for glyoxal). Experiments at the National Center for Atmospheric Research (NCAR) compare 3 independent sources of calibration as a function of temperature (293 K to 330 K). Calibrations from absorption cross-section spectra at UV-visible and IR wavelengths are found to agree within 2% for glyoxal, and 4% for methyl glyoxal at all temperatures; further calibrations based on ion-molecule rate constant calculations agreed within 5% for methyl glyoxal at all temperatures. At the EUropean PHOtoREactor (EUPHORE) all measurements are calibrated from the same UV-visible spectra (either directly or indirectly), thus minimizing potential systematic bias. We find excellent linearity under idealized conditions (pure glyoxal or methyl glyoxal, R2 > 0.96), and in complex gas mixtures characteristic of dry photochemical smog systems (o-xylene/NOx and isoprene/NOx, R2 > 0.95; R2 ~ 0.65 for offline SPME measurements of methyl glyoxal). The correlations are more variable in humid ambient air mixtures (RH > 45%) for methyl glyoxal (0.58 < R2 < 0.68) than for glyoxal (0.79 < R2 < 0.99). The intercepts of correlations were insignificant for the most part; slopes varied by less than 5% for instruments that also measure NO2. For glyoxal and methyl glyoxal the slopes varied by less than 12% and 17% (both 3-sigma) between inherently calibrated instruments (i.e., calibration from knowledge of the absorption cross-section). We find a larger variability among in situ techniques that employ external calibration sources (75% to 90%, 3-sigma), and/or techniques that employ offline analysis. Our inter-comparison reveals existing differences in reports about precision and detection limits in the literature, and enables comparison on a common basis by observing a common airmass. Finally, we evaluate the influence of interfering species (e.g., NO2, O3 and H2O) of relevance in field and laboratory applications. Techniques now exist to conduct fast and accurate measurements of glyoxal at ambient concentrations, and methyl glyoxal under simulated conditions. However, techniques to measure methyl glyoxal at ambient concentrations remain a challenge, and would be desirable.

  7. Fruit Quality Evaluation Using Spectroscopy Technology: A Review

    PubMed Central

    Wang, Hailong; Peng, Jiyu; Xie, Chuanqi; Bao, Yidan; He, Yong

    2015-01-01

    An overview is presented with regard to applications of visible and near infrared (Vis/NIR) spectroscopy, multispectral imaging and hyperspectral imaging techniques for quality attributes measurement and variety discrimination of various fruit species, i.e., apple, orange, kiwifruit, peach, grape, strawberry, jujube, banana, mango and others. Some commonly utilized chemometrics including pretreatment methods, variable selection methods, discriminant methods and calibration methods are briefly introduced. The comprehensive review of applications, which concentrates primarily on Vis/NIR spectroscopy, is arranged according to fruit species. Most of the applications are focused on variety discrimination or the measurement of soluble solids content (SSC), acidity and firmness, but some measurements involving dry matter, vitamin C, polyphenols and pigments have also been reported. The feasibility of different spectral modes, i.e., reflectance, interactance and transmittance, is discussed. Optimal variable selection methods and calibration methods for measuring different attributes of different fruit species are addressed. Special attention is paid to sample preparation and the influence of the environment. Areas where further investigation is needed and problems concerning model robustness and model transfer are identified. PMID:26007736

  8. Low Altitude AVIRIS Data for Mapping Land Cover in Yellowstone National Park: Use of Isodata Clustering Techniques

    NASA Technical Reports Server (NTRS)

    Spruce, Joe

    2001-01-01

    Yellowstone National Park (YNP) contains a diversity of land cover. YNP managers need site-specific land cover maps, which may be produced more effectively using high-resolution hyperspectral imagery. ISODATA clustering techniques have aided operational multispectral image classification and may benefit certain hyperspectral data applications if optimally applied. In response, a study was performed for an area in northeast YNP using 11 select bands of low-altitude AVIRIS data calibrated to ground reflectance. These data were subjected to ISODATA clustering and Maximum Likelihood Classification techniques to produce a moderately detailed land cover map. The latter has good apparent overall agreement with field surveys and aerial photo interpretation.

  9. Mabs monograph, air blast instrumentation, 1943-1993 measurement techniques and instrumentation. Volume 1. The nuclear era. 1945-1963. Technical report, 17 September 1992-31 May 1994

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reisler, R.E.; Keefer, J.H.; Ethridge, N.H.

    1995-03-01

    Blast wave measurement techniques and instrumentation developed by Military Applications of Blast Simulators (MABS) participating countries to study blast phenomena during the nuclear era are summarized. Passive and active gages, both mechanical self-recording and electronic systems, deployed on kiloton and megaton explosive tests during the period 1945-1963 are presented. The country and the year the gage was introduced are included with the description. References are also provided. Volume 2 covers measurement techniques and instrumentation for the period 1959-1993, and Volume 3 covers structural target and gage calibration from 1943 to 1993.

  10. Remote sensing science for the Nineties; Proceedings of IGARSS '90 - 10th Annual International Geoscience and Remote Sensing Symposium, University of Maryland, College Park, May 20-24, 1990. Vols. 1, 2, & 3

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Various papers on remote sensing (RS) for the nineties are presented. The general topics addressed include: subsurface methods, radar scattering, oceanography, microwave models, atmospheric correction, passive microwave systems, RS in tropical forests, moderate resolution land analysis, SAR geometry and SNR improvement, image analysis, inversion and signal processing for geoscience, surface scattering, rain measurements, sensor calibration, wind measurements, terrestrial ecology, agriculture, geometric registration, subsurface sediment geology, radar modulation mechanisms, radar ocean scattering, SAR calibration, airborne radar systems, water vapor retrieval, forest ecosystem dynamics, land analysis, multisensor data fusion. Also considered are: geologic RS, RS sensor optical measurements, RS of snow, temperature retrieval, vegetation structure, global change, artificial intelligence, SAR processing techniques, geologic RS field experiment, stochastic modeling, topography and Digital Elevation model, SAR ocean waves, spaceborne lidar and optical, sea ice field measurements, millimeter waves, advanced spectroscopy, spatial analysis and data compression, SAR polarimetry techniques. Also discussed are: plant canopy modeling, optical RS techniques, optical and IR oceanography, soil moisture, sea ice back scattering, lightning cloud measurements, spatial textural analysis, SAR systems and techniques, active microwave sensing, lidar and optical, radar scatterometry, RS of estuaries, vegetation modeling, RS systems, EOS/SAR Alaska, applications for developing countries, SAR speckle and texture.

  11. NICOLAU: compact unit for photometric characterization of automotive lighting from near-field measurements

    NASA Astrophysics Data System (ADS)

    Royo, Santiago; Arranz, Maria J.; Arasa, Josep; Cattoen, Michel; Bosch, Thierry

    2005-02-01

    The present work describes a measurement technique intended to enhance the characterization procedures for the photometric emissions of automotive headlamps, with potential applications to any light source emission, either automotive or non-automotive. A CCD array with a precisely characterized optical system is used for sampling the luminance field of the headlamp just a few centimetres in front of it, by combining deflectometric techniques (yielding the direction of the light beams) and photometric techniques (yielding the energy travelling in each direction). The CCD array scans the measurement plane using a self-developed mechanical unit and electronics, and then image-processing techniques are used for obtaining the photometric behaviour of the headlamp in any given plane, in particular in the planes and positions required by current standards, but also on the road, on traffic signs, etc. An overview of the construction of the system, of the measurement principle considered, and of the main calibrations performed on the unit is presented. First results concerning relative measurements are presented and compared both to reference data from a photometric tunnel and to data from a plane placed 5 m away from the source. Preliminary results for the absolute photometric calibration of the system are also presented for different illumination beams of different headlamps (driving and passing beams).

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siranosian, Antranik Antonio; Schembri, Philip Edward; Luscher, Darby Jon

    The Los Alamos National Laboratory's Weapon Systems Engineering division's Advanced Engineering Analysis group employs material constitutive models of composites for use in simulations of components and assemblies of interest. Experimental characterization, modeling and prediction of the macro-scale (i.e. continuum) behaviors of these composite materials is generally difficult because they exhibit nonlinear behaviors on the meso- (e.g. micro-) and macro-scales. Furthermore, it can be difficult to measure and model the mechanical responses of the individual constituents and constituent interactions in the composites of interest. Current efforts to model such composite materials rely on semi-empirical models in which meso-scale properties are inferred from continuum level testing and modeling. The proposed approach involves removing the difficulties of interrogating and characterizing micro-scale behaviors by scaling-up the problem to work with macro-scale composites, with the intention of developing testing and modeling capabilities that will be applicable to the mesoscale. This approach assumes that the physical mechanisms governing the responses of the composites on the meso-scale are reproducible on the macro-scale. Working on the macro-scale simplifies the quantification of composite constituents and constituent interactions so that efforts can be focused on developing material models and the testing techniques needed for calibration and validation. Other benefits to working with macro-scale composites include the ability to engineer and manufacture—potentially using additive manufacturing techniques—composites that will support the application of advanced measurement techniques such as digital volume correlation and three-dimensional computed tomography imaging, which would aid in observing and quantifying complex behaviors that are exhibited in the macro-scale composites of interest. Ultimately, the goal of this new approach is to develop a meso-scale composite modeling framework, applicable to many composite materials, and the corresponding macroscale testing and test data interrogation techniques to support model calibration.

  13. [Applications of near infrared reflectance spectroscopy in detecting chitin, ergosterol and mycotoxins].

    PubMed

    Yi, Yong-Yan; Li, De-Rong; Zhang, Yun-Wei; Yang, Fu-Yu

    2009-07-01

    The extent and harmfulness of fungal invasion can be determined from chitin, ergosterol and mycotoxins. It is important to monitor changes in chitin, ergosterol and mycotoxins to prevent contamination of forage and feed products and to effectively control the development of mildew. In the past, prediction of these chemical constituents was usually carried out by laboratory analysis, which was time-consuming and cumbersome and could not deliver results in a timely manner. Near infrared reflectance spectroscopy (NIRS) is a rapid, convenient, highly efficient, nondestructive and low-cost analytical technique, which has been widely used for quantitative and qualitative analysis in fields such as food and feed. It has great potential for application in quality analysis. In this paper, the principles and characteristics of NIRS and its applications in the quality analysis of food, forage, feed and other agricultural products are introduced. Its applications to fungal biomass (chitin, ergosterol) and mycotoxins are the main focus of the review. NIRS has been used to quantify chitin, ergosterol and mycotoxins, and calibration and validation equations for these materials have been developed. It is expected that NIRS will play an increasingly important role in the field of fungi as calibration equations are established and model databases are improved.

  14. Technique for the metrology calibration of a Fourier transform spectrometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spencer, Locke D.; Naylor, David A

    2008-11-10

    A method is presented for using a Fourier transform spectrometer (FTS) to calibrate the metrology of a second FTS. This technique is particularly useful when the second FTS is inside a cryostat or otherwise inaccessible.

  15. A fast combination calibration of foreground and background for pipelined ADCs

    NASA Astrophysics Data System (ADS)

    Kexu, Sun; Lenian, He

    2012-06-01

    This paper describes a fast digital calibration scheme for pipelined analog-to-digital converters (ADCs). The proposed method corrects the nonlinearity caused by finite opamp gain and capacitor mismatch in multiplying digital-to-analog converters (MDACs). The considered calibration technique takes advantage of both foreground and background calibration schemes. In this combination calibration algorithm, a novel parallel background calibration with signal-shifted correlation is proposed, and its calibration cycle is very short. The details of this technique are described using the example of a 14-bit 100 Msample/s pipelined ADC. The high convergence speed of this background calibration is achieved by three means. First, a modified 1.5-bit stage is proposed in order to allow the injection of a large pseudo-random dither without missing codes. Second, before correlating the signal, it is shifted according to the input signal so that the correlation error converges quickly. Finally, the front pipeline stages are calibrated simultaneously rather than stage by stage to reduce the calibration tracking constants. Simulation results confirm that the combination calibration has a fast startup process and a short background calibration cycle of 2 × 2²¹ conversions.
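
    The correlation-based background idea can be seen in a stripped-down numerical sketch: a ±1 pseudo-random dither is injected into a single 1.5-bit stage, an ideal backend digitizes the residue, and correlating the backend output with the known dither recovers the actual interstage gain, which then replaces the ideal gain in the digital reconstruction. The gains, dither amplitude and single-stage model below are illustrative simplifications, not the paper's 14-bit design or its signal-shifted correlation.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    VREF, VD = 1.0, 0.1               # reference voltage and dither amplitude (illustrative)
    G_IDEAL, G_ACTUAL = 2.0, 1.96     # ideal vs. actual residue-amplifier gain

    N = 1 << 20
    vin = 0.9 * rng.uniform(-VREF, VREF, N)
    pn = rng.choice([-1.0, 1.0], N)   # pseudo-random +/-1 dither sequence

    # 1.5-bit stage decision and residue (dither injected before the residue amplifier).
    d = np.where(vin > VREF / 4, 1.0, np.where(vin < -VREF / 4, -1.0, 0.0))
    residue = G_ACTUAL * (vin - d * VREF / 2 + VD * pn)
    backend = residue                 # ideal backend ADC digitizes the residue

    # The dither is uncorrelated with the signal, so correlating the backend output
    # with the known sequence isolates G_ACTUAL * VD.
    g_est = np.mean(backend * pn) / VD
    print(f"estimated stage gain: {g_est:.4f} (actual {G_ACTUAL})")

    # Digital reconstruction with the dither removed: calibrated vs. uncalibrated gain.
    vin_cal = d * VREF / 2 + backend / g_est - VD * pn
    vin_raw = d * VREF / 2 + backend / G_IDEAL - VD * pn
    print("rms error, ideal-gain reconstruction:", np.sqrt(np.mean((vin_raw - vin) ** 2)))
    print("rms error, calibrated reconstruction:", np.sqrt(np.mean((vin_cal - vin) ** 2)))
    ```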

  16. Nitric oxide selective electrodes.

    PubMed

    Davies, Ian R; Zhang, Xueji

    2008-01-01

    Since nitric oxide (NO) was identified as the endothelial-derived relaxing factor in the late 1980s, many approaches have attempted to provide an adequate means for measuring physiological levels of NO. Although several techniques have been successful in achieving this aim, the electrochemical method has proved the only technique that can reliably measure physiological levels of NO in vitro, in vivo, and in real time. We describe here the development of electrochemical sensors for NO, including the fabrication of sensors, the detection principle, calibration, detection limits, selectivity, and response time. Furthermore, we look at the many experimental applications where NO selective electrodes have been successfully used.

  17. Radiometric Calibration of the Earth Observing System's Imaging Sensors

    NASA Technical Reports Server (NTRS)

    Slater, Philip N. (Principal Investigator)

    1997-01-01

    The work on the grant was mainly directed towards developing new, accurate, redundant methods for the in-flight, absolute radiometric calibration of satellite multispectral imaging systems and refining the accuracy of methods already in use. Initially the work was in preparation for the calibration of MODIS and HIRIS (before the development of that sensor was canceled), with the realization it would be applicable to most imaging multi- or hyper-spectral sensors provided their spatial or spectral resolutions were not too coarse. The work on the grant involved three different ground-based, in-flight calibration methods: reflectance-based, radiance-based, and the diffuse-to-global irradiance ratio used with the reflectance-based method. This continuing research had the dual advantage of: (1) developing several independent methods to create the redundancy that is essential for the identification and hopefully the elimination of systematic errors; and (2) refining the measurement techniques and algorithms that can be used not only for improving calibration accuracy but also for the reverse process of retrieving ground reflectances from calibrated remote-sensing data. The grant also provided the support necessary for us to embark on other projects such as the ratioing radiometer approach to on-board calibration (this has been further developed by SBRS as the 'solar diffuser stability monitor' and is incorporated into the most important on-board calibration system for MODIS). Another example of work that was a spin-off from the grant funding was a study of solar diffuser materials. Journal citations, titles and abstracts of publications authored by faculty, staff, and students are also attached.

  18. Vicarious calibration of the Geostationary Ocean Color Imager.

    PubMed

    Ahn, Jae-Hyun; Park, Young-Je; Kim, Wonkook; Lee, Boram; Oh, Im Sang

    2015-09-07

    Measurements of ocean color from Geostationary Ocean Color Imager (GOCI) with a moderate spatial resolution and a high temporal frequency demonstrate high value for a number of oceanographic applications. This study aims to propose and evaluate the calibration of GOCI as needed to achieve the level of radiometric accuracy desired for ocean color studies. Previous studies reported that the GOCI retrievals of normalized water-leaving radiances (nLw) are biased high for all visible bands due to the lack of vicarious calibration. The vicarious calibration approach described here relies on the assumed constant aerosol characteristics over the open-ocean sites to accurately estimate atmospheric radiances for the two near-infrared (NIR) bands. The vicarious calibration of visible bands is performed using in situ nLw measurements and the satellite-estimated atmospheric radiance using two NIR bands over the case-1 waters. Prior to this analysis, the in situ nLw spectra in the NIR are corrected by the spectrum optimization technique based on the NIR similarity spectrum assumption. The vicarious calibration gain factors derived for all GOCI bands (except 865nm) significantly improve agreement in retrieved remote-sensing reflectance (Rrs) relative to in situ measurements. These gain factors are independent of angular geometry and possible temporal variability. To further increase the confidence in the calibration gain factors, a large data set from shipboard measurements and AERONET-OC is used in the validation process. It is shown that the absolute percentage difference of the atmospheric correction results from the vicariously calibrated GOCI system is reduced by ~6.8%.
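
    The gain-factor idea can be summarized in a schematic sketch: the simulated top-of-atmosphere (TOA) radiance, reconstructed from the in situ water-leaving radiance propagated through an assumed atmosphere, is divided by the radiance the sensor actually reported. The simple additive TOA decomposition and all numbers below are illustrative assumptions for a single band, not GOCI's actual processing chain.

    ```python
    # Schematic vicarious gain computation for one visible band (illustrative numbers only).
    # l_toa_sim is what the sensor *should* have measured, reconstructed from in situ
    # water-leaving radiance plus modeled atmospheric terms; l_toa_obs is what the sensor
    # actually reported with its prelaunch calibration.

    def vicarious_gain(l_water_insitu, t_diffuse, l_rayleigh, l_aerosol, l_toa_obs):
        # Simplified additive TOA decomposition: path radiance + transmitted water signal.
        l_toa_sim = l_rayleigh + l_aerosol + t_diffuse * l_water_insitu
        return l_toa_sim / l_toa_obs

    # Hypothetical match-up for one band (units: mW cm^-2 um^-1 sr^-1, values made up).
    gain = vicarious_gain(l_water_insitu=1.20, t_diffuse=0.90,
                          l_rayleigh=6.50, l_aerosol=0.80, l_toa_obs=8.60)
    print(f"vicarious gain factor: {gain:.4f}")   # applied multiplicatively to l_toa_obs
    ```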

  19. Development and characterization of sub-monolayer coatings as novel calibration samples for X-ray spectroscopy

    NASA Astrophysics Data System (ADS)

    Hönicke, Philipp; Krämer, Markus; Lühl, Lars; Andrianov, Konstantin; Beckhoff, Burkhard; Dietsch, Rainer; Holz, Thomas; Kanngießer, Birgit; Weißbach, Danny; Wilhein, Thomas

    2018-07-01

    With the advent of both modern X-ray fluorescence (XRF) methods and improved analytical reliability requirements, the demand for suitable reference samples has increased. Especially in nanotechnology, with its very low areal mass depositions, quantification becomes considerably more difficult. However, the availability of suitable reference samples is drastically lower than the demand. Physical vapor deposition techniques have been enhanced significantly in the last decade, driven by the need for extremely precise film parameters in multilayer production. We have applied those techniques for the development of layer-like reference samples with mass depositions in the ng-range and well below for Ca, Cu, Pb, Mo, Pd, La, Fe and Ni. Numerous other elements would also be possible. Several types of reference samples were fabricated: multi-elemental layer and extremely low (sub-monolayer) samples for various applications in XRF and total-reflection XRF analysis. Those samples were characterized and compared at three different synchrotron radiation beamlines at the BESSY II electron storage ring employing the reference-free XRF approach based on physically calibrated instrumentation. In addition, the homogeneity of the multi-elemental coatings was checked at the P04 beamline at DESY. The measurements demonstrate the high precision achieved in the manufacturing process as well as the versatility of application fields for the presented reference samples.

  20. Reduction of Radiometric Miscalibration—Applications to Pushbroom Sensors

    PubMed Central

    Rogaß, Christian; Spengler, Daniel; Bochow, Mathias; Segl, Karl; Lausch, Angela; Doktor, Daniel; Roessner, Sigrid; Behling, Robert; Wetzel, Hans-Ulrich; Kaufmann, Hermann

    2011-01-01

    The analysis of hyperspectral images is an important task in Remote Sensing. Foregoing radiometric calibration results in the assignment of incident electromagnetic radiation to digital numbers and reduces the striping caused by slightly different responses of the pixel detectors. However, due to uncertainties in the calibration some striping remains. This publication presents a new reduction framework that efficiently reduces linear and nonlinear miscalibrations by an image-driven, radiometric recalibration and rescaling. The proposed framework—Reduction Of Miscalibration Effects (ROME)—considering spectral and spatial probability distributions, is constrained by specific minimisation and maximisation principles and incorporates image processing techniques such as Minkowski metrics and convolution. To objectively evaluate the performance of the new approach, the technique was applied to a variety of commonly used image examples and to one simulated and miscalibrated EnMAP (Environmental Mapping and Analysis Program) scene. Other examples consist of miscalibrated AISA/Eagle VNIR (Visible and Near Infrared) and Hawk SWIR (Short Wave Infrared) scenes of rural areas of the region Fichtwald in Germany and Hyperion scenes of the Jalal-Abad district in Southern Kyrgyzstan. Recovery rates of approximately 97% for linear and approximately 94% for nonlinear miscalibrated data were achieved, clearly demonstrating the benefits of the new approach and its potential for broad applicability to miscalibrated pushbroom sensor data. PMID:22163960

  1. Enhanced Quality Control in Pharmaceutical Applications by Combining Raman Spectroscopy and Machine Learning Techniques

    NASA Astrophysics Data System (ADS)

    Martinez, J. C.; Guzmán-Sepúlveda, J. R.; Bolañoz Evia, G. R.; Córdova, T.; Guzmán-Cabrera, R.

    2018-06-01

    In this work, we applied machine learning techniques to Raman spectra for the characterization and classification of manufactured pharmaceutical products. Our measurements were taken with commercial equipment, for accurate assessment of variations with respect to one calibrated control sample. Unlike the typical use of Raman spectroscopy in pharmaceutical applications, in our approach the principal components of the Raman spectrum are used concurrently as attributes in machine learning algorithms. This permits an efficient comparison and classification of the spectra measured from the samples under study. This also allows for accurate quality control as all relevant spectral components are considered simultaneously. We demonstrate our approach with respect to the specific case of acetaminophen, which is one of the most widely used analgesics in the market. In the experiments, commercial samples from thirteen different laboratories were analyzed and compared against a control sample. The raw data were analyzed based on an arithmetic difference between the nominal active substance and the measured values in each commercial sample. The principal component analysis was applied to the data for quantitative verification (i.e., without considering the actual concentration of the active substance) of the difference in the calibrated sample. Our results show that by following this approach adulterations in pharmaceutical compositions can be clearly identified and accurately quantified.
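
    The core of the approach, using principal components of the spectra directly as classifier attributes, can be sketched with scikit-learn as below. The spectra are synthetic and the SVM classifier is a generic choice standing in for the machine learning algorithms of the paper; none of the numbers correspond to the acetaminophen measurements.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(2)
    wavenumbers = np.linspace(200, 1800, 400)

    def spectrum(shift):
        """Toy Raman spectrum: two Gaussian bands, one of which shifts with formulation."""
        bands = np.exp(-0.5 * ((wavenumbers - 650) / 15) ** 2)
        bands += 0.8 * np.exp(-0.5 * ((wavenumbers - (1230 + shift)) / 20) ** 2)
        return bands + rng.normal(0.0, 0.02, wavenumbers.size)

    # Two "manufacturers": class 1 spectra carry a small band shift relative to the control.
    X = np.vstack([spectrum(0.0) for _ in range(40)] + [spectrum(6.0) for _ in range(40)])
    y = np.array([0] * 40 + [1] * 40)

    # Principal components of the spectra are used directly as classifier attributes.
    clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf", C=1.0))
    scores = cross_val_score(clf, X, y, cv=5)
    print("cross-validated accuracy:", scores.mean().round(3))
    ```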

  2. Photogrammetric measurement of 3D freeform millimetre-sized objects with micro features: an experimental validation of the close-range camera calibration model for narrow angles of view

    NASA Astrophysics Data System (ADS)

    Percoco, Gianluca; Sánchez Salmerón, Antonio J.

    2015-09-01

    The measurement of millimetre and micro-scale features is performed by high-cost systems based on technologies with narrow working ranges to accurately control the position of the sensors. Photogrammetry would lower the costs of 3D inspection of micro-features and would also be applicable to the inspection of non-removable micro parts of large objects. Unfortunately, the behaviour of photogrammetry when applied to micro-features is not well known. In this paper, the authors address these issues towards the application of digital close-range photogrammetry (DCRP) to the micro-scale, taking into account that research papers in the literature state that an angle of view (AOV) of around 10° is the lower limit for the application of the traditional pinhole close-range calibration model (CRCM), which is the basis of DCRP. First, a general calibration procedure is introduced, with the aid of an open-source software library, to calibrate narrow-AOV cameras with the CRCM. Subsequently the procedure is validated using a reflex camera with a 60 mm macro lens, equipped with extension tubes (20 and 32 mm) achieving magnifications of up to approximately 2×, to verify the literature findings with experimental photogrammetric 3D measurements of millimetre-sized objects with micro-features. The limitations of the laser printing technology used to produce the two-dimensional pattern on common paper have been overcome using an accurate pattern manufactured with a photolithographic process. The results of the experimental activity prove that the CRCM is valid for AOVs down to 3.4° and that DCRP results are comparable with the results of existing and more expensive commercial techniques.
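
    The abstract does not name the open-source library used; the sketch below assumes OpenCV's standard pinhole calibration API as a representative example of the close-range calibration model. The checkerboard geometry, square size and image folder are hypothetical placeholders, not the photolithographic pattern of the paper.

    ```python
    import glob
    import cv2
    import numpy as np

    # Hypothetical calibration target: 9x6 inner corners, 0.5 mm squares.
    PATTERN = (9, 6)
    SQUARE_MM = 0.5

    # 3D coordinates of the pattern corners in the pattern's own plane (z = 0).
    objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

    obj_points, img_points, img_size = [], [], None
    for path in glob.glob("calib_images/*.png"):          # hypothetical image folder
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, PATTERN)
        if not found:
            continue
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)
        img_size = gray.shape[::-1]

    # Standard pinhole close-range calibration with radial/tangential distortion terms.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, img_size, None, None)
    print("RMS reprojection error (px):", rms)
    print("camera matrix:\n", K)
    ```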

  3. Effects of line fiducial parameters and beamforming on ultrasound calibration

    PubMed Central

    Ameri, Golafsoun; Baxter, John S. H.; McLeod, A. Jonathan; Peters, Terry M.; Chen, Elvis C. S.

    2017-01-01

    Ultrasound (US)-guided interventions are often enhanced via integration with an augmented reality environment, a necessary component of which is US calibration. Calibration requires the segmentation of fiducials, i.e., a phantom, in US images. Fiducial localization error (FLE) can decrease US calibration accuracy, which fundamentally affects the total accuracy of the interventional guidance system. Here, we investigate the effects of US image reconstruction techniques as well as phantom material and geometry on US calibration. It was shown that the FLE was reduced by 29% with synthetic transmit aperture imaging compared with conventional B-mode imaging in a Z-bar calibration, resulting in a 10% reduction of calibration error. In addition, an evaluation of a variety of calibration phantoms with different geometrical and material properties was performed. The phantoms included braided wire, plastic straws, and polyvinyl alcohol cryogel tubes with different diameters. It was shown that these properties have a significant effect on calibration error, which is a variable based on US beamforming techniques. These results would have important implications for calibration procedures and their feasibility in the context of image-guided procedures. PMID:28331886

  4. Effects of line fiducial parameters and beamforming on ultrasound calibration.

    PubMed

    Ameri, Golafsoun; Baxter, John S H; McLeod, A Jonathan; Peters, Terry M; Chen, Elvis C S

    2017-01-01

    Ultrasound (US)-guided interventions are often enhanced via integration with an augmented reality environment, a necessary component of which is US calibration. Calibration requires the segmentation of fiducials, i.e., a phantom, in US images. Fiducial localization error (FLE) can decrease US calibration accuracy, which fundamentally affects the total accuracy of the interventional guidance system. Here, we investigate the effects of US image reconstruction techniques as well as phantom material and geometry on US calibration. It was shown that the FLE was reduced by 29% with synthetic transmit aperture imaging compared with conventional B-mode imaging in a Z-bar calibration, resulting in a 10% reduction of calibration error. In addition, an evaluation of a variety of calibration phantoms with different geometrical and material properties was performed. The phantoms included braided wire, plastic straws, and polyvinyl alcohol cryogel tubes with different diameters. It was shown that these properties have a significant effect on calibration error, which is a variable based on US beamforming techniques. These results would have important implications for calibration procedures and their feasibility in the context of image-guided procedures.

  5. ALMA High Frequency Techniques

    NASA Astrophysics Data System (ADS)

    Meyer, J. D.; Mason, B.; Impellizzeri, V.; Kameno, S.; Fomalont, E.; Chibueze, J.; Takahashi, S.; Remijan, A.; Wilson, C.; ALMA Science Team

    2015-12-01

    The purpose of the ALMA High Frequency Campaign is to improve the quality and efficiency of science observing in Bands 8, 9, and 10 (385-950 GHz), the highest frequencies available to the ALMA project. To this end, we outline observing modes which we have demonstrated to improve high frequency calibration for the 12m array and the ACA, and we present the calibration of the total power antennas at these frequencies. Band-to-band (B2B) transfer and bandwidth switching (BWSW), techniques which improve the speed and accuracy of calibration at the highest frequencies, are most necessary in Bands 8, 9, and 10 due to the rarity of strong calibrators. These techniques successfully enable increased signal-to-noise on the calibrator sources (and better calibration solutions) by measuring the calibrators at lower frequencies (B2B) or in wider bandwidths (BWSW) compared to the science target. We have also demonstrated the stability of the bandpass shape to better than 2.4% for 1 hour, hidden behind random noise, in Band 9. Finally, total power observing using the dual sideband receivers in Bands 9 and 10 requires the separation of the two sidebands; this procedure has been demonstrated in Band 9 and is undergoing further testing in Band 10.

  6. Automated image analysis of alpha-particle autoradiographs of human bone

    NASA Astrophysics Data System (ADS)

    Hatzialekou, Urania; Henshaw, Denis L.; Fews, A. Peter

    1988-01-01

    Further techniques [4,5] for the analysis of CR-39 α-particle autoradiographs have been developed for application to α-autoradiography of autopsy bone at natural levels of exposure. The most significant new approach is the use of fully automated image analysis using a system developed in this laboratory. A 5 cm × 5 cm autoradiograph of tissue in which the activity is below 1 Bq kg⁻¹ is scanned to both locate and measure the recorded α-particle tracks at a rate of 5 cm²/h. Improved methods of calibration have also been developed. The techniques are described and, in order to illustrate their application, a bone sample contaminated with ²³⁹Pu is analysed. Results from natural levels are the subject of a separate publication.

  7. Instrument intercomparison of glyoxal, methyl glyoxal and NO2 under simulated atmospheric conditions

    NASA Astrophysics Data System (ADS)

    Thalman, R.; Baeza-Romero, M. T.; Ball, S. M.; Borrás, E.; Daniels, M. J. S.; Goodall, I. C. A.; Henry, S. B.; Karl, T.; Keutsch, F. N.; Kim, S.; Mak, J.; Monks, P. S.; Muñoz, A.; Orlando, J.; Peppe, S.; Rickard, A. R.; Ródenas, M.; Sánchez, P.; Seco, R.; Su, L.; Tyndall, G.; Vázquez, M.; Vera, T.; Waxman, E.; Volkamer, R.

    2015-04-01

    The α-dicarbonyl compounds glyoxal (CHOCHO) and methyl glyoxal (CH3C(O)CHO) are produced in the atmosphere by the oxidation of hydrocarbons and emitted directly from pyrogenic sources. Measurements of ambient concentrations inform about the rate of hydrocarbon oxidation, oxidative capacity, and secondary organic aerosol (SOA) formation. We present results from a comprehensive instrument comparison effort at two simulation chamber facilities in the US and Europe that included nine instruments, and seven different measurement techniques: broadband cavity enhanced absorption spectroscopy (BBCEAS), cavity-enhanced differential optical absorption spectroscopy (CE-DOAS), white-cell DOAS, Fourier transform infrared spectroscopy (FTIR, two separate instruments), laser-induced phosphorescence (LIP), solid-phase micro extraction (SPME), and proton transfer reaction mass spectrometry (PTR-ToF-MS, two separate instruments; for methyl glyoxal only because no significant response was observed for glyoxal). Experiments at the National Center for Atmospheric Research (NCAR) compare three independent sources of calibration as a function of temperature (293-330 K). Calibrations from absorption cross-section spectra at UV-visible and IR wavelengths are found to agree within 2% for glyoxal, and 4% for methyl glyoxal at all temperatures; further calibrations based on ion-molecule rate constant calculations agreed within 5% for methyl glyoxal at all temperatures. At the European Photoreactor (EUPHORE) all measurements are calibrated from the same UV-visible spectra (either directly or indirectly), thus minimizing potential systematic bias. We find excellent linearity under idealized conditions (pure glyoxal or methyl glyoxal, R2 > 0.96), and in complex gas mixtures characteristic of dry photochemical smog systems (o-xylene/NOx and isoprene/NOx, R2 > 0.95; R2 ∼ 0.65 for offline SPME measurements of methyl glyoxal). The correlations are more variable in humid ambient air mixtures (RH > 45%) for methyl glyoxal (0.58 < R2 < 0.68) than for glyoxal (0.79 < R2 < 0.99). The intercepts of correlations were insignificant for the most part (below the instruments' experimentally determined detection limits); slopes further varied by less than 5% for instruments that could also simultaneously measure NO2. For glyoxal and methyl glyoxal the slopes varied by less than 12 and 17% (both 3-σ) between direct absorption techniques (i.e., calibration from knowledge of the absorption cross section). We find a larger variability among in situ techniques that employ external calibration sources (75-90%, 3-σ), and/or techniques that employ offline analysis. Our intercomparison reveals existing differences in reports about precision and detection limits in the literature, and enables comparison on a common basis by observing a common air mass. Finally, we evaluate the influence of interfering species (e.g., NO2, O3 and H2O) of relevance in field and laboratory applications. Techniques now exist to conduct fast and accurate measurements of glyoxal at ambient concentrations, and methyl glyoxal under simulated conditions. However, techniques to measure methyl glyoxal at ambient concentrations remain a challenge, and would be desirable.

  8. OEDIPE: a new graphical user interface for fast construction of numerical phantoms and MCNP calculations.

    PubMed

    Franck, D; de Carlan, L; Pierrat, N; Broggio, D; Lamart, S

    2007-01-01

    Although great efforts have been made to improve the physical phantoms used to calibrate in vivo measurement systems, these phantoms represent a single average counting geometry and usually contain a uniform distribution of the radionuclide over the tissue substitute. As a matter of fact, significant corrections must be made to phantom-based calibration factors in order to obtain absolute calibration efficiencies applicable to a given individual. The importance of these corrections is particularly crucial when considering in vivo measurements of low energy photons emitted by radionuclides deposited in the lung such as actinides. Thus, it was desirable to develop a method for calibrating in vivo measurement systems that is more sensitive to these types of variability. Previous works have demonstrated the possibility of such a calibration using the Monte Carlo technique. Our research programme extended such investigations to the reconstruction of numerical anthropomorphic phantoms based on personal physiological data obtained by computed tomography. New procedures based on a new graphical user interface (GUI) for development of computational phantoms for Monte Carlo calculations and data analysis are being developed to take advantage of recent progress in image-processing codes. This paper presents the principal features of this new GUI. Results of calculations and comparison with experimental data are also presented and discussed in this work.

  9. Automated response matching for organic scintillation detector arrays

    NASA Astrophysics Data System (ADS)

    Aspinall, M. D.; Joyce, M. J.; Cave, F. D.; Plenteda, R.; Tomanin, A.

    2017-07-01

    This paper identifies a digitizer technology with unique features that facilitates feedback control for the realization of a software-based technique for automatically calibrating detector responses. Three such auto-calibration techniques have been developed and are described along with an explanation of the main configuration settings and potential pitfalls. Automating this process increases repeatability, simplifies user operation, and enables remote and periodic system calibration where consistency across detectors' responses is critical.
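
    One common way to match responses, sketched below, is to locate a shared spectral feature in each detector's pulse-height spectrum and derive a per-channel gain correction relative to a reference detector. The synthetic spectra, feature energy and gain spread are invented for illustration; this is not the firmware described in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    TRUE_PEAK = 478.0                        # common reference feature (illustrative, a.u.)

    def pulse_height_spectrum(gain, n_events=20000):
        """Toy spectrum: a Gaussian feature on an exponential continuum, in ADC channels."""
        feature = rng.normal(TRUE_PEAK, 20.0, n_events // 4)
        continuum = rng.exponential(300.0, n_events)
        hist, edges = np.histogram(gain * np.concatenate([feature, continuum]),
                                   bins=512, range=(0, 1500))
        return hist, 0.5 * (edges[:-1] + edges[1:])

    def locate_feature(hist, centers):
        """Smooth the spectrum and return the channel of the high-energy feature."""
        smooth = np.convolve(hist, np.ones(7) / 7.0, mode="same")
        region = centers > 300.0             # ignore the low-channel continuum
        return centers[region][np.argmax(smooth[region])]

    gains = [1.00, 0.93, 1.08, 1.02]         # unknown detector/PMT gain spread
    ref_hist, ref_centers = pulse_height_spectrum(gains[0])
    ref_pos = locate_feature(ref_hist, ref_centers)

    for i, g in enumerate(gains):
        hist, centers = pulse_height_spectrum(g)
        pos = locate_feature(hist, centers)
        correction = ref_pos / pos           # multiply this detector's amplitudes by this factor
        print(f"detector {i}: feature at channel {pos:6.1f}, gain correction {correction:.3f}")
    ```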

  10. Dither Gyro Scale Factor Calibration: GOES-16 Flight Experience

    NASA Technical Reports Server (NTRS)

    Reth, Alan D.; Freesland, Douglas C.; Krimchansky, Alexander

    2018-01-01

    This poster is a sequel to a paper presented at the 34th Annual AAS Guidance and Control Conference in 2011, which first introduced dither-based calibration of gyro scale factors. The dither approach uses very small excitations, avoiding the need to take instruments offline during gyro scale factor calibration. In 2017, the dither calibration technique was successfully used to estimate gyro scale factors on the GOES-16 satellite. On-orbit dither calibration results were compared to more traditional methods using large angle spacecraft slews about each gyro axis, requiring interruption of science. The results demonstrate that the dither technique can estimate gyro scale factors to better than 2000 ppm during normal science observations.
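
    A simplified least-squares illustration of the dither idea follows: a small known sinusoidal rate excitation rides on top of normal science motion, and the gyro scale factor is recovered by regressing the measured rate against the known reference rate. The dither amplitude, noise levels and durations are made up for the sketch and are not GOES-16 values.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    fs, dur = 10.0, 3600.0                      # sample rate (Hz) and duration (s), illustrative
    t = np.arange(0.0, dur, 1.0 / fs)

    # Small commanded dither superimposed on normal science pointing (values made up).
    dither = 2.0e-4 * np.sin(2.0 * np.pi * 0.02 * t)          # rad/s
    background = 5.0e-5 * np.sin(2.0 * np.pi * 0.001 * t)     # slow body rates during science
    reference_rate = dither + background        # assumed known from commands + attitude solution

    true_scale, bias = 1.0015, 1.0e-5           # gyro scale factor and bias to be recovered
    measured = true_scale * reference_rate + bias + rng.normal(0.0, 2.0e-5, t.size)

    # Least-squares fit of measured rate against the known reference rate and a bias term.
    A = np.column_stack([reference_rate, np.ones_like(t)])
    (scale_est, bias_est), *_ = np.linalg.lstsq(A, measured, rcond=None)
    err_ppm = (scale_est - true_scale) * 1.0e6
    print(f"estimated scale factor {scale_est:.6f}, bias {bias_est:.2e}, error {err_ppm:.0f} ppm")
    ```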

  11. Uplink Array Calibration via Far-Field Power Maximization

    NASA Technical Reports Server (NTRS)

    Vilnrotter, V.; Mukai, R.; Lee, D.

    2006-01-01

    Uplink antenna arrays have the potential to greatly increase the Deep Space Network's high-data-rate uplink capabilities as well as useful range, and to provide additional uplink signal power during critical spacecraft emergencies. While techniques for calibrating an array of receive antennas have been addressed previously, proven concepts for uplink array calibration have yet to be demonstrated. This article describes a method of utilizing the Moon as a natural far-field reflector for calibrating a phased array of uplink antennas. Using this calibration technique, the radio frequency carriers transmitted by each antenna of the array are optimally phased to ensure that the uplink power received by the spacecraft is maximized.
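
    A toy version of power-maximization calibration is sketched below: each antenna's commanded phase is swept in turn while the (simulated) far-field power returned from the reflector is monitored, and the phase giving the largest power is kept. The four-antenna model, noise level and coordinate-ascent search are simplified stand-ins, not the DSN procedure.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    N_ANT = 4
    true_offsets = rng.uniform(-np.pi, np.pi, N_ANT)      # unknown per-antenna phase errors

    def far_field_power(phase_cmd):
        """Relative power at the far-field target for a given set of commanded phases."""
        field = np.sum(np.exp(1j * (phase_cmd - true_offsets)))
        power = np.abs(field) ** 2 / N_ANT**2
        return power * (1.0 + rng.normal(0.0, 0.01))      # 1% measurement noise

    # Coordinate ascent: dither each antenna's phase over a coarse grid, keep the best,
    # and repeat; antenna 0 is held fixed as the phase reference.
    cmd = np.zeros(N_ANT)
    grid = np.linspace(-np.pi, np.pi, 73)
    for sweep in range(3):
        for ant in range(1, N_ANT):
            trial_powers = []
            for phi in grid:
                trial = cmd.copy()
                trial[ant] = phi
                trial_powers.append(far_field_power(trial))
            cmd[ant] = grid[int(np.argmax(trial_powers))]
        print(f"sweep {sweep}: combined power = {far_field_power(cmd):.3f} (1.0 is ideal)")
    ```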

  12. A short-term ensemble wind speed forecasting system for wind power applications

    NASA Astrophysics Data System (ADS)

    Baidya Roy, S.; Traiteur, J. J.; Callicutt, D.; Smith, M.

    2011-12-01

    This study develops an adaptive, blended forecasting system to provide accurate wind speed forecasts 1 hour ahead of time for wind power applications. The system consists of an ensemble of 21 forecasts with different configurations of the Weather Research and Forecasting Single Column Model (WRFSCM) and a persistence model. The ensemble is calibrated against observations for a 2 month period (June-July, 2008) at a potential wind farm site in Illinois using the Bayesian Model Averaging (BMA) technique. The forecasting system is evaluated against observations for August 2008 at the same site. The calibrated ensemble forecasts significantly outperform the forecasts from the uncalibrated ensemble while significantly reducing forecast uncertainty under all environmental stability conditions. The system also generates significantly better forecasts than persistence, autoregressive (AR) and autoregressive moving average (ARMA) models during the morning transition and the diurnal convective regimes. This forecasting system is computationally more efficient than traditional numerical weather prediction models and can generate a calibrated forecast, including model runs and calibration, in approximately 1 minute. Currently, hour-ahead wind speed forecasts are almost exclusively produced using statistical models. However, numerical models have several distinct advantages over statistical models including the potential to provide turbulence forecasts. Hence, there is an urgent need to explore the role of numerical models in short-term wind speed forecasting. This work is a step in that direction and is likely to trigger a debate within the wind speed forecasting community.
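
    The calibration step can be sketched with a compact expectation-maximization loop for Gaussian Bayesian Model Averaging, estimating member weights and a common predictive variance, in the spirit of the system described above. The training forecasts and observations below are synthetic, and a five-member ensemble replaces the 21-member WRFSCM/persistence ensemble of the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    n_members, n_times = 5, 400

    # Synthetic training data: observed wind speed and biased, noisy member forecasts.
    obs = 8.0 + 2.0 * np.sin(np.linspace(0, 20, n_times)) + rng.normal(0, 0.5, n_times)
    skill = np.array([0.4, 0.6, 0.9, 1.4, 2.2])           # member error standard deviations
    fcst = obs[None, :] + rng.normal(0, 1, (n_members, n_times)) * skill[:, None]

    # EM for Gaussian BMA: member weights w_k and a common predictive variance s2.
    w = np.full(n_members, 1.0 / n_members)
    s2 = 1.0
    for _ in range(200):
        # E-step: responsibility of member k for observation t.
        dens = np.exp(-0.5 * (obs - fcst) ** 2 / s2) / np.sqrt(2 * np.pi * s2)
        z = w[:, None] * dens
        z /= z.sum(axis=0, keepdims=True)
        # M-step: update weights and variance.
        w = z.mean(axis=1)
        s2 = np.sum(z * (obs - fcst) ** 2) / n_times

    print("BMA weights:", np.round(w, 3))
    print("BMA predictive std dev:", round(float(np.sqrt(s2)), 2))

    # Calibrated point forecast for a new hour = weighted combination of member forecasts.
    new_members = np.array([7.8, 8.1, 8.4, 9.0, 6.9])
    print("BMA mean forecast:", round(float(w @ new_members), 2))
    ```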

  13. Calibration and evaluation of a dispersant application system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shum, J.S.

    1987-05-01

    The report presents recommended methods for calibrating and operating boat-mounted dispersant application systems. Calibration of one commercially available system and several unusual problems encountered in calibration are described. Charts and procedures for selecting pump rates and other operating parameters in order to achieve a desired dosage are provided. The calibration was performed at the EPA's Oil and Hazardous Materials Simulated Environmental Test Tank (OHMSETT) facility in Leonardo, New Jersey.

  14. Embedded Model Error Representation and Propagation in Climate Models

    NASA Astrophysics Data System (ADS)

    Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Thornton, P. E.

    2017-12-01

    Over the last decade, parametric uncertainty quantification (UQ) methods have reached a level of maturity, while the same cannot be said about the representation and quantification of structural or model errors. The lack of characterization of model errors, induced by physical assumptions, phenomenological parameterizations, or constitutive laws, is a major handicap in predictive science. In climate models, for example, significant computational resources are dedicated to model calibration without gaining improvement in predictive skill. Neglecting model errors during calibration/tuning will lead to overconfident and biased model parameters. At the same time, the most advanced methods accounting for model error merely correct output biases, augmenting model outputs with statistical error terms that can potentially violate physical laws or make the calibrated model ineffective for extrapolative scenarios. This work will overview a principled path for representing and quantifying model errors, as well as propagating them together with the rest of the predictive uncertainty budget, including data noise, parametric uncertainties, and surrogate-related errors. Namely, the model error terms will be embedded in select model components rather than added as external corrections. Such embedding ensures consistency with physical constraints on model predictions and renders calibrated model predictions meaningful and robust with respect to model errors. Moreover, in the presence of observational data, the approach can effectively differentiate model structural deficiencies from those of data acquisition. The methodology is implemented in the UQ Toolkit (www.sandia.gov/uqtoolkit), relying on a host of available forward and inverse UQ tools. We will demonstrate the application of the technique on a few applications of interest, including ACME Land Model calibration against a wide range of measurements obtained at select sites.

  15. Commodity-Free Calibration

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Commodity-free calibration is a reaction rate calibration technique that does not require the addition of any commodities. This technique is a specific form of the reaction rate technique, where all of the necessary reactants, other than the sample being analyzed, are either inherent in the analyzing system or specifically added or provided to the system for a reason other than calibration. After introduction, the component of interest is exposed to other reactants or flow paths already present in the system. The instrument detector records one of the following to determine the rate of reaction: the increase in the response of the reaction product, a decrease in the signal of the analyte response, or a decrease in the signal from the inherent reactant. With these data, the initial concentration of the analyte is calculated. This type of system can analyze and calibrate simultaneously, reduce the risk of false positives and exposure to toxic vapors, and improve accuracy. Moreover, having an excess of the reactant already present in the system eliminates the need to add commodities, which further reduces cost, logistic problems, and potential contamination. Also, the calculations involved can be simplified by comparison to those of the reaction rate technique. We conducted tests with hypergols as an initial investigation into the feasibility of the technique.
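
    A purely illustrative calculation of the kind implied by the reaction rate technique, assuming pseudo-first-order decay of the analyte signal against an excess reactant already present in the system and a known detector response factor; neither assumption comes from the record above:

    ```python
    import numpy as np

    def initial_concentration(t, signal, response_factor):
        """Fit ln S = ln S0 - k t and convert S0 to a concentration."""
        slope, ln_s0 = np.polyfit(t, np.log(signal), 1)
        s0 = np.exp(ln_s0)
        return s0 / response_factor, -slope           # (concentration, rate constant)

    t = np.linspace(0.0, 10.0, 20)                    # s
    signal = 5.0 * np.exp(-0.3 * t)                   # simulated detector response
    print(initial_concentration(t, signal, response_factor=2.5))
    ```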

  16. Application of satellite estimates of rainfall distribution to simulate the potential for malaria transmission in Africa

    NASA Astrophysics Data System (ADS)

    Yamana, T. K.; Eltahir, E. A.

    2009-12-01

    The Hydrology, Entomology and Malaria Transmission Simulator (HYDREMATS) is a mechanistic model developed to assess malaria risk in areas where the disease is water-limited. This model relies on precipitation inputs as its primary forcing. Until now, applications of the model have used ground-based precipitation observations. However, rain gauge networks in the areas most affected by malaria are often sparse. The increasing availability of satellite-based rainfall estimates could greatly extend the range of the model. The minimum temporal resolution of precipitation data needed was determined to be one hour. The CPC Morphing technique (CMORPH) distributed by NOAA meets this criterion, as it provides 30-minute estimates at 8 km resolution. CMORPH data were compared to ground observations in four West African villages, and calibrated to reduce overestimation and false alarm biases. The calibrated CMORPH data were used to force HYDREMATS, resulting in outputs for mosquito populations, vectorial capacity and malaria transmission.
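
    As a hedged sketch of the calibration step described above (the actual CMORPH adjustment procedure is not specified in this record), one simple approach is to zero out light satellite rain below a threshold chosen to match the gauge wet-hour frequency and then rescale to remove the overall bias; all thresholds and data below are illustrative:

    ```python
    import numpy as np

    def calibrate(sat, gauge):
        wet_frac = np.mean(gauge > 0.1)                   # gauge wet-hour frequency
        thresh = np.quantile(sat, 1.0 - wet_frac)         # matching satellite threshold
        sat_adj = np.where(sat >= thresh, sat, 0.0)       # false-alarm control
        scale = gauge.sum() / max(sat_adj.sum(), 1e-9)    # remove overall bias
        return sat_adj * scale, thresh, scale

    rng = np.random.default_rng(0)
    sat = rng.gamma(0.3, 2.0, 1000)     # hourly satellite estimates (mm)
    gauge = rng.gamma(0.2, 2.0, 1000)   # co-located gauge observations (mm)
    cal, thresh, scale = calibrate(sat, gauge)
    print(thresh, scale)
    ```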

  17. 48 CFR 52.222-51 - Exemption from Application of the Service Contract Act to Contracts for Maintenance, Calibration...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... of the Service Contract Act to Contracts for Maintenance, Calibration, or Repair of Certain Equipment....222-51 Exemption from Application of the Service Contract Act to Contracts for Maintenance... clause: Exemption From Application of the Service Contract Act to Contracts for Maintenance, Calibration...

  18. Towards quantitative magnetic particle imaging: A comparison with magnetic particle spectroscopy

    NASA Astrophysics Data System (ADS)

    Paysen, Hendrik; Wells, James; Kosch, Olaf; Steinhoff, Uwe; Trahms, Lutz; Schaeffter, Tobias; Wiekhorst, Frank

    2018-05-01

    Magnetic Particle Imaging (MPI) is a quantitative imaging modality with promising features for several biomedical applications. Here, we study quantitatively the raw data obtained during MPI measurements. We present a method for the calibration of the MPI scanner output using measurements from a magnetic particle spectrometer (MPS) to yield data in units of magnetic moments. The calibration technique is validated in a simplified MPI mode with a 1D excitation field. Using the calibrated results from MPS and MPI, we determine and compare the detection limits for each system. The detection limits were found to be 5×10⁻¹² Am² for MPS and 3.6×10⁻¹⁰ Am² for MPI. Finally, the quantitative information contained in a standard MPI measurement with a 3D excitation is analyzed and compared to the previous results, showing a decrease in the signal amplitudes of the odd harmonics relative to the case of 1D excitation. We propose physical explanations for all acquired results and discuss the possible benefits for the improvement of MPI technology.
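
    A simplified sketch of what an MPS-based calibration of scanner output could look like: a reference sample of known magnetic moment measured in the spectrometer yields a moment-per-signal factor for each harmonic, which is then applied to the scanner's harmonic amplitudes. The numbers and the per-harmonic treatment below are assumptions for illustration, not values or procedures from this study:

    ```python
    import numpy as np

    def calibration_factors(mps_harmonics, known_moment):
        """Moment (A m^2) per unit signal amplitude, one factor per harmonic."""
        return known_moment / np.asarray(mps_harmonics, dtype=float)

    def to_magnetic_moment(mpi_harmonics, factors):
        return np.asarray(mpi_harmonics, dtype=float) * factors

    ref = np.array([2.0e-3, 7.5e-4, 3.1e-4])                  # MPS reference amplitudes (a.u.)
    factors = calibration_factors(ref, known_moment=5.0e-9)   # reference sample: 5e-9 A m^2
    print(to_magnetic_moment([1.1e-3, 4.0e-4, 1.6e-4], factors))
    ```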

  19. Construction and Calibration of a Difference Frequency Laser Spectrometer and New THZ Frequency Measurements of water and Ammonia

    NASA Technical Reports Server (NTRS)

    Pearson, J. C.; Pickett, Herbert M.; Chen, Pin; Matsuura, Shuji; Blake, Geoffry A.

    1999-01-01

    A three-laser system based on 852 nm DBR lasers has been constructed and used to generate radiation in the 750 GHz to 1600 GHz frequency region. The system works by locking two of the three lasers to modes of an ultra-low-expansion Fabry-Perot cavity. The third laser is offset locked to one of the cavity-locked lasers with conventional microwave techniques. The signals from the offset laser and the other cavity-locked laser are injected into a Master Oscillator Power Amplifier (MOPA), amplified, and focused onto a low-temperature-grown GaAs photomixer, which radiates the difference frequency. The system has been calibrated with molecular lines to better than one part in 10^7. In this paper we present the application of this system to the ν2 inversion band of ammonia and the ground and ν2 states of water. A discussion of the system design, the calibration, and the new spectral measurements will be presented.

  20. Swarm Optimization-Based Magnetometer Calibration for Personal Handheld Devices

    PubMed Central

    Ali, Abdelrahman; Siddharth, Siddharth; Syed, Zainab; El-Sheimy, Naser

    2012-01-01

    Inertial Navigation Systems (INS) consist of accelerometers, gyroscopes and a processor that generates position and orientation solutions by integrating the specific forces and rotation rates. In addition to the accelerometers and gyroscopes, magnetometers can be used to derive the user heading based on Earth's magnetic field. Unfortunately, the measurements of the magnetic field obtained with low-cost sensors are usually corrupted by several errors, including manufacturing defects and external electromagnetic fields. Consequently, proper calibration of the magnetometer is required to achieve high accuracy heading measurements. In this paper, a Particle Swarm Optimization (PSO)-based calibration algorithm is presented to estimate the values of the bias and scale factor of low cost magnetometers. The main advantage of this technique is the use of artificial intelligence, which does not require any error modeling or awareness of the nonlinearity. Furthermore, the proposed algorithm can help in the development of Pedestrian Navigation Devices (PNDs) when combined with inertial sensors and GPS/Wi-Fi for indoor navigation and Location Based Services (LBS) applications.
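
    A minimal particle swarm optimizer for the bias/scale-factor estimation problem described above, assuming the calibrated field magnitude should match the local Earth-field magnitude at every attitude; all tuning constants and the simulated data are illustrative, and this is not the authors' implementation:

    ```python
    import numpy as np

    def cost(params, m, field_mag):
        b, s = params[:3], params[3:]
        mag = np.linalg.norm(s * (m - b), axis=1)
        return np.mean((mag - field_mag) ** 2)

    def pso_calibrate(m, field_mag, n_particles=40, n_iter=300, seed=0):
        rng = np.random.default_rng(seed)
        lo = np.r_[-10.0 * np.ones(3), 0.5 * np.ones(3)]   # search box: bias, scale
        hi = np.r_[10.0 * np.ones(3), 1.5 * np.ones(3)]
        x = rng.uniform(lo, hi, (n_particles, 6))
        v = np.zeros_like(x)
        pbest = x.copy()
        pbest_cost = np.array([cost(p, m, field_mag) for p in x])
        gbest = pbest[np.argmin(pbest_cost)].copy()
        for _ in range(n_iter):
            r1, r2 = rng.random((2, n_particles, 6))
            v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
            x = x + v
            c = np.array([cost(p, m, field_mag) for p in x])
            better = c < pbest_cost
            pbest[better], pbest_cost[better] = x[better], c[better]
            gbest = pbest[np.argmin(pbest_cost)].copy()
        return gbest[:3], gbest[3:]                        # estimated bias, scale

    # Simulated rotation data: true field magnitude F, bias b_true, scale s_true.
    rng = np.random.default_rng(1)
    b_true, s_true, F = np.array([5.0, -3.0, 2.0]), np.array([1.1, 0.9, 1.05]), 50.0
    u = rng.normal(size=(500, 3)); u /= np.linalg.norm(u, axis=1, keepdims=True)
    m = (F * u) / s_true + b_true
    print(pso_calibrate(m, F))
    ```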

  1. Radiometric responsivity determination for Feature Identification and Location Experiment (FILE) flown on space shuttle mission

    NASA Technical Reports Server (NTRS)

    Wilson, R. G.; Davis, R. E.; Wright, R. E., Jr.; Sivertson, W. E., Jr.; Bullock, G. F.

    1986-01-01

    A procedure was developed to obtain the radiometric (radiance) responsivity of the Feature Identification and Location Experiment (FILE) instrument in preparation for its flight on Space Shuttle Mission 41-G (November 1984). This instrument was designed to obtain Earth feature radiance data in spectral bands centered at 0.65 and 0.85 microns, along with corroborative color and color-infrared photographs, and to collect data to evaluate a technique for in-orbit autonomous classification of the Earth's primary features. The calibration process incorporated both solar radiance measurements and radiative transfer model predictions in estimating expected radiance inputs to the FILE on the Shuttle. The measured data are compared with the model predictions, and the differences observed are discussed. Application of the calibration procedure to the FILE over an 18-month period indicated a constant responsivity characteristic. This report documents the calibration procedure and the associated radiometric measurements and predictions that were part of the instrument preparation for flight.

  2. MO-D-BRD-03: Radiobiology and Commissioning of Electronic Brachytherapy for IORT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, J.

    2015-06-15

    Electronic brachytherapy (eBT) has seen an influx of manufacturers entering the US market for use in radiation therapy. In addition to the established interstitial, intraluminal, and intracavitary applications of eBT, many centers are now using eBT to treat skin lesions. It is important for medical physicists working with electronic brachytherapy sources to understand the basic physics principles of the sources themselves as well as the variety of applications for which they are being used. The calibration of the sources differs from vendor to vendor, and the traceability of calibrations has evolved as new sources came to market. In 2014, a new air-kerma based standard was introduced by the National Institute of Standards and Technology (NIST) to measure the output of an eBT source. Eventually, commercial treatment planning systems should accommodate this new standard and provide NIST traceability to the end user. The calibration and commissioning of an eBT system is unique to its application and typically entails a list of procedural recommendations by the manufacturer. Commissioning measurements are performed using a variety of methods, some of which are modifications of existing AAPM Task Group protocols. A medical physicist should be familiar with the different AAPM Task Group recommendations for applicability to eBT and how to properly adapt them to their needs. In addition to the physical characteristics of an eBT source, the photon energy is substantially lower than from HDR Ir-192 sources. Consequently, tissue-specific dosimetry and radiobiological considerations are necessary when comparing these brachytherapy modalities and when making clinical decisions as a radiation therapy team. In this session, the physical characteristics and calibration methodologies of eBT sources will be presented, as well as radiobiology considerations and other important clinical considerations. Learning Objectives: To understand the basic principles of electronic brachytherapy and the various applications for which it is being used. To understand the physics of the calibration and commissioning of electronic brachytherapy sources. To understand the unique radiobiology and clinical implementation of electronic brachytherapy systems for skin and IORT techniques. Xoft, Inc. contributed funding toward development of the NIST electronic brachytherapy facility (Michael Mitch). The University of Wisconsin (Wesley Culberson) has received research support funding from Xoft, Inc. Zoubir Ouhib has received partial funding from Elekta Esteya.

  3. MO-D-BRD-01: Clinical Implementation of An Electronic Brachytherapy Program for the Skin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ouhib, Z.

    2015-06-15

    Electronic brachytherapy (eBT) has seen an influx of manufacturers entering the US market for use in radiation therapy. In addition to the established interstitial, intraluminal, and intracavitary applications of eBT, many centers are now using eBT to treat skin lesions. It is important for medical physicists working with electronic brachytherapy sources to understand the basic physics principles of the sources themselves as well as the variety of applications for which they are being used. The calibration of the sources differs from vendor to vendor, and the traceability of calibrations has evolved as new sources came to market. In 2014, a new air-kerma based standard was introduced by the National Institute of Standards and Technology (NIST) to measure the output of an eBT source. Eventually, commercial treatment planning systems should accommodate this new standard and provide NIST traceability to the end user. The calibration and commissioning of an eBT system is unique to its application and typically entails a list of procedural recommendations by the manufacturer. Commissioning measurements are performed using a variety of methods, some of which are modifications of existing AAPM Task Group protocols. A medical physicist should be familiar with the different AAPM Task Group recommendations for applicability to eBT and how to properly adapt them to their needs. In addition to the physical characteristics of an eBT source, the photon energy is substantially lower than from HDR Ir-192 sources. Consequently, tissue-specific dosimetry and radiobiological considerations are necessary when comparing these brachytherapy modalities and when making clinical decisions as a radiation therapy team. In this session, the physical characteristics and calibration methodologies of eBT sources will be presented, as well as radiobiology considerations and other important clinical considerations. Learning Objectives: To understand the basic principles of electronic brachytherapy and the various applications for which it is being used. To understand the physics of the calibration and commissioning of electronic brachytherapy sources. To understand the unique radiobiology and clinical implementation of electronic brachytherapy systems for skin and IORT techniques. Xoft, Inc. contributed funding toward development of the NIST electronic brachytherapy facility (Michael Mitch). The University of Wisconsin (Wesley Culberson) has received research support funding from Xoft, Inc. Zoubir Ouhib has received partial funding from Elekta Esteya.

  4. MO-D-BRD-00: Electronic Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    Electronic brachytherapy (eBT) has seen an influx of manufacturers entering the US market for use in radiation therapy. In addition to the established interstitial, intraluminal, and intracavitary applications of eBT, many centers are now using eBT to treat skin lesions. It is important for medical physicists working with electronic brachytherapy sources to understand the basic physics principles of the sources themselves as well as the variety of applications for which they are being used. The calibration of the sources differs from vendor to vendor, and the traceability of calibrations has evolved as new sources came to market. In 2014, a new air-kerma based standard was introduced by the National Institute of Standards and Technology (NIST) to measure the output of an eBT source. Eventually, commercial treatment planning systems should accommodate this new standard and provide NIST traceability to the end user. The calibration and commissioning of an eBT system is unique to its application and typically entails a list of procedural recommendations by the manufacturer. Commissioning measurements are performed using a variety of methods, some of which are modifications of existing AAPM Task Group protocols. A medical physicist should be familiar with the different AAPM Task Group recommendations for applicability to eBT and how to properly adapt them to their needs. In addition to the physical characteristics of an eBT source, the photon energy is substantially lower than from HDR Ir-192 sources. Consequently, tissue-specific dosimetry and radiobiological considerations are necessary when comparing these brachytherapy modalities and when making clinical decisions as a radiation therapy team. In this session, the physical characteristics and calibration methodologies of eBT sources will be presented, as well as radiobiology considerations and other important clinical considerations. Learning Objectives: To understand the basic principles of electronic brachytherapy and the various applications for which it is being used. To understand the physics of the calibration and commissioning of electronic brachytherapy sources. To understand the unique radiobiology and clinical implementation of electronic brachytherapy systems for skin and IORT techniques. Xoft, Inc. contributed funding toward development of the NIST electronic brachytherapy facility (Michael Mitch). The University of Wisconsin (Wesley Culberson) has received research support funding from Xoft, Inc. Zoubir Ouhib has received partial funding from Elekta Esteya.

  5. MO-D-BRD-02: Radiological Physics and Surface Lesion Treatments with Electronic Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fulkerson, R.

    Electronic brachytherapy (eBT) has seen an influx of manufacturers entering the US market for use in radiation therapy. In addition to the established interstitial, intraluminal, and intracavitary applications of eBT, many centers are now using eBT to treat skin lesions. It is important for medical physicists working with electronic brachytherapy sources to understand the basic physics principles of the sources themselves as well as the variety of applications for which they are being used. The calibration of the sources differs from vendor to vendor, and the traceability of calibrations has evolved as new sources came to market. In 2014, a new air-kerma based standard was introduced by the National Institute of Standards and Technology (NIST) to measure the output of an eBT source. Eventually, commercial treatment planning systems should accommodate this new standard and provide NIST traceability to the end user. The calibration and commissioning of an eBT system is unique to its application and typically entails a list of procedural recommendations by the manufacturer. Commissioning measurements are performed using a variety of methods, some of which are modifications of existing AAPM Task Group protocols. A medical physicist should be familiar with the different AAPM Task Group recommendations for applicability to eBT and how to properly adapt them to their needs. In addition to the physical characteristics of an eBT source, the photon energy is substantially lower than from HDR Ir-192 sources. Consequently, tissue-specific dosimetry and radiobiological considerations are necessary when comparing these brachytherapy modalities and when making clinical decisions as a radiation therapy team. In this session, the physical characteristics and calibration methodologies of eBT sources will be presented, as well as radiobiology considerations and other important clinical considerations. Learning Objectives: To understand the basic principles of electronic brachytherapy and the various applications for which it is being used. To understand the physics of the calibration and commissioning of electronic brachytherapy sources. To understand the unique radiobiology and clinical implementation of electronic brachytherapy systems for skin and IORT techniques. Xoft, Inc. contributed funding toward development of the NIST electronic brachytherapy facility (Michael Mitch). The University of Wisconsin (Wesley Culberson) has received research support funding from Xoft, Inc. Zoubir Ouhib has received partial funding from Elekta Esteya.

  6. An Investigation of Acoustic Cavitation Produced by Pulsed Ultrasound

    DTIC Science & Technology

    1987-12-01

    Naval Postgraduate School (Monterey, California) thesis by Robert L. Bruce, Jr., on acoustic cavitation produced by pulsed ultrasound. The recoverable portions of this record describe PVDF hydrophone sensitivity calibration curves and note that the reciprocity technique was chosen for hydrophone calibration.

  7. Improved dewpoint-probe calibration

    NASA Technical Reports Server (NTRS)

    Stephenson, J. G.; Theodore, E. A.

    1978-01-01

    A relatively simple pressure-control apparatus calibrates dewpoint probes considerably faster than conventional methods, with no loss of accuracy. The technique requires only a pressure measurement at each calibration point and a single absolute-humidity measurement at the beginning of a run. Several probes can be calibrated simultaneously, and points can be checked above room temperature.

  8. Using multiple IMUs in a stacked filter configuration for calibration and fine alignment

    NASA Astrophysics Data System (ADS)

    El-Osery, Aly; Bruder, Stephen; Wedeward, Kevin

    2018-05-01

    Determination of a vehicle or person's position and/or orientation is a critical task for a multitude of applications ranging from automated cars and first responders to missiles and fighter jets. Most of these applications rely primarily on global navigation satellite systems, e.g., GPS, which are highly vulnerable to degradation whether by environmental factors or malicious actions. The use of inertial navigation techniques has been shown to provide increased reliability of navigation systems in these situations. Due to advances in MEMS technology and processing capabilities, the use of small and low-cost inertial measurement units (IMUs) is becoming increasingly feasible, which results in small size, weight and power (SWaP) solutions. A known limitation of MEMS IMUs is errors that cause the navigation solution to drift; furthermore, calibration and initialization are challenging tasks. In this paper, we investigate the use of multiple IMUs to aid in calibrating the navigation system and obtaining accurate initialization by performing fine alignment. By using a centralized filter, physical constraints between the multiple IMUs on a rigid body are leveraged to provide relative updates, which in turn aids in the estimation of the individual biases and scale factors. Developed algorithms will be validated through simulation and actual measurements using low-cost IMUs.

  9. Testing of a Microwave Blade Tip Clearance Sensor at the NASA Glenn Research Center

    NASA Technical Reports Server (NTRS)

    Woike, Mark R.; Roeder, James W.; Hughes, Christopher E.; Bencic, Timothy J.

    2009-01-01

    The development of new active tip clearance control and structural health monitoring schemes in turbine engines and other types of rotating machinery requires sensors that are highly accurate and can operate in a high-temperature environment. The use of a microwave sensor to acquire blade tip clearance and tip timing measurements is being explored at the NASA Glenn Research Center. The microwave blade tip clearance sensor works on principles that are very similar to a short-range radar system. The sensor sends a continuous microwave signal towards a target and measures the reflected signal. The phase difference of the reflected signal is directly proportional to the distance between the sensor and the target being measured. This type of sensor is beneficial in that it has the ability to operate at extremely high temperatures and is unaffected by contaminants that may be present in turbine engines. The use of microwave sensors for this application is a new concept. Techniques for calibrating the sensors, along with installation effects, are not as well quantified as they are for other sensor technologies. Developing calibration techniques and evaluating installation effects are essential in using these sensors to make tip clearance and tip timing measurements. As a means of better understanding these issues, the microwave sensors were used on a benchtop calibration rig, a large axial vane fan, and a turbofan. Background on the microwave tip clearance sensor, an overview of their calibration, and the results from their use on the axial vane fan and the turbofan will be presented in this paper.
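
    A small sketch of the phase-to-clearance conversion implied above: for a continuous-wave sensor the reflected signal traverses the gap twice, so distance follows from the measured phase as d = φλ/(4π), modulo half a wavelength, and a reference target at a known gap can absorb fixed installation offsets. The 24 GHz operating frequency below is an assumption for illustration, not a value from this paper:

    ```python
    import numpy as np

    C = 299_792_458.0                              # speed of light, m/s

    def phase_to_distance(phase_rad, freq_hz):
        """Round-trip CW ranging: d = phase * lambda / (4*pi), modulo lambda/2."""
        lam = C / freq_hz
        return phase_rad * lam / (4.0 * np.pi)

    def installation_offset(measured_phase, known_gap_m, freq_hz):
        """Additive offset inferred from a reference target at a known gap."""
        return known_gap_m - phase_to_distance(measured_phase, freq_hz)

    f = 24.0e9                                         # assumed operating frequency, Hz
    offset = installation_offset(2.1, known_gap_m=2.0e-3, freq_hz=f)
    print(phase_to_distance(2.5, f) + offset)          # corrected clearance estimate, m
    ```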

  11. Coloration Determination of Spectral Darkening Occurring on a Broadband Earth Observing Radiometer: Application to Clouds and the Earth's Radiant Energy System (CERES)

    NASA Technical Reports Server (NTRS)

    Matthews, Grant; Priestley, Kory; Loeb, Norman G.; Loukachine, Konstantin; Thomas, Susan; Walikainen, Dale; Wielicki, Bruce A.

    2006-01-01

    It is estimated that in order to best detect real changes in the Earth's climate system, space-based instrumentation measuring the Earth Radiation Budget (ERB) must remain calibrated with a stability of 0.3% per decade. Such stability is beyond the specified accuracy of existing ERB programs such as the Clouds and the Earth's Radiant Energy System (CERES), which uses three broadband radiometric scanning channels: shortwave (0.3-5 microns), total (0.3 to beyond 100 microns), and window (8-12 microns). It has been shown that when in low Earth orbit, optical response to blue/UV radiance can be reduced significantly due to UV-hardened contaminants deposited on the surface of the optics. Since typical onboard calibration lamps do not emit sufficient energy in the blue/UV region, this darkening is not directly measurable using standard internal calibration techniques. This paper describes a study using a model of contaminant deposition and darkening, in conjunction with in-flight vicarious calibration techniques, to derive the spectral shape of the darkening to which a broadband instrument is subjected. Ultimately the model uses the reflectivity of Deep Convective Clouds as a stability metric. The results of the model when applied to the CERES instruments on board the EOS Terra satellite are shown. Given comprehensive validation of the model, these results will allow the CERES spectral responses to be updated accordingly prior to any forthcoming data release in an attempt to reach the optimum stability target that the climate community requires.

  12. Performance of the score systems Acute Physiology and Chronic Health Evaluation II and III at an interdisciplinary intensive care unit, after customization

    PubMed Central

    Markgraf, Rainer; Deutschinoff, Gerd; Pientka, Ludger; Scholten, Theo; Lorenz, Cristoph

    2001-01-01

    Background: Mortality predictions calculated using scoring scales are often not accurate in populations other than those in which the scales were developed because of differences in case-mix. The present study investigates the effect of first-level customization, using a logistic regression technique, on discrimination and calibration of the Acute Physiology and Chronic Health Evaluation (APACHE) II and III scales. Method: Probabilities of hospital death for patients were estimated by applying APACHE II and III and comparing these with observed outcomes. Using the split sample technique, a customized model to predict outcome was developed by logistic regression. The overall goodness-of-fit of the original and the customized models was assessed. Results: Of 3383 consecutive intensive care unit (ICU) admissions over 3 years, 2795 patients could be analyzed, and were split randomly into development and validation samples. The discriminative powers of APACHE II and III were unchanged by customization (areas under the receiver operating characteristic [ROC] curve 0.82 and 0.85, respectively). Hosmer-Lemeshow goodness-of-fit tests showed good calibration for APACHE II, but insufficient calibration for APACHE III. Customization improved calibration for both models, with a good fit for APACHE III as well. However, fit was different for various subgroups. Conclusions: The overall goodness-of-fit of APACHE III mortality prediction was improved significantly by customization, but uniformity of fit in different subgroups was not achieved. Therefore, application of the customized model provides no advantage, because differences in case-mix still limit comparisons of quality of care. PMID:11178223
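
    As a concrete illustration of first-level customization (not the authors' code), the sketch below refits the intercept and slope of the logit of the original predicted risk against observed hospital mortality using scikit-learn; the APACHE probabilities and outcomes here are simulated:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def customize(p_apache, died):
        """First-level customization: logit(p_custom) = b0 + b1 * logit(p_apache)."""
        x = np.log(p_apache / (1.0 - p_apache)).reshape(-1, 1)
        return LogisticRegression().fit(x, died)

    def predict_custom(model, p_apache):
        x = np.log(p_apache / (1.0 - p_apache)).reshape(-1, 1)
        return model.predict_proba(x)[:, 1]

    # Development sample: original APACHE probabilities and (simulated) outcomes.
    rng = np.random.default_rng(0)
    p_dev = rng.beta(2.0, 8.0, 1000)
    died = (rng.random(1000) < 0.8 * p_dev).astype(int)   # deliberate miscalibration
    model = customize(p_dev, died)
    print(predict_custom(model, np.array([0.1, 0.3, 0.6])))
    ```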

  13. Development of a generic auto-calibration package for regional ecological modeling and application in the Central Plains of the United States

    USGS Publications Warehouse

    Wu, Yiping; Liu, Shuguang; Li, Zhengpeng; Dahal, Devendra; Young, Claudia J.; Schmidt, Gail L.; Liu, Jinxun; Davis, Brian; Sohl, Terry L.; Werner, Jeremy M.; Oeding, Jennifer

    2014-01-01

    Process-oriented ecological models are frequently used for predicting potential impacts of global changes such as climate and land-cover changes, which can be useful for policy making. It is critical but challenging to automatically derive optimal parameter values at different scales, especially at the regional scale, and to validate model performance. In this study, we developed an automatic calibration (auto-calibration) function for a well-established biogeochemical model—the General Ensemble Biogeochemical Modeling System (GEMS)-Erosion Deposition Carbon Model (EDCM)—using a data assimilation technique, the Shuffled Complex Evolution algorithm, and a model-inversion R package, the Flexible Modeling Environment (FME). The new functionality can support multi-parameter and multi-objective auto-calibration of EDCM at both the pixel and regional levels. We also developed a post-processing procedure for GEMS to provide options to save the pixel-based or aggregated county-land cover specific parameter values for subsequent simulations. In our case study, we successfully applied the updated model (EDCM-Auto) for a single crop pixel with a corn–wheat rotation and a large ecological region (Level II)—the Central USA Plains. The evaluation results indicate that EDCM-Auto is applicable at multiple scales and is capable of handling land cover changes (e.g., crop rotations). The model also performs well in capturing the spatial pattern of grain yield production for crops and net primary production (NPP) for other ecosystems across the region, which is a good example of implementing calibration and validation of ecological models with readily available survey data (grain yield) and remote sensing data (NPP) at regional and national levels. The developed platform for auto-calibration can be readily expanded to incorporate other model inversion algorithms and potential R packages, and can also be applied to other ecological models.
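
    A generic auto-calibration wrapper in the spirit of the above, shown for illustration with SciPy's differential evolution as a stand-in for the Shuffled Complex Evolution algorithm; the placeholder model, parameter bounds, and observation streams are assumptions, not parts of GEMS or EDCM:

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    def run_model(params, forcing):
        """Placeholder two-parameter 'ecosystem model' returning (yield, NPP) series."""
        a, b = params
        return a * forcing, b * np.sqrt(forcing)

    def objective(params, forcing, obs_yield, obs_npp, w=(0.5, 0.5)):
        sim_yield, sim_npp = run_model(params, forcing)
        rmse = lambda s, o: np.sqrt(np.mean((s - o) ** 2))
        return w[0] * rmse(sim_yield, obs_yield) + w[1] * rmse(sim_npp, obs_npp)

    forcing = np.linspace(1.0, 10.0, 50)
    obs_yield, obs_npp = 2.3 * forcing, 0.8 * np.sqrt(forcing)   # synthetic observations
    result = differential_evolution(objective, bounds=[(0.0, 5.0), (0.0, 5.0)],
                                    args=(forcing, obs_yield, obs_npp), seed=1)
    print(result.x)   # should approach (2.3, 0.8)
    ```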

  14. Proposed low-energy absolute calibration of nuclear recoils in a dual-phase noble element TPC using D-D neutron scattering kinematics

    NASA Astrophysics Data System (ADS)

    Verbus, J. R.; Rhyne, C. A.; Malling, D. C.; Genecov, M.; Ghosh, S.; Moskowitz, A. G.; Chan, S.; Chapman, J. J.; de Viveiros, L.; Faham, C. H.; Fiorucci, S.; Huang, D. Q.; Pangilinan, M.; Taylor, W. C.; Gaitskell, R. J.

    2017-04-01

    We propose a new technique for the calibration of nuclear recoils in large noble element dual-phase time projection chambers used to search for WIMP dark matter in the local galactic halo. This technique provides an in situ measurement of the low-energy nuclear recoil response of the target media using the measured scattering angle between multiple neutron interactions within the detector volume. The low-energy reach and reduced systematics of this calibration have particular significance for the low-mass WIMP sensitivity of several leading dark matter experiments. Multiple strategies for improving this calibration technique are discussed, including the creation of a new type of quasi-monoenergetic neutron source with a minimum possible peak energy of 272 keV. We report results from a time-of-flight-based measurement of the neutron energy spectrum produced by an Adelphi Technology, Inc. DD108 neutron generator, confirming its suitability for the proposed nuclear recoil calibration.

  15. Laboratory instrumentation and techniques for characterizing multi-junction solar cells for space applications

    NASA Technical Reports Server (NTRS)

    Woodyard, James R.

    1995-01-01

    Multi-junction solar cells are attractive for space applications because they can be designed to convert a larger fraction of AM0 into electrical power at a lower cost than single-junction cells. The performance of multi-junction cells is much more sensitive to the spectral irradiance of the illuminating source than that of single-junction cells. The design of high-efficiency multi-junction cells for space applications requires matching the optoelectronic properties of the junctions to the AM0 spectral irradiance. Unlike single-junction cells, it is not possible to carry out quantum efficiency measurements using only a monochromatic probe beam and determine the cell short-circuit current assuming linearity of the quantum efficiency. Additionally, current-voltage characteristics cannot be calculated from measurements under non-AM0 light sources using spectral-correction methods. There are reports in the literature on characterizing the performance of multi-junction cells by measuring and convoluting the quantum efficiency of each junction with the spectral irradiance; the technique is of limited value for the characterization of cell performance under AM0 power-generating conditions. We report the results of research to develop instrumentation and techniques for characterizing multi-junction solar cells for space applications. An integrated system is described which consists of a standard lamp, spectral radiometer, dual-source solar simulator, and personal-computer-based current-voltage and quantum efficiency equipment. The spectral radiometer is calibrated regularly using the tungsten-halogen standard lamp, which has a calibration based on NIST scales. The solar simulator produces the light bias beam for current-voltage and cell quantum efficiency measurements. The calibrated spectral radiometer is used to 'fit' the spectral irradiance of the dual-source solar simulator to WRL AM0 data. The quantum efficiency apparatus includes a monochromatic probe beam for measuring the absolute cell quantum efficiency at various voltage biases, including the voltage bias corresponding to the maximum-power point under AM0 light bias. The details of the procedures to 'fit' the spectral irradiance to AM0 will be discussed. An assessment of the role of the accuracy of the 'fit' of the spectral irradiance and probe beam intensity on measured cell characteristics will be presented. Quantum efficiencies were measured with both spectral light bias and AM0 light bias; the measurements show striking differences. Spectral irradiances were convoluted with cell quantum efficiencies to calculate cell currents as a function of voltage. The calculated currents compare with measured currents at the 1% level. Measurements on a variety of multi-junction cells will be presented. The dependence of defects in junctions on cell quantum efficiencies measured under light and voltage bias conditions will be presented. Comments will be made on issues related to standards for calibration and limitations of the instrumentation and techniques. Expeditious development of multi-junction solar cell technology for space presents challenges for cell characterization in the laboratory.
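
    The convolution mentioned above can be written, for each junction, as J = q ∫ QE(λ) E(λ) λ/(hc) dλ, with the smallest J limiting the series-connected stack. A minimal sketch with synthetic curves, not the calibrated AM0 data used in the paper:

    ```python
    import numpy as np

    Q = 1.602176634e-19      # elementary charge, C
    H = 6.62607015e-34       # Planck constant, J s
    C = 2.99792458e8         # speed of light, m/s

    def junction_current(wl_nm, qe, irradiance_w_m2_nm):
        """J = q * integral over wavelength of QE * photon flux (A/m^2)."""
        wl_m = wl_nm * 1e-9
        photon_flux = irradiance_w_m2_nm * wl_m / (H * C)   # photons / (m^2 s nm)
        return Q * np.trapz(qe * photon_flux, wl_nm)

    wl = np.linspace(300.0, 900.0, 601)                     # nm
    qe = np.clip((wl - 300.0) / 400.0, 0.0, 0.9)            # toy QE curve
    e_spectrum = 1.6 * np.exp(-((wl - 500.0) / 180.0) ** 2) # toy spectrum, W m^-2 nm^-1
    print(junction_current(wl, qe, e_spectrum), "A/m^2")
    ```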

  16. Measurement of HONO, HNCO, and other inorganic acids by negative-ion proton-transfer chemical-ionization mass spectrometry (NI-PT-CIMS): application to biomass burning emissions

    Treesearch

    J. M. Roberts; P. Veres; C. Warneke; J. A. Neuman; R. A. Washenfelder; S. S. Brown; M. Baasandorj; J. B. Burkholder; I. R. Burling; T. J. Johnson; R. J. Yokelson; J. de Gouw

    2010-01-01

    A negative-ion proton transfer chemical ionization mass spectrometric technique (NI-PT-CIMS), using acetate as the reagent ion, was applied to the measurement of volatile inorganic acids of atmospheric interest: hydrochloric (HCl), nitrous (HONO), nitric (HNO3), and isocyanic (HNCO) acids. Gas phase calibrations through the sampling inlet showed the method to be...

  17. Boresight Calibration of Construction Misalignments for 3D Scanners Built with a 2D Laser Rangefinder Rotating on Its Optical Center

    PubMed Central

    Morales, Jesús; Martínez, Jorge L.; Mandow, Anthony; Reina, Antonio J.; Pequeño-Boter, Alejandro; García-Cerezo, Alfonso

    2014-01-01

    Many applications, like mobile robotics, can profit from acquiring dense, wide-ranging and accurate 3D laser data. Off-the-shelf 2D scanners are commonly customized with an extra rotation as a low-cost, lightweight and low-power-demanding solution. Moreover, aligning the extra rotation axis with the optical center allows the 3D device to maintain the same minimum range as the 2D scanner and avoids offsets in computing Cartesian coordinates. The paper proposes a practical procedure to estimate construction misalignments based on a single scan taken from an arbitrary position in an unprepared environment that contains planar surfaces of unknown dimensions. Inherited measurement limitations from low-cost 2D devices prevent the estimation of very small translation misalignments, so the calibration problem reduces to obtaining boresight parameters. The distinctive approach with respect to previous plane-based intrinsic calibration techniques is the iterative maximization of both the flatness and the area of visible planes. Calibration results are presented for a case study. The method is currently being applied as the final stage in the production of a commercial 3D rangefinder. PMID:25347585

  18. Dynamic pressure sensor calibration techniques offering expanded bandwidth with increased resolution

    NASA Astrophysics Data System (ADS)

    Wisniewiski, David

    2015-03-01

    Advancements in the aerospace, defense and energy markets are being made possible by increasingly more sophisticated systems and sub-systems which rely upon critical information to be conveyed from the physical environment being monitored through ever more specialized, extreme environment sensing components. One sensing parameter of particular interest is dynamic pressure measurement. Crossing the boundary of all three markets (i.e. aerospace, defense and energy) is dynamic pressure sensing which is used in research and development of gas turbine technology, and subsequently embedded into a control loop used for long-term monitoring. Applications include quantifying the effects of aircraft boundary layer ingestion into the engine inlet to provide a reliable and robust design. Another application includes optimization of combustor dynamics by "listening" to the acoustic signature so that fuel-to-air mixture can be adjusted in real-time to provide cost operating efficiencies and reduced NOx emissions. With the vast majority of pressure sensors supplied today being calibrated either statically or "quasi" statically, the dynamic response characterization of the frequency dependent sensitivity (i.e. transfer function) of the pressure sensor is noticeably absent. The shock tube has been shown to be an efficient vehicle to provide frequency response of pressure sensors from extremely high frequencies down to 500 Hz. Recent development activity has lowered this starting frequency; thereby augmenting the calibration bandwidth with increased frequency resolution so that as the pressure sensor is used in an actual test application, more understanding of the physical measurement can be ascertained by the end-user.

  19. Assessment of MODIS On-Orbit Calibration Using a Deep Convective Cloud Technique

    NASA Technical Reports Server (NTRS)

    Mu, Qiaozhen; Wu, Aisheng; Chang, Tiejun; Angal, Amit; Link, Daniel; Xiong, Xiaoxiong; Doelling, David R.; Bhatt, Rajendra

    2016-01-01

    The MODerate Resolution Imaging Spectroradiometer (MODIS) sensors onboard the Terra and Aqua satellites are calibrated on-orbit with a solar diffuser (SD) for the reflective solar bands (RSB). The MODIS sensors are operating beyond their designed lifetime and hence present a major challenge to maintain the calibration accuracy. The degradation of the onboard SD is tracked by a solar diffuser stability monitor (SDSM) over a wavelength range from 0.41 to 0.94 micrometers. Therefore, any degradation of the SD beyond 0.94 micrometers cannot be captured by the SDSM. The uncharacterized degradation at wavelengths beyond this limit could adversely affect the Level 1B (L1B) product. To reduce the calibration uncertainties caused by the SD degradation, invariant Earth-scene targets are used to monitor and calibrate the MODIS L1B product. The use of deep convective clouds (DCCs) is one such method and is particularly significant for the short-wave infrared (SWIR) bands in assessing their long-term calibration stability. In this study, we use the DCC technique to assess the performance of the Terra and Aqua MODIS Collection-6 L1B for RSB 1, 3-7, and 26, with spectral coverage from 0.47 to 2.13 micrometers. Results show relatively stable trends in Terra and Aqua MODIS reflectance for most bands. Careful attention needs to be paid to Aqua band 1 and Terra bands 3 and 26, as their trends are larger than 1% during the study time period. We check the feasibility of using the DCC technique to assess the stability of MODIS bands 17-19. The assessment test on response versus scan angle (RVS) calibration shows a substantial trend difference for Aqua band 1 between different angles of incidence (AOIs). The DCC technique can be used to improve the RVS calibration in the future.
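
    A schematic version of a DCC trending analysis, assuming an 11-μm brightness-temperature threshold for DCC pixel selection and a monthly mean statistic (the published technique typically uses the mode of the DCC reflectance histogram); thresholds and data below are illustrative only:

    ```python
    import numpy as np

    def monthly_dcc_stat(reflectance, bt11, bt_thresh=205.0):
        """Mean reflectance of pixels colder than the DCC brightness-temperature cut."""
        dcc = reflectance[bt11 < bt_thresh]
        return np.mean(dcc) if dcc.size else np.nan

    def trend_percent_per_year(months, stats):
        ok = ~np.isnan(stats)
        slope, intercept = np.polyfit(months[ok] / 12.0, stats[ok], 1)
        return 100.0 * slope / intercept

    rng = np.random.default_rng(0)
    months = np.arange(120)                                  # ten years of monthly data
    stats = np.array([monthly_dcc_stat(0.9 - 2.0e-4 * m + 0.01 * rng.standard_normal(500),
                                       rng.uniform(190.0, 230.0, 500))
                      for m in months])
    print(trend_percent_per_year(months, stats))             # expected near -0.27 %/yr
    ```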

  20. Revised radiometric calibration technique for LANDSAT-4 Thematic Mapper data by the Canada Centre for Remote Sensing

    NASA Technical Reports Server (NTRS)

    Murphy, J.; Butlin, T.; Duff, P.; Fitzgerald, A.

    1984-01-01

    Observations of raw image data, raw radiometric calibration data, and background measurements extracted from the raw data streams on high-density tape reveal major shortcomings in a technique proposed by the Canada Centre for Remote Sensing in 1982 for the radiometric correction of TM data. Results are presented which correlate measurements of the DC background with variations in both the image data background and the calibration samples. The effect on both raw data and data corrected using the earlier proposed technique is explained, and the correction required for these factors as a function of individual scan line number for each detector is described. How the revised technique can be incorporated into an operational environment is demonstrated.

  1. Calibration of a thin metal foil for infrared imaging video bolometer to estimate the spatial variation of thermal diffusivity using a photo-thermal technique.

    PubMed

    Pandya, Shwetang N; Peterson, Byron J; Sano, Ryuichi; Mukai, Kiyofumi; Drapiko, Evgeny A; Alekseyev, Andrey G; Akiyama, Tsuyoshi; Itomi, Muneji; Watanabe, Takashi

    2014-05-01

    A thin metal foil is used as a broad band radiation absorber for the InfraRed imaging Video Bolometer (IRVB), which is a vital diagnostic for studying three-dimensional radiation structures from high temperature plasmas in the Large Helical Device. The two-dimensional (2D) heat diffusion equation of the foil needs to be solved numerically to estimate the radiation falling on the foil through a pinhole geometry. The thermal, physical, and optical properties of the metal foil are among the inputs to the code besides the spatiotemporal variation of temperature, for reliable estimation of the exhaust power from the plasma illuminating the foil. The foil being very thin and of considerable size, non-uniformities in these properties need to be determined by suitable calibration procedures. The graphite spray used for increasing the surface emissivity also contributes to a change in the thermal properties. This paper discusses the application of the thermographic technique for determining the spatial variation of the effective in-plane thermal diffusivity of the thin metal foil and graphite composite. The paper also discusses the advantages of this technique in the light of limitations and drawbacks presented by other calibration techniques being practiced currently. The technique is initially applied to a material of known thickness and thermal properties for validation and finally to thin foils of gold and platinum both with two different thicknesses. It is observed that the effect of the graphite layer on the estimation of the thermal diffusivity becomes more pronounced for thinner foils and the measured values are approximately 2.5-3 times lower than the literature values. It is also observed that the percentage reduction in thermal diffusivity due to the coating is lower for high thermal diffusivity materials such as gold. This fact may also explain, albeit partially, the higher sensitivity of the platinum foil as compared to gold.
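
    The inversion described above can be made concrete with the thin-foil heat balance ρ c t_f ∂T/∂t = q_rad + k t_f ∇²T (radiative and convective losses neglected), so the absorbed power density follows from the measured temperature movie once the spatially calibrated diffusivity α = k/(ρc) is known. A minimal sketch with assumed platinum-like properties, not the IRVB analysis code:

    ```python
    import numpy as np

    def laplacian(T, dx):
        """Five-point Laplacian (periodic boundaries used only for brevity)."""
        return (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
                np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4.0 * T) / dx**2

    def radiated_power_density(T_movie, dt, dx, alpha, rho, c, t_f):
        """q_rad = rho*c*t_f * (dT/dt - alpha * laplacian(T)), in W/m^2 on the foil."""
        frames = []
        for n in range(1, T_movie.shape[0] - 1):
            dTdt = (T_movie[n + 1] - T_movie[n - 1]) / (2.0 * dt)
            frames.append(rho * c * t_f * (dTdt - alpha * laplacian(T_movie[n], dx)))
        return np.array(frames)

    # Toy example: platinum-like properties, 2.5-um foil, 64x64-pixel IR frames.
    T = 300.0 + 0.01 * np.random.rand(10, 64, 64)
    q = radiated_power_density(T, dt=0.03, dx=1.0e-3, alpha=2.5e-5,
                               rho=21450.0, c=133.0, t_f=2.5e-6)
    print(q.shape)
    ```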

  3. Cross-correlation redshift calibration without spectroscopic calibration samples in DES Science Verification Data

    NASA Astrophysics Data System (ADS)

    Davis, C.; Rozo, E.; Roodman, A.; Alarcon, A.; Cawthon, R.; Gatti, M.; Lin, H.; Miquel, R.; Rykoff, E. S.; Troxel, M. A.; Vielzeuf, P.; Abbott, T. M. C.; Abdalla, F. B.; Allam, S.; Annis, J.; Bechtol, K.; Benoit-Lévy, A.; Bertin, E.; Brooks, D.; Buckley-Geer, E.; Burke, D. L.; Carnero Rosell, A.; Carrasco Kind, M.; Carretero, J.; Castander, F. J.; Crocce, M.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; Desai, S.; Diehl, H. T.; Doel, P.; Drlica-Wagner, A.; Fausti Neto, A.; Flaugher, B.; Fosalba, P.; Frieman, J.; García-Bellido, J.; Gaztanaga, E.; Gerdes, D. W.; Giannantonio, T.; Gruen, D.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; Jain, B.; James, D. J.; Jeltema, T.; Krause, E.; Kuehn, K.; Kuhlmann, S.; Kuropatkin, N.; Lahav, O.; Li, T. S.; Lima, M.; March, M.; Marshall, J. L.; Martini, P.; Melchior, P.; Ogando, R. L. C.; Plazas, A. A.; Romer, A. K.; Sanchez, E.; Scarpine, V.; Schindler, R.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, M.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Vikram, V.; Walker, A. R.; Wechsler, R. H.

    2018-06-01

    Galaxy cross-correlations with high-fidelity redshift samples hold the potential to precisely calibrate systematic photometric redshift uncertainties arising from the unavailability of complete and representative training and validation samples of galaxies. However, application of this technique in the Dark Energy Survey (DES) is hampered by the relatively low number density, small area, and modest redshift overlap between photometric and spectroscopic samples. We propose instead using photometric catalogues with reliable photometric redshifts for photo-z calibration via cross-correlations. We verify the viability of our proposal using redMaPPer clusters from the Sloan Digital Sky Survey (SDSS) to successfully recover the redshift distribution of SDSS spectroscopic galaxies. We demonstrate how to combine photo-z with cross-correlation data to calibrate photometric redshift biases while marginalizing over possible clustering bias evolution in either the calibration or unknown photometric samples. We apply our method to DES Science Verification (DES SV) data in order to constrain the photometric redshift distribution of a galaxy sample selected for weak lensing studies, constraining the mean of the tomographic redshift distributions to a statistical uncertainty of Δz ˜ ±0.01. We forecast that our proposal can, in principle, control photometric redshift uncertainties in DES weak lensing experiments at a level near the intrinsic statistical noise of the experiment over the range of redshifts where redMaPPer clusters are available. Our results provide strong motivation to launch a programme to fully characterize the systematic errors from bias evolution and photo-z shapes in our calibration procedure.
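
    For orientation, the basic clustering-redshift estimator underlying such cross-correlation calibrations recovers the unknown sample's n(z) from its angular cross-correlation with narrow reference bins, up to bias factors. The sketch below uses the simplest form, n_u(z) ∝ w_ur(z)/√(w_rr(z)), with synthetic inputs, and ignores the bias-evolution marginalization that the paper performs:

    ```python
    import numpy as np

    def cross_correlation_nz(w_ur, w_rr, dz):
        """n_u(z) proportional to w_ur(z) / sqrt(w_rr(z)), normalized to a PDF."""
        nz = w_ur / np.sqrt(np.maximum(w_rr, 1e-12))
        nz = np.clip(nz, 0.0, None)
        return nz / (nz.sum() * dz)

    z = np.arange(0.1, 1.1, 0.05)
    w_rr = 0.08 * np.ones_like(z)                        # toy reference auto-correlations
    true_nz = np.exp(-0.5 * ((z - 0.5) / 0.12) ** 2)
    w_ur = 0.02 * true_nz * np.sqrt(w_rr)                # toy cross-correlation amplitudes
    print(cross_correlation_nz(w_ur, w_rr, dz=0.05))     # recovers the Gaussian shape
    ```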

  4. Cross-correlation redshift calibration without spectroscopic calibration samples in DES Science Verification Data

    DOE PAGES

    Davis, C.; Rozo, E.; Roodman, A.; ...

    2018-03-26

    Galaxy cross-correlations with high-fidelity redshift samples hold the potential to precisely calibrate systematic photometric redshift uncertainties arising from the unavailability of complete and representative training and validation samples of galaxies. However, application of this technique in the Dark Energy Survey (DES) is hampered by the relatively low number density, small area, and modest redshift overlap between photometric and spectroscopic samples. We propose instead using photometric catalogs with reliable photometric redshifts for photo-z calibration via cross-correlations. We verify the viability of our proposal using redMaPPer clusters from the Sloan Digital Sky Survey (SDSS) to successfully recover the redshift distribution of SDSS spectroscopic galaxies. We demonstrate how to combine photo-z with cross-correlation data to calibrate photometric redshift biases while marginalizing over possible clustering bias evolution in either the calibration or unknown photometric samples. We apply our method to DES Science Verification (DES SV) data in order to constrain the photometric redshift distribution of a galaxy sample selected for weak lensing studies, constraining the mean of the tomographic redshift distributions to a statistical uncertainty ofmore » $$\\Delta z \\sim \\pm 0.01$$. We forecast that our proposal can in principle control photometric redshift uncertainties in DES weak lensing experiments at a level near the intrinsic statistical noise of the experiment over the range of redshifts where redMaPPer clusters are available. Here, our results provide strong motivation to launch a program to fully characterize the systematic errors from bias evolution and photo-z shapes in our calibration procedure.« less

  5. Cross-correlation redshift calibration without spectroscopic calibration samples in DES Science Verification Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, C.; Rozo, E.; Roodman, A.

    Galaxy cross-correlations with high-fidelity redshift samples hold the potential to precisely calibrate systematic photometric redshift uncertainties arising from the unavailability of complete and representative training and validation samples of galaxies. However, application of this technique in the Dark Energy Survey (DES) is hampered by the relatively low number density, small area, and modest redshift overlap between photometric and spectroscopic samples. We propose instead using photometric catalogs with reliable photometric redshifts for photo-z calibration via cross-correlations. We verify the viability of our proposal using redMaPPer clusters from the Sloan Digital Sky Survey (SDSS) to successfully recover the redshift distribution of SDSS spectroscopic galaxies. We demonstrate how to combine photo-z with cross-correlation data to calibrate photometric redshift biases while marginalizing over possible clustering bias evolution in either the calibration or unknown photometric samples. We apply our method to DES Science Verification (DES SV) data in order to constrain the photometric redshift distribution of a galaxy sample selected for weak lensing studies, constraining the mean of the tomographic redshift distributions to a statistical uncertainty of Δz ∼ ±0.01. We forecast that our proposal can in principle control photometric redshift uncertainties in DES weak lensing experiments at a level near the intrinsic statistical noise of the experiment over the range of redshifts where redMaPPer clusters are available. Here, our results provide strong motivation to launch a program to fully characterize the systematic errors from bias evolution and photo-z shapes in our calibration procedure.

  6. Study of deuteron induced reactions on natural iron and copper and their use for monitoring beam parameters and for thin layer activation technique

    NASA Astrophysics Data System (ADS)

    Takács, S.; Tárkányi, F.; Sonck, M.; Hermanne, A.; Sudár, S.

    1997-02-01

    Excitation functions of deuteron-induced nuclear reactions on natural iron and copper have been studied in the framework of a systematic investigation of charged-particle-induced nuclear reactions on metals for different applications. The excitation functions were measured up to 20 MeV deuteron energy using the stacked-foil technique and the activation method. The measured data and the evaluated literature data show that some reactions can be recommended for monitoring deuteron beams, and the excitation functions can be used to determine calibration curves for the Thin Layer Activation Technique (TLA). Cross sections calculated with the statistical model code STAPRE, taking preequilibrium effects into account, are in reasonable agreement with the experimental results.

  7. Calibration of the clumped isotope thermometer for planktic foraminifers

    NASA Astrophysics Data System (ADS)

    Meinicke, N.; Ho, S. L.; Nürnberg, D.; Tripati, A. K.; Jansen, E.; Dokken, T.; Schiebel, R.; Meckler, A. N.

    2017-12-01

    Many proxies for past ocean temperature suffer from secondary influences or require species-specific calibrations that might not be applicable on longer time scales. Being thermodynamically based and thus independent of seawater composition, clumped isotopes in carbonates (Δ47) have the potential to circumvent such issues affecting other proxies and provide reliable temperature reconstructions far back in time and in unknown settings. Although foraminifers are commonly used for paleoclimate reconstructions, their use for clumped isotope thermometry has been hindered so far by large sample-size requirements. Existing calibration studies suggest that data from a variety of foraminifer species agree with synthetic carbonate calibrations (Tripati, et al., GCA, 2010; Grauel, et al., GCA, 2013). However, these studies did not include a sufficient number of samples to fully assess the existence of species-specific effects, and data coverage was especially sparse in the low temperature range (<10 °C). To expand the calibration database of clumped isotopes in planktic foraminifers, especially for colder temperatures (<10°C), we present new Δ47 data analysed on 14 species of planktic foraminifers from 13 sites, covering a temperature range of 1-29 °C. Our method allows for analysis of smaller sample sizes (3-5 mg), hence also the measurement of multiple species from the same samples. We analyzed surface-dwelling (0-50 m) species and deep-dwelling (habitat depth up to several hundred meters) planktic foraminifers from the same sites to evaluate species-specific effects and to assess the feasibility of temperature reconstructions for different water depths. We also assess the effects of different techniques in estimating foraminifer calcification temperature on the calibration. Finally, we compare our calibration to existing clumped isotope calibrations. Our results confirm previous findings that indicate no species-specific effects on the Δ47-temperature relationship measured in planktic foraminifers.
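
    Clumped-isotope calibrations of this kind are conventionally expressed as a linear relation between Δ47 and 1/T² (T in kelvin). The short sketch below, with invented numbers rather than the study's data, shows how such a regression is fitted and then inverted to recover a calcification temperature.

```python
import numpy as np

# Invented calibration points; the study's actual Delta47 values are not reproduced here.
T_C = np.array([1.5, 4.0, 10.0, 18.0, 29.0])          # calcification temperatures, deg C
d47 = np.array([0.745, 0.737, 0.716, 0.693, 0.662])   # Delta47, per mil (synthetic)

x = 1.0 / (T_C + 273.15) ** 2                          # regressor: 1/T^2, T in kelvin
slope, intercept = np.polyfit(x, d47, 1)

def temperature_from_d47(d47_sample):
    """Invert Delta47 = slope / T**2 + intercept; returns temperature in deg C."""
    return np.sqrt(slope / (d47_sample - intercept)) - 273.15

print(temperature_from_d47(0.716))   # recovers roughly 10 deg C for the synthetic data
```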

  8. Application of the precipitation-runoff model in the Warrior coal field, Alabama

    USGS Publications Warehouse

    Kidd, Robert E.; Bossong, C.R.

    1987-01-01

    A deterministic precipitation-runoff model, the Precipitation-Runoff Modeling System, was applied in two small basins located in the Warrior coal field, Alabama. Each basin has distinct geologic, hydrologic, and land-use characteristics. Bear Creek basin (15.03 square miles) is undisturbed, is underlain almost entirely by consolidated coal-bearing rocks of Pennsylvanian age (Pottsville Formation), and is drained by an intermittent stream. Turkey Creek basin (6.08 square miles) contains a surface coal mine and is underlain by both the Pottsville Formation and unconsolidated clay, sand, and gravel deposits of Cretaceous age (Coker Formation). Aquifers in the Coker Formation sustain flow through extended rainless periods. Preliminary daily and storm calibrations were developed for each basin. Initial parameter and variable values were determined according to techniques recommended in the user's manual for the modeling system and through field reconnaissance. Parameters with meaningful sensitivity were identified and adjusted to match hydrograph shapes and to compute realistic water year budgets. When the developed calibrations were applied to data exclusive of the calibration period as a verification exercise, results were comparable to those for the calibration period. The model calibrations included preliminary parameter values for the various categories of geology and land use in each basin. The parameter values for areas underlain by the Pottsville Formation in the Bear Creek basin were transferred directly to similar areas in the Turkey Creek basin, and these parameter values were held constant throughout the model calibration. Parameter values for all geologic and land-use categories addressed in the two calibrations can probably be used in ungaged basins where similar conditions exist. The parameter transfer worked well, as a good calibration was obtained for Turkey Creek basin.

  9. Comparison of three-way and four-way calibration for the real-time quantitative analysis of drug hydrolysis in complex dynamic samples by excitation-emission matrix fluorescence.

    PubMed

    Yin, Xiao-Li; Gu, Hui-Wen; Liu, Xiao-Lu; Zhang, Shan-Hui; Wu, Hai-Long

    2018-03-05

    Multiway calibration in combination with spectroscopic technique is an attractive tool for online or real-time monitoring of target analyte(s) in complex samples. However, how to choose a suitable multiway calibration method for the resolution of spectroscopic-kinetic data is a troubling problem in practical application. In this work, for the first time, three-way and four-way fluorescence-kinetic data arrays were generated during the real-time monitoring of the hydrolysis of irinotecan (CPT-11) in human plasma by excitation-emission matrix fluorescence. Alternating normalization-weighted error (ANWE) and alternating penalty trilinear decomposition (APTLD) were used as three-way calibration for the decomposition of the three-way kinetic data array, whereas alternating weighted residual constraint quadrilinear decomposition (AWRCQLD) and alternating penalty quadrilinear decomposition (APQLD) were applied as four-way calibration to the four-way kinetic data array. The quantitative results of the two kinds of calibration models were fully compared from the perspective of predicted real-time concentrations, spiked recoveries of initial concentration, and analytical figures of merit. The comparison study demonstrated that both three-way and four-way calibration models could achieve real-time quantitative analysis of the hydrolysis of CPT-11 in human plasma under certain conditions. However, it was also found that both of them possess some critical advantages and shortcomings during the process of dynamic analysis. The conclusions obtained in this paper can provide some helpful guidance for the reasonable selection of multiway calibration models to achieve the real-time quantitative analysis of target analyte(s) in complex dynamic systems. Copyright © 2017 Elsevier B.V. All rights reserved.
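
    The decomposition step underlying such three-way calibration methods is a trilinear (PARAFAC-type) model, X[i,j,k] ≈ Σ_r A[i,r]·B[j,r]·C[k,r], whose mode factors correspond to excitation spectra, emission spectra, and relative concentrations. The sketch below is a plain alternating-least-squares fit of that model in NumPy; it is a generic stand-in, not the ANWE, APTLD, AWRCQLD, or APQLD algorithms used in the paper, which add weighting or penalty terms and, in the four-way case, a kinetic mode.

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product of A (I x R) and B (J x R) -> (I*J x R)."""
    return np.einsum("ir,jr->ijr", A, B).reshape(A.shape[0] * B.shape[0], A.shape[1])

def cp_als(X, rank, n_iter=200, seed=0):
    """Fit X[i,j,k] ~ sum_r A[i,r]*B[j,r]*C[k,r] by alternating least squares."""
    I, J, K = X.shape
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    X1 = X.reshape(I, J * K)                       # mode-1 unfolding
    X2 = np.moveaxis(X, 1, 0).reshape(J, I * K)    # mode-2 unfolding
    X3 = np.moveaxis(X, 2, 0).reshape(K, I * J)    # mode-3 unfolding
    for _ in range(n_iter):                        # no convergence check, for brevity
        A = X1 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = X2 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = X3 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C   # columns of C carry the relative analyte contributions per sample
```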

  10. Calibration and verification of thermographic cameras for geometric measurements

    NASA Astrophysics Data System (ADS)

    Lagüela, S.; González-Jorge, H.; Armesto, J.; Arias, P.

    2011-03-01

    Infrared thermography is a technique with an increasing degree of development and applications. Quality assessment in the measurements performed with the thermal cameras should be achieved through metrology calibration and verification. Infrared cameras acquire temperature and geometric information, although calibration and verification procedures are only usual for thermal data. Black bodies are used for these purposes. Moreover, the geometric information is important for many fields as architecture, civil engineering and industry. This work presents a calibration procedure that allows the photogrammetric restitution and a portable artefact to verify the geometric accuracy, repeatability and drift of thermographic cameras. These results allow the incorporation of this information into the quality control processes of the companies. A grid based on burning lamps is used for the geometric calibration of thermographic cameras. The artefact designed for the geometric verification consists of five delrin spheres and seven cubes of different sizes. Metrology traceability for the artefact is obtained from a coordinate measuring machine. Two sets of targets with different reflectivity are fixed to the spheres and cubes to make data processing and photogrammetric restitution possible. Reflectivity was the chosen material propriety due to the thermographic and visual cameras ability to detect it. Two thermographic cameras from Flir and Nec manufacturers, and one visible camera from Jai are calibrated, verified and compared using calibration grids and the standard artefact. The calibration system based on burning lamps shows its capability to perform the internal orientation of the thermal cameras. Verification results show repeatability better than 1 mm for all cases, being better than 0.5 mm for the visible one. As it must be expected, also accuracy appears higher in the visible camera, and the geometric comparison between thermographic cameras shows slightly better results for the Nec camera.

  11. 3D hyperpolarized C-13 EPI with calibrationless parallel imaging

    NASA Astrophysics Data System (ADS)

    Gordon, Jeremy W.; Hansen, Rie B.; Shin, Peter J.; Feng, Yesu; Vigneron, Daniel B.; Larson, Peder E. Z.

    2018-04-01

    With the translation of metabolic MRI with hyperpolarized 13C agents into the clinic, imaging approaches will require large volumetric FOVs to support clinical applications. Parallel imaging techniques will be crucial to increasing volumetric scan coverage while minimizing RF requirements and temporal resolution. Calibrationless parallel imaging approaches are well-suited for this application because they eliminate the need to acquire coil profile maps or auto-calibration data. In this work, we explored the utility of a calibrationless parallel imaging method (SAKE) and corresponding sampling strategies to accelerate and undersample hyperpolarized 13C data using 3D blipped EPI acquisitions and multichannel receive coils, and demonstrated its application in a human study of [1-13C]pyruvate metabolism.

  12. In Vivo Mitochondrial Oxygen Tension Measured by a Delayed Fluorescence Lifetime Technique

    PubMed Central

    Mik, Egbert G.; Johannes, Tanja; Zuurbier, Coert J.; Heinen, Andre; Houben-Weerts, Judith H. P. M.; Balestra, Gianmarco M.; Stap, Jan; Beek, Johan F.; Ince, Can

    2008-01-01

    Mitochondrial oxygen tension (mitoPO2) is a key parameter for cellular function, which is considered to be affected under various pathophysiological circumstances. Although many techniques for assessing in vivo oxygenation are available, no technique for measuring mitoPO2 in vivo exists. Here we report in vivo measurement of mitoPO2 and the recovery of mitoPO2 histograms in rat liver by a novel optical technique under normal and pathological circumstances. The technique is based on oxygen-dependent quenching of the delayed fluorescence lifetime of protoporphyrin IX. Application of 5-aminolevulinic acid enhanced mitochondrial protoporphyrin IX levels and induced oxygen-dependent delayed fluorescence in various tissues, without affecting mitochondrial respiration. Using fluorescence microscopy, we demonstrate in isolated hepatocytes that the signal is of mitochondrial origin. The delayed fluorescence lifetime was calibrated in isolated hepatocytes and isolated perfused livers. Ultimately, the technique was applied to measure mitoPO2 in rat liver in vivo. The results demonstrate mitoPO2 values of ∼30–40 mmHg. mitoPO2 was highly sensitive to small changes in inspired oxygen concentration around atmospheric oxygen level. Ischemia-reperfusion interventions showed altered mitoPO2 distribution, which flattened overall compared to baseline conditions. The reported technology is scalable from microscopic to macroscopic applications, and its reliance on an endogenous compound greatly enhances its potential field of applications. PMID:18641065
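
    Oxygen-dependent quenching of a luminescence lifetime is commonly described by the Stern-Volmer relation, 1/τ = 1/τ0 + kq·pO2; the hepatocyte and perfused-liver calibrations described above would supply the constants for protoporphyrin IX. The sketch below inverts that relation with placeholder constants (not the paper's calibrated values) purely to illustrate the conversion from measured lifetime to mitoPO2.

```python
# Placeholder Stern-Volmer constants (assumed, not the paper's calibration).
TAU0_US = 800.0    # delayed-fluorescence lifetime at zero oxygen, microseconds
KQ = 400.0         # quenching constant, 1/(s * mmHg)

def mito_po2_from_lifetime(tau_us, tau0_us=TAU0_US, kq=KQ):
    """Invert 1/tau = 1/tau0 + kq * pO2 to obtain pO2 in mmHg."""
    tau = tau_us * 1e-6
    tau0 = tau0_us * 1e-6
    return (1.0 / tau - 1.0 / tau0) / kq

print(mito_po2_from_lifetime(150.0))   # example: a 150 us measured lifetime
```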

  13. A robust calibration technique for acoustic emission systems based on momentum transfer from a ball drop

    USGS Publications Warehouse

    McLaskey, Gregory C.; Lockner, David A.; Kilgore, Brian D.; Beeler, Nicholas M.

    2015-01-01

    We describe a technique to estimate the seismic moment of acoustic emissions and other extremely small seismic events. Unlike previous calibration techniques, it does not require modeling of the wave propagation, sensor response, or signal conditioning. Rather, this technique calibrates the recording system as a whole and uses a ball impact as a reference source or empirical Green's function. To correctly apply this technique, we develop mathematical expressions that link the seismic moment $M_0$ of internal seismic sources (i.e., earthquakes and acoustic emissions) to the impulse, or change in momentum $\Delta p$, of externally applied seismic sources (i.e., meteor impacts or, in this case, ball impact). We find that, at low frequencies, moment and impulse are linked by a constant, which we call the force-moment-rate scale factor $C_{F\dot{M}} = M_0/\Delta p$. This constant is equal to twice the speed of sound in the material from which the seismic sources were generated. Next, we demonstrate the calibration technique on two different experimental rock mechanics facilities. The first example is a saw-cut cylindrical granite sample that is loaded in a triaxial apparatus at 40 MPa confining pressure. The second example is a 2 m long fault cut in a granite sample and deformed in a large biaxial apparatus at lower stress levels. Using the empirical calibration technique, we are able to determine absolute source parameters including the seismic moment, corner frequency, stress drop, and radiated energy of these magnitude −2.5 to −7 seismic events.
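
    A worked example of the scaling described above, with assumed numbers: the ball-drop impulse is the momentum change at impact, and the moment-equivalent of the impact follows from the force-moment-rate scale factor $C_{F\dot{M}} = M_0/\Delta p = 2c$. The ball parameters, wave speed, and spectral ratio below are illustrative, not values from these experiments.

```python
import numpy as np

# All numbers are assumptions for illustration, not values from the experiments.
m = 1.0e-3                      # ball mass, kg
h_drop, h_reb = 0.30, 0.20      # drop and rebound heights, m
g = 9.81
v_in = np.sqrt(2 * g * h_drop)  # impact speed, m/s
v_out = np.sqrt(2 * g * h_reb)  # rebound speed, m/s
delta_p = m * (v_in + v_out)    # impulse delivered to the sample, N*s

c = 5500.0                      # assumed wave speed in granite, m/s
C_FM = 2.0 * c                  # force-moment-rate scale factor, M0 / delta_p
M0_ball = C_FM * delta_p        # moment-equivalent of the ball impact, N*m

# An AE event whose low-frequency spectral plateau is `ratio` times the ball-drop
# plateau then has moment ratio * M0_ball (the empirical Green's function step).
ratio = 1.0e-2
print(f"ball impulse {delta_p:.2e} N*s -> reference moment {M0_ball:.2e} N*m, "
      f"AE moment {ratio * M0_ball:.2e} N*m")
```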

  14. A practical approach to spectral calibration of short wavelength infrared hyper-spectral imaging systems

    NASA Astrophysics Data System (ADS)

    Bürmen, Miran; Pernuš, Franjo; Likar, Boštjan

    2010-02-01

    Near-infrared spectroscopy is a promising, rapidly developing, reliable and noninvasive technique, used extensively in biomedicine and in the pharmaceutical industry. With the introduction of acousto-optic tunable filters (AOTF) and highly sensitive InGaAs focal plane sensor arrays, real-time high resolution hyper-spectral imaging has become feasible for a number of new biomedical in vivo applications. However, due to the specificity of the AOTF technology and lack of spectral calibration standardization, maintaining long-term stability and compatibility of the acquired hyper-spectral images across different systems is still a challenging problem. Efficiently solving both is essential as the majority of methods for analysis of hyper-spectral images rely on a priori knowledge extracted from large spectral databases, serving as the basis for reliable qualitative or quantitative analysis of various biological samples. In this study, we propose and evaluate fast and reliable spectral calibration of hyper-spectral imaging systems in the short wavelength infrared spectral region. The proposed spectral calibration method is based on light sources or materials, exhibiting distinct spectral features, which enable robust non-rigid registration of the acquired spectra. The calibration accounts for all of the components of a typical hyper-spectral imaging system such as AOTF, light source, lens and optical fibers. The obtained results indicated that practical, fast and reliable spectral calibration of hyper-spectral imaging systems is possible, thereby assuring long-term stability and inter-system compatibility of the acquired hyper-spectral images.
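
    A much-simplified stand-in for the calibration idea: locate known spectral features of a reference source in the raw AOTF scan and fit a smooth map from tuning index to wavelength. The paper performs non-rigid registration of whole spectra; the low-order polynomial through matched peaks below, with invented numbers, only illustrates the principle.

```python
import numpy as np

# Invented reference features: AOTF tuning steps where known peaks were located,
# and their reference wavelengths in nm.
tuning_index = np.array([112.0, 348.0, 610.0, 905.0])
ref_wavelength = np.array([1130.0, 1385.0, 1680.0, 1945.0])

coeffs = np.polyfit(tuning_index, ref_wavelength, deg=2)   # smooth index -> nm map
wavelength_axis = np.polyval(coeffs, np.arange(1024))      # calibrated axis for every band
```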

  15. Adaptive on-line calibration for around-view monitoring system using between-camera homography estimation

    NASA Astrophysics Data System (ADS)

    Lim, Sungsoo; Lee, Seohyung; Kim, Jun-geon; Lee, Daeho

    2018-01-01

    The around-view monitoring (AVM) system is one of the major applications of advanced driver assistance systems and intelligent transportation systems. We propose an on-line calibration method that can compensate for misalignments in AVM systems. Most AVM systems use fisheye undistortion, inverse perspective transformation, and geometrical registration methods. To perform these procedures, the parameters for each process must be known; the procedure by which the parameters are estimated is referred to as the initial calibration. However, using only the initial calibration data, we cannot compensate for misalignments caused by changes in a car's equilibrium. Moreover, even small changes such as tire pressure levels, passenger weight, or road conditions can affect a car's equilibrium. Therefore, to compensate for this misalignment, additional techniques are necessary, specifically an on-line calibration method. On-line calibration can recalculate homographies, which can correct any degree of misalignment using the unique features of ordinary parking lanes. To extract features from the parking lanes, this method uses corner detection and a pattern matching algorithm. From the extracted features, homographies are estimated using random sample consensus and parameter estimation. Finally, the misaligned epipolar geometries are compensated via the estimated homographies. Thus, the proposed method can render image planes parallel to the ground. This method does not require any designated patterns and can be used whenever cars are placed in a parking lot. The experimental results show the robustness and efficiency of the method.
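
    The core estimation step, matched features plus RANSAC homography fitting, can be sketched with OpenCV as below. The paper extracts parking-lane corners with a dedicated detector and pattern matching; generic ORB features stand in here, and the function name and thresholds are illustrative rather than the authors' implementation.

```python
import cv2
import numpy as np

def refine_homography(prev_img, curr_img):
    """Estimate a ground-plane homography between two bird's-eye view images."""
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(prev_img, None)
    k2, d2 = orb.detectAndCompute(curr_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC rejects mismatched features before the homography is re-estimated.
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H, inliers
```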

  16. Determination of Flavonoids in Wine by High Performance Liquid Chromatography

    NASA Astrophysics Data System (ADS)

    da Queija, Celeste; Queirós, M. A.; Rodrigues, Ligia M.

    2001-02-01

    The experiment presented is an application of HPLC to the analysis of flavonoids in wines, designed for students of instrumental methods. It is done in two successive 4-hour laboratory sessions. While the hydrolysis of the wines is in progress, the students prepare the calibration curves with standard solutions of flavonoids and calculate the regression lines and correlation coefficients. During the second session they analyze the hydrolyzed wine samples and calculate the concentrations of the flavonoids using the calibration curves obtained earlier. This laboratory work is very attractive to students because they deal with a common daily product whose components are reported to have preventive and therapeutic effects. Furthermore, students can execute preparative work and apply a more elaborate technique that is nowadays an indispensable tool in instrumental analysis.
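
    The calibration-curve arithmetic the students perform maps directly onto a least-squares line fit. The sketch below uses invented standard concentrations and peak areas for one hypothetical flavonoid; the numbers are not from the experiment.

```python
import numpy as np
from scipy import stats

# Hypothetical standards for one flavonoid; concentrations in mg/L, peak areas in a.u.
conc = np.array([2.0, 5.0, 10.0, 20.0, 40.0])
area = np.array([15.1, 37.8, 76.0, 151.9, 303.5])

fit = stats.linregress(conc, area)
print(f"slope = {fit.slope:.3f}, intercept = {fit.intercept:.3f}, r = {fit.rvalue:.4f}")

# Concentration of an unknown (hydrolyzed wine) sample from its measured peak area
unknown_area = 120.0
unknown_conc = (unknown_area - fit.intercept) / fit.slope
print(f"estimated concentration = {unknown_conc:.1f} mg/L")
```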

  17. Coal rib response during bench mining: A case study

    PubMed Central

    Sears, Morgan M.; Rusnak, John; Van Dyke, Mark; Rashed, Gamal; Mohamed, Khaled; Sloan, Michael

    2018-01-01

    In 2016, room-and-pillar mining provided nearly 40% of underground coal production in the United States. Over the past decade, rib falls have resulted in 12 fatalities, representing 28% of the ground fall fatalities in U.S. underground coal mines. Nine of these 12 fatalities (75%) have occurred in room-and-pillar mines. The objective of this research is to study the geomechanics of bench room-and-pillar mining and the associated response of high pillar ribs at overburden depths greater than 300 m. This paper provides a definition of the bench technique, the pillar response due to loading, observational data for a case history, a calibrated numerical model of the observed rib response, and application of this calibrated model to a second site. PMID:29862125

  18. The NPOESS Community Collaborative Calibration/Validation Program for the NPOESS Preparatory Project

    NASA Astrophysics Data System (ADS)

    Kilcoyne, H.; Feeley, J.; Guenther, B.; Hoffman, C. W.; Reed, B.; St. Germain, K.; Zhou, L.; Plonski, M.; Hauss, B.

    2009-12-01

    The National Polar-orbiting Operational Environmental Satellite System (NPOESS) Preparatory Project (NPP) Calibration and Validation (Cal/Val) team is currently executing pre-launch activities and planning post-launch activities to efficiently integrate the NPOESS Sensor Data Records (SDRs) and Environmental Data Records (EDRs) into Customer applications to reduce risk in achieving NPOESS Mission Success. The NPP Cal/Val Team, led by the Integrated Program Office (IPO), includes members from the Contractor team producing the data products and subject matter experts from the Customer and User communities, bringing together the expertise with the production algorithms, product use, and science community. This presentation will highlight the progress made in the past year in defining the post-launch activity schedule, involvement of the science and operational data users, and techniques and correlative data used.

  19. [Applications of near infrared reflectance spectroscopy technique to determination of forage mycotoxins].

    PubMed

    Xu, Qing-Fang; Han, Jian-Guo; Yu, Zhu; Yue, Wen-Bin

    2010-05-01

    The near infrared reflectance spectroscopy technique (NIRS) has been applied in many fields, such as agriculture, food, chemistry, and medicine, owing to its rapid, effective, non-destructive, and on-line characteristics. Fungal invasion of forage materials during processing and storage generates mycotoxins, which are harmful to people and animals through the food chain. Conventional determination of mycotoxins involves elaborate pretreatment, such as milling, extraction, and chromatography, followed by analysis such as enzyme-linked immunosorbent assay, high-performance liquid chromatography, or thin-layer chromatography. The authors anticipate that high-precision, low-detection-limit spectrometers, together with improved software and calibration models for mycotoxin determination, will enable fast and accurate qualitative and quantitative measurement of mycotoxins, providing a basis for the sound processing and utilization of forage and promoting the application of NIRS to livestock product safety.

  20. 2D laser-collision induced fluorescence in low-pressure argon discharges

    DOE PAGES

    Barnat, E. V.; Weatherford, B. R.

    2015-09-25

    The development and application of the laser-collision induced fluorescence (LCIF) diagnostic technique is presented for use in interrogating argon plasma discharges. Key atomic states of argon utilized for the LCIF method are identified. A simplified two-state collisional radiative model is then used to establish scaling relations between the LCIF, electron density, and reduced electric fields (E/N). The procedure used to generate, detect and calibrate the LCIF in controlled plasma environments is discussed in detail. LCIF emanating from an argon discharge is then presented for electron densities spanning 10⁹ e cm⁻³ to 10¹² e cm⁻³ and reduced electric fields spanning 0.1 Td to 40 Td. Lastly, application of the LCIF technique for measuring the spatial distribution of both electron densities and reduced electric field is demonstrated.

  1. Daytime sky polarization calibration limitations

    NASA Astrophysics Data System (ADS)

    Harrington, David M.; Kuhn, Jeffrey R.; Ariste, Arturo López

    2017-01-01

    The daytime sky has recently been demonstrated as a useful calibration tool for deriving polarization cross-talk properties of large astronomical telescopes. The Daniel K. Inouye Solar Telescope and other large telescopes under construction can benefit from precise polarimetric calibration of large mirrors. Several atmospheric phenomena and instrumental errors potentially limit the technique's accuracy. At the 3.67-m AEOS telescope on Haleakala, we performed a large observing campaign with the HiVIS spectropolarimeter to identify limitations and develop algorithms for extracting consistent calibrations. Effective sampling of the telescope optical configurations and filtering of data for several derived parameters provide robustness to the derived Mueller matrix calibrations. Second-order scattering models of the sky show that this method is relatively insensitive to multiple-scattering in the sky, provided calibration observations are done in regions of high polarization degree. The technique is also insensitive to assumptions about telescope-induced polarization, provided the mirror coatings are highly reflective. Zemax-derived polarization models show agreement between the functional dependence of polarization predictions and the corresponding on-sky calibrations.

  2. Spatial calibration of an optical see-through head mounted display

    PubMed Central

    Gilson, Stuart J.; Fitzgibbon, Andrew W.; Glennerster, Andrew

    2010-01-01

    We present here a method for calibrating an optical see-through Head Mounted Display (HMD) using techniques usually applied to camera calibration (photogrammetry). Using a camera placed inside the HMD to take pictures simultaneously of a tracked object and features in the HMD display, we could exploit established camera calibration techniques to recover both the intrinsic and extrinsic properties of the HMD (width, height, focal length, optic centre and principal ray of the display). Our method gives low re-projection errors and, unlike existing methods, involves no time-consuming and error-prone human measurements, nor any prior estimates about the HMD geometry. PMID:18599125
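
    Because the method reduces to standard photogrammetric camera calibration on correspondences between tracked 3D features and their appearance in images taken through the HMD optics, OpenCV's calibration routine can serve as a minimal sketch. The data-collection details (a camera placed inside the HMD viewing both the display and the tracked object) are outside this snippet, and the function name is an illustrative placeholder.

```python
import cv2

def calibrate_hmd(object_points, image_points, image_size):
    """Recover display intrinsics and extrinsics from point correspondences.

    object_points: list of (N, 3) float32 arrays, tracked-object feature positions
    image_points:  list of (N, 2) float32 arrays, where those features appear in
                   photos taken by a camera placed inside the HMD
    image_size:    (width, height) of the captured images
    """
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        object_points, image_points, image_size, None, None)
    # K holds focal length and optic centre (intrinsics); rvecs/tvecs give the
    # pose of the display relative to the tracked object (extrinsics) per view.
    return rms, K, dist, rvecs, tvecs
```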

  3. Computer vision applications for coronagraphic optical alignment and image processing.

    PubMed

    Savransky, Dmitry; Thomas, Sandrine J; Poyneer, Lisa A; Macintosh, Bruce A

    2013-05-10

    Modern coronagraphic systems require very precise alignment between optical components and can benefit greatly from automated image processing. We discuss three techniques commonly employed in the fields of computer vision and image analysis as applied to the Gemini Planet Imager, a new facility instrument for the Gemini South Observatory. We describe how feature extraction and clustering methods can be used to aid in automated system alignment tasks, and also present a search algorithm for finding regular features in science images used for calibration and data processing. Along with discussions of each technique, we present our specific implementation and show results of each one in operation.

  4. OSL technique for studies of jasper samples

    NASA Astrophysics Data System (ADS)

    Teixeira, Maria Inês; Caldas, Linda V. E.

    2014-02-01

    In this work, jasper samples (green, red, brown, ocean, and striped) were studied with respect to their optically stimulated luminescence (OSL) dosimetric properties. Since 2000, the radiation metrology group of IPEN has studied different stones as new materials for application in high-dose dosimetry. The jasper samples were exposed to different radiation doses, using the Gamma-cell 220 system (60Co) of IPEN. Calibration curves were obtained for the jasper samples between 50 Gy and 300 kGy. The reproducibility of the OSL response and the lower detection doses were determined. All five types of jasper samples showed their usefulness as irradiation indicators and as high-dose dosimeters, using the OSL technique.

  5. Microwave remote sensing: Active and passive. Volume 1 - Microwave remote sensing fundamentals and radiometry

    NASA Technical Reports Server (NTRS)

    Ulaby, F. T.; Moore, R. K.; Fung, A. K.

    1981-01-01

    The three components of microwave remote sensing (sensor-scene interaction, sensor design, and measurement techniques), and the applications to geoscience are examined. The history of active and passive microwave sensing is reviewed, along with fundamental principles of electromagnetic wave propagation, antennas, and microwave interaction with atmospheric constituents. Radiometric concepts are reviewed, particularly for measurement problems for atmospheric and terrestrial sources of natural radiation. Particular attention is given to the emission by atmospheric gases, clouds, and rain as described by the radiative transfer function. Finally, the operation and performance characteristics of radiometer receivers are discussed, particularly for measurement precision, calibration techniques, and imaging considerations.

  6. Initial Radiometric Calibration of the AWiFS using Vicarious Calibration Techniques

    NASA Technical Reports Server (NTRS)

    Pagnutti, Mary; Thome, Kurtis; Aaron, David; Leigh, Larry

    2006-01-01

    NASA SSC maintains four ASD FieldSpec FR spectroradiometers, used 1) as laboratory transfer radiometers and 2) for ground surface reflectance measurements during V&V field collection activities. Radiometric calibration employs a NIST-calibrated integrating sphere, which serves as a source of known spectral radiance. Spectral calibration is performed with laser and pen-lamp illumination of the integrating sphere. Environmental testing includes temperature stability tests performed in an environmental chamber.

  7. Calibration, reconstruction, and rendering of cylindrical millimeter-wave image data

    NASA Astrophysics Data System (ADS)

    Sheen, David M.; Hall, Thomas E.

    2011-05-01

    Cylindrical millimeter-wave imaging systems and technology have been under development at the Pacific Northwest National Laboratory (PNNL) for several years. This technology has been commercialized, and systems are currently being deployed widely across the United States and internationally. These systems are effective at screening for concealed items of all types; however, new sensor designs, image reconstruction techniques, and image rendering algorithms could potentially improve performance. At PNNL, a number of specific techniques have been developed recently to improve cylindrical imaging methods including wideband techniques, combining data from full 360-degree scans, polarimetric imaging techniques, calibration methods, and 3-D data visualization techniques. Many of these techniques exploit the three-dimensionality of the cylindrical imaging technique by optimizing the depth resolution of the system and using this information to enhance detection. Other techniques, such as polarimetric methods, exploit scattering physics of the millimeter-wave interaction with concealed targets on the body. In this paper, calibration, reconstruction, and three-dimensional rendering techniques will be described that optimize the depth information in these images and the display of the images to the operator.

  8. Radiometric modeling and calibration of the Geostationary Imaging Fourier Transform Spectrometer (GIFTS) ground based measurement experiment

    NASA Astrophysics Data System (ADS)

    Tian, Jialin; Smith, William L.; Gazarik, Michael J.

    2008-12-01

    The ultimate remote sensing benefits of the high resolution Infrared radiance spectrometers will be realized with their geostationary satellite implementation in the form of imaging spectrometers. This will enable dynamic features of the atmosphere's thermodynamic fields and pollutant and greenhouse gas constituents to be observed for revolutionary improvements in weather forecasts and more accurate air quality and climate predictions. As an important step toward realizing this application objective, the Geostationary Imaging Fourier Transform Spectrometer (GIFTS) Engineering Demonstration Unit (EDU) was successfully developed under the NASA New Millennium Program, 2000-2006. The GIFTS-EDU instrument employs three focal plane arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The GIFTS calibration is achieved using internal blackbody calibration references at ambient (260 K) and hot (286 K) temperatures. In this paper, we introduce a refined calibration technique that utilizes Principal Component (PC) analysis to compensate for instrument distortions and artifacts, thereby enhancing the absolute calibration accuracy. This method is applied to data collected during the GIFTS Ground Based Measurement (GBM) experiment, together with simultaneous observations by the accurately calibrated AERI (Atmospheric Emitted Radiance Interferometer), both simultaneously zenith viewing the sky through the same external scene mirror at ten-minute intervals throughout a cloudless day at Logan Utah on September 13, 2006. The accurately calibrated GIFTS radiances are produced using the first four PC scores in the GIFTS-AERI regression model. Temperature and moisture profiles retrieved from the PC-calibrated GIFTS radiances are verified against radiosonde measurements collected throughout the GIFTS sky measurement period. Using the GIFTS GBM calibration model, we compute the calibrated radiances from data collected during the moon tracking and viewing experiment events, from which we derive the lunar surface temperature and emissivity associated with the moon viewing measurements.
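
    A schematic of the PC-score regression calibration, assuming paired GIFTS and AERI observations arranged as matrices: project the GIFTS spectra onto their first few principal components and regress the AERI radiances on those scores. This is a sketch of the idea using scikit-learn, not the GIFTS pipeline, and the variable names are placeholders.

```python
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def pc_regression_calibration(X_gifts, Y_aeri, n_scores=4):
    """Build a GIFTS->AERI radiance mapping from the first few PC scores.

    X_gifts: (n_obs, n_channels) GIFTS spectra; Y_aeri: coincident AERI radiances.
    Both arrays are placeholders for whatever paired observations are available.
    """
    pca = PCA(n_components=n_scores).fit(X_gifts)
    model = LinearRegression().fit(pca.transform(X_gifts), Y_aeri)

    def calibrate(X_new):
        return model.predict(pca.transform(X_new))

    return calibrate
```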

  9. Potential for calibration of geostationary meteorological satellite imagers using the Moon

    USGS Publications Warehouse

    Stone, T.C.; Kieffer, H.H.; Grant, I.F.; ,

    2005-01-01

    Solar-band imagery from geostationary meteorological satellites has been utilized in a number of important applications in Earth Science that require radiometric calibration. Because these satellite systems typically lack on-board calibrators, various techniques have been employed to establish "ground truth", including observations of stable ground sites and oceans, and cross-calibrating with coincident observations made by instruments with on-board calibration systems. The Moon appears regularly in the margins and corners of full-disk operational images of the Earth acquired by meteorological instruments with a rectangular field of regard, typically several times each month, which provides an excellent opportunity for radiometric calibration. The USGS RObotic Lunar Observatory (ROLO) project has developed the capability for on-orbit calibration using the Moon via a model for lunar spectral irradiance that accommodates the geometries of illumination and viewing by a spacecraft. The ROLO model has been used to determine on-orbit response characteristics for several NASA EOS instruments in low Earth orbit. Relative response trending with precision approaching 0.1% per year has been achieved for SeaWiFS as a result of the long time-series of lunar observations collected by that instrument. The method has a demonstrated capability for cross-calibration of different instruments that have viewed the Moon. The Moon appears skewed in high-resolution meteorological images, primarily due to satellite orbital motion during acquisition; however, the geometric correction for this is straightforward. By integrating the lunar disk image to an equivalent irradiance, and using knowledge of the sensor's spectral response, a calibration can be developed through comparison against the ROLO lunar model. The inherent stability of the lunar surface means that lunar calibration can be applied to observations made at any time, including retroactively. Archived geostationary imager data that contains the Moon can be used to develop response histories for these instruments, regardless of their current operational status.
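
    The final comparison step reduces to simple radiometry once the skewed lunar image has been geometrically corrected and background-subtracted: integrate the disk counts to an equivalent irradiance and ratio it against the ROLO prediction. The sketch below assumes that correction has already been applied and that the per-pixel solid angle and band-equivalent ROLO irradiance are supplied; it is illustrative, not the ROLO project's code.

```python
import numpy as np

def lunar_band_gain(disk_counts, omega_pixel, E_rolo):
    """Gain (radiance per count) that makes the integrated lunar disk match ROLO.

    disk_counts: background-subtracted counts over the geometry-corrected disk
    omega_pixel: solid angle subtended by one pixel, sr
    E_rolo:      ROLO-predicted band-equivalent lunar irradiance, W m-2 um-1
    Calibration condition: sum(gain * counts) * omega_pixel = E_rolo.
    """
    return E_rolo / (np.sum(disk_counts) * omega_pixel)
```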

  10. Contributed Review: Absolute spectral radiance calibration of fiber-optic shock-temperature pyrometers using a coiled-coil irradiance standard lamp

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fat’yanov, O. V., E-mail: fatyan1@gps.caltech.edu; Asimow, P. D., E-mail: asimow@gps.caltech.edu

    2015-10-15

    We describe an accurate and precise calibration procedure for multichannel optical pyrometers such as the 6-channel, 3-ns temporal resolution instrument used in the Caltech experimental geophysics laboratory. We begin with a review of calibration sources for shock temperatures in the 3000-30 000 K range. High-power, coiled tungsten halogen standards of spectral irradiance appear to be the only practical alternative to NIST-traceable tungsten ribbon lamps, which are no longer available with large enough calibrated area. However, non-uniform radiance complicates the use of such coiled lamps for reliable and reproducible calibration of pyrometers that employ imaging or relay optics. Careful analysis of documented methods of shock pyrometer calibration to coiled irradiance standard lamps shows that only one technique, not directly applicable in our case, is free of major radiometric errors. We provide a detailed description of the modified Caltech pyrometer instrument and a procedure for its absolute spectral radiance calibration, accurate to ±5%. We employ a designated central area of a 0.7× demagnified image of a coiled-coil tungsten halogen lamp filament, cross-calibrated against a NIST-traceable tungsten ribbon lamp. We give the results of the cross-calibration along with descriptions of the optical arrangement, data acquisition, and processing. We describe a procedure to characterize the difference between the static and dynamic response of amplified photodetectors, allowing time-dependent photodiode correction factors for spectral radiance histories from shock experiments. We validate correct operation of the modified Caltech pyrometer with actual shock temperature experiments on single-crystal NaCl and MgO and obtain very good agreement with the literature data for these substances. We conclude with a summary of the most essential requirements for error-free calibration of a fiber-optic shock-temperature pyrometer using a high-power coiled tungsten halogen irradiance standard lamp.

  11. Radiometric Modeling and Calibration of the Geostationary Imaging Fourier Transform Spectrometer (GIFTS)Ground Based Measurement Experiment

    NASA Technical Reports Server (NTRS)

    Tian, Jialin; Smith, William L.; Gazarik, Michael J.

    2008-01-01

    The ultimate remote sensing benefits of the high resolution Infrared radiance spectrometers will be realized with their geostationary satellite implementation in the form of imaging spectrometers. This will enable dynamic features of the atmosphere's thermodynamic fields and pollutant and greenhouse gas constituents to be observed for revolutionary improvements in weather forecasts and more accurate air quality and climate predictions. As an important step toward realizing this application objective, the Geostationary Imaging Fourier Transform Spectrometer (GIFTS) Engineering Demonstration Unit (EDU) was successfully developed under the NASA New Millennium Program, 2000-2006. The GIFTS-EDU instrument employs three focal plane arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The GIFTS calibration is achieved using internal blackbody calibration references at ambient (260 K) and hot (286 K) temperatures. In this paper, we introduce a refined calibration technique that utilizes Principal Component (PC) analysis to compensate for instrument distortions and artifacts, thereby enhancing the absolute calibration accuracy. This method is applied to data collected during the GIFTS Ground Based Measurement (GBM) experiment, together with simultaneous observations by the accurately calibrated AERI (Atmospheric Emitted Radiance Interferometer), both simultaneously zenith viewing the sky through the same external scene mirror at ten-minute intervals throughout a cloudless day at Logan Utah on September 13, 2006. The accurately calibrated GIFTS radiances are produced using the first four PC scores in the GIFTS-AERI regression model. Temperature and moisture profiles retrieved from the PC-calibrated GIFTS radiances are verified against radiosonde measurements collected throughout the GIFTS sky measurement period. Using the GIFTS GBM calibration model, we compute the calibrated radiances from data collected during the moon tracking and viewing experiment events, from which we derive the lunar surface temperature and emissivity associated with the moon viewing measurements.

  12. Videometric Applications in Wind Tunnels

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Radeztsky, R. H.; Liu, Tian-Shu

    1997-01-01

    Videometric measurements in wind tunnels can be very challenging due to the limited optical access, model dynamics, optical path variability during testing, large range of temperature and pressure, hostile environment, and the requirements for high productivity and large amounts of data on a daily basis. Other complications for wind tunnel testing include the model support mechanism and stringent surface finish requirements for the models in order to maintain aerodynamic fidelity. For these reasons nontraditional photogrammetric techniques and procedures sometimes must be employed. In this paper several such applications are discussed for wind tunnels which include test conditions with Mach number from low speed to hypersonic, pressures from less than an atmosphere to nearly seven atmospheres, and temperatures from cryogenic to above room temperature. Several of the wind tunnel facilities are continuous flow while one is a short duration blowdown facility. Videometric techniques and calibration procedures developed to measure angle of attack, the change in wing twist and bending induced by aerodynamic load, and the effects of varying model injection rates are described. Some advantages and disadvantages of these techniques are given and comparisons are made with non-optical and more traditional video photogrammetric techniques.

  13. Prospects for laser-induced breakdown spectroscopy for biomedical applications: a review.

    PubMed

    Singh, Vivek Kumar; Rai, Awadhesh Kumar

    2011-09-01

    We review the different spectroscopic techniques, including the most recent laser-induced breakdown spectroscopy (LIBS), for the characterization of materials in any phase (solid, liquid or gas), including biological materials. A brief history of the laser and its application in bioscience is presented. The development of LIBS, its working principle and its instrumentation (different parts of the experimental set up) are briefly summarized. The generation of laser-induced plasma and detection of light emitted from this plasma are also discussed. The merits and demerits of LIBS are discussed in comparison with other conventional analytical techniques. The work done using the laser in the biomedical field is also summarized. The analysis of different tissues, mineral analysis in different organs of the human body, characterization of different types of stone formed in the human body, and analysis of biological aerosols using the LIBS technique are also summarized. The unique abilities of LIBS, including detection of molecular species and calibration-free LIBS, are compared with those of other conventional techniques, including atomic absorption spectroscopy, inductively coupled plasma atomic emission spectroscopy and mass spectroscopy, and X-ray fluorescence.

  14. Requirements for Calibration in Noninvasive Glucose Monitoring by Raman Spectroscopy

    PubMed Central

    Lipson, Jan; Bernhardt, Jeff; Block, Ueyn; Freeman, William R.; Hofmeister, Rudy; Hristakeva, Maya; Lenosky, Thomas; McNamara, Robert; Petrasek, Danny; Veltkamp, David; Waydo, Stephen

    2009-01-01

    Background: In the development of noninvasive glucose monitoring technology, it is highly desirable to derive a calibration that relies on neither person-dependent calibration information nor supplementary calibration points furnished by an existing invasive measurement technique (universal calibration). Method: By appropriate experimental design and associated analytical methods, we establish the sufficiency of multiple factors required to permit such a calibration. Factors considered are the discrimination of the measurement technique, stabilization of the experimental apparatus, physics–physiology-based measurement techniques for normalization, the sufficiency of the size of the data set, and appropriate exit criteria to establish the predictive value of the algorithm. Results: For noninvasive glucose measurements, using Raman spectroscopy, the sufficiency of the scale of data was demonstrated by adding new data into an existing calibration algorithm and requiring that (a) the prediction error should be preserved or improved without significant re-optimization, (b) the complexity of the model for optimum estimation not rise with the addition of subjects, and (c) the estimation for persons whose data were removed entirely from the training set should be no worse than the estimates on the remainder of the population. Using these criteria, we established guidelines empirically for the number of subjects (30) and skin sites (387) for a preliminary universal calibration. We obtained a median absolute relative difference for our entire data set of 30 mg/dl, with 92% of the data in the Clarke A and B ranges. Conclusions: Because Raman spectroscopy has high discrimination for glucose, a data set of practical dimensions appears to be sufficient for universal calibration. Improvements based on reducing the variance of blood perfusion are expected to reduce the prediction errors substantially, and the inclusion of supplementary calibration points for the wearable device under development will be permissible and beneficial. PMID:20144354
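
    For reference, the headline error metric quoted above, the median absolute relative difference between predicted and reference glucose values, can be computed as in the short generic sketch below (not the authors' code).

```python
import numpy as np

def mard_percent(predicted, reference):
    """Median absolute relative difference, in percent, against reference values."""
    predicted = np.asarray(predicted, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return 100.0 * np.median(np.abs(predicted - reference) / reference)
```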

  15. A review of model applications for structured soils: b) Pesticide transport.

    PubMed

    Köhne, John Maximilian; Köhne, Sigrid; Simůnek, Jirka

    2009-02-16

    The past decade has seen considerable progress in the development of models simulating pesticide transport in structured soils subject to preferential flow (PF). Most PF pesticide transport models are based on the two-region concept and usually assume one (vertical) dimensional flow and transport. Stochastic parameter sets are sometimes used to account for the effects of spatial variability at the field scale. In the past decade, PF pesticide models were also coupled with Geographical Information Systems (GIS) and groundwater flow models for application at the catchment and larger regional scales. A review of PF pesticide model applications reveals that the principal difficulty of their application is still the appropriate parameterization of PF and pesticide processes. Experimental solution strategies involve improving measurement techniques and experimental designs. Model strategies aim at enhancing process descriptions, studying parameter sensitivity, uncertainty, inverse parameter identification, model calibration, and effects of spatial variability, as well as generating model emulators and databases. Model comparison studies demonstrated that, after calibration, PF pesticide models clearly outperform chromatographic models for structured soils. Considering nonlinear and kinetic sorption reactions further enhanced the pesticide transport description. However, inverse techniques combined with typically available experimental data are often limited in their ability to simultaneously identify parameters for describing PF, sorption, degradation and other processes. On the other hand, the predictive capacity of uncalibrated PF pesticide models currently allows at best an approximate (order-of-magnitude) estimation of concentrations. Moreover, models should target the entire soil-plant-atmosphere system, including often neglected above-ground processes such as pesticide volatilization, interception, sorption to plant residues, root uptake, and losses by runoff. The conclusions compile progress, problems, and future research choices for modelling pesticide displacement in structured soils.

  16. Determining Equilibrium Position For Acoustical Levitation

    NASA Technical Reports Server (NTRS)

    Barmatz, M. B.; Aveni, G.; Putterman, S.; Rudnick, J.

    1989-01-01

    The equilibrium position and orientation of an acoustically levitated weightless object are determined by a calibration technique on Earth. From the calibration data, it is possible to calculate the equilibrium position and orientation in the presence of Earth gravitation. The sample is not levitated acoustically during calibration. The technique relies on the Boltzmann-Ehrenfest adiabatic-invariance principle. One converts resonant-frequency-shift data into data on the normalized acoustical potential energy; the minimum of this energy occurs at the equilibrium point. From the gradients of the acoustical potential energy, one calculates the acoustical restoring force or torque on the object as a function of deviation from the equilibrium position or orientation.
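
    A minimal numerical sketch of the procedure, with invented frequency-shift data: by the adiabatic-invariance argument, the fractional resonance shift maps (up to a constant that normalization removes) onto the acoustic potential energy, whose minimum marks the equilibrium and whose gradient gives the restoring force. Sign conventions and the actual mapping constant are assumptions here.

```python
import numpy as np

# Invented scan along one axis; real data come from stepping the (non-levitated)
# sample through the chamber and recording the resonant-frequency shift.
x_mm = np.linspace(-5.0, 5.0, 41)
df_over_f = 1.0e-4 * (x_mm**2 - 25.0)          # fractional resonant-frequency shift

U = df_over_f / np.max(np.abs(df_over_f))      # normalized acoustic potential energy
i_eq = np.argmin(U)                            # equilibrium = potential minimum
force = -np.gradient(U, x_mm * 1.0e-3)         # restoring force, up to the same constant
print(f"equilibrium near x = {x_mm[i_eq]:.1f} mm")
```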

  17. Calibration techniques and strategies for the present and future LHC electromagnetic calorimeters

    NASA Astrophysics Data System (ADS)

    Aleksa, M.

    2018-02-01

    This document describes the different calibration strategies and techniques applied by the two general-purpose experiments at the LHC, ATLAS and CMS, and discusses them, underlining their respective strengths and weaknesses from the author's point of view. The resulting performance of both calorimeters is described and compared on the basis of selected physics results. Future upgrade plans for the High Luminosity LHC (HL-LHC) are briefly introduced, and planned calibration strategies for the upgraded detectors are shown.

  18. Determination of calibration constants for the hole-drilling residual stress measurement technique applied to orthotropic composites. II - Experimental evaluations

    NASA Technical Reports Server (NTRS)

    Prasad, C. B.; Prabhakaran, R.; Tompkins, S.

    1987-01-01

    The first step in the extension of the semidestructive hole-drilling technique for residual stress measurement to orthotropic composite materials is the determination of the three calibration constants. Attention is presently given to an experimental determination of these calibration constants for a highly orthotropic, unidirectionally-reinforced graphite fiber-reinforced polyimide composite. A comparison of the measured values with theoretically obtained ones shows agreement to be good, in view of the many possible sources of experimental variation.

  19. Improved Reference Sampling and Subtraction: A Technique for Reducing the Read Noise of Near-infrared Detector Systems

    NASA Astrophysics Data System (ADS)

    Rauscher, Bernard J.; Arendt, Richard G.; Fixsen, D. J.; Greenhouse, Matthew A.; Lander, Matthew; Lindler, Don; Loose, Markus; Moseley, S. H.; Mott, D. Brent; Wen, Yiting; Wilson, Donna V.; Xenophontos, Christos

    2017-10-01

    Near-infrared array detectors, like the James Webb Space Telescope (JWST) NIRSpec’s Teledyne’s H2RGs, often provide reference pixels and a reference output. These are used to remove correlated noise. Improved reference sampling and subtraction (IRS2) is a statistical technique for using this reference information optimally in a least-squares sense. Compared with the traditional H2RG readout, IRS2 uses a different clocking pattern to interleave many more reference pixels into the data than is otherwise possible. Compared with standard reference correction techniques, IRS2 subtracts the reference pixels and reference output using a statistically optimized set of frequency-dependent weights. The benefits include somewhat lower noise variance and much less obvious correlated noise. NIRSpec’s IRS2 images are cosmetically clean, with less 1/f banding than in traditional data from the same system. This article describes the IRS2 clocking pattern and presents the equations needed to use IRS2 in systems other than NIRSpec. For NIRSpec, applying these equations is already an option in the calibration pipeline. As an aid to instrument builders, we provide our prototype IRS2 calibration software and sample JWST NIRSpec data. The same techniques are applicable to other detector systems, including those based on Teledyne’s H4RG arrays. The H4RG’s interleaved reference pixel readout mode is effectively one IRS2 pattern.
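
    The essence of frequency-dependent, least-squares reference weighting can be sketched in a few lines of NumPy: for each Fourier frequency, the weight that minimizes the residual between the science output and the reference over a training ensemble is the cross-spectrum divided by the reference power spectrum. This is an illustrative reduction of the idea, not the NIRSpec pipeline implementation, and it uses a single reference stream rather than the full interleaved IRS2 sampling.

```python
import numpy as np

def reference_subtract(data, ref, eps=1e-12):
    """Frequency-weighted reference subtraction (illustrative, single reference).

    data, ref: (n_streams, n_samples) arrays of science-output and reference
    time-streams acquired with the same clocking.
    For each frequency the least-squares weight is <D R*> / <|R|^2>.
    """
    D = np.fft.rfft(data, axis=1)
    R = np.fft.rfft(ref, axis=1)
    w = np.sum(D * np.conj(R), axis=0) / (np.sum(np.abs(R) ** 2, axis=0) + eps)
    cleaned = data - np.fft.irfft(w[None, :] * R, n=data.shape[1], axis=1)
    return cleaned, w
```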

  20. Microscope self-calibration based on micro laser line imaging and soft computing algorithms

    NASA Astrophysics Data System (ADS)

    Apolinar Muñoz Rodríguez, J.

    2018-06-01

    A technique to perform microscope self-calibration via micro laser line and soft computing algorithms is presented. In this technique, the microscope vision parameters are computed by means of soft computing algorithms based on laser line projection. To implement the self-calibration, a microscope vision system is constructed by means of a CCD camera and a 38 μm laser line. From this arrangement, the microscope vision parameters are represented via Bezier approximation networks, which are accomplished through the laser line position. In this procedure, a genetic algorithm determines the microscope vision parameters by means of laser line imaging. Also, the approximation networks compute the three-dimensional vision by means of the laser line position. Additionally, the soft computing algorithms re-calibrate the vision parameters when the microscope vision system is modified during the vision task. The proposed self-calibration improves accuracy of the traditional microscope calibration, which is accomplished via external references to the microscope system. The capability of the self-calibration based on soft computing algorithms is determined by means of the calibration accuracy and the micro-scale measurement error. This contribution is corroborated by an evaluation based on the accuracy of the traditional microscope calibration.

  1. 40 CFR 86.1308-84 - Dynamometer and engine equipment specifications.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... technique involves the calibration of a master load cell (i.e., dynamometer case load cell). This... hydraulically actuated precalibrated master load cell. This calibration is then transferred to the flywheel torque measuring device. The technique involves the following steps: (i) A master load cell shall be...

  2. Intrinsic coincident linear polarimetry using stacked organic photovoltaics.

    PubMed

    Roy, S Gupta; Awartani, O M; Sen, P; O'Connor, B T; Kudenov, M W

    2016-06-27

    Polarimetry has widespread applications within atmospheric sensing, telecommunications, biomedical imaging, and target detection. Several existing methods of imaging polarimetry trade off the sensor's spatial resolution for polarimetric resolution, and often have some form of spatial registration error. To mitigate these issues, we have developed a system using oriented polymer-based organic photovoltaics (OPVs) that can preferentially absorb linearly polarized light. Additionally, the OPV cells can be made semitransparent, enabling multiple detectors to be cascaded along the same optical axis. Since each device performs a partial polarization measurement of the same incident beam, high temporal resolution is maintained with the potential for inherent spatial registration. In this paper, a Mueller matrix model of the stacked OPV design is provided. Based on this model, a calibration technique is developed and presented. This calibration technique and model are validated with experimental data, taken with a cascaded three cell OPV Stokes polarimeter, capable of measuring incident linear polarization states. Our results indicate polarization measurement error of 1.2% RMS and an average absolute radiometric accuracy of 2.2% for the demonstrated polarimeter.
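
    Once the calibration provides each cell's polarization response (the top row of its Mueller matrix), reducing a set of photocurrents to the incident linear Stokes vector is a linear least-squares problem. The sketch below uses an idealized three-cell measurement matrix with analyzer axes at 0, 60, and 120 degrees; real OPV cells have finite diattenuation and transmission losses, which the calibrated matrix would capture.

```python
import numpy as np

def reduce_stokes(W, m):
    """Least-squares Stokes reduction: m = W @ (S0, S1, S2) for a linear polarimeter."""
    S, *_ = np.linalg.lstsq(W, m, rcond=None)
    return S

# Idealized measurement matrix; a calibrated polarimeter would instead use the
# top rows of the measured Mueller matrices of the stacked cells.
angles = np.deg2rad([0.0, 60.0, 120.0])
W = 0.5 * np.stack([np.ones(3), np.cos(2 * angles), np.sin(2 * angles)], axis=1)
m = W @ np.array([1.0, 0.3, -0.1])        # synthetic photocurrents for a known input
print(reduce_stokes(W, m))                # recovers [1.0, 0.3, -0.1]
```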

  3. Passive Resistor Temperature Compensation for a High-Temperature Piezoresistive Pressure Sensor.

    PubMed

    Yao, Zong; Liang, Ting; Jia, Pinggang; Hong, Yingping; Qi, Lei; Lei, Cheng; Zhang, Bin; Li, Wangwang; Zhang, Diya; Xiong, Jijun

    2016-07-22

    The main limitation of high-temperature piezoresistive pressure sensors is the variation of output voltage with operating temperature, which seriously reduces their measurement accuracy. This paper presents a passive resistor temperature compensation technique whose parameters are calculated using differential equations. Unlike traditional experiential arithmetic, the differential equations are independent of the parameter deviation among the piezoresistors of the microelectromechanical pressure sensor and the residual stress caused by the fabrication process or a mismatch in the thermal expansion coefficients. The differential equations are solved using calibration data from uncompensated high-temperature piezoresistive pressure sensors. Tests conducted on the calibrated equipment at various temperatures and pressures show that the passive resistor temperature compensation produces a remarkable effect. Additionally, a high-temperature signal-conditioning circuit is used to improve the output sensitivity of the sensor, which can be reduced by the temperature compensation. Compared to traditional experiential arithmetic, the proposed passive resistor temperature compensation technique exhibits less temperature drift and is expected to be highly applicable for pressure measurements in harsh environments with large temperature variations.

  4. Passive Resistor Temperature Compensation for a High-Temperature Piezoresistive Pressure Sensor

    PubMed Central

    Yao, Zong; Liang, Ting; Jia, Pinggang; Hong, Yingping; Qi, Lei; Lei, Cheng; Zhang, Bin; Li, Wangwang; Zhang, Diya; Xiong, Jijun

    2016-01-01

    The main limitation of high-temperature piezoresistive pressure sensors is the variation of output voltage with operating temperature, which seriously reduces their measurement accuracy. This paper presents a passive resistor temperature compensation technique whose parameters are calculated using differential equations. Unlike traditional experiential arithmetic, the differential equations are independent of the parameter deviation among the piezoresistors of the microelectromechanical pressure sensor and the residual stress caused by the fabrication process or a mismatch in the thermal expansion coefficients. The differential equations are solved using calibration data from uncompensated high-temperature piezoresistive pressure sensors. Tests conducted on the calibrated equipment at various temperatures and pressures show that the passive resistor temperature compensation produces a remarkable effect. Additionally, a high-temperature signal-conditioning circuit is used to improve the output sensitivity of the sensor, which can be reduced by the temperature compensation. Compared to traditional experiential arithmetic, the proposed passive resistor temperature compensation technique exhibits less temperature drift and is expected to be highly applicable for pressure measurements in harsh environments with large temperature variations. PMID:27455271

  5. Measurement of the steady-state shear characteristics of filamentous suspensions using turbine, vane, and helical impellers.

    PubMed

    Svihla, C K; Dronawat, S N; Donnelly, J A; Rieth, T C; Hanley, T R

    1997-01-01

    The impeller viscometer technique is frequently used to characterize the rheology of filamentous suspensions in order to avoid difficulties encountered with conventional instruments. This work presents the results of experiments conducted with vane, turbine, and helical impellers. The validity of the assumptions made in the determination of the torque and shear-rate constants was assessed for each impeller type. For the turbine and vane impellers, an increase in the apparent torque constant c was observed with increasing Reynolds number even when measurements were confined to the viscous regime. The shear-rate constants determined for the vane and turbine impellers varied for different calibration fluids, which contradicts the assumptions usually invoked in the analysis of data for this technique. When the helical impeller was calibrated, consistent values for the torque and shear-rate constants were obtained. The three impeller types were also used to characterize the rheology of cellulose fiber suspensions and the results compared for consistency and reproducibility. The results have application in the design of rheometers for use in process control and product quality assessment in the fermentation and pulp and paper industries.

  6. A Single-Block TRL Test Fixture for the Cryogenic Characterization of Planar Microwave Components

    NASA Technical Reports Server (NTRS)

    Mejia, M.; Creason, A. S.; Toncich, S. S.; Ebihara, B. T.; Miranda, F. A.

    1996-01-01

    The High-Temperature-Superconductivity (HTS) group of the RF Technology Branch, Space Electronics Division, is actively involved in the fabrication and cryogenic characterization of planar microwave components for space applications. This process requires fast, reliable, and accurate measurement techniques not readily available. A new calibration standard/test fixture that enhances the integrity and reliability of the component characterization process has been developed. The fixture consists of 50 omega thru, reflect, delay, and device under test gold lines etched onto a 254 microns (0.010 in) thick alumina substrate. The Thru-Reflect-Line (TRL) fixture was tested at room temperature using a 30 omega, 7.62 mm (300 mil) long, gold line as a known standard. Good agreement between the experimental data and the data modelled using Sonnet's em(C) software was obtained for both the return (S(sub 11)) and insertion (S(sub 21)) losses. A gold two-pole bandpass filter with a 7.3 GHz center frequency was used as our Device Under Test (DUT), and the results compared with those obtained using a Short-Open-Load-Thru (SOLT) calibration technique.

  7. Retrieving Storm Electric Fields from Aircraft Field Mill Data: Part II: Applications

    NASA Technical Reports Server (NTRS)

    Koshak, William; Mach, D. M.; Christian, H. J.; Stewart, M. F.; Bateman, M. G.

    2006-01-01

    The Lagrange multiplier theory developed in Part I of this study is applied to complete a relative calibration of a Citation aircraft that is instrumented with six field mill sensors. When side constraints related to average fields are used, the Lagrange multiplier method performs well in computer simulations. For mill measurement errors of 1 V m(sup -1) and a 5 V m(sup -1) error in the mean fair-weather field function, the 3D storm electric field is retrieved to within an error of about 12%. A side constraint that involves estimating the detailed structure of the fair-weather field was also tested using computer simulations. For mill measurement errors of 1 V m(sup -1), the method retrieves the 3D storm field to within an error of about 8% if the fair-weather field estimate is typically within 1 V m(sup -1) of the true fair-weather field. Using this type of side constraint and data from fair-weather field maneuvers taken on 29 June 2001, the Citation aircraft was calibrated. Absolute calibration was completed using the pitch down method developed in Part I, and conventional analyses. The resulting calibration matrices were then used to retrieve storm electric fields during a Citation flight on 2 June 2001. The storm field results are encouraging and agree favorably in many respects with results derived from earlier (iterative) techniques of calibration.

  8. Fitting and Calibrating a Multilevel Mixed-Effects Stem Taper Model for Maritime Pine in NW Spain

    PubMed Central

    Arias-Rodil, Manuel; Castedo-Dorado, Fernando; Cámara-Obregón, Asunción; Diéguez-Aranda, Ulises

    2015-01-01

    Stem taper data are usually hierarchical (several measurements per tree, and several trees per plot), making application of a multilevel mixed-effects modelling approach essential. However, correlation between trees in the same plot/stand has often been ignored in previous studies. Fitting and calibration of a variable-exponent stem taper function were conducted using data from 420 trees felled in even-aged maritime pine (Pinus pinaster Ait.) stands in NW Spain. In the fitting step, the tree level explained much more variability than the plot level, and therefore calibration at plot level was omitted. Several stem heights were evaluated for measurement of the additional diameter needed for calibration at tree level. Calibration with an additional diameter measured at between 40 and 60% of total tree height showed the greatest improvement in volume and diameter predictions. If additional diameter measurement is not available, the fixed-effects model fitted by the ordinary least squares technique should be used. Finally, we also evaluated how the expansion of parameters with random effects affects the stem taper prediction, as we consider this a key question when applying the mixed-effects modelling approach to taper equations. The results showed that correlation between random effects should be taken into account when assessing the influence of random effects in stem taper prediction. PMID:26630156

  9. Calibrating airborne measurements of airspeed, pressure and temperature using a Doppler laser air-motion sensor

    NASA Astrophysics Data System (ADS)

    Cooper, W. A.; Spuler, S. M.; Spowart, M.; Lenschow, D. H.; Friesen, R. B.

    2014-09-01

    A new laser air-motion sensor measures the true airspeed with a standard uncertainty of less than 0.1 m s-1 and so reduces uncertainty in the measured component of the relative wind along the longitudinal axis of the aircraft to about the same level. The calculated pressure expected from that airspeed at the inlet of a pitot tube then provides a basis for calibrating the measurements of dynamic and static pressure, reducing standard uncertainty in those measurements to less than 0.3 hPa and the precision applicable to steady flight conditions to about 0.1 hPa. These improved measurements of pressure, combined with high-resolution measurements of geometric altitude from the global positioning system, then indicate (via integrations of the hydrostatic equation during climbs and descents) that the offset and uncertainty in temperature measurement for one research aircraft are +0.3 ± 0.3 °C. For airspeed, pressure and temperature, these are significant reductions in uncertainty vs. those obtained from calibrations using standard techniques. Finally, it is shown that although the initial calibration of the measured static and dynamic pressures requires a measured temperature, once calibrated these measured pressures and the measurement of airspeed from the new laser air-motion sensor provide a measurement of temperature that does not depend on any other temperature sensor.

  10. Standard Practices for Usage of Inductive Magnetic Field Probes with Application to Electric Propulsion Testing

    NASA Technical Reports Server (NTRS)

    Polzin, Kurt A.; Hill, Carrie S.

    2013-01-01

    Inductive magnetic field probes (also known as B-dot probes and sometimes as B-probes or magnetic probes) are useful for performing measurements in electric space thrusters and various plasma accelerator applications where a time-varying magnetic field is present. Magnetic field probes have proven to be a mainstay in diagnosing plasma thrusters where changes occur rapidly with respect to time, providing the means to measure the magnetic fields produced by time-varying currents and even an indirect measure of the plasma current density through the application of Ampère's law. Examples of applications where this measurement technique has been employed include pulsed plasma thrusters and quasi-steady magnetoplasmadynamic thrusters. The Electric Propulsion Technical Committee (EPTC) of the American Institute of Aeronautics and Astronautics (AIAA) was asked to assemble a Committee on Standards (CoS) for Electric Propulsion Testing. The assembled CoS was tasked with developing Standards and Recommended Practices for various diagnostic techniques used in the evaluation of plasma thrusters. These include measurements that can yield either global information related to a thruster and its performance or detailed, local data related to the specific physical processes occurring in the plasma. This paper presents a summary of the standard, describing the preferred methods for fabrication, calibration, and usage of inductive magnetic field probes for use in diagnosing plasma thrusters. Inductive magnetic field probes (also called B-dot probes throughout this document) are commonly used in electric propulsion (EP) research and testing to measure unsteady magnetic fields produced by time-varying currents. The B-dot probe is relatively simple and inexpensive to construct, making it readily accessible to most researchers. While relatively simple, the design of a B-dot probe is not trivial and there are many opportunities for errors in probe construction, calibration, and usage, and in the post-processing of data that is produced by the probe. There are typically several ways in which each of these steps can be approached, and different applications may require more or less rigorous attention to various issues.
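
    As a minimal sketch of the data-reduction step implied above (not the standard's reference implementation), the probe output V = -N*A*dB/dt can be numerically integrated and scaled by a bench-calibrated turns-area product to recover B(t). The function name and the baseline-handling choices are assumptions for illustration.

      import numpy as np

      def bdot_to_field(voltage, dt, effective_NA, gain=1.0):
          """Integrate a B-dot probe voltage trace to recover B(t) in tesla.

          voltage      : sampled probe output (V)
          dt           : sample interval (s)
          effective_NA : calibrated turns-area product N*A (m^2)
          gain         : net gain of any amplifier/attenuator in the signal chain
          """
          v = np.asarray(voltage, dtype=float)
          v = v - v[:50].mean()   # remove residual DC offset so the integral does not drift
          # Cumulative trapezoidal integration of V dt.
          integral = np.concatenate(([0.0], np.cumsum(0.5 * (v[1:] + v[:-1]) * dt)))
          return -integral / (effective_NA * gain)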

  11. A Bayesian modelling method for post-processing daily sub-seasonal to seasonal rainfall forecasts from global climate models and evaluation for 12 Australian catchments

    NASA Astrophysics Data System (ADS)

    Schepen, Andrew; Zhao, Tongtiegang; Wang, Quan J.; Robertson, David E.

    2018-03-01

    Rainfall forecasts are an integral part of hydrological forecasting systems at sub-seasonal to seasonal timescales. In seasonal forecasting, global climate models (GCMs) are now the go-to source for rainfall forecasts. For hydrological applications however, GCM forecasts are often biased and unreliable in uncertainty spread, and calibration is therefore required before use. There are sophisticated statistical techniques for calibrating monthly and seasonal aggregations of the forecasts. However, calibration of seasonal forecasts at the daily time step typically uses very simple statistical methods or climate analogue methods. These methods generally lack the sophistication to achieve unbiased, reliable and coherent forecasts of daily amounts and seasonal accumulated totals. In this study, we propose and evaluate a Rainfall Post-Processing method for Seasonal forecasts (RPP-S), which is based on the Bayesian joint probability modelling approach for calibrating daily forecasts and the Schaake Shuffle for connecting the daily ensemble members of different lead times. We apply the method to post-process ACCESS-S forecasts for 12 perennial and ephemeral catchments across Australia and for 12 initialisation dates. RPP-S significantly reduces bias in raw forecasts and improves both skill and reliability. RPP-S forecasts are also more skilful and reliable than forecasts derived from ACCESS-S forecasts that have been post-processed using quantile mapping, especially for monthly and seasonal accumulations. Several opportunities to improve the robustness and skill of RPP-S are identified. The new RPP-S post-processed forecasts will be used in ensemble sub-seasonal to seasonal streamflow applications.
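
    The Schaake Shuffle step mentioned above can be sketched in a few lines (a simplified single-site illustration under assumed array shapes, not the RPP-S code): at each lead time the calibrated ensemble members are re-ranked so that their rank order follows that of a set of historical observed trajectories, which restores realistic temporal correlation across lead times.

      import numpy as np

      def schaake_shuffle(ensemble, historical):
          """Reorder ensemble members so their ranks follow historical trajectories.

          ensemble   : (n_members, n_leadtimes) calibrated daily forecast samples
          historical : (n_members, n_leadtimes) observed daily series from past years,
                       one trajectory per ensemble member
          """
          ensemble = np.asarray(ensemble, dtype=float)
          historical = np.asarray(historical, dtype=float)
          shuffled = np.empty_like(ensemble)
          for t in range(ensemble.shape[1]):
              sorted_fc = np.sort(ensemble[:, t])                # sorted forecast values
              ranks = np.argsort(np.argsort(historical[:, t]))   # rank of each historical trajectory
              shuffled[:, t] = sorted_fc[ranks]                  # assign forecasts by historical rank
          return shuffled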

  12. Least squares parameter estimation methods for material decomposition with energy discriminating detectors

    PubMed Central

    Le, Huy Q.; Molloi, Sabee

    2011-01-01

    Purpose: Energy resolving detectors provide more than one spectral measurement in one image acquisition. The purpose of this study is to investigate, with simulation, the ability to decompose four materials using energy discriminating detectors and least squares minimization techniques. Methods: Three least squares parameter estimation decomposition techniques were investigated for four-material breast imaging tasks in the image domain. The first technique treats the voxel as if it consisted of fractions of all the materials. The second method assumes that a voxel primarily contains one material and divides the decomposition process into segmentation and quantification tasks. The third is similar to the second method but a calibration was used. The simulated computed tomography (CT) system consisted of an 80 kVp spectrum and a CdZnTe (CZT) detector that could resolve the x-ray spectrum into five energy bins. A postmortem breast specimen was imaged with flat panel CT to provide a model for the digital phantoms. Hydroxyapatite (HA) (50, 150, 250, 350, 450, and 550 mg∕ml) and iodine (4, 12, 20, 28, 36, and 44 mg∕ml) contrast elements were embedded into the glandular region of the phantoms. Calibration phantoms consisted of a 30∕70 glandular-to-adipose tissue ratio with embedded HA (100, 200, 300, 400, and 500 mg∕ml) and iodine (5, 15, 25, 35, and 45 mg∕ml). The x-ray transport process was simulated where the Beer–Lambert law, Poisson process, and CZT absorption efficiency were applied. Qualitative and quantitative evaluations of the decomposition techniques were performed and compared. The effect of breast size was also investigated. Results: The first technique decomposed iodine adequately but failed for other materials. The second method separated the materials but was unable to quantify the materials. With the addition of a calibration, the third technique provided good separation and quantification of hydroxyapatite, iodine, glandular, and adipose tissues. Quantification with this technique was accurate with errors of 9.83% and 6.61% for HA and iodine, respectively. Calibration at one point (one breast size) showed increased errors as the mismatch in breast diameters between calibration and measurement increased. A four-point calibration successfully decomposed breast diameter spanning the entire range from 8 to 20 cm. For a 14 cm breast, errors were reduced from 5.44% to 1.75% and from 6.17% to 3.27% with the multipoint calibration for HA and iodine, respectively. Conclusions: The results of the simulation study showed that a CT system based on CZT detectors in conjunction with least squares minimization technique can be used to decompose four materials. The calibrated least squares parameter estimation decomposition technique performed the best, separating and accurately quantifying the concentrations of hydroxyapatite and iodine. PMID:21361193
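
    A hedged sketch of the calibrated least-squares decomposition idea (illustrative basis matrix and a generic non-negative solver, not the authors' implementation): each voxel's five energy-bin measurements are modelled as a linear combination of per-material responses derived from the calibration phantoms, and the material amounts follow from non-negative least squares.

      import numpy as np
      from scipy.optimize import nnls

      # Calibration-derived basis matrix: rows = 5 energy bins, columns = materials
      # (HA, iodine, glandular, adipose). The values are illustrative placeholders.
      A = np.array([
          [0.92, 1.40, 0.21, 0.18],
          [0.71, 1.10, 0.20, 0.17],
          [0.55, 0.86, 0.19, 0.16],
          [0.43, 0.52, 0.18, 0.15],
          [0.35, 0.30, 0.17, 0.14],
      ])

      def decompose_voxel(bin_measurements, A=A):
          """Return non-negative material amounts that best explain the 5-bin data."""
          x, _ = nnls(A, np.asarray(bin_measurements, dtype=float))
          return x

      # Example: synthesize a voxel containing HA and glandular tissue, then decompose it.
      print(decompose_voxel(A @ np.array([0.3, 0.0, 0.6, 0.1])))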

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Le, Huy Q.; Molloi, Sabee

    Purpose: Energy resolving detectors provide more than one spectral measurement in one image acquisition. The purpose of this study is to investigate, with simulation, the ability to decompose four materials using energy discriminating detectors and least squares minimization techniques. Methods: Three least squares parameter estimation decomposition techniques were investigated for four-material breast imaging tasks in the image domain. The first technique treats the voxel as if it consisted of fractions of all the materials. The second method assumes that a voxel primarily contains one material and divides the decomposition process into segmentation and quantification tasks. The third is similar to the second method but a calibration was used. The simulated computed tomography (CT) system consisted of an 80 kVp spectrum and a CdZnTe (CZT) detector that could resolve the x-ray spectrum into five energy bins. A postmortem breast specimen was imaged with flat panel CT to provide a model for the digital phantoms. Hydroxyapatite (HA) (50, 150, 250, 350, 450, and 550 mg/ml) and iodine (4, 12, 20, 28, 36, and 44 mg/ml) contrast elements were embedded into the glandular region of the phantoms. Calibration phantoms consisted of a 30/70 glandular-to-adipose tissue ratio with embedded HA (100, 200, 300, 400, and 500 mg/ml) and iodine (5, 15, 25, 35, and 45 mg/ml). The x-ray transport process was simulated where the Beer-Lambert law, Poisson process, and CZT absorption efficiency were applied. Qualitative and quantitative evaluations of the decomposition techniques were performed and compared. The effect of breast size was also investigated. Results: The first technique decomposed iodine adequately but failed for other materials. The second method separated the materials but was unable to quantify the materials. With the addition of a calibration, the third technique provided good separation and quantification of hydroxyapatite, iodine, glandular, and adipose tissues. Quantification with this technique was accurate with errors of 9.83% and 6.61% for HA and iodine, respectively. Calibration at one point (one breast size) showed increased errors as the mismatch in breast diameters between calibration and measurement increased. A four-point calibration successfully decomposed breast diameter spanning the entire range from 8 to 20 cm. For a 14 cm breast, errors were reduced from 5.44% to 1.75% and from 6.17% to 3.27% with the multipoint calibration for HA and iodine, respectively. Conclusions: The results of the simulation study showed that a CT system based on CZT detectors in conjunction with least squares minimization technique can be used to decompose four materials. The calibrated least squares parameter estimation decomposition technique performed the best, separating and accurately quantifying the concentrations of hydroxyapatite and iodine.

  14. Comparison of various techniques for calibration of AIS data

    NASA Technical Reports Server (NTRS)

    Roberts, D. A.; Yamaguchi, Y.; Lyon, R. J. P.

    1986-01-01

    The Airborne Imaging Spectrometer (AIS) samples a region which is strongly influenced by decreasing solar irradiance at longer wavelengths and strong atmospheric absorptions. Four techniques, the Log Residual, the Least Upper Bound Residual, the Flat Field Correction and calibration using field reflectance measurements were investigated as a means for removing these two features. Of the four techniques field reflectance calibration proved to be superior in terms of noise and normalization. Of the other three techniques, the Log Residual was superior when applied to areas which did not contain one dominant cover type. In heavily vegetated areas, the Log Residual proved to be ineffective. After removing anomalously bright data values, the Least Upper Bound Residual proved to be almost as effective as the Log Residual in sparsely vegetated areas and much more effective in heavily vegetated areas. Of all the techniques, the Flat Field Correction was the noisiest.

  15. Calibration of the CMS hadron calorimeter in Run 2

    NASA Astrophysics Data System (ADS)

    Chadeeva, M.; Lychkovskaya, N.

    2018-03-01

    Various calibration techniques for the CMS hadron calorimeter in Run 2 and the results of calibration using 2016 collision data are presented. The radiation damage corrections and the intercalibration of different channels using the phi-symmetry technique for the barrel, endcap and forward calorimeter regions are described, as well as the intercalibration of the outer hadron calorimeter with muons. The achieved intercalibration precision is within 3%. The in situ energy scale calibration is performed in the barrel and endcap regions using isolated charged hadrons and in the forward calorimeter using the Z → ee process. The impact of pileup and the technique developed to correct for it are also discussed. The achieved uncertainty of the response to hadrons is 3.4% in the barrel and 2.6% in the endcap region (in the pseudorapidity range |η|<2) and is dominated by the systematic uncertainty due to pileup contributions.

  16. Prototypic Development and Evaluation of a Medium Format Metric Camera

    NASA Astrophysics Data System (ADS)

    Hastedt, H.; Rofallski, R.; Luhmann, T.; Rosenbauer, R.; Ochsner, D.; Rieke-Zapp, D.

    2018-05-01

    Engineering applications require high-precision 3D measurement techniques for object sizes that vary between small volumes (2-3 m in each direction) and large volumes (around 20 x 20 x 1-10 m). The requested precision in object space (1σ RMS) is defined to be within 0.1-0.2 mm for large volumes and less than 0.01 mm for small volumes. In particular, for large-volume applications the availability of a metric camera would offer several advantages: 1) high-quality optical components and stabilisations allow for a stable interior geometry of the camera itself, 2) a stable geometry leads to a stable interior orientation that enables a priori camera calibration, 3) a higher resulting precision can be expected. With this article the development and accuracy evaluation of a new metric camera, the ALPA 12 FPS add|metric, will be presented. Its general accuracy potential is tested against calibrated lengths in a small volume test environment based on the German Guideline VDI/VDE 2634.1 (2002). Maximum length measurement errors of less than 0.025 mm are achieved for the different scenarios tested. The accuracy potential for large volumes is estimated within a feasibility study on the application of photogrammetric measurements for the deformation estimation on a large wooden shipwreck in the German Maritime Museum. An accuracy of 0.2 mm-0.4 mm is reached for a length of 28 m (given by a distance from a laser tracker network measurement). All analyses have proven high stability of the interior orientation of the camera and indicate the applicability of a priori camera calibration for subsequent 3D measurements.

  17. Simultaneous Determination of Metamizole, Thiamin and Pyridoxin Using UV-Spectroscopy in Combination with Multivariate Calibration

    PubMed Central

    Chotimah, Chusnul; Sudjadi; Riyanto, Sugeng; Rohman, Abdul

    2015-01-01

    Purpose: Analysis of drugs in multicomponent systems is officially carried out using chromatographic techniques; however, these are laborious and involve sophisticated instrumentation. Therefore, UV-VIS spectrophotometry coupled with multivariate calibration by partial least squares (PLS) was developed for the quantitative analysis of metamizole, thiamin and pyridoxin in the presence of cyanocobalamine without any separation step. Methods: Calibration and validation samples were prepared. The calibration model was developed from a series of sample mixtures containing these drugs in known proportions. Cross-validation of the calibration samples using the leave-one-out technique was used to identify the smallest set of components that provides the greatest predictive ability. The evaluation of the calibration model was based on the coefficient of determination (R2) and the root mean square error of calibration (RMSEC). Results: The results showed that the coefficient of determination (R2) for the relationship between actual and predicted values for all studied drugs was higher than 0.99, indicating good accuracy. The RMSEC values obtained were relatively low, indicating good precision. The accuracy and precision of the developed method showed no significant difference compared with those obtained by the official HPLC method. Conclusion: The developed method (UV-VIS spectrophotometry in combination with PLS) was successfully used for the analysis of metamizole, thiamin and pyridoxin in tablet dosage form. PMID:26819934
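
    A brief, hedged sketch of the same chemometric workflow using scikit-learn rather than the authors' software: the spectra and concentration arrays below are random placeholders, and the number of PLS components is chosen by leave-one-out cross-validation before the calibration statistics (RMSEC, R2) are computed.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import LeaveOneOut, cross_val_predict

      # Placeholder calibration set: 20 mixture spectra (101 wavelengths) and the
      # corresponding concentrations of the three analytes.
      rng = np.random.default_rng(0)
      X, Y = rng.random((20, 101)), rng.random((20, 3))

      # Select the number of latent variables by leave-one-out cross-validation.
      best_n, best_rmse = 1, np.inf
      for n in range(1, 8):
          Y_cv = cross_val_predict(PLSRegression(n_components=n), X, Y, cv=LeaveOneOut())
          rmse = np.sqrt(np.mean((Y - Y_cv) ** 2))
          if rmse < best_rmse:
              best_n, best_rmse = n, rmse

      # Fit the final calibration model and report RMSEC per analyte and overall R2.
      pls = PLSRegression(n_components=best_n).fit(X, Y)
      rmsec = np.sqrt(np.mean((Y - pls.predict(X)) ** 2, axis=0))
      print(best_n, rmsec, pls.score(X, Y))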

  18. The recalibration of the IUE scientific instrument

    NASA Technical Reports Server (NTRS)

    Imhoff, Catherine L.; Oliversen, Nancy A.; Nichols-Bohlin, Joy; Casatella, Angelo; Lloyd, Christopher

    1988-01-01

    The IUE instrument was recalibrated because of long time-scale changes in the scientific instrument, a better understanding of the performance of the instrument, improved sets of calibration data, and improved analysis techniques. Calibrations completed or planned include intensity transfer functions (ITF), low-dispersion absolute calibrations, high-dispersion ripple corrections and absolute calibrations, improved geometric mapping of the ITFs to spectral images, studies to improve the signal-to-noise, enhanced absolute calibrations employing corrections for time, temperature, and aperture dependence, and photometric and geometric calibrations for the FES.

  19. Lamp mapping technique for independent determination of the water vapor mixing ratio calibration factor for a Raman lidar system

    NASA Astrophysics Data System (ADS)

    Venable, Demetrius D.; Whiteman, David N.; Calhoun, Monique N.; Dirisu, Afusat O.; Connell, Rasheen M.; Landulfo, Eduardo

    2011-08-01

    We have investigated a technique that allows for the independent determination of the water vapor mixing ratio calibration factor for a Raman lidar system. This technique utilizes a procedure whereby a light source of known spectral characteristics is scanned across the aperture of the lidar system's telescope and the overall optical efficiency of the system is determined. Direct analysis of the temperature-dependent differential scattering cross sections for vibration and vibration-rotation transitions (convolved with narrowband filters) along with the measured efficiency of the system, leads to a theoretical determination of the water vapor mixing ratio calibration factor. A calibration factor was also obtained experimentally from lidar measurements and radiosonde data. A comparison of the theoretical and experimentally determined values agrees within 5%. We report on the sensitivity of the water vapor mixing ratio calibration factor to uncertainties in parameters that characterize the narrowband transmission filters, the temperature-dependent differential scattering cross section, and the variability of the system efficiency ratios as the lamp is scanned across the aperture of the telescope used in the Howard University Raman Lidar system.

  20. Concentration Independent Calibration of β-γ Coincidence Detector Using 131mXe and 133Xe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McIntyre, Justin I.; Cooper, Matthew W.; Carman, April J.

    Absolute efficiency calibration of radiometric detectors is frequently difficult and requires careful detector modeling and accurate knowledge of the radioactive source used. In the past we have calibrated the β-γ coincidence detector of the Automated Radioxenon Sampler/Analyzer (ARSA) using a variety of sources and techniques which have proven to be less than desirable.[1] A superior technique has been developed that uses the conversion-electron (CE) and x-ray coincidence of 131mXe to provide a more accurate absolute gamma efficiency of the detector. The 131mXe is injected directly into the beta cell of the coincident counting system and no knowledge of absolute source strength is required. In addition, 133Xe is used to provide a second independent means to obtain the absolute efficiency calibration. These two data points provide the necessary information for calculating the detector efficiency and can be used in conjunction with other noble gas isotopes to completely characterize and calibrate the ARSA nuclear detector. In this paper we discuss the techniques and results that we have obtained.

  1. Novel Calibration Technique for a Coulometric Evolved Vapor Analyzer for Measuring Water Content of Materials

    NASA Astrophysics Data System (ADS)

    Bell, S. A.; Miao, P.; Carroll, P. A.

    2018-04-01

    Evolved vapor coulometry is a measurement technique that selectively detects water and is used to measure water content of materials. The basis of the measurement is the quantitative electrolysis of evaporated water entrained in a carrier gas stream. Although this measurement has a fundamental principle—based on Faraday's law which directly relates electrolysis current to amount of substance electrolyzed—in practice it requires calibration. Commonly, reference materials of known water content are used, but the variety of these is limited, and they are not always available for suitable values, materials, with SI traceability, or with well-characterized uncertainty. In this paper, we report development of an alternative calibration approach using as a reference the water content of humid gas of defined dew point traceable to the SI via national humidity standards. The increased information available through this new type of calibration reveals a variation of the instrument performance across its range not visible using the conventional approach. The significance of this is discussed along with details of the calibration technique, example results, and an uncertainty evaluation.
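
    The Faraday-law relationship at the heart of the technique can be made concrete with a short worked example (assumed numbers, not the paper's data): the integrated electrolysis charge Q is converted to a water mass via m = Q*M/(n*F), with n = 2 electrons per water molecule.

      import numpy as np

      FARADAY = 96485.332   # C/mol
      M_WATER = 18.01528    # g/mol
      N_ELECTRONS = 2       # electrons transferred per H2O molecule electrolyzed

      def water_mass_from_current(current_a, dt_s):
          """Integrate an electrolysis current trace (A, sampled every dt_s seconds)
          and convert the total charge to micrograms of water via Faraday's law."""
          charge = np.trapz(current_a, dx=dt_s)   # total charge in coulombs
          return charge * M_WATER / (N_ELECTRONS * FARADAY) * 1e6

      # Example: a 2 mA plateau lasting 100 s corresponds to roughly 18.7 ug of water.
      print(water_mass_from_current(np.full(1001, 2e-3), 0.1))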

  2. A validated method for the quantitation of 1,1-difluoroethane using a gas in equilibrium method of calibration.

    PubMed

    Avella, Joseph; Lehrer, Michael; Zito, S William

    2008-10-01

    1,1-Difluoroethane (DFE), also known as Freon 152A, is a member of a class of compounds known as halogenated hydrocarbons. A number of these compounds have gained notoriety because of their ability to induce rapid onset of intoxication after inhalation exposure. Abuse of DFE has necessitated development of methods for its detection and quantitation in postmortem and human performance specimens. Furthermore, methodologies applicable to research studies are required as there have been limited toxicokinetic and toxicodynamic reports published on DFE. This paper describes a method for the quantitation of DFE using a gas chromatography-flame-ionization headspace technique that employs solventless standards for calibration. Two calibration curves using 0.5 mL whole blood calibrators which ranged from A: 0.225-1.350 to B: 9.0-180.0 mg/L were developed. These were evaluated for linearity (0.9992 and 0.9995), limit of detection of 0.018 mg/L, limit of quantitation of 0.099 mg/L (recovery 111.9%, CV 9.92%), and upper limit of linearity of 27,000.0 mg/L. Combined curve recovery results of a 98.0 mg/L DFE control that was prepared using an alternate technique was 102.2% with CV of 3.09%. No matrix interference was observed in DFE enriched blood, urine or brain specimens nor did analysis of variance detect any significant differences (alpha = 0.01) in the area under the curve of blood, urine or brain specimens at three identical DFE concentrations. The method is suitable for use in forensic laboratories because validation was performed on instrumentation routinely used in forensic labs and due to the ease with which the calibration range can be adjusted. Perhaps more importantly it is also useful for research oriented studies because the removal of solvent from standard preparation eliminates the possibility for solvent induced changes to the gas/liquid partitioning of DFE or chromatographic interference due to the presence of solvent in specimens.

  3. Quantitative comparison of two independent lateral force calibration techniques for the atomic force microscope.

    PubMed

    Barkley, Sarice S; Deng, Zhao; Gates, Richard S; Reitsma, Mark G; Cannara, Rachel J

    2012-02-01

    Two independent lateral-force calibration methods for the atomic force microscope (AFM)--the hammerhead (HH) technique and the diamagnetic lateral force calibrator (D-LFC)--are systematically compared and found to agree to within 5 % or less, but with precision limited to about 15 %, using four different tee-shaped HH reference probes. The limitations of each method, both of which offer independent yet feasible paths toward traceable accuracy, are discussed and investigated. We find that stiff cantilevers may produce inconsistent D-LFC values through the application of excessively high normal loads. In addition, D-LFC results vary when the method is implemented using different modes of AFM feedback control, constant height and constant force modes, where the latter is more consistent with the HH method and closer to typical experimental conditions. Specifically, for the D-LFC apparatus used here, calibration in constant height mode introduced errors up to 14 %. In constant force mode using a relatively stiff cantilever, we observed an ≈ 4 % systematic error per μN of applied load for loads ≤ 1 μN. The issue of excessive load typically emerges for cantilevers whose flexural spring constant is large compared with the normal spring constant of the D-LFC setup (such that relatively small cantilever flexural displacements produce relatively large loads). Overall, the HH method carries a larger uncertainty, which is dominated by uncertainty in measurement of the flexural spring constant of the HH cantilever as well as in the effective length dimension of the cantilever probe. The D-LFC method relies on fewer parameters and thus has fewer uncertainties associated with it. We thus show that it is the preferred method of the two, as long as care is taken to perform the calibration in constant force mode with low applied loads.

  4. Laser Energy Monitor for Double-Pulsed 2-Micrometer IPDA Lidar Application

    NASA Technical Reports Server (NTRS)

    Refaat, Tamer F.; Petros, Mulugeta; Remus, Ruben; Yu, Jirong; Singh, Upendra N.

    2014-01-01

    Integrated path differential absorption (IPDA) lidar is a remote sensing technique for monitoring different atmospheric species. The technique relies on wavelength differentiation between strong and weak absorbing features normalized to the transmitted energy. 2-micron double-pulsed IPDA lidar is best suited for atmospheric carbon dioxide measurements. In this case, the transmitter produces two successive laser pulses separated by a short interval (200 microseconds), with a low repetition rate (10 Hz). Conventional laser energy monitors, based on thermal detectors, are suitable for low repetition rate single-pulse lasers. Due to the short pulse interval in double-pulsed lasers, thermal energy monitors underestimate the total transmitted energy. This leads to measurement biases and errors in the double-pulsed IPDA technique. The design and calibration of a 2-micron double-pulse laser energy monitor is presented. The design is based on high-speed, extended-range InGaAs pin quantum detectors suitable for separating the two pulse events. Pulse integration is applied to convert the detected pulse power into energy. Results are compared to a photo-electro-magnetic (PEM) detector for impulse response verification. Calibration included comparing the three detection technologies in single-pulsed mode, then comparing the pin and PEM detectors in double-pulsed mode. Energy monitor linearity will be addressed.
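
    As a minimal sketch of the pulse-integration step (assumed waveform layout and responsivity, not the instrument's firmware), each pulse in the photodiode trace is integrated separately so that the two events, 200 microseconds apart, are not merged into a single reading as they would be by a slow thermal detector.

      import numpy as np

      def pulse_energies(trace_v, dt, responsivity_v_per_w, threshold_frac=0.1):
          """Integrate each detected pulse in a baseline-subtracted photodiode trace.

          trace_v              : sampled detector voltage (V)
          dt                   : sample interval (s)
          responsivity_v_per_w : calibrated conversion from volts to optical watts
          Returns a list of pulse energies in joules, one per detected pulse.
          """
          v = np.asarray(trace_v, dtype=float)
          above = v > threshold_frac * v.max()
          edges = np.flatnonzero(np.diff(above.astype(int)))   # rising/falling edge indices
          energies = []
          for a, b in zip(edges[::2] + 1, edges[1::2] + 1):
              power = v[a:b] / responsivity_v_per_w             # convert volts to watts
              energies.append(np.trapz(power, dx=dt))           # integrate power to joules
          return energies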

  5. Full Flight Envelope Direct Thrust Measurement on a Supersonic Aircraft

    NASA Technical Reports Server (NTRS)

    Conners, Timothy R.; Sims, Robert L.

    1998-01-01

    Direct thrust measurement using strain gages offers advantages over analytically-based thrust calculation methods. For flight test applications, the direct measurement method typically uses a simpler sensor arrangement and minimal data processing compared to analytical techniques, which normally require costly engine modeling and multisensor arrangements throughout the engine. Conversely, direct thrust measurement has historically produced less than desirable accuracy because of difficulty in mounting and calibrating the strain gages and the inability to account for secondary forces that influence the thrust reading at the engine mounts. Consequently, the strain-gage technique has normally been used for simple engine arrangements and primarily in the subsonic speed range. This paper presents the results of a strain gage-based direct thrust-measurement technique developed by the NASA Dryden Flight Research Center and successfully applied to the full flight envelope of an F-15 aircraft powered by two F100-PW-229 turbofan engines. Measurements have been obtained at quasi-steady-state operating conditions at maximum non-augmented and maximum augmented power throughout the altitude range of the vehicle and to a maximum speed of Mach 2.0 and are compared against results from two analytically-based thrust calculation methods. The strain-gage installation and calibration processes are also described.

  6. Load monitoring using a calibrated piezo diaphragm based impedance strain sensor and wireless sensor network in real time

    NASA Astrophysics Data System (ADS)

    Gopal Madhav Annamdas, Venu; Kiong Soh, Chee

    2017-04-01

    The last decade has seen the use of various wired-wireless and contact-contactless sensors in several structural health monitoring (SHM) techniques. Most SHM sensors that are predominantly used for strain measurements may be ineffective for damage detection and vice versa, indicating that each of these sensors is suited to a single application. However, piezoelectric (PE)-based macro fiber composite (MFC) and lead zirconium titanate (PZT) sensors have been on the rise in SHM, vibration and damping control, etc, due to their superior actuation and sensing abilities. These PE sensors have created much interest for their multi-applicability in various technologies such as electromechanical impedance (EMI)-based SHM. This research employs piezo diaphragms, a cheaper alternative to several expensive types of PZT/MFC sensors, for the EMI technique. These piezo diaphragms were validated last year for their applicability in damage detection using the frequency domain. Here we further validate their applicability in strain monitoring using the real time domain. Hence, these piezo diaphragms can now be classified as PE sensors and used with PZT and MFC sensors in the EMI technique for monitoring damage and loading. However, no single technique or single type of sensor will be sufficient for large-scale SHM, thus requiring the deployment of more than one technique with different types of sensors, such as a piezoresistive strain gauge based wireless sensor network for strain measurements to complement the EMI technique. Furthermore, we present a novel procedure for converting a regular PE sensor from the ‘frequency domain’ to the ‘real time domain’ for strain applications.

  7. A measurement technique to determine the calibration accuracy of an electromagnetic tracking system to radiation isocenter.

    PubMed

    Litzenberg, Dale W; Gallagher, Ian; Masi, Kathryn J; Lee, Choonik; Prisciandaro, Joann I; Hamstra, Daniel A; Ritter, Timothy; Lam, Kwok L

    2013-08-01

    To present and characterize a measurement technique to quantify the calibration accuracy of an electromagnetic tracking system to radiation isocenter. This technique was developed as a quality assurance method for electromagnetic tracking systems used in a multi-institutional clinical hypofractionated prostate study. In this technique, the electromagnetic tracking system is calibrated to isocenter with the manufacturer's recommended technique, using laser-based alignment. A test patient is created with a transponder at isocenter whose position is measured electromagnetically. Four portal images of the transponder are taken with collimator rotations of 45°, 135°, 225°, and 315°, at each of four gantry angles (0°, 90°, 180°, 270°) using a 3×6 cm2 radiation field. In each image, the center of the copper-wrapped iron core of the transponder is determined. All measurements are made relative to this transponder position to remove gantry and imager sag effects. For each of the 16 images, the 50% collimation edges are identified and used to find a ray representing the rotational axis of each collimation edge. The 16 collimator rotation rays from four gantry angles pass through and bound the radiation isocenter volume. The center of the bounded region, relative to the transponder, is calculated and then transformed to tracking system coordinates using the transponder position, allowing the tracking system's calibration offset from radiation isocenter to be found. All image analysis and calculations are automated with in-house software for user-independent accuracy. Three different tracking systems at two different sites were evaluated for this study. The magnitude of the calibration offset was always less than the manufacturer's stated accuracy of 0.2 cm using their standard clinical calibration procedure, and ranged from 0.014 to 0.175 cm. On three systems in clinical use, the magnitude of the offset was found to be 0.053±0.036, 0.121±0.023, and 0.093±0.013 cm. The method presented here provides an independent technique to verify the calibration of an electromagnetic tracking system to radiation isocenter. The calibration accuracy of the system was better than the 0.2 cm accuracy stated by the manufacturer. However, it should not be assumed to be zero, especially for stereotactic radiation therapy treatments where planning target volume margins are very small.
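
    The geometric core of the technique, locating the point that best fits the bundle of collimator-rotation rays, can be written as a small least-squares problem (a hypothetical helper, not the authors' in-house software): for rays with origins p_i and unit directions d_i, the point x minimizing the summed squared perpendicular distances satisfies a 3x3 linear system.

      import numpy as np

      def closest_point_to_rays(origins, directions):
          """Least-squares point minimizing perpendicular distance to a set of rays.

          origins    : (n, 3) array, one point on each ray
          directions : (n, 3) array of ray direction vectors (any length)
          """
          p = np.asarray(origins, dtype=float)
          d = np.asarray(directions, dtype=float)
          d = d / np.linalg.norm(d, axis=1, keepdims=True)
          A, b = np.zeros((3, 3)), np.zeros(3)
          for pi, ui in zip(p, d):
              P = np.eye(3) - np.outer(ui, ui)   # projector perpendicular to the ray
              A += P
              b += P @ pi
          return np.linalg.solve(A, b)           # estimated isocenter position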

  8. Self-calibration of a noisy multiple-sensor system with genetic algorithms

    NASA Astrophysics Data System (ADS)

    Brooks, Richard R.; Iyengar, S. Sitharama; Chen, Jianhua

    1996-01-01

    This paper explores an image processing application of optimization techniques which entails interpreting noisy sensor data. The application is a generalization of image correlation; we attempt to find the optimal congruence which matches two overlapping gray-scale images corrupted with noise. Both tabu search and genetic algorithms are used to find the parameters which match the two images. A genetic algorithm approach using an elitist reproduction scheme is found to provide significantly superior results. The presentation includes a graphical depiction of the paths taken by tabu search and genetic algorithms when trying to find the best possible match between two corrupted images.

  9. Hydrological Modelling and Sensitivity Analysis Using Topmodel and Simulated Annealing Techniques. Application to the Haute-Mentue Catchment (Switzerland).

    NASA Astrophysics Data System (ADS)

    Balin Talamba, D.; Higy, C.; Joerin, C.; Musy, A.

    The paper presents an application concerning the hydrological modelling of the Haute-Mentue catchment, located in western Switzerland. A simplified version of Topmodel, developed in a Labview programming environment, was applied with the aim of modelling the hydrological processes on this catchment. Previous research carried out in this region outlined the importance of environmental tracers in studying the hydrological behaviour, and an important body of knowledge has been accumulated during this period concerning the mechanisms responsible for runoff generation. In conformity with the theoretical constraints, Topmodel was applied to a Haute-Mentue sub-catchment where tracing experiments showed consistently low contributions of soil water during flood events. The model was applied for two humid periods in 1998. First, the model calibration was done in order to provide the best estimates of total runoff. However, the simulated components (groundwater and rapid flow) showed large deviations from the reality indicated by the tracing experiments. Thus, a new calibration was performed including additional information given by the environmental tracing. The calibration of the model was done by using simulated annealing (SA) techniques, which are easy to implement and statistically allow for convergence to a global minimum. The only problem is that the method is time and computer consuming. To improve this, a version of SA was used which is known as very fast simulated annealing (VFSA). The principles are the same as for the SA technique. The random search is guided by a certain probability distribution and the acceptance criterion is the same as for SA, but VFSA better takes into account the ranges of variation of each parameter. Practice with Topmodel showed that the energy function has different sensitivities along different dimensions of the parameter space. The VFSA algorithm allows a differentiated search in relation to the sensitivity of the parameters. The environmental tracing was used with the aim of constraining the parameter space in order to better simulate the hydrological behaviour of the catchment. VFSA outlined issues for characterising the significance of Topmodel input parameters as well as their uncertainty for the hydrological modelling.
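
    A generic simulated-annealing calibration loop (a minimal sketch, not the VFSA implementation used in the study) looks like the following; the objective function is assumed to return an error measure such as the RMSE between simulated and observed discharge, possibly augmented with tracer-based constraints on the flow components.

      import numpy as np

      def anneal_calibrate(objective, bounds, n_iter=5000, t0=1.0, cooling=0.999, seed=0):
          """Calibrate a parameter vector by simulated annealing within box bounds."""
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds, dtype=float).T
          x = rng.uniform(lo, hi)
          fx, temp = objective(x), t0
          best_x, best_f = x.copy(), fx
          for _ in range(n_iter):
              cand = np.clip(x + rng.normal(scale=0.1 * (hi - lo)), lo, hi)
              fc = objective(cand)
              # Accept improvements always, worse moves with Boltzmann probability.
              if fc < fx or rng.random() < np.exp(-(fc - fx) / temp):
                  x, fx = cand, fc
                  if fc < best_f:
                      best_x, best_f = cand.copy(), fc
              temp *= cooling
          return best_x, best_f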

  10. Evaluation of a physically based quasi-linear and a conceptually based nonlinear Muskingum methods

    NASA Astrophysics Data System (ADS)

    Perumal, Muthiah; Tayfur, Gokmen; Rao, C. Madhusudana; Gurarslan, Gurhan

    2017-03-01

    Two variants of the Muskingum flood routing method formulated to account for the nonlinearity of the channel routing process are investigated in this study. These variant methods are: (1) the three-parameter conceptual Nonlinear Muskingum (NLM) method advocated by Gill in 1978, and (2) the Variable Parameter McCarthy-Muskingum (VPMM) method recently proposed by Perumal and Price in 2013. The VPMM method does not require the rigorous calibration and validation procedures required by the NLM method, owing to established relationships of its parameters with flow and channel characteristics based on hydrodynamic principles. The parameters of the conceptual nonlinear storage equation used in the NLM method were calibrated using Artificial Intelligence Application (AIA) techniques, such as the Genetic Algorithm (GA), Differential Evolution (DE), Particle Swarm Optimization (PSO) and Harmony Search (HS). The calibration was carried out on a given set of hypothetical flood events obtained by routing a given inflow hydrograph in a set of 40 km long prismatic channel reaches using the Saint-Venant (SV) equations. The validation of the calibrated NLM method was investigated using a different set of hypothetical flood hydrographs obtained in the same set of channel reaches used for the calibration studies. Both sets of solutions obtained in the calibration and validation cases using the NLM method were compared with the corresponding solutions of the VPMM method based on pertinent evaluation measures. The results of the study reveal that the physically based VPMM method is capable of accounting for the nonlinear characteristics of flood wave movement better than the conceptually based NLM method, which requires the use of tedious calibration and validation procedures.
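
    For reference, the three-parameter nonlinear Muskingum routing scheme can be sketched in its common textbook form (a hedged illustration, not the exact algorithm of either paper): storage follows S = K[xI + (1-x)O]^m and the continuity equation dS/dt = I - O is stepped forward explicitly.

      import numpy as np

      def nonlinear_muskingum(inflow, dt, K, x, m):
          """Route an inflow hydrograph with the 3-parameter nonlinear Muskingum model.

          inflow  : inflow hydrograph (m^3/s), one value per time step
          dt      : time step, in units consistent with K
          K, x, m : storage constant, weighting factor and nonlinearity exponent
          """
          inflow = np.asarray(inflow, dtype=float)
          outflow = np.empty_like(inflow)
          outflow[0] = inflow[0]                      # assume an initial steady state
          S = K * (x * inflow[0] + (1 - x) * outflow[0]) ** m
          for t in range(len(inflow) - 1):
              S = S + dt * (inflow[t] - outflow[t])   # continuity equation
              # Invert the storage relation S = K[xI + (1-x)O]^m for the new outflow.
              outflow[t + 1] = ((S / K) ** (1.0 / m) - x * inflow[t + 1]) / (1 - x)
          return outflow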

  11. Development of theoretical oxygen saturation calibration curve based on optical density ratio and optical simulation approach

    NASA Astrophysics Data System (ADS)

    Jumadi, Nur Anida; Beng, Gan Kok; Ali, Mohd Alauddin Mohd; Zahedi, Edmond; Morsin, Marlia

    2017-09-01

    The implementation of a surface-based Monte Carlo simulation technique for estimating the oxygen saturation (SaO2) calibration curve is demonstrated in this paper. Generally, the calibration curve is estimated either empirically, using animals as experimental subjects, or derived from mathematical equations. However, determining the calibration curve using animals is time consuming and requires expertise to conduct the experiment. Alternatively, optical simulation techniques have been used widely in the biomedical optics field due to their capability to reproduce real tissue behavior. The mathematical relationship between optical density (OD) and optical density ratio (ODR) associated with SaO2 during systole and diastole is used as the basis for obtaining the theoretical calibration curve. The optical properties corresponding to systolic and diastolic behavior were applied to the tissue model to mimic the optical properties of the tissues. Based on the absorbed ray flux at the detectors, the OD and ODR were successfully calculated. The simulated optical density ratios obtained at every 20 % interval of SaO2 are presented, with a maximum error of 2.17 % when compared with a previous numerical simulation technique (MC model). The findings reveal the potential of the proposed method to be used for extended calibration curve studies using other wavelength pairs.
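
    To make the resulting calibration curve concrete, a small hedged example (simulated points, not the paper's results) fits the familiar empirical linear form SaO2 = a + b*ODR, with b negative, to placeholder (ODR, SaO2) pairs spaced at 20 % intervals.

      import numpy as np

      # Placeholder calibration points at 20 % SaO2 intervals (illustrative only).
      sao2 = np.array([20.0, 40.0, 60.0, 80.0, 100.0])   # percent
      odr = np.array([2.10, 1.75, 1.40, 1.05, 0.70])     # optical density ratio

      # Fit SaO2 = a + b*ODR by least squares (np.polyfit returns slope, intercept).
      b, a = np.polyfit(odr, sao2, 1)
      print(f"SaO2 = {a:.1f} + ({b:.1f}) * ODR")

      def sao2_from_odr(r):
          """Apply the fitted calibration curve to a measured optical density ratio."""
          return a + b * r

      print(sao2_from_odr(1.2))   # estimated SaO2 for ODR = 1.2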

  12. Calibration of the maximum carboxylation velocity (Vcmax) using data mining techniques and ecophysiological data from the Brazilian semiarid region, for use in Dynamic Global Vegetation Models.

    PubMed

    Rezende, L F C; Arenque-Musa, B C; Moura, M S B; Aidar, S T; Von Randow, C; Menezes, R S C; Ometto, J P B H

    2016-06-01

    The semiarid region of northeastern Brazil, the Caatinga, is extremely important due to its biodiversity and endemism. Measurements of plant physiology are crucial to the calibration of Dynamic Global Vegetation Models (DGVMs) that are currently used to simulate the responses of vegetation in the face of global change. In field work carried out in an area of preserved Caatinga forest located in Petrolina, Pernambuco, measurements of carbon assimilation (in response to light and CO2) were performed on 11 individuals of Poincianella microphylla, a native species that is abundant in this region. These data were used to calibrate the maximum carboxylation velocity (Vcmax) used in the INLAND model. The calibration techniques used were Multiple Linear Regression (MLR), and data mining techniques such as the Classification And Regression Tree (CART) and K-MEANS. The results were compared to the UNCALIBRATED model. It was found that simulated Gross Primary Productivity (GPP) reached 72% of observed GPP when using the calibrated Vcmax values, whereas the UNCALIBRATED approach accounted for 42% of observed GPP. Thus, this work shows the benefits of calibrating DGVMs using field ecophysiological measurements, especially in areas where field data are scarce or non-existent, such as in the Caatinga.

  13. Singularity-sensitive gauge-based radar rainfall adjustment methods for urban hydrological applications

    NASA Astrophysics Data System (ADS)

    Wang, L.-P.; Ochoa-Rodríguez, S.; Onof, C.; Willems, P.

    2015-09-01

    Gauge-based radar rainfall adjustment techniques have been widely used to improve the applicability of radar rainfall estimates to large-scale hydrological modelling. However, their use for urban hydrological applications is limited as they were mostly developed based upon Gaussian approximations and therefore tend to smooth off so-called "singularities" (features of a non-Gaussian field) that can be observed in the fine-scale rainfall structure. Overlooking the singularities could be critical, given that their distribution is highly consistent with that of local extreme magnitudes. This deficiency may cause large errors in the subsequent urban hydrological modelling. To address this limitation and improve the applicability of adjustment techniques at urban scales, a method is proposed herein which incorporates a local singularity analysis into existing adjustment techniques and allows the preservation of the singularity structures throughout the adjustment process. In this paper the proposed singularity analysis is incorporated into the Bayesian merging technique and the performance of the resulting singularity-sensitive method is compared with that of the original Bayesian (non singularity-sensitive) technique and the commonly used mean field bias adjustment. This test is conducted using as case study four storm events observed in the Portobello catchment (53 km2) (Edinburgh, UK) during 2011 and for which radar estimates, dense rain gauge and sewer flow records, as well as a recently calibrated urban drainage model were available. The results suggest that, in general, the proposed singularity-sensitive method can effectively preserve the non-normality in local rainfall structure, while retaining the ability of the original adjustment techniques to generate nearly unbiased estimates. Moreover, the ability of the singularity-sensitive technique to preserve the non-normality in rainfall estimates often leads to better reproduction of the urban drainage system's dynamics, particularly of peak runoff flows.
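
    For comparison, the mean field bias adjustment used as a benchmark above takes only a few lines (a generic sketch under assumed array conventions, not the authors' implementation); the singularity-sensitive Bayesian merging itself is considerably more involved.

      import numpy as np

      def mean_field_bias_adjust(radar_field, gauge_values, gauge_pixels, min_rain=0.1):
          """Scale a radar rainfall field by a single gauge/radar bias factor.

          radar_field  : 2-D array of radar rainfall estimates (mm)
          gauge_values : rain gauge accumulations (mm) for the same interval
          gauge_pixels : list of (row, col) radar pixels collocated with the gauges
          """
          radar_at_gauges = np.array([radar_field[r, c] for r, c in gauge_pixels])
          gauges = np.asarray(gauge_values, dtype=float)
          # Use only pairs where both sensors report rainfall, to avoid spurious ratios.
          ok = (radar_at_gauges > min_rain) & (gauges > min_rain)
          if not ok.any():
              return radar_field
          bias = gauges[ok].sum() / radar_at_gauges[ok].sum()
          return radar_field * bias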

  14. A novel Bayesian approach to accounting for uncertainty in fMRI-derived estimates of cerebral oxygen metabolism fluctuations

    PubMed Central

    Simon, Aaron B.; Dubowitz, David J.; Blockley, Nicholas P.; Buxton, Richard B.

    2016-01-01

    Calibrated blood oxygenation level dependent (BOLD) imaging is a multimodal functional MRI technique designed to estimate changes in cerebral oxygen metabolism from measured changes in cerebral blood flow and the BOLD signal. This technique addresses fundamental ambiguities associated with quantitative BOLD signal analysis; however, its dependence on biophysical modeling creates uncertainty in the resulting oxygen metabolism estimates. In this work, we developed a Bayesian approach to estimating the oxygen metabolism response to a neural stimulus and used it to examine the uncertainty that arises in calibrated BOLD estimation due to the presence of unmeasured model parameters. We applied our approach to estimate the CMRO2 response to a visual task using the traditional hypercapnia calibration experiment as well as to estimate the metabolic response to both a visual task and hypercapnia using the measurement of baseline apparent R2′ as a calibration technique. Further, in order to examine the effects of cerebral spinal fluid (CSF) signal contamination on the measurement of apparent R2′, we examined the effects of measuring this parameter with and without CSF-nulling. We found that the two calibration techniques provided consistent estimates of the metabolic response on average, with a median R2′-based estimate of the metabolic response to CO2 of 1.4%, and R2′- and hypercapnia-calibrated estimates of the visual response of 27% and 24%, respectively. However, these estimates were sensitive to different sources of estimation uncertainty. The R2′-calibrated estimate was highly sensitive to CSF contamination and to uncertainty in unmeasured model parameters describing flow-volume coupling, capillary bed characteristics, and the iso-susceptibility saturation of blood. The hypercapnia-calibrated estimate was relatively insensitive to these parameters but highly sensitive to the assumed metabolic response to CO2. PMID:26790354

  15. A novel Bayesian approach to accounting for uncertainty in fMRI-derived estimates of cerebral oxygen metabolism fluctuations.

    PubMed

    Simon, Aaron B; Dubowitz, David J; Blockley, Nicholas P; Buxton, Richard B

    2016-04-01

    Calibrated blood oxygenation level dependent (BOLD) imaging is a multimodal functional MRI technique designed to estimate changes in cerebral oxygen metabolism from measured changes in cerebral blood flow and the BOLD signal. This technique addresses fundamental ambiguities associated with quantitative BOLD signal analysis; however, its dependence on biophysical modeling creates uncertainty in the resulting oxygen metabolism estimates. In this work, we developed a Bayesian approach to estimating the oxygen metabolism response to a neural stimulus and used it to examine the uncertainty that arises in calibrated BOLD estimation due to the presence of unmeasured model parameters. We applied our approach to estimate the CMRO2 response to a visual task using the traditional hypercapnia calibration experiment as well as to estimate the metabolic response to both a visual task and hypercapnia using the measurement of baseline apparent R2' as a calibration technique. Further, in order to examine the effects of cerebral spinal fluid (CSF) signal contamination on the measurement of apparent R2', we examined the effects of measuring this parameter with and without CSF-nulling. We found that the two calibration techniques provided consistent estimates of the metabolic response on average, with a median R2'-based estimate of the metabolic response to CO2 of 1.4%, and R2'- and hypercapnia-calibrated estimates of the visual response of 27% and 24%, respectively. However, these estimates were sensitive to different sources of estimation uncertainty. The R2'-calibrated estimate was highly sensitive to CSF contamination and to uncertainty in unmeasured model parameters describing flow-volume coupling, capillary bed characteristics, and the iso-susceptibility saturation of blood. The hypercapnia-calibrated estimate was relatively insensitive to these parameters but highly sensitive to the assumed metabolic response to CO2. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Noninvasive oxygen monitoring techniques.

    PubMed

    Wahr, J A; Tremper, K K

    1995-01-01

    As this article demonstrates, tremendous progress has been made in the techniques of oxygen measurement and monitoring over the past 50 years. From the early developments during and after World War II, to the most recent applications of solid state and microprocessor technology today, every patient in a critical care situation will have several continuous measurements of oxygenation applied simultaneously. Information therefore is available readily to alert personnel of acute problems and to guide appropriate therapy. The majority of effort to date has been placed on measuring oxygenation of arterial or venous blood. The next generation of devices will attempt to provide information about living tissue. Unlike the devices monitoring arterial or venous oxygen content, no "gold standards" exist for tissue oxygenation, so calibration will be difficult, as will interpretation of the data provided. The application of these devices ultimately may lead to a much better understanding of how disease (and the treatment of disease) alters the utilization of oxygen by the tissues.

  17. X-ray investigations related to the shock history of the Shergotty achondrite

    NASA Technical Reports Server (NTRS)

    Horz, F.; Hanss, R.; Serna, C.

    1986-01-01

    The shock stress suffered by naturally shocked materials from the Shergotty achondrite was studied using X-ray diffraction techniques and experimentally shocked augite and enstatite as standards. The Shergotty pyroxenes revealed the formation of continuous diffraction rings, line broadening, preferred orientation of small scale diffraction domains, and other evidence of substantial lattice disorders. As disclosed by the application of Debye-Scherrer techniques, they are hybrids between single crystals and fine-grained random powders. The pyroxene lattice is very resistant to shock damage on smaller scales. While measurable lattice disaggregation and progressive fragmentation occur below 25 GPa, little additional damage is suffered from application of pressures between 30 to 60 GPa, making pressure calibration of naturally shocked pyroxenes via X-ray methods difficult. Powder diffractometer scans on pure maskelynite fractions of Shergotty revealed small amounts of still coherently diffracting plagioclase, which may contribute to the high refractive indices of the diaplectic feldspar glasses of Shergotty.

  18. Terahertz Measurement of the Water Content Distribution in Wood Materials

    NASA Astrophysics Data System (ADS)

    Bensalem, M.; Sommier, A.; Mindeguia, J. C.; Batsale, J. C.; Pradere, C.

    2018-02-01

    Recently, THz waves have been shown to be an effective technique for investigating the water diffusion within porous media, such as biomaterials or insulation materials. This applicability is due to the sufficient resolution for such applications and the safe levels of radiation. This study aims to achieve contactless absolute water content measurements in the steady-state case in semi-transparent solids (wood) using a transmittance THz wave range setup. First, a calibration method is developed to validate an analytical model based on the Beer-Lambert law, linking the absorption coefficient, the density of the solid, and its water content. Then, an estimation of the water content on a local scale in a transient-state case (drying) is performed. This study shows that THz waves are an effective contactless, safe, and low-cost technique for the measurement of water content in a porous medium, such as wood.
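
    As a rough illustration of the kind of Beer-Lambert relation the abstract refers to, the sketch below links transmittance to water content; the linear dependence of the absorption coefficient on moisture and the symbols (alpha_dry, k, rho, w, d) are assumptions for illustration, not the paper's fitted model.

```latex
% Illustrative Beer-Lambert relation for transmission THz measurements of wood.
\[
  \frac{I}{I_0} = e^{-\alpha d}, \qquad
  \alpha \approx \alpha_{\mathrm{dry}} + k\,\rho\,w
  \;\Longrightarrow\;
  w \approx \frac{-\ln(I/I_0)/d - \alpha_{\mathrm{dry}}}{k\,\rho},
\]
```

    where d is the sample thickness, rho the dry density, and k a calibration coefficient determined from reference samples of known moisture content.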

  19. Empirical transfer functions for stations in the Central California seismological network

    USGS Publications Warehouse

    Bakun, W.H.; Dratler, Jay

    1976-01-01

    A sequence of calibration signals composed of a station identification code, a transient from the release of the seismometer mass at rest from a known displacement from the equilibrium position, and a transient from a known step in voltage to the amplifier input is generated by the automatic daily calibration system (ADCS) now operational in the U.S. Geological Survey central California seismographic network. Documentation of a sequence of interactive programs to compute, from the calibration data, the complex transfer functions for the seismographic system (ground motion through digitizer), the electronics (amplifier through digitizer), and the seismometer alone is presented. The analysis utilizes the Fourier transform technique originally suggested by Espinosa et al. (1962). Section I is a general description of seismographic calibration. Section II contrasts the 'Fourier transform' and the 'least-squares' techniques for analyzing transient calibration signals. Theoretical considerations for the Fourier transform technique used here are described in Section III. Section IV is a detailed description of the sequence of calibration signals generated by the ADCS. Section V is a brief 'cookbook description' of the calibration programs; Section VI contains a detailed sample program execution. Section VII suggests the uses of the resultant empirical transfer functions. Supplemental interactive programs for producing smooth response functions, suitable for reducing seismic data to ground motion, are also documented in Section VII. Appendices A and B contain complete listings of the Fortran source codes while Appendix C is an update containing preliminary results obtained from an analysis of some of the calibration signals from stations in the seismographic network near Oroville, California.
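
    A minimal numpy sketch of the Fourier-transform idea used above (the empirical transfer function as the ratio of output to input spectra of a calibration transient) is given below; the function name and the water-level regularization are illustrative assumptions, not the USGS programs themselves.

```python
import numpy as np

def empirical_transfer_function(input_signal, output_signal, dt):
    """Estimate a complex transfer function H(f) = Output(f) / Input(f) from a
    known calibration input (e.g. a voltage step) and the recorded response."""
    x = np.asarray(input_signal, dtype=float)
    y = np.asarray(output_signal, dtype=float)
    freqs = np.fft.rfftfreq(len(x), d=dt)
    X, Y = np.fft.rfft(x), np.fft.rfft(y)
    # Small "water level" keeps the division stable where the input spectrum is tiny.
    eps = 1e-12 * np.max(np.abs(X))
    H = Y / np.where(np.abs(X) < eps, eps, X)
    return freqs, H
```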

  20. Design of transonic airfoil sections using a similarity theory

    NASA Technical Reports Server (NTRS)

    Nixon, D.

    1978-01-01

    A study of the available methods for transonic airfoil and wing design indicates that the most powerful technique is the numerical optimization procedure. However, the computer time for this method is relatively large because of the amount of computation required in the searches during optimization. The optimization method requires that base and calibration solutions be computed to determine a minimum drag direction. The design space is then computationally searched in this direction; it is these searches that dominate the computation time. A recent similarity theory allows certain transonic flows to be calculated rapidly from the base and calibration solutions. In this paper the application of the similarity theory to design problems is examined with the object of at least partially eliminating the costly searches of the design optimization method. An example of an airfoil design is presented.

  1. Geometric Calibration and Radiometric Correction of the Maia Multispectral Camera

    NASA Astrophysics Data System (ADS)

    Nocerino, E.; Dubbini, M.; Menna, F.; Remondino, F.; Gattelli, M.; Covi, D.

    2017-10-01

    Multispectral imaging is a widely used remote sensing technique, whose applications range from agriculture to environmental monitoring, from food quality checks to cultural heritage diagnostics. A variety of multispectral imaging sensors are available on the market, many of them designed to be mounted on different platforms, especially small drones. This work focuses on the geometric and radiometric characterization of a brand-new, lightweight, low-cost multispectral camera, called MAIA. The MAIA camera is equipped with nine sensors, allowing for the acquisition of images in the visible and near infrared parts of the electromagnetic spectrum. Two versions are available, characterised by different sets of band-pass filters, inspired by the sensors mounted on the WorldView-2 and Sentinel-2 satellites, respectively. The camera details and the developed procedures for the geometric calibration and radiometric correction are presented in the paper.
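
    For readers unfamiliar with the geometric part of such a characterization, the sketch below shows a generic checkerboard-based calibration of a single sensor with OpenCV; the board size, file pattern and variable names are assumptions, and this is not the MAIA-specific procedure described in the paper.

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)                                  # inner checkerboard corners (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points, image_size = [], [], None
for fname in sorted(glob.glob("band1_*.png")):    # file pattern is an assumption
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern, None)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        image_size = gray.shape[::-1]             # (width, height)

# Intrinsic matrix and lens distortion coefficients for this one sensor.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
```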

  2. Design and development of a micro-thermocouple sensor for determining temperature and relative humidity patterns within an airstream.

    PubMed

    Eisner, A D; Martonen, T B

    1989-11-01

    This paper describes the production and calibration of a miniature psychrometer treated with a specially developed porous coating. The investigation was conducted to determine localized patterns of rapidly changing temperature and relative humidity in dynamic flowing gas environments (e.g., with particular attention to future applications to the human respiratory system). The technique involved the use of dry miniature thermocouples and wetted miniature thermocouples coated with boron nitride to act as a wicking material. A precision humidity generator was developed for calibrating the psychrometer. It was found that, in most cases, the measured and expected (i.e., theoretically predicted) relative humidity agreed to within 0.5 to 1.0 percent relative humidity. Procedures that would decrease this discrepancy even further were pinpointed, and advantages of using the miniature psychrometer were assessed.

  3. Virtual Reality Calibration for Telerobotic Servicing

    NASA Technical Reports Server (NTRS)

    Kim, W.

    1994-01-01

    A virtual reality calibration technique of matching a virtual environment of simulated graphics models in 3-D geometry and perspective with actual camera views of the remote site task environment has been developed to enable high-fidelity preview/predictive displays with calibrated graphics overlay on live video.

  4. The development of an electrochemical technique for in situ calibrating of combustible gas detectors

    NASA Technical Reports Server (NTRS)

    Shumar, J. W.; Lantz, J. B.; Schubert, F. H.

    1976-01-01

    A program to determine the feasibility of performing in situ calibration of combustible gas detectors was successfully completed. Several possible techniques for performing the in situ calibration were proposed. The approach that showed the most promise involved the use of a miniature water vapor electrolysis cell for the generation of hydrogen within the flame arrestor of a combustible gas detector to be used for the purpose of calibrating the combustible gas detectors. A preliminary breadboard of the in situ calibration hardware was designed, fabricated and assembled. The breadboard equipment consisted of a commercially available combustible gas detector, modified to incorporate a water vapor electrolysis cell, and the instrumentation required for controlling the water vapor electrolysis and controlling and calibrating the combustible gas detector. The results showed that operation of the water vapor electrolysis at a given current density for a specific time period resulted in the attainment of a hydrogen concentration plateau within the flame arrestor of the combustible gas detector.
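
    The amount of hydrogen generated by such a water vapor electrolysis cell follows directly from Faraday's law; the relation below is a back-of-the-envelope illustration, not the report's design equations.

```latex
% Moles of hydrogen produced by electrolysis current I applied for time t
% (two electrons per H2 molecule).
\[
  n_{\mathrm{H_2}} = \frac{I\,t}{2F}, \qquad F \approx 96\,485\ \mathrm{C\,mol^{-1}}.
\]
```

    For example, 10 mA applied for 60 s yields roughly 3.1 x 10^-6 mol of hydrogen; the concentration reached inside the flame arrestor then depends on its volume and on diffusion losses.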

  5. Impact of bias correction and downscaling through quantile mapping on simulated climate change signal: a case study over Central Italy

    NASA Astrophysics Data System (ADS)

    Sangelantoni, Lorenzo; Russo, Aniello; Gennaretti, Fabio

    2018-02-01

    Quantile mapping (QM) represents a common post-processing technique used to connect climate simulations to impact studies at different spatial scales. Depending on the simulation-observation spatial scale mismatch, QM can be used for two different applications. The first application uses only the bias correction component, establishing transfer functions between observations and simulations at similar spatial scales. The second application includes a statistical downscaling component when point-scale observations are considered. However, knowledge of alterations to the climate change signal (CCS) resulting from these two applications is limited. This study investigates QM impacts on the original temperature and precipitation CCSs when applied according to a bias correction only (BC-only) and a bias correction plus downscaling (BC + DS) application over reference stations in Central Italy. The BC-only application is used to adjust regional climate model (RCM) simulations having the same resolution as the observation grid. The QM BC + DS application adjusts the same simulations to point-wise observations. QM applications alter the CCS mainly for temperature. The BC-only application produces a median CCS approximately 1 °C lower than the original (approximately 4.5 °C). The BC + DS application produces a CCS closer to the original, except over the summer 95th percentile, where substantial amplification of the original CCS resulted. The impacts of the two applications are connected to the ratio between the observed and the simulated standard deviation (STD) of the calibration period. For precipitation, the original CCS is essentially preserved in both applications. Yet, the calibration-period STD ratio cannot predict the QM impact on the precipitation CCS when the simulated STD and mean are similarly misrepresented.
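
    A minimal empirical quantile mapping transfer function of the kind discussed above can be written in a few lines; this is an illustrative sketch (equidistant plotting positions, no parametric fit), not the authors' implementation.

```python
import numpy as np

def quantile_map(sim_values, sim_calib, obs_calib):
    """Empirical quantile mapping: map simulated values onto the observed
    distribution via the CDFs of the calibration period."""
    sim_calib = np.sort(np.asarray(sim_calib, dtype=float))
    obs_calib = np.sort(np.asarray(obs_calib, dtype=float))
    # Non-exceedance probability of each value under the simulated climate...
    p = np.interp(sim_values, sim_calib, np.linspace(0.0, 1.0, len(sim_calib)))
    # ...evaluated on the observed quantile function (the transfer function).
    return np.interp(p, np.linspace(0.0, 1.0, len(obs_calib)), obs_calib)
```

    Whether such a transfer function performs bias correction only or bias correction plus downscaling is decided by the spatial scale of obs_calib (gridded observations versus point-scale station records), which is precisely the distinction examined in the study above.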

  6. Improving MWA/HERA Calibration Using Extended Radio Source Models

    NASA Astrophysics Data System (ADS)

    Cunningham, Devin; Tasker, Nicholas; University of Washington EoR Imaging Team

    2018-01-01

    The formation of the first stars and galaxies in the universe is among the greatest mysteries in astrophysics. Using special purpose radio interferometers, it is possible to detect the faint 21 cm radio line emitted by neutral hydrogen in order to characterize the Epoch of Reionization (EoR) and the formation of the first stars and galaxies. We create better models of extended radio sources by reducing the component count of deconvolved Murchison Widefield Array (MWA) data by up to 90%, while preserving real structure and flux information. This real structure is confirmed by comparisons to observations of the same extended radio sources from the TIFR GMRT Sky Survey (TGSS) and the NRAO VLA Sky Survey (NVSS), which observe in a similar frequency range to the MWA. These sophisticated data reduction techniques not only offer improvements to the calibration of the MWA, but also hold applications for the future sky-based calibration of the Hydrogen Epoch of Reionization Array (HERA). This has the potential to reduce noise in the power spectra from these instruments, and consequently provide a deeper view into the EoR window.

  7. High heat flux measurements and experimental calibrations/characterizations

    NASA Technical Reports Server (NTRS)

    Kidd, Carl T.

    1992-01-01

    Recent progress in techniques employed in the measurement of very high heat-transfer rates in reentry-type facilities at the Arnold Engineering Development Center (AEDC) is described. These advances include thermal analyses applied to transducer concepts used to make these measurements; improved heat-flux sensor fabrication methods, equipment, and procedures for determining the experimental time response of individual sensors; performance of absolute heat-flux calibrations at levels above 2,000 Btu/ft2-sec (2.27 kW/cm2); and innovative methods of performing in-situ run-to-run characterizations of heat-flux probes installed in the test facility. Graphical illustrations of the results of extensive thermal analyses of the null-point calorimeter and coaxial surface thermocouple concepts with application to measurements in aerothermal test environments are presented. Results of time response experiments and absolute calibrations of null-point calorimeters and coaxial thermocouples performed in the laboratory at intermediate to high heat-flux levels are shown. Typical AEDC high-enthalpy arc heater heat-flux data recently obtained with a Calspan-fabricated null-point probe model are included.

  8. Signal Processing and Calibration of Continuous-Wave Focused CO2 Doppler Lidars for Atmospheric Backscatter Measurement

    NASA Technical Reports Server (NTRS)

    Rothermel, Jeffry; Chambers, Diana M.; Jarzembski, Maurice A.; Srivastava, Vandana; Bowdle, David A.; Jones, William D.

    1996-01-01

    Two continuous-wave (CW) focused CO2 Doppler lidars (9.1 and 10.6 micrometers) were developed for airborne in situ aerosol backscatter measurements. The complex path of reliably calibrating these systems, with different signal processors, for accurate derivation of atmospheric backscatter coefficients is documented. Lidar calibration for absolute backscatter measurement for both lidars is based on range response over the lidar sample volume, not solely at focus. Both lidars were calibrated with a new technique using well-characterized aerosols as radiometric standard targets and related to conventional hard-target calibration. A digital signal processor (DSP), a surface acoustic wave spectrum analyzer, and a manually tuned spectrum analyzer were used as signal analyzers. The DSP signals were analyzed with an innovative method of correcting for systematic noise fluctuation; the noise statistics exhibit the chi-square distribution predicted by theory. System parametric studies and detailed calibration improved the accuracy of conversion from the measured signal-to-noise ratio to absolute backscatter. The minimum backscatter sensitivity is approximately 3 x 10(exp -12)/m/sr at 9.1 micrometers and approximately 9 x 10(exp -12)/m/sr at 10.6 micrometers. Sample measurements are shown for a flight over the remote Pacific Ocean in 1990 as part of the NASA Global Backscatter Experiment (GLOBE) survey missions, the first time to our knowledge that 9.1-10.6 micrometer lidar intercomparisons were made. Measurements at 9.1 micrometers, a potential wavelength for space-based lidar remote-sensing applications, are to our knowledge the first based on the rare isotope C-12 O(2)-18 gas.

  9. A GPS measurement system for precise satellite tracking and geodesy

    NASA Technical Reports Server (NTRS)

    Yunck, T. P.; Wu, S.-C.; Lichten, S. M.

    1985-01-01

    NASA is pursuing two key applications of differential positioning with the Global Positioning System (GPS): sub-decimeter tracking of earth satellites and few-centimeter determination of ground-fixed baselines. Key requirements of the two applications include the use of dual-frequency carrier phase data, multiple ground receivers to serve as reference points, simultaneous solution for user position and GPS orbits, and calibration of atmospheric delays using water vapor radiometers. Sub-decimeter tracking will be first demonstrated on the TOPEX oceanographic satellite to be launched in 1991. A GPS flight receiver together with at least six ground receivers will acquire delta range data from the GPS carriers for non-real-time analysis. Altitude accuracies of 5 to 10 cm are expected. For baseline measurements, efforts will be made to obtain precise differential pseudorange by resolving the cycle ambiguity in differential carrier phase. This could lead to accuracies of 2 or 3 cm over a few thousand kilometers. To achieve this, a high-performance receiver is being developed, along with improved calibration and data processing techniques. Demonstrations may begin in 1986.

  10. Neutron detection devices with 6LiF converter layers

    NASA Astrophysics Data System (ADS)

    Finocchiaro, Paolo; Cosentino, Luigi; Meo, Sergio Lo; Nolte, Ralf; Radeck, Desiree

    2018-01-01

    The demand for new thermal neutron detectors as an alternative to 3He tubes in research, industrial, safety and homeland security applications is growing. These needs have triggered research and development activities on new generations of thermal neutron detectors, characterized by reasonable efficiency and gamma rejection comparable to 3He tubes. In this paper we show the state of the art of a promising low-cost technique, based on commercial solid-state silicon detectors coupled with thin neutron converter layers of 6LiF deposited onto carbon fiber substrates. Several configurations were studied with the GEANT4 simulation code, and then calibrated at the PTB Thermal Neutron Calibration Facility. The results show that the measured detection efficiency is well reproduced by the simulations, therefore validating the simulation tool in view of new designs. These neutron detectors have also been tested at neutron beam facilities like ISIS (Rutherford Appleton Laboratory, UK) and n_TOF (CERN), where a few samples are already in operation for beam flux and 2D profile measurements. Forthcoming applications are foreseen for the online monitoring of spent nuclear fuel casks in interim storage sites.

  11. Differential interferometry for measurement of density fluctuations and fluctuation-induced transport (invited)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, L.; Ding, W. X.; Brower, D. L.

    2010-10-15

    Differential interferometry employs two parallel laser beams with a small spatial offset (less than the beam width) and frequency difference (1-2 MHz) using common optics and a single mixer for heterodyne detection. The differential approach allows measurement of the electron density gradient, its fluctuations, as well as the equilibrium density distribution. This novel interferometry technique is immune to fringe skip errors and is particularly useful in harsh plasma environments. Accurate calibration of the beam spatial offset, accomplished by use of a rotating dielectric wedge, is required to enable broad application of this approach. Differential interferometry has been successfully used on the Madison Symmetric Torus reversed-field pinch plasma to directly measure fluctuation-induced transport along with equilibrium density profile evolution during pellet injection. In addition, by combining differential and conventional interferometry, both linear and nonlinear terms of the electron density fluctuation energy equation can be determined, thereby allowing quantitative investigation of the origin of the density fluctuations. The concept, calibration, and application of differential interferometry are presented.

  12. A study for hypergolic vapor sensor development

    NASA Technical Reports Server (NTRS)

    Stetter, J. R.; Tellefsen, K.

    1977-01-01

    In summary, the following tasks were completed within the scope of this work: (1) a portable Monomethylhydrazine analyzer was developed, designed, fabricated and tested. (2) A portable NO2 analyzer was developed, designed, fabricated and tested. (3) Sampling probes and accessories were designed and fabricated for this instrumentation. (4) Improvements and modifications were made to the model 7630 Ecolyzer in preparation for field testing. (5) Instrument calibration procedures and hydrazine handling techniques necessary to the successful application of this hardware were developed.

  13. Application guide for universal source encoding for space

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Miller, Warner H.

    1993-01-01

    Lossless data compression was studied for many NASA missions. The Rice algorithm was demonstrated to provide better performance than other available techniques on most scientific data. A top-level description of the Rice algorithm is first given, along with some new capabilities implemented in both software and hardware forms. Systems issues important for onboard implementation, including sensor calibration, error propagation, and data packetization, are addressed. The latter part of the guide provides twelve case study examples drawn from a broad spectrum of science instruments.
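
    The core of the Rice algorithm mentioned above is a Golomb power-of-two entropy coder applied to preprocessed (predicted and mapped) samples; the minimal encoder/decoder below illustrates only that coding stage, with the adaptive option selection and preprocessing of the flight implementations omitted.

```python
def rice_encode(values, k):
    """Rice (Golomb power-of-two) coding of non-negative integers: each value n is
    split into a quotient q = n >> k (sent in unary) and a k-bit remainder."""
    bits = []
    for n in values:
        q, r = n >> k, n & ((1 << k) - 1)
        bits.extend([1] * q + [0])                              # unary quotient
        bits.extend((r >> i) & 1 for i in reversed(range(k)))   # k-bit remainder, MSB first
    return bits

def rice_decode(bits, count, k):
    out, i = [], 0
    for _ in range(count):
        q = 0
        while bits[i] == 1:        # read the unary part
            q += 1
            i += 1
        i += 1                     # skip the terminating 0
        r = 0
        for _ in range(k):         # read the k-bit remainder
            r = (r << 1) | bits[i]
            i += 1
        out.append((q << k) | r)
    return out

# Round-trip check on small residuals typical after prediction.
data = [0, 1, 3, 2, 7, 4]
assert rice_decode(rice_encode(data, k=2), len(data), k=2) == data
```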

  14. Wire-Strain-Gage Hinge-Moment Indicators for Use in Tests of Airplane Models

    NASA Technical Reports Server (NTRS)

    Edwards, Howard B.

    1944-01-01

    The design and construction of various forms of strain-gage spring units and hinge-moment assemblies are discussed with particular reference to wind-tunnel tests, although the indicators may be used equally well in flight tests. Strain-gage specifications are given, and the techniques of their application and use are described briefly. Testing, calibration and operation of hinge-moment indicators are discussed, and precautions necessary for successful operation are stressed. Difficulties that may be encountered are summarized along with their possible causes.

  15. Simultaneous Luminescence Pressure and Temperature Mapping

    NASA Technical Reports Server (NTRS)

    Buck, Gregory M. (Inventor)

    1998-01-01

    A simultaneous luminescence pressure and temperature mapping system is developed, including improved dye application techniques, for surface temperature and pressure measurements from 5 torr to 1000 torr, with a possible upgrade to a range from 0.5 torr to several atmospheres with improved camera resolution. Adsorbed perylene dye on slip-cast silica is pressure (oxygen) sensitive and reusable to relatively high temperatures (approximately 150 C). Adsorbed luminescence has an approximately linear color shift with temperature, which can be used for independent temperature mapping and brightness pressure calibration with temperature.

  16. Active microwaves

    NASA Technical Reports Server (NTRS)

    Evans, D.; Vidal-Madjar, D.

    1994-01-01

    Research on the use of active microwaves in remote sensing, presented during plenary and poster sessions, is summarized. The main highlights are: calibration techniques are well understood; innovative modeling approaches have been developed which increase active microwave applications (segmentation prior to model inversion, use of ERS-1 scatterometer, simulations); polarization angle and frequency diversity improves characterization of ice sheets, vegetation, and determination of soil moisture (X band sensor study); SAR (Synthetic Aperture Radar) interferometry potential is emerging; use of multiple sensors/extended spectral signatures is important (increase emphasis).

  17. Simultaneous Luminescence Pressure and Temperature Mapping System

    NASA Technical Reports Server (NTRS)

    Buck, Gregory M. (Inventor)

    1995-01-01

    A simultaneous luminescence pressure and temperature mapping system is developed, including improved dye application techniques, for surface temperature and pressure measurements from 5 torr to 1000 torr, with a possible upgrade to a range from 0.5 torr to several atmospheres with improved camera resolution. Adsorbed perylene dye on slip-cast silica is pressure (oxygen) sensitive and reusable to relatively high temperatures (approximately 150 C). Adsorbed luminescence has an approximately linear color shift with temperature, which can be used for independent temperature mapping and brightness pressure calibration with temperature.

  18. The study of molecular spectroscopy by ab initio methods

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.

    1991-01-01

    This review illustrates the potential of theory for solving spectroscopic problems. The accuracy of approximate techniques for including electron correlation have been calibrated by comparison with full configuration-interaction calculations. Examples of the application of ab initio calculations to vibrational, rotational, and electronic spectroscopy are given. It is shown that the state-averaged, complete active space self-consistent field, multireference configuration-interaction procedure provides a good approach for treating several electronic states accurately in a common molecular orbital basis.

  19. Retrieving Storm Electric Fields From Aircraft Field Mill Data. Part 2; Applications

    NASA Technical Reports Server (NTRS)

    Koshak, W. J.; Mach, D. M.; Christian, H. J.; Stewart, M. F.; Bateman, M. G.

    2005-01-01

    The Lagrange multiplier theory and "pitch down method" developed in Part I of this study are applied to complete the calibration of a Citation aircraft that is instrumented with six field mill sensors. When side constraints related to average fields are used, the method performs well in computer simulations. For mill measurement errors of 1 V/m and a 5 V/m error in the mean fair weather field function, the 3-D storm electric field is retrieved to within an error of about 12%. A side constraint that involves estimating the detailed structure of the fair weather field was also tested using computer simulations. For mill measurement errors of 1 V/m, the method retrieves the 3-D storm field to within an error of about 8% if the fair weather field estimate is typically within 1 V/m of the true fair weather field. Using this side constraint and data from fair weather field maneuvers taken on 29 June 2001, the Citation aircraft was calibrated. The resulting calibration matrix was then used to retrieve storm electric fields during a Citation flight on 2 June 2001. The storm field results are encouraging and agree favorably with the results obtained from earlier calibration analyses that were based on iterative techniques.

  20. The development of an efficient mass balance approach for the purity assignment of organic calibration standards.

    PubMed

    Davies, Stephen R; Alamgir, Mahiuddin; Chan, Benjamin K H; Dang, Thao; Jones, Kai; Krishnaswami, Maya; Luo, Yawen; Mitchell, Peter S R; Moawad, Michael; Swan, Hilton; Tarrant, Greg J

    2015-10-01

    The purity determination of organic calibration standards using the traditional mass balance approach is described. Demonstrated examples highlight the potential for bias in each measurement and the need to implement an approach that provides a cross-check for each result, affording fit for purpose purity values in a timely and cost-effective manner. Chromatographic techniques such as gas chromatography with flame ionisation detection (GC-FID) and high-performance liquid chromatography with UV detection (HPLC-UV), combined with mass and NMR spectroscopy, provide a detailed impurity profile allowing an efficient conversion of chromatographic peak areas into relative mass fractions, generally avoiding the need to calibrate each impurity present. For samples analysed by GC-FID, a conservative measurement uncertainty budget is described, including a component to cover potential variations in the response of each unidentified impurity. An alternative approach is also detailed in which extensive purification eliminates the detector response factor issue, facilitating the certification of a super-pure calibration standard which can be used to quantify the main component in less-pure candidate materials. This latter approach is particularly useful when applying HPLC analysis with UV detection. Key to the success of this approach is the application of both qualitative and quantitative (1)H NMR spectroscopy.
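
    The arithmetic behind the approach summarized above is the usual mass balance relation, shown below in a generic form; the symbols are illustrative and the list of subtracted fractions would be adapted to the actual impurity profile.

```latex
% Generic mass balance purity assignment: chromatographic main-peak purity scaled
% by everything the chromatogram cannot see (water, residual solvents, non-volatiles).
\[
  P_{\mathrm{main}} =
  \bigl(1 - w_{\mathrm{water}} - w_{\mathrm{solvents}} - w_{\mathrm{nonvolatiles}}\bigr)
  \times P_{\mathrm{chrom}},
\]
```

    where P_chrom is the main-peak area fraction from GC-FID or HPLC-UV and the w terms are independently measured mass fractions; the quantitative (1)H NMR result then provides the cross-check on the assigned value.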

  1. Application of Allan Deviation to Assessing Uncertainties of Continuous-measurement Instruments, and Optimizing Calibration Schemes

    NASA Astrophysics Data System (ADS)

    Jacobson, Gloria; Rella, Chris; Farinas, Alejandro

    2014-05-01

    Technological advancement of instrumentation in atmospheric and other geoscience disciplines over the past decade has led to a shift from discrete sample analysis to continuous, in-situ monitoring. Standard error analysis used for discrete measurements is not sufficient to assess and compare the error contribution of noise and drift from continuous-measurement instruments, and a different statistical analysis approach should be applied. The Allan standard deviation analysis technique developed for atomic clock stability assessment by David W. Allan [1] can be effectively and gainfully applied to continuous measurement instruments. As an example, P. Werle et al. have applied these techniques to look at signal averaging for atmospheric monitoring by Tunable Diode-Laser Absorption Spectroscopy (TDLAS) [2]. This presentation will build on and translate prior foundational publications to provide contextual definitions and guidelines for the practical application of this analysis technique to continuous scientific measurements. The specific example of a Picarro G2401 Cavity Ringdown Spectroscopy (CRDS) analyzer used for continuous atmospheric monitoring of CO2, CH4 and CO will be used to define the basic features of the Allan deviation, assess factors affecting the analysis, and explore the time-series to Allan deviation plot translation for different types of instrument noise (white noise, linear drift, and interpolated data). In addition, the use of the Allan deviation to optimize and predict the performance of different calibration schemes will be presented. Even though this presentation will use the specific example of the Picarro G2401 CRDS Analyzer for atmospheric monitoring, the objective is to present the information such that it can be successfully applied to other instrument sets and disciplines. [1] D.W. Allan, "Statistics of Atomic Frequency Standards," Proc. IEEE, vol. 54, pp. 221-230, Feb 1966. [2] P. Werle, R. Mücke, F. Slemr, "The Limits of Signal Averaging in Atmospheric Trace-Gas Monitoring by Tunable Diode-Laser Absorption Spectroscopy (TDLAS)," Applied Physics B, vol. 57, pp. 131-139, April 1993.
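
    A minimal non-overlapping Allan deviation for an evenly sampled series can be computed as below; this is an illustrative sketch (an overlapping estimator or an established package would normally be preferred for production use), and the averaging factors chosen are arbitrary.

```python
import numpy as np

def allan_deviation(y, tau0, m_list):
    """Non-overlapping Allan deviation of an evenly sampled series y
    (sample interval tau0) for the averaging factors in m_list."""
    y = np.asarray(y, dtype=float)
    taus, adevs = [], []
    for m in m_list:
        n_bins = len(y) // m
        if n_bins < 2:
            break
        means = y[:n_bins * m].reshape(n_bins, m).mean(axis=1)   # bin averages
        avar = 0.5 * np.mean(np.diff(means) ** 2)                # Allan variance
        taus.append(m * tau0)
        adevs.append(np.sqrt(avar))
    return np.array(taus), np.array(adevs)

# Pure white noise falls off as tau**-0.5 on a log-log Allan plot; a drifting
# instrument departs from that slope at long averaging times.
rng = np.random.default_rng(0)
taus, adevs = allan_deviation(rng.normal(size=100_000), tau0=1.0,
                              m_list=[1, 10, 100, 1000])
```

    Reading the minimum of such a curve gives the averaging time beyond which drift dominates, which is the kind of quantity one would use when weighing how often a continuous analyzer needs to be recalibrated.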

  2. Externally Calibrated Parallel Imaging for 3D Multispectral Imaging Near Metallic Implants Using Broadband Ultrashort Echo Time Imaging

    PubMed Central

    Wiens, Curtis N.; Artz, Nathan S.; Jang, Hyungseok; McMillan, Alan B.; Reeder, Scott B.

    2017-01-01

    Purpose To develop an externally calibrated parallel imaging technique for three-dimensional multispectral imaging (3D-MSI) in the presence of metallic implants. Theory and Methods A fast, ultrashort echo time (UTE) calibration acquisition is proposed to enable externally calibrated parallel imaging techniques near metallic implants. The proposed calibration acquisition uses a broadband radiofrequency (RF) pulse to excite the off-resonance induced by the metallic implant, fully phase-encoded imaging to prevent in-plane distortions, and UTE to capture rapidly decaying signal. The performance of the externally calibrated parallel imaging reconstructions was assessed using phantoms and in vivo examples. Results Phantom and in vivo comparisons to self-calibrated parallel imaging acquisitions show that significant reductions in acquisition times can be achieved using externally calibrated parallel imaging with comparable image quality. Acquisition time reductions are particularly large for fully phase-encoded methods such as spectrally resolved fully phase-encoded three-dimensional (3D) fast spin-echo (SR-FPE), in which scan time reductions of up to 8 min were obtained. Conclusion A fully phase-encoded acquisition with broadband excitation and UTE enabled externally calibrated parallel imaging for 3D-MSI, eliminating the need for repeated calibration regions at each frequency offset. Significant reductions in acquisition time can be achieved, particularly for fully phase-encoded methods like SR-FPE. PMID:27403613

  3. Setup calibration and optimization for comparative digital holography

    NASA Astrophysics Data System (ADS)

    Baumbach, Torsten; Osten, Wolfgang; Kebbel, Volker; von Kopylow, Christoph; Jueptner, Werner

    2004-08-01

    With increasing globalization many enterprises decide to produce the components of their products at different locations all over the world. Consequently, new technologies and strategies for quality control are required. In this context the remote comparison of objects with regard to their shape or response on certain loads is getting more and more important for a variety of applications. For such a task the novel method of comparative digital holography is a suitable tool with interferometric sensitivity. With this technique the comparison in shape or deformation of two objects does not require the presence of both objects at the same place. In contrast to the well known incoherent techniques based on inverse fringe projection this new approach uses a coherent mask for the illumination of the sample object. The coherent mask is created by digital holography to enable the instant access to the complete optical information of the master object at any wanted place. The reconstruction of the mask is done by a spatial light modulator (SLM). The transmission of the digital master hologram to the place of comparison can be done via digital telecommunication networks. Contrary to other interferometric techniques this method enables the comparison of objects with different microstructure. In continuation of earlier reports our investigations are focused here on the analysis of the constraints of the setup with respect to the quality of the hologram reconstruction with a spatial light modulator. For successful measurements the selection of the appropriate reconstruction method and the adequate optical set-up is mandatory. In addition, the use of a SLM for the reconstruction requires the knowledge of its properties for the accomplishment of this method. The investigation results for the display properties such as display curvature, phase shift and the consequences for the technique will be presented. The optimization and the calibration of the set-up and its components lead to improved results in comparative digital holography with respect to the resolution. Examples of measurements before and after the optimization and calibration will be presented.

  4. Non-Uniformity Correction Using Nonlinear Characteristic Performance Curves for Calibration

    NASA Astrophysics Data System (ADS)

    Lovejoy, McKenna Roberts

    Infrared imaging is an expansive field with many applications. Advances in infrared technology have led to a greater demand from both commercial and military sectors. However, a known problem with infrared imaging is its non-uniformity. This non-uniformity stems from the fact that each pixel in an infrared focal plane array has its own photoresponse. Many factors such as exposure time, temperature, and amplifier choice affect how the pixels respond to incoming illumination and thus impact image uniformity. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration based techniques commonly use a linear model to approximate the nonlinear response. This often leaves unacceptable levels of residual non-uniformity. Calibration techniques often have to be repeated during use to continually correct the image. In this dissertation, alternatives to linear NUC algorithms are investigated. The goal of this dissertation is to determine and compare nonlinear non-uniformity correction algorithms. Ideally the results will provide better NUC performance resulting in less residual non-uniformity as well as reduce the need for recalibration. This dissertation will consider new approaches to nonlinear NUC such as higher order polynomials and exponentials. More specifically, a new gain equalization algorithm has been developed. The various nonlinear non-uniformity correction algorithms will be compared with common linear non-uniformity correction algorithms. Performance will be compared based on RMS errors, residual non-uniformity, and the impact quantization has on correction. Performance will be improved by identifying and replacing bad pixels prior to correction. Two bad pixel identification and replacement techniques will be investigated and compared. Performance will be presented in the form of simulation results as well as before and after images taken with short wave infrared cameras. The initial results show, using a third order polynomial with 16-bit precision, significant improvement over the one- and two-point correction algorithms. All algorithms have been implemented in software with satisfactory results, and the third order gain equalization non-uniformity correction algorithm has been implemented in hardware.
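
    As a simple concrete version of the polynomial correction discussed above, the sketch below fits an independent third-order polynomial per pixel from flat-field frames at known flux levels and then applies it; the function names and data layout are assumptions, and this is not the dissertation's gain-equalization algorithm.

```python
import numpy as np

def fit_nuc(calib_frames, target_levels, order=3):
    """Fit a per-pixel polynomial mapping raw response -> reference level.
    calib_frames  : array (n_levels, rows, cols) of raw frames at known flux levels
    target_levels : array (n_levels,) of the corresponding reference levels
    Requires at least order + 1 calibration levels."""
    n, rows, cols = calib_frames.shape
    x = calib_frames.reshape(n, -1)
    coeffs = np.empty((order + 1, rows * cols))
    for p in range(rows * cols):                      # independent fit per pixel
        coeffs[:, p] = np.polyfit(x[:, p], target_levels, order)
    return coeffs.reshape(order + 1, rows, cols)

def apply_nuc(raw_frame, coeffs):
    """Evaluate the per-pixel polynomial on a raw frame (Horner's scheme)."""
    out = np.zeros_like(raw_frame, dtype=float)
    for c in coeffs:                                  # highest-order coefficient first
        out = out * raw_frame + c
    return out
```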

  5. Efficient and robust analysis of complex scattering data under noise in microwave resonators.

    PubMed

    Probst, S; Song, F B; Bushev, P A; Ustinov, A V; Weides, M

    2015-02-01

    Superconducting microwave resonators are reliable circuits widely used for detection and as test devices for material research. A reliable determination of their external and internal quality factors is crucial for many modern applications, which either require fast measurements or operate in the single-photon regime with small signal-to-noise ratios. Here, we use the circle fit technique with diameter correction and provide a step-by-step guide for implementing an algorithm for robust fitting and calibration of complex resonator scattering data in the presence of noise. The speedup and robustness of the analysis are achieved by employing an algebraic rather than an iterative fit technique for the resonance circle.
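
    The non-iterative step at the heart of the circle fit technique can be illustrated with an algebraic (Kasa-style) least-squares fit in the complex S21 plane, as sketched below; cable delay removal, phase fitting and the diameter correction described in the paper are omitted, and the synthetic data are purely illustrative.

```python
import numpy as np

def algebraic_circle_fit(s21):
    """Algebraic circle fit: solve the linear least-squares problem
    a*x + b*y + c = x**2 + y**2, giving centre (a/2, b/2) and
    radius sqrt(c + a**2/4 + b**2/4)."""
    x, y = np.real(s21), np.imag(s21)
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
    centre = complex(a / 2.0, b / 2.0)
    radius = float(np.sqrt(c + (a**2 + b**2) / 4.0))
    return centre, radius

# Noisy synthetic resonance circle of radius 0.4 centred at 0.5 + 0j.
t = np.linspace(0.0, 2.0 * np.pi, 200)
rng = np.random.default_rng(1)
s21 = 0.5 + 0.4 * np.exp(1j * t) + 0.005 * (rng.normal(size=200) + 1j * rng.normal(size=200))
centre, radius = algebraic_circle_fit(s21)
```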

  6. Overview of hypersonic CFD code calibration studies

    NASA Technical Reports Server (NTRS)

    Miller, Charles G.

    1987-01-01

    The topics are presented in viewgraph form and include the following: definitions of computational fluid dynamics (CFD) code validation; the climate in hypersonics and at LaRC when the first 'designed' CFD code calibration study was initiated; methodology from the experimentalist's perspective; hypersonic facilities; measurement techniques; and CFD code calibration studies.

  7. The CCD Photometric Calibration Cookbook

    NASA Astrophysics Data System (ADS)

    Palmer, J.; Davenhall, A. C.

    This cookbook presents simple recipes for the photometric calibration of CCD frames. Using these recipes you can calibrate the brightness of objects measured in CCD frames into magnitudes in standard photometric systems, such as the Johnson-Morgan UBV system. The recipes use standard software available at all Starlink sites. The topics covered include: selecting standard stars, measuring instrumental magnitudes and calibrating instrumental magnitudes into a standard system. The recipes are appropriate for use with data acquired with optical CCDs and filters, operated in standard ways, and describe the usual calibration technique of observing standard stars. The software is robust and reliable, but the techniques are usually not suitable where very high accuracy is required. In addition to the recipes and scripts, sufficient background material is presented to explain the procedures and techniques used. The treatment is deliberately practical rather than theoretical, in keeping with the aim of providing advice on the actual calibration of observations. This cookbook is aimed firmly at people who are new to astronomical photometry. Typical readers might have a set of photometric observations to reduce (perhaps observed by a colleague) or be planning a programme of photometric observations, perhaps for the first time. No prior knowledge of astronomical photometry is assumed. The cookbook is not aimed at experts in astronomical photometry. Many finer points are omitted for clarity and brevity. Also, in order to make the most accurate possible calibration of high-precision photometry, it is usually necessary to use bespoke software tailored to the observing programme and photometric system you are using.
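
    The calibration such recipes perform ultimately amounts to fitting the coefficients of a standard transformation between instrumental and catalogue magnitudes, of the form sketched below; the coefficient names are illustrative.

```latex
% Typical photometric calibration for the V band: zero point, atmospheric
% extinction (airmass X) and a colour term fitted to standard-star observations.
\[
  V = v_{\mathrm{inst}} + Z_V - k_V X + c_V\,(B - V),
  \qquad
  v_{\mathrm{inst}} = -2.5 \log_{10}\!\left(\frac{N_{\mathrm{counts}}}{t_{\mathrm{exp}}}\right),
\]
```

    with Z_V the zero point, k_V the extinction coefficient and c_V the colour term, all determined by least squares from the standard-star measurements.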

  8. Self-calibration of photometric redshift scatter in weak-lensing surveys

    DOE PAGES

    Zhang, Pengjie; Pen, Ue -Li; Bernstein, Gary

    2010-06-11

    Photo-z errors, especially catastrophic errors, are a major uncertainty for precision weak lensing cosmology. We find that the shear-(galaxy number) density and density-density cross correlation measurements between photo-z bins, available from the same lensing surveys, contain valuable information for self-calibration of the scattering probabilities between the true-z and photo-z bins. The self-calibration technique we propose relies on neither cosmological priors nor parameterization of the photo-z probability distribution function, and preserves all of the cosmological information available from shear-shear measurement. We estimate the calibration accuracy through the Fisher matrix formalism. We find that, for advanced lensing surveys such as the planned stage IV surveys, the rate of photo-z outliers can be determined with statistical uncertainties of 0.01-1% for z < 2 galaxies. Among the several sources of calibration error that we identify and investigate, the galaxy distribution bias is likely the most dominant systematic error, whereby photo-z outliers have different redshift distributions and/or bias than non-outliers from the same bin. This bias affects all photo-z calibration techniques based on correlation measurements. As a result, galaxy bias variations of O(0.1) produce biases in photo-z outlier rates similar to the statistical errors of our method, so this galaxy distribution bias may bias the reconstructed scatters at the several-σ level, but is unlikely to completely invalidate the self-calibration technique.

  9. An on-line calibration technique for improved blade by blade tip clearance measurement

    NASA Astrophysics Data System (ADS)

    Sheard, A. G.; Westerman, G. C.; Killeen, B.

    A description of a capacitance-based tip clearance measurement system which integrates a novel technique for calibrating the capacitance probe in situ is presented. The on-line calibration system allows the capacitance probe to be calibrated immediately prior to use, providing substantial operational advantages and maximizing measurement accuracy. The possible error sources when it is used in service are considered, and laboratory studies of performance to ascertain their magnitude are discussed. The 1.2-mm diameter FM capacitance probe is demonstrated to be insensitive to variations in blade tip thickness from 1.25 to 1.45 mm. Over typical compressor blading the probe's range was four times the variation in blade to blade clearance encountered in engine run components.

  10. Calibration of the C-14 timescale over the past 30,000 years using mass spectrometric U-Th ages from Barbados corals

    NASA Technical Reports Server (NTRS)

    Bard, Edouard; Hamelin, Bruno; Fairbanks, Richard G.; Zindler, Alan

    1990-01-01

    Uranium-thorium ages obtained by mass spectrometry from corals raised off the island of Barbados confirm the high precision of this technique over at least the past 30,000 years. Comparison of the U-Th ages with C-14 ages obtained on the Holocene samples shows that the U-Th ages are accurate, because they accord with the dendrochronological calibration. Before 9,000 yr BP, the C-14 ages are systematically younger than the U-Th ages, with a maximum difference of about 3500 yr at about 20,000 yr BP. The U-Th technique thus provides a way of calibrating the radiocarbon timescale beyond the range of dendrochronological calibration.

  11. ITER-relevant calibration technique for soft x-ray spectrometer.

    PubMed

    Rzadkiewicz, J; Książek, I; Zastrow, K-D; Coffey, I H; Jakubowska, K; Lawson, K D

    2010-10-01

    The ITER-oriented JET research program brings new requirements for low-Z impurity monitoring, in particular for Be, the future main wall component of JET and ITER. Monitoring based on Bragg spectroscopy requires an absolute sensitivity calibration, which is challenging for large tokamaks. This paper describes both “component-by-component” and “continua” calibration methods used for the Be IV channel (75.9 Å) of the Bragg rotor spectrometer deployed on JET. The calibration techniques presented here rely on multiorder reflectivity calculations and measurements of continuum radiation emitted from helium plasmas, which offer excellent conditions for the absolute photon flux calibration due to their low level of impurities. It was found that the component-by-component method gives results that are four times higher than those obtained by means of the continua method. A better understanding of this discrepancy requires further investigation.

  12. General Matrix Inversion Technique for the Calibration of Electric Field Sensor Arrays on Aircraft Platforms

    NASA Technical Reports Server (NTRS)

    Mach, D. M.; Koshak, W. J.

    2007-01-01

    A matrix calibration procedure has been developed that uniquely relates the electric fields measured at the aircraft with the external vector electric field and net aircraft charge. The calibration method can be generalized to any reasonable combination of electric field measurements and aircraft. A calibration matrix is determined for each aircraft that represents the individual instrument responses to the external electric field. The aircraft geometry and configuration of field mills (FMs) uniquely define the matrix. The matrix can then be inverted to determine the external electric field and net aircraft charge from the FM outputs. A distinct advantage of the method is that if one or more FMs need to be eliminated or deemphasized (e.g., due to a malfunction), it is a simple matter to reinvert the matrix without the malfunctioning FMs. To demonstrate the calibration technique, data are presented from several aircraft programs (ER-2, DC-8, Altus, and Citation).
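
    A minimal least-squares version of this matrix approach is sketched below: with a calibration matrix C relating the unknowns (Ex, Ey, Ez and net charge q) to the mill outputs, the unknowns follow from a pseudo-inverse, and excluding a malfunctioning mill is just a matter of dropping its row. The matrix values and function name here are placeholders, not a real aircraft calibration.

```python
import numpy as np

# Placeholder calibration matrix: one row per field mill, one column per unknown
# (Ex, Ey, Ez, q). In practice C comes from the aircraft calibration manoeuvres.
C = np.random.default_rng(1).normal(size=(6, 4))

def retrieve_field(mill_outputs, C, drop=()):
    """Least-squares retrieval of (Ex, Ey, Ez, q) from field mill outputs.
    `drop` lists indices of malfunctioning mills to exclude before inversion."""
    keep = [i for i in range(C.shape[0]) if i not in set(drop)]
    x, *_ = np.linalg.lstsq(C[keep, :],
                            np.asarray(mill_outputs, dtype=float)[keep],
                            rcond=None)             # pseudo-inverse solution
    return x                                        # Ex, Ey, Ez, q

truth = np.array([10.0, -5.0, 50.0, 2.0])
mills = C @ truth
print(retrieve_field(mills, C))              # recovers the truth
print(retrieve_field(mills, C, drop=[3]))    # same retrieval with mill 3 excluded
```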

  13. Two imaging techniques for 3D quantification of pre-cementation space for CAD/CAM crowns.

    PubMed

    Rungruanganunt, Patchanee; Kelly, J Robert; Adams, Douglas J

    2010-12-01

    Internal three-dimensional (3D) "fit" of prostheses to prepared teeth is likely more important clinically than "fit" judged only at the level of the margin (i.e. marginal "opening"). This work evaluates two techniques for quantitatively defining 3D "fit", both using pre-cementation space impressions: X-ray microcomputed tomography (micro-CT) and quantitative optical analysis. Both techniques are of interest for comparison of CAD/CAM system capabilities and for documenting "fit" as part of clinical studies. Pre-cementation space impressions were taken of a single zirconia coping on its die using a low viscosity poly(vinyl siloxane) impression material. Calibration specimens of this material were fabricated between the measuring platens of a micrometre. Both calibration curves and pre-cementation space impression data sets were obtained by examination using micro-CT and quantitative optical analysis. Regression analysis was used to compare calibration curves with calibration sets. Micro-CT calibration data showed tighter 95% confidence intervals and was able to measure over a wider thickness range than for the optical technique. Regions of interest (e.g., lingual, cervical) were more easily analysed with optical image analysis and this technique was more suitable for extremely thin impression walls (<10-15μm). Specimen preparation is easier for micro-CT and segmentation parameters appeared to capture dimensions accurately. Both micro-CT and the optical method can be used to quantify the thickness of pre-cementation space impressions. Each has advantages and limitations but either technique has the potential for use as part of clinical studies or CAD/CAM protocol optimization. Copyright © 2010 Elsevier Ltd. All rights reserved.

  14. In situ calibration of inductively coupled plasma-atomic emission and mass spectroscopy

    DOEpatents

    Braymen, Steven D.

    1996-06-11

    A method and apparatus for in situ addition calibration of an inductively coupled plasma atomic emission spectrometer or mass spectrometer using a precision gas metering valve to introduce a volatile calibration gas of an element of interest directly into an aerosol particle stream. The present in situ calibration technique is suitable for various remote, on-site sampling systems such as laser ablation or nebulization.

  15. Borehole geophysics applied to ground-water investigations

    USGS Publications Warehouse

    Keys, W.S.

    1990-01-01

    The purpose of this manual is to provide hydrologists, geologists, and others who have the necessary background in hydrogeology with the basic information needed to apply the most useful borehole-geophysical-logging techniques to the solution of problems in ground-water hydrology. Geophysical logs can provide information on the construction of wells and on the character of the rocks and fluids penetrated by those wells, as well as on changes in the character of these factors over time. The response of well logs is caused by petrophysical factors, by the quality, temperature, and pressure of interstitial fluids, and by ground-water flow. Qualitative and quantitative analysis of analog records and computer analysis of digitized logs are used to derive geohydrologic information. This information can then be extrapolated vertically within a well and laterally to other wells using logs. The physical principles by which the mechanical and electronic components of a logging system measure properties of rocks, fluids, and wells, as well as the principles of measurement, must be understood if geophysical logs are to be interpreted correctly. Planning a logging operation involves selecting the equipment and the logs most likely to provide the needed information. Information on well construction and geohydrology is needed to guide this selection. Quality control of logs is an important responsibility of both the equipment operator and the log analyst and requires both calibration and well-site standardization of equipment. Logging techniques that are widely used in ground-water hydrology or that have significant potential for application to this field include spontaneous potential, resistance, resistivity, gamma, gamma spectrometry, gamma-gamma, neutron, acoustic velocity, acoustic televiewer, caliper, and fluid temperature, conductivity, and flow. The following topics are discussed for each of these techniques: principles and instrumentation, calibration and standardization, volume of investigation, extraneous effects, and interpretation and applications.

  16. Borehole geophysics applied to ground-water investigations

    USGS Publications Warehouse

    Keys, W.S.

    1988-01-01

    The purpose of this manual is to provide hydrologists, geologists, and others who have the necessary training with the basic information needed to apply the most useful borehole-geophysical-logging techniques to the solution of problems in ground-water hydrology. Geophysical logs can provide information on the construction of wells and on the character of the rocks and fluids penetrated by those wells, in addition to changes in the character of these factors with time. The response of well logs is caused by petrophysical factors; the quality, temperature, and pressure of interstitial fluids; and ground-water flow. Qualitative and quantitative analysis of the analog records and computer analysis of digitized logs are used to derive geohydrologic information. This information can then be extrapolated vertically within a well and laterally to other wells using logs. The physical principles by which the mechanical and electronic components of a logging system measure properties of rocks, fluids and wells, and the principles of measurement need to be understood to correctly interpret geophysical logs. Planning the logging operation involves selecting the equipment and the logs most likely to provide the needed information. Information on well construction and geohydrology is needed to guide this selection. Quality control of logs is an important responsibility of both the equipment operator and log analyst and requires both calibration and well-site standardization of equipment. Logging techniques that are widely used in ground-water hydrology or that have significant potential for application to this field include: spontaneous potential, resistance, resistivity, gamma, gamma spectrometry, gamma-gamma, neutron, acoustic velocity, acoustic televiewer, caliper, and fluid temperature, conductivity, and flow. The following topics are discussed for each of these techniques: principles and instrumentation, calibration and standardization, volume of investigation, extraneous effects, and interpretation and applications.

  17. Performance of thin CaSO4:Dy pellets for calibration of a Sr90+Y90 source

    NASA Astrophysics Data System (ADS)

    Oliveira, M. L.; Caldas, L. V. E.

    2007-09-01

    Because of the long half-life of the radionuclide, Sr90+Y90 plane or concave sources used in brachytherapy have to be calibrated initially by the manufacturer and then routinely while they are in use. Plane applicators can be calibrated against a conventional extrapolation chamber, but concave sources, because of their geometry, should be calibrated using relative dosimeters such as thermoluminescent (TL) materials. Thin CaSO4:Dy pellets are produced at IPEN specially for beta radiation detection. Previous work showed the feasibility of this material for the dosimetry of Sr90+Y90 sources over a wide range of absorbed dose in air. The aim of this work was to study the usefulness of these pellets for the calibration of a Sr90+Y90 concave applicator. To reach this objective, a special phantom with semispherical geometry was designed and manufactured in PTFE. Because of the dependence of the TL response on the mass of the pellet, the response of each pellet was normalized by its mass in order to reduce the dispersion in the TL response. Important characteristics of this material were obtained with reference to a standard Sr90+Y90 source, and the pellets were calibrated against a plane applicator; they were then used to calibrate the concave applicator.
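
    The mass normalization and two-step calibration described above lend themselves to a short numerical sketch. The pellet masses, TL readings, and reference dose below are assumed placeholders, not data from the study.

```python
# Hedged sketch of mass normalization and calibration of TL pellets.
# All numbers are invented placeholders.

import numpy as np

tl_readings = np.array([152.0, 148.5, 160.2, 155.7])  # TL signal (arbitrary units)
pellet_mass = np.array([19.8, 19.2, 20.6, 20.1])      # pellet mass (mg)

# Normalize each pellet's response by its mass to reduce dispersion.
normalized = tl_readings / pellet_mass

# Calibrate against the plane applicator, assuming a known delivered dose.
reference_dose = 2.5  # Gy, assumed dose from the plane applicator
calibration_factor = reference_dose / normalized.mean()  # Gy per normalized TL unit

# Apply the factor to readings taken on the concave-applicator phantom.
concave_normalized = np.array([148.0, 151.3]) / np.array([19.5, 20.0])
print(calibration_factor * concave_normalized)  # estimated dose (Gy) per position
```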

  18. Regionalisation of a distributed method for flood quantiles estimation: Revaluation of local calibration hypothesis to enhance the spatial structure of the optimised parameter

    NASA Astrophysics Data System (ADS)

    Odry, Jean; Arnaud, Patrick

    2016-04-01

    The SHYREG method (Aubert et al., 2014) associates a stochastic rainfall generator and a rainfall-runoff model to produce rainfall and flood quantiles on a 1 km2 mesh covering the whole French territory. The rainfall generator is based on the description of rainy events by descriptive variables following probability distributions and is characterised by a high stability. This stochastic generator is fully regionalised, and the rainfall-runoff transformation is calibrated with a single parameter. Thanks to the stability of the approach, calibration can be performed against only the flood quantiles associated with observed frequencies, which can be extracted from relatively short time series. The aggregation of SHYREG flood quantiles to the catchment scale is performed using a single areal reduction factor technique applied over the whole territory. Past studies demonstrated the accuracy of SHYREG flood quantile estimation for catchments where flow data are available (Arnaud et al., 2015). Nevertheless, the parameter of the rainfall-runoff model is calibrated independently for each target catchment. As a consequence, this parameter plays a corrective role and compensates for approximations and modelling errors, which makes it difficult to identify its proper spatial pattern. It is an inherent objective of the SHYREG approach to be completely regionalised in order to provide a complete and accurate flood quantile database throughout France. Consequently, it appears necessary to identify the model configuration in which the calibrated parameter can be regionalised with acceptable performance. Re-evaluating some of the method's hypotheses is a necessary step before regionalisation. In particular, including or modifying the spatial variability of imposed parameters (such as production and transfer reservoir sizes, base-flow addition, and the quantile aggregation function) should lead to more realistic values of the single calibrated parameter. The objective of the work presented here is to develop a SHYREG evaluation scheme focusing on both local and regional performance. Indeed, it is necessary to maintain the accuracy of at-site flood quantile estimation while identifying a configuration leading to a satisfactory spatial pattern of the calibrated parameter. This ability to be regionalised can be appraised by combining common regionalisation techniques with split-sample validation tests on a set of around 1,500 catchments representing the whole diversity of French physiography. In addition, the presence of many nested catchments and a size-based split-sample validation make it possible to assess the relevance of the spatial structure of the calibrated parameter inside the largest catchments. The application of this multi-objective evaluation leads to the selection of a version of SHYREG more suitable for regionalisation. References: Arnaud, P., Cantet, P., Aubert, Y., 2015. Relevance of an at-site flood frequency analysis method for extreme events based on stochastic simulation of hourly rainfall. Hydrological Sciences Journal, in press. DOI:10.1080/02626667.2014.965174. Aubert, Y., Arnaud, P., Ribstein, P., Fine, J.A., 2014. The SHYREG flow method - application to 1605 basins in metropolitan France. Hydrological Sciences Journal, 59(5): 993-1005. DOI:10.1080/02626667.2014.902061
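
    To make the single-parameter calibration concrete, the sketch below fits one rainfall-runoff parameter by minimizing the mismatch between simulated and observed flood quantiles. It is not the SHYREG code: the quantile model, observed quantiles, and parameter bounds are invented stand-ins.

```python
# Illustrative sketch (not SHYREG): calibrate a single parameter so that
# simulated flood quantiles match quantiles estimated from observed series.

import numpy as np
from scipy.optimize import minimize_scalar

obs_quantiles = np.array([120.0, 180.0, 240.0])  # m3/s for T = 2, 5, 10 years
return_periods = np.array([2.0, 5.0, 10.0])

def simulated_quantiles(theta):
    """Stand-in for the stochastic simulation: flood quantiles as a simple
    function of the single calibrated parameter theta."""
    return theta * 100.0 * np.log(return_periods)

def objective(theta):
    # Relative squared error between simulated and observed quantiles.
    return np.sum(((simulated_quantiles(theta) - obs_quantiles) / obs_quantiles) ** 2)

result = minimize_scalar(objective, bounds=(0.1, 10.0), method="bounded")
print(result.x)  # calibrated parameter for this catchment
```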

  19. Applying modern psychometric techniques to melodic discrimination testing: Item response theory, computerised adaptive testing, and automatic item generation.

    PubMed

    Harrison, Peter M C; Collins, Tom; Müllensiefen, Daniel

    2017-06-15

    Modern psychometric theory provides many useful tools for ability testing, such as item response theory, computerised adaptive testing, and automatic item generation. However, these techniques have yet to be integrated into mainstream psychological practice. This is unfortunate, because modern psychometric techniques can bring many benefits, including sophisticated reliability measures, improved construct validity, avoidance of exposure effects, and improved efficiency. In the present research we therefore use these techniques to develop a new test of a well-studied psychological capacity: melodic discrimination, the ability to detect differences between melodies. We calibrate and validate this test in a series of studies. Studies 1 and 2 respectively calibrate and validate an initial test version, while Studies 3 and 4 calibrate and validate an updated test version incorporating additional easy items. The results support the new test's viability, with evidence for strong reliability and construct validity. We discuss how these modern psychometric techniques may also be profitably applied to other areas of music psychology and psychological science in general.
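
    As a hedged sketch of two of the ideas named above, the code below evaluates a two-parameter logistic (2PL) item response model and selects the next item by maximum Fisher information, as a computerised adaptive test might. The item parameters and ability estimate are invented, not taken from the melodic discrimination test.

```python
# Minimal 2PL IRT sketch with information-based adaptive item selection.
# Item parameters (a, b) and the ability estimate are invented.

import numpy as np

def p_correct(theta, a, b):
    """2PL probability of a correct response at ability theta,
    given discrimination a and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = p_correct(theta, a, b)
    return a ** 2 * p * (1.0 - p)

items = np.array([[1.2, -0.5], [0.8, 0.0], [1.5, 0.7], [1.0, 1.4]])  # (a, b) pairs
theta_hat = 0.3  # current ability estimate
info = [item_information(theta_hat, a, b) for a, b in items]
print(int(np.argmax(info)))  # index of the most informative item to administer next
```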

  20. Multivariate analysis of remote LIBS spectra using partial least squares, principal component analysis, and related techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clegg, Samuel M; Barefield, James E; Wiens, Roger C

    2008-01-01

    Quantitative analysis with LIBS traditionally employs calibration curves that are complicated by chemical matrix effects. These chemical matrix effects influence the LIBS plasma and the ratio of elemental composition to elemental emission line intensity. Consequently, LIBS calibration typically requires a priori knowledge of the unknown, so that a series of calibration standards similar to the unknown can be employed. In this paper, three new multivariate analysis (MVA) techniques are employed to analyze the LIBS spectra of 18 disparate igneous and highly metamorphosed rock samples. Partial Least Squares (PLS) analysis is used to generate a calibration model from which unknown samples can be analyzed. Principal Components Analysis (PCA) and Soft Independent Modeling of Class Analogy (SIMCA) are employed to generate a model and predict the rock type of the samples. These MVA techniques appear to exploit the matrix effects associated with the chemistries of these 18 samples.
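
    A minimal sketch of this multivariate workflow, using scikit-learn, is given below. The spectra and compositions are random placeholders rather than LIBS measurements, and the component counts are assumptions.

```python
# Hedged sketch of PLS calibration and PCA projection for spectra.
# Data are random placeholders, not LIBS spectra.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
spectra = rng.random((18, 500))    # 18 samples x 500 spectral channels
composition = rng.random((18, 3))  # e.g. three oxide weight fractions

# PLS: regress composition on the full spectra (calibration model).
pls = PLSRegression(n_components=5)
pls.fit(spectra, composition)
predicted = pls.predict(spectra)   # in practice, apply to unknown spectra

# PCA: project spectra onto a few scores for rock-type classification.
pca = PCA(n_components=3)
scores = pca.fit_transform(spectra)
print(predicted.shape, scores.shape)
```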
