Sample records for calibrators performance evaluation

  1. Method calibration of the model 13145 infrared target projectors

    NASA Astrophysics Data System (ADS)

    Huang, Jianxia; Gao, Yuan; Han, Ying

    2014-11-01

    The SBIR Model 13145 Infrared Target Projector (hereafter "the Evaluation Unit") is used to characterize the performance of infrared imaging systems. Test items include SiTF, MTF, NETD, MRTD, MDTD, and NPS. The projector comprises two area blackbodies, a 12-position target wheel, and an all-reflective collimator. It provides high-spatial-frequency differential targets; these precision differential targets are imaged by the infrared imaging system under test and converted photoelectrically into analog or digital signals. Application software (IR Windows 2001) then evaluates the performance of the infrared imaging system. For calibration of the unit as a whole, the distributed components are first calibrated separately: the area blackbodies are calibrated according to the applicable calibration specification, the all-reflective collimator is calibrated by correcting its error factors, the radiance of the infrared target projector is calibrated with an SR5000 spectral radiometer, and the systematic errors are analyzed. For the parameters of the infrared imaging system itself, an integrated evaluation method is needed. Following GJB2340-1995, General specification for military thermal imaging sets, the imaging-system parameters were tested and the results compared with those from an Optical Calibration Testing Laboratory, with the goal of establishing the true calibration performance of the Evaluation Unit.

  2. Evaluation and Enhancement of Calibration in the American College of Surgeons NSQIP Surgical Risk Calculator.

    PubMed

    Liu, Yaoming; Cohen, Mark E; Hall, Bruce L; Ko, Clifford Y; Bilimoria, Karl Y

    2016-08-01

    The American College of Surgeons (ACS) NSQIP Surgical Risk Calculator has been widely adopted as a decision aid and informed consent tool by surgeons and patients. Previous evaluations showed excellent discrimination and combined discrimination and calibration, but model calibration alone, and the potential benefits of recalibration, were not explored. Because lack of calibration can lead to systematic errors in assessing surgical risk, our objective was to assess calibration and determine whether spline-based adjustments could improve it. We evaluated Surgical Risk Calculator model calibration, as well as discrimination, for each of 11 outcomes modeled from nearly 3 million patients (2010 to 2014). Using independent random subsets of data, we evaluated model performance for the Development (60% of records), Validation (20%), and Test (20%) datasets, where prediction equations from the Development dataset were recalibrated using restricted cubic splines estimated from the Validation dataset. We also evaluated performance on data subsets composed of higher-risk operations. The nonrecalibrated Surgical Risk Calculator performed well, but there was a slight tendency for predicted risk to be overestimated for lowest- and highest-risk patients and underestimated for moderate-risk patients. After recalibration, this distortion was eliminated, and p values for miscalibration were most often nonsignificant. Calibration was also excellent for subsets of higher-risk operations, though observed calibration was reduced due to instability associated with smaller sample sizes. Performance of NSQIP Surgical Risk Calculator models was shown to be excellent and improved with recalibration. Surgeons and patients can rely on the calculator to provide accurate estimates of surgical risk. Copyright © 2016 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
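
    The restricted-cubic-spline recalibration step described above can be sketched as follows. This is a generic logistic recalibration in Python, not the authors' code; the basis parameterization (Harrell-style), knot placement, and Newton solver are illustrative assumptions:

```python
import numpy as np

def rcs_basis(x, knots):
    """Restricted (natural) cubic spline basis, Harrell's parameterization."""
    k = np.asarray(knots, dtype=float)
    K = len(k)
    cube = lambda a: np.maximum(x - a, 0.0) ** 3
    cols = [x]
    for j in range(K - 2):
        cols.append(cube(k[j])
                    - cube(k[K - 2]) * (k[K - 1] - k[j]) / (k[K - 1] - k[K - 2])
                    + cube(k[K - 1]) * (k[K - 2] - k[j]) / (k[K - 1] - k[K - 2]))
    return np.column_stack(cols)

def recalibrate(p_pred, y, n_knots=4):
    """Refit binary outcomes y against a spline of logit(p_pred):
    logistic regression solved by Newton-Raphson iterations."""
    z = np.log(p_pred / (1.0 - p_pred))          # logit of original predictions
    knots = np.quantile(z, np.linspace(0.05, 0.95, n_knots))
    X = np.column_stack([np.ones_like(z), rcs_basis(z, knots)])
    beta = np.zeros(X.shape[1])
    for _ in range(30):                           # Newton steps for the logistic MLE
        mu = 1.0 / (1.0 + np.exp(-(X @ beta)))
        w = mu * (1.0 - mu)
        beta += np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (y - mu))
    return 1.0 / (1.0 + np.exp(-(X @ beta)))
```

    Because the refitted model contains an intercept, the recalibrated risks average out to the observed event rate (calibration-in-the-large), while the spline terms can correct the over- and underestimation at the extremes of the risk range.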

  3. An Accurate Projector Calibration Method Based on Polynomial Distortion Representation

    PubMed Central

    Liu, Miao; Sun, Changku; Huang, Shujun; Zhang, Zonghua

    2015-01-01

    In structured light measurement systems and 3D printing systems, the errors caused by optical distortion of a digital projector affect measurement precision and cannot be ignored. Existing methods to calibrate the projection distortion rely on a calibration plate and photogrammetry, so the calibration performance is largely affected by the quality of the plate and the imaging system. This paper proposes a new projector calibration approach that makes use of photodiodes to directly detect the light emitted from a digital projector. By analyzing the output sequence of the photoelectric module, the pixel coordinates can be accurately obtained by the curve fitting method. A polynomial distortion representation is employed to reduce the residuals of the traditional distortion representation model. Experimental results and performance evaluation show that the proposed calibration method avoids most of the disadvantages of traditional methods and achieves higher accuracy. The proposed method is also practically applicable for evaluating the geometric optical performance of other optical projection systems. PMID:26492247
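
    As a concrete illustration of what a polynomial distortion representation looks like next to the traditional model, the sketch below applies the classic radial-tangential (Brown-Conrady) model and then fits a generic bivariate polynomial to it by least squares. This is a hedged sketch of the general technique, not the paper's exact parameterization; coefficient values and the fitting routine are illustrative:

```python
import numpy as np

def apply_distortion(xn, yn, k=(0.0, 0.0), p=(0.0, 0.0)):
    """Traditional radial-tangential (Brown-Conrady) distortion on
    normalized image coordinates, with radial terms k1, k2 and
    tangential terms p1, p2."""
    r2 = xn**2 + yn**2
    radial = 1.0 + k[0] * r2 + k[1] * r2**2
    xd = xn * radial + 2 * p[0] * xn * yn + p[1] * (r2 + 2 * xn**2)
    yd = yn * radial + p[0] * (r2 + 2 * yn**2) + 2 * p[1] * xn * yn
    return xd, yd

def fit_poly_distortion(xu, yu, xd, yd, deg=3):
    """Generic bivariate polynomial distortion map fitted by least squares,
    in the spirit of a polynomial representation (details assumed)."""
    terms = [xu**i * yu**j for i in range(deg + 1) for j in range(deg + 1 - i)]
    A = np.column_stack(terms)
    cx, *_ = np.linalg.lstsq(A, xd, rcond=None)
    cy, *_ = np.linalg.lstsq(A, yd, rcond=None)
    return cx, cy, A
```

    With only a first-order radial term and tangential terms, the distortion is itself polynomial of degree 3, so the degree-3 fit reproduces it to numerical precision; higher-order residuals are where a richer polynomial basis pays off.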

  4. Four years of Landsat-7 on-orbit geometric calibration and performance

    USGS Publications Warehouse

    Lee, D.S.; Storey, James C.; Choate, M.J.; Hayes, R.W.

    2004-01-01

    Unlike its predecessors, Landsat-7 has undergone regular geometric and radiometric performance monitoring and calibration since launch in April 1999. This ongoing activity, which includes issuing quarterly updates to calibration parameters, has generated a wealth of geometric performance data over the four-year on-orbit period of operations. A suite of geometric characterization (measurement and evaluation procedures) and calibration (procedures to derive improved estimates of instrument parameters) methods are employed by the Landsat-7 Image Assessment System to maintain the geometric calibration and to track specific aspects of geometric performance. These include geodetic accuracy, band-to-band registration accuracy, and image-to-image registration accuracy. These characterization and calibration activities maintain image product geometric accuracy at a high level - by monitoring performance to determine when calibration is necessary, generating new calibration parameters, and verifying that new parameters achieve desired improvements in accuracy. Landsat-7 continues to meet and exceed all geometric accuracy requirements, although aging components have begun to affect performance.

  5. Man vs. Machine: An interactive poll to evaluate hydrological model performance of a manual and an automatic calibration

    NASA Astrophysics Data System (ADS)

    Wesemann, Johannes; Burgholzer, Reinhard; Herrnegger, Mathew; Schulz, Karsten

    2017-04-01

    In recent years, a lot of research in hydrological modelling has been invested in improving the automatic calibration of rainfall-runoff models. This includes, for example, (1) the implementation of new optimisation methods, (2) the incorporation of new and different objective criteria and signatures in the optimisation, and (3) the usage of auxiliary data sets apart from runoff. Nevertheless, in many applications manual calibration is still justifiable and frequently applied. The hydrologist performing the manual calibration can draw on expert knowledge to judge the hydrographs both in their details and as a whole. This integrated eye-ball verification procedure available to man can be difficult to formulate as objective criteria, even when using a multi-criteria approach. Comparing the results of automatic and manual calibration is not straightforward. Automatic calibration often solely involves objective criteria such as the Nash-Sutcliffe efficiency coefficient or the Kling-Gupta efficiency as a benchmark during the calibration. Consequently, a comparison based on such measures is intrinsically biased towards automatic calibration. Additionally, objective criteria do not cover all aspects of a hydrograph, leaving questions about the quality of a simulation open. This contribution therefore seeks to examine the quality of manually and automatically calibrated hydrographs by interactively involving expert knowledge in the evaluation. Simulations have been performed for the Mur catchment in Austria with the rainfall-runoff model COSERO using two parameter sets derived from a manual and an automatic calibration. A subset of the resulting hydrographs for observation and simulation, representing the typical flow conditions and events, will be evaluated in this study.
In an interactive crowdsourcing approach, experts attending the session can vote for their preferred simulated hydrograph without knowing which calibration method produced it. The result of the poll can therefore be seen as an additional quality criterion for comparing the two approaches and can help in the evaluation of the automatic calibration method.

  6. SU-E-I-38: Improved Metal Artifact Correction Using Adaptive Dual Energy Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, X; Elder, E; Roper, J

    2015-06-15

    Purpose: The empirical dual energy calibration (EDEC) method corrects for beam-hardening artifacts, but shows limited performance on metal artifact correction. In this work, we propose an adaptive dual energy calibration (ADEC) method to correct for metal artifacts. Results: Highly attenuating copper rods cause severe streaking artifacts on standard CT images. EDEC improves the image quality, but cannot eliminate the streaking artifacts. Compared to EDEC, the proposed ADEC method further reduces the streaking resulting from metallic inserts and beam-hardening effects and obtains material decomposition images with significantly improved accuracy. Conclusion: We propose an adaptive dual energy calibration method to correct for metal artifacts. ADEC is evaluated with the Shepp-Logan phantom, and shows superior metal artifact correction performance. In the future, we will further evaluate the performance of the proposed method with phantom and patient data.

  7. Improving Fifth Grade Students' Mathematics Self-Efficacy Calibration and Performance through Self-Regulation Training

    ERIC Educational Resources Information Center

    Ramdass, Darshanand H.

    2009-01-01

    The primary goal of this study was to investigate the effects of strategy training and self-reflection, two subprocesses of Zimmerman's cyclical model of self-regulation, on fifth grade students' mathematics performance, self-efficacy, self-evaluation, and calibration measures of self-efficacy bias, self-efficacy accuracy, self-evaluation bias,…

  8. Test Takers' Performance Appraisals, Appraisal Calibration, and Cognitive and Metacognitive Strategy Use

    ERIC Educational Resources Information Center

    Phakiti, Aek

    2016-01-01

    The current study explores the nature and relationships among test takers' performance appraisals, appraisal calibration, and reported cognitive and metacognitive strategy use in a language test situation. Performance appraisals are executive processes of strategic competence for judging test performance (e.g., evaluating the correctness or…

  9. Evaluation of Calibration Laboratories Performance

    NASA Astrophysics Data System (ADS)

    Filipe, Eduarda

    2011-12-01

    One of the main goals of interlaboratory comparisons (ILCs) is the evaluation of laboratories' performance for the routine calibrations they perform for their clients. Within the framework of laboratory accreditation, the national accreditation boards (NABs), in collaboration with the national metrology institutes (NMIs), organize the ILCs needed to comply with the requirements of the international accreditation organizations. For an ILC to be a reliable tool for a laboratory to validate its best measurement capability (BMC), the NMI (reference laboratory) must provide a traveling standard that is better, in terms of accuracy class or uncertainty, than the laboratories' BMCs. Although this is the general situation, there are cases where the NABs ask the NMIs to evaluate the performance of accredited laboratories when calibrating industrial measuring instruments. The aim of this article is to discuss the existing approaches for the evaluation of ILCs and to propose a basis for the validation of laboratories' measurement capabilities. An example is drafted with the evaluation of the results of a mercury-in-glass thermometer ILC with 12 participant laboratories.
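
    A widely used pass/fail statistic for scoring a laboratory's ILC result against the reference value is the normalized error En (as in ISO 13528 / ISO 17043 proficiency-testing practice); whether this article adopts exactly this form is an assumption here:

```python
import math

def en_number(x_lab, U_lab, x_ref, U_ref):
    """Normalized error: the lab-minus-reference difference scaled by the
    combined expanded (k=2) uncertainties of both values.
    |En| <= 1 is conventionally taken as satisfactory performance."""
    return (x_lab - x_ref) / math.sqrt(U_lab**2 + U_ref**2)

# Illustrative thermometer comparison (values in degrees C, hypothetical):
# lab reads 100.02 +/- 0.05, reference is 100.00 +/- 0.02 -> |En| < 1, pass.
score = en_number(100.02, 0.05, 100.00, 0.02)
```

    The same formula also shows why the traveling standard must be better than the participants' BMCs: if U_ref approaches U_lab, the denominator is dominated by the reference and the comparison loses discriminating power.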

  10. Decomposition of the Mean Squared Error and NSE Performance Criteria: Implications for Improving Hydrological Modelling

    NASA Technical Reports Server (NTRS)

    Gupta, Hoshin V.; Kling, Harald; Yilmaz, Koray K.; Martinez-Baquero, Guillermo F.

    2009-01-01

    The mean squared error (MSE) and the related normalization, the Nash-Sutcliffe efficiency (NSE), are the two criteria most widely used for calibration and evaluation of hydrological models with observed data. Here, we present a diagnostically interesting decomposition of NSE (and hence MSE), which facilitates analysis of the relative importance of its different components in the context of hydrological modelling, and show how model calibration problems can arise due to interactions among these components. The analysis is illustrated by calibrating a simple conceptual precipitation-runoff model to daily data for a number of Austrian basins having a broad range of hydro-meteorological characteristics. Evaluation of the results clearly demonstrates the problems that can be associated with any calibration based on the NSE (or MSE) criterion. While we propose and test an alternative criterion that can help to reduce model calibration problems, the primary purpose of this study is not to present an improved measure of model performance. Instead, we seek to show that there are systematic problems inherent with any optimization based on formulations related to the MSE. The analysis and results have implications for the manner in which we calibrate and evaluate environmental models; we discuss these and suggest possible ways forward that may move us towards an improved and diagnostically meaningful approach to model performance evaluation and identification.
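
    The decomposition this paper introduces expresses NSE through three components: the linear correlation r, the variability ratio alpha = sigma_s/sigma_o, and the normalized bias beta_n = (mu_s - mu_o)/sigma_o, giving NSE = 2*alpha*r - alpha^2 - beta_n^2; the alternative criterion it proposes (the Kling-Gupta efficiency) combines similar components. A minimal numerical check (sketch; variable names are ours):

```python
import numpy as np

def nse_components(sim, obs):
    """NSE plus the components of the Gupta et al. (2009) decomposition
    NSE = 2*alpha*r - alpha**2 - beta_n**2 (population std throughout)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]               # linear correlation
    alpha = sim.std() / obs.std()                 # relative variability
    beta_n = (sim.mean() - obs.mean()) / obs.std()  # normalized bias
    nse = 1.0 - np.mean((sim - obs) ** 2) / obs.var()
    return nse, r, alpha, beta_n

def kge(sim, obs):
    """Kling-Gupta efficiency, with the bias ratio beta = mu_s/mu_o."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)
```

    The decomposition makes the interaction problem visible: maximizing NSE rewards alpha = r rather than alpha = 1, so NSE-optimal simulations systematically underestimate flow variability whenever r < 1.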

  11. Results of the 1980 NASA/JPL balloon flight solar cell calibration program

    NASA Technical Reports Server (NTRS)

    Seaman, C. H.; Weiss, R. S.

    1981-01-01

    Thirty-eight modules were carried to an altitude of about 36 kilometers. In addition to the cell calibration program, an experiment to evaluate the calibration error versus altitude was performed. The calibrated cells can be used as reference standards in simulator testing of cells and arrays.

  12. DSS range delay calibrations: Current performance level

    NASA Technical Reports Server (NTRS)

    Spradlin, G. L.

    1976-01-01

    A means for evaluating Deep Space Station (DSS) range delay calibration performance was developed. Inconsistencies frequently noted in these data are resolved. Development of the DSS range delay data base is described. The data base is presented with comments regarding apparent discontinuities. Data regarding the exciter frequency dependence of the delay values are presented. The improvement observed in the consistency of current DSS range delay calibration data over the performance previously observed is noted.

  13. Evaluation of Long-Term Pavement Performance (LTPP) Climatic Data for Use in Mechanistic-Empirical Pavement Design Guide (MEPDG) Calibration and Other Pavement Analysis

    DOT National Transportation Integrated Search

    2015-05-01

    Improvements in the Long-Term Pavement Performance (LTPP) Program's climate data are needed to support current and future research into climate effects on pavement materials, design, and performance. The calibration and enhancement of the Mechanist...

  14. Performance characterization of polarimetric active radar calibrators and a new single antenna design

    NASA Astrophysics Data System (ADS)

    Sarabandi, Kamal; Oh, Yisok; Ulaby, Fawwaz T.

    1992-10-01

    Three aspects of a polarimetric active radar calibrator (PARC) are treated: (1) experimental measurements of the magnitudes and phases of the scattering-matrix elements of a pair of PARCs operating at 1.25 and 5.3 GHz; (2) the design, construction, and performance evaluation of a PARC; and (3) the extension of the single-target-calibration technique (STCT) to a PARC. STCT has heretofore been limited to the use of reciprocal passive calibration devices, such as spheres and trihedral corner reflectors.

  15. Performance characterization of polarimetric active radar calibrators and a new single antenna design

    NASA Technical Reports Server (NTRS)

    Sarabandi, Kamal; Oh, Yisok; Ulaby, Fawwaz T.

    1992-01-01

    Three aspects of a polarimetric active radar calibrator (PARC) are treated: (1) experimental measurements of the magnitudes and phases of the scattering-matrix elements of a pair of PARCs operating at 1.25 and 5.3 GHz; (2) the design, construction, and performance evaluation of a PARC; and (3) the extension of the single-target-calibration technique (STCT) to a PARC. STCT has heretofore been limited to the use of reciprocal passive calibration devices, such as spheres and trihedral corner reflectors.

  16. Evaluating Statistical Process Control (SPC) techniques and computing the uncertainty of force calibrations

    NASA Technical Reports Server (NTRS)

    Navard, Sharon E.

    1989-01-01

    In recent years there has been a push within NASA to use statistical techniques to improve the quality of production. Two areas where statistics are used are in establishing product and process quality control of flight hardware and in evaluating the uncertainty of calibration of instruments. The Flight Systems Quality Engineering branch is responsible for developing and assuring the quality of all flight hardware; the statistical process control methods employed are reviewed and evaluated. The Measurement Standards and Calibration Laboratory performs the calibration of all instruments used on-site at JSC as well as those used by all off-site contractors. These calibrations must be performed in such a way as to be traceable to national standards maintained by the National Institute of Standards and Technology, and they must meet a four-to-one ratio of the instrument specifications to calibrating standard uncertainty. In some instances this ratio is not met, and in these cases it is desirable to compute the exact uncertainty of the calibration and determine ways of reducing it. A particular example where this problem is encountered is with a machine which does automatic calibrations of force. The process of force calibration using the United Force Machine is described in detail. The sources of error are identified and quantified when possible. Suggestions for improvement are made.

  17. LANDSAT-D conical scanner evaluation plan

    NASA Technical Reports Server (NTRS)

    Bilanow, S.; Chen, L. C. (Principal Investigator)

    1982-01-01

    The planned activities involved in the inflight sensor calibration and performance evaluation are discussed and the supporting software requirements are specified. The possible sensor error sources and their effects on sensor measurements are summarized. The methods by which the inflight sensor performance will be analyzed and the sensor modeling parameters will be calibrated are presented. In addition, a brief discussion on the data requirement for the study is provided.

  18. Evaluation of a laser scanner for large volume coordinate metrology: a comparison of results before and after factory calibration

    NASA Astrophysics Data System (ADS)

    Ferrucci, M.; Muralikrishnan, B.; Sawyer, D.; Phillips, S.; Petrov, P.; Yakovlev, Y.; Astrelin, A.; Milligan, S.; Palmateer, J.

    2014-10-01

    Large volume laser scanners are increasingly being used for a variety of dimensional metrology applications. Methods to evaluate the performance of these scanners are still under development and there are currently no documentary standards available. This paper describes the results of extensive ranging and volumetric performance tests conducted on a large volume laser scanner. The results demonstrated small but clear systematic errors that are explained in the context of a geometric error model for the instrument. The instrument was subsequently returned to the manufacturer for factory calibration. The ranging and volumetric tests were performed again and the results are compared against those obtained prior to the factory calibration.

  19. The preliminary checkout, evaluation and calibration of a 3-component force measurement system for calibrating propulsion simulators for wind tunnel models

    NASA Technical Reports Server (NTRS)

    Scott, W. A.

    1984-01-01

    The propulsion simulator calibration laboratory (PSCL), in which calibrations can be performed to determine the gross thrust and airflow of propulsion simulators installed in wind tunnel models, is described. The preliminary checkout, evaluation, and calibration of the PSCL's 3-component force measurement system are reported. Methods and equipment were developed for the alignment and calibration of the force measurement system. The initial alignment of the system demonstrated the need for a more efficient means of aligning the system's components; the use of precision alignment jigs increases both the speed and accuracy with which the system is aligned. The calibration of the force measurement system shows that the methods and equipment for this procedure can be successful.

  20. Accuracy evaluation of optical distortion calibration by digital image correlation

    NASA Astrophysics Data System (ADS)

    Gao, Zeren; Zhang, Qingchuan; Su, Yong; Wu, Shangquan

    2017-11-01

    Due to its convenience of operation, the camera calibration algorithm, which is based on the plane template, is widely used in image measurement, computer vision and other fields. How to select a suitable distortion model is always a problem to be solved. Therefore, there is an urgent need for an experimental evaluation of the accuracy of camera distortion calibrations. This paper presents an experimental method for evaluating camera distortion calibration accuracy, which is easy to implement, has high precision, and is suitable for a variety of commonly used lenses. First, we use the digital image correlation method to calculate the in-plane rigid body displacement field of an image displayed on a liquid crystal display before and after translation, as captured with a camera. Next, we use a calibration board to calibrate the camera to obtain calibration parameters, which are used to correct calculation points of the image before and after deformation. The displacement field before and after correction is compared to analyze the distortion calibration results. Experiments were carried out to evaluate the performance of two commonly used industrial camera lenses for four commonly used distortion models.

  1. Evaluation of calibration efficacy under different levels of uncertainty

    DOE PAGES

    Heo, Yeonsook; Graziano, Diane J.; Guzowski, Leah; ...

    2014-06-10

    This study examines how calibration performs under different levels of uncertainty in model input data. It specifically assesses the efficacy of Bayesian calibration to enhance the reliability of EnergyPlus model predictions. A Bayesian approach can be used to update uncertain values of parameters, given measured energy-use data, and to quantify the associated uncertainty. We assess the efficacy of Bayesian calibration under a controlled virtual-reality setup, which enables rigorous validation of the accuracy of calibration results in terms of both calibrated parameter values and model predictions. Case studies demonstrate the performance of Bayesian calibration of base models developed from audit data with differing levels of detail in building design, usage, and operation.
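
    The Bayesian update at the core of such a calibration can be sketched with a toy grid approximation. Everything below is illustrative: the one-parameter "building model", flat prior, and Gaussian measurement noise are stand-ins, not EnergyPlus or the authors' setup:

```python
import numpy as np

def grid_posterior(theta_grid, prior, simulate, measured, sigma):
    """Bayes update on a parameter grid: posterior is proportional to
    prior times a Gaussian likelihood of the measured energy use
    around the model's simulated prediction."""
    pred = np.array([simulate(t) for t in theta_grid])
    loglik = -0.5 * ((measured - pred) / sigma) ** 2
    post = prior * np.exp(loglik - loglik.max())  # subtract max for stability
    return post / post.sum()

# Toy 'building model': annual energy use as a linear function of one
# uncertain infiltration-like parameter theta (purely hypothetical).
simulate = lambda theta: 100.0 + 40.0 * theta

theta = np.linspace(0.0, 1.0, 501)
prior = np.ones_like(theta) / theta.size      # flat prior over [0, 1]
post = grid_posterior(theta, prior, simulate, measured=120.0, sigma=4.0)
theta_hat = float((theta * post).sum())       # posterior mean
```

    The controlled virtual-reality idea in the abstract corresponds to generating `measured` from a known true theta, so the recovered posterior can be validated against ground truth.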

  2. Comparison of calibration strategies for optical 3D scanners based on structured light projection using a new evaluation methodology

    NASA Astrophysics Data System (ADS)

    Bräuer-Burchardt, Christian; Ölsner, Sandy; Kühmstedt, Peter; Notni, Gunther

    2017-06-01

    In this paper, a new evaluation strategy for optical 3D scanners based on structured light projection is introduced. It can be used for the characterization of the expected measurement accuracy. Compared to the procedure proposed in the VDI/VDE guidelines for optical 3D measurement systems based on area scanning, it requires less effort and provides more impartiality. The methodology is suitable for the evaluation of sets of calibration parameters, which mainly determine the quality of the measurement result. It was applied to several calibrations of a mobile stereo-camera-based optical 3D scanner. The performed calibrations followed different strategies regarding calibration bodies and arrangement of the observed scene. The results obtained by the different calibration strategies are discussed and suggestions concerning future work in this area are given.

  3. Calibration and evaluation of a dispersant application system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shum, J.S.

    1987-05-01

    The report presents recommended methods for calibrating and operating boat-mounted dispersant application systems. Calibration of one commercially-available system and several unusual problems encountered in calibration are described. Charts and procedures for selecting pump rates and other operating parameters in order to achieve a desired dosage are provided. The calibration was performed at the EPA's Oil and Hazardous Materials Simulated Environmental Test Tank (OHMSETT) facility in Leonardo, New Jersey.

  4. Low-speed airspeed calibration data for a single-engine research-support aircraft

    NASA Technical Reports Server (NTRS)

    Holmes, B. J.

    1980-01-01

    A standard service airspeed system on a single engine research support airplane was calibrated by the trailing anemometer method. The effects of flaps, power, sideslip, and lag were evaluated. The factory supplied airspeed calibrations were not sufficiently accurate for high accuracy flight research applications. The trailing anemometer airspeed calibration was conducted to provide the capability to use the research support airplane to perform pace aircraft airspeed calibrations.

  5. Quantitative evaluation for accumulative calibration error and video-CT registration errors in electromagnetic-tracked endoscopy.

    PubMed

    Liu, Sheena Xin; Gutiérrez, Luis F; Stanton, Doug

    2011-05-01

    Electromagnetic (EM)-guided endoscopy has demonstrated its value in minimally invasive interventions. Accuracy evaluation of the system is of paramount importance to clinical applications. Previously, a number of researchers have reported the results of calibrating the EM-guided endoscope; however, the accumulated errors of an integrated system, which ultimately reflect intra-operative performance, have not been characterized. To fill this gap, we propose a novel system to perform this evaluation and use a 3D metric to reflect the intra-operative procedural accuracy. This paper first presents a portable design and a method for calibration of an electromagnetic (EM)-tracked endoscopy system. An evaluation scheme is then described that uses the calibration results and EM-CT registration to enable real-time data fusion between CT and endoscopic video images. We present quantitative evaluation results for estimating the accuracy of this system using eight internal fiducials as the targets on an anatomical phantom: the error is obtained by comparing the positions of these targets in the CT space, EM space and endoscopy image space. To obtain 3D error estimation, the 3D locations of the targets in the endoscopy image space are reconstructed from stereo views of the EM-tracked monocular endoscope. Thus, the accumulated errors are evaluated in a controlled environment, where the ground truth information is present and systematic performance (including the calibration error) can be assessed. We obtain the mean in-plane error to be on the order of 2 pixels. To evaluate the data integration performance for virtual navigation, target video-CT registration error (TRE) is measured as the 3D Euclidean distance between the 3D-reconstructed targets of endoscopy video images and the targets identified in CT. The 3D error (TRE) encapsulates EM-CT registration error, EM-tracking error, fiducial localization error, and optical-EM calibration error.
We present in this paper our calibration method and a virtual navigation evaluation system for quantifying the overall errors of the intra-operative data integration. We believe this phantom not only offers us good insights to understand the systematic errors encountered in all phases of an EM-tracked endoscopy procedure but also can provide quality control of laboratory experiments for endoscopic procedures before the experiments are transferred from the laboratory to human subjects.
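
    The 3D TRE metric described above reduces to a per-target Euclidean distance between the video-reconstructed and CT-identified fiducials; a minimal sketch (all coordinates below are hypothetical):

```python
import numpy as np

def target_registration_error(p_video, p_ct):
    """Per-target 3D Euclidean distance between targets reconstructed from
    endoscopic video (after EM-CT fusion) and the same targets in CT.
    Both inputs are (N, 3) arrays of 3D coordinates."""
    p_video = np.asarray(p_video, float)
    p_ct = np.asarray(p_ct, float)
    return np.linalg.norm(p_video - p_ct, axis=1)

# Hypothetical coordinates (mm) for eight fiducial targets:
ct = np.arange(24, dtype=float).reshape(8, 3)
video = ct + 0.5                  # a uniform 0.5 mm offset on each axis
tre = target_registration_error(video, ct)
mean_tre = float(tre.mean())
```

    Because the distance is taken after data fusion, each TRE value aggregates the EM-CT registration, EM-tracking, fiducial localization, and optical-EM calibration errors, which is exactly why the authors use it as the system-level accuracy metric.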

  6. An Enclosed Laser Calibration Standard

    NASA Astrophysics Data System (ADS)

    Adams, Thomas E.; Fecteau, M. L.

    1985-02-01

    We have designed, evaluated, and calibrated an enclosed, safety-interlocked laser calibration standard for use in US Army Secondary Reference Calibration Laboratories. This Laser Test Set Calibrator (LTSC) represents the Army's first-generation field laser calibration standard. Twelve LTSCs are now being fielded worldwide. The main requirement on the LTSC is to provide calibration support for the Test Set (TS3620) which, in turn, is a GO/NO GO tester of the Hand-Held Laser Rangefinder (AN/GVS-5). However, we believe its design is flexible enough to accommodate the calibration of other laser test, measurement and diagnostic equipment (TMDE) provided that single-shot capability is adequate to perform the task. In this paper we describe the salient aspects and calibration requirements of the AN/GVS-5 Rangefinder and the Test Set which drove the basic LTSC design. We also detail our evaluation and calibration of the LTSC, in particular the LTSC system standards. We conclude with a review of our error analysis, from which uncertainties were assigned to the LTSC calibration functions.

  7. Sensitivity and Uncertainty Analysis for Streamflow Prediction Using Different Objective Functions and Optimization Algorithms: San Joaquin California

    NASA Astrophysics Data System (ADS)

    Paul, M.; Negahban-Azar, M.

    2017-12-01

    Hydrologic models usually need to be calibrated against observed streamflow at the outlet of a particular drainage area through a careful model calibration. However, a large number of parameters must be fitted in the model because field measurements of them are unavailable. It is therefore difficult to calibrate the model over so many potentially uncertain parameters, and this becomes even more challenging for a large watershed with multiple land uses and varied geophysical characteristics. Sensitivity analysis (SA) can be used as a tool to identify the most sensitive model parameters, which affect calibrated model performance. Many different calibration and uncertainty analysis algorithms can be run with different objective functions. By incorporating sensitive parameters in streamflow simulation, the effect of a suitable algorithm on improving model performance can be demonstrated with the Soil and Water Assessment Tool (SWAT). In this study, SWAT was applied in the San Joaquin Watershed in California, covering 19,704 km2, to calibrate daily streamflow. Recently, severe water stress has been escalating in this watershed due to intensified climate variability, prolonged drought, and groundwater depletion for agricultural irrigation. It is therefore important to perform a proper uncertainty analysis, given the uncertainties inherent in hydrologic modeling, to predict the spatial and temporal variation of the hydrologic processes and to evaluate the impacts of different hydrologic variables. The purpose of this study was to evaluate the sensitivity and uncertainty of the calibrated parameters for predicting streamflow.
To evaluate the sensitivity of the calibrated parameters three different optimization algorithms (Sequential Uncertainty Fitting- SUFI-2, Generalized Likelihood Uncertainty Estimation- GLUE and Parameter Solution- ParaSol) were used with four different objective functions (coefficient of determination- r2, Nash-Sutcliffe efficiency- NSE, percent bias- PBIAS, and Kling-Gupta efficiency- KGE). The preliminary results showed that using the SUFI-2 algorithm with the objective function NSE and KGE has improved significantly the calibration (e.g. R2 and NSE is found 0.52 and 0.47 respectively for daily streamflow calibration).
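    The objective functions named above are standard goodness-of-fit scores and are straightforward to compute directly. As a minimal sketch (pure Python, not tied to SWAT-CUP), the three efficiency measures can be written as:

```python
import math

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / var

def pbias(obs, sim):
    """Percent bias: positive values indicate model underestimation."""
    return 100.0 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)

def kge(obs, sim):
    """Kling-Gupta efficiency from correlation (r), variability ratio
    (alpha) and bias ratio (beta)."""
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    so = math.sqrt(sum((o - mo) ** 2 for o in obs) / n)
    ss = math.sqrt(sum((s - ms) ** 2 for s in sim) / n)
    r = sum((o - mo) * (s - ms) for o, s in zip(obs, sim)) / (n * so * ss)
    beta, alpha = ms / mo, ss / so
    return 1.0 - math.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)
```

    By construction, NSE and KGE equal 1 for a perfect fit, and PBIAS of 0 indicates no systematic volume error.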

  8. On-Demand Calibration and Evaluation for Electromagnetically Tracked Laparoscope in Augmented Reality Visualization

    PubMed Central

    Liu, Xinyang; Plishker, William; Zaki, George; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj

    2017-01-01

    Purpose Common camera calibration methods employed in current laparoscopic augmented reality systems require the acquisition of multiple images of an entire checkerboard pattern from various poses. This lengthy procedure prevents performing laparoscope calibration in the operating room (OR). The purpose of this work was to develop a fast calibration method for electromagnetically (EM) tracked laparoscopes, such that calibration can be performed in the OR on demand. Methods We designed a mechanical tracking mount to uniquely and snugly position an EM sensor at an appropriate location on a conventional laparoscope. A tool named fCalib was developed to calibrate intrinsic camera parameters, distortion coefficients, and extrinsic parameters (the transformation between the scope lens coordinate system and the EM sensor coordinate system) using a single image that shows an arbitrary portion of a special target pattern. For quick evaluation of calibration results in the OR, we integrated a tube phantom with fCalib and overlaid a virtual representation of the tube on the live video scene. Results We compared the spatial target registration error between the common OpenCV method and the fCalib method in a laboratory setting. In addition, we compared the calibration re-projection error between the EM tracking-based fCalib and the optical tracking-based fCalib in a clinical setting. Our results suggested that the proposed method is comparable to the OpenCV method. However, changing the environment, e.g., inserting or removing surgical tools, would affect re-projection accuracy for the EM tracking-based approach. Computational time of the fCalib method averaged 14.0 s (range 3.5 s – 22.7 s). Conclusions We developed and validated a prototype for fast calibration and evaluation of EM tracked conventional (forward viewing) laparoscopes. The calibration method achieved acceptable accuracy and was relatively fast and easy to perform in the OR on demand. PMID:27250853
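    For context, the re-projection error reported above is the pixel distance between pattern points projected through the calibrated camera model and the corners detected in the image. A minimal, library-free sketch of the RMS form (the point lists are hypothetical; the paper's exact protocol may differ):

```python
import math

def rms_reprojection_error(projected, detected):
    """RMS pixel distance between reprojected pattern points and the
    corner locations detected in the image (illustrative helper)."""
    sq = [(px - dx) ** 2 + (py - dy) ** 2
          for (px, py), (dx, dy) in zip(projected, detected)]
    return math.sqrt(sum(sq) / len(sq))
```

    In an OpenCV-based pipeline, the projected points would come from projecting the known target geometry through the estimated intrinsics, distortion coefficients, and pose.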

  9. On-demand calibration and evaluation for electromagnetically tracked laparoscope in augmented reality visualization.

    PubMed

    Liu, Xinyang; Plishker, William; Zaki, George; Kang, Sukryool; Kane, Timothy D; Shekhar, Raj

    2016-06-01

    Common camera calibration methods employed in current laparoscopic augmented reality systems require the acquisition of multiple images of an entire checkerboard pattern from various poses. This lengthy procedure prevents performing laparoscope calibration in the operating room (OR). The purpose of this work was to develop a fast calibration method for electromagnetically (EM) tracked laparoscopes, such that the calibration can be performed in the OR on demand. We designed a mechanical tracking mount to uniquely and snugly position an EM sensor at an appropriate location on a conventional laparoscope. A tool named fCalib was developed to calibrate intrinsic camera parameters, distortion coefficients, and extrinsic parameters (the transformation between the scope lens coordinate system and the EM sensor coordinate system) using a single image that shows an arbitrary portion of a special target pattern. For quick evaluation of calibration results in the OR, we integrated a tube phantom with the fCalib prototype and overlaid a virtual representation of the tube on the live video scene. We compared the spatial target registration error between the common OpenCV method and the fCalib method in a laboratory setting. In addition, we compared the calibration re-projection error between the EM tracking-based fCalib and the optical tracking-based fCalib in a clinical setting. Our results suggest that the proposed method is comparable to the OpenCV method. However, changing the environment, e.g., inserting or removing surgical tools, might affect re-projection accuracy for the EM tracking-based approach. Computational time of the fCalib method averaged 14.0 s (range 3.5 s-22.7 s). We developed and validated a prototype for fast calibration and evaluation of EM tracked conventional (forward viewing) laparoscopes. The calibration method achieved acceptable accuracy and was relatively fast and easy to perform in the OR on demand.

  10. 10 CFR 74.59 - Quality assurance and accounting requirements.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... analyses and evaluations of the design, installation, preoperational tests, calibration, and operation of... performed at a pre-determined frequency, indicate a need for recalibration. Calibrations and tests must be... necessary for performance of the material control tests required by § 74.53(b). (e) Measurement control. The...

  11. 10 CFR 74.59 - Quality assurance and accounting requirements.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... analyses and evaluations of the design, installation, preoperational tests, calibration, and operation of... performed at a pre-determined frequency, indicate a need for recalibration. Calibrations and tests must be... necessary for performance of the material control tests required by § 74.53(b). (e) Measurement control. The...

  12. VIIRS thermal emissive bands on-orbit calibration coefficient performance using vicarious calibration results

    NASA Astrophysics Data System (ADS)

    Moyer, D.; Moeller, C.; De Luccia, F.

    2013-09-01

    The Visible Infrared Imager Radiometer Suite (VIIRS), a primary sensor on board the Suomi National Polar-orbiting Partnership (SNPP) spacecraft, was launched October 28, 2011. It has 22 bands: 7 thermal emissive bands (TEBs), 14 reflective solar bands (RSBs) and a Day Night Band (DNB). The TEBs cover spectral wavelengths between 3.7 and 12 μm and comprise two 371 m and five 742 m spatial resolution bands. A VIIRS Key Performance Parameter (KPP) is the sea surface temperature (SST), which uses the calibrated Science Data Records (SDRs) of bands M12 (3.7 μm), M15 (10.8 μm) and M16 (12.0 μm). The TEB SDRs rely on pre-launch calibration coefficients used in a quadratic algorithm to convert the detector's response to calibrated radiance. This paper evaluates the performance of these pre-launch calibration coefficients using vicarious calibration information from the Cross-track Infrared Sounder (CrIS), also on board the SNPP spacecraft, and the Infrared Atmospheric Sounding Interferometer (IASI) on board the Meteorological Operational (MetOp) satellite. Changes to the pre-launch calibration coefficients' offset term c0 to improve the SDRs' performance at cold scene temperatures are also discussed.
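    The quadratic response-to-radiance conversion mentioned above, and why an adjustment to the offset term c0 matters most for cold scenes, can be sketched as follows (the coefficient values are hypothetical, for illustration only):

```python
def radiance(dn, c0, c1, c2):
    """Quadratic conversion of detector response (dn) to radiance,
    of the form used for the VIIRS thermal emissive bands."""
    return c0 + c1 * dn + c2 * dn ** 2

# Hypothetical coefficients, for illustration only.
c0, c1, c2 = 0.05, 0.02, 1e-7

# A correction to c0 shifts every radiance by the same absolute amount,
# so its *relative* effect is largest for cold (low-radiance) scenes.
cold = radiance(100, c0, c1, c2)    # low detector response
warm = radiance(5000, c0, c1, c2)   # high detector response
delta = 0.01                        # hypothetical c0 adjustment
rel_cold = delta / cold
rel_warm = delta / warm
```

    This is why vicarious comparisons against CrIS and IASI are particularly informative at cold scene temperatures.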

  13. Simultaneous calibration phantom commission and geometry calibration in cone beam CT

    NASA Astrophysics Data System (ADS)

    Xu, Yuan; Yang, Shuai; Ma, Jianhui; Li, Bin; Wu, Shuyu; Qi, Hongliang; Zhou, Linghong

    2017-09-01

    Geometry calibration is a vital step for describing the geometry of a cone beam computed tomography (CBCT) system and is a prerequisite for CBCT reconstruction. In current methods, calibration phantom commission and geometry calibration are divided into two independent tasks. Small errors in ball-bearing (BB) positioning in the phantom-making step will severely degrade the quality of phantom calibration. To solve this problem, we propose an integrated method to simultaneously realize geometry phantom commission and geometry calibration. Instead of assuming the accuracy of the geometry phantom, the integrated method treats the BB centers in the phantom as an optimized parameter in the workflow. Specifically, an evaluation phantom and the corresponding evaluation contrast index are used to evaluate geometry artifacts for optimizing the BB coordinates in the geometry phantom. After utilizing particle swarm optimization, the CBCT geometry and BB coordinates in the geometry phantom are calibrated accurately and can then be used directly for the next geometry calibration task in other CBCT systems. To evaluate the proposed method, both qualitative and quantitative studies were performed on simulated and realistic CBCT data. The spatial resolution of reconstructed images using dental CBCT can reach up to 15 line pairs cm-1. The proposed method is also superior to the Wiesent method in experiments. This paper shows that the proposed method is attractive for simultaneous and accurate geometry phantom commission and geometry calibration.
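    Particle swarm optimization itself is simple to sketch. The following is a generic minimal PSO, not the authors' implementation; in the paper's workflow the decision vector would hold the BB coordinates (and geometry parameters), and the objective would be the evaluation contrast index:

```python
import random

def pso(objective, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer: each particle tracks its personal
    best and the swarm shares a global best (sketch, not the paper's code)."""
    rng = random.Random(0)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # inertia + attraction to personal and global bests
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                # clamp the particle inside the search bounds
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

    For example, `pso(lambda p: sum(x * x for x in p), [(-5.0, 5.0)] * 2)` minimizes a 2-D sphere function.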

  14. Evaluating and improving the performance of thin film force sensors within body and device interfaces.

    PubMed

    Likitlersuang, Jirapat; Leineweber, Matthew J; Andrysek, Jan

    2017-10-01

    Thin film force sensors are commonly used within biomechanical systems, and at the interface of the human body and medical and non-medical devices. However, limited information is available about their performance in such applications. The aims of this study were to evaluate and determine ways to improve the performance of thin film (FlexiForce) sensors at the body/device interface. Using a custom apparatus designed to load the sensors under simulated body/device conditions, two aspects were explored relating to sensor calibration and application. The findings revealed accuracy errors of 23.3±17.6% for force measurements at the body/device interface with conventional techniques of sensor calibration and application. Applying a thin rigid disc between the sensor and human body and calibrating the sensor using compliant surfaces was found to substantially reduce measurement errors to 2.9±2.0%. The use of alternative calibration and application procedures is recommended to gain acceptable measurement performance from thin film force sensors in body/device applications. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
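    Calibrating a thin film sensor against a compliant surface ultimately means fitting a mapping from sensor output to applied force. A minimal least-squares sketch with made-up paired readings (illustrative only; real FlexiForce calibration follows the vendor's recommended loading protocol, and the response need not be strictly linear):

```python
def fit_linear_calibration(outputs, forces):
    """Ordinary least-squares fit of force = a * output + b from paired
    calibration readings (illustrative; a hypothetical linear model)."""
    n = len(outputs)
    mx = sum(outputs) / n
    my = sum(forces) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(outputs, forces))
    sxx = sum((x - mx) ** 2 for x in outputs)
    a = sxy / sxx
    b = my - a * mx
    return a, b
```

    The study's key point is that the `outputs` used here should be collected against a compliant loading surface (with a thin rigid disc at the body interface), so the fitted curve matches the conditions of actual use.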

  15. Calibration and parameterization of a semi-distributed hydrological model to support sub-daily ensemble flood forecasting; a watershed in southeast Brazil

    NASA Astrophysics Data System (ADS)

    de Almeida Bressiani, D.; Srinivasan, R.; Mendiondo, E. M.

    2013-12-01

    The use of distributed and semi-distributed models to represent the processes and dynamics of a watershed has increased in recent years. These models are important tools to predict and forecast the hydrological responses of watersheds, and they can support disaster risk management and planning. However, they usually have many parameters that, due to the spatial and temporal variability of the processes, are not known, especially in developing countries; a robust and sensible calibration is therefore very important. This study conducted a sub-daily calibration and parameterization of the Soil & Water Assessment Tool (SWAT) for a 12,600 km2 watershed in southeast Brazil, and used ensemble forecasts to evaluate whether the model can serve as a tool for flood forecasting. The Piracicaba Watershed, in São Paulo State, is mainly rural but holds a population of about 4 million in highly relevant urban areas, with three cities on the list of critical cities of the National Center for Natural Disasters Monitoring and Alerts. For calibration, the watershed was divided into areas with similar hydrological characteristics, and for each of these areas one gauge station was chosen for calibration; this procedure was performed to evaluate the effectiveness of calibrating at fewer places, since areas with the same group of groundwater, soil, land use and slope characteristics should have similar parameters, making calibration a less time-consuming task. The sensitivity analysis and calibration were performed in the software SWAT-CUP with the optimization algorithm Sequential Uncertainty Fitting Version 2 (SUFI-2), which uses a Latin hypercube sampling scheme in an iterative process. Calibration and validation performance was evaluated with the Nash-Sutcliffe efficiency coefficient (NSE), coefficient of determination (r2), root mean square error (RMSE), and percent bias (PBIAS), with monthly average values of NSE around 0.70, r2 of 0.9, normalized RMSE of 0.01, and PBIAS of 10. Past events were analysed to evaluate the possibility of using the SWAT model developed for the Piracicaba watershed as a tool for ensemble flood forecasting. For the ensemble evaluation, members from the numerical model Eta were used. Eta is an atmospheric model used for research and operational purposes, with 5 km resolution, updated twice a day (00 and 12 UTC) over a ten-day horizon, with precipitation and weather estimates for each hour. The parameterized SWAT model performed well overall for ensemble flood forecasting.

  16. Global-scale regionalization of hydrological model parameters using streamflow data from many small catchments

    NASA Astrophysics Data System (ADS)

    Beck, Hylke; de Roo, Ad; van Dijk, Albert; McVicar, Tim; Miralles, Diego; Schellekens, Jaap; Bruijnzeel, Sampurno; de Jeu, Richard

    2015-04-01

    Motivated by the lack of large-scale model parameter regionalization studies, a large set of 3328 small catchments (< 10000 km2) around the globe was used to set up and evaluate five model parameterization schemes at global scale. The HBV-light model was chosen because of its parsimony and flexibility to test the schemes. The catchments were calibrated against observed streamflow (Q) using an objective function incorporating both behavioral and goodness-of-fit measures, after which the catchment set was split into subsets of 1215 donor and 2113 evaluation catchments based on the calibration performance. The donor catchments were subsequently used to derive parameter sets that were transferred to similar grid cells based on a similarity measure incorporating climatic and physiographic characteristics, thereby producing parameter maps with global coverage. Overall, there was a lack of suitable donor catchments for mountainous and tropical environments. The schemes with spatially-uniform parameter sets (EXP2 and EXP3) achieved the worst Q estimation performance in the evaluation catchments, emphasizing the importance of parameter regionalization. The direct transfer of calibrated parameter sets from donor catchments to similar grid cells (scheme EXP1) performed best, although there was still a large performance gap between EXP1 and HBV-light calibrated against observed Q. The schemes with parameter sets obtained by simultaneously calibrating clusters of similar donor catchments (NC10 and NC58) performed worse than EXP1. The relatively poor Q estimation performance achieved by two (uncalibrated) macro-scale hydrological models suggests there is considerable merit in regionalizing the parameters of such models. The global HBV-light parameter maps and ancillary data are freely available via http://water.jrc.ec.europa.eu.
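    The direct-transfer scheme (EXP1) boils down to a nearest-neighbour assignment in a climate/physiography descriptor space. A minimal sketch, assuming descriptors are already normalized and similarity is plain Euclidean distance (the study's actual similarity measure is more elaborate):

```python
import math

def transfer_parameters(cell_attrs, donors):
    """Assign a target grid cell the parameter set of the most similar
    donor catchment. `donors` is a list of (descriptor_tuple, params)
    pairs; similarity here is plain Euclidean distance on pre-normalized
    climatic/physiographic descriptors (hypothetical simplification)."""
    best, best_dist = None, float("inf")
    for attrs, params in donors:
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(cell_attrs, attrs)))
        if d < best_dist:
            best, best_dist = params, d
    return best
```

    Applying this cell by cell over a global grid is what produces the parameter maps; the reported lack of donors in mountainous and tropical environments means some cells inherit parameters from rather distant neighbours.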

  17. Evaluation of the Long-Term Stability and Temperature Coefficient of Dew-Point Hygrometers

    NASA Astrophysics Data System (ADS)

    Benyon, R.; Vicente, T.; Hernández, P.; De Rivas, L.; Conde, F.

    2012-09-01

    The continuous quest for improved specifications of optical dew-point hygrometers has raised customer expectations on the performance of these devices. In the absence of a long calibration history, users with limited prior experience in the measurement of humidity place reliance on manufacturer specifications to estimate long-term stability. While this might be reasonable in the case of measurement of electrical quantities, in humidity it can lead to optimistic estimations of uncertainty. This article reports a study of the long-term stability of some hygrometers and the analysis of their performance as monitored through regular calibration. The results of the investigations provide some typical, realistic uncertainties associated with the long-term stability of instruments used in calibration and testing laboratories. Together, these uncertainties can help in establishing initial contributions in uncertainty budgets, as well as in setting minimum calibration requirements, based on the evaluation of dominant influence quantities.
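    Once the dominant influence quantities (including long-term drift) have standard uncertainties assigned, they are typically combined in quadrature and expanded with a coverage factor. A minimal GUM-style sketch, assuming uncorrelated contributions:

```python
import math

def combined_standard_uncertainty(components):
    """Root-sum-of-squares of independent standard-uncertainty
    contributions (GUM-style; assumes uncorrelated inputs)."""
    return math.sqrt(sum(u ** 2 for u in components))

def expanded_uncertainty(components, k=2.0):
    """Expanded uncertainty with coverage factor k (k = 2 corresponds to
    roughly 95% coverage for a normal distribution)."""
    return k * combined_standard_uncertainty(components)
```

    The article's point is that the long-term-stability component entered into such a budget should come from calibration history, not from the manufacturer's specification alone.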

  18. A 10 cm Dual Frequency Doppler Weather Radar. Part I. The Radar System.

    DTIC Science & Technology

    1982-10-25

    Evaluation System (RAMCES)". The step attenuator required for this calibration can be programmed remotely, has low power and temperature coefficients, and...Control and Evaluation System". The Quality Assurance/Fault Location Network makes use of fault location techniques at critical locations in the radar and...quasi-continuous monitoring of radar performance. The Radar Monitor, Control and Evaluation System provides for automated system calibration and

  19. Embedded Electro-Optic Sensor Network for the On-Site Calibration and Real-Time Performance Monitoring of Large-Scale Phased Arrays

    DTIC Science & Technology

    2005-07-09

    This final report summarizes the progress during the Phase I SBIR project entitled Embedded Electro-Optic Sensor Network for the On-Site Calibration...network based on an electro-optic field-detection technique (the Electro-Optic Sensor Network, or ESN) for the performance evaluation of phased

  20. Implementation of the qualities of radiodiagnostic: mammography

    NASA Astrophysics Data System (ADS)

    Pacífico, L. C.; Magalhães, L. A. G.; Peixoto, J. G. P.; Fernandes, E.

    2018-03-01

    The objective of the present study was to evaluate the expanded uncertainty of the mammographic calibration process and to present the results of the internal audit performed at the Laboratory of Radiological Sciences (LCR). The mammographic beam qualities that serve as references in the LCR comprise two irradiation conditions: non-attenuated beam and attenuated beam. Both had satisfactory results, with an expanded uncertainty of 2.1%. The internal audit was performed, and the degree of accordance with ISO/IEC 17025 was evaluated. The result of the internal audit was satisfactory. We conclude that the LCR can perform calibrations in mammography qualities for end users.

  1. Landsat-7 Enhanced Thematic Mapper plus radiometric calibration

    USGS Publications Warehouse

    Markham, B.L.; Boncyk, Wayne C.; Helder, D.L.; Barker, J.L.

    1997-01-01

    Landsat-7 is currently being built and tested for launch in 1998. The Enhanced Thematic Mapper Plus (ETM+) sensor for Landsat-7, a derivative of the highly successful Thematic Mapper (TM) sensors on Landsats 4 and 5, and the Landsat-7 ground system are being built to provide enhanced radiometric calibration performance. In addition, regular vicarious calibration campaigns are being planned to provide additional information for calibration of the ETM+ instrument. The primary upgrades to the instrument include the addition of two solar calibrators: the full aperture solar calibrator, a deployable diffuser, and the partial aperture solar calibrator, a passive device that allows the ETM+ to image the sun. The ground processing incorporates for the first time an off-line facility, the Image Assessment System (IAS), to perform calibration, evaluation and analysis. Within the IAS, processing capabilities include radiometric artifact characterization and correction, radiometric calibration from the multiple calibrator sources, inclusion of results from vicarious calibration and statistical trending of calibration data to improve calibration estimation. The Landsat Product Generation System, the portion of the ground system responsible for producing calibrated products, will incorporate the radiometric artifact correction algorithms and will use the calibration information generated by the IAS. This calibration information will also be supplied to ground processing systems throughout the world.

  2. Calibration of limited-area ensemble precipitation forecasts for hydrological predictions

    NASA Astrophysics Data System (ADS)

    Diomede, Tommaso; Marsigli, Chiara; Montani, Andrea; Nerozzi, Fabrizio; Paccagnella, Tiziana

    2015-04-01

    The main objective of this study is to investigate the impact of calibration for limited-area ensemble precipitation forecasts, to be used for driving discharge predictions up to 5 days in advance. A reforecast dataset spanning 30 years, based on the Consortium for Small Scale Modeling Limited-Area Ensemble Prediction System (COSMO-LEPS), was used for testing the calibration strategy. Three calibration techniques were applied: quantile-to-quantile mapping, linear regression, and analogs. The performance of these methodologies was evaluated in terms of statistical scores for the precipitation forecasts operationally provided by COSMO-LEPS in the years 2003-2007 over Germany, Switzerland, and the Emilia-Romagna region (northern Italy). The analog-based method seemed preferable because of its capability to correct position errors and spread deficiencies. A suitable spatial domain for the analog search can help to handle model spatial errors as systematic errors. However, the performance of the analog-based method may degrade in cases where a limited training dataset is available. A sensitivity test on the length of the training dataset over which to perform the analog search was performed. The quantile-to-quantile mapping and linear regression methods were less effective, mainly because the forecast-analysis relation was not so strong for the available training dataset. A comparison between calibration based on the deterministic reforecast and calibration based on the full operational ensemble used as training dataset was considered, with the aim of evaluating whether reforecasts are really worthwhile for calibration, given their considerable computational cost. The verification of the calibration process was then performed by coupling ensemble precipitation forecasts with a distributed rainfall-runoff model. 
This test was carried out for a medium-sized catchment located in Emilia-Romagna, showing a beneficial impact of the analog-based method on the reduction of missed events for discharge predictions.
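    Of the three techniques, quantile-to-quantile mapping is the simplest to sketch: a forecast value is assigned its empirical quantile within the training forecasts and replaced by the observed value at the same quantile. A minimal nearest-rank version (illustrative; operational implementations typically interpolate between ranks and handle dry days separately):

```python
def quantile_map(value, fcst_sample, obs_sample):
    """Empirical quantile-to-quantile mapping: look up the value's
    non-exceedance probability in the training forecasts and return the
    observed value at the same (nearest-rank) quantile."""
    f = sorted(fcst_sample)
    o = sorted(obs_sample)
    # empirical non-exceedance probability of `value` within f
    q = sum(1 for x in f if x <= value) / len(f)
    # read off the observed empirical quantile (nearest rank)
    idx = min(int(q * (len(o) - 1)), len(o) - 1)
    return o[idx]
```

    For example, if the training observations are systematically twice the training forecasts, a forecast in the middle of its climatology is mapped onto the middle of the observed climatology.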

  3. Software Tools for Design and Performance Evaluation of Intelligent Systems

    DTIC Science & Technology

    2004-08-01

    Self-calibration of Three-Legged Modular Reconfigurable Parallel Robots Based on Leg-End Distance Errors," Robotica, Vol. 19, pp. 187-198. [4...9] Lintott, A. B., and Dunlop, G. R., "Parallel Topology Robot Calibration," Robotica. [10] Vischer, P., and Clavel, R., "Kinematic Calibration...of the Parallel Delta Robot," Robotica, Vol. 16, pp. 207-218, 1998. [11] Joshi, S.A., and Surianarayan, A., "Calibration of a 6-DOF Cable Robot Using

  4. Model performance evaluation (validation and calibration) in model-based studies of therapeutic interventions for cardiovascular diseases : a review and suggested reporting framework.

    PubMed

    Haji Ali Afzali, Hossein; Gray, Jodi; Karnon, Jonathan

    2013-04-01

    Decision analytic models play an increasingly important role in the economic evaluation of health technologies. Given uncertainties around the assumptions used to develop such models, several guidelines have been published to identify and assess 'best practice' in the model development process, including general modelling approach (e.g., time horizon), model structure, input data and model performance evaluation. This paper focuses on model performance evaluation. In the absence of a sufficient level of detail around model performance evaluation, concerns regarding the accuracy of model outputs, and hence the credibility of such models, are frequently raised. Following presentation of its components, a review of the application and reporting of model performance evaluation is presented. Taking cardiovascular disease as an illustrative example, the review investigates the use of face validity, internal validity, external validity, and cross model validity. As a part of the performance evaluation process, model calibration is also discussed and its use in applied studies investigated. The review found that the application and reporting of model performance evaluation across 81 studies of treatment for cardiovascular disease was variable. Cross-model validation was reported in 55 % of the reviewed studies, though the level of detail provided varied considerably. We found that very few studies documented other types of validity, and only 6 % of the reviewed articles reported a calibration process. Considering the above findings, we propose a comprehensive model performance evaluation framework (checklist), informed by a review of best-practice guidelines. This framework provides a basis for more accurate and consistent documentation of model performance evaluation. This will improve the peer review process and the comparability of modelling studies. 
Recognising the fundamental role of decision analytic models in informing public funding decisions, the proposed framework should usefully inform guidelines for preparing submissions to reimbursement bodies.

  5. Using Active Learning for Speeding up Calibration in Simulation Models.

    PubMed

    Cevik, Mucahit; Ergun, Mehmet Ali; Stout, Natasha K; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan

    2016-07-01

    Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We developed an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin breast cancer simulation model (UWBCS). In a recent study, calibration of the UWBCS required the evaluation of 378 000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378 000 combinations. Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. © The Author(s) 2015.
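    The core idea, that a cheap surrogate ranks unevaluated parameter combinations so the expensive simulator is run only on promising ones, can be illustrated with a toy screen. Everything below is hypothetical: a stand-in simulator on an integer parameter grid and a distance-to-nearest-hit promise score, not the paper's neural-network learner:

```python
def simulate(params):
    """Stand-in for an expensive simulation run: True when the parameter
    combination reproduces the calibration targets (toy criterion)."""
    i, j = params
    return abs(i - 10) <= 1 and abs(j - 10) <= 1

def active_screen(pool, seed_hit, budget):
    """Toy active-learning screen: starting from one known good
    combination, repeatedly evaluate the unlabeled combination closest to
    any known hit, so the simulation budget concentrates near hits."""
    hits = [seed_hit]
    evaluated = {seed_hit}
    runs = 0
    while runs < budget:
        candidates = [p for p in pool if p not in evaluated]
        if not candidates:
            break
        # promise score: squared distance to the nearest known hit
        best = min(candidates,
                   key=lambda p: min((p[0] - h[0]) ** 2 + (p[1] - h[1]) ** 2
                                     for h in hits))
        evaluated.add(best)
        runs += 1
        if simulate(best):
            hits.append(best)
    return hits, runs
```

    On a 21 x 21 grid this recovers the whole cluster of matching combinations with a small fraction of the 441 possible simulation runs, mirroring the paper's 5620-of-378,000 result in miniature.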

  6. Using Active Learning for Speeding up Calibration in Simulation Models

    PubMed Central

    Cevik, Mucahit; Ali Ergun, Mehmet; Stout, Natasha K.; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan

    2015-01-01

    Background Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Methods Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We develop an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin Breast Cancer Simulation Model (UWBCS). Results In a recent study, calibration of the UWBCS required the evaluation of 378,000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378,000 combinations. Conclusion Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. PMID:26471190

  7. Performance of the air2stream model that relates air and stream water temperatures depends on the calibration method

    NASA Astrophysics Data System (ADS)

    Piotrowski, Adam P.; Napiorkowski, Jaroslaw J.

    2018-06-01

    A number of physical or data-driven models have been proposed to evaluate stream water temperatures based on hydrological and meteorological observations. However, physical models require a large amount of information that is frequently unavailable, while data-based models ignore the physical processes. Recently the air2stream model has been proposed as an intermediate alternative that is based on physical heat budget processes, but it is so simplified that the model may be applied like data-driven ones. However, the price for simplicity is the need to calibrate eight parameters that, although they have some physical meaning, cannot be measured or evaluated a priori. As a result, the applicability and performance of the air2stream model for a particular stream rely on the efficiency of the calibration method. The original air2stream model uses an inefficient, 20-year-old approach called Particle Swarm Optimization with inertia weight. This study aims at finding an effective and robust calibration method for the air2stream model. Twelve different optimization algorithms are examined on six different streams from the northern USA (states of Washington, Oregon and New York), Poland and Switzerland, located in high-mountain, hilly and lowland areas. It is found that the performance of the air2stream model depends significantly on the calibration method. Two algorithms led to the best results for each considered stream. The air2stream model, calibrated with the chosen optimization methods, performs favorably against classical stream water temperature models. The MATLAB code of the air2stream model and the chosen calibration procedure (CoBiDE) are available as Supplementary Material on the Journal of Hydrology web page.
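    To make the calibration problem concrete, the sketch below calibrates a simplified three-parameter air2stream-like model (the full model has eight parameters and a discharge-dependent term) with plain random search; the study itself examined twelve different optimization algorithms. All data and parameter values here are synthetic:

```python
import random

def simulate_tw(ta_series, a1, a2, a3, tw0=10.0, dt=1.0):
    """Forward-Euler integration of a simplified three-parameter variant:
    dTw/dt = a1 + a2*Ta - a3*Tw (hypothetical reduction, no discharge term)."""
    tw, out = tw0, []
    for ta in ta_series:
        tw = tw + dt * (a1 + a2 * ta - a3 * tw)
        out.append(tw)
    return out

def calibrate(ta_series, tw_obs, n_trials=3000, seed=1):
    """Toy random-search calibration minimizing RMSE; a stand-in for the
    population-based optimizers compared in the study."""
    rng = random.Random(seed)
    best, best_rmse = None, float("inf")
    for _ in range(n_trials):
        a1 = rng.uniform(-1.0, 1.0)
        a2 = rng.uniform(0.0, 0.5)
        a3 = rng.uniform(0.0, 0.5)
        sim = simulate_tw(ta_series, a1, a2, a3)
        rmse = (sum((s - o) ** 2 for s, o in zip(sim, tw_obs))
                / len(tw_obs)) ** 0.5
        if rmse < best_rmse:
            best, best_rmse = (a1, a2, a3), rmse
    return best, best_rmse
```

    Even in this toy setting, the calibrated fit depends on how well the optimizer searches the parameter space, which is the study's central point.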

  8. Performance evaluation and clinical applications of 3D plenoptic cameras

    NASA Astrophysics Data System (ADS)

    Decker, Ryan; Shademan, Azad; Opfermann, Justin; Leonard, Simon; Kim, Peter C. W.; Krieger, Axel

    2015-06-01

    The observation and 3D quantification of arbitrary scenes using optical imaging systems is challenging, but increasingly necessary in many fields. This paper provides a technical basis for the application of plenoptic cameras in medical and medical robotics applications, and rigorously evaluates camera integration and performance in the clinical setting. It discusses plenoptic camera calibration and setup, and assesses plenoptic imaging in a clinically relevant context and in the context of other quantitative imaging technologies. We report the methods used for camera calibration, and precision and accuracy results in ideal and simulated surgical settings. Afterwards, we report performance during a surgical task. Test results showed the average precision of the plenoptic camera to be 0.90 mm, increasing to 1.37 mm for tissue across the calibrated FOV. The ideal accuracy was 1.14 mm. The camera showed submillimeter error during a simulated surgical task.

  9. Follow-up of solar lentigo depigmentation with a retinaldehyde-based cream by clinical evaluation and calibrated colour imaging.

    PubMed

    Questel, E; Durbise, E; Bardy, A-L; Schmitt, A-M; Josse, G

    2015-05-01

    To assess an objective method for evaluating the effects of a retinaldehyde-based cream (RA-cream) on solar lentigines, 29 women randomly applied RA-cream to lentigines on one hand and a control cream to the other, once daily for 3 months. A specific method enabling reliable visualisation of the lesions was proposed, using high-magnification colour-calibrated camera imaging. Assessment was performed using clinical evaluation by Physician Global Assessment score and image analysis. Luminance determination on the numeric images was performed either on the basis of five independent experts' consensus borders or by probability-map analysis via an algorithm automatically detecting the pigmented area. Both image analysis methods showed a similar lightening of ΔL* = 2 after 3 months of treatment with RA-cream, in agreement with single-blind clinical evaluation. High-magnification colour-calibrated camera imaging combined with probability-map analysis is a fast and precise method to follow lentigo depigmentation. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  10. Evaluation of EIT system performance.

    PubMed

    Yasin, Mamatjan; Böhm, Stephan; Gaggero, Pascal O; Adler, Andy

    2011-07-01

    An electrical impedance tomography (EIT) system images internal conductivity from surface electrical stimulation and measurement. Such systems necessarily comprise multiple design choices, from cables and hardware design to calibration and image reconstruction. In order to compare EIT systems and study the consequences of changes in system performance, this paper describes a systematic approach to evaluating the performance of EIT systems. The system to be tested is connected to a saline phantom in which calibrated contrasting test objects are systematically positioned using a position controller. A set of evaluation parameters is proposed which characterize (i) data and image noise, (ii) data accuracy, (iii) detectability of single contrasts and distinguishability of multiple contrasts, and (iv) accuracy of the reconstructed image (amplitude, resolution, position and ringing). Using this approach, we evaluate three different EIT systems and illustrate the use of these tools to evaluate and compare performance. In order to facilitate the use of this approach, all details of the phantom, test objects and position controller design are made publicly available, including the source code of the evaluation and reporting software.

  11. OrbView-3 Technical Performance Evaluation 2005: Modulation Transfer Function

    NASA Technical Reports Server (NTRS)

    Cole, Aaron

    2007-01-01

    A technical performance evaluation of OrbView-3 using the Modulation Transfer Function (MTF) is presented. The contents include: 1) MTF Results and Methodology; 2) Radiometric Calibration Methodology; and 3) Relative Radiometric Assessment Results.

  12. Indirect Correspondence-Based Robust Extrinsic Calibration of LiDAR and Camera

    PubMed Central

    Sim, Sungdae; Sock, Juil; Kwak, Kiho

    2016-01-01

    LiDAR and cameras have been broadly utilized in computer vision and autonomous vehicle applications. However, in order to convert data between the local coordinate systems, we must estimate the rigid body transformation between the sensors. In this paper, we propose a robust extrinsic calibration algorithm that can be implemented easily and has small calibration error. The extrinsic calibration parameters are estimated by minimizing the distance between corresponding features projected onto the image plane. The features are edge and centerline features on a v-shaped calibration target. The proposed algorithm contributes to calibration accuracy in two ways. First, we apply different weights to the distance between a point and a line feature according to the correspondence accuracy of the features. Second, we apply a penalizing function to exclude the influence of outliers in the calibration datasets. Additionally, based on our robust calibration approach for a single LiDAR-camera pair, we introduce a joint calibration that estimates the extrinsic parameters of multiple sensors at once by minimizing one objective function with loop-closing constraints. We conduct several experiments to evaluate the performance of our extrinsic calibration algorithm. The experimental results show that our calibration method has better performance than the other approaches. PMID:27338416
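The two ideas highlighted here, per-correspondence weights and a penalizing function for outliers, can be sketched on a 2-D toy problem. This is illustrative only: a Huber loss plays the penalizing function, a 1-D grid search over a single image-plane offset replaces the full 6-DoF optimization, and every number is hypothetical.

```python
import math

def point_line_distance(p, line):
    """Distance from point p=(x, y) to line ax+by+c=0 with a^2+b^2=1."""
    a, b, c = line
    return abs(a * p[0] + b * p[1] + c)

def huber(r, delta=1.0):
    """Penalizing function: quadratic near zero, linear for outliers."""
    return 0.5 * r * r if r <= delta else delta * (r - 0.5 * delta)

def cost(offset, points, lines, weights):
    """Weighted robust cost of projecting shifted points onto their
    corresponding image lines."""
    total = 0.0
    for (x, y), line, w in zip(points, lines, weights):
        r = point_line_distance((x + offset, y), line)
        total += w * huber(r)
    return total

# toy correspondences: three inliers near the line x=2, one gross outlier
lines = [(1.0, 0.0, -2.0)] * 4
points = [(1.0, 0.0), (1.0, 1.0), (1.0, 2.0), (6.0, 3.0)]
weights = [1.0, 1.0, 1.0, 0.2]   # lower weight: less reliable correspondence
best = min((cost(o / 100, points, lines, weights), o / 100)
           for o in range(-300, 301))
```

With the outlier down-weighted and pushed onto the linear branch of the Huber loss, the recovered offset stays close to the inlier solution (+1) instead of being dragged toward the outlier.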

  13. Microwave cryogenic thermal-noise standards

    NASA Technical Reports Server (NTRS)

    Stelzried, C. T.

    1971-01-01

    A field-operational waveguide noise standard with a nominal noise temperature of 78.09 ± 0.12 K is calibrated more precisely than before. The calibration technique applies to various disciplines such as microwave radiometry, antenna temperature and loss measurement, and low-noise amplifier performance evaluation.

  14. Evaluation of the AnnAGNPS Model for Predicting Runoff and Nutrient Export in a Typical Small Watershed in the Hilly Region of Taihu Lake.

    PubMed

    Luo, Chuan; Li, Zhaofu; Li, Hengpeng; Chen, Xiaomin

    2015-09-02

    The application of hydrological and water quality models is an efficient approach to better understand the processes of environmental deterioration. This study evaluated the ability of the Annualized Agricultural Non-Point Source (AnnAGNPS) model to predict runoff, total nitrogen (TN) and total phosphorus (TP) loading in a typical small watershed of a hilly region near Taihu Lake, China. Runoff was calibrated and validated at both annual and monthly scales, and parameter sensitivity analysis was performed for TN and TP before the two water quality components were calibrated. The results showed that the model satisfactorily simulated runoff at annual and monthly scales, during both the calibration and validation processes. Additionally, the sensitivity analysis showed that TN output was most sensitive to the parameters Fertilizer rate, Fertilizer organic, Canopy cover and Fertilizer inorganic. In terms of TP, the parameters Residue mass ratio, Fertilizer rate, Fertilizer inorganic and Canopy cover were the most sensitive. Calibration was performed based on these sensitive parameters. TN loading produced satisfactory results for both the calibration and validation processes, whereas the performance for TP loading was slightly poorer. The simulation results showed that AnnAGNPS has the potential to be used as a valuable tool for the planning and management of watersheds.

  15. Comparison and uncertainty evaluation of different calibration protocols and ionization chambers for low-energy surface brachytherapy dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candela-Juan, C., E-mail: ccanjuan@gmail.com; Vijande, J.; García-Martínez, T.

    2015-08-15

    Purpose: A surface electronic brachytherapy (EBT) device is in fact an x-ray source collimated with specific applicators. Low-energy (<100 kVp) x-ray beam dosimetry faces several challenges that need to be addressed. A number of calibration protocols have been published for x-ray beam dosimetry; the media in which measurements are performed are the fundamental difference between them. The aim of this study was to evaluate the surface dose rate of a low-energy x-ray source with small field applicators using different calibration standards and different small-volume ionization chambers, comparing the values and uncertainties of each methodology. Methods: The surface dose rate of the EBT unit Esteya (Elekta Brachytherapy, The Netherlands), a 69.5 kVp x-ray source with applicators of 10, 15, 20, 25, and 30 mm diameter, was evaluated using the AAPM TG-61 (based on air kerma) and International Atomic Energy Agency (IAEA) TRS-398 (based on absorbed dose to water) dosimetry protocols for low-energy photon beams. A plane-parallel T34013 ionization chamber (PTW Freiburg, Germany) calibrated in terms of both absorbed dose to water and air kerma was used to compare the two dosimetry protocols. Another PTW chamber of the same model was used to evaluate the reproducibility between these chambers. Measurements were also performed with two different Exradin A20 (Standard Imaging, Inc., Middleton, WI) chambers calibrated in terms of air kerma. Results: Differences between surface dose rates measured in air and in water using the T34013 chamber range from 1.6% to 3.3%. No field size dependence has been observed. Differences are below 3.7% when measurements with the A20 and the T34013 chambers calibrated in air are compared. Estimated uncertainty (with coverage factor k = 1) for the T34013 chamber calibrated in water is 2.2%–2.4%, whereas it increases to 2.5% and 2.7% for the A20 and T34013 chambers calibrated in air, respectively. The output factors, measured with the PTW chambers, differ by less than 1.1% for any applicator size when compared to the output factors measured with the A20 chamber. Conclusions: Measurements using both dosimetric protocols are consistent once the overall uncertainties are considered. There is also consistency between measurements performed with both chambers calibrated in air. Both the T34013 and A20 chambers have negligible stem effect. Any x-ray surface brachytherapy system, including Esteya, can be characterized using either one of these calibration protocols and ionization chambers. With fewer correction factors, lower uncertainty, and measurements performed under conditions closer to clinical practice, the TRS-398 protocol seems to be the preferred option.

  16. Radon Assessment of Occupational Facilities, Grissom ARB, IN

    DTIC Science & Technology

    2012-11-28

    4) EPA 402-R-92-014, Radon Measurements in Schools, July 1993 b. Measurement Device Protocols: The following protocols were used when placing...Performance Tests: A biennial performance test from commercial vendors evaluates the proficiency of USAFSAM's radon analysis. A proficiency test...listing of duplicates and analysis. (4) Calibration Tests: Please see Attachment 4 for calibration certificates. 3. RESULTS: In total, 106 radon monitors

  17. Sensitivity and Calibration of Non-Destructive Evaluation Method That Uses Neural-Net Processing of Characteristic Fringe Patterns

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.; Weiland, Kenneth E.

    2003-01-01

    This paper answers some performance and calibration questions about a non-destructive evaluation (NDE) procedure that uses artificial neural networks to detect structural damage or other changes from sub-sampled characteristic patterns. The method shows increasing sensitivity as the number of sub-samples increases from 108 to 6912. The sensitivity of this robust NDE method is not affected by noisy excitations of the first vibration mode. A calibration procedure is proposed and demonstrated in which the output of a trained net can be correlated with the outputs of the point sensors used for vibration testing. The calibration procedure is based on controlled changes of fastener torques. A heterodyne interferometer is used as a displacement sensor to demonstrate the challenges to be handled in using standard point sensors for calibration.

  18. Two laboratory methods for the calibration of GPS speed meters

    NASA Astrophysics Data System (ADS)

    Bai, Yin; Sun, Qiao; Du, Lei; Yu, Mei; Bai, Jie

    2015-01-01

    The set-ups of two calibration systems are presented to investigate calibration methods for GPS speed meters. The GPS speed meter calibrated is a special type of high-accuracy speed meter for vehicles, which uses Doppler demodulation of GPS signals to calculate the measured speed of a moving target. Three experiments are performed: simulated calibration, field-test signal replay calibration, and an in-field comparison with an optical speed meter. The experiments are conducted at specific speeds in the range of 40-180 km h-1 with the same GPS speed meter as the device under calibration. The evaluation of measurement results validates both methods for calibrating GPS speed meters. The relative deviations between the measurement results of the GPS-based high-accuracy speed meter and those of the optical speed meter are analyzed, and the equivalent uncertainty of the comparison is evaluated. The comparison results justify the utilization of GPS speed meters as reference equipment if no fewer than seven satellites are available. This study contributes to the widespread use of GPS-based high-accuracy speed meters as legal reference equipment in traffic speed metrology.

  19. Multisite Evaluation of APEX for Water Quality: II. Regional Parameterization.

    PubMed

    Nelson, Nathan O; Baffaut, Claire; Lory, John A; Anomaa Senaviratne, G M M M; Bhandari, Ammar B; Udawatta, Ranjith P; Sweeney, Daniel W; Helmers, Matt J; Van Liew, Mike W; Mallarino, Antonio P; Wortmann, Charles S

    2017-11-01

    Phosphorus (P) Index assessment requires independent estimates of long-term average annual P loss from fields, representing multiple climatic scenarios, management practices, and landscape positions. Because currently available measured data are insufficient to evaluate P Index performance, calibrated and validated process-based models have been proposed as tools to generate the required data. The objectives of this research were to develop a regional parameterization for the Agricultural Policy Environmental eXtender (APEX) model to estimate edge-of-field runoff, sediment, and P losses in restricted-layer soils of Missouri and Kansas and to assess the performance of this parameterization using monitoring data from multiple sites in this region. Five site-specific calibrated models (SSCM) from within the region were used to develop a regionally calibrated model (RCM), which was further calibrated and validated with measured data. Performance of the RCM was similar to that of the SSCMs for runoff simulation and had Nash-Sutcliffe efficiency (NSE) > 0.72 and absolute percent bias (|PBIAS|) < 18% for both calibration and validation. The RCM could not simulate sediment loss (NSE < 0, |PBIAS| > 90%) and was particularly ineffective at simulating sediment loss from locations with small sediment loads. The RCM had acceptable performance for simulation of total P loss (NSE > 0.74, |PBIAS| < 30%) but underperformed the SSCMs. Total P-loss estimates should be used with caution due to poor simulation of sediment loss. Although we did not attain our goal of a robust regional parameterization of APEX for estimating sediment and total P losses, runoff estimates with the RCM were acceptable for P Index evaluation. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
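The NSE and |PBIAS| criteria reported in this record are simple to compute. A minimal stdlib sketch, using the common sign convention in which positive PBIAS indicates model underestimation (the data below are hypothetical, not from the study):

```python
def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; values <= 0 mean
    the model is no better than predicting the observed mean."""
    mean = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean) ** 2 for o in obs)
    return 1.0 - num / den

def pbias(obs, sim):
    """Percent bias: 100 * sum(obs - sim) / sum(obs); positive values
    indicate underestimation under this convention."""
    return 100.0 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)

obs = [1.0, 2.0, 3.0, 4.0]   # observed runoff (hypothetical)
sim = [1.1, 1.9, 3.2, 3.8]   # simulated runoff (hypothetical)
```

Against thresholds like the study's NSE > 0.72 and |PBIAS| < 18%, a simulation is judged acceptable when both conditions hold for the evaluation period.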

  20. Performance analysis of a film dosimetric quality assurance procedure for IMRT with regard to the employment of quantitative evaluation methods.

    PubMed

    Winkler, Peter; Zurl, Brigitte; Guss, Helmuth; Kindl, Peter; Stuecklschweiger, Georg

    2005-02-21

    A system for dosimetric verification of intensity-modulated radiotherapy (IMRT) treatment plans using absolute calibrated radiographic films is presented. At our institution this verification procedure is performed for all IMRT treatment plans prior to patient irradiation. Therefore clinical treatment plans are transferred to a phantom and recalculated. Composite treatment plans are irradiated to a single film. Film density to absolute dose conversion is performed automatically based on a single calibration film. A software application encompassing film calibration, 2D registration of measurement and calculated distributions, image fusion, and a number of visual and quantitative evaluation utilities was developed. The main topic of this paper is a performance analysis for this quality assurance procedure, with regard to the specification of tolerance levels for quantitative evaluations. Spatial and dosimetric precision and accuracy were determined for the entire procedure, comprising all possible sources of error. The overall dosimetric and spatial measurement uncertainties obtained thereby were 1.9% and 0.8 mm respectively. Based on these results, we specified 5% dose difference and 3 mm distance-to-agreement as our tolerance levels for patient-specific quality assurance for IMRT treatments.
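The 5% / 3 mm tolerance mentioned at the end can be sketched as a discrete composite check on 1-D dose profiles: a point passes on local dose difference, or failing that, on finding an agreeing calculated point within the distance-to-agreement radius. Clinical tools interpolate and typically run a full gamma analysis, so this is only an illustration with hypothetical numbers.

```python
def passes(measured, calculated, positions, dose_tol=0.05, dta_mm=3.0):
    """Composite check on 1-D dose profiles: a measured point passes if
    its local dose difference is within dose_tol of the maximum dose,
    or if some calculated point within dta_mm matches its dose."""
    dmax = max(calculated)
    results = []
    for x, dm, dc_local in zip(positions, measured, calculated):
        if abs(dm - dc_local) <= dose_tol * dmax:
            results.append(True)
            continue
        results.append(any(
            abs(xc - x) <= dta_mm and abs(dc - dm) <= dose_tol * dmax
            for xc, dc in zip(positions, calculated)))
    return results

positions = [0.0, 2.0, 4.0, 6.0, 8.0]         # mm, hypothetical grid
calculated = [10.0, 20.0, 50.0, 80.0, 100.0]  # planned dose (arb. units)
measured = [10.0, 21.0, 55.0, 80.0, 90.0]     # film measurement (arb. units)
result = passes(measured, calculated, positions)
```

In this toy profile the first four points pass (the third only via the dose tolerance at a gradient), while the last fails both the dose and the distance criterion.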

  1. A novel spatial performance metric for robust pattern optimization of distributed hydrological models

    NASA Astrophysics Data System (ADS)

    Stisen, S.; Demirel, C.; Koch, J.

    2017-12-01

    Evaluation of performance is an integral part of model development and calibration, and it is of paramount importance when communicating modelling results to stakeholders and the scientific community. The hydrological modelling community has a comprehensive and well-tested toolbox of metrics to assess temporal model performance. In contrast, experience in evaluating spatial performance has not kept pace with the wide availability of spatial observations or with the sophisticated model codes simulating the spatial variability of complex hydrological processes. This study aims to make a contribution towards advancing spatial-pattern-oriented model evaluation for distributed hydrological models. This is achieved by introducing a novel spatial performance metric which provides robust pattern performance during model calibration. The promoted SPAtial EFficiency (SPAEF) metric reflects three equally weighted components: correlation, coefficient of variation and histogram overlap. This multi-component approach is necessary in order to adequately compare spatial patterns. SPAEF, its three components individually, and two alternative spatial performance metrics, i.e. connectivity analysis and fractions skill score, are tested in a spatial-pattern-oriented calibration of a catchment model in Denmark. The calibration is constrained by a remote-sensing-based spatial pattern of evapotranspiration and discharge time series at two stations. Our results stress that stand-alone metrics tend to fail to provide holistic pattern information to the optimizer, which underlines the importance of multi-component metrics. The three SPAEF components are independent, which allows them to complement each other in a meaningful way. This study promotes the use of bias-insensitive metrics which allow comparing variables that are related but may differ in unit, in order to optimally exploit spatial observations made available by remote sensing platforms. We see great potential for SPAEF across environmental disciplines dealing with spatially distributed modelling.
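One common formulation of the SPAEF metric combines the three components the abstract names as SPAEF = 1 − sqrt((α−1)² + (β−1)² + (γ−1)²), with α the Pearson correlation, β the ratio of coefficients of variation (simulated over observed), and γ the histogram overlap of the z-scored fields. The stdlib-only sketch below works on flattened 1-D fields and makes those assumptions explicit; the published implementation may differ in detail.

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def cv(a):
    """Coefficient of variation (population std over mean)."""
    n = len(a)
    m = sum(a) / n
    return math.sqrt(sum((x - m) ** 2 for x in a) / n) / m

def hist_overlap(a, b, bins=10):
    """Histogram intersection of z-scored fields, as a fraction in [0, 1]."""
    def zscore(v):
        n = len(v)
        m = sum(v) / n
        sd = math.sqrt(sum((x - m) ** 2 for x in v) / n)
        return [(x - m) / sd for x in v]
    za, zb = zscore(a), zscore(b)
    lo, hi = min(za + zb), max(za + zb)
    w = (hi - lo) / bins or 1.0
    def hist(v):
        h = [0] * bins
        for x in v:
            h[min(int((x - lo) / w), bins - 1)] += 1
        return h
    ha, hb = hist(za), hist(zb)
    return sum(min(x, y) for x, y in zip(ha, hb)) / len(a)

def spaef(obs, sim, bins=10):
    """SPAEF = 1 - sqrt((alpha-1)^2 + (beta-1)^2 + (gamma-1)^2)."""
    alpha = pearson(obs, sim)
    beta = cv(sim) / cv(obs)
    gamma = hist_overlap(obs, sim, bins)
    return 1.0 - math.sqrt((alpha - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)
```

Because β compares coefficients of variation and γ compares z-scored histograms, the metric is insensitive to a uniform bias between the two fields, which is exactly the property the study promotes for comparing related variables in different units.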

  2. WE-D-9A-06: Open Source Monitor Calibration and Quality Control Software for Enterprise Display Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bevins, N; Vanderhoek, M; Lang, S

    2014-06-15

    Purpose: Medical display monitor calibration and quality control present challenges to medical physicists. The purpose of this work is to demonstrate and share experiences with an open source package that allows for both initial monitor setup and routine performance evaluation. Methods: A software package, pacsDisplay, has been developed over the last decade to aid in the calibration of all monitors within the radiology group in our health system. The software is used to calibrate monitors to follow the DICOM Grayscale Standard Display Function (GSDF) via lookup tables installed on the workstation. Additional functionality facilitates periodic evaluations of both primary and secondary medical monitors to ensure satisfactory performance. This software is installed on all radiology workstations, and can also be run as a stand-alone tool from a USB disk. Recently, a database has been developed to store and centralize the monitor performance data and to provide long-term trends for compliance with internal standards and various accrediting organizations. Results: Implementation and utilization of pacsDisplay has resulted in improved monitor performance across the health system. Monitor testing is now performed at regular intervals and the software is being used across multiple imaging modalities. Monitor performance characteristics such as maximum and minimum luminance, ambient luminance and illuminance, color tracking, and GSDF conformity are loaded into a centralized database for system performance comparisons. Compliance reports for organizations such as MQSA, ACR, and TJC are generated automatically and stored in the same database. Conclusion: An open source software solution has simplified and improved the standardization of displays within our health system. This work serves as an example method for calibrating and testing monitors within an enterprise health system.

  3. Calibration of automatic performance measures - speed and volume data : volume 1, evaluation of the accuracy of traffic volume counts collected by microwave sensors.

    DOT National Transportation Integrated Search

    2015-09-01

    Over the past few years, the Utah Department of Transportation (UDOT) has developed a system called the Signal Performance Metrics System (SPMS) to evaluate the performance of signalized intersections. This system currently provides data summarie...

  4. The calibration and flight test performance of the space shuttle orbiter air data system

    NASA Technical Reports Server (NTRS)

    Dean, A. S.; Mena, A. L.

    1983-01-01

    The Space Shuttle air data system (ADS) is used by the guidance, navigation and control system (GN&C) to guide the vehicle to a safe landing. In addition, postflight aerodynamic analysis requires a precise knowledge of flight conditions. Since the orbiter is essentially an unpowered vehicle, the conventional methods of obtaining the ADS calibration were not available; therefore, the calibration was derived using a unique and extensive wind tunnel test program. This test program included subsonic tests with a 0.36-scale orbiter model, transonic and supersonic tests with a smaller 0.2-scale model, and numerous ADS probe-alone tests. The wind tunnel calibration was further refined with subsonic results from the approach and landing test (ALT) program, thus producing the ADS calibration for the orbital flight test (OFT) program. The calibration of the Space Shuttle ADS and its performance during flight are discussed in this paper. A brief description of the system is followed by a discussion of the calibration methodology, and then by a review of the wind tunnel and flight test programs. Finally, the flight results are presented, including an evaluation of the system performance for on-board systems use and a description of the calibration refinements developed to provide the best possible air data for postflight analysis work.

  5. Artifact correction and absolute radiometric calibration techniques employed in the Landsat 7 image assessment system

    USGS Publications Warehouse

    Boncyk, Wayne C.; Markham, Brian L.; Barker, John L.; Helder, Dennis

    1996-01-01

    The Landsat-7 Image Assessment System (IAS), part of the Landsat-7 Ground System, will calibrate and evaluate the radiometric and geometric performance of the Enhanced Thematic Mapper Plus (ETM+) instrument. The IAS incorporates new instrument radiometric artifact correction and absolute radiometric calibration techniques which overcome some limitations to calibration accuracy inherent in historical calibration methods. Knowledge of ETM+ instrument characteristics gleaned from analysis of archival Thematic Mapper in-flight data and from ETM+ prelaunch tests allows the determination and quantification of the sources of instrument artifacts. This a priori knowledge will be utilized in IAS algorithms designed to minimize the effects of the noise sources before calibration, in both ETM+ image and calibration data.

  6. Calibration procedure for a laser triangulation scanner with uncertainty evaluation

    NASA Astrophysics Data System (ADS)

    Genta, Gianfranco; Minetola, Paolo; Barbato, Giulio

    2016-11-01

    Most of the low-cost 3D scanning devices available on the market today are sold without a user calibration procedure to correct measurement errors related to changes in environmental conditions. In addition, there is no specific international standard defining a procedure to check the performance of a 3D scanner over time. This paper aims at detailing a thorough methodology to calibrate a 3D scanner and assess its measurement uncertainty. The proposed procedure is based on the use of a reference ball plate and applied to a triangulation laser scanner. Experimental results show that the metrological performance of the instrument can be greatly improved by the application of the calibration procedure, which corrects systematic errors and reduces the device's measurement uncertainty.
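The systematic-error correction and uncertainty evaluation described here can be sketched as a GUM-style Type A evaluation against a calibrated reference artefact. The measurements and coverage factor below are hypothetical, not the authors' data.

```python
import math

def type_a_uncertainty(measurements, reference, k=2.0):
    """Bias (systematic error) and expanded uncertainty (coverage
    factor k) from repeated measurements of a calibrated reference."""
    n = len(measurements)
    mean = sum(measurements) / n
    bias = mean - reference
    s = math.sqrt(sum((m - mean) ** 2 for m in measurements) / (n - 1))
    u_mean = s / math.sqrt(n)     # standard uncertainty of the mean
    return bias, k * u_mean

meas = [25.08, 25.11, 25.05, 25.09, 25.07]  # mm, hypothetical scans of a ball spacing
bias, U = type_a_uncertainty(meas, reference=25.00)
corrected = [m - bias for m in meas]        # systematic-error correction
```

Subtracting the estimated bias centers the corrected measurements on the reference value; the expanded uncertainty then characterizes the remaining random scatter.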

  7. Radiation and Health Technology Laboratory Capabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bihl, Donald E.; Lynch, Timothy P.; Murphy, Mark K.

    2005-07-09

    The Radiological Standards and Calibrations Laboratory, a part of Pacific Northwest National Laboratory (PNNL), performs calibrations and upholds reference standards necessary to maintain traceability to national standards. The facility supports U.S. Department of Energy (DOE) programs at the Hanford Site, programs sponsored by DOE Headquarters and other federal agencies, radiological protection programs at other DOE and commercial nuclear sites, and research and characterization programs sponsored through the commercial sector. The laboratory is located in the 318 Building of the Hanford Site's 300 Area. The facility contains five major exposure rooms and several laboratories used for exposure work preparation, low-activity instrument calibrations, instrument performance evaluations, instrument maintenance, instrument design and fabrication work, thermoluminescent and radiochromic dosimetry, and calibration of measurement and test equipment (M&TE). The major exposure facilities are a low-scatter room used for neutron and photon exposures, a source well room used for high-volume instrument calibration work, an x-ray facility used for energy response studies, a high-exposure facility used for high-rate photon calibration work, a beta standards laboratory used for beta energy response studies and beta reference calibrations, and M&TE laboratories. Calibrations are routinely performed for personnel dosimeters, health physics instrumentation, photon and neutron transfer standards, alpha, beta, and gamma field sources used throughout the Hanford Site, and a wide variety of M&TE. This report describes the standards and calibrations laboratory.

  8. Sky camera geometric calibration using solar observations

    DOE PAGES

    Urquhart, Bryan; Kurtz, Ben; Kleissl, Jan

    2016-09-05

    A camera model and associated automated calibration procedure for stationary daytime sky imaging cameras is presented. The specific modeling and calibration needs are motivated by remotely deployed cameras used to forecast solar power production, where cameras point skyward and use 180° fisheye lenses. Sun position in the sky and on the image plane provides a simple and automated approach to calibration; special equipment or calibration patterns are not required. Sun position in the sky is modeled using a solar position algorithm (requiring latitude, longitude, altitude and time as inputs). Sun position on the image plane is detected using a simple image processing algorithm. The performance evaluation focuses on the calibration of a camera employing a fisheye lens with an equisolid angle projection, but the camera model is general enough to treat most fixed focal length, central, dioptric camera systems with a photo objective lens. Calibration errors scale with the noise level of the sun position measurement in the image plane, but the calibration is robust across a large range of noise in the sun position. In conclusion, calibration performance on clear days ranged from 0.94 to 1.24 pixels root mean square error.
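The equisolid-angle projection named in this record maps an incidence angle θ to a radial image distance r = 2f·sin(θ/2). A toy sketch of fitting the focal length from simulated sun positions follows; the grid search stands in for the authors' calibration procedure, and all numbers are hypothetical.

```python
import math

def equisolid_radius(theta, f):
    """Equisolid-angle projection: radial distance r = 2 f sin(theta/2)."""
    return 2.0 * f * math.sin(theta / 2.0)

def equisolid_angle(r, f):
    """Inverse mapping from radial distance back to incidence angle."""
    return 2.0 * math.asin(r / (2.0 * f))

def calibration_rms(f, thetas, radii):
    """RMS residual (pixels) between modeled and detected sun positions."""
    n = len(thetas)
    return math.sqrt(sum((equisolid_radius(t, f) - r) ** 2
                         for t, r in zip(thetas, radii)) / n)

# toy 'detected' sun positions generated with a known focal length
f_true = 800.0
thetas = [math.radians(a) for a in range(5, 90, 5)]
radii = [equisolid_radius(t, f_true) for t in thetas]
# 1-D grid search over focal length, mimicking the model fit
f_hat = min(range(700, 901),
            key=lambda f: calibration_rms(float(f), thetas, radii))
```

In practice the fitted parameters also include the principal point and lens distortion terms, and the sun-position residuals are noisy, which is where the reported 0.94 to 1.24 pixel RMS error comes from.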

  9. Evaluation of the TOPSAR performance by using passive and active calibrators

    NASA Technical Reports Server (NTRS)

    Alberti, G.; Moccia, A.; Ponte, S.; Vetrella, S.

    1992-01-01

    A preliminary analysis of the C-band cross-track interferometric (XTI) data acquired during the MAC Europe 1991 campaign over the Matera test site in southern Italy is presented. Twenty-three passive calibrators (corner reflectors, CR) and three active calibrators (Active Radar Calibrators, ARC) were deployed over an area characterized by a homogeneous background. Concurrently with the flight, a ground-truth data collection campaign was carried out. The research activity was focused on the development of motion compensation algorithms, in order to improve the height measurement accuracy of the TOPSAR system.

  10. A new method to calibrate Lagrangian model with ASAR images for oil slick trajectory.

    PubMed

    Tian, Siyu; Huang, Xiaoxia; Li, Hongga

    2017-03-15

    Since Lagrangian model coefficients vary with conditions, it is necessary to calibrate the model to obtain the optimal coefficient combination for a specific oil spill accident. This paper proposes a new method to calibrate a Lagrangian model with a time series of Envisat ASAR images. Oil slicks extracted from the time series of images form a detected trajectory of the oil slick. The Lagrangian model is calibrated by minimizing the difference between the simulated trajectory and the detected trajectory. The mean center position distance difference (MCPD) and the rotation difference (RD) of the oil slicks' or particles' standard deviational ellipses (SDEs) are calculated as two evaluation measures. These two parameters are used to evaluate the performance of the Lagrangian transport model with different coefficient combinations. The method is applied to the Penglai 19-3 oil spill accident. The simulation result with the calibrated model agrees well with related satellite observations. It is suggested that the new method is effective for calibrating Lagrangian models. Copyright © 2016 Elsevier Ltd. All rights reserved.
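The two evaluation measures named here derive from standard deviational ellipses. A minimal sketch of the mean center and a covariance-based ellipse orientation follows; the authors' exact SDE definition may differ, and the particle clouds are hypothetical.

```python
import math

def sde(points):
    """Mean center and major-axis orientation of the standard
    deviational ellipse of a 2-D point cloud."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    sxx = sum((p[0] - cx) ** 2 for p in points)
    syy = sum((p[1] - cy) ** 2 for p in points)
    sxy = sum((p[0] - cx) * (p[1] - cy) for p in points)
    theta = 0.5 * math.atan2(2.0 * sxy, sxx - syy)
    return (cx, cy), theta

def mcpd(points_a, points_b):
    """Mean center position distance between two particle clouds."""
    (ax, ay), _ = sde(points_a)
    (bx, by), _ = sde(points_b)
    return math.hypot(ax - bx, ay - by)

def rotation_difference(points_a, points_b):
    """Absolute difference of the SDE orientations (radians)."""
    _, ta = sde(points_a)
    _, tb = sde(points_b)
    return abs(ta - tb)

# hypothetical detected vs simulated slick particles: same shape, shifted
detected = [(0.0, 0.0), (1.0, 0.1), (2.0, -0.1), (3.0, 0.0)]
simulated = [(x + 1.0, y + 2.0) for x, y in detected]
drift = mcpd(detected, simulated)
```

A calibration loop in this spirit would rerun the transport model over candidate coefficient combinations and keep the one minimizing a combination of MCPD and RD against the ASAR-derived trajectory.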

  11. Effect of Using Extreme Years in Hydrologic Model Calibration Performance

    NASA Astrophysics Data System (ADS)

    Goktas, R. K.; Tezel, U.; Kargi, P. G.; Ayvaz, T.; Tezyapar, I.; Mesta, B.; Kentel, E.

    2017-12-01

    Hydrological models are useful for predicting system behaviour and developing management strategies to control it. Specifically, they can be used to evaluate streamflow at ungauged catchments, the effects of climate change or best management practices on water resources, or the identification of pollution sources in a watershed. This study is part of a TUBITAK project named "Development of a geographical information system based decision-making tool for water quality management of Ergene Watershed using pollutant fingerprints". Within the scope of this project, the water resources in the Ergene Watershed are first studied. Streamgages in the basin are identified, and daily streamflow measurements are obtained from the State Hydraulic Works of Turkey. The streamflow data are analysed using box-whisker plots, hydrographs and flow-duration curves, focusing on the identification of extreme periods, dry or wet. A hydrological model is then developed for the Ergene Watershed using HEC-HMS in the Watershed Modeling System (WMS) environment. The model is calibrated for various time periods, including dry and wet ones, and the calibration performance is evaluated using the Nash-Sutcliffe efficiency (NSE), correlation coefficient, percent bias (PBIAS) and root mean square error. It is observed that the calibration period affects model performance, and that the main purpose of the hydrological model's development should guide the selection of the calibration period. Acknowledgement: This study is funded by The Scientific and Technological Research Council of Turkey (TUBITAK) under Project Number 115Y064.
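    The calibration metrics listed in this record have standard definitions; a minimal sketch (function names are illustrative):

    ```python
    import numpy as np

    def nse(obs, sim):
        """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the
        model is no better than the mean of the observations."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def pbias(obs, sim):
        """Percent bias; positive values indicate underestimation
        under the common hydrology sign convention."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 100.0 * np.sum(obs - sim) / np.sum(obs)

    def rmse(obs, sim):
        """Root mean square error between observed and simulated flows."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return float(np.sqrt(np.mean((obs - sim) ** 2)))
    ```

    Comparing these scores across calibration periods (dry, wet, mixed) is what reveals the sensitivity of model performance to the chosen period.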

  12. Evaluation of the AnnAGNPS Model for Predicting Runoff and Nutrient Export in a Typical Small Watershed in the Hilly Region of Taihu Lake

    PubMed Central

    Luo, Chuan; Li, Zhaofu; Li, Hengpeng; Chen, Xiaomin

    2015-01-01

    The application of hydrological and water quality models is an efficient approach to better understand the processes of environmental deterioration. This study evaluated the ability of the Annualized Agricultural Non-Point Source (AnnAGNPS) model to predict runoff, total nitrogen (TN) and total phosphorus (TP) loading in a typical small watershed of a hilly region near Taihu Lake, China. Runoff was calibrated and validated at both an annual and monthly scale, and parameter sensitivity analysis was performed for TN and TP before the two water quality components were calibrated. The results showed that the model satisfactorily simulated runoff at annual and monthly scales, both during calibration and validation processes. Additionally, results of parameter sensitivity analysis showed that the parameters Fertilizer rate, Fertilizer organic, Canopy cover and Fertilizer inorganic were more sensitive to TN output. In terms of TP, the parameters Residue mass ratio, Fertilizer rate, Fertilizer inorganic and Canopy cover were the most sensitive. Based on these sensitive parameters, calibration was performed. TN loading produced satisfactory results for both the calibration and validation processes, whereas the performance of TP loading was slightly poor. The simulation results showed that AnnAGNPS has the potential to be used as a valuable tool for the planning and management of watersheds. PMID:26364642

  13. An Integrated Calibration Technique for Stereo Vision Systems (PREPRINT)

    DTIC Science & Technology

    2010-03-01

    An integrated calibration technique for stereo vision systems has been developed. To demonstrate and evaluate this calibration technique, multiple Wii Remotes (Wiimotes) from Nintendo were used to form stereo vision systems performing 3D motion capture in real time. This integrated technique is a two-step process.

  14. Meta-analysis of prediction model performance across multiple studies: Which scale helps ensure between-study normality for the C-statistic and calibration measures?

    PubMed

    Snell, Kym Ie; Ensor, Joie; Debray, Thomas Pa; Moons, Karel Gm; Riley, Richard D

    2017-01-01

    If individual participant data are available from multiple studies or clusters, then a prediction model can be externally validated multiple times. This allows the model's discrimination and calibration performance to be examined across different settings. Random-effects meta-analysis can then be used to quantify overall (average) performance and heterogeneity in performance. This typically assumes a normal distribution of 'true' performance across studies. We conducted a simulation study to examine this normality assumption for various performance measures relating to a logistic regression prediction model. We simulated data across multiple studies with varying degrees of variability in baseline risk or predictor effects and then evaluated the shape of the between-study distribution in the C-statistic, calibration slope, calibration-in-the-large, and E/O statistic, and possible transformations thereof. We found that a normal between-study distribution was usually reasonable for the calibration slope and calibration-in-the-large; however, the distributions of the C-statistic and E/O were often skewed across studies, particularly in settings with large variability in the predictor effects. Normality was vastly improved when using the logit transformation for the C-statistic and the log transformation for E/O, and therefore we recommend these scales to be used for meta-analysis. An illustrated example is given using a random-effects meta-analysis of the performance of QRISK2 across 25 general practices.
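    The recommended transformations can be illustrated with a minimal pooling sketch (an inverse-variance fixed-effect average on the transformed scale; the paper's random-effects machinery is omitted, and all names are illustrative):

    ```python
    import math

    def logit(p):
        """Map a probability-like statistic (e.g. a C-statistic) to the real line."""
        return math.log(p / (1.0 - p))

    def inv_logit(x):
        """Back-transform from the logit scale."""
        return 1.0 / (1.0 + math.exp(-x))

    def pooled_c(c_stats, variances_on_logit_scale):
        """Pool per-study C-statistics on the logit scale with
        inverse-variance weights, then back-transform the summary."""
        weights = [1.0 / v for v in variances_on_logit_scale]
        num = sum(w * logit(c) for w, c in zip(weights, c_stats))
        return inv_logit(num / sum(weights))
    ```

    The same pattern applies to E/O with `math.log` in place of `logit`; the point of the transformation is that averaging happens on a scale where between-study normality is more plausible.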

  15. Status of calibration and data evaluation of AMSR on board ADEOS-II

    NASA Astrophysics Data System (ADS)

    Imaoka, Keiji; Fujimoto, Yasuhiro; Kachi, Misako; Takeshima, Toshiaki; Igarashi, Tamotsu; Kawanishi, Toneo; Shibata, Akira

    2004-02-01

    The Advanced Microwave Scanning Radiometer (AMSR) is a multi-frequency, passive microwave radiometer on board the Advanced Earth Observing Satellite-II (ADEOS-II), currently called Midori-II. The instrument has eight frequency channels with dual polarization (except for the 50-GHz band), covering frequencies between 6.925 and 89.0 GHz. The measurement of 50-GHz channels is a first for this kind of conically scanning microwave radiometer. The basic concept of the instrument, including the hardware configuration and calibration method, is almost the same as that of AMSR for EOS (AMSR-E), the modified version of AMSR; its swath width of 1,600 km is wider than that of AMSR-E. In parallel with the calibration and data evaluation of the AMSR-E instrument, almost identical calibration activities have been carried out for the AMSR instrument. Since finishing the initial checkout phase, the instrument has been continuously obtaining data on a global basis. Time series of radiometer sensitivities and automatic gain control telemetry indicate stable instrument performance. For the radiometric calibration, we are now applying the same procedure that is being used for AMSR-E. This paper provides an overview of the instrument characteristics, instrument status, and preliminary results of the calibration and data evaluation activities.

  16. Development of composite calibration standard for quantitative NDE by ultrasound and thermography

    NASA Astrophysics Data System (ADS)

    Dayal, Vinay; Benedict, Zach G.; Bhatnagar, Nishtha; Harper, Adam G.

    2018-04-01

    Inspection of aircraft components for damage utilizing ultrasonic Non-Destructive Evaluation (NDE) is a time intensive endeavor. Additional time spent during aircraft inspections translates to added cost to the company performing them, and as such, reducing this expenditure is of great importance. There is also great variance in the calibration samples from one entity to another due to a lack of a common calibration set. By characterizing damage types, we can condense the required calibration sets and reduce the time required to perform calibration while also providing procedures for the fabrication of these standard sets. We present here our effort to fabricate composite samples with known defects and quantify the size and location of defects, such as delaminations, and impact damage. Ultrasonic and Thermographic images are digitally enhanced to accurately measure the damage size. Ultrasonic NDE is compared with thermography.

  17. On-orbit calibration for star sensors without priori information.

    PubMed

    Zhang, Hao; Niu, Yanxiong; Lu, Jiazhen; Zhang, Chengfen; Yang, Yanqiang

    2017-07-24

    The star sensor is a prerequisite navigation device for a spacecraft, and on-orbit calibration is an essential guarantee of its operational performance. However, traditional calibration methods rely on ground information and are invalid without a priori information; the resulting uncertain on-orbit parameters will eventually degrade the performance of the guidance, navigation and control system. In this paper, a novel calibration method without a priori information for on-orbit star sensors is proposed. First, a simplified back-propagation neural network is designed for focal length and principal point estimation along with system property evaluation, called coarse calibration. Then an unscented Kalman filter is adopted for the precise calibration of all parameters, including focal length, principal point and distortion. The proposed method benefits from self-initialization: no attitude or preinstalled sensor parameter is required, and precise star sensor parameter estimation can be achieved without a priori information, which is a significant improvement for on-orbit devices. Simulation and experiment results demonstrate that the calibration is easy to operate with high accuracy and robustness. The proposed method can satisfy the stringent requirements of most star sensors.

  18. Multimodal Spatial Calibration for Accurately Registering EEG Sensor Positions

    PubMed Central

    Chen, Shengyong; Xiao, Gang; Li, Xiaoli

    2014-01-01

    This paper proposes a fast and accurate calibration method to calibrate multiple multimodal sensors using a novel photogrammetry system for fast localization of EEG sensors. The EEG sensors are placed on human head and multimodal sensors are installed around the head to simultaneously obtain all EEG sensor positions. A multiple views' calibration process is implemented to obtain the transformations of multiple views. We first develop an efficient local repair algorithm to improve the depth map, and then a special calibration body is designed. Based on them, accurate and robust calibration results can be achieved. We evaluate the proposed method by corners of a chessboard calibration plate. Experimental results demonstrate that the proposed method can achieve good performance, which can be further applied to EEG source localization applications on human brain. PMID:24803954

  19. Nondestructive evaluation of soluble solid content in strawberry by near infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Guo, Zhiming; Huang, Wenqian; Chen, Liping; Wang, Xiu; Peng, Yankun

    This paper demonstrates the feasibility of using near infrared (NIR) spectroscopy combined with synergy interval partial least squares (siPLS) algorithms as a rapid nondestructive method to estimate the soluble solid content (SSC) of strawberry. Spectral preprocessing methods were selected by cross-validation during model calibration, and the partial least squares (PLS) algorithm was used to calibrate the regression model. The performance of the final model was evaluated by the root mean square error of calibration (RMSEC) and correlation coefficient (R2c) in the calibration set, and tested by the root mean square error of prediction (RMSEP) and correlation coefficient (R2p) in the prediction set. The optimal siPLS model was obtained after first-derivative spectral preprocessing, achieving RMSEC = 0.2259 and R2c = 0.9590 in the calibration set, and RMSEP = 0.2892 and R2p = 0.9390 in the prediction set. This work demonstrates that NIR spectroscopy and siPLS with efficient spectral preprocessing are a useful tool for the nondestructive evaluation of SSC in strawberry.

  20. Camera calibration: active versus passive targets

    NASA Astrophysics Data System (ADS)

    Schmalz, Christoph; Forster, Frank; Angelopoulou, Elli

    2011-11-01

    Traditionally, most camera calibrations rely on a planar target with well-known marks. However, the localization error of the marks in the image is a source of inaccuracy. We propose the use of high-resolution digital displays as active calibration targets to obtain more accurate calibration results for all types of cameras. The display shows a series of coded patterns to generate correspondences between world points and image points. This has several advantages. No special calibration hardware is necessary because suitable displays are practically ubiquitous. The method is fully automatic, and no identification of marks is necessary. For a coding scheme based on phase shifting, the localization accuracy is approximately independent of the camera's focus settings. Most importantly, higher accuracy can be achieved compared to passive targets, such as printed checkerboards. A rigorous evaluation is performed to substantiate this claim. Our active target method is compared to standard calibrations using a checkerboard target. We perform camera calibrations with different combinations of displays, cameras, and lenses, as well as with simulated images, and find markedly lower reprojection errors when using active targets. For example, in a stereo reconstruction task, the accuracy of a system calibrated with an active target is five times better.

  1. Accuracy evaluation of a new real-time continuous glucose monitoring algorithm in hypoglycemia.

    PubMed

    Mahmoudi, Zeinab; Jensen, Morten Hasselstrøm; Dencker Johansen, Mette; Christensen, Toke Folke; Tarnow, Lise; Christiansen, Jens Sandahl; Hejlesen, Ole

    2014-10-01

    The purpose of this study was to evaluate the performance of a new continuous glucose monitoring (CGM) calibration algorithm and to compare it with the Guardian(®) REAL-Time (RT) (Medtronic Diabetes, Northridge, CA) calibration algorithm in hypoglycemia. CGM data were obtained from 10 type 1 diabetes patients undergoing insulin-induced hypoglycemia. Data were obtained in two separate sessions using the Guardian RT CGM device. Data from the same CGM sensor were calibrated by two different algorithms: the Guardian RT algorithm and a new calibration algorithm. The accuracy of the two algorithms was compared using four performance metrics. The median (mean) of absolute relative deviation in the whole range of plasma glucose was 20.2% (32.1%) for the Guardian RT calibration and 17.4% (25.9%) for the new calibration algorithm. The mean (SD) sample-based sensitivity for the hypoglycemic threshold of 70 mg/dL was 31% (33%) for the Guardian RT algorithm and 70% (33%) for the new algorithm. The mean (SD) sample-based specificity at the same hypoglycemic threshold was 95% (8%) for the Guardian RT algorithm and 90% (16%) for the new calibration algorithm. The sensitivity of the event-based hypoglycemia detection for the hypoglycemic threshold of 70 mg/dL was 61% for the Guardian RT calibration and 89% for the new calibration algorithm. Application of the new calibration caused one false-positive instance for the event-based hypoglycemia detection, whereas the Guardian RT caused no false-positive instances. The overestimation of plasma glucose by CGM was corrected from 33.2 mg/dL in the Guardian RT algorithm to 21.9 mg/dL in the new calibration algorithm. The results suggest that the new algorithm may reduce the inaccuracy of Guardian RT CGM system within the hypoglycemic range; however, data from a larger number of patients are required to compare the clinical reliability of the two algorithms.
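    The primary accuracy metric in this record, absolute relative deviation of CGM readings against plasma glucose, is straightforward to compute; a minimal sketch (function names are illustrative):

    ```python
    import numpy as np

    def absolute_relative_deviation(cgm, reference):
        """Per-sample absolute relative deviation (%) of CGM readings
        against reference plasma glucose values."""
        cgm = np.asarray(cgm, float)
        ref = np.asarray(reference, float)
        return 100.0 * np.abs(cgm - ref) / ref

    def summarize_ard(cgm, reference):
        """Median and mean ARD, the two summaries reported in the abstract."""
        ard = absolute_relative_deviation(cgm, reference)
        return float(np.median(ard)), float(np.mean(ard))
    ```

    The gap between median and mean ARD (20.2% vs. 32.1% for the Guardian RT algorithm) signals a skewed error distribution, which is why both are reported.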

  2. On Lack of Robustness in Hydrological Model Development Due to Absence of Guidelines for Selecting Calibration and Evaluation Data: Demonstration for Data-Driven Models

    NASA Astrophysics Data System (ADS)

    Zheng, Feifei; Maier, Holger R.; Wu, Wenyan; Dandy, Graeme C.; Gupta, Hoshin V.; Zhang, Tuqiao

    2018-02-01

    Hydrological models are used for a wide variety of engineering purposes, including streamflow forecasting and flood-risk estimation. To develop such models, it is common to allocate the available data to calibration and evaluation data subsets. Surprisingly, the issue of how this allocation can affect model evaluation performance has been largely ignored in the research literature. This paper discusses the evaluation performance bias that can arise from how available data are allocated to calibration and evaluation subsets. As a first step to assessing this issue in a statistically rigorous fashion, we present a comprehensive investigation of the influence of data allocation on the development of data-driven artificial neural network (ANN) models of streamflow. Four well-known formal data splitting methods are applied to 754 catchments from Australia and the U.S. to develop 902,483 ANN models. Results clearly show that the choice of the method used for data allocation has a significant impact on model performance, particularly for runoff data that are more highly skewed, highlighting the importance of considering the impact of data splitting when developing hydrological models. The statistical behavior of the data splitting methods investigated is discussed and guidance is offered on the selection of the most appropriate data splitting methods to achieve representative evaluation performance for streamflow data with different statistical properties. Although our results are obtained for data-driven models, they highlight the fact that this issue is likely to have a significant impact on all types of hydrological models, especially conceptual rainfall-runoff models.

  3. Impact of dose calibrators quality control programme in Argentina

    NASA Astrophysics Data System (ADS)

    Furnari, J. C.; de Cabrejas, M. L.; del C. Rotta, M.; Iglicki, F. A.; Milá, M. I.; Magnavacca, C.; Dima, J. C.; Rodríguez Pasqués, R. H.

    1992-02-01

    The national Quality Control (QC) programme for radionuclide calibrators started 12 years ago. Accuracy and the implementation of a QC programme were evaluated over these years at 95 nuclear medicine laboratories where dose calibrators were in use. Throughout that time, the Metrology Group of CNEA has distributed 137Cs sealed sources to check stability and has performed periodic "checking rounds" and postal surveys using unknown samples (external quality control). An account of the results of both methods is presented. At present, more than 65% of the dose calibrators measure activities with an error of less than 10%.

  4. New blackbody standard for the evaluation and calibration of tympanic ear thermometers at the NPL, United Kingdom

    NASA Astrophysics Data System (ADS)

    McEvoy, Helen C.; Simpson, Robert; Machin, Graham

    2004-04-01

    The use of infrared tympanic thermometers for monitoring patient health is widespread. However, studies into the performance of these thermometers have questioned their accuracy and repeatability. To give users confidence in these devices, and to provide credibility in the measurements, it is necessary for them to be tested using an accredited, standard blackbody source, with a calibration traceable to the International Temperature Scale of 1990 (ITS-90). To address this need the National Physical Laboratory (NPL), UK, has recently set up a primary ear thermometer calibration (PET-C) source for the evaluation and calibration of tympanic (ear) thermometers over the range from 15 °C to 45 °C. The overall uncertainty of the PET-C source is estimated to be +/- 0.04 °C at k = 2. The PET-C source meets the requirements of the European Standard EN 12470-5: 2003 Clinical thermometers. It consists of a high emissivity blackbody cavity immersed in a bath of stirred liquid. The temperature of the blackbody is determined using an ITS-90 calibrated platinum resistance thermometer inserted close to the rear of the cavity. The temperature stability and uniformity of the PET-C source was evaluated and its performance validated. This paper provides a description of the PET-C along with the results of the validation measurements. To further confirm the performance of the PET-C source it was compared to the standard ear thermometer calibration sources of the National Metrology Institute of Japan (NMIJ), Japan and the Physikalisch-Technische Bundesanstalt (PTB), Germany. The results of this comparison will also be briefly discussed. The PET-C source extends the capability for testing ear thermometers offered by the NPL body temperature fixed-point source, described previously. An update on the progress with the commercialisation of the fixed-point source will be given.

  5. Beyond discrimination: A comparison of calibration methods and clinical usefulness of predictive models of readmission risk.

    PubMed

    Walsh, Colin G; Sharman, Kavya; Hripcsak, George

    2017-12-01

    Prior to implementing predictive models in novel settings, analyses of calibration and clinical usefulness remain as important as discrimination, but they are not frequently discussed. Calibration is a model's reflection of actual outcome prevalence in its predictions. Clinical usefulness refers to the utilities, costs, and harms of using a predictive model in practice. A decision analytic approach to calibrating and selecting an optimal intervention threshold may help maximize the impact of readmission risk and other preventive interventions. To select a pragmatic means of calibrating predictive models that requires a minimum amount of validation data and that performs well in practice. To evaluate the impact of miscalibration on utility and cost via clinical usefulness analyses. Observational, retrospective cohort study with electronic health record data from 120,000 inpatient admissions at an urban, academic center in Manhattan. The primary outcome was thirty-day readmission for three causes: all-cause, congestive heart failure, and chronic coronary atherosclerotic disease. Predictive modeling was performed via L1-regularized logistic regression. Calibration methods were compared including Platt Scaling, Logistic Calibration, and Prevalence Adjustment. Performance of predictive modeling and calibration was assessed via discrimination (c-statistic), calibration (Spiegelhalter Z-statistic, Root Mean Square Error [RMSE] of binned predictions, Sanders and Murphy Resolutions of the Brier Score, Calibration Slope and Intercept), and clinical usefulness (utility terms represented as costs). The amount of validation data necessary to apply each calibration algorithm was also assessed. C-statistics by diagnosis ranged from 0.7 for all-cause readmission to 0.86 (0.78-0.93) for congestive heart failure. 
Logistic Calibration and Platt Scaling performed best and this difference required analyzing multiple metrics of calibration simultaneously, in particular Calibration Slopes and Intercepts. Clinical usefulness analyses provided optimal risk thresholds, which varied by reason for readmission, outcome prevalence, and calibration algorithm. Utility analyses also suggested maximum tolerable intervention costs, e.g., $1720 for all-cause readmissions based on a published cost of readmission of $11,862. Choice of calibration method depends on availability of validation data and on performance. Improperly calibrated models may contribute to higher costs of intervention as measured via clinical usefulness. Decision-makers must understand underlying utilities or costs inherent in the use-case at hand to assess usefulness and will obtain the optimal risk threshold to trigger intervention with intervention cost limits as a result. Copyright © 2017 Elsevier Inc. All rights reserved.
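    The recalibration idea behind this record can be sketched with a minimal logistic recalibration (closely related to the Platt Scaling and Logistic Calibration methods the study compares): fit a one-dimensional logistic model mapping the raw predicted risks to recalibrated probabilities on a held-out validation set. The gradient-descent fit and hyperparameters below are illustrative assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def platt_scale(raw_probs, outcomes, lr=0.1, n_iter=5000):
        """Fit p' = sigmoid(a * logit(p) + b) on validation data by
        plain gradient descent on the log-loss; return the recalibrator."""
        eps = 1e-12
        p0 = np.clip(np.asarray(raw_probs, float), eps, 1 - eps)
        x = np.log(p0 / (1 - p0))            # logit of the raw predictions
        y = np.asarray(outcomes, float)
        a, b = 1.0, 0.0                      # start from the identity mapping
        for _ in range(n_iter):
            p = 1.0 / (1.0 + np.exp(-(a * x + b)))
            a -= lr * np.mean((p - y) * x)   # log-loss gradient wrt slope
            b -= lr * np.mean(p - y)         # log-loss gradient wrt intercept

        def recalibrate(q):
            q = np.clip(q, eps, 1 - eps)
            return 1.0 / (1.0 + np.exp(-(a * np.log(q / (1 - q)) + b)))
        return recalibrate
    ```

    A fitted slope near 1 and intercept near 0 indicate the original model was already well calibrated; deviations quantify the miscalibration the study measures via Calibration Slope and Intercept.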

  6. Input variable selection and calibration data selection for storm water quality regression models.

    PubMed

    Sun, Siao; Bertrand-Krajewski, Jean-Luc

    2013-01-01

    Storm water quality models are useful tools in storm water management. Interest has been growing in analyzing existing data to develop models for urban storm water quality evaluations. It is important to select appropriate model inputs when many candidate explanatory variables are available, and model calibration and verification are essential steps in any storm water quality modeling. This study investigates input variable selection and calibration data selection in storm water quality regression models. The two selection problems interact, so a procedure is developed to perform the two selection tasks in sequence. The procedure first selects model input variables using a cross-validation method; an appropriate number of variables is identified as model inputs to ensure that the model is neither overfitted nor underfitted. Based on the input selection results, calibration data selection is then studied. The uncertainty of model performance due to calibration data selection is investigated with a random selection method, and an approach using a cluster method is applied to enhance model calibration practice, based on the principle of selecting representative data for calibration. The comparison between results from the cluster selection method and random selection shows that the former can significantly improve the performance of calibrated models. It is found that the information content of the calibration data is important, in addition to its size.
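    The cluster-based selection principle can be sketched as follows (a minimal k-means standing in for whatever clustering the authors used; event features and function names are illustrative assumptions): group candidate storm events by their feature vectors, then take the event nearest each cluster center as a representative calibration set.

    ```python
    import numpy as np

    def kmeans(features, k, n_iter=100, seed=0):
        """Minimal k-means: returns cluster centers and labels."""
        rng = np.random.default_rng(seed)
        X = np.asarray(features, float)
        centers = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(n_iter):
            # assign each event to its nearest center, then update centers
            d2 = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
            labels = np.argmin(d2, axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = X[labels == j].mean(axis=0)
        return centers, labels

    def representative_indices(features, k):
        """Indices of the events nearest each cluster center."""
        X = np.asarray(features, float)
        centers, _ = kmeans(X, k)
        return [int(np.argmin(((X - c) ** 2).sum(-1))) for c in centers]
    ```

    Calibrating on such a representative subset, rather than a random one, is the mechanism by which the cluster method improves calibrated-model performance in the study.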

  7. Nitrogen dioxide and kerosene-flame soot calibration of photoacoustic instruments for measurement of light absorption by aerosols

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnott, W. Patrick; Moosmüller, Hans; Walker, John W.

    2000-12-01

    A nitrogen dioxide calibration method is developed to evaluate the theoretical calibration for a photoacoustic instrument used to measure light absorption by atmospheric aerosols at a laser wavelength of 532.0 nm. This method uses high concentrations of nitrogen dioxide so that both a simple extinction measurement and the photoacoustically obtained absorption measurement may be performed simultaneously. Since Rayleigh scattering is much less than absorption for this gas, the agreement between the extinction and absorption coefficients can be used to evaluate the theoretical calibration, so that the laser gas spectra are not needed. Photoacoustic theory is developed to account for strong absorption of the laser beam power in passage through the resonator. The findings are that the photoacoustic absorption based on heat-balance theory for the instrument compares well with the absorption inferred from the extinction measurement, and that both are well within values represented by published spectra of nitrogen dioxide. Photodissociation of nitrogen dioxide limits the calibration method to wavelengths longer than 398 nm. Extinction and absorption at 532 and 1047 nm were measured for kerosene-flame soot to evaluate the calibration method, and the single scattering albedo was found to be 0.31 and 0.20 at these wavelengths, respectively.

  8. Identifying influential data points in hydrological model calibration and their impact on streamflow predictions

    NASA Astrophysics Data System (ADS)

    Wright, David; Thyer, Mark; Westra, Seth

    2015-04-01

    Highly influential data points are those that have a disproportionately large impact on model performance, parameters and predictions. However, in current hydrological modelling practice the relative influence of individual data points on hydrological model calibration is not commonly evaluated. This presentation illustrates and evaluates several influence diagnostic tools that hydrological modellers can use to assess the relative influence of data, and discusses the feasibility and importance of including influence detection diagnostics as a standard tool in hydrological model calibration. Two classes of influence diagnostics are evaluated: (1) computationally demanding numerical "case deletion" diagnostics; and (2) computationally efficient analytical diagnostics based on Cook's distance. These diagnostics are compared against hydrologically oriented diagnostics that describe changes in the model parameters (measured through the Mahalanobis distance), performance (objective function displacement) and predictions (mean and maximum streamflow). The influence diagnostics are applied to two case studies: a stage/discharge rating curve model, and a conceptual rainfall-runoff model (GR4J). Removing a single data point from the calibration resulted in differences to mean flow predictions of up to 6% for the rating curve model, and differences to mean and maximum flow predictions of up to 10% and 17%, respectively, for the hydrological model. When using the Nash-Sutcliffe efficiency in calibration, the computationally cheaper Cook's distance metrics produce similar results to the case-deletion metrics at a fraction of the computational cost. However, Cook's distance is adapted from linear regression, with inherent assumptions about the data, and is therefore less flexible than case deletion. Influential point detection diagnostics show great potential to improve current hydrological modelling practice by identifying highly influential data points. 
The findings of this study establish the feasibility and importance of including influential point detection diagnostics as a standard tool in hydrological model calibration. They provide the hydrologist with important information on whether model calibration is susceptible to a small number of highly influential data points. This enables the hydrologist to make a more informed decision on whether to (1) remove or retain the calibration data; or (2) adjust the calibration strategy and/or hydrological model to reduce the susceptibility of model predictions to a small number of influential observations.
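    The "case deletion" idea can be sketched with a toy model (a simple least-squares linear fit standing in for the rating-curve or GR4J models of the study; names are hypothetical): refit with each observation left out in turn and record the change in a statistic of interest.

    ```python
    import numpy as np

    def case_deletion_influence(x, y):
        """For each observation, the absolute change in the fitted
        slope of a linear model when that observation is deleted."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        full_slope = np.polyfit(x, y, 1)[0]
        influences = []
        for i in range(len(x)):
            mask = np.arange(len(x)) != i      # drop observation i
            slope_i = np.polyfit(x[mask], y[mask], 1)[0]
            influences.append(abs(slope_i - full_slope))
        return np.array(influences)
    ```

    The cost of n refits is what motivates the analytical Cook's-distance alternative: it approximates these leave-one-out changes from a single fit's residuals and leverages.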

  9. Complete Tri-Axis Magnetometer Calibration with a Gyro Auxiliary

    PubMed Central

    Yang, Deng; You, Zheng; Li, Bin; Duan, Wenrui; Yuan, Binwen

    2017-01-01

    Magnetometers combined with inertial sensors are widely used for orientation estimation, and calibrations are necessary to achieve high accuracy. This paper presents a complete tri-axis magnetometer calibration algorithm with a gyro auxiliary. The magnetic distortions and sensor errors, including the misalignment error between the magnetometer and assembled platform, are compensated after calibration. With the gyro auxiliary, the magnetometer linear interpolation outputs are calculated, and the error parameters are evaluated under linear operations of magnetometer interpolation outputs. The simulation and experiment are performed to illustrate the efficiency of the algorithm. After calibration, the heading errors calculated by magnetometers are reduced to 0.5° (1σ). This calibration algorithm can also be applied to tri-axis accelerometers whose error model is similar to tri-axis magnetometers. PMID:28587115

  10. Metric Calibration of a Focused Plenoptic Camera Based on a 3d Calibration Target

    NASA Astrophysics Data System (ADS)

    Zeller, N.; Noury, C. A.; Quint, F.; Teulière, C.; Stilla, U.; Dhome, M.

    2016-06-01

    In this paper we present a new calibration approach for focused plenoptic cameras. We derive a new mathematical projection model of a focused plenoptic camera which considers lateral as well as depth distortion. Therefore, we derive a new depth distortion model directly from the theory of depth estimation in a focused plenoptic camera. In total the model consists of five intrinsic parameters, the parameters for radial and tangential distortion in the image plane and two new depth distortion parameters. In the proposed calibration we perform a complete bundle adjustment based on a 3D calibration target. The residual of our optimization approach is three dimensional, where the depth residual is defined by a scaled version of the inverse virtual depth difference and thus conforms well to the measured data. Our method is evaluated based on different camera setups and shows good accuracy. For a better characterization of our approach we evaluate the accuracy of virtual image points projected back to 3D space.

  11. Improving Hydrological Simulations by Incorporating GRACE Data for Parameter Calibration

    NASA Astrophysics Data System (ADS)

    Bai, P.

    2017-12-01

    Hydrological model parameters are commonly calibrated against observed streamflow data. This calibration strategy is questioned when the modeled hydrological variables of interest are not limited to streamflow. Well-performed streamflow simulations do not guarantee the reliable reproduction of other hydrological variables. One of the reasons is that hydrological model parameters are not reasonably identified. The Gravity Recovery and Climate Experiment (GRACE) satellite-derived total water storage change (TWSC) data provide an opportunity to constrain hydrological model parameterizations in combination with streamflow observations. We constructed a multi-objective calibration scheme based on GRACE-derived TWSC and streamflow observations, with the aim of improving the parameterizations of hydrological models. The multi-objective calibration scheme was compared with the traditional single-objective calibration scheme, which is based only on streamflow observations. Two monthly hydrological models were employed on 22 Chinese catchments with different hydroclimatic conditions. The model evaluation was performed using observed streamflows, GRACE-derived TWSC, and evapotranspiration (ET) estimates from flux towers and from the water balance approach. Results showed that the multi-objective calibration provided more reliable TWSC and ET simulations than the single-objective calibration, without significant deterioration in the accuracy of streamflow simulations. In addition, the improvements of TWSC and ET simulations were more significant in relatively dry catchments than in relatively wet catchments. This study highlights the importance of including additional constraints besides streamflow observations in the parameter estimation to improve the performance of hydrological models.
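A multi-objective calibration of the kind described above scores a parameter set jointly against streamflow and GRACE-derived TWSC. A minimal sketch of such a combined objective; the choice of Kling-Gupta efficiency for streamflow, normalized RMSE for TWSC, and equal weights are assumptions for illustration, not details from the paper:

```python
import numpy as np

def kge(sim, obs):
    """Kling-Gupta efficiency: 1 is a perfect streamflow fit."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = sim.std() / obs.std()      # variability ratio
    beta = sim.mean() / obs.mean()     # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

def nrmse(sim, obs):
    """RMSE normalized by the observed standard deviation (0 is perfect)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return np.sqrt(np.mean((sim - obs) ** 2)) / obs.std()

def combined_score(q_sim, q_obs, twsc_sim, twsc_obs, w=0.5):
    """Score to maximise: trade-off between streamflow fit and TWSC fit."""
    return w * kge(q_sim, q_obs) - (1.0 - w) * nrmse(twsc_sim, twsc_obs)
```

An optimizer (or GLUE-style sampling) would then search the parameter space for sets maximising `combined_score` rather than the streamflow term alone.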

  12. Calibration of hydrological models using flow-duration curves

    NASA Astrophysics Data System (ADS)

    Westerberg, I. K.; Guerrero, J.-L.; Younger, P. M.; Beven, K. J.; Seibert, J.; Halldin, S.; Freer, J. E.; Xu, C.-Y.

    2011-07-01

    The degree of belief we have in predictions from hydrologic models will normally depend on how well they can reproduce observations. Calibrations with traditional performance measures, such as the Nash-Sutcliffe model efficiency, are challenged by problems including: (1) uncertain discharge data, (2) variable sensitivity of different performance measures to different flow magnitudes, (3) influence of unknown input/output errors and (4) inability to evaluate model performance when observation time periods for discharge and model input data do not overlap. This paper explores a calibration method using flow-duration curves (FDCs) to address these problems. The method focuses on reproducing the observed discharge frequency distribution rather than the exact hydrograph. It consists of applying limits of acceptability for selected evaluation points (EPs) on the observed uncertain FDC in the extended GLUE approach. Two ways of selecting the EPs were tested - based on equal intervals of discharge and of volume of water. The method was tested and compared to a calibration using the traditional model efficiency for the daily four-parameter WASMOD model in the Paso La Ceiba catchment in Honduras and for Dynamic TOPMODEL evaluated at an hourly time scale for the Brue catchment in Great Britain. The volume method of selecting EPs gave the best results in both catchments with better calibrated slow flow, recession and evaporation than the other criteria. Observed and simulated time series of uncertain discharges agreed better for this method both in calibration and prediction in both catchments. An advantage with the method is that the rejection criterion is based on an estimation of the uncertainty in discharge data and that the EPs of the FDC can be chosen to reflect the aims of the modelling application, e.g. using more/less EPs at high/low flows. 
While the method appears less sensitive to epistemic input/output errors than previous use of limits of acceptability applied directly to the time series of discharge, it still requires a reasonable representation of the distribution of inputs. Additional constraints might therefore be required in catchments subject to snow and where peak-flow timing at sub-daily time scales is of high importance. The results suggest that the calibration method can be useful when observation time periods for discharge and model input data do not overlap. The method could also be suitable for calibration to regional FDCs while taking uncertainties in the hydrological model and data into account.
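The FDC-based evaluation described above can be sketched in a few lines: build the simulated flow-duration curve, then apply GLUE-style limits of acceptability at selected evaluation points. The Weibull plotting position and the example limit values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def fdc(q):
    """Flow-duration curve: flows sorted descending vs. exceedance probability."""
    q = np.sort(np.asarray(q, float))[::-1]
    p = np.arange(1, q.size + 1) / (q.size + 1.0)   # Weibull plotting position
    return p, q

def behavioural(q_sim, eps):
    """GLUE-style acceptance: simulated FDC within [lower, upper] at every EP.

    eps: list of (exceedance_prob, lower, upper) triples derived from the
    uncertainty in the observed discharge data.
    """
    p, q = fdc(q_sim)
    for p_ep, lo, hi in eps:
        q_ep = np.interp(p_ep, p, q)   # p is increasing, so interp is valid
        if not (lo <= q_ep <= hi):
            return False
    return True
```

Choosing the EPs by equal increments of discharge or of flow volume, as the paper does, only changes which `exceedance_prob` values appear in `eps`.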

  13. Calibration of hydrological models using flow-duration curves

    NASA Astrophysics Data System (ADS)

    Westerberg, I. K.; Guerrero, J.-L.; Younger, P. M.; Beven, K. J.; Seibert, J.; Halldin, S.; Freer, J. E.; Xu, C.-Y.

    2010-12-01

    The degree of belief we have in predictions from hydrologic models depends on how well they can reproduce observations. Calibrations with traditional performance measures such as the Nash-Sutcliffe model efficiency are challenged by problems including: (1) uncertain discharge data, (2) variable importance of the performance with flow magnitudes, (3) influence of unknown input/output errors and (4) inability to evaluate model performance when observation time periods for discharge and model input data do not overlap. A new calibration method using flow-duration curves (FDCs) was developed which addresses these problems. The method focuses on reproducing the observed discharge frequency distribution rather than the exact hydrograph. It consists of applying limits of acceptability for selected evaluation points (EPs) of the observed uncertain FDC in the extended GLUE approach. Two ways of selecting the EPs were tested - based on equal intervals of discharge and of volume of water. The method was tested and compared to a calibration using the traditional model efficiency for the daily four-parameter WASMOD model in the Paso La Ceiba catchment in Honduras and for Dynamic TOPMODEL evaluated at an hourly time scale for the Brue catchment in Great Britain. The volume method of selecting EPs gave the best results in both catchments with better calibrated slow flow, recession and evaporation than the other criteria. Observed and simulated time series of uncertain discharges agreed better for this method both in calibration and prediction in both catchments without resulting in overpredicted simulated uncertainty. An advantage with the method is that the rejection criterion is based on an estimation of the uncertainty in discharge data and that the EPs of the FDC can be chosen to reflect the aims of the modelling application e.g. using more/less EPs at high/low flows. 
While the new method is less sensitive to epistemic input/output errors than the normal use of limits of acceptability applied directly to the time series of discharge, it still requires a reasonable representation of the distribution of inputs. Additional constraints might therefore be required in catchments subject to snow. The results suggest that the new calibration method can be useful when observation time periods for discharge and model input data do not overlap. The new method could also be suitable for calibration to regional FDCs while taking uncertainties in the hydrological model and data into account.

  14. In-flight calibration and performance evaluation of the fixed head star trackers for the solar maximum mission

    NASA Technical Reports Server (NTRS)

    Thompson, R. H.; Gambardella, P. J.

    1980-01-01

    The Solar Maximum Mission (SMM) spacecraft provides an excellent opportunity for evaluating attitude determination accuracies achievable with tracking instruments such as fixed head star trackers (FHSTs). As a part of its payload, SMM carries a highly accurate fine pointing Sun sensor (FPSS). The FPSS provides an independent check of the pitch and yaw parameters computed from observations of stars in the FHST field of view. A method to determine the alignment of the FHSTs relative to the FPSS using spacecraft data is applied. Two methods that were used to determine distortions in the 8 degree by 8 degree field of view of the FHSTs using spacecraft data are also presented. The attitude determination accuracy performance of the in-flight calibrated FHSTs is evaluated.

  15. Cross-calibration between airborne SAR sensors

    NASA Technical Reports Server (NTRS)

    Zink, Manfred; Olivier, Philippe; Freeman, Anthony

    1993-01-01

    As Synthetic Aperture Radar (SAR) system performance and experience in SAR signature evaluation increase, quantitative analysis becomes more and more important. Such analyses require an absolute radiometric calibration of the complete SAR system. To keep the expenditure on calibration of future multichannel and multisensor remote sensing systems (e.g., X-SAR/SIR-C) within a tolerable level, data from different tracks and different sensors (channels) must be cross calibrated. The 1989 joint E-SAR/DC-8 SAR calibration campaign gave a first opportunity for such an experiment, including cross sensor and cross track calibration. A basic requirement for successful cross calibration is the stability of the SAR systems. The calibration parameters derived from different tracks and the polarimetric properties of the uncalibrated data are used to describe this stability. Quality criteria for a successful cross calibration are the agreement of alpha degree values and the consistency of radar cross sections of equally sized corner reflectors. Channel imbalance and cross talk provide additional quality criteria in the case of the polarimetric DC-8 SAR.

  16. Calibration of automatic performance measures - speed and volume data: volume 2, evaluation of the accuracy of approach volume counts and speeds collected by microwave sensors.

    DOT National Transportation Integrated Search

    2016-05-01

    This study evaluated the accuracy of approach volumes and free flow approach speeds collected by the Wavetronix SmartSensor Advance sensor for the Signal Performance Metrics system of the Utah Department of Transportation (UDOT), using the field ...

  17. A study to assess the long-term stability of the ionization chamber reference system in the LNMRI

    NASA Astrophysics Data System (ADS)

    Trindade Filho, O. L.; Conceição, D. A.; da Silva, C. J.; Delgado, J. U.; de Oliveira, A. E.; Iwahara, A.; Tauhata, L.

    2018-03-01

    Ionization chambers are used as secondary standards to maintain the calibration factors of radionuclides for activity measurements in metrology laboratories. Although they are also used as radionuclide calibrators in nuclear medicine clinics to control patient doses, their long-term performance is not evaluated systematically. A methodology for monitoring and checking long-term stability is presented. Historical data produced monthly from 2012 to 2017 by an ionization chamber, an electrometer, and a 226Ra check source were analyzed via control charts to follow long-term performance. Monitored systematic errors were consistent within the control limits, demonstrating the quality of measurements in compliance with ISO 17025.
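The control-chart monitoring described above reduces to comparing each monthly check-source reading against limits set from a baseline period. A minimal sketch, assuming Shewhart-style limits at mean ± 3 standard deviations; the synthetic readings are illustrative only:

```python
import numpy as np

def control_limits(baseline):
    """Shewhart control limits from a baseline set of check-source readings."""
    m, s = np.mean(baseline), np.std(baseline, ddof=1)
    return m - 3 * s, m + 3 * s

def in_control(readings, lo, hi):
    """Flag each new reading as inside (True) or outside (False) the limits."""
    return [lo <= r <= hi for r in readings]

baseline = np.array([100.2, 99.8, 100.1, 99.9, 100.0, 100.3])  # pA, illustrative
lo, hi = control_limits(baseline)
print(in_control([100.1, 102.5], lo, hi))   # [True, False]
```

A point outside the limits would prompt investigation of the chamber, electrometer, or source geometry before the reading enters the long-term record.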

  18. Calibration of the APEX Model to Simulate Management Practice Effects on Runoff, Sediment, and Phosphorus Loss.

    PubMed

    Bhandari, Ammar B; Nelson, Nathan O; Sweeney, Daniel W; Baffaut, Claire; Lory, John A; Senaviratne, Anomaa; Pierzynski, Gary M; Janssen, Keith A; Barnes, Philip L

    2017-11-01

    Process-based computer models have been proposed as a tool to generate data for Phosphorus (P) Index assessment and development. Although models are commonly used to simulate P loss from agriculture using managements that are different from the calibration data, this use of models has not been fully tested. The objective of this study is to determine if the Agricultural Policy Environmental eXtender (APEX) model can accurately simulate runoff, sediment, total P, and dissolved P loss from 0.4 to 1.5 ha of agricultural fields with managements that are different from the calibration data. The APEX model was calibrated with field-scale data from eight different managements at two locations (management-specific models). The calibrated models were then validated, either with the same management used for calibration or with different managements. Location models were also developed by calibrating APEX with data from all managements. The management-specific models resulted in satisfactory performance when used to simulate runoff, total P, and dissolved P within their respective systems, with > 0.50, Nash-Sutcliffe efficiency > 0.30, and percent bias within ±35% for runoff and ±70% for total and dissolved P. When applied outside the calibration management, the management-specific models only met the minimum performance criteria in one-third of the tests. The location models had better model performance when applied across all managements compared with management-specific models. Our results suggest that models only be applied within the managements used for calibration and that data be included from multiple management systems for calibration when using models to assess management effects on P loss or evaluate P Indices. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
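The performance criteria quoted in the abstract (Nash-Sutcliffe efficiency > 0.30, percent bias within ±35% for runoff) are standard hydrologic metrics and can be sketched directly; the example threshold check applies the runoff criteria only:

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the obs mean."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(sim, obs):
    """Percent bias: positive means overprediction on average."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 100.0 * np.sum(sim - obs) / np.sum(obs)

def satisfactory_runoff(sim, obs):
    """Minimum performance criteria from the abstract, for runoff."""
    return nse(sim, obs) > 0.30 and abs(pbias(sim, obs)) <= 35.0
```

For total and dissolved P, the abstract's wider ±70% tolerance would replace the 35.0 in the bias check.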

  19. CALET On-orbit Calibration and Performance

    NASA Astrophysics Data System (ADS)

    Akaike, Yosui; Calet Collaboration

    2017-01-01

    The CALorimetric Electron Telescope (CALET) was installed on the International Space Station (ISS) in August 2015, and has been accumulating high-statistics data to perform high-precision measurements of cosmic ray electrons, nuclei and gamma-rays. CALET has an imaging and a fully active calorimeter, with a total thickness of 30 radiation lengths and 1.3 proton interaction lengths, that allow measurements well into the TeV energy region with excellent energy resolution, 2% for electrons above 100 GeV, and powerful particle identification. CALET's performance has been confirmed by Monte Carlo simulations and beam tests. In order to maximize the detector performance and maintain high resolution over long observation on the ISS, precise calibration of each detector component is required. We have therefore evaluated the detector response and monitored it by using penetrating cosmic ray events such as protons and helium nuclei. In this paper, we will present the on-orbit calibration and detector performance of CALET on the ISS. This research was supported by JSPS postdoctoral fellowships for research abroad.

  20. Corrections to the MODIS Aqua Calibration Derived From MODIS Aqua Ocean Color Products

    NASA Technical Reports Server (NTRS)

    Meister, Gerhard; Franz, Bryan Alden

    2013-01-01

    Ocean color products, such as chlorophyll-a concentration, can be derived from the top-of-atmosphere radiances measured by imaging sensors on earth-orbiting satellites. There are currently three National Aeronautics and Space Administration sensors in orbit capable of providing ocean color products. One of these sensors is the Moderate Resolution Imaging Spectroradiometer (MODIS) on the Aqua satellite, whose ocean color products are currently the most widely used of the three. A recent improvement to the MODIS calibration methodology has used land targets to improve the calibration accuracy. This study evaluates the new calibration methodology and describes further calibration improvements that are built upon the new methodology by including ocean measurements in the form of global temporally averaged water-leaving reflectance measurements. The calibration improvements presented here mainly modify the calibration at the scan edges, taking advantage of the good performance of the land target trending in the center of the scan.

  1. Accuracy of a new real-time continuous glucose monitoring algorithm.

    PubMed

    Keenan, D Barry; Cartaya, Raymond; Mastrototaro, John J

    2010-01-01

    Through minimally invasive sensor-based continuous glucose monitoring (CGM), individuals can manage their blood glucose (BG) levels more aggressively, thereby improving their hemoglobin A1c level, while reducing the risk of hypoglycemia. Tighter glycemic control through CGM, however, requires an accurate glucose sensor and calibration algorithm with increased performance at lower BG levels. Sensor and BG measurements for 72 adult and adolescent subjects were obtained during the course of a 26-week multicenter study evaluating the efficacy of the Paradigm REAL-Time (PRT) sensor-augmented pump system (Medtronic Diabetes, Northridge, CA) in an outpatient setting. Subjects in the study arm performed at least four daily finger stick measurements. A retrospective analysis of the data set was performed to evaluate a new calibration algorithm utilized in the Paradigm Veo insulin pump (Medtronic Diabetes) and to compare these results to performance metrics calculated for the PRT. A total of N = 7193 PRT sensor downloads for 3 days of use, as well as 90,472 temporally and nonuniformly paired data points (sensor and meter values), were evaluated, with 5841 hypoglycemic and 15,851 hyperglycemic events detected through finger stick measurements. The Veo calibration algorithm decreased the overall mean absolute relative difference by greater than 0.25 to 15.89%, with hypoglycemia sensitivity increased from 54.9% in the PRT to 82.3% in the Veo (90.5% with predictive alerts); however, hyperglycemia sensitivity was decreased only marginally from 86% in the PRT to 81.7% in the Veo. The Veo calibration algorithm, with sensor error reduced significantly in the 40- to 120-mg/dl range, improves hypoglycemia detection, while retaining accuracy at high glucose levels. 2010 Diabetes Technology Society.
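The accuracy metrics quoted above can be illustrated with a short sketch: mean absolute relative difference (MARD) over paired sensor/meter values, and hypoglycemia detection sensitivity. The 70 mg/dl hypoglycemia cut-off is the conventional threshold, assumed here; the paired data are illustrative, not from the study:

```python
import numpy as np

def mard(sensor, meter):
    """Mean absolute relative difference, in percent, meter as reference."""
    sensor, meter = np.asarray(sensor, float), np.asarray(meter, float)
    return 100.0 * np.mean(np.abs(sensor - meter) / meter)

def hypo_sensitivity(sensor, meter, cutoff=70.0):
    """Fraction of meter-detected hypoglycemic events also flagged by the sensor."""
    hypo = np.asarray(meter, float) < cutoff
    sensor = np.asarray(sensor, float)
    return float(np.mean(sensor[hypo] < cutoff)) if hypo.any() else float("nan")

meter  = np.array([60.0, 65.0, 80.0, 120.0, 200.0])   # finger-stick values, mg/dl
sensor = np.array([66.0, 58.0, 90.0, 110.0, 210.0])   # paired CGM values, mg/dl
print(round(mard(sensor, meter), 1))                  # 9.3
```

A recalibration algorithm like the Veo's is judged by whether it lowers MARD, and especially the error in the 40-120 mg/dl range, without sacrificing hyperglycemia sensitivity.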

  2. Systematic calibration of an integrated x-ray and optical tomography system for preclinical radiation research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Yidong, E-mail: yidongyang@med.miami.edu; Wang, Ken Kang-Hsin; Wong, John W.

    2015-04-15

    Purpose: The cone beam computed tomography (CBCT) guided small animal radiation research platform (SARRP) has been developed for focal tumor irradiation, allowing laboratory researchers to test basic biological hypotheses that can modify radiotherapy outcomes in ways that were not feasible previously. CBCT provides excellent bone to soft tissue contrast, but is incapable of differentiating tumors from surrounding soft tissue. Bioluminescence tomography (BLT), in contrast, allows direct visualization of even subpalpable tumors and quantitative evaluation of tumor response. Integration of BLT with CBCT offers complementary image information, with CBCT delineating anatomic structures and BLT differentiating luminescent tumors. This study is to develop a systematic method to calibrate an integrated CBCT and BLT imaging system which can be adopted onboard the SARRP to guide focal tumor irradiation. Methods: The integrated imaging system consists of CBCT, diffuse optical tomography (DOT), and BLT. The anatomy acquired from CBCT and optical properties acquired from DOT serve as a priori information for the subsequent BLT reconstruction. Phantoms were designed and procedures were developed to calibrate the CBCT, DOT/BLT, and the entire integrated system. Geometrical calibration was performed to calibrate the CBCT system. Flat field correction was performed to correct the nonuniform response of the optical imaging system. Absolute emittance calibration was performed to convert the camera readout to the emittance at the phantom or animal surface, which enabled the direct reconstruction of the bioluminescence source strength. Phantom and mouse imaging were performed to validate the calibration. Results: All calibration procedures were successfully performed. Both CBCT of a thin wire and a euthanized mouse revealed no spatial artifact, validating the accuracy of the CBCT calibration. 
The absolute emittance calibration was validated with a 650 nm laser source, resulting in a 3.0% difference between simulated and measured signal. The calibration of the entire system was confirmed through the CBCT and BLT reconstruction of a bioluminescence source placed inside a tissue-simulating optical phantom. Using a spatial region constraint, the source position was reconstructed with less than 1 mm error and the source strength reconstructed with less than 24% error. Conclusions: A practical and systematic method has been developed to calibrate an integrated x-ray and optical tomography imaging system, including the respective CBCT and optical tomography system calibration and the geometrical calibration of the entire system. The method can be modified and adopted to calibrate CBCT and optical tomography systems that are operated independently or hybrid x-ray and optical tomography imaging systems.

  3. Systematic calibration of an integrated x-ray and optical tomography system for preclinical radiation research

    PubMed Central

    Yang, Yidong; Wang, Ken Kang-Hsin; Eslami, Sohrab; Iordachita, Iulian I.; Patterson, Michael S.; Wong, John W.

    2015-01-01

    Purpose: The cone beam computed tomography (CBCT) guided small animal radiation research platform (SARRP) has been developed for focal tumor irradiation, allowing laboratory researchers to test basic biological hypotheses that can modify radiotherapy outcomes in ways that were not feasible previously. CBCT provides excellent bone to soft tissue contrast, but is incapable of differentiating tumors from surrounding soft tissue. Bioluminescence tomography (BLT), in contrast, allows direct visualization of even subpalpable tumors and quantitative evaluation of tumor response. Integration of BLT with CBCT offers complementary image information, with CBCT delineating anatomic structures and BLT differentiating luminescent tumors. This study is to develop a systematic method to calibrate an integrated CBCT and BLT imaging system which can be adopted onboard the SARRP to guide focal tumor irradiation. Methods: The integrated imaging system consists of CBCT, diffuse optical tomography (DOT), and BLT. The anatomy acquired from CBCT and optical properties acquired from DOT serve as a priori information for the subsequent BLT reconstruction. Phantoms were designed and procedures were developed to calibrate the CBCT, DOT/BLT, and the entire integrated system. Geometrical calibration was performed to calibrate the CBCT system. Flat field correction was performed to correct the nonuniform response of the optical imaging system. Absolute emittance calibration was performed to convert the camera readout to the emittance at the phantom or animal surface, which enabled the direct reconstruction of the bioluminescence source strength. Phantom and mouse imaging were performed to validate the calibration. Results: All calibration procedures were successfully performed. Both CBCT of a thin wire and a euthanized mouse revealed no spatial artifact, validating the accuracy of the CBCT calibration. 
The absolute emittance calibration was validated with a 650 nm laser source, resulting in a 3.0% difference between simulated and measured signal. The calibration of the entire system was confirmed through the CBCT and BLT reconstruction of a bioluminescence source placed inside a tissue-simulating optical phantom. Using a spatial region constraint, the source position was reconstructed with less than 1 mm error and the source strength reconstructed with less than 24% error. Conclusions: A practical and systematic method has been developed to calibrate an integrated x-ray and optical tomography imaging system, including the respective CBCT and optical tomography system calibration and the geometrical calibration of the entire system. The method can be modified and adopted to calibrate CBCT and optical tomography systems that are operated independently or hybrid x-ray and optical tomography imaging systems. PMID:25832060

  4. Task Identification and Evaluation System (TIES)

    DTIC Science & Technology

    1991-08-01

    Calibrate AN/AVM-11A HUD test sets 127. Calibrate AN/AWM-55 ASCU test sets 128. Calibrate 500 RM tally punched tape readers 129. Perform ... HUD test sets 132. Perform fault isolation of AN/AWM-55 ASCU test sets 133. Perform fault isolation of 500 RM tally punched tape ... AN/AVM-11A HUD test sets 137. Perform self-tests of AN/AWM-55 ASCU test sets G. MAINTAINING A-7D MANUAL TEST SETS 138. Adjust SM-661/AS-388 air ...

  5. Vicarious absolute radiometric calibration of GF-2 PMS2 sensor using permanent artificial targets in China

    NASA Astrophysics Data System (ADS)

    Liu, Yaokai; Li, Chuanrong; Ma, Lingling; Wang, Ning; Qian, Yonggang; Tang, Lingli

    2016-10-01

    GF-2, launched on 19 August 2014, is one of the high-resolution land resource observing satellites in China's GF satellite series. Evaluating the radiometric performance of the onboard optical panchromatic and multispectral (PMS2) sensor of the GF-2 satellite is very important for further application of its data, and vicarious absolute radiometric calibration is one of the most useful ways to monitor the radiometric performance of onboard optical sensors. In this study, the traditional reflectance-based method was used to radiometrically calibrate the PMS2 sensor of the GF-2 satellite using three permanent artificial targets (black, gray, and white) located at the AOE Baotou site in China. A vicarious field calibration campaign was carried out at the AOE-Baotou calibration site on 22 April 2016, and the absolute radiometric calibration coefficients were determined from in situ measured atmospheric parameters and the surface reflectance of the permanent artificial calibration targets. The TOA radiance of a selected desert area predicted with our calibration coefficients was compared with that obtained from the officially distributed calibration coefficients. The comparison shows good consistency, with a mean relative difference of less than 5% for the multispectral channels. An uncertainty analysis was also carried out, yielding a total uncertainty of 3.87% in the TOA radiance.
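In the reflectance-based method described above, the predicted TOA radiance over a characterized target is divided by the mean sensor digital number (DN) over that target to obtain the absolute calibration gain (L = gain × DN). A minimal sketch; all numeric values are illustrative, not from the campaign:

```python
def calibration_gain(predicted_toa_radiance, dn_mean):
    """Absolute calibration coefficient: radiance per DN for one band.

    predicted_toa_radiance comes from a radiative-transfer run driven by
    in situ surface reflectance and atmospheric measurements.
    """
    return predicted_toa_radiance / dn_mean

def relative_difference(l_new, l_official):
    """Percent difference between radiances from new and official coefficients."""
    return 100.0 * abs(l_new - l_official) / l_official

gain = calibration_gain(85.0, 512.0)   # gray target, one band (illustrative)
print(round(relative_difference(gain * 512.0, 86.5), 2))  # 1.73 (% vs. official)
```

The cross-check over a desert area in the abstract is this same comparison, applied to a scene-averaged DN rather than the calibration target itself.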

  6. Calibration of volume and component biomass equations for Douglas-fir and lodgepole pine in Western Oregon forests

    Treesearch

    Krishna P. Poudel; Temesgen Hailemariam

    2016-01-01

    Using data from destructively sampled Douglas-fir and lodgepole pine trees, we evaluated the performance of regional volume and component biomass equations in terms of bias and RMSE. The volume and component biomass equations were calibrated using three different adjustment methods that used: (a) a correction factor based on ordinary least square regression through...

  7. Spatially distributed model calibration of flood inundation guided by consequences such as loss of property

    NASA Astrophysics Data System (ADS)

    Pappenberger, F.; Beven, K. J.; Frodsham, K.; Matgen, P.

    2005-12-01

    Flood inundation models play an increasingly important role in assessing flood risk. The growth of 2D inundation models that are intimately related to raster maps of floodplains is occurring at the same time as an increase in the availability of 2D remote data (e.g. SAR images and aerial photographs), against which model performance can be evaluated. This requires new techniques to be explored in order to evaluate model performance in two dimensional space. In this paper we present a fuzzified pattern matching algorithm which compares favorably to a set of traditional measures. However, we further argue that model calibration has to go beyond the comparison of physical properties and should demonstrate how a weighting towards consequences, such as loss of property, can enhance model focus and prediction. Indeed, it will be necessary to abandon a fully spatial comparison in many scenarios to concentrate the model calibration exercise on specific points such as hospitals, police stations or emergency response centers. It can be shown that such point evaluations lead to significantly different flood hazard maps due to the averaging effect of a spatial performance measure. A strategy to balance the different needs (accuracy at certain spatial points and acceptable spatial performance) has to be based in a public and political decision making process.

  8. Multivariate meta-analysis of individual participant data helped externally validate the performance and implementation of a prediction model.

    PubMed

    Snell, Kym I E; Hua, Harry; Debray, Thomas P A; Ensor, Joie; Look, Maxime P; Moons, Karel G M; Riley, Richard D

    2016-01-01

    Our aim was to improve meta-analysis methods for summarizing a prediction model's performance when individual participant data are available from multiple studies for external validation. We suggest multivariate meta-analysis for jointly synthesizing calibration and discrimination performance, while accounting for their correlation. The approach estimates a prediction model's average performance, the heterogeneity in performance across populations, and the probability of "good" performance in new populations. This allows different implementation strategies (e.g., recalibration) to be compared. Application is made to a diagnostic model for deep vein thrombosis (DVT) and a prognostic model for breast cancer mortality. In both examples, multivariate meta-analysis reveals that calibration performance is excellent on average but highly heterogeneous across populations unless the model's intercept (baseline hazard) is recalibrated. For the cancer model, the probability of "good" performance (defined by C statistic ≥0.7 and calibration slope between 0.9 and 1.1) in a new population was 0.67 with recalibration but 0.22 without recalibration. For the DVT model, even with recalibration, there was only a 0.03 probability of "good" performance. Multivariate meta-analysis can be used to externally validate a prediction model's calibration and discrimination performance across multiple populations and to evaluate different implementation strategies. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.
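The "probability of good performance" summary described above can be sketched by drawing from the predictive distribution of (C statistic, calibration slope) in a new population and reporting the joint probability that C ≥ 0.7 and the slope lies in [0.9, 1.1]. An independent bivariate normal predictive distribution with illustrative means and variances is assumed here for simplicity; the real method derives this distribution from multivariate meta-analysis with between-study correlation:

```python
import numpy as np

rng = np.random.default_rng(42)

def prob_good(mean, cov, n=100_000):
    """Monte Carlo P(C >= 0.7 and 0.9 <= slope <= 1.1) under a normal predictive."""
    c, slope = rng.multivariate_normal(mean, cov, size=n).T
    return float(np.mean((c >= 0.7) & (slope >= 0.9) & (slope <= 1.1)))

# Pooled C = 0.72, slope = 1.00, with assumed between-population heterogeneity:
p = prob_good(mean=[0.72, 1.00], cov=[[0.002, 0.0], [0.0, 0.01]])
print(0.0 <= p <= 1.0)   # True
```

Comparing this probability with and without a recalibrated intercept is exactly the implementation-strategy comparison the abstract reports (0.67 vs. 0.22 for the cancer model).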

  9. GIFTS SM EDU Data Processing and Algorithms

    NASA Technical Reports Server (NTRS)

    Tian, Jialin; Johnson, David G.; Reisse, Robert A.; Gazarik, Michael J.

    2007-01-01

    The Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) Sensor Module (SM) Engineering Demonstration Unit (EDU) is a high resolution spectral imager designed to measure infrared (IR) radiances using a Fourier transform spectrometer (FTS). The GIFTS instrument employs three Focal Plane Arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The raw interferogram measurements are radiometrically and spectrally calibrated to produce radiance spectra, which are further processed to obtain atmospheric profiles via retrieval algorithms. This paper describes the processing algorithms involved in the calibration stage. The calibration procedures can be subdivided into three stages. In the pre-calibration stage, a phase correction algorithm is applied to the decimated and filtered complex interferogram. The resulting imaginary part of the spectrum contains only the noise component of the uncorrected spectrum. Additional random noise reduction can be accomplished by applying a spectral smoothing routine to the phase-corrected blackbody reference spectra. In the radiometric calibration stage, we first compute the spectral responsivity based on the previous results, from which, the calibrated ambient blackbody (ABB), hot blackbody (HBB), and scene spectra can be obtained. During the post-processing stage, we estimate the noise equivalent spectral radiance (NESR) from the calibrated ABB and HBB spectra. We then implement a correction scheme that compensates for the effect of fore-optics offsets. Finally, for off-axis pixels, the FPA off-axis effects correction is performed. To estimate the performance of the entire FPA, we developed an efficient method of generating pixel performance assessments. In addition, a random pixel selection scheme is designed based on the pixel performance evaluation.
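The radiometric calibration stage described above is a two-point blackbody calibration: the spectral responsivity is estimated from the ambient (ABB) and hot (HBB) blackbody views, then applied to scene spectra. A minimal sketch at a single wavenumber, with Planck radiances standing in for the reference spectra; the blackbody temperatures and count values are illustrative assumptions:

```python
import numpy as np

H, C_LIGHT, KB = 6.626e-34, 2.998e8, 1.381e-23  # SI constants

def planck(wavenumber_cm, temp_k):
    """Planck spectral radiance at a wavenumber in cm^-1, W/(m^2 sr cm^-1)."""
    v = wavenumber_cm * 100.0                       # convert to m^-1
    b = 2 * H * C_LIGHT**2 * v**3 / (np.exp(H * C_LIGHT * v / (KB * temp_k)) - 1.0)
    return b * 100.0                                # per cm^-1 instead of per m^-1

def calibrate(scene_counts, abb_counts, hbb_counts, wn, t_abb=295.0, t_hbb=330.0):
    """Two-point calibration: counts -> radiance via ABB/HBB reference views."""
    resp = (hbb_counts - abb_counts) / (planck(wn, t_hbb) - planck(wn, t_abb))
    return (scene_counts - abb_counts) / resp + planck(wn, t_abb)
```

In the real pipeline this is done spectrally, after the phase correction and reference-spectrum smoothing the abstract describes, and the calibrated ABB/HBB spectra then feed the NESR estimate.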

  10. Early Evaluation of the VIIRS Calibration, Cloud Mask and Surface Reflectance Earth Data Records

    NASA Technical Reports Server (NTRS)

    Vermote, Eric; Justice, Chris; Csiszar, Ivan

    2014-01-01

    Surface reflectance is one of the key products from VIIRS and, as with MODIS, is used in developing several higher-order land products. The VIIRS Surface Reflectance (SR) Intermediate Product (IP) is based on the heritage MODIS Collection 5 product (Vermote, El Saleous, & Justice, 2002). The quality and character of surface reflectance depend on the accuracy of the VIIRS Cloud Mask (VCM), the aerosol algorithms and the adequate calibration of the sensor. The focus of this paper is the early evaluation of the VIIRS SR product in the context of the maturity of the operational processing system, the Interface Data Processing System (IDPS). After a brief introduction, the paper presents the calibration performance and the role of the surface reflectance in calibration monitoring. The analysis of the performance of the cloud mask with a focus on vegetation monitoring (no snow conditions) shows typical problems over bright surfaces and high elevation sites. Also discussed is the performance of the aerosol input used in the atmospheric correction and in particular the artifacts generated by the use of the Navy Aerosol Analysis and Prediction System. Early quantitative results of the performance of the SR product over the AERONET sites show that, with the few adjustments recommended, the accuracy is within the threshold specifications. The analysis of the adequacy of the SR product (Land PEATE adjusted version) in applications of societal benefit is then presented. We conclude with a set of recommendations to ensure consistency and continuity of the JPSS mission with the MODIS Land Climate Data Record.

  11. Effects of line fiducial parameters and beamforming on ultrasound calibration

    PubMed Central

    Ameri, Golafsoun; Baxter, John S. H.; McLeod, A. Jonathan; Peters, Terry M.; Chen, Elvis C. S.

    2017-01-01

    Abstract. Ultrasound (US)-guided interventions are often enhanced via integration with an augmented reality environment, a necessary component of which is US calibration. Calibration requires the segmentation of fiducials, i.e., a phantom, in US images. Fiducial localization error (FLE) can decrease US calibration accuracy, which fundamentally affects the total accuracy of the interventional guidance system. Here, we investigate the effects of US image reconstruction techniques as well as phantom material and geometry on US calibration. It was shown that the FLE was reduced by 29% with synthetic transmit aperture imaging compared with conventional B-mode imaging in a Z-bar calibration, resulting in a 10% reduction of calibration error. In addition, an evaluation of a variety of calibration phantoms with different geometrical and material properties was performed. The phantoms included braided wire, plastic straws, and polyvinyl alcohol cryogel tubes with different diameters. It was shown that these properties have a significant effect on calibration error, which is a variable based on US beamforming techniques. These results would have important implications for calibration procedures and their feasibility in the context of image-guided procedures. PMID:28331886

  12. Effects of line fiducial parameters and beamforming on ultrasound calibration.

    PubMed

    Ameri, Golafsoun; Baxter, John S H; McLeod, A Jonathan; Peters, Terry M; Chen, Elvis C S

    2017-01-01

    Ultrasound (US)-guided interventions are often enhanced via integration with an augmented reality environment, a necessary component of which is US calibration. Calibration requires the segmentation of fiducials, i.e., a phantom, in US images. Fiducial localization error (FLE) can decrease US calibration accuracy, which fundamentally affects the total accuracy of the interventional guidance system. Here, we investigate the effects of US image reconstruction techniques as well as phantom material and geometry on US calibration. It was shown that the FLE was reduced by 29% with synthetic transmit aperture imaging compared with conventional B-mode imaging in a Z-bar calibration, resulting in a 10% reduction of calibration error. In addition, an evaluation of a variety of calibration phantoms with different geometrical and material properties was performed. The phantoms included braided wire, plastic straws, and polyvinyl alcohol cryogel tubes with different diameters. It was shown that these properties have a significant effect on calibration error, which is a variable based on US beamforming techniques. These results would have important implications for calibration procedures and their feasibility in the context of image-guided procedures.

  13. Evaluation of SARAL/AltiKa performance using GNSS/IMU equipped buoy in Sajafi, Imam Hassan and Kangan Ports

    NASA Astrophysics Data System (ADS)

    Ardalan, Alireza A.; Jazireeyan, Iraj; Abdi, Naser; Rezvani, Mohammad-Hadi

    2018-03-01

    The performance of the SARAL/AltiKa mission was evaluated within the 2016 altimeter calibration/validation framework in the Persian Gulf through three campaigns conducted in the offshore waters of Sajafi, Imam Hassan, and Kangan Ports, as the altimeter overflew passes 470, 111, and 25 on 13 February, 7 March, and 17 June 2016, respectively. In preparation, a lightweight buoy was equipped with a GNSS receiver/choke-ring antenna and a MEMS-based IMU to collect independent datasets during the field operations. To obtain accurate sea surface height (SSH) time series, the offset of the onboard antenna from the equilibrium sea level was determined beforehand through surveying operations while the buoy was deployed in the onshore waters of Kangan Port. The double-difference carrier phase observations were then processed with the Bernese GPS Software v. 5.0 to provide GNSS-derived time series at the comparison points of the calibration campaigns, after eliminating the disturbing effects of platform tilt and heave. By comparing the SSH time series with the corresponding 1 Hz GDR-T altimetry datasets, the calibration/validation of SARAL/AltiKa was performed using both the radiometer and the ECMWF wet troposphere corrections in order to identify potential land contamination. The agreement of the present findings with those obtained at other international calibration sites confirms the feasibility of the Persian Gulf as a new dedicated site for calibration/validation of ongoing and future altimetry missions.

  14. Recalibration of risk prediction models in a large multicenter cohort of admissions to adult, general critical care units in the United Kingdom.

    PubMed

    Harrison, David A; Brady, Anthony R; Parry, Gareth J; Carpenter, James R; Rowan, Kathy

    2006-05-01

    To assess the performance of published risk prediction models in common use in adult critical care in the United Kingdom and to recalibrate these models in a large representative database of critical care admissions. Prospective cohort study. A total of 163 adult general critical care units in England, Wales, and Northern Ireland, during the period of December 1995 to August 2003. A total of 231,930 admissions, of which 141,106 met inclusion criteria and had sufficient data recorded for all risk prediction models. None. The published versions of the Acute Physiology and Chronic Health Evaluation (APACHE) II, APACHE II UK, APACHE III, Simplified Acute Physiology Score (SAPS) II, and Mortality Probability Models (MPM) II were evaluated for discrimination and calibration by means of a combination of appropriate statistical measures recommended by an expert steering committee. All models showed good discrimination (the c index varied from 0.803 to 0.832) but imperfect calibration. Recalibration of the models, which was performed by both the Cox method and re-estimating coefficients, led to improved discrimination and calibration, although all models still showed significant departures from perfect calibration. Risk prediction models developed in another country require validation and recalibration before being used to provide risk-adjusted outcomes within a new country setting. Periodic reassessment is beneficial to ensure calibration is maintained.
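
    The Cox recalibration method referenced above refits only an intercept and slope on the logit of the model's predicted probabilities, logit(P(y=1)) = a + b*logit(p_pred), where (a, b) = (0, 1) would mean the original model is already calibrated for the new population. A minimal numpy-only sketch (Newton-Raphson/IRLS fit; function names are illustrative, not the study's code):

```python
import numpy as np

def logit(p):
    return np.log(p / (1.0 - p))

def cox_recalibrate(p_pred, y, n_iter=50):
    """Fit logit(P(y=1)) = a + b*logit(p_pred) by Newton-Raphson (IRLS)."""
    x = logit(np.clip(p_pred, 1e-6, 1.0 - 1e-6))
    X = np.column_stack([np.ones_like(x), x])
    beta = np.zeros(2)
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-(X @ beta)))     # current fitted probs
        W = mu * (1.0 - mu)                        # IRLS weights
        beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - mu))
    return beta  # (intercept a, slope b)

def recalibrated(p_pred, a, b):
    """Apply the recalibration map to new predictions."""
    x = logit(np.clip(p_pred, 1e-6, 1.0 - 1e-6))
    return 1.0 / (1.0 + np.exp(-(a + b * x)))
```

    Re-estimating all model coefficients, the study's second approach, generalizes this by replacing the single covariate logit(p_pred) with the model's full covariate set.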

  15. Laboratory Performance of Five Selected Soil Moisture Sensors Applying Factory and Own Calibration Equations for Two Soil Media of Different Bulk Density and Salinity Levels.

    PubMed

    Matula, Svatopluk; Báťková, Kamila; Legese, Wossenu Lemma

    2016-11-15

    Non-destructive soil water content determination is a fundamental component for many agricultural and environmental applications. The accuracy and costs of the sensors define the measurement scheme and the ability to fit the natural heterogeneous conditions. The aim of this study was to evaluate five commercially available and relatively cheap sensors usually grouped with impedance and FDR sensors. ThetaProbe ML2x (impedance) and ECH₂O EC-10, ECH₂O EC-20, ECH₂O EC-5, and ECH₂O TE (all FDR) were tested on silica sand and loess of defined characteristics under controlled laboratory conditions. The calibrations were carried out in nine consecutive soil water contents from dry to saturated conditions (pure water and saline water). The gravimetric method was used as a reference method for the statistical evaluation (ANOVA with significance level 0.05). Generally, the results showed that our own calibrations led to more accurate soil moisture estimates. Variance component analysis arranged the factors contributing to the total variation as follows: calibration (contributed 42%), sensor type (contributed 29%), material (contributed 18%), and dry bulk density (contributed 11%). All the tested sensors performed very well within the whole range of water content, especially the sensors ECH₂O EC-5 and ECH₂O TE, which also performed surprisingly well in saline conditions.

  16. Laboratory Performance of Five Selected Soil Moisture Sensors Applying Factory and Own Calibration Equations for Two Soil Media of Different Bulk Density and Salinity Levels

    PubMed Central

    Matula, Svatopluk; Báťková, Kamila; Legese, Wossenu Lemma

    2016-01-01

    Non-destructive soil water content determination is a fundamental component for many agricultural and environmental applications. The accuracy and costs of the sensors define the measurement scheme and the ability to fit the natural heterogeneous conditions. The aim of this study was to evaluate five commercially available and relatively cheap sensors usually grouped with impedance and FDR sensors. ThetaProbe ML2x (impedance) and ECH2O EC-10, ECH2O EC-20, ECH2O EC-5, and ECH2O TE (all FDR) were tested on silica sand and loess of defined characteristics under controlled laboratory conditions. The calibrations were carried out in nine consecutive soil water contents from dry to saturated conditions (pure water and saline water). The gravimetric method was used as a reference method for the statistical evaluation (ANOVA with significance level 0.05). Generally, the results showed that our own calibrations led to more accurate soil moisture estimates. Variance component analysis arranged the factors contributing to the total variation as follows: calibration (contributed 42%), sensor type (contributed 29%), material (contributed 18%), and dry bulk density (contributed 11%). All the tested sensors performed very well within the whole range of water content, especially the sensors ECH2O EC-5 and ECH2O TE, which also performed surprisingly well in saline conditions. PMID:27854263

  17. A comparison of hydrologic models for ecological flows and water availability

    USGS Publications Warehouse

    Caldwell, Peter V; Kennen, Jonathan G.; Sun, Ge; Kiang, Julie E.; Butcher, John B; Eddy, Michelle C; Hay, Lauren E.; LaFontaine, Jacob H.; Hain, Ernie F.; Nelson, Stacy C; McNulty, Steve G

    2015-01-01

    Robust hydrologic models are needed to help manage water resources for healthy aquatic ecosystems and reliable water supplies for people, but there is a lack of comprehensive model comparison studies that quantify differences in streamflow predictions among model applications developed to answer management questions. We assessed differences in daily streamflow predictions by four fine-scale models and two regional-scale monthly time step models by comparing model fit statistics and bias in ecologically relevant flow statistics (ERFSs) at five sites in the Southeastern USA. Models were calibrated to different extents, including uncalibrated (level A), calibrated to a downstream site (level B), calibrated specifically for the site (level C) and calibrated for the site with adjusted precipitation and temperature inputs (level D). All models generally captured the magnitude and variability of observed streamflows at the five study sites, and increasing level of model calibration generally improved performance. All models had at least 1 of 14 ERFSs falling outside a +/−30% range of hydrologic uncertainty at every site, and ERFSs related to low flows were frequently over-predicted. Our results do not indicate that any specific hydrologic model is superior to the others evaluated at all sites and for all measures of model performance. Instead, we provide evidence that (1) model performance is as likely to be related to calibration strategy as it is to model structure and (2) simple, regional-scale models have comparable performance to the more complex, fine-scale models at a monthly time step.

  18. Comparison between DICOM-calibrated and uncalibrated consumer grade and 6-MP displays under different lighting conditions in panoramic radiography

    PubMed Central

    Haapea, M; Liukkonen, E; Huumonen, S; Tervonen, O; Nieminen, M T

    2015-01-01

    Objectives: To compare observer performance in the detection of anatomical structures and pathology in panoramic radiographs using consumer grade with and without digital imaging and communication in medicine (DICOM)-calibration and 6-megapixel (6-MP) displays under different lighting conditions. Methods: 30 panoramic radiographs were randomly evaluated on three displays under bright (510 lx) and dim (16 lx) ambient lighting by two observers with different years of experience. Dentinoenamel junction, dentinal caries and periapical inflammatory lesions, visibility of cortical border of the floor and pathological lesions in maxillary sinus were evaluated. Consensus between the observers was considered as reference. Intraobserver agreement was determined. Proportion of equivalent ratings and weighted kappa were used to assess reliability. The level of significance was set to p < 0.05. Results: The proportion of equivalent ratings with consensus differed between uncalibrated and DICOM-calibrated consumer grade displays in dentinal caries in the lower molar in dim lighting (p = 0.021) and between DICOM-calibrated consumer grade and 6-MP display in bright lighting (p = 0.038) for an experienced observer. Significant differences were found between uncalibrated and DICOM-calibrated consumer grade displays in dentinal caries in bright lighting (p = 0.044) and periapical lesions in the upper molar in dim lighting (p = 0.008) for a less experienced observer. Intraobserver reliability was better at detecting dentinal caries than at detecting periapical and maxillary sinus pathology. Conclusions: DICOM calibration may improve observer performance in panoramic radiography in different lighting conditions. Therefore, a DICOM-calibrated consumer grade display can be used instead of a medical display in dental practice without compromising the diagnostic quality. PMID:25564888

  19. Characterization of the Sonoran desert as a radiometric calibration target for Earth observing sensors

    USGS Publications Warehouse

    Angal, Amit; Chander, Gyanesh; Xiong, Xiaoxiong; Choi, Tae-young; Wu, Aisheng

    2011-01-01

    To provide highly accurate quantitative measurements of the Earth's surface, a comprehensive calibration and validation of the satellite sensors is required. The NASA Moderate Resolution Imaging Spectroradiometer (MODIS) Characterization Support Team, in collaboration with United States Geological Survey, Earth Resources Observation and Science Center, has previously demonstrated the use of African desert sites to monitor the long-term calibration stability of Terra MODIS and Landsat 7 (L7) Enhanced Thematic Mapper plus (ETM+). The current study focuses on evaluating the suitability of the Sonoran Desert test site for post-launch long-term radiometric calibration as well as cross-calibration purposes. Due to the lack of historical and on-going in situ ground measurements, the Sonoran Desert is not usually used for absolute calibration. An in-depth evaluation (spatial, temporal, and spectral stability) of this site using well calibrated L7 ETM+ measurements and local climatology data has been performed. The Sonoran Desert site produced spatial variability of about 3 to 5% in the reflective solar regions, and the temporal variations of the site after correction for view-geometry impacts were generally around 3%. The results demonstrate that, barring the impacts due to occasional precipitation, the Sonoran Desert site can be effectively used for cross-calibration and long-term stability monitoring of satellite sensors, thus, providing a good test site in the western hemisphere.

  20. A Bayesian procedure for evaluating the frequency of calibration factor updates in highway safety manual (HSM) applications.

    PubMed

    Saha, Dibakar; Alluri, Priyanka; Gan, Albert

    2017-01-01

    The Highway Safety Manual (HSM) presents statistical models to quantitatively estimate an agency's safety performance. The models were developed using data from only a few U.S. states. To account for the effects of the local attributes and temporal factors on crash occurrence, agencies are required to calibrate the HSM-default models for crash predictions. The manual suggests updating calibration factors every two to three years, or preferably on an annual basis. Given that the calibration process involves substantial time, effort, and resources, a comprehensive analysis of the required calibration factor update frequency is valuable to the agencies. Accordingly, the objective of this study is to evaluate the HSM's recommendation and determine the required frequency of calibration factor updates. A robust Bayesian estimation procedure is used to assess the variation between calibration factors computed annually, biennially, and triennially using data collected from over 2400 miles of segments and over 700 intersections on urban and suburban facilities in Florida. Bayesian model yields a posterior distribution of the model parameters that give credible information to infer whether the difference between calibration factors computed at specified intervals is credibly different from the null value which represents unaltered calibration factors between the comparison years or in other words, zero difference. The concept of the null value is extended to include the range of values that are practically equivalent to zero. Bayesian inference shows that calibration factors based on total crash frequency are required to be updated every two years in cases where the variations between calibration factors are not greater than 0.01. When the variations are between 0.01 and 0.05, calibration factors based on total crash frequency could be updated every three years. Copyright © 2016 Elsevier Ltd. All rights reserved.
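
    The decision rule the abstract describes, comparing the posterior of the difference between calibration factors against a range of values practically equivalent to zero, can be illustrated as follows (a simplified sketch using a central credible interval rather than an HDI; the ROPE bounds and names are illustrative):

```python
import numpy as np

def rope_decision(posterior_diff, rope=(-0.01, 0.01), mass=0.95):
    """Classify the posterior of a difference in calibration factors
    against a region of practical equivalence (ROPE) around zero."""
    lo, hi = np.percentile(posterior_diff,
                           [(1.0 - mass) / 2 * 100, (1.0 + mass) / 2 * 100])
    if rope[0] <= lo and hi <= rope[1]:
        return "practically equivalent"   # keep the current factor
    if hi < rope[0] or lo > rope[1]:
        return "credibly different"       # update the calibration factor
    return "undecided"                    # interval straddles the ROPE edge
```

    Applied annually, biennially, and triennially, this rule yields the update-frequency recommendations reported in the abstract.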

  1. Sensors for 3D Imaging: Metric Evaluation and Calibration of a CCD/CMOS Time-of-Flight Camera.

    PubMed

    Chiabrando, Filiberto; Chiabrando, Roberto; Piatti, Dario; Rinaudo, Fulvio

    2009-01-01

    3D imaging with Time-of-Flight (ToF) cameras is a promising recent technique which allows 3D point clouds to be acquired at video frame rates. However, the distance measurements of these devices are often affected by some systematic errors which decrease the quality of the acquired data. In order to evaluate these errors, some experimental tests on a CCD/CMOS ToF camera sensor, the SwissRanger (SR)-4000 camera, were performed and reported in this paper. In particular, two main aspects are treated: the calibration of the distance measurements of the SR-4000 camera, which deals with evaluation of the camera warm up time period, the distance measurement error evaluation and a study of the influence on distance measurements of the camera orientation with respect to the observed object; the second aspect concerns the photogrammetric calibration of the amplitude images delivered by the camera using a purpose-built multi-resolution field made of high contrast targets.

  2. Exploiting Task Constraints for Self-Calibrated Brain-Machine Interface Control Using Error-Related Potentials

    PubMed Central

    Iturrate, Iñaki; Grizou, Jonathan; Omedes, Jason; Oudeyer, Pierre-Yves; Lopes, Manuel; Montesano, Luis

    2015-01-01

    This paper presents a new approach to self-calibrated BCI control for reaching tasks using error-related potentials. The proposed method exploits task constraints to simultaneously calibrate the decoder and control the device, using a robust likelihood function and an ad hoc planner to cope with the large uncertainty resulting from the unknown task and decoder. The method has been evaluated in closed-loop online experiments with eight users using a previously proposed BCI protocol for reaching tasks over a grid. The results show that usable BCI control is possible from the beginning of the experiment without any prior calibration. Furthermore, comparisons with simulations and previous results obtained using standard calibration suggest that both the quality of the recorded signals and the performance of the system were comparable to those obtained with a standard calibration approach. PMID:26131890

  3. Comparison of the uncertainties of several European low-dose calibration facilities

    NASA Astrophysics Data System (ADS)

    Dombrowski, H.; Cornejo Díaz, N. A.; Toni, M. P.; Mihelic, M.; Röttger, A.

    2018-04-01

    The typical uncertainty of a low-dose rate calibration of a detector, which is calibrated in a dedicated secondary national calibration laboratory, is investigated, including measurements in the photon field of metrology institutes. Calibrations at low ambient dose equivalent rates (at the level of the natural ambient radiation) are needed when environmental radiation monitors are to be characterised. The uncertainties of calibration measurements in conventional irradiation facilities above ground are compared with those obtained in a low-dose rate irradiation facility located deep underground. Four laboratories quantitatively evaluated the uncertainties of their calibration facilities, in particular for calibrations at low dose rates (250 nSv/h and 1 μSv/h). For the first time, typical uncertainties of European calibration facilities are documented in a comparison and the main sources of uncertainty are revealed. All sources of uncertainties are analysed, including the irradiation geometry, scattering, deviations of real spectra from standardised spectra, etc. As a fundamental metrological consequence, no instrument calibrated in such a facility can have a lower total uncertainty in subsequent measurements. For the first time, the need to perform calibrations at very low dose rates (< 100 nSv/h) deep underground is underpinned on the basis of quantitative data.

  4. Design and performance evaluation of a simplified dynamic model for combined sewer overflows in pumped sewer systems

    NASA Astrophysics Data System (ADS)

    van Daal-Rombouts, Petra; Sun, Siao; Langeveld, Jeroen; Bertrand-Krajewski, Jean-Luc; Clemens, François

    2016-07-01

    Optimisation or real time control (RTC) studies in wastewater systems increasingly require rapid simulations of sewer systems in extensive catchments. To reduce the simulation time, calibrated simplified models are applied, with performance generally assessed by the goodness of fit of the calibration. In this research the performance of three simplified models and a full hydrodynamic (FH) model for two catchments are compared based on the correct determination of CSO event occurrences and of the total discharged volumes to the surface water. Simplified model M1 consists of a rainfall runoff outflow (RRO) model only. M2 combines the RRO model with a static reservoir model for the sewer behaviour. M3 comprises the RRO model and a dynamic reservoir model. The dynamic reservoir characteristics were derived from FH model simulations. It was found that M2 and M3 are able to describe the sewer behaviour of the catchments, contrary to M1. The preferred model structure depends on the quality of the information (geometrical database and monitoring data) available for the design and calibration of the model. Finally, calibrated simplified models are shown to be preferable to uncalibrated FH models when performing optimisation or RTC studies.

  5. Using a gradient boosting model to improve the performance of low-cost aerosol monitors in a dense, heterogeneous urban environment

    NASA Astrophysics Data System (ADS)

    Johnson, Nicholas E.; Bonczak, Bartosz; Kontokosta, Constantine E.

    2018-07-01

    The increased availability and improved quality of new sensing technologies have catalyzed a growing body of research to evaluate and leverage these tools in order to quantify and describe urban environments. Air quality, in particular, has received greater attention because of the well-established links to serious respiratory illnesses and the unprecedented levels of air pollution in developed and developing countries and cities around the world. Though numerous laboratory and field evaluation studies have begun to explore the use and potential of low-cost air quality monitoring devices, the performance and stability of these tools has not been adequately evaluated in complex urban environments, and further research is needed. In this study, we present the design of a low-cost air quality monitoring platform based on the Shinyei PPD42 aerosol monitor and examine the suitability of the sensor for deployment in a dense heterogeneous urban environment. We assess the sensor's performance during a field calibration campaign from February 7th to March 25th 2017 with a reference instrument in New York City, and present a novel calibration approach using a machine learning method that incorporates publicly available meteorological data in order to improve overall sensor performance. We find that while the PPD42 performs well in relation to the reference instrument using linear regression (R2 = 0.36-0.51), a gradient boosting regression tree model can significantly improve device calibration (R2 = 0.68-0.76). We discuss the sensor's performance and reliability when deployed in a dense, heterogeneous urban environment during a period of significant variation in weather conditions, and important considerations when using machine learning techniques to improve the performance of low-cost air quality monitors.
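
    The reason boosted trees beat a straight line here is that a gradient boosting regressor fits a sum of small trees to successive residuals, capturing nonlinear sensor/meteorology interactions that a single linear calibration misses. The toy from-scratch stump booster below stands in for the paper's gradient boosting regression tree model (the data and features are synthetic, not the PPD42 campaign data):

```python
import numpy as np

def fit_stump(X, residual):
    """Best single-split (stump) fit to the current residuals."""
    best = None
    for j in range(X.shape[1]):
        for t in np.quantile(X[:, j], np.linspace(0.1, 0.9, 9)):
            left = X[:, j] <= t
            if left.all() or not left.any():
                continue
            pl, pr = residual[left].mean(), residual[~left].mean()
            err = np.sum((residual - np.where(left, pl, pr)) ** 2)
            if best is None or err < best[0]:
                best = (err, j, t, pl, pr)
    return best[1:]

def boost_fit(X, y, n_rounds=200, lr=0.1):
    """Gradient boosting for squared loss: repeatedly fit stumps to residuals."""
    pred = np.full(len(y), y.mean())
    stumps = []
    for _ in range(n_rounds):
        j, t, pl, pr = fit_stump(X, y - pred)
        pred += lr * np.where(X[:, j] <= t, pl, pr)
        stumps.append((j, t, pl, pr))
    return y.mean(), stumps

def boost_predict(X, base, stumps, lr=0.1):
    pred = np.full(len(X), base)
    for j, t, pl, pr in stumps:
        pred += lr * np.where(X[:, j] <= t, pl, pr)
    return pred
```

    On any response that is nonlinear in the features, this booster will outperform an ordinary least-squares fit, which mirrors the R-squared gap (0.36-0.51 vs. 0.68-0.76) reported in the abstract.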

  6. Development of new methodologies for evaluating the energy performance of new commercial buildings

    NASA Astrophysics Data System (ADS)

    Song, Suwon

    The concept of Measurement and Verification (M&V) of a new building continues to become more important because efficient design alone is often not sufficient to deliver an efficient building. Simulation models that are calibrated to measured data can be used to evaluate the energy performance of new buildings if they are compared to energy baselines such as similar buildings, energy codes, and design standards. Unfortunately, there is a lack of detailed M&V methods and analysis methods to measure energy savings from new buildings that would have hypothetical energy baselines. Therefore, this study developed and demonstrated several new methodologies for evaluating the energy performance of new commercial buildings using a case-study building in Austin, Texas. First, three new M&V methods were developed to enhance the previous generic M&V framework for new buildings, including: (1) The development of a method to synthesize weather-normalized cooling energy use from a correlation of Motor Control Center (MCC) electricity use when chilled water use is unavailable, (2) The development of an improved method to analyze measured solar transmittance against incidence angle for sample glazing using different solar sensor types, including Eppley PSP and Li-Cor sensors, and (3) The development of an improved method to analyze chiller efficiency and operation at part-load conditions. Second, three new calibration methods were developed and analyzed, including: (1) A new percentile analysis added to the previous signature method for use with a DOE-2 calibration, (2) A new analysis to account for undocumented exhaust air in DOE-2 calibration, and (3) An analysis of the impact of synthesized direct normal solar radiation using the Erbs correlation on DOE-2 simulation. 
Third, an analysis of the actual energy savings compared to three different energy baselines was performed, including: (1) Energy Use Index (EUI) comparisons with sub-metered data, (2) New comparisons against Standards 90.1-1989 and 90.1-2001, and (3) A new evaluation of the performance of selected Energy Conservation Design Measures (ECDMs). Finally, potential energy savings were also simulated from selected improvements, including: minimum supply air flow, undocumented exhaust air, and daylighting.

  7. Ground truth data for test sites (SL-4). [thermal radiation brightness temperature and solar radiation measurements]

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Field measurements performed simultaneously with the Skylab overpass, in order to provide comparative calibration and performance evaluation measurements for the EREP sensors, are presented. Wavelength regions covered include solar radiation (400 to 1300 nanometers) and thermal radiation (8 to 14 micrometers). Measurements consisted of general conditions and near-surface meteorology, atmospheric temperature and humidity vs. altitude, thermal brightness temperature, total and diffuse solar radiation, direct solar radiation (subsequently analyzed for optical depth/transmittance), and target reflectivity/radiance. The particular instruments used are discussed along with the analyses performed. Detailed instrument operation, calibrations, techniques, and errors are given.

  8. Performance of multilayer coated diffraction gratings in the EUV

    NASA Technical Reports Server (NTRS)

    Keski-Kuha, Ritva A. M.; Thomas, Roger J.; Gum, Jeffrey S.; Condor, Charles E.

    1990-01-01

    The effect of multilayer coatings on the performance of diffraction gratings in the EUV spectral region was evaluated by examining the performance of 3600-line/mm and 1200-line/mm replica blazed gratings designed for first-order operation in the 300-A spectral region. A ten-layer IrSi multilayer optimized for 304 A was deposited using electron-beam evaporation. The grating efficiency was measured on the SURF II calibration beamline in a chamber designed for calibrating the multilayer coatings of the solar EUV rocket telescope and spectrograph. A significant enhancement in grating efficiency (by a factor of about 7) in the 300-A region was demonstrated.

  9. SENSITIVITY ANALYSIS OF THE USEPA WINS PM 2.5 SEPARATOR

    EPA Science Inventory

    Factors affecting the performance of the US EPA WINS PM2.5 separator have been systematically evaluated. In conjunction with the separator's laboratory calibrated penetration curve, analysis of the governing equation that describes conventional impactor performance was used to ...

  10. Design, installation, and performance evaluation of a custom dye matrix standard for automated capillary electrophoresis.

    PubMed

    Cloete, Kevin Wesley; Ristow, Peter Gustav; Kasu, Mohaimin; D'Amato, Maria Eugenia

    2017-03-01

    CE equipment detects and deconvolutes mixtures containing up to six fluorescently labeled DNA fragments. This deconvolution is done by the collection software, which requires a spectral calibration file. The calibration file adjusts for the overlap between the emission spectra of the fluorescent dyes. All commercial genotyping and sequencing kits require the installation of a corresponding matrix standard to generate a calibration file. Because the extent of emission-spectrum overlap differs between fluorescent dyes, applying an existing commercial matrix standard to the electrophoretic separation of DNA labeled with other fluorescent dyes can yield undesirable results. Currently, the number of fluorescent dyes available for oligonucleotide labeling surpasses the availability of commercial matrix standards. Therefore, in this study we developed and evaluated a customized matrix standard using ATTO 633, ATTO 565, ATTO 550, ATTO Rho6G, and 6-FAM, a dye combination for which no commercial matrix standard is available. We highlighted the genotyping errors that can result from using an incorrect matrix standard by evaluating the relative performance of our custom dye set with six matrix standards. The specific performance of two genotyping kits (UniQTyper™ Y-10 version 1.0 and PowerPlex® Y23 System) was also evaluated using their respective matrix standards. The procedure we followed for the construction of our custom dye matrix standard can be extended to other fluorescent dyes. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
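
    The spectral deconvolution the calibration file enables is, at its core, the inversion of a linear mixing model. A minimal two-dye sketch with hypothetical overlap coefficients (real instruments handle up to six dyes):

```python
# Measured channel signals are a linear mix of the dyes' true amounts;
# the spectral-overlap matrix from the calibration file is inverted to
# recover them. Two dyes solved via Cramer's rule; numbers are made up.

def unmix_two_dyes(signal_ch1, signal_ch2, m):
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    dye1 = (signal_ch1 * m[1][1] - m[0][1] * signal_ch2) / det
    dye2 = (m[0][0] * signal_ch2 - signal_ch1 * m[1][0]) / det
    return dye1, dye2

# Column j = relative emission of dye j seen in each detection channel.
overlap = [[1.0, 0.3],   # channel 1: all of dye 1, 30% bleed-through of dye 2
           [0.2, 1.0]]   # channel 2: 20% bleed-through of dye 1, all of dye 2
amounts = unmix_two_dyes(1.3, 1.2, overlap)
```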

  11. Innovative methodology for intercomparison of radionuclide calibrators using short half-life in situ prepared radioactive sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oliveira, P. A.; Santos, J. A. M., E-mail: joao.santos@ipoporto.min-saude.pt; Serviço de Física Médica do Instituto Português de Oncologia do Porto Francisco Gentil, EPE, Porto

    2014-07-15

    Purpose: An original radionuclide calibrator method for activity determination is presented. The method could be used for intercomparison surveys for short half-life radioactive sources used in Nuclear Medicine, such as 99mTc or most positron emission tomography radiopharmaceuticals. Methods: By evaluating the net optical density (netOD) of irradiated Gafchromic XRQA2 film with a standardized scanning method, the netOD measurement can be compared with a previously determined calibration curve, and the difference between the tested radionuclide calibrator and a reference radionuclide calibrator can be calculated. To estimate the total expected measurement uncertainties, a careful analysis of the methodology was performed for the case of 99mTc: reproducibility determination, scanning conditions, and possible fadeout effects. Since every factor of the activity measurement procedure can influence the final result, the method also evaluates correct syringe positioning inside the radionuclide calibrator. Results: As an alternative to sending a calibrated source to the surveyed site, which requires a nuclide with a relatively long half-life, or sending a portable calibrated radionuclide calibrator, the proposed method uses a source prepared in situ. An indirect activity determination is achieved by irradiating a radiochromic film with 99mTc under strictly controlled conditions and calculating the cumulated activity from the initial activity and total irradiation time. The irradiated Gafchromic film and the irradiator, without the source, can then be sent to a National Metrology Institute for evaluation of the results. Conclusions: The methodology described in this paper showed good potential for accurate (3%) radionuclide calibrator intercomparison studies for 99mTc between Nuclear Medicine centers without source transfer, and it can easily be adapted to other short half-life radionuclides.
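
    The two quantities the method rests on can be sketched as follows, with hypothetical numbers; the netOD formula is one common pixel-value definition, and the paper's exact scanning protocol is not reproduced here.

```python
import math

# Net optical density of a radiochromic film from scanner pixel values
# before and after irradiation, and the cumulated activity obtained by
# integrating radioactive decay over the irradiation time.

def net_optical_density(pv_unexposed, pv_exposed):
    return math.log10(pv_unexposed / pv_exposed)

def cumulated_activity(a0_mbq, half_life_h, irradiation_h):
    # integral of a0 * exp(-lambda * t) from t = 0 to the irradiation time
    lam = math.log(2.0) / half_life_h
    return a0_mbq * (1.0 - math.exp(-lam * irradiation_h)) / lam

net_od = net_optical_density(100.0, 50.0)       # film darkens after exposure
a_cum = cumulated_activity(500.0, 6.01, 2.0)    # 500 MBq of 99mTc for 2 h (MBq*h)
```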

  12. Enhancing Students' Self-Regulation and Mathematics Performance: The Influence of Feedback and Self-Evaluative Standards

    ERIC Educational Resources Information Center

    Labuhn, Andju Sara; Zimmerman, Barry J.; Hasselhorn, Marcus

    2010-01-01

    The purpose of this study was to examine the effects of self-evaluative standards and graphed feedback on calibration accuracy and performance in mathematics. Specifically, we explored the influence of mastery learning standards as opposed to social comparison standards as well as of individual feedback as opposed to social comparison feedback. 90…

  13. Modeling and Calibration of a Novel One-Mirror Galvanometric Laser Scanner

    PubMed Central

    Yu, Chengyi; Chen, Xiaobo; Xi, Juntong

    2017-01-01

    A laser stripe sensor has limited application when a point cloud of geometric samples on the surface of an object needs to be collected, so a galvanometric laser scanner is designed using a one-mirror galvanometer element as the mechanical device that drives the laser stripe to sweep along the object. A novel mathematical model is derived for the proposed galvanometric laser scanner without any positional assumptions, and a model-driven calibration procedure is then proposed. Compared with available model-driven approaches, the proposed model accounts for the influence of machining and assembly errors. Meanwhile, a plane-constraint-based approach is proposed to extract a large number of calibration points effectively and accurately. The repeatability and accuracy of the galvanometric laser scanner are evaluated on an automobile production line to verify the efficiency and accuracy of the proposed calibration method. Experimental results show that the proposed calibration approach yields measurement performance similar to that of a look-up-table calibration method. PMID:28098844

  14. High-Reynolds Number Active Blowing Semi-Span Force Measurement System Development

    NASA Technical Reports Server (NTRS)

    Lynn, Keith C.; Rhew, Ray D.; Acheson, Michael J.; Jones, Gregory S.; Milholen, William E.; Goodliff, Scott L.

    2012-01-01

    Recent wind-tunnel tests at the NASA Langley Research Center National Transonic Facility utilized high-pressure bellows to route air to the model for evaluating aircraft circulation control. The introduction of these bellows within the Sidewall Model Support System significantly impacted the performance of the external sidewall-mounted semi-span balance. As a result, it became apparent that a new capability needed to be built into the National Transonic Facility's infrastructure to allow pressure tare calibrations on the balance, so that its performance could be properly characterized under the influence of static bellows pressure tare loads and bellows thermal effects. The objective of this study was to design both mechanical calibration hardware and an experimental calibration design that the facility can employ to efficiently and precisely perform the loadings needed to characterize the semi-span balance under multiple calibration factors (balance forces/moments and bellows pressure/temperature). Using statistical design of experiments, an experimental design was developed for strategically characterizing the behavior of the semi-span balance for use in circulation control and propulsion-type flow control testing at the National Transonic Facility.

  15. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization

    NASA Astrophysics Data System (ADS)

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj

    2015-03-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies, such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious, requiring acquisition of multiple images of a target pattern in its entirety to produce satisfactory results. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE), as a measure of camera calibration accuracy for our optical tracking-based AR system, was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising for application to our augmented reality visualization system for laparoscopic surgery.

  16. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization

    PubMed Central

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj

    2017-01-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies, such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious, requiring acquisition of multiple images of a target pattern in its entirety to produce satisfactory results. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE), as a measure of camera calibration accuracy for our optical tracking-based AR system, was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising for application to our augmented reality visualization system for laparoscopic surgery. PMID:28943703

  17. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization.

    PubMed

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D; Shekhar, Raj

    2015-03-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies, such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious, requiring acquisition of multiple images of a target pattern in its entirety to produce satisfactory results. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE), as a measure of camera calibration accuracy for our optical tracking-based AR system, was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising for application to our augmented reality visualization system for laparoscopic surgery.

  18. Spreadsheet for designing valid least-squares calibrations: A tutorial.

    PubMed

    Bettencourt da Silva, Ricardo J N

    2016-02-01

    Instrumental methods of analysis are used to define the price of goods, the compliance of products with regulations, and the outcomes of fundamental or applied research. These methods can only play their role properly if the reported information is objective and its quality is fit for the intended use. If measurement results are reported with an adequately small measurement uncertainty, both of these goals are achieved. The measurement uncertainty can be evaluated by the bottom-up approach, which involves a detailed description of the measurement process, or by a pragmatic top-down approach that quantifies major uncertainty components from global performance data. The bottom-up approach is less frequently used because of the need to master the quantification of the individual components responsible for random and systematic effects that affect measurement results. This work presents a tutorial that can be easily used by non-experts for the accurate evaluation of the measurement uncertainty of instrumental methods of analysis calibrated using least-squares regression. The tutorial includes the definition of the calibration interval, the assessment of instrumental response homoscedasticity, the definition of the calibrator preparation procedure required for applying the least-squares regression model, the assessment of instrumental response linearity, and the evaluation of measurement uncertainty. The developed measurement model is only applicable in calibration ranges where signal precision is constant. An MS-Excel file is made available to allow easy application of the tutorial. This tool can be useful where top-down approaches cannot produce results with adequately low measurement uncertainty. An example of the application of this tool to the determination of nitrate in water by ion chromatography is presented. Copyright © 2015 Elsevier B.V. All rights reserved.
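
    The core of such a calibration can be sketched in a few lines: fit an ordinary least-squares line to calibrator signals, then invert it to estimate an unknown concentration. This is a minimal sketch with made-up data, not the tutorial's spreadsheet; it assumes constant signal precision across the range, as the tutorial requires.

```python
# Ordinary least-squares calibration line and its inversion.

def ols_fit(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    intercept = my - slope * mx
    return slope, intercept

def predict_concentration(signal, slope, intercept):
    return (signal - intercept) / slope

conc = [0.0, 2.0, 4.0, 6.0, 8.0]        # calibrator concentrations
sig = [0.05, 2.10, 3.98, 6.03, 8.01]    # instrument responses
m, b = ols_fit(conc, sig)
x0 = predict_concentration(5.0, m, b)    # unknown sample with signal 5.0
```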

  19. A Monte-Carlo simulation analysis for evaluating the severity distribution functions (SDFs) calibration methodology and determining the minimum sample-size requirements.

    PubMed

    Shirazi, Mohammadali; Reddy Geedipally, Srinivas; Lord, Dominique

    2017-01-01

    Severity distribution functions (SDFs) are used in highway safety to estimate the severity of crashes and to conduct different types of safety evaluations and analyses. Developing a new SDF is a difficult task that demands significant time and resources. To simplify the process, the Highway Safety Manual (HSM) has started to document SDF models for different types of facilities; SDF models have recently been introduced for freeways and ramps in an HSM addendum. However, since these models are fitted and validated using data from a few selected states, they must be calibrated to local conditions when applied in a new jurisdiction. The HSM provides a methodology to calibrate the models through a scalar calibration factor, but this methodology was never validated through research, and there are no concrete guidelines for selecting a reliable sample size. Using extensive simulation, this paper documents an analysis of the bias between the 'true' and 'estimated' calibration factors. The analysis showed that as the true calibration factor deviates further from 1, more bias is observed between the 'true' and 'estimated' calibration factors. In addition, simulation studies were performed to determine the calibration sample size for various conditions. It was found that as the average coefficient of variation (CV) of the 'KAB' and 'C' crashes increases, the analyst needs to collect a larger sample to calibrate SDF models. Taking this observation into account, sample-size guidelines are proposed based on the average CV of the crash severities used in the calibration process. Copyright © 2016 Elsevier Ltd. All rights reserved.
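
    A scalar calibration factor of the kind the HSM methodology uses is typically the ratio of locally observed crashes to the total predicted by the uncalibrated model over the calibration sites; a sketch with illustrative numbers:

```python
# HSM-style scalar calibration factor: C = sum(observed) / sum(predicted).
# Site counts below are made up for illustration.

def sdf_calibration_factor(observed, predicted):
    return sum(observed) / sum(predicted)

observed_kab = [12, 7, 30, 18]            # severe (KAB) crashes per site
predicted_kab = [10.0, 8.5, 24.0, 20.5]   # uncalibrated SDF predictions
c = sdf_calibration_factor(observed_kab, predicted_kab)
```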

  20. Early Assessment of VIIRS On-Orbit Calibration and Support Activities

    NASA Technical Reports Server (NTRS)

    Xiong, Xiaoxiong; Chiang, Kwofu; McIntire, Jeffrey; Oudrari, Hassan; Wu, Aisheng; Schwaller, Mathew; Butler, James

    2012-01-01

    The Suomi National Polar-orbiting Partnership (S-NPP) satellite, formerly the National Polar-orbiting Operational Environmental Satellite System (NPOESS) Preparatory Project (NPP), provides a bridge between current and future low-Earth-orbiting weather and environmental observation satellite systems. NASA's NPP VIIRS Characterization Support Team (VCST) assesses the long-term geometric and radiometric performance of the Visible Infrared Imaging Radiometer Suite (VIIRS) instrument onboard the S-NPP spacecraft and supports NPP Science Team Principal Investigators (PIs) in their independent evaluation of VIIRS Environmental Data Records (EDRs). This paper provides an overview of Suomi NPP VIIRS on-orbit calibration activities and examples of initial on-orbit sensor performance. It focuses on the radiometric calibration support activities and capabilities provided by the NASA VCST.

  1. Accuracy, calibration and clinical performance of the EuroSCORE: can we reduce the number of variables?

    PubMed

    Ranucci, Marco; Castelvecchio, Serenella; Menicanti, Lorenzo; Frigiola, Alessandro; Pelissero, Gabriele

    2010-03-01

    The European system for cardiac operative risk evaluation (EuroSCORE) is currently used in many institutions and is considered a reference tool in many countries. We hypothesised that too many variables had been included in the EuroSCORE, which was derived from limited patient series, and we tested alternative models using fewer variables. A total of 11,150 adult patients undergoing cardiac operations at our institution (2001-2007) were retrospectively analysed. The 17 risk factors composing the EuroSCORE were separately analysed and ranked for accuracy in predicting hospital mortality. Seventeen models were created by progressively including one factor at a time. The models were compared for accuracy using receiver operating characteristic (ROC) analysis and area under the curve (AUC) evaluation. Calibration was tested with Hosmer-Lemeshow statistics. Clinical performance was assessed by comparing predicted with observed mortality rates. The best accuracy (AUC 0.76) was obtained using a model including only age, left ventricular ejection fraction, serum creatinine, emergency operation and non-isolated coronary operation. The EuroSCORE AUC (0.75) was not significantly different. Calibration and clinical performance were better in the five-factor model than in the EuroSCORE. Only in high-risk patients were 12 factors needed to achieve good performance. Including many factors in multivariable logistic models increases the risk of overfitting, multicollinearity and human error. A five-factor model offers the same level of accuracy but demonstrated better calibration and clinical performance. Models with a limited number of factors may work better than complex models when applied to a limited number of patients. Copyright (c) 2009 European Association for Cardio-Thoracic Surgery. Published by Elsevier B.V. All rights reserved.
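
    The discrimination measure used in the study, the area under the ROC curve, equals the probability that a randomly chosen death received a higher predicted risk than a randomly chosen survivor (ties counting half). A minimal sketch with toy risks, not EuroSCORE output:

```python
# AUC via the Mann-Whitney pairwise-comparison interpretation.

def auc(risk_deaths, risk_survivors):
    wins = 0.0
    for d in risk_deaths:
        for s in risk_survivors:
            if d > s:
                wins += 1.0
            elif d == s:
                wins += 0.5
    return wins / (len(risk_deaths) * len(risk_survivors))

deaths = [0.30, 0.12, 0.45]          # predicted risks of patients who died
survivors = [0.05, 0.10, 0.12, 0.20] # predicted risks of survivors
a = auc(deaths, survivors)
```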

  2. Precipitation Estimate Using NEXRAD Ground-Based Radar Images: Validation, Calibration and Spatial Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xuesong

    2012-12-17

    Precipitation is an important input variable for hydrologic and ecological modeling and analysis. Next Generation Radar (NEXRAD) can provide precipitation products that cover most of the continental United States at a high resolution of approximately 4 × 4 km². Two major issues concerning the application of NEXRAD data are (1) the lack of a NEXRAD geo-processing and geo-referencing program and (2) bias correction of NEXRAD estimates. In this chapter, a geographic information system (GIS) based software package that automatically processes NEXRAD data for hydrologic and ecological models is presented. Geostatistical approaches to calibrating NEXRAD data using rain gauge data are introduced, and two case studies on evaluating the accuracy of the NEXRAD Multisensor Precipitation Estimator (MPE) and calibrating MPE with rain-gauge data are presented. The first case study examines the performance of MPE in a mountainous region versus the southern plains and in the cold season versus the warm season, as well as the effects of sub-grid variability and temporal scale on NEXRAD performance. MPE performance was found to be influenced by complex terrain, frozen precipitation, sub-grid variability, and temporal scale. Overall, the assessment indicates the importance of removing bias from the MPE precipitation product before its application, especially in complex mountainous regions. The second case study examines the performance of three MPE calibration methods using rain gauge observations in the Little River Experimental Watershed in Georgia. The comparison shows that no single method performs better than the others in terms of all evaluation coefficients and for all time steps. For practical estimation of precipitation distribution, implementation of multiple methods to predict spatial precipitation is suggested.

  3. Calibration of a Distributed Hydrological Model using Remote Sensing Evapotranspiration data in the Semi-Arid Punjab Region of Pakistan

    NASA Astrophysics Data System (ADS)

    Becker, R.; Usman, M.

    2017-12-01

    A SWAT (Soil Water Assessment Tool) model is applied in the semi-arid Punjab region of Pakistan. The physically based hydrological model is set up to simulate hydrological processes and water resource demands under future land use, climate change, and irrigation management scenarios. To run the model successfully, detailed attention is paid to the calibration procedure. The study addresses the following calibration issues: (i) lack of reliable calibration/validation data, (ii) difficulty of accurately modeling a highly managed system with a physically based hydrological model, and (iii) use of alternative, spatially distributed data sets for model calibration. In our study area field observations are rare, and the entirely human-controlled irrigation system renders central calibration parameters (e.g., runoff/curve number) unsuitable, as they cannot be assumed to represent the natural behavior of the hydrological system. From evapotranspiration (ET), however, principal hydrological processes can still be inferred. Usman et al. (2015) derived satellite-based monthly ET data for our study area using SEBAL (Surface Energy Balance Algorithm for Land) and created a reliable ET data set, which we use in this study to calibrate our SWAT model. The initial SWAT model performance is evaluated against the SEBAL results using correlation coefficients, RMSE, Nash-Sutcliffe efficiencies, and mean differences. Particular focus is placed on the spatial patterns, investigating the potential of a spatially differentiated parameterization instead of spatially uniform calibration data. A sensitivity analysis reveals the parameters most sensitive to changes in ET, which are then selected for the calibration process. Using the SEBAL ET product, we calibrate the SWAT model for 2005-2006 with a dynamically dimensioned global search algorithm that minimizes RMSE. The model improvement after calibration is evaluated with the previously chosen evaluation criteria for 2007-2008. The study reveals the sensitivity of SWAT model parameters to changes in ET in a semi-arid, human-controlled system and the potential of calibrating those parameters using satellite-derived ET data.
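
    Two of the evaluation criteria named in this record, RMSE and the Nash-Sutcliffe efficiency of simulated ET against a reference series, reduce to a few lines; the monthly values below are illustrative, not the study's data.

```python
import math

def rmse(sim, obs):
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs))

def nash_sutcliffe(sim, obs):
    # 1 - (model error variance / variance of observations); 1 is a perfect fit
    mean_obs = sum(obs) / len(obs)
    num = sum((s - o) ** 2 for s, o in zip(sim, obs))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

obs_et = [60.0, 85.0, 120.0, 95.0]   # monthly ET (mm), e.g. a SEBAL-type reference
sim_et = [55.0, 90.0, 110.0, 100.0]  # model-simulated ET
```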

  4. Real-Time Point Positioning Performance Evaluation of Single-Frequency Receivers Using NASA's Global Differential GPS System

    NASA Technical Reports Server (NTRS)

    Muellerschoen, Ronald J.; Iijima, Byron; Meyer, Robert; Bar-Sever, Yoaz; Accad, Elie

    2004-01-01

    This paper evaluates the performance of a single-frequency receiver using the 1-Hz differential corrections provided by NASA's global differential GPS system. While a dual-frequency user can eliminate the ionosphere error by taking a linear combination of observables, the single-frequency user must remove or calibrate this error by other means. To remove the ionosphere error, we take advantage of the fact that the ionospheric group delay in the range observable and the carrier phase advance have the same magnitude but opposite sign. A way to calibrate this error instead is to use a real-time database of grid points computed by JPL's RTI (Real-Time Ionosphere) software. In both cases we evaluate the positional accuracy of a kinematic carrier-phase-based point positioning method on a global scale.
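
    The code-phase trick described here (often called the GRAPHIC combination) can be sketched as follows: first-order ionospheric delay enters the pseudorange as +I and the carrier phase as -I, so their average is ionosphere-free. Values are illustrative, in meters, and the carrier ambiguity is ignored for simplicity.

```python
# (P + Phi) / 2 cancels the first-order ionospheric term, since the
# group delay adds to the pseudorange while the phase advance subtracts
# from the carrier-phase range (carrier ambiguity not modeled here).

def iono_free_single_freq(pseudorange, carrier_phase_range):
    return 0.5 * (pseudorange + carrier_phase_range)

geometric_range = 20_000_000.0           # hypothetical receiver-satellite range
iono_delay = 7.5                         # hypothetical first-order iono delay
p = geometric_range + iono_delay         # code observable
phi = geometric_range - iono_delay       # phase observable
combined = iono_free_single_freq(p, phi)
```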

  5. SSA Sensor Calibration Best Practices

    NASA Astrophysics Data System (ADS)

    Johnson, T.

    Best practices for calibrating orbit determination sensors in general, and space situational awareness (SSA) sensors in particular, are presented. These practices were developed over the last ten years within AGI and most recently applied to over 70 sensors in AGI's Commercial Space Operations Center (ComSpOC) and the US Air Force Space Command (AFSPC) Space Surveillance Network (SSN) to evaluate and configure new sensors and perform ongoing system calibration. They are generally applicable to any SSA sensor and leverage unique capabilities of an SSA estimation approach that uses an optimal sequential filter and smoother. Real-world results are presented and analyzed.

  6. Accuracy improvement in a calibration test bench for accelerometers by a vision system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D’Emilia, Giulio, E-mail: giulio.demilia@univaq.it; Di Gasbarro, David, E-mail: david.digasbarro@graduate.univaq.it; Gaspari, Antonella, E-mail: antonella.gaspari@graduate.univaq.it

    2016-06-28

    A procedure is described in this paper for improving the calibration accuracy of low-cost accelerometers in a prototype rotary test bench driven by a brushless servo-motor and operating in a low-frequency range of vibrations (0 to 5 Hz). Vibration measurements by a vision system based on a low-frequency camera have been carried out in order to reduce the uncertainty in evaluating the real acceleration at the installation point of the sensor to be calibrated. A preliminary test device was realized and operated to evaluate the metrological performance of the vision system, which showed satisfactory behavior once the measurement uncertainty is taken into account. A combination of suitable settings of the motion control system's parameters and the information gained by the vision system made it possible to match the reference acceleration at the installation point to the needs of the procedure for static and dynamic calibration of three-axis accelerometers.

  7. [Evaluation of vaporizers by anesthetic gas monitors corrected with a new method for preparation of calibration gases].

    PubMed

    Kurashiki, T

    1996-11-01

    To resolve the discrepancies in concentration readings among anesthetic gas monitors, the author proposes a new method that uses a vaporizer as a standard anesthetic gas generator for calibration. In this method, the carrier gas volume is measured by a mass flow meter (SEF-510 + FI-101) installed before the inlet of the vaporizer. The vaporized weight of the volatile anesthetic agent is simultaneously measured by an electronic force balance (E12000S), on which the vaporizer is placed directly. The molar percent of the anesthetic is calculated from these data and converted into volume percent. The gases discharged from the vaporizer are used to calibrate the anesthetic gas monitors, which are then normalized by the linear equation relating the concentrations of the calibration gases to the monitor readings. Using normalized monitors, flow rate-concentration performance curves of several anesthetic vaporizers were obtained. The author concludes that this method can serve as a standard for evaluating anesthetic vaporizers.
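
    The molar-percent calculation described in this record can be sketched as follows, with illustrative numbers: moles of vaporized agent from the balance reading, moles of carrier gas from the mass-flow-meter volume (ideal gas assumed, here at roughly 20 °C and 1 atm, molar volume ~24.05 L/mol), then molar percent.

```python
# Molar percent of anesthetic = n_agent / (n_agent + n_carrier) * 100,
# with the carrier-gas moles taken from its volume via the ideal gas law.

def anesthetic_mol_percent(agent_mass_g, agent_molar_mass,
                           carrier_volume_l, molar_volume_l=24.05):
    n_agent = agent_mass_g / agent_molar_mass
    n_carrier = carrier_volume_l / molar_volume_l
    return 100.0 * n_agent / (n_agent + n_carrier)

# Hypothetical run: 1.85 g of isoflurane (M ~ 184.5 g/mol) vaporized
# into 24.05 L of carrier gas
pct = anesthetic_mol_percent(1.85, 184.5, 24.05)
```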

  8. Evaluation of Airborne Visible/Infrared Imaging Spectrometer Data of the Mountain Pass, California carbonatite complex

    NASA Technical Reports Server (NTRS)

    Crowley, James; Rowan, Lawrence; Podwysocki, Melvin; Meyer, David

    1988-01-01

    Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data of the Mountain Pass, California carbonatite complex were examined to evaluate the AVIRIS instrument performance and to explore alternative methods of data calibration. Although signal-to-noise estimates derived from the data indicated that the A, B, and C spectrometers generally met the original instrument design objectives, the S/N performance of the D spectrometer was below expectations. Signal-to-noise values of 20 to 1 or lower were typical of the D spectrometer and several detectors in the D spectrometer array were shown to have poor electronic stability. The AVIRIS data also exhibited periodic noise, and were occasionally subject to abrupt dark current offsets. Despite these limitations, a number of mineral absorption bands, including CO3, Al-OH, and unusual rare earth element bands, were observed for mine areas near the main carbonatite body. To discern these bands, two different calibration procedures were applied to remove atmospheric and solar components from the remote sensing data. The two procedures, referred to as the single spectrum and the flat field calibration methods gave distinctly different results. In principle, the single spectrum method should be more accurate; however, additional fieldwork is needed to rigorously determine the degree of calibration success.
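
    The flat-field calibration mentioned in this record divides each scene spectrum, band by band, by the mean spectrum of a spectrally bland bright area, removing the shared solar and atmospheric shape and leaving relative reflectance. A minimal sketch with toy four-band spectra:

```python
# Band-by-band flat-field correction; the dip in band 3 of the result
# corresponds to an absorption feature in the target pixel.

def flat_field_correct(scene_spectrum, flat_field_mean):
    return [s / f for s, f in zip(scene_spectrum, flat_field_mean)]

flat = [200.0, 180.0, 160.0, 150.0]    # mean radiance of the bland bright area
pixel = [100.0, 90.0, 64.0, 75.0]      # target pixel radiance
relative_reflectance = flat_field_correct(pixel, flat)
```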

  9. Digital filtering and model updating methods for improving the robustness of near-infrared multivariate calibrations.

    PubMed

    Kramer, Kirsten E; Small, Gary W

    2009-02-01

    Fourier transform near-infrared (NIR) transmission spectra are used for quantitative analysis of glucose for 17 sets of prediction data sampled as much as six months outside the timeframe of the corresponding calibration data. Aqueous samples containing physiological levels of glucose in a matrix of bovine serum albumin and triacetin are used to simulate clinical samples such as blood plasma. Background spectra of a single analyte-free matrix sample acquired during the instrumental warm-up period on the prediction day are used for calibration updating and for determining the optimal frequency response of a preprocessing infinite impulse response time-domain digital filter. By tuning the filter and the calibration model to the specific instrumental response associated with the prediction day, the calibration model is given enhanced ability to operate over time. This methodology is demonstrated in conjunction with partial least squares calibration models built with a spectral range of 4700-4300 cm(-1). By using a subset of the background spectra to evaluate the prediction performance of the updated model, projections can be made regarding the success of subsequent glucose predictions. If a threshold standard error of prediction (SEP) of 1.5 mM is used to establish successful model performance with the glucose samples, the corresponding threshold for the SEP of the background spectra is found to be 1.3 mM. For calibration updating in conjunction with digital filtering, SEP values of all 17 prediction sets collected over 3-178 days displaced from the calibration data are below 1.5 mM. In addition, the diagnostic based on the background spectra correctly assesses the prediction performance in 16 of the 17 cases.
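
    The preprocessing step named in this record, in minimal form: a first-order (one-pole) infinite impulse response low-pass filter whose single coefficient could be tuned per prediction day. This is a generic IIR sketch, not the paper's optimized frequency response.

```python
# One-pole IIR low-pass: y[n] = alpha * x[n] + (1 - alpha) * y[n-1].

def iir_lowpass(signal, alpha):
    out = []
    y = signal[0]  # initialize state at the first sample
    for x in signal:
        y = alpha * x + (1.0 - alpha) * y
        out.append(y)
    return out

noisy = [1.0, 1.4, 0.6, 1.2, 0.8, 1.0]
smoothed = iir_lowpass(noisy, 0.3)
```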

  10. A calibration rig for multi-component internal strain gauge balance using the new design-of-experiment (DOE) approach

    NASA Astrophysics Data System (ADS)

    Nouri, N. M.; Mostafapour, K.; Kamran, M.

    2018-02-01

    In a closed water-tunnel circuit, multi-component strain gauge force and moment sensors (also known as balances) are generally used to measure hydrodynamic forces and moments acting on scaled models. These balances are periodically calibrated by static loading. Their performance and accuracy depend significantly on the rig and the method of calibration. In this research, a new calibration rig was designed and constructed to calibrate multi-component internal strain gauge balances. The calibration rig has six degrees of freedom and six different component-loading structures that can be applied separately and synchronously. The system was designed based on the applicability of formal experimental design techniques, using gravity for balance loading and for balance positioning and alignment relative to gravity. To evaluate the calibration rig, a six-component internal balance developed by Iran University of Science and Technology was calibrated using response surface methodology. According to the results, the calibration rig met all design criteria. This rig provides the means by which various methods of formal experimental design techniques can be implemented. The simplicity of the rig saves time and money in the design of experiments and in balance calibration while simultaneously increasing the accuracy of these activities.
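    To illustrate what a formal experimental design for such a rig can look like, the simplest case, a two-level full factorial over the six load components, can be enumerated directly (the study itself uses response surface methodology; the component names and coded levels below are hypothetical):

```python
import itertools

# Two-level full-factorial loading schedule for a six-component balance.
# Each run applies every component at its low (-1) or high (+1) coded level.
components = ["Fx", "Fy", "Fz", "Mx", "My", "Mz"]
runs = list(itertools.product([-1, +1], repeat=len(components)))

print(len(runs))          # 2**6 = 64 loading combinations
print(runs[0], runs[-1])  # all-low and all-high runs
```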

  11. Aquarius Whole Range Calibration: Celestial Sky, Ocean, and Land Targets

    NASA Technical Reports Server (NTRS)

    Dinnat, Emmanuel P.; Le Vine, David M.; Bindlish, Rajat; Piepmeier, Jeffrey R.; Brown, Shannon T.

    2014-01-01

    Aquarius is a spaceborne instrument that uses L-band radiometers to monitor sea surface salinity globally. Other applications of its data over land and the cryosphere are being developed. Combining its measurements with existing and upcoming L-band sensors will allow for long-term studies. For that purpose, the radiometers' calibration is critical. Aquarius measurements are currently calibrated over the oceans. They have been found too cold at the low end (celestial sky) of the brightness temperature scale, and too warm at the warm end (land and ice). We assess the impact of the antenna pattern model on the biases and propose a correction. We re-calibrate Aquarius measurements using the corrected antenna pattern and measurements over the sky and oceans. The performance of the new calibration is evaluated using measurements over well-instrumented land sites.

  12. Calibrating the orientation between a microlens array and a sensor based on projective geometry

    NASA Astrophysics Data System (ADS)

    Su, Lijuan; Yan, Qiangqiang; Cao, Jun; Yuan, Yan

    2016-07-01

    We demonstrate a method for calibrating a microlens array (MLA) with a sensor component by building a plenoptic camera with a conventional prime lens. This calibration method includes a geometric model, a setup to adjust the distance (L) between the prime lens and the MLA, a calibration procedure for determining the subimage centers, and an optimization algorithm. The geometric model introduces nine unknown parameters regarding the centers of the microlenses and their images, whereas the distance adjustment setup provides an initial guess for the distance L. The simulation results verify the effectiveness and accuracy of the proposed method. The experimental results demonstrate that the calibration process can be performed with a commercial prime lens and that the proposed method can be used to quantitatively evaluate whether an MLA and a sensor are assembled properly for plenoptic systems.

  13. Wavelet Analysis Used for Spectral Background Removal in the Determination of Glucose from Near-Infrared Single-Beam Spectra

    PubMed Central

    Wan, Boyong; Small, Gary W.

    2010-01-01

    Wavelet analysis is developed as a preprocessing tool for use in removing background information from near-infrared (near-IR) single-beam spectra before the construction of multivariate calibration models. Three data sets collected with three different near-IR spectrometers are investigated that involve the determination of physiological levels of glucose (1-30 mM) in a simulated biological matrix containing alanine, ascorbate, lactate, triacetin, and urea in phosphate buffer. A factorial design is employed to optimize the specific wavelet function used and the level of decomposition applied, in addition to the spectral range and number of latent variables associated with a partial least-squares calibration model. The prediction performance of the computed models is studied with separate data acquired after the collection of the calibration spectra. This evaluation includes one data set collected over a period of more than six months. Preprocessing with wavelet analysis is also compared to the calculation of second-derivative spectra. Over the three data sets evaluated, wavelet analysis is observed to produce better-performing calibration models, with improvements in concentration predictions on the order of 30% being realized relative to models based on either second-derivative spectra or spectra preprocessed with simple additive and multiplicative scaling correction. This methodology allows the construction of stable calibrations directly with single-beam spectra, thereby eliminating the need for the collection of a separate background or reference spectrum. PMID:21035604
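    The core background-removal idea can be sketched with a single-level Haar transform: decompose the signal, discard the low-frequency approximation coefficients that carry the broad background, and reconstruct from the detail coefficients only. The study optimizes the wavelet family and decomposition level via a factorial design; this minimal Haar sketch, on a hypothetical "spectrum", is only illustrative.

```python
import math

def haar_highpass(signal):
    """One-level Haar wavelet decomposition that zeroes the approximation
    (low-frequency background) and reconstructs from the detail
    coefficients only. Signal length is assumed even."""
    s = math.sqrt(2.0)
    detail = [(signal[2 * k] - signal[2 * k + 1]) / s for k in range(len(signal) // 2)]
    out = []
    for d in detail:                 # inverse transform with approximation = 0
        out.extend([d / s, -d / s])
    return out

# Hypothetical single-beam "spectrum": sharp feature on a broad background
spectrum = [10.0, 10.1, 10.2, 10.3, 10.4, 13.0, 10.6, 10.7]
filtered = haar_highpass(spectrum)
print([round(v, 2) for v in filtered])  # background largely removed, feature retained
```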

  14. Wavelet analysis used for spectral background removal in the determination of glucose from near-infrared single-beam spectra.

    PubMed

    Wan, Boyong; Small, Gary W

    2010-11-29

    Wavelet analysis is developed as a preprocessing tool for use in removing background information from near-infrared (near-IR) single-beam spectra before the construction of multivariate calibration models. Three data sets collected with three different near-IR spectrometers are investigated that involve the determination of physiological levels of glucose (1-30 mM) in a simulated biological matrix containing alanine, ascorbate, lactate, triacetin, and urea in phosphate buffer. A factorial design is employed to optimize the specific wavelet function used and the level of decomposition applied, in addition to the spectral range and number of latent variables associated with a partial least-squares calibration model. The prediction performance of the computed models is studied with separate data acquired after the collection of the calibration spectra. This evaluation includes one data set collected over a period of more than 6 months. Preprocessing with wavelet analysis is also compared to the calculation of second-derivative spectra. Over the three data sets evaluated, wavelet analysis is observed to produce better-performing calibration models, with improvements in concentration predictions on the order of 30% being realized relative to models based on either second-derivative spectra or spectra preprocessed with simple additive and multiplicative scaling correction. This methodology allows the construction of stable calibrations directly with single-beam spectra, thereby eliminating the need for the collection of a separate background or reference spectrum. Copyright © 2010 Elsevier B.V. All rights reserved.

  15. Simulation of the Quantity, Variability, and Timing of Streamflow in the Dennys River Basin, Maine, by Use of a Precipitation-Runoff Watershed Model

    USGS Publications Warehouse

    Dudley, Robert W.

    2008-01-01

    The U.S. Geological Survey (USGS), in cooperation with the Maine Department of Marine Resources Bureau of Sea Run Fisheries and Habitat, began a study in 2004 to characterize the quantity, variability, and timing of streamflow in the Dennys River. The study included a synoptic summary of historical streamflow data at a long-term streamflow gage, the collection of data from an additional four short-term streamflow gages, and the development and evaluation of a distributed-parameter watershed model for the Dennys River Basin. The watershed model used in this investigation was the USGS Precipitation-Runoff Modeling System (PRMS). The Geographic Information System (GIS) Weasel was used to delineate the Dennys River Basin and subbasins and derive parameters for their physical geographic features. Calibration of the models used in this investigation involved a four-step procedure in which model output was evaluated against four calibration data sets using computed objective functions for solar radiation, potential evapotranspiration, annual and seasonal water budgets, and daily streamflows. The calibration procedure involved thousands of model runs and was carried out using the USGS software application Luca (Let us calibrate). Luca uses the Shuffled Complex Evolution (SCE) global search algorithm to calibrate the model parameters. The SCE method reliably produces satisfactory solutions for large, complex optimization problems. The primary calibration effort went into the Dennys main stem watershed model. Calibrated parameter values obtained for the Dennys main stem model were transferred to the Cathance Stream model, and a similar four-step SCE calibration procedure was performed; this effort was undertaken to determine the potential to transfer modeling information to a nearby basin in the same region.
The calibrated Dennys main stem watershed model performed with Nash-Sutcliffe efficiency (NSE) statistic values for the calibration period and evaluation period of 0.79 and 0.76, respectively. The Cathance Stream model had an NSE value of 0.68. The Dennys River Basin models make use of limited streamflow-gaging station data and provide information to characterize subbasin hydrology. The calibrated PRMS watershed models of the Dennys River Basin provide simulated daily streamflow time series from October 1, 1985, through September 30, 2006, for nearly any location within the basin. These models enable natural-resources managers to characterize the timing and quantity of water moving through the basin to support many endeavors including geochemical calculations, water-use assessment, Atlantic salmon population dynamics and migration modeling, habitat modeling and assessment, and other resource-management scenario evaluations. Characterizing streamflow contributions from subbasins in the basin and the relative amounts of surface- and ground-water contributions to streamflow throughout the basin will lead to a better understanding of water quantity and quality in the basin. Improved water-resources information will support Atlantic salmon protection efforts.
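    The Nash-Sutcliffe efficiency reported for these models is a standard goodness-of-fit statistic for streamflow simulation. A minimal sketch with hypothetical daily flows:

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model
    performs no better than the mean of the observations."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# Hypothetical observed and simulated daily streamflows (m^3/s)
obs = [12.0, 15.0, 30.0, 22.0, 18.0]
sim = [11.0, 16.0, 27.0, 23.0, 17.5]
print(round(nash_sutcliffe(obs, sim), 2))
```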

  16. Risk scores for outcome in bacterial meningitis: Systematic review and external validation study.

    PubMed

    Bijlsma, Merijn W; Brouwer, Matthijs C; Bossuyt, Patrick M; Heymans, Martijn W; van der Ende, Arie; Tanck, Michael W T; van de Beek, Diederik

    2016-11-01

    To perform an external validation study of risk scores, identified through a systematic review, predicting outcome in community-acquired bacterial meningitis. MEDLINE and EMBASE were searched for articles published between January 1960 and August 2014. Performance was evaluated in 2108 episodes of adult community-acquired bacterial meningitis from two nationwide prospective cohort studies by the area under the receiver operating characteristic curve (AUC), the calibration curve, calibration slope or Hosmer-Lemeshow test, and the distribution of calculated risks. Nine risk scores were identified predicting death, neurological deficit or death, or unfavorable outcome at discharge in bacterial meningitis, pneumococcal meningitis and invasive meningococcal disease. Most studies had shortcomings in design, analyses, and reporting. Evaluation showed AUCs of 0.59 (0.57-0.61) and 0.74 (0.71-0.76) in bacterial meningitis, 0.67 (0.64-0.70) in pneumococcal meningitis, and 0.81 (0.73-0.90), 0.82 (0.74-0.91), 0.84 (0.75-0.93), 0.84 (0.76-0.93), 0.85 (0.75-0.95), and 0.90 (0.83-0.98) in meningococcal meningitis. Calibration curves showed adequate agreement between predicted and observed outcomes for four scores, but statistical tests indicated poor calibration of all risk scores. One score could be recommended for the interpretation and design of bacterial meningitis studies. None of the existing scores performed well enough to recommend routine use in individual patient management. Copyright © 2016 The British Infection Association. Published by Elsevier Ltd. All rights reserved.
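    The AUC used above to quantify discrimination can be computed directly as the probability that a randomly chosen positive case receives a higher predicted risk than a randomly chosen negative case (ties count half). A sketch with hypothetical predicted risks:

```python
def auc(labels, risks):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation:
    the fraction of positive/negative pairs ranked correctly."""
    pos = [r for l, r in zip(labels, risks) if l == 1]
    neg = [r for l, r in zip(labels, risks) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical outcomes (1 = unfavorable) and predicted risks
labels = [0, 0, 1, 0, 1, 1]
risks  = [0.10, 0.25, 0.40, 0.45, 0.80, 0.35]
print(round(auc(labels, risks), 2))
```

    Calibration, by contrast, compares predicted risk levels against observed outcome frequencies, which is why a score can discriminate well yet still be poorly calibrated, as the validation above found.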

  17. Review of technological advancements in calibration systems for laser vision correction

    NASA Astrophysics Data System (ADS)

    Arba-Mosquera, Samuel; Vinciguerra, Paolo; Verma, Shwetabh

    2018-02-01

    Using PubMed and our internal database, we extensively reviewed the literature on technological advancements in calibration systems, with the aim of presenting an account of the development history and the latest developments in calibration systems used in refractive surgery laser systems. As a second aim, we explored the clinical impact of the error introduced by roughness in ablation and its corresponding effect on system calibration. The inclusion criterion for this review was strict relevance to the clinical questions under research. The existing calibration methods, including various plastic models, are highly affected by various factors involved in refractive surgery, such as temperature, airflow, and hydration. Surface roughness plays an important role in accurate measurement of ablation performance on calibration materials. The ratio of ablation efficiency between the human cornea and the calibration material is very critical and highly dependent on the laser beam characteristics and test conditions. Objective evaluation of the calibration data and corresponding adjustment of the laser systems at regular intervals are essential for the continuing success and further improvements in outcomes of laser vision correction procedures.

  18. Calibration and assessment of electrochemical air quality sensors by co-location with regulatory-grade instruments

    NASA Astrophysics Data System (ADS)

    Hagan, David H.; Isaacman-VanWertz, Gabriel; Franklin, Jonathan P.; Wallace, Lisa M. M.; Kocar, Benjamin D.; Heald, Colette L.; Kroll, Jesse H.

    2018-01-01

    The use of low-cost air quality sensors for air pollution research has outpaced our understanding of their capabilities and limitations under real-world conditions, and there is thus a critical need for understanding and optimizing the performance of such sensors in the field. Here we describe the deployment, calibration, and evaluation of electrochemical sensors on the island of Hawai`i, which is an ideal test bed for characterizing such sensors due to its large and variable sulfur dioxide (SO2) levels and lack of other co-pollutants. Nine custom-built SO2 sensors were co-located with two Hawaii Department of Health Air Quality stations over the course of 5 months, enabling comparison of sensor output with regulatory-grade instruments under a range of realistic environmental conditions. Calibration using a nonparametric algorithm (k nearest neighbors) was found to have excellent performance (RMSE < 7 ppb, MAE < 4 ppb, r2 > 0.997) across a wide dynamic range in SO2 (< 1 ppb, > 2 ppm). However, since nonparametric algorithms generally cannot extrapolate to conditions outside the training set, we introduce a new hybrid linear-nonparametric algorithm, enabling accurate measurements even when pollutant levels are higher than encountered during calibration. We find no significant change in instrument sensitivity toward SO2 after 18 weeks and demonstrate that calibration accuracy remains high when a sensor is calibrated at one location and then moved to another. The performance of electrochemical SO2 sensors is also strong at lower SO2 mixing ratios (< 25 ppb), for which they exhibit an error of less than 2.5 ppb. While some specific results of this study (calibration accuracy, performance of the various algorithms, etc.)
may differ for measurements of other pollutant species in other areas (e.g., polluted urban regions), the calibration and validation approaches described here should be widely applicable to a range of pollutants, sensors, and environments.
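    The hybrid idea can be sketched in one dimension: use k-nearest-neighbours regression inside the range spanned by the calibration data, and fall back to a linear fit when the raw signal lies outside that range. All names and numbers below are hypothetical, not the authors' implementation.

```python
def knn_predict(x, train_x, train_y, k=3):
    """Average the targets of the k training points nearest to x."""
    nearest = sorted(zip(train_x, train_y), key=lambda p: abs(p[0] - x))[:k]
    return sum(y for _, y in nearest) / k

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def hybrid_predict(x, train_x, train_y, k=3):
    """kNN within the calibration range; linear extrapolation outside it."""
    if min(train_x) <= x <= max(train_x):
        return knn_predict(x, train_x, train_y, k)
    slope, intercept = linear_fit(train_x, train_y)
    return slope * x + intercept

# Hypothetical calibration pairs: raw sensor signal -> reference SO2 (ppb)
raw = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
ref = [5.0, 10.0, 15.0, 20.0, 25.0, 30.0]

print(hybrid_predict(1.2, raw, ref))  # inside the training range: kNN average
print(hybrid_predict(5.0, raw, ref))  # outside the range: linear extrapolation
```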

  19. Timing Calibration in PET Using a Time Alignment Probe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moses, William W.; Thompson, Christopher J.

    2006-05-05

    We evaluate the Scanwell Time Alignment Probe for performing the timing calibration for the LBNL Prostate-Specific PET Camera. We calibrate the time delay correction factors for each detector module in the camera using two methods: using the Time Alignment Probe (which measures the time difference between the probe and each detector module) and using the conventional method (which measures the timing difference between all module-module combinations in the camera). These correction factors, which are quantized in 2 ns steps, are compared on a module-by-module basis. The values are in excellent agreement: of the 80 correction factors, 62 agree exactly, 17 differ by 1 step, and 1 differs by 2 steps. We also measure on-time and off-time counting rates when the two sets of calibration factors are loaded into the camera and find that they agree within statistical error. We conclude that the performances of the Time Alignment Probe and conventional methods are equivalent.

  20. LANDSAT-4 Scientific Characterization: Early Results Symposium

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Radiometric calibration, geometric accuracy, spatial and spectral resolution, and image quality are examined for the thematic mapper and the multispectral band scanner on LANDSAT 4. Sensor performance is evaluated.

  1. Performance Testing of Tracer Gas and Tracer Aerosol Detectors for use in Radionuclide NESHAP Compliance Testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fuehne, David Patrick; Lattin, Rebecca Renee

    The Rad-NESHAP program, part of the Air Quality Compliance team of LANL’s Compliance Programs group (EPC-CP), and the Radiation Instrumentation & Calibration team, part of the Radiation Protection Services group (RP-SVS), frequently partner on issues relating to characterizing air flow streams. This memo documents the most recent example of this partnership, involving performance testing of sulfur hexafluoride detectors for use in stack gas mixing tests. Additionally, members of the Rad-NESHAP program performed a functional trending test on a pair of optical particle counters, comparing results from a non-calibrated instrument to a calibrated instrument. Prior to commissioning a new stack sampling system, the ANSI Standard for stack sampling requires that the stack sample location must meet several criteria, including uniformity of tracer gas and aerosol mixing in the air stream. For these mix tests, tracer media (sulfur hexafluoride gas or liquid oil aerosol particles) are injected into the stack air stream and the resulting air concentrations are measured across the plane of the stack at the proposed sampling location. The coefficient of variation of these media concentrations must be under 20% when evaluated over the central 2/3 area of the stack or duct. The instruments which measure these air concentrations must be tested prior to the stack tests in order to ensure their linear response to varying air concentrations of either tracer gas or tracer aerosol. The instruments used in tracer gas and aerosol mix testing cannot be calibrated by the LANL Standards and Calibration Laboratory, so they would normally be sent off-site for factory calibration by the vendor. Operational requirements can prevent formal factory calibration of some instruments after they have been used in hazardous settings, e.g., within a radiological facility with potential airborne contamination.
The performance tests described in this document are intended to demonstrate the reliable performance of the test instruments for the specific tests used in stack flow characterization.
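    The 20% uniformity criterion above is a simple coefficient-of-variation check over the traverse measurements. A sketch with hypothetical tracer-gas readings across the central 2/3 of the stack cross-section:

```python
import statistics

# Hypothetical tracer-gas concentrations (ppb) measured across the plane of
# the stack at the proposed sampling location.
concentrations = [101.0, 98.5, 103.2, 99.8, 100.4, 97.9]

# Uniform mixing requires a coefficient of variation under 20%.
cv_percent = 100.0 * statistics.stdev(concentrations) / statistics.mean(concentrations)
print(f"COV = {cv_percent:.1f}%  ->  {'PASS' if cv_percent < 20.0 else 'FAIL'}")
```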

  2. The New Sun-Sky-Lunar Cimel CE318-T Multiband Photometer - A Comprehensive Performance Evaluation

    NASA Technical Reports Server (NTRS)

    Barreto, Africa; Cuevas, Emilio; Granados-Munoz, Maria-Jose; Alados-Arboledas, Lucas; Romero, Pedro M.; Grobner, Julian; Kouremeti, Natalia; Almansa, Antonio F.; Stone, Tom; Toledano, Carlos; hide

    2016-01-01

    This paper presents the new photometer CE318-T, able to perform daytime and night-time photometric measurements using the sun and the moon as light sources. This new device therefore permits a complete diurnal cycle of aerosol and water vapour measurements, valuable for enhanced atmospheric monitoring. In this study we have found significantly higher precision of triplets when comparing the CE318-T master instrument and the Cimel AErosol RObotic NETwork (AERONET) master (CE318-AERONET) triplets, as a result of the new CE318-T tracking system. Regarding the instrument calibration, two new methodologies to transfer the calibration from a reference instrument using only daytime measurements (Sun Ratio and Sun-Moon gain factor techniques) are presented and discussed. These methods reduce the complexities previously inherent to nocturnal calibration. A quantitative estimation of CE318-T AOD uncertainty by means of error propagation theory during daytime revealed AOD uncertainties (u(sup D)(sub AOD)) for Langley-calibrated instruments similar to the expected values for other reference instruments (0.002-0.009). We have also found u(sup D)(sub AOD) values similar to the values reported in sun photometry for field instruments (approximately 0.015). In the case of the night-time period, the CE318-T-estimated standard combined uncertainty (u(sup N)(sub AOD)) is dependent not only on the calibration technique but also on illumination conditions and the instrumental noise. These values range from 0.011-0.018 for Lunar-Langley-calibrated instruments to 0.012-0.021 for instruments calibrated using the Sun Ratio technique. In the case of moon-calibrated instruments using the Sun-Moon gain factor method and sun-calibrated instruments using the Langley technique, we found u(sup N)(sub AOD) ranging from 0.016 to 0.017 (up to 0.019 in the 440 nm channel), not dependent on any lunar irradiance model.
    A subsequent performance evaluation including CE318-T and collocated measurements from independent reference instruments has served to assess the CE318-T performance as well as to confirm its estimated uncertainty. Daytime AOD evaluation, performed at Izana station from March to June 2014, encompassed measurements from a reference CE318-T, a CE318-AERONET master instrument, a Precision Filter Radiometer (PFR), and a Precision Spectroradiometer (PSR) prototype, reporting low AOD discrepancies between the four instruments (up to 0.006). The nocturnal AOD evaluation was performed using CE318-T- and star-photometer-collocated measurements and also by means of a day/night coherence transition test using the CE318-T master instrument and the CE318 daytime data from the CE318-AERONET master instrument. Results showed low discrepancies with the star photometer at the 870 and 500 nm channels (less than or equal to 0.013) and differences with AERONET daytime data (1 h after and before sunset and sunrise) in agreement with the estimated u(sup N)(sub AOD) values at all illumination conditions in the case of channels within the visible spectral range, and only for high lunar illumination conditions in the case of near-infrared channels. Precipitable water vapour (PWV) validation showed a good agreement between CE318-T and Global Navigation Satellite System (GNSS) PWV values for all illumination conditions, within the expected precision for sun photometry. Finally, two case studies have been included to highlight the ability of the new CE318-T to capture the diurnal cycle of aerosols and water vapour as well as short-term atmospheric variations, critical for climate studies.
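    The Langley calibration referred to above follows from Beer's law, V = V0 exp(-m tau), so a straight-line fit of ln(signal) against airmass m yields the extraterrestrial constant V0 (intercept) and the total optical depth tau (negative slope). A sketch on noiseless synthetic data:

```python
import math

# Synthetic Langley plot: true values used to generate the "measurements"
true_v0, true_tau = 1.00, 0.12
airmass = [1.0, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0]
signal = [true_v0 * math.exp(-true_tau * m) for m in airmass]

# Least-squares fit of ln(V) = ln(V0) - m * tau
n = len(airmass)
x, y = airmass, [math.log(v) for v in signal]
mx, my = sum(x) / n, sum(y) / n
slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
intercept = my - slope * mx

tau_est, v0_est = -slope, math.exp(intercept)
print(round(tau_est, 3), round(v0_est, 3))  # recovers tau and V0
```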

  3. The effect of rainfall measurement uncertainties on rainfall-runoff processes modelling.

    PubMed

    Stransky, D; Bares, V; Fatka, P

    2007-01-01

    Rainfall data are a crucial input for various tasks concerning the wet-weather period. Nevertheless, their measurement is affected by random and systematic errors that cause an underestimation of the rainfall volume. Therefore, the general objective of the presented work was to assess the credibility of measured rainfall data and to evaluate the effect of measurement errors on urban drainage modelling tasks. Within the project, the calibration methodology for the tipping bucket rain gauge (TBR) was defined and assessed in terms of uncertainty analysis. A set of 18 TBRs was calibrated and the results were compared to the previous calibration, which enabled the ageing of the TBRs to be evaluated. A propagation of calibration and other systematic errors through the rainfall-runoff model was performed on an experimental catchment. It was found that TBR calibration is important mainly for tasks connected with the assessment of peak values and high-flow durations. Omitting calibration leads to up to 30% underestimation, and the effect of other systematic errors can add a further 15%. TBR calibration should be done every two years in order to keep pace with the ageing of the TBR mechanics. Further, the authors recommend adjusting the dynamic test duration proportionally to the generated rainfall intensity.
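    Applying such a dynamic calibration typically means multiplying recorded intensities by an intensity-dependent correction factor, since tipping-bucket gauges under-register at high rain rates. A sketch with a hypothetical calibration curve and simple piecewise-linear interpolation:

```python
# Hypothetical dynamic-calibration curve for a tipping bucket rain gauge:
# correction factors grow with rain intensity to compensate under-registration.
calib_intensity = [10.0, 50.0, 100.0, 150.0]  # mm/h
calib_factor    = [1.00, 1.04, 1.10, 1.16]

def correction(intensity):
    """Piecewise-linear interpolation of the correction factor, clamped
    to the endpoints of the calibration curve."""
    if intensity <= calib_intensity[0]:
        return calib_factor[0]
    if intensity >= calib_intensity[-1]:
        return calib_factor[-1]
    for (x0, f0), (x1, f1) in zip(zip(calib_intensity, calib_factor),
                                  zip(calib_intensity[1:], calib_factor[1:])):
        if x0 <= intensity <= x1:
            return f0 + (f1 - f0) * (intensity - x0) / (x1 - x0)

print(correction(75.0))  # interpolated between the 50 and 100 mm/h factors
```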

  4. Evaluation and calibration of mobile phones for noise monitoring application.

    PubMed

    Ventura, Raphaël; Mallet, Vivien; Issarny, Valérie; Raverdy, Pierre-Guillaume; Rebhi, Fadwa

    2017-11-01

    The increasing number and quality of sensors integrated in mobile phones have paved the way for sensing schemes driven by city dwellers. The sensing quality can drastically depend on the mobile phone, and appropriate calibration strategies are needed. This paper evaluates the quality of noise measurements acquired by a variety of Android phones. The Ambiciti application was developed so as to acquire larger control over the acquisition process. Pink and narrowband noises were used to evaluate the phones' accuracy at levels ranging from background noise to 90 dB(A) inside the lab. Conclusions of this evaluation led to the proposition of a calibration strategy that has been embedded in Ambiciti and applied to more than 50 devices during public events. A performance analysis addressed the range, accuracy, precision, and reproducibility of measurements. After identification and removal of a bias, the measurement error standard deviation is below 1.2 dB(A) within a wide range of noise levels [45 to 75 dB(A)] for 12 out of 15 phones calibrated in the lab. In the context of citizen-driven noise sensing, in situ experiments were carried out, while additional tests helped to produce recommendations regarding the sensing context (grip, orientation, moving speed, mitigation, frictions, wind).
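    The bias-removal step above amounts to comparing each phone against a reference level meter, subtracting the mean offset, and checking the residual scatter against the 1.2 dB(A) figure. A sketch with hypothetical measurements:

```python
import statistics

# Hypothetical co-located sound levels in dB(A): reference meter vs. phone
reference = [48.0, 55.5, 61.0, 67.5, 72.0]
phone     = [51.2, 58.0, 64.5, 70.3, 75.6]

# Estimate the constant bias as the mean error, then check the scatter of
# the bias-corrected errors against the 1.2 dB(A) precision target.
errors = [p - r for p, r in zip(phone, reference)]
bias = statistics.mean(errors)
residual_sd = statistics.stdev(e - bias for e in errors)

print(f"bias = {bias:.2f} dB(A), residual SD = {residual_sd:.2f} dB(A)")
```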

  5. The SPAtial EFficiency metric (SPAEF): multiple-component evaluation of spatial patterns for optimization of hydrological models

    NASA Astrophysics Data System (ADS)

    Koch, Julian; Cüneyd Demirel, Mehmet; Stisen, Simon

    2018-05-01

    The process of model evaluation is not only an integral part of model development and calibration but also of paramount importance when communicating modelling results to the scientific community and stakeholders. The modelling community has a large and well-tested toolbox of metrics to evaluate temporal model performance. In contrast, spatial performance evaluation has not kept pace with the wide availability of spatial observations or with the sophisticated model codes simulating the spatial variability of complex hydrological processes. This study makes a contribution towards advancing spatial-pattern-oriented model calibration by rigorously testing a multiple-component performance metric. The promoted SPAtial EFficiency (SPAEF) metric reflects three equally weighted components: correlation, coefficient of variation, and histogram overlap. This multiple-component approach is found to be advantageous for the complex task of comparing spatial patterns. SPAEF, its three components individually, and two alternative spatial performance metrics, i.e. connectivity analysis and fractions skill score, are applied in a spatial-pattern-oriented model calibration of a catchment model in Denmark. Results suggest the importance of multiple-component metrics because stand-alone metrics tend to fail to provide holistic pattern information. The three SPAEF components are found to be independent, which allows them to complement each other in a meaningful way. In order to optimally exploit spatial observations made available by remote sensing platforms, this study suggests applying bias-insensitive metrics, which further allow for a comparison of variables that are related but may differ in unit. This study applies SPAEF in the hydrological context using the mesoscale Hydrologic Model (mHM; version 5.8), but we see great potential across disciplines related to spatially distributed earth system modelling.
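    The three components can be combined as in the SPAEF definition: alpha is the Pearson correlation, beta the ratio of the coefficients of variation, and gamma the overlap of z-score histograms, with SPAEF = 1 - sqrt((alpha-1)^2 + (beta-1)^2 + (gamma-1)^2). The sketch below follows that structure on hypothetical flattened grids; the bin count and data are illustrative, not the study's configuration.

```python
import math

def spaef(obs, sim, bins=10):
    """SPAtial EFficiency: 1.0 indicates a perfect spatial match."""
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    so = math.sqrt(sum((o - mo) ** 2 for o in obs) / n)
    ss = math.sqrt(sum((s - ms) ** 2 for s in sim) / n)
    alpha = sum((o - mo) * (s - ms) for o, s in zip(obs, sim)) / (n * so * ss)
    beta = (ss / ms) / (so / mo)            # ratio of coefficients of variation
    zo = [(o - mo) / so for o in obs]       # z-scores make units comparable
    zs = [(s - ms) / ss for s in sim]
    lo, hi = min(zo + zs), max(zo + zs)
    width = (hi - lo) / bins or 1.0
    def hist(z):
        h = [0] * bins
        for v in z:
            h[min(int((v - lo) / width), bins - 1)] += 1
        return h
    ho, hs = hist(zo), hist(zs)
    gamma = sum(min(a, b) for a, b in zip(ho, hs)) / sum(ho)
    return 1.0 - math.sqrt((alpha - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)

# Hypothetical spatial patterns (flattened grids)
obs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
sim = [1.1, 2.2, 2.9, 4.3, 4.8, 6.1]
print(round(spaef(obs, sim), 2))
```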

  6. Computer program determines performance efficiency of remote measuring systems

    NASA Technical Reports Server (NTRS)

    Merewether, E. K.

    1966-01-01

    Computer programs control and evaluate instrumentation system performance for numerous rocket engine test facilities and prescribe calibration and maintenance techniques to maintain the systems within process specifications. Similar programs can be written for other test equipment in an industry such as the petrochemical industry.

  7. The Preflight Calibration of the Thermal Infrared Sensor (TIRS) on the Landsat Data Continuity Mission

    NASA Technical Reports Server (NTRS)

    Smith, Ramsey; Reuter, Dennis; Irons, James; Lunsford, Allen; Montanero, Matthew; Tesfaye, Zelalem; Wenny, Brian; Thome, Kurtis

    2011-01-01

    The preflight calibration testing of TIRS evaluates the performance of the instrument at the component, subsystem, and system level. The overall objective is to provide an instrument that is well calibrated and well characterized, with specification-compliant data that will ensure the data continuity of Landsat from the previous missions to the LDCM. The TIRS flight build unit and the flight instrument were assessed through a series of calibration tests at NASA Goddard Space Flight Center. Instrument-level requirements played a strong role in defining the test equipment and procedures used for the calibration in the thermal/vacuum chamber. The calibration ground support equipment (CGSE), manufactured by MEI and ATK Corporation, was used to measure the optical, radiometric, and geometric characteristics of TIRS. The CGSE operates in three test configurations: GeoRad (geometric, radiometric, and spatial), flood source, and spectral. TIRS was evaluated through the following tests: bright target recovery, radiometry, spectral response, spatial shape, scatter, stray light, focus, and uniformity. Data were obtained for the instrument and various subsystems under conditions simulating those on orbit. In the spectral configuration, a monochromator system with a blackbody source is used for in-band and out-of-band relative spectral response characterization. In the flood source configuration, the entire focal plane array is illuminated simultaneously to investigate pixel-to-pixel uniformity and dead or inoperable pixels. The remaining tests were executed in the GeoRad configuration and use a NIST-calibrated cavity blackbody source. The NIST calibration is transferred to the TIRS sensor and to the blackbody source on board TIRS. The onboard calibrator will be the primary calibration source for the TIRS sensor on orbit.

  8. MODIS On-Board Blackbody Function and Performance

    NASA Technical Reports Server (NTRS)

    Xiaoxiong, Xiong; Wenny, Brian N.; Wu, Aisheng; Barnes, William

    2009-01-01

    Two MODIS instruments are currently in orbit, making continuous global observations in visible to long-wave infrared wavelengths. Compared to heritage sensors, MODIS was built with an advanced set of on-board calibrators, providing sensor radiometric, spectral, and spatial calibration and characterization during on-orbit operation. For the thermal emissive bands (TEB) with wavelengths from 3.7 μm to 14.4 μm, a v-grooved blackbody (BB) is used as the primary calibration source. The BB temperature is accurately measured each scan (1.47 s) using a set of 12 temperature sensors traceable to NIST temperature standards. The onboard BB is nominally operated at a fixed temperature, 290K for Terra MODIS and 285K for Aqua MODIS, to compute the TEB linear calibration coefficients. Periodically, its temperature is varied from 270K (instrument ambient) to 315K in order to evaluate and update the nonlinear calibration coefficients. This paper describes MODIS on-board BB functions with emphasis on on-orbit operation and performance. It examines the BB temperature uncertainties under different operational conditions and their impact on TEB calibration and data product quality. The temperature uniformity of the BB is also evaluated using TEB detector responses at different operating temperatures. On-orbit results demonstrate excellent short-term and long-term stability for both the Terra and Aqua MODIS on-board BB. The on-orbit BB temperature uncertainty is estimated to be 10mK for Terra MODIS at 290K and 5mK for Aqua MODIS at 285K, thus meeting the TEB design specifications. In addition, there has been no measurable BB temperature drift over the entire mission of both Terra and Aqua MODIS.
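
    The calibration scheme described above can be sketched numerically: a blackbody temperature sweep yields (radiance, counts) pairs, a quadratic fit recovers the offset, gain and nonlinear term, and a new count value is inverted back to radiance. Everything below (band wavelength, coefficient values, function names) is an illustrative assumption, not the MODIS processing code.

```python
import numpy as np

H = 6.626e-34   # Planck constant (J s)
C = 2.998e8     # speed of light (m/s)
K = 1.381e-23   # Boltzmann constant (J/K)

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance of an ideal blackbody (W m^-3 sr^-1)."""
    a = 2.0 * H * C**2 / wavelength_m**5
    return a / np.expm1(H * C / (wavelength_m * K * temp_k))

# Simulated sweep of the on-board blackbody from 270 K to 315 K
temps = np.linspace(270.0, 315.0, 10)
radiance = planck_radiance(11e-6, temps)  # an 11 um emissive band

# Pretend detector response: counts = c0 + c1*L + c2*L^2 (c2: nonlinearity)
true_c = (120.0, 3.0e-6, 2.0e-14)
counts = true_c[0] + true_c[1] * radiance + true_c[2] * radiance**2

# Least-squares fit of the quadratic calibration coefficients; the
# radiance axis is scaled to keep the regression well conditioned
S = 1e7
x = radiance / S
A = np.vstack([np.ones_like(x), x, x**2]).T
d0, d1, d2 = np.linalg.lstsq(A, counts, rcond=None)[0]

def counts_to_radiance(dn):
    """Invert the fitted quadratic calibration for a measured count."""
    disc = d1**2 - 4.0 * d2 * (d0 - dn)
    return S * (-d1 + np.sqrt(disc)) / (2.0 * d2)

recovered = counts_to_radiance(counts[5])
```

    In operation, the fixed-temperature BB views refresh only the linear part of such a fit each scan, while the periodic sweep re-estimates the nonlinear term.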

  9. Chapter 15: Commercial New Construction Evaluation Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurnik, Charles W.; Keates, Steven

    This protocol is intended to describe the recommended method when evaluating the whole-building performance of new construction projects in the commercial sector. The protocol focuses on energy conservation measures (ECMs) or packages of measures where evaluators can analyze impacts using building simulation. These ECMs typically require the use of calibrated building simulations under Option D of the International Performance Measurement and Verification Protocol (IPMVP).

  10. Calibration transfer of a Raman spectroscopic quantification method for the assessment of liquid detergent compositions from at-line laboratory to in-line industrial scale.

    PubMed

    Brouckaert, D; Uyttersprot, J-S; Broeckx, W; De Beer, T

    2018-03-01

    Calibration transfer or standardisation aims at creating a uniform spectral response on different spectroscopic instruments or under varying conditions, without requiring a full recalibration for each situation. In the current study, this strategy is applied to construct at-line multivariate calibration models and subsequently employ them in-line in a continuous industrial production line, using the same spectrometer. Firstly, quantitative multivariate models are constructed at-line at laboratory scale for predicting the concentration of two main ingredients in hard surface cleaners. By regressing the Raman spectra of a set of small-scale calibration samples against their reference concentration values, partial least squares (PLS) models are developed to quantify the surfactant levels in the liquid detergent compositions under investigation. After evaluating the models' performance with a set of independent validation samples, a univariate slope/bias correction is applied in order to transfer these at-line calibration models to an in-line manufacturing set-up. This standardisation technique allows a fast and easy transfer of the PLS regression models, by simply correcting the model predictions on the in-line set-up, without adjusting anything in the original multivariate calibration models. An extensive statistical analysis is performed in order to assess the predictive quality of the transferred regression models. Before and after transfer, the R² and RMSEP of both models are compared to evaluate whether their magnitudes are similar. T-tests are then performed to investigate whether the slope and intercept of the transferred regression line differ statistically from 1 and 0, respectively. Furthermore, it is checked that no significant bias is present. F-tests are executed as well, to assess the linearity of the transfer regression line and to investigate the statistical coincidence of the transfer and validation regression lines.
Finally, a paired t-test is performed to compare the original at-line model to the slope/bias corrected in-line model, using interval hypotheses. It is shown that the calibration models of Surfactant 1 and Surfactant 2 yield satisfactory in-line predictions after slope/bias correction. While Surfactant 1 passes seven out of eight statistical tests, the recommended validation parameters are 100% successful for Surfactant 2. It is hence concluded that the proposed strategy for transferring at-line calibration models to an in-line industrial environment via a univariate slope/bias correction of the predicted values offers a successful standardisation approach. Copyright © 2017 Elsevier B.V. All rights reserved.
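
    The univariate slope/bias correction at the heart of this transfer is simple enough to sketch: the original model's predictions on a set of transfer samples are regressed against the reference values, and the fitted slope and bias then correct all subsequent in-line predictions. The numbers and variable names below are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reference surfactant concentrations of transfer-set samples (% w/w)
y_ref = np.linspace(2.0, 10.0, 12)
# Predictions of the at-line PLS model applied in-line: systematically
# biased (slope != 1, offset != 0) plus measurement noise
y_pred = 0.9 * y_ref + 0.8 + rng.normal(0.0, 0.05, y_ref.size)

# Fit y_ref = slope * y_pred + bias by ordinary least squares
slope, bias = np.polyfit(y_pred, y_ref, 1)

def transfer_correct(pred):
    """Apply the univariate slope/bias correction to raw predictions."""
    return slope * np.asarray(pred) + bias

rmsep_before = np.sqrt(np.mean((y_pred - y_ref) ** 2))
rmsep_after = np.sqrt(np.mean((transfer_correct(y_pred) - y_ref) ** 2))
```

    The appeal of the approach is that only two scalars travel to the new set-up; the multivariate model itself is never refitted.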

  11. Microscope self-calibration based on micro laser line imaging and soft computing algorithms

    NASA Astrophysics Data System (ADS)

    Apolinar Muñoz Rodríguez, J.

    2018-06-01

    A technique to perform microscope self-calibration via a micro laser line and soft computing algorithms is presented. In this technique, the microscope vision parameters are computed by means of soft computing algorithms based on laser line projection. To implement the self-calibration, a microscope vision system is constructed by means of a CCD camera and a 38 μm laser line. From this arrangement, the microscope vision parameters are represented via Bezier approximation networks, which are accomplished through the laser line position. In this procedure, a genetic algorithm determines the microscope vision parameters by means of laser line imaging. Also, the approximation networks compute the three-dimensional vision by means of the laser line position. Additionally, the soft computing algorithms re-calibrate the vision parameters when the microscope vision system is modified during the vision task. The proposed self-calibration improves the accuracy of the traditional microscope calibration, which is accomplished via external references to the microscope system. The capability of the self-calibration based on soft computing algorithms is determined by means of the calibration accuracy and the micro-scale measurement error. This contribution is corroborated by an evaluation against the accuracy of the traditional microscope calibration.

  12. Calibrated Noise Measurements with Induced Receiver Gain Fluctuations

    NASA Technical Reports Server (NTRS)

    Racette, Paul; Walker, David; Gu, Dazhen; Rajola, Marco; Spevacek, Ashly

    2011-01-01

    The lack of well-developed techniques for modeling changing statistical moments in our observations has stymied the application of stochastic process theory in science and engineering. These limitations were encountered when modeling the performance of radiometer calibration architectures and algorithms in the presence of non-stationary receiver fluctuations. Analyses of measured signals have traditionally been limited to a single measurement series, whereas in a radiometer that samples a set of noise references, the data collection can be treated as an ensemble set of measurements of the receiver state. Noise Assisted Data Analysis (NADA) is a growing field of study with significant potential for aiding the understanding and modeling of non-stationary processes. Typically, NADA entails adding noise to a signal to produce an ensemble set on which statistical analysis is performed. Alternatively, as in radiometric measurements, mixing a signal with calibrated noise provides, through the calibration process, the means to detect deviations from the stationarity assumption and thereby a measurement tool to characterize the signal's non-stationary properties. Data sets comprised of calibrated noise measurements have been limited to those collected with naturally occurring fluctuations in the radiometer receiver. To examine the application of NADA using calibrated noise, a Receiver Gain Modulation Circuit (RGMC) was designed and built to modulate the gain of a radiometer receiver using an external signal. In 2010, an RGMC was installed and operated at the National Institute of Standards and Technology (NIST) using their Noise Figure Radiometer (NFRad) and national standard noise references. The data collected are the first known set of calibrated noise measurements from a receiver with an externally modulated gain. 
As an initial step, sinusoidal and step-function signals were used to modulate the receiver gain, to evaluate the circuit characteristics and to study the performance of a variety of calibration algorithms. The receiver noise temperature and time-bandwidth product of the NFRad are calculated from the data. Statistical analysis using temporal-dependent calibration algorithms reveals that the naturally occurring fluctuations in the receiver are stationary over long intervals (hundreds of seconds); however, the receiver exhibits local non-stationarity over the interval during which one set of reference measurements is collected. A variety of calibration algorithms have been applied to the data to assess the algorithms' performance with the gain fluctuation signals. This presentation will describe the RGMC, the experiment design and a comparative analysis of calibration algorithms.

  13. In-Flight Calibration Processes for the MMS Fluxgate Magnetometers

    NASA Astrophysics Data System (ADS)

    Bromund, K. R.; Leinweber, H. K.; Plaschke, F.; Strangeway, R. J.; Magnes, W.; Fischer, D.; Nakamura, R.; Anderson, B. J.; Russell, C. T.; Baumjohann, W.; Chutter, M.; Torbert, R. B.; Le, G.; Slavin, J. A.; Kepko, L.

    2015-12-01

    The calibration effort for the Magnetospheric Multiscale Mission (MMS) Analog Fluxgate (AFG) and Digital Fluxgate (DFG) magnetometers is a coordinated effort between three primary institutions: University of California, Los Angeles (UCLA); Space Research Institute, Graz, Austria (IWF); and Goddard Space Flight Center (GSFC). Since the successful deployment of all 8 magnetometers on 17 March 2015, the effort to confirm and update the ground calibrations has been underway during the MMS commissioning phase. The in-flight calibration processes evaluate twelve parameters that determine the alignment, orthogonalization, offsets, and gains for all 8 magnetometers using algorithms originally developed by UCLA and the Technical University of Braunschweig and tailored to MMS by IWF, UCLA, and GSFC. We focus on the processes run at GSFC to determine the eight parameters associated with spin tones and harmonics. We will also discuss the processing flow and interchange of parameters between GSFC, IWF, and UCLA. IWF determines the low range spin axis offsets using the Electron Drift Instrument (EDI). UCLA determines the absolute gains and sensor azimuth orientation using Earth field comparisons. We evaluate the performance achieved for MMS and give examples of the quality of the resulting calibrations.

  14. Automated Gravimetric Calibration to Optimize the Accuracy and Precision of TECAN Freedom EVO Liquid Handler

    PubMed Central

    Bessemans, Laurent; Jully, Vanessa; de Raikem, Caroline; Albanese, Mathieu; Moniotte, Nicolas; Silversmet, Pascal; Lemoine, Dominique

    2016-01-01

    High-throughput screening technologies are increasingly integrated into the formulation development process of biopharmaceuticals. The performance of liquid handling systems depends on the ability to deliver accurate and precise volumes of specific reagents to ensure process quality. We have developed an automated gravimetric calibration procedure to adjust the accuracy and evaluate the precision of the TECAN Freedom EVO liquid handling system. Volumes from 3 to 900 µL using calibrated syringes and fixed tips were evaluated with various solutions, including aluminum hydroxide and phosphate adjuvants, β-casein, sucrose, sodium chloride, and phosphate-buffered saline. The methodology to set up liquid class pipetting parameters for each solution was to split the process into three steps: (1) screening of predefined liquid classes, including different pipetting parameters; (2) adjustment of accuracy parameters based on a calibration curve; and (3) confirmation of the adjustment. The running of appropriate pipetting scripts, data acquisition, and reporting, up to the creation of a new liquid class in EVOware, were fully automated. The calibration and confirmation of the robotic system were simple, efficient, and precise and could accelerate data acquisition for a wide range of biopharmaceutical applications. PMID:26905719
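
    The gravimetric core of such a procedure is straightforward: replicate weighings of a nominally dispensed volume of water are converted to volumes, accuracy is the relative deviation of the mean from target, precision is the coefficient of variation, and a scale factor feeds the calibration-curve adjustment step. The masses and names below are invented for illustration and are not TECAN EVOware data.

```python
import numpy as np

DENSITY_WATER = 0.9982  # g/mL at approximately 20 degC

target_ul = 100.0
# Six replicate weighings of the dispensed aliquot (grams)
masses_g = np.array([0.0965, 0.0972, 0.0968, 0.0970, 0.0966, 0.0971])

volumes_ul = masses_g / DENSITY_WATER * 1000.0
mean_v = volumes_ul.mean()

# Accuracy: systematic error of the mean relative to the target volume
accuracy_pct = (mean_v - target_ul) / target_ul * 100.0
# Precision: coefficient of variation of the replicates
precision_cv = volumes_ul.std(ddof=1) / mean_v * 100.0

# Adjustment step: scale the commanded volume so the mean lands on target
correction = target_ul / mean_v
adjusted_command_ul = target_ul * correction
```

    A confirmation run with the adjusted command would then repeat the same weighing loop to verify that the residual accuracy error falls within specification.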

  15. More accurate, calibrated bootstrap confidence intervals for correlating two autocorrelated climate time series

    NASA Astrophysics Data System (ADS)

    Olafsdottir, Kristin B.; Mudelsee, Manfred

    2013-04-01

    Estimation of Pearson's correlation coefficient between two time series, to evaluate the influence of one time-dependent variable on another, is one of the most frequently used statistical methods in climate science. Various methods are used to estimate a confidence interval to support the correlation point estimate. Many of them make strong mathematical assumptions regarding distributional shape and serial correlation, which are rarely met. More robust statistical methods are needed to increase the accuracy of the confidence intervals. Bootstrap confidence intervals are estimated in the Fortran 90 program PearsonT (Mudelsee, 2003), whose main intention was to obtain an accurate confidence interval for the correlation coefficient between two time series by taking into account the serial dependence of the process that generated the data. However, Monte Carlo experiments show that the coverage accuracy for smaller data sizes can be improved. Here we adapt the PearsonT program into a new version, called PearsonT3, by calibrating the confidence interval to increase the coverage accuracy. Calibration is a bootstrap resampling technique that essentially performs a second bootstrap loop, resampling from the bootstrap resamples. Like the non-calibrated bootstrap confidence intervals, it offers robustness against the data distribution. A pairwise moving block bootstrap is used to preserve the serial correlation of both time series. The calibration is applied to standard-error-based bootstrap Student's t confidence intervals. The performance of the calibrated confidence intervals is examined with Monte Carlo simulations and compared with that of confidence intervals without calibration, that is, PearsonT. The coverage accuracy is evidently better for the calibrated confidence intervals, where the coverage error is acceptably small (i.e., within a few percentage points) already for data sizes as small as 20. 
One form of climate time series is output from numerical models that simulate the climate system. The method is applied to model data from the high-resolution ocean model INALT01, where the relationship between the Agulhas Leakage and the North Brazil Current is evaluated. Preliminary results show significant correlation between the two variables at a 10-year lag, which is roughly the time it takes the Agulhas Leakage water to reach the North Brazil Current. Mudelsee, M., 2003. Estimating Pearson's correlation coefficient with bootstrap confidence interval from serially dependent time series. Mathematical Geology 35, 651-665.
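
    The pairwise moving block bootstrap step can be sketched compactly: blocks of consecutive (x, y) pairs are resampled together so each series' autocorrelation is preserved within blocks, and an interval is formed from the resampled correlations. This is a simplified illustration, not the PearsonT3 Fortran code; it uses a percentile interval where the paper calibrates a Student's t interval with a second bootstrap loop, and all tuning values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def block_bootstrap_ci(x, y, block_len=5, n_boot=2000, alpha=0.05):
    """Pairwise moving block bootstrap CI for Pearson's correlation."""
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    r_boot = np.empty(n_boot)
    for b in range(n_boot):
        # Random block start positions; blocks are concatenated, trimmed to n
        starts = rng.integers(0, n - block_len + 1, n_blocks)
        idx = (starts[:, None] + np.arange(block_len)).ravel()[:n]
        r_boot[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    r_hat = np.corrcoef(x, y)[0, 1]
    lo, hi = np.percentile(r_boot, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return r_hat, lo, hi

# Two AR(1) series driven by partially shared innovations
n = 200
shared = rng.normal(size=n)
e1 = 0.6 * shared + 0.8 * rng.normal(size=n)
e2 = 0.6 * shared + 0.8 * rng.normal(size=n)
x = np.zeros(n)
y = np.zeros(n)
for i in range(1, n):
    x[i] = 0.5 * x[i - 1] + e1[i]
    y[i] = 0.5 * y[i - 1] + e2[i]

r_hat, lo, hi = block_bootstrap_ci(x, y)
```

    Calibration would wrap the function above in an outer loop that resamples from each bootstrap resample to adjust the nominal coverage level.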

  16. Combining satellite data and appropriate objective functions for improved spatial pattern performance of a distributed hydrologic model

    NASA Astrophysics Data System (ADS)

    Demirel, Mehmet C.; Mai, Juliane; Mendiguren, Gorka; Koch, Julian; Samaniego, Luis; Stisen, Simon

    2018-02-01

    Satellite-based earth observations offer great opportunities to improve spatial model predictions by means of spatial-pattern-oriented model evaluations. In this study, observed spatial patterns of actual evapotranspiration (AET) are utilised for spatial model calibration tailored to target the pattern performance of the model. The proposed calibration framework combines temporally aggregated observed spatial patterns with a new spatial performance metric and a flexible spatial parameterisation scheme. The mesoscale hydrologic model (mHM) is used to simulate streamflow and AET and has been selected due to its soil parameter distribution approach based on pedo-transfer functions and its built-in multi-scale parameter regionalisation. In addition, two new spatial parameter distribution options have been incorporated in the model in order to increase the flexibility of the root fraction coefficient and potential evapotranspiration correction parameterisations, based on soil type and vegetation density. These parameterisations are utilised as they are most relevant for the simulated AET patterns from the hydrologic model. Due to the fundamental challenges encountered when evaluating spatial pattern performance using standard metrics, we developed a simple but highly discriminative spatial metric, comprising three easily interpretable components measuring co-location, variation and distribution of the spatial data. The study shows that with flexible spatial model parameterisation used in combination with the appropriate objective functions, the simulated spatial patterns of actual evapotranspiration become substantially more similar to the satellite-based estimates. Overall, 26 parameters are identified for calibration through a sequential screening approach based on a combination of streamflow and spatial pattern metrics. 
The robustness of the calibrations is tested using an ensemble of nine calibrations based on different seed numbers using the shuffled complex evolution optimiser. The calibration results reveal a limited trade-off between streamflow dynamics and spatial patterns, illustrating the benefit of combining separate observation types and objective functions. At the same time, the simulated spatial patterns of AET improved significantly when an objective function based on observed AET patterns and a novel spatial performance metric was included, compared to traditional streamflow-only calibration. Since the overall water balance is usually a crucial goal in hydrologic modelling, spatial-pattern-oriented optimisation should always be accompanied by traditional discharge measurements. In such a multi-objective framework, the current study promotes the use of a novel bias-insensitive spatial pattern metric, which exploits the key information contained in the observed patterns while allowing the water balance to be informed by discharge observations.
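
    A bias-insensitive metric with co-location, variation and distribution components can be illustrated as follows. This is one plausible parameterisation (correlation, ratio of coefficients of variation, and histogram overlap of z-scored fields, combined by Euclidean distance from the ideal point); the paper's exact formulation may differ, and the test fields are synthetic.

```python
import numpy as np

def spatial_pattern_metric(obs, sim, bins=20):
    """Composite pattern score: 1 is perfect, lower means less similar."""
    obs = np.asarray(obs).ravel()
    sim = np.asarray(sim).ravel()
    # Co-location: Pearson correlation of the two fields
    alpha = np.corrcoef(obs, sim)[0, 1]
    # Variation: ratio of coefficients of variation (bias-insensitive)
    beta = (np.std(sim) / np.mean(sim)) / (np.std(obs) / np.mean(obs))
    # Distribution: histogram overlap of z-scored fields
    zo = (obs - obs.mean()) / obs.std()
    zs = (sim - sim.mean()) / sim.std()
    lo, hi = min(zo.min(), zs.min()), max(zo.max(), zs.max())
    ho, _ = np.histogram(zo, bins=bins, range=(lo, hi))
    hs, _ = np.histogram(zs, bins=bins, range=(lo, hi))
    gamma = np.minimum(ho, hs).sum() / ho.sum()
    return 1.0 - np.sqrt((alpha - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)

rng = np.random.default_rng(1)
aet_obs = rng.gamma(4.0, 0.5, size=(40, 40))               # synthetic AET field
aet_good = aet_obs + rng.normal(0, 0.05, aet_obs.shape)    # close simulation
aet_poor = rng.permutation(aet_obs.ravel()).reshape(40, 40)  # scrambled field

m_good = spatial_pattern_metric(aet_obs, aet_good)
m_poor = spatial_pattern_metric(aet_obs, aet_poor)
```

    Note that the scrambled field keeps the observed mean, variance and histogram, so only the co-location term penalises it; this is exactly the discrimination a pattern-oriented metric needs.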

  17. Evaluation of a numerical model's ability to predict bed load transport observed in braided river experiments

    NASA Astrophysics Data System (ADS)

    Javernick, Luke; Redolfi, Marco; Bertoldi, Walter

    2018-05-01

    New data collection techniques offer numerical modelers the ability to gather and utilize high quality data sets with high spatial and temporal resolution. Such data sets are currently needed for calibration, verification, and to fuel future model development, particularly morphological simulations. This study explores the use of high quality spatial and temporal data sets of observed bed load transport in braided river flume experiments to evaluate the ability of a two-dimensional model, Delft3D, to predict bed load transport. This study uses a fixed bed model configuration and examines the model's shear stress calculations, which are the foundation for predicting the sediment fluxes necessary for morphological simulations. The evaluation is conducted for three flow rates, and the model setup used highly accurate Structure-from-Motion (SfM) topography and discharge boundary conditions. The model was hydraulically calibrated using bed roughness, and performance was evaluated based on depth and inundation agreement. Model bed load performance was evaluated in terms of critical shear stress exceedance area compared to maps of observed bed mobility in a flume. Following the standard hydraulic calibration, bed load performance was tested for sensitivity to horizontal eddy viscosity parameterization and bed morphology updating. Simulations produced depth errors equal to the SfM inherent errors, inundation agreement of 77-85%, and critical shear stress exceedance in agreement with 49-68% of the observed active area. This study provides insight into the ability of physically based, two-dimensional simulations to accurately predict bed load as well as the effects of horizontal eddy viscosity and bed updating. Further, this study highlights how using high spatial and temporal data to capture the physical processes at work during flume experiments can help to improve morphological modeling.
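
    The evaluation step itself reduces to a map comparison: cells where modelled shear stress exceeds a critical value are intersected with an observed mobility map, and agreement is reported as the fraction of observed-active area the model also flags. The threshold, field sizes and disagreement rate below are illustrative assumptions, not the study's configuration.

```python
import numpy as np

rng = np.random.default_rng(7)

tau = rng.gamma(2.0, 0.5, size=(50, 50))  # modelled bed shear stress (Pa)
tau_crit = 1.5                            # assumed critical shear stress (Pa)

predicted_active = tau > tau_crit

# Synthetic observed mobility map: the predicted pattern with 20% of
# cells flipped to mimic model-observation disagreement
flip = rng.random(tau.shape) < 0.2
observed_active = np.where(flip, ~predicted_active, predicted_active)

# Fraction of the observed active area that the model also predicts active
agreement = (predicted_active & observed_active).sum() / observed_active.sum()
```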

  18. Nonlinear bias analysis and correction of microwave temperature sounder observations for FY-3C meteorological satellite

    NASA Astrophysics Data System (ADS)

    Hu, Taiyang; Lv, Rongchuan; Jin, Xu; Li, Hao; Chen, Wenxin

    2018-01-01

    The nonlinear bias analysis and correction of receiving channels in the Chinese FY-3C meteorological satellite Microwave Temperature Sounder (MWTS) is a key technology of data assimilation for satellite radiance data. The thermal-vacuum chamber calibration data acquired from the MWTS can be analyzed to evaluate the instrument performance, including radiometric temperature sensitivity, channel nonlinearity and calibration accuracy. In particular, the nonlinearity parameters due to imperfect square-law detectors are calculated from the calibration data and further used to correct the nonlinear bias contributions of the microwave receiving channels. Based upon the operational principles and thermal-vacuum chamber calibration procedures of the MWTS, this paper mainly focuses on the nonlinear bias analysis and correction methods for improving the calibration accuracy of this important instrument onboard the FY-3C meteorological satellite, from the perspective of theoretical and experimental studies. Furthermore, a series of original results are presented to demonstrate the feasibility and significance of the methods.

  19. Development of dynamic calibration methods for POGO pressure transducers. [for space shuttle

    NASA Technical Reports Server (NTRS)

    Hilten, J. S.; Lederer, P. S.; Vezzetti, C. F.; Mayo-Wells, J. F.

    1976-01-01

    Two dynamic pressure sources are described for the calibration of pogo pressure transducers used to measure oscillatory pressures generated in the propulsion system of the space shuttle. Rotation of a mercury-filled tube in a vertical plane at frequencies below 5 Hz generates sinusoidal pressures up to 48 kPa, peak-to-peak; vibrating the same mercury-filled tube sinusoidally in the vertical plane extends the frequency response from 5 Hz to 100 Hz at pressures up to 140 kPa, peak-to-peak. The sinusoidal pressure fluctuations can be generated by both methods in the presence of high pressures (bias) up to 55 MPa. Calibration procedures are given in detail for the use of both sources. The dynamic performance of selected transducers was evaluated using these procedures; the results of these calibrations are presented. Calibrations made with the two sources near 5 Hz agree to within 3% of each other.

  20. Estimating Plasma Glucose from Interstitial Glucose: The Issue of Calibration Algorithms in Commercial Continuous Glucose Monitoring Devices

    PubMed Central

    Rossetti, Paolo; Bondia, Jorge; Vehí, Josep; Fanelli, Carmine G.

    2010-01-01

    Evaluation of metabolic control of diabetic people has been classically performed by measuring glucose concentrations in blood samples. Due to the potential improvement it offers in diabetes care, continuous glucose monitoring (CGM) in the subcutaneous tissue is gaining popularity among both patients and physicians. However, devices for CGM measure glucose concentration in compartments other than blood, usually the interstitial space. This means that CGM devices need calibration against blood glucose values, and the accuracy of the estimation of blood glucose will also depend on the calibration algorithm. The complexity of the relationship between glucose dynamics in blood and the interstitial space contrasts with the simplistic approach of the calibration algorithms currently implemented in commercial CGM devices, translating into suboptimal accuracy. The present review analyzes the issue of calibration algorithms for CGM, focusing exclusively on commercially available glucose sensors. PMID:22163505

  1. Stability analysis for a multi-camera photogrammetric system.

    PubMed

    Habib, Ayman; Detchev, Ivan; Kwak, Eunju

    2014-08-18

    Consumer-grade digital cameras suffer from geometrical instability that may cause problems when used in photogrammetric applications. This paper provides a comprehensive review of this issue of interior orientation parameter variation over time; it explains the common ways of coping with the issue and describes the existing methods for performing stability analysis for a single camera. The paper then points out the lack of coverage of stability analysis for multi-camera systems, suggests a modification of the collinearity model to be used for the calibration of an entire photogrammetric system, and proposes three methods for system stability analysis. The proposed methods explore the impact of the changes in interior orientation and relative orientation/mounting parameters on the reconstruction process. Rather than relying on ground truth in real datasets to check the system calibration stability, the proposed methods are simulation-based. Experiment results are shown, where a multi-camera photogrammetric system was calibrated three times, and stability analysis was performed on the system calibration parameters from the three sessions. The proposed simulation-based methods provided results that were compatible with a real-data-based approach for evaluating the impact of changes in the system calibration parameters on the three-dimensional reconstruction.

  2. Stability Analysis for a Multi-Camera Photogrammetric System

    PubMed Central

    Habib, Ayman; Detchev, Ivan; Kwak, Eunju

    2014-01-01

    Consumer-grade digital cameras suffer from geometrical instability that may cause problems when used in photogrammetric applications. This paper provides a comprehensive review of this issue of interior orientation parameter variation over time; it explains the common ways of coping with the issue and describes the existing methods for performing stability analysis for a single camera. The paper then points out the lack of coverage of stability analysis for multi-camera systems, suggests a modification of the collinearity model to be used for the calibration of an entire photogrammetric system, and proposes three methods for system stability analysis. The proposed methods explore the impact of the changes in interior orientation and relative orientation/mounting parameters on the reconstruction process. Rather than relying on ground truth in real datasets to check the system calibration stability, the proposed methods are simulation-based. Experiment results are shown, where a multi-camera photogrammetric system was calibrated three times, and stability analysis was performed on the system calibration parameters from the three sessions. The proposed simulation-based methods provided results that were compatible with a real-data-based approach for evaluating the impact of changes in the system calibration parameters on the three-dimensional reconstruction. PMID:25196012

  3. A Modern Series Of Cinematographic Lenses: From Concept To Product

    NASA Astrophysics Data System (ADS)

    Neil, lain A.

    1988-06-01

    In the past, photographic "taking" lenses, and in particular those for the motion picture industry (i.e., cinematographic lenses), have had a mixed career due to inconsistencies between the processes of lens design, manufacture, testing and calibration, and practical assessment in the customer domain. Usually these inconsistencies can be attributed to differences between a lens design "scientifically" made and its final evaluation in an "artistic" manner. The following paper addresses the processes of lens design, manufacture, testing and calibration using a combination of acquired practical experience and modern test and calibration methods. Various performance aspects are separately addressed and considered in terms of different means of measurement.

  4. Low Frequency Error Analysis and Calibration for High-Resolution Optical Satellite's Uncontrolled Geometric Positioning

    NASA Astrophysics Data System (ADS)

    Wang, Mi; Fang, Chengcheng; Yang, Bo; Cheng, Yufeng

    2016-06-01

    The low frequency error is a key factor affecting the uncontrolled geometry processing accuracy of high-resolution optical images. To guarantee the geometric quality of imagery, this paper presents an on-orbit calibration method for the low frequency error based on a geometric calibration field. Firstly, we introduce the overall flow of low frequency error on-orbit analysis and calibration, which includes optical axis angle variation detection for the star sensors, relative calibration among star sensors, multi-star sensor information fusion, and low frequency error model construction and verification. Secondly, we use the optical axis angle change detection method to analyze the law of low frequency error variation. Thirdly, we use relative calibration and information fusion among star sensors to realize datum unity and high precision attitude output. Finally, we realize the low frequency error model construction and optimal estimation of model parameters based on the DEM/DOM of the geometric calibration field. To evaluate the performance of the proposed calibration method, real data from a satellite of this type are used. Test results demonstrate that the calibration model in this paper describes the law of low frequency error variation well. The uncontrolled geometric positioning accuracy of the high-resolution optical image in the WGS-84 coordinate system is obviously improved after the step-wise calibration.

  5. Review and evaluation of performance measures for survival prediction models in external validation settings.

    PubMed

    Rahman, M Shafiqur; Ambler, Gareth; Choodari-Oskooei, Babak; Omar, Rumana Z

    2017-04-18

    When developing a prediction model for survival data it is essential to validate its performance in external validation settings using appropriate performance measures. Although a number of such measures have been proposed, there is only limited guidance regarding their use in the context of model validation. This paper reviewed and evaluated a wide range of performance measures to provide some guidelines for their use in practice. An extensive simulation study based on two clinical datasets was conducted to investigate the performance of the measures in external validation settings. Measures were selected from categories that assess the overall performance, discrimination and calibration of a survival prediction model. Some of these have been modified to allow their use with validation data, and a case study is provided to describe how these measures can be estimated in practice. The measures were evaluated with respect to their robustness to censoring and ease of interpretation. All measures are implemented, or are straightforward to implement, in statistical software. Most of the performance measures were reasonably robust to moderate levels of censoring. One exception was Harrell's concordance measure which tended to increase as censoring increased. We recommend that Uno's concordance measure is used to quantify concordance when there are moderate levels of censoring. Alternatively, Gönen and Heller's measure could be considered, especially if censoring is very high, but we suggest that the prediction model is re-calibrated first. We also recommend that Royston's D is routinely reported to assess discrimination since it has an appealing interpretation. The calibration slope is useful for both internal and external validation settings and recommended to report routinely. Our recommendation would be to use any of the predictive accuracy measures and provide the corresponding predictive accuracy curves. 
In addition, we recommend investigating the characteristics of the validation data, such as the level of censoring and the distribution of the prognostic index in the validation setting, before choosing the performance measures.
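The concordance measures discussed above are pairwise rank statistics. As a rough illustration (toy data; the function and variable names are ours, not from the paper), Harrell's C can be sketched as:

```python
# A minimal sketch of Harrell's concordance index for right-censored
# survival data, illustrating the pairwise definition. Toy data only.

def harrell_c(times, events, risk):
    """times: observed times; events: 1 = event, 0 = censored;
    risk: higher value = predicted earlier failure."""
    concordant = tied = usable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is usable only when subject i is observed to fail
            # strictly before subject j's observed time.
            if events[i] == 1 and times[i] < times[j]:
                usable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    tied += 1
    return (concordant + 0.5 * tied) / usable

times  = [2, 4, 5, 7, 9]
events = [1, 1, 0, 1, 0]            # subjects 3 and 5 are censored
risk   = [0.9, 0.7, 0.6, 0.4, 0.2]  # perfectly ordered predictions
print(harrell_c(times, events, risk))  # 1.0: all usable pairs concordant
```

Harrell's estimator only counts pairs whose ordering is observable under censoring, which is why its value drifts as censoring grows (the abstract notes it tends to increase); Uno's variant instead reweights pairs by the inverse probability of censoring to remove that dependence.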

  6. Neural-Net Based Optical NDE Method for Structural Health Monitoring

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.; Weiland, Kenneth E.

    2003-01-01

    This paper answers some performance and calibration questions about a non-destructive-evaluation (NDE) procedure that uses artificial neural networks to detect structural damage or other changes from sub-sampled characteristic patterns. The method shows increasing sensitivity as the number of sub-samples increases from 108 to 6912. The sensitivity of this robust NDE method is not affected by noisy excitations of the first vibration mode. A calibration procedure is proposed and demonstrated where the output of a trained net can be correlated with the outputs of the point sensors used for vibration testing. The calibration procedure is based on controlled changes of fastener torques. A heterodyne interferometer is used as a displacement sensor for a demonstration of the challenges to be handled in using standard point sensors for calibration.

  7. Calibration of the Spanish PROMIS Smoking Item Banks.

    PubMed

    Huang, Wenjing; Stucky, Brian D; Edelen, Maria O; Tucker, Joan S; Shadel, William G; Hansen, Mark; Cai, Li

    2016-07-01

    The Patient-Reported Outcomes Measurement Information System (PROMIS) Smoking Initiative has developed item banks for assessing six smoking behaviors and biopsychosocial correlates of smoking among adult cigarette smokers. The goal of this study is to evaluate the performance of the Spanish version of the PROMIS smoking item banks as compared to the original banks developed in English. The six PROMIS banks for daily smokers were translated into Spanish and administered to a sample of Spanish-speaking adult daily smokers in the United States (N = 302). We first evaluated the unidimensionality of each bank using confirmatory factor analysis. We then conducted a two-group item response theory calibration, including an item response theory-based Differential Item Functioning (DIF) analysis by language of administration (Spanish vs. English). Finally, we generated full bank and short form scores for the translated banks and evaluated their psychometric performance. Unidimensionality of the Spanish smoking item banks was supported by confirmatory factor analysis results. Out of a total of 109 items that were evaluated for language DIF, seven items in three of the six banks were identified as having levels of DIF that exceeded an established criterion. The psychometric performance of the Spanish daily smoker banks is largely comparable to that of the English versions. The Spanish PROMIS smoking item banks are highly similar, but not entirely equivalent, to the original English versions. The parameters from these two-group calibrations can be used to generate comparable bank scores across the two language versions. In this study, we developed a Spanish version of the PROMIS smoking toolkit, which was originally designed and developed for English speakers. With the growing Spanish-speaking population, it is important to make the toolkit more accessible by translating the items and calibrating the Spanish version to be comparable with English-language scores. 
This study provided the translated item banks and short forms, comparable unbiased scores for Spanish speakers and evaluations of the psychometric properties of the new Spanish toolkit. © The Author 2016. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  8. Effect of experimental design on the prediction performance of calibration models based on near-infrared spectroscopy for pharmaceutical applications.

    PubMed

    Bondi, Robert W; Igne, Benoît; Drennen, James K; Anderson, Carl A

    2012-12-01

    Near-infrared spectroscopy (NIRS) is a valuable tool in the pharmaceutical industry, presenting opportunities for online analyses to achieve real-time assessment of intermediates and finished dosage forms. The purpose of this work was to investigate the effect of experimental designs on prediction performance of quantitative models based on NIRS using a five-component formulation as a model system. The following experimental designs were evaluated: five-level, full factorial (5-L FF); three-level, full factorial (3-L FF); central composite; I-optimal; and D-optimal. The factors for all designs were acetaminophen content and the ratio of microcrystalline cellulose to lactose monohydrate. Other constituents included croscarmellose sodium and magnesium stearate (content remained constant). Partial least squares-based models were generated using data from individual experimental designs that related acetaminophen content to spectral data. The effect of each experimental design was evaluated by determining the statistical significance of the difference in bias and standard error of the prediction for that model's prediction performance. The calibration model derived from the I-optimal design had similar prediction performance as did the model derived from the 5-L FF design, despite containing 16 fewer design points. It also outperformed all other models estimated from designs with similar or fewer numbers of samples. This suggested that experimental-design selection for calibration-model development is critical, and optimum performance can be achieved with efficient experimental designs (i.e., optimal designs).
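As a side note on the designs compared above, a full factorial design is simply the Cartesian product of the factor levels. A minimal sketch for the study's two factors (the level values here are invented, not the actual concentrations used):

```python
# Sketch of the two full factorial calibration designs described above,
# for two factors: drug content and MCC:lactose ratio (values invented).
from itertools import product

def full_factorial(levels_per_factor):
    """Cartesian product of factor levels -> list of design points."""
    return list(product(*levels_per_factor))

api   = [10, 15, 20, 25, 30]       # acetaminophen %, 5 levels (invented)
ratio = [0.5, 1.0, 1.5, 2.0, 2.5]  # MCC:lactose ratio, 5 levels (invented)

five_lff  = full_factorial([api, ratio])              # 5-level design
three_lff = full_factorial([api[::2], ratio[::2]])    # 3-level design

print(len(five_lff), len(three_lff))  # 25 9
```

The 5-L FF design yields 25 candidate points and the 3-L FF yields 9, which is consistent with the abstract's remark that the I-optimal design contained 16 fewer points than the 5-L FF design (25 - 16 = 9).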

  9. Empirical performance of the calibrated self-controlled cohort analysis within temporal pattern discovery: lessons for developing a risk identification and analysis system.

    PubMed

    Norén, G Niklas; Bergvall, Tomas; Ryan, Patrick B; Juhlin, Kristina; Schuemie, Martijn J; Madigan, David

    2013-10-01

    Observational healthcare data offer the potential to identify adverse drug reactions that may be missed by spontaneous reporting. The self-controlled cohort analysis within the Temporal Pattern Discovery framework compares the observed-to-expected ratio of medical outcomes during post-exposure surveillance periods with those during a set of distinct pre-exposure control periods in the same patients. It utilizes an external control group to account for systematic differences between the different time periods, thus combining within- and between-patient confounder adjustment in a single measure. To evaluate the performance of the calibrated self-controlled cohort analysis within Temporal Pattern Discovery as a tool for risk identification in observational healthcare data, different implementations were applied to 399 drug-outcome pairs (165 positive and 234 negative test cases across 4 health outcomes of interest) in 5 real observational databases (four with administrative claims and one with electronic health records). Performance was evaluated on real data through sensitivity/specificity, the area under the receiver operating characteristic curve (AUC), and bias. The calibrated self-controlled cohort analysis achieved good predictive accuracy across the outcomes and databases under study. The optimal design based on this reference set uses a 360-day surveillance period and a single control period 180 days prior to new prescriptions. It achieved an average AUC of 0.75 and AUC >0.70 in all but one scenario. A design with three separate control periods performed better for the electronic health records database and for acute renal failure across all data sets. The estimates for negative test cases were generally unbiased, but a minor negative bias of up to 0.2 on the RR scale was observed with the configurations using multiple control periods, for acute liver injury and upper gastrointestinal bleeding. 
The calibrated self-controlled cohort analysis within Temporal Pattern Discovery shows promise as a tool for risk identification; it performs well at discriminating positive from negative test cases. The optimal parameter configuration may vary with the data set and medical outcome of interest.
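The observed-to-expected contrast at the core of the method above can be sketched as a simple rate ratio between a post-exposure surveillance window and a pre-exposure control window in the same patients (counts invented; the actual method adds external-control calibration and shrinkage that are omitted here):

```python
# Toy sketch of the self-controlled observed-to-expected contrast.
# Window lengths follow the optimal design quoted in the abstract
# (360-day surveillance, 180-day control); the counts are invented,
# and the +0.5 continuity correction is one simple convention, not
# necessarily the paper's shrinkage approach.

def observed_to_expected(obs_post, days_post, obs_control, days_control):
    """Rate ratio of the post-exposure surveillance period vs the
    pre-exposure control period, with a +0.5 continuity correction."""
    rate_post    = (obs_post + 0.5) / days_post
    rate_control = (obs_control + 0.5) / days_control
    return rate_post / rate_control

# 12 outcomes in the 360-day surveillance window vs 3 outcomes in a
# single 180-day control period before the new prescription.
rr = observed_to_expected(12, 360, 3, 180)
print(round(rr, 2))  # 1.79
```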

  10. Does ADHD in Adults Affect the Relative Accuracy of Metamemory Judgments?

    ERIC Educational Resources Information Center

    Knouse, Laura E.; Paradise, Matthew J.; Dunlosky, John

    2006-01-01

    Objective: Prior research suggests that individuals with ADHD overestimate their performance across domains despite performing more poorly in these domains. The authors introduce measures of accuracy from the larger realm of judgment and decision making--namely, relative accuracy and calibration--to the study of self-evaluative judgment accuracy…

  11. A simplified gross primary production and evapotranspiration model for boreal coniferous forests - is a generic calibration sufficient?

    NASA Astrophysics Data System (ADS)

    Minunno, F.; Peltoniemi, M.; Launiainen, S.; Aurela, M.; Lindroth, A.; Lohila, A.; Mammarella, I.; Minkkinen, K.; Mäkelä, A.

    2015-07-01

    The problem of model complexity has been lively debated in the environmental sciences as well as in the forest modelling community. Simple models are less input-demanding and their calibration involves fewer parameters, but they might be suitable only at the local scale. In this work we calibrated a simplified ecosystem process model (PRELES) to data from multiple sites and tested whether PRELES can be used at regional scale to estimate the carbon and water fluxes of boreal conifer forests. We compared a multi-site (M-S) calibration with site-specific (S-S) calibrations. Model calibrations and evaluations were carried out by means of Bayesian methods; Bayesian calibration (BC) and Bayesian model comparison (BMC) were used to quantify the uncertainty in model parameters and model structure. To evaluate model performance, BMC results were combined with a more classical analysis of model-data mismatch (M-DM). Evapotranspiration (ET) and gross primary production (GPP) measurements collected at 10 sites in Finland and Sweden were used in the study. Calibration results showed that similar estimates were obtained for the parameters to which model outputs are most sensitive. No significant differences were encountered in the predictions of the multi-site and site-specific versions of PRELES, with the exception of a site with agricultural history (Alkkia). Although PRELES predicted GPP better than evapotranspiration, we concluded that the model can be reliably used at regional scale to simulate the carbon and water fluxes of boreal forests. Our analyses also underlined the importance of using long and carefully collected flux datasets in model calibration: even a single site can provide a model calibration that is applicable at a wider spatial scale, provided it covers a wide range of variability in climatic conditions.

  12. Calibration of X-Ray Observatories

    NASA Technical Reports Server (NTRS)

    Weisskopf, Martin C.; O'Dell, Stephen L.

    2011-01-01

    Accurate calibration of x-ray observatories has proved an elusive goal. Inaccuracies and inconsistencies amongst on-ground measurements, differences between on-ground and in-space performance, in-space performance changes, and the absence of cosmic calibration standards whose physics we truly understand have precluded absolute calibration better than several percent and relative spectral calibration better than a few percent. The philosophy "the model is the calibration" relies upon a complete high-fidelity model of performance and an accurate verification and calibration of this model. As high-resolution x-ray spectroscopy begins to play a more important role in astrophysics, additional issues in accurately calibrating at high spectral resolution become more evident. Here we review the challenges of accurately calibrating the absolute and relative response of x-ray observatories. On-ground x-ray testing by itself is unlikely to achieve a high-accuracy calibration of in-space performance, especially when the performance changes with time. Nonetheless, it remains an essential tool in verifying functionality and in characterizing and verifying the performance model. In the absence of verified cosmic calibration sources, we also discuss the notion of an artificial, in-space x-ray calibration standard.

  13. Recent Research on the Automated Mass Measuring System

    NASA Astrophysics Data System (ADS)

    Yao, Hong; Ren, Xiao-Ping; Wang, Jian; Zhong, Rui-Lin; Ding, Jing-An

    This paper reviews the development of robotic mass measurement systems, introduces representative automatic systems, and then discusses a sub-multiple calibration scheme adopted on the fully automatic CCR10 system. The robotic system can perform dissemination of the mass scale without any manual intervention, as well as fast calibration of weight samples against a reference weight. Finally, an evaluation of the expanded uncertainty is given.

  14. Radiometric and geometric assessment of data from the RapidEye constellation of satellites

    USGS Publications Warehouse

    Chander, Gyanesh; Haque, Md. Obaidul; Sampath, Aparajithan; Brunn, A.; Trosset, G.; Hoffmann, D.; Roloff, S.; Thiele, M.; Anderson, C.

    2013-01-01

    To monitor land surface processes over a wide range of temporal and spatial scales, it is critical to have coordinated observations of the Earth's surface using imagery acquired from multiple spaceborne imaging sensors. The RapidEye (RE) satellite constellation acquires high-resolution satellite images covering the entire globe within a very short period of time by sensors identical in construction and cross-calibrated to each other. To evaluate the RE high-resolution Multi-spectral Imager (MSI) sensor capabilities, a cross-comparison between the RE constellation of sensors was performed first using image statistics based on large common areas observed over pseudo-invariant calibration sites (PICS) by the sensors and, second, by comparing the on-orbit radiometric calibration temporal trending over a large number of calibration sites. For any spectral band, the individual responses measured by the five satellites of the RE constellation were found to differ <2–3% from the average constellation response depending on the method used for evaluation. Geometric assessment was also performed to study the positional accuracy and relative band-to-band (B2B) alignment of the image data sets. The position accuracy was assessed by comparing the RE imagery against high-resolution aerial imagery, while the B2B characterization was performed by registering each band against every other band to ensure that the proper band alignment is provided for an image product. The B2B results indicate that the internal alignments of these five RE bands are in agreement, with bands typically registered to within 0.25 pixels of each other or better.
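The first cross-comparison step described above reduces, in essence, to each sensor's percent difference from the constellation-mean response over a PICS site. A minimal sketch with invented reflectance values:

```python
# Sketch of the cross-comparison statistic implied above: each RapidEye
# sensor's mean response over a pseudo-invariant calibration site is
# compared against the five-sensor constellation average.
# The band responses below are invented, not measured values.

def pct_diff_from_mean(responses):
    """Percent difference of each sensor from the constellation mean."""
    mean = sum(responses) / len(responses)
    return [100.0 * (r - mean) / mean for r in responses]

# hypothetical single-band responses for the five RE satellites
band = [0.262, 0.258, 0.265, 0.260, 0.255]
diffs = pct_diff_from_mean(band)
print(all(abs(d) < 3.0 for d in diffs))  # True: within the <2-3% finding
```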

  15. Evaluation, Calibration and Comparison of the Precipitation-Runoff Modeling System (PRMS) National Hydrologic Model (NHM) Using Moderate Resolution Imaging Spectroradiometer (MODIS) and Snow Data Assimilation System (SNODAS) Gridded Datasets

    NASA Astrophysics Data System (ADS)

    Norton, P. A., II; Haj, A. E., Jr.

    2014-12-01

    The United States Geological Survey is currently developing a National Hydrologic Model (NHM) to support and facilitate coordinated and consistent hydrologic modeling efforts at the scale of the continental United States. As part of this effort, the Geospatial Fabric (GF) for the NHM was created. The GF is a database that contains parameters derived from datasets that characterize the physical features of watersheds. The GF was used to aggregate catchments and flowlines defined in the National Hydrography Dataset Plus dataset for more than 100,000 hydrologic response units (HRUs), and to establish initial parameter values for input to the Precipitation-Runoff Modeling System (PRMS). Many parameter values are adjusted in PRMS using an automated calibration process. Using these adjusted parameter values, the PRMS model estimated variables such as evapotranspiration (ET), potential evapotranspiration (PET), snow-covered area (SCA), and snow water equivalent (SWE). In order to evaluate the effectiveness of parameter calibration, and model performance in general, several satellite-based Moderate Resolution Imaging Spectroradiometer (MODIS) and Snow Data Assimilation System (SNODAS) gridded datasets including ET, PET, SCA, and SWE were compared to PRMS-simulated values. The MODIS and SNODAS data were spatially averaged for each HRU, and compared to PRMS-simulated ET, PET, SCA, and SWE values for each HRU in the Upper Missouri River watershed. Default initial GF parameter values and PRMS calibration ranges were evaluated. Evaluation results, and the use of MODIS and SNODAS datasets to update GF parameter values and PRMS calibration ranges, are presented and discussed.

  16. Simulation of streamflow in the Pleasant, Narraguagus, Sheepscot, and Royal Rivers, Maine, using watershed models

    USGS Publications Warehouse

    Dudley, Robert W.; Nielsen, Martha G.

    2011-01-01

    The U.S. Geological Survey (USGS) began a study in 2008 to investigate anticipated changes in summer streamflows and stream temperatures in four coastal Maine river basins and the potential effects of those changes on populations of endangered Atlantic salmon. To achieve this purpose, it was necessary to characterize the quantity and timing of streamflow in these rivers by developing and evaluating a distributed-parameter watershed model for a part of each river basin by using the USGS Precipitation-Runoff Modeling System (PRMS). The GIS (geographic information system) Weasel, a USGS software application, was used to delineate the four study basins and their many subbasins, and to derive parameters for their geographic features. The models were calibrated using a four-step optimization procedure in which model output was evaluated against four datasets for calibrating solar radiation, potential evapotranspiration, annual and seasonal water balances, and daily streamflows. The calibration procedure involved thousands of model runs that used the USGS software application Luca (Let us calibrate). Luca uses the Shuffled Complex Evolution (SCE) global search algorithm to calibrate the model parameters. The calibrated watershed models performed satisfactorily, in that Nash-Sutcliffe efficiency (NSE) statistic values for the calibration periods ranged from 0.59 to 0.75 (on a scale of negative infinity to 1) and NSE statistic values for the evaluation periods ranged from 0.55 to 0.73. The calibrated watershed models simulate daily streamflow at many locations in each study basin. 
These models enable natural resources managers to characterize the timing and amount of streamflow in order to support a variety of water-resources efforts including water-quality calculations, assessments of water use, modeling of population dynamics and migration of Atlantic salmon, modeling and assessment of habitat, and simulation of anticipated changes to streamflow and water temperature resulting from changes forecast for air temperature and precipitation.
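The Nash-Sutcliffe efficiency used above to judge the calibrated models compares the model's squared error against the variance of the observations. A minimal sketch with invented daily streamflow values:

```python
# Nash-Sutcliffe efficiency (NSE), the goodness-of-fit statistic the
# study uses for the calibrated PRMS models. Streamflow values invented.

def nse(observed, simulated):
    """1 - SSE/variance: 1 is a perfect fit; values below 0 mean the
    model is worse than simply predicting the observed mean."""
    mean_obs = sum(observed) / len(observed)
    sse  = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    svar = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / svar

obs = [10.0, 12.0, 30.0, 25.0, 14.0]   # observed daily streamflow
sim = [11.0, 13.0, 26.0, 24.0, 15.0]   # simulated daily streamflow
print(round(nse(obs, sim), 2))  # 0.94
```

On this scale the reported calibration values of 0.59 to 0.75 and evaluation values of 0.55 to 0.73 indicate satisfactory, though not perfect, skill.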

  17. PSA discriminator influence on (222)Rn efficiency detection in waters by liquid scintillation counting.

    PubMed

    Stojković, Ivana; Todorović, Nataša; Nikolov, Jovana; Tenjović, Branislava

    2016-06-01

    A procedure for the determination of (222)Rn in aqueous samples using liquid scintillation counting (LSC) was evaluated and optimized. Measurements were performed with the ultra-low background spectrometer Quantulus 1220™, equipped with a PSA (Pulse Shape Analysis) circuit that discriminates alpha from beta spectra. Because the calibration procedure is carried out with a (226)Ra standard, which has both alpha- and beta-emitting progeny, the PSA discriminator setting is of vital importance for precise spectrum separation. The calibration procedure was improved by investigating how the PSA discriminator level and, consequently, the activity of the (226)Ra calibration standard influence the (222)Rn detection efficiency. Quench effects on the generated spectra, i.e. on the determined radon detection efficiency, were also investigated, and a quench calibration curve was obtained. Radon determination in waters by the modified procedure, which depends on the activity of the (226)Ra standard used and on the PSA setting, was evaluated with prepared (226)Ra solution samples and drinking water samples, including an assessment of the variation in measurement uncertainty. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. 8s, a numerical simulator of the challenging optical calibration of the E-ELT adaptive mirror M4

    NASA Astrophysics Data System (ADS)

    Briguglio, Runa; Pariani, Giorgio; Xompero, Marco; Riccardi, Armando; Tintori, Matteo; Lazzarini, Paolo; Spanò, Paolo

    2016-07-01

    8s stands for Optical Test TOwer Simulator (with 8 read as in the Italian 'otto'): it is a simulation tool for the optical calibration of the E-ELT deformable mirror M4 on its test facility. It was developed to identify possible criticalities in the procedure, evaluate solutions, and estimate the sensitivity to environmental noise. The simulation system is composed of a finite element model of the tower, the analytic influence functions of the actuators, and ray-tracing propagation of the laser beam through the optical surfaces. The tool delivers simulated phase maps of M4 associated with the current system status: actuator commands, optics alignment and position, beam vignetting, bench temperature, and vibrations. A single step of the optical test of M4 can be simulated by changing the system parameters according to a calibration procedure and collecting the associated phase map for performance evaluation. In this paper we describe the simulation package and outline the proposed calibration procedure for M4.

  19. An information theoretic approach to use high-fidelity codes to calibrate low-fidelity codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, Allison, E-mail: lewis.allison10@gmail.com; Smith, Ralph; Williams, Brian

    For many simulation models, it can be prohibitively expensive or physically infeasible to obtain a complete set of experimental data to calibrate model parameters. In such cases, one can alternatively employ validated higher-fidelity codes to generate simulated data, which can be used to calibrate the lower-fidelity code. In this paper, we employ an information-theoretic framework to determine the reduction in parameter uncertainty that is obtained by evaluating the high-fidelity code at a specific set of design conditions. These conditions are chosen sequentially, based on the amount of information that they contribute to the low-fidelity model parameters. The goal is to employ Bayesian experimental design techniques to minimize the number of high-fidelity code evaluations required to accurately calibrate the low-fidelity model. We illustrate the performance of this framework using heat and diffusion examples, a 1-D kinetic neutron diffusion equation, and a particle transport model, and include initial results from the integration of the high-fidelity thermal-hydraulics code Hydra-TH with a low-fidelity exponential model for the friction correlation factor.

  20. Surrogate Based Uni/Multi-Objective Optimization and Distribution Estimation Methods

    NASA Astrophysics Data System (ADS)

    Gong, W.; Duan, Q.; Huo, X.

    2017-12-01

    Parameter calibration has been demonstrated as an effective way to improve the performance of dynamic models, such as hydrological models, land surface models, and weather and climate models. Traditional optimization algorithms usually require a huge number of model evaluations, making dynamic model calibration very difficult, or even computationally prohibitive. With the help of a series of recently developed adaptive surrogate-modelling based optimization methods (the uni-objective optimization method ASMO, the multi-objective optimization method MO-ASMO, and the probability distribution estimation method ASMO-PODE), the number of model evaluations can be significantly reduced to several hundred, making it possible to calibrate very expensive dynamic models, such as regional high-resolution land surface models, weather forecast models such as WRF, and intermediate-complexity earth system models such as LOVECLIM. This presentation provides a brief introduction to the common framework of the adaptive surrogate-based optimization algorithms ASMO, MO-ASMO and ASMO-PODE, a case study of Common Land Model (CoLM) calibration in the Heihe river basin in Northwest China, and an outlook on potential applications of surrogate-based optimization methods.
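A minimal 1-D caricature of the adaptive surrogate idea above: fit a cheap quadratic to a few expensive model evaluations, jump to the quadratic's minimum, evaluate the true model there, and refit. The real ASMO framework uses far richer surrogates and sampling rules; the model function and starting points here are invented.

```python
# Minimal adaptive-surrogate loop: the "expensive" model below is a
# stand-in for a costly dynamic model run (invented for illustration).

def expensive_model(x):
    return (x - 1.7) ** 2 + 0.3   # true optimum at x = 1.7

def quad_min(p1, p2, p3):
    """Vertex of the parabola through three (x, y) samples."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    num = (y1 * (x2**2 - x3**2) + y2 * (x3**2 - x1**2)
           + y3 * (x1**2 - x2**2))
    den = 2.0 * (y1 * (x2 - x3) + y2 * (x3 - x1) + y3 * (x1 - x2))
    return num / den

samples = [(x, expensive_model(x)) for x in (0.0, 1.0, 3.0)]
for _ in range(5):                        # a handful of adaptive steps
    samples.sort(key=lambda p: p[1])      # keep the three best points
    x_new = quad_min(*samples[:3])        # minimize the cheap surrogate
    if any(abs(x_new - x) < 1e-9 for x, _ in samples):
        break                             # surrogate minimum already sampled
    samples.append((x_new, expensive_model(x_new)))  # one expensive run

best = min(samples, key=lambda p: p[1])
print(round(best[0], 3))  # 1.7
```

Only the appended points cost an expensive model run, which is the sense in which surrogate methods cut the evaluation budget from many thousands to a few hundred.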

  1. Multiresidue determination of pesticides in crop plants by the quick, easy, cheap, effective, rugged, and safe method and ultra-high-performance liquid chromatography tandem mass spectrometry using a calibration based on a single level standard addition in the sample.

    PubMed

    Viera, Mariela S; Rizzetti, Tiele M; de Souza, Maiara P; Martins, Manoel L; Prestes, Osmar D; Adaime, Martha B; Zanella, Renato

    2017-12-01

    In this study, a QuEChERS (Quick, Easy, Cheap, Effective, Rugged and Safe) method, optimized by a 2³ full factorial design, was developed for the determination of 72 pesticides in plant parts of carrot, corn, melon, rice, soy, silage, tobacco, cassava, lettuce and wheat by ultra-high-performance liquid chromatography tandem mass spectrometry (UHPLC-MS/MS). Considering the complexity of these matrices and the need for matrix-based calibration, a new calibration approach based on single-level standard addition in the sample (SLSAS) was proposed in this work and compared with matrix-matched calibration (MMC), procedural standard calibration (PSC) and diluted standard addition calibration (DSAC). All approaches presented satisfactory validation parameters, with recoveries from 70 to 120% and relative standard deviations ≤ 20%. SLSAS was the most practical of the evaluated approaches and proved to be an effective calibration strategy. Method limits of detection were between 4.8 and 48 μg kg⁻¹, and limits of quantification from 16 to 160 μg kg⁻¹. Application of the method to different kinds of plants found residues of 20 pesticides, which were quantified with z-score values ≤ 2 in comparison with the other calibration approaches. The proposed QuEChERS method combined with UHPLC-MS/MS analysis, using an easy and effective calibration procedure, presented satisfactory results for pesticide residue determination in different crop plants and is a good alternative for routine analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
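Single-level standard addition reduces to a two-point calibration line through the unspiked and once-spiked extract signals. A minimal sketch (peak areas and spike level invented; the published method's exact computation may differ):

```python
# Sketch of single-level standard addition quantification: the analyte
# concentration follows from the signal of the sample extract and of
# the same extract spiked once at a known level. Values invented.

def single_level_standard_addition(signal_sample, signal_spiked, added_conc):
    """Two-point standard-addition line through (0, signal_sample) and
    (added_conc, signal_spiked); the x-intercept gives the concentration."""
    slope = (signal_spiked - signal_sample) / added_conc
    return signal_sample / slope

# peak areas for the extract and the extract spiked with 50 ug/kg
conc = single_level_standard_addition(1200.0, 3200.0, 50.0)
print(conc)  # 30.0 ug/kg
```

Because the calibration line is built in the sample itself, matrix effects act on both points alike, which is the same motivation behind matrix-matched calibration but with only one extra preparation per sample.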

  2. Calibration and error analysis of metal-oxide-semiconductor field-effect transistor dosimeters for computed tomography radiation dosimetry.

    PubMed

    Trattner, Sigal; Prinsen, Peter; Wiegert, Jens; Gerland, Elazar-Lars; Shefer, Efrat; Morton, Tom; Thompson, Carla M; Yagil, Yoad; Cheng, Bin; Jambawalikar, Sachin; Al-Senan, Rani; Amurao, Maxwell; Halliburton, Sandra S; Einstein, Andrew J

    2017-12-01

    Metal-oxide-semiconductor field-effect transistors (MOSFETs) serve as a helpful tool for organ radiation dosimetry and their use has grown in computed tomography (CT). While different approaches have been used for MOSFET calibration, those using the commonly available 100 mm pencil ionization chamber have not incorporated measurements performed throughout its length, and moreover, no previous work has rigorously evaluated the multiple sources of error involved in MOSFET calibration. In this paper, we propose a new MOSFET calibration approach to translate MOSFET voltage measurements into absorbed dose from CT, based on serial measurements performed throughout the length of a 100-mm ionization chamber, and perform an analysis of the errors of MOSFET voltage measurements and four sources of error in calibration. MOSFET calibration was performed at two sites, to determine single calibration factors for tube potentials of 80, 100, and 120 kVp, using a 100-mm-long pencil ion chamber and a cylindrical computed tomography dose index (CTDI) phantom of 32 cm diameter. The dose profile along the 100-mm ion chamber axis was sampled in 5 mm intervals by nine MOSFETs in the nine holes of the CTDI phantom. Variance of the absorbed dose was modeled as a sum of the MOSFET voltage measurement variance and the calibration factor variance, the latter being comprised of three main subcomponents: ionization chamber reading variance, MOSFET-to-MOSFET variation and a contribution related to the fact that the average calibration factor of a few MOSFETs was used as an estimate for the average value of all MOSFETs. MOSFET voltage measurement error was estimated based on sets of repeated measurements. The calibration factor overall voltage measurement error was calculated from the above analysis. Calibration factors determined were close to those reported in the literature and by the manufacturer (~3 mV/mGy), ranging from 2.87 to 3.13 mV/mGy. 
The error σV of a MOSFET voltage measurement was shown to be proportional to the square root of the voltage V: σV = c√V, where c = 0.11 mV^(1/2). A main contributor to the error in the calibration factor was the ionization chamber reading error, at 5%. The use of a single calibration factor for all MOSFETs introduced an additional error of about 5-7%, depending on the number of MOSFETs used to determine the single calibration factor. The expected overall error in a high-dose region (~30 mGy) was estimated to be about 8%, compared to 6% when an individual MOSFET calibration was performed. For a low-dose region (~3 mGy), these values were 13% and 12%. A MOSFET calibration method was developed using a 100-mm pencil ion chamber and a CTDI phantom, accompanied by an absorbed-dose error analysis reflecting multiple sources of measurement error. When using a single calibration factor per tube potential for different MOSFETs, only a small error was introduced into absorbed dose determinations, thus supporting the use of a single calibration factor for experiments involving many MOSFETs, such as those required to accurately estimate radiation effective dose. © 2017 American Association of Physicists in Medicine.
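A rough sketch of the error budget above, assuming the three relative error components combine in quadrature, with σV = c√V (c = 0.11), a nominal calibration factor of 3 mV/mGy, and a 6% single-factor contribution taken from the quoted 5-7% range. The paper's exact budget may differ:

```python
import math

# Rough error-propagation sketch (our assumptions, not the paper's
# exact model): chamber error, single-calibration-factor error, and
# the sqrt-law voltage measurement error added in quadrature.

def relative_dose_error(dose_mGy, cal_factor=3.0, c=0.11,
                        chamber_err=0.05, single_factor_err=0.06):
    voltage = cal_factor * dose_mGy                # expected reading, mV
    rel_voltage_err = c * math.sqrt(voltage) / voltage
    return math.sqrt(chamber_err**2 + single_factor_err**2
                     + rel_voltage_err**2)

# high-dose region (~30 mGy): close to the ~8% quoted in the abstract
print(round(100 * relative_dose_error(30.0), 1))  # 7.9
```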

  3. Multivariate calibration standardization across instruments for the determination of glucose by Fourier transform near-infrared spectrometry.

    PubMed

    Zhang, Lin; Small, Gary W; Arnold, Mark A

    2003-11-01

    The transfer of multivariate calibration models is investigated between a primary (A) and two secondary Fourier transform near-infrared (near-IR) spectrometers (B, C). The application studied in this work is the use of bands in the near-IR combination region of 5000-4000 cm(-)(1) to determine physiological levels of glucose in a buffered aqueous matrix containing varying levels of alanine, ascorbate, lactate, triacetin, and urea. The three spectrometers are used to measure 80 samples produced through a randomized experimental design that minimizes correlations between the component concentrations and between the concentrations of glucose and water. Direct standardization (DS), piecewise direct standardization (PDS), and guided model reoptimization (GMR) are evaluated for use in transferring partial least-squares calibration models developed with the spectra of 64 samples from the primary instrument to the prediction of glucose concentrations in 16 prediction samples measured with each secondary spectrometer. The three algorithms are evaluated as a function of the number of standardization samples used in transferring the calibration models. Performance criteria for judging the success of the calibration transfer are established as the standard error of prediction (SEP) for internal calibration models built with the spectra of the 64 calibration samples collected with each secondary spectrometer. These SEP values are 1.51 and 1.14 mM for spectrometers B and C, respectively. When calibration standardization is applied, the GMR algorithm is observed to outperform DS and PDS. With spectrometer C, the calibration transfer is highly successful, producing an SEP value of 1.07 mM. However, an SEP of 2.96 mM indicates unsuccessful calibration standardization with spectrometer B. This failure is attributed to differences in the variance structure of the spectra collected with spectrometers A and B. 
Diagnostic procedures are presented for use with the GMR algorithm that forecast the successful calibration transfer with spectrometer C and the unsatisfactory results with spectrometer B.
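
    Of the transfer algorithms evaluated above, direct standardization is the simplest to state: a transformation matrix is estimated that maps standardization spectra measured on the secondary instrument onto the corresponding primary-instrument spectra. A minimal sketch with synthetic data (the spectra, dimensions, and drift model below are all hypothetical):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic "spectra": 10 standardization samples x 50 wavelength channels.
    primary = rng.normal(size=(10, 50))        # spectra measured on instrument A
    secondary = primary * 1.05 + 0.02          # same samples on B, with gain/offset drift

    # Direct standardization: estimate F so that secondary @ F ~= primary.
    F = np.linalg.pinv(secondary) @ primary    # least-squares transformation matrix

    # Mapping the standardization spectra reproduces the primary spectra (exactly
    # here, because there are fewer samples than channels).
    mapped = secondary @ F
    print(np.allclose(mapped, primary, atol=1e-8))   # True
    ```

    In practice the transformation is estimated from a small standardization set and then applied to new secondary-instrument spectra before the primary instrument's PLS model is used; with fewer standardization samples than channels, the mapping fits the standardization set exactly, which is one reason piecewise variants such as PDS are often preferred.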

  4. Calibration improvements to electronically scanned pressure systems and preliminary statistical assessment

    NASA Technical Reports Server (NTRS)

    Everhart, Joel L.

    1996-01-01

    Orifice-to-orifice inconsistencies in data acquired with an electronically scanned pressure system at the beginning of a wind tunnel experiment forced modifications to the standard instrument calibration procedures. These modifications included a large increase in the number of calibration points, which allowed a critical examination of the calibration curve-fit process and a subsequent post-test reduction of the pressure data. Evaluation of these data has resulted in an improved functional representation of the pressure-voltage signature for electronically scanned pressure sensors, which can reduce the errors due to calibration curve fit to under 0.10 percent of reading, compared to the manufacturer-specified 0.10 percent of full scale. Application of the improved calibration function allows a more rational selection of the calibration set-point pressures: these pressures should be adjusted to achieve a voltage output that matches the physical shape of the pressure-voltage signature of the sensor, in lieu of the more traditional approach where a calibration pressure is specified and the resulting sensor voltage is recorded. The fifteen calibrations acquired over the two-week duration of the wind tunnel test were further used to perform a preliminary statistical assessment of the variation in the calibration process. The results allowed estimation of the bias uncertainty for a single instrument calibration, and they form the precursor for more extensive and more controlled studies in the laboratory.
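
    The distinction between error expressed as a percent of full scale and as a percent of reading, and the effect of the curve-fit order, can be illustrated with a toy pressure-voltage signature (the quadratic response below is invented for illustration, not the actual ESP sensor characteristic):

    ```python
    import numpy as np

    # Hypothetical mildly nonlinear pressure-voltage signature (invented for
    # illustration; not the actual ESP sensor characteristic).
    pressure = np.linspace(5.0, 100.0, 15)           # set-point pressures, kPa
    voltage = 0.05 * pressure - 2e-5 * pressure**2   # simulated sensor output, V

    # Fit voltage -> pressure with increasing polynomial order and express the
    # worst-case residual both as percent of full scale and percent of reading.
    for order in (1, 2, 3):
        coeffs = np.polyfit(voltage, pressure, order)
        resid = np.polyval(coeffs, voltage) - pressure
        pct_fs = 100 * np.max(np.abs(resid)) / pressure.max()
        pct_rdg = np.max(100 * np.abs(resid) / pressure)
        print(f"order {order}: {pct_fs:.4f}% FS, {pct_rdg:.4f}% of reading")
    ```

    Percent-of-reading error penalizes low-pressure residuals much more heavily than percent-of-full-scale error, which is why the fit order and set-point placement matter at the low end of the range.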

  5. Evaluation of the NICE mini-GRACE risk scores for acute myocardial infarction using the Myocardial Ischaemia National Audit Project (MINAP) 2003-2009: National Institute for Cardiovascular Outcomes Research (NICOR).

    PubMed

    Simms, Alexander D; Reynolds, Stephanie; Pieper, Karen; Baxter, Paul D; Cattle, Brian A; Batin, Phillip D; Wilson, John I; Deanfield, John E; West, Robert M; Fox, Keith A A; Hall, Alistair S; Gale, Christopher P

    2013-01-01

    To evaluate the performance of the National Institute for Health and Clinical Excellence (NICE) mini-Global Registry of Acute Coronary Events (GRACE) (MG) and adjusted mini-GRACE (AMG) risk scores. Retrospective observational study. 215 acute hospitals in England and Wales. 137 084 patients discharged from hospital with a diagnosis of acute myocardial infarction (AMI) between 2003 and 2009, as recorded in the Myocardial Ischaemia National Audit Project (MINAP). Model performance indices of calibration accuracy, discriminative and explanatory performance, including net reclassification index (NRI) and integrated discrimination improvement. Of 495 263 index patients hospitalised with AMI, there were 53 196 ST elevation myocardial infarction and 83 888 non-ST elevation myocardial infarction (NSTEMI) (27.7%) cases with complete data for all AMG variables. For AMI, AMG calibration was better than MG calibration (Hosmer-Lemeshow goodness of fit test: p=0.33 vs p<0.05). MG and AMG predictive accuracy and discriminative ability were good (Brier score: 0.10 vs 0.09; C statistic: 0.82 and 0.84, respectively). The NRI of AMG over MG was 8.1% (p<0.05). Model performance was reduced in patients with NSTEMI, chronic heart failure, chronic renal failure and in patients aged ≥85 years. The AMG and MG risk scores, utilised by NICE, demonstrated good performance across a range of indices using MINAP data, but performed less well in higher risk subgroups. Although indices were better for AMG, its application may be constrained by missing predictors.
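
    The Brier score and C statistic quoted above can be computed directly from observed outcomes and predicted risks; a minimal sketch (toy data, not MINAP values):

    ```python
    import numpy as np

    def brier_score(y, p):
        """Mean squared difference between outcome (0/1) and predicted risk."""
        y, p = np.asarray(y, float), np.asarray(p, float)
        return np.mean((p - y) ** 2)

    def c_statistic(y, p):
        """Probability that a random event case is ranked above a random
        non-event case (the area under the ROC curve); ties count half."""
        y, p = np.asarray(y), np.asarray(p, float)
        pos, neg = p[y == 1], p[y == 0]
        wins = (pos[:, None] > neg[None, :]).sum() \
             + 0.5 * (pos[:, None] == neg[None, :]).sum()
        return wins / (len(pos) * len(neg))

    y = [0, 0, 1, 1, 1]                    # observed outcomes
    p = [0.1, 0.4, 0.35, 0.8, 0.9]         # predicted risks
    print(brier_score(y, p), c_statistic(y, p))
    ```

    Lower Brier scores and higher C statistics indicate better predictive accuracy and discrimination, matching the direction of the comparisons reported above.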

  6. Ground truth data for test sites (SL-3). [solar radiation and thermal radiation brightness temperature measurements

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Field measurements performed simultaneously with Skylab overpasses in order to provide comparative calibration and performance evaluation measurements for the EREP sensors are presented. The solar radiation region from 400 to 1300 nanometers and the thermal radiation region from 8 to 14 micrometers were investigated. The measurements of direct solar radiation were analyzed for atmospheric optical depth; the total and reflected solar radiation were analyzed for target reflectivity. These analyses were used in conjunction with a radiative transfer computer program to calculate the amount and spectral distribution of solar radiation at the apertures of the EREP sensors. The instrumentation and techniques employed, the calibrations and analyses performed, and the results obtained are discussed.

  7. Studies in Software Cost Model Behavior: Do We Really Understand Cost Model Performance?

    NASA Technical Reports Server (NTRS)

    Lum, Karen; Hihn, Jairus; Menzies, Tim

    2006-01-01

    While there exists an extensive literature on software cost estimation techniques, industry practice continues to rely on standard regression-based algorithms. These software effort models are typically calibrated or tuned to local conditions using local data. This paper cautions that current approaches to model calibration often produce sub-optimal models because of the large variance problem inherent in cost data and because they include far more effort multipliers than the data support. Building optimal models requires that a wider range of models be considered, while correctly calibrating these models requires rejection rules that prune variables and records and multiple criteria for evaluating model performance. The main contribution of this paper is to document a standard method that integrates formal model identification, estimation, and validation. It also documents what we call the large variance problem, a leading cause of cost model brittleness or instability.

  8. C-band polarimetric scatterometer for soil studies

    NASA Astrophysics Data System (ADS)

    D'Alessio, Angelo C.; Mongelli, Antonio; Notarnicola, Claudia; Paparella, Giuseppina; Posa, Francesco; Sabatelli, Vincenzo

    2003-03-01

    The aim of this study is to evaluate the performance of a polarimetric scatterometer. This sensor can measure the modulus of the electromagnetic backscattering matrix elements. Knowledge of this matrix permits the computation of all possible polarisation combinations of transmitted and received signals through a polarisation synthesis approach. Scatterometer data are useful for monitoring a large number of soil physical parameters. In particular, the sensitivity of a C-band radar to different growing conditions of vegetation depends on the wave polarisation. As a consequence, the possibility of acquiring both polarisation components is a great advantage in vegetation studies. In addition, this type of ground sensor permits fast coverage of the areas of interest. A first test of the polarimetric scatterometer was performed over an asphalt surface, which has a well-known electromagnetic response. Moreover, a calibration procedure was tested using both passive (Trihedral Corner Reflector, TCR) and active (Active Radar Calibrator, ARC) radar calibrators.

  9. Evaluation of the CMODIS-measured radiance

    NASA Astrophysics Data System (ADS)

    Mao, Zhihua; Pan, Delu; Huang, Haiqing

    2006-12-01

    A Chinese Moderate Resolution Imaging Spectrometer (CMODIS) on the "Shenzhou-3" spaceship was launched on March 25, 2002. CMODIS has 34 channels, with 30 visible and near-infrared channels and 4 infrared channels. The 30 channels are 20 nm wide, with wavelengths ranging from 403 nm to 1023 nm. The radiance calibration of CMODIS was completed in laboratory measurements before launch, and the laboratory calibration coefficients were used to calibrate the CMODIS raw data. Since no on-board absolute radiance calibration devices, such as an internal lamp system or a calibration system based on solar reflectance and lunar irradiance, were installed with the sensor, the accuracy of the CMODIS-measured radiance is a key question for remote sensing data processing and ocean applications. A new model was developed as a program to evaluate the accuracy of the calibrated radiance measured by CMODIS at the top of the atmosphere (TOA). The program computes the Rayleigh scattering radiance and aerosol scattering radiance, together with the radiance component from the water-leaving radiance, to deduce the total radiance at TOA under observation conditions similar to those of CMODIS. Both multiple-scattering effects and atmospheric absorption effects are taken into account in the radiative transfer model to improve the accuracy of the atmospheric scattering radiances. The model was used to deduce the spectral radiances at TOA, which were compared with the radiances measured by the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) to check the performance of the model; the spectral radiances from the model show only small differences from those of SeaWiFS. The spectral radiances of the model can therefore be taken as reference values to evaluate the accuracy of the CMODIS calibrated radiance. The relative differences between the two radiances are large, ranging from 16% to 300%; at the near-infrared channels, in particular, the CMODIS radiances are more than double those of the model. 
It is shown that the calibration coefficients from the laboratory measurements are not reliable and that the CMODIS radiance needs to be recalibrated before the data are used for oceanographic applications. The results also show that the model is effective in evaluating the CMODIS sensor and can easily be modified to evaluate other ocean color satellite sensors.

  10. SIRU utilization. Volume 1: Theory, development and test evaluation

    NASA Technical Reports Server (NTRS)

    Musoff, H.

    1974-01-01

    The theory, development, and test evaluations of the Strapdown Inertial Reference Unit (SIRU) are discussed. The statistical failure detection and isolation, single position calibration, and self alignment techniques are emphasized. Circuit diagrams of the system components are provided. Mathematical models are developed to show the performance characteristics of the subsystems. Specific areas of the utilization program are identified as: (1) error source propagation characteristics and (2) local level navigation performance demonstrations.

  11. Simultaneous intrinsic and extrinsic calibration of a laser deflecting tilting mirror in the projective voltage space.

    PubMed

    Schneider, Adrian; Pezold, Simon; Baek, Kyung-Won; Marinov, Dilyan; Cattin, Philippe C

    2016-09-01

    PURPOSE: During the past five decades, laser technology has emerged and is nowadays part of a great number of scientific and industrial applications. In the medical field, the integration of laser technology is on the rise and has already been widely adopted in contemporary medical applications. However, using a laser to cut bone and perform general osteotomy surgical tasks is new. In this paper, we describe a method to calibrate a laser-deflecting tilting mirror and integrate it into a sophisticated laser osteotome involving next-generation robots and optical tracking. METHODS: A mathematical model was derived that describes a controllable deflection mirror by the general projective transformation. This makes the application of well-known camera calibration methods possible. In particular, the direct linear transformation algorithm is applied to calibrate and integrate a laser-deflecting tilting mirror into the affine transformation chain of a surgical system. RESULTS: Experiments were performed on synthetically generated calibration input, and the calibration was tested with real data. The determined target registration errors at a working distance of 150 mm for both simulated input and real data agree with the declared noise level of the applied optical 3D tracking system: the evaluation of the synthetic input showed an error of 0.4 mm, and the error with the real data was 0.3 mm.
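
    The direct linear transformation step mentioned in the methods can be sketched for the planar case: given point correspondences related by a projective transform, the transform is recovered as the null vector of a homogeneous linear system. This is a generic DLT homography estimate, not the authors' mirror-specific formulation:

    ```python
    import numpy as np

    def dlt_homography(src, dst):
        """Estimate the 3x3 projective transform H with dst ~ H @ src
        (homogeneous coordinates), from >= 4 correspondences, via SVD."""
        A = []
        for (x, y), (u, v) in zip(src, dst):
            A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        _, _, vt = np.linalg.svd(np.asarray(A, float))
        H = vt[-1].reshape(3, 3)       # null vector = smallest singular vector
        return H / H[2, 2]             # fix the projective scale

    # Check: recover a known projective transform from 4 mapped corners.
    H_true = np.array([[1.2, 0.1, 5.0], [0.0, 0.9, -3.0], [1e-3, 2e-3, 1.0]])
    src = [(0, 0), (1, 0), (1, 1), (0, 1)]
    dst = []
    for x, y in src:
        p = H_true @ np.array([x, y, 1.0])
        dst.append((p[0] / p[2], p[1] / p[2]))
    H_est = dlt_homography(src, dst)
    print(np.allclose(H_est, H_true, atol=1e-6))   # True
    ```

    With noisy measurements, more than four correspondences are used and the same SVD gives the least-squares solution; coordinate normalization is usually added for numerical stability.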

  12. Changes in deviation of absorbed dose to water among users by chamber calibration shift.

    PubMed

    Katayose, Tetsurou; Saitoh, Hidetoshi; Igari, Mitsunobu; Chang, Weishan; Hashimoto, Shimpei; Morioka, Mie

    2017-07-01

    The JSMP01 dosimetry protocol had adopted the provisional 60 Co calibration coefficient [Formula: see text], namely, the product of the exposure calibration coefficient N C and the conversion coefficient k D,X . After the absorbed dose to water D w standard was established, the JSMP12 protocol adopted the [Formula: see text] calibration. In this study, the influence of the calibration shift on the measurement of D w among users was analyzed. An intercomparison of D w using an ionization chamber was performed annually by visiting related hospitals. Intercomparison results before and after the calibration shift were analyzed, the deviation of D w among users was re-evaluated, and the cause of the deviation was estimated. The stability of the LINAC, the calibration of the thermometer and barometer, and the correction for ion recombination were confirmed. No statistically significant change in the standard deviation of D w was observed, but a significant difference in D w among users was observed between the N C and [Formula: see text] calibrations. Uncertainty due to chamber-to-chamber variation was reduced by the calibration shift, consequently reducing the uncertainty among users regarding D w . The results also indicated that uncertainty might be further reduced by accurate and detailed instructions on the setup of the ionization chamber.

  13. Calibration of Clinical Audio Recording and Analysis Systems for Sound Intensity Measurement.

    PubMed

    Maryn, Youri; Zarowski, Andrzej

    2015-11-01

    Sound intensity is an important acoustic feature of voice/speech signals. Yet recordings are performed with different microphone, amplifier, and computer configurations, and it is therefore crucial to calibrate sound intensity measures of clinical audio recording and analysis systems on the basis of output of a sound-level meter. This study was designed to evaluate feasibility, validity, and accuracy of calibration methods, including audiometric speech noise signals and human voice signals under typical speech conditions. Calibration consisted of 3 comparisons between data from 29 measurement microphone-and-computer systems and data from the sound-level meter: signal-specific comparison with audiometric speech noise at 5 levels, signal-specific comparison with natural voice at 3 levels, and cross-signal comparison with natural voice at 3 levels. Intensity measures from recording systems were then linearly converted into calibrated data on the basis of these comparisons, and validity and accuracy of calibrated sound intensity were investigated. Very strong correlations and quasisimilarity were found between calibrated data and sound-level meter data across calibration methods and recording systems. Calibration of clinical sound intensity measures according to this method is feasible, valid, accurate, and representative for a heterogeneous set of microphones and data acquisition systems in real-life circumstances with distinct noise contexts.

  14. Design of a tracked ultrasound calibration phantom made of LEGO bricks

    NASA Astrophysics Data System (ADS)

    Walsh, Ryan; Soehl, Marie; Rankin, Adam; Lasso, Andras; Fichtinger, Gabor

    2014-03-01

    PURPOSE: Spatial calibration of tracked ultrasound systems is commonly performed using precisely fabricated phantoms. Machined or 3D-printed phantoms have relatively high cost and are not easily available, and the possibilities for modifying them are very limited. Our goal was to find a method to construct a calibration phantom from affordable, widely available components, which can be built in a short time, can be easily modified, and provides accuracy comparable to existing solutions. METHODS: We designed an N-wire calibration phantom made of LEGO® bricks. To affirm the phantom's reproducibility and build time, ten builds were done by first-time users. The phantoms were used for tracked ultrasound calibration by an experienced user. The success of each user's build was determined by the lowest root mean square (RMS) wire reprojection error of three calibrations. The accuracy and variance of the calibrations were evaluated for various tracked ultrasound probes. The proposed model was compared to two currently available phantom models for both electromagnetic and optical tracking. RESULTS: The phantom was successfully built by all ten first-time users in an average time of 18.8 minutes. It cost approximately $10 CAD for the required LEGO® bricks and averaged 0.69 mm of error in calibration reproducibility for ultrasound calibrations. The proposed phantom's image reprojections were 0.13 mm more erroneous than those of the highest performing current phantom model, and the average standard deviation of multiple 3D image reprojections differed by 0.05 mm between the phantoms. CONCLUSION: The phantom could be built in less time and at one third the cost of similar 3D-printed models, and was found to be capable of producing calibrations equivalent to those of 3D-printed phantoms.

  15. Evaluation of chiller modeling approaches and their usability for fault detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sreedharan, Priya

    Selecting the model is an important and essential step in model-based fault detection and diagnosis (FDD). Several factors must be considered in model evaluation, including accuracy, training data requirements, calibration effort, generality, and computational requirements. All modeling approaches fall somewhere between pure first-principles models and empirical models. The objective of this study was to evaluate different modeling approaches for their applicability to model-based FDD of vapor compression air conditioning units, commonly known as chillers. Three different models were studied: two are based on first principles and the third is empirical in nature. The first-principles models are the Gordon and Ng Universal Chiller model (2nd generation) and a modified version of the ASHRAE Primary Toolkit model. The DOE-2 chiller model as implemented in CoolTools{trademark} was selected for the empirical category. The models were compared in terms of their ability to reproduce the observed performance of an older chiller operating in a commercial building and a newer chiller in a laboratory. The DOE-2 and Gordon-Ng models were calibrated by linear regression, while a direct-search method was used to calibrate the Toolkit model. The CoolTools package contains a library of calibrated DOE-2 curves for a variety of different chillers and was used to calibrate the building chiller to the DOE-2 model. All three models displayed similar levels of accuracy. Of the first-principles models, the Gordon-Ng model has the advantage of being linear in the parameters, which allows more robust parameter estimation methods to be used and facilitates estimation of the uncertainty in the parameter values. The ASHRAE Toolkit model may have advantages when refrigerant temperature measurements are also available. The DOE-2 model can be expected to have advantages when very limited data are available to calibrate the model, as long as one of the previously identified models in the CoolTools library matches the performance of the chiller in question.
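
    The advantage of a model that is linear in its parameters, noted above for the Gordon-Ng model, is that calibration reduces to ordinary least squares with closed-form parameter uncertainties. A generic sketch (the regressors and coefficients below are invented placeholders, not the actual Gordon-Ng terms):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical chiller data: x1, x2 stand in for the temperature-derived
    # regressors of a linear-in-parameters model; they are not the real terms.
    n = 200
    X = np.column_stack([np.ones(n), rng.uniform(0, 1, n), rng.uniform(0, 1, n)])
    beta_true = np.array([0.15, 0.8, -0.3])
    y = X @ beta_true + rng.normal(0, 0.01, n)   # observed response with noise

    # Linear in the parameters => ordinary least squares gives the estimate
    # directly, plus a covariance-based uncertainty for each parameter.
    beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = res[0] / (n - X.shape[1])           # residual variance estimate
    cov = sigma2 * np.linalg.inv(X.T @ X)        # parameter covariance matrix
    print(beta, np.sqrt(np.diag(cov)))           # estimates and standard errors
    ```

    Nonlinear models such as the Toolkit model lack this closed form, which is why a direct-search method was needed to calibrate it.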

  16. Measurement of liver iron overload: noninvasive calibration of MRI-R2* by magnetic iron detector susceptometer.

    PubMed

    Gianesin, B; Zefiro, D; Musso, M; Rosa, A; Bruzzone, C; Balocco, M; Carrara, P; Bacigalupo, L; Banderali, S; Rollandi, G A; Gambaro, M; Marinelli, M; Forni, G L

    2012-06-01

    An accurate assessment of body iron accumulation is essential for the diagnosis and therapy of iron overload in diseases such as thalassemia or hemochromatosis. Magnetic iron detector susceptometry and MRI are noninvasive techniques capable of detecting iron overload in the liver. Although the transverse relaxation rate measured by MRI can be correlated with the presence of iron, a calibration step is needed to obtain the liver iron concentration. Magnetic iron detector provides an evaluation of the iron overload in the whole liver. In this article, we describe a retrospective observational study comparing magnetic iron detector and MRI examinations performed on the same group of 97 patients with transfusional or congenital iron overload. A biopsy-free linear calibration to convert the average transverse relaxation rate into iron overload (R² = 0.72), or into liver iron concentration evaluated in wet tissue (R² = 0.68), is presented. This article also compares liver iron concentrations calculated in dry tissue using MRI and the existing biopsy calibration with liver iron concentrations evaluated in wet tissue by magnetic iron detector to obtain an estimate of the wet-to-dry conversion factor of 6.7 ± 0.8 (95% confidence level). Copyright © 2011 Wiley-Liss, Inc.

  17. Multisite Evaluation of APEX for Water Quality: I. Best Professional Judgment Parameterization.

    PubMed

    Baffaut, Claire; Nelson, Nathan O; Lory, John A; Senaviratne, G M M M Anomaa; Bhandari, Ammar B; Udawatta, Ranjith P; Sweeney, Daniel W; Helmers, Matt J; Van Liew, Mike W; Mallarino, Antonio P; Wortmann, Charles S

    2017-11-01

    The Agricultural Policy Environmental eXtender (APEX) model is capable of estimating edge-of-field water, nutrient, and sediment transport and is used to assess the environmental impacts of management practices. The current practice is to fully calibrate the model for each site simulation, a task that requires resources and data not always available. The objective of this study was to compare model performance for flow, sediment, and phosphorus transport under two parameterization schemes: a best professional judgment (BPJ) parameterization based on readily available data and a fully calibrated parameterization based on site-specific soil, weather, event flow, and water quality data. The analysis was conducted using 12 datasets at four locations representing poorly drained soils and row-crop production under different tillage systems. Model performance was based on the Nash-Sutcliffe efficiency (NSE), the coefficient of determination (R²) and the regression slope between simulated and measured annualized loads across all site years. Although the BPJ model performance for flow was acceptable (NSE = 0.7) at the annual time step, calibration improved it (NSE = 0.9). Acceptable simulation of sediment and total phosphorus transport (NSE = 0.5 and 0.9, respectively) was obtained only after full calibration at each site. Given the unacceptable performance of the BPJ approach, uncalibrated use of APEX for planning or management purposes may be misleading. Model calibration with water quality data prior to using APEX for simulating sediment and total phosphorus loss is essential. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
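
    The Nash-Sutcliffe efficiency used above as the headline performance index is straightforward to compute; a minimal sketch with toy data:

    ```python
    import numpy as np

    def nash_sutcliffe(obs, sim):
        """NSE = 1 - (sum of squared errors) / (variance sum about the mean).
        1 is a perfect fit; <= 0 means the model is no better than
        predicting the observed mean."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    obs = [2.0, 4.0, 6.0, 8.0]
    print(nash_sutcliffe(obs, obs))            # perfect simulation -> 1.0
    print(nash_sutcliffe(obs, [5.0] * 4))      # predicting the mean -> 0.0
    ```

    This is why thresholds such as NSE = 0.5 or 0.7 are used as acceptability cutoffs: they measure how much better the model does than simply reporting the long-term mean load.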

  18. A SYSTEMS APPROACH UTILIZING GENERAL-PURPOSE AND SPECIAL-PURPOSE TEACHING MACHINES.

    ERIC Educational Resources Information Center

    SILVERN, LEONARD C.

    IN ORDER TO IMPROVE THE EMPLOYEE TRAINING-EVALUATION METHOD, TEACHING MACHINES AND PERFORMANCE AIDS MUST BE PHYSICALLY AND OPERATIONALLY INTEGRATED INTO THE SYSTEM, THUS RETURNING TRAINING TO THE ACTUAL JOB ENVIRONMENT. GIVEN THESE CONDITIONS, TRAINING CAN BE MEASURED, CALIBRATED, AND CONTROLLED WITH RESPECT TO ACTUAL JOB PERFORMANCE STANDARDS AND…

  19. Evaluation of the use of performance reference compounds in an oasis-HLB adsorbent based passive sampler for improving water concentration estimates of polar herbicides in freshwater

    USGS Publications Warehouse

    Mazzella, N.; Lissalde, S.; Moreira, S.; Delmas, F.; Mazellier, P.; Huckins, J.N.

    2010-01-01

    Passive samplers such as the Polar Organic Chemical Integrative Sampler (POCIS) are useful tools for monitoring trace levels of polar organic chemicals in aquatic environments. The use of performance reference compounds (PRC) spiked into the POCIS adsorbent for in situ calibration may improve the semiquantitative nature of water concentration estimates based on this type of sampler. In this work, deuterium-labeled atrazine-desisopropyl (DIA-d5) was chosen as the PRC because of its relatively high fugacity from Oasis HLB (the POCIS adsorbent used) and our earlier evidence of its isotropic exchange. In situ calibration of POCIS spiked with DIA-d5 was performed, and the resulting time-weighted average concentration estimates were compared with similar values from an automatic sampler equipped with Oasis HLB cartridges. Before PRC correction, water concentration estimates based on POCIS data and sampling rates from a laboratory calibration exposure were systematically lower than the reference concentrations obtained with the automatic sampler. Use of the DIA-d5 PRC data to correct POCIS sampling rates narrowed differences between corresponding values derived from the two methods. Application of PRCs for in situ calibration seems promising for improving POCIS-derived concentration estimates of polar pesticides. However, careful attention must be paid to the minimization of matrix effects when the quantification is performed by HPLC-ESI-MS/MS. © 2010 American Chemical Society.
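
    The PRC correction idea can be sketched numerically: first-order dissipation of the spiked PRC gives an in situ exchange rate constant, whose ratio to the laboratory value rescales the laboratory sampling rate before the time-weighted average concentration is computed. All numbers below are illustrative, not values from the study:

    ```python
    import math

    # Illustrative inputs (not from the study).
    rs_lab = 0.24          # laboratory sampling rate, L/day
    t = 14.0               # deployment time, days
    prc_retained = 0.55    # fraction of the PRC remaining after deployment
    ke_lab = 0.035         # laboratory PRC dissipation rate constant, 1/day

    # First-order PRC loss gives the in situ dissipation rate constant...
    ke_insitu = -math.log(prc_retained) / t
    # ...and its ratio to the lab value rescales the sampling rate.
    rs_insitu = rs_lab * ke_insitu / ke_lab

    n_accum = 85.0         # analyte mass accumulated on the sorbent, ng
    c_twa = n_accum / (rs_insitu * t)   # time-weighted average conc., ng/L
    print(round(rs_insitu, 3), round(c_twa, 1))
    ```

    Because the uncorrected laboratory sampling rate underestimated field uptake here, the PRC-corrected rate raises the concentration estimate toward the reference value, matching the direction of the correction reported above.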

  20. Validation of an assay for quantification of free normetanephrine, metanephrine and methoxytyramine in plasma by high performance liquid chromatography with coulometric detection: Comparison of peak-area vs. peak-height measurements.

    PubMed

    Nieć, Dawid; Kunicki, Paweł K

    2015-10-01

    Measurements of plasma concentrations of free normetanephrine (NMN), metanephrine (MN) and methoxytyramine (MTY) constitute the most diagnostically accurate screening test for pheochromocytomas and paragangliomas. The aim of this article is to present the results from a validation of an analytical method utilizing high performance liquid chromatography with coulometric detection (HPLC-CD) for quantifying plasma free NMN, MN and MTY. Additionally, peak integration by height and by area, and the use of one calibration curve for all batches or an individual calibration curve for each batch of samples, were explored to determine the optimal approach with regard to accuracy and precision. The method was validated using charcoal-stripped plasma spiked with solutions of NMN, MN, MTY and internal standard (4-hydroxy-3-methoxybenzylamine), with the exception of selectivity, which was evaluated by analysis of real plasma samples. Calibration curve performance, accuracy, precision and recovery were determined following both peak-area and peak-height measurements, and the obtained results were compared. The most accurate and precise method of calibration was evaluated by analyzing quality control samples at three concentration levels in 30 analytical runs. The detector response was linear over the entire tested concentration range from 10 to 2000 pg/mL with R² ≥ 0.9988. The LLOQ was 10 pg/mL for each analyte of interest. To improve accuracy for measurements at low concentrations, a weighted (1/amount) linear regression model was employed, which resulted in inaccuracies of -2.48 to 9.78% and 0.22 to 7.81% following peak-area and peak-height integration, respectively. The imprecisions ranged from 1.07 to 15.45% and from 0.70 to 11.65% for peak-area and peak-height measurements, respectively. The optimal approach to calibration was the one utilizing an individual calibration curve for each batch of samples and peak-height measurements. 
It was characterized by inaccuracies ranging from -3.39 to +3.27% and imprecisions from 2.17 to 13.57%. The established HPLC-CD method enables accurate and precise measurements of plasma free NMN, MN and MTY with reasonable selectivity. Preparing calibration curve based on peak-height measurements for each batch of samples yields optimal accuracy and precision. Copyright © 2015. Published by Elsevier B.V.
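
    The weighted (1/x) regression choice described above can be illustrated with a toy calibration curve: down-weighting the high-concentration points keeps the relative (percent) error small at the low end, which is what matters near the LLOQ. The concentrations and responses below are simulated, not the validated assay's data:

    ```python
    import numpy as np

    # Hypothetical calibration points spanning 10-2000 pg/mL; peak heights
    # are simulated with small additive noise.
    conc = np.array([10, 50, 100, 500, 1000, 2000], float)
    height = 0.012 * conc + np.array([0.3, -0.5, 1.0, -4.0, 6.0, -10.0]) * 1e-2

    # Weighted (1/x) linear least squares: solve (X'WX) b = X'Wy.
    w = 1.0 / conc
    W = np.diag(w)
    X = np.column_stack([np.ones_like(conc), conc])
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ height)
    intercept, slope = beta

    # Back-calculate the standards and report the worst-case % inaccuracy.
    back_calc = (height - intercept) / slope
    print(np.max(np.abs(back_calc - conc) / conc * 100))
    ```

    An unweighted fit of the same data would be dominated by the 1000-2000 pg/mL points and typically show larger relative inaccuracy at 10 pg/mL.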

  1. Improving the effectiveness of smart work zone technologies.

    DOT National Transportation Integrated Search

    2016-11-01

    This project evaluates the effectiveness of sensor network systems for work zone traffic estimation. The comparative analysis is performed on a work zone modeled in microsimulation and calibrated with field data from two Illinois work zones. Realis...

  2. Performance evaluation of radiant cooling system application on a university building in Indonesia

    NASA Astrophysics Data System (ADS)

    Satrio, Pujo; Sholahudin, S.; Nasruddin

    2017-03-01

    The paper describes a study developed to estimate the energy savings potential of a radiant cooling system installed in an institutional building in Indonesia. The simulations were carried out using IESVE to evaluate thermal performance and energy consumption. The building model was calibrated using the measured data for the installed radiant system, and the calibrated model was then used to simulate energy consumption and temperature distribution to determine the proportional energy savings and occupant comfort under different systems. The result was that radiant cooling integrated with a Dedicated Outside Air System (DOAS) could achieve 41.84% energy savings compared to the installed cooling system. The Computational Fluid Dynamics (CFD) simulation showed that a radiant system integrated with DOAS provides better human comfort than a radiant system integrated with Variable Air Volume (VAV); the Percentage of People Dissatisfied was kept below 10% using the proposed system.

  3. Intercomparison of 30+ years of AVHRR and Landsat-5 TM Surface Reflectance using Multiple Pseudo-Invariant Calibration Sites

    NASA Astrophysics Data System (ADS)

    Santamaría-Artigas, A. E.; Franch, B.; Vermote, E.; Roger, J. C.; Justice, C. O.

    2017-12-01

    The 30+ year daily surface reflectance long-term data record (LTDR) from the Advanced Very High Resolution Radiometer (AVHRR) is a valuable source of information for long-term studies of the Earth surface. This LTDR was generated by combining observations from multiple AVHRR sensors aboard different NOAA satellites starting from the early 1980s, and because of the lack of on-board calibration its quality must be evaluated. Previous studies have used observations from the Moderate Resolution Imaging Spectroradiometer (MODIS) over pseudo-invariant calibration sites (PICS) as a calibrated reference to assess the performance of AVHRR products; however, this limits the evaluation to the period after the MODIS launch. In this work, the AVHRR surface reflectance LTDR was evaluated against Landsat-5 Thematic Mapper (TM) data using observations from 4 well-known pseudo-invariant calibration sites (Sonoran, Saharan, Sudan1, and Libya4) over an extended time period (1984-2011). For the intercomparison, AVHRR and TM observations of each site were extracted, averaged over a 20 km x 20 km area, and aggregated to monthly mean values. To account for the spectral differences between sensors, Hyperion hyperspectral data from the Sonoran and Libya4 sites were convolved with sensor-specific relative spectral responses and used to compute spectral band adjustment factors (SBAFs). Results of the intercomparison are reported in terms of the root mean square difference (RMSD) and determination coefficient (r2). In general, there is good agreement between the surface reflectance products from both sensors. The overall RMSD and r2 for all the sites and AVHRR/TM combinations were 0.03 and 0.85 for the red band, and 0.04 and 0.81 for the near-infrared band. These results show the strong performance of the AVHRR surface reflectance LTDR throughout the considered period, underscoring its usefulness and value for long-term Earth studies. 
Figure 1 shows the red (filled markers) and near-infrared (empty markers) surface reflectance from AVHRR and TM for the complete evaluation period over the Saharan (diamond), Libya4 (square), Sudan1 (triangle), and Sonoran (circle) PICS.
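
The spectral band adjustment step described above can be sketched as follows. This is a minimal illustration, not the authors' processing chain: the surface spectrum and the two relative spectral responses are synthetic Gaussian stand-ins, and `band_reflectance`/`sbaf` are hypothetical helper names.

```python
import numpy as np

def band_reflectance(wavelengths, spectrum, rsr):
    """Band-equivalent reflectance: RSR-weighted average of the spectrum."""
    return np.trapz(spectrum * rsr, wavelengths) / np.trapz(rsr, wavelengths)

def sbaf(wavelengths, spectrum, rsr_ref, rsr_target):
    """Spectral band adjustment factor: ratio of the band-equivalent
    reflectances seen by the reference and target sensors."""
    return (band_reflectance(wavelengths, spectrum, rsr_ref)
            / band_reflectance(wavelengths, spectrum, rsr_target))

# Synthetic example: a linearly increasing surface spectrum and two
# Gaussian-shaped relative spectral responses around the red band.
wl = np.linspace(550.0, 750.0, 201)                    # nm
spectrum = 0.1 + 0.001 * (wl - 550.0)                  # hypothetical reflectance
rsr_tm = np.exp(-0.5 * ((wl - 660.0) / 30.0) ** 2)     # stand-in for TM band 3
rsr_avhrr = np.exp(-0.5 * ((wl - 630.0) / 50.0) ** 2)  # stand-in for AVHRR ch. 1

f = sbaf(wl, spectrum, rsr_tm, rsr_avhrr)
adjusted = 0.18 * f   # adjust a hypothetical AVHRR reflectance toward TM
```

For a spectrum increasing with wavelength, the TM-like band (centered higher) sees a larger band-equivalent reflectance, so the factor is above one.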

  4. Improvement in QEPAS system utilizing a second harmonic based wavelength calibration technique

    NASA Astrophysics Data System (ADS)

    Zhang, Qinduan; Chang, Jun; Wang, Fupeng; Wang, Zongliang; Xie, Yulei; Gong, Weihua

    2018-05-01

    A simple laser wavelength calibration technique based on the second harmonic signal is demonstrated in this paper to improve the performance of a quartz-enhanced photoacoustic spectroscopy (QEPAS) gas sensing system, e.g. its signal-to-noise ratio (SNR), detection limit, and long-term stability. A constant current corresponding to the gas absorption line, combined with a sinusoidal signal at frequency f/2, is used to drive the laser (constant driving mode), and a software-based real-time wavelength calibration technique is developed to eliminate the wavelength drift caused by ambient fluctuations. Compared to conventional wavelength modulation spectroscopy (WMS), this method allows a lower filtering bandwidth and an averaging algorithm to be applied to the QEPAS system, improving the SNR and detection limit. In addition, the real-time wavelength calibration guarantees that the laser output stays modulated at the gas absorption line. Water vapor was chosen as the target gas to evaluate the performance of the new system against the constant driving mode and a conventional WMS system. The water vapor sensor was made insensitive to incoherent external acoustic noise by the numerical averaging technique. As a result, the SNR of the wavelength-calibration-based system is 12.87 times that of the conventional WMS system. The new system achieved a better linear response (R2 = 0.9995) over the concentration range from 300 to 2000 ppmv and a minimum detection limit (MDL) of 630 ppbv.
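
The calibration idea amounts to locating the second-harmonic peak and re-centering the drive current on it. A toy numeric sketch under assumed values (the Lorentzian line shape, milliamp-scale currents, and helper names below are all hypothetical, not the paper's implementation):

```python
import numpy as np

def second_harmonic(current, line_center, width):
    """Toy 2f line shape: the negative second derivative of a Lorentzian
    absorption profile versus drive current, which peaks at line center."""
    x = (current - line_center) / width
    return (2.0 - 6.0 * x ** 2) / (1.0 + x ** 2) ** 3

def line_position(currents, signal):
    """Locate the 2f peak; the drive-current setpoint is then nudged
    toward this value to hold the laser on the absorption line."""
    return currents[np.argmax(signal)]

# The absorption line has drifted 0.4 mA above the nominal 50.0 mA setpoint.
currents = np.linspace(49.0, 51.0, 2001)        # mA sweep, hypothetical
signal = second_harmonic(currents, 50.4, 0.2)
drift = line_position(currents, signal) - 50.0  # estimated drift
setpoint = 50.0 + drift                         # corrected drive current
```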

  5. Novel Real-time Alignment and Calibration of the LHCb detector in Run2

    NASA Astrophysics Data System (ADS)

    Martinelli, Maurizio; LHCb Collaboration

    2017-10-01

    LHCb has introduced a novel real-time detector alignment and calibration strategy for LHC Run2. Data collected at the start of the fill are processed in a few minutes and used to update the alignment parameters, while the calibration constants are evaluated for each run. This procedure improves the quality of the online reconstruction. For example, the vertex locator is retracted and reinserted for stable beam conditions in each fill to be centred on the primary vertex position in the transverse plane. Consequently its position changes on a fill-by-fill basis. Critically, this new real-time alignment and calibration procedure allows identical constants to be used in the online and offline reconstruction, thus improving the correlation between triggered and offline-selected events. This offers the opportunity to optimise the event selection in the trigger by applying stronger constraints. The required computing time constraints are met thanks to a new dedicated framework using the multi-core farm infrastructure for the trigger. The motivation for a real-time alignment and calibration of the LHCb detector is discussed from both the operational and physics performance points of view. Specific challenges of this novel configuration are discussed, as well as the working procedures of the framework and its performance.

  6. Digital Signal Processing Techniques for the GIFTS SM EDU

    NASA Technical Reports Server (NTRS)

    Tian, Jialin; Reisse, Robert A.; Gazarik, Michael J.

    2007-01-01

    The Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) Sensor Module (SM) Engineering Demonstration Unit (EDU) is a high resolution spectral imager designed to measure infrared (IR) radiance using a Fourier transform spectrometer (FTS). The GIFTS instrument employs three Focal Plane Arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The raw interferogram measurements are radiometrically and spectrally calibrated to produce radiance spectra, which are further processed to obtain atmospheric profiles via retrieval algorithms. This paper describes several digital signal processing (DSP) techniques involved in the development of the calibration model. In the first stage, the measured raw interferograms must undergo a series of processing steps that include filtering, decimation, and detector nonlinearity correction. The digital filtering is achieved by employing a linear-phase even-length FIR complex filter that is designed based on the optimum equiripple criteria. Next, the detector nonlinearity effect is compensated for using a set of pre-determined detector response characteristics. In the next stage, a phase correction algorithm is applied to the decimated interferograms. This is accomplished by first estimating the phase function from the spectral phase response of the windowed interferogram, and then correcting the entire interferogram based on the estimated phase function. In the calibration stage, we first compute the spectral responsivity based on the previous results and the ideal Planck blackbody spectra at the given temperatures, from which, the calibrated ambient blackbody (ABB), hot blackbody (HBB), and scene spectra can be obtained. In the post-calibration stage, we estimate the Noise Equivalent Spectral Radiance (NESR) from the calibrated ABB and HBB spectra. 
The NESR is generally considered a measure of the instrument noise performance and can be estimated as the standard deviation of calibrated radiance spectra from multiple scans. To obtain an estimate of the FPA performance, we developed an efficient method of generating pixel performance assessments. In addition, a random pixel selection scheme is developed based on the pixel performance evaluation. This allows us to perform the calibration procedures on a random pixel population that is a good statistical representation of the entire FPA. The design and implementation of each individual component will be discussed in detail.
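
The NESR estimate as described (per-channel standard deviation of calibrated radiance over repeated scans) can be sketched directly; the scan count, channel count, and noise level below are synthetic stand-ins.

```python
import numpy as np

def nesr(calibrated_scans):
    """Noise Equivalent Spectral Radiance: per-channel standard deviation
    of calibrated radiance across repeated blackbody scans.
    `calibrated_scans` has shape (n_scans, n_channels)."""
    return np.std(calibrated_scans, axis=0, ddof=1)

# Synthetic: 100 scans of a flat 50-unit radiance with Gaussian noise.
rng = np.random.default_rng(0)
scans = 50.0 + 0.3 * rng.standard_normal((100, 256))
noise = nesr(scans)   # about 0.3 in every channel
```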

  7. In situ continuous monitoring of nitrogen with ion-selective electrodes in a constructed wetland receiving treated wastewater: an operating protocol to obtain reliable data.

    PubMed

    Papias, Sandrine; Masson, Matthieu; Pelletant, Sébastien; Prost-Boucle, Stéphanie; Boutin, Catherine

    2018-03-01

    Constructed wetlands receiving treated wastewater (CWtw) are placed between wastewater treatment plants and receiving water bodies on the assumption that they improve water quality. A better understanding of CWtw functioning is required to evaluate their real performance. To achieve this, in situ continuous monitoring of nitrate and ammonium concentrations with ion-selective electrodes (ISEs) can provide valuable information. However, this measurement requires precautions to produce good data quality, especially in areas with high effluent quality requirements. In order to study the functioning of a CWtw instrumented with six ISE probes, we developed an appropriate methodology for probe management and data processing. It is based on an evaluation of performance in the laboratory and an adapted field protocol for calibration, data treatment, and validation. The result is an operating protocol specifying an acceptable cleaning frequency of 2 weeks, a complementary calibration using CWtw water, a drift evaluation, and the determination of limits of quantification (1 mgN/L for ammonium and 0.5 mgN/L for nitrate). An example of a 9-month validated dataset confirms that it is fundamental to account for the technical limitations of the measuring equipment and to set appropriate maintenance and calibration methodologies in order to ensure an accurate interpretation of the data.
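
An ISE responds approximately per the Nernst relation, E = E0 + S·log10(C), so a field calibration can be sketched from two standards. This is a generic two-point sketch, not the paper's protocol (which additionally calibrates with CWtw water and tracks drift); the standards and potentials below are hypothetical.

```python
import math

def calibrate_ise(c1, e1, c2, e2):
    """Two-point ISE calibration: slope S (mV per decade) and intercept
    E0 from potentials measured in two standard solutions."""
    slope = (e2 - e1) / (math.log10(c2) - math.log10(c1))
    e0 = e1 - slope * math.log10(c1)
    return slope, e0

def concentration(e, slope, e0):
    """Invert the Nernst relation: concentration from measured potential."""
    return 10 ** ((e - e0) / slope)

# Hypothetical ammonium standards: 1 and 10 mgN/L giving 150 and 205 mV.
slope, e0 = calibrate_ise(1.0, 150.0, 10.0, 205.0)  # slope = 55 mV/decade
c = concentration(177.5, slope, e0)                 # ~3.16 mgN/L
```

Re-measuring a standard later and comparing against this curve gives the drift evaluation the protocol calls for.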

  8. Accuracy of rapid radiographic film calibration for intensity‐modulated radiation therapy verification

    PubMed Central

    Kulasekere, Ravi; Moran, Jean M.; Fraass, Benedick A.; Roberson, Peter L.

    2006-01-01

    A single calibration film method was evaluated for use with intensity‐modulated radiation therapy film quality assurance measurements. The single‐film method has the potential advantages of exposure simplicity, less media consumption, and improved processor quality control. Potential disadvantages include cross contamination of film exposure, implementation effort to document delivered dose, and added complication of film response analysis. Film response differences were measured between standard and single‐film calibration methods. Additional measurements were performed to help trace causes for the observed discrepancies. Kodak X‐OmatV (XV) film was found to have greater response variability than extended dose range (EDR) film. We found it advisable for XV film to relate the film response calibration for the single‐film method to a user‐defined optimal calibration geometry. Using a single calibration film exposed at the time of experiment, the total uncertainty of film response was estimated to be <2% (1%) for XV (EDR) film at 50 (100) cGy and higher, respectively. PACS numbers: 87.53.‐j, 87.53.Dq PMID:17533325

  9. Comparison of the Mortality Probability Admission Model III, National Quality Forum, and Acute Physiology and Chronic Health Evaluation IV hospital mortality models: implications for national benchmarking*.

    PubMed

    Kramer, Andrew A; Higgins, Thomas L; Zimmerman, Jack E

    2014-03-01

    To examine the accuracy of the original Mortality Probability Admission Model III, the ICU Outcomes Model/National Quality Forum modification of Mortality Probability Admission Model III, and the Acute Physiology and Chronic Health Evaluation IVa models for comparing observed and risk-adjusted hospital mortality predictions. Retrospective paired analyses of day 1 hospital mortality predictions using three prognostic models. Fifty-five ICUs at 38 U.S. hospitals from January 2008 to December 2012. Among 174,001 intensive care admissions, 109,926 met model inclusion criteria and 55,304 had data for mortality prediction using all three models. None. We compared patient exclusions and the discrimination, calibration, and accuracy of each model. Acute Physiology and Chronic Health Evaluation IVa excluded 10.7% of all patients, ICU Outcomes Model/National Quality Forum 20.1%, and Mortality Probability Admission Model III 24.1%. Discrimination of Acute Physiology and Chronic Health Evaluation IVa was superior, with an area under the receiver operating characteristic curve of 0.88 compared with Mortality Probability Admission Model III (0.81) and ICU Outcomes Model/National Quality Forum (0.80). Acute Physiology and Chronic Health Evaluation IVa was better calibrated (lowest Hosmer-Lemeshow statistic). The accuracy of Acute Physiology and Chronic Health Evaluation IVa was superior (adjusted Brier score = 31.0%) to that of Mortality Probability Admission Model III (16.1%) and ICU Outcomes Model/National Quality Forum (17.8%). Compared with observed mortality, Acute Physiology and Chronic Health Evaluation IVa overpredicted mortality by 1.5% and Mortality Probability Admission Model III by 3.1%; ICU Outcomes Model/National Quality Forum underpredicted mortality by 1.2%. Calibration curves showed that Acute Physiology and Chronic Health Evaluation IVa performed well over the entire risk range, unlike the Mortality Probability Admission Model and ICU Outcomes Model/National Quality Forum models. 
Acute Physiology and Chronic Health Evaluation IVa had better accuracy within patient subgroups and for specific admission diagnoses. Acute Physiology and Chronic Health Evaluation IVa offered the best discrimination and calibration on a large common dataset and excluded fewer patients than Mortality Probability Admission Model III or ICU Outcomes Model/National Quality Forum. The choice of ICU performance benchmarks should be based on a comparison of model accuracy using data for identical patients.
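
The "adjusted Brier score" used for the accuracy comparison is not defined in the abstract; a common convention, assumed in this sketch, rescales the Brier score against a no-skill reference that always predicts the cohort event rate.

```python
import numpy as np

def adjusted_brier(y_true, y_prob):
    """Brier score rescaled against a no-skill reference predicting the
    cohort event rate; expressed as a percent, higher is better, and 0
    means no improvement over the base rate."""
    brier = np.mean((y_prob - y_true) ** 2)
    base = np.mean(y_true)
    return 100.0 * (1.0 - brier / (base * (1.0 - base)))

# Synthetic cohort whose true event probabilities are known.
rng = np.random.default_rng(1)
p = rng.uniform(0.0, 0.4, 20000)                 # hypothetical predicted risks
y = (rng.uniform(size=20000) < p).astype(float)  # simulated outcomes
skill = adjusted_brier(y, p)                     # informative model: positive
no_skill = adjusted_brier(y, np.full(20000, y.mean()))  # base rate: exactly 0
```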

  10. SU-E-J-85: Leave-One-Out Perturbation (LOOP) Fitting Algorithm for Absolute Dose Film Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chu, A; Ahmad, M; Chen, Z

    2014-06-01

    Purpose: To introduce an outlier-recognition fitting routine for film dosimetry. It is not only flexible enough to work with any linear or non-linear regression, but can also provide information on the minimal number of sampling points, critical sampling distributions, and the suitability of analytical functions for absolute film-dose calibration. Methods: The technique of leave-one-out (LOO) cross validation is often used for statistical analyses of model performance. We used LOO analyses with perturbed bootstrap fitting, called leave-one-out perturbation (LOOP), for film-dose calibration. Given a threshold, the LOO process detects unfit points ("outliers") relative to the other cohorts, and a bootstrap fitting process follows to seek any possibility of further improvement through perturbation. Outliers were then reconfirmed by traditional t-test statistics and eliminated, and another LOOP feedback produced the final result. An over-sampled film-dose-calibration dataset was collected as a reference (dose range: 0-800 cGy), and various simulated conditions for outliers and sampling distributions were derived from the reference. Comparisons over the various conditions were made, and the performance of the fitting functions, polynomial and rational, was evaluated. Results: (1) LOOP demonstrates sensitive outlier recognition through the statistical correlation between leaving an outlier out and an exceptionally better goodness-of-fit. (2) With sufficient statistical information, LOOP can correct outliers under some low-sampling conditions that other "robust fits", e.g. Least Absolute Residuals, cannot. (3) Complete cross-validated analyses of LOOP indicate that the rational function performs far better than the polynomial. Even with 5 data points including one outlier, LOOP with a rational function can restore more than 95% of the reference values, while the polynomial fit completely failed under the same conditions. 
Conclusion: LOOP can cooperate with any fitting routine, functioning as a "robust fit". In addition, it can serve as a benchmark for film-dose calibration fitting performance.
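
The leave-one-out outlier-recognition core of LOOP can be sketched with a simple linear model; the full routine also adds bootstrap perturbation and t-test confirmation, which this sketch omits, and the calibration points below are hypothetical.

```python
import numpy as np

def loo_outlier(x, y, degree=1):
    """Leave-one-out residual scan: refit the model with each point held
    out and flag the point whose removal most improves the RMSE of the
    remaining fit (the LOO outlier-recognition idea)."""
    n = len(x)
    rmse = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        coef = np.polyfit(x[keep], y[keep], degree)
        rmse[i] = np.sqrt(np.mean((np.polyval(coef, x[keep]) - y[keep]) ** 2))
    return int(np.argmin(rmse)), rmse

# Hypothetical film calibration points (dose vs. response) with one
# corrupted reading at index 3.
dose = np.array([0.0, 100.0, 200.0, 300.0, 400.0, 500.0])
resp = 0.004 * dose + np.array([0.0, 0.01, -0.01, 0.60, 0.005, -0.008])
idx, _ = loo_outlier(dose, resp)   # flags the corrupted point
```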

  11. A multimodality imaging-compatible insertion robot with a respiratory motion calibration module designed for ablation of liver tumors: a preclinical study.

    PubMed

    Li, Dongrui; Cheng, Zhigang; Chen, Gang; Liu, Fangyi; Wu, Wenbo; Yu, Jie; Gu, Ying; Liu, Fengyong; Ren, Chao; Liang, Ping

    2018-04-03

    To test the accuracy and efficacy of a multimodality imaging-compatible insertion robot with a respiratory motion calibration module designed for ablation of liver tumors in phantom and animal models, and to evaluate and compare the influence of intervention experience on robot-assisted and ultrasound-guided ablation procedures. Accuracy tests on a rigid body/phantom model with a respiratory movement simulation device, and microwave ablation tests on porcine liver tumor/rabbit liver cancer models, were performed either with the robot we designed or under traditional ultrasound guidance, by physicians with or without intervention experience. In the accuracy tests performed by physicians without intervention experience, the insertion accuracy and efficiency of the robot-assisted group were higher than those of the ultrasound-guided group, with statistically significant differences. In the microwave ablation tests performed by physicians without intervention experience, a better complete ablation rate was achieved with the robot. In the microwave ablation tests performed by physicians with intervention experience, there was no statistically significant difference in insertion number or total ablation time between the robot-assisted and ultrasound-guided groups. NASA-TLX evaluation suggested that the robot-assisted insertion and microwave ablation process was more comfortable for physicians both with and without experience. The multimodality imaging-compatible insertion robot with a respiratory motion calibration module could increase insertion accuracy and ablation efficacy, and minimize the influence of the physicians' experience. The ablation procedure could be more comfortable and less stressful with the application of the robot.

  12. Evaluation of different parameterizations of the spatial heterogeneity of subsurface storage capacity for hourly runoff simulation in boreal mountainous watershed

    NASA Astrophysics Data System (ADS)

    Hailegeorgis, Teklu T.; Alfredsen, Knut; Abdella, Yisak S.; Kolberg, Sjur

    2015-03-01

    Identification of proper parameterizations of spatial heterogeneity is required for precipitation-runoff models. However, relevant studies specifically aimed at hourly runoff simulation in boreal mountainous catchments are not common. We conducted calibration and evaluation of hourly runoff simulation in a boreal mountainous watershed based on six different parameterizations of the spatial heterogeneity of subsurface storage capacity, for a semi-distributed (subcatchments, hereafter called elements) and a distributed (1 × 1 km2 grid) setup. We evaluated representations of element-to-element, grid-to-grid, and probabilistic subcatchment/subbasin, subelement, and subgrid heterogeneities. The parameterization cases satisfactorily reproduced the streamflow hydrographs, with Nash-Sutcliffe efficiency values for the calibration and validation periods of up to 0.84 and 0.86, respectively, and similarly up to 0.85 and 0.90 for the log-transformed streamflow. The parameterizations reproduced the flow duration curves, but predictive reliability in terms of quantile-quantile (Q-Q) plots indicated marked over- and underpredictions. The simple and parsimonious parameterizations with no subelement or subgrid heterogeneity provided simulation performance equivalent to the more complex cases. The results indicated that (i) identification of parameterizations requires measurements from a denser network of precipitation stations than is required for acceptable calibration of the precipitation-streamflow relationships, (ii) there are challenges in identifying parameterizations based only on calibration against catchment-integrated streamflow observations, and (iii) there is a potential preference for the simple and parsimonious parameterizations for operational forecasting, contingent on their equivalent simulation performance for the available input data. 
In addition, the effects of non-identifiability of parameters (interactions and equifinality) can contribute to the non-identifiability of the parameterizations.
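
The Nash-Sutcliffe efficiency used to score the hydrographs, including its log-transformed variant, is straightforward to compute; the flow values below are hypothetical.

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 minus the ratio of model error
    variance to the variance of the observations. 1 is a perfect fit;
    0 matches the observed-mean benchmark."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - (np.sum((observed - simulated) ** 2)
                  / np.sum((observed - observed.mean()) ** 2))

obs = np.array([1.0, 3.0, 5.0, 9.0, 6.0, 4.0])   # hypothetical hourly flows
sim = np.array([1.2, 2.7, 5.4, 8.5, 6.3, 3.9])
score = nse(obs, sim)
log_score = nse(np.log(obs), np.log(sim))         # log-transformed variant
```

The log-transformed variant weights low-flow periods more heavily, which is why the abstract reports both.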

  13. Action Sport Cameras as an Instrument to Perform a 3D Underwater Motion Analysis.

    PubMed

    Bernardina, Gustavo R D; Cerveri, Pietro; Barros, Ricardo M L; Marins, João C B; Silvatti, Amanda P

    2016-01-01

    Action sport cameras (ASC) are currently adopted mainly for entertainment purposes, but their continual technical improvement, together with decreasing cost, is opening them up to quantitative three-dimensional (3D) motion analysis in sport gesture studies and athletic performance evaluation. Extending this technology to sport analysis, however, still requires a methodological step forward to make ASCs a metric system, encompassing ad hoc camera setup, image processing, feature tracking, calibration, and 3D reconstruction. Unlike traditional laboratory analysis, such requirements become an issue when coping with both indoor and outdoor motion acquisitions of athletes. In swimming analysis, for example, the camera setup and the calibration protocol are particularly demanding, since both land and underwater cameras are required. In particular, underwater camera calibration can be an issue affecting the reconstruction accuracy. In this paper, the aim is to evaluate the feasibility of ASCs for 3D underwater analysis by focusing on camera setup and data acquisition protocols. Two GoPro Hero3+ Black cameras (frequency: 60 Hz; image resolutions: 1280×720/1920×1080 pixels) were placed underwater in a swimming pool, surveying a working volume of about 6 m3. A two-step custom calibration procedure, consisting of the acquisition of one static triad and one moving wand, carrying nine and one spherical passive markers, respectively, was implemented. After assessing the camera parameters, a rigid bar carrying two markers at a known distance was acquired in several positions within the working volume. The average error on the reconstructed inter-marker distances was less than 2.5 mm (1280×720) and 1.5 mm (1920×1080). The results of this study demonstrate that the calibration of underwater ASCs is feasible, enabling quantitative kinematic measurements with accuracy comparable to traditional motion capture systems.
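
The rigid-bar accuracy check described here (comparing reconstructed inter-marker distances against the known bar length over many poses) can be sketched with synthetic reconstructions; the bar length, volume, and noise level below are assumptions, not the study's data.

```python
import numpy as np

def reconstruction_error(points_a, points_b, true_length):
    """Rigid-bar accuracy check: mean absolute difference between the
    reconstructed distance of two markers and the known bar length."""
    d = np.linalg.norm(points_a - points_b, axis=1)
    return float(np.mean(np.abs(d - true_length)))

# Synthetic reconstructions of a 250 mm bar with ~1 mm marker noise.
rng = np.random.default_rng(2)
a = rng.uniform(0.0, 2000.0, (200, 3))              # marker A positions, mm
direction = rng.standard_normal((200, 3))
direction /= np.linalg.norm(direction, axis=1, keepdims=True)
b = a + 250.0 * direction + 1.0 * rng.standard_normal((200, 3))
err = reconstruction_error(a, b, 250.0)             # on the order of 1 mm
```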

  14. Action Sport Cameras as an Instrument to Perform a 3D Underwater Motion Analysis

    PubMed Central

    Cerveri, Pietro; Barros, Ricardo M. L.; Marins, João C. B.; Silvatti, Amanda P.

    2016-01-01

    Action sport cameras (ASC) are currently adopted mainly for entertainment purposes, but their continual technical improvement, together with decreasing cost, is opening them up to quantitative three-dimensional (3D) motion analysis in sport gesture studies and athletic performance evaluation. Extending this technology to sport analysis, however, still requires a methodological step forward to make ASCs a metric system, encompassing ad hoc camera setup, image processing, feature tracking, calibration, and 3D reconstruction. Unlike traditional laboratory analysis, such requirements become an issue when coping with both indoor and outdoor motion acquisitions of athletes. In swimming analysis, for example, the camera setup and the calibration protocol are particularly demanding, since both land and underwater cameras are required. In particular, underwater camera calibration can be an issue affecting the reconstruction accuracy. In this paper, the aim is to evaluate the feasibility of ASCs for 3D underwater analysis by focusing on camera setup and data acquisition protocols. Two GoPro Hero3+ Black cameras (frequency: 60 Hz; image resolutions: 1280×720/1920×1080 pixels) were placed underwater in a swimming pool, surveying a working volume of about 6 m3. A two-step custom calibration procedure, consisting of the acquisition of one static triad and one moving wand, carrying nine and one spherical passive markers, respectively, was implemented. After assessing the camera parameters, a rigid bar carrying two markers at a known distance was acquired in several positions within the working volume. The average error on the reconstructed inter-marker distances was less than 2.5 mm (1280×720) and 1.5 mm (1920×1080). The results of this study demonstrate that the calibration of underwater ASCs is feasible, enabling quantitative kinematic measurements with accuracy comparable to traditional motion capture systems. PMID:27513846

  15. Relevance of the c-statistic when evaluating risk-adjustment models in surgery.

    PubMed

    Merkow, Ryan P; Hall, Bruce L; Cohen, Mark E; Dimick, Justin B; Wang, Edward; Chow, Warren B; Ko, Clifford Y; Bilimoria, Karl Y

    2012-05-01

    The measurement of hospital quality based on outcomes requires risk adjustment. The c-statistic is a popular tool used to judge model performance, but can be limited, particularly when evaluating specific operations in focused populations. Our objectives were to examine the interpretation and relevance of the c-statistic when used in models with increasingly similar case mix and to consider an alternative perspective on model calibration based on a graphical depiction of model fit. From the American College of Surgeons National Surgical Quality Improvement Program (2008-2009), patients were identified who underwent a general surgery procedure, and procedure groups were increasingly restricted: colorectal-all, colorectal-elective cases only, and colorectal-elective cancer cases only. Mortality and serious morbidity outcomes were evaluated using logistic regression-based risk adjustment, and model c-statistics and calibration curves were used to compare model performance. During the study period, 323,427 general, 47,605 colorectal-all, 39,860 colorectal-elective, and 21,680 colorectal cancer patients were studied. Mortality ranged from 1.0% in general surgery to 4.1% in the colorectal-all group, and serious morbidity ranged from 3.9% in general surgery to 12.4% in the colorectal-all procedural group. As case mix was restricted, c-statistics progressively declined from the general to the colorectal cancer surgery cohorts for both mortality and serious morbidity (mortality: 0.949 to 0.866; serious morbidity: 0.861 to 0.668). Calibration was evaluated graphically by examining predicted vs observed number of events over risk deciles. For both mortality and serious morbidity, there was no qualitative difference in calibration identified between the procedure groups. 
In the present study, we demonstrate how the c-statistic can become less informative and, in certain circumstances, can lead to incorrect model-based conclusions as case mix is restricted and patients become more homogeneous. Although it remains an important tool, caution is advised when the c-statistic is advanced as the sole measure of model performance. Copyright © 2012 American College of Surgeons. All rights reserved.
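
The c-statistic discussed above is the probability that a randomly chosen event case receives a higher predicted risk than a randomly chosen non-event case. A direct pair-counting sketch (fine for small cohorts; rank-based formulas scale better), with hypothetical outcomes and risks:

```python
import numpy as np

def c_statistic(y_true, y_prob):
    """Concordance (c) statistic: fraction of event/non-event pairs in
    which the event case has the higher predicted risk, counting ties
    as one half."""
    pos = y_prob[y_true == 1]
    neg = y_prob[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

y = np.array([0, 0, 1, 0, 1, 1, 0, 1])                      # outcomes
p = np.array([0.1, 0.3, 0.35, 0.2, 0.8, 0.7, 0.4, 0.4])     # predicted risks
auc = c_statistic(y, p)   # 14.5 concordant half-pairs out of 16
```

As the abstract argues, restricting case mix shrinks the spread of predicted risks, which drags this pairwise statistic down even when calibration is unchanged.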

  16. Calibration of the γ-Equation Transition Model for High Reynolds Flows at Low Mach

    NASA Astrophysics Data System (ADS)

    Colonia, S.; Leble, V.; Steijl, R.; Barakos, G.

    2016-09-01

    The numerical simulation of flows over large-scale wind turbine blades without considering the transition from laminar to fully turbulent flow may result in incorrect estimates of the blade loads and performance. Thanks to its relative simplicity and promising results, the Local-Correlation based Transition Modelling concept represents a valid way to include transitional effects into practical CFD simulations. However, the model involves coefficients that need tuning. In this paper, the γ-equation transition model is assessed and calibrated, for a wide range of Reynolds numbers at low Mach, as needed for wind turbine applications. An aerofoil is used to evaluate the original model and calibrate it; while a large scale wind turbine blade is employed to show that the calibrated model can lead to reliable solutions for complex three-dimensional flows. The calibrated model shows promising results for both two-dimensional and three-dimensional flows, even if cross-flow instabilities are neglected.

  17. Calibration for the SAGE III/EOS instruments

    NASA Technical Reports Server (NTRS)

    Chu, W. P.; Mccormick, M. P.; Zawodny, J. M.; Mcmaster, L. R.

    1991-01-01

    The calibration plan for maintaining the performance of the SAGE III instruments during the Earth Observing System (EOS) mission lifetime is described. The SAGE III calibration plan consists of detailed preflight and in-flight calibration of instrument performance, together with a correlative measurement program to validate the data products derived from the inverted satellite measurements. Since the measurement technique is primarily solar/lunar occultation, the instrument will be self-calibrating, using the sun as the calibration source during routine operation in flight. The instrument is designed to perform radiometric calibration of throughput, spectral, and spatial response in flight during routine operation. Spectral calibration can be performed in flight from observation of the solar Fraunhofer lines within the spectral region from 290 to 1030 nm.

  18. Design of an ultra-portable field transfer radiometer supporting automated vicarious calibration

    NASA Astrophysics Data System (ADS)

    Anderson, Nikolaus; Thome, Kurtis; Czapla-Myers, Jeffrey; Biggar, Stuart

    2015-09-01

    The University of Arizona Remote Sensing Group (RSG) began outfitting the radiometric calibration test site (RadCaTS) at Railroad Valley, Nevada, in 2004 for automated vicarious calibration of Earth-observing sensors. RadCaTS was upgraded to use RSG's custom 8-band ground-viewing radiometers (GVRs) beginning in 2011, and currently four GVRs are deployed, providing an average reflectance for the test site. This measurement of ground reflectance is the most critical component of vicarious calibration using the reflectance-based method. To ensure the quality of these measurements, RSG has been exploring more efficient and accurate methods of on-site calibration evaluation. This work describes the design of, and initial results from, a small portable transfer radiometer for validating GVR calibration on site. Prior to deployment, RSG uses high-accuracy laboratory calibration methods to provide radiance calibrations with low uncertainties for each GVR. After deployment, a solar-radiation-based calibration has typically been used. That method is highly dependent on a clear, stable atmosphere, requires at least two people to perform, is time-consuming in post-processing, and depends on several large pieces of equipment. To provide more regular and more accurate calibration monitoring, the small portable transfer radiometer is designed for quick, one-person operation and on-site field calibration comparisons. The radiometer is also suited for laboratory calibration use and could thus serve as a transfer radiometer calibration standard for ground-viewing radiometers at a RadCalNet site.

  19. Fast calibration of electromagnetically tracked oblique-viewing rigid endoscopes.

    PubMed

    Liu, Xinyang; Rice, Christina E; Shekhar, Raj

    2017-10-01

    The oblique-viewing (i.e., angled) rigid endoscope is a commonly used tool in conventional endoscopic surgeries. The relative rotation between its two movable parts, the telescope and the camera head, creates a rotation offset between an actual object and its projection in the camera image. A calibration method tailored to compensate for this offset is needed. We developed a fast calibration method for oblique-viewing rigid endoscopes suitable for clinical use. In contrast to prior approaches based on optical tracking, we used electromagnetic (EM) tracking as the external tracking hardware to improve compactness and practicality. Two EM sensors were mounted on the telescope and the camera head, respectively, with care taken to minimize EM tracking errors. Single-image calibration was incorporated into the method, and a sterilizable plate, laser-marked with the calibration pattern, was also developed. Furthermore, we proposed a general algorithm to estimate the rotation center in the camera image, and derived formulas for updating the camera matrix under clockwise and counterclockwise rotations. The proposed calibration method was validated using a conventional [Formula: see text], 5-mm laparoscope. Freehand calibrations were performed using the proposed method, and the calibration time averaged 2 min 8 s. The calibration accuracy was evaluated in a simulated clinical setting with several surgical tools present in the magnetic field of the EM tracker. The root-mean-square re-projection error averaged 4.9 pixels (range 2.4-8.5 pixels, with image resolution of [Formula: see text]) for rotation angles ranging from [Formula: see text] to [Formula: see text]. We developed a method for fast and accurate calibration of oblique-viewing rigid endoscopes. The method was designed to be performed in the operating room and will therefore support clinical translation of many emerging endoscopic computer-assisted surgical systems.
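
The rotation-offset compensation at the heart of this calibration can be illustrated by rotating image points about the estimated rotation center. This is a simplified 2D sketch, not the paper's camera-matrix update: the center, points, and angle are hypothetical, and the rotation is taken counterclockwise in math convention.

```python
import numpy as np

def rotate_about_center(points, center, angle_rad, clockwise=False):
    """Compensate a rotation offset: rotate 2D image points about the
    estimated rotation center by the telescope/camera-head relative
    angle (counterclockwise positive unless `clockwise` is set)."""
    if clockwise:
        angle_rad = -angle_rad
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    r = np.array([[c, -s], [s, c]])
    return (points - center) @ r.T + center

center = np.array([320.0, 240.0])                  # hypothetical rotation center
pts = np.array([[420.0, 240.0], [320.0, 140.0]])   # image points, pixels
out = rotate_about_center(pts, center, np.pi / 2)  # 90-degree relative rotation
```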

  20. Development and Evaluation of an Automated Machine Learning Algorithm for In-Hospital Mortality Risk Adjustment Among Critical Care Patients.

    PubMed

    Delahanty, Ryan J; Kaufman, David; Jones, Spencer S

    2018-06-01

    Risk adjustment algorithms for ICU mortality are necessary for measuring and improving ICU performance. Existing risk adjustment algorithms are not widely adopted. Key barriers to adoption include licensing and implementation costs as well as labor costs associated with human-intensive data collection. Widespread adoption of electronic health records makes automated risk adjustment feasible. Using modern machine learning methods and open source tools, we developed and evaluated a retrospective risk adjustment algorithm for in-hospital mortality among ICU patients. The Risk of Inpatient Death score can be fully automated and is reliant upon data elements that are generated in the course of usual hospital processes. One hundred thirty-one ICUs in 53 hospitals operated by Tenet Healthcare. A cohort of 237,173 ICU patients discharged between January 2014 and December 2016. The data were randomly split into training (36 hospitals) and validation (17 hospitals) data sets. Feature selection and model training were carried out using the training set, while the discrimination, calibration, and accuracy of the model were assessed in the validation data set. Model discrimination was evaluated based on the area under the receiver operating characteristic curve; accuracy and calibration were assessed via adjusted Brier scores and visual analysis of calibration curves. Seventeen features, including a mix of clinical and administrative data elements, were retained in the final model. The Risk of Inpatient Death score demonstrated excellent discrimination (area under receiver operating characteristic curve = 0.94) and calibration (adjusted Brier score = 52.8%) in the validation dataset; these results compare favorably to the published performance statistics for the most commonly used mortality risk adjustment algorithms. Low adoption of ICU mortality risk adjustment algorithms impedes progress toward increasing the value of the healthcare delivered in ICUs. 
The Risk of Inpatient Death score has many attractive attributes that address the key barriers to adoption of ICU risk adjustment algorithms and performs comparably to existing human-intensive algorithms. Automated risk adjustment algorithms have the potential to obviate known barriers to adoption such as cost-prohibitive licensing fees and significant direct labor costs. Further evaluation is needed to ensure that the level of performance observed in this study could be achieved at independent sites.
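
    The two evaluation metrics named in the abstract can be computed from predictions and outcomes alone. Below is a hedged sketch: the AUC uses the standard rank-sum identity, and `scaled_brier` implements one common "adjusted" Brier formulation (scaled against an uninformative event-rate model); the paper's exact adjustment may differ.

```python
def auc(y_true, y_score):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity,
    with average ranks for tied scores."""
    pairs = sorted(zip(y_score, y_true))
    rank_of = [0.0] * len(pairs)
    i = 0
    while i < len(pairs):
        j = i
        while j + 1 < len(pairs) and pairs[j + 1][0] == pairs[i][0]:
            j += 1
        avg = (i + j) / 2.0 + 1.0  # average 1-based rank of the tie group
        for k in range(i, j + 1):
            rank_of[k] = avg
        i = j + 1
    pos_rank_sum = sum(r for r, (_, y) in zip(rank_of, pairs) if y == 1)
    n1 = sum(y for _, y in pairs)
    n0 = len(pairs) - n1
    return (pos_rank_sum - n1 * (n1 + 1) / 2.0) / (n1 * n0)

def scaled_brier(y_true, y_prob):
    """Brier score scaled against a model that always predicts the event
    rate (one common 'adjusted' Brier formulation; 1.0 is perfect)."""
    n = len(y_true)
    brier = sum((p - y) ** 2 for p, y in zip(y_prob, y_true)) / n
    base = sum(y_true) / n
    brier_ref = sum((base - y) ** 2 for y in y_true) / n
    return 1.0 - brier / brier_ref
```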

  1. LANDSAT-4 MSS and Thematic Mapper data quality and information content analysis

    NASA Technical Reports Server (NTRS)

    Anuta, P.; Bartolucci, L.; Dean, E.; Lozano, F.; Malaret, E.; Mcgillem, C. D.; Valdes, J.; Valenzuela, C.

    1984-01-01

    LANDSAT-4 thematic mapper (TM) and multispectral scanner (MSS) data were analyzed to obtain information on data quality and information content. Geometric evaluations were performed to test band-to-band registration accuracy. Thematic mapper overall system resolution was evaluated using scene objects which demonstrated sharp high contrast edge responses. Radiometric evaluation included detector relative calibration, effects of resampling, and coherent noise effects. Information content evaluation was carried out using clustering, principal components, transformed divergence separability measure, and supervised classifiers on test data. A detailed spectral class analysis (multispectral classification) was carried out to compare the information content of the MSS and TM for a large number of scene classes. A temperature-mapping experiment was carried out for a cooling pond to test the quality of thermal-band calibration. Overall TM data quality is very good. The MSS data are noisier than previous LANDSAT results.

  2. Development of Eye Dosimeter Using Additive Manufacturing Techniques to Monitor Occupational Eye Lens Exposures to Interventional Radiologists

    NASA Astrophysics Data System (ADS)

    Choi, JungHwan

    In this project, an eye dosimeter was designed for monitoring occupational eye-lens exposures of interventional radiologists, who are often indirectly exposed to scattered radiation from the patient while performing image-guided procedures. The dosimeter was designed with computer-aided design software and produced using additive manufacturing techniques. It consists of three separate components that attach to the hinges and the bridge of the worker's protective eyewear. The produced dosimeter was radiologically calibrated to measure the lens dose on an anthropomorphic phantom of the human head. To supplement the physical design, an algorithm was written that prompts the user to input the element responses of the dosimeter and then estimates the average angle, energy, and resulting lens dose of the exposure by comparing the input with the data acquired during the dosimeter calibration procedure. The performance of the calibrated dosimeter (and the algorithm) was evaluated according to guidelines of the American National Standards Institute, and the dosimeter was in compliance with the standard's performance criteria, which suggests that the design of the eye dosimeter is feasible.

  3. Validation of Storm Water Management Model Storm Control Measures Modules

    NASA Astrophysics Data System (ADS)

    Simon, M. A.; Platz, M. C.

    2017-12-01

    EPA's Storm Water Management Model (SWMM) is a computational code heavily relied upon by industry for the simulation of wastewater and stormwater infrastructure performance. Many municipalities are relying on SWMM results to design multi-billion-dollar, multi-decade infrastructure upgrades. Since the 1970s, EPA and others have developed five major releases, the most recent ones containing storm control measures modules for green infrastructure. The main objective of this study was to quantify the accuracy with which SWMM v5.1.10 simulates the hydrologic activity of previously monitored low impact developments. Model performance was evaluated with a mathematical comparison of outflow hydrographs and total outflow volumes, using empirical data and a multi-event, multi-objective calibration method. The calibration methodology utilized PEST++ Version 3, a parameter estimation tool, which aided in the selection of unmeasured hydrologic parameters. From the validation study and sensitivity analysis, several model improvements were identified to advance SWMM LID Module performance for permeable pavements, infiltration units, and green roofs; these improvements were implemented and are reported herein. Overall, it was determined that SWMM can successfully simulate low impact development controls given accurate model confirmation, parameter measurement, and model calibration.

  4. Investigation of factors affecting the heater wire method of calibrating fine wire thermocouples

    NASA Technical Reports Server (NTRS)

    Keshock, E. G.

    1972-01-01

    An analytical investigation was made of a transient method of calibrating fine wire thermocouples. The system consisted of a 10 mil diameter standard thermocouple (Pt, Pt-13% Rh) and a 0.8 mil diameter chromel-alumel thermocouple attached to a 20 mil diameter electrically heated platinum wire. The calibration procedure consisted of electrically heating the wire to approximately 2500 F within about a seven-second period in an environment approximating atmospheric conditions at 120,000 feet. Rapid periodic readout of the standard and fine wire thermocouple signals permitted a comparison of the two temperature indications. An analysis was performed which indicated that the temperature distortion at the heater wire produced by the thermocouple junctions appears to be of negligible magnitude. Consequently, the calibration technique appears to be basically sound, although several practical changes which appear desirable are presented and discussed. Additional investigation is warranted to evaluate radiation effects and transient response characteristics.

  5. Reference NO2 calibration system for ground-based intercomparisons during NASA's GTE/CITE 2 mission

    NASA Technical Reports Server (NTRS)

    Fried, Alan; Nunnermacker, Linda; Cadoff, Barry; Sams, Robert; Yates, Nathan

    1990-01-01

    An NO2 calibration system, based on a permeation device and a two-stage dynamic dilution system, was designed, constructed, and characterized at the National Bureau of Standards. In this system, calibrant flow entering the second stage was controlled without contacting a metal flow controller, and permeation oven temperature and flow were continuously maintained, even during transport. The system performance and the permeation emission rate were characterized by extensive laboratory tests. This system was capable of accurately delivering known NO2 concentrations in the ppbv and sub-ppbv concentration range with a total uncertainty of approximately 10 percent. The calibration system was placed on board NASA research aircraft at both the Wallops Island and Ames research facilities. There it was employed as the reference standard in NASA's Global Tropospheric Experiment/Chemical Instrumental Test and Evaluation 2 mission in August 1986.
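
    The arithmetic behind a permeation-plus-dilution standard like this can be sketched briefly. The formulas below use the standard permeation-device relation and a simple second-stage pick-off dilution; the flow values, pick-off configuration, and molar volume at 25 C are illustrative assumptions, not figures from the paper.

```python
def permeation_ppbv(emission_ng_min, mw_g_mol, flow_l_min, molar_vol=24.45):
    """First-stage concentration from a permeation device:
    C[ppbv] = E[ng/min] * Vm[L/mol] / (MW[g/mol] * Q[L/min])."""
    return emission_ng_min * molar_vol / (mw_g_mol * flow_l_min)

def two_stage_ppbv(emission_ng_min, mw_g_mol, q1_l_min, pick_l_min, q2_l_min):
    """Second stage takes 'pick' L/min of the first-stage mixture and
    dilutes it into q2 L/min of zero air (a simplified configuration)."""
    c1 = permeation_ppbv(emission_ng_min, mw_g_mol, q1_l_min)
    return c1 * pick_l_min / (pick_l_min + q2_l_min)

# Illustrative numbers (not from the paper): 250 ng/min NO2 (MW 46.01 g/mol)
c_out = two_stage_ppbv(250.0, 46.01, 5.0, 0.05, 10.0)  # sub-ppbv output
```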

  6. Self-calibration of a W/Re thermocouple using a miniature Ru-C (1954 °C) eutectic cell

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ongrai, O.; University of Surrey, Guildford, Surrey; National Institute of Metrology, Klong 5, Klong Luang, Pathumthani

    2013-09-11

    Previous successful investigations of miniature cobalt-carbon (Co-C, 1324 °C) and palladium-carbon (Pd-C, 1492 °C) high temperature fixed-point cells for thermocouple self-calibration have been reported [1-2]. In the present work, we describe a series of measurements of a miniature ruthenium-carbon (Ru-C) eutectic cell (melting point 1954 °C) to evaluate the repeatability and stability of a W/Re thermocouple (type C) by means of in-situ calibration. A miniature Ru-C eutectic fixed-point cell with outside diameter 14 mm and length 30 mm was fabricated to be used as a self-calibrating device. The performance of the miniature Ru-C cell and the type C thermocouple is presented, including characterization of the stability, repeatability, thermal environment influence, ITS-90 temperature realization and measurement uncertainty.

  7. Novel Hyperspectral Sun Photometer for Satellite Remote Sensing Data Radiometric Calibration and Atmospheric Aerosol Studies

    NASA Technical Reports Server (NTRS)

    Pagnutti, Mary; Ryan, Robert E.; Holekamp, Kara; Harrington, Gary; Frisbie, Troy

    2006-01-01

    A simple, cost-effective hyperspectral sun photometer for radiometric vicarious remote sensing system calibration, air quality monitoring, and potentially in-situ planetary climatological studies was developed. The device was constructed solely from off-the-shelf components and was designed to be easily deployable in support of short-term verification and validation data collects. This sun photometer not only provides the same data products as existing multi-band sun photometers but also requires a simpler setup and less data acquisition time, and allows for a more direct calibration approach. Fielding this instrument has also enabled Stennis Space Center (SSC) Applied Sciences Directorate personnel to cross-calibrate existing sun photometers. This innovative research will position SSC personnel to perform air quality assessments in support of the NASA Applied Sciences Program's National Applications program element as well as to develop techniques to evaluate aerosols in a Martian or other planetary atmosphere.

  8. Multi-projector auto-calibration and placement optimization for non-planar surfaces

    NASA Astrophysics Data System (ADS)

    Li, Dong; Xie, Jinghui; Zhao, Lu; Zhou, Lijing; Weng, Dongdong

    2015-10-01

    Non-planar projection has been widely applied in virtual reality and digital entertainment and exhibitions because of its flexible layout and immersive display effects. Compared with planar projection, a non-planar projection is more difficult to achieve because projector calibration and image distortion correction are difficult processes. This paper uses a cylindrical screen as an example to present a new method for automatically calibrating a multi-projector system in a non-planar environment without using 3D reconstruction. This method corrects the geometric calibration error caused by the screen's manufactured imperfections, such as an undulating surface or a slant in the vertical plane. In addition, based on actual projection demand, this paper presents the overall performance evaluation criteria for the multi-projector system. According to these criteria, we determined the optimal placement for the projectors. This method also extends to surfaces that can be parameterized, such as spheres, ellipsoids, and paraboloids, and demonstrates a broad applicability.

  9. Accurate evaluation of sensitivity for calibration between a LiDAR and a panoramic camera used for remote sensing

    NASA Astrophysics Data System (ADS)

    García-Moreno, Angel-Iván; González-Barbosa, José-Joel; Ramírez-Pedraza, Alfonso; Hurtado-Ramos, Juan B.; Ornelas-Rodriguez, Francisco-Javier

    2016-04-01

    Computer-based reconstruction models can be used to approximate urban environments. These models are usually based on several mathematical approximations and the usage of different sensors, which implies dependency on many variables. The sensitivity analysis presented in this paper is used to weigh the relative importance of each uncertainty contributor into the calibration of a panoramic camera-LiDAR system. Both sensors are used for three-dimensional urban reconstruction. Simulated and experimental tests were conducted. For the simulated tests we analyze and compare the calibration parameters using the Monte Carlo and Latin hypercube sampling techniques. Sensitivity analysis for each variable involved into the calibration was computed by the Sobol method, which is based on the analysis of the variance breakdown, and the Fourier amplitude sensitivity test method, which is based on Fourier's analysis. Sensitivity analysis is an essential tool in simulation modeling and for performing error propagation assessments.
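
    Of the two sampling techniques the abstract compares, Latin hypercube sampling is the easier to illustrate compactly: each variable's range is split into equal strata and each stratum is sampled exactly once. The sketch below is a minimal pure-Python illustration (libraries such as SciPy's `qmc` module would be the usual choice in practice); the bounds are arbitrary.

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """One Latin hypercube sample: each variable's range is split into
    n_samples equal strata and each stratum is hit exactly once."""
    rng = random.Random(seed)
    cols = []
    for lo, hi in bounds:
        strata = list(range(n_samples))
        rng.shuffle(strata)  # random pairing of strata across variables
        col = [lo + (hi - lo) * (s + rng.random()) / n_samples for s in strata]
        cols.append(col)
    return [tuple(c[i] for c in cols) for i in range(n_samples)]

samples = latin_hypercube(10, [(0.0, 1.0), (-5.0, 5.0)])
```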

  10. Streamflow characteristics from modelled runoff time series: Importance of calibration criteria selection

    USGS Publications Warehouse

    Poole, Sandra; Vis, Marc; Knight, Rodney; Seibert, Jan

    2017-01-01

    Ecologically relevant streamflow characteristics (SFCs) of ungauged catchments are often estimated from simulated runoff of hydrologic models that were originally calibrated on gauged catchments. However, SFC estimates of the gauged donor catchments and subsequently the ungauged catchments can be substantially uncertain when models are calibrated using traditional approaches based on optimization of statistical performance metrics (e.g., Nash–Sutcliffe model efficiency). An improved calibration strategy for gauged catchments is therefore crucial to help reduce the uncertainties of estimated SFCs for ungauged catchments. The aim of this study was to improve SFC estimates from modeled runoff time series in gauged catchments by explicitly including one or several SFCs in the calibration process. Different types of objective functions were defined consisting of the Nash–Sutcliffe model efficiency, single SFCs, or combinations thereof. We calibrated a bucket-type runoff model (HBV – Hydrologiska Byråns Vattenavdelning – model) for 25 catchments in the Tennessee River basin and evaluated the proposed calibration approach on 13 ecologically relevant SFCs representing major flow regime components and different flow conditions. While the model generally tended to underestimate the tested SFCs related to mean and high-flow conditions, SFCs related to low flow were generally overestimated. The highest estimation accuracies were achieved by a SFC-specific model calibration. Estimates of SFCs not included in the calibration process were of similar quality when comparing a multi-SFC calibration approach to a traditional model efficiency calibration. For practical applications, this implies that SFCs should preferably be estimated from targeted runoff model calibration, and modeled estimates need to be carefully interpreted.
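
    The calibration strategy above combines the Nash-Sutcliffe efficiency with errors in selected streamflow characteristics. A minimal sketch follows; the equal-weight blending scheme and the use of a simple SFC function are illustrative assumptions, not the paper's exact objective functions.

```python
def nse(sim, obs):
    """Nash-Sutcliffe model efficiency (1 is perfect; below 0 is worse
    than predicting the observed mean)."""
    mean_obs = sum(obs) / len(obs)
    num = sum((s - o) ** 2 for s, o in zip(sim, obs))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def combined_objective(sim, obs, sfc_fn, w=0.5):
    """Blend NSE with the relative error of one streamflow characteristic
    (sfc_fn), e.g. a low-flow statistic; weighting is illustrative."""
    sfc_err = abs(sfc_fn(sim) - sfc_fn(obs)) / abs(sfc_fn(obs))
    return w * nse(sim, obs) + (1 - w) * (1.0 - sfc_err)
```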

  11. The Characterization of a Piston Displacement-Type Flowmeter Calibration Facility and the Calibration and Use of Pulsed Output Type Flowmeters

    PubMed Central

    Mattingly, G. E.

    1992-01-01

    Critical measurement performance of fluid flowmeters requires proper and quantified verification data. These data should be generated using calibration and traceability techniques established for these verification purposes. In these calibration techniques, the calibration facility should be well-characterized and its components and performance properly traced to pertinent higher standards. The use of this calibrator to calibrate flowmeters should be appropriately established and the manner in which the calibrated flowmeter is used should be specified in accord with the conditions of the calibration. These three steps: 1) characterizing the calibration facility itself, 2) using the characterized facility to calibrate a flowmeter, and 3) using the calibrated flowmeter to make a measurement are described and the pertinent equations are given for an encoded-stroke, piston displacement-type calibrator and a pulsed output flowmeter. It is concluded that, given these equations and proper instrumentation of this type of calibrator, very high levels of performance can be attained and, in turn, these can be used to achieve high fluid flow rate measurement accuracy with pulsed output flowmeters. PMID:28053444

  12. Effect of pH Test-Strip Characteristics on Accuracy of Readings.

    PubMed

    Metheny, Norma A; Gunn, Emily M; Rubbelke, Cynthia S; Quillen, Terrilynn Fox; Ezekiel, Uthayashanker R; Meert, Kathleen L

    2017-06-01

    Little is known about characteristics of colorimetric pH test strips that are most likely to be associated with accurate interpretations in clinical situations. To compare the accuracy of 4 pH test strips with varying characteristics (ie, multiple vs single colorimetric squares per calibration, and differing calibration units [1.0 vs 0.5]). A convenience sample of 100 upper-level nursing students with normal color vision was recruited to evaluate the accuracy of the test strips. Six buffer solutions (pH range, 3.0 to 6.0) were used during the testing procedure. Each of the 100 participants performed 20 pH tests in random order, providing a total of 2000 readings. The sensitivity and specificity of each test strip was computed. In addition, the degree to which the test strips under- or overestimated the pH values was analyzed using descriptive statistics. Our criterion for correct readings was an exact match with the pH buffer solution being evaluated. Although none of the test strips evaluated in our study was 100% accurate at all of the measured pH values, those with multiple squares per pH calibration were clearly superior overall to those with a single test square. Test strips with multiple squares per calibration were associated with greater overall accuracy than test strips with a single square per calibration. However, because variable degrees of error were observed in all of the test strips, use of a pH meter is recommended when precise readings are crucial. ©2017 American Association of Critical-Care Nurses.
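
    Sensitivity and specificity of strip readings can be computed once a positivity criterion is fixed. The sketch below classifies a sample as "acidic" at a pH cutoff; the cutoff of 5.0 is purely illustrative (the study's own criterion was an exact match with the buffer pH).

```python
def sens_spec(true_ph, read_ph, cutoff=5.0):
    """Sensitivity/specificity of strip readings for classifying a sample
    as acidic (pH <= cutoff); the cutoff here is an assumption."""
    tp = fn = tn = fp = 0
    for t, r in zip(true_ph, read_ph):
        if t <= cutoff:          # truly acidic
            if r <= cutoff:
                tp += 1
            else:
                fn += 1
        else:                    # truly non-acidic
            if r > cutoff:
                tn += 1
            else:
                fp += 1
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    specificity = tn / (tn + fp) if tn + fp else float("nan")
    return sensitivity, specificity
```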

  13. Sensitivity-Based Guided Model Calibration

    NASA Astrophysics Data System (ADS)

    Semnani, M.; Asadzadeh, M.

    2017-12-01

    A common practice in automatic calibration of hydrologic models is to apply sensitivity analysis prior to global optimization to reduce the number of decision variables (DVs) by identifying the most sensitive ones. This two-stage process aims to improve optimization efficiency. However, parameter sensitivity information can also be used to enhance the ability of optimization algorithms to find good-quality solutions in fewer solution evaluations. This improvement can be achieved by focusing the optimization on sampling from the most sensitive parameters in each iteration. In this study, the selection process of the dynamically dimensioned search (DDS) optimization algorithm is enhanced by utilizing a sensitivity analysis method to put more emphasis on the most sensitive decision variables for perturbation. The performance of DDS with sensitivity information is compared to the original version of DDS on different mathematical test functions and a model calibration case study. Overall, the results show that DDS with sensitivity information finds nearly the same solutions as the original DDS, but in significantly fewer solution evaluations.
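
    The idea can be sketched as a variant of DDS in which each variable's inclusion probability is scaled by a sensitivity weight. This is a hedged sketch of the general technique, not the authors' algorithm: the weighting rule, bound handling, and parameters below are assumptions.

```python
import math
import random

def dds_sensitivity(obj, bounds, weights, max_evals=200, r=0.2, seed=1):
    """Greedy DDS-style search; the per-variable inclusion probability is
    scaled by a (hypothetical) sensitivity weight, preserving the expected
    number of perturbed variables of standard DDS."""
    rng = random.Random(seed)
    best = [lo + rng.random() * (hi - lo) for lo, hi in bounds]
    best_f = obj(best)
    wsum = sum(weights)
    n = len(bounds)
    for i in range(1, max_evals):
        p = 1.0 - math.log(i) / math.log(max_evals)  # shrinking neighborhood
        cand = best[:]
        touched = False
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < p * n * weights[d] / wsum:
                touched = True
                cand[d] = min(hi, max(lo, cand[d] + rng.gauss(0.0, r * (hi - lo))))
        if not touched:  # DDS always perturbs at least one variable
            d = rng.randrange(n)
            lo, hi = bounds[d]
            cand[d] = min(hi, max(lo, cand[d] + rng.gauss(0.0, r * (hi - lo))))
        f = obj(cand)
        if f < best_f:
            best, best_f = cand, f
    return best, best_f

# Demo on the sphere function with uniform sensitivity weights
sphere = lambda x: sum(v * v for v in x)
best, best_f = dds_sensitivity(sphere, [(-5.0, 5.0)] * 3, [1.0, 1.0, 1.0])
```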

  14. Evaluation of commercially available techniques and development of simplified methods for measuring grille airflows in HVAC systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, Iain S.; Wray, Craig P.; Guillot, Cyril

    2003-08-01

    In this report, we discuss the accuracy of flow hoods for residential applications, based on laboratory tests and field studies. The results indicate that commercially available hoods are often inadequate to measure flows in residential systems, and that there can be a wide range of performance between different flow hoods. The errors are due to poor calibrations, sensitivity of existing hoods to grille flow non-uniformities, and flow changes from added flow resistance. We also evaluated several simple techniques for measuring register airflows that could be adopted by the HVAC industry and homeowners as simple diagnostics that are often as accurate as commercially available devices. Our test results also show that current calibration procedures for flow hoods do not account for field application problems. As a result, organizations such as ASHRAE or ASTM need to develop a new standard for flow hood calibration, along with a new measurement standard to address field use of flow hoods.

  15. Evaluation and Comparison of Methods for Measuring Ozone ...

    EPA Pesticide Factsheets

    Ambient evaluations of the various ozone and NO2 methods were conducted during field intensive studies as part of the NASA DISCOVER-AQ project conducted during July 2011 near Baltimore, MD; January – February 2013 in the San Joaquin Valley, CA; September 2013 in Houston, TX; and July – August 2014 near Denver, CO. During field intensive studies, instruments were calibrated according to manufacturers’ operation manuals and in accordance with FRM requirements listed in 40 CFR 50. During the ambient evaluation campaigns, nightly automated zero and span checks were performed to monitor the validity of the calibration and control for drifts or variations in the span and/or zero response. Both the calibration gas concentrations and the nightly zero and span gas concentrations were delivered using a dynamic dilution calibration system (T700U/T701H, Teledyne API). The analyzers were housed within a temperature-controlled shelter during the sampling campaigns. A glass inlet with sampling height located approximately 5 m above ground level and a subsequent sampling manifold were shared by all instruments. Data generated by all analyzers were collected and logged using a field deployable data acquisition system (Envidas Ultimate). A summary of instruments used during DISCOVER-AQ deployment is listed in Table 1. Figure 1 shows a typical DISCOVER-AQ site (Houston 2013) where EPA (and others) instrumentation was deployed. Under the Clean Air Act, the U.S. EPA has estab

  16. Line fiducial material and thickness considerations for ultrasound calibration

    NASA Astrophysics Data System (ADS)

    Ameri, Golafsoun; McLeod, A. J.; Baxter, John S. H.; Chen, Elvis C. S.; Peters, Terry M.

    2015-03-01

    Ultrasound calibration is a necessary procedure in many image-guided interventions, relating the position of tools and anatomical structures in the ultrasound image to a common coordinate system. This is a necessary component of augmented reality environments in image-guided interventions as it allows for a 3D visualization where other surgical tools outside the imaging plane can be found. Accuracy of ultrasound calibration fundamentally affects the total accuracy of this interventional guidance system. Many ultrasound calibration procedures have been proposed based on a variety of phantom materials and geometries. These differences lead to differences in representation of the phantom on the ultrasound image which subsequently affect the ability to accurately and automatically segment the phantom. For example, taut wires are commonly used as line fiducials in ultrasound calibration. However, at large depths or oblique angles, the fiducials appear blurred and smeared in ultrasound images making it hard to localize their cross-section with the ultrasound image plane. Intuitively, larger diameter phantoms with lower echogenicity are more accurately segmented in ultrasound images in comparison to highly reflective thin phantoms. In this work, an evaluation of a variety of calibration phantoms with different geometrical and material properties for the phantomless calibration procedure was performed. The phantoms used in this study include braided wire, plastic straws, and polyvinyl alcohol cryogel tubes with different diameters. Conventional B-mode and synthetic aperture images of the phantoms at different positions were obtained. The phantoms were automatically segmented from the ultrasound images using an ellipse fitting algorithm, the centroid of which is subsequently used as a fiducial for calibration. Calibration accuracy was evaluated for these procedures based on the leave-one-out target registration error. 
It was shown that larger diameter phantoms with lower echogenicity are more accurately segmented in comparison to highly reflective thin phantoms. This improvement in segmentation accuracy leads to a lower fiducial localization error, which ultimately results in low target registration error. This would have a profound effect on calibration procedures and the feasibility of different calibration procedures in the context of image-guided procedures.

  17. Comparison of Two Methodologies for Calibrating Satellite Instruments in the Visible and Near Infrared

    NASA Technical Reports Server (NTRS)

    Barnes, Robert A.; Brown, Steven W.; Lykke, Keith R.; Guenther, Bruce; Xiong, Xiaoxiong (Jack); Butler, James J.

    2010-01-01

    Traditionally, satellite instruments that measure Earth-reflected solar radiation in the visible and near infrared wavelength regions have been calibrated for radiance response in a two-step method. In the first step, the spectral response of the instrument is determined using a nearly monochromatic light source, such as a lamp-illuminated monochromator. Such sources only provide a relative spectral response (RSR) for the instrument, since they do not act as calibrated sources of light nor do they typically fill the field-of-view of the instrument. In the second step, the instrument views a calibrated source of broadband light, such as a lamp-illuminated integrating sphere. In the traditional method, the RSR and the sphere spectral radiance are combined and, with the instrument's response, determine the absolute spectral radiance responsivity of the instrument. More recently, an absolute calibration system using widely tunable monochromatic laser systems has been developed. Using these sources, the absolute spectral responsivity (ASR) of an instrument can be determined on a wavelength-by-wavelength basis. From these monochromatic ASRs, the responses of the instrument bands to broadband radiance sources can be calculated directly, eliminating the need for calibrated broadband light sources such as integrating spheres. Here we describe the laser-based calibration and the traditional broadband source-based calibration of the NPP VIIRS sensor, and compare the derived calibration coefficients for the instrument. Finally, we evaluate the impact of the new calibration approach on the on-orbit performance of the sensor.

  18. A Vision-Based Self-Calibration Method for Robotic Visual Inspection Systems

    PubMed Central

    Yin, Shibin; Ren, Yongjie; Zhu, Jigui; Yang, Shourui; Ye, Shenghua

    2013-01-01

    A vision-based robot self-calibration method is proposed in this paper to evaluate the kinematic parameter errors of a robot using a visual sensor mounted on its end-effector. This approach could be performed in the industrial field without external, expensive apparatus or an elaborate setup. A robot Tool Center Point (TCP) is defined in the structural model of a line-structured laser sensor, and aligned to a reference point fixed in the robot workspace. A mathematical model is established to formulate the misalignment errors with kinematic parameter errors and TCP position errors. Based on the fixed point constraints, the kinematic parameter errors and TCP position errors are identified with an iterative algorithm. Compared to the conventional methods, this proposed method eliminates the need for robot base-frame and hand-to-eye calibrations, shortens the error propagation chain, and makes the calibration process more accurate and convenient. A validation experiment is performed on an ABB IRB2400 robot. An optimal configuration on the number and distribution of fixed points in the robot workspace is obtained based on the experimental results. Comparative experiments reveal that there is a significant improvement of the measuring accuracy of the robotic visual inspection system. PMID:24300597

  19. Development of a High Resolution X-ray Spectrometer on the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Gao, L.; Kraus, B.; Hill, K. W.; Bitter, M.; Efthimion, P.; Schneider, M. B.; Chen, H.; Ayers, J.; Liedahl, D.; Macphee, A. G.; Le, H. P.; Thorn, D.; Nelson, D.

    2017-10-01

    A high-resolution x-ray spectrometer has been designed, calibrated, and deployed on the National Ignition Facility (NIF) to measure plasma parameters for a Kr-doped surrogate capsule imploded at NIF conditions. Two conical crystals, diffracting the He α and He β complexes respectively, focus the spectra onto a streak camera photocathode for time-resolved measurements with a temporal resolution of <20 ps. A third cylindrical crystal focuses the entire He α to He β spectrum onto an image plate for a time-integrated spectrum to correlate the two streaked signals. The instrument was absolutely calibrated by the x-ray group at the Princeton Plasma Physics Laboratory using a micro-focus x-ray source. Detailed calibration procedures, including source and spectrum alignment, energy calibration, crystal performance evaluation, and measurement of the resolving power and the integrated reflectivity will be presented. Initial NIF experimental results will also be discussed. This work was performed under the auspices of the U.S. Department of Energy by Princeton Plasma Physics Laboratory under contract DE-AC02-09CH11466 and by Lawrence Livermore National Laboratory under contract DE-AC52-07NA27344.

  20. Atomic Resonance Radiation Energetics Investigation as a Diagnostic Method for Non-Equilibrium Hypervelocity Flows

    NASA Technical Reports Server (NTRS)

    Meyer, Scott A.; Bershader, Daniel; Sharma, Surendra P.; Deiwert, George S.

    1996-01-01

    Absorption measurements with a tunable vacuum ultraviolet light source have been proposed as a concentration diagnostic for atomic oxygen, and the viability of this technique is assessed in light of recent measurements. The instrumentation, as well as initial calibration measurements, have been reported previously. We report here additional calibration measurements performed to study the resonance broadening line shape for atomic oxygen. The application of this diagnostic is evaluated by considering the range of suitable test conditions and requirements, and by identifying issues that remain to be addressed.

  1. Calibration and Measurement Uncertainty Estimation of Radiometric Data: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, A.; Sengupta, M.; Reda, I.

    2014-11-01

    Evaluating the performance of photovoltaic cells, modules, and arrays that form large solar deployments relies on accurate measurements of the available solar resource. Therefore, determining the accuracy of these solar radiation measurements provides a better understanding of investment risks. This paper provides guidelines and recommended procedures for estimating the uncertainty in calibrations and measurements by radiometers using methods that follow the Guide to the Expression of Uncertainty in Measurement (GUM), published under the auspices of the International Bureau of Weights and Measures. Standardized analysis based on these procedures ensures that the quoted uncertainty is well documented.
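    The GUM procedure referred to above propagates input uncertainties through sensitivity coefficients to a combined standard uncertainty, then multiplies by a coverage factor. A minimal sketch for a hypothetical radiometer calibration E = V / R (irradiance from output voltage V and responsivity R); the numbers are illustrative, not from the paper:

    ```python
    import math

    def combined_uncertainty(V, uV, R, uR, k=2.0):
        """GUM-style first-order propagation for E = V / R."""
        E = V / R
        # sensitivity coefficients: dE/dV = 1/R, dE/dR = -V/R**2
        uc = math.sqrt((uV / R)**2 + (V * uR / R**2)**2)
        return E, uc, k * uc  # value, combined standard u, expanded U (coverage k)

    # illustrative values: 8 V reading, 8 mV/(W m^-2) responsivity
    E, uc, U = combined_uncertainty(V=8.0, uV=0.02, R=8.0e-3, uR=4.0e-5)
    ```

    With a coverage factor k = 2, the expanded uncertainty U corresponds to roughly a 95 % level for near-normal distributions, which is the convention radiometric calibration reports typically quote.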

  2. LHCb detector and trigger performance in Run II

    NASA Astrophysics Data System (ADS)

    Francesca, Dordei

    2017-12-01

    The LHCb detector is a forward spectrometer at the LHC, designed to perform high precision studies of b- and c-hadrons. In Run II of the LHC, a new scheme for the software trigger at LHCb splits the triggering of events into two stages, making room to perform the alignment and calibration in real time. In the novel detector alignment and calibration strategy for Run II, data collected at the start of the fill are processed in a few minutes and used to update the alignment, while the calibration constants are evaluated for each run. This allows identical constants to be used in the online and offline reconstruction, thus improving the correlation between triggered and offline selected events. The required computing time constraints are met thanks to a new dedicated framework using the multi-core farm infrastructure for the trigger. The larger timing budget available in the trigger allows the same track reconstruction to be performed online and offline. This enables LHCb to achieve the best reconstruction performance already in the trigger, and allows physics analyses to be performed directly on the data produced by the trigger reconstruction. The novel real-time processing strategy at LHCb is discussed from both the technical and operational points of view. The overall performance of the LHCb detector on Run II data is presented as well.

  3. Effect of poor control of film processors on mammographic image quality.

    PubMed

    Kimme-Smith, C; Sun, H; Bassett, L W; Gold, R H

    1992-11-01

    With the increasingly stringent standards of image quality in mammography, film processor quality control is especially important. Current methods are not sufficient for ensuring good processing. The authors used a sensitometer and densitometer system to evaluate the performance of 22 processors at 16 mammographic facilities. Standard sensitometric values of two films were established, and processor performance was assessed for variations from these standards. Developer chemistry of each processor was analyzed and correlated with its sensitometric values. Ten processors were retested, and nine were found to be out of calibration. The developer components of hydroquinone, sulfites, bromide, and alkalinity varied the most, and low concentrations of hydroquinone were associated with lower average gradients at two facilities. Use of the sensitometer and densitometer system helps identify out-of-calibration processors, but further study is needed to correlate sensitometric values with developer component values. The authors believe that present quality control would be improved if sensitometric or other tests could be used to identify developer components that are out of calibration.

  4. Dark Matter Limits From a 2L C3F8 Filled Bubble Chamber

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, Alan Edward

    2015-12-01

    The PICO-2L C3F8 bubble chamber search for Weakly Interacting Massive Particle (WIMP) dark matter was operated in the SNOLAB underground laboratory at the same location as the previous CF3I-filled COUPP-4kg detector. Neutron calibrations using photoneutron sources in C3F8- and CF3I-filled calibration bubble chambers were performed to verify the sensitivity of these target fluids to dark matter scattering. These data were combined with similar measurements using a low-energy neutron beam at the University of Montreal and in situ calibrations of the PICO-2L and COUPP-4kg detectors. C3F8 provides much greater sensitivity to WIMP-proton scattering than CF3I in bubble chamber detectors. PICO-2L searched for dark matter recoils with energy thresholds below 10 keV. Radiopurity assays of detector materials were performed and the expected neutron recoil background was evaluated to be 1.6+0.3

  5. Calibration of Electret-Based Integral Radon Monitors Using NIST Polyethylene-Encapsulated 226Ra/222Rn Emanation (PERE) Standards

    PubMed Central

    Collé, R.; Kotrappa, P.; Hutchinson, J. M. R.

    1995-01-01

    The recently developed 222Rn emanation standards that are based on polyethylene-encapsulated 226Ra solutions were employed for a first field-measurement application test to demonstrate their efficacy in calibrating passive integral radon monitors. The performance of the capsules was evaluated with respect to the calibration needs of electret ionization chambers (E-PERM®, Rad Elec Inc.). The encapsulated standards emanate well-characterized and known quantities of 222Rn, and were used in two different-sized, relatively-small, accumulation vessels (about 3.6 L and 10 L) which also contained the deployed electret monitors under test. Calculated integral 222Rn activities from the capsules over various accumulation times were compared to the averaged electret responses. Evaluations were made with four encapsulated standards ranging in 226Ra activity from approximately 15 Bq to 540 Bq (with 222Rn emanation fractions of 0.888); over accumulation times from 1 d to 33 d; and with four different types of E-PERM detectors that were independently calibrated. The ratio of the electret chamber response ERn to the integral 222Rn activity IRn was constant (within statistical variations) over the variables of the specific capsule used, the accumulation volume, accumulation time, and detector type. The results clearly demonstrated the practicality and suitability of the encapsulated standards for providing a simple and readily-available calibration for those measurement applications. However, the mean ratio ERn/IRn was approximately 0.91, suggesting a possible systematic bias in the extant E-PERM calibrations. This 9 % systematic difference was verified by an independent test of the E-PERM calibration based on measurements with the NIST radon-in-water standard generator. PMID:29151765
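    The integral 222Rn activity that the electret response is compared against follows the standard radioactive ingrowth law for an emanating 226Ra source. A small sketch, using the literature 222Rn half-life and illustrative activities (not the NIST-certified capsule values):

    ```python
    import math

    HALF_LIFE_D = 3.8235                  # 222Rn half-life in days (literature value)
    LAMBDA = math.log(2) / HALF_LIFE_D    # decay constant, 1/d

    def rn_activity(a_ra, f, t_days):
        """Emanated 222Rn activity (Bq) at time t from a 226Ra standard of
        activity a_ra (Bq) with emanation fraction f, starting radon-free."""
        return f * a_ra * (1.0 - math.exp(-LAMBDA * t_days))

    def rn_integral(a_ra, f, t_days):
        """Time integral of the activity over the accumulation period, Bq*d:
        integral of f*A*(1 - exp(-lambda*t')) dt' from 0 to t."""
        return f * a_ra * (t_days - (1.0 - math.exp(-LAMBDA * t_days)) / LAMBDA)
    ```

    After several half-lives the activity saturates at f times the 226Ra activity, which is why accumulation times up to 33 d were sufficient to exercise the full range of the capsules.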

  7. Performance evaluation of an infrared thermocouple.

    PubMed

    Chen, Chiachung; Weng, Yu-Kai; Shen, Te-Ching

    2010-01-01

    The measurement of the leaf temperature of forests or agricultural plants is an important technique for monitoring the physiological state of crops. The infrared thermometer is a convenient device due to its fast response and nondestructive measurement technique. Recently, a novel infrared thermocouple, developed with the same measurement principle as the infrared thermometer but using a different detector, has been commercialized for non-contact temperature measurement. The performance of two kinds of infrared thermocouples was evaluated in this study. The standard temperature was maintained by a temperature calibrator and a special black cavity device. The results indicated that both types of infrared thermocouples had good precision. The error distribution ranged from -1.8 °C to 18 °C when the reading values were taken as the true values. Within the range from 13 °C to 37 °C, the adequate calibration equations were high-order polynomial equations. Within the narrower range from 20 °C to 35 °C, the adequate equation was a linear equation for one sensor and a second-order polynomial equation for the other sensor. The accuracy of the two kinds of infrared thermocouple was improved by nearly 0.4 °C with the calibration equations. These devices could serve as mobile monitoring tools for in situ, real-time, routine estimation of leaf temperatures.
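    The calibration-equation fitting described above amounts to regressing the reference temperature on the sensor reading with polynomials of increasing order and keeping the simplest adequate one. A sketch on synthetic data (the readings and bias below are made up, not the paper's measurements):

    ```python
    import numpy as np

    # synthetic sensor readings and reference ("true") temperatures with a
    # small quadratic bias, standing in for the calibrator measurements
    reading  = np.array([20.0, 23.0, 26.0, 29.0, 32.0, 35.0])
    ref_temp = reading + 0.4 - 0.01 * (reading - 27.5)**2

    lin  = np.polyfit(reading, ref_temp, 1)   # linear calibration equation
    quad = np.polyfit(reading, ref_temp, 2)   # second-order equation

    def apply_cal(coeffs, x):
        return np.polyval(coeffs, x)

    rmse_lin  = np.sqrt(np.mean((apply_cal(lin,  reading) - ref_temp)**2))
    rmse_quad = np.sqrt(np.mean((apply_cal(quad, reading) - ref_temp)**2))
    ```

    Comparing the residual RMSE of the two fits is one simple way to decide, as the study does per sensor and per range, whether a linear or a second-order equation is adequate.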

  8. Evaluation of global climate model on performances of precipitation simulation and prediction in the Huaihe River basin

    NASA Astrophysics Data System (ADS)

    Wu, Yenan; Zhong, Ping-an; Xu, Bin; Zhu, Feilin; Fu, Jisi

    2017-06-01

    Using climate models with high performance to predict future climate change increases the reliability of the results. In this paper, six global climate models selected from the Coupled Model Intercomparison Project Phase 5 (CMIP5) under the Representative Concentration Pathway (RCP) 4.5 scenario were compared to measured data for the baseline period (1960-2000) to evaluate their precipitation simulation performance. Since the results of single climate models are often biased and highly uncertain, we examined the back propagation (BP) neural network and the arithmetic mean method for assembling the precipitation of multiple models. The delta method was used to calibrate the results of the single models and of the multimodel ensemble by arithmetic mean (MME-AM) during the validation period (2001-2010) and the prediction period (2011-2100). We then used the single models and the multimodel ensembles to predict the future precipitation process and its spatial distribution. The results show that the BNU-ESM model has the best simulation performance among the single models. The multimodel ensemble assembled by the BP neural network (MME-BP) simulates the annual average precipitation process well, with a deterministic coefficient of 0.814 during the validation period. The simulation capability for the spatial distribution of precipitation ranks as: calibrated MME-AM > MME-BP > calibrated BNU-ESM. The future precipitation predicted by all models tends to increase over time. The average increase amplitude by season ranks as: winter > spring > summer > autumn. These findings can provide useful information for decision makers preparing climate-related disaster mitigation plans.
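    The delta method used for calibration above is, in its common multiplicative form for precipitation, a scaling of the observed baseline by the model's future-to-baseline change ratio. A minimal sketch with illustrative monthly values (not the study's data):

    ```python
    import numpy as np

    def delta_method(obs_baseline, gcm_baseline, gcm_future):
        """Multiplicative delta-change correction for precipitation:
        projected = observed baseline * (model future / model baseline)."""
        obs, gb, gf = map(np.asarray, (obs_baseline, gcm_baseline, gcm_future))
        return obs * (gf / gb)

    obs  = np.array([80.0, 120.0, 150.0])   # observed monthly precipitation, mm
    gcmb = np.array([70.0, 100.0, 160.0])   # GCM baseline climatology
    gcmf = np.array([77.0, 110.0, 144.0])   # GCM future scenario
    proj = delta_method(obs, gcmb, gcmf)    # bias-corrected projection
    ```

    The multiplicative form preserves relative change and avoids negative precipitation, which is why it is preferred over an additive delta for this variable.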

  9. Prediction models for successful external cephalic version: a systematic review.

    PubMed

    Velzel, Joost; de Hundt, Marcella; Mulder, Frederique M; Molkenboer, Jan F M; Van der Post, Joris A M; Mol, Ben W; Kok, Marjolein

    2015-12-01

    To provide an overview of existing prediction models for successful ECV, and to assess their quality, development and performance. We searched MEDLINE, EMBASE and the Cochrane Library to identify all articles reporting on prediction models for successful ECV published from inception to January 2015. We extracted information on study design, sample size, model-building strategies and validation. We evaluated the phases of model development and summarized their performance in terms of discrimination, calibration and clinical usefulness. We collected the different predictor variables together with their defined significance, in order to identify important predictor variables for successful ECV. We identified eight articles reporting on seven prediction models. All models were subjected to internal validation. Only one model was also validated in an external cohort. Two prediction models had a low overall risk of bias, of which only one showed promising predictive performance at internal validation. This model also completed the phase of external validation. For none of the models was their impact on clinical practice evaluated. The most important predictor variables for successful ECV described in the selected articles were parity, placental location, breech engagement and whether the fetal head was palpable. One model was assessed for both discrimination and calibration using internal (AUC 0.71) and external validation (AUC 0.64), while two other models were assessed with discrimination and calibration, respectively. We found one prediction model for breech presentation that was validated in an external cohort and had acceptable predictive performance. This model should be used to counsel women considering ECV. Copyright © 2015. Published by Elsevier Ireland Ltd.
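    The discrimination statistic quoted for these models (AUC, i.e. the concordance statistic) is the probability that a randomly chosen successful case receives a higher predicted probability than a randomly chosen failed one. A minimal rank-based sketch with made-up labels and scores:

    ```python
    def auc(labels, scores):
        """Mann-Whitney form of the AUC: fraction of positive/negative pairs
        ranked concordantly (ties count one half)."""
        pos = [s for l, s in zip(labels, scores) if l == 1]
        neg = [s for l, s in zip(labels, scores) if l == 0]
        wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
                   for p in pos for n in neg)
        return wins / (len(pos) * len(neg))

    # illustrative data: 1 = successful ECV, 0 = failed; scores are
    # hypothetical model-predicted probabilities
    example = auc([1, 1, 0, 0, 1], [0.9, 0.7, 0.4, 0.6, 0.3])
    ```

    An AUC of 0.5 is chance-level discrimination; the reviewed models' 0.64-0.71 sit in the modest range, which is why calibration and external validation matter so much for counselling use.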

  10. Evaluation of Automated Model Calibration Techniques for Residential Building Energy Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, J.; Polly, B.; Collis, J.

    2013-09-01

    This simulation study adapts and applies the general framework described in BESTEST-EX (Judkoff et al 2010) for self-testing residential building energy model calibration methods. BEopt/DOE-2.2 is used to evaluate four mathematical calibration methods in the context of monthly, daily, and hourly synthetic utility data for a 1960's-era existing home in a cooling-dominated climate. The home's model inputs are assigned probability distributions representing uncertainty ranges, random selections are made from the uncertainty ranges to define 'explicit' input values, and synthetic utility billing data are generated using the explicit input values. The four calibration methods evaluated in this study are: an ASHRAE 1051-RP-based approach (Reddy and Maor 2006), a simplified simulated annealing optimization approach, a regression metamodeling optimization approach, and a simple output ratio calibration approach. The calibration methods are evaluated for monthly, daily, and hourly cases; various retrofit measures are applied to the calibrated models and the methods are evaluated based on the accuracy of predicted savings, computational cost, repeatability, automation, and ease of implementation.
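    Of the four methods, the simple output ratio approach is easy to sketch: scale every simulated output by the ratio of total measured to total simulated utility use. The implementation below is a generic illustration with synthetic numbers, not the study's tool:

    ```python
    def ratio_calibrate(simulated, measured):
        """Single-multiplier calibration: one scale factor from the totals,
        applied uniformly to the simulated series."""
        scale = sum(measured) / sum(simulated)
        return [scale * s for s in simulated], scale

    sim = [900.0, 1100.0, 1000.0]   # simulated monthly kWh (synthetic)
    obs = [990.0, 1210.0, 1100.0]   # metered monthly kWh (synthetic)
    calibrated, scale = ratio_calibrate(sim, obs)
    ```

    Because a single multiplier cannot reshape the seasonal profile, this method trades accuracy for near-zero computational cost, which is exactly the trade-off the study evaluates against the optimization-based approaches.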

  12. Improved guidelines for estimating the Highway safety manual calibration factors.

    DOT National Transportation Integrated Search

    2016-01-01

    Crash prediction models can be used to predict the number of crashes and evaluate roadway safety. Part C of the first edition of the Highway Safety Manual (HSM) provides safety performance functions (SPFs). The HSM addendum that includes freeway and ...

  13. RELIABILITY OF CONFOCAL MICROSCOPY SPECTRAL IMAGING SYSTEMS: USE OF MULTISPECTRAL BEADS

    EPA Science Inventory

    Background: There is a need for a standardized, impartial calibration, and validation protocol on confocal spectral imaging (CSI) microscope systems. To achieve this goal, it is necessary to have testing tools to provide a reproducible way to evaluate instrument performance. ...

  14. Development of speed models for improving travel forecasting and highway performance evaluation : [technical summary].

    DOT National Transportation Integrated Search

    2013-12-01

    Travel forecasting models predict travel demand based on the present transportation system and its use. Transportation modelers must develop, validate, and calibrate models to ensure that predicted travel demand is as close to reality as possible. Mo...

  15. A novel method of calibrating a MEMS inertial reference unit on a turntable under limited working conditions

    NASA Astrophysics Data System (ADS)

    Lu, Jiazhen; Liang, Shufang; Yang, Yanqiang

    2017-10-01

    Micro-electro-mechanical systems (MEMS) inertial measurement devices tend to be widely used in inertial navigation systems and have quickly emerged on the market due to their low cost, high reliability and small size. Calibration is the most effective way to remove the deterministic error of an inertial reference unit (IRU), which in this paper consists of three orthogonally mounted MEMS gyros. However, common lab testing methods cannot predict the corresponding errors precisely when the turntable's working condition is restricted; in this paper, the turntable can only provide a relatively small rotation angle. Moreover, the errors must be compensated accurately because of the large effect of the craft's high angular velocity. To address this problem, a new method is proposed to evaluate the MEMS IRU's performance. In the calibration procedure, a one-axis table that rotates through a limited angle in the form of a sine function provides the MEMS IRU's angular velocity. A new algorithm based on Fourier series is designed to calculate the misalignment and scale factor errors. The proposed method is tested in a set of experiments, and the calibration results are compared to those of a traditional calibration method performed under normal working conditions to verify their correctness. In addition, a verification test at the given rotation speed is implemented for further demonstration.
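    The core idea, estimating gyro errors from a limited-angle sinusoidal motion, can be sketched for a single axis. For a table angle A·sin(Wt) the reference rate is A·W·cos(Wt), and extracting the component of the gyro output at the drive frequency (the Fourier-fit step, which for this noise-free one-axis sketch reduces to linear least squares) yields the scale factor and bias. All numbers are synthetic, and the paper's full three-axis misalignment estimation is not reproduced:

    ```python
    import numpy as np

    A, W = np.deg2rad(5.0), 2 * np.pi * 0.5    # 5 deg amplitude, 0.5 Hz drive
    t = np.linspace(0.0, 4.0, 4001)
    ref_rate = A * W * np.cos(W * t)           # known table angular rate

    true_scale, true_bias = 1.003, 0.002       # synthetic gyro error model
    gyro = true_scale * ref_rate + true_bias   # simulated gyro output, rad/s

    # least-squares fit of gyro = k*ref_rate + b: k is the scale factor,
    # b the bias; k comes from the fundamental-frequency Fourier component
    M = np.column_stack([ref_rate, np.ones_like(t)])
    k, b = np.linalg.lstsq(M, gyro, rcond=None)[0]
    ```

    Driving at a known frequency also rejects broadband noise in practice, since only the component at the drive frequency enters the scale-factor estimate.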

  16. Improved calibration-based non-uniformity correction method for uncooled infrared camera

    NASA Astrophysics Data System (ADS)

    Liu, Chengwei; Sui, Xiubao

    2017-08-01

    With the latest improvements of microbolometer focal plane arrays (FPA), uncooled infrared (IR) cameras are becoming the most widely used devices in thermography, especially in handheld devices. However, the influence of changing ambient conditions and the non-uniform response of the sensors make it difficult to correct the non-uniformity of an uncooled infrared camera. In this paper, based on the infrared radiation characteristics of a TEC-less uncooled infrared camera, a novel model is proposed for calibration-based non-uniformity correction (NUC). In this model, we introduce the FPA temperature, together with the responses of the microbolometers under different ambient temperatures, to calculate the correction parameters. Based on the proposed model, the correction parameters can be worked out from calibration measurements under controlled ambient conditions against a uniform blackbody. All correction parameters are determined after the calibration process and then used to correct the non-uniformity of the infrared camera in real time. This paper presents the details of the compensation procedure and the performance of the proposed calibration-based non-uniformity correction method. Our method was evaluated on realistic IR images obtained by a 384×288-pixel uncooled long wave infrared (LWIR) camera operated under changing ambient conditions. The results show that our method can exclude the influence of the changing ambient conditions and ensure that the infrared camera has a stable performance.
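    The baseline that such models extend is the classic two-point NUC: per-pixel gain and offset from two uniform-blackbody frames. The sketch below shows only that baseline (the paper's model additionally folds in the FPA temperature); the frames are synthetic 4×4 "raw" responses:

    ```python
    import numpy as np

    def two_point_nuc(raw_cold, raw_hot, level_cold, level_hot):
        """Per-pixel gain/offset so that both blackbody frames map to flat
        output levels: corrected = gain*raw + offset."""
        gain = (level_hot - level_cold) / (raw_hot - raw_cold)
        offset = level_cold - gain * raw_cold
        return gain, offset

    # synthetic per-pixel response non-uniformity
    rng = np.random.default_rng(0)
    g_true = 1.0 + 0.1 * rng.standard_normal((4, 4))
    o_true = 50.0 * rng.standard_normal((4, 4))
    raw20 = (20.0 - o_true) / g_true          # raw frame viewing a 20-level blackbody
    raw35 = (35.0 - o_true) / g_true          # raw frame viewing a 35-level blackbody

    gain, offset = two_point_nuc(raw20, raw35, 20.0, 35.0)
    corrected = gain * raw35 + offset         # should be uniform at 35
    ```

    The limitation motivating the paper is visible here: the gain/offset pair is only valid at the ambient (and FPA) temperature at which the two calibration frames were taken.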

  17. Ground-based x-ray calibration of the Astro-H/Hitomi soft x-ray telescopes

    NASA Astrophysics Data System (ADS)

    Iizuka, Ryo; Hayashi, Takayuki; Maeda, Yoshitomo; Ishida, Manabu; Tomikawa, Kazuki; Sato, Toshiki; Kikuchi, Naomichi; Okajima, Takashi; Soong, Yang; Serlemitsos, Peter J.; Mori, Hideyuki; Izumiya, Takanori; Minami, Sari

    2018-01-01

    We present the summary of the on-ground calibration of two soft x-ray telescopes (SXT-I and SXT-S), developed by NASA's Goddard Space Flight Center (GSFC), onboard Astro-H/Hitomi. After the initial x-ray measurements with a diverging beam at the GSFC 100-m beamline, we performed the full calibration of the x-ray performance, using the 30-m x-ray beamline facility at the Institute of Space and Astronautical Science of Japan Aerospace Exploration Agency in Japan. We adopted a raster scan method with a narrow x-ray pencil beam with a divergence of ˜15″. The on-axis effective area (EA), half-power diameter, and vignetting function were measured at several energies between 1.5 and 17.5 keV. The detailed results appear in tables and figures in this paper. We measured and evaluated the performance of the SXT-S and the SXT-I with regard to the detector-limited field-of-view and the pixel size of the paired flight detector, i.e., SXS and the SXI, respectively. The primary items measured are the EA, image quality, and stray light for on-axis and off-axis sources. The accurate measurement of these parameters is vital to make the precise response function of the ASTRO-H SXTs. This paper presents the definitive results of the ground-based calibration of the ASTRO-H SXTs.

  18. Calibration and Performance Of The Juno Microwave Radiometer In Jupiter Orbit

    NASA Astrophysics Data System (ADS)

    Brown, Shannon; Janssen, Mike; Misra, Sid

    2017-04-01

    The NASA Juno mission was launched from Kennedy Space Center on August 5th, 2011. Juno is a New Frontiers mission to study Jupiter and carries as one of its payloads a six-frequency microwave radiometer to retrieve the water vapor abundance in the Jovian atmosphere, down to at least 100 bars. The Juno Microwave Radiometer (MWR) operates from 600 MHz to 22 GHz and was designed and built at the Jet Propulsion Laboratory. The MWR radiometer system consists of a MMIC-based receiver for each channel that includes a PIN-diode Dicke switch and three noise diodes distributed along the front end for receiver calibration. The receivers and electronics are housed inside the Juno payload vault, which provides radiation shielding for the Juno payloads. The antenna system consists of patch-array antennas at 600 MHz and 1.2 GHz, slotted waveguide antennas at 2.5, 5.5 and 10 GHz and a feed horn at 22 GHz, providing 20-degree beams at the lowest two frequencies and 12-degree beams at the others. Since launch, MWR has operated nearly continually over the five year cruise. During this time, the Juno spacecraft is spinning on the sky providing the MWR with an excellent calibration source. Furthermore, the spacecraft sun angle and distance have varied, offering a wide range of instrument thermal states to further constrain the calibration. An approach was developed to optimally use the pre-launch and post-launch data to find a calibration solution which minimizes the errors with respect to the pre-launch calibration targets, the post-launch cold sky data and the component level loss/reflection measurements. The extended cruise data allow traceability from the pre-launch measurements to the science observations. In addition, a special data set was taken at apojove during the capture orbits to validate the antenna patterns in-flight using Jupiter as a source. An assessment of the radiometer calibration performance during the first science orbits will be presented. Both the absolute and relative performance will be shown. The relative calibration is assessed by evaluating the temporal stability over the pass and the forward looking and aft looking observations of the same point in the atmosphere.

  19. Aero-Thermal Calibration of the NASA Glenn Icing Research Tunnel (2012 Test)

    NASA Technical Reports Server (NTRS)

    Pastor-Barsi, Christine M.; Arrington, E. Allen; VanZante, Judith Foss

    2012-01-01

    A major modification of the refrigeration plant and heat exchanger at the NASA Glenn Icing Research Tunnel (IRT) occurred in autumn of 2011. It is standard practice at NASA Glenn to perform a full aero-thermal calibration of the test section of a wind tunnel facility upon completion of major modifications. This paper will discuss the tools and techniques used to complete an aero-thermal calibration of the IRT and the results that were acquired. The goal of this test entry was to complete a flow quality survey and aero-thermal calibration measurements in the test section of the IRT. Test hardware that was used includes the 2D Resistive Temperature Detector (RTD) array, 9-ft pressure survey rake, hot wire survey rake, and the quick check survey rake. This test hardware provides a map of the velocity, Mach number, total and static pressure, total temperature, flow angle and turbulence intensity. The data acquired were then reduced to examine pressure, temperature, velocity, flow angle, and turbulence intensity. The reduced data have been evaluated to assess how the facility meets flow quality goals. No icing conditions were tested as part of the aero-thermal calibration. However, the effects of the spray bar air injections on the flow quality and aero-thermal calibration measurements were examined as part of this calibration.

  20. A Preliminary Design of a Calibration Chamber for Evaluating the Stability of Unsaturated Soil Slope

    NASA Astrophysics Data System (ADS)

    Hsu, H.-H.

    2012-04-01

    Unsaturated soil slopes, which contain a ground water table and fail easily under heavy rainfall, are widely distributed in arid and semi-arid areas. For analyzing slope stability, in situ tests are the direct method of obtaining test-site characteristics, and the cone penetration test (CPT) is a popular in situ test method. Some CPT empirical equations were established from calibration chamber tests. In the past, CPTs performed in calibration chambers commonly used clean quartz sand as the testing material, yet silty sand is observed on many actual slopes. Because silty sand is relatively compressible compared with quartz sand, it is not suitable to apply correlations between soil properties and CPT results built from quartz sand to silty sand, and experience with CPT calibration in silty sand has been limited. CPT calibration tests have mostly been performed in dry or saturated soils, where the condition around the cone tip during penetration is assumed to be fully drained or fully undrained; for unsaturated soils it was observed to be partially drained. Because matric suction has a great effect on the characteristics of unsaturated soils, they are much more sensitive to water content than saturated soils. The design of an unsaturated calibration chamber is in progress. The air pressure is supplied from the top plate, and the pore water pressure is provided through high-air-entry-value ceramic disks located at the bottom plate of the chamber cell. To boost and uniformly distribute the unsaturation effect, four perforated burettes are installed onto the ceramic disks and stretch upward to the midheight of the specimen. This paper describes the design concepts, illustrates this unsaturated calibration chamber, and presents preliminary test results.

  1. Hand-eye calibration for rigid laparoscopes using an invariant point.

    PubMed

    Thompson, Stephen; Stoyanov, Danail; Schneider, Crispin; Gurusamy, Kurinchi; Ourselin, Sébastien; Davidson, Brian; Hawkes, David; Clarkson, Matthew J

    2016-06-01

    Laparoscopic liver resection has significant advantages over open surgery due to less patient trauma and faster recovery times, yet it can be difficult due to the restricted field of view and lack of haptic feedback. Image guidance provides a potential solution but one current challenge is in accurate "hand-eye" calibration, which determines the position and orientation of the laparoscope camera relative to the tracking markers. In this paper, we propose a simple and clinically feasible calibration method based on a single invariant point. The method requires no additional hardware, can be constructed by theatre staff during surgical setup, requires minimal image processing and can be visualised in real time. Real-time visualisation allows the surgical team to assess the calibration accuracy before use in surgery. In addition, in the laboratory, we have developed a laparoscope with an electromagnetic tracking sensor attached to the camera end and an optical tracking marker attached to the distal end. This enables a comparison of tracking performance. We have evaluated our method in the laboratory and compared it to two widely used methods, "Tsai's method" and "direct" calibration. The new method is of comparable accuracy to existing methods, and we show RMS projected error due to calibration of 1.95 mm for optical tracking and 0.85 mm for EM tracking, versus 4.13 and 1.00 mm respectively, using existing methods. The new method has also been shown to be workable under sterile conditions in the operating room. We have proposed a new method of hand-eye calibration, based on a single invariant point. Initial experience has shown that the method provides visual feedback, satisfactory accuracy and can be performed during surgery. We also show that an EM sensor placed near the camera would provide significantly improved image overlay accuracy.
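    The single-invariant-point constraint can be illustrated as a linear least-squares problem (a pivot-calibration-style formulation, not necessarily the authors' exact algorithm): every tracked pose (R_i, t_i) must map one unknown offset x to the same world point c, i.e. R_i·x + t_i = c, which stacks into the linear system [R_i  -I][x; c] = -t_i. Poses and points below are synthetic:

    ```python
    import numpy as np

    def rot_z(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

    def rot_x(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

    def invariant_point(rotations, translations):
        """Solve R_i @ x + t_i = c for the offset x and fixed point c."""
        rows = [np.hstack([R, -np.eye(3)]) for R in rotations]
        rhs = [-np.asarray(t) for t in translations]
        sol, *_ = np.linalg.lstsq(np.vstack(rows), np.hstack(rhs), rcond=None)
        return sol[:3], sol[3:]

    x_true = np.array([0.10, 0.00, 0.30])   # hypothetical marker-to-tip offset
    c_true = np.array([1.00, 2.00, 0.50])   # invariant point in tracker frame
    Rs = [rot_z(0.3), rot_z(1.4), rot_x(0.8), rot_x(2.1)]
    ts = [c_true - R @ x_true for R in Rs]  # consistent synthetic poses
    x_est, c_est = invariant_point(Rs, ts)
    ```

    Note the rotations must span more than one axis; with rotations about a single axis the component of x along that axis is indistinguishable from c, and the system becomes rank-deficient.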

  2. Comparison of global optimization approaches for robust calibration of hydrologic model parameters

    NASA Astrophysics Data System (ADS)

    Jung, I. W.

    2015-12-01

    Robustness of the calibrated parameters of hydrologic models is necessary to provide reliable predictions of watershed behavior under varying climate conditions. This study investigated calibration performance according to the length of the calibration period, the objective function, the hydrologic model structure and the optimization method. To do this, the combination of three global optimization methods (SCE-UA, Micro-GA, and DREAM) and four hydrologic models (SAC-SMA, GR4J, HBV, and PRMS) was tested with different calibration periods and objective functions. Our results showed that the three global optimization methods provided similar calibration performance across the different calibration periods, objective functions, and hydrologic models. However, using the index of agreement, normalized root mean square error, or Nash-Sutcliffe efficiency as the objective function performed better than using the correlation coefficient or percent bias. Calibration performance across calibration periods ranging from one to seven years was hard to generalize, because the four hydrologic models have different levels of complexity and different years carry different information content in the hydrological observations. Acknowledgements: This research was supported by a grant (14AWMP-B082564-01) from the Advanced Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
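
    Two of the objective functions compared above, Nash-Sutcliffe efficiency and percent bias, can be sketched as follows (a hedged illustration using common hydrologic conventions, not the study's code):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the mean of the
    observations, and negative values are worse than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias; with this sign convention, positive values indicate the
    model underestimates the observations on average."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)
```

    A simulation equal to the observed mean everywhere scores NSE = 0, which is why NSE is a stricter target than percent bias: a model can have zero bias yet track none of the variability.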

  3. Clusters of Monoisotopic Elements for Calibration in (TOF) Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Kolářová, Lenka; Prokeš, Lubomír; Kučera, Lukáš; Hampl, Aleš; Peňa-Méndez, Eladia; Vaňhara, Petr; Havel, Josef

    2017-03-01

    Precise calibration in TOF MS requires suitable and reliable standards, which are not always available for high masses. We evaluated inorganic clusters of the monoisotopic elements gold and phosphorus (Au_n^+/Au_n^- and P_n^+/P_n^-) as an alternative to peptides or proteins for the external and internal calibration of mass spectra in various experimental and instrumental scenarios. Monoisotopic gold or phosphorus clusters can be easily generated in situ from suitable precursors by laser desorption/ionization (LDI) or matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS). Their use offers numerous advantages, including simplicity of preparation, biological inertness, and exact mass determination even at lower mass resolution. We used citrate-stabilized gold nanoparticles to generate gold calibration clusters, and red phosphorus powder to generate phosphorus clusters. Both elements can be added to samples to perform internal calibration up to mass-to-charge (m/z) 10-15,000 without significantly interfering with the analyte. We demonstrated the use of the gold and phosphorus clusters in the MS analysis of complex biological samples, including microbial standards and total extracts of mouse embryonic fibroblasts. We believe that clusters of monoisotopic elements could be used as generally applicable calibrants for complex biological samples.

  4. Experience Gained From Launch and Early Orbit Support of the Rossi X-Ray Timing Explorer (RXTE)

    NASA Technical Reports Server (NTRS)

    Fink, D. R.; Chapman, K. B.; Davis, W. S.; Hashmall, J. A.; Shulman, S. E.; Underwood, S. C.; Zsoldos, J. M.; Harman, R. R.

    1996-01-01

    This paper reports the results to date of early mission support provided by the personnel of the Goddard Space Flight Center Flight Dynamics Division (FDD) for the Rossi X-Ray Timing Explorer (RXTE) spacecraft. For this mission, the FDD supports onboard attitude determination and ephemeris propagation by supplying ground-based orbit and attitude solutions and calibration results. The first phase of that support was to provide launch window analyses. As the launch window was determined, acquisition attitudes were calculated and calibration slews were planned. Postlaunch, these slews provided the basis for ground-determined calibration. Ground-determined calibration results are used to improve the accuracy of onboard solutions. The FDD is applying new calibration tools designed to facilitate use of the simultaneous, high-accuracy star observations from the two RXTE star trackers for ground attitude determination and calibration. An evaluation of the performance of these tools is presented. The FDD provides updates to the onboard star catalog based on preflight analysis and analysis of flight data. The in-flight results of the mission support in each area are summarized and compared with pre-mission expectations.

  5. The solar vector error within the SNPP Common GEO code, the correction, and the effects on the VIIRS SDR RSB calibration

    NASA Astrophysics Data System (ADS)

    Fulbright, Jon; Anderson, Samuel; Lei, Ning; Efremova, Boryana; Wang, Zhipeng; McIntire, Jeffrey; Chiang, Kwofu; Xiong, Xiaoxiong

    2014-11-01

    Due to a software error, the solar and lunar vectors reported in the on-board calibrator intermediate product (OBC-IP) files for SNPP VIIRS are incorrect. The magnitude of the error is about 0.2 degree, increasing by about 0.01 degree per year. This error, although small, affects the radiometric calibration of the reflective solar bands (RSB) because accurate solar angles are required for calculating the screen transmission functions and the illumination of the Solar Diffuser panel. In this paper, we describe the error in the Common GEO code and how it may be fixed. We present evidence for the error from within the OBC-IP data. We also describe the effects of the solar vector error on the RSB calibration and the Sensor Data Record (SDR). To perform this evaluation, we have reanalyzed the yaw-maneuver data to compute the vignetting functions required for the on-orbit SD RSB radiometric calibration. After the reanalysis, we find an effect of up to 0.5% on the shortwave infrared (SWIR) RSB calibration.

  6. MODIS Instrument Operation and Calibration Improvements

    NASA Technical Reports Server (NTRS)

    Xiong, X.; Angal, A.; Madhavan, S.; Link, D.; Geng, X.; Wenny, B.; Wu, A.; Chen, H.; Salomonson, V.

    2014-01-01

    Terra and Aqua MODIS have successfully operated for over 14 and 12 years since their respective launches in 1999 and 2002. The MODIS on-orbit calibration is performed using a set of on-board calibrators, which include a solar diffuser for calibrating the reflective solar bands (RSB) and a blackbody for the thermal emissive bands (TEB). On-orbit changes in the sensor responses as well as key performance parameters are monitored using the measurements of these on-board calibrators. This paper provides an overview of MODIS on-orbit operation and calibration activities, and instrument long-term performance. It presents a brief summary of the calibration enhancements made in the latest MODIS data collection 6 (C6). Future improvements in the MODIS calibration and their potential applications to the S-NPP VIIRS are also discussed.

  7. Re-calibration of coronary risk prediction: an example of the Seven Countries Study.

    PubMed

    Puddu, Paolo Emilio; Piras, Paolo; Kromhout, Daan; Tolonen, Hanna; Kafatos, Anthony; Menotti, Alessandro

    2017-12-14

    We aimed to perform a calibration and re-calibration exercise using six standard risk factors from Northern European (NE, N = 2360) and Southern European (SE, N = 2789) middle-aged men of the Seven Countries Study, whose parameters and data were fully known, to establish whether re-calibration gave the right answer. The Greenwood-Nam-D'Agostino technique as modified by Demler (GNDD) in 2015 produces chi-squared statistics using 10 deciles of observed/expected CHD mortality risk, corresponding to the Hosmer-Lemeshow chi-squared employed for multiple logistic equations where binary data are used. Instead of the number of events, the GNDD test uses survival probabilities of observed and predicted events. The exercise applied, in five different ways, the parameters of the NE predictive model to SE (and vice versa) and compared the outcome of the simulated re-calibration with the real data. Good re-calibration could be obtained only when risk factor coefficients were substituted, these being similar in magnitude and not significantly different between NE and SE. In all other ways, good re-calibration could not be obtained. This is enough to call for a re-evaluation of most investigations that, without GNDD or another proper technique for statistically assessing the potential differences, concluded that re-calibration is a fair method and might therefore be used with no specific caution.
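
    The decile-based comparison of observed and expected risk that underlies such calibration tests can be illustrated with a simple sketch (a hypothetical function; the actual GNDD test works on survival probabilities and builds a chi-squared statistic, which is not reproduced here):

```python
import numpy as np

def decile_calibration(pred_risk, outcome, bins=10):
    """Sort subjects by predicted risk, split into equal-size risk bins, and
    return (mean predicted risk, observed event rate) per bin. Close agreement
    across bins indicates good calibration."""
    pred = np.asarray(pred_risk, float)
    obs = np.asarray(outcome, float)
    order = np.argsort(pred)
    pred, obs = pred[order], obs[order]
    groups = np.array_split(np.arange(len(pred)), bins)
    return [(pred[g].mean(), obs[g].mean()) for g in groups]
```

    A formal test then compares the per-bin discrepancies against their sampling variability rather than eyeballing them.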

  8. Utilization of Expert Knowledge in a Multi-Objective Hydrologic Model Automatic Calibration Process

    NASA Astrophysics Data System (ADS)

    Quebbeman, J.; Park, G. H.; Carney, S.; Day, G. N.; Micheletty, P. D.

    2016-12-01

    Spatially distributed continuous-simulation hydrologic models have a large number of parameters available for adjustment during the calibration process. Traditional manual calibration of such a modeling system is extremely laborious, which has historically motivated the use of automatic calibration procedures. With a large selection of model parameters, a high degree of objective-space fitness (measured with typical metrics such as Nash-Sutcliffe, Kling-Gupta, RMSE, etc.) can easily be achieved using a range of evolutionary algorithms. A concern with this approach is the high degree of compensatory calibration: many solutions perform similarly yet have grossly different parameter sets. To help alleviate this concern, and to mimic manual calibration processes, expert knowledge is proposed for inclusion within the multi-objective functions that evaluate the parameter decision space. As a result, Pareto solutions are identified with high degrees of fitness, but with parameter sets that maintain and utilize available expert knowledge, resulting in more realistic and consistent solutions. This process was tested using the joint SNOW-17 and Sacramento Soil Moisture Accounting (SAC-SMA) model within the Animas River basin in Colorado. Three different elevation zones, each with a range of parameters, resulted in over 35 model parameters calibrated simultaneously. High degrees of fitness were achieved, in addition to the development of more realistic and consistent parameter sets such as those typically achieved during manual calibration procedures.

  9. Accuracy of a Factory-Calibrated, Real-Time Continuous Glucose Monitoring System During 10 Days of Use in Youth and Adults with Diabetes.

    PubMed

    Wadwa, R Paul; Laffel, Lori M; Shah, Viral N; Garg, Satish K

    2018-06-01

    Frequent use of continuous glucose monitoring (CGM) systems is associated with improved glycemic outcomes in persons with diabetes, but the need for calibrations and sensor insertions is often a barrier to adoption. In this study, we evaluated the performance of G6, a sixth-generation, factory-calibrated CGM system specified for 10-day wear. The study enrolled participants of ages 6 years and up with type 1 diabetes or insulin-treated type 2 diabetes at 11 sites in the United States. Participation involved one sensor wear period of up to 10 days. Adults wore the system on the abdomen; youth of ages 6-17 years could choose to wear it on the abdomen or upper buttocks. Clinic sessions for frequent comparison with reference blood glucose measurements took place on days 1, 4-5, 7, and/or 10. Participants of ages 13 years and up underwent purposeful supervised glucose manipulation during in-clinic sessions. During the study, participants calibrated the systems once daily. However, analysis was performed on glucose values derived from reprocessed raw sensor data, independently of the self-monitored blood glucose values used for calibration. Reprocessing used assigned sensor codes and a factory-calibration algorithm. Performance evaluation included the proportion of CGM values within ±20% of reference glucose values >100 mg/dL or within ±20 mg/dL of reference glucose values ≤100 mg/dL (%20/20), the analogous %15/15, and the mean absolute relative difference (MARD, expressed as a percentage) between temporally matched CGM and reference values. Data from 262 study participants (21,569 matched CGM-reference pairs) were analyzed. The overall %15/15, %20/20, and MARD were 82.4%, 92.3%, and 10.0%, respectively. Matched pairs from 134 adults and 128 youth of ages 6-17 years were similar with respect to %20/20 (92.4% and 91.9%) and MARD (9.9% and 10.1%). Overall %20/20 values on days 1 and 10 of sensor wear were 88.6% and 90.6%, respectively. The system's "Urgent Low Soon" hypoglycemia alert (predictive of hypoglycemia within 20 min) was correctly provided 84% of the time within 30 min before impending biochemical hypoglycemia (<70 mg/dL). The 10-day sensor survival rate was 87%. The new factory-calibrated G6 real-time CGM system provides accurate readings for 10 days and removes several clinical barriers to broader CGM adoption.
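
    The accuracy metrics reported above (MARD and %20/20) are straightforward to compute from matched CGM-reference pairs. A minimal sketch, assuming glucose values in mg/dL (illustrative functions, not the study's analysis code):

```python
import numpy as np

def mard(cgm, ref):
    """Mean absolute relative difference (%) between matched CGM and reference values."""
    cgm, ref = np.asarray(cgm, float), np.asarray(ref, float)
    return 100.0 * np.mean(np.abs(cgm - ref) / ref)

def pct_20_20(cgm, ref):
    """%20/20: share of CGM readings within +/-20% of reference when ref > 100 mg/dL,
    or within +/-20 mg/dL when ref <= 100 mg/dL."""
    cgm, ref = np.asarray(cgm, float), np.asarray(ref, float)
    ok = np.where(ref > 100, np.abs(cgm - ref) / ref <= 0.20, np.abs(cgm - ref) <= 20)
    return 100.0 * ok.mean()
```

    The piecewise definition exists because relative error is a poor yardstick at low glucose values, where a fixed mg/dL tolerance is clinically more meaningful.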

  10. Fluorescent quantification of terazosin hydrochloride content in human plasma and tablets using second-order calibration based on both parallel factor analysis and alternating penalty trilinear decomposition.

    PubMed

    Zou, Hong-Yan; Wu, Hai-Long; OuYang, Li-Qun; Zhang, Yan; Nie, Jin-Fang; Fu, Hai-Yan; Yu, Ru-Qin

    2009-09-14

    Two second-order calibration methods, based on parallel factor analysis (PARAFAC) and alternating penalty trilinear decomposition (APTLD), have been utilized for the direct determination of terazosin hydrochloride (THD) in human plasma samples, coupled with excitation-emission matrix fluorescence spectroscopy. The two algorithms, combined with standard addition procedures, were also applied to the determination of terazosin hydrochloride in tablets, and the results were validated by high-performance liquid chromatography with fluorescence detection. These second-order calibrations all adequately exploited the second-order advantage. For human plasma samples, the average recoveries by the PARAFAC and APTLD algorithms with a factor number of 2 (N=2) were 100.4+/-2.7% and 99.2+/-2.4%, respectively. The accuracy of the two algorithms was also evaluated through elliptical joint confidence region (EJCR) tests and t-tests. Both algorithms gave accurate results, with the performance of APTLD slightly better than that of PARAFAC. Figures of merit, such as sensitivity (SEN), selectivity (SEL) and limit of detection (LOD), were also calculated to compare the performances of the two strategies. For tablets, the average concentrations of THD were 63.5 and 63.2 ng mL(-1) by the PARAFAC and APTLD algorithms, respectively. The accuracy was evaluated by t-test, and both algorithms again gave accurate results.

  11. Operational correction and validation of the VIIRS TEB longwave infrared band calibration bias during blackbody temperature changes

    NASA Astrophysics Data System (ADS)

    Wang, Wenhui; Cao, Changyong; Ignatov, Alex; Li, Zhenglong; Wang, Likun; Zhang, Bin; Blonski, Slawomir; Li, Jun

    2017-09-01

    The Suomi NPP VIIRS thermal emissive bands (TEB) have been performing very well since data became available on January 20, 2012. The longwave infrared bands at 11 and 12 µm (M15 and M16) are primarily used for sea surface temperature (SST) retrievals. A long-standing anomaly has been observed during the quarterly warm-up-cool-down (WUCD) events: during these events, the daytime SST product exhibits a warm bias that appears as a spike in the SST time series on the order of 0.2 K. A previous study (Cao et al. 2017) suggested that the VIIRS TEB calibration anomaly during WUCD is due to a flawed theoretical assumption in the calibration equation and proposed an Ltrace method to address the issue. This paper complements that study and presents the operational implementation and validation of the Ltrace method for M15 and M16. The Ltrace method applies a bias correction during WUCD only. It requires a simple code change and a one-time calibration parameter look-up table update. The method was evaluated using collocated CrIS observations and the SST algorithm. Our results indicate that the method can effectively reduce the WUCD calibration anomaly in M15, with a residual bias of 0.02 K after the correction. It works less effectively for M16, with a residual bias of 0.04 K. The Ltrace method may over-correct WUCD calibration biases, especially for M16; however, the residual WUCD biases are small in both bands. Evaluation results using the SST algorithm show that the method can effectively remove the SST anomaly during WUCD events.

  12. An investigation of automatic exposure control calibration for chest imaging with a computed radiography system.

    PubMed

    Moore, C S; Wood, T J; Avery, G; Balcam, S; Needler, L; Beavis, A W; Saunderson, J R

    2014-05-07

    The purpose of this study was to examine the use of three physical image quality metrics in the calibration of an automatic exposure control (AEC) device for chest radiography with a computed radiography (CR) imaging system. The metrics assessed were signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and mean effective noise equivalent quanta (eNEQm), all measured using a uniform chest phantom. Subsequent calibration curves were derived to ensure each metric was held constant across the tube voltage range. Each curve was assessed for its clinical appropriateness by generating computer-simulated chest images with correct detector air kermas for each tube voltage, and grading these against reference images reconstructed at detector air kermas correct for the constant detector dose indicator (DDI) curve currently programmed into the AEC device. All simulated chest images contained clinically realistic projected anatomy and anatomical noise and were scored by experienced image evaluators. Constant DDI and CNR curves do not appear to provide optimized performance across the diagnostic energy range. Conversely, constant eNEQm and SNR do appear to provide optimized performance, with the latter being the preferred calibration metric as it is easier to measure in practice. Medical physicists may use the SNR image quality metric described here when setting up and optimizing AEC devices for chest radiography CR systems with a degree of confidence that the resulting clinical image quality will be adequate for the required clinical task. However, this must be done in close cooperation with expert image evaluators, to ensure appropriate levels of detector air kerma.
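
    Of the metrics above, CNR is the simplest to sketch: the difference between mean signal and mean background region-of-interest values, divided by the background noise. An illustrative implementation (function name and ROI conventions are assumptions, not the paper's code):

```python
import numpy as np

def cnr(roi_signal, roi_background):
    """Contrast-to-noise ratio from pixel samples of two uniform regions of
    interest: difference of means over the background standard deviation."""
    s = np.asarray(roi_signal, float)
    b = np.asarray(roi_background, float)
    return (s.mean() - b.mean()) / b.std(ddof=1)
```

    SNR follows the same pattern with a single ROI (mean over standard deviation); eNEQm additionally requires spatial-frequency analysis of the detector response and is not reducible to two ROI statistics.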

  13. An investigation of automatic exposure control calibration for chest imaging with a computed radiography system

    NASA Astrophysics Data System (ADS)

    Moore, C. S.; Wood, T. J.; Avery, G.; Balcam, S.; Needler, L.; Beavis, A. W.; Saunderson, J. R.

    2014-05-01

    The purpose of this study was to examine the use of three physical image quality metrics in the calibration of an automatic exposure control (AEC) device for chest radiography with a computed radiography (CR) imaging system. The metrics assessed were signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and mean effective noise equivalent quanta (eNEQm), all measured using a uniform chest phantom. Subsequent calibration curves were derived to ensure each metric was held constant across the tube voltage range. Each curve was assessed for its clinical appropriateness by generating computer-simulated chest images with correct detector air kermas for each tube voltage, and grading these against reference images reconstructed at detector air kermas correct for the constant detector dose indicator (DDI) curve currently programmed into the AEC device. All simulated chest images contained clinically realistic projected anatomy and anatomical noise and were scored by experienced image evaluators. Constant DDI and CNR curves do not appear to provide optimized performance across the diagnostic energy range. Conversely, constant eNEQm and SNR do appear to provide optimized performance, with the latter being the preferred calibration metric as it is easier to measure in practice. Medical physicists may use the SNR image quality metric described here when setting up and optimizing AEC devices for chest radiography CR systems with a degree of confidence that the resulting clinical image quality will be adequate for the required clinical task. However, this must be done in close cooperation with expert image evaluators, to ensure appropriate levels of detector air kerma.

  14. On-Orbit Noise Characterization for MODIS Reflective Solar Bands

    NASA Technical Reports Server (NTRS)

    Xiong, X.; Xie, X.; Angal, A.

    2008-01-01

    Since launch, the Moderate Resolution Imaging Spectroradiometer (MODIS) has operated successfully on-board the NASA Earth Observing System (EOS) Terra and EOS Aqua spacecraft. MODIS is a passive cross-track scanning radiometer that makes observations in 36 spectral bands with spectral wavelengths from visible (VIS) to long-wave infrared. MODIS bands 1-19 and 26 are the reflective solar bands (RSB) with wavelengths from 0.41 to 2.2 micrometers. They are calibrated on-orbit using an on-board solar diffuser (SD) and a SD stability monitor (SDSM) system. For MODIS RSB, the level 1B calibration algorithm produces top of the atmosphere reflectance factors and radiances for every pixel of the Earth view. The sensor radiometric calibration accuracy, specified at each spectral band's typical scene radiance, is 2% for the RSB reflectance factors and 5% for the RSB radiances. Also specified at the typical scene radiance is the detector signal-to-noise ratio (SNR), a key sensor performance parameter that directly impacts its radiometric calibration accuracy and stability, as well as the image quality. This paper describes an on-orbit SNR characterization approach developed to evaluate and track MODIS RSB detector performance. In order to perform on-orbit SNR characterization, MODIS RSB detector responses to the solar illumination reflected from the SD panel must be corrected for factors due to variations of the solar angles and the SD bi-directional reflectance factor. This approach enables RSB SNR characterization to be performed at different response levels for each detector. On-orbit results show that both Terra and Aqua MODIS RSB detectors have performed well since launch. Except for a few noisy or inoperable detectors which were identified pre-launch, most RSB detectors continue to meet the SNR design requirements and are able to maintain satisfactory short-term stability. 
A comparison of the on-orbit noise characterization results with those derived from pre-launch calibration and characterization is also provided.
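
    The SNR figure of merit used above is, at a fixed (corrected) response level, the mean detector response divided by its standard deviation over repeated samples. A minimal sketch (illustrative only; the MODIS approach additionally corrects the SD responses for solar angles and the diffuser's bi-directional reflectance factor before computing SNR):

```python
import numpy as np

def snr(dn_samples):
    """Signal-to-noise ratio from repeated detector responses at a fixed
    illumination level: mean signal over its sample standard deviation."""
    dn = np.asarray(dn_samples, float)
    return dn.mean() / dn.std(ddof=1)
```

    Because the SD illumination level changes with solar geometry, the correction step is what allows SNR to be tracked at comparable response levels from orbit to orbit.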

  15. Calibration and characterization of UV sensors for water disinfection

    NASA Astrophysics Data System (ADS)

    Larason, T.; Ohno, Y.

    2006-04-01

    The National Institute of Standards and Technology (NIST), USA, is participating in a project with the American Water Works Association Research Foundation (AwwaRF) to develop new guidelines for ultraviolet (UV) sensor characteristics to monitor the performance of UV water disinfection plants. The current UV water disinfection standards, ÖNORM M5873-1 and M5873-2 (Austria) and DVGW W294-3 (Germany), on the requirements for UV sensors for low-pressure mercury (LPM) and medium-pressure mercury (MPM) lamp systems, have been studied. Additionally, the characteristics of various types of UV sensors from several different commercial vendors have been measured and analysed. This information will aid in the development of new guidelines addressing issues such as sensor requirements, calibration methods, uncertainty and traceability. Practical problems were found in the calibration methods and in the evaluation of spectral responsivity requirements for sensors designed for MPM lamp systems. To solve these problems, NIST is proposing an alternative sensor calibration method for MPM lamp systems. A future calibration service is described for UV sensors intended for low- and medium-pressure mercury lamp systems used in water disinfection applications.

  16. Estimation of option-implied risk-neutral into real-world density by using calibration function

    NASA Astrophysics Data System (ADS)

    Bahaludin, Hafizah; Abdullah, Mimi Hafizah

    2017-04-01

    Option prices contain crucial information that reflects the future development of an underlying asset's price. The main objective of this study is to extract the risk-neutral density (RND) and the real-world density (RWD) from option prices. A volatility function technique using fourth-order polynomial interpolation is applied to obtain the RNDs. A calibration function is then used to convert the RNDs into RWDs; the calibration function can be either parametric or non-parametric. The densities are extracted from Dow Jones Industrial Average (DJIA) index options with a one-month constant maturity from January 2009 until December 2015. The performance of the extracted RNDs and RWDs is evaluated using a density forecasting test. This study found that the RWDs provide more accurate information about the future price of the underlying asset than the RNDs. In addition, empirical evidence suggests that RWDs from a non-parametric calibration are more accurate than the other densities.

  17. Inter-satellite calibration of FengYun 3 medium energy electron fluxes with POES electron measurements

    NASA Astrophysics Data System (ADS)

    Zhang, Yang; Ni, Binbin; Xiang, Zheng; Zhang, Xianguo; Zhang, Xiaoxin; Gu, Xudong; Fu, Song; Cao, Xing; Zou, Zhengyang

    2018-05-01

    We perform an L-shell-dependent inter-satellite calibration of FengYun 3 medium energy electron measurements against POES measurements based on rough orbital conjunctions within 5 min × 0.1 L × 0.5 MLT. By comparing electron flux data between the U.S. Polar Orbiting Environmental Satellites (POES) and the Chinese sun-synchronous satellites FY-3B and FY-3C for the whole year of 2014, we remove less reliable data, evaluate systematic uncertainties associated with the FY-3B and FY-3C datasets, and quantify inter-satellite calibration factors for the 150-350 keV energy channel at L = 2-7. Compared to the POES data, the FY-3B and FY-3C data generally exhibit a similar trend of electron flux variations but underestimate them by up to a factor of 5 for the 150-350 keV medium electron energy channel. Good consistency in the flux conjunctions after the inter-calibration procedures gives us confidence to generalize our method to calibrate electron flux measurements from various satellite instruments.
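
    A simple way to derive an inter-satellite calibration factor from conjunction pairs is the median of the per-conjunction flux ratios, which is robust to outliers; a sketch of that idea (an illustrative function, not the authors' procedure, which bins by L-shell and screens data quality first):

```python
import numpy as np

def calibration_factor(flux_target, flux_ref):
    """Median of per-conjunction flux ratios (reference / target) within one
    L-shell bin; multiplying the target fluxes by this factor aligns them
    with the reference instrument."""
    r = np.asarray(flux_ref, float) / np.asarray(flux_target, float)
    return float(np.median(r))
```

    For an instrument that underestimates fluxes by a factor of 5, this factor would come out near 5, matching the scale of discrepancy reported above.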

  18. Novel Calibration Technique for a Coulometric Evolved Vapor Analyzer for Measuring Water Content of Materials

    NASA Astrophysics Data System (ADS)

    Bell, S. A.; Miao, P.; Carroll, P. A.

    2018-04-01

    Evolved vapor coulometry is a measurement technique that selectively detects water and is used to measure the water content of materials. The basis of the measurement is the quantitative electrolysis of evaporated water entrained in a carrier gas stream. Although this measurement rests on a fundamental principle (Faraday's law, which directly relates the charge passed during electrolysis to the amount of substance electrolyzed), in practice it requires calibration. Commonly, reference materials of known water content are used, but the variety of these is limited, and they are not always available for suitable values or materials, with SI traceability, or with well-characterized uncertainty. In this paper, we report the development of an alternative calibration approach that uses as a reference the water content of humid gas of defined dew point, traceable to the SI via national humidity standards. The increased information available through this new type of calibration reveals a variation in instrument performance across its range that is not visible using the conventional approach. The significance of this is discussed along with details of the calibration technique, example results, and an uncertainty evaluation.
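
    The underlying Faraday's-law relation is simple: electrolysis of water transfers two electrons per molecule, so the integrated electrolysis charge gives the mass of water directly. A worked sketch (illustrative function name; constants rounded):

```python
# Faraday's law: n = Q / (z * F), with z = 2 electrons per H2O molecule,
# so mass = Q * M / (2 * F).
F = 96485.332    # Faraday constant, C/mol
M_H2O = 18.015   # molar mass of water, g/mol

def water_mass_from_charge(charge_coulombs):
    """Mass of electrolyzed water (g) from the integrated electrolysis charge (C)."""
    return charge_coulombs * M_H2O / (2 * F)
```

    Passing two faradays of charge (about 192,971 C) corresponds to one mole, i.e. about 18 g of water; real instruments deviate from this ideal (incomplete capture, side reactions), which is why calibration is still needed.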

  19. Analysis of Expert Diagnosis of a Computer Simulation of Congenital Heart Disease

    ERIC Educational Resources Information Center

    Johnson, Paul E.; And Others

    1975-01-01

    It is concluded that while behavior of experts in the hospital and clinic is the primary means of evaluating successful student performance, computer simulations of patient cases offer the opportunity to use expert data in the calibration of student error. (Editor)

  20. Calibration of HERS-ST for estimating traffic impact on pavement deterioration in Texas.

    DOT National Transportation Integrated Search

    2012-08-01

    The Highway Economic Requirements System-State Version (or the HERS-ST) is a software package which was developed by the Federal Highway Administration as a tool for evaluating the performance of state highway systems. HERS-ST has the capabilities of...

  1. Development of a calibration equipment for spectrometer qualification

    NASA Astrophysics Data System (ADS)

    Michel, C.; Borguet, B.; Boueé, A.; Blain, P.; Deep, A.; Moreau, V.; François, M.; Maresi, L.; Myszkowiak, A.; Taccola, M.; Versluys, J.; Stockman, Y.

    2017-09-01

    With the development of new spectrometer concepts, calibration facilities must be adapted to characterize their performances correctly. These spectro-imaging performances are mainly Modulation Transfer Function, spectral response, resolution and registration, polarization, straylight and radiometric calibration. The challenge of this calibration development is to achieve better performance than the item under test using mostly standard items. Because only the spectrometer subsystem needs to be calibrated, the calibration facility needs to simulate the geometrical behaviour of the imaging system. A trade-off study indicated that no commercial device could fulfil all the requirements completely, so it was necessary to opt for an in-house telecentric achromatic design. The proposed concept is based on an Offner design, which mainly allows the use of simple spherical mirrors and covers the spectral range. The spectral range is covered with a monochromator. Because of the large number of parameters to record, the calibration facility is fully automated. The performance of the calibration system has been verified by analysis and experimentally. Results achieved recently on a free-form grating Offner spectrometer demonstrate the capabilities of this new calibration facility. In this paper, a full calibration facility is described, developed specifically for a new free-form spectro-imager.

  2. Seismic performance evaluation of an historical concrete deck arch bridge using survey and drawing of the damages, in situ tests, dynamic identification and pushover analysis

    NASA Astrophysics Data System (ADS)

    Bergamo, Otello; Russo, Eleonora; Lodolo, Fabio

    2017-07-01

    The paper describes the performance evaluation of a retrofitted historical multi-span reinforced concrete (RC) deck arch bridge analyzed with in situ tests, dynamic identification and FEM analysis. The peculiarity of this case study lies in the structural typology of the "San Felice" bridge, an historical concrete arch bridge built in the early 20th century, a quite uncommon feature in Italy. The preservation and retrofit of historic cultural heritage and infrastructure have been carefully addressed in the international codes governing seismic response. A complete survey of the bridge was carried out prior to sketching a drawing of the existing bridge. Subsequently, the study consists of four steps: material investigation and dynamic vibration tests, FEM analysis and calibration, retrofit assessment, and pushover analysis. The aim is to define an innovative approach to calibrate the FEM analysis through modern experimental investigations capable of taking structural deterioration into account, and to offer an appropriate and cost-effective retrofitting strategy.

  3. Improved infra-red procedure for the evaluation of calibrating units.

    DOT National Transportation Integrated Search

    2011-01-04

    Introduction. The NHTSA Model Specifications for Calibrating Units for Breath Alcohol Testers (FR 72 34742-34748) require that calibration units submitted for inclusion on the NHTSA Conforming Products List for such devices be evaluated using ...

  4. Development and Calibration of a System-Integrated Rotorcraft Finite Element Model for Impact Scenarios

    NASA Technical Reports Server (NTRS)

    Annett, Martin S.; Horta, Lucas G.; Jackson, Karen E.; Polanco, Michael A.; Littell, Justin D.

    2012-01-01

    Two full-scale crash tests of an MD-500 helicopter were conducted in 2009 and 2010 at NASA Langley's Landing and Impact Research Facility in support of NASA's Subsonic Rotary Wing Crashworthiness Project. The first crash test was conducted to evaluate the performance of an externally mounted composite deployable energy absorber (DEA) under combined impact conditions. In the second crash test, the energy absorber was removed to establish baseline loads that are regarded as severe but survivable. The presence of this energy absorbing device reduced the peak impact acceleration levels by a factor of three. Accelerations and kinematic data collected from the crash tests were compared to a system-integrated finite element model of the test article developed in parallel with the test program. In preparation for the full-scale crash tests, a series of sub-scale and MD-500 mass simulator tests were conducted to evaluate the impact performance of various components and subsystems, including new crush tubes and the DEA blocks. Parameters defined for the system-integrated finite element model were determined from these tests. Results from 19 accelerometers placed throughout the airframe were compared to finite element model responses. The model developed for the purpose of predicting acceleration responses from the first crash test was inadequate for evaluating the more severe conditions seen in the second crash test. A newly developed model calibration approach that includes uncertainty estimation, parameter sensitivity, impact shape orthogonality, and numerical optimization was used to calibrate model results for the full-scale crash test without the DEA. This combination of heuristic and quantitative methods identified modeling deficiencies, evaluated parameter importance, and proposed required model changes. The multidimensional calibration techniques presented here are particularly effective in identifying model adequacy.
Acceleration results for the calibrated model were compared to test results and the original model results. There was a noticeable improvement in the pilot and copilot region, a slight improvement in the occupant model response, and an over-stiffening effect in the passenger region. One lesson learned was that this approach should be adopted early on, in combination with the building-block approaches that are customarily used, for model development and pretest predictions. Complete crash simulations with validated finite element models can be used to satisfy crash certification requirements, potentially reducing overall development costs.

  5. The performance and customization of SAPS 3 admission score in a Thai medical intensive care unit.

    PubMed

    Khwannimit, Bodin; Bhurayanontachai, Rungsun

    2010-02-01

    The aim of this study was to evaluate the performance of Simplified Acute Physiology Score 3 (SAPS 3) admission scores, both the original and a customized version, in mixed medical critically ill patients. A prospective cohort study was conducted over a 2-year period in the medical intensive care unit (MICU) of a tertiary referral university teaching hospital in Thailand. The probability of hospital mortality of the original SAPS 3 was calculated using the general and customized Australasia version (SAPS 3-AUS). The patients were randomly divided into equal calibration and validation groups for customization. A total of 1,873 patients were enrolled. The hospital mortality rate was 28.6%. The general equation of SAPS 3 had excellent discrimination with an area under the receiver operating characteristic curve of 0.933, but poor calibration with the Hosmer-Lemeshow goodness-of-fit H = 106.7 and C = 101.2 (P < 0.001), and it overestimated mortality with a standardized mortality ratio of 0.86 (95% confidence interval, 0.79-0.93). The calibration of SAPS 3-AUS was also poor. The customized SAPS 3 showed a good calibration of all patients in the validation group (H = 14, P = 0.17 and C = 11.3, P = 0.33) and all subgroups according to main diagnosis, age, gender and co-morbidities. The SAPS 3 provided excellent discrimination but poor calibration in our MICU. A first level customization of the SAPS 3 improved the calibration and could be used to predict mortality and quality assessment in our ICU or other ICUs with a similar case mix.
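    The "first-level customization" mentioned above is typically a logistic recalibration: observed mortality is refit against the logit of the original predicted probability, yielding a calibration intercept and slope. A minimal sketch of that idea (not the authors' exact procedure; the Newton-Raphson fit and the synthetic data below are illustrative assumptions):

    ```python
    import math
    import random

    def logit(p):
        return math.log(p / (1 - p))

    def expit(x):
        return 1 / (1 + math.exp(-x))

    def recalibrate(probs, outcomes, iters=25):
        """First-level (logistic) recalibration: fit y ~ Bernoulli(expit(a + b*logit(p)))
        by Newton-Raphson on the Bernoulli log-likelihood."""
        a, b = 0.0, 1.0                      # start from the identity recalibration
        xs = [logit(p) for p in probs]
        for _ in range(iters):
            g0 = g1 = h00 = h01 = h11 = 0.0
            for x, y in zip(xs, outcomes):
                mu = expit(a + b * x)
                w = mu * (1 - mu)            # IRLS weight
                g0 += y - mu                 # gradient w.r.t. intercept a
                g1 += (y - mu) * x           # gradient w.r.t. slope b
                h00 += w
                h01 += w * x
                h11 += w * x * x
            det = h00 * h11 - h01 * h01
            if det == 0:
                break
            # Newton step: theta += H^-1 g, with H the negative Hessian
            a += (h11 * g0 - h01 * g1) / det
            b += (h00 * g1 - h01 * g0) / det
        return a, b
    ```

    Applying the fitted (a, b) to a new patient's score gives the recalibrated probability expit(a + b·logit(p)); (a, b) = (0, 1) would mean the original score was already calibrated.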

  6. Chemometrics resolution and quantification power evaluation: Application on pharmaceutical quaternary mixture of Paracetamol, Guaifenesin, Phenylephrine and p-aminophenol

    NASA Astrophysics Data System (ADS)

    Yehia, Ali M.; Mohamed, Heba M.

    2016-01-01

    Three advanced chemometric-assisted spectrophotometric methods, namely Concentration Residuals Augmented Classical Least Squares (CRACLS), Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS) and Principal Component Analysis-Artificial Neural Networks (PCA-ANN), were developed, validated and benchmarked against PLS calibration to resolve the severely overlapped spectra and simultaneously determine Paracetamol (PAR), Guaifenesin (GUA) and Phenylephrine (PHE) in their ternary mixture and in the presence of p-aminophenol (AP), the main degradation product and synthesis impurity of Paracetamol. The analytical performance of the proposed methods was described by percentage recoveries, root mean square error of calibration, and standard error of prediction. The four multivariate calibration methods could be used directly without any preliminary separation step and were successfully applied to pharmaceutical formulation analysis, showing no excipient interference.

  7. Cross-calibration of liquid and solid QCT calibration standards: corrections to the UCSF normative data

    NASA Technical Reports Server (NTRS)

    Faulkner, K. G.; Gluer, C. C.; Grampp, S.; Genant, H. K.

    1993-01-01

    Quantitative computed tomography (QCT) has been shown to be a precise and sensitive method for evaluating spinal bone mineral density (BMD) and skeletal response to aging and therapy. Precise and accurate determination of BMD using QCT requires a calibration standard to compensate for and reduce the effects of beam-hardening artifacts and scanner drift. The first standards were based on dipotassium hydrogen phosphate (K2HPO4) solutions. Recently, several manufacturers have developed stable solid calibration standards based on calcium hydroxyapatite (CHA) in water-equivalent plastic. Due to differences in attenuating properties of the liquid and solid standards, the calibrated BMD values obtained with each system do not agree. In order to compare and interpret the results obtained on both systems, cross-calibration measurements were performed in phantoms and patients using the University of California San Francisco (UCSF) liquid standard and the Image Analysis (IA) solid standard on the UCSF GE 9800 CT scanner. From the phantom measurements, a highly linear relationship was found between the liquid- and solid-calibrated BMD values. No influence on the cross-calibration due to simulated variations in body size or vertebral fat content was seen, though a significant difference in the cross-calibration was observed between scans acquired at 80 and 140 kVp. From the patient measurements, a linear relationship between the liquid (UCSF) and solid (IA) calibrated values was derived for GE 9800 CT scanners at 80 kVp (IA = [1.15 x UCSF] - 7.32).(ABSTRACT TRUNCATED AT 250 WORDS).
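    The 80 kVp relationship reported above makes conversion between the two calibration standards a one-line mapping. A small sketch applying the published regression and its inverse (the function names are ours, for illustration only):

    ```python
    def ucsf_to_ia(bmd_ucsf):
        """Map liquid (UCSF K2HPO4) calibrated BMD to its solid (IA CHA) equivalent,
        using the 80 kVp GE 9800 regression quoted in the abstract."""
        return 1.15 * bmd_ucsf - 7.32

    def ia_to_ucsf(bmd_ia):
        """Inverse mapping: solid-calibrated BMD back to the liquid scale."""
        return (bmd_ia + 7.32) / 1.15
    ```

    Note that such cross-calibration equations are scanner- and kVp-specific; the abstract itself reports a significant difference between 80 and 140 kVp acquisitions.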

  8. A Hierarchical Bayesian Model for Calibrating Estimates of Species Divergence Times

    PubMed Central

    Heath, Tracy A.

    2012-01-01

    In Bayesian divergence time estimation methods, incorporating calibrating information from the fossil record is commonly done by assigning prior densities to ancestral nodes in the tree. Calibration prior densities are typically parametric distributions offset by minimum age estimates provided by the fossil record. Specification of the parameters of calibration densities requires the user to quantify his or her prior knowledge of the age of the ancestral node relative to the age of its calibrating fossil. The values of these parameters can, potentially, result in biased estimates of node ages if they lead to overly informative prior distributions. Accordingly, determining parameter values that lead to adequate prior densities is not straightforward. In this study, I present a hierarchical Bayesian model for calibrating divergence time analyses with multiple fossil age constraints. This approach applies a Dirichlet process prior as a hyperprior on the parameters of calibration prior densities. Specifically, this model assumes that the rate parameters of exponential prior distributions on calibrated nodes are distributed according to a Dirichlet process, whereby the rate parameters are clustered into distinct parameter categories. Both simulated and biological data are analyzed to evaluate the performance of the Dirichlet process hyperprior. Compared with fixed exponential prior densities, the hierarchical Bayesian approach results in more accurate and precise estimates of internal node ages. When this hyperprior is applied using Markov chain Monte Carlo methods, the ages of calibrated nodes are sampled from mixtures of exponential distributions and uncertainty in the values of calibration density parameters is taken into account. PMID:22334343
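    The core of the hyperprior described above is a Dirichlet process clustering of the exponential rate parameters across calibrated nodes. A toy sketch of that prior-sampling step using the Chinese restaurant process representation (the helper names and the Gamma base measure are illustrative assumptions, not the paper's implementation):

    ```python
    import random

    def crp_partition(n, alpha, rng):
        """Chinese restaurant process: assign n calibrated nodes to rate clusters."""
        assignments, counts = [], []
        for _ in range(n):
            weights = counts + [alpha]       # existing clusters, plus a new one
            r = rng.random() * sum(weights)
            acc = 0.0
            for k, w in enumerate(weights):
                acc += w
                if r <= acc:
                    break
            if k == len(counts):
                counts.append(1)             # open a new cluster
            else:
                counts[k] += 1
            assignments.append(k)
        return assignments

    def sample_node_ages(fossil_mins, alpha, rng):
        """Draw node ages as fossil minimum + Exponential(rate), with exponential
        rates shared within CRP clusters (Gamma(2, 1) is an illustrative base measure)."""
        assignments = crp_partition(len(fossil_mins), alpha, rng)
        rates, ages = {}, []
        for k, fmin in zip(assignments, fossil_mins):
            if k not in rates:
                rates[k] = rng.gammavariate(2.0, 1.0)
            ages.append(fmin + rng.expovariate(rates[k]))
        return ages
    ```

    In the full model this draw is embedded in an MCMC sampler, so calibrated node ages are effectively sampled from mixtures of exponentials rather than from a single fixed-rate density.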

  9. Agricultural Policy Environmental eXtender Simulation of Three Adjacent Row-Crop Watersheds in the Claypan Region.

    PubMed

    Anomaa Senaviratne, G M M M; Udawatta, Ranjith P; Baffaut, Claire; Anderson, Stephen H

    2013-01-01

    The Agricultural Policy Environmental eXtender (APEX) model is used to evaluate the effects of best management practices on pollutant loading in whole farms or small watersheds. The objectives of this study were to conduct a sensitivity analysis to determine the effect of model parameters on APEX output and to use the parameterized, calibrated, and validated model to evaluate long-term benefits of grass waterways. The APEX model was used to model three (East, Center, and West) adjacent field-size watersheds with claypan soils under a no-till corn (Zea mays L.)/soybean [Glycine max (L.) Merr.] rotation. Twenty-seven parameters were sensitive for crop yield, runoff, sediment, nitrogen (dissolved and total), and phosphorus (dissolved and total) simulations. The model was calibrated using measured event-based data from the Center watershed from 1993 to 1997 and validated with data from the West and East watersheds. Simulated crop yields were within ±13% of the measured yield. The model performance for event-based runoff was excellent, with calibration and validation r² > 0.9 and Nash-Sutcliffe coefficients (NSC) > 0.8, respectively. Sediment and total nitrogen calibration results were satisfactory for larger rainfall events (>50 mm), with r² > 0.5 and NSC > 0.4, but validation results remained poor, with NSC between 0.18 and 0.3. Total phosphorus was well calibrated and validated, with r² > 0.8 and NSC > 0.7, respectively. The presence of grass waterways reduced annual total phosphorus loadings by 13 to 25%. The replicated study indicates that APEX provides a convenient and efficient tool to evaluate long-term benefits of conservation practices. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  10. Calibration and Evaluation of Ultrasound Thermography using Infrared Imaging

    PubMed Central

    Hsiao, Yi-Sing; Deng, Cheri X.

    2015-01-01

    Real-time monitoring of the spatiotemporal evolution of tissue temperature is important to ensure safe and effective treatment in thermal therapies including hyperthermia and thermal ablation. Ultrasound thermography has been proposed as a non-invasive technique for temperature measurement, and accurate calibration of the temperature-dependent ultrasound signal changes against temperature is required. Here we report a method that uses infrared (IR) thermography for calibration and validation of ultrasound thermography. Using phantoms and cardiac tissue specimens subjected to high-intensity focused ultrasound (HIFU) heating, we simultaneously acquired ultrasound and IR imaging data from the same surface plane of a sample. The commonly used echo time shift-based method was chosen to compute ultrasound thermometry. We first correlated the ultrasound echo time shifts with IR-measured temperatures for material-dependent calibration and found that the calibration coefficient was positive for fat-mimicking phantom (1.49 ± 0.27) but negative for tissue-mimicking phantom (− 0.59 ± 0.08) and cardiac tissue (− 0.69 ± 0.18 °C-mm/ns). We then obtained the estimation error of the ultrasound thermometry by comparing against the IR measured temperature and revealed that the error increased with decreased size of the heated region. Consistent with previous findings, the echo time shifts were no longer linearly dependent on temperature beyond 45 – 50 °C in cardiac tissues. Unlike previous studies where thermocouples or water-bath techniques were used to evaluate the performance of ultrasound thermography, our results show that high resolution IR thermography provides a useful tool that can be applied to evaluate and understand the limitations of ultrasound thermography methods. PMID:26547634
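    The echo time shift method above maps the axial gradient of the shifts to temperature change through the material calibration coefficient k (in °C·mm/ns), i.e. ΔT(z) ≈ k · d(δt)/dz. A minimal sketch of that conversion using central differences (our own simplification of the standard echo-strain thermometry step, not the authors' code):

    ```python
    def temperature_change(echo_shifts_ns, depths_mm, k):
        """Estimate temperature change from the axial gradient of echo time shifts:
        dT(z) ~ k * d(delta_t)/dz, with k the material coefficient in degC*mm/ns
        (e.g. about -0.69 for cardiac tissue per the abstract).
        Central differences in the interior, one-sided at the ends."""
        n = len(echo_shifts_ns)
        dT = []
        for i in range(n):
            lo, hi = max(i - 1, 0), min(i + 1, n - 1)
            grad = (echo_shifts_ns[hi] - echo_shifts_ns[lo]) / (depths_mm[hi] - depths_mm[lo])
            dT.append(k * grad)
        return dT
    ```

    The sign of k is what the calibration fixes: with a negative coefficient (tissue), echoes arriving earlier with depth indicate heating; with a positive one (fat), the relationship reverses.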

  11. Calibration and Evaluation of Ultrasound Thermography Using Infrared Imaging.

    PubMed

    Hsiao, Yi-Sing; Deng, Cheri X

    2016-02-01

    Real-time monitoring of the spatiotemporal evolution of tissue temperature is important to ensure safe and effective treatment in thermal therapies including hyperthermia and thermal ablation. Ultrasound thermography has been proposed as a non-invasive technique for temperature measurement, and accurate calibration of the temperature-dependent ultrasound signal changes against temperature is required. Here we report a method that uses infrared thermography for calibration and validation of ultrasound thermography. Using phantoms and cardiac tissue specimens subjected to high-intensity focused ultrasound heating, we simultaneously acquired ultrasound and infrared imaging data from the same surface plane of a sample. The commonly used echo time shift-based method was chosen to compute ultrasound thermometry. We first correlated the ultrasound echo time shifts with infrared-measured temperatures for material-dependent calibration and found that the calibration coefficient was positive for fat-mimicking phantom (1.49 ± 0.27) but negative for tissue-mimicking phantom (-0.59 ± 0.08) and cardiac tissue (-0.69 ± 0.18°C-mm/ns). We then obtained the estimation error of the ultrasound thermometry by comparing against the infrared-measured temperature and revealed that the error increased with decreased size of the heated region. Consistent with previous findings, the echo time shifts were no longer linearly dependent on temperature beyond 45°C-50°C in cardiac tissues. Unlike previous studies in which thermocouples or water bath techniques were used to evaluate the performance of ultrasound thermography, our results indicate that high-resolution infrared thermography is a useful tool that can be applied to evaluate and understand the limitations of ultrasound thermography methods. Copyright © 2016 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  12. On-orbit test results from the EO-1 Advanced Land Imager

    NASA Astrophysics Data System (ADS)

    Evans, Jenifer B.; Digenis, Constantine J.; Gibbs, Margaret D.; Hearn, David R.; Lencioni, Donald E.; Mendenhall, Jeffrey A.; Welsh, Ralph D.

    2002-01-01

    The Advanced Land Imager (ALI) is the primary instrument flown on the first Earth Observing mission (EO-1), launched on November 21, 2000. It was developed under NASA's New Millennium Program (NMP). The NMP mission objective is to flight-validate advanced technologies that will enable dramatic improvements in performance, cost, mass, and schedule for future, Landsat-like, Earth Science Enterprise instruments. ALI contains a number of innovative features designed to achieve this objective. These include the basic instrument architecture, which employs a push-broom data collection mode, a wide field of view optical design, compact multi-spectral detector arrays, non-cryogenic HgCdTe for the short wave infrared bands, silicon carbide optics, and a multi-level solar calibration technique. During the first ninety days on orbit, the instrument performance was evaluated by collecting several Earth scenes and comparing them to identical scenes obtained by Landsat 7. In addition, various on-orbit calibration techniques were exercised. This paper will present an overview of the EO-1 mission activities during the first ninety days on orbit, details of the ALI instrument performance, and a comparison with the ground calibration measurements.

  13. Exploring the performance of the SEDD model to predict sediment yield in eucalyptus plantations. Long-term results from an experimental catchment in Southern Italy

    NASA Astrophysics Data System (ADS)

    Porto, P.; Cogliandro, V.; Callegari, G.

    2018-01-01

    In this paper, long-term sediment yield data collected in a small (1.38 ha) Calabrian catchment (W2), reafforested with eucalyptus trees (Eucalyptus occidentalis Engl.), are used to validate the performance of the SEdiment Delivery Distributed (SEDD) model in areas with high erosion rates. As a first step, the SEDD model was calibrated using field data collected in previous field campaigns undertaken during the period 1978-1994. This first phase allowed the model calibration parameter β to be calculated using direct measurements of rainfall, runoff, and sediment output. The model was then validated in its calibrated form for an independent period (2006-2016) for which new measurements of rainfall, runoff and sediment output are also available. The analysis, carried out at event and annual scales, showed good agreement between measured and predicted values of sediment yield and suggested that the SEDD model can be seen as an appropriate means of evaluating erosion risk associated with manmade plantations in marginal areas. Further work is, however, required to test the performance of the SEDD model as a prediction tool in different geomorphic contexts.

  14. A Contemporary Approach for Evaluation of the Best Measurement Capability of a Force Calibration Machine

    NASA Astrophysics Data System (ADS)

    Kumar, Harish

    The present paper discusses a procedure for evaluating the best measurement capability of a force calibration machine. The best measurement capability is evaluated by comparing the force calibration machine against force standard machines using precision force transfer standards: the transfer standards are calibrated by the force standard machine and then by the force calibration machine following a similar procedure. The results are reported and discussed for a force calibration machine of 200 kN capacity, using force transfer standards of 20 kN, 50 kN and 200 kN nominal capacity. Significant variations are found in the uncertainty of force realization by the force calibration machine according to the proposed method in comparison to the earlier method adopted.

  15. State and parameter estimation of two land surface models using the ensemble Kalman filter and the particle filter

    NASA Astrophysics Data System (ADS)

    Zhang, Hongjuan; Hendricks Franssen, Harrie-Jan; Han, Xujun; Vrugt, Jasper A.; Vereecken, Harry

    2017-09-01

    Land surface models (LSMs) use a large cohort of parameters and state variables to simulate the water and energy balance at the soil-atmosphere interface. Many of these model parameters cannot be measured directly in the field, and require calibration against measured fluxes of carbon dioxide, sensible and/or latent heat, and/or observations of the thermal and/or moisture state of the soil. Here, we evaluate the usefulness and applicability of four different data assimilation methods for joint parameter and state estimation of the Variable Infiltration Capacity Model (VIC-3L) and the Community Land Model (CLM) using a 5-month calibration (assimilation) period (March-July 2012) of areal-averaged SPADE soil moisture measurements at 5, 20, and 50 cm depths in the Rollesbroich experimental test site in the Eifel mountain range in western Germany. We used the EnKF with either state augmentation or dual estimation, and the residual resampling PF with either a simple, statistically deficient parameter resampling method or a more sophisticated, MCMC-based one. The performance of the calibrated LSM models was investigated using SPADE water content measurements of a 5-month evaluation period (August-December 2012). As expected, all DA methods enhance the ability of the VIC and CLM models to describe spatiotemporal patterns of moisture storage within the vadose zone of the Rollesbroich site, particularly if the maximum baseflow velocity (VIC) or the fractions of sand, clay, and organic matter of each layer (CLM) are estimated jointly with the model states of each soil layer. The differences between the soil moisture simulations of VIC-3L and CLM are much larger than the discrepancies among the four data assimilation methods. The EnKF with state augmentation or dual estimation yields the best performance of VIC-3L and CLM during the calibration and evaluation periods, yet results are in close agreement with the PF using MCMC resampling.
Overall, CLM demonstrated the best performance for the Rollesbroich site. The large systematic underestimation of water storage at 50 cm depth by VIC-3L during the first few months of the evaluation period questions, in part, the validity of its fixed water table depth at the bottom of the modeled soil domain.
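    Of the schemes compared above, the EnKF with state augmentation treats parameters as extra components of the state vector, so they are corrected through their ensemble covariance with the observed state. A toy one-state, one-parameter analysis step in the perturbed-observations form (a generic textbook sketch, not the authors' VIC/CLM implementation):

    ```python
    import random

    def enkf_augmented_step(ens, obs, obs_var, rng):
        """One EnKF analysis step on a state-augmented ensemble.
        Each member is (state, parameter); only the state is observed (H = [1, 0]),
        so the parameter is nudged via its sample covariance with the state."""
        n = len(ens)
        xs = [m[0] for m in ens]
        ps = [m[1] for m in ens]
        xbar, pbar = sum(xs) / n, sum(ps) / n
        cov_xx = sum((x - xbar) ** 2 for x in xs) / (n - 1)
        cov_px = sum((p - pbar) * (x - xbar) for p, x in zip(ps, xs)) / (n - 1)
        k_x = cov_xx / (cov_xx + obs_var)    # Kalman gain for the observed state
        k_p = cov_px / (cov_xx + obs_var)    # gain for the unobserved parameter
        analysed = []
        for x, p in ens:
            d = obs + rng.gauss(0, obs_var ** 0.5) - x   # perturbed-obs innovation
            analysed.append((x + k_x * d, p + k_p * d))
        return analysed
    ```

    Cycling this step with a forecast model (here a hypothetical linear recursion x ← p·x + 0.5) pulls the ensemble-mean parameter toward the value that best explains the observations, which is the mechanism behind the joint state-parameter estimation described above.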

  16. A new approach for the pixel map sensitivity (PMS) evaluation of an electronic portal imaging device (EPID)

    PubMed Central

    Lucio, Francesco; Calamia, Elisa; Russi, Elvio; Marchetto, Flavio

    2013-01-01

    When using an electronic portal imaging device (EPID) for dosimetric verifications, the calibration of the sensitive area is of paramount importance. Two calibration methods are generally adopted: one, empirical, based on an external reference dosimeter or on multiple narrow beam irradiations, and one based on simulation of the EPID response. In this paper we present an alternative approach based on an intercalibration procedure that is independent of external dosimeters and of simulations, and is quick and easy to perform. Each element of a detector matrix is characterized by a different gain; the aim of the calibration procedure is to relate the gain of each element to that of a reference element. The method we used to compute the relative gains is based on recursive acquisitions with the EPID placed in different positions, assuming a constant beam fluence for subsequent deliveries. By applying an established procedure and analysis algorithm, the EPID calibration was repeated under several working conditions. Data show that both the photon energy and the presence of a medium between the source and the detector affect the calibration coefficients by less than 1%. The calibration coefficients were then applied to the acquired images, comparing the EPID dose images with film. Measurements were performed with an open field, placing the film at the level of the EPID. The standard deviation of the distribution of the point‐to‐point difference is 0.6%. An approach of this type to EPID calibration has many advantages with respect to the standard methods — it does not need an external dosimeter, it is not tied to the irradiation technique, and it is easy to implement in clinical practice. Moreover, it can be applied in case of transit or nontransit dosimetry, solving the problem of the EPID calibration independently from the dose reconstruction method. PACS number: 87.56.‐v PMID:24257285
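    The intercalibration idea — recovering relative pixel gains from repeated acquisitions of a constant-fluence beam with the detector shifted between exposures — can be sketched in one dimension as chained gain ratios (a simplified illustration; the paper's recursive procedure operates on a 2D matrix):

    ```python
    def relative_gains(r_ref, r_shift):
        """Relative pixel gains from two flat exposures of identical fluence,
        with the detector shifted by exactly one pixel pitch between them:
            r_ref[i]     = g[i]   * F(x_i)
            r_shift[i+1] = g[i+1] * F(x_i)
        so g[i+1]/g[i] = r_shift[i+1]/r_ref[i]. Ratios are chained and the
        result is normalised to the gain of pixel 0 (the reference element)."""
        gains = [1.0]
        for i in range(len(r_ref) - 1):
            gains.append(gains[-1] * r_shift[i + 1] / r_ref[i])
        return gains
    ```

    Because only ratios of readings of the same fluence enter, the beam profile F(x) cancels out — which is why the procedure needs no external dosimeter.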

  17. Development, calibration, and validation of performance prediction models for the Texas M-E flexible pavement design system.

    DOT National Transportation Integrated Search

    2010-08-01

    This study was intended to recommend future directions for the development of TxDOT's Mechanistic-Empirical (TexME) design system. For stress predictions, a multi-layer linear elastic system was evaluated and its validity was verified by compar...

  18. Lidar - DOE ARM StreamLine Doppler Lidar (Halo) - Raw Data

    DOE Data Explorer

    Newsom, Rob

    2017-11-20

    1. Evaluate performance of the Halo Photonics Streamline lidar against a calibrated reference (i.e. the BAO tower). 2. Provide measurements of vertical velocity for use with other scanning lidars to better constrain velocity retrievals. 3. Provide colocated reference for comparison with Vindicator lidars.

  19. Evaluation of the Logistic Model for GAC Performance in Water Treatment

    EPA Science Inventory

    Full-scale field measurement and rapid small-scale column test data from the Greater Cincinnati (Ohio) Water Works (GCWW) were used to calibrate and investigate the application of the logistic model for simulating breakthrough of total organic carbon (TOC) in granular activated c...

  20. Evaluating the role of evapotranspiration remote sensing data in improving hydrological modeling predictability

    NASA Astrophysics Data System (ADS)

    Herman, Matthew R.; Nejadhashemi, A. Pouyan; Abouali, Mohammad; Hernandez-Suarez, Juan Sebastian; Daneshvar, Fariborz; Zhang, Zhen; Anderson, Martha C.; Sadeghi, Ali M.; Hain, Christopher R.; Sharifi, Amirreza

    2018-01-01

    As the global demand for freshwater resources continues to rise, it has become increasingly important to ensure the sustainability of this resource. This is accomplished through management strategies that often rely on monitoring and hydrological models. However, monitoring at large scales is not feasible, and model applications therefore become challenging, especially when spatially distributed datasets, such as evapotranspiration, are needed to understand model performance. Due to these limitations, most hydrological models are calibrated only against site/point observations, such as streamflow. Therefore, the main focus of this paper is to examine whether the incorporation of remotely sensed, spatially distributed datasets can improve the overall performance of the model. In this study, actual evapotranspiration (ETa) data were obtained from two different sets of satellite-based remote sensing data: one dataset estimates ETa with the Simplified Surface Energy Balance (SSEBop) model, while the other estimates ETa with the Atmosphere-Land Exchange Inverse (ALEXI) model. The hydrological model used in this study is the Soil and Water Assessment Tool (SWAT), which was calibrated against spatially distributed ETa and single-point streamflow records for the Honeyoey Creek-Pine Creek Watershed, located in Michigan, USA. Two different techniques, multi-variable and genetic algorithm, were used to calibrate the SWAT model. Using the aforementioned datasets, the performance of the hydrological model in estimating ETa was improved with both calibration techniques, achieving Nash-Sutcliffe efficiency (NSE) values >0.5 (0.73-0.85), percent bias (PBIAS) values within ±25% (±21.73%), and root mean squared error-observations standard deviation ratio (RSR) values <0.7 (0.39-0.52).
However, the genetic algorithm technique, while more effective for the ETa calibration, significantly reduced the model performance for estimating streamflow (NSE: 0.32-0.52, PBIAS: ±32.73%, and RSR: 0.63-0.82). Meanwhile, using the multi-variable technique, the model performance for estimating streamflow was maintained with a high level of accuracy (NSE: 0.59-0.61, PBIAS: ±13.70%, and RSR: 0.63-0.64) while the evapotranspiration estimates were improved. Results from this assessment show that the incorporation of remotely sensed, spatially distributed data can improve hydrological model performance when coupled with the right calibration technique.
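    The three goodness-of-fit statistics used above have standard definitions in the hydrologic calibration literature; a small sketch of how they are computed from paired observed/simulated series (assuming the paper follows the usual conventions):

    ```python
    def nse(obs, sim):
        """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations about their mean."""
        mo = sum(obs) / len(obs)
        sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
        return 1.0 - sse / sum((o - mo) ** 2 for o in obs)

    def pbias(obs, sim):
        """Percent bias: positive values indicate model underestimation."""
        return 100.0 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)

    def rsr(obs, sim):
        """RMSE-observations standard deviation ratio; note RSR = sqrt(1 - NSE)."""
        mo = sum(obs) / len(obs)
        sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
        return (sse / sum((o - mo) ** 2 for o in obs)) ** 0.5
    ```

    The thresholds quoted above (NSE > 0.5, |PBIAS| < 25%, RSR < 0.7) are the commonly used "satisfactory" cutoffs for streamflow-type variables.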

  1. Coupling HYDRUS-1D Code with PA-DDS Algorithms for Inverse Calibration

    NASA Astrophysics Data System (ADS)

    Wang, Xiang; Asadzadeh, Masoud; Holländer, Hartmut

    2017-04-01

    Numerical modelling requires calibration before it can be used for prediction. A standard approach is inverse calibration, in which multi-objective optimization algorithms are generally used to find a solution, e.g. an optimal set of van Genuchten-Mualem (VGM) parameters to predict water fluxes in the vadose zone. We coupled HYDRUS-1D with PA-DDS to add a new, robust inverse-calibration capability to the model. PA-DDS is a recently developed multi-objective optimization algorithm that combines Dynamically Dimensioned Search (DDS) and the Pareto Archived Evolution Strategy (PAES). The results were compared to a standard method (the Marquardt-Levenberg method) implemented in HYDRUS-1D. Calibration performance was evaluated using observed and simulated soil moisture at two soil layers in Southern Abbotsford, British Columbia, Canada, in terms of the root mean squared error (RMSE) and the Nash-Sutcliffe Efficiency (NSE). Results showed low RMSE values of 0.014 and 0.017 and strong NSE values of 0.961 and 0.939. Compared to the Marquardt-Levenberg method, we obtained better calibration results for the deeper soil sensors, while the fitted VGM parameters were similar to those of previous studies. Both methods are equally computationally efficient, although a direct implementation of PA-DDS into HYDRUS-1D should reduce the computational effort further. Thus, the PA-DDS method is efficient for calibrating recharge in complex vadose zone models with multiple soil layers and is a potential tool for the calibration of heat and solute transport. Future work should focus on the effectiveness of PA-DDS for calibrating more complex versions of the model, with more complex vadose zone settings, more soil layers, and against measured heat and solute transport. Keywords: Recharge, Calibration, HYDRUS-1D, Multi-objective Optimization

  2. Building an Evaluation Framework for the VIC Model in the NLDAS Testbed

    NASA Astrophysics Data System (ADS)

    Xia, Y.; Mocko, D. M.; Wang, S.; Pan, M.; Kumar, S.; Peters-Lidard, C. D.; Wei, H.; Ek, M. B.

    2017-12-01

    Since the second phase of the North American Land Data Assimilation System (NLDAS-2) was operationally implemented at NCEP in August 2014, developing the third phase of the NLDAS system (NLDAS-3) has been a key task for the NCEP and NASA NLDAS team. The Variable Infiltration Capacity (VIC) model is one major component of the NLDAS system. The current operational NLDAS-2 uses version 4.0.3 (VIC403), the research NLDAS-2 uses version 4.0.5 (VIC405), and the LIS-based (Land Information System) NLDAS uses version 4.1.2 (VIC412). The purpose of this study is to comprehensively evaluate the three versions and document changes in model behavior toward VIC412 for NLDAS-3. To do that, we develop a relatively comprehensive framework including multiple variables and metrics to assess the performance of the different versions. This framework is being incorporated into the NASA Land Verification Toolkit (LVT) for evaluation of other LSMs for NLDAS-3 development. The evaluation results show large and significant improvements for VIC412 in the southeastern United States when compared with VIC403 and VIC405. In the other regions, there are very limited improvements or even some degree of deterioration. Potential reasons include: (1) the scarcity of USGS streamflow observations for soil and hydrologic parameter calibration, (2) the lack of re-calibration of VIC412 over the NLDAS domain, and (3) changes in model physics from VIC403 to VIC412. Overall, the model version upgrade largely and significantly enhances model performance and skill scores across the United States except for the Great Plains (GP), suggesting the right direction for VIC model development. Further efforts are needed to improve the scientific understanding of land surface physical processes in the GP, and a re-calibration of VIC412 using reasonable reference datasets is suggested.

  3. Comparing Hp(3) evaluated from the conversion coefficients from air kerma to personal dose equivalent for eye lens dosimetry calibrated on a new cylindrical PMMA phantom

    NASA Astrophysics Data System (ADS)

    Esor, J.; Sudchai, W.; Monthonwattana, S.; Pungkun, V.; Intang, A.

    2017-06-01

Based on the new occupational dose limit recommended by ICRP (2011), the annual dose limit for the lens of the eye for workers is reduced from 150 mSv/y to 20 mSv/y averaged over 5 consecutive years, with no single year exceeding 50 mSv. This new dose limit directly affects radiologists and cardiologists whose work involves radiation exposure over 20 mSv/y. Eye lens dosimetry (Hp(3)) has therefore become increasingly important and should be evaluated directly with dosimeters worn close to the eye. Normally, the Hp(3) dose algorithm is carried out by combining Hp(0.07) and Hp(10) values, with dosimeters calibrated on a slab PMMA phantom. Recently, three European projects (ORAMED, PTB, and CEA Saclay) have reported conversion coefficients from air kerma to Hp(3), determined using a new cylindrical head phantom. In this study, various delivered doses were calculated using these three sets of conversion coefficients, while nanoDot small OSL dosimeters were used for Hp(3) measurement. The calibrations were performed with a standard X-ray generator at a Secondary Standard Dosimetry Laboratory (SSDL). Delivered doses (Hp(3)) calculated with the three sets of conversion coefficients were compared with Hp(3) from the nanoDot measurements. The results showed that the percentage differences between the delivered doses evaluated from each project's conversion coefficients and the Hp(3) doses evaluated from the nanoDots did not exceed -11.48%, -8.85% and -8.85% for the ORAMED, PTB and CEA Saclay projects, respectively.
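
    The comparison performed here reduces to applying a conversion coefficient to the measured air kerma and taking a signed percentage difference against the dosimeter reading. A minimal sketch follows; the coefficient and dose values are invented for illustration, not the ORAMED/PTB/CEA Saclay values:

```python
# Hedged sketch: apply an air-kerma-to-Hp(3) conversion coefficient, then
# compute the signed percentage difference against a dosimeter reading.
# All numbers below are illustrative, not values from the cited projects.

def delivered_hp3(air_kerma_mgy, conv_coeff_msv_per_mgy):
    """Hp(3) delivered on the phantom: air kerma times the conversion coefficient."""
    return air_kerma_mgy * conv_coeff_msv_per_mgy

def percent_difference(measured, delivered):
    """Signed percentage difference of the measurement relative to delivery."""
    return 100.0 * (measured - delivered) / delivered

hp3 = delivered_hp3(2.0, 1.2)        # 2 mGy air kerma, invented coefficient
diff = percent_difference(2.2, hp3)  # invented nanoDot reading of 2.2 mSv
print(hp3, round(diff, 2))
```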

  4. A Consistency Evaluation and Calibration Method for Piezoelectric Transmitters.

    PubMed

    Zhang, Kai; Tan, Baohai; Liu, Xianping

    2017-04-28

Array transducer and transducer combination technologies are evolving rapidly. When adopting transmitter combination technologies, the parameter consistency between individual transmitters is extremely important because it directly determines the combined effect. This study presents a consistency evaluation and calibration method for piezoelectric transmitters using impedance analyzers. First, the electronic parameters of transmitters that can be measured by impedance analyzers are introduced. The variations in transmitter acoustic energy caused by differences in these parameters are then analyzed and verified, and transmitter consistency is evaluated on this basis. Lastly, based on the evaluations, consistency can be calibrated by changing the corresponding excitation voltage. Acoustic experiments show that this method accurately evaluates and calibrates transducer consistency and is easy to implement.
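
    The voltage-adjustment step can be sketched under one simplifying assumption that is ours, not necessarily the paper's: radiated acoustic energy scales with the square of the excitation voltage, so each channel's drive voltage is rescaled toward a reference energy.

```python
import math

# Sketch of the calibration idea (assumption: acoustic energy ~ voltage^2,
# so consistency is restored by rescaling each transmitter's excitation
# voltage toward a common reference energy). Numbers are illustrative.

def calibrated_voltage(v_current, e_measured, e_reference):
    """Excitation voltage that would bring this transmitter to the reference energy."""
    return v_current * math.sqrt(e_reference / e_measured)

energies = [0.9, 1.0, 1.21]        # relative acoustic energies of three transmitters
voltages = [100.0, 100.0, 100.0]   # identical drive voltage for all
e_ref = 1.0                        # target: match the middle transmitter
adjusted = [calibrated_voltage(v, e, e_ref) for v, e in zip(voltages, energies)]
print([round(v, 1) for v in adjusted])
```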

  5. Does ADHD in adults affect the relative accuracy of metamemory judgments?

    PubMed

    Knouse, Laura E; Paradise, Matthew J; Dunlosky, John

    2006-11-01

Prior research suggests that individuals with ADHD overestimate their performance across domains despite performing more poorly in them. The authors introduce measures of accuracy from the larger realm of judgment and decision making--namely, relative accuracy and calibration--to the study of self-evaluative judgment accuracy in adults with ADHD. Twenty-eight adults with ADHD and 28 matched controls participate in a computer-administered paired-associate learning task and predict their future recall using immediate and delayed judgments of learning (JOLs). Retrospective confidence judgments are also collected. Groups perform equally in terms of judgment magnitude and absolute judgment accuracy as measured by discrepancy scores and calibration curves. Both groups benefit equally from making their JOLs at a delay, and the group with ADHD shows higher relative accuracy for delayed judgments. Results suggest that under certain circumstances, adults with ADHD can make accurate judgments about their future memory.
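
    Relative accuracy in this literature is conventionally indexed by the Goodman-Kruskal gamma correlation between judgments of learning and later recall. A minimal sketch, with invented data:

```python
from itertools import combinations

# Sketch: Goodman-Kruskal gamma between JOLs (predicted recall likelihood)
# and recall outcomes (1 = recalled, 0 = not). Data below are invented.

def gamma(judgments, outcomes):
    """Concordant minus discordant pairs over their sum (tied pairs dropped)."""
    concordant = discordant = 0
    for (j1, o1), (j2, o2) in combinations(zip(judgments, outcomes), 2):
        prod = (j1 - j2) * (o1 - o2)
        if prod > 0:
            concordant += 1
        elif prod < 0:
            discordant += 1
    return (concordant - discordant) / (concordant + discordant)

jols = [90, 70, 50, 30, 10]   # predicted recall likelihood per item
recall = [1, 1, 0, 1, 0]      # actual recall per item
print(round(gamma(jols, recall), 2))
```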

  6. Polarized-pixel performance model for DoFP polarimeter

    NASA Astrophysics Data System (ADS)

    Feng, Bin; Shi, Zelin; Liu, Haizheng; Liu, Li; Zhao, Yaohong; Zhang, Junchao

    2018-06-01

A division of focal plane (DoFP) polarimeter is manufactured by placing a micropolarizer array directly onto the focal plane array (FPA) of a detector. Each element of the DoFP polarimeter is a polarized pixel. This paper proposes a performance model for a polarized pixel. The proposed model characterizes the optical and electronic performance of a polarized pixel by three parameters: major polarization responsivity, minor polarization responsivity, and polarization orientation. Each parameter corresponds to an intuitive physical feature of a polarized pixel. This paper further extends the model to calibrate polarization images from a DoFP polarimeter. The calibration is evaluated quantitatively with a developed DoFP polarimeter under varying illumination intensity and angle of linear polarization. The experiments prove that our model reduces nonuniformity to 6.79% of that of uncalibrated DoLP (degree of linear polarization) images and significantly improves the visual quality of the DoLP images.
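
    For context, the DoLP quantity being calibrated is computed per super-pixel from the four micropolarizer orientations (0, 45, 90, 135 degrees) via the linear Stokes components. A minimal sketch with illustrative intensities (not the paper's calibration model):

```python
import math

# Sketch: degree of linear polarization (DoLP) from a DoFP super-pixel's
# four micropolarizer intensities, via the linear Stokes components.

def dolp(i0, i45, i90, i135):
    s0 = 0.5 * (i0 + i45 + i90 + i135)  # total intensity
    s1 = i0 - i90                       # horizontal/vertical difference
    s2 = i45 - i135                     # diagonal difference
    return math.hypot(s1, s2) / s0

# Fully horizontally polarized light gives DoLP = 1.
print(round(dolp(1.0, 0.5, 0.0, 0.5), 3))
```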

  7. Ergonomics Calibration Training Utilizing Photography for Dental Hygiene Faculty Members.

    PubMed

    Partido, Brian B

    2017-10-01

    Dental and dental hygiene clinical faculty members often do not provide consistent instruction, especially since most procedures involve clinical judgment. Although instructional variations frequently translate into variations in student performance, the effect of inconsistent instruction is unknown, especially related to ergonomics. The aim of this study was to determine whether photography-assisted calibration training would improve interrater reliability among dental hygiene faculty members in ergonomics evaluation. The photography-assisted ergonomics calibration program incorporated features to improve accessibility and optimize the quality of the training. The study used a two-group repeated measures design with a convenience sample of 11 dental hygiene faculty members (eight full-time and three part-time) during the autumn 2016 term at one U.S. dental school. At weeks one and seven, all participants evaluated imaged postures of five dental students using a modified-dental operator posture assessment instrument. During weeks three and five, training group participants completed calibration training using independent and group review of imaged postures. All pre-training and post-training evaluations were evaluated for interrater reliability. Two-way random effects intraclass coefficient (ICC) values were calculated to measure the effects of the training on interrater reliability. The average measure of ICC of the training group improved from 0.694 with a 95% confidence interval (CI) of 0.001 to 0.965 (F(4,8)=3.465, p>0.05) to 0.766 with a 95% CI of 0.098 to 0.972 (F(4,8)=7.913, p<0.01). The average measure of ICC of the control group improved from 0.821 with a 95% CI of 0.480 to 0.978 (F(4,28)=7.702, p<0.01) to 0.846 with a 95% CI of 0.542 to 0.981 (F(4,28)=8.561, p<0.01). These results showed that the photography-assisted calibration training with the opportunity to reconcile different opinions resulted in improved agreement among these faculty members.
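
    The interrater statistic reported here, an average-measures ICC under a two-way random-effects model (often written ICC(2,k)), can be sketched from first principles; the subjects-by-raters table below is invented, not the study's data:

```python
# Hedged sketch (not the study's statistics package): average-measures ICC
# under a two-way random-effects model, ICC(2,k), from a subjects-by-raters
# table of ratings.

def icc2k(ratings):
    n = len(ratings)       # subjects (rows)
    k = len(ratings[0])    # raters (columns)
    grand = sum(sum(r) for r in ratings) / (n * k)
    subj_means = [sum(r) / k for r in ratings]
    rater_means = [sum(r[j] for r in ratings) / n for j in range(k)]
    msr = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)   # rows MS
    msc = n * sum((m - grand) ** 2 for m in rater_means) / (k - 1)  # columns MS
    ss_total = sum((x - grand) ** 2 for r in ratings for x in r)
    sse = ss_total - msr * (n - 1) - msc * (k - 1)
    mse = sse / ((n - 1) * (k - 1))                                 # error MS
    return (msr - mse) / (msr + (msc - mse) / n)

# Invented posture scores: 4 imaged students rated by 3 faculty members.
posture_scores = [[4, 4, 5], [2, 2, 2], [5, 4, 5], [1, 2, 1]]
print(round(icc2k(posture_scores), 3))
```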

  8. Novel Hyperspectral Sun Photometer for Satellite Remote Sensing Data Radiometeic Calibration and Atmospheric Aerosol Studies

    NASA Technical Reports Server (NTRS)

    Pagnutti, Mary; Ryan, Robert E.; Holekamp, Kara; Harrington, Gary; Frisbie, Troy

    2006-01-01

A simple and cost-effective hyperspectral sun photometer was developed for radiometric vicarious remote sensing system calibration, air quality monitoring, and potentially in-situ planetary climatological studies. The device was constructed solely from off-the-shelf components and was designed to be easily deployable in support of short-term verification and validation data collects. This sun photometer not only provides the same data products as existing multi-band sun photometers but also offers potential hyperspectral optical depth and diffuse-to-global products. Compared to traditional sun photometers, this device requires a simpler setup and less data acquisition time, and allows for a more direct calibration approach. Fielding this instrument has also enabled Stennis Space Center (SSC) Applied Sciences Directorate personnel to cross-calibrate existing sun photometers. This innovative research will position SSC personnel to perform air quality assessments in support of the NASA Applied Sciences Program's National Applications program element, as well as to develop techniques to evaluate aerosols in a Martian or other planetary atmosphere.
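
    Sun photometer calibration classically rests on the Langley method: the log of the direct-sun signal falls linearly with airmass, ln V = ln V0 - tau*m, so a straight-line fit yields both the optical depth tau and the extraterrestrial calibration constant V0. A sketch with synthetic, noise-free data (not this instrument's processing code):

```python
import math

# Sketch of a Langley-plot calibration: fit ln(signal) vs airmass to recover
# the calibration constant V0 and the optical depth tau. Synthetic data.

def langley_fit(airmass, signal):
    """Least-squares fit of ln(signal) vs airmass; returns (V0, tau)."""
    y = [math.log(v) for v in signal]
    n = len(airmass)
    mx = sum(airmass) / n
    my = sum(y) / n
    slope = sum((m - mx) * (yy - my) for m, yy in zip(airmass, y)) / \
            sum((m - mx) ** 2 for m in airmass)
    intercept = my - slope * mx
    return math.exp(intercept), -slope

tau_true, v0_true = 0.2, 1000.0
airmass = [1.0, 1.5, 2.0, 3.0, 5.0]
signal = [v0_true * math.exp(-tau_true * m) for m in airmass]  # noise-free
v0, tau = langley_fit(airmass, signal)
print(round(v0, 1), round(tau, 3))
```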

  9. Ground-based measurements of the 1.3 to 0.3 millimeter spectrum of Jupiter and Saturn, and their detailed calibration.

    PubMed

    Pardo, Juan R; Serabyn, Eugene; Wiedner, Martina C; Moreno, Raphäel; Orton, Glenn

    2017-07-01

One of the legacies of the now retired Caltech Submillimeter Observatory (CSO) is presented in this paper. We measured for the first time the emission of the giant planets Jupiter and Saturn across the 0.3 to 1.3 mm wavelength range using a Fourier Transform Spectrometer mounted on the 10.4-meter dish of the CSO at Mauna Kea, Hawaii, 4100 meters above sea level. A careful calibration, including the evaluation of the antenna performance over such a wide wavelength range and the removal of the effects of the Earth's atmosphere, has allowed the detection of broad absorption lines in those planets' atmospheres. The calibrated data allowed us to verify the predictions of standard models for both planets in this spectral region, and to confirm the absolute radiometry in the case of Jupiter. Besides their physical interest, the results are also important because both planets are calibration references in the current era of operating ground-based and space-borne submillimeter instruments.

  10. A framework for streamflow prediction in the world's most severely data-limited regions: Test of applicability and performance in a poorly-gauged region of China

    NASA Astrophysics Data System (ADS)

    Alipour, M. H.; Kibler, Kelly M.

    2018-02-01

    A framework methodology is proposed for streamflow prediction in poorly-gauged rivers located within large-scale regions of sparse hydrometeorologic observation. A multi-criteria model evaluation is developed to select models that balance runoff efficiency with selection of accurate parameter values. Sparse observed data are supplemented by uncertain or low-resolution information, incorporated as 'soft' data, to estimate parameter values a priori. Model performance is tested in two catchments within a data-poor region of southwestern China, and results are compared to models selected using alternative calibration methods. While all models perform consistently with respect to runoff efficiency (NSE range of 0.67-0.78), models selected using the proposed multi-objective method may incorporate more representative parameter values than those selected by traditional calibration. Notably, parameter values estimated by the proposed method resonate with direct estimates of catchment subsurface storage capacity (parameter residuals of 20 and 61 mm for maximum soil moisture capacity (Cmax), and 0.91 and 0.48 for soil moisture distribution shape factor (B); where a parameter residual is equal to the centroid of a soft parameter value minus the calibrated parameter value). A model more traditionally calibrated to observed data only (single-objective model) estimates a much lower soil moisture capacity (residuals of Cmax = 475 and 518 mm and B = 1.24 and 0.7). A constrained single-objective model also underestimates maximum soil moisture capacity relative to a priori estimates (residuals of Cmax = 246 and 289 mm). The proposed method may allow managers to more confidently transfer calibrated models to ungauged catchments for streamflow predictions, even in the world's most data-limited regions.
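
    A loose sketch of the multi-criteria idea (not the paper's actual objective function): candidate parameter sets are scored not only on runoff efficiency but also on how far their parameters sit from the uncertain 'soft' a-priori estimates. All weights and values below are invented.

```python
# Hedged sketch of a multi-criteria score: NSE minus a weighted,
# tolerance-scaled penalty on parameter residuals from soft estimates.
# Not the paper's formulation; all numbers are invented.

def composite_score(nse, params, soft_centroids, tolerances, weight=0.5):
    """NSE penalized by mean tolerance-scaled distance from soft parameter estimates."""
    penalty = sum(abs(p - c) / t
                  for p, c, t in zip(params, soft_centroids, tolerances))
    return nse - weight * penalty / len(params)

# Two candidate models: similar NSE, very different soil-moisture parameters.
soft = [300.0, 1.0]   # soft estimates for Cmax (mm) and shape factor B
tol = [200.0, 1.0]    # how uncertain each soft estimate is
model_a = composite_score(0.75, [320.0, 0.9], soft, tol)   # near soft values
model_b = composite_score(0.78, [800.0, 2.2], soft, tol)   # far from soft values
print(round(model_a, 3), round(model_b, 3))
```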

  11. Improvement of the repeatability of parallel transmission at 7T using interleaved acquisition in the calibration scan.

    PubMed

    Kameda, Hiroyuki; Kudo, Kohsuke; Matsuda, Tsuyoshi; Harada, Taisuke; Iwadate, Yuji; Uwano, Ikuko; Yamashita, Fumio; Yoshioka, Kunihiro; Sasaki, Makoto; Shirato, Hiroki

    2017-12-04

Respiration-induced phase shift affects B0/B1+ mapping repeatability in parallel transmission (pTx) calibration for 7T brain MRI, but is improved by breath-holding (BH). However, BH cannot be applied during long scans. To examine whether interleaved acquisition during calibration scanning could improve pTx repeatability and image homogeneity. Prospective. Nine healthy subjects. 7T MRI with a two-channel RF transmission system was used. Calibration scanning for B0/B1+ mapping was performed under sequential acquisition/free-breathing (Seq-FB), Seq-BH, and interleaved acquisition/FB (Int-FB) conditions. The B0 map was calculated with two echo times, and the B1+ map was obtained using the Bloch-Siegert method. Actual flip-angle imaging (AFI) and gradient echo (GRE) imaging were performed using pTx and quadrature-Tx (qTx). All scans were acquired in five sessions. Repeatability was evaluated using the intersession standard deviation (SD) or coefficient of variation (CV), and in-plane homogeneity was evaluated using the in-plane CV. A paired t-test with Bonferroni correction for multiple comparisons was used. The intersession CV/SDs for the B0/B1+ maps were significantly smaller in Int-FB than in Seq-FB (Bonferroni-corrected P < 0.05 for all). The intersession CVs for the AFI and GRE images were also significantly smaller in Int-FB, Seq-BH, and qTx than in Seq-FB (Bonferroni-corrected P < 0.05 for all). The in-plane CVs for the AFI and GRE images in Seq-FB, Int-FB, and Seq-BH were significantly smaller than in qTx (Bonferroni-corrected P < 0.01 for all). Using interleaved acquisition during calibration scans of pTx for 7T brain MRI improved the repeatability of B0/B1+ mapping, AFI, and GRE images, without BH. Level of Evidence: 1. Technical Efficacy: Stage 1. J. Magn. Reson. Imaging 2017. © 2017 International Society for Magnetic Resonance in Medicine.

  12. Hardware in the Loop Performance Assessment of LIDAR-Based Spacecraft Pose Determination

    PubMed Central

    Fasano, Giancarmine; Grassi, Michele

    2017-01-01

    In this paper an original, easy to reproduce, semi-analytic calibration approach is developed for hardware-in-the-loop performance assessment of pose determination algorithms processing point cloud data, collected by imaging a non-cooperative target with LIDARs. The laboratory setup includes a scanning LIDAR, a monocular camera, a scaled-replica of a satellite-like target, and a set of calibration tools. The point clouds are processed by uncooperative model-based algorithms to estimate the target relative position and attitude with respect to the LIDAR. Target images, acquired by a monocular camera operated simultaneously with the LIDAR, are processed applying standard solutions to the Perspective-n-Points problem to get high-accuracy pose estimates which can be used as a benchmark to evaluate the accuracy attained by the LIDAR-based techniques. To this aim, a precise knowledge of the extrinsic relative calibration between the camera and the LIDAR is essential, and it is obtained by implementing an original calibration approach which does not need ad-hoc homologous targets (e.g., retro-reflectors) easily recognizable by the two sensors. The pose determination techniques investigated by this work are of interest to space applications involving close-proximity maneuvers between non-cooperative platforms, e.g., on-orbit servicing and active debris removal. PMID:28946651

  13. Hardware in the Loop Performance Assessment of LIDAR-Based Spacecraft Pose Determination.

    PubMed

    Opromolla, Roberto; Fasano, Giancarmine; Rufino, Giancarlo; Grassi, Michele

    2017-09-24

In this paper an original, easy to reproduce, semi-analytic calibration approach is developed for hardware-in-the-loop performance assessment of pose determination algorithms processing point cloud data, collected by imaging a non-cooperative target with LIDARs. The laboratory setup includes a scanning LIDAR, a monocular camera, a scaled-replica of a satellite-like target, and a set of calibration tools. The point clouds are processed by uncooperative model-based algorithms to estimate the target relative position and attitude with respect to the LIDAR. Target images, acquired by a monocular camera operated simultaneously with the LIDAR, are processed applying standard solutions to the Perspective-n-Points problem to get high-accuracy pose estimates which can be used as a benchmark to evaluate the accuracy attained by the LIDAR-based techniques. To this aim, a precise knowledge of the extrinsic relative calibration between the camera and the LIDAR is essential, and it is obtained by implementing an original calibration approach which does not need ad-hoc homologous targets (e.g., retro-reflectors) easily recognizable by the two sensors. The pose determination techniques investigated by this work are of interest to space applications involving close-proximity maneuvers between non-cooperative platforms, e.g., on-orbit servicing and active debris removal.

  14. Assessment of the Collection 6 Terra and Aqua MODIS bands 1 and 2 calibration performance

    NASA Astrophysics Data System (ADS)

    Wu, A.; Chen, X.; Angal, A.; Li, Y.; Xiong, X.

    2015-09-01

    MODIS (Moderate Resolution Imaging Spectroradiometer) is a key sensor aboard the Terra (EOS AM) and Aqua (EOS PM) satellites. MODIS collects data in 36 spectral bands and generates over 40 data products for land, atmosphere, cryosphere and oceans. MODIS bands 1 and 2 have nadir spatial resolution of 250 m, compared with 500 m for bands 3 to 7 and 1000 m for all the remaining bands, and their measurements are crucial to derive key land surface products. This study evaluates the calibration performance of the Collection-6 L1B for both Terra and Aqua MODIS bands 1 and 2 using three vicarious approaches. The first and second approaches focus on stability assessment using data collected from two pseudo-invariant sites, Libya 4 desert and Antarctic Dome C snow surface. The third approach examines the relative stability between Terra and Aqua in reference to a third sensor from a series of NOAA 15-19 Advanced Very High Resolution Radiometer (AVHRR). The comparison is based on measurements from MODIS and AVHRR Simultaneous Nadir Overpasses (SNO) over a thirteen-year period from 2002 to 2015. Results from this study provide a quantitative assessment of Terra and Aqua MODIS bands 1 and 2 calibration stability and the relative calibration differences between the two sensors.

  15. Basis material decomposition in spectral CT using a semi-empirical, polychromatic adaption of the Beer-Lambert model.

    PubMed

    Ehn, S; Sellerer, T; Mechlem, K; Fehringer, A; Epple, M; Herzen, J; Pfeiffer, F; Noël, P B

    2017-01-07

    Following the development of energy-sensitive photon-counting detectors using high-Z sensor materials, application of spectral x-ray imaging methods to clinical practice comes into reach. However, these detectors require extensive calibration efforts in order to perform spectral imaging tasks like basis material decomposition. In this paper, we report a novel approach to basis material decomposition that utilizes a semi-empirical estimator for the number of photons registered in distinct energy bins in the presence of beam-hardening effects which can be termed as a polychromatic Beer-Lambert model. A maximum-likelihood estimator is applied to the model in order to obtain estimates of the underlying sample composition. Using a Monte-Carlo simulation of a typical clinical CT acquisition, the performance of the proposed estimator was evaluated. The estimator is shown to be unbiased and efficient according to the Cramér-Rao lower bound. In particular, the estimator is capable of operating with a minimum number of calibration measurements. Good results were obtained after calibration using less than 10 samples of known composition in a two-material attenuation basis. This opens up the possibility for fast re-calibration in the clinical routine which is considered an advantage of the proposed method over other implementations reported in the literature.
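
    The polychromatic Beer-Lambert forward model can be made concrete: expected counts in each energy bin are a spectrum-weighted sum of monochromatic attenuations over the two basis materials, and the material thicknesses are recovered by maximizing a Poisson likelihood. The sketch below uses invented spectra and attenuation values, and a crude grid search as a stand-in for the paper's estimator:

```python
import math

# Hedged sketch of a polychromatic Beer-Lambert model: per-bin expected
# counts are spectrum-weighted sums over monochromatic attenuations of two
# basis materials; thicknesses (t1, t2) are recovered by Poisson maximum
# likelihood. All spectra and attenuation coefficients are invented.

def expected_counts(spectrum, mu1, mu2, t1, t2):
    return sum(s * math.exp(-m1 * t1 - m2 * t2)
               for s, m1, m2 in zip(spectrum, mu1, mu2))

# Two energy bins with different spectral weights and attenuation values.
bins = [
    {"spectrum": [1000.0, 600.0], "mu1": [0.50, 0.35], "mu2": [0.90, 0.55]},
    {"spectrum": [800.0, 900.0], "mu1": [0.20, 0.15], "mu2": [0.30, 0.20]},
]

def neg_log_likelihood(measured, t1, t2):
    """Poisson negative log-likelihood over both bins (constant terms dropped)."""
    total = 0.0
    for b, c in zip(bins, measured):
        lam = expected_counts(b["spectrum"], b["mu1"], b["mu2"], t1, t2)
        total += lam - c * math.log(lam)
    return total

# Noiseless 'measurement' generated at the true thicknesses (1.0, 2.0).
truth = [expected_counts(b["spectrum"], b["mu1"], b["mu2"], 1.0, 2.0)
         for b in bins]

# Crude grid-search ML estimate of the two thicknesses.
nll, t1_hat, t2_hat = min(
    (neg_log_likelihood(truth, a / 10, b / 10), a / 10, b / 10)
    for a in range(31) for b in range(31))
print(t1_hat, t2_hat)
```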

  16. Basis material decomposition in spectral CT using a semi-empirical, polychromatic adaption of the Beer-Lambert model

    NASA Astrophysics Data System (ADS)

    Ehn, S.; Sellerer, T.; Mechlem, K.; Fehringer, A.; Epple, M.; Herzen, J.; Pfeiffer, F.; Noël, P. B.

    2017-01-01

    Following the development of energy-sensitive photon-counting detectors using high-Z sensor materials, application of spectral x-ray imaging methods to clinical practice comes into reach. However, these detectors require extensive calibration efforts in order to perform spectral imaging tasks like basis material decomposition. In this paper, we report a novel approach to basis material decomposition that utilizes a semi-empirical estimator for the number of photons registered in distinct energy bins in the presence of beam-hardening effects which can be termed as a polychromatic Beer-Lambert model. A maximum-likelihood estimator is applied to the model in order to obtain estimates of the underlying sample composition. Using a Monte-Carlo simulation of a typical clinical CT acquisition, the performance of the proposed estimator was evaluated. The estimator is shown to be unbiased and efficient according to the Cramér-Rao lower bound. In particular, the estimator is capable of operating with a minimum number of calibration measurements. Good results were obtained after calibration using less than 10 samples of known composition in a two-material attenuation basis. This opens up the possibility for fast re-calibration in the clinical routine which is considered an advantage of the proposed method over other implementations reported in the literature.

  17. TOPEX/POSEIDON microwave radiometer performance and in-flight calibration

    NASA Technical Reports Server (NTRS)

    Ruf, C. S.; Keihm, Stephen J.; Subramanya, B.; Janssen, Michael A.

    1994-01-01

Results of the in-flight calibration and performance evaluation campaign for the TOPEX/POSEIDON microwave radiometer (TMR) are presented. Intercomparisons are made between TMR and various sources of ground truth, including ground-based microwave water vapor radiometers, radiosondes, global climatological models, Special Sensor Microwave/Imager data over the Amazon rain forest, and models of clear, calm, subpolar ocean regions. After correction for preflight errors in the processing of thermal/vacuum data, relative channel offsets in the open-ocean TMR brightness temperatures were noted at the approximately 1 K level for the three TMR frequencies. Larger absolute offsets of 6-9 K over the rain forest indicated an approximately 5% gain error in the three channel calibrations. This was corrected by adjusting the antenna pattern correction (APC) algorithm. A 10% scale error in the TMR path delay estimates, relative to coincident radiosondes, was corrected in part by the APC adjustment and in part by a 5% modification to the value assumed for the strength of the 22.235 GHz water vapor line in the path delay retrieval algorithm. After all in-flight corrections to the calibration, TMR global retrieval accuracy for the wet tropospheric range correction is estimated at 1.1 cm root mean square (RMS), with consistent performance under clear, cloudy, and windy conditions.

  18. High Gain Antenna Calibration on Three Spacecraft

    NASA Technical Reports Server (NTRS)

    Hashmall, Joseph A.

    2011-01-01

This paper describes the alignment calibration of spacecraft High Gain Antennas (HGAs) for three missions. For two of the missions (the Lunar Reconnaissance Orbiter and the Solar Dynamics Observatory) the calibration was performed on orbit. For the third mission (the Global Precipitation Measurement core satellite) ground simulation of the calibration was performed in a calibration feasibility study. These three satellites provide a range of calibration situations: lunar orbit transmitting to a ground antenna for LRO, geosynchronous orbit transmitting to a ground antenna for SDO, and low Earth orbit transmitting to TDRS satellites for GPM. The calibration results depend strongly on the quality and quantity of calibration data. With insufficient data, the calibration function may give erroneous solutions. Manual intervention in the calibration allowed reliable parameters to be generated for all three missions.

  19. Effect of the Modified Glasgow Coma Scale Score Criteria for Mild Traumatic Brain Injury on Mortality Prediction: Comparing Classic and Modified Glasgow Coma Scale Score Model Scores of 13

    PubMed Central

    Mena, Jorge Humberto; Sanchez, Alvaro Ignacio; Rubiano, Andres M.; Peitzman, Andrew B.; Sperry, Jason L.; Gutierrez, Maria Isabel; Puyana, Juan Carlos

    2011-01-01

Objective The Glasgow Coma Scale (GCS) classifies Traumatic Brain Injuries (TBI) as Mild (14–15), Moderate (9–13), or Severe (3–8). The ATLS modified this classification so that a GCS score of 13 is categorized as mild TBI. We investigated the effect of this modification on mortality prediction, comparing patients with a GCS of 13 classified as moderate TBI (classic model) to patients with a GCS of 13 classified as mild TBI (modified model). Methods We selected adult TBI patients from the Pennsylvania Trauma Outcome Study database (PTOS). Logistic regressions adjusting for age, sex, cause, severity, trauma center level, comorbidities, and isolated TBI were performed. A second evaluation included the time trend of mortality. A third evaluation also included hypothermia, hypotension, mechanical ventilation, screening for drugs, and severity of TBI. Discrimination of the models was evaluated using the area under the receiver operating characteristic curve (AUC). Calibration was evaluated using the Hosmer-Lemeshow goodness-of-fit (GOF) test. Results In the first evaluation, the AUCs were 0.922 (95% CI, 0.917–0.926) and 0.908 (95% CI, 0.903–0.912) for the classic and modified models, respectively. Both models showed poor calibration (p<0.001). In the third evaluation, the AUCs were 0.946 (95% CI, 0.943–0.949) and 0.938 (95% CI, 0.934–0.940) for the classic and modified models, respectively, with improvements in calibration (p=0.30 and p=0.02 for the classic and modified models, respectively). Conclusion The lack of overlap between the ROC curves of the two models reveals a statistically significant difference in their ability to predict mortality. The classic model demonstrated better GOF than the modified model. A GCS of 13 classified as moderate TBI in a multivariate logistic regression model performed better than a GCS of 13 classified as mild. PMID:22071923
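
    The discrimination statistic used here, the AUC, equals the probability that a randomly chosen death receives a higher predicted risk than a randomly chosen survivor (ties counting half). A minimal sketch with invented risk scores, not the study's data:

```python
# Sketch: AUC computed directly from its Mann-Whitney interpretation --
# the fraction of (positive, negative) pairs ranked correctly, ties = 0.5.
# Risk scores and outcomes below are invented.

def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

risk = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]   # predicted mortality risk
died = [1, 1, 0, 1, 0, 0]               # observed outcome
print(round(auc(risk, died), 3))
```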

  20. Decoder calibration with ultra small current sample set for intracortical brain-machine interface

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Ma, Xuan; Chen, Luyao; Zhou, Jin; Wang, Changyong; Li, Wei; He, Jiping

    2018-04-01

Objective. Intracortical brain-machine interfaces (iBMIs) aim to restore efficient communication and movement ability for paralyzed patients. However, frequent recalibration is required for consistency and reliability, and every recalibration requires a relatively large current sample set. The aim of this study is to develop an effective decoder calibration method that can achieve good performance while minimizing recalibration time. Approach. Two rhesus macaques implanted with intracortical microelectrode arrays were trained separately on a movement and a sensory paradigm. Neural signals were recorded to decode reaching positions or grasping postures. A novel principal component analysis-based domain adaptation (PDA) method was proposed to recalibrate the decoder with only an ultra small current sample set by taking advantage of large historical data, and the decoding performance was compared with three other calibration methods for evaluation. Main results. The PDA method closed the gap between historical and current data effectively, making it possible to exploit large historical data when decoding current data. Using only an ultra small current sample set (five trials of each category), the decoder calibrated with the PDA method achieved much better and more robust performance across all sessions than with the other three calibration methods, in both monkeys. Significance. (1) This study brings transfer learning theory into iBMI decoder calibration for the first time. (2) Unlike most transfer learning studies, the target data here were an ultra small sample set and were transferred to the source data. (3) By taking advantage of historical data, the PDA method was demonstrated to be effective in reducing recalibration time for both the movement and the sensory paradigm, indicating viable generalization. By reducing the demand for large current training data, this new method may facilitate the application of intracortical brain-machine interfaces in clinical practice.
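
    As a loose illustration of the general idea (not the paper's PDA algorithm), one simple way to reuse a large historical dataset is to estimate principal directions from it and express the small current sample in that historical subspace before decoding. The sketch below finds the leading principal component by power iteration on invented 2-D data:

```python
# Hedged sketch, not the published PDA method: estimate the dominant
# principal direction from plentiful historical data (power iteration),
# then project an ultra small current sample onto that historical axis.

def mean_center(data):
    dims = len(data[0])
    means = [sum(row[d] for row in data) / len(data) for d in range(dims)]
    return [[row[d] - means[d] for d in range(dims)] for row in data], means

def leading_pc(data, iters=200):
    """First principal component of (already centered) data via power iteration."""
    v = [1.0] * len(data[0])
    for _ in range(iters):
        # w = (X^T X) v, computed as X^T (X v) without forming the matrix
        xv = [sum(r * vi for r, vi in zip(row, v)) for row in data]
        w = [sum(row[d] * s for row, s in zip(data, xv)) for d in range(len(v))]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Invented historical data varying mostly along the (1, 1) direction.
historical = [[1.0, 1.1], [2.0, 1.9], [3.0, 3.2], [-1.0, -0.9], [-2.0, -2.1]]
centered, hist_mean = mean_center(historical)
pc = leading_pc(centered)

# Project an ultra small current sample onto the historical axis.
current = [[0.5, 0.6], [-1.5, -1.4]]
projected = [sum((x - m) * p for x, m, p in zip(row, hist_mean, pc))
             for row in current]
print([round(abs(c), 3) for c in pc])
```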

  1. GIFTS SM EDU Level 1B Algorithms

    NASA Technical Reports Server (NTRS)

    Tian, Jialin; Gazarik, Michael J.; Reisse, Robert A.; Johnson, David G.

    2007-01-01

The Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) Sensor Module (SM) Engineering Demonstration Unit (EDU) is a high resolution spectral imager designed to measure infrared (IR) radiances using a Fourier transform spectrometer (FTS). The GIFTS instrument employs three focal plane arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The raw interferogram measurements are radiometrically and spectrally calibrated to produce radiance spectra, which are further processed to obtain atmospheric profiles via retrieval algorithms. This paper describes the GIFTS SM EDU Level 1B algorithms involved in the calibration. The GIFTS Level 1B calibration procedures can be subdivided into four blocks. In the first block, the measured raw interferograms are corrected for detector nonlinearity distortion, followed by a complex filtering and decimation procedure. In the second block, a phase correction algorithm is applied to the filtered and decimated complex interferograms. The resulting imaginary part of the spectrum contains only the noise component of the uncorrected spectrum. Additional random noise reduction can be accomplished by applying a spectral smoothing routine to the phase-corrected spectrum. The phase correction and spectral smoothing operations are performed on a set of interferogram scans for both ambient and hot blackbody references. To continue the calibration, we compute the spectral responsivity based on the previous results, from which the calibrated ambient blackbody (ABB), hot blackbody (HBB), and scene spectra can be obtained. We can then estimate the noise equivalent spectral radiance (NESR) from the calibrated ABB and HBB spectra. Correction schemes that compensate for fore-optics offsets and off-axis effects are also implemented. In the third block, we develop an efficient method of generating pixel performance assessments. In addition, a random pixel selection scheme is designed based on the pixel performance evaluation. Finally, in the fourth block, the single-pixel algorithms are applied to the entire FPA.
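
    The responsivity step rests on the standard two-point (ambient/hot blackbody) radiometric calibration used throughout FTS work: scene signals are mapped linearly onto the radiance scale defined by the two reference views, whose radiances come from the Planck function. A sketch per spectral channel, with illustrative counts and temperatures (not GIFTS values):

```python
import math

# Sketch of a two-point blackbody radiometric calibration for one spectral
# channel: raw counts are mapped linearly onto the Planck radiances of the
# ambient (ABB) and hot (HBB) references. All numbers are illustrative.

def planck(wavenumber_cm, temp_k):
    """Planck radiance in mW/(m^2 sr cm^-1) at a given wavenumber (cm^-1)."""
    c1 = 1.191042e-5   # 2*h*c^2 in mW/(m^2 sr cm^-4)
    c2 = 1.4387769     # h*c/k in K*cm
    return c1 * wavenumber_cm ** 3 / (math.exp(c2 * wavenumber_cm / temp_k) - 1.0)

def calibrate(c_scene, c_abb, c_hbb, wavenumber_cm, t_abb, t_hbb):
    """Two-point calibration: linear map from raw counts to radiance."""
    b_a = planck(wavenumber_cm, t_abb)
    b_h = planck(wavenumber_cm, t_hbb)
    return (c_scene - c_abb) / (c_hbb - c_abb) * (b_h - b_a) + b_a

# A scene whose raw signal sits halfway between the two references comes
# out halfway between the two blackbody radiances.
L = calibrate(1500.0, 1000.0, 2000.0, 900.0, t_abb=290.0, t_hbb=330.0)
mid = 0.5 * (planck(900.0, 290.0) + planck(900.0, 330.0))
print(abs(L - mid) < 1e-9)
```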

  2. The feasibility of using explicit method for linear correction of the particle size variation using NIR Spectroscopy combined with PLS2 regression method

    NASA Astrophysics Data System (ADS)

    Yulia, M.; Suhandy, D.

    2018-03-01

    NIR spectra obtained from a spectral data acquisition system contain both chemical and physical information about the samples, such as particle size and bulk density. Several methods have been established for developing calibration models that can compensate for variations in the physical properties of samples. One common approach is to include the physical variation in the calibration model, either explicitly or implicitly. The objective of this study was to evaluate the feasibility of using the explicit method to compensate for the influence of different particle sizes of coffee powder on NIR calibration model performance. A total of 220 coffee powder samples of two coffee types (civet and non-civet) and two particle sizes (212 and 500 µm) were prepared. Spectral data were acquired using an NIR spectrometer equipped with an integrating sphere for diffuse reflectance measurement. A discrimination method based on PLS-DA was conducted, and the influence of particle size on the performance of PLS-DA was investigated. In the explicit method, the particle size is added directly as a predicted variable, resulting in an X block containing only the NIR spectra and a Y block containing both the particle size and the type of coffee. The explicit inclusion of particle size in the calibration model is expected to improve the accuracy of coffee-type determination. The results show that with the explicit method the quality of the developed calibration model for coffee-type determination is slightly superior, with a coefficient of determination (R2) of 0.99 and a root mean square error of cross-validation (RMSECV) of 0.041. The PLS2 calibration model for coffee-type determination with particle size compensation performed well and was able to predict the type of coffee at both particle sizes with relatively high R2 pred values. The prediction also resulted in low bias and RMSEP values.
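
    The "explicit method" above hinges on PLS2 handling a multi-column Y block (coffee type plus particle size). A minimal NIPALS-style PLS2 sketch in NumPy is shown below; this is a generic textbook implementation, not the authors' code, and the synthetic latent-variable data are purely illustrative:

```python
import numpy as np

def pls2_fit(X, Y, n_components, max_iter=500, tol=1e-12):
    """Minimal NIPALS PLS2. X (n,p) and Y (n,m) must be column-centered.
    Returns coefficient matrix B with Y ~= X @ B."""
    p, m = X.shape[1], Y.shape[1]
    W = np.zeros((p, n_components))  # X weights
    P = np.zeros((p, n_components))  # X loadings
    Q = np.zeros((m, n_components))  # Y loadings
    Xk, Yk = X.copy(), Y.copy()
    for a in range(n_components):
        # start the inner power iteration from the most variable Y column
        u = Yk[:, [int(np.argmax(Yk.var(axis=0)))]]
        for _ in range(max_iter):
            w = Xk.T @ u
            w /= np.linalg.norm(w)
            t = Xk @ w
            q = Yk.T @ t / (t.T @ t)
            u_new = Yk @ q / (q.T @ q)
            if np.linalg.norm(u_new - u) < tol:
                u = u_new
                break
            u = u_new
        t = Xk @ w
        p_a = Xk.T @ t / (t.T @ t)
        Xk = Xk - t @ p_a.T   # deflate X
        Yk = Yk - t @ q.T     # deflate Y (PLS2: shared scores for all Y columns)
        W[:, [a]], P[:, [a]], Q[:, [a]] = w, p_a, q
    return W @ np.linalg.solve(P.T @ W, Q.T)
```

    In the explicit setup, one Y column would encode the class (e.g., civet vs. non-civet) and another the particle size, so both are predicted from the same latent scores.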

  3. PV Calibration Insights | NREL

    Science.gov Websites

    The Photovoltaic (PV) Calibration Insights blog will provide updates on the testing done by the NREL PV Device Performance group. This NREL research group measures the performance of all technologies and sizes of PV devices from around the world.

  4. Some aspects of robotics calibration, design and control

    NASA Technical Reports Server (NTRS)

    Tawfik, Hazem

    1990-01-01

    The main objective is to introduce techniques in the areas of testing and calibration, design, and control of robotic systems. A statistical technique is described that analyzes a robot's performance and provides quantitative three-dimensional evaluation of its repeatability, accuracy, and linearity. Based on this analysis, a corrective action should be taken to compensate for any existing errors and enhance the robot's overall accuracy and performance. A comparison between robotics simulation software packages that were commercially available (SILMA, IGRIP) and that of Kennedy Space Center (ROBSIM) is also included. These computer codes simulate the kinematics and dynamics patterns of various robot arm geometries to help the design engineer in sizing and building the robot manipulator and control system. A brief discussion on an adaptive control algorithm is provided.

  5. WE-G-BRB-08: TG-51 Calibration of First Commercial MRI-Guided IMRT System in the Presence of 0.35 Tesla Magnetic Field.

    PubMed

    Goddu, S; Green, O Pechenaya; Mutic, S

    2012-06-01

    The first real-time MRI-guided radiotherapy system has been installed in a clinic and is being evaluated. The presence of a magnetic field (MF) during radiation output calibration may have implications for ionization measurements, and standard calibration protocols may not be suitable for dose measurements on such devices. In this study, we evaluated whether a standard calibration protocol (AAPM TG-51) is appropriate for absolute dose measurement in the presence of an MF. Treatment delivery of the ViewRay (VR) system is via three 15,000 Ci Cobalt-60 heads positioned 120 degrees apart, and all calibration measurements were done in the presence of the 0.35 T MF. Two ADCL-calibrated ionization chambers (Exradin A12, A16) were used for TG-51 calibration. Chambers were positioned at 5 cm depth (SSD = 105 cm: VR's isocenter), and the MLC leaves were shaped to a 10.5 cm × 10.5 cm field size. Percent-depth-dose (PDD) measurements were performed for 5 and 10 cm depths. The individual output of each head was measured using the AAPM TG-51 protocol. Calibration accuracy for each head was subsequently verified by Radiological Physics Center (RPC) TLD measurements. Measured ion-recombination (Pion) and polarity (Ppol) correction factors were less than 1.002 and 1.006, respectively. Measured PDDs agreed with BJR-25 within ±0.2%. Maximum dose rates for the reference field size at VR's isocenter for heads 1, 2 and 3 were 1.445±0.005, 1.446±0.107, and 1.431±0.006 Gy/minute, respectively. Our calibrations agreed with RPC TLD measurements within ±1.3%, ±2.6% and ±2.0% for treatment heads 1, 2 and 3, respectively. At the time of calibration, the mean activity of the Co-60 sources was 10,800 Ci ± 0.1%. This study shows that TG-51 calibration is feasible in the presence of a 0.35 T MF, and the measurement agreement is within the range of results obtainable for conventional treatment machines. Drs. Green, Goddu, and Mutic served as scientific consultants for ViewRay, Inc. Dr. Mutic is on the clinical focus group for ViewRay, Inc., and his spouse holds shares in ViewRay, Inc. © 2012 American Association of Physicists in Medicine.
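
    As a rough illustration of how the Pion and Ppol factors mentioned above enter a TG-51 output calibration, here is a minimal sketch of the corrected chamber reading and dose computation; the numerical values below are illustrative, not the study's data, and clinical use requires the full protocol:

```python
def p_tp(temp_c, press_kpa):
    """Temperature-pressure correction (TG-51 reference: 22 degC, 101.33 kPa)."""
    return (273.2 + temp_c) / (273.2 + 22.0) * 101.33 / press_kpa

def dose_tg51(m_raw_nc, n_dw_gy_per_nc, k_q=1.0, p_ion=1.0, p_pol=1.0,
              p_elec=1.0, temp_c=22.0, press_kpa=101.33):
    """Absorbed dose [Gy] from a corrected chamber reading [nC]."""
    m_corr = m_raw_nc * p_ion * p_pol * p_elec * p_tp(temp_c, press_kpa)
    return m_corr * k_q * n_dw_gy_per_nc
```

    For Co-60 beams, k_Q is unity by definition; the abstract's bounds on Pion and Ppol (1.002 and 1.006) show how small these corrections remained in the 0.35 T field.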

  6. Dose calibrator linearity test: 99mTc versus 18F radioisotopes*

    PubMed Central

    Willegaignon, José; Sapienza, Marcelo Tatit; Coura-Filho, George Barberio; Garcez, Alexandre Teles; Alves, Carlos Eduardo Gonzalez Ribeiro; Cardona, Marissa Anabel Rivera; Gutterres, Ricardo Fraga; Buchpiguel, Carlos Alberto

    2015-01-01

    Objective The present study was aimed at evaluating the viability of replacing 18F with 99mTc in dose calibrator linearity testing. Materials and Methods The test was performed with sources of 99mTc (62 GBq) and 18F (12 GBq) whose activities were measured down to values lower than 1 MBq. Ratios and deviations between experimental and theoretical 99mTc and 18F source activities were calculated and subsequently compared. Results Mean deviations between experimental and theoretical 99mTc and 18F source activities were 0.56 (± 1.79)% and 0.92 (± 1.19)%, respectively. The mean ratio between activities indicated by the device for the 99mTc source as measured with the equipment pre-calibrated to measure 99mTc and 18F was 3.42 (± 0.06), and for the 18F source this ratio was 3.39 (± 0.05); these values were constant over the measurement time. Conclusion The results of the linearity test using 99mTc were compatible with those obtained with the 18F source, indicating the viability of utilizing either radioisotope in dose calibrator linearity testing. This information, together with the high potential for radiation exposure and the costs involved in 18F acquisition, suggests 99mTc as the element of choice for dose calibrator linearity tests in centers that use 18F, without any detriment to the procedure or to the quality of the nuclear medicine service. PMID:25798005
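
    A linearity test like the one above compares calibrator readings against the radioactive-decay law as the source decays. A minimal sketch of the theoretical curve and percent deviation follows; the half-lives are standard approximate values and the example activities are illustrative:

```python
import math

HALF_LIFE_H = {"Tc-99m": 6.01, "F-18": 1.8295}  # approximate half-lives [hours]

def theoretical_activity(a0, t_hours, half_life_hours):
    """Expected activity after decay from initial activity a0 (same units out)."""
    return a0 * math.exp(-math.log(2.0) * t_hours / half_life_hours)

def percent_deviation(measured, theoretical):
    """Deviation [%] of a dose-calibrator reading from the decay-law prediction."""
    return 100.0 * (measured - theoretical) / theoretical
```

    The test passes when the deviations stay within the acceptance tolerance (commonly a few percent) across the whole activity range.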

  7. Soybean Physiology Calibration in the Community Land Model

    NASA Astrophysics Data System (ADS)

    Drewniak, B. A.; Bilionis, I.; Constantinescu, E. M.

    2014-12-01

    With the large influence of agricultural land use on biophysical and biogeochemical cycles, integrating cultivation into Earth System Models (ESMs) is increasingly important. The Community Land Model (CLM) was augmented with a CLM-Crop extension that simulates the development of three crop types: maize, soybean, and spring wheat. The CLM-Crop model is a complex system that relies on a suite of parametric inputs that govern plant growth under a given atmospheric forcing and available resources. However, the strong nonlinearity of ESMs makes parameter fitting a difficult task. In this study, our goal is to calibrate ten of the CLM-Crop parameters for one crop type, soybean, in order to improve model projection of plant development and carbon fluxes. We used measurements of gross primary productivity, net ecosystem exchange, and plant biomass from AmeriFlux sites to choose parameter values that optimize crop productivity in the model. Calibration is performed in a Bayesian framework by developing a scalable and adaptive scheme based on sequential Monte Carlo (SMC). Our scheme can perform model calibration using very few evaluations and, by exploiting parallelism, at a fraction of the time required by plain vanilla Markov Chain Monte Carlo (MCMC). We present the results from a twin experiment (self-validation) and calibration results and validation using real observations from an AmeriFlux tower site in the Midwestern United States, for the soybean crop type. The improved model will help researchers understand how climate affects crop production and resulting carbon fluxes, and additionally, how cultivation impacts climate.

  8. High Energy Astronomy Observatory (HEAO)

    NASA Image and Video Library

    1977-01-01

    This photograph is of the High Energy Astronomy Observatory (HEAO)-2 telescope being evaluated by engineers in the clean room of the X-Ray Calibration Facility at the Marshall Space Flight Center (MSFC). The MSFC was heavily engaged in the technical and scientific aspects, testing and calibration, of the HEAO-2 telescope. The HEAO-2 was the first imaging x-ray telescope and the largest built to date. The X-Ray Calibration Facility was built in 1976 for testing MSFC's HEAO-2. The facility is the world's largest, most advanced laboratory for simulating x-ray emissions from distant celestial objects. It produced a space-like environment in which components related to x-ray telescope imaging are tested and the quality of their performance in space is predicted. The original facility contained a 1,000-foot long by 3-foot diameter vacuum tube (for the x-ray path) connecting an x-ray generator and an instrument test chamber. Recently, the facility was upgraded to evaluate the optical elements of NASA's Hubble Space Telescope, Chandra X-Ray Observatory and Compton Gamma-Ray Observatory.

  9. A Compound Sensor for Simultaneous Measurement of Packing Density and Moisture Content of Silage.

    PubMed

    Meng, Delun; Meng, Fanjia; Sun, Wei; Deng, Shuang

    2017-12-28

    Packing density and moisture content are important factors in investigating ensiling quality. Low packing density is a major cause of loss of sugar content, and moisture content plays a determinant role in biomass degradation. To comprehensively evaluate ensiling quality, this study focused on developing a compound sensor in which moisture electrodes and strain gauges were embedded into an ASABE Standard small cone for simultaneous measurement of the penetration resistance (PR) and moisture content (MC) of silage. To evaluate the performance of the designed sensor and the theoretical analysis being used, relevant calibration and validation tests were conducted. The determination coefficients were 0.996 and 0.992 for PR calibration and 0.934 for MC calibration. The validation indicated that this measurement technique could determine the packing density and moisture content of silage simultaneously and eliminate the influence of friction between the penetration shaft and the silage. In this study, we not only designed a compound sensor but also provided an alternative way to investigate ensiling quality, which should be useful for further silage research.

  10. Development and evaluation of a finite element model of the THOR for occupant protection of spaceflight crewmembers.

    PubMed

    Putnam, Jacob B; Somers, Jeffrey T; Wells, Jessica A; Perry, Chris E; Untaroiu, Costin D

    2015-09-01

    New vehicles are currently being developed to transport humans to space. During the landing phases, crewmembers may be exposed to spinal and frontal loading. To reduce the risk of injuries during these common impact scenarios, the National Aeronautics and Space Administration (NASA) is developing new safety standards for spaceflight. The Test Device for Human Occupant Restraint (THOR) advanced multi-directional anthropomorphic test device (ATD), with the National Highway Traffic Safety Administration modification kit, has been chosen to evaluate occupant spacecraft safety because of its improved biofidelity. NASA tested the THOR ATD at Wright-Patterson Air Force Base (WPAFB) in various impact configurations, including frontal and spinal loading. A computational finite element model (FEM) of the THOR to match these latest modifications was developed in LS-DYNA software. The main goal of this study was to calibrate and validate the THOR FEM for use in future spacecraft safety studies. An optimization-based method was developed to calibrate the material models of the lumbar joints and pelvic flesh. Compression test data were used to calibrate the quasi-static material properties of the pelvic flesh, while whole body THOR ATD kinematic and kinetic responses under spinal and frontal loading conditions were used for dynamic calibration. The performance of the calibrated THOR FEM was evaluated by simulating separate THOR ATD tests with different crash pulses along both spinal and frontal directions. The model response was compared with test data by calculating its correlation score using the CORrelation and Analysis rating system. The biofidelity of the THOR FEM was then evaluated against tests recorded on human volunteers under 3 different frontal and spinal impact pulses. The calibrated THOR FEM responded with high similarity to the THOR ATD in all validation tests. 
The THOR FEM showed good biofidelity relative to human-volunteer data under spinal loading, but limited biofidelity under frontal loading. This may suggest a need for further improvements in both the THOR ATD and FEM. Overall, results presented in this study provide confidence in the THOR FEM for use in predicting THOR ATD responses for conditions, such as those observed in spacecraft landing, and for use in evaluating THOR ATD biofidelity. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Diagnostic utility of appetite loss in addition to existing prediction models for community-acquired pneumonia in the elderly: a prospective diagnostic study in acute care hospitals in Japan

    PubMed Central

    Yamamoto, Yosuke; Terada, Kazuhiko; Ohta, Mitsuyasu; Mikami, Wakako; Yokota, Hajime; Hayashi, Michio; Miyashita, Jun; Azuma, Teruhisa; Fukuma, Shingo; Fukuhara, Shunichi

    2017-01-01

    Objective Diagnosis of community-acquired pneumonia (CAP) in the elderly is often delayed because of atypical presentation and non-specific symptoms, such as appetite loss, falls and disturbance of consciousness. The aim of this study was to investigate the external validity of existing prediction models and the added value of the non-specific symptoms for the diagnosis of CAP in elderly patients. Design Prospective cohort study. Setting General medicine departments of three teaching hospitals in Japan. Participants A total of 109 elderly patients who consulted for upper respiratory symptoms between 1 October 2014 and 30 September 2016. Main outcome measures The reference standard for CAP was chest radiography evaluated by two certified radiologists. The existing models were externally validated for diagnostic performance by calibration plot and discrimination. To evaluate the added value of the non-specific symptoms over the existing prediction models, we developed an extended logistic regression model. Calibration, discrimination, category-free net reclassification improvement (NRI) and decision curve analysis (DCA) were investigated in the extended model. Results Among the existing models, the model by van Vugt demonstrated the best performance, with an area under the curve of 0.75 (95% CI 0.63 to 0.88); the calibration plot showed good fit despite a significant Hosmer-Lemeshow test (p=0.017). Among the non-specific symptoms, appetite loss had a positive likelihood ratio of 3.2 (2.0–5.3), a negative likelihood ratio of 0.4 (0.2–0.7) and an OR of 7.7 (3.0–19.7). Addition of appetite loss to the model by van Vugt led to improved calibration (p=0.48), an NRI of 0.53 (p=0.019) and higher net benefit by DCA. Conclusions Information on appetite loss improved the performance of an existing model for the diagnosis of CAP in the elderly. PMID:29122806
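
    Likelihood ratios like those reported for appetite loss follow from standard 2×2 diagnostic arithmetic, sketched below; the counts in the test are invented for illustration and are not the study's data:

```python
def likelihood_ratios(tp, fp, fn, tn):
    """Positive and negative likelihood ratios from a 2x2 table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return sens / (1.0 - spec), (1.0 - sens) / spec

def posttest_probability(pretest_prob, lr):
    """Update a pretest probability with a likelihood ratio via odds."""
    odds = pretest_prob / (1.0 - pretest_prob) * lr
    return odds / (1.0 + odds)
```

    For example, an LR+ of 3.2 raises a 30% pretest probability of CAP to roughly 58%, which is why a cheap sign such as appetite loss can add real diagnostic value.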

  12. Internal stray radiation measurement for cryogenic infrared imaging systems using a spherical mirror.

    PubMed

    Tian, Qijie; Chang, Songtao; He, Fengyun; Li, Zhou; Qiao, Yanfeng

    2017-06-10

    Internal stray radiation is a key factor that influences infrared imaging systems, and its suppression level is an important criterion to evaluate system performance, especially for cryogenic infrared imaging systems, which are highly sensitive to thermal sources. In order to achieve accurate measurement for internal stray radiation, an approach is proposed, which is based on radiometric calibration using a spherical mirror. First of all, the theory of spherical mirror design is introduced. Then, the calibration formula considering the integration time is presented. Following this, the details regarding the measurement method are presented. By placing a spherical mirror in front of the infrared detector, the influence of internal factors of the detector on system output can be obtained. According to the calibration results of the infrared imaging system, the output caused by internal stray radiation can be acquired. Finally, several experiments are performed in a chamber with controllable inside temperatures to validate the theory proposed in this paper. Experimental results show that the measurement results are in good accordance with the theoretical analysis, and demonstrate that the proposed theories are valid and can be employed in practical applications. The proposed method can achieve accurate measurement for internal stray radiation at arbitrary integration time and ambient temperatures. The measurement result can be used to evaluate whether the suppression level meets the system requirement.
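
    The subtraction logic described above, converting a calibrated output offset into an equivalent internal stray radiance, can be sketched generically. The linear gain/offset model and all numbers below are illustrative assumptions, not the paper's exact formulation (which also accounts for integration time):

```python
def gain_offset(dn_bb1, dn_bb2, rad_bb1, rad_bb2):
    """Linear radiometric calibration from two blackbody measurements:
    DN = gain * L + offset (fixed integration time assumed)."""
    gain = (dn_bb2 - dn_bb1) / (rad_bb2 - rad_bb1)
    return gain, dn_bb1 - gain * rad_bb1

def stray_equivalent_radiance(dn_stray_component, gain):
    """Convert the output component attributed to internal stray
    radiation into an equivalent radiance via the calibrated gain."""
    return dn_stray_component / gain
```

    The spherical mirror serves to isolate the detector's own contribution to the offset, so the remainder can be attributed to internal stray radiation and expressed in radiance units through the gain.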

  13. Performance Evaluation of an Infrared Thermocouple

    PubMed Central

    Chen, Chiachung; Weng, Yu-Kai; Shen, Te-Ching

    2010-01-01

    The measurement of the leaf temperature of forests or agricultural plants is an important technique for monitoring the physiological state of crops. The infrared thermometer is a convenient device due to its fast response and nondestructive measurement technique. Recently, a novel infrared thermocouple, developed with the same measurement principle as the infrared thermometer but using a different detector, has been commercialized for non-contact temperature measurement. The performances of two kinds of infrared thermocouples were evaluated in this study. The standard temperature was maintained by a temperature calibrator and a special black cavity device. The results indicated that both types of infrared thermocouples had good precision. The error distribution ranged from −1.8 °C to 18 °C when the reading values served as the true values. Within the range from 13 °C to 37 °C, the adequate calibration equations were high-order polynomial equations. Within the narrower range from 20 °C to 35 °C, the adequate equation was a linear equation for one sensor and a second-order polynomial equation for the other sensor. The accuracy of the two kinds of infrared thermocouples was improved by nearly 0.4 °C with the calibration equations. These devices could serve as mobile monitoring tools for in situ, real-time, routine estimation of leaf temperatures. PMID:22163458
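
    Choosing between a linear and a higher-order calibration equation, as done above, comes down to fitting polynomials of different degrees and comparing residuals. A minimal sketch with NumPy's least-squares polynomial fit (the synthetic readings are illustrative):

```python
import numpy as np

def fit_calibration(reading, reference, deg):
    """Least-squares polynomial calibration mapping sensor readings to
    reference temperatures; returns coefficients and the fit RMSE."""
    coeffs = np.polyfit(reading, reference, deg)
    resid = reference - np.polyval(coeffs, reading)
    return coeffs, float(np.sqrt(np.mean(resid ** 2)))
```

    Comparing the RMSE of first- and second-order fits over a narrow range shows whether the linear equation suffices, as was found for one of the two sensors.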

  14. Development of PBPK Models for Gasoline in Adult and ...

    EPA Pesticide Factsheets

    Concern for potential developmental effects of exposure to gasoline-ethanol blends has grown along with their increased use in the US fuel supply. Physiologically-based pharmacokinetic (PBPK) models for these complex mixtures were developed to address dosimetric issues related to selection of exposure concentrations for in vivo toxicity studies. Sub-models for individual hydrocarbon (HC) constituents were first developed and calibrated with published literature or QSAR-derived data where available. Successfully calibrated sub-models for individual HCs were combined, assuming competitive metabolic inhibition in the liver, and a priori simulations of mixture interactions were performed. Blood HC concentration data were collected from exposed adult non-pregnant (NP) rats (9K ppm total HC vapor, 6h/day) to evaluate performance of the NP mixture model. This model was then converted to a pregnant (PG) rat mixture model using gestational growth equations that enabled a priori estimation of life-stage specific kinetic differences. To address the impact of changing relevant physiological parameters from NP to PG, the PG mixture model was first calibrated against the NP data. The PG mixture model was then evaluated against data from PG rats that were subsequently exposed (9K ppm/6.33h gestation days (GD) 9-20). Overall, the mixture models adequately simulated concentrations of HCs in blood from single (NP) or repeated (PG) exposures (within ~2-3 fold of measured values of

  15. A graphical method to evaluate spectral preprocessing in multivariate regression calibrations: example with Savitzky-Golay filters and partial least squares regression

    USDA-ARS?s Scientific Manuscript database

    In multivariate regression analysis of spectroscopy data, spectral preprocessing is often performed to reduce unwanted background information (offsets, sloped baselines) or accentuate absorption features in intrinsically overlapping bands. These procedures, also known as pretreatments, are commonly ...

  16. Urban tree growth modeling

    Treesearch

    E. Gregory McPherson; Paula J. Peper

    2012-01-01

    This paper describes three long-term tree growth studies conducted to evaluate tree performance because repeated measurements of the same trees produce critical data for growth model calibration and validation. Several empirical and process-based approaches to modeling tree growth are reviewed. Modeling is more advanced in the fields of forestry and...

  17. Success and challenges met during the calibration of APEX on large plots

    USDA-ARS?s Scientific Manuscript database

    As the APEX model is increasingly considered for the evaluation of agricultural systems, satisfactory performance of APEX on fields is critical. APEX was applied to 16 replicated large plots established in 1991 in Northeast Missouri. Until 2009, each phase of each rotation was represented every year...

  18. Global Space-Based Inter-Calibration System Reflective Solar Calibration Reference: From Aqua MODIS to S-NPP VIIRS

    NASA Technical Reports Server (NTRS)

    Xiong, Xiaoxiong; Angal, Amit; Butler, James; Cao, Changyong; Doelling, Daivd; Wu, Aisheng; Wu, Xiangqian

    2016-01-01

    MODIS has successfully operated on board NASA's EOS Terra and Aqua spacecraft for more than 16 and 14 years, respectively. The MODIS instrument was designed with stringent calibration requirements and a comprehensive on-board calibration capability. In the reflective solar spectral region, Aqua MODIS has performed better than Terra MODIS and has therefore been chosen by the Global Space-based Inter-Calibration System (GSICS) operational community as the calibration reference sensor for cross-sensor calibration and calibration inter-comparisons. For the same reason, it has also been used by a number of earth observing sensors as their calibration reference. Considering that Aqua MODIS has already operated for nearly 14 years, it is essential to transfer its calibration to a follow-on reference sensor with a similar calibration capability and stable performance. VIIRS is a follow-on instrument to MODIS and shares many design features with MODIS, including their on-board calibrators (OBC). As a result, VIIRS is an ideal candidate to replace MODIS as the future GSICS reference sensor. Since launch, S-NPP VIIRS has operated for more than 4 years, and its overall performance has been extensively characterized and demonstrated to meet its design requirements. This paper provides an overview of the Aqua MODIS and S-NPP VIIRS reflective solar bands (RSB) calibration methodologies and strategies, their traceability, and their on-orbit performance. It describes and illustrates different methods and approaches that can be used to facilitate the calibration reference transfer, including the use of desert and Antarctic sites, deep convective clouds (DCC), and lunar observations.

  19. Calibration of an agricultural-hydrological model (RZWQM2) using surrogate global optimization

    DOE PAGES

    Xi, Maolong; Lu, Dan; Gui, Dongwei; ...

    2016-11-27

    Robust calibration of an agricultural-hydrological model is critical for simulating crop yield and water quality and making reasonable agricultural management decisions. However, calibration of agricultural-hydrological system models is challenging because of model complexity, the existence of strong parameter correlation, and significant computational requirements. Therefore, only a limited number of simulations can be allowed in any attempt to find a near-optimal solution within an affordable time, which greatly restricts the successful application of the model. The goal of this study is to locate the optimal solution of the Root Zone Water Quality Model (RZWQM2) given a limited simulation time, so as to improve the model simulation and help make rational and effective agricultural-hydrological decisions. To this end, we propose a computationally efficient global optimization procedure using sparse-grid based surrogates. We first used advanced sparse grid (SG) interpolation to construct a surrogate system of the actual RZWQM2, and then we calibrate the surrogate model using the global optimization algorithm, Quantum-behaved Particle Swarm Optimization (QPSO). As the surrogate model is a polynomial with fast evaluation, it can be efficiently evaluated a sufficiently large number of times during the optimization, which facilitates the global search. We calibrate seven model parameters against five years of yield, drain flow, and NO3-N loss data from a subsurface-drained corn-soybean field in Iowa. Results indicate that an accurate surrogate model can be created for the RZWQM2 with a relatively small number of SG points (i.e., RZWQM2 runs). Compared to the conventional QPSO algorithm, our surrogate-based optimization method can achieve a smaller objective function value and better calibration performance using fewer expensive RZWQM2 executions, which greatly improves computational efficiency.
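
    The surrogate-based idea above, replacing expensive simulator runs with a cheap polynomial interpolant that is then searched globally, can be sketched in one dimension. Here Chebyshev nodes stand in for a sparse grid, a dense grid search stands in for QPSO, and the objective is a hypothetical stand-in for RZWQM2:

```python
import numpy as np

def expensive_model(x):
    """Hypothetical stand-in for one expensive simulator run."""
    return (x - 0.3) ** 2 + 0.1 * np.sin(5.0 * x)

def build_surrogate(f, n_nodes=13):
    """Interpolating polynomial surrogate from a few expensive runs.
    Chebyshev nodes on [-1, 1] stand in for sparse-grid points."""
    k = np.arange(1, n_nodes + 1)
    nodes = np.cos(np.pi * (2 * k - 1) / (2.0 * n_nodes))
    return np.polyfit(nodes, f(nodes), n_nodes - 1), nodes

def optimize_surrogate(coeffs, n_grid=20001):
    """Cheap global search on the polynomial surrogate (QPSO stand-in)."""
    xs = np.linspace(-1.0, 1.0, n_grid)
    return xs[np.argmin(np.polyval(coeffs, xs))]
```

    Only 13 "expensive" evaluations are needed to build the surrogate, after which the optimizer may probe it tens of thousands of times essentially for free, which is the efficiency argument the abstract makes.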

  20. Accuracy of subcutaneous continuous glucose monitoring in critically ill adults: improved sensor performance with enhanced calibrations.

    PubMed

    Leelarathna, Lalantha; English, Shane W; Thabit, Hood; Caldwell, Karen; Allen, Janet M; Kumareswaran, Kavita; Wilinska, Malgorzata E; Nodale, Marianna; Haidar, Ahmad; Evans, Mark L; Burnstein, Rowan; Hovorka, Roman

    2014-02-01

    Accurate real-time continuous glucose measurements may improve glucose control in the critical care unit. We evaluated the accuracy of the FreeStyle® Navigator® (Abbott Diabetes Care, Alameda, CA) subcutaneous continuous glucose monitoring (CGM) device in critically ill adults using two methods of calibration. In a randomized trial, paired CGM and reference glucose (hourly arterial blood glucose [ABG]) values were collected over a 48-h period from 24 adults with critical illness (mean±SD age, 60±14 years; mean±SD body mass index, 29.6±9.3 kg/m²; mean±SD Acute Physiology and Chronic Health Evaluation score, 12±4 [range, 6-19]) and hyperglycemia. In 12 subjects, the CGM device was calibrated at variable intervals of 1-6 h using ABG. In the other 12 subjects, the sensor was calibrated according to the manufacturer's instructions (at 1, 2, 10, and 24 h) using arterial blood and the built-in point-of-care glucometer. In total, 1,060 CGM-ABG pairs were analyzed over the glucose range from 4.3 to 18.8 mmol/L. Using enhanced calibration at a median (interquartile range) interval of 169 (122-213) min, the absolute relative deviation was lower (7.0% [3.5, 13.0] vs. 12.8% [6.3, 21.8], P<0.001), and the percentage of points in Clarke error grid zone A was higher (87.8% vs. 70.2%). Accuracy of the Navigator CGM device during critical illness was comparable to that observed in non-critical care settings. Further significant improvements in accuracy may be obtained by frequent calibration with ABG measurements.
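
    The accuracy metrics used above (absolute relative deviation against the ABG reference, and the share of points in Clarke zone A) can be sketched as follows. Note the zone A criterion here is a simplified within-20% proxy, not the full Clarke grid logic, and the test data are invented:

```python
import numpy as np

def absolute_relative_deviation(cgm, ref):
    """Pointwise absolute relative deviation [%] of CGM vs. reference glucose."""
    cgm, ref = np.asarray(cgm, float), np.asarray(ref, float)
    return np.abs(cgm - ref) / ref * 100.0

def median_ard(cgm, ref):
    """Median absolute relative deviation [%] over all paired points."""
    return float(np.median(absolute_relative_deviation(cgm, ref)))

def zone_a_percent(cgm, ref):
    """Simplified proxy for Clarke zone A: within 20% of the reference."""
    return float(np.mean(absolute_relative_deviation(cgm, ref) <= 20.0) * 100.0)
```
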

  1. Calibration of an agricultural-hydrological model (RZWQM2) using surrogate global optimization

    NASA Astrophysics Data System (ADS)

    Xi, Maolong; Lu, Dan; Gui, Dongwei; Qi, Zhiming; Zhang, Guannan

    2017-01-01

    Robust calibration of an agricultural-hydrological model is critical for simulating crop yield and water quality and making reasonable agricultural management. However, calibration of the agricultural-hydrological system models is challenging because of model complexity, the existence of strong parameter correlation, and significant computational requirements. Therefore, only a limited number of simulations can be allowed in any attempt to find a near-optimal solution within an affordable time, which greatly restricts the successful application of the model. The goal of this study is to locate the optimal solution of the Root Zone Water Quality Model (RZWQM2) given a limited simulation time, so as to improve the model simulation and help make rational and effective agricultural-hydrological decisions. To this end, we propose a computationally efficient global optimization procedure using sparse-grid based surrogates. We first used advanced sparse grid (SG) interpolation to construct a surrogate system of the actual RZWQM2, and then we calibrate the surrogate model using the global optimization algorithm, Quantum-behaved Particle Swarm Optimization (QPSO). As the surrogate model is a polynomial with fast evaluation, it can be efficiently evaluated with a sufficiently large number of times during the optimization, which facilitates the global search. We calibrate seven model parameters against five years of yield, drain flow, and NO3-N loss data from a subsurface-drained corn-soybean field in Iowa. Results indicate that an accurate surrogate model can be created for the RZWQM2 with a relatively small number of SG points (i.e., RZWQM2 runs). Compared to the conventional QPSO algorithm, our surrogate-based optimization method can achieve a smaller objective function value and better calibration performance using a fewer number of expensive RZWQM2 executions, which greatly improves computational efficiency.

  2. Evaluation of the 24-Hour Recall as a Reference Instrument for Calibrating Other Self-Report Instruments in Nutritional Cohort Studies: Evidence From the Validation Studies Pooling Project.

    PubMed

    Freedman, Laurence S; Commins, John M; Willett, Walter; Tinker, Lesley F; Spiegelman, Donna; Rhodes, Donna; Potischman, Nancy; Neuhouser, Marian L; Moshfegh, Alanna J; Kipnis, Victor; Baer, David J; Arab, Lenore; Prentice, Ross L; Subar, Amy F

    2017-07-01

    Calibrating dietary self-report instruments is recommended as a way to adjust for measurement error when estimating diet-disease associations. Because biomarkers available for calibration are limited, most investigators use self-reports (e.g., 24-hour recalls (24HRs)) as the reference instrument. We evaluated the performance of 24HRs as reference instruments for calibrating food frequency questionnaires (FFQs), using data from the Validation Studies Pooling Project, comprising 5 large validation studies using recovery biomarkers. Using 24HRs as reference instruments, we estimated attenuation factors, correlations with truth, and calibration equations for FFQ-reported intakes of energy and for protein, potassium, and sodium and their densities, and we compared them with values derived using biomarkers. Based on 24HRs, FFQ attenuation factors were substantially overestimated for energy and sodium intakes, less for protein and potassium, and minimally for nutrient densities. FFQ correlations with truth, based on 24HRs, were substantially overestimated for all dietary components. Calibration equations did not capture dependencies on body mass index. We also compared predicted bias in estimated relative risks adjusted using 24HRs as reference instruments with bias when making no adjustment. In disease models with energy and 1 or more nutrient intakes, predicted bias in estimated nutrient relative risks was reduced on average, but bias in the energy risk coefficient was unchanged. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health 2017. This work is written by (a) US Government employee(s) and is in the public domain in the US.
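
    The attenuation factor discussed above is, in the classical measurement-error model, the slope from regressing the reference instrument on the self-report instrument. A minimal sketch with hypothetical data (not the pooled-study estimator, which additionally accounts for within-person variation and covariates such as body mass index):

```python
# Attenuation factor as a simple regression slope (illustrative data only).
# Values below 1 indicate that diet-disease associations estimated from the
# FFQ are attenuated toward the null.

def attenuation_factor(ffq, reference):
    """Slope of regressing reference intake on FFQ-reported intake."""
    n = len(ffq)
    mean_q = sum(ffq) / n
    mean_r = sum(reference) / n
    cov = sum((q - mean_q) * (r - mean_r)
              for q, r in zip(ffq, reference)) / (n - 1)
    var_q = sum((q - mean_q) ** 2 for q in ffq) / (n - 1)
    return cov / var_q
```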

  3. Calibration of an agricultural-hydrological model (RZWQM2) using surrogate global optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xi, Maolong; Lu, Dan; Gui, Dongwei

    Robust calibration of an agricultural-hydrological model is critical for simulating crop yield and water quality and for making sound agricultural management decisions. However, calibration of agricultural-hydrological system models is challenging because of model complexity, strong parameter correlations, and significant computational requirements. Therefore, only a limited number of simulations can be allowed in any attempt to find a near-optimal solution within an affordable time, which greatly restricts the successful application of the model. The goal of this study is to locate the optimal solution of the Root Zone Water Quality Model (RZWQM2) given a limited simulation time, so as to improve the model simulation and help make rational and effective agricultural-hydrological decisions. To this end, we propose a computationally efficient global optimization procedure using sparse-grid-based surrogates. We first used advanced sparse grid (SG) interpolation to construct a surrogate of the actual RZWQM2, and then calibrated the surrogate model using the global optimization algorithm Quantum-behaved Particle Swarm Optimization (QPSO). As the surrogate model is a polynomial that is fast to evaluate, it can be evaluated a sufficiently large number of times during the optimization, which facilitates the global search. We calibrated seven model parameters against five years of yield, drain flow, and NO3-N loss data from a subsurface-drained corn-soybean field in Iowa. Results indicate that an accurate surrogate model can be created for RZWQM2 with a relatively small number of SG points (i.e., RZWQM2 runs). Compared to the conventional QPSO algorithm, our surrogate-based optimization method achieves a smaller objective function value and better calibration performance using fewer expensive RZWQM2 executions, which greatly improves computational efficiency.

  4. Effect of display type, DICOM calibration and room illuminance in bitewing radiographs.

    PubMed

    Kallio-Pulkkinen, Soili; Huumonen, Sisko; Haapea, Marianne; Liukkonen, Esa; Sipola, Annina; Tervonen, Osmo; Nieminen, Miika T

    2016-01-01

    To compare observer performance in the detection of both anatomical structures and caries in bitewing radiographs using consumer grade displays with and without digital imaging and communications in medicine (DICOM) calibration, tablets (third generation iPad; Apple, Cupertino, CA) and 6-megapixel (MP) displays under different lighting. 30 bitewing radiographs were blindly evaluated on four displays under bright (510 lx) and dim (16 lx) ambient lighting by two observers. The dentinoenamel junction, enamel and dentinal caries, and the cortical border of the alveolar crests were evaluated. Consensus was considered as reference. Intraobserver agreement was determined. The proportion of equivalent ratings and weighted kappa were used to assess reliability. The proportion of equivalent ratings with consensus differed significantly between uncalibrated and DICOM-calibrated consumer grade display in enamel caries in upper and lower molars in bright (p = 0.013 and p = 0.003) lighting, and in dentinal caries in lower molars in both bright (p = 0.022) and dim (p = 0.004) lighting. The proportion also differed significantly between DICOM-calibrated consumer grade and 6-MP display in dentinal caries in lower molars in bright lighting (p = 0.039), tablet and consumer grade display in enamel caries in upper molars (p = 0.017) in bright lighting, tablet and 6-MP display in dentinal caries in lower molars (p = 0.003) in bright lighting and in enamel caries in lower molars (p = 0.012) in dim lighting. DICOM calibration improves the detection of enamel and dentinal caries in bitewing radiographs, particularly in bright lighting. Therefore, a calibrated consumer grade display can be recommended as a diagnostic tool for viewing bitewing radiographs.
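
    The weighted kappa used above to assess rating reliability can be computed as follows. This is a generic linearly weighted kappa for two sets of ordinal ratings (categories coded 0..k-1), not necessarily the exact configuration used in the study:

```python
# Linearly weighted kappa for two raters over k ordinal categories
# (generic sketch; disagreement weights grow linearly with category distance).

def weighted_kappa(r1, r2, k):
    n = len(r1)
    w = [[abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        obs[a][b] += 1.0 / n
    p1 = [sum(obs[i][j] for j in range(k)) for i in range(k)]  # rater 1 marginals
    p2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]  # rater 2 marginals
    d_obs = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    d_exp = sum(w[i][j] * p1[i] * p2[j] for i in range(k) for j in range(k))
    return 1.0 - d_obs / d_exp
```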

  5. First Evaluation of the Climatological Calibration Algorithm in the Real-time TMPA Precipitation Estimates over Two Basins at High and Low Latitudes

    NASA Technical Reports Server (NTRS)

    Yong, Bin; Ren, Liliang; Hong, Yang; Gourley, Jonathan; Tian, Yudong; Huffman, George J.; Chen, Xi; Wang, Weiguang; Wen, Yixin

    2013-01-01

    The TRMM Multi-satellite Precipitation Analysis (TMPA) system underwent a crucial upgrade in early 2009 to include a climatological calibration algorithm (CCA) to its realtime product 3B42RT, and this algorithm will continue to be applied in the future Global Precipitation Measurement era constellation precipitation products. In this study, efforts are focused on the comparison and validation of the Version 6 3B42RT estimates before and after the climatological calibration is applied. The evaluation is accomplished using independent rain gauge networks located within the high-latitude Laohahe basin and the low-latitude Mishui basin, both in China. The analyses indicate the CCA can effectively reduce the systematic errors over the low-latitude Mishui basin but misrepresent the intensity distribution pattern of medium-high rain rates. This behavior could adversely affect TMPA's hydrological applications, especially for extreme events (e.g., floods and landslides). Results also show that the CCA tends to perform slightly worse, in particular, during summer and winter, over the high-latitude Laohahe basin. This is possibly due to the simplified calibration-processing scheme in the CCA that directly applies the climatological calibrators developed within 40 degrees latitude to the latitude belts of 40 degrees N-50 degrees N. Caution should therefore be exercised when using the calibrated 3B42RT for heavy rainfall-related flood forecasting (or landslide warning) over high-latitude regions, as the employment of the smooth-fill scheme in the CCA bias correction could homogenize the varying rainstorm characteristics. Finally, this study highlights that accurate detection and estimation of snow at high latitudes is still a challenging task for the future development of satellite precipitation retrievals.
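
    As a rough illustration of how a climatological calibration works, the sketch below builds monthly multiplicative calibrators from gauge and satellite climatologies and applies them to a raw estimate. This is a deliberate simplification: the actual CCA uses a more elaborate scheme, and the month-indexed ratio here is an assumption for illustration.

```python
# Toy climatological bias correction: one multiplicative calibrator per
# month, defined as gauge climatology / satellite climatology.

def monthly_calibrators(sat_clim, gauge_clim):
    """Multiplicative calibrators; fall back to 1.0 where satellite is zero."""
    return [g / s if s > 0 else 1.0 for s, g in zip(sat_clim, gauge_clim)]

def calibrate_estimate(raw, month_index, calibrators):
    """Apply the climatological calibrator to a raw real-time estimate."""
    return raw * calibrators[month_index]
```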

  6. Evaluation of Hydrologic Simulations Developed Using Multi-Model Synthesis and Remotely-Sensed Data within a Portfolio of Calibration Strategies

    NASA Astrophysics Data System (ADS)

    Lafontaine, J.; Hay, L.; Markstrom, S. L.

    2016-12-01

    The United States Geological Survey (USGS) has developed a National Hydrologic Model (NHM) to support coordinated, comprehensive and consistent hydrologic model development, and facilitate the application of hydrologic simulations within the conterminous United States (CONUS). As many stream reaches in the CONUS are either not gaged, or are substantially impacted by water use or flow regulation, ancillary information must be used to determine reasonable parameter estimations for streamflow simulations. Hydrologic models for 1,576 gaged watersheds across the CONUS were developed to test the feasibility of improving streamflow simulations linking physically-based hydrologic models with remotely-sensed data products (i.e. snow water equivalent). Initially, the physically-based models were calibrated to measured streamflow data to provide a baseline for comparison across multiple calibration strategy tests. In addition, not all ancillary datasets are appropriate for application to all parts of the CONUS (e.g. snow water equivalent in the southeastern U.S., where snow is a rarity). As it is not expected that any one data product or model simulation will be sufficient for representing hydrologic behavior across the entire CONUS, a systematic evaluation of which data products improve hydrologic simulations for various regions across the CONUS was performed. The resulting portfolio of calibration strategies can be used to guide selection of an appropriate combination of modeled and measured information for hydrologic model development and calibration. In addition, these calibration strategies have been developed to be flexible so that new data products can be assimilated. This analysis provides a foundation to understand how well models work when sufficient streamflow data are not available and could be used to further inform hydrologic model parameter development for ungaged areas.
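
    Calibration strategies like those above are typically scored with a streamflow objective such as the Nash-Sutcliffe efficiency (NSE), where 1 is a perfect fit and 0 means the model is no better than the observed mean. A minimal implementation (the NHM workflow itself is not specified in this abstract):

```python
# Nash-Sutcliffe efficiency, a standard hydrologic calibration objective.

def nse(sim, obs):
    """1.0 = perfect fit; 0.0 = no better than predicting the observed mean."""
    mean_obs = sum(obs) / len(obs)
    err = sum((s - o) ** 2 for s, o in zip(sim, obs))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - err / var
```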

  7. Accuracy of Subcutaneous Continuous Glucose Monitoring in Critically Ill Adults: Improved Sensor Performance with Enhanced Calibrations

    PubMed Central

    Leelarathna, Lalantha; English, Shane W.; Thabit, Hood; Caldwell, Karen; Allen, Janet M.; Kumareswaran, Kavita; Wilinska, Malgorzata E.; Nodale, Marianna; Haidar, Ahmad; Evans, Mark L.; Burnstein, Rowan

    2014-01-01

    Objective: Accurate real-time continuous glucose measurements may improve glucose control in the critical care unit. We evaluated the accuracy of the FreeStyle® Navigator® (Abbott Diabetes Care, Alameda, CA) subcutaneous continuous glucose monitoring (CGM) device in critically ill adults using two methods of calibration. Subjects and Methods: In a randomized trial, paired CGM and reference glucose (hourly arterial blood glucose [ABG]) were collected over a 48-h period from 24 adults with critical illness (mean±SD age, 60±14 years; mean±SD body mass index, 29.6±9.3 kg/m2; mean±SD Acute Physiology and Chronic Health Evaluation score, 12±4 [range, 6–19]) and hyperglycemia. In 12 subjects, the CGM device was calibrated at variable intervals of 1–6 h using ABG. In the other 12 subjects, the sensor was calibrated according to the manufacturer's instructions (1, 2, 10, and 24 h) using arterial blood and the built-in point-of-care glucometer. Results: In total, 1,060 CGM–ABG pairs were analyzed over the glucose range from 4.3 to 18.8 mmol/L. With enhanced calibration at a median (interquartile range) interval of 169 (122–213) min, the absolute relative deviation was lower (7.0% [3.5, 13.0] vs. 12.8% [6.3, 21.8], P<0.001), and the percentage of points in Clarke error grid Zone A was higher (87.8% vs. 70.2%). Conclusions: Accuracy of the Navigator CGM device during critical illness was comparable to that observed in non–critical care settings. Further significant improvements in accuracy may be obtained by frequent calibration with ABG measurements. PMID:24180327
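
    The absolute relative deviation (ARD) metric used above compares each CGM reading against its paired reference value; the study reports its median. A minimal sketch with hypothetical readings:

```python
# Percent absolute relative deviation for paired CGM/reference readings,
# plus the median used to summarize it (illustrative values only).

def absolute_relative_deviation(cgm, ref):
    """Percent absolute relative deviation of each CGM-reference pair."""
    return [abs(c - r) / r * 100.0 for c, r in zip(cgm, ref)]

def median(values):
    s = sorted(values)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2.0
```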

  8. Evaluation of an Integrated Read-Out Layer Prototype

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abu-Ajamieh, Fayez

    2011-07-01

    This thesis presents evaluation results of an Integrated Read-out Layer (IRL), a proposed concept in scintillator-based calorimetry intended to meet the exceptional calorimetric requirements of the envisaged International Linear Collider (ILC). This study presents a full characterization of the prototype IRL, including exploration of relevant parameters, calibration performance, and the uniformity of response. The study represents proof of the IRL concept. Finally, proposed design enhancements are presented.

  9. Evaluation of an Integrated Read-Out Layer Prototype

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abu-Ajamieh, Fayez; /NIU

    2011-08-18

    This thesis presents evaluation results of an Integrated Read-out Layer (IRL), a proposed concept in scintillator-based calorimetry intended to meet the exceptional calorimetric requirements of the envisaged International Linear Collider (ILC). This study presents a full characterization of the prototype IRL, including exploration of relevant parameters, calibration performance, and the uniformity of response. The study represents proof of the IRL concept. Finally, proposed design enhancements are presented.

  10. Phase Calibration for the Block 1 VLBI System

    NASA Technical Reports Server (NTRS)

    Roth, M. G.; Runge, T. F.

    1983-01-01

    Very Long Baseline Interferometry (VLBI) in the DSN provides support for spacecraft navigation, Earth orientation measurements, and synchronization of network time and frequency standards. An improved method for calibrating instrumental phase shifts has recently been implemented as a computer program in the Block 1 system. The new calibration program, called PRECAL, performs calibrations over intervals as small as 0.4 seconds and greatly reduces the amount of computer processing required to perform phase calibration.

  11. Accounting for sensor calibration, data validation, measurement and sampling uncertainties in monitoring urban drainage systems.

    PubMed

    Bertrand-Krajewski, J L; Bardin, J P; Mourad, M; Béranger, Y

    2003-01-01

    Assessing the functioning and the performance of urban drainage systems on both rainfall event and yearly time scales is usually based on online measurements of flow rates and on samples of influent and effluent for some rainfall events per year. In order to draw pertinent scientific and operational conclusions from the measurement results, it is absolutely necessary to use appropriate methods and techniques in order to i) calibrate sensors and analytical methods, ii) validate raw data, iii) evaluate measurement uncertainties, iv) evaluate the number of rainfall events to sample per year in order to determine performance indicators with a given uncertainty. Based on previous work, the paper gives a synthetic review of the required methods and techniques, and illustrates their application to storage and settling tanks. Experiments show that, under controlled and careful experimental conditions, relative uncertainties are about 20% for flow rates in sewer pipes, 6-10% for volumes, 25-35% for TSS concentrations and loads, and 18-276% for TSS removal rates. In order to evaluate the annual pollutant interception efficiency of storage and settling tanks with a given uncertainty, efforts should first be devoted to decreasing the sampling uncertainty by increasing the number of sampled events.
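
    For uncorrelated inputs, the relative uncertainties of a ratio such as outlet load over inlet load combine in quadrature under standard first-order error propagation, which helps explain why removal rates carry much larger relative uncertainties than the underlying loads. A one-line sketch (the paper's own uncertainty budget is more detailed):

```python
import math

# First-order (quadrature) propagation of relative uncertainty for a ratio
# of two uncorrelated quantities, e.g. outlet load / inlet load.

def relative_uncertainty_of_ratio(u_rel_in, u_rel_out):
    """Combined relative uncertainty of a ratio, uncorrelated inputs."""
    return math.sqrt(u_rel_in ** 2 + u_rel_out ** 2)
```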

  12. Bayesian calibration for electrochemical thermal model of lithium-ion cells

    NASA Astrophysics Data System (ADS)

    Tagade, Piyush; Hariharan, Krishnan S.; Basu, Suman; Verma, Mohan Kumar Singh; Kolake, Subramanya Mayya; Song, Taewon; Oh, Dukjin; Yeo, Taejung; Doo, Seokgwang

    2016-07-01

    The pseudo-two-dimensional electrochemical thermal (P2D-ECT) model contains many parameters that are difficult to evaluate experimentally. Estimation of these model parameters is challenging due to computational cost and the transient nature of the model. Due to lack of complete physical understanding, this issue gets aggravated at extreme conditions like low temperature (LT) operations. This paper presents a Bayesian calibration framework for estimation of the P2D-ECT model parameters. The framework uses a matrix variate Gaussian process representation to obtain a computationally tractable formulation for calibration of the transient model. Performance of the framework is investigated for calibration of the P2D-ECT model across a range of temperatures (333 K-263 K) and operating protocols. In the absence of complete physical understanding, the framework also quantifies structural uncertainty in the calibrated model. This information is used by the framework to test the validity of new physical phenomena before their incorporation in the model. This capability is demonstrated by introducing temperature dependence of Bruggeman's coefficient and lithium plating formation at LT. With the incorporation of the new physics, the calibrated P2D-ECT model accurately predicts the cell voltage with high confidence. The accurate predictions are used to obtain new insights into low temperature lithium ion cell behavior.
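
    The essence of Bayesian calibration, prior times likelihood, renormalized, can be sketched on a one-parameter grid. This toy stands in for, and is far simpler than, the matrix-variate Gaussian process formulation of the paper; the grid, `model`, and the Gaussian likelihood are illustrative assumptions.

```python
import math

# Toy Bayesian parameter calibration on a grid: posterior ∝ prior ×
# Gaussian likelihood of the observed data given the model prediction.

def grid_posterior(prior, param_values, model, data, sigma):
    post = []
    for p, pr in zip(param_values, prior):
        log_lik = sum(-0.5 * ((d - model(p)) / sigma) ** 2 for d in data)
        post.append(pr * math.exp(log_lik))
    z = sum(post)
    return [x / z for x in post]          # normalized posterior weights
```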

  13. Calibration methods influence quantitative material decomposition in photon-counting spectral CT

    NASA Astrophysics Data System (ADS)

    Curtis, Tyler E.; Roeder, Ryan K.

    2017-03-01

    Photon-counting detectors and nanoparticle contrast agents can potentially enable molecular imaging and material decomposition in computed tomography (CT). Material decomposition has been investigated using both simulated and acquired data sets. However, the effect of calibration methods on material decomposition has not been systematically investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on quantitative material decomposition. A commercially available photon-counting spectral micro-CT (MARS Bioimaging) was used to acquire images with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material basis matrix values were determined using multiple linear regression models, and material decomposition was performed using a maximum a posteriori estimator. The accuracy of quantitative material decomposition was evaluated by the root mean squared error (RMSE), specificity, sensitivity, and area under the curve (AUC). An increased maximum concentration (range) in the calibration significantly improved RMSE, specificity, and AUC. The effects of an increased number of concentrations in the calibration were not statistically significant for the conditions in this study. The overall results demonstrated that the accuracy of quantitative material decomposition in spectral CT is significantly influenced by calibration methods, which must therefore be carefully considered for the intended diagnostic imaging application.
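
    In the simplest two-bin, two-material case, material decomposition reduces to inverting a calibrated basis matrix that maps material concentrations to per-bin attenuation. A minimal sketch with hypothetical basis values (the study uses five bins, regression-derived basis values, and a maximum a posteriori estimator):

```python
# Two-bin, two-material decomposition by direct inversion of the calibrated
# basis matrix (hypothetical numbers; a toy version of the general approach).

def solve2(a_mat, y):
    """Solve a 2x2 linear system a_mat @ x = y by Cramer's rule."""
    (a, b), (c, d) = a_mat
    det = a * d - b * c
    return [(y[0] * d - b * y[1]) / det, (a * y[1] - y[0] * c) / det]

def decompose(measured_bins, basis_matrix):
    """basis_matrix[i][j]: attenuation in bin i per unit of material j."""
    return solve2(basis_matrix, measured_bins)
```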

  14. SU-F-P-49: Comparison of Mapcheck 2 Commission for Photon and Electron Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, J; Yang, C; Morris, B

    2016-06-15

    Purpose: We will investigate the performance variation of the MapCheck2 detector array with different array calibration and dose calibration pairs from different radiation therapy machines. Methods: A MapCheck2 detector array was calibrated on 3 Elekta accelerators with different energies of photon (6 MV, 10 MV, 15 MV and 18 MV) and electron (6 MeV, 9 MeV, 12 MeV, 15 MeV, 18 MeV and 20 MeV) beams. Dose calibration was conducted against a water phantom measurement following the TG-51 protocol and commissioning data for each accelerator. A 10 cm × 10 cm beam was measured. This measured map was morphed by applying different calibration pairs. The difference was then quantified by comparing the doses and their similarity using gamma analysis with (0.5%, 0 mm) criteria. Profile variation was evaluated on the same dataset with different calibration pairs. The passing rate of an IMRT QA planar dose was calculated using 3 mm and 3% criteria and compared for each calibration pair. Results: In this study, a dose variation up to 0.67% for matched photon and 1.0% for electron beams is observed. Differences in flatness and symmetry can be as high as 1% and 0.7%, respectively. Gamma analysis shows a passing rate ranging from 34% to 85% for the standard 10 × 10 cm field. Conclusion: Our work demonstrated that a customized array calibration and dose calibration for each machine is preferred to fulfill a high-standard patient QA task.
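
    The gamma analysis referenced above combines a dose-difference criterion with a distance-to-agreement (DTA) criterion; a point passes when its gamma value is at most 1. A simplified 1-D, globally normalized version is sketched below (real QA software works on 2-D/3-D dose grids with interpolation):

```python
import math

# Simplified 1-D global gamma analysis: for each reference point, take the
# minimum over evaluated points of the combined dose-difference / DTA metric.

def gamma_1d(ref_dose, ref_x, eval_dose, eval_x, dd=0.03, dta=3.0):
    """dd: dose-difference criterion (fraction of max); dta: distance in mm."""
    d_norm = max(ref_dose)
    gammas = []
    for xr, dr in zip(ref_x, ref_dose):
        g2 = min(((xe - xr) / dta) ** 2 + ((de - dr) / (dd * d_norm)) ** 2
                 for xe, de in zip(eval_x, eval_dose))
        gammas.append(math.sqrt(g2))
    return gammas

def pass_rate(gammas):
    """Percentage of points with gamma <= 1."""
    return 100.0 * sum(g <= 1.0 for g in gammas) / len(gammas)
```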

  15. Regionalisation of a distributed method for flood quantiles estimation: Revaluation of local calibration hypothesis to enhance the spatial structure of the optimised parameter

    NASA Astrophysics Data System (ADS)

    Odry, Jean; Arnaud, Patrick

    2016-04-01

    The SHYREG method (Aubert et al., 2014) associates a stochastic rainfall generator and a rainfall-runoff model to produce rainfall and flood quantiles on a 1 km2 mesh covering the whole French territory. The rainfall generator is based on the description of rainy events by descriptive variables following probability distributions and is characterised by a high stability. This stochastic generator is fully regionalised, and the rainfall-runoff transformation is calibrated with a single parameter. Thanks to the stability of the approach, calibration can be performed against only flood quantiles associated with observed frequencies, which can be extracted from relatively short time series. The aggregation of SHYREG flood quantiles to the catchment scale is performed using an areal reduction factor technique applied uniformly over the whole territory. Past studies demonstrated the accuracy of SHYREG flood quantile estimation for catchments where flow data are available (Arnaud et al., 2015). Nevertheless, the parameter of the rainfall-runoff model is calibrated independently for each target catchment. As a consequence, this parameter plays a corrective role and compensates for approximations and modelling errors, which makes it difficult to identify its proper spatial pattern. It is an inherent objective of the SHYREG approach to be completely regionalised in order to provide a complete and accurate flood quantile database throughout France. Consequently, it appears necessary to identify the model configuration in which the calibrated parameter could be regionalised with acceptable performance. The revaluation of some of the method's hypotheses is a necessary step before the regionalisation. In particular, the inclusion or the modification of the spatial variability of imposed parameters (like production and transfer reservoir size, base flow addition and the quantile aggregation function) should lead to more realistic values of the only calibrated parameter. 
The objective of the work presented here is to develop a SHYREG evaluation scheme focusing on both local and regional performances. Indeed, it is necessary to maintain the accuracy of at-site flood quantile estimation while identifying a configuration leading to a satisfactory spatial pattern of the calibrated parameter. This ability to be regionalised can be appraised by the association of common regionalisation techniques and split-sample validation tests on a set of around 1,500 catchments representing the whole diversity of French physiography. Also, the presence of many nested catchments and a size-based split-sample validation make it possible to assess the relevance of the calibrated parameter's spatial structure inside the largest catchments. The application of this multi-objective evaluation leads to the selection of a version of SHYREG more suitable for regionalisation. References: Arnaud, P., Cantet, P., Aubert, Y., 2015. Relevance of an at-site flood frequency analysis method for extreme events based on stochastic simulation of hourly rainfall. Hydrological Sciences Journal: in press. DOI:10.1080/02626667.2014.965174 Aubert, Y., Arnaud, P., Ribstein, P., Fine, J.A., 2014. The SHYREG flow method-application to 1605 basins in metropolitan France. Hydrological Sciences Journal, 59(5): 993-1005. DOI:10.1080/02626667.2014.902061

  16. Chemometrics resolution and quantification power evaluation: Application on pharmaceutical quaternary mixture of Paracetamol, Guaifenesin, Phenylephrine and p-aminophenol.

    PubMed

    Yehia, Ali M; Mohamed, Heba M

    2016-01-05

    Three advanced chemometric-assisted spectrophotometric methods, namely Concentration Residuals Augmented Classical Least Squares (CRACLS), Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS) and Principal Component Analysis-Artificial Neural Networks (PCA-ANN), were developed, validated and benchmarked against PLS calibration to resolve the severely overlapped spectra and simultaneously determine Paracetamol (PAR), Guaifenesin (GUA) and Phenylephrine (PHE) in their ternary mixture and in the presence of p-aminophenol (AP), the main degradation product and synthesis impurity of Paracetamol. The analytical performance of the proposed methods was described by percentage recoveries, root mean square error of calibration and standard error of prediction. The four multivariate calibration methods could be used directly without any preliminary separation step and were successfully applied for pharmaceutical formulation analysis, showing no excipient interference. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. A Consistency Evaluation and Calibration Method for Piezoelectric Transmitters

    PubMed Central

    Zhang, Kai; Tan, Baohai; Liu, Xianping

    2017-01-01

    Array transducer and transducer combination technologies are evolving rapidly. When adopting transmitter combination technologies, the parameter consistency between individual transmitters is extremely important because it directly determines the combined output. This study presents a consistency evaluation and calibration method for piezoelectric transmitters using impedance analyzers. Firstly, the electronic parameters of transmitters that can be measured by impedance analyzers are introduced. The variations in transmitter acoustic energy caused by differences in these parameters are then analyzed and verified, and thereafter transmitter consistency is evaluated. Lastly, based on the evaluations, consistency can be calibrated by changing the corresponding excitation voltage. Acoustic experiments show that this method accurately evaluates and calibrates transducer consistency, and is easy to realize. PMID:28452947
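
    The final calibration step, matching transmitter outputs by adjusting excitation voltage, can be sketched as below under the simplifying (assumed) proportionality of acoustic output to excitation voltage; the paper instead derives the adjustment from measured electrical parameters.

```python
# Toy excitation-voltage calibration: scale each transmitter's drive voltage
# so that all outputs match the group mean output. Proportionality of output
# to voltage is an illustrative assumption, not the paper's model.

def calibration_voltages(measured_outputs, v_nominal):
    """Per-transmitter excitation voltages equalizing outputs at the mean."""
    target = sum(measured_outputs) / len(measured_outputs)
    return [v_nominal * target / out for out in measured_outputs]
```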

  18. Numerical simulation of groundwater flow in Dar es Salaam Coastal Plain (Tanzania)

    NASA Astrophysics Data System (ADS)

    Luciani, Giulia; Sappa, Giuseppe; Cella, Antonella

    2016-04-01

    This paper presents the results of a groundwater modeling study of the coastal aquifer of Dar es Salaam (Tanzania). Dar es Salaam is one of the fastest-growing coastal cities in Sub-Saharan Africa, with more than 4 million inhabitants and a population growth rate of about 8 per cent per year. The city faces periodic water shortages due to the lack of an adequate water supply network. These two factors have determined, in the last ten years, an increasing demand for groundwater exploitation, met by a large number of private wells drilled to satisfy human demand. A steady-state three-dimensional groundwater model has been set up with the MODFLOW code and calibrated with the UCODE code for inverse modeling. The aim of the model was to carry out a characterization of the groundwater flow system in the Dar es Salaam coastal plain. The inputs applied to the model included the net recharge rate, calculated from time series of precipitation data (1961-2012), estimates of average groundwater extraction, and estimates of groundwater recharge coming from zones outside the study area. Parametrization of the hydraulic conductivities was carried out with reference to the main geological features of the study area, based on available literature data and information. Boundary conditions were assigned based on hydrogeological boundaries. The conceptual model was defined in subsequent steps, which added some hydrogeological features and excluded others. Calibration was performed with UCODE 2014, using 76 measurements of hydraulic head taken in 2012 and referring to the same season. Data were weighted on the basis of the expected errors. Sensitivity analysis was performed during calibration and made it possible to identify which parameters could be estimated and which data could support parameter estimation. Calibration was evaluated based on statistical indices, maps of error distribution and a test of independence of the residuals. 
Further model analysis was performed after calibration, to test model performance under a range of variations of input variables.

  19. Online examiner calibration across specialties.

    PubMed

    Sturman, Nancy; Wong, Wai Yee; Turner, Jane; Allan, Chris

    2017-09-26

    Integrating undergraduate medical curricula horizontally across clinical medical specialties may be a more patient-centred and learner-centred approach than rotating students through specialty-specific teaching and assessment, but requires some interspecialty calibration of examiner judgements. Our aim was to evaluate the acceptability and feasibility of an online pilot of interdisciplinary examiner calibration. Clinical teachers were invited to rate video-recorded student objective structured clinical examination (OSCE) performances and join subsequent online discussions using the university's learning management system. Post-project survey free-text and Likert-scale participant responses were analysed to evaluate the acceptability of the pilot and to identify recommendations for improvement. Although 68 clinicians were recruited to participate, and there were 1599 hits on recordings and discussion threads, only 25 clinical teachers rated at least one student performance, and 18 posted at least one comment. Participants, including rural doctors, appeared to value the opportunity for interdisciplinary rating calibration and discussion. Although the asynchronous online format had advantages, especially for rural doctors, participants reported considerable IT challenges. Our findings suggest that fair clinical assessment is important to both medical students and clinical teachers. Interspecialty discussions about assessment may have the potential to enrich intraspecialty perspectives, enhance interspecialty engagement and collaboration, and improve the quality of clinical teacher assessment. Better alignment of university and hospital systems, a face-to-face component and other modifications may have enhanced clinician engagement with this project. 
Findings suggest that specialty assessment cultures and content expertise may not be barriers to pursuing more integrated approaches to assessment. © 2017 John Wiley & Sons Ltd and The Association for the Study of Medical Education.

  20. IMRT plan verification with EBT2 and EBT3 films compared to PTW 2D-ARRAY seven29

    NASA Astrophysics Data System (ADS)

    Hanušová, Tereza; Horáková, Ivana; Koniarová, Irena

    2017-11-01

    The aim of this study was to compare dosimetry with Gafchromic EBT2 and EBT3 films to the ion chamber array PTW seven29 in terms of their performance in clinical IMRT plan verification. A methodology for film processing and calibration was developed. Calibration curves were obtained in MATLAB and in FilmQA Pro. The best calibration curve was then used to calibrate EBT2 and EBT3 films for IMRT plan verification measurements. Films were placed in several coronal planes in an RW3 slab phantom and irradiated with a clinical IMRT plan for prostate and lymph nodes using 18 MV photon beams. Individual fields were tested and irradiated with the gantry at 0°. Results were evaluated using gamma analysis with 3%/3 mm criteria in OmniPro I'mRT version 1.7. The same measurements were performed with the ion chamber array PTW seven29 in RW3 slabs (different depths) and in the OCTAVIUS II phantom (isocenter depth only; both original and nominal gantry angles). Results were evaluated in PTW VeriSoft version 3.1 using the same criteria. Altogether, 45 IMRT planes were tested with film and 25 planes with the PTW 2D-ARRAY seven29. Film measurements showed different results from ion chamber array measurements. With the PTW 2D-ARRAY seven29, worse results were obtained when the detector was placed in the OCTAVIUS phantom than in the RW3 slab phantom, and the worst pass rates were seen for rotational measurements. EBT2 films showed inconsistent results, which could differ significantly between planes of a single field. EBT3 films seemed to give the best results of all the tested configurations.

  1. Comparison of proton therapy treatment planning for head tumors with a pencil beam algorithm on dual and single energy CT images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hudobivnik, Nace; Dedes, George; Parodi, Katia

    2016-01-15

Purpose: Dual energy CT (DECT) has recently been proposed as an improvement over single energy CT (SECT) for stopping power ratio (SPR) estimation for proton therapy treatment planning (TP), thereby potentially reducing range uncertainties. Published studies have investigated phantoms. This study aims at performing proton therapy TP on SECT and DECT head images of the same patients and at evaluating whether the reported improved DECT SPR accuracy translates into clinically relevant range shifts in clinical head treatment scenarios. Methods: Two phantoms were scanned on a last-generation dual-source DECT scanner at 90 and 150 kVp with Sn filtration. The first phantom (Gammex phantom) was used to calibrate the scanner in terms of SPR, while the second served as evaluation (CIRS phantom). DECT images of five head trauma patients were used as surrogate cancer patient images for TP of proton therapy. Pencil beam algorithm based TP was performed on SECT and DECT images, and the dose distributions corresponding to the optimized proton plans were calculated with a Monte Carlo (MC) simulation platform using the same patient geometry for both plans, obtained from conversion of the 150 kVp images. Range shifts between the MC dose distributions from SECT and DECT plans were assessed using 2D range maps. Results: SPR root mean square errors (RMSEs) for the inserts of the Gammex phantom were 1.9%, 1.8%, and 1.2% for SECT phantom calibration (SECT_phantom), SECT stoichiometric calibration (SECT_stoichiometric), and DECT calibration, respectively. For the CIRS phantom, these were 3.6%, 1.6%, and 1.0%. When investigating patient anatomy, group median range differences of up to −1.4% were observed for head cases when comparing SECT_stoichiometric with DECT. For this calibration the 25th and 75th percentiles varied from −2% to 0% across the five patients.
The group median was found to be limited to 0.5% when using SECT_phantom, and the 25th and 75th percentiles varied from −1% to 2%. Conclusions: Proton therapy TP using a pencil beam algorithm and DECT images was performed for the first time. Given that the DECT accuracy as evaluated by two phantoms was 1.2% and 1.0% RMSE, it is questionable whether the range differences reported here are significant.
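The SPR accuracy figures quoted above are root-mean-square errors of relative deviations over phantom inserts of known composition. A minimal sketch of that metric (function name and example values are illustrative, not from the paper):

```python
import math

def rmse_percent(true_spr, est_spr):
    """Root-mean-square of relative SPR errors, in percent, as used to
    compare calibration approaches against known phantom inserts."""
    rel = [(e - t) / t for t, e in zip(true_spr, est_spr)]
    return 100.0 * math.sqrt(sum(r * r for r in rel) / len(rel))
```

For instance, estimates of 1.01 and 0.99 against true SPRs of 1.0 give a 1.0% RMSE.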

  2. Transient Flow through an Unsaturated Levee Embankment during the 2011 Mississippi River Flood

    NASA Astrophysics Data System (ADS)

    Jafari, N.; Stark, T.; Vahedifard, F.; Cadigan, J.

    2017-12-01

The Mississippi River and its tributaries drain approximately 3.23 million km2 (1.25 million mi2), the equivalent of 41% of the contiguous United States. Approximately 2,600 km (1,600 miles) of earthen levees presently protect major urban cities and agricultural land against the periodic Mississippi River floods within the Lower Mississippi River Valley. The 2011 flood severely stressed the levees and highlighted the need to evaluate the behavior of levee embankments during high water levels. The performance of earthen levees is complex because of uncertainties in construction materials, antecedent moisture contents, and hydraulic properties, and a lack of field monitoring. In particular, calibration of the unsaturated and saturated soil properties of levee embankment and foundation layers, along with evaluation of the phreatic surface during high river stages, is lacking. Due to the formation of sand boils at the Duncan Point Levee in Baton Rouge, LA during the 2011 flood event, a reconnaissance survey was conducted to collect pore-water pressures in the sand foundation using piezometers and to identify the phreatic surface at the peak river level. Transient seepage analyses were performed to calibrate the foundation and levee embankment material properties using the collected field data. With this calibrated levee model, numerical experiments were conducted to characterize the effects of rainfall intensity and duration, progression of the phreatic surface, and seasonal climate variability prior to floods on the performance of the levee embankment. For example, an elevated phreatic surface from river floods is maintained for several months and can be compounded with rainfall to lead to slope instability.

  3. Performance analysis of multiple Indoor Positioning Systems in a healthcare environment.

    PubMed

    Van Haute, Tom; De Poorter, Eli; Crombez, Pieter; Lemic, Filip; Handziski, Vlado; Wirström, Niklas; Wolisz, Adam; Voigt, Thiemo; Moerman, Ingrid

    2016-02-03

The combination of an aging population and nursing staff shortages implies the need for more advanced systems in the healthcare industry. Many key enablers for the optimization of healthcare systems require provisioning of location awareness for patients (e.g. with dementia), nurses, doctors, assets, etc. Therefore, Indoor Positioning Systems (IPSs) will be indispensable in healthcare systems. However, although many IPSs have been proposed in the literature, most have been evaluated in non-representative environments such as office buildings rather than in a hospital. To remedy this, the paper evaluates the performance of existing IPSs in an operational modern healthcare environment: the "Sint-Jozefs kliniek Izegem" hospital in Belgium. The evaluation (data collection and data processing) is executed using a standardized methodology and assesses the point accuracy, room accuracy and latency of multiple IPSs. To evaluate the solutions, the position of a stationary device was requested at 73 evaluation locations. By using the same evaluation locations for all IPSs, the performance of all systems could be compared objectively. Several trends can be identified, such as the fact that Wi-Fi based fingerprinting solutions have the best accuracy (point accuracy of 1.21 m and room accuracy of 98%), although they require calibration before use and need 5.43 s to estimate the location. On the other hand, proximity based solutions (based on sensor nodes) are significantly cheaper to install, do not require calibration and still obtain acceptable room accuracy results. As a conclusion of this paper, Wi-Fi based solutions have the most potential for an indoor positioning service when accuracy is the most important metric. Applying the fingerprinting approach with an anchor installed in every two rooms is the preferred solution for a hospital environment.
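A Wi-Fi fingerprinting solution of the kind that scored best here works in two phases: a calibration survey builds a database of received-signal-strength (RSSI) vectors per location, and positioning matches a live observation against that database. A minimal nearest-neighbor sketch; the access-point count, RSSI values, and room labels are invented, and the systems evaluated in the paper are considerably more sophisticated:

```python
import math

# Hypothetical fingerprint database built during the calibration survey:
# location label -> mean RSSI (dBm) observed from three access points.
FINGERPRINTS = {
    "room_101": (-40.0, -70.0, -80.0),
    "room_102": (-70.0, -45.0, -75.0),
    "room_103": (-80.0, -72.0, -42.0),
}

def locate(rssi, k=1):
    """Return the k fingerprint locations closest to the observed RSSI
    vector in signal space (Euclidean distance)."""
    ranked = sorted(FINGERPRINTS,
                    key=lambda loc: math.dist(rssi, FINGERPRINTS[loc]))
    return ranked[:k]
```

The calibration burden the abstract mentions is exactly the cost of building `FINGERPRINTS`, which proximity-based solutions avoid.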

  4. 2006 Interferometry Imaging Beauty Contest

    NASA Technical Reports Server (NTRS)

    Lawson, Peter R.; Cotton, William D.; Hummel, Christian A.; Ireland, Michael; Monnier, John D.; Thiebaut, Eric; Rengaswamy, Sridharan; Baron, Fabien; Young, John S.; Kraus, Stefan; et al.

    2006-01-01

    We present a formal comparison of the performance of algorithms used for synthesis imaging with optical/infrared long-baseline interferometers. Five different algorithms are evaluated based on their performance with simulated test data. Each set of test data is formatted in the OI-FITS format. The data are calibrated power spectra and bispectra measured with an array intended to be typical of existing imaging interferometers. The strengths and limitations of each algorithm are discussed.

  5. The Calibration and Use of Capacitance Sensors to Monitor Stem Water Content in Trees.

    PubMed

    Matheny, Ashley M; Garrity, Steven R; Bohrer, Gil

    2017-12-27

Water transport and storage through the soil-plant-atmosphere continuum is critical to the terrestrial water cycle, and has become a major research focus area. Biomass capacitance plays an integral role in the avoidance of hydraulic impairment to transpiration. However, high temporal resolution measurements of dynamic changes in the hydraulic capacitance of large trees are rare. Here, we present procedures for the calibration and use of capacitance sensors, typically used to monitor soil water content, to measure the volumetric water content in trees in the field. Frequency domain reflectometry-style observations are sensitive to the density of the media being studied. Therefore, it is necessary to perform species-specific calibrations to convert from the sensor-reported values of dielectric permittivity to volumetric water content. Calibration is performed on a harvested branch or stem cut into segments that are dried or re-hydrated to produce a full range of water contents used to generate a best-fit regression with sensor observations. Sensors are inserted into calibration segments or installed in trees after pre-drilling holes to a tolerance fit using a fabricated template to ensure proper drill alignment. Special care is taken to ensure that sensor tines make good contact with the surrounding media, while allowing them to be inserted without excessive force. Volumetric water content dynamics observed via the presented methodology align with sap flow measurements recorded using thermal dissipation techniques and environmental forcing data. Biomass water content data can be used to observe the onset of water stress, drought response and recovery, and have the potential to be applied to the calibration and evaluation of new plant-level hydrodynamics models, as well as to the partitioning of remotely sensed moisture products into above- and belowground components.
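The species-specific calibration described above reduces, in its simplest form, to an ordinary least-squares fit from sensor-reported dielectric permittivity to independently determined volumetric water content (VWC) on the dried/re-hydrated segments. A sketch with invented numbers (the paper's calibrations use a full dry-down series per species, and the best-fit form need not be linear):

```python
def fit_calibration(permittivity, vwc):
    """Ordinary least-squares fit vwc = a * permittivity + b from paired
    measurements on harvested stem segments (values hypothetical)."""
    n = len(permittivity)
    mx = sum(permittivity) / n
    my = sum(vwc) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(permittivity, vwc))
    sxx = sum((x - mx) ** 2 for x in permittivity)
    a = sxy / sxx          # slope of the calibration line
    b = my - a * mx        # intercept
    return a, b

def to_vwc(raw_permittivity, a, b):
    """Convert a field sensor reading to volumetric water content."""
    return a * raw_permittivity + b
```

Once `a` and `b` are fitted for a species, every subsequent field reading from a sensor installed in a tree of that species is converted with `to_vwc`.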

  6. NREL Improves Building Energy Simulation Programs Through Diagnostic Testing (Fact Sheet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2012-01-01

This technical highlight describes NREL research to develop Building Energy Simulation Test for Existing Homes (BESTEST-EX) to increase the quality and accuracy of energy analysis tools for the building retrofit market. Researchers at the National Renewable Energy Laboratory (NREL) have developed a new test procedure to increase the quality and accuracy of energy analysis tools for the building retrofit market. The Building Energy Simulation Test for Existing Homes (BESTEST-EX) is a test procedure that enables software developers to evaluate the performance of their audit tools in modeling energy use and savings in existing homes when utility bills are available for model calibration. Similar to NREL's previous energy analysis tests, such as HERS BESTEST and other BESTEST suites included in ANSI/ASHRAE Standard 140, BESTEST-EX compares software simulation findings to reference results generated with state-of-the-art simulation tools such as EnergyPlus, SUNREL, and DOE-2.1E. The BESTEST-EX methodology: (1) Tests software predictions of retrofit energy savings in existing homes; (2) Ensures building physics calculations and utility bill calibration procedures perform to a minimum standard; and (3) Quantifies impacts of uncertainties in input audit data and occupant behavior. BESTEST-EX includes building physics and utility bill calibration test cases. The diagram illustrates the utility bill calibration test cases. Participants are given input ranges and synthetic utility bills. Software tools use the utility bills to calibrate key model inputs and predict energy savings for the retrofit cases. Participant energy savings predictions using calibrated models are compared to NREL predictions using state-of-the-art building energy simulation programs.

  7. Preparation for implementation of the mechanistic-empirical pavement design guide in Michigan, part 3 : local calibration and validation of the pavement-ME performance models.

    DOT National Transportation Integrated Search

    2014-11-01

    The main objective of Part 3 was to locally calibrate and validate the mechanistic-empirical pavement : design guide (Pavement-ME) performance models to Michigan conditions. The local calibration of the : performance models in the Pavement-ME is a ch...

  8. Evaluating fossil calibrations for dating phylogenies in light of rates of molecular evolution: a comparison of three approaches.

    PubMed

    Lukoschek, Vimoksalehi; Scott Keogh, J; Avise, John C

    2012-01-01

    Evolutionary and biogeographic studies increasingly rely on calibrated molecular clocks to date key events. Although there has been significant recent progress in development of the techniques used for molecular dating, many issues remain. In particular, controversies abound over the appropriate use and placement of fossils for calibrating molecular clocks. Several methods have been proposed for evaluating candidate fossils; however, few studies have compared the results obtained by different approaches. Moreover, no previous study has incorporated the effects of nucleotide saturation from different data types in the evaluation of candidate fossils. In order to address these issues, we compared three approaches for evaluating fossil calibrations: the single-fossil cross-validation method of Near, Meylan, and Shaffer (2005. Assessing concordance of fossil calibration points in molecular clock studies: an example using turtles. Am. Nat. 165:137-146), the empirical fossil coverage method of Marshall (2008. A simple method for bracketing absolute divergence times on molecular phylogenies using multiple fossil calibration points. Am. Nat. 171:726-742), and the Bayesian multicalibration method of Sanders and Lee (2007. Evaluating molecular clock calibrations using Bayesian analyses with soft and hard bounds. Biol. Lett. 3:275-279) and explicitly incorporate the effects of data type (nuclear vs. mitochondrial DNA) for identifying the most reliable or congruent fossil calibrations. We used advanced (Caenophidian) snakes as a case study; however, our results are applicable to any taxonomic group with multiple candidate fossils, provided appropriate taxon sampling and sufficient molecular sequence data are available. We found that data type strongly influenced which fossil calibrations were identified as outliers, regardless of which method was used. 
Despite the use of complex partitioned models of sequence evolution and multiple calibrations throughout the tree, saturation severely compressed basal branch lengths obtained from mitochondrial DNA compared with nuclear DNA. The effects of mitochondrial saturation were not ameliorated by analyzing a combined nuclear and mitochondrial data set. Although removing the third codon positions from the mitochondrial coding regions did not ameliorate saturation effects in the single-fossil cross-validations, it did in the Bayesian multicalibration analyses. Saturation significantly influenced the fossils that were selected as most reliable for all three methods evaluated. Our findings highlight the need to critically evaluate the fossils selected by data with different rates of nucleotide substitution and how data with different evolutionary rates affect the results of each method for evaluating fossils. Our empirical evaluation demonstrates that the advantages of using multiple independent fossil calibrations significantly outweigh any disadvantages.

  9. Wind Tunnel Balance Calibration: Are 1,000,000 Data Points Enough?

    NASA Technical Reports Server (NTRS)

    Rhew, Ray D.; Parker, Peter A.

    2016-01-01

Measurement systems are typically calibrated based on standard practices established by a metrology standards laboratory, for example the National Institute of Standards and Technology (NIST), or dictated by an organization's metrology manual. Therefore, the calibration is designed and executed according to an established procedure. However, for many aerodynamic research measurement systems a universally accepted, traceable standard approach does not exist. Therefore, a strategy for how to develop a calibration protocol is left to the developer or user to define based on experience and recommended practice in their respective industry. Wind tunnel balances are one such measurement system. Many different calibration systems, load schedules and procedures have been developed for balances, with little consensus on a recommended approach. Especially lacking is guidance on the number of calibration data points needed. Regrettably, the number of data points tends to be correlated with the perceived quality of the calibration. Often, the number of data points is associated with one's ability to generate the data rather than with a defined need in support of measurement objectives. Hence the title of this paper, conceived to challenge recent observations in the wind tunnel balance community that show an ever-increasing desire for more data points per calibration, absent guidance for determining when there are enough. This paper presents fundamental concepts and theory to aid in the development of calibration procedures for wind tunnel balances and provides a framework that is generally applicable to the characterization and calibration of other measurement systems. Questions that need to be answered include: What constitutes an adequate calibration? How much data are needed in the calibration? How good is the calibration?
This paper will assist a practitioner in answering these questions by presenting an underlying theory on how to evaluate a calibration based on objective measures. This will enable the developer and user to design calibrations with quantified performance in terms of their capability to meet the user's objectives and a basis for comparing existing calibrations that may have been developed in an ad-hoc manner.
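One objective measure in the spirit of this paper is to track how the uncertainty of the fitted calibration coefficients shrinks as load points are added: for a linear calibration the standard error falls roughly as 1/√N, so returns diminish and "more points" stops buying accuracy. A Monte Carlo sketch for a hypothetical single-component linear balance (the true slope, noise level, and full-scale load are all invented constants):

```python
import math
import random

def slope_std_error(n_points, noise_sd=0.5, full_scale=100.0, trials=200):
    """Monte Carlo estimate of the calibration-slope standard error as a
    function of the number of evenly spaced calibration loads."""
    rng = random.Random(0)       # fixed seed for repeatability
    true_slope = 2.0             # hypothetical counts-per-load sensitivity
    slopes = []
    for _ in range(trials):
        xs = [full_scale * i / (n_points - 1) for i in range(n_points)]
        ys = [true_slope * x + rng.gauss(0.0, noise_sd) for x in xs]
        mx = sum(xs) / n_points
        my = sum(ys) / n_points
        sxx = sum((x - mx) ** 2 for x in xs)
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        slopes.append(sxy / sxx)  # OLS slope for this simulated calibration
    mean = sum(slopes) / trials
    return math.sqrt(sum((s - mean) ** 2 for s in slopes) / (trials - 1))
```

Plotting this quantity against `n_points` shows where additional loads stop reducing uncertainty appreciably, which is the kind of quantified stopping criterion the paper argues for instead of simply generating as much data as the rig allows.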

  10. DEVELOPMENT OF GUIDELINES FOR CALIBRATING, VALIDATING, AND EVALUATING HYDROLOGIC AND WATER QUALITY MODELS: ASABE ENGINEERING PRACTICE 621

    USDA-ARS?s Scientific Manuscript database

    Information to support application of hydrologic and water quality (H/WQ) models abounds, yet modelers commonly use arbitrary, ad hoc methods to conduct, document, and report model calibration, validation, and evaluation. Consistent methods are needed to improve model calibration, validation, and e...

  11. A machine learning calibration model using random forests to improve sensor performance for lower-cost air quality monitoring

    NASA Astrophysics Data System (ADS)

    Zimmerman, Naomi; Presto, Albert A.; Kumar, Sriniwasa P. N.; Gu, Jason; Hauryliuk, Aliaksei; Robinson, Ellis S.; Robinson, Allen L.; Subramanian, R.

    2018-01-01

    Low-cost sensing strategies hold the promise of denser air quality monitoring networks, which could significantly improve our understanding of personal air pollution exposure. Additionally, low-cost air quality sensors could be deployed to areas where limited monitoring exists. However, low-cost sensors are frequently sensitive to environmental conditions and pollutant cross-sensitivities, which have historically been poorly addressed by laboratory calibrations, limiting their utility for monitoring. In this study, we investigated different calibration models for the Real-time Affordable Multi-Pollutant (RAMP) sensor package, which measures CO, NO2, O3, and CO2. We explored three methods: (1) laboratory univariate linear regression, (2) empirical multiple linear regression, and (3) machine-learning-based calibration models using random forests (RF). Calibration models were developed for 16-19 RAMP monitors (varied by pollutant) using training and testing windows spanning August 2016 through February 2017 in Pittsburgh, PA, US. The random forest models matched (CO) or significantly outperformed (NO2, CO2, O3) the other calibration models, and their accuracy and precision were robust over time for testing windows of up to 16 weeks. Following calibration, average mean absolute error on the testing data set from the random forest models was 38 ppb for CO (14 % relative error), 10 ppm for CO2 (2 % relative error), 3.5 ppb for NO2 (29 % relative error), and 3.4 ppb for O3 (15 % relative error), and Pearson r versus the reference monitors exceeded 0.8 for most units. Model performance is explored in detail, including a quantification of model variable importance, accuracy across different concentration ranges, and performance in a range of monitoring contexts including the National Ambient Air Quality Standards (NAAQS) and the US EPA Air Sensors Guidebook recommendations of minimum data quality for personal exposure measurement. 
A key strength of the RF approach is that it accounts for pollutant cross-sensitivities. This highlights the importance of developing multipollutant sensor packages (as opposed to single-pollutant monitors); we determined this is especially critical for NO2 and CO2. The evaluation reveals that only the RF-calibrated sensors meet the US EPA Air Sensors Guidebook recommendations of minimum data quality for personal exposure measurement. We also demonstrate that the RF-model-calibrated sensors could detect differences in NO2 concentrations between a near-road site and a suburban site less than 1.5 km away. From this study, we conclude that combining RF models with carefully controlled state-of-the-art multipollutant sensor packages as in the RAMP monitors appears to be a very promising approach to address the poor performance that has plagued low-cost air quality sensors.
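Of the three calibration models compared, the empirical multiple linear regression (method 2) is the easiest to sketch: the reference concentration is regressed on the raw sensor signal plus covariates such as temperature or a co-located sensor's signal, which is how cross-sensitivities enter linearly. A dependency-free least-squares sketch via the normal equations (the random forest models simply replace this linear form with an ensemble of regression trees, e.g. scikit-learn's RandomForestRegressor; all data here are invented):

```python
def fit_mlr(X, y):
    """Least-squares fit y ≈ b0 + b1*x1 + ... by solving the normal
    equations (A^T A) beta = A^T y with Gaussian elimination."""
    n, p = len(X), len(X[0]) + 1
    A = [[1.0] + list(row) for row in X]          # prepend intercept column
    ata = [[sum(A[k][i] * A[k][j] for k in range(n)) for j in range(p)]
           for i in range(p)]
    aty = [sum(A[k][i] * y[k] for k in range(n)) for i in range(p)]
    for col in range(p):                           # partial pivoting
        piv = max(range(col, p), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, p):
            f = ata[r][col] / ata[col][col]
            for c in range(col, p):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    beta = [0.0] * p
    for r in range(p - 1, -1, -1):                 # back substitution
        s = aty[r] - sum(ata[r][c] * beta[c] for c in range(r + 1, p))
        beta[r] = s / ata[r][r]
    return beta
```

Each row of `X` would hold, say, a raw NO2 signal plus an O3 signal and temperature; the fitted coefficients then correct the cross-sensitivities that a univariate laboratory calibration cannot.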

  12. Ground truth and benchmarks for performance evaluation

    NASA Astrophysics Data System (ADS)

    Takeuchi, Ayako; Shneier, Michael; Hong, Tsai Hong; Chang, Tommy; Scrapper, Christopher; Cheok, Geraldine S.

    2003-09-01

Progress in algorithm development and transfer of results to practical applications such as military robotics requires the setup of standard tasks and of standard qualitative and quantitative measurements for performance evaluation and validation. Although the evaluation and validation of algorithms have been discussed for over a decade, the research community still faces a lack of well-defined and standardized methodology. The fundamental problems include a lack of quantifiable measures of performance, a lack of data from state-of-the-art sensors in calibrated real-world environments, and a lack of facilities for conducting realistic experiments. In this research, we propose three methods for creating ground truth databases and benchmarks using multiple sensors. The databases and benchmarks will provide researchers with high quality data from suites of sensors operating in complex environments representing real problems of great relevance to the development of autonomous driving systems. At NIST, we have prototyped a High Mobility Multi-purpose Wheeled Vehicle (HMMWV) system with a suite of sensors including a Riegl ladar, GDRS ladar, stereo CCD, several color cameras, Global Positioning System (GPS), Inertial Navigation System (INS), pan/tilt encoders, and odometry. All sensors are calibrated with respect to each other in space and time. This allows a database of features and terrain elevation to be built. Ground truth for each sensor can then be extracted from the database. The main goal of this research is to provide ground truth databases for researchers and engineers to evaluate algorithms for effectiveness, efficiency, reliability, and robustness, thus advancing the development of algorithms.

  13. Computationally inexpensive identification of noninformative model parameters by sequential screening

    NASA Astrophysics Data System (ADS)

    Cuntz, Matthias; Mai, Juliane; Zink, Matthias; Thober, Stephan; Kumar, Rohini; Schäfer, David; Schrön, Martin; Craven, John; Rakovec, Oldrich; Spieler, Diana; Prykhodko, Vladyslav; Dalmasso, Giovanni; Musuuza, Jude; Langenberg, Ben; Attinger, Sabine; Samaniego, Luis

    2015-08-01

    Environmental models tend to require increasing computational time and resources as physical process descriptions are improved or new descriptions are incorporated. Many-query applications such as sensitivity analysis or model calibration usually require a large number of model evaluations leading to high computational demand. This often limits the feasibility of rigorous analyses. Here we present a fully automated sequential screening method that selects only informative parameters for a given model output. The method requires a number of model evaluations that is approximately 10 times the number of model parameters. It was tested using the mesoscale hydrologic model mHM in three hydrologically unique European river catchments. It identified around 20 informative parameters out of 52, with different informative parameters in each catchment. The screening method was evaluated with subsequent analyses using all 52 as well as only the informative parameters. Subsequent Sobol's global sensitivity analysis led to almost identical results yet required 40% fewer model evaluations after screening. mHM was calibrated with all and with only informative parameters in the three catchments. Model performances for daily discharge were equally high in both cases with Nash-Sutcliffe efficiencies above 0.82. Calibration using only the informative parameters needed just one third of the number of model evaluations. The universality of the sequential screening method was demonstrated using several general test functions from the literature. We therefore recommend the use of the computationally inexpensive sequential screening method prior to rigorous analyses on complex environmental models.
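The sequential screening idea above is related to elementary-effects (Morris) screening: perturb one parameter at a time from random base points and flag parameters whose effect on the output is non-negligible, at a cost of roughly (p + 1) model runs per trajectory, consistent with the "about 10 times the number of parameters" budget. The following is a simplified stand-in, not the authors' exact algorithm; the threshold, step size, and trajectory count are illustrative:

```python
import random

def screen_parameters(model, lower, upper, n_traj=10, threshold=0.01):
    """Flag informative parameters by one-at-a-time perturbations from
    random base points; returns indices whose mean |effect| exceeds a
    fraction of the largest mean effect."""
    rng = random.Random(42)
    p = len(lower)
    effects = [0.0] * p
    for _ in range(n_traj):
        x = [rng.uniform(lo, hi) for lo, hi in zip(lower, upper)]
        base = model(x)
        for i in range(p):
            step = 0.1 * (upper[i] - lower[i])
            xi = list(x)
            xi[i] = min(upper[i], xi[i] + step)   # perturb parameter i only
            effects[i] += abs(model(xi) - base) / n_traj
    span = max(effects) or 1.0
    return [i for i, e in enumerate(effects) if e / span > threshold]
```

Subsequent calibration or Sobol analysis would then be run over only the returned indices, which is where the reported savings in model evaluations come from.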

  14. Computationally inexpensive identification of noninformative model parameters by sequential screening

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Cuntz, Matthias; Zink, Matthias; Thober, Stephan; Kumar, Rohini; Schäfer, David; Schrön, Martin; Craven, John; Rakovec, Oldrich; Spieler, Diana; Prykhodko, Vladyslav; Dalmasso, Giovanni; Musuuza, Jude; Langenberg, Ben; Attinger, Sabine; Samaniego, Luis

    2016-04-01

    Environmental models tend to require increasing computational time and resources as physical process descriptions are improved or new descriptions are incorporated. Many-query applications such as sensitivity analysis or model calibration usually require a large number of model evaluations leading to high computational demand. This often limits the feasibility of rigorous analyses. Here we present a fully automated sequential screening method that selects only informative parameters for a given model output. The method requires a number of model evaluations that is approximately 10 times the number of model parameters. It was tested using the mesoscale hydrologic model mHM in three hydrologically unique European river catchments. It identified around 20 informative parameters out of 52, with different informative parameters in each catchment. The screening method was evaluated with subsequent analyses using all 52 as well as only the informative parameters. Subsequent Sobol's global sensitivity analysis led to almost identical results yet required 40% fewer model evaluations after screening. mHM was calibrated with all and with only informative parameters in the three catchments. Model performances for daily discharge were equally high in both cases with Nash-Sutcliffe efficiencies above 0.82. Calibration using only the informative parameters needed just one third of the number of model evaluations. The universality of the sequential screening method was demonstrated using several general test functions from the literature. We therefore recommend the use of the computationally inexpensive sequential screening method prior to rigorous analyses on complex environmental models.

  15. Real-time Imaging Orientation Determination System to Verify Imaging Polarization Navigation Algorithm

    PubMed Central

    Lu, Hao; Zhao, Kaichun; Wang, Xiaochu; You, Zheng; Huang, Kaoli

    2016-01-01

Bio-inspired imaging polarization navigation, which can provide navigation information by sensing polarization information, has advantages of high precision and anti-interference over polarization navigation sensors that use photodiodes. Although many types of imaging polarimeters exist, they may not be suitable for research on imaging polarization navigation algorithms. To verify the algorithm, a real-time imaging orientation determination system was designed and implemented. Essential calibration procedures for this type of system, comprising camera parameter calibration and calibration of complementary metal oxide semiconductor (CMOS) inconsistency, were discussed, designed, and implemented. Calibration results were used to undistort and rectify the multi-camera system. An orientation determination experiment was conducted. The results indicated that the system could acquire and compute polarized skylight images after the calibrations and resolve orientation with the algorithm in real time. An orientation determination algorithm based on image processing was tested on the system, and its performance and properties were evaluated. The rate of the algorithm was over 1 Hz, the error was over 0.313°, and the population standard deviation was 0.148° without any data filter. PMID:26805851

  16. Design and Calibration of the X-33 Flush Airdata Sensing (FADS) System

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A.; Cobleigh, Brent R.; Haering, Edward A.

    1998-01-01

This paper presents the design of the X-33 Flush Airdata Sensing (FADS) system. The X-33 FADS uses a matrix of pressure orifices on the vehicle nose to estimate airdata parameters. The system is designed with dual-redundant measurement hardware, which produces two independent measurement paths. Airdata parameters that correspond to the measurement path with the minimum fit error are selected as the output values. This method allows a single sensor failure to occur with minimal degradation of system performance. The paper shows the X-33 FADS architecture, derives the estimating algorithms, and presents a mathematical analysis of FADS system stability. Preliminary aerodynamic calibrations are also presented here. The calibration parameters, the position error coefficient (epsilon), and flow correction terms for the angle of attack (delta alpha) and angle of sideslip (delta beta) are derived from wind tunnel data. Statistical accuracy of the calibration is evaluated by comparing the wind tunnel reference conditions to the estimated airdata parameters. This comparison is accomplished by applying the calibrated FADS algorithm to the sensed wind tunnel pressures. When the resulting accuracy estimates are compared to the accuracy requirements for the X-33 airdata, the FADS system meets these requirements.

  17. A suggestion for computing objective function in model calibration

    USGS Publications Warehouse

    Wu, Yiping; Liu, Shuguang

    2014-01-01

    A parameter-optimization process (model calibration) is usually required for numerical model applications, which involves the use of an objective function to determine the model cost (model-data errors). The sum of square errors (SSR) has been widely adopted as the objective function in various optimization procedures. However, ‘square error’ calculation was found to be more sensitive to extreme or high values. Thus, we proposed that the sum of absolute errors (SAR) may be a better option than SSR for model calibration. To test this hypothesis, we used two case studies—a hydrological model calibration and a biogeochemical model calibration—to investigate the behavior of a group of potential objective functions: SSR, SAR, sum of squared relative deviation (SSRD), and sum of absolute relative deviation (SARD). Mathematical evaluation of model performance demonstrates that ‘absolute error’ (SAR and SARD) are superior to ‘square error’ (SSR and SSRD) in calculating objective function for model calibration, and SAR behaved the best (with the least error and highest efficiency). This study suggests that SSR might be overly used in real applications, and SAR may be a reasonable choice in common optimization implementations without emphasizing either high or low values (e.g., modeling for supporting resources management).
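The core contrast between the candidate objective functions is easy to make concrete: squaring residuals lets a single extreme error dominate the model cost, while absolute errors weight all residuals linearly. A minimal sketch of the two simplest members of the family:

```python
def ssr(obs, sim):
    """Sum of squared errors: penalizes large residuals quadratically."""
    return sum((o - s) ** 2 for o, s in zip(obs, sim))

def sar(obs, sim):
    """Sum of absolute errors: treats all residuals linearly, so a single
    extreme value (e.g. a flood peak) dominates the objective far less."""
    return sum(abs(o - s) for o, s in zip(obs, sim))
```

For observations [1, 1, 1, 1], a simulation with one residual of 2.0 scores SSR = 4.0 but SAR = 2.0, while a simulation with four residuals of 0.8 scores SSR = 2.56 but SAR = 3.2: SSR ranks the single-outlier fit worse, SAR ranks it better, which is exactly the sensitivity to extreme values the study reports.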

  18. Sustained prediction ability of net analyte preprocessing methods using reduced calibration sets. Theoretical and experimental study involving the spectrophotometric analysis of multicomponent mixtures.

    PubMed

    Goicoechea, H C; Olivieri, A C

    2001-07-01

    A newly developed multivariate method involving net analyte preprocessing (NAP) was tested using central composite calibration designs of progressively decreasing size for the simultaneous multivariate spectrophotometric determination of three active components (phenylephrine, diphenhydramine and naphazoline) and one excipient (methylparaben) in nasal solutions. Its performance was evaluated and compared with that of partial least-squares (PLS-1). Minimisation of the calibration predicted error sum of squares (PRESS) as a function of a moving spectral window helped to select appropriate working spectral ranges for both methods. The comparison of NAP and PLS results was carried out using two tests: (1) the elliptical joint confidence region for the slope and intercept of a predicted versus actual concentrations plot for a large validation set of samples and (2) the D-optimality criterion concerning the information content of the calibration data matrix. Extensive simulations and experimental validation showed that, unlike PLS, the NAP method is able to furnish highly satisfactory results when the calibration set is reduced from a full four-component central composite to a fractional central composite, as expected from the modelling requirements of net analyte based methods.
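
The PRESS criterion used for window selection is a leave-one-out statistic: refit the model without sample i, predict the held-out sample, and sum the squared prediction errors. A minimal sketch, using ordinary least squares in place of the paper's PLS-1 model (the `press` helper is ours):

```python
import numpy as np

def press(X, y):
    """Leave-one-out Predicted Error Sum of Squares for a linear model:
    sum over i of (y[i] - prediction from a fit that excludes sample i)^2."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    total = 0.0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        coef, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        total += float(y[i] - X[i] @ coef) ** 2
    return total

# Candidate spectral windows would each yield an X; the window (and model)
# with the lowest PRESS is preferred.
```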

  19. Temporal dynamics of sand dune bidirectional reflectance characteristics for absolute radiometric calibration of optical remote sensing data

    NASA Astrophysics Data System (ADS)

    Coburn, Craig A.; Logie, Gordon S. J.

    2018-01-01

    Attempts to use pseudoinvariant calibration sites (PICS) for establishing absolute radiometric calibration of Earth observation (EO) satellites require high-quality information about the nature of the bidirectional reflectance distribution function (BRDF) of the surfaces used for these calibrations. Past studies have shown that the PICS method is useful for evaluating the trend of sensors over time or for the intercalibration of sensors; until recently, however, it was not considered for deriving absolute radiometric calibration. This paper presents BRDF data collected by a high-performance portable goniometer system to develop a temporal BRDF model for the Algodones Dunes in California. By sampling the BRDF of the sand surface at solar zenith angles similar to those normally encountered by EO satellites, additional information on the changing nature of the surface can improve models used to provide absolute radiometric correction. The results demonstrated that the BRDF of a reasonably simple sand surface was complex, with changes in anisotropy taking place in response to changing solar zenith angles. For the majority of observation and illumination angles, the spectral reflectance anisotropy observed varied between 1% and 5% in patterns that repeat around solar noon.

  20. Factory-Calibrated Continuous Glucose Sensors: The Science Behind the Technology.

    PubMed

    Hoss, Udo; Budiman, Erwin Satrya

    2017-05-01

    The use of commercially available continuous glucose monitors for diabetes management requires sensor calibrations, which until recently were performed exclusively by the patient. A new development is the implementation of factory calibration for subcutaneous glucose sensors, which eliminates the need for user calibrations and the associated blood glucose tests. Factory calibration means that the calibration process is part of the sensor manufacturing process and is performed under controlled laboratory conditions. The ability to move from user calibration to factory calibration rests on several technical requirements related to sensor stability and the robustness of the sensor manufacturing process. The main advantages of factory calibration over conventional user calibration are: (a) more convenience for the user, since fingersticks are no longer required for calibration, and (b) elimination of use errors related to the execution of the calibration process, which can lead to sensor inaccuracies. The FreeStyle Libre™ and FreeStyle Libre Pro™ flash continuous glucose monitoring systems are the first commercially available sensor systems using factory-calibrated sensors. For these sensor systems, no user calibrations are required throughout the sensor wear duration.

  2. Improving integrity of on-line grammage measurement with traceable basic calibration.

    PubMed

    Kangasrääsiö, Juha

    2010-07-01

    The automatic control of grammage (basis weight) in paper and board production is based upon on-line grammage measurement. Furthermore, the automatic control of other quality variables, such as moisture, ash content and coat weight, may rely on the grammage measurement. The integrity of Kr-85 based on-line grammage measurement systems was studied by performing basic calibrations with traceably calibrated plastic reference standards. The calibrations were performed according to the EN ISO/IEC 17025 standard, which is a requirement for calibration laboratories. The observed relative measurement errors were 3.3% in the first-time calibrations at the 95% confidence level. With the traceable basic calibration method, however, these errors can be reduced to under 0.5%, thus improving the integrity of on-line grammage measurements. A standardised algorithm, based on experience from the performed calibrations, is also proposed to ease the adjustment of the different grammage measurement systems. The calibration technique can in principle be applied to all beta-radiation based grammage measurements. 2010 ISA. Published by Elsevier Ltd. All rights reserved.

  3. Calibration of electret-based integral radon monitors using NIST polyethylene-encapsulated {sup 226}Ra/{sup 222}Rn emanation (PERE) standards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colle, R.; Hutchinson, J.M.R.; Kotrappa, P.

    1995-11-01

    The recently developed ²²²Rn emanation standards that are based on polyethylene-encapsulated ²²⁶Ra solutions were employed for a first field-measurement application test to demonstrate their efficacy in calibrating passive integral radon monitors. The performance of the capsules was evaluated with respect to the calibration needs of electret ionization chambers (E-PERM®, Rad Elec Inc.). The encapsulated standards emanate well-characterized and known quantities of ²²²Rn, and were used in two different-sized, relatively small accumulation vessels (about 3.6 L and 10 L) which also contained the deployed electret monitors under test. Calculated integral ²²²Rn activities from the capsules over various accumulation times were compared to the averaged electret responses. Evaluations were made with four encapsulated standards ranging in ²²⁶Ra activity from approximately 15 Bq to 540 Bq (with ²²²Rn emanation fractions of 0.888); over accumulation times from 1 d to 33 d; and with four different types of E-PERM detectors that were independently calibrated. The ratio of the electret chamber response E_Rn to the integral ²²²Rn activity I_Rn was constant (within statistical variations) over the variables of the specific capsule used, the accumulation volume, accumulation time, and detector type. The results clearly demonstrated the practicality and suitability of the encapsulated standards for providing a simple and readily-available calibration for those measurement applications. However, the mean ratio E_Rn/I_Rn was approximately 0.91, suggesting a possible systematic bias in the extant E-PERM calibrations. This 9% systematic difference was verified by an independent test of the E-PERM calibration based on measurements with the NIST radon-in-water standard generator.
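
The integral radon activity from a sealed source follows from the standard ingrowth law A(t) = f·A_Ra·(1 - exp(-λt)), integrated over the accumulation time. The abstract does not state that I_Rn was computed exactly this way, so this is a sketch of the standard relation only:

```python
import math

RN222_HALF_LIFE_D = 3.8235                        # 222Rn half-life, days
DECAY_CONST = math.log(2.0) / RN222_HALF_LIFE_D   # lambda, per day

def integral_rn_activity(ra_activity_bq, emanation_fraction, t_days):
    """Time-integrated 222Rn activity (Bq*d) accumulated over t_days from a
    226Ra source: integral of f*A_Ra*(1 - exp(-lambda*t)) from 0 to t_days."""
    lam = DECAY_CONST
    return emanation_fraction * ra_activity_bq * (
        t_days - (1.0 - math.exp(-lam * t_days)) / lam)
```

For long accumulation times the ingrowth term saturates and the integral grows linearly, as expected once the radon reaches equilibrium with the radium parent.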

  4. A graphical method to evaluate spectral preprocessing in multivariate regression calibrations: example with Savitzky-Golay filters and partial least squares regression.

    PubMed

    Delwiche, Stephen R; Reeves, James B

    2010-01-01

    In multivariate regression analysis of spectroscopy data, spectral preprocessing is often performed to reduce unwanted background information (offsets, sloped baselines) or accentuate absorption features in intrinsically overlapping bands. These procedures, also known as pretreatments, are commonly smoothing operations or derivatives. While such operations are often useful in reducing the number of latent variables of the actual decomposition and lowering residual error, they also run the risk of misleading the practitioner into accepting calibration equations that are poorly adapted to samples outside of the calibration. The current study developed a graphical method to examine this effect on partial least squares (PLS) regression calibrations of near-infrared (NIR) reflection spectra of ground wheat meal with two analytes, protein content and sodium dodecyl sulfate sedimentation (SDS) volume (an indicator of the quantity of the gluten proteins that contribute to strong doughs). These two properties were chosen because of their differing abilities to be modeled by NIR spectroscopy: excellent for protein content, fair for SDS sedimentation volume. To further demonstrate the potential pitfalls of preprocessing, an artificial component, a randomly generated value, was included in PLS regression trials. Savitzky-Golay (digital filter) smoothing, first-derivative, and second-derivative preprocess functions (5 to 25 centrally symmetric convolution points, derived from quadratic polynomials) were applied to PLS calibrations of 1 to 15 factors. The results demonstrated the danger of an overreliance on preprocessing when (1) the number of samples used in a multivariate calibration is low (<50), (2) the spectral response of the analyte is weak, and (3) the goodness of the calibration is based on the coefficient of determination (R(2)) rather than a term based on residual error. The graphical method has application to the evaluation of other preprocess functions and various types of spectroscopy data.
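
A Savitzky-Golay pass is a windowed polynomial least-squares fit evaluated at each window centre; a minimal numpy sketch in the spirit of the smoothing and derivative pretreatments discussed above (edge handling simplified, names ours; production code would use a library routine such as SciPy's `savgol_filter`):

```python
import math
import numpy as np

def savgol(y, window, polyorder, deriv=0):
    """Minimal Savitzky-Golay filter: least-squares fit a polynomial of the
    given order to each centred window and return its deriv-th derivative
    at the centre. The first/last window//2 samples are left unfiltered."""
    y = np.asarray(y, float)
    half = window // 2
    t = np.arange(-half, half + 1)
    A = np.vander(t, polyorder + 1, increasing=True)  # columns t^0, t^1, ...
    pinv = np.linalg.pinv(A)                          # window values -> poly coefs
    out = y.copy()
    for i in range(half, len(y) - half):
        coefs = pinv @ y[i - half:i + half + 1]
        out[i] = coefs[deriv] * math.factorial(deriv)
    return out
```

Because the fit is exact for polynomials up to `polyorder`, a quadratic signal passes through a second-order smoothing window unchanged, while `deriv=1` recovers the slope of a linear signal.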

  5. Laboratory evaluation of Fecker and Loral optical IR PWI systems

    NASA Technical Reports Server (NTRS)

    Gorstein, M.; Hallock, J. N.; Houten, M.; Mcwilliams, I. G.

    1971-01-01

    A previous flight test of two electro-optical pilot warning indicators, using a flashing xenon strobe and silicon detectors as cooperative elements, pointed out several design deficiencies. The present laboratory evaluation program corrected these faults and calibrated the sensitivity of both systems in azimuth, elevation, and range. The laboratory tests were performed on an optical bench using three basic components: (1) a xenon strobe lamp whose output is monitored at the indicator detector to give pulse-to-pulse information on energy content at the receiver; (2) a strobe light attenuating optical system, calibrated photometrically to provide simulated range; and (3) a positioning table on which the indicator system under study is mounted and which provides spatial location coordinates for all data points. The test results for both systems are tabulated.

  6. Risk assessment model for development of advanced age-related macular degeneration.

    PubMed

    Klein, Michael L; Francis, Peter J; Ferris, Frederick L; Hamon, Sara C; Clemons, Traci E

    2011-12-01

    To design a risk assessment model for development of advanced age-related macular degeneration (AMD) incorporating phenotypic, demographic, environmental, and genetic risk factors. We evaluated longitudinal data from 2846 participants in the Age-Related Eye Disease Study. At baseline, these individuals had all levels of AMD, ranging from none to unilateral advanced AMD (neovascular or geographic atrophy). Follow-up averaged 9.3 years. We performed a Cox proportional hazards analysis with demographic, environmental, phenotypic, and genetic covariates and constructed a risk assessment model for development of advanced AMD. Performance of the model was evaluated using the C statistic and the Brier score and externally validated in participants in the Complications of Age-Related Macular Degeneration Prevention Trial. The final model included the following independent variables: age, smoking history, family history of AMD (first-degree member), phenotype based on a modified Age-Related Eye Disease Study simple scale score, and genetic variants CFH Y402H and ARMS2 A69S. The model did well on performance measures, with very good discrimination (C statistic = 0.872) and excellent calibration and overall performance (Brier score at 5 years = 0.08). Successful external validation was performed, and a risk assessment tool was designed for use with or without the genetic component. We constructed a risk assessment model for development of advanced AMD. The model performed well on measures of discrimination, calibration, and overall performance and was successfully externally validated. This risk assessment tool is available for online use.
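
The two reported performance measures are easy to compute from predicted risks and observed outcomes; a minimal sketch (function names ours):

```python
import numpy as np

def brier_score(risk, outcome):
    """Mean squared difference between predicted risk and the 0/1 outcome.
    0 is perfect; 0.25 is an uninformative coin flip on a balanced sample."""
    risk = np.asarray(risk, float)
    outcome = np.asarray(outcome, float)
    return float(np.mean((risk - outcome) ** 2))

def c_statistic(risk, outcome):
    """Concordance (C) statistic: probability that a randomly chosen event
    case is assigned a higher risk than a randomly chosen non-event case
    (ties count half). 0.5 is chance; 1.0 is perfect discrimination."""
    pos = [r for r, o in zip(risk, outcome) if o == 1]
    neg = [r for r, o in zip(risk, outcome) if o == 0]
    wins = sum((a > b) + 0.5 * (a == b) for a in pos for b in neg)
    return wins / (len(pos) * len(neg))
```

The Brier score mixes calibration and discrimination into one number, which is why the study reports it alongside the C statistic rather than instead of it.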

  7. Antenna Calibration and Measurement Equipment

    NASA Technical Reports Server (NTRS)

    Rochblatt, David J.; Cortes, Manuel Vazquez

    2012-01-01

    A document describes the Antenna Calibration & Measurement Equipment (ACME) system that will provide the Deep Space Network (DSN) with instrumentation enabling a trained RF engineer at each complex to perform antenna calibration measurements and to generate antenna calibration data. These data include continuous-scan auto-bore-based data acquisition with all-sky data gathering in support of 4th-order pointing model generation requirements, as well as antenna subreflector focus, system noise temperature and tipping curves, antenna efficiency, system linearity reports, and instrument calibration. The ACME system design is based on the on-the-fly (OTF) mapping technique and architecture. ACME has contributed to improving the RF performance of the DSN by approximately a factor of two, and has improved the pointing performance of the DSN antennas as well as the productivity of its personnel and calibration engineers.

  8. A real-time freehand ultrasound calibration system with automatic accuracy feedback and control.

    PubMed

    Chen, Thomas Kuiran; Thurston, Adrian D; Ellis, Randy E; Abolmaesumi, Purang

    2009-01-01

    This article describes a fully automatic, real-time, freehand ultrasound calibration system. The system was designed to be simple and sterilizable, intended for operating-room usage. The calibration system employed an automatic-error-retrieval and accuracy-control mechanism based on a set of ground-truth data. Extensive validations were conducted on a data set of 10,000 images in 50 independent calibration trials to thoroughly investigate the accuracy, robustness, and performance of the calibration system. On average, the calibration accuracy (measured in three-dimensional reconstruction error against a known ground truth) of all 50 trials was 0.66 mm. In addition, the calibration errors converged to submillimeter in 98% of all trials within 12.5 s on average. Overall, the calibration system was able to consistently, efficiently and robustly achieve high calibration accuracy with real-time performance.
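
A plausible form of the reported three-dimensional reconstruction error is the mean Euclidean distance between reconstructed points and their known ground-truth positions; the abstract does not give the exact formula, so this sketch is an assumption:

```python
import numpy as np

def reconstruction_error(reconstructed, ground_truth):
    """Mean Euclidean distance (same units as the inputs, e.g. mm) between
    reconstructed 3-D points and their known ground-truth positions."""
    reconstructed = np.asarray(reconstructed, float)
    ground_truth = np.asarray(ground_truth, float)
    return float(np.mean(np.linalg.norm(reconstructed - ground_truth, axis=1)))
```

An automatic accuracy-control loop of the kind described could simply repeat calibration trials until this statistic drops below a submillimeter threshold.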

  9. Global evaluation of runoff from 10 state-of-the-art hydrological models

    NASA Astrophysics Data System (ADS)

    Beck, Hylke E.; van Dijk, Albert I. J. M.; de Roo, Ad; Dutra, Emanuel; Fink, Gabriel; Orth, Rene; Schellekens, Jaap

    2017-06-01

    Observed streamflow data from 966 medium-sized catchments (1000-5000 km2) around the globe were used to comprehensively evaluate the daily runoff estimates (1979-2012) of six global hydrological models (GHMs) and four land surface models (LSMs) produced as part of tier-1 of the eartH2Observe project. The models were all driven by the WATCH Forcing Data ERA-Interim (WFDEI) meteorological dataset, but used different datasets for non-meteorological inputs and were run at various spatial and temporal resolutions, although all data were re-sampled to a common 0.5° spatial and daily temporal resolution. For the evaluation, we used a broad range of performance metrics related to important aspects of the hydrograph. We found pronounced inter-model performance differences, underscoring the importance of hydrological model uncertainty in addition to climate input uncertainty, for example in studies assessing the hydrological impacts of climate change. The uncalibrated GHMs were found to perform, on average, better than the uncalibrated LSMs in snow-dominated regions, while the ensemble mean was found to perform only slightly worse than the best (calibrated) model. The inclusion of less-accurate models did not appreciably degrade the ensemble performance. Overall, we argue that more effort should be devoted to calibrating and regionalizing the parameters of macro-scale models. We further found that, despite adjustments using gauge observations, the WFDEI precipitation data still contain substantial biases that propagate into the simulated runoff. The early bias in the spring snowmelt peak exhibited by most models is probably primarily due to the widespread precipitation underestimation at high northern latitudes.

  10. Predicting herbivore faecal nitrogen using a multispecies near-infrared reflectance spectroscopy calibration.

    PubMed

    Villamuelas, Miriam; Serrano, Emmanuel; Espunyes, Johan; Fernández, Néstor; López-Olvera, Jorge R; Garel, Mathieu; Santos, João; Parra-Aguado, María Ángeles; Ramanzin, Maurizio; Fernández-Aguilar, Xavier; Colom-Cadena, Andreu; Marco, Ignasi; Lavín, Santiago; Bartolomé, Jordi; Albanell, Elena

    2017-01-01

    Optimal management of free-ranging herbivores requires the accurate assessment of an animal's nutritional status. For this purpose, near-infrared reflectance spectroscopy (NIRS) is very useful, especially when nutritional assessment is done through faecal indicators such as faecal nitrogen (FN). In order to perform an NIRS calibration, the default protocol recommends starting by generating an initial equation based on at least 50-75 samples from the given species. Although this protocol optimises prediction accuracy, it limits the use of NIRS with rare or endangered species, where sample sizes are often small. To overcome this limitation, we tested a single NIRS equation (i.e., a multispecies calibration) to predict FN in herbivores. Firstly, we used five herbivore species with highly contrasting digestive physiologies, namely horse, sheep, Pyrenean chamois, red deer and European rabbit, to build monospecies and multispecies calibrations. Secondly, the accuracy of the equation was evaluated by two procedures using: (1) an external validation with samples from the same species that were not used in the calibration process; and (2) samples from different ungulate species, specifically Alpine ibex, domestic goat, European mouflon, roe deer and cattle. The multispecies equation was highly accurate: coefficient of determination for calibration R2 = 0.98, standard error of cross-validation SECV = 0.10, standard error of external validation SEP = 0.12, ratio of performance to deviation RPD = 5.3, and range error ratio RER = 28.4. The accuracy of the multispecies equation in predicting other herbivore species was also satisfactory (R2 > 0.86, SEP < 0.27, RPD > 2.6, and RER > 8.1). Lastly, the agreement between multi- and monospecies calibrations was also confirmed by the Bland-Altman method. In conclusion, our single multispecies equation can be used as a reliable, cost-effective, easy and powerful analytical method to assess FN in a wide range of herbivore species.
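
The figures of merit quoted above relate in simple ways to the standard error of prediction; a sketch, assuming SEP is taken as the RMSE of the external validation set (bias-corrected variants also exist, and the names are ours):

```python
import numpy as np

def sep_rmse(reference, predicted):
    """Standard error of prediction, taken here as validation-set RMSE."""
    e = np.asarray(reference, float) - np.asarray(predicted, float)
    return float(np.sqrt(np.mean(e ** 2)))

def rpd(reference, predicted):
    """Ratio of performance to deviation: SD of reference values over SEP.
    Values above ~3 indicate a calibration usable for quantification."""
    sd = float(np.std(np.asarray(reference, float), ddof=1))
    return sd / sep_rmse(reference, predicted)

def rer(reference, predicted):
    """Range error ratio: range of reference values over SEP."""
    r = np.asarray(reference, float)
    return float((r.max() - r.min()) / sep_rmse(reference, predicted))
```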

  11. Exploration of attenuated total reflectance mid-infrared spectroscopy and multivariate calibration to measure immunoglobulin G in human sera.

    PubMed

    Hou, Siyuan; Riley, Christopher B; Mitchell, Cynthia A; Shaw, R Anthony; Bryanton, Janet; Bigsby, Kathryn; McClure, J Trenton

    2015-09-01

    Immunoglobulin G (IgG) is crucial for the protection of the host from invasive pathogens. Due to its importance for human health, tools that enable the monitoring of IgG levels are highly desired. Consequently, there is a need for methods to determine the IgG concentration that are simple, rapid, and inexpensive. This work explored the potential of attenuated total reflectance (ATR) infrared spectroscopy as a method to determine IgG concentrations in human serum samples. Venous blood samples were collected from adults and children, and from the umbilical cord of newborns. The serum was harvested and tested using ATR infrared spectroscopy. Partial least squares (PLS) regression provided the basis to develop the new analytical methods. Three PLS calibrations were determined: one for the combined set of the venous and umbilical cord serum samples, the second for only the umbilical cord samples, and the third for only the venous samples. The number of PLS factors was chosen by critical evaluation of Monte Carlo-based cross-validation results. The predictive performance of each PLS calibration was evaluated using the Pearson correlation coefficient, scatter and Bland-Altman plots, and percent deviations for independent prediction sets. The repeatability was evaluated by standard deviation and relative standard deviation. The results showed that ATR infrared spectroscopy is potentially a simple, quick, and inexpensive method to measure IgG concentrations in human serum samples. The results also showed that it is possible to build a unified calibration curve for the umbilical cord and the venous samples. Copyright © 2015 Elsevier B.V. All rights reserved.
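
The Bland-Altman comparison mentioned here reduces to a bias and 95% limits of agreement computed on the paired differences between two methods; a minimal sketch (function name ours):

```python
import numpy as np

def bland_altman_limits(method_a, method_b):
    """Return (bias, lower, upper): the mean difference between two
    measurement methods and its 95% limits of agreement, bias +/- 1.96*SD."""
    d = np.asarray(method_a, float) - np.asarray(method_b, float)
    bias = float(d.mean())
    sd = float(d.std(ddof=1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

The conventional plot draws each difference against the pair mean, with horizontal lines at these three values; narrow limits around a near-zero bias indicate the methods agree.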

  12. Influence of three common calibration metrics on the diagnosis of climate change impacts on water resources

    NASA Astrophysics Data System (ADS)

    Seiller, G.; Roy, R.; Anctil, F.

    2017-04-01

    Uncertainties associated with evaluating the impacts of climate change on water resources are broad, stem from multiple sources, and lead to diagnoses that are sometimes difficult to interpret. Quantification of these uncertainties is a key element in building confidence in the analyses and providing water managers with valuable information. This work specifically evaluates the influence of hydrological model calibration metrics on future water resources projections, on thirty-seven watersheds in the Province of Québec, Canada. Twelve lumped hydrologic models, representing a wide range of operational options, are calibrated with three common objective functions derived from the Nash-Sutcliffe efficiency. The hydrologic models are forced with climate simulations corresponding to two RCPs, twenty-nine GCMs from CMIP5 (Coupled Model Intercomparison Project phase 5) and two post-treatment techniques, leading to future projections for the 2041-2070 period. Results show that the diagnosis of the impacts of climate change on water resources is strongly affected by the selection of hydrologic models and calibration metrics. Indeed, for the four selected hydrological indicators, dedicated to water management, parameters from the three objective functions can provide different interpretations in terms of absolute and relative changes, as well as projected change direction and climatic ensemble consensus. The GR4J model and a multimodel approach offer the best modeling options, based on calibration performance and robustness. Overall, these results illustrate the need to provide water managers with detailed information on relative changes analysis, but also absolute change values, especially for hydrological indicators acting as security policy thresholds.
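
The calibration metrics in this study derive from the Nash-Sutcliffe efficiency, whose base form is simple (the specific transformed-flow variants used are not spelled out in the abstract):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.
    1 is a perfect fit; 0 means the model is no better than predicting
    the observed mean; negative values mean it is worse."""
    obs = np.asarray(obs, float)
    sim = np.asarray(sim, float)
    return float(1.0 - np.sum((obs - sim) ** 2)
                 / np.sum((obs - obs.mean()) ** 2))
```

Common derived objective functions apply the same formula to square-root or log-transformed flows to reduce the weight of high-flow errors, which is one way metric choice shifts the resulting parameter sets.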

  13. Simultaneous determination of the impurity and radial tensile strength of reduced glutathione tablets by a high selective NIR-PLS method.

    PubMed

    Li, Juan; Jiang, Yue; Fan, Qi; Chen, Yang; Wu, Ruanqi

    2014-05-05

    This paper establishes a high-throughput and highly selective method to determine the oxidized glutathione (GSSG) impurity and the radial tensile strength (RTS) of reduced glutathione (GSH) tablets based on near infrared (NIR) spectroscopy and partial least squares (PLS). In order to build and evaluate the calibration models, the NIR diffuse reflectance spectra (DRS) and transmittance spectra (TS) of 330 GSH tablets were accurately measured using optimized parameter values. For analyzing GSSG or RTS of GSH tablets, the NIR-DRS or NIR-TS were selected, subdivided into calibration and prediction sets, and processed with appropriate chemometric techniques. After selecting spectral sub-ranges and discarding spectral outliers, the PLS calibration models were built and the factor numbers were optimized. The PLS models were then evaluated by the root mean square errors of calibration (RMSEC), cross-validation (RMSECV) and prediction (RMSEP), and by the correlation coefficients of calibration (R(c)) and prediction (R(p)). The results indicate that the proposed models perform well. NIR-PLS can thus simultaneously, selectively, nondestructively and rapidly analyze the GSSG and RTS of GSH tablets, even though the contents of the GSSG impurity were quite low while those of the GSH active pharmaceutical ingredient (API) were quite high. This strategy can be an important complement to the common NIR methods used in the on-line analysis of API in pharmaceutical preparations, and this work expands NIR applications in high-throughput, highly selective analysis. Copyright © 2014 Elsevier B.V. All rights reserved.
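
RMSEC, RMSECV and RMSEP are the same statistic evaluated on different sample sets (calibration, cross-validation and independent prediction, respectively); a minimal sketch (helper name ours):

```python
import numpy as np

def rmse(actual, predicted):
    """Root mean square error. Reported as RMSEC, RMSECV or RMSEP depending
    on whether the (actual, predicted) pairs come from the calibration set,
    cross-validation folds, or an independent prediction set."""
    actual = np.asarray(actual, float)
    predicted = np.asarray(predicted, float)
    return float(np.sqrt(np.mean((actual - predicted) ** 2)))
```

A model whose RMSEP is much larger than its RMSEC is overfit to the calibration samples, which is why all three values are reported together.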

  14. Parameter identification of the SWAT model on the BANI catchment (West Africa) under limited data condition

    NASA Astrophysics Data System (ADS)

    Chaibou Begou, Jamilatou; Jomaa, Seifeddine; Benabdallah, Sihem; Rode, Michael

    2015-04-01

    Due to climate change, drier conditions have prevailed in West Africa since the 1970s, with important consequences for water resources. In order to identify and implement management strategies for adaptation to climate change in the water sector, it is crucial to improve our physical understanding of the evolution of water resources in the region. To this end, hydrologic modelling is an appropriate tool for flow predictions under changing climate and land use conditions. In this study, the applicability and performance of the recent version of the Soil and Water Assessment Tool (SWAT2012) model were tested on the Bani catchment in West Africa under limited data conditions. Model parameter identification was also tested using one-site and multi-site calibration approaches. The Bani is located in the upper part of the Niger River basin and drains an area of about 101,000 km2 at the outlet of Douna. The climate is tropical, humid to semi-arid from south to north, with an average annual rainfall of 1050 mm (period 1981-2000). Global datasets were used for the model setup: the USGS HydroSHEDS DEM, USGS LCI GlobCov2009 and the FAO Digital Soil Map of the World. Daily measured rainfall from nine rain gauges and maximum and minimum temperatures from five weather stations covering the period 1981-1997 were used for model setup. Sensitivity analysis, calibration and validation were performed within SWATCUP using the GLUE procedure, first at the Douna station (one-site calibration), then at three additional internal stations, Bougouni, Pankourou and Kouoro1 (multi-site calibration). Model parameters were calibrated at a daily time step for the period 1983-1992, then validated for the period 1993-1997. A period of two years (1981-1982) was used for model warm-up. Results of the one-site calibration showed a Nash-Sutcliffe efficiency (NS) of 0.76 and a coefficient of determination (R2) of 0.79, while for the validation period the performance improved considerably, with NS and R2 equal to 0.84 and 0.87, respectively. The degree of total uncertainty is quantified by a minimum P-factor of 0.61 and a maximum R-factor of 0.59. These statistics suggest that the model performance can be judged as very good, especially considering the limited data conditions and the high climate, land use and soil variability in the studied basin. The most sensitive parameters (CN2, OVN and SLSUBBSN) are related to surface runoff, reflecting the dominance of this process in streamflow generation. In the next step, the multi-site calibration approach will be performed on the Bani basin to assess how much additional observations improve model parameter identification.

  15. Variability in Students' Evaluating Processes in Peer Assessment with Calibrated Peer Review

    ERIC Educational Resources Information Center

    Russell, J.; Van Horne, S.; Ward, A. S.; Bettis, E. A., III; Gikonyo, J.

    2017-01-01

    This study investigated students' evaluating process and their perceptions of peer assessment when they engaged in peer assessment using Calibrated Peer Review. Calibrated Peer Review is a web-based application that facilitates peer assessment of writing. One hundred and thirty-two students in an introductory environmental science course…

  16. Energy conversion research and development with diminiodes

    NASA Technical Reports Server (NTRS)

    Morris, J. F.

    1974-01-01

    Diminiodes are variable-gap cesium diodes with plane miniature guarded electrodes. These converters allow thermionic evaluations of tiny pieces of rare solids. In addition to smallness, diminiode advantages comprise simplicity, precision, fabrication ease, parts interchangeability, cleanliness, full instrumentation, direct calibration, ruggedness, and economy. Diminiodes with computerized thermionic performance mapping make electrode screening programs practical.

  17. Field performance of three real-time moisture sensors in sandy loam and clay loam soils

    USDA-ARS?s Scientific Manuscript database

    The study was conducted to evaluate HydraProbe (HyP), Campbell Time Domain Reflectometry (TDR) and Watermarks (WM) moisture sensors for their ability to estimate water content based on calibrated neutron probe measurements. The three sensors were in-situ tested under natural weather conditions over ...

  18. Simultaneous determination of phosphatidylcholine-derived quaternary ammonium compounds by a LC-MS/MS method in human blood plasma, serum and urine samples.

    PubMed

    Steuer, Christian; Schütz, Philipp; Bernasconi, Luca; Huber, Andreas R

    2016-01-01

The determination of circulating trimethylamine-N-oxide (TMAO), choline, betaine, l-carnitine and O-acetyl-l-carnitine concentrations in different human matrices is of great clinical interest. Recent results highlighted the prognostic value of TMAO and quaternary ammonium containing metabolites in the field of cardiovascular and kidney diseases. Herein, we report a method for the rapid and simultaneous measurement of closely related phosphatidylcholine-derived metabolites in three different biological matrices by stable isotope dilution assay. Plasma, serum and urine samples were simply deproteinized and separated by HILIC chromatography. Detection and quantification were performed using LC-MS/MS with electrospray ionization in positive mode. For accuracy and precision, full calibration was performed covering more than the full reference range. Assay performance metrics, including intra- and interday imprecision, were below 10% for all analytes. To exclude matrix effects, standard addition methods were applied for all matrices. It was shown that calibration standards and quality controls prepared in water can be used instead of matrix-matched calibration and controls. The LC-MS/MS-based assay described in this article may improve future clinical studies evaluating TMAO and related substances as prognostic markers for cardiovascular risk and all-cause mortality in different patient populations. Copyright © 2015 Elsevier B.V. All rights reserved.
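    The standard addition check mentioned above extrapolates the responses of successively spiked aliquots back to the concentration axis: the unspiked analyte concentration is the magnitude of the x-intercept of the response-versus-added-concentration line. A minimal sketch (the numbers in the test are hypothetical, not values from the study):

```python
def standard_addition_conc(added, response):
    """Least-squares line through (added concentration, instrument response);
    returns intercept/slope, i.e. the magnitude of the x-intercept, which
    estimates the analyte concentration already present in the sample."""
    n = len(added)
    mx = sum(added) / n
    my = sum(response) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(added, response)) / \
            sum((x - mx) ** 2 for x in added)
    intercept = my - slope * mx
    return intercept / slope
```

Because the calibration line is built inside the sample itself, proportional matrix effects cancel out, which is exactly why the method is used to validate water-based calibrators.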

  19. Methodology for the preliminary design of high performance schools in hot and humid climates

    NASA Astrophysics Data System (ADS)

    Im, Piljae

A methodology to develop an easy-to-use toolkit for the preliminary design of high performance schools in hot and humid climates is presented. The toolkit proposed in this research will allow decision makers without simulation knowledge to evaluate energy-efficient measures for K-5 schools easily and accurately, which would contribute to the accelerated dissemination of energy efficient design. For the development of the toolkit, first, a survey was performed to identify high performance measures currently being implemented in new K-5 school buildings. Then an existing case-study school building in a hot and humid climate was selected and analyzed to understand the energy use pattern in a school building and to serve as the basis for a calibrated simulation. Based on the information from the previous step, an as-built and calibrated simulation was then developed. To accomplish this, five calibration steps were performed to match the simulation results with the measured energy use. The five steps include: (1) using an actual 2006 weather file with measured solar radiation, (2) modifying the lighting and equipment schedules using ASHRAE's RP-1093 methods, (3) using actual equipment performance curves (i.e., scroll chiller), (4) using Winkelmann's method for the underground floor heat transfer, and (5) modifying the HVAC and room setpoint temperatures based on the measured field data. Next, the calibrated simulation of the case-study K-5 school was compared to an ASHRAE Standard 90.1-1999 code-compliant school. In the next step, the energy savings potentials from the application of several high performance measures to an equivalent ASHRAE Standard 90.1-1999 code-compliant school were evaluated. The high performance measures applied included the recommendations from the ASHRAE Advanced Energy Design Guides (AEDG) for K-12 and other high performance measures from the literature review, as well as a daylighting strategy and solar PV and thermal systems.
The results show that the net energy consumption of the final high performance school with the solar thermal and solar PV systems would be 1,162.1 MMBtu, which corresponds to an EUI of 14.9 kBtu/sqft-yr. The calculated final energy and cost savings over the code-compliant school are 68.2% and 69.9%, respectively. As a final step of the research, specifications for a simplified easy-to-use toolkit were developed, and a prototype screenshot of the toolkit was produced. The toolkit is expected to be used by non-technical decision-makers to select and evaluate high performance measures for a new school building in terms of energy and cost savings in a quick and easy way.
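    Agreement between a calibrated simulation and measured energy use is conventionally scored with normalized mean bias error and the coefficient of variation of the RMSE (the criteria in, e.g., ASHRAE Guideline 14). The abstract does not state which criteria were applied, so the sketch below is illustrative:

```python
import math

def nmbe(measured, simulated):
    """Normalized mean bias error in percent: overall bias of the model."""
    n = len(measured)
    mean_meas = sum(measured) / n
    return 100.0 * sum(s - m for m, s in zip(measured, simulated)) / (n * mean_meas)

def cv_rmse(measured, simulated):
    """Coefficient of variation of the RMSE in percent: scatter of the model."""
    n = len(measured)
    mean_meas = sum(measured) / n
    rmse = math.sqrt(sum((s - m) ** 2 for m, s in zip(measured, simulated)) / n)
    return 100.0 * rmse / mean_meas
```

Both statistics are needed because a simulation can be unbiased on average (NMBE near zero) while still missing the month-to-month pattern (large CV(RMSE)).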

  20. A critical comparison of systematic calibration protocols for activated sludge models: a SWOT analysis.

    PubMed

    Sin, Gürkan; Van Hulle, Stijn W H; De Pauw, Dirk J W; van Griensven, Ann; Vanrolleghem, Peter A

    2005-07-01

Modelling activated sludge systems has gained increasing momentum since the introduction of activated sludge models (ASMs) in 1987. Application of dynamic models to full-scale systems essentially requires a calibration of the chosen ASM to the case under study. Numerous full-scale model applications have been performed so far, mostly based on ad hoc approaches and expert knowledge. Further, each modelling study has followed a different calibration approach: e.g. different influent wastewater characterization methods, different kinetic parameter estimation methods, different selection of parameters to be calibrated, different priorities within the calibration steps, etc. In short, there was no standard approach to performing the calibration study, which makes it difficult, if not impossible, to (1) compare different calibrations of ASMs with each other and (2) perform internal quality checks for each calibration study. To address these concerns, systematic calibration protocols have recently been proposed to bring guidance to the modelling of activated sludge systems and in particular to the calibration of full-scale models. In this contribution, four existing calibration approaches (BIOMATH, HSG, STOWA and WERF) will be critically discussed using a SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis. It will also be assessed in what way these approaches can be further developed in view of further improving the quality of ASM calibration. In this respect, the potential of automating some steps of the calibration procedure by use of mathematical algorithms is highlighted.

  1. Vicarious calibration of the Geostationary Ocean Color Imager.

    PubMed

    Ahn, Jae-Hyun; Park, Young-Je; Kim, Wonkook; Lee, Boram; Oh, Im Sang

    2015-09-07

Measurements of ocean color from the Geostationary Ocean Color Imager (GOCI), with a moderate spatial resolution and a high temporal frequency, demonstrate high value for a number of oceanographic applications. This study aims to propose and evaluate the calibration of GOCI as needed to achieve the level of radiometric accuracy desired for ocean color studies. Previous studies reported that the GOCI retrievals of normalized water-leaving radiance (nLw) are biased high for all visible bands due to the lack of vicarious calibration. The vicarious calibration approach described here relies on assumed constant aerosol characteristics over open-ocean sites to accurately estimate atmospheric radiances for the two near-infrared (NIR) bands. The vicarious calibration of the visible bands is performed using in situ nLw measurements and the satellite-estimated atmospheric radiance from the two NIR bands over case-1 waters. Prior to this analysis, the in situ nLw spectra in the NIR are corrected by a spectrum optimization technique based on the NIR similarity spectrum assumption. The vicarious calibration gain factors derived for all GOCI bands (except 865 nm) significantly improve agreement in retrieved remote-sensing reflectance (Rrs) relative to in situ measurements. These gain factors are independent of angular geometry and possible temporal variability. To further increase confidence in the calibration gain factors, a large data set from shipboard measurements and AERONET-OC is used in the validation process. It is shown that the absolute percentage difference of the atmospheric correction results from the vicariously calibrated GOCI system is reduced by ~6.8%.
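    In outline, a vicarious gain factor for a band is the ratio of the top-of-atmosphere radiance predicted from in situ nLw plus the modeled atmospheric path radiance to the radiance the sensor actually measured, averaged over matchups. A heavily simplified sketch (the actual GOCI procedure runs the full atmospheric correction chain; all names here are illustrative):

```python
def vicarious_gain(l_toa_target, l_toa_measured):
    """Band-averaged vicarious gain: mean ratio of the target TOA radiance
    (reconstructed from in situ nLw and modeled path radiance) to the
    sensor-measured TOA radiance over the matchup set."""
    ratios = [t / m for t, m in zip(l_toa_target, l_toa_measured)]
    return sum(ratios) / len(ratios)
```

The derived gain multiplies the sensor's calibrated radiance in subsequent processing; a gain below 1 would correct the high bias in nLw noted above.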

  2. A Consistent EPIC Visible Channel Calibration Using VIIRS and MODIS as a Reference.

    NASA Astrophysics Data System (ADS)

    Haney, C.; Doelling, D. R.; Minnis, P.; Bhatt, R.; Scarino, B. R.; Gopalan, A.

    2017-12-01

The Earth Polychromatic Imaging Camera (EPIC) aboard the Deep Space Climate Observatory (DSCOVR) satellite constantly images the sunlit disk of Earth from the Lagrange-1 (L1) point in 10 spectral channels spanning the UV, VIS, and NIR spectra. Recently, the DSCOVR EPIC team publicly released the version 2 dataset, which has implemented improved navigation, stray-light correction, and flat-fielding of the CCD array. The EPIC 2-year data record must be well calibrated for consistent cloud, aerosol, trace gas, land use and other retrievals. Because EPIC lacks onboard calibrators, the observations made by EPIC channels must be calibrated vicariously using coincident measurements from radiometrically stable instruments that have onboard calibration systems. MODIS and VIIRS are the best-suited instruments for this task, as they contain similar spectral bands that are well calibrated onboard using solar diffusers and lunar tracking. We have previously calibrated the EPIC version 1 dataset by using EPIC and VIIRS angularly matched radiance pairs over both all-sky ocean and deep convective clouds (DCC). We noted that the EPIC images required navigation adjustments, and that the EPIC stray-light correction provided an offset term closer to zero based on the linear regression of the EPIC and VIIRS ray-matched radiance pairs. We will evaluate the EPIC version 2 navigation and stray-light improvements using the same techniques. In addition, we will monitor the EPIC channel calibration over the two years for any temporal degradation or anomalous behavior. These two calibration methods will be further validated using desert and DCC invariant Earth targets. The radiometric characterization of the selected invariant targets is performed using multiple years of MODIS and VIIRS measurements. Results of these studies will be shown at the conference.

  3. A Consistent EPIC Visible Channel Calibration using VIIRS and MODIS as a Reference

    NASA Technical Reports Server (NTRS)

    Haney, C. O.; Doelling, D. R.; Minnis, P.; Bhatt, R.; Scarino, B. R.; Gopalan, A.

    2017-01-01

The Earth Polychromatic Imaging Camera (EPIC) aboard the Deep Space Climate Observatory (DSCOVR) satellite constantly images the sunlit disk of Earth from the Lagrange-1 (L1) point in 10 spectral channels spanning the UV, VIS, and NIR spectra. Recently, the DSCOVR EPIC team publicly released the version 2 dataset, which has implemented improved navigation, stray-light correction, and flat-fielding of the CCD array. The EPIC 2-year data record must be well calibrated for consistent cloud, aerosol, trace gas, land use and other retrievals. Because EPIC lacks onboard calibrators, the observations made by EPIC channels must be calibrated vicariously using coincident measurements from radiometrically stable instruments that have onboard calibration systems. MODIS and VIIRS are the best-suited instruments for this task, as they contain similar spectral bands that are well calibrated onboard using solar diffusers and lunar tracking. We have previously calibrated the EPIC version 1 dataset by using EPIC and VIIRS angularly matched radiance pairs over both all-sky ocean and deep convective clouds (DCC). We noted that the EPIC images required navigation adjustments, and that the EPIC stray-light correction provided an offset term closer to zero based on the linear regression of the EPIC and VIIRS ray-matched radiance pairs. We will evaluate the EPIC version 2 navigation and stray-light improvements using the same techniques. In addition, we will monitor the EPIC channel calibration over the two years for any temporal degradation or anomalous behavior. These two calibration methods will be further validated using desert and DCC invariant Earth targets. The radiometric characterization of the selected invariant targets is performed using multiple years of MODIS and VIIRS measurements. Results of these studies will be shown at the conference.
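    The ray-matching analysis described above reduces to an ordinary least-squares fit between angularly matched measurement pairs: the slope is the calibration gain against the reference sensor, and an offset near zero indicates an effective stray-light correction. A minimal sketch (names are illustrative):

```python
def ray_match_regression(ref_rad, epic_meas):
    """Ordinary least-squares gain/offset between reference (VIIRS/MODIS)
    radiances and EPIC measurements from ray-matched pairs."""
    n = len(ref_rad)
    mx = sum(epic_meas) / n
    my = sum(ref_rad) / n
    gain = sum((x - mx) * (y - my) for x, y in zip(epic_meas, ref_rad)) / \
           sum((x - mx) ** 2 for x in epic_meas)
    offset = my - gain * mx
    return gain, offset
```

In practice the pairs would first be screened for viewing-geometry and spectral-band matching before the regression is trusted.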

  4. Evaluation of factors to convert absorbed dose calibrations from graphite to water for the NPL high-energy photon calibration service.

    PubMed

    Nutbrown, R F; Duane, S; Shipley, D R; Thomas, R A S

    2002-02-07

The National Physical Laboratory (NPL) provides a high-energy photon calibration service using 4-19 MV x-rays and 60Co gamma-radiation for secondary standard dosemeters in terms of absorbed dose to water. The primary standard used for this service is a graphite calorimeter, and so absorbed dose calibrations must be converted from graphite to water. The conversion factors currently in use were determined prior to the launch of this service in 1988. Since then, it has been found that the differences in inherent filtration between the NPL LINAC and typical clinical machines are large enough to affect absorbed dose calibrations and, since 1992, calibrations have been performed in heavily filtered qualities. The conversion factors for heavily filtered qualities were determined by interpolation and extrapolation of lightly filtered results as a function of tissue phantom ratio 20,10 (TPR20,10). This paper aims to evaluate these factors for all megavoltage photon energies provided by the NPL LINAC for both lightly and heavily filtered qualities and for 60Co gamma-radiation in two ways. The first method involves the use of the photon fluence-scaling theorem. This states that if two blocks of different material are irradiated by the same photon beam, and if all dimensions are scaled in the inverse ratio of the electron densities of the two media, then, assuming that all photon interactions occur by Compton scatter, the photon attenuation and scatter factors at corresponding scaled points of measurement in the phantom will be identical. The second method involves making in-phantom measurements of chamber response at a constant target-chamber distance. Monte Carlo techniques are then used to determine the corresponding dose to the medium in order to determine the chamber calibration factor directly. Values of the ratio of absorbed dose calibration factors in water and in graphite determined in these two ways agree with each other to within 0.2% (1sigma uncertainty).
The best fit to both sets of results agrees with values determined in previous work to within 0.3% (1sigma uncertainty). It is found that the conversion factor is not sensitive to beam filtration.
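    The fluence-scaling theorem quoted above implies that corresponding measurement depths in the two media scale in the inverse ratio of their electron densities, i.e. depth times electron density is held constant so that Compton attenuation and scatter match. A minimal sketch of that scaling (the density values in the test are illustrative, not NPL data):

```python
def scaled_depth(depth_cm, rho_e_source, rho_e_target):
    """Depth in the target medium corresponding to depth_cm in the source
    medium under the photon fluence-scaling theorem: the product of depth
    and electron density (electrons per unit volume) is invariant."""
    return depth_cm * rho_e_source / rho_e_target
```

Since graphite has a higher electron density than water, a given graphite depth maps to a proportionally larger water depth, which is how the calorimeter geometry is transferred to the water calibration point.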

  5. Longitudinal train dynamics model for a rail transit simulation system

    DOE PAGES

    Wang, Jinghui; Rakha, Hesham A.

    2018-01-01

The paper develops a longitudinal train dynamics model in support of microscopic railway transportation simulation. The model can be calibrated without any mechanical data, making it ideal for implementation in transportation simulators. The calibration and validation work is based on data collected from the Portland light rail train fleet. The calibration procedure is mathematically formulated as a constrained non-linear optimization problem. The validity of the model is assessed by comparing instantaneous model predictions against field observations, and also evaluated in the domains of acceleration/deceleration versus speed and acceleration/deceleration versus distance. A test is conducted to investigate the adequacy of the model in simulation implementation. The results demonstrate that the proposed model can adequately capture instantaneous train dynamics, and provides good performance in the simulation test. Thus, the model provides a simple theoretical foundation for microscopic simulators and will significantly support the planning, management and control of railway transportation systems.
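    As an illustration of this kind of calibration formulation (not the paper's actual model, which the abstract does not reproduce), consider fitting a simple resistance model a(v) = c0 - c1*v^2 to observed speed/acceleration pairs by least squares, with a non-negativity constraint on the drag coefficient:

```python
def calibrate_drag_model(speeds, accels):
    """Least-squares fit of a(v) = c0 - c1*v^2 to field data, clamping
    c1 >= 0 as a simple physical constraint (drag cannot add thrust).
    An illustrative stand-in for a constrained calibration problem."""
    u = [v * v for v in speeds]          # regressor for the v^2 term
    n = len(u)
    mu = sum(u) / n
    ma = sum(accels) / n
    slope = sum((x - mu) * (a - ma) for x, a in zip(u, accels)) / \
            sum((x - mu) ** 2 for x in u)
    c1 = max(0.0, -slope)                # constraint projection
    c0 = ma + c1 * mu
    return c0, c1
```

Real calibrations of this type would add bounds on every coefficient and a nonlinear solver, but the structure (data mismatch objective plus feasibility constraints) is the same.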

  6. Longitudinal train dynamics model for a rail transit simulation system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jinghui; Rakha, Hesham A.

The paper develops a longitudinal train dynamics model in support of microscopic railway transportation simulation. The model can be calibrated without any mechanical data, making it ideal for implementation in transportation simulators. The calibration and validation work is based on data collected from the Portland light rail train fleet. The calibration procedure is mathematically formulated as a constrained non-linear optimization problem. The validity of the model is assessed by comparing instantaneous model predictions against field observations, and also evaluated in the domains of acceleration/deceleration versus speed and acceleration/deceleration versus distance. A test is conducted to investigate the adequacy of the model in simulation implementation. The results demonstrate that the proposed model can adequately capture instantaneous train dynamics, and provides good performance in the simulation test. Thus, the model provides a simple theoretical foundation for microscopic simulators and will significantly support the planning, management and control of railway transportation systems.

  7. Improved patient notes from medical students during web-based teaching using faculty-calibrated peer review and self-assessment.

    PubMed

    McCarty, Teresita; Parkes, Marie V; Anderson, Teresa T; Mines, Jan; Skipper, Betty J; Grebosky, James

    2005-10-01

    This study examines the effectiveness of Calibrated Peer Review (CPR), a Web-based writing development program, to teach and assess medical students' patient note-writing skills in a standardized fashion. At the end of the clerkship year, 67 medical students were divided into three groups, introduced to CPR, and instructed in patient note-writing. Students then wrote notes for three clinical cases, presented in different order to each group. After training on faculty-calibrated standards, students evaluated their peers' notes and their own notes. Trained faculty, blinded to author, order, and group, also graded student notes. Faculty gave lower scores than students, but both groups found students' scores improved significantly from the first to the third note written. Student-written patient notes improved in quality while using CPR. The program uses approaches valued in medicine (accurate peer review and self-reflection) to enhance performance.

  8. Automated image quality assessment for chest CT scans.

    PubMed

    Reeves, Anthony P; Xie, Yiting; Liu, Shuang

    2018-02-01

    Medical image quality needs to be maintained at standards sufficient for effective clinical reading. Automated computer analytic methods may be applied to medical images for quality assessment. For chest CT scans in a lung cancer screening context, an automated quality assessment method is presented that characterizes image noise and image intensity calibration. This is achieved by image measurements in three automatically segmented homogeneous regions of the scan: external air, trachea lumen air, and descending aorta blood. Profiles of CT scanner behavior are also computed. The method has been evaluated on both phantom and real low-dose chest CT scans and results show that repeatable noise and calibration measures may be realized by automated computer algorithms. Noise and calibration profiles show relevant differences between different scanners and protocols. Automated image quality assessment may be useful for quality control for lung cancer screening and may enable performance improvements to automated computer analysis methods. © 2017 American Association of Physicists in Medicine.
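    The per-region measurements described above reduce to a mean intensity (a calibration check against the expected value, e.g. about -1000 HU for air) and a standard deviation (a noise measure) over the segmented voxels; a minimal sketch:

```python
def region_stats(hu_values):
    """Mean HU and standard deviation for a segmented homogeneous region
    (external air, tracheal air, or aortic blood). The mean checks
    intensity calibration; the SD characterizes image noise."""
    n = len(hu_values)
    mean = sum(hu_values) / n
    var = sum((v - mean) ** 2 for v in hu_values) / n
    return mean, var ** 0.5
```

Comparing these statistics across scans and scanners is what makes the per-scanner behavior profiles mentioned above possible.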

  9. Data filtering with support vector machines in geometric camera calibration.

    PubMed

    Ergun, B; Kavzoglu, T; Colkesen, I; Sahin, C

    2010-02-01

The use of non-metric digital cameras in close-range photogrammetric applications and machine vision has become a popular research agenda. Being an essential component of photogrammetric evaluation, camera calibration is a crucial stage for non-metric cameras. Therefore, accurate camera calibration and orientation procedures have become prerequisites for the extraction of precise and reliable 3D metric information from images. The lack of accurate inner orientation parameters can lead to unreliable results in the photogrammetric process. A camera can be well defined with its principal distance, principal point offset and lens distortion parameters. Different camera models have been formulated and used in close-range photogrammetry, but generally sensor orientation and calibration are performed with a perspective geometrical model by means of the bundle adjustment. In this study, support vector machines (SVMs) with a radial basis function kernel are employed to model the distortions measured for the Olympus E10 camera system with its aspherical zoom lens, which are later used in the geometric calibration process. The intention is to introduce an alternative approach for the on-the-job photogrammetric calibration stage. Experimental results for the DSLR camera with three focal length settings (9, 18 and 36 mm) were estimated using bundle adjustment with additional parameters, and analyses were conducted based on object point discrepancies and standard errors. Results show the robustness of the SVM approach for the correction of image coordinates by modelling total distortions in the on-the-job calibration process using a limited number of images.
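    The abstract does not detail the SVM regression itself; as a rough analogue, kernel ridge regression with an RBF kernel can interpolate measured distortion values as a smooth function of radial distance (true SVR additionally uses an epsilon-insensitive loss and support-vector sparsity). All names and numbers below are illustrative:

```python
import math

def rbf(a, b, gamma):
    """RBF kernel on scalar inputs, e.g. radial distance from the principal point."""
    return math.exp(-gamma * (a - b) ** 2)

def solve(mat, rhs):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(rhs)
    m = [row[:] + [rhs[i]] for i, row in enumerate(mat)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def fit_distortion(radii, distortions, gamma=1.0, lam=1e-6):
    """Kernel ridge regression: solve (K + lam*I) alpha = y, then predict
    distortion at any radius as a kernel expansion over the training radii."""
    n = len(radii)
    K = [[rbf(radii[i], radii[j], gamma) + (lam if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    alpha = solve(K, distortions)
    return lambda r: sum(a * rbf(r, ri, gamma) for a, ri in zip(alpha, radii))
```

The ridge term lam trades interpolation accuracy against noise sensitivity, playing a role loosely analogous to the SVM regularization constant.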

  10. Assessment of SNPP VIIRS VIS NIR Radiometric Calibration Stability Using Aqua MODIS and Invariant Surface Targets

    NASA Technical Reports Server (NTRS)

    Wu, Aisheng; Xiong, Xiaoxiong; Cao, Changyong; Chiang, Kwo-Fu

    2016-01-01

The first Visible Infrared Imaging Radiometer Suite (VIIRS) is onboard the Suomi National Polar-orbiting Partnership (SNPP) satellite. As a primary sensor, it collects imagery and radiometric measurements of the land, atmosphere, cryosphere, and oceans in the spectral regions from visible (VIS) to long-wave infrared. NASA's National Polar-orbiting Partnership (NPP) VIIRS Characterization Support Team has been actively involved in the VIIRS radiometric and geometric calibration to support its Science Team Principal Investigators for their independent quality assessment of VIIRS Environmental Data Records. This paper presents the performance assessment of the radiometric calibration stability of the VIIRS VIS and NIR spectral bands using measurements from SNPP VIIRS and Aqua MODIS simultaneous nadir overpasses and over the invariant surface targets at the Libya-4 desert and Antarctic Dome Concordia snow sites. The VIIRS sensor data records (SDRs) used in this paper are reprocessed by the NASA SNPP Land Product Evaluation and Analysis Tool Element. This paper shows that the reprocessed VIIRS SDRs have been consistently calibrated from the beginning of the mission, and the calibration stability is similar to or better than MODIS. Results from different approaches indicate that the calibrations of the VIIRS VIS and NIR spectral bands are maintained to be stable to within 1% over the first three-year mission. The absolute calibration differences between VIIRS and MODIS are within 2%, with an exception for the 0.865-μm band, after correction of their spectral response differences.

  11. Pattern-Based Inverse Modeling for Characterization of Subsurface Flow Models with Complex Geologic Heterogeneity

    NASA Astrophysics Data System (ADS)

    Golmohammadi, A.; Jafarpour, B.; M Khaninezhad, M. R.

    2017-12-01

    Calibration of heterogeneous subsurface flow models leads to ill-posed nonlinear inverse problems, where too many unknown parameters are estimated from limited response measurements. When the underlying parameters form complex (non-Gaussian) structured spatial connectivity patterns, classical variogram-based geostatistical techniques cannot describe the underlying connectivity patterns. Modern pattern-based geostatistical methods that incorporate higher-order spatial statistics are more suitable for describing such complex spatial patterns. Moreover, when the underlying unknown parameters are discrete (geologic facies distribution), conventional model calibration techniques that are designed for continuous parameters cannot be applied directly. In this paper, we introduce a novel pattern-based model calibration method to reconstruct discrete and spatially complex facies distributions from dynamic flow response data. To reproduce complex connectivity patterns during model calibration, we impose a feasibility constraint to ensure that the solution follows the expected higher-order spatial statistics. For model calibration, we adopt a regularized least-squares formulation, involving data mismatch, pattern connectivity, and feasibility constraint terms. Using an alternating directions optimization algorithm, the regularized objective function is divided into a continuous model calibration problem, followed by mapping the solution onto the feasible set. The feasibility constraint to honor the expected spatial statistics is implemented using a supervised machine learning algorithm. The two steps of the model calibration formulation are repeated until the convergence criterion is met. Several numerical examples are used to evaluate the performance of the developed method.
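    The two-step loop described above can be caricatured in a few lines: a gradient-descent pass on the continuous data-mismatch term, followed by projection onto the feasible set. Here the feasibility mapping is crudely replaced by rounding to binary facies indicators, whereas the paper uses a supervised machine learning model honoring higher-order spatial statistics; everything below is an illustrative toy:

```python
def mismatch(forward, m, obs):
    """Sum of squared residuals between simulated and observed responses."""
    return sum((f - o) ** 2 for f, o in zip(forward(m), obs))

def alternate_calibrate(forward, obs, m0, outer=5, inner=10, lr=0.1):
    """Alternate a continuous calibration pass (finite-difference gradient
    descent on the data mismatch) with projection onto the feasible set
    (here: rounding to binary facies values)."""
    m = list(m0)
    for _ in range(outer):
        for _ in range(inner):  # continuous model calibration subproblem
            for i in range(len(m)):
                eps = 1e-6
                mp = m[:]
                mp[i] += eps
                g = (mismatch(forward, mp, obs) - mismatch(forward, m, obs)) / eps
                m[i] -= lr * g
        m = [1.0 if v >= 0.5 else 0.0 for v in m]  # feasibility projection
    return m
```

The alternation matters: optimizing only in the continuous space produces geologically implausible intermediate facies, while projecting every step too aggressively can stall the data fit.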

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Darren M.

Sandia National Laboratories has tested and evaluated the Geotech Smart24 data acquisition system with active Fortezza crypto card data signing and authentication. The test results included in this report were in response to static and tonal-dynamic input signals. Most test methodologies used were based on IEEE Standards 1057 for Digitizing Waveform Recorders and 1241 for Analog to Digital Converters; others were designed by Sandia specifically for infrasound application evaluation and for supplementary criteria not addressed in the IEEE standards. The objective of this work was to evaluate the overall technical performance of the Geotech Smart24 digitizer with a Fortezza PCMCIA crypto card actively implementing the signing of data packets. The results of this evaluation were compared to relevant specifications provided within the manufacturer's documentation notes. The tests performed were chosen to demonstrate different performance aspects of the digitizer under test. The performance aspects tested include determining noise floor, least significant bit (LSB), dynamic range, cross-talk, relative channel-to-channel timing, time-tag accuracy, analog bandwidth and calibrator performance.
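    Two of the figures of merit listed above reduce to one-line formulas; a minimal sketch (the full IEEE 1057/1241 procedures involve sine-fit tests, which are not reproduced here, and the numbers in the test are illustrative):

```python
import math

def dynamic_range_db(full_scale_rms, noise_floor_rms):
    """Dynamic range in dB: ratio of full-scale input to system noise."""
    return 20.0 * math.log10(full_scale_rms / noise_floor_rms)

def lsb_weight(full_scale_volts_pp, bits):
    """Least significant bit weight of an ideal ADC: the input voltage
    change corresponding to one output code step."""
    return full_scale_volts_pp / (2 ** bits)
```

Measured dynamic range is always below the ideal implied by the bit count, because the noise floor includes analog front-end and clock-jitter contributions, which is precisely what instrument evaluations like this one quantify.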

  13. Climate, soil or both? Which variables are better predictors of the distributions of Australian shrub species?

    PubMed Central

    Esperón-Rodríguez, Manuel; Baumgartner, John B.; Beaumont, Linda J.

    2017-01-01

Background: Shrubs play a key role in biogeochemical cycles, prevent soil and water erosion, provide forage for livestock, and are a source of food, wood and non-wood products. However, despite their ecological and societal importance, the influence of different environmental variables on shrub distributions remains unclear. We evaluated the influence of climate and soil characteristics, and whether including soil variables improved the performance of a species distribution model (SDM), Maxent. Methods: This study assessed variation in predictions of environmental suitability for 29 Australian shrub species (representing dominant members of six shrubland classes) due to the use of alternative sets of predictor variables. Models were calibrated with (1) climate variables only, (2) climate and soil variables, and (3) soil variables only. Results: The predictive power of SDMs differed substantially across species, but generally models calibrated with both climate and soil data performed better than those calibrated only with climate variables. Models calibrated solely with soil variables were the least accurate. We found regional differences in potential shrub species richness across Australia due to the use of different sets of variables. Conclusions: Our study provides evidence that predicted patterns of species richness may be sensitive to the choice of predictor set when multiple, plausible alternatives exist, and demonstrates the importance of considering soil properties when modeling availability of habitat for plants. PMID:28652933

  14. Automated acid and base number determination of mineral-based lubricants by fourier transform infrared spectroscopy: commercial laboratory evaluation.

    PubMed

    Winterfield, Craig; van de Voort, F R

    2014-12-01

    The Fluid Life Corporation assessed and implemented Fourier transform infrared spectroscopy (FTIR)-based methods using American Society for Testing and Materials (ASTM)-like stoichiometric reactions for determination of acid and base number for in-service mineral-based oils. The basic protocols, quality control procedures, calibration, validation, and performance of these new quantitative methods are assessed. ASTM correspondence is attained using a mixed-mode calibration, using primary reference standards to anchor the calibration, supplemented by representative sample lubricants analyzed by ASTM procedures. A partial least squares calibration is devised by combining primary acid/base reference standards and representative samples, focusing on the main spectral stoichiometric response with chemometrics assisting in accounting for matrix variability. FTIR(AN/BN) methodology is precise, accurate, and free of most interference that affects ASTM D664 and D4739 results. Extensive side-by-side operational runs produced normally distributed differences with mean differences close to zero and standard deviations of 0.18 and 0.26 mg KOH/g, respectively. Statistically, the FTIR methods are a direct match to the ASTM methods, with superior performance in terms of analytical throughput, preparation time, and solvent use. FTIR(AN/BN) analysis is a viable, significant advance for in-service lubricant analysis, providing an economic means of trending samples instead of tedious and expensive conventional ASTM(AN/BN) procedures. © 2014 Society for Laboratory Automation and Screening.

  15. Bayesian calibration of terrestrial ecosystem models: A study of advanced Markov chain Monte Carlo methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Dan; Ricciuto, Daniel; Walker, Anthony

Calibration of terrestrial ecosystem models is important but challenging. Bayesian inference implemented by Markov chain Monte Carlo (MCMC) sampling provides a comprehensive framework to estimate model parameters and associated uncertainties using their posterior distributions. The effectiveness and efficiency of the method strongly depend on the MCMC algorithm used. In this study, a Differential Evolution Adaptive Metropolis (DREAM) algorithm was used to estimate posterior distributions of 21 parameters for the data assimilation linked ecosystem carbon (DALEC) model using 14 years of daily net ecosystem exchange data collected at the Harvard Forest Environmental Measurement Site eddy-flux tower. The DREAM is a multi-chain method and uses differential evolution technique for chain movement, allowing it to be efficiently applied to high-dimensional problems, and can reliably estimate heavy-tailed and multimodal distributions that are difficult for single-chain schemes using a Gaussian proposal distribution. The results were evaluated against the popular Adaptive Metropolis (AM) scheme. DREAM indicated that two parameters controlling autumn phenology have multiple modes in their posterior distributions while AM only identified one mode. The calibration of DREAM resulted in a better model fit and predictive performance compared to the AM. DREAM provides means for a good exploration of the posterior distributions of model parameters. Lastly, it reduces the risk of false convergence to a local optimum and potentially improves the predictive performance of the calibrated model.

  16. Bayesian calibration of terrestrial ecosystem models: A study of advanced Markov chain Monte Carlo methods

    DOE PAGES

    Lu, Dan; Ricciuto, Daniel; Walker, Anthony; ...

    2017-02-22

    Calibration of terrestrial ecosystem models is important but challenging. Bayesian inference implemented by Markov chain Monte Carlo (MCMC) sampling provides a comprehensive framework to estimate model parameters and associated uncertainties using their posterior distributions. The effectiveness and efficiency of the method strongly depend on the MCMC algorithm used. In this study, a Differential Evolution Adaptive Metropolis (DREAM) algorithm was used to estimate posterior distributions of 21 parameters for the data assimilation linked ecosystem carbon (DALEC) model using 14 years of daily net ecosystem exchange data collected at the Harvard Forest Environmental Measurement Site eddy-flux tower. DREAM is a multi-chain method that uses a differential evolution technique for chain movement, allowing it to be applied efficiently to high-dimensional problems and to reliably estimate heavy-tailed and multimodal distributions that are difficult for single-chain schemes using a Gaussian proposal distribution. The results were evaluated against the popular Adaptive Metropolis (AM) scheme. DREAM indicated that two parameters controlling autumn phenology have multiple modes in their posterior distributions, while AM identified only one mode. Calibration with DREAM resulted in a better model fit and predictive performance compared to AM. DREAM provides a means for thorough exploration of the posterior distributions of model parameters. Lastly, it reduces the risk of false convergence to a local optimum and potentially improves the predictive performance of the calibrated model.
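
    The multi-chain differential-evolution proposal that lets DREAM-type samplers handle multimodal posteriors can be sketched on a toy problem. The following is a minimal DE-MC sampler on a 1D bimodal target; the target density, chain count, and tuning constants are illustrative assumptions, not the DALEC setup.

```python
import numpy as np

def log_prob(x):
    # Toy bimodal target: equal mixture of N(-5, 1) and N(+5, 1).
    return np.logaddexp(-0.5 * (x + 5.0) ** 2, -0.5 * (x - 5.0) ** 2)

def de_mc(n_chains=8, n_iter=4000, gamma=1.68, eps=1e-3, seed=0):
    """Minimal DE-MC sketch: each chain proposes a jump along the scaled
    difference of two other chains' states, so proposal size and
    orientation adapt to the posterior, enabling jumps between modes."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, size=n_chains)  # initial chain states
    lp = log_prob(x)
    history = []
    for _ in range(n_iter):
        for i in range(n_chains):
            a, b = rng.choice([j for j in range(n_chains) if j != i],
                              size=2, replace=False)
            prop = x[i] + gamma * (x[a] - x[b]) + rng.normal(0.0, eps)
            lp_prop = log_prob(prop)
            if np.log(rng.uniform()) < lp_prop - lp[i]:  # Metropolis rule
                x[i], lp[i] = prop, lp_prop
        history.append(x.copy())
    return np.concatenate(history[n_iter // 2:])  # drop burn-in half

draws = de_mc()
```

    A single-chain Gaussian proposal tuned to one mode rarely visits the other; the chain-difference proposal reaches it because differences between chains sitting in different modes span the inter-mode distance.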

  17. Application of a laser scanner to three dimensional visual sensing tasks

    NASA Technical Reports Server (NTRS)

    Ryan, Arthur M.

    1992-01-01

    The issues associated with using a laser scanner for visual sensing are described, along with the methods developed by the author to address them. A laser scanner is a device that controls the direction of a laser beam by deflecting it through a pair of orthogonal mirrors, the orientations of which are specified by a computer. If a calibrated laser scanner is combined with a calibrated camera, it is possible to perform three dimensional sensing by directing the laser at objects within the field of view of the camera. Several issues associated with using a laser scanner for three dimensional visual sensing must be addressed in order to use the laser scanner effectively. First, methods are needed to calibrate the laser scanner. Second, methods to estimate three dimensional points using a calibrated camera and laser scanner are required. Third, methods are required for locating the laser spot in a cluttered image. Fourth, mathematical models that predict the laser scanner's performance and provide structure for three dimensional data points are necessary. Several methods were developed to address each of these issues and were evaluated to determine how and when they should be applied. The theoretical development, implementation, and results of their use in a dual-arm, eighteen-degree-of-freedom robotic system for space assembly are described.

  18. Landsat 8 operational land imager on-orbit geometric calibration and performance

    USGS Publications Warehouse

    Storey, James C.; Choate, Michael J.; Lee, Kenton

    2014-01-01

    The Landsat 8 spacecraft was launched on 11 February 2013 carrying the Operational Land Imager (OLI) payload for moderate resolution imaging in the visible, near infrared (NIR), and short-wave infrared (SWIR) spectral bands. During the 90-day commissioning period following launch, several on-orbit geometric calibration activities were performed to refine the prelaunch calibration parameters. The results of these calibration activities were subsequently used to measure geometric performance characteristics in order to verify the OLI geometric requirements. Three types of geometric calibrations were performed including: (1) updating the OLI-to-spacecraft alignment knowledge; (2) refining the alignment of the sub-images from the multiple OLI sensor chips; and (3) refining the alignment of the OLI spectral bands. The aspects of geometric performance that were measured and verified included: (1) geolocation accuracy with terrain correction, but without ground control (L1Gt); (2) Level 1 product accuracy with terrain correction and ground control (L1T); (3) band-to-band registration accuracy; and (4) multi-temporal image-to-image registration accuracy. Using the results of the on-orbit calibration update, all aspects of geometric performance were shown to meet or exceed system requirements.

  19. Forecasting Lightning Threat Using WRF Proxy Fields

    NASA Technical Reports Server (NTRS)

    McCaul, E. W., Jr.

    2010-01-01

    Objectives: Given that high-resolution WRF forecasts can capture the character of convective outbreaks, we seek to: 1. Create WRF forecasts of LTG threat (1-24 h), based on 2 proxy fields from explicitly simulated convection: - graupel flux near -15 C (captures LTG time variability) - vertically integrated ice (captures LTG threat area). 2. Calibrate each threat to yield accurate quantitative peak flash rate densities. 3. Also evaluate threats for areal coverage, time variability. 4. Blend threats to optimize results. 5. Examine sensitivity to model mesh, microphysics. Methods: 1. Use high-resolution 2-km WRF simulations to prognose convection for a diverse series of selected case studies. 2. Evaluate graupel fluxes; vertically integrated ice (VII). 3. Calibrate WRF LTG proxies using peak total LTG flash rate densities from NALMA; relationships look linear, with the regression line passing through the origin. 4. Truncate low threat values to make threat areal coverage match NALMA flash extent density obs. 5. Blend proxies to achieve optimal performance. 6. Study CAPS 4-km ensembles to evaluate sensitivities.
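
    Step 3 of the method, calibrating a proxy against observed peak flash rate densities with a regression line forced through the origin, reduces to a one-line least-squares estimate. A sketch follows; the proxy/observation pairs are hypothetical, for illustration only.

```python
import numpy as np

def calibrate_through_origin(proxy, observed):
    """Least-squares slope k for the model observed = k * proxy, with the
    regression line constrained to pass through the origin."""
    proxy = np.asarray(proxy, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return (proxy * observed).sum() / (proxy * proxy).sum()

# Hypothetical case-study pairs: peak proxy value vs. peak observed
# flash rate density (e.g. from NALMA).
proxy = np.array([1.0, 2.0, 3.0, 4.0])
observed = np.array([2.1, 3.9, 6.2, 7.8])
k = calibrate_through_origin(proxy, observed)
threat = np.clip(k * proxy, 0.5, None)  # step 4: truncate low threat values
```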

  20. A pollution fate and transport model application in a semi-arid region: Is some number better than no number?

    PubMed

    Özcan, Zeynep; Başkan, Oğuz; Düzgün, H Şebnem; Kentel, Elçin; Alp, Emre

    2017-10-01

    Fate and transport models are powerful tools that aid authorities in making unbiased decisions for developing sustainable management strategies. Application of pollution fate and transport models in semi-arid regions has been challenging because of unique hydrological characteristics and limited data availability. Significant temporal and spatial variability in rainfall events, complex interactions between soil, vegetation, and topography, and limited water quality and hydrological data due to an insufficient monitoring network make it a difficult task to develop reliable models in semi-arid regions. The performances of these models govern the final use of the outcomes, such as policy implementation, screening, and economic analysis. In this study, a deterministic distributed fate and transport model, SWAT, is applied in Lake Mogan Watershed, a semi-arid region dominated by dry agricultural practices, to estimate nutrient loads and to develop the water budget of the watershed. To minimize the discrepancy due to limited availability of historical water quality data, extensive efforts were made to collect site-specific data for model inputs such as soil properties, agricultural practice information, and land use. Moreover, calibration parameter ranges suggested in the literature are utilized during calibration in order to obtain a more realistic representation of Lake Mogan Watershed in the model. Model performance is evaluated using comparisons of the measured data with the 95% CI for the simulated data and comparison of unit pollution load estimations with those provided in the literature for similar catchments, in addition to commonly used evaluation criteria such as Nash-Sutcliffe simulation efficiency, coefficient of determination, and percent bias. 
These evaluations demonstrated that even though the model prediction power is not high according to the commonly used model performance criteria, the calibrated model may provide useful information in the comparison of the effects of different management practices on diffuse pollution and water quality in Lake Mogan Watershed. Copyright © 2017 Elsevier B.V. All rights reserved.
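
    The commonly used evaluation criteria named above are short formulas; a minimal sketch follows. The PBIAS sign convention varies across the literature, so the one used here is an assumption.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; values <= 0 mean
    the model is no better than predicting the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias: positive values indicate underestimation under the
    100 * sum(obs - sim) / sum(obs) convention assumed here."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

obs = [10.0, 20.0, 30.0, 40.0]   # hypothetical observed loads
sim = [12.0, 18.0, 33.0, 38.0]   # hypothetical simulated loads
```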

  1. Shortwave Radiometer Calibration Methods Comparison and Resulting Solar Irradiance Measurement Differences: A User Perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, Aron; Sengupta, Manajit; Andreas, Afshin

    Banks financing solar energy projects require assurance that these systems will produce the energy predicted. Furthermore, utility planners and grid system operators need to understand the impact of the variable solar resource on solar energy conversion system performance. Accurate solar radiation data sets reduce the expense associated with mitigating performance risk and assist in understanding the impacts of solar resource variability. The accuracy of solar radiation measured by radiometers depends on the instrument performance specification, installation method, calibration procedure, measurement conditions, maintenance practices, location, and environmental conditions. This study addresses the effect of different calibration methods provided by radiometric calibration service providers, such as NREL and manufacturers of radiometers, on the resulting calibration responsivity. Some of these radiometers are calibrated indoors and some outdoors. To establish or understand the differences in calibration methodology, we processed and analyzed field-measured data from these radiometers. This study investigates calibration responsivities provided by NREL's broadband outdoor radiometer calibration (BORCAL) and a few prominent manufacturers. The BORCAL method provides the outdoor calibration responsivity of pyranometers and pyrheliometers at a 45 degree solar zenith angle, and as a function of solar zenith angle determined by clear-sky comparisons with reference irradiance. The BORCAL method also employs a thermal offset correction to the calibration responsivity of single-black thermopile detectors used in pyranometers. Indoor calibrations of radiometers by their manufacturers are performed using a stable artificial light source in a side-by-side comparison between the test radiometer under calibration and a reference radiometer of the same type. In both methods, the reference radiometer calibrations are traceable to the World Radiometric Reference (WRR). 
These different methods of calibration demonstrated +1% to +2% differences in solar irradiance measurement. Analyzing these differences will ultimately help determine the uncertainty of the field radiometer data and guide the development of a consensus standard for calibration. Further advancing procedures for precisely calibrating radiometers to world reference standards that reduce measurement uncertainty will allow more accurate prediction of solar output and improve the bankability of solar projects.
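
    The responsivity being compared across calibration methods is simply the ratio of radiometer output signal to reference irradiance, applied in reverse in the field. A sketch with hypothetical numbers (a real BORCAL responsivity is reported at a 45 degree solar zenith angle and as a function of zenith angle):

```python
def responsivity(signal_uv, reference_irradiance_wm2):
    """Calibration: responsivity Rs in microvolts per W/m^2, the ratio of
    the test radiometer's output to the reference irradiance."""
    return signal_uv / reference_irradiance_wm2

def irradiance(signal_uv, rs):
    """Field use: convert a radiometer signal back to irradiance."""
    return signal_uv / rs

rs = responsivity(8500.0, 1000.0)  # hypothetical: 8500 uV at 1000 W/m^2
e = irradiance(7650.0, rs)         # a later field reading
```

    A +1% to +2% responsivity difference between two calibration methods maps directly into a 1-2% difference in every irradiance value the field radiometer reports.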

  2. Challenges in modeling the X-29 flight test performance

    NASA Technical Reports Server (NTRS)

    Hicks, John W.; Kania, Jan; Pearce, Robert; Mills, Glen

    1987-01-01

    Presented are methods, instrumentation, and difficulties associated with drag measurement of the X-29A aircraft. The initial performance objective of the X-29A program emphasized drag polar shapes rather than absolute drag levels. Priorities during the flight envelope expansion restricted the evaluation of aircraft performance. Changes in aircraft configuration, uncertainties in angle-of-attack calibration, and limitations in instrumentation complicated the analysis. Limited engine instrumentation with uncertainties in overall in-flight thrust accuracy made it difficult to obtain reliable values of coefficient of parasite drag. The aircraft was incapable of tracking the automatic camber control trim schedule for optimum wing flaperon deflection during typical dynamic performance maneuvers; this has also complicated the drag polar shape modeling. The X-29A was far enough off the schedule that the developed trim drag correction procedure has proven inadequate. However, good drag polar shapes have been developed throughout the flight envelope. Preliminary flight results have compared well with wind tunnel predictions. A more comprehensive analysis must be done to complete performance models. The detailed flight performance program with a calibrated engine will benefit from the experience gained during this preliminary performance phase.

  3. Challenges in modeling the X-29A flight test performance

    NASA Technical Reports Server (NTRS)

    Hicks, John W.; Kania, Jan; Pearce, Robert; Mills, Glen

    1987-01-01

    The paper presents the methods, instrumentation, and difficulties associated with drag measurement of the X-29A aircraft. The initial performance objective of the X-29A program emphasized drag polar shapes rather than absolute drag levels. Priorities during the flight envelope expansion restricted the evaluation of aircraft performance. Changes in aircraft configuration, uncertainties in angle-of-attack calibration, and limitations in instrumentation complicated the analysis. Limited engine instrumentation with uncertainties in overall in-flight thrust accuracy made it difficult to obtain reliable values of coefficient of parasite drag. The aircraft was incapable of tracking the automatic camber control trim schedule for optimum wing flaperon deflection during typical dynamic performance maneuvers; this has also complicated the drag polar shape modeling. The X-29A was far enough off the schedule that the developed trim drag correction procedure has proven inadequate. Despite these obstacles, good drag polar shapes have been developed throughout the flight envelope. Preliminary flight results have compared well with wind tunnel predictions. A more comprehensive analysis must be done to complete the performance models. The detailed flight performance program with a calibrated engine will benefit from the experience gained during this preliminary performance phase.

  4. Inertial Sensor Error Reduction through Calibration and Sensor Fusion.

    PubMed

    Lambrecht, Stefan; Nogueira, Samuel L; Bortole, Magdo; Siqueira, Adriano A G; Terra, Marco H; Rocon, Eduardo; Pons, José L

    2016-02-17

    This paper presents a comparison between cooperative and local Kalman filters (KF) for estimating the absolute segment angle under two calibration conditions: a simplified calibration that can be replicated in most laboratories, and a complex calibration similar to that applied by commercial vendors. The cooperative filters use information either from all inertial sensors attached to the body (Matricial KF) or from the inertial sensors and the potentiometers of an exoskeleton (Markovian KF). A one-minute walking trial of a subject walking with a 6-DoF exoskeleton was used to assess the absolute segment angle of the trunk, thigh, shank, and foot. The results indicate that regardless of the segment and filter applied, the more complex calibration always results in a significantly better performance compared to the simplified calibration. The interaction between filter and calibration suggests that when the quality of the calibration is unknown the Markovian KF is recommended. Applying the complex calibration, the Matricial and Markovian KF perform similarly, with average RMSE below 1.22 degrees. Cooperative KFs perform better than, or at least as well as, local KFs; we therefore recommend using cooperative KFs instead of local KFs for control or analysis of walking.
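
    The predict/correct cycle underlying all of the filters compared above can be sketched in scalar form. This is a generic one-state filter fusing an angular rate with an absolute angle measurement; the noise parameters are illustrative assumptions, and it is not the paper's Matricial or Markovian formulation.

```python
import numpy as np

def kalman_angle(gyro_rates, angle_meas, dt=0.01, q=1e-4, r=0.05):
    """Scalar Kalman filter: predict the segment angle by integrating the
    gyro rate, then correct with an absolute angle measurement (e.g. from
    an accelerometer or an exoskeleton potentiometer)."""
    x, p = 0.0, 1.0  # state estimate and its variance
    estimates = []
    for w, z in zip(gyro_rates, angle_meas):
        x, p = x + w * dt, p + q               # predict
        k = p / (p + r)                        # Kalman gain
        x, p = x + k * (z - x), (1.0 - k) * p  # correct
        estimates.append(x)
    return np.array(estimates)
```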

  5. Self-calibration of cone-beam CT geometry using 3D–2D image registration

    PubMed Central

    Ouadah, S; Stayman, J W; Gang, G J; Ehtiati, T; Siewerdsen, J H

    2016-01-01

    Robotic C-arms are capable of complex orbits that can increase field of view, reduce artifacts, improve image quality, and/or reduce dose; however, it can be challenging to obtain accurate, reproducible geometric calibration required for image reconstruction for such complex orbits. This work presents a method for geometric calibration for an arbitrary source-detector orbit by registering 2D projection data to a previously acquired 3D image. It also yields a method by which calibration of simple circular orbits can be improved. The registration uses a normalized gradient information similarity metric and the covariance matrix adaptation-evolution strategy optimizer for robustness against local minima and changes in image content. The resulting transformation provides a ‘self-calibration’ of system geometry. The algorithm was tested in phantom studies using both a cone-beam CT (CBCT) test-bench and a robotic C-arm (Artis Zeego, Siemens Healthcare) for circular and non-circular orbits. Self-calibration performance was evaluated in terms of the full-width at half-maximum (FWHM) of the point spread function in CBCT reconstructions, the reprojection error (RPE) of steel ball bearings placed on each phantom, and the overall quality and presence of artifacts in CBCT images. In all cases, self-calibration improved the FWHM—e.g. on the CBCT bench, FWHM = 0.86 mm for conventional calibration compared to 0.65 mm for self-calibration (p < 0.001). Similar improvements were measured in RPE—e.g. on the robotic C-arm, RPE = 0.73 mm for conventional calibration compared to 0.55 mm for self-calibration (p < 0.001). Visible improvement was evident in CBCT reconstructions using self-calibration, particularly about high-contrast, high-frequency objects (e.g. temporal bone air cells and a surgical needle). 
The results indicate that self-calibration can improve even upon systems with presumably accurate geometric calibration and is applicable to situations where conventional calibration is not feasible, such as complex non-circular CBCT orbits and systems with irreproducible source-detector trajectory. PMID:26961687

  6. Self-calibration of cone-beam CT geometry using 3D-2D image registration

    NASA Astrophysics Data System (ADS)

    Ouadah, S.; Stayman, J. W.; Gang, G. J.; Ehtiati, T.; Siewerdsen, J. H.

    2016-04-01

    Robotic C-arms are capable of complex orbits that can increase field of view, reduce artifacts, improve image quality, and/or reduce dose; however, it can be challenging to obtain accurate, reproducible geometric calibration required for image reconstruction for such complex orbits. This work presents a method for geometric calibration for an arbitrary source-detector orbit by registering 2D projection data to a previously acquired 3D image. It also yields a method by which calibration of simple circular orbits can be improved. The registration uses a normalized gradient information similarity metric and the covariance matrix adaptation-evolution strategy optimizer for robustness against local minima and changes in image content. The resulting transformation provides a ‘self-calibration’ of system geometry. The algorithm was tested in phantom studies using both a cone-beam CT (CBCT) test-bench and a robotic C-arm (Artis Zeego, Siemens Healthcare) for circular and non-circular orbits. Self-calibration performance was evaluated in terms of the full-width at half-maximum (FWHM) of the point spread function in CBCT reconstructions, the reprojection error (RPE) of steel ball bearings placed on each phantom, and the overall quality and presence of artifacts in CBCT images. In all cases, self-calibration improved the FWHM—e.g. on the CBCT bench, FWHM  =  0.86 mm for conventional calibration compared to 0.65 mm for self-calibration (p  <  0.001). Similar improvements were measured in RPE—e.g. on the robotic C-arm, RPE  =  0.73 mm for conventional calibration compared to 0.55 mm for self-calibration (p  <  0.001). Visible improvement was evident in CBCT reconstructions using self-calibration, particularly about high-contrast, high-frequency objects (e.g. temporal bone air cells and a surgical needle). 
The results indicate that self-calibration can improve even upon systems with presumably accurate geometric calibration and is applicable to situations where conventional calibration is not feasible, such as complex non-circular CBCT orbits and systems with irreproducible source-detector trajectory.
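
    The reprojection error used above as a calibration quality metric is straightforward to compute once a projection matrix is in hand. A sketch under an idealized pinhole geometry; the matrix and marker coordinates below are hypothetical, not from either test system.

```python
import numpy as np

def reprojection_error(P, points_3d, detected_uv):
    """Mean 2D distance between forward-projected 3D marker positions
    (e.g. steel ball bearings) and their detected centroids, given a
    3x4 projection matrix P."""
    pts_h = np.hstack([points_3d, np.ones((len(points_3d), 1))])
    proj = (P @ pts_h.T).T
    uv = proj[:, :2] / proj[:, 2:3]  # perspective divide
    return np.linalg.norm(uv - detected_uv, axis=1).mean()

# Hypothetical ideal pinhole geometry (unit focal length, no offsets);
# a real system would supply P from its geometric calibration.
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
pts = np.array([[1.0, 2.0, 4.0],
                [0.5, -1.0, 2.0]])
uv_detected = np.array([[0.25, 0.5],
                        [0.25, -0.5]])
```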

  7. Development and Evaluation of a Calibrator Material for Nucleic Acid-Based Assays for Diagnosing Aspergillosis

    PubMed Central

    Abdul-Ali, Deborah; Loeffler, Juergen; White, P. Lewis; Wickes, Brian; Herrera, Monica L.; Alexander, Barbara D.; Baden, Lindsey R.; Clancy, Cornelius; Denning, David; Nguyen, M. Hong; Sugrue, Michele; Wheat, L. Joseph; Wingard, John R.; Donnelly, J. Peter; Barnes, Rosemary; Patterson, Thomas F.; Caliendo, Angela M.

    2013-01-01

    Twelve laboratories evaluated candidate material for an Aspergillus DNA calibrator. The DNA material was quantified using limiting-dilution analysis; the mean concentration was determined to be 1.73 × 10¹⁰ units/ml. The calibrator can be used to standardize aspergillosis diagnostic assays which detect and/or quantify nucleic acid. PMID:23616459

  8. Development and evaluation of a calibrator material for nucleic acid-based assays for diagnosing aspergillosis.

    PubMed

    Lyon, G Marshall; Abdul-Ali, Deborah; Loeffler, Juergen; White, P Lewis; Wickes, Brian; Herrera, Monica L; Alexander, Barbara D; Baden, Lindsey R; Clancy, Cornelius; Denning, David; Nguyen, M Hong; Sugrue, Michele; Wheat, L Joseph; Wingard, John R; Donnelly, J Peter; Barnes, Rosemary; Patterson, Thomas F; Caliendo, Angela M

    2013-07-01

    Twelve laboratories evaluated candidate material for an Aspergillus DNA calibrator. The DNA material was quantified using limiting-dilution analysis; the mean concentration was determined to be 1.73 × 10¹⁰ units/ml. The calibrator can be used to standardize aspergillosis diagnostic assays which detect and/or quantify nucleic acid.

  9. The Geostationary Lightning Mapper: Its Performance and Calibration

    NASA Astrophysics Data System (ADS)

    Christian, H. J., Jr.

    2015-12-01

    The Geostationary Lightning Mapper (GLM) has been developed to be an operational instrument on the GOES-R series of spacecraft. The GLM is a unique instrument, unlike other meteorological instruments, both in how it operates and in the information content that it provides. Instrumentally, it is an event detector, rather than an imager. While processing almost a billion pixels per second with 14 bits of resolution, the event detection process reduces the required telemetry bandwidth by a factor of almost 10⁵, thus keeping the telemetry requirements modest and enabling efficient ground processing that leads to rapid data distribution to operational users. The GLM was designed to detect about 90 percent of the total lightning flashes within its almost hemispherical field of view. Based on laboratory calibration, we expect the on-orbit detection efficiency to be closer to 85%, making it the highest performing, large-area-coverage total lightning detector. It has a number of unique design features that will enable it to have near-uniform spatial resolution over most of its field of view and to operate with minimal impact on performance during solar eclipses. The GLM has no dedicated on-orbit calibration system, so the ground-based calibration provides the basis for the predicted radiometric performance. A number of problems were encountered during the calibration of Flight Model 1. The issues arose from GLM design features including its wide field of view, fast lens, the narrow-band interference filters located in both object and collimated space, and the fact that the GLM is inherently an event detector, yet the calibration procedures required calibration of both images and events. The GLM calibration techniques were based on those developed for the Lightning Imaging Sensor calibration, but there are enough differences between the sensors that the initial GLM calibration suggested the GLM is significantly more sensitive than its design parameters indicate. 
The calibration discrepancies have been resolved and will be discussed. Absolute calibration will be verified on-orbit using vicarious cloud reflections. In addition to details of the GLM calibration, the presentation will address the unique design of the GLM, its features, capabilities and performance.

  10. In-Flight Calibration Processes for the MMS Fluxgate Magnetometers

    NASA Technical Reports Server (NTRS)

    Bromund, K. R.; Leinweber, H. K.; Plaschke, F.; Strangeway, R. J.; Magnes, W.; Fischer, D.; Nakamura, R.; Anderson, B. J.; Russell, C. T.; Baumjohann, W.; et al.

    2015-01-01

    The calibration effort for the Magnetospheric Multiscale Mission (MMS) Analog Fluxgate (AFG) and Digital Fluxgate (DFG) magnetometers is a coordinated effort between three primary institutions: University of California, Los Angeles (UCLA); Space Research Institute, Graz, Austria (IWF); and Goddard Space Flight Center (GSFC). Since the successful deployment of all 8 magnetometers on 17 March 2015, the effort to confirm and update the ground calibrations has been underway during the MMS commissioning phase. The in-flight calibration processes evaluate twelve parameters that determine the alignment, orthogonalization, offsets, and gains for all 8 magnetometers using algorithms originally developed by UCLA and the Technical University of Braunschweig and tailored to MMS by IWF, UCLA, and GSFC. We focus on the processes run at GSFC to determine the eight parameters associated with spin tones and harmonics. We will also discuss the processing flow and interchange of parameters between GSFC, IWF, and UCLA. IWF determines the low range spin axis offsets using the Electron Drift Instrument (EDI). UCLA determines the absolute gains and sensor azimuth orientation using Earth field comparisons. We evaluate the performance achieved for MMS and give examples of the quality of the resulting calibrations.

  11. Bayesian correction for covariate measurement error: A frequentist evaluation and comparison with regression calibration.

    PubMed

    Bartlett, Jonathan W; Keogh, Ruth H

    2018-06-01

    Bayesian approaches for handling covariate measurement error are well established and yet arguably are still relatively little used by researchers. For some this is likely due to unfamiliarity or disagreement with the Bayesian inferential paradigm. For others a contributory factor is the inability of standard statistical packages to perform such Bayesian analyses. In this paper, we first give an overview of the Bayesian approach to handling covariate measurement error, and contrast it with regression calibration, arguably the most commonly adopted approach. We then argue why the Bayesian approach has a number of statistical advantages compared to regression calibration and demonstrate that implementing the Bayesian approach is usually quite feasible for the analyst. Next, we describe the closely related maximum likelihood and multiple imputation approaches and explain why we believe the Bayesian approach to generally be preferable. We then empirically compare the frequentist properties of regression calibration and the Bayesian approach through simulation studies. The flexibility of the Bayesian approach to handle both measurement error and missing data is then illustrated through an analysis of data from the Third National Health and Nutrition Examination Survey.
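
    Regression calibration itself is compact enough to sketch: fit a model for the true covariate given its error-prone surrogate on validation data, then substitute the prediction into the outcome model. A simulation sketch follows; the variable names, sample sizes, and internal-validation design are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
x = rng.normal(0.0, 1.0, n)              # true covariate
w = x + rng.normal(0.0, 1.0, n)          # error-prone measurement of x
y = 2.0 * x + rng.normal(0.0, 0.5, n)    # outcome; true slope is 2

# Naive analysis: regressing y on w attenuates the slope toward zero.
naive_slope = np.polyfit(w, y, 1)[0]

# Regression calibration: model E[X | W] on a validation subsample where
# the true x is observed, then regress y on the imputed covariate.
val = slice(0, 5000)
a, b = np.polyfit(w[val], x[val], 1)     # E[X | W] ~= a*w + b
x_hat = a * w + b
rc_slope = np.polyfit(x_hat, y, 1)[0]
```

    With equal covariate and error variances the naive slope is attenuated by roughly half, while the regression-calibrated slope recovers the true value; correcting the standard errors for the estimated calibration model is a separate step not shown here.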

  12. Results of the 1973 NASA/JPL balloon flight solar cell calibration program

    NASA Technical Reports Server (NTRS)

    Yasui, R. K.; Greenwood, R. F.

    1975-01-01

    High altitude balloon flights carried 37 standard solar cells for calibration above 99.5 percent of the earth's atmosphere. The cells were assembled into standard modules with appropriate resistors to load each cell at short circuit current. Each standardized module was mounted at the apex of the balloon on a sun tracker which automatically maintained normal incidence to the sun within 1.0 deg. The balloons were launched to reach a float altitude of approximately 36.6 km two hours before solar noon and remain at float altitude for two hours beyond solar noon. Telemetered calibration data on each standard solar cell was collected and recorded on magnetic tape. At the end of each float period the solar cell payload was separated from the balloon by radio command and descended via parachute to a ground recovery crew. Standard solar cells calibrated and recovered in this manner are used as primary intensity reference standards in solar simulators and in terrestrial sunlight for evaluating the performance of other solar cells and solar arrays with similar spectral response characteristics.

  13. Comparison of Analytical Methods for the Determination of Uranium in Seawater Using Inductively Coupled Plasma Mass Spectrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, Jordana R.; Gill, Gary A.; Kuo, Li-Jung

    2016-04-20

    Trace element determinations in seawater by inductively coupled plasma mass spectrometry are analytically challenging due to the typically very low concentrations of the trace elements and the potential interference of the salt matrix. In this study, we compared uranium analyses by inductively coupled plasma mass spectrometry (ICP-MS) of Sequim Bay seawater samples and three seawater certified reference materials (SLEW-3, CASS-5, and NASS-6) using seven different analytical approaches. The methods evaluated include: direct analysis, Fe/Pd reductive precipitation, standard addition calibration, online automated dilution using an external calibration with and without matrix matching, and online automated pre-concentration. The method which produced the most accurate results was standard addition calibration, recovering uranium from a Sequim Bay seawater sample at 101 ± 1.2%. The on-line preconcentration method and the automated dilution with matrix-matched calibration method also performed well. The two least effective methods were direct analysis and Fe/Pd reductive precipitation using sodium borohydride.
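
    Standard addition calibration, the most accurate method in this comparison, extrapolates the spiked-sample response line back to zero signal. A sketch with hypothetical spike levels and responses:

```python
import numpy as np

def standard_addition(added, signal):
    """Standard addition calibration: fit signal = m*added + b for the
    spiked sample and extrapolate; the unspiked concentration is b/m.
    (Any dilution correction is omitted for simplicity.)"""
    m, b = np.polyfit(np.asarray(added, float),
                      np.asarray(signal, float), 1)
    return b / m

# Hypothetical spikes (analyte added) and instrument responses;
# slope 2 counts per unit concentration, sample at 2 units.
added = [0.0, 1.0, 2.0, 3.0]
signal = [4.0, 6.0, 8.0, 10.0]
conc = standard_addition(added, signal)
```

    Because the calibration line is built in the sample's own matrix, matrix effects scale the slope and intercept together and cancel in the ratio, which is why the approach suits a heavy salt matrix like seawater.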

  14. Calibration of solid state nuclear track detectors at high energy ion beams for cosmic radiation measurements: HAMLET results

    NASA Astrophysics Data System (ADS)

    Szabó, J.; Pálfalvi, J. K.

    2012-12-01

    The MATROSHKA experiments and the related HAMLET project funded by the European Commission aimed to study the dose burden of the crew working on the International Space Station (ISS). During these experiments a human phantom equipped with several thousands of radiation detectors was exposed to cosmic rays inside and outside the ISS. Besides the measurements realized in Earth orbit, the HAMLET project included also a ground-based program of calibration and intercomparison of the different detectors applied by the participating groups using high-energy ion beams. The Space Dosimetry Group of the Centre for Energy Research (formerly Atomic Energy Research Institute) participated in these experiments with passive solid state nuclear track detectors (SSNTDs). The paper presents the results of the calibration experiments performed in the years 2008-2011 at the Heavy Ion Medical Accelerator (HIMAC) of the National Institute of Radiological Sciences (NIRS), Chiba, Japan. The data obtained serve as update and improvement for the previous calibration curves which are necessary for the evaluation of the SSNTDs exposed in unknown space radiation fields.

  15. Patient-dependent count-rate adaptive normalization for PET detector efficiency with delayed-window coincidence events

    NASA Astrophysics Data System (ADS)

    Niu, Xiaofeng; Ye, Hongwei; Xia, Ting; Asma, Evren; Winkler, Mark; Gagnon, Daniel; Wang, Wenli

    2015-07-01

    Quantitative PET imaging is widely used in clinical diagnosis in oncology and neuroimaging. Accurate normalization correction for the efficiency of each line-of-response is essential for accurate quantitative PET image reconstruction. In this paper, we propose a normalization calibration method that uses the delayed-window coincidence events from the scanning phantom or patient. The proposed method could dramatically reduce the ‘ring’ artifacts caused by mismatched system count-rates between the calibration and phantom/patient datasets. Moreover, a modified algorithm for mean detector efficiency estimation is proposed, which could generate crystal efficiency maps with more uniform variance. Both phantom and real patient datasets are used for evaluation. The results show that the proposed method could lead to better uniformity in reconstructed images by removing ring artifacts, and more uniform axial variance profiles, especially around the axial edge slices of the scanner. The proposed method could also simplify the normalization calibration procedure, since the calibration can be performed using the on-the-fly acquired delayed-window dataset.
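    The paper's modified estimator is not reproduced here; as a rough illustration of the underlying idea, delayed-window (randoms) counts on a line-of-response factor approximately as r_ij ∝ e_i·e_j, so per-crystal efficiencies can be recovered iteratively from fan sums. A hedged sketch on an invented 4-crystal toy system, not the authors' algorithm:

```python
# Classic fan-sum iteration for crystal efficiencies from randoms-like data:
# eff_i is updated as (sum of counts in crystal i's fan) / (sum of the other
# crystals' current efficiencies), then the mean efficiency is renormalized.

def fan_sum_efficiencies(counts, n_iter=50):
    n = len(counts)
    eff = [1.0] * n
    for _ in range(n_iter):
        new = []
        for i in range(n):
            fan = sum(counts[i][j] for j in range(n) if j != i)
            denom = sum(eff[j] for j in range(n) if j != i)
            new.append(fan / denom)
        mean = sum(new) / n
        eff = [e / mean for e in new]   # normalize mean efficiency to 1
    return eff

# Toy counts generated from true efficiencies [0.8, 1.0, 1.2, 1.0], scale 100
true = [0.8, 1.0, 1.2, 1.0]
counts = [[100.0 * true[i] * true[j] if i != j else 0.0 for j in range(4)]
          for i in range(4)]
eff = fan_sum_efficiencies(counts)
print([round(e, 3) for e in eff])
```

    In this noise-free toy the iteration converges back to the generating efficiencies; the paper's contribution is a modified estimator with more uniform variance on real count-limited data.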

  16. Calibration of Ge gamma-ray spectrometers for complex sample geometries and matrices

    NASA Astrophysics Data System (ADS)

    Semkow, T. M.; Bradt, C. J.; Beach, S. E.; Haines, D. K.; Khan, A. J.; Bari, A.; Torres, M. A.; Marrantino, J. C.; Syed, U.-F.; Kitto, M. E.; Hoffman, T. J.; Curtis, P.

    2015-11-01

    A comprehensive study of the efficiency calibration and calibration verification of Ge gamma-ray spectrometers was performed using semi-empirical, computational Monte-Carlo (MC), and transfer methods. The aim of this study was to evaluate the accuracy of the quantification of gamma-emitting radionuclides in complex matrices normally encountered in environmental and food samples. A wide range of gamma energies from 59.5 to 1836.0 keV and geometries from a 10-mL jar to a 1.4-L Marinelli beaker were studied on four Ge spectrometers with relative efficiencies between 102% and 140%. Density and coincidence summing corrections were applied. Innovative techniques were developed for the preparation of artificial complex matrices from materials such as acidified water, polystyrene, ethanol, sugar, and sand, resulting in densities ranging from 0.3655 to 2.164 g cm-3. They were spiked with gamma activity traceable to international standards and used for calibration verifications. A quantitative method of tuning MC calculations to experiment was developed based on a multidimensional chi-square paraboloid.
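    The paper's tuning works on a multidimensional chi-square paraboloid; a one-parameter sketch conveys the idea of fitting an MC model to measured efficiencies by minimizing chi-square. The "model" and data below are invented stand-ins for a real Monte-Carlo transport calculation:

```python
# Toy chi-square scan over one detector parameter: the minimum of the
# chi-square curve (here found by a grid scan) locates the tuned value.

def chi_square(param, energies, measured, sigma):
    # Toy efficiency model: eff(E) ~ (1 - 0.1 * param) / E  (illustrative only)
    return sum(((m - (1.0 - 0.1 * param) / e) / s) ** 2
               for e, m, s in zip(energies, measured, sigma))

energies = [100.0, 500.0, 1000.0]          # keV
measured = [0.0095, 0.0019, 0.00095]       # generated with param = 0.5
sigma    = [0.0002, 0.00005, 0.00002]      # measurement uncertainties

scan = [(p / 10.0, chi_square(p / 10.0, energies, measured, sigma))
        for p in range(0, 11)]
best = min(scan, key=lambda t: t[1])
print(best[0])  # → 0.5
```

    In the paper's multidimensional case the same logic applies: the chi-square surface is a paraboloid near its minimum, and its vertex gives the tuned MC parameters.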

  17. Ground Albedo Neutron Sensing (GANS) method for measurements of soil moisture in cropped fields

    NASA Astrophysics Data System (ADS)

    Andres Rivera Villarreyes, Carlos; Baroni, Gabriele; Oswald, Sascha E.

    2013-04-01

    Measurement of soil moisture at the plot or hill-slope scale is an important link between local vadose zone hydrology and catchment hydrology. However, so far only a few methods come close to closing this gap between point measurements and remote sensing. This study evaluates the applicability of Ground Albedo Neutron Sensing (GANS) for integral quantification of seasonal soil moisture in the root zone at the scale of a field or small watershed, making use of the crucial role of hydrogen as a neutron moderator relative to other landscape materials. GANS measurements were performed at two locations in Germany under different vegetation and seasonal conditions. Ground albedo neutrons were measured at (i) a lowland farmland at Bornim (Brandenburg) cropped with sunflower in 2011 and winter rye in 2012, and (ii) a mountainous farmland catchment (Schaefertal, Harz Mountains) since mid-2011. At both sites, depth profiles of soil moisture were measured in parallel at several locations by frequency domain reflectometry (FDR) for comparison and calibration. Initially, calibration parameters derived from a previous study with corn cover were tested for the sunflower and winter rye periods at the same farmland. GANS soil moisture based on these parameters showed a large discrepancy compared to classical soil moisture measurements. Therefore, two new calibration approaches and four different ways of integrating the soil moisture profile into an integral value for GANS were evaluated in this study. This included different sets of calibration parameters based on different growing periods of sunflower. The new calibration parameters showed good agreement with the FDR network during the sunflower period (RMSE = 0.023 m3 m-3), but they underestimated soil moisture in the winter rye period. The GANS approach turned out to be strongly affected by temporal changes in biomass and crop type, which suggests the need for neutron corrections in long-term observations with crop rotation. Finally, the Bornim sunflower parameters were transferred to the Schaefertal catchment for further evaluation. This study demonstrates the potential of GANS to close the measurement gap between the point scale and the remote sensing scale; however, its calibration needs to be adapted to the vegetation in cropped fields.
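    The calibration step in cosmic-ray/albedo neutron sensing commonly uses the Desilets et al. (2010) shape function to convert a neutron count rate into volumetric soil moisture; whether this exact form was used here is not stated in the abstract, so treat the sketch below, with an invented site parameter N0, as illustrative:

```python
# Standard cosmic-ray neutron calibration function (Desilets et al., 2010):
# theta(N) = a0 / (N/N0 - a1) - a2, where N0 is the site-calibrated count
# rate over dry soil. The a-coefficients are the published shape parameters.

A0, A1, A2 = 0.0808, 0.372, 0.115

def soil_moisture(n_counts, n0):
    """Volumetric water content (m3 m-3) from a neutron count rate."""
    return A0 / (n_counts / n0 - A1) - A2

n0 = 1200.0                      # hypothetical dry-soil count rate (cph)
print(round(soil_moisture(800.0, n0), 3))  # → 0.159
```

    Recalibrating for crop cover, as the study does, amounts to re-estimating N0 (and possibly the shape parameters) against the FDR profiles for each vegetation period.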

  18. SeaWiFS Postlaunch Calibration and Validation Analyses

    NASA Technical Reports Server (NTRS)

    Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); McClain, Charles R.; Ainsworth, Ewa J.; Barnes, Robert A.; Eplee, Robert E., Jr.; Patt, Frederick S.; Robinson, Wayne D.; Wang, Menghua; Bailey, Sean W.

    2000-01-01

    The effort to resolve data quality issues and improve on the initial data evaluation methodologies of the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Project was an extensive one. These evaluations have resulted, to date, in three major reprocessings of the entire data set where each reprocessing addressed the data quality issues that could be identified up to the time of each reprocessing. The number of chapters (21) needed to document this extensive work in the SeaWiFS Postlaunch Technical Report Series requires three volumes. The chapters in Volumes 9, 10, and 11 are in a logical order sequencing through sensor calibration, atmospheric correction, masks and flags, product evaluations, and bio-optical algorithms. The first chapter of Volume 9 is an overview of the calibration and validation program, including a table of activities from the inception of the SeaWiFS Project. Chapter 2 describes the fine adjustments of sensor detector knee radiances, i.e., radiance levels where three of the four detectors in each SeaWiFS band saturate. Chapters 3 and 4 describe the analyses of the lunar and solar calibration time series, respectively, which are used to track the temporal changes in radiometric sensitivity in each band. Chapter 5 outlines the procedure used to adjust band 7 relative to band 8 to derive reasonable aerosol radiances in band 7 as compared to those in band 8 in the vicinity of Lanai, Hawaii, the vicarious calibration site. Chapter 6 presents the procedure used to estimate the vicarious calibration gain adjustment factors for bands 1-6 using the water-leaving radiances from the Marine Optical Buoy (MOBY) offshore of Lanai. Chapter 7 provides the adjustments to the coccolithophore flag algorithm which were required for improved performance over the prelaunch version. Chapter 8 is an overview of the numerous modifications to the atmospheric correction algorithm that have been implemented.
Chapter 9 describes the methodology used to remove artifacts of sun glint contamination for portions of the imagery outside the sun glint mask. Finally, Chapter 10 explains a modification to the ozone interpolation method to account for actual time differences between the SeaWiFS and Total Ozone Mapping Spectrometer (TOMS) orbits.

  19. External validation of a Cox prognostic model: principles and methods

    PubMed Central

    2013-01-01

    Background A prognostic model should not enter clinical practice unless it has been demonstrated that it performs a useful role. External validation denotes evaluation of model performance in a sample independent of that used to develop the model. Unlike for logistic regression models, external validation of Cox models is sparsely treated in the literature. Successful validation of a model means achieving satisfactory discrimination and calibration (prediction accuracy) in the validation sample. Validating Cox models is not straightforward because event probabilities are estimated relative to an unspecified baseline function. Methods We describe statistical approaches to external validation of a published Cox model according to the level of published information, specifically (1) the prognostic index only, (2) the prognostic index together with Kaplan-Meier curves for risk groups, and (3) the first two plus the baseline survival curve (the estimated survival function at the mean prognostic index across the sample). The most challenging task, requiring level 3 information, is assessing calibration, for which we suggest a method of approximating the baseline survival function. Results We apply the methods to two comparable datasets in primary breast cancer, treating one as derivation and the other as validation sample. Results are presented for discrimination and calibration. We demonstrate plots of survival probabilities that can assist model evaluation. Conclusions Our validation methods are applicable to a wide range of prognostic studies and provide researchers with a toolkit for external validation of a published Cox model. PMID:23496923
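    The level-3 calibration check described above rests on one relation: given a published baseline survival curve S0(t) (the survival function at the mean prognostic index) and a patient's prognostic index PI, the Cox model predicts S(t | PI) = S0(t)^exp(PI − PI_mean), which can then be compared with observed Kaplan-Meier estimates. A hedged sketch with invented numbers:

```python
import math

# Predicted survival from a published Cox baseline curve. All values below
# (baseline survival, mean PI, patient PIs) are illustrative, not from the
# breast cancer datasets in the paper.

def predicted_survival(s0_t, pi, pi_mean):
    """S(t | PI) = S0(t) ** exp(PI - PI_mean)."""
    return s0_t ** math.exp(pi - pi_mean)

s0_5yr = 0.80        # published baseline survival at 5 years
pi_mean = 1.2        # mean prognostic index in the derivation sample
for pi in (0.5, 1.2, 2.0):
    print(pi, round(predicted_survival(s0_5yr, pi, pi_mean), 3))
```

    A patient at the mean PI reproduces the baseline curve exactly; higher-risk patients fall below it and lower-risk patients above it, which is what a calibration plot of predicted versus Kaplan-Meier survival per risk group checks.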

  20. Comprehensive Performance Evaluation for Hydrological and Nutrients Simulation Using the Hydrological Simulation Program–Fortran in a Mesoscale Monsoon Watershed, China

    PubMed Central

    Luo, Chuan; Jiang, Kaixia; Wan, Rongrong; Li, Hengpeng

    2017-01-01

    The Hydrological Simulation Program–Fortran (HSPF) is a hydrological and water quality computer model that was developed by the United States Environmental Protection Agency. Comprehensive performance evaluations were carried out for hydrological and nutrient simulation using the HSPF model in the Xitiaoxi watershed in China. Streamflow simulation was calibrated from 1 January 2002 to 31 December 2007 and then validated from 1 January 2008 to 31 December 2010 using daily observed data, and nutrient simulation was calibrated and validated using monthly observed data during the period from July 2009 to July 2010. The results of the model performance evaluation showed that streamflows were well simulated over the study period. The determination coefficient (R²) was 0.87, 0.77 and 0.63, and the Nash-Sutcliffe coefficient of efficiency (Ens) was 0.82, 0.76 and 0.65 for the streamflow simulation at annual, monthly and daily time-steps, respectively. Although limited to monthly observed data, satisfactory performance was still achieved in the quantitative evaluation for nutrients. The R² was 0.73, 0.82 and 0.92, and the Ens was 0.67, 0.74 and 0.86 for nitrate, ammonium and orthophosphate simulation, respectively. Some issues that may affect the application of HSPF, such as input data quality and parameter values, were also discussed. Overall, the HSPF model can be successfully used to describe streamflow and nutrient transport in a mesoscale watershed located in the East Asian monsoon climate area. This study is expected to serve as comprehensive and systematic documentation for understanding the HSPF model, supporting its wide application and helping avoid possible misuses. PMID:29257117

  1. Comprehensive Performance Evaluation for Hydrological and Nutrients Simulation Using the Hydrological Simulation Program-Fortran in a Mesoscale Monsoon Watershed, China.

    PubMed

    Li, Zhaofu; Luo, Chuan; Jiang, Kaixia; Wan, Rongrong; Li, Hengpeng

    2017-12-19

    The Hydrological Simulation Program-Fortran (HSPF) is a hydrological and water quality computer model that was developed by the United States Environmental Protection Agency. Comprehensive performance evaluations were carried out for hydrological and nutrient simulation using the HSPF model in the Xitiaoxi watershed in China. Streamflow simulation was calibrated from 1 January 2002 to 31 December 2007 and then validated from 1 January 2008 to 31 December 2010 using daily observed data, and nutrient simulation was calibrated and validated using monthly observed data during the period from July 2009 to July 2010. The results of the model performance evaluation showed that streamflows were well simulated over the study period. The determination coefficient (R²) was 0.87, 0.77 and 0.63, and the Nash-Sutcliffe coefficient of efficiency (Ens) was 0.82, 0.76 and 0.65 for the streamflow simulation at annual, monthly and daily time-steps, respectively. Although limited to monthly observed data, satisfactory performance was still achieved in the quantitative evaluation for nutrients. The R² was 0.73, 0.82 and 0.92, and the Ens was 0.67, 0.74 and 0.86 for nitrate, ammonium and orthophosphate simulation, respectively. Some issues that may affect the application of HSPF, such as input data quality and parameter values, were also discussed. Overall, the HSPF model can be successfully used to describe streamflow and nutrient transport in a mesoscale watershed located in the East Asian monsoon climate area. This study is expected to serve as comprehensive and systematic documentation for understanding the HSPF model, supporting its wide application and helping avoid possible misuses.
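    The two goodness-of-fit scores reported in these records can be computed as commonly defined; a minimal sketch with invented observed/simulated series:

```python
# Nash-Sutcliffe efficiency and coefficient of determination as used in
# hydrological model evaluation. The obs/sim series below are made up.

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of obs about its mean."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    svar = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / svar

def r_squared(obs, sim):
    """Coefficient of determination as the squared Pearson correlation."""
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(obs, sim))
    vo = sum((o - mo) ** 2 for o in obs)
    vs = sum((s - ms) ** 2 for s in sim)
    return cov * cov / (vo * vs)

obs = [1.0, 2.0, 3.0, 4.0, 5.0]
sim = [1.1, 1.9, 3.2, 3.8, 5.1]
print(round(nse(obs, sim), 3), round(r_squared(obs, sim), 3))
```

    NSE equals 1 for a perfect fit and drops below 0 when the model is worse than predicting the observed mean, which is why the study reports it alongside R².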

  2. CNV-ROC: A cost effective, computer-aided analytical performance evaluator of chromosomal microarrays

    PubMed Central

    Goodman, Corey W.; Major, Heather J.; Walls, William D.; Sheffield, Val C.; Casavant, Thomas L.; Darbro, Benjamin W.

    2016-01-01

    Chromosomal microarrays (CMAs) are routinely used in both research and clinical laboratories; yet, little attention has been given to the estimation of genome-wide true and false negatives during the assessment of these assays and how such information could be used to calibrate various algorithmic metrics to improve performance. Low-throughput, locus-specific methods such as fluorescence in situ hybridization (FISH), quantitative PCR (qPCR), or multiplex ligation-dependent probe amplification (MLPA) preclude rigorous calibration of various metrics used by copy number variant (CNV) detection algorithms. To aid this task, we have established a comparative methodology, CNV-ROC, which is capable of performing a high throughput, low cost, analysis of CMAs that takes into consideration genome-wide true and false negatives. CNV-ROC uses a higher resolution microarray to confirm calls from a lower resolution microarray and provides for a true measure of genome-wide performance metrics at the resolution offered by microarray testing. CNV-ROC also provides for a very precise comparison of CNV calls between two microarray platforms without the need to establish an arbitrary degree of overlap. Comparison of CNVs across microarrays is done on a per-probe basis and receiver operator characteristic (ROC) analysis is used to calibrate algorithmic metrics, such as log2 ratio threshold, to enhance CNV calling performance. CNV-ROC addresses a critical and consistently overlooked aspect of analytical assessments of genome-wide techniques like CMAs which is the measurement and use of genome-wide true and false negative data for the calculation of performance metrics and comparison of CNV profiles between different microarray experiments. PMID:25595567
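    The threshold-calibration idea behind CNV-ROC can be sketched without the pipeline itself: sweep a |log2 ratio| cutoff, score each candidate per probe against reference truth from the higher-resolution array, and keep the cutoff maximizing a ROC summary such as Youden's J = TPR − FPR. All probe values below are invented:

```python
# Grid search over |log2 ratio| cutoffs against per-probe reference calls.

def best_threshold(log2_ratios, truth, candidates):
    best_t, best_j = None, -1.0
    for t in candidates:
        calls = [abs(r) >= t for r in log2_ratios]
        tp = sum(1 for c, y in zip(calls, truth) if c and y)
        fp = sum(1 for c, y in zip(calls, truth) if c and not y)
        pos = sum(truth)
        neg = len(truth) - pos
        j = tp / pos - fp / neg          # Youden's J = TPR - FPR
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

ratios = [0.05, 0.6, -0.7, 0.1, 0.55, -0.02, 0.8, -0.65]
truth  = [False, True, True, False, True, False, True, True]
t_best, j_best = best_threshold(ratios, truth, [0.1, 0.3, 0.5, 0.7])
print(t_best, j_best)  # → 0.3 1.0
```

    The key point the abstract makes is that true negatives enter this calculation, which per-locus confirmation methods like FISH or qPCR cannot supply genome-wide.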

  3. Calibration of Airframe and Occupant Models for Two Full-Scale Rotorcraft Crash Tests

    NASA Technical Reports Server (NTRS)

    Annett, Martin S.; Horta, Lucas G.; Polanco, Michael A.

    2012-01-01

    Two full-scale crash tests of an MD-500 helicopter were conducted in 2009 and 2010 at NASA Langley's Landing and Impact Research Facility in support of NASA's Subsonic Rotary Wing Crashworthiness Project. The first crash test was conducted to evaluate the performance of an externally mounted composite deployable energy absorber under combined impact conditions. In the second crash test, the energy absorber was removed to establish baseline loads that are regarded as severe but survivable. Accelerations and kinematic data collected from the crash tests were compared to a system integrated finite element model of the test article. Results from 19 accelerometers placed throughout the airframe were compared to finite element model responses. The model developed for the purposes of predicting acceleration responses from the first crash test was inadequate when evaluating more severe conditions seen in the second crash test. A newly developed model calibration approach that includes uncertainty estimation, parameter sensitivity, impact shape orthogonality, and numerical optimization was used to calibrate model results for the second full-scale crash test. This combination of heuristic and quantitative methods was used to identify modeling deficiencies, evaluate parameter importance, and propose required model changes. It is shown that the multi-dimensional calibration techniques presented here are particularly effective in identifying model adequacy. Acceleration results for the calibrated model were compared to test results and the original model results. There was a noticeable improvement in the pilot and co-pilot region, a slight improvement in the occupant model response, and an over-stiffening effect in the passenger region. This approach should be adopted early on, in combination with the building-block approaches that are customarily used, for model development and test planning guidance.
Complete crash simulations with validated finite element models can be used to satisfy crash certification requirements, thereby reducing overall development costs.

  4. Evaluation of the Main Ceos Pseudo Calibration Sites Using Modis Brdf/albedo Products

    NASA Astrophysics Data System (ADS)

    Kharbouche, Said; Muller, Jan-Peter

    2016-06-01

    This work describes our findings from an evaluation of the stability and consistency of twenty primary PICSs (Pseudo-Invariant Calibration Sites). We present an analysis of 13 years of 8-daily MODIS products of BRDF parameters and white-sky albedo (WSA) over the shortwave band. This time series of WSA and BRDF shows that stability varies significantly from site to site. Using a 10x10 km window over all sites, the variation in WSA stability is around 4%, but the isotropy, an important element in inter-satellite calibration, can vary from 75% to 98%. Moreover, some PICSs, notably Libya-4, one of the most widely used, show significant and relatively fast changes in wintertime. The BRDF/albedo observations show that Libya-4 performs best, though not far ahead of sites such as Libya-1 and Mali. This study also reveals that the Niger-3 PICS has the longest continuous period of high stability per year, and that Sudan has the most isotropic surface. These observations have important implications for the use of these sites.

  5. Setting Standards for Reporting and Quantification in Fluorescence-Guided Surgery.

    PubMed

    Hoogstins, Charlotte; Burggraaf, Jan Jaap; Koller, Marjory; Handgraaf, Henricus; Boogerd, Leonora; van Dam, Gooitzen; Vahrmeijer, Alexander; Burggraaf, Jacobus

    2018-05-29

    Intraoperative fluorescence imaging (FI) is a promising technique that could potentially guide oncologic surgeons toward more radical resections and thus improve clinical outcome. Despite the increase in the number of clinical trials, fluorescent agents and imaging systems for intraoperative FI, a standardized approach for imaging system performance assessment and post-acquisition image analysis is currently unavailable. We conducted a systematic, controlled comparison between two commercially available imaging systems using a novel calibration device for FI systems and various fluorescent agents. In addition, we analyzed fluorescence images from previous studies to evaluate signal-to-background ratio (SBR) and determinants of SBR. Using the calibration device, imaging system performance could be quantified and compared, exposing relevant differences in sensitivity. Image analysis demonstrated a profound influence of background noise and the selection of the background on SBR. In this article, we suggest clear approaches for the quantification of imaging system performance assessment and post-acquisition image analysis, attempting to set new standards in the field of FI.
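    The SBR metric discussed above is usually computed as the ratio of mean signal-region intensity to mean background-region intensity; the two background ROIs below illustrate the abstract's point that the choice of background region strongly drives the reported value. All pixel intensities are invented:

```python
import statistics

# Signal-to-background ratio from ROI pixel means. Same lesion ROI, two
# different background selections, twofold different SBR.

def sbr(signal_pixels, background_pixels):
    return statistics.mean(signal_pixels) / statistics.mean(background_pixels)

tumor_roi = [220.0, 240.0, 260.0, 280.0]       # mean 250
bg_near   = [100.0, 110.0, 90.0, 100.0]        # adjacent tissue, mean 100
bg_far    = [40.0, 60.0, 50.0, 50.0]           # distant tissue, mean 50
print(sbr(tumor_roi, bg_near), sbr(tumor_roi, bg_far))  # → 2.5 5.0
```

    This is exactly why the authors argue for standardized, reported background-selection rules in fluorescence-guided surgery studies.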

  6. A Comparative Distributed Evaluation of the NWS-RDHM using Shape Matching and Traditional Measures with In Situ and Remotely Sensed Information

    NASA Astrophysics Data System (ADS)

    KIM, J.; Bastidas, L. A.

    2011-12-01

    We evaluate, calibrate and diagnose the performance of the National Weather Service RDHM distributed model over the Durango River Basin in Colorado, simultaneously using in situ and remotely sensed information from different discharge gaging stations (USGS), information about snow cover (SCV) and snow water equivalent (SWE) in situ from several SNOTEL sites, and snow information distributed over the catchment from remotely sensed data (NOAA-NASA). In the process of evaluation we attempt to establish, by calibration, the optimal degree of parameter distribution over the catchment. A multi-criteria approach based on traditional measures (RMSE) and similarity-based pattern comparisons using the Hausdorff and Earth Mover's Distance (EMD) approaches is used for the overall evaluation of model performance. These pattern-based (shape matching) approaches are found to be extremely relevant for accounting for the relatively large degree of inaccuracy in the remotely sensed SWE (judged inaccurate in terms of value but reliable in terms of distribution pattern) and the high reliability of the SCV (a yes/no situation), while at the same time allowing for an evaluation that quantifies the accuracy of the model over the entire catchment considering the different types of observations. The Hausdorff norm, due to its intrinsically multi-dimensional nature, allows for the incorporation of variables such as terrain elevation as one of the evaluation variables. The EMD, because of its extremely high computational burden, requires mapping the set of evaluation variables into a two-dimensional matrix for computation.
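    The Hausdorff measure used for the shape matching is, in its basic symmetric form, the largest nearest-neighbour distance between two point sets. A minimal sketch on invented toy point sets (a real application would use snow-pattern grid cells, possibly with elevation as an extra coordinate):

```python
import math

# Symmetric Hausdorff distance between two 2-D point sets.

def hausdorff(a, b):
    def directed(p, q):
        return max(min(math.dist(x, y) for y in q) for x in p)
    return max(directed(a, b), directed(b, a))

snow_map_model = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
snow_map_obs   = [(0.0, 0.0), (1.0, 0.0), (1.0, 2.0)]
print(hausdorff(snow_map_model, snow_map_obs))  # → 1.0
```

    Because only set geometry enters the measure, it rewards agreement in spatial pattern even when cell-by-cell values disagree, which is the property the study exploits for the inaccurate-but-well-patterned SWE product.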

  7. Blood transfer devices for malaria rapid diagnostic tests: evaluation of accuracy, safety and ease of use.

    PubMed

    Hopkins, Heidi; Oyibo, Wellington; Luchavez, Jennifer; Mationg, Mary Lorraine; Asiimwe, Caroline; Albertini, Audrey; González, Iveth J; Gatton, Michelle L; Bell, David

    2011-02-08

    Malaria rapid diagnostic tests (RDTs) are increasingly used by remote health personnel with minimal training in laboratory techniques. RDTs must, therefore, be as simple, safe and reliable as possible. Transfer of blood from the patient to the RDT is critical to safety and accuracy, and poses a significant challenge to many users. Blood transfer devices were evaluated for accuracy and precision of volume transferred, safety and ease of use, to identify the most appropriate devices for use with RDTs in routine clinical care. Five devices, a loop, straw-pipette, calibrated pipette, glass capillary tube, and a new inverted cup device, were evaluated in Nigeria, the Philippines and Uganda. The 227 participating health workers used each device to transfer blood from a simulated finger-prick site to filter paper. For each transfer, the number of attempts required to collect and deposit blood and any spilling of blood during transfer were recorded. Perceptions of ease of use and safety of each device were recorded for each participant. Blood volume transferred was calculated from the area of blood spots deposited on filter paper. The overall mean volumes transferred by devices differed significantly from the target volume of 5 microliters (p < 0.001). The inverted cup (4.6 microliters) most closely approximated the target volume. The glass capillary was excluded from volume analysis as the estimation method used is not compatible with this device. The calibrated pipette accounted for the largest proportion of blood exposures (23/225, 10%); exposures ranged from 2% to 6% for the other four devices. The inverted cup was considered easiest to use in blood collection (206/226, 91%); the straw-pipette and calibrated pipette were rated lowest (143/225 [64%] and 135/225 [60%] respectively). Overall, the inverted cup was the most preferred device (72%, 163/227), followed by the loop (61%, 138/227). 
The performance of blood transfer devices varied in this evaluation of accuracy, blood safety, ease of use, and user preference. The inverted cup design achieved the highest overall performance, while the loop also performed well. These findings have relevance for any point-of-care diagnostics that require blood sampling.

  8. Characteristic Performance Evaluation of a new SAGe Well Detector for Small and Large Sample Geometries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adekola, A.S.; Colaresi, J.; Douwen, J.

    2015-07-01

    Environmental scientific research requires a detector that has sensitivity low enough to reveal the presence of any contaminant in the sample at a reasonable counting time. Canberra developed the germanium detector geometry called the Small Anode Germanium (SAGe) Well detector, which is now available commercially. The SAGe Well detector is a new type of low-capacitance germanium well detector manufactured using small anode technology, capable of advancing many environmental scientific research applications. The performance of this detector has been evaluated for a range of sample sizes and geometries counted inside the well, and on the end cap of the detector. The detector has energy resolution performance similar to semi-planar detectors, and offers significant improvement over existing coaxial and well detectors. Energy resolution performance of 750 eV Full Width at Half Maximum (FWHM) at 122 keV γ-ray energy and resolution of 2.0 - 2.3 keV FWHM at 1332 keV γ-ray energy are guaranteed for detector volumes up to 425 cm³. The SAGe Well detector offers an optional 28 mm well diameter with the same energy resolution as the standard 16 mm well. Such outstanding resolution performance will benefit environmental applications in revealing the detailed radionuclide content of samples, particularly at low energy, and will enhance the detection sensitivity resulting in reduced counting time. The detector is compatible with electric coolers without any sacrifice in performance and supports the Canberra mathematical efficiency calibration methods (In Situ Object Calibration Software, or ISOCS, and Laboratory Sourceless Calibration Software, or LABSOCS). In addition, the SAGe Well detector supports true coincidence summing available in the ISOCS/LABSOCS framework. The improved resolution performance greatly enhances the detection sensitivity of this new detector for a range of sample sizes and geometries counted inside the well. This results in lower minimum detectable concentrations compared to traditional well detectors. The SAGe Well detectors are compatible with Marinelli beakers and compete very well with semi-planar and coaxial detectors for large samples in many applications. (authors)

  9. [Study on freshness evaluation of ice-stored large yellow croaker (Pseudosciaena crocea) using near infrared spectroscopy].

    PubMed

    Liu, Yuan; Chen, Wei-Hua; Hou, Qiao-Juan; Wang, Xi-Chang; Dong, Ruo-Yan; Wu, Hao

    2014-04-01

    Near infrared spectroscopy (NIR) was used in this experiment to evaluate the freshness of ice-stored large yellow croaker (Pseudosciaena crocea) during different storage periods, with TVB-N used as the freshness index. By comparing the correlation coefficients and standard errors of the calibration and validation sets of models built with different pretreatment methods (applied singly and in combination), different modeling methods and different wavelength regions, the best TVB-N models for ice-stored large yellow croaker sold on the market were established to predict freshness quickly. The best-performing model was established using normalization by closure (Ncl) with 1st derivative (Dbl) and normalization to unit length (Nle) with 1st derivative as the pretreatment methods, partial least squares (PLS) as the modeling method, and the wavelength regions of 5 000-7 144 and 7 404-10 000 cm(-1). The calibration model gave a correlation coefficient of 0.992 with a standard error of calibration of 1.045, and the validation model gave a correlation coefficient of 0.999 with a standard error of prediction of 0.990. This experiment combined several pretreatment methods and selected the best wavelength region, with good results. The approach has good prospects for application in rapid freshness detection and quality evaluation of large yellow croaker on the market.
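    The standard error of prediction quoted above is, in one common convention, the root-mean-square of the residuals between reference and NIR-predicted values; variants subtract the bias or use n−1. A minimal sketch with invented TVB-N values:

```python
# RMS-of-residuals form of the standard error of prediction (SEP); other
# conventions remove the mean bias first. Reference/predicted values invented.

def sep(reference, predicted):
    n = len(reference)
    return (sum((r - p) ** 2 for r, p in zip(reference, predicted)) / n) ** 0.5

ref  = [10.0, 14.0, 18.0, 22.0, 26.0]   # reference TVB-N, mg N / 100 g
pred = [10.5, 13.2, 18.6, 21.5, 26.4]   # NIR-predicted TVB-N
print(round(sep(ref, pred), 3))
```

    Together with the correlation coefficient, this is the pair of figures the study used to rank candidate pretreatment/wavelength combinations.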

  10. Evaluation of a Tactical Surface Metering Tool for Charlotte Douglas International Airport via Human-in-the-Loop Simulation

    NASA Technical Reports Server (NTRS)

    Verma, Savita; Lee, Hanbong; Dulchinos, Victoria L.; Martin, Lynne; Stevens, Lindsay; Jung, Yoon; Chevalley, Eric; Jobe, Kim; Parke, Bonny

    2017-01-01

    NASA has been working with the FAA and aviation industry partners to develop and demonstrate new concepts and technologies that integrate arrival, departure, and surface traffic management capabilities. In March 2017, NASA conducted a human-in-the-loop (HITL) simulation for integrated surface and airspace operations, modeling Charlotte Douglas International Airport, to evaluate the operational procedures and information requirements for the tactical surface metering tool, and data exchange elements between the airline controlled ramp and ATC Tower. In this paper, we focus on the calibration of the tactical surface metering tool using various metrics measured from the HITL simulation results. Key performance metrics include gate hold times from pushback advisories, taxi-in-out times, runway throughput, and departure queue size. Subjective metrics presented in this paper include workload, situational awareness, and acceptability of the metering tool and its calibration.

  11. Evaluation of a Tactical Surface Metering Tool for Charlotte Douglas International Airport Via Human-in-the-Loop Simulation

    NASA Technical Reports Server (NTRS)

    Verma, Savita; Lee, Hanbong; Martin, Lynne; Stevens, Lindsay; Jung, Yoon; Dulchinos, Victoria; Chevalley, Eric; Jobe, Kim; Parke, Bonny

    2017-01-01

    NASA has been working with the FAA and aviation industry partners to develop and demonstrate new concepts and technologies that integrate arrival, departure, and surface traffic management capabilities. In March 2017, NASA conducted a human-in-the-loop (HITL) simulation for integrated surface and airspace operations, modeling Charlotte Douglas International Airport, to evaluate the operational procedures and information requirements for the tactical surface metering tool, and data exchange elements between the airline controlled ramp and ATC Tower. In this paper, we focus on the calibration of the tactical surface metering tool using various metrics measured from the HITL simulation results. Key performance metrics include gate hold times from pushback advisories, taxi-in/out times, runway throughput, and departure queue size. Subjective metrics presented in this paper include workload, situational awareness, and acceptability of the metering tool and its calibration.

  12. Evaluation of digital PCR for absolute RNA quantification.

    PubMed

    Sanders, Rebecca; Mason, Deborah J; Foy, Carole A; Huggett, Jim F

    2013-01-01

    Gene expression measurements detailing mRNA quantities are widely employed in molecular biology and are increasingly important in diagnostic fields. Reverse transcription (RT), necessary for generating complementary DNA, can be both inefficient and imprecise, but remains a quintessential step in RNA analysis by qPCR. This study developed a Transcriptomic Calibration Material and assessed the RT reaction using digital (d)PCR for RNA measurement. While many studies characterise dPCR capabilities for DNA quantification, less work has been performed investigating similar parameters using RT-dPCR for RNA analysis. RT-dPCR measurement using three one-step RT-qPCR kits was evaluated in single and multiplex formats when measuring endogenous and synthetic RNAs. The best performing kit was compared to UV quantification, and sensitivity and technical reproducibility were investigated. Our results demonstrate that assay- and kit-dependent RT-dPCR measurements differed significantly from UV quantification. Different values were reported by different kits for each target, despite evaluation of identical samples on the same instrument. RT-dPCR did not display the strong inter-assay agreement previously described for DNA analysis. This study demonstrates that, as with DNA measurement, RT-dPCR is capable of accurate quantification of low copy RNA targets, but the results are both kit and target dependent, supporting the need for calibration controls.
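
The absolute quantification that dPCR provides rests on Poisson statistics: the fraction of positive partitions is converted into a mean copy number per partition. A minimal sketch of that standard correction, with illustrative counts and partition volume (not values from this study):

```python
import math

def dpcr_copies_per_ul(n_positive, n_total, partition_volume_nl):
    """Absolute dPCR concentration via the Poisson correction:
    lambda = -ln(1 - p), where p is the positive-partition fraction."""
    p = n_positive / n_total
    lam = -math.log(1.0 - p)                   # mean copies per partition
    return lam / partition_volume_nl * 1000.0  # copies per microlitre

# Illustrative run: 4800 of 20000 partitions positive, 0.85 nL partitions.
concentration = dpcr_copies_per_ul(4800, 20000, 0.85)
print(round(concentration, 1))
```

Because RT efficiency varies by kit and target, the abstract's point is that such a count reflects cDNA copies, not necessarily RNA copies, unless calibration controls are used.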

  13. Calibration of the Dutch-Flemish PROMIS Pain Behavior item bank in patients with chronic pain.

    PubMed

    Crins, M H P; Roorda, L D; Smits, N; de Vet, H C W; Westhovens, R; Cella, D; Cook, K F; Revicki, D; van Leeuwen, J; Boers, M; Dekker, J; Terwee, C B

    2016-02-01

    The aims of the current study were to calibrate the item parameters of the Dutch-Flemish PROMIS Pain Behavior item bank using a sample of Dutch patients with chronic pain and to evaluate cross-cultural validity between the Dutch-Flemish and the US PROMIS Pain Behavior item banks. Furthermore, reliability and construct validity of the Dutch-Flemish PROMIS Pain Behavior item bank were evaluated. The 39 items in the bank were completed by 1042 Dutch patients with chronic pain. To evaluate unidimensionality, a one-factor confirmatory factor analysis (CFA) was performed. A graded response model (GRM) was used to calibrate the items. To evaluate cross-cultural validity, differential item functioning (DIF) for language (Dutch vs. English) was assessed. Reliability of the item bank was also examined, and construct validity was studied using several legacy instruments, e.g. the Roland Morris Disability Questionnaire. CFA supported the unidimensionality of the Dutch-Flemish PROMIS Pain Behavior item bank (CFI = 0.960, TLI = 0.958); the data also fit the GRM and demonstrated good coverage across the pain behavior construct (threshold parameters range: -3.42 to 3.54). Analysis showed good cross-cultural validity (only six DIF items), reliability (Cronbach's α = 0.95) and construct validity (all correlations ≥0.53). The Dutch-Flemish PROMIS Pain Behavior item bank was found to have good cross-cultural validity, reliability and construct validity. The development of the Dutch-Flemish PROMIS Pain Behavior item bank will serve as the basis for Dutch-Flemish PROMIS short forms and computer adaptive testing (CAT). © 2015 European Pain Federation - EFIC®
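
Of the reliability statistics quoted above, Cronbach's α is easy to reproduce from raw item scores. A minimal sketch with a tiny invented score matrix (not the PROMIS data):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance
    of the total score), for an (n_persons, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_variances / total_variance)

# Four respondents, three 5-point items (illustrative data only).
x = [[1, 2, 1], [2, 3, 3], [3, 4, 4], [4, 5, 4]]
print(round(cronbach_alpha(x), 3))
```

An α of 0.95, as reported for the 39-item bank, indicates very high internal consistency.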

  14. Calibration transfer of a Raman spectroscopic quantification method for the assessment of liquid detergent compositions between two at-line instruments installed at two liquid detergent production plants.

    PubMed

    Brouckaert, D; Uyttersprot, J-S; Broeckx, W; De Beer, T

    2017-09-01

    Calibration transfer of partial least squares (PLS) quantification models is established between two Raman spectrometers located at two liquid detergent production plants. As full recalibration of existing calibration models is time-consuming, labour-intensive and costly, it is investigated whether the use of mathematical correction methods requiring only a handful of standardization samples can overcome the dissimilarities in spectral response observed between both measurement systems. Univariate and multivariate standardization approaches are investigated, ranging from simple slope/bias correction (SBC), local centring (LC) and single wavelength standardization (SWS) to more complex direct standardization (DS) and piecewise direct standardization (PDS). The results of these five calibration transfer methods are compared with one another, as well as against a full recalibration. Four PLS quantification models, each predicting the concentration of one of the four main ingredients in the studied liquid detergent composition, are targeted for transfer. Accuracy profiles are established from the original and transferred quantification models for validation purposes. A reliable representation of the calibration models' performance before and after transfer is thus established, based on β-expectation tolerance intervals. For each transferred model, it is investigated whether every future measurement performed in routine use will be close enough to the unknown true value of the sample. From this validation, it is concluded that instrument standardization is successful for three out of four investigated calibration models using multivariate (DS and PDS) transfer approaches. The fourth transferred PLS model could not be validated over the investigated concentration range, due to a lack of precision of the slave instrument. Comparing these transfer results to a full recalibration on the slave instrument allows comparison of the predictive power of both Raman systems and leads to the formulation of guidelines for further standardization projects. It is concluded that it is essential to evaluate the performance of the slave instrument prior to transfer, even when it is theoretically identical to the master apparatus. Copyright © 2017 Elsevier B.V. All rights reserved.
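
Of the correction methods listed, slope/bias correction (SBC) is the simplest: a univariate linear map fitted on the standardization samples' predicted concentrations. A minimal sketch under that reading, with invented prediction values:

```python
import numpy as np

def slope_bias_correction(y_master, y_slave):
    """Fit y_corrected = a * y_slave + b by least squares on a handful of
    standardization samples, so slave-instrument PLS predictions line up
    with the master instrument's."""
    a, b = np.polyfit(y_slave, y_master, deg=1)
    return lambda y: a * np.asarray(y, dtype=float) + b

# Illustrative predicted concentrations for 5 standardization samples.
master = np.array([10.0, 12.0, 15.0, 18.0, 20.0])
slave = np.array([10.8, 13.1, 16.2, 19.4, 21.5])
correct = slope_bias_correction(master, slave)
print(np.round(correct(slave), 2))
```

DS and PDS, which the study found necessary for a successful transfer, instead correct the spectra themselves (full-spectrum and windowed transformations, respectively) before the PLS model is applied.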

  15. Calibration of PMIS pavement performance prediction models.

    DOT National Transportation Integrated Search

    2012-02-01

    Improve the accuracy of TxDOT's existing pavement performance prediction models by calibrating these models using actual field data obtained from the Pavement Management Information System (PMIS). : Ensure logical performance superiority patte...

  16. Characterization of Global Near-Nadir Backscatter for Remote Sensing Radar Design

    NASA Technical Reports Server (NTRS)

    Spencer, Michael W.; Long, David G.

    2000-01-01

    In order to evaluate side-lobe contamination from the near-nadir region for Ku-Band radars, a statistical characterization of global near-nadir backscatter is constructed. This characterization is performed for a variety of surface types using data from TRMM, Seasat, and Topex. An assessment of the relative calibration accuracy of these sensors is also presented.

  18. Development of Rapid, Continuous Calibration Techniques and Implementation as a Prototype System for Civil Engineering Materials Evaluation

    NASA Astrophysics Data System (ADS)

    Scott, M. L.; Gagarin, N.; Mekemson, J. R.; Chintakunta, S. R.

    2011-06-01

    Until recently, civil engineering material calibration data could only be obtained from material sample cores or via time-consuming, stationary calibration measurements in a limited number of locations. Calibration data are used to determine material propagation velocities of electromagnetic waves in test materials for use in layer thickness measurements and subsurface imaging. The limitations these calibration methods impose have been a significant impediment to broader use of nondestructive evaluation methods such as ground-penetrating radar (GPR). In 2006, a new rapid, continuous calibration approach was designed using simulation software to address these measurement limitations during a Federal Highway Administration (FHWA) research and development effort. This continuous calibration method combines a digitally-synthesized step-frequency (SF)-GPR array and a data collection protocol sequence for the common midpoint (CMP) method. Modeling and laboratory test results for various data collection protocols and materials are presented in this paper. The continuous-CMP concept was implemented for FHWA in a prototype demonstration system called the Advanced Pavement Evaluation (APE) system in 2009. Data from the continuous-CMP protocol are processed using a semblance/coherency analysis to determine material propagation velocities. Continuously calibrated pavement thicknesses measured with the APE system in 2009 are presented. This method is efficient, accurate, and cost-effective.
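
The quantities the calibration supplies can be related with two standard equations: the propagation velocity in a low-loss dielectric and the thickness implied by a two-way travel time. A minimal sketch with assumed, illustrative values (not APE system data):

```python
C0 = 299_792_458.0  # speed of light in vacuum, m/s

def propagation_velocity(rel_permittivity):
    """EM propagation velocity in a low-loss dielectric: v = c / sqrt(eps_r)."""
    return C0 / rel_permittivity ** 0.5

def layer_thickness(velocity_m_s, two_way_time_ns):
    """Thickness = v * t / 2: the radar pulse crosses the layer twice."""
    return velocity_m_s * two_way_time_ns * 1e-9 / 2.0

# Assume eps_r = 6.0 for a concrete pavement layer and a 4.0 ns
# two-way reflection time (illustrative numbers).
v = propagation_velocity(6.0)
print(round(layer_thickness(v, 4.0), 3))  # thickness in metres
```

In the continuous-CMP approach, the velocity comes not from an assumed permittivity but from the semblance/coherency analysis of the multi-offset data.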

  19. Evaluation of four fast-response flow measurement devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gero, A.J.; Suppers, K.L.; Tomb, T.F.

    1988-01-01

    The Federal Mine Safety and Health Act of 1977 requires that sampling of dust in coal mine environments be conducted with an approved sampler operating at a flow rate of 2.0 liters of air per minute or at such other flow rate as prescribed by the Secretaries of Labor and of Health and Human Services. Standard procedures for calibration of these samplers within the Mine Safety and Health Administration utilize either a 3.0 liter capacity wet test meter (WTM) or a 1.0 liter soap film calibrator. Several new flow calibrating devices have become commercially available. This paper describes an evaluation conducted on four such devices: the Mast Model 823-2 bubble flowmeter, the Buck Calibrator, the Kurz Model 541S mass flowmeter and the Kurz Pocket Calibrator. The precision of a series of measurements made with each instrument was compared to the precision of a series of measurements made with the wet test meter. The comparison showed that the variability of calibration measurements obtained with the fast-response flow calibrators was between 1.5 and 4.5 times larger than that obtained with the WTM; however, with all of the calibration devices evaluated, three repetitive measurements were sufficient to obtain a precision of ±0.1 liters per minute.
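
The precision comparison above reduces to a ratio of sample standard deviations between a fast-response calibrator and the wet test meter. A minimal sketch with invented flow readings (the study's raw data are not reproduced here):

```python
import statistics

def variability_ratio(device_readings, wtm_readings):
    """Ratio of sample standard deviations: fast-response calibrator
    variability relative to the wet test meter (WTM)."""
    return statistics.stdev(device_readings) / statistics.stdev(wtm_readings)

# Illustrative repeated flow measurements in L/min.
wtm = [2.00, 2.01, 1.99, 2.00, 2.01]
bubble = [2.03, 1.97, 2.02, 1.96, 2.04]
print(round(variability_ratio(bubble, wtm), 1))
```

A ratio in the reported 1.5-4.5 range still permits ±0.1 L/min precision from the mean of three repeated measurements.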

  1. Field tests of a new, extractive, airborne 1.4 μm -TDLAS hygrometer (SEALDH-I) on a Learjet 35A

    NASA Astrophysics Data System (ADS)

    Buchholz, Bernhard; Ebert, Volker

    2013-04-01

    A highly accurate and precise quantification of atmospheric humidity is a prerequisite for cloud studies as well as for environmental models in order to gain a deeper understanding of physical processes and effects. On the one hand, numerous trace gas measurements in airborne "laboratories" have to be corrected for water vapor influence; on the other hand, satellite measurements have to be validated by in-situ H2O measurements on aircraft. The vast majority of airborne hygrometers require precise and frequent sensor calibration in order to ensure sufficient performance. UT/LS sensors in particular are often calibrated before and after each individual flight. But even this might not be sufficient, which explains why in-flight calibrations have recently become more common. Nevertheless, all calibrated sensors depend entirely on the performance of the water standard used for calibration. It therefore remains an open question whether in-flight calibrations are the way to go: they also might suffer from in-flight disturbances, and they would need validation under flight conditions. Water calibrations at low humidity are even more complicated due to the strong water adsorption and the resulting sampling problems. Abstaining from calibration would avoid many of these problems. In addition, calibration-free sensors are much easier to debug, as they can hardly have errors hidden by calibration parameters (such as leaks, etc.). Robust calibration-free sensors should therefore perform more stably in flight when the sensor's boundary conditions change. The situation can be improved further with extractive calibration-free sensors, as the boundary conditions in the measurement volume (pressure, temperature, path length, flow pattern, etc.), i.e. in an extractive cell, are much better controlled than for an open-path sensor. Furthermore, calibration-free extractive sensors can be designed to maintain their integrity when being attached to and detached from the carrier (airplane).
    This makes it much easier to validate the sensor function, e.g. by a direct comparison with a primary water standard, and to ensure traceability of the results to metrological standards. On the other hand, it remains important to investigate sampling effects and artifacts in order to provide true measurements of the outside air. SEALDH-I (Selective Extractive Airborne Laser Diode Hygrometer) is a new, absolute 1.37 μm Tunable Diode Laser Absorption Spectroscopy (TDLAS) hygrometer, which uses an advanced spectroscopic multiline fit and instrument stabilization process to enable a calibration-free [1] evaluation of TDLAS signals [2]. SEALDH-I is a compact (19" 4 HU), lightweight (23 kg), fully extractive TDL hygrometer especially designed for space- and weight-limited airborne applications. It is based on an internal optical cell with 1.5 m optical path length. SEALDH-I's time resolution is limited by the flow through the cell: with an unpressurized inlet and gas handling system, we achieve typical flows of 40 liters/min, which leads to gas exchange times on the order of 0.5 s. The laser scanning frequency of typically 140 Hz sets a maximum time resolution of 7 ms. Averaging data for about 2.1 s ensures an excellent precision of 0.033 ppmv, which corresponds to a bandwidth- and path-length-normalized precision of 72 ppbv·m·Hz^(-1/2). A dynamic range from 30 to 30000 ppmv has been demonstrated and already validated in a blind intercomparison campaign [3]. The fast measurements, excellent precision, validated accuracy, and absolute, calibration-free evaluation, in combination with the compact, robust setup, allow airborne measurements from ground level up to the lower stratosphere. Furthermore, its fast response time in combination with the large concentration range permits SEALDH-I to resolve fine atmospheric spatial structures and temporal fluctuations, particularly in clouds [4], where concentration gradients of 1000 ppmv per second can be present.
    We will present the results of the first successful flights of SEALDH-I on board a Learjet 35A. Further detailed evaluations of the in-flight data and a discussion of the performance and future application possibilities will be presented at the meeting. The flights, supported by enviscope GmbH, took place during the DENCHAR campaign (Development and Evaluation of Novel Compact Hygrometer for Airborne Research, Grant No 227159), organized by H. G. J. Smit (FZ Jülich) within the framework of the EU-funded EUFAR network. [1] C. Lauer, D. Weber, S. Wagner, and V. Ebert, "Calibration Free Measurement of Atmospheric Methane Background via Tunable Diode Laser Absorption Spectroscopy at 1.6 μm," Laser Applications to Chemical, Security and Environmental Analysis (LACSEA), St. Petersburg, Florida, USA, vol. LMA2, 2008. [2] V. Ebert and J. Wolfrum, "Absorption spectroscopy," in Optical Measurements: Techniques and Applications, ed. F. Mayinger, Springer, 1994, pp. 273-312. [3] B. Buchholz, B. Kühnreich, H. G. J. Smit, and V. Ebert, "Validation of an extractive, airborne, compact TDL spectrometer for atmospheric humidity sensing by blind intercomparison," Applied Physics B, DOI 10.1007/s00340-012-5143-1, Sep. 2012. [4] B. J. Murray, T. W. Wilson, S. Dobbie, Z. Cui, S. M. R. K. Al-Jumur, O. Möhler, M. Schnaiter, R. Wagner, S. Benz, M. Niemand, H. Saathoff, V. Ebert, S. Wagner, and B. Kärcher, "Heterogeneous nucleation of ice particles on glassy aerosols under cirrus conditions," Nature Geoscience, vol. 3, no. 4, pp. 233-237, Mar. 2010.

  2. Projection technologies for imaging sensor calibration, characterization, and HWIL testing at AEDC

    NASA Astrophysics Data System (ADS)

    Lowry, H. S.; Breeden, M. F.; Crider, D. H.; Steely, S. L.; Nicholson, R. A.; Labello, J. M.

    2010-04-01

    The characterization, calibration, and mission simulation testing of imaging sensors require continual involvement in the development and evaluation of radiometric projection technologies. Arnold Engineering Development Center (AEDC) uses these technologies to perform hardware-in-the-loop (HWIL) testing with high-fidelity complex scene projection technologies that involve sophisticated radiometric source calibration systems to validate sensor mission performance. Testing with the National Institute of Standards and Technology (NIST) Ballistic Missile Defense Organization (BMDO) transfer radiometer (BXR) and Missile Defense Agency (MDA) transfer radiometer (MDXR) offers improved radiometric and temporal fidelity in this cold-background environment. The development of hardware and test methodologies to accommodate wide field of view (WFOV), polarimetric, and multi/hyperspectral imaging systems is being pursued to support a variety of program needs such as space situational awareness (SSA). Test techniques for the acquisition of data needed for scene generation models (solar/lunar exclusion, radiation effects, etc.) are also needed and are being sought. The extension of HWIL testing to the 7V Chamber requires the upgrade of the current satellite emulation scene generation system. This paper provides an overview of pertinent technologies being investigated and implemented at AEDC.

  3. The laser control of the muon g -2 experiment at Fermilab

    DOE PAGES

    Anastasi, A.; Anastasio, A.; Avino, S.; ...

    2017-11-09

    The Muon g-2 Experiment at Fermilab is expected to start data taking in 2017. It will measure the muon anomalous magnetic moment, aμ = (gμ - 2)/2, to an unprecedented precision: the goal is 0.14 parts per million (ppm). The new experiment will require upgrades of detectors, electronics and data acquisition equipment to handle the much higher data volumes and slightly higher instantaneous rates. In particular, it will require continuous monitoring and state-of-the-art calibration of the detectors, whose response may vary on both millisecond and hour-long timescales. The calibration system is composed of six laser sources, and a light distribution system will provide short light pulses directly into each of the 54 crystals of the 24 calorimeters, which measure the energy and arrival time of the decay positrons. A Laser Control board will manage the interface between the experiment and the laser source, allowing the generation of light pulses according to specific needs, including detector calibration, study of detector performance in running conditions, and evaluation of DAQ performance. Here we present and discuss the main features of the Laser Control board.

  4. Frozen soil parameterization in a distributed biosphere hydrological model

    NASA Astrophysics Data System (ADS)

    Wang, L.; Koike, T.; Yang, K.; Jin, R.; Li, H.

    2009-11-01

    In this study, a frozen soil parameterization has been modified and incorporated into a distributed biosphere hydrological model (WEB-DHM). The WEB-DHM with the frozen scheme was then rigorously evaluated in a small cold area, the Binngou watershed, against in-situ observations from WATER (Watershed Allied Telemetry Experimental Research). In summer 2008, land surface parameters were optimized using the observed surface radiation fluxes and the soil temperature profile at the Dadongshu-Yakou (DY) station in July; soil hydraulic parameters were then obtained by calibration of the July soil moisture profile at the DY station and of the discharges at the basin outlet in July and August, a period covering the largest annual flood peak of 2008. The calibrated WEB-DHM with the frozen scheme was then used for a yearlong simulation from 21 November 2007 to 20 November 2008, to check its performance in cold seasons. Results showed that the WEB-DHM with the frozen scheme gives much better performance than the WEB-DHM without it, in the simulations of the soil moisture profile at the DY station and the discharges at the basin outlet over the yearlong simulation.
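
The standard criterion for judging such discharge calibrations is the Nash-Sutcliffe efficiency, usually alongside RMSE. A minimal sketch with invented discharge values (not the study's data):

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations.
    1.0 is a perfect fit; 0.0 is no better than the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

observed = [1.0, 3.0, 8.0, 5.0, 2.0]   # illustrative discharges, m^3/s
simulated = [1.2, 2.5, 7.0, 5.5, 2.3]
print(round(nash_sutcliffe(observed, simulated), 3),
      round(rmse(observed, simulated), 3))
```

Because the efficiency is normalized by the variance of the observations, it penalizes a model that misses the flood peaks even when average errors are small.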

  5. The Thomson scattering diagnostic at Wendelstein 7-X and its performance in the first operation phase

    NASA Astrophysics Data System (ADS)

    Bozhenkov, S. A.; Beurskens, M.; Dal Molin, A.; Fuchert, G.; Pasch, E.; Stoneking, M. R.; Hirsch, M.; Höfel, U.; Knauer, J.; Svensson, J.; Trimino Mora, H.; Wolf, R. C.

    2017-10-01

    The optimized stellarator Wendelstein 7-X started operation in December 2015 with a 10-week limiter campaign. Divertor experiments will begin in the second half of 2017. The W7-X Thomson scattering system is an essential diagnostic for electron density and temperature profiles. In this paper the Thomson scattering diagnostic is described in detail, including its design, calibration, data evaluation and first experimental results. Plans for further development are also presented. The W7-X Thomson system is a Nd:YAG setup with up to five lasers, two sets of light collection lenses viewing the entire plasma cross-section, fiber bundles and filter-based polychromators. To reduce hardware costs, two or three scattering volumes are measured with a single polychromator. The relative spectral calibration is carried out with the aid of a broadband supercontinuum light source. The absolute calibration is performed by observing Raman scattering in nitrogen. The electron temperatures and densities are recovered by Bayesian modelling. In the first campaign, the diagnostic was equipped for 10 scattering volumes. It provided temperature profiles comparable to those measured using an electron cyclotron emission diagnostic, and line-integrated densities within 10% of those from a dispersion interferometer.

  7. Evaluation of “Autotune” calibration against manual calibration of building energy models

    DOE PAGES

    Chaudhary, Gaurav; New, Joshua; Sanyal, Jibonananda; ...

    2016-08-26

    Our paper demonstrates the application of Autotune, a methodology aimed at automatically producing calibrated building energy models using measured data, in two case studies. In the first case, a building model is de-tuned by deliberately injecting faults into more than 60 parameters. This model was then calibrated using Autotune, and its accuracy with respect to the original model was evaluated in terms of the industry-standard normalized mean bias error and coefficient of variation of root mean squared error metrics set forth in ASHRAE Guideline 14. In addition to whole-building energy consumption, outputs including lighting, plug load profiles, HVAC energy consumption, zone temperatures, and other variables were analyzed. In the second case, Autotune calibration is compared directly to experts' manual calibration of an emulated-occupancy, full-size residential building, with comparable calibration results in much less time. Lastly, our paper concludes with a discussion of the key strengths and weaknesses of auto-calibration approaches.
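
The two ASHRAE Guideline 14 metrics named above can be sketched as follows. The guideline normalizes by the degrees of freedom n - p; p = 1 is assumed here, and the monthly energy values are invented:

```python
import numpy as np

def nmbe_percent(measured, simulated, p=1):
    """Normalized mean bias error (%), ASHRAE Guideline 14 style."""
    m, s = np.asarray(measured, float), np.asarray(simulated, float)
    return 100.0 * np.sum(s - m) / ((len(m) - p) * m.mean())

def cvrmse_percent(measured, simulated, p=1):
    """Coefficient of variation of the RMSE (%)."""
    m, s = np.asarray(measured, float), np.asarray(simulated, float)
    rmse = np.sqrt(np.sum((s - m) ** 2) / (len(m) - p))
    return 100.0 * rmse / m.mean()

measured = [100.0, 120.0, 90.0, 110.0]    # e.g. monthly kWh, illustrative
simulated = [98.0, 125.0, 88.0, 112.0]
print(round(nmbe_percent(measured, simulated), 3),
      round(cvrmse_percent(measured, simulated), 3))
```

NMBE captures systematic over- or under-prediction that can cancel out in aggregate, while CV(RMSE) captures scatter; a calibrated model must satisfy tolerances on both.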

  8. Climate change impact assessment on hydrology of a small watershed using semi-distributed model

    NASA Astrophysics Data System (ADS)

    Pandey, Brij Kishor; Gosain, A. K.; Paul, George; Khare, Deepak

    2017-07-01

    This study is an attempt to quantify the impact of climate change on the hydrology of the Armur watershed in the Godavari river basin, India. A GIS-based semi-distributed hydrological model, the soil and water assessment tool (SWAT), has been employed to estimate the water balance components on the basis of unique combinations of slope, soil and land cover classes for the baseline (1961-1990) and future climate scenarios (2071-2100). Sensitivity analysis of the model has been performed to identify the most critical parameters of the watershed. Average monthly calibration (1987-1994) and validation (1995-2000) have been performed using the observed discharge data. The coefficient of determination (R2), Nash-Sutcliffe efficiency (ENS) and root mean square error (RMSE) were used to evaluate the model performance. The calibrated SWAT setup has been used to evaluate the changes in water balance components of the future projection over the study area. HadRM3 regional climate model data have been used as input to the hydrological model for the climate change impact studies. The results show that average annual temperature (+3.25 °C), average annual rainfall (+28%), evapotranspiration (+28%) and water yield (+49%) all increase under the GHG scenario with respect to the baseline scenario.

  9. Integrated calibration sphere and calibration step fixture for improved coordinate measurement machine calibration

    DOEpatents

    Clifford, Harry J [Los Alamos, NM

    2011-03-22

    A method and apparatus for mounting a calibration sphere to a calibration fixture for Coordinate Measurement Machine (CMM) calibration and qualification is described, decreasing the time required for such qualification and thus allowing the CMM to be used more productively. A number of embodiments are disclosed that allow both new and retrofit manufacture of integrated calibration sphere and calibration fixture devices. This invention renders unnecessary the removal of a calibration sphere prior to CMM measurement of calibration features on calibration fixtures, thereby greatly reducing the time spent qualifying a CMM.

  10. Comparison Between One-Point Calibration and Two-Point Calibration Approaches in a Continuous Glucose Monitoring Algorithm

    PubMed Central

    Mahmoudi, Zeinab; Johansen, Mette Dencker; Christiansen, Jens Sandahl

    2014-01-01

    Background: The purpose of this study was to investigate the effect of using a 1-point calibration approach instead of a 2-point calibration approach on the accuracy of a continuous glucose monitoring (CGM) algorithm. Method: A previously published real-time CGM algorithm was compared with its updated version, which used a 1-point calibration instead of a 2-point calibration. In addition, the contribution of the corrective intercept (CI) to the calibration performance was assessed. Finally, the sensor background current was estimated in real time and retrospectively. The study was performed on 132 type 1 diabetes patients. Results: Replacing the 2-point calibration with the 1-point calibration improved the CGM accuracy, with the greatest improvement achieved in hypoglycemia (18.4% median absolute relative difference [MARD] in hypoglycemia for the 2-point calibration, and 12.1% MARD in hypoglycemia for the 1-point calibration). Using 1-point calibration increased the percentage of sensor readings in zone A+B of the Clarke error grid analysis (EGA) in the full glycemic range, and also enhanced hypoglycemia sensitivity. Exclusion of CI from calibration reduced hypoglycemia accuracy, while slightly increasing euglycemia accuracy. Both real-time and retrospective estimation of the sensor background current suggest that the background current can be considered zero in the calibration of the SCGM1 sensor. Conclusions: The sensor readings calibrated with the 1-point calibration approach were shown to have higher accuracy than those calibrated with the 2-point calibration approach. PMID:24876420
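
The accuracy metric (MARD) and the one-point calibration step can be sketched as below. The reference pair and sensor currents are invented; the zero background current follows the study's conclusion for the SCGM1 sensor:

```python
import numpy as np

def mard_percent(reference, cgm):
    """Median absolute relative difference (%) vs reference glucose."""
    ref, est = np.asarray(reference, float), np.asarray(cgm, float)
    return 100.0 * float(np.median(np.abs(est - ref) / ref))

def one_point_calibration(currents, ref_glucose, ref_current, background=0.0):
    """One reference pair fixes the sensitivity; glucose = (i - i0) / sens.
    With a known (here zero) background current, a single point suffices."""
    sensitivity = (ref_current - background) / ref_glucose
    return (np.asarray(currents, float) - background) / sensitivity

# Illustrative: reference pair (100 mg/dL, 20 nA); raw currents in nA.
glucose = one_point_calibration([10.0, 20.0, 30.0], 100.0, 20.0)
print(glucose, round(mard_percent([52.0, 100.0, 145.0], glucose), 2))
```

A two-point calibration would instead fit both slope and intercept from two reference pairs; the study's result is that fixing the intercept via the background current is the more accurate choice for this sensor.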

  11. RGB color calibration for quantitative image analysis: the "3D thin-plate spline" warping approach.

    PubMed

    Menesatti, Paolo; Angelini, Claudio; Pallottino, Federico; Antonucci, Francesca; Aguzzi, Jacopo; Costa, Corrado

    2012-01-01

    In recent years, the need to define color numerically by its coordinates in n-dimensional space has increased strongly. Colorimetric calibration is fundamental in food processing and other biological disciplines for quantitatively comparing samples' color during workflows involving many devices. Several software programmes are available to perform standardized colorimetric procedures, but they are often too imprecise for scientific purposes. In this study, we applied the Thin-Plate Spline interpolation algorithm to calibrate colours in sRGB space (the corresponding Matlab code is reported in the Appendix). This was compared with two other approaches: the first based on a commercial calibration system (ProfileMaker) and the second on a Partial Least Squares analysis. Moreover, to explore device variability and resolution, two different cameras were adopted, and for each sensor three consecutive pictures were acquired under four different lighting conditions. According to our results, the Thin-Plate Spline approach achieved very high calibration accuracy, opening the way for routine in-field colour quantification not only in food science but also in other biological disciplines. These results are of great importance for scientific color evaluation when lighting conditions are not controlled. Moreover, the approach allows the use of low-cost instruments while still returning scientifically sound quantitative data.
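
The paper's Matlab implementation is not reproduced here, but the same idea, a 3D thin-plate-spline warp from device RGB to reference RGB, can be sketched with SciPy's RBFInterpolator (all patch values invented):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Device-measured RGB of chart patches and their reference values
# (illustrative numbers, not the paper's data).
measured = np.array([[10, 12, 15], [240, 60, 50], [70, 230, 80],
                     [60, 70, 235], [250, 245, 40], [245, 250, 250],
                     [128, 130, 125], [200, 100, 180]], dtype=float)
reference = np.array([[0, 0, 0], [255, 0, 0], [0, 255, 0],
                      [0, 0, 255], [255, 255, 0], [255, 255, 255],
                      [128, 128, 128], [210, 90, 190]], dtype=float)

# One vector-valued thin-plate-spline interpolant warps all three
# channels at once; with no smoothing it passes through the patches.
warp = RBFInterpolator(measured, reference, kernel="thin_plate_spline")
print(np.allclose(warp(measured), reference, atol=1e-4))
```

New pixels are then corrected with `warp(pixels)`; in practice the fit would use a full colour chart (e.g. 24 or more patches) per lighting condition.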

  12. External validation of prognostic models to predict risk of gestational diabetes mellitus in one Dutch cohort: prospective multicentre cohort study.

    PubMed

    Lamain-de Ruiter, Marije; Kwee, Anneke; Naaktgeboren, Christiana A; de Groot, Inge; Evers, Inge M; Groenendaal, Floris; Hering, Yolanda R; Huisjes, Anjoke J M; Kirpestein, Cornel; Monincx, Wilma M; Siljee, Jacqueline E; Van 't Zelfde, Annewil; van Oirschot, Charlotte M; Vankan-Buitelaar, Simone A; Vonk, Mariska A A W; Wiegers, Therese A; Zwart, Joost J; Franx, Arie; Moons, Karel G M; Koster, Maria P H

    2016-08-30

    Objective: To perform an external validation and direct comparison of published prognostic models for early prediction of the risk of gestational diabetes mellitus, including predictors applicable in the first trimester of pregnancy. Design: External validation of all published prognostic models in a large scale, prospective, multicentre cohort study. Setting: 31 independent midwifery practices and six hospitals in the Netherlands. Participants: Women recruited in their first trimester (<14 weeks) of pregnancy between December 2012 and January 2014, at their initial prenatal visit. Women with pre-existing diabetes mellitus of any type were excluded. Main outcome measures: Discrimination of the prognostic models was assessed by the C statistic, and calibration was assessed by calibration plots. Results: 3723 women were included for analysis, of whom 181 (4.9%) developed gestational diabetes mellitus in pregnancy. 12 prognostic models for the disorder could be validated in the cohort. C statistics ranged from 0.67 to 0.78. Calibration plots showed that eight of the 12 models were well calibrated. The four models with the highest C statistics included almost all of the following predictors: maternal age, maternal body mass index, history of gestational diabetes mellitus, ethnicity, and family history of diabetes. Prognostic models had a similar performance in a subgroup of nulliparous women only. Decision curve analysis showed that the use of these four models always had a positive net benefit. Conclusions: In this external validation study, most of the published prognostic models for gestational diabetes mellitus show acceptable discrimination and calibration. The four models with the highest discriminative abilities in this study cohort, which also perform well in a subgroup of nulliparous women, are easy models to apply in clinical practice and therefore deserve further evaluation regarding their clinical impact. Published by the BMJ Publishing Group Limited.
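
The discrimination measure used in this record, the C statistic, can be sketched directly as the probability that a case receives a higher predicted risk than a non-case; the outcomes and predicted risks below are illustrative, not data from the study.

```python
import numpy as np

def c_statistic(y, p):
    """C statistic (concordance): probability that a case gets a higher
    predicted risk than a non-case; ties count as 1/2."""
    y = np.asarray(y)
    p = np.asarray(p, dtype=float)
    cases, controls = p[y == 1], p[y == 0]
    diff = cases[:, None] - controls[None, :]   # all case/control pairs
    return float((np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / diff.size)

# Illustrative predictions from a hypothetical GDM model; not study data.
y_true = np.array([0, 0, 0, 1, 0, 1, 0, 1])
p_hat = np.array([0.02, 0.05, 0.08, 0.30, 0.12, 0.25, 0.40, 0.35])
print(round(c_statistic(y_true, p_hat), 3))
```

Calibration, the other measure assessed, is visual here (plots of observed versus predicted risk), so only discrimination is sketched.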

  13. Techniques for improving the accuracy of cryogenic temperature measurement in ground test programs

    NASA Technical Reports Server (NTRS)

    Dempsey, Paula J.; Fabik, Richard H.

    1993-01-01

    The performance of a sensor is often evaluated by determining to what degree of accuracy a measurement can be made using this sensor. The absolute accuracy of a sensor is an important parameter considered when choosing the type of sensor to use in research experiments. Tests were performed to improve the accuracy of cryogenic temperature measurements by calibration of the temperature sensors when installed in their experimental operating environment. The calibration information was then used to correct for temperature sensor measurement errors by adjusting the data acquisition system software. This paper describes a method to improve the accuracy of cryogenic temperature measurements using corrections in the data acquisition system software such that the uncertainty of an individual temperature sensor is improved from plus or minus 0.90 deg R to plus or minus 0.20 deg R over a specified range.
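
The software correction described above can be sketched as a lookup of calibration errors measured in situ, interpolated and added to each raw reading; the calibration points below are hypothetical, not values from the report.

```python
import numpy as np

# Hypothetical in-situ calibration table: raw sensor reading vs. reference
# standard (deg R); values illustrative, not from the report.
raw_pts = np.array([140.0, 160.0, 180.0, 200.0])
ref_pts = np.array([140.6, 160.5, 180.3, 200.2])
corrections = ref_pts - raw_pts        # error table stored in the DAQ software

def correct(raw):
    """Apply the interpolated calibration correction to a raw temperature reading."""
    return raw + np.interp(raw, raw_pts, corrections)

print(correct(170.0))
```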

  14. A new form of the calibration curve in radiochromic dosimetry. Properties and results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tamponi, Matteo, E-mail: mtamponi@aslsassari.it; Bona, Rossana; Poggiu, Angela; Marini, Piergiorgio

    Purpose: This work describes a new form of the calibration curve for radiochromic dosimetry that depends on one fit parameter. Some results are reported to show that the new curve performs as well as those previously used and, more importantly, significantly reduces the dependence on the lot of films, the film orientation on the scanner, and the time after exposure. Methods: The form of the response curve makes use of the net optical densities ratio against the dose and has been studied by means of the Beer–Lambert law and a simple modeling of the film. The new calibration curve has been applied to EBT3 films exposed at 6 and 15 MV energy beams of linear accelerators and read out in transmission mode by means of a flatbed color scanner. Its performance has been compared to that of two established forms of the calibration curve, which use the optical density and the net optical density against the dose. Four series of measurements with four lots of EBT3 films were used to evaluate the precision, accuracy, and dependence on the time after exposure, orientation on the scanner, and lot of films. Results: The new calibration curve is roughly subject to the same dose uncertainty, about 2% (1 standard deviation), and has the same accuracy, about 1.5% (dose values between 50 and 450 cGy), as the other calibration curves when films of the same lot are used. Moreover, the new calibration curve, albeit obtained from only one lot of film, shows a good agreement with experimental data from all other lots of EBT3 films used, with an accuracy of about 2% and a relative dose precision of 2.4% (1 standard deviation). The agreement also holds for changes of the film orientation and of the time after exposure. Conclusions: The dose accuracy of this new form of the calibration curve is always equal to or better than those obtained from the two types of curves previously used. The use of the net optical densities ratio considerably reduces the dependence on the lot of films, the landscape/portrait orientation, and the time after exposure. This form of the calibration curve could become even more useful with new optical digital devices using monochromatic light.
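
Fitting a one-parameter calibration curve of response ratio against dose can be sketched with `scipy.optimize.curve_fit`; the saturating functional form and the dose/ratio data below are assumptions for the sketch, not the actual form or data from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative dose points (cGy) and synthetic "net OD ratio" responses
# generated from the assumed model with a = 250; not data from the paper.
dose = np.array([50.0, 100.0, 200.0, 300.0, 450.0])
ratio = dose / (dose + 250.0)

def model(d, a):
    """Assumed single-parameter saturating response: ratio = d / (d + a)."""
    return d / (d + a)

(a_fit,), _ = curve_fit(model, dose, ratio, p0=[100.0])
print(round(a_fit, 1))
```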

  15. A new form of the calibration curve in radiochromic dosimetry. Properties and results.

    PubMed

    Tamponi, Matteo; Bona, Rossana; Poggiu, Angela; Marini, Piergiorgio

    2016-07-01

    This work describes a new form of the calibration curve for radiochromic dosimetry that depends on one fit parameter. Some results are reported to show that the new curve performs as well as those previously used and, more importantly, significantly reduces the dependence on the lot of films, the film orientation on the scanner, and the time after exposure. The form of the response curve makes use of the net optical densities ratio against the dose and has been studied by means of the Beer-Lambert law and a simple modeling of the film. The new calibration curve has been applied to EBT3 films exposed at 6 and 15 MV energy beams of linear accelerators and read-out in transmission mode by means of a flatbed color scanner. Its performance has been compared to that of two established forms of the calibration curve, which use the optical density and the net optical density against the dose. Four series of measurements with four lots of EBT3 films were used to evaluate the precision, accuracy, and dependence on the time after exposure, orientation on the scanner and lot of films. The new calibration curve is roughly subject to the same dose uncertainty, about 2% (1 standard deviation), and has the same accuracy, about 1.5% (dose values between 50 and 450 cGy), as the other calibration curves when films of the same lot are used. Moreover, the new calibration curve, albeit obtained from only one lot of film, shows a good agreement with experimental data from all other lots of EBT3 films used, with an accuracy of about 2% and a relative dose precision of 2.4% (1 standard deviation). The agreement also holds for changes of the film orientation and of the time after exposure. The dose accuracy of this new form of the calibration curve is always equal to or better than those obtained from the two types of curves previously used. 
The use of the net optical densities ratio considerably reduces the dependence on the lot of films, the landscape/portrait orientation, and the time after exposure. This form of the calibration curve could become even more useful with new optical digital devices using monochromatic light.

  16. The Impact of Indoor and Outdoor Radiometer Calibration on Solar Measurements: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, Aron; Sengupta, Manajit; Andreas, Afshin

    2016-07-01

    Accurate solar radiation data sets are critical to reducing the expenses associated with mitigating performance risk for solar energy conversion systems, and they help utility planners and grid system operators understand the impacts of solar resource variability. The accuracy of solar radiation measured by radiometers depends on the instrument performance specification, installation method, calibration procedure, measurement conditions, maintenance practices, location, and environmental conditions. This study addresses the effect of calibration methodologies and the resulting calibration responsivities provided by radiometric calibration service providers such as the National Renewable Energy Laboratory (NREL) and manufacturers of radiometers. Some of these radiometers are calibrated indoors, and some are calibrated outdoors. To establish or understand the differences in calibration methodology, we processed and analyzed field-measured data from these radiometers. This study investigates calibration responsivities provided by NREL's broadband outdoor radiometer calibration (BORCAL) and a few prominent manufacturers. The reference radiometer calibrations are traceable to the World Radiometric Reference. These different methods of calibration demonstrated 1% to 2% differences in solar irradiance measurement. Analyzing these values will ultimately assist in determining the uncertainties of the radiometer data and will assist in developing consensus on a standard for calibration.
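
Because a radiometer signal is converted to irradiance by dividing by the calibration responsivity, a difference between indoor and outdoor responsivities maps directly into the reported irradiance; the signal and responsivity values below are hypothetical, chosen only to illustrate the 1% to 2% effect.

```python
# Irradiance from a thermopile pyranometer: E = V / R, where V is the signal (uV)
# and R the calibration responsivity (uV per W/m^2). Values are hypothetical.
signal_uV = 8000.0
R_indoor = 8.10    # responsivity from an indoor calibration (hypothetical)
R_outdoor = 7.98   # responsivity from an outdoor calibration (hypothetical)

E_indoor = signal_uV / R_indoor
E_outdoor = signal_uV / R_outdoor
diff_pct = (E_outdoor - E_indoor) / E_indoor * 100.0
print(round(diff_pct, 2))   # the responsivity difference appears directly in E
```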

  17. Reconstructing the calibrated strain signal in the Advanced LIGO detectors

    NASA Astrophysics Data System (ADS)

    Viets, A. D.; Wade, M.; Urban, A. L.; Kandhasamy, S.; Betzwieser, J.; Brown, Duncan A.; Burguet-Castell, J.; Cahillane, C.; Goetz, E.; Izumi, K.; Karki, S.; Kissel, J. S.; Mendell, G.; Savage, R. L.; Siemens, X.; Tuyenbayev, D.; Weinstein, A. J.

    2018-05-01

    Advanced LIGO’s raw detector output needs to be calibrated to compute dimensionless strain h(t). Calibrated strain data is produced in the time domain using both a low-latency, online procedure and a high-latency, offline procedure. The low-latency h(t) data stream is produced in two stages, the first of which is performed on the same computers that operate the detector’s feedback control system. This stage, referred to as the front-end calibration, uses infinite impulse response (IIR) filtering and performs all operations at a 16 384 Hz digital sampling rate. Due to several limitations, this procedure currently introduces certain systematic errors in the calibrated strain data, motivating the second stage of the low-latency procedure, known as the low-latency gstlal calibration pipeline. The gstlal calibration pipeline uses finite impulse response (FIR) filtering to apply corrections to the output of the front-end calibration. It applies time-dependent correction factors to the sensing and actuation components of the calibrated strain to reduce systematic errors. The gstlal calibration pipeline is also used in high latency to recalibrate the data, which is necessary due mainly to online dropouts in the calibrated data and identified improvements to the calibration models or filters.
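
The FIR-correction step can be illustrated in miniature with `scipy.signal.lfilter`; the filter taps, the scalar gain correction, and the input signal below are all hypothetical stand-ins, not the actual gstlal calibration filters or models.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
raw = rng.normal(size=4096)   # stand-in for the front-end calibration output

# Illustrative FIR correction: a short symmetric filter with unit DC gain.
taps = np.array([0.05, 0.2, 0.5, 0.2, 0.05])
corrected = lfilter(taps, 1.0, raw)

# A time-dependent scalar correction factor applied to one calibration path
# (hypothetical sensing-gain value).
kappa = 1.02
strain = kappa * corrected
```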

  18. Performance Evaluation of the Operational Air Quality Monitor for Water Testing Aboard the International Space Station

    NASA Technical Reports Server (NTRS)

    Wallace, William T.; Limero, Thomas F.; Gazda, Daniel B.; Minton, John M.; Macatangay, Ariel V.; Dwivedi, Prabha; Fernandez, Facundo M.

    2014-01-01

    Real-time environmental monitoring on ISS is necessary to provide data in a timely fashion and to help ensure astronaut health. Current real-time water TOC monitoring provides high-quality trending information, but compound-specific data are needed. The combination of ETV with the AQM showed that compounds of interest could be liberated from water and analyzed in the same manner as air samples. Calibration of the AQM using water samples allowed for the quantitative analysis of ISS archival samples. Some calibration issues remain, but the excellent accuracy of DMSD indicates that ETV holds promise as a sample introduction method for water analysis in spaceflight.

  19. Improved luminosity determination in pp collisions at √s = 7 TeV using the ATLAS detector at the LHC

    DOE PAGES

    Aad, G.; Abajyan, T.; Abbott, B.; ...

    2013-08-14

    The luminosity calibration for the ATLAS detector at the LHC during pp collisions at √s = 7 TeV in 2010 and 2011 is presented. Evaluation of the luminosity scale is performed using several luminosity-sensitive detectors, and comparisons are made of the long-term stability and accuracy of this calibration applied to the pp collisions at √s = 7 TeV. A luminosity uncertainty of δL/L = ±3.5% is obtained for the 47 pb⁻¹ of data delivered to ATLAS in 2010, and an uncertainty of δL/L = ±1.8% is obtained for the 5.5 fb⁻¹ delivered in 2011.

  20. Quantitative aspects of inductively coupled plasma mass spectrometry

    NASA Astrophysics Data System (ADS)

    Bulska, Ewa; Wagner, Barbara

    2016-10-01

    Accurate determination of elements in various kinds of samples is essential for many areas, including environmental science, medicine, as well as industry. Inductively coupled plasma mass spectrometry (ICP-MS) is a powerful tool enabling multi-elemental analysis of numerous matrices with high sensitivity and good precision. Various calibration approaches can be used to perform accurate quantitative measurements by ICP-MS. They include the use of pure standards, matrix-matched standards, or relevant certified reference materials, assuring traceability of the reported results. This review critically evaluates the advantages and limitations of different calibration approaches, which are used in quantitative analyses by ICP-MS. Examples of such analyses are provided. This article is part of the themed issue 'Quantitative mass spectrometry'.
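
The simplest of the calibration approaches reviewed here, external calibration against pure standards, can be sketched as a least-squares line of signal against concentration that is then inverted for unknowns; the isotope intensities and concentrations below are hypothetical values for illustration.

```python
import numpy as np

# Hypothetical external calibration for one isotope: standard concentrations
# (ug/L) and measured intensities (counts/s); values illustrative.
conc = np.array([0.0, 1.0, 5.0, 10.0, 20.0])
counts = np.array([120.0, 5120.0, 25120.0, 50120.0, 100120.0])

slope, intercept = np.polyfit(conc, counts, 1)   # least-squares calibration line

def quantify(sample_counts):
    """Invert the calibration line to report a sample concentration."""
    return (sample_counts - intercept) / slope

print(round(quantify(30120.0), 2))
```

Matrix-matched standards and certified reference materials follow the same arithmetic; they differ in how closely the standards mimic the sample matrix.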
