Sample records for specific calibration problems

  1. Using Multiple Calibration Indices in Order to Capture the Complex Picture of What Affects Students' Accuracy of Feeling of Confidence

    ERIC Educational Resources Information Center

    Boekaerts, Monique; Rozendaal, Jeroen S.

    2010-01-01

    The present study used multiple calibration indices to capture the complex picture of fifth graders' calibration of feeling of confidence in mathematics. Specifically, the effects of gender, type of mathematical problem, instruction method, and time of measurement (before and after problem solving) on calibration skills were investigated. Fourteen…

  2. Redundant interferometric calibration as a complex optimization problem

    NASA Astrophysics Data System (ADS)

    Grobler, T. L.; Bernardi, G.; Kenyon, J. S.; Parsons, A. R.; Smirnov, O. M.

    2018-05-01

    Observations of the redshifted 21 cm line from the epoch of reionization have recently motivated the construction of low-frequency radio arrays with highly redundant configurations. These configurations provide an alternative calibration strategy - "redundant calibration" - and boost sensitivity on specific spatial scales. In this paper, we formulate calibration of redundant interferometric arrays as a complex optimization problem. We solve this optimization problem via the Levenberg-Marquardt algorithm. This calibration approach is more robust to initial conditions than current algorithms and, by leveraging an approximate matrix inversion, allows for further optimization and an efficient implementation ("redundant STEFCAL"). We also investigate the preconditioned conjugate gradient method as an alternative to the approximate matrix inverse, but find that its computational performance is not competitive with that of "redundant STEFCAL". The efficient implementation of this new algorithm is made publicly available.
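
    For illustration, the complex optimization described above can be reproduced in miniature. The sketch below is our own (not the authors' released code): it poses redundant gain calibration for a small east-west array as a real-valued least-squares problem over antenna gains and per-group visibilities, and hands it to SciPy's Levenberg-Marquardt driver. The array layout, noise level, and starting point are invented for the example.

```python
# Hypothetical sketch: redundant calibration as a least-squares problem
# solved with Levenberg-Marquardt. Model: v_pq = g_p * conj(g_q) * y_b,
# where b is the redundant-baseline group of antenna pair (p, q).
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
n_ant, n_grp = 6, 5                        # 6 antennas on a regular line
pairs = [(p, q) for p in range(n_ant) for q in range(p + 1, n_ant)]
grp = [q - p - 1 for p, q in pairs]        # group index = baseline length - 1

g_true = 1 + 0.1 * (rng.normal(size=n_ant) + 1j * rng.normal(size=n_ant))
y_true = rng.normal(size=n_grp) + 1j * rng.normal(size=n_grp)
v_obs = np.array([g_true[p] * np.conj(g_true[q]) * y_true[b]
                  for (p, q), b in zip(pairs, grp)])
v_obs += 0.01 * (rng.normal(size=v_obs.size) + 1j * rng.normal(size=v_obs.size))

def unpack(x):
    g = x[:n_ant] + 1j * x[n_ant:2 * n_ant]
    y = x[2 * n_ant:2 * n_ant + n_grp] + 1j * x[2 * n_ant + n_grp:]
    return g, y

def residuals(x):                          # LM needs real-valued residuals
    g, y = unpack(x)
    model = np.array([g[p] * np.conj(g[q]) * y[b]
                      for (p, q), b in zip(pairs, grp)])
    r = v_obs - model
    return np.concatenate([r.real, r.imag])

# Start from unit gains; the overall amplitude degeneracy of the model is
# left to the damping of the LM step in this toy example.
x0 = np.concatenate([np.ones(n_ant), np.zeros(n_ant + 2 * n_grp)])
sol = least_squares(residuals, x0, method="lm")
print("residual norm:", np.linalg.norm(sol.fun))
```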

  3. The algorithm for automatic detection of the calibration object

    NASA Astrophysics Data System (ADS)

    Artem, Kruglov; Irina, Ugfeld

    2017-06-01

    The problem of automatic image calibration is considered in this paper. The most challenging task in automatic calibration is proper detection of the calibration object. Solving this problem required applying digital image processing methods and algorithms such as morphology, filtering, edge detection, and shape approximation. The step-by-step development of the algorithm and its adaptation to the specific conditions of log cuts in the image background is presented. Testing of the automatic calibration module was carried out under the production conditions of a logging enterprise. In these tests, the calibration object was isolated automatically in 86.1% of cases on average, with no type 1 errors. The algorithm was implemented in the automatic calibration module within mobile software for log deck volume measurement.
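
    The pipeline named in the abstract (filtering, morphology, edge detection, shape approximation) maps onto generic OpenCV calls. The sketch below is ours, not the published module; all thresholds, and the assumption that the calibration object is roughly circular against a background of log cuts, are illustrative.

```python
# Hypothetical detection pipeline in the spirit of the abstract:
# filtering -> morphology -> edge detection -> shape approximation.
import cv2
import numpy as np

def find_calibration_object(bgr_image):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)                          # noise filtering
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    gray = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, kernel)  # morphology
    edges = cv2.Canny(gray, 50, 150)                        # edge detection
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    best, best_score = None, 0.0
    for c in contours:
        area = cv2.contourArea(c)
        if area < 500:                                      # discard specks
            continue
        peri = cv2.arcLength(c, True)
        approx = cv2.approxPolyDP(c, 0.01 * peri, True)     # shape approximation
        circularity = 4 * np.pi * area / (peri * peri)      # 1.0 for a circle
        if len(approx) > 8 and circularity > best_score:
            best, best_score = c, circularity
    return best          # contour of the most circular candidate, or None
```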

  4. Improving a complex finite-difference ground water flow model through the use of an analytic element screening model

    USGS Publications Warehouse

    Hunt, R.J.; Anderson, M.P.; Kelson, V.A.

    1998-01-01

    This paper demonstrates that analytic element models have potential as powerful screening tools that can facilitate or improve calibration of more complicated finite-difference and finite-element models. We demonstrate how a two-dimensional analytic element model was used to identify errors in a complex three-dimensional finite-difference model caused by incorrect specification of boundary conditions. An improved finite-difference model was developed using boundary conditions developed from a far-field analytic element model. Calibration of a revised finite-difference model was achieved using fewer zones of hydraulic conductivity and lake bed conductance than the original finite-difference model. Calibration statistics were also improved in that simulated base-flows were much closer to measured values. The improved calibration is due mainly to improved specification of the boundary conditions made possible by first solving the far-field problem with an analytic element model.

  5. Linear and nonlinear trending and prediction for AVHRR time series data

    NASA Technical Reports Server (NTRS)

    Smid, J.; Volf, P.; Slama, M.; Palus, M.

    1995-01-01

    The variability of the AVHRR calibration coefficient in time was analyzed using algorithms of linear and nonlinear time series analysis. Specifically, we used spline trend modeling, autoregressive process analysis, an incremental neural network learning algorithm, and redundancy functional testing. The analysis performed on the available AVHRR data sets revealed that (1) the calibration data have nonlinear dependencies, (2) the calibration data depend strongly on the target temperature, (3) both the calibration coefficients and the temperature time series can be modeled, to a first approximation, as autonomous dynamical systems, and (4) the high-frequency residuals of the analyzed data sets are best modeled as an autoregressive process of 10th order. We have dealt with a nonlinear identification problem and the problem of noise filtering (data smoothing). System identification and filtering are significant problems for AVHRR data sets. The algorithms outlined in this study can be used for future EOS missions. Prediction and smoothing algorithms for time series of calibration data provide a functional characterization of the data. These algorithms can be particularly useful when calibration data are incomplete or sparse.
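
    As one concrete reading of item (4), an AR(10) model of the high-frequency residuals can be fitted in a few lines. This sketch uses statsmodels with a synthetic series standing in for the (unavailable) detrended AVHRR calibration data.

```python
# Minimal sketch, assuming "autoregressive process of 10th order" means an
# AR(10) model of the high-frequency residuals; the series is synthetic.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(1)
resid = rng.normal(size=500)                 # placeholder for HF residuals
for t in range(2, 500):                      # inject mild AR structure
    resid[t] += 0.5 * resid[t - 1] - 0.3 * resid[t - 2]

model = AutoReg(resid, lags=10).fit()
print(model.params)                          # intercept + 10 AR coefficients
forecast = model.predict(start=len(resid), end=len(resid) + 9)
print(forecast)                              # 10-step-ahead prediction
```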

  6. A definitive calibration record for the Landsat-5 thematic mapper anchored to the Landsat-7 radiometric scale

    USGS Publications Warehouse

    Teillet, P.M.; Helder, D.L.; Ruggles, T.A.; Landry, R.; Ahern, F.J.; Higgs, N.J.; Barsi, J.; Chander, G.; Markham, B.L.; Barker, J.L.; Thome, K.J.; Schott, J.R.; Palluconi, Frank Don

    2004-01-01

    A coordinated effort on the part of several agencies has led to the specification of a definitive radiometric calibration record for the Landsat-5 thematic mapper (TM) for its lifetime since launch in 1984. The time-dependent calibration record for Landsat-5 TM has been placed on the same radiometric scale as the Landsat-7 enhanced thematic mapper plus (ETM+). It has been implemented in the National Landsat Archive Production Systems (NLAPS) in use in North America. This paper documents the results of this collaborative effort and the specifications for the related calibration processing algorithms. The specifications include (i) anchoring of the Landsat-5 TM calibration record to the Landsat-7 ETM+ absolute radiometric calibration, (ii) new time-dependent calibration processing equations and procedures applicable to raw Landsat-5 TM data, and (iii) algorithms for recalibration computations applicable to some of the existing processed datasets in the North American context. The cross-calibration between Landsat-5 TM and Landsat-7 ETM+ was achieved using image pairs from the tandem-orbit configuration period that was programmed early in the Landsat-7 mission. The time-dependent calibration for Landsat-5 TM is based on a detailed trend analysis of data from the on-board internal calibrator. The new lifetime radiometric calibration record for Landsat-5 will overcome problems with earlier product generation owing to inadequate maintenance and documentation of the calibration over time and will facilitate the quantitative examination of a continuous, near-global dataset at 30-m scale that spans almost two decades.

  7. Development and testing of item response theory-based item banks and short forms for eye, skin and lung problems in sarcoidosis.

    PubMed

    Victorson, David E; Choi, Seung; Judson, Marc A; Cella, David

    2014-05-01

    Sarcoidosis is a multisystem disease that can negatively impact health-related quality of life (HRQL) across generic (e.g., physical, social and emotional wellbeing) and disease-specific (e.g., pulmonary, ocular, dermatologic) domains. Measurement of HRQL in sarcoidosis has largely relied on generic patient-reported outcome tools, with few disease-specific measures available. The purpose of this paper is to present the development and testing of disease-specific item banks and short forms for lung, skin and eye problems, which are part of a new patient-reported outcome (PRO) instrument called the sarcoidosis assessment tool. After prioritizing and selecting the most important disease-specific domains, we wrote new items to reflect disease-specific problems by drawing from patient focus group and clinician expert survey data that were used to create our conceptual model of HRQL in sarcoidosis. Item pools underwent cognitive interviews by sarcoidosis patients (n = 13), and minor modifications were made. These items were administered in a multi-site study (n = 300) to obtain item calibrations and create calibrated short forms using item response theory (IRT) approaches. From the available item pools, we created four new item banks and short forms: (1) skin problems, (2) skin stigma, (3) lung problems, and (4) eye problems. We also created and tested supplemental forms covering the most common constitutional symptoms and negative effects of corticosteroids. Several new sarcoidosis-specific PROs were developed and tested using IRT approaches. These new measures can advance more precise and targeted HRQL assessment in sarcoidosis clinical trials and clinical practice.

  8. Neuromusculoskeletal model self-calibration for on-line sequential bayesian moment estimation

    NASA Astrophysics Data System (ADS)

    Bueno, Diana R.; Montano, L.

    2017-04-01

    Objective. Neuromusculoskeletal models involve many subject-specific physiological parameters that need to be adjusted to adequately represent muscle properties. Traditionally, neuromusculoskeletal models have been calibrated with a forward-inverse dynamic optimization, which is time-consuming and infeasible for rehabilitation therapy. Until now, only non-self-calibrating algorithms have been applied to these models. To the best of our knowledge, the algorithm proposed in this work is the first on-line calibration algorithm for muscle models that allows a generic model to be adjusted to different subjects in a few steps. Approach. In this paper we propose a reformulation of the traditional muscle models that is able to sequentially estimate the kinetics (net joint moments) and also perform full self-calibration (estimating the subject-specific internal parameters of the muscle from a set of arbitrary uncalibrated data), based on the unscented Kalman filter. The nonlinearity of the model, as well as its calibration problem, obliged us to adopt a sum-of-Gaussians filter suitable for nonlinear systems. Main results. This sequential Bayesian self-calibration algorithm achieves a complete muscle model calibration using as input only a dataset of uncalibrated sEMG and kinematics data. The approach is validated experimentally using data from the upper limbs of 21 subjects. Significance. The results show the feasibility of neuromusculoskeletal model self-calibration. This study will contribute to a better understanding of the generalization of muscle models for subject-specific rehabilitation therapies. Moreover, this work is very promising for rehabilitation devices such as electromyography-driven exoskeletons or prostheses.
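
    The core idea, augmenting the filter state with the unknown subject-specific parameters so that joint moments and calibration are estimated jointly, can be sketched with a generic sigma-point (unscented) update. The scalar "muscle model" below is a toy stand-in of ours, not the authors' model.

```python
# Illustrative sketch: augmented-state unscented update, where the state
# holds both the quantity of interest (a joint moment) and an unknown
# calibration parameter (a gain in a toy sEMG-to-moment map).
import numpy as np

def sigma_points(mean, cov, kappa=1.0):
    n = mean.size
    S = np.linalg.cholesky((n + kappa) * cov)
    pts = [mean] + [mean + S[:, i] for i in range(n)] \
                 + [mean - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w

def ukf_update(mean, cov, z, h, R):
    pts, w = sigma_points(mean, cov)
    Z = np.array([h(p) for p in pts])
    z_hat = w @ Z
    Pzz = sum(wi * np.outer(zi - z_hat, zi - z_hat) for wi, zi in zip(w, Z)) + R
    Pxz = sum(wi * np.outer(pi - mean, zi - z_hat)
              for wi, pi, zi in zip(w, pts, Z))
    K = Pxz @ np.linalg.inv(Pzz)
    return mean + K @ (z - z_hat), cov - K @ Pzz @ K.T

# Augmented state x = [moment m, unknown gain a]; toy observation model
# z = a * activation * m (hypothetical, for illustration only).
activation = 0.8
h = lambda x: np.array([x[1] * activation * x[0]])
mean, cov, R = np.array([1.0, 0.5]), np.eye(2), np.array([[0.01]])
for z in [1.1, 1.05, 1.2]:                   # fake measurements
    mean, cov = ukf_update(mean, cov, np.array([z]), h, R)
print(mean)                                  # moment and calibrated gain
```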

  9. The analytical calibration in (bio)imaging/mapping of the metallic elements in biological samples--definitions, nomenclature and strategies: state of the art.

    PubMed

    Jurowski, Kamil; Buszewski, Bogusław; Piekoszewski, Wojciech

    2015-01-01

    Nowadays, studies related to the distribution of metallic elements in biological samples are among the most important issues. There are many articles dedicated to specific analytical atomic spectrometry techniques used for mapping/(bio)imaging the metallic elements in various kinds of biological samples. However, this literature lacks articles dedicated to reviewing calibration strategies and their problems, nomenclature, definitions, and the ways and methods used to obtain quantitative distribution maps. The aim of this article was to characterize the analytical calibration in the (bio)imaging/mapping of the metallic elements in biological samples, including (1) nomenclature; (2) definitions; and (3) selected, sophisticated examples of calibration strategies with analytical calibration procedures applied in the different analytical methods currently used to study an element's distribution in biological samples/materials, such as LA ICP-MS, SIMS, EDS, XRF and others. The main emphasis was placed on the procedures and methodology of the analytical calibration strategy. Additionally, this work aims to systematize the nomenclature for the calibration terms: analytical calibration, analytical calibration method, analytical calibration procedure and analytical calibration strategy. The authors also want to popularize a division of calibration methods different from those hitherto used. This article is the first work in the literature that refers to and emphasizes the many different and complex aspects of analytical calibration problems in studies related to (bio)imaging/mapping metallic elements in different kinds of biological samples. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. BioVapor Model Evaluation

    EPA Science Inventory

    General background on modeling and specifics of modeling vapor intrusion are given. Three classical model applications are described and related to the problem of petroleum vapor intrusion. These indicate the need for model calibration and uncertainty analysis. Evaluation of Bi...

  11. Approach to derivation of SIR-C science requirements for calibration. [Shuttle Imaging Radar

    NASA Technical Reports Server (NTRS)

    Dubois, Pascale C.; Evans, Diane; Van Zyl, Jakob

    1992-01-01

    Many of the experiments proposed for the forthcoming SIR-C mission require calibrated data, for example those which emphasize (1) deriving quantitative geophysical information (e.g., surface roughness and dielectric constant), (2) monitoring daily and seasonal changes in the Earth's surface (e.g., soil moisture), (3) extending local case studies to regional and worldwide scales, and (4) using SIR-C data with other spaceborne sensors (e.g., ERS-1, JERS-1, and Radarsat). There are three different aspects to the SIR-C calibration problem: radiometric and geometric calibration, which have been previously reported, and polarimetric calibration. The study described in this paper is an attempt at determining the science requirements for polarimetric calibration for SIR-C. A model describing the effect of miscalibration is presented first, followed by an example describing how to assess the calibration requirements specific to an experiment. The effects of miscalibration on some commonly used polarimetric parameters are also discussed. It is shown that polarimetric calibration requirements are strongly application dependent. In consequence, the SIR-C investigators are advised to assess the calibration requirements of their own experiment. A set of numbers summarizing SIR-C polarimetric calibration goals concludes this paper.

  12. Calibration of the Urbana lidar system

    NASA Technical Reports Server (NTRS)

    Cerny, T.; Sechrist, C. F., Jr.

    1980-01-01

    A method for calibrating data obtained by the Urbana sodium lidar system is presented. First, an expression relating the number of photocounts originating from a specific altitude range to the sodium concentration is developed. This relation is then simplified by normalizing the sodium photocounts with photocounts originating from the Rayleigh region of the atmosphere. To evaluate the calibration expression, the laser linewidth must be known; therefore, a method for measuring the laser linewidth using a Fabry-Perot interferometer is given. The laser linewidth was found to be 6 ± 2.5 pm. Problems due to photomultiplier tube overloading are discussed. Finally, calibrated data are presented. The sodium column abundance exhibits something close to a sinusoidal variation throughout the year, with the winter months showing an enhancement of a factor of 5 to 7 over the summer months.
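
    The Rayleigh normalization described above can be written schematically as follows (a sketch of the standard form only; the exact constants depend on the receiver, and the effective sodium cross-section depends on the measured laser linewidth, which is why the Fabry-Perot measurement is needed):

```latex
% n_Na(z): sodium density at altitude z;  N: photocounts;
% z_R: Rayleigh-region altitude;  n_atm: known atmospheric density;
% sigma_R, sigma_eff: Rayleigh and effective sodium cross-sections.
n_{\mathrm{Na}}(z) \;=\; n_{\mathrm{atm}}(z_R)\,
    \frac{\sigma_R}{\sigma_{\mathrm{eff}}}\,
    \frac{z^{2}}{z_R^{2}}\,
    \frac{N_{\mathrm{Na}}(z)}{N_R(z_R)}
```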

  13. AeroADL: applying the integration of the Suomi-NPP science algorithms with the Algorithm Development Library to the calibration and validation task

    NASA Astrophysics Data System (ADS)

    Houchin, J. S.

    2014-09-01

    A common problem for the off-line validation of calibration algorithms and algorithm coefficients is being able to run science data through the exact same software used for on-line calibration of that data. The Joint Polar Satellite System (JPSS) program solved part of this problem by making the Algorithm Development Library (ADL) available, which allows the operational algorithm code to be compiled and run on a desktop Linux workstation using flat-file input and output. However, this solved only part of the problem, as the toolkit and methods for initiating the processing of data through the algorithms were geared specifically toward the algorithm developer, not the calibration analyst. In algorithm development mode, a limited number of test data sets are staged for the algorithm once, and then run through the algorithm over and over as the software is developed and debugged. In calibration analyst mode, we are continually running new data sets through the algorithm, which requires significant effort to stage each of those data sets for the algorithm without additional tools. AeroADL solves this second problem by providing a set of scripts that wrap the ADL tools, providing efficient means to stage and process an input data set, to override static calibration coefficient look-up tables (LUTs) with experimental versions of those tables, and to manage a library containing multiple versions of each static LUT file in such a way that the correct set of LUTs required for each algorithm is automatically provided to the algorithm without analyst effort. Using AeroADL, The Aerospace Corporation's analyst team has demonstrated the ability to quickly and efficiently perform analysis tasks for both the VIIRS and OMPS sensors with minimal training on the software tools.

  14. Bilinear Inverse Problems: Theory, Algorithms, and Applications

    NASA Astrophysics Data System (ADS)

    Ling, Shuyang

    We will discuss how several important real-world signal processing problems, such as self-calibration and blind deconvolution, can be modeled as bilinear inverse problems and solved by convex and nonconvex optimization approaches. In Chapter 2, we bring together three seemingly unrelated concepts: self-calibration, compressive sensing and biconvex optimization. We show how several self-calibration problems can be treated efficiently within the framework of biconvex compressive sensing via a new method called SparseLift. More specifically, we consider a linear system of equations y = DAx, where the diagonal matrix D (which models the calibration error) is unknown and x is an unknown sparse signal. By "lifting" this biconvex inverse problem and exploiting sparsity in this model, we derive explicit theoretical guarantees under which both x and D can be recovered exactly, robustly, and numerically efficiently. In Chapter 3, we study the question of joint blind deconvolution and blind demixing, i.e., extracting a sequence of functions [special characters omitted] from observing only the sum of their convolutions [special characters omitted]. In particular, for the special case s = 1, it becomes the well-known blind deconvolution problem. We present a non-convex algorithm which guarantees exact recovery under conditions that are competitive with convex optimization methods, with the additional advantage of being computationally much more efficient. We discuss several applications of the proposed framework in image processing and wireless communications in connection with the Internet-of-Things. In Chapter 4, we consider three different self-calibration models of practical relevance. We show how their corresponding bilinear inverse problems can be solved by both a simple linear least squares approach and an SVD-based approach. As a consequence, the proposed algorithms are numerically extremely efficient, thus allowing for real-time deployment. Explicit theoretical guarantees and stability theory are derived, and the sampling complexity is nearly optimal (up to a poly-log factor). Applications in imaging sciences and signal processing are discussed and numerical simulations are presented to demonstrate the effectiveness and efficiency of our approach.
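
    The bilinearity of y = DAx is easy to see in code: fixing D makes the problem linear in x, and vice versa. The sketch below is a naive alternating-least-squares illustration of that structure (not SparseLift, which instead lifts the problem to a convex program and exploits sparsity); the dimensions and data are invented.

```python
# Naive alternating least squares for y = diag(d) @ A @ x with d, x unknown
# (dense x, noiseless data). Illustrates the bilinear structure only.
import numpy as np

rng = np.random.default_rng(2)
m, n = 60, 10
A = rng.normal(size=(m, n))
d_true = 1 + 0.2 * rng.normal(size=m)
x_true = rng.normal(size=n)
y = d_true * (A @ x_true)

d = np.ones(m)                           # start from "no calibration error"
for _ in range(50):
    x, *_ = np.linalg.lstsq(d[:, None] * A, y, rcond=None)  # solve for x
    Ax = A @ x
    d = y * Ax / (Ax ** 2 + 1e-12)       # per-row least squares for each d_i
    d *= m / d.sum()                     # fix the d <-> x scale ambiguity

print(np.linalg.norm(d * (A @ x) - y))   # residual ~ 0 on noiseless data
```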

  15. An analysis of the least-squares problem for the DSN systematic pointing error model

    NASA Technical Reports Server (NTRS)

    Alvarez, L. S.

    1991-01-01

    A systematic pointing error model is used to calibrate antennas in the Deep Space Network. The least-squares problem is described and analyzed, along with the solution methods used to determine the model's parameters. Specifically studied are the rank degeneracy problems resulting from beam pointing error measurement sets that incorporate inadequate sky coverage. A least-squares parameter subset selection method is described, and its applicability to the systematic error modeling process is demonstrated on a Voyager 2 measurement distribution.

  16. A Review of Calibration Transfer Practices and Instrument Differences in Spectroscopy.

    PubMed

    Workman, Jerome J

    2018-03-01

    Calibration transfer for use with spectroscopic instruments, particularly for near-infrared, infrared, and Raman analysis, has been the subject of multiple articles, research papers, book chapters, and technical reviews. There has been a myriad of approaches published and claims made for resolving the problems associated with transferring calibrations; however, the capability of attaining identical results over time from two or more instruments using an identical calibration still eludes technologists. Calibration transfer, in a precise definition, refers to a series of analytical approaches or chemometric techniques used to attempt to apply a single spectral database, and the calibration model developed using that database, for two or more instruments, with statistically retained accuracy and precision. Ideally, one would develop a single calibration for any particular application, and move it indiscriminately across instruments and achieve identical analysis or prediction results. There are many technical aspects involved in such precision calibration transfer, related to the measuring instrument reproducibility and repeatability, the reference chemical values used for the calibration, the multivariate mathematics used for calibration, and sample presentation repeatability and reproducibility. Ideally, a multivariate model developed on a single instrument would provide a statistically identical analysis when used on other instruments following transfer. This paper reviews common calibration transfer techniques, mostly related to instrument differences, and the mathematics of the uncertainty between instruments when making spectroscopic measurements of identical samples. It does not specifically address calibration maintenance or reference laboratory differences.

  17. A simplified gross primary production and evapotranspiration model for boreal coniferous forests - is a generic calibration sufficient?

    NASA Astrophysics Data System (ADS)

    Minunno, F.; Peltoniemi, M.; Launiainen, S.; Aurela, M.; Lindroth, A.; Lohila, A.; Mammarella, I.; Minkkinen, K.; Mäkelä, A.

    2015-07-01

    The problem of model complexity has been the subject of lively debate in environmental sciences as well as in the forest modelling community. Simple models are less input-demanding and their calibration involves a smaller number of parameters, but they might be suitable only at the local scale. In this work we calibrated a simplified ecosystem process model (PRELES) to data from multiple sites, and we tested whether PRELES can be used at the regional scale to estimate the carbon and water fluxes of boreal conifer forests. We compared a multi-site (M-S) calibration with site-specific (S-S) calibrations. Model calibrations and evaluations were carried out by means of Bayesian methods: Bayesian calibration (BC) and Bayesian model comparison (BMC) were used to quantify the uncertainty in model parameters and model structure. To evaluate model performance, BMC results were combined with a more classical analysis of model-data mismatch (M-DM). Evapotranspiration (ET) and gross primary production (GPP) measurements collected at 10 sites in Finland and Sweden were used in the study. Calibration results showed that similar estimates were obtained for the parameters to which model outputs are most sensitive. No significant differences were encountered in the predictions of the multi-site and site-specific versions of PRELES, with the exception of a site with agricultural history (Alkkia). Although PRELES predicted GPP better than evapotranspiration, we concluded that the model can be reliably used at the regional scale to simulate the carbon and water fluxes of boreal forests. Our analyses also underlined the importance of using long and carefully collected flux datasets in model calibration. In fact, even a single site can provide model calibrations that can be applied at a wider spatial scale, since it covers a wide range of variability in climatic conditions.

  18. Commodity-Free Calibration

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Commodity-free calibration is a reaction rate calibration technique that does not require the addition of any commodities. This technique is a specific form of the reaction rate technique, where all of the necessary reactants, other than the sample being analyzed, are either inherent in the analyzing system or specifically added or provided to the system for a reason other than calibration. After introduction, the component of interest is exposed to other reactants or flow paths already present in the system. The instrument detector records one of the following to determine the rate of reaction: the increase in the response of the reaction product, a decrease in the signal of the analyte response, or a decrease in the signal from the inherent reactant. With these data, the initial concentration of the analyte is calculated. This type of system can analyze and calibrate simultaneously, reduce the risk of false positives and exposure to toxic vapors, and improve accuracy. Moreover, having an excess of the reactant already present in the system eliminates the need to add commodities, which further reduces cost, logistic problems, and potential contamination. Also, the calculations involved can be simplified by comparison to those of the reaction rate technique. We conducted tests with hypergols as an initial investigation into the feasibility of the technique.

  19. Active Subspace Methods for Data-Intensive Inverse Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Qiqi

    2017-04-27

    The project has developed theory and computational tools to exploit active subspaces to reduce the dimension in statistical calibration problems. This dimension reduction enables MCMC methods to calibrate otherwise intractable models. The same theoretical and computational tools can also reduce the measurement dimension for calibration problems that use large stores of data.
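
    A minimal sketch of the active-subspace construction (our illustration, not the project's code): estimate the gradient outer-product matrix by Monte Carlo, eigendecompose it, and keep the leading eigenvectors as the reduced directions in which to run the calibration.

```python
# Active-subspace dimension reduction on a toy model whose output depends
# mostly on the single direction x0 + 2*x1 of a 10-dim parameter space.
import numpy as np

rng = np.random.default_rng(3)

def grad_f(x):                            # gradient of a toy model f(x)
    g = np.zeros_like(x)
    s = x[0] + 2 * x[1]
    g[0], g[1] = 2 * s, 4 * s
    g[2:] = 0.01 * x[2:]                  # weakly active directions
    return g

dim, n_samples = 10, 200
grads = np.array([grad_f(rng.uniform(-1, 1, dim)) for _ in range(n_samples)])
C = grads.T @ grads / n_samples           # average gradient outer product
eigval, eigvec = np.linalg.eigh(C)
order = np.argsort(eigval)[::-1]
eigval, W = eigval[order], eigvec[:, order]
k = 1                                     # spectral gap after 1st eigenvalue
W1 = W[:, :k]                             # active subspace basis
print(eigval[:3])                         # gap reveals the active dimension
# MCMC for the calibration posterior can now explore the k-dim coordinate
# y = W1.T @ x instead of the full 10-dim parameter space.
```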

  20. Calibration of a stochastic health evolution model using NHIS data

    NASA Astrophysics Data System (ADS)

    Gupta, Aparna; Li, Zhisheng

    2011-10-01

    This paper presents and calibrates an individual's stochastic health evolution model. In this health evolution model, the uncertainty of health incidents is described by a stochastic process with a finite number of possible outcomes. We construct a comprehensive health status index (HSI) to describe an individual's health status, as well as a health risk factor system (RFS) to classify individuals into different risk groups. Based on the maximum likelihood estimation (MLE) method and the method of nonlinear least squares fitting, model calibration is formulated in terms of two mixed-integer nonlinear optimization problems. Using the National Health Interview Survey (NHIS) data, the model is calibrated for specific risk groups. Longitudinal data from the Health and Retirement Study (HRS) is used to validate the calibrated model, which displays good validation properties. The end goal of this paper is to provide a model and methodology, whose output can serve as a crucial component of decision support for strategic planning of health related financing and risk management.

  1. Calibration of groundwater vulnerability mapping using the generalized reduced gradient method.

    PubMed

    Elçi, Alper

    2017-12-01

    Groundwater vulnerability assessment studies are essential in water resources management. Overlay-and-index methods such as DRASTIC are widely used for mapping of groundwater vulnerability; however, these methods mainly suffer from a subjective selection of model parameters. The objective of this study is to introduce a calibration procedure that results in a more accurate assessment of groundwater vulnerability. The improvement of the assessment is formulated as a parameter optimization problem using an objective function that is based on the correlation between actual groundwater contamination and vulnerability index values. The non-linear optimization problem is solved with the generalized-reduced-gradient (GRG) method, a gradient-based numerical optimization algorithm. To demonstrate the applicability of the procedure, a vulnerability map for the Tahtali stream basin is calibrated using nitrate concentration data. The calibration procedure is easy to implement and aims to maximize the correlation between observed pollutant concentrations and groundwater vulnerability index values. The influence of each vulnerability parameter on the calculated vulnerability index is assessed by performing a single-parameter sensitivity analysis. Results of the sensitivity analysis show that all factors have an effect on the final vulnerability index. Calibration of the vulnerability map improves the correlation between index values and measured nitrate concentrations by 19%; the regression coefficient increases from 0.280 to 0.485. It is evident that the spatial distribution and the proportions of vulnerability class areas are significantly altered by the calibration process. Although the calibration method is demonstrated on the DRASTIC model, the approach is not specific to a certain model and can easily be applied to other overlay-and-index methods. Copyright © 2017 Elsevier B.V. All rights reserved.
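
    The calibration loop reduces to maximizing a correlation over the factor weights. The sketch below illustrates this with synthetic data, with SciPy's SLSQP standing in for the GRG solver used in the paper (GRG itself is most familiar from spreadsheet solvers); the weights, bounds, and data are invented.

```python
# DRASTIC-style index calibration sketch: the index is a weighted sum of
# rated factors, and the objective is the negative correlation between
# index values and observed nitrate concentrations.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n_cells, n_factors = 200, 7                      # 7 DRASTIC factors
ratings = rng.uniform(1, 10, size=(n_cells, n_factors))
w0 = np.array([5, 4, 3, 2, 1, 5, 3], float)      # standard DRASTIC weights
nitrate = ratings @ np.array([6, 3, 3, 1, 1, 6, 2.0]) \
          + rng.normal(0, 5, n_cells)            # synthetic observations

def neg_corr(w):
    index = ratings @ w
    return -np.corrcoef(index, nitrate)[0, 1]

res = minimize(neg_corr, w0, method="SLSQP",
               bounds=[(1, 10)] * n_factors)     # keep weights in a sane range
print("r before:", -neg_corr(w0), "r after:", -neg_corr(res.x))
```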

  2. Calibration of groundwater vulnerability mapping using the generalized reduced gradient method

    NASA Astrophysics Data System (ADS)

    Elçi, Alper

    2017-12-01

    Groundwater vulnerability assessment studies are essential in water resources management. Overlay-and-index methods such as DRASTIC are widely used for mapping of groundwater vulnerability; however, these methods mainly suffer from a subjective selection of model parameters. The objective of this study is to introduce a calibration procedure that results in a more accurate assessment of groundwater vulnerability. The improvement of the assessment is formulated as a parameter optimization problem using an objective function that is based on the correlation between actual groundwater contamination and vulnerability index values. The non-linear optimization problem is solved with the generalized-reduced-gradient (GRG) method, a gradient-based numerical optimization algorithm. To demonstrate the applicability of the procedure, a vulnerability map for the Tahtali stream basin is calibrated using nitrate concentration data. The calibration procedure is easy to implement and aims to maximize the correlation between observed pollutant concentrations and groundwater vulnerability index values. The influence of each vulnerability parameter on the calculated vulnerability index is assessed by performing a single-parameter sensitivity analysis. Results of the sensitivity analysis show that all factors have an effect on the final vulnerability index. Calibration of the vulnerability map improves the correlation between index values and measured nitrate concentrations by 19%; the regression coefficient increases from 0.280 to 0.485. It is evident that the spatial distribution and the proportions of vulnerability class areas are significantly altered by the calibration process. Although the calibration method is demonstrated on the DRASTIC model, the approach is not specific to a certain model and can easily be applied to other overlay-and-index methods.

  3. Simultaneous calibration phantom commission and geometry calibration in cone beam CT

    NASA Astrophysics Data System (ADS)

    Xu, Yuan; Yang, Shuai; Ma, Jianhui; Li, Bin; Wu, Shuyu; Qi, Hongliang; Zhou, Linghong

    2017-09-01

    Geometry calibration is a vital step in describing the geometry of a cone beam computed tomography (CBCT) system and is a prerequisite for CBCT reconstruction. In current methods, calibration phantom commission and geometry calibration are divided into two independent tasks. Small errors in ball-bearing (BB) positioning in the phantom-making step will severely degrade the quality of phantom calibration. To solve this problem, we propose an integrated method to simultaneously realize geometry phantom commission and geometry calibration. Instead of assuming the accuracy of the geometry phantom, the integrated method treats the BB centers in the phantom as optimized parameters in the workflow. Specifically, an evaluation phantom and the corresponding evaluation contrast index are used to evaluate geometry artifacts for optimizing the BB coordinates in the geometry phantom. After applying particle swarm optimization, the CBCT geometry and the BB coordinates in the geometry phantom are calibrated accurately and can then be used directly for the next geometry calibration task in other CBCT systems. To evaluate the proposed method, both qualitative and quantitative studies were performed on simulated and realistic CBCT data. The spatial resolution of images reconstructed with dental CBCT can reach up to 15 line pairs cm^-1. The proposed method is also superior to the Wiesent method in experiments. This paper shows that the proposed method is attractive for simultaneous and accurate geometry phantom commission and geometry calibration.
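
    The optimization step can be imitated with a generic particle swarm. In the sketch below (ours, with a placeholder objective), the BB coordinates form each particle's position vector, and a simple quadratic score stands in for the paper's evaluation-phantom contrast index.

```python
# Toy particle-swarm optimization of BB coordinates; the artifact score
# is a placeholder for the image-based contrast index used in the paper.
import numpy as np

rng = np.random.default_rng(5)
bb_true = rng.uniform(-1, 1, size=12)            # e.g. 4 BBs x 3 coordinates

def artifact_score(bb):                          # placeholder objective
    return np.sum((bb - bb_true) ** 2)

n_particles, dim, iters = 30, 12, 200
x = rng.uniform(-1.5, 1.5, (n_particles, dim))
v = np.zeros_like(x)
pbest = x.copy()
pbest_val = np.array([artifact_score(p) for p in x])
gbest = pbest[pbest_val.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5                        # standard PSO coefficients
for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    vals = np.array([artifact_score(p) for p in x])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best score:", artifact_score(gbest))      # ~0 when BBs are recovered
```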

  4. Reducing equifinality of hydrological models by integrating Functional Streamflow Disaggregation

    NASA Astrophysics Data System (ADS)

    Lüdtke, Stefan; Apel, Heiko; Nied, Manuela; Carl, Peter; Merz, Bruno

    2014-05-01

    A universal problem in the calibration of hydrological models is the equifinality of different parameter sets derived from calibrating models against total runoff values. This is an intrinsic problem stemming from the quality of the calibration data and the simplified process representation of the model. However, discharge data contain additional information which can be extracted by signal processing methods. An analysis specifically developed for the disaggregation of runoff time series into flow components is the Functional Streamflow Disaggregation (FSD; Carl & Behrendt, 2008). This method is used in the calibration of an implementation of the hydrological model SWIM in a medium-sized watershed in Thailand. FSD is applied to disaggregate the discharge time series into three flow components, which are interpreted as base flow, inter-flow and surface runoff. In addition to total runoff, the model is calibrated against these three components in a modified GLUE analysis, with the aim of identifying structural model deficiencies, assessing the internal process representation and tackling equifinality. We developed a model-dependent (MDA) approach calibrating the model runoff components against the FSD components, and a model-independent (MIA) approach comparing the FSD of the model results with the FSD of the calibration data. The results indicate that the decomposition provides valuable information for the calibration. In particular, MDA highlights and discards a number of standard GLUE behavioural models that underestimate the contribution of soil water to river discharge. Both MDA and MIA yield a reduction of the parameter ranges by a factor of up to 3 in comparison to standard GLUE. Based on these results, we conclude that the developed calibration approach is able to reduce the equifinality of hydrological model parameterizations. The effect on the uncertainty of the model predictions is strongest when applying MDA and shows only minor reductions for MIA. Besides further validation of FSD, the next steps include an extension of the study to different catchments and to other hydrological models with a similar structure.

  5. A monolithic 3D-0D coupled closed-loop model of the heart and the vascular system: Experiment-based parameter estimation for patient-specific cardiac mechanics.

    PubMed

    Hirschvogel, Marc; Bassilious, Marina; Jagschies, Lasse; Wildhirt, Stephen M; Gee, Michael W

    2016-10-15

    A model for patient-specific cardiac mechanics simulation is introduced, incorporating a 3-dimensional finite element model of the ventricular part of the heart, which is coupled to a reduced-order 0-dimensional closed-loop vascular system, heart valve, and atrial chamber model. The ventricles are modeled by a nonlinear orthotropic passive material law. The electrical activation is mimicked by a prescribed parameterized active stress acting along a generic muscle fiber orientation. Our activation function is constructed such that the start of ventricular contraction and relaxation as well as the active stress curve's slope are parameterized. The imaging-based patient-specific ventricular model is prestressed to low end-diastolic pressure to account for the imaged, stressed configuration. Visco-elastic Robin boundary conditions are applied to the heart base and the epicardium to account for the embedding surrounding. We treat the 3D solid-0D fluid interaction as a strongly coupled monolithic problem, which is consistently linearized with respect to 3D solid and 0D fluid model variables to allow for a Newton-type solution procedure. The resulting coupled linear system of equations is solved iteratively in every Newton step using 2  ×  2 physics-based block preconditioning. Furthermore, we present novel efficient strategies for calibrating active contractile and vascular resistance parameters to experimental left ventricular pressure and stroke volume data gained in porcine experiments. Two exemplary states of cardiovascular condition are considered, namely, after application of vasodilatory beta blockers (BETA) and after injection of vasoconstrictive phenylephrine (PHEN). The parameter calibration to the specific individual and cardiovascular state at hand is performed using a 2-stage nonlinear multilevel method that uses a low-fidelity heart model to compute a parameter correction for the high-fidelity model optimization problem. We discuss 2 different low-fidelity model choices with respect to their ability to augment the parameter optimization. Because the periodic state conditions on the model (active stress, vascular pressures, and fluxes) are a priori unknown and also dependent on the parameters to be calibrated (and vice versa), we perform parameter calibration and periodic state condition estimation simultaneously. After a couple of heart beats, the calibration algorithm converges to a settled, periodic state because of conservation of blood volume within the closed-loop circulatory system. The proposed model and multilevel calibration method are cost-efficient and allow for an efficient determination of a patient-specific in silico heart model that reproduces physiological observations very well. Such an individual and state accurate model is an important predictive tool in intervention planning, assist device engineering and other medical applications. Copyright © 2016 John Wiley & Sons, Ltd.

  6. Evaluating Statistical Process Control (SPC) techniques and computing the uncertainty of force calibrations

    NASA Technical Reports Server (NTRS)

    Navard, Sharon E.

    1989-01-01

    In recent years there has been a push within NASA to use statistical techniques to improve the quality of production. Two areas where statistics are used are in establishing product and process quality control of flight hardware and in evaluating the uncertainty of calibration of instruments. The Flight Systems Quality Engineering branch is responsible for developing and assuring the quality of all flight hardware; the statistical process control methods employed are reviewed and evaluated. The Measurement Standards and Calibration Laboratory performs the calibration of all instruments used on-site at JSC as well as those used by all off-site contractors. These calibrations must be performed in such a way as to be traceable to national standards maintained by the National Institute of Standards and Technology, and they must meet a four-to-one ratio of the instrument specifications to calibrating standard uncertainty. In some instances this ratio is not met, and in these cases it is desirable to compute the exact uncertainty of the calibration and determine ways of reducing it. A particular example where this problem is encountered is with a machine which does automatic calibrations of force. The process of force calibration using the United Force Machine is described in detail. The sources of error are identified and quantified when possible. Suggestions for improvement are made.

  7. Infrared stereo calibration for unmanned ground vehicle navigation

    NASA Astrophysics Data System (ADS)

    Harguess, Josh; Strange, Shawn

    2014-06-01

    The problem of calibrating two color cameras as a stereo pair has been heavily researched, and many off-the-shelf software packages, such as Robot Operating System and OpenCV, include calibration routines that work in most cases. However, the problem of calibrating two infrared (IR) cameras for the purposes of sensor fusion and point cloud generation is relatively new, and many challenges exist. We present a comparison of color-camera and IR-camera stereo calibration using data from an unmanned ground vehicle. There are two main challenges in IR stereo calibration: the calibration board (material, design, etc.) and the accuracy of calibration pattern detection. We present our analysis of these challenges along with our IR stereo calibration methodology. Finally, we present our results both visually and analytically with computed reprojection errors.
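
    For reference, the color-camera baseline follows the standard OpenCV recipe below (our sketch; the board geometry and the image_pairs input are assumptions). The same calls apply to IR imagery once the board pattern is detectable, which is precisely where the board material/design challenge enters.

```python
# Standard stereo calibration recipe with OpenCV. `image_pairs` is a list
# of (left, right) grayscale images of a chessboard, supplied by the user;
# the pattern size and square size below are assumed for illustration.
import cv2
import numpy as np

def stereo_calibrate(image_pairs, pattern=(9, 6), square=0.025):
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

    obj_pts, left_pts, right_pts = [], [], []
    for left, right in image_pairs:
        ok_l, c_l = cv2.findChessboardCorners(left, pattern)
        ok_r, c_r = cv2.findChessboardCorners(right, pattern)
        if ok_l and ok_r:                    # keep pairs seen in both views
            obj_pts.append(objp)
            left_pts.append(c_l)
            right_pts.append(c_r)

    size = image_pairs[0][0].shape[::-1]     # (width, height)
    # Intrinsics per camera first, then the stereo extrinsics R, T.
    _, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
    _, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)
    rms, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
        obj_pts, left_pts, right_pts, K1, d1, K2, d2, size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return rms, K1, d1, K2, d2, R, T         # rms = reprojection error
```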

  8. Fast and robust curve skeletonization for real-world elongated objects

    USDA-ARS?s Scientific Manuscript database

    These datasets were generated for calibrating robot-camera systems. In an extension, we also considered the problem of calibrating robots with more than one camera. These datasets are provided as a companion to the paper, "Solving the Robot-World Hand-Eye(s) Calibration Problem with Iterative Meth...

  9. High-frequency measurements of aeolian saltation flux: Field-based methodology and applications

    NASA Astrophysics Data System (ADS)

    Martin, Raleigh L.; Kok, Jasper F.; Hugenholtz, Chris H.; Barchyn, Thomas E.; Chamecki, Marcelo; Ellis, Jean T.

    2018-02-01

    Aeolian transport of sand and dust is driven by turbulent winds that fluctuate over a broad range of temporal and spatial scales. However, commonly used aeolian transport models do not explicitly account for such fluctuations, likely contributing to substantial discrepancies between models and measurements. Underlying this problem is the absence of accurate sand flux measurements at the short time scales at which wind speed fluctuates. Here, we draw on extensive field measurements of aeolian saltation to develop a methodology for generating high-frequency (up to 25 Hz) time series of total (vertically-integrated) saltation flux, namely by calibrating high-frequency (HF) particle counts to low-frequency (LF) flux measurements. The methodology follows four steps: (1) fit exponential curves to vertical profiles of saltation flux from LF saltation traps, (2) determine empirical calibration factors through comparison of LF exponential fits to HF number counts over concurrent time intervals, (3) apply these calibration factors to subsamples of the saltation count time series to obtain HF height-specific saltation fluxes, and (4) aggregate the calibrated HF height-specific saltation fluxes into estimates of total saltation fluxes. When coupled to high-frequency measurements of wind velocity, this methodology offers new opportunities for understanding how aeolian saltation dynamics respond to variability in driving winds over time scales from tens of milliseconds to days.
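
    Steps (1)-(3) translate directly into a short fitting-and-scaling computation. The sketch below uses invented trap heights, fluxes, and count rates to show the mechanics: fit the exponential profile to the low-frequency trap data, derive a counts-to-flux factor at the counter height, and rescale to the vertically integrated flux.

```python
# Sketch of the calibration methodology with made-up numbers.
import numpy as np
from scipy.optimize import curve_fit

profile = lambda z, q0, zbar: q0 * np.exp(-z / zbar)

# (1) fit the exponential profile to LF trap fluxes at several heights
z = np.array([0.05, 0.10, 0.20, 0.35, 0.50])          # trap heights [m]
q_lf = np.array([8.1, 5.9, 3.2, 1.1, 0.5])            # interval-mean fluxes
(q0, zbar), _ = curve_fit(profile, z, q_lf, p0=(10.0, 0.1))
Q_lf = q0 * zbar            # vertically integrated flux over the interval

# (2) calibration factor: LF flux per HF count at the counter's height
z_c = 0.08                                            # counter height [m]
hf_counts = np.random.default_rng(6).poisson(4.0, size=25 * 60)  # 25 Hz, 1 min
cal = profile(z_c, q0, zbar) / hf_counts.mean()

# (3)-(4) calibrated HF height-specific flux, aggregated to total flux
q_hf = cal * hf_counts                       # 25 Hz flux at counter height
Q_hf = q_hf * Q_lf / profile(z_c, q0, zbar)  # rescale to vertically integrated
print(Q_lf, Q_hf.mean())                     # means agree by construction
```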

  10. Parameter Estimation of Computationally Expensive Watershed Models Through Efficient Multi-objective Optimization and Interactive Decision Analytics

    NASA Astrophysics Data System (ADS)

    Akhtar, Taimoor; Shoemaker, Christine

    2016-04-01

    Watershed model calibration is inherently a multi-criteria problem. Conflicting trade-offs exist between different quantifiable calibration criteria, indicating the non-existence of a single optimal parameterization. Hence, many experts prefer a manual approach to calibration where the inherent multi-objective nature of the calibration problem is addressed through an interactive, subjective, time-intensive and complex decision making process. Multi-objective optimization can be used to efficiently identify multiple plausible calibration alternatives and assist calibration experts during the parameter estimation process. However, there are key challenges to the use of multi-objective optimization in the parameter estimation process, which include: 1) multi-objective optimization usually requires many model simulations, which is difficult for complex simulation models that are computationally expensive; and 2) selection of one from the numerous calibration alternatives provided by multi-objective optimization is non-trivial. This study proposes a "Hybrid Automatic Manual Strategy" (HAMS) for watershed model calibration to specifically address the above-mentioned challenges. HAMS employs a 3-stage framework for parameter estimation. Stage 1 incorporates the use of an efficient surrogate multi-objective algorithm, GOMORS, for identification of numerous calibration alternatives within a limited simulation evaluation budget. The novelty of HAMS is embedded in Stages 2 and 3, where an interactive visual and metric-based analytics framework is available as a decision support tool to choose a single calibration from the numerous alternatives identified in Stage 1. Stage 2 of HAMS provides a goodness-of-fit-metric-based interactive framework for identification of a small (typically fewer than 10), meaningful and diverse subset of calibration alternatives from the numerous alternatives obtained in Stage 1. Stage 3 incorporates the use of an interactive visual analytics framework for decision support in selecting one parameter combination from the alternatives identified in Stage 2. HAMS is applied to calibrate the flow parameters of a SWAT (Soil and Water Assessment Tool) model designed to simulate flow in the Cannonsville watershed in upstate New York. Results from the application of HAMS to Cannonsville indicate that efficient multi-objective optimization and interactive visual and metric-based analytics can bridge the gap between the effective use of both automatic and manual strategies for parameter estimation of computationally expensive watershed models.

  11. Application of six sigma and AHP in analysis of variable lead time calibration process instrumentation

    NASA Astrophysics Data System (ADS)

    Rimantho, Dino; Rahman, Tomy Abdul; Cahyadi, Bambang; Tina Hernawati, S.

    2017-02-01

    Calibration of instrumentation equipment in the pharmaceutical industry is an important activity for determining the true value of a measurement. Preliminary studies indicated that lead time in the calibration process disrupted production and laboratory activities. This study aimed to analyze the causes of the calibration lead time. Several methods were used: Six Sigma, to determine the capability of the equipment calibration process, and brainstorming, Pareto diagrams and fishbone diagrams, to identify and analyze the problems. The Analytic Hierarchy Process (AHP) was then used to create a hierarchical structure and prioritize the problems. The results showed a DPMO value of about 40769.23, equivalent to a sigma level of approximately 3.24σ for the calibration process, indicating the need for improvement. Problem-solving strategies for the calibration lead time were then determined, such as shortening the preventive maintenance schedule, increasing the number of calibrator instruments, and training personnel. Consistency tests on all of the pairwise comparison matrices showed CR values below 0.1.
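
    As a quick sanity check on the reported figures, the sigma level follows from the DPMO under the conventional 1.5σ long-term shift; a minimal sketch:

```python
# Convert DPMO to a sigma level with the conventional 1.5-sigma shift.
from scipy.stats import norm

dpmo = 40769.23
sigma_level = norm.ppf(1 - dpmo / 1e6) + 1.5
print(round(sigma_level, 2))   # -> 3.24, matching the abstract
```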

  12. Air data position-error calibration using state reconstruction techniques

    NASA Technical Reports Server (NTRS)

    Whitmore, S. A.; Larson, T. J.; Ehernberger, L. J.

    1984-01-01

    During the highly maneuverable aircraft technology (HiMAT) flight test program recently completed at NASA Ames Research Center's Dryden Flight Research Facility, numerous problems were experienced in airspeed calibration. This necessitated the use of state reconstruction techniques to arrive at a position-error calibration. For the HiMAT aircraft, most of the calibration effort was expended on flights in which the air data pressure transducers were not performing accurately. Following discovery of this problem, the air data transducers of both aircraft were wrapped in heater blankets to correct the problem. Additional calibration flights were performed, and from the resulting data a satisfactory position-error calibration was obtained. This calibration and data obtained before installation of the heater blankets were used to develop an alternate calibration method. The alternate approach took advantage of high-quality inertial data that was readily available. A linearized Kalman filter (LKF) was used to reconstruct the aircraft's wind-relative trajectory; the trajectory was then used to separate transducer measurement errors from the aircraft position error. This calibration method is accurate and inexpensive. The LKF technique has an inherent advantage of requiring that no flight maneuvers be specially designed for airspeed calibrations. It is of particular use when the measurements of the wind-relative quantities are suspected to have transducer-related errors.

  13. Definition and sensitivity of the conceptual MORDOR rainfall-runoff model parameters using different multi-criteria calibration strategies

    NASA Astrophysics Data System (ADS)

    Garavaglia, F.; Seyve, E.; Gottardi, F.; Le Lay, M.; Gailhard, J.; Garçon, R.

    2014-12-01

    MORDOR is a conceptual hydrological model extensively used in Électricité de France (EDF, the French electric utility company) operational applications: (i) hydrological forecasting, (ii) flood risk assessment, (iii) water balance and (iv) climate change studies. MORDOR is a lumped, reservoir-type, elevation-based model with hourly or daily areal rainfall and air temperature as the driving input data. The principal hydrological processes represented are evapotranspiration, direct and indirect runoff, groundwater, snow accumulation and melt, and routing. The model has been used intensively at EDF for more than 20 years, in particular for modeling French mountainous watersheds. Regarding parameter calibration, we propose and test alternative multi-criteria techniques based on two specific approaches: automatic calibration using single-objective functions, and a priori parameter calibration founded on hydrological watershed features. The automatic calibration approach uses single-objective functions, based on Kling-Gupta efficiency, to quantify the agreement between simulated and observed runoff, focusing on four different runoff samples: (i) the time series sample, (ii) the annual hydrological regime, (iii) monthly cumulative distribution functions and (iv) recession sequences. The primary purpose of this study is to analyze the definition and sensitivity of MORDOR parameters by testing different calibration techniques in order to: (i) simplify the model structure, (ii) increase the calibration-validation performance of the model and (iii) reduce the equifinality problem of the calibration process. We propose an alternative calibration strategy that reaches these goals. The analysis is illustrated by calibrating the MORDOR model to daily data for 50 watersheds located in French mountainous regions.
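
    The single-objective functions are based on Kling-Gupta efficiency; a minimal implementation of the standard 2009 form (our sketch; the study may use a variant) is:

```python
# Kling-Gupta efficiency: 1 is a perfect fit, lower is worse.
import numpy as np

def kge(sim, obs):
    r = np.corrcoef(sim, obs)[0, 1]          # linear correlation
    alpha = np.std(sim) / np.std(obs)        # variability ratio
    beta = np.mean(sim) / np.mean(obs)       # bias ratio
    return 1 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

# Evaluated on each of the four runoff samples (time series, regime,
# monthly CDFs, recessions) to form the four calibration criteria.
```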

  14. Signal inference with unknown response: calibration-uncertainty renormalized estimator.

    PubMed

    Dorn, Sebastian; Enßlin, Torsten A; Greiner, Maksim; Selig, Marco; Boehm, Vanessa

    2015-01-01

    The calibration of a measurement device is crucial for every scientific experiment where a signal has to be inferred from data. We present CURE, the calibration-uncertainty renormalized estimator, to reconstruct a signal and simultaneously the instrument's calibration from the same data, without knowing the exact calibration but only its covariance structure. The idea of the CURE method, developed in the framework of information field theory, is to start with an assumed calibration and to successively include more and more portions of calibration uncertainty into the signal inference equations, absorbing the resulting corrections into renormalized signal (and calibration) solutions. Thereby, the signal inference and calibration problem turns into a problem of solving a single system of ordinary differential equations and can be identified with common resummation techniques used in field theories. We verify the CURE method by applying it to a simplistic toy example and compare it against existing self-calibration schemes, Wiener filter solutions, and Markov chain Monte Carlo sampling. We conclude that the method is able to keep up in accuracy with the best self-calibration methods and serves as a noniterative alternative to them.

  15. Magnetic suspension and balance systems (MSBSs)

    NASA Technical Reports Server (NTRS)

    Britcher, Colin P.; Kilgore, Robert A.

    1987-01-01

    The problems of wind tunnel testing are outlined, with attention given to the problems caused by mechanical support systems, such as support interference, dynamic-testing restrictions, and low productivity. The basic principles of magnetic suspension are highlighted, along with the history of magnetic suspension and balance systems. Roll control, size limitations, high angle of attack, reliability, position sensing, and calibration are discussed among the problems and limitations of the existing magnetic suspension and balance systems. Examples of the existing systems are presented, and design studies for future systems are outlined. Problems specific to large-scale magnetic suspension and balance systems, such as high model loads, requirements for high-power electromagnets, high-capacity power supplies, highly sophisticated control systems and position sensors, and high costs are assessed.

  16. Amélioration de la précision d'un bras robotisé pour une application d'ébavurage [Improving the precision of a robotic arm for a deburring application]

    NASA Astrophysics Data System (ADS)

    Mailhot, David

    Process automation is an increasingly preferred solution for tasks that are complex, tedious or even dangerous for humans. Flexibility, low cost and compactness make industrial robots very attractive for automation. Even though many developments have been made to enhance robot performance, robots still cannot meet some industries' requirements. For instance, the aerospace industry requires very tight tolerances on a large variety of parts, which is not what robots were originally designed for. When it comes to robotic deburring, robot imprecision is a major problem that needs to be addressed before the technique can be implemented in production. This master's thesis explores different calibration techniques for the robot's dimensions that could overcome this problem and make the robotic deburring application possible. Several calibration techniques that are easy to implement in a production environment are simulated and compared. A calibration technique for the tool's dimensions is simulated and implemented to evaluate its potential. The most efficient technique will be used within the application. Finally, the production environment and requirements are explained. The remaining imprecision is compensated for by the use of a force/torque sensor integrated with the robot's controller and by the use of a camera. Many tests are made to define the best parameters for deburring a specific feature on a chosen part. Concluding tests are shown and demonstrate the potential of robotic deburring. Keywords: robotic calibration, robotic arm, robotic precision, robotic deburring

  17. A cooperative strategy for parameter estimation in large scale systems biology models.

    PubMed

    Villaverde, Alejandro F; Egea, Jose A; Banga, Julio R

    2012-06-22

    Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that allows experimentally verifiable predictions to be made. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics which make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. A new approach for parameter estimation of large scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs ("threads") that run in parallel on different processors. Each thread implements a state-of-the-art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and speeds up performance. Two parameter estimation problems involving models related to the central carbon metabolism of E. coli, which include different regulatory levels (metabolic and transcriptional), are used as case studies. The performance and capabilities of the method are also evaluated using benchmark problems of large-scale global optimization, with excellent results. The cooperative CeSS strategy is a general-purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases. The cooperative metaheuristic presented here can be easily extended to incorporate other global and local search solvers and specific structural information for particular classes of problems.

  18. A cooperative strategy for parameter estimation in large scale systems biology models

    PubMed Central

    2012-01-01

    Background Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that allows experimentally verifiable predictions to be made. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics which make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. Results A new approach for parameter estimation of large scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs (“threads”) that run in parallel on different processors. Each thread implements a state-of-the-art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and speeds up performance. Two parameter estimation problems involving models related to the central carbon metabolism of E. coli, which include different regulatory levels (metabolic and transcriptional), are used as case studies. The performance and capabilities of the method are also evaluated using benchmark problems of large-scale global optimization, with excellent results. Conclusions The cooperative CeSS strategy is a general-purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases. The cooperative metaheuristic presented here can be easily extended to incorporate other global and local search solvers and specific structural information for particular classes of problems. PMID:22727112
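
    The sketch below is not eSS itself, but a toy illustration of the cooperation idea described above: several searchers improve their own incumbent solutions and periodically adopt the best solution found by any of them. The Rosenbrock objective and all tuning constants are illustrative assumptions.

      import numpy as np

      def rosenbrock(x):
          return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2)

      rng = np.random.default_rng(1)
      dim, n_threads, n_iters, share_every = 10, 4, 2000, 100

      # Each "thread" keeps its own incumbent; cooperation = adopting the
      # best solution found so far by any thread.
      incumbents = [rng.uniform(-2, 2, dim) for _ in range(n_threads)]
      scores = [rosenbrock(x) for x in incumbents]

      for it in range(n_iters):
          for t in range(n_threads):
              cand = incumbents[t] + rng.normal(0.0, 0.1, dim)  # local move
              c = rosenbrock(cand)
              if c < scores[t]:
                  incumbents[t], scores[t] = cand, c
          if it % share_every == 0:                             # sharing step
              b = int(np.argmin(scores))
              for t in range(n_threads):
                  if t != b and rng.random() < 0.5:
                      incumbents[t] = incumbents[b].copy()
                      scores[t] = scores[b]

      print("best cost found:", min(scores))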

  19. Development of SIR-C Ground Calibration Equipment

    NASA Technical Reports Server (NTRS)

    Freeman, A.; Azeem, M.; Haub, D.; Sarabandi, K.

    1993-01-01

    SIR-C/X-SAR is currently scheduled for launch in April 1994. SIR-C is an L-Band and C-Band, multi-polarization spaceborne SAR system developed by NASA/JPL. X-SAR is an X-Band SAR system developed by DARA/ASI. One of the problems involved in calibrating the SIR-C instrument is to make sure that the horizontal (H) and vertical (V) polarized beams are aligned in the azimuth direction, i.e., that they are pointing in the same direction. This is important if the polarimetric performance specifications for the system are to be met. To solve this problem, we have designed and built a prototype of a low-cost ground receiver capable of recording received power from two antennas, one H-polarized, the other V-polarized. The two signals are mixed down to audio and then recorded on the left and right stereo channels of a standard audio cassette player. The audio cassette recording can then be played back directly into a Macintosh computer, where it is digitized. Analysis of...

  20. Synthesis Polarimetry Calibration

    NASA Astrophysics Data System (ADS)

    Moellenbrock, George

    2017-10-01

    Synthesis instrumental polarization calibration fundamentals for both linear (ALMA) and circular (EVLA) feed bases are reviewed, with special attention to the calibration heuristics supported in CASA. Practical problems affecting modern instruments are also discussed.

  1. Problems in the use of interference filters for spectrophotometric determination of total ozone

    NASA Technical Reports Server (NTRS)

    Basher, R. E.; Matthews, W. A.

    1977-01-01

    An analysis of the use of ultraviolet narrow-band interference filters for total ozone determination is given with reference to the New Zealand filter spectrophotometer under the headings of filter monochromaticity, temperature dependence, orientation dependence, aging, and specification tolerances and nonuniformity. Quantitative details of each problem are given, together with the means used to overcome them in the New Zealand instrument. The tuning of the instrument's filter center wavelengths to a common set of values by tilting the filters is also described, along with a simple calibration method used to adjust and set these center wavelengths.

  2. Design, development and fabrication of a Solar Experiment Alignment Sensor (SEAS)

    NASA Technical Reports Server (NTRS)

    Bancroft, J. R.; Fain, M. Z.; Johnson, D. F.

    1971-01-01

    The design, development and testing of a laboratory SEAS (Solar Experiment Alignment Sensor) system are presented. The system is capable of overcoming traditional alignment and calibration problems to permit pointing anywhere on the solar disc to an accuracy of five arc seconds. The concept, development and laboratory testing phases of the program are discussed, and particular attention has been given to specific problems associated with selection of materials, and components. The conclusions summarize performance capability and discuss areas for further study including the effects of solar limb darkening and effects of annual variations in the apparent solar diameter.

  3. Cryogenic balances for the US NTF

    NASA Technical Reports Server (NTRS)

    Ferris, Alice T.

    1989-01-01

    Force balances have been used to obtain aerodynamic data in the National Transonic Facility (NTF) wind tunnel since it became operational in 1983. These balances were designed, fabricated, gaged, and calibrated to Langley Research Center's specifications to operate over the temperature range of -320 °F to +140 °F without thermal control. Some of the materials and procedures developed to obtain a balance that would perform in this environment are reviewed. The degree of success in using these balances thus far is reported. Some of the problem areas that need additional work are specified, and some of the progress addressing these problems is described.

  4. GEODYN system description, volume 1. [computer program for estimation of orbit and geodetic parameters

    NASA Technical Reports Server (NTRS)

    Chin, M. M.; Goad, C. C.; Martin, T. V.

    1972-01-01

    A computer program for the estimation of orbit and geodetic parameters is presented. The areas in which the program is operational are defined. The specific uses of the program are given as: (1) determination of definitive orbits, (2) tracking instrument calibration, (3) satellite operational predictions, and (4) geodetic parameter estimation. The relationship between the various elements in the solution of the orbit and geodetic parameter estimation problem is analyzed. The solution of the problems corresponds to the orbit generation mode in the first case and to the data reduction mode in the second case.

  5. Hierarchical calibration and validation for modeling bench-scale solvent-based carbon capture. Part 1: Non-reactive physical mass transfer across the wetted wall column: Original Research Article: Hierarchical calibration and validation for modeling bench-scale solvent-based carbon capture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Chao; Xu, Zhijie; Lai, Canhai

    A hierarchical model calibration and validation is proposed for quantifying the confidence level of mass transfer prediction using a computational fluid dynamics (CFD) model, where the solvent-based carbon dioxide (CO2) capture is simulated and simulation results are compared to the parallel bench-scale experimental data. Two unit problems with increasing levels of complexity are proposed to break down the complex physical/chemical processes of solvent-based CO2 capture into relatively simpler problems that separate the effects of physical transport and chemical reaction. This paper focuses on the calibration and validation of the first unit problem, i.e., the CO2 mass transfer across a falling ethanolamine (MEA) film in the absence of chemical reaction. This problem is investigated both experimentally and numerically using nitrous oxide (N2O) as a surrogate for CO2. To capture the motion of the gas-liquid interface, a volume-of-fluid method is employed together with a one-fluid formulation to compute the mass transfer between the two phases. Bench-scale parallel experiments are designed and conducted to validate and calibrate the CFD models using a general Bayesian calibration. Two important transport parameters, i.e., Henry's constant and gas diffusivity, are calibrated to produce the posterior distributions, which will be used as the input for the second unit problem addressing the chemical absorption of CO2 across the MEA falling film, where both mass transfer and chemical reaction are involved.
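
    A minimal sketch of a Bayesian calibration step of the kind described above, using a random-walk Metropolis sampler: a toy forward model stands in for the CFD simulation, and Henry's constant H and gas diffusivity D are calibrated against synthetic bench-scale data. The forward model, priors and all numbers are invented for illustration; note that in this toy model only the product H*sqrt(D) is identifiable, so the joint posterior is strongly correlated.

      import numpy as np

      rng = np.random.default_rng(2)

      # Toy forward model standing in for the CFD unit problem: predicted
      # absorption flux as a function of Henry's constant H and diffusivity D.
      def forward(H, D):
          return H * np.sqrt(D)

      H_true, D_true, sigma = 0.8, 1.5e-9, 1e-5
      y_obs = forward(H_true, D_true) + rng.normal(0.0, sigma, size=20)

      def log_post(theta):
          H, D = theta
          if H <= 0 or D <= 0:
              return -np.inf                    # flat priors on the positive axis
          return -0.5 * np.sum((y_obs - forward(H, D)) ** 2) / sigma ** 2

      # Random-walk Metropolis over (H, D); the posterior samples would feed
      # the second unit problem as its prior, as described in the abstract.
      theta = np.array([1.0, 1.0e-9])
      lp = log_post(theta)
      samples = []
      for _ in range(20000):
          prop = theta + rng.normal(0.0, [0.02, 2e-11])
          lp_prop = log_post(prop)
          if np.log(rng.random()) < lp_prop - lp:
              theta, lp = prop, lp_prop
          samples.append(theta.copy())
      posterior = np.array(samples[5000:])      # discard burn-in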

  6. Common problems in the elicitation and analysis of expert opinion affecting probabilistic safety assessments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meyer, M.A.; Booker, J.M.

    1990-01-01

    Expert opinion is frequently used in probabilistic safety assessment (PSA), particularly in estimating low-probability events. In this paper, we discuss some of the common problems encountered in eliciting and analyzing expert opinion data and offer solutions or recommendations. The problems are: that experts are not naturally Bayesian (people fail to update their existing information to account for new information as it becomes available, as would be predicted by the Bayesian philosophy); that experts cannot be fully calibrated (to calibrate experts, the feedback from the known quantities must be immediate, frequent, and specific to the task); that experts are limited in the number of things that they can mentally juggle at a time to 7 ± 2; that data gatherers and analysts can introduce bias by unintentionally altering the expert's thinking or answers; that the level of detail of the data, or granularity, can affect the analyses; and that the conditioning effect poses difficulties in gathering and analyzing the expert data. The data that the expert gives can be conditioned on a variety of factors that can affect the analysis and the interpretation of the results. 31 refs.

  7. Multimode ergometer system

    NASA Technical Reports Server (NTRS)

    Bynum, B. G.; Gause, R. L.; Spier, R. A.

    1971-01-01

    System overcomes previous ergometer design and calibration problems including inaccurate measurements, large weight, size, and input power requirements, poor heat dissipation, high flammability, and inaccurate calibration. Device consists of lightweight, accurately controlled ergometer, restraint system, and calibration system.

  8. The Calibration and error analysis of Shallow water (less than 100m) Multibeam Echo-Sounding System

    NASA Astrophysics Data System (ADS)

    Lin, M.

    2016-12-01

    Multibeam echo-sounders (MBES) have been developed to gather bathymetric and acoustic data for more efficient and more exact mapping of the oceans. This gain in efficiency does not come without drawbacks. Indeed, the finer the resolution of remote sensing instruments, the harder they are to calibrate. This is the case for multibeam echo-sounding systems. We are no longer dealing with sounding lines between which the bathymetry must be interpolated to produce consistent representations of the seafloor; we now need to match together strips (swaths) of totally ensonified seabed. As a consequence, misalignment and time-lag problems emerge as artifacts in the bathymetry from adjacent or overlapping swaths, particularly when operating in shallow water. More importantly, one must still verify that bathymetric data meet the accuracy requirements. This paper aims to summarize the system integration involved with MBES and to identify the various sources of error pertaining to shallow-water surveys (100 m and less). A systematic method for the calibration of shallow-water MBES is proposed and presented as a set of field procedures. The procedures aim at detecting, quantifying and correcting systematic instrumental and installation errors. Calibrating for variations of the speed of sound in the water column, which are natural in origin, is therefore not addressed in this document. The data used in calibration are compared against International Hydrographic Organization (IHO) and other related standards. This paper aims to establish a model for the specific area that can calibrate the errors due to the instruments. We construct a patch-test procedure, identify the possible sources of error in the sounding data, and calculate the error values to compensate for them. In general, the problems to be solved are the four patch-test corrections in the Hypack system: (1) roll, (2) GPS latency, (3) pitch and (4) yaw. Because these four corrections affect each other, we run each survey line to calibrate them; the GPS-latency correction synchronizes the GPS to the echo sounder. With this procedure, future studies of any shallower portion of an area can obtain more accurate sounding values and support more detailed research.

  9. Decomposition of the Mean Squared Error and NSE Performance Criteria: Implications for Improving Hydrological Modelling

    NASA Technical Reports Server (NTRS)

    Gupta, Hoshin V.; Kling, Harald; Yilmaz, Koray K.; Martinez-Baquero, Guillermo F.

    2009-01-01

    The mean squared error (MSE) and the related normalization, the Nash-Sutcliffe efficiency (NSE), are the two criteria most widely used for calibration and evaluation of hydrological models with observed data. Here, we present a diagnostically interesting decomposition of NSE (and hence MSE), which facilitates analysis of the relative importance of its different components in the context of hydrological modelling, and show how model calibration problems can arise due to interactions among these components. The analysis is illustrated by calibrating a simple conceptual precipitation-runoff model to daily data for a number of Austrian basins having a broad range of hydro-meteorological characteristics. Evaluation of the results clearly demonstrates the problems that can be associated with any calibration based on the NSE (or MSE) criterion. While we propose and test an alternative criterion that can help to reduce model calibration problems, the primary purpose of this study is not to present an improved measure of model performance. Instead, we seek to show that there are systematic problems inherent with any optimization based on formulations related to the MSE. The analysis and results have implications for the manner in which we calibrate and evaluate environmental models; we discuss these and suggest possible ways forward that may move us towards an improved and diagnostically meaningful approach to model performance evaluation and identification.
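
    For reference, the decomposition discussed above expresses NSE in terms of the linear correlation r, the relative variability alpha = sigma_s/sigma_o and the normalized bias beta_n = (mu_s - mu_o)/sigma_o as NSE = 2*alpha*r - alpha^2 - beta_n^2. A small Python sketch (our own wording, following Gupta et al., 2009):

      import numpy as np

      def nse_decomposition(sim, obs):
          """Return NSE and its components (Gupta et al., 2009)."""
          sim, obs = np.asarray(sim, float), np.asarray(obs, float)
          r = np.corrcoef(sim, obs)[0, 1]                 # correlation
          alpha = sim.std() / obs.std()                   # relative variability
          beta_n = (sim.mean() - obs.mean()) / obs.std()  # normalized bias
          nse = 2 * alpha * r - alpha ** 2 - beta_n ** 2
          return nse, r, alpha, beta_n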

  10. Signal processing and calibration procedures for in situ diode-laser absorption spectroscopy.

    PubMed

    Werle, P W; Mazzinghi, P; D'Amato, F; De Rosa, M; Maurer, K; Slemr, F

    2004-07-01

    Gas analyzers based on tunable diode-laser spectroscopy (TDLS) provide high sensitivity, fast response and highly specific in situ measurements of several atmospheric trace gases simultaneously. Under optimum conditions, even shot-noise-limited performance can be obtained. For field applications outside the laboratory, practical limitations are important. At ambient mixing ratios below a few parts per billion, spectrometers become increasingly sensitive to noise, interference, drift effects and background changes associated with low-level signals. It is the purpose of this review to address some of the problems encountered at these low levels and to describe a signal processing strategy for trace gas monitoring and a concept for in situ system calibration applicable to tunable diode-laser spectroscopy. To meet the requirements of quality assurance for field measurements and monitoring applications, procedures to check linearity according to International Organization for Standardization (ISO) regulations are described, and some measurements of calibration functions are presented and discussed.

  11. Simultaneous Calibration: A Joint Optimization Approach for Multiple Kinect and External Cameras.

    PubMed

    Liao, Yajie; Sun, Ying; Li, Gongfa; Kong, Jianyi; Jiang, Guozhang; Jiang, Du; Cai, Haibin; Ju, Zhaojie; Yu, Hui; Liu, Honghai

    2017-06-24

    Camera calibration is a crucial problem in many applications, such as 3D reconstruction, structure from motion, object tracking and face alignment. Numerous methods have been proposed to solve the above problem with good performance in the last few decades. However, few methods are targeted at the joint calibration of multiple sensors (more than four devices), which is normally a practical issue in real-time systems. In this paper, we propose a novel method and a corresponding workflow framework to simultaneously calibrate the relative poses of a Kinect and three external cameras. By optimizing the final cost function and adding corresponding weights to the external cameras in different locations, an effective joint calibration of multiple devices is constructed. Furthermore, the method is tested on a practical platform, and experimental results show that the proposed joint calibration method can achieve satisfactory performance in a practical real-time system, with accuracy higher than the manufacturer's calibration.

  12. Simultaneous Calibration: A Joint Optimization Approach for Multiple Kinect and External Cameras

    PubMed Central

    Liao, Yajie; Sun, Ying; Li, Gongfa; Kong, Jianyi; Jiang, Guozhang; Jiang, Du; Cai, Haibin; Ju, Zhaojie; Yu, Hui; Liu, Honghai

    2017-01-01

    Camera calibration is a crucial problem in many applications, such as 3D reconstruction, structure from motion, object tracking and face alignment. Numerous methods have been proposed to solve the above problem with good performance in the last few decades. However, few methods are targeted at the joint calibration of multiple sensors (more than four devices), which is normally a practical issue in real-time systems. In this paper, we propose a novel method and a corresponding workflow framework to simultaneously calibrate the relative poses of a Kinect and three external cameras. By optimizing the final cost function and adding corresponding weights to the external cameras in different locations, an effective joint calibration of multiple devices is constructed. Furthermore, the method is tested on a practical platform, and experimental results show that the proposed joint calibration method can achieve satisfactory performance in a practical real-time system, with accuracy higher than the manufacturer’s calibration. PMID:28672823

  13. Source calibrations and SDC calorimeter requirements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Green, D.

    Several studies of the problem of calibration of the SDC calorimeter exist. In this note the attempt is made to give a connected account of the requirements on the source calibration from the point of view of the desired, and acceptable, constant term induced in the EM resolution. It is assumed that a "local" calibration resulting from exposing each tower to a beam of electrons is not feasible. It is further assumed that an "in situ" calibration is either not yet performed, or is unavailable due to tracking alignment problems or high-luminosity operation rendering tracking inoperative. Therefore, the assumptions used are rather conservative. In this scenario, each scintillator plate of each tower is exposed to a moving radioactive source. That reading is used to "mask" an optical "cookie" in a grey code chosen so as to make the response uniform. The source is assumed to be the sole calibration of the tower. Therefore, the phrase "global" calibration of towers by movable radioactive sources is adopted.

  14. Source calibrations and SDC calorimeter requirements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Green, D.

    Several studies of the problem of calibration of the SDC calorimeter exist. In this note the attempt is made to give a connected account of the requirements on the source calibration from the point of view of the desired, and acceptable, constant term induced in the EM resolution. It is assumed that a "local" calibration resulting from exposing each tower to a beam of electrons is not feasible. It is further assumed that an "in situ" calibration is either not yet performed, or is unavailable due to tracking alignment problems or high-luminosity operation rendering tracking inoperative. Therefore, the assumptions used are rather conservative. In this scenario, each scintillator plate of each tower is exposed to a moving radioactive source. That reading is used to "mask" an optical "cookie" in a grey code chosen so as to make the response uniform. The source is assumed to be the sole calibration of the tower. Therefore, the phrase "global" calibration of towers by movable radioactive sources is adopted.

  15. Parameterizations for reducing camera reprojection error for robot-world hand-eye calibration

    USDA-ARS?s Scientific Manuscript database

    Accurate robot-world, hand-eye calibration is crucial to automation tasks. In this paper, we discuss the robot-world, hand-eye calibration problem, which has been modeled as the linear relationship AX = ZB, where X and Z are the unknown calibration matrices composed of rotation and translation ...

  16. Camera calibration correction in shape from inconsistent silhouette

    USDA-ARS?s Scientific Manuscript database

    The use of shape from silhouette for reconstruction tasks is plagued by two types of real-world errors: camera calibration error and silhouette segmentation error. When either error is present, we call the problem the Shape from Inconsistent Silhouette (SfIS) problem. In this paper, we show how sm...

  17. From Healthcare to Warfare and Reverse: How Should We Regulate Dual-Use Neurotechnology?

    PubMed

    Ienca, Marcello; Jotterand, Fabrice; Elger, Bernice S

    2018-01-17

    Recent advances in military-funded neurotechnology and novel opportunities for misusing neurodevices show that the problem of dual use is inherent to neuroscience. This paper discusses how the neuroscience community should respond to these dilemmas and delineates a neuroscience-specific biosecurity framework. This neurosecurity framework involves calibrated regulation, (neuro)ethical guidelines, and awareness-raising activities within the scientific community. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Spectrophotometry: Past and Present

    NASA Astrophysics Data System (ADS)

    Adelman, Saul J.

    2009-01-01

    I describe the rise of optical region spectrophotometry in the 1960s and 1970s, when it achieved status as a major tool in stellar research, through its decline and near demise at present. With absolutely calibrated fluxes and Balmer profiles, usually of H-gamma, astronomers used model atmosphere predictions to find both the effective temperatures and surface gravities of many stars. Spectrophotometry as I knew it was photometrically calibrated low-dispersion spectroscopy with a typical resolution of order 25 Å. A typical data set consists of 10 to 15 values covering most of the optical spectral region. The strengths and shortcomings of the rotating grating scanners are discussed. The accomplishments achieved using spectrophotometric data, which were obtained with instruments using photomultipliers, are reviewed. Extensions to other spectral regions are noted, and attempts to use observations from space to calibrate the optical region are discussed. There are two steps to fully calibrate flux data. The first requires the calibration of the fluxes of one or more standard stars against sources calibrated absolutely in a laboratory. The use of Vega as the primary standard has been both a blessing, as it is so bright, and a curse, especially as modeling it correctly requires treating it as a fast-rotating star seen nearly pole-on. At best its calibration has errors of about 1%. The other step is to apply extinction corrections for the Earth's atmosphere and then calibrate the fluxes using the fluxes of standard stars. Now the ASTRA Spectrophotometer promises a revitalization of the use and availability of optical flux data. Its design specifications included solutions to the problems of past optical spectrophotometric instruments.

  19. Mapping From an Instrumented Glove to a Robot Hand

    NASA Technical Reports Server (NTRS)

    Goza, Michael

    2005-01-01

    An algorithm has been developed to solve the problem of mapping from (1) a glove instrumented with joint-angle sensors to (2) an anthropomorphic robot hand. Such a mapping is needed to generate control signals to make the robot hand mimic the configuration of the hand of a human attempting to control the robot. The mapping problem is complicated by uncertainties in sensor locations caused by variations in sizes and shapes of hands and variations in the fit of the glove. The present mapping algorithm is robust in the face of these uncertainties, largely because it includes a calibration sub-algorithm that inherently adapts the mapping to the specific hand and glove, without need for measuring the hand and without regard for goodness of fit. The algorithm utilizes a forward-kinematics model of the glove derived from documentation provided by the manufacturer of the glove. In this case, the forward-kinematics model signifies a mathematical model of the glove fingertip positions as functions of the sensor readings. More specifically, given the sensor readings, the forward-kinematics model calculates the glove fingertip positions in a Cartesian reference frame nominally attached to the palm. The algorithm also utilizes an inverse-kinematics model of the robot hand. In this case, the inverse-kinematics model signifies a mathematical model of the robot finger-joint angles as functions of the robot fingertip positions. Again, more specifically, the inverse-kinematics model calculates the finger-joint commands needed to place the fingertips at specified positions in a Cartesian reference frame that is attached to the palm of the robot hand and that nominally corresponds to the Cartesian reference frame attached to the palm of the glove. Initially, because of the aforementioned uncertainties, the glove fingertip positions calculated by the forward-kinematics model in the glove Cartesian reference frame cannot be expected to match the robot fingertip positions in the robot-hand Cartesian reference frame. A calibration must be performed to make the glove and robot-hand fingertip positions correspond more precisely. The calibration procedure involves a few simple hand poses designed to provide well-defined fingertip positions. One of the poses is a fist. In each of the other poses, a finger touches the thumb. The calibration sub-algorithm uses the sensor readings from these poses to modify the kinematical models to make the two sets of fingertip positions agree more closely.

  20. Geometrical calibration of an AOTF hyper-spectral imaging system

    NASA Astrophysics Data System (ADS)

    Špiclin, Žiga; Katrašnik, Jaka; Bürmen, Miran; Pernuš, Franjo; Likar, Boštjan

    2010-02-01

    Optical aberrations present an important problem in optical measurements. Geometrical calibration of an imaging system is therefore of the utmost importance for achieving accurate optical measurements. In hyper-spectral imaging systems, the problem of optical aberrations is even more pronounced because optical aberrations are wavelength dependent. Geometrical calibration must therefore be performed over the entire spectral range of the hyper-spectral imaging system, which is usually far greater than that of the visible light spectrum. This problem is especially adverse in AOTF (Acousto-Optic Tunable Filter) hyper-spectral imaging systems, as the diffraction of light in AOTF filters depends on both wavelength and angle of incidence. Geometrical calibration of the hyper-spectral imaging system was performed with a stable caliber of known dimensions, which was imaged at different wavelengths over the entire spectral range. The acquired images were then automatically registered to the caliber model by both parametric and nonparametric transformations based on B-splines and by optimizing the normalized correlation coefficient. The calibration method was tested on an AOTF hyper-spectral imaging system in the near-infrared spectral range. The results indicated substantial wavelength-dependent optical aberration that is especially pronounced in the spectral range closer to the infrared part of the spectrum. The calibration method was able to accurately characterize the aberrations and produce transformations for efficient sub-pixel geometrical calibration over the entire spectral range, finally yielding better spatial resolution of the hyper-spectral imaging system.

  1. Calibration and evaluation of a dispersant application system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shum, J.S.

    1987-05-01

    The report presents recommended methods for calibrating and operating boat-mounted dispersant application systems. Calibration of one commercially-available system and several unusual problems encountered in calibration are described. Charts and procedures for selecting pump rates and other operating parameters in order to achieve a desired dosage are provided. The calibration was performed at the EPA's Oil and Hazardous Materials Simulated Environmental Test Tank (OHMSETT) facility in Leonardo, New Jersey.

  2. Absolute mass scale calibration in the inverse problem of the physical theory of fireballs.

    NASA Astrophysics Data System (ADS)

    Kalenichenko, V. V.

    A method of the absolute mass scale calibration is suggested for solving the inverse problem of the physical theory of fireballs. The method is based on the data on the masses of the fallen meteorites whose fireballs have been photographed in their flight. The method may be applied to those fireballs whose bodies have not experienced considerable fragmentation during their destruction in the atmosphere and have kept their form well enough. Statistical analysis of the inverse problem solution for a sufficiently representative sample makes it possible to separate a subsample of such fireballs. The data on the Lost City and Innisfree meteorites are used to obtain calibration coefficients.

  3. Extrinsic Calibration of Camera and 2D Laser Sensors without Overlap

    PubMed Central

    Al-Widyan, Khalid

    2017-01-01

    Extrinsic calibration of a camera and a 2D laser range finder (lidar) is crucial in sensor data fusion applications, for example in SLAM algorithms used on mobile robot platforms. The fundamental challenge of extrinsic calibration arises when the camera and lidar sensors do not overlap or share the same field of view. In this paper we propose a novel and flexible approach for the extrinsic calibration of a camera–lidar system without overlap, which can be used for robotic platform self-calibration. The approach is based on the robot–world hand–eye calibration (RWHE) problem, which is proven to have efficient and accurate solutions. First, the system was mapped to the RWHE calibration problem modeled as the linear relationship AX=ZB, where X and Z are unknown calibration matrices. Then, we computed the transformation matrix B, which was the main challenge in the above mapping. The computation is based on reasonable assumptions about the geometric structure of the calibration environment. The reliability and accuracy of the proposed approach are compared to a state-of-the-art method in extrinsic 2D lidar-to-camera calibration. Experimental results from real datasets indicate that the proposed approach provides better results, with L2-norm translational and rotational deviations of 314 mm and 0.12∘, respectively. PMID:29036905

  4. Extrinsic Calibration of Camera and 2D Laser Sensors without Overlap.

    PubMed

    Ahmad Yousef, Khalil M; Mohd, Bassam J; Al-Widyan, Khalid; Hayajneh, Thaier

    2017-10-14

    Extrinsic calibration of a camera and a 2D laser range finder (lidar) is crucial in sensor data fusion applications, for example in SLAM algorithms used on mobile robot platforms. The fundamental challenge of extrinsic calibration arises when the camera and lidar sensors do not overlap or share the same field of view. In this paper we propose a novel and flexible approach for the extrinsic calibration of a camera-lidar system without overlap, which can be used for robotic platform self-calibration. The approach is based on the robot-world hand-eye calibration (RWHE) problem, which is proven to have efficient and accurate solutions. First, the system was mapped to the RWHE calibration problem modeled as the linear relationship AX = ZB, where X and Z are unknown calibration matrices. Then, we computed the transformation matrix B, which was the main challenge in the above mapping. The computation is based on reasonable assumptions about the geometric structure of the calibration environment. The reliability and accuracy of the proposed approach are compared to a state-of-the-art method in extrinsic 2D lidar-to-camera calibration. Experimental results from real datasets indicate that the proposed approach provides better results, with L2-norm translational and rotational deviations of 314 mm and 0.12°, respectively.
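
    A hedged sketch of one standard way to attack AX = ZB (not necessarily the method of this paper): vectorize each equation with Kronecker products, solve the stacked homogeneous system by SVD, and project the rotation blocks back onto SO(3). Orthogonality is not enforced during the linear solve, so this only yields a reasonable initial estimate.

      import numpy as np

      def solve_ax_zb(As, Bs):
          """Estimate X, Z in A_i X = Z B_i from >= 2 transform pairs.
          Column-major vec identities used:
            vec(A X) = (I4 kron A) vec(X),  vec(Z B) = (B^T kron I4) vec(Z)."""
          I4 = np.eye(4)
          rows = [np.hstack([np.kron(I4, A), -np.kron(B.T, I4)])
                  for A, B in zip(As, Bs)]
          M = np.vstack(rows)                 # (16*n) x 32 homogeneous system
          v = np.linalg.svd(M)[2][-1]         # right vector, smallest singular value
          X = v[:16].reshape(4, 4, order="F")
          Z = v[16:].reshape(4, 4, order="F")
          X, Z = X / X[3, 3], Z / Z[3, 3]     # fix the overall scale/sign
          for T in (X, Z):                    # re-orthogonalize rotation blocks
              U, _, Wt = np.linalg.svd(T[:3, :3])
              R = U @ Wt
              if np.linalg.det(R) < 0:
                  R = U @ np.diag([1.0, 1.0, -1.0]) @ Wt
              T[:3, :3] = R
          return X, Z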

  5. A practical approach to spectral calibration of short wavelength infrared hyper-spectral imaging systems

    NASA Astrophysics Data System (ADS)

    Bürmen, Miran; Pernuš, Franjo; Likar, Boštjan

    2010-02-01

    Near-infrared spectroscopy is a promising, rapidly developing, reliable and noninvasive technique, used extensively in biomedicine and in the pharmaceutical industry. With the introduction of acousto-optic tunable filters (AOTF) and highly sensitive InGaAs focal plane sensor arrays, real-time high-resolution hyper-spectral imaging has become feasible for a number of new biomedical in vivo applications. However, due to the specificity of the AOTF technology and the lack of spectral calibration standardization, maintaining long-term stability and compatibility of the acquired hyper-spectral images across different systems is still a challenging problem. Efficiently solving both is essential, as the majority of methods for analysis of hyper-spectral images rely on a priori knowledge extracted from large spectral databases, serving as the basis for reliable qualitative or quantitative analysis of various biological samples. In this study, we propose and evaluate fast and reliable spectral calibration of hyper-spectral imaging systems in the short wavelength infrared spectral region. The proposed spectral calibration method is based on light sources or materials exhibiting distinct spectral features, which enable robust non-rigid registration of the acquired spectra. The calibration accounts for all of the components of a typical hyper-spectral imaging system, such as the AOTF, light source, lens and optical fibers. The obtained results indicated that practical, fast and reliable spectral calibration of hyper-spectral imaging systems is possible, thereby assuring long-term stability and inter-system compatibility of the acquired hyper-spectral images.

  6. Matching Images to Models: Camera Calibration for 3-D Surface Reconstruction

    NASA Technical Reports Server (NTRS)

    Morris, Robin D.; Smelyanskiy, Vadim N.; Cheeseman. Peter C.; Norvig, Peter (Technical Monitor)

    2001-01-01

    In a previous paper we described a system which recursively recovers a super-resolved three dimensional surface model from a set of images of the surface. In that paper we assumed that the camera calibration for each image was known. In this paper we solve two problems. Firstly, if an estimate of the surface is already known, the problem is to calibrate a new image relative to the existing surface model. Secondly, if no surface estimate is available, the relative camera calibration between the images in the set must be estimated. This will allow an initial surface model to be estimated. Results of both types of estimation are given.

  7. Elementary Students' Metacognitive Processes and Post-Performance Calibration on Mathematical Problem-Solving Tasks

    ERIC Educational Resources Information Center

    García, Trinidad; Rodríguez, Celestino; González-Castro, Paloma; González-Pienda, Julio Antonio; Torrance, Mark

    2016-01-01

    Calibration, or the correspondence between perceived performance and actual performance, is linked to students' metacognitive and self-regulatory skills. Making students more aware of the quality of their performance is important in elementary school settings, and more so when math problems are involved. However, many students seem to be poorly…

  8. Solving the robot-world, hand-eye(s) calibration problem with iterative methods

    USDA-ARS?s Scientific Manuscript database

    Robot-world, hand-eye calibration is the problem of determining the transformation between the robot end effector and a camera, as well as the transformation between the robot base and the world coordinate system. This relationship has been modeled as AX = ZB, where X and Z are unknown homogeneous ...

  9. 10 CFR 70.39 - Specific licenses for the manufacture or initial transfer of calibration or reference sources.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    § 70.39 Specific licenses for the manufacture or initial transfer of calibration or reference sources. (a) An application for a specific license to manufacture or initially transfer calibration or reference sources containing plutonium, for...

  10. Self-calibration of robot-sensor system

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu

    1990-01-01

    The process of finding the coordinate transformation between a robot and an external sensor system has been addressed. This calibration is equivalent to solving a nonlinear optimization problem for the parameters that characterize the transformation. A two-step procedure is herein proposed for solving the problem. The first step involves finding a nominal solution that is a good approximation of the final solution. A variational problem is then generated to replace the original problem in the next step. With the assumption that the variational parameters are small compared to unity, the problem can then be solved more readily and with relatively small computational effort.

  11. Absolute calibration of the mass scale in the inverse problem of the physical theory of fireballs

    NASA Astrophysics Data System (ADS)

    Kalenichenko, V. V.

    1992-08-01

    A method of the absolute calibration of the mass scale is proposed for solving the inverse problem of the physical theory of fireballs. The method is based on data on the masses of fallen meteorites whose fireballs have been photographed in flight. The method can be applied to fireballs whose bodies have not experienced significant fragmentation during their flight in the atmosphere and have kept their shape relatively well. Data on the Lost City and Innisfree meteorites are used to calculate the calibration coefficients.

  12. Contributions to the problem of piezoelectric accelerometer calibration. [using lock-in voltmeter

    NASA Technical Reports Server (NTRS)

    Jakab, I.; Bordas, A.

    1974-01-01

    After discussing the principal calibration methods for piezoelectric accelerometers, an experimental setup for accelerometer calibration by the reciprocity method is described. It is shown how the use of a lock-in voltmeter eliminates errors due to viscous damping and electrical loading.

  13. Calibration method for a large-scale structured light measurement system.

    PubMed

    Wang, Peng; Wang, Jianmei; Xu, Jing; Guan, Yong; Zhang, Guanglie; Chen, Ken

    2017-05-10

    The structured light method is an effective non-contact measurement approach. The calibration greatly affects the measurement precision of structured light systems. To construct a large-scale structured light system with high accuracy, a large-scale and precise calibration gauge is always required, which leads to an increased cost. To this end, in this paper, a calibration method with a planar mirror is proposed to reduce the calibration gauge size and cost. An out-of-focus camera calibration method is also proposed to overcome the defocusing problem caused by the shortened distance during the calibration procedure. The experimental results verify the accuracy of the proposed calibration method.

  14. Hierarchical calibration and validation of computational fluid dynamics models for solid sorbent-based carbon capture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lai, Canhai; Xu, Zhijie; Pan, Wenxiao

    2016-01-01

    To quantify the predictive confidence of a solid sorbent-based carbon capture design, a hierarchical validation methodology, consisting of basic unit problems with increasing physical complexity coupled with filtered model-based geometric upscaling, has been developed and implemented. This paper describes the computational fluid dynamics (CFD) multi-phase reactive flow simulations and the associated data flows among different unit problems performed within the said hierarchical validation approach. The bench-top experiments used in this calibration and validation effort were carefully designed to follow the desired simple-to-complex unit problem hierarchy, with corresponding data acquisition to support model parameter calibrations at each unit problem level. A Bayesian calibration procedure is employed and the posterior model parameter distributions obtained at one unit-problem level are used as prior distributions for the same parameters in the next-tier simulations. Overall, the results have demonstrated that the multiphase reactive flow models within MFIX can be used to capture the bed pressure, temperature, CO2 capture capacity, and kinetics with quantitative accuracy. The CFD modeling methodology and associated uncertainty quantification techniques presented herein offer a solid framework for estimating the predictive confidence in the virtual scale-up of a larger carbon capture device.

  15. Calibration of neural networks using genetic algorithms, with application to optimal path planning

    NASA Technical Reports Server (NTRS)

    Smith, Terence R.; Pitney, Gilbert A.; Greenwood, Daniel

    1987-01-01

    Genetic algorithms (GA) are used to search the synaptic weight space of artificial neural systems (ANS) for weight vectors that optimize some network performance function. GAs do not suffer from some of the architectural constraints involved with other techniques and it is straightforward to incorporate terms into the performance function concerning the metastructure of the ANS. Hence GAs offer a remarkably general approach to calibrating ANS. GAs are applied to the problem of calibrating an ANS that finds optimal paths over a given surface. This problem involves training an ANS on a relatively small set of paths and then examining whether the calibrated ANS is able to find good paths between arbitrary start and end points on the surface.
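
    A toy illustration of the idea, not the authors' system: a genetic algorithm searches the weight space of a small fixed-architecture network, with fitness defined directly on network performance (here, error on XOR). The population size, mutation scale and selection scheme are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(3)

      # Toy task: calibrate the 17 weights of a fixed 2-4-1 network on XOR.
      X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
      y = np.array([0, 1, 1, 0], dtype=float)
      n_w = 2 * 4 + 4 + 4 * 1 + 1

      def forward(w, X):
          W1, b1 = w[:8].reshape(2, 4), w[8:12]
          W2, b2 = w[12:16].reshape(4, 1), w[16]
          h = np.tanh(X @ W1 + b1)
          return (1.0 / (1.0 + np.exp(-(h @ W2 + b2)))).ravel()

      def fitness(w):
          return -np.mean((forward(w, X) - y) ** 2)   # GA maximizes this

      pop = rng.normal(0.0, 1.0, (50, n_w))
      for gen in range(300):
          fit = np.array([fitness(w) for w in pop])
          elite = pop[np.argsort(fit)[::-1][:10]]           # truncation selection
          parents = elite[rng.integers(0, 10, (40, 2))]
          mask = rng.random((40, n_w)) < 0.5                # uniform crossover
          children = np.where(mask, parents[:, 0], parents[:, 1])
          children += rng.normal(0.0, 0.1, children.shape)  # mutation
          pop = np.vstack([elite, children])

      best = pop[np.argmax([fitness(w) for w in pop])]
      print("outputs:", np.round(forward(best, X), 2))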

  16. Organ-specific SPECT activity calibration using 3D printed phantoms for molecular radiotherapy dosimetry.

    PubMed

    Robinson, Andrew P; Tipping, Jill; Cullen, David M; Hamilton, David; Brown, Richard; Flynn, Alex; Oldfield, Christopher; Page, Emma; Price, Emlyn; Smith, Andrew; Snee, Richard

    2016-12-01

    Patient-specific absorbed dose calculations for molecular radiotherapy require accurate activity quantification. This is commonly derived from Single-Photon Emission Computed Tomography (SPECT) imaging using a calibration factor relating detected counts to known activity in a phantom insert. A series of phantom inserts, based on the mathematical models underlying many clinical dosimetry calculations, have been produced using 3D printing techniques. SPECT/CT data for the phantom inserts have been used to calculate new organ-specific calibration factors for 99mTc and 177Lu. The measured calibration factors are compared to predicted values from calculations using a Gaussian kernel. Measured SPECT calibration factors for 3D printed organs display a clear dependence on organ shape for 99mTc and 177Lu. The observed variation in calibration factor is reproduced by the Gaussian kernel-based calculation over two orders of magnitude of change in insert volume for 99mTc and 177Lu. These new organ-specific calibration factors show a 24, 11 and 8% reduction in absorbed dose for the liver, spleen and kidneys, respectively. Non-spherical calibration factors from 3D printed phantom inserts can significantly improve the accuracy of whole-organ activity quantification for molecular radiotherapy, providing a crucial step towards individualised activity quantification and patient-specific dosimetry. 3D printed inserts provide a cost-effective and efficient way for clinical centres to access more realistic phantom data.
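
    A back-of-the-envelope sketch of how such a calibration factor is derived and applied; all numbers are invented, and the paper's organ-shaped inserts would each supply their own factor in place of the single one used here.

      # All numbers invented for illustration.
      known_activity_MBq = 50.0    # activity loaded into the printed insert
      insert_counts = 1.2e6        # counts in the insert volume of interest
      acq_time_s = 600.0

      cal_factor = insert_counts / acq_time_s / known_activity_MBq  # cps/MBq

      # Apply the matching organ-specific factor to a patient scan:
      patient_counts, patient_time_s = 8.4e5, 600.0
      activity_MBq = patient_counts / patient_time_s / cal_factor
      print(f"estimated organ activity: {activity_MBq:.1f} MBq")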

  17. The Prediction Properties of Inverse and Reverse Regression for the Simple Linear Calibration Problem

    NASA Technical Reports Server (NTRS)

    Parker, Peter A.; Geoffrey, Vining G.; Wilson, Sara R.; Szarka, John L., III; Johnson, Nels G.

    2010-01-01

    The calibration of measurement systems is a fundamental but under-studied problem within industrial statistics. The origins of this problem go back to basic chemical analysis based on NIST standards. In today's world these issues extend to mechanical, electrical, and materials engineering. Often, these new scenarios do not provide "gold standards" such as the standard weights provided by NIST. This paper considers the classic "forward regression followed by inverse regression" approach. In this approach the initial experiment treats the "standards" as the regressor and the observed values as the response to calibrate the instrument. The analyst then must invert the resulting regression model in order to use the instrument to make actual measurements in practice. This paper compares this classical approach to "reverse regression," which treats the standards as the response and the observed measurements as the regressor in the calibration experiment. Such an approach is intuitively appealing because it avoids the need for the inverse regression. However, it also violates some of the basic regression assumptions.
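
    A small numerical sketch of the two approaches compared in the paper, with synthetic standards and readings; the true line and noise level are invented.

      import numpy as np

      rng = np.random.default_rng(4)
      x_std = np.linspace(0.0, 10.0, 20)                    # known standards
      y_obs = 2.0 + 0.5 * x_std + rng.normal(0.0, 0.1, 20)  # instrument readings

      # Classical approach: regress y on x, then invert the fitted line.
      b1, b0 = np.polyfit(x_std, y_obs, 1)
      y_new = 4.1                                           # a future reading
      x_classical = (y_new - b0) / b1

      # Reverse approach: regress x on y and predict directly.
      c1, c0 = np.polyfit(y_obs, x_std, 1)
      x_reverse = c0 + c1 * y_new
      print(x_classical, x_reverse)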

  18. Multiple Use One-Sided Hypotheses Testing in Univariate Linear Calibration

    NASA Technical Reports Server (NTRS)

    Krishnamoorthy, K.; Kulkarni, Pandurang M.; Mathew, Thomas

    1996-01-01

    Consider a normally distributed response variable, related to an explanatory variable through the simple linear regression model. Data obtained on the response variable, corresponding to known values of the explanatory variable (i.e., calibration data), are to be used for testing hypotheses concerning unknown values of the explanatory variable. We consider the problem of testing an unlimited sequence of one-sided hypotheses concerning the explanatory variable, using the corresponding sequence of values of the response variable and the same set of calibration data. This is the situation of multiple use of the calibration data. The tests derived in this context are characterized by two types of uncertainties: one uncertainty associated with the sequence of values of the response variable, and a second uncertainty associated with the calibration data. We derive tests based on a condition that incorporates both of these uncertainties. The solution has practical applications in the decision limit problem. We illustrate our results using an example dealing with the estimation of blood alcohol concentration based on breath estimates of the alcohol concentration. In the example, the problem is to test if the unknown blood alcohol concentration of an individual exceeds a threshold that is safe for driving.

  19. 40 CFR Appendix I to Part 92 - Emission Related Locomotive and Engine Parameters and Specifications

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    .... b. Idle mixture. c. Transient enrichment system calibration. d. Starting enrichment system... shutoff system calibration. d. Starting enrichment system calibration. e. Transient enrichment system... parameters and calibrations. b. Transient enrichment system calibration. c. Air-fuel flow calibration. d...

  20. Muscle Synergies May Improve Optimization Prediction of Knee Contact Forces During Walking

    PubMed Central

    Walter, Jonathan P.; Kinney, Allison L.; Banks, Scott A.; D'Lima, Darryl D.; Besier, Thor F.; Lloyd, David G.; Fregly, Benjamin J.

    2014-01-01

    The ability to predict patient-specific joint contact and muscle forces accurately could improve the treatment of walking-related disorders. Muscle synergy analysis, which decomposes a large number of muscle electromyographic (EMG) signals into a small number of synergy control signals, could reduce the dimensionality and thus redundancy of the muscle and contact force prediction process. This study investigated whether use of subject-specific synergy controls can improve optimization prediction of knee contact forces during walking. To generate the predictions, we performed mixed dynamic muscle force optimizations (i.e., inverse skeletal dynamics with forward muscle activation and contraction dynamics) using data collected from a subject implanted with a force-measuring knee replacement. Twelve optimization problems (three cases with four subcases each) that minimized the sum of squares of muscle excitations were formulated to investigate how synergy controls affect knee contact force predictions. The three cases were: (1) Calibrate+Match where muscle model parameter values were calibrated and experimental knee contact forces were simultaneously matched, (2) Precalibrate+Predict where experimental knee contact forces were predicted using precalibrated muscle model parameter values from the first case, and (3) Calibrate+Predict where muscle model parameter values were calibrated and experimental knee contact forces were simultaneously predicted, all while matching inverse dynamic loads at the hip, knee, and ankle. The four subcases used either 44 independent controls or five synergy controls with and without EMG shape tracking. For the Calibrate+Match case, all four subcases closely reproduced the measured medial and lateral knee contact forces (R2 ≥ 0.94, root-mean-square (RMS) error < 66 N), indicating sufficient model fidelity for contact force prediction. For the Precalibrate+Predict and Calibrate+Predict cases, synergy controls yielded better contact force predictions (0.61 < R2 < 0.90, 83 N < RMS error < 161 N) than did independent controls (-0.15 < R2 < 0.79, 124 N < RMS error < 343 N) for corresponding subcases. For independent controls, contact force predictions improved when precalibrated model parameter values or EMG shape tracking was used. For synergy controls, contact force predictions were relatively insensitive to how model parameter values were calibrated, while EMG shape tracking made lateral (but not medial) contact force predictions worse. For the subject and optimization cost function analyzed in this study, use of subject-specific synergy controls improved the accuracy of knee contact force predictions, especially for lateral contact force when EMG shape tracking was omitted, and reduced prediction sensitivity to uncertainties in muscle model parameter values. PMID:24402438

  1. Muscle synergies may improve optimization prediction of knee contact forces during walking.

    PubMed

    Walter, Jonathan P; Kinney, Allison L; Banks, Scott A; D'Lima, Darryl D; Besier, Thor F; Lloyd, David G; Fregly, Benjamin J

    2014-02-01

    The ability to predict patient-specific joint contact and muscle forces accurately could improve the treatment of walking-related disorders. Muscle synergy analysis, which decomposes a large number of muscle electromyographic (EMG) signals into a small number of synergy control signals, could reduce the dimensionality and thus redundancy of the muscle and contact force prediction process. This study investigated whether use of subject-specific synergy controls can improve optimization prediction of knee contact forces during walking. To generate the predictions, we performed mixed dynamic muscle force optimizations (i.e., inverse skeletal dynamics with forward muscle activation and contraction dynamics) using data collected from a subject implanted with a force-measuring knee replacement. Twelve optimization problems (three cases with four subcases each) that minimized the sum of squares of muscle excitations were formulated to investigate how synergy controls affect knee contact force predictions. The three cases were: (1) Calibrate+Match where muscle model parameter values were calibrated and experimental knee contact forces were simultaneously matched, (2) Precalibrate+Predict where experimental knee contact forces were predicted using precalibrated muscle model parameter values from the first case, and (3) Calibrate+Predict where muscle model parameter values were calibrated and experimental knee contact forces were simultaneously predicted, all while matching inverse dynamic loads at the hip, knee, and ankle. The four subcases used either 44 independent controls or five synergy controls with and without EMG shape tracking. For the Calibrate+Match case, all four subcases closely reproduced the measured medial and lateral knee contact forces (R2 ≥ 0.94, root-mean-square (RMS) error < 66 N), indicating sufficient model fidelity for contact force prediction. For the Precalibrate+Predict and Calibrate+Predict cases, synergy controls yielded better contact force predictions (0.61 < R2 < 0.90, 83 N < RMS error < 161 N) than did independent controls (-0.15 < R2 < 0.79, 124 N < RMS error < 343 N) for corresponding subcases. For independent controls, contact force predictions improved when precalibrated model parameter values or EMG shape tracking was used. For synergy controls, contact force predictions were relatively insensitive to how model parameter values were calibrated, while EMG shape tracking made lateral (but not medial) contact force predictions worse. For the subject and optimization cost function analyzed in this study, use of subject-specific synergy controls improved the accuracy of knee contact force predictions, especially for lateral contact force when EMG shape tracking was omitted, and reduced prediction sensitivity to uncertainties in muscle model parameter values.
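
    Muscle synergy extraction of the kind described above is commonly done with non-negative matrix factorization; the hedged sketch below decomposes a synthetic 44-channel EMG envelope matrix into five synergy weightings and control signals. The use of scikit-learn's NMF and all synthetic data are our assumptions, not the paper's pipeline.

      import numpy as np
      from sklearn.decomposition import NMF

      rng = np.random.default_rng(5)

      # Synthetic stand-in for rectified, smoothed EMG envelopes:
      # 44 channels x 1000 samples built from 5 underlying synergies.
      t = np.linspace(0.0, 1.0, 1000)
      H_true = np.abs(np.sin(np.pi * np.arange(1, 6)[:, None] * t))  # 5 x 1000
      W_true = rng.random((44, 5))                                   # 44 x 5
      emg = W_true @ H_true + 0.01 * rng.random((44, 1000))

      # Factor the envelope matrix into 5 synergies: emg ~= W @ H.
      model = NMF(n_components=5, init="nndsvda", max_iter=500)
      W = model.fit_transform(emg)   # 44 x 5  muscle weightings
      H = model.components_          # 5 x 1000 synergy control signals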

  2. Problems of millipound thrust measurement. The "Hansen Suspension"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carta, David G.

    Considered in detail are problems which led to the need and use of the 'Hansen Suspension'. Also discussed are problems which are likely to be encountered in any low level thrust measuring system. The methods of calibration and the accuracies involved are given careful attention. With all parameters optimized and calibration techniques perfected, the system was found capable of a resolution of 10 μlb. A comparison of thrust measurements made by the 'Hansen Suspension' with measurements of a less sophisticated device leads to some surprising results.

  3. Operating manual for the U.S. Geological Survey minimonitor, 1988 revised edition; punched-paper-tape model

    USGS Publications Warehouse

    Ficken, James H.; Scott, Carl T.

    1988-01-01

    This manual describes the U.S. Geological Survey Minimonitor Water Quality Data Measuring and Recording System. Instructions for calibrating, servicing, maintaining, and operating the system are provided. The Survey Minimonitor is a battery-powered, multiparameter water quality monitoring instrument designed for field use. A watertight can containing signal conditioners is connected with cable and waterproof connectors to various water quality sensors. Data are recorded on a punched paper-tape recorder. An external battery is required. The operation and maintenance of the various sensors and signal conditioners for temperature, specific conductance, dissolved oxygen, and pH are discussed. Calibration instructions are provided for each parameter, along with maintenance instructions. Sections of the report explain how to connect the Minimonitor to measure direct-current voltages, such as signal outputs from other instruments. Instructions for connecting a satellite data-collection platform or a solid-state data recorder to the Minimonitor are also given. Basic information is given for servicing the Minimonitor and trouble-shooting some of its electronic components. The use of test boxes to test sensors, isolate component problems, and verify calibration values is discussed. (USGS)

  4. Progressive calibration and averaging for tandem mass spectrometry statistical confidence estimation: Why settle for a single decoy?

    PubMed Central

    Keich, Uri; Noble, William Stafford

    2017-01-01

    Estimating the false discovery rate (FDR) among a list of tandem mass spectrum identifications is mostly done through target-decoy competition (TDC). Here we offer two new methods that can use an arbitrarily small number of additional randomly drawn decoy databases to improve TDC. Specifically, “Partial Calibration” utilizes a new meta-scoring scheme that allows us to gradually benefit from the increase in the number of identifications calibration yields, and “Averaged TDC” (a-TDC) reduces the liberal bias of TDC for small FDR values and its variability throughout. Combining a-TDC with “Progressive Calibration” (PC), which attempts to find the “right” number of decoys required for calibration, we see substantial impact in real datasets: when analyzing the Plasmodium falciparum data, it typically yields almost the entire 17% increase in discoveries that “full calibration” yields (at FDR level 0.05) while using 60 times fewer decoys. Our methods are further validated using a novel realistic simulation scheme and, importantly, they apply more generally to the problem of controlling the FDR among discoveries from searching an incomplete database. PMID:29326989
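
    A minimal sketch of the baseline target-decoy idea the paper builds on: estimating q-values from a combined list of target and decoy scores. The per-spectrum competition step and the paper's calibration and averaging refinements are omitted, and the scores are synthetic.

    ```python
    import numpy as np

    def tdc_qvalues(scores, is_decoy):
        """q-values from a combined target/decoy score list:
        FDR at a cutoff ~ (decoys above cutoff + 1) / targets above cutoff."""
        scores = np.asarray(scores, dtype=float)
        decoy = np.asarray(is_decoy, dtype=bool)
        order = np.argsort(-scores)
        d = decoy[order]
        fdr = (np.cumsum(d) + 1) / np.maximum(np.cumsum(~d), 1)
        qvals = np.minimum.accumulate(fdr[::-1])[::-1]   # enforce monotone q-values
        out = np.empty_like(qvals)
        out[order] = qvals
        return out

    rng = np.random.default_rng(0)
    scores = np.r_[rng.normal(2.5, 1.0, 500), rng.normal(0.0, 1.0, 500)]
    labels = np.r_[np.zeros(500, bool), np.ones(500, bool)]   # second half are decoys
    q = tdc_qvalues(scores, labels)
    print((q[~labels] <= 0.05).sum(), "target identifications at 5% FDR")
    ```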

  5. Performance appraisal of VAS radiometry for GOES-4, -5 and -6

    NASA Technical Reports Server (NTRS)

    Chesters, D.; Robinson, W. D.

    1983-01-01

    The first three VISSR Atmospheric Sounders (VAS) were launched on GOES-4, -5, and -6 in 1980, 1981 and 1983. Postlaunch radiometric performance is assessed for noise, biases, registration and reliability, with special attention to calibration and problems in the data processing chain. The postlaunch performance of the VAS radiometer meets its prelaunch design specifications, particularly those related to image formation and noise reduction. The best instrument is carried on GOES-5, currently operational as GOES-EAST. Single sample noise is lower than expected, especially for the small longwave and large shortwave detectors. Detector-to-detector offsets are correctable to within the resolution limits of the instrument. Truncation, zero point and droop errors are insignificant. Absolute calibration errors, estimated from HIRS and from radiation transfer calculations, indicate moderate, but stable biases. Relative calibration errors from scanline to scanline are noticeable, but meet sounding requirements for temporally and spatially averaged sounding fields of view. The VAS instrument is a potentially useful radiometer for mesoscale sounding operations. Image quality is very good. Soundings derived from quality controlled data meet prelaunch requirements when calculated with noise and bias resistant algorithms.

  6. A New Method for Calibrating Perceptual Salience across Dimensions in Infants: The Case of Color vs. Luminance

    ERIC Educational Resources Information Center

    Kaldy, Zsuzsa; Blaser, Erik A.; Leslie, Alan M.

    2006-01-01

    We report a new method for calibrating differences in perceptual salience across feature dimensions, in infants. The problem of inter-dimensional salience arises in many areas of infant studies, but a general method for addressing the problem has not previously been described. Our method is based on a preferential looking paradigm, adapted to…

  7. Calibration plots for risk prediction models in the presence of competing risks.

    PubMed

    Gerds, Thomas A; Andersen, Per K; Kattan, Michael W

    2014-08-15

    A predicted risk of 17% can be called reliable if it can be expected that the event will occur to about 17 of 100 patients who all received a predicted risk of 17%. Statistical models can predict the absolute risk of an event such as cardiovascular death in the presence of competing risks such as death due to other causes. For personalized medicine and patient counseling, it is necessary to check that the model is calibrated in the sense that it provides reliable predictions for all subjects. There are three often encountered practical problems when the aim is to display or test if a risk prediction model is well calibrated. The first is lack of independent validation data, the second is right censoring, and the third is that when the risk scale is continuous, the estimation problem is as difficult as density estimation. To deal with these problems, we propose to estimate calibration curves for competing risks models based on jackknife pseudo-values that are combined with a nearest neighborhood smoother and a cross-validation approach to deal with all three problems. Copyright © 2014 John Wiley & Sons, Ltd.

  8. An Interferometry Imaging Beauty Contest

    NASA Technical Reports Server (NTRS)

    Lawson, Peter R.; Cotton, William D.; Hummel, Christian A.; Monnier, John D.; Zhao, Ming; Young, John S.; Thorsteinsson, Hrobjartur; Meimon, Serge C.; Mugnier, Laurent; LeBesnerais, Guy

    2004-01-01

    We present a formal comparison of the performance of algorithms used for synthesis imaging with optical/infrared long-baseline interferometers. Six different algorithms are evaluated based on their performance with simulated test data. Each set of test data is formatted in the Interferometry Data Exchange Standard and is designed to simulate a specific problem relevant to long-baseline imaging. The data are calibrated power spectra and bispectra measured with a fictitious array, intended to be typical of existing imaging interferometers. The strengths and limitations of each algorithm are discussed.

  9. Vector magnetometer design study: Analysis of a triaxial fluxgate sensor design demonstrates that all MAGSAT Vector Magnetometer specifications can be met

    NASA Technical Reports Server (NTRS)

    Adams, D. F.; Hartmann, U. G.; Lazarow, L. L.; Maloy, J. O.; Mohler, G. W.

    1976-01-01

    The design of the vector magnetometer selected for analysis is capable of exceeding the required accuracy of 5 gamma per vector field component. The principal elements that assure this performance level are very low power dissipation triaxial feedback coils surrounding ring core flux-gates and temperature control of the critical components of two-loop feedback electronics. An analysis of the calibration problem points to the need for improved test facilities.

  10. Refurbishment of durban fixed ukzn lidar for atmospheric studies - current status

    NASA Astrophysics Data System (ADS)

    Sivakumar, Venkataraman

    2018-04-01

    The fixed LIDAR system at the University of KwaZulu-Natal (UKZN) in Durban was installed in 1999 and operated until 2004. In 2004, the system was relocated and operations ceased due to various technical and instrument problems. The restructuring of the LIDAR system was initiated in 2013, and it is now used to measure vertical aerosol profiles in the height range 3-25 km. Here, we describe the present system in detail, including technical specifications and results obtained from a recent LIDAR calibration campaign.

  11. Predicting the performance uncertainty of a 1-MW pilot-scale carbon capture system after hierarchical laboratory-scale calibration and validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Zhijie; Lai, Canhai; Marcy, Peter William

    2017-05-01

    A challenging problem in designing pilot-scale carbon capture systems is to predict, with uncertainty, the adsorber performance and capture efficiency under various operating conditions where no direct experimental data exist. Motivated by this challenge, we previously proposed a hierarchical framework in which relevant parameters of physical models were sequentially calibrated from different laboratory-scale carbon capture unit (C2U) experiments. Specifically, three models of increasing complexity were identified based on the fundamental physical and chemical processes of the sorbent-based carbon capture technology. Results from the corresponding laboratory experiments were used to statistically calibrate the physical model parameters while quantifying some of their inherent uncertainty. The parameter distributions obtained from laboratory-scale C2U calibration runs are used in this study to facilitate prediction at a larger scale where no corresponding experimental results are available. In this paper, we first describe the multiphase reactive flow model for a sorbent-based 1-MW carbon capture system and then analyze results from an ensemble of simulations with the upscaled model. The simulation results are used to quantify uncertainty regarding the design’s predicted efficiency in carbon capture. In particular, we determine the minimum gas flow rate necessary to achieve 90% capture efficiency with 95% confidence.
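
    A toy sketch of the uncertainty-propagation step described above: sample a calibrated parameter distribution, push it through a surrogate efficiency model, and scan flow rates for the smallest one that meets 90% capture with 95% confidence. The kinetic constant, efficiency function, and units below are invented stand-ins for the actual multiphase reactive flow model.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    # Hypothetical calibrated kinetic constant from the laboratory-scale runs
    k_ads = rng.lognormal(mean=0.0, sigma=0.15, size=2000)

    def capture_efficiency(sorbent_flow, k):
        # Toy surrogate: efficiency saturates as sorbent flow increases
        return 1.0 - np.exp(-k * sorbent_flow / 8.0)

    for flow in np.linspace(10.0, 40.0, 61):
        eff = capture_efficiency(flow, k_ads)
        if np.quantile(eff, 0.05) >= 0.90:   # 95% confident of >= 90% capture
            print(f"Minimum flow meeting the target: {flow:.1f} (arbitrary units)")
            break
    ```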

  12. Validation and Calibration of Nuclear Thermal Hydraulics Multiscale Multiphysics Models - Subcooled Flow Boiling Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anh Bui; Nam Dinh; Brian Williams

    In addition to the validation data plan, development of advanced techniques for calibration and validation of complex multiscale, multiphysics nuclear reactor simulation codes is a main objective of the CASL VUQ plan. Advanced modeling of LWR systems normally involves a range of physico-chemical models describing multiple interacting phenomena, such as thermal hydraulics, reactor physics, coolant chemistry, etc., which occur over a wide range of spatial and temporal scales. To a large extent, the accuracy of (and uncertainty in) overall model predictions is determined by the correctness of various sub-models, which are not conservation-laws based, but empirically derived from measurement data. Such sub-models normally require extensive calibration before the models can be applied to analysis of real reactor problems. This work demonstrates a case study of calibration of a common model of subcooled flow boiling, which is an important multiscale, multiphysics phenomenon in LWR thermal hydraulics. The calibration process is based on a new strategy of model-data integration, in which all sub-models are simultaneously analyzed and calibrated using multiple sets of data of different types. Specifically, both data on large-scale distributions of void fraction and fluid temperature and data on small-scale physics of wall evaporation were simultaneously used in this work’s calibration. In a departure from traditional (or common-sense) practice of tuning/calibrating complex models, a modern calibration technique based on statistical modeling and Bayesian inference was employed, which allowed simultaneous calibration of multiple sub-models (and related parameters) using different datasets. Quality of data (relevancy, scalability, and uncertainty) could be taken into consideration in the calibration process. This work presents a step forward in the development and realization of the “CIPS Validation Data Plan” at the Consortium for Advanced Simulation of LWRs to enable quantitative assessment of the CASL modeling of Crud-Induced Power Shift (CIPS) phenomenon, in particular, and the CASL advanced predictive capabilities, in general. This report is prepared for the Department of Energy’s Consortium for Advanced Simulation of LWRs program’s VUQ Focus Area.

  13. Comparative artificial neural network and partial least squares models for analysis of Metronidazole, Diloxanide, Spiramycin and Cliquinol in pharmaceutical preparations.

    PubMed

    Elkhoudary, Mahmoud M; Abdel Salam, Randa A; Hadad, Ghada M

    2014-09-15

    Metronidazole (MNZ) is a widely used antibacterial and amoebicide drug. Therefore, it is important to develop a rapid and specific analytical method for the determination of MNZ in mixture with Spiramycin (SPY), Diloxanide (DIX) and Cliquinol (CLQ) in pharmaceutical preparations. This work describes six simple, sensitive and reliable multivariate calibration methods, namely linear and nonlinear artificial neural networks preceded by genetic algorithm (GA-ANN) and principal component analysis (PCA-ANN), as well as partial least squares (PLS) either alone or preceded by genetic algorithm (GA-PLS), for UV spectrophotometric determination of MNZ, SPY, DIX and CLQ in pharmaceutical preparations with no interference from pharmaceutical additives. The results illustrate the problem of nonlinearity and how models like ANN can handle it. Analytical performance of these methods was statistically validated with respect to linearity, accuracy, precision and specificity. The developed methods demonstrate the ability of the previously mentioned multivariate calibration models to handle and solve UV spectra of the four components' mixtures using a simple and widely used UV spectrophotometer. Copyright © 2014 Elsevier B.V. All rights reserved.
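
    As a hedged illustration of the multivariate calibration idea, the sketch below fits a PLS model on synthetic UV spectra of a four-component mixture (stand-ins for MNZ, SPY, DIX and CLQ). The pure spectra, noise level, and component count are invented; the study also used GA- and PCA-preprocessed ANN models on measured spectra.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(2)
    wl = np.linspace(200, 400, 201)                     # wavelength grid (nm)
    pure = np.stack([np.exp(-((wl - c) / 20.0) ** 2)    # 4 synthetic pure-component spectra
                     for c in (240, 270, 300, 330)])

    C_train = rng.uniform(0.0, 1.0, (40, 4))            # 40 calibration mixtures
    A_train = C_train @ pure + 0.002 * rng.standard_normal((40, 201))

    pls = PLSRegression(n_components=4).fit(A_train, C_train)

    C_test = rng.uniform(0.0, 1.0, (5, 4))
    A_test = C_test @ pure
    print("max abs recovery error:", np.abs(pls.predict(A_test) - C_test).max())
    ```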

  14. Comparative artificial neural network and partial least squares models for analysis of Metronidazole, Diloxanide, Spiramycin and Cliquinol in pharmaceutical preparations

    NASA Astrophysics Data System (ADS)

    Elkhoudary, Mahmoud M.; Abdel Salam, Randa A.; Hadad, Ghada M.

    2014-09-01

    Metronidazole (MNZ) is a widely used antibacterial and amoebicide drug. Therefore, it is important to develop a rapid and specific analytical method for the determination of MNZ in mixture with Spiramycin (SPY), Diloxanide (DIX) and Cliquinol (CLQ) in pharmaceutical preparations. This work describes six simple, sensitive and reliable multivariate calibration methods, namely linear and nonlinear artificial neural networks preceded by genetic algorithm (GA-ANN) and principal component analysis (PCA-ANN), as well as partial least squares (PLS) either alone or preceded by genetic algorithm (GA-PLS), for UV spectrophotometric determination of MNZ, SPY, DIX and CLQ in pharmaceutical preparations with no interference from pharmaceutical additives. The results illustrate the problem of nonlinearity and how models like ANN can handle it. Analytical performance of these methods was statistically validated with respect to linearity, accuracy, precision and specificity. The developed methods demonstrate the ability of the previously mentioned multivariate calibration models to handle and solve UV spectra of the four components’ mixtures using a simple and widely used UV spectrophotometer.

  15. Novel crystal timing calibration method based on total variation

    NASA Astrophysics Data System (ADS)

    Yu, Xingjian; Isobe, Takashi; Watanabe, Mitsuo; Liu, Huafeng

    2016-11-01

    A novel crystal timing calibration method based on total variation (TV), abbreviated as ‘TV merge’, has been developed for a high-resolution positron emission tomography (PET) system. The proposed method was developed for a system with a large number of crystals and can provide timing calibration at the crystal level. In the proposed method, the timing calibration process was formulated as a linear problem. To robustly optimize the timing resolution, a TV constraint was added to the linear equation. Moreover, to solve the computer memory problem associated with the calculation of the timing calibration factors for systems with a large number of crystals, the merge component was used for obtaining the crystal-level timing calibration values. Compared with other conventional methods, the data measured from a standard cylindrical phantom filled with a radioisotope solution were sufficient for performing a high-precision crystal-level timing calibration. In this paper, both simulation and experimental studies were performed to demonstrate the effectiveness and robustness of the TV merge method. We compare the timing resolutions of a 22Na point source, located in the field of view (FOV) of the brain PET system, obtained with various calibration techniques. After implementing the TV merge method, the timing resolution improved from 3.34 ns full width at half maximum (FWHM) to 2.31 ns FWHM.
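
    A schematic version of the formulation described above: per-crystal timing offsets are recovered from pairwise coincidence time differences by least squares with a total-variation penalty. The geometry, pair list, and smoothed TV term are simplifications for illustration; the actual method additionally merges crystals to control memory.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(3)
    n = 50
    # Piecewise-constant "true" per-crystal offsets, which TV favors
    true_t = np.cumsum(rng.choice([0.0, 0.2], size=n, p=[0.9, 0.1]))
    pairs = rng.integers(0, n, size=(2000, 2))
    pairs = pairs[pairs[:, 0] != pairs[:, 1]]
    dt = true_t[pairs[:, 0]] - true_t[pairs[:, 1]] + 0.05 * rng.standard_normal(len(pairs))

    lam = 1.0
    def objective(t):
        resid = t[pairs[:, 0]] - t[pairs[:, 1]] - dt
        tv = np.sum(np.sqrt(np.diff(t) ** 2 + 1e-8))   # smoothed TV so L-BFGS applies
        return resid @ resid + lam * tv

    res = minimize(objective, np.zeros(n), method="L-BFGS-B")
    est = res.x - res.x.mean() + true_t.mean()          # offsets are defined up to a constant
    print(f"RMS calibration error: {np.sqrt(np.mean((est - true_t) ** 2)):.3f}")
    ```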

  16. Calibration of a parsimonious distributed ecohydrological daily model in a data-scarce basin by exclusively using the spatio-temporal variation of NDVI

    NASA Astrophysics Data System (ADS)

    Ruiz-Pérez, Guiomar; Koch, Julian; Manfreda, Salvatore; Caylor, Kelly; Francés, Félix

    2017-12-01

    Ecohydrological modeling studies in developing regions, such as sub-Saharan Africa, often face the problem of extensive parametric requirements and limited available data. Satellite remote sensing data may be able to fill this gap, but require novel methodologies to exploit their spatio-temporal information so that it can be incorporated into model calibration and validation frameworks. The present study tackles this problem by suggesting an automatic calibration procedure, based on empirical orthogonal functions, for distributed ecohydrological daily models. The procedure is tested with the support of remote sensing data in a data-scarce environment - the upper Ewaso Ngiro river basin in Kenya. In the present application, the TETIS-VEG model is calibrated using only NDVI (Normalized Difference Vegetation Index) data derived from MODIS. The results demonstrate that (1) satellite data of vegetation dynamics can be used to calibrate and validate ecohydrological models in water-controlled and data-scarce regions, (2) the model calibrated using only satellite data is able to reproduce both the spatio-temporal vegetation dynamics and the observed discharge at the outlet and (3) the proposed automatic calibration methodology works satisfactorily and it allows for a straightforward incorporation of spatio-temporal data into the calibration and validation framework of a model.
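
    The calibration objective here rests on comparing spatial patterns rather than raw series. Below is a sketch of one plausible EOF-based distance between simulated and observed NDVI fields, computed via SVD on synthetic arrays; the exact metric and model coupling in the paper may differ.

    ```python
    import numpy as np

    def leading_eofs(field, k=3):
        anomalies = field - field.mean(axis=0)      # remove temporal mean per pixel
        _, _, vt = np.linalg.svd(anomalies, full_matrices=False)
        return vt[:k]                               # k leading spatial patterns

    def eof_distance(sim, obs, k=3):
        e_sim, e_obs = leading_eofs(sim, k), leading_eofs(obs, k)
        # EOF signs are arbitrary, so compare each pattern up to sign
        return sum(min(np.linalg.norm(a - b), np.linalg.norm(a + b))
                   for a, b in zip(e_sim, e_obs))

    rng = np.random.default_rng(4)
    obs = rng.random((120, 400))                    # 120 time steps x 400 pixels (synthetic)
    sim = obs + 0.1 * rng.standard_normal(obs.shape)
    print(eof_distance(sim, obs))                   # objective to minimize during calibration
    ```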

  17. Partnership for the Revitalization of National Wind Tunnel Force Measurement Capability

    NASA Technical Reports Server (NTRS)

    Rhew, Ray D.; Skelley, Marcus L.; Woike, Mark R.; Bader, Jon B.; Marshall, Timothy J.

    2009-01-01

    Lack of funding and lack of focus on research over the past several years, coupled with force measurement capabilities being decentralized and distributed across the National Aeronautics and Space Administration (NASA) research centers, has resulted in a significant erosion of (1) capability and infrastructure to produce and calibrate force measurement systems; (2) NASA's working knowledge of those systems; and (3) the quantity of high-quality, full-capability force measurement systems available for use in aeronautics testing. Simultaneously, and at proportional rates, the capability of industry to design, manufacture, and calibrate these test instruments has been eroding primarily because of a lack of investment by the aeronautics community. Technical expertise in this technology area is a core competency in aeronautics testing; it is highly specialized and experience-based, and it represents a niche market for only a few small precision instrument shops in the United States. With this backdrop, NASA's Aeronautics Test Program (ATP) chartered a team to examine the issues and risks associated with the problem, focusing specifically on strain-gage balances. The team partnered with the U.S. Air Force's Arnold Engineering Development Center (AEDC) to exploit their combined capabilities and take a national level government view of the problem. This paper describes the team's approach, its findings, and its recommendations, and the current status of revitalizing the government's balance capability with respect to designing, fabricating, calibrating, and using the instruments.

  18. In pursuit of precision: the calibration of minds and machines in late nineteenth-century psychology.

    PubMed

    Benschop, R; Draaisma, D

    2000-01-01

    A prominent feature of late nineteenth-century psychology was its intense preoccupation with precision. Precision was at once an ideal and an argument: the quest for precision helped psychology to establish its status as a mature science, sharing a characteristic concern with the natural sciences. We will analyse how psychologists set out to produce precision in 'mental chronometry', the measurement of the duration of psychological processes. In his Leipzig laboratory, Wundt inaugurated an elaborate research programme on mental chronometry. We will look at the problem of calibration of experimental apparatus and will describe the intricate material, literary, and social technologies involved in the manufacture of precision. First, we shall discuss some of the technical problems involved in the measurement of ever shorter time-spans. Next, the Cattell-Berger experiments will help us to argue against the received view that all the precision went into the hardware, and practically none into the social organization of experimentation. Experimenters made deliberate efforts to bring themselves and their subjects under a regime of control and calibration similar to that which reigned over the experimental machinery. In Leipzig psychology, the particular blend of material and social technology resulted in a specific object of study: the generalized mind. We will then show that the distribution of precision in experimental psychology outside Leipzig demanded a concerted effort of instruments, texts, and people. It will appear that the forceful attempts to produce precision and uniformity had some rather paradoxical consequences.

  19. An in-situ Mobile pH Calibrator for application with HOV and ROV platform in deep sea environments

    NASA Astrophysics Data System (ADS)

    Tan, C.; Ding, K.; Seyfried, W. E., Jr.

    2014-12-01

    Recently, a novel in-situ sensor calibration instrument, the Mobile pH Calibrator (MpHC), was developed for application with HOV Alvin. It was specifically designed to conduct in-situ pH measurements in deep-sea hydrothermal diffuse fluids with an in-situ calibration function. In general, the sensor calibrator involves three integrated electrodes (pH, dissolved H2 and H2S) and a temperature sensor, all of which are installed in a cell with a volume of ~1 ml. A PEEK check valve cartridge is installed at the inlet end of the cell to guide the flow path during the measurement and calibration processes. Two PEEK tubes are connected at the outlet end of the cell for drawing out hydrothermal fluid and delivering pH buffer fluids. During measurement operation, the pump draws in hydrothermal fluid, which then passes through the check valve directly into the sensing cell. In calibration mode, the pump delivers pH buffers into the cell, while automatically closing the check valve to the outside environment. This probe has two advantages compared to our previous unit used during the KNOX18RR MAR cruise in 2008 and the MARS cabled observatory deployment in 2012. First, in the former design, a 5 cm solenoid valve was fitted to the probe. This enlarged size prevented its application at specific points or in small areas. In this version, the probe has a dimension of only 1.6 cm, allowing easy access to hydrothermal biological environments. Secondly, the maximum temperature of the earlier system was limited by the solenoid valve, precluding operation in excess of 50 °C. The new design avoids this problem, which improves its temperature tolerance. The upper temperature limit is now 100 °C, enabling broader application in hydrothermal diffuse flow systems on the seafloor. During the SVC cruise (AT26-12) in the Gulf of Mexico this year, the MpHC was successfully tested on Alvin dives at depths up to 2600 m, measuring pH with in-situ calibration in a seafloor cold seep environment. Measurement and calibration were also conducted in hydrothermal diffuse flow at temperatures exceeding 70 °C during Alvin dives on a recent cruise (AT26-17) in the ASHES vent field and Main Endeavour Field on the Juan de Fuca Ridge. Data from these seagoing deployments will be presented, with emphasis on both technical and scientific applications.

  20. A simple, accurate, field-portable mixing ratio generator and Rayleigh distillation device

    USDA-ARS?s Scientific Manuscript database

    Routine field calibration of water vapor analyzers has always been a challenging problem for those making long-term flux measurements at remote sites. Automated sampling of standard gases from compressed tanks, the method of choice for CO2 calibration, cannot be used for H2O. Calibrations are typica...

  1. Calibration of Lévy Processes with American Options

    NASA Astrophysics Data System (ADS)

    Achdou, Yves

    We study options on financial assets whose discounted prices are exponentials of Lévy processes. The price of an American vanilla option, as a function of the maturity and the strike, satisfies a linear complementarity problem involving a non-local partial integro-differential operator. It leads to a variational inequality in a suitable weighted Sobolev space. Calibrating the Lévy process may be done by solving an inverse least-squares problem in which the state variable satisfies the previously mentioned variational inequality. We first assume that the volatility is positive: after carefully studying the direct problem, we propose necessary optimality conditions for the least-squares inverse problem. We also consider the direct problem when the volatility is zero.
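
    For context, here is a compact solver for the American-option linear complementarity problem using projected SOR on an implicit finite-difference grid. It uses plain Black-Scholes diffusion only; the paper's operator adds the nonlocal Lévy integro-differential term on top of this, and its calibration wraps an inverse problem around such a direct solve. All parameter values are illustrative.

    ```python
    import numpy as np

    S_max, K, r, sigma, T = 300.0, 100.0, 0.05, 0.3, 1.0
    M, N = 200, 200                          # space and time steps
    S = np.linspace(0.0, S_max, M + 1)
    payoff = np.maximum(K - S, 0.0)          # American put obstacle
    V = payoff.copy()
    dt = T / N

    i = np.arange(1, M)                      # implicit finite-difference coefficients
    a = 0.5 * dt * (sigma**2 * i**2 - r * i)
    b = 1.0 + dt * (sigma**2 * i**2 + r)
    c = 0.5 * dt * (sigma**2 * i**2 + r * i)
    obstacle = payoff[1:M]

    for _ in range(N):                       # march backwards in time
        rhs = V[1:M].copy()
        x = V[1:M].copy()
        for _ in range(50):                  # projected SOR sweeps (omega = 1.5)
            for j in range(M - 1):
                left = x[j - 1] if j > 0 else V[0]
                right = x[j + 1] if j < M - 2 else V[M]
                gs = (rhs[j] + a[j] * left + c[j] * right) / b[j]
                x[j] = max(obstacle[j], x[j] + 1.5 * (gs - x[j]))
        V[1:M] = x
        V[0], V[M] = K, 0.0                  # Dirichlet boundaries for a put

    print(f"American put value at S=100: {np.interp(100.0, S, V):.2f}")
    ```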

  2. Calibration of radio-astronomical data on the cloud. LOFAR, the pathway to SKA

    NASA Astrophysics Data System (ADS)

    Sabater, J.; Sánchez-Expósito, S.; Garrido, J.; Ruiz, J. E.; Best, P. N.; Verdes-Montenegro, L.

    2015-05-01

    The radio interferometer LOFAR (LOw Frequency ARray) is fully operational now. This Square Kilometre Array (SKA) pathfinder allows the observation of the sky at frequencies between 10 and 240 MHz, a relatively unexplored region of the spectrum. LOFAR is a software defined telescope: the data is mainly processed using specialized software running in common computing facilities. That means that the capabilities of the telescope are virtually defined by software and mainly limited by the available computing power. However, the quantity of data produced can quickly reach huge volumes (several Petabytes per day). After the correlation and pre-processing of the data in a dedicated cluster, the final dataset is handed over to the user (typically several Terabytes). The calibration of these data requires a powerful computing facility in which the specific state of the art software under heavy continuous development can be easily installed and updated. That makes this case a perfect candidate for a cloud infrastructure, which adds the advantages of an on-demand, flexible solution. We present our approach to the calibration of LOFAR data using Ibercloud, the cloud infrastructure provided by Ibergrid. With the calibration work-flow adapted to the cloud, we can explore calibration strategies for the SKA and show how private or commercial cloud infrastructures (Ibercloud, Amazon EC2, Google Compute Engine, etc.) can help to solve the problems with big datasets that will be prevalent in the future of astronomy.

  3. DEM Calibration Approach: design of experiment

    NASA Astrophysics Data System (ADS)

    Boikov, A. V.; Savelev, R. V.; Payor, V. A.

    2018-05-01

    The problem of calibrating DEM models is considered in this article. It is proposed to divide the model input parameters into those that require iterative calibration and those that are recommended to be measured directly. A new method for model calibration, based on design of experiments over the iteratively calibrated parameters, is proposed. The experiment is conducted using a specially designed stand. The results are processed with machine vision algorithms. Approximating functions are obtained, and the error of the implemented software and hardware complex is estimated. The prospects of the obtained results are discussed.
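
    A minimal sketch of the design-of-experiments idea, under assumed details: run a full-factorial design over two hypothetical DEM parameters (friction, restitution), fit a linear approximating function to the stand response, and invert it for the parameter matching the measured value. The response function stands in for actual DEM runs.

    ```python
    import itertools
    import numpy as np

    rng = np.random.default_rng(8)
    friction = np.linspace(0.1, 0.9, 5)
    restitution = np.linspace(0.1, 0.7, 4)
    design = np.array(list(itertools.product(friction, restitution)))

    def dem_run(mu, e):
        # Placeholder for a DEM simulation of the test stand (angle of repose)
        return 20.0 + 30.0 * mu - 5.0 * e + rng.normal(0.0, 0.2)

    response = np.array([dem_run(mu, e) for mu, e in design])

    # Fit an approximating function: angle ~ b0 + b1*mu + b2*e
    A = np.column_stack([np.ones(len(design)), design])
    coef, *_ = np.linalg.lstsq(A, response, rcond=None)

    target_angle, e_fixed = 32.0, 0.4   # measured on the stand; restitution measured directly
    mu_cal = (target_angle - coef[0] - coef[2] * e_fixed) / coef[1]
    print(f"Calibrated friction coefficient: {mu_cal:.2f}")
    ```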

  4. Compressor blade clearance measurement using capacitance and phase lock techniques

    NASA Astrophysics Data System (ADS)

    Demers, Rosario N.

    1986-11-01

    The clearance measurement system has several unique features which minimize problems plaguing earlier systems, including tuning stability and sensitivity drift. Both these problems are intensified by the environmental factors present in compressors, i.e., wide temperature fluctuations, vibrations, and conductive contamination of probe tips. The circuitry in this new system provides phase-lock feedback to control tuning and shunt calibration to measure sensitivity. The use of high-frequency excitation lowers the probe tip impedance, thus minimizing the effects of contamination. A prototype has been built and tested. The ability to calibrate has been demonstrated. An eight-channel system is now being constructed for use in the Compressor Research Facility at Wright-Patterson AFB. The efficiency of a turbine engine is to a large extent dependent upon the mechanical tolerances maintained between its moving parts. One critical tolerance is the blade tip clearance. Although this tolerance may not appear severe, the impact on compressor efficiency is dramatic. The penalty in percent efficiency has been shown to be three times the percent clearance-to-blade-span ratio. In addition, each percent loss in compressor efficiency represents a one-half percent loss in specific fuel consumption. Factors which affect blade tip clearance are identified.

  5. A GPU-Based Implementation of the Firefly Algorithm for Variable Selection in Multivariate Calibration Problems

    PubMed Central

    de Paula, Lauro C. M.; Soares, Anderson S.; de Lima, Telma W.; Delbem, Alexandre C. B.; Coelho, Clarimar J.; Filho, Arlindo R. G.

    2014-01-01

    Several variable selection algorithms in multivariate calibration can be accelerated using Graphics Processing Units (GPU). Among these algorithms, the Firefly Algorithm (FA) is a recently proposed metaheuristic that may be used for variable selection. This paper presents a GPU-based FA (FA-MLR) with multiobjective formulation for variable selection in multivariate calibration problems and compares it with some traditional sequential algorithms in the literature. The advantage of the proposed implementation is demonstrated in an example involving a relatively large number of variables. The results showed that the FA-MLR, in comparison with the traditional algorithms, is a more suitable choice and a relevant contribution for the variable selection problem. Additionally, the results also demonstrated that the FA-MLR executed on a GPU can be five times faster than its sequential implementation. PMID:25493625
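
    A compact NumPy sketch of the core firefly update, with a placeholder fitness: each firefly is a continuous vector thresholded at 0.5 to pick variables, as one plausible encoding for variable selection. The real FA-MLR scores an MLR model on spectra and would replace NumPy with a GPU array library (e.g., CuPy) for the vectorized steps; all parameters here are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n_fireflies, n_vars = 20, 30
    beta0, gamma, alpha = 1.0, 1.0, 0.1

    target = np.zeros(n_vars)
    target[:5] = 0.9          # pretend the first five wavelengths carry the signal

    def fitness(x):           # placeholder: real use would score an MLR model
        return np.sum((x - target) ** 2) + 0.01 * np.sum(x > 0.5)

    X = rng.random((n_fireflies, n_vars))
    for _ in range(100):
        f = np.array([fitness(x) for x in X])   # brightness, refreshed once per generation
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if f[j] < f[i]:                 # j is brighter: move i toward j
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(n_vars) - 0.5)
        X = np.clip(X, 0.0, 1.0)

    best = X[np.argmin([fitness(x) for x in X])]
    print("Selected variables:", np.flatnonzero(best > 0.5))
    ```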

  6. A GPU-Based Implementation of the Firefly Algorithm for Variable Selection in Multivariate Calibration Problems.

    PubMed

    de Paula, Lauro C M; Soares, Anderson S; de Lima, Telma W; Delbem, Alexandre C B; Coelho, Clarimar J; Filho, Arlindo R G

    2014-01-01

    Several variable selection algorithms in multivariate calibration can be accelerated using Graphics Processing Units (GPU). Among these algorithms, the Firefly Algorithm (FA) is a recently proposed metaheuristic that may be used for variable selection. This paper presents a GPU-based FA (FA-MLR) with multiobjective formulation for variable selection in multivariate calibration problems and compares it with some traditional sequential algorithms in the literature. The advantage of the proposed implementation is demonstrated in an example involving a relatively large number of variables. The results showed that the FA-MLR, in comparison with the traditional algorithms, is a more suitable choice and a relevant contribution for the variable selection problem. Additionally, the results also demonstrated that the FA-MLR executed on a GPU can be five times faster than its sequential implementation.

  7. Advances in spectroscopic methods for quantifying soil carbon

    USGS Publications Warehouse

    Liebig, Mark; Franzluebbers, Alan J.; Follett, Ronald F.; Hively, W. Dean; Reeves, James B.; McCarty, Gregory W.; Calderon, Francisco

    2012-01-01

    The gold standard for soil C determination is combustion. However, this method requires expensive consumables, is limited to determining total carbon, and restricts the number of samples which can be processed (~100/d). With increased interest in soil C sequestration, faster methods are needed; hence the interest in methods based on diffuse reflectance spectroscopy in the visible, near-infrared or mid-infrared ranges using either proximal or remote sensing. These methods can analyze more samples (2 to 3X/d) or huge areas (imagery) and determine multiple analytes simultaneously, but they require calibrations relating spectral and reference data and have specific problems; i.e., remote sensing is capable of scanning entire watersheds, thus reducing the sampling needed, but is limited to the surface layer of tilled soils and by the difficulty of obtaining proper calibration reference values. The objective of this discussion is to present the state of spectroscopic methods for soil C determination.

  8. A Self-Calibrating Radar Sensor System for Measuring Vital Signs.

    PubMed

    Huang, Ming-Chun; Liu, Jason J; Xu, Wenyao; Gu, Changzhan; Li, Changzhi; Sarrafzadeh, Majid

    2016-04-01

    Vital signs (i.e., heartbeat and respiration) are crucial physiological signals that are useful in numerous medical applications. The process of measuring these signals should be simple, reliable, and comfortable for patients. In this paper, a noncontact self-calibrating vital signs monitoring system based on the Doppler radar is presented. The system hardware and software were designed with a four-tiered layer structure. To enable accurate vital signs measurement, baseband signals in the radar sensor were modeled and a framework for signal demodulation was proposed. Specifically, a signal model identification method was formulated into a quadratically constrained l1 minimization problem and solved using the upper bound and linear matrix inequality (LMI) relaxations. The performance of the proposed system was comprehensively evaluated using three experimental sets, and the results indicated that this system can be used to effectively measure human vital signs.
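
    The quadratically constrained l1 minimization named above can be posed directly in a convex-optimization toolbox for illustration; the paper instead solves it via upper-bound and LMI relaxations. The sketch below uses CVXPY on a synthetic sparse-recovery instance; the matrix, noise level, and tolerance are assumptions.

    ```python
    import cvxpy as cp
    import numpy as np

    rng = np.random.default_rng(6)
    A = rng.standard_normal((60, 120))
    x_true = np.zeros(120)
    x_true[rng.choice(120, 8, replace=False)] = 1.0
    b = A @ x_true + 0.01 * rng.standard_normal(60)

    x = cp.Variable(120)
    problem = cp.Problem(cp.Minimize(cp.norm(x, 1)),          # l1 objective
                         [cp.norm(A @ x - b, 2) <= 0.2])      # quadratic constraint
    problem.solve()
    print("recovered support:", np.flatnonzero(np.abs(x.value) > 0.1))
    ```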

  9. Problems with GH assays and strategies toward standardization.

    PubMed

    Bidlingmaier, Martin

    2008-12-01

    Disorders affecting GH secretion--either GH deficiency or GH excess (acromegaly)--are biochemically defined through peak or nadir concentrations of human GH in response to dynamic tests. Immunoassays employing polyclonal or monoclonal antibodies are routinely used for the analysis of GH concentrations, and many different assays are available on the market today. Unfortunately, the actual value reported for the GH concentration in a specific patient's sample to a large extent depends on the assay method used by the respective laboratory. Variability between assay results exceeds 200%, limiting the applicability of consensus guidelines in clinical practice. Reasons for the heterogeneity in GH assay results include the heterogeneity of the analyte itself, the availability of different preparations for calibration, and the interference from matrix components such as GH-binding protein. Furthermore, the reporting of results in mass units or international units together with the application of variable conversion factors led to confusion. International collaborations proposed measures to improve the comparability of assay results, recommending the use of a single, recombinant calibrator for all assays and reporting only in mass units as first steps. However, because of the differences in epitope specificity of antibodies used in different assays, method-specific cut-off levels for dynamic tests might remain necessary to correctly interpret and compare results from different laboratories.

  10. Calibration of the APEX Model to Simulate Management Practice Effects on Runoff, Sediment, and Phosphorus Loss.

    PubMed

    Bhandari, Ammar B; Nelson, Nathan O; Sweeney, Daniel W; Baffaut, Claire; Lory, John A; Senaviratne, Anomaa; Pierzynski, Gary M; Janssen, Keith A; Barnes, Philip L

    2017-11-01

    Process-based computer models have been proposed as a tool to generate data for Phosphorus (P) Index assessment and development. Although models are commonly used to simulate P loss from agriculture under managements that differ from the calibration data, this use of models has not been fully tested. The objective of this study is to determine if the Agricultural Policy Environmental eXtender (APEX) model can accurately simulate runoff, sediment, total P, and dissolved P loss from 0.4- to 1.5-ha agricultural fields under managements that differ from the calibration data. The APEX model was calibrated with field-scale data from eight different managements at two locations (management-specific models). The calibrated models were then validated, either with the same management used for calibration or with different managements. Location models were also developed by calibrating APEX with data from all managements. The management-specific models resulted in satisfactory performance when used to simulate runoff, total P, and dissolved P within their respective systems, with R2 > 0.50, Nash-Sutcliffe efficiency > 0.30, and percent bias within ±35% for runoff and ±70% for total and dissolved P. When applied outside the calibration management, the management-specific models only met the minimum performance criteria in one-third of the tests. The location models had better model performance when applied across all managements compared with management-specific models. Our results suggest that models only be applied within the managements used for calibration and that data be included from multiple management systems for calibration when using models to assess management effects on P loss or evaluate P Indices. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
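
    The performance criteria quoted above, written out as plain functions for reference: coefficient of determination (R2), Nash-Sutcliffe efficiency (NSE), and percent bias (PBIAS). The data are synthetic, and note that PBIAS sign conventions vary between references.

    ```python
    import numpy as np

    def r2(obs, sim):
        return np.corrcoef(obs, sim)[0, 1] ** 2

    def nse(obs, sim):
        return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def pbias(obs, sim):
        # Positive here means over-prediction; some references flip the sign
        return 100 * np.sum(sim - obs) / np.sum(obs)

    obs = np.array([12.0, 30.5, 8.2, 44.1, 19.7])   # e.g. event runoff (mm), synthetic
    sim = np.array([10.8, 33.0, 9.5, 40.2, 21.3])
    print(r2(obs, sim), nse(obs, sim), pbias(obs, sim))
    ```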

  11. Calibrated tree priors for relaxed phylogenetics and divergence time estimation.

    PubMed

    Heled, Joseph; Drummond, Alexei J

    2012-01-01

    The use of fossil evidence to calibrate divergence time estimation has a long history. More recently, Bayesian Markov chain Monte Carlo has become the dominant method of divergence time estimation, and fossil evidence has been reinterpreted as the specification of prior distributions on the divergence times of calibration nodes. These so-called "soft calibrations" have become widely used but the statistical properties of calibrated tree priors in a Bayesian setting have not been carefully investigated. Here, we clarify that calibration densities, such as those defined in BEAST 1.5, do not represent the marginal prior distribution of the calibration node. We illustrate this with a number of analytical results on small trees. We also describe an alternative construction for a calibrated Yule prior on trees that allows direct specification of the marginal prior distribution of the calibrated divergence time, with or without the restriction of monophyly. This method requires the computation of the Yule prior conditional on the height of the divergence being calibrated. Unfortunately, a practical solution for multiple calibrations remains elusive. Our results suggest that direct estimation of the prior induced by specifying multiple calibration densities should be a prerequisite of any divergence time dating analysis.

  12. Quasi-Static Calibration Method of a High-g Accelerometer

    PubMed Central

    Wang, Yan; Fan, Jinbiao; Zu, Jing; Xu, Peng

    2017-01-01

    To solve the problem of resonance during quasi-static calibration of high-g accelerometers, we deduce the relationship between the minimum excitation pulse width and the resonant frequency of the calibrated accelerometer from the second-order mathematical model of the accelerometer, thereby improving the quasi-static calibration theory. We establish a quasi-static calibration testing system which uses a gas gun to generate high-g acceleration signals and applies a laser interferometer to reproduce the impact acceleration. These signals are used to drive the calibrated accelerometer. By comparing the excitation acceleration signal with the output responses of the calibrated accelerometer, the impact sensitivity of the calibrated accelerometer is obtained. As indicated by the calibration test results, this calibration system produces excitation acceleration signals with a pulse width of less than 1000 μs and realizes quasi-static calibration of high-g accelerometers with resonant frequencies above 20 kHz with a calibration error of 3%. PMID:28230743
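
    A sketch of the reasoning behind the minimum pulse width: drive a second-order accelerometer model with a half-sine pulse and watch the resonant ringing shrink as the pulse grows long relative to the resonant period. The resonant frequency matches the abstract's 20 kHz; the damping ratio and pulse widths are assumed for illustration.

    ```python
    import numpy as np
    from scipy.signal import lti, lsim

    fn, zeta = 20e3, 0.02                             # resonant frequency (Hz), damping ratio
    wn = 2 * np.pi * fn
    sensor = lti([wn**2], [1, 2 * zeta * wn, wn**2])  # unity-gain second-order model

    for width in (50e-6, 200e-6, 1000e-6):            # pulse widths (s)
        t = np.linspace(0, 2 * width, 4000)
        u = np.where(t < width, np.sin(np.pi * t / width), 0.0)  # half-sine excitation
        _, y, _ = lsim(sensor, u, t)
        overshoot = y.max() - u.max()                 # ringing beyond the input peak
        print(f"width {width * 1e6:5.0f} us -> overshoot {overshoot:+.3f}")
    ```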

  13. Data analysis and calibration for a bulk-refractive-index-compensated surface plasmon resonance affinity sensor

    NASA Astrophysics Data System (ADS)

    Chinowsky, Timothy M.; Yee, Sinclair S.

    2002-02-01

    Surface plasmon resonance (SPR) affinity sensing, the problem of bulk refractive index (RI) interference in SPR sensing, and a sensor developed to overcome this problem are briefly reviewed. The sensor uses a design based on Texas Instruments' Spreeta SPR sensor to simultaneously measure both bulk and surface RI. The bulk RI measurement is then used to compensate the surface measurement and remove the effects of bulk RI interference. To achieve accurate compensation, robust data analysis and calibration techniques are necessary. Simple linear data analysis techniques derived from measurements of the sensor response were found to provide a versatile, low noise method for extracting measurements of bulk and surface refractive index from the raw sensor data. Automatic calibration using RI gradients was used to correct the linear estimates, enabling the sensor to produce accurate data even when the sensor has a complicated nonlinear response which varies with time. The calibration procedure is described, and the factors influencing calibration accuracy are discussed. Data analysis and calibration principles are illustrated with an experiment in which sucrose and detergent solutions are used to produce changes in bulk and surface RI, respectively.
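
    A minimal sketch of the compensation step the abstract describes: with calibrated sensitivities in hand, the bulk channel's reading is used to remove bulk-RI interference from the surface channel by inverting a small linear system. The sensitivity matrix below is illustrative, not measured Spreeta values.

    ```python
    import numpy as np

    # Calibrated sensitivities: responses = S @ [d_surface_RI, d_bulk_RI]
    S = np.array([[1.00, 0.85],    # surface-sensitive channel
                  [0.05, 1.00]])   # bulk-sensitive channel

    def compensate(responses):
        """Invert the 2x2 calibration to recover (surface, bulk) RI changes."""
        return np.linalg.solve(S, responses)

    raw = np.array([0.93, 0.99])   # e.g. detergent adsorption during a sucrose step
    d_surface, d_bulk = compensate(raw)
    print(f"surface RI change {d_surface:.3f}, bulk RI change {d_bulk:.3f}")
    ```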

  14. Chaos, Consternation and CALIPSO Calibration: New Strategies for Calibrating the CALIOP 1064 nm Channel

    NASA Technical Reports Server (NTRS)

    Vaughan, Mark; Garnier, Anne; Liu, Zhaoyan; Josset, Damien; Hu, Yongxiang; Lee, Kam-Pui; Hunt, William; Vernier, Jean-Paul; Rodier, Sharon; Pelon, Jacques

    2012-01-01

    The very low signal-to-noise ratios of the 1064 nm CALIOP molecular backscatter signal make it effectively impossible to employ the "clear air" normalization technique typically used to calibrate elastic back-scatter lidars. The CALIPSO mission has thus chosen to cross-calibrate their 1064 nm measurements with respect to the 532 nm data using the two-wavelength backscatter from cirrus clouds. In this paper we discuss several known issues in the version 3 CALIOP 1064 nm calibration procedure, and describe the strategies that will be employed in the version 4 data release to surmount these problems.

  15. Research relative to weather radar measurement techniques

    NASA Technical Reports Server (NTRS)

    Smith, Paul L.

    1992-01-01

    Research relative to weather radar measurement techniques, which involves some investigations related to measurement techniques applicable to meteorological radar systems in Thailand, is reported. A major part of the activity was devoted to instruction and discussion with Thai radar engineers, technicians, and meteorologists concerning the basic principles of radar meteorology and applications to specific problems, including measurement of rainfall and detection of wind shear/microburst hazards. Weather radar calibration techniques were also considered during this project. Most of the activity took place during two visits to Thailand, in December 1990 and February 1992.

  16. SLC-off Landsat-7 ETM+ reflective band radiometric calibration

    USGS Publications Warehouse

    Markham, B.L.; Barsi, J.A.; Thome, K.J.; Barker, J.L.; Scaramuzza, P.L.; Helder, D.L.

    2005-01-01

    Since May 31, 2003, when the scan line corrector (SLC) on the Landsat-7 ETM+ failed, the primary foci of Landsat-7 ETM+ analyses have been on understanding and attempting to fix the problem and later on developing composited products to mitigate the problem. In the meantime, the Image Assessment System personnel and vicarious calibration teams have continued to monitor the radiometric performance of the ETM+ reflective bands. The SLC failure produced no measurable change in the radiometric calibration of the ETM+ bands. No trends in the calibration are definitively present over the mission lifetime, and, if present, are less than 0.5% per year. Detector 12 in Band 7 dropped about 0.5% in response relative to the rest of the detectors in the band in May 2004 and recovered back to within 0.1% of its initial relative gain in October 2004.

  17. The flying hot wire and related instrumentation

    NASA Technical Reports Server (NTRS)

    Coles, D.; Cantwell, B.; Wadcock, A.

    1978-01-01

    A flying hot-wire technique is proposed for studies of separated turbulent flow in wind tunnels. The technique avoids the problem of signal rectification in regions of high turbulence level by moving the probe rapidly through the flow on the end of a rotating arm. New problems which arise include control of effects of torque variation on rotor speed, avoidance of interference from the wake of the moving arms, and synchronization of data acquisition with rotation. Solutions for these problems are described. The self-calibrating feature of the technique is illustrated by a sample X-array calibration.

  18. Solving the forward problem of magnetoacoustic tomography with magnetic induction by means of the finite element method

    NASA Astrophysics Data System (ADS)

    Li, Xun; Li, Xu; Zhu, Shanan; He, Bin

    2009-05-01

    Magnetoacoustic tomography with magnetic induction (MAT-MI) is a recently proposed imaging modality to image the electrical impedance of biological tissue. It combines the good contrast of electrical impedance tomography with the high spatial resolution of sonography. In this paper, a three-dimensional MAT-MI forward problem was investigated using the finite element method (FEM). The corresponding FEM formulae describing the forward problem are introduced. In the finite element analysis, magnetic induction in an object with conductivity values close to biological tissues was first carried out. The stimulating magnetic field was simulated as that generated from a three-dimensional coil. The corresponding acoustic source and field were then simulated. Computer simulation studies were conducted using both concentric and eccentric spherical conductivity models with different geometric specifications. In addition, the grid size for finite element analysis was evaluated for the model calibration and evaluation of the corresponding acoustic field.

  19. Solving the Forward Problem of Magnetoacoustic Tomography with Magnetic Induction by Means of the Finite Element Method

    PubMed Central

    Li, Xun; Li, Xu; Zhu, Shanan; He, Bin

    2010-01-01

    Magnetoacoustic Tomography with Magnetic Induction (MAT-MI) is a recently proposed imaging modality to image the electrical impedance of biological tissue. It combines the good contrast of electrical impedance tomography with the high spatial resolution of sonography. In this paper, a three-dimensional MAT-MI forward problem was investigated using the finite element method (FEM). The corresponding FEM formulas describing the forward problem are introduced. In the finite element analysis, magnetic induction in an object with conductivity values close to biological tissues was first carried out. The stimulating magnetic field was simulated as that generated from a three-dimensional coil. The corresponding acoustic source and field were then simulated. Computer simulation studies were conducted using both concentric and eccentric spherical conductivity models with different geometric specifications. In addition, the grid size for finite element analysis was evaluated for model calibration and evaluation of the corresponding acoustic field. PMID:19351978

  20. Landsat-5 bumper-mode geometric correction

    USGS Publications Warehouse

    Storey, James C.; Choate, Michael J.

    2004-01-01

    The Landsat-5 Thematic Mapper (TM) scan mirror was switched from its primary operating mode to a backup mode in early 2002 in order to overcome internal synchronization problems arising from long-term wear of the scan mirror mechanism. The backup bumper mode of operation removes the constraints on scan start and stop angles enforced in the primary scan angle monitor operating mode, requiring additional geometric calibration effort to monitor the active scan angles. It also eliminates scan timing telemetry used to correct the TM scan geometry. These differences require changes to the geometric correction algorithms used to process TM data. A mathematical model of the scan mirror's behavior when operating in bumper mode was developed. This model includes a set of key timing parameters that characterize the time-varying behavior of the scan mirror bumpers. To simplify the implementation of the bumper-mode model, the bumper timing parameters were recast in terms of the calibration and telemetry data items used to process normal TM imagery. The resulting geometric performance, evaluated over 18 months of bumper-mode operations, though slightly reduced from that achievable in the primary operating mode, is still within the Landsat specifications when the data are processed with the most up-to-date calibration parameters.

  1. Discrete Regularization for Calibration of Geologic Facies Against Dynamic Flow Data

    NASA Astrophysics Data System (ADS)

    Khaninezhad, Mohammad-Reza; Golmohammadi, Azarang; Jafarpour, Behnam

    2018-04-01

    Subsurface flow model calibration involves many more unknowns than measurements, leading to ill-posed problems with nonunique solutions. To alleviate nonuniqueness, the problem is regularized by constraining the solution space using prior knowledge. In certain sedimentary environments, such as fluvial systems, the contrast in hydraulic properties of different facies types tends to dominate the flow and transport behavior, making the effect of within-facies heterogeneity less significant. Hence, flow model calibration in those formations reduces to delineating the spatial structure and connectivity of different lithofacies types and their boundaries. A major difficulty in calibrating such models is honoring the discrete, or piecewise constant, nature of facies distribution. The problem becomes more challenging when complex spatial connectivity patterns with higher-order statistics are involved. This paper introduces a novel formulation for calibration of complex geologic facies by imposing appropriate constraints to recover plausible solutions that honor the spatial connectivity and discreteness of facies models. To incorporate prior connectivity patterns, plausible geologic features are learned from available training models. This is achieved by learning spatial patterns from training data, e.g., via k-SVD sparse learning or traditional Principal Component Analysis. Discrete regularization is introduced as a penalty function to impose solution discreteness while minimizing the mismatch between observed and predicted data. An efficient gradient-based alternating-directions algorithm is combined with variable splitting to minimize the resulting regularized nonlinear least squares objective function. Numerical results show that imposing learned facies connectivity and discreteness as regularization functions leads to geologically consistent solutions that improve facies calibration quality.
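
    One plausible form of such a discreteness penalty, for two facies with property values f1 and f2: the polynomial (x - f1)^2 (x - f2)^2 vanishes exactly at the two facies values, so adding it to the data-mismatch objective pushes each cell toward one of the discrete values. The sketch below (values and step size assumed) runs gradient descent on the penalty alone to show the snapping behavior.

    ```python
    import numpy as np

    f1, f2 = 1.0, 4.0                 # e.g. log-permeability of two facies (assumed)

    def discreteness_penalty(x):
        return np.sum((x - f1) ** 2 * (x - f2) ** 2)

    def discreteness_grad(x):
        # d/dx [(x-f1)^2 (x-f2)^2] = 2(x-f1)(x-f2)[(x-f1) + (x-f2)]
        return 2 * (x - f1) * (x - f2) * ((x - f1) + (x - f2))

    x = np.array([1.2, 2.6, 3.9])     # current estimate over three cells
    step = 0.05
    for _ in range(200):              # toy gradient descent on the penalty alone
        x -= step * discreteness_grad(x)
    print(x)                          # cells snap to ~f1 or ~f2
    ```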

  2. Calibration and characterization of UV sensors for water disinfection

    NASA Astrophysics Data System (ADS)

    Larason, T.; Ohno, Y.

    2006-04-01

    The National Institute of Standards and Technology (NIST), USA, is participating in a project with the American Water Works Association Research Foundation (AwwaRF) to develop new guidelines for ultraviolet (UV) sensor characteristics to monitor the performance of UV water disinfection plants. The current UV water disinfection standards, ÖNORM M5873-1 and M5873-2 (Austria) and DVGW W294-3 (Germany), on the requirements for UV sensors for low-pressure mercury (LPM) and medium-pressure mercury (MPM) lamp systems have been studied. Additionally, the characteristics of various types of UV sensors from several different commercial vendors have been measured and analysed. This information will aid in the development of new guidelines to address issues such as sensor requirements, calibration methods, uncertainty and traceability. Practical problems were found in the calibration methods and evaluation of spectral responsivity requirements for sensors designed for MPM lamp systems. To solve the problems, NIST is proposing an alternative sensor calibration method for MPM lamp systems. A future calibration service is described for UV sensors intended for low- and medium-pressure mercury lamp systems used in water disinfection applications.

  3. Bayesian calibration of the Community Land Model using surrogates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ray, Jaideep; Hou, Zhangshuan; Huang, Maoyi

    2014-02-01

    We present results from the Bayesian calibration of hydrological parameters of the Community Land Model (CLM), which is often used in climate simulations and Earth system models. A statistical inverse problem is formulated for three hydrological parameters, conditional on observations of latent heat surface fluxes over 48 months. Our calibration method uses polynomial and Gaussian process surrogates of the CLM, and solves the parameter estimation problem using a Markov chain Monte Carlo sampler. Posterior probability densities for the parameters are developed for two sites with different soil and vegetation covers. Our method also allows us to examine the structural error in CLM under two error models. We find that surrogate models can be created for CLM in most cases. The posterior distributions are more predictive than the default parameter values in CLM. Climatologically averaging the observations does not modify the parameters' distributions significantly. The structural error model reveals a correlation time-scale which can be used to identify the physical process that could be contributing to it. While the calibrated CLM has a higher predictive skill, the calibration is under-dispersive.
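
    A minimal sketch of surrogate-based Bayesian calibration in one dimension: fit a polynomial surrogate to a few runs of an expensive model, then run a Metropolis sampler against the surrogate. The "model," observation, prior bounds, and proposal width are all synthetic stand-ins for the CLM setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def expensive_model(theta):                     # placeholder for a CLM run
        return 2.0 * theta + 0.5 * theta**2

    # Build a polynomial surrogate from a small design of runs
    design = np.linspace(0.0, 3.0, 10)
    runs = expensive_model(design)
    surrogate = np.poly1d(np.polyfit(design, runs, 2))

    obs = expensive_model(1.7) + rng.normal(0, 0.1)  # synthetic observation
    sigma = 0.1

    def log_post(theta):
        if not 0.0 <= theta <= 3.0:                  # uniform prior bounds
            return -np.inf
        return -0.5 * ((obs - surrogate(theta)) / sigma) ** 2

    chain, theta = [], 1.0
    lp = log_post(theta)
    for _ in range(5000):                            # Metropolis sampler on the surrogate
        prop = theta + rng.normal(0, 0.2)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta)

    print(f"posterior mean {np.mean(chain[1000:]):.2f} (truth 1.7)")
    ```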

  4. The Use of Gamma-Ray Imaging to Improve Portal Monitor Performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ziock, Klaus-Peter; Collins, Jeff; Fabris, Lorenzo

    2008-01-01

    We have constructed a prototype, rapid-deployment portal monitor that uses visible-light and gamma-ray imaging to allow simultaneous monitoring of multiple lanes of traffic from the side of a roadway. Our Roadside Tracker uses automated target acquisition and tracking (TAT) software to identify and track vehicles in visible light images. The field of view of the visible camera overlaps with and is calibrated to that of a one-dimensional gamma-ray imager. The TAT code passes information on when vehicles enter and exit the system field of view and when they cross gamma-ray pixel boundaries. Based on this information, the gamma-ray imager "harvests" the gamma-ray data specific to each vehicle, integrating its radiation signature for the entire time that it is in the field of view. In this fashion we are able to generate vehicle-specific radiation signatures and avoid source confusion problems that plague nonimaging approaches to the same problem.

  5. OPTICAL–NEAR-INFRARED PHOTOMETRIC CALIBRATION OF M DWARF METALLICITY AND ITS APPLICATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hejazi, N.; Robertis, M. M. De; Dawson, P. C., E-mail: nedahej@yorku.ca, E-mail: mmdr@yorku.ca, E-mail: pdawson@trentu.ca

    2015-04-15

    Based on a carefully constructed sample of dwarf stars, a new optical–near-infrared photometric calibration to estimate the metallicity of late-type K and early-to-mid-type M dwarfs is presented. The calibration sample has two parts; the first part includes 18 M dwarfs with metallicities determined by high-resolution spectroscopy and the second part contains 49 dwarfs with metallicities obtained through moderate-resolution spectra. By applying this calibration to a large sample of around 1.3 million M dwarfs from the Sloan Digital Sky Survey and 2MASS, the metallicity distribution of this sample is determined and compared with those of previous studies. Using photometric parallaxes, the Galactic heights of M dwarfs in the large sample are also estimated. Our results show that stars farther from the Galactic plane, on average, have lower metallicity, which can be attributed to the age–metallicity relation. A scarcity of metal-poor dwarf stars in the metallicity distribution relative to the Simple Closed Box Model indicates the existence of the “M dwarf problem,” similar to the previously known G and K dwarf problems. Several more complicated Galactic chemical evolution models which have been proposed to resolve the G and K dwarf problems are tested and it is shown that these models could, to some extent, mitigate the M dwarf problem as well.

  6. Limits of Predictability in Commuting Flows in the Absence of Data for Calibration

    PubMed Central

    Yang, Yingxiang; Herrera, Carlos; Eagle, Nathan; González, Marta C.

    2014-01-01

    The estimation of commuting flows at different spatial scales is a fundamental problem for different areas of study. Many current methods rely on parameters requiring calibration from empirical trip volumes. Their values are often not generalizable to cases without calibration data. To solve this problem, we develop a statistical expression to calculate commuting trips with a quantitative functional form to estimate the model parameter when empirical trip data are not available. We calculate commuting trip volumes at scales from within a city to an entire country, introducing a scaling parameter α to the recently proposed parameter-free radiation model. The model requires only widely available population and facility density distributions. The parameter can be interpreted as the influence of the region scale and the degree of heterogeneity in the facility distribution. We explore in detail the scaling limitations of this problem, namely under which conditions the proposed model can be applied without trip data for calibration. On the other hand, when empirical trip data are available, we show that the proposed model's estimation accuracy is as good as that of other existing models. We validated the model in different regions in the U.S., then successfully applied it in three different countries. PMID:25012599
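    For orientation, the sketch below implements the basic parameter-free radiation model that the paper extends; the scaling parameter α itself is not reproduced here. The populations, coordinates, and outgoing-trip totals are invented for illustration.

```python
import numpy as np

def radiation_flows(pop, xy, trips_out):
    """Parameter-free radiation model: T_ij = T_i * m_i*n_j /
    ((m_i + s_ij) * (m_i + n_j + s_ij)), where s_ij is the population
    inside the circle of radius r_ij around i, excluding i and j."""
    n = len(pop)
    dist = np.hypot(xy[:, 0, None] - xy[None, :, 0],
                    xy[:, 1, None] - xy[None, :, 1])
    T = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            mask = dist[i] < dist[i, j]   # i lies in its own circle; j does not
            s = pop[mask].sum() - pop[i]
            T[i, j] = (trips_out[i] * pop[i] * pop[j]
                       / ((pop[i] + s) * (pop[i] + pop[j] + s)))
    return T

# hypothetical toy region: four towns with 10% of residents commuting out
pop = np.array([1000.0, 500.0, 2000.0, 750.0])
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [2.0, 1.0]])
print(radiation_flows(pop, xy, trips_out=0.1 * pop).round(1))
```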

  7. 40 CFR Appendix I to Part 94 - Emission-Related Engine Parameters and Specifications

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... Temperature control system calibration. 4. Maximum allowable inlet air restriction. III. Fuel System. 1. General. a. Engine idle speed. 2. Fuel injection—compression ignition engines. a. Control parameters and calibrations. b. Transient enrichment system calibration. c. Air-fuel flow calibration. d. Altitude...

  8. 40 CFR Appendix I to Part 94 - Emission-Related Engine Parameters and Specifications

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    .... Temperature control system calibration. 4. Maximum allowable inlet air restriction. III. Fuel System. 1. General. a. Engine idle speed. 2. Fuel injection—compression ignition engines. a. Control parameters and calibrations. b. Transient enrichment system calibration. c. Air-fuel flow calibration. d. Altitude...

  9. 40 CFR Appendix I to Part 94 - Emission-Related Engine Parameters and Specifications

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    .... Temperature control system calibration. 4. Maximum allowable inlet air restriction. III. Fuel System. 1. General. a. Engine idle speed. 2. Fuel injection—compression ignition engines. a. Control parameters and calibrations. b. Transient enrichment system calibration. c. Air-fuel flow calibration. d. Altitude...

  10. 40 CFR Appendix I to Part 94 - Emission-Related Engine Parameters and Specifications

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    .... Temperature control system calibration. 4. Maximum allowable inlet air restriction. III. Fuel System. 1. General. a. Engine idle speed. 2. Fuel injection—compression ignition engines. a. Control parameters and calibrations. b. Transient enrichment system calibration. c. Air-fuel flow calibration. d. Altitude...

  11. An investigation into force-moment calibration techniques applicable to a magnetic suspension and balance system. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Eskins, Jonathan

    1988-01-01

    The problem of determining the forces and moments acting on a wind tunnel model suspended in a Magnetic Suspension and Balance System is addressed. Two calibration methods were investigated for three types of model cores, i.e., Alnico, Samarium-Cobalt, and a superconducting solenoid. Both methods involve calibrating the currents in the electromagnetic array against known forces and moments. The first is a static calibration method using calibration weights and a system of pulleys. The other method, dynamic calibration, involves oscillating the model and using its inertia to provide calibration forces and moments. Static calibration data, found to produce the most reliable results, is presented for three degrees of freedom at 0, 15, and -10 deg angle of attack. Theoretical calculations are hampered by the inability to represent iron-cored electromagnets. Dynamic calibrations, despite being quicker and easier to perform, are not as accurate as static calibrations. Data for dynamic calibrations at 0 and 15 deg is compared with the relevant static data acquired. Distortion of oscillation traces is cited as a major source of error in dynamic calibrations.

  12. Model Robust Calibration: Method and Application to Electronically-Scanned Pressure Transducers

    NASA Technical Reports Server (NTRS)

    Walker, Eric L.; Starnes, B. Alden; Birch, Jeffery B.; Mays, James E.

    2010-01-01

    This article presents the application of a recently developed statistical regression method to the controlled instrument calibration problem. The statistical method of Model Robust Regression (MRR), developed by Mays, Birch, and Starnes, is shown to improve instrument calibration by reducing the reliance of the calibration on a predetermined parametric (e.g. polynomial, exponential, logarithmic) model. This is accomplished by allowing fits from the predetermined parametric model to be augmented by a certain portion of a fit to the residuals from the initial regression using a nonparametric (locally parametric) regression technique. The method is demonstrated for the absolute scale calibration of silicon-based pressure transducers.
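    The sketch below illustrates the Model Robust Regression idea under stated assumptions: a low-order parametric calibration fit is augmented by a fraction λ of a nonparametric kernel fit to its residuals. The mixing weight, bandwidth, and synthetic transducer data are illustrative choices, not those of Mays, Birch, and Starnes.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 10, 60)                 # applied pressure (arbitrary units)
y = 2.0 * x + 0.3 * np.sin(2 * x) + 0.05 * rng.standard_normal(60)  # sensor output

# 1) parametric stage: low-order polynomial calibration curve
p = np.polyfit(x, y, deg=1)
resid = y - np.polyval(p, x)

# 2) nonparametric stage: Nadaraya-Watson kernel smoother on the residuals
def kernel_smooth(x_train, r_train, x_eval, h=0.4):
    w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / h) ** 2)
    return (w @ r_train) / w.sum(axis=1)

lam = 0.7                                  # portion of the residual fit used
y_hat = np.polyval(p, x) + lam * kernel_smooth(x, resid, x)

print("RMSE, parametric only:", np.sqrt(np.mean(resid ** 2)).round(4))
print("RMSE, model robust:   ", np.sqrt(np.mean((y - y_hat) ** 2)).round(4))
```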

  13. Computation and analysis for a constrained entropy optimization problem in finance

    NASA Astrophysics Data System (ADS)

    He, Changhong; Coleman, Thomas F.; Li, Yuying

    2008-12-01

    In [T. Coleman, C. He, Y. Li, Calibrating volatility function bounds for an uncertain volatility model, Journal of Computational Finance (2006) (submitted for publication)], an entropy minimization formulation has been proposed to calibrate an uncertain volatility option pricing model (UVM) from market bid and ask prices. To avoid potential infeasibility due to numerical error, a quadratic penalty function approach is applied. In this paper, we show that the solution to the quadratic penalty problem can be obtained by minimizing an objective function which can be evaluated via solving a Hamilton-Jacobi-Bellman (HJB) equation. We prove that the implicit finite difference solution of this HJB equation converges to its viscosity solution. In addition, we provide computational examples illustrating the accuracy of the calibration.

  14. Efficient calibration for imperfect computer models

    DOE PAGES

    Tuo, Rui; Wu, C. F. Jeff

    2015-12-01

    Many computer models contain unknown parameters which need to be estimated using physical observations. Furthermore, the calibration method based on Gaussian process models may lead to unreasonable estimates for imperfect computer models. In this work, we extend that study to calibration problems with stochastic physical data. We propose a novel method, called the L2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied; theoretical analysis shows that it is consistent but not efficient. Numerical examples show that the proposed method outperforms the existing ones.

  15. Axial calibration methods of piezoelectric load sharing dynamometer

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Chang, Qingbing; Ren, Zongjin; Shao, Jun; Wang, Xinlei; Tian, Yu

    2018-06-01

    The relationship between the input and output of a load sharing dynamometer is strongly non-linear across different loading points in a plane, so precise calibration of this non-linear relationship is essential for accurate force measurement. In this paper, calibration experiments at different loading points in a plane are first performed on a piezoelectric load sharing dynamometer. The load sharing testing system is then calibrated using both the BP (back-propagation) algorithm and the ELM (Extreme Learning Machine) algorithm. The results show that ELM calibrates the non-linear relationship between the input and output of the load sharing dynamometer at different loading points in a plane better than BP, which verifies that the ELM algorithm is feasible for solving this non-linear force measurement problem.
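    A minimal Extreme Learning Machine sketch, assuming hypothetical dynamometer data: the hidden-layer weights are drawn at random and fixed, so the only training step is a linear least-squares solve for the output weights. That single solve, rather than iterative back-propagation, is what makes ELM fast for calibrating a non-linear input-output map.

```python
import numpy as np

rng = np.random.default_rng(3)
n_samples, n_channels, n_hidden = 200, 4, 50

X = rng.random((n_samples, n_channels))              # dynamometer channel outputs
F = (X ** 2).sum(axis=1) + 0.5 * np.prod(X, axis=1)  # stand-in "true" force

W = rng.standard_normal((n_channels, n_hidden))      # fixed random input weights
b = rng.standard_normal(n_hidden)                    # fixed random biases
H = np.tanh(X @ W + b)                               # hidden-layer activations

beta, *_ = np.linalg.lstsq(H, F, rcond=None)         # least-squares output weights

F_hat = H @ beta
print("training RMSE:", np.sqrt(np.mean((F - F_hat) ** 2)).round(4))
```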

  16. Motion data classification on the basis of dynamic time warping with a cloud point distance measure

    NASA Astrophysics Data System (ADS)

    Switonski, Adam; Josinski, Henryk; Zghidi, Hafedh; Wojciechowski, Konrad

    2016-06-01

    The paper deals with the problem of classifying model-free motion data. A nearest neighbors classifier based on comparison by the Dynamic Time Warping transform with a cloud point distance measure is proposed. The classification utilizes both specific gait features, reflected by the movements of successive skeleton joints, and anthropometric data. To validate the proposed approach, the human gait identification challenge problem is considered. The motion capture database, containing data from 30 different humans collected in the Human Motion Laboratory of the Polish-Japanese Academy of Information Technology, is used. The achieved results are satisfactory: the obtained accuracy of human recognition exceeds 90%. Moreover, the applied cloud point distance measure does not depend on the calibration process of the motion capture system, which results in reliable validation.
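    The sketch below shows the dynamic time warping recursion with a pluggable per-frame distance, the structure used when comparing motion sequences of unequal length. The mean joint-to-joint Euclidean distance is a simple stand-in for the paper's cloud point distance measure, and the gait sequences are synthetic.

```python
import numpy as np

def frame_dist(a, b):
    # per-frame distance between two sets of 3D joint positions
    return np.linalg.norm(a - b, axis=1).mean()

def dtw(seq_a, seq_b):
    # classic O(n*m) dynamic time warping with the frame distance above
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = frame_dist(seq_a[i - 1], seq_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# two synthetic gait sequences of unequal length, 20 joints per frame;
# a nearest-neighbour classifier assigns the label of the training
# sequence with the smallest DTW distance
rng = np.random.default_rng(4)
gait_a = rng.random((30, 20, 3))
gait_b = rng.random((42, 20, 3))
print("DTW distance:", dtw(gait_a, gait_b).round(3))
```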

  17. Space plasma contactor research, 1988

    NASA Technical Reports Server (NTRS)

    Williams, John D.; Wilbur, Paul J.

    1989-01-01

    Results of experiments conducted on hollow cathode-based plasma contactors are reported. Specific tests in which attempts were made to vary plasma conditions in the simulated ionospheric plasma are described. Experimental results showing the effects of contactor flowrate and ion collecting surface size on contactor performance and contactor plasma plume geometry are presented. In addition to this work, one-dimensional solutions to spherical and cylindrical space-charge limited double-sheath problems are developed. A technique is proposed that can be used to apply these solutions to the problem of current flow through elongated double-sheaths that separate two cold plasmas. Two conference papers which describe the essential features of the plasma contacting process and present data that should facilitate calibration of comprehensive numerical models of the plasma contacting process are also included.

  18. Standing on the shoulders of giants: improving medical image segmentation via bias correction.

    PubMed

    Wang, Hongzhi; Das, Sandhitsu; Pluta, John; Craige, Caryne; Altinay, Murat; Avants, Brian; Weiner, Michael; Mueller, Susanne; Yushkevich, Paul

    2010-01-01

    We propose a simple strategy to improve automatic medical image segmentation. The key idea is that, without a deep understanding of a segmentation method, we can still improve its performance by directly calibrating its results with respect to manual segmentation. We formulate the calibration process as a bias correction problem, which is addressed by machine learning using training data. We apply this methodology to three segmentation problems/methods and show significant improvements for all of them.

  19. Probabilistic methods for sensitivity analysis and calibration in the NASA challenge problem

    DOE PAGES

    Safta, Cosmin; Sargsyan, Khachik; Najm, Habib N.; ...

    2015-01-01

    In this study, a series of algorithms are proposed to address the problems in the NASA Langley Research Center Multidisciplinary Uncertainty Quantification Challenge. A Bayesian approach is employed to characterize and calibrate the epistemic parameters based on the available data, whereas a variance-based global sensitivity analysis is used to rank the epistemic and aleatory model parameters. A nested sampling of the aleatory–epistemic space is proposed to propagate uncertainties from model parameters to output quantities of interest.

  20. Probabilistic methods for sensitivity analysis and calibration in the NASA challenge problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Safta, Cosmin; Sargsyan, Khachik; Najm, Habib N.

    In this study, a series of algorithms are proposed to address the problems in the NASA Langley Research Center Multidisciplinary Uncertainty Quantification Challenge. A Bayesian approach is employed to characterize and calibrate the epistemic parameters based on the available data, whereas a variance-based global sensitivity analysis is used to rank the epistemic and aleatory model parameters. A nested sampling of the aleatory–epistemic space is proposed to propagate uncertainties from model parameters to output quantities of interest.

  1. Experience from the in-flight calibration of the Extreme Ultraviolet Explorer (EUVE) and Upper Atmosphere Research Satellite (UARS) fixed head star trackers (FHSTs)

    NASA Technical Reports Server (NTRS)

    Lee, Michael

    1995-01-01

    Since the original post-launch calibration of the FHSTs (Fixed Head Star Trackers) on EUVE (Extreme Ultraviolet Explorer) and UARS (Upper Atmosphere Research Satellite), the Flight Dynamics task has continued to analyze FHST performance. The algorithm used for in-flight alignment of spacecraft sensors is described, and the equations for the errors in the relative alignment for the simple two-star-tracker case are shown. Simulated data and real data are used to compute the covariance of the relative alignment errors. Several methods for correcting the alignment are compared and the results analyzed. The specific problems seen on orbit with UARS and EUVE are then discussed. UARS has experienced anomalous tracker performance on an FHST, resulting in continuous variation in apparent tracker alignment. On EUVE, the FHST residuals from the attitude determination algorithm showed a dependence on the direction of roll during survey mode. This dependence is traced back to time tagging errors, and the original post-launch alignment is found to be in error due to the impact of the time tagging errors on the alignment algorithm. The methods used by the FDF (Flight Dynamics Facility) to correct for these problems are described.

  2. Recent Surface Reflectance Measurement Campaigns with Emphasis on Best Practices, SI Traceability and Uncertainty Estimation

    NASA Technical Reports Server (NTRS)

    Helder, Dennis; Thome, Kurtis John; Aaron, Dave; Leigh, Larry; Czapla-Myers, Jeff; Leisso, Nathan; Biggar, Stuart; Anderson, Nik

    2012-01-01

    A significant problem facing the optical satellite calibration community is limited knowledge of the uncertainties associated with fundamental measurements, such as surface reflectance, used to derive satellite radiometric calibration estimates. In addition, it is difficult to compare the capabilities of calibration teams around the globe, which leads to differences in the estimated calibration of optical satellite sensors. This paper reports on two recent field campaigns that were designed to isolate common uncertainties within and across calibration groups, particularly with respect to ground-based surface reflectance measurements. Initial results from these efforts suggest the uncertainties can be as low as 1.5% to 2.5%. In addition, methods for improving the cross-comparison of calibration teams are suggested that can potentially reduce the differences in the calibration estimates of optical satellite sensors.

  3. Integrating ecosystems measurements from multiple eddy-covariance sites to a simple model of ecosystem process - Are there possibilities for a uniform model calibration?

    NASA Astrophysics Data System (ADS)

    Minunno, Francesco; Peltoniemi, Mikko; Launiainen, Samuli; Mäkelä, Annikki

    2014-05-01

    Biogeochemical models quantify the material and energy flux exchanges between biosphere, atmosphere and soil, but there is still considerable uncertainty underpinning model structure and parametrization. The increasing availability of data from multiple sources provides useful information for model calibration and validation at different space and time scales. We calibrated the simplified ecosystem process model PRELES to data from multiple sites, with the following objective: to compare a multi-site calibration against site-specific calibrations, in order to test whether PRELES is a model of general applicability and how well one parameterization can predict ecosystem fluxes. Model calibration and evaluation were carried out by means of Bayesian methods; Bayesian calibration (BC) and Bayesian model comparison (BMC) were used to quantify the uncertainty in model parameters and model structure. Evapotranspiration (ET) and gross primary production (GPP) measurements collected at nine sites in Finland and Sweden were used in the study; half of the dataset was used for model calibration and half for the comparative analyses. Ten BCs were performed; the model was independently calibrated for each of the nine sites (site-specific calibrations), and a multi-site calibration was achieved using the data from all the sites in one BC. Then nine BMCs were carried out, one for each site, using output from the multi-site and the site-specific versions of PRELES. Similar estimates were obtained for the parameters to which model outputs are most sensitive. Not surprisingly, the joint posterior distribution achieved through the multi-site calibration was characterized by lower uncertainty, because more data were involved in the calibration process. No significant differences were encountered in the predictions of the multi-site and site-specific versions of PRELES, and after BMC we concluded that the model can be reliably used at regional scale to simulate carbon and water fluxes of boreal forests. Despite being a simple model, PRELES provided good estimates of GPP and ET; only for one site did the multi-site version of PRELES underestimate water fluxes. Our study implies a convergence of GPP and water processes in the boreal zone, to the extent that their plausible prediction is possible with a simple model using a global parameterization.

  4. Accelerated dynamic cardiac MRI exploiting sparse-Kalman-smoother self-calibration and reconstruction (k  -  t SPARKS)

    NASA Astrophysics Data System (ADS)

    Park, Suhyung; Park, Jaeseok

    2015-05-01

    Accelerated dynamic MRI, which exploits spatiotemporal redundancies in k  -  t space and coil dimension, has been widely used to reduce the number of signal encodings and thus increase imaging efficiency with minimal loss of image quality. Nonetheless, particularly in cardiac MRI it still suffers from artifacts and amplified noise in the presence of time-drifting coil sensitivity due to relative motion between coil and subject (e.g. free breathing). Furthermore, a substantial number of additional calibrating signals must be acquired to warrant accurate calibration of coil sensitivity. In this work, we propose a novel, accelerated dynamic cardiac MRI with sparse-Kalman-smoother self-calibration and reconstruction (k  -  t SPARKS), which is robust to time-varying coil sensitivity even with a small number of calibrating signals. The proposed k  -  t SPARKS incorporates Kalman-smoother self-calibration in k  -  t space and sparse signal recovery in x  -   f space into a single optimization problem, leading to iterative, joint estimation of time-varying convolution kernels and missing signals in k  -  t space. In the Kalman-smoother calibration, motion-induced uncertainties over the entire time frames were included in modeling the state transition, while a coil-dependent noise statistic was employed in describing the measurement process. The sparse signal recovery iteratively alternates with the self-calibration to tackle the ill-conditioning problem potentially resulting from insufficient calibrating signals. Simulations and experiments were performed using both the proposed and conventional methods for comparison, revealing that the proposed k  -  t SPARKS yields higher signal-to-error ratio and superior temporal fidelity in both breath-hold and free-breathing cardiac applications over all reduction factors.

  5. Accelerated dynamic cardiac MRI exploiting sparse-Kalman-smoother self-calibration and reconstruction (k  -  t SPARKS).

    PubMed

    Park, Suhyung; Park, Jaeseok

    2015-05-07

    Accelerated dynamic MRI, which exploits spatiotemporal redundancies in k  -  t space and coil dimension, has been widely used to reduce the number of signal encodings and thus increase imaging efficiency with minimal loss of image quality. Nonetheless, particularly in cardiac MRI it still suffers from artifacts and amplified noise in the presence of time-drifting coil sensitivity due to relative motion between coil and subject (e.g. free breathing). Furthermore, a substantial number of additional calibrating signals must be acquired to warrant accurate calibration of coil sensitivity. In this work, we propose a novel, accelerated dynamic cardiac MRI with sparse-Kalman-smoother self-calibration and reconstruction (k  -  t SPARKS), which is robust to time-varying coil sensitivity even with a small number of calibrating signals. The proposed k  -  t SPARKS incorporates Kalman-smoother self-calibration in k  -  t space and sparse signal recovery in x  -   f space into a single optimization problem, leading to iterative, joint estimation of time-varying convolution kernels and missing signals in k  -  t space. In the Kalman-smoother calibration, motion-induced uncertainties over the entire time frames were included in modeling the state transition, while a coil-dependent noise statistic was employed in describing the measurement process. The sparse signal recovery iteratively alternates with the self-calibration to tackle the ill-conditioning problem potentially resulting from insufficient calibrating signals. Simulations and experiments were performed using both the proposed and conventional methods for comparison, revealing that the proposed k  -  t SPARKS yields higher signal-to-error ratio and superior temporal fidelity in both breath-hold and free-breathing cardiac applications over all reduction factors.

  6. Laboratory evaluation of the Sequoia Scientific LISST-ABS acoustic backscatter sediment sensor

    USGS Publications Warehouse

    Snazelle, Teri T.

    2017-12-18

    Sequoia Scientific’s LISST-ABS is an acoustic backscatter sensor designed to measure suspended-sediment concentration at a point source. Three LISST-ABS were evaluated at the U.S. Geological Survey (USGS) Hydrologic Instrumentation Facility (HIF). Serial numbers 6010, 6039, and 6058 were assessed for accuracy in solutions with varying particle-size distributions and for the effect of temperature on sensor accuracy. Certified sediment samples composed of different ranges of particle size were purchased from Powder Technology Inc. These sediment samples were 30–80-micron (µm) Arizona Test Dust; less than 22-µm ISO 12103-1, A1 Ultrafine Test Dust; and 149-µm MIL-STD 810E Silica Dust. The sensor was able to accurately measure suspended-sediment concentration when calibrated with sediment of the same particle-size distribution as the sediment being measured. Overall, testing demonstrated that sensors calibrated with finer sized sediments overdetect sediment concentrations with coarser sized sediments, and sensors calibrated with coarser sized sediments do not detect increases in sediment concentrations from small and fine sediments. These test results are not unexpected for an acoustic-backscatter device and stress the need for using accurate site-specific particle-size distributions during sensor calibration. When calibrated for ultrafine dust with a less than 22-µm particle size (silt) and with the Arizona Test Dust with a 30–80-µm range, the data from sensor 6039 were biased high when fractions of the coarser (149-µm) Silica Dust were added. Data from sensor 6058 showed similar results, with an elevated response to coarser material when calibrated with a finer particle-size distribution and a lack of detection when subjected to finer particle-size sediment. Sensor 6010 was also tested for the effect of dissimilar particle size during the calibration and showed little effect. Subsequent testing revealed problems with this sensor, including inadequate temperature compensation, making its data questionable. The sensor was replaced by Sequoia Scientific with serial number 6039. Results from the extended temperature testing showed proper temperature compensation for sensor 6039, and results from the dissimilar calibration/testing particle-size distribution closely corroborated the results from sensor 6058.

  7. Adaptive scene-based correction algorithm for removal of residual fixed pattern noise in microgrid image data

    NASA Astrophysics Data System (ADS)

    Ratliff, Bradley M.; LeMaster, Daniel A.

    2012-06-01

    Pixel-to-pixel response nonuniformity is a common problem that affects nearly all focal plane array sensors. This results in a frame-to-frame fixed pattern noise (FPN) that causes an overall degradation in collected data. FPN is often compensated for through the use of blackbody calibration procedures; however, FPN is a particularly challenging problem because the detector responsivities drift relative to one another in time, requiring that the sensor be recalibrated periodically. The calibration process is obstructive to sensor operation and is therefore only performed at discrete intervals in time. Thus, any drift that occurs between calibrations (along with error in the calibration sources themselves) causes varying levels of residual calibration error to be present in the data at all times. Polarimetric microgrid sensors are particularly sensitive to FPN due to the spatial differencing involved in estimating the Stokes vector images. While many techniques exist in the literature to estimate FPN for conventional video sensors, few have been proposed to address the problem in microgrid imaging sensors. Here we present a scene-based nonuniformity correction technique for microgrid sensors that is able to reduce residual fixed pattern noise while preserving radiometry under a wide range of conditions. The algorithm requires a low number of temporal data samples to estimate the spatial nonuniformity and is computationally efficient. We demonstrate the algorithm's performance using real data from the AFRL PIRATE and University of Arizona LWIR microgrid sensors.
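    A generic baseline for the problem setting can be sketched under the "constant statistics" assumption: given enough frame-to-frame scene motion, every pixel sees similar scene statistics over time, so the per-pixel temporal mean estimates the fixed-pattern offsets. This is only an illustrative baseline, not the algorithm of the paper, and the frame data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(9)
frames = rng.random((1000, 64, 64)) * 100        # moving-scene stand-in
offsets = 5.0 * rng.standard_normal((64, 64))    # true fixed-pattern offsets
raw = frames + offsets                           # corrupted detector output

# per-pixel temporal mean minus global mean estimates the offset pattern
offset_hat = raw.mean(axis=0) - raw.mean()

err = offset_hat - (offsets - offsets.mean())    # offsets known only up to a constant
print("FPN rms before:", offsets.std().round(2), "residual after:", err.std().round(2))
```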

  8. Evaluation of Piecewise Polynomial Equations for Two Types of Thermocouples

    PubMed Central

    Chen, Andrew; Chen, Chiachung

    2013-01-01

    Thermocouples are the most frequently used sensors for temperature measurement because of their wide applicability, long-term stability and high reliability. However, one of the major utilization problems is the linearization of the transfer relation between temperature and output voltage of thermocouples. The linear calibration equation and its modules could be improved by using regression analysis to help solve this problem. In this study, two types of thermocouple and five temperature ranges were selected to evaluate the fitting agreement of different-order polynomial equations. Two quantitative criteria, the average of the absolute error values |e|_ave and the standard deviation of the calibration equation e_std, were used to evaluate the accuracy and precision of these calibration equations. The optimal order of polynomial equations differed with the temperature range. The accuracy and precision of the calibration equation could be improved significantly with an adequate higher degree polynomial equation. The technique could be applied with hardware modules to serve as an intelligent sensor for temperature measurement. PMID:24351627
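    The evaluation above can be sketched with synthetic (voltage, temperature) pairs standing in for thermocouple reference data: polynomial calibration equations of increasing order are fitted and compared via the two criteria, here written |e|_ave and e_std. The response curve and noise level are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)
T = np.linspace(0, 400, 80)                                       # deg C
V = 41e-3 * T + 2e-6 * T ** 2 + 0.02 * rng.standard_normal(80)    # mV, synthetic

for order in (1, 2, 3, 4):
    coeffs = np.polyfit(V, T, deg=order)    # inverse calibration curve T = f(V)
    e = T - np.polyval(coeffs, V)
    e_ave = np.mean(np.abs(e))              # average of absolute errors
    # standard deviation of the calibration equation, n - (order + 1) dof
    e_std = np.sqrt(np.sum(e ** 2) / (len(T) - order - 1))
    print(f"order {order}: |e|_ave = {e_ave:.3f}  e_std = {e_std:.3f}")
```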

  9. Turbine blade and vane heat flux sensor development, phase 1

    NASA Technical Reports Server (NTRS)

    Atkinson, W. H.; Cyr, M. A.; Strange, R. R.

    1984-01-01

    Heat flux sensors available for installation in the hot section airfoils of advanced aircraft gas turbine engines were developed. Two heat flux sensors were designed, fabricated, calibrated, and tested, and the measurement techniques were compared in an atmospheric pressure combustor rig test. The sensors, an embedded thermocouple and a Gordon gauge, met the geometric and fabricability requirements and could withstand the hot section environmental conditions. Calibration data indicate that these sensors yielded repeatable results and have the potential to meet the accuracy goal of measuring local heat flux to within 5%. Thermal cycle tests and thermal soak tests indicated that the sensors are capable of surviving extended periods of exposure to the environmental conditions in the turbine. Problems in calibration of the sensors caused by severe non-one-dimensional heat flow were encountered. Modifications to the calibration techniques are needed to minimize this problem, and proof testing of the sensors in an engine is needed to verify the designs.

  10. Turbine blade and vane heat flux sensor development, phase 1

    NASA Astrophysics Data System (ADS)

    Atkinson, W. H.; Cyr, M. A.; Strange, R. R.

    1984-08-01

    Heat flux sensors available for installation in the hot section airfoils of advanced aircraft gas turbine engines were developed. Two heat flux sensors were designed, fabricated, calibrated, and tested, and the measurement techniques were compared in an atmospheric pressure combustor rig test. The sensors, an embedded thermocouple and a Gordon gauge, met the geometric and fabricability requirements and could withstand the hot section environmental conditions. Calibration data indicate that these sensors yielded repeatable results and have the potential to meet the accuracy goal of measuring local heat flux to within 5%. Thermal cycle tests and thermal soak tests indicated that the sensors are capable of surviving extended periods of exposure to the environmental conditions in the turbine. Problems in calibration of the sensors caused by severe non-one-dimensional heat flow were encountered. Modifications to the calibration techniques are needed to minimize this problem, and proof testing of the sensors in an engine is needed to verify the designs.

  11. Mesh refinement and numerical sensitivity analysis for parameter calibration of partial differential equations

    NASA Astrophysics Data System (ADS)

    Becker, Roland; Vexler, Boris

    2005-06-01

    We consider the calibration of parameters in physical models described by partial differential equations. This task is formulated as a constrained optimization problem with a cost functional of least squares type using information obtained from measurements. An important issue in the numerical solution of this type of problem is the control of the errors introduced, first, by discretization of the equations describing the physical model, and second, by measurement errors or other perturbations. Our strategy is as follows: we suppose that the user defines an interest functional I, which might depend on both the state variable and the parameters and which represents the goal of the computation. First, we propose an a posteriori error estimator which measures the error with respect to this functional. This error estimator is used in an adaptive algorithm to construct economic meshes by local mesh refinement. The proposed estimator requires the solution of an auxiliary linear equation. Second, we address the question of sensitivity. Applying similar techniques as before, we derive quantities which describe the influence of small changes in the measurements on the value of the interest functional. These numbers, which we call relative condition numbers, give additional information on the problem under consideration. They can be computed by means of the solution of the auxiliary problem determined before. Finally, we demonstrate our approach on a parameter calibration problem for a model flow problem.

  12. Calibration of the clumped isotope thermometer for planktic foraminifers

    NASA Astrophysics Data System (ADS)

    Meinicke, N.; Ho, S. L.; Nürnberg, D.; Tripati, A. K.; Jansen, E.; Dokken, T.; Schiebel, R.; Meckler, A. N.

    2017-12-01

    Many proxies for past ocean temperature suffer from secondary influences or require species-specific calibrations that might not be applicable on longer time scales. Being thermodynamically based and thus independent of seawater composition, clumped isotopes in carbonates (Δ47) have the potential to circumvent such issues affecting other proxies and provide reliable temperature reconstructions far back in time and in unknown settings. Although foraminifers are commonly used for paleoclimate reconstructions, their use for clumped isotope thermometry has been hindered so far by large sample-size requirements. Existing calibration studies suggest that data from a variety of foraminifer species agree with synthetic carbonate calibrations (Tripati et al., GCA, 2010; Grauel et al., GCA, 2013). However, these studies did not include a sufficient number of samples to fully assess the existence of species-specific effects, and data coverage was especially sparse in the low temperature range (<10 °C). To expand the calibration database of clumped isotopes in planktic foraminifers, especially for colder temperatures (<10 °C), we present new Δ47 data analysed on 14 species of planktic foraminifers from 13 sites, covering a temperature range of 1-29 °C. Our method allows for analysis of smaller sample sizes (3-5 mg), hence also the measurement of multiple species from the same samples. We analyzed surface-dwelling (0-50 m) species and deep-dwelling (habitat depth up to several hundred meters) planktic foraminifers from the same sites to evaluate species-specific effects and to assess the feasibility of temperature reconstructions for different water depths. We also assess the effects of different techniques for estimating foraminifer calcification temperature on the calibration. Finally, we compare our calibration to existing clumped isotope calibrations. Our results confirm previous findings that indicate no species-specific effects on the Δ47-temperature relationship measured in planktic foraminifers.

  13. Method calibration of the model 13145 infrared target projectors

    NASA Astrophysics Data System (ADS)

    Huang, Jianxia; Gao, Yuan; Han, Ying

    2014-11-01

    The SBIR Model 13145 Infrared Target Projector (hereafter, the Evaluation Unit) is used for characterizing the performance of infrared imaging systems. Test items include SiTF, MTF, NETD, MRTD, MDTD, and NPS. The infrared target projector comprises two area blackbodies, a 12-position target wheel, and an all-reflective collimator. It provides high-spatial-frequency differential targets; these precision differential targets are imaged by the infrared imaging system under test and converted photoelectrically into analog or digital signals. Applications software (IR Windows TM 2001) evaluates the performance characteristics of the infrared imaging system. For calibration of the unit as a whole, the distributed components are first calibrated separately: the area blackbodies are calibrated according to the relevant calibration specification, the all-reflective collimator calibration is corrected by means of error factors, radiance calibration of the infrared target projector is performed using the SR5000 spectral radiometer, and the systematic errors are analyzed. For the parameters of the infrared imaging system, an integrated evaluation method is needed: following GJB2340-1995, General specification for military thermal imaging sets, the testing parameters of the infrared imaging system are measured and the results compared with those from the Optical Calibration Testing Laboratory, with the goal of achieving a true calibration of the performance of the Evaluation Unit.

  14. An Integrated Calibration Technique for Stereo Vision Systems (PREPRINT)

    DTIC Science & Technology

    2010-03-01

    technique for stereo vision systems has been developed. To demonstrate and evaluate this calibration technique, multiple Wii Remotes (Wiimotes) from Nintendo were used to form stereo vision systems to perform 3D motion capture in real time. This integrated technique is a two-step process. Many researchers have successfully dealt with the problem of camera calibration by taking images from a 2D

  15. Improved infra-red procedure for the evaluation of calibrating units.

    DOT National Transportation Integrated Search

    2011-01-04

    Introduction. The NHTSA Model Specifications for Calibrating Units for Breath Alcohol Testers (FR 72 34742-34748) requires that calibration units submitted for inclusion on the NHTSA Conforming Products List for such devices be evaluated using ...

  16. Inverse models: A necessary next step in ground-water modeling

    USGS Publications Warehouse

    Poeter, E.P.; Hill, M.C.

    1997-01-01

    Inverse models using, for example, nonlinear least-squares regression, provide capabilities that help modelers take full advantage of the insight available from ground-water models. However, lack of information about the requirements and benefits of inverse models is an obstacle to their widespread use. This paper presents a simple ground-water flow problem to illustrate the requirements and benefits of the nonlinear least-squares regression method of inverse modeling and discusses how these attributes apply to field problems. The benefits of inverse modeling include: (1) expedited determination of best fit parameter values; (2) quantification of the (a) quality of calibration, (b) data shortcomings and needs, and (c) confidence limits on parameter estimates and predictions; and (3) identification of issues that are easily overlooked during nonautomated calibration.
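    A minimal sketch of this workflow on a toy problem, assuming a hypothetical one-dimensional, two-zone aquifer with unit inflow at one end and a fixed head at the other: zone conductivities are estimated from noisy head observations by Levenberg-Marquardt nonlinear least squares.

```python
import numpy as np
from scipy.optimize import least_squares

x_obs = np.array([0.2, 0.4, 0.6, 0.8])     # head observation locations

def heads(log_k):
    # analytic heads for two equal zones in series: unit inflow at x=0, h(1)=0
    k1, k2 = np.exp(log_k)
    h_mid = 0.5 / k2                       # head at the zone interface x=0.5
    return np.where(x_obs <= 0.5, h_mid + (0.5 - x_obs) / k1, (1.0 - x_obs) / k2)

rng = np.random.default_rng(6)
true_log_k = np.log([2.0, 0.5])
obs = heads(true_log_k) + 0.005 * rng.standard_normal(4)   # noisy observations

fit = least_squares(lambda p: heads(p) - obs, x0=np.log([1.0, 1.0]), method="lm")
print("estimated conductivities:", np.exp(fit.x).round(3))  # target: [2.0, 0.5]
```

    Beyond the best-fit values, the Jacobian returned by the solver supports the benefits listed above: it yields the parameter covariance and hence confidence limits on estimates and predictions.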

  17. Early Evaluation of the VIIRS Calibration, Cloud Mask and Surface Reflectance Earth Data Records

    NASA Technical Reports Server (NTRS)

    Vermote, Eric; Justice, Chris; Csiszar, Ivan

    2014-01-01

    Surface reflectance is one of the key products from VIIRS and, as with MODIS, is used in developing several higher-order land products. The VIIRS Surface Reflectance (SR) Intermediate Product (IP) is based on the heritage MODIS Collection 5 product (Vermote, El Saleous, & Justice, 2002). The quality and character of surface reflectance depend on the accuracy of the VIIRS Cloud Mask (VCM), the aerosol algorithms and the adequate calibration of the sensor. The focus of this paper is the early evaluation of the VIIRS SR product in the context of the maturity of the operational processing system, the Interface Data Processing System (IDPS). After a brief introduction, the paper presents the calibration performance and the role of the surface reflectance in calibration monitoring. The analysis of the performance of the cloud mask with a focus on vegetation monitoring (no snow conditions) shows typical problems over bright surfaces and high elevation sites. Also discussed is the performance of the aerosol input used in the atmospheric correction and in particular the artifacts generated by the use of the Navy Aerosol Analysis and Prediction System. Early quantitative results of the performance of the SR product over the AERONET sites show that, with the few adjustments recommended, the accuracy is within the threshold specifications. The analysis of the adequacy of the SR product (Land PEATE adjusted version) in applications of societal benefits is then presented. We conclude with a set of recommendations to ensure consistency and continuity of the JPSS mission with the MODIS Land Climate Data Record.

  18. Cryogenic Pressure Calibrator for Wide Temperature Electronically Scanned (ESP) Pressure Modules

    NASA Technical Reports Server (NTRS)

    Faulcon, Nettie D.

    2001-01-01

    Electronically scanned pressure (ESP) modules have been developed that can operate in ambient and in cryogenic environments, particularly Langley's National Transonic Facility (NTF). Because they can operate directly in a cryogenic environment, their use eliminates many of the operational problems associated with using conventional modules at low temperatures. To ensure the accuracy of these new instruments, calibration was conducted in a laboratory simulating the environmental conditions of NTF. This paper discusses the calibration process by means of the simulation laboratory, the system inputs and outputs and the analysis of the calibration data. Calibration results of module M4, a wide temperature ESP module with 16 ports and a pressure range of +/- 4 psid are given.

  19. Pattern-Based Inverse Modeling for Characterization of Subsurface Flow Models with Complex Geologic Heterogeneity

    NASA Astrophysics Data System (ADS)

    Golmohammadi, A.; Jafarpour, B.; M Khaninezhad, M. R.

    2017-12-01

    Calibration of heterogeneous subsurface flow models leads to ill-posed nonlinear inverse problems, where too many unknown parameters are estimated from limited response measurements. When the underlying parameters form complex (non-Gaussian) structured spatial connectivity patterns, classical variogram-based geostatistical techniques cannot describe the underlying connectivity patterns. Modern pattern-based geostatistical methods that incorporate higher-order spatial statistics are more suitable for describing such complex spatial patterns. Moreover, when the underlying unknown parameters are discrete (geologic facies distribution), conventional model calibration techniques that are designed for continuous parameters cannot be applied directly. In this paper, we introduce a novel pattern-based model calibration method to reconstruct discrete and spatially complex facies distributions from dynamic flow response data. To reproduce complex connectivity patterns during model calibration, we impose a feasibility constraint to ensure that the solution follows the expected higher-order spatial statistics. For model calibration, we adopt a regularized least-squares formulation, involving data mismatch, pattern connectivity, and feasibility constraint terms. Using an alternating directions optimization algorithm, the regularized objective function is divided into a continuous model calibration problem, followed by mapping the solution onto the feasible set. The feasibility constraint to honor the expected spatial statistics is implemented using a supervised machine learning algorithm. The two steps of the model calibration formulation are repeated until the convergence criterion is met. Several numerical examples are used to evaluate the performance of the developed method.

  20. Analytical and simulator study of advanced transport

    NASA Technical Reports Server (NTRS)

    Levison, W. H.; Rickard, W. W.

    1982-01-01

    An analytic methodology, based on the optimal-control pilot model, was demonstrated for assessing longitudinal-axis handling qualities of transport aircraft in final approach. Calibration of the methodology is largely in terms of closed-loop performance requirements, rather than specific vehicle response characteristics, and is based on a combination of published criteria, pilot preferences, physical limitations, and engineering judgment. Six longitudinal-axis approach configurations were studied covering a range of handling qualities problems, including the presence of flexible aircraft modes. The analytical procedure was used to obtain predictions of Cooper-Harper ratings, a scalar quadratic performance index, and rms excursions of important system variables.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mendl, Christian B.; Spohn, Herbert

    The nonequilibrium dynamics of anharmonic chains is studied by imposing an initial domain-wall state, in which the two half lattices are prepared in equilibrium with distinct parameters. Here, we analyse the Riemann problem for the corresponding Euler equations and, in specific cases, compare with molecular dynamics. Additionally, the fluctuations of time-integrated currents are investigated. In analogy with the KPZ equation, their typical fluctuations should be of size t^(1/3) and have a Tracy–Widom GUE distributed amplitude. The proper extension to anharmonic chains is explained and tested through molecular dynamics. Our results are calibrated against the stochastic LeRoux lattice gas.

  2. Reversed inverse regression for the univariate linear calibration and its statistical properties derived using a new methodology

    NASA Astrophysics Data System (ADS)

    Kang, Pilsang; Koo, Changhoi; Roh, Hokyu

    2017-11-01

    Since simple linear regression theory was established at the beginning of the 1900s, it has been used in a variety of fields. Unfortunately, it cannot be used directly for calibration. In practical calibrations, the observed measurements (the inputs) are subject to errors, and hence they vary, thus violating the assumption that the inputs are fixed. Therefore, in the case of calibration, the regression line fitted using the method of least squares is not consistent with the statistical properties of simple linear regression as already established based on this assumption. To resolve this problem, "classical regression" and "inverse regression" have been proposed. However, they do not completely resolve the problem. As a fundamental solution, we introduce "reversed inverse regression" along with a new methodology for deriving its statistical properties. In this study, the statistical properties of this regression are derived using the "error propagation rule" and the "method of simultaneous error equations" and are compared with those of the existing regression approaches. The accuracy of the statistical properties thus derived is investigated in a simulation study. We conclude that the newly proposed regression and methodology constitute the complete regression approach for univariate linear calibrations.
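    The contrast between the two standard estimators discussed above is easy to state in code, with synthetic data standing in for a real calibration experiment: the classical estimator fits the readings y on the reference values x and inverts the fitted line, while the inverse regression estimator regresses x directly on y.

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(0, 10, 30)                             # reference standards
y = 1.5 + 2.0 * x + 0.2 * rng.standard_normal(30)      # instrument readings

# classical estimator: fit y = b0 + b1*x, then invert the line
b1, b0 = np.polyfit(x, y, 1)
y_new = 12.0                                           # a new instrument reading
x_classical = (y_new - b0) / b1

# inverse regression estimator: fit x = c0 + c1*y, predict directly
c1, c0 = np.polyfit(y, x, 1)
x_inverse = c0 + c1 * y_new

print(f"classical: {x_classical:.3f}   inverse: {x_inverse:.3f}")
```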

  3. Use of 3D vision for fine robot motion

    NASA Technical Reports Server (NTRS)

    Lokshin, Anatole; Litwin, Todd

    1989-01-01

    An integration of 3-D vision systems with robot manipulators will allow robots to operate in a poorly structured environment by visually locating targets and obstacles. However, using computer vision for object acquisition makes the problem of overall system calibration even more difficult. Indeed, in CAD-based manipulation a control architecture has to find an accurate mapping between the 3-D Euclidean work space and the robot configuration space (joint angles). If stereo vision is involved, then one needs to map a pair of 2-D video images directly into the robot configuration space. Neural network approaches aside, a common solution to this problem is to calibrate vision and manipulator independently, and then tie them via a common mapping into the task space. In other words, both vision and robot refer to some common absolute Euclidean coordinate frame via their individual mappings. This approach has two major difficulties. First, the vision system has to be calibrated over the total work space. Second, the absolute frame, which is usually quite arbitrary, has to be the same with a high degree of precision for both robot and vision subsystem calibrations. The use of computer vision to allow robust fine-motion manipulation in a poorly structured world, an effort currently in progress, is described along with preliminary results and the problems encountered.

  4. Adjustment method for embedded metrology engine in an EM773 series microcontroller.

    PubMed

    Blazinšek, Iztok; Kotnik, Bojan; Chowdhury, Amor; Kačič, Zdravko

    2015-09-01

    This paper presents the problems of implementation and adjustment (calibration) of a metrology engine embedded in NXP's EM773 series microcontroller. The metrology engine is used in a smart metering application to collect data about energy utilization and is controlled with the use of metrology engine adjustment (calibration) parameters. The aim of this research is to develop a method which would enable the operators to find and verify the optimum parameters which would ensure the best possible accuracy. Properly adjusted (calibrated) metrology engines can then be used as a base for a variety of products used in smart and intelligent environments. This paper focuses on the problems encountered in the development, partial automatisation, implementation and verification of this method.

  5. Design of experiments and data analysis challenges in calibration for forensics applications

    DOE PAGES

    Anderson-Cook, Christine M.; Burr, Thomas L.; Hamada, Michael S.; ...

    2015-07-15

    Forensic science aims to infer characteristics of source terms using measured observables. Our focus is on statistical design of experiments and data analysis challenges arising in nuclear forensics. More specifically, we focus on inferring aspects of experimental conditions (of a process to produce product Pu oxide powder), such as temperature, nitric acid concentration, and Pu concentration, using measured features of the product Pu oxide powder. The measured features, Y, include trace chemical concentrations and particle morphology such as particle size and shape of the produced Pu oxide powder particles. Making inferences about the nature of inputs X that were used to create nuclear materials having particular characteristics, Y, is an inverse problem. Therefore, statistical analysis can be used to identify the best set (or sets) of Xs for a new set of observed responses Y. One can fit a model (or models) such as Y = f(X) + error, for each of the responses, based on a calibration experiment and then “invert” to solve for the best set of Xs for a new set of Ys. This perspectives paper uses archived experimental data to consider aspects of data collection and experiment design for the calibration data to maximize the quality of the predicted Ys in the forward models; that is, we assume that well-estimated forward models are effective in the inverse problem. In addition, we consider how to identify a best solution for the inferred X, and evaluate the quality of the result and its robustness to a variety of initial assumptions and different correlation structures between the responses. Finally, we briefly review recent advances in metrology issues related to characterizing particle morphology measurements used in the response vector, Y.
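    A minimal sketch of the fit-then-invert workflow, with hypothetical linear forward models standing in for the fitted response surfaces: one least-squares model is fitted per response column from calibration runs, and a newly observed response vector is then inverted by searching for the process settings that best reproduce it.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)
X_cal = rng.random((40, 3))              # e.g. temperature, [HNO3], [Pu] (scaled)
B_true = rng.standard_normal((3, 2))     # hidden linear map to two responses
Y_cal = X_cal @ B_true + 0.01 * rng.standard_normal((40, 2))

# forward models: one least-squares fit per measured response
B_hat, *_ = np.linalg.lstsq(X_cal, Y_cal, rcond=None)

# inverse step: given a new observed Y, find the best X in the unit cube
y_new = np.array([0.3, -0.4])
objective = lambda x: np.sum((x @ B_hat - y_new) ** 2)
res = minimize(objective, x0=np.full(3, 0.5), bounds=[(0.0, 1.0)] * 3)
print("inferred process settings:", res.x.round(3))
```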

  6. Design of experiments and data analysis challenges in calibration for forensics applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson-Cook, Christine M.; Burr, Thomas L.; Hamada, Michael S.

    Forensic science aims to infer characteristics of source terms using measured observables. Our focus is on statistical design of experiments and data analysis challenges arising in nuclear forensics. More specifically, we focus on inferring aspects of experimental conditions (of a process to produce product Pu oxide powder), such as temperature, nitric acid concentration, and Pu concentration, using measured features of the product Pu oxide powder. The measured features, Y, include trace chemical concentrations and particle morphology such as particle size and shape of the produced Pu oxide powder particles. Making inferences about the nature of inputs X that were used to create nuclear materials having particular characteristics, Y, is an inverse problem. Therefore, statistical analysis can be used to identify the best set (or sets) of Xs for a new set of observed responses Y. One can fit a model (or models) such as Y = f(X) + error, for each of the responses, based on a calibration experiment and then “invert” to solve for the best set of Xs for a new set of Ys. This perspectives paper uses archived experimental data to consider aspects of data collection and experiment design for the calibration data to maximize the quality of the predicted Ys in the forward models; that is, we assume that well-estimated forward models are effective in the inverse problem. In addition, we consider how to identify a best solution for the inferred X, and evaluate the quality of the result and its robustness to a variety of initial assumptions and different correlation structures between the responses. Finally, we briefly review recent advances in metrology issues related to characterizing particle morphology measurements used in the response vector, Y.

  7. Multi-parameter brain tissue microsensor and interface systems: calibration, reliability and user experiences of pressure and temperature sensors in the setting of neurointensive care.

    PubMed

    Childs, Charmaine; Wang, Li; Neoh, Boon Kwee; Goh, Hok Liok; Zu, Mya Myint; Aung, Phyo Wai; Yeo, Tseng Tsai

    2014-10-01

    The objective was to investigate sensor measurement uncertainty for intracerebral probes inserted during neurosurgery and remaining in situ during neurocritical care. We describe a prospective observational study of two sensor types, including the performance of the complete sensor-bedside monitoring and readout system. Sensors from 16 patients with severe traumatic brain injury (TBI) were obtained at the time of removal from the brain. When tested, 40% of sensors achieved the manufacturer temperature specification of 0.1 °C. Pressure sensor calibration differed from the manufacturer's specification at all test pressures in 8/20 sensors. The largest pressure measurement error was in the intraparenchymal triple sensor. Measurement uncertainty is not influenced by duration in situ. User experiences reveal problems with sensor 'handling', alarms and firmware. Rigorous investigation of the performance of intracerebral sensors in the laboratory and at the bedside has established measurement uncertainty in the 'real world' setting of neurocritical care.

  8. Thematic Mapper. Volume 1: Calibration report flight model, LANDSAT 5

    NASA Technical Reports Server (NTRS)

    Cooley, R. C.; Lansing, J. C.

    1984-01-01

    The calibration of the Flight 1 Model Thematic Mapper is discussed. Spectral response, scan profile, coherent noise, line spread profiles and white light leaks, square wave response, radiometric calibration, and commands and telemetry are specifically addressed.

  9. Investigating temporal field sampling strategies for site-specific calibration of three soil moisture-neutron intensity parameterisation methods

    NASA Astrophysics Data System (ADS)

    Iwema, J.; Rosolem, R.; Baatz, R.; Wagener, T.; Bogena, H. R.

    2015-07-01

    The Cosmic-Ray Neutron Sensor (CRNS) can provide soil moisture information at scales relevant to hydrometeorological modelling applications. Site-specific calibration is needed to translate CRNS neutron intensities into sensor footprint average soil moisture contents. We investigated temporal sampling strategies for calibration of three CRNS parameterisations (the modified N0 method (N0mod), HMF, and COSMIC) by assessing the effects of the number of sampling days and soil wetness conditions on the performance of the calibration results, using actual neutron intensity measurements for three sites with distinct climate and land use: a semi-arid site, a temperate grassland, and a temperate forest. When calibrated with 1 year of data, both COSMIC and N0mod performed better than HMF. The performance of COSMIC was remarkably good at the semi-arid site in the USA, while N0mod performed best at the two temperate sites in Germany. The successful performance of COSMIC at all three sites can be attributed to the benefits of explicitly resolving individual soil layers (which is not accounted for in the other two parameterisations). To better calibrate these parameterisations, we recommend that in situ soil samples be collected on more than a single day. However, little improvement is observed for sampling on more than 6 days. At the semi-arid site, the N0mod method was calibrated better under site-specific average wetness conditions, whereas HMF and COSMIC were calibrated better under drier conditions. Average soil wetness conditions gave better calibration results at the two humid sites. The calibration results for the HMF method were better when calibrated with combinations of days with similar soil wetness conditions, as opposed to N0mod and COSMIC, which profited from using days with distinct wetness conditions. Errors in actual neutron intensities were translated into average soil moisture errors specific to each site. At the semi-arid site, these errors were below the typical measurement uncertainties of in situ point-scale sensors and satellite remote sensing products. At the two humid sites, however, the reduction in uncertainty with increasing sampling days only reached the typical errors associated with satellite remote sensing products. The outcomes of this study can be used by researchers as a CRNS calibration strategy guideline.
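
    For orientation, the plain N0 parameterisation that the modified N0 method builds on can be calibrated in a few lines. The sketch below uses the commonly cited Desilets et al. (2010) coefficients and invented numbers; it is not this study's site-specific calibration.

        from scipy.optimize import brentq

        A0, A1, A2 = 0.0808, 0.372, 0.115  # standard N0-curve coefficients

        def theta_from_N(N, N0, bulk_density=1.4):
            # Volumetric soil moisture from moderated neutron intensity N.
            return (A0 / (N / N0 - A1) - A2) * bulk_density

        def calibrate_N0(N_obs, theta_obs, bulk_density=1.4):
            # Site-specific N0 chosen so the curve reproduces the
            # footprint-average soil moisture of one sampling day.
            f = lambda N0: theta_from_N(N_obs, N0, bulk_density) - theta_obs
            return brentq(f, 0.05 * N_obs, 0.999 * N_obs / A1)

        N0 = calibrate_N0(N_obs=2400.0, theta_obs=0.25)  # hypothetical day
        print(theta_from_N(2150.0, N0))  # lower counts imply a wetter day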

  10. A Theoretical Framework for Calibration in Computer Models: Parametrization, Estimation and Convergence Properties

    DOE PAGES

    Tuo, Rui; Jeff Wu, C. F.

    2016-07-19

    Calibration parameters in deterministic computer experiments are those attributes that cannot be measured or are otherwise unavailable in physical experiments. Here, we present an approach to estimate them using data from physical experiments and computer simulations. A theoretical framework is given which allows us to study the issues of parameter identifiability and estimation. We define L2-consistency for calibration as a justification for calibration methods. It is shown that a simplified version of the original KO method leads to asymptotically L2-inconsistent calibration. This L2-inconsistency can be remedied by modifying the original estimation procedure. A novel calibration method, called the L2 calibration, is proposed, proven to be L2-consistent, and shown to enjoy an optimal convergence rate. Furthermore, a numerical example and some mathematical analysis are used to illustrate the source of the L2-inconsistency problem.
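
    As a rough numerical illustration of the L2 calibration idea, the sketch below chooses the parameter minimising the L2 distance between (noisy observations of) the true process and an imperfect computer model. The functions are invented, and the nonparametric smoothing step of the actual method is skipped by using the raw observations directly.

        import numpy as np
        from scipy.optimize import minimize_scalar

        rng = np.random.default_rng(0)

        def f_model(x, theta):
            # Imperfect computer model with calibration parameter theta.
            return np.sin(theta * x)

        # Physical observations of the (invented) true process; Tuo and Wu
        # instead use a nonparametric estimate of the true surface.
        x_obs = np.linspace(0.0, 3.0, 50)
        y_obs = np.sin(1.1 * x_obs) + 0.1 * x_obs + rng.normal(0.0, 0.05, 50)

        # L2 calibration: minimise the L2 distance over the input domain.
        l2 = lambda theta: np.trapz((y_obs - f_model(x_obs, theta))**2, x_obs)
        theta_hat = minimize_scalar(l2, bounds=(0.5, 2.0), method='bounded').x
        print(theta_hat)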

  11. Probing fibronectin–antibody interactions using AFM force spectroscopy and lateral force microscopy

    PubMed Central

    Kulik, Andrzej J; Lee, Kyumin; Pyka-Fościak, Grazyna; Nowak, Wieslaw

    2015-01-01

    The first experiment showing the effects of specific interaction forces using lateral force microscopy (LFM) was demonstrated for lectin–carbohydrate interactions some years ago. Such measurements are possible under the assumption that specific forces strongly dominate over non-specific ones. However, obtaining quantitative results requires the complex and tedious calibration of a torsional force. Here, a new and relatively simple method for the calibration of the torsional force is presented. The proposed calibration method is validated through the measurement of the interaction forces between human fibronectin and its monoclonal antibody. The results obtained using LFM and classical AFM-based force spectroscopy showed similar unbinding forces recorded at similar loading rates. Our studies verify that the proposed lateral force calibration method can be applied to study single-molecule interactions. PMID:26114080

  12. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras.

    PubMed

    Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki

    2016-06-24

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system.

  13. Re-calibration of the magnetic compass in hand-raised European robins (Erithacus rubecula)

    PubMed Central

    Alert, Bianca; Michalik, Andreas; Thiele, Nadine; Bottesch, Michael; Mouritsen, Henrik

    2015-01-01

    Migratory birds can use a variety of environmental cues for orientation. A primary calibration between the celestial and magnetic compasses seems to be fundamental prior to a bird’s first autumn migration. Releasing hand-raised or rescued young birds back into the wild might therefore be a problem because they might not have established a functional orientation system during their first calendar year. Here, we test whether hand-raised European robins that did not develop any functional compass before or during their first autumn migration could relearn to orient if they were exposed to natural celestial cues during the subsequent winter and spring. When tested in the geomagnetic field without access to celestial cues, these birds could orient in their species-specific spring migratory direction. In contrast, control birds that were deprived of any natural celestial cues throughout remained unable to orient. Our experiments suggest that European robins are still capable of establishing a functional orientation system after their first autumn. Although the external reference remains speculative, most likely, natural celestial cues enabled our birds to calibrate their magnetic compass. Our data suggest that avian compass systems are more flexible than previously believed and have implications for the release of hand-reared migratory birds. PMID:26388258

  14. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras

    PubMed Central

    Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki

    2016-01-01

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system. PMID:27347961

  15. Alexandrium minutum growth controlled by phosphorus . An applied model

    NASA Astrophysics Data System (ADS)

    Chapelle, A.; Labry, C.; Sourisseau, M.; Lebreton, C.; Youenou, A.; Crassous, M. P.

    2010-11-01

    Toxic algae are a worldwide problem threatening aquaculture, public health, and tourism. Alexandrium, a toxic dinoflagellate, proliferates in estuaries of northwest France (e.g. the Penzé estuary), causing Paralytic Shellfish Poisoning events. Vegetative growth, and in particular the roles of nutrient uptake and growth rate, are crucial to understanding toxic blooms. With the goal of modelling in situ Alexandrium blooms in relation to environmental parameters, we first calibrate a zero-dimensional box model of Alexandrium growth. This work focuses on phosphorus nutrition. Our objective is to calibrate the growth of Alexandrium minutum, as well as that of Heterocapsa triquetra (a non-toxic dinoflagellate), under different rates of phosphorus supply, other factors being optimal and constant. Laboratory experiments are used to calibrate two growth models and three uptake models for each species. The models are then used to simulate monospecific batch and semi-continuous experiments as well as competition between the two algae (mixed cultures). Results show that the Droop growth model together with uptake linear in quota can represent most of our observations, although a power-law uptake function can more accurately simulate our phosphorus uptake data. We note that such models have limitations in non-steady-state situations, and cell quotas can depend on a variety of factors, so care must be taken in extrapolating these results beyond the specific conditions studied.
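
    For reference, a Droop quota model with uptake linear in quota (one of the model variants compared here) can be written as a three-state ODE system; the parameter values below are illustrative placeholders, not the calibrated ones.

        from scipy.integrate import solve_ivp

        MU_MAX, Q_MIN, Q_MAX, V_MAX, KS = 0.6, 0.8, 4.0, 1.2, 0.2  # placeholders

        def droop(t, y):
            S, Q, X = y  # dissolved phosphorus, cell quota, cell density
            # Uptake Michaelis-Menten in S, decreasing linearly with quota.
            uptake = V_MAX * S / (KS + S) * (Q_MAX - Q) / (Q_MAX - Q_MIN)
            mu = MU_MAX * (1.0 - Q_MIN / Q)  # Droop growth rate
            return [-uptake * X, uptake - mu * Q, mu * X]

        sol = solve_ivp(droop, (0.0, 20.0), [1.0, 1.0, 1e-3])  # batch culture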

  16. 40 CFR 1066.130 - Measurement instrument calibrations and verifications.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 33 2014-07-01 2014-07-01 false Measurement instrument calibrations... (CONTINUED) AIR POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Equipment, Measurement Instruments, Fuel, and Analytical Gas Specifications § 1066.130 Measurement instrument calibrations and verifications. The...

  17. Combined SIMS, NanoSIMS, FTIR, and SEM Studies of OH in Nominally Anhydrous Minerals (NAMs)

    NASA Astrophysics Data System (ADS)

    Mosenfelder, J. L.; Le Voyer, M.; Rossman, G. R.; Guan, Y.; Bell, D. R.; Asimow, P. D.; Eiler, J.

    2010-12-01

    The accurate analysis of trace concentrations of hydrogen in NAMs is a long-standing problem, with wide-ranging implications in geology and planetology. SIMS and FTIR are two powerful and complementary analytical tools capable of measuring concentrations down to levels of less than 1 ppm H2O. Both methods, however, are subject to matrix effects and rely on other techniques such as manometry or nuclear reaction analysis (NRA) for quantitative calibration. We compared FTIR and SIMS data for a wide variety of NAMs: olivine, orthopyroxene, clinopyroxene, pyrope and grossular garnet, rutile, zircon, kyanite, andalusite, and sillimanite. Some samples were also characterized using high-resolution FE-SEM to assess the potential contribution of submicroscopic inclusions to the analyses. For SIMS, we use high mass resolution (≥5000 MRP) to measure 16O1H, using 30Si and/or 18O as reference isotopes. We use both primary standards, measured independently using manometry or NRA (e.g., [1]), and secondary standards, measured using polarized FTIR referenced back to calibrations developed on primary standards. Our major focus was on olivine, for which we collected repeated calibration data with both SIMS and NanoSIMS, bracketing measurements of H diffusion profiles in both natural and experimentally annealed crystals at levels of 5-100 ppm H2O. With both instruments we establish low blanks (≤5 ppm) and high precision (typically less than 5% 2-σ errors in 16O1H/30Si), critical requirements for the low concentration levels being measured. Assessment of over 300 analyses on 11 olivines allows us to evaluate the suitability of different standards, several of which are in use in other laboratories [2,3,4]. Seven olivines, with 0-125 ppm H2O, give highly reproducible results and allow us to establish well-constrained calibration slopes with high correlation coefficients (r2 = 0.98-0.99), in contrast to previous studies [2,3,4]. However, four kimberlitic megacrysts with 140-243 ppm H2O show 16O1H/30Si ratios that vary by up to a factor of 2 even in sequential analyses (cf. [3,4]). A potential cause of these discrepancies is the presence of sub-micron scale pores (as small as 100 nm), which we have documented by SEM. These pores probably contain liquid H2O and/or condensed hydrous phase precipitates. Although the ionization potential of fluids under high-vacuum conditions is an under-studied problem, these sub-micron features may contribute to the measured 16O1H, resulting in analyses with erratic depth profiles and corresponding high uncertainties (up to 15% 2-σ). However, if we omit analyses with such high uncertainties, the data for all olivines fit well together. Our results imply that the Bell et al. IR calibration [1] can be applied accurately to all olivines with IR bands from ~3400-3650 cm-1, without the need for band-specific IR absorption coefficients (cf. [3]). A five-fold total variation was observed among the calibration line slopes for the different minerals analyzed, confirming the need for mineral-specific calibrations. [1] Bell et al. (2003), JGR, 108. [2] Tenner et al. (2009), Chem. Geol., 262, 42-56. [3] Kovács, I. et al. (2010), Am. Min., 95, 292-299. [4] O'Leary et al. (2010), EPSL.

  18. Holographic Entanglement Entropy, SUSY & Calibrations

    NASA Astrophysics Data System (ADS)

    Colgáin, Eoin Ó.

    2018-01-01

    Holographic calculations of entanglement entropy boil down to identifying minimal surfaces in curved spacetimes. This generically entails solving second-order equations. For higher-dimensional AdS geometries, we demonstrate that supersymmetry and calibrations reduce the problem to first-order equations. We note that minimal surfaces corresponding to disks preserve supersymmetry, whereas strips do not.

  19. The calibration of video cameras for quantitative measurements

    NASA Technical Reports Server (NTRS)

    Snow, Walter L.; Childers, Brooks A.; Shortis, Mark R.

    1993-01-01

    Several different recent applications of velocimetry at Langley Research Center are described in order to show the need for video camera calibration for quantitative measurements. Problems peculiar to video sensing are discussed, including synchronization and timing, targeting, and lighting. The extension of the measurements to include radiometric estimates is addressed.

  20. Patient-specific calibration of cone-beam computed tomography data sets for radiotherapy dose calculations and treatment plan assessment.

    PubMed

    MacFarlane, Michael; Wong, Daniel; Hoover, Douglas A; Wong, Eugene; Johnson, Carol; Battista, Jerry J; Chen, Jeff Z

    2018-03-01

    In this work, we propose a new method of calibrating cone beam computed tomography (CBCT) data sets for radiotherapy dose calculation and plan assessment. The motivation for this patient-specific calibration (PSC) method is to develop an efficient, robust, and accurate CBCT calibration process that is less susceptible to deformable image registration (DIR) errors. Instead of mapping the CT numbers voxel-by-voxel as traditional DIR calibration methods do, the PSC method generates correlation plots between deformably registered planning CT and CBCT voxel values for each image slice. A linear calibration curve specific to each slice is then obtained by least-squares fitting and applied to the CBCT slice's voxel values. This allows each CBCT slice to be corrected using DIR without altering the patient geometry through regional DIR errors. A retrospective study was performed on 15 head-and-neck cancer patients, each having routine CBCTs and a middle-of-treatment re-planning CT (reCT). The original treatment plan was re-calculated on each patient's reCT image set (serving as the gold standard) as well as on the image sets produced by voxel-to-voxel DIR, density-overriding, and the new PSC calibration methods. The dose accuracy of each calibration method was compared to the reference reCT data set using common dose-volume metrics and 3D gamma analysis. A phantom study was also performed to assess the accuracy of the DIR and PSC CBCT calibration methods compared with planning CT. Compared with the gold standard reCT, the average dose metric differences were ≤ 1.1% for all three methods (PSC: -0.3%; DIR: -0.7%; density-override: -1.1%). The average gamma pass rates with thresholds of 3%, 3 mm were also similar among the three techniques (PSC: 95.0%; DIR: 96.1%; density-override: 94.4%). An automated patient-specific calibration method was thus developed which yielded strong dosimetric agreement with the results obtained using a re-planning CT for head-and-neck patients. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
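
    A bare-bones version of the per-slice linear step might look like the sketch below; the actual method's correlation-plot construction, masking, and outlier handling are not reproduced here.

        import numpy as np

        def patient_specific_calibration(ct_deformed, cbct):
            # ct_deformed, cbct: 3-D arrays (slice, row, col); ct_deformed is
            # the deformably registered planning CT. A least-squares line is
            # fitted to each slice's (CBCT, CT) voxel pairs and applied to the
            # CBCT slice, so regional DIR errors never alter patient geometry.
            out = np.empty(cbct.shape, dtype=float)
            for k in range(cbct.shape[0]):
                slope, intercept = np.polyfit(
                    cbct[k].ravel().astype(float),
                    ct_deformed[k].ravel().astype(float), 1)
                out[k] = slope * cbct[k] + intercept
            return out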

  1. Soil specific re-calibration of water content sensors for a field-scale sensor network

    NASA Astrophysics Data System (ADS)

    Gasch, Caley K.; Brown, David J.; Anderson, Todd; Brooks, Erin S.; Yourek, Matt A.

    2015-04-01

    Obtaining accurate soil moisture data from a sensor network requires sensor calibration. Soil moisture sensors are factory calibrated, but multiple site-specific factors may contribute to sensor inaccuracies. Thus, sensors should be calibrated for the specific soil type and conditions in which they will be installed. Lab calibration of a large number of sensors prior to installation in a heterogeneous setting may not be feasible, and it may not reflect the actual performance of the installed sensor. We investigated a multi-step approach to retroactively re-calibrate sensor water content data from the dielectric permittivity readings obtained by sensors in the field. We used water content data collected since 2009 from a sensor network installed at 42 locations and 5 depths (210 sensors total) within the 37-ha Cook Agronomy Farm, a site with highly variable soils in the Palouse region of the northwestern United States. First, volumetric water content was calculated from sensor dielectric readings using three equations: (1) a factory calibration using the Topp equation; (2) a custom calibration obtained empirically from an instrumented soil in the field; and (3) a hybrid equation that combines the Topp and custom equations. Second, we used soil physical properties (particle size and bulk density) and pedotransfer functions to estimate water content at saturation, field capacity, and wilting point for each installation location and depth. We also extracted the same reference points from the sensor readings, when available. Using these reference points, we re-scaled the sensor readings such that water content was restricted to the range of values expected given the physical properties of the soil. The re-calibration accuracy was assessed with volumetric water content measurements obtained from field-sampled cores taken on multiple dates. In general, the re-calibration was most accurate when all three reference points (saturation, field capacity, and wilting point) were represented in the sensor readings. We anticipate that obtaining water retention curves for field soils will improve the re-calibration accuracy by providing more precise estimates of saturation, field capacity, and wilting point. This approach may serve as an alternative method for sensor calibration in lieu of, or to complement, pre-installation calibration.
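
    The re-scaling step can be pictured as a piecewise-linear mapping from the reference points extracted from the sensor record onto those implied by the soil physical properties; the sketch below, with made-up numbers, uses np.interp for that mapping.

        import numpy as np

        def rescale(theta_sensor, sensor_refs, soil_refs):
            # sensor_refs, soil_refs: increasing [wilting point, field
            # capacity, saturation]; np.interp clamps values outside the range.
            return np.interp(theta_sensor, sensor_refs, soil_refs)

        # Sensor record suggests wp/fc/sat of 0.08/0.24/0.41; particle size
        # and bulk density imply 0.10/0.28/0.46 (hypothetical values).
        theta_cal = rescale(np.array([0.05, 0.20, 0.35]),
                            [0.08, 0.24, 0.41], [0.10, 0.28, 0.46])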

  2. Efficient Calibration of Distributed Catchment Models Using Perceptual Understanding and Hydrologic Signatures

    NASA Astrophysics Data System (ADS)

    Hutton, C.; Wagener, T.; Freer, J. E.; Duffy, C.; Han, D.

    2015-12-01

    Distributed models offer the potential to resolve catchment systems in more detail, and therefore to simulate the hydrological impacts of spatial changes in catchment forcing (e.g. landscape change). Such models may contain a large number of model parameters which are computationally expensive to calibrate. Even when calibration is possible, insufficient data can result in model parameter and structural equifinality. In order to help reduce the space of feasible models and supplement traditional outlet discharge calibration data, semi-quantitative information (e.g. knowledge of relative groundwater levels) may also be used to identify behavioural models when applied to constrain spatially distributed predictions of states and fluxes. The challenge is to combine these different sources of information to identify a behavioural region of state space, and to efficiently search a large, complex parameter space for behavioural parameter sets that produce predictions falling within this behavioural region. Here we present a methodology to incorporate different sources of data to efficiently calibrate distributed catchment models. Metrics of model performance may be derived from multiple sources of data (e.g. perceptual understanding and measured or regionalised hydrologic signatures). For each metric, an interval or inequality is used to define the behaviour of the catchment system, accounting for data uncertainties. These intervals are then combined to produce a hyper-volume in state space. The calibration problem is then recast as a multi-objective optimisation problem, and the Borg MOEA is applied to first find, and then populate, the hyper-volume, thereby identifying acceptable model parameter sets. We apply the methodology to calibrate the PIHM model at Plynlimon, UK, by incorporating perceptual and hydrologic data into the calibration problem. Furthermore, we explore how to improve calibration efficiency through search initialisation from shorter model runs.
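
    A toy rejection-sampling analogue of the interval-based acceptance idea is sketched below; the signature function and intervals are invented, and the paper itself searches the space far more efficiently with the Borg MOEA.

        import numpy as np

        rng = np.random.default_rng(0)

        def signatures(params):
            # Stand-in for a distributed-model run reduced to two hydrologic
            # signatures (e.g. runoff ratio, baseflow index).
            a, b = params
            return np.array([0.4 * a + 0.1 * b, 0.6 * b])

        # Behavioural intervals from perceptual understanding / regionalised
        # signatures, widened to account for data uncertainty.
        lo, hi = np.array([0.30, 0.20]), np.array([0.50, 0.45])

        cand = rng.uniform(0.0, 1.0, size=(10000, 2))
        sig = np.apply_along_axis(signatures, 1, cand)
        behavioural = cand[np.all((sig >= lo) & (sig <= hi), axis=1)]
        print(len(behavioural), "behavioural parameter sets")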

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tuo, Rui; Jeff Wu, C. F.

    Calibration parameters in deterministic computer experiments are those attributes that cannot be measured or are otherwise unavailable in physical experiments. Here, we present an approach to estimate them using data from physical experiments and computer simulations. A theoretical framework is given which allows us to study the issues of parameter identifiability and estimation. We define L2-consistency for calibration as a justification for calibration methods. It is shown that a simplified version of the original KO method leads to asymptotically L2-inconsistent calibration. This L2-inconsistency can be remedied by modifying the original estimation procedure. A novel calibration method, called the L2 calibration, is proposed, proven to be L2-consistent, and shown to enjoy an optimal convergence rate. Furthermore, a numerical example and some mathematical analysis are used to illustrate the source of the L2-inconsistency problem.

  4. Absolute radiometric calibration of advanced remote sensing systems

    NASA Technical Reports Server (NTRS)

    Slater, P. N.

    1982-01-01

    The distinction between the uses of relative and absolute spectroradiometric calibration of remote sensing systems is discussed. The advantages of detector-based absolute calibration are described, and the categories of relative and absolute system calibrations are listed. The limitations and problems associated with three common methods used for the absolute calibration of remote sensing systems are addressed. Two methods are proposed for the in-flight absolute calibration of advanced multispectral linear array systems. One makes use of a sun-illuminated panel in front of the sensor, the radiance of which is monitored by a spectrally flat pyroelectric radiometer. The other uses a large, uniform, high-radiance reference ground surface. The ground and atmospheric measurements required as input to a radiative transfer program to predict the radiance level at the entrance pupil of the orbital sensor are discussed, and the ground instrumentation is described.

  5. Calibration of Clinical Audio Recording and Analysis Systems for Sound Intensity Measurement.

    PubMed

    Maryn, Youri; Zarowski, Andrzej

    2015-11-01

    Sound intensity is an important acoustic feature of voice/speech signals. Yet recordings are performed with different microphone, amplifier, and computer configurations, and it is therefore crucial to calibrate sound intensity measures of clinical audio recording and analysis systems on the basis of output of a sound-level meter. This study was designed to evaluate feasibility, validity, and accuracy of calibration methods, including audiometric speech noise signals and human voice signals under typical speech conditions. Calibration consisted of 3 comparisons between data from 29 measurement microphone-and-computer systems and data from the sound-level meter: signal-specific comparison with audiometric speech noise at 5 levels, signal-specific comparison with natural voice at 3 levels, and cross-signal comparison with natural voice at 3 levels. Intensity measures from recording systems were then linearly converted into calibrated data on the basis of these comparisons, and validity and accuracy of calibrated sound intensity were investigated. Very strong correlations and quasisimilarity were found between calibrated data and sound-level meter data across calibration methods and recording systems. Calibration of clinical sound intensity measures according to this method is feasible, valid, accurate, and representative for a heterogeneous set of microphones and data acquisition systems in real-life circumstances with distinct noise contexts.
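
    The linear conversion step amounts to regressing sound-level-meter values on system readings and applying the fitted line to subsequent recordings; the numbers below are invented for illustration.

        import numpy as np

        # Paired readings at several presentation levels: recording-system
        # estimates vs. sound-level meter values (dB), both invented.
        system_db = np.array([52.1, 60.3, 68.2, 76.5, 84.0])
        slm_db = np.array([55.0, 63.0, 71.0, 79.0, 87.0])

        slope, intercept = np.polyfit(system_db, slm_db, 1)

        def calibrate(reading_db):
            # Convert a system reading into a calibrated intensity (dB).
            return slope * reading_db + intercept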

  6. Assessing groundwater vulnerability in the Kinshasa region, DR Congo, using a calibrated DRASTIC model

    NASA Astrophysics Data System (ADS)

    Mfumu Kihumba, Antoine; Vanclooster, Marnik; Ndembo Longo, Jean

    2017-02-01

    This study assessed the vulnerability of groundwater to pollution in the Kinshasa region, DR Congo, in support of a groundwater protection program. The parametric vulnerability model (DRASTIC) was modified and calibrated to predict the intrinsic vulnerability as well as the groundwater pollution risk. The method uses groundwater-body-specific parameters for the calibration of the factor ratings and weightings of the original DRASTIC model. These groundwater-specific parameters are inferred from the statistical relation between the original DRASTIC model and observed nitrate pollution for a specific period. In addition, site-specific land use parameters are integrated into the method. The method is fully embedded in a Geographic Information System (GIS). Following these modifications, the correlation coefficient between groundwater pollution risk and observed nitrate concentrations for the 2013-2014 survey improved from r = 0.42, for the original DRASTIC model, to r = 0.61 for the calibrated model. To validate this pollution risk map, observed nitrate concentrations from another survey (2008) were compared to pollution risk indices, showing a good degree of coincidence (r = 0.51). The study shows that calibration of a vulnerability model is recommended when vulnerability maps are used for groundwater resource management and land use planning at the regional scale, and that such a calibration is specific to the area considered.

  7. CMOS-APS Detectors for Solar Physics: Lessons Learned during the SWAP Preflight Calibration

    NASA Astrophysics Data System (ADS)

    de Groof, A.; Berghmans, D.; Nicula, B.; Halain, J.-P.; Defise, J.-M.; Thibert, T.; Schühle, U.

    2008-05-01

    CMOS-APS imaging detectors open new opportunities for remote sensing in solar physics beyond what classical CCDs can provide, offering far less power consumption, simpler electronics, better radiation hardness, and the possibility of avoiding a mechanical shutter. The SWAP telescope onboard the PROBA2 technology demonstration satellite of the European Space Agency will be the first actual implementation of a CMOS-APS detector for solar physics in orbit. One of the goals of the SWAP project is precisely to acquire experience with the CMOS-APS technology in a real-life space science context. Such a precursor mission is essential in the preparation of missions such as Solar Orbiter, where the extra CMOS-APS functionalities will be hard requirements. The current paper concentrates on specific CMOS-APS issues that were identified during the SWAP preflight calibration measurements. We discuss the different readout possibilities that the CMOS-APS detector of SWAP provides and their associated pros and cons. In particular, we describe the “image lag” effect, which results in a contamination of each image with a remnant of the previous image. We have characterised this effect for the specific SWAP implementation, and we conclude with a strategy on how to successfully circumvent the problem and actually benefit from it for solar monitoring.

  8. Spacecraft attitude calibration/verification baseline study

    NASA Technical Reports Server (NTRS)

    Chen, L. C.

    1981-01-01

    A baseline study for a generalized spacecraft attitude calibration/verification system is presented. It can be used to define software specifications for three major functions required by a mission: the pre-launch parameter observability and data collection strategy study, the in-flight sensor calibration, and the post-calibration attitude accuracy verification. Analytical considerations are given for both single-axis and three-axis spacecraft. The three-axis attitudes considered include inertial-pointing attitudes, reference-pointing attitudes, and attitudes undergoing specific maneuvers. The attitude sensors and hardware considered include Earth horizon sensors, plane-field Sun sensors, coarse and fine two-axis digital Sun sensors, three-axis magnetometers, fixed-head star trackers, and inertial reference gyros.

  9. A High Precision $3.50 Open Source 3D Printed Rain Gauge Calibrator

    NASA Astrophysics Data System (ADS)

    Lopez Alcala, J. M.; Udell, C.; Selker, J. S.

    2017-12-01

    Currently available rain gauge calibrators tend to be designed for specific rain gauges, are expensive, employ low-precision water reservoirs, and do not offer the flexibility needed to test the ever more popular small-aperture rain gauges. The objective of this project was to develop and validate a freely downloadable, open-source, 3D printed rain gauge calibrator that can be adjusted for a wide range of gauges. The proposed calibrator provides for applying low, medium, and high intensity flow, and allows the user to modify the design to conform to unique system specifications based on parametric design, which may be modified and printed using CAD software. To overcome the fact that different 3D printers yield different print qualities, we devised a simple post-printing step that controlled critical dimensions to assure robust performance. Specifically, the three orifices of the calibrator are drilled to reach the three target flow rates. Laboratory tests showed that flow rates were consistent between prints, and between trials of each part, while the total applied water was precisely controlled by the use of a volumetric flask as the reservoir.

  10. High-level neutron coincidence counter maintenance manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swansen, J.; Collinsworth, P.

    1983-05-01

    High-level neutron coincidence counter operational (field) calibration and usage is well known. This manual makes explicit basic (shop) check-out, calibration, and testing of new units and is a guide for repair of failed in-service units. Operational criteria for the major electronic functions are detailed, as are adjustments and calibration procedures, and recurrent mechanical/electromechanical problems are addressed. Some system tests are included for quality assurance. Data on nonstandard large-scale integrated (circuit) components and a schematic set are also included.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tuo, Rui; Wu, C. F. Jeff

    Many computer models contain unknown parameters which need to be estimated using physical observations. Moreover, calibration methods based on Gaussian process models may lead to unreasonable estimates for imperfect computer models. In this work, we extend the earlier L2 calibration study to calibration problems with stochastic physical data. We propose a novel method, called the L2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied; theoretical analysis shows that it is consistent but not efficient. Numerical examples show that the proposed method outperforms the existing ones.

  12. In-flight calibration verification of spaceborne remote sensing instruments

    NASA Astrophysics Data System (ADS)

    LaBaw, Clayton C.

    1990-07-01

    The need to verify the performance of untended instrumentation has been recognized since scientists began sending these instruments into hostile environments to acquire data. The sea floor and the stratosphere have been explored, and the quality and accuracy of the data obtained verified by calibrating the instrumentation in the laboratory, both prior and subsequent to deployment. The inability to make the latter measurements on deep-space missions makes the calibration verification of these instruments a unique problem.

  13. Calibrating the Spatiotemporal Root Density Distribution for Macroscopic Water Uptake Models Using Tikhonov Regularization

    NASA Astrophysics Data System (ADS)

    Li, N.; Yue, X. Y.

    2018-03-01

    Macroscopic root water uptake models proportional to a root density distribution function (RDDF) are most commonly used to model water uptake by plants. As the water uptake is difficult and labor intensive to measure, these models are often calibrated by inverse modeling. Most previous inversion studies assume RDDF to be constant with depth and time or dependent on only depth for simplification. However, under field conditions, this function varies with type of soil and root growth and thus changes with both depth and time. This study proposes an inverse method to calibrate both spatially and temporally varying RDDF in unsaturated water flow modeling. To overcome the difficulty imposed by the ill-posedness, the calibration is formulated as an optimization problem in the framework of the Tikhonov regularization theory, adding additional constraint to the objective function. Then the formulated nonlinear optimization problem is numerically solved with an efficient algorithm on the basis of the finite element method. The advantage of our method is that the inverse problem is translated into a Tikhonov regularization functional minimization problem and then solved based on the variational construction, which circumvents the computational complexity in calculating the sensitivity matrix involved in many derivative-based parameter estimation approaches (e.g., Levenberg-Marquardt optimization). Moreover, the proposed method features optimization of RDDF without any prior form, which is applicable to a more general root water uptake model. Numerical examples are performed to illustrate the applicability and effectiveness of the proposed method. Finally, discussions on the stability and extension of this method are presented.
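
    In discretised form, the regularised calibration reduces to minimising a penalised misfit. The sketch below stands a toy linear operator in for the finite-element Richards solver and uses a first-difference smoothing penalty; everything is invented for illustration.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)

        n = 30
        F = rng.normal(size=(20, n))                 # toy forward operator
        g_true = np.exp(-np.linspace(0.0, 3.0, n))   # smooth "true" RDDF
        d = F @ g_true + rng.normal(0.0, 0.01, 20)   # noisy observations

        L = np.eye(n) - np.eye(n, k=1)               # first-difference matrix
        lam = 1e-2                                   # Tikhonov weight

        def objective(g):
            # data misfit + Tikhonov smoothness penalty
            return np.sum((F @ g - d)**2) + lam * np.sum((L @ g)**2)

        g_hat = minimize(objective, np.ones(n), method='L-BFGS-B',
                         bounds=[(0.0, None)] * n).x  # RDDF is non-negative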

  14. 40 CFR Appendix E to Part 52 - Performance Specifications and, Specification Test Procedures for Monitoring Systems for Effluent...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... rate at the time of the measurements is zero. 3.4 Calibration drift. The change in measurement system... reference value (paragraph 6.3.1). Zero drift (24 hours), zero drift, calibration drift, and operation period. 5.1.1 System...

  15. 40 CFR Appendix E to Part 52 - Performance Specifications and, Specification Test Procedures for Monitoring Systems for Effluent...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... rate at the time of the measurements is zero. 3.4 Calibration drift. The change in measurement system... reference value (paragraph 6.3.1). Zero drift (24 hours), zero drift, calibration drift, and operation period. 5.1.1 System...

  16. 40 CFR Appendix E to Part 52 - Performance Specifications and, Specification Test Procedures for Monitoring Systems for Effluent...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... rate at the time of the measurements is zero. 3.4 Calibration drift. The change in measurement system... reference value (paragraph 6.3.1). Zero drift (24 hours), zero drift, calibration drift, and operation period. 5.1.1 System...

  17. 40 CFR 90.305 - Dynamometer specifications and calibration accuracy.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    .... (a) Dynamometer specifications. The dynamometer test stand and other instruments for measurement of speed and power output must meet the engine speed and torque accuracy requirements shown in Table 2 in... measurement of power output must meet the calibration frequency shown in Table 2 in Appendix A of this subpart...

  18. 40 CFR 90.305 - Dynamometer specifications and calibration accuracy.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    .... (a) Dynamometer specifications. The dynamometer test stand and other instruments for measurement of speed and power output must meet the engine speed and torque accuracy requirements shown in Table 2 in... measurement of power output must meet the calibration frequency shown in Table 2 in Appendix A of this subpart...

  19. 40 CFR 90.305 - Dynamometer specifications and calibration accuracy.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... (a) Dynamometer specifications. The dynamometer test stand and other instruments for measurement of speed and power output must meet the engine speed and torque accuracy requirements shown in Table 2 in... measurement of power output must meet the calibration frequency shown in Table 2 in Appendix A of this subpart...

  20. 40 CFR 90.305 - Dynamometer specifications and calibration accuracy.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    .... (a) Dynamometer specifications. The dynamometer test stand and other instruments for measurement of speed and power output must meet the engine speed and torque accuracy requirements shown in Table 2 in... measurement of power output must meet the calibration frequency shown in Table 2 in Appendix A of this subpart...

  1. 40 CFR 90.305 - Dynamometer specifications and calibration accuracy.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    .... (a) Dynamometer specifications. The dynamometer test stand and other instruments for measurement of speed and power output must meet the engine speed and torque accuracy requirements shown in Table 2 in... measurement of power output must meet the calibration frequency shown in Table 2 in Appendix A of this subpart...

  2. 40 CFR 1066.240 - Torque transducer verification and calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) AIR POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Dynamometer Specifications § 1066.240 Torque transducer verification and calibration. Calibrate torque-measurement systems as described in 40 CFR 1065.310. ... 40 Protection of Environment 34 2013-07-01 2013-07-01 false Torque transducer verification and...

  3. 40 CFR 1066.240 - Torque transducer verification and calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) AIR POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Dynamometer Specifications § 1066.240 Torque transducer verification and calibration. Calibrate torque-measurement systems as described in 40 CFR 1065.310. ... 40 Protection of Environment 34 2012-07-01 2012-07-01 false Torque transducer verification and...

  4. Simbol-X Telescope Scientific Calibrations: Requirements and Plans

    NASA Astrophysics Data System (ADS)

    Malaguti, G.; Angelini, L.; Raimondi, L.; Moretti, A.; Trifoglio, M.

    2009-05-01

    The Simbol-X telescope characteristics and the mission scientific requirements impose a challenging calibration plan with a number of unprecedented issues. The 20 m focal length implies that, even in 100 m-long test facilities, the incoming X-ray beam has a divergence comparable to the incidence angle on the mirror surface. Moreover, this is the first time that a direct-focussing X-ray telescope will be calibrated over an energy band covering about three decades, and with a complex focal plane. These problems require a careful plan and organization of the measurements, together with an evaluation of the calibration needs in terms of both hardware and software.

  5. Improved uncertainty quantification in nondestructive assay for nonproliferation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burr, Tom; Croft, Stephen; Jarman, Ken

    2016-12-01

    This paper illustrates methods to improve uncertainty quantification (UQ) for non-destructive assay (NDA) measurements used in nuclear nonproliferation. First, it is shown that current bottom-up UQ applied to calibration data is not always adequate, for three main reasons: (1) because there are errors in both the predictors and the response, calibration involves a ratio of random quantities, and calibration data sets in NDA usually consist of only a modest number of samples (3–10), so asymptotic approximations involving quantities needed for UQ, such as means and variances, are often not sufficiently accurate; (2) common practice overlooks that calibration implies a partitioning of total error into random and systematic error; and (3) in many NDA applications, test items exhibit non-negligible departures in physical properties from calibration items, so model-based adjustments are used, but item-specific bias remains in some data. Therefore, improved bottom-up UQ using calibration data should predict the typical magnitude of item-specific bias, and the suggestion is to do so by including sources of item-specific bias in synthetic calibration data generated using a combination of modeling and real calibration data. Second, for measurements of the same nuclear material item by both the facility operator and international inspectors, current empirical (top-down) UQ is described for estimating operator and inspector systematic and random error variance components. A Bayesian alternative is introduced that easily accommodates constraints on variance components and is more robust than current top-down methods to the underlying measurement error distributions.

  6. Development of an automated procedure for estimation of the spatial variation of runoff in large river basins

    USDA-ARS?s Scientific Manuscript database

    The use of distributed parameter models to address water resource management problems has increased in recent years. Calibration is necessary to reduce the uncertainties associated with model input parameters. Manual calibration of a distributed parameter model is a very time consuming effort. There...

  7. SCAMP: Automatic Astrometric and Photometric Calibration

    NASA Astrophysics Data System (ADS)

    Bertin, Emmanuel

    2010-10-01

    Astrometric and photometric calibrations have remained the most tiresome step in the reduction of large imaging surveys. SCAMP has been written to address this problem. The program efficiently computes accurate astrometric and photometric solutions for any arbitrary sequence of FITS images in a completely automatic way. SCAMP is released under the GNU General Public License.

  8. Accuracy of airspeed measurements and flight calibration procedures

    NASA Technical Reports Server (NTRS)

    Huston, Wilber B

    1948-01-01

    The sources of error that may enter into the measurement of airspeed by pitot-static methods are reviewed in detail together with methods of flight calibration of airspeed installations. Special attention is given to the problem of accurate measurements of airspeed under conditions of high speed and maneuverability required of military airplanes. (author)

  9. Uncertainty propagation in the calibration equations for NTC thermistors

    NASA Astrophysics Data System (ADS)

    Liu, Guang; Guo, Liang; Liu, Chunlong; Wu, Qingwen

    2018-06-01

    The uncertainty propagation problem is quite important for temperature measurements, since we rely so much on the sensors and calibration equations. Although uncertainty propagation for platinum resistance or radiation thermometers is well known, there have been few publications concerning negative temperature coefficient (NTC) thermistors. Insight into the propagation characteristics of uncertainty that develop when equations are determined using the Lagrange interpolation or least-squares fitting method is presented here with respect to several of the most common equations used in NTC thermistor calibration. Within this work, analytical expressions of the propagated uncertainties for both fitting methods are derived for the uncertainties in the measured temperature and resistance at each calibration point. High-precision calibration of an NTC thermistor in a precision water bath was performed by means of the comparison method. Results show that, for both fitting methods, the propagated uncertainty is flat in the interpolation region but rises rapidly beyond the calibration range. Also, for temperatures interpolated between calibration points, the propagated uncertainty is generally no greater than that associated with the calibration points. For least-squares fitting, the propagated uncertainty is significantly reduced by increasing the number of calibration points and can be well kept below the uncertainty of the calibration points.
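
    The flat-inside, rising-outside behaviour can be reproduced with a Monte Carlo propagation through a least-squares Steinhart-Hart fit, 1/T = A + B ln R + C (ln R)^3. The calibration points and uncertainty below are invented.

        import numpy as np

        R = np.array([32650.0, 19900.0, 12490.0, 8057.0, 5327.0])  # ohms
        T = np.array([273.15, 283.15, 293.15, 303.15, 313.15])     # kelvin
        uT = 0.005  # standard uncertainty of each calibration temperature

        x = np.log(R)
        M = np.column_stack([np.ones_like(x), x, x**3])

        def T_of_R(r, coef):
            lr = np.log(r)
            return 1.0 / (coef[0] + coef[1] * lr + coef[2] * lr**3)

        # Perturb the calibration temperatures, refit, and inspect the spread
        # of predictions inside and beyond the calibration range.
        rng = np.random.default_rng(0)
        r_grid = np.geomspace(40000.0, 3000.0, 50)  # extends past both ends
        preds = []
        for _ in range(2000):
            c, *_ = np.linalg.lstsq(M, 1.0 / (T + rng.normal(0, uT, T.size)),
                                    rcond=None)
            preds.append(T_of_R(r_grid, c))
        u_prop = np.std(preds, axis=0)  # flat inside, rising outside the range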

  10. Processing and Quality Monitoring for the ATLAS Tile Hadronic Calorimeter Data

    NASA Astrophysics Data System (ADS)

    Burghgrave, Blake; ATLAS Collaboration

    2017-10-01

    An overview is presented of Data Processing and Data Quality (DQ) Monitoring for the ATLAS Tile Hadronic Calorimeter. Calibration runs are monitored from a data quality perspective and used as a cross-check for physics runs. Data quality in physics runs is monitored extensively and continuously. Any problems are reported and immediately investigated. The DQ efficiency achieved was 99.6% in 2012 and 100% in 2015, after the detector maintenance in 2013-2014. Changes to detector status or calibrations are entered into the conditions database (DB) during a brief calibration loop between the end of a run and the beginning of bulk processing of data collected in it. Bulk processed data are reviewed and certified for the ATLAS Good Run List if no problem is detected. Experts maintain the tools used by DQ shifters and the calibration teams during normal operation, and prepare new conditions for data reprocessing and Monte Carlo (MC) production campaigns. Conditions data are stored in 3 databases: Online DB, Offline DB for data and a special DB for Monte Carlo. Database updates can be performed through a custom-made web interface.

  11. Development and test of sets of 3D printed age-specific thyroid phantoms for 131I measurements

    NASA Astrophysics Data System (ADS)

    Beaumont, Tiffany; Caldeira Ideias, Pedro; Rimlinger, Maeva; Broggio, David; Franck, Didier

    2017-06-01

    In the case of a nuclear reactor accident, the release contains a high proportion of iodine-131 that can be inhaled or ingested by members of the public. Iodine-131 is naturally retained in the thyroid and increases the thyroid cancer risk. Since the radiation-induced thyroid cancer risk is greater for children than for adults, the thyroid dose to children should be assessed as accurately as possible. For that purpose, direct measurements should be carried out with age-specific calibration factors, but currently there are no age-specific thyroid phantoms allowing a robust measurement protocol. A set of age-specific thyroid phantoms for 5-, 10-, and 15-year-old children and for the adult has been designed and 3D printed. A realistic thyroid shape has been selected, and material properties were taken into account to simulate the attenuation of biological tissues. The thyroid volumes follow ICRP recommendations, and the phantoms also include the trachea and a spine model. Several versions, with or without spine, with or without trachea, and with or without an age-specific neck, have been manufactured in order to study the influence of these elements on calibration factors. The calibration factors obtained with the adult phantom and a reference phantom are in reasonable agreement. In vivo calibration experiments with germanium detectors have shown that the difference in counting efficiency, the inverse of the calibration factor, between the 5-year-old and adult phantoms is 25% for measurement at contact. It is also experimentally evidenced that the inverse of the calibration factor varies linearly with the thyroid volume. The influence of scattering elements like the neck or spine is not evidenced by experimental measurements.

  12. Clinical experience with video Head Impulse Test in children.

    PubMed

    Hülse, Roland; Hörmann, Karl; Servais, Jerôme José; Hülse, Manfred; Wenzel, Angela

    2015-08-01

    A standardized diagnostic protocol for children's vestibular assessment is still missing in daily clinical life. As rotatory chair testing and the caloric test are usually not well tolerated by children, the aim of our study was not only to evaluate the importance and practicability of the video head impulse test in children with and without balance problems, but also to outline a diagnostic algorithm for children with balance problems. Fifty-five children aged 3-16 years were included in this prospective monocentric study. Balance was assessed using results from the participants' health screening examinations and from a specific dizziness questionnaire for children. The children were then divided into two groups: group I without any sign of vestibular development disorder and group II with possible signs of a pathological equilibrium development. The horizontal vestibulo-ocular reflex (HVOR) was assessed using a video-oculography device (EyeSeeCam©). Gain at 40, 60, and 80 ms and gain variance were measured. Furthermore, we evaluated how calibration of the system was tolerated by the participants, how the test itself was accomplishable in children, and which difficulties arose during testing. Reproducible test results were accomplished in 42 children (75%). Children with no balance problems in their history showed a median gain of 1.02 (±0.28). A significant gain reduction between 40 and 80 ms was found (P < 0.05). Catch-up saccades were found in none of these children. Children with balance problems had a significantly reduced gain (0.47 ± 0.3; P < 0.05); in this group, catch-up saccades could be detected in 4 out of 6 patients. For both groups, performing the test took approximately 20 min, which is significantly longer than in adults (P < 0.05). Calibration of the system with laser dots was easily doable in children aged 6 and older, whereas children between 3 and 5 years had better calibration results using colorful little icons. The video head impulse test is a sensitive and efficient vestibular test, which is well tolerated by children aged 3-16 years. Therefore, it can be used as a screening tool to detect vestibular dysfunction in the pediatric population. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  13. Coupling machine learning with mechanistic models to study runoff production and river flow at the hillslope scale

    NASA Astrophysics Data System (ADS)

    Marçais, J.; Gupta, H. V.; De Dreuzy, J. R.; Troch, P. A. A.

    2016-12-01

    Geomorphological structure and geological heterogeneity of hillslopes are major controls on runoff responses. The diversity of hillslopes (morphological shapes and geological structures) on the one hand, and the highly nonlinear runoff response on the other, make it difficult to transpose what has been learnt at one specific hillslope to another. Therefore, making reliable predictions of runoff generation or river flow for a given hillslope is a challenge. Classical model calibration (based on inverse-problem techniques) must be repeated for each specific hillslope and requires calibration data, so it cannot readily be applied to thousands of cases. Here we propose a novel modeling framework that couples process-based models with a data-based approach. First, we develop a mechanistic model, based on the hillslope storage Boussinesq equations (Troch et al. 2003), able to model nonlinear runoff responses to rainfall at the hillslope scale. Second, we set up a model database representing thousands of non-calibrated simulations. These simulations investigate different hillslope shapes (real ones obtained by analyzing a 5 m digital elevation model of Brittany, and synthetic ones), different hillslope geological structures (i.e. different parametrizations), and different hydrologic forcing terms (i.e. different infiltration chronicles). Then, we use this model library to train a machine learning model on this physically based database. Machine learning model performance is then assessed by a classic validation phase (testing it on new hillslopes and comparing machine learning with mechanistic outputs). Finally, we use this machine learning model to learn which hillslope properties control runoff. This methodology will be further tested by combining synthetic datasets with real ones.
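
    Schematically, the workflow is: run the mechanistic model over many hillslope configurations, store the outputs as a database, and train a regressor on it. The stand-in model below and the use of scikit-learn are assumptions for illustration only.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)

        def hsb_run(slope, depth, rain):
            # Stand-in for a hillslope-storage Boussinesq simulation reduced
            # to one scalar (e.g. peak outflow); the real database keeps full
            # uncalibrated runs over shapes, structures, and forcings.
            return rain * np.tanh(slope) / (1.0 + depth)

        X = rng.uniform([0.01, 0.5, 1.0], [0.5, 5.0, 50.0], size=(5000, 3))
        y = np.array([hsb_run(*row) for row in X])

        surrogate = RandomForestRegressor(n_estimators=200,
                                          random_state=0).fit(X, y)
        print(surrogate.predict([[0.2, 2.0, 20.0]]))  # unseen hillslope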

  14. Co-calibrating quality-of-life scores from three pulmonary disorders: implications for comparative-effectiveness research.

    PubMed

    Rouse, M; Twiss, J; McKenna, S P

    2016-06-01

    Background Efficient use of health resources requires accurate outcome assessment. Disease-specific patient-reported outcome (PRO) measures are designed to be highly relevant to patients with a specific disease. They have advantages over generic PROs, which lack relevance to patient groups and miss crucial impacts of illness. It is thought that disease-specific measurement cannot be used in comparative effectiveness research (CER). The present study provides further evidence of the value of disease-specific measures in making valid comparisons across diseases. Methods The Asthma Life Impact Scale (ALIS, 22 items), Living with Chronic Obstructive Pulmonary Disease (LCOPD, 22 items) scale, and Cambridge Pulmonary Hypertension Outcome Review (CAMPHOR, 25 items) were completed by 140, 162, and 91 patients, respectively. The three samples were analyzed for fit to the Rasch model, then combined into a scale consisting of 58 unique items and re-analyzed. Raw scores on the three measures were co-calibrated and a transformation table produced. Results The scales fit the Rasch model individually (ALIS χ² probability value (p-χ²) = 0.05; LCOPD p-χ² = 0.38; CAMPHOR p-χ² = 0.92). The combined data also fit the Rasch model (p-χ² = 0.22). There was no differential item functioning related to age, gender, or disease. The co-calibrated scales successfully distinguished between perceived severity groups (p < 0.001). Limitations The samples were drawn from different sources. For scales to be co-calibrated using a common item design, they must be based on the same theoretical construct, be unidimensional, and have overlapping items. Conclusions The results showed that it is possible to co-calibrate scores from disease-specific PRO measures. This will permit more accurate and sensitive outcome measurement to be incorporated into CER. The co-calibration of needs-based disease-specific measures allows the calculation of γ scores that can be used to compare directly the impact of any type of interventions on any diseases included in the co-calibration.
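
    A heavily simplified sketch of the transformation-table idea: map a raw score on scale A to the common logit (θ) scale and back to an equivalent raw score on scale B. This assumes dichotomous items with known difficulties; the actual measures are fitted with dedicated Rasch software, so this is context only, not the authors' procedure:

      import numpy as np

      def expected_raw_score(theta, difficulties):
          """Expected raw score on a scale = sum of Rasch item probabilities."""
          return sum(1.0 / (1.0 + np.exp(-(theta - b))) for b in difficulties)

      def raw_to_theta(raw, difficulties, lo=-6.0, hi=6.0):
          """Invert the monotone expected-score curve by bisection."""
          for _ in range(60):
              mid = 0.5 * (lo + hi)
              if expected_raw_score(mid, difficulties) < raw:
                  lo = mid
              else:
                  hi = mid
          return 0.5 * (lo + hi)

      scale_a = np.linspace(-2.0, 2.0, 22)   # assumed item difficulties, scale A
      scale_b = np.linspace(-1.5, 2.5, 25)   # assumed item difficulties, scale B

      # Transformation table: raw score on A -> theta -> equivalent raw score on B.
      for raw_a in range(1, 22):
          theta = raw_to_theta(raw_a, scale_a)
          raw_b = expected_raw_score(theta, scale_b)
          print(f"A={raw_a:2d}  theta={theta:+.2f}  B~{raw_b:.1f}")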

  15. Experimental Demonstration of In-Place Calibration for Time Domain Microwave Imaging System

    NASA Astrophysics Data System (ADS)

    Kwon, S.; Son, S.; Lee, K.

    2018-04-01

    In this study, the experimental demonstration of in-place calibration was conducted using the developed time domain measurement system. Experiments were conducted using three calibration methods—in-place calibration and two existing calibrations, that is, array rotation and differential calibration. The in-place calibration uses dual receivers located at an equal distance from the transmitter. The received signals at the dual receivers contain similar unwanted signals, that is, the directly received signal and antenna coupling. In contrast to the simulations, the antennas are not perfectly matched and there might be unexpected environmental errors. Thus, we experimented with the developed experimental system to demonstrate the proposed method. The possible problems with low signal-to-noise ratio and clock jitter, which may exist in time domain systems, were rectified by averaging repeatedly measured signals. The tumor was successfully detected using the three calibration methods according to the experimental results. The cross correlation was calculated using the reconstructed image of the ideal differential calibration for a quantitative comparison between the existing rotation calibration and the proposed in-place calibration. The mean value of cross correlation between the in-place calibration and ideal differential calibration was 0.80, and the mean value of cross correlation of the rotation calibration was 0.55. Furthermore, the results of simulation were compared with the experimental results to verify the in-place calibration method. A quantitative analysis was also performed, and the experimental results show a tendency similar to the simulation.
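
    The quantitative comparison above rests on a normalized cross correlation between reconstructed images; a minimal sketch (the image arrays here are random stand-ins, not microwave reconstructions):

      import numpy as np

      # Zero-normalized cross correlation between a reconstructed image and the
      # ideal-differential-calibration reference.
      def zncc(img_a, img_b):
          a = img_a.ravel() - img_a.mean()
          b = img_b.ravel() - img_b.mean()
          return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

      reference = np.random.rand(64, 64)                 # stand-in images
      in_place = reference + 0.3 * np.random.rand(64, 64)
      rotation = reference + 0.8 * np.random.rand(64, 64)
      print("in-place vs reference:", zncc(in_place, reference))   # closer to 1
      print("rotation vs reference:", zncc(rotation, reference))   # lower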

  16. Automated Attitude Sensor Calibration: Progress and Plans

    NASA Technical Reports Server (NTRS)

    Sedlak, Joseph; Hashmall, Joseph

    2004-01-01

    This paper describes ongoing work at NASA/Goddard Space Flight Center to improve the quality of spacecraft attitude sensor calibration and reduce costs by automating parts of the calibration process. The new calibration software can autonomously preview data quality over a given time span, select a subset of the data for processing, perform the requested calibration, and output a report. This level of automation is currently being implemented for two specific applications: inertial reference unit (IRU) calibration and sensor alignment calibration. The IRU calibration utility makes use of a sequential version of the Davenport algorithm. This utility has been successfully tested with simulated and actual flight data. The alignment calibration is still in the early testing stage. Both utilities will be incorporated into the institutional attitude ground support system.
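
    The utility above uses a sequential variant of the Davenport algorithm; for context, a sketch of the classic batch Davenport q-method, which finds the attitude quaternion best mapping reference vectors into weighted body-frame observations (the test vectors are invented):

      import numpy as np

      def davenport_q(body_vecs, ref_vecs, weights):
          # Attitude profile matrix B = sum_i w_i * b_i r_i^T.
          B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
          S = B + B.T
          z = sum(w * np.cross(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
          sigma = np.trace(B)
          K = np.zeros((4, 4))            # Davenport's K matrix
          K[:3, :3] = S - sigma * np.eye(3)
          K[:3, 3] = z
          K[3, :3] = z
          K[3, 3] = sigma
          vals, vecs = np.linalg.eigh(K)
          q = vecs[:, np.argmax(vals)]    # eigenvector of the largest eigenvalue
          return q / np.linalg.norm(q)    # quaternion [qx, qy, qz, qw]

      # Quick check with an identity attitude: body vectors equal reference vectors.
      r1, r2 = np.array([1.0, 0, 0]), np.array([0, 1.0, 0])
      print(davenport_q([r1, r2], [r1, r2], [0.5, 0.5]))  # ~[0, 0, 0, 1]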

  17. Integrating the ODI-PPA scientific gateway with the QuickReduce pipeline for on-demand processing

    NASA Astrophysics Data System (ADS)

    Young, Michael D.; Kotulla, Ralf; Gopu, Arvind; Liu, Wilson

    2014-07-01

    As imaging systems improve, the size of astronomical data has continued to grow, making the transfer and processing of data a significant burden. To solve this problem for the WIYN Observatory One Degree Imager (ODI), we developed the ODI-Portal, Pipeline, and Archive (ODI-PPA) science gateway, integrating the data archive, data reduction pipelines, and a user portal. In this paper, we discuss the integration of the QuickReduce (QR) pipeline into PPA's Tier 2 processing framework. QR is a set of parallelized, stand-alone Python routines accessible to all users and to operators, who can create master calibration products and produce standardized calibrated data with a short turn-around time. Upon completion, the data are ingested into the archive and portal and made available to authorized users. Quality metrics and diagnostic plots are generated and presented via the portal for operator approval and user perusal. Additionally, users can tailor the calibration process to their specific science objective(s) by selecting custom datasets, applying preferred master calibrations or generating their own, and selecting pipeline options. Submission of a QuickReduce job initiates data staging, pipeline execution, and ingestion of output data products, all while allowing the user to monitor the process status and to download or further process/analyze the output within the portal. User-generated data products are placed into a private user space within the portal. ODI-PPA leverages cyberinfrastructure at Indiana University, including the Big Red II supercomputer, the Scholarly Data Archive tape system, and the Data Capacitor shared file system.

  18. Thermal history of sedimentary basins, maturation indices, and kinetics of oil and gas generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tissot, B.P.; Pelet, R.; Ungerer, P.

    1987-12-01

    Temperature is the most sensitive parameter in hydrocarbon generation. Thus, reconstruction of temperature history is essential when evaluating petroleum prospects. No measurable parameter can be directly converted to paleotemperature. Maturation indices such as vitrinite reflectance, Tmax from Rock-Eval pyrolysis, spore coloration, Thermal Alteration Index (TAI), or concentration of biological markers offer an indirect approach. All these indices are a function of the thermal history through rather complex kinetics, frequently influenced by the type of organic matter. Their significance and validity are reviewed. Besides the problems of identification (e.g. vitrinite) and interlaboratory calibration, it is important to simultaneously interpret kerogen type and maturation and to avoid difficult conversions from one index to another. Geodynamic models, where structural and thermal histories are connected, are another approach to temperature reconstruction which could be calibrated against the present distribution of temperature and the present value of maturation indices. Kinetics of kerogen decomposition controls the amount and composition of hydrocarbons generated. An empirical time-temperature index (TTI), originally introduced by Lopatin, does not allow such a quantitative evaluation. Due to several limitations (no provision for different types of kerogen and different rates of reactions, poor calibration on vitrinite reflectance), it is of limited interest unless one has no access to a desk-top computer. Kinetic models, based on a specific calibration made on actual source rock samples, can simulate the evolution of all types of organic matter and can provide a quantitative evaluation of oil and gas generated. 29 figures.
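
    As a rough illustration of Lopatin's index mentioned above, a minimal sketch assuming the standard convention that the rate factor doubles every 10 °C, with the 100-110 °C interval indexed as n = 0 (the burial history values are invented):

      import math

      def lopatin_tti(history):
          """history: list of (duration_My, mean_temperature_degC) intervals."""
          tti = 0.0
          for duration, temp in history:
              n = math.floor((temp - 100.0) / 10.0)  # 100-110 degC -> n = 0
              tti += duration * 2.0 ** n             # rate doubles per 10 degC
          return tti

      # Invented burial history: 20 My at 65 degC, 30 My at 95 degC, 10 My at 125 degC.
      print(lopatin_tti([(20, 65), (30, 95), (10, 125)]))  # 56.25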

  19. Hand-Eye Calibration in Visually-Guided Robot Grinding.

    PubMed

    Li, Wen-Long; Xie, He; Zhang, Gang; Yan, Si-Jie; Yin, Zhou-Ping

    2016-11-01

    Visually-guided robot grinding is a novel and promising automation technique for blade manufacturing. One common problem encountered in robot grinding is hand-eye calibration, which establishes the pose relationship between the end effector (hand) and the scanning sensor (eye). This paper proposes a new calibration approach for robot belt grinding. The main contribution of this paper is its consideration of both joint parameter errors and pose parameter errors in a hand-eye calibration equation. The objective function of the hand-eye calibration is built and solved, from which 30 compensated values (corresponding to 24 joint parameters and six pose parameters) are easily calculated in a closed solution. The proposed approach is economic and simple because only a criterion sphere is used to calculate the calibration parameters, avoiding the need for an expensive and complicated tracking process using a laser tracker. The effectiveness of this method is verified using a calibration experiment and a blade grinding experiment. The code used in this approach is attached in the Appendix.
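
    The paper extends hand-eye calibration to include joint-parameter errors; as context only, a sketch of the standard AX = XB solve using OpenCV's generic solver on synthetic, noise-free poses (this is not the authors' 30-parameter criterion-sphere method, and all poses below are invented):

      import numpy as np
      import cv2
      from scipy.spatial.transform import Rotation as R

      rng = np.random.default_rng(0)

      def make_T(rot, t):
          T = np.eye(4); T[:3, :3] = rot; T[:3, 3] = t
          return T

      # Invented ground-truth hand-eye transform X (sensor to end-effector)
      # and a fixed target pose in the robot base frame.
      X = make_T(R.from_euler("xyz", [10, -5, 30], degrees=True).as_matrix(),
                 [0.05, 0.02, 0.10])
      T_t2b = make_T(R.from_euler("xyz", [0, 180, 0], degrees=True).as_matrix(),
                     [0.5, 0.0, 0.3])

      R_g2b, t_g2b, R_t2c, t_t2c = [], [], [], []
      for _ in range(10):                           # ten synthetic robot stations
          T_g2b = make_T(R.random(random_state=rng).as_matrix(),
                         rng.uniform(-0.3, 0.3, 3))
          T_t2c = np.linalg.inv(T_g2b @ X) @ T_t2b  # consistent sensor observation
          R_g2b.append(T_g2b[:3, :3]); t_g2b.append(T_g2b[:3, 3])
          R_t2c.append(T_t2c[:3, :3]); t_t2c.append(T_t2c[:3, 3])

      R_est, t_est = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c,
                                          method=cv2.CALIB_HAND_EYE_TSAI)
      print(np.allclose(R_est, X[:3, :3], atol=1e-4), t_est.ravel())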

  20. Automatic calibration system for analog instruments based on DSP and CCD sensor

    NASA Astrophysics Data System (ADS)

    Lan, Jinhui; Wei, Xiangqin; Bai, Zhenlong

    2008-12-01

    Currently, the calibration of analog measurement instruments is mostly performed manually, and many problems remain to be solved. In this paper, an automatic calibration system (ACS) based on a Digital Signal Processor (DSP) and a Charge Coupled Device (CCD) sensor is developed, and a real-time calibration algorithm is presented. In the ACS, a TI DM643 DSP processes the data received by the CCD sensor, and the outcome is displayed on a Liquid Crystal Display (LCD) screen. In the algorithm, the pointer region is first extracted to improve calibration speed; a mathematical model of the pointer is then built to thin the pointer and determine the instrument's reading. In numerous experiments, a single reading took no more than 20 milliseconds, compared with several seconds when done manually, while the reading error satisfied the instruments' accuracy requirements. It is thus shown that the automatic calibration system can effectively accomplish the calibration of analog measurement instruments.

  1. Effective Data-Driven Calibration for a Galvanometric Laser Scanning System Using Binocular Stereo Vision.

    PubMed

    Tu, Junchao; Zhang, Liyan

    2018-01-12

    A new solution to the problem of galvanometric laser scanning (GLS) system calibration is presented. Under the machine learning framework, we build a single-hidden-layer feedforward neural network (SLFN) to represent the GLS system, which takes the digital control signals at the drives of the GLS system as input and the space vector of the corresponding outgoing laser beam as output. The training data set is obtained with the aid of a moving mechanism and a binocular stereo system. The parameters of the SLFN are efficiently solved in closed form by using the extreme learning machine (ELM). By quantitatively analyzing the regression precision with respect to the number of hidden neurons in the SLFN, we demonstrate that the proper number of hidden neurons can be safely chosen from a broad interval to guarantee good generalization performance. Compared with traditional model-driven calibration, the proposed calibration method does not need a complex modeling process and is more accurate and stable. As the output of the network is the space vectors of the outgoing laser beams, it requires much less training time and provides a uniform solution to both laser projection and 3D reconstruction, in contrast with the existing data-driven calibration method, which only works for the laser triangulation problem. Calibration, projection, and 3D-reconstruction experiments were conducted to test the proposed method, and good results were obtained.
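
    A minimal ELM sketch of the closed-form solve described above: hidden-layer weights are random and fixed, and only the output weights are fitted by least squares. The toy mapping below stands in for the control-signal-to-beam-vector data (all values invented):

      import numpy as np

      rng = np.random.default_rng(0)

      def elm_train(X, Y, n_hidden=100):
          W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
          b = rng.normal(size=n_hidden)                 # random hidden biases
          H = np.tanh(X @ W + b)                        # hidden-layer outputs
          beta = np.linalg.pinv(H) @ Y                  # closed-form output weights
          return W, b, beta

      def elm_predict(X, W, b, beta):
          return np.tanh(X @ W + b) @ beta

      # Toy stand-in: control signals -> unit direction vectors of outgoing beams.
      X = rng.uniform(-1, 1, size=(2000, 2))
      Y = np.column_stack([np.sin(X[:, 0]), np.sin(X[:, 1]), np.ones(len(X))])
      Y /= np.linalg.norm(Y, axis=1, keepdims=True)
      W, b, beta = elm_train(X, Y, n_hidden=200)
      err = np.linalg.norm(elm_predict(X, W, b, beta) - Y, axis=1)
      print("mean fit error:", err.mean())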

  2. 40 CFR 90.315 - Analyzer initial calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... specifications in § 90.316(b). (c) Zero setting and calibration. Using purified synthetic air (or nitrogen), set the CO, CO2, NOX, and HC analyzers at zero. Connect the appropriate calibrating gases to the analyzers...) Rechecking of zero setting. Recheck the zero setting and, if necessary, repeat the procedure described in...

  3. 40 CFR 90.315 - Analyzer initial calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... specifications in § 90.316(b). (c) Zero setting and calibration. Using purified synthetic air (or nitrogen), set the CO, CO2, NOX, and HC analyzers at zero. Connect the appropriate calibrating gases to the analyzers...) Rechecking of zero setting. Recheck the zero setting and, if necessary, repeat the procedure described in...

  4. 40 CFR 90.315 - Analyzer initial calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... specifications in § 90.316(b). (c) Zero setting and calibration. Using purified synthetic air (or nitrogen), set the CO, CO2, NOX, and HC analyzers at zero. Connect the appropriate calibrating gases to the analyzers...) Rechecking of zero setting. Recheck the zero setting and, if necessary, repeat the procedure described in...

  5. 40 CFR 90.315 - Analyzer initial calibration.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... specifications in § 90.316(b). (c) Zero setting and calibration. Using purified synthetic air (or nitrogen), set the CO, CO2, NOX, and HC analyzers at zero. Connect the appropriate calibrating gases to the analyzers...) Rechecking of zero setting. Recheck the zero setting and, if necessary, repeat the procedure described in...

  6. 40 CFR 90.315 - Analyzer initial calibration.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... specifications in § 90.316(b). (c) Zero setting and calibration. Using purified synthetic air (or nitrogen), set the CO, CO2, NOX, and HC analyzers at zero. Connect the appropriate calibrating gases to the analyzers...) Rechecking of zero setting. Recheck the zero setting and, if necessary, repeat the procedure described in...

  7. Update: Partnership for the Revitalization of National Wind Tunnel Force Measurement Technology Capability

    NASA Technical Reports Server (NTRS)

    Rhew, Ray D.

    2010-01-01

    NASA's Aeronautics Test Program (ATP) chartered a team to examine the issues and risks associated with the lack of funding and focus on force measurement over the past several years, focusing specifically on strain-gage balances. NASA partnered with the U.S. Air Force's Arnold Engineering Development Center (AEDC) to exploit their combined capabilities, take a national-level government view of the problem, and establish the National Force Measurement Technology Capability (NFMTC) project. This paper provides an update on the team's status in revitalizing the government's balance capability with respect to designing, fabricating, calibrating, and using these critical measurement devices.

  8. Response analysis of holography-based modal wavefront sensor.

    PubMed

    Dong, Shihao; Haist, Tobias; Osten, Wolfgang; Ruppel, Thomas; Sawodny, Oliver

    2012-03-20

    The crosstalk problem of holography-based modal wavefront sensing (HMWS) becomes more severe with increasing aberration. In this paper, crosstalk effects on the sensor response are analyzed statistically for typical aberrations due to atmospheric turbulence. For a specific turbulence strength, we optimized the sensor by adjusting the detector radius and the encoded phase bias for each Zernike mode. Calibrated response curves of low-order Zernike modes were further utilized to improve the sensor accuracy. The simulation results validated our strategy: the number of iterations required to obtain a residual RMS wavefront error of 0.1λ is reduced from 18 to 3. © 2012 Optical Society of America

  9. Oceanography from space

    NASA Technical Reports Server (NTRS)

    Stewart, R. H.

    1982-01-01

    Active and passive spaceborne instruments that can observe the sea are discussed. Attention is given to satellite observations of ocean surface temperature and heating, wind speed and direction, ocean currents, wave height, ocean color, and sea ice. Specific measurements now being made from space are described, the accuracy of various instruments is considered, and problems associated with the analysis of satellite data are examined. It is concluded that the satellites and techniques used by different nations should be sufficiently standard that data from one satellite can be directly compared with data from another and that accurate calibration and overlap of satellite data are necessary to confirm the continuity and homogeneity of the data.

  10. Shocks, Rarefaction Waves, and Current Fluctuations for Anharmonic Chains

    DOE PAGES

    Mendl, Christian B.; Spohn, Herbert

    2016-10-04

    The nonequilibrium dynamics of anharmonic chains is studied by imposing an initial domain-wall state, in which the two half lattices are prepared in equilibrium with distinct parameters. Here, we analyse the Riemann problem for the corresponding Euler equations and, in specific cases, compare with molecular dynamics. Additionally, the fluctuations of time-integrated currents are investigated. In analogy with the KPZ equation, their typical fluctuations should be of size t^{1/3} and have a Tracy–Widom GUE distributed amplitude. The proper extension to anharmonic chains is explained and tested through molecular dynamics. Our results are calibrated against the stochastic LeRoux lattice gas.

  11. Accuracy evaluation of optical distortion calibration by digital image correlation

    NASA Astrophysics Data System (ADS)

    Gao, Zeren; Zhang, Qingchuan; Su, Yong; Wu, Shangquan

    2017-11-01

    Due to its convenience of operation, the camera calibration algorithm based on a planar template is widely used in image measurement, computer vision, and other fields. How to select a suitable distortion model, however, remains an open problem, so there is an urgent need for an experimental evaluation of the accuracy of camera distortion calibrations. This paper presents an experimental method for evaluating camera distortion calibration accuracy, which is easy to implement, has high precision, and is suitable for a variety of commonly used lenses. First, we use the digital image correlation method to calculate the in-plane rigid-body displacement field of an image displayed on a liquid crystal display before and after translation, as captured with a camera. Next, we use a calibration board to calibrate the camera and obtain calibration parameters, which are used to correct the calculation points of the image before and after deformation. The displacement fields before and after correction are compared to analyze the distortion calibration results. Experiments were carried out to evaluate the performance of two commonly used industrial camera lenses with four commonly used distortion models.

  12. Genetic algorithm applied to a Soil-Vegetation-Atmosphere system: Sensitivity and uncertainty analysis

    NASA Astrophysics Data System (ADS)

    Schneider, Sébastien; Jacques, Diederik; Mallants, Dirk

    2010-05-01

    Numerical models are of precious help for predicting water fluxes in the vadose zone and more specifically in Soil-Vegetation-Atmosphere (SVA) systems. For such simulations, robust models and representative soil hydraulic parameters are required. Calibration of unsaturated hydraulic properties is known to be a difficult optimization problem due to the high non-linearity of the water flow equations. Therefore, robust methods are needed to prevent the optimization process from converging to non-optimal parameters. Evolutionary algorithms, and specifically genetic algorithms (GAs), are very well suited to such complex parameter optimization problems. Additionally, GAs offer the opportunity to assess the confidence in the hydraulic parameter estimates, because of the large number of model realizations. The SVA system in this study concerns a pine stand on a heterogeneous sandy soil (podzol) in the Campine region in the north of Belgium. Throughfall and other meteorological data and water contents at different soil depths were recorded during one year at a daily time step in two lysimeters. The water table level, which varies between 95 and 170 cm, was recorded at 0.5-hour intervals. The leaf area index was also measured at selected times during the year in order to evaluate the energy reaching the soil and to deduce the potential evaporation. Based on the profile description, five soil layers were distinguished in the podzol. Two models were used for simulating water fluxes: (i) a mechanistic model, the HYDRUS-1D model, which solves the Richards equation, and (ii) a compartmental model, which treats the soil profile as a bucket into which water flows until its maximum capacity is reached. A global sensitivity analysis (Morris' one-at-a-time sensitivity analysis) was run prior to the calibration, in order to check the sensitivity over the chosen parameter search space. For the inversion procedure a genetic algorithm (GA) was used, implementing specific features such as elitism, roulette-wheel selection, and an island model. Optimization was based on the water content measurements recorded at several depths. Ten scenarios were elaborated and applied to the two lysimeters in order to investigate the impact of the conceptual model, in terms of process description (mechanistic or compartmental) and geometry (number of horizons in the profile description), on the calibration accuracy. Calibration led to good agreement with the measured water contents. The most critical factors for improving the goodness of fit are the number of horizons and the type of process description. Best fits were found for a mechanistic model with 5 horizons, resulting in absolute differences between observed and simulated water contents of less than 0.02 cm³ cm⁻³ on average. Parameter estimate analysis shows that layer thicknesses are poorly constrained, whereas hydraulic parameters are much better defined.
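
    A toy sketch of a GA with the elitism and roulette-wheel selection mentioned above; the exponential water-content profile merely stands in for a HYDRUS-1D (or bucket-model) run, and all numbers are invented:

      import numpy as np

      rng = np.random.default_rng(1)

      # Stand-in "model": water content profile from two hydraulic parameters.
      depths = np.linspace(10, 150, 8)
      def simulate(theta_s, alpha):
          return theta_s * np.exp(-alpha * depths / 100.0)

      observed = simulate(0.42, 0.8) + rng.normal(0, 0.005, depths.size)

      def fitness(p):                    # higher is better; p = (theta_s, alpha)
          rmse = np.sqrt(np.mean((simulate(*p) - observed) ** 2))
          return 1.0 / (1e-9 + rmse)

      lo, hi = np.array([0.2, 0.1]), np.array([0.6, 2.0])   # parameter bounds
      pop = rng.uniform(lo, hi, size=(40, 2))

      for gen in range(60):
          fit = np.array([fitness(p) for p in pop])
          elite = pop[np.argmax(fit)].copy()               # elitism: keep the best
          probs = fit / fit.sum()                          # roulette-wheel selection
          parents = pop[rng.choice(len(pop), size=len(pop), p=probs)]
          cut = rng.random((len(pop), 1))                  # blend crossover
          children = cut * parents + (1 - cut) * parents[::-1]
          children += rng.normal(0, 0.02, children.shape)  # mutation
          pop = np.clip(children, lo, hi)
          pop[0] = elite
      print("best parameters:", pop[np.argmax([fitness(p) for p in pop])])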

  13. ISO Key Project: Exploring the full range of QUASAR/AGN properties

    NASA Technical Reports Server (NTRS)

    Wilkes, B.

    1998-01-01

    The PIA (PHOT Interactive Analysis) software was upgraded as new releases were made available by VILSPA. We have continued to analyze our data but, given the large number of still outstanding problems with the calibration and analysis (listed below), we remain unable to move forward on our scientific program. We have concentrated on observations with long (256 sec) exposure times to avoid the most extreme detector responsivity drift problems, which occur with a change in observed flux level, i.e., as one begins to observe a new target. There remain a significant number of problems with analyzing these data, including: (1) the default calibration source (FCS) observations early in the mission were too short and affected by strong detector responsivity drifts; (2) the calibration of the FCS sources is not yet well understood, particularly for chopped observations (which include most of ours); (3) the detector responsivity drift is not well understood, and models are only now becoming available for fitting chopped data; (4) charged particle hits on the detector cause transient responsivity drifts which need to be corrected; (5) the "flat-field" calibration of the long-wavelength (array) detectors C100 and C200 leaves significant residual structure and needs to be improved; (6) the vignetting correction, which affects detected flux levels in the array detectors, is not yet available; (7) the intra-filter calibrations are not yet available; and (8) the background above 60 microns has a significant gradient, which results in spurious positive and negative "detections" in chopped observations. ISO observation planning, conferences and talks, ground-based observing, and other grant-related activities are also briefly discussed.

  14. Retrieving Storm Electric Fields from Aircraft Field Mill Data. Part 1; Theory

    NASA Technical Reports Server (NTRS)

    Koshak, W. J.

    2006-01-01

    It is shown that the problem of retrieving storm electric fields from an aircraft instrumented with several electric field mill sensors can be expressed in terms of a standard Lagrange multiplier optimization problem. The method naturally removes aircraft charge from the retrieval process without having to use a high voltage stinger and linearly combined mill data values. It allows a variety of user-supplied physical constraints (the so-called side constraints in the theory of Lagrange multipliers) and also helps improve absolute calibration. Additionally, this paper introduces an alternate way of performing the absolute calibration of an aircraft that has some benefits over conventional analyses. It is accomplished by using the time derivatives of mill and pitch data for a pitch down maneuver performed at high (greater than 1 km) altitude. In Part II of this study, the above methods are tested and then applied to complete a full calibration of a Citation aircraft.

  15. Retrieving Storm Electric Fields From Aircraft Field Mill Data. Part I: Theory

    NASA Technical Reports Server (NTRS)

    Koshak, W. J.

    2005-01-01

    It is shown that the problem of retrieving storm electric fields from an aircraft instrumented with several electric field mill sensors can be expressed in terms of a standard Lagrange multiplier optimization problem. The method naturally removes aircraft charge from the retrieval process without having to use a high voltage stinger and linearly combined mill data values. It also allows a variety of user-supplied physical constraints (the so-called side constraints in the theory of Lagrange multipliers). Additionally, this paper introduces a novel way of performing the absolute calibration of an aircraft that has several benefits over conventional analyses. In the new approach, absolute calibration is completed by inspecting the time derivatives of mill and pitch data for a pitch down maneuver performed at high (greater than 1 km) altitude. In Part II of this study, the above methods are tested and then applied to complete a full calibration of a Citation aircraft.
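
    A generic sketch of the constrained least-squares form described in the two records above: minimize ||Ax − d||² subject to side constraints Cx = c, solved through the Lagrange-multiplier (KKT) system. The matrices here are invented, not the mill-specific formulation:

      import numpy as np

      def constrained_lsq(A, d, C, c):
          """Minimize ||A x - d||^2 subject to C x = c via Lagrange multipliers."""
          n, m = A.shape[1], C.shape[0]
          K = np.zeros((n + m, n + m))     # KKT system [[2A^TA, C^T], [C, 0]]
          K[:n, :n] = 2.0 * A.T @ A
          K[:n, n:] = C.T
          K[n:, :n] = C
          rhs = np.concatenate([2.0 * A.T @ d, c])
          sol = np.linalg.solve(K, rhs)
          return sol[:n], sol[n:]          # estimate and Lagrange multipliers

      # Toy example: fit 3 field components to 5 mill readings while pinning the
      # vertical component to a known value (all numbers invented).
      A = np.random.rand(5, 3)
      x_true = np.array([1.0, -2.0, 0.5])
      d = A @ x_true
      C = np.array([[0.0, 0.0, 1.0]]); c = np.array([0.5])
      x, lam = constrained_lsq(A, d, C, c)
      print(x)   # recovers x_true, with x[2] pinned to 0.5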

  16. Studies in Software Cost Model Behavior: Do We Really Understand Cost Model Performance?

    NASA Technical Reports Server (NTRS)

    Lum, Karen; Hihn, Jairus; Menzies, Tim

    2006-01-01

    While there exists extensive literature on software cost estimation techniques, industry practice continues to rely upon standard regression-based algorithms. These software effort models are typically calibrated or tuned to local conditions using local data. This paper cautions that current approaches to model calibration often produce sub-optimal models, both because of the large variance problem inherent in cost data and because they include far more effort multipliers than the data support. Building optimal models requires that a wider range of models be considered, while correctly calibrating these models requires rejection rules that prune variables and records and multiple criteria for evaluating model performance. The main contribution of this paper is to document a standard method that integrates formal model identification, estimation, and validation. It also documents what we call the large variance problem, a leading cause of cost model brittleness and instability.

  17. A frequentist approach to computer model calibration

    DOE PAGES

    Wong, Raymond K. W.; Storlie, Curtis Byron; Lee, Thomas C. M.

    2016-05-05

    The paper considers the computer model calibration problem and provides a general frequentist solution. Under the framework proposed, the data model is semiparametric, with a non-parametric discrepancy function which accounts for any discrepancy between physical reality and the computer model. In an attempt to solve a fundamentally important (but often ignored) identifiability issue between the computer model parameters and the discrepancy function, the paper proposes a new and identifiable parameterization of the calibration problem. It also develops a two-step procedure for estimating all the relevant quantities under the new parameterization. This estimation procedure is shown to enjoy excellent rates of convergence and can be straightforwardly implemented with existing software. For uncertainty quantification, bootstrapping is adopted to construct confidence regions for the quantities of interest. Finally, the practical performance of the methodology is illustrated through simulation examples and an application to a computational fluid dynamics model.

  18. Using the cloud to speed-up calibration of watershed-scale hydrologic models (Invited)

    NASA Astrophysics Data System (ADS)

    Goodall, J. L.; Ercan, M. B.; Castronova, A. M.; Humphrey, M.; Beekwilder, N.; Steele, J.; Kim, I.

    2013-12-01

    This research focuses on using the cloud to address computational challenges associated with hydrologic modeling. One example is calibration of a watershed-scale hydrologic model, which can take days of execution time on typical computers. While parallel algorithms for model calibration exist and some researchers have used multi-core computers or clusters to run them, these solutions do not fully address the challenge because (i) calibration can still be too time-consuming even on multicore personal computers and (ii) few in the community have the time and expertise needed to manage a compute cluster. Given this, another option we are exploring through this work is the use of the cloud to speed up calibration of watershed-scale hydrologic models. The cloud used in this capacity provides a means for renting a specific number and type of machines for only the time needed to perform a calibration run. It allows one to precisely balance the duration of the calibration against the financial cost, so that, if the budget allows, the calibration can be performed more quickly by renting more machines. Focusing specifically on the SWAT hydrologic model and a parallel version of the DDS calibration algorithm, we show significant speed-ups across a range of watershed sizes, using up to 256 cores to perform a model calibration. The tool provides a simple web-based user interface and the ability to monitor the job submission and calibration process. Finally, this talk concludes with initial work to leverage the cloud for other tasks associated with hydrologic modeling, including preparing inputs for constructing place-based hydrologic models.
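
    A serial sketch of the DDS algorithm named above (dynamically dimensioned search, after Tolson and Shoemaker): perturb a shrinking random subset of parameters and keep improvements. The objective below is a toy stand-in for a SWAT model run, and bound handling is simplified to clipping (DDS proper reflects at the bounds):

      import numpy as np

      rng = np.random.default_rng(0)

      def dds(obj, lo, hi, m=500, r=0.2):
          x_best = rng.uniform(lo, hi)
          f_best = obj(x_best)
          for i in range(1, m + 1):
              p = 1.0 - np.log(i) / np.log(m)          # inclusion probability decays
              mask = rng.random(lo.size) < p
              if not mask.any():
                  mask[rng.integers(lo.size)] = True   # always perturb one dimension
              x = x_best.copy()
              x[mask] += r * (hi - lo)[mask] * rng.normal(size=mask.sum())
              x = np.clip(x, lo, hi)                   # simplified bound handling
              f = obj(x)
              if f < f_best:                           # greedy accept
                  x_best, f_best = x, f
          return x_best, f_best

      # Toy objective standing in for a calibration error metric.
      lo, hi = np.zeros(6), np.ones(6)
      target = rng.uniform(lo, hi)
      best, err = dds(lambda x: np.sum((x - target) ** 2), lo, hi)
      print(best, err)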

  19. Students' Strengths and Weaknesses in Evaluating Technical Arguments as Revealed through Implementing Calibrated Peer Review™ in a Bioengineering Laboratory

    ERIC Educational Resources Information Center

    Volz, Tracy; Saterbak, Ann

    2009-01-01

    In engineering fields, students are expected to construct technical arguments that demonstrate a discipline's expected use of logic, evidence, and conventions. Many undergraduate bioengineering students struggle to enact the appropriate argument structures when they produce technical posters. To address this problem we implemented Calibrated Peer…

  20. Sub-half-micron contact window design with 3D photolithography simulator

    NASA Astrophysics Data System (ADS)

    Brainerd, Steve K.; Bernard, Douglas A.; Rey, Juan C.; Li, Jiangwei; Granik, Yuri; Boksha, Victor V.

    1997-07-01

    In state-of-the-art IC design and manufacturing, certain lithography layers have unique requirements. Latitudes and tolerances that apply to contacts and polysilicon gates are tight for such critical layers. Industry experts are discussing the most cost-effective ways to use feature-oriented equipment and materials already developed for these layers. Such requirements introduce new dimensions into the traditionally challenging task facing the photolithography engineer when considering various combinations of multiple factors to optimize and control the process. In addition, he/she faces a rapidly increasing cost of experiments, limited time, and scarce access to equipment to conduct them. All the reasons presented above support simulation as an ideal method to satisfy these demands. However, lithography engineers may easily be dissatisfied with a simulation tool upon discovering disagreement between simulation and experimental data. The problem is that several parameters used in photolithography simulation are very process-specific. Calibration, i.e., matching experimental and simulation data using a specific set of procedures, allows one to use the simulation tool effectively. We present results of a simulation-based approach to optimizing photolithography processes for sub-0.5 micron contact windows. Our approach consists of: (1) 3D simulation to explore different lithographic options, (2) calibration to a range of process conditions with extensive use of specifically developed optimization techniques. The choice of a 3D simulator is essential because of the 3D nature of the contact window design problem. We use DEPICT 4.1. This program performs fast aerial image simulation as presented before. For 3D exposure the program uses an extension to three dimensions of the high-numerical-aperture model combined with fast Fourier transforms for maximum performance and accuracy. We use the Kim (U.C. Berkeley) model and the fast-marching level set method, respectively, for the calculation of resist development rates and resist surface movement during the development process. Calibration efforts were aimed at matching experimental results on contact windows obtained after exposure of a binary mask. Additionally, simulation was applied to conduct quantitative analysis of PSM design capabilities, optical proximity correction, and stepper parameter optimization. Extensive experiments covered exposure (ASML 5500/100D stepper), pre- and post-exposure bake, and development (2.38% TMAH, puddle process) of JSR IX725D2G and TOK iP3500 photoresist films on 200 mm test wafers. 'Aquatar' was used as the top antireflective coating. SEM pictures of developed patterns were analyzed and compared with simulation results for different values of defocus, exposure energy, numerical aperture, and partial coherence.

  1. Hand-eye calibration using a target registration error model.

    PubMed

    Chen, Elvis C S; Morgan, Isabella; Jayarathne, Uditha; Ma, Burton; Peters, Terry M

    2017-10-01

    Surgical cameras are prevalent in modern operating theatres and are often used as a surrogate for direct vision. Visualisation techniques (e.g. image fusion) made possible by tracking the camera require accurate hand-eye calibration between the camera and the tracking system. The authors introduce the concept of 'guided hand-eye calibration', where calibration measurements are facilitated by a target registration error (TRE) model. They formulate hand-eye calibration as a registration problem between homologous point-line pairs. For each measurement, the position of a monochromatic ball-tip stylus (a point) and its projection onto the image (a line) is recorded, and the TRE of the resulting calibration is predicted using a TRE model. The TRE model is then used to guide the placement of the calibration tool, so that the subsequent measurement minimises the predicted TRE. Assessing TRE after each measurement produces accurate calibration using a minimal number of measurements. As a proof of principle, they evaluated guided calibration using a webcam and an endoscopic camera. Their endoscopic camera results suggest that millimetre TRE is achievable when at least 15 measurements are acquired with the tracker sensor ∼80 cm away on the laparoscope handle for a target ∼20 cm away from the camera.

  2. A calibration method based on virtual large planar target for cameras with large FOV

    NASA Astrophysics Data System (ADS)

    Yu, Lei; Han, Yangyang; Nie, Hong; Ou, Qiaofeng; Xiong, Bangshu

    2018-02-01

    In order to obtain high precision in camera calibration, a target should be large enough to cover the whole field of view (FOV). For cameras with a large FOV, using a small target seriously reduces the precision of calibration, but using a large target causes many difficulties in making, carrying, and employing it. In order to solve this problem, a calibration method based on a virtual large planar target (VLPT), which is virtually constructed from multiple small targets (STs), is proposed for cameras with a large FOV. In the VLPT-based calibration method, first, the positions and directions of the STs are changed several times to obtain a number of calibration images. Secondly, the VLPT of each calibration image is created by finding the virtual points corresponding to the feature points of the STs. Finally, the intrinsic and extrinsic parameters of the camera are calculated by using the VLPTs. Experimental results show that the proposed method not only achieves calibration precision similar to that obtained with a large target, but also has good stability over the whole measurement area. Thus, the difficulties of accurately calibrating cameras with a large FOV can be effectively tackled by the proposed method, with good operability.

  3. Radiometric recalibration procedure for Landsat-5 Thematic Mapper data

    USGS Publications Warehouse

    Chander, G.; Micijevic, E.; Hayes, R.W.; Barsi, J.A.

    2008-01-01

    The Landsat-5 (L5) satellite was launched on March 01, 1984, with a design life of three years. Incredibly, the L5 Thematic Mapper (TM) has collected data for 23 years. Over this time, the detectors have aged, and the instrument's radiometric characteristics have changed since launch. The calibration procedures and parameters have also changed with time. Revised radiometric calibrations have improved the radiometric accuracy of recently processed data; however, users with data that were processed prior to the calibration update do not benefit from the revisions. A procedure has been developed to give users the ability to recalibrate their existing Level 1 (L1) products without having to purchase reprocessed data from the U.S. Geological Survey (USGS). The accuracy of the recalibration is dependent on the knowledge of the prior calibration applied to the data. The "Work Order" file, included with standard National Landsat Archive Production System (NLAPS) data products, gives parameters that define the applied calibration. These are the Internal Calibrator (IC) calibration parameters or, if there were problems with the IC calibration, the default prelaunch calibration. This paper details the recalibration procedure for data processed using IC, in which users have the Work Order file.

  4. Preparation of calibration materials for microanalysis of Ti minerals by direct fusion of synthetic and natural materials: experience with LA-ICP-MS analysis of some important minor and trace elements in ilmenite and rutile.

    PubMed

    Odegård, M; Mansfeld, J; Dundas, S H

    2001-08-01

    Calibration materials for microanalysis of Ti minerals have been prepared by direct fusion of synthetic and natural materials by resistance heating in high-purity graphite electrodes. Synthetic materials were FeTiO3 and TiO2 reagents doped with minor and trace elements; CRMs for ilmenite, rutile, and a Ti-rich magnetite were used as natural materials. Problems occurred during fusion of Fe2O3-rich materials, because at atmospheric pressure Fe2O3 decomposes into Fe3O4 and O2 at 1462 degrees C. An alternative fusion technique under pressure was tested, but the resulting materials were characterized by extensive segregation and development of separate phases. Fe2O3-rich materials were therefore fused below this temperature, resulting in a form of sintering, without conversion of the materials into amorphous glasses. The fused materials were studied by optical microscopy and EPMA, and tested as calibration materials by inductively coupled plasma mass spectrometry equipped with laser ablation for sample introduction (LA-ICP-MS). It was demonstrated that calibration curves based on materials of rutile composition generally coincide, within normal analytical uncertainty, with calibration curves based on materials of ilmenite composition. It is therefore concluded that LA-ICP-MS analysis of Ti minerals can advantageously be based exclusively on calibration materials prepared for rutile, thereby avoiding the special fusion problems related to oxide mixtures of ilmenite composition. It is documented that the sintered materials were in good overall agreement with homogeneous glass materials, an observation indicating that, in other situations as well, sintered mineral concentrates might be a useful alternative for instrument calibration, e.g. as an alternative to pressed powders.
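
    A sketch of the coinciding-calibration-curve check described above: fit a linear calibration curve for each material family and compare the slopes (all signal and concentration values invented):

      import numpy as np

      conc = np.array([10.0, 50.0, 100.0, 250.0, 500.0])   # ppm in the standards
      signal_rutile = 12.1 * conc + np.random.normal(0, 20, conc.size)
      signal_ilmenite = 11.8 * conc + np.random.normal(0, 20, conc.size)

      slope_r, intercept_r = np.polyfit(conc, signal_rutile, 1)
      slope_i, intercept_i = np.polyfit(conc, signal_ilmenite, 1)
      print(f"rutile slope   : {slope_r:.2f}")
      print(f"ilmenite slope : {slope_i:.2f}")   # coinciding within the noise

      # An unknown is then quantified with either curve:
      print("estimated conc:", (6000.0 - intercept_r) / slope_r)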

  5. COSTEP - Comprehensive Suprathermal and Energetic Particle Analyser

    NASA Astrophysics Data System (ADS)

    Müller-Mellin, R.; Kunow, H.; Fleißner, V.; Pehlke, E.; Rode, E.; Röschmann, N.; Scharmberg, C.; Sierks, H.; Rusznyak, P.; McKenna-Lawlor, S.; Elendt, I.; Sequeiros, J.; Meziat, D.; Sanchez, S.; Medina, J.; Del Peral, L.; Witte, M.; Marsden, R.; Henrion, J.

    1995-12-01

    The COSTEP experiment on SOHO forms part of the CEPAC complex of instruments that will perform studies of the suprathermal and energetic particle populations of solar, interplanetary, and galactic origin. Specifically, the LION and EPHIN instruments are designed to use particle emissions from the Sun for several species (electrons, protons, and helium nuclei) in the energy range 44 keV/particle to > 53 MeV/n as tools to study critical problems in solar physics as well as fundamental problems in space plasma and astrophysics. Scientific goals are presented and a technical description is provided of the two sensors and the common data processing unit. Calibration results are presented which show the ability of LION to separate electrons from protons and the ability of EPHIN to obtain energy spectra and achieve isotope separation for light nuclei. A brief description of mission operations and data products is given.

  6. Sensitivity to imputation models and assumptions in receiver operating characteristic analysis with incomplete data

    PubMed Central

    Karakaya, Jale; Karabulut, Erdem; Yucel, Recai M.

    2015-01-01

    Modern statistical methods for incomplete data have been increasingly applied in a wide variety of substantive problems. Similarly, receiver operating characteristic (ROC) analysis, a method used in evaluating diagnostic tests or biomarkers in medical research, has received increasing attention in both its development and application. While missing-data methods have been applied in ROC analysis, the impact of model mis-specification and/or the assumptions underlying the missing data (e.g. missing at random) has not been thoroughly studied. In this work, we study the performance of multiple imputation (MI) inference in ROC analysis. In particular, we investigate parametric and non-parametric techniques for MI inference under common missingness mechanisms. Depending on the coherency of the imputation model with the underlying data generation mechanism, our results show that MI generally leads to well-calibrated inferences under ignorable missingness mechanisms. PMID:26379316

  7. Laser-induced breakdown spectroscopy for detection of heavy metals in environmental samples

    NASA Astrophysics Data System (ADS)

    Wisbrun, Richard W.; Schechter, Israel; Niessner, Reinhard; Schroeder, Hartmut

    1993-03-01

    The application of LIBS technology as a sensor for heavy metals in solid environmental samples has been studied. This specific application introduces some new problems into the LIBS analysis. Some of them are related to the particular distribution of contaminants in the grained samples; others are related to the mechanical properties of the samples and to general matrix effects, such as the water and organic fiber content of the sample. An attempt has been made to optimize the experimental set-up for the various parameters involved. Understanding these factors has enabled the adjustment of the technique to the substrates of interest. The special importance of the grain size and of the laser-induced aerosol production is pointed out. Calibration plots for the analysis of heavy metals in diverse sand and soil samples have been produced. The detection limits are shown to be generally below the concentration limits of recent regulations.

  8. A general system for automatic biomedical image segmentation using intensity neighborhoods.

    PubMed

    Chen, Cheng; Ozolek, John A; Wang, Wei; Rohde, Gustavo K

    2011-01-01

    Image segmentation is important, with applications to several problems in biology and medicine. While extensively researched, current segmentation methods generally perform adequately in the applications for which they were designed, but often require extensive modifications or calibrations before being used in a different application. We describe an approach that, with few modifications, can be used in a variety of image segmentation problems. The approach is based on a supervised learning strategy that utilizes intensity neighborhoods to assign each pixel in a test image its correct class based on training data. We describe methods for modeling rotations and variations in scale, as well as subset selection for training the classifiers. We show that the performance of our approach in tissue segmentation tasks in magnetic resonance and histopathology microscopy images, as well as nuclei segmentation from fluorescence microscopy images, is similar to or better than that of several algorithms specifically designed for each of these applications.

  9. Decision curve analysis and external validation of the postoperative Karakiewicz nomogram for renal cell carcinoma based on a large single-center study cohort.

    PubMed

    Zastrow, Stefan; Brookman-May, Sabine; Cong, Thi Anh Phuong; Jurk, Stanislaw; von Bar, Immanuel; Novotny, Vladimir; Wirth, Manfred

    2015-03-01

    To predict outcome of patients with renal cell carcinoma (RCC) who undergo surgical therapy, risk models and nomograms are valuable tools. External validation on independent datasets is crucial for evaluating accuracy and generalizability of these models. The objective of the present study was to externally validate the postoperative nomogram developed by Karakiewicz et al. for prediction of cancer-specific survival. A total of 1,480 consecutive patients with a median follow-up of 82 months (IQR 46-128) were included into this analysis with 268 RCC-specific deaths. Nomogram-estimated survival probabilities were compared with survival probabilities of the actual cohort, and concordance indices were calculated. Calibration plots and decision curve analyses were used for evaluating calibration and clinical net benefit of the nomogram. Concordance between predictions of the nomogram and survival rates of the cohort was 0.911 after 12, 0.909 after 24 months and 0.896 after 60 months. Comparison of predicted probabilities and actual survival estimates with calibration plots showed an overestimation of tumor-specific survival based on nomogram predictions of high-risk patients, although calibration plots showed a reasonable calibration for probability ranges of interest. Decision curve analysis showed a positive net benefit of nomogram predictions for our patient cohort. The postoperative Karakiewicz nomogram provides a good concordance in this external cohort and is reasonably calibrated. It may overestimate tumor-specific survival in high-risk patients, which should be kept in mind when counseling patients. A positive net benefit of nomogram predictions was proven.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacFarlane, Michael; Battista, Jerry; Chen, Jeff

    Purpose: To develop a radiotherapy dose tracking and plan evaluation technique using cone-beam computed tomography (CBCT) images. Methods: We developed a patient-specific method of calibrating CBCT image sets for dose calculation. The planning CT was first registered with the CBCT using deformable image registration (DIR). A scatter plot was generated between the CT numbers of the planning CT and the CBCT for each slice. The CBCT calibration curve was obtained by least-squares fitting of the data and applied to each CBCT slice. The calibrated CBCT was then merged with the original planning CT to extend the small field of view of the CBCT. Finally, the treatment plan was copied to the merged CT for dose tracking and plan evaluation. The proposed patient-specific calibration method was also compared with two methods proposed in the literature. To evaluate the accuracy of each technique, 15 head-and-neck patients requiring plan adaptation were arbitrarily selected from our institution. The original plan was calculated on each method's data set, including a second planning CT acquired within 48 hours of the CBCT (serving as the gold standard). Clinically relevant dose metrics and 3D gamma analysis of dose distributions were compared between the different techniques. Results: Compared with the gold standard of using planning CTs, the patient-specific CBCT calibration method was shown to provide promising results, with gamma pass rates above 95% and average dose metric agreement within 2.5%. Conclusions: The patient-specific CBCT calibration method could potentially be used for on-line dose tracking and plan evaluation, without requiring a re-planning CT session.
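
    A sketch of the per-slice linear fit described in the Methods above, on synthetic arrays standing in for registered image volumes:

      import numpy as np

      def calibrate_cbct(cbct, plan_ct):
          """cbct, plan_ct: registered 3-D arrays of shape (slices, rows, cols)."""
          out = np.empty_like(cbct, dtype=float)
          for k in range(cbct.shape[0]):
              x, y = cbct[k].ravel(), plan_ct[k].ravel()
              slope, intercept = np.polyfit(x, y, 1)   # per-slice linear fit
              out[k] = slope * cbct[k] + intercept
          return out

      rng = np.random.default_rng(0)
      plan = rng.normal(0, 300, size=(10, 64, 64))            # mock planning CT (HU)
      cbct = 0.9 * plan + 40 + rng.normal(0, 25, plan.shape)  # scaled, offset CBCT
      calibrated = calibrate_cbct(cbct, plan)
      print("residual HU error:", np.abs(calibrated - plan).mean())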

  11. Self-calibration performance in stereoscopic PIV acquired in a transonic wind tunnel

    DOE PAGES

    Beresh, Steven J.; Wagner, Justin L.; Smith, Barton L.

    2016-03-16

    Three stereoscopic PIV experiments have been examined to test the effectiveness of self-calibration under varied circumstances. Measurements taken in a streamwise plane yielded a robust self-calibration that returned common results regardless of the specific calibration procedure, but measurements in the crossplane exhibited substantial velocity bias errors whose nature was sensitive to the particulars of the self-calibration approach. Self-calibration is complicated by thick laser sheets and large stereoscopic camera angles and further exacerbated by small particle image diameters and high particle seeding density. In spite of the different answers obtained by varied self-calibrations, each implementation locked onto an apparently valid solution with small residual disparity and converged adjustment of the calibration plane. Thus, the convergence of self-calibration on a solution with small disparity is not sufficient to indicate negligible velocity error due to the stereo calibration.

  12. Definition of energy-calibrated spectra for national reachback

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kunz, Christopher L.; Hertz, Kristin L.

    2014-01-01

    Accurate energy calibration is critical for the timeliness and accuracy of analysis results of spectra submitted to National Reachback, particularly for the detection of threat items. Many spectra submitted for analysis include either a calibration spectrum using 137Cs or no calibration spectrum at all. The single line provided by 137Cs is insufficient to adequately calibrate nonlinear spectra. A calibration source that provides several lines that are well spaced, from the low-energy cutoff to the full energy range of the detector, is needed for a satisfactory energy calibration. This paper defines the requirements of an energy calibration for the purposes of National Reachback, outlines a method to validate whether a given spectrum meets that definition, discusses general source considerations, and provides a specific operating procedure for calibrating the GR-135.
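
    A sketch of the multi-line calibration the paper calls for: fit a quadratic channel-to-energy relation to several well-spaced peaks and check the residuals. The centroids and line energies below are invented for illustration (roughly 152Eu-like), not values from the paper:

      import numpy as np

      centroid_channels = np.array([121.0, 344.5, 778.3, 1112.0, 1408.2])
      line_energies_keV = np.array([121.8, 344.3, 778.9, 1112.1, 1408.0])

      coeffs = np.polyfit(centroid_channels, line_energies_keV, 2)  # E(ch) in keV
      energy = np.poly1d(coeffs)

      residuals = line_energies_keV - energy(centroid_channels)
      print("fit residuals (keV):", np.round(residuals, 2))  # validation check
      print("E(channel 500) =", round(energy(500.0), 1), "keV")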

  13. Camera Geo-calibration Using an MCMC Approach (Author’s Manuscript)

    DTIC Science & Technology

    2016-08-19

    calibration problem that supports priors over camera parameters and constraints that relate image annotations, camera geometry, and a geographic … The proposal distribution factorizes over dimensions as q(Θ_{i+1} | Θ_i) = ∏_{j=1}^{N_dim} (1/σ_j) φ((Θ_{i+1,j} − Θ_{i,j}) / σ_j), where σ_j denotes the sampling step size on the j-th dimension, and φ(x) denotes the PDF of the standard normal distribution.
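
    A sketch of the random-walk Metropolis sampler implied by that proposal: an independent Gaussian step of width σ_j on each camera-parameter dimension. The log-posterior below is a toy stand-in for one built from priors and annotation constraints, and all scales are invented:

      import numpy as np

      rng = np.random.default_rng(0)

      def metropolis(log_post, theta0, sigmas, n_iter=10000):
          theta = np.asarray(theta0, dtype=float)
          lp = log_post(theta)
          samples = []
          for _ in range(n_iter):
              proposal = theta + sigmas * rng.normal(size=theta.size)
              lp_new = log_post(proposal)
              if np.log(rng.random()) < lp_new - lp:   # accept/reject step
                  theta, lp = proposal, lp_new
              samples.append(theta.copy())
          return np.array(samples)

      # Toy posterior over (focal length, tilt) with invented scales.
      log_post = lambda t: -0.5 * ((t[0] - 1000) / 50) ** 2 - 0.5 * (t[1] / 5) ** 2
      chain = metropolis(log_post, [900.0, 0.0], sigmas=np.array([20.0, 1.0]))
      print(chain[2000:].mean(axis=0))   # ~[1000, 0] after burn-in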

  14. GPS Disciplined Oscillators for Traceability to the Italian Time Standard

    NASA Technical Reports Server (NTRS)

    Cordara, Franco; Pettiti, Valerio

    1996-01-01

    The Istituto Elettrotecnico Nazionale (IEN) is one of the Italian primary institutes responsible for the accreditation of secondary laboratories belonging to the national calibration system (SNT) established by law in 1991. The Time and Frequency Department, which has accredited 14 frequency calibration centers in this framework, also performs remote calibration of their reference oscillators by means of different synchronization systems. The problem of establishing the traceability to the Italian national time standard of Global Positioning System (GPS) disciplined oscillators has been investigated, and the results obtained are reported.

  15. Calibrating and adjusting expectations in life: A grounded theory on how elderly persons with somatic health problems maintain control and balance in life and optimize well-being

    PubMed Central

    Helvik, Anne-Sofie; Iversen, Valentina Cabral; Steiring, Randi; Hallberg, Lillemor R-M

    2011-01-01

    Aim This study aims at exploring the main concern for elderly individuals with somatic health problems and what they do to manage this. Method In total, 14 individuals (mean = 74.2 years; range = 68–86 years) of both genders, including hospitalized and outpatient persons, participated in the study. Open interviews were conducted and analyzed according to grounded theory, an inductive theory-generating method. Results The main concern for the elderly individuals with somatic health problems was identified as their striving to maintain control and balance in life. The analysis resulted in a substantive theory explaining how elderly individuals with somatic disease calibrate and adjust their expectations in life in order to adapt to their reduced energy level, health problems, and aging. By adjusting their expectations to their actual abilities, the elderly can maintain a sense that they still have control over their lives and can create stability. The ongoing adjustment process is facilitated by different strategies and results, despite lowered expectations, in subjective well-being. The facilitating strategies are utilizing the network of important others, enjoying cultural heritage, being occupied with interests, having a mission to fulfill, improving the situation by limiting boundaries and, finally, creating meaning in everyday life. Conclusion The main concern of the elderly with somatic health problems was to maintain control and balance in life. The emerging theory explains how elderly people with somatic health problems calibrate their expectations of life in order to adjust to reduced energy, health problems, and aging. This process is facilitated by different strategies and results, despite lowered expectations, in subjective well-being. PMID:21468299

  16. Ring Laser Gyro G-Sensitive Misalignment Calibration in Linear Vibration Environments.

    PubMed

    Wang, Lin; Wu, Wenqi; Li, Geng; Pan, Xianfei; Yu, Ruihang

    2018-02-16

    The ring laser gyro (RLG) dither axis will bend and exhibit errors due to the specific forces acting on the instrument, which are known as g-sensitive misalignments of the gyros. The g-sensitive misalignments of the RLG triad will cause severe attitude errors in vibration or maneuver environments where large-amplitude specific forces and angular rates coexist. However, g-sensitive misalignments are usually ignored when calibrating the strapdown inertial navigation system (SINS). This paper proposes a novel method to calibrate the g-sensitive misalignments of an RLG triad in linear vibration environments. With the SINS attached to a linear vibration bench through outer rubber dampers, rocking of the SINS occurs when linear vibration is applied to it. Linear vibration environments can therefore be created to simulate the harsh environment of aircraft flight. By analyzing the mathematical model of g-sensitive misalignments, the relationship between attitude errors and specific forces as well as angular rates is established, whereby a calibration scheme with approximately optimal observations is designed. Vibration experiments are conducted to calibrate the g-sensitive misalignments of the RLG triad. Vibration tests also show that the SINS velocity error decreases significantly after g-sensitive misalignment compensation.

  17. An automatic calibration procedure for remote eye-gaze tracking systems.

    PubMed

    Model, Dmitri; Guestrin, Elias D; Eizenman, Moshe

    2009-01-01

    Remote gaze estimation systems use calibration procedures to estimate subject-specific parameters that are needed for the calculation of the point-of-gaze. In these procedures, subjects are required to fixate on a specific point or points at specific time instances. Advanced remote gaze estimation systems can estimate the optical axis of the eye without any personal calibration procedure, but use a single calibration point to estimate the angle between the optical axis and the visual axis (line-of-sight). This paper presents a novel automatic calibration procedure that does not require active user participation. To estimate the angles between the optical and visual axes of each eye, this procedure minimizes the distance between the intersections of the visual axes of the left and right eyes with the surface of a display while subjects look naturally at the display (e.g., watching a video clip). Simulation results demonstrate that the performance of the algorithm improves as the range of viewing angles increases. For a subject sitting 75 cm in front of an 80 cm x 60 cm display (40" TV) the standard deviation of the error in the estimation of the angles between the optical and visual axes is 0.5 degrees.
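
    As a toy illustration of the minimization, the sketch below flattens the problem onto the display plane: a constant per-eye offset (standing in for the angle between the optical and visual axes) is fitted by minimizing the distance between the corrected left- and right-eye gaze points. All names and data are hypothetical; the paper's full 3D eye model also removes the common-shift ambiguity that the small regularizer absorbs here.

        import numpy as np
        from scipy.optimize import minimize

        def calibrate_offsets(left_pts, right_pts):
            # left_pts, right_pts: (N, 2) on-screen intersections of the
            # optical axes of the left and right eyes (synthetic stand-ins).
            def cost(p):
                dl, dr = p[:2], p[2:]
                gap = (left_pts + dl) - (right_pts + dr)
                # With correct offsets both eyes indicate the same screen
                # point; the regularizer breaks the common-shift degeneracy
                # of this planar toy version.
                return np.mean(np.linalg.norm(gap, axis=1)) + 1e-3 * np.sum(p ** 2)
            res = minimize(cost, np.zeros(4), method="Nelder-Mead")
            return res.x[:2], res.x[2:]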

  18. 40 CFR Appendix I to Part 92 - Emission Related Locomotive and Engine Parameters and Specifications

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... injection—non-compression ignition engines. a. Control parameters and calibrations. b. Idle mixture. c. Fuel...(s). i. Injector timing calibration. 4. Fuel injection—compression ignition engines. a. Control... restriction. III. Fuel System. 1. General. a. Engine idle speed. 2. Carburetion. a. Air-fuel flow calibration...

  19. 40 CFR Appendix I to Part 92 - Emission Related Locomotive and Engine Parameters and Specifications

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... injection—non-compression ignition engines. a. Control parameters and calibrations. b. Idle mixture. c. Fuel...(s). i. Injector timing calibration. 4. Fuel injection—compression ignition engines. a. Control... restriction. III. Fuel System. 1. General. a. Engine idle speed. 2. Carburetion. a. Air-fuel flow calibration...

  20. 40 CFR Appendix I to Part 92 - Emission Related Locomotive and Engine Parameters and Specifications

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... injection—non-compression ignition engines. a. Control parameters and calibrations. b. Idle mixture. c. Fuel...(s). i. Injector timing calibration. 4. Fuel injection—compression ignition engines. a. Control... restriction. III. Fuel System. 1. General. a. Engine idle speed. 2. Carburetion. a. Air-fuel flow calibration...

  1. 40 CFR Appendix I to Part 92 - Emission Related Locomotive and Engine Parameters and Specifications

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... injection—non-compression ignition engines. a. Control parameters and calibrations. b. Idle mixture. c. Fuel...(s). i. Injector timing calibration. 4. Fuel injection—compression ignition engines. a. Control... restriction. III. Fuel System. 1. General. a. Engine idle speed. 2. Carburetion. a. Air-fuel flow calibration...

  2. Time-gated flow cytometry: an ultra-high selectivity method to recover ultra-rare-event μ-targets in high-background biosamples

    NASA Astrophysics Data System (ADS)

    Jin, Dayong; Piper, James A.; Leif, Robert C.; Yang, Sean; Ferrari, Belinda C.; Yuan, Jingli; Wang, Guilan; Vallarino, Lidia M.; Williams, John W.

    2009-03-01

    A fundamental problem for rare-event cell analysis is auto-fluorescence from nontarget particles and cells. Time-gated flow cytometry is based on the temporal-domain discrimination of long-lifetime (>1 μs) luminescence-stained cells and can render invisible all nontarget cells and particles. We aim to further evaluate the technique, focusing on the detection of ultra-rare-event 5-μm calibration beads in environmental water samples containing dirt. Europium-labeled 5-μm calibration beads with improved luminescence homogeneity and reduced aggregation were evaluated using the prototype UV LED excited time-gated luminescence (TGL) flow cytometer (FCM). A BD FACSAria flow cytometer was used to accurately sort a very low number of beads (<100 events), which were then spiked into concentrated samples of environmental water. The use of europium-labeled beads permitted the demonstration of specific detection rates of 100% ± 30% and 91% ± 3% with 10 and 100 target beads, respectively, mixed with over one million nontarget autofluorescent background particles. Under the same conditions, a conventional FCM was unable to recover rare-event fluorescein isothiocyanate (FITC) calibration beads. Preliminary results on Giardia detection are also reported. We have demonstrated the scientific value of lanthanide-complex biolabels in flow cytometry. This approach may augment the current method that uses multifluorescence-channel flow cytometry gating.

  3. Distance error correction for time-of-flight cameras

    NASA Astrophysics Data System (ADS)

    Fuersattel, Peter; Schaller, Christian; Maier, Andreas; Riess, Christian

    2017-06-01

    The measurement accuracy of time-of-flight cameras is limited by properties of the scene and by systematic errors. These errors can accumulate to multiple centimeters, which may limit the applicability of these range sensors. In the past, different approaches have been proposed for improving the accuracy of these cameras. In this work, we propose a new method that improves two important aspects of the range calibration. First, we propose a new checkerboard which is augmented by a gray-level gradient. With this addition it becomes possible to capture the calibration features for intrinsic and distance calibration at the same time. The gradient strip allows a large number of distance measurements to be acquired for different surface reflectivities, which results in more meaningful training data. Second, we present multiple new features which are used as input to a random forest regressor. By using random regression forests, we circumvent the problem of finding an accurate model for the measurement error. During application, a correction value for each individual pixel is estimated with the trained forest, based on a specifically tailored feature vector. With our approach the measurement error can be reduced by more than 40% for the Mesa SR4000 and by more than 30% for the Microsoft Kinect V2. In our evaluation we also investigate the impact of the individual forest parameters and illustrate the importance of the individual features.
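
    A minimal sketch of the correction stage, with a hypothetical three-component feature vector standing in for the paper's specifically tailored features:

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        # Stand-in per-pixel features: measured distance, amplitude, radial
        # pixel position; the target is the known distance error.
        X = rng.uniform(size=(5000, 3))
        y = 0.05 * X[:, 0] - 0.02 * X[:, 1] + 0.01 * rng.standard_normal(5000)

        forest = RandomForestRegressor(n_estimators=100, min_samples_leaf=5)
        forest.fit(X, y)

        # At run time a correction is predicted per pixel and subtracted.
        measured = rng.uniform(size=(10, 3))
        corrected_depth = measured[:, 0] - forest.predict(measured)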

  4. NIST/ISAC standardization study: variability in assignment of intensity values to fluorescence standard beads and in cross calibration of standard beads to hard dyed beads.

    PubMed

    Hoffman, Robert A; Wang, Lili; Bigos, Martin; Nolan, John P

    2012-09-01

    Results from a standardization study cosponsored by the International Society for Advancement of Cytometry (ISAC) and the US National Institute of Standards and Technology (NIST) are reported. The study evaluated the variability of assigning intensity values to fluorophore standard beads by bead manufacturers and the variability of cross calibrating the standard beads to stained polymer beads (hard-dyed beads) using different flow cytometers. Hard dyed beads are generally not spectrally matched to the fluorophores used to stain cells, and spectral response varies among flow cytometers. Thus if hard dyed beads are used as fluorescence calibrators, one expects calibration for specific fluorophores (e.g., FITC or PE) to vary among different instruments. Using standard beads surface-stained with specific fluorophores (FITC, PE, APC, and Pacific Blue™), the study compared the measured intensity of fluorophore standard beads to that of hard dyed beads through cross calibration on 133 different flow cytometers. Using robust CV as a measure of variability, the variation of cross calibrated values was typically 20% or more for a particular hard dyed bead in a specific detection channel. The variation across different instrument models was often greater than the variation within a particular instrument model. As a separate part of the study, NIST and four bead manufacturers used a NIST supplied protocol and calibrated fluorophore solution standards to assign intensity values to the fluorophore beads. Values assigned to the reference beads by different groups varied by orders of magnitude in most cases, reflecting differences in instrumentation used to perform the calibration. The study concluded that the use of any spectrally unmatched hard dyed bead as a general fluorescence calibrator must be verified and characterized for every particular instrument model. Close interaction between bead manufacturers and NIST is recommended to have reliable and uniformly assigned fluorescence standard beads. Copyright © 2012 International Society for Advancement of Cytometry.
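
    For reference, a small sketch of the robust CV used as the variability measure, under one common flow-cytometry definition (half the 16th-84th percentile spread, a robust sigma, divided by the median); the study's exact estimator may differ:

        import numpy as np

        def robust_cv(values):
            # Robust coefficient of variation in percent.
            p16, p84 = np.percentile(values, [15.87, 84.13])
            return 100.0 * 0.5 * (p84 - p16) / np.median(values)

        cross_cal = np.random.default_rng(2).lognormal(0.0, 0.2, size=133)
        print(f"robust CV = {robust_cv(cross_cal):.1f}%")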

  5. Smart Sensor Node Development, Testing and Implementation for Rocket Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Mengers, Timothy R.; Shipley, John; Merrill, Richard; Eggett, Leon; Johnson, Mont; Morris, Jonathan; Figueroa, Fernando; Schmalzel, John; Turowski, Mark P.

    2007-01-01

    Successful design and implementation of an Integrated System Health Management (ISHM) approach for rocket propulsion systems requires the capability to improve the reliability of complex systems by detecting and diagnosing problems. One of the critical elements in the ISHM is an intelligent sensor node for data acquisition that meets specific requirements for rocket motor testing, including accuracy, sample rate and size/weight. Traditional data acquisition systems are calibrated in a controlled environment and guaranteed to perform within their tested conditions. In a real-world ISHM system, the data acquisition and signal conditioning need to function in an uncontrolled environment. Development and testing of this sensor node focuses on a design with the ability to self-check in order to extend calibration intervals, report internal faults and drifts, and notify the overall system when the data acquisition is not performing as it should. All of this will be designed within a system that is flexible, requiring little re-design to be deployed on a wide variety of systems. Progress in this design and initial testing of prototype units will be reported.

  6. MATE: Machine Learning for Adaptive Calibration Template Detection

    PubMed Central

    Donné, Simon; De Vylder, Jonas; Goossens, Bart; Philips, Wilfried

    2016-01-01

    The problem of camera calibration is two-fold. On the one hand, the parameters are estimated from known correspondences between the captured image and the real world. On the other, these correspondences themselves—typically in the form of chessboard corners—need to be found. Many distinct approaches for this feature template extraction are available, often of large computational and/or implementational complexity. We exploit the generalized nature of deep learning networks to detect checkerboard corners: our proposed method is a convolutional neural network (CNN) trained on a large set of example chessboard images, which generalizes several existing solutions. The network is trained explicitly against noisy inputs, as well as inputs with large degrees of lens distortion. The trained network that we evaluate is as accurate as existing techniques while offering improved execution time and increased adaptability to specific situations with little effort. The proposed method is not only robust against the types of degradation present in the training set (lens distortions, and large amounts of sensor noise), but also to perspective deformations, e.g., resulting from multi-camera set-ups. PMID:27827920
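
    A toy fully convolutional network in the same spirit (not the authors' architecture): it maps a grayscale image to a per-pixel corner-probability heatmap, which a peak detector would then reduce to corner coordinates.

        import torch
        import torch.nn as nn

        class CornerNet(nn.Module):
            """Scores every pixel for checkerboard-corner likelihood."""
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 1, 1),  # per-pixel corner logit
                )
            def forward(self, x):
                return torch.sigmoid(self.features(x))

        net = CornerNet()
        heatmap = net(torch.randn(1, 1, 240, 320))  # corner-probability map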

  7. Advances in spectroscopic methods for quantifying soil carbon

    USGS Publications Warehouse

    Reeves, James B.; McCarty, Gregory W.; Calderon, Francisco; Hively, W. Dean

    2012-01-01

    The current gold standard for soil carbon (C) determination is elemental C analysis using dry combustion. However, this method requires expensive consumables, is limited by the number of samples that can be processed (~100/d), and is restricted to the determination of total carbon. With increased interest in soil C sequestration, faster methods of analysis are needed, and there is growing interest in methods based on diffuse reflectance spectroscopy in the visible, near-infrared or mid-infrared spectral ranges. These spectral methods can decrease analytical requirements and speed sample processing, be applied to large landscape areas using remote sensing imagery, and be used to predict multiple analytes simultaneously. However, the methods require localized calibrations to establish the relationship between spectral data and reference analytical data, and also have additional, specific problems. For example, remote sensing is capable of scanning entire watersheds for soil carbon content but is limited to the surface layer of tilled soils and may require difficult and extensive field sampling to obtain proper localized calibration reference values. The objective of this chapter is to discuss the present state of spectroscopic methods for determination of soil carbon.

  8. Scheduling and calibration strategy for continuous radio monitoring of 1700 sources every three days

    NASA Astrophysics Data System (ADS)

    Max-Moerbeck, Walter

    2014-08-01

    The Owens Valley Radio Observatory 40 meter telescope is currently monitoring a sample of about 1700 blazars every three days at 15 GHz, with the main scientific goal of determining the relation between the variability of blazars at radio and gamma-rays as observed with the Fermi Gamma-ray Space Telescope. The time domain relation between radio and gamma-ray emission, in particular its correlation and time lag, can help us determine the location of the high-energy emission site in blazars, a current open question in blazar research. To achieve this goal, continuous observation of a large sample of blazars on a time scale of less than a week is indispensable. Since we only look at bright targets, the time available for target observations is mostly limited by source observability, calibration requirements and slewing of the telescope. Here I describe the implementation of a practical solution to this scheduling, calibration, and slewing time minimization problem. This solution combines ideas from optimization, in particular the traveling salesman problem, with astronomical and instrumental constraints. A heuristic solution, using well established optimization techniques and astronomical insights particular to this situation, allows us to observe all the sources at the required three-day cadence while obtaining reliable calibration of the radio flux densities. Problems of this nature will only become more common in the future, and the ideas presented here can be relevant for other observing programs.
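
    A minimal sketch of the greedy (nearest-neighbour) heuristic at the core of such slew-time minimization, ignoring the observability and calibration constraints handled in the actual scheduler:

        import numpy as np

        def greedy_schedule(slew_time, start=0):
            # slew_time[i, j]: telescope slew time between sources i and j.
            n = slew_time.shape[0]
            order, visited = [start], {start}
            while len(order) < n:
                last = order[-1]
                # Observe the closest not-yet-visited source next.
                nxt = min((j for j in range(n) if j not in visited),
                          key=lambda j: slew_time[last, j])
                order.append(nxt)
                visited.add(nxt)
            return order

        rng = np.random.default_rng(1)
        pts = rng.uniform(size=(20, 2))                      # toy sky positions
        dists = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
        print(greedy_schedule(dists))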

  9. Non-convex optimization for self-calibration of direction-dependent effects in radio interferometric imaging

    NASA Astrophysics Data System (ADS)

    Repetti, Audrey; Birdi, Jasleen; Dabbech, Arwa; Wiaux, Yves

    2017-10-01

    Radio interferometric imaging aims to estimate an unknown sky intensity image from degraded observations, acquired through an antenna array. In the theoretical case of a perfectly calibrated array, it has been shown that solving the corresponding imaging problem by iterative algorithms based on convex optimization and compressive sensing theory can be competitive with classical algorithms such as clean. However, in practice, antenna-based gains are unknown and have to be calibrated. Future radio telescopes, such as the Square Kilometre Array, aim at improving imaging resolution and sensitivity by orders of magnitude. At this precision level, the direction-dependency of the gains must be accounted for, and radio interferometric imaging can be understood as a blind deconvolution problem. In this context, the underlying minimization problem is non-convex, and adapted techniques have to be designed. In this work, leveraging recent developments in non-convex optimization, we propose the first joint calibration and imaging method in radio interferometry with proven convergence guarantees. Our approach, based on a block-coordinate forward-backward algorithm, jointly accounts for visibilities and suitable priors on both the image and the direction-dependent effects (DDEs). As demonstrated in recent works, sparsity remains the prior of choice for the image, while DDEs are modelled as smooth functions of the sky, i.e., spatially band-limited. Finally, we show through simulations the efficiency of our method for the reconstruction of both images of point sources and complex extended sources. MATLAB code is available on GitHub.
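
    The imaging half of such a block-coordinate scheme can be sketched as a forward-backward (proximal gradient) iteration for a sparsity-regularized least-squares problem; here the calibration-dependent operator A is held fixed, and all names are illustrative:

        import numpy as np

        def soft_threshold(x, t):
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def forward_backward(A, y, lam, n_iter=200):
            # Solves min_x 0.5*||A x - y||^2 + lam*||x||_1.
            x = np.zeros(A.shape[1])
            step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
            for _ in range(n_iter):
                grad = A.T @ (A @ x - y)                         # forward step
                x = soft_threshold(x - step * grad, step * lam)  # backward step
            return x

        rng = np.random.default_rng(0)
        A = rng.standard_normal((60, 120))
        x_true = np.zeros(120); x_true[[5, 40]] = [3.0, -2.0]
        x_hat = forward_backward(A, A @ x_true, lam=0.1)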

  10. The effects of temperature and diet on age grading and population age structure determination in Drosophila.

    PubMed

    Aw, Wen C; Ballard, J William O

    2013-10-01

    The age structure of a natural population is of interest in physiological, life history and ecological studies, but it is often difficult to determine. One methodological problem is that specimens may need to be invasively sampled, preventing subsequent taxonomic curation. A second problem is that it can be very expensive to accurately determine the age structure of a given population, because large sample sizes are often necessary. In this study, we test the effects of temperature (17 °C, 23 °C and 26 °C) and diet (standard cornmeal and low calorie diet) on the accuracy of the non-invasive, inexpensive and high throughput near-infrared spectroscopy (NIRS) technique to determine the age of Drosophila flies. Composite and simplified calibration models were developed for each sex. Independent sets for each temperature and diet treatment, consisting of flies not used in the calibration models, were then used to validate the accuracy of the calibration models. The composite NIRS calibration model was generated by including flies reared under all temperatures and diets. This approach permits rapid age measurement and age structure determination in large populations of flies as less than or equal to 9 days, or more than 9 days old, with 85-97% and 64-99% accuracy, respectively. The simplified calibration models were generated by including flies reared at 23 °C on the standard diet. Low accuracy rates were observed when simplified calibration models were used to identify (a) Drosophila reared at 17 °C and 26 °C and (b) flies reared at 23 °C on the low calorie diet. These results strongly suggest that appropriate calibration models need to be developed in the laboratory before this technique can be reliably used in the field. These calibration models should include the major environmental variables that change across space and time in the particular natural population to be studied. Copyright © 2013 Elsevier Ltd. All rights reserved.
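
    Calibrations of this kind are commonly built with partial least squares (PLS) regression on the spectra; a sketch on synthetic stand-in data (the paper's exact chemometric model is not reproduced here):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(3)
        spectra = rng.standard_normal((300, 200))   # rows: flies, cols: wavelengths
        age = spectra[:, :5].sum(axis=1) + 0.5 * rng.standard_normal(300)

        X_tr, X_te, y_tr, y_te = train_test_split(spectra, age, random_state=0)
        pls = PLSRegression(n_components=10).fit(X_tr, y_tr)
        pred_age = pls.predict(X_te).ravel()
        # Flies would then be classed as <=9 d or >9 d by thresholding pred_age.
        print("held-out R^2:", pls.score(X_te, y_te))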

  11. A Hierarchical Bayesian Model for Calibrating Estimates of Species Divergence Times

    PubMed Central

    Heath, Tracy A.

    2012-01-01

    In Bayesian divergence time estimation methods, incorporating calibrating information from the fossil record is commonly done by assigning prior densities to ancestral nodes in the tree. Calibration prior densities are typically parametric distributions offset by minimum age estimates provided by the fossil record. Specification of the parameters of calibration densities requires the user to quantify his or her prior knowledge of the age of the ancestral node relative to the age of its calibrating fossil. The values of these parameters can, potentially, result in biased estimates of node ages if they lead to overly informative prior distributions. Accordingly, determining parameter values that lead to adequate prior densities is not straightforward. In this study, I present a hierarchical Bayesian model for calibrating divergence time analyses with multiple fossil age constraints. This approach applies a Dirichlet process prior as a hyperprior on the parameters of calibration prior densities. Specifically, this model assumes that the rate parameters of exponential prior distributions on calibrated nodes are distributed according to a Dirichlet process, whereby the rate parameters are clustered into distinct parameter categories. Both simulated and biological data are analyzed to evaluate the performance of the Dirichlet process hyperprior. Compared with fixed exponential prior densities, the hierarchical Bayesian approach results in more accurate and precise estimates of internal node ages. When this hyperprior is applied using Markov chain Monte Carlo methods, the ages of calibrated nodes are sampled from mixtures of exponential distributions and uncertainty in the values of calibration density parameters is taken into account. PMID:22334343
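
    A sketch of the truncated stick-breaking construction behind such a Dirichlet process hyperprior, drawing clustered exponential-rate parameters for the calibrated nodes (hypothetical base measure; in the paper this sits inside an MCMC sampler rather than a single forward draw):

        import numpy as np

        def dp_rate_draws(n_nodes, alpha=1.0, seed=0, trunc=100):
            rng = np.random.default_rng(seed)
            # Stick-breaking weights of the Dirichlet process.
            betas = rng.beta(1.0, alpha, size=trunc)
            weights = betas * np.concatenate(([1.0], np.cumprod(1.0 - betas[:-1])))
            # Atoms drawn from a (hypothetical) gamma base measure.
            atoms = rng.gamma(2.0, 1.0, size=trunc)
            # Each calibrated node is assigned one of the shared rates,
            # so nodes cluster into distinct parameter categories.
            idx = rng.choice(trunc, size=n_nodes, p=weights / weights.sum())
            return atoms[idx]

        print(dp_rate_draws(10))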

  12. Evaluation of calibration efficacy under different levels of uncertainty

    DOE PAGES

    Heo, Yeonsook; Graziano, Diane J.; Guzowski, Leah; ...

    2014-06-10

    This study examines how calibration performs under different levels of uncertainty in model input data. It specifically assesses the efficacy of Bayesian calibration to enhance the reliability of EnergyPlus model predictions. A Bayesian approach can be used to update uncertain values of parameters, given measured energy-use data, and to quantify the associated uncertainty. We assess the efficacy of Bayesian calibration under a controlled virtual-reality setup, which enables rigorous validation of the accuracy of calibration results in terms of both calibrated parameter values and model predictions. Case studies demonstrate the performance of Bayesian calibration of base models developed from audit data with differing levels of detail in building design, usage, and operation.

  13. Video auto stitching in multicamera surveillance system

    NASA Astrophysics Data System (ADS)

    He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang

    2012-01-01

    This paper concerns the problem of automatically stitching video in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaicking in large-scale monitoring applications. In this work, we formulate video stitching as a multi-image registration and blending problem; not all cameras need to be calibrated, only a few selected master cameras. SURF is used to find matched pairs of image key points from different cameras, and then the camera pose is estimated and refined. A homography matrix is employed to calculate overlapping pixels, and finally a boundary resampling algorithm is implemented to blend the images. Simulation results demonstrate the efficiency of our method.
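
    A minimal sketch of this keypoint-matching and homography pipeline in OpenCV, with ORB standing in for SURF (SURF is only distributed with opencv-contrib):

        import cv2
        import numpy as np

        def stitch_pair(img1, img2):
            orb = cv2.ORB_create(2000)      # ORB substitutes for SURF here
            k1, d1 = orb.detectAndCompute(img1, None)
            k2, d2 = orb.detectAndCompute(img2, None)
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]
            src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            # RANSAC homography maps overlapping pixels of img1 into img2's frame.
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
            h, w = img2.shape[:2]
            return cv2.warpPerspective(img1, H, (2 * w, h))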

  15. Automatic Astrometric and Photometric Calibration with SCAMP

    NASA Astrophysics Data System (ADS)

    Bertin, E.

    2006-07-01

    Astrometric and photometric calibrations have remained the most tiresome step in the reduction of large imaging surveys. I present a new software package, SCAMP, which has been written to address this problem. SCAMP efficiently computes accurate astrometric and photometric solutions for any arbitrary sequence of FITS images in a completely automatic way. SCAMP is released under the GNU General Public License.

  16. 40 CFR 86.126-90 - Calibration of other equipment.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... according to good practice. Specific equipment requiring calibration are the gas chromatograph and flame ionization detector used in measuring methanol and the high pressure liquid chromatograph (HPLC) and...

  17. 40 CFR 86.526-90 - Calibration of other equipment.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... necessary according to good practice. Specific equipment requiring calibration is the gas chromatograph and flame ionization detector used in measuring methanol and the high pressure liquid chromatograph (HPLC...

  18. 40 CFR 86.126-90 - Calibration of other equipment.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... according to good practice. Specific equipment requiring calibration are the gas chromatograph and flame ionization detector used in measuring methanol and the high pressure liquid chromatograph (HPLC) and...

  19. 40 CFR 86.526-90 - Calibration of other equipment.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... necessary according to good practice. Specific equipment requiring calibration is the gas chromatograph and flame ionization detector used in measuring methanol and the high pressure liquid chromatograph (HPLC...

  20. Recent Loads Calibration Experience With a Delta Wing Airplane

    NASA Technical Reports Server (NTRS)

    Jenkins, Jerald M.; Kuhl, Albert E.

    1977-01-01

    Aircraft designed for supersonic and hypersonic flight are evolving with delta wing configurations. An integral part of the evolution of all new aircraft is the flight test phase, which includes an effort to identify and evaluate the loads environment of the aircraft. The most effective way of examining the loads environment is to utilize calibrated strain gages to provide load magnitudes. Using strain gage data to accomplish this has proven to be anything but straightforward. The delta wing configuration is a very difficult type of wing structure to calibrate. Elevated structural temperatures result in thermal effects which contaminate the strain gage data used to deduce flight loads. The concept of thermally calibrating a strain gage system is one approach to solving this problem. This paper addresses how these problems were approached in a program directed toward measuring loads on the wing of a large, flexible supersonic aircraft. Structural configurations typical of high-speed delta wing aircraft are examined. The temperature environment is examined to see how it induces thermal stresses which subsequently cause errors in the load equations used to deduce the flight loads.
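
    The core of such a loads calibration is a least-squares load equation relating applied ground-calibration loads to strain gage outputs; a simplified sketch on synthetic data (the thermal-effect corrections discussed in the paper are omitted):

        import numpy as np

        rng = np.random.default_rng(4)
        # Each row: outputs of 4 strain-gage bridges for one applied load case.
        gage_outputs = rng.standard_normal((60, 4))
        true_coeffs = np.array([1.5, -0.7, 2.2, 0.4])
        applied_load = gage_outputs @ true_coeffs + 0.05 * rng.standard_normal(60)

        # Load equation: load as a linear combination of gage outputs.
        coeffs, *_ = np.linalg.lstsq(gage_outputs, applied_load, rcond=None)
        flight_loads = gage_outputs[:5] @ coeffs   # deduce loads from flight strains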

  1. Exploiting semantics for sensor re-calibration in event detection systems

    NASA Astrophysics Data System (ADS)

    Vaisenberg, Ronen; Ji, Shengyue; Hore, Bijit; Mehrotra, Sharad; Venkatasubramanian, Nalini

    2008-01-01

    Event detection from a video stream is becoming an important and challenging task in surveillance and sentient systems. While computer vision has been extensively studied to solve different kinds of detection problems over time, it is still a hard problem, and even in a controlled environment only simple events can be detected with a high degree of accuracy. Instead of struggling to improve event detection using image processing only, we bring in semantics to direct traditional image processing. Semantics are the underlying facts that hide beneath video frames, which cannot be "seen" directly by image processing. In this work we demonstrate that time sequence semantics can be exploited to guide unsupervised re-calibration of the event detection system. We present an instantiation of our ideas using an appliance as an example--coffee pot level detection based on video data--to show that semantics can guide the re-calibration of the detection model. This work exploits time sequence semantics to detect when re-calibration is required, to automatically relearn a new detection model for the newly evolved system state, and to resume monitoring with a higher rate of accuracy.

  2. Lunar Spectral Irradiance and Radiance (LUSI): New Instrumentation to Characterize the Moon as a Space-Based Radiometric Standard

    PubMed Central

    Smith, Allan W.; Lorentz, Steven R.; Stone, Thomas C.; Datla, Raju V.

    2012-01-01

    The need to understand and monitor climate change has led to proposed radiometric accuracy requirements for space-based remote sensing instruments that are very stringent and currently outside the capabilities of many Earth orbiting instruments. A major problem is quantifying changes in sensor performance that occur from launch and during the mission. To address this problem on-orbit calibrators and monitors have been developed, but they too can suffer changes from launch and the harsh space environment. One solution is to use the Moon as a calibration reference source. Already the Moon has been used to remove post-launch drift and to cross-calibrate different instruments, but further work is needed to develop a new model with low absolute uncertainties capable of climate-quality absolute calibration of Earth observing instruments on orbit. To this end, we are proposing an Earth-based instrument suite to measure the absolute lunar spectral irradiance to an uncertainty (1) of 0.5 % (k=1) over the spectral range from 320 nm to 2500 nm with a spectral resolution of approximately 0.3 %. Absolute measurements of lunar radiance will also be acquired to facilitate calibration of high spatial resolution sensors. The instruments will be deployed at high elevation astronomical observatories and flown on high-altitude balloons in order to mitigate the effects of the Earth’s atmosphere on the lunar observations. Periodic calibrations using instrumentation and techniques available from NIST will ensure traceability to the International System of Units (SI) and low absolute radiometric uncertainties. PMID:26900523

  3. Lunar Spectral Irradiance and Radiance (LUSI): New Instrumentation to Characterize the Moon as a Space-Based Radiometric Standard.

    PubMed

    Smith, Allan W; Lorentz, Steven R; Stone, Thomas C; Datla, Raju V

    2012-01-01

    The need to understand and monitor climate change has led to proposed radiometric accuracy requirements for space-based remote sensing instruments that are very stringent and currently outside the capabilities of many Earth orbiting instruments. A major problem is quantifying changes in sensor performance that occur from launch and during the mission. To address this problem on-orbit calibrators and monitors have been developed, but they too can suffer changes from launch and the harsh space environment. One solution is to use the Moon as a calibration reference source. Already the Moon has been used to remove post-launch drift and to cross-calibrate different instruments, but further work is needed to develop a new model with low absolute uncertainties capable of climate-quality absolute calibration of Earth observing instruments on orbit. To this end, we are proposing an Earth-based instrument suite to measure the absolute lunar spectral irradiance to an uncertainty(1) of 0.5 % (k=1) over the spectral range from 320 nm to 2500 nm with a spectral resolution of approximately 0.3 %. Absolute measurements of lunar radiance will also be acquired to facilitate calibration of high spatial resolution sensors. The instruments will be deployed at high elevation astronomical observatories and flown on high-altitude balloons in order to mitigate the effects of the Earth's atmosphere on the lunar observations. Periodic calibrations using instrumentation and techniques available from NIST will ensure traceability to the International System of Units (SI) and low absolute radiometric uncertainties.

  4. User guide for the USGS aerial camera Report of Calibration.

    USGS Publications Warehouse

    Tayman, W.P.

    1984-01-01

    Calibration and testing of aerial mapping cameras includes the measurement of optical constants and the check for proper functioning of a number of complicated mechanical and electrical parts. For this purpose the US Geological Survey performs an operational type photographic calibration. This paper is not strictly a scientific paper but rather a 'user guide' to the USGS Report of Calibration of an aerial mapping camera for compliance with both Federal and State mapping specifications. -Author

  5. Modeling of sheet metal fracture via cohesive zone model and application to spot welds

    NASA Astrophysics Data System (ADS)

    Wu, Joseph Z.

    Even though the cohesive zone model (CZM) has been widely used to analyze ductile fracture, it is not yet clearly understood how to calibrate the cohesive parameters, including the specific work of separation (the work of separation per unit crack area) and the peak stress. A systematic approach is presented to first determine the cohesive values for sheet metal and then apply the calibrated model to various structural problems, including the failure of spot welds. Al5754-O was chosen for this study since it is not sensitive to heat treatment, so the effect of the heat-affected zone (HAZ) can be ignored. The CZM has been applied to successfully model both mode-I and mode-III fracture for various geometries, including Kahn specimens, single-notch specimens, and deep double-notch specimens for mode-I, and trouser specimens for mode-III. The mode-I fracture of a coach-peel spot-weld nugget and the mixed-mode fracture of nugget pull-out have also been well simulated by the CZM. Using the mode-I average specific work of separation of 13 kJ/m2 identified in a previous work and the mode-III specific work of separation of 38 kJ/m2 found in this thesis, the cohesive peak stress has been determined to range from 285 MPa to 600 MPa for mode-I and from 165 MPa to 280 MPa for mode-III, depending on the degree of plastic deformation. The uncertainty of these cohesive values has also been examined. It is concluded that, if the specific work of separation is a material constant, the peak stress changes with the degree of plastic deformation and is therefore geometry-dependent.

  6. Ethnic Variability in Body Size, Proportions and Composition in Children Aged 5 to 11 Years: Is Ethnic-Specific Calibration of Bioelectrical Impedance Required?

    PubMed Central

    Lee, Simon; Bountziouka, Vassiliki; Lum, Sooky; Stocks, Janet; Bonner, Rachel; Naik, Mitesh; Fothergill, Helen; Wells, Jonathan C. K.

    2014-01-01

    Background Bioelectrical Impedance Analysis (BIA) has the potential to be used widely as a method of assessing body fatness and composition, both in clinical and community settings. BIA provides bioelectrical properties, such as whole-body impedance which ideally needs to be calibrated against a gold-standard method in order to provide accurate estimates of fat-free mass. UK studies in older children and adolescents have shown that, when used in multi-ethnic populations, calibration equations need to include ethnic-specific terms, but whether this holds true for younger children remains to be elucidated. The aims of this study were to examine ethnic differences in body size, proportions and composition in children aged 5 to 11 years, and to establish the extent to which such differences could influence BIA calibration. Methods In a multi-ethnic population of 2171 London primary school-children (47% boys; 34% White, 29% Black African/Caribbean, 25% South Asian, 12% Other) detailed anthropometric measurements were performed and ethnic differences in body size and proportion were assessed. Ethnic differences in fat-free mass, derived by deuterium dilution, were further evaluated in a subsample of the population (n = 698). Multiple linear regression models were used to calibrate BIA against deuterium dilution. Results In children <11 years of age, Black African/Caribbean children were significantly taller, heavier and had larger body size than children of other ethnicities. They also had larger waist and limb girths and relatively longer legs. Despite these differences, ethnic-specific terms did not contribute significantly to the BIA calibration equation (Fat-free mass = 1.12+0.71*(height^2/impedance)+0.18*weight). Conclusion Although clear ethnic differences in body size, proportions and composition were evident in this population of young children aged 5 to 11 years, an ethnic-specific BIA calibration equation was not required. PMID:25478928

  7. Ethnic variability in body size, proportions and composition in children aged 5 to 11 years: is ethnic-specific calibration of bioelectrical impedance required?

    PubMed

    Lee, Simon; Bountziouka, Vassiliki; Lum, Sooky; Stocks, Janet; Bonner, Rachel; Naik, Mitesh; Fothergill, Helen; Wells, Jonathan C K

    2014-01-01

    Bioelectrical Impedance Analysis (BIA) has the potential to be used widely as a method of assessing body fatness and composition, both in clinical and community settings. BIA provides bioelectrical properties, such as whole-body impedance which ideally needs to be calibrated against a gold-standard method in order to provide accurate estimates of fat-free mass. UK studies in older children and adolescents have shown that, when used in multi-ethnic populations, calibration equations need to include ethnic-specific terms, but whether this holds true for younger children remains to be elucidated. The aims of this study were to examine ethnic differences in body size, proportions and composition in children aged 5 to 11 years, and to establish the extent to which such differences could influence BIA calibration. In a multi-ethnic population of 2171 London primary school-children (47% boys; 34% White, 29% Black African/Caribbean, 25% South Asian, 12% Other) detailed anthropometric measurements were performed and ethnic differences in body size and proportion were assessed. Ethnic differences in fat-free mass, derived by deuterium dilution, were further evaluated in a subsample of the population (n = 698). Multiple linear regression models were used to calibrate BIA against deuterium dilution. In children < 11 years of age, Black African/Caribbean children were significantly taller, heavier and had larger body size than children of other ethnicities. They also had larger waist and limb girths and relatively longer legs. Despite these differences, ethnic-specific terms did not contribute significantly to the BIA calibration equation (Fat-free mass = 1.12+0.71*(height^2/impedance)+0.18*weight). Although clear ethnic differences in body size, proportions and composition were evident in this population of young children aged 5 to 11 years, an ethnic-specific BIA calibration equation was not required.
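
    The published equation is straightforward to apply; a small sketch (height in cm, impedance in ohms and weight in kg are assumed, as is conventional for the height^2/impedance index):

        def fat_free_mass(height_cm, impedance_ohm, weight_kg):
            # FFM = 1.12 + 0.71*(height^2/impedance) + 0.18*weight
            return 1.12 + 0.71 * (height_cm ** 2 / impedance_ohm) + 0.18 * weight_kg

        print(fat_free_mass(140.0, 600.0, 35.0))  # example child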

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schüller, Andreas, E-mail: andreas.schueller@ptb.de; Meier, Markus; Selbach, Hans-Joachim

    Purpose: The aim of this study was to investigate whether a chamber-type-specific radiation quality correction factor kQ can be determined in order to measure the reference air kerma rate of (60)Co high-dose-rate (HDR) brachytherapy sources with acceptable uncertainty by means of a well-type ionization chamber calibrated for (192)Ir HDR sources. Methods: The calibration coefficients of 35 well-type ionization chambers of two different chamber types for radiation fields of (60)Co and (192)Ir HDR brachytherapy sources were determined experimentally. A radiation quality correction factor kQ was determined as the ratio of the calibration coefficients for (60)Co and (192)Ir. The dependence on chamber-to-chamber variations, source-to-source variations, and source strength was investigated. Results: For the PTW Tx33004 (Nucletron source dosimetry system (SDS)) well-type chamber, the type-specific radiation quality correction factor kQ is 1.19. Note that this value is valid only for chambers with serial number SN ≥ 315 (Nucletron SDS SN ≥ 548) onward. For the Standard Imaging HDR 1000 Plus well-type chambers, the type-specific correction factor kQ is 1.05. Both kQ values are independent of the source strength in the complete clinically relevant range. The relative expanded uncertainty (k = 2) of kQ is U_kQ = 2.1% for both chamber types. Conclusions: The calibration coefficient of a well-type chamber for radiation fields of (60)Co HDR brachytherapy sources can be calculated from a given calibration coefficient for (192)Ir radiation by using a chamber-type-specific radiation quality correction factor kQ. However, the uncertainty of a (60)Co calibration coefficient calculated via kQ is at least twice as large as that for a direct calibration with a (60)Co source.

  9. A radiation quality correction factor k for well-type ionization chambers for the measurement of the reference air kerma rate of (60)Co HDR brachytherapy sources.

    PubMed

    Schüller, Andreas; Meier, Markus; Selbach, Hans-Joachim; Ankerhold, Ulrike

    2015-07-01

    The aim of this study was to investigate whether a chamber-type-specific radiation quality correction factor kQ can be determined in order to measure the reference air kerma rate of (60)Co high-dose-rate (HDR) brachytherapy sources with acceptable uncertainty by means of a well-type ionization chamber calibrated for (192)Ir HDR sources. The calibration coefficients of 35 well-type ionization chambers of two different chamber types for radiation fields of (60)Co and (192)Ir HDR brachytherapy sources were determined experimentally. A radiation quality correction factor kQ was determined as the ratio of the calibration coefficients for (60)Co and (192)Ir. The dependence on chamber-to-chamber variations, source-to-source variations, and source strength was investigated. For the PTW Tx33004 (Nucletron source dosimetry system (SDS)) well-type chamber, the type-specific radiation quality correction factor kQ is 1.19. Note that this value is valid for chambers with the serial number, SN ≥ 315 (Nucletron SDS SN ≥ 548) onward only. For the Standard Imaging HDR 1000 Plus well-type chambers, the type-specific correction factor kQ is 1.05. Both kQ values are independent of the source strengths in the complete clinically relevant range. The relative expanded uncertainty (k = 2) of kQ is UkQ = 2.1% for both chamber types. The calibration coefficient of a well-type chamber for radiation fields of (60)Co HDR brachytherapy sources can be calculated from a given calibration coefficient for (192)Ir radiation by using a chamber-type-specific radiation quality correction factor kQ. However, the uncertainty of a (60)Co calibration coefficient calculated via kQ is at least twice as large as that for a direct calibration with a (60)Co source.

  10. LiDAR-IMU Time Delay Calibration Based on Iterative Closest Point and Iterated Sigma Point Kalman Filter.

    PubMed

    Liu, Wanli

    2017-03-08

    The time delay calibration between Light Detection and Ranging (LiDAR) and Inertial Measurement Units (IMUs) is an essential prerequisite for their combined application. However, the correspondences between LiDAR and IMU measurements are usually unknown, and thus cannot be computed directly for the time delay calibration. In order to solve the problem of LiDAR-IMU time delay calibration, this paper presents a fusion method based on iterative closest point (ICP) and iterated sigma point Kalman filter (ISPKF), which combines the advantages of both. The ICP algorithm can precisely determine the unknown transformation between LiDAR and IMU, and the ISPKF algorithm can optimally estimate the time delay calibration parameters. First, the coordinate transformation from the LiDAR frame to the IMU frame is realized. Second, the measurement model and time delay error model of the LiDAR and IMU are established. Third, the methodology of the ICP and ISPKF procedure is presented for LiDAR-IMU time delay calibration. Experimental results are presented that validate the proposed method and demonstrate that the time delay error can be accurately calibrated.
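
    A compact sketch of the point-to-point ICP building block (nearest-neighbour matching alternated with a closed-form SVD transform update); the paper embeds this within the time-delay estimation pipeline alongside the ISPKF:

        import numpy as np
        from scipy.spatial import cKDTree

        def icp(src, dst, n_iter=20):
            R, t = np.eye(3), np.zeros(3)
            tree = cKDTree(dst)
            for _ in range(n_iter):
                moved = src @ R.T + t
                _, idx = tree.query(moved)           # nearest-neighbour matches
                matched = dst[idx]
                mu_s, mu_d = moved.mean(0), matched.mean(0)
                H = (moved - mu_s).T @ (matched - mu_d)
                U, _, Vt = np.linalg.svd(H)
                R_step = Vt.T @ U.T
                if np.linalg.det(R_step) < 0:        # enforce a proper rotation
                    Vt[-1] *= -1
                    R_step = Vt.T @ U.T
                t_step = mu_d - R_step @ mu_s
                R, t = R_step @ R, R_step @ t + t_step
            return R, t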

  11. A Visual Servoing-Based Method for ProCam Systems Calibration

    PubMed Central

    Berry, Francois; Aider, Omar Ait; Mosnier, Jeremie

    2013-01-01

    Projector-camera systems are currently used in a wide field of applications, such as 3D reconstruction and augmented reality, and can provide accurate measurements, depending on the configuration and calibration. Frequently, the calibration task is divided into two steps: camera calibration followed by projector calibration. The latter still poses certain problems that are not easy to solve, such as the difficulty in obtaining a set of 2D–3D points to compute the projection matrix between the projector and the world. Existing methods are either not sufficiently accurate or not flexible. We propose an easy and automatic method to calibrate such systems that consists in projecting a calibration pattern and superimposing it automatically on a known printed pattern. The projected pattern is provided by a virtual camera observing a virtual pattern in an OpenGL model. The projector displays what the virtual camera visualizes. Thus, the projected pattern can be controlled and superimposed on the printed one with the aid of visual servoing. Our experimental results compare favorably with those of other methods considering both usability and accuracy. PMID:24084121

  12. Optical Comb from a Whispering Gallery Mode Resonator for Spectroscopy and Astronomy Instruments Calibration

    NASA Technical Reports Server (NTRS)

    Strekalov, Dmitry V.; Yu, Nam; Thompson, Robert J.

    2012-01-01

    The most accurate astronomical data are available from space-based observations that are not impeded by the Earth's atmosphere. Such measurements may require spectral samples taken as long as decades apart, with 1 cm/s velocity precision integrated over a broad wavelength range. This raises the requirements specifically for instruments used in astrophysics research missions -- their stringent wavelength resolution and accuracy must be maintained over years and possibly decades. Therefore, a stable and broadband optical calibration technique compatible with spaceflight becomes essential. Space-based spectroscopic instruments need to be calibrated in situ, which puts forth specific requirements for the calibration sources, mainly concerning their mass, power consumption, and reliability. A high-precision, high-resolution reference wavelength comb source for astronomical and astrophysics spectroscopic observations has been developed that is deployable in space. The optical comb will be used for wavelength calibration of spectrographs and will enable Doppler measurements to better than 10 cm/s precision, one hundred times better than the current state-of-the-art.

  13. Precise dielectric property measurements and E-field probe calibration for specific absorption rate measurements using a rectangular waveguide

    PubMed Central

    Hakim, B M; Beard, B B; Davis, C C

    2018-01-01

    Specific absorption rate (SAR) measurements require accurate calculations of the dielectric properties of tissue-equivalent liquids and associated calibration of E-field probes. We developed a precise tissue-equivalent dielectric measurement and E-field probe calibration system. The system consists of a rectangular waveguide, electric field probe, and data control and acquisition system. Dielectric properties are calculated using the field attenuation factor inside the tissue-equivalent liquid and power reflectance inside the waveguide at the air/dielectric-slab interface. Calibration factors were calculated using isotropicity measurements of the E-field probe. The frequencies used are 900 MHz and 1800 MHz. The uncertainties of the measured values are within ±3%, at the 95% confidence level. Using the same waveguide for dielectric measurements as well as calibrating E-field probes used in SAR assessments eliminates a source of uncertainty. Moreover, we clearly identified the system parameters that affect the overall uncertainty of the measurement system. PMID:29520129

  14. When to Make Mountains out of Molehills: The Pros and Cons of Simple and Complex Model Calibration Procedures

    NASA Astrophysics Data System (ADS)

    Smith, K. A.; Barker, L. J.; Harrigan, S.; Prudhomme, C.; Hannaford, J.; Tanguy, M.; Parry, S.

    2017-12-01

    Earth and environmental models are relied upon to investigate system responses that cannot otherwise be examined. In simulating physical processes, models have adjustable parameters which may, or may not, have a physical meaning. Determining the values to assign to these model parameters is an enduring challenge for earth and environmental modellers. Selecting different error metrics by which the model results are compared to observations will lead to different sets of calibrated model parameters, and thus different model results. Furthermore, models may exhibit `equifinal' behaviour, where multiple combinations of model parameters lead to equally acceptable model performance against observations. These decisions in model calibration introduce uncertainty that must be considered when model results are used to inform environmental decision-making. This presentation focusses on the uncertainties that derive from the calibration of a four-parameter lumped catchment hydrological model (GR4J). The GR models contain an inbuilt automatic calibration algorithm that can satisfactorily calibrate against four error metrics in only a few seconds. However, a single, deterministic model result does not provide information on parameter uncertainty. Furthermore, a modeller interested in extreme events, such as droughts, may wish to calibrate against low-flow-specific error metrics. In a comprehensive assessment, the GR4J model has been run with 500,000 Latin hypercube sampled parameter sets across 303 catchments in the United Kingdom. These parameter sets have been assessed against six error metrics, including two drought-specific metrics. This presentation compares the two approaches, and demonstrates that the inbuilt automatic calibration can outperform the Latin hypercube experiment approach in single-metric assessed performance. However, it is also shown that there are many merits to the more comprehensive assessment, which allows for probabilistic model results, multi-objective optimisation, and better tailoring to calibrate the model for specific applications such as drought event characterisation. Modellers and decision-makers may be constrained in their choice of calibration method, so it is important that they recognise the strengths and limitations of their chosen approach.
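
    A sketch of the sampling-and-scoring skeleton of such a Latin hypercube experiment (indicative GR4J parameter bounds; the hydrological model call itself is omitted):

        import numpy as np
        from scipy.stats import qmc

        # Indicative GR4J parameter bounds; actual ranges vary by study.
        bounds = np.array([[10.0, 2000.0],   # x1: production store capacity (mm)
                           [-5.0, 5.0],      # x2: groundwater exchange (mm/d)
                           [10.0, 500.0],    # x3: routing store capacity (mm)
                           [0.5, 4.0]])      # x4: unit hydrograph time base (d)

        sampler = qmc.LatinHypercube(d=4, seed=0)
        params = qmc.scale(sampler.random(n=100000), bounds[:, 0], bounds[:, 1])

        def nse(sim, obs):
            # Nash-Sutcliffe efficiency, one common error metric.
            return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

        # Each row of params would drive one GR4J run, scored against observed
        # flows with nse() and with low-flow-specific metrics.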

  15. Minimizing calibration time using inter-subject information of single-trial recognition of error potentials in brain-computer interfaces.

    PubMed

    Iturrate, Iñaki; Montesano, Luis; Chavarriaga, Ricardo; del R Millán, Jose; Minguez, Javier

    2011-01-01

    One of the main problems of both synchronous and asynchronous EEG-based BCIs is the need for an initial calibration phase before the system can be used. This phase is necessary due to the high non-stationarity of the EEG, since it changes between sessions and users. The calibration process limits BCI systems to scenarios where the outputs are very controlled, and makes these systems unfriendly and exhausting for the users. Although methods to reduce calibration time have been studied for asynchronous signals, this remains an open issue for event-related potentials. Here, we propose minimizing the calibration time for single-trial error potentials by using classifiers based on inter-subject information. The results show that it is possible to have a classifier with high performance from the beginning of the experiment, one which is able to adapt itself, making the calibration phase shorter and transparent to the user.

  16. NICMOS Cycles 13 and 14 Calibration Plans

    NASA Astrophysics Data System (ADS)

    Arribas, Santiago; Bergeron, Eddie; de Jong, Roeof; Malhotra, Sangeeta; Mobasher, Bahram; Noll, Keith; Schultz, Al; Wiklind, Tommy; Xu, Chun

    2005-11-01

    This document summarizes the NICMOS Calibration Plans for Cycles 13 and 14. These plans complement the SMOV3b, the Cycle 10 (interim), and the Cycles 11 and 12 (regular) calibration programs executed after the installation of the NICMOS Cooling System (NCS). These previous programs have shown that the instrument is very stable, which has motivated a further reduction in the frequency of the monitoring programs for Cycle 13. In addition, for Cycle 14 some of these programs were slightly modified to account for 2-Gyro HST operations. The special calibrations in Cycle 13 focused on a follow-up of the spectroscopic recalibration initiated in Cycle 12. This program led to the discovery of a possible count rate non-linearity, which has triggered a special program for Cycle 13 and a number of subsequent tests and calibrations during Cycle 14. At the time of writing this is a very active area of research. We also briefly comment on other calibrations defined to address specific issues such as the autoreset test, the SPAR sequences tests, and the low-frequency flat residual for NIC1. The calibration programs for the 2-Gyro campaigns are not included here, since they have been described elsewhere. Further details and updates on specific programs can be found via the NICMOS web site.

  17. Tunable laser techniques for improving the precision of observational astronomy

    NASA Astrophysics Data System (ADS)

    Cramer, Claire E.; Brown, Steven W.; Lykke, Keith R.; Woodward, John T.; Bailey, Stephen; Schlegel, David J.; Bolton, Adam S.; Brownstein, Joel; Doherty, Peter E.; Stubbs, Christopher W.; Vaz, Amali; Szentgyorgyi, Andrew

    2012-09-01

    Improving the precision of observational astronomy requires not only new telescopes and instrumentation, but also advances in observing protocols, calibrations and data analysis. The Laser Applications Group at the National Institute of Standards and Technology in Gaithersburg, Maryland has been applying advances in detector metrology and tunable laser calibrations to problems in astronomy since 2007. Using similar measurement techniques, we have addressed a number of seemingly disparate issues: precision flux calibration for broad-band imaging, precision wavelength calibration for high-resolution spectroscopy, and precision PSF mapping for fiber spectrographs of any resolution. In each case, we rely on robust, commercially-available laboratory technology that is readily adapted to use at an observatory. In this paper, we give an overview of these techniques.

  18. The MeqTrees software system and its use for third-generation calibration of radio interferometers

    NASA Astrophysics Data System (ADS)

    Noordam, J. E.; Smirnov, O. M.

    2010-12-01

    Context. The formulation of the radio interferometer measurement equation (RIME) for a generic radio telescope by Hamaker et al. has provided us with an elegant mathematical apparatus for better understanding, simulation and calibration of existing and future instruments. The calibration of the new radio telescopes (LOFAR, SKA) would be unthinkable without the RIME formalism, and new software to exploit it. Aims: The MeqTrees software system is designed to implement numerical models, and to solve for arbitrary subsets of their parameters. It may be applied to many problems, but was originally geared towards implementing Measurement Equations in radio astronomy for the purposes of simulation and calibration. The technical goal of MeqTrees is to provide a tool for rapid implementation of such models, while offering performance comparable to hand-written code. We are also pursuing the wider goal of increasing the rate of evolution of radio astronomical software, by offering a tool that facilitates rapid experimentation, and exchange of ideas (and scripts). Methods: MeqTrees is implemented as a Python-based front-end called the meqbrowser, and an efficient (C++-based) computational back-end called the meqserver. Numerical models are defined on the front-end via a Python-based Tree Definition Language (TDL), then rapidly executed on the back-end. The use of TDL facilitates an extremely short turn-around time (hours rather than weeks or months) for experimentation with new ideas. This is also helped by unprecedented visualization capabilities for all final and intermediate results. A flexible data model and a number of important optimizations in the back-end ensure that the numerical performance is comparable to that of hand-written code. Results: MeqTrees is already widely used as the simulation tool for new instruments (LOFAR, SKA) and technologies (focal plane arrays). It has demonstrated that it can achieve a noise-limited dynamic range in excess of a million on WSRT data. It is the only package that is specifically designed to handle what we propose to call third-generation calibration (3GC), which is needed for the new generation of giant radio telescopes, but can also improve the calibration of existing instruments.

  19. Geometric calibration of Colour and Stereo Surface Imaging System of ESA's Trace Gas Orbiter

    NASA Astrophysics Data System (ADS)

    Tulyakov, Stepan; Ivanov, Anton; Thomas, Nicolas; Roloff, Victoria; Pommerol, Antoine; Cremonese, Gabriele; Weigel, Thomas; Fleuret, Francois

    2018-01-01

    There are many geometric calibration methods for "standard" cameras. These methods, however, cannot be used for the calibration of telescopes with large focal lengths and complex off-axis optics. Moreover, specialized calibration methods for telescopes are scarce in the literature. We describe the calibration method that we developed for the Colour and Stereo Surface Imaging System (CaSSIS) telescope, on board the ExoMars Trace Gas Orbiter (TGO). Although our method is described in the context of CaSSIS, with camera-specific experiments, it is general and can be applied to other telescopes. We further encourage re-use of the proposed method by making our calibration code and data available online.

  20. Observations of classical cepheids

    NASA Technical Reports Server (NTRS)

    Pel, J. W.

    1980-01-01

    The observations of classical Cepheids are reviewed. The main progress that has been made is summarized and some of the problems yet to be solved are discussed. The problems include color excesses, calibration of color, duplicity, ultraviolet colors, temperature-color relations, mass discrepancies, and radius determination.

  1. Balance Calibration – A Method for Assigning a Direct-Reading Uncertainty to an Electronic Balance.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mike Stears

    2010-07-01

    Paper Title: Balance Calibration – A method for assigning a direct-reading uncertainty to an electronic balance. Intended Audience: Those who calibrate or use electronic balances. Abstract: As a calibration facility, we provide on-site (at the customer’s location) calibrations of electronic balances for customers within our company. In our experience, most of our customers are not using their balance as a comparator, but simply putting an unknown quantity on the balance and reading the displayed mass value. Manufacturer’s specifications for balances typically include specifications such as readability, repeatability, linearity, and sensitivity temperature drift, but what does this all mean when the balance user simply reads the displayed mass value and accepts the reading as the true value? This paper discusses a method for assigning a direct-reading uncertainty to a balance based upon the observed calibration data and the environment where the balance is being used. The method requires input from the customer regarding the environment where the balance is used and encourages discussion with the customer regarding sources of uncertainty and possible means for improvement; the calibration process becomes an educational opportunity for the balance user as well as calibration personnel. This paper will cover the uncertainty analysis applied to the calibration weights used for the field calibration of balances; the uncertainty is calculated over the range of environmental conditions typically encountered in the field and the resulting range of air density. The temperature stability in the area of the balance is discussed with the customer and the temperature range over which the balance calibration is valid is decided upon; the decision is based upon the uncertainty needs of the customer and the desired rigor in monitoring by the customer. Once the environmental limitations are decided, the calibration is performed and the measurement data is entered into a custom spreadsheet. The spreadsheet uses measurement results, along with the manufacturer’s specifications, to assign a direct-read measurement uncertainty to the balance. The fact that the assigned uncertainty is a best-case uncertainty is discussed with the customer; the assigned uncertainty contains no allowance for contributions associated with the unknown weighing sample, such as density, static charges, magnetism, etc. The attendee will learn uncertainty considerations associated with balance calibrations along with one method for assigning an uncertainty to a balance used for non-comparison measurements.
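
    The abstract does not spell out the spreadsheet arithmetic, but a GUM-style root-sum-of-squares combination of the named components (readability, repeatability, linearity, temperature drift) is the usual way such a direct-reading uncertainty is assembled. The sketch below is a minimal illustration under that assumption; the function name, example values, and the rectangular-distribution treatment are ours, not the paper's.

    ```python
    import math

    def direct_reading_uncertainty(readability, repeatability_sd,
                                   linearity, temp_drift_per_K, temp_range_K,
                                   coverage_k=2.0):
        """Combine balance spec/calibration components into an expanded
        direct-reading uncertainty via root-sum-of-squares (GUM-style).

        Rectangular-distribution components (readability, linearity,
        temperature drift) are converted to standard uncertainties by
        dividing their half-widths by sqrt(3)."""
        u_components = [
            readability / (2 * math.sqrt(3)),      # half-interval of last digit
            repeatability_sd,                      # already a standard deviation
            linearity / math.sqrt(3),
            temp_drift_per_K * temp_range_K / math.sqrt(3),
        ]
        u_c = math.sqrt(sum(u * u for u in u_components))
        return coverage_k * u_c

    # Illustrative values in mg: 0.1 readability, 0.2 repeatability,
    # 0.3 linearity, 0.2 mg/K drift over an agreed +/-2.5 K window.
    U = direct_reading_uncertainty(0.1, 0.2, 0.3, 0.2, 2.5)
    print(f"expanded direct-reading uncertainty (k=2): {U:.2f} mg")
    ```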

  2. Detection of Unexpected High Correlations between Balance Calibration Loads and Load Residuals

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; Volden, T.

    2014-01-01

    An algorithm was developed for the assessment of strain-gage balance calibration data that makes it possible to systematically investigate potential sources of unexpected high correlations between calibration load residuals and applied calibration loads. The algorithm investigates correlations on a load series by load series basis. The linear correlation coefficient is used to quantify the correlations. It is computed for all possible pairs of calibration load residuals and applied calibration loads that can be constructed for the given balance calibration data set. An unexpected high correlation between a load residual and a load is detected if three conditions are met: (i) the absolute value of the correlation coefficient of a residual/load pair exceeds 0.95; (ii) the maximum of the absolute values of the residuals of a load series exceeds 0.25 % of the load capacity; (iii) the load component of the load series is intentionally applied. Data from a baseline calibration of a six-component force balance is used to illustrate the application of the detection algorithm to a real-world data set. This analysis also showed that the detection algorithm can identify load alignment errors as long as repeat load series are contained in the balance calibration data set that do not suffer from load alignment problems.
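
    The three detection conditions are concrete enough to sketch. The following is a minimal illustration of the test for one load series, assuming tabulated loads and residuals; the function and argument names are hypothetical, and the paper's actual implementation may differ.

    ```python
    import numpy as np

    def flag_unexpected_correlations(loads, residuals, applied_mask, capacities,
                                     r_thresh=0.95, res_thresh=0.0025):
        """Flag residual/load pairs in one load series using the three
        conditions above: (i) |r| > 0.95, (ii) max |residual| > 0.25 % of
        the component's capacity, (iii) the load was intentionally applied.

        loads, residuals: arrays of shape (n_points, n_components)
        applied_mask: True where a load component was intentionally applied
        capacities: load capacity per component"""
        flags = []
        n_comp = loads.shape[1]
        for i in range(n_comp):                    # residual component
            if np.max(np.abs(residuals[:, i])) <= res_thresh * capacities[i]:
                continue                           # condition (ii) not met
            for j in range(n_comp):                # load component
                if not applied_mask[j]:
                    continue                       # condition (iii) not met
                r = np.corrcoef(residuals[:, i], loads[:, j])[0, 1]
                if abs(r) > r_thresh:              # condition (i)
                    flags.append((i, j, r))
        return flags
    ```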

  3. A comparison of single- and multi-site calibration and validation: a case study of SWAT in the Miyun Reservoir watershed, China

    NASA Astrophysics Data System (ADS)

    Bai, Jianwen; Shen, Zhenyao; Yan, Tiezhu

    2017-09-01

    An essential task in evaluating global water resource and pollution problems is to obtain the optimum set of parameters in hydrological models through calibration and validation. For a large-scale watershed, single-site calibration and validation may ignore spatial heterogeneity and may not meet the needs of the entire watershed. The goal of this study is to apply a multi-site calibration and validation of the Soil and Water Assessment Tool (SWAT), using the observed flow data at three monitoring sites within the Baihe watershed of the Miyun Reservoir watershed, China. Our results indicate that the multi-site calibration parameter values are more reasonable than those obtained from single-site calibrations. These results are mainly due to significant differences in the topographic factors over the large-scale area, human activities and climate variability. The multi-site method involves the division of the large watershed into smaller watersheds, and applying the calibrated parameters of the multi-site calibration to the entire watershed. It was anticipated that this case study could provide experience of multi-site calibration in a large-scale basin, and provide a good foundation for the simulation of other pollutants in follow-up work in the Miyun Reservoir watershed and other similar large areas.

  4. An Automated Thermocouple Calibration System

    NASA Technical Reports Server (NTRS)

    Bethea, Mark D.; Rosenthal, Bruce N.

    1992-01-01

    An Automated Thermocouple Calibration System (ATCS) was developed for the unattended calibration of type K thermocouples. This system operates from room temperature to 650 C and has been used for calibration of thermocouples in an eight-zone furnace system which may employ as many as 60 thermocouples simultaneously. It is highly efficient, allowing for the calibration of large numbers of thermocouples in significantly less time than required for manual calibrations. The system consists of a personal computer, a data acquisition/control unit, and a laboratory calibration furnace. The calibration furnace is a microprocessor-controlled multipurpose temperature calibrator with an accuracy of +/- 0.7 C. The accuracy of the calibration furnace is traceable to the National Institute of Standards and Technology (NIST). The computer software is menu-based to give the user flexibility and ease of use. The user needs no programming experience to operate the systems. This system was specifically developed for use in the Microgravity Materials Science Laboratory (MMSL) at the NASA LeRC.

  5. A BPM calibration procedure using TBT data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, M.J.; Crisp, J.; Prieto, P.

    2007-06-01

    Accurate BPM calibration is crucial for lattice analysis. It is also reassuring when the calibration can be independently verified. This paper outlines a procedure that can extract BPM calibration information from TBT orbit data. The procedure is developed as an extension to the Turn-By-Turn lattice analysis [1]. Its application to data from both the Recycler Ring and the Main Injector (MI) at Fermilab has produced very encouraging results. Some specifics of the hardware design are mentioned for comparison with the analysis results.

  6. Depth estimation and camera calibration of a focused plenoptic camera for visual odometry

    NASA Astrophysics Data System (ADS)

    Zeller, Niclas; Quint, Franz; Stilla, Uwe

    2016-08-01

    This paper presents new and improved methods of depth estimation and camera calibration for visual odometry with a focused plenoptic camera. For depth estimation we adapt an algorithm previously used in structure-from-motion approaches to work with images of a focused plenoptic camera. In the raw image of a plenoptic camera, scene patches are recorded in several micro-images under slightly different angles. This leads to a multi-view stereo problem. To reduce the complexity, we divide this into multiple binocular stereo problems. For each pixel with sufficient gradient we estimate a virtual (uncalibrated) depth based on local intensity error minimization. The estimated depth is characterized by the variance of the estimate and is subsequently updated with the estimates from other micro-images. Updating is performed in a Kalman-like fashion. The result of depth estimation in a single image of the plenoptic camera is a probabilistic depth map, where each depth pixel consists of an estimated virtual depth and a corresponding variance. Since the resulting image of the plenoptic camera contains two planes, the optical image and the depth map, camera calibration is divided into two separate sub-problems. The optical path is calibrated based on a traditional calibration method. For calibrating the depth map we introduce two novel model-based methods, which define the relation between the virtual depth, estimated from the light-field image, and the metric object distance. These two methods are compared to a well-known curve-fitting approach. Both model-based methods show significant advantages compared to the curve-fitting method. For visual odometry we fuse the probabilistic depth map gained from one shot of the plenoptic camera with the depth data gained by finding stereo correspondences between subsequent synthesized intensity images of the plenoptic camera. These images can be synthesized totally focused, and thus finding stereo correspondences is enhanced. In contrast to monocular visual odometry approaches, due to the calibration of the individual depth maps, the scale of the scene can be observed. Furthermore, due to the light-field information, better tracking capabilities compared to the monocular case can be expected. As a result, the depth information gained by the plenoptic camera based visual odometry algorithm proposed in this paper has superior accuracy and reliability compared to the depth estimated from a single light-field image.
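
    The per-pixel update is described as Kalman-like, which for scalar depth estimates amounts to inverse-variance weighting. A minimal sketch of that fusion step, with illustrative numbers that are not from the paper:

    ```python
    def fuse_depth(d1, var1, d2, var2):
        """Kalman-style update: combine two virtual-depth estimates by
        inverse-variance weighting; lower-variance estimates dominate."""
        k = var1 / (var1 + var2)       # gain toward the second estimate
        d = d1 + k * (d2 - d1)
        var = (1.0 - k) * var1         # fused variance is always smaller
        return d, var

    # Sequentially merge estimates of the same scene point from several
    # micro-images into one probabilistic depth-map entry.
    estimates = [(2.1, 0.30), (1.9, 0.20), (2.3, 0.50)]
    d, var = estimates[0]
    for d_new, var_new in estimates[1:]:
        d, var = fuse_depth(d, var, d_new, var_new)
    print(f"fused virtual depth: {d:.3f}, variance: {var:.4f}")
    ```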

  7. Multisensory visual servoing by a neural network.

    PubMed

    Wei, G Q; Hirzinger, G

    1999-01-01

    Conventional computer vision methods for determining a robot's end-effector motion based on sensory data need sensor calibration (e.g., camera calibration) and sensor-to-hand calibration (e.g., hand-eye calibration). This involves considerable computation and some difficulties, especially when different kinds of sensors are involved. In this correspondence, we present a neural network approach to the motion determination problem without any calibration. Two kinds of sensory data, namely camera images and laser range data, are used as the input to a multilayer feedforward network to associate the direct transformation from the sensory data to the required motions. This provides a practical sensor fusion method. Using a recursive motion strategy and by means of a network correction, we relax the requirement for the exactness of the learned transformation. Another important feature of our work is that the goal position can be changed without network retraining. Experimental results show the effectiveness of our method.

  8. Radiometric calibration of Landsat Thematic Mapper multispectral images

    USGS Publications Warehouse

    Chavez, P.S.

    1989-01-01

    A main problem encountered in radiometric calibration of satellite image data is correcting for atmospheric effects. Without this correction, an image digital number (DN) cannot be converted to a surface reflectance value. In this paper the accuracy of a calibration procedure, which includes a correction for atmospheric scattering, is tested. Two simple methods, a stand-alone and an in situ sky radiance measurement technique, were used to derive the HAZE DN values for each of the six reflectance Thematic Mapper (TM) bands. The DNs of two Landsat TM images of Phoenix, Arizona were converted to surface reflectances. -from Author
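
    The correction sketched in the abstract is the classic dark-object (haze) subtraction: the radiance implied by the HAZE DN is removed before converting radiance to an apparent surface reflectance. A minimal illustration under the standard conversion formula follows; the gain/offset band calibration form and the numbers are generic assumptions, not values from the paper.

    ```python
    import math

    def dn_to_reflectance(dn, haze_dn, gain, offset, esun, d_au, sun_elev_deg):
        """Dark-object subtraction: remove the radiance implied by the HAZE
        DN, then convert the corrected radiance to apparent reflectance."""
        l_obs = gain * dn + offset                   # band radiance of the pixel
        l_haze = gain * haze_dn + offset             # radiance of darkest objects
        theta = math.radians(90.0 - sun_elev_deg)    # solar zenith angle
        return math.pi * (l_obs - l_haze) * d_au ** 2 / (esun * math.cos(theta))

    # Illustrative TM-band-1-like numbers only.
    print(dn_to_reflectance(dn=85, haze_dn=40, gain=0.602, offset=-1.52,
                            esun=1957.0, d_au=0.9945, sun_elev_deg=42.0))
    ```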

  9. Input variable selection and calibration data selection for storm water quality regression models.

    PubMed

    Sun, Siao; Bertrand-Krajewski, Jean-Luc

    2013-01-01

    Storm water quality models are useful tools in storm water management. Interest has been growing in analyzing existing data for developing models for urban storm water quality evaluations. It is important to select appropriate model inputs when many candidate explanatory variables are available. Model calibration and verification are essential steps in any storm water quality modeling. This study investigates input variable selection and calibration data selection in storm water quality regression models. The two selection problems interact, so a procedure is developed to carry out the two selection tasks in sequence. The procedure first selects model input variables using a cross-validation method. An appropriate number of variables is identified as model inputs to ensure that a model is neither overfitted nor underfitted. Based on the input variable selection results, calibration data selection is studied. Uncertainty of model performance due to calibration data selection is investigated with a random selection method. An approach using the cluster method is applied in order to enhance model calibration practice, based on the principle of selecting representative data for calibration. The comparison between results from the cluster selection method and random selection shows that the former can significantly improve the performance of calibrated models. It is found that the information content of the calibration data is important in addition to the size of the calibration data.
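
    As a concrete (if simplified) picture of the first step, the sketch below performs greedy forward selection of input variables for a linear regression, scoring each candidate set by k-fold cross-validated RMSE so that the model is neither overfitted nor underfitted. This is a generic illustration, not the authors' code; the fold count, random seed, and linear model are assumptions.

    ```python
    import numpy as np

    def cv_rmse(X, y, k=5):
        """k-fold cross-validated RMSE of an ordinary least-squares model."""
        idx = np.arange(len(y))
        np.random.default_rng(0).shuffle(idx)
        errs = []
        for fold in np.array_split(idx, k):
            train = np.setdiff1d(idx, fold)
            A = np.column_stack([np.ones(len(train)), X[train]])
            beta, *_ = np.linalg.lstsq(A, y[train], rcond=None)
            A_test = np.column_stack([np.ones(len(fold)), X[fold]])
            errs.append((y[fold] - A_test @ beta) ** 2)
        return np.sqrt(np.concatenate(errs).mean())

    def forward_select(X, y, max_vars):
        """Greedily add the candidate variable that most reduces the
        cross-validated RMSE; stop when no candidate improves it."""
        selected, best = [], np.inf
        while len(selected) < max_vars:
            scores = {j: cv_rmse(X[:, selected + [j]], y)
                      for j in range(X.shape[1]) if j not in selected}
            j, score = min(scores.items(), key=lambda kv: kv[1])
            if score >= best:
                break
            selected.append(j)
            best = score
        return selected, best

    # Usage: columns of X are candidate explanatory variables for loads y.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(60, 8))
    y = 2.0 * X[:, 2] - 1.5 * X[:, 5] + 0.1 * rng.normal(size=60)
    print(forward_select(X, y, max_vars=4))   # typically picks columns 2 and 5
    ```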

  10. Using random forest for reliable classification and cost-sensitive learning for medical diagnosis.

    PubMed

    Yang, Fan; Wang, Hua-zhen; Mi, Hong; Lin, Cheng-de; Cai, Wei-wen

    2009-01-30

    Most machine-learning classifiers output label predictions for new instances without indicating how reliable the predictions are. The applicability of these classifiers is limited in critical domains where incorrect predictions have serious consequences, like medical diagnosis. Further, the default assumption of equal misclassification costs is most likely violated in medical diagnosis. In this paper, we present a modified random forest classifier which is incorporated into the conformal predictor scheme. A conformal predictor is a transductive learning scheme that uses Kolmogorov complexity to test the randomness of a particular sample with respect to the training sets. Our method shows a well-calibrated property: the performance can be set prior to classification, and the accuracy rate is exactly equal to the predefined confidence level. Further, to address the cost-sensitive problem, we extend our method to a label-conditional predictor which takes into account different costs for misclassifications in different classes and allows a different confidence level to be specified for each class. Intensive experiments on benchmark datasets and real-world applications show that the resultant classifier is well calibrated and able to control the specific risk of each class. The method of using the RF outlier measure to design a nonconformity measure benefits the resultant predictor. Further, a label-conditional classifier is developed and turns out to be an alternative approach to the cost-sensitive learning problem that relies on label-wise predefined confidence levels.
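
    The label-conditional mechanism can be sketched compactly. Given nonconformity scores for a calibration set (in the paper these come from the RF outlier measure; here the score is left generic), a label is kept in the prediction set whenever its conformal p-value exceeds the significance level 1 - confidence chosen for that class. A minimal, hypothetical illustration:

    ```python
    import numpy as np

    def conformal_prediction_set(cal_scores, cal_labels, test_score_per_class,
                                 confidence_per_class):
        """Label-conditional conformal prediction: class c stays in the
        prediction set if the test nonconformity score is not extreme
        relative to calibration scores of class c, at c's own confidence."""
        prediction_set = []
        for c, conf in confidence_per_class.items():
            scores_c = cal_scores[cal_labels == c]
            s = test_score_per_class[c]
            # p-value: fraction of calibration examples at least as nonconforming
            p = (np.sum(scores_c >= s) + 1) / (len(scores_c) + 1)
            if p > 1.0 - conf:
                prediction_set.append(c)
        return prediction_set

    cal_scores = np.array([0.1, 0.4, 0.2, 0.9, 0.5, 0.3])
    cal_labels = np.array([0, 0, 0, 1, 1, 1])
    print(conformal_prediction_set(
        cal_scores, cal_labels,
        test_score_per_class={0: 0.35, 1: 0.45},
        confidence_per_class={0: 0.95, 1: 0.80}))   # per-class risk control
    ```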

  11. Stochastic approach for radionuclides quantification

    NASA Astrophysics Data System (ADS)

    Clement, A.; Saurel, N.; Perrin, G.

    2018-01-01

    Gamma spectrometry is a passive non-destructive assay used to quantify radionuclides present in more or less complex objects. Basic methods using empirical calibration with a standard in order to quantify the activity of nuclear materials by determining the calibration coefficient are useless on non-reproducible, complex and unique nuclear objects such as waste packages. Package specifications such as composition or geometry change from one package to another and involve a high variability of objects. The current quantification process uses numerical modelling of the measured scene with the few available data, such as geometry or composition. These data are density, material, screen, geometric shape, matrix composition, and matrix and source distribution. Some of them depend strongly on knowledge of the package data and on operator background. The French Commissariat à l'Energie Atomique (CEA) is developing a new methodology to quantify nuclear materials in waste packages and waste drums without operator adjustment and without knowledge of the internal package configuration. This method combines a global stochastic approach, which uses, among others, surrogate models to simulate the gamma attenuation behaviour; a Bayesian approach, which considers conditional probability densities of the problem inputs; and Markov Chain Monte Carlo (MCMC) algorithms, which solve the inverse problem, with the gamma-ray emission spectrum of the radionuclides and the outside dimensions of the objects of interest. The methodology is being tested by quantifying actinide activity in standards of different matrix kinds, compositions, and source configurations, with known actinide masses, locations and distributions. Activity uncertainties are taken into account by this adjustment methodology.

  12. Stand level height-diameter mixed effects models: parameters fitted using loblolly pine but calibrated for sweetgum

    Treesearch

    Curtis L. Vanderschaaf

    2008-01-01

    Mixed effects models can be used to obtain site-specific parameters through the use of model calibration that often produces better predictions of independent data. This study examined whether parameters of a mixed effect height-diameter model estimated using loblolly pine plantation data but calibrated using sweetgum plantation data would produce reasonable...

  13. Using satellite fire detection to calibrate components of the fire weather index system in Malaysia and Indonesia.

    PubMed

    Dymond, Caren C; Field, Robert D; Roswintiarti, Orbita; Guswanto

    2005-04-01

    Vegetation fires have become an increasing problem in tropical environments as a consequence of socioeconomic pressures and subsequent land-use change. In response, fire management systems are being developed. This study set out to determine the relationships between two aspects of the fire problems in western Indonesia and Malaysia, and two components of the Canadian Forest Fire Weather Index System. The study resulted in a new method for calibrating components of fire danger rating systems based on satellite fire detection (hotspot) data. Once the climate was accounted for, a problematic number of fires were related to high levels of the Fine Fuel Moisture Code. The relationship between climate, Fine Fuel Moisture Code, and hotspot occurrence was used to calibrate Fire Occurrence Potential classes where low accounted for 3% of the fires from 1994 to 2000, moderate accounted for 25%, high 26%, and extreme 38%. Further problems arise when there are large clusters of fires burning that may consume valuable land or produce local smoke pollution. Once the climate was taken into account, the hotspot load (number and size of clusters of hotspots) was related to the Fire Weather Index. The relationship between climate, Fire Weather Index, and hotspot load was used to calibrate Fire Load Potential classes. Low Fire Load Potential conditions (75% of an average year) corresponded with 24% of the hotspot clusters, which had an average size of 30% of the largest cluster. In contrast, extreme Fire Load Potential conditions (1% of an average year) corresponded with 30% of the hotspot clusters, which had an average size of 58% of the maximum. Both Fire Occurrence Potential and Fire Load Potential calibrations were successfully validated with data from 2001. This study showed that when ground measurements are not available, fire statistics derived from satellite fire detection archives can be reliably used for calibration. More importantly, as a result of this work, Malaysia and Indonesia have two new sources of information to initiate fire prevention and suppression activities.

  14. The Chandra Source Catalog 2.0: Calibrations

    NASA Astrophysics Data System (ADS)

    Graessle, Dale E.; Evans, Ian N.; Rots, Arnold H.; Allen, Christopher E.; Anderson, Craig S.; Budynkiewicz, Jamie A.; Burke, Douglas; Chen, Judy C.; Civano, Francesca Maria; D'Abrusco, Raffaele; Doe, Stephen M.; Evans, Janet D.; Fabbiano, Giuseppina; Gibbs, Danny G., II; Glotfelty, Kenny J.; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; Houck, John C.; Lauer, Jennifer L.; Laurino, Omar; Lee, Nicholas P.; Martínez-Galarza, Juan Rafael; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph; McLaughlin, Warren; Morgan, Douglas L.; Mossman, Amy E.; Nguyen, Dan T.; Nichols, Joy S.; Nowak, Michael A.; Paxson, Charles; Plummer, David A.; Primini, Francis Anthony; Siemiginowska, Aneta; Sundheim, Beth A.; Tibbetts, Michael; Van Stone, David W.; Zografou, Panagoula

    2018-01-01

    Among the many enhancements implemented for the release of Chandra Source Catalog (CSC) 2.0 are improvements in the processing calibration database (CalDB). We have included a thorough overhaul of the CalDB software used in the processing. The software system upgrade, called "CalDB version 4," allows for a more rational and consistent specification of flight configurations and calibration boundary conditions. Numerous improvements in the specific calibrations applied have also been added. Chandra's radiometric and detector response calibrations vary considerably with time, detector operating temperature, and position on the detector. The CalDB has been enhanced to provide the best calibrations possible to each observation over the fifteen-year period included in CSC 2.0. Calibration updates include an improved ACIS contamination model, as well as updated time-varying gain (i.e., photon energy) and quantum efficiency maps for ACIS and HRC-I. Additionally, improved corrections for the ACIS quantum efficiency losses due to CCD charge transfer inefficiency (CTI) have been added for each of the ten ACIS detectors. These CTI corrections are now time- and temperature-dependent, allowing ACIS to maintain a 0.3% energy calibration accuracy over the 0.5-7.0 keV range for any ACIS source in the catalog. Radiometric calibration (effective area) accuracy is estimated at ~4% over that range. We include a few examples where improvements in the Chandra CalDB allow for improved data reduction and modeling for the new CSC. This work has been supported by NASA under contract NAS 8-03060 to the Smithsonian Astrophysical Observatory for operation of the Chandra X-ray Center.

  15. [Methodologic and clinical comparison of four different ergospirometry systems].

    PubMed

    Winter, U J; Fritsch, J; Gitt, A K; Pothoff, G; Berge, P G; Hilger, H H

    1994-01-01

    The clinician who uses cardio-pulmonary exercise testing (CPX) systems relies on the technical information provided by the device manufacturers. In this paper, the practicability, accuracy and safety of four different available CPX systems are compared in the clinical setting, using clinically oriented criteria. The exercise tests were performed in healthy subjects, in patients with cardiac and/or pulmonary disease, and in young and old people. The comparison study showed that there were in part large differences in device design and measurement accuracy. Furthermore, our investigation demonstrated that, in addition to repeated calibrations of the CPX systems, frequent validation of the devices by means of a metabolic simulator is necessary. Problems in calibration can be caused by inadequate performance or by impure calibration gases. Problems in validation can be due to incompatibility of the CPX device and the validator. The comparison study of the four different systems showed that standards for CPX testing should be defined in the future.

  16. VS2DI: Model use, calibration, and validation

    USGS Publications Warehouse

    Healy, Richard W.; Essaid, Hedeff I.

    2012-01-01

    VS2DI is a software package for simulating water, solute, and heat transport through soils or other porous media under conditions of variable saturation. The package contains a graphical preprocessor for constructing simulations, a postprocessor for displaying simulation results, and numerical models that solve for flow and solute transport (VS2DT) and flow and heat transport (VS2DH). Flow is described by the Richards equation, and solute and heat transport are described by advection-dispersion equations; the finite-difference method is used to solve these equations. Problems can be simulated in one, two, or three (assuming radial symmetry) dimensions. This article provides an overview of calibration techniques that have been used with VS2DI; included is a detailed description of calibration procedures used in simulating the interaction between groundwater and a stream fed by drainage from agricultural fields in central Indiana. Brief descriptions of VS2DI and the various types of problems that have been addressed with the software package are also presented.

  17. Problem-based writing with peer review improves academic performance in physiology.

    PubMed

    Pelaez, Nancy J

    2002-12-01

    The aim of this study was to determine whether problem-based writing with peer review (PW-PR) improves undergraduate student performance on physiology exams. Didactic lectures were replaced with assignments to give students practice explaining their reasoning while solving qualitative problems, thus transferring the responsibility for abstraction and generalization to the students. Performance on exam items about concepts taught using PW-PR was compared with performance on concepts taught using didactic lectures followed by group work. Calibrated Peer Review, a Web-delivered program, was used to collect student essays and to manage anonymous peer review after students "passed" three calibration peer reviews. Results show that the students had difficulty relating concepts. Relationship errors were categorized as (1) problems recognizing levels of organization, (2) problems with cause/effect, and (3) overgeneralizations. For example, some described cells as molecules; others thought that vesicles transport materials through the extracellular fluid. With PW-PR, class discussion was used to confront and resolve such difficulties. Both multiple-choice and essay exam results were better with PW-PR instead of lecture.

  18. Hybrid Geometric Calibration Method for Multi-Platform Spaceborne SAR Image with Sparse Gcps

    NASA Astrophysics Data System (ADS)

    Lv, G.; Tang, X.; Ai, B.; Li, T.; Chen, Q.

    2018-04-01

    Geometric calibration can provide high-accuracy geometric coordinates for spaceborne SAR images by determining accurate geometric parameters of the Range-Doppler model from ground control points (GCPs). However, it is very difficult to obtain GCPs covering large-scale areas, especially in mountainous regions. In addition, the traditional calibration method is only applicable to single-platform SAR images and cannot support hybrid geometric calibration of multi-platform images. To solve these problems, a hybrid geometric calibration method for multi-platform spaceborne SAR images with sparse GCPs is proposed in this paper. First, we calibrate the master image, which contains GCPs. Secondly, a point tracking algorithm is used to obtain tie points (TPs) between the master and slave images. Finally, we calibrate the slave images using the TPs as GCPs. We take the Beijing-Tianjin-Hebei region as an example to study the hybrid geometric calibration method, using 3 TerraSAR-X images, 3 TanDEM-X images and 5 GF-3 images covering more than 235 kilometers in the north-south direction. Geometric calibration of all images is completed using only 5 GCPs. GPS data from a GNSS receiver are used to assess the plane accuracy after calibration. The results show that the geometric positioning accuracy after calibration with sparse GCPs is 3 m for TSX/TDX images and 7.5 m for GF-3 images.

  19. BioPreDyn-bench: a suite of benchmark problems for dynamic modelling in systems biology.

    PubMed

    Villaverde, Alejandro F; Henriques, David; Smallbone, Kieran; Bongard, Sophia; Schmid, Joachim; Cicin-Sain, Damjan; Crombach, Anton; Saez-Rodriguez, Julio; Mauch, Klaus; Balsa-Canto, Eva; Mendes, Pedro; Jaeger, Johannes; Banga, Julio R

    2015-02-20

    Dynamic modelling is one of the cornerstones of systems biology. Many research efforts are currently being invested in the development and exploitation of large-scale kinetic models. The associated problems of parameter estimation (model calibration) and optimal experimental design are particularly challenging. The community has already developed many methods and software packages which aim to facilitate these tasks. However, there is a lack of suitable benchmark problems which allow a fair and systematic evaluation and comparison of these contributions. Here we present BioPreDyn-bench, a set of challenging parameter estimation problems which aspire to serve as reference test cases in this area. This set comprises six problems including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The level of description includes metabolism, transcription, signal transduction, and development. For each problem we provide (i) a basic description and formulation, (ii) implementations ready-to-run in several formats, (iii) computational results obtained with specific solvers, (iv) a basic analysis and interpretation. This suite of benchmark problems can be readily used to evaluate and compare parameter estimation methods. Further, it can also be used to build test problems for sensitivity and identifiability analysis, model reduction and optimal experimental design methods. The suite, including codes and documentation, can be freely downloaded from the BioPreDyn-bench website, https://sites.google.com/site/biopredynbenchmarks/ .

  20. A generalised multiple-mass based method for the determination of the live mass of a force transducer

    NASA Astrophysics Data System (ADS)

    Montalvão, Diogo; Baker, Thomas; Ihracska, Balazs; Aulaqi, Muhammad

    2017-01-01

    Many applications in Experimental Modal Analysis (EMA) require that the sensors' masses are known. This is because the added mass from sensors will affect the structural mode shapes, and in particular the natural frequencies. EMA requires the measurement of the exciting forces at given coordinates, which is often made using piezoelectric force transducers. In such a case, the live mass of the force transducer, i.e. the mass as 'seen' by the structure in perpendicular directions, must be measured somehow, so that compensation methods like mass cancellation can be performed. This, however, presents the problem of how to obtain an accurate measurement of the live mass. If the system is perfectly calibrated, then a reasonably accurate estimate can be made using a straightforward method available in most classical textbooks based on Newton's second law. However, this is often not the case (for example, when the transducer's sensitivity has changed over time, when it is unknown, or when the connection influences the transmission of the force). In a self-calibrating iterative method, both the live mass and the calibration factor are determined, but this paper shows that the problem may be ill-conditioned, producing misleading results if certain conditions are not met. Therefore, a more robust method is presented and discussed in this paper, reducing the ill-conditioning problems and the need to know the calibration factors beforehand. The three methods are compared and discussed through numerical and experimental examples, showing that classical EMA is still a field of research that deserves the attention of scientists and engineers.
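
    The basic multiple-mass idea (though not the generalised formulation of the paper) can be sketched as a linear fit. With the transducer plus a known added mass excited at the driving point, the measured force-to-acceleration ratio is linear in the added mass; the intercept gives the live mass, and the slope absorbs a constant force-channel calibration factor. All names and numbers below are illustrative assumptions:

    ```python
    import numpy as np

    def live_mass_from_added_masses(added_masses, force, accel):
        """Fit apparent mass F/a against the known added masses.
        With an unknown constant calibration factor c in the force channel,
        F/a = c*(m_live + m_added), so the fitted slope is c and the fitted
        intercept is c*m_live; their ratio recovers the live mass."""
        apparent_mass = np.asarray(force) / np.asarray(accel)
        A = np.column_stack([np.ones(len(added_masses)), added_masses])
        (intercept, slope), *_ = np.linalg.lstsq(A, apparent_mass, rcond=None)
        return intercept / slope, slope   # (live mass, calibration factor)

    # Hypothetical readings: true live mass 0.2 kg, calibration factor 1.04.
    m_add = [0.0, 0.1, 0.2]            # kg
    F     = [4.16, 6.24, 8.32]         # N
    a     = [20.0, 20.0, 20.0]         # m/s^2
    print(live_mass_from_added_masses(m_add, F, a))   # ~ (0.2, 1.04)
    ```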

  1. Iodine-Containing Mass-Defect-Tuned Dendrimers for Use as Internal Mass Spectrometry Calibrants

    NASA Astrophysics Data System (ADS)

    Giesen, Joseph A.; Diament, Benjamin J.; Grayson, Scott M.

    2018-03-01

    Calibrants based on synthetic dendrimers have been recently proposed as a versatile alternative to peptides and proteins for both MALDI and ESI mass spectrometry calibration. Because of their modular synthetic platform, dendrimer calibrants are particularly amenable to tailoring for specific applications. Utilizing this versatility, a set of dendrimers has been designed as an internal calibrant with a tailored mass defect to differentiate them from the majority of natural peptide analytes. This was achieved by incorporating a tris-iodinated aromatic core as an initiator for the dendrimer synthesis, thereby affording multiple calibration points (m/z range 600-2300) with an optimized mass-defect offset relative to all peptides composed of the 20 most common proteinogenic amino acids.

  2. Linearization of Positional Response Curve of a Fiber-optic Displacement Sensor

    NASA Astrophysics Data System (ADS)

    Babaev, O. G.; Matyunin, S. A.; Paranin, V. D.

    2018-01-01

    Currently, the creation of optical measuring instruments and sensors for measuring linear displacement is one of the most relevant problems in the area of instrumentation. Fiber-optic contactless sensors based on the magneto-optical effect are of special interest. They are essentially contactless, non-electrical and have a closed optical channel not subject to contamination. The main problem of this type of sensor is the non-linearity of its positional response curve, due to the hyperbolic nature of the magnetic field intensity variation induced by moving the magnetic source mounted on the controlled object relative to the sensing element. This paper discusses an algorithmic method of linearizing the positional response curve of fiber-optic displacement sensors in any selected range of the displacements to be measured. The method is divided into two stages: 1 - definition of the calibration function, 2 - measurement and linearization of the positional response curve (including its temperature stabilization). The algorithm under consideration significantly reduces the number of points of the calibration function, which is essential for calibrating the temperature dependence, by using points that deviate randomly from a uniformly spaced grid. Subsequent interpolation of the deviating points and piecewise linear-plane approximation of the calibration function reduce the microcontroller storage capacity needed for the calibration function and the time required to process the measurement results.
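
    Stage 1 produces a table of sensor outputs at known displacements; stage 2 then inverts it. A minimal sketch of the inversion by piecewise-linear interpolation follows (the calibration points are invented, and the temperature stabilization of the real method is omitted):

    ```python
    import numpy as np

    # Calibration function: raw sensor output recorded at known displacements.
    # The points need not be uniformly spaced.
    calib_displacement = np.array([0.0, 0.7, 1.6, 3.1, 4.2, 5.0])    # mm
    calib_output       = np.array([9.80, 6.10, 3.90, 2.20, 1.55, 1.20])

    def displacement_from_output(u):
        """Linearize the hyperbolic response by piecewise-linear
        interpolation of the inverse calibration function (np.interp needs
        increasing x, so the decreasing output axis is reversed)."""
        return np.interp(u, calib_output[::-1], calib_displacement[::-1])

    print(displacement_from_output(3.0))   # displacement for a raw reading of 3.0
    ```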

  3. Mixed Model Association with Family-Biased Case-Control Ascertainment.

    PubMed

    Hayeck, Tristan J; Loh, Po-Ru; Pollack, Samuela; Gusev, Alexander; Patterson, Nick; Zaitlen, Noah A; Price, Alkes L

    2017-01-05

    Mixed models have become the tool of choice for genetic association studies; however, standard mixed model methods may be poorly calibrated or underpowered under family sampling bias and/or case-control ascertainment. Previously, we introduced a liability threshold-based mixed model association statistic (LTMLM) to address case-control ascertainment in unrelated samples. Here, we consider family-biased case-control ascertainment, where case and control subjects are ascertained non-randomly with respect to family relatedness. Previous work has shown that this type of ascertainment can severely bias heritability estimates; we show here that it also impacts mixed model association statistics. We introduce a family-based association statistic (LT-Fam) that is robust to this problem. Similar to LTMLM, LT-Fam is computed from posterior mean liabilities (PML) under a liability threshold model; however, LT-Fam uses published narrow-sense heritability estimates to avoid the problem of biased heritability estimation, enabling correct calibration. In simulations with family-biased case-control ascertainment, LT-Fam was correctly calibrated (average χ² = 1.00-1.02 for null SNPs), whereas the Armitage trend test (ATT), standard mixed model association (MLM), and case-control retrospective association test (CARAT) were mis-calibrated (e.g., average χ² = 0.50-1.22 for MLM, 0.89-2.65 for CARAT). LT-Fam also attained higher power than other methods in some settings. In 1,259 type 2 diabetes-affected case subjects and 5,765 control subjects from the CARe cohort, downsampled to induce family-biased ascertainment, LT-Fam was correctly calibrated whereas ATT, MLM, and CARAT were again mis-calibrated. Our results highlight the importance of modeling family sampling bias in case-control datasets with related samples.

  4. Curvature-correction-based time-domain CMOS smart temperature sensor with an inaccuracy of -0.8 °C-1.2 °C after one-point calibration from -40 °C to 120 °C

    NASA Astrophysics Data System (ADS)

    Chen, Chun-Chi; Lin, Shih-Hao; Lin, Yi

    2014-06-01

    This paper proposes a time-domain CMOS smart temperature sensor featuring on-chip curvature correction and one-point calibration support for thermal management systems. Time-domain inverter-based temperature sensors, which exhibit the advantages of low power and low cost, have been proposed for on-chip thermal monitoring. However, the curvature is large for the thermal transfer curve, which substantially affects the accuracy as the temperature range increases. Another problem is that the inverter is sensitive to process variations, resulting in difficulty for the sensors to achieve an acceptable accuracy for one-point calibration. To overcome these two problems, a temperature-dependent oscillator with curvature correction is proposed to increase the linearity of the oscillatory width, thereby resolving the drawback caused by a costly off-chip second-order master curve fitting. For one-point calibration support, an adjustable-gain time amplifier was adopted to eliminate the effect of process variations, with the assistance of a calibration circuit. The proposed circuit occupied a small area of 0.073 mm2 and was fabricated in a TSMC CMOS 0.35-μm 2P4M digital process. The linearization of the oscillator and the effect cancellation of process variations enabled the sensor, which featured a fixed resolution of 0.049 °C/LSB, to achieve an optimal inaccuracy of -0.8 °C to 1.2 °C after one-point calibration of 12 test chips from -40 °C to 120 °C. The power consumption was 35 μW at a sample rate of 10 samples/s.

  5. Apparatus for in-situ calibration of instruments that measure fluid depth

    DOEpatents

    Campbell, Melvin D.

    1994-01-01

    The present invention provides a method and apparatus for in-situ calibration of distance measuring equipment. The method comprises obtaining a first distance measurement in a first location, then obtaining at least one other distance measurement in at least one other location at a precisely known distance from the first location, and calculating a calibration constant. The method is applied specifically to calculating a calibration constant for obtaining fluid level, and is embodied in an apparatus using a pressure transducer and a spacer of precisely known length. The calibration constant is used to calculate the depth of a fluid from subsequent single pressure measurements at any submerged position.
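
    The arithmetic is simple enough to state explicitly: two pressure readings taken a precisely known spacer length apart give the pressure-to-depth constant, which then converts any single submerged reading into a depth. The sketch below is an illustration with invented numbers, not the patent's implementation:

    ```python
    def calibration_constant(p_upper, p_lower, spacer_length):
        """Two submerged readings separated by a spacer of precisely known
        length give the pressure-to-depth constant in situ."""
        return spacer_length / (p_lower - p_upper)   # e.g. m per kPa

    def fluid_depth(p_reading, p_surface, k):
        """Depth of the transducer below the fluid surface from one reading."""
        return k * (p_reading - p_surface)

    k = calibration_constant(p_upper=101.80, p_lower=106.70, spacer_length=0.5)
    print(fluid_depth(p_reading=112.30, p_surface=98.10, k=k))   # ~1.45 m
    ```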

  6. Hybrid Self-Adaptive Evolution Strategies Guided by Neighborhood Structures for Combinatorial Optimization Problems.

    PubMed

    Coelho, V N; Coelho, I M; Souza, M J F; Oliveira, T A; Cota, L P; Haddad, M N; Mladenovic, N; Silva, R C P; Guimarães, F G

    2016-01-01

    This article presents an Evolution Strategy (ES)-based algorithm, designed to self-adapt its mutation operators, guiding the search into the solution space using a Self-Adaptive Reduced Variable Neighborhood Search procedure. In view of the specific local search operators for each individual, the proposed population-based approach also fits into the context of the Memetic Algorithms. The proposed variant uses the Greedy Randomized Adaptive Search Procedure with different greedy parameters for generating its initial population, providing an interesting exploration-exploitation balance. To validate the proposal, this framework is applied to solve three different NP-hard combinatorial optimization problems: an Open-Pit-Mining Operational Planning Problem with dynamic allocation of trucks, an Unrelated Parallel Machine Scheduling Problem with Setup Times, and the calibration of a hybrid fuzzy model for Short-Term Load Forecasting. Computational results point out the convergence of the proposed model and highlight its ability in combining the application of move operations from distinct neighborhood structures along the optimization. The results gathered and reported in this article represent a collective evidence of the performance of the method in challenging combinatorial optimization problems from different application domains. The proposed evolution strategy demonstrates an ability of adapting the strength of the mutation disturbance during the generations of its evolution process. The effectiveness of the proposal motivates the application of this novel evolutionary framework for solving other combinatorial optimization problems.
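
    The self-adaptation mechanism referred to in the closing sentence is easiest to see in its classical continuous form: each individual carries its own mutation strength, which is itself mutated log-normally, so the disturbance strength evolves with the population. The sketch below shows that generic mechanism on a toy continuous problem; it is not the paper's hybrid combinatorial algorithm (no RVNS, no GRASP initialization).

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def sphere(x):                       # toy objective to minimize
        return float(np.sum(x ** 2))

    def es_self_adaptive(f, dim=10, mu=5, lam=35, gens=200):
        """(mu, lambda)-ES with log-normal self-adaptation: each individual
        is a pair (x, sigma); sigma is perturbed before it is used to
        mutate x, so mutation strength adapts over the generations."""
        tau = 1.0 / np.sqrt(2.0 * dim)
        pop = [(rng.normal(size=dim), 0.3) for _ in range(mu)]
        for _ in range(gens):
            offspring = []
            for _ in range(lam):
                x, sigma = pop[rng.integers(mu)]
                sigma_new = sigma * np.exp(tau * rng.normal())
                x_new = x + sigma_new * rng.normal(size=dim)
                offspring.append((x_new, sigma_new))
            offspring.sort(key=lambda ind: f(ind[0]))
            pop = offspring[:mu]         # comma selection: parents discarded
        return pop[0]

    best_x, best_sigma = es_self_adaptive(sphere)
    print(sphere(best_x), best_sigma)
    ```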

  7. Uncertainty in Calibration, Detection and Estimation of Metal Concentrations in Engine Plumes Using OPAD

    NASA Technical Reports Server (NTRS)

    Hopkins, Randall C.; Benzing, Daniel A.

    1998-01-01

    Improvements in uncertainties in the values of radiant intensity (I) can be accomplished mainly by improvements in the calibration process and in minimizing the difference between the background and engine plume radiance. For engine tests in which the plume is extremely bright, the difference in luminance between the calibration lamp and the engine plume radiance can be so large as to cause relatively large uncertainties in the values of R. This is due to the small aperture necessary on the receiving optics to avoid saturating the instrument. However, this is not a problem with the SSME engine since the liquid oxygen/hydrogen combustion is not as bright as some other fuels. Applying the instrumentation to other type engine tests may require a much brighter calibration lamp.

  8. Method and apparatus for calibrating a particle emissions monitor

    DOEpatents

    Flower, W.L.; Renzi, R.F.

    1998-07-07

    The invention discloses a method and apparatus for calibrating particulate emissions monitors, in particular, and sampling probes, in general, without removing the instrument from the system being monitored. A source of one or more specific metals in aerosol (either solid or liquid) or vapor form is housed in the instrument. The calibration operation is initiated by moving a focusing lens, used to focus a light beam onto an analysis location and collect the output light response, from an operating position to a calibration position such that the focal point of the focusing lens is now within a calibration stream issuing from a calibration source. The output light response from the calibration stream can be compared to that derived from an analysis location in the operating position to more accurately monitor emissions within the emissions flow stream. 6 figs.

  9. Method and apparatus for calibrating a particle emissions monitor

    DOEpatents

    Flower, William L.; Renzi, Ronald F.

    1998-07-07

    The instant invention discloses a method and apparatus for calibrating particulate emissions monitors, in particular, and sampling probes, in general, without removing the instrument from the system being monitored. A source of one or more specific metals in aerosol (either solid or liquid) or vapor form is housed in the instrument. The calibration operation is initiated by moving a focusing lens, used to focus a light beam onto an analysis location and collect the output light response, from an operating position to a calibration position such that the focal point of the focusing lens is now within a calibration stream issuing from a calibration source. The output light response from the calibration stream can be compared to that derived from an analysis location in the operating position to more accurately monitor emissions within the emissions flow stream.

  10. The Impact of Indoor and Outdoor Radiometer Calibration on Solar Measurements: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, Aron; Sengupta, Manajit; Andreas, Afshin

    2016-07-01

    Accurate solar radiation data sets are critical to reducing the expenses associated with mitigating performance risk for solar energy conversion systems, and they help utility planners and grid system operators understand the impacts of solar resource variability. The accuracy of solar radiation measured by radiometers depends on the instrument performance specification, installation method, calibration procedure, measurement conditions, maintenance practices, location, and environmental conditions. This study addresses the effect of calibration methodologies and the resulting calibration responsivities provided by radiometric calibration service providers such as the National Renewable Energy Laboratory (NREL) and manufacturers of radiometers. Some of these radiometers are calibrated indoors, and some are calibrated outdoors. To establish or understand the differences in calibration methodology, we processed and analyzed field-measured data from these radiometers. This study investigates calibration responsivities provided by NREL's broadband outdoor radiometer calibration (BORCAL) and a few prominent manufacturers. The reference radiometer calibrations are traceable to the World Radiometric Reference. These different methods of calibration demonstrated 1% to 2% differences in solar irradiance measurement. Analyzing these values will ultimately assist in determining the uncertainties of the radiometer data and will assist in developing consensus on a standard for calibration.

  11. Humidity Measurements: A Psychrometer Suitable for On-Line Data Acquisition.

    ERIC Educational Resources Information Center

    Caporaloni, Marina; Ambrosini, Roberto

    1992-01-01

    Explains the typical design, operation, and calibration of a traditional psychrometer. Presents the method utilized for this class project with design considerations, calibration techniques, remote data sensing schematic, and specifics of the implementation process. (JJK)

  12. Preserving Flow Variability in Watershed Model Calibrations

    EPA Science Inventory

    Background/Question/Methods Although watershed modeling flow calibration techniques often emphasize a specific flow mode, ecological conditions that depend on flow-ecology relationships often emphasize a range of flow conditions. We used informal likelihood methods to investig...

  13. Evaluation of the Long-Term Stability and Temperature Coefficient of Dew-Point Hygrometers

    NASA Astrophysics Data System (ADS)

    Benyon, R.; Vicente, T.; Hernández, P.; De Rivas, L.; Conde, F.

    2012-09-01

    The continuous quest for improved specifications of optical dew-point hygrometers has raised customer expectations of the performance of these devices. In the absence of a long calibration history, users with limited prior experience in the measurement of humidity place reliance on manufacturer specifications to estimate long-term stability. While this might be reasonable in the case of measurements of electrical quantities, in humidity it can lead to optimistic estimations of uncertainty. This article reports a study of the long-term stability of some hygrometers and an analysis of their performance as monitored through regular calibration. The results of the investigations provide some typical, realistic uncertainties associated with the long-term stability of instruments used in calibration and testing laboratories. Together, these uncertainties can help in establishing initial contributions in uncertainty budgets, as well as in setting minimum calibration requirements, based on the evaluation of dominant influence quantities.

  14. New blackbody calibration source for low temperatures from -20 C to +350 C

    NASA Astrophysics Data System (ADS)

    Mester, Ulrich; Winter, Peter

    2001-03-01

    Calibration procedures for infrared thermometers and thermal imaging systems require radiation sources of precisely known radiation properties. In the physical absence of an ideal Planckian radiator, the German Committee VDI/VDE-GMA FA 2.51, 'Applied Radiation Thermometry', agreed upon desirable specifications and limiting parameters for a blackbody calibration source with a temperature range from -20 °C to +350 °C, a spectral range from 2 to 15 microns, an emissivity greater than 0.999, and a useful source aperture of 60 mm, among others. As a result of the subsequent design and development, performed with the support of laboratory 7.31 'Thermometry' of the German national metrology institute (PTB), the Mester ME20 Blackbody Calibration Source is presented. The ME20 meets or exceeds all of the specifications formulated by the VDI/VDE committee.

  15. Simulation and optimization of an experimental membrane wastewater treatment plant using computational intelligence methods.

    PubMed

    Ludwig, T; Kern, P; Bongards, M; Wolf, C

    2011-01-01

    The optimization of relaxation and filtration times of submerged microfiltration flat modules in membrane bioreactors used for municipal wastewater treatment is essential for efficient plant operation. However, the optimization and control of such plants and their filtration processes is a challenging problem due to the underlying highly nonlinear and complex processes. This paper presents the use of genetic algorithms for this optimization problem in conjunction with a fully calibrated simulation model, as computational intelligence methods are perfectly suited to the nonconvex multi-objective nature of the optimization problems posed by these complex systems. The simulation model is developed and calibrated using membrane modules from the wastewater simulation software GPS-X based on the Activated Sludge Model No.1 (ASM1). Simulation results have been validated at a technical reference plant. They clearly show that filtration process costs for cleaning and energy can be reduced significantly by intelligent process optimization.

  16. Radiological and microwave Protection at NRL, January - December 1983

    DTIC Science & Technology

    1984-06-27

    reduced to background. Surveys with TLD badges were made on pulsed electron beam machines in Buildings 101 and A68 throughout the year. The Gamble...calibration of radiation dosimetry systems required by the Laboratory's radiological safety program, or by other Laboratory or Navy groups. The Section...provides consultation and assistance on dosimetry problems to the Staff, Laboratory, and Navy. The Section maintains and calibrates fixed-field radiac

  17. MODOPTIM: A general optimization program for ground-water flow model calibration and ground-water management with MODFLOW

    USGS Publications Warehouse

    Halford, Keith J.

    2006-01-01

    MODOPTIM is a non-linear ground-water model calibration and management tool that simulates flow with MODFLOW-96 as a subroutine. A weighted sum-of-squares objective function defines optimal solutions for calibration and management problems. Water levels, discharges, water quality, subsidence, and pumping-lift costs are the five direct observation types that can be compared in MODOPTIM. Differences between direct observations of the same type can be compared to fit temporal changes and spatial gradients. Water levels in pumping wells, wellbore storage in observation wells, and rotational translation of observation wells also can be compared. Negative and positive residuals can be weighted unequally, so inequality constraints such as maximum chloride concentrations or minimum water levels can be incorporated in the objective function. Optimization parameters are defined with zones and parameter-weight matrices. Parameter change is estimated iteratively with a quasi-Newton algorithm and is constrained to a user-defined maximum parameter change per iteration. Parameters that are less sensitive than a user-defined threshold are not estimated. MODOPTIM facilitates testing additional conceptual models by expediting the calibration of each. Examples of applying MODOPTIM to aquifer-test analysis, ground-water management, and parameter-estimation problems are presented.
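
    The unequal weighting of negative and positive residuals is what lets a one-sided constraint live inside an ordinary least-squares objective. A minimal sketch of such an objective follows; it is our own illustration of the idea, not MODOPTIM's code:

    ```python
    import numpy as np

    def asymmetric_sum_of_squares(simulated, observed, weights,
                                  w_negative=1.0, w_positive=1.0):
        """Weighted sum-of-squares with unequal weighting of negative and
        positive residuals, so inequality constraints (e.g. a maximum
        chloride concentration) fold into the same objective function."""
        r = np.asarray(simulated) - np.asarray(observed)
        side = np.where(r < 0.0, w_negative, w_positive)
        return float(np.sum(weights * side * r ** 2))

    # Penalize only exceedances of a 250 mg/L chloride limit:
    J = asymmetric_sum_of_squares(
        simulated=[240.0, 265.0], observed=[250.0, 250.0],
        weights=np.ones(2), w_negative=0.0, w_positive=1.0)
    print(J)   # only the 15 mg/L exceedance contributes: 225.0
    ```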

  18. Hand–eye calibration using a target registration error model

    PubMed Central

    Morgan, Isabella; Jayarathne, Uditha; Ma, Burton; Peters, Terry M.

    2017-01-01

    Surgical cameras are prevalent in modern operating theatres and are often used as a surrogate for direct vision. Visualisation techniques (e.g. image fusion) made possible by tracking the camera require accurate hand–eye calibration between the camera and the tracking system. The authors introduce the concept of ‘guided hand–eye calibration’, where calibration measurements are facilitated by a target registration error (TRE) model. They formulate hand–eye calibration as a registration problem between homologous point–line pairs. For each measurement, the position of a monochromatic ball-tip stylus (a point) and its projection onto the image (a line) are recorded, and the TRE of the resulting calibration is predicted using a TRE model. The TRE model is then used to guide the placement of the calibration tool, so that the subsequent measurement minimises the predicted TRE. Assessing TRE after each measurement produces an accurate calibration using a minimal number of measurements. As a proof of principle, they evaluated guided calibration using a webcam and an endoscopic camera. Their endoscopic camera results suggest that millimetre TRE is achievable when at least 15 measurements are acquired with the tracker sensor ∼80 cm away on the laparoscope handle for a target ∼20 cm away from the camera. PMID:29184657
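
    The guided-acquisition idea can be summarised as a loop; acquire(), calibrate(), predict_tre(), and suggest_next_pose() below are hypothetical placeholders for the authors' point-line registration and TRE model, so this is a schematic, not their implementation:

        def guided_hand_eye_calibration(acquire, calibrate, predict_tre,
                                        suggest_next_pose, tre_target_mm=1.0,
                                        max_measurements=30):
            """Collect point-line measurements until the predicted TRE is low."""
            measurements, pose = [], None
            calibration = None
            while len(measurements) < max_measurements:
                measurements.append(acquire(pose))       # stylus point + image line
                calibration = calibrate(measurements)    # registration solve
                if predict_tre(calibration, measurements) < tre_target_mm:
                    break                                # good enough, stop early
                pose = suggest_next_pose(calibration, measurements)
            return calibration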

  19. Actuator-Assisted Calibration of Freehand 3D Ultrasound System.

    PubMed

    Koo, Terry K; Silvia, Nathaniel

    2018-01-01

    Freehand three-dimensional (3D) ultrasound has been used independently of other technologies to analyze complex geometries or registered with other imaging modalities to aid surgical and radiotherapy planning. A fundamental requirement for all freehand 3D ultrasound systems is probe calibration. The purpose of this study was to develop an actuator-assisted approach to facilitate freehand 3D ultrasound calibration using point-based phantoms. We modified the mathematical formulation of the calibration problem to eliminate the need to image the point targets at different viewing angles and developed an actuator-assisted approach/setup to facilitate quick and consistent collection of point targets spanning the entire image field of view. The actuator-assisted approach was applied to a commonly used cross wire phantom as well as two custom-made point-based phantoms (original and modified), each containing 7 collinear point targets, and the results were compared with the traditional freehand cross wire phantom calibration in terms of calibration reproducibility, point reconstruction precision, point reconstruction accuracy, distance reconstruction accuracy, and data acquisition time. Results demonstrated that the actuator-assisted single cross wire phantom calibration significantly improved the calibration reproducibility and offered similar point reconstruction precision, point reconstruction accuracy, distance reconstruction accuracy, and data acquisition time with respect to the freehand cross wire phantom calibration. On the other hand, the actuator-assisted modified "collinear point target" phantom calibration offered similar precision and accuracy when compared to the freehand cross wire phantom calibration, but it reduced the data acquisition time by 57%. It appears that both the actuator-assisted cross wire phantom and the modified collinear point target phantom calibration approaches are viable options for freehand 3D ultrasound calibration.

  1. The Effect of Inappropriate Calibration: Three Case Studies in Molecular Ecology

    PubMed Central

    Ho, Simon Y. W.; Saarma, Urmas; Barnett, Ross; Haile, James; Shapiro, Beth

    2008-01-01

    Time-scales estimated from sequence data play an important role in molecular ecology. They can be used to draw correlations between evolutionary and palaeoclimatic events, to measure the tempo of speciation, and to study the demographic history of an endangered species. In all of these studies, it is paramount to have accurate estimates of time-scales and substitution rates. Molecular ecological studies typically focus on intraspecific data that have evolved on genealogical scales, but often these studies inappropriately employ deep fossil calibrations or canonical substitution rates (e.g., 1% per million years for birds and mammals) for calibrating estimates of divergence times. These approaches can yield misleading estimates of molecular time-scales, with significant impacts on subsequent evolutionary and ecological inferences. We illustrate this calibration problem using three case studies: avian speciation in the late Pleistocene, the demographic history of bowhead whales, and the Pleistocene biogeography of brown bears. For each data set, we compare the date estimates that are obtained using internal and external calibration points. In all three cases, the conclusions are significantly altered by the application of revised, internally-calibrated substitution rates. Collectively, the results emphasise the importance of judicious selection of calibrations for analyses of recent evolutionary events. PMID:18286172

  3. LiDAR-IMU Time Delay Calibration Based on Iterative Closest Point and Iterated Sigma Point Kalman Filter

    PubMed Central

    Liu, Wanli

    2017-01-01

    The time delay calibration between Light Detection and Ranging (LiDAR) sensors and Inertial Measurement Units (IMUs) is an essential prerequisite for their applications. However, the correspondences between LiDAR and IMU measurements are usually unknown, and thus cannot be used directly for time delay calibration. To solve the LiDAR-IMU time delay calibration problem, this paper presents a fusion method based on the iterative closest point (ICP) algorithm and an iterated sigma point Kalman filter (ISPKF), which combines the advantages of both. The ICP algorithm can precisely determine the unknown transformation between LiDAR and IMU, and the ISPKF algorithm can optimally estimate the time delay calibration parameters. First, the coordinate transformation from the LiDAR frame to the IMU frame is realized. Second, the measurement model and time delay error model of the LiDAR and IMU are established. Third, the methodology of the ICP and ISPKF procedure for LiDAR-IMU time delay calibration is presented. Experimental results validate the proposed method and demonstrate that the time delay error can be accurately calibrated. PMID:28282897
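
    The authors' estimator is the ICP/ISPKF fusion itself; purely to illustrate the underlying notion of temporally aligning LiDAR-derived and IMU-derived motion, a much simpler cross-correlation delay estimate on a common sampling grid might look like this (a sketch under that simplifying assumption, not the paper's algorithm):

        import numpy as np

        def estimate_delay(sig_lidar, sig_imu, dt):
            """Delay (s) maximising the cross-correlation of two motion signals
            (e.g. rotation-rate magnitudes) resampled on a grid of spacing dt."""
            a = np.asarray(sig_lidar, float)
            b = np.asarray(sig_imu, float)
            a, b = a - a.mean(), b - b.mean()
            xc = np.correlate(a, b, mode="full")
            lag = int(np.argmax(xc)) - (len(b) - 1)
            return lag * dt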

  4. Radiometer calibration methods and resulting irradiance differences: Radiometer calibration methods and resulting irradiance differences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, Aron; Sengupta, Manajit; Andreas, Afshin

    Accurate solar radiation measurement by radiometers depends on instrument performance specifications, installation method, calibration procedure, measurement conditions, maintenance practices, location, and environmental conditions. This study addresses the effects of the different calibration methodologies used by radiometric calibration service providers, such as the National Renewable Energy Laboratory (NREL), and by manufacturers of radiometers; some of these methods calibrate radiometers indoors and some outdoors. To understand the differences between calibration methodologies, we processed and analyzed field-measured data from radiometers deployed for 10 months at NREL's Solar Radiation Research Laboratory. The different calibration methods resulted in differences of ±1% to ±2% in solar irradiance measurements. Analyzing these differences will ultimately assist in determining the uncertainties of field radiometer data and will help develop a consensus on a calibration standard. Further advancing procedures for precisely calibrating radiometers to world reference standards will reduce measurement uncertainties, help accurately predict the output of planned solar conversion projects, and improve the bankability of solar projects.

  5. A tunable laser system for precision wavelength calibration of spectra

    NASA Astrophysics Data System (ADS)

    Cramer, Claire

    2010-02-01

    We present a novel laser-based wavelength calibration technique that improves the precision of astronomical spectroscopy and solves a calibration problem inherent to multi-object spectroscopy. We have tested a prototype with the Hectochelle spectrograph at the MMT 6.5 m telescope. The Hectochelle is a high-dispersion, fiber-fed, multi-object spectrograph capable of recording up to 240 spectra simultaneously with a resolving power of 40,000. The standard wavelength calibration method uses spectra from ThAr hollow-cathode lamps shining directly onto the fibers. The difference in light path between calibration and science light, as well as the uneven distribution of spectral lines, is believed to introduce errors of up to several hundred m/s in the wavelength scale. Our tunable laser wavelength calibrator is bright enough for use with a dome screen, allowing the calibration light path to better match the science light path. Further, the laser is tuned in regular steps across a spectral order, creating a comb of evenly spaced lines on the detector. Using the solar spectrum reflected from the atmosphere to record the same spectrum in every fiber, we show that laser wavelength calibration brings radial velocity uncertainties down below 100 m/s. We also present results from studies of globular clusters, and explain how the calibration technique can aid in stellar age determinations, studies of young stars, and searches for dark matter clumping in the galactic halo.

  6. Extrinsic Calibration of a Laser Galvanometric Setup and a Range Camera.

    PubMed

    Sels, Seppe; Bogaerts, Boris; Vanlanduit, Steve; Penne, Rudi

    2018-05-08

    Currently, galvanometric scanning systems (like the one used in a scanning laser Doppler vibrometer) rely on a planar calibration procedure between a two-dimensional (2D) camera and the laser galvanometric scanning system to automatically aim a laser beam at a particular point on an object. In the case of nonplanar or moving objects, this calibration is no longer sufficiently accurate. In this work, a three-dimensional (3D) calibration procedure that uses a 3D range sensor is proposed. The 3D calibration is valid for all types of objects and retains its accuracy when objects are moved between subsequent measurement campaigns. The proposed 3D calibration uses a solution to the Non-Perspective-n-Point (NPnP) problem. The 3D range sensor is used to calculate the position of the object under test relative to the laser galvanometric system. With this extrinsic calibration, the laser galvanometric scanning system can automatically aim a laser beam at the object. In experiments, the mean accuracy of aiming the laser beam at an object is below 10 mm for 95% of the measurements. This accuracy is mainly determined by the accuracy and resolution of the 3D range sensor. The new calibration method is significantly better than the original 2D calibration method, which in our setup achieves errors below 68 mm for 95% of the measurements.
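
    Once the extrinsic matrix is known, aiming reduces to mapping target coordinates from the range-camera frame into the galvanometer frame; a minimal sketch (the 4x4 transform and the example numbers are illustrative, not the paper's calibration result):

        import numpy as np

        def to_galvo_frame(p_cam, T_galvo_from_cam):
            """Map a 3D point from the range-camera frame to the galvo frame."""
            p_h = np.append(np.asarray(p_cam, dtype=float), 1.0)  # homogeneous
            return (T_galvo_from_cam @ p_h)[:3]

        T = np.eye(4)
        T[:3, 3] = [0.10, 0.00, -0.25]             # invented extrinsics (metres)
        print(to_galvo_frame([0.0, 0.0, 1.0], T))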

  7. An early warning system for marine storm hazard mitigation

    NASA Astrophysics Data System (ADS)

    Vousdoukas, M. I.; Almeida, L. P.; Pacheco, A.; Ferreira, O.

    2012-04-01

    The present contribution presents efforts towards the development of an operational Early Warning System for storm hazard prediction and mitigation. The system consists of a nested model train of specially calibrated Wave Watch III, SWAN, and XBeach models. The numerical simulations provide daily forecasts of the hydrodynamic conditions, morphological change, and overtopping risk at the area of interest. The model predictions are processed by a 'translation' module based on site-specific Storm Impact Indicators (SIIs) (Ciavola et al., 2011, Storm impacts along European coastlines. Part 2: lessons learned from the MICORE project, Environmental Science & Policy, Vol 14), and warnings are issued when pre-defined threshold values are exceeded. For the present site the selected SIIs were (i) the maximum wave run-up height during the simulations and (ii) the dune-foot horizontal retreat at the end of the simulations. Both the SIIs and their pre-defined thresholds were carefully selected on the grounds of existing experience and field data. Four risk levels were considered, each associated with an intervention approach recommended to the responsible coastal protection authority. Regular updating of the topography/bathymetry is critical for the performance of the storm impact forecasting, especially when there are significant morphological changes. The system can be extended to other critical problems, such as the implications of global warming and adaptive management strategies, while the approach presently followed, from model calibration to the early warning system for storm hazard mitigation, can be applied to other sites worldwide with minor adaptations.
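
    The SII-to-warning logic is essentially a threshold lookup. A sketch with invented run-up thresholds (the operational values are site-specific and are not given in the abstract):

        # Hypothetical mapping from a Storm Impact Indicator to four risk levels.
        RUNUP_THRESHOLDS_M = [2.0, 3.0, 4.0]      # invented; site-specific in reality

        def risk_level(max_runup_m, thresholds=RUNUP_THRESHOLDS_M):
            level = sum(max_runup_m > t for t in thresholds)
            return ["green", "yellow", "orange", "red"][level]

        print(risk_level(3.4))                     # -> "orange"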

  8. Tradeoffs among watershed model calibration targets for parameter estimation

    EPA Science Inventory

    Hydrologic models are commonly calibrated by optimizing a single objective function target to compare simulated and observed flows, although individual targets are influenced by specific flow modes. Nash-Sutcliffe efficiency (NSE) emphasizes flood peaks in evaluating simulation f...
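
    For reference, the Nash-Sutcliffe efficiency mentioned above compares squared simulation errors to the variance of the observations, which is why it is dominated by flood peaks:

        import numpy as np

        def nash_sutcliffe(sim, obs):
            """NSE = 1 - sum((obs-sim)^2) / sum((obs-mean(obs))^2).
            1 is a perfect fit; 0 merely matches the observed mean."""
            sim, obs = np.asarray(sim, float), np.asarray(obs, float)
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)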

  9. Obtaining continuous BrAC/BAC estimates in the field: A hybrid system integrating transdermal alcohol biosensor, Intellidrink smartphone app, and BrAC Estimator software tools.

    PubMed

    Luczak, Susan E; Hawkins, Ashley L; Dai, Zheng; Wichmann, Raphael; Wang, Chunming; Rosen, I Gary

    2018-08-01

    Biosensors have been developed to measure transdermal alcohol concentration (TAC), but converting TAC into interpretable indices of blood/breath alcohol concentration (BAC/BrAC) is difficult because of variations that occur in TAC across individuals, drinking episodes, and devices. We have developed mathematical models and the BrAC Estimator software for calibrating and inverting TAC into quantifiable BrAC estimates (eBrAC). The calibration protocol to determine the individualized parameters for a specific individual wearing a specific device requires a drinking session in which BrAC and TAC measurements are obtained simultaneously. This calibration protocol was originally conducted in the laboratory with breath analyzers used to produce the BrAC data. Here we develop and test an alternative calibration protocol using drinking diary data collected in the field with the smartphone app Intellidrink to produce the BrAC calibration data. We compared BrAC Estimator software results for 11 drinking episodes collected by an expert user when using Intellidrink versus breath analyzer measurements as BrAC calibration data. Inversion phase results indicated the Intellidrink calibration protocol produced similar eBrAC curves and captured peak eBrAC to within 0.0003%, time of peak eBrAC to within 18 min, and area under the eBrAC curve to within 0.025% alcohol-hours as the breath analyzer calibration protocol. This study provides evidence that drinking diary data can be used in place of breath analyzer data in the BrAC Estimator software calibration procedure, which can reduce participant and researcher burden and expand the potential software user pool beyond researchers studying participants who can drink in the laboratory. Copyright © 2017. Published by Elsevier Ltd.

  10. Calibration and use of an interactive-accounting model to simulate dissolved solids, streamflow, and water-supply operations in the Arkansas River basin, Colorado

    USGS Publications Warehouse

    Burns, A.W.

    1989-01-01

    An interactive-accounting model was used to simulate dissolved solids, streamflow, and water-supply operations in the Arkansas River basin, Colorado. Model calibration of specific-conductance to streamflow relations at three sites enabled computation of dissolved-solids loads throughout the basin. To simulate streamflow only, all water-supply operations were incorporated in the regression relations for streamflow. Calibration for 1940-85 resulted in coefficients of determination that ranged from 0.89 to 0.58, and values in excess of 0.80 were determined for 16 of 20 nodes. The model then incorporated 74 water users and 11 reservoirs to simulate the water-supply operations for two periods, 1943-74 and 1975-85. For the 1943-74 calibration, coefficients of determination for streamflow ranged from 0.87 to 0.02. Calibration of the water-supply operations resulted in coefficients of determination that ranged from 0.87 down to negative values for the simulated irrigation diversions of 37 selected water users. Calibration for 1975-85 was not evaluated statistically, but average values and plots of reservoir contents indicated that the simulation was reasonable. To demonstrate the utility of the model, six specific alternatives were simulated to consider the effects of a potential enlargement of Pueblo Reservoir. Three general major alternatives were simulated: the 1975-85 calibrated model data, the calibrated model data with an addition of 30 cu ft/sec in Fountain Creek flows, and the calibrated model data plus additional municipal water in storage. Each of these three major alternatives considered the options of reservoir enlargement or no enlargement. A 40,000-acre-foot reservoir enlargement resulted in average increases of 2,500 acre-ft in transmountain diversions, of 800 acre-ft in storage diversions, and of 100 acre-ft in winter-water storage. (USGS)

  11. Matrix Factorisation-based Calibration For Air Quality Crowd-sensing

    NASA Astrophysics Data System (ADS)

    Dorffer, Clement; Puigt, Matthieu; Delmaire, Gilles; Roussel, Gilles; Rouvoy, Romain; Sagnier, Isabelle

    2017-04-01

    Internet of Things (IoT) extends the internet to physical objects and places. The internet-enabled objects are thus able to communicate with each other and with their users. One main interest of IoT is the ease of producing huge masses of data (Big Data) using distributed networks of connected objects, thus making possible a fine-grained yet accurate analysis of physical phenomena. Mobile crowdsensing is a way to collect data using IoT. It basically consists of acquiring geolocalized data from the sensors (on or connected to mobile devices, e.g., smartphones) of a crowd of volunteers. The sensed data are then collectively shared using a wireless connection, such as GSM or WiFi, and stored on a dedicated server to be processed. One major application of mobile crowdsensing is environment monitoring. Indeed, with the proliferation of miniaturized yet sensitive sensors on the one hand and of low-cost microcontrollers and single-board PCs on the other, it is easy to extend the sensing abilities of smartphones. Alongside the conventional, regulated, bulky, and expensive instruments used in authoritative air quality stations, it is then possible to create a large-scale mobile sensor network providing insightful information about air quality. In particular, the finer spatial sampling rate of such a dense network should allow air quality models to take into account local effects such as street canyons. However, one key issue with low-cost air quality sensors is the lack of trust in the sensed data. In most crowdsensing scenarios, the sensors (i) cannot be calibrated in a laboratory before or during their deployment and (ii) might be sparsely or continuously faulty (thus providing outliers in the data). Such issues must be handled automatically from the sensor readings: because of the masses of generated data, solving them cannot be left to experts but requires specific data processing techniques. In this work, we assume that some mobile sensors share information through the APISENSE® crowdsensing platform, and we aim to calibrate the sensor responses from the data directly. For that purpose, we express the sensor readings as a low-rank matrix with missing entries and we revisit self-calibration as a Matrix Factorization (MF) problem. In our proposed framework, one factor matrix contains the calibration parameters while the other is structured by the calibration model and contains values of the sensed phenomenon. The MF calibration approach also uses precise measurements from ATMO, the French public air quality monitoring institution, to drive the calibration of the mobile sensors. MF calibration can be improved using, e.g., the mean calibration parameters provided by the sensor manufacturers, sparse priors, or a model of the physical phenomenon. All our approaches are shown to provide better calibration accuracy than matrix-completion-based and robust-regression-based methods, even in difficult scenarios involving a lot of missing data and/or very few accurate references. When combined with a dictionary of air quality patterns, our experiments suggest that MF is able not only to perform sensor network calibration but also to provide detailed maps of air quality.
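
    A sketch of the low-rank idea under a simple affine per-sensor response y = gain*x + offset (the paper's framework additionally uses the ATMO references, manufacturer parameters, and sparse priors, and it resolves the gauge ambiguity this toy version leaves open):

        import numpy as np

        def mf_calibrate(Y, mask, n_iter=200, lam=1e-6):
            """Alternating least squares for Y ~= G @ F on observed entries only.
            Y is sensors x times; mask is a boolean array of observed entries.
            Row i of G holds (gain_i, offset_i); F stacks the phenomenon values
            over a row of ones. Assumes every row/column has observations."""
            m, n = Y.shape
            rng = np.random.default_rng(0)
            G = np.column_stack([np.ones(m), np.zeros(m)])   # start at y = x
            F = np.vstack([rng.normal(size=n), np.ones(n)])
            for _ in range(n_iter):
                for j in range(n):                           # update phenomenon x_j
                    r = mask[:, j]
                    g, b = G[r, 0], G[r, 1]
                    F[0, j] = g @ (Y[r, j] - b) / (g @ g + lam)
                for i in range(m):                           # update (gain_i, offset_i)
                    c = mask[i, :]
                    A = F[:, c].T
                    G[i] = np.linalg.solve(A.T @ A + lam * np.eye(2), A.T @ Y[i, c])
            return G, F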

  12. Precise SAR measurements in the near-field of RF antenna systems

    NASA Astrophysics Data System (ADS)

    Hakim, Bandar M.

    Wireless devices must meet specific safety radiation limits, and in order to assess the health effects of such devices, standard procedures are used in which standard phantoms, tissue-equivalent liquids, and miniature electric field probes are employed. The accuracy of such measurements depends on the precision with which the dielectric properties of the tissue-equivalent liquids are measured and on the associated calibrations of the electric-field probes. This thesis describes work on the theoretical modeling and experimental measurement of the complex permittivity of tissue-equivalent liquids, and the associated calibration of miniature electric-field probes. The measurement method is based on measurements of the field attenuation factor and power reflection coefficient of a tissue-equivalent sample. A novel method, to the best of the author's knowledge, for determining the dielectric properties and probe calibration factors is described and validated. The measurement system is validated using saline at different concentrations, and measurements of complex permittivity and calibration factors have been made on tissue-equivalent liquids at 900 MHz and 1800 MHz. Uncertainty analyses have been conducted to study the measurement system's sensitivity. Using the same waveguide to measure tissue-equivalent permittivity and to calibrate e-field probes eliminates a source of uncertainty associated with using two different measurement systems. The measurement system is used to test GSM cell-phones at 900 MHz and 1800 MHz for Specific Absorption Rate (SAR) compliance using a Specific Anthropomorphic Mannequin (SAM) phantom.

  13. ATLAS Tile Calorimeter time calibration, monitoring and performance

    NASA Astrophysics Data System (ADS)

    Davidek, T.; ATLAS Collaboration

    2017-11-01

    The Tile Calorimeter (TileCal) is the hadronic calorimeter covering the central region of the ATLAS experiment at the LHC. This sampling device is made of plastic scintillating tiles alternated with iron plates, and its response is calibrated to the electromagnetic scale by means of several dedicated calibration systems. Accurate time calibration is important for energy reconstruction, non-collision background removal, and specific physics analyses. The initial time calibration with so-called splash events and the subsequent fine-tuning with collision data are presented. The monitoring of the time calibration with the laser system and physics collision data is discussed, as well as the corrections for sudden changes that are applied before the recorded data are processed for physics analyses. Finally, the time resolution as measured with jets and isolated muons is presented.

  14. The photomultiplier tube calibration system of the MicroBooNE experiment

    DOE PAGES

    Conrad, J.; Jones, B. J. P.; Moss, Z.; ...

    2015-06-03

    Here, we report on the design and construction of an LED-based fiber calibration system for large liquid argon time projection chamber (LArTPC) detectors. This system was developed to calibrate the optical systems of the MicroBooNE experiment. As well as detailing the materials and installation procedure, we provide technical drawings and specifications so that the system may be easily replicated in future LArTPC detectors.

  15. Calibration methods influence quantitative material decomposition in photon-counting spectral CT

    NASA Astrophysics Data System (ADS)

    Curtis, Tyler E.; Roeder, Ryan K.

    2017-03-01

    Photon-counting detectors and nanoparticle contrast agents can potentially enable molecular imaging and material decomposition in computed tomography (CT). Material decomposition has been investigated using both simulated and acquired data sets. However, the effect of calibration methods on material decomposition has not been systematically investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on quantitative material decomposition. A commercially available photon-counting spectral micro-CT (MARS Bioimaging) was used to acquire images with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material basis matrix values were determined using multiple linear regression models, and material decomposition was performed using a maximum a posteriori estimator. The accuracy of quantitative material decomposition was evaluated by the root mean squared error (RMSE), specificity, sensitivity, and area under the curve (AUC). An increased maximum concentration (range) in the calibration significantly improved RMSE, specificity, and AUC. The effects of an increased number of concentrations in the calibration were not statistically significant for the conditions in this study. The overall results demonstrated that the accuracy of quantitative material decomposition in spectral CT is significantly influenced by calibration methods, which must therefore be carefully considered for the intended diagnostic imaging application.

  16. Apparatus for in-situ calibration of instruments that measure fluid depth

    DOEpatents

    Campbell, M.D.

    1994-01-11

    The present invention provides a method and apparatus for in-situ calibration of distance measuring equipment. The method comprises obtaining a first distance measurement in a first location, then obtaining at least one other distance measurement in at least one other location at a precisely known distance from the first location, and calculating a calibration constant. The method is applied specifically to calculating a calibration constant for obtaining fluid level, and is embodied in an apparatus using a pressure transducer and a spacer of precisely known length. The calibration constant is used to calculate the depth of a fluid from subsequent single pressure measurements at any submerged position. 8 figures.
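
    The arithmetic is compact: two readings taken a precisely known spacer length apart fix the depth-per-unit-reading constant, after which any single submerged reading yields depth (the numbers below are invented for illustration):

        def calibration_constant(p_lower, p_upper, spacer_length_m):
            """Depth per unit pressure reading from two measurements separated
            vertically by the known spacer length (deeper reading is larger)."""
            return spacer_length_m / (p_lower - p_upper)

        def fluid_depth(p_reading, p_surface, k):
            return k * (p_reading - p_surface)

        k = calibration_constant(p_lower=151.2, p_upper=146.3, spacer_length_m=0.5)
        print(fluid_depth(p_reading=163.0, p_surface=101.3, k=k))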

  17. International Round-Robin Testing of Bulk Thermoelectrics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hsin; Porter, Wallace D; Bottner, Harold

    2011-11-01

    Two international round-robin studies were conducted on transport property measurements of bulk thermoelectric materials. The studies identified current measurement problems. To obtain the ZT of a material, four separate transport property measurements must be taken. The round-robin study showed that, among the four properties, the Seebeck coefficient is the one that can be measured consistently. Electrical resistivity shows ±4-9% scatter, and thermal diffusivity shows a similar ±5-10% scatter. The reliability of these three properties can be improved by standardizing test procedures and enforcing system calibrations. The worst problem was found in specific heat measurements using DSC. The probability of measurement error is high because three separate runs must be taken to determine Cp, and baseline shift is a persistent issue for commercial DSC instruments. It is suggested that the Dulong-Petit limit always be used as a guideline for Cp. Procedures have been developed to eliminate operator and system errors. The IEA-AMT annex is developing standard procedures for transport property testing.
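
    The four measurements combine into the figure of merit as follows: the Seebeck coefficient S and electrical resistivity rho enter directly, while the thermal conductivity kappa is formed from the measured thermal diffusivity alpha and specific heat c_p together with the sample density d:

        \[
          ZT \;=\; \frac{S^{2}\,T}{\rho\,\kappa},
          \qquad
          \kappa \;=\; \alpha\, d\, c_{p}
        \]

    An error in any one factor, and especially in the DSC-derived c_p, therefore propagates directly into ZT.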

  18. Performance assessment of FY-3C/MERSI on early orbit

    NASA Astrophysics Data System (ADS)

    Hu, Xiuqing; Xu, Na; Wu, Ronghua; Chen, Lin; Min, Min; Wang, Ling; Xu, Hanlie; Sun, Ling; Yang, Zhongdong; Zhang, Peng

    2014-11-01

    FY-3C/MERSI incorporates some remarkable improvements over the previous MERSI instruments, including better spectral response function (SRF) consistency among the detectors within one band, an added capability for lunar observation through the space view (SV), and improved radiometric response stability of the solar bands. During the in-orbit verification (IOV) commissioning phase, early results indicating representative MERSI performance were derived, including the signal-to-noise ratio (SNR), dynamic range, MTF, band-to-band (B2B) registration, calibration bias, and instrument stability. The SNRs of the solar bands (Bands 1-4 and 6-20) were largely beyond the specifications, except for two NIR bands. The in-flight calibration and verification of these bands also rely heavily on vicarious techniques such as the China radiometric calibration sites (CRCS), cross-calibration, lunar calibration, DCC calibration, stability monitoring using Pseudo-Invariant Calibration Sites (PICS), and multi-site radiance simulation. This paper gives the results of the above calibration methods and of monitoring the instrument degradation during the early on-orbit period.

  19. Four years of Landsat-7 on-orbit geometric calibration and performance

    USGS Publications Warehouse

    Lee, D.S.; Storey, James C.; Choate, M.J.; Hayes, R.W.

    2004-01-01

    Unlike its predecessors, Landsat-7 has undergone regular geometric and radiometric performance monitoring and calibration since launch in April 1999. This ongoing activity, which includes issuing quarterly updates to calibration parameters, has generated a wealth of geometric performance data over the four-year on-orbit period of operations. A suite of geometric characterization (measurement and evaluation procedures) and calibration (procedures to derive improved estimates of instrument parameters) methods are employed by the Landsat-7 Image Assessment System to maintain the geometric calibration and to track specific aspects of geometric performance. These include geodetic accuracy, band-to-band registration accuracy, and image-to-image registration accuracy. These characterization and calibration activities maintain image product geometric accuracy at a high level - by monitoring performance to determine when calibration is necessary, generating new calibration parameters, and verifying that new parameters achieve desired improvements in accuracy. Landsat-7 continues to meet and exceed all geometric accuracy requirements, although aging components have begun to affect performance.

  20. External calibration of polarimetric radars using point and distributed targets

    NASA Technical Reports Server (NTRS)

    Yueh, S. H.; Kong, J. A.; Shin, R. T.

    1991-01-01

    Polarimetric calibration algorithms using combinations of point targets and reciprocal distributed targets are developed. From the reciprocity relations of distributed targets, an equivalent point target response is derived. The problem of polarimetric calibration using two point targets and one distributed target then reduces to that using three point targets, which has been previously solved. For calibration using one point target and one reciprocal distributed target, two cases are analyzed, with the point target being a trihedral reflector or a polarimetric active radar calibrator (PARC). For both cases, the general solutions of the system distortion matrices are written as a product of a particular solution and a matrix with one free parameter. For the trihedral-reflector case, this free parameter is determined by assuming azimuthal symmetry for the distributed target. For the PARC case, knowledge of one ratio of two covariance matrix elements of the distributed target is required to solve for the free parameter. Numerical results are simulated to demonstrate the usefulness of the developed algorithms.

  1. Numerical Analysis of a Radiant Heat Flux Calibration System

    NASA Technical Reports Server (NTRS)

    Jiang, Shanjuan; Horn, Thomas J.; Dhir, V. K.

    1998-01-01

    A radiant heat flux gage calibration system exists in the Flight Loads Laboratory at NASA's Dryden Flight Research Center. This calibration system must be well understood if the heat flux gages calibrated in it are to provide useful data during radiant heating ground tests or flight tests of high speed aerospace vehicles. A part of the calibration system characterization process is to develop a numerical model of the flat plate heater element and heat flux gage, which will help identify errors due to convection, heater element erosion, and other factors. A 2-dimensional mathematical model of the gage-plate system has been developed to simulate the combined problem involving convection, radiation and mass loss by chemical reaction. A fourth order finite difference scheme is used to solve the steady state governing equations and determine the temperature distribution in the gage and plate, incident heat flux on the gage face, and flat plate erosion. Initial gage heat flux predictions from the model are found to be within 17% of experimental results.
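
    For illustration of the finite-difference core only (the actual model also treats radiation, convection, and mass loss by chemical reaction, and uses a fourth-order scheme), a Gauss-Seidel relaxation of steady two-dimensional conduction with fixed boundary temperatures:

        import numpy as np

        def relax_laplace(T, n_sweeps=500):
            """Gauss-Seidel sweeps for steady 2-D conduction on a uniform grid;
            the boundary rows/columns of T hold fixed (Dirichlet) values."""
            for _ in range(n_sweeps):
                for i in range(1, T.shape[0] - 1):
                    for j in range(1, T.shape[1] - 1):
                        T[i, j] = 0.25 * (T[i + 1, j] + T[i - 1, j]
                                          + T[i, j + 1] + T[i, j - 1])
            return T

        T = np.zeros((21, 21))
        T[0, :] = 1000.0                  # one hot edge, arbitrary units
        T = relax_laplace(T)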

  3. Case-based Reasoning for Automotive Engine Performance Tune-up

    NASA Astrophysics Data System (ADS)

    Vong, C. M.; Huang, H.; Wong, P. K.

    2010-05-01

    The automotive engine performance tune-up is greatly affected by the calibration of its electronic control unit (ECU). ECU calibration is traditionally done by trial and error. This traditional method consumes a large amount of time and money because of the large number of dynamometer tests required. To resolve this problem, case-based reasoning (CBR) is employed, so that an existing and effective ECU setup can be adapted to fit another, similar class of engines. The adaptation procedure is done through a more sophisticated step called case-based adaptation (CBA) [1, 2]. CBA is an effective knowledge management tool, which can interactively learn expert adaptation knowledge. The paper briefly reviews the methodologies of CBR and CBA. Then the application to ECU calibration is described via a case study. With CBR and CBA, the efficiency of calibrating an ECU can be enhanced. A prototype system has also been developed to verify the usefulness of CBR in ECU calibration.

  4. Monte-Carlo-based uncertainty propagation with hierarchical models—a case study in dynamic torque

    NASA Astrophysics Data System (ADS)

    Klaus, Leonard; Eichstädt, Sascha

    2018-04-01

    For a dynamic calibration, a torque transducer is described by a mechanical model, and the corresponding model parameters are to be identified from measurement data. A measuring device for the primary calibration of dynamic torque, and a corresponding model-based calibration approach, have recently been developed at PTB. The complete mechanical model of the calibration set-up is very complex, and involves several calibration steps—making a straightforward implementation of a Monte Carlo uncertainty evaluation tedious. With this in mind, we here propose to separate the complete model into sub-models, with each sub-model being treated with individual experiments and analysis. The uncertainty evaluation for the overall model then has to combine the information from the sub-models in line with Supplement 2 of the Guide to the Expression of Uncertainty in Measurement. In this contribution, we demonstrate how to carry this out using the Monte Carlo method. The uncertainty evaluation involves various input quantities of different origin and the solution of a numerical optimisation problem.
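
    The Monte Carlo mechanics are straightforward once each sub-model delivers a distribution for its parameters: sample every input, push the samples through the combining model, and summarise. The distributions and the combining formula below are invented placeholders, not PTB's torque model:

        import numpy as np

        rng = np.random.default_rng(1)
        N = 100_000

        # Sub-model A: a stiffness with its standard uncertainty (invented values)
        k = rng.normal(1.23e3, 4.0, N)          # N/m
        # Sub-model B: a damping parameter from a separate experiment
        c = rng.normal(0.85, 0.02, N)

        tau = c / k                             # illustrative combined quantity
        print(tau.mean(), tau.std(ddof=1))      # MC estimate and its uncertainty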

  5. Two Approaches to Calibration in Metrology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campanelli, Mark

    2014-04-01

    Inferring mathematical relationships with quantified uncertainty from measurement data is common to computational science and metrology. Sufficient knowledge of measurement process noise enables Bayesian inference. Otherwise, an alternative approach is required, here termed compartmentalized inference, because collection of uncertain data and model inference occur independently. Bayesian parameterized model inference is compared to a Bayesian-compatible compartmentalized approach for ISO-GUM compliant calibration problems in renewable energy metrology. In either approach, model evidence can help reduce model discrepancy.

  6. ASME V&V challenge problem: Surrogate-based V&V

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beghini, Lauren L.; Hough, Patricia D.

    2015-12-18

    The process of verification and validation can be resource intensive. From the computational model perspective, the resource demand typically arises from long simulation run times on multiple cores coupled with the need to characterize and propagate uncertainties. In addition, predictive computations performed for safety and reliability analyses have similar resource requirements. For this reason, there is a tradeoff between the time required to complete the requisite studies and the fidelity or accuracy of the results that can be obtained. At a high level, our approach is cast within a validation hierarchy that provides a framework in which we perform sensitivity analysis, model calibration, model validation, and prediction. The evidence gathered as part of these activities is mapped into the Predictive Capability Maturity Model to assess the credibility of the model used for the reliability predictions. With regard to specific technical aspects of our analysis, we employ surrogate-based methods, primarily based on polynomial chaos expansions and Gaussian processes, for model calibration, sensitivity analysis, and uncertainty quantification in order to reduce the number of simulations that must be done. The goal is to tip the tradeoff balance toward improving accuracy without increasing the computational demands.

  7. Continuous measurement of suspended-sediment discharge in rivers by use of optical backscatterance sensors

    USGS Publications Warehouse

    Schoellhamer, D.H.; Wright, S.A.; Bogen, J.; Fergus, T.; Walling, D.

    2003-01-01

    Optical sensors have been used to measure turbidity and suspended-sediment concentration in many marine and estuarine studies, and they can provide automated, continuous time series of suspended-sediment concentration and discharge in rivers. Three potential problems with using optical sensors are biological fouling, particle-size variability, and particle-reflectivity variability. Despite varying particle size, output from an optical backscatterance sensor in the Sacramento River at Freeport, California, USA, was calibrated successfully to discharge-weighted, cross-sectionally averaged suspended-sediment concentration, which was measured with the equal-discharge-increment or equal-width-increment methods and an isokinetic sampler. A correction for sensor drift was applied to the 3-year time series. However, the calibration of an optical backscatterance sensor used in the Colorado River at Cisco, Utah, USA, was affected by particle-size variability. The adjusted time series at Freeport was used to calculate hourly suspended-sediment discharge that compared well with daily values from a sediment station at Freeport. The appropriateness of using optical sensors in rivers should be evaluated on a site-specific basis, with measurement objectives, potential particle-size effects, and potential fouling taken into consideration.
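
    Generically, the procedure amounts to a drift correction followed by a least-squares line against the concurrent isokinetic samples; a sketch with invented argument names (the study's actual corrections are site-specific):

        import numpy as np

        def drift_corrected(output, days_since_cal, drift_per_day):
            """Remove an assumed linear sensor drift from the raw output."""
            return (np.asarray(output, float)
                    - drift_per_day * np.asarray(days_since_cal, float))

        def calibrate_obs(obs_output, ssc_samples):
            """Least-squares line mapping sensor output to discharge-weighted,
            cross-sectionally averaged suspended-sediment concentration."""
            slope, intercept = np.polyfit(obs_output, ssc_samples, 1)
            return slope, intercept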

  8. Finite element code development for modeling detonation of HMX composites

    NASA Astrophysics Data System (ADS)

    Duran, Adam V.; Sundararaghavan, Veera

    2017-01-01

    In this work, we present a hydrodynamics code for modeling shock and detonation waves in HMX. A stable, efficient solution strategy based on a Taylor-Galerkin finite element (FE) discretization was developed to solve the reactive Euler equations. In our code, well-calibrated equations of state for the solid unreacted material and the gaseous reaction products have been implemented, along with a chemical reaction scheme and a mixing rule to define the properties of partially reacted states. A linear Grüneisen equation of state, calibrated from experiments, was employed for the unreacted HMX. The JWL form was used to model the EOS of the gaseous reaction products. It is assumed that the unreacted explosive and the reaction products are in both pressure and temperature equilibrium. The overall specific volume and internal energy were computed using the rule of mixtures. An Arrhenius kinetics scheme was integrated to model the chemical reactions. A locally controlled dissipation was introduced that yields a non-oscillatory stabilized scheme at the shock front. The FE model was validated using analytical solutions for the Sod shock tube and ZND strong detonation models. Benchmark problems are presented for geometries in which a single HMX crystal is subjected to a shock condition.

  9. Molecular Form Differences Between Prostate-Specific Antigen (PSA) Standards Create Quantitative Discordances in PSA ELISA Measurements.

    PubMed

    McJimpsey, Erica L

    2016-02-25

    The prostate-specific antigen (PSA) assays currently employed for the detection of prostate cancer (PCa) lack the specificity needed to differentiate PCa from benign prostatic hyperplasia and have high false positive rates. The PSA calibrants used to create calibration curves in these assays are typically purified from seminal plasma and contain many molecular forms (intact PSA and cleaved subforms). The purpose of this study was to determine whether the composition of the PSA molecular forms found in these PSA standards contributes to the lack of PSA test reliability. To this end, seminal plasma purified PSA standards from different commercial sources were investigated by western blot (WB) and in multiple research grade PSA ELISAs. The WB results revealed that all of the PSA standards contained different mass concentrations of intact and cleaved molecular forms. Increased mass concentrations of intact PSA yielded higher immunoassay absorbance values, even between lots from the same manufacturer. Standardization of the molecular-form mass concentrations and purification methods of seminal-plasma-derived PSA calibrants will assist in closing the gaps in PCa testing measurements that require the use of PSA values, such as the % free PSA and the Prostate Health Index, by increasing the accuracy of the calibration curves.

  11. Estimating pole/zero errors in GSN-IRIS/USGS network calibration metadata

    USGS Publications Warehouse

    Ringler, A.T.; Hutt, C.R.; Aster, R.; Bolton, H.; Gee, L.S.; Storm, T.

    2012-01-01

    Mapping the digital record of a seismograph into true ground motion requires the correction of the data by some description of the instrument's response. For the Global Seismographic Network (Butler et al., 2004), as well as many other networks, this instrument response is represented as a Laplace domain pole–zero model and published in the Standard for the Exchange of Earthquake Data (SEED) format. This Laplace representation assumes that the seismometer behaves as a linear system, with any abrupt changes described adequately via multiple time-invariant epochs. The SEED format allows for published instrument response errors as well, but these typically have not been estimated or provided to users. We present an iterative three-step method to estimate the instrument response parameters (poles and zeros) and their associated errors using random calibration signals. First, we solve a coarse nonlinear inverse problem using a least-squares grid search to yield a first approximation to the solution. This approach reduces the likelihood of poorly estimated parameters (a local-minimum solution) caused by noise in the calibration records and enhances algorithm convergence. Second, we iteratively solve a nonlinear parameter estimation problem to obtain the least-squares best-fit Laplace pole–zero–gain model. Third, by applying the central limit theorem, we estimate the errors in this pole–zero model by solving the inverse problem at each frequency in a two-thirds octave band centered at each best-fit pole–zero frequency. This procedure yields error estimates of the 99% confidence interval. We demonstrate the method by applying it to a number of recent Incorporated Research Institutions in Seismology/United States Geological Survey (IRIS/USGS) network calibrations (network code IU).
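
    Stripped to a single pole-zero pair, the nonlinear fit at the heart of the second step can be sketched as follows (the actual method fits full pole-zero sets, seeds the fit with a grid search, and then estimates per-band errors):

        import numpy as np
        from scipy.optimize import least_squares

        def pz_response(params, f):
            """One-zero/one-pole Laplace model evaluated at frequencies f (Hz)."""
            gain, zr, zi, pr, pi = params
            s = 2j * np.pi * f
            return gain * (s - (zr + 1j * zi)) / (s - (pr + 1j * pi))

        def fit_pz(f, measured, x0):
            def resid(p):                 # stack real/imag for a real residual
                d = pz_response(p, f) - measured
                return np.concatenate([d.real, d.imag])
            return least_squares(resid, x0).x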

  12. Image processing via VLSI: A concept paper

    NASA Technical Reports Server (NTRS)

    Nathan, R.

    1982-01-01

    Implementing specific image processing algorithms via very large scale integrated (VLSI) systems offers a potent solution to the problem of handling high data rates. Two algorithms stand out as being particularly critical: geometric map transformation and filtering or correlation. These two functions form the basis for data calibration, registration, and mosaicking. VLSI presents itself as an inexpensive ancillary function to be added to almost any general purpose computer, and if the geometry and filter algorithms are implemented in VLSI, the processing rate bottleneck would be significantly relieved. A development approach is described that identifies the image processing functions limiting present systems in meeting future throughput needs, translates these functions into algorithms, implements the algorithms via VLSI technology, and interfaces the hardware to a general purpose digital computer.

  13. Bayesian methods in reliability

    NASA Astrophysics Data System (ADS)

    Sander, P.; Badoux, R.

    1991-11-01

    The present proceedings from a course on Bayesian methods in reliability encompass Bayesian statistical methods and their computational implementation, models for analyzing censored data from nonrepairable systems, the traits of repairable systems and growth models, the use of expert judgment, and a review of the problem of forecasting software reliability. Specific issues addressed include the use of Bayesian methods to estimate the leak rate of a gas pipeline, approximate analyses under great prior uncertainty, reliability estimation techniques, and a nonhomogeneous Poisson process. Also addressed are the calibration sets and seed variables of expert judgment systems for risk assessment, experimental illustrations of the use of expert judgment for reliability testing, and analyses of the predictive quality of software-reliability growth models such as the Weibull order statistics.

  14. Predicting protein function and other biomedical characteristics with heterogeneous ensembles

    PubMed Central

    Whalen, Sean; Pandey, Om Prakash

    2015-01-01

    Prediction problems in biomedical sciences, including protein function prediction (PFP), are generally quite difficult. This is due in part to incomplete knowledge of the cellular phenomenon of interest, the appropriateness and data quality of the variables and measurements used for prediction, as well as a lack of consensus regarding the ideal predictor for specific problems. In such scenarios, a powerful approach to improving prediction performance is to construct heterogeneous ensemble predictors that combine the output of diverse individual predictors that capture complementary aspects of the problems and/or datasets. In this paper, we demonstrate the potential of such heterogeneous ensembles, derived from stacking and ensemble selection methods, for addressing PFP and other similar biomedical prediction problems. Deeper analysis of these results shows that the superior predictive ability of these methods, especially stacking, can be attributed to their attention to the following aspects of the ensemble learning process: (i) better balance of diversity and performance, (ii) more effective calibration of outputs and (iii) more robust incorporation of additional base predictors. Finally, to make the effective application of heterogeneous ensembles to large complex datasets (big data) feasible, we present DataSink, a distributed ensemble learning framework, and demonstrate its sound scalability using the examined datasets. DataSink is publicly available from https://github.com/shwhalen/datasink. PMID:26342255
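
    DataSink is the authors' own framework (linked above); the generic stacking pattern it builds on can be sketched with scikit-learn, where a meta-learner is trained on the outputs of diverse base predictors:

        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier, StackingClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=500, random_state=0)
        stack = StackingClassifier(
            estimators=[("rf", RandomForestClassifier(random_state=0)),
                        ("svm", SVC(probability=True, random_state=0))],
            final_estimator=LogisticRegression())   # meta-learner over base outputs
        print(stack.fit(X[:400], y[:400]).score(X[400:], y[400:]))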

  15. Accommodating subject and instrument variations in spectroscopic determinations

    DOEpatents

    Haas, Michael J [Albuquerque, NM; Rowe, Robert K [Corrales, NM; Thomas, Edward V [Albuquerque, NM

    2006-08-29

    A method and apparatus for measuring a biological attribute, such as the concentration of an analyte, particularly a blood analyte in tissue such as glucose. The method utilizes spectrographic techniques in conjunction with an improved instrument-tailored or subject-tailored calibration model. In a calibration phase, calibration model data is modified to reduce or eliminate instrument-specific attributes, resulting in a calibration data set modeling intra-instrument or intra-subject variation. In a prediction phase, the prediction process is tailored for each target instrument separately using a minimal number of spectral measurements from each instrument or subject.

  16. Lyman alpha SMM/UVSP absolute calibration and geocoronal correction

    NASA Technical Reports Server (NTRS)

    Fontenla, Juan M.; Reichmann, Edwin J.

    1987-01-01

    Lyman alpha observations from the Ultraviolet Spectrometer Polarimeter (UVSP) instrument of the Solar Maximum Mission (SMM) spacecraft were analyzed and provide instrumental calibration details. Specific values of the instrument quantum efficiency, Lyman alpha absolute intensity, and correction for geocoronal absorption are presented.

  17. (abstract) A VLBI Test of Tropospheric Delay Calibration with WVRs

    NASA Technical Reports Server (NTRS)

    Linfield, R. P.; Teitelbaum, L. P.; Keihm, S. J.; Resch, G. M.; Mahoney, M. J.; Treuhaft, R. N.

    1994-01-01

    Dual frequency (S/X band) very long baseline interferometry (VLBI) observations were used to test troposphere calibration by water vapor radiometers (WVRs). Comparison of the VLBI and WVR measurements shows statistical agreement (specifically, their structure functions agree) on time scales shorter than 700 seconds. On longer time scales, VLBI instrumental errors become important. The improvement in VLBI residual delays from WVR calibration was consistent with the measured level of tropospheric fluctuations.
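    For readers unfamiliar with the statistic being compared, a delay structure function D(τ) = ⟨[x(t+τ) − x(t)]²⟩ can be computed as in the sketch below; the time series here is synthetic, not VLBI or WVR data:

    ```python
    # Sketch: temporal structure function D(tau) of a delay time series, the
    # statistic on which the VLBI/WVR comparison is based. Synthetic data.
    import numpy as np

    rng = np.random.default_rng(0)
    dt = 10.0                             # sample spacing in seconds
    x = np.cumsum(rng.normal(size=1000))  # toy delay series (random walk)

    def structure_function(x, max_lag):
        lags = np.arange(1, max_lag)
        return lags, np.array([np.mean((x[k:] - x[:-k])**2) for k in lags])

    lags, D = structure_function(x, 70)
    print("D(tau) at tau =", lags[:5] * dt, "s:", D[:5])
    ```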

  18. Specifying and calibrating instrumentations for wideband electronic power measurements. [in switching circuits

    NASA Technical Reports Server (NTRS)

    Lesco, D. J.; Weikle, D. H.

    1980-01-01

    The wideband electric power measurement related topics of electronic wattmeter calibration and specification are discussed. Tested calibration techniques are described in detail. Analytical methods used to determine the bandwidth requirements of instrumentation for switching circuit waveforms are presented and illustrated with examples from electric vehicle type applications. Analog multiplier wattmeters, digital wattmeters and calculating digital oscilloscopes are compared. The instrumentation characteristics which are critical to accurate wideband power measurement are described.

  19. The cost of uniqueness in groundwater model calibration

    NASA Astrophysics Data System (ADS)

    Moore, Catherine; Doherty, John

    2006-04-01

    Calibration of a groundwater model requires that hydraulic properties be estimated throughout a model domain. This generally constitutes an underdetermined inverse problem, for which a solution can only be found when some kind of regularization device is included in the inversion process. Inclusion of regularization in the calibration process can be implicit, for example through the use of zones of constant parameter value, or explicit, for example through solution of a constrained minimization problem in which parameters are made to respect preferred values, or preferred relationships, to the degree necessary for a unique solution to be obtained. The "cost of uniqueness" is this: no matter which regularization methodology is employed, the inevitable consequence of its use is a loss of detail in the calibrated field. This, in turn, can lead to erroneous predictions made by a model that is ostensibly "well calibrated". Information made available as a by-product of the regularized inversion process allows the reasons for this loss of detail to be better understood. In particular, it is easily demonstrated that the estimated value for a hydraulic property at any point within a model domain is, in fact, a weighted average of the true hydraulic property over a much larger area. This averaging process causes loss of resolution in the estimated field. Where hydraulic conductivity is the hydraulic property being estimated, high averaging weights exist in areas that are strategically disposed with respect to measurement wells, while other areas may contribute very little to the estimated hydraulic conductivity at any point within the model domain, possibly making the detection of hydraulic conductivity anomalies in these latter areas almost impossible. A study of the post-calibration parameter field covariance matrix allows further insights to be gained into the loss of system detail incurred through the calibration process. A comparison of pre- and post-calibration parameter covariance matrices shows that the latter often possess a much smaller spectral bandwidth than the former. It is also demonstrated that, as an inevitable consequence of the fact that a calibrated model cannot replicate every detail of the true system, model-to-measurement residuals can show a high degree of spatial correlation, a fact which must be taken into account when assessing these residuals either qualitatively, or quantitatively in the exploration of model predictive uncertainty. These principles are demonstrated using a synthetic case in which spatial parameter definition is based on pilot points, and calibration is implemented using both zones of piecewise constancy and constrained minimization regularization.
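    The weighted-average behaviour described above can be made concrete with a small linear-algebra sketch: for a linearized model y = Jp with Tikhonov regularization, the noise-free estimate is p̂ = Rp, where R = (JᵀJ + λI)⁻¹JᵀJ is the resolution matrix whose rows hold the averaging weights. This is a generic illustration with a synthetic J, not the pilot-point analysis of the paper:

    ```python
    # Sketch: a regularized inversion estimates a weighted average of the true
    # parameters. For a linear(ized) model y = J p, Tikhonov regularization
    # gives p_hat = R p_true in the noise-free case, with resolution matrix
    # R = (J^T J + lam I)^{-1} J^T J. J here is synthetic.
    import numpy as np

    rng = np.random.default_rng(1)
    J = rng.normal(size=(15, 40))  # 15 observations, 40 parameters: underdetermined
    lam = 0.1                      # regularization weight

    R = np.linalg.solve(J.T @ J + lam * np.eye(40), J.T @ J)

    p_true = rng.normal(size=40)
    p_hat = R @ p_true             # each entry is a weighted average of p_true
    print("averaging weights for parameter 0:", np.round(R[0, :5], 3))
    print("true vs estimated:", p_true[0], p_hat[0])
    ```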

  20. Panorama parking assistant system with improved particle swarm optimization method

    NASA Astrophysics Data System (ADS)

    Cheng, Ruzhong; Zhao, Yong; Li, Zhichao; Jiang, Weigang; Wang, Xin'an; Xu, Yong

    2013-10-01

    A panorama parking assistant system (PPAS) for the automotive aftermarket together with a practical improved particle swarm optimization method (IPSO) are proposed in this paper. In the PPAS system, four fisheye cameras are installed in the vehicle with different views, and four channels of video frames captured by the cameras are processed as a 360-deg top-view image around the vehicle. Besides the embedded design of PPAS, the key problem for image distortion correction and mosaicking is the efficiency of parameter optimization in the process of camera calibration. In order to address this problem, an IPSO method is proposed. Compared with other parameter optimization methods, the proposed method allows a certain range of dynamic change for the intrinsic and extrinsic parameters, and can exploit only one reference image to complete all of the optimization; therefore, the efficiency of the whole camera calibration is increased. The PPAS is commercially available, and the IPSO method is a highly practical way to increase the efficiency of the installation and the calibration of PPAS in automobile 4S shops.
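    A minimal particle swarm optimizer of the kind adapted by the IPSO method might look as follows; the quadratic objective is a stand-in for the actual reprojection-error cost of camera calibration, and all hyperparameters are illustrative:

    ```python
    # Minimal particle swarm optimization sketch of the kind used to tune
    # camera parameters during calibration; the 2-D sphere function stands in
    # for the real reprojection-error objective.
    import numpy as np

    def objective(p):                  # placeholder for reprojection error
        return np.sum(p**2, axis=1)

    rng = np.random.default_rng(0)
    n, dim, w, c1, c2 = 30, 2, 0.7, 1.5, 1.5
    pos = rng.uniform(-5, 5, (n, dim))
    vel = np.zeros((n, dim))
    pbest, pbest_val = pos.copy(), objective(pos)
    gbest = pbest[np.argmin(pbest_val)]

    for _ in range(100):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        vel = w*vel + c1*r1*(pbest - pos) + c2*r2*(gbest - pos)
        pos += vel
        val = objective(pos)
        better = val < pbest_val
        pbest[better], pbest_val[better] = pos[better], val[better]
        gbest = pbest[np.argmin(pbest_val)]

    print("best parameters:", gbest)
    ```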

  1. Use of commercial off-the-shelf digital cameras for scientific data acquisition and scene-specific color calibration

    PubMed Central

    Akkaynak, Derya; Treibitz, Tali; Xiao, Bei; Gürkan, Umut A.; Allen, Justine J.; Demirci, Utkan; Hanlon, Roger T.

    2014-01-01

    Commercial off-the-shelf digital cameras are inexpensive and easy-to-use instruments that can be used for quantitative scientific data acquisition if images are captured in raw format and processed so that they maintain a linear relationship with scene radiance. Here we describe the image-processing steps required for consistent data acquisition with color cameras. In addition, we present a method for scene-specific color calibration that increases the accuracy of color capture when a scene contains colors that are not well represented in the gamut of a standard color-calibration target. We demonstrate applications of the proposed methodology in the fields of biomedical engineering, artwork photography, perception science, marine biology, and underwater imaging. PMID:24562030
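    A common core of such scene-specific calibration is fitting a 3x3 correction matrix from patches measured in the scene; the sketch below (synthetic patch values, plain least squares) illustrates the idea rather than the authors' exact pipeline:

    ```python
    # Sketch: fit a 3x3 matrix M mapping linear camera RGB to reference RGB
    # from color-chart patches, in the least-squares sense. Synthetic values.
    import numpy as np

    rng = np.random.default_rng(0)
    reference = rng.uniform(0, 1, (24, 3))          # known patch colors
    M_true = np.array([[1.2, -0.1, 0.0],
                       [0.05, 0.9, 0.05],
                       [0.0, -0.2, 1.1]])
    camera = reference @ M_true.T + rng.normal(0, 0.01, (24, 3))  # raw RGB

    # Solve camera @ M ~= reference for M in the least-squares sense.
    M, *_ = np.linalg.lstsq(camera, reference, rcond=None)
    corrected = camera @ M
    print("max residual:", np.abs(corrected - reference).max())
    ```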

  2. SU-C-204-02: Improved Patient-Specific Optimization of the Stopping Power Calibration for Proton Therapy Planning Using a Single Proton Radiography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rinaldi, I; Ludwig Maximilian University, Garching, DE; Heidelberg University Hospital, Heidelberg, DE

    2015-06-15

    Purpose: We present an improved method to calculate patient-specific calibration curves to convert X-ray computed tomography (CT) Hounsfield Units (HU) to relative stopping powers (RSP) for proton therapy treatment planning. Methods: By optimizing the HU-RSP calibration curve, the difference between a proton radiographic image and a digitally reconstructed X-ray radiograph (DRR) is minimized. The feasibility of this approach has previously been demonstrated. This scenario assumes that all discrepancies between proton radiography and DRR originate from uncertainties in the HU-RSP curve. In reality, external factors cause imperfections in the proton radiography, such as misalignment compared to the DRR and unfaithful representation of geometric structures ("blurring"). We analyze these effects based on synthetic datasets of anthropomorphic phantoms and suggest an extended optimization scheme which explicitly accounts for these effects. Performance of the method has been tested for various simulated irradiation parameters. The ultimate purpose of the optimization is to minimize uncertainties in the HU-RSP calibration curve. We therefore suggest and perform a thorough statistical treatment to quantify the accuracy of the optimized HU-RSP curve. Results: We demonstrate that without extending the optimization scheme, spatial blurring (equivalent to FWHM=3mm convolution) in the proton radiographies can cause up to 10% deviation between the optimized and the ground truth HU-RSP calibration curve. Instead, results obtained with our extended method reach 1% or better correspondence. We have further calculated gamma index maps for different acceptance levels. With DTA=0.5mm and RD=0.5%, a passing ratio of 100% is obtained with the extended method, while an optimization neglecting effects of spatial blurring only reaches ∼90%. Conclusion: Our contribution underlines the potential of a single proton radiography to generate a patient-specific calibration curve and to improve dose delivery by optimizing the HU-RSP calibration curve, as long as all sources of systematic incongruence are properly modeled.

  3. Development of a 300 L Calibration Bath for Oceanographic Thermometers

    NASA Astrophysics Data System (ADS)

    Baba, S.; Yamazawa, K.; Nakano, T.; Saito, I.; Tamba, J.; Wakimoto, T.; Katoh, K.

    2017-11-01

    The Japan Agency for Marine-Earth Science and Technology (JAMSTEC) has been developing a 300 L calibration bath to calibrate 24 oceanographic thermometers (OT) simultaneously and thereby reduce the calibration work load necessary to service more than 180 OT every year. This study investigated characteristics of the developed 300 L calibration bath using a SBE 3plus thermometer produced by an OT manufacturer. We also used 11 thermistor thermometers that were calibrated to be traceable to the international temperature scale of 1990 (ITS-90) within 1 mK of standard uncertainty through collaboration of JAMSTEC and NMIJ/AIST. Results show that the time stability of temperature of the developed bath was within ± 1 mK. Furthermore, the temperature uniformity was ± 1.3 mK. The expanded uncertainty (k=2) components for the characteristics of the developed 300 L calibration bath were estimated as 2.9 mK, which is much less than the value of 10 mK: the required specification for uncertainty of calibration for the OT. These results demonstrated the utility of this 300 L calibration bath as a device for use with a new calibration system.

  4. Development of a calibration equipment for spectrometer qualification

    NASA Astrophysics Data System (ADS)

    Michel, C.; Borguet, B.; Boueé, A.; Blain, P.; Deep, A.; Moreau, V.; François, M.; Maresi, L.; Myszkowiak, A.; Taccola, M.; Versluys, J.; Stockman, Y.

    2017-09-01

    With the development of new spectrometer concepts, calibration facilities must be adapted to characterize their performance correctly. The spectro-imaging performance figures are mainly modulation transfer function, spectral response, resolution and registration, polarization, straylight, and radiometric calibration. The challenge of this calibration development is to achieve better performance than the item under test using mostly standard components. Because only the spectrometer subsystem needs to be calibrated, the calibration facility has to simulate the geometrical "behaviours" of the imaging system. A trade-off study indicated that no commercial devices could completely fulfil all the requirements, so it was necessary to opt for an in-house telecentric achromatic design. The proposed concept is based on an Offner design, which mainly allows the use of simple spherical mirrors and coverage of the spectral range. The spectral range is covered with a monochromator. Because of the large number of parameters to record, the calibration facility is fully automated. The performance of the calibration system has been verified by analysis and experimentally. Results achieved recently on a free-form grating Offner spectrometer demonstrate the capabilities of this new calibration facility. In this paper, a full calibration facility is described, developed specifically for a new free-form spectro-imager.

  5. The Seventh SeaWiFS Intercalibration Round-Robin Experiment (SIRREX-7), March 1999

    NASA Technical Reports Server (NTRS)

    Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); McLean, Scott; Sherman, Jennifer; Small, Mark; Lazin, Gordana; Zibordi, Giuseppe; Brown, James W.; McClain, Charles R. (Technical Monitor)

    2002-01-01

    This report documents the scientific activities during the seventh SeaWiFS Intercalibration Round-Robin Experiment (SIRREX-7) held at Satlantic, Inc. (Halifax, Canada). The overall objective of SIRREX-7 was to determine the uncertainties of radiometric calibrations and measurements at a single calibration facility. Specifically, this involved the estimation of the uncertainties in a) lamp standards, b) plaque standards (including the uncertainties associated with plaque illumination non-uniformity), c) radiance calibrations, and d) irradiance calibrations. The investigation of the uncertainties in lamp standards included a comparison between a calibration of a new FEL by the National Institute of Standards and Technology (NIST) and Optronic Laboratories, Inc. In addition, the rotation and polarization sensitivity of radiometers were determined, and a procedure for transferring an absolute calibration to portable light sources was defined and executed.

  6. Application of Pressure Sensitive Paint to Confined Flow at Mach Number 2.5

    NASA Technical Reports Server (NTRS)

    Lepicovsky, J.; Bencic, T. J.; Bruckner, R. J.

    1998-01-01

    Pressure sensitive paint (PSP) is a novel technology that is being used frequently in external aerodynamics. For internal flows in narrow channels, and applications at elevated nonuniform temperatures, however, there are still unresolved problems that complicate the procedures for calibrating PSP signals. To address some of these problems, investigations were carried out in a narrow channel with supersonic flows of Mach 2.5. The first set of tests focused on the distribution of the wall pressure in the diverging section of the test channel downstream of the nozzle throat. The second set dealt with the distribution of wall static pressure due to the shock/wall interaction caused by a 25 deg. wedge in the constant Mach number part of the test section. In addition, the total temperature of the flow was varied to assess the effects of temperature on the PSP signal. Finally, contamination of the pressure field data, caused by internal reflection of the PSP signal in a narrow channel, was demonstrated. The local wall pressures were measured with static taps, and the wall pressure distributions were acquired by using PSP. The PSP results gave excellent qualitative impressions of the pressure field investigated. However, the quantitative results, specifically the accuracy of the PSP data in narrow channels, show that improvements need to be made in the calibration procedures, particularly for heated flows. In the cases investigated, the experimental error had a standard deviation of ±8.0% for the unheated flow, and ±16.0% for the heated flow, at an average pressure of 11 kPa.

  7. Impact of influent data frequency and model structure on the quality of WWTP model calibration and uncertainty.

    PubMed

    Cierkens, Katrijn; Plano, Salvatore; Benedetti, Lorenzo; Weijers, Stefan; de Jonge, Jarno; Nopens, Ingmar

    2012-01-01

    Application of activated sludge models (ASMs) to full-scale wastewater treatment plants (WWTPs) is still hampered by the problem of model calibration of these over-parameterised models. This either requires expert knowledge or global methods that explore a large parameter space. However, a better balance in structure between the submodels (ASM, hydraulic, aeration, etc.) and improved quality of influent data result in much smaller calibration efforts. In this contribution, a methodology is proposed that links data frequency and model structure to calibration quality and output uncertainty. It is composed of defining the model structure, the input data, an automated calibration, confidence interval computation and uncertainty propagation to the model output. Apart from the last step, the methodology is applied to an existing WWTP using three models differing only in the aeration submodel. A sensitivity analysis was performed on all models, allowing the ranking of the most important parameters to select in the subsequent calibration step. The aeration submodel proved very important to get good NH(4) predictions. Finally, the impact of data frequency was explored. Lowering the frequency resulted in larger deviations of parameter estimates from their default values and larger confidence intervals. Autocorrelation due to high frequency calibration data has an opposite effect on the confidence intervals. The proposed methodology opens doors to facilitate and improve calibration efforts and to design measurement campaigns.

  8. Improved Radial Velocity Precision with a Tunable Laser Calibrator

    NASA Astrophysics Data System (ADS)

    Cramer, Claire; Brown, S.; Dupree, A. K.; Lykke, K. R.; Smith, A.; Szentgyorgyi, A.

    2010-01-01

    We present radial velocities obtained using a novel laser-based wavelength calibration technique. We have built a prototype laser calibrator for the Hectochelle spectrograph at the MMT 6.5 m telescope. The Hectochelle is a high-dispersion, fiber-fed, multi-object spectrograph capable of recording up to 240 spectra simultaneously with a resolving power of 40000. The standard wavelength calibration method makes use of spectra from thorium-argon hollow cathode lamps shining directly onto the fibers. The difference in light path between calibration and science light as well as the uneven distribution of spectral lines are believed to introduce errors of up to several hundred m/s in the wavelength scale. Our tunable laser wavelength calibrator solves these problems. The laser is bright enough for use with a dome screen, allowing the calibration light path to better match the science light path. Further, the laser is tuned in regular steps across a spectral order to generate a calibration spectrum, creating a comb of evenly-spaced lines on the detector. Using the solar spectrum reflected from the atmosphere to record the same spectrum in every fiber, we show that laser wavelength calibration brings radial velocity uncertainties down below 100 m/s. We present these results as well as an application of tunable laser calibration to stellar radial velocities determined with the infrared Ca triplet in globular clusters M15 and NGC 7492. We also suggest how the tunable laser could be useful for other instruments, including single-object, cross-dispersed echelle spectrographs, and adapted for infrared spectroscopy.

  9. Novel Calibration Algorithm for a Three-Axis Strapdown Magnetometer

    PubMed Central

    Liu, Yan Xia; Li, Xi Sheng; Zhang, Xiao Juan; Feng, Yi Bo

    2014-01-01

    A complete error calibration model with 12 independent parameters is established by analyzing the three-axis magnetometer error mechanism. The model conforms to an ellipsoid restriction; the parameters of the ellipsoid equation are estimated, and the ellipsoid coefficient matrix is derived. However, the calibration matrix cannot be determined completely, as there are fewer ellipsoid parameters than calibration model parameters. Mathematically, the calibration matrix derived from the ellipsoid coefficient matrix by different matrix decomposition methods is not unique, and there exists an unknown rotation matrix R between them. This paper puts forward a constant intersection angle method (the angle between the geomagnetic field and the gravitational field is fixed) to estimate R. The Tikhonov method is adopted to address the problem that rounding or other errors may seriously affect the calculation of R when the condition number of the matrix is very large. The geomagnetic field vector and heading error are further corrected by R. The constant intersection angle method is convenient and practical, as it is free from any additional calibration procedure or coordinate transformation. In addition, a simulation experiment indicates that the heading error declines from ±1°, calibrated by classical ellipsoid fitting, to ±0.2°, calibrated by the constant intersection angle method, at a signal-to-noise ratio of 50 dB. An actual experiment shows that the heading error is further corrected from ±0.8°, calibrated by classical ellipsoid fitting, to ±0.3°, calibrated by the constant intersection angle method. PMID:24831110
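    To convey the flavor of the ellipsoid-restriction step, the sketch below fits a simplified, axis-aligned ellipsoid to synthetic magnetometer readings by linear least squares and recovers the hard-iron offsets; the paper's full 12-parameter model also includes cross-axis terms and the rotation R discussed above:

    ```python
    # Sketch: fit an axis-aligned ellipsoid
    #   a1*x^2 + a2*y^2 + a3*z^2 + a4*x + a5*y + a6*z = 1
    # to raw magnetometer data by linear least squares and recover the
    # hard-iron offsets. Synthetic data; cross-axis terms omitted.
    import numpy as np

    rng = np.random.default_rng(0)
    true_offset = np.array([10.0, -5.0, 3.0])
    true_scale = np.array([1.0, 1.2, 0.8])
    u = rng.normal(size=(500, 3))
    u /= np.linalg.norm(u, axis=1, keepdims=True)   # unit field directions
    m = 50.0 * u * true_scale + true_offset + rng.normal(0, 0.1, (500, 3))

    x, y, z = m.T
    D = np.column_stack([x*x, y*y, z*z, x, y, z])
    a = np.linalg.lstsq(D, np.ones(len(m)), rcond=None)[0]

    offset = -a[3:] / (2 * a[:3])   # center of the fitted ellipsoid
    print("estimated hard-iron offset:", offset)    # ~ [10, -5, 3]
    ```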

  10. [A plane-based hand-eye calibration method for surgical robots].

    PubMed

    Zeng, Bowei; Meng, Fanle; Ding, Hui; Liu, Wenbo; Wu, Di; Wang, Guangzhi

    2017-04-01

    In order to calibrate the hand-eye transformation of a surgical robot and laser range finder (LRF), a calibration algorithm based on a planar template was designed. A mathematical model of the planar template is given and the approach to solving its equations is derived. Aiming at the problem of measurement error in a practical system, we propose a new algorithm for selecting coplanar data. This algorithm can effectively eliminate data with considerable measurement error and thereby improve the calibration accuracy. Furthermore, three orthogonal planes were used to improve the calibration accuracy, with a nonlinear optimization used for the hand-eye calibration. To verify the calibration precision, we used the LRF to measure some fixed points in different directions and a cuboid's surfaces. Experimental results indicated that the precision of the single planar template method was (1.37±0.24) mm, and that of the three orthogonal planes method was (0.37±0.05) mm. Moreover, the mean FRE of three-dimensional (3D) points was 0.24 mm and the mean TRE was 0.26 mm. The maximum angle measurement error was 0.4 degrees. Experimental results show that the method presented in this paper is effective, with high accuracy, and can meet the requirements of precise surgical robot localization.

  11. An accurate on-site calibration system for electronic voltage transformers using a standard capacitor

    NASA Astrophysics Data System (ADS)

    Hu, Chen; Chen, Mian-zhou; Li, Hong-bin; Zhang, Zhu; Jiao, Yang; Shao, Haiming

    2018-05-01

    Ordinarily electronic voltage transformers (EVTs) are calibrated off-line and the calibration procedure requires complex switching operations, which will influence the reliability of the power grid and induce large economic losses. To overcome this problem, this paper investigates a 110 kV on-site calibration system for EVTs, including a standard channel, a calibrated channel and a PC equipped with the LabView environment. The standard channel employs a standard capacitor and an analogue integrating circuit to reconstruct the primary voltage signal. Moreover, an adaptive full-phase discrete Fourier transform (DFT) algorithm is proposed to extract electrical parameters. The algorithm involves the process of extracting the frequency of the grid, adjusting the operation points, and calculating the results using DFT. In addition, an insulated automatic lifting device is designed to realize the live connection of the standard capacitor, which is driven by a wireless remote controller. A performance test of the capacitor verifies the accurateness of the standard capacitor. A system calibration test shows that the system ratio error is less than 0.04% and the phase error is below 2′, which meets the requirement of the 0.2 accuracy class. Finally, the developed calibration system was used in a substation, and the field test data validates the availability of the system.
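    The core of such DFT-based parameter extraction can be sketched as a single-bin projection at the grid frequency; the values below are invented, and the paper's adaptive algorithm additionally estimates the frequency and adjusts the window to span whole cycles:

    ```python
    # Sketch: extract amplitude and phase at the grid frequency with a
    # single-bin DFT, as done for each channel of the calibration system.
    # Signal parameters are invented.
    import numpy as np

    fs, f0, n = 10000.0, 50.0, 2000          # sample rate, grid freq, samples
    t = np.arange(n) / fs
    v = 100.0 * np.cos(2*np.pi*f0*t + 0.3)   # one channel of the 50 Hz signal

    # Project onto the complex exponential at f0. With the window spanning an
    # integer number of cycles, spectral leakage is avoided.
    bin0 = np.sum(v * np.exp(-2j*np.pi*f0*t)) * 2 / n
    print("amplitude:", abs(bin0), "phase (rad):", np.angle(bin0))
    ```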

  12. Basic Geometric Support of Systems for Earth Observation from Geostationary and Highly Elliptical Orbits

    NASA Astrophysics Data System (ADS)

    Gektin, Yu. M.; Egoshkin, N. A.; Eremeev, V. V.; Kuznecov, A. E.; Moskatinyev, I. V.; Smelyanskiy, M. B.

    2017-12-01

    A set of standardized models and algorithms for geometric normalization and georeferencing images from geostationary and highly elliptical Earth observation systems is considered. The algorithms can process information from modern scanning multispectral sensors with two-coordinate scanning and represent normalized images in optimal projection. Problems of the high-precision ground calibration of the imaging equipment using reference objects, as well as issues of the flight calibration and refinement of geometric models using the absolute and relative reference points, are considered. Practical testing of the models, algorithms, and technologies is performed in the calibration of sensors for spacecraft of the Electro-L series and during the simulation of the Arktika prospective system.

  13. Wavelength calibration of arc spectra using intensity modelling

    NASA Astrophysics Data System (ADS)

    Balona, L. A.

    2010-12-01

    Wavelength calibration for astronomical spectra usually involves the use of different arc lamps for different resolving powers to reduce the problem of line blending. We present a technique which eliminates the necessity of different lamps. A lamp producing a very rich spectrum, normally used only at high resolving powers, can be used at the lowest resolving power as well. This is accomplished by modelling the observed arc spectrum and solving for the wavelength calibration as part of the modelling procedure. Line blending is automatically incorporated as part of the model. The method has been implemented and successfully tested on spectra taken with the Robert Stobie spectrograph of the Southern African Large Telescope.
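    The essence of the approach, fitting dispersion coefficients by forward-modelling the blended arc spectrum, can be sketched as follows; the line list, widths, and coefficients are invented, and the real implementation models many more lines and instrumental effects:

    ```python
    # Sketch: solve for the wavelength calibration by modelling the arc
    # spectrum. Known line wavelengths are mapped to pixels via a dispersion
    # polynomial, the blended spectrum is a sum of Gaussians, and the
    # polynomial coefficients are fitted to the observed spectrum.
    import numpy as np
    from scipy.optimize import least_squares

    pix = np.arange(1024)
    lines = np.array([410.0, 423.5, 467.2, 512.8, 550.0])  # known lines (nm)
    amps = np.array([1.0, 0.6, 0.8, 0.4, 0.9])
    sigma_pix = 3.0                                        # instrumental width

    def model(coeffs):
        lam = np.polyval(coeffs, pix)            # pixel -> wavelength mapping
        centers = np.interp(lines, lam, pix)     # line centers in pixel space
        prof = np.exp(-0.5*((pix - centers[:, None])/sigma_pix)**2)
        return (amps[:, None] * prof).sum(axis=0)

    true = [0.15, 400.0]                         # linear dispersion (synthetic)
    observed = model(true) + np.random.default_rng(0).normal(0, 0.01, pix.size)

    fit = least_squares(lambda c: model(c) - observed, x0=[0.152, 399.0])
    print("fitted dispersion coefficients:", fit.x)
    ```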

  14. The development of local calibration factors for implementing the highway safety manual in Maryland.

    DOT National Transportation Integrated Search

    2014-03-01

    The goal of the study was to determine local calibration factors (LCFs) to adjust predicted motor : vehicle traffic crashes for the Maryland-specific application of the Highway Safety Manual : (HSM). Since HSM predictive models were developed using d...

  15. APEX - the Hyperspectral ESA Airborne Prism Experiment

    PubMed Central

    Itten, Klaus I.; Dell'Endice, Francesco; Hueni, Andreas; Kneubühler, Mathias; Schläpfer, Daniel; Odermatt, Daniel; Seidel, Felix; Huber, Silvia; Schopfer, Jürg; Kellenberger, Tobias; Bühler, Yves; D'Odorico, Petra; Nieke, Jens; Alberti, Edoardo; Meuleman, Koen

    2008-01-01

    The airborne ESA-APEX (Airborne Prism Experiment) hyperspectral mission simulator is described with its distinct specifications to provide high quality remote sensing data. The concept of an automatic calibration, performed in the Calibration Home Base (CHB) by using the Control Test Master (CTM), the In-Flight Calibration facility (IFC), quality flagging (QF) and specific processing in a dedicated Processing and Archiving Facility (PAF), and vicarious calibration experiments are presented. A preview on major applications and the corresponding development efforts to provide scientific data products up to level 2/3 to the user is presented for limnology, vegetation, aerosols, general classification routines and rapid mapping tasks. BRDF (Bidirectional Reflectance Distribution Function) issues are discussed and the spectral database SPECCHIO (Spectral Input/Output) introduced. The optical performance as well as the dedicated software utilities make APEX a state-of-the-art hyperspectral sensor, capable of (a) satisfying the needs of several research communities and (b) helping the understanding of the Earth's complex mechanisms. PMID:27873868

  16. An experimental protocol for the definition of upper limb anatomical frames on children using magneto-inertial sensors.

    PubMed

    Ricci, L; Formica, D; Tamilia, E; Taffoni, F; Sparaci, L; Capirci, O; Guglielmelli, E

    2013-01-01

    Motion capture based on magneto-inertial sensors is a technology enabling data collection in unstructured environments, allowing "out of the lab" motion analysis. This technology is a good candidate for motion analysis of children thanks to its reduced weight and size, as well as the use of wireless communication, which have improved its wearability and reduced its obtrusiveness. A key issue in the application of such technology for motion analysis is its calibration, i.e. a process that allows mapping orientation information from each sensor to a physiological reference frame. To date, even if several calibration procedures are available for adults, no specific calibration procedures have been developed for children. This work addresses this specific issue by presenting a calibration procedure for motion capture of the thorax and upper limbs in healthy children. Reported results suggest performance comparable with similar studies on adults and emphasize some critical issues, opening the way to further improvements.

  17. Assessment of the first radiances received from the VISSR Atmospheric Sounder (VAS) instrument

    NASA Technical Reports Server (NTRS)

    Chesters, D.; Uccellini, L. W.; Montgomery, H.; Mostek, A.; Robinson, W.

    1981-01-01

    The first orderly, calibrated radiances from the VAS-D instrument on the GOES-4 satellite are examined for: image quality, radiometric precision, radiation transfer verification at clear air radiosonde sites, regression retrieval accuracy, and mesoscale analysis features. Postlaunch problems involving calibration and data processing irregularities of scientific or operational significance are included. The radiances provide good visual and relative radiometric data for empirically conditioned retrievals of mesoscale temperature and moisture fields in clear air.

  18. MERITXELL: The Multifrequency Experimental Radiometer with Interference Tracking for Experiments over Land and Littoral-Instrument Description, Calibration and Performance.

    PubMed

    Querol, Jorge; Tarongí, José Miguel; Forte, Giuseppe; Gómez, José Javier; Camps, Adriano

    2017-05-10

    MERITXELL is a ground-based multisensor instrument that includes a multiband dual-polarization radiometer, a GNSS reflectometer, and several optical sensors. Its main goals are twofold: to test data fusion techniques, and to develop Radio-Frequency Interference (RFI) detection, localization and mitigation techniques. The former is necessary to retrieve complementary data useful to develop geophysical models with improved accuracy, whereas the latter aims at solving one of the most important problems of microwave radiometry. This paper describes the hardware design, the instrument control architecture, the calibration of the radiometer, and several captures of RFI signals taken with MERITXELL in an urban environment. The multiband radiometer has a dual linear polarization total-power radiometer topology, and it covers the L-, S-, C-, X-, K-, Ka-, and W-band. Its back-end stage is based on a spectrum analyzer structure which allows real-time signal processing, while the rest of the sensors are controlled by a host computer where the off-line processing takes place. The calibration of the radiometer is performed using the hot-cold load procedure, together with the tipping curves technique in the case of the five upper frequency bands. Finally, some captures of RFI signals are shown for most of the radiometric bands under analysis, which evidence the problem of RFI in microwave radiometry and the limitations it imposes on external calibration.

  19. MERITXELL: The Multifrequency Experimental Radiometer with Interference Tracking for Experiments over Land and Littoral—Instrument Description, Calibration and Performance

    PubMed Central

    Querol, Jorge; Tarongí, José Miguel; Forte, Giuseppe; Gómez, José Javier; Camps, Adriano

    2017-01-01

    MERITXELL is a ground-based multisensor instrument that includes a multiband dual-polarization radiometer, a GNSS reflectometer, and several optical sensors. Its main goals are twofold: to test data fusion techniques, and to develop Radio-Frequency Interference (RFI) detection, localization and mitigation techniques. The former is necessary to retrieve complementary data useful to develop geophysical models with improved accuracy, whereas the latter aims at solving one of the most important problems of microwave radiometry. This paper describes the hardware design, the instrument control architecture, the calibration of the radiometer, and several captures of RFI signals taken with MERITXELL in an urban environment. The multiband radiometer has a dual linear polarization total-power radiometer topology, and it covers the L-, S-, C-, X-, K-, Ka-, and W-band. Its back-end stage is based on a spectrum analyzer structure which allows real-time signal processing, while the rest of the sensors are controlled by a host computer where the off-line processing takes place. The calibration of the radiometer is performed using the hot-cold load procedure, together with the tipping curves technique in the case of the five upper frequency bands. Finally, some captures of RFI signals are shown for most of the radiometric bands under analysis, which evidence the problem of RFI in microwave radiometry and the limitations it imposes on external calibration. PMID:28489056

  20. Effects of calibration methods on quantitative material decomposition in photon-counting spectral computed tomography using a maximum a posteriori estimator.

    PubMed

    Curtis, Tyler E; Roeder, Ryan K

    2017-10-01

    Advances in photon-counting detectors have enabled quantitative material decomposition using multi-energy or spectral computed tomography (CT). Supervised methods for material decomposition utilize an estimated attenuation for each material of interest at each photon energy level, which must be calibrated based upon calculated or measured values for known compositions. Measurements using a calibration phantom can advantageously account for system-specific noise, but the effect of calibration methods on the material basis matrix and subsequent quantitative material decomposition has not been experimentally investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on the accuracy of quantitative material decomposition in the image domain. Gadolinium was chosen as a model contrast agent in imaging phantoms, which also contained bone tissue and water as negative controls. The maximum gadolinium concentration (30, 60, and 90 mM) and total number of concentrations (2, 4, and 7) were independently varied to systematically investigate effects of the material basis matrix and scaling factor calibration on the quantitative (root mean squared error, RMSE) and spatial (sensitivity and specificity) accuracy of material decomposition. Images of calibration and sample phantoms were acquired using a commercially available photon-counting spectral micro-CT system with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material decomposition of gadolinium, calcium, and water was performed for each calibration method using a maximum a posteriori estimator. Both the quantitative and spatial accuracy of material decomposition were most improved by using an increased maximum gadolinium concentration (range) in the basis matrix calibration; the effects of using a greater number of concentrations were relatively small in magnitude by comparison. The material basis matrix calibration was more sensitive to changes in the calibration methods than the scaling factor calibration. The material basis matrix calibration significantly influenced both the quantitative and spatial accuracy of material decomposition, while the scaling factor calibration influenced quantitative but not spatial accuracy. Importantly, the median RMSE of material decomposition was as low as ~1.5 mM (~0.24 mg/mL gadolinium), which was similar in magnitude to that measured by optical spectroscopy on the same samples. The accuracy of quantitative material decomposition in photon-counting spectral CT was significantly influenced by calibration methods which must therefore be carefully considered for the intended diagnostic imaging application. © 2017 American Association of Physicists in Medicine.
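    The supervised decomposition step can be sketched as a linear system per voxel; the basis matrix values below are invented, and a plain least-squares solve stands in for the paper's maximum a posteriori estimator:

    ```python
    # Sketch: image-domain material decomposition. Measured attenuation in
    # each energy bin is modelled as a linear combination of basis materials,
    # with the basis matrix taken from calibration phantom scans. Synthetic
    # values; least squares stands in for the MAP estimator of the paper.
    import numpy as np

    # rows: 5 energy bins; columns: basis materials (gadolinium, calcium, water)
    A = np.array([[8.0, 2.0, 1.0],
                  [6.5, 1.8, 0.9],
                  [9.5, 1.6, 0.8],   # bin above the Gd k-edge
                  [7.0, 1.4, 0.8],
                  [5.0, 1.2, 0.7]])

    x_true = np.array([0.03, 0.10, 0.87])   # material fractions in one voxel
    b = A @ x_true + np.random.default_rng(0).normal(0, 0.01, 5)

    x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
    print("decomposed fractions:", x_hat)
    ```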

  1. The Red Edge Problem in asteroid band parameter analysis

    NASA Astrophysics Data System (ADS)

    Lindsay, Sean S.; Dunn, Tasha L.; Emery, Joshua P.; Bowles, Neil E.

    2016-04-01

    Near-infrared reflectance spectra of S-type asteroids contain two absorptions at 1 and 2 μm (band I and II) that are diagnostic of mineralogy. A parameterization of these two bands is frequently employed to determine the mineralogy of S(IV) asteroids through the use of ordinary chondrite calibration equations that link the mineralogy to band parameters. The most widely used calibration study uses a Band II terminal wavelength point (red edge) at 2.50 μm. However, due to the limitations of the NIR detectors on prominent telescopes used in asteroid research, spectral data for asteroids are typically only reliable out to 2.45 μm. We refer to this discrepancy as "The Red Edge Problem." In this report, we evaluate the associated errors for measured band area ratios (BAR = Area BII/BI) and calculated relative abundance measurements. We find that the Red Edge Problem is often not the dominant source of error for the observationally limited red edge set at 2.45 μm, but it frequently is for a red edge set at 2.40 μm. The error, however, is one sided and therefore systematic. As such, we provide equations to adjust measured BARs to values with a different red edge definition. We also provide new ol/(ol+px) calibration equations for red edges set at 2.40 and 2.45 μm.

  2. Modelling exploration of non-stationary hydrological system

    NASA Astrophysics Data System (ADS)

    Kim, Kue Bum; Kwon, Hyun-Han; Han, Dawei

    2015-04-01

    Traditional hydrological modelling assumes that the catchment does not change with time (i.e., stationary conditions), which means a model calibrated for the historical period is valid for the future period. However, in reality, due to changes in climate and catchment conditions, this stationarity assumption may not hold in the future. It is a challenge to make the hydrological model adaptive to future climate and catchment conditions that are not observable at the present time. In this study, a lumped conceptual rainfall-runoff model called IHACRES was applied to a catchment in southwest England. Long observation records from 1961 to 2008 were used, and seasonal calibration was carried out (only the summer period is explored further here because it is more sensitive to climate and land cover change than the other three seasons), since there are significant seasonal rainfall patterns. We expect that model performance can be improved by calibrating the model for individual seasons. The data were split into calibration and validation periods, with the validation period intended to represent future unobserved situations. The success of the non-stationary model depends not only on good performance during the calibration period but also during the validation period. Initially, the calibration was based on changing the model parameters with time. A methodology is proposed to adapt the parameters using step forward and backward selection schemes. However, in validation both the forward and backward multiple-parameter-changing models failed. One problem is that regression against time is not reliable, since the trend may not have a monotonic linear relationship with time. The second issue is that changing multiple parameters makes the selection process very complex, which is time consuming and not effective in the validation period. As a result, two new concepts were explored. First, only one parameter is selected for adjustment while the other parameters are held constant. Secondly, regression is made against climate condition instead of against time. This new approach proved very effective, and the non-stationary model worked well in both the calibration and validation periods. Although the catchment is specific to southwest England and the data cover only the summer period, the methodology proposed in this study is general and applicable to other catchments. We hope this study will stimulate the hydrological community to explore a variety of sites so that valuable experience and knowledge can be gained to improve our understanding of such a complex modelling issue in climate change impact assessment.

  3. Photometric Calibration of Consumer Video Cameras

    NASA Technical Reports Server (NTRS)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia. The purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors. The present method is a refined version of the calibration method developed to solve the Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used). To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to analyze. The light source used to generate the calibration images is an artificial variable star comprising a Newtonian collimator illuminated by a light source modulated by a rotating variable neutral-density filter. This source acts as a point source, the brightness of which varies at a known rate. A video camera to be calibrated is aimed at this source. Fixed neutral-density filters are inserted in or removed from the light path as needed to make the video image of the source appear to fluctuate between dark and saturated bright. The resulting video-image data are analyzed by use of custom software that determines the integrated signal in each video frame and determines the system response curve (measured output signal versus input brightness). These determinations constitute the calibration, which is thereafter used in automatic, frame-by-frame processing of the data from the video images to be analyzed.
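    The inversion at the heart of the method, mapping a measured signal back to brightness through the calibrated response curve, can be sketched as follows; the saturating response here is synthetic, and the filter-based extension beyond saturation is omitted:

    ```python
    # Sketch: store the measured system response (integrated signal vs. known
    # source brightness) as a lookup table and invert it to photometer unknown
    # objects. The response curve here is a synthetic saturating nonlinearity.
    import numpy as np

    brightness = np.linspace(0.0, 10.0, 50)          # known input brightness
    signal = 255 * (1 - np.exp(-0.5 * brightness))   # measured camera output

    def to_brightness(measured):
        # invert the monotonic response curve by interpolation
        return np.interp(measured, signal, brightness)

    print(to_brightness(200.0))   # recovered brightness for a measured signal
    ```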

  4. Improved quantification of important beer quality parameters based on nonlinear calibration methods applied to FT-MIR spectra.

    PubMed

    Cernuda, Carlos; Lughofer, Edwin; Klein, Helmut; Forster, Clemens; Pawliczek, Marcin; Brandstetter, Markus

    2017-01-01

    During the production process of beer, it is of utmost importance to guarantee a high consistency of the beer quality. For instance, the bitterness is an essential quality parameter which has to be controlled within the specifications at the beginning of the production process in the unfermented beer (wort) as well as in final products such as beer and beer mix beverages. Nowadays, analytical techniques for quality control in beer production are mainly based on manual supervision, i.e., samples are taken from the process and analyzed in the laboratory. This typically requires significant effort from lab technicians for only a small fraction of samples to be analyzed, which leads to significant costs for beer breweries and companies. Fourier transform mid-infrared (FT-MIR) spectroscopy was used in combination with nonlinear multivariate calibration techniques to overcome (i) the time-consuming off-line analyses in beer production and (ii) the known limitations of standard linear chemometric methods, like partial least squares (PLS), for important quality parameters (Speers et al., J. Inst. Brewing 2003;109(3):229-235; Zhang et al., J. Inst. Brewing 2012;118(4):361-367) such as bitterness, citric acid, total acids, free amino nitrogen, final attenuation, or foam stability. The calibration models are established with enhanced nonlinear techniques based (i) on a new piece-wise linear version of PLS that employs fuzzy rules for locally partitioning the latent variable space and (ii) on extensions of support vector regression variants (ε-PLSSVR and ν-PLSSVR), to overcome high computation times in high-dimensional problems and time-intensive, inappropriate settings of the kernel parameters. Furthermore, we introduce a new model selection scheme based on bagged ensembles in order to improve robustness and thus the predictive quality of the final models. The approaches are tested on real-world calibration data sets for wort and beer mix beverages, and successfully compared to linear methods, showing a clear out-performance in most cases and being able to meet the model quality requirements defined by the experts at the beer company. Figure: Workflow for the calibration of non-linear model ensembles from FT-MIR spectra in beer production.

  5. Quantifying the predictive consequences of model error with linear subspace analysis

    USGS Publications Warehouse

    White, Jeremy T.; Doherty, John E.; Hughes, Joseph D.

    2014-01-01

    All computer models are simplified and imperfect simulators of complex natural systems. The discrepancy arising from simplification induces bias in model predictions, which may be amplified by the process of model calibration. This paper presents a new method to identify and quantify the predictive consequences of calibrating a simplified computer model. The method is based on linear theory, and it scales efficiently to the large numbers of parameters and observations characteristic of groundwater and petroleum reservoir models. The method is applied to a range of predictions made with a synthetic integrated surface-water/groundwater model with thousands of parameters. Several different observation processing strategies and parameterization/regularization approaches are examined in detail, including use of the Karhunen-Loève parameter transformation. Predictive bias arising from model error is shown to be prediction specific and often invisible to the modeler. The amount of calibration-induced bias is influenced by several factors, including how expert knowledge is applied in the design of parameterization schemes, the number of parameters adjusted during calibration, how observations and model-generated counterparts are processed, and the level of fit with observations achieved through calibration. Failure to properly implement any of these factors in a prediction-specific manner may increase the potential for predictive bias in ways that are not visible to the calibration and uncertainty analysis process.

  6. Parameter estimation procedure for complex non-linear systems: calibration of ASM No. 1 for N-removal in a full-scale oxidation ditch.

    PubMed

    Abusam, A; Keesman, K J; van Straten, G; Spanjers, H; Meinema, K

    2001-01-01

    When applied to large simulation models, the process of parameter estimation is also called calibration. Calibration of complex non-linear systems, such as activated sludge plants, is often not an easy task. On the one hand, manual calibration of such complex systems is usually time-consuming, and its results are often not reproducible. On the other hand, conventional automatic calibration methods are not always straightforward and often hampered by local minima problems. In this paper a new straightforward and automatic procedure, which is based on the response surface method (RSM) for selecting the best identifiable parameters, is proposed. In RSM, the process response (output) is related to the levels of the input variables in terms of a first- or second-order regression model. Usually, RSM is used to relate measured process output quantities to process conditions. However, in this paper RSM is used for selecting the dominant parameters, by evaluating parameters sensitivity in a predefined region. Good results obtained in calibration of ASM No. 1 for N-removal in a full-scale oxidation ditch proved that the proposed procedure is successful and reliable.
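    A toy version of such RSM-based sensitivity ranking might look like the sketch below; the parameter names and the stand-in "plant" are hypothetical, not ASM No. 1 itself:

    ```python
    # Sketch: relate model output to parameter levels with a first-order
    # regression over a predefined region and rank parameters by the magnitude
    # of their coefficients. The "plant" function is a stub for the simulator.
    import numpy as np

    rng = np.random.default_rng(0)
    names = ["mu_max", "K_S", "b_H", "Y_H"]     # hypothetical ASM parameters

    def plant(p):                               # stand-in for the simulator
        return 2.0*p[:, 0] - 0.3*p[:, 1] + 0.05*p[:, 2] + 1.1*p[:, 3]

    levels = rng.uniform(-1, 1, (64, 4))        # coded levels in [-1, 1]
    y = plant(levels)

    X = np.column_stack([np.ones(len(levels)), levels])
    coef = np.linalg.lstsq(X, y, rcond=None)[0][1:]
    order = np.argsort(-np.abs(coef))
    print("parameters ranked by sensitivity:", [names[i] for i in order])
    ```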

  7. Satellite Instrument Calibration for Measuring Global Climate Change. Report of a Workshop at the University of Maryland Inn and Conference Center, College Park, MD, November 12-14, 2002

    NASA Technical Reports Server (NTRS)

    Ohring, G.; Wielicki, B.; Spencer, R.; Emery, B.; Datla, R.

    2004-01-01

    Measuring the small changes associated with long-term global climate change from space is a daunting task. To address these problems and recommend directions for improvements in satellite instrument calibration some 75 scientists, including researchers who develop and analyze long-term data sets from satellites, experts in the field of satellite instrument calibration, and physicists working on state of the art calibration sources and standards met November 12 - 14, 2002 and discussed the issues. The workshop defined the absolute accuracies and long-term stabilities of global climate data sets that are needed to detect expected trends, translated these data set accuracies and stabilities to required satellite instrument accuracies and stabilities, and evaluated the ability of current observing systems to meet these requirements. The workshop's recommendations include a set of basic axioms or overarching principles that must guide high quality climate observations in general, and a roadmap for improving satellite instrument characterization, calibration, inter-calibration, and associated activities to meet the challenge of measuring global climate change. It is also recommended that a follow-up workshop be conducted to discuss implementation of the roadmap developed at this workshop.

  8. Ability of calibration phantom to reduce the interscan variability in electron beam computed tomography.

    PubMed

    Budoff, Matthew J; Mao, Songshou; Lu, Bin; Takasu, Junichiro; Child, Janis; Carson, Sivi; Fisher, Hans

    2002-01-01

    To test the hypothesis that a calibration phantom would reduce interpatient and interscan variability in coronary artery calcium (CAC) studies. We scanned 144 patients twice with or without the calibration phantom, then scanned 93 patients with a single calcific lesion twice and, finally, scanned a cork heart with calcific foci. There were no linear correlations in computed tomography Hounsfield unit (CT HU) and CT HU interscan variation between blood pool and phantom plugs at any slice level in patient groups (p > 0.05). The CT HU interscan variation in phantom plugs (2.11 HU) was less than that of the blood pool (3.47 HU; p < 0.05) and CAC lesion (20.39; p < 0.001). Comparing images with and without a calibration phantom, there was a significant decrease in CT HU as well as an increase in noise and peak values in patient studies and the cork phantom study. The CT HU attenuation variations of the interpatient and interscan blood pool, calibration phantom plug, and cork coronary arteries were not parallel. Therefore, the ability to adjust the CT HU variation of calcific lesions by a calibration phantom is problematic and may worsen the problem.

  9. Automated Mounting Bias Calibration for Airborne LIDAR System

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Jiang, W.; Jiang, S.

    2012-07-01

    Mounting bias is the major error source in an airborne LIDAR system. In this paper, an automated calibration method for estimating LIDAR system mounting parameters is introduced. The LIDAR direct geo-referencing model is used to calculate systematic errors. Because LIDAR footprints are discretely sampled, truly corresponding laser points hardly exist between different strips, so the traditional corresponding-point methodology does not apply to LIDAR strip registration. We propose a Virtual Corresponding Point Model (VCPM) to resolve the correspondence problem among discrete laser points. Each VCPM contains a corresponding point and three real laser footprints. Two rules are defined to calculate tie point coordinates from real laser footprints. The Scale Invariant Feature Transform (SIFT) is used to extract corresponding points in LIDAR strips, and the automatic workflow of LIDAR system calibration based on the VCPM is described in detail. Practical examples illustrate the feasibility and effectiveness of the proposed calibration method.

  10. Calibration of decadal ensemble predictions

    NASA Astrophysics Data System (ADS)

    Pasternack, Alexander; Rust, Henning W.; Bhend, Jonas; Liniger, Mark; Grieger, Jens; Müller, Wolfgang; Ulbrich, Uwe

    2017-04-01

    Decadal climate predictions are of great socio-economic interest due to the corresponding planning horizons of several political and economic decisions. Because of the uncertainties of weather and climate forecasts (e.g. initial condition uncertainty), they are issued in a probabilistic way. One issue frequently observed for probabilistic forecasts is that they tend not to be reliable, i.e. the forecast probabilities are not consistent with the relative frequency of the associated observed events. Thus, these kinds of forecasts need to be re-calibrated. While re-calibration methods for seasonal time scales are available and frequently applied, these methods still have to be adapted to decadal time scales and their characteristic problems, such as climate trend and lead-time dependent bias. To this end, we propose a method to re-calibrate decadal ensemble predictions that takes the above-mentioned characteristics into account. Finally, this method is applied to and validated on decadal forecasts from the MiKlip system (Germany's initiative for decadal prediction).
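    One basic ingredient of such re-calibration, removing a lead-time dependent mean bias estimated from hindcasts, can be sketched as follows with synthetic data; a full decadal re-calibration would also treat the climate trend and the ensemble spread, which are omitted here:

    ```python
    # Sketch: lead-time dependent bias correction for decadal hindcasts.
    # Drift over the ten lead years is removed separately at each lead.
    import numpy as np

    rng = np.random.default_rng(0)
    n_hindcasts, n_leads = 20, 10
    obs = rng.normal(0.0, 1.0, (n_hindcasts, n_leads))
    drift = 0.1 * np.arange(n_leads)               # bias growing with lead time
    fcst = obs + drift + rng.normal(0, 0.3, (n_hindcasts, n_leads))

    bias = (fcst - obs).mean(axis=0)               # one bias per lead year
    fcst_calibrated = fcst - bias
    print("estimated lead-dependent bias:", np.round(bias, 2))
    ```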

  11. Accuracy and Calibration of High Explosive Thermodynamic Equations of State

    NASA Astrophysics Data System (ADS)

    Baker, Ernest L.; Capellos, Christos; Stiel, Leonard I.; Pincay, Jack

    2010-10-01

    The Jones-Wilkins-Lee-Baker (JWLB) equation of state (EOS) was developed to more accurately describe overdriven detonation while maintaining an accurate description of high explosive products expansion work output. The increased mathematical complexity of the JWLB high explosive equations of state provides increased accuracy for practical problems of interest. Increased numbers of parameters are often justified based on improved physics descriptions but can also mean increased calibration complexity. A generalized extent of aluminum reaction Jones-Wilkins-Lee (JWL)-based EOS was developed in order to more accurately describe the observed behavior of aluminized explosives detonation products expansion. A calibration method was developed to describe the unreacted, partially reacted, and completely reacted explosive using nonlinear optimization. A reasonable calibration of a generalized extent of aluminum reaction JWLB EOS as a function of aluminum reaction fraction has not yet been achieved due to the increased mathematical complexity of the JWLB form.

  12. Machine-Learning Based Co-adaptive Calibration: A Perspective to Fight BCI Illiteracy

    NASA Astrophysics Data System (ADS)

    Vidaurre, Carmen; Sannelli, Claudia; Müller, Klaus-Robert; Blankertz, Benjamin

    "BCI illiteracy" is one of the biggest problems and challenges in BCI research. It means that BCI control cannot be achieved by a non-negligible number of subjects (estimated 20% to 25%). There are two main causes for BCI illiteracy in BCI users: either no SMR idle rhythm is observed over motor areas, or this idle rhythm is not attenuated during motor imagery, resulting in a classification performance lower than 70% (criterion level) already for offline calibration data. In a previous work of the same authors, the concept of machine learning based co-adaptive calibration was introduced. This new type of calibration provided substantially improved performance for a variety of users. Here, we use a similar approach and investigate to what extent co-adapting learning enables substantial BCI control for completely novice users and those who suffered from BCI illiteracy before.

  13. Pre-treatment patient-specific stopping power by combining list-mode proton radiography and x-ray CT

    NASA Astrophysics Data System (ADS)

    Collins-Fekete, Charles-Antoine; Brousmiche, Sébastien; Hansen, David C.; Beaulieu, Luc; Seco, Joao

    2017-09-01

    The relative stopping power (RSP) uncertainty is the largest contributor to the range uncertainty in proton therapy. The purpose of this work was to develop a systematic method that yields accurate and patient-specific RSPs by combining (1) pre-treatment x-ray CT and (2) daily proton radiography of the patient. The method was formulated as a penalized least squares optimization problem, $\mathrm{argmin}_x \|Ax - b\|_2^2$. The parameter A represents the cumulative path-length crossed by the proton in each material, separated by thresholding on the HU. The material RSPs (water equivalent thickness/physical thickness) are denoted by x. The parameter b is the list-mode proton radiography produced using Geant4 simulations. The problem was solved using a non-negative linear solver with the constraint $x \geq 0$. A was computed by superposing proton trajectories calculated with a cubic or linear spline approach to the CT. The material RSPs assigned in Geant4 were used for reference, while the clinical HU-RSP calibration curve was used for comparison. The Gammex RMI-467 phantom was first investigated. The standard deviation between the estimated material RSP and the calculated RSP is 0.45%. The robustness of the technique was then assessed as a function of the number of projections and initial proton energy. Optimization with two initial projections yields precise RSP (⩽1.0%) for 330 MeV protons. 250 MeV protons showed higher uncertainty (⩽2.0%) due to the loss of precision in the path estimate. Anthropomorphic phantoms of the head, pelvis, and lung were subsequently evaluated. Accurate RSP was obtained for the head (μ = 0.21 ± 1.63%), the lung (μ = 0.06 ± 0.99%) and the pelvis (μ = 0.90 ± 3.87%). The range precision was optimized using the calibration curves obtained with the algorithm, yielding mean R80 differences to the reference of 0.11 ± 0.09%, 0.28 ± 0.34% and 0.05 ± 0.06%, in the same order. The solution's accuracy is limited by the assumed HU/RSP bijection, which neglects inherent degeneracy. The proposed formulation of the problem with prior-knowledge x-ray CT demonstrates potential to increase the accuracy of present RSP estimates.
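
    The optimization step maps directly onto a standard non-negative least squares solver. A minimal sketch with made-up numbers (the real A comes from spline proton trajectories and HU thresholding, and b from the list-mode radiography):

        import numpy as np
        from scipy.optimize import nnls

        # Each row of A: cumulative path length (mm) a proton crossed in
        # each HU-thresholded material; b: its water-equivalent path length.
        A = np.array([[12.0, 3.0, 0.5],
                      [10.5, 4.2, 0.0],
                      [11.1, 2.8, 1.3],
                      [ 9.8, 3.9, 0.7]])
        b = np.array([14.9, 14.2, 14.6, 13.4])

        # Solve argmin ||Ax - b||_2^2 subject to x >= 0.
        rsp, rnorm = nnls(A, b)
        print(rsp)   # estimated RSP per material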

  14. Pre-treatment patient-specific stopping power by combining list-mode proton radiography and x-ray CT.

    PubMed

    Collins-Fekete, Charles-Antoine; Brousmiche, Sébastien; Hansen, David C; Beaulieu, Luc; Seco, Joao

    2017-08-03

    The relative stopping power (RSP) uncertainty is the largest contributor to the range uncertainty in proton therapy. The purpose of this work was to develop a systematic method that yields accurate and patient-specific RSPs by combining (1) pre-treatment x-ray CT and (2) daily proton radiography of the patient. The method was formulated as a penalized least squares optimization problem, $\mathrm{argmin}_x \|Ax - b\|_2^2$. The parameter A represents the cumulative path-length crossed by the proton in each material, separated by thresholding on the HU. The material RSPs (water equivalent thickness/physical thickness) are denoted by x. The parameter b is the list-mode proton radiography produced using Geant4 simulations. The problem was solved using a non-negative linear solver with the constraint $x \geq 0$. A was computed by superposing proton trajectories calculated with a cubic or linear spline approach to the CT. The material RSPs assigned in Geant4 were used for reference, while the clinical HU-RSP calibration curve was used for comparison. The Gammex RMI-467 phantom was first investigated. The standard deviation between the estimated material RSP and the calculated RSP is 0.45%. The robustness of the technique was then assessed as a function of the number of projections and initial proton energy. Optimization with two initial projections yields precise RSP (⩽1.0%) for 330 MeV protons. 250 MeV protons showed higher uncertainty (⩽2.0%) due to the loss of precision in the path estimate. Anthropomorphic phantoms of the head, pelvis, and lung were subsequently evaluated. Accurate RSP was obtained for the head (μ = 0.21 ± 1.63%), the lung (μ = 0.06 ± 0.99%) and the pelvis (μ = 0.90 ± 3.87%). The range precision was optimized using the calibration curves obtained with the algorithm, yielding mean R80 differences to the reference of 0.11 ± 0.09%, 0.28 ± 0.34% and 0.05 ± 0.06%, in the same order. The solution's accuracy is limited by the assumed HU/RSP bijection, which neglects inherent degeneracy. The proposed formulation of the problem with prior-knowledge x-ray CT demonstrates potential to increase the accuracy of present RSP estimates.

  15. 40 CFR 89.307 - Dynamometer calibration.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... master load-cell for each in-use range used. (5) The in-use torque measurement must be within 2 percent... torque measurement for each range used by the following method: (1) Warm up the dynamometer following the dynamometer manufacturer's specifications. (2) Determine the dynamometer calibration moment arm (a distance...

  16. Multispectral scanner flight model (F-1) radiometric calibration and alignment handbook

    NASA Technical Reports Server (NTRS)

    1981-01-01

    This handbook on the calibration of the MSS-D flight model (F-1) provides both the relevant data and a summary description of how the data were obtained for the system radiometric calibration, system relative spectral response, and the filter response characteristics for all 24 channels of the four band MSS-D F-1 scanner. The calibration test procedure and resulting test data required to establish the reference light levels of the MSS-D internal calibration system are discussed. The final set of data ("nominal" calibration wedges for all 24 channels) for the internal calibration system is given. The system relative spectral response measurements for all 24 channels of MSS-D F-1 are included. These data are the spectral response of the complete scanner, which are the composite of the spectral responses of the scan mirror primary and secondary telescope mirrors, fiber optics, optical filters, and detectors. Unit level test data on the measurements of the individual channel optical transmission filters are provided. Measured performance is compared to specification values.

  17. A statistical method for estimating rates of soil development and ages of geologic deposits: A design for soil-chronosequence studies

    USGS Publications Warehouse

    Switzer, P.; Harden, J.W.; Mark, R.K.

    1988-01-01

    A statistical method for estimating rates of soil development in a given region based on calibration from a series of dated soils is used to estimate ages of soils in the same region that are not dated directly. The method is designed specifically to account for sampling procedures and uncertainties that are inherent in soil studies. Soil variation and measurement error, uncertainties in calibration dates and their relation to the age of the soil, and the limited number of dated soils are all considered. Maximum likelihood (ML) is employed to estimate a parametric linear calibration curve, relating soil development to time or age on suitably transformed scales. Soil variation on a geomorphic surface of a certain age is characterized by replicate sampling of soils on each surface; such variation is assumed to have a Gaussian distribution. The age of a geomorphic surface is described by older and younger bounds. This technique allows age uncertainty to be characterized by either a Gaussian distribution or by a triangular distribution using minimum, best-estimate, and maximum ages. The calibration curve is taken to be linear after suitable (in certain cases logarithmic) transformations, if required, of the soil parameter and age variables. Soil variability, measurement error, and departures from linearity are described in a combined fashion using Gaussian distributions with variances particular to each sampled geomorphic surface and the number of sample replicates. Uncertainty in age of a geomorphic surface used for calibration is described using three parameters by one of two methods. In the first method, upper and lower ages are specified together with a coverage probability; this specification is converted to a Gaussian distribution with the appropriate mean and variance. In the second method, "absolute" older and younger ages are specified together with a most probable age; this specification is converted to an asymmetric triangular distribution with mode at the most probable age. The statistical variability of the ML-estimated calibration curve is assessed by a Monte Carlo method in which simulated data sets repeatedly are drawn from the distributional specification; calibration parameters are reestimated for each such simulation in order to assess their statistical variability. Several examples are used for illustration. The age of undated soils in a related setting may be estimated from the soil data using the fitted calibration curve. A second simulation to assess age estimate variability is described and applied to the examples. © 1988 International Association for Mathematical Geology.
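
    The Monte Carlo assessment described above is easy to sketch: draw surface ages from the asymmetric triangular distribution (minimum, best estimate, maximum), refit the calibration line for each draw, and inspect the spread of the fitted parameters. The log-linear form and variable names below are illustrative assumptions, and ordinary least squares stands in for the ML fit.

        import numpy as np

        rng = np.random.default_rng(42)

        def mc_calibration(soil, young, best, old, n=1000):
            # soil: soil-development index per dated surface
            # young, best, old: per-surface age bounds / best estimate (ka)
            slopes, intercepts = [], []
            for _ in range(n):
                ages = rng.triangular(young, best, old)    # one draw per surface
                m, c = np.polyfit(np.log(ages), soil, 1)   # refit calibration
                slopes.append(m)
                intercepts.append(c)
            return np.asarray(slopes), np.asarray(intercepts)

    The spread of the returned slope and intercept samples then characterizes the statistical variability of the estimated calibration curve.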

  18. Two laboratory methods for the calibration of GPS speed meters

    NASA Astrophysics Data System (ADS)

    Bai, Yin; Sun, Qiao; Du, Lei; Yu, Mei; Bai, Jie

    2015-01-01

    The set-ups of two calibration systems are presented to investigate calibration methods for GPS speed meters. The GPS speed meter calibrated is a special type of high-accuracy speed meter for vehicles, which uses Doppler demodulation of GPS signals to calculate the measured speed of a moving target. Three experiments were performed: simulated calibration, field-test signal replay calibration, and an in-field comparison with an optical speed meter. The experiments were conducted at specific speeds in the range of 40-180 km h-1 with the same GPS speed meter as the device under calibration. The evaluation of measurement results validates both methods for calibrating GPS speed meters. The relative deviations between the measurement results of the GPS-based high-accuracy speed meter and those of the optical speed meter are analyzed, and the equivalent uncertainty of the comparison is evaluated. The comparison results justify the utilization of GPS speed meters as reference equipment if no fewer than seven satellites are available. This study contributes to the widespread use of GPS-based high-accuracy speed meters as legal reference equipment in traffic speed metrology.

  19. Effects of dilution rates, animal species and instruments on the spectrophotometric determination of sperm counts.

    PubMed

    Rondeau, M; Rouleau, M

    1981-06-01

    Using semen from bull, boar and stallion as well as different spectrophotometers, we established the calibration curves relating the optical density of a sperm sample to the sperm count obtained on the hemacytometer. The results show that, for a given spectrophotometer, the calibration curve is not characteristic of the animal species we studied. The differences in size of the spermatozoa are probably too small to account for the anticipated species specificity of the calibration curve. Furthermore, the fact that different dilution rates must be used, because of the vastly different sperm concentrations characteristic of those species, has no effect on the calibration curves, since the effect of the dilution rate is shown to be artefactual. On the other hand, for a given semen sample, the calibration curve varies depending upon the spectrophotometer used. However, if two instruments have the same spectral bandwidth, the calibration curves are not statistically different.

  20. Calibration-free optical chemical sensors

    DOEpatents

    DeGrandpre, Michael D.

    2006-04-11

    An apparatus and method for taking absorbance-based chemical measurements are described. In a specific embodiment, an indicator-based pCO2 (partial pressure of CO2) sensor displays sensor-to-sensor reproducibility and measurement stability. These qualities are achieved by: 1) renewing the sensing solution, 2) allowing the sensing solution to reach equilibrium with the analyte, and 3) calculating the response from a ratio of the indicator solution absorbances which are determined relative to a blank solution. Careful solution preparation, wavelength calibration, and stray light rejection also contribute to this calibration-free system. Three pCO2 sensors were calibrated and each had response curves which were essentially identical within the uncertainty of the calibration. Long-term laboratory and field studies showed the response had no drift over extended periods (months). The theoretical response, determined from thermodynamic characterization of the indicator solution, also predicted the observed calibration-free performance.

  1. Calibration and comparison of the NASA Lewis free-piston Stirling engine model predictions with RE-1000 test data

    NASA Technical Reports Server (NTRS)

    Geng, Steven M.

    1987-01-01

    A free-piston Stirling engine performance code is being upgraded and validated at the NASA Lewis Research Center under an interagency agreement between the Department of Energy's Oak Ridge National Laboratory and NASA Lewis. Many modifications were made to the free-piston code in an attempt to decrease the calibration effort. A procedure was developed that made the code calibration process more systematic. Engine-specific calibration parameters are often used to bring predictions and experimental data into better agreement. The code was calibrated to a matrix of six experimental data points. Predictions of the calibrated free-piston code are compared with RE-1000 free-piston Stirling engine sensitivity test data taken at NASA Lewis. Reasonable agreement was obtained between the code predictions and the experimental data over a wide range of engine operating conditions.

  2. Calibration and comparison of the NASA Lewis free-piston Stirling engine model predictions with RE-1000 test data

    NASA Technical Reports Server (NTRS)

    Geng, Steven M.

    1987-01-01

    A free-piston Stirling engine performance code is being upgraded and validated at the NASA Lewis Research Center under an interagency agreement between the Department of Energy's Oak Ridge National Laboratory and NASA Lewis. Many modifications were made to the free-piston code in an attempt to decrease the calibration effort. A procedure was developed that made the code calibration process more systematic. Engine-specific calibration parameters are often used to bring predictions and experimental data into better agreement. The code was calibrated to a matrix of six experimental data points. Predictions of the calibrated free-piston code are compared with RE-1000 free-piston Stirling engine sensitivity test data taken at NASA Lewis. Reasonable agreement was obtained between the code predictions and the experimental data over a wide range of engine operating conditions.

  3. Geometrical Calibration of the Photo-Spectral System and Digital Maps Retrieval

    NASA Astrophysics Data System (ADS)

    Bruchkouskaya, S.; Skachkova, A.; Katkovski, L.; Martinov, A.

    2013-12-01

    Imaging systems for remote sensing of the Earth are required to deliver high metric accuracy, which can be ensured through preliminary geometrical calibration of the optical systems. The parameters of internal and external orientation of the cameras, determined as a result of the geometrical calibration, are needed for solving such image-processing problems as orthotransformation, geometrical correction, geographic coordinate fixing, scale adjustment, registration of images from various channels and cameras, creation of image mosaics of filmed territories, and determination of geometrical characteristics of objects in the images. The geometrical calibration also helps to eliminate image deformations arising from manufacturing defects and errors in the installation of camera elements and photo-receiving matrices, as well as those resulting from lens distortion. A Photo-Spectral System (PhSS), intended for registering reflected radiation spectra of underlying surfaces in the wavelength range from 350 nm to 1050 nm and recording images of high spatial resolution, has been developed at the A.N. Sevchenko Research Institute of Applied Physical Problems of the Belarusian State University. The PhSS has undergone flight tests over the territory of Belarus onboard an Antonov AN-2 aircraft to obtain visible-range images of the underlying surface. We then performed the geometrical calibration of the PhSS and carried out the correction of images obtained during the flight tests. Furthermore, we plotted digital maps of the terrain using stereo pairs of images acquired from the PhSS and evaluated the accuracy of the created maps. Having obtained the calibration parameters, we apply them to correct images from another identical PhSS device, located at the Russian Orbital Segment of the International Space Station (ROS ISS), in order to retrieve digital maps of the terrain with higher accuracy.

  4. Dry Bias and Variability in Vaisala RS80-H Radiosondes: The ARM Experience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, David D.; Lesht, B. M.; Clough, Shepard A.

    2003-01-02

    Thousands of comparisons between total precipitable water vapor (PWV) obtained from radiosonde (Vaisala RS80-H) profiles and PWV retrieved from a collocated microwave radiometer (MWR) were made at the Atmospheric Radiation Measurement (ARM) Program's Southern Great Plains Cloud and Radiation Testbed (SGP/CART) site in northern Oklahoma from 1994 to 2000. These comparisons show that the RS80-H radiosonde has an approximate 5% dry bias compared to the MWR. This observation is consistent with interpretations of Vaisala RS80 radiosonde data obtained during the Tropical Ocean and Global Atmosphere Coupled Ocean-Atmosphere Response Experiment (TOGA/COARE). In addition to the dry bias, analysis of the PWV comparisons as well as of data obtained from dual-sonde soundings done at the SGP show that the calibration of the radiosonde humidity measurements varies considerably both when the radiosondes come from different calibration batches and when the radiosondes come from the same calibration batch. This variability can result in peak-to-peak differences between radiosondes of greater than 25% in PWV. Because accurate representation of the vertical profile of water vapor is critical for ARM's science objectives, we have developed an empirical method for correcting the radiosonde humidity profiles that is based on a constant scaling factor. By using an independent set of observations and radiative transfer models to test the correction, we show that the constant humidity scaling method appears both to improve the accuracy and reduce the uncertainty of the radiosonde data. We also used the ARM data to examine a different, physically-based, correction scheme that was developed recently by scientists from Vaisala and the National Center for Atmospheric Research (NCAR). This scheme, which addresses the dry bias problem as well as other calibration-related problems with the RS80-H sensor, results in excellent agreement between the PWV retrieved from the MWR and integrated from the corrected radiosonde. However, because the physically-based correction scheme does not address the apparently random calibration variations we observe, it does not reduce the variability either between radiosonde calibration batches or within individual calibration batches.
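
    The constant scaling-factor correction amounts to one line of arithmetic: multiply the sonde's humidity profile by the ratio of MWR-retrieved to sonde-integrated PWV. A minimal sketch (the function and argument names are illustrative, not ARM code):

        import numpy as np

        def scale_humidity(q_profile, pwv_sonde, pwv_mwr):
            # q_profile: radiosonde humidity profile (e.g. g/kg per level).
            # Scaling every level by pwv_mwr / pwv_sonde forces the
            # integrated sonde PWV to match the MWR retrieval.
            return np.asarray(q_profile) * (pwv_mwr / pwv_sonde)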

  5. Motivational Influences of Using Peer Evaluation in Problem-Based Learning in Medical Education

    ERIC Educational Resources Information Center

    Abercrombie, Sara; Parkes, Jay; McCarty, Teresita

    2015-01-01

    This study investigates the ways in which medical students' achievement goal orientations (AGO) affect their perceptions of learning and actual learning from an online problem-based learning environment, Calibrated Peer Review™. First, the tenability of a four-factor model (Elliot & McGregor, 2001) of AGO was tested with data collected from…

  6. Calibrating LOFAR using the Black Board Selfcal System

    NASA Astrophysics Data System (ADS)

    Pandey, V. N.; van Zwieten, J. E.; de Bruyn, A. G.; Nijboer, R.

    2009-09-01

    The Black Board SelfCal (BBS) system is designed as the final processing system to carry out the calibration of LOFAR in an efficient way. In this paper we give a brief description of its architectural and software design, including its distributed computing approach. A confusion-limited deep all-sky image (from 38-62 MHz), obtained by calibrating LOFAR test data with the BBS suite, is shown as a sample result. The present status and future directions of development of the BBS suite are also touched upon. Although BBS is mainly developed for LOFAR, it may also be used to calibrate other instruments once their specific algorithms are plugged in.

  7. More efficient evolutionary strategies for model calibration with watershed model for demonstration

    NASA Astrophysics Data System (ADS)

    Baggett, J. S.; Skahill, B. E.

    2008-12-01

    Evolutionary strategies allow automatic calibration of more complex models than traditional gradient-based approaches, but they are more computationally intensive. We present several efficiency enhancements for evolution strategies, many of which are not new, but which when combined have been shown to dramatically decrease the number of model runs required for calibration on synthetic problems. To reduce the number of expensive model runs we employ a surrogate objective function for an adaptively determined fraction of the population at each generation (Kern et al., 2006). We demonstrate improvements to the adaptive ranking strategy that increase its efficiency while sacrificing little reliability, and further reduce the number of model runs required in densely sampled parts of parameter space. Furthermore, we include a gradient individual in each generation that is usually not selected when the search is in a global phase or when the derivatives are poorly approximated, but which, when selected near a smooth local minimum, can dramatically increase convergence speed (Tahk et al., 2007); a toy sketch of this idea is given below. Finally, the selection of the gradient individual is used to adapt the size of the population near local minima. We show, by incorporating these enhancements into the Covariance Matrix Adaptation Evolution Strategy (CMA-ES; Hansen, 2006), that their synergistic effect is greater than the sum of their individual parts. This hybrid evolutionary strategy exploits smooth structure when it is present but degrades, at worst, to an ordinary evolutionary strategy if smoothness is not present. Calibration of 2D-3D synthetic models with the modified CMA-ES requires approximately 10%-25% of the model runs of ordinary CMA-ES. A preliminary demonstration of this hybrid strategy will be shown for watershed model calibration problems.
    Hansen, N. (2006). The CMA Evolution Strategy: A Comparing Review. In J.A. Lozano, P. Larrañaga, I. Inza and E. Bengoetxea (Eds.), Towards a New Evolutionary Computation: Advances in Estimation of Distribution Algorithms, pp. 75-102. Springer.
    Kern, S., N. Hansen and P. Koumoutsakos (2006). Local Meta-Models for Optimization Using Evolution Strategies. In Ninth International Conference on Parallel Problem Solving from Nature (PPSN IX), Proceedings, pp. 939-948. Berlin: Springer.
    Tahk, M., Woo, H., and Park, M. (2007). A hybrid optimization of evolutionary and gradient search. Engineering Optimization, 39, 87-104.
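
    The gradient-individual idea can be shown in a toy evolution strategy: each generation, one extra candidate is produced by a gradient step and competes in selection like any other individual. This is a deliberately bare-bones sketch, not the modified CMA-ES of the abstract.

        import numpy as np

        def hybrid_es_step(f, grad, mean, sigma, lam=8,
                           rng=np.random.default_rng(0)):
            # Sample lam random offspring, then append one gradient
            # individual (cf. Tahk et al., 2007).
            pop = mean + sigma * rng.standard_normal((lam, mean.size))
            pop = np.vstack([pop, mean - sigma * grad(mean)])
            fit = np.apply_along_axis(f, 1, pop)
            elite = pop[np.argsort(fit)[:max(1, lam // 2)]]
            return elite.mean(axis=0)        # recombined new mean

        # Example on a smooth quadratic: the gradient individual is
        # selected near the minimum and speeds up convergence.
        f = lambda x: float(np.sum(x ** 2))
        g = lambda x: 2.0 * x
        m = np.array([3.0, -2.0])
        for _ in range(30):
            m = hybrid_es_step(f, g, m, sigma=0.3)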

  8. Applications of inductively coupled plasma mass spectrometry and laser ablation inductively coupled plasma mass spectrometry in materials science

    NASA Astrophysics Data System (ADS)

    Becker, Johanna Sabine

    2002-12-01

    Inductively coupled plasma mass spectrometry (ICP-MS) and laser ablation ICP-MS (LA-ICP-MS) have become the most important inorganic mass spectrometric techniques with multielemental capability for the characterization of solid samples in materials science. ICP-MS is used for the sensitive determination of trace and ultratrace elements in digested solutions of solid samples or of process chemicals (ultrapure water, acids and organic solutions) for the semiconductor industry, with detection limits down to sub-picogram per liter levels. Whereas ICP-MS on solid samples (e.g. high-purity ceramics) sometimes requires time-consuming sample preparation for its application in materials science, with the risk of contamination as a serious drawback, LA-ICP-MS allows a fast, direct determination of trace elements in solid materials without any sample preparation. The detection limits for the direct analysis of solid samples by LA-ICP-MS have been determined for many elements down to the nanogram per gram range. A deterioration of detection limits was observed for elements where interferences with polyatomic ions occur. This inherent interference problem can often be solved by applying a double-focusing sector field mass spectrometer at higher mass resolution or by collision-induced reactions of polyatomic ions with a collision gas using an ICP-MS fitted with a collision cell. The main problem of LA-ICP-MS is quantification when no suitable standard reference materials with a similar matrix composition are available. The calibration problem in LA-ICP-MS can be solved using on-line solution-based calibration, and different procedures, such as external calibration and standard addition, are discussed with respect to their application in materials science. The application of isotope dilution in solution-based calibration for trace metal determination in small amounts of noble metals has been developed as a new calibration strategy. This review discusses new analytical developments and possible applications of ICP-MS and LA-ICP-MS for the quantitative determination of trace elements and in surface analysis for materials science.

  9. Sky camera geometric calibration using solar observations

    DOE PAGES

    Urquhart, Bryan; Kurtz, Ben; Kleissl, Jan

    2016-09-05

    A camera model and associated automated calibration procedure for stationary daytime sky imaging cameras is presented. The specific modeling and calibration needs are motivated by remotely deployed cameras used to forecast solar power production where cameras point skyward and use 180° fisheye lenses. Sun position in the sky and on the image plane provides a simple and automated approach to calibration; special equipment or calibration patterns are not required. Sun position in the sky is modeled using a solar position algorithm (requiring latitude, longitude, altitude and time as inputs). Sun position on the image plane is detected using a simple image processing algorithm. The performance evaluation focuses on the calibration of a camera employing a fisheye lens with an equisolid angle projection, but the camera model is general enough to treat most fixed focal length, central, dioptric camera systems with a photo objective lens. Calibration errors scale with the noise level of the sun position measurement in the image plane, but the calibration is robust across a large range of noise in the sun position. In conclusion, calibration performance on clear days ranged from 0.94 to 1.24 pixels root mean square error.
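
    The heart of such a procedure is a least-squares fit of the camera parameters to many (sun-in-sky, sun-on-image) pairs. The sketch below assumes an equisolid-angle radial model r = 2f·sin(θ/2); the parameter set (focal scale f, principal point cx, cy, azimuth offset az0) is a simplification of the paper's full camera model, and the function names are mine.

        import numpy as np
        from scipy.optimize import least_squares

        def project_equisolid(az, zen, params):
            # Map sun direction (azimuth, zenith, in radians) to pixel
            # coordinates for an equisolid-angle fisheye.
            f, cx, cy, az0 = params
            r = 2.0 * f * np.sin(zen / 2.0)
            return cx + r * np.sin(az - az0), cy + r * np.cos(az - az0)

        def residuals(params, az, zen, u_obs, v_obs):
            u, v = project_equisolid(az, zen, params)
            return np.concatenate([u - u_obs, v - v_obs])

        # az, zen from a solar position algorithm; (u_obs, v_obs) are the
        # detected sun positions on the image plane.
        # fit = least_squares(residuals, x0=[700.0, 960.0, 960.0, 0.0],
        #                     args=(az, zen, u_obs, v_obs))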

  10. ROx3: Retinal oximetry utilizing the blue-green oximetry method

    NASA Astrophysics Data System (ADS)

    Parsons, Jennifer Kathleen Hendryx

    The ROx is a retinal oximeter under development with the purpose of non-invasively and accurately measuring oxygen saturation (SO2) in vivo. It is novel in that it utilizes the blue-green oximetry technique with on-axis illumination. ROx calibration tests were performed by inducing hypoxia in live anesthetized swine and comparing ROx measurements to SO 2 values measured by a CO-Oximeter. Calibration was not achieved to the precision required for clinical use, but limiting factors were identified and improved. The ROx was used in a set of sepsis experiments on live pigs with the intention of tracking retinal SO2 during the development of sepsis. Though conclusions are qualitative due to insufficient calibration of the device, retinal venous SO2 is shown to trend generally with central venous SO2 as sepsis develops. The novel sepsis model developed in these experiments is also described. The method of cecal ligation and perforation with additional soiling of the abdomen consistently produced controllable severe sepsis/septic shock in a matter of hours. In addition, the ROx was used to collect retinal images from a healthy human volunteer. These experiments served as a bench test for several of the additions/modifications made to the ROx. This set of experiments specifically served to illuminate problems with various light paths and image acquisition. The analysis procedure for the ROx is under development, particularly automating the process for consistency, accuracy, and time efficiency. The current stage of automation is explained, including data acquisition processes and the automated vessel fit routine. Suggestions for the next generation of device minimization are also described.

  11. A fundamental study of suction for Laminar Flow Control (LFC)

    NASA Astrophysics Data System (ADS)

    Watmuff, Jonathan H.

    1992-10-01

    This report covers the period forming the first year of the project. The aim is to experimentally investigate the effects of suction as a technique for Laminar Flow Control. Experiments are to be performed which require substantial modifications to be made to the experimental facility. Considerable effort has been spent developing new high performance constant temperature hot-wire anemometers for general purpose use in the Fluid Mechanics Laboratory. Twenty instruments have been delivered. An important feature of the facility is that it is totally automated under computer control. Unprecedentedly large quantities of data can be acquired and the results examined using the visualization tools developed specifically for studying the results of numerical simulations on graphics workstations. The experiment must be run for periods of up to a month at a time since the data is collected on a point-by-point basis. Several techniques were implemented to reduce the experimental run-time by a significant factor. Extra probes have been constructed and modifications have been made to the traverse hardware and to the real-time experimental code to enable multiple probes to be used. This will reduce the experimental run-time by the appropriate factor. Hot-wire calibration drift has been a frustrating problem owing to the large range of ambient temperatures experienced in the laboratory. The solution has been to repeat the calibrations at frequent intervals. However, the calibration process has consumed up to 40 percent of the run-time. A new method of correcting the drift is very nearly finalized and, when implemented, it will also lead to a significant reduction in the experimental run-time.

  12. Reference measurement procedure for total glycerides by isotope dilution GC-MS.

    PubMed

    Edwards, Selvin H; Stribling, Shelton L; Pyatt, Susan D; Kimberly, Mary M

    2012-04-01

    The CDC's Lipid Standardization Program established the chromotropic acid (CA) reference measurement procedure (RMP) as the accuracy base for standardization and metrological traceability for triglyceride testing. The CA RMP has several disadvantages, including lack of ruggedness. It uses obsolete instrumentation and hazardous reagents. To overcome these problems the CDC developed an isotope dilution GC-MS (ID-GC-MS) RMP for total glycerides in serum. We diluted serum samples with Tris-HCl buffer solution and spiked 200-μL aliquots with [(13)C(3)]-glycerol. These samples were incubated and hydrolyzed under basic conditions. The samples were dried, derivatized with acetic anhydride and pyridine, extracted with ethyl acetate, and analyzed by ID-GC-MS. Linearity, imprecision, and accuracy were evaluated by analyzing calibrator solutions, 10 serum pools, and a standard reference material (SRM 1951b). The calibration response was linear for the range of calibrator concentrations examined (0-1.24 mmol/L) with a slope and intercept of 0.717 (95% CI, 0.7123-0.7225) and 0.3122 (95% CI, 0.3096-0.3140), respectively. The limit of detection was 14.8 μmol/L. The mean %CV for the sample set (serum pools and SRM) was 1.2%. The mean %bias from NIST isotope dilution MS values for SRM 1951b was 0.7%. This ID-GC-MS RMP has the specificity and ruggedness to accurately quantify total glycerides in the serum pools used in the CDC's Lipid Standardization Program and demonstrates sufficiently acceptable agreement with the NIST primary RMP for total glyceride measurement.
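
    Once the calibrator fit is in hand, sample concentrations follow by inverting the reported line; a small sketch using the slope and intercept quoted above:

        # Linear calibration reported in the abstract (response vs mmol/L).
        SLOPE, INTERCEPT = 0.717, 0.3122

        def total_glycerides_mmol_per_l(response):
            # Invert response = SLOPE * conc + INTERCEPT; valid only over
            # the calibrated range (0-1.24 mmol/L).
            return (response - INTERCEPT) / SLOPE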

  13. A fundamental study of suction for Laminar Flow Control (LFC)

    NASA Technical Reports Server (NTRS)

    Watmuff, Jonathan H.

    1992-01-01

    This report covers the period forming the first year of the project. The aim is to experimentally investigate the effects of suction as a technique for Laminar Flow Control. Experiments are to be performed which require substantial modifications to be made to the experimental facility. Considerable effort has been spent developing new high performance constant temperature hot-wire anemometers for general purpose use in the Fluid Mechanics Laboratory. Twenty instruments have been delivered. An important feature of the facility is that it is totally automated under computer control. Unprecedentedly large quantities of data can be acquired and the results examined using the visualization tools developed specifically for studying the results of numerical simulations on graphics workstations. The experiment must be run for periods of up to a month at a time since the data is collected on a point-by-point basis. Several techniques were implemented to reduce the experimental run-time by a significant factor. Extra probes have been constructed and modifications have been made to the traverse hardware and to the real-time experimental code to enable multiple probes to be used. This will reduce the experimental run-time by the appropriate factor. Hot-wire calibration drift has been a frustrating problem owing to the large range of ambient temperatures experienced in the laboratory. The solution has been to repeat the calibrations at frequent intervals. However, the calibration process has consumed up to 40 percent of the run-time. A new method of correcting the drift is very nearly finalized and, when implemented, it will also lead to a significant reduction in the experimental run-time.

  14. Calibrating the stress-time curve of a combined finite-discrete element method to a Split Hopkinson Pressure Bar experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osthus, Dave; Godinez, Humberto C.; Rougier, Esteban

    We present a generic method for automatically calibrating a computer code to an experiment, with uncertainty, for a given “training” set of computer code runs. The calibration technique is general and probabilistic, meaning the calibration uncertainty is represented in the form of a probability distribution. We demonstrate the calibration method by calibrating a combined Finite-Discrete Element Method (FDEM) to a Split Hopkinson Pressure Bar (SHPB) experiment with a granite sample. The probabilistic calibration method combines runs of a FDEM computer simulation for a range of “training” settings and experimental uncertainty to develop a statistical emulator. The process allows for calibration of input parameters and produces output quantities with uncertainty estimates for settings where simulation results are desired. Input calibration and FDEM fitted results are presented. We find that the maximum shear strength $\sigma_t^{\max}$ and, to a lesser extent, the maximum tensile strength $\sigma_n^{\max}$ govern the behavior of the stress-time curve before and around the peak, while the specific energy in Mode II (shear), $E_t$, largely governs the post-peak behavior of the stress-time curve. Good agreement is found between the calibrated FDEM and the SHPB experiment. Interestingly, we find the SHPB experiment to be rather uninformative for calibrating the softening-curve shape parameters (a, b, and c). This work stands as a successful demonstration of how a general probabilistic calibration framework can automatically calibrate FDEM parameters to an experiment.

  15. Calibrating the stress-time curve of a combined finite-discrete element method to a Split Hopkinson Pressure Bar experiment

    DOE PAGES

    Osthus, Dave; Godinez, Humberto C.; Rougier, Esteban; ...

    2018-05-01

    We present a generic method for automatically calibrating a computer code to an experiment, with uncertainty, for a given “training” set of computer code runs. The calibration technique is general and probabilistic, meaning the calibration uncertainty is represented in the form of a probability distribution. We demonstrate the calibration method by calibrating a combined Finite-Discrete Element Method (FDEM) to a Split Hopkinson Pressure Bar (SHPB) experiment with a granite sample. The probabilistic calibration method combines runs of a FDEM computer simulation for a range of “training” settings and experimental uncertainty to develop a statistical emulator. The process allows for calibration of input parameters and produces output quantities with uncertainty estimates for settings where simulation results are desired. Input calibration and FDEM fitted results are presented. We find that the maximum shear strength $\sigma_t^{\max}$ and, to a lesser extent, the maximum tensile strength $\sigma_n^{\max}$ govern the behavior of the stress-time curve before and around the peak, while the specific energy in Mode II (shear), $E_t$, largely governs the post-peak behavior of the stress-time curve. Good agreement is found between the calibrated FDEM and the SHPB experiment. Interestingly, we find the SHPB experiment to be rather uninformative for calibrating the softening-curve shape parameters (a, b, and c). This work stands as a successful demonstration of how a general probabilistic calibration framework can automatically calibrate FDEM parameters to an experiment.

  16. Cloned plasmid DNA fragments as calibrators for controlling GMOs: different real-time duplex quantitative PCR methods.

    PubMed

    Taverniers, Isabel; Van Bockstaele, Erik; De Loose, Marc

    2004-03-01

    Analytical real-time PCR technology is a powerful tool for implementation of the GMO labeling regulations enforced in the EU. The quality of analytical measurement data obtained by quantitative real-time PCR depends on the correct use of calibrator and reference materials (RMs). For GMO methods of analysis, the choice of appropriate RMs is currently under debate. So far, genomic DNA solutions from certified reference materials (CRMs) are most often used as calibrators for GMO quantification by means of real-time PCR. However, due to some intrinsic features of these CRMs, errors may be expected in the estimations of DNA sequence quantities. In this paper, two new real-time PCR methods are presented for Roundup Ready soybean, in which two types of plasmid DNA fragments are used as calibrators. Single-target plasmids (STPs) diluted in a background of genomic DNA were used in the first method. Multiple-target plasmids (MTPs) containing both sequences in one molecule were used as calibrators for the second method. Both methods simultaneously detect a promoter 35S sequence as GMO-specific target and a lectin gene sequence as endogenous reference target in a duplex PCR. For the estimation of relative GMO percentages both "delta C(T)" and "standard curve" approaches are tested. Delta C(T) methods are based on direct comparison of measured C(T) values of both the GMO-specific target and the endogenous target. Standard curve methods measure absolute amounts of target copies or haploid genome equivalents. A duplex delta C(T) method with STP calibrators performed at least as well as a similar method with genomic DNA calibrators from commercial CRMs. Besides this, high quality results were obtained with a standard curve method using MTP calibrators. This paper demonstrates that plasmid DNA molecules containing either one or multiple target sequences form perfect alternative calibrators for GMO quantification and are especially suitable for duplex PCR reactions.
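
    The delta-CT comparison reduces to a few lines of arithmetic. The sketch below assumes perfect amplification efficiency (a doubling per cycle), an idealization the paper's standard-curve variants avoid; the function name is mine.

        def gmo_percent_delta_ct(ct_gmo_s, ct_ref_s, ct_gmo_c, ct_ref_c,
                                 cal_percent, eff=2.0):
            # Normalise the GMO-specific target (p35S) to the endogenous
            # target (lectin) for sample and calibrator, then compare.
            ddct = (ct_gmo_s - ct_ref_s) - (ct_gmo_c - ct_ref_c)
            return cal_percent * eff ** (-ddct)

        # Example against a 1% Roundup Ready soybean calibrator
        print(gmo_percent_delta_ct(30.1, 24.0, 31.5, 24.1, cal_percent=1.0))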

  17. Calibration of hydrological models using flow-duration curves

    NASA Astrophysics Data System (ADS)

    Westerberg, I. K.; Guerrero, J.-L.; Younger, P. M.; Beven, K. J.; Seibert, J.; Halldin, S.; Freer, J. E.; Xu, C.-Y.

    2011-07-01

    The degree of belief we have in predictions from hydrologic models will normally depend on how well they can reproduce observations. Calibrations with traditional performance measures, such as the Nash-Sutcliffe model efficiency, are challenged by problems including: (1) uncertain discharge data, (2) variable sensitivity of different performance measures to different flow magnitudes, (3) influence of unknown input/output errors and (4) inability to evaluate model performance when observation time periods for discharge and model input data do not overlap. This paper explores a calibration method using flow-duration curves (FDCs) to address these problems. The method focuses on reproducing the observed discharge frequency distribution rather than the exact hydrograph. It consists of applying limits of acceptability for selected evaluation points (EPs) on the observed uncertain FDC in the extended GLUE approach. Two ways of selecting the EPs were tested - based on equal intervals of discharge and of volume of water. The method was tested and compared to a calibration using the traditional model efficiency for the daily four-parameter WASMOD model in the Paso La Ceiba catchment in Honduras and for Dynamic TOPMODEL evaluated at an hourly time scale for the Brue catchment in Great Britain. The volume method of selecting EPs gave the best results in both catchments with better calibrated slow flow, recession and evaporation than the other criteria. Observed and simulated time series of uncertain discharges agreed better for this method both in calibration and prediction in both catchments. An advantage with the method is that the rejection criterion is based on an estimation of the uncertainty in discharge data and that the EPs of the FDC can be chosen to reflect the aims of the modelling application, e.g. using more/less EPs at high/low flows. While the method appears less sensitive to epistemic input/output errors than previous use of limits of acceptability applied directly to the time series of discharge, it still requires a reasonable representation of the distribution of inputs. Additional constraints might therefore be required in catchments subject to snow and where peak-flow timing at sub-daily time scales is of high importance. The results suggest that the calibration method can be useful when observation time periods for discharge and model input data do not overlap. The method could also be suitable for calibration to regional FDCs while taking uncertainties in the hydrological model and data into account.
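
    A schematic of the evaluation step: build the FDC from a simulated series, interpolate it at the chosen evaluation points, and accept the parameter set only if every EP falls within its limits of acceptability. The plotting position and function names are illustrative assumptions, not the paper's code.

        import numpy as np

        def flow_duration_curve(q):
            # Exceedance probability (Weibull plotting position) vs flow.
            q = np.sort(np.asarray(q))[::-1]
            p = np.arange(1, q.size + 1) / (q.size + 1)
            return p, q

        def acceptable(sim_q, ep_prob, lower, upper):
            # lower/upper: limits of acceptability per EP, derived from
            # the uncertainty in the observed discharge data.
            p, q = flow_duration_curve(sim_q)
            q_ep = np.interp(ep_prob, p, q)
            return bool(np.all((q_ep >= lower) & (q_ep <= upper)))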

  18. Calibration of hydrological models using flow-duration curves

    NASA Astrophysics Data System (ADS)

    Westerberg, I. K.; Guerrero, J.-L.; Younger, P. M.; Beven, K. J.; Seibert, J.; Halldin, S.; Freer, J. E.; Xu, C.-Y.

    2010-12-01

    The degree of belief we have in predictions from hydrologic models depends on how well they can reproduce observations. Calibrations with traditional performance measures such as the Nash-Sutcliffe model efficiency are challenged by problems including: (1) uncertain discharge data, (2) variable importance of the performance with flow magnitudes, (3) influence of unknown input/output errors and (4) inability to evaluate model performance when observation time periods for discharge and model input data do not overlap. A new calibration method using flow-duration curves (FDCs) was developed which addresses these problems. The method focuses on reproducing the observed discharge frequency distribution rather than the exact hydrograph. It consists of applying limits of acceptability for selected evaluation points (EPs) of the observed uncertain FDC in the extended GLUE approach. Two ways of selecting the EPs were tested - based on equal intervals of discharge and of volume of water. The method was tested and compared to a calibration using the traditional model efficiency for the daily four-parameter WASMOD model in the Paso La Ceiba catchment in Honduras and for Dynamic TOPMODEL evaluated at an hourly time scale for the Brue catchment in Great Britain. The volume method of selecting EPs gave the best results in both catchments with better calibrated slow flow, recession and evaporation than the other criteria. Observed and simulated time series of uncertain discharges agreed better for this method both in calibration and prediction in both catchments without resulting in overpredicted simulated uncertainty. An advantage with the method is that the rejection criterion is based on an estimation of the uncertainty in discharge data and that the EPs of the FDC can be chosen to reflect the aims of the modelling application e.g. using more/less EPs at high/low flows. While the new method is less sensitive to epistemic input/output errors than the normal use of limits of acceptability applied directly to the time series of discharge, it still requires a reasonable representation of the distribution of inputs. Additional constraints might therefore be required in catchments subject to snow. The results suggest that the new calibration method can be useful when observation time periods for discharge and model input data do not overlap. The new method could also be suitable for calibration to regional FDCs while taking uncertainties in the hydrological model and data into account.

  19. Neutron monitoring systems including gamma thermometers and methods of calibrating nuclear instruments using gamma thermometers

    DOEpatents

    Moen, Stephan Craig; Meyers, Craig Glenn; Petzen, John Alexander; Foard, Adam Muhling

    2012-08-07

    A method of calibrating a nuclear instrument using a gamma thermometer may include: measuring, in the instrument, local neutron flux; generating, from the instrument, a first signal proportional to the neutron flux; measuring, in the gamma thermometer, local gamma flux; generating, from the gamma thermometer, a second signal proportional to the gamma flux; compensating the second signal; and calibrating a gain of the instrument based on the compensated second signal. Compensating the second signal may include: calculating selected yield fractions for specific groups of delayed gamma sources; calculating time constants for the specific groups; calculating a third signal that corresponds to delayed local gamma flux based on the selected yield fractions and time constants; and calculating the compensated second signal by subtracting the third signal from the second signal. The specific groups may have decay time constants greater than 5×10⁻¹ seconds and less than 5×10⁵ seconds.
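
    A sketch of the compensation arithmetic: track each delayed-gamma group as a first-order lag with its own yield fraction and time constant, sum the group states to get the third (delayed) signal, and subtract it from the second signal. The discretization is a simple explicit scheme (dt must be small relative to the shortest time constant), and any group constants supplied would be placeholders, not the patent's values.

        import numpy as np

        def compensated_signal(s2, yields, taus, dt):
            # s2: sampled gamma thermometer signal; yields/taus: per-group
            # yield fractions and decay time constants (seconds).
            states = np.zeros(len(taus))          # one lag state per group
            out = np.empty(len(s2))
            for i, s in enumerate(s2):
                for g, (y, tau) in enumerate(zip(yields, taus)):
                    states[g] += dt / tau * (y * s - states[g])
                out[i] = s - states.sum()         # subtract delayed flux
            return out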

  20. Java-Library for the Access, Storage and Editing of Calibration Metadata of Optical Sensors

    NASA Astrophysics Data System (ADS)

    Firlej, M.; Kresse, W.

    2016-06-01

    The standardization of the calibration of optical sensors in photogrammetry and remote sensing has been discussed for more than a decade. Projects of the German DGPF and the European EuroSDR led to the abstract International Technical Specification ISO/TS 19159-1:2014 "Calibration and validation of remote sensing imagery sensors and data - Part 1: Optical sensors". This article presents the first software interface providing read and write access to all metadata elements standardized in the ISO/TS 19159-1. This interface is based on an XML schema that was automatically derived by ShapeChange from the UML model of the Specification. The software interface serves two cases. First, the more than 300 standardized metadata elements are stored individually according to the XML schema. Second, camera manufacturers use many administrative data that are not part of the ISO/TS 19159-1. The new software interface provides a mechanism for input, storage, editing, and output of both types of data. Finally, an output channel toward a conventional calibration protocol is provided. The interface is written in Java. The article also addresses observations made when analysing the ISO/TS 19159-1 and compiles a list of proposals for maturing the document, i.e. for an updated version of the Specification.

  1. Molecular Form Differences Between Prostate-Specific Antigen (PSA) Standards Create Quantitative Discordances in PSA ELISA Measurements

    PubMed Central

    McJimpsey, Erica L.

    2016-01-01

    The prostate-specific antigen (PSA) assays currently employed for the detection of prostate cancer (PCa) lack the specificity needed to differentiate PCa from benign prostatic hyperplasia and have high false positive rates. The PSA calibrants used to create calibration curves in these assays are typically purified from seminal plasma and contain many molecular forms (intact PSA and cleaved subforms). The purpose of this study was to determine whether the composition of the PSA molecular forms found in these PSA standards contributes to the lack of PSA test reliability. To this end, seminal plasma purified PSA standards from different commercial sources were investigated by western blot (WB) and in multiple research-grade PSA ELISAs. The WB results revealed that all of the PSA standards contained different mass concentrations of intact and cleaved molecular forms. Increased mass concentrations of intact PSA yielded higher immunoassay absorbance values, even between lots from the same manufacturer. Standardization of the molecular-form mass concentrations and purification methods of seminal plasma derived PSA calibrants will assist in closing the gaps in PCa testing measurements that rely on PSA values, such as the %free PSA and the Prostate Health Index, by increasing the accuracy of the calibration curves. PMID:26911983

  2. 40 CFR 86.1308-84 - Dynamometer and engine equipment specifications.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... technique involves the calibration of a master load cell (i.e., dynamometer case load cell). This... hydraulically actuated precalibrated master load cell. This calibration is then transferred to the flywheel torque measuring device. The technique involves the following steps: (i) A master load cell shall be...

  3. A Spectralon BRF Data Base for MISR Calibration Application

    NASA Technical Reports Server (NTRS)

    Bruegge, C.; Chrien, N.; Haner, D.

    1999-01-01

    The Multi-angle Imaging SpectroRadiometer (MISR) is an Earth observing sensor which will provide global retrievals of aerosols, clouds, and land surface parameters. Instrument specifications require high accuracy absolute calibration, as well as accurate camera-to-camera, band-to-band and pixel-to-pixel relative response determinations.

  4. Exploring a Three-Level Model of Calibration Accuracy

    ERIC Educational Resources Information Center

    Schraw, Gregory; Kuch, Fred; Gutierrez, Antonio P.; Richmond, Aaron S.

    2014-01-01

    We compared 5 different statistics (i.e., G index, gamma, "d'", sensitivity, specificity) used in the social sciences and medical diagnosis literatures to assess calibration accuracy in order to examine the relationship among them and to explore whether one statistic provided a best fitting general measure of accuracy. College…

  5. 40 CFR 86.1308-84 - Dynamometer and engine equipment specifications.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ....e., armature current, etc.) may be used for torque measurement provided that it can be shown that... a constant speed. The flywheel torque measurement device readout shall be calibrated to the master... approximately equal useful ranges of torque measurement.) The transfer calibration shall be performed in a...

  6. 40 CFR 86.1308-84 - Dynamometer and engine equipment specifications.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ....e., armature current, etc.) may be used for torque measurement provided that it can be shown that... a constant speed. The flywheel torque measurement device readout shall be calibrated to the master... approximately equal useful ranges of torque measurement.) The transfer calibration shall be performed in a...

  7. 40 CFR Appendix Viii to Part 85 - Vehicle and Engine Parameters and Specifications

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) AIR PROGRAMS (CONTINUED) CONTROL OF AIR POLLUTION FROM MOBILE SOURCES Pt. 85, App. VIII Appendix VIII.... Air Inlet System. 1. Temperature control system calibration. IV. Fuel System. 1. General. a. Engine idle speed. b. Engine idle mixture. 2. Carburetion. a. Air-fuel flow calibration. b. Transient...

  8. 40 CFR Appendix Viii to Part 85 - Vehicle and Engine Parameters and Specifications

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) AIR PROGRAMS (CONTINUED) CONTROL OF AIR POLLUTION FROM MOBILE SOURCES Pt. 85, App. VIII Appendix VIII.... Air Inlet System. 1. Temperature control system calibration. IV. Fuel System. 1. General. a. Engine idle speed. b. Engine idle mixture. 2. Carburetion. a. Air-fuel flow calibration. b. Transient...

  9. 40 CFR Appendix Viii to Part 85 - Vehicle and Engine Parameters and Specifications

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) AIR PROGRAMS (CONTINUED) CONTROL OF AIR POLLUTION FROM MOBILE SOURCES Pt. 85, App. VIII Appendix VIII.... Air Inlet System. 1. Temperature control system calibration. IV. Fuel System. 1. General. a. Engine idle speed. b. Engine idle mixture. 2. Carburetion. a. Air-fuel flow calibration. b. Transient...

  10. 7 CFR 28.956 - Prescribed fees.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    .... sample 42.00 3.0Furnishing standard color tiles for calibrating cotton colormeters, per set of five tiles... outside continental United States 165.00 3.1Furnishing single color calibration tiles for use with specific instruments or as replacements in above sets, each tile: a. f.o.b. Memphis, Tennessee 22.00 b...

  11. 7 CFR 28.956 - Prescribed fees.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    .... sample 42.00 3.0Furnishing standard color tiles for calibrating cotton colormeters, per set of five tiles... outside continental United States 165.00 3.1Furnishing single color calibration tiles for use with specific instruments or as replacements in above sets, each tile: a. f.o.b. Memphis, Tennessee 22.00 b...

  12. Task Complexity, Epistemological Beliefs and Metacognitive Calibration: An Exploratory Study

    ERIC Educational Resources Information Center

    Stahl, Elmar; Pieschl, Stephanie; Bromme, Rainer

    2006-01-01

    This article presents an explorative study, which is part of a comprehensive project to examine the impact of epistemological beliefs on metacognitive calibration during learning processes within a complex hypermedia information system. More specifically, this study investigates: 1) if learners differentiate between tasks of different complexity,…

  13. Automated model optimisation using the Cylc workflow engine (Cyclops v1.0)

    NASA Astrophysics Data System (ADS)

    Gorman, Richard M.; Oliver, Hilary J.

    2018-06-01

Most geophysical models include many parameters that are not fully determined by theory, and can be tuned to improve the model's agreement with available data. We might attempt to automate this tuning process in an objective way by employing an optimisation algorithm to find the set of parameters that minimises a cost function derived from comparing model outputs with measurements. A number of algorithms are available for solving optimisation problems, in various programming languages, but interfacing such software to a complex geophysical model simulation presents certain challenges. To tackle this problem, we have developed an optimisation suite (Cyclops) based on the Cylc workflow engine that implements a wide selection of optimisation algorithms from the NLopt Python toolbox (Johnson, 2014). The Cyclops optimisation suite can be used to calibrate any modelling system that has itself been implemented as a (separate) Cylc model suite, provided it includes computation and output of the desired scalar cost function. A growing number of institutions are using Cylc to orchestrate complex distributed suites of interdependent cycling tasks within their operational forecast systems, and in such cases application of the optimisation suite is particularly straightforward. As a test case, we applied Cyclops to calibrate a global implementation of the WAVEWATCH III (v4.18) third-generation spectral wave model, forced by ERA-Interim input fields. This was calibrated over a 1-year period (1997), before applying the calibrated model to a full (1979-2016) wave hindcast. The chosen error metric was the spatial average of the root mean square error of hindcast significant wave height compared with collocated altimeter records. We describe the results of a calibration in which up to 19 parameters were optimised.
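
The optimisation loop that such a suite wraps around a model run can be sketched with the NLopt Python bindings (a minimal sketch; `run_wave_model` is a hypothetical stand-in for launching a full model suite and returning the scalar cost, e.g. the RMSE of hindcast significant wave height):

```python
# Minimal derivative-free calibration loop with the NLopt Python bindings.
# `run_wave_model` is a hypothetical stand-in for a full model suite run.
import nlopt
import numpy as np

def run_wave_model(params):
    # Placeholder cost surface; in practice this would trigger a model run.
    a, b = params
    return (a - 1.3) ** 2 + (b - 0.8) ** 2

def cost(x, grad):
    # NLopt passes a gradient array; unused for derivative-free algorithms.
    return run_wave_model(x)

opt = nlopt.opt(nlopt.LN_SBPLX, 2)       # Subplex: robust, gradient-free
opt.set_min_objective(cost)
opt.set_lower_bounds([0.0, 0.0])
opt.set_upper_bounds([5.0, 5.0])
opt.set_xtol_rel(1e-3)
best = opt.optimize(np.array([1.0, 1.0]))
print("optimal parameters:", best, "cost:", opt.last_optimum_value())
```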

  14. Improving the Ar I and II branching ratio calibration method: Monte Carlo simulations of effects from photon scattering/reflecting in hollow cathodes

    NASA Astrophysics Data System (ADS)

    Lawler, J. E.; Den Hartog, E. A.

    2018-03-01

The Ar I and II branching ratio calibration method is discussed with the goal of improving the technique. This method of establishing a relative radiometric calibration is important in ongoing research to improve atomic transition probabilities for quantitative spectroscopy in astrophysics and other fields. Specific suggestions are presented along with Monte Carlo simulations of wavelength-dependent effects from scattering/reflecting of photons in a hollow cathode.
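
A toy Monte Carlo of this kind can make the effect concrete (all probabilities and the reflectivity model R(λ) below are invented for illustration, not the paper's cathode model): photons at wavelengths where the cathode is more reflective survive more bounces and so escape more often, biasing the apparent intensity ratio.

```python
# Illustrative Monte Carlo: photons inside a hollow cathode either escape
# directly or bounce with a wavelength-dependent survival probability.
# R(lambda) here is an assumed toy model, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

def escape_fraction(wavelength_nm, n_photons=100_000, p_escape=0.3):
    reflectivity = 0.4 + 0.4 * (wavelength_nm - 300) / 700  # assumed R(lambda)
    alive = np.ones(n_photons, dtype=bool)
    escaped = 0
    for _ in range(50):                        # cap the number of bounces
        esc = alive & (rng.random(n_photons) < p_escape)
        escaped += esc.sum()
        alive &= ~esc
        alive &= rng.random(n_photons) < reflectivity  # absorbed otherwise
        if not alive.any():
            break
    return escaped / n_photons

for wl in (300, 500, 900):
    print(wl, "nm ->", escape_fraction(wl))
```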

  15. In-Situ Cameras for Radiometric Correction of Remotely Sensed Data

    NASA Astrophysics Data System (ADS)

    Kautz, Jess S.

The atmosphere distorts the spectrum of remotely sensed data, negatively affecting all forms of investigating Earth's surface. To gather reliable data, it is vital that atmospheric corrections are accurate. The current state of the field of atmospheric correction does not account well for the benefits and costs of different correction algorithms. Ground spectral data are required to evaluate these algorithms better. This dissertation explores using cameras as radiometers as a means of gathering ground spectral data. I introduce techniques to implement a camera system for atmospheric correction using off-the-shelf parts. To aid the design of future camera systems for radiometric correction, methods for estimating the system error prior to construction, calibration and testing of the resulting camera system are explored. Simulations are used to investigate the relationship between the reflectance accuracy of the camera system and the quality of atmospheric correction. In the design phase, read noise and filter choice are found to be the strongest sources of system error. I explain the calibration methods for the camera system, showing the problems of pixel-to-angle calibration, and adapting the web camera for scientific work. The camera system is tested in the field to estimate its ability to recover directional reflectance from BRF data. I estimate the error in the system due to the experimental set-up, then explore how the system error changes with different cameras, environmental set-ups and inversions. With these experiments, I learn about the importance of the dynamic range of the camera, and the input ranges used for the PROSAIL inversion. Evidence that the camera can perform within the specification set in this dissertation for ELM correction is evaluated. The analysis is concluded by simulating an ELM correction of a scene using various numbers of calibration targets, and levels of system error, to find the number of cameras needed for a full-scale implementation.

  16. Automatic camera to laser calibration for high accuracy mobile mapping systems using INS

    NASA Astrophysics Data System (ADS)

    Goeman, Werner; Douterloigne, Koen; Gautama, Sidharta

    2013-09-01

A mobile mapping system (MMS) is a mobile multi-sensor platform developed by the geoinformation community to support the acquisition of huge amounts of geodata in the form of georeferenced high resolution images and dense laser clouds. Since data fusion and data integration techniques are increasingly able to combine the complementary strengths of different sensor types, the external calibration of a camera to a laser scanner is a common prerequisite on today's mobile platforms. The methods of calibration, nevertheless, are often relatively poorly documented, are almost always time-consuming, demand expert knowledge and often require a carefully constructed calibration environment. A new methodology is explored to provide a high quality external calibration for a pinhole camera to a laser scanner which is automatic, easy to perform, robust and foolproof. The method presented here uses a portable, standard ranging pole which needs to be positioned on a known ground control point. For calibration, a well-studied absolute orientation problem needs to be solved. In many cases, the camera and laser sensor are calibrated in relation to the INS system. Therefore, the transformation from camera to laser contains the cumulated error of each sensor in relation to the INS. Here, the calibration of the camera is performed in relation to the laser frame using the time synchronization between the sensors for data association. In this study, the use of the inertial relative movement will be explored to collect more useful calibration data. This results in a better intersensor calibration allowing better coloring of the clouds and a more accurate depth mask for images, especially on the edges of objects in the scene.
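
The absolute orientation problem referred to here has a standard closed-form SVD solution; a generic sketch follows (textbook Kabsch/Horn method, not the authors' code):

```python
# Closed-form absolute orientation: the rigid transform (R, t) between
# matched 3-D point sets, via the SVD/Kabsch method.
import numpy as np

def absolute_orientation(P, Q):
    """Find R, t minimising ||R @ P_i + t - Q_i||^2. P, Q: (N, 3) arrays."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Usage: the same pole tip positions seen in two frames (laser and camera)
th = 0.3
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0,         0.0,        1.0]])
t_true = np.array([0.5, -0.2, 1.0])
P = np.random.rand(10, 3)
Q = P @ R_true.T + t_true
R, t = absolute_orientation(P, Q)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```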

  17. Simultaneous auto-calibration and gradient delays estimation (SAGE) in non-Cartesian parallel MRI using low-rank constraints.

    PubMed

    Jiang, Wenwen; Larson, Peder E Z; Lustig, Michael

    2018-03-09

The aim is to correct gradient timing delays in non-Cartesian MRI while simultaneously recovering corruption-free auto-calibration data for parallel imaging, without additional calibration scans. The calibration matrix constructed from multi-channel k-space data should be inherently low-rank. This property is used to construct reconstruction kernels or sensitivity maps. Delays between the gradient hardware across different axes and the RF receive chain, which are relatively benign in Cartesian MRI (excluding EPI), lead to trajectory deviations and hence data inconsistencies for non-Cartesian trajectories. These in turn lead to a higher rank and corrupted calibration information, which hampers the reconstruction. Here, a method named Simultaneous Auto-calibration and Gradient delays Estimation (SAGE) is proposed that estimates the actual k-space trajectory while simultaneously recovering the uncorrupted auto-calibration data. This is done by estimating the gradient delays that result in the lowest rank of the calibration matrix. The Gauss-Newton method is used to solve the non-linear problem. The method is validated in simulations using center-out radial, projection reconstruction and spiral trajectories. Feasibility is demonstrated on phantom and in vivo scans with center-out radial and projection reconstruction trajectories. SAGE is able to estimate gradient timing delays with high accuracy at a signal-to-noise ratio level as low as 5. The method is able to effectively remove artifacts resulting from gradient timing delays and restore image quality in center-out radial, projection reconstruction, and spiral trajectories. The low-rank based method introduced simultaneously estimates gradient timing delays and provides accurate auto-calibration data for improved image quality, without any additional calibration scans. © 2018 International Society for Magnetic Resonance in Medicine.
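
The core idea can be sketched as a rank-surrogate search (schematic only: `calib_matrix` is a stand-in for regridding the multi-channel k-space data onto a delay-corrected trajectory, and SAGE itself uses Gauss-Newton on this objective rather than a grid):

```python
# Schematic of the SAGE objective: pick the gradient delay that minimises
# the effective rank of the calibration matrix. `calib_matrix(delay)` is a
# hypothetical callable assembling the calibration matrix for that delay.
import numpy as np

def effective_rank(A, tol=1e-3):
    s = np.linalg.svd(A, compute_uv=False)
    return int((s / s[0] > tol).sum())

def tail_energy(A, k=8):
    # Smooth surrogate: energy outside the leading k singular values.
    s = np.linalg.svd(A, compute_uv=False)
    return float((s[k:] ** 2).sum() / (s ** 2).sum())

def best_delay(calib_matrix, candidates):
    # Grid search for illustration; SAGE solves the same minimisation
    # with Gauss-Newton.
    costs = [tail_energy(calib_matrix(d)) for d in candidates]
    return candidates[int(np.argmin(costs))]
```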

  18. A game-theoretic approach for calibration of low-cost magnetometers under noise uncertainty

    NASA Astrophysics Data System (ADS)

    Siddharth, S.; Ali, A. S.; El-Sheimy, N.; Goodall, C. L.; Syed, Z. F.

    2012-02-01

    Pedestrian heading estimation is a fundamental challenge in Global Navigation Satellite System (GNSS)-denied environments. Additionally, the heading observability considerably degrades in low-speed mode of operation (e.g. walking), making this problem even more challenging. The goal of this work is to improve the heading solution when hand-held personal/portable devices, such as cell phones, are used for positioning and to improve the heading estimation in GNSS-denied signal environments. Most smart phones are now equipped with self-contained, low cost, small size and power-efficient sensors, such as magnetometers, gyroscopes and accelerometers. A magnetometer needs calibration before it can be properly employed for navigation purposes. Magnetometers play an important role in absolute heading estimation and are embedded in many smart phones. Before the users navigate with the phone, a calibration is invoked to ensure an improved signal quality. This signal is used later in the heading estimation. In most of the magnetometer-calibration approaches, the motion modes are seldom described to achieve a robust calibration. Also, suitable calibration approaches fail to discuss the stopping criteria for calibration. In this paper, the following three topics are discussed in detail that are important to achieve proper magnetometer-calibration results and in turn the most robust heading solution for the user while taking care of the device misalignment with respect to the user: (a) game-theoretic concepts to attain better filter parameter tuning and robustness in noise uncertainty, (b) best maneuvers with focus on 3D and 2D motion modes and related challenges and (c) investigation of the calibration termination criteria leveraging the calibration robustness and efficiency.

  19. Hybrid dynamic radioactive particle tracking (RPT) calibration technique for multiphase flow systems

    NASA Astrophysics Data System (ADS)

    Khane, Vaibhav; Al-Dahhan, Muthanna H.

    2017-04-01

    The radioactive particle tracking (RPT) technique has been utilized to measure three-dimensional hydrodynamic parameters for multiphase flow systems. An analytical solution to the inverse problem of the RPT technique, i.e. finding the instantaneous tracer positions based upon instantaneous counts received in the detectors, is not possible. Therefore, a calibration to obtain a counts-distance map is needed. There are major shortcomings in the conventional RPT calibration method due to which it has limited applicability in practical applications. In this work, the design and development of a novel dynamic RPT calibration technique are carried out to overcome the shortcomings of the conventional RPT calibration method. The dynamic RPT calibration technique has been implemented around a test reactor with 1foot in diameter and 1 foot in height using Cobalt-60 as an isotopes tracer particle. Two sets of experiments have been carried out to test the capability of novel dynamic RPT calibration. In the first set of experiments, a manual calibration apparatus has been used to hold a tracer particle at known static locations. In the second set of experiments, the tracer particle was moved vertically downwards along a straight line path in a controlled manner. The obtained reconstruction results about the tracer particle position were compared with the actual known position and the reconstruction errors were estimated. The obtained results revealed that the dynamic RPT calibration technique is capable of identifying tracer particle positions with a reconstruction error between 1 to 5.9 mm for the conditions studied which could be improved depending on various factors outlined here.

  20. Comparative analysis of model behaviour for flood prediction purposes using Self-Organizing Maps

    NASA Astrophysics Data System (ADS)

    Herbst, M.; Casper, M. C.; Grundmann, J.; Buchholz, O.

    2009-03-01

Distributed watershed models constitute a key component in flood forecasting systems. It is widely recognized that models, because of their structural differences, have varying capabilities of capturing different aspects of the system behaviour equally well. Of course, this also applies to the reproduction of peak discharges by a simulation model, which is of particular interest regarding the flood forecasting problem. In our study we use a Self-Organizing Map (SOM) in combination with index measures which are derived from the flow duration curve in order to examine the conditions under which three different distributed watershed models are capable of reproducing flood events present in the calibration data. These indices are specifically conceptualized to extract data on the peak discharge characteristics of model output time series which are obtained from Monte-Carlo simulations with the distributed watershed models NASIM, LARSIM and WaSIM-ETH. The SOM helps to analyze these data by producing a discretized mapping of their distribution in the index space onto a two-dimensional plane such that their pattern and consequently the patterns of model behaviour can be conveyed in a comprehensive manner. It is demonstrated how the SOM provides useful information about details of model behaviour and also helps identify the model parameters that are relevant for the reproduction of peak discharges and thus for flood prediction problems. It is further shown how the SOM can be used to identify those parameter sets from among the Monte-Carlo data that most closely approximate the peak discharges of a measured time series. The results represent the characteristics of the observed time series with accuracy partly superior to that of the reference simulation obtained by implementing a simple calibration strategy using the global optimization algorithm SCE-UA. The most prominent advantage of using SOM in the context of model analysis is that it allows comparative evaluation of the data from two or more models. Our results highlight the individuality of the model realizations in terms of the index measures and shed a critical light on the use and implementation of simple and yet too rigorous calibration strategies.
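
In outline, such an analysis can be reproduced with any SOM library; the sketch below uses the third-party MiniSom package as an assumed stand-in for the authors' implementation, mapping index measures from Monte-Carlo runs onto a 2-D grid:

```python
# Outline: map peak-discharge index measures onto a 2-D plane with a
# self-organizing map. MiniSom is an assumed library choice, not the
# implementation used in the study.
import numpy as np
from minisom import MiniSom

indices = np.random.rand(1000, 5)   # stand-in: index measures derived from
                                    # Monte-Carlo model runs
som = MiniSom(10, 10, indices.shape[1], sigma=1.5, learning_rate=0.5,
              random_seed=42)
som.random_weights_init(indices)
som.train_random(indices, 5000)

# Each parameter set lands on a map node; clusters of nodes reveal
# patterns of model behaviour in the index space.
positions = np.array([som.winner(v) for v in indices])
```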

  1. A portable foot-parameter-extracting system

    NASA Astrophysics Data System (ADS)

    Zhang, MingKai; Liang, Jin; Li, Wenpan; Liu, Shifan

    2016-03-01

In order to solve the problem of automatic foot measurement in garment customization, a new automatic foot-parameter-extracting system based on stereo vision, photogrammetry and heterodyne multiple frequency phase shift technology is proposed and implemented. The key technologies applied in the system are studied, including calibration of the projector, alignment of point clouds, and foot measurement. Firstly, a new projector calibration algorithm based on a plane model has been put forward to get the initial calibration parameters, and a feature point detection scheme for the calibration board image is developed. Then, an almost perfect match of the two clouds is achieved by performing a first alignment using the Sample Consensus Initial Alignment algorithm (SAC-IA) and refining the alignment using the Iterative Closest Point algorithm (ICP). Finally, the approaches used for foot-parameter extraction and the system scheme are presented in detail. Experimental results show that the RMS error of the calibration result is 0.03 pixels, and the foot-parameter-extraction experiment shows the feasibility of the extracting algorithm. Compared with the traditional measurement method, the system can be more portable, accurate and robust.
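
The coarse-then-fine alignment is a standard pipeline; the sketch below illustrates only the ICP refinement step, using Open3D as an assumed library choice (the SAC-IA stage, available in PCL, is represented here by a placeholder initial transform):

```python
# Sketch of the point-cloud alignment stage: a coarse initial alignment
# (SAC-IA in the paper) refined with ICP. Open3D is an assumed library
# choice; the file names are hypothetical.
import numpy as np
import open3d as o3d

source = o3d.io.read_point_cloud("scan_left.ply")    # hypothetical files
target = o3d.io.read_point_cloud("scan_right.ply")

T_init = np.eye(4)   # stand-in for the SAC-IA coarse alignment result

result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=2.0,   # assumed scale (mm)
    init=T_init,
    estimation_method=o3d.pipelines.registration
        .TransformationEstimationPointToPoint())
print(result.transformation, result.fitness)
```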

  2. Modified expression for bulb-tracer depletion—Effect on argon dating standards

    USGS Publications Warehouse

    Fleck, Robert J.; Calvert, Andrew T.

    2014-01-01

40Ar/39Ar geochronology depends critically on well-calibrated standards, often traceable to first-principles K-Ar age calibrations using bulb-tracer systems. Tracer systems also provide precise standards for noble-gas studies and interlaboratory calibration. The exponential expression long used for calculating isotope tracer concentrations in K-Ar age dating and calibration of 40Ar/39Ar age standards may provide a close approximation of those values, but is not correct. Appropriate equations are derived that accurately describe the depletion of tracer reservoirs and the concentrations of sequential tracers. In the modified expression the depletion constant is not in the exponent; the exponent varies only as the integer tracer number. Evaluation of the expressions demonstrates that the systematic error introduced through use of the original expression may be substantial where reservoir volumes are small and the resulting depletion constants are large. Traditional use of large reservoir-to-tracer volume ratios and the resulting small depletion constants have kept errors well below experimental uncertainties in most previous K-Ar and calibration studies. Use of the proper expression, however, permits use of volumes appropriate to the problems addressed.
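
The distinction can be made explicit (a sketch in our own notation, under the usual bulb-tracer assumptions of reservoir volume V, bulb volume v, and perfect mixing; not the paper's exact derivation):

```latex
% Each tracer draw multiplies the reservoir concentration by a fixed
% factor, so the tracer number n enters as an integer power:
\[
  C_n \;=\; C_0 \left(\frac{V}{V+v}\right)^{\!n} \;=\; C_0\,(1-D)^n,
  \qquad D \equiv \frac{v}{V+v},
\]
% whereas the traditional approximation places the depletion constant in
% the exponent,
\[
  C_n \;\approx\; C_0\, e^{-n v / V},
\]
% which is adequate only when v/V (and hence D) is small.
```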

  3. CubiCal - Fast radio interferometric calibration suite exploiting complex optimisation

    NASA Astrophysics Data System (ADS)

    Kenyon, J. S.; Smirnov, O. M.; Grobler, T. L.; Perkins, S. J.

    2018-05-01

    It has recently been shown that radio interferometric gain calibration can be expressed succinctly in the language of complex optimisation. In addition to providing an elegant framework for further development, it exposes properties of the calibration problem which can be exploited to accelerate traditional non-linear least squares solvers such as Gauss-Newton and Levenberg-Marquardt. We extend existing derivations to chains of Jones terms: products of several gains which model different aberrant effects. In doing so, we find that the useful properties found in the single term case still hold. We also develop several specialised solvers which deal with complex gains parameterised by real values. The newly developed solvers have been implemented in a Python package called CubiCal, which uses a combination of Cython, multiprocessing and shared memory to leverage the power of modern hardware. We apply CubiCal to both simulated and real data, and perform both direction-independent and direction-dependent self-calibration. Finally, we present the results of some rudimentary profiling to show that CubiCal is competitive with respect to existing calibration tools such as MeqTrees.
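
The alternating flavour of such gain solvers can be illustrated with the classic StEFCal-style diagonal update (a textbook sketch of the general approach, not CubiCal's implementation):

```python
# Textbook StEFCal-style update for direction-independent complex gains,
# with the convention V = diag(g) M diag(g)^H (V observed, M model).
import numpy as np

def stefcal(V, M, n_iter=50, tol=1e-8):
    n = V.shape[0]
    g = np.ones(n, dtype=complex)
    for it in range(n_iter):
        g_old = g.copy()
        for p in range(n):
            z = g_old * M[:, p]              # z_q = g_q M_{qp}
            # V[:, p] = conj(g_p) * z, so least squares gives conj(g_p):
            g[p] = np.conj(np.vdot(z, V[:, p]) / np.vdot(z, z))
        if it % 2 == 1:                      # averaging step stabilises
            g = 0.5 * (g + g_old)
        if np.linalg.norm(g - g_old) < tol * np.linalg.norm(g):
            break
    return g

# Quick self-test with synthetic gains (recovered up to a global phase):
rng = np.random.default_rng(1)
n = 7
g_true = rng.standard_normal(n) + 1j * rng.standard_normal(n)
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
M = M @ M.conj().T                           # Hermitian model
V = np.diag(g_true) @ M @ np.diag(g_true).conj().T
g_est = stefcal(V, M)
```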

  4. Multi-Dimensional Calibration of Impact Dynamic Models

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.; Annett, Martin S.; Jackson, Karen E.

    2011-01-01

    NASA Langley, under the Subsonic Rotary Wing Program, recently completed two helicopter tests in support of an in-house effort to study crashworthiness. As part of this effort, work is on-going to investigate model calibration approaches and calibration metrics for impact dynamics models. Model calibration of impact dynamics problems has traditionally assessed model adequacy by comparing time histories from analytical predictions to test at only a few critical locations. Although this approach provides for a direct measure of the model predictive capability, overall system behavior is only qualitatively assessed using full vehicle animations. In order to understand the spatial and temporal relationships of impact loads as they migrate throughout the structure, a more quantitative approach is needed. In this work impact shapes derived from simulated time history data are used to recommend sensor placement and to assess model adequacy using time based metrics and orthogonality multi-dimensional metrics. An approach for model calibration is presented that includes metric definitions, uncertainty bounds, parameter sensitivity, and numerical optimization to estimate parameters to reconcile test with analysis. The process is illustrated using simulated experiment data.

  5. Calibration of GPS based high accuracy speed meter for vehicles

    NASA Astrophysics Data System (ADS)

    Bai, Yin; Sun, Qiao; Du, Lei; Yu, Mei; Bai, Jie

    2015-02-01

A GPS-based high-accuracy speed meter for vehicles is a special type of GPS speed meter which uses Doppler demodulation of GPS signals to calculate the speed of a moving target. It is increasingly used as reference equipment in the field of traffic speed measurement, but acknowledged standard calibration methods are still lacking. To solve this problem, this paper presents the set-ups of simulated calibration, field test signal replay calibration, and in-field test comparison with an optical sensor based non-contact speed meter. All the experiments were carried out on particular speed values in the range of (40-180) km/h with the same GPS speed meter. The speed measurement errors of simulated calibration fall in the range of +/-0.1 km/h or +/-0.1%, with uncertainties smaller than 0.02% (k=2). The errors of replay calibration fall in the range of +/-0.1% with uncertainties smaller than 0.10% (k=2). The calibration results justify the effectiveness of the two methods. The relative deviations of the GPS speed meter from the optical sensor based non-contact speed meter fall in the range of +/-0.3%, which validates the use of GPS speed meters as reference instruments. The results of this research can provide a technical basis for the establishment of internationally standard calibration methods for GPS speed meters, and thus ensure the legal status of GPS speed meters as reference equipment in the field of traffic speed metrology.

  6. Simultaneous estimation of diet composition and calibration coefficients with fatty acid signature data

    USGS Publications Warehouse

    Bromaghin, Jeffrey F.; Budge, Suzanne M.; Thiemann, Gregory W.; Rode, Karyn D.

    2017-01-01

    Knowledge of animal diets provides essential insights into their life history and ecology, although diet estimation is challenging and remains an active area of research. Quantitative fatty acid signature analysis (QFASA) has become a popular method of estimating diet composition, especially for marine species. A primary assumption of QFASA is that constants called calibration coefficients, which account for the differential metabolism of individual fatty acids, are known. In practice, however, calibration coefficients are not known, but rather have been estimated in feeding trials with captive animals of a limited number of model species. The impossibility of verifying the accuracy of feeding trial derived calibration coefficients to estimate the diets of wild animals is a foundational problem with QFASA that has generated considerable criticism. We present a new model that allows simultaneous estimation of diet composition and calibration coefficients based only on fatty acid signature samples from wild predators and potential prey. Our model performed almost flawlessly in four tests with constructed examples, estimating both diet proportions and calibration coefficients with essentially no error. We also applied the model to data from Chukchi Sea polar bears, obtaining diet estimates that were more diverse than estimates conditioned on feeding trial calibration coefficients. Our model avoids bias in diet estimates caused by conditioning on inaccurate calibration coefficients, invalidates the primary criticism of QFASA, eliminates the need to conduct feeding trials solely for diet estimation, and consequently expands the utility of fatty acid data to investigate aspects of ecology linked to animal diets.
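
The conventional estimation step that the new model generalises can be written as a constrained least-squares fit on the simplex; a schematic with SciPy follows (the distance measure is a placeholder, and the paper's model additionally estimates the calibration coefficients jointly rather than taking them as given):

```python
# Schematic of a QFASA-type estimation step: find diet proportions pi on
# the simplex so the mixture of calibrated prey signatures matches the
# predator signature. The squared-error distance is a placeholder.
import numpy as np
from scipy.optimize import minimize

def estimate_diet(predator_sig, prey_sigs, calib):
    """predator_sig: (F,), prey_sigs: (K, F), calib: (F,) coefficients."""
    K = prey_sigs.shape[0]
    adjusted = calib * prey_sigs                   # apply calibration
    adjusted /= adjusted.sum(axis=1, keepdims=True)

    def dist(pi):
        mix = pi @ adjusted                        # mixture signature
        return np.sum((mix - predator_sig) ** 2)

    cons = ({'type': 'eq', 'fun': lambda pi: pi.sum() - 1.0},)
    res = minimize(dist, np.full(K, 1.0 / K), method='SLSQP',
                   bounds=[(0.0, 1.0)] * K, constraints=cons)
    return res.x                                   # estimated proportions
```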

  7. Advanced Mathematical Tools in Metrology III

    NASA Astrophysics Data System (ADS)

    Ciarlini, P.

The Table of Contents for the book is as follows:
* Foreword
* Invited Papers
* The ISO Guide to the Expression of Uncertainty in Measurement: A Bridge between Statistics and Metrology
* Bootstrap Algorithms and Applications
* The TTRSs: 13 Oriented Constraints for Dimensioning, Tolerancing & Inspection
* Graded Reference Data Sets and Performance Profiles for Testing Software Used in Metrology
* Uncertainty in Chemical Measurement
* Mathematical Methods for Data Analysis in Medical Applications
* High-Dimensional Empirical Linear Prediction
* Wavelet Methods in Signal Processing
* Software Problems in Calibration Services: A Case Study
* Robust Alternatives to Least Squares
* Gaining Information from Biomagnetic Measurements
* Full Papers
* Increase of Information in the Course of Measurement
* A Framework for Model Validation and Software Testing in Regression
* Certification of Algorithms for Determination of Signal Extreme Values during Measurement
* A Method for Evaluating Trends in Ozone-Concentration Data and Its Application to Data from the UK Rural Ozone Monitoring Network
* Identification of Signal Components by Stochastic Modelling in Measurements of Evoked Magnetic Fields from Peripheral Nerves
* High Precision 3D-Calibration of Cylindrical Standards
* Magnetic Dipole Estimations for MCG-Data
* Transfer Functions of Discrete Spline Filters
* An Approximation Method for the Linearization of Tridimensional Metrology Problems
* Regularization Algorithms for Image Reconstruction from Projections
* Quality of Experimental Data in Hydrodynamic Research
* Stochastic Drift Models for the Determination of Calibration Intervals
* Short Communications
* Projection Method for Lidar Measurement
* Photon Flux Measurements by Regularised Solution of Integral Equations
* Correct Solutions of Fit Problems in Different Experimental Situations
* An Algorithm for the Nonlinear TLS Problem in Polynomial Fitting
* Designing Axially Symmetric Electromechanical Systems of Superconducting Magnetic Levitation in Matlab Environment
* Data Flow Evaluation in Metrology
* A Generalized Data Model for Integrating Clinical Data and Biosignal Records of Patients
* Assessment of Three-Dimensional Structures in Clinical Dentistry
* Maximum Entropy and Bayesian Approaches to Parameter Estimation in Mass Metrology
* Amplitude and Phase Determination of Sinusoidal Vibration in the Nanometer Range using Quadrature Signals
* A Class of Symmetric Compactly Supported Wavelets and Associated Dual Bases
* Analysis of Surface Topography by Maximum Entropy Power Spectrum Estimation
* Influence of Different Kinds of Errors on Imaging Results in Optical Tomography
* Application of the Laser Interferometry for Automatic Calibration of Height Setting Micrometer
* Author Index

  8. The estimation of pointing angle and normalized surface scattering cross section from GEOS-3 radar altimeter measurements

    NASA Technical Reports Server (NTRS)

    Brown, G. S.; Curry, W. J.

    1977-01-01

The statistical error of the pointing angle estimation technique is determined as a function of the effective receiver signal-to-noise ratio. Other sources of error are addressed and evaluated, with inadequate calibration being of major concern. The impact of pointing error on the computation of normalized surface scattering cross section (sigma) from radar data and on the waveform attitude-induced altitude bias is considered, and quantitative results are presented. Pointing angle and sigma processing algorithms are presented along with some initial data. The intensive mode clean vs. clutter AGC calibration problem is analytically resolved. The use of clutter AGC data in the intensive mode is confirmed as the correct calibration set for the sigma computations.

  9. Telescope Scientist on the Advanced X-Ray Astrophysics Observatory

    NASA Technical Reports Server (NTRS)

    VanSpeybroeck, Leon

    1999-01-01

The most important activity during this reporting period was the calibration of the AXAF High Resolution Mirror Assembly (HRMA) and the analysis of the copious data which were obtained during that project. The calibration was highly successful, and will result in the AXAF being by far the best calibrated X-ray observatory ever flown, and in more accurate results for all of its users. This period also included participation in the spacecraft alignment and assembly activities and final flight readiness reviews. The planning of the first year of Telescope Scientist AXAF observations also was accomplished. The Telescope Scientist team also served as a technical resource for various problems which were encountered during this period. Many of these contributions have been documented in memoranda sent to the project.

  10. Object Occlusion Detection Using Automatic Camera Calibration for a Wide-Area Video Surveillance System

    PubMed Central

    Jung, Jaehoon; Yoon, Inhye; Paik, Joonki

    2016-01-01

This paper presents an object occlusion detection algorithm using object depth information that is estimated by automatic camera calibration. The object occlusion problem is a major factor degrading the performance of object tracking and recognition. To detect object occlusion, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors but with a generic red, green and blue (RGB) camera. As a result, the proposed algorithm can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems. PMID:27347978

  11. Inversion of Robin coefficient by a spectral stochastic finite element approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin Bangti; Zou Jun

    2008-03-01

This paper investigates a variational approach to the nonlinear stochastic inverse problem of probabilistically calibrating the Robin coefficient from boundary measurements for steady-state heat conduction. The problem is formulated as an optimization problem, and mathematical properties relevant to its numerical computation are investigated. The spectral stochastic finite element method using polynomial chaos is utilized for the discretization of the optimization problem, and its convergence is analyzed. The nonlinear conjugate gradient method is derived for the optimization system. Numerical results for several two-dimensional problems are presented to illustrate the accuracy and efficiency of the stochastic finite element method.

  12. Dose Calculation on KV Cone Beam CT Images: An Investigation of the Hu-Density Conversion Stability and Dose Accuracy Using the Site-Specific Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rong Yi, E-mail: rong@humonc.wisc.ed; Smilowitz, Jennifer; Tewatia, Dinesh

    2010-10-01

Precise calibration of Hounsfield units (HU) to electron density (HU-density) is essential to dose calculation. On-board kV cone beam computed tomography (CBCT) imaging is used predominantly for patients' positioning, but will potentially be used for dose calculation. The impacts of varying 3 imaging parameters (mAs, source-imager distance [SID], and cone angle) and phantom size on the HU number accuracy and HU-density calibrations for CBCT imaging were studied. We proposed a site-specific calibration method to achieve higher accuracy in CBCT image-based dose calculation. Three configurations of the Computerized Imaging Reference Systems (CIRS) water equivalent electron density phantom were used to simulate sites including head, lungs, and lower body (abdomen/pelvis). The planning computed tomography (CT) scan was used as the baseline for comparisons. CBCT scans of these phantom configurations were performed using the Varian Trilogy™ system in a precalibrated mode with fixed tube voltage (125 kVp), but varied mAs, SID, and cone angle. An HU-density curve was generated and evaluated for each set of scan parameters. Three HU-density tables generated using different phantom configurations with the same imaging parameter settings were selected for dose calculation on CBCT images for an accuracy comparison. Changing mAs or SID had a small impact on HU numbers. For adipose tissue, the HU discrepancy from the baseline was 20 HU in a small phantom, but 5 times larger in a large phantom. Yet, reducing the cone angle significantly decreases the HU discrepancy. The HU-density table was also affected accordingly. By performing dose comparison between CT and CBCT image-based plans, results showed that using the site-specific HU-density tables to calibrate CBCT images of different sites improves the dose accuracy to ~2%. Our phantom study showed that CBCT imaging can be a feasible option for dose computation in an adaptive radiotherapy approach if the site-specific calibration is applied.

  13. Online geometrical calibration of a mobile C-arm using external sensors

    NASA Astrophysics Data System (ADS)

    Mitschke, Matthias M.; Navab, Nassir; Schuetz, Oliver

    2000-04-01

3D tomographic reconstruction of high contrast objects such as contrast agent enhanced blood vessels or bones from x-ray images acquired by isocentric C-arm systems recently gained interest. For tomographic reconstruction, a sequence of images is captured during the C-arm rotation around the patient and the precise projection geometry has to be determined for each image. This is a difficult task, as C-arms usually do not provide accurate information about their projection geometry. Standard methods propose the use of an x-ray calibration phantom and an offline calibration, where the motion of the C-arm is assumed to be reproducible between calibration and patient run. However, mobile C-arms usually do not have this desirable property. Therefore, an online recovery of projection geometry is necessary. Here, we study the use of external tracking systems such as Polaris or Optotrak from Northern Digital, Inc., for online calibration. In order to use the external tracking system for recovery of the x-ray projection geometry, two unknown transformations have to be estimated: the relation between the x-ray imaging system and the marker plate of the tracking system, as well as that between the world and sensor coordinate systems. We describe our attempt to solve this calibration problem. Experimental results on anatomical data are presented and visually compared with the results of estimating the projection geometry with an x-ray calibration phantom.

  14. Calibration facility for environment dosimetry instruments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bercea, Sorin; Celarel, Aurelia; Cenusa, Constantin

    2013-12-16

In the last ten years, nuclear activities, as well as major nuclear events (see the Fukushima accident), have had an increasing impact on the environment, mainly by contamination with radioactive materials. The most convenient way to quickly identify the presence of some radioactive elements in the environment is to measure the dose-equivalent rate H. In this situation, information concerning the values of H due only to the natural radiation background must exist. Usually, the values of H due to the natural radiation background are very low (~10⁻⁹-10⁻⁸ Sv/h). A correct measurement of H in this range involves performing a calibration of the measuring instruments in the measuring range corresponding to the natural radiation background, which leads to important problems due to the presence of the natural background itself. The best way to overcome this difficulty is to set up the calibration stand in an area with a very low natural radiation background. In Romania, we identified an area with such special conditions at 200 m depth, in a salt mine. This paper deals with the necessary requirements for such a calibration facility, as well as with the calibration stand itself. The paper also includes a description of the calibration stand (and images) as well as the radiological and metrological parameters. This calibration facility for environment dosimetry is one of the few laboratories in this field in Europe.

  15. Geometric artifacts reduction for cone-beam CT via L0-norm minimization without dedicated phantoms.

    PubMed

    Gong, Changcheng; Cai, Yufang; Zeng, Li

    2018-01-01

For cone-beam computed tomography (CBCT), transversal shifts of the rotation center exist inevitably, which will result in geometric artifacts in CT images. In this work, we propose a novel geometric calibration method for CBCT, which can also be used in micro-CT. The symmetry property of the sinogram is used for the first calibration, and then the L0-norm of the gradient image from the reconstructed image is used as the cost function to be minimized for the second calibration. An iterative search method is adopted to pursue the local minimum of the L0-norm minimization problem. The transversal shift value is updated with a fixed step size within a search range determined by the first calibration. In addition, a graphic processing unit (GPU)-based FDK algorithm and acceleration techniques are designed to accelerate the calibration process of the presented new method. In simulation experiments, the mean absolute difference (MAD) and the standard deviation (SD) of the transversal shift value were less than 0.2 pixels between the noise-free and noisy projection images, which indicates highly accurate calibration with the new method. In real data experiments, the smaller entropies of the corrected images also indicated that a higher-resolution image was acquired using the corrected projection data and that textures were well protected. Study results also support the feasibility of applying the proposed method to other imaging modalities.
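
The second calibration stage reduces to a one-dimensional search over the shift value; a schematic follows (`fdk_reconstruct` is a hypothetical GPU-backed reconstruction helper, and the search bounds would come from the first, sinogram-symmetry calibration):

```python
# Outline of the second calibration stage: search the transversal shift
# that minimises the L0 norm of the reconstructed image's gradient.
import numpy as np

def l0_gradient(img, eps=1e-3):
    # Count of "non-zero" gradient entries, thresholded at eps.
    gx = np.diff(img, axis=0)
    gy = np.diff(img, axis=1)
    return int((np.abs(gx) > eps).sum() + (np.abs(gy) > eps).sum())

def calibrate_shift(projections, fdk_reconstruct, lo, hi, step=0.1):
    # fdk_reconstruct(projections, shift) -> 2-D image (hypothetical helper)
    shifts = np.arange(lo, hi + step, step)
    costs = [l0_gradient(fdk_reconstruct(projections, s)) for s in shifts]
    return shifts[int(np.argmin(costs))]
```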

  16. Distributing Variable Star Data to the Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Kinne, Richard C.; Templeton, M. R.; Henden, A. A.; Zografou, P.; Harbo, P.; Evans, J.; Rots, A. H.; LAZIO, J.

    2013-01-01

Effective distribution of data is a core element of effective astronomy today. The AAVSO is the home of several different unique databases. The AAVSO International Database (AID) contains over a century of photometric and time-series data on thousands of individual variable stars comprising over 22 million observations. The AAVSO Photometric All-Sky Survey (APASS) is a new photometric catalog containing calibrated photometry in Johnson B, V and Sloan g', r' and i' filters for stars with magnitudes of 10 < V < 17. The AAVSO is partnering with researchers and technologists at the Virtual Astronomical Observatory (VAO) to solve the data distribution problem for these datasets by making them available via various Virtual Observatory (VO) tools. We give specific examples of how these data can be accessed through VO toolsets and utilized for astronomical research.

  17. Semiquantitative determination of short-chain fatty acids in cane and beet sugars.

    PubMed

    Batista, Rebecca B; Grimm, Casey C; Godshall, Mary An

    2002-03-01

    Some sugars, specifically white beet sugar and raw cane sugars, possess off-flavors and off-odors. Although not necessarily the source, the presence of short-chain fatty acids serves as an indicator of an off-odor problem in sugar. Solid-phase microextraction (SPME) is used to collect the volatile compounds from the headspace of sugar. The temperature, moisture, and type of SPME fiber are varied to optimize recovery. Sugars analyzed in the absence of water using an incubation temperature of 70 degrees C with a divinylbenzene-carboxen-polydimethylsiloxane fiber yield the most reproducible results. Data from depletion analyses report a recovery level of 38% for the first injection. The semiquantitative analysis of butyric acid is accomplished using injected standards to develop a calibration curve.

  18. Bore-sight calibration of the profile laser scanner using a large size exterior calibration field

    NASA Astrophysics Data System (ADS)

    Koska, Bronislav; Křemen, Tomáš; Štroner, Martin

    2014-10-01

The bore-sight calibration procedure and results of a profile laser scanner using a large size exterior calibration field are presented in the paper. The task is a part of the Autonomous Mapping Airship (AMA) project, which aims to create a surveying system with specific properties suitable for effective surveying of medium-wide areas (units to tens of square kilometers per day). As is obvious from the project name, an airship is used as a carrier. This vehicle has some specific properties. The most important properties are high carrying capacity (15 kg), long flight time (3 hours), high operating safety and special flight characteristics such as stability of flight, in terms of vibrations, and the possibility to fly at low speed. The high carrying capacity enables the use of high quality sensors such as the professional infrared (IR) camera FLIR SC645, a high-end visible spectrum (VIS) digital camera and optics, the tactical grade INS/GPS sensor iMAR iTracerRT-F200 and the profile laser scanner SICK LD-LRS1000. The calibration method is based on direct laboratory measurement of the coordinate offset (lever-arm) and in-flight determination of the rotation offsets (bore-sights). The bore-sight determination is based on the minimization of the squares of individual point distances from measured planar surfaces.

  19. Hydrological modelling of the Chaohe Basin in China: Statistical model formulation and Bayesian inference

    NASA Astrophysics Data System (ADS)

    Yang, Jing; Reichert, Peter; Abbaspour, Karim C.; Yang, Hong

    2007-07-01

Calibration of hydrologic models is very difficult because of measurement errors in input and response, errors in model structure, and the large number of non-identifiable parameters of distributed models. The difficulties even increase in arid regions with high seasonal variation of precipitation, where the modelled residuals often exhibit high heteroscedasticity and autocorrelation. On the other hand, support of water management by hydrologic models is important in arid regions, particularly if there is increasing water demand due to urbanization. The use and assessment of model results for this purpose require a careful calibration and uncertainty analysis. Extending earlier work in this field, we developed a procedure to overcome (i) the problem of non-identifiability of distributed parameters by introducing aggregate parameters and using Bayesian inference, (ii) the problem of heteroscedasticity of errors by combining a Box-Cox transformation of results and data with seasonally dependent error variances, (iii) the problems of autocorrelated errors, missing data and outlier omission with a continuous-time autoregressive error model, and (iv) the problem of the seasonal variation of error correlations with seasonally dependent characteristic correlation times. The technique was tested with the calibration of the hydrologic sub-model of the Soil and Water Assessment Tool (SWAT) in the Chaohe Basin in North China. The results demonstrated the good performance of this approach to uncertainty analysis, particularly with respect to the fulfilment of statistical assumptions of the error model. A comparison with an independent error model and with error models that only considered a subset of the suggested techniques clearly showed the superiority of the approach based on all the features (i)-(iv) mentioned above.
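
For reference, items (ii) and (iii) build on standard forms (notation ours, a sketch rather than the paper's exact specification):

```latex
% (ii) Box-Cox transformation of model output and data,
\[
  g(y;\lambda_1,\lambda_2) \;=\; \frac{(y+\lambda_2)^{\lambda_1}-1}{\lambda_1},
\]
% chosen so that residuals of the transformed quantities have roughly
% constant (here, seasonally dependent) variance; combined with
% (iii) a continuous-time first-order autoregressive error model whose
% correlation decays with the time lag,
\[
  \operatorname{corr}\!\left(E(t),\,E(t+\Delta t)\right)
  \;=\; \exp\!\left(-\Delta t/\tau\right),
\]
% with the characteristic correlation time tau allowed to vary seasonally (iv).
```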

  20. A comparison of hydrologic models for ecological flows and water availability

    USGS Publications Warehouse

    Caldwell, Peter V; Kennen, Jonathan G.; Sun, Ge; Kiang, Julie E.; Butcher, John B; Eddy, Michelle C; Hay, Lauren E.; LaFontaine, Jacob H.; Hain, Ernie F.; Nelson, Stacy C; McNulty, Steve G

    2015-01-01

Robust hydrologic models are needed to help manage water resources for healthy aquatic ecosystems and reliable water supplies for people, but there is a lack of comprehensive model comparison studies that quantify differences in streamflow predictions among model applications developed to answer management questions. We assessed differences in daily streamflow predictions by four fine-scale models and two regional-scale monthly time step models by comparing model fit statistics and bias in ecologically relevant flow statistics (ERFSs) at five sites in the Southeastern USA. Models were calibrated to different extents, including uncalibrated (level A), calibrated to a downstream site (level B), calibrated specifically for the site (level C) and calibrated for the site with adjusted precipitation and temperature inputs (level D). All models generally captured the magnitude and variability of observed streamflows at the five study sites, and increasing level of model calibration generally improved performance. All models had at least 1 of 14 ERFSs falling outside a ±30% range of hydrologic uncertainty at every site, and ERFSs related to low flows were frequently over-predicted. Our results do not indicate that any specific hydrologic model is superior to the others evaluated at all sites and for all measures of model performance. Instead, we provide evidence that (1) model performance is as likely to be related to calibration strategy as it is to model structure and (2) simple, regional-scale models have comparable performance to the more complex, fine-scale models at a monthly time step.

  1. Use of active personal dosimeters in hospitals: EURADOS survey.

    PubMed

    Ciraj-Bjelac, Olivera; Carinou, Eleftheria; Vanhavere, Filip

    2018-06-01

Considering that occupational exposure in medicine is a matter of growing concern, active personal dosimeters (APDs) are increasingly being used in different fields of application of ionising radiation in medicine. An extensive survey to collect relevant information regarding the use of APDs in medical imaging applications of ionising radiation was organised by the EURADOS (European Radiation Dosimetry Group) Working Group 12. The objective was to collect data about the use of APDs and to identify the basic problems in the use of APDs in hospitals. APDs are most frequently used in interventional radiology and cardiology departments (54%), in nuclear medicine (29%), and in radiotherapy (12%). Most types of APDs use silicon diodes as the detector; however, in many cases their calibration is not given proper attention, as the radiation beam qualities in which they are calibrated differ significantly from those in which they are actually used. The survey revealed problems related to the use of APDs, including their reliability in the pulsed x-ray fields that are widely used in hospitals. Guidance from regulatory authorities and professional organisations on the testing and calibration of APDs used in hospitals would likely improve the situation.

  2. Long term measurement network for FIFE

    NASA Technical Reports Server (NTRS)

    Blad, Blaine L.; Walter-Shea, Elizabeth A.; Hays, Cynthia J.

    1988-01-01

The objectives were: to obtain selected instruments which were not standard equipment on the Portable Automated Mesometeorological (PAM) and Data Control Platform (DCP) stations; to assist in incorporation of these instruments onto the PAM and DCP stations; to help provide routine maintenance of the instruments; to conduct periodic instrument calibrations; and to repair or replace malfunctioning instruments when possible. All of the objectives were met or will be met soon. All instruments and the necessary instrument stands were purchased or made and were available for inclusion on the PAM and DCP stations before the beginning of IFC-1. Due to problems beyond our control, the DCP stations experienced considerable difficulty in becoming operational. To fill some of the gaps caused by the DCP problems, Campbell CR21-X data loggers were installed and the data collected on cassette tapes. Periodic checks of all instruments were made to maintain data quality, to make necessary adjustments in certain instruments, to replace malfunctioning instruments, and to provide instrument calibration. All instruments will be calibrated before the beginning of the 1988 growing season as soon as the weather permits access to all stations and provides conditions that are not too harsh to work in for extended periods of time.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Genest-Beaulieu, C.; Bergeron, P., E-mail: genest@astro.umontreal.ca, E-mail: bergeron@astro.umontreal.ca

We present a comparative analysis of atmospheric parameters obtained with the so-called photometric and spectroscopic techniques. Photometric and spectroscopic data for 1360 DA white dwarfs from the Sloan Digital Sky Survey (SDSS) are used, as well as spectroscopic data from the Villanova White Dwarf Catalog. We first test the calibration of the ugriz photometric system by using model atmosphere fits to observed data. Our photometric analysis indicates that the ugriz photometry appears well calibrated when the SDSS to AB95 zeropoint corrections are applied. The spectroscopic analysis of the same data set reveals that the so-called high-log g problem can be solved by applying published correction functions that take into account three-dimensional hydrodynamical effects. However, a comparison between the SDSS and the White Dwarf Catalog spectra also suggests that the SDSS spectra still suffer from a small calibration problem. We then compare the atmospheric parameters obtained from both fitting techniques and show that the photometric temperatures are systematically lower than those obtained from spectroscopic data. This systematic offset may be linked to the hydrogen line profiles used in the model atmospheres. We finally present the results of an analysis aimed at measuring surface gravities using photometric data only.

  4. Towards a global network of gamma-ray detector calibration facilities

    NASA Astrophysics Data System (ADS)

    Tijs, Marco; Koomans, Ronald; Limburg, Han

    2016-09-01

Gamma-ray logging tools are applied worldwide. At various locations, calibration facilities are used to calibrate these gamma-ray logging systems. Several attempts have been made to cross-correlate well known calibration pits, but this cross-correlation does not include calibration facilities in Europe or private company calibration facilities. Our aim is to set up a framework that makes it possible to interlink all calibration facilities worldwide by using `tools of opportunity' - tools that have been calibrated in different calibration facilities, whether this usage was on a coordinated basis or by coincidence. To compare the measurements of different tools, it is important to understand the behaviour of the tools in the different calibration pits. Borehole properties, such as diameter, fluid, casing and probe diameter, strongly influence the outcome of gamma-ray borehole logging. Logs need to be properly calibrated and compensated for these borehole properties in order to obtain in-situ grades or to do cross-hole correlation. Some tool providers provide tool-specific correction curves for this purpose. Others rely on reference measurements against sources of known radionuclide concentration and geometry. In this article, we present an attempt to set up a framework for transferring `local' calibrations to be applied `globally'. This framework includes corrections for any geometry and detector size to give absolute concentrations of radionuclides from borehole measurements. This model is used to compare measurements in the calibration pits of Grand Junction, located in the USA; Adelaide (previously known as AMDEL), located in Adelaide, Australia; and Stonehenge, located at Medusa Explorations BV in the Netherlands.

  5. Calibration of mass spectrometric peptide mass fingerprint data without specific external or internal calibrants

    PubMed Central

    Wolski, Witold E; Lalowski, Maciej; Jungblut, Peter; Reinert, Knut

    2005-01-01

Background: Peptide Mass Fingerprinting (PMF) is a widely used mass spectrometry (MS) method of analysis of proteins and peptides. It relies on the comparison between experimentally determined and theoretical mass spectra. The PMF process requires calibration, usually performed with external or internal calibrants of known molecular masses. Results: We have introduced two novel MS calibration methods. The first method utilises the local similarity of peptide maps generated after separation of complex protein samples by two-dimensional gel electrophoresis. It computes a multiple peak-list alignment of the data set using a modified Minimum Spanning Tree (MST) algorithm. The second method exploits the idea that hundreds of MS samples are measured in parallel on one sample support. It improves the calibration coefficients by applying a two-dimensional Thin Plate Splines (TPS) smoothing algorithm. We studied the novel calibration methods utilising data generated by three different MALDI-TOF-MS instruments. We demonstrate that a PMF data set can be calibrated without resorting to external or relying on widely occurring internal calibrants. The methods developed here were implemented in R and are part of the BioConductor package mscalib available from . Conclusion: The MST calibration algorithm is well suited to calibrate MS spectra of protein samples resulting from two-dimensional gel electrophoretic separation. The TPS based calibration algorithm might be used to correct systematic mass measurement errors observed for large MS sample supports. As compared to other methods, our combined MS spectra calibration strategy increases the peptide/protein identification rate by an additional 5-15%. PMID:16102175
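
The MST idea can be prototyped with SciPy's graph tools (an illustration only; the paper uses a modified MST algorithm and its own similarity measure, and the shared-peak count below is a simple stand-in):

```python
# Illustration of the MST step: build a graph of pairwise peak-list
# similarities and align spectra along the minimum spanning tree, so each
# spectrum is recalibrated against its most similar neighbour.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def shared_peaks(a, b, tol=0.2):
    # Count peaks (masses in Da) of list `a` matching list `b` within tol.
    return sum(np.any(np.abs(b - m) < tol) for m in a)

def mst_edges(peak_lists):
    n = len(peak_lists)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            s = shared_peaks(peak_lists[i], peak_lists[j])
            dist[i, j] = 1.0 / (1.0 + s)       # more shared peaks = closer
    tree = minimum_spanning_tree(dist)         # sparse matrix of tree edges
    return np.transpose(tree.nonzero())        # edge list: alignment order
```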

  6. Updated radiometric calibration for the Landsat-5 thematic mapper reflective bands

    USGS Publications Warehouse

    Helder, D.L.; Markham, B.L.; Thome, K.J.; Barsi, J.A.; Chander, G.; Malla, R.

    2008-01-01

The Landsat-5 Thematic Mapper (TM) has been the workhorse of the Landsat system. Launched in 1984, it continues collecting data through the time frame of this paper. Thus, it provides an invaluable link to the past history of the land features of the Earth's surface, and it becomes imperative to provide an accurate radiometric calibration of the reflective bands to the user community. Previous calibration has been based on information obtained from prelaunch, the onboard calibrator, vicarious calibration attempts, and cross-calibration with Landsat-7. Currently, additional data sources are available to improve this calibration. Specifically, improvements in vicarious calibration methods and development of the use of pseudoinvariant sites for trending provide two additional independent calibration sources. The use of these additional estimates has resulted in a consistent calibration approach that ties together all of the available calibration data sources. Results from this analysis indicate that a simple exponential or a constant model may be used for all bands throughout the lifetime of Landsat-5 TM. Where previously time constants for the exponential models were approximately one year, the updated model has significantly longer time constants in bands 1-3. In contrast, bands 4, 5, and 7 are shown to be best modeled by a constant. The models proposed in this paper indicate calibration knowledge of 5% or better early in life, decreasing to nearly 2% later in life. These models have been implemented at the U.S. Geological Survey Earth Resources Observation and Science (EROS) and are the default calibration used for all Landsat TM data now distributed through EROS. © 2008 IEEE.

  7. Application of an automatic approach to calibrate the NEMURO nutrient-phytoplankton-zooplankton food web model in the Oyashio region

    NASA Astrophysics Data System (ADS)

    Ito, Shin-ichi; Yoshie, Naoki; Okunishi, Takeshi; Ono, Tsuneo; Okazaki, Yuji; Kuwata, Akira; Hashioka, Taketo; Rose, Kenneth A.; Megrey, Bernard A.; Kishi, Michio J.; Nakamachi, Miwa; Shimizu, Yugo; Kakehi, Shigeho; Saito, Hiroaki; Takahashi, Kazutaka; Tadokoro, Kazuaki; Kusaka, Akira; Kasai, Hiromi

    2010-10-01

    The Oyashio region in the western North Pacific supports high biological productivity and has been well monitored. We applied the NEMURO (North Pacific Ecosystem Model for Understanding Regional Oceanography) model to simulate the nutrient, phytoplankton, and zooplankton dynamics. Determination of parameter values is very important, yet ad hoc calibration methods are often used. We used the automatic calibration software PEST (model-independent Parameter ESTimation), which has been used previously with NEMURO, but in a system without ontogenetic vertical migration of the large zooplankton functional group. Determining the performance of PEST with vertical migration, and obtaining a set of realistic parameter values for the Oyashio, will likely be useful in future applications of NEMURO. Five identical twin simulation experiments were performed with the one-box version of NEMURO. The experiments differed in whether monthly snapshot or averaged state variables were used, whether state variables were model functional groups or were aggregated (total phytoplankton, small plus large zooplankton), and whether vertical migration of large zooplankton was included. We then applied NEMURO to monthly climatological field data covering one year for the Oyashio, and compared model fits and parameter values between the PEST-determined estimates and the values used in previous applications to the Oyashio region that relied on ad hoc calibration. We substituted the PEST and ad hoc calibrated parameter values into a 3-D version of NEMURO for the western North Pacific, and compared the two sets of spatial maps of chlorophyll-a with satellite-derived data. The identical twin experiments demonstrated that PEST could recover the known model parameter values when vertical migration was included, and that over-fitting can occur as a result of slight differences in the values of the state variables. PEST recovered known parameter values when using monthly snapshots of aggregated state variables, but estimated a different set of parameters with monthly averaged values. Both sets of parameters resulted in good fits of the model to the simulated data. Disaggregating the variables provided to PEST into functional groups did not solve the over-fitting problem, and including vertical migration seemed to amplify the problem. When we used the climatological field data, simulated values with PEST-estimated parameters were closer to these field data than with the previously determined ad hoc set of parameter values. When these same PEST and ad hoc sets of parameter values were substituted into 3-D NEMURO (without vertical migration), the PEST-estimated parameter values generated spatial maps that were similar to the satellite data for the Kuroshio Extension during January and March and for the subarctic ocean from May to November. With non-linear problems, such as vertical migration, PEST should be used with caution because parameter estimates can be sensitive to how the data are prepared and to the values used for the search parameters of PEST. We recommend using PEST, or other parameter optimization methods, to generate first-order parameter estimates for simulating specific systems and for insertion into 2-D and 3-D models. The parameter estimates that are generated are useful, and the inconsistencies between simulated values and the available field data provide valuable information on model behavior and the dynamics of the ecosystem.
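
    The identical-twin logic described above is easy to reproduce in miniature: generate "observations" from a model run with known parameters, then check whether an optimizer recovers them. The sketch below uses SciPy's least_squares as a stand-in for PEST and a two-parameter growth/grazing toy model as a stand-in for NEMURO; everything in it is hypothetical.

```python
# Sketch of an identical-twin calibration experiment with a toy plankton model.
import numpy as np
from scipy.optimize import least_squares

def toy_model(params, t):
    """Phytoplankton biomass under logistic growth minus linear grazing,
    integrated with a simple forward-Euler step."""
    growth, grazing = params
    p = np.empty_like(t)
    p[0] = 0.1
    for i in range(1, len(t)):
        dp = growth * p[i-1] * (1.0 - p[i-1]) - grazing * p[i-1]
        p[i] = p[i-1] + dp * (t[i] - t[i-1])
    return p

t = np.linspace(0.0, 30.0, 60)
true_params = np.array([0.8, 0.2])
obs = toy_model(true_params, t)          # the "truth" run of the twin experiment

# Levenberg-Marquardt-style search from a deliberately wrong starting guess.
result = least_squares(lambda q: toy_model(q, t) - obs, x0=[0.4, 0.1])
print("recovered parameters:", result.x)  # should be close to [0.8, 0.2]
```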

  8. Hydrogeology and flow of water in a sand and gravel aquifer contaminated by wood-preserving compounds, Pensacola, Florida

    USGS Publications Warehouse

    Franks, B.J.

    1988-01-01

    The sand and gravel aquifer in southern Escambia County, Florida, is a typical surficial aquifer composed of quartz sands and gravels interbedded locally with silts and clays. Problems of groundwater contamination from leaking surface impoundments are common in surficial aquifers and are a subject of increasing concern and attention. A potentially widespread contamination problem involves organic chemicals from wood-preserving processes. Because creosote is the most extensively used industrial preservative in the United States, an abandoned wood-treatment plant near Pensacola was chosen for investigation. This report describes the hydrogeology and groundwater flow system of the sand and gravel aquifer near the plant. A three-dimensional simulation of groundwater flow in the aquifer was evaluated under steady-state conditions. The model was calibrated on the basis of observed water levels from January 1986. Calibration criteria included reproducing all water levels within the accuracy of the data (one-half contour interval in most cases). Sensitivity analysis showed that the simulations were most sensitive to recharge and to the vertical leakance of the confining units between layers 1 and 2, and relatively insensitive to changes in hydraulic conductivity and transmissivity and to other changes in vertical leakance. Applying the results of the calibrated flow model to the evaluation of solute transport may require finer discretization of the contaminated area, including more sublayers, than was needed for calibration of the groundwater flow system itself. (USGS)
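
    The calibration criterion quoted above (all simulated water levels within one-half contour interval of the observations) reduces to a simple residual check, sketched below with hypothetical heads and an assumed 5-ft contour interval.

```python
# Sketch: check simulated heads against the half-contour-interval criterion.
import numpy as np

contour_interval_ft = 5.0                       # assumed map contour interval
tolerance = 0.5 * contour_interval_ft

observed = np.array([12.3, 15.8, 9.4, 20.1])    # observed heads (ft), hypothetical
simulated = np.array([11.9, 16.2, 9.0, 21.8])   # simulated heads (ft), hypothetical

residuals = simulated - observed
print("max |residual| =", np.abs(residuals).max())
print("all wells within criterion:", bool((np.abs(residuals) <= tolerance).all()))
```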

  9. 27 Years of Satellite Ozone Data: Merging of Data Records from Multiple Instruments to Observe Global Trends and Recovery

    NASA Technical Reports Server (NTRS)

    Stolarski, Richard S.

    2007-01-01

    Satellite measurements provide a unique global view of the stratospheric ozone layer. The perspective from satellites allowed for the early mapping of the extent of the phenomenon that became known as the ozone hole. The use of satellite data for global trends outside of the ozone hole confronts the problem of possible drift in the instrument calibration. The TOMS and SBUV instruments on Nimbus 7 lasted for more than a decade. During that time, the diffuser plate used to reflect sunlight into the measurement degraded (darkened), and the instruments each had a number of events that made calibration determination difficult. Initially, the TOMS data were used for global trends by adjusting the overall calibration to agree with a set of ground-based measurement stations. But this was unsatisfactory because the record was not independent of those ground measurements, and problems were found at many of the ground stations by using TOMS as a transfer standard. After many years of dedicated work, the TOMS/SBUV team learned how to correct for instrument drift, remove the interfering effects of aerosols, and establish instrument-to-instrument calibrations, resulting in a long-term record that can be used for accurate trend and recovery determination. The global view of the satellites allows for determination not only of temporal changes in ozone, but also of spatial fingerprints that allow more confidence in assigning cause to observed changes.

  10. An efficient surrogate-based simulation-optimization method for calibrating a regional MODFLOW model

    NASA Astrophysics Data System (ADS)

    Chen, Mingjie; Izady, Azizallah; Abdalla, Osman A.

    2017-01-01

    The simulation-optimization method entails a large number of model simulations, which is computationally intensive or even prohibitive if each model simulation is extremely time-consuming. Statistical models have been examined as surrogates for the high-fidelity physical model during the simulation-optimization process to tackle this problem. Among them, Multivariate Adaptive Regression Splines (MARS), a non-parametric adaptive regression method, is particularly well suited to high-dimensional and discontinuous data. Furthermore, the stability and accuracy of a MARS model can be improved by bootstrap aggregating (bagging). In this paper, the Bagging MARS (BMARS) method is integrated into a surrogate-based simulation-optimization framework to calibrate a three-dimensional MODFLOW model, which was developed to simulate groundwater flow in an arid hardrock-alluvium region in northwestern Oman. The physical MODFLOW model is surrogated by the statistical model developed using the BMARS algorithm. The surrogate model, which is fitted and validated using a training dataset generated by the physical model, can approximate solutions rapidly. An efficient Sobol' method is employed to calculate global sensitivities of head outputs to input parameters, which are used to analyze their spatiotemporal importance for the model outputs. Only sensitive parameters are included in the calibration process to further improve computational efficiency. The normalized root mean square error (NRMSE) between measured and simulated heads at observation wells is used as the objective function to be minimized during optimization. The reasonable history match between the simulated and observed heads demonstrates the feasibility of this highly efficient calibration framework.
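
    The core loop of such a framework (train a bagged surrogate on a design of physical-model runs, then minimize NRMSE against observations using the surrogate) can be sketched compactly. In the example below, scikit-learn's BaggingRegressor over decision trees stands in for BMARS, since MARS itself is not in scikit-learn, a cheap analytic function stands in for MODFLOW, and all data are hypothetical.

```python
# Sketch: surrogate-based calibration with a bagged regressor and NRMSE objective.
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def modflow_stand_in(params):
    """Stand-in for an expensive MODFLOW run: (K, recharge) -> heads at 3 wells."""
    k, r = params
    return np.array([10 + 2 * k - r, 8 + k + 0.5 * r, 12 - k + r])

observed = modflow_stand_in((1.5, 0.8)) + rng.normal(0, 0.05, 3)

# Train one bagged surrogate per observation well on a design of model runs.
X = rng.uniform([0.5, 0.2], [3.0, 1.5], size=(300, 2))
Y = np.array([modflow_stand_in(x) for x in X])
surrogates = [BaggingRegressor(DecisionTreeRegressor(), n_estimators=50).fit(X, Y[:, j])
              for j in range(Y.shape[1])]

def nrmse(params):
    """NRMSE between surrogate-predicted and observed heads."""
    sim = np.array([s.predict(np.atleast_2d(params))[0] for s in surrogates])
    return np.sqrt(np.mean((sim - observed) ** 2)) / (observed.max() - observed.min())

best = minimize(nrmse, x0=np.array([1.0, 1.0]), method="Nelder-Mead")
print("calibrated (K, recharge):", best.x)
```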

  11. 40 CFR 89.313 - Initial calibration of analyzers.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... the HFID analyzer shall be optimized in order to meet the specifications in § 89.319(b)(2). (c) Zero... analyzers shall be set at zero. (2) Introduce the appropriate calibration gases to the analyzers and the values recorded. The same gas flow rates shall be used as when sampling exhaust. (d) Rechecking of zero...

  12. 40 CFR 89.313 - Initial calibration of analyzers.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... the HFID analyzer shall be optimized in order to meet the specifications in § 89.319(b)(2). (c) Zero... analyzers shall be set at zero. (2) Introduce the appropriate calibration gases to the analyzers and the values recorded. The same gas flow rates shall be used as when sampling exhaust. (d) Rechecking of zero...

  13. 40 CFR 89.313 - Initial calibration of analyzers.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... the HFID analyzer shall be optimized in order to meet the specifications in § 89.319(b)(2). (c) Zero... analyzers shall be set at zero. (2) Introduce the appropriate calibration gases to the analyzers and the values recorded. The same gas flow rates shall be used as when sampling exhaust. (d) Rechecking of zero...

  14. 40 CFR 89.313 - Initial calibration of analyzers.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... the HFID analyzer shall be optimized in order to meet the specifications in § 89.319(b)(2). (c) Zero... analyzers shall be set at zero. (2) Introduce the appropriate calibration gases to the analyzers and the values recorded. The same gas flow rates shall be used as when sampling exhaust. (d) Rechecking of zero...

  15. 40 CFR 89.313 - Initial calibration of analyzers.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... the HFID analyzer shall be optimized in order to meet the specifications in § 89.319(b)(2). (c) Zero... analyzers shall be set at zero. (2) Introduce the appropriate calibration gases to the analyzers and the values recorded. The same gas flow rates shall be used as when sampling exhaust. (d) Rechecking of zero...

  16. Calibrating a Measure of Gender Differences in Motivation for Learning Technology

    ERIC Educational Resources Information Center

    Hwang, Young Suk; Fisher, William; Vrongistinos, Konstantinos

    2009-01-01

    This paper reports on the theory, design, and calibration of an instrument for measuring gender difference in motivation for learning technology. The content of the instrument was developed based upon the motivational theories of Eccles and others. More specifically, the learners' self-concept of ability, perception of technology, perception of…

  17. 40 CFR Appendix A to Part 75 - Specifications and Test Procedures

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... from the National Technical Information Service, 5285 Port Royal Road, Springfield, VA, 703-605-6585 or... specified in section 5.2 of this appendix. Introduce the calibration gas at the gas injection port, as... 5.1 of this appendix. Introduce the calibration gas at the gas injection port, as specified in...

  18. Mobile micro-colorimeter and micro-spectrometer sensor modules as enablers for the replacement of subjective inspections by objective measurements for optically clear colored liquids in-field

    NASA Astrophysics Data System (ADS)

    Dittrich, Paul-Gerald; Grunert, Fred; Ehehalt, Jörg; Hofmann, Dietrich

    2015-03-01

    The aim of the paper is to show that the colorimetric characterization of optically clear colored liquids can be performed with different measurement methods and their application-specific multichannel spectral sensors. The measurement methods are differentiated by the type of multichannel spectral sensor applied and therefore by their spectral resolution, measurement speed, measurement accuracy, and measurement cost. The paper describes how different types of multichannel spectral sensors are calibrated with different calibration methods and how the measured values can be used for further colorimetric calculations. The different measurement methods and the different application-specific calibration methods are explained methodically and theoretically. The paper demonstrates how different multichannel spectral sensor modules, calibrated with different methods, can be applied with smartpads for the calculation of measurement results both in the laboratory and in the field. A practical example is the application of different multichannel spectral sensors to the colorimetric characterization of petroleum oils and fuels on the Saybolt color scale.
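
    One common calibration method of the kind discussed above is a linear least-squares mapping from raw channel responses to CIE XYZ tristimulus values, built from reference color standards. The sketch below illustrates this; the sensor readings and reference values are randomly generated placeholders, not measurements from any instrument in the paper.

```python
# Sketch: calibrate a multichannel color sensor with a linear mapping to XYZ.
import numpy as np

# Raw responses of a 6-channel sensor to 8 reference standards (8 x 6).
S = np.random.default_rng(1).uniform(0.1, 1.0, size=(8, 6))
# Known XYZ values of the same standards from a reference instrument (8 x 3).
XYZ_ref = np.random.default_rng(2).uniform(10, 90, size=(8, 3))

# Calibration matrix M (6 x 3) minimizing ||S @ M - XYZ_ref|| in least squares.
M, *_ = np.linalg.lstsq(S, XYZ_ref, rcond=None)

# Apply the calibration to a new sensor reading.
reading = np.random.default_rng(3).uniform(0.1, 1.0, size=6)
print("estimated XYZ:", reading @ M)
```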

  19. Numerical simulation of damage evolution for ductile materials and mechanical properties study

    NASA Astrophysics Data System (ADS)

    El Amri, A.; Hanafi, I.; Haddou, M. E. Y.; Khamlichi, A.

    2015-12-01

    This paper presents the results of numerical modelling of ductile fracture and failure of elements made of 5182H111 aluminium alloy subjected to dynamic tension. The analysis was performed using the Johnson-Cook model within the ABAQUS software. The difficulty in modelling and predicting ductile fracture mainly arises because there is a tremendous span of length scales from the structural problem to the micro-mechanics problem governing the material separation process. This study used the experimental results to calibrate a simple crack propagation criterion for shell elements, of a kind often used in practical analyses. The performance of the proposed model is in general good, and it is believed that the presented results and the experimental-numerical calibration procedure can be of use in practical finite-element simulations.
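
    For reference, the Johnson-Cook flow stress used in such analyses is sigma = (A + B*eps^n) * (1 + C*ln(epsdot/epsdot0)) * (1 - T*^m). The sketch below evaluates it directly; the material constants are placeholder values, not the calibrated 5182H111 parameters from this study.

```python
# Sketch: evaluate the Johnson-Cook flow-stress model with placeholder constants.
import numpy as np

def johnson_cook_stress(eps, eps_rate, temp,
                        A=150e6, B=300e6, n=0.3, C=0.014, m=1.0,
                        eps_rate0=1.0, T_room=293.0, T_melt=900.0):
    """Flow stress (Pa) from plastic strain, strain rate (1/s), and temperature (K)."""
    T_star = (temp - T_room) / (T_melt - T_room)   # homologous temperature
    strain_term = A + B * eps ** n                 # strain hardening
    rate_term = 1.0 + C * np.log(eps_rate / eps_rate0)  # rate sensitivity
    thermal_term = 1.0 - T_star ** m               # thermal softening
    return strain_term * rate_term * thermal_term

print(johnson_cook_stress(eps=0.05, eps_rate=100.0, temp=350.0))
```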

  20. Preparation of the calibration unit for LINC-NIRVANA

    NASA Astrophysics Data System (ADS)

    Labadie, Lucas; de Bonis, Fulvio; Egner, Sebastian; Herbst, Tom; Bizenberger, Peter; Kürster, Martin; Delboulé, Alain

    2008-07-01

    We present in this paper the status of the calibration unit for the interferometric infrared imager LINC-NIRVANA that will be installed on the Large Binocular Telescope, Arizona. LINC-NIRVANA will combine high angular resolution (~10 mas in J) and a wide field of view (up to 2'×2') thanks to the combined use of interferometry and multi-conjugate adaptive optics (MCAO). The goal of the calibration unit is to provide calibration tools for the different sub-systems of the instrument. We give an overview of the different tasks that are foreseen, as well as of the preliminary detailed design. We show some interferometric results obtained with specific fiber splitters optimized for LINC-NIRVANA. The different components of the calibration unit will be used either during the integration phase on site or during the science exploitation phase of the instrument.
