Sample records for numerical calibration method

  1. A novel approach to calibrate the hemodynamic model using functional Magnetic Resonance Imaging (fMRI) measurements.

    PubMed

    Khoram, Nafiseh; Zayane, Chadia; Djellouli, Rabia; Laleg-Kirati, Taous-Meriem

    2016-03-15

    The calibration of the hemodynamic model that describes changes in blood flow and blood oxygenation during brain activation is a crucial step for successfully monitoring and possibly predicting brain activity. This in turn has the potential to support the diagnosis and treatment of brain diseases at early stages. We propose an efficient numerical procedure for calibrating the hemodynamic model using fMRI measurements. The proposed solution methodology is a regularized iterative method equipped with a Kalman filtering-type procedure. The Newton component of the proposed method addresses the nonlinear aspect of the problem. The regularization feature is used to ensure the stability of the algorithm. The Kalman filter procedure is incorporated to address the noise in the data. Numerical results obtained with synthetic data as well as with real fMRI measurements are presented to illustrate the accuracy, robustness to noise, and cost-effectiveness of the proposed method. We present numerical results that clearly demonstrate that the proposed method outperforms the Cubature Kalman Filter (CKF), one of the most prominent existing numerical methods. We have designed an iterative numerical technique, called the TNM-CKF algorithm, for calibrating the mathematical model that describes the single-event-related brain response when fMRI measurements are given. The method appears to be highly accurate and effective in reconstructing the BOLD signal even when the measurements are tainted with high noise levels (as high as 30%). Published by Elsevier B.V.
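
    As a rough numerical illustration of two of the ingredients named above, a Newton-type iteration stabilized by Tikhonov regularization, the sketch below calibrates a toy forward model; it is not the authors' TNM-CKF (the Kalman filtering stage and the hemodynamic balloon model are omitted, and the model f() is a hypothetical stand-in):

    ```python
    # Sketch: Tikhonov-regularized Gauss-Newton on a toy nonlinear model,
    # illustrating the Newton + regularization ingredients of TNM-CKF-style
    # calibration. f() is a hypothetical stand-in, not the hemodynamic model.
    import numpy as np

    def f(theta, t):
        a, b = theta                      # toy forward model: damped oscillation
        return a * np.exp(-b * t) * np.sin(t)

    def jacobian(theta, t, eps=1e-6):
        J = np.empty((t.size, theta.size))    # central finite differences
        for k in range(theta.size):
            d = np.zeros_like(theta)
            d[k] = eps
            J[:, k] = (f(theta + d, t) - f(theta - d, t)) / (2 * eps)
        return J

    def regularized_gauss_newton(y, t, theta0, alpha=1e-2, n_iter=20):
        theta = theta0.astype(float)
        for _ in range(n_iter):
            r = y - f(theta, t)               # data misfit
            J = jacobian(theta, t)
            # the Tikhonov term alpha*I stabilizes a possibly ill-posed step
            step = np.linalg.solve(J.T @ J + alpha * np.eye(theta.size), J.T @ r)
            theta = theta + step
        return theta

    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 200)
    y = f(np.array([2.0, 0.3]), t) + 0.3 * rng.standard_normal(t.size)  # noisy data
    print(regularized_gauss_newton(y, t, np.array([1.0, 1.0])))
    ```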

  2. Efficient calibration for imperfect computer models

    DOE PAGES

    Tuo, Rui; Wu, C. F. Jeff

    2015-12-01

    Many computer models contain unknown parameters which need to be estimated using physical observations. The calibration method based on Gaussian process models, however, may lead to unreasonable estimates for imperfect computer models. In this work, we extend this line of study to calibration problems with stochastic physical data. We propose a novel method, called the L2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied. Theoretical analysis shows that it is consistent but not efficient. Numerical examples show that the proposed method outperforms the existing ones.
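
    A rough sketch of the L2-calibration idea under simplifying assumptions (synthetic data, a toy simulator, and scikit-learn's Gaussian process in place of the paper's estimator): smooth the physical data nonparametrically, then pick the parameter that minimizes the L2 distance between the smoothed response and the computer model.

    ```python
    # Sketch of L2 calibration: (1) smooth physical data with a GP,
    # (2) minimize the L2 distance between the GP mean and the simulator.
    # The "reality" and simulator below are illustrative stand-ins.
    import numpy as np
    from scipy.optimize import minimize_scalar
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    def computer_model(x, theta):
        return np.sin(theta * x)          # imperfect simulator

    rng = np.random.default_rng(1)
    x_phys = rng.uniform(0, 2 * np.pi, 60)[:, None]
    # reality = simulator at theta=1.3 + discrepancy + noise
    y_phys = (np.sin(1.3 * x_phys[:, 0]) + 0.1 * x_phys[:, 0]
              + 0.05 * rng.standard_normal(60))

    gp = GaussianProcessRegressor(RBF(1.0) + WhiteKernel(0.01)).fit(x_phys, y_phys)

    xg = np.linspace(0, 2 * np.pi, 400)   # quadrature grid for the L2 norm
    yhat = gp.predict(xg[:, None])

    def l2_distance(theta):
        return np.trapz((yhat - computer_model(xg, theta)) ** 2, xg)

    print(minimize_scalar(l2_distance, bounds=(0.5, 3.0), method="bounded").x)
    ```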

  3. Dry calibration of electromagnetic flowmeters based on numerical models combining multiple physical phenomena (multiphysics)

    NASA Astrophysics Data System (ADS)

    Fu, X.; Hu, L.; Lee, K. M.; Zou, J.; Ruan, X. D.; Yang, H. Y.

    2010-10-01

    This paper presents a method for dry calibration of an electromagnetic flowmeter (EMF). The method, which determines the voltage induced in the EMF as conductive liquid flows through a magnetic field, numerically solves a coupled set of multiphysical equations with measured boundary conditions for the magnetic, electric, and flow fields in the measuring pipe of the flowmeter. Specifically, this paper details the formulation of dry calibration and an efficient algorithm (which adaptively minimizes the number of measurements and requires only the normal component of the magnetic flux density as a boundary condition on the pipe surface to reconstruct the magnetic field involved) for computing the sensitivity of the EMF. Along with an in-depth discussion of factors that could significantly affect the final precision of a dry-calibrated EMF, the effects of flow disturbance on measuring errors have been studied experimentally by installing a baffle at the inflow port of the EMF. Results of the dry calibration of an actual EMF were compared against flow-rig calibration; excellent agreement (within 0.3%) between dry calibration and flow-rig tests verifies the multiphysical computation of the fields and the robustness of the method. Since it requires no actual flow, dry calibration is particularly useful for calibrating large-diameter EMFs, for which conventional flow-rig methods are often costly and difficult to implement.

  4. Automated Calibration For Numerical Models Of Riverflow

    NASA Astrophysics Data System (ADS)

    Fernandez, Betsaida; Kopmann, Rebekka; Oladyshkin, Sergey

    2017-04-01

    Calibration of numerical models has been fundamental since the beginning of all types of hydro-system modeling, in order to approximate the parameters that can mimic the overall system behavior. Thus, an assessment of different deterministic and stochastic optimization methods is undertaken to compare their robustness, computational feasibility, and global search capacity. Also, the uncertainty of the most suitable methods is analyzed. These optimization methods minimize an objective function that compares synthetic measurements and simulated data. Synthetic measurement data replace the observed data set to guarantee an existing parameter solution. The input data for the objective function derive from a hydro-morphodynamic numerical model which represents a 180-degree bend channel. The hydro-morphological numerical model shows a high level of ill-posedness in the mathematical problem. The minimization of the objective function by different candidate optimization methods indicates a failure of some of the gradient-based methods, such as Newton Conjugate Gradient and BFGS. Others reveal partial convergence, such as Nelder-Mead, Polak-Ribière, L-BFGS-B, Truncated Newton Conjugate Gradient, and Trust-Region Newton Conjugate Gradient. Further ones yield parameter solutions that range outside the physical limits, such as Levenberg-Marquardt and LeastSquareRoot. Moreover, there is a significant computational demand for genetic optimization methods, such as Differential Evolution and Basin-Hopping, as well as for Brute Force methods. The deterministic Sequential Least Squares Programming method and the stochastic Bayesian inference method produce the best optimization results. Keywords: automated calibration of hydro-morphodynamic numerical models, Bayesian inference theory, deterministic optimization methods.
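
    The candidate methods listed read as SciPy's optimizer suite. Assuming that mapping, a self-contained sketch of such a comparison, with the Rosenbrock function standing in for the expensive hydro-morphodynamic calibration objective, might look like this:

    ```python
    # Sketch: comparing gradient-based, derivative-free, and evolutionary
    # optimizers on a cheap surrogate objective (Rosenbrock), the way a
    # calibration objective would be minimized in such a study.
    import numpy as np
    from scipy.optimize import (minimize, differential_evolution,
                                rosen, rosen_der, rosen_hess)

    x0 = np.array([-1.2, 1.0])
    gradient_based = ["Newton-CG", "BFGS", "L-BFGS-B", "TNC", "trust-ncg"]

    for method in gradient_based + ["Nelder-Mead", "SLSQP", "Powell"]:
        opts = {}
        if method in gradient_based or method == "SLSQP":
            opts["jac"] = rosen_der           # analytic gradient
        if method == "trust-ncg":
            opts["hess"] = rosen_hess         # trust-region needs a Hessian
        res = minimize(rosen, x0, method=method, **opts)
        print(f"{method:12s} success={res.success} x={np.round(res.x, 4)}")

    # a stochastic/evolutionary method for comparison
    res = differential_evolution(rosen, bounds=[(-2, 2), (-2, 2)], seed=0)
    print("DifferentialEvolution", np.round(res.x, 4))
    ```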

  5. FAST Model Calibration and Validation of the OC5-DeepCwind Floating Offshore Wind System Against Wave Tank Test Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendt, Fabian F; Robertson, Amy N; Jonkman, Jason

    During the course of the Offshore Code Comparison Collaboration, Continued, with Correlation (OC5) project, which focused on the validation of numerical methods through comparison against tank test data, the authors created a numerical FAST model of the 1:50-scale DeepCwind semisubmersible system that was tested at the Maritime Research Institute Netherlands ocean basin in 2013. This paper discusses several model calibration studies that were conducted to identify model adjustments that improve the agreement between the numerical simulations and the experimental test data. These calibration studies cover wind-field-specific parameters (coherence, turbulence), hydrodynamic and aerodynamic modeling approaches, as well as rotor model (blade-pitch and blade-mass imbalances) and tower model (structural tower damping coefficient) adjustments. These calibration studies were conducted based on relatively simple calibration load cases (wave only/wind only). The agreement between the final FAST model and experimental measurements is then assessed based on more-complex combined wind and wave validation cases.

  6. Simultaneous Calibration: A Joint Optimization Approach for Multiple Kinect and External Cameras.

    PubMed

    Liao, Yajie; Sun, Ying; Li, Gongfa; Kong, Jianyi; Jiang, Guozhang; Jiang, Du; Cai, Haibin; Ju, Zhaojie; Yu, Hui; Liu, Honghai

    2017-06-24

    Camera calibration is a crucial problem in many applications, such as 3D reconstruction, structure from motion, object tracking and face alignment. Numerous methods have been proposed to solve the above problem with good performance in the last few decades. However, few methods are targeted at the joint calibration of multiple sensors (more than four devices), which is a practical issue in real-time systems. In this paper, we propose a novel method and a corresponding workflow framework to simultaneously calibrate the relative poses of a Kinect and three external cameras. By optimizing the final cost function and adding corresponding weights to the external cameras in different locations, an effective joint calibration of multiple devices is constructed. Furthermore, the method is tested on a practical platform, and experiment results show that the proposed joint calibration method can achieve satisfactory performance in a practical real-time system, with accuracy higher than that of the manufacturer's calibration.

  7. Simultaneous Calibration: A Joint Optimization Approach for Multiple Kinect and External Cameras

    PubMed Central

    Liao, Yajie; Sun, Ying; Li, Gongfa; Kong, Jianyi; Jiang, Guozhang; Jiang, Du; Cai, Haibin; Ju, Zhaojie; Yu, Hui; Liu, Honghai

    2017-01-01

    Camera calibration is a crucial problem in many applications, such as 3D reconstruction, structure from motion, object tracking and face alignment. Numerous methods have been proposed to solve the above problem with good performance in the last few decades. However, few methods are targeted at the joint calibration of multiple sensors (more than four devices), which is a practical issue in real-time systems. In this paper, we propose a novel method and a corresponding workflow framework to simultaneously calibrate the relative poses of a Kinect and three external cameras. By optimizing the final cost function and adding corresponding weights to the external cameras in different locations, an effective joint calibration of multiple devices is constructed. Furthermore, the method is tested on a practical platform, and experiment results show that the proposed joint calibration method can achieve satisfactory performance in a practical real-time system, with accuracy higher than that of the manufacturer’s calibration. PMID:28672823

  8. A Theoretical Framework for Calibration in Computer Models: Parametrization, Estimation and Convergence Properties

    DOE PAGES

    Tuo, Rui; Wu, C. F. Jeff

    2016-07-19

    Calibration parameters in deterministic computer experiments are those attributes that cannot be measured or are not available in physical experiments. Here, an approach is presented to estimate them using data from physical experiments and computer simulations. A theoretical framework is given which allows us to study the issues of parameter identifiability and estimation. We define L2-consistency for calibration as a justification for calibration methods. It is shown that a simplified version of the original KO method leads to asymptotically L2-inconsistent calibration. This L2-inconsistency can be remedied by modifying the original estimation procedure. A novel calibration method, called the L2 calibration, is proposed and proven to be L2-consistent and to enjoy an optimal convergence rate. Furthermore, a numerical example and some mathematical analysis are used to illustrate the source of the L2-inconsistency problem.

  9. Residual mode correction in calibrating nonlinear damper for vibration control of flexible structures

    NASA Astrophysics Data System (ADS)

    Sun, Limin; Chen, Lin

    2017-10-01

    Residual mode correction is found crucial in calibrating linear resonant absorbers for flexible structures. The classic modal representation augmented with stiffness and inertia correction terms accounting for non-resonant modes improves the calibration accuracy and meanwhile avoids complex modal analysis of the full system. This paper explores the augmented modal representation in calibrating control devices with nonlinearity, by studying a taut cable with an attached general viscous damper and its Equivalent Dynamic Systems (EDSs), i.e. the augmented modal representations connected to the same damper. As nonlinearity is concerned, Frequency Response Functions (FRFs) of the EDSs are investigated in detail for parameter calibration, using the harmonic balance method in combination with numerical continuation. The FRFs of the EDSs and the corresponding calibration results are then compared with those of the full system documented in the literature for varied structural modes, damper locations, and nonlinearity. General agreement is found, and in particular the EDS with both stiffness and inertia corrections (quasi-dynamic correction) performs best among the available approximate methods. This indicates that the augmented modal representation, although derived from linear cases, is applicable to a relatively wide range of damper nonlinearity. Calibration of nonlinear devices by this means still requires numerical analysis, but the efficiency is largely improved owing to the system order reduction.

  10. Calibrated Multivariate Regression with Application to Neural Semantic Basis Discovery.

    PubMed

    Liu, Han; Wang, Lie; Zhao, Tuo

    2015-08-01

    We propose a calibrated multivariate regression method named CMR for fitting high dimensional multivariate regression models. Compared with existing methods, CMR calibrates regularization for each regression task with respect to its noise level so that it simultaneously attains improved finite-sample performance and tuning insensitiveness. Theoretically, we provide sufficient conditions under which CMR achieves the optimal rate of convergence in parameter estimation. Computationally, we propose an efficient smoothed proximal gradient algorithm with a worst-case numerical rate of convergence O(1/ϵ), where ϵ is a pre-specified accuracy of the objective function value. We conduct thorough numerical simulations to illustrate that CMR consistently outperforms other high dimensional multivariate regression methods. We also apply CMR to solve a brain activity prediction problem and find that it is as competitive as a handcrafted model created by human experts. The R package camel implementing the proposed method is available on the Comprehensive R Archive Network http://cran.r-project.org/web/packages/camel/.

  11. Cantilever spring constant calibration using laser Doppler vibrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ohler, Benjamin

    2007-06-15

    Uncertainty in cantilever spring constants is a critical issue in atomic force microscopy (AFM) force measurements. Though numerous methods exist for calibrating cantilever spring constants, the accuracy of these methods can be limited by both the physical models themselves as well as uncertainties in their experimental implementation. Here we report the results from two of the most common calibration methods, the thermal tune method and the Sader method. These were implemented on a standard AFM system as well as using laser Doppler vibrometry (LDV). Using LDV eliminates some uncertainties associated with optical lever detection on an AFM. It also offers considerably higher signal-to-noise deflection measurements. We find that AFM and LDV result in similar uncertainty in the calibrated spring constants, about 5%, using either the thermal tune or Sader method, provided that certain limitations of the methods and instrumentation are observed.
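
    For reference, the simplest form of the thermal tune method rests on equipartition: k = kB·T/⟨d²⟩. A toy sketch with synthetic deflection data follows; real implementations integrate the power spectral density of the resonance peak and apply mode-shape corrections, which are omitted here.

    ```python
    # Sketch of the thermal-tune idea: by equipartition, the spring constant
    # follows from the thermally driven mean-square deflection. Deflection
    # data are synthetic; PSD integration and mode-shape factors are omitted.
    import numpy as np

    kB = 1.380649e-23          # Boltzmann constant, J/K
    T = 295.0                  # temperature, K
    k_true = 0.05              # N/m, a typical soft AFM cantilever

    rng = np.random.default_rng(2)
    # thermal deflections (meters) with variance kB*T/k
    d = rng.normal(0.0, np.sqrt(kB * T / k_true), size=200_000)

    k_est = kB * T / np.mean(d ** 2)
    print(f"estimated spring constant: {k_est:.4f} N/m")
    ```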

  12. Statistical emulation of landslide-induced tsunamis at the Rockall Bank, NE Atlantic

    PubMed Central

    Salmanidou, D. M.; Guillas, S.; Georgiopoulou, A.; Dias, F.

    2017-01-01

    Statistical methods constitute a useful approach to understand and quantify the uncertainty that governs complex tsunami mechanisms. Numerical experiments may often have a high computational cost. This forms a limiting factor for performing uncertainty and sensitivity analyses, where numerous simulations are required. Statistical emulators, as surrogates of these simulators, can provide predictions of the physical process in a much faster and computationally inexpensive way. They can form a prominent solution to explore thousands of scenarios that would be otherwise numerically expensive and difficult to achieve. In this work, we build a statistical emulator of the deterministic codes used to simulate submarine sliding and tsunami generation at the Rockall Bank, NE Atlantic Ocean, in two stages. First we calibrate, against observations of the landslide deposits, the parameters used in the landslide simulations. This calibration is performed under a Bayesian framework using Gaussian Process (GP) emulators to approximate the landslide model, and the discrepancy function between model and observations. Distributions of the calibrated input parameters are obtained as a result of the calibration. In a second step, a GP emulator is built to mimic the coupled landslide-tsunami numerical process. The emulator propagates the uncertainties in the distributions of the calibrated input parameters inferred from the first step to the outputs. As a result, a quantification of the uncertainty of the maximum free surface elevation at specified locations is obtained. PMID:28484339

  13. Statistical emulation of landslide-induced tsunamis at the Rockall Bank, NE Atlantic.

    PubMed

    Salmanidou, D M; Guillas, S; Georgiopoulou, A; Dias, F

    2017-04-01

    Statistical methods constitute a useful approach to understand and quantify the uncertainty that governs complex tsunami mechanisms. Numerical experiments may often have a high computational cost. This forms a limiting factor for performing uncertainty and sensitivity analyses, where numerous simulations are required. Statistical emulators, as surrogates of these simulators, can provide predictions of the physical process in a much faster and computationally inexpensive way. They can form a prominent solution to explore thousands of scenarios that would be otherwise numerically expensive and difficult to achieve. In this work, we build a statistical emulator of the deterministic codes used to simulate submarine sliding and tsunami generation at the Rockall Bank, NE Atlantic Ocean, in two stages. First we calibrate, against observations of the landslide deposits, the parameters used in the landslide simulations. This calibration is performed under a Bayesian framework using Gaussian Process (GP) emulators to approximate the landslide model, and the discrepancy function between model and observations. Distributions of the calibrated input parameters are obtained as a result of the calibration. In a second step, a GP emulator is built to mimic the coupled landslide-tsunami numerical process. The emulator propagates the uncertainties in the distributions of the calibrated input parameters inferred from the first step to the outputs. As a result, a quantification of the uncertainty of the maximum free surface elevation at specified locations is obtained.
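
    A compact sketch of the two-stage idea, with a toy function and scikit-learn's Gaussian process standing in for the landslide-tsunami codes, and the stage-1 posterior of the calibrated input assumed given:

    ```python
    # Sketch: train a GP emulator on a handful of simulator runs, then
    # propagate samples of a calibrated input through it to get an output
    # distribution. The "simulator" is a toy function, not the real codes.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def simulator(theta):                  # stand-in for an expensive code
        return np.sin(3 * theta) + theta ** 2

    # stage 1 output (assumed): posterior samples of the calibrated input
    rng = np.random.default_rng(3)
    theta_posterior = rng.normal(0.8, 0.1, size=5000)

    # design points and emulator training
    theta_design = np.linspace(0, 2, 12)[:, None]
    y_design = simulator(theta_design[:, 0])
    gp = GaussianProcessRegressor(RBF(0.5), normalize_y=True).fit(theta_design, y_design)

    # stage 2: cheap uncertainty propagation through the emulator
    y_samples = gp.predict(theta_posterior[:, None])
    print(f"mean={y_samples.mean():.3f}, 95% interval="
          f"({np.percentile(y_samples, 2.5):.3f}, {np.percentile(y_samples, 97.5):.3f})")
    ```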

  14. Theoretical foundation, methods, and criteria for calibrating human vibration models using frequency response functions

    PubMed Central

    Dong, Ren G.; Welcome, Daniel E.; McDowell, Thomas W.; Wu, John Z.

    2015-01-01

    While simulations of the measured biodynamic responses of the whole human body or body segments to vibration are conventionally interpreted as summaries of biodynamic measurements, and the resulting models are considered quantitative, this study looked at these simulations from a different angle: model calibration. The specific aims of this study are to review and clarify the theoretical basis for model calibration, to help formulate the criteria for calibration validation, and to help appropriately select and apply calibration methods. In addition to established vibration theory, a novel theorem of mechanical vibration is also used to enhance the understanding of the mathematical and physical principles of the calibration. Based on this enhanced understanding, a set of criteria was proposed and used to systematically examine the calibration methods. Besides theoretical analyses, a numerical testing method is also used in the examination. This study identified the basic requirements for each calibration method to obtain a unique calibration solution. This study also confirmed that the solution becomes more robust if more than sufficient calibration references are provided. Practically, however, as more references are used, more inconsistencies can arise among the measured data for representing the biodynamic properties. To help account for the relative reliabilities of the references, a baseline weighting scheme is proposed. The analyses suggest that the best choice of calibration method depends on the modeling purpose, the model structure, and the availability and reliability of representative reference data. PMID:26740726

  15. Time-of-flight PET time calibration using data consistency

    NASA Astrophysics Data System (ADS)

    Defrise, Michel; Rezaei, Ahmadreza; Nuyts, Johan

    2018-05-01

    This paper presents new data-driven methods for the time-of-flight (TOF) calibration of positron emission tomography (PET) scanners. These methods are derived from the consistency condition for TOF PET; they can be applied to data measured with an arbitrary tracer distribution and are numerically efficient because they do not require a preliminary image reconstruction from the non-TOF data. Two-dimensional simulations are presented for one of the methods, which involves only the first two moments of the data with respect to the TOF variable. The numerical results show that this method estimates the detector timing offsets with errors that are larger than those obtained via an initial non-TOF reconstruction, but remain smaller than the TOF resolution and thereby have a limited impact on the quantitative accuracy of the activity image estimated with standard maximum likelihood reconstruction algorithms.
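
    As a hedged illustration of timing-offset estimation in general (not the paper's consistency-condition estimator), per-detector offsets can be recovered from pairwise coincidence differences by a sparse linear least-squares fit:

    ```python
    # Sketch: each line of response between detectors (i, j) measures
    # tau_i - tau_j plus jitter, so offsets follow from linear least squares
    # with one detector fixed as the time reference. Generic illustration,
    # not the consistency-condition method of the paper.
    import numpy as np

    rng = np.random.default_rng(4)
    n_det = 50
    tau_true = rng.normal(0, 100e-12, n_det)     # timing offsets, seconds
    tau_true -= tau_true[0]                      # detector 0 is the reference

    pairs = rng.integers(0, n_det, size=(20000, 2))
    pairs = pairs[pairs[:, 0] != pairs[:, 1]]
    meas = (tau_true[pairs[:, 0]] - tau_true[pairs[:, 1]]
            + rng.normal(0, 550e-12 / 2.355, pairs.shape[0]))  # ~550 ps FWHM

    A = np.zeros((pairs.shape[0], n_det))
    A[np.arange(pairs.shape[0]), pairs[:, 0]] = 1.0
    A[np.arange(pairs.shape[0]), pairs[:, 1]] = -1.0
    A = A[:, 1:]                                 # drop the reference column

    tau_est, *_ = np.linalg.lstsq(A, meas, rcond=None)
    print("rms error (ps):", 1e12 * np.sqrt(np.mean((tau_est - tau_true[1:]) ** 2)))
    ```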

  16. Cosmic reionization on computers. I. Design and calibration of simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gnedin, Nickolay Y., E-mail: gnedin@fnal.gov

    Cosmic Reionization On Computers is a long-term program of numerical simulations of cosmic reionization. Its goal is to model fully self-consistently (albeit not necessarily from first principles) all relevant physics, from radiative transfer to gas dynamics and star formation, in simulation volumes of up to 100 comoving Mpc, and with spatial resolution approaching 100 pc in physical units. In this method paper, we describe our numerical method, the design of simulations, and the calibration of numerical parameters. Using several sets (ensembles) of simulations in 20 h⁻¹ Mpc and 40 h⁻¹ Mpc boxes with spatial resolution reaching 125 pc at z = 6, we are able to match the observed galaxy UV luminosity functions at all redshifts between 6 and 10, as well as obtain reasonable agreement with the observational measurements of the Gunn-Peterson optical depth at z < 6.

  17. Tidal, Residual, Intertidal Mudflat (TRIM) Model and its Applications to San Francisco Bay, California

    USGS Publications Warehouse

    Cheng, R.T.; Casulli, V.; Gartner, J.W.

    1993-01-01

    A numerical model using a semi-implicit finite-difference method for solving the two-dimensional shallow-water equations is presented. The gradient of the water surface elevation in the momentum equations and the velocity divergence in the continuity equation are finite-differenced implicitly; the remaining terms are finite-differenced explicitly. The convective terms are treated using an Eulerian-Lagrangian method. The combination of the semi-implicit finite-difference solution for the gravity wave propagation and the Eulerian-Lagrangian treatment of the convective terms renders the numerical model unconditionally stable. When the baroclinic forcing is included, a salt transport equation is coupled to the momentum equations, and the numerical method is subject to a weak stability condition. The method of solution and the properties of the numerical model are given. This numerical model is particularly suitable for applications to coastal plain estuaries and tidal embayments in which tidal currents are dominant and tidally generated residual currents are important. The model is applied to San Francisco Bay, California, where extensive historical tide and current-meter data are available. The model calibration is considered by comparing time series of the field data and of the model results. Alternatively, and perhaps more meaningfully, the model is calibrated by comparing the harmonic constants of tides and tidal currents derived from field data with those derived from the model. The model is further verified by comparing the model results with an independent data set representing the wet season. The strengths and the weaknesses of the model are assessed based on the results of model calibration and verification. Using the model results, the properties of tides and tidal currents in San Francisco Bay are characterized and discussed. Furthermore, using the numerical model, estimates of San Francisco Bay's volume, surface area, mean water depth, tidal prisms, and tidal excursions at spring and neap tides are computed. Additional applications of the model reveal qualitatively the spatial distribution of residual variables. © 1993 Academic Press. All rights reserved.

  1. Overview of Pavement Management.

    DTIC Science & Technology

    1987-01-01

    ...what types of construction and maintenance have worked or failed in the past and can be used as a learning tool. Historical and current data can also be...basis. Advantages are: instant records; reasonably good repeatability of results. Disadvantages are: need for frequent calibration; numerous operating...variations. 2.4 Performance: There are other rating methods and numerous methods of measuring road roughness, the use of which helps to evaluate...

  2. Coupling HYDRUS-1D Code with PA-DDS Algorithms for Inverse Calibration

    NASA Astrophysics Data System (ADS)

    Wang, Xiang; Asadzadeh, Masoud; Holländer, Hartmut

    2017-04-01

    Numerical modelling requires calibration to predict future states. A standard method for calibration is inverse calibration, where multi-objective optimization algorithms are generally used to find a solution, e.g. an optimal set of van Genuchten-Mualem (VGM) parameters to predict water fluxes in the vadose zone. We coupled HYDRUS-1D with PA-DDS to add a new, robust function for inverse calibration to the model. The PA-DDS method is a recently developed multi-objective optimization algorithm which combines Dynamically Dimensioned Search (DDS) and the Pareto Archived Evolution Strategy (PAES). The results were compared to a standard method (the Marquardt-Levenberg method) implemented in HYDRUS-1D. Calibration performance is evaluated using observed and simulated soil moisture at two soil layers in southern Abbotsford, British Columbia, Canada, in terms of the root mean squared error (RMSE) and the Nash-Sutcliffe efficiency (NSE). Results showed low RMSE values of 0.014 and 0.017 and strong NSE values of 0.961 and 0.939. Compared to the results of the Marquardt-Levenberg method, we obtained better calibration results for the deeper soil sensors, while the VGM parameters were similar to those of previous studies. Both methods are equally computationally efficient. We expect that a direct implementation of PA-DDS into HYDRUS-1D would reduce the computational effort further. Thus, the PA-DDS method is efficient for calibrating recharge in complex vadose zone modelling with multiple soil layers and can be a potential tool for the calibration of heat and solute transport. Future work should focus on the effectiveness of PA-DDS for calibrating more complex versions of the model with complex vadose zone settings, more soil layers, and measured heat and solute transport. Keywords: recharge, calibration, HYDRUS-1D, multi-objective optimization
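
    The two scores used above are standard goodness-of-fit measures. A minimal sketch of how a calibration candidate would be rated against observations (numbers are illustrative, not the study's data):

    ```python
    # Sketch: RMSE and Nash-Sutcliffe efficiency for scoring simulated
    # soil moisture against observations; any simulator could supply `sim`.
    import numpy as np

    def rmse(obs, sim):
        return np.sqrt(np.mean((np.asarray(obs) - np.asarray(sim)) ** 2))

    def nse(obs, sim):
        # Nash-Sutcliffe efficiency: 1 is perfect, <0 is worse than the mean
        obs, sim = np.asarray(obs), np.asarray(sim)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    obs = np.array([0.21, 0.24, 0.30, 0.28, 0.25])   # illustrative soil moisture
    sim = np.array([0.22, 0.25, 0.28, 0.27, 0.26])
    print(rmse(obs, sim), nse(obs, sim))
    ```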

  3. In-Situ Transfer Standard and Coincident-View Intercomparisons for Sensor Cross-Calibration

    NASA Technical Reports Server (NTRS)

    Thome, Kurt; McCorkel, Joel; Czapla-Myers, Jeff

    2013-01-01

    There exist numerous methods for accomplishing on-orbit calibration. Methods include the reflectance-based approach relying on measurements of surface and atmospheric properties at the time of a sensor overpass as well as invariant scene approaches relying on knowledge of the temporal characteristics of the site. The current work examines typical cross-calibration methods and discusses the expected uncertainties of the methods. Data from the Advanced Land Imager (ALI), Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Enhanced Thematic Mapper Plus (ETM+), Moderate Resolution Imaging Spectroradiometer (MODIS), and Thematic Mapper (TM) are used to demonstrate the limits of relative sensor-to-sensor calibration as applied to current sensors, while Landsat-5 TM and Landsat-7 ETM+ are used to evaluate the limits of in situ site characterizations for SI-traceable cross calibration. The current work examines the difficulties in trending of results from cross-calibration approaches, taking into account sampling issues, site-to-site variability, and accuracy of the method. Special attention is given to the differences caused in the cross-comparison of sensors in radiance space as opposed to reflectance space. The results show that cross calibrations with absolute uncertainties less than 1.5 percent (1 sigma) are currently achievable even for sensors without coincident views.

  4. Numerical emulation of Thru-Reflect-Line calibration for the de-embedding of Surface Acoustic Wave devices.

    PubMed

    Mencarelli, D; Djafari-Rouhani, B; Pennec, Y; Pitanti, A; Zanotto, S; Stocchi, M; Pierantoni, L

    2018-06-18

    In this contribution, a rigorous numerical calibration is proposed to characterize the excitation of propagating mechanical waves by interdigitated transducers (IDTs). The transition from IDT terminals to phonon waveguides is modeled by means of a general circuit representation that makes use of the Scattering Matrix (SM) formalism. In particular, the three-step calibration approach called Thru-Reflect-Line (TRL), a well-established technique in microwave engineering, has been successfully applied to emulate typical experimental conditions. The proposed procedure is suitable for the synthesis/optimization of surface-acoustic-wave (SAW) based devices: the TRL calibration allows one to extract/de-embed the acoustic component, namely a resonator or filter, from the outer IDT structure, regardless of the complexity and size of the latter. We report, as a result, the hybrid scattering parameters of the IDT transition to a mechanical waveguide formed by a phononic crystal patterned on a piezoelectric AlN membrane, where the effect of a discontinuity from a periodic to a uniform mechanical waveguide is also characterized. In addition, to ensure the correctness of our numerical calculations, the proposed method has been validated by independent calculations.

  5. Determining geometric error model parameters of a terrestrial laser scanner through Two-face, Length-consistency, and Network methods

    PubMed Central

    Wang, Ling; Muralikrishnan, Bala; Rachakonda, Prem; Sawyer, Daniel

    2017-01-01

    Terrestrial laser scanners (TLS) are increasingly used in large-scale manufacturing and assembly where required measurement uncertainties are on the order of few tenths of a millimeter or smaller. In order to meet these stringent requirements, systematic errors within a TLS are compensated in-situ through self-calibration. In the Network method of self-calibration, numerous targets distributed in the work-volume are measured from multiple locations with the TLS to determine parameters of the TLS error model. In this paper, we propose two new self-calibration methods, the Two-face method and the Length-consistency method. The Length-consistency method is proposed as a more efficient way of realizing the Network method where the length between any pair of targets from multiple TLS positions are compared to determine TLS model parameters. The Two-face method is a two-step process. In the first step, many model parameters are determined directly from the difference between front-face and back-face measurements of targets distributed in the work volume. In the second step, all remaining model parameters are determined through the Length-consistency method. We compare the Two-face method, the Length-consistency method, and the Network method in terms of the uncertainties in the model parameters, and demonstrate the validity of our techniques using a calibrated scale bar and front-face back-face target measurements. The clear advantage of these self-calibration methods is that a reference instrument or calibrated artifacts are not required, thus significantly lowering the cost involved in the calibration process. PMID:28890607

  6. Calibration of the LHAASO-KM2A electromagnetic particle detectors using charged particles within the extensive air showers

    NASA Astrophysics Data System (ADS)

    Lv, Hongkui; He, Huihai; Sheng, Xiangdong; Liu, Jia; Chen, Songzhan; Liu, Ye; Hou, Chao; Zhao, Jing; Zhang, Zhongquan; Wu, Sha; Wang, Yaping; LHAASO Collaboration

    2018-07-01

    In the Large High Altitude Air Shower Observatory (LHAASO), a one-square-kilometer array (KM2A), with 5242 electromagnetic particle detectors (EDs) and 1171 muon detectors (MDs), is designed to study ultra-high-energy gamma-ray astronomy and cosmic ray physics. The remote location and the large number of detectors demand a robust and automatic calibration procedure. In this paper, a self-calibration method which relies on the measurement of charged particles within extensive air showers is proposed. The method is fully validated by Monte Carlo simulation and successfully applied in a KM2A prototype array experiment. Experimental results show that the self-calibration method can be used to determine the detector time-offset constants at the sub-nanosecond level and the number density of particles collected by each ED with an accuracy of a few percent, which is adequate to meet the physical requirements of the LHAASO experiment. This software calibration also offers an ideal method to monitor detector performance in real time for next-generation ground-based EAS experiments covering areas above the square-kilometer scale.

  7. Calibrating the Spatiotemporal Root Density Distribution for Macroscopic Water Uptake Models Using Tikhonov Regularization

    NASA Astrophysics Data System (ADS)

    Li, N.; Yue, X. Y.

    2018-03-01

    Macroscopic root water uptake models proportional to a root density distribution function (RDDF) are most commonly used to model water uptake by plants. As water uptake is difficult and labor intensive to measure, these models are often calibrated by inverse modeling. Most previous inversion studies assume the RDDF to be constant with depth and time or dependent only on depth, for simplification. However, under field conditions, this function varies with the type of soil and root growth and thus changes with both depth and time. This study proposes an inverse method to calibrate a both spatially and temporally varying RDDF in unsaturated water flow modeling. To overcome the difficulty imposed by the ill-posedness, the calibration is formulated as an optimization problem in the framework of Tikhonov regularization theory, adding an additional constraint to the objective function. The formulated nonlinear optimization problem is then numerically solved with an efficient algorithm on the basis of the finite element method. The advantage of our method is that the inverse problem is translated into a Tikhonov regularization functional minimization problem and then solved based on the variational construction, which circumvents the computational complexity of calculating the sensitivity matrix involved in many derivative-based parameter estimation approaches (e.g., Levenberg-Marquardt optimization). Moreover, the proposed method features optimization of the RDDF without any prior form, which is applicable to a more general root water uptake model. Numerical examples are performed to illustrate the applicability and effectiveness of the proposed method. Finally, discussions on the stability and extension of this method are presented.
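
    A generic sketch of the Tikhonov-regularized inversion at the core of this approach, with a toy linear forward operator standing in for the root-water-uptake model and a second-difference smoothness penalty:

    ```python
    # Sketch: recover a smooth profile r from indirect noisy data b = A r + e
    # by minimizing ||A r - b||^2 + lam * ||L r||^2, L = second differences.
    # A generic linear problem stands in for the water-flow model.
    import numpy as np

    rng = np.random.default_rng(5)
    n = 60
    z = np.linspace(0, 1, n)
    r_true = np.exp(-((z - 0.3) / 0.1) ** 2)        # "true" RDDF-like profile

    A = np.tril(np.ones((n, n))) / n                # smoothing forward operator
    b = A @ r_true + 0.005 * rng.standard_normal(n)

    L = np.diff(np.eye(n), n=2, axis=0)             # second-difference matrix
    lam = 1e-3
    # normal equations of the regularized least-squares problem
    r_est = np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)
    print("max abs error:", np.abs(r_est - r_true).max())
    ```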

  8. Hybrid PSO-ASVR-based method for data fitting in the calibration of infrared radiometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Sen; Li, Chengwei, E-mail: heikuanghit@163.com

    2016-06-15

    The present paper describes a hybrid particle swarm optimization-adaptive support vector regression (PSO-ASVR)-based method for data fitting in the calibration of an infrared radiometer. The proposed hybrid PSO-ASVR-based method is based on PSO in combination with Adaptive Processing and Support Vector Regression (SVR). The optimization technique involves setting parameters in the ASVR fitting procedure, which significantly improves the fitting accuracy. However, its use in the calibration of infrared radiometers has not yet been widely explored. Bearing this in mind, the PSO-ASVR-based method, which is based on statistical learning theory, is successfully used here to obtain the relationship between the radiation of a standard source and the response of an infrared radiometer. The main advantages of this method are the flexible adjustment mechanism in data processing and the optimization mechanism in the kernel parameter setting of SVR. Numerical examples and applications to the calibration of an infrared radiometer are performed to verify the performance of the PSO-ASVR-based method compared to conventional data fitting methods.
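
    A minimal sketch of the PSO-over-SVR-hyperparameters idea on synthetic data; the paper's adaptive-processing stage is omitted, and the swarm coefficients are generic textbook values rather than the authors' settings:

    ```python
    # Sketch: a small particle swarm searching SVR hyperparameters
    # (log10 C, log10 gamma) by cross-validated error on a toy curve.
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(6)
    X = rng.uniform(-3, 3, (120, 1))
    y = np.sinc(X[:, 0]) + 0.05 * rng.standard_normal(120)  # toy response curve

    def loss(p):  # p = [log10 C, log10 gamma]
        svr = SVR(C=10 ** p[0], gamma=10 ** p[1])
        return -cross_val_score(svr, X, y, cv=5,
                                scoring="neg_mean_squared_error").mean()

    n_particles, n_iter = 12, 25
    pos = rng.uniform([-1, -3], [3, 1], (n_particles, 2))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([loss(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()

    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 2))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([loss(p) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()

    print("best (log10 C, log10 gamma):", gbest)
    ```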

  9. A numerical approach to 14C wiggle-match dating of organic deposits: best fits and confidence intervals

    NASA Astrophysics Data System (ADS)

    Blaauw, Maarten; Heuvelink, Gerard B. M.; Mauquoy, Dmitri; van der Plicht, Johannes; van Geel, Bas

    2003-06-01

    14C wiggle-match dating (WMD) of peat deposits uses the non-linear relationship between 14C age and calendar age to match the shape of a sequence of closely spaced peat 14C dates with the 14C calibration curve. A numerical approach to WMD enables the quantitative assessment of various possible wiggle-match solutions and of calendar year confidence intervals for sequences of 14C dates. We assess the assumptions, advantages, and limitations of the method. Several case-studies show that WMD results in more precise chronologies than when individual 14C dates are calibrated. WMD is most successful during periods with major excursions in the 14C calibration curve (e.g., in one case WMD could narrow down confidence intervals from 230 to 36 yr).
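
    A toy sketch of the wiggle-match principle: slide a depth-ordered sequence of 14C dates along a calibration curve and score each calendar-age placement by chi-square. A synthetic curve is used here instead of IntCal, and the error treatment is deliberately simplified:

    ```python
    # Sketch: wiggle-matching a sequence of 14C dates with known relative
    # spacing against a (synthetic) calibration curve via a chi-square scan.
    import numpy as np

    cal_age = np.arange(0, 3000)                     # calendar years BP
    curve = cal_age + 40 * np.sin(cal_age / 80.0)    # toy 14C-age curve with wiggles

    spacing = np.array([0, 30, 60, 90, 120, 150])    # known offsets within the peat
    rng = np.random.default_rng(7)
    c14_meas = curve[1000 + spacing] + rng.normal(0, 20, 6)  # true position: 1000
    sigma = 20.0

    shifts = np.arange(200, 2600)
    chi2 = np.array([np.sum(((c14_meas - curve[s + spacing]) / sigma) ** 2)
                     for s in shifts])
    best = shifts[chi2.argmin()]
    # approximate 95% interval: placements within 3.84 of the minimum chi-square
    inside = shifts[chi2 < chi2.min() + 3.84]
    print(f"best fit {best} cal BP, interval {inside.min()}-{inside.max()} cal BP")
    ```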

  10. Multichannel-Hadamard calibration of high-order adaptive optics systems.

    PubMed

    Guo, Youming; Rao, Changhui; Bao, Hua; Zhang, Ang; Zhang, Xuejun; Wei, Kai

    2014-06-02

    We present a novel technique for calibrating the interaction matrix of high-order adaptive optics systems, called the multichannel-Hadamard method. In this method, the deformable mirror actuators are first divided into a series of channels according to their coupling relationship, and the voltage-oriented Hadamard method is then applied to these channels. Taking the 595-element adaptive optics system as an example, the procedure is described in detail. The optimal channel division is discussed and tested by numerical simulation. The proposed method is also compared experimentally with the voltage-oriented Hadamard-only method and the multichannel-only method. Results show that the multichannel-Hadamard method can produce a significant improvement in interaction matrix measurement.
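
    A sketch of the plain (voltage-oriented) Hadamard measurement that the multichannel variant builds on, with a random matrix standing in for the real interaction matrix; the channel-grouping step is not modeled:

    ```python
    # Sketch: drive the deformable mirror with the columns of a Hadamard
    # matrix H instead of one actuator at a time, record the sensor
    # response S = D @ H + noise, and recover D = S @ H.T / n (H @ H.T = n*I).
    # Poking all actuators at once spreads the noise over n measurements.
    import numpy as np
    from scipy.linalg import hadamard

    n_act, n_slopes = 64, 128
    rng = np.random.default_rng(8)
    D_true = rng.standard_normal((n_slopes, n_act))  # unknown interaction matrix

    H = hadamard(n_act).astype(float)                # push-pull voltage patterns
    noise = 0.1 * rng.standard_normal((n_slopes, n_act))
    S = D_true @ H + noise                           # measured sensor responses

    D_est = S @ H.T / n_act
    print("rms error per element:", np.sqrt(np.mean((D_est - D_true) ** 2)))
    ```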

  11. An efficient multistage algorithm for full calibration of the hemodynamic model from BOLD signal responses.

    PubMed

    Zambri, Brian; Djellouli, Rabia; Laleg-Kirati, Taous-Meriem

    2017-11-01

    We propose a computational strategy that falls into the category of prediction/correction iterative-type approaches, for calibrating the hemodynamic model. The proposed method is used to estimate consecutively the values of the two sets of model parameters. Numerical results corresponding to both synthetic and real functional magnetic resonance imaging measurements for a single stimulus as well as for multiple stimuli are reported to highlight the capability of this computational methodology to fully calibrate the considered hemodynamic model. Copyright © 2017 John Wiley & Sons, Ltd.

  12. Effect of numerical dispersion as a source of structural noise in the calibration of a highly parameterized saltwater intrusion model

    USGS Publications Warehouse

    Langevin, Christian D.; Hughes, Joseph D.

    2010-01-01

    A model with a small amount of numerical dispersion was used to represent saltwater intrusion in a homogeneous aquifer for a 10-year historical calibration period with one groundwater withdrawal location, followed by a 10-year prediction period with two groundwater withdrawal locations. Time-varying groundwater concentrations at arbitrary locations in this low-dispersion model were then used as observations to calibrate a model with a greater amount of numerical dispersion. The low-dispersion model was solved using a Total Variation Diminishing numerical scheme; an implicit finite difference scheme with upstream weighting was used for the calibration simulations. Calibration focused on estimating a three-dimensional hydraulic conductivity field that was parameterized using a regular grid of pilot points in each layer and a smoothness constraint. Other model parameters (dispersivity, porosity, recharge, etc.) were fixed at the known values. The discrepancy between observed and simulated concentrations (due solely to numerical dispersion) was reduced by adjusting hydraulic conductivity through the calibration process. Within the transition zone, hydraulic conductivity tended to be lower than the true value for the calibration runs tested. The calibration process introduced lower hydraulic conductivity values to compensate for numerical dispersion and improve the match between observed and simulated concentration breakthrough curves at monitoring locations. Concentrations were underpredicted at both groundwater withdrawal locations during the 10-year prediction period.

  13. Numerical Filtering of Spurious Transients in a Satellite Scanning Radiometer: Application to CERES

    NASA Technical Reports Server (NTRS)

    Smith, G. Louis; Pandey, D. K.; Lee, Robert B., III; Barkstrom, Bruce R.; Priestley, Kory J.

    2002-01-01

    The Clouds and the Earth's Radiant Energy System (CERES) scanning radiometer was designed to provide high-accuracy measurements of radiances from the Earth. Calibration testing of the instruments showed the presence of an undesired slow transient in the measurements of all channels, at 1% to 2% of the signal. Analysis of the data showed that the transient consists of a single linear mode. The characteristic time of this mode is 0.3 to 0.4 s, much greater than the 8-10 ms response time of the detector, so that it is well separated from the detector response. A numerical filter was designed for the removal of this transient from the measurements. Results show no trace remaining of the transient after application of the numerical filter. The characterization of the slow mode on the basis of ground calibration data is discussed, and flight results are shown for the CERES instruments aboard the Tropical Rainfall Measuring Mission and Terra spacecraft. The primary influence of the slow mode is on the calibration of the instrument and the in-flight validation of the calibration. This method may be applicable to other radiometers that strive for high accuracy and encounter a slow spurious mode, regardless of the underlying physics.
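
    As an illustration of the principle only (with assumed parameter values, not the CERES characterization): if ground calibration identifies the slow mode as a first-order recursion driven by the signal, the transient can be removed exactly by solving that recursion against the measurement:

    ```python
    # Sketch: remove a single slow first-order mode from a measurement.
    # Assume calibration gave alpha (pole) and beta (coupling) of
    # s[n] = alpha*s[n-1] + beta*x[n], with measurement m[n] = x[n] + s[n].
    # Substituting x = m - s gives a solvable recursion for s.
    import numpy as np

    def remove_slow_mode(m, alpha, beta):
        s = np.zeros_like(m)
        for n in range(1, m.size):
            # s[n] = alpha*s[n-1] + beta*(m[n] - s[n])  =>  solve for s[n]
            s[n] = (alpha * s[n - 1] + beta * m[n]) / (1.0 + beta)
        return m - s

    # synthetic test: step signal plus ~2% slow transient (tau ~ 0.35 s @ 100 Hz)
    dt, tau, coupling = 0.01, 0.35, 0.02
    alpha = np.exp(-dt / tau)
    beta = coupling * (1 - alpha)
    x = np.r_[np.zeros(50), np.ones(500)]
    s = np.zeros_like(x)
    for n in range(1, x.size):
        s[n] = alpha * s[n - 1] + beta * x[n]
    m = x + s

    x_hat = remove_slow_mode(m, alpha, beta)
    print("max residual after filtering:", np.abs(x_hat - x).max())
    ```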

  14. Air data position-error calibration using state reconstruction techniques

    NASA Technical Reports Server (NTRS)

    Whitmore, S. A.; Larson, T. J.; Ehernberger, L. J.

    1984-01-01

    During the highly maneuverable aircraft technology (HiMAT) flight test program recently completed at NASA Ames Research Center's Dryden Flight Research Facility, numerous problems were experienced in airspeed calibration. This necessitated the use of state reconstruction techniques to arrive at a position-error calibration. For the HiMAT aircraft, most of the calibration effort was expended on flights in which the air data pressure transducers were not performing accurately. Following discovery of this problem, the air data transducers of both aircraft were wrapped in heater blankets to correct the problem. Additional calibration flights were performed, and from the resulting data a satisfactory position-error calibration was obtained. This calibration and data obtained before installation of the heater blankets were used to develop an alternate calibration method. The alternate approach took advantage of high-quality inertial data that was readily available. A linearized Kalman filter (LKF) was used to reconstruct the aircraft's wind-relative trajectory; the trajectory was then used to separate transducer measurement errors from the aircraft position error. This calibration method is accurate and inexpensive. The LKF technique has an inherent advantage of requiring that no flight maneuvers be specially designed for airspeed calibrations. It is of particular use when the measurements of the wind-relative quantities are suspected to have transducer-related errors.

  15. Modifications of steam condensation model implemented in commercial solver

    NASA Astrophysics Data System (ADS)

    Sova, Libor; Jun, Gukchol; Šťastný, Miroslav

    2017-09-01

    Nucleation theory and droplet growth theory, and the methods by which they are incorporated into numerical solvers, are crucial factors for proper wet steam modelling. Unfortunately, they are still covered by a cloud of uncertainty, and therefore some calibration of these models against reliable experimental results is important for practical analyses of steam turbines. This article demonstrates how it is possible to calibrate the wet steam model incorporated into the commercial solver ANSYS CFX.

  16. Image analysis method for the measurement of water saturation in a two-dimensional experimental flow tank

    NASA Astrophysics Data System (ADS)

    Belfort, Benjamin; Weill, Sylvain; Lehmann, François

    2017-07-01

    A novel, non-invasive imaging technique is proposed that determines 2D maps of water content in unsaturated porous media. This method directly relates digitally measured intensities to the water content of the porous medium. It requires the classical image analysis steps, i.e., normalization, filtering, background subtraction, scaling, and calibration. The main advantages of this approach are that no calibration experiment is needed, because the calibration curve relating water content and reflected light intensities is established during the main monitoring phase of each experiment, and that no tracer or dye is injected into the flow tank. The procedure enables effective processing of a large number of photographs and thus produces 2D water content maps at high temporal resolution. A drainage/imbibition experiment in a 2D flow tank with inner dimensions of 40 cm × 14 cm × 6 cm (L × W × D) is carried out to validate the methodology. The accuracy of the proposed approach is assessed using a statistical framework to perform an error analysis and numerical simulations with a state-of-the-art computational code that solves the Richards equation. Comparison of the cumulative mass leaving and entering the flow tank and of the water content maps produced by the photographic measurement technique and the numerical simulations demonstrates the efficiency and high accuracy of the proposed method for investigating vadose zone flow processes. Finally, the photometric procedure has been developed expressly with a view to its extension to heterogeneous media. Other processes may be investigated through different laboratory experiments, which will serve as benchmarks for the validation of numerical codes.

  17. The determination of gravity anomalies from geoid heights using the inverse Stokes' formula, Fourier transforms, and least squares collocation

    NASA Technical Reports Server (NTRS)

    Rummel, R.; Sjoeberg, L.; Rapp, R. H.

    1978-01-01

    A numerical method for the determination of gravity anomalies from geoid heights is described using the inverse Stokes formula. This discrete form of the inverse Stokes formula applies a numerical integration over the azimuth and an integration over a cubic interpolatory spline function which approximates the step function obtained from the numerical integration. The main disadvantage of the procedure is the lack of a reliable error measure. The method was applied on geoid heights derived from GEOS-3 altimeter measurements in the calibration area of the GEOS-3 satellite.

  18. Calibration strategy for the COROT photometry

    NASA Astrophysics Data System (ADS)

    Buey, J.-T.; Auvergne, M.; Lapeyrere, V.; Boumier, P.

    2004-01-01

    Like Eddington, the COROT photometer will measure very small fluctuations on a large signal: the amplitudes of planetary transits and solar-like oscillations are expressed in ppm (parts per million). For such an instrument, specific calibration has to be done during the different phases of the development of the instrument and of all the subsystems. Two main things have to be taken into account: (1) the calibration during the study phase; (2) the calibration of the sub-systems and the building of numerical models. The first item allows us to clearly understand all the perturbations (internal and external) and to identify their relative impacts on the expected signal (by numerical models including expected values of perturbations and the sensitivity of the instrument). Methods and a schedule for the calibration process can also be introduced, in good agreement with the development plan of the instrument. The second item is more related to the measurement of the sensitivity of the instrument and all its sub-systems. As the instrument is designed to be as stable as possible, we have to mix measurements (with larger fluctuations of parameters than expected) and numerical models. Some typical reasons for this are: (1) there are many parameters to introduce in the measurements, and results from some models (bread-board, for example) may be extrapolated to the flight model; (2) larger fluctuations than expected are used (to measure the sensitivity precisely), and numerical models give the real value of noise with the expected fluctuations; (3) characteristics of sub-systems may be measured and models used to give the sensitivity of the whole system built with them, as end-to-end measurements may be impossible (time, budget, physical limitations). Also, house-keeping measurements have to be set up on the critical parts of the sub-systems: measurements on thermal probes, power supply, pointing, etc. All these house-keeping data are used during ground calibration and during the flight, so that correct correlation between signal and house-keeping can be achieved.

  19. Psychophysical contrast calibration

    PubMed Central

    To, Long; Woods, Russell L; Goldstein, Robert B; Peli, Eli

    2013-01-01

    Electronic displays and computer systems offer numerous advantages for clinical vision testing. Laboratory and clinical measurements of various functions and in particular of (letter) contrast sensitivity require accurately calibrated display contrast. In the laboratory this is achieved using expensive light meters. We developed and evaluated a novel method that uses only psychophysical responses of a person with normal vision to calibrate the luminance contrast of displays for experimental and clinical applications. Our method combines psychophysical techniques (1) for detection (and thus elimination or reduction) of display saturating nonlinearities; (2) for luminance (gamma function) estimation and linearization without use of a photometer; and (3) to measure without a photometer the luminance ratios of the display’s three color channels that are used in a bit-stealing procedure to expand the luminance resolution of the display. Using a photometer we verified that the calibration achieved with this procedure is accurate for both LCD and CRT displays enabling testing of letter contrast sensitivity to 0.5%. Our visual calibration procedure enables clinical, internet and home implementation and calibration verification of electronic contrast testing. PMID:23643843
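
    A sketch of the gamma-linearization step only, assuming the gamma exponent has already been estimated (psychophysically in the paper); the saturation checks and the bit-stealing luminance-resolution expansion are not shown:

    ```python
    # Sketch: once a display's gamma is known, a lookup table maps desired
    # linear luminance fractions to 8-bit drive values. Gamma value assumed.
    import numpy as np

    gamma = 2.2                      # assumed psychophysical estimate

    def luminance(v, gamma=gamma):
        # displayed luminance fraction for an 8-bit drive value v
        return (v / 255.0) ** gamma

    def drive_value(l, gamma=gamma):
        # inverse mapping: drive value producing linear luminance fraction l
        return np.clip(np.round(255.0 * l ** (1.0 / gamma)), 0, 255).astype(int)

    # linearization LUT: 256 target luminances -> drive values
    targets = np.linspace(0, 1, 256)
    lut = drive_value(targets)

    # drive values for a 1% luminance increment around mid-luminance
    print(lut[128], drive_value(luminance(lut[128]) * 1.01))
    ```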

  20. Objective Measurement of Erythema in Psoriasis using Digital Color Photography with Color Calibration

    PubMed Central

    Raina, Abhay; Hennessy, Ricky; Rains, Michael; Allred, James; Hirshburg, Jason M; Diven, Dayna; Markey, Mia K.

    2016-01-01

    Background: Traditional metrics for evaluating the severity of psoriasis are subjective, which complicates efforts to measure effective treatments in clinical trials. Methods: We collected images of psoriasis plaques and calibrated the coloration of the images according to an included color card. Features were extracted from the images and used to train a linear discriminant analysis classifier with cross-validation to automatically classify the degree of erythema. The results were tested against numerical scores obtained by a panel of dermatologists using a standard rating system. Results: Quantitative measures of erythema based on the digital color images showed good agreement with subjective assessment of erythema severity (κ = 0.4203). The color calibration process improved the agreement from κ = 0.2364 to κ = 0.4203. Conclusions: We propose a method for the objective measurement of the psoriasis severity parameter of erythema and show that the calibration process improved the results. PMID:26517973
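
    A minimal sketch of the evaluation pipeline named here (cross-validated LDA scored by Cohen's kappa), with synthetic color features standing in for the calibrated image data:

    ```python
    # Sketch: cross-validated linear discriminant analysis on color features,
    # with agreement against expert grades measured by Cohen's kappa.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_predict
    from sklearn.metrics import cohen_kappa_score

    rng = np.random.default_rng(9)
    n = 200
    labels = rng.integers(0, 4, n)                 # erythema grade 0-3
    # synthetic color features loosely tied to grade (e.g. redness, hue)
    features = np.column_stack([
        labels + 0.8 * rng.standard_normal(n),
        0.5 * labels + rng.standard_normal(n),
    ])

    lda = LinearDiscriminantAnalysis()
    pred = cross_val_predict(lda, features, labels, cv=10)
    print("kappa vs. reference grades:", cohen_kappa_score(labels, pred))
    ```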

  1. A critical comparison of systematic calibration protocols for activated sludge models: a SWOT analysis.

    PubMed

    Sin, Gürkan; Van Hulle, Stijn W H; De Pauw, Dirk J W; van Griensven, Ann; Vanrolleghem, Peter A

    2005-07-01

    Modelling activated sludge systems has gained increasing momentum since the introduction of activated sludge models (ASMs) in 1987. Application of dynamic models to full-scale systems essentially requires a calibration of the chosen ASM to the case under study. Numerous full-scale model applications have been performed so far, mostly based on ad hoc approaches and expert knowledge. Further, each modelling study has followed a different calibration approach: e.g. different influent wastewater characterization methods, different kinetic parameter estimation methods, different selections of parameters to be calibrated, different priorities within the calibration steps, etc. In short, there has been no standard approach to performing a calibration study, which makes it difficult, if not impossible, to (1) compare different calibrations of ASMs with each other and (2) perform internal quality checks for each calibration study. To address these concerns, systematic calibration protocols have recently been proposed to bring guidance to the modelling of activated sludge systems and in particular to the calibration of full-scale models. In this contribution, four existing calibration approaches (BIOMATH, HSG, STOWA and WERF) will be critically discussed using a SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis. It will also be assessed in what way these approaches can be developed with a view to improving the quality of ASM calibration. In this respect, the potential of automating some steps of the calibration procedure by use of mathematical algorithms is highlighted.

  2. Numerical Modelling of Rayleigh Wave Propagation in Course of Rapid Impulse Compaction

    NASA Astrophysics Data System (ADS)

    Herbut, Aneta; Rybak, Jarosław

    2017-10-01

    As soil improvement technologies are an area of rapid development, they require designing and implementing novel methods of control and calibration in order to ensure the safety of geotechnical works. At Wroclaw University of Science and Technology (Poland), such methods are continually developed with the aim of providing appropriate tools for the preliminary design of the work process, as well as for ongoing on-site control of geotechnical works (steel sheet piling, pile driving or soil improvement technologies). The studies include preliminary numerical simulations and field tests concerning the measurement and continuous histogram recording of shocks and vibrations and their ground-borne dynamic impact on engineering structures. The impact of vibrations on reinforced concrete and masonry structures in close proximity to the construction site can be destructive in both the architectural and the structural sense. The relevant limits are set out in codes of practice, but they always require individual judgement. The results and observations make it possible to delineate specific modifications to the parameters of the technology applied (e.g. hammer drop height). On the basis of numerous case studies of practical applications, already summarized and published, we were able to formulate guidelines for work on the aforementioned sites. This work presents specific aspects of active design (calibration of a numerical model of the building site) by means of technology calibration, investigating the impact of the vibrations that occur during Rapid Impulse Compaction on adjacent structures. A case study examines the impact of construction works on Rayleigh wave propagation within a radius of 100 m around the compactor.

  3. Numerical Solutions for Nonlinear High Damping Rubber Bearing Isolators: Newmark's Method with Newton-Raphson Iteration Revisited

    NASA Astrophysics Data System (ADS)

    Markou, A. A.; Manolis, G. D.

    2018-03-01

    Numerical methods for the solution of dynamical problems in engineering go back to 1950. The most famous and widely used time-stepping algorithm was developed by Newmark in 1959. In the present study, for the first time, the Newmark algorithm is developed for the case of the trilinear hysteretic model, a model that has been used to describe the shear behaviour of high damping rubber bearings. This model is calibrated against free-vibration field tests implemented on a hybrid base-isolated building, namely the Solarino project in Italy, as well as against laboratory experiments. A single-degree-of-freedom system is used to describe the behaviour of a low-rise building isolated with a hybrid system comprising high damping rubber bearings and low-friction sliding bearings. The behaviour of the high damping rubber bearings is simulated by the trilinear hysteretic model, while the behaviour of the low-friction sliding bearings is modeled by a linear Coulomb friction model. In order to prove the effectiveness of the numerical method, we compare the analytically solved trilinear hysteretic model calibrated from free-vibration field tests (Solarino project) against the same model solved with the Newmark method with Newton-Raphson iteration. Almost perfect agreement is observed between the semi-analytical solution and the fully numerical solution with Newmark's time integration algorithm. This will allow for extension of the trilinear mechanical models to bidirectional horizontal motion, to time-varying vertical loads, to multi-degree-of-freedom systems, as well as to generalized models connected in parallel, where only numerical solutions are possible.
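
    The core of such a scheme can be sketched for a single-degree-of-freedom system with a generic nonlinear restoring force. This is a minimal illustration of Newmark's average-acceleration method with Newton-Raphson iteration, not the authors' trilinear implementation; the cubic spring f(u) below is a stand-in for the trilinear hysteretic model.

```python
import numpy as np

def newmark_nr(m, c, f, df, u0, v0, load, dt, nsteps, tol=1e-10, maxit=20):
    """Newmark (average acceleration) with Newton-Raphson iteration for
    m*a + c*v + f(u) = p(t)."""
    beta, gamma = 0.25, 0.5
    u, v = u0, v0
    a = (load(0.0) - c * v - f(u)) / m
    hist = [u]
    for n in range(1, nsteps + 1):
        t = n * dt
        u_new = u                          # predictor
        for _ in range(maxit):
            # Newmark kinematics expressed in terms of the unknown u_new
            a_new = (u_new - u - dt * v) / (beta * dt**2) - (0.5 / beta - 1.0) * a
            v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
            r = m * a_new + c * v_new + f(u_new) - load(t)   # residual
            # effective tangent stiffness: dr/du_new
            k_eff = m / (beta * dt**2) + gamma * c / (beta * dt) + df(u_new)
            du = -r / k_eff
            u_new += du
            if abs(du) < tol:
                break
        u, v, a = u_new, v_new, a_new
        hist.append(u)
    return np.array(hist)

# Example: cubic hardening spring as a placeholder nonlinearity.
f = lambda u: 1000.0 * u + 5.0e4 * u**3
df = lambda u: 1000.0 + 1.5e5 * u**2
resp = newmark_nr(m=1.0, c=0.5, f=f, df=df, u0=0.0, v0=0.0,
                  load=lambda t: 10.0 * np.sin(5.0 * t), dt=0.01, nsteps=1000)
```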

  4. VIIRS reflective solar bands on-orbit calibration five-year update: extension and improvements

    NASA Astrophysics Data System (ADS)

    Sun, Junqiang; Wang, Menghua

    2016-09-01

    The Suomi National Polar-orbiting Partnership (SNPP) Visible Infrared Imaging Radiometer Suite (VIIRS) has been on-orbit for almost five years. VIIRS has 22 spectral bands, among which fourteen are reflective solar bands (RSB) covering a spectral range from 0.410 to 2.25 μm. The SNPP VIIRS RSB have performed very well since launch, and their radiometric calibration has reached a mature stage. Numerous improvements have been made in the standard RSB calibration methodology. Additionally, a hybrid calibration method, which combines the advantages of solar diffuser calibration and lunar calibration while avoiding the drawbacks of each, finalizes the highly accurate calibration of the VIIRS RSB. The successfully calibrated RSB data record significantly impacts the ocean color products, whose stringent requirements are especially sensitive to calibration accuracy, and has helped the ocean color products reach maturity and high quality. Nevertheless, many challenging issues remain to be investigated for further improvement of the VIIRS sensor data records (SDR). In this presentation, the results of the RSB calibrations and the ocean product performance will be presented. The reprocessed SDR is now undergoing further science tests, in addition to the ocean science tests completed one year ago, in preparation for becoming the mission-long operational SDR.

  5. Estimation of water table level and nitrate pollution based on geostatistical and multiple mass transport models

    NASA Astrophysics Data System (ADS)

    Matiatos, Ioannis; Varouhakis, Emmanouil A.; Papadopoulou, Maria P.

    2015-04-01

    As the sustainable use of groundwater resources is a great challenge for many countries in the world, groundwater modeling has become a very useful and well-established tool for studying groundwater management problems. Based on the methods used to numerically solve the algebraic equations representing groundwater flow and contaminant mass transport, numerical models are mainly divided into finite-difference-based and finite-element-based models. The present study evaluates the performance of three groundwater numerical models, a finite-difference-based model (MODFLOW-MT3DMS), a finite-element-based model (FEFLOW) and a hybrid finite-element/finite-difference model (Princeton Transport Code, PTC), in simulating groundwater flow and nitrate mass transport in the alluvial aquifer of the Trizina region in NE Peloponnese, Greece. The calibration of groundwater flow in all models was performed using hydraulic head data from seven stress periods, and the validation was based on hydraulic head data for two stress periods at a sufficient number of observation locations. The same periods were used for the calibration of nitrate mass transport. The calibration and validation of the three models revealed that the simulated hydraulic heads and nitrate concentrations coincide well with the observed ones. The models' performance was assessed through a statistical analysis of these different types of numerical algorithms, using metrics such as the Mean Absolute Error (MAE), Root Mean Square Error (RMSE), Bias, Nash-Sutcliffe Model Efficiency (NSE) and Reliability Index (RI), which allow direct comparison of model performance. Spatiotemporal Kriging (STRK) was also applied, using separable and non-separable spatiotemporal variograms, to predict the water table level and nitrate concentration at each sampling station for two selected hydrological stress periods. The predictions were validated against the respective measured values. Maps of water table level and nitrate concentration were produced and compared with those obtained from the groundwater flow and mass transport numerical models. Preliminary results showed that the spatiotemporal geostatistical method was similar in efficiency to the numerical models, while its data requirements were significantly lower. The advantages and disadvantages of the methods' performance are analysed and discussed, indicating the characteristics of the different approaches.
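
    The skill scores used for such an intercomparison are standard and easy to compute. A minimal sketch (the Reliability Index is omitted, and h_obs/h_sim are hypothetical arrays of observed and simulated heads):

```python
import numpy as np

def skill_scores(obs, sim):
    """MAE, RMSE, bias, and Nash-Sutcliffe efficiency for paired samples."""
    err = sim - obs
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err**2))
    bias = np.mean(err)
    nse = 1.0 - np.sum(err**2) / np.sum((obs - obs.mean())**2)
    return {"MAE": mae, "RMSE": rmse, "Bias": bias, "NSE": nse}

h_obs = np.array([12.3, 11.8, 10.9, 10.2, 9.7])   # observed heads (m)
h_sim = np.array([12.1, 11.9, 11.2, 10.0, 9.9])   # simulated heads (m)
print(skill_scores(h_obs, h_sim))
```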

  6. Spectral multivariate calibration without laboratory prepared or determined reference analyte values.

    PubMed

    Ottaway, Josh; Farrell, Jeremy A; Kalivas, John H

    2013-02-05

    An essential part of calibration is establishing the analyte calibration reference samples. These samples must characterize the sample matrix and measurement conditions (chemical, physical, instrumental, and environmental) of any sample to be predicted. Calibration usually requires measuring spectra for numerous reference samples in addition to determining the corresponding analyte reference values. Both tasks are typically time-consuming and costly. This paper reports on a method named pure component Tikhonov regularization (PCTR) that does not require laboratory-prepared or -determined reference values. Instead, an analyte pure component spectrum is used in conjunction with nonanalyte spectra for calibration. Nonanalyte spectra can come from different sources, including pure component interference samples, blanks, and constant-analyte samples. The approach is also applicable to calibration maintenance when the analyte pure component spectrum is measured under one set of conditions and the nonanalyte spectra are measured under new conditions. The PCTR method balances the trade-off between calibration model shrinkage and the degree of orthogonality to the nonanalyte content (model direction) in order to obtain accurate predictions. Using visible and near-infrared (NIR) spectral data sets, the PCTR results are comparable to those obtained using ridge regression (RR) with reference calibration sets. The flexibility of PCTR also allows including reference samples when such samples are available.
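
    The Tikhonov machinery behind such a calibration can be illustrated with one plausible formulation (ours, not necessarily the published PCTR algorithm): find a regression vector that responds with unit signal to the pure analyte spectrum while suppressing, in a regularized sense, the response to the nonanalyte spectra.

```python
import numpy as np

def pctr_like(s, N, lam):
    """Regression vector b minimizing ||N @ b||^2 + lam * ||b||^2
    subject to s @ b = 1 (unit response to the pure analyte spectrum s)."""
    p = s.size
    A = N.T @ N + lam * np.eye(p)
    Ainv_s = np.linalg.solve(A, s)
    return Ainv_s / (s @ Ainv_s)      # closed form from the Lagrange condition

# Hypothetical spectra: pure analyte s, rows of N are nonanalyte spectra
# (interferences, blanks, constant-analyte samples).
rng = np.random.default_rng(1)
s = np.abs(rng.normal(size=50))
N = np.abs(rng.normal(size=(10, 50)))
b = pctr_like(s, N, lam=1.0)
print("analyte response:", s @ b)             # = 1 by construction
print("worst nonanalyte leakage:", np.abs(N @ b).max())
```

    Varying lam traces out the shrinkage-versus-orthogonality trade-off the abstract describes.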

  7. Calibration of groundwater vulnerability mapping using the generalized reduced gradient method.

    PubMed

    Elçi, Alper

    2017-12-01

    Groundwater vulnerability assessment studies are essential in water resources management. Overlay-and-index methods such as DRASTIC are widely used for mapping groundwater vulnerability; however, these methods suffer mainly from the subjective selection of model parameters. The objective of this study is to introduce a calibration procedure that results in a more accurate assessment of groundwater vulnerability. The improvement of the assessment is formulated as a parameter optimization problem, using an objective function based on the correlation between actual groundwater contamination and vulnerability index values. The non-linear optimization problem is solved with the generalized-reduced-gradient (GRG) method, a numerical gradient-based optimization algorithm. To demonstrate the applicability of the procedure, a vulnerability map for the Tahtali stream basin is calibrated using nitrate concentration data. The calibration procedure is easy to implement and aims to maximize the correlation between observed pollutant concentrations and groundwater vulnerability index values. The influence of each vulnerability parameter on the calculated vulnerability index is assessed by a single-parameter sensitivity analysis. Results of the sensitivity analysis show that all factors affect the final vulnerability index. Calibration of the vulnerability map improves the correlation between index values and measured nitrate concentrations by 19%; the regression coefficient increases from 0.280 to 0.485. It is evident that the spatial distribution and the proportions of the vulnerability class areas are significantly altered by the calibration process. Although the calibration method is demonstrated on the DRASTIC model, the approach is not specific to a certain model and can easily be applied to other overlay-and-index methods. Copyright © 2017 Elsevier B.V. All rights reserved.
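
    The calibration can be posed in a few lines with an off-the-shelf optimizer. SciPy does not ship a GRG implementation, so this hedged sketch substitutes SLSQP, another gradient-based constrained method; R is a hypothetical sites-by-factors matrix of DRASTIC ratings and c the observed nitrate concentrations.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
R = rng.uniform(1, 10, size=(40, 7))   # ratings for the 7 DRASTIC factors
c = rng.uniform(0, 50, size=40)        # observed nitrate concentrations (mg/L)

def neg_corr(w):
    """Negative Pearson correlation between vulnerability index and nitrate."""
    v = R @ w
    return -np.corrcoef(v, c)[0, 1]

w0 = np.array([5, 4, 3, 2, 1, 5, 3], dtype=float)   # illustrative starting weights
res = minimize(neg_corr, w0, method="SLSQP",
               bounds=[(1, 5)] * 7)                 # keep weights in [1, 5]
print("calibrated weights:", res.x.round(2))
print("achieved correlation:", -res.fun)
```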

  8. OEDIPE: a new graphical user interface for fast construction of numerical phantoms and MCNP calculations.

    PubMed

    Franck, D; de Carlan, L; Pierrat, N; Broggio, D; Lamart, S

    2007-01-01

    Although great efforts have been made to improve the physical phantoms used to calibrate in vivo measurement systems, these phantoms represent a single average counting geometry and usually contain a uniform distribution of the radionuclide over the tissue substitute. As a matter of fact, significant corrections must be made to phantom-based calibration factors in order to obtain absolute calibration efficiencies applicable to a given individual. The importance of these corrections is particularly crucial when considering in vivo measurements of low energy photons emitted by radionuclides deposited in the lung such as actinides. Thus, it was desirable to develop a method for calibrating in vivo measurement systems that is more sensitive to these types of variability. Previous works have demonstrated the possibility of such a calibration using the Monte Carlo technique. Our research programme extended such investigations to the reconstruction of numerical anthropomorphic phantoms based on personal physiological data obtained by computed tomography. New procedures based on a new graphical user interface (GUI) for development of computational phantoms for Monte Carlo calculations and data analysis are being developed to take advantage of recent progress in image-processing codes. This paper presents the principal features of this new GUI. Results of calculations and comparison with experimental data are also presented and discussed in this work.

  9. Inversion of Robin coefficient by a spectral stochastic finite element approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin Bangti; Zou Jun

    2008-03-01

    This paper investigates a variational approach to the nonlinear stochastic inverse problem of probabilistically calibrating the Robin coefficient from boundary measurements for the steady-state heat conduction. The problem is formulated into an optimization problem, and mathematical properties relevant to its numerical computations are investigated. The spectral stochastic finite element method using polynomial chaos is utilized for the discretization of the optimization problem, and its convergence is analyzed. The nonlinear conjugate gradient method is derived for the optimization system. Numerical results for several two-dimensional problems are presented to illustrate the accuracy and efficiency of the stochastic finite element method.

  10. Efficient solution methodology for calibrating the hemodynamic model using functional Magnetic Resonance Imaging (fMRI) measurements.

    PubMed

    Zambri, Brian; Djellouli, Rabia; Laleg-Kirati, Taous-Meriem

    2015-08-01

    Our aim is to propose a numerical strategy for accurately and efficiently retrieving the biophysiological parameters as well as the external stimulus characteristics corresponding to the hemodynamic mathematical model that describes changes in blood flow and blood oxygenation during brain activation. The proposed method employs the TNM-CKF method developed in [1], but in a prediction/correction framework. We present numerical results using both real and synthetic functional Magnetic Resonance Imaging (fMRI) measurements to highlight the performance characteristics of this computational methodology.

  11. Follow-up of solar lentigo depigmentation with a retinaldehyde-based cream by clinical evaluation and calibrated colour imaging.

    PubMed

    Questel, E; Durbise, E; Bardy, A-L; Schmitt, A-M; Josse, G

    2015-05-01

    To assess an objective method for evaluating the effects of a retinaldehyde-based cream (RA-cream) on solar lentigines, 29 women applied RA-cream to lentigines on one hand and a control cream to the other, once daily for 3 months. A specific method enabling reliable visualisation of the lesions was proposed, using high-magnification colour-calibrated camera imaging. Assessment was performed by clinical evaluation (Physician Global Assessment score) and by image analysis. Luminance was determined on the numeric images either on the basis of consensus borders drawn by five independent experts or by probability-map analysis via an algorithm that automatically detects the pigmented area. Both image analysis methods showed a similar lightening of ΔL* = 2 after 3 months of treatment with RA-cream, in agreement with the single-blind clinical evaluation. High-magnification colour-calibrated camera imaging combined with probability-map analysis is a fast and precise method for following lentigo depigmentation. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  12. [Optimization of end-tool parameters based on robot hand-eye calibration].

    PubMed

    Zhang, Lilong; Cao, Tong; Liu, Da

    2017-04-01

    A new one-time registration method was developed for the hand-eye calibration of a surgical robot, with the aim of simplifying the operation process and reducing preparation time. In addition, a practical method is introduced to optimize the end-tool parameters of the robot, based on an analysis of the error sources of this registration method. In the one-time registration process, a marker on the end-tool of the robot is first recognized by a fixed binocular camera, and the orientation and position of the marker are calculated from the joint parameters of the robot. The relationship between the camera coordinate system and the robot base coordinate system can then be established to complete the hand-eye calibration. Because of manufacturing and assembly errors of the robot end-tool, an error equation was established with the transformation matrix between the robot end coordinate system and the robot end-tool coordinate system as the variable, and numerical optimization was employed to optimize the end-tool parameters. The experimental results showed that the one-time registration method significantly improves the efficiency of robot hand-eye calibration compared with existing methods, and that the parameter optimization significantly improves its absolute positioning accuracy, which meets the requirements of clinical surgery.
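
    The coordinate-frame bookkeeping behind such a one-time registration can be sketched with homogeneous transforms. This is a hedged illustration (frame names and values are ours, not the paper's): the camera-to-base transform is obtained by chaining the marker pose computed from the robot's joint parameters with the camera's view of the marker.

```python
import numpy as np

def hom(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Hypothetical inputs:
T_base_end = hom(np.eye(3), [0.4, 0.1, 0.3])      # from robot forward kinematics
T_end_marker = hom(np.eye(3), [0.0, 0.0, 0.05])   # from end-tool geometry (CAD)
T_cam_marker = hom(np.eye(3), [0.1, -0.2, 0.8])   # from the binocular camera

# Marker pose in the robot base frame, then camera pose in the base frame:
T_base_marker = T_base_end @ T_end_marker
T_base_cam = T_base_marker @ np.linalg.inv(T_cam_marker)
print(T_base_cam.round(3))
```

    Errors in T_end_marker are precisely what the paper's numerical optimization of the end-tool parameters is meant to correct.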

  13. A fast quadrature-based numerical method for the continuous spectrum biphasic poroviscoelastic model of articular cartilage.

    PubMed

    Stuebner, Michael; Haider, Mansoor A

    2010-06-18

    A new and efficient method for the numerical solution of the continuous spectrum biphasic poroviscoelastic (BPVE) model of articular cartilage is presented. Development of the method is based on a composite Gauss-Legendre quadrature approximation of the continuous spectrum relaxation function that leads to an exponential series representation. The separability property of the exponential terms in the series is exploited to develop a numerical scheme that can be reduced to an update rule requiring retention of the strain history at only the previous time step. The cost of the resulting temporal discretization scheme is O(N) for N time steps. Application and calibration of the method are illustrated in the context of a finite difference solution of the one-dimensional confined compression BPVE stress-relaxation problem. Accuracy of the numerical method is demonstrated by comparison to a theoretical Laplace transform solution for a range of viscoelastic relaxation times that are representative of articular cartilage. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
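
    The O(N) update rule enabled by an exponential-series (Prony-type) kernel can be sketched as follows. This assumes (our notation, not the paper's) a relaxation function G(t) = g_inf + sum_i g_i * exp(-t/tau_i), with g_i and tau_i coming from the quadrature; each exponential term carries one internal variable that needs only its value at the previous step.

```python
import numpy as np

def stress_history(eps, dt, g_inf, g, tau):
    """March the hereditary integral sigma(t) = int G(t-s) deps/ds ds with
    G(t) = g_inf + sum_i g_i * exp(-t/tau_i); one internal variable per
    series term, updated recursively (O(N) total cost for N steps)."""
    e = np.exp(-dt / tau)
    h = g * tau / dt * (1.0 - e)     # exact weight for piecewise-linear strain
    q = np.zeros_like(tau)           # internal variables, one per series term
    sigma = np.empty_like(eps)
    sigma[0] = (g_inf + g.sum()) * eps[0]
    for n in range(1, eps.size):
        de = eps[n] - eps[n - 1]
        q = e * q + h * de           # recursive convolution update
        sigma[n] = g_inf * eps[n] + q.sum()
    return sigma

# Stress relaxation under a ramp-and-hold strain (illustrative values):
t = np.arange(0, 10, 0.01)
eps = np.where(t < 0.1, t / 0.1, 1.0) * 0.05
sig = stress_history(eps, 0.01, g_inf=0.5, g=np.array([0.3, 0.2]),
                     tau=np.array([0.5, 5.0]))
```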

  14. Pattern-Based Inverse Modeling for Characterization of Subsurface Flow Models with Complex Geologic Heterogeneity

    NASA Astrophysics Data System (ADS)

    Golmohammadi, A.; Jafarpour, B.; M Khaninezhad, M. R.

    2017-12-01

    Calibration of heterogeneous subsurface flow models leads to ill-posed nonlinear inverse problems, where too many unknown parameters are estimated from limited response measurements. When the underlying parameters form complex (non-Gaussian) structured spatial connectivity patterns, classical variogram-based geostatistical techniques cannot describe the underlying connectivity patterns. Modern pattern-based geostatistical methods that incorporate higher-order spatial statistics are more suitable for describing such complex spatial patterns. Moreover, when the underlying unknown parameters are discrete (geologic facies distribution), conventional model calibration techniques that are designed for continuous parameters cannot be applied directly. In this paper, we introduce a novel pattern-based model calibration method to reconstruct discrete and spatially complex facies distributions from dynamic flow response data. To reproduce complex connectivity patterns during model calibration, we impose a feasibility constraint to ensure that the solution follows the expected higher-order spatial statistics. For model calibration, we adopt a regularized least-squares formulation, involving data mismatch, pattern connectivity, and feasibility constraint terms. Using an alternating directions optimization algorithm, the regularized objective function is divided into a continuous model calibration problem, followed by mapping the solution onto the feasible set. The feasibility constraint to honor the expected spatial statistics is implemented using a supervised machine learning algorithm. The two steps of the model calibration formulation are repeated until the convergence criterion is met. Several numerical examples are used to evaluate the performance of the developed method.
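
    The alternating-directions structure of the formulation can be illustrated generically. This toy version is our simplification, not the paper's algorithm: a linear forward model G stands in for the flow simulator, and a nearest-value snap to two facies stands in for the learned feasibility mapping; the two steps alternate until convergence.

```python
import numpy as np

def calibrate(G, d, m0, facies=(0.0, 1.0), lam=1.0, n_iter=20):
    """Alternate (i) a regularized least-squares update of the continuous
    model m with (ii) projection of m onto the discrete facies values."""
    A = G.T @ G + lam * np.eye(G.shape[1])
    proj = m0.copy()
    for _ in range(n_iter):
        # continuous calibration: argmin ||G m - d||^2 + lam ||m - proj||^2
        m = np.linalg.solve(A, G.T @ d + lam * proj)
        # feasibility step: snap each entry to the nearest facies value
        vals = np.asarray(facies)
        proj = vals[np.argmin(np.abs(m[:, None] - vals[None, :]), axis=1)]
    return proj

rng = np.random.default_rng(3)
G = rng.normal(size=(30, 20))
m_true = (rng.random(20) > 0.5).astype(float)   # true binary facies field
d = G @ m_true + 0.01 * rng.normal(size=30)     # noisy flow responses
m_est = calibrate(G, d, m0=np.full(20, 0.5))
print("fraction of cells recovered:", np.mean(m_est == m_true))
```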

  15. Calculation of far-field scattering from nonspherical particles using a geometrical optics approach

    NASA Technical Reports Server (NTRS)

    Hovenac, Edward A.

    1991-01-01

    A numerical method was developed using geometrical optics to predict far-field optical scattering from particles that are symmetric about the optic axis. The diffractive component of scattering is calculated and combined with the reflective and refractive components to give the total scattering pattern. The phase terms of the scattered light are calculated as well. Verification of the method was achieved by assuming a spherical particle and comparing the results to Mie scattering theory. Agreement with the Mie theory was excellent in the forward-scattering direction. However, small-amplitude oscillations near the rainbow regions were not observed using the numerical method. Numerical data from spheroidal particles and hemispherical particles are also presented. The use of hemispherical particles as a calibration standard for intensity-type optical particle-sizing instruments is discussed.

  16. New algorithms for motion error detection of numerical control machine tool by laser tracking measurement on the basis of GPS principle.

    PubMed

    Wang, Jindong; Chen, Peng; Deng, Yufen; Guo, Junjie

    2018-01-01

    As a three-dimensional measuring instrument, the laser tracker is widely used in industrial measurement. To avoid the influence of angle measurement error on the overall measurement accuracy, multi-station and time-sharing measurement with a laser tracker is introduced on the basis of the global positioning system (GPS) principle in this paper. For the proposed method, a critical issue is how to accurately determine the coordinates of each measuring point from a large amount of measured data. Taking the detection of motion errors of a numerical control machine tool as an example, the corresponding measurement algorithms are investigated thoroughly. By establishing a mathematical model of machine tool motion error detection with this method, an analytical algorithm for base-station calibration and measuring-point determination is deduced that requires no initial iterative value. However, when the motion area of the machine tool lies in a 2D plane, the coefficient matrix of the base-station calibration is singular, which produces a distorted result. To overcome this limitation of the original algorithm, an improved analytical algorithm is also derived. The calibration accuracy of the base station with the improved algorithm is compared with that of the original analytical algorithm and of iterative algorithms such as the Gauss-Newton and Levenberg-Marquardt algorithms. Experiments further verify the feasibility and effectiveness of the improved algorithm. In addition, the different motion areas of the machine tool have a certain influence on the calibration accuracy of the base station, and the influence of measurement error on the base-station calibration result, as a function of the condition number of the coefficient matrix, is analyzed.
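
    The GPS-style point determination reduces to multilateration: with calibrated base-station positions, each measuring point is recovered from its distances to the stations. A hedged sketch using nonlinear least squares (station coordinates and ranges are synthetic; the paper's closed-form analytical algorithm is not reproduced here):

```python
import numpy as np
from scipy.optimize import least_squares

stations = np.array([[0.0, 0.0, 0.0],
                     [2.0, 0.0, 0.0],
                     [0.0, 2.0, 0.0],
                     [0.0, 0.0, 2.0]])    # calibrated base-station positions (m)
p_true = np.array([0.7, 0.9, 0.4])
ranges = np.linalg.norm(stations - p_true, axis=1)   # laser-tracker distances

def residuals(p):
    return np.linalg.norm(stations - p, axis=1) - ranges

# Levenberg-Marquardt, one of the iterative methods named in the abstract.
sol = least_squares(residuals, x0=np.array([0.5, 0.5, 0.5]), method="lm")
print("recovered point:", sol.x.round(6))
```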

  17. Static behavior of the weld in the joint of the steel support element using experiment and numerical modeling

    NASA Astrophysics Data System (ADS)

    Krejsa, M.; Brozovsky, J.; Mikolasek, D.; Parenica, P.; Koubova, L.

    2018-04-01

    The paper is focused on the numerical modeling of welded steel bearing elements using the commercial software system ANSYS, which is based on the finite element method (FEM). It is important to check and compare the results of the FEM analysis with the results of a physical verification test, in which the real behavior of the bearing element can be observed; the results of the comparison can then be used to calibrate the computational model. The article deals with the physical testing of steel supporting elements, whose main purpose is to obtain the material, geometric and strength characteristics of the fillet and butt welds, including the heat-affected zone in the base material of the welded steel bearing element. A pressure test was performed during the experiment, in which the total load value and the corresponding deformation of the specimens under load were monitored. The data obtained were used for the calibration of numerical models of the test samples and are necessary for further stress and strain analysis of steel supporting elements.

  18. Prediction of Ignition of High Explosive When Submitted To Impact

    NASA Astrophysics Data System (ADS)

    Picart, Didier; Delmaire-Sizes, Franck; Gruau, Cyril; Trumel, Herve

    2009-06-01

    High explosive structures may unintentionally ignite and transition to deflagration or detonation when subjected to mechanical loadings such as low-velocity impact. We focus our attention on ignition. The Browning and Scammon [1] criterion has been adapted, and a concrete-like constitutive law is derived, with an up-to-date experimental characterization. These models have been implemented in Abaqus/Explicit [2]. Numerical simulations are used to calibrate the ignition threshold. The presentation (or poster) will detail the main assumptions, the models (Browning et al., mechanical behavior) and the calibration procedure. Comparisons between numerical results and experiments [3] will show the interest of this method but also its limitations (numerical artifacts, lack of mechanical data, misinterpretation of reactive tests). [1] R. Browning and R. Scammon, Shock compression of condensed matter, pp. 987-990, (2001). [2] C. Gruau, D. Picart et al., 17th Dymat technical meeting, Cambridge, UK, (2007). [3] F. Delmaire-Sizes et al., 3rd International symposium on energetic materials, Tokyo, Japan, (2008).

  1. Poster — Thur Eve — 14: Improving Tissue Segmentation for Monte Carlo Dose Calculation using DECT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di Salvio, A.; Bedwani, S.; Carrier, J-F.

    2014-08-15

    Purpose: To improve Monte Carlo dose calculation accuracy through a new tissue segmentation technique with dual energy CT (DECT). Methods: Electron density (ED) and effective atomic number (EAN) can be extracted directly from DECT data with a stoichiometric calibration method. Images are acquired from Monte Carlo CT projections using the user code egs-cbct and reconstructed using an FDK backprojection algorithm. Calibration is performed using projections of a numerical RMI phantom. A weighted-parameter algorithm then uses both EAN and ED to assign materials to voxels from the DECT simulated images. This new method is compared to a standard tissue characterization from single energy CT (SECT) data using a segmented calibrated Hounsfield unit (HU) to ED curve. Both methods are compared to the reference numerical head phantom. Monte Carlo simulations on uniform phantoms of different tissues using dosxyz-nrc show discrepancies in depth-dose distributions. Results: Both SECT and DECT segmentation methods show similar performance in assigning soft tissues. Performance is, however, improved with DECT in regions of higher density, such as bone, where it assigns materials correctly 8% more often than segmentation with SECT, considering the same set of tissues and simulated clinical CT images, i.e. including noise and reconstruction artifacts. Furthermore, Monte Carlo results indicate that kV photon beam depth-dose distributions can double between two tissues of density higher than muscle. Conclusions: A direct acquisition of ED and the added information of EAN with DECT data improve tissue segmentation and increase the accuracy of Monte Carlo dose calculation in kV photon beams.
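
    The voxel material-assignment step can be sketched as a weighted nearest-neighbour search in (ED, EAN) space. A hedged illustration with invented reference tissue values and weights (not the RMI phantom's actual compositions or the paper's tuning):

```python
import numpy as np

# Reference tissues: (electron density rel. to water, effective atomic number)
tissues = {
    "adipose": (0.95, 6.4),
    "muscle":  (1.04, 7.6),
    "bone":    (1.45, 12.3),
}
names = list(tissues)
ref = np.array([tissues[n] for n in names])

def assign(ed, ean, w_ed=1.0, w_ean=0.5):
    """Assign each voxel the tissue minimizing a weighted distance in
    (ED, EAN) space; w_ed and w_ean play the role of the weighted-parameter
    algorithm's tuning constants."""
    vox = np.stack([ed.ravel(), ean.ravel()], axis=1)
    d2 = (w_ed * (vox[:, :1] - ref[None, :, 0]))**2 \
       + (w_ean * (vox[:, 1:] - ref[None, :, 1]))**2
    return np.array(names)[d2.argmin(axis=1)].reshape(ed.shape)

ed = np.array([[0.97, 1.40], [1.05, 1.02]])     # from DECT
ean = np.array([[6.5, 11.9], [7.5, 7.7]])       # from DECT
print(assign(ed, ean))
```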

  2. Radiochromic film calibration for the RQT9 quality beam

    NASA Astrophysics Data System (ADS)

    Costa, K. C.; Gomez, A. M. L.; Alonso, T. C.; Mourao, A. P.

    2017-11-01

    When ionizing radiation interacts with matter, it deposits energy. Radiation dosimetry is important for medical applications of ionizing radiation due to the increasing demand for diagnostic radiology and radiotherapy. Different dosimetry methods are used, each with its own advantages and disadvantages. Film dosimetry records the energy deposition through the darkening of the film emulsion. Radiochromic films have little sensitivity to visible light and respond well to ionizing radiation exposure. The aim of this study is to obtain a calibration curve by irradiating radiochromic film strips, making it possible to relate the darkening of the film to the absorbed dose, in order to measure doses in experiments with a 120 kV X-ray beam in computed tomography (CT). Film strips of GAFCHROMIC XR-QA2 were exposed according to the RQT9 reference radiation quality, which defines an X-ray beam generated at a voltage of 120 kV. The strips were irradiated at the Laboratório de Calibração de Dosímetros do Centro de Desenvolvimento da Tecnologia Nuclear (LCD/CDTN) over a dose range of 5-30 mGy, corresponding to values commonly used in CT scans. Digital images of the irradiated films were analyzed using the ImageJ software. The darkening response of the film strips was measured as a function of dose, yielding a numerical darkening value for each dose. From these values, a calibration curve was obtained that correlates the darkening of the film strip with dose values in mGy. The calibration-curve equation provides a simple means of obtaining absorbed dose values from digital images of irradiated radiochromic films. With this calibration curve, radiochromic films may be applied to dosimetry in CT experiments using a 120 kV X-ray beam, in order to improve CT image acquisition processes.
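
    Such a curve is typically obtained with a simple regression of net darkening against delivered dose. A minimal sketch with invented readings (the paper's measured values are not reproduced):

```python
import numpy as np

dose = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])       # delivered dose (mGy)
darkening = np.array([0.08, 0.15, 0.21, 0.26, 0.31, 0.35])  # net optical change

# Second-order fit of dose as a function of darkening, so that new films
# can be read out directly in mGy.
coeffs = np.polyfit(darkening, dose, deg=2)
dose_from_film = np.poly1d(coeffs)

print(dose_from_film(0.18))   # estimated dose (mGy) for a measured darkening
```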

  3. Bilinear Inverse Problems: Theory, Algorithms, and Applications

    NASA Astrophysics Data System (ADS)

    Ling, Shuyang

    We will discuss how several important real-world signal processing problems, such as self-calibration and blind deconvolution, can be modeled as bilinear inverse problems and solved by convex and nonconvex optimization approaches. In Chapter 2, we bring together three seemingly unrelated concepts: self-calibration, compressive sensing and biconvex optimization. We show how several self-calibration problems can be treated efficiently within the framework of biconvex compressive sensing via a new method called SparseLift. More specifically, we consider a linear system of equations y = DAx, where the diagonal matrix D (which models the calibration error) is unknown and x is an unknown sparse signal. By "lifting" this biconvex inverse problem and exploiting sparsity in this model, we derive explicit theoretical guarantees under which both x and D can be recovered exactly, robustly, and numerically efficiently. In Chapter 3, we study the question of joint blind deconvolution and blind demixing, i.e., extracting a sequence of functions [special characters omitted] from observing only the sum of their convolutions [special characters omitted]. In particular, for the special case s = 1, this becomes the well-known blind deconvolution problem. We present a non-convex algorithm which guarantees exact recovery under conditions that are competitive with convex optimization methods, with the additional advantage of being computationally much more efficient. We discuss several applications of the proposed framework in image processing and wireless communications in connection with the Internet-of-Things. In Chapter 4, we consider three different self-calibration models of practical relevance. We show how their corresponding bilinear inverse problems can be solved by both the simple linear least squares approach and an SVD-based approach. As a consequence, the proposed algorithms are numerically extremely efficient, thus allowing for real-time deployment. Explicit theoretical guarantees and stability theory are derived, and the sampling complexity is nearly optimal (up to a poly-log factor). Applications in imaging sciences and signal processing are discussed, and numerical simulations are presented to demonstrate the effectiveness and efficiency of our approach.

  4. Carbon-14 wiggle-match dating of peat deposits: advantages and limitations

    NASA Astrophysics Data System (ADS)

    Blaauw, Maarten; van Geel, Bas; Mauquoy, Dmitri; van der Plicht, Johannes

    2004-02-01

    Carbon-14 wiggle-match dating (WMD) of peat deposits uses the non-linear relationship between 14C age and calendar age to match the shape of a series of closely spaced peat 14C dates with the 14C calibration curve. The method of WMD is discussed, and its advantages and limitations are compared with calibration of individual dates. A numerical approach to WMD is introduced that makes it possible to assess the precision of WMD chronologies. During several intervals of the Holocene, the 14C calibration curve shows less pronounced fluctuations; we assess whether wiggle-matching is also a feasible strategy for these parts of the calibration curve. High-precision chronologies, such as those obtainable with WMD, are needed for studies of rapid climate changes and their possible causes during the Holocene.
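
    Numerically, WMD amounts to sliding a fixed-spacing series of 14C dates along the calendar axis and scoring the fit against the calibration curve. A hedged toy version (synthetic curve and dates; real work would use IntCal data and proper error treatment):

```python
import numpy as np

# Toy calibration curve: calendar age (cal BP) -> 14C age (BP), with wiggles.
cal = np.arange(2000, 4001)
c14_curve = cal - 300 + 40 * np.sin(cal / 80.0)

# Series of 14C dates from a peat core; sample depths converted to
# calendar-year offsets via an assumed constant accumulation rate.
offsets = np.array([0, 25, 50, 75, 100])          # years between samples
c14_dates = np.array([2710, 2748, 2760, 2805, 2840])
sigma = 30.0                                      # 1-sigma dating error (yr)

def chi2(start):
    """Misfit of the whole series anchored at calendar age 'start'."""
    curve_vals = np.interp(start + offsets, cal, c14_curve)
    return np.sum(((c14_dates - curve_vals) / sigma) ** 2)

starts = np.arange(2000, 3900)
scores = np.array([chi2(s) for s in starts])
best = starts[scores.argmin()]
print("best-fit calendar age of top sample:", best, "cal BP")
```

    The shape of the chi-square profile around its minimum is what allows the precision of the wiggle-matched chronology to be assessed.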

  5. Dimensional accuracy of aluminium extrusions in mechanical calibration

    NASA Astrophysics Data System (ADS)

    Raknes, Christian Arne; Welo, Torgeir; Paulsen, Frode

    2018-05-01

    Reducing dimensional variations in the extrusion process without increasing cost is challenging due to the nature of the process itself. An alternative approach, also from a cost perspective, is to use extruded profiles with standard tolerances and utilize downstream processes to calibrate the part within tolerance limits that are not achievable directly from the extrusion process. In this paper, two mechanical calibration strategies for the extruded product are investigated, utilizing the forming lines of the manufacturer. The first calibration strategy is based on global longitudinal stretching in combination with local bending, while the second utilizes transversal stretching and local bending of the cross-section. An extruded U-profile is used to compare the two methods in numerical analyses. To provide response surfaces, the FEA program ABAQUS is used in combination with Design of Experiments (DOE); the DOE is conducted with a two-level fractional factorial design to collect the appropriate data. The aim is to find the main factors affecting the dimensional accuracy of the final part obtained by the two calibration methods. The results show that both calibration strategies effectively reduce cross-sectional variations from standard extrusion tolerances. It is concluded that mechanical calibration is a viable, low-cost alternative for aluminium parts that demand high dimensional accuracy, e.g. due to fit-up or welding requirements.
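
    A two-level fractional factorial design of the kind used here can be generated in a few lines. A hedged sketch (the factor names are invented, not the paper's): a 2^(4-1) design in which the fourth factor is aliased with the three-way interaction, halving the number of FEA runs.

```python
import itertools
import numpy as np

factors = ["stretch_force", "bend_radius", "friction", "wall_thickness"]

# Full 2^3 design in the first three factors; the fourth is generated as
# D = ABC, giving a resolution-IV 2^(4-1) design (8 runs instead of 16).
runs = []
for a, b, c in itertools.product([-1, 1], repeat=3):
    runs.append((a, b, c, a * b * c))

design = np.array(runs)
print(factors)
print(design)
```

    Each row defines one FEA run at the low (-1) or high (+1) level of every factor; the responses fitted over this matrix yield the response surfaces.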

  6. A numerically-stable algorithm for calibrating single six-ports for national microwave reflectometry

    NASA Astrophysics Data System (ADS)

    Hodgetts, T. E.

    1990-11-01

    A full description and analysis is given of the numerically stable algorithm currently used for calibrating single six-ports (or multi-states) for national microwave reflectometry, employing as standards four one-port devices with known voltage reflection coefficients.

  7. The effects of AVIRIS atmospheric calibration methodology on identification and quantitative mapping of surface mineralogy, Drum Mountains, Utah

    NASA Technical Reports Server (NTRS)

    Kruse, Fred A.; Dwyer, John L.

    1993-01-01

    The Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) measures reflected light in 224 contiguous spectral bands in the 0.4 to 2.45 micron region of the electromagnetic spectrum. Numerous studies have used these data for mineralogic identification and mapping based on the presence of diagnostic spectral features. Quantitative mapping requires conversion of the AVIRIS data to physical units (usually reflectance) so that analysis results can be compared and validated with field and laboratory measurements. This study evaluated two different techniques for calibrating AVIRIS data to ground reflectance, an empirically based method and an atmospheric-model-based method, to determine their effects on quantitative scientific analyses. Expert system analysis and linear spectral unmixing were applied to both calibrated data sets to determine the effect of the calibration on the mineral identification and quantitative mapping results. Comparison of the image-map results and image reflectance spectra indicates that the model-calibrated data can be used with automated mapping techniques to produce accurate maps showing the spatial distribution and abundance of surface mineralogy. This has positive implications for future operational mapping using AVIRIS or similar imaging spectrometer data sets without requiring a priori knowledge.

  8. Quantitative analysis of time-resolved microwave conductivity data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reid, Obadiah G.; Moore, David T.; Li, Zhen

    Flash-photolysis time-resolved microwave conductivity (fp-TRMC) is a versatile, highly sensitive technique for studying the complex photoconductivity of solution, solid, and gas-phase samples. The purpose of this paper is to provide a standard reference work for experimentalists interested in using microwave conductivity methods to study functional electronic materials, describing how to conduct and calibrate these experiments in order to obtain quantitative results. The main focus of the paper is on calculating the calibration factor, K, which is used to connect the measured change in microwave power absorption to the conductance of the sample. We describe the standard analytical formulae that have been used in the past, and compare them to numerical simulations. This comparison shows that the most widely used analytical analysis of fp-TRMC data systematically under-estimates the transient conductivity by ~60%. We suggest a more accurate semi-empirical way of calibrating these experiments. However, we emphasize that the full numerical calculation is necessary to quantify both transient and steady-state conductance for arbitrary sample properties and geometry.
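
    The role of the calibration factor K can be made concrete in one line of algebra: the fractional change in absorbed microwave power is taken to be proportional to the photoinduced change in sample conductance, ΔP/P = -K ΔG (sign convention assumed here). A hedged sketch of applying a known K to digitized transient data, with invented values:

```python
import numpy as np

K = 2.0e4                      # cavity calibration factor (determined separately)
P0 = 1.0                       # steady-state microwave power (a.u.)
dP = np.array([-0.002, -0.0015, -0.001, -0.0006])   # transient power change

dG = -(dP / P0) / K            # photoconductance transient (siemens)
print(dG)
```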

  9. Hierarchical calibration and validation framework of bench-scale computational fluid dynamics simulations for solvent-based carbon capture. Part 2: Chemical absorption across a wetted wall column: Original Research Article: Hierarchical calibration and validation framework of bench-scale computational fluid dynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Chao; Xu, Zhijie; Lai, Kevin

    Part 1 of this paper presents a numerical model for non-reactive physical mass transfer across a wetted wall column (WWC). In Part 2, we improve the existing computational fluid dynamics (CFD) model to simulate chemical absorption occurring in a WWC as a bench-scale study of solvent-based carbon dioxide (CO2) capture. To generate data for WWC model validation, CO2 mass transfer across a monoethanolamine (MEA) solvent was first measured on a WWC experimental apparatus. The numerical model developed in this work can account for both chemical absorption and desorption of CO2 in MEA. In addition, the overall mass transfer coefficient predicted using traditional/empirical correlations is compared with CFD predictions for both steady and wavy falling films. A Bayesian statistical calibration algorithm is adopted to calibrate the reaction rate constants for chemical absorption/desorption of CO2 across a falling film of MEA. The posterior distributions of the two transport properties, i.e., Henry's constant and gas diffusivity in the non-reacting nitrous oxide (N2O)/MEA system, obtained from Part 1 of this study, serve as priors for the calibration of the CO2 reaction rate constants via the N2O/CO2 analogy method. The calibrated model can be used to predict CO2 mass transfer in a WWC over a wider range of operating conditions.
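
    The Bayesian calibration step can be illustrated with a minimal Metropolis sampler. This is a generic sketch, not the paper's algorithm: a single rate constant k is calibrated against synthetic flux observations from a stand-in forward model, with a Gaussian prior playing the role of the Part 1 posterior carried over as a prior.

```python
import numpy as np

rng = np.random.default_rng(4)

def forward(k):
    """Stand-in forward model: CO2 absorption flux as a function of rate k."""
    return 1.0 - np.exp(-k * np.array([0.5, 1.0, 2.0, 4.0]))

k_true = 0.8
obs = forward(k_true) + rng.normal(0, 0.01, size=4)   # synthetic WWC data
sigma = 0.01                                          # observation noise
prior_mean, prior_sd = 1.0, 0.5   # e.g., informed by an earlier calibration stage

def log_post(k):
    if k <= 0:
        return -np.inf
    ll = -0.5 * np.sum((forward(k) - obs) ** 2) / sigma**2
    lp = -0.5 * ((k - prior_mean) / prior_sd) ** 2
    return ll + lp

samples, k = [], 1.0
for _ in range(5000):                      # random-walk Metropolis
    k_prop = k + 0.05 * rng.normal()
    if np.log(rng.random()) < log_post(k_prop) - log_post(k):
        k = k_prop
    samples.append(k)

print("posterior mean k:", np.mean(samples[1000:]))
```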

  10. Synthetic signal injection using inductive coupling

    PubMed Central

    Marro, Kenneth I.; Lee, Donghoon; Shankland, Eric G.; Mathis, Clinton M.; Hayes, Cecil E.; Amara, Catherine E.; Kushmerick, Martin J.

    2009-01-01

    Conversion of MR signals into units of metabolite concentration requires a very high level of diligence to account for the numerous parameters and transformations that affect the proportionality between the quantity of excited nuclei in the acquisition volume and the integrated area of the corresponding peak in the spectrum. We describe a method that eases this burden with respect to the transformations that occur during and following data acquisition. The conceptual approach is similar to the ERETIC method, which uses a pre-calibrated, artificial reference signal as a calibration factor to accomplish the conversion. The distinguishing feature of our method is that the artificial signal is introduced strictly via induction, rather than radiation. We tested a prototype probe that includes a second RF coil rigidly positioned close to the receive coil so that there was constant mutual inductance between them. The artificial signal was transmitted through the second RF coil and acquired by the receive coil in parallel with the real signal. Our results demonstrate that the calibration factor is immune to changes in sample resistance. This is a key advantage because it removes the cumbersome requirement that coil loading conditions be the same for the calibration sample as for experimental samples. The method should be adaptable to human studies and could allow more practical and accurate quantification of metabolite content. PMID:18595750

  13. Synthetic signal injection using inductive coupling.

    PubMed

    Marro, Kenneth I; Lee, Donghoon; Shankland, Eric G; Mathis, Clinton M; Hayes, Cecil E; Amara, Catherine E; Kushmerick, Martin J

    2008-09-01

    Conversion of MR signals into units of metabolite concentration requires a very high level of diligence to account for the numerous parameters and transformations that affect the proportionality between the quantity of excited nuclei in the acquisition volume and the integrated area of the corresponding peak in the spectrum. We describe a method that eases this burden with respect to the transformations that occur during and following data acquisition. The conceptual approach is similar to the ERETIC method, which uses a pre-calibrated, artificial reference signal as a calibration factor to accomplish the conversion. The distinguishing feature of our method is that the artificial signal is introduced strictly via induction, rather than radiation. We tested a prototype probe that includes a second RF coil rigidly positioned close to the receive coil so that there was constant mutual inductance between them. The artificial signal was transmitted through the second RF coil and acquired by the receive coil in parallel with the real signal. Our results demonstrate that the calibration factor is immune to changes in sample resistance. This is a key advantage because it removes the cumbersome requirement that coil loading conditions be the same for the calibration sample as for experimental samples. The method should be adaptable to human studies and could allow more practical and accurate quantification of metabolite content.

  14. Synthetic signal injection using inductive coupling

    NASA Astrophysics Data System (ADS)

    Marro, Kenneth I.; Lee, Donghoon; Shankland, Eric G.; Mathis, Clinton M.; Hayes, Cecil E.; Amara, Catherine E.; Kushmerick, Martin J.

    2008-09-01

    Conversion of MR signals into units of metabolite concentration requires a very high level of diligence to account for the numerous parameters and transformations that affect the proportionality between the quantity of excited nuclei in the acquisition volume and the integrated area of the corresponding peak in the spectrum. We describe a method that eases this burden with respect to the transformations that occur during and following data acquisition. The conceptual approach is similar to the ERETIC method, which uses a pre-calibrated, artificial reference signal as a calibration factor to accomplish the conversion. The distinguishing feature of our method is that the artificial signal is introduced strictly via induction, rather than radiation. We tested a prototype probe that includes a second RF coil rigidly positioned close to the receive coil so that there was constant mutual inductance between them. The artificial signal was transmitted through the second RF coil and acquired by the receive coil in parallel with the real signal. Our results demonstrate that the calibration factor is immune to changes in sample resistance. This is a key advantage because it removes the cumbersome requirement that coil loading conditions be the same for the calibration sample as for experimental samples. The method should be adaptable to human studies and could allow more practical and accurate quantification of metabolite content.

  11. Parameter estimation for groundwater models under uncertain irrigation data

    USGS Publications Warehouse

    Demissie, Yonas; Valocchi, Albert J.; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen

    2015-01-01

    The success of modeling groundwater is strongly influenced by the accuracy of the model parameters that are used to characterize the subsurface system. However, the presence of uncertainty, and possibly bias, in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of the generalized least-squares method, with the weight of the objective function depending on the level of pumping uncertainty and adjusted iteratively during the parameter optimization process. We conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty in the irrigation data and different calibration conditions. The results from the OLS method show the presence of statistically significant (p < 0.05) bias in estimated parameters and model predictions that persists despite calibrating the models to different calibration data and sample sizes. However, by directly accounting for the irrigation pumping uncertainties during the calibration procedure, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration process.
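
    The weighting idea can be sketched with ordinary linear algebra. A hedged toy example (linear model, synthetic data): observations affected by uncertain pumping are down-weighted in proportion to their input uncertainty, in the spirit of IUWLS, although the real method adjusts the weights iteratively during calibration.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50
X = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])  # design matrix
beta_true = np.array([2.0, 0.7])

# Observation noise is larger where the pumping data are uncertain.
pump_sigma = np.where(np.arange(n) % 2 == 0, 0.1, 1.5)
y = X @ beta_true + rng.normal(0, pump_sigma)

# OLS vs. uncertainty-weighted least squares:
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
W = np.diag(1.0 / pump_sigma**2)
beta_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

print("OLS:", beta_ols.round(3), " WLS:", beta_wls.round(3))
```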

  16. Application of Bayesian Maximum Entropy Filter in parameter calibration of groundwater flow model in PingTung Plain

    NASA Astrophysics Data System (ADS)

    Cheung, Shao-Yong; Lee, Chieh-Han; Yu, Hwa-Lung

    2017-04-01

    Due to limited hydrogeological observation data and the high levels of uncertainty within them, parameter estimation for groundwater models has been an important issue. There are many methods of parameter estimation; for example, the Kalman filter provides real-time calibration of parameters through measurements from groundwater monitoring wells, and related methods such as the Extended Kalman Filter and the Ensemble Kalman Filter are widely applied in groundwater research. However, Kalman filter methods are limited to linear systems. This study proposes a novel method, Bayesian Maximum Entropy Filtering, which can account for the uncertainty of the data during parameter estimation. With these two methods, we can estimate parameters from hard data (certain) and soft data (uncertain) at the same time. In this study, we use Python and QGIS with a groundwater model (MODFLOW) and develop both an Extended Kalman Filter and a Bayesian Maximum Entropy Filter in Python for parameter estimation. This provides a conventional filtering method alongside one that also considers the uncertainty of the data. The study was conducted as a numerical model experiment combining the Bayesian maximum entropy filter with a hypothetical MODFLOW groundwater model, using virtual observation wells to observe the simulated groundwater system periodically. The results showed that, by considering the uncertainty of the data, the Bayesian maximum entropy filter provides good real-time parameter estimates.
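
    The Extended Kalman Filter half of this workflow can be sketched in a few lines of Python (a generic parameter-estimation EKF; the observation operator h would wrap a MODFLOW head simulation, and all names here are illustrative):

      import numpy as np

      def ekf_parameter_step(theta, P, z, h, H_jac, Q, R):
          """One EKF cycle treating model parameters as a random walk.

          theta : current parameter estimate
          P     : parameter covariance
          z     : new measurement vector (e.g., monitoring-well heads)
          h     : observation operator (parameters -> simulated heads)
          H_jac : Jacobian of h at theta
          Q, R  : process- and measurement-noise covariances
          """
          P = P + Q                                # random-walk prediction
          H = H_jac(theta)
          S = H @ P @ H.T + R                      # innovation covariance
          K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
          theta = theta + K @ (z - h(theta))       # update with innovation
          P = (np.eye(len(theta)) - K @ H) @ P
          return theta, P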

  17. Measuring the electrical properties of soil using a calibrated ground-coupled GPR system

    USGS Publications Warehouse

    Oden, C.P.; Olhoeft, G.R.; Wright, D.L.; Powers, M.H.

    2008-01-01

    Traditional methods for estimating vadose zone soil properties using ground penetrating radar (GPR) include measuring travel time, fitting diffraction hyperbolae, and other methods exploiting geometry. Additional processing techniques for estimating soil properties are possible with properly calibrated GPR systems. Such calibration using ground-coupled antennas must account for the effects of the shallow soil on the antenna's response, because changing soil properties result in a changing antenna response. A prototype GPR system using ground-coupled antennas was calibrated using laboratory measurements and numerical simulations of the GPR components. Two methods for estimating subsurface properties that utilize the calibrated response were developed. First, a new nonlinear inversion algorithm to estimate shallow soil properties under ground-coupled antennas was evaluated. Tests with synthetic data showed that the inversion algorithm is well behaved across the allowed range of soil properties. A preliminary field test gave encouraging results, with estimated soil property uncertainties (1σ) of ±1.9 and ±4.4 mS/m for the relative dielectric permittivity and the electrical conductivity, respectively. Next, a deconvolution method for estimating the properties of subsurface reflectors with known shapes (e.g., pipes or planar interfaces) was developed. This method uses scattering matrices to account for the response of subsurface reflectors. The deconvolution method was evaluated for use with noisy data using synthetic data. Results indicate that the deconvolution method requires reflected waves with a signal/noise ratio of about 10:1 or greater. When applied to field data with a signal/noise ratio of 2:1, the method was able to estimate the reflection coefficient and relative permittivity, but the large uncertainty in this estimate precluded inversion for conductivity. © Soil Science Society of America.

  18. Design of transonic airfoil sections using a similarity theory

    NASA Technical Reports Server (NTRS)

    Nixon, D.

    1978-01-01

    A study of the available methods for transonic airfoil and wing design indicates that the most powerful technique is the numerical optimization procedure. However, the computer time for this method is relatively large because of the amount of computation required in the searches during optimization. The optimization method requires that base and calibration solutions be computed to determine a minimum drag direction. The design space is then computationally searched in this direction; it is these searches that dominate the computation time. A recent similarity theory allows certain transonic flows to be calculated rapidly from the base and calibration solutions. In this paper the application of the similarity theory to design problems is examined with the object of at least partially eliminating the costly searches of the design optimization method. An example of an airfoil design is presented.

  19. On constraining pilot point calibration with regularization in PEST

    USGS Publications Warehouse

    Fienen, M.N.; Muffels, C.T.; Hunt, R.J.

    2009-01-01

    Ground water model calibration has made great advances in recent years with practical tools such as PEST being instrumental for making the latest techniques available to practitioners. As models and calibration tools get more sophisticated, however, the power of these tools can be misapplied, resulting in poor parameter estimates and/or nonoptimally calibrated models that do not suit their intended purpose. Here, we focus on an increasingly common technique for calibrating highly parameterized numerical models - pilot point parameterization with Tikhonov regularization. Pilot points are a popular method for spatially parameterizing complex hydrogeologic systems; however, additional flexibility offered by pilot points can become problematic if not constrained by Tikhonov regularization. The objective of this work is to explain and illustrate the specific roles played by control variables in the PEST software for Tikhonov regularization applied to pilot points. A recent study encountered difficulties implementing this approach, but through examination of that analysis, insight into underlying sources of potential misapplication can be gained and some guidelines for overcoming them developed. © 2009 National Ground Water Association.
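
    To make the role of Tikhonov regularization concrete, here is a stripped-down sketch of the regularized normal equations behind a pilot-point update (illustrative only; PEST's actual implementation manages the regularization weight adaptively against a target measurement objective):

      import numpy as np

      def tikhonov_step(J, r, D, mu):
          """One Gauss-Newton step for pilot-point values.

          J  : Jacobian of simulated observations w.r.t. pilot points
          r  : residuals (observed minus simulated)
          D  : regularization operator, e.g. differences between
               neighbouring pilot points (preferred homogeneity) or
               departures from preferred values
          mu : weight trading data fit against regularity
          """
          lhs = J.T @ J + mu * (D.T @ D)
          return np.linalg.solve(lhs, J.T @ r)

    Without the mu * D.T @ D term, the flexibility of many pilot points makes the normal equations ill-conditioned, which is exactly the misapplication risk the paper describes.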

  20. Monte-Carlo-based uncertainty propagation with hierarchical models—a case study in dynamic torque

    NASA Astrophysics Data System (ADS)

    Klaus, Leonard; Eichstädt, Sascha

    2018-04-01

    For a dynamic calibration, a torque transducer is described by a mechanical model, and the corresponding model parameters are to be identified from measurement data. A measuring device for the primary calibration of dynamic torque, and a corresponding model-based calibration approach, have recently been developed at PTB. The complete mechanical model of the calibration set-up is very complex, and involves several calibration steps—making a straightforward implementation of a Monte Carlo uncertainty evaluation tedious. With this in mind, we here propose to separate the complete model into sub-models, with each sub-model being treated with individual experiments and analysis. The uncertainty evaluation for the overall model then has to combine the information from the sub-models in line with Supplement 2 of the Guide to the Expression of Uncertainty in Measurement. In this contribution, we demonstrate how to carry this out using the Monte Carlo method. The uncertainty evaluation involves various input quantities of different origin and the solution of a numerical optimisation problem.
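
    A minimal sketch of the Monte Carlo propagation across sub-models, in the spirit of GUM Supplement 2 (all numbers are invented; the real set-up involves several coupled calibration steps): each sub-model contributes samples of its parameters, and the combined model is evaluated sample by sample.

      import numpy as np

      rng = np.random.default_rng(1)
      M = 100_000                                  # Monte Carlo trials

      # Sub-model 1: torsional stiffness from its own experiment
      k = rng.normal(1.00e4, 0.02e4, M)            # N*m/rad
      # Sub-model 2: mass moment of inertia from a separate experiment
      J = rng.normal(5.00e-3, 0.05e-3, M)          # kg*m^2

      # Combined model: resonance frequency of the transducer
      f0 = np.sqrt(k / J) / (2.0 * np.pi)          # Hz
      print(f"f0 = {f0.mean():.1f} Hz, u(f0) = {f0.std(ddof=1):.1f} Hz")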

  1. Numerical modelling of gravel unconstrained flow experiments with the DAN3D and RASH3D codes

    NASA Astrophysics Data System (ADS)

    Sauthier, Claire; Pirulli, Marina; Pisani, Gabriele; Scavia, Claudio; Labiouse, Vincent

    2015-12-01

    Landslide continuum dynamic models have improved considerably in recent years, but a consensus on the best method of calibrating the input resistance parameter values for predictive analyses has not yet emerged. In the present paper, numerical simulations of a series of laboratory experiments performed at the Laboratory for Rock Mechanics of the EPF Lausanne were undertaken with the RASH3D and DAN3D numerical codes. They aimed at analysing the possibility of using calibrated ranges of parameters (1) in a code different from the one they were obtained with and (2) to simulate potential events made of a material with the same characteristics as back-analysed past events, but involving a different volume and propagation path. For this purpose, one of the four benchmark laboratory tests was used as the past event to calibrate the dynamic basal friction angle, assuming a Coulomb-type behaviour of the sliding mass, and this back-analysed value was then used to simulate the three other experiments, treated as potential events. The computational findings show good correspondence with the experimental results in terms of the characteristics of the final deposits (i.e., runout, length and width). Furthermore, the best-fit values of the dynamic basal friction angle obtained for the two codes turn out to be close to each other and within the range of values measured with pseudo-dynamic tilting tests.

  2. eSIP: A Novel Solution-Based Sectioned Image Property Approach for Microscope Calibration

    PubMed Central

    Butzlaff, Malte; Weigel, Arwed; Ponimaskin, Evgeni; Zeug, Andre

    2015-01-01

    Fluorescence confocal microscopy represents one of the central tools in modern sciences. Correspondingly, a growing amount of research relies on the development of novel microscopic methods. During the last decade numerous microscopic approaches were developed for the investigation of various scientific questions. Thereby, the former qualitative imaging methods were replaced by advanced quantitative methods to gain more and more information from a given sample. However, modern microscope systems, being as complex as they are, require very precise and appropriate calibration routines, in particular when quantitative measurements are to be compared over longer time scales or between different setups. Multispectral beads of sub-resolution size are often used to describe the point spread function and thus the optical properties of the microscope. More recently, a fluorescent layer was utilized to describe the axial profile for each pixel, which allows a spatially resolved characterization. However, fabrication of a thin fluorescent layer with a matching refractive index is not yet technically solved. Therefore, we propose a novel type of calibration concept for sectioned image property (SIP) measurements which is based on a fluorescent solution and makes the calibration concept available to a broader number of users. Compared to the previous approach, additional information can be obtained by application of this extended SIP chart approach, including penetration depth, detected number of photons, and illumination profile shape. Furthermore, due to the fit of the complete profile, our method is less susceptible to noise. Generally, the extended SIP approach represents a simple and highly reproducible method, allowing setup-independent calibration and alignment procedures, which is mandatory for advanced quantitative microscopy. PMID:26244982

  3. A multimethod Global Sensitivity Analysis to aid the calibration of geomechanical models via time-lapse seismic data

    NASA Astrophysics Data System (ADS)

    Price, D. C.; Angus, D. A.; Garcia, A.; Fisher, Q. J.; Parsons, S.; Kato, J.

    2018-03-01

    Time-lapse seismic attributes are used extensively in the history matching of production simulator models. However, although proven to contain information regarding production-induced stress change, they are typically used only loosely (i.e. qualitatively) to calibrate geomechanical models. In this study we conduct a multimethod Global Sensitivity Analysis (GSA) to assess the feasibility of, and aid, the quantitative calibration of geomechanical models via near-offset time-lapse seismic data, specifically the calibration of the mechanical properties of the overburden. Via the GSA, we analyse the near-offset overburden seismic traveltimes from over 4000 perturbations of a Finite Element (FE) geomechanical model of a typical High Pressure High Temperature (HPHT) reservoir in the North Sea. We find that, out of an initially large set of material properties, the near-offset overburden traveltimes are primarily affected by Young's modulus and the effective stress (i.e. Biot) coefficient. The unexpected significance of the Biot coefficient highlights the importance of modelling fluid flow and pore pressure outside of the reservoir. The FE model is complex and highly nonlinear, and multiple combinations of model parameters can yield equally plausible model realizations; consequently, numerical calibration via a large number of random model perturbations is unfeasible. However, the significant differences in traveltime results suggest that more sophisticated calibration methods could be feasible for finding numerous suitable solutions. The results of the time-varying GSA demonstrate how acquiring multiple vintages of time-lapse seismic data can be advantageous. However, they also suggest that significant overburden near-offset seismic time-shifts, useful for model calibration, may take up to 3 yr after the start of production to manifest. Due to the nonlinearity of the model behaviour, comparable uncertainty in the reservoir mechanical properties influences overburden traveltime to a much greater extent. Therefore, reservoir properties must be known to a suitable degree of accuracy before calibration of the overburden can be considered.
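
    As one simple representative of the multimethod GSA toolbox, standardized regression coefficients give a quick variance-based screen of which inputs drive an output (a sketch with synthetic data; the actual study applies several GSA methods to FE traveltimes):

      import numpy as np

      def src_sensitivity(X, y):
          """Standardized regression coefficients: |beta_i| ranks the
          influence of parameter i on the output (valid when the
          response is approximately linear in the inputs)."""
          Xs = (X - X.mean(axis=0)) / X.std(axis=0)
          ys = (y - y.mean()) / y.std()
          beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
          return beta

      # Synthetic stand-in: columns could be Young's modulus, Biot
      # coefficient, etc.; y the simulated near-offset traveltime.
      rng = np.random.default_rng(0)
      X = rng.uniform(size=(4000, 5))
      y = 3.0 * X[:, 0] + 1.5 * X[:, 1] + 0.1 * rng.normal(size=4000)
      print(src_sensitivity(X, y))   # first two inputs dominate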

  4. Validation and calibration of structural models that combine information from multiple sources.

    PubMed

    Dahabreh, Issa J; Wong, John B; Trikalinos, Thomas A

    2017-02-01

    Mathematical models that attempt to capture structural relationships between their components and combine information from multiple sources are increasingly used in medicine. Areas covered: We provide an overview of methods for model validation and calibration and survey studies comparing alternative approaches. Expert commentary: Model validation entails a confrontation of models with data, background knowledge, and other models, and can inform judgments about model credibility. Calibration involves selecting parameter values to improve the agreement of model outputs with data. When the goal of modeling is quantitative inference on the effects of interventions or forecasting, calibration can be viewed as estimation. This view clarifies issues related to parameter identifiability and facilitates formal model validation and the examination of consistency among different sources of information. In contrast, when the goal of modeling is the generation of qualitative insights about the modeled phenomenon, calibration is a rather informal process for selecting inputs that result in model behavior roughly reproducing select aspects of the modeled phenomenon, and it cannot be equated with an estimation procedure. Current empirical research on validation and calibration methods consists primarily of methodological appraisals or case studies of alternative techniques and cannot address the numerous complex and multifaceted methodological decisions that modelers must make. Further research is needed on different approaches for developing and validating complex models that combine evidence from multiple sources.

  5. Calibration and Forward Uncertainty Propagation for Large-eddy Simulations of Engineering Flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Templeton, Jeremy Alan; Blaylock, Myra L.; Domino, Stefan P.

    2015-09-01

    The objective of this work is to investigate the efficacy of using calibration strategies from Uncertainty Quantification (UQ) to determine model coefficients for LES. As the target methods are for engineering LES, uncertainty from numerical aspects of the model must also be quantified. The ultimate goal of this research thread is to generate a cost versus accuracy curve for LES such that the cost could be minimized given an accuracy prescribed by an engineering need. Realization of this goal would enable LES to serve as a predictive simulation tool within the engineering design process.

  6. A controlled experiment in ground water flow model calibration

    USGS Publications Warehouse

    Hill, M.C.; Cooley, R.L.; Pollock, D.W.

    1998-01-01

    Nonlinear regression was introduced to ground water modeling in the 1970s, but has been used very little to calibrate numerical models of complicated ground water systems. Apparently, nonlinear regression is thought by many to be incapable of addressing such complex problems. With what we believe to be the most complicated synthetic test case used for such a study, this work investigates using nonlinear regression in ground water model calibration. Results of the study fall into two categories. First, the study demonstrates how systematic use of a well designed nonlinear regression method can indicate the importance of different types of data and can lead to successive improvement of models and their parameterizations. Our method differs from previous methods presented in the ground water literature in that (1) weighting is more closely related to expected data errors than is usually the case; (2) defined diagnostic statistics allow for more effective evaluation of the available data, the model, and their interaction; and (3) prior information is used more cautiously. Second, our results challenge some commonly held beliefs about model calibration. For the test case considered, we show that (1) field measured values of hydraulic conductivity are not as directly applicable to models as their use in some geostatistical methods imply; (2) a unique model does not necessarily need to be identified to obtain accurate predictions; and (3) in the absence of obvious model bias, model error was normally distributed. The complexity of the test case involved implies that the methods used and conclusions drawn are likely to be powerful in practice.

  7. Multi-Method Assessment of Metacognitive Skills in Elementary School Children: How You Test Is What You Get

    ERIC Educational Resources Information Center

    Desoete, Annemie

    2008-01-01

    Third grade elementary school children solved tests on mathematical reasoning and numerical facility. Metacognitive skillfulness was assessed through think aloud protocols, prospective and retrospective child ratings, teacher questionnaires, calibration measures and EPA2000. In our dataset metacognition has a lot in common with intelligence, but…

  8. Conversion from Engineering Units to Telemetry Counts on Dryden Flight Simulators

    NASA Technical Reports Server (NTRS)

    Fantini, Jay A.

    1998-01-01

    Dryden real-time flight simulators encompass the simulation of pulse code modulation (PCM) telemetry signals. This paper presents a new method whereby the calibration polynomial (from first to sixth order), representing the conversion from counts to engineering units (EU), is numerically inverted in real time. The result is less than one-count error for valid EU inputs. The Newton-Raphson method is used to numerically invert the polynomial. A reverse linear interpolation between the EU limits is used to obtain an initial value for the desired telemetry count. The method presented here is not new. What is new is how classical numerical techniques are optimized to take advantage of modern computer power to perform the desired calculations in real time. This approach makes the method simple to understand and implement, and there are no interpolation tables to store in memory as in traditional methods. The NASA F-15 simulation converts and transmits over 1000 parameters at 80 times/sec. This paper presents algorithm development, FORTRAN code, and performance results.
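
    The inversion described above is compact enough to sketch (in Python rather than the paper's FORTRAN; the count and EU ranges are hypothetical placeholders):

      import numpy as np

      def eu_to_counts(eu, coeffs, count_min=0, count_max=1023,
                       eu_min=-100.0, eu_max=100.0, tol=0.5, max_iter=20):
          """Invert a counts->EU calibration polynomial by Newton-Raphson.

          coeffs: polynomial coefficients mapping counts to EU,
                  highest order first (up to sixth order).
          """
          p = np.poly1d(coeffs)
          dp = p.deriv()
          # Reverse linear interpolation between the EU limits
          # supplies the initial guess for the count value.
          c = count_min + (eu - eu_min) / (eu_max - eu_min) \
              * (count_max - count_min)
          for _ in range(max_iter):
              step = (p(c) - eu) / dp(c)
              c -= step
              if abs(step) < tol:          # within one count
                  break
          return int(round(min(max(c, count_min), count_max)))

      # Example: nearly linear calibration with a small quadratic term
      print(eu_to_counts(12.3, coeffs=[1e-5, 0.195, -100.0]))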

  9. Accuracy and efficiency of published film dosimetry techniques using a flat-bed scanner and EBT3 film.

    PubMed

    Spelleken, E; Crowe, S B; Sutherland, B; Challens, C; Kairn, T

    2018-03-01

    Gafchromic EBT3 film is widely used for patient-specific quality assurance of complex treatment plans. Film dosimetry techniques commonly involve the use of transmission scanning to produce TIFF files, which are analysed using a non-linear calibration relationship between the dose and the red-channel net optical density (netOD). Numerous film calibration techniques featured in the literature have not been independently verified or evaluated. A range of previously published film dosimetry techniques were re-evaluated, to identify whether these methods produce better results than the commonly used non-linear netOD method. EBT3 film was irradiated at calibration doses between 0 and 4000 cGy, and 25 pieces of film were irradiated at 200 cGy to evaluate uniformity. The film was scanned using two different scanners: the Epson Perfection V800 and the Epson Expression 10000XL. Calibration curves, uncertainty in the fit of the curve, overall uncertainty and uniformity were calculated following the methods described by the different calibration techniques. It was found that protocols based on a conventional film dosimetry technique produced results that were accurate and uniform to within 1%, while some of the unconventional techniques produced much higher uncertainties (> 25% for some techniques). Some of the uncommon methods produced reliable results when irradiated to standard treatment doses (< 400 cGy); however, none could be recommended as an efficient or accurate replacement for a common film analysis technique that uses transmission scanning, red colour channel analysis, netOD and a non-linear calibration curve for measuring doses up to 4000 cGy with EBT3 film.
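
    For context, the conventional approach referred to above can be sketched as follows (a minimal version with invented calibration numbers; one widely used functional form for the netOD-to-dose relationship is a linear term plus a power term):

      import numpy as np
      from scipy.optimize import curve_fit

      def net_od(pv_exposed, pv_unexposed):
          """Red-channel net optical density from scanner pixel values."""
          return np.log10(pv_unexposed / pv_exposed)

      def dose_model(nod, a, b, n):
          """Non-linear calibration: dose = a*netOD + b*netOD**n."""
          return a * nod + b * nod**n

      # Hypothetical calibration points (netOD, dose in cGy)
      nod = np.array([0.02, 0.05, 0.10, 0.18, 0.28, 0.40])
      dose = np.array([50.0, 150.0, 400.0, 900.0, 1900.0, 4000.0])
      popt, _ = curve_fit(dose_model, nod, dose, p0=(2e3, 3e4, 2.5))
      print(popt)   # fitted (a, b, n) for the dose conversion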

  10. Use of Numerical Groundwater Model and Analytical Empirical Orthogonal Function for Calibrating Spatiotemporal pattern of Pumpage, Recharge and Parameter

    NASA Astrophysics Data System (ADS)

    Huang, C. L.; Hsu, N. S.; Hsu, F. C.; Liu, H. J.

    2016-12-01

    This study develops a novel methodology for spatiotemporal groundwater calibration of mega-quantitative recharge and parameters by coupling a specialized numerical model with an analytical empirical orthogonal function (EOF). The actual spatiotemporal patterns of groundwater pumpage are estimated by an originally developed back-propagation neural network-based response matrix with electrical consumption analysis. The spatiotemporal patterns of the recharge from surface water and the hydrogeological parameters (i.e. horizontal hydraulic conductivity and vertical leakance) are calibrated by EOF applied to the simulated error hydrograph of groundwater storage, in order to identify the multiple error sources and quantify the revised volume. The objective function of the optimization model minimizes the root mean square error of the simulated storage error percentage across multiple aquifers, subject to mass balance of the groundwater budget and the governing equation in the transient state. The established method was applied to the groundwater system of the Chou-Shui River Alluvial Fan. The simulated period is from January 2012 to December 2014. The total numbers of hydraulic conductivity, vertical leakance and surface-water recharge values among the four aquifers are 126, 96 and 1080, respectively. Results showed that the RMSE decreased dramatically during the calibration process and converged within six iterations, owing to efficient filtering of the errors induced by the estimated recharge across the boundary. Moreover, the average simulated error percentage of the groundwater level corresponding to the calibrated budget variables and parameters of aquifer one is as small as 0.11%. This demonstrates that the developed methodology not only can effectively detect the flow tendency and error sources in all aquifers to achieve accurate spatiotemporal calibration, but can also capture the peaks and fluctuations of the groundwater level in the shallow aquifer.
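
    The EOF step at the heart of this approach can be sketched with a singular value decomposition (illustrative; the variable names and shapes are assumptions):

      import numpy as np

      def eof_decompose(err, n_modes=3):
          """EOF analysis of simulated storage-error hydrographs.

          err: (n_times, n_zones) matrix of storage-error percentages.
          Returns the leading spatial patterns, their time coefficients,
          and the fraction of variance each mode explains.
          """
          anom = err - err.mean(axis=0)
          U, s, Vt = np.linalg.svd(anom, full_matrices=False)
          eofs = Vt[:n_modes]                    # spatial patterns
          pcs = U[:, :n_modes] * s[:n_modes]     # time coefficients
          explained = s[:n_modes]**2 / np.sum(s**2)
          return eofs, pcs, explained

    The dominant modes indicate where (spatially) and when (temporally) the recharge or parameter fields are most in error, which is what makes the iterative adjustment converge quickly.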

  11. Hierarchical calibration and validation framework of bench-scale computational fluid dynamics simulations for solvent-based carbon capture. Part 2: Chemical absorption across a wetted wall column

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Chao; Xu, Zhijie; Lai, Kevin

    The first part of this paper (Part 1) presents a numerical model for non-reactive physical mass transfer across a wetted wall column (WWC). In Part 2, we improved the existing computational fluid dynamics (CFD) model to simulate chemical absorption occurring in a WWC as a bench-scale study of solvent-based carbon dioxide (CO2) capture. To generate data for WWC model validation, CO2 mass transfer across a monoethanolamine (MEA) solvent was first measured on a WWC experimental apparatus. The numerical model developed in this work has the ability to account for both chemical absorption and desorption of CO2 in MEA. In addition, the overall mass transfer coefficient predicted using traditional/empirical correlations is compared with CFD predictions for both steady and wavy falling films. A Bayesian statistical calibration algorithm is adopted to calibrate the reaction rate constants in chemical absorption/desorption of CO2 across a falling film of MEA. The posterior distributions of the two transport properties, i.e., Henry's constant and gas diffusivity in the non-reacting nitrous oxide (N2O)/MEA system, obtained from Part 1 of this study, serve as priors for the calibration of the CO2 reaction rate constants after using the N2O/CO2 analogy method. The calibrated model can be used to predict CO2 mass transfer in a WWC for a wider range of operating conditions.

  12. Hierarchical calibration and validation framework of bench-scale computational fluid dynamics simulations for solvent-based carbon capture. Part 2: Chemical absorption across a wetted wall column

    DOE PAGES

    Wang, Chao; Xu, Zhijie; Lai, Kevin; ...

    2017-10-24

    Part 1 of this paper presents a numerical model for non-reactive physical mass transfer across a wetted wall column (WWC). In Part 2, we improved the existing computational fluid dynamics (CFD) model to simulate chemical absorption occurring in a WWC as a bench-scale study of solvent-based carbon dioxide (CO2) capture. In this study, to generate data for WWC model validation, CO2 mass transfer across a monoethanolamine (MEA) solvent was first measured on a WWC experimental apparatus. The numerical model developed in this work can account for both chemical absorption and desorption of CO2 in MEA. In addition, the overall mass transfer coefficient predicted using traditional/empirical correlations is compared with CFD predictions for both steady and wavy falling films. A Bayesian statistical calibration algorithm is adopted to calibrate the reaction rate constants in chemical absorption/desorption of CO2 across a falling film of MEA. The posterior distributions of the two transport properties, i.e., Henry's constant and gas diffusivity in the non-reacting nitrous oxide (N2O)/MEA system, obtained from Part 1 of this study, serve as priors for the calibration of the CO2 reaction rate constants after using the N2O/CO2 analogy method. Finally, the calibrated model can be used to predict CO2 mass transfer in a WWC for a wider range of operating conditions.
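
    The Bayesian calibration step can be illustrated with a generic random-walk Metropolis sampler (a sketch, not the study's code; the log-posterior would combine the Part 1 priors with the misfit between simulated and measured WWC mass transfer):

      import numpy as np

      def metropolis(log_post, theta0, steps=20_000, prop_sd=0.05):
          """Random-walk Metropolis sampling of a posterior density."""
          rng = np.random.default_rng(3)
          theta = np.asarray(theta0, dtype=float)
          lp = log_post(theta)
          chain = []
          for _ in range(steps):
              prop = theta + rng.normal(0.0, prop_sd, theta.size)
              lp_prop = log_post(prop)
              if np.log(rng.random()) < lp_prop - lp:   # accept/reject
                  theta, lp = prop, lp_prop
              chain.append(theta.copy())
          return np.array(chain)   # samples of the rate constants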

  13. The calibration and flight test performance of the space shuttle orbiter air data system

    NASA Technical Reports Server (NTRS)

    Dean, A. S.; Mena, A. L.

    1983-01-01

    The Space Shuttle air data system (ADS) is used by the guidance, navigation and control system (GN&C) to guide the vehicle to a safe landing. In addition, postflight aerodynamic analysis requires a precise knowledge of flight conditions. Since the orbiter is essentially an unpowered vehicle, the conventional methods of obtaining the ADS calibration were not available; therefore, the calibration was derived using a unique and extensive wind tunnel test program. This test program included subsonic tests with a 0.36-scale orbiter model, transonic and supersonic tests with a smaller 0.2-scale model, and numerous ADS probe-alone tests. The wind tunnel calibration was further refined with subsonic results from the approach and landing test (ALT) program, thus producing the ADS calibration for the orbital flight test (OFT) program. The calibration of the Space Shuttle ADS and its performance during flight are discussed in this paper. A brief description of the system is followed by a discussion of the calibration methodology, and then by a review of the wind tunnel and flight test programs. Finally, the flight results are presented, including an evaluation of the system performance for on-board systems use and a description of the calibration refinements developed to provide the best possible air data for postflight analysis work.

  14. The MeqTrees software system and its use for third-generation calibration of radio interferometers

    NASA Astrophysics Data System (ADS)

    Noordam, J. E.; Smirnov, O. M.

    2010-12-01

    Context. The formulation of the radio interferometer measurement equation (RIME) for a generic radio telescope by Hamaker et al. has provided us with an elegant mathematical apparatus for better understanding, simulation and calibration of existing and future instruments. The calibration of the new radio telescopes (LOFAR, SKA) would be unthinkable without the RIME formalism, and new software to exploit it. Aims: The MeqTrees software system is designed to implement numerical models, and to solve for arbitrary subsets of their parameters. It may be applied to many problems, but was originally geared towards implementing Measurement Equations in radio astronomy for the purposes of simulation and calibration. The technical goal of MeqTrees is to provide a tool for rapid implementation of such models, while offering performance comparable to hand-written code. We are also pursuing the wider goal of increasing the rate of evolution of radio astronomical software, by offering a tool that facilitates rapid experimentation, and exchange of ideas (and scripts). Methods: MeqTrees is implemented as a Python-based front-end called the meqbrowser, and an efficient (C++-based) computational back-end called the meqserver. Numerical models are defined on the front-end via a Python-based Tree Definition Language (TDL), then rapidly executed on the back-end. The use of TDL facilitates an extremely short turn-around time (hours rather than weeks or months) for experimentation with new ideas. This is also helped by unprecedented visualization capabilities for all final and intermediate results. A flexible data model and a number of important optimizations in the back-end ensures that the numerical performance is comparable to that of hand-written code. Results: MeqTrees is already widely used as the simulation tool for new instruments (LOFAR, SKA) and technologies (focal plane arrays). It has demonstrated that it can achieve a noise-limited dynamic range in excess of a million, on WSRT data. It is the only package that is specifically designed to handle what we propose to call third-generation calibration (3GC), which is needed for the new generation of giant radio telescopes, but can also improve the calibration of existing instruments.

  15. Optimal Design of Calibration Signals in Space-Borne Gravitational Wave Detectors

    NASA Technical Reports Server (NTRS)

    Nofrarias, Miquel; Karnesis, Nikolaos; Gibert, Ferran; Armano, Michele; Audley, Heather; Danzmann, Karsten; Diepholz, Ingo; Dolesi, Rita; Ferraioli, Luigi; Ferroni, Valerio; hide

    2016-01-01

    Future space borne gravitational wave detectors will require a precise definition of calibration signals to ensure the achievement of their design sensitivity. The careful design of the test signals plays a key role in the correct understanding and characterisation of these instruments. In that sense, methods achieving optimal experiment designs must be considered as complementary to the parameter estimation methods being used to determine the parameters describing the system. The relevance of experiment design is particularly significant for the LISA Pathfinder mission, which will spend most of its operation time performing experiments to characterize key technologies for future space borne gravitational wave observatories. Here we propose a framework to derive the optimal signals in terms of minimum parameter uncertainty to be injected to these instruments during its calibration phase. We compare our results with an alternative numerical algorithm which achieves an optimal input signal by iteratively improving an initial guess. We show agreement of both approaches when applied to the LISA Pathfinder case.
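
    A toy illustration of the optimal-design idea (not the LISA Pathfinder model; all values are invented): candidate injection signals are ranked by a scalar function of the Fisher information, so that the chosen signal minimizes the predicted parameter uncertainty.

      import numpy as np

      def fim(freqs, k, c, m=1.0, sigma=1e-3):
          """Fisher information for (k, c) from noisy frequency-response
          data of a toy second-order system H = 1/(k - m*w**2 + i*c*w)."""
          w = np.asarray(freqs, dtype=float)
          H = 1.0 / (k - m * w**2 + 1j * c * w)
          dH = np.stack([-H**2, -1j * w * H**2])   # dH/dk, dH/dc
          J = np.empty((2, 2))
          for i in range(2):
              for j in range(2):
                  J[i, j] = np.sum(np.real(dH[i] * np.conj(dH[j]))) / sigma**2
          return J

      # A-optimal choice among candidate two-tone injection signals
      k_true, c_true = 4.0, 0.1
      cands = np.linspace(0.5, 4.0, 50)
      cost = [np.trace(np.linalg.inv(fim([w, 1.1 * w], k_true, c_true)))
              for w in cands]
      print("best base frequency:", cands[int(np.argmin(cost))])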

  16. Optimal Design of Calibration Signals in Space Borne Gravitational Wave Detectors

    NASA Technical Reports Server (NTRS)

    Nofrarias, Miquel; Karnesis, Nikolaos; Gibert, Ferran; Armano, Michele; Audley, Heather; Danzmann, Karsten; Diepholz, Ingo; Dolesi, Rita; Ferraioli, Luigi; Thorpe, James I.

    2014-01-01

    Future space borne gravitational wave detectors will require a precise definition of calibration signals to ensure the achievement of their design sensitivity. The careful design of the test signals plays a key role in the correct understanding and characterization of these instruments. In that sense, methods achieving optimal experiment designs must be considered as complementary to the parameter estimation methods being used to determine the parameters describing the system. The relevance of experiment design is particularly significant for the LISA Pathfinder mission, which will spend most of its operation time performing experiments to characterize key technologies for future space borne gravitational wave observatories. Here we propose a framework to derive the optimal signals in terms of minimum parameter uncertainty to be injected to these instruments during its calibration phase. We compare our results with an alternative numerical algorithm which achieves an optimal input signal by iteratively improving an initial guess. We show agreement of both approaches when applied to the LISA Pathfinder case.

  17. Calibration of 4π NaI(Tl) detectors with coincidence summing correction using new numerical procedure and ANGLE4 software

    NASA Astrophysics Data System (ADS)

    Badawi, Mohamed S.; Jovanovic, Slobodan I.; Thabet, Abouzeid A.; El-Khatib, Ahmed M.; Dlabac, Aleksandar D.; Salem, Bohaysa A.; Gouda, Mona M.; Mihaljevic, Nikola N.; Almugren, Kholud S.; Abbas, Mahmoud I.

    2017-03-01

    4π NaI(Tl) γ-ray detectors consist of a well cavity with a cylindrical cross section and an enclosing measurement geometry with a large detection angle. This leads to an exceptionally high efficiency and a significant coincidence summing effect, much larger than for a single cylindrical or coaxial detector, especially in very low activity measurements. In the present work, the effective solid angle of detection, as well as the full-energy peak and total efficiencies of well-type detectors, were calculated by a new numerical simulation method (NSM) and by the ANGLE4 software. To obtain the coincidence summing correction factors with these methods, the coincident emission of photons was modeled mathematically, based on analytical equations and complex integrations over volumetric radioactive sources, including the self-attenuation factor. The full-energy peak efficiencies and correction factors were measured using 152Eu; an exact adjustment of the detector efficiency curve is required, because neglecting the coincidence summing effect makes the results internally inconsistent. These effects arise jointly from the efficiency calibration process and the coincidence summing corrections. The full-energy peak and total efficiencies from the two methods agree to within a discrepancy of 10%. The discrepancy between the simulation, ANGLE4 and the measured full-energy peak efficiencies, after correction for the coincidence summing effect, did not exceed 14% on average. Therefore, this technique can be readily applied in establishing the efficiency calibration curves of well-type detectors.

  18. Parameter Estimation of Computationally Expensive Watershed Models Through Efficient Multi-objective Optimization and Interactive Decision Analytics

    NASA Astrophysics Data System (ADS)

    Akhtar, Taimoor; Shoemaker, Christine

    2016-04-01

    Watershed model calibration is inherently a multi-criteria problem. Conflicting trade-offs exist between different quantifiable calibration criteria, indicating the non-existence of a single optimal parameterization. Hence, many experts prefer a manual approach to calibration where the inherent multi-objective nature of the calibration problem is addressed through an interactive, subjective, time-intensive and complex decision-making process. Multi-objective optimization can be used to efficiently identify multiple plausible calibration alternatives and assist calibration experts during the parameter estimation process. However, there are key challenges to the use of multi-objective optimization in the parameter estimation process, which include: 1) multi-objective optimization usually requires many model simulations, which is difficult for complex simulation models that are computationally expensive; and 2) selection of one from the numerous calibration alternatives provided by multi-objective optimization is non-trivial. This study proposes a "Hybrid Automatic Manual Strategy" (HAMS) for watershed model calibration to specifically address the above-mentioned challenges. HAMS employs a 3-stage framework for parameter estimation. Stage 1 incorporates the use of an efficient surrogate multi-objective algorithm, GOMORS, for identification of numerous calibration alternatives within a limited simulation evaluation budget. The novelty of HAMS is embedded in Stages 2 and 3, where an interactive visual and metric-based analytics framework is available as a decision support tool to choose a single calibration from the numerous alternatives identified in Stage 1 (see the sketch after this abstract). Stage 2 of HAMS provides a goodness-of-fit metric-based interactive framework for identification of a small subset (typically fewer than 10) of meaningful and diverse calibration alternatives from the numerous alternatives obtained in Stage 1. Stage 3 incorporates the use of an interactive visual analytics framework for decision support in the selection of one parameter combination from the alternatives identified in Stage 2. HAMS is applied for calibration of the flow parameters of a SWAT model (Soil and Water Assessment Tool) designed to simulate flow in the Cannonsville watershed in upstate New York. Results from the application of HAMS to Cannonsville indicate that efficient multi-objective optimization and interactive visual and metric-based analytics can bridge the gap between the effective use of both automatic and manual strategies for parameter estimation of computationally expensive watershed models.
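
    The filtering of calibration alternatives in Stages 1-2 rests on Pareto dominance, which can be sketched compactly (the objective values are invented; in practice each row would come from a SWAT run):

      import numpy as np

      def pareto_front(F):
          """Indices of non-dominated rows of F (objectives minimized)."""
          F = np.asarray(F, dtype=float)
          keep = np.ones(len(F), dtype=bool)
          for f in F:
              dominated = np.all(F >= f, axis=1) & np.any(F > f, axis=1)
              keep &= ~dominated
          return np.where(keep)[0]

      # Each row: (1 - NSE, |percent bias|) for one parameter set
      F = np.array([[0.20, 5.0], [0.15, 9.0], [0.30, 2.0], [0.25, 6.0]])
      print(pareto_front(F))   # -> [0 1 2]; the last set is dominated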

  19. Dark Energy Survey Year 1 results: cross-correlation redshifts - methods and systematics characterization

    NASA Astrophysics Data System (ADS)

    Gatti, M.; Vielzeuf, P.; Davis, C.; Cawthon, R.; Rau, M. M.; DeRose, J.; De Vicente, J.; Alarcon, A.; Rozo, E.; Gaztanaga, E.; Hoyle, B.; Miquel, R.; Bernstein, G. M.; Bonnett, C.; Carnero Rosell, A.; Castander, F. J.; Chang, C.; da Costa, L. N.; Gruen, D.; Gschwend, J.; Hartley, W. G.; Lin, H.; MacCrann, N.; Maia, M. A. G.; Ogando, R. L. C.; Roodman, A.; Sevilla-Noarbe, I.; Troxel, M. A.; Wechsler, R. H.; Asorey, J.; Davis, T. M.; Glazebrook, K.; Hinton, S. R.; Lewis, G.; Lidman, C.; Macaulay, E.; Möller, A.; O'Neill, C. R.; Sommer, N. E.; Uddin, S. A.; Yuan, F.; Zhang, B.; Abbott, T. M. C.; Allam, S.; Annis, J.; Bechtol, K.; Brooks, D.; Burke, D. L.; Carollo, D.; Carrasco Kind, M.; Carretero, J.; Cunha, C. E.; D'Andrea, C. B.; DePoy, D. L.; Desai, S.; Eifler, T. F.; Evrard, A. E.; Flaugher, B.; Fosalba, P.; Frieman, J.; García-Bellido, J.; Gerdes, D. W.; Goldstein, D. A.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; Hoormann, J. K.; Jain, B.; James, D. J.; Jarvis, M.; Jeltema, T.; Johnson, M. W. G.; Johnson, M. D.; Krause, E.; Kuehn, K.; Kuhlmann, S.; Kuropatkin, N.; Li, T. S.; Lima, M.; Marshall, J. L.; Melchior, P.; Menanteau, F.; Nichol, R. C.; Nord, B.; Plazas, A. A.; Reil, K.; Rykoff, E. S.; Sako, M.; Sanchez, E.; Scarpine, V.; Schubnell, M.; Sheldon, E.; Smith, M.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Tucker, B. E.; Tucker, D. L.; Vikram, V.; Walker, A. R.; Weller, J.; Wester, W.; Wolf, R. C.

    2018-06-01

    We use numerical simulations to characterize the performance of a clustering-based method to calibrate photometric redshift biases. In particular, we cross-correlate the weak lensing source galaxies from the Dark Energy Survey Year 1 sample with redMaGiC galaxies (luminous red galaxies with secure photometric redshifts) to estimate the redshift distribution of the former sample. The recovered redshift distributions are used to calibrate the photometric redshift bias of standard photo-z methods applied to the same source galaxy sample. We apply the method to two photo-z codes run in our simulated data: Bayesian Photometric Redshift and Directional Neighbourhood Fitting. We characterize the systematic uncertainties of our calibration procedure, and find that these systematic uncertainties dominate our error budget. The dominant systematics are due to our assumption of unevolving bias and clustering across each redshift bin, and to differences between the shapes of the redshift distributions derived by clustering versus photo-zs. The systematic uncertainty in the mean redshift bias of the source galaxy sample is Δz ≲ 0.02, though the precise value depends on the redshift bin under consideration. We discuss possible ways to mitigate the impact of our dominant systematics in future analyses.

  1. Polymers for Traveling Wave Ion Mobility Spectrometry Calibration

    NASA Astrophysics Data System (ADS)

    Duez, Quentin; Chirot, Fabien; Liénard, Romain; Josse, Thomas; Choi, ChangMin; Coulembier, Olivier; Dugourd, Philippe; Cornil, Jérôme; Gerbaux, Pascal; De Winter, Julien

    2017-07-01

    One of the main issues when using traveling wave ion mobility spectrometry (TWIMS) for the determination of collision cross-sections (CCS) concerns the need for a robust calibration procedure built from reference ions of known CCS. Here, we implement synthetic polymer ions as CCS calibrants in positive ion mode. Based on their intrinsic polydispersity, polymers offer in a single sample the opportunity to generate, upon electrospray ionization, numerous ions covering a broad mass range and a large CCS window for several charge states at a time. In addition, the key advantage of polymer ions as CCS calibrants lies in the robustness of their gas-phase structure with respect to the instrumental conditions, making them less prone to collision-induced unfolding (CIU) than protein ions. In this paper, we present a CCS calibration procedure using sodium-cationized polylactide and polyethylene glycol, PLA and PEG, as calibrants, with reference CCS determined on a home-made drift tube. Our calibration procedure is further validated by using the polymer calibration to determine the CCS of numerous different ions for which CCS are reported in the literature.
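
    TWIMS calibrations of this kind are commonly reduced to a power-law fit between corrected drift time and reference CCS; a sketch with invented numbers (the charge-state and reduced-mass corrections applied in practice are omitted for brevity):

      import numpy as np

      # Corrected drift times (ms) and drift-tube reference CCS (A^2)
      # for the calibrant ions -- hypothetical values.
      td = np.array([2.1, 3.4, 4.8, 6.5, 8.9])
      ccs = np.array([230.0, 310.0, 385.0, 470.0, 560.0])

      # Fit CCS = A * td**B in log-log space
      B, lnA = np.polyfit(np.log(td), np.log(ccs), 1)
      A = np.exp(lnA)

      def ccs_from_drift(td_unknown):
          """Apply the calibration to an analyte's corrected drift time."""
          return A * td_unknown**B

      print(ccs_from_drift(5.5))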

  2. Experimental and numerical study of a 10MW TLP wind turbine in waves and wind

    NASA Astrophysics Data System (ADS)

    Pegalajar-Jurado, Antonio; Hansen, Anders M.; Laugesen, Robert; Mikkelsen, Robert F.; Borg, Michael; Kim, Taeseong; Heilskov, Nicolai F.; Bredmose, Henrik

    2016-09-01

    This paper presents tests on a 1:60 version of the DTU 10MW wind turbine mounted on a tension leg platform and their numerical reproduction. Both the experimental setup and the numerical model are Froude-scaled, and the dynamic response of the floating wind turbine to wind and waves is compared in terms of motion in the six degrees of freedom, nacelle acceleration and mooring line tension. The numerical model is implemented in the aero-elastic code Flex5, featuring the unsteady BEM method and the Morison equation for the modelling of aerodynamics and hydrodynamics, respectively. It was calibrated with the tests by matching key system features, namely the steady thrust curve and the decay tests in water. The calibrated model is used to reproduce the wind-wave climates in the laboratory, including regular and irregular waves, with and without wind. The model predictions are compared to the measured data, and a good agreement is found for surge and heave, while some discrepancies are observed for pitch, nacelle acceleration and line tension. The addition of wind generally improves the agreement with test results. The aerodynamic damping is identified in both tests and simulations. Finally, the sources of the discrepancies are discussed and some improvements in the numerical model are suggested in order to obtain a better agreement with the experiments.
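
    Because both the physical setup and the numerical model are Froude-scaled, quantities convert between model and full scale with fixed factors; a short sketch for the 1:60 ratio used here (the thrust value is a made-up placeholder):

      # Froude similarity for geometric scale lambda (equal fluid density):
      # velocity and time scale with sqrt(lambda), force and mass with
      # lambda**3, power with lambda**3.5.
      lam = 60.0
      scale = {
          "length":   lam,
          "time":     lam**0.5,
          "velocity": lam**0.5,
          "force":    lam**3,
          "mass":     lam**3,
          "power":    lam**3.5,
      }

      full_scale_thrust = 1.5e6                      # N (hypothetical)
      print(full_scale_thrust / scale["force"])      # ~6.9 N at model scale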

  3. Analysis of drift correction in different simulated weighing schemes

    NASA Astrophysics Data System (ADS)

    Beatrici, A.; Rebelo, A.; Quintão, D.; Cacais, F. L.; Loayza, V. M.

    2015-10-01

    In the calibration of high-accuracy mass standards, weighing schemes are used to reduce or eliminate the effects of zero drift in mass comparators. There are different sources of drift and different methods for its treatment. Using numerical methods, drift functions were simulated and a random term was included in each function. A comparison between the results obtained from ABABAB and ABBA weighing series was carried out. The results show that the ABABAB method is more effective for drifts with smooth variation and small randomness.
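
    A simulation of this kind is easy to set up; the sketch below (the estimator and the drift shape are illustrative choices, not the authors') estimates the mass difference A - B by least squares with a linear drift term and compares the residual error of the two schemes.

      import numpy as np

      rng = np.random.default_rng(42)

      def estimate_diff(sequence, readings):
          """Estimate A - B jointly with a linear drift term."""
          t = np.arange(len(sequence), dtype=float)
          isA = np.array([s == "A" for s in sequence], dtype=float)
          X = np.column_stack([isA, 1.0 - isA, t])
          coef, *_ = np.linalg.lstsq(X, readings, rcond=None)
          return coef[0] - coef[1]

      def scheme_error(sequence, diff_true=0.010, noise_sd=0.002):
          t = np.arange(len(sequence), dtype=float)
          drift = 0.02 * t + 0.003 * np.sin(0.8 * t)   # smooth zero drift
          mass = np.where([s == "A" for s in sequence], diff_true, 0.0)
          readings = mass + drift + rng.normal(0.0, noise_sd, len(sequence))
          return estimate_diff(sequence, readings) - diff_true

      for seq in ("ABBA", "ABABAB"):
          errs = [scheme_error(seq) for _ in range(10_000)]
          print(seq, f"bias={np.mean(errs):.2e}", f"sd={np.std(errs):.2e}")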

  4. Calibration and combination of dynamical seasonal forecasts to enhance the value of predicted probabilities for managing risk

    NASA Astrophysics Data System (ADS)

    Dutton, John A.; James, Richard P.; Ross, Jeremy D.

    2013-06-01

    Seasonal probability forecasts produced with numerical dynamics on supercomputers offer great potential value in managing risk and opportunity created by seasonal variability. The skill and reliability of contemporary forecast systems can be increased by calibration methods that use the historical performance of the forecast system to improve the ongoing real-time forecasts. Two calibration methods are applied to seasonal surface temperature forecasts of the US National Weather Service and the European Centre for Medium-Range Weather Forecasts, and to a World Climate Service multi-model ensemble created by combining those two forecasts with Bayesian methods. As expected, the multi-model is somewhat more skillful and more reliable than the original models taken alone. The potential value of the multi-model in decision making is illustrated with the profits achieved in simulated trading of a weather derivative. In addition to examining the seasonal models, the article demonstrates that calibrated probability forecasts of weekly average temperatures for leads of 2-4 weeks are also skillful and reliable. The conversion of ensemble forecasts into probability distributions of impact variables is illustrated with degree days derived from the temperature forecasts. Some issues related to loss of stationarity owing to long-term warming are considered. The main conclusion of the article is that properly calibrated probabilistic forecasts possess sufficient skill and reliability to contribute to effective decisions in government and business activities that are sensitive to intraseasonal and seasonal climate variability.

  5. The Impact of Three Factors on the Recovery of Item Parameters for the Three-Parameter Logistic Model

    ERIC Educational Resources Information Center

    Kim, Kyung Yong; Lee, Won-Chan

    2017-01-01

    This article provides a detailed description of three factors (specification of the ability distribution, numerical integration, and frame of reference for the item parameter estimates) that might affect the item parameter estimation of the three-parameter logistic model, and compares five item calibration methods, which are combinations of the…

  6. An integrated approach to monitoring the calibration stability of operational dual-polarization radars

    DOE PAGES

    Vaccarono, Mattia; Bechini, Renzo; Chandrasekar, Chandra V.; ...

    2016-11-08

    The stability of weather radar calibration is a mandatory aspect for quantitative applications, such as rainfall estimation, short-term weather prediction and the initialization of numerical atmospheric and hydrological models. Over the years, calibration monitoring techniques based on external sources have been developed, specifically calibration using the Sun and calibration based on ground clutter returns. In this paper, these two techniques are integrated and complemented with a self-consistency procedure and an intercalibration technique. The aim of the integrated approach is to implement a robust method for online monitoring, able to detect significant changes in the radar calibration. The physical consistency of polarimetric radar observables is exploited using the self-consistency approach, based on the expected correspondence between dual-polarization power and phase measurements in rain. This technique allows a reference absolute value to be provided for the radar calibration, from which eventual deviations may be detected using the other procedures. In particular, the ground clutter calibration is implemented on both polarization channels (horizontal and vertical) for each radar scan, allowing the polarimetric variables to be monitored and hardware failures to be promptly recognized. The Sun calibration allows monitoring of the calibration and sensitivity of the radar receiver, in addition to the antenna pointing accuracy. It is applied using observations collected during the standard operational scans but requires long integration times (several days) in order to accumulate a sufficient amount of useful data. Finally, an intercalibration technique is developed and performed to compare colocated measurements collected in rain by two radars in overlapping regions. The integrated approach was applied to the C-band weather radar network in northwestern Italy during July–October 2014. The set of methods considered appears suitable for establishing an online tool to monitor the stability of the radar calibration with an accuracy of about 2 dB, which is considered adequate to automatically detect any unexpected change in the radar system requiring further data analysis or on-site measurements.

  7. Calibrating Laser Gas Measurements by Use of Natural CO2

    NASA Technical Reports Server (NTRS)

    Webster, Chris

    2003-01-01

    An improved method of calibration has been devised for instruments that utilize tunable lasers to measure the absorption spectra of atmospheric gases in order to determine the relative abundances of the gases. In this method, CO2 in the atmosphere is used as a natural calibration standard. Unlike in one prior calibration method, it is not necessary to perform calibration measurements in advance of use of the instrument and to risk deterioration of accuracy with time during use. Unlike in another prior calibration method, it is not necessary to include a calibration gas standard (and the attendant additional hardware) in the instrument and to interrupt the acquisition of atmospheric data to perform calibration measurements. In the operation of an instrument of this type, the beam from a tunable diode laser or a tunable quantum-cascade laser is directed along a path through the atmosphere, the laser is made to scan in wavelength over an infrared spectral region that contains one or two absorption spectral lines of a gas of interest, and the transmission (and, thereby, the absorption) of the beam is measured. The concentration of the gas of interest can then be calculated from the observed depth of the absorption line(s), given the temperature, pressure, and path length. CO2 is nearly ideal as a natural calibration gas for the following reasons: CO2 has numerous rotation/vibration infrared spectral lines, many of which are near absorption lines of other gases. The concentration of CO2 relative to the concentrations of the major constituents of the atmosphere is well known and varies slowly and by a small enough amount to be considered constant for calibration in the present context. Hence, absorption-spectral measurements of the concentrations of gases of interest can be normalized to the concentrations of CO2. Because at least one CO2 calibration line is present in every spectral scan of the laser during absorption measurements, the atmospheric CO2 serves continuously as a calibration standard for every measurement point. Figure 1 depicts simulated spectral transmission measurements in a wavenumber range that contains two absorption lines of N2O and one of CO2. The simulations were performed for two different upper-atmospheric pressures for an airborne instrument that has a path length of 80 m. The relative abundance of CO2 in air was assumed to be 360 parts per million by volume (approximately its natural level in terrestrial air). In applying the present method to measurements like these, one could average the signals from the two N2O absorption lines and normalize their magnitudes to that of the CO2 absorption line. Other gases with which this calibration method can be used include H2O, CH4, CO, NO, NO2, HOCl, C2H2, NH3, O3, and HCN. One can also take advantage of this method to eliminate an atmospheric-pressure gauge and thereby reduce the mass of the instrument: The atmospheric pressure can be calculated from the temperature, the known relative abundance of CO2, and the concentration of CO2 as measured by spectral absorption. Natural CO2 levels on Mars provide an ideal calibration standard. Figure 2 shows a second example of the application of this method to Mars atmospheric gas measurements. For sticky gases like H2O, the method is particularly powerful, since water is notoriously difficult to handle at low concentrations in pre-flight calibration procedures.
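
    The normalization at the heart of this method is a simple ratio, since path length and air number density cancel when a target-gas line and a CO2 line come from the same laser scan; a sketch under the weak-absorption assumption (the line strengths and absorbances are invented placeholder values):

      import numpy as np

      def mixing_ratio(depth_gas, s_gas, depth_co2, s_co2, x_co2=360e-6):
          """Target-gas mixing ratio from integrated line absorbances.

          depth_*: integrated line absorbances from the same laser scan
          s_*:     line strengths at the measured temperature
          x_co2:   assumed CO2 mixing ratio (the natural standard)
          """
          return x_co2 * (depth_gas / s_gas) / (depth_co2 / s_co2)

      # Hypothetical N2O retrieval: average the two N2O lines, then
      # normalize to the CO2 line present in the same spectral scan.
      a_n2o = np.mean([0.012, 0.011])
      print(mixing_ratio(a_n2o, s_gas=2.0e-19,
                         depth_co2=0.045, s_co2=1.5e-21))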

  8. Pareto optimal calibration of highly nonlinear reactive transport groundwater models using particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Siade, A. J.; Prommer, H.; Welter, D.

    2014-12-01

Groundwater management and remediation requires the implementation of numerical models in order to evaluate the potential anthropogenic impacts on aquifer systems. In many situations, the numerical model must be able to simulate not only groundwater flow and transport but also geochemical and biological processes. Each process being simulated carries with it a set of parameters that must be identified, along with differing potential sources of model-structure error. Various data types are often collected in the field and then used to calibrate the numerical model; however, these data types can represent very different processes and can subsequently be sensitive to the model parameters in extremely complex ways. Therefore, developing an appropriate weighting strategy to address the contributions of each data type to the overall least-squares objective function is not straightforward. This is further compounded by the presence of potential sources of model-structure errors that manifest themselves differently for each observation data type. Finally, reactive transport models are highly nonlinear, which can lead to convergence failure for algorithms operating on the assumption of local linearity. In this study, we propose a variation of the popular particle swarm optimization algorithm to address trade-offs associated with the calibration of one data type over another. This method removes the need to specify weights between observation groups and instead produces a multi-dimensional Pareto front that illustrates the trade-offs between data types. We use the PEST++ run manager, along with the standard PEST input/output structure, to implement parallel programming across multiple desktop computers using TCP/IP communications. This allows for very large swarms of particles without the need of a supercomputing facility. The method was applied to a case study in which modeling was used to gain insight into the mobilization of arsenic at a deep-well injection site. Multiple data types (e.g., hydrochemical, geophysical, tracer, temperature, etc.) were collected prior to and during an injection trial. Visualizing the trade-off between the calibration of each data type has provided the means of identifying some model-structure deficiencies.
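
    The PSO variant itself is not reproduced in the abstract; as a minimal sketch of the end product, the function below extracts the non-dominated (Pareto-optimal) subset from a set of per-data-type objective values, assuming every objective is to be minimized. Names and array shapes are illustrative only.

        import numpy as np

        def pareto_front(objectives):
            """Indices of non-dominated points. objectives has shape
            (n_particles, n_data_types): one least-squares value per data type."""
            n = objectives.shape[0]
            keep = np.ones(n, dtype=bool)
            for i in range(n):
                # i is dominated if some j is no worse everywhere and better somewhere
                dominates_i = (np.all(objectives <= objectives[i], axis=1)
                               & np.any(objectives < objectives[i], axis=1))
                if dominates_i.any():
                    keep[i] = False
            return np.flatnonzero(keep)

        obj = np.array([[1.0, 5.0], [2.0, 2.0], [5.0, 1.0], [4.0, 4.0]])
        print(pareto_front(obj))   # -> [0 1 2]; the point [4, 4] is dominated by [2, 2]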

  9. Power Pattern Sensitivity to Calibration Errors and Mutual Coupling in Linear Arrays through Circular Interval Arithmetics

    PubMed Central

    Anselmi, Nicola; Salucci, Marco; Rocca, Paolo; Massa, Andrea

    2016-01-01

    The sensitivity to both calibration errors and mutual coupling effects of the power pattern radiated by a linear array is addressed. Starting from the knowledge of the nominal excitations of the array elements and the maximum uncertainty on their amplitudes, the bounds of the pattern deviations from the ideal one are analytically derived by exploiting the Circular Interval Analysis (CIA). A set of representative numerical results is reported and discussed to assess the effectiveness and the reliability of the proposed approach also in comparison with state-of-the-art methods and full-wave simulations. PMID:27258274
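
    The circular-interval formulas themselves are not given in the abstract; the sketch below shows the cruder triangle-inequality bound that such interval analyses refine: with each excitation amplitude known only to within +/- eps, the array-factor deviation at any angle is at most the sum of the amplitude uncertainties. The uniform spacing, isotropic elements, and all names are illustrative assumptions.

        import numpy as np

        def power_pattern_bounds(a, spacing, theta, eps, wavelength=1.0):
            """Worst-case power-pattern envelope for a linear array whose
            element amplitudes are a_n +/- eps (triangle-inequality bound)."""
            k = 2.0 * np.pi / wavelength
            n = np.arange(len(a))
            steering = np.exp(1j * k * spacing * np.outer(n, np.sin(theta)))
            af_nominal = np.asarray(a) @ steering          # nominal array factor
            dev = eps * len(a)                             # max possible deviation
            upper = (np.abs(af_nominal) + dev) ** 2
            lower = np.clip(np.abs(af_nominal) - dev, 0.0, None) ** 2
            return lower, upper

        theta = np.linspace(-np.pi / 2, np.pi / 2, 181)
        lo, hi = power_pattern_bounds(np.ones(8), 0.5, theta, eps=0.05)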

  10. Objective measurement of erythema in psoriasis using digital color photography with color calibration.

    PubMed

    Raina, A; Hennessy, R; Rains, M; Allred, J; Hirshburg, J M; Diven, D G; Markey, M K

    2016-08-01

    Traditional metrics for evaluating the severity of psoriasis are subjective, which complicates efforts to measure effective treatments in clinical trials. We collected images of psoriasis plaques and calibrated the coloration of the images according to an included color card. Features were extracted from the images and used to train a linear discriminant analysis classifier with cross-validation to automatically classify the degree of erythema. The results were tested against numerical scores obtained by a panel of dermatologists using a standard rating system. Quantitative measures of erythema based on the digital color images showed good agreement with subjective assessment of erythema severity (κ = 0.4203). The color calibration process improved the agreement from κ = 0.2364 to κ = 0.4203. We propose a method for the objective measurement of the psoriasis severity parameter of erythema and show that the calibration process improved the results. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
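
    A minimal sketch of the classification step, assuming scikit-learn and hypothetical feature and grade arrays (the paper's feature extraction and exact validation protocol are not reproduced here):

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_predict
        from sklearn.metrics import cohen_kappa_score

        # features: (n_plaques, n_color_features) from the calibrated images;
        # grades: panel erythema scores. Both arrays are stand-ins here.
        rng = np.random.default_rng(0)
        features = rng.normal(size=(60, 5))
        grades = rng.integers(0, 4, size=60)

        pred = cross_val_predict(LinearDiscriminantAnalysis(), features, grades, cv=5)
        print(cohen_kappa_score(grades, pred))   # agreement with the dermatologist panel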

  11. Total internal reflection fluorescence anisotropy imaging microscopy: setup, calibration, and data processing for protein polymerization measurements in living cells

    NASA Astrophysics Data System (ADS)

    Ströhl, Florian; Wong, Hovy H. W.; Holt, Christine E.; Kaminski, Clemens F.

    2018-01-01

Fluorescence anisotropy imaging microscopy (FAIM) measures the depolarization properties of fluorophores to deduce molecular changes in their environment. For successful FAIM, several design principles have to be considered and a thorough system-specific calibration protocol is paramount. One important calibration parameter is the G factor, which describes the system-induced errors for different polarization states of light. The determination and calibration of the G factor is discussed in detail in this article. We present a novel measurement strategy, which is particularly suitable for FAIM with high numerical aperture objectives operating in TIRF illumination mode. The method makes use of evanescent fields that excite the sample with a polarization direction perpendicular to the image plane. Furthermore, we have developed an ImageJ/Fiji plugin, AniCalc, for FAIM data processing. We demonstrate the capabilities of our TIRF-FAIM system by measuring β-actin polymerization in human embryonic kidney cells and in retinal neurons.
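
    For reference, the standard relations behind a G-factor calibration can be sketched as follows; the paper's evanescent-field strategy is not reproduced, and the reference-sample route shown (a freely rotating dye of near-zero anisotropy) is just one common assumption:

        import numpy as np

        def anisotropy(i_par, i_perp, g):
            """r = (I_par - G*I_perp) / (I_par + 2*G*I_perp), with G correcting
            the polarization-dependent detection efficiency of the system."""
            i_par, i_perp = np.asarray(i_par, float), np.asarray(i_perp, float)
            return (i_par - g * i_perp) / (i_par + 2.0 * g * i_perp)

        def g_factor(i_par_ref, i_perp_ref, r_ref=0.0):
            """Solve the same relation for G using a reference sample of known
            anisotropy r_ref (e.g. a small, freely rotating dye, r_ref ~ 0)."""
            return (1.0 - r_ref) / (1.0 + 2.0 * r_ref) * (i_par_ref / i_perp_ref)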

  12. Mach-Zehnder interferometry method for acoustic shock wave measurements in air and broadband calibration of microphones.

    PubMed

    Yuldashev, Petr; Karzova, Maria; Khokhlova, Vera; Ollivier, Sébastien; Blanc-Benon, Philippe

    2015-06-01

    A Mach-Zehnder interferometer is used to measure spherically diverging N-waves in homogeneous air. An electrical spark source is used to generate high-amplitude (1800 Pa at 15 cm from the source) and short duration (50 μs) N-waves. Pressure waveforms are reconstructed from optical phase signals using an Abel-type inversion. It is shown that the interferometric method allows one to reach 0.4 μs of time resolution, which is 6 times better than the time resolution of a 1/8-in. condenser microphone (2.5 μs). Numerical modeling is used to validate the waveform reconstruction method. The waveform reconstruction method provides an error of less than 2% with respect to amplitude in the given experimental conditions. Optical measurement is used as a reference to calibrate a 1/8-in. condenser microphone. The frequency response function of the microphone is obtained by comparing the spectra of the waveforms resulting from optical and acoustical measurements. The optically measured pressure waveforms filtered with the microphone frequency response are in good agreement with the microphone output voltage. Therefore, an optical measurement method based on the Mach-Zehnder interferometer is a reliable tool to accurately characterize evolution of weak shock waves in air and to calibrate broadband acoustical microphones.
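
    The frequency-response step lends itself to a compact sketch: with the optically reconstructed pressure as the reference, the microphone response is the spectral ratio of its output to that reference. Variable names are illustrative, and the windowing and averaging a careful implementation would add are omitted.

        import numpy as np

        def microphone_frf(p_reference, v_microphone, fs):
            """Frequency response as the ratio of the microphone-output spectrum
            to the optically measured reference-pressure spectrum (records must
            share the same trigger, length, and sample rate fs)."""
            spectrum_ref = np.fft.rfft(p_reference)
            spectrum_mic = np.fft.rfft(v_microphone)
            freqs = np.fft.rfftfreq(len(p_reference), d=1.0 / fs)
            return freqs, spectrum_mic / spectrum_ref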

  13. A generalised multiple-mass based method for the determination of the live mass of a force transducer

    NASA Astrophysics Data System (ADS)

    Montalvão, Diogo; Baker, Thomas; Ihracska, Balazs; Aulaqi, Muhammad

    2017-01-01

Many applications in Experimental Modal Analysis (EMA) require that the sensors' masses are known. This is because the added mass from sensors will affect the structural mode shapes, and in particular the natural frequencies. EMA requires the measurement of the exciting forces at given coordinates, which is often made using piezoelectric force transducers. In such a case, the live mass of the force transducer, i.e. the mass as 'seen' by the structure in perpendicular directions, must be measured somehow, so that compensation methods like mass cancellation can be performed. This, however, presents the problem of how to obtain an accurate measurement of the live mass. If the system is perfectly calibrated, then a reasonably accurate estimate can be made using a straightforward method available in most classical textbooks, based on Newton's second law. However, this is often not the case (for example, when the transducer's sensitivity has changed over time, when it is unknown, or when the connection influences the transmission of the force). In a self-calibrating iterative method, both the live mass and the calibration factor are determined, but this paper shows that the problem may be ill-conditioned, producing misleading results if certain conditions are not met. Therefore, a more robust method is presented and discussed in this paper, reducing the ill-conditioning problems and the need to know the calibration factors beforehand. The three methods are compared and discussed through numerical and experimental examples, showing that classical EMA is still a field of research that deserves the attention of scientists and engineers.
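
    A minimal sketch of the multiple-mass idea: with several known masses m_i attached in turn and the assembly driven at a measured acceleration, the ratio V/a is linear in m_i, so an ordinary line fit yields the calibration factor (slope) and the live mass (intercept over slope) without knowing either beforehand. This illustrates the principle only, not the paper's conditioning analysis; all numbers are hypothetical.

        import numpy as np

        def live_mass_and_sensitivity(added_masses, voltages, accelerations):
            """Fit V/a = c*m_added + c*m_live: the slope is the sensitivity c
            [V/N] and intercept/slope is the live mass [kg]."""
            m = np.asarray(added_masses, dtype=float)
            y = np.asarray(voltages, dtype=float) / np.asarray(accelerations, dtype=float)
            slope, intercept = np.polyfit(m, y, 1)
            return intercept / slope, slope

        # Hypothetical data: 10 mV/N sensitivity, 20 g live mass.
        m_added = np.array([0.05, 0.10, 0.20, 0.40])
        accel = np.full_like(m_added, 9.5)
        volts = 0.01 * (0.02 + m_added) * accel
        print(live_mass_and_sensitivity(m_added, volts, accel))   # ~ (0.02, 0.01)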

  14. Monte Carlo simulation of gamma-ray interactions in an over-square high-purity germanium detector for in-vivo measurements

    NASA Astrophysics Data System (ADS)

    Saizu, Mirela Angela

    2016-09-01

The developments of high-purity germanium detectors match very well the requirements of in-vivo human body measurements regarding the gamma energy ranges of the radionuclides intended to be measured, the shape of the extended radioactive sources, and the measurement geometries. The Whole Body Counter (WBC) from IFIN-HH is based on an "over-square" high-purity germanium detector (HPGe) and performs accurate measurements of incorporated radionuclides emitting X and gamma rays in the energy range of 10 keV-1500 keV, under conditions of good shielding, suitable collimation, and calibration. As an alternative to the experimental efficiency calibration method, which consists of using reference calibration sources with gamma energy lines that cover the considered energy range, it is proposed to use the Monte Carlo method for the efficiency calibration of the WBC using the radiation transport code MCNP5. The HPGe detector was modelled and the gamma energy lines of 241Am, 57Co, 133Ba, 137Cs, 60Co, and 152Eu were simulated in order to obtain the virtual efficiency calibration curve of the WBC. The Monte Carlo method was validated by comparing the simulated results with experimental measurements using point-like sources. To optimize this match, the impact of variations in the front dead-layer thickness and in the detector's photon-absorbing layer materials on the HPGe detector efficiency was studied, and the detector's model was refined. In order to perform the WBC efficiency calibration for realistic people monitoring, additional numerical calculations were performed, simulating extended sources of specific shape according to standard-man characteristics.
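
    The MCNP5 modelling itself cannot be condensed here, but the final step, turning simulated or measured peak efficiencies into a calibration curve, is commonly done with a log-log polynomial fit, as in the sketch below; the energies and efficiency values are hypothetical placeholders.

        import numpy as np

        def efficiency_curve(energies_kev, efficiencies, order=4):
            """Fit ln(efficiency) as a polynomial in ln(E), a common empirical
            form for HPGe full-energy-peak efficiency curves."""
            coeffs = np.polyfit(np.log(energies_kev), np.log(efficiencies), order)
            return lambda e: np.exp(np.polyval(coeffs, np.log(e)))

        # Hypothetical calibration points (keV, absolute efficiency):
        e_kev = [59.5, 122.1, 356.0, 661.7, 1173.2, 1332.5]
        eff = [0.012, 0.025, 0.018, 0.011, 0.0070, 0.0065]
        curve = efficiency_curve(e_kev, eff)
        print(curve(200.0))   # interpolated efficiency at 200 keV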

  15. The simple procedure for the fluxgate magnetometers calibration

    NASA Astrophysics Data System (ADS)

    Marusenkov, Andriy

    2014-05-01

Fluxgate magnetometers are widely used in geophysical investigations, including geomagnetic field monitoring at the global network of geomagnetic observatories as well as electromagnetic sounding of the Earth's crust conductivity. For these tasks the magnetometers have to be calibrated with an appropriate level of accuracy. As a particular case, ways to satisfy the recent requirements on the scaling and orientation errors of 1-second INTERMAGNET magnetometers are considered in this work. The goal of the present study was to choose a simple and reliable calibration method for estimating the scale factors and angular errors of three-axis magnetometers in the field. There are a large number of scalar calibration methods, which use a free rotation of the sensor in the calibration field followed by complicated data processing procedures for the numerical solution of a high-order equation set. The chosen approach also exploits the Earth's magnetic field as a calibrating signal but, in contrast to other methods, the sensor has to be oriented in particular positions with respect to the total field vector instead of being freely rotated. This allows the use of very simple and straightforward linear computation formulas and, as a result, yields more reliable estimates of the calibrated parameters. The scale factors are estimated by sequentially aligning each component of the sensor in two positions: parallel and anti-parallel to the Earth's magnetic field vector. The non-orthogonality angles between each pair of components are estimated after sequentially aligning the components at angles of +/- 45 and +/- 135 degrees of arc with respect to the total field vector. Owing to this four-position approach, the estimates of the non-orthogonality angles are invariant to the zero offsets and to non-linearity in the transfer functions of the components. Experimental verification of the proposed method by means of a coil calibration system reveals that the achieved accuracy (<0.04% for scale factors and 0.03 degrees of arc for angle errors) is sufficient for many applications, particularly for satisfying the INTERMAGNET requirements for 1-second instruments.
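
    The two-position scale-factor estimate described above reduces to a pair of one-line formulas, sketched below; F is the known total field (e.g. from a scalar reference magnetometer), and the zero offset cancels in the difference. The four-position angle estimation follows the same pattern and is omitted here.

        def scale_factor_and_offset(v_parallel, v_antiparallel, total_field_nt):
            """Component readings with the axis aligned parallel (+F) and
            anti-parallel (-F) to the Earth's field:
                V+ = k*F + b,  V- = -k*F + b
            so k = (V+ - V-) / (2F) and b = (V+ + V-) / 2."""
            k = (v_parallel - v_antiparallel) / (2.0 * total_field_nt)
            b = 0.5 * (v_parallel + v_antiparallel)
            return k, b

        print(scale_factor_and_offset(48210.0, -47790.0, 48000.0))  # k ~ 1.0, b ~ 210 nT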

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vaccarono, Mattia; Bechini, Renzo; Chandrasekar, Chandra V.

The stability of weather radar calibration is a mandatory aspect for quantitative applications, such as rainfall estimation, short-term weather prediction and initialization of numerical atmospheric and hydrological models. Over the years, calibration monitoring techniques based on external sources have been developed, specifically calibration using the Sun and calibration based on ground clutter returns. In this paper, these two techniques are integrated and complemented with a self-consistency procedure and an intercalibration technique. The aim of the integrated approach is to implement a robust method for online monitoring, able to detect significant changes in the radar calibration. The physical consistency of polarimetric radar observables is exploited using the self-consistency approach, based on the expected correspondence between dual-polarization power and phase measurements in rain. This technique allows a reference absolute value to be provided for the radar calibration, from which any deviations may be detected using the other procedures. In particular, the ground clutter calibration is implemented on both polarization channels (horizontal and vertical) for each radar scan, allowing the polarimetric variables to be monitored and hardware failures to be promptly recognized. The Sun calibration allows monitoring of the calibration and sensitivity of the radar receiver, in addition to the antenna pointing accuracy. It is applied using observations collected during the standard operational scans but requires long integration times (several days) in order to accumulate a sufficient amount of useful data. Finally, an intercalibration technique is developed and performed to compare colocated measurements collected in rain by two radars in overlapping regions. The integrated approach is applied to the C-band weather radar network in northwestern Italy, during July–October 2014. The set of methods considered appears suitable to establish an online tool to monitor the stability of the radar calibration with an accuracy of about 2 dB, which is considered adequate to automatically detect any unexpected change in the radar system requiring further data analysis or on-site measurements.

  17. Image analysis method for the measurement of water saturation in a two-dimensional experimental flow tank

    NASA Astrophysics Data System (ADS)

    Belfort, Benjamin; Weill, Sylvain; Lehmann, François

    2017-04-01

A novel, non-invasive imaging technique that determines 2D maps of water content in unsaturated porous media is presented. This method directly relates digitally measured intensities to the water content of the porous medium. The approach requires the classical image analysis steps, i.e., normalization, filtering, background subtraction, scaling, and calibration. The main advantages of this approach are that no calibration experiment is needed and that no tracer or dye is injected into the flow tank. The procedure enables effective processing of a large number of photographs and thus produces 2D water content maps at high temporal resolution. A drainage / imbibition experiment in a 2D flow tank with inner dimensions of 40 cm x 14 cm x 6 cm (L x W x D) is carried out to validate the methodology. The accuracy of the proposed approach is assessed using numerical simulations with a state-of-the-art computational code that solves the Richards equation. Comparison of the cumulative mass leaving and entering the flow tank and of the water content maps produced by the photographic measurement technique and the numerical simulations demonstrates the efficiency and high accuracy of the proposed method for investigating vadose zone flow processes. Application examples in a larger flow tank with various boundary conditions are finally presented to illustrate the potential of the methodology.
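
    As a rough sketch of such a processing chain (filtering, normalization, scaling), the function below rescales each pixel between dry and fully saturated reference images and multiplies by porosity. The linear intensity-saturation relation is an assumption made here for illustration; the paper establishes the actual intensity-to-water-content mapping.

        import numpy as np
        from scipy.ndimage import median_filter

        def water_content_map(img, img_dry, img_wet, porosity=0.35):
            """Per-pixel volumetric water content from a photograph, assuming
            intensity varies linearly between the dry and saturated reference
            images (illustrative assumption)."""
            img = median_filter(np.asarray(img, dtype=float), size=3)   # denoise
            dry = np.asarray(img_dry, dtype=float)
            wet = np.asarray(img_wet, dtype=float)
            span = np.where(wet != dry, wet - dry, 1.0)                 # avoid /0
            saturation = np.clip((img - dry) / span, 0.0, 1.0)
            return porosity * saturation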

  18. Nonlinear Schrödinger approach to European option pricing

    NASA Astrophysics Data System (ADS)

    Wróblewski, Marcin

    2017-05-01

This paper deals with numerical option pricing methods based on a Schrödinger model rather than the Black-Scholes model. Nonlinear Schrödinger boundary value problems seem to be alternatives to linear models which better reflect the complexity and behavior of real markets. Therefore, based on the nonlinear Schrödinger option pricing model proposed in the literature, in this paper a model augmented by external atomic potentials is proposed and numerically tested. In terms of statistical physics the developed model describes the option in analogy to a pair of two identical quantum particles occupying the same state. The proposed model is used to price European call options on a stock index. The model is calibrated using the Levenberg-Marquardt algorithm based on market data. A Runge-Kutta method is used to solve the discretized boundary value problem numerically. Numerical results are provided and discussed. It seems that our proposal more accurately models phenomena observed in the real market than do linear models.
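
    A generic sketch of the calibration step, assuming SciPy: model_price stands in for the numerical Schrödinger-model solver (the paper uses a Runge-Kutta discretization), and the toy exponential pricer exists only to make the example self-contained.

        import numpy as np
        from scipy.optimize import least_squares

        def calibrate(model_price, strikes, market_prices, p0):
            """Levenberg-Marquardt fit of pricing-model parameters to quotes."""
            residuals = lambda params: model_price(params, strikes) - market_prices
            return least_squares(residuals, p0, method='lm').x

        # Toy stand-in pricer with two parameters (illustration only):
        toy = lambda p, k: p[0] * np.exp(-p[1] * k)
        strikes = np.linspace(80.0, 120.0, 9)
        quotes = toy([25.0, 0.03], strikes)
        print(calibrate(toy, strikes, quotes, p0=[20.0, 0.05]))   # -> ~[25, 0.03]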

  19. Numerical and Experimental Studies on Impact Loaded Concrete Structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saarenheimo, Arja; Hakola, Ilkka; Karna, Tuomo

    2006-07-01

An experimental set-up has been constructed for medium scale impact tests. The main objective of this effort is to provide data for the calibration and verification of numerical models of a loading scenario in which an aircraft impacts a nuclear power plant. One goal is to develop and put into use numerical methods for predicting the response of reinforced concrete structures to impacts of deformable projectiles that may contain combustible liquid ('fuel'). The loading and the structural behaviour, such as the collapse mechanism and the damage grade, are predicted by simple analytical methods and by the non-linear FE method. In the so-called Riera method the behavior of the missile material is assumed to be rigid plastic or rigid visco-plastic. Using elastic-plastic and elastic visco-plastic material models, calculations are carried out with the ABAQUS/Explicit finite element code, assuming an axisymmetric deformation mode for the missile. With both methods, typically, the impact force time history, the velocity of the missile rear end and the missile shortening during the impact were recorded for comparison. (authors)
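
    For reference, the Riera relation mentioned above can be sketched in a few lines: the target force is the crush force plus the momentum flux of the arriving material, F(t) = Pc(x) + mu(x) v(t)^2, while the still-rigid portion decelerates under the crush force alone. This is a simplified rigid-target illustration; the crush-force and mass-distribution values are hypothetical.

        import numpy as np

        def riera_force_history(pc, mu, length, v0, dt=1e-4):
            """Integrate the Riera model for a deformable missile on a rigid
            target. pc(x): crush force [N]; mu(x): mass per unit length [kg/m];
            x is the crushed length, v the speed of the rigid part."""
            x, v, t, history = 0.0, v0, 0.0, []
            while v > 0.0 and x < length:
                grid = np.linspace(x, length, 200)
                m_rigid = np.mean([mu(s) for s in grid]) * (length - x)  # uncrushed mass
                force = pc(x) + mu(x) * v * v                            # Riera relation
                v -= pc(x) / m_rigid * dt                                # decelerate rigid part
                x += v * dt                                              # advance crush front
                t += dt
                history.append((t, force))
            return np.array(history)

        # Hypothetical uniform missile: 5 m long, 40 kg/m, 0.2 MN crush force, 150 m/s.
        hist = riera_force_history(lambda x: 2.0e5, lambda x: 40.0, 5.0, 150.0)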

  20. Numerical Differentiation Methods for Computing Error Covariance Matrices in Item Response Theory Modeling: An Evaluation and a New Proposal

    ERIC Educational Resources Information Center

    Tian, Wei; Cai, Li; Thissen, David; Xin, Tao

    2013-01-01

In item response theory (IRT) modeling, the item parameter error covariance matrix plays a critical role in statistical inference procedures. When item parameters are estimated using the EM algorithm, the parameter error covariance matrix is not an automatic by-product of item calibration. Cai proposed the use of the Supplemented EM algorithm for…
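
    The abstract is truncated before its proposal, but the baseline such evaluations compare against, numerical differentiation of the log-likelihood, can be sketched directly: a central-difference Hessian of the negative log-likelihood at the estimates, whose inverse approximates the error covariance matrix. Here negloglik is a placeholder for the fitted model's criterion, and the step size h is a tuning choice.

        import numpy as np

        def central_diff_hessian(negloglik, theta, h=1e-4):
            """Central-difference Hessian of the negative log-likelihood at the
            parameter estimates theta (a 1-D numpy array)."""
            p = len(theta)
            hess = np.zeros((p, p))
            for i in range(p):
                for j in range(p):
                    tpp = theta.copy(); tpp[i] += h; tpp[j] += h
                    tpm = theta.copy(); tpm[i] += h; tpm[j] -= h
                    tmp = theta.copy(); tmp[i] -= h; tmp[j] += h
                    tmm = theta.copy(); tmm[i] -= h; tmm[j] -= h
                    hess[i, j] = (negloglik(tpp) - negloglik(tpm)
                                  - negloglik(tmp) + negloglik(tmm)) / (4 * h * h)
            return hess

        # error covariance ~ np.linalg.inv(central_diff_hessian(nll, theta_hat))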

  1. Tribology and Friction of Soft Materials: Mississippi State Case Study

    DTIC Science & Technology

    2010-03-18

Recoverable fragments of the report: ... elastomers, foams, and fabrics. B. Develop internal state variable (ISV) material model; the model will be calibrated using the database and verified... Materials studied: rubbers (natural rubber, Santoprene vulcanized elastomer, styrene butadiene rubber (SBR)); foams (polypropylene foam, polyurethane foam); fabrics (Kevlar)... Axially symmetric model of a PC disk; numerical implementation in FEM codes; experiments (SEM, optical methods); ISV model with void nucleation; FEM analysis.

  2. Optical tweezers theory near a flat surface: a perturbative method

    NASA Astrophysics Data System (ADS)

    Flyvbjerg, Henrik; Dutra, Rafael S.; Maia Neto, Paolo A.; Nussenzveig, H. Moyses

We propose a perturbative calculation of the optical force exerted by a focused laser beam on a microsphere of arbitrary radius that is localized near a flat glass surface in a standard optical tweezers setup. Starting from the Mie-Debye representation for the electric field of a Gaussian laser beam, focused by an objective of high numerical aperture, we derive a recursive series that represents the multiple reflections describing the reverberation of laser light between the microsphere and the glass slide. We present numerical results for the axial component of the optical force and the axial trap stiffness. Numerical results for a configuration typical in biological applications--a microsphere of 0.5 µm radius at a distance around 0.25 µm from the surface--show a 37
[1] Viana N B, Rocha M S, Mesquita O N, et al. (2007) Towards absolute calibration of optical tweezers. Phys Rev E 75:021914-1-14.
[2] Dutra R S, Viana N B, Maia Neto P A, et al. (2014) Absolute calibration of forces in optical tweezers. Phys Rev A 90:013825-1-13.
Rafael S. Dutra thanks the Brazilian "Science without Borders" program for a postdoctoral scholarship.

  3. Direct determination of geometric alignment parameters for cone-beam scanners

    PubMed Central

    Mennessier, C; Clackdoyle, R; Noo, F

    2009-01-01

    This paper describes a comprehensive method for determining the geometric alignment parameters for cone-beam scanners (often called calibrating the scanners or performing geometric calibration). The method is applicable to x-ray scanners using area detectors, or to SPECT systems using pinholes or cone-beam converging collimators. Images of an alignment test object (calibration phantom) fixed in the field of view of the scanner are processed to determine the nine geometric parameters for each view. The parameter values are found directly using formulae applied to the projected positions of the test object marker points onto the detector. Each view is treated independently, and no restrictions are made on the position of the cone vertex, or on the position or orientation of the detector. The proposed test object consists of 14 small point-like objects arranged with four points on each of three orthogonal lines, and two points on a diagonal line. This test object is shown to provide unique solutions for all possible scanner geometries, even when partial measurement information is lost by points superimposing in the calibration scan. For the many situations where the cone vertex stays reasonably close to a central plane (for circular, planar, or near-planar trajectories), a simpler version of the test object is appropriate. The simpler object consists of six points, two per orthogonal line, but with some restrictions on the positioning of the test object. This paper focuses on the principles and mathematical justifications for the method. Numerical simulations of the calibration process and reconstructions using estimated parameters are also presented to validate the method and to provide evidence of the robustness of the technique. PMID:19242049

  4. Friction-term response to boundary-condition type in flow models

    USGS Publications Warehouse

    Schaffranek, R.W.; Lai, C.

    1996-01-01

    The friction-slope term in the unsteady open-channel flow equations is examined using two numerical models based on different formulations of the governing equations and employing different solution methods. The purposes of the study are to analyze, evaluate, and demonstrate the behavior of the term in a set of controlled numerical experiments using varied types and combinations of boundary conditions. Results of numerical experiments illustrate that a given model can respond inconsistently for the identical resistance-coefficient value under different types and combinations of boundary conditions. Findings also demonstrate that two models employing different dependent variables and solution methods can respond similarly for the identical resistance-coefficient value under similar types and combinations of boundary conditions. Discussion of qualitative considerations and quantitative experimental results provides insight into the proper treatment, evaluation, and significance of the friction-slope term, thereby offering practical guidelines for model implementation and calibration.

  5. ITER-like antenna capacitors voltage probes: Circuit/electromagnetic calculations and calibrations.

    PubMed

    Helou, W; Dumortier, P; Durodié, F; Lombard, G; Nicholls, K

    2016-10-01

The analyses illustrated in this manuscript have been performed in order to provide the required data for the amplitude-and-phase calibration of the D-dot voltage probes used in the ITER-like antenna at the Joint European Torus tokamak. Their equivalent electrical circuit has been extracted and analyzed, and it has been compared to the one of voltage probes installed in simple transmission lines. A radio-frequency calibration technique has been formulated and exact mathematical relations have been derived. This technique mixes in an elegant fashion data extracted from measurements and numerical calculations to retrieve the calibration factors. The latter have been compared to previous calibration data with excellent agreement, proving the robustness of the proposed radio-frequency calibration technique. In particular, it has been stressed that it is crucial to take into account environmental parasitic effects. In addition, a low-frequency calibration technique has been formulated and analyzed in depth. The equivalence between the radio-frequency and low-frequency techniques has been rigorously demonstrated. The radio-frequency calibration technique is preferable in the case of the ITER-like antenna due to uncertainties on the characteristics of the cables connected at the inputs of the voltage probes. A method to extract the effect of a mismatched data acquisition system has been derived for both calibration techniques. Finally, it has been outlined that in the case of the ITER-like antenna the voltage probes can additionally be used to monitor the currents at the inputs of the antenna.

  6. A Fast Surrogate-facilitated Data-driven Bayesian Approach to Uncertainty Quantification of a Regional Groundwater Flow Model with Structural Error

    NASA Astrophysics Data System (ADS)

    Xu, T.; Valocchi, A. J.; Ye, M.; Liang, F.

    2016-12-01

Due to simplification and/or misrepresentation of the real aquifer system, numerical groundwater flow and solute transport models are usually subject to model structural error. During model calibration, the hydrogeological parameters may be overly adjusted to compensate for unknown structural error. This may result in biased predictions when models are used to forecast aquifer response to new forcing. In this study, we extend a fully Bayesian method [Xu and Valocchi, 2015] to calibrate a real-world, regional groundwater flow model. The method uses a data-driven error model to describe model structural error and jointly infers model parameters and structural error. In this study, Bayesian inference is facilitated using high performance computing and fast surrogate models. The surrogate models are constructed using machine learning techniques to emulate the response simulated by the computationally expensive groundwater model. We demonstrate in the real-world case study that explicitly accounting for model structural error yields parameter posterior distributions that are substantially different from those derived by the classical Bayesian calibration that does not account for model structural error. In addition, the Bayesian method with an error model gives significantly more accurate predictions along with reasonable credible intervals.
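
    The machine-learning surrogate step can be sketched with an off-the-shelf Gaussian-process regressor, standing in for whatever emulator the authors actually built; the parameter samples X and model outputs y below are hypothetical stand-ins for runs of the expensive groundwater model.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, ConstantKernel

        rng = np.random.default_rng(1)
        X = rng.uniform(0.0, 1.0, size=(200, 4))          # sampled parameter sets
        y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.01 * rng.normal(size=200)

        surrogate = GaussianProcessRegressor(
            kernel=ConstantKernel(1.0) * RBF(length_scale=np.ones(4)),
            normalize_y=True,
        ).fit(X, y)
        mean, std = surrogate.predict(X[:5], return_std=True)   # fast emulation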

  7. Calibrating a numerical model's morphology using high-resolution spatial and temporal datasets from multithread channel flume experiments.

    NASA Astrophysics Data System (ADS)

    Javernick, L.; Bertoldi, W.; Redolfi, M.

    2017-12-01

Accessing or acquiring high quality, low-cost topographic data has never been easier, owing to recent developments in the photogrammetric technique of Structure-from-Motion (SfM). Researchers can acquire the necessary SfM imagery with various platforms, with the ability to capture millimetre resolution and accuracy, or large-scale areas with the help of unmanned platforms. Such datasets, in combination with numerical modelling, have opened up new opportunities to study the physical and ecological relationships of river environments. While a numerical model's overall predictive accuracy is most influenced by topography, proper model calibration requires hydraulic and morphological data; however, rich hydraulic and morphological datasets remain scarce. This lack of field and laboratory data has limited model advancement through the inability to properly calibrate, assess the sensitivity of, and validate model performance. However, new time-lapse imagery techniques have shown success in identifying instantaneous sediment transport in flume experiments and in improving hydraulic model calibration. With new capabilities to capture high-resolution spatial and temporal datasets of flume experiments, there is a need to further assess model performance. To address this demand, this research used braided river flume experiments and captured time-lapse observations of sediment transport and repeat SfM elevation surveys to provide unprecedented spatial and temporal datasets. Through newly created metrics that quantified observed and modelled activation, deactivation, and bank erosion rates, the numerical model Delft3D was calibrated. This increased temporal data, comprising both high-resolution time series and long-term temporal coverage, significantly improved the calibration routines and refined the calibration parameterization. Model results show that there is a trade-off between achieving quantitative statistical and qualitative morphological representations. Specifically, simulations tuned for statistical agreement struggled to represent braided planforms (evolving toward meandering), while parameterizations that ensured braiding produced exaggerated activation and bank erosion rates. Marie Sklodowska-Curie Individual Fellowship: River-HMV, 656917

  8. An investigation of hydraulic conductivity estimation in a ground-water flow study of Northern Long Valley, New Jersey

    USGS Publications Warehouse

    Hill, Mary C.

    1985-01-01

The purpose of this study was to develop a methodology to be used to investigate the aquifer characteristics and water supply potential of an aquifer system. In particular, the geohydrology of northern Long Valley, New Jersey, was investigated. Geohydrologic data were collected and analyzed to characterize the site. Analysis was accomplished by interpreting the available data and by using a numerical simulation of the water-table aquifer. Special attention was given to the estimation of hydraulic conductivity values and hydraulic conductivity structure which together define the hydraulic conductivity of the modeled aquifer. Hydraulic conductivity and all other aspects of the system were first estimated using the trial-and-error method of calibration. The estimation of hydraulic conductivity was improved using a least squares method to estimate hydraulic conductivity values and by improvements in the parameter structure. These efforts improved the calibration of the model far more than a preceding period of similar effort using the trial-and-error method of calibration. In addition, the proposed method provides statistical information on the reliability of estimated hydraulic conductivity values, calculated heads, and calculated flows. The methodology developed and applied in this work proved to be of substantial value in the evaluation of the aquifer considered.

  9. Method for calibration-free scanned-wavelength modulation spectroscopy for gas sensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanson, Ronald K.; Jeffries, Jay B.; Sun, Kai

A method of calibration-free scanned-wavelength modulation spectroscopy (WMS) absorption sensing is provided by: obtaining absorption lineshape measurements of a gas sample on a sensor using 1f-normalized WMS-2f, where the injection current of an injection-current-tunable diode laser (TDL) is modulated at a frequency f such that a wavelength modulation and an intensity modulation of the TDL are simultaneously generated; extracting, using a numerical lock-in program and a low-pass filter of appropriate bandwidth, WMS-nf (n = 1, 2, . . .) signals, where the WMS-nf signals are harmonics of f; determining a physical property of the gas sample according to ratios of the WMS-nf signals; determining the zero-absorption background using scanned-wavelength WMS; and determining non-absorption losses using at least two of the harmonics. This removes the need for a non-absorption baseline measurement in environments where collision broadening has blended transition linewidths, enabling calibration-free WMS measurements without knowledge of the transition linewidth.
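
    A numerical lock-in of the kind referenced above can be sketched compactly: multiply the detector signal by quadrature references at n*f and low-pass filter the products. The filter order and cutoff below are arbitrary illustrative choices, not values from the patent.

        import numpy as np
        from scipy.signal import butter, filtfilt

        def wms_harmonic(signal, fs, f_mod, n, cutoff=None):
            """Magnitude of the nth harmonic of the modulation frequency f_mod
            in a WMS detector signal sampled at fs, via numerical lock-in."""
            t = np.arange(signal.size) / fs
            ref_x = np.cos(2 * np.pi * n * f_mod * t)
            ref_y = np.sin(2 * np.pi * n * f_mod * t)
            cutoff = cutoff or f_mod / 2
            b, a = butter(4, cutoff / (fs / 2))          # low-pass filter design
            x = filtfilt(b, a, signal * ref_x)           # in-phase component
            y = filtfilt(b, a, signal * ref_y)           # quadrature component
            return 2.0 * np.hypot(x, y)                  # harmonic magnitude

        # e.g. the 1f-normalized 2f signal:
        # wms_2f_over_1f = wms_harmonic(sig, fs, f, 2) / wms_harmonic(sig, fs, f, 1)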

  10. Calibration of strain-gage installations in aircraft structures for the measurement of flight loads

    NASA Technical Reports Server (NTRS)

    Skopinski, T H; Aiken, William S , Jr; Huston, Wilber B

    1954-01-01

    A general method has been developed for calibrating strain-gage installations in aircraft structures, which permits the measurement in flight of the shear or lift, the bending moment, and the torque or pitching moment on the principal lifting or control surfaces. Although the stress in structural members may not be a simple function of the three loads of interest, a straightforward procedure is given for numerically combining the outputs of several bridges in such a way that the loads may be obtained. Extensions of the basic procedure by means of electrical combination of the strain-gage bridges are described which permit compromises between strain-gage installation time, availability of recording instruments, and data reduction time. The basic principles of strain-gage calibration procedures are illustrated by reference to the data for two aircraft structures of typical construction, one a straight and the other a swept horizontal stabilizer.
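
    The numerical-combination step lends itself to a small sketch: with known calibration loads applied one at a time and the corresponding bridge outputs recorded, the combination coefficients follow from a least-squares solve, and in flight the same coefficients convert measured outputs to loads. Array shapes and names are illustrative.

        import numpy as np

        def combination_coefficients(bridge_outputs, applied_loads):
            """Least-squares coefficients beta with load ~ bridge_outputs @ beta.
            bridge_outputs: (n_calibration_points, n_bridges) responses;
            applied_loads: (n_calibration_points,) known shear, bending-moment,
            or torque values for each calibration loading."""
            beta, *_ = np.linalg.lstsq(bridge_outputs, applied_loads, rcond=None)
            return beta

        # In flight: shear = outputs_now @ beta_shear (one beta vector per load type).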

  11. Modeling tidal hydrodynamics of San Diego Bay, California

    USGS Publications Warehouse

    Wang, P.-F.; Cheng, R.T.; Richter, K.; Gross, E.S.; Sutton, D.; Gartner, J.W.

    1998-01-01

In 1983, current data were collected by the National Oceanic and Atmospheric Administration using mechanical current meters. During 1992 through 1996, acoustic Doppler current profilers as well as mechanical current meters and tide gauges were used. These measurements not only document tides and tidal currents in San Diego Bay, but also provide independent data sets for model calibration and verification. A high resolution (100-m grid), depth-averaged, numerical hydrodynamic model has been implemented for San Diego Bay to describe essential tidal hydrodynamic processes in the bay. The model is calibrated using the 1983 data set and verified using the more recent 1992-1996 data. Discrepancies between model predictions and field data in both model calibration and verification are of the order of magnitude of the uncertainties in the field data. The calibrated and verified numerical model has been used to quantify residence time and dilution and flushing of contaminant effluent into San Diego Bay. Furthermore, the numerical model has become an important research tool in ongoing hydrodynamic and water quality studies and in guiding future field data collection programs.

  12. Development of theoretical oxygen saturation calibration curve based on optical density ratio and optical simulation approach

    NASA Astrophysics Data System (ADS)

    Jumadi, Nur Anida; Beng, Gan Kok; Ali, Mohd Alauddin Mohd; Zahedi, Edmond; Morsin, Marlia

    2017-09-01

The implementation of a surface-based Monte Carlo simulation technique for oxygen saturation (SaO2) calibration curve estimation is demonstrated in this paper. Generally, the calibration curve is estimated either from empirical studies using animals as the subject of experiment or is derived from mathematical equations. However, determining the calibration curve using animals is time consuming and requires expertise to conduct the experiment. Alternatively, optical simulation techniques have been used widely in the biomedical optics field due to their capability to exhibit real tissue behavior. The mathematical relationship between optical density (OD) and optical density ratios (ODR) associated with SaO2 during systole and diastole is used as the basis for obtaining the theoretical calibration curve. The optical properties corresponding to systolic and diastolic behaviors were applied to the tissue model to mimic the optical properties of the tissues. Based on the absorbed ray flux at the detectors, the OD and ODR were successfully calculated. Simulation results for the optical density ratio at every 20% interval of SaO2 are presented, with a maximum error of 2.17% when compared with a previous numerical simulation technique (MC model). The findings reveal the potential of the proposed method to be used for extended calibration curve studies using other wavelength pairs.
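
    The end product, a calibration curve relating SaO2 to the optical density ratio, is conventionally summarized by a low-order fit of the simulated points; a minimal sketch with hypothetical simulated pairs:

        import numpy as np

        # Hypothetical simulated pairs (SaO2 in percent, optical density ratio):
        sao2 = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])
        odr = np.array([1.90, 1.62, 1.34, 1.08, 0.81, 0.55])

        slope, intercept = np.polyfit(odr, sao2, 1)          # SaO2 ~ a*ODR + b
        estimate = lambda r: slope * r + intercept
        print(estimate(1.0))                                  # SaO2 at ODR = 1.0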

  13. Revision and proposed modification for a total maximum daily load model for Upper Klamath Lake, Oregon

    USGS Publications Warehouse

    Wherry, Susan A.; Wood, Tamara M.; Anderson, Chauncey W.

    2015-01-01

    Using the extended 1991–2010 external phosphorus loading dataset, the lake TMDL model was recalibrated following the same procedures outlined in the Phase 1 review. The version of the model selected for further development incorporated an updated sediment initial condition, a numerical solution method for the chlorophyll a model, changes to light and phosphorus factors limiting algal growth, and a new pH-model regression, which removed Julian day dependence in order to avoid discontinuities in pH at year boundaries. This updated lake TMDL model was recalibrated using the extended dataset in order to compare calibration parameters to those obtained from a calibration with the original 7.5-year dataset. The resulting algal settling velocity calibrated from the extended dataset was more than twice the value calibrated with the original dataset, and, because the calibrated values of algal settling velocity and recycle rate are related (more rapid settling required more rapid recycling), the recycling rate also was larger than that determined with the original dataset. These changes in calibration parameters highlight the uncertainty in critical rates in the Upper Klamath Lake TMDL model and argue for their direct measurement in future data collection to increase confidence in the model predictions.

  14. Polyatomic molecular Dirac-Hartree-Fock calculations with Gaussian basis sets

    NASA Technical Reports Server (NTRS)

    Dyall, Kenneth G.; Faegri, Knut, Jr.; Taylor, Peter R.

    1990-01-01

    Numerical methods have been used successfully in atomic Dirac-Hartree-Fock (DHF) calculations for many years. Some DHF calculations using numerical methods have been done on diatomic molecules, but while these serve a useful purpose for calibration, the computational effort in extending this approach to polyatomic molecules is prohibitive. An alternative more in line with traditional quantum chemistry is to use an analytical basis set expansion of the wave function. This approach fell into disrepute in the early 1980's due to problems with variational collapse and intruder states, but has recently been put on firm theoretical foundations. In particular, the problems of variational collapse are well understood, and prescriptions for avoiding the most serious failures have been developed. Consequently, it is now possible to develop reliable molecular programs using basis set methods. This paper describes such a program and reports results of test calculations to demonstrate the convergence and stability of the method.

  15. A simple formulation for deriving effective atomic numbers via electron density calibration from dual-energy CT data in the human body.

    PubMed

    Saito, Masatoshi; Sagara, Shota

    2017-06-01

The main objective of this study is to propose a simple formulation (which we call DEEDZ) for deriving effective atomic numbers (Zeff) via electron density (ρe) calibration from dual-energy (DE) CT data. We carried out a numerical analysis of this DEEDZ method for a large variety of materials with known elemental compositions and mass densities using an available photon cross-sections database. The new conversion approach was also applied to previously published experimental DECT data to validate its practical feasibility. We performed the numerical analysis of the DEEDZ conversion method for tissue surrogates that have the same chemical compositions and mass densities as a commercial tissue-characterization phantom in order to determine the parameters necessary for the ρe and Zeff calibrations in the DEEDZ conversion. These parameters were then applied to the human-body-equivalent tissues of ICRU Report 46 as objects of interest with unknown ρe and Zeff. The attenuation coefficients of these materials were calculated using the XCOM photon cross-sections database. We also applied the DEEDZ conversion to experimental DECT data available in the literature, measured for two commercial phantoms of different shapes and sizes using a dual-source CT scanner at 80 kV and 140 kV/Sn. The simulated Zeff values were in excellent agreement with the reference values for almost all of the ICRU-46 human tissues over the Zeff range from 5.83 (gallstones-cholesterol) to 16.11 (bone mineral-hydroxyapatite). The relative deviations from the reference Zeff were within ±0.3% for all materials, except for one outlier that presented a -3.1% deviation, namely, the thyroid. The reason for this discrepancy is that the thyroid contains a small amount of iodine, an element with a large atomic number (Z = 53). In the experimental case, we confirmed that the simple formulation, with fewer fit parameters, enables Zeff to be calibrated as accurately as the existing calibration procedure. The DEEDZ conversion method based on the proposed simple formulation could facilitate the construction of ρe and Zeff images from acquired DECT data. © 2017 American Association of Physicists in Medicine.

  16. Gradient-based model calibration with proxy-model assistance

    NASA Astrophysics Data System (ADS)

    Burrows, Wesley; Doherty, John

    2016-02-01

Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with large run times and problematic numerical behaviour is described. The methodology is general, and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purposes of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on the calculation of local gradients is mitigated, thus allowing access to the benefits of gradient-based analysis where a lack of integrity in finite-difference derivative calculation would otherwise have impeded such access. Construction of a proxy model, and its subsequent use in calibration of a complex model and in analysing the uncertainties of predictions made by that model, is implemented in the PEST suite.

  17. A statistical approach to instrument calibration

    Treesearch

    Robert R. Ziemer; David Strauss

    1978-01-01

    Summary - It has been found that two instruments will yield different numerical values when used to measure identical points. A statistical approach is presented that can be used to approximate the error associated with the calibration of instruments. Included are standard statistical tests that can be used to determine if a number of successive calibrations of the...

  18. On-ground re-calibration of the GOME-2 satellite spectrometer series

    NASA Astrophysics Data System (ADS)

    Otter, Gerard; Dijkhuizen, Niels; Vosteen, Amir; Brinkers, Sanneke; Gür, Bilgehan; Kenter, Pepijn; Sallusti, Marcello; Tomuta, Dana; Veratti, Rubes; Cappani, Annalisa

    2017-11-01

The Global Ozone Monitoring Experiment-2 [1] (GOME-2) represents one of the European instruments carried on board the MetOp satellite within ESA's "Living Planet Program". Consisting of three flight models (FMs), it is intended to provide long-term monitoring of atmospheric ozone and other trace gases over a time frame of 15-20 years, thus contributing valuable input to climate and atmospheric research and providing near-real-time data for use in air quality forecasting. The ambition to achieve highly accurate scientific results requires a thorough calibration and characterization of the instrument prior to launch. These calibration campaigns were performed by TNO in Delft in the Netherlands, in the institute's "Thermal Vacuum Calibration Facility". Due to refurbishment and/or storage of the instruments over a period of a few years, several re-calibration campaigns were necessary. These re-calibrations provided a unique opportunity to study the effects of long-term storage and to build up statistics on the instruments as well as on the calibration methods used. During the re-calibration of the second flight model a difference was found in the radiometric calibration output, which was not understood initially. In order to understand the radiometric anomalies, a deep investigation was performed using numerous variations of the setup and different sources. The major contributor was identified to be a systematic error in the alignment, for which a correction was applied. Apart from this, it was found that the geometry of the sources influenced the results. Based on the calibration results combined with a theoretical geometrical hypothesis, it was inferred that the on-ground calibration should mimic the in-orbit geometry as closely as possible.

  19. Comment on "Radiocarbon Calibration Curve Spanning 0 to 50,000 Years B.P. Based on Paired 230Th/234U/238U and 14C Dates on Pristine Corals" by R.G. Fairbanks, R. A. Mortlock, T.-C. Chiu, L. Cao, A. Kaplan, T. P. Guilderson, T. W. Fairbanks, A. L. Bloom, P

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reimer, P J; Baillie, M L; Bard, E

    2005-10-02

Radiocarbon calibration curves are essential for converting radiocarbon dated chronologies to the calendar timescale. Prior to the 1980s numerous differently derived calibration curves based on radiocarbon ages of known age material were in use, resulting in "apples and oranges" comparisons between various records (Klein et al., 1982), further complicated by until-then unappreciated inter-laboratory variations (International Study Group, 1982). The solution was to produce an internationally-agreed calibration curve based on carefully screened data with updates at 4-6 year intervals (Klein et al., 1982; Stuiver and Reimer, 1986; Stuiver and Reimer, 1993; Stuiver et al., 1998). The IntCal working group has continued this tradition with the active participation of researchers who produced the records that were considered for incorporation into the current, internationally-ratified calibration curves, IntCal04, SHCal04, and Marine04, for Northern Hemisphere terrestrial, Southern Hemisphere terrestrial, and marine samples, respectively (Reimer et al., 2004; Hughen et al., 2004; McCormac et al., 2004). Fairbanks et al. (2005), accompanied by a more technical paper, Chiu et al. (2005), and an introductory comment, Adkins (2005), recently published a "calibration curve spanning 0-50,000 years". Fairbanks et al. (2005) and Chiu et al. (2005) have made a significant contribution to the database on which the IntCal04 and Marine04 calibration curves are based. These authors have now taken the further step of deriving their own radiocarbon calibration extending to 50,000 cal BP, which they claim is superior to that generated by the IntCal working group. In their papers, these authors are strongly critical of the IntCal calibration efforts for what they claim to be inadequate screening and sample pretreatment methods. While these criticisms may ultimately be helpful in identifying a better set of protocols, we feel that there are also several erroneous and misleading statements made by these authors which require a response by the IntCal working group. Furthermore, we would like to comment on the sample selection criteria, pretreatment methods, and statistical methods utilized by Fairbanks et al. in the derivation of their own radiocarbon calibration.

  20. An improved multilevel Monte Carlo method for estimating probability distribution functions in stochastic oil reservoir simulations

    DOE PAGES

    Lu, Dan; Zhang, Guannan; Webster, Clayton G.; ...

    2016-12-30

In this paper, we develop an improved multilevel Monte Carlo (MLMC) method for estimating cumulative distribution functions (CDFs) of a quantity of interest coming from numerical approximation of large-scale stochastic subsurface simulations. Compared with Monte Carlo (MC) methods, which require a significantly large number of high-fidelity model executions to achieve a prescribed accuracy when computing statistical expectations, MLMC methods were originally proposed to significantly reduce the computational cost with the use of multifidelity approximations. The improved performance of MLMC methods depends strongly on the decay of the variance of the integrand as the level increases. However, the main challenge in estimating CDFs is that the integrand is a discontinuous indicator function whose variance decays slowly. To address this difficult task, we approximate the integrand using a smoothing function that accelerates the decay of the variance. In addition, we design a novel a posteriori optimization strategy to calibrate the smoothing function, so as to balance the computational gain and the approximation error. The combined proposed techniques are integrated into a very general and practical algorithm that can be applied to a wide range of subsurface problems for high-dimensional uncertainty quantification, such as the fine-grid oil reservoir model considered in this effort. The numerical results reveal that, with the use of the calibrated smoothing function, the improved MLMC technique significantly reduces the computational complexity compared to the standard MC approach. Finally, we discuss several factors that affect the performance of the MLMC method and provide guidance for effective and efficient usage in practice.
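
    The core smoothing idea can be illustrated at a single level: replace the indicator 1{Q <= u} by a sigmoid of tunable width, which makes the integrand continuous while staying nearly unbiased for small widths. The lognormal stand-in for the model output and the fixed bandwidth are illustrative; the paper calibrates the smoothing a posteriori inside MLMC.

        import numpy as np

        def smoothed_cdf_estimate(samples, u, bandwidth=0.1):
            """MC estimate of P(Q <= u) with the discontinuous indicator
            replaced by a smooth sigmoid of width `bandwidth`."""
            samples = np.asarray(samples, dtype=float)
            g = 1.0 / (1.0 + np.exp((samples - u) / bandwidth))  # ~1 for samples << u
            return g.mean()

        q = np.random.lognormal(mean=0.0, sigma=0.5, size=10000)
        print(smoothed_cdf_estimate(q, u=1.0), np.mean(q <= 1.0))  # close agreement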

  1. Numerical Analysis of a Radiant Heat Flux Calibration System

    NASA Technical Reports Server (NTRS)

    Jiang, Shanjuan; Horn, Thomas J.; Dhir, V. K.

    1998-01-01

    A radiant heat flux gage calibration system exists in the Flight Loads Laboratory at NASA's Dryden Flight Research Center. This calibration system must be well understood if the heat flux gages calibrated in it are to provide useful data during radiant heating ground tests or flight tests of high speed aerospace vehicles. A part of the calibration system characterization process is to develop a numerical model of the flat plate heater element and heat flux gage, which will help identify errors due to convection, heater element erosion, and other factors. A 2-dimensional mathematical model of the gage-plate system has been developed to simulate the combined problem involving convection, radiation and mass loss by chemical reaction. A fourth order finite difference scheme is used to solve the steady state governing equations and determine the temperature distribution in the gage and plate, incident heat flux on the gage face, and flat plate erosion. Initial gage heat flux predictions from the model are found to be within 17% of experimental results.
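
    A reduced, conduction-only illustration of such a plate model is easy to write down (the actual analysis couples radiation, convection, and mass loss and uses a fourth-order scheme); here a Jacobi iteration solves the steady heat equation with one heated edge. All dimensions and temperatures are placeholders.

        import numpy as np

        def steady_plate_temperature(nx=50, ny=50, t_hot=1000.0, t_cold=300.0,
                                     tol=1e-6, max_iter=50000):
            """Jacobi iteration for the steady 2-D heat (Laplace) equation on a
            rectangular plate with fixed-temperature edges: a conduction-only
            reduction of the gage/plate model described above."""
            t = np.full((ny, nx), t_cold)
            t[:, 0] = t_hot                                  # heated edge
            for _ in range(max_iter):
                t_new = t.copy()
                t_new[1:-1, 1:-1] = 0.25 * (t[:-2, 1:-1] + t[2:, 1:-1]
                                            + t[1:-1, :-2] + t[1:-1, 2:])
                if np.max(np.abs(t_new - t)) < tol:
                    return t_new
                t = t_new
            return t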

  2. Conservative self-force correction to the innermost stable circular orbit: Comparison with multiple post-Newtonian-based methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Favata, Marc

    2011-01-15

Barack and Sago [Phys. Rev. Lett. 102, 191101 (2009)] have recently computed the shift of the innermost stable circular orbit (ISCO) of the Schwarzschild spacetime due to the conservative self-force that arises from the finite-mass of an orbiting test-particle. This calculation of the ISCO shift is one of the first concrete results of the self-force program, and provides an exact (fully relativistic) point of comparison with approximate post-Newtonian (PN) computations of the ISCO. Here this exact ISCO shift is compared with nearly all known PN-based methods. These include both 'nonresummed' and 'resummed' approaches (the latter reproduce the test-particle limit by construction). The best agreement with the exact (Barack-Sago) result is found when the pseudo-4PN coefficient of the effective-one-body (EOB) metric is fit to numerical relativity simulations. However, if one considers uncalibrated methods based only on the currently known 3PN-order conservative dynamics, the best agreement is found from the gauge-invariant ISCO condition of Blanchet and Iyer [Classical Quantum Gravity 20, 755 (2003)], which relies only on the (nonresummed) 3PN equations of motion. This method reproduces the exact test-particle limit without any resummation. A comparison of PN methods with the ISCO in the equal-mass case (computed via sequences of numerical relativity initial-data sets) is also performed. Here a (different) nonresummed method also performs very well (as was previously shown). These results suggest that the EOB approach - while exactly incorporating the conservative test-particle dynamics and having several other important advantages - does not (in the absence of calibration) incorporate conservative self-force effects more accurately than standard PN methods. I also consider how the conservative self-force ISCO shift, combined in some cases with numerical relativity computations of the ISCO, can be used to constrain our knowledge of (1) the EOB effective metric, (2) phenomenological inspiral-merger-ringdown templates, and (3) 4PN- and 5PN-order terms in the PN orbital energy. These constraints could help in constructing better gravitational-wave templates. Lastly, I suggest a new method to calibrate unknown PN terms in inspiral templates using numerical-relativity calculations.
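
    For orientation, the unperturbed (test-particle) ISCO to which the Barack-Sago shift applies can be located numerically in a few lines: circular Schwarzschild geodesics (geometric units G = c = M = 1) have specific angular momentum L^2(r) = r^2/(r - 3), and the ISCO sits at its minimum, r = 6.

        from scipy.optimize import minimize_scalar

        def schwarzschild_isco():
            """Radius minimizing L^2(r) = r^2 / (r - 3) for circular orbits
            (geometric units G = c = M = 1); the analytic answer is r = 6."""
            result = minimize_scalar(lambda r: r * r / (r - 3.0),
                                     bounds=(3.1, 20.0), method='bounded')
            return result.x

        print(schwarzschild_isco())   # ~ 6.0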

  3. Analysis of full disc Ca II K spectroheliograms. I. Photometric calibration and centre-to-limb variation compensation

    NASA Astrophysics Data System (ADS)

    Chatzistergos, Theodosios; Ermolli, Ilaria; Solanki, Sami K.; Krivova, Natalie A.

    2018-01-01

    Context. Historical Ca II K spectroheliograms (SHG) are unique in representing long-term variations of the solar chromospheric magnetic field. They usually suffer from numerous problems and lack photometric calibration. Thus, accurate processing of these data is required to obtain meaningful results from their analysis. Aims: In this paper we aim to develop an automatic processing and photometric calibration method that provides precise and consistent results when applied to historical SHG. Methods: The proposed method is based on the assumption that the centre-to-limb variation of the intensity in quiet Sun regions does not vary with time. We tested the accuracy of the proposed method on various sets of synthetic images that mimic problems encountered in historical observations. We also tested our approach on a large sample of images randomly extracted from seven different SHG archives. Results: The tests carried out on the synthetic data show that the maximum relative errors of the method are generally <6.5%, while the average error is <1%, even if rather poor quality observations are considered. In the absence of strong artefacts the method returns images that differ from the ideal ones by <2% in any pixel. The method gives consistent values for both plage and network areas. We also show that our method returns consistent results for images from different SHG archives. Conclusions: Our tests show that the proposed method is more accurate than other methods presented in the literature. Our method can also be applied to process images from photographic archives of solar observations at wavelengths other than Ca II K.

  4. Objective calibration of numerical weather prediction models

    NASA Astrophysics Data System (ADS)

    Voudouri, A.; Khain, P.; Carmona, I.; Bellprat, O.; Grazzini, F.; Avgoustoglou, E.; Bettems, J. M.; Kaufmann, P.

    2017-07-01

    Numerical weather prediction (NWP) and climate models use parameterization schemes for physical processes, which often include free or poorly confined parameters. Model developers normally calibrate the values of these parameters subjectively to improve the agreement of forecasts with available observations, a procedure referred to as expert tuning. A practicable objective multi-variate calibration method built on a quadratic meta-model (MM), which has been applied to a regional climate model (RCM), has been shown to be at least as good as expert tuning. Based on these results, an approach to implementing the methodology for an NWP model is presented in this study. Challenges in transferring the methodology from RCM to NWP are not restricted to the use of higher resolution and different time scales. The sensitivity of the NWP model quality with respect to the model parameter space has to be clarified, and the overall procedure has to be optimized in terms of the computing resources required for the calibration of an NWP model. Three free model parameters affecting mainly turbulence parameterization schemes were originally selected with respect to their influence on the variables associated with daily forecasts, such as daily minimum and maximum 2 m temperature as well as 24 h accumulated precipitation. Preliminary results indicate that the approach is both affordable in terms of computer resources and meaningful in terms of improved forecast quality. In addition, the proposed methodology has the advantage of being a replicable procedure that can be applied when an updated model version is launched and/or to customize the same model implementation over different climatological areas.
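
    The meta-model idea itself is compact enough to sketch. In the toy below (parameter count, ranges and the error score are invented, not the setup of the study), a full quadratic surface is fitted to a handful of calibration runs and then minimized:

        import numpy as np
        from itertools import combinations_with_replacement

        def quad_features(X):
            """Design matrix of a full quadratic meta-model in the parameters."""
            cols = [np.ones(len(X))] + [X[:, i] for i in range(X.shape[1])]
            cols += [X[:, i] * X[:, j] for i, j in
                     combinations_with_replacement(range(X.shape[1]), 2)]
            return np.column_stack(cols)

        rng = np.random.default_rng(0)
        X = rng.uniform(-1.0, 1.0, (30, 3))        # 30 calibration runs, 3 parameters
        y = (1.0 + 0.5 * X[:, 0]**2 + 0.3 * (X[:, 1] - 0.2)**2 + 0.2 * X[:, 2]
             + 0.05 * rng.standard_normal(30))     # stand-in for a forecast-error score

        beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)

        # evaluate the fitted surface on a grid and take its minimum
        grid = np.stack(np.meshgrid(*[np.linspace(-1, 1, 21)] * 3), -1).reshape(-1, 3)
        best = grid[np.argmin(quad_features(grid) @ beta)]
        print("meta-model optimum (normalized parameters):", best)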

  5. More accurate, calibrated bootstrap confidence intervals for correlating two autocorrelated climate time series

    NASA Astrophysics Data System (ADS)

    Olafsdottir, Kristin B.; Mudelsee, Manfred

    2013-04-01

    Estimation of the Pearson correlation coefficient between two time series, to evaluate the influence of one time-dependent variable on another, is one of the most often used statistical methods in climate sciences. Various methods are used to estimate the confidence interval that supports the correlation point estimate. Many of them make strong mathematical assumptions regarding distributional shape and serial correlation, which are rarely met. More robust statistical methods are needed to increase the accuracy of the confidence intervals. Bootstrap confidence intervals are estimated in the Fortran 90 program PearsonT (Mudelsee, 2003), where the main intention was to get an accurate confidence interval for the correlation coefficient between two time series by taking into account the serial dependence of the process that generated the data. However, Monte Carlo experiments show that the coverage accuracy for smaller data sizes can be improved. Here we adapt the PearsonT program into a new version, called PearsonT3, by calibrating the confidence interval to increase the coverage accuracy. Calibration is a bootstrap resampling technique which basically performs a second bootstrap loop, i.e., resamples from the bootstrap resamples. It offers, like the non-calibrated bootstrap confidence intervals, robustness against the data distribution. A pairwise moving block bootstrap is used to preserve the serial correlation of both time series. The calibration is applied to standard-error-based bootstrap Student's t confidence intervals. The performance of the calibrated confidence intervals is examined with Monte Carlo simulations and compared with the performance of confidence intervals without calibration, that is, PearsonT. The coverage accuracy is evidently better for the calibrated confidence intervals, where the coverage error is acceptably small (i.e., within a few percentage points) already for data sizes as small as 20. One form of climate time series is output from numerical models which simulate the climate system. The method is applied to model data from the high-resolution ocean model INALT01, where the relationship between the Agulhas Leakage and the North Brazil Current is evaluated. Preliminary results show significant correlation between the two variables when there is a 10 year lag between them, which is more or less the time it takes the Agulhas Leakage water to reach the North Brazil Current. Mudelsee, M., 2003. Estimating Pearson's correlation coefficient with bootstrap confidence interval from serially dependent time series. Mathematical Geology 35, 651-665.
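
    The non-calibrated core of such an interval is easy to sketch: a pairwise moving block bootstrap preserves the serial dependence, and a Student's t interval is built from the bootstrap standard error. The calibration step of PearsonT3 would wrap a second resampling loop around this; block length and data below are illustrative only.

        import numpy as np
        from scipy.stats import t

        def block_bootstrap_ci(x, y, n_boot=2000, block=8, alpha=0.05, seed=1):
            """Student's t CI for Pearson r, pairwise moving block bootstrap SE."""
            rng = np.random.default_rng(seed)
            n = len(x)
            r = np.corrcoef(x, y)[0, 1]
            starts = np.arange(n - block + 1)
            reps = np.empty(n_boot)
            for b in range(n_boot):
                # resample whole blocks of indices, jointly for both series
                idx = np.concatenate([s + np.arange(block) for s in
                                      rng.choice(starts, n // block + 1)])[:n]
                reps[b] = np.corrcoef(x[idx], y[idx])[0, 1]
            half = t.ppf(1 - alpha / 2, df=n - 2) * reps.std(ddof=1)
            return r - half, r + half

        rng = np.random.default_rng(2)
        common = np.cumsum(rng.standard_normal(200)) * 0.1   # persistent shared signal
        x = common + rng.standard_normal(200)
        y = common + rng.standard_normal(200)
        print(block_bootstrap_ci(x, y))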

  6. Direct estimation of evoked hemoglobin changes by multimodality fusion imaging

    PubMed Central

    Huppert, Theodore J.; Diamond, Solomon G.; Boas, David A.

    2009-01-01

    In the last two decades, both diffuse optical tomography (DOT) and blood oxygen level dependent (BOLD)-based functional magnetic resonance imaging (fMRI) methods have been developed as noninvasive tools for imaging evoked cerebral hemodynamic changes in studies of brain activity. Although these two technologies measure functional contrast from similar physiological sources, i.e., changes in hemoglobin levels, the two modalities are based on distinct physical and biophysical principles, leading to limitations and strengths of each method. In this work, we describe a unified linear model to combine the complementary spatial, temporal, and spectroscopic resolutions of concurrently measured optical tomography and fMRI signals. Using numerical simulations, we demonstrate that concurrent optical and BOLD measurements can be used to create cross-calibrated estimates of absolute micromolar deoxyhemoglobin changes. We apply this new analysis tool to experimental data acquired simultaneously with both DOT and BOLD imaging during a motor task, demonstrate the ability to more robustly estimate hemoglobin changes in comparison to DOT alone, and show how this approach can provide cross-calibrated estimates of hemoglobin changes. Using this multimodal method, we estimate the calibration of the 3 tesla BOLD signal to be −0.55% ± 0.40% signal change per micromolar change of deoxyhemoglobin. PMID:19021411

  7. Numerical analysis of one-dimensional temperature data for groundwater/surface-water exchange with 1DTempPro

    NASA Astrophysics Data System (ADS)

    Voytek, E. B.; Drenkelfuss, A.; Day-Lewis, F. D.; Healy, R. W.; Lane, J. W.; Werkema, D. D.

    2012-12-01

    Temperature is a naturally occurring tracer, which can be exploited to infer the movement of water through the vadose and saturated zones, as well as the exchange of water between aquifers and surface-water bodies, such as estuaries, lakes, and streams. One-dimensional (1D) vertical temperature profiles commonly show thermal amplitude attenuation and increasing phase lag of diurnal or seasonal temperature variations with propagation into the subsurface. This behavior is described by the heat-transport equation (i.e., the convection-conduction-dispersion equation), which can be solved analytically in 1D under certain simplifying assumptions (e.g., sinusoidal or steady-state boundary conditions and homogeneous hydraulic and thermal properties). Analysis of 1D temperature profiles using analytical models provides estimates of vertical groundwater/surface-water exchange. The utility of these estimates can be diminished when the model assumptions are violated, as is common in field applications. Alternatively, analysis of 1D temperature profiles using numerical models allows for consideration of more complex and realistic boundary conditions. However, such analyses commonly require model calibration and the development of input files for finite-difference or finite-element codes. To address the calibration and input file requirements, a new computer program, 1DTempPro, is presented that facilitates numerical analysis of vertical 1D temperature profiles. 1DTempPro is a graphical user interface (GUI) to the USGS code VS2DH, which numerically solves the flow- and heat-transport equations. Pre- and post-processor features within 1DTempPro allow the user to calibrate VS2DH models to estimate groundwater/surface-water exchange and hydraulic conductivity in cases where hydraulic head is known. This approach improves groundwater/surface-water exchange-rate estimates for real-world data with complexities ill-suited for examination with analytical methods. Additionally, the code allows for time-varying temperature and hydraulic boundary conditions. Here, we present the approach and include examples for several datasets from stream/aquifer systems.
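
    The inverse step that 1DTempPro automates can be illustrated by fitting the vertical Darcy flux q of a 1-D conduction-advection model to a synthetic sensor record. The sketch below is an explicit finite-difference toy with invented properties, not the VS2DH engine:

        import numpy as np
        from scipy.optimize import minimize_scalar

        # invented properties: thermal diffusivity (m^2/s), volumetric heat
        # capacities of water and bulk sediment (J/m^3/K)
        kap, c_w, c_b = 1.5e-6, 4.18e6, 3.0e6
        z = np.linspace(0.0, 1.0, 51)
        dz, dt, nt = z[1] - z[0], 120.0, 7200          # 10 days of 2-minute steps

        def simulate(q):
            """Sensor record at mid-depth for a vertical Darcy flux q (m/s)."""
            T = np.full(z.size, 10.0)
            v = q * c_w / c_b                          # effective thermal velocity
            rec = np.empty(nt)
            for k in range(nt):
                T[0] = 10.0 + 5.0 * np.sin(2 * np.pi * k * dt / 86400.0)  # diurnal BC
                T[1:-1] += dt * (kap * (T[2:] - 2 * T[1:-1] + T[:-2]) / dz**2
                                 - v * (T[2:] - T[:-2]) / (2 * dz))
                T[-1] = T[-2]
                rec[k] = T[25]
            return rec

        obs = simulate(1.0e-6) + 0.05 * np.random.default_rng(3).standard_normal(nt)
        fit = minimize_scalar(lambda q: np.sum((simulate(q) - obs)**2),
                              bounds=(0.0, 5.0e-6), method="bounded")
        print("recovered Darcy flux (m/s):", fit.x)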

  8. Traceable Coulomb blockade thermometry

    NASA Astrophysics Data System (ADS)

    Hahtela, O.; Mykkänen, E.; Kemppinen, A.; Meschke, M.; Prunnila, M.; Gunnarsson, D.; Roschier, L.; Penttilä, J.; Pekola, J.

    2017-02-01

    We present a measurement and analysis scheme for determining traceable thermodynamic temperature at cryogenic temperatures using Coulomb blockade thermometry. The uncertainty of the electrical measurement is improved by utilizing two sampling digital voltmeters instead of the traditional lock-in technique. The remaining uncertainty is dominated by that of the numerical analysis of the measurement data. Two analysis methods are demonstrated: numerical fitting of the full conductance curve and measuring the height of the conductance dip. The complete uncertainty analysis shows that, using either analysis method, the relative combined standard uncertainty (k = 1) in determining the thermodynamic temperature in the temperature range from 20 mK to 200 mK is below 0.5%. In this temperature range, both analysis methods produced temperature estimates that deviated by 0.39% to 0.67% from the reference temperatures provided by a superconducting reference point device calibrated against the Provisional Low Temperature Scale of 2000.
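
    The dip-width analysis rests on the standard primary-thermometer relation for an array of N junctions, V_1/2 ≈ 5.439·N·k_B·T/e, so a temperature follows directly from the measured full width at half minimum of the conductance dip (numbers below are illustrative):

        from scipy.constants import e, k as k_B

        def cbt_temperature(v_half, n_junctions):
            """Thermodynamic T from the full width at half minimum of the
            conductance dip of an N-junction array (weak-blockade regime)."""
            return e * v_half / (5.439 * n_junctions * k_B)

        # e.g. a 100-junction array with a 2.35 mV dip width -> about 50 mK
        print(cbt_temperature(2.35e-3, 100))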

  9. Experimental calibration procedures for rotating Lorentz-force flowmeters

    DOE PAGES

    Hvasta, M. G.; Slighton, N. T.; Kolemen, E.; ...

    2017-07-14

    Rotating Lorentz-force flowmeters are a novel and useful technology with a range of applications in a variety of different industries. However, calibrating these flowmeters can be challenging, time-consuming, and expensive. In this paper, simple calibration procedures for rotating Lorentz-force flowmeters are presented. These procedures eliminate the need for expensive equipment, numerical modeling, redundant flowmeters, and system down-time. Finally, the calibration processes are explained in a step-by-step manner and compared to experimental results.

  10. Dark Energy Survey Year 1 Results: Cross-Correlation Redshifts - Methods and Systematics Characterization

    DOE PAGES

    Gatti, M.

    2018-02-22

    We use numerical simulations to characterize the performance of a clustering-based method to calibrate photometric redshift biases. In particular, we cross-correlate the weak lensing (WL) source galaxies from the Dark Energy Survey Year 1 (DES Y1) sample with redMaGiC galaxies (luminous red galaxies with secure photometric redshifts) to estimate the redshift distribution of the former sample. The recovered redshift distributions are used to calibrate the photometric redshift bias of standard photo-z methods applied to the same source galaxy sample. We also apply the method to three photo-z codes run in our simulated data: Bayesian Photometric Redshift (BPZ), Directional Neighborhood Fitting (DNF), and Random Forest-based photo-z (RF). We characterize the systematic uncertainties of our calibration procedure, and find that these systematic uncertainties dominate our error budget. The dominant systematics are due to our assumption of unevolving bias and clustering across each redshift bin, and to differences between the shapes of the redshift distributions derived by clustering vs photo-z's. The systematic uncertainty in the mean redshift bias of the source galaxy sample is z ≲ 0.02, though the precise value depends on the redshift bin under consideration. Here, we discuss possible ways to mitigate the impact of our dominant systematics in future analyses.
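
    In its most schematic form, the clustering-based estimate behind this calibration is n(z_i) ∝ w_ur(z_i)/√(w_rr(z_i)): the cross-correlation of the unknown sample with reference galaxies in narrow redshift bins, corrected by the reference autocorrelation to absorb the reference bias. The sketch below ignores bias evolution, which the abstract identifies as the dominant systematic; all numbers are toys.

        import numpy as np

        def clustering_nz(w_ur, w_rr, dz):
            """Schematic clustering-z estimate: cross-correlation with reference
            bins, corrected by the reference autocorrelation for its bias."""
            nz = np.clip(w_ur / np.sqrt(w_rr), 0.0, None)
            return nz / (nz.sum() * dz)

        z = np.arange(0.15, 0.95, 0.05)                        # reference bin centres
        w_ur = 0.01 * np.exp(-0.5 * ((z - 0.55) / 0.12)**2)    # toy measurements
        w_rr = np.full(z.size, 0.04)
        nz = clustering_nz(w_ur, w_rr, 0.05)
        print("recovered mean redshift:", np.sum(z * nz) * 0.05)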

  11. Numerical simulations with a FSI-calibrated actuator disk model of wind turbines operating in stratified ABLs

    NASA Astrophysics Data System (ADS)

    Gohari, S. M. Iman; Sarkar, Sutanu; Korobenko, Artem; Bazilevs, Yuri

    2017-11-01

    Numerical simulations of wind turbines operating under different regimes of stability are performed using LES. A reduced model, based on the generalized actuator disk model (ADM), is implemented to represent the wind turbines within the ABL. Data from fluid-solid interaction (FSI) simulations of wind turbines have been used to calibrate and validate the reduced model. The computational cost of this method of including wind turbines is affordable and incurs an overhead as low as 1.45%. Using this reduced model, we study the coupling of unsteady turbulent flow with the wind turbine under different ABL conditions: (1) a neutral ABL with zero heat flux and an inversion layer at 350 m, in which the incoming wind has its maximum mean shear between the upper-tip and lower-tip heights; and (2) a shallow ABL with a surface cooling rate of -1 K/hr, wherein the low-level jet occurs at the wind turbine hub height. We discuss how the differences in the unsteady flow between the two ABL regimes impact the wind turbine performance.

  12. Measurement of Antenna Bore-Sight Gain

    NASA Technical Reports Server (NTRS)

    Fortinberry, Jarrod; Shumpert, Thomas

    2016-01-01

    The absolute or free-field gain of a simple antenna can be approximated using standard antenna theory formulae or, for a more accurate prediction, numerical methods may be employed to solve for antenna parameters including gain. Both of these methods result in reasonable estimates, but in practice antenna gain is usually verified and documented via measurements and calibration. In this paper, a relatively simple, low-cost, yet effective means of determining the bore-sight free-field gain of a VHF/UHF antenna is proposed, using the Brewster angle relationship.

  13. Structural characterization and numerical simulations of flow properties of standard and reservoir carbonate rocks using micro-tomography

    NASA Astrophysics Data System (ADS)

    Islam, Amina; Chevalier, Sylvie; Sassi, Mohamed

    2018-04-01

    With advances in imaging techniques and computational power, Digital Rock Physics (DRP) is becoming an increasingly popular tool to characterize reservoir samples and determine their internal structure and flow properties. In this work, we present the details of imaging, segmentation, and numerical simulation of single-phase flow through a standard homogeneous Silurian dolomite core plug sample as well as a heterogeneous sample from a carbonate reservoir. We develop a procedure that integrates experimental results into the segmentation step to calibrate the porosity. We also look into using two different numerical tools for the simulation, namely Avizo Fire Xlab Hydro, which solves the Stokes equations via the finite volume method, and Palabos, which solves the same equations using the lattice Boltzmann method. Representative Elementary Volume (REV) and isotropy studies are conducted on the two samples, and we show how DRP can be a useful tool to characterize rock properties that are time-consuming and costly to obtain experimentally.
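
    Whichever solver is used, the permeability such single-phase DRP workflows report comes from inverting Darcy's law over the sample, k = Q·μ·L/(A·ΔP); a minimal sketch with invented numbers:

        def darcy_permeability(Q, mu, L, A, dP):
            """Absolute permeability (m^2): Q flow rate (m^3/s), mu viscosity
            (Pa s), L sample length (m), A cross-section (m^2), dP drop (Pa)."""
            return Q * mu * L / (A * dP)

        # illustrative numbers for a ~4 mm cubic subvolume
        k = darcy_permeability(Q=2.0e-12, mu=1.0e-3, L=4.0e-3, A=1.6e-5, dP=100.0)
        print(k / 9.869e-13, "darcy")        # 1 darcy = 9.869e-13 m^2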

  14. CALIBRATION OF SUBSURFACE BATCH AND REACTIVE-TRANSPORT MODELS INVOLVING COMPLEX BIOGEOCHEMICAL PROCESSES

    EPA Science Inventory

    In this study, the calibration of subsurface batch and reactive-transport models involving complex biogeochemical processes was systematically evaluated. Two hypothetical nitrate biodegradation scenarios were developed and simulated in numerical experiments to evaluate the perfor...

  15. Calibration of z-axis linearity for arbitrary optical topography measuring instruments

    NASA Astrophysics Data System (ADS)

    Eifler, Matthias; Seewig, Jörg; Hering, Julian; von Freymann, Georg

    2015-05-01

    The calibration of the height axis of optical topography measurement instruments is essential for reliable topography measurements. A state-of-the-art technique for the calibration of the linearity and amplification of the z-axis is the use of step height artefacts. However, a proper calibration requires numerous step heights at different positions within the measurement range. The procedure is extensive and uses artificial surface structures that are not related to real measurement tasks. Given these limitations, approaches should be developed that work for arbitrary topography measurement devices and require little effort. Hence, we propose calibration artefacts which are based on the 3D-Abbott-Curve and image desired surface characteristics. Further, real geometric structures are used as the initial point of the calibration artefact. Based on these considerations, an algorithm is introduced which transforms an arbitrary measured surface into a measurement artefact for z-axis linearity. The method works both for profiles and topographies. To account for the effects of manufacturing, measurement, and evaluation, an iterative approach is chosen. The mathematical impact of these processes can be calculated with morphological signal processing. The artefact is manufactured with 3D laser lithography and characterized with different optical measurement devices. The introduced calibration routine can calibrate the entire z-axis range within one measurement and minimizes the required effort. With the results it is possible to locate potential linearity deviations and to adjust the z-axis. Results from different optical measurement principles are compared in order to evaluate the capabilities of the new artefact.

  16. Self-calibrating models for dynamic monitoring and diagnosis

    NASA Technical Reports Server (NTRS)

    Kuipers, Benjamin

    1994-01-01

    The present goal in qualitative reasoning is to develop methods for automatically building qualitative and semiquantitative models of dynamic systems and to use them for monitoring and fault diagnosis. The qualitative approach to modeling provides a guarantee of coverage while our semiquantitative methods support convergence toward a numerical model as observations are accumulated. We have developed and applied methods for automatic creation of qualitative models, developed two methods for obtaining tractable results on problems that were previously intractable for qualitative simulation, and developed more powerful methods for learning semiquantitative models from observations and deriving semiquantitative predictions from them. With these advances, qualitative reasoning comes significantly closer to realizing its aims as a practical engineering method.

  17. Invited article: Time accurate mass flow measurements of solid-fueled systems.

    PubMed

    Olliges, Jordan D; Lilly, Taylor C; Joslyn, Thomas B; Ketsdever, Andrew D

    2008-10-01

    A novel diagnostic method is described that utilizes a thrust stand mass balance (TSMB) to directly measure time-accurate mass flow from a solid-fuel thruster. The accuracy of the TSMB mass flow measurement technique was demonstrated in three ways: with an idealized numerical simulation, by verifying a fluid mass calibration with high-speed digital photography, and by measuring mass loss in more than 30 hybrid rocket motor firings. The dynamic response of the mass balance was assessed through weight calibration and used to derive spring, damping, and mass moment of inertia coefficients for the TSMB. These dynamic coefficients were used to determine the mass flow rate and total mass loss within an acrylic and gaseous oxygen hybrid rocket motor firing. Intentional variations in the oxygen flow rate resulted in corresponding variations in the total propellant mass flow, as expected. The TSMB was optimized to determine mass losses of up to 2.5 g and measured total mass loss to within 2.5% of that calculated by a NIST-calibrated digital scale. Using this method, a mass flow resolution of 0.0011 g/s, or 2% of the average mass flow in this study, has been achieved.

  1. AN ALTERNATIVE CALIBRATION OF CR-39 DETECTORS FOR RADON DETECTION BEYOND THE SATURATION LIMIT.

    PubMed

    Franci, Daniele; Aureli, Tommaso; Cardellini, Francesco

    2016-12-01

    Time-integrated measurements of indoor radon levels are commonly carried out using solid-state nuclear track detectors (SSNTDs), due to the numerous advantages offered by this radiation detection technique. However, the use of SSNTDs also presents some problems that may affect the accuracy of the results. The effect of overlapping tracks often results in underestimation of the detected track density, which reduces the counting efficiency with increasing radon exposure. This article addresses the effect of overlapping tracks by proposing an alternative calibration technique based on measurement of the fraction of the detector surface covered by alpha tracks. The method has been tested against a set of Monte Carlo data and then applied to a set of experimental data collected at the radon chamber of the Istituto Nazionale di Metrologia delle Radiazioni Ionizzanti, at the ENEA centre in Casaccia, using CR-39 detectors. It has been proved that the method allows the detectable range of radon exposure to be extended far beyond the intrinsic limit imposed by the standard calibration based on track density. © The Author 2015. Published by Oxford University Press. All rights reserved.
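
    One way to make the surface-coverage idea concrete is a Boolean (Poisson) overlap model: tracks of mean area a at true density ρ cover a fraction f = 1 − exp(−ρa) of the surface, which remains invertible long after individual-track counting saturates. This is one reading of the approach, not the authors' exact calibration curve:

        import numpy as np

        def density_from_covered_fraction(f, track_area):
            """Invert f = 1 - exp(-rho * a) for the true track density rho."""
            return -np.log(1.0 - f) / track_area

        a = 1.2e-6      # mean track area in cm^2 (illustrative)
        f = 0.60        # measured fraction of the surface covered by tracks
        print(density_from_covered_fraction(f, a), "tracks/cm^2")   # ~7.6e5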

  2. Design and numerical simulation on an auto-cumulative flowmeter in horizontal oil-water two-phase flow

    NASA Astrophysics Data System (ADS)

    Xie, Beibei; Kong, Lingfu; Kong, Deming; Kong, Weihang; Li, Lei; Liu, Xingbin; Chen, Jiliang

    2017-11-01

    In order to accurately measure the flow rate under low-yield horizontal well conditions, an auto-cumulative flowmeter (ACF) is proposed. With the proposed flowmeter, the oil flow rate in horizontal oil-water two-phase segregated flow can be precisely extracted. The computational fluid dynamics software Fluent was used to simulate the flow through the ACF in oil-water two-phase flow. To calibrate the simulated measurement of the ACF, a novel oil flow rate measurement method is further proposed. The models of the ACF were simulated to obtain and calibrate the oil flow rate under different total flow rates and oil cuts. Using the finite-element method, the structure of the seven conductance probes in the ACF was simulated, and the response values of the probes under oil-water segregated flow conditions were obtained. Experiments on oil-water segregated flow under different heights of oil accumulation in horizontal oil-water two-phase flow were carried out to calibrate the ACF. The validity of the oil flow rate measurement in horizontal oil-water two-phase flow was verified by simulation and experimental results.

  3. Improvement in QEPAS system utilizing a second harmonic based wavelength calibration technique

    NASA Astrophysics Data System (ADS)

    Zhang, Qinduan; Chang, Jun; Wang, Fupeng; Wang, Zongliang; Xie, Yulei; Gong, Weihua

    2018-05-01

    A simple laser wavelength calibration technique based on the second harmonic signal is demonstrated in this paper to improve the performance of a quartz-enhanced photoacoustic spectroscopy (QEPAS) gas sensing system, e.g. its signal-to-noise ratio (SNR), detection limit and long-term stability. A constant current corresponding to the gas absorption line, combined with a sinusoidal signal at frequency f/2, is used to drive the laser (constant driving mode), and a software-based real-time wavelength calibration technique is developed to eliminate the wavelength drift due to ambient fluctuations. Compared to conventional wavelength modulation spectroscopy (WMS), this method allows a lower filtering bandwidth and an averaging algorithm to be applied to the QEPAS system, improving the SNR and detection limit. In addition, the real-time wavelength calibration technique guarantees that the laser output is modulated steadily at the gas absorption line. Water vapor was chosen as the target gas to evaluate the performance compared to the constant driving mode and a conventional WMS system. The water vapor sensor was designed to be insensitive to incoherent external acoustic noise by the numerical averaging technique. As a result, the SNR of the wavelength-calibration-based system increases 12.87 times compared to the conventional WMS system. The new system achieved a better linear response (R² = 0.9995) in the concentration range from 300 to 2000 ppmv, and achieved a minimum detection limit (MDL) of 630 ppbv.

  4. On-body calibration and measurements using personal radiofrequency exposimeters in indoor diffuse and specular environments.

    PubMed

    Aminzadeh, Reza; Thielens, Arno; Bamba, Aliou; Kone, Lamine; Gaillot, Davy Paul; Lienard, Martine; Martens, Luc; Joseph, Wout

    2016-07-01

    For the first time, the response of personal exposimeters (PEMs) is studied under diffuse field exposure in indoor environments. To this aim, both numerical simulations, using the finite-difference time-domain method, and calibration measurements were performed in the range of 880-5875 MHz, covering 10 frequency bands in Belgium. Two PEMs were mounted on the body of a human male subject and calibrated on-body in an anechoic chamber (non-diffuse fields) and a reverberation chamber (RC) (diffuse fields). This was motivated by the fact that electromagnetic waves in indoor environments have both specular and diffuse components. Both calibrations show that PEMs underestimate the actual incident electromagnetic fields. This can be compensated by using an on-body response. Moreover, it is shown that these responses differ between the anechoic chamber and the RC. Therefore, it is advised to use an on-body calibration in an RC in future indoor PEM measurements where diffuse fields are present. Using the response averaged over two PEMs reduced the measurement uncertainty compared to single PEMs. Following the calibration, measurements in a realistic indoor environment were done for the wireless fidelity (WiFi-5G) band. Measured power density values are maximally 8.9 mW/m² and 165.8 μW/m² on average. These satisfy the reference levels issued by the International Commission on Non-Ionizing Radiation Protection in 1998. Power density values obtained by applying the on-body calibration in the RC are higher than the values obtained from no-body calibration (only PEMs) and on-body calibration in the anechoic room, by factors of 7.55 and 2.21, respectively. Bioelectromagnetics. 37:298-309, 2016. © 2016 Wiley Periodicals, Inc.

  5. A comparison of solute-transport solution techniques based on inverse modelling results

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2000-01-01

    Five common numerical techniques (finite difference, predictor-corrector, total-variation-diminishing, method-of-characteristics, and modified-method-of-characteristics) were tested using simulations of a controlled conservative tracer-test experiment through a heterogeneous, two-dimensional sand tank. The experimental facility was constructed using randomly distributed homogeneous blocks of five sand types. This experimental model provides an outstanding opportunity to compare the solution techniques because of the heterogeneous hydraulic conductivity distribution of known structure, and the availability of detailed measurements with which to compare simulated concentrations. The present work uses this opportunity to investigate how three common types of results - simulated breakthrough curves, sensitivity analysis, and calibrated parameter values - change in this heterogeneous situation, given the different methods of simulating solute transport. The results show that simulated peak concentrations, even at very fine grid spacings, varied because of different amounts of numerical dispersion. Sensitivity analysis results were robust in that they were independent of the solution technique. They revealed extreme correlation between hydraulic conductivity and porosity, and that the breakthrough curve data did not provide enough information about the dispersivities to estimate individual values for the five sands. However, estimated hydraulic conductivity values are significantly influenced by both the large possible variations in model dispersion and the amount of numerical dispersion present in the solution technique.

  6. First in-flight results of Pleiades 1A innovative methods for optical calibration

    NASA Astrophysics Data System (ADS)

    Kubik, Philippe; Lebègue, Laurent; Fourest, Sébastien; Delvit, Jean-Marc; de Lussy, Françoise; Greslou, Daniel; Blanchet, Gwendoline

    2017-11-01

    The PLEIADES program is a space Earth observation system led by France under the leadership of the French Space Agency (CNES). Since it was successfully launched on December 17th, 2011, the Pleiades 1A high resolution optical satellite has been thoroughly tested and validated during the commissioning phase led by CNES. The whole system has been designed to deliver submetric optical images to users whose needs were taken into account very early in the design process. This satellite opens a new era in Europe, since its off-nadir viewing capability delivers worldwide 2-day access, and its great agility makes it possible to image numerous targets, strips and stereo coverage from the same orbit. Its imaging capability of more than 450 images of 20 km x 20 km per day can fulfill a broad spectrum of applications for both civilian and defence users. For an Earth-observing satellite with no on-board calibration source, the commissioning phase is a critical quest for well-characterized Earth landscapes and ground patterns that have to be imaged by the camera in order to compute or fit the parameters of the viewing models. It may take a long time to get the required scenes with no cloud, whilst atmosphere corrections need simultaneous measurements that are not always possible. The paper focuses on new in-flight calibration methods that were prepared before the launch in the framework of the PLEIADES program: they take advantage of the satellite agility, which can deeply relax the operational constraints and may improve calibration accuracy. Many performances of the camera were assessed thanks to dedicated innovative methods that were successfully validated during the commissioning period: Modulation Transfer Function (MTF), refocusing, absolute calibration, and line-of-sight stability were estimated on stars and on the Moon. Detector normalization and radiometric noise were computed from specific pictures of the Earth with a dedicated guidance profile. The geometric viewing frame was determined with a particular image acquisition combining different views of the same target. All these new methods are expected to play a key role in the future when active optics will need a sophisticated in-flight calibration strategy.

  7. [Numerical simulation and operation optimization of biological filter].

    PubMed

    Zou, Zong-Sen; Shi, Han-Chang; Chen, Xiang-Qiang; Xie, Xiao-Qing

    2014-12-01

    BioWin software and two sensitivity analysis methods were used to simulate the Denitrification Biological Filter (DNBF) + Biological Aerated Filter (BAF) process in the Yuandang Wastewater Treatment Plant. Based on the BioWin model of the DNBF + BAF process, the operation data of September 2013 were used for sensitivity analysis and model calibration, and the operation data of October 2013 were used for model validation. The results indicated that the calibrated model could accurately simulate practical DNBF + BAF processes, and that the most sensitive parameters were those related to biofilm, OHOs and aeration. After validation and calibration, the model was used for process optimization by simulating operation results under different conditions. The results showed that the best operating condition for discharge standard B was: reflux ratio = 50%, no methanol addition, influent C/N = 4.43; while the best operating condition for discharge standard A was: reflux ratio = 50%, influent COD = 155 mg·L(-1) after methanol addition, influent C/N = 5.10.

  8. An Observation Analysis Tool for time-series analysis and sensor management in the FREEWAT GIS environment for water resources management

    NASA Astrophysics Data System (ADS)

    Cannata, Massimiliano; Neumann, Jakob; Cardoso, Mirko; Rossetto, Rudy; Foglia, Laura; Borsi, Iacopo

    2017-04-01

    In situ time-series are an important aspect of environmental modelling, especially with the advancement of numerical simulation techniques and increasing model complexity. In order to make use of the increasing amount of data available through the requirements of the EU Water Framework Directive, the FREEWAT GIS environment incorporates the newly developed Observation Analysis Tool (OAT) for time-series analysis. The tool is used to import time-series data into QGIS from local CSV files, online sensors using the istSOS service, or MODFLOW model result files, and enables visualisation, pre-processing of data for model development, and post-processing of model results. OAT can be used as a pre-processor for calibration observations, integrating the creation of observations for calibration directly from sensor time-series. The tool consists of an expandable Python library of processing methods and an interface integrated in the QGIS FREEWAT plug-in, which includes a large number of modelling capabilities, data management tools and calibration capacity.

  9. White-light Interferometry using a Channeled Spectrum: II. Calibration Methods, Numerical and Experimental Results

    NASA Technical Reports Server (NTRS)

    Zhai, Chengxing; Milman, Mark H.; Regehr, Martin W.; Best, Paul K.

    2007-01-01

    In the companion paper, [Appl. Opt. 46, 5853 (2007)] a highly accurate white light interference model was developed from just a few key parameters characterized in terms of various moments of the source and instrument transmission function. We develop and implement the end-to-end process of calibrating these moment parameters together with the differential dispersion of the instrument and applying them to the algorithms developed in the companion paper. The calibration procedure developed herein is based on first obtaining the standard monochromatic parameters at the pixel level: wavenumber, phase, intensity, and visibility parameters via a nonlinear least-squares procedure that exploits the structure of the model. The pixel level parameters are then combined to obtain the required 'global' moment and dispersion parameters. The process is applied to both simulated scenarios of astrometric observations and to data from the microarcsecond metrology testbed (MAM), an interferometer testbed that has played a prominent role in the development of this technology.

  10. VS2DI: Model use, calibration, and validation

    USGS Publications Warehouse

    Healy, Richard W.; Essaid, Hedeff I.

    2012-01-01

    VS2DI is a software package for simulating water, solute, and heat transport through soils or other porous media under conditions of variable saturation. The package contains a graphical preprocessor for constructing simulations, a postprocessor for displaying simulation results, and numerical models that solve for flow and solute transport (VS2DT) and flow and heat transport (VS2DH). Flow is described by the Richards equation, and solute and heat transport are described by advection-dispersion equations; the finite-difference method is used to solve these equations. Problems can be simulated in one, two, or three (assuming radial symmetry) dimensions. This article provides an overview of calibration techniques that have been used with VS2DI; included is a detailed description of calibration procedures used in simulating the interaction between groundwater and a stream fed by drainage from agricultural fields in central Indiana. Brief descriptions of VS2DI and the various types of problems that have been addressed with the software package are also presented.

  11. Investigation of the Rock Fragmentation Process by a Single TBM Cutter Using a Voronoi Element-Based Numerical Manifold Method

    NASA Astrophysics Data System (ADS)

    Liu, Quansheng; Jiang, Yalong; Wu, Zhijun; Xu, Xiangyu; Liu, Qi

    2018-04-01

    In this study, a two-dimensional Voronoi element-based numerical manifold method (VE-NMM) is developed to analyze the granite fragmentation process by a single tunnel boring machine (TBM) cutter under different confining stresses. A Voronoi tessellation technique is adopted to generate the polygonal grain assemblage to approximate the microstructure of a granite sample from the Gubei colliery of the Huainan mining area in China. A modified interface contact model with cohesion and tensile strength is embedded into the numerical manifold method (NMM) to interpret the interactions between the rock grains. Numerical uniaxial compression and Brazilian splitting tests are first conducted to calibrate and validate the VE-NMM models against the laboratory experiment results using a trial-and-error method. On this basis, numerical simulations of rock fragmentation by a single TBM cutter are conducted. The simulated crack initiation and propagation process, as well as the indentation load-penetration depth behavior, accurately reproduces the laboratory indentation test results. The influence of confining stress on rock fragmentation is also investigated. Simulation results show that radial tensile cracks are more likely to be generated under a low confining stress, eventually coalescing into a major fracture along the loading axis. However, with increasing confining stress, more side cracks initiate and coalesce, resulting in the formation of rock chips at the upper surface of the model. In addition, the peak indentation load also increases with increasing confining stress, indicating that a higher thrust force is usually needed during TBM boring in deep tunnels.

  12. Empirical and numerical investigation of mass movements - data fusion and analysis

    NASA Astrophysics Data System (ADS)

    Schmalz, Thilo; Eichhorn, Andreas; Buhl, Volker; Tinkhof, Kurt Mair Am; Preh, Alexander; Tentschert, Ewald-Hans; Zangerl, Christian

    2010-05-01

    Increasing settlement activity in mountainous regions and the appearance of extreme climatic conditions motivate the investigation of landslides. Within the last few years a significant rise in disastrous slides has been registered, which generated broad public interest and requests for security measures. The FWF (Austrian Science Fund) funded project 'KASIP' (Knowledge-based Alarm System with Identified Deformation Predictor) deals with the development of a new type of alarm system based on calibrated numerical slope models for the realistic calculation of failure scenarios. In KASIP, calibration is the optimal adaptation of a numerical model to available monitoring data by least-squares techniques (e.g. adaptive Kalman filtering). Adaptation means the determination of a priori uncertain physical parameters such as the strength of the geological structure. The object of our studies in KASIP is the landslide 'Steinlehnen' near Innsbruck (Northern Tyrol, Austria). The first part of the presentation is focussed on the determination of geometrical surface information. This also includes the description of the monitoring system for the collection of the displacement data and filter approaches for the estimation of the slope's kinematic behaviour. The necessity of continuous monitoring and the effect of data gaps on reliable filter results and the prediction of the future state is discussed. The second part of the presentation is focussed on the numerical modelling of the slope by FD (finite difference) methods and the development of the adaptive Kalman filter. The numerical slope model is realised with FLAC3D (software company HCItasca Ltd.). The model contains different geomechanical approaches (like Mohr-Coulomb) and enables the calculation of large deformations and the failure of the slope. Stability parameters (like the factor of safety, FS) allow the evaluation of the current state of the slope. Until now, the adaptation of relevant material parameters has often been performed by trial and error. This common method shall be improved by adaptive Kalman-filtering methods, which, in contrast to trial and error, also consider the stochastic information of the input data. In particular, the estimation of strength parameters (cohesion c, angle of internal friction phi) in a dynamic consideration of the slope is discussed. Problems with conditioning and numerical stability of the filter matrices, memory overflow and computing time are outlined. It is shown that the Kalman filter is in principle suitable for a semi-automated adaptation process and obtains realistic values for the unknown material parameters.
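
    The adaptation idea can be reduced to its core: treat the uncertain strength parameter as a random-walk state and let displacement observations correct it sequentially. The sketch below uses a stand-in forward model (not FLAC3D) and a scalar extended-Kalman update:

        import numpy as np

        def forward(phi, t):
            """Stand-in for the FD slope model (not FLAC3D): cumulative
            displacement grows as phi drops below a nominal 40 degrees."""
            return 0.8 * (40.0 - phi) * t

        rng = np.random.default_rng(5)
        phi_true, R, Q = 32.0, 0.5**2, 0.02**2        # truth, obs. and process noise
        t_obs = np.arange(1.0, 31.0)                  # 30 daily displacement readings
        z_obs = forward(phi_true, t_obs) + rng.normal(0.0, 0.5, t_obs.size)

        phi, P = 38.0, 4.0                            # prior estimate and variance
        for t, z in zip(t_obs, z_obs):
            P += Q                                    # random-walk prediction step
            H = (forward(phi + 1e-3, t) - forward(phi, t)) / 1e-3  # numeric Jacobian
            K = P * H / (H * P * H + R)               # scalar Kalman gain
            phi += K * (z - forward(phi, t))          # measurement update
            P *= 1.0 - K * H
        print("estimated friction angle (deg):", round(phi, 2))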

  13. Application of the dynamic calibration method to international monitoring system stations in Central Asia using natural seismicity data

    NASA Astrophysics Data System (ADS)

    Kedrov, O. K.; Kedrov, E. O.; Sergeyeva, N. A.; Zabarinskaya, L. P.; Gordon, V. R.

    2008-05-01

    The dynamic calibration method (DCM), using natural seismicity data and initially elaborated in [Kedrov, 2001; Kedrov et al., 2001; Kedrov and Kedrov, 2003], is applied to International Monitoring System (IMS) stations in Central Asia. The algorithm of the method is refined and a program is designed for calibrating diagnostic parameters (discriminants) that characterize a seismic source on the source-station traces. The DCM calibration of stations with respect to the region under study is performed by choosing attenuation coefficients that adapt the diagnostic parameters to the conditions in a reference region; in this method, the stable Eurasia region is used as the latter. The calibration used numerical data samples taken from the archive of the International Data Centre (IDC) for the IMS stations MKAR, BVAR, EIL, ASF, and CMAR. In this paper, we used discriminants in the spectral and time domains that have the form D_i = X_i - a_m m_b - b_Δ log Δ and are independent of the magnitude m_b and the epicentral distance Δ; these discriminants were elaborated in [Kedrov et al., 1990; Kedrov and Lyuke, 1999] on the basis of a method used for identification of events at regional distances in Eurasia. Prerequisites of the DCM are the assumptions that the coefficient a_m is region-independent and that the coefficient b_Δ depends only on the geotectonic characteristics of the medium and not on the source type. Thus, b_Δ can be evaluated from a sample of earthquakes alone in the region studied; it is used for adapting the discriminants D(X_i) in the region studied to the reference region. The algorithm is constructed in such a way that corrected values of D(X_i) are calculated from the estimated calibration coefficients b_Δ, after which natural events in the region under study are selected by filtering. Empirical estimates of the filtering efficiency vary from station to station in a range of 95-100%. The DCM was independently tested using records obtained at the IRIS (Incorporated Research Institutions for Seismology) stations BRVK and MAKZ from explosions detonated in India on May 11, 1998, and Pakistan on May 28, 1998; these stations are similar in location and recording instrumentation characteristics to the IMS stations BVAR and MKAR. This test resulted in correct recognition of the source type and thereby directly confirmed the validity of the proposed method for calibrating stations with the use of natural seismicity data. It is shown that the calibration coefficients b_Δ for traces similar in signal propagation conditions (e.g., the traces from Iran to the stations EIL and ASF) are comparable for nearly all diagnostic parameters. We conclude that dynamic calibration of stations using natural seismicity data in a region where no explosions have been detonated can be significant for rapid and inexpensive calibration of IMS stations. The DCM can also be used for recognition of industrial chemical explosions that are sometimes erroneously classified in regional catalogs as earthquakes.
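
    Numerically, the calibration amounts to regressing the magnitude-corrected parameters of a regional earthquake sample on log Δ to obtain b_Δ, then correcting each discriminant before filtering. A sketch of that step alone, on synthetic values:

        import numpy as np

        def fit_b_delta(X, mb, log_delta, a_m):
            """Least-squares distance coefficient b_delta from a regional
            earthquake sample, with a_m held fixed (region-independent)."""
            A = np.column_stack([log_delta, np.ones_like(log_delta)])
            return np.linalg.lstsq(A, X - a_m * mb, rcond=None)[0][0]

        def discriminant(X, mb, log_delta, a_m, b_delta):
            """D_i = X_i - a_m*m_b - b_delta*log(Delta), per the abstract."""
            return X - a_m * mb - b_delta * log_delta

        # synthetic sample; all coefficient values are illustrative
        rng = np.random.default_rng(7)
        mb = rng.uniform(3.5, 5.5, 40)
        logd = np.log10(rng.uniform(300.0, 1500.0, 40))
        X = 0.9 * mb - 1.7 * logd + 0.1 * rng.standard_normal(40)
        b = fit_b_delta(X, mb, logd, a_m=0.9)
        print("fitted b_delta:", round(b, 2))          # recovers about -1.7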

  14. Analysis and calibration of stage axial vibration for synchrotron radiation nanoscale computed tomography.

    PubMed

    Fu, Jian; Li, Chen; Liu, Zhenzhong

    2015-10-01

    Synchrotron radiation nanoscale computed tomography (SR nano-CT) is a powerful analysis tool and can be used to perform chemical identification, mapping, or speciation of carbon and other elements together with X-ray fluorescence and X-ray absorption near edge structure (XANES) imaging. In practical applications, there are often challenges for SR nano-CT due to the misaligned geometry caused by axial vibration of the sample stage. Such vibration occurs quite frequently because of mechanical errors of manufacturing and assembly and thermal expansion during the time-consuming scanning. The axial vibration leads to structure overlap among neighboring layers and degrades imaging results by imposing artifacts on the nano-CT images; this becomes worse for samples with complicated axial structure. In this work, we analyze the influence of axial vibration on nano-CT images by means of partial derivatives. An axial vibration calibration method for SR nano-CT is then developed and investigated, based on the cross-correlation of plane integral curves of the sample at different view angles. This work comprises a numerical study of the method and its experimental verification using a dataset measured with the full-field transmission X-ray microscope nano-CT setup at beamline 4W1A of the Beijing Synchrotron Radiation Facility. The results demonstrate that the presented method can handle stage axial vibration. It works for random axial vibration and needs neither a calibration phantom nor additional calibration scanning. It will be helpful for the development and application of synchrotron radiation nano-CT systems.
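
    The correction step can be sketched as finding, for each view, the axial shift that maximizes the cross-correlation between that view's plane-integral curve and a reference curve. An integer-pixel toy version:

        import numpy as np

        def axial_shift(curve, reference, max_shift=20):
            """Integer-pixel axial offset maximizing the cross-correlation of
            two plane-integral curves."""
            shifts = np.arange(-max_shift, max_shift + 1)
            c = curve - curve.mean()
            r = reference - reference.mean()
            scores = [np.dot(np.roll(c, s), r) for s in shifts]
            return shifts[int(np.argmax(scores))]

        z = np.arange(200)
        ref = np.exp(-0.5 * ((z - 100) / 12.0)**2)      # reference view
        vib = np.roll(ref, 7) + 0.01 * np.random.default_rng(9).standard_normal(200)
        print(axial_shift(vib, ref))                    # -7: shift back by 7 pixels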

  15. Simulation of water flow in fractured porous medium by using discretized virtual internal bond

    NASA Astrophysics Data System (ADS)

    Peng, Shujun; Zhang, Zhennan; Li, Chunfang; He, Guofu; Miao, Guoqing

    2017-12-01

    The discretized virtual internal bond (DVIB) method is adopted to simulate water flow in fractured porous media. The intact porous medium is permeable because it contains numerous micro cracks and pores. These micro discontinuities construct a fluid channel network. The representative volume of this fluid channel network is modeled as a lattice bond cell with a finite number of bonds in a statistical sense. Each bond serves as a fluid channel. In fractured porous media, many bond cells are cut by macro fractures. The conductivity of the fracture facet in a bond cell is taken over by the bonds parallel to the flow direction. The equivalent permeability and volumetric storage coefficient of a micro bond are calibrated based on the ideal bond cell conception, which makes it unnecessary to consider the detailed geometry of a specific element. Such a parameter calibration method is flexible and applicable to any type of element. The accuracy checks suggest that this method has satisfactory accuracy in both steady and transient flow simulation. To simulate the massive fractures in a rock mass, the bond cells intersected by a fracture are assigned aperture values, which are assumed to be random numbers following a certain distribution law. By this method, any number of fractures can be implicitly incorporated into the background mesh, avoiding the setup of fracture elements and mesh modification. The fracture aperture heterogeneity is well represented by this means. The simulation examples suggest that the present method is a feasible, simple and efficient approach to the numerical simulation of water flow in fractured porous media.

  17. Comparison of accelerometer data calibration methods used in thermospheric neutral density estimation

    NASA Astrophysics Data System (ADS)

    Vielberg, Kristin; Forootan, Ehsan; Lück, Christina; Löcher, Anno; Kusche, Jürgen; Börger, Klaus

    2018-05-01

    Ultra-sensitive space-borne accelerometers on board low Earth orbit (LEO) satellites are used to measure non-gravitational forces acting on the surface of these satellites. These forces consist of the Earth radiation pressure, the solar radiation pressure and the atmospheric drag; the first two are caused by radiation emitted from the Earth and the Sun, respectively, and the latter is related to the thermospheric density. On-board accelerometer measurements contain systematic errors, which need to be mitigated by calibration before their use in gravity recovery or thermospheric neutral density estimation. Therefore, we improve, apply and compare three calibration procedures: (1) a multi-step numerical estimation approach based on the numerical differentiation of the kinematic orbits of LEO satellites; (2) a calibration of accelerometer observations within the dynamic precise orbit determination procedure; and (3) a comparison of observed to modeled forces acting on the surface of LEO satellites. Here, accelerometer measurements obtained by the Gravity Recovery And Climate Experiment (GRACE) are used. Time series of bias and scale factor derived from the three calibration procedures are found to differ on timescales of a few days to months. Results are more similar (statistically significant) on longer timescales, where approaches (1) and (2) agree better with approach (3) during medium and high solar activity. Calibrated accelerometer observations are then applied to estimate thermospheric neutral densities. Differences between accelerometer-based density estimates and those from empirical neutral density models, e.g., NRLMSISE-00, are observed to be significant during quiet periods, on average 22 % of the simulated densities (during low solar activity), and up to 28 % during high solar activity. Therefore, daily corrections are estimated for neutral densities derived from NRLMSISE-00. Our results indicate that these corrections improve model-based density simulations and provide density estimates at locations outside the vicinity of the GRACE satellites, in particular during periods of high solar/magnetic activity, e.g., during the St. Patrick's Day storm on 17 March 2015.
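
    At its core, an accelerometer calibration of this kind estimates a bias and a scale factor per axis. A minimal least-squares sketch in the spirit of approach (3), comparing observed to modeled accelerations, is given below; the linear one-axis model and the variable names are our simplifications.

    ```python
    import numpy as np

    def calibrate_axis(acc_raw, acc_model):
        """Per-axis estimate of scale s and bias b in
        acc_model ~ s * acc_raw + b, solved in the least-squares sense."""
        A = np.column_stack([acc_raw, np.ones_like(acc_raw)])
        (scale, bias), *_ = np.linalg.lstsq(A, acc_model, rcond=None)
        return scale, bias

    # calibrated series for one axis: acc_cal = scale * acc_raw + bias
    ```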

  18. Fragmentation modeling of a resin bonded sand

    NASA Astrophysics Data System (ADS)

    Hilth, William; Ryckelynck, David

    2017-06-01

    Cemented sands exhibit a complex mechanical behavior that can lead to sophisticated models with numerous parameters lacking real physical meaning. However, a rather simple generalized critical-state bonded soil model has proven to be a relevant compromise between easy calibration and good results. The constitutive model adopts a non-associated elasto-plastic formulation within the critical-state framework. The calibration procedure, using standard laboratory tests, is complemented by the study of a uniaxial compression test observed by tomography. Using finite element simulations, this test is simulated as a non-homogeneous 3D medium. The tomography of the compression sample gives access to 3D displacement fields through image correlation techniques. Unfortunately, these fields have missing experimental data because correlation resolution is low at small displacement magnitudes. We propose a recovery method that reconstructs full 3D displacement fields and 2D boundary displacement fields. These fields are mandatory for calibrating the constitutive parameters with 3D finite element simulations. The proposed recovery technique is based on a singular value decomposition of the available experimental data. This calibration protocol enables an accurate prediction of the fragmentation of the specimen.
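
    The SVD-based recovery idea can be sketched as a low-rank imputation: missing entries of the snapshot matrix are filled by iterating a truncated singular value decomposition. The rank, iteration count, and matrix layout below are our assumptions, not the authors' exact algorithm.

    ```python
    import numpy as np

    def svd_recover(U, mask, rank=3, n_iter=100):
        """Fill missing displacement entries (mask == False) by iterating a
        truncated SVD. U is a 2-D snapshot matrix (space x load steps)."""
        X = np.where(mask, U, 0.0)
        for _ in range(n_iter):
            u, s, vt = np.linalg.svd(X, full_matrices=False)
            low = (u[:, :rank] * s[:rank]) @ vt[:rank]  # rank-r approximation
            X = np.where(mask, U, low)                  # keep measured data
        return X
    ```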

  19. Calculating Effective Elastic Properties of Berea Sandstone Using Segmentation-less Method without Targets

    NASA Astrophysics Data System (ADS)

    Ikeda, K.; Goldfarb, E. J.; Tisato, N.

    2017-12-01

    Digital rock physics (DRP) allows performing common laboratory experiments on numerical models to estimate, for example, rock hydraulic permeability. The standard DRP procedure involves turning a rock sample into a numerical array using X-ray micro computed tomography (micro-CT). Each element of the array bears a value proportional to the X-ray attenuation of the rock at that element (voxel). However, the traditional DRP methodology, which includes segmentation, over-predicts rock moduli by significant amounts (e.g., 100%). Recently, a new methodology - the segmentation-less approach - has been proposed, leading to more accurate DRP estimates of elastic moduli. This new method is based on homogenization theory. Typically, the segmentation-less approach requires calibration points from objects of known density, known as targets. Not all micro-CT datasets have these reference points. Here, we describe how we perform segmentation- and target-less DRP to estimate the elastic properties of rocks (i.e., elastic moduli), which are crucial parameters for subsurface modeling. We calculate the elastic properties of a Berea sandstone sample that was scanned at a resolution of 40 microns per voxel. We transformed the CT images into density matrices using a polynomial fitting curve with four calibration points: the whole rock, the center of quartz grains, the center of iron oxide grains, and the center of air-filled volumes. The first calibration point is obtained by assigning the density of the whole rock to the average of all CT numbers in the dataset. Then, we locate the center of each phase by finding local extrema points in the dataset. The average CT numbers of these center points are assigned densities equal to either the pristine minerals (quartz and iron oxide) or air. Next, the density matrices are transformed into porosity and moduli matrices by means of an effective medium theory. Finally, the effective static bulk and shear moduli are numerically calculated using a Matlab code derived from the NIST elas3D code. The calculated quasi-static P- and S-wave speeds overestimate the laboratory results by 37% and 5%, respectively. In fact, our approach predicts wave speeds more accurately than traditional DRP methods. Nevertheless, the presented methodology needs to be further investigated and improved.
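
    The density-calibration step is essentially a cubic polynomial pinned to four (CT number, density) pairs. A sketch follows; the CT numbers and the whole-rock density are hypothetical placeholders, while the quartz, iron oxide, and air densities are nominal handbook values.

    ```python
    import numpy as np

    # (CT number, density in g/cm^3) calibration pairs: whole rock,
    # quartz, iron oxide (hematite), air; CT numbers are placeholders
    ct_pts  = np.array([1200.0, 1800.0, 3000.0, 100.0])
    rho_pts = np.array([2.20,   2.65,   5.26,   0.0012])

    coeff = np.polyfit(ct_pts, rho_pts, deg=3)  # cubic through 4 points

    def ct_to_density(ct_volume):
        """Map a 3-D micro-CT array to a density matrix via the fit."""
        return np.polyval(coeff, ct_volume)
    ```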

  20. Rigorous evaluation of chemical measurement uncertainty: liquid chromatographic analysis methods using detector response factor calibration

    NASA Astrophysics Data System (ADS)

    Toman, Blaza; Nelson, Michael A.; Bedner, Mary

    2017-06-01

    Chemical measurement methods are designed to promote accurate knowledge of a measurand or system. As such, these methods often allow elicitation of latent sources of variability and correlation in experimental data. They typically implement measurement equations that support quantification of effects associated with calibration standards and other known or observed parametric variables. Additionally, multiple samples and calibrants are usually analyzed to assess the accuracy of the measurement procedure and its repeatability by the analyst. Thus, a realistic assessment of uncertainty for most chemical measurement methods is not purely bottom-up (based on the measurement equation) or top-down (based on the experimental design), but inherently contains elements of both. Confidence in results must be rigorously evaluated against the sources of variability in all of the bottom-up and top-down elements. This type of analysis presents unique challenges due to various statistical correlations among the outputs of measurement equations. One approach is to use a Bayesian hierarchical (BH) model, which is intrinsically rigorous and thus a straightforward method for complex experimental designs, particularly when correlations among data are numerous and difficult to elucidate or explicitly quantify. In simpler cases, careful analysis using GUM Supplement 1 (MC) methods augmented with random-effects meta-analysis yields results similar to a full BH model analysis. In this article we describe both approaches to rigorous uncertainty evaluation, using as examples measurements of 25-hydroxyvitamin D3 in solution reference materials via liquid chromatography with UV absorbance detection (LC-UV) and liquid chromatography with mass spectrometric detection using isotope dilution (LC-IDMS).
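
    The GUM Supplement 1 (MC) part of such an evaluation amounts to propagating input distributions through the measurement equation by sampling. A generic sketch with an illustrative response-factor equation is shown below; all numerical values are invented for demonstration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def mc_uncertainty(n=200_000):
        """GUM Supplement 1 style Monte Carlo for a generic response-factor
        model c_sample = (A_sample / A_std) * c_std; numbers illustrative."""
        A_sample = rng.normal(1.000, 0.004, n)  # peak area, sample
        A_std    = rng.normal(0.998, 0.004, n)  # peak area, calibrant
        c_std    = rng.normal(10.00, 0.02,  n)  # calibrant concentration
        c = A_sample / A_std * c_std
        return c.mean(), c.std(ddof=1)          # estimate, std. uncertainty
    ```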

  1. A numerical identifiability test for state-space models--application to optimal experimental design.

    PubMed

    Hidalgo, M E; Ayesa, E

    2001-01-01

    This paper describes a mathematical tool for identifiability analysis that is easily applicable to high-order non-linear systems modelled in state-space and implementable in simulators with a time-discrete approach. The procedure also permits a rigorous analysis of the expected estimation errors (average and maximum) in calibration experiments. The methodology is based on the recursive numerical evaluation of the information matrix during the simulation of a calibration experiment and on the setting up of a group of information parameters based on geometric interpretations of this matrix. As an example of the utility of the proposed test, the paper presents its application to an optimal experimental design for ASM Model No. 1 calibration, in order to estimate the maximum specific growth rate μH and the concentration of heterotrophic biomass XBH.
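
    The central computation is the recursive accumulation of the Fisher information matrix over the simulated experiment. A minimal sketch is given below, assuming additive Gaussian output noise and precomputed output sensitivities; the paper's geometric information parameters are not reproduced here.

    ```python
    import numpy as np

    def information_matrix(sensitivities, sigma):
        """Accumulate the Fisher information matrix recursively during a
        simulated calibration experiment. `sensitivities` yields dy/dtheta
        row vectors (one per sampling instant); sigma is the output noise
        standard deviation. Returns the matrix and its condition number."""
        F = None
        for s in sensitivities:
            s = np.atleast_2d(s)
            F = (s.T @ s) / sigma**2 if F is None else F + (s.T @ s) / sigma**2
        return F, np.linalg.cond(F)  # poor conditioning flags weak identifiability
    ```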

  2. Point-particle method to compute diffusion-limited cellular uptake.

    PubMed

    Sozza, A; Piazza, F; Cencini, M; De Lillo, F; Boffetta, G

    2018-02-01

    We present an efficient point-particle approach to simulate reaction-diffusion processes of spherical absorbing particles in the diffusion-limited regime, as simple models of cellular uptake. The exact solution for a single absorber is used to calibrate the method, linking the numerical parameters to the physical particle radius and uptake rate. We study configurations of multiple absorbers of increasing complexity to examine the performance of the method, comparing our simulations with available exact analytical or numerical results. We demonstrate the potential of the method to resolve complex diffusive interactions, here quantified by the Sherwood number, which measures the uptake rate in terms of that of isolated absorbers. We implement the method in a pseudospectral solver that can be generalized to include fluid motion and fluid-particle interactions. As a test case in the presence of flow, we consider the uptake rate of a particle in a linear shear flow. Overall, our method represents a powerful and flexible computational tool that can be employed to investigate many complex situations in biology, chemistry, and related sciences.
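
    The calibration against the exact single-absorber solution can be illustrated with the Smoluchowski rate, Phi = 4*pi*D*a*c_inf, which also defines the Sherwood normalization used in the paper. A small sketch in our own notation:

    ```python
    import numpy as np

    def sherwood(uptake_rate, D, radius, c_inf):
        """Uptake rate normalized by the exact single-absorber
        (Smoluchowski) value 4*pi*D*a*c_inf; Sh = 1 for an isolated
        sphere in the diffusion limit."""
        phi_single = 4.0 * np.pi * D * radius * c_inf
        return uptake_rate / phi_single
    ```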

  3. Hybrid method for determining the parameters of condenser microphones from measured membrane velocities and numerical calculations.

    PubMed

    Barrera-Figueroa, Salvador; Rasmussen, Knud; Jacobsen, Finn

    2009-10-01

    Typically, numerical calculations of the pressure, free-field, and random-incidence response of a condenser microphone are carried out on the basis of an assumed displacement distribution of the diaphragm of the microphone; the conventional assumption is that the displacement follows a Bessel function. This assumption is probably valid at frequencies below the resonance frequency. However, at higher frequencies the movement of the membrane is heavily coupled with the damping of the air film between membrane and backplate and with resonances in the back chamber of the microphone. A solution to this problem is to measure the velocity distribution of the membrane by means of a non-contact method, such as laser vibrometry. The measured velocity distribution can be used together with a numerical formulation such as the boundary element method for estimating the microphone response and other parameters, e.g., the acoustic center. In this work, such a hybrid method is presented and examined. The velocity distributions of a number of condenser microphones have been determined using a laser vibrometer, and these measured velocity distributions have been used for estimating microphone responses and other parameters. The agreement with experimental data is generally good. The method can be used as an alternative for validating the parameters of the microphones determined by classical calibration techniques.
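
    The conventional assumption mentioned above is easy to state explicitly: the membrane displacement follows w(r) proportional to J0(k r), vanishing at the clamped rim, i.e. k = j01 / R with j01 the first zero of J0. A short sketch of this assumed profile (our simplification, used only as the baseline that the hybrid method replaces with measured velocities):

    ```python
    import numpy as np
    from scipy.special import j0, jn_zeros

    def bessel_displacement(r, R):
        """Assumed membrane displacement profile w(r) = J0(k r) with
        J0(k R) = 0 at the rim; unity at the center, zero at r = R."""
        k = jn_zeros(0, 1)[0] / R   # j01 / R, j01 ~ 2.405
        return j0(k * np.asarray(r))
    ```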

  4. Prediction of SOFC Performance with or without Experiments: A Study on Minimum Requirements for Experimental Data

    DOE PAGES

    Yang, Tao; Sezer, Hayri; Celik, Ismail B.; ...

    2015-06-02

    In the present paper, a physics-based procedure combining experiments and multi-physics numerical simulations is developed for the overall analysis of SOFC operational diagnostics and performance prediction. In this procedure, essential information about the fuel cell is first extracted by utilizing empirical polarization analysis in conjunction with experiments, and then refined by multi-physics numerical simulations via simultaneous analysis and calibration of the polarization curve and impedance behavior. The performance at different utilization cases and operating currents is also predicted to confirm the accuracy of the proposed model. It is demonstrated that, with the present electrochemical model, three air/fuel flow conditions are needed to produce a complete data set for better understanding of the processes occurring within SOFCs. After calibration against button cell experiments, the methodology can be used to assess the performance of planar cells without further calibration. The proposed methodology would accelerate the calibration process and improve the efficiency of design and diagnostics.

  5. Preliminary design of the HARMONI science software

    NASA Astrophysics Data System (ADS)

    Piqueras, Laure; Jarno, Aurelien; Pécontal-Rousset, Arlette; Loupias, Magali; Richard, Johan; Schwartz, Noah; Fusco, Thierry; Sauvage, Jean-François; Neichel, Benoît; Correia, Carlos M.

    2016-08-01

    This paper introduces the science software of HARMONI. The Instrument Numerical Model simulates the instrument from the optical point of view and provides synthetic exposures simulating detector readouts from data-cubes containing astrophysical scenes. The Data Reduction Software converts raw-data frames into a fully calibrated, scientifically usable data cube. We present the functionalities and the preliminary design of this software, describe some of the methods and algorithms used and highlight the challenges that we will have to face.

  6. Evaluation of liquid aerosol transport through porous media

    NASA Astrophysics Data System (ADS)

    Hall, R.; Murdoch, L.; Falta, R.; Looney, B.; Riha, B.

    2016-07-01

    Application of remediation methods in contaminated vadose zones has been hindered by an inability to effectively distribute liquid- or solid-phase amendments. Injection as aerosols in a carrier gas could be a viable method for achieving useful distributions of amendments in unsaturated materials. The objectives of this work were to characterize radial transport of aerosols in unsaturated porous media, and to develop capabilities for predicting results of aerosol injection scenarios at the field-scale. Transport processes were investigated by conducting lab-scale injection experiments with radial flow geometry, and predictive capabilities were obtained by developing and validating a numerical model for simulating coupled aerosol transport, deposition, and multi-phase flow in porous media. Soybean oil was transported more than 2 m through sand by injecting it as micron-scale aerosol droplets. Oil saturation in the sand increased with time to a maximum of 0.25, and decreased with radial distance in the experiments. The numerical analysis predicted the distribution of oil saturation with only minor calibration. The results indicated that evolution of oil saturation was controlled by aerosol deposition and subsequent flow of the liquid oil, and simulation requires including these two coupled processes. The calibrated model was used to evaluate field applications. The results suggest that amendments can be delivered to the vadose zone as aerosols, and that gas injection rate and aerosol particle size will be important controls on the process.

  7. Hybrid x-space: a new approach for MPI reconstruction.

    PubMed

    Tateo, A; Iurino, A; Settanni, G; Andrisani, A; Stifanelli, P F; Larizza, P; Mazzia, F; Mininni, R M; Tangaro, S; Bellotti, R

    2016-06-07

    Magnetic particle imaging (MPI) is a new medical imaging technique capable of recovering the distribution of superparamagnetic particles from their measured induced signals. In the literature there are two main MPI reconstruction techniques: measurement-based (MB) and x-space (XS). The MB method is expensive because it requires a long calibration procedure as well as a reconstruction phase that can be numerically costly. On the other hand, the XS method is simpler than MB, but exact knowledge of the field free point (FFP) motion is essential for its implementation. Our simulation work focuses on the implementation of a new approach for MPI reconstruction, called hybrid x-space (HXS), which combines the previous methods. Specifically, our approach is based on XS reconstruction because it requires the knowledge of the FFP position and velocity at each time instant. The difference with respect to the original XS formulation is how the FFP velocity is computed: we estimate it from the experimental measurements of the calibration scans, typical of the MB approach. Moreover, a compressive sensing technique is applied in order to reduce the calibration time, using fewer sampling positions. Simulations highlight that the HXS and XS methods give similar results. Furthermore, an appropriate use of compressive sensing is crucial for obtaining a good balance between time reduction and reconstructed image quality. Our proposal is suitable for open geometry configurations of human-size devices, where incidental factors could make the currents, the fields and the FFP trajectory irregular.
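
    The distinguishing HXS step, estimating the FFP velocity from measured trajectory samples rather than from the ideal drive-field model, can be sketched as a finite-difference derivative; the sampling layout below is our assumption.

    ```python
    import numpy as np

    def ffp_velocity(positions, dt):
        """Estimate the field-free-point velocity from measured positions
        (shape: n_samples x n_dims, from calibration scans) by central
        finite differences along the time axis."""
        return np.gradient(positions, dt, axis=0)
    ```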

  8. A new polarimetric active radar calibrator and calibration technique

    NASA Astrophysics Data System (ADS)

    Tang, Jianguo; Xu, Xiaojian

    2015-10-01

    The polarimetric active radar calibrator (PARC) is one of the most important calibrators with high radar cross section (RCS) for polarimetric measurements. In this paper, a new double-antenna polarimetric active radar calibrator (DPARC) is proposed, which consists of two rotatable antennas with wideband electromagnetic polarization filters (EMPF) to achieve lower cross-polarization for transmission and reception. With two antennas that are rotatable around the radar line of sight (LOS), the DPARC provides a variety of standard polarimetric scattering matrices (PSM) through rotation combinations of the receiving and transmitting polarizations, which are useful for polarimetric calibration in different applications. In addition, a technique based on Fourier analysis is proposed for calibration processing. Numerical simulation results are presented to demonstrate the superior performance of the proposed DPARC and processing technique.
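
    The rotation combinations can be illustrated with a simplified textbook model: an ideal single-polarized active calibrator matrix conjugated by rotations about the LOS. The sketch below is not the authors' exact expression (it neglects antenna imperfections and the EMPF); it only shows the geometric idea.

    ```python
    import numpy as np

    def rotation(theta):
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s], [s, c]])

    def dparc_psm(theta_rx, theta_tx):
        """Effective scattering matrix of an ideal single-polarized active
        calibrator with receive/transmit antennas rotated about the LOS."""
        S0 = np.array([[1.0, 0.0], [0.0, 0.0]])
        return rotation(theta_rx) @ S0 @ rotation(theta_tx).T

    # e.g. 45/45 degrees yields equal co- and cross-polar terms
    print(dparc_psm(np.pi / 4, np.pi / 4))
    ```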

  9. Numerical simulation of the effect of regular and sub-caliber projectiles on military bunkers

    NASA Astrophysics Data System (ADS)

    Jiricek, Pavel; Foglar, Marek

    2015-09-01

    One of the most demanding topics in blast and impact engineering is the modelling of projectile impact. To introduce this topic, a set of numerical simulations was undertaken. The simulations study the impact of regular and sub-calibre projectiles on Czech pre-WW2 military bunkers. The penetrations of these military objects are well documented and can be used for comparison. The numerical model consists of a part of a wall of a military object. The concrete block is subjected to the impact of a regular and a sub-calibre projectile. The model is divided into layers to simplify the evaluation of the results. The simulations are processed within the ANSYS AUTODYN software. A nonlinear material model with damage and an incorporated strain-rate effect was used. The results of the numerical simulations are evaluated in terms of the damage to the concrete block. The progress of the damage is described versus time. The numerical simulations provide good agreement with the documented penetrations.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Atamturktur, Sez; Unal, Cetin; Hemez, Francois

    The project proposed to provide a Predictive Maturity Framework with its companion metrics that (1) introduce a formalized, quantitative means to communicate information between interested parties, (2) provide scientifically dependable means to claim completion of Validation and Uncertainty Quantification (VU) activities, and (3) guide the decision makers in the allocation of Nuclear Energy’s resources for code development and physical experiments. The project team proposed to develop this framework based on two complementary criteria: (1) the extent of experimental evidence available for the calibration of simulation models and (2) the sophistication of the physics incorporated in simulation models. The proposed framework is capable of quantifying the interaction between the required number of physical experiments and degree of physics sophistication. The project team has developed this framework and implemented it with a multi-scale model for simulating creep of a reactor core cladding. The multi-scale model is composed of the viscoplastic self-consistent (VPSC) code at the meso-scale, which represents the visco-plastic behavior and changing properties of a highly anisotropic material, and a Finite Element (FE) code at the macro-scale to represent the elastic behavior and apply the loading. The framework developed takes advantage of the transparency provided by partitioned analysis, where independent constituent codes are coupled in an iterative manner. This transparency allows model developers to better understand and remedy the source of biases and uncertainties, whether they stem from the constituents or the coupling interface, by exploiting separate-effect experiments conducted within the constituent domain and integral-effect experiments conducted within the full-system domain. The project team has implemented this procedure with the multi-scale VPSC-FE model and demonstrated its ability to improve the predictive capability of the model. Within this framework, the project team has focused on optimizing resource allocation for improving numerical models through further code development and experimentation. Related to further code development, we have developed a code prioritization index (CPI) for coupled numerical models. CPI is implemented to effectively improve the predictive capability of the coupled model by increasing the sophistication of constituent codes. In relation to designing new experiments, we investigated the information gained by the addition of each new experiment used for calibration and bias correction of a simulation model. Additionally, the variability of ‘information gain’ through the design domain has been investigated in order to identify the experiment settings where maximum information gain occurs and thus guide the experimenters in the selection of the experiment settings. This idea was extended to evaluate how the information gain from each experiment can be improved by intelligently selecting the experiments, leading to the development of the Batch Sequential Design (BSD) technique. Additionally, we evaluated the importance of sufficiently exploring the domain of applicability in experiment-based validation of high-consequence modeling and simulation by developing a new metric to quantify coverage. This metric has also been incorporated into the design of new experiments. Finally, we have proposed a data-aware calibration approach for the calibration of numerical models.
This new method considers the complexity of a numerical model (the number of parameters to be calibrated, parameter uncertainty, and form of the model) and seeks to identify the number of experiments necessary to calibrate the model based on the level of sophistication of the physics. The final component in the project team’s work to improve model calibration and validation methods is the incorporation of robustness to non-probabilistic uncertainty in the input parameters. This is an improvement to model validation and uncertainty quantification extending beyond the originally proposed scope of the project. We have introduced a new metric for incorporating the concept of robustness into experiment-based validation of numerical models. This project has supported the graduation of two Ph.D. students (Kendra Van Buren and Josh Hegenderfer) and two M.S. students (Matthew Egeberg and Parker Shields). One of the doctoral students is now working in the nuclear engineering field and the other is a post-doctoral fellow at the Los Alamos National Laboratory. Additionally, two more Ph.D. students (Garrison Stevens and Tunc Kulaksiz) who are working towards graduation have been supported by this project.

  11. Nonlinear gamma correction via normed bicoherence minimization in optical fringe projection metrology

    NASA Astrophysics Data System (ADS)

    Kamagara, Abel; Wang, Xiangzhao; Li, Sikun

    2018-03-01

    We propose a method to compensate for the projector intensity nonlinearity induced by the gamma effect in three-dimensional (3-D) fringe projection metrology by extending high-order spectral analysis and bispectral norm minimization to digital sinusoidal fringe pattern analysis. The bispectrum estimate allows extraction of vital signal features such as correlation relationships between spectral components in fringe pattern images. Our approach exploits the fact that gamma introduces correlated high-order harmonics in the affected fringe pattern image. Estimation and compensation of the projector nonlinearity is realized by detecting and minimizing the normed bispectral coherence of these correlations. The proposed technique requires neither calibration information nor technical knowledge or specifications of the fringe projection unit. This is promising for developing a modular and calibration-invariant model for nonlinear gamma compensation in digital fringe pattern projection profilometry. Experimental and numerical simulation results demonstrate the method to be efficient and effective in improving the phase measuring accuracy of phase-shifting fringe pattern projection profilometry.
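
    A segment-averaged bicoherence estimator captures the quadratic phase coupling that gamma introduces between fringe harmonics; the calibration then searches for the gamma value whose inverse correction minimizes its norm. A minimal estimator sketch (our implementation choices, not the authors' code):

    ```python
    import numpy as np

    def bicoherence_norm(signal, seg_len=256):
        """Frobenius norm of the segment-averaged bicoherence of a 1-D
        fringe signal; gamma distortion raises this norm."""
        n_seg = len(signal) // seg_len
        segs = signal[: n_seg * seg_len].reshape(n_seg, seg_len)
        X = np.fft.rfft(segs - segs.mean(axis=1, keepdims=True), axis=1)
        f = X.shape[1] // 2                       # keep i + j in range
        i, j = np.meshgrid(np.arange(f), np.arange(f), indexing="ij")
        triple = X[:, i] * X[:, j] * np.conj(X[:, i + j])
        num = np.abs(triple.mean(axis=0)) ** 2
        den = (np.abs(X[:, i] * X[:, j]) ** 2).mean(axis=0) \
            * (np.abs(X[:, i + j]) ** 2).mean(axis=0)
        return np.linalg.norm(num / np.maximum(den, 1e-30))
    ```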

  12. Application of validation data for assessing spatial interpolation methods for 8-h ozone or other sparsely monitored constituents.

    PubMed

    Joseph, John; Sharif, Hatim O; Sunil, Thankam; Alamgir, Hasanat

    2013-07-01

    The adverse health effects of high concentrations of ground-level ozone are well known, but estimating exposure is difficult due to the sparseness of urban monitoring networks. This sparseness discourages the reservation of a portion of the monitoring stations for validation of interpolation techniques precisely when the risk of overfitting is greatest. In this study, we test a variety of simple spatial interpolation techniques for 8-h ozone with thousands of randomly selected subsets of data from two urban areas with monitoring stations sufficiently numerous to allow for true validation. Results indicate that ordinary kriging with only the range parameter calibrated in an exponential variogram is the generally superior method and yields reliable confidence intervals. Sparse data sets may contain sufficient information for calibration of the range parameter even if the Moran I p-value is close to unity. An R script is made available to apply the methodology to other sparsely monitored constituents. Copyright © 2013 Elsevier Ltd. All rights reserved.
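
    The winning interpolator is compact enough to restate. The authors supply an R script; the sketch below is a Python stand-in for ordinary kriging with an exponential variogram in which only the range parameter is calibrated.

    ```python
    import numpy as np

    def ok_predict(xy_obs, z_obs, xy_new, rng_param, sill=1.0, nugget=0.0):
        """Ordinary kriging at one target point with an exponential
        variogram; rng_param is the calibrated range parameter."""
        def gamma(h):
            return nugget + (sill - nugget) * (1.0 - np.exp(-h / rng_param))
        n = len(z_obs)
        d = np.linalg.norm(xy_obs[:, None] - xy_obs[None], axis=-1)
        A = np.ones((n + 1, n + 1))
        A[:n, :n] = gamma(d)
        A[-1, -1] = 0.0                              # unbiasedness constraint
        b = np.append(gamma(np.linalg.norm(xy_obs - xy_new, axis=-1)), 1.0)
        w = np.linalg.solve(A, b)
        return w[:n] @ z_obs, w @ b                  # estimate, kriging variance
    ```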

  13. Curve Number Application in Continuous Runoff Models: An Exercise in Futility?

    NASA Astrophysics Data System (ADS)

    Lamont, S. J.; Eli, R. N.

    2006-12-01

    The suitability of applying the NRCS (Natural Resources Conservation Service) Curve Number (CN) to continuous runoff prediction is examined by studying the dependence of CN on several hydrologic variables in the context of a complex nonlinear hydrologic model. The continuous watershed model Hydrologic Simulation Program-FORTRAN (HSPF) was employed, using a simple theoretical watershed, in two numerical procedures designed to investigate the influence of soil type, soil depth, storm depth, storm distribution, and initial abstraction ratio on the calculated CN value. This study stems from a concurrent project involving the design of a hydrologic modeling system to support the Cumulative Hydrologic Impact Assessments (CHIA) of over 230 coal-mined watersheds throughout West Virginia. Because of the large number of watersheds and the limited availability of data necessary for HSPF calibration, it was initially proposed that predetermined CN values be used as a surrogate for those HSPF parameters controlling direct runoff. A soil physics model was developed to relate CN values to the HSPF parameters governing soil moisture content and infiltration behavior, with the remaining HSPF parameters adopted from previous calibrations on real watersheds. A numerical procedure was then adopted to back-calculate CN values from the theoretical watershed using antecedent moisture conditions equivalent to the NRCS Antecedent Runoff Condition (ARC) II. This procedure used the direct runoff produced from a cyclic synthetic storm event time series input to HSPF. A second numerical method of CN determination, using real rainfall time series data, provided a comparison to the CN values determined using the synthetic storm event time series. The calculated CN values resulting from both numerical methods demonstrated a nonlinear dependence on all of the computational variables listed above. It was concluded that the use of the Curve Number as a surrogate for the selected subset of HSPF parameters could not be justified. These results suggest that use of the Curve Number in other complex continuous time series hydrologic models may not be appropriate, given the limitations inherent in the definition of the NRCS CN method.

  14. Calibration and Flight Results for the Ares I-X 5-Hole Probe

    NASA Technical Reports Server (NTRS)

    Campbell, Joel F.; Brandon, Jay M.

    2011-01-01

    Flight and calibration results are presented for the Ares I-X 5-hole probe. The probe is calibrated using a combination of wind tunnel, CFD, and other numerical modeling techniques. The calibration is then applied to the probe flight data, and comparisons are made between the vanes and the 5-hole probe. Using this and other data, it is shown that the probe was corrupted by water, rendering that measurement unreliable.

  15. Mixture EMOS model for calibrating ensemble forecasts of wind speed.

    PubMed

    Baran, S; Lerch, S

    2016-03-01

    Ensemble model output statistics (EMOS) is a statistical tool for post-processing forecast ensembles of weather variables obtained from multiple runs of numerical weather prediction models in order to produce calibrated predictive probability density functions. The EMOS predictive probability density function is given by a parametric distribution with parameters depending on the ensemble forecasts. We propose an EMOS model for calibrating wind speed forecasts based on weighted mixtures of truncated normal (TN) and log-normal (LN) distributions, where model parameters and component weights are estimated by optimizing the values of proper scoring rules over a rolling training period. The new model is tested on wind speed forecasts of the 50-member European Centre for Medium-range Weather Forecasts ensemble, the 11-member Aire Limitée Adaptation dynamique Développement International-Hungary Ensemble Prediction System ensemble of the Hungarian Meteorological Service, and the eight-member University of Washington mesoscale ensemble, and its predictive performance is compared with that of various benchmark EMOS models based on single parametric families and combinations thereof. The results indicate improved calibration of probabilistic forecasts and improved accuracy of point forecasts in comparison with the raw ensemble and climatological forecasts. The mixture EMOS model significantly outperforms the TN and LN EMOS methods; moreover, it provides better calibrated forecasts than the TN-LN combination model and offers increased flexibility while avoiding covariate selection problems. © 2016 The Authors. Environmetrics published by John Wiley & Sons Ltd.
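
    The mixture density itself is easy to write down with standard distributions. The sketch below fits a zero-truncated-normal/log-normal mixture by maximum likelihood (the log score); the full EMOS model additionally links the location and scale parameters to the ensemble statistics, which is omitted here, and the starting values are arbitrary.

    ```python
    import numpy as np
    from scipy.stats import truncnorm, lognorm
    from scipy.optimize import minimize

    def mixture_pdf(y, w, mu, sig, m, s):
        """Weighted TN0-LN mixture density for wind speed y >= 0."""
        a = (0.0 - mu) / sig                      # truncation at zero
        tn = truncnorm.pdf(y, a, np.inf, loc=mu, scale=sig)
        ln = lognorm.pdf(y, s, scale=np.exp(m))
        return w * tn + (1.0 - w) * ln

    def fit(y_obs):
        """Maximum-likelihood fit of the five mixture parameters."""
        def nll(p):
            w = 1.0 / (1.0 + np.exp(-p[0]))       # keep weight in (0, 1)
            pdf = mixture_pdf(y_obs, w, p[1], np.exp(p[2]), p[3], np.exp(p[4]))
            return -np.log(np.maximum(pdf, 1e-300)).sum()
        return minimize(nll, x0=[0.0, 5.0, 0.5, 1.5, -0.5], method="Nelder-Mead")
    ```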

  16. Development of a Machine-Vision System for Recording of Force Calibration Data

    NASA Astrophysics Data System (ADS)

    Heamawatanachai, Sumet; Chaemthet, Kittipong; Changpan, Tawat

    This paper presents the development of a new system for recording force calibration data using machine vision technology. A real-time camera and computer system was used to capture images of the readings from the instruments during calibration. The measurement images were then translated into numerical data using an optical character recognition (OCR) technique. These numerical data, along with the raw images, were automatically saved as the calibration database files. With this new system, human recording errors are eliminated. Verification experiments were carried out by using the system to record the measurement results from an amplifier (DMP 40) with a load cell (HBM-Z30-10kN). NIMT's 100-kN deadweight force standard machine (DWM-100kN) was used to generate the test forces. The experiments covered three categories: 1) dynamic conditions (recording during load changes), 2) static conditions (recording during fixed load), and 3) full calibration experiments in accordance with ISO 376:2011. The captured images from the dynamic-condition experiment gave >94% of images without digit overlap. The static-condition experiment gave >98% of images without overlap. All measurement images without overlap were translated to numbers by the developed program with 100% accuracy. The full calibration experiments also gave 100% accurate results. Moreover, in case of an incorrect translation of any result, it is possible to trace back to the raw calibration image to check and correct it. Therefore, this machine-vision-based system and program should be appropriate for recording force calibration data.
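
    The capture-and-OCR loop can be prototyped with off-the-shelf tools. The sketch below uses OpenCV and Tesseract as generic stand-ins for the authors' in-house recognizer; the preprocessing choices and the digit whitelist are our assumptions.

    ```python
    import cv2                 # OpenCV for image preprocessing
    import pytesseract         # Tesseract OCR wrapper

    def read_display(frame):
        """Turn one camera frame of an instrument display into a number:
        grayscale, Otsu binarization, then single-line digit OCR."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, binary = cv2.threshold(gray, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        text = pytesseract.image_to_string(
            binary, config="--psm 7 -c tessedit_char_whitelist=0123456789.-")
        return float(text.strip())
    ```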

  17. Usefulness of Wave Data Assimilation to the WAVE WATCH III Modeling System

    NASA Astrophysics Data System (ADS)

    Choi, J. K.; Dykes, J. D.; Yaremchuk, M.; Wittmann, P.

    2017-12-01

    In-situ and remote-sensed wave data are more abundant today than in years past, with excellent accuracy at global scales. Forecast skill of the WAVE WATCH III model is improved by assimilation of these measurements, and they are also useful for model validation and calibration. It has been known that the impact of assimilation in wind-sea conditions is not large, but when spectra that result in large swell with long-term propagation are identified and assimilated, the improved accuracy of the initial conditions improves the long-term forecasts. The Navy's assimilation method started with the simple Optimal Interpolation (OI) method. Operationally, Fleet Numerical Meteorology and Oceanography Center uses the sequential 2DVar scheme, but a new approach based on an adjoint-free method for variational assimilation in WAVE WATCH III has been tested. We will present the status of wave data assimilation into the WAVE WATCH III numerical model and the upcoming development of this new adjoint-free variational approach.
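
    The OI starting point mentioned above is the textbook analysis update; a compact sketch in matrix form (generic, not the operational implementation):

    ```python
    import numpy as np

    def oi_update(xb, B, y, R, H):
        """One optimal-interpolation analysis step:
        xa = xb + K (y - H xb), with gain K = B H^T (H B H^T + R)^-1."""
        K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
        return xb + K @ (y - H @ xb)
    ```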

  18. Wavefront-aberration measurement and systematic-error analysis of a high numerical-aperture objective

    NASA Astrophysics Data System (ADS)

    Liu, Zhixiang; Xing, Tingwen; Jiang, Yadong; Lv, Baobin

    2018-02-01

    A two-dimensional (2-D) shearing interferometer based on an amplitude chessboard grating was designed to measure the wavefront aberration of a high numerical-aperture (NA) objective. Chessboard gratings offer better diffraction efficiencies and fewer disturbing diffraction orders than traditional cross gratings. The wavefront aberration of the tested objective was retrieved from the shearing interferogram using the Fourier transform and differential Zernike polynomial-fitting methods. Grating manufacturing errors, including the duty-cycle and pattern-deviation errors, were analyzed with the Fourier transform method. Then, according to the relation between the spherical pupil and planar detector coordinates, the influence of the distortion of the pupil coordinates was simulated. Finally, the systematic error attributable to grating alignment errors was deduced through the geometrical ray-tracing method. Experimental results indicate that the measuring repeatability (3σ) of the wavefront aberration of an objective with NA 0.4 was 3.4 mλ. The systematic-error results were consistent with previous analyses. Thus, the correct wavefront aberration can be obtained after calibration.

  19. Calibration and Finite Element Implementation of an Energy-Based Material Model for Shape Memory Alloys

    NASA Astrophysics Data System (ADS)

    Junker, Philipp; Hackl, Klaus

    2016-09-01

    Numerical simulations are a powerful tool to analyze the complex thermo-mechanically coupled material behavior of shape memory alloys during product engineering. The benefit of the simulations strongly depends on the quality of the underlying material model. In this contribution, we discuss a variational approach based solely on energetic considerations and demonstrate that a unique calibration of such a model is sufficient to predict the material behavior at varying ambient temperature. In the beginning, we recall the necessary equations of the material model and explain the fundamental idea. Afterwards, we focus on the numerical implementation and provide all information needed for programming. Then, we show two different ways to calibrate the model and discuss the results. Furthermore, we show how this model is used during real-life industrial product engineering.

  20. Poster - 16: Time-resolved diode dosimetry for in vivo proton therapy range verification: calibration through numerical modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toltz, Allison; Hoesl, Michaela; Schuemann, Jan

    Purpose: A method to refine the implementation of an in vivo, adaptive proton therapy range verification methodology was investigated. Simulation experiments and in-phantom measurements were compared to validate the calibration procedure of a time-resolved diode dosimetry technique. Methods: A silicon diode array system has been developed and experimentally tested in phantom for passively scattered proton beam range verification by correlating properties of the detector signal with the water equivalent path length (WEPL). The implementation of this system requires a set of calibration measurements to establish a beam-specific diode response to WEPL fit for the selected ‘scout’ beam in a solid water phantom. This process is both tedious, as it necessitates a separate set of measurements for every ‘scout’ beam that may be appropriate to the clinical case, and inconvenient due to limited access to the clinical beamline. The diode response to WEPL relationship for a given ‘scout’ beam may instead be determined within a simulation environment, facilitating the applicability of this dosimetry technique. Measurements for three ‘scout’ beams were compared against detector response simulated with Monte Carlo methods using the Tool for Particle Simulation (TOPAS). Results: Detector response in water-equivalent plastic was successfully validated against simulation for spread-out Bragg peaks of range 10 cm, 15 cm, and 21 cm (168 MeV, 177 MeV, and 210 MeV) with an adjusted R² of 0.998. Conclusion: Feasibility has been shown for performing calibration of detector response for a given ‘scout’ beam through simulation for the time-resolved diode dosimetry technique.

  1. Experimental verification of radial magnetic levitation force on the cylindrical magnets in ferrofluid dampers

    NASA Astrophysics Data System (ADS)

    Yang, Wenming; Wang, Pengkai; Hao, Ruican; Ma, Buchuan

    2017-03-01

    Analytical and numerical methods for calculating the radial magnetic levitation force on cylindrical magnets in cylindrical vessels filled with ferrofluid were reviewed. An experimental apparatus to measure this force was designed and built, capable of measuring forces in the range 0-2.0 N with an accuracy of 0.001 N. After calibration, this apparatus was used to study the radial magnetic levitation force experimentally. The results showed that the numerical method overestimates this force, while the analytical ones underestimate it. The maximum deviation between the numerical and experimental results was 18.5%, while that between the experimental and analytical results reached 68.5%. The latter deviation narrows as the magnets lengthen. With the aid of the experimental verification of the radial magnetic levitation force, the effect of the eccentric distance of the magnets on the viscous energy dissipation in ferrofluid dampers could be assessed. It was shown that ignoring the eccentricity of the magnets during the estimation can overestimate the viscous dissipation in ferrofluid dampers.

  2. A calibration method for fringe reflection technique based on the analytical phase-slope description

    NASA Astrophysics Data System (ADS)

    Wu, Yuxiang; Yue, Huimin; Pan, Zhipeng; Liu, Yong

    2018-05-01

    The fringe reflection technique (FRT) has become one of the most popular methods for measuring the shape of specular surfaces in recent years. Existing FRT system calibration methods usually contain two parts: camera calibration and geometric calibration. In geometric calibration, calibrating the position of the liquid crystal display (LCD) screen is one of the most difficult steps among all the calibration procedures, and its accuracy is affected by factors such as imaging aberration, plane mirror flatness, and LCD screen pixel size accuracy. In this paper, based on the derivation of an analytical phase-slope description of the FRT, we present a novel calibration method with no requirement to calibrate the position of the LCD screen. Furthermore, the system can be arbitrarily arranged, and the imaging system can be either telecentric or non-telecentric. In our experiment measuring a spherical mirror with a radius of 5000 mm, the proposed calibration method achieves 2.5 times smaller measurement error than the geometric calibration method. In the wafer surface measurement experiment, the result with the proposed calibration method is closer to the interferometer result than that of the geometric calibration method.

  3. Approaching the Post-Newtonian Regime with Numerical Relativity: A Compact-Object Binary Simulation Spanning 350 Gravitational-Wave Cycles

    NASA Astrophysics Data System (ADS)

    Szilágyi, Béla; Blackman, Jonathan; Buonanno, Alessandra; Taracchini, Andrea; Pfeiffer, Harald P.; Scheel, Mark A.; Chu, Tony; Kidder, Lawrence E.; Pan, Yi

    2015-07-01

    We present the first numerical-relativity simulation of a compact-object binary whose gravitational waveform is long enough to cover the entire frequency band of advanced gravitational-wave detectors, such as LIGO, Virgo, and KAGRA, for mass ratio 7 and total mass as low as 45.5 M⊙. We find that effective-one-body models, either uncalibrated or calibrated against substantially shorter numerical-relativity waveforms at smaller mass ratios, reproduce our new waveform remarkably well, with a negligible loss in detection rate due to modeling error. In contrast, post-Newtonian inspiral waveforms and existing calibrated phenomenological inspiral-merger-ringdown waveforms display greater disagreement with our new simulation. The disagreement varies substantially depending on the specific post-Newtonian approximant used.

  4. Approaching the Post-Newtonian Regime with Numerical Relativity: A Compact-Object Binary Simulation Spanning 350 Gravitational-Wave Cycles.

    PubMed

    Szilágyi, Béla; Blackman, Jonathan; Buonanno, Alessandra; Taracchini, Andrea; Pfeiffer, Harald P; Scheel, Mark A; Chu, Tony; Kidder, Lawrence E; Pan, Yi

    2015-07-17

    We present the first numerical-relativity simulation of a compact-object binary whose gravitational waveform is long enough to cover the entire frequency band of advanced gravitational-wave detectors, such as LIGO, Virgo, and KAGRA, for mass ratio 7 and total mass as low as 45.5 M⊙. We find that effective-one-body models, either uncalibrated or calibrated against substantially shorter numerical-relativity waveforms at smaller mass ratios, reproduce our new waveform remarkably well, with a negligible loss in detection rate due to modeling error. In contrast, post-Newtonian inspiral waveforms and existing calibrated phenomenological inspiral-merger-ringdown waveforms display greater disagreement with our new simulation. The disagreement varies substantially depending on the specific post-Newtonian approximant used.

  5. Hierarchical calibration and validation for modeling bench-scale solvent-based carbon capture. Part 1: Non-reactive physical mass transfer across the wetted wall column

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Chao; Xu, Zhijie; Lai, Canhai

    A hierarchical model calibration and validation is proposed for quantifying the confidence level of mass transfer prediction using a computational fluid dynamics (CFD) model, where solvent-based carbon dioxide (CO2) capture is simulated and simulation results are compared to parallel bench-scale experimental data. Two unit problems with increasing levels of complexity are proposed to break down the complex physical/chemical processes of solvent-based CO2 capture into relatively simpler problems that separate the effects of physical transport and chemical reaction. This paper focuses on the calibration and validation of the first unit problem, i.e., CO2 mass transfer across a falling monoethanolamine (MEA) film in the absence of chemical reaction. This problem is investigated both experimentally and numerically using nitrous oxide (N2O) as a surrogate for CO2. To capture the motion of the gas-liquid interface, a volume of fluid method is employed together with a one-fluid formulation to compute the mass transfer between the two phases. Bench-scale parallel experiments are designed and conducted to validate and calibrate the CFD models using a general Bayesian calibration. Two important transport parameters, Henry's constant and the gas diffusivity, are calibrated to produce the posterior distributions, which will be used as the input for the second unit problem addressing the chemical absorption of CO2 across the MEA falling film, where both mass transfer and chemical reaction are involved.
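
    The general Bayesian calibration of the two transport parameters can be mimicked with a plain random-walk Metropolis sampler, provided one can evaluate a log posterior combining the CFD-vs-experiment misfit with priors. The sketch below is only a minimal stand-in for the (unspecified) sampler used in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def metropolis(log_post, x0, step, n=20_000):
        """Random-walk Metropolis over (henry_constant, diffusivity).
        log_post returns the log posterior, e.g. a Gaussian misfit of
        simulated vs measured flux plus log priors."""
        x = np.array(x0, float)
        lp = log_post(x)
        chain = []
        for _ in range(n):
            xp = x + step * rng.standard_normal(x.size)
            lpp = log_post(xp)
            if np.log(rng.random()) < lpp - lp:   # accept/reject
                x, lp = xp, lpp
            chain.append(x.copy())
        return np.array(chain)   # posterior samples of the two parameters
    ```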

  6. Homogenization of Periodic Masonry Using Self-Consistent Scheme and Finite Element Method

    NASA Astrophysics Data System (ADS)

    Kumar, Nitin; Lambadi, Harish; Pandey, Manoj; Rajagopal, Amirtham

    2016-01-01

    Masonry is a heterogeneous anisotropic continuum, made up of brick and mortar arranged in a periodic manner. Obtaining the effective elastic stiffness of masonry structures has been a challenging task. In this study, the homogenization theory for periodic media is implemented in a very generic manner to derive the anisotropic global behavior of the masonry, through rigorous application of the homogenization theory in a single step and with a full three-dimensional treatment. We consider the periodic Eshelby self-consistent method and the finite element method. Two representative unit cells that exactly represent the microstructure of the masonry wall are considered for calibration and numerical application of the theory.

  7. The Aquarius Simulator and Cold-Sky Calibration

    NASA Technical Reports Server (NTRS)

    Le Vine, David M.; Dinnat, Emmanuel P.; Abraham, Saji; deMatthaeis, Paolo; Wentz, Frank J.

    2011-01-01

    A numerical simulator has been developed to study remote sensing from space in the spectral window at 1.413 GHz (L-band), and it has been used to optimize the cold-sky calibration (CSC) for the Aquarius radiometers. The celestial sky is a common cold reference in microwave radiometry. It is currently being used by the Soil Moisture and Ocean Salinity satellite, and it is planned that, after launch, the Aquarius/SAC-D observatory will periodically rotate to view "cold sky" as part of the calibration plan. Although radiation from the celestial sky is stable and relatively well known, it varies with location. In addition, radiation from the Earth below contributes to the measured signal through the antenna back lobes and also varies along the orbit. Both effects must be taken into account for a careful calibration. The numerical simulator has been used with the Aquarius configuration (antennas and orbit) to investigate these issues and determine optimum conditions for performing a CSC. This paper provides an overview of the simulator and the analysis leading to the selection of the optimum locations for a CSC.

  8. Flow and fracture behavior of aluminum alloy 6082-T6 at different tensile strain rates and triaxialities.

    PubMed

    Chen, Xuanzhen; Peng, Yong; Peng, Shan; Yao, Song; Chen, Chao; Xu, Ping

    2017-01-01

    This study investigates the flow and fracture behavior of aluminum alloy 6082-T6 (AA6082-T6) at different strain rates and triaxialities. Two groups of Charpy impact tests were carried out to further investigate its dynamic impact fracture properties. A series of tensile tests and numerical simulations based on finite element analysis (FEA) were performed. Experimental data on smooth specimens at strain rates ranging from 0.0001 to 3400 s⁻¹ show that AA6082-T6 is rather insensitive to strain rate in general. However, clear rate sensitivity was observed in the range of 0.001-1 s⁻¹, although this characteristic is counteracted by adiabatic heating of the specimens at high strain rates. A Johnson-Cook (J-C) constitutive model was proposed based on the tensile tests at different strain rates. The average stress triaxiality and equivalent plastic strain at fracture obtained from the numerical simulations were used to calibrate the J-C fracture model. Both the J-C constitutive model and the fracture model were employed in numerical simulations, and the results were compared with the experimental results. The calibrated J-C fracture model predicts the fracture behavior of AA6082-T6 more accurately than the J-C fracture model obtained by the common method. Finally, scanning electron microscope (SEM) images of fractured specimens with different initial stress triaxialities were analyzed. The magnified fractographs indicate that high initial stress triaxiality likely results in dimple fracture.
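
    For reference, the Johnson-Cook flow stress has the closed form sigma = (A + B*eps^n) * (1 + C*ln(rate/rate0)) * (1 - T*^m). The sketch below implements it; the parameter values are illustrative placeholders, not the calibrated AA6082-T6 set reported in the paper.

    ```python
    import numpy as np

    def jc_flow_stress(eps_p, eps_rate, T, A=290.0, B=155.0, n=0.36,
                       C=0.012, m=1.0, rate0=1.0, T_room=293.0, T_melt=855.0):
        """Johnson-Cook flow stress (MPa): strain hardening, logarithmic
        rate sensitivity, and thermal softening via the homologous
        temperature T* = (T - T_room) / (T_melt - T_room)."""
        T_star = np.clip((T - T_room) / (T_melt - T_room), 0.0, 1.0)
        rate_term = 1.0 + C * np.log(np.maximum(eps_rate / rate0, 1e-12))
        return (A + B * eps_p**n) * rate_term * (1.0 - T_star**m)
    ```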

  9. The most powerful astrophysical events: Gravitational-wave peak luminosity of binary black holes as predicted by numerical relativity

    NASA Astrophysics Data System (ADS)

    Keitel, David; Forteza, Xisco Jiménez; Husa, Sascha; London, Lionel; Bernuzzi, Sebastiano; Harms, Enno; Nagar, Alessandro; Hannam, Mark; Khan, Sebastian; Pürrer, Michael; Pratten, Geraint; Chaurasia, Vivek

    2017-07-01

    For a brief moment, a binary black hole (BBH) merger can be the most powerful astrophysical event in the visible Universe. Here we present a model fit for the gravitational-wave peak luminosity of nonprecessing quasicircular BBH systems as a function of the masses and spins of the component black holes, based on numerical relativity (NR) simulations and the hierarchical fitting approach introduced by X. Jiménez-Forteza et al. [Phys. Rev. D 95, 064024 (2017), 10.1103/PhysRevD.95.064024]. This fit improves over previous results in accuracy and parameter-space coverage and can be used to infer posterior distributions for the peak luminosity of future astrophysical signals like GW150914 and GW151226. The model is calibrated to the ℓ≤6 modes of 378 nonprecessing NR simulations up to mass ratios of 18 and dimensionless spin magnitudes up to 0.995, and includes unequal-spin effects. We also constrain the fit with perturbative numerical results for large mass ratios. Studies of key contributions to the uncertainty in NR peak luminosities, such as (i) mode selection, (ii) finite resolution, (iii) finite extraction radius, and (iv) different methods for converting NR waveforms to luminosity, allow us to use NR simulations from four different codes as a homogeneous calibration set. This study of systematic fits to combined NR and large-mass-ratio data, including higher modes, also paves the way for improved inspiral-merger-ringdown waveform models.

  10. Mathematical modeling of a survey-meter used to measure radioactivity in human thyroids: Monte Carlo calculations of the device response and uncertainties

    PubMed Central

    Khrutchinsky, Arkady; Drozdovitch, Vladimir; Kutsen, Semion; Minenko, Victor; Khrouch, Valeri; Luckyanov, Nickolas; Voillequé, Paul; Bouville, André

    2012-01-01

    This paper presents results of Monte Carlo modeling of the SRP-68-01 survey meter used to measure exposure rates near the thyroid glands of persons exposed to radioactivity following the Chernobyl accident. This device was not designed to measure radioactivity in humans. To estimate the uncertainty associated with the measurement results, a mathematical model of the SRP-68-01 survey meter was developed and verified. A Monte Carlo method for the numerical simulation of radiation transport was used to calculate the calibration factor for the device and evaluate its uncertainty. The SRP-68-01 survey meter scale coefficient, an important characteristic of the device, was also estimated in this study. The calibration factors of the survey meter were calculated for 131I, 132I, 133I, and 135I content in the thyroid gland for six age groups of the population: newborns; children aged 1 yr, 5 yr, 10 yr, and 15 yr; and adults. A realistic scenario of direct thyroid measurements with an “extended” neck was used to calculate the calibration factors for newborns and one-year-olds. Uncertainties in the device calibration factors due to variability of the device scale coefficient, variability in thyroid mass, and the statistical uncertainty of the Monte Carlo method were evaluated. Relative uncertainties in the calibration factor estimates were found to range from 0.06 for children aged 1 yr to 0.1 for 10-yr and 15-yr children. Positioning errors of the detector during measurements were found to bias the calibration factors mainly in one direction. Deviations of the device position from the proper measurement geometry were found to lead to overestimation of the calibration factor by up to 24 percent for adults and up to 60 percent for 1-yr children. The results of this study improve the estimates of 131I thyroidal content and, consequently, the thyroid dose estimates derived from direct thyroid measurements performed in Belarus shortly after the Chernobyl accident. PMID:22245289

  11. Calibration method for a large-scale structured light measurement system.

    PubMed

    Wang, Peng; Wang, Jianmei; Xu, Jing; Guan, Yong; Zhang, Guanglie; Chen, Ken

    2017-05-10

    The structured light method is an effective non-contact measurement approach. The calibration greatly affects the measurement precision of structured light systems. To construct a large-scale structured light system with high accuracy, a large-scale and precise calibration gauge is always required, which leads to an increased cost. To this end, in this paper, a calibration method with a planar mirror is proposed to reduce the calibration gauge size and cost. An out-of-focus camera calibration method is also proposed to overcome the defocusing problem caused by the shortened distance during the calibration procedure. The experimental results verify the accuracy of the proposed calibration method.

  12. Anchorage Behaviors of Frictional Tieback Anchors in Silty Sand

    NASA Astrophysics Data System (ADS)

    Hsu, Shih-Tsung; Hsiao, Wen-Ta; Chen, Ke-Ting; Hu, Wen-Chi; Wu, Ssu-Yi

    2017-06-01

    Soil anchors are extensively used in geotechnical applications, most commonly serving as tiebacks for walls in deep excavations. To investigate the anchorage mechanisms of such tieback anchors, a constitutive model that considers strain hardening and softening as well as volume dilatancy, termed the SHASOVOD model, is used together with the FLAC3D software to perform 3-D numerical analyses. The results from field anchor tests are compared with those calculated by the numerical analyses to establish the applicability of the numerical method. After this calibration, parameter studies were carried out by numerical analysis. The numerical results reveal that whether the yielding of the soil around an anchor develops to the ground surface and/or touches the diaphragm wall depends on the overburden depth H and the embedded depth Z of the anchor; accordingly, this study suggests minimum overburden and embedded depths to prevent the yielded zone from reaching the ground surface and/or the diaphragm wall. When the embedded depth, overburden depth, or fixed length of an anchor increases, the anchorage capacity also increases. Increasing the fixed length is the most effective way to increase the anchorage capacity for fixed lengths less than 20 m. However, when the fixed length of an anchor exceeds 30 m, the increase in anchorage capacity per unit fixed length decreases, and progressive yielding clearly occurs between the fixed length and the surrounding soil.

  13. The Gaia FGK benchmark stars. High resolution spectral library

    NASA Astrophysics Data System (ADS)

    Blanco-Cuaresma, S.; Soubiran, C.; Jofré, P.; Heiter, U.

    2014-06-01

    Context. An increasing number of high-resolution stellar spectra is available today thanks to many past and ongoing spectroscopic surveys. Consequently, numerous methods have been developed to perform automatic spectral analysis on massive amounts of data. When results from different methods are compared, biases arise that need to be addressed and minimized. Aims: We provide a homogeneous library with a common set of calibration stars (known as the Gaia FGK benchmark stars) that will allow us to assess stellar analysis methods and calibrate spectroscopic surveys. Methods: High-resolution, high signal-to-noise spectra were compiled from different instruments. We developed an automatic process to homogenize the observed data and assess the quality of the resulting library. Results: We built a high-quality library that will facilitate the assessment of spectral analyses and the calibration of present and future spectroscopic surveys. The automation of the process minimizes human subjectivity and ensures reproducibility. Additionally, it allows us to quickly adapt the library to specific needs that can arise from future spectroscopic analyses. Based on NARVAL and HARPS data obtained within the Gaia Data Processing and Analysis Consortium (DPAC) and coordinated by the GBOG (Ground-Based Observations for Gaia) working group, and on data retrieved from the ESO-ADP database. The library of spectra is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/566/A98

  14. Ion chamber absorbed dose calibration coefficients, N_{D,w}, measured at ADCLs: Distribution analysis and stability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muir, B. R., E-mail: Bryan.Muir@nrc-cnrc.gc.ca

    2015-04-15

    Purpose: To analyze absorbed dose calibration coefficients, N_{D,w}, measured at accredited dosimetry calibration laboratories (ADCLs) for client ionization chambers to study (i) variability among N_{D,w} coefficients for chambers of the same type calibrated at each ADCL, to investigate ion chamber volume fluctuations and chamber manufacturing tolerances; (ii) equivalency of ion chamber calibration coefficients measured at different ADCLs, by intercomparing N_{D,w} coefficients for chambers of the same type; and (iii) the long-term stability of N_{D,w} coefficients for different chamber types, by investigating repeated chamber calibrations. Methods: Large samples of N_{D,w} coefficients for several chamber types measured over the time period between 1998 and 2014 were obtained from the three ADCLs operating in the United States. These are analyzed using various graphical and numerical statistical tests for the four chamber types with the largest samples of calibration coefficients to investigate (i) and (ii) above. Ratios of calibration coefficients for the same chamber, typically obtained two years apart, are calculated to investigate (iii) above; chambers with standard deviations of old/new ratios less than 0.3% meet the stability requirements for accurate reference dosimetry recommended in dosimetry protocols. Results: It is found that N_{D,w} coefficients for a given chamber type compared among different ADCLs may arise from differing probability distributions, potentially due to slight differences in calibration procedures and/or the transfer of the primary standard. However, average N_{D,w} coefficients from different ADCLs for given chamber types are very close, with percent differences generally less than 0.2% for Farmer-type chambers, well within reported uncertainties. Conclusions: The close agreement among calibrations performed at different ADCLs reaffirms the Calibration Laboratory Accreditation Subcommittee process of ensuring ADCL conformance with National Institute of Standards and Technology standards. This study shows that N_{D,w} coefficients measured at different ADCLs are statistically equivalent, especially considering reasonable uncertainties. This analysis of N_{D,w} coefficients also allows identification of chamber types that can be considered stable enough for accurate reference dosimetry.

  15. The development and validation of command schedules for SeaWiFS

    NASA Astrophysics Data System (ADS)

    Woodward, Robert H.; Gregg, Watson W.; Patt, Frederick S.

    1994-11-01

    An automated method for developing and assessing spacecraft and instrument command schedules is presented for the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) project. SeaWiFS is to be carried on the polar-orbiting SeaStar satellite in 1995. The primary goal of the SeaWiFS mission is to provide global ocean chlorophyll concentrations every four days by employing onboard recorders and a twice-a-day data downlink schedule. Global Area Coverage (GAC) data with about 4.5 km resolution will be used to produce the global coverage. Higher resolution (1.1 km) Local Area Coverage (LAC) data will also be recorded to calibrate the sensor. In addition, LAC will be continuously transmitted from the satellite and received by High Resolution Picture Transmission (HRPT) stations. The methods used to generate commands for SeaWiFS employ numerous hierarchical checks as a means of maximizing coverage of the Earth's surface and fulfilling the LAC data requirements. The software code is modularized and written in Fortran with constructs that mirror the pre-defined mission rules. The overall method is specifically developed for low-orbit Earth-observing satellites with finite onboard recording capabilities and regularly scheduled data downlinks. Two software packages using the Interactive Data Language (IDL) for graphically displaying and verifying the resultant command decisions are presented. Displays can be generated that show the portions of the Earth viewed by the sensor and the spacecraft sub-orbital locations during onboard calibration activities. An IDL-based interactive method of selecting and testing LAC targets and calibration activities for command generation is also discussed.

  16. Effects of magnetometer calibration and maneuvers on accuracies of magnetometer-only attitude-and-rate determination

    NASA Technical Reports Server (NTRS)

    Challa, M.; Natanson, G.

    1998-01-01

    Two different algorithms - a deterministic magnetic-field-only algorithm and a Kalman filter for gyroless spacecraft - are used to estimate the attitude and rates of the Rossi X-Ray Timing Explorer (RXTE) using only measurements from a three-axis magnetometer. The performance of these algorithms is examined using in-flight data from various scenarios. In particular, significant enhancements in accuracy are observed when the telemetered magnetometer data are accurately calibrated using a recently developed calibration algorithm. Interesting features observed in these studies of the inertial-pointing RXTE include a remarkable sensitivity of the filter to the numerical values of the noise parameters and relatively long convergence time spans. In contrast, the accuracy of the deterministic scheme is noticeably lower as a result of the reduced rates of change of the body-fixed geomagnetic field. Preliminary results show filter-per-axis attitude accuracies ranging between 0.1 and 0.5 deg and rate accuracies between 0.001 deg/sec and 0.005 deg/sec, whereas the deterministic method needs more sophisticated techniques for smoothing time derivatives of the measured geomagnetic field to clearly distinguish both attitude and rate solutions from the numerical noise. Also included is a new theoretical development in the deterministic algorithm: the transformation of a transcendental equation in the original theory into an 8th-order polynomial equation. It is shown that this 8th-order polynomial reduces to quadratic equations in the two limiting cases discussed in previous publications: infinitely high wheel momentum and constant rates.

  17. Factor VII assay performance: an analysis of the North American Specialized Coagulation Laboratory Association proficiency testing results.

    PubMed

    Zantek, N D; Hsu, P; Refaai, M A; Ledford-Kraemer, M; Meijer, P; Van Cott, E M

    2013-06-01

    The performance of factor VII (FVII) assays currently used by clinical laboratories was examined in North American Specialized Coagulation Laboratory Association (NASCOLA) proficiency tests. Data from 12 surveys conducted between 2008 and 2010, involving 20 unique specimens plus four repeat-tested specimens, were analyzed. The number of laboratories per survey was 49-54 with a total of 1224 responses. Numerous reagent/instrument combinations were used. For FVII > 80 or <40 U/dL, 99.5% of results (859/863) were correctly classified by laboratories as normal/abnormal. Classification of specimens with 40-73 U/dL FVII was heterogeneous. Interlaboratory precision was better for normal specimens (coefficient of variation (CV) 10.7%) than for FVII<20 U/dL (CV 33.1%), with a mean CV of 17.2% per specimen. Intralaboratory precision for repeated specimens demonstrated no significant difference between the paired survey results (mean absolute difference 2.5-5.0 U/dL). For specimens with FVII >50 U/dL, among commonly used methods, one thromboplastin and one calibrator produced results 5-6 U/dL higher and another thromboplastin and calibrator produced results 5-6 U/dL lower than all other methods, and human thromboplastin differed from rabbit by +7.6 U/dL. Preliminary evidence suggests these differences could be due to the calibrator. For FVII <50 U/dL, differences among the commonly used reagents and calibrators were generally not significant. © 2013 Blackwell Publishing Ltd.
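
    For readers who want to reproduce the precision statistics, a minimal sketch of the interlaboratory coefficient-of-variation computation is shown below; the FVII values are hypothetical, not survey data.

      import numpy as np

      def interlab_cv(results):
          # Coefficient of variation (%) across laboratories for one specimen.
          r = np.asarray(results, dtype=float)
          return 100.0 * r.std(ddof=1) / r.mean()

      # Hypothetical FVII results (U/dL) reported by six laboratories
      print(interlab_cv([92.0, 105.0, 98.0, 110.0, 88.0, 101.0]))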

  18. A projector calibration method for monocular structured light system based on digital image correlation

    NASA Astrophysics Data System (ADS)

    Feng, Zhixin

    2018-02-01

    Projector calibration is crucial for a camera-projector three-dimensional (3-D) structured light measurement system, which has one camera and one projector. In this paper, a novel projector calibration method is proposed based on digital image correlation. In the method, the projector is viewed as an inverse camera, and a planar calibration board with feature points is used to calibrate the projector. During the calibration process, a random speckle pattern is projected onto the calibration board at different orientations to establish the correspondences between projector images and camera images. Thereby, a dataset for projector calibration is generated, and the projector can be calibrated using a well-established camera calibration algorithm. The experimental results confirm that the proposed method is accurate and reliable for projector calibration.
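
    A minimal sketch of the "inverse camera" idea follows: once speckle-based correspondences have mapped board feature points into projector image coordinates, the projector intrinsics can be solved with a standard camera calibration routine. Synthetic correspondences generated from an assumed projector matrix stand in for real DIC data.

      import numpy as np
      import cv2

      # Synthetic planar board: a 9x6 grid of points (z = 0), 25 mm spacing.
      grid = np.zeros((9 * 6, 3), np.float32)
      grid[:, :2] = 25.0 * np.mgrid[0:9, 0:6].T.reshape(-1, 2)

      K_true = np.array([[1400.0, 0, 640], [0, 1400.0, 400], [0, 0, 1.0]])
      obj_pts, img_pts = [], []
      rng = np.random.default_rng(0)
      for _ in range(8):  # eight simulated board orientations
          rvec = 0.2 * rng.standard_normal(3)
          tvec = np.array([0.0, 0.0, 600.0]) + 50.0 * rng.standard_normal(3)
          proj, _ = cv2.projectPoints(grid, rvec, tvec, K_true, None)
          obj_pts.append(grid)
          img_pts.append(proj.astype(np.float32))

      # Treat the projector as an inverse camera and calibrate it.
      rms, K_est, dist, rvecs, tvecs = cv2.calibrateCamera(
          obj_pts, img_pts, (1280, 800), None, None)
      print(rms, K_est)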

  19. Numerical simulation of damage evolution for ductile materials and mechanical properties study

    NASA Astrophysics Data System (ADS)

    El Amri, A.; Hanafi, I.; Haddou, M. E. Y.; Khamlichi, A.

    2015-12-01

    This paper presents the results of numerical modelling of ductile fracture and failure of elements made of 5182H111 aluminium alloy subjected to dynamic traction. The analysis was performed using the Johnson-Cook model in the ABAQUS software. The modelling difficulty in predicting ductile fracture mainly arises because there is a tremendous span of length scales from the structural problem to the micro-mechanics problem governing the material separation process. This study used the experimental results to calibrate a simple crack propagation criterion for shell elements, which are often used in practical analyses. The performance of the proposed model is in general good, and it is believed that the presented results and the experimental-numerical calibration procedure can be of use in practical finite-element simulations.

  20. Transient Inverse Calibration of Hanford Site-Wide Groundwater Model to Hanford Operational Impacts - 1943 to 1996

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cole, Charles R.; Bergeron, Marcel P.; Wurstner, Signe K.

    2001-05-31

    This report describes a new initiative to strengthen the technical defensibility of predictions made with the Hanford site-wide groundwater flow and transport model. The focus is on characterizing major uncertainties in the current model. PNNL will develop and implement a calibration approach and methodology that can be used to evaluate alternative conceptual models of the Hanford aquifer system. The calibration process will involve a three-dimensional transient inverse calibration of each numerical model to historical observations of hydraulic and water quality impacts to the unconfined aquifer system from Hanford operations since the mid-1940s.

  1. Experimental investigation of strain errors in stereo-digital image correlation due to camera calibration

    NASA Astrophysics Data System (ADS)

    Shao, Xinxing; Zhu, Feipeng; Su, Zhilong; Dai, Xiangjun; Chen, Zhenning; He, Xiaoyuan

    2018-03-01

    The strain errors in stereo-digital image correlation (DIC) due to camera calibration were investigated using precisely controlled numerical experiments and real experiments. Three-dimensional rigid body motion tests were conducted to examine the effects of camera calibration on the measured results. For a fully accurate calibration, rigid body motion causes negligible strain errors. However, for inaccurately calibrated camera parameters and a short working distance, rigid body motion will lead to more than 50-μɛ strain errors, which significantly affects the measurement. In practical measurements, it is impossible to obtain a fully accurate calibration; therefore, considerable attention should be focused on attempting to avoid these types of errors, especially for high-accuracy strain measurements. It is necessary to avoid large rigid body motions in both two-dimensional DIC and stereo-DIC.

  2. Comparison of Calibration of Sensors Used for the Quantification of Nuclear Energy Rate Deposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brun, J.; Reynard-Carette, C.; Tarchalski, M.

    This work is part of a collaborative program called GAMMA-MAJOR, 'Development and qualification of a deterministic scheme for the evaluation of GAMMA heating in MTR reactors with exploitation as example MARIA reactor and Jules Horowitz Reactor', between the National Centre for Nuclear Research of Poland, the French Atomic Energy and Alternative Energies Commission, and Aix Marseille University. One of the main objectives of this program is to optimize the quantification of nuclear heating through calculations validated against experimental measurements of radiation energy deposition carried out in irradiation reactors. The quantification of nuclear heating is a key input, especially for the thermal and mechanical design and sizing of irradiation experimental devices under specific irradiation conditions and locations. This quantity is usually determined by differential calorimeters and gamma thermometers such as those used in the experimental multi-sensor device called CARMEN ('Calorimetrie en Reacteur et Mesures des Emissions Nucleaires'). In the framework of the GAMMA-MAJOR program, a new calorimeter, called KAROLINA, was designed for the quantification of nuclear energy deposition. It is a single-cell calorimeter and was recently tested during an irradiation campaign inside the MARIA reactor in Poland. This new single-cell calorimeter differs from previous CALMOS- or CARMEN-type differential calorimeters in three main respects: its geometry, its preliminary out-of-pile calibration, and its in-pile measurement method. The differential calorimeter, which is made of two identical cells containing heaters, has a calibration method based on steady thermal states reached by simulating the nuclear energy deposition in the calorimeter sample by the Joule effect; the single-cell calorimeter, which has no heater, is instead calibrated using the transient thermal response of the sensor (heating and cooling steps). The paper concerns these two kinds of calorimetric sensors, focusing in particular on their out-of-pile calibrations. First, the characteristics of the sensor designs are detailed (geometry, dimensions, sample material, assembly, instrumentation). Then the out-of-pile calibration methods are described. Furthermore, numerical results obtained from 2D axisymmetric thermal simulations (Finite Element Method, CAST3M) and experimental results are presented for each sensor, and the thermal behaviours of the two sensors are compared. To conclude, the advantages and drawbacks of each sensor are discussed, especially regarding measurement methods. (authors)

  3. Online geometric calibration of cone-beam computed tomography for arbitrary imaging objects.

    PubMed

    Meng, Yuanzheng; Gong, Hui; Yang, Xiaoquan

    2013-02-01

    A novel online method based on the symmetry property of the sum of projections (SOP) is proposed to obtain the geometric parameters in cone-beam computed tomography (CBCT). This method requires no calibration phantom and can be used in circular trajectory CBCT with arbitrary cone angles. An objective function is deduced to illustrate the dependence of the symmetry of SOP on geometric parameters, which will converge to its minimum when the geometric parameters achieve their true values. Thus, by minimizing the objective function, we can obtain the geometric parameters for image reconstruction. To validate this method, numerical phantom studies with different noise levels are simulated. The results show that our method is insensitive to the noise and can determine the skew (in-plane rotation angle of the detector), the roll (rotation angle around the projection of the rotation axis on the detector), and the rotation axis with high accuracy, while the mid-plane and source-to-detector distance will be obtained with slightly lower accuracy. However, our simulation studies validate that the errors of the latter two parameters brought by our method will hardly degrade the quality of reconstructed images. The small animal studies show that our method is able to deal with arbitrary imaging objects. In addition, the results of the reconstructed images in different slices demonstrate that we have achieved comparable image quality in the reconstructions as some offline methods.
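
    The core idea, minimizing an asymmetry measure of the SOP, can be illustrated with a one-dimensional toy: a symmetric SOP shifted by an unknown detector offset is re-centered by the offset value that minimizes the mismatch with its mirror image. This is only a sketch of the principle, not the authors' full objective function.

      import numpy as np
      from scipy.optimize import minimize_scalar

      u = np.linspace(-1, 1, 1001)
      true_offset = 0.07
      sop = np.exp(-((u - true_offset) / 0.3) ** 2)  # SOP symmetric about the offset

      def asymmetry(offset):
          centered = np.interp(u + offset, u, sop)   # shift the SOP back by the trial offset
          return np.sum((centered - centered[::-1]) ** 2)

      res = minimize_scalar(asymmetry, bounds=(-0.5, 0.5), method="bounded")
      print(res.x)  # close to 0.07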

  4. A short-term ensemble wind speed forecasting system for wind power applications

    NASA Astrophysics Data System (ADS)

    Baidya Roy, S.; Traiteur, J. J.; Callicutt, D.; Smith, M.

    2011-12-01

    This study develops an adaptive, blended forecasting system to provide accurate wind speed forecasts 1 hour ahead of time for wind power applications. The system consists of an ensemble of 21 forecasts with different configurations of the Weather Research and Forecasting Single Column Model (WRFSCM) and a persistence model. The ensemble is calibrated against observations for a 2 month period (June-July, 2008) at a potential wind farm site in Illinois using the Bayesian Model Averaging (BMA) technique. The forecasting system is evaluated against observations for August 2008 at the same site. The calibrated ensemble forecasts significantly outperform the forecasts from the uncalibrated ensemble while significantly reducing forecast uncertainty under all environmental stability conditions. The system also generates significantly better forecasts than persistence, autoregressive (AR) and autoregressive moving average (ARMA) models during the morning transition and the diurnal convective regimes. This forecasting system is computationally more efficient than traditional numerical weather prediction models and can generate a calibrated forecast, including model runs and calibration, in approximately 1 minute. Currently, hour-ahead wind speed forecasts are almost exclusively produced using statistical models. However, numerical models have several distinct advantages over statistical models including the potential to provide turbulence forecasts. Hence, there is an urgent need to explore the role of numerical models in short-term wind speed forecasting. This work is a step in that direction and is likely to trigger a debate within the wind speed forecasting community.
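
    A minimal sketch of the BMA calibration step is given below, assuming Gaussian member kernels with a shared spread fitted by expectation-maximization; the synthetic forecasts stand in for the WRFSCM/persistence ensemble members.

      import numpy as np

      rng = np.random.default_rng(1)
      obs = 8.0 + rng.standard_normal(200)            # observed wind speeds (synthetic)
      fcst = obs[:, None] + rng.standard_normal((200, 3)) * [0.5, 1.0, 2.0]

      w, sigma = np.full(3, 1 / 3), 1.0
      for _ in range(100):                            # EM iterations
          dens = np.exp(-0.5 * ((obs[:, None] - fcst) / sigma) ** 2) / sigma
          z = w * dens
          z /= z.sum(axis=1, keepdims=True)           # E-step: responsibilities
          w = z.mean(axis=0)                          # M-step: member weights
          sigma = np.sqrt((z * (obs[:, None] - fcst) ** 2).sum() / obs.size)
      print(w, sigma)  # the least-noisy member earns the largest weight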

  5. A Self-Calibrating Radar Sensor System for Measuring Vital Signs.

    PubMed

    Huang, Ming-Chun; Liu, Jason J; Xu, Wenyao; Gu, Changzhan; Li, Changzhi; Sarrafzadeh, Majid

    2016-04-01

    Vital signs (i.e., heartbeat and respiration) are crucial physiological signals that are useful in numerous medical applications. The process of measuring these signals should be simple, reliable, and comfortable for patients. In this paper, a noncontact self-calibrating vital signs monitoring system based on the Doppler radar is presented. The system hardware and software were designed with a four-tiered layer structure. To enable accurate vital signs measurement, baseband signals in the radar sensor were modeled and a framework for signal demodulation was proposed. Specifically, a signal model identification method was formulated into a quadratically constrained l1 minimization problem and solved using the upper bound and linear matrix inequality (LMI) relaxations. The performance of the proposed system was comprehensively evaluated using three experimental sets, and the results indicated that this system can be used to effectively measure human vital signs.

  6. A Novel Miniature Wide-band Radiometer for Space Applications

    NASA Astrophysics Data System (ADS)

    Sykulska-Lawrence, H. M.

    2016-12-01

    Design, development and testing of a novel miniaturised infrared radiometer is described. The instrument opens up new possibilities in planetary science of deployment on smaller platforms - such as unmanned aerial vehicles and microprobes - to enable study of a planet's radiation balance, as well as terrestrial volcano plumes and trace gases in planetary atmospheres, using low-cost long-term observations. Thus a key enabling development is that of miniaturised, low-power and well-calibrated instrumentation. The talk reports advances in miniature technology to perform high accuracy visible / IR remote sensing measurements. The infrared radiometer is akin to those widely used for remote sensing for earth and space applications, which are currently either large instruments on orbiting platforms or medium-sized payloads on balloons. We use MEMS microfabrication techniques to shrink a conventional design, while combining the calibration benefits of large (>1 kg) radiometers with the flexibility and portability of a <10 g device. The instrument measures broadband (0.2 to 100 µm) upward and downward radiation fluxes, showing improvements in calibration stability and accuracy, with built-in calibration capability, incorporating traceability to temperature standards such as ITS-90. The miniature instrument described here was derived from a concept developed for a European Space Agency study, Dalomis (Proc. of 'i-SAIRAS 2005', Munich, 2005), which involved dropping multiple probes into the atmosphere of Venus from a balloon to sample numerous parts of the complex weather systems on the planet. Data from such an in-situ instrument would complement information from a satellite remote sensing instrument or balloon radiosonde. Moreover, the addition of an internal calibration standard facilitates comparisons between datasets. One of the main challenges for a reduced size device is calibration. We use an in-situ method whereby a blackbody source is integrated within the device and a micromirror switches the input to the detector between the measured signal and the calibration target. Achieving two well-calibrated radiometer channels within a small (<10 g) payload is made possible by using modern micromachining techniques.

  7. Calibration of a single hexagonal NaI(Tl) detector using a new numerical method based on the efficiency transfer method

    NASA Astrophysics Data System (ADS)

    Abbas, Mahmoud I.; Badawi, M. S.; Ruskov, I. N.; El-Khatib, A. M.; Grozdanov, D. N.; Thabet, A. A.; Kopatch, Yu. N.; Gouda, M. M.; Skoy, V. R.

    2015-01-01

    Gamma-ray detector systems are important instruments in a broad range of sciences, and new setups are continually being developed. The most recent step in the evolution of detectors for nuclear spectroscopy is the construction of large arrays of detectors of different shapes (for example, conical, pentagonal, hexagonal, etc.) and sizes, where the performance and the efficiency can be increased. In this work, a new direct numerical method (NAM), in an integral form and based on the efficiency transfer (ET) method, is used to calculate the full-energy peak efficiency of a single hexagonal NaI(Tl) detector. The algorithms and the calculations of the effective solid angle ratios for a point (isotropically irradiating) gamma-source situated coaxially at different distances from the detector front-end surface, taking into account the attenuation of the gamma-rays in the detector's material, end-cap, and the other materials between the gamma-source and the detector, form the core of this ET method. The full-energy peak efficiency values calculated by the NAM are found to be in good agreement with the measured experimental data.

  8. Investigation Study on Determination of Fracture Strain and Fracture Forming Limit Curve Using Different Experimental and Numerical Methods

    NASA Astrophysics Data System (ADS)

    Farahnak, P.; Urbanek, M.; Džugan, J.

    2017-09-01

    The Forming Limit Curve (FLC) is a well-known tool for the evaluation of failure in sheet metal forming processes. However, its experimental determination and evaluation are rather complex, and from a theoretical point of view the FLC describes the initiation of instability, not fracture. In recent years, Digital Image Correlation (DIC) techniques have been developed extensively. Throughout this paper, all measurements were done using DIC, and, as reported in the literature, different approaches to capture necking and fracture phenomena were investigated: the Cross Section Method (CSM), the Time Dependent Method (TDM), and the Thinning Method (TM). Each of these methods has advantages and disadvantages. Moreover, a cruciform specimen was used in order to cover the whole FLC in the range between uniaxial and equi-biaxial tension, as an alternative to the Nakajima test. Given the above-mentioned uncertainty about the fracture strain, some advanced numerical failure models can describe necking and fracture phenomena accurately while accounting for anisotropic effects. In this paper, dog-bone, notch, and circular disk specimens are used to calibrate the Johnson-Cook (J-C) fracture model. The results are discussed for the mild steel DC01.

  9. Bayesian calibration of coarse-grained forces: Efficiently addressing transferability

    NASA Astrophysics Data System (ADS)

    Patrone, Paul N.; Rosch, Thomas W.; Phelan, Frederick R.

    2016-04-01

    Generating and calibrating forces that are transferable across a range of state-points remains a challenging task in coarse-grained (CG) molecular dynamics. In this work, we present a coarse-graining workflow, inspired by ideas from uncertainty quantification and numerical analysis, to address this problem. The key idea behind our approach is to introduce a Bayesian correction algorithm that uses functional derivatives of CG simulations to rapidly and inexpensively recalibrate initial estimates f0 of forces anchored by standard methods such as force-matching. Taking density-temperature relationships as a running example, we demonstrate that this algorithm, in concert with various interpolation schemes, can be used to efficiently compute physically reasonable force curves on a fine grid of state-points. Importantly, we show that our workflow is robust to several choices available to the modeler, including the interpolation schemes and tools used to construct f0. In a related vein, we also demonstrate that our approach can speed up coarse-graining by reducing the number of atomistic simulations needed as inputs to standard methods for generating CG forces.
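
    The recalibration step can be caricatured as a Newton-like update: a linear-response estimate J of how simulated observables react to force changes converts the observable mismatch into a force correction. The sketch below uses a synthetic linear forward model; it is a schematic of the idea, not the authors' algorithm.

      import numpy as np

      rng = np.random.default_rng(2)
      J = rng.standard_normal((4, 4))        # functional-derivative estimate (toy)
      f0 = rng.standard_normal(4)            # initial forces, e.g. from force-matching
      target = np.zeros(4)                   # reference (atomistic) observables
      observed = J @ f0                      # toy linear forward model

      f1 = f0 + np.linalg.solve(J, target - observed)   # one corrective update
      print(np.allclose(J @ f1, target))                # True in this linear toy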

  10. Application and Analysis of Measurement Model for Calibrating Spatial Shear Surface in Triaxial Test

    NASA Astrophysics Data System (ADS)

    Zhang, Zhihua; Qiu, Hongsheng; Zhang, Xiedong; Zhang, Hang

    2017-12-01

    The discrete element method has great advantages in simulating the contacts, fractures, and large displacements and deformations between particles. In order to analyze the spatial distribution of the shear surface in the three-dimensional triaxial test, a measurement model is inserted into the numerical triaxial model, which is generated by a weighted-average assembling method. Because the internal shear surface is not visible in the laboratory, it is largely insufficient to judge its trend only from the superficial cracks of the sheared sample; therefore, the measurement model is introduced. The trend of the internal shear zone is analyzed according to the variations of porosity, coordination number, and volumetric strain in each layer. As a case study at a confining stress of 0.8 MPa, the spatial shear surface is calibrated against the distribution of rotated particles and the theoretical value, with the characteristic signatures of increased porosity, decreased coordination number, and increased volumetric strain, which shows that the measurement model used in the three-dimensional model is applicable.

  11. Research on atmospheric CO2 remote sensing with open-path tunable diode laser absorption spectroscopy and comparison methods

    NASA Astrophysics Data System (ADS)

    Xin, Fengxin; Guo, Jinjia; Sun, Jiayun; Li, Jie; Zhao, Chaofang; Liu, Zhishen

    2017-06-01

    An open-path atmospheric CO2 measurement system was built based on tunable diode laser absorption spectroscopy (TDLAS). The CO2 absorption line near 2 μm was selected, the atmospheric CO2 was measured with direct absorption spectroscopy, and a comparative experiment was carried out with open-path multipoint measuring instruments. The detection limit of the TDLAS system is 1.94×10⁻⁶. A calibration experiment on three AZ-7752 handheld CO2 measuring instruments was carried out with the Los Gatos Research gas analyzer; the consistency of the results was good, and the handheld instruments could be used alongside the TDLAS system after numerical calibration. Compared against the three AZ-7752 instruments individually and against their average, the correlation coefficients are 0.8828, 0.9004, 0.9079, and 0.9393, respectively, which shows that the open-path TDLAS correlates best with the average of the three AZ-7752 instruments and measures the concentration of atmospheric CO2 accurately. Multipoint measurement provides a convenient comparison method for open-path TDLAS.
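
    A sketch of the direct-absorption retrieval behind such a system is shown below: with the Beer-Lambert law, the path-averaged mole fraction follows from the integrated absorbance, line strength, path length, and air number density. All numerical values are illustrative assumptions, not the paper's parameters.

      def mole_fraction(A, S, n_total, path_cm):
          # Beer-Lambert: A = S * x * n_total * L  =>  x = A / (S * n_total * L)
          return A / (S * n_total * path_cm)

      n_air = 2.5e19   # molecules/cm^3 near 1 atm, 290 K (approximate)
      S = 1.2e-21      # line strength, cm^-1/(molecule cm^-2) (assumed)
      A = 1.0e-3       # measured integrated absorbance, cm^-1 (assumed)
      print(mole_fraction(A, S, n_air, 10000.0) * 1e6, "ppm")  # 100 m path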

  12. Automatic multi-camera calibration for deployable positioning systems

    NASA Astrophysics Data System (ADS)

    Axelsson, Maria; Karlsson, Mikael; Rudner, Staffan

    2012-06-01

    Surveillance with automated positioning and tracking of subjects and vehicles in 3D is desired in many defence and security applications. Camera systems with stereo or multiple cameras are often used for 3D positioning. In such systems, accurate camera calibration is needed to obtain a reliable 3D position estimate. There is also a need for automated camera calibration to facilitate fast deployment of semi-mobile multi-camera 3D positioning systems. In this paper we investigate a method for automatic calibration of the extrinsic camera parameters (relative camera pose and orientation) of a multi-camera positioning system. It is based on estimation of the essential matrix between each camera pair using the 5-point method for intrinsically calibrated cameras. The method is compared to a manual calibration method using real HD video data from a field trial with a multi-camera positioning system. The method is also evaluated on simulated data from a stereo camera model. The results show that the reprojection error of the automated camera calibration method is close to or smaller than the error for the manual calibration method and that the automated calibration method can replace the manual calibration.
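
    The extrinsic step described above can be sketched with OpenCV: the essential matrix between an intrinsically calibrated pair is estimated with the 5-point method inside RANSAC, and the relative pose is recovered from it. Synthetic noise-free correspondences are generated here in place of real detections.

      import numpy as np
      import cv2

      rng = np.random.default_rng(3)
      K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1.0]])
      X = rng.uniform([-1, -1, 4], [1, 1, 8], (100, 3))    # 3-D scene points

      R_true, _ = cv2.Rodrigues(np.array([0.0, 0.3, 0.0])) # camera-2 rotation
      t_true = np.array([[1.0], [0.0], [0.0]])             # unit-norm baseline

      def project(X, R, t):
          x = (K @ (R @ X.T + t)).T
          return (x[:, :2] / x[:, 2:]).astype(np.float32)

      pts1 = project(X, np.eye(3), np.zeros((3, 1)))
      pts2 = project(X, R_true, t_true)

      E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
      _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
      print(np.allclose(R, R_true, atol=1e-3), t.ravel())  # pose up to scale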

  13. Application of nonlinear-regression methods to a ground-water flow model of the Albuquerque Basin, New Mexico

    USGS Publications Warehouse

    Tiedeman, C.R.; Kernodle, J.M.; McAda, D.P.

    1998-01-01

    This report documents the application of nonlinear-regression methods to a numerical model of ground-water flow in the Albuquerque Basin, New Mexico. In the Albuquerque Basin, ground water is the primary source for most water uses. Ground-water withdrawal has steadily increased since the 1940's, resulting in large declines in water levels in the Albuquerque area. A ground-water flow model was developed in 1994 and revised and updated in 1995 for the purpose of managing basin ground- water resources. In the work presented here, nonlinear-regression methods were applied to a modified version of the previous flow model. Goals of this work were to use regression methods to calibrate the model with each of six different configurations of the basin subsurface and to assess and compare optimal parameter estimates, model fit, and model error among the resulting calibrations. The Albuquerque Basin is one in a series of north trending structural basins within the Rio Grande Rift, a region of Cenozoic crustal extension. Mountains, uplifts, and fault zones bound the basin, and rock units within the basin include pre-Santa Fe Group deposits, Tertiary Santa Fe Group basin fill, and post-Santa Fe Group volcanics and sediments. The Santa Fe Group is greater than 14,000 feet (ft) thick in the central part of the basin. During deposition of the Santa Fe Group, crustal extension resulted in development of north trending normal faults with vertical displacements of as much as 30,000 ft. Ground-water flow in the Albuquerque Basin occurs primarily in the Santa Fe Group and post-Santa Fe Group deposits. Water flows between the ground-water system and surface-water bodies in the inner valley of the basin, where the Rio Grande, a network of interconnected canals and drains, and Cochiti Reservoir are located. Recharge to the ground-water flow system occurs as infiltration of precipitation along mountain fronts and infiltration of stream water along tributaries to the Rio Grande; subsurface flow from adjacent regions; irrigation and septic field seepage; and leakage through the Rio Grande, canal, and Cochiti Reservoir beds. Ground water is discharged from the basin by withdrawal; evapotranspiration; subsurface flow; and flow to the Rio Grande, canals, and drains. The transient, three-dimensional numerical model of ground-water flow to which nonlinear-regression methods were applied simulates flow in the Albuquerque Basin from 1900 to March 1995. Six different basin subsurface configurations are considered in the model. These configurations are designed to test the effects of (1) varying the simulated basin thickness, (2) including a hypothesized hydrogeologic unit with large hydraulic conductivity in the western part of the basin (the west basin high-K zone), and (3) substantially lowering the simulated hydraulic conductivity of a fault in the western part of the basin (the low-K fault zone). The model with each of the subsurface configurations was calibrated using a nonlinear least- squares regression technique. The calibration data set includes 802 hydraulic-head measurements that provide broad spatial and temporal coverage of basin conditions, and one measurement of net flow from the Rio Grande and drains to the ground-water system in the Albuquerque area. Data are weighted on the basis of estimates of the standard deviations of measurement errors. 
The 10 to 12 parameters to which the calibration data as a whole are generally most sensitive were estimated by nonlinear regression, whereas the remaining model parameter values were specified. Results of model calibration indicate that the optimal parameter estimates as a whole are most reasonable in calibrations of the model with configurations 3 (which contains 1,600-ft-thick basin deposits and the west basin high-K zone), 4 (which contains 5,000-ft-thick basin deposits…
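
    The weighted nonlinear-regression machinery can be sketched compactly: residuals are scaled by the reciprocal standard deviations of measurement error before least-squares minimization. The toy forward model below stands in for a groundwater-flow simulation and is purely illustrative.

      import numpy as np
      from scipy.optimize import least_squares

      rng = np.random.default_rng(4)
      t = np.linspace(0.0, 10.0, 40)

      def forward(p):
          # Toy "model": an exponential-recovery curve in place of a flow model.
          return p[0] * (1.0 - np.exp(-t / p[1]))

      p_true = np.array([5.0, 3.0])
      sd = 0.2 * np.ones_like(t)   # assumed measurement-error standard deviations
      obs = forward(p_true) + sd * rng.standard_normal(t.size)

      fit = least_squares(lambda p: (forward(p) - obs) / sd, x0=[1.0, 1.0])
      print(fit.x)  # close to p_true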

  14. A Novel Multi-Camera Calibration Method based on Flat Refractive Geometry

    NASA Astrophysics Data System (ADS)

    Huang, S.; Feng, M. C.; Zheng, T. X.; Li, F.; Wang, J. Q.; Xiao, L. F.

    2018-03-01

    Multi-camera calibration plays an important role in many fields. In this paper, we present a novel multi-camera calibration method based on flat refractive geometry. All cameras acquire calibration images of a transparent glass calibration board (TGCB) at the same time. The use of a TGCB leads to refraction, which can generate calibration errors. The theory of flat refractive geometry is employed to eliminate this error; the new method thus accounts for the refraction introduced by the TGCB. Moreover, the bundle adjustment method is used to minimize the reprojection error and obtain optimized calibration results. Finally, the four-camera calibration results on real data show that the mean value and standard deviation of the reprojection error of our method are 4.3411e-05 and 0.4553 pixel, respectively. The experimental results show that the proposed method is accurate and reliable.

  15. Computer vision

    NASA Technical Reports Server (NTRS)

    Gennery, D.; Cunningham, R.; Saund, E.; High, J.; Ruoff, C.

    1981-01-01

    The field of computer vision is surveyed and assessed, key research issues are identified, and possibilities for a future vision system are discussed. The problems of descriptions of two and three dimensional worlds are discussed. The representation of such features as texture, edges, curves, and corners are detailed. Recognition methods are described in which cross correlation coefficients are maximized or numerical values for a set of features are measured. Object tracking is discussed in terms of the robust matching algorithms that must be devised. Stereo vision, camera control and calibration, and the hardware and systems architecture are discussed.

  16. Computer Image Analysis of Histochemically-Labeled Acetylcholinesterase.

    DTIC Science & Technology

    1984-11-30

    This project used computer image analysis in conjunction with histochemical techniques to describe the distribution of acetylcholinesterase (AChE) activity in nervous and muscular tissue in rats treated with organophosphates (OPs). We began by adopting a version of the AChE staining method as modified by Hanker, which is consistent with the optical properties of our video system. We wrote computer programs to provide a numeric quantity that represents the degree of staining in a tissue section. The staining was calibrated by…

  17. Self-recalibration of a robot-assisted structured-light-based measurement system.

    PubMed

    Xu, Jing; Chen, Rui; Liu, Shuntao; Guan, Yong

    2017-11-10

    The structured-light-based measurement method is widely employed in numerous fields. However, for industrial inspection, to achieve complete scanning of a work piece and overcome occlusion, the measurement system needs to be moved to different viewpoints. Moreover, frequent reconfiguration of the measurement system may be needed based on the size of the measured object, making the self-recalibration of extrinsic parameters indispensable. To this end, this paper proposes an automatic self-recalibration and reconstruction method, wherein a robot arm is employed to move the measurement system for complete scanning; the self-recalibration is achieved using fundamental matrix calculations and point cloud registration without the need for an accurate calibration gauge. Experimental results demonstrate the feasibility and accuracy of our method.
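
    The rigid-registration half of the recalibration can be sketched with the classical Kabsch/SVD solution, which recovers the rotation and translation aligning two overlapping scans; synthetic clouds are used here in place of measured data.

      import numpy as np

      rng = np.random.default_rng(5)
      P = rng.standard_normal((200, 3))                      # source cloud
      R_true, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # random orthonormal basis
      R_true *= np.sign(np.linalg.det(R_true))               # force det = +1
      t_true = np.array([0.3, -0.1, 0.8])
      Q = P @ R_true.T + t_true                              # target cloud

      def kabsch(P, Q):
          # Best-fit rigid transform with Q ~ P @ R.T + t (SVD of the cross-covariance).
          Pc, Qc = P - P.mean(0), Q - Q.mean(0)
          U, _, Vt = np.linalg.svd(Pc.T @ Qc)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
          R = Vt.T @ D @ U.T
          return R, Q.mean(0) - P.mean(0) @ R.T

      R_est, t_est = kabsch(P, Q)
      print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))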

  18. Numerical dissipation vs. subgrid-scale modelling for large eddy simulation

    NASA Astrophysics Data System (ADS)

    Dairay, Thibault; Lamballais, Eric; Laizet, Sylvain; Vassilicos, John Christos

    2017-05-01

    This study presents an alternative way to perform large eddy simulation based on a targeted numerical dissipation introduced by the discretization of the viscous term. It is shown that this regularisation technique is equivalent to the use of spectral vanishing viscosity. The flexibility of the method ensures high-order accuracy while controlling the level and spectral features of this purely numerical viscosity. A Pao-like spectral closure based on physical arguments is used to scale this numerical viscosity a priori. It is shown that this way of approaching large eddy simulation is more efficient and accurate than the use of the very popular Smagorinsky model in standard as well as in dynamic version. The main strength of being able to correctly calibrate numerical dissipation is the possibility to regularise the solution at the mesh scale. Thanks to this property, it is shown that the solution can be seen as numerically converged. Conversely, the two versions of the Smagorinsky model are found unable to ensure regularisation while showing a strong sensitivity to numerical errors. The originality of the present approach is that it can be viewed as implicit large eddy simulation, in the sense that the numerical error is the source of artificial dissipation, but also as explicit subgrid-scale modelling, because of the equivalence with spectral viscosity prescribed on a physical basis.

  19. The difficulty of measuring the absorption of scattered sunlight by H2O and CO2 in volcanic plumes: A comment on Pering et al. “A novel and inexpensive method for measuring volcanic plume water fluxes at high temporal resolution,” Remote Sens. 2017, 9, 146

    USGS Publications Warehouse

    Kern, Christoph

    2017-01-01

    In their recent study, Pering et al. (2017) presented a novel method for measuring volcanic water vapor fluxes. Their method is based on imaging volcanic gas and aerosol plumes using a camera sensitive to the near-infrared (NIR) absorption of water vapor. The imaging data are empirically calibrated by comparison with in situ water measurements made within the plumes. Though the presented method may give reasonable results over short time scales, the authors fail to recognize the sensitivity of the technique to light scattering on aerosols within the plume. In fact, the signals measured by Pering et al. are not related to the absorption of NIR radiation by water vapor within the plume. Instead, the measured signals are most likely caused by a change in the effective light path of the detected radiation through the atmospheric background water vapor column. Therefore, their method is actually based on establishing an empirical relationship between in-plume scattering efficiency and plume water content. Since this relationship is sensitive to plume aerosol abundance and numerous environmental factors, the method will only yield accurate results if it is calibrated very frequently using other measurement techniques.

  20. Calibration of the NASA GRC 16 In. Mass-Flow Plug

    NASA Technical Reports Server (NTRS)

    Davis, David O.; Friedlander, David J.; Saunders, J. David; Frate, Franco C.; Foster, Lancert E.

    2012-01-01

    The results of an experimental calibration of the NASA Glenn Research Center 16 in. Mass-Flow Plug (MFP) are presented and compared to a previously obtained calibration of a 15 in. Mass-Flow Plug. An ASME low-beta, long-radius nozzle was used as the calibration reference. The discharge coefficient for the ASME nozzle was obtained by numerically simulating the flow through the nozzle with the WIND-US code. The results showed agreement between the 15 in. and 16 in. MFPs for area ratios (MFP to pipe area ratio) greater than 0.6 but deviate at area ratios below this value for reasons that are not fully understood. A general uncertainty analysis was also performed and indicates that large uncertainties in the calibration are present for low MFP area ratios.
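
    To make the role of the discharge coefficient concrete, the sketch below applies it in the standard ASME differential-pressure nozzle equation (incompressible form, expansion factor omitted); all numbers are illustrative, not facility values.

      import math

      def nozzle_mdot(Cd, d_throat, beta, rho, dp):
          # Mass flow (kg/s): Cd * A_throat * sqrt(2 * rho * dp) / sqrt(1 - beta^4)
          area = math.pi * d_throat ** 2 / 4.0
          return Cd * area * math.sqrt(2.0 * rho * dp) / math.sqrt(1.0 - beta ** 4)

      # Hypothetical conditions: 0.1 m throat, low beta, air at ~1.2 kg/m^3
      print(nozzle_mdot(Cd=0.99, d_throat=0.1, beta=0.25, rho=1.2, dp=5000.0))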

  1. Calibration of the NASA Glenn Research Center 16 in. Mass-Flow Plug

    NASA Technical Reports Server (NTRS)

    Davis, David O.; Friedlander, David J.; Saunders, J. David; Frate, Franco C.; Foster, Lancert E.

    2014-01-01

    The results of an experimental calibration of the NASA Glenn Research Center 16 in. Mass-Flow Plug (MFP) are presented and compared to a previously obtained calibration of a 15 in. Mass-Flow Plug. An ASME low-beta, long-radius nozzle was used as the calibration reference. The discharge coefficient for the ASME nozzle was obtained by numerically simulating the flow through the nozzle with the WIND-US code. The results showed agreement between the 15 and 16 in. MFPs for area ratios (MFP to pipe area ratio) greater than 0.6 but deviate at area ratios below this value for reasons that are not fully understood. A general uncertainty analysis was also performed and indicates that large uncertainties in the calibration are present for low MFP area ratios.

  2. FEM simulation of the die compaction of pharmaceutical products: influence of visco-elastic phenomena and comparison with experiments.

    PubMed

    Diarra, Harona; Mazel, Vincent; Busignies, Virginie; Tchoreloff, Pierre

    2013-09-10

    This work studies the influence of visco-elastic behavior in finite element method (FEM) modeling of the die compaction of pharmaceutical products and how such visco-elastic behavior may improve the agreement between experimental and simulated compression curves. The modeling of the process was conducted on a pharmaceutical excipient, microcrystalline cellulose (MCC), by using the Drucker-Prager cap model coupled with creep behavior in the Abaqus® software. The experimental data were obtained on a compaction simulator (STYLCAM 200R). The elastic deformation of the press was determined by performing experimental tests on a calibration disk and was introduced into the simulation. Numerical optimization was performed to characterize the creep parameters. The use of creep behavior in the simulations clearly improved the agreement between the numerical and experimental compression curves (stresses, thickness), mainly during the unloading part of the compaction cycle. For the first time, it was possible to reproduce numerically the fact that the minimum tablet thickness is not reached at the maximum compression stress. This study proves that creep behavior must be taken into account when modeling the compaction of pharmaceutical products with FEM methods. Copyright © 2013 Elsevier B.V. All rights reserved.

  3. A calibration method of infrared LVF based spectroradiometer

    NASA Astrophysics Data System (ADS)

    Liu, Jiaqing; Han, Shunli; Liu, Lei; Hu, Dexin

    2017-10-01

    In this paper, a calibration method for an LVF-based spectroradiometer is summarized, including spectral calibration and radiometric calibration. The spectral calibration proceeds as follows: first, the relationship between the stepping motor's step number and the transmission wavelength is derived by theoretical calculation, including a non-linearity correction of the LVF; second, a line-to-line method is used to correct the theoretical wavelength; finally, the 3.39 μm and 10.69 μm lasers are used for spectral calibration validation, showing that the sought accuracy of 0.1% or better is achieved. A new sub-region multi-point calibration method is used for the radiometric calibration to improve accuracy; results show that the sought accuracy of 1% or better is achieved.

  4. A comparison of solute-transport solution techniques and their effect on sensitivity analysis and inverse modeling results

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2001-01-01

    Five common numerical techniques for solving the advection-dispersion equation (finite difference, predictor corrector, total variation diminishing, method of characteristics, and modified method of characteristics) were tested using simulations of a controlled conservative tracer-test experiment through a heterogeneous, two-dimensional sand tank. The experimental facility was constructed using discrete, randomly distributed, homogeneous blocks of five sand types. This experimental model provides an opportunity to compare the solution techniques: the heterogeneous hydraulic-conductivity distribution of known structure can be accurately represented by a numerical model, and detailed measurements can be compared with simulated concentrations and total flow through the tank. The present work uses this opportunity to investigate how three common types of results - simulated breakthrough curves, sensitivity analysis, and calibrated parameter values - change in this heterogeneous situation given the different methods of simulating solute transport. The breakthrough curves show that simulated peak concentrations, even at very fine grid spacings, varied between the techniques because of different amounts of numerical dispersion. Sensitivity-analysis results revealed: (1) a high correlation between hydraulic conductivity and porosity given the concentration and flow observations used, so that both could not be estimated; and (2) that the breakthrough curve data did not provide enough information to estimate individual values of dispersivity for the five sands. This study demonstrates that the choice of assigned dispersivity and the amount of numerical dispersion present in the solution technique influence estimated hydraulic conductivity values to a surprising degree.

  5. Pressure Distribution on Inner Wall of Parabolic Nozzle in Laser Propulsion with Single Pulse

    NASA Astrophysics Data System (ADS)

    Cui, Cunyan; Hong, Yanji; Wen, Ming; Song, Junling; Fang, Juan

    2011-11-01

    A system based on dynamic pressure sensors was established to study the time-resolved pressure distribution on the inner wall of a parabolic nozzle in laser propulsion. Dynamic and static calibrations of the test system were made, and the results showed that the frequency response was up to 412 kHz and the linearity error was less than 10%. The experimental model was a parabolic nozzle, and three test points were preset along one generating line. This study showed that the experimental results agreed well with those obtained by numerical calculation in terms of the pressure evolution tendency. The peak value from the calculation was higher than that from the experiment at each tested orifice because of the limitations of the numerical models. The results of this study are very useful for analyzing the energy deposition in laser propulsion and for refining numerical models.

  6. Optics-Only Calibration of a Neural-Net Based Optical NDE Method for Structural Health Monitoring

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.

    2004-01-01

    A calibration process is presented that uses optical measurements alone to calibrate a neural-net based NDE method. The method itself detects small changes in the vibration mode shapes of structures. The optics-only calibration process confirms previous work showing that the sensitivity to vibration-amplitude changes can be as small as 10 nanometers; a more practical value in an NDE service laboratory is shown to be 50 nanometers. Both model-generated and experimental calibrations are demonstrated using two implementations of the calibration technique. The implementations are based on previously published demonstrations of the NDE method and an alternative calibration procedure that depends on comparing neural-net and point-sensor measurements. The optics-only calibration method, unlike the alternative method, does not require modifications of the structure being tested or the creation of calibration objects. The calibration process can be used to test improvements in the NDE process and to develop a vibration-mode independence of damage-detection sensitivity. The calibration effort was intended to support NASA's objective to promote safety in the operation of ground test facilities and aviation safety in general by allowing the detection of the gradual onset of structural changes and damage.

  7. A Numerical Study on Toppling Failure of a Jointed Rock Slope by Using the Distinct Lattice Spring Model

    NASA Astrophysics Data System (ADS)

    Lian, Ji-Jian; Li, Qin; Deng, Xi-Fei; Zhao, Gao-Feng; Chen, Zu-Yu

    2018-02-01

    In this work, toppling failure of a jointed rock slope is studied by using the distinct lattice spring model (DLSM). The gravity increase method (GIM) with a sub-step loading scheme is implemented in the DLSM to mimic the loading conditions of a centrifuge test. A classical centrifuge test for a jointed rock slope, previously simulated by the finite element method and the discrete element model, is simulated by using the GIM-DLSM. Reasonable boundary conditions are obtained through detailed comparisons among existing numerical solutions with experimental records. With calibrated boundary conditions, the influences of the tensional strength of the rock block, cohesion and friction angles of the joints, as well as the spacing and inclination angles of the joints, on the flexural toppling failure of the jointed rock slope are investigated by using the GIM-DLSM, leading to some insight into evaluating the state of flexural toppling failure for a jointed slope and effectively preventing the flexural toppling failure of jointed rock slopes.

  8. A consistent modelling methodology for secondary settling tanks: a reliable numerical method.

    PubMed

    Bürger, Raimund; Diehl, Stefan; Farås, Sebastian; Nopens, Ingmar; Torfs, Elena

    2013-01-01

    The consistent modelling methodology for secondary settling tanks (SSTs) leads to a partial differential equation (PDE) of nonlinear convection-diffusion type as a one-dimensional model for the solids concentration as a function of depth and time. This PDE includes a flux that depends discontinuously on spatial position modelling hindered settling and bulk flows, a singular source term describing the feed mechanism, a degenerating term accounting for sediment compressibility, and a dispersion term for turbulence. In addition, the solution itself is discontinuous. A consistent, reliable and robust numerical method that properly handles these difficulties is presented. Many constitutive relations for hindered settling, compression and dispersion can be used within the model, allowing the user to switch on and off effects of interest depending on the modelling goal as well as investigate the suitability of certain constitutive expressions. Simulations show the effect of the dispersion term on effluent suspended solids and total sludge mass in the SST. The focus is on correct implementation whereas calibration and validation are not pursued.

  9. Wavelength calibration of dispersive near-infrared spectrometer using relative k-space distribution with low coherence interferometer

    NASA Astrophysics Data System (ADS)

    Kim, Ji-hyun; Han, Jae-Ho; Jeong, Jichai

    2016-05-01

    The commonly employed calibration methods for laboratory-made spectrometers have several disadvantages, including poor calibration when the number of characteristic spectral peaks is low. Therefore, we present a wavelength calibration method that uses the relative k-space distribution obtained with a low-coherence interferometer. The proposed method utilizes an interferogram with a perfect sinusoidal pattern in k-space for calibration. Zero-crossing detection extracts the k-space distribution of the spectrometer from the interferogram in the wavelength domain, and a calibration lamp provides information about absolute wavenumbers. To assign wavenumbers, a wavelength-to-k-space conversion is required for the characteristic spectrum of the calibration lamp with the extracted k-space distribution. The wavelength calibration is then completed by inverse conversion from k-space back to the wavelength domain. The calibration performance of the proposed method was demonstrated under two experimental conditions, with four and with eight characteristic spectral peaks. The proposed method yielded reliable calibration results in both cases, whereas the conventional method of third-order polynomial curve fitting failed to determine wavelengths in the case of four characteristic peaks. Moreover, for optical coherence tomography imaging, the proposed method improved axial resolution owing to stronger suppression of sidelobes in the point spread function than with the conventional method. We believe that our findings can improve not only wavelength calibration accuracy but also resolution for optical coherence tomography.
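
    The central step, locating the zero crossings of the low-coherence interferogram (which are equally spaced in k) and interpolating them back onto the pixel axis, can be sketched as follows; this is a generic illustration under those assumptions, not the authors' code.

        import numpy as np

        def zero_crossings(signal):
            """Sub-pixel zero-crossing positions of a mean-removed fringe signal."""
            s = signal - np.mean(signal)
            idx = np.where(np.sign(s[:-1]) != np.sign(s[1:]))[0]
            # Linear interpolation between the samples straddling each crossing.
            return idx + s[idx] / (s[idx] - s[idx + 1])

        def relative_k_distribution(interferogram):
            """Relative (unscaled) k value at each detector pixel.

            Successive zero crossings of a single-reflector interferogram are
            separated by a constant increment in k, so assigning consecutive
            integers to the crossings and interpolating gives the relative
            k-space distribution across the pixel array."""
            zc = zero_crossings(interferogram)
            k_at_zc = np.arange(zc.size, dtype=float)
            pixels = np.arange(interferogram.size, dtype=float)
            return np.interp(pixels, zc, k_at_zc)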

  10. Verification of the ISO calibration method for field pyranometers under tropical sky conditions

    NASA Astrophysics Data System (ADS)

    Janjai, Serm; Tohsing, Korntip; Pattarapanitchai, Somjet; Detkhon, Pasakorn

    2017-02-01

    Field pyranometers need to be calibrated annually, and the International Organization for Standardization (ISO) has defined a standard method (ISO 9847) for calibrating them. According to this standard method for outdoor calibration, the field pyranometers have to be compared against a reference pyranometer for a period of 2 to 14 days, depending on sky conditions. In this work, the ISO 9847 standard method was verified under tropical sky conditions. To verify the standard method, calibration of field pyranometers was conducted at a tropical site located in Nakhon Pathom (13.82° N, 100.04° E), Thailand, under various sky conditions. The sky conditions were monitored using a sky camera. The calibration results for different calibration periods under various sky conditions were analyzed. It was found that the calibration periods given by the standard method could be reduced without significant change in the final calibration result. In addition, recommendations on the use of this standard method in the tropics are presented and discussed.

  11. Large-Eddy Simulation of Waked Turbines in a Scaled Wind Farm Facility

    NASA Astrophysics Data System (ADS)

    Wang, J.; McLean, D.; Campagnolo, F.; Yu, T.; Bottasso, C. L.

    2017-05-01

    The aim of this paper is to present the numerical simulation of waked scaled wind turbines operating in a boundary layer wind tunnel. The simulation uses an LES lifting-line numerical model. An immersed boundary method, in conjunction with an adequate wall model, is used to represent the effects of the wind turbine nacelle and tower, which are shown to have a considerable effect on the wake behavior. Multi-airfoil data calibrated at different Reynolds numbers are used to account for the lift and drag characteristics at the low and varying Reynolds conditions encountered in the experiments. The present study focuses on low-turbulence inflow conditions and inflow non-uniformity due to wind tunnel characteristics, while higher turbulence conditions are considered in a separate study. The numerical model is validated using experimental data obtained during test campaigns conducted with the scaled wind farm facility. The simulation and experimental results are compared in terms of power capture, rotor thrust, downstream velocity profiles, and turbulence intensity.

  12. Deriving global parameter estimates for the Noah land surface model using FLUXNET and machine learning

    NASA Astrophysics Data System (ADS)

    Chaney, Nathaniel W.; Herman, Jonathan D.; Ek, Michael B.; Wood, Eric F.

    2016-11-01

    With their origins in numerical weather prediction and climate modeling, land surface models aim to accurately partition the surface energy balance. An overlooked challenge in these schemes is the role of model parameter uncertainty, particularly at unmonitored sites. This study provides global parameter estimates for the Noah land surface model using 85 eddy covariance sites in the global FLUXNET network. The at-site parameters are first calibrated using a Latin hypercube-based ensemble over the most sensitive parameters, determined by the Sobol method to be the minimum stomatal resistance (rs,min), the Zilitinkevich empirical constant (Czil), and the bare soil evaporation exponent (fxexp). Calibration leads to an increase in the mean Kling-Gupta Efficiency performance metric from 0.54 to 0.71. These calibrated parameter sets are then related to local environmental characteristics using the Extra-Trees machine learning algorithm. The fitted Extra-Trees model is used to map the optimal parameter sets over the globe at a 5 km spatial resolution. The leave-one-out cross validation of the mapped parameters using the Noah land surface model suggests that it is possible to skillfully relate calibrated model parameter sets to local environmental characteristics. The results demonstrate the potential to use FLUXNET to tune the parameterizations of surface fluxes in land surface models and to provide improved parameter estimates over the globe.
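
    The machine learning step, relating per-site calibrated parameter sets to environmental characteristics and predicting them at unmonitored locations, can be sketched with scikit-learn's ExtraTreesRegressor. The feature columns and random data below are placeholders; the study's actual predictors, sample sizes, and training protocol differ.

        import numpy as np
        from sklearn.ensemble import ExtraTreesRegressor

        rng = np.random.default_rng(0)

        # Stand-in data: rows are FLUXNET sites, columns are environmental
        # predictors (e.g., climate and vegetation attributes).
        X_sites = rng.random((85, 6))
        # Calibrated parameters per site: [rs_min, Czil, fxexp] (illustrative ranges).
        y_params = rng.random((85, 3)) * np.array([200.0, 1.0, 4.0])

        model = ExtraTreesRegressor(n_estimators=500, random_state=0)
        model.fit(X_sites, y_params)

        # Predict parameter sets on a grid of unmonitored locations.
        X_grid = rng.random((1000, 6))
        params_grid = model.predict(X_grid)  # shape (1000, 3)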

  13. Research on camera on orbit radial calibration based on black body and infrared calibration stars

    NASA Astrophysics Data System (ADS)

    Wang, YuDu; Su, XiaoFeng; Zhang, WanYing; Chen, FanSheng

    2018-05-01

    Affected by the launch process and the space environment, the response of a space camera inevitably degrades, so on-orbit radiometric calibration is necessary. In this paper, we propose a calibration method based on accurate infrared standard stars to increase infrared radiation measurement precision. Because stars can be treated as point targets, we use them as the radiometric calibration source and establish a Taylor-expansion method and an energy extrapolation model based on the WISE and 2MASS catalogs, and we then update the calibration results obtained from the on-board blackbody. Finally, the calibration mechanism is designed and verified by on-orbit tests. The experimental results show that the irradiance extrapolation error is about 3% and the accuracy of the calibration method is about 10%, which satisfies the requirements of on-orbit calibration.

  14. A Comparison of Two Balance Calibration Model Building Methods

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard; Ulbrich, Norbert

    2007-01-01

    Simulated strain-gage balance calibration data is used to compare the accuracy of two balance calibration model building methods for different noise environments and calibration experiment designs. The first building method obtains a math model for the analysis of balance calibration data after applying a candidate math model search algorithm to the calibration data set. The second building method uses stepwise regression analysis in order to construct a model for the analysis. Four balance calibration data sets were simulated in order to compare the accuracy of the two math model building methods. The simulated data sets were prepared using the traditional One Factor At a Time (OFAT) technique and the Modern Design of Experiments (MDOE) approach. Random and systematic errors were introduced in the simulated calibration data sets in order to study their influence on the math model building methods. Residuals of the fitted calibration responses and other statistical metrics were compared in order to evaluate the calibration models developed with different combinations of noise environment, experiment design, and model building method. Overall, predicted math models and residuals of both math model building methods show very good agreement. Significant differences in model quality were attributable to noise environment, experiment design, and their interaction. Generally, the addition of systematic error significantly degraded the quality of calibration models developed from OFAT data by either method, but MDOE experiment designs were more robust with respect to the introduction of a systematic component of the unexplained variance.
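
    The stepwise-regression idea in the second method can be illustrated with a bare-bones forward-selection loop over candidate regressors (intercept, loads, squares, and cross-products); the entry criterion and term pool here are much simpler than in balance-calibration practice.

        import itertools
        import numpy as np

        def candidate_terms(loads):
            """Columns for intercept, linear, quadratic, and cross-product terms."""
            n = loads.shape[1]
            cols = [np.ones(len(loads))] + [loads[:, i] for i in range(n)]
            cols += [loads[:, i] * loads[:, j] for i, j in
                     itertools.combinations_with_replacement(range(n), 2)]
            return np.column_stack(cols)

        def forward_stepwise(X, y, tol=1e-4):
            """Greedily add the column that most reduces the residual sum of
            squares, stopping when the relative improvement is marginal."""
            selected, remaining = [0], list(range(1, X.shape[1]))
            rss = np.sum((y - y.mean()) ** 2)
            while remaining:
                best = None
                for j in remaining:
                    beta, *_ = np.linalg.lstsq(X[:, selected + [j]], y, rcond=None)
                    r = np.sum((y - X[:, selected + [j]] @ beta) ** 2)
                    if best is None or r < best[1]:
                        best = (j, r)
                if rss - best[1] < tol * rss:
                    break
                selected.append(best[0]); remaining.remove(best[0]); rss = best[1]
            return selected

        # Synthetic 3-component balance: the loop recovers the active terms.
        loads = np.random.default_rng(0).normal(size=(200, 3))
        resp = 2 + 1.5 * loads[:, 0] - 0.7 * loads[:, 1] * loads[:, 2]
        print(forward_stepwise(candidate_terms(loads), resp))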

  15. Comparison of various methods for mathematical analysis of the Foucault knife edge test pattern to determine optical imperfections

    NASA Technical Reports Server (NTRS)

    Gatewood, B. E.

    1971-01-01

    The linearized integral equation for the Foucault test of a solid mirror was solved by various methods: power series, Fourier series, collocation, iteration, and inversion integral. The case of the Cassegrain mirror was solved by a particular power series method, collocation, and inversion integral. The inversion integral method appears to be the best overall method for both the solid and Cassegrain mirrors. Certain particular types of power series and Fourier series are satisfactory for the Cassegrain mirror. Numerical integration of the nonlinear equation for selected surface imperfections showed that results start to deviate from those given by the linearized equation at a surface deviation of about 3 percent of the wavelength of light. Several possible procedures for calibrating and scaling the input data for the integral equation are described.

  16. A Review of LIDAR Radiometric Processing: From Ad Hoc Intensity Correction to Rigorous Radiometric Calibration.

    PubMed

    Kashani, Alireza G; Olsen, Michael J; Parrish, Christopher E; Wilson, Nicholas

    2015-11-06

    In addition to precise 3D coordinates, most light detection and ranging (LIDAR) systems also record "intensity", loosely defined as the strength of the backscattered echo for each measured point. To date, LIDAR intensity data have proven beneficial in a wide range of applications because they are related to surface parameters, such as reflectance. While numerous procedures have been introduced in the scientific literature, and even commercial software, to enhance the utility of intensity data through a variety of "normalization", "correction", or "calibration" techniques, the current situation is complicated by a lack of standardization, as well as confusing, inconsistent use of terminology. In this paper, we first provide an overview of basic principles of LIDAR intensity measurements and applications utilizing intensity information from terrestrial, airborne topographic, and airborne bathymetric LIDAR. Next, we review effective parameters on intensity measurements, basic theory, and current intensity processing methods. We define terminology adopted from the most commonly-used conventions based on a review of current literature. Finally, we identify topics in need of further research. Ultimately, the presented information helps lay the foundation for future standards and specifications for LIDAR radiometric calibration.

  17. Calibration of discrete element model parameters: soybeans

    NASA Astrophysics Data System (ADS)

    Ghodki, Bhupendra M.; Patel, Manish; Namdeo, Rohit; Carpenter, Gopal

    2018-05-01

    Discrete element method (DEM) simulations are broadly used to gain insight into the flow characteristics of granular materials in complex particulate systems. The DEM input parameters of a model are a critical prerequisite for an efficient simulation. Thus, the present investigation aims to determine the DEM input parameters for the Hertz-Mindlin model using soybeans as the granular material. To achieve this aim, a widely accepted calibration approach with a standard box-type apparatus was used. Further, qualitative and quantitative findings, such as the particle profile, the height of kernels retained at the acrylic wall, and the angle of repose, from experiments and numerical simulations were compared to obtain the parameters. The calibrated set of DEM input parameters includes (a) material properties: particle geometric mean diameter (6.24 mm), spherical shape, and particle density (1220 kg m^{-3}); and (b) interaction parameters, particle-particle: coefficient of restitution (0.17), coefficient of static friction (0.26), coefficient of rolling friction (0.08); and particle-wall: coefficient of restitution (0.35), coefficient of static friction (0.30), coefficient of rolling friction (0.08). The results may adequately be used to simulate particle-scale mechanics (grain commingling, flow/motion, forces, etc.) of soybeans in post-harvest machinery and devices.

  18. Ancient numerical daemons of conceptual hydrological modeling: 2. Impact of time stepping schemes on model analysis and prediction

    NASA Astrophysics Data System (ADS)

    Kavetski, Dmitri; Clark, Martyn P.

    2010-10-01

    Despite the widespread use of conceptual hydrological models in environmental research and operations, they remain frequently implemented using numerically unreliable methods. This paper considers the impact of the time stepping scheme on model analysis (sensitivity analysis, parameter optimization, and Markov chain Monte Carlo-based uncertainty estimation) and prediction. It builds on the companion paper (Clark and Kavetski, 2010), which focused on numerical accuracy, fidelity, and computational efficiency. Empirical and theoretical analysis of eight distinct time stepping schemes for six different hydrological models in 13 diverse basins supports several critical conclusions. (1) Unreliable time stepping schemes, in particular fixed-step explicit methods, suffer from troublesome numerical artifacts that severely deform the objective function of the model. These deformations are not rare isolated instances but can arise in any model structure, in any catchment, and under common hydroclimatic conditions. (2) Sensitivity analysis can be severely contaminated by numerical errors, often to the extent that it becomes dominated by the sensitivity of truncation errors rather than the model equations. (3) Robust time stepping schemes generally produce "better behaved" objective functions, free of spurious local optima, and with sufficient numerical continuity to permit parameter optimization using efficient quasi-Newton methods. When implemented within a multistart framework, modern Newton-type optimizers are robust even when started far from the optima and provide valuable diagnostic insights not directly available from evolutionary global optimizers. (4) Unreliable time stepping schemes lead to inconsistent and biased inferences of the model parameters and internal states. (5) Even when interactions between hydrological parameters and numerical errors provide "the right result for the wrong reason" and the calibrated model performance appears adequate, unreliable time stepping schemes make the model unnecessarily fragile in predictive mode, undermining validation assessments and operational use. Erroneous or misleading conclusions of model analysis and prediction arising from numerical artifacts in hydrological models are intolerable, especially given that robust numerics are accepted as mainstream in other areas of science and engineering. We hope that these vivid empirical findings will encourage the conceptual hydrological community to close its Pandora's box of numerical problems, paving the way for more meaningful model application and interpretation.
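
    The kind of artifact at issue can be reproduced with a one-bucket toy model dS/dt = p(t) - kS, comparing a fixed-step explicit (forward Euler) update against an unconditionally stable implicit (backward Euler) one. This comparison is illustrative and not drawn from the paper; the explicit scheme's error grows without bound once dt*k exceeds its stability limit, deforming the calibration objective.

        import numpy as np

        def simulate(k, p, dt, implicit=True, s0=10.0):
            """Toy storage model dS/dt = p(t) - k*S under two schemes."""
            s, out = s0, []
            for pt in p:
                if implicit:   # backward Euler: stable for any dt
                    s = (s + dt * pt) / (1.0 + dt * k)
                else:          # forward Euler: unstable when dt*k > 2
                    s = s + dt * (pt - k * s)
                out.append(s)
            return np.array(out)

        p = np.maximum(0.0, np.random.default_rng(1).normal(1.0, 2.0, 365))
        obs = simulate(0.8, p, dt=1.0)  # synthetic "observations"

        # Sum-of-squares objective versus k under each scheme:
        for k in (0.2, 0.8, 1.5, 2.5):
            sse_i = np.sum((simulate(k, p, 1.0, True) - obs) ** 2)
            sse_e = np.sum((simulate(k, p, 1.0, False) - obs) ** 2)
            print(f"k={k:3.1f}  implicit SSE={sse_i:12.2f}  explicit SSE={sse_e:12.2f}")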

  19. Calibration and Data Retrieval Algorithms for the NASA Langley/Ames Diode Laser Hygrometer for the NASA Trace-P Mission

    NASA Technical Reports Server (NTRS)

    Podolske, James R.; Sachse, Glen W.; Diskin, Glenn S.; Hipskino, R. Stephen (Technical Monitor)

    2002-01-01

    This paper describes the procedures and algorithms for the laboratory calibration and the field data retrieval of the NASA Langley/Ames Diode Laser Hygrometer as implemented for the NASA Trace-P mission during February to April 2000. The calibration is based on a NIST-traceable dewpoint hygrometer using relatively high humidity and a short pathlength. Two water lines of widely different strengths are used to increase the dynamic range of the instrument in the course of a flight. The laboratory results are incorporated into a numerical model of the second harmonic spectrum for each of the two spectral window regions, using spectroscopic parameters from the HITRAN database and other sources, allowing water vapor retrieval at upper-tropospheric and lower-stratospheric temperatures and humidity levels. The data retrieval algorithm is simple, numerically stable, and accurate. A comparison with other water vapor instruments on board the NASA DC-8 and ER-2 aircraft is presented.

  20. Effects of Distortion on Mass Flow Plug Calibration

    NASA Technical Reports Server (NTRS)

    Sasson, Jonathan; Davis, David O.; Barnhart, Paul J.

    2015-01-01

    A numerical and experimental investigation of the effects of flow distortion on a Mass Flow Plug (MFP) used to control and measure mass flow during an inlet test has been conducted. The MFP was first calibrated using the WIND-US flow solver for uniform (undistorted) inflow conditions. These results are shown to compare favorably with an experimental calibration under similar conditions. The effects of distortion were investigated by imposing distorted flow conditions, taken from an actual inlet test, at the inflow plane of the numerical simulation. The computational fluid dynamics (CFD) based distortion study showed only the general trend in mass flow rate: the study used only total pressure as the upstream boundary condition, which was not enough to define the flow. A better simulation requires knowledge of the turbulence structure and a specific distortion pattern over a range of plug positions. It is recommended that future distortion studies utilize a rake with at least the same number of pitot tubes as the AIP rake.

  1. Reactive transport model for bioremediation of nitrate using fumarate in groundwater system: verification and field application

    NASA Astrophysics Data System (ADS)

    Lee, S.; Yeo, I. W.; Yeum, Y.; Kim, Y.

    2016-12-01

    Previous studies showed that groundwater in rural areas of Korea is often contaminated with nitrate at levels greatly exceeding the drinking water standard of 10 mg/L (NO3-N), which poses a major threat to human and livestock health. An in-situ bioremediation method has been developed to reduce high nitrate-nitrogen concentrations in groundwater using a slowly released, encapsulated carbon source. Collaborative research in this study revealed that fumarate is a very effective carbon source in terms of cost and nitrate reduction compared with formate, propionate, and lactate. For reactive transport modeling of the bioremediation of nitrate using fumarate, the BTEX module of RT3D incorporated in GMS, a commercial groundwater modeling software package developed by AQUAVEO, was adopted, with BTEX replaced by fumarate as the carbon source. Column tests were carried out to determine the transport and reaction parameters for numerical modeling, such as the dispersivity and the first-order degradation rate of nitrate by fumarate. The calibration of the numerical model against the column tests strongly indicated that nitrate, known to be non-reactive in groundwater systems, appeared to be retarded due to sorption by fumarate. The calibrated model was tested in a field-scale application at the composting facility in Gimje, Korea. The numerical results showed that the model could simulate the nitrate reduction by fumarate in a field-scale groundwater system. The reactive transport model for nitrate can be used as a tool for the optimum design of in-situ nitrate bioremediation systems, including the release depth and amount of fumarate and the spacing of the wells through which encapsulated fumarate is released.
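
    A minimal one-dimensional advection-dispersion model with linear retardation and first-order nitrate degradation (the ingredients calibrated from the column tests) can be sketched with an explicit finite-difference update. The grid, parameter values, and constant degradation rate below are illustrative assumptions, not the RT3D configuration used in the study.

        import numpy as np

        def transport(c, v, d, lam, rf, dx, dt, n_steps, c_in=50.0):
            """Explicit 1D advection-dispersion with retardation and decay.

            c: nitrate concentration (mg/L) per cell; v: pore velocity (m/d);
            d: dispersion coefficient (m^2/d); lam: first-order degradation
            rate (1/d); rf: retardation factor from sorption."""
            for _ in range(n_steps):
                adv = -v * (c[1:-1] - c[:-2]) / dx                # upwind advection
                dsp = d * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2  # dispersion
                c[1:-1] += dt / rf * (adv + dsp - lam * c[1:-1])
                c[0] = c_in      # constant-concentration inflow boundary
                c[-1] = c[-2]    # open outflow boundary
            return c

        # 2 m column, 1 cm cells, 50 days of simulated transport.
        c = transport(np.zeros(200), v=0.1, d=1e-3, lam=0.2, rf=1.5,
                      dx=0.01, dt=0.01, n_steps=5000)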

  2. Design and Operation of the Synthesis Gas Generator System for Reformed Propane and Glycerin Combustion

    NASA Astrophysics Data System (ADS)

    Pickett, Derek Kyle

    Due to increased interest in sustainable energy, biodiesel has become much more widely used in the last several years. Glycerin, a major waste component of biodiesel production, can be converted into a hydrogen-rich synthesis gas to be used in an engine generator to recover energy from the biodiesel production process. This thesis details the production, testing, and analysis of a unique synthesis gas generator rig at the University of Kansas. Chapter 2 gives a complete background of all major components, as well as how they are operated. In addition to component descriptions, the methods for operating the system on pure propane, reformed propane, and reformed glycerin, along with the data acquisition methodology, are described. This chapter will serve as a complete operating manual for future students continuing research on the project. Chapter 3 details the literature review that was completed to better understand fuel reforming of propane and glycerin. This chapter also describes the numerical model developed to estimate the species produced during reformation. The model was applied to propane reformation in a proof-of-concept and calibration test before moving to glycerin reformation and its subsequent combustion. Chapter 4 first describes the efforts to apply the numerical model to glycerin using the calibration tools from propane reformation. It then discusses catalytic material preparation and glycerin reformation tests. Gas chromatography analysis of the reformer effluent was completed for comparison with theoretical values from the numerical model. Finally, combustion of reformed glycerin was completed for power generation, and tests were conducted to compare emissions from syngas combustion and propane combustion.

  3. Flow and fracture behavior of aluminum alloy 6082-T6 at different tensile strain rates and triaxialities

    PubMed Central

    Chen, Xuanzhen; Peng, Shan; Yao, Song; Chen, Chao; Xu, Ping

    2017-01-01

    This study aims to investigate the flow and fracture behavior of aluminum alloy 6082-T6 (AA6082-T6) at different strain rates and triaxialities. Two groups of Charpy impact tests were carried out to further investigate its dynamic impact fracture properties, and a series of tensile tests and numerical simulations based on finite element analysis (FEA) were performed. Experimental data on smooth specimens under strain rates ranging from 0.0001 to 3400 s^-1 show that AA6082-T6 is rather insensitive to strain rate in general. However, clear rate sensitivity was observed in the range of 0.001 to 1 s^-1, although this characteristic is counteracted by the adiabatic heating of specimens at high strain rates. A Johnson-Cook (J-C) constitutive model was proposed based on the tensile tests at different strain rates. In this study, the average stress triaxiality and the equivalent plastic strain at fracture obtained from the numerical simulations were used to calibrate the J-C fracture model. Both the J-C constitutive model and the fracture model were employed in numerical simulations, and the results were compared with the experimental results. The calibrated J-C fracture model exhibits higher accuracy than a J-C fracture model obtained by the common method in predicting the fracture behavior of AA6082-T6. Finally, scanning electron microscope (SEM) fractographs of specimens with different initial stress triaxialities were analyzed; the magnified fractographs indicate that high initial stress triaxiality likely results in dimple fracture. PMID:28759617
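
    The Johnson-Cook forms involved in such a calibration are standard: flow stress sigma = (A + B*eps^n)(1 + C*ln(eps_rate*))(1 - T*^m) and fracture strain eps_f = [D1 + D2*exp(D3*eta)][1 + D4*ln(eps_rate*)][1 + D5*T*], with eta the stress triaxiality. The snippet below encodes these textbook forms; all parameter values shown are placeholders, not the calibrated AA6082-T6 constants.

        import numpy as np

        def jc_flow_stress(eps, eps_rate, T, A=280.0, B=150.0, n=0.3, C=0.01,
                           m=1.0, eps_rate0=1e-3, T_room=293.0, T_melt=855.0):
            """Johnson-Cook flow stress (MPa); parameters are placeholders."""
            t_star = (T - T_room) / (T_melt - T_room)
            return ((A + B * eps**n)
                    * (1 + C * np.log(eps_rate / eps_rate0))
                    * (1 - t_star**m))

        def jc_fracture_strain(eta, eps_rate, T, D=(0.1, 0.3, -1.5, 0.01, 0.9),
                               eps_rate0=1e-3, T_room=293.0, T_melt=855.0):
            """Equivalent plastic strain at fracture versus stress triaxiality
            eta; D1..D5 are placeholders."""
            d1, d2, d3, d4, d5 = D
            t_star = (T - T_room) / (T_melt - T_room)
            return ((d1 + d2 * np.exp(d3 * eta))
                    * (1 + d4 * np.log(eps_rate / eps_rate0))
                    * (1 + d5 * t_star))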

  4. Incorporating Target Priorities in the Sensor Tasking Reward Function

    NASA Astrophysics Data System (ADS)

    Gehly, S.; Bennett, J.

    2016-09-01

    Orbital debris tracking poses many challenges, most fundamentally the need to track a large number of objects from a limited number of sensors. The use of information theoretic sensor allocation provides a means to efficiently collect data on the multitarget system. An additional need of the community is the ability to specify target priorities, driven both by user needs and environmental factors such as collision warnings. This research develops a method to incorporate target priorities in the sensor tasking reward function, allowing for several applications in different tasking modes such as catalog maintenance, calibration, and collision monitoring. A set of numerical studies is included to demonstrate the functionality of the method.
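
    One simple way to fold priorities into an information-theoretic reward is to weight each target's expected information gain by its priority before choosing the assignment; the sketch below, with made-up gains and weights, shows the idea rather than the authors' formulation.

        import numpy as np

        def tasking_rewards(info_gain, priority):
            """Priority-weighted reward: r_j = w_j * expected information gain."""
            return np.asarray(priority) * np.asarray(info_gain)

        info_gain = [0.8, 2.1, 0.4, 1.3]   # e.g., expected KL divergence per target
        priority = [1.0, 0.5, 3.0, 1.0]    # raised by collision warnings, user needs
        best_target = int(np.argmax(tasking_rewards(info_gain, priority)))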

  5. Emissivity correction for interpreting thermal radiation from a terrestrial surface

    NASA Technical Reports Server (NTRS)

    Sutherland, R. A.; Bartholic, J. F.; Gerber, J. F.

    1979-01-01

    A general method of accounting for emissivity in making temperature determinations of graybody surfaces from radiometric data is presented. The method differs from previous treatments in that a simple blackbody calibration and a graphical approach are used rather than numerical integrations that require detailed knowledge of an instrument's spectral characteristics. Also, errors caused by approximating instrumental response with the Stefan-Boltzmann law rather than with an appropriately weighted Planck integral are examined. In the 8-14 micron wavelength interval, it is shown that errors are at most on the order of 3 C for the extremes of the earth's temperature and emissivity. For more practical limits, however, errors are less than 0.5 C.
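
    In the Stefan-Boltzmann approximation discussed here, the measured radiance from a graybody is eps*sigma*Ts^4 plus the reflected background term (1 - eps)*sigma*Tbg^4, so the surface temperature follows by inverting that balance. The snippet below is that textbook inversion, with the paper's caveat that a weighted Planck integral over the 8-14 micron band is more accurate:

        SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

        def surface_temperature(T_radiometric, emissivity, T_background):
            """Graybody temperature correction, Stefan-Boltzmann approximation.

            T_radiometric: apparent (blackbody-equivalent) temperature, K.
            T_background: effective sky/background temperature, K."""
            total = SIGMA * T_radiometric**4
            reflected = (1.0 - emissivity) * SIGMA * T_background**4
            return ((total - reflected) / (emissivity * SIGMA)) ** 0.25

        # Example: 290 K apparent scene, emissivity 0.95, 260 K sky -> ~291.3 K.
        print(surface_temperature(290.0, 0.95, 260.0))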

  6. Geometric calibration of Colour and Stereo Surface Imaging System of ESA's Trace Gas Orbiter

    NASA Astrophysics Data System (ADS)

    Tulyakov, Stepan; Ivanov, Anton; Thomas, Nicolas; Roloff, Victoria; Pommerol, Antoine; Cremonese, Gabriele; Weigel, Thomas; Fleuret, Francois

    2018-01-01

    There are many geometric calibration methods for "standard" cameras. These methods, however, cannot be used for the calibration of telescopes with large focal lengths and complex off-axis optics, and specialized calibration methods for telescopes are scarce in the literature. We describe the calibration method that we developed for the Colour and Stereo Surface Imaging System (CaSSIS) telescope, on board the ExoMars Trace Gas Orbiter (TGO). Although our method is described in the context of CaSSIS, with camera-specific experiments, it is general and can be applied to other telescopes. We further encourage re-use of the proposed method by making our calibration code and data available online.

  7. Smoothed particle hydrodynamic modeling of volcanic debris flows: Application to Huiloac Gorge lahars (Popocatépetl volcano, Mexico)

    NASA Astrophysics Data System (ADS)

    Haddad, Bouchra; Palacios, David; Pastor, Manuel; Zamorano, José Juan

    2016-09-01

    Lahars are among the most catastrophic volcanic processes, and the ability to model them is central to mitigating their effects. Several lahars recently generated by the Popocatépetl volcano (Mexico) moved downstream through the Huiloac Gorge towards the village of Santiago Xalitzintla. The most dangerous was the 2001 lahar, in which the destructive power of the debris flow was maintained throughout the extent of the flow. Hazard zone identification can be based on either numerical or empirical models, but a calibration and validation process is required to ensure hazard map quality. The Geoflow-SPH depth-integrated numerical model used in this study to reproduce the 2001 lahar was derived from the velocity-pressure version of the Biot-Zienkiewicz model and was discretized using the smoothed particle hydrodynamics (SPH) method. The results of the calibrated SPH model were validated by comparing the simulated deposit depth with the field depth measured at 16 cross sections distributed strategically along the gorge channel. Moreover, the dependency of the results on topographic mesh resolution and on the initial lahar mass shape and dimensions is also investigated. The results indicate that accurately reproducing the 2001 lahar flow dynamics required the channel topography to be discretized with a mesh of at least 5 m resolution and an initial lahar mass shape that adopted the source-area morphology. Field validation of the calibrated model showed a satisfactory relationship between the simulated and field depths, the error being less than 20% for 11 of the 16 cross sections. This study demonstrates that the Geoflow-SPH model was able to accurately reproduce the lahar path and the extent of the flow, and also reproduced other parameters including flow velocity and deposit depth.

  8. Simulating groundwater flow and runoff for the Oro Moraine aquifer system. Part II. Automated calibration and mass balance calculations

    NASA Astrophysics Data System (ADS)

    Beckers, J.; Frind, E. O.

    2001-03-01

    A steady-state groundwater model of the Oro Moraine aquifer system in Central Ontario, Canada, is developed. The model is used to identify the role of baseflow in the water balance of the Minesing Swamp, a 70 km² wetland of international significance. Lithologic descriptions are used to develop a hydrostratigraphic conceptual model of the aquifer system. The numerical model uses long-term averages to represent temporal variations of the flow regime and includes a mechanism to redistribute recharge in response to near-surface geologic heterogeneity. The model is calibrated to water level and streamflow measurements through inverse modeling. Observed baseflow and runoff quantities validate the water mass balance of the numerical model and provide information on the fraction of the water surplus that contributes to groundwater flow. The inverse algorithm is used to compare alternative model zonation scenarios, illustrating the power of non-linear regression in calibrating complex aquifer systems. The adjoint method is used to identify sensitive recharge areas for groundwater discharge to the Minesing Swamp. Model results suggest that nearby urban development will have a significant impact on baseflow to the swamp. Although the direct baseflow contribution makes up only a small fraction of the total inflow to the swamp, it provides an important steady influx of water over relatively large portions of the wetland. Urban development will also impact baseflow to the headwaters of local streams. The model provides valuable insight into crucial characteristics of the aquifer system although definite conclusions regarding details of its water budget are difficult to draw given current data limitations. The model therefore also serves to guide future data collection and studies of sub-areas within the basin.

  9. An Accurate Temperature Correction Model for Thermocouple Hygrometers

    PubMed Central

    Savage, Michael J.; Cass, Alfred; de Jager, James M.

    1982-01-01

    Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38°C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25°C, if the calibration slopes are corrected for temperature. PMID:16662241

  10. Calibrating the Decline Rate - Peak Luminosity Relation for Type Ia Supernovae

    NASA Astrophysics Data System (ADS)

    Rust, Bert W.; Pruzhinskaya, Maria V.; Thijsse, Barend J.

    2015-08-01

    The correlation between peak luminosity and rate of decline in luminosity for Type I supernovae was first studied by B. W. Rust [Ph.D. thesis, Univ. of Illinois (1974) ORNL-4953] and Yu. P. Pskovskii [Sov. Astron., 21 (1977) 675] in the 1970s. Their work was little noted until Phillips rediscovered the correlation in 1993 [ApJ, 413 (1993) L105] and attempted to derive a calibration relation between the peak luminosity Mmax(B) and Δm15(B), a difference-quotient approximation to the decline rate after peak. Numerical differentiation of data containing measuring errors is a notoriously unstable calculation, but Δm15(B) remains the parameter of choice for most calibration methods developed since 1993. To succeed, it should be computed from good functional fits to the lightcurves, but most workers never exhibit their fits, and in the few instances where they have, the fits are not very good. Some of the 9 supernovae in the Phillips study required extinction corrections in their estimates of Mmax(B), and so were not appropriate for establishing a calibration relation. Although the relative uncertainties in his Δm15(B) estimates were comparable to those in his Mmax(B) estimates, he nevertheless used simple linear regression of the latter on the former, rather than major-axis regression (total least squares), which would have been more appropriate. Here we determine some new calibration relations using a sample of nearby "pure" supernovae suggested by M. V. Pruzhinskaya [Astron. Lett., 37 (2011) 663]. Their parent galaxies are all in the NED collection, with good distance estimates obtained by several different methods. We fit each lightcurve with an optimal regression spline obtained by B. J. Thijsse's spline2 [Comp. in Sci. & Eng., 10 (2008) 49]. The fits, which explain more than 99% of the variance in each case, are better than anything heretofore obtained by stretching "template" lightcurves or fitting combinations of standard lightcurves. We use the fits to compute estimates of Δm15(B) and some other calibration parameters suggested by Pskovskii [Sov. Astron., 28 (1984) 858] and compare their utility for cosmological testing.

  11. How to obtain accurate resist simulations in very low-k1 era?

    NASA Astrophysics Data System (ADS)

    Chiou, Tsann-Bim; Park, Chan-Ha; Choi, Jae-Seung; Min, Young-Hong; Hansen, Steve; Tseng, Shih-En; Chen, Alek C.; Yim, Donggyu

    2006-03-01

    A procedure for calibrating a resist model iteratively adjusts appropriate parameters until the simulations of the model match the experimental data. The tunable parameters may include the shape of the illuminator, the geometry and transmittance/phase of the mask, light-source and scanner-related parameters that affect imaging quality, resist process control, and, most importantly, the physical/chemical factors in the resist model. The resist model can be accurately calibrated by measuring the critical dimensions (CD) of a focus-exposure matrix (FEM), and the technique has been demonstrated to be very successful in predicting lithographic performance. However, resist model calibration is more challenging in the low-k1 (<0.3) regime because numerous uncertainties, such as mask and resist CD metrology errors, are becoming too large to be ignored. This study demonstrates a resist model calibration procedure for a 0.29-k1 process using a 6% halftone mask containing 2D brickwall patterns. The influence of different scanning electron microscopes (SEM) and their wafer metrology signal analysis algorithms on the accuracy of the resist model is evaluated. As an example of the metrology issues affecting the resist pattern, the treatment of the sidewall angle is demonstrated for the resist line ends, where the contrast is relatively low. Additionally, mask optical proximity correction (OPC) and corner rounding are considered in the calibration procedure, which is based on captured SEM images. Accordingly, the average root-mean-square (RMS) error, i.e., the difference between simulated and experimental CDs, can be improved by addressing these metrological issues. Moreover, a weighting method and a measured CD tolerance are proposed to handle the different CD variations of the various edge points of the wafer resist pattern. After the weighting method is implemented and the CD selection criteria are applied, the RMS error can be further suppressed. Therefore, the resist CD and process window can be confidently evaluated using the accurately calibrated resist model. One example simulates the sensitivity to mask pattern errors, which is helpful in specifying mask CD control.

  12. Calibration of mass spectrometric peptide mass fingerprint data without specific external or internal calibrants

    PubMed Central

    Wolski, Witold E; Lalowski, Maciej; Jungblut, Peter; Reinert, Knut

    2005-01-01

    Background Peptide Mass Fingerprinting (PMF) is a widely used mass spectrometry (MS) method of analysis of proteins and peptides. It relies on the comparison between experimentally determined and theoretical mass spectra. The PMF process requires calibration, usually performed with external or internal calibrants of known molecular masses. Results We have introduced two novel MS calibration methods. The first method utilises the local similarity of peptide maps generated after separation of complex protein samples by two-dimensional gel electrophoresis. It computes a multiple peak-list alignment of the data set using a modified Minimum Spanning Tree (MST) algorithm. The second method exploits the idea that hundreds of MS samples are measured in parallel on one sample support. It improves the calibration coefficients by applying a two-dimensional Thin Plate Splines (TPS) smoothing algorithm. We studied the novel calibration methods utilising data generated by three different MALDI-TOF-MS instruments. We demonstrate that a PMF data set can be calibrated without resorting to external or relying on widely occurring internal calibrants. The methods developed here were implemented in R and are part of the BioConductor package mscalib. Conclusion The MST calibration algorithm is well suited to calibrate MS spectra of protein samples resulting from two-dimensional gel electrophoretic separation. The TPS-based calibration algorithm might be used to correct systematic mass measurement errors observed for large MS sample supports. As compared to other methods, our combined MS spectra calibration strategy increases the peptide/protein identification rate by an additional 5-15%. PMID:16102175

  13. A new measurement method of coatings thickness based on lock-in thermography

    NASA Astrophysics Data System (ADS)

    Zhang, Jin-Yu; Meng, Xiang-bin; Ma, Yong-chao

    2016-05-01

    Coatings are widely used in modern industry and play an important role. Coating thickness is directly related to the performance of functional coatings; therefore, rapid and accurate coating thickness inspection is of great significance. Existing coating thickness measurement methods struggle to achieve fast and accurate on-site non-destructive inspection because of cost, accuracy, damage to the coating during inspection, and other limitations. This paper first introduces the principle of lock-in thermography and then presents an in-depth study of its application to coating inspection through numerical modeling and analysis. The numerical analysis explores the relationship between coating thickness and phase, which lays the foundation for accurate calculation of coating thickness. The authors set up a lock-in thermography inspection system and conducted experiments on thermal barrier coating specimens. The specimen coating thickness was measured and calibrated to verify the quantitative inspection. Experimental results show that the lock-in thermography method can perform fast coating inspection with an accuracy of about 95%, so the method can meet the field-testing requirements of engineering projects.
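
    Lock-in phase images of this kind are commonly computed from four frames per modulation period (the "four-point" method): phase = atan2(S1 - S3, S2 - S4). The sketch below uses that standard technique as an assumed stand-in for the authors' specific implementation.

        import numpy as np

        def lockin_phase_amplitude(s1, s2, s3, s4):
            """Four-point lock-in demodulation of frames sampled a quarter of a
            modulation period apart. The phase image is the quantity that
            correlates with coating thickness."""
            phase = np.arctan2(s1 - s3, s2 - s4)
            amplitude = np.hypot(s1 - s3, s2 - s4) / 2.0
            return phase, amplitude

        # s1..s4: thermograms at 0, T/4, T/2, 3T/4 (random stand-in data here).
        frames = np.random.default_rng(2).random((4, 240, 320))
        phase, amp = lockin_phase_amplitude(*frames)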

  14. Color calibration and fusion of lens-free and mobile-phone microscopy images for high-resolution and accurate color reproduction

    NASA Astrophysics Data System (ADS)

    Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan

    2016-06-01

    Lens-free holographic microscopy can achieve wide-field imaging in a cost-effective and field-portable setup, making it a promising technique for point-of-care and telepathology applications. However, due to relatively narrow-band sources used in holographic microscopy, conventional colorization methods that use images reconstructed at discrete wavelengths, corresponding to e.g., red (R), green (G) and blue (B) channels, are subject to color artifacts. Furthermore, these existing RGB colorization methods do not match the chromatic perception of human vision. Here we present a high-color-fidelity and high-resolution imaging method, termed “digital color fusion microscopy” (DCFM), which fuses a holographic image acquired at a single wavelength with a color-calibrated image taken by a low-magnification lens-based microscope using a wavelet transform-based colorization method. We demonstrate accurate color reproduction of DCFM by imaging stained tissue sections. In particular we show that a lens-free holographic microscope in combination with a cost-effective mobile-phone-based microscope can generate color images of specimens, performing very close to a high numerical-aperture (NA) benchtop microscope that is corrected for color distortions and chromatic aberrations, also matching the chromatic response of human vision. This method can be useful for wide-field imaging needs in telepathology applications and in resource-limited settings, where whole-slide scanning microscopy systems are not available.

  15. Color calibration and fusion of lens-free and mobile-phone microscopy images for high-resolution and accurate color reproduction

    PubMed Central

    Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan

    2016-01-01

    Lens-free holographic microscopy can achieve wide-field imaging in a cost-effective and field-portable setup, making it a promising technique for point-of-care and telepathology applications. However, due to relatively narrow-band sources used in holographic microscopy, conventional colorization methods that use images reconstructed at discrete wavelengths, corresponding to e.g., red (R), green (G) and blue (B) channels, are subject to color artifacts. Furthermore, these existing RGB colorization methods do not match the chromatic perception of human vision. Here we present a high-color-fidelity and high-resolution imaging method, termed “digital color fusion microscopy” (DCFM), which fuses a holographic image acquired at a single wavelength with a color-calibrated image taken by a low-magnification lens-based microscope using a wavelet transform-based colorization method. We demonstrate accurate color reproduction of DCFM by imaging stained tissue sections. In particular we show that a lens-free holographic microscope in combination with a cost-effective mobile-phone-based microscope can generate color images of specimens, performing very close to a high numerical-aperture (NA) benchtop microscope that is corrected for color distortions and chromatic aberrations, also matching the chromatic response of human vision. This method can be useful for wide-field imaging needs in telepathology applications and in resource-limited settings, where whole-slide scanning microscopy systems are not available. PMID:27283459

  16. Color calibration and fusion of lens-free and mobile-phone microscopy images for high-resolution and accurate color reproduction.

    PubMed

    Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan

    2016-06-10

    Lens-free holographic microscopy can achieve wide-field imaging in a cost-effective and field-portable setup, making it a promising technique for point-of-care and telepathology applications. However, due to relatively narrow-band sources used in holographic microscopy, conventional colorization methods that use images reconstructed at discrete wavelengths, corresponding to e.g., red (R), green (G) and blue (B) channels, are subject to color artifacts. Furthermore, these existing RGB colorization methods do not match the chromatic perception of human vision. Here we present a high-color-fidelity and high-resolution imaging method, termed "digital color fusion microscopy" (DCFM), which fuses a holographic image acquired at a single wavelength with a color-calibrated image taken by a low-magnification lens-based microscope using a wavelet transform-based colorization method. We demonstrate accurate color reproduction of DCFM by imaging stained tissue sections. In particular we show that a lens-free holographic microscope in combination with a cost-effective mobile-phone-based microscope can generate color images of specimens, performing very close to a high numerical-aperture (NA) benchtop microscope that is corrected for color distortions and chromatic aberrations, also matching the chromatic response of human vision. This method can be useful for wide-field imaging needs in telepathology applications and in resource-limited settings, where whole-slide scanning microscopy systems are not available.

  17. An investigation into force-moment calibration techniques applicable to a magnetic suspension and balance system. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Eskins, Jonathan

    1988-01-01

    The problem of determining the forces and moments acting on a wind tunnel model suspended in a Magnetic Suspension and Balance System is addressed. Two calibration methods were investigated for three types of model cores, i.e., Alnico, Samarium-Cobalt, and a superconducting solenoid. Both methods involve calibrating the currents in the electromagnetic array against known forces and moments. The first is a static calibration method using calibration weights and a system of pulleys. The other method, dynamic calibration, involves oscillating the model and using its inertia to provide calibration forces and moments. Static calibration data, found to produce the most reliable results, is presented for three degrees of freedom at 0, 15, and -10 deg angle of attack. Theoretical calculations are hampered by the inability to represent iron-cored electromagnets. Dynamic calibrations, despite being quicker and easier to perform, are not as accurate as static calibrations. Data for dynamic calibrations at 0 and 15 deg is compared with the relevant static data acquired. Distortion of oscillation traces is cited as a major source of error in dynamic calibrations.

  18. The calibration methods for Multi-Filter Rotating Shadowband Radiometer: a review

    NASA Astrophysics Data System (ADS)

    Chen, Maosi; Davis, John; Tang, Hongzhao; Ownby, Carolyn; Gao, Wei

    2013-09-01

    The continuous, over-two-decade data record from the Multi-Filter Rotating Shadowband Radiometer (MFRSR) is ideal for climate research, which requires timely and accurate information on important atmospheric components such as gases, aerosols, and clouds. Except for parameters derived from MFRSR measurement ratios, which are not impacted by calibration error, most applications require accurate calibration factor(s), angular correction, and spectral response function(s) from calibration. Although a laboratory lamp (or reference) calibration can provide all the information needed to convert the instrument readings to actual radiation, in situ calibration methods are implemented routinely (daily) to fill the gaps between lamp calibrations. In this paper, the basic structure of the MFRSR and its data collection and pretreatment are described. The laboratory lamp calibration and its limitations are summarized. The cloud screening algorithms for MFRSR data are presented. The in situ calibration methods (the standard Langley method and its variants, the ratio-Langley method, the general method, Alexandrov's comprehensive method, and Chen's multi-channel method) are outlined. None of these methods suits all situations, because each assumes that some property, such as aerosol optical depth (AOD), total optical depth (TOD), precipitable water vapor (PWV), effective size of aerosol particles, or Angstrom coefficient, is invariant over time; such invariance is not universal, and some of these conditions rarely occur. In practice, daily calibration factors derived from these methods should be smoothed to restrain error.
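
    The standard Langley method rests on Beer's law, ln V(m) = ln V0 - m*tau: on a stable morning, regressing the log signal against airmass m and extrapolating to m = 0 yields the calibration factor V0. A minimal sketch with synthetic data follows (the constant-tau assumption is exactly the invariance caveat raised above):

        import numpy as np

        def langley_v0(airmass, voltage):
            """Extrapolate ln(V) versus airmass to zero airmass (Beer's law)."""
            slope, intercept = np.polyfit(airmass, np.log(voltage), 1)
            return np.exp(intercept), -slope  # V0 and total optical depth tau

        # Synthetic stable half-day: V = V0 * exp(-m * tau) with small noise.
        rng = np.random.default_rng(3)
        m = np.linspace(2.0, 6.0, 40)
        v = 1.35 * np.exp(-0.12 * m) * (1 + 0.005 * rng.normal(size=m.size))
        v0, tau = langley_v0(m, v)  # recovers roughly V0 = 1.35, tau = 0.12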

  19. Optical Calibration Process Developed for Neural-Network-Based Optical Nondestructive Evaluation Method

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.

    2004-01-01

    A completely optical calibration process has been developed at Glenn for calibrating a neural-network-based nondestructive evaluation (NDE) method. The NDE method itself detects very small changes in the characteristic patterns or vibration mode shapes of vibrating structures, as discussed in many references. The mode shapes or characteristic patterns are recorded using television or electronic holography and change when a structure experiences, for example, cracking, debonds, or variations in fastener properties. An artificial neural network can be trained to be very sensitive to changes in the mode shapes, but quantifying or calibrating that sensitivity in a consistent, meaningful, and deliverable manner has been challenging. The standard calibration approach, in which the response to damage of the trained neural network is compared with the responses of vibration-measurement sensors, has been difficult to implement; in particular, the vibration-measurement sensors are intrusive, insufficiently sensitive, and not numerous enough. In response to these difficulties, a completely optical alternative to the standard calibration approach was proposed and tested successfully. Specifically, the vibration mode to be monitored for structural damage was intentionally contaminated with known amounts of another mode, and the response of the trained neural network was measured as a function of the peak-to-peak amplitude of the contaminating mode. The neural-network calibration technique essentially uses the vibration mode shapes of the undamaged structure as standards against which the changed mode shapes are compared. The published response of the network can be made nearly independent of the contaminating mode if enough vibration modes are used to train the net. The sensitivity of the neural network can be adjusted for the environment in which the test is to be conducted. The response of a neural network trained with measured vibration patterns, for use on a vibration isolation table in the presence of various sources of laboratory noise, shows a peak-to-peak sensitivity of about 100 nm; the output of the neural network is called the degradable classification index, and the response curve was generated by a simultaneous comparison of means. A second example, using model-generated data from a compressor blade, shows that much higher sensitivities are possible when the environment can be controlled better: the peak-to-peak sensitivity is about 20 nm. The training procedure was modified for the second example, and the data were subjected to an intensity-dependent transformation called folding. All the measurements for this approach to calibration were optical: the peak-to-peak amplitudes of the vibration modes were measured using heterodyne interferometry, and the modes themselves were recorded using television (electronic) holography.

  20. Simple transfer calibration method for a Cimel Sun-Moon photometer: calculating lunar calibration coefficients from Sun calibration constants.

    PubMed

    Li, Zhengqiang; Li, Kaitao; Li, Donghui; Yang, Jiuchun; Xu, Hua; Goloub, Philippe; Victori, Stephane

    2016-09-20

    The Cimel new technologies allow both daytime and nighttime aerosol optical depth (AOD) measurements. Although the daytime AOD calibration protocols are well established, accurate and simple nighttime calibration is still a challenging task. The standard lunar-Langley and intercomparison calibration methods both require specific conditions in terms of atmospheric stability and site characteristics, and the lunar irradiance model itself has known limitations on its uncertainty. This paper presents a simple calibration method that transfers the direct-Sun calibration constant, V0,Sun, to the lunar irradiance calibration coefficient, CMoon. Our approach is a pure calculation method, independent of site limits such as Moon phase. The method is also unaffected by the limitations of the lunar irradiance model, which is the largest error source of traditional calibration methods. Moreover, this transfer calibration approach is easy to use in the field, since CMoon can be obtained directly once V0,Sun is known. Error analysis suggests that the average uncertainty of CMoon over the 440-1640 nm bands obtained with the transfer method is 2.4%-2.8%, depending on the V0,Sun approach (Langley or intercomparison), which is theoretically comparable with that of the lunar-Langley approach. In this paper, the Sun-Moon transfer and the Langley methods are compared based on site measurements in Beijing, and the day-night measurement continuity and performance are analyzed.

  1. Multiphase, multicomponent parameter estimation for liquid and vapor fluxes in deep arid systems using hydrologic data and natural environmental tracers

    USGS Publications Warehouse

    Kwicklis, Edward M.; Wolfsberg, Andrew V.; Stauffer, Philip H.; Walvoord, Michelle Ann; Sully, Michael J.

    2006-01-01

    Multiphase, multicomponent numerical models of long-term unsaturated-zone liquid and vapor movement were created for a thick alluvial basin at the Nevada Test Site to predict present-day liquid and vapor fluxes. The numerical models are based on recently developed conceptual models of unsaturated-zone moisture movement in thick alluvium that explain present-day water potential and tracer profiles in terms of major climate and vegetation transitions that have occurred during the past 10 000 yr or more. The numerical models were calibrated using borehole hydrologic and environmental tracer data available from a low-level radioactive waste management site located in a former nuclear weapons testing area. The environmental tracer data used in the model calibration include tracers that migrate in both the liquid and vapor phases (δD, δ18O) and tracers that migrate solely as dissolved solutes (Cl), thus enabling the estimation of some gas-phase as well as liquid-phase transport parameters. Parameter uncertainties and correlations identified during model calibration were used to generate parameter combinations for a set of Monte Carlo simulations to more fully characterize the uncertainty in liquid and vapor fluxes. The calculated background liquid and vapor fluxes decrease as the estimated time since the transition to the present-day arid climate increases. However, on the whole, the estimated fluxes display relatively little variability because correlations among parameters tend to create parameter sets for which changes in some parameters offset the effects of others in the set. Independent estimates of the timing of the climate transition established from packrat midden data were essential for constraining the model calibration results. The study demonstrates the utility of environmental tracer data in developing numerical models of liquid- and gas-phase moisture movement and the importance of considering parameter correlations when using Monte Carlo analysis to characterize the uncertainty in moisture fluxes. © Soil Science Society of America.

  2. Optimized star sensors laboratory calibration method using a regularization neural network.

    PubMed

    Zhang, Chengfen; Niu, Yanxiong; Zhang, Hao; Lu, Jiazhen

    2018-02-10

    High-precision ground calibration is essential to ensure the performance of star sensors. However, the complex distortion and multi-error coupling have brought great difficulties to traditional calibration methods, especially for large field of view (FOV) star sensors. Although increasing the complexity of models is an effective way to improve the calibration accuracy, it significantly increases the demand for calibration data. In order to achieve high-precision calibration of star sensors with large FOV, a novel laboratory calibration method based on a regularization neural network is proposed. A multi-layer structure neural network is designed to represent the mapping of the star vector and the corresponding star point coordinate directly. To ensure the generalization performance of the network, regularization strategies are incorporated into the net structure and the training algorithm. Simulation and experiment results demonstrate that the proposed method can achieve high precision with less calibration data and without any other priori information. Compared with traditional methods, the calibration error of the star sensor decreased by about 30%. The proposed method can satisfy the precision requirement for large FOV star sensors.
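
    A minimal sketch of the general idea (not the authors' network or training algorithm): fit an L2-regularized multi-layer perceptron to map measured star-point coordinates to unit star vectors. The architecture, the alpha penalty, the assumed focal length f, and the synthetic distortion are all hypothetical.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    xy = rng.uniform(-1, 1, (2000, 2))                    # true star-point coords (a.u.)
    f = 1.5                                               # assumed focal length (a.u.)
    v = np.c_[xy, np.full(len(xy), f)]
    v /= np.linalg.norm(v, axis=1, keepdims=True)         # ground-truth unit star vectors
    xy_meas = xy + 0.01 * rng.standard_normal(xy.shape)   # noisy/distorted measurements

    # alpha is the L2 penalty, standing in for the regularization strategy
    net = MLPRegressor(hidden_layer_sizes=(64, 64), alpha=1e-3,
                       max_iter=3000, random_state=0)
    net.fit(xy_meas, v)
    err = np.linalg.norm(net.predict(xy_meas) - v, axis=1)
    print(f"mean star-vector error: {err.mean():.4f}")
    ```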

  3. Features calibration of the dynamic force transducers

    NASA Astrophysics Data System (ADS)

    Prilepko, M. Yu., D. Sc.; Lysenko, V. G.

    2018-04-01

    The article discusses calibration methods for dynamic force measuring instruments. The relevance of this work stems from the need to validly determine the metrological characteristics of dynamic force transducers, taking their intended application into account. The aim of this work is to justify the choice of a calibration method that determines the metrological characteristics of dynamic force transducers under simulated operating conditions, so that suitability for the intended use can be assessed. The following tasks are solved: the mathematical model and the main measurement equation for calibrating dynamic force transducers by load weight are constructed, and the main uncertainty budget components of the calibration are defined. A new method for calibrating dynamic force transducers is proposed that uses a reference “force-deformation” converter based on a calibrated elastic element whose deformation is measured by a laser interferometer. The mathematical model and the main measurement equation of the proposed method are constructed. It is shown that a calibration method based on laser-interferometer measurements of the calibrated elastic element's deformation makes it possible to exclude, or considerably reduce, the uncertainty budget components inherent in the load-weight method.
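
    A toy numerical reading of the proposed measurement principle, under assumed values: the dynamic force follows from the calibrated elastic stiffness and the deformation inferred from an interferometric fringe count. The stiffness, fringe count, and half-wavelength-per-fringe convention are illustrative assumptions, not values from the article.

    ```python
    # Reference "force-deformation" converter sketch: F = k * dL, with dL
    # obtained from a displacement interferometer (one fringe per lambda/2).
    lam = 632.8e-9          # He-Ne laser wavelength (m)
    k_elastic = 2.0e6       # calibrated elastic element stiffness (N/m), assumed
    n_fringes = 1500        # counted interference fringes, assumed

    deformation = n_fringes * lam / 2
    force = k_elastic * deformation
    print(f"deformation = {deformation * 1e6:.1f} um, force = {force:.1f} N")
    ```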

  4. Radiometer calibration methods and resulting irradiance differences: Radiometer calibration methods and resulting irradiance differences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, Aron; Sengupta, Manajit; Andreas, Afshin

    Accurate solar radiation measurement by radiometers depends on instrument performance specifications, installation method, calibration procedure, measurement conditions, maintenance practices, location, and environmental conditions. This study addresses the effect of different calibration methodologies and the resulting differences among radiometric calibration service providers, such as the National Renewable Energy Laboratory (NREL), and manufacturers of radiometers. Some of these methods calibrate radiometers indoors and some outdoors. To establish and understand the differences in calibration methodologies, we processed and analyzed field-measured data from radiometers deployed for 10 months at NREL's Solar Radiation Research Laboratory. These different calibration methods resulted in differences of +/-1% to +/-2% in solar irradiance measurements. Analyzing these differences will ultimately assist in determining the uncertainties of field radiometer data and will help develop a consensus on a standard for calibration. Further advancing procedures for precisely calibrating radiometers to world reference standards that reduce measurement uncertainties will help the accurate prediction of the output of planned solar conversion projects and improve the bankability of financing for solar projects.

  5. Application of High-resolution Aerial LiDAR Data in Calibration of a Two-dimensional Urban Flood Simulation

    NASA Astrophysics Data System (ADS)

    Piotrowski, J.; Goska, R.; Chen, B.; Krajewski, W. F.; Young, N.; Weber, L.

    2009-12-01

    In June 2008, the state of Iowa experienced an unprecedented flood event that resulted in an economic loss of approximately $2.88 billion. Flooding in the Iowa River corridor, which exceeded the previous flood of record by 3 feet, devastated several communities, including Coralville and Iowa City, home to the University of Iowa. Recognizing an opportunity to capture a unique dataset detailing the impacts of the historic flood, the investigators contacted the National Center for Airborne Laser Mapping (NCALM), which performed an aerial Light Detection and Ranging (LiDAR) survey along the Iowa River. The survey, conducted immediately following the flood peak, provided coverage of a 60-mile reach. The goal of the present research is to develop a process by which flood extents and water surface elevations can be accurately extracted from the LiDAR data set and to evaluate the benefit of such data in calibrating one- and two-dimensional hydraulic models. Whereas data typically available for model calibration include sparsely distributed point observations and high water marks, the LiDAR data used in the present study provide broad-scale, detailed, and continuous information describing the spatial extent and depth of flooding. Initial efforts were focused on a 10-mile, primarily urban reach of the Iowa River extending from Coralville Reservoir, a United States Army Corps of Engineers flood control project, downstream through Coralville and Iowa City. Spatial extent and depth of flooding were estimated from the LiDAR data. At a given cross-sectional location, river channel and floodplain measurements were compared. When differences between floodplain and river channel measurements were less than one standard deviation of the vertical uncertainty in the LiDAR survey, floodplain measurements were classified as flooded. A flood water surface DEM was created using measurements classified as flooded. A two-dimensional, depth-averaged numerical model of a 10-mile reach of the Iowa River corridor was developed using the United States Bureau of Reclamation SRH-2D hydraulic modeling software. The numerical model uses an unstructured numerical mesh and variable surface roughness, assigned according to observed land use and cover. The numerical model was calibrated using inundation extents and water surface elevations derived from the LiDAR data. It was also calibrated using high water marks and land survey data collected daily during the 2008 flood. The investigators compared the two calibrations to evaluate the benefit of high-resolution LiDAR data in improving the accuracy of a two-dimensional urban flood simulation.
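
    The classification rule quoted above translates directly into code. The sketch below is a simplified stand-in with hypothetical elevations and vertical uncertainty, not the investigators' processing chain.

    ```python
    import numpy as np

    # A floodplain LiDAR return is labeled "flooded" when it differs from the
    # river channel water surface at that cross section by less than one
    # standard deviation of the survey's vertical uncertainty.
    sigma_z = 0.15                                  # vertical uncertainty (m), assumed
    channel_wse = 197.4                             # channel water surface elevation (m)
    floodplain_z = np.array([197.3, 197.5, 198.9, 197.45, 199.8])

    flooded = np.abs(floodplain_z - channel_wse) < sigma_z
    print(flooded)   # points marked True would enter the flood-surface DEM
    ```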

  6. Research on the calibration methods of the luminance parameter of radiation luminance meters

    NASA Astrophysics Data System (ADS)

    Cheng, Weihai; Huang, Biyong; Lin, Fangsheng; Li, Tiecheng; Yin, Dejin; Lai, Lei

    2017-10-01

    This paper introduces the standard diffuse-reflection white plate method and the integrating-sphere standard luminance source method for calibrating the luminance parameter of radiation luminance meters. The paper compares the calibration results of these two methods through principle analysis and experimental verification. After calibrating the same radiation luminance meter with both methods, the data obtained verify that the test results of the two methods are both reliable. The results show that the standard white plate method gives display values with smaller errors and better reproducibility. However, the standard luminance source method is more convenient and suitable for on-site calibration; moreover, it has a wider range and can test the linearity of the instruments.

  7. A Novel Miniature Wide-band Radiometer for Space Applications

    NASA Astrophysics Data System (ADS)

    Sykulska-Lawrence, Hanna

    2016-10-01

    Design, development and testing of a novel miniaturised infrared radiometer is described. The instrument opens up new possibilities in planetary science of deployment on smaller platforms - such as unmanned aerial vehicles and microprobes - to enable study of a planet's radiation balance, as well as terrestrial volcano plumes and trace gases in planetary atmospheres, using low-cost long-term observations. Thus a key enabling development is that of miniaturised, low-power and well-calibrated instrumentation. The paper reports advances in miniature technology to perform high accuracy visible / IR remote sensing measurements. The infrared radiometer is akin to those widely used for remote sensing for earth and space applications, which are currently either large instruments on orbiting platforms or medium-sized payloads on balloons. We use MEMS microfabrication techniques to shrink a conventional design, while combining the calibration benefits of large (>1 kg) radiometers with the flexibility and portability of a <10 g device. The instrument measures broadband (0.2 to 100 µm) upward and downward radiation fluxes, with built-in calibration capability, incorporating traceability to temperature standards such as ITS-90. The miniature instrument described here was derived from a concept developed for a European Space Agency study, Dalomis (Proc. of 'i-SAIRAS 2005', Munich, 2005), which involved dropping multiple probes into the atmosphere of Venus from a balloon to sample numerous parts of the complex weather systems on the planet. Data from such an in-situ instrument would complement information from a satellite remote sensing instrument or balloon radiosonde. Moreover, the addition of an internal calibration standard facilitates comparisons between datasets. One of the main challenges for a reduced size device is calibration. We use an in-situ method whereby a blackbody source is integrated within the device and a micromirror switches the input to the detector between the measured signal and the calibration target. Achieving two well-calibrated radiometer channels within a small (<10 g) payload is made possible by using micromachining techniques.

  8. A novel dual-camera calibration method for 3D optical measurement

    NASA Astrophysics Data System (ADS)

    Gai, Shaoyan; Da, Feipeng; Dai, Xianqiang

    2018-05-01

    A novel dual-camera calibration method is presented. In the classic methods, the camera parameters are usually calculated and optimized by the reprojection error. However, for a system designed for 3D optical measurement, this error does not denote the result of 3D reconstruction. In the presented method, a planar calibration plate is used. In the beginning, images of calibration plate are snapped from several orientations in the measurement range. The initial parameters of the two cameras are obtained by the images. Then, the rotation and translation matrix that link the frames of two cameras are calculated by using method of Centroid Distance Increment Matrix. The degree of coupling between the parameters is reduced. Then, 3D coordinates of the calibration points are reconstructed by space intersection method. At last, the reconstruction error is calculated. It is minimized to optimize the calibration parameters. This error directly indicates the efficiency of 3D reconstruction, thus it is more suitable for assessing the quality of dual-camera calibration. In the experiments, it can be seen that the proposed method is convenient and accurate. There is no strict requirement on the calibration plate position in the calibration process. The accuracy is improved significantly by the proposed method.

  9. Calibrated Blade-Element/Momentum Theory Aerodynamic Model of the MARIN Stock Wind Turbine: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goupee, A.; Kimball, R.; de Ridder, E. J.

    2015-04-02

    In this paper, a calibrated blade-element/momentum theory aerodynamic model of the MARIN stock wind turbine is developed and documented. The model is created using open-source software and calibrated to closely emulate experimental data obtained by the DeepCwind Consortium using a genetic algorithm optimization routine. The provided model will be useful for those interested in validating floating wind turbine numerical simulators that rely on experiments utilizing the MARIN stock wind turbine, for example, the International Energy Agency Wind Task 30's Offshore Code Comparison Collaboration Continued, with Correlation project.

  10. Finding trap stiffness of optical tweezers using digital filters.

    PubMed

    Almendarez-Rangel, Pedro; Morales-Cruzado, Beatriz; Sarmiento-Gómez, Erick; Pérez-Gutiérrez, Francisco G

    2018-02-01

    Obtaining trap stiffness and calibration of the position detection system is the basis of a force measurement using optical tweezers. Both calibration quantities can be calculated using several experimental methods available in the literature. In most cases, stiffness determination and detection system calibration are performed separately, often requiring procedures in very different conditions, and thus confidence of calibration methods is not assured due to possible changes in the environment. In this work, a new method to simultaneously obtain both the detection system calibration and trap stiffness is presented. The method is based on the calculation of the power spectral density of positions through digital filters to obtain the harmonic contributions of the position signal. This method has the advantage of calculating both trap stiffness and photodetector calibration factor from the same dataset in situ. It also provides a direct method to avoid unwanted frequencies that could greatly affect calibration procedure, such as electric noise, for example.
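
    For orientation, the standard power-spectrum route to trap stiffness is sketched below (a simplified stand-in for the digital-filter variant described above): simulate an overdamped trapped bead, estimate the position power spectral density, and fit a Lorentzian whose corner frequency gives the stiffness. All physical values are assumed.

    ```python
    import numpy as np
    from scipy.signal import welch
    from scipy.optimize import curve_fit

    kB_T = 4.11e-21          # thermal energy at room temperature (J)
    gamma = 9.4e-9           # Stokes drag of a ~1 um bead in water (kg/s), assumed
    kappa_true = 1e-5        # trap stiffness used to synthesize data (N/m)
    fs, n = 10_000, 2**16
    dt = 1 / fs

    # Simulate bead positions with an overdamped Langevin (Euler) scheme
    rng = np.random.default_rng(2)
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = x[i-1] - (kappa_true / gamma) * x[i-1] * dt \
               + np.sqrt(2 * kB_T / gamma * dt) * rng.standard_normal()

    # Lorentzian fit: S(f) = a / (fc^2 + f^2), with kappa = 2*pi*gamma*fc
    f, pxx = welch(x, fs=fs, nperseg=4096)
    lorentz = lambda f, fc, a: a / (fc**2 + f**2)
    (fc, a), _ = curve_fit(lorentz, f[1:], pxx[1:], p0=[100.0, 1e-13])
    print(f"kappa ~ {2 * np.pi * gamma * abs(fc):.2e} N/m (true {kappa_true:.1e})")
    ```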

  11. Improvement of Gaofen-3 Absolute Positioning Accuracy Based on Cross-Calibration

    PubMed Central

    Deng, Mingjun; Li, Jiansong

    2017-01-01

    The Chinese Gaofen-3 (GF-3) mission was launched in August 2016, equipped with a full polarimetric synthetic aperture radar (SAR) sensor in the C-band, with a resolution of up to 1 m. The absolute positioning accuracy of GF-3 is of great importance, and in-orbit geometric calibration is a key technology for improving absolute positioning accuracy. Conventional geometric calibration is used to accurately calibrate the geometric calibration parameters of the image (internal delay and azimuth shifts) using high-precision ground control data, which are highly dependent on the control data of the calibration field, but it remains costly and labor-intensive to monitor changes in GF-3’s geometric calibration parameters. Based on the positioning consistency constraint of the conjugate points, this study presents a geometric cross-calibration method for the rapid and accurate calibration of GF-3. The proposed method can accurately calibrate geometric calibration parameters without using corner reflectors and high-precision digital elevation models, thus improving absolute positioning accuracy of the GF-3 image. GF-3 images from multiple regions were collected to verify the absolute positioning accuracy after cross-calibration. The results show that this method can achieve a calibration accuracy as high as that achieved by the conventional field calibration method. PMID:29240675

  12. Radiometric calibration of the Earth observing system's imaging sensors

    NASA Technical Reports Server (NTRS)

    Slater, P. N.

    1987-01-01

    Philosophy, requirements, and methods of calibration of multispectral space sensor systems as applicable to the Earth Observing System (EOS) are discussed. Vicarious methods for calibration of low spatial resolution systems, with respect to the Advanced Very High Resolution Radiometer (AVHRR), are then summarized. Finally, a theoretical introduction is given to a new vicarious method of calibration using the ratio of diffuse-to-global irradiance at the Earth's surface as the key input. This may provide an additional independent method for in-flight calibration.

  13. Configurations and calibration methods for passive sampling techniques.

    PubMed

    Ouyang, Gangfeng; Pawliszyn, Janusz

    2007-10-19

    Passive sampling technology has developed very quickly in the past 15 years, and is widely used for the monitoring of pollutants in different environments. The design and quantification of passive sampling devices require an appropriate calibration method. Current calibration methods that exist for passive sampling, including equilibrium extraction, linear uptake, and kinetic calibration, are presented in this review. A number of state-of-the-art passive sampling devices that can be used for aqueous and air monitoring are introduced according to their calibration methods.

  14. Application of composite small calibration objects in traffic accident scene photogrammetry.

    PubMed

    Chen, Qiang; Xu, Hongguo; Tan, Lidong

    2015-01-01

    In order to address the difficulty of arranging large calibration objects and the low measurement accuracy of small calibration objects in traffic accident scene photogrammetry, a photogrammetric method based on a composite of small calibration objects is proposed. Several small calibration objects are placed around the traffic accident scene, and the coordinate system of the composite calibration object is given based on one of them. By maintaining the relative position and coplanar relationship of the small calibration objects, the local coordinate system of each small calibration object is transformed into the coordinate system of the composite calibration object. The two-dimensional direct linear transformation method is improved based on minimizing the reprojection error of the calibration points of all objects. A rectified image is obtained using the nonlinear optimization method. The increased accuracy of traffic accident scene photogrammetry using a composite small calibration object is demonstrated through the analysis of field experiments and case studies.
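
    The basic two-dimensional direct linear transformation step underlying such methods can be sketched as follows; this is the textbook homography-by-SVD estimate with synthetic data, not the paper's improved variant. Nonlinear refinement of the reprojection error would follow this initial estimate.

    ```python
    import numpy as np

    def dlt_homography(src, dst):
        """Estimate the homography H mapping src plane points to dst image
        points from >= 4 correspondences via SVD of the DLT system."""
        A = []
        for (x, y), (u, v) in zip(src, dst):
            A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        _, _, Vt = np.linalg.svd(np.asarray(A))
        H = Vt[-1].reshape(3, 3)
        return H / H[2, 2]

    # Synthetic check: project known points through a known homography
    src = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5]])
    H_true = np.array([[1.1, 0.1, 5], [0.05, 0.9, 3], [1e-3, 2e-3, 1]])
    p = np.c_[src, np.ones(len(src))] @ H_true.T
    dst = p[:, :2] / p[:, 2:]
    print(np.allclose(dlt_homography(src, dst), H_true, atol=1e-6))
    ```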

  15. Signal inference with unknown response: calibration-uncertainty renormalized estimator.

    PubMed

    Dorn, Sebastian; Enßlin, Torsten A; Greiner, Maksim; Selig, Marco; Boehm, Vanessa

    2015-01-01

    The calibration of a measurement device is crucial for every scientific experiment where a signal has to be inferred from data. We present CURE, the calibration-uncertainty renormalized estimator, to reconstruct a signal and simultaneously the instrument's calibration from the same data, without knowing the exact calibration but only its covariance structure. The idea of the CURE method, developed in the framework of information field theory, is to start with an assumed calibration and to successively include more and more portions of calibration uncertainty into the signal inference equations, absorbing the resulting corrections into renormalized signal (and calibration) solutions. Thereby, the signal inference and calibration problem turns into a problem of solving a single system of ordinary differential equations and can be identified with common resummation techniques used in field theories. We verify the CURE method by applying it to a simplistic toy example and compare it against existing self-calibration schemes, Wiener filter solutions, and Markov chain Monte Carlo sampling. We conclude that the method is able to keep up in accuracy with the best self-calibration methods and serves as a noniterative alternative to them.

  16. Effects of self-calibration of intrinsic alignment on cosmological parameter constraints from future cosmic shear surveys

    NASA Astrophysics Data System (ADS)

    Yao, Ji; Ishak, Mustapha; Lin, Weikang; Troxel, Michael

    2017-10-01

    Intrinsic alignments (IA) of galaxies have been recognized as one of the most serious contaminants to weak lensing. These systematics need to be isolated and mitigated in order for ongoing and future lensing surveys to reach their full potential. The IA self-calibration (SC) method was shown in previous studies to be able to reduce the GI contamination by up to a factor of 10 for the 2-point and 3-point correlations. The SC method does not require the assumption of an IA model and can extract the GI signal from the same photo-z survey, offering the possibility to test and understand structure formation scenarios and their relationship to IA models. In this paper, we study the effects of the IA SC mitigation method on the precision and accuracy of cosmological parameter constraints from the future cosmic shear surveys LSST, WFIRST and Euclid. We perform analytical and numerical calculations to estimate the loss of precision and the residual bias in the best-fit cosmological parameters after the self-calibration is performed. We take into account uncertainties from photometric redshifts and the galaxy bias. We find that the confidence contours are slightly inflated by applying the SC method itself, while a significant increase is due to the inclusion of the photo-z uncertainties. The bias of cosmological parameters is reduced from several σ, when IA is not corrected for, to below 1σ after SC is applied. These numbers are comparable to those resulting from applying the method of marginalizing over IA model parameters, despite the fact that the two methods operate very differently. We conclude that implementing the SC for these future cosmic-shear surveys will not only allow one to efficiently mitigate the GI contaminant but also help to understand their modeling and link to structure formation.

  17. Development and experimental assessment of a numerical modelling code to aid the design of profile extrusion cooling tools

    NASA Astrophysics Data System (ADS)

    Carneiro, O. S.; Rajkumar, A.; Fernandes, C.; Ferrás, L. L.; Habla, F.; Nóbrega, J. M.

    2017-10-01

    In thermoplastic profile extrusion, after the forming stage that takes place in the extrusion die, the profile must be cooled in a metallic calibrator. This stage must proceed at a high rate, to assure increased productivity, but must avoid the development of high temperature gradients, in order to minimize the level of induced thermal residual stresses. In this work, we present a new coupled numerical solver, developed in the framework of the OpenFOAM® computational library, that computes the temperature distribution in both domains (metallic calibrator and plastic profile) simultaneously, and whose implementation aims to minimize the computational time. The new solver was experimentally assessed with an industrial case study.
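
    As a toy analogue of solving both domains simultaneously (far simpler than the coupled OpenFOAM® solver described), the sketch below marches transient 1D heat conduction across a plastic/metal interface treated as a single domain with piecewise properties. Geometry, material properties, and boundary conditions are assumptions.

    ```python
    import numpy as np

    L, n = 0.02, 200                            # 20 mm domain, grid resolution
    x = np.linspace(0, L, n)
    dx = x[1] - x[0]
    plastic = x < 0.004                         # first 4 mm: profile; rest: calibrator
    alpha = np.where(plastic, 1.0e-7, 1.2e-5)   # thermal diffusivities (m^2/s), assumed
    T = np.where(plastic, 180.0, 20.0)          # initial temperatures (deg C)

    dt = 0.2 * dx**2 / alpha.max()              # explicit stability limit
    for _ in range(20000):
        T[1:-1] += alpha[1:-1] * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
        T[0], T[-1] = T[1], 20.0                # insulated outer face / cooled face
    print(f"profile surface ~ {T[0]:.1f} C after {20000 * dt:.2f} s")
    ```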

  18. Structured light system calibration method with optimal fringe angle.

    PubMed

    Li, Beiwen; Zhang, Song

    2014-11-20

    For structured light system calibration, one popular approach is to treat the projector as an inverse camera. This is usually performed by projecting horizontal and vertical sequences of patterns to establish a one-to-one mapping between camera points and projector points. However, for a well-designed system, either the horizontal or the vertical fringe images are not sensitive to depth variation and thus yield inaccurate mapping. As a result, the calibration accuracy is jeopardized if a conventional calibration method is used. To address this limitation, this paper proposes a novel calibration method based on optimal fringe angle determination. Experiments demonstrate that our calibration approach can increase the measurement accuracy by up to 38% compared to the conventional calibration method, with a calibration volume of 300 (H) mm × 250 (W) mm × 500 (D) mm.

  19. Numerical modelling of methane oxidation efficiency and coupled water-gas-heat reactive transfer in a sloping landfill cover.

    PubMed

    Feng, S; Ng, C W W; Leung, A K; Liu, H W

    2017-10-01

    Microbial aerobic methane oxidation in unsaturated landfill cover involves coupled water, gas and heat reactive transfer. The coupled process is complex and its influence on methane oxidation efficiency is not clear, especially in steep covers where spatial variations of water, gas and heat are significant. In this study, two-dimensional finite element numerical simulations were carried out to evaluate the performance of unsaturated sloping cover. The numerical model was calibrated using a set of flume model test data, and was then subsequently used for parametric study. A new method that considers transient changes of methane concentration during the estimation of the methane oxidation efficiency was proposed and compared against existing methods. It was found that a steeper cover had a lower oxidation efficiency due to enhanced downslope water flow, during which desaturation of soil promoted gas transport and hence landfill gas emission. This effect was magnified as the cover angle and landfill gas generation rate at the bottom of the cover increased. Assuming the steady-state methane concentration in a cover would result in a non-conservative overestimation of oxidation efficiency, especially when a steep cover was subjected to rainfall infiltration. By considering the transient methane concentration, the newly-modified method can give a more accurate oxidation efficiency. Copyright © 2017. Published by Elsevier Ltd.

  20. Simultaneous calibration phantom commission and geometry calibration in cone beam CT

    NASA Astrophysics Data System (ADS)

    Xu, Yuan; Yang, Shuai; Ma, Jianhui; Li, Bin; Wu, Shuyu; Qi, Hongliang; Zhou, Linghong

    2017-09-01

    Geometry calibration is a vital step for describing the geometry of a cone beam computed tomography (CBCT) system and is a prerequisite for CBCT reconstruction. In current methods, calibration phantom commission and geometry calibration are divided into two independent tasks. Small errors in ball-bearing (BB) positioning in the phantom-making step will severely degrade the quality of phantom calibration. To solve this problem, we propose an integrated method to simultaneously realize geometry phantom commission and geometry calibration. Instead of assuming the accuracy of the geometry phantom, the integrated method considers BB centers in the phantom as an optimized parameter in the workflow. Specifically, an evaluation phantom and the corresponding evaluation contrast index are used to evaluate geometry artifacts for optimizing the BB coordinates in the geometry phantom. After utilizing particle swarm optimization, the CBCT geometry and BB coordinates in the geometry phantom are calibrated accurately and are then directly used for the next geometry calibration task in other CBCT systems. To evaluate the proposed method, both qualitative and quantitative studies were performed on simulated and realistic CBCT data. The spatial resolution of reconstructed images using dental CBCT can reach up to 15 line pairs per cm. The proposed method is also superior to the Wiesent method in experiments. This paper shows that the proposed method is attractive for simultaneous and accurate geometry phantom commission and geometry calibration.
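
    A minimal particle swarm optimization loop of the kind invoked above is sketched below with a stand-in objective; in the paper's setting the objective would be the evaluation-phantom contrast index over BB coordinates and geometry parameters. The hyperparameters are conventional defaults, not the authors' values.

    ```python
    import numpy as np

    def pso(objective, dim, n_particles=30, iters=200, seed=0):
        rng = np.random.default_rng(seed)
        x = rng.uniform(-1, 1, (n_particles, dim))   # particle positions
        v = np.zeros_like(x)
        pbest = x.copy()
        pbest_f = np.apply_along_axis(objective, 1, x)
        gbest = pbest[pbest_f.argmin()].copy()
        w, c1, c2 = 0.7, 1.5, 1.5                    # inertia / cognitive / social
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = x + v
            f = np.apply_along_axis(objective, 1, x)
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], f[improved]
            gbest = pbest[pbest_f.argmin()].copy()
        return gbest, pbest_f.min()

    # Stand-in objective with known optimum at 0.3 in every coordinate
    best, fmin = pso(lambda p: np.sum((p - 0.3) ** 2), dim=6)
    print(best.round(3), fmin)
    ```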

  1. Simplified method for creating a density-absorbed dose calibration curve for the low dose range from Gafchromic EBT3 film.

    PubMed

    Gotanda, Tatsuhiro; Katsuda, Toshizo; Gotanda, Rumi; Kuwano, Tadao; Akagawa, Takuya; Tanki, Nobuyoshi; Tabuchi, Akihiko; Shimono, Tetsunori; Kawaji, Yasuyuki

    2016-01-01

    Radiochromic film dosimeters have a disadvantage in comparison with an ionization chamber in that the dosimetry process is time-consuming for creating a density-absorbed dose calibration curve. The purpose of this study was the development of a simplified method of creating a density-absorbed dose calibration curve from radiochromic film within a short time. This simplified method was performed using Gafchromic EBT3 film with a low energy dependence and step-shaped Al filter. The simplified method was compared with the standard method. The density-absorbed dose calibration curves created using the simplified and standard methods exhibited approximately similar straight lines, and the gradients of the density-absorbed dose calibration curves were -32.336 and -33.746, respectively. The simplified method can obtain calibration curves within a much shorter time compared to the standard method. It is considered that the simplified method for EBT3 film offers a more time-efficient means of determining the density-absorbed dose calibration curve within a low absorbed dose range such as the diagnostic range.
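
    Constructing and inverting such a density-absorbed dose calibration line is a straightforward linear fit, sketched below with made-up film densities and doses (the units and magnitudes do not reproduce the reported gradients).

    ```python
    import numpy as np

    dose = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])           # absorbed dose (mGy), assumed
    density = np.array([2.10, 1.94, 1.78, 1.63, 1.46, 1.30])  # net optical density, assumed

    slope, intercept = np.polyfit(dose, density, 1)
    print(f"gradient = {slope:.3f}, intercept = {intercept:.3f}")

    # Invert the calibration line to read dose from a measured film density
    def dose_from_density(d):
        return (d - intercept) / slope

    print(f"{dose_from_density(1.55):.2f} mGy")
    ```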

  2. Simplified method for creating a density-absorbed dose calibration curve for the low dose range from Gafchromic EBT3 film

    PubMed Central

    Gotanda, Tatsuhiro; Katsuda, Toshizo; Gotanda, Rumi; Kuwano, Tadao; Akagawa, Takuya; Tanki, Nobuyoshi; Tabuchi, Akihiko; Shimono, Tetsunori; Kawaji, Yasuyuki

    2016-01-01

    Radiochromic film dosimeters have a disadvantage in comparison with an ionization chamber in that the dosimetry process is time-consuming for creating a density-absorbed dose calibration curve. The purpose of this study was the development of a simplified method of creating a density-absorbed dose calibration curve from radiochromic film within a short time. This simplified method was performed using Gafchromic EBT3 film with a low energy dependence and step-shaped Al filter. The simplified method was compared with the standard method. The density-absorbed dose calibration curves created using the simplified and standard methods exhibited approximately similar straight lines, and the gradients of the density-absorbed dose calibration curves were −32.336 and −33.746, respectively. The simplified method can obtain calibration curves within a much shorter time compared to the standard method. It is considered that the simplified method for EBT3 film offers a more time-efficient means of determining the density-absorbed dose calibration curve within a low absorbed dose range such as the diagnostic range. PMID:28144120

  3. Use of paired simple and complex models to reduce predictive bias and quantify uncertainty

    NASA Astrophysics Data System (ADS)

    Doherty, John; Christensen, Steen

    2011-12-01

    Modern environmental management and decision-making is based on the use of increasingly complex numerical models. Such models have the advantage of allowing representation of complex processes and heterogeneous system property distributions inasmuch as these are understood at any particular study site. The latter are often represented stochastically, this reflecting knowledge of the character of system heterogeneity at the same time as it reflects a lack of knowledge of its spatial details. Unfortunately, however, complex models are often difficult to calibrate because of their long run times and sometimes questionable numerical stability. Analysis of predictive uncertainty is also a difficult undertaking when using models such as these. Such analysis must reflect a lack of knowledge of spatial hydraulic property details. At the same time, it must be subject to constraints on the spatial variability of these details born of the necessity for model outputs to replicate observations of historical system behavior. In contrast, the rapid run times and general numerical reliability of simple models often facilitate good calibration and ready implementation of sophisticated methods of calibration-constrained uncertainty analysis. Unfortunately, however, many system and process details on which uncertainty may depend are, by design, omitted from simple models. This can lead to underestimation of the uncertainty associated with many predictions of management interest. The present paper proposes a methodology that attempts to overcome the problems associated with complex models on the one hand and simple models on the other hand, while allowing access to the benefits each of them offers. It provides a theoretical analysis of the simplification process from a subspace point of view, this yielding insights into the costs of model simplification, and into how some of these costs may be reduced. It then describes a methodology for paired model usage through which predictive bias of a simplified model can be detected and corrected, and postcalibration predictive uncertainty can be quantified. The methodology is demonstrated using a synthetic example based on groundwater modeling environments commonly encountered in northern Europe and North America.

  4. A new systematic calibration method of ring laser gyroscope inertial navigation system

    NASA Astrophysics Data System (ADS)

    Wei, Guo; Gao, Chunfeng; Wang, Qi; Wang, Qun; Xiong, Zhenyu; Long, Xingwu

    2016-10-01

    Inertial navigation systems have been the core component of both military and civil navigation systems. Before an INS is put into application, it must be calibrated in the laboratory in order to compensate for repeatability errors caused by manufacturing. The discrete calibration method cannot fulfill the requirements of high-accuracy calibration of mechanically dithered ring laser gyroscope navigation systems with shock absorbers. This paper analyzes the theories of error excitation and separation in detail and presents a new systematic calibration method for ring laser gyroscope inertial navigation systems. Error models and equations of the calibrated Inertial Measurement Unit are given. Proper rotation arrangement orders are then depicted in order to establish the linear relationships between the changes in velocity errors and the calibrated parameter errors. Experiments have been set up to compare the systematic errors calculated from the filtering calibration results with those obtained from the discrete calibration results. The largest position and velocity errors of the filtering calibration result are only 0.18 miles and 0.26 m/s, compared with 2 miles and 1.46 m/s for the discrete calibration result. These results validate the new systematic calibration method and prove its importance for the optimal design and accuracy improvement of calibration of mechanically dithered ring laser gyroscope inertial navigation systems.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, Chengming; Yan, Yihua; Tan, Baolin

    This work presents a systematic investigation of the influence of weather conditions on calibration errors, using Gaussian fitting, least chi-square linear fitting, and wavelet transforms to analyze the calibration coefficients from observations of the Chinese Solar Broadband Radio Spectrometers (at frequency bands of 1.0–2.0 GHz, 2.6–3.8 GHz, and 5.2–7.6 GHz) during 1997–2007. We found that calibration coefficients are influenced by the local air temperature. With the temperature correction, the calibration error is reduced by about 10%–20% at 2800 MHz. Based on this investigation and the calibration corrections, we further study the radio emission of the quiet Sun by using an appropriate hybrid model of the quiet-Sun atmosphere. The results indicate that the numerical flux of the hybrid model is much closer to the observed flux than that of the other models.

  6. Calibrating the stress-time curve of a combined finite-discrete element method to a Split Hopkinson Pressure Bar experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osthus, Dave; Godinez, Humberto C.; Rougier, Esteban

    We present a generic method for automatically calibrating a computer code to an experiment, with uncertainty, for a given “training” set of computer code runs. The calibration technique is general and probabilistic, meaning the calibration uncertainty is represented in the form of a probability distribution. We demonstrate the calibration method by calibrating a combined Finite-Discrete Element Method (FDEM) to a Split Hopkinson Pressure Bar (SHPB) experiment with a granite sample. The probabilistic calibration method combines runs of a FDEM computer simulation for a range of “training” settings and experimental uncertainty to develop a statistical emulator. The process allows for calibration of input parameters and produces output quantities with uncertainty estimates for settings where simulation results are desired. Input calibration and FDEM fitted results are presented. We find that the maximum shear strength σ t max and to a lesser extent maximum tensile strength σ n max govern the behavior of the stress-time curve before and around the peak, while the specific energy in Mode II (shear) E t largely governs the post-peak behavior of the stress-time curve. Good agreement is found between the calibrated FDEM and the SHPB experiment. Interestingly, we find the SHPB experiment to be rather uninformative for calibrating the softening-curve shape parameters (a, b, and c). This work stands as a successful demonstration of how a general probabilistic calibration framework can automatically calibrate FDEM parameters to an experiment.
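
    A heavily simplified sketch of the emulator-based probabilistic calibration idea is given below: a Gaussian-process emulator is trained on a toy simulator's "training" runs, and candidate parameter values are weighted by a Gaussian likelihood against an "experimental" value, folding in emulator uncertainty. The simulator, noise level, and flat prior are assumptions, not the FDEM/SHPB machinery.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    def simulator(theta):                 # stand-in for an expensive code run
        return np.sin(3 * theta) + 0.5 * theta

    # "Training" runs over the prior range, then the statistical emulator
    theta_train = np.linspace(0, 2, 12)[:, None]
    y_train = simulator(theta_train.ravel())
    gp = GaussianProcessRegressor(ConstantKernel() * RBF(length_scale=0.3))
    gp.fit(theta_train, y_train)

    y_exp, sigma_exp = 1.35, 0.05         # experiment and its uncertainty (assumed)
    theta_grid = np.linspace(0, 2, 400)[:, None]
    mu, sd = gp.predict(theta_grid, return_std=True)

    # Posterior up to a flat prior: Gaussian likelihood with emulator
    # uncertainty folded in, normalized over the grid.
    post = np.exp(-0.5 * (mu - y_exp) ** 2 / (sd**2 + sigma_exp**2))
    post /= np.trapz(post, theta_grid.ravel())
    mean_theta = np.trapz(post * theta_grid.ravel(), theta_grid.ravel())
    print(f"posterior mean theta = {mean_theta:.3f}")
    ```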

  7. Calibrating the stress-time curve of a combined finite-discrete element method to a Split Hopkinson Pressure Bar experiment

    DOE PAGES

    Osthus, Dave; Godinez, Humberto C.; Rougier, Esteban; ...

    2018-05-01

    We present a generic method for automatically calibrating a computer code to an experiment, with uncertainty, for a given “training” set of computer code runs. The calibration technique is general and probabilistic, meaning the calibration uncertainty is represented in the form of a probability distribution. We demonstrate the calibration method by calibrating a combined Finite-Discrete Element Method (FDEM) to a Split Hopkinson Pressure Bar (SHPB) experiment with a granite sample. The probabilistic calibration method combines runs of a FDEM computer simulation for a range of “training” settings and experimental uncertainty to develop a statistical emulator. The process allows for calibration of input parameters and produces output quantities with uncertainty estimates for settings where simulation results are desired. Input calibration and FDEM fitted results are presented. We find that the maximum shear strength σ t max and to a lesser extent maximum tensile strength σ n max govern the behavior of the stress-time curve before and around the peak, while the specific energy in Mode II (shear) E t largely governs the post-peak behavior of the stress-time curve. Good agreement is found between the calibrated FDEM and the SHPB experiment. Interestingly, we find the SHPB experiment to be rather uninformative for calibrating the softening-curve shape parameters (a, b, and c). This work stands as a successful demonstration of how a general probabilistic calibration framework can automatically calibrate FDEM parameters to an experiment.

  8. Specific methodology for capacitance imaging by atomic force microscopy: A breakthrough towards an elimination of parasitic effects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Estevez, Ivan; Chrétien, Pascal

    2014-02-24

    On the basis of a home-made nanoscale impedance measurement device associated with a commercial atomic force microscope, a specific operating process is proposed in order to improve absolute (in sense of “nonrelative”) capacitance imaging by drastically reducing the parasitic effects due to stray capacitance, surface topography, and sample tilt. The method, combining a two-pass image acquisition with the exploitation of approach curves, has been validated on sets of calibration samples consisting in square parallel plate capacitors for which theoretical capacitance values were numerically calculated.

  9. Error-analysis and comparison to analytical models of numerical waveforms produced by the NRAR Collaboration

    NASA Astrophysics Data System (ADS)

    Hinder, Ian; Buonanno, Alessandra; Boyle, Michael; Etienne, Zachariah B.; Healy, James; Johnson-McDaniel, Nathan K.; Nagar, Alessandro; Nakano, Hiroyuki; Pan, Yi; Pfeiffer, Harald P.; Pürrer, Michael; Reisswig, Christian; Scheel, Mark A.; Schnetter, Erik; Sperhake, Ulrich; Szilágyi, Bela; Tichy, Wolfgang; Wardell, Barry; Zenginoğlu, Anıl; Alic, Daniela; Bernuzzi, Sebastiano; Bode, Tanja; Brügmann, Bernd; Buchman, Luisa T.; Campanelli, Manuela; Chu, Tony; Damour, Thibault; Grigsby, Jason D.; Hannam, Mark; Haas, Roland; Hemberger, Daniel A.; Husa, Sascha; Kidder, Lawrence E.; Laguna, Pablo; London, Lionel; Lovelace, Geoffrey; Lousto, Carlos O.; Marronetti, Pedro; Matzner, Richard A.; Mösta, Philipp; Mroué, Abdul; Müller, Doreen; Mundim, Bruno C.; Nerozzi, Andrea; Paschalidis, Vasileios; Pollney, Denis; Reifenberger, George; Rezzolla, Luciano; Shapiro, Stuart L.; Shoemaker, Deirdre; Taracchini, Andrea; Taylor, Nicholas W.; Teukolsky, Saul A.; Thierfelder, Marcus; Witek, Helvi; Zlochower, Yosef

    2013-01-01

    The Numerical-Relativity-Analytical-Relativity (NRAR) collaboration is a joint effort between members of the numerical relativity, analytical relativity and gravitational-wave data analysis communities. The goal of the NRAR collaboration is to produce numerical-relativity simulations of compact binaries and use them to develop accurate analytical templates for the LIGO/Virgo Collaboration to use in detecting gravitational-wave signals and extracting astrophysical information from them. We describe the results of the first stage of the NRAR project, which focused on producing an initial set of numerical waveforms from binary black holes with moderate mass ratios and spins, as well as one non-spinning binary configuration which has a mass ratio of 10. All of the numerical waveforms are analysed in a uniform and consistent manner, with numerical errors evaluated using an analysis code created by members of the NRAR collaboration. We compare previously-calibrated, non-precessing analytical waveforms, notably the effective-one-body (EOB) and phenomenological template families, to the newly-produced numerical waveforms. We find that when the binary's total mass is ~100-200 M⊙, current EOB and phenomenological models of spinning, non-precessing binary waveforms have overlaps above 99% (for advanced LIGO) with all of the non-precessing-binary numerical waveforms with mass ratios ⩽4, when maximizing over binary parameters. This implies that the loss of event rate due to modelling error is below 3%. Moreover, the non-spinning EOB waveforms previously calibrated to five non-spinning waveforms with mass ratio smaller than 6 have overlaps above 99.7% with the numerical waveform with a mass ratio of 10, without even maximizing over the binary parameters.
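
    The overlap (match) statistic quoted above can be illustrated compactly: maximize the normalized inner product of two waveforms over relative time and phase shifts. The sketch below assumes a flat (white) noise spectrum and toy waveforms; a detector-weighted match would divide by the noise power spectral density in the frequency-domain product.

    ```python
    import numpy as np

    def match(h1, h2):
        """Overlap maximized over (cyclic) time shifts and overall phase."""
        h1 = h1 / np.sqrt(np.vdot(h1, h1).real)
        h2 = h2 / np.sqrt(np.vdot(h2, h2).real)
        # Cross-correlation over all time shifts via FFT; |.| maximizes over phase
        corr = np.fft.ifft(np.fft.fft(h1) * np.conj(np.fft.fft(h2)))
        return np.abs(corr).max()

    t = np.linspace(0, 1, 4096)
    h_num = np.exp(2j * np.pi * 60 * t**1.5)      # toy "numerical" waveform
    h_mod = np.roll(h_num, 40) * np.exp(0.3j)     # time- and phase-shifted copy
    print(f"match = {match(h_num, h_mod):.4f}")   # ~1 for a faithful model
    ```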

  10. Shortwave Radiometer Calibration Methods Comparison and Resulting Solar Irradiance Measurement Differences: A User Perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, Aron; Sengupta, Manajit; Andreas, Afshin

    Banks financing solar energy projects require assurance that these systems will produce the energy predicted. Furthermore, utility planners and grid system operators need to understand the impact of the variable solar resource on solar energy conversion system performance. Accurate solar radiation data sets reduce the expense associated with mitigating performance risk and assist in understanding the impacts of solar resource variability. The accuracy of solar radiation measured by radiometers depends on the instrument performance specification, installation method, calibration procedure, measurement conditions, maintenance practices, location, and environmental conditions. This study addresses the effect of different calibration methods provided by radiometric calibration service providers, such as NREL and manufacturers of radiometers, on the resulting calibration responsivity. Some of these radiometers are calibrated indoors and some outdoors. To establish or understand the differences in calibration methodology, we processed and analyzed field-measured data from these radiometers. This study investigates calibration responsivities provided by NREL's broadband outdoor radiometer calibration (BORCAL) and a few prominent manufacturers. The BORCAL method provides the outdoor calibration responsivity of pyranometers and pyrheliometers at a 45-degree solar zenith angle, and as a function of solar zenith angle determined by clear-sky comparisons with reference irradiance. The BORCAL method also employs a thermal offset correction to the calibration responsivity of single-black thermopile detectors used in pyranometers. Indoor calibrations of radiometers by their manufacturers are performed using a stable artificial light source in a side-by-side comparison between the test radiometer under calibration and a reference radiometer of the same type. In both methods, the reference radiometer calibrations are traceable to the World Radiometric Reference (WRR). These different methods of calibration demonstrated +1% to +2% differences in solar irradiance measurement. Analyzing these differences will ultimately help determine the uncertainty of the field radiometer data and guide the development of a consensus standard for calibration. Further advancing procedures for precisely calibrating radiometers to world reference standards that reduce measurement uncertainty will allow more accurate prediction of solar output and improve the bankability of solar projects.

  11. Quantifying uncertainties in streamflow predictions through signature based inference of hydrological model parameters

    NASA Astrophysics Data System (ADS)

    Fenicia, Fabrizio; Reichert, Peter; Kavetski, Dmitri; Albert, Carlo

    2016-04-01

    The calibration of hydrological models based on signatures (e.g. Flow Duration Curves - FDCs) is often advocated as an alternative to model calibration based on the full time series of system responses (e.g. hydrographs). Signature-based calibration is motivated by various arguments. From a conceptual perspective, calibration on signatures is a way to filter out errors that are difficult to represent when calibrating on the full time series. Such errors may for example occur when observed and simulated hydrographs are shifted, either on the "time" axis (i.e. left or right), or on the "streamflow" axis (i.e. above or below). These shifts may be due to errors in the precipitation input (time or amount), and if not properly accounted for in the likelihood function, may cause biased parameter estimates (e.g. estimated model parameters that do not reproduce the recession characteristics of a hydrograph). From a practical perspective, signature-based calibration is seen as a possible solution for making predictions in ungauged basins. Where streamflow data are not available, it may in fact be possible to reliably estimate streamflow signatures. Previous research has for example shown how FDCs can be reliably estimated at ungauged locations based on climatic and physiographic influence factors. Typically, the goal of signature-based calibration is not the prediction of the signatures themselves, but the prediction of the system responses. Ideally, the prediction of system responses should be accompanied by a reliable quantification of the associated uncertainties. Previous approaches for signature-based calibration, however, do not allow reliable estimates of streamflow predictive distributions. Here, we illustrate how the Bayesian approach can be employed to obtain reliable streamflow predictive distributions based on signatures. A case study is presented, where a hydrological model is calibrated on FDCs and additional signatures. We propose an approach where the likelihood function for the signatures is derived from the likelihood for streamflow (rather than using an "ad-hoc" likelihood for the signatures as done in previous approaches). This likelihood is not easily tractable analytically and we therefore cannot apply "simple" MCMC methods. This numerical problem is solved using Approximate Bayesian Computation (ABC). Our results indicate that the proposed approach is suitable for producing reliable streamflow predictive distributions based on calibration to signature data. Moreover, our results provide indications of which signatures are more appropriate to represent the information content of the hydrograph.
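
    A toy Approximate Bayesian Computation (ABC) rejection sampler for signature-based calibration is sketched below: parameter draws are accepted when the simulated flow-duration curve falls within a tolerance of the observed one. The one-parameter runoff model, the tolerance, and the prior are all illustrative assumptions, far simpler than the likelihood-derived scheme the authors propose.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def simulate_streamflow(k, n=1000):
        """Stand-in 'hydrological model': runoff coefficient times rainfall."""
        return k * rng.exponential(1.0, n)

    def fdc(q, probs=np.linspace(0.05, 0.95, 19)):
        """Flow-duration curve: flow exceeded with probability p."""
        return np.quantile(q, 1 - probs)

    s_obs = fdc(simulate_streamflow(k=0.6))      # synthetic "observed" signature

    accepted = []
    for _ in range(5000):
        k = rng.uniform(0.1, 2.0)                # draw from a flat prior
        s_sim = fdc(simulate_streamflow(k))
        if np.sqrt(np.mean((s_sim - s_obs) ** 2)) < 0.1:   # ABC tolerance
            accepted.append(k)
    print(f"posterior mean k ~ {np.mean(accepted):.3f} ({len(accepted)} accepted)")
    ```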

  12. Calibration Method to Eliminate Zeroth Order Effect in Lateral Shearing Interferometry

    NASA Astrophysics Data System (ADS)

    Fang, Chao; Xiang, Yang; Qi, Keqi; Chen, Dawei

    2018-04-01

    In this paper, a calibration method is proposed which eliminates the zeroth order effect in lateral shearing interferometry. An analytical expression of the calibration error function is deduced, and the relationship between the phase-restoration error and calibration error is established. The analytical results show that the phase-restoration error introduced by the calibration error is proportional to the phase shifting error and zeroth order effect. The calibration method is verified using simulations and experiments. The simulation results show that the phase-restoration error is approximately proportional to the phase shift error and zeroth order effect, when the phase shifting error is less than 2° and the zeroth order effect is less than 0.2. The experimental result shows that compared with the conventional method with 9-frame interferograms, the calibration method with 5-frame interferograms achieves nearly the same restoration accuracy.

  13. Surface plasmon resonance microscopy: achieving a quantitative optical response

    PubMed Central

    Peterson, Alexander W.; Halter, Michael; Plant, Anne L.; Elliott, John T.

    2016-01-01

    Surface plasmon resonance (SPR) imaging allows real-time label-free imaging based on index of refraction, and changes in index of refraction at an interface. Optical parameter analysis is achieved by application of the Fresnel model to SPR data typically taken by an instrument in a prism based configuration. We carry out SPR imaging on a microscope by launching light into a sample, and collecting reflected light through a high numerical aperture microscope objective. The SPR microscope enables spatial resolution that approaches the diffraction limit, and has a dynamic range that allows detection of subnanometer to submicrometer changes in thickness of biological material at a surface. However, unambiguous quantitative interpretation of SPR changes using the microscope system could not be achieved using the Fresnel model because of polarization dependent attenuation and optical aberration that occurs in the high numerical aperture objective. To overcome this problem, we demonstrate a model to correct for polarization diattenuation and optical aberrations in the SPR data, and develop a procedure to calibrate reflectivity to index of refraction values. The calibration and correction strategy for quantitative analysis was validated by comparing the known indices of refraction of bulk materials with corrected SPR data interpreted with the Fresnel model. Subsequently, we applied our SPR microscopy method to evaluate the index of refraction for a series of polymer microspheres in aqueous media and validated the quality of the measurement with quantitative phase microscopy. PMID:27782542

  14. A PFC2D model of the interactions between the tire and the aggregate filled arrester bed on escape ramp

    NASA Astrophysics Data System (ADS)

    Qin, Pin-pin; Chen, Chui-ce; Pei, Shi-kang; Li, Xin

    2017-06-01

    The stopping distance of a runaway vehicle is determined by the entry speed, the design of the aggregate-filled arrester bed, and the longitudinal grade of the escape ramp. Although numerous previous studies have examined the influence of speed and grade on stopping distance, the aggregate properties have rarely been taken into account. Firstly, this paper analyzes the interactions between the tire and the aggregate. The tire and the aggregate are abstracted into a big particle unit and a particle combination unit consisting of many aggregates, respectively. Secondly, this paper proposes the assumption that this interaction is a kind of particle flow. Then, particle properties are used to describe the tire-particle unit and the aggregate-particle unit, and several simplified modeling steps using Particle Flow Code in 2 Dimensions (PFC2D) are put forward. On this basis, a PFC2D micro-simulation model of the interactions between the tire and the aggregate is proposed. The parameters of the particle properties are then calibrated by three groups of numerical tests. The calibrated model is verified against eight full-scale arrester bed tests to demonstrate its feasibility and accuracy. This model provides escape ramp designers with a feasible simulation method not only for predicting the stopping distance but also for considering the aggregate properties.

  15. Focal plane based wavefront sensing with random DM probes

    NASA Astrophysics Data System (ADS)

    Pluzhnik, Eugene; Sirbu, Dan; Belikov, Ruslan; Bendek, Eduardo; Dudinov, Vladimir N.

    2017-09-01

    An internal coronagraph with an adaptive optical system for wavefront control is being considered for direct imaging of exoplanets with upcoming space missions and concepts, including WFIRST, HabEx, LUVOIR, EXCEDE and ACESat. The main technical challenge associated with direct imaging of exoplanets is to control both diffracted and scattered light from the star so that even a dim planetary companion can be imaged. For a deformable mirror (DM) to create a dark hole with 10^-10 contrast in the image plane, wavefront errors must be accurately measured on the science focal plane detector to ensure a common optical path. We present here a method that uses a set of random phase probes applied to the DM to obtain a high-accuracy wavefront estimate even for a dynamically changing optical system. The presented numerical simulations and experimental results show the low noise sensitivity, high reliability, and robustness of the proposed approach. The method does not use any additional optics or complex calibration procedures and can be used during the calibration stage of any direct imaging mission. It can also be used in any optical experiment that uses a DM as an active optical element in the layout.
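
    A sketch of the generic probe-based field estimate behind such schemes (the standard pairwise-probe algebra, not necessarily the authors' estimator): for each DM probe p, the plus/minus intensity difference is linear in the real and imaginary parts of the focal-plane field E, so a few probes suffice for a least-squares solve. The field value, probes, and noise level below are made up.

    ```python
    import numpy as np

    # For probe p: I+ - I- = |E + p|^2 - |E - p|^2 = 4 * Re(E * conj(p)),
    # which is linear in (Re E, Im E) at each focal-plane pixel.
    rng = np.random.default_rng(4)
    E_true = 0.3 + 0.7j                       # unknown field at one pixel (made up)

    probes = rng.standard_normal(4) + 1j * rng.standard_normal(4)
    dI = np.array([abs(E_true + p)**2 - abs(E_true - p)**2 for p in probes])
    dI += 0.01 * rng.standard_normal(len(probes))        # detector noise

    # Re(E * conj(p)) = ReE * Rep + ImE * Imp  ->  linear system in (ReE, ImE)
    A = 4 * np.column_stack([probes.real, probes.imag])
    x, *_ = np.linalg.lstsq(A, dI, rcond=None)
    print(f"estimated E = {x[0]:.3f} + {x[1]:.3f}j (true {E_true})")
    ```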

  16. Application of Composite Small Calibration Objects in Traffic Accident Scene Photogrammetry

    PubMed Central

    Chen, Qiang; Xu, Hongguo; Tan, Lidong

    2015-01-01

    In order to address the difficulty of arranging large calibration objects and the low measurement accuracy of small calibration objects in traffic accident scene photogrammetry, a photogrammetric method based on a composite of small calibration objects is proposed. Several small calibration objects are placed around the traffic accident scene, and the coordinate system of the composite calibration object is given based on one of them. By maintaining the relative position and coplanar relationship of the small calibration objects, the local coordinate system of each small calibration object is transformed into the coordinate system of the composite calibration object. The two-dimensional direct linear transformation method is improved based on minimizing the reprojection error of the calibration points of all objects. A rectified image is obtained using the nonlinear optimization method. The increased accuracy of traffic accident scene photogrammetry using a composite small calibration object is demonstrated through the analysis of field experiments and case studies. PMID:26011052

  17. Quantifying and estimating the predictive accuracy for censored time-to-event data with competing risks.

    PubMed

    Wu, Cai; Li, Liang

    2018-05-15

    This paper focuses on quantifying and estimating the predictive accuracy of prognostic models for time-to-event outcomes with competing events. We consider the time-dependent discrimination and calibration metrics, including the receiver operating characteristics curve and the Brier score, in the context of competing risks. To address censoring, we propose a unified nonparametric estimation framework for both discrimination and calibration measures, by weighting the censored subjects with the conditional probability of the event of interest given the observed data. The proposed method can be extended to time-dependent predictive accuracy metrics constructed from a general class of loss functions. We apply the methodology to a data set from the African American Study of Kidney Disease and Hypertension to evaluate the predictive accuracy of a prognostic risk score in predicting end-stage renal disease, accounting for the competing risk of pre-end-stage renal disease death, and evaluate its numerical performance in extensive simulation studies. Copyright © 2018 John Wiley & Sons, Ltd.
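
    The censoring-weighting idea can be illustrated with a minimal inverse-probability-of-censoring-weighted (IPCW) Brier score, sketched below. The Kaplan-Meier estimate of the censoring distribution, the data-generating mechanism, and the constant toy risk predictions are simplifying assumptions, not the paper's estimator.

    ```python
    import numpy as np

    def km_censoring_survival(time, event, t_grid):
        """Kaplan-Meier estimate of the censoring survival G(t) (event == 0)."""
        order = np.argsort(time)
        ts, cens = time[order], (event[order] == 0)
        n = len(ts)
        factors = np.where(cens, 1 - 1 / (n - np.arange(n)), 1.0)
        surv_after = np.cumprod(factors)            # G just after each ordered time
        idx = np.searchsorted(ts, t_grid, side="right") - 1
        return np.where(idx >= 0, surv_after[np.clip(idx, 0, n - 1)], 1.0)

    def ipcw_brier(time, event, risk, t):
        """Brier score at horizon t for cause 1 (event: 0=censored, 1, 2)."""
        Gt = km_censoring_survival(time, event, np.array([t]))[0]
        G_Ti = km_censoring_survival(time, event, time)
        w = np.where((time <= t) & (event > 0), 1 / np.maximum(G_Ti, 1e-8),
            np.where(time > t, 1 / max(Gt, 1e-8), 0.0))
        y = ((time <= t) & (event == 1)).astype(float)
        return np.mean(w * (y - risk) ** 2)

    rng = np.random.default_rng(5)
    n = 500
    T, C = rng.exponential(2, n), rng.exponential(3, n)
    cause = 1 + (rng.random(n) < 0.3)               # true cause: 1 (70%) or 2 (30%)
    time = np.minimum(T, C)
    event = np.where(T <= C, cause, 0)
    risk = np.full(n, 0.3)                          # uninformative toy predictions
    print(f"IPCW Brier at t=2: {ipcw_brier(time, event, risk, 2.0):.3f}")
    ```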

  18. Quantitation without Calibration: Response Profile as an Indicator of Target Amount.

    PubMed

    Debnath, Mrittika; Farace, Jessica M; Johnson, Kristopher D; Nesterova, Irina V

    2018-06-21

    Quantitative assessment of biomarkers is essential in numerous contexts from decision-making in clinical situations to food quality monitoring to interpretation of life-science research findings. However, appropriate quantitation techniques are not as widely addressed as detection methods. One of the major challenges in biomarker quantitation is the need to have a calibration for correlating a measured signal to a target amount. This step complicates the methodologies and makes them less sustainable. In this work we address the issue via a new strategy: relying on the position of the response profile rather than on an absolute signal value for assessment of a target's amount. To enable this capability we develop a target-probe binding mechanism based on a negative cooperativity effect. A proof-of-concept example demonstrates that the model is suitable for quantitative analysis of nucleic acids over a wide concentration range. The general principles of the platform will be applicable toward a variety of biomarkers such as nucleic acids, proteins, peptides, and others.

  19. Flow and transport model of the Savannah River Site Old Burial Grounds using Data Fusion modeling (DFM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1995-11-01

    The Data Fusion Modeling (DFM) approach has been used to develop a groundwater flow and transport model of the Old Burial Grounds (OBG) at the US Department of Energy's Savannah River Site (SRS). The resulting DFM model was compared to an existing model that was calibrated via the typical trial-and-error method. The OBG was chosen because a substantial amount of hydrogeologic information is available, a FACT (derivative of VAM3DCG) flow and transport model of the site exists, and the calibration and numerics were challenging with standard approaches. The DFM flow model developed here is similar to the flow model by Flach et al. This allows comparison of the two flow models and validates the utility of DFM. The contaminant of interest for this study is tritium, because it is a geochemically conservative tracer that has been monitored along the seepline near the F-Area effluent and Fourmile Branch for several years.

  20. Calibration and accuracy analysis of a focused plenoptic camera

    NASA Astrophysics Data System (ADS)

    Zeller, N.; Quint, F.; Stilla, U.

    2014-08-01

    In this article we introduce new methods for the calibration of depth images from focused plenoptic cameras and validate the results. We start with a brief description of the concept of a focused plenoptic camera and how a depth map can be estimated from the recorded raw image. For this camera, an analytical expression of the depth accuracy is derived for the first time. In the main part of the paper, methods to calibrate a focused plenoptic camera are developed and evaluated. The optical imaging process is calibrated by using a method which is already known from the calibration of traditional cameras. For the calibration of the depth map, two new model-based methods, which make use of the projection concept of the camera, are developed. These new methods are compared to a common curve fitting approach, which is based on Taylor-series approximation. Both model-based methods show significant advantages compared to the curve fitting method. They need fewer reference points for calibration than the curve fitting method and, moreover, supply a function which is valid beyond the calibration range. In addition, the depth map accuracy of the plenoptic camera was experimentally investigated for different focal lengths of the main lens and compared to the analytical evaluation.

  1. Temperature profile around a basaltic sill intruded into wet sediments

    USGS Publications Warehouse

    Baker, Leslie; Bernard, Andrew; Rember, William C.; Milazzo, Moses; Dundas, Colin M.; Abramov, Oleg; Kestay, Laszlo P.

    2015-01-01

    The transfer of heat into wet sediments from magmatic intrusions or lava flows is not well constrained from field data. Such field constraints on numerical models of heat transfer could significantly improve our understanding of water–lava interactions. We use experimentally calibrated pollen darkening to measure the temperature profile around a basaltic sill emplaced into wet lakebed sediments. It is well known that, upon heating, initially transparent palynomorphs darken progressively through golden, brown, and black shades before being destroyed; however, this approach to measuring temperature has not been applied to volcanological questions. We collected sediment samples from established Miocene fossil localities at Clarkia, Idaho. Fossils in the sediments include pollen from numerous tree and shrub species. We experimentally calibrated changes in the color of Clarkia sediment pollen and used this calibration to determine sediment temperatures around a Miocene basaltic sill emplaced in the sediments. Results indicated a flat temperature profile above and below the sill, with T > 325 °C within 1 cm of the basalt-sediment contact, near 300 °C at 1–2 cm from the contact, and ~ 250 °C at 1 m from the sill contact. This profile suggests that heat transport in the sediments was hydrothermally rather than conductively controlled. This information will be used to test numerical models of heat transfer in wet sediments on Earth and Mars.

  2. On-line calibration of high-response pressure transducers during jet-engine testing

    NASA Technical Reports Server (NTRS)

    Armentrout, E. C.

    1974-01-01

    Jet engine testing concerned with the effect of inlet pressure and temperature distortions on engine performance involves the use of numerous miniature pressure transducers. Despite recent improvements in the manufacture of miniature pressure transducers, they still exhibit sensitivity change and zero-shift with temperature and time. To obtain meaningful data, a calibration system is needed to determine these changes. A system has been developed which provides for computer selection of appropriate reference pressures, selected from nine different sources, to provide a two- or three-point calibration. Calibrations are made on command, before and sometimes after each data point. A unique no-leak matrix valve design is used in the reference pressure system. Zero-shift corrections are measured and the values are automatically inserted into the data reduction program.
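
    A minimal sketch of the two-point correction such a system applies: fit sensitivity and zero-offset from the reference pressures, then use them on subsequent readings. All values below are hypothetical.

      import numpy as np

      def two_point_cal(v_ref, p_ref):
          """Linear fit p = sens * v + zero from reference readings."""
          sens, zero = np.polyfit(v_ref, p_ref, 1)
          return sens, zero

      # Hypothetical reference pressures applied on command before a data point:
      sens, zero = two_point_cal([0.012, 0.087], [20.0, 150.0])   # volts -> kPa
      p_corrected = sens * 0.051 + zero                           # corrected reading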

  3. Evaluating the potential for remote bathymetric mapping of a turbid, sand-bed river: 2. Application to hyperspectral image data from the Platte River

    USGS Publications Warehouse

    Legleiter, C.J.; Kinzel, P.J.; Overstreet, B.T.

    2011-01-01

    This study examined the possibility of mapping depth from optical image data in turbid, sediment-laden channels. Analysis of hyperspectral images from the Platte River indicated that depth retrieval in these environments is feasible, but might not be highly accurate. Four methods of calibrating image-derived depth estimates were evaluated. The first involved extracting image spectra at survey point locations throughout the reach. These paired observations of depth and reflectance were subjected to optimal band ratio analysis (OBRA) to relate (R2 = 0.596) a spectrally based quantity to flow depth. Two other methods were based on OBRA of data from individual cross sections. A fourth strategy used ground-based reflectance measurements to derive an OBRA relation (R2 = 0.944) that was then applied to the image. Depth retrieval accuracy was assessed by visually inspecting cross sections and calculating various error metrics. Calibration via field spectroscopy resulted in a shallow bias but provided relative accuracies similar to image-based methods. Reach-aggregated OBRA was marginally superior to calibrations based on individual cross sections, and depth retrieval accuracy varied considerably along each reach. Errors were lower and observed versus predicted regression R2 values higher for a relatively simple, deeper site than a shallower, braided reach; errors were 1/3 and 1/2 the mean depth for the two reaches. Bathymetric maps were coherent and hydraulically reasonable, however, and might be more reliable than implied by numerical metrics. As an example application, linear discriminant analysis was used to produce a series of depth threshold maps for characterizing shallow-water habitat for roosting cranes. © 2011 by the American Geophysical Union.
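
    A small sketch of OBRA as described: regress depth on the log band ratio for every band pair and keep the pair with the highest R². Array names are illustrative.

      import numpy as np
      from itertools import combinations

      def obra(R, depth):
          """R: (n_pts, n_bands) reflectance; depth: (n_pts,). Returns the
          best band pair, its R^2, and (slope, intercept) of the regression
          of depth on X = ln(R_i / R_j)."""
          depth = np.asarray(depth, float)
          ss_tot = (depth - depth.mean()) @ (depth - depth.mean())
          best = (None, -np.inf, None)
          for i, j in combinations(range(R.shape[1]), 2):
              X = np.log(R[:, i] / R[:, j])
              slope, intercept = np.polyfit(X, depth, 1)
              resid = depth - (slope * X + intercept)
              r2 = 1.0 - (resid @ resid) / ss_tot
              if r2 > best[1]:
                  best = ((i, j), r2, (slope, intercept))
          return best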

  5. Evaluation of medium-range ensemble flood forecasting based on calibration strategies and ensemble methods in Lanjiang Basin, Southeast China

    NASA Astrophysics Data System (ADS)

    Liu, Li; Gao, Chao; Xuan, Weidong; Xu, Yue-Ping

    2017-11-01

    Ensemble flood forecasts by hydrological models using numerical weather prediction products as forcing data are becoming more commonly used in operational flood forecasting applications. In this study, a hydrological ensemble flood forecasting system comprised of an automatically calibrated Variable Infiltration Capacity model and quantitative precipitation forecasts from the TIGGE dataset is constructed for Lanjiang Basin, Southeast China. The impacts of calibration strategies and ensemble methods on the performance of the system are then evaluated. The hydrological model is optimized by the parallel-programmed ε-NSGA II multi-objective algorithm. According to the solutions by ε-NSGA II, two differently parameterized models are determined to simulate daily flows and peak flows at each of the three hydrological stations. Then a simple yet effective modular approach is proposed to combine these daily and peak flows at the same station into one composite series. Five ensemble methods and various evaluation metrics are adopted. The results show that ε-NSGA II can provide an objective determination of parameter estimation, and the parallel program permits a more efficient simulation. It is also demonstrated that the forecasts from ECMWF have more favorable skill scores than the other Ensemble Prediction Systems. The multimodel ensembles have advantages over all the single-model ensembles, and the multimodel methods weighted on members and skill scores outperform the other methods. Furthermore, the overall performance at the three stations can be satisfactory up to ten days, although hydrological errors degrade the skill score by approximately 2 days, with an influence that persists, while weakening, out to a lead time of 10 days. With respect to peak flows selected by the Peaks Over Threshold approach, the ensemble means from single models or multimodels are generally underestimated, indicating that while the ensemble mean brings overall improvement in forecasting of flows, for peak values it is more appropriate to take the flood forecasts from each individual member into account.
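
    To make the skill-weighted combination concrete, a minimal sketch of a weighted multimodel ensemble mean; the skill scores are assumed precomputed and nonnegative, and this is only one plausible reading of "weighted on skill scores".

      import numpy as np

      def skill_weighted_mean(forecasts, skill):
          """forecasts: (n_members, n_leadtimes); skill: (n_members,) >= 0."""
          w = np.asarray(skill, float)
          w = w / w.sum()                     # normalize weights
          return w @ np.asarray(forecasts, float)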

  6. Numerical simulation of the solidification microstructure of a 17-4PH stainless steel investment casting and its experimental verification

    NASA Astrophysics Data System (ADS)

    Li, You Yun; Tsai, DeChang; Hwang, Weng Sing

    2008-06-01

    The purpose of this study is to develop a technique for numerically simulating the microstructure of 17-4PH (precipitation hardening) stainless steel during investment casting. A cellular automaton (CA) algorithm was adopted to simulate the nucleation and grain growth. First a calibration casting was made, and then, by comparing the microstructures of the calibration casting with those simulated using different kinetic growth coefficients (a2, a3) in the CA, the most appropriate set of values for a2 and a3 was obtained. This set of values was then applied to the microstructure simulation of a separate casting, which was also actually made. Through this approach, the study arrived at a set of growth kinetic coefficients from the calibration casting, a2 = 2.9 × 10^-5 and a3 = 1.49 × 10^-7, which was then used to predict the microstructure of the other test casting. Consequently, a good correlation was found between the microstructure of the actual 17-4PH casting and the simulation result.
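
    A minimal sketch of the growth-kinetics law such CA microstructure models typically use, with the calibrated coefficients quoted above; the polynomial form and the units are assumptions for illustration, not taken from the paper.

      def growth_velocity(dT, a2=2.9e-5, a3=1.49e-7):
          """Dendrite tip growth velocity vs. undercooling dT (K), using the
          common v = a2*dT^2 + a3*dT^3 form (units assumed)."""
          return a2 * dT**2 + a3 * dT**3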

  7. Comparison of TLD calibration methods for  192Ir dosimetry

    PubMed Central

    Butler, Duncan J.; Wilfert, Lisa; Ebert, Martin A.; Todd, Stephen P.; Hayton, Anna J.M.; Kron, Tomas

    2013-01-01

    For the purpose of dose measurement using a high-dose rate 192Ir source, four methods of thermoluminescent dosimeter (TLD) calibration were investigated. Three of the four calibration methods used the 192Ir source. Dwell times were calculated to deliver 1 Gy to the TLDs irradiated either in air or water. Dwell time calculations were confirmed by direct measurement using an ionization chamber. The fourth method of calibration used 6 MV photons from a medical linear accelerator, and an energy correction factor was applied to account for the difference in sensitivity of the TLDs in 192Ir and 6 MV. The results of the four TLD calibration methods are presented in terms of the results of a brachytherapy audit where seven Australian centers irradiated three sets of TLDs in a water phantom. The results were in agreement within estimated uncertainties when the TLDs were calibrated with the 192Ir source. Calibrating TLDs in a phantom similar to that used for the audit proved to be the most practical method and provided the greatest confidence in measured dose. When calibrated using 6 MV photons, the TLD results were consistently higher than the 192Ir-calibrated TLDs, suggesting this method does not fully correct for the response of the TLDs when irradiated in the audit phantom. PACS number: 87 PMID:23318392

  8. Methods for Calibration of Prout-Tompkins Kinetics Parameters Using EZM Iteration and GLO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wemhoff, A P; Burnham, A K; de Supinski, B

    2006-11-07

    This document contains information regarding the standard procedures used to calibrate chemical kinetics parameters for the extended Prout-Tompkins model to match experimental data. Two methods for calibration are described: EZM calibration and GLO calibration. EZM calibration matches kinetics parameters to three data points, while GLO calibration slightly adjusts kinetic parameters to match multiple points. Information is provided regarding the theoretical approach and application procedure for both of these calibration algorithms. It is recommended that for the calibration process, the user begin with EZM calibration to provide a good estimate, and then fine-tune the parameters using GLO. Two examples have been provided to guide the reader through a general calibration process.
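
    For orientation, a minimal sketch of the classic autocatalytic Prout-Tompkins rate law with an Arrhenius rate constant, the kind of model whose parameters EZM/GLO-style procedures calibrate; the parameter values below are illustrative, not calibrated, and the extended model adds further terms.

      import numpy as np
      from scipy.integrate import solve_ivp

      R_GAS = 8.314  # J/(mol K)

      def pt_rate(t, alpha, Z, E, n, m, T):
          """d(alpha)/dt = k(T) * alpha^m * (1 - alpha)^n, k = Z exp(-E/RT)."""
          k = Z * np.exp(-E / (R_GAS * T))
          return k * alpha**m * (1.0 - alpha)**n

      # Illustrative parameters; alpha starts slightly above zero so the
      # autocatalytic term can ignite the reaction.
      sol = solve_ivp(pt_rate, (0.0, 1.0e4), [1.0e-4],
                      args=(1.0e12, 1.5e5, 1.0, 1.0, 500.0), max_step=10.0)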

  9. Predicting Upscaled Behavior of Aqueous Reactants in Heterogeneous Porous Media

    NASA Astrophysics Data System (ADS)

    Wright, E. E.; Hansen, S. K.; Bolster, D.; Richter, D. H.; Vesselinov, V. V.

    2017-12-01

    When modeling reactive transport, reaction rates are often overestimated due to the improper assumption of perfect mixing at the support scale of the transport model. In reality, fronts tend to form between participants in thermodynamically favorable reactions, leading to segregation of reactants into islands or fingers. When such a configuration arises, reactions are limited to the interface between the reactive solutes. Closure methods for estimating control-volume-effective reaction rates in terms of quantities defined at the control volume scale do not presently exist, but their development is crucial for effective field-scale modeling. We attack this problem through a combination of analytical and numerical means. Specifically, we numerically study reactive transport through an ensemble of realizations of two-dimensional heterogeneous porous media. We then employ regression analysis to calibrate an analytically-derived relationship between reaction rate and various dimensionless quantities representing conductivity-field heterogeneity and the respective strengths of diffusion, reaction and advection.

  10. A computational and cellular solids approach to the stiffness-based design of bone scaffolds.

    PubMed

    Norato, J A; Wagoner Johnson, A J

    2011-09-01

    We derive a cellular solids approach to the design of bone scaffolds for stiffness and pore size. Specifically, we focus on scaffolds made of stacked, alternating, orthogonal layers of hydroxyapatite rods, such as those obtained via micro-robotic deposition, and aim to determine the rod diameter, spacing and overlap required to obtain specified elastic moduli and pore size. To validate and calibrate the cellular solids model, we employ a finite element model and determine the effective scaffold moduli via numerical homogenization. In order to perform an efficient, automated execution of the numerical studies, we employ a geometry projection method so that analyses corresponding to different scaffold dimensions can be performed on a fixed, non-conforming mesh. Based on the developed model, we provide design charts to aid in the selection of rod diameter, spacing and overlap to be used in the robotic deposition to attain desired elastic moduli and pore size.

  11. Fast Estimation of Strains for Cross-Beams Six-Axis Force/Torque Sensors by Mechanical Modeling

    PubMed Central

    Ma, Junqing; Song, Aiguo

    2013-01-01

    Strain distributions are crucial criteria of cross-beams six-axis force/torque sensors. The conventional method for calculating the criteria is to utilize Finite Element Analysis (FEA) to get numerical solutions. This paper aims to obtain analytical solutions of strains under the effect of external force/torque in each dimension. Generic mechanical models for cross-beams six-axis force/torque sensors are proposed, in which deformable cross elastic beams and compliant beams are modeled as quasi-static Timoshenko beams. A detailed description of model assumptions, model idealizations, application scope and model establishment is presented. The results are validated by both numerical FEA simulations and calibration experiments, and test results are found to be compatible with each other for a wide range of geometric properties. The proposed analytical solutions are demonstrated to be an accurate estimation algorithm with higher efficiency. PMID:23686144

  12. SandiaMCR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2012-01-05

    SandiaMCR was developed to identify pure components and their concentrations from spectral data. This software efficiently implements multivariate curve resolution-alternating least squares (MCR-ALS), principal component analysis (PCA), and singular value decomposition (SVD). Version 3.37 also includes the PARAFAC-ALS and Tucker-1 (for trilinear analysis) algorithms. The alternating least squares methods can be used to determine the composition with incomplete or no prior information on the constituents and their concentrations. The software allows the specification of numerous preprocessing, initialization, data selection and compression options for the efficient processing of large data sets, including the definition of equality and non-negativity constraints to realistically restrict the solution set, various normalization or weighting options based on the statistics of the data, several initialization choices and data compression. The software has been designed to provide a practicing spectroscopist the tools required to routinely analyze data in a reasonable time and without requiring expert intervention.
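
    A bare-bones sketch of the MCR-ALS core: alternate least-squares updates of concentrations C and spectra S so that D ≈ C S^T under non-negativity. Clipping after each unconstrained solve is a crude stand-in for the constrained solvers a production code such as this would use.

      import numpy as np

      def mcr_als(D, n_components, n_iter=200, seed=0):
          """Factor D (samples x channels) as D ~= C @ S.T with C, S >= 0."""
          rng = np.random.default_rng(seed)
          S = rng.random((D.shape[1], n_components))
          for _ in range(n_iter):
              C = np.clip(D @ S @ np.linalg.pinv(S.T @ S), 0.0, None)
              S = np.clip(D.T @ C @ np.linalg.pinv(C.T @ C), 0.0, None)
          return C, S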

  13. A calibration method based on virtual large planar target for cameras with large FOV

    NASA Astrophysics Data System (ADS)

    Yu, Lei; Han, Yangyang; Nie, Hong; Ou, Qiaofeng; Xiong, Bangshu

    2018-02-01

    In order to obtain high precision in camera calibration, a target should be large enough to cover the whole field of view (FOV). For cameras with large FOV, using a small target seriously reduces the precision of calibration, yet a large target causes many difficulties in making, carrying and employing it. In order to solve this problem, a calibration method based on a virtual large planar target (VLPT), which is virtually constructed from multiple small targets (STs), is proposed for cameras with large FOV. In the VLPT-based calibration method, first, the positions and directions of the STs are changed several times to obtain a number of calibration images. Secondly, the VLPT of each calibration image is created by finding the virtual points corresponding to the feature points of the STs. Finally, intrinsic and extrinsic parameters of the camera are calculated by using the VLPTs. Experimental results show that the proposed method not only achieves calibration precision similar to that of methods employing a large target, but also has good stability over the whole measurement area. Thus, the difficulties of accurately calibrating cameras with large FOV can be tackled by the proposed method with good operability.

  14. A Full-Envelope Air Data Calibration and Three-Dimensional Wind Estimation Method Using Global Output-Error Optimization and Flight-Test Techniques

    NASA Technical Reports Server (NTRS)

    Taylor, Brian R.

    2012-01-01

    A novel, efficient air data calibration method is proposed for aircraft with limited envelopes. This method uses output-error optimization on three-dimensional inertial velocities to estimate calibration and wind parameters. Calibration parameters are based on assumed calibration models for static pressure, angle of attack, and flank angle. Estimated wind parameters are the north, east, and down components. The only assumptions needed for this method are that the inertial velocities and Euler angles are accurate, the calibration models are correct, and that the steady-state component of wind is constant throughout the maneuver. A two-minute maneuver was designed to excite the aircraft over the range of air data calibration parameters and de-correlate the angle-of-attack bias from the vertical component of wind. Simulation of the X-48B (The Boeing Company, Chicago, Illinois) aircraft was used to validate the method, ultimately using data derived from wind-tunnel testing to simulate the uncalibrated air data measurements. Results from the simulation were accurate and robust to turbulence levels comparable to those observed in flight. Future experiments are planned to evaluate the proposed air data calibration in a flight environment.

  15. A fast calibration method for 3-D tracking of ultrasound images using a spatial localizer.

    PubMed

    Pagoulatos, N; Haynor, D R; Kim, Y

    2001-09-01

    We have developed a fast calibration method for computing the position and orientation of 2-D ultrasound (US) images in 3-D space where a position sensor is mounted on the US probe. This calibration is required in the fields of 3-D ultrasound and registration of ultrasound with other imaging modalities. Most of the existing calibration methods require a complex and tedious experimental procedure. Our method is simple and it is based on a custom-built phantom. Thirty N-fiducials (markers in the shape of the letter "N") embedded in the phantom provide the basis for our calibration procedure. We calibrated a 3.5-MHz sector phased-array probe with a magnetic position sensor, and we studied the accuracy and precision of our method. A typical calibration procedure requires approximately 2 min. We conclude that we can achieve accurate and precise calibration using a single US image, provided that a large number (approximately ten) of N-fiducials are captured within the US image, enabling a representative sampling of the imaging plane.

  16. Self-calibration method for rotating laser positioning system using interscanning technology and ultrasonic ranging.

    PubMed

    Wu, Jun; Yu, Zhijing; Zhuge, Jingchang

    2016-04-01

    A rotating laser positioning system (RLPS) is an efficient measurement method for large-scale metrology. Because the multiple transmitter stations constitute a measurement network, the position relationship of these stations must first be calibrated. However, with auxiliary devices such as a laser tracker or scale bar and a complex calibration process, the traditional calibration methods greatly reduce the measurement efficiency. This paper proposes a self-calibration method for RLPS, which can automatically obtain the position relationship. The method is implemented through interscanning technology by using a calibration bar mounted on the transmitter station. Each bar is composed of three RLPS receivers and one ultrasonic sensor whose coordinates are known in advance. The calibration algorithm is mainly based on multiplane and distance constraints and is introduced in detail through a two-station mathematical model. The repeated experiments demonstrate that the coordinate measurement uncertainty of spatial points by using this method is about 0.1 mm, and the accuracy experiments show that the average coordinate measurement deviation is about 0.3 mm compared with a laser tracker. The accuracy can meet the requirements of most applications, while the calibration efficiency is significantly improved.

  17. Mathematical modeling of malaria infection with innate and adaptive immunity in individuals and agent-based communities.

    PubMed

    Gurarie, David; Karl, Stephan; Zimmerman, Peter A; King, Charles H; St Pierre, Timothy G; Davis, Timothy M E

    2012-01-01

    Agent-based modeling of Plasmodium falciparum infection offers an attractive alternative to the conventional Ross-Macdonald methodology, as it allows simulation of heterogeneous communities subjected to realistic transmission (inoculation patterns). We developed a new, agent-based model that accounts for the essential in-host processes: parasite replication and its regulation by innate and adaptive immunity. The model also incorporates a simplified version of antigenic variation by Plasmodium falciparum. We calibrated the model using data from malaria-therapy (MT) studies, and developed a novel calibration procedure that accounts for a deterministic and a pseudo-random component in the observed parasite density patterns. Using the parasite density patterns of 122 MT patients, we generated a large number of calibrated parameters. The resulting data set served as a basis for constructing and simulating heterogeneous agent-based (AB) communities of MT-like hosts. We conducted several numerical experiments subjecting AB communities to realistic inoculation patterns reported from previous field studies, and compared the model output to the observed malaria prevalence in the field. There was overall consistency, supporting the potential of this agent-based methodology to represent transmission in realistic communities. Our approach represents a novel, convenient and versatile method to model Plasmodium falciparum infection.

  18. Fine PM measurements: personal and indoor air monitoring.

    PubMed

    Jantunen, M; Hänninen, O; Koistinen, K; Hashim, J H

    2002-12-01

    This review compiles personal and indoor microenvironment particulate matter (PM) monitoring needs from recently set research objectives, most importantly the NRC publication "Research Priorities for Airborne Particulate Matter" (1998). Techniques and equipment used to monitor PM personal exposures and microenvironment concentrations, and the constituents of the sampled PM, during the last 20 years are then reviewed. Development objectives are set and discussed for personal and microenvironment PM samplers and monitors, for filter materials, and for analytical laboratory techniques for equipment calibration, filter weighing and laboratory climate control. The progress is leading towards smaller sample flows and lighter, silent, independent (battery-powered) monitors with data logging capacity to store microenvironment- or activity-relevant sensor data, advanced flow controls and continuous recording of the concentration. The best filters are non-hygroscopic, chemically pure and inert, and physically robust against mechanical wear. Semiautomatic and primary-standard-equivalent positive displacement flow meters are replacing the less accurate methods in flow calibration, and personal sampling flow rates should also become mass flow controlled (with or without volumetric compensation for pressure and temperature changes). In the weighing laboratory the alternatives are climatic control (set temperature and relative humidity), and mechanically simpler thermostatic heating, air conditioning and dehumidification systems combined with numerical control of temperature, humidity and pressure effects on flow calibration and filter weighing.

  19. A nonlinear propagation model-based phase calibration technique for membrane hydrophones.

    PubMed

    Cooling, Martin P; Humphrey, Victor F

    2008-01-01

    A technique for the phase calibration of membrane hydrophones in the frequency range up to 80 MHz is described. This is achieved by comparing measurements and numerical simulation of a nonlinearly distorted test field. The field prediction is obtained using a finite-difference model that solves the nonlinear Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation in the frequency domain. The measurements are made in the far field of a 3.5 MHz focusing circular transducer, for which it is demonstrated that, at the high drive level used, spatial averaging effects due to the hydrophone's finite receive area are negligible. The method provides a phase calibration of the hydrophone under test without the need for a device serving as a phase response reference, but it requires prior knowledge of the amplitude sensitivity at the fundamental frequency. The technique is demonstrated using a 50-µm-thick bilaminar membrane hydrophone, for which the results obtained show functional agreement with predictions of a hydrophone response model. Further validation of the results is obtained by application of the response to the measurement of the high amplitude waveforms generated by a modern biomedical ultrasonic imaging system. It is demonstrated that full deconvolution of the calculated complex frequency response of a nonideal hydrophone results in physically realistic measurements of the transmitted waveforms.
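
    A rough sketch of the comparison step, under the assumption that the complex response follows from the ratio of the measured voltage spectrum to the simulated (KZK-predicted) pressure spectrum at the well-driven harmonics; variable names are illustrative.

      import numpy as np

      def complex_response(v_meas, p_sim, dt):
          """Ratio of measured voltage spectrum to simulated pressure spectrum."""
          V = np.fft.rfft(v_meas)
          P = np.fft.rfft(p_sim)
          f = np.fft.rfftfreq(len(v_meas), dt)
          keep = np.abs(P) > 1e-3 * np.abs(P).max()   # only well-driven bins
          H = np.full(V.shape, np.nan + 0j)
          H[keep] = V[keep] / P[keep]
          return f, H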

  20. Volumetric calibration of a plenoptic camera.

    PubMed

    Hall, Elise Munz; Fahringer, Timothy W; Guildenbecher, Daniel R; Thurow, Brian S

    2018-02-01

    The volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods is examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.
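
    To illustrate the polynomial-mapping idea (the volumetric dewarping variant), a minimal sketch: fit a 3-D polynomial from reconstructed dot positions to the known dot-card coordinates, then apply it to new reconstructions. Degree 2 is used for brevity; the paper's exact polynomial form is not assumed.

      import numpy as np

      def poly_terms(X):
          x, y, z = np.asarray(X, float).T
          return np.column_stack([np.ones_like(x), x, y, z,
                                  x*y, x*z, y*z, x**2, y**2, z**2])

      def fit_mapping(X_rec, X_true):
          """One linear least-squares fit per output coordinate."""
          coef, *_ = np.linalg.lstsq(poly_terms(X_rec),
                                     np.asarray(X_true, float), rcond=None)
          return coef                        # shape (10, 3)

      def apply_mapping(coef, X):
          return poly_terms(X) @ coef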

  1. MATHEMATICS PANEL QUARTERLY PROGRESS REPORT FOR PERIOD ENDING JULY 31, 1952

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perry, C.L. ed.

    1952-10-27

    The background and status of the following projects of the Mathematics Panel are reported: test problems for the ORACLE arithmetic unit; errors in matrix operations; basic studies in the Monte Carlo method; a Sturm-Liouville problem; approximate steady-state solution of the equation of continuity; estimation of volume of lymph space; x-radiation effects on respiration rates in grasshopper embryos; temperature effects in irradiation experiments with yeast; LD50 estimation for burros and swine exposed to gamma radiation; thermal-neutron penetration in tissue; kinetics of the HBr-HBrO3 reaction; isotope effect in reaction rate constants; experimental determination of diffusivity coefficients; Dirac wave equations; fitting a calibration curve; beta decay (field factors); neutron decay theory; calculation of internal conversion coefficients with screening; estimation of alignment ratios; optimum allocation of counting times; calculation of coincidence probabilities for a double-crystal detector; reactor inequalities; heat flow in long rectangular tubes; solving an equation by numerical methods; numerical integration; evaluation of a function; depigmentation of a biological dosimeter. (L.M.T.)

  2. Novel crystal timing calibration method based on total variation

    NASA Astrophysics Data System (ADS)

    Yu, Xingjian; Isobe, Takashi; Watanabe, Mitsuo; Liu, Huafeng

    2016-11-01

    A novel crystal timing calibration method based on total variation (TV), abbreviated as 'TV merge', has been developed for a high-resolution positron emission tomography (PET) system. The proposed method was developed for a system with a large number of crystals and can provide timing calibration at the crystal level. In the proposed method, the timing calibration process is formulated as a linear problem, and to robustly optimize the timing resolution, a TV constraint is added to the linear equation. Moreover, to solve the computer-memory problem associated with the calculation of the timing calibration factors for systems with a large number of crystals, a merge component is used for obtaining the crystal-level timing calibration values. Compared with other conventional methods, data measured from a standard cylindrical phantom filled with a radioisotope solution are sufficient for performing a high-precision crystal-level timing calibration. In this paper, both simulation and experimental studies were performed to demonstrate the effectiveness and robustness of the TV merge method. We compare the timing resolutions of a 22Na point source, located in the field of view (FOV) of the brain PET system, with various calibration techniques. After implementing the TV merge method, the timing resolution improved from 3.34 ns full width at half maximum (FWHM) to 2.31 ns FWHM.
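
    A minimal sketch of the "linear problem + TV penalty" formulation: minimize ||A t - b||² + λ·TV(t) for per-crystal timing offsets t, using a smoothed absolute value so plain gradient descent suffices. The design matrix A (mapping offsets to measured pair time differences), the step size, and λ are assumptions for illustration, not the paper's solver.

      import numpy as np

      def tv_calibrate(A, b, lam=1.0, eps=1e-6, lr=1e-4, n_iter=20000):
          """Gradient descent on ||A t - b||^2 + lam * sum sqrt((D t)^2 + eps)."""
          n = A.shape[1]
          D = (np.eye(n, k=1) - np.eye(n))[:-1]   # first-difference operator
          t = np.zeros(n)
          for _ in range(n_iter):
              Dt = D @ t
              g = 2.0 * A.T @ (A @ t - b) + lam * D.T @ (Dt / np.sqrt(Dt**2 + eps))
              t -= lr * g                          # fixed step, for illustration
          return t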

  3. BESTEST-EX | Buildings | NREL

    Science.gov Websites

    BESTEST-EX is a method for testing home energy audit software and associated calibration methods. When completed, the ANSI/RESNET Standard Method of Test (SMOT) for Energy Analysis Model Calibration Methods will specify test procedures for evaluating calibration methods used in conjunction with predicting building energy use.

  4. Influence of mesh structure on 2D full shallow water equations and SCS Curve Number simulation of rainfall/runoff events

    NASA Astrophysics Data System (ADS)

    Caviedes-Voullième, Daniel; García-Navarro, Pilar; Murillo, Javier

    2012-07-01

    Hydrological simulation of rain-runoff processes is often performed with lumped models which rely on calibration to generate storm hydrographs and study catchment response to rain. In this paper, a distributed, physically-based numerical model is used for runoff simulation in a mountain catchment. This approach offers two advantages. The first is that by using shallow-water equations for runoff flow, there is less freedom to calibrate routing parameters (as compared to, for example, synthetic hydrograph methods). The second is that spatial distributions of water depth and velocity can be obtained. Furthermore, interactions among the various hydrological processes can be modeled in a physically-based approach which may depend on transient and spatially distributed factors. On the other hand, the undertaken numerical approach relies on accurate terrain representation and mesh selection, which also significantly affects the computational cost of the simulations. Hence, we investigate the response of a gauged catchment with this distributed approach. The methodology consists of analyzing the effects that the mesh has on the simulations by using a range of meshes. Next, friction is applied to the model and the response to variations and interaction with the mesh is studied. Finally, a first approach with the well-known SCS Curve Number method is studied to evaluate its behavior when coupled with a shallow-water model for runoff flow. The results show that mesh selection is of great importance, since it may affect the results in a magnitude as large as physical factors, such as friction. Furthermore, results proved to be less sensitive to roughness spatial distribution than to mesh properties. Finally, the results indicate that SCS-CN may not be suitable for simulating hydrological processes together with a shallow-water model.
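
    For reference, a minimal sketch of the standard SCS Curve Number relation evaluated in the paper as the rainfall-loss model (SI form, depths in mm); this is the textbook formula, not the coupling itself.

      def scs_runoff(P, CN):
          """Direct runoff depth Q (mm) from event rainfall P (mm)."""
          S = 25400.0 / CN - 254.0      # potential maximum retention (mm)
          Ia = 0.2 * S                  # initial abstraction
          return 0.0 if P <= Ia else (P - Ia) ** 2 / (P - Ia + S)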

  5. High-accuracy self-calibration method for dual-axis rotation-modulating RLG-INS

    NASA Astrophysics Data System (ADS)

    Wei, Guo; Gao, Chunfeng; Wang, Qi; Wang, Qun; Long, Xingwu

    2017-05-01

    The inertial navigation system is a core component of both military and civil navigation systems. Dual-axis rotation modulation can completely eliminate the constant errors of the inertial elements on all three axes, improving system accuracy, but the errors caused by the misalignment angles and the scale factor error cannot be eliminated through dual-axis rotation modulation. Moreover, discrete calibration methods cannot fulfill the requirements of high-accuracy calibration for a mechanically dithered ring laser gyroscope navigation system with shock absorbers. This paper analyzes the effect of calibration error during one modulation period and presents a new systematic self-calibration method for dual-axis rotation-modulating RLG-INS, together with a procedure for carrying it out. The results of the self-calibration simulation experiment prove that this scheme can estimate all the errors in the calibration error model: the calibration precision of the inertial sensors' scale factor error is less than 1 ppm and the misalignment is less than 5″. These results validate the systematic self-calibration method and prove its importance for accuracy improvement of dual-axis rotation inertial navigation systems with mechanically dithered ring laser gyroscopes.

  6. Uncertainty propagation in the calibration equations for NTC thermistors

    NASA Astrophysics Data System (ADS)

    Liu, Guang; Guo, Liang; Liu, Chunlong; Wu, Qingwen

    2018-06-01

    The uncertainty propagation problem is quite important for temperature measurements, since we rely so much on the sensors and calibration equations. Although uncertainty propagation for platinum resistance or radiation thermometers is well known, there have been few publications concerning negative temperature coefficient (NTC) thermistors. Insight into the propagation characteristics of uncertainty that develop when equations are determined using the Lagrange interpolation or least-squares fitting method is presented here with respect to several of the most common equations used in NTC thermistor calibration. Within this work, analytical expressions of the propagated uncertainties for both fitting methods are derived for the uncertainties in the measured temperature and resistance at each calibration point. High-precision calibration of an NTC thermistor in a precision water bath was performed by means of the comparison method. Results show that, for both fitting methods, the propagated uncertainty is flat in the interpolation region but rises rapidly beyond the calibration range. Also, for temperatures interpolated between calibration points, the propagated uncertainty is generally no greater than that associated with the calibration points. For least-squares fitting, the propagated uncertainty is significantly reduced by increasing the number of calibration points and can be well kept below the uncertainty of the calibration points.
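
    One simple way to explore such propagation numerically (not the authors' analytical derivation): least-squares fit of the Steinhart-Hart form at the calibration points, then Monte Carlo perturbation of the calibration temperatures. The equation choice and uncertainty model are assumptions for illustration.

      import numpy as np

      def fit_sh(R, T):
          """Least-squares Steinhart-Hart fit: 1/T = a + b*lnR + c*(lnR)^3."""
          L = np.log(R)
          A = np.column_stack([np.ones_like(L), L, L**3])
          return np.linalg.lstsq(A, 1.0 / np.asarray(T, float), rcond=None)[0]

      def T_of_R(coef, R):
          L = np.log(R)
          return 1.0 / (coef[0] + coef[1] * L + coef[2] * L**3)

      def propagated_u(R_cal, T_cal, u_T, R_query, n=2000, seed=0):
          """Monte Carlo: perturb calibration temperatures by u_T (K, 1-sigma)."""
          rng = np.random.default_rng(seed)
          R_cal, T_cal = np.asarray(R_cal, float), np.asarray(T_cal, float)
          Ts = [T_of_R(fit_sh(R_cal, T_cal + rng.normal(0.0, u_T, T_cal.size)),
                       R_query) for _ in range(n)]
          return float(np.mean(Ts)), float(np.std(Ts))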

  7. A New Online Calibration Method Based on Lord's Bias-Correction.

    PubMed

    He, Yinhong; Chen, Ping; Li, Yong; Zhang, Shumei

    2017-09-01

    Online calibration techniques have been widely employed to calibrate new items due to their advantages. Method A is the simplest online calibration method and has attracted much attention from researchers recently. However, a key assumption of Method A is that it treats the person-parameter estimates θ̂_s (obtained by maximum likelihood estimation [MLE]) as their true values θ_s; thus the deviation of the estimated θ̂_s from their true values might yield inaccurate item calibration when the deviation is nonignorable. To improve the performance of Method A, a new method, MLE-LBCI-Method A, is proposed. This new method combines a modified Lord's bias-correction method (named maximum likelihood estimation-Lord's bias-correction with iteration [MLE-LBCI]) with the original Method A in an effort to correct the deviation of θ̂_s which may adversely affect the item calibration precision. Two simulation studies were carried out to explore the performance of both MLE-LBCI and MLE-LBCI-Method A under several scenarios. Simulation results showed that MLE-LBCI could make a significant improvement over the ML ability estimates, and MLE-LBCI-Method A did outperform Method A in almost all experimental conditions.

  8. SU-C-207A-05: Feature Based Water Equivalent Path Length (WEPL) Determination for Proton Radiography by the Technique of Time Resolved Dose Measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, R; Jee, K; Sharp, G

    Purpose: Studies show that WEPL can be determined from modulated dose rate functions (DRF). However, the previous calibration method based on statistics of the DRF is sensitive to energy mixing of protons due to scattering through different materials (termed range mixing here), causing inaccuracies in the determination of WEPL. This study explores time-domain features of the DRF to reduce the effect of range mixing in proton radiography (pRG) by this technique. Methods: An amorphous silicon flat panel (PaxScan™ 4030CB, Varian Medical Systems, Inc., Palo Alto, CA) was placed behind phantoms to measure DRFs from a proton beam modulated by a specially designed modulator wheel. The performance of two methods, the previously used method based on the root mean square (RMS) and the new approach based on time-domain features of the DRF, are compared for retrieving WEPL and RSP from pRG of a Gammex phantom. Results: Calibration by T80 (the time point for 80% of the major peak) was more robust to range mixing and produced WEPL with improved accuracy. The error of RSP was reduced from 8.2% to 1.7% for lung-equivalent material, with the mean error for all other materials reduced from 1.2% to 0.7%. The mean error of the full width at half maximum (FWHM) of retrieved inserts was decreased from 25.85% to 5.89% for the RMS and T80 methods, respectively. Monte Carlo simulations in simplified cases also demonstrated that the T80 method is less sensitive to range mixing than the RMS method. Conclusion: WEPL images have been retrieved based on single flat panel measured DRFs, with inaccuracies reduced by exploiting time-domain features as the calibration parameter. The T80 method is validated to be less sensitive to range mixing and can thus retrieve the WEPL values in proximity of interfaces with improved numerical and spatial accuracy for proton radiography.

  9. Calibration procedure of Hukseflux SR25 to Establish the Diffuse Reference for the Outdoor Broadband Radiometer Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reda, Ibrahim M.; Andreas, Afshin M.

    2017-08-01

    Accurate pyranometer calibrations, traceable to internationally recognized standards, are critical for solar irradiance measurements. One calibration method is the component summation method, where the pyranometers are calibrated outdoors under clear-sky conditions, and the reference global solar irradiance is calculated as the sum of two reference components, the diffuse horizontal and subtended beam solar irradiances. The beam component is measured with pyrheliometers traceable to the World Radiometric Reference, while there is no internationally recognized reference for the diffuse component. In the absence of such a reference, we present a method to consistently calibrate pyranometers for measuring the diffuse component. The method is based on using a modified shade/unshade method and a pyranometer with less than 0.5 W/m2 thermal offset. The calibration result shows that the responsivity of the Hukseflux SR25 pyranometer equals 10.98 µV/(W/m2) with ±0.86 percent uncertainty.
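
    A minimal sketch of the component-summation step: the reference global irradiance is the measured diffuse plus the beam projected onto the horizontal, and the responsivity is the thermopile voltage divided by that reference. Inputs are hypothetical.

      import numpy as np

      def responsivity(v_uV, diffuse, beam, zenith_deg):
          """Responsivity in uV/(W/m^2) from one clear-sky reading."""
          g_ref = diffuse + beam * np.cos(np.radians(zenith_deg))
          return v_uV / g_ref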

  10. Radiation calibration for LWIR Hyperspectral Imager Spectrometer

    NASA Astrophysics Data System (ADS)

    Yang, Zhixiong; Yu, Chunchao; Zheng, Wei-jian; Lei, Zhenggang; Yan, Min; Yuan, Xiaochun; Zhang, Peizhong

    2014-11-01

    The radiometric calibration of a LWIR hyperspectral imaging spectrometer is presented. A LWIR interferometric hyperspectral imaging spectrometer prototype (CHIPED-I) was developed to study laboratory radiometric calibration, and a two-point linear calibration was carried out for the spectrometer using blackbody sources. First, the measured relative intensity is converted to the absolute radiance of the object; then, the radiance of the object is converted to a brightness temperature spectrum by the brightness temperature method. The results indicate that this radiometric calibration method performs well.
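
    A minimal sketch of generic two-point blackbody calibration and brightness-temperature conversion: per-channel gain and offset from two blackbody views, then inversion of Planck's law. The wavelength-domain form is assumed; this is the textbook procedure, not the CHIPED-I processing chain.

      import numpy as np

      H_PL, C0, K_B = 6.626e-34, 2.998e8, 1.381e-23

      def planck(T, lam):
          """Spectral radiance (W m^-2 sr^-1 m^-1) at wavelength lam (m)."""
          return 2*H_PL*C0**2 / lam**5 / (np.exp(H_PL*C0 / (lam*K_B*T)) - 1.0)

      def two_point(s_hot, s_cold, T_hot, T_cold, lam):
          """Per-channel gain/offset from two blackbody views."""
          gain = (planck(T_hot, lam) - planck(T_cold, lam)) / (s_hot - s_cold)
          return gain, planck(T_cold, lam) - gain * s_cold

      def brightness_temp(L, lam):
          """Invert Planck's law to get the brightness temperature (K)."""
          return H_PL*C0/(lam*K_B) / np.log(1.0 + 2*H_PL*C0**2/(lam**5 * L))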

  11. Final results of the Resonance spacecraft calibration effort

    NASA Astrophysics Data System (ADS)

    Sampl, Manfred; Macher, Wolfgang; Gruber, Christian; Oswald, Thomas; Rucker, Helmut O.

    2010-05-01

    We report our dedicated analyses of electrical field sensors onboard the Resonance spacecraft with a focus on the high-frequency electric antennas. The aim of the Resonance mission is to investigate wave-particle interactions and plasma dynamics in the inner magnetosphere of the Earth, with a focus on phenomena occurring along the same field line and within the same flux tube of the Earth's magnetic field. Four spacecraft will be launched, in the middle of the next decade, to perform these observations and measurements. Amongst a variety of instruments and probes, several low- and high-frequency electric sensors will be carried which can be used for simultaneous remote sensing and in-situ measurements. The high-frequency electric sensors consist of cylindrical antennas mounted on four booms extruded from the central body of the spacecraft. In addition, the boom rods themselves are used together with these sensors for mutual impedance measurements. Due to the parasitic effects of the conducting spacecraft body, the electrical antenna representations (effective length vector, capacitances) do not coincide with their physical representations. The analysis of the reception properties of these antennas is presented, along with a contribution to the understanding of their impairment by other objects; in particular the influence of large magnetic loop sensors is studied. In order to analyse the antenna system, we applied experimental and numerical methods. The experimental method, called rheometry, is essentially an electrolytic tank measurement, where a scaled-down spacecraft model is immersed into an electrolytic medium (water) with corresponding measurements of voltages at the antennas. The numerical method consists of a numerical solution of the underlying field equations by means of computer programs, which are based on wire-grid and patch-grid models. The experimental and numerical results show that parasitic effects of the antenna-spacecraft assembly alter the antenna properties significantly. The antenna directions and lengths, represented by the "effective length vector", are altered by up to 4 degrees in direction and 50% in length in the quasi-static range. High-frequency analyses (up to 40 MHz) illustrate massive antenna pattern changes beyond the quasi-static frequency limit of approximately 1.5 MHz. In addition, we found that the magnetic loop sensors tremendously increase the effective lengths and capacitances, depending on their placement on the booms. The antenna calibration results and loop placement findings are of great benefit to the Resonance mission. In particular, goniopolarimetry techniques like polarization analysis and direction finding depend crucially on the effective axes.

  12. A proposed standard method for polarimetric calibration and calibration verification

    NASA Astrophysics Data System (ADS)

    Persons, Christopher M.; Jones, Michael W.; Farlow, Craig A.; Morell, L. Denise; Gulley, Michael G.; Spradley, Kevin D.

    2007-09-01

    Accurate calibration of polarimetric sensors is critical to reducing and analyzing phenomenology data, producing uniform polarimetric imagery for deployable sensors, and ensuring predictable performance of polarimetric algorithms. It is desirable to develop a standard calibration method, including verification reporting, in order to increase credibility with customers and foster communication and understanding within the polarimetric community. This paper seeks to facilitate discussions within the community on arriving at such standards. Both the calibration and verification methods presented here are performed easily with common polarimetric equipment, and are applicable to visible and infrared systems with either partial Stokes or full Stokes sensitivity. The calibration procedure has been used on infrared and visible polarimetric imagers over a six year period, and resulting imagery has been presented previously at conferences and workshops. The proposed calibration method involves the familiar calculation of the polarimetric data reduction matrix by measuring the polarimeter's response to a set of input Stokes vectors. With this method, however, linear combinations of Stokes vectors are used to generate highly accurate input states. This allows the direct measurement of all system effects, in contrast with fitting modeled calibration parameters to measured data. This direct measurement of the data reduction matrix allows higher order effects that are difficult to model to be discovered and corrected for in calibration. This paper begins with a detailed tutorial on the proposed calibration and verification reporting methods. Example results are then presented for a LWIR rotating half-wave retarder polarimeter.
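
    A small sketch of the data-reduction-matrix computation common to such calibrations: estimate the instrument matrix from responses to known input Stokes vectors, then pseudo-invert it. This illustrates the general linear algebra, not the paper's specific generation of input states from linear combinations.

      import numpy as np

      def data_reduction_matrix(S_in, M_meas):
          """S_in: (4, n) known input Stokes vectors; M_meas: (n_ch, n) raw
          channel responses. Returns W such that s_hat = W @ m."""
          X = M_meas @ np.linalg.pinv(S_in)    # instrument matrix (n_ch x 4)
          return np.linalg.pinv(X)             # data-reduction matrix (4 x n_ch)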

  13. Calibration and validation of a small-scale urban surface water flood event using crowdsourced images

    NASA Astrophysics Data System (ADS)

    Green, Daniel; Yu, Dapeng; Pattison, Ian

    2017-04-01

    Surface water flooding occurs when intense precipitation events overwhelm the drainage capacity of an area and excess overland flow is unable to infiltrate into the ground or drain via natural or artificial drainage channels, such as river channels, manholes or SuDS. In the UK, over 3 million properties are at risk from surface water flooding alone, accounting for approximately one third of the UK's flood risk. The risk of surface water flooding is projected to increase due to several factors, including population increases, land-use alterations and future climatic changes in precipitation resulting in an increased magnitude and frequency of intense precipitation events. Numerical inundation modelling is a well-established method of investigating surface water flood risk, allowing the researcher to gain a detailed understanding of the depth, velocity, discharge and extent of actual or hypothetical flood scenarios over a wide range of spatial scales. However, numerical models require calibration of key hydrological and hydraulic parameters (e.g. infiltration, evapotranspiration, drainage rate, roughness) to ensure model outputs adequately represent the flood event being studied. Furthermore, validation data such as crowdsourced images or spatially-referenced flood depths collected during a flood event may provide a useful validation of inundation depth and extent for actual flood events. In this study, a simplified two-dimensional inertial based flood inundation model requiring minimal pre-processing of data (FloodMap-HydroInundation) was used to model a short-duration, intense rainfall event (27.8 mm in 15 minutes) that occurred over the Loughborough University campus on the 28th June 2012. High resolution (1 m horizontal, +/- 15 cm vertical) DEM data, rasterised Ordnance Survey topographic structures data and precipitation data recorded at the University weather station were used to conduct numerical modelling over the small (< 2 km2), contained urban catchment. To validate model outputs and allow a reconstruction of spatially referenced flood depth and extent during the flood event, crowdsourced images were obtained from social media (Twitter) and from individuals present during the flood event via the University noticeboards, as well as using dGPS flood depth data collected at one of the worst affected areas. An investigation into the sensitivity of key model parameters suggests that the numerical model code is highly sensitive to changes within the recommended range of roughness and infiltration values, as well as changes in DEM and building mesh resolutions, but less sensitive to changes in evapotranspiration and drainage capacity parameters. The study also demonstrates the potential of using crowdsourced images to validate urban surface water flood models and inform parameterisation when calibrating numerical inundation models.

  14. Backward-gazing method for measuring solar concentrators shape errors.

    PubMed

    Coquand, Mathieu; Henault, François; Caliot, Cyril

    2017-03-01

    This paper describes a backward-gazing method for measuring the optomechanical errors of solar concentrating surfaces. It makes use of four cameras placed near the solar receiver that simultaneously record images of the sun reflected by the optical surfaces. Simple data processing then allows reconstructing the slope and shape errors of the surfaces. The originality of the method rests on the use of generalized quad-cell formulas and approximate mathematical relations between the slope errors of the mirrors and their reflected wavefront in the case of sun-tracking heliostats at high incidence angles. Numerical simulations demonstrate that the measurement accuracy is compliant with standard requirements of solar concentrating optics in the presence of noise or calibration errors. The method is suited to fine characterization of the optical and mechanical errors of heliostats and their facets, or to providing better control for real-time sun tracking.
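
    The quad-cell idea at the heart of the slope retrieval can be illustrated with a toy example. The sketch below applies the classic (non-generalized) quad-cell formula to a synthetic spot image; the gain factor that converts the normalized signal to an angle is an assumption that would come from calibration, and the generalized formulas of the paper are not reproduced here.

        import numpy as np

        def quad_cell_slopes(img, gain=1.0):
            # Sum the intensity in the four quadrants of the image
            h, w = img.shape
            a = img[:h // 2, :w // 2].sum()   # top-left
            b = img[:h // 2, w // 2:].sum()   # top-right
            c = img[h // 2:, :w // 2].sum()   # bottom-left
            d = img[h // 2:, w // 2:].sum()   # bottom-right
            total = a + b + c + d
            sx = gain * ((b + d) - (a + c)) / total   # horizontal slope signal
            sy = gain * ((a + b) - (c + d)) / total   # vertical slope signal
            return sx, sy

        # Toy usage: a Gaussian spot displaced from the cell centre
        y, x = np.mgrid[-32:32, -32:32]
        spot = np.exp(-((x - 4.0)**2 + (y + 2.0)**2) / 50.0)
        print(quad_cell_slopes(spot))   # positive sx and sy for this offset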

  15. Optimization of groundwater sampling approach under various hydrogeological conditions using a numerical simulation model

    NASA Astrophysics Data System (ADS)

    Qi, Shengqi; Hou, Deyi; Luo, Jian

    2017-09-01

    This study presents a numerical model based on field data to simulate groundwater flow in both the aquifer and the well-bore for the low-flow sampling method and the well-volume sampling method. The numerical model was calibrated to match field drawdown observations, and the calculated flow regime in the well was used to predict the variation of dissolved oxygen (DO) concentration during the purging period. The model was then used to analyze sampling representativeness and sampling time. Site characteristics, such as aquifer hydraulic conductivity, and sampling choices, such as purging rate and screen length, were found to be significant determinants of sampling representativeness and required sampling time. Results demonstrated that: (1) DO was the most useful water quality indicator in ensuring groundwater sampling representativeness in comparison with turbidity, pH, specific conductance, oxidation reduction potential (ORP) and temperature; (2) it is not necessary to maintain a drawdown of less than 0.1 m when conducting low-flow purging. However, a high purging rate in a low permeability aquifer may result in a dramatic decrease in sampling representativeness after an initial peak; (3) the presence of a short screen length may result in greater drawdown and a longer sampling time for low-flow purging. Overall, the present study suggests that this new numerical model is suitable for describing groundwater flow during the sampling process, and can be used to optimize sampling strategies under various hydrogeological conditions.

  16. Problems With Risk Reclassification Methods for Evaluating Prediction Models

    PubMed Central

    Pepe, Margaret S.

    2011-01-01

    For comparing the performance of a baseline risk prediction model with one that includes an additional predictor, a risk reclassification analysis strategy has been proposed. The first step is to cross-classify risks calculated according to the 2 models for all study subjects. Summary measures including the percentage of reclassification and the percentage of correct reclassification are calculated, along with 2 reclassification calibration statistics. The author shows that interpretations of the proposed summary measures and P values are problematic. The author's recommendation is to display the reclassification table, because it shows interesting information, but to use alternative methods for summarizing and comparing model performance. The Net Reclassification Index has been suggested as one alternative method. The author argues for reporting components of the Net Reclassification Index because they are more clinically relevant than is the single numerical summary measure. PMID:21555714
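
    The recommendation to report the components of the Net Reclassification Index, rather than the single summary, is easy to operationalize. The following minimal Python sketch (with made-up risks) computes the event and non-event components separately.

        import numpy as np

        def nri_components(risk_old, risk_new, event):
            """Event and non-event components of the Net Reclassification Index."""
            event = np.asarray(event, dtype=bool)
            up = risk_new > risk_old
            down = risk_new < risk_old
            # Among events, upward reclassification is correct; among
            # non-events, downward reclassification is correct.
            nri_events = up[event].mean() - down[event].mean()
            nri_nonevents = down[~event].mean() - up[~event].mean()
            return nri_events, nri_nonevents   # report both, not just their sum

        # Toy example with fabricated risks from a baseline and an expanded model
        risk_old = np.array([0.1, 0.2, 0.4, 0.7, 0.3, 0.6])
        risk_new = np.array([0.2, 0.1, 0.5, 0.8, 0.2, 0.4])
        event = np.array([1, 0, 1, 1, 0, 0])
        print(nri_components(risk_old, risk_new, event))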

  17. An adaptable parallel algorithm for the direct numerical simulation of incompressible turbulent flows using a Fourier spectral/hp element method and MPI virtual topologies

    NASA Astrophysics Data System (ADS)

    Bolis, A.; Cantwell, C. D.; Moxey, D.; Serson, D.; Sherwin, S. J.

    2016-09-01

    A hybrid parallelisation technique for distributed memory systems is investigated for a coupled Fourier-spectral/hp element discretisation of domains characterised by geometric homogeneity in one or more directions. The performance of the approach is mathematically modelled in terms of operation count and communication costs for identifying the most efficient parameter choices. The model is calibrated to target a specific hardware platform after which it is shown to accurately predict the performance in the hybrid regime. The method is applied to modelling turbulent flow using the incompressible Navier-Stokes equations in an axisymmetric pipe and square channel. The hybrid method extends the practical limitations of the discretisation, allowing greater parallelism and reduced wall times. Performance is shown to continue to scale when both parallelisation strategies are used.

  18. The Impact of Indoor and Outdoor Radiometer Calibration on Solar Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, Aron; Sengupta, Manajit; Andreas, Afshin

    2016-06-02

    This study addresses the effect of calibration methodologies on calibration responsivities and the resulting impact on radiometric measurements. The calibration responsivities used in this study are provided by NREL's broadband outdoor radiometer calibration (BORCAL) and a few prominent manufacturers. The BORCAL method provides outdoor calibration responsivity of pyranometers and pyrheliometers at a 45 degree solar zenith angle and responsivity as a function of solar zenith angle determined by clear-sky comparisons to reference irradiance. The BORCAL method also employs a thermal offset correction to the calibration responsivity of single-black thermopile detectors used in pyranometers. Indoor calibrations of radiometers by their manufacturers are performed using a stable artificial light source in a side-by-side comparison of the test radiometer under calibration to a reference radiometer of the same type. These different methods of calibration demonstrated 1% to 2% differences in solar irradiance measurement. Analyzing these values will ultimately enable a reduction in radiometric measurement uncertainties and assist in developing consensus on a standard for calibration.

  19. Comparisons of Particle Tracking Techniques and Galerkin Finite Element Methods in Flow Simulations on Watershed Scales

    NASA Astrophysics Data System (ADS)

    Shih, D.; Yeh, G.

    2009-12-01

    This paper applies two numerical approximations, the particle tracking technique and the Galerkin finite element method, to solve the diffusive wave equation in both one-dimensional and two-dimensional flow simulations. The finite element method is one of the most commonly used approaches in numerical modelling. It can obtain accurate solutions, but calculation times may be rather extensive. The particle tracking technique, using either single-velocity or average-velocity tracks to efficiently perform advective transport, can use larger time-step sizes than the finite element method and thus significantly reduce computational time. Comparisons of the alternative approximations are examined in this poster. We adapt the model WASH123D to examine the work. WASH123D, an integrated multimedia, multi-process, physics-based computational model suitable for various spatial-temporal scales, was first developed by Yeh et al. in 1998. The model has evolved in design capability and flexibility, and has been used for model calibrations and validations over the course of many years. In order to deliver a local hydrological model for Taiwan, the Taiwan Typhoon and Flood Research Institute (TTFRI) is working with Prof. Yeh to develop the next version of WASH123D. The work of our preliminary cooperation is also sketched in this poster.

  20. Calibration of the island effect: Experimental validation of closed-loop focal plane wavefront control on Subaru/SCExAO

    NASA Astrophysics Data System (ADS)

    N'Diaye, M.; Martinache, F.; Jovanovic, N.; Lozi, J.; Guyon, O.; Norris, B.; Ceau, A.; Mary, D.

    2018-02-01

    Context. Island effect (IE) aberrations are induced by differential pistons, tips, and tilts between neighboring pupil segments on ground-based telescopes, which severely limit the observations of circumstellar environments on the recently deployed exoplanet imagers (e.g., VLT/SPHERE, Gemini/GPI, Subaru/SCExAO) during the best observing conditions. Caused by air temperature gradients at the level of the telescope spiders, these aberrations were recently diagnosed with success on VLT/SPHERE, but so far no complete calibration has been performed to overcome this issue. Aims: We propose closed-loop focal plane wavefront control based on the asymmetric Fourier pupil wavefront sensor (APF-WFS) to calibrate these aberrations and improve the image quality of exoplanet high-contrast instruments in the presence of the IE. Methods: Assuming the archetypal four-quadrant aperture geometry in 8 m class telescopes, we describe these aberrations as a sum of the independent modes of piston, tip, and tilt that are distributed in each quadrant of the telescope pupil. We calibrate these modes with the APF-WFS before introducing our wavefront control for closed-loop operation. We perform numerical simulations and then experimental tests on a real system using Subaru/SCExAO to validate our control loop in the laboratory and on-sky. Results: Closed-loop operation with the APF-WFS enables the compensation for the IE in simulations and in the laboratory for the small aberration regime. Based on a calibration in the near infrared, we observe an improvement of the image quality in the visible range on the SCExAO/VAMPIRES module with a relative increase in the image Strehl ratio of 37%. Conclusions: Our first IE calibration paves the way for maximizing the science operations of the current exoplanet imagers. Such an approach and its results also prove very promising in light of the Extremely Large Telescopes (ELTs) and the presence of similar artifacts with their complex aperture geometry.
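
    The 12-mode description of the island effect (piston, tip, and tilt on each of four pupil quadrants) is straightforward to construct numerically. The sketch below builds such a mode basis on a simple circular pupil with thin spiders; the dimensions and spider width are arbitrary assumptions, and the actual SCExAO pupil and sensing chain are of course more involved.

        import numpy as np

        def island_effect_modes(n=128, spider_px=4):
            """12 island-effect modes on an n x n circular pupil with thin spiders."""
            y, x = (np.mgrid[0:n, 0:n] - (n - 1) / 2) / (n / 2)
            pupil = (x**2 + y**2) <= 1.0
            pupil &= (np.abs(x) > spider_px / n) & (np.abs(y) > spider_px / n)
            quadrants = [(x > 0) & (y > 0), (x < 0) & (y > 0),
                         (x < 0) & (y < 0), (x > 0) & (y < 0)]
            modes = []
            for q in quadrants:
                mask = pupil & q
                for basis in (np.ones_like(x), x, y):     # piston, tip, tilt
                    m = np.where(mask, basis, 0.0)
                    m /= np.sqrt((m**2)[mask].mean())     # unit RMS over the quadrant
                    modes.append(m)
            return pupil, modes

        pupil, modes = island_effect_modes()
        print(len(modes))   # 12 modes to sense and drive in closed loop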

  1. The Role of Wakes in Modelling Tidal Current Turbines

    NASA Astrophysics Data System (ADS)

    Conley, Daniel; Roc, Thomas; Greaves, Deborah

    2010-05-01

    The eventual proper development of arrays of Tidal Current Turbines (TCT) will require a balance which maximizes power extraction while minimizing environmental impacts. Idealized analytical analogues and simple 2-D models are useful tools for investigating questions of a general nature but do not represent a practical tool for application to realistic cases. Some form of 3-D numerical simulations will be required for such applications and the current project is designed to develop a numerical decision-making tool for use in planning large scale TCT projects. The project is predicated on the use of an existing regional ocean modelling framework (the Regional Ocean Modelling System - ROMS) which is modified to enable the user to account for the effects of TCTs. In such a framework where mixing processes are highly parametrized, the fidelity of the quantitative results is critically dependent on the parameter values utilized. In light of the early stage of TCT development and the lack of field scale measurements, the calibration of such a model is problematic. In the absence of explicit calibration data sets, the device wake structure has been identified as an efficient feature for model calibration. This presentation will discuss efforts to design an appropriate calibration scheme focused on wake decay; the motivation for this approach, the techniques applied, validation results from simple test cases, and limitations will be presented.

  2. Model Robust Calibration: Method and Application to Electronically-Scanned Pressure Transducers

    NASA Technical Reports Server (NTRS)

    Walker, Eric L.; Starnes, B. Alden; Birch, Jeffery B.; Mays, James E.

    2010-01-01

    This article presents the application of a recently developed statistical regression method to the controlled instrument calibration problem. The statistical method of Model Robust Regression (MRR), developed by Mays, Birch, and Starnes, is shown to improve instrument calibration by reducing the reliance of the calibration on a predetermined parametric (e.g. polynomial, exponential, logarithmic) model. This is accomplished by allowing fits from the predetermined parametric model to be augmented by a certain portion of a fit to the residuals from the initial regression using a nonparametric (locally parametric) regression technique. The method is demonstrated for the absolute scale calibration of silicon-based pressure transducers.
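
    The MRR idea (augmenting a parametric fit with a weighted nonparametric fit to its residuals) can be sketched compactly. In the Python toy below, a simple Nadaraya-Watson kernel smoother stands in for the locally parametric fit, and the mixing weight lam is fixed by hand; both are assumptions for illustration, since the original method makes these choices more carefully.

        import numpy as np

        def kernel_smooth(x, y, x_eval, bw=0.5):
            # Nadaraya-Watson smoother with a Gaussian kernel
            w = np.exp(-0.5 * ((x_eval[:, None] - x[None, :]) / bw)**2)
            return (w * y).sum(axis=1) / w.sum(axis=1)

        # Synthetic calibration data: quadratic trend plus a local feature the
        # parametric model cannot capture
        rng = np.random.default_rng(1)
        x = np.linspace(0, 10, 80)
        y = 0.5 * x**2 + 2.0 * np.exp(-(x - 5)**2) + rng.normal(0, 0.2, x.size)

        y_par = np.polyval(np.polyfit(x, y, 2), x)    # step 1: parametric fit
        resid_fit = kernel_smooth(x, y - y_par, x)    # step 2: smooth the residuals
        lam = 0.8                                     # mixing weight (chosen by CV in practice)
        y_mrr = y_par + lam * resid_fit               # step 3: augmented prediction
        print(np.abs(y - y_par).mean(), np.abs(y - y_mrr).mean())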

  3. Node-to-node field calibration of wireless distributed air pollution sensor network.

    PubMed

    Kizel, Fadi; Etzion, Yael; Shafran-Nathan, Rakefet; Levy, Ilan; Fishbain, Barak; Bartonova, Alena; Broday, David M

    2018-02-01

    Low-cost air quality sensors offer high-resolution spatiotemporal measurements that can be used for air resources management and exposure estimation. Yet, such sensors require frequent calibration to provide reliable data, since even after a laboratory calibration they might not report correct values when they are deployed in the field, due to interference with other pollutants, as a result of sensitivity to environmental conditions and due to sensor aging and drift. Field calibration has been suggested as a means for overcoming these limitations, with the common strategy involving periodical collocations of the sensors at an air quality monitoring station. However, the cost and complexity involved in relocating numerous sensor nodes back and forth, and the loss of data during the repeated calibration periods make this strategy inefficient. This work examines an alternative approach, a node-to-node (N2N) calibration, where only one sensor in each chain is directly calibrated against the reference measurements and the rest of the sensors are calibrated sequentially one against the other while they are deployed and collocated in pairs. The calibration can be performed multiple times as a routine procedure. This procedure minimizes the total number of sensor relocations, and enables calibration while simultaneously collecting data at the deployment sites. We studied N2N chain calibration and the propagation of the calibration error analytically, computationally and experimentally. The in-situ N2N calibration is shown to be generic and applicable for different pollutants, sensing technologies, sensor platforms, chain lengths, and sensor order within the chain. In particular, we show that chain calibration of three nodes, each calibrated for a week, propagate calibration errors that are similar to those found in direct field calibration. Hence, N2N calibration is shown to be suitable for calibration of distributed sensor networks. Copyright © 2017 Elsevier Ltd. All rights reserved.
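
    A small simulation makes the N2N chain idea and its error propagation tangible. The Python sketch below calibrates sensor 0 directly against a reference and then calibrates each subsequent sensor against the previous calibrated one; the linear sensor model, gains, and noise levels are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(2)
        truth = rng.uniform(20, 80, size=500)          # shared pollutant signal

        def read(gain, offset, noise=1.0):
            return gain * truth + offset + rng.normal(0, noise, truth.size)

        # A chain of three sensors; only sensor 0 is collocated with the reference
        gains, offsets = [1.10, 0.95, 1.05], [3.0, -2.0, 1.5]
        raw = [read(g, o) for g, o in zip(gains, offsets)]

        a0, b0 = np.polyfit(raw[0], truth, 1)          # direct calibration
        cal = [a0 * raw[0] + b0]
        for k in (1, 2):                               # N2N: pairwise, in sequence
            a, b = np.polyfit(raw[k], cal[k - 1], 1)
            cal.append(a * raw[k] + b)

        for k, c in enumerate(cal):
            rmse = np.sqrt(np.mean((c - truth)**2))
            print(f"sensor {k}: RMSE vs truth = {rmse:.2f}")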

  4. Spectral characterization and calibration of AOTF spectrometers and hyper-spectral imaging system

    NASA Astrophysics Data System (ADS)

    Katrašnik, Jaka; Pernuš, Franjo; Likar, Boštjan

    2010-02-01

    The goal of this article is to present a novel method for spectral characterization and calibration of spectrometers and hyper-spectral imaging systems based on non-collinear acousto-optic tunable filters (AOTFs). The method characterizes the spectral tuning curve (frequency-wavelength characteristic) of the AOTF by matching the acquired and modeled spectra of an HgAr calibration lamp, which emits a line spectrum that can be well modeled via the AOTF transfer function. In this way, not only tuning curve characterization and corresponding spectral calibration but also spectral resolution assessment is performed. The obtained results indicated that the proposed method is efficient, accurate and feasible for routine calibration of AOTF spectrometers and hyper-spectral imaging systems, and thereby a highly competitive alternative to existing calibration methods.

  5. The Calibration of AVHRR/3 Visible Dual Gain Using Meteosat-8 as a MODIS Calibration Transfer Medium

    NASA Technical Reports Server (NTRS)

    Avey, Lance; Garber, Donald; Nguyen, Louis; Minnis, Patrick

    2007-01-01

    This viewgraph presentation reviews the NOAA-17 AVHRR visible channels calibrated against MET-8/MODIS using dual gain regression methods. The topics include: 1) Motivation; 2) Methodology; 3) Dual Gain Regression Methods; 4) Examples of Regression methods; 5) AVHRR/3 Regression Strategy; 6) Cross-Calibration Method; 7) Spectral Response Functions; 8) MET8/NOAA-17; 9) Example of gain ratio adjustment; 10) Effect of mixed low/high count FOV; 11) Monitor dual gains over time; and 12) Conclusions

  6. A Novel Protocol for Model Calibration in Biological Wastewater Treatment

    PubMed Central

    Zhu, Ao; Guo, Jianhua; Ni, Bing-Jie; Wang, Shuying; Yang, Qing; Peng, Yongzhen

    2015-01-01

    Activated sludge models (ASMs) have been widely used for process design, operation and optimization in wastewater treatment plants. However, it is still a challenge to achieve an efficient calibration for reliable application using conventional approaches. Hereby, we propose a novel calibration protocol, the Numerical Optimal Approaching Procedure (NOAP), for the systematic calibration of ASMs. The NOAP consists of three key steps in an iterative scheme flow: i) global factor sensitivity analysis for factor fixing; ii) pseudo-global parameter correlation analysis for detection of non-identifiable factors; and iii) formation of a parameter subset through estimation using a genetic algorithm. The validity and applicability are confirmed using experimental data obtained from two independent wastewater treatment systems, including a sequencing batch reactor and a continuous stirred-tank reactor. The results indicate that the NOAP can effectively determine the optimal parameter subset and successfully perform model calibration and validation for these two different systems. The proposed NOAP is expected to be used for automatic calibration of ASMs and can potentially be applied to other ordinary differential equation models. PMID:25682959

  7. Comparison between different direct search optimization algorithms in the calibration of a distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Campo, Lorenzo; Castelli, Fabio; Caparrini, Francesca

    2010-05-01

    Modern distributed hydrological models allow the representation of different surface and subsurface phenomena with great accuracy and high spatial and temporal resolution. Such complexity requires, in general, an equally accurate parametrization. A number of approaches have been followed in this respect, from simple local search methods (like the Nelder-Mead algorithm), which minimize a cost function representing some distance between the model's output and available measurements, to more complex approaches like dynamic filters (such as the Ensemble Kalman Filter) that carry out assimilation of the observations. In this work the first approach was followed in order to compare the performances of three different direct search algorithms on the calibration of a distributed hydrological balance model. The direct search family can be defined as the category of algorithms that make no use of derivatives of the cost function (which is, in general, a black box) and comprises a large number of possible approaches. The main benefit of this class of methods is that they don't require changes in the implementation of the numerical codes to be calibrated. The first algorithm is the classical Nelder-Mead, often used in many applications and utilized as a reference. The second algorithm is a GSS (Generating Set Search) algorithm, built in order to guarantee the conditions of global convergence and suitable for the parallel and multi-start implementation presented here. The third is the EGO algorithm (Efficient Global Optimization), which is particularly suitable for calibrating black-box cost functions that require expensive computational resources (like a hydrological simulation). EGO minimizes the number of evaluations of the cost function by balancing the need to minimize a response surface that approximates the problem and the need to improve the approximation by sampling where prediction error may be high. The hydrological model to be calibrated was MOBIDIC, a complete distributed balance model developed at the Department of Civil and Environmental Engineering of the University of Florence. A discussion of the comparative effectiveness of the different algorithms on case studies of basins in Central Italy is provided.
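
    As a minimal illustration of the direct-search setting, the sketch below calibrates a two-parameter black-box model with SciPy's Nelder-Mead implementation. The model and observations are invented stand-ins for an expensive hydrological run; GSS and EGO are not reproduced here.

        import numpy as np
        from scipy.optimize import minimize

        observed = np.array([1.2, 2.9, 5.3, 8.8, 13.1])   # fabricated observations

        def model_output(params):
            # Cheap stand-in for a distributed hydrological simulation
            a, b = params
            t = np.arange(1, 6)
            return a * t + b * t**2

        def cost(params):
            # Black-box misfit: no derivatives are needed by the optimizer
            return np.sum((model_output(params) - observed)**2)

        res = minimize(cost, x0=[1.0, 0.1], method="Nelder-Mead",
                       options={"xatol": 1e-6, "fatol": 1e-8})
        print(res.x, res.fun)   # calibrated parameters and residual misfit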

  8. Prediction of CP and starch concentrations in ruminal in situ studies and ruminal degradation of cereal grains using NIRS.

    PubMed

    Krieg, J; Koenzen, E; Seifried, N; Steingass, H; Schenkel, H; Rodehutscord, M

    2018-03-01

    Ruminal in situ incubations are widely used to assess the nutritional value of feedstuffs for ruminants. In in situ methods, feed samples are ruminally incubated in indigestible bags over a predefined timespan and the disappearance of nutrients from the bags is recorded. To describe the degradation of specific nutrients, information on the concentration of feed samples and undegraded feed after in situ incubation ('bag residues') is needed. For cereal and pea grains, CP and starch (ST) analyses are of interest. The numerous analyses of residues following ruminal incubation contribute greatly to the substantial investments in labour and money, and faster methods would be beneficial. Therefore, calibrations were developed to estimate CP and ST concentrations in grains and bag residues following in situ incubations by using their near-infrared spectra recorded from 680 to 2500 nm. The samples comprised rye, triticale, barley, wheat, and maize grains (20 genotypes each), and 15 durum wheat and 13 pea grains. In addition, residues after ruminal incubation were included (at least from four samples per species for various incubation times). To establish CP and ST calibrations, 620 and 610 samples (grains and bag residues after incubation, respectively) were chemically analysed for their CP and ST concentrations. Calibrations using wavelengths from 1250 to 2450 nm and the first derivative of the spectra produced the best results (R² of validation = 0.99 for CP and ST; standard errors of prediction = 0.47 and 2.10% DM for CP and ST, respectively). Hence, CP and ST concentrations in cereal grains and peas and their bag residues could be predicted with high precision by NIRS for use in in situ studies. No differences were found between the effective ruminal degradation calculated from NIRS estimations and those calculated from chemical analyses (P > 0.70). Calibrations were also calculated to predict ruminal degradation kinetics of cereal grains from the spectra of ground grains. Estimation of the effective ruminal degradation of CP and ST from the near-infrared spectra of cereal grains showed promising results (R² > 0.90), but the database needs to be extended to obtain more stable calibrations for routine use.
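
    The preprocessing pipeline described (first derivative of the spectra followed by a multivariate calibration) can be sketched as follows. The Python fragment uses a Savitzky-Golay derivative and a ridge-regularized linear fit as a stand-in for the chemometric regression actually used; spectra, concentrations, and all settings are synthetic.

        import numpy as np
        from scipy.signal import savgol_filter

        rng = np.random.default_rng(3)
        wl = np.arange(1250, 2451, 2.0)                  # wavelength grid, nm
        n = 60
        conc = rng.uniform(8, 16, n)                     # synthetic CP, % DM
        peak = np.exp(-((wl - 2050) / 60.0)**2)          # pretend absorption feature
        spectra = (conc[:, None] * peak
                   + rng.normal(0, 0.05, (n, wl.size))
                   + rng.uniform(0, 2, n)[:, None])      # additive baseline shifts

        # First derivative suppresses the baseline, as in the described calibration
        d1 = savgol_filter(spectra, window_length=11, polyorder=2, deriv=1, axis=1)

        # Ridge-regularized linear calibration (a simple stand-in for PLS)
        X = np.hstack([d1, np.ones((n, 1))])
        beta = np.linalg.solve(X.T @ X + 1e-3 * np.eye(X.shape[1]), X.T @ conc)
        fit_err = np.sqrt(np.mean((X @ beta - conc)**2))
        print(f"fit error ~ {fit_err:.3f} % DM")         # a real SEP needs a held-out set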

  9. Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units.

    PubMed

    Cai, Qingzhong; Yang, Gongliu; Song, Ningfang; Liu, Yiliang

    2016-06-22

    An inertial navigation system (INS) has been widely used in challenging GPS environments. With the rapid development of modern physics, an atomic gyroscope will come into use in the near future with a predicted accuracy of 5 × 10⁻⁶ °/h or better. However, existing calibration methods and devices cannot satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors and lever arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can realize the estimation of all the parameters using a common dual-axis turntable. Laboratory and sailing tests prove that the position accuracy over five days of inertial navigation can be improved by about 8% with the proposed calibration method. The accuracy can be improved by at least 20% when the position accuracy of the atomic gyro INS reaches a level of 0.1 nautical miles/5 d. Compared with the existing calibration methods, the proposed method, with more error sources and high-order small error parameters calibrated for ultra-high accuracy inertial measurement units (IMUs) using common turntables, has great application potential in future atomic gyro INSs.

  10. IMU-based online kinematic calibration of robot manipulator.

    PubMed

    Du, Guanglong; Zhang, Ping

    2013-01-01

    Robot calibration is a useful diagnostic method for improving the positioning accuracy in robot production and maintenance. An online robot self-calibration method based on an inertial measurement unit (IMU) is presented in this paper. The method requires that the IMU is rigidly attached to the robot manipulator, which makes it possible to obtain the orientation of the manipulator from the orientation of the IMU in real time. This paper proposes an efficient approach which incorporates the Factored Quaternion Algorithm (FQA) and a Kalman Filter (KF) to estimate the orientation of the IMU. Then, an Extended Kalman Filter (EKF) is used to estimate kinematic parameter errors. Using this proposed orientation estimation method results in improved reliability and accuracy in determining the orientation of the manipulator. Compared with existing vision-based self-calibration methods, the great advantage of this method is that it does not need complex steps such as camera calibration, image capture, and corner detection, which makes the robot calibration procedure more autonomous in a dynamic manufacturing environment. Experimental studies on a GOOGOL GRB3016 robot show that this method has better accuracy, convenience, and effectiveness than vision-based methods.

  11. SU-E-I-38: Improved Metal Artifact Correction Using Adaptive Dual Energy Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, X; Elder, E; Roper, J

    2015-06-15

    Purpose: The empirical dual energy calibration (EDEC) method corrects for beam-hardening artifacts, but shows limited performance on metal artifact correction. In this work, we propose an adaptive dual energy calibration (ADEC) method to correct for metal artifacts. Results: Highly attenuating copper rods cause severe streaking artifacts on standard CT images. EDEC improves the image quality, but cannot eliminate the streaking artifacts. Compared to EDEC, the proposed ADEC method further reduces the streaking resulting from metallic inserts and beam-hardening effects and obtains material decomposition images with significantly improved accuracy. Conclusion: ADEC is evaluated with the Shepp-Logan phantom and shows superior metal artifact correction performance. In the future, we will further evaluate the performance of the proposed method with phantom and patient data.

  12. Accurate Determination of the Frequency Response Function of Submerged and Confined Structures by Using PZT-Patches†.

    PubMed

    Presas, Alexandre; Valentin, David; Egusquiza, Eduard; Valero, Carme; Egusquiza, Mònica; Bossio, Matias

    2017-03-22

    Accurately determining the dynamic response of a structure is of considerable interest in many engineering applications. Particularly, it is of paramount importance to determine the Frequency Response Function (FRF) for structures subjected to dynamic loads in order to avoid resonance and fatigue problems that can drastically reduce their useful life. One challenging case is the experimental determination of the FRF of submerged and confined structures, such as hydraulic turbines, which are greatly affected by dynamic problems as reported in many cases in the past. The utilization of classical and calibrated exciters such as instrumented hammers or shakers to determine the FRF in such structures can be very complex due to the confinement of the structure and because their use can disturb the boundary conditions, affecting the experimental results. For such cases, Piezoelectric Patches (PZTs), which are very light, thin and small, could be a very good option. Nevertheless, the main drawback of these exciters is that their calibration as dynamic force transducers (voltage/force relationship) has not been successfully obtained in the past. Therefore, in this paper, a method to accurately determine the FRF of submerged and confined structures by using PZTs is developed and validated. The method consists of experimentally determining some characteristic parameters that define the FRF, with an uncalibrated PZT exciting the structure. These parameters, which have been experimentally determined, are then introduced in a validated numerical model of the tested structure. In this way, the FRF of the structure can be estimated with good accuracy. With respect to previous studies, where only the natural frequencies and mode shapes were considered, this paper discusses and experimentally proves the best excitation characteristics for obtaining the damping ratios as well, and proposes a procedure to fully determine the FRF. The method proposed here has been validated for the structure vibrating in air by comparing the FRF experimentally obtained with a calibrated exciter (impact hammer) and the FRF obtained with the described method. Finally, the same methodology has been applied to the structure submerged and close to a rigid wall, where it is extremely important not to modify the boundary conditions for an accurate determination of the FRF. As experimentally shown in this paper, in such cases, the use of PZTs combined with the proposed methodology gives much more accurate estimations of the FRF than other calibrated exciters typically used for the same purpose. Therefore, the validated methodology proposed in this paper can be used to obtain the FRF of a generic submerged and confined structure, without a previous calibration of the PZT.
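
    Once a suitable excitation is applied, the FRF itself is typically estimated from the cross- and auto-spectra of the excitation and response signals. The sketch below applies the standard H1 estimator to a synthetic single-mode structure; the resonance, damping, and noise level are invented, and the PZT-specific parameter identification of the paper is not reproduced.

        import numpy as np
        from scipy.signal import csd, welch

        rng = np.random.default_rng(4)
        fs = 2048.0
        t = np.arange(0, 30, 1 / fs)
        excitation = rng.normal(size=t.size)            # broadband drive signal

        # Synthetic single-mode structure: 120 Hz resonance, 1% damping
        wn, zeta = 2 * np.pi * 120.0, 0.01
        f = np.fft.rfftfreq(t.size, 1 / fs)
        H = 1.0 / (wn**2 - (2 * np.pi * f)**2 + 2j * zeta * wn * (2 * np.pi * f))
        response = np.fft.irfft(np.fft.rfft(excitation) * H, t.size)
        response += rng.normal(0, 1e-7, t.size)         # measurement noise

        # H1 estimator: FRF = cross-spectrum / input auto-spectrum
        freq, Sxy = csd(excitation, response, fs=fs, nperseg=4096)
        _, Sxx = welch(excitation, fs=fs, nperseg=4096)
        frf = Sxy / Sxx
        print(f"identified resonance ~ {freq[np.argmax(np.abs(frf))]:.1f} Hz")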

  13. A holistic calibration method with iterative distortion compensation for stereo deflectometry

    NASA Astrophysics Data System (ADS)

    Xu, Yongjia; Gao, Feng; Zhang, Zonghua; Jiang, Xiangqian

    2018-07-01

    This paper presents a novel holistic calibration method for a stereo deflectometry system to improve the system measurement accuracy. The reconstruction result of stereo deflectometry is integrated with the calculated normal data of the measured surface. The calculation accuracy of the normal data is seriously influenced by the calibration accuracy of the geometrical relationship of the stereo deflectometry system. Conventional calibration approaches introduce form error to the system due to inaccurate imaging models and distortion elimination. The proposed calibration method compensates for system distortion based on an iterative algorithm instead of the conventional mathematical distortion model. The initial values of the system parameters are calculated from the fringe patterns displayed on the systemic LCD screen through a reflection off a markless flat mirror. An iterative algorithm is proposed to compensate for system distortion and optimize the camera imaging parameters and system geometrical relation parameters based on a cost function. Both simulation work and experimental results show the proposed calibration method can significantly improve the calibration and measurement accuracy of stereo deflectometry. The PV (peak-to-valley) measurement error for a flat mirror is reduced from 282 nm with the conventional calibration approach to 69.7 nm with the proposed method.

  14. Multimodal Spatial Calibration for Accurately Registering EEG Sensor Positions

    PubMed Central

    Chen, Shengyong; Xiao, Gang; Li, Xiaoli

    2014-01-01

    This paper proposes a fast and accurate calibration method to calibrate multiple multimodal sensors using a novel photogrammetry system for fast localization of EEG sensors. The EEG sensors are placed on the human head and multimodal sensors are installed around the head to simultaneously obtain all EEG sensor positions. A multiple-view calibration process is implemented to obtain the transformations between multiple views. We first develop an efficient local repair algorithm to improve the depth map, and then a special calibration body is designed. Based on these, accurate and robust calibration results can be achieved. We evaluate the proposed method using the corners of a chessboard calibration plate. Experimental results demonstrate that the proposed method achieves good performance, which can be further applied to EEG source localization applications on the human brain. PMID:24803954

  15. Terrestrial photovoltaic measurements, 2

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The following major topics are discussed: (1) Terrestrial solar irradiance; (2) Solar simulation and reference cell calibration; and (3) Cell and array measurement procedures. Numerous related subtopics are also discussed within each major topic area.

  16. Geomorphically based predictive mapping of soil thickness in upland watersheds

    NASA Astrophysics Data System (ADS)

    Pelletier, Jon D.; Rasmussen, Craig

    2009-09-01

    The hydrologic response of upland watersheds is strongly controlled by soil (regolith) thickness. Despite the need to quantify soil thickness for input into hydrologic models, there is currently no widely used, geomorphically based method for doing so. In this paper we describe and illustrate a new method for predictive mapping of soil thicknesses using high-resolution topographic data, numerical modeling, and field-based calibration. The model framework works directly with input digital elevation model data to predict soil thicknesses assuming a long-term balance between soil production and erosion. Erosion rates in the model are quantified using one of three geomorphically based sediment transport models: nonlinear slope-dependent transport, nonlinear area- and slope-dependent transport, and nonlinear depth- and slope-dependent transport. The model balances soil production and erosion locally to predict a family of solutions corresponding to a range of values of two unconstrained model parameters. A small number of field-based soil thickness measurements can then be used to calibrate the local value of those unconstrained parameters, thereby constraining which solution is applicable at a particular study site. As an illustration, the model is used to predictively map soil thicknesses in two small, ~0.1 km², drainage basins in the Marshall Gulch watershed, a semiarid drainage basin in the Santa Catalina Mountains of Pima County, Arizona. Field observations and calibration data indicate that the nonlinear depth- and slope-dependent sediment transport model is the most appropriate transport model for this site. The resulting framework provides a generally applicable, geomorphically based tool for predictive mapping of soil thickness using high-resolution topographic data sets.
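
    The steady-state balance at the core of such models has a closed form for the commonly used exponential soil-production law: if production P = p0 exp(-h/h0) equals the local erosion rate E, then h = h0 ln(p0/E). The Python sketch below inverts this balance; p0 and h0 are exactly the kind of unconstrained parameters the paper calibrates against field measurements, and the values used here are placeholders.

        import numpy as np

        def steady_state_soil_thickness(erosion, p0=8e-5, h0=0.5):
            """Thickness h (m) from the balance p0*exp(-h/h0) = E (m/yr)."""
            erosion = np.asarray(erosion, dtype=float)
            h = h0 * np.log(p0 / erosion)
            return np.clip(h, 0.0, None)   # bare bedrock where E >= p0

        # Toy hillslope: erosion rates rising toward a channel
        erosion = np.array([1e-5, 2e-5, 4e-5, 6e-5, 9e-5, 1.2e-4])
        print(steady_state_soil_thickness(erosion))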

  17. Volumetric calibration of a plenoptic camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, Elise Munz; Fahringer, Timothy W.; Guildenbecher, Daniel Robert

    Here, the volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods is examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.

  18. Volumetric calibration of a plenoptic camera

    DOE PAGES

    Hall, Elise Munz; Fahringer, Timothy W.; Guildenbecher, Daniel Robert; ...

    2018-02-01

    Here, the volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods is examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.
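
    A minimal version of a polynomial volumetric mapping can be written down directly: fit a low-order 3-D polynomial from the distorted reconstructed coordinates to the known target coordinates, then apply it to later reconstructions. The Python sketch below uses a second-order basis and a synthetic smooth distortion; the real plenoptic processing chain and calibration target are not modeled.

        import numpy as np

        def poly3d_features(pts):
            # Second-order 3-D polynomial basis (10 terms)
            x, y, z = pts.T
            return np.column_stack([np.ones_like(x), x, y, z,
                                    x*y, x*z, y*z, x**2, y**2, z**2])

        rng = np.random.default_rng(5)
        true_pts = rng.uniform(-10, 10, (200, 3))        # known dot-card points (mm)
        # Pretend the thin-lens reconstruction adds a smooth distortion
        distorted = true_pts + 0.005 * true_pts**2 - 0.01 * true_pts[:, [1, 2, 0]]

        A = poly3d_features(distorted)
        coeffs = np.linalg.lstsq(A, true_pts, rcond=None)[0]   # one column per axis

        corrected = poly3d_features(distorted) @ coeffs
        rms = np.sqrt(np.mean((corrected - true_pts)**2))
        print(f"RMS residual after polynomial calibration: {rms:.4f} mm")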

  19. Blind calibration of radio interferometric arrays using sparsity constraints and its implications for self-calibration

    NASA Astrophysics Data System (ADS)

    Chiarucci, Simone; Wijnholds, Stefan J.

    2018-02-01

    Blind calibration, i.e. calibration without a priori knowledge of the source model, is robust to the presence of unknown sources such as transient phenomena or (low-power) broad-band radio frequency interference that escaped detection. In this paper, we present a novel method for blind calibration of a radio interferometric array assuming that the observed field only contains a small number of discrete point sources. We show the huge computational advantage over previous blind calibration methods and we assess its statistical efficiency and robustness to noise and the quality of the initial estimate. We demonstrate the method on actual data from a Low-Frequency Array low-band antenna station showing that our blind calibration is able to recover the same gain solutions as the regular calibration approach, as expected from theory and simulations. We also discuss the implications of our findings for the robustness of regular self-calibration to poor starting models.

  20. An Accurate Projector Calibration Method Based on Polynomial Distortion Representation

    PubMed Central

    Liu, Miao; Sun, Changku; Huang, Shujun; Zhang, Zonghua

    2015-01-01

    In structured light measurement systems or 3D printing systems, the errors caused by optical distortion of a digital projector always affect the precision performance and cannot be ignored. Existing methods to calibrate the projection distortion rely on a calibration plate and photogrammetry, so the calibration performance is largely affected by the quality of the plate and the imaging system. This paper proposes a new projector calibration approach that makes use of photodiodes to directly detect the light emitted from a digital projector. By analyzing the output sequence of the photoelectric module, the pixel coordinates can be accurately obtained by the curve fitting method. A polynomial distortion representation is employed to reduce the residuals of the traditional distortion representation model. Experimental results and performance evaluation show that the proposed calibration method is able to avoid most of the disadvantages of traditional methods and achieves a higher accuracy. The proposed method is also practically applicable for evaluating the geometric optical performance of other optical projection systems. PMID:26492247

  1. Novel Calibration Algorithm for a Three-Axis Strapdown Magnetometer

    PubMed Central

    Liu, Yan Xia; Li, Xi Sheng; Zhang, Xiao Juan; Feng, Yi Bo

    2014-01-01

    A complete error calibration model with 12 independent parameters is established by analyzing the three-axis magnetometer error mechanism. The model conforms to an ellipsoid constraint; the parameters of the ellipsoid equation are estimated, and the ellipsoid coefficient matrix is derived. However, the calibration matrix cannot be determined completely, as there are fewer ellipsoid parameters than calibration model parameters. Mathematically, the calibration matrix derived from the ellipsoid coefficient matrix by different matrix decomposition methods is not unique, and there exists an unknown rotation matrix R between them. This paper puts forward a constant intersection angle method (the angle between the geomagnetic field and the gravitational field is fixed) to estimate R. The Tikhonov method is adopted to address the problem that rounding errors or other errors may seriously affect the calculation of R when the condition number of the matrix is very large. The geomagnetic field vector and heading error are further corrected by R. The constant intersection angle method is convenient and practical, as it is free from any additional calibration procedure or coordinate transformation. In addition, a simulation experiment indicates that the heading error declines from ±1° calibrated by classical ellipsoid fitting to ±0.2° calibrated by the constant intersection angle method, at a signal-to-noise ratio of 50 dB. The actual experiment shows that the heading error is further corrected from ±0.8° calibrated by classical ellipsoid fitting to ±0.3° calibrated by the constant intersection angle method. PMID:24831110
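
    The ellipsoid-fitting step that precedes the constant intersection angle correction is standard and easy to sketch. The Python fragment below fits a general quadric to raw samples, extracts the hard-iron offset and a soft-iron whitening matrix, and deliberately leaves the rotation ambiguity R unresolved, since resolving it is exactly what the method described above is for; all data are synthetic.

        import numpy as np
        from scipy.linalg import sqrtm

        def ellipsoid_fit_calibration(m):
            """Fit m (N x 3) to an ellipsoid; return (offset, W) with
            W @ (sample - offset) on the unit sphere, up to an unknown rotation R."""
            x, y, z = m.T
            D = np.column_stack([x*x, y*y, z*z, 2*x*y, 2*x*z, 2*y*z, 2*x, 2*y, 2*z])
            p = np.linalg.lstsq(D, np.ones(len(m)), rcond=None)[0]
            A = np.array([[p[0], p[3], p[4]],
                          [p[3], p[1], p[5]],
                          [p[4], p[5], p[2]]])
            offset = -np.linalg.solve(A, p[6:9])            # hard-iron centre
            scale = offset @ A @ offset + 1.0
            W = np.real(sqrtm(A / scale))                   # soft-iron whitening
            return offset, W

        # Synthetic test: distort unit field vectors, then recover them
        rng = np.random.default_rng(6)
        v = rng.normal(size=(400, 3))
        v /= np.linalg.norm(v, axis=1, keepdims=True)
        soft = np.array([[1.2, 0.1, 0.0], [0.1, 0.9, 0.05], [0.0, 0.05, 1.1]])
        raw = v @ soft.T + np.array([0.3, -0.2, 0.5])
        off, W = ellipsoid_fit_calibration(raw)
        h = (raw - off) @ W.T
        print(np.linalg.norm(h, axis=1).std())   # ~0: calibrated samples on a sphere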

  2. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization

    NASA Astrophysics Data System (ADS)

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj

    2015-03-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious. They require acquisition of multiple images of a target pattern in its entirety to produce satisfactory results. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE) as a measure of camera calibration accuracy for our optical tracking-based AR system was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising to be applied to our augmented reality visualization system for laparoscopic surgery.

  3. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization

    PubMed Central

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj

    2017-01-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious. They require acquisition of multiple images of a target pattern in its entirety to produce satisfactory results. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE) as a measure of camera calibration accuracy for our optical tracking-based AR system was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising to be applied to our augmented reality visualization system for laparoscopic surgery. PMID:28943703

  4. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization.

    PubMed

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D; Shekhar, Raj

    2015-03-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious. They require acquisition of multiple images of a target pattern in its entirety to produce satisfactory results. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE) as a measure of camera calibration accuracy for our optical tracking-based AR system was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising to be applied to our augmented reality visualization system for laparoscopic surgery.

  5. A novel calibration method of focused light field camera for 3-D reconstruction of flame temperature

    NASA Astrophysics Data System (ADS)

    Sun, Jun; Hossain, Md. Moinul; Xu, Chuan-Long; Zhang, Biao; Wang, Shi-Min

    2017-05-01

    This paper presents a novel geometric calibration method for a focused light field camera to trace the rays of flame radiance and to reconstruct the three-dimensional (3-D) temperature distribution of a flame. A calibration model is developed to calculate the corner points and their projections for the focused light field camera. The characteristics of matching main lens and microlens f-numbers are used as additional constraints for the calibration. Geometric parameters of the focused light field camera are then obtained using the Levenberg-Marquardt algorithm. Total focused images, in which all the points are in focus, are utilized to validate the proposed calibration method. Calibration results are presented and discussed in detail. The maximum mean relative error of the calibration is found to be less than 0.13%, indicating that the proposed method is capable of calibrating the focused light field camera successfully. The parameters obtained by the calibration are then utilized to trace the rays of flame radiance. A least squares QR-factorization algorithm with Planck's radiation law is used to reconstruct the 3-D temperature distribution of a flame. Experiments were carried out on an ethylene-air-fired combustion test rig to reconstruct the temperature distribution of flames. The flame temperature obtained by the proposed method is then compared with that obtained using a high-precision thermocouple. The difference between the two measurements was found to be no greater than 6.7%. Experimental results demonstrated that the proposed calibration method and the applied measurement technique perform well in the reconstruction of the flame temperature.
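
    The reconstruction step pairs a linear least-squares inversion with Planck's law. As a toy illustration, the sketch below builds a random ray-integration matrix, solves for the per-voxel emission with a least-squares routine (NumPy's SVD-based lstsq stands in for the QR factorization named in the abstract), and inverts Planck's law for temperature; the geometry, wavelength, and temperatures are all invented.

        import numpy as np

        H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

        def planck(T, lam=650e-9):
            # Spectral radiance from Planck's law at wavelength lam (m)
            return (2 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * T))

        def inv_planck(L, lam=650e-9):
            # Closed-form inversion of Planck's law for temperature
            return H * C / (lam * KB * np.log1p(2 * H * C**2 / (lam**5 * L)))

        rng = np.random.default_rng(7)
        n_vox, n_rays = 50, 120
        A = rng.uniform(0, 1, (n_rays, n_vox)) * (rng.random((n_rays, n_vox)) < 0.2)
        T_true = rng.uniform(1400, 1900, n_vox)            # flame temperatures, K
        b = A @ planck(T_true)                             # simulated ray integrals

        e = np.linalg.lstsq(A, b, rcond=None)[0]           # emission per voxel
        T_rec = inv_planck(np.clip(e, 1e-12, None))
        print(np.abs(T_rec - T_true).max())                # small in this noiseless toy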

  6. Comparison of infusion pumps calibration methods

    NASA Astrophysics Data System (ADS)

    Batista, Elsa; Godinho, Isabel; do Céu Ferreira, Maria; Furtado, Andreia; Lucas, Peter; Silva, Claudia

    2017-12-01

    Nowadays, several types of infusion pumps are commonly used for drug delivery, such as syringe pumps and peristaltic pumps. These instruments present different measuring features and capacities according to their use and therapeutic application. In order to ensure the metrological traceability of these flow and volume measuring instruments, it is necessary to use suitable calibration methods and standards. Two different calibration methods can be used to determine the flow error of infusion pumps. One is the gravimetric method, considered a primary method, commonly used by National Metrology Institutes. The other calibration method, a secondary method, relies on an infusion device analyser (IDA) and is typically used by hospital maintenance offices. The suitability of the IDA calibration method was assessed by testing several infusion instruments at different flow rates using the gravimetric method. In addition, a measurement comparison between Portuguese Accredited Laboratories and hospital maintenance offices was performed under the coordination of the Portuguese Institute for Quality, the National Metrology Institute. The results obtained were directly related to the calibration method used and are presented in this paper. This work has been developed in the framework of the EURAMET projects EMRP MeDD and EMPIR 15SIP03.
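
    The gravimetric principle reduces to weighing the collected liquid over a timed interval and converting mass to volume. The sketch below shows the arithmetic with a simplified buoyancy correction and water-density formula; both are crude placeholders for the full corrections a metrology laboratory would apply.

        RHO_AIR = 1.2          # kg/m^3, ambient air (assumed)
        RHO_WEIGHTS = 8000.0   # kg/m^3, balance calibration weights (assumed)

        def water_density(t_c):
            # Rough water density (kg/m^3) near room temperature
            return 1000.0 * (1 - (t_c - 3.98)**2 / 503570.0)

        def gravimetric_flow_ul_per_h(mass_kg, duration_s, temp_c):
            buoyancy = 1.0 - RHO_AIR / RHO_WEIGHTS            # simplified correction
            volume_m3 = mass_kg * buoyancy / water_density(temp_c)
            return volume_m3 * 1e9 / (duration_s / 3600.0)    # uL per hour

        # Example: 0.9982 g of water collected over 30 min at 20 °C
        q = gravimetric_flow_ul_per_h(0.9982e-3, 1800.0, 20.0)
        print(f"measured flow = {q:.0f} uL/h")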

  7. Low Frequency Error Analysis and Calibration for High-Resolution Optical Satellite's Uncontrolled Geometric Positioning

    NASA Astrophysics Data System (ADS)

    Wang, Mi; Fang, Chengcheng; Yang, Bo; Cheng, Yufeng

    2016-06-01

    The low frequency error is a key factor affecting the uncontrolled geometric positioning accuracy of high-resolution optical imagery. To guarantee the geometric quality of imagery, this paper presents an on-orbit calibration method for the low frequency error based on a geometric calibration field. Firstly, we introduce the overall flow of low frequency error on-orbit analysis and calibration, which includes optical axis angle variation detection for the star sensor, relative calibration among star sensors, multi-star sensor information fusion, and low frequency error model construction and verification. Secondly, we use the optical axis angle change detection method to analyze the behavior of the low frequency error variation. Thirdly, we use relative calibration and information fusion among star sensors to realize datum unity and high precision attitude output. Finally, we construct the low frequency error model and optimally estimate the model parameters based on the DEM/DOM of the geometric calibration field. To evaluate the performance of the proposed calibration method, real data from a satellite of a certain type are used. Test results demonstrate that the calibration model in this paper describes the behavior of the low frequency error variation well. The uncontrolled geometric positioning accuracy of the high-resolution optical image in the WGS-84 coordinate system is clearly improved after the step-wise calibration.

  8. On-Demand Calibration and Evaluation for Electromagnetically Tracked Laparoscope in Augmented Reality Visualization

    PubMed Central

    Liu, Xinyang; Plishker, William; Zaki, George; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj

    2017-01-01

    Purpose: Common camera calibration methods employed in current laparoscopic augmented reality systems require the acquisition of multiple images of an entire checkerboard pattern from various poses. This lengthy procedure prevents performing laparoscope calibration in the operating room (OR). The purpose of this work was to develop a fast calibration method for electromagnetically (EM) tracked laparoscopes, such that calibration can be performed in the OR on demand. Methods: We designed a mechanical tracking mount to uniquely and snugly position an EM sensor to an appropriate location on a conventional laparoscope. A tool named fCalib was developed to calibrate intrinsic camera parameters, distortion coefficients, and extrinsic parameters (transformation between the scope lens coordinate system and the EM sensor coordinate system) using a single image that shows an arbitrary portion of a special target pattern. For quick evaluation of calibration results in the OR, we integrated a tube phantom with fCalib and overlaid a virtual representation of the tube on the live video scene. Results: We compared spatial target registration error between the common OpenCV method and the fCalib method in a laboratory setting. In addition, we compared the calibration re-projection error between the EM tracking-based fCalib and the optical tracking-based fCalib in a clinical setting. Our results suggested that the proposed method is comparable to the OpenCV method. However, changing the environment, e.g., inserting or removing surgical tools, would affect re-projection accuracy for the EM tracking-based approach. Computational time of the fCalib method averaged 14.0 s (range 3.5-22.7 s). Conclusions: We developed and validated a prototype for fast calibration and evaluation of EM tracked conventional (forward viewing) laparoscopes. The calibration method achieved acceptable accuracy and was relatively fast and easy to perform in the OR on demand. PMID:27250853

  9. External calibration of polarimetric radars using point and distributed targets

    NASA Technical Reports Server (NTRS)

    Yueh, S. H.; Kong, J. A.; Shin, R. T.

    1991-01-01

    Polarimetric calibration algorithms using combinations of point targets and reciprocal distributed targets are developed. From the reciprocity relations of distributed targets, an equivalent point target response is derived. Then the problem of polarimetric calibration using two point targets and one distributed target reduces to that using three point targets, which has been previously solved. For calibration using one point target and one reciprocal distributed target, two cases are analyzed with the point target being a trihedral reflector or a polarimetric active radar calibrator (PARC). For both cases, the general solutions of the system distortion matrices are written as a product of a particular solution and a matrix with one free parameter. For the trihedral-reflector case, this free parameter is determined by assuming azimuthal symmetry for the distributed target. For the PARC case, knowledge of one ratio of two covariance matrix elements of the distributed target is required to solve for the free parameter. Numerical results are simulated to demonstrate the usefulness of the developed algorithms.

  10. Calibration of the γ-Equation Transition Model for High Reynolds Flows at Low Mach

    NASA Astrophysics Data System (ADS)

    Colonia, S.; Leble, V.; Steijl, R.; Barakos, G.

    2016-09-01

    The numerical simulation of flows over large-scale wind turbine blades without considering the transition from laminar to fully turbulent flow may result in incorrect estimates of the blade loads and performance. Thanks to its relative simplicity and promising results, the Local-Correlation-based Transition Modelling concept represents a valid way to include transitional effects in practical CFD simulations. However, the model involves coefficients that need tuning. In this paper, the γ-equation transition model is assessed and calibrated for a wide range of Reynolds numbers at low Mach, as needed for wind turbine applications. An aerofoil is used to evaluate the original model and calibrate it, while a large-scale wind turbine blade is employed to show that the calibrated model can lead to reliable solutions for complex three-dimensional flows. The calibrated model shows promising results for both two-dimensional and three-dimensional flows, even if cross-flow instabilities are neglected.

  11. External calibration of polarimetric radars using point and distributed targets

    NASA Astrophysics Data System (ADS)

    Yueh, S. H.; Kong, J. A.; Shin, R. T.

    1991-08-01

    Polarimetric calibration algorithms using combinations of point targets and reciprocal distributed targets are developed. From the reciprocity relations of distributed targets, an equivalent point target response is derived. The problem of polarimetric calibration using two point targets and one distributed target then reduces to that using three point targets, which has been solved previously. For calibration using one point target and one reciprocal distributed target, two cases are analyzed, with the point target being either a trihedral reflector or a polarimetric active radar calibrator (PARC). For both cases, the general solutions of the system distortion matrices are written as a product of a particular solution and a matrix with one free parameter. For the trihedral-reflector case, this free parameter is determined by assuming azimuthal symmetry for the distributed target. For the PARC case, knowledge of one ratio of two covariance matrix elements of the distributed target is required to solve for the free parameter. Numerical simulations demonstrate the usefulness of the developed algorithms.

  12. VIIRS reflective solar bands on-orbit calibration and performance: a three-year update

    NASA Astrophysics Data System (ADS)

    Sun, Junqiang; Wang, Menghua

    2014-11-01

    The on-orbit calibration of the reflective solar bands (RSBs) of VIIRS and results from the analysis of the first three years of mission data are presented. The VIIRS solar diffuser (SD) and lunar calibration methodologies are discussed, and the calibration coefficients, called F-factors, for the RSBs are given for the latest revision. The coefficients derived from the two calibrations are compared, and the uncertainties of the calibrations are discussed. Numerous improvements have been made, with the major improvement to the calibration results coming mainly from the improved bidirectional reflectance factor (BRF) of the SD and the vignetting functions of both the SD screen and the sun-view screen. The very clean results, devoid of many previously known noises and artifacts, assure that VIIRS has performed well for its three years on orbit since launch and, in particular, that the solar diffuser stability monitor (SDSM) is functioning essentially without flaws. The SD degradation, or H-factors, for the most part shows the expected decline, except for a surprising rise on day 830 lasting for 75 days, signaling a new degradation phenomenon. Nevertheless, the SDSM and the calibration methodology have successfully captured the SD degradation for RSB calibration. The overall improvement has the most significant and direct impact on the ocean color products, which demand high accuracy from RSB observations.

  13. Design of an ultra-portable field transfer radiometer supporting automated vicarious calibration

    NASA Astrophysics Data System (ADS)

    Anderson, Nikolaus; Thome, Kurtis; Czapla-Myers, Jeffrey; Biggar, Stuart

    2015-09-01

    The University of Arizona Remote Sensing Group (RSG) began outfitting the radiometric calibration test site (RadCaTS) at Railroad Valley, Nevada, in 2004 for automated vicarious calibration of Earth-observing sensors. RadCaTS was upgraded to use RSG custom 8-band ground-viewing radiometers (GVRs) beginning in 2011, and currently four GVRs are deployed, providing an average reflectance for the test site. This measurement of ground reflectance is the most critical component of vicarious calibration using the reflectance-based method. To ensure the quality of these measurements, RSG has been exploring more efficient and accurate methods of on-site calibration evaluation. This work describes the design of, and initial results from, a small portable transfer radiometer for the purpose of on-site GVR calibration validation. Prior to deployment, RSG uses high-accuracy laboratory calibration methods to provide radiance calibrations with low uncertainties for each GVR. After deployment, a solar-radiation-based calibration has typically been used. That method is highly dependent on a clear, stable atmosphere, requires at least two people to perform, is time-consuming in post-processing, and depends on several large pieces of equipment. To provide more regular and more accurate calibration monitoring, the small portable transfer radiometer is designed for quick, one-person operation and immediate on-site comparison results. The radiometer is also suited for laboratory calibration use and thus could serve as a transfer radiometer calibration standard for ground-viewing radiometers at a RadCalNet site.

  14. The analytical calibration in (bio)imaging/mapping of the metallic elements in biological samples--definitions, nomenclature and strategies: state of the art.

    PubMed

    Jurowski, Kamil; Buszewski, Bogusław; Piekoszewski, Wojciech

    2015-01-01

    Nowadays, studies of the distribution of metallic elements in biological samples are among the most important research topics. Many articles are dedicated to specific analytical atomic spectrometry techniques used for mapping/(bio)imaging of metallic elements in various kinds of biological samples. However, the literature lacks articles dedicated to reviewing calibration strategies and their problems, nomenclature, definitions, and the ways and methods used to obtain quantitative distribution maps. The aim of this article was to characterize the analytical calibration in the (bio)imaging/mapping of metallic elements in biological samples, including (1) nomenclature, (2) definitions, and (3) selected examples of sophisticated calibration strategies with analytical calibration procedures applied in the different analytical methods currently used to study an element's distribution in biological samples/materials, such as LA-ICP-MS, SIMS, EDS, XRF and others. The main emphasis was placed on the procedures and methodology of the analytical calibration strategy. Additionally, this work aims to systematize the nomenclature for the calibration terms: analytical calibration, analytical calibration method, analytical calibration procedure and analytical calibration strategy. The authors also want to popularize a division of calibration methods different from those hitherto used. This article is the first work in the literature that refers to and emphasizes the many different and complex aspects of analytical calibration problems in studies related to (bio)imaging/mapping of metallic elements in different kinds of biological samples. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. Automatic Calibration Method for Driver’s Head Orientation in Natural Driving Environment

    PubMed Central

    Fu, Xianping; Guan, Xiao; Peli, Eli; Liu, Hongbo; Luo, Gang

    2013-01-01

    Gaze tracking is crucial for studying driver attention, detecting fatigue, and improving driver assistance systems, but it is difficult in natural driving environments due to nonuniform and highly variable illumination and large head movements. Traditional calibrations that require subjects to follow calibrators are very cumbersome to implement in daily driving situations. A new automatic calibration method is presented in this paper, based on a single camera for determining head orientation, which utilizes the side mirrors, the rear-view mirror, the instrument board, and different zones in the windshield as calibration points. Supported by a self-learning algorithm, the system tracks the head and categorizes the head pose into 12 gaze zones based on facial features. A particle filter is used to estimate the head pose and obtain an accurate gaze zone by updating the calibration parameters. Experimental results show that, after several hours of driving, the automatic calibration method, without the driver's cooperation, can achieve the same accuracy as a manual calibration method. The mean error of estimated eye gazes was less than 5° in day and night driving. PMID:24639620

  16. Submerged flow bridge scour under clear water conditions

    DOT National Transportation Integrated Search

    2012-09-01

    Prediction of pressure flow (vertical contraction) scour underneath a partially or fully submerged bridge superstructure in an extreme flood event is crucial for bridge safety. An experimentally and numerically calibrated formulation is developed...

  17. Numerical computation of Pop plot

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menikoff, Ralph

    The Pop plot (distance-of-run to detonation versus initial shock pressure) is a key characterization of shock initiation in a heterogeneous explosive. Reactive burn models for high explosives (HE) must reproduce the experimental Pop plot to have any chance of accurately predicting shock initiation phenomena. This report describes a methodology for automating the computation of a Pop plot for a specific explosive with a given HE model. Illustrative examples of the computation are shown for PBX 9502 with three burn models (SURF, WSD and Forest Fire) utilizing the xRage code, which is the Eulerian ASC hydrocode at LANL. Comparison of the numerical and experimental Pop plots can be the basis for a validation test or an aid in calibrating the burn rate of an HE model. Issues with calibration are discussed.
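
    As a rough illustration of what computing a Pop plot produces, the snippet below fits the conventional straight line in log-log coordinates to simulated run-distance versus pressure pairs; the numbers are invented for illustration and are not PBX 9502 data.

    ```python
    # Pop plots are conventionally linear in log-log coordinates:
    #   log10(x*) = a + b * log10(P)
    # where x* is the run distance to detonation and P the initial shock pressure.
    import numpy as np

    P = np.array([5.0, 8.0, 12.0, 16.0])      # initial shock pressure, GPa (illustrative)
    x_run = np.array([18.0, 7.5, 3.1, 1.6])   # run distance to detonation, mm (illustrative)

    b, a = np.polyfit(np.log10(P), np.log10(x_run), 1)
    print(f"log10(x*) = {a:.3f} + {b:.3f} * log10(P)")
    ```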

  18. Using noble gas tracers to constrain a groundwater flow model with recharge elevations: A novel approach for mountainous terrain

    USGS Publications Warehouse

    Doyle, Jessica M.; Gleeson, Tom; Manning, Andrew H.; Mayer, K. Ulrich

    2015-01-01

    Environmental tracers provide information on groundwater age, recharge conditions, and flow processes which can be helpful for evaluating groundwater sustainability and vulnerability. Dissolved noble gas data have proven particularly useful in mountainous terrain because they can be used to determine recharge elevation. However, tracer-derived recharge elevations have not been utilized as calibration targets for numerical groundwater flow models. Herein, we constrain and calibrate a regional groundwater flow model with noble-gas-derived recharge elevations for the first time. Tritium and noble gas tracer results improved the site conceptual model by identifying a previously uncertain contribution of mountain block recharge from the Coast Mountains to an alluvial coastal aquifer in humid southwestern British Columbia. The revised conceptual model was integrated into a three-dimensional numerical groundwater flow model and calibrated to hydraulic head data in addition to recharge elevations estimated from noble gas recharge temperatures. Recharge elevations proved to be imperative for constraining hydraulic conductivity, recharge location, and bedrock geometry, and thus minimizing model nonuniqueness. Results indicate that 45% of recharge to the aquifer is mountain block recharge. A similar match between measured and modeled heads was achieved in a second numerical model that excludes the mountain block (no mountain block recharge), demonstrating that hydraulic head data alone are incapable of quantifying mountain block recharge. This result has significant implications for understanding and managing source water protection in recharge areas, potential effects of climate change, the overall water budget, and ultimately ensuring groundwater sustainability.

  19. Impact of model complexity and multi-scale data integration on the estimation of hydrogeological parameters in a dual-porosity aquifer

    NASA Astrophysics Data System (ADS)

    Tamayo-Mas, Elena; Bianchi, Marco; Mansour, Majdi

    2018-03-01

    This study investigates the impact of model complexity and multi-scale prior hydrogeological data on the interpretation of pumping test data in a dual-porosity aquifer (the Chalk aquifer in England, UK). In order to characterize the hydrogeological properties, different approaches are applied, ranging from a traditional analytical solution (the Theis approach) to more sophisticated numerical models with automatically calibrated input parameters. Comparisons of results from the different approaches show that neither traditional analytical solutions nor a numerical model assuming a homogeneous and isotropic aquifer can adequately explain the observed drawdowns. A better reproduction of the observed drawdowns in all seven monitoring locations is instead achieved when medium- and local-scale prior information about the vertical hydraulic conductivity (K) distribution is used to constrain the model calibration process. In particular, the integration of medium-scale vertical K variations based on flowmeter measurements led to an improvement in the goodness-of-fit of the simulated drawdowns of about 30%. Further improvements (up to 70%) were observed when a simple upscaling approach was used to integrate small-scale K data to constrain the automatic calibration process of the numerical model. Although the analysis focuses on a specific case study, these results provide insights into the representativeness of estimates of hydrogeological properties based on different interpretations of pumping test data, and promote the integration of multi-scale data for the characterization of heterogeneous aquifers in complex hydrogeological settings.
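
    The abstract does not specify the upscaling rule used for the small-scale K data; one common simple choice for effective vertical conductivity across layered media is the harmonic mean, sketched below under that assumption.

    ```python
    # Hedged sketch: for flow perpendicular to layering, the effective vertical
    # K of a stack of equally thick intervals is the harmonic mean of the
    # small-scale measurements. Whether the study used this exact average is
    # an assumption; the values are illustrative.
    import numpy as np

    k_small = np.array([2.1e-6, 7.4e-7, 5.0e-6, 1.2e-6])   # m/s, illustrative
    k_eff_vertical = len(k_small) / np.sum(1.0 / k_small)  # harmonic mean
    print(f"effective vertical K ~ {k_eff_vertical:.2e} m/s")
    ```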

  20. Evaluation of Automated Model Calibration Techniques for Residential Building Energy Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, J.; Polly, B.; Collis, J.

    2013-09-01

    This simulation study adapts and applies the general framework described in BESTEST-EX (Judkoff et al. 2010) for self-testing residential building energy model calibration methods. BEopt/DOE-2.2 is used to evaluate four mathematical calibration methods in the context of monthly, daily, and hourly synthetic utility data for a 1960s-era existing home in a cooling-dominated climate. The home's model inputs are assigned probability distributions representing uncertainty ranges, random selections are made from the uncertainty ranges to define 'explicit' input values, and synthetic utility billing data are generated using the explicit input values. The four calibration methods evaluated in this study are: an ASHRAE 1051-RP-based approach (Reddy and Maor 2006), a simplified simulated annealing optimization approach, a regression metamodeling optimization approach, and a simple output ratio calibration approach. The calibration methods are evaluated for monthly, daily, and hourly cases; various retrofit measures are applied to the calibrated models and the methods are evaluated based on the accuracy of predicted savings, computational cost, repeatability, automation, and ease of implementation.
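
    Of the four methods, the "simple output ratio calibration approach" is easy to illustrate. The sketch below assumes it scales simulated monthly energy use by the overall measured-to-simulated ratio; this is a plausible reading for illustration, not the study's documented BEopt/DOE-2.2 procedure.

    ```python
    # Hedged sketch of an output-ratio calibration: compute one scale factor
    # from billed vs. simulated annual totals and apply it to the model output
    # (and, by extension, to retrofit-savings predictions). Data invented.
    import numpy as np

    simulated = np.array([950, 870, 780, 640, 520, 610,
                          720, 760, 690, 620, 750, 900], float)  # kWh/month
    measured  = np.array([1010, 930, 820, 700, 560, 640,
                          760, 810, 740, 650, 790, 950], float)  # kWh/month

    ratio = measured.sum() / simulated.sum()   # single calibration factor
    calibrated = simulated * ratio
    print(f"output ratio = {ratio:.3f}")
    ```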

  1. Evaluation of Automated Model Calibration Techniques for Residential Building Energy Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, Joseph; Polly, Ben; Collis, Jon

    2013-09-01

    This simulation study adapts and applies the general framework described in BESTEST-EX (Judkoff et al. 2010) for self-testing residential building energy model calibration methods. BEopt/DOE-2.2 is used to evaluate four mathematical calibration methods in the context of monthly, daily, and hourly synthetic utility data for a 1960s-era existing home in a cooling-dominated climate. The home's model inputs are assigned probability distributions representing uncertainty ranges, random selections are made from the uncertainty ranges to define "explicit" input values, and synthetic utility billing data are generated using the explicit input values. The four calibration methods evaluated in this study are: an ASHRAE 1051-RP-based approach (Reddy and Maor 2006), a simplified simulated annealing optimization approach, a regression metamodeling optimization approach, and a simple output ratio calibration approach. The calibration methods are evaluated for monthly, daily, and hourly cases; various retrofit measures are applied to the calibrated models and the methods are evaluated based on the accuracy of predicted savings, computational cost, repeatability, automation, and ease of implementation.

  2. Fast wavelength calibration method for spectrometers based on waveguide comb optical filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Zhengang; Department of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai 200240; Huang, Meizhen, E-mail: mzhuang@sjtu.edu.cn

    2015-04-15

    A novel fast wavelength calibration method for spectrometers, based on a standard spectrometer and a double metal-cladding waveguide comb optical filter (WCOF), is proposed and demonstrated. By using the WCOF device, a wide-spectrum beam is comb-filtered, which is very suitable for spectrometer wavelength calibration. The influence of the waveguide filter's structural parameters and the beam incident angle on the comb absorption peaks' wavelengths and bandwidths is also discussed. Verification experiments were carried out in the wavelength range of 200-1100 nm with satisfactory results. Compared with the traditional wavelength calibration method based on discrete, sparse atomic emission or absorption lines, the new method offers several advantages: abundant calibration data, high accuracy, short calibration time, suitability for production processes, and stability.
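
    A minimal sketch of the final step any such comb-based calibration needs: mapping detector pixels to wavelength by fitting a low-order polynomial through the comb peak positions. The peak pixels and wavelengths below are illustrative, not values from the paper; the comb filter's point is that it supplies many such peaks.

    ```python
    # Hedged sketch: fit pixel -> wavelength with a cubic polynomial through
    # known comb-peak positions, then evaluate it for any pixel index.
    import numpy as np

    pixels      = np.array([112, 385, 661, 940, 1218, 1495], float)  # illustrative
    wavelengths = np.array([400.2, 450.8, 501.1, 551.6, 602.0, 652.3])  # nm, illustrative

    coeffs = np.polyfit(pixels, wavelengths, deg=3)
    pixel_to_nm = np.poly1d(coeffs)
    print(f"pixel 800 -> {pixel_to_nm(800):.2f} nm")
    ```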

  3. A Flexible and High Precision Calibration Method for Binocular Structured Light Scanning System

    PubMed Central

    Yuan, Jianying; Wang, Qiong; Li, Bailin

    2014-01-01

    3D (three-dimensional) structured light scanning systems are widely used in reverse engineering, quality inspection, and related fields. Camera calibration is key to scanning precision. Currently, finely machined 2D (two-dimensional) or 3D calibration reference objects are usually required for high calibration precision; these are difficult to handle and costly. In this paper, a novel calibration method is proposed using a scale bar and some artificial coded targets placed randomly in the measuring volume. The principle of the proposed method is based on hierarchical self-calibration and bundle adjustment. Initial intrinsic parameters are obtained from the images. Initial extrinsic parameters in projective space are estimated with the factorization method and then upgraded to Euclidean space using the orthogonality of the rotation matrix and the rank-3 constraint on the absolute quadric. Finally, all camera parameters are refined through bundle adjustment. Real experiments show that the proposed method is robust and achieves the same precision level as results obtained using a delicate artificial reference object, while the hardware cost is very low compared with current calibration methods used in 3D structured light scanning systems. PMID:25202736

  4. Radiometric calibration method for large aperture infrared system with broad dynamic range.

    PubMed

    Sun, Zhiyuan; Chang, Songtao; Zhu, Wei

    2015-05-20

    Infrared radiometric measurements can acquire important data for missile defense systems. When observation is carried out by ground-based infrared systems, a missile is characterized by long distance, small size, and large variation of radiance. Therefore, the infrared systems should be manufactured with a larger aperture to enhance detection ability and calibrated over a broader dynamic range to extend the measurable radiance. Nevertheless, the frequently used calibration methods demand an extended-area blackbody with broad dynamic range or a huge collimator for filling the system's field stop, which would greatly increase manufacturing costs and difficulties. To overcome this restriction, a calibration method based on amendment of inner and outer calibration is proposed. First, the principles and procedures of this method are introduced. Then, a shifting strategy for infrared systems measuring targets with large fluctuations of infrared radiance is put forward. Finally, several experiments are performed on a shortwave infrared system with a Φ400 mm aperture. The results indicate that the proposed method not only ensures calibration accuracy but also offers low cost, low power consumption, and high mobility. Hence, it is an effective radiometric calibration method in the field.

  5. Automatic alignment method for calibration of hydrometers

    NASA Astrophysics Data System (ADS)

    Lee, Y. J.; Chang, K. H.; Chon, J. C.; Oh, C. Y.

    2004-04-01

    This paper presents a new method to automatically align specific scale-marks for the calibration of hydrometers. A hydrometer calibration system adopting the new method consists of a vision system, a stepping motor, and software to control the system. The vision system is composed of a CCD camera and a frame grabber, and is used to acquire images. The stepping motor moves the camera, which is attached to the vessel containing a reference liquid, along the hydrometer. The operating program has two main functions: to process images from the camera to find the position of the horizontal plane and to control the stepping motor for the alignment of the horizontal plane with a particular scale-mark. Any system adopting this automatic alignment method is a convenient and precise means of calibrating a hydrometer. The performance of the proposed method is illustrated by comparing the calibration results using the automatic alignment method with those obtained using the manual method.
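
    A rough sketch of the image-processing function described above: locating the horizontal liquid surface as the image row with the strongest horizontal edge response. The Sobel-based detection and the file name are assumptions for illustration; the paper's actual algorithm is not specified in the abstract.

    ```python
    # Hedged sketch: the liquid surface appears as a strong horizontal edge,
    # so the row where the vertical intensity gradient is largest is taken as
    # the surface position. The stepping motor would then be driven until
    # this row coincides with the pixel row of the target scale-mark.
    import cv2
    import numpy as np

    img = cv2.imread("hydrometer_view.png", cv2.IMREAD_GRAYSCALE)  # assumed file
    edges = cv2.Sobel(img, cv2.CV_64F, dx=0, dy=1, ksize=5)  # vertical gradient
    row_strength = np.abs(edges).sum(axis=1)                 # edge energy per row
    surface_row = int(np.argmax(row_strength))
    print(f"horizontal plane detected at image row {surface_row}")
    ```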

  6. Hierarchical data-driven approach to fitting numerical relativity data for nonprecessing binary black holes with an application to final spin and radiated energy

    NASA Astrophysics Data System (ADS)

    Jiménez-Forteza, Xisco; Keitel, David; Husa, Sascha; Hannam, Mark; Khan, Sebastian; Pürrer, Michael

    2017-03-01

    Numerical relativity is an essential tool in studying the coalescence of binary black holes (BBHs). It is still computationally prohibitive to cover the BBH parameter space exhaustively, making phenomenological fitting formulas for BBH waveforms and final-state properties important for practical applications. We describe a general hierarchical bottom-up fitting methodology to design and calibrate fits to numerical relativity simulations for the three-dimensional parameter space of quasicircular nonprecessing merging BBHs, spanned by mass ratio and by the individual spin components orthogonal to the orbital plane. Particular attention is paid to incorporating the extreme-mass-ratio limit and to the subdominant unequal-spin effects. As an illustration of the method, we provide two applications, to the final spin and final mass (or equivalently: radiated energy) of the remnant black hole. Fitting to 427 numerical relativity simulations, we obtain results broadly consistent with previously published fits, but improving in overall accuracy and particularly in the approach to extremal limits and for unequal-spin configurations. We also discuss the importance of data quality studies when combining simulations from diverse sources, how detailed error budgets will be necessary for further improvements of these already highly accurate fits, and how this first detailed study of unequal-spin effects helps in choosing the most informative parameters for future numerical relativity runs.

  7. An overview of in-orbit radiometric calibration of typical satellite sensors

    NASA Astrophysics Data System (ADS)

    Zhou, G. Q.; Li, C. Y.; Yue, T.; Jiang, L. J.; Liu, N.; Sun, Y.; Li, M. Y.

    2015-06-01

    This paper reviews the development of in-orbit radiometric calibration methods over the past 40 years. It summarizes the in-orbit radiometric calibration technology of typical satellite sensors in the visible/near-infrared bands and the thermal infrared band, focusing on visible/near-infrared methods, including lamp-based calibration and solar-radiation-based calibration. It covers the calibration technology of the Landsat series sensors (MSS, TM, ETM+, OLI, TIRS) and the SPOT series sensors (HRV, HRS), as well as ALI aboard EO-1 and IRMSS aboard the CBERS series satellites. By comparing the in-orbit radiometric calibration technology of the same type of satellite sensor across different periods, the paper analyzes the similarities and differences of the calibration technology; it also summarizes the advantages and disadvantages of the calibration technologies used in the same period by satellite sensors from different countries.

  8. Cloned plasmid DNA fragments as calibrators for controlling GMOs: different real-time duplex quantitative PCR methods.

    PubMed

    Taverniers, Isabel; Van Bockstaele, Erik; De Loose, Marc

    2004-03-01

    Analytical real-time PCR technology is a powerful tool for implementation of the GMO labeling regulations enforced in the EU. The quality of analytical measurement data obtained by quantitative real-time PCR depends on the correct use of calibrator and reference materials (RMs). For GMO methods of analysis, the choice of appropriate RMs is currently under debate. So far, genomic DNA solutions from certified reference materials (CRMs) are most often used as calibrators for GMO quantification by means of real-time PCR. However, due to some intrinsic features of these CRMs, errors may be expected in the estimations of DNA sequence quantities. In this paper, two new real-time PCR methods are presented for Roundup Ready soybean, in which two types of plasmid DNA fragments are used as calibrators. Single-target plasmids (STPs) diluted in a background of genomic DNA were used in the first method. Multiple-target plasmids (MTPs) containing both sequences in one molecule were used as calibrators for the second method. Both methods simultaneously detect a promoter 35S sequence as the GMO-specific target and a lectin gene sequence as the endogenous reference target in a duplex PCR. For the estimation of relative GMO percentages, both "delta C(T)" and "standard curve" approaches are tested. Delta C(T) methods are based on direct comparison of the measured C(T) values of the GMO-specific target and the endogenous target. Standard curve methods measure absolute amounts of target copies or haploid genome equivalents. A duplex delta C(T) method with STP calibrators performed at least as well as a similar method with genomic DNA calibrators from commercial CRMs. In addition, high-quality results were obtained with a standard curve method using MTP calibrators. This paper demonstrates that plasmid DNA molecules containing either one or multiple target sequences are excellent alternative calibrators for GMO quantification and are especially suitable for duplex PCR reactions.
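
    The "delta C(T)" idea lends itself to a compact worked example. The sketch below computes a relative GMO percentage from the C(T) difference between the 35S and lectin targets, referenced to a calibrator of known GMO content, assuming ideal (100%) PCR efficiency; all numbers are invented for illustration.

    ```python
    # Hedged sketch of a delta-C(T) quantification: with 2-fold amplification
    # per cycle, the target ratio scales as 2**(-delta_C_T), and comparing the
    # sample's delta to a known calibrator's delta yields the GMO percentage.
    def gmo_percent(ct_35s, ct_lectin, ct_35s_cal, ct_lectin_cal, cal_percent):
        delta_sample = ct_35s - ct_lectin        # GMO-specific vs endogenous, sample
        delta_cal = ct_35s_cal - ct_lectin_cal   # same difference for the calibrator
        return cal_percent * 2.0 ** (delta_cal - delta_sample)

    # e.g., against a 1% Roundup Ready soybean calibrator (values invented)
    print(gmo_percent(ct_35s=29.1, ct_lectin=24.0,
                      ct_35s_cal=30.6, ct_lectin_cal=24.0, cal_percent=1.0))
    ```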

  9. Overview of calibration and validation activities for the EUMETSAT polar system: second generation (EPS-SG) visible/infrared imager (METimage)

    NASA Astrophysics Data System (ADS)

    Phillips, P.; Bonsignori, R.; Schlüssel, P.; Schmülling, F.; Spezzi, L.; Watts, P.; Zerfowski, I.

    2016-10-01

    The EPS-SG Visible/Infrared Imaging (VII) mission is dedicated to supporting the optical imagery user needs for Numerical Weather Prediction (NWP), Nowcasting (NWC) and climate in the timeframe beyond 2020. The VII mission is fulfilled by the METimage instrument, developed by the German Space Agency (DLR) and funded by the German government and EUMETSAT. Following on from an important list of predecessors such as the Advanced Very High Resolution Radiometer (AVHRR) and the Moderate Resolution Imaging Spectroradiometer (MODIS), METimage will fly in the mid-morning orbit of the Joint Polar System, whilst the early-afternoon orbits are served by the JPSS (U.S. Joint Polar Satellite System) Visible Infrared Imaging Radiometer Suite (VIIRS). METimage itself is a cross-purpose medium-resolution, multi-spectral optical imager, measuring the optical spectrum of radiation emitted and reflected by the Earth from a low-altitude sun-synchronous orbit over a minimum swath width of 2700 km. The top-of-atmosphere outgoing radiance will be sampled every 500 m (at nadir), with measurements made in 20 spectral channels ranging from 443 nm in the visible up to 13.345 μm in the thermal infrared. The three major objectives of the EPS-SG METimage calibration and validation activities are: • verification of the instrument performance through continuous in-flight calibration and characterisation, including monitoring of long-term stability; • provision of validated level 1 and level 2 METimage products; • revision of product processing facilities, i.e. algorithms and auxiliary data sets, to assure that products conform to user requirements and then, if possible, exceed user expectations. This paper describes the overall Calibration and Validation (Cal/Val) logic and the methods adopted to ensure that the METimage data products meet performance specifications for the lifetime of the mission. Such methods include inter-comparisons with other missions through simultaneous nadir overpasses, comparisons with ground-based observations, analysis of algorithm internal diagnostics to confirm retrieval performance for geophysical products, and vicarious calibration to assist with validation of the instrument's on-board calibration. Any identified deficiencies in the products will lead either to an update of the auxiliary data sets (e.g. calibration key data) that are used to configure the product processors or to a revision of the algorithms themselves. The Cal/Val activities are mostly foreseen during commissioning but will inevitably extend into routine operations in order to take account of seasonal variations and ensure long-term stability of the calibrated radiances and geophysical products. A prerequisite to validation of products at the scientific level is that the satellite and the instrument itself have been verified against their respective specifications, both pre-launch and during the satellite in-orbit verification phase.

  10. Cross-Calibration of Secondary Electron Multiplier in Noble Gas Analysis

    NASA Astrophysics Data System (ADS)

    Santato, Alessandro; Hamilton, Doug; Deerberg, Michael; Wijbrans, Jan; Kuiper, Klaudia; Bouman, Claudia

    2015-04-01

    The latest generation of multi-collector noble gas mass spectrometers has decisively improved the precision of isotopic ratio analysis [1, 2] and helped the scientific community to address new questions [3]. Measuring numerous isotopes simultaneously has two significant advantages: firstly, any fluctuations in signal intensity have no effect on the isotope ratio; secondly, the analysis time is reduced. This point becomes very important in static vacuum mass spectrometry, where during the analysis the signal intensity decays while the background increases. However, when multi-collector analysis is utilized, special attention must be paid to the cross-calibration of the detectors; this is key to obtaining accurate and reproducible isotopic ratios. In isotope ratio mass spectrometry, depending on the type of detector (i.e., Faraday cup or secondary electron multiplier, SEM), the analytical technique (TIMS, MC-ICP-MS or IRMS) and the isotope system of interest, several techniques are currently applied to cross-calibrate the detectors. Specifically, the gain of the Faraday cups is generally stable and only the associated amplifier must be calibrated. For example, on Thermo Scientific instrument control systems, the 10^11 and 10^12 Ω amplifiers can easily be calibrated through a fully software-controlled procedure by inputting a constant electric signal to each amplifier sequentially [4]. On the other hand, the yield of the SEMs can drift by up to 0.2% per hour, and other techniques such as peak hopping, standard-sample bracketing and multi-dynamic measurement must be used. Peak hopping allows the detectors to be calibrated by measuring an ion beam of constant intensity across the detectors, whereas standard-sample bracketing corrects the drift of the detectors through the analysis of a reference standard of known isotopic ratio. If at least one isotopic pair of the sample is known, multi-dynamic measurement can be used; in this case the known isotopic ratio is measured on different pairs of detectors and the true value of the isotopic ratio of interest can be determined by a specific equation. In noble gas analysis, due to the decay of the ion beam during the measurement as well as the special isotopic systematics of the gases themselves, cross-calibration of the SEMs using these techniques becomes more complex and other methods should be investigated. In this work we present a comparison between different approaches to cross-calibrate multiple SEMs in noble gas analysis in order to evaluate the most suitable and reliable method. References: [1] Mark et al. (2009) Geochem. Geophys. Geosyst. 10, 1-9. [2] Mark et al. (2011) Geochim. Cosmochim. Acta 75, 7494-7501. [3] Phillips and Matchan (2013) Geochim. Cosmochim. Acta 121, 229-239. [4] Koornneef et al. (2014) J. Anal. At. Spectrom. 28, 749-754.
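
    Of the cross-calibration techniques listed, standard-sample bracketing is the simplest to sketch: the SEM drift is corrected using a reference standard of known ratio measured before and after the unknown. The linear-drift assumption and all values below are illustrative.

    ```python
    # Hedged sketch of standard-sample bracketing: estimate the detector bias
    # from the standard analyses that bracket the sample (assuming drift is
    # roughly linear between them) and divide it out of the sample ratio.
    def bracketed_ratio(sample_ratio, std_before, std_after, std_true):
        bias = 0.5 * (std_before + std_after) / std_true  # mean apparent bias
        return sample_ratio / bias

    # reference standard of known isotopic ratio std_true (values invented)
    corrected = bracketed_ratio(sample_ratio=0.18205,
                                std_before=0.18812, std_after=0.18840,
                                std_true=0.18800)
    print(f"drift-corrected ratio = {corrected:.5f}")
    ```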

  11. Critical Analyses of Data Differences Between FNMOC and AFGWC Spawned SSM/I Datasets

    NASA Technical Reports Server (NTRS)

    Ritchie, Adrian A., Jr.; Smith, Matthew R.; Goodman, H. Michael; Schudalla, Ronald L.; Conway, Dawn K.; LaFontaine, Frank J.; Moss, Don; Motta, Brian

    1998-01-01

    Antenna temperatures and the corresponding geolocation data from the five sources of the Special Sensor Microwave/Imager data from the Defense Meteorological Satellite Program F11 satellite have been characterized. Data from the Fleet Numerical Meteorology and Oceanography Center (FNMOC) have been compared with data from other sources to define and document the differences resulting from different processing systems. While all sources used similar methods to calculate antenna temperatures, different calibration averaging techniques and other processing methods yielded temperature differences. Analyses of the geolocation data identified perturbations in the FNMOC and National Environmental Satellite, Data and Information Service data. The effects of the temperature differences were examined by generating rain rates using the Goddard Scattering Algorithm. Differences in the geophysical precipitation products are directly attributable to antenna temperature differences.

  12. General rigid motion correction for computed tomography imaging based on locally linear embedding

    NASA Astrophysics Data System (ADS)

    Chen, Mianyi; He, Peng; Feng, Peng; Liu, Baodong; Yang, Qingsong; Wei, Biao; Wang, Ge

    2018-02-01

    Patient motion can degrade the quality of computed tomography images, which are typically acquired in cone-beam geometry. Rigid patient motion is characterized by six geometric parameters and is more challenging to correct than in fan-beam geometry. We extend our previous rigid patient motion correction method based on the principle of locally linear embedding (LLE) from fan-beam to cone-beam geometry and accelerate the computational procedure with the graphics processing unit (GPU)-based All Scale Tomographic Reconstruction Antwerp (ASTRA) toolbox. The major merit of our method is that we need neither fiducial markers nor motion-tracking devices. The numerical and experimental studies show that the LLE-based patient motion correction is capable of calibrating the six parameters of the patient motion simultaneously, reducing patient motion artifacts significantly.

  13. Bayesian Treatment of Uncertainty in Environmental Modeling: Optimization, Sampling and Data Assimilation Using the DREAM Software Package

    NASA Astrophysics Data System (ADS)

    Vrugt, J. A.

    2012-12-01

    In the past decade much progress has been made in the treatment of uncertainty in earth systems modeling. Whereas initial approaches focused mostly on quantification of parameter and predictive uncertainty, recent methods attempt to disentangle the effects of parameter, forcing (input) data, model structural, and calibration data errors. In this talk I will highlight some of our recent work involving theory, concepts and applications of Bayesian parameter and/or state estimation. In particular, new methods for sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC) simulation will be presented, with emphasis on massively parallel distributed computing and quantification of model structural errors. The theoretical and numerical developments will be illustrated using model-data synthesis problems in hydrology, hydrogeology and geophysics.

  14. Experimental Demonstration of In-Place Calibration for Time Domain Microwave Imaging System

    NASA Astrophysics Data System (ADS)

    Kwon, S.; Son, S.; Lee, K.

    2018-04-01

    In this study, an experimental demonstration of in-place calibration was conducted using the developed time domain measurement system. Experiments were conducted using three calibration methods: the proposed in-place calibration and two existing calibrations, namely array rotation and differential calibration. The in-place calibration uses dual receivers located at an equal distance from the transmitter. The signals received at the dual receivers contain similar unwanted components, that is, the directly received signal and antenna coupling. In contrast to simulations, the antennas are not perfectly matched and there might be unexpected environmental errors; thus, we used the developed experimental system to demonstrate the proposed method. The possible problems of low signal-to-noise ratio and clock jitter, which may exist in time domain systems, were mitigated by averaging repeatedly measured signals. According to the experimental results, the tumor was successfully detected using all three calibration methods. The cross correlation was calculated using the reconstructed image of the ideal differential calibration for a quantitative comparison between the existing rotation calibration and the proposed in-place calibration. The mean cross correlation between the in-place calibration and the ideal differential calibration was 0.80, whereas that of the rotation calibration was 0.55. Furthermore, the simulation results were compared with the experimental results to verify the in-place calibration method. A quantitative analysis was also performed, and the experimental results show a tendency similar to the simulations.
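
    A synthetic sketch of the in-place principle: with the dual receivers equidistant from the transmitter, the direct signal and antenna coupling appear nearly identically in both channels and cancel on subtraction, while averaging repeated traces addresses noise and clock jitter. The signal shapes below are invented stand-ins for measured data.

    ```python
    # Hedged sketch: rx1 and rx2 share the same unwanted components; only rx1
    # (closer to the scattering target in this toy setup) sees the response.
    import numpy as np

    rng = np.random.default_rng(0)
    n_rep, n = 64, 1024
    t = np.arange(n, dtype=float)
    unwanted = np.exp(-(t - 100) ** 2 / 50.0)       # direct path + coupling (common)
    target = 0.1 * np.exp(-(t - 400) ** 2 / 80.0)   # tumor response (illustrative)

    rx1 = unwanted + target + rng.normal(0, 0.05, (n_rep, n))
    rx2 = unwanted + rng.normal(0, 0.05, (n_rep, n))

    s1, s2 = rx1.mean(axis=0), rx2.mean(axis=0)  # averaging improves SNR ~ sqrt(n_rep)
    calibrated = s1 - s2                         # common unwanted components cancel
    ```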

  15. Influence of Installation Errors On the Output Data of the Piezoelectric Vibrations Transducers

    NASA Astrophysics Data System (ADS)

    Kozuch, Barbara; Chelmecki, Jaroslaw; Tatara, Tadeusz

    2017-10-01

    The paper examines the influence of installation errors of piezoelectric vibration transducers on their output data. PCB Piezotronics piezoelectric accelerometers were used to perform calibrations by comparison. The measurements were performed with a TMS 9155 Calibration Workstation (version 5.4.0) at frequencies in the range of 5 Hz-2000 Hz. The accelerometers were fixed on the calibration station in a so-called back-to-back configuration in accordance with the applicable international standard, ISO 16063-21: Methods for the calibration of vibration and shock transducers - Part 21: Vibration calibration by comparison to a reference transducer. The first accelerometer was calibrated by suitable methods with traceability to a primary reference transducer. Each subsequent calibration was performed with one setting changed relative to the original calibration. The alterations represented negligence and failures with respect to the above-mentioned standard and operating guidelines, e.g., the sensor was not tightened or an appropriate coupling substance was not applied. The mounting method specified in the standard's requirements was also varied: different kinds of wax, light oil, grease, and other mounting methods were used. The aim of the study was to verify the significance of the standard's requirements and to assess their validity. The authors also wanted to highlight the most significant calibration errors. Moreover, the relation between the various mounting methods was demonstrated.
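
    The comparison calibration underlying ISO 16063-21 reduces, at each frequency, to a ratio of output amplitudes between the device under test and the reference transducer; the sketch below shows that relation with illustrative numbers.

    ```python
    # Back-to-back calibration by comparison: both transducers experience the
    # same motion, so the DUT sensitivity follows from the reference
    # sensitivity scaled by the amplitude ratio. Values are illustrative.
    def sensitivity_by_comparison(v_dut, v_ref, s_ref):
        """v_dut, v_ref: output amplitudes [mV]; s_ref: reference sensitivity
        [mV/(m/s^2)]. Returns the DUT sensitivity in the same units."""
        return s_ref * v_dut / v_ref

    print(sensitivity_by_comparison(v_dut=98.2, v_ref=100.0, s_ref=10.0))
    ```

    An installation error such as a loose mount or missing coupling wax perturbs v_dut at high frequencies, which is exactly the effect the study quantifies.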

  16. Numerical and Experimental Thermal Responses of Single-cell and Differential Calorimeters: from Out-of-Pile Calibration to Irradiation Campaigns

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brun, J.; Reynard-Carette, C.; Carette, M.

    2015-07-01

    The nuclear radiation energy deposition rate (usually expressed in W·g^-1) is a key parameter for the thermal design of experiments on materials and nuclear fuel carried out in experimental channels of irradiation reactors, such as the French OSIRIS reactor at Saclay or the Polish MARIA reactor. In particular, quantifying the nuclear heating allows prediction of the thermal conditions induced in the irradiation devices and/or structural materials. Various sensors are used to quantify this parameter, in particular radiometric calorimeters, also called in-pile calorimeters. Two main kinds of in-pile calorimeter exist, each with a specific design: the single-cell calorimeter and the differential calorimeter. The present work focuses on these two calorimeter kinds, from their out-of-pile calibration step (transient and steady experiments, respectively) to a comparison between numerical and experimental results obtained from two irradiation campaigns (in the MARIA and OSIRIS reactors, respectively). The main aim of this paper is to propose a steady numerical approach to estimate the single-cell calorimeter response under irradiation conditions.

  17. Augmented classical least squares multivariate spectral analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2004-02-03

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.

  18. Augmented Classical Least Squares Multivariate Spectral Analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2005-07-26

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.

  19. Augmented Classical Least Squares Multivariate Spectral Analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2005-01-11

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.

  20. Numerical simulation of asphalt mixtures fracture using continuum models

    NASA Astrophysics Data System (ADS)

    Szydłowski, Cezary; Górski, Jarosław; Stienss, Marcin; Smakosz, Łukasz

    2018-01-01

    The paper considers numerical models of the fracture processes of semi-circular asphalt mixture specimens subjected to three-point bending. Parameter calibration of the asphalt mixture constitutive models requires advanced, complex experimental test procedures. The highly non-homogeneous material is numerically modelled by a quasi-continuum model. The computational parameters are averaged data of the components, i.e., the asphalt, aggregate and air voids composing the material. The model directly captures the random nature of the material parameters and of the aggregate distribution in the specimens. Initial results of the analysis are presented here.

  1. Correspondence of verbal descriptor and numeric rating scales for pain intensity: an item response theory calibration.

    PubMed

    Edelen, Maria Orlando; Saliba, Debra

    2010-07-01

    Assessing pain intensity in older adults is critical and challenging. There is debate about the most effective way to ask older adults to describe their pain severity, and clinicians vary in their preferred approaches, making comparison of pain intensity scores across settings difficult. A total of 3,676 residents from 71 community nursing homes across eight states were asked about pain presence. The 1,960 residents who reported pain within the past 5 days (53% of total, 70% female; age: M = 77.9, SD = 12.4) were included in analyses. Those who reported pain were also asked to provide a rating of pain intensity using either a verbal descriptor scale (VDS; mild, moderate, severe, and very severe and horrible), a numeric rating scale (NRS; 0 = no pain to 10 = worst pain imaginable), or both. We used item response theory (IRT) methods to identify the correspondence between the VDS and the NRS response options by estimating item parameters for these and five additional pain items. The sample reported moderate amounts of pain on average. Examination of the IRT location parameters for the pain intensity items indicated the following approximate correspondence: VDS mild ≈ NRS 1-4, VDS moderate ≈ NRS 5-7, VDS severe ≈ NRS 8-9, and VDS very severe, horrible ≈ NRS 10. This IRT calibration provides a crosswalk between the two response scales so that either can be used in practice depending on the preference of the clinician and respondent.
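
    The reported correspondence can be written directly as a crosswalk. The small function below encodes the mapping from the abstract; treating NRS 0 as "no pain" (outside the calibrated crosswalk) is an added convention.

    ```python
    # Crosswalk from NRS (0-10) to VDS category, per the IRT calibration above.
    def nrs_to_vds(nrs: int) -> str:
        if not 0 <= nrs <= 10:
            raise ValueError("NRS score must be between 0 and 10")
        if nrs == 0:
            return "no pain"            # convention; not part of the crosswalk
        if nrs <= 4:
            return "mild"
        if nrs <= 7:
            return "moderate"
        if nrs <= 9:
            return "severe"
        return "very severe, horrible"

    print(nrs_to_vds(6))  # -> "moderate"
    ```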

  2. An evaluation of the predictive capabilities of CTRW and MRMT

    NASA Astrophysics Data System (ADS)

    Fiori, Aldo; Zarlenga, Antonio; Gotovac, Hrvoje; Jankovic, Igor; Cvetkovic, Vladimir; Dagan, Gedeon

    2016-04-01

    The prediction capability of two approximate models of non-Fickian transport in highly heterogeneous aquifers is checked by comparison with accurate numerical simulations, for mean uniform flow of velocity U. The two models considered are the MRMT (Multi-Rate Mass Transfer) and CTRW (Continuous Time Random Walk) models. Both circumvent the need to solve the flow and transport equations by using proxy models, which provide the BTC μ(x,t) as a function of a vector a of 5 unknown parameters. Although underlain by different conceptualisations, the two models have a similar mathematical structure. The proponents of the models suggest using field transport experiments at a small scale to calibrate a, toward predicting transport at larger scales. The strategy was tested with the aid of accurate numerical simulations in two and three dimensions from the literature. First, the 5 parameter values were calibrated by using the simulated μ at a control plane close to the injection plane, and these same parameters were subsequently used to predict μ at 10 further control planes. It is found that the two methods perform equally well, though the parameter identification is nonunique, with a large set of parameters providing similar fits. Also, errors in the determination of the mean Eulerian velocity may lead to significant shifts in the predicted BTC. It is found that the simulated BTCs satisfy Markovianity: they can be found as n-fold convolutions of a "kernel", in line with the models' main assumption.
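
    The Markovianity property noted at the end can be checked numerically: if the BTC at the first control plane acts as a kernel, the BTC at the n-th plane should equal its n-fold convolution. The kernel below is a synthetic stand-in, not data from the study.

    ```python
    # Hedged sketch: build a lognormal-like "kernel" BTC, normalize it, and
    # form the BTC at the 10th control plane as a 10-fold self-convolution.
    import numpy as np

    dt = 0.1
    t = np.arange(0.0, 200.0, dt)
    kernel = np.exp(-(np.log(t + 1e-12) - 1.0) ** 2 / 0.5)  # synthetic BTC shape
    kernel /= kernel.sum() * dt                             # unit-area kernel

    btc = kernel.copy()
    for _ in range(9):                                      # 10th plane in total
        btc = np.convolve(btc, kernel)[: len(t)] * dt       # discrete convolution
    ```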

  3. On-demand calibration and evaluation for electromagnetically tracked laparoscope in augmented reality visualization.

    PubMed

    Liu, Xinyang; Plishker, William; Zaki, George; Kang, Sukryool; Kane, Timothy D; Shekhar, Raj

    2016-06-01

    Common camera calibration methods employed in current laparoscopic augmented reality systems require the acquisition of multiple images of an entire checkerboard pattern from various poses. This lengthy procedure prevents performing laparoscope calibration in the operating room (OR). The purpose of this work was to develop a fast calibration method for electromagnetically (EM) tracked laparoscopes, such that the calibration can be performed in the OR on demand. We designed a mechanical tracking mount to uniquely and snugly position an EM sensor at an appropriate location on a conventional laparoscope. A tool named fCalib was developed to calibrate intrinsic camera parameters, distortion coefficients, and extrinsic parameters (the transformation between the scope lens coordinate system and the EM sensor coordinate system) using a single image that shows an arbitrary portion of a special target pattern. For quick evaluation of calibration results in the OR, we integrated a tube phantom with the fCalib prototype and overlaid a virtual representation of the tube on the live video scene. We compared the spatial target registration error between the common OpenCV method and the fCalib method in a laboratory setting. In addition, we compared the calibration re-projection error between the EM tracking-based fCalib and the optical tracking-based fCalib in a clinical setting. Our results suggest that the proposed method is comparable to the OpenCV method. However, changing the environment, e.g., inserting or removing surgical tools, might affect re-projection accuracy for the EM tracking-based approach. Computational time of the fCalib method averaged 14.0 s (range 3.5 s-22.7 s). We developed and validated a prototype for fast calibration and evaluation of EM tracked conventional (forward viewing) laparoscopes. The calibration method achieved acceptable accuracy and was relatively fast and easy to perform in the OR on demand.

  4. Numerical Estimation of the Outer Bank Resistance Characteristics in AN Evolving Meandering River

    NASA Astrophysics Data System (ADS)

    Wang, D.; Konsoer, K. M.; Rhoads, B. L.; Garcia, M. H.; Best, J.

    2017-12-01

    Few studies have examined the three-dimensional flow structure and its interaction with bed morphology within elongate loops of large meandering rivers. The present study uses a numerical model to simulate the flow pattern and sediment transport, especially the flow close to the outer bank, in two elongate meander loops of the Wabash River, USA. The numerical grid for the model is based on a combination of airborne LiDAR data on the floodplains and multibeam data within the river channel. A Finite Element Method (FEM) is used to solve the non-hydrostatic RANS equations with a k-epsilon turbulence closure scheme. The high-resolution topographic data allow detailed numerical simulation of flow patterns along the outer bank, and model calibration involves comparing simulated velocities to ADCP measurements at 41 cross sections near this bank. Results indicate that flow along the outer bank is strongly influenced by large resistance elements, including woody debris, large erosional scallops within the bank face, and outcropping bedrock. In general, patterns of bank migration conform to zones of high near-bank velocity and shear stress. Using the existing model, different virtual events can be simulated to explore the impacts of different resistance characteristics on patterns of flow, sediment transport, and bank erosion.

  5. Developing and refining NIR calibrations for total carbohydrate composition and isoflavones and saponins in ground whole soy meal

    USDA-ARS?s Scientific Manuscript database

    Although many near infrared (NIR) spectrometric calibrations exist for a variety of components in soy, current calibration methods are often limited by either a small sample size on which the calibrations are based or a wide variation in sample preparation and measurement methods, which yields unrel...

  6. Multiplexed fluctuation-dissipation-theorem calibration of optical tweezers inside living cells

    NASA Astrophysics Data System (ADS)

    Yan, Hao; Johnston, Jessica F.; Cahn, Sidney B.; King, Megan C.; Mochrie, Simon G. J.

    2017-11-01

    In order to apply optical tweezers-based force measurements within an uncharacterized viscoelastic medium such as the cytoplasm of a living cell, a quantitative calibration method that may be applied in this complex environment is needed. We describe an improved version of the fluctuation-dissipation-theorem calibration method, which has been developed to perform in situ calibration in viscoelastic media without prior knowledge of the trapped object. Using this calibration procedure, it is possible to extract values of the medium's viscoelastic moduli as well as the force constant describing the optical trap. To demonstrate our method, we calibrate an optical trap in water, in polyethylene oxide solutions of different concentrations, and inside living fission yeast (S. pombe).
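
    For orientation, in a purely viscous medium the trap force constant can be read off the positional variance via the equipartition theorem, k = k_B T / <x^2>; the method above generalizes such calibration to viscoelastic media via the fluctuation-dissipation theorem. The sketch below shows only the simple viscous case, with synthetic bead positions, and is not the paper's procedure.

    ```python
    # Hedged sketch of equipartition-based trap calibration (viscous case only):
    # each quadratic degree of freedom carries (1/2) k_B T, so k <x^2> = k_B T.
    import numpy as np

    kB = 1.380649e-23   # Boltzmann constant, J/K
    T = 298.0           # temperature, K
    # synthetic bead positions with ~12 nm RMS excursion (invented)
    x = np.random.default_rng(1).normal(0.0, 12e-9, 100_000)

    k_trap = kB * T / np.var(x)                    # trap stiffness, N/m
    print(f"trap stiffness ~ {k_trap * 1e6:.1f} pN/um")
    ```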

  7. Calibration of a γ-Reθ transition model and its application in low-speed flows

    NASA Astrophysics Data System (ADS)

    Wang, YunTao; Zhang, YuLun; Meng, DeHong; Wang, GunXue; Li, Song

    2014-12-01

    The prediction of laminar-turbulent transition in the boundary layer is very important for obtaining accurate aerodynamic characteristics with computational fluid dynamics (CFD) tools, because laminar-turbulent transition is directly related to complex flow phenomena in the boundary layer and to separated flow. Unfortunately, the transition effect is not included in most of today's major CFD tools because transition modeling involves non-local calculations. In this paper, Menter's γ-Reθ transition model is calibrated and incorporated into a Reynolds-Averaged Navier-Stokes (RANS) code, the Trisonic Platform (TRIP) developed at the China Aerodynamics Research and Development Center (CARDC). Based on flat-plate experimental data from the literature, the empirical correlations involved in the transition model are modified and calibrated numerically. Numerical simulation of the low-speed flow over the Trapezoidal Wing (Trap Wing) is performed and compared with the corresponding experimental data. The results indicate that the γ-Reθ transition model can accurately predict the location of separation-induced transition and natural transition in flow regions with moderate pressure gradient. The transition model effectively improves the simulation accuracy of the boundary layer and of the aerodynamic characteristics.

  8. Flight Test Results of an Angle of Attack and Angle of Sideslip Calibration Method Using Output-Error Optimization

    NASA Technical Reports Server (NTRS)

    Siu, Marie-Michele; Martos, Borja; Foster, John V.

    2013-01-01

    As part of a joint partnership between the NASA Aviation Safety Program (AvSP) and the University of Tennessee Space Institute (UTSI), research on advanced air data calibration methods has been in progress. This research was initiated to expand a novel pitot-static calibration method that was developed to allow rapid in-flight calibration for the NASA Airborne Subscale Transport Aircraft Research (AirSTAR) facility. The approach uses Global Positioning System (GPS) technology coupled with modern system identification methods that rapidly compute optimal pressure error models over a range of airspeed with defined confidence bounds. Subscale flight tests demonstrated small 2-sigma error bounds with a significant reduction in test time compared to other methods. Recent UTSI full-scale flight tests have shown airspeed calibrations with accuracy equal to or better than the Federal Aviation Administration (FAA) accepted GPS 'four-leg' method, in a smaller test area and in less time. The current research was motivated by the desire to extend this method to in-flight calibration of angle of attack (AOA) and angle of sideslip (AOS) flow vanes. An instrumented Piper Saratoga research aircraft from UTSI was used to collect the flight test data and evaluate flight test maneuvers. Results showed that the output-error approach produces good results for flow vane calibration. In addition, maneuvers for pitot-static and flow vane calibration can be integrated to enable simultaneous and efficient testing of each system.
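
    The output-error idea can be illustrated by fitting a low-order pressure-error model so that the corrected indicated airspeed matches a GPS-derived reference. The model structure, variable names, and the use of scipy here are illustrative assumptions, not the flight-test implementation described above.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def fit_airspeed_error(v_ind, v_ref, order=2):
        """Fit polynomial coefficients of dV = V_ref - V_ind versus V_ind."""
        def residuals(theta):
            return (v_ref - v_ind) - np.polyval(theta, v_ind)
        sol = least_squares(residuals, np.zeros(order + 1))
        # Approximate parameter covariance from the Jacobian gives the
        # confidence bounds on the fitted error model
        dof = max(len(v_ind) - (order + 1), 1)
        cov = np.linalg.inv(sol.jac.T @ sol.jac) * (sol.fun @ sol.fun) / dof
        return sol.x, np.sqrt(np.diag(cov))
    ```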

  9. The site-scale saturated zone flow model for Yucca Mountain: Calibration of different conceptual models and their impact on flow paths

    USGS Publications Warehouse

    Zyvoloski, G.; Kwicklis, E.; Eddebbarh, A.-A.; Arnold, B.; Faunt, C.; Robinson, B.A.

    2003-01-01

    This paper presents several different conceptual models of the Large Hydraulic Gradient (LHG) region north of Yucca Mountain and describes the impact of those models on groundwater flow near the potential high-level repository site. The results are based on a numerical model of the site-scale saturated zone beneath Yucca Mountain. This model is used for performance assessment predictions of radionuclide transport and to guide future data collection and modeling activities. The numerical model is calibrated by matching available water level measurements using parameter estimation techniques, along with more informal comparisons of the model to hydrologic and geochemical information. The model software (the hydrologic simulation code FEHM and the parameter estimation software PEST) and the model setup allow for efficient calibration of multiple conceptual models. Until now, the Large Hydraulic Gradient has been simulated using a low-permeability, east-west oriented feature, even though direct evidence for this feature is lacking. In addition to this model, we investigate and calibrate three additional conceptual models of the Large Hydraulic Gradient, all of which are based on a presumed zone of hydrothermal chemical alteration north of Yucca Mountain. After examining the heads and permeabilities obtained from the calibrated models, we present particle pathways from the potential repository that record differences in the predicted groundwater flow regime. The results show that the Large Hydraulic Gradient can be represented with the alternative conceptual models that include the hydrothermally altered zone. The predicted pathways are mildly sensitive to the choice of conceptual model and more sensitive to the quality of calibration in the vicinity of the repository. These differences are most likely due to different degrees of fit of the model to data, and do not represent important differences in hydrologic conditions among the conceptual models. © 2002 Elsevier Science B.V. All rights reserved.

  10. Development of Rapid, Continuous Calibration Techniques and Implementation as a Prototype System for Civil Engineering Materials Evaluation

    NASA Astrophysics Data System (ADS)

    Scott, M. L.; Gagarin, N.; Mekemson, J. R.; Chintakunta, S. R.

    2011-06-01

    Until recently, civil engineering material calibration data could only be obtained from material sample cores or via time-consuming, stationary calibration measurements in a limited number of locations. Calibration data are used to determine material propagation velocities of electromagnetic waves in test materials for use in layer thickness measurements and subsurface imaging. The limitations these calibration methods impose have been a significant impediment to broader use of nondestructive evaluation methods such as ground-penetrating radar (GPR). In 2006, a new rapid, continuous calibration approach was designed using simulation software to address these measurement limitations during a Federal Highway Administration (FHWA) research and development effort. This continuous calibration method combines a digitally-synthesized step-frequency (SF)-GPR array and a data collection protocol sequence for the common midpoint (CMP) method. Modeling and laboratory test results for various data collection protocols and materials are presented in this paper. The continuous-CMP concept was implemented for FHWA in a prototype demonstration system called the Advanced Pavement Evaluation (APE) system in 2009. Data from the continuous-CMP protocol are processed using a semblance/coherency analysis to determine material propagation velocities. Continuously calibrated pavement thicknesses measured with the APE system in 2009 are presented. This method is efficient, accurate, and cost-effective.
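
    The semblance/coherency step can be sketched as follows: for each trial two-way time and velocity, amplitudes are gathered along the hyperbolic moveout curve across the CMP traces and their coherency is scored; the velocity maximizing semblance is taken as the material propagation velocity. Time windowing and the SF-GPR specifics of the APE system are omitted; all names are illustrative.

    ```python
    import numpy as np

    def semblance(gather, offsets, dt, t0, v):
        """gather: (n_traces, n_samples) CMP data; semblance at trial (t0, v)."""
        n_traces, n_samples = gather.shape
        t = np.sqrt(t0**2 + (offsets / v)**2)   # hyperbolic moveout times
        idx = np.round(t / dt).astype(int)
        valid = idx < n_samples
        a = gather[np.arange(n_traces)[valid], idx[valid]]
        if a.size < 2:
            return 0.0
        return (a.sum()**2) / (a.size * (a**2).sum() + 1e-30)

    # Scan trial velocities; the peak marks the propagation velocity at t0:
    # scores = [semblance(gather, offsets, dt, t0, v)
    #           for v in np.linspace(0.06, 0.30, 100)]  # m/ns, typical for GPR
    ```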

  11. User-friendly freehand ultrasound calibration using Lego bricks and automatic registration.

    PubMed

    Xiao, Yiming; Yan, Charles Xiao Bo; Drouin, Simon; De Nigris, Dante; Kochanowska, Anna; Collins, D Louis

    2016-09-01

    As an inexpensive, noninvasive, and portable clinical imaging modality, ultrasound (US) has been widely employed in many interventional procedures for monitoring potential tissue deformation, surgical tool placement, and locating surgical targets. The application requires the spatial mapping between 2D US images and 3D coordinates of the patient. Although positions of the devices (i.e., the ultrasound transducer) and the patient can be easily recorded by a motion tracking system, the spatial relationship between the US image and the tracker attached to the US transducer needs to be estimated through a US calibration procedure. Previously, various calibration techniques have been proposed, where a spatial transformation is computed to match the coordinates of corresponding features in a physical phantom and those seen in the US scans. However, most of these methods are difficult for novice users. We proposed an ultrasound calibration method by constructing a phantom from simple Lego bricks and applying an automated multi-slice 2D-3D registration scheme without volumetric reconstruction. The method was validated for its calibration accuracy and reproducibility. Our method yields a calibration accuracy of [Formula: see text] mm and a calibration reproducibility of 1.29 mm. We have proposed a robust, inexpensive, and easy-to-use ultrasound calibration method.

  12. Development of rapid, continuous calibration techniques and implementation as a prototype system for civil engineering materials evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott, M. L.; Gagarin, N.; Mekemson, J. R.

    Until recently, civil engineering material calibration data could only be obtained from material sample cores or via time-consuming, stationary calibration measurements in a limited number of locations. Calibration data are used to determine material propagation velocities of electromagnetic waves in test materials for use in layer thickness measurements and subsurface imaging. The limitations these calibration methods impose have been a significant impediment to broader use of nondestructive evaluation methods such as ground-penetrating radar (GPR). In 2006, a new rapid, continuous calibration approach was designed using simulation software to address these measurement limitations during a Federal Highway Administration (FHWA) research and development effort. This continuous calibration method combines a digitally-synthesized step-frequency (SF)-GPR array and a data collection protocol sequence for the common midpoint (CMP) method. Modeling and laboratory test results for various data collection protocols and materials are presented in this paper. The continuous-CMP concept was implemented for FHWA in a prototype demonstration system called the Advanced Pavement Evaluation (APE) system in 2009. Data from the continuous-CMP protocol are processed using a semblance/coherency analysis to determine material propagation velocities. Continuously calibrated pavement thicknesses measured with the APE system in 2009 are presented. This method is efficient, accurate, and cost-effective.

  13. IMU-Based Online Kinematic Calibration of Robot Manipulator

    PubMed Central

    2013-01-01

    Robot calibration is a useful diagnostic method for improving the positioning accuracy in robot production and maintenance. An online robot self-calibration method based on an inertial measurement unit (IMU) is presented in this paper. The method requires that the IMU be rigidly attached to the robot manipulator, which makes it possible to obtain the orientation of the manipulator from the orientation of the IMU in real time. This paper proposes an efficient approach that incorporates the Factored Quaternion Algorithm (FQA) and a Kalman Filter (KF) to estimate the orientation of the IMU. Then, an Extended Kalman Filter (EKF) is used to estimate kinematic parameter errors. Using this orientation estimation method results in improved reliability and accuracy in determining the orientation of the manipulator. Compared with existing vision-based self-calibration methods, the great advantage of this method is that it does not need complex steps, such as camera calibration, image capture, and corner detection, which makes the robot calibration procedure more autonomous in a dynamic manufacturing environment. Experimental studies on a GOOGOL GRB3016 robot show that this method has better accuracy, convenience, and effectiveness than vision-based methods. PMID:24302854

  14. Extrinsic Calibration of Camera Networks Based on Pedestrians

    PubMed Central

    Guan, Junzhi; Deboeverie, Francis; Slembrouck, Maarten; Van Haerenborgh, Dirk; Van Cauwelaert, Dimitri; Veelaert, Peter; Philips, Wilfried

    2016-01-01

    In this paper, we propose a novel extrinsic calibration method for camera networks by analyzing tracks of pedestrians. First of all, we extract the center lines of walking persons by detecting their heads and feet in the camera images. We propose an easy and accurate method to estimate the 3D positions of the head and feet w.r.t. a local camera coordinate system from these center lines. We also propose a RANSAC-based orthogonal Procrustes approach to compute relative extrinsic parameters connecting the coordinate systems of cameras in a pairwise fashion. Finally, we refine the extrinsic calibration matrices using a method that minimizes the reprojection error. While existing state-of-the-art calibration methods explore epipolar geometry and use image positions directly, the proposed method first computes 3D positions per camera and then fuses the data. This results in simpler computations and a more flexible and accurate calibration method. Another advantage of our method is that it can also handle the case of persons walking along straight lines, which cannot be handled by most of the existing state-of-the-art calibration methods since all head and feet positions are co-planar. This situation often happens in real life. PMID:27171080
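
    The orthogonal Procrustes step at the heart of the pairwise registration can be sketched with a standard SVD solution (shown here without the RANSAC outlier rejection the method adds around it):

    ```python
    import numpy as np

    def procrustes_rigid(P, Q):
        """R, t minimizing ||R @ P + t - Q|| for 3xN corresponding points."""
        p0 = P.mean(axis=1, keepdims=True)
        q0 = Q.mean(axis=1, keepdims=True)
        H = (Q - q0) @ (P - p0).T                       # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # proper rotation only
        R = U @ D @ Vt
        t = q0 - R @ p0
        return R, t
    ```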

  15. Hybrid Geometric Calibration Method for Multi-Platform Spaceborne SAR Images with Sparse GCPs

    NASA Astrophysics Data System (ADS)

    Lv, G.; Tang, X.; Ai, B.; Li, T.; Chen, Q.

    2018-04-01

    Geometric calibration is able to provide high-accuracy geometric coordinates for spaceborne SAR images through accurate geometric parameters in the Range-Doppler model determined by ground control points (GCPs). However, it is very difficult to obtain GCPs covering large-scale areas, especially in mountainous regions. In addition, the traditional calibration method is only used for single-platform SAR images and cannot support hybrid geometric calibration of multi-platform images. To solve these problems, a hybrid geometric calibration method for multi-platform spaceborne SAR images with sparse GCPs is proposed in this paper. First, we calibrate the master image that contains GCPs. Secondly, a point tracking algorithm is used to obtain tie points (TPs) between the master and slave images. Finally, we calibrate the slave images using the TPs as GCPs. We take the Beijing-Tianjin-Hebei region as an example to study this hybrid geometric calibration method, using 3 TerraSAR-X images, 3 TanDEM-X images and 5 GF-3 images covering more than 235 kilometers in the north-south direction. Geometric calibration of all images is completed using only 5 GCPs. GPS data extracted from a GNSS receiver are used to assess the plane accuracy after calibration. The results after geometric calibration with sparse GCPs show that the geometric positioning accuracy is 3 m for TSX/TDX images and 7.5 m for GF-3 images.

  16. Temperature Measurement and Numerical Prediction in Machining Inconel 718.

    PubMed

    Díaz-Álvarez, José; Tapetado, Alberto; Vázquez, Carmen; Miguélez, Henar

    2017-06-30

    Thermal issues are critical when machining Ni-based superalloy components designed for high temperature applications. The low thermal conductivity and extreme strain hardening of this family of materials result in elevated temperatures around the cutting area. This elevated temperature could lead to machining-induced damage such as phase changes and residual stresses, resulting in reduced service life of the component. Measurement of temperature during machining is crucial in order to control the cutting process and avoid workpiece damage. On the other hand, the development of predictive tools based on numerical models helps in the definition of machining processes and in the determination of difficult-to-measure parameters such as the penetration of the heated layer. However, the validation of numerical models strongly depends on the accurate measurement of physical parameters such as temperature, ensuring the calibration of the model. This paper focuses on the measurement and prediction of temperature during the machining of Ni-based superalloys. The temperature sensor was a fiber-optic two-color pyrometer developed for localized temperature measurements in turning of Inconel 718. The sensor is capable of measuring temperature in the range of 250 to 1200 °C. Temperature evolution is recorded in a lathe at different feed rates and cutting speeds. The measurements were used to calibrate a simplified numerical model for prediction of temperature fields during turning.
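
    The principle behind a two-color pyrometer can be shown with the ratio method: under a gray-body assumption (equal emissivities at the two wavelengths), the ratio of the two detector signals depends on temperature alone via Wien's approximation. Instrument-specific calibration factors are omitted; the function below is a sketch, not the sensor described above.

    ```python
    import numpy as np

    C2 = 1.4388e-2  # second radiation constant, m*K

    def ratio_temperature(s1, s2, lam1, lam2):
        """s1, s2: signals at wavelengths lam1, lam2 (in meters); returns K."""
        # Wien: S(lam, T) ~ eps * lam**-5 * exp(-C2 / (lam * T)); for equal
        # emissivities the ratio s1/s2 is a function of T only:
        num = C2 * (1.0 / lam2 - 1.0 / lam1)
        den = np.log(s1 / s2) + 5.0 * np.log(lam1 / lam2)
        return num / den
    ```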

  17. Standardization of glycohemoglobin results and reference values in whole blood studied in 103 laboratories using 20 methods.

    PubMed

    Weykamp, C W; Penders, T J; Miedema, K; Muskiet, F A; van der Slik, W

    1995-01-01

    We investigated the effect of calibration with lyophilized calibrators on whole-blood glycohemoglobin (glyHb) results. One hundred three laboratories, using 20 different methods, determined glyHb in two lyophilized calibrators and two whole-blood samples. For whole-blood samples with low (5%) and high (9%) glyHb percentages, respectively, calibration decreased overall interlaboratory variation (CV) from 16% to 9% and from 11% to 6% and decreased intermethod variation from 14% to 6% and from 12% to 5%. Forty-seven laboratories, using 14 different methods, determined mean glyHb percentages in self-selected groups of 10 nondiabetic volunteers each. With calibration their overall mean (2SD) was 5.0% (0.5%), very close to the 5.0% (0.3%) derived from the reference method used in the Diabetes Control and Complications Trial. In both experiments the Abbott IMx and Vision showed deviating results. We conclude that, irrespective of the analytical method used, calibration enables standardization of glyHb results, reference values, and interpretation criteria.
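
    The calibration evaluated here is, in essence, a two-point linear mapping: each laboratory's raw readings are passed through the line defined by the measured and assigned values of the two calibrators. A hedged numeric illustration (all numbers made up):

    ```python
    def recalibrate(raw, cal_low, cal_high):
        """cal_low, cal_high: (measured, assigned) pairs for the calibrators."""
        m_lo, a_lo = cal_low
        m_hi, a_hi = cal_high
        slope = (a_hi - a_lo) / (m_hi - m_lo)
        return a_lo + slope * (raw - m_lo)

    # Calibrators assigned 5.0% and 9.0% glyHb read 5.6% and 9.8% locally:
    print(recalibrate(7.0, (5.6, 5.0), (9.8, 9.0)))  # ~6.3% after calibration
    ```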

  18. Input variable selection and calibration data selection for storm water quality regression models.

    PubMed

    Sun, Siao; Bertrand-Krajewski, Jean-Luc

    2013-01-01

    Storm water quality models are useful tools in storm water management. Interest has been growing in analyzing existing data for developing models for urban storm water quality evaluations. It is important to select appropriate model inputs when many candidate explanatory variables are available, and model calibration and verification are essential steps in any storm water quality modeling. This study investigates input variable selection and calibration data selection in storm water quality regression models. The two selection problems interact with each other, and a procedure is developed to perform the two selection tasks in sequence. The procedure first selects model input variables using a cross-validation method. An appropriate number of variables is identified as model inputs to ensure that a model is neither overfitted nor underfitted. Based on the input selection results, calibration data selection is then studied. The uncertainty of model performance due to calibration data selection is investigated with a random selection method, and a cluster-based approach is applied to enhance model calibration practice, following the principle of selecting representative data for calibration. The comparison between results from the cluster selection method and random selection shows that the former can significantly improve the performance of calibrated models. It is found that the information content of the calibration data is important in addition to its size.
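
    The input-selection step can be sketched as forward selection scored by cross-validation on a generic linear regression; the storm-water variables and the cluster-based calibration-data selection are not reproduced, and scikit-learn is an assumed tool choice.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    def forward_select(X, y, names, max_vars=5):
        """Greedily add inputs while the 5-fold CV score keeps improving."""
        chosen, remaining, best = [], list(range(X.shape[1])), -np.inf
        while remaining and len(chosen) < max_vars:
            score, j = max(
                (cross_val_score(LinearRegression(), X[:, chosen + [j]], y,
                                 cv=5).mean(), j)
                for j in remaining)
            if score <= best:        # more inputs no longer help: stop
                break
            best = score
            chosen.append(j)
            remaining.remove(j)
        return [names[j] for j in chosen], best
    ```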

  19. Evaluation of the AMSR-E Data Calibration Over Land

    NASA Technical Reports Server (NTRS)

    Njoku, E.; Chan, T.; Crosson, W.; Limaye, A.

    2004-01-01

    Land observations by the Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E), particularly of soil and vegetation moisture changes, have numerous applications in hydrology, ecology and climate. Quantitative retrieval of soil and vegetation parameters relies on accurate calibration of the brightness temperature measurements. Analyses of the spectral and polarization characteristics of early versions of the AMSR-E data revealed significant calibration biases over land at 6.9 GHz. The biases were estimated and removed in the current archived version of the data. Radio-frequency interference (RFI) observed at 6.9 GHz is more difficult to quantify, however. A calibration analysis of AMSR-E data over land is presented in this paper for a complete annual cycle from June 2002 through September 2003. The analysis indicates the generally high quality of the data for land applications (except for RFI) and illustrates seasonal trends of the data for different land surface types and regions.

  20. A Summary of Numerous Strain-Gage Load Calibrations on Aircraft Wings and Tails in a Technological Format

    NASA Technical Reports Server (NTRS)

    Jenkins, Jerald M.; DeAngelis, V. Michael

    1997-01-01

    Fifteen aircraft structures that were calibrated for flight loads using strain gages are examined. The primary purpose of this paper is to document important examples of load calibrations on airplanes during the past four decades. The emphasis is placed on studying the physical procedures of calibrating strain-gaged structures and all the supporting analyses and computational techniques that have been used. The results and experiences obtained from actual data from 14 structures (on 13 airplanes and 1 laboratory test structure) are presented. This group of structures includes fins, tails, and wings with a wide variety of aspect ratios. Straight-wing, swept-wing, and delta-wing configurations are studied. Some of the structures have skin-dominant construction; others are spar-dominant. Anisotropic materials, heat shields, corrugated components, nonorthogonal primary structures, and truss-type structures are particular characteristics that are included.

  1. Uncertainty quantification in capacitive RF MEMS switches

    NASA Astrophysics Data System (ADS)

    Pax, Benjamin J.

    Development of radio frequency microelectromechanical systems (RF MEMS) has led to novel approaches to implementing electrical circuitry. The introduction of capacitive MEMS switches, in particular, has shown promise for low-loss, low-power devices. However, the promise of MEMS switches has not yet been completely realized: RF-MEMS switches are known to fail after only a few months of operation, and nominally similar designs show wide variability in lifetime. Modeling switch operation using nominal or as-designed parameters cannot predict the statistical spread in the number of cycles to failure, and probabilistic methods are necessary. A Bayesian framework for calibration, validation and prediction offers an integrated approach to quantifying the uncertainty in predictions of MEMS switch performance. The objective of this thesis is to use the Bayesian framework to predict the creep-related deflection of the PRISM RF-MEMS switch over several thousand hours of operation. The PRISM switch used in this thesis is the focus of research at Purdue's PRISM center and is a capacitive contacting RF-MEMS switch. It employs a fixed-fixed nickel membrane which is electrostatically actuated by applying a voltage between the membrane and a pull-down electrode. Creep plays a central role in the reliability of this switch. The focus of this thesis is on the creep model, which is calibrated against experimental data measured for a frog-leg varactor fabricated and characterized at Purdue University. Creep plasticity is modeled using plate element theory, with the electrostatic forces generated either from parallel-plate approximations where appropriate or by solving for the full 3D potential field; for the latter, the structure-electrostatics interaction is determined through an immersed boundary method. A probabilistic framework using generalized polynomial chaos (gPC) is used to create surrogate models that mitigate the cost of the full physics simulations, and Bayesian calibration and forward propagation of uncertainty are performed using these surrogate models. The first step in the analysis is Bayesian calibration of the creep-related parameters. A computational model of the frog-leg varactor is created, and the computed creep deflection of the device over 800 hours is used to generate a surrogate model using a polynomial chaos expansion in Hermite polynomials. Parameters related to the creep phenomenon are calibrated using Bayesian calibration with experimental deflection data from the frog-leg device. The calibrated input distributions are subsequently propagated through a surrogate gPC model of the PRISM MEMS switch to produce probability density functions of the maximum deflection of the membrane over several thousand hours. The assumptions underlying the Bayesian calibration and forward propagation are analyzed to determine the sensitivity of the calibrated input distributions and propagated output distributions of the PRISM device to these assumptions. This work is an early step in understanding the role of geometric variability, model uncertainty, numerical errors and experimental uncertainties in the long-term performance of RF-MEMS.
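
    The surrogate-modeling idea can be illustrated in one dimension: project model outputs onto probabilists' Hermite polynomials of a standard normal input and evaluate the cheap expansion in place of the full physics model. This is a generic gPC sketch, not the PRISM creep model; the quadrature order and truncation are arbitrary choices.

    ```python
    from math import factorial

    import numpy as np
    from numpy.polynomial.hermite_e import hermegauss, hermeval

    def pce_surrogate(model, order=4, n_quad=20):
        """Return a polynomial chaos surrogate of model(xi), xi ~ N(0, 1)."""
        nodes, weights = hermegauss(n_quad)
        weights = weights / weights.sum()     # normalize to E[.] under N(0, 1)
        y = np.array([model(x) for x in nodes])
        coeffs = []
        for k in range(order + 1):
            He_k = hermeval(nodes, [0.0] * k + [1.0])   # He_k at the nodes
            # Projection: c_k = E[y He_k] / E[He_k^2], with E[He_k^2] = k!
            coeffs.append(np.sum(weights * y * He_k) / factorial(k))
        return lambda xi: hermeval(xi, coeffs)
    ```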

  2. Model calibration for ice sheets and glaciers dynamics: a general theory of inverse problems in glaciology

    NASA Astrophysics Data System (ADS)

    Giudici, Mauro; Baratelli, Fulvia; Vassena, Chiara; Cattaneo, Laura

    2014-05-01

    Numerical modelling of the dynamic evolution of ice sheets and glaciers requires the solution of discrete equations which are based on physical principles (e.g. conservation of mass, linear momentum and energy) and phenomenological constitutive laws (e.g. Glen's and Fourier's laws). These equations must be accompanied by information on the forcing terms and by initial and boundary conditions (IBC) on ice velocity, stress and temperature; on the other hand, the constitutive laws involve many physical parameters, which possibly depend on the ice thermodynamic state. The proper forecast of the dynamics of ice sheets and glaciers (the forward problem, FP) requires precise knowledge of several quantities which appear in the IBCs, in the forcing terms and in the phenomenological laws and which cannot easily be measured at the study scale in the field. These quantities can therefore be obtained through model calibration, i.e. by the solution of an inverse problem (IP). Roughly speaking, the IP aims at finding the optimal values of the model parameters, i.e. those that yield the best agreement of the model output with the field observations and data. The practical application of IPs is usually formulated as a generalised least squares approach, which can be cast in the framework of Bayesian inference. IPs are well developed in several areas of science and geophysics, and several applications have also been proposed in glaciology. The objective of this paper is to provide a further step towards a thorough and rigorous theoretical framework for cryospheric studies. Although the IP is often claimed to be ill-posed, this is rigorously true for continuous-domain models, whereas for numerical models, which require the solution of algebraic equations, the properties of the IP must be analysed with more care. First of all, it is necessary to clarify the role of experimental and monitoring data in determining the calibration targets and the values of the parameters that can be considered fixed, whereas only the model output should depend on the subset of parameters that are identified with the calibration procedure, i.e. the solution to the IP. It is in fact difficult to guarantee the existence and uniqueness of a solution to the IP for complex non-linear models. Identifiability, a property related to the solution to the FP, and resolution should also be carefully considered. Moreover, instability of the IP should not be confused with ill-conditioning or with the properties of the method applied to compute a solution. Finally, sensitivity analysis is of paramount importance for assessing the reliability of the estimated parameters and of the model output, but it is often based on a one-at-a-time approach, applying the adjoint-state method to compute local sensitivity, i.e. the uncertainty in the model output due to small variations of the input parameters; first-order approaches that consider the whole possible variability of the model parameters should also be considered. This theoretical framework and the relevant properties are illustrated by means of a simple numerical example of isothermal ice flow, based on the shallow ice approximation.
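
    As a concrete illustration of the generalised least squares formulation, the sketch below calibrates a generic forward model F(m) against data d with known error covariance using a Gauss-Newton iteration and a finite-difference Jacobian. Everything here (the model, step control, names) is an illustrative placeholder, not the paper's ice-flow example.

    ```python
    import numpy as np

    def gauss_newton(F, m0, d, C_inv, n_iter=10, eps=1e-6):
        """Minimize (d - F(m))^T C_inv (d - F(m)) over parameters m."""
        m = np.asarray(m0, dtype=float)
        for _ in range(n_iter):
            r = d - F(m)
            # Finite-difference Jacobian of the forward model
            J = np.column_stack([(F(m + eps * e) - F(m)) / eps
                                 for e in np.eye(len(m))])
            m = m + np.linalg.solve(J.T @ C_inv @ J, J.T @ C_inv @ r)
        return m
    ```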

  3. An Improved Fast Self-Calibration Method for Hybrid Inertial Navigation System under Stationary Condition

    PubMed Central

    Liu, Bingqi; Wei, Shihui; Su, Guohua; Wang, Jiping; Lu, Jiazhen

    2018-01-01

    The navigation accuracy of an inertial navigation system (INS) can be greatly improved when inertial measurement unit (IMU) errors, such as gyro drifts and accelerometer biases, are effectively calibrated and compensated. To reduce the requirement for turntable precision in the classical calibration method, a continuous dynamic self-calibration method based on a three-axis rotating frame is presented for the hybrid inertial navigation system. First, by selecting a suitable IMU frame, the error models of the accelerometers and gyros are established. Then, taking the navigation errors during rolling as the observations, all twenty-one error parameters of the hybrid inertial navigation system (HINS) are identified based on the calculation of an intermediate parameter. Experiments verify that the method can identify all error parameters of the HINS and that it has accuracy equivalent to classical calibration on a high-precision turntable. In addition, the method is rapid, simple and feasible. PMID:29695041

  4. An Improved Fast Self-Calibration Method for Hybrid Inertial Navigation System under Stationary Condition.

    PubMed

    Liu, Bingqi; Wei, Shihui; Su, Guohua; Wang, Jiping; Lu, Jiazhen

    2018-04-24

    The navigation accuracy of an inertial navigation system (INS) can be greatly improved when inertial measurement unit (IMU) errors, such as gyro drifts and accelerometer biases, are effectively calibrated and compensated. To reduce the requirement for turntable precision in the classical calibration method, a continuous dynamic self-calibration method based on a three-axis rotating frame is presented for the hybrid inertial navigation system. First, by selecting a suitable IMU frame, the error models of the accelerometers and gyros are established. Then, taking the navigation errors during rolling as the observations, all twenty-one error parameters of the hybrid inertial navigation system (HINS) are identified based on the calculation of an intermediate parameter. Experiments verify that the method can identify all error parameters of the HINS and that it has accuracy equivalent to classical calibration on a high-precision turntable. In addition, the method is rapid, simple and feasible.

  5. A method for soil moisture probes calibration and validation of satellite estimates.

    PubMed

    Holzman, Mauro; Rivas, Raúl; Carmona, Facundo; Niclòs, Raquel

    2017-01-01

    Optimization of field techniques is crucial to ensure high quality soil moisture data. The aim of this work is to present a sampling method for undisturbed soil and soil water content used to calibrate soil moisture probes, in the context of validating the SMOS (Soil Moisture and Ocean Salinity) mission MIRAS Level 2 soil moisture product in the Pampean Region of Argentina. The method avoids soil alteration and is recommended for calibrating probes by soil type under a free drying process at ambient temperature. A detailed explanation of the field and laboratory procedures used to obtain reference soil moisture is given. The calibration results reflected accurate operation of the Delta-T ThetaProbe ML2x probes in most of the analyzed cases (RMSE and bias ≤ 0.05 m³/m³). Post-calibration results indicated that the accuracy improves significantly when the soil-type-based calibration adjustments are applied (RMSE ≤ 0.022 m³/m³, bias ≤ -0.010 m³/m³).
    • A sampling method that provides high quality data on soil water content for probe calibration is described.
    • Calibration based on soil type is important.
    • A single calibration for similar soil types could be suitable in practical terms, depending on the required accuracy level.

  6. Simplified stereo-optical ultrasound plane calibration

    NASA Astrophysics Data System (ADS)

    Hoßbach, Martin; Noll, Matthias; Wesarg, Stefan

    2013-03-01

    Image guided therapy is a natural concept and commonly used in medicine. In anesthesia, a common task is the injection of an anesthetic close to a nerve under freehand ultrasound guidance. Several guidance systems exist that use electromagnetic tracking of the ultrasound probe as well as the needle, providing the physician with a precise projection of the needle into the ultrasound image. This, however, requires additional expensive devices. We suggest using optical tracking with miniature cameras attached to a 2D ultrasound probe to achieve a higher acceptance among physicians. The purpose of this paper is to present an intuitive method to calibrate freehand ultrasound needle guidance systems employing a rigid stereo camera system. State of the art methods are based on a complex series of error prone coordinate system transformations, which makes them susceptible to error accumulation. By reducing the calibration to a single procedure we provide a calibration method that is equivalent, yet not prone to error accumulation. It requires a linear calibration object and is validated on three datasets utilizing different calibration objects: a 6 mm metal bar and a 1.25 mm biopsy needle were used for the experiments. Compared to existing calibration methods for freehand ultrasound needle guidance systems, we are able to achieve higher accuracy while additionally reducing the overall calibration complexity.

  7. Correlation Characterization of Particles in Volume Based on Peak-to-Basement Ratio

    PubMed Central

    Vovk, Tatiana A.; Petrov, Nikolay V.

    2017-01-01

    We propose a new express method for the correlation characterization of particles suspended in the volume of an optically transparent medium. It utilizes the inline digital holography technique to obtain two images of adjacent layers from the investigated volume, with subsequent matching of the peak-to-basement ratio of the cross-correlation function calculated for these images. After preliminary calibration via numerical simulation, the proposed method allows one to quickly determine parameters of the particle distribution and evaluate their concentration. Experimental verification was carried out for two types of physical suspensions. Our method can be applied in environmental and biological research, including analysis tools in flow cytometry devices, express characterization of particles and biological cells in air and water media, and various technical tasks, e.g., the study of scattering objects or rapid determination of cutting tool conditions in mechanisms. PMID:28252020
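
    The coherence metric named in the title reduces to a simple computation: cross-correlate the two reconstructed layer images and compare the correlation peak to its background ("basement") level. The sketch below uses a median as the background estimate; the calibration curves obtained from the numerical simulation are not included, and all names are illustrative.

    ```python
    import numpy as np
    from scipy.signal import correlate2d  # fine for small images; illustrative

    def peak_to_basement(img1, img2):
        """Peak-to-basement ratio of the cross-correlation of two images."""
        xc = correlate2d(img1 - img1.mean(), img2 - img2.mean(), mode="same")
        peak = np.abs(xc).max()
        basement = np.median(np.abs(xc))   # robust background level
        return peak / basement
    ```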

  8. Multichannel emission spectrometer for high dynamic range optical pyrometry of shock-driven materials

    NASA Astrophysics Data System (ADS)

    Bassett, Will P.; Dlott, Dana D.

    2016-10-01

    An emission spectrometer (450-850 nm) using a high-throughput, high numerical aperture (N.A. = 0.3) prism spectrograph with stepped fiberoptic coupling, 32 fast photomultipliers and thirty-two 1.25 GHz digitizers is described. The spectrometer can capture single-shot events with a high dynamic range in amplitude and time (nanoseconds to milliseconds or longer). Methods to calibrate the spectrometer and verify its performance and accuracy are described. When a reference thermal source is used for calibration, the spectrometer can function as a fast optical pyrometer. Applications of the spectrometer are illustrated by using it to capture single-shot emission transients from energetic materials or reactive materials initiated by km·s-1 impacts with laser-driven flyer plates. A log (time) data analysis method is used to visualize multiple kinetic processes resulting from impact initiation of HMX (octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine) or a Zr/CuO nanolaminate thermite. Using a gray body algorithm to interpret the spectral radiance from shocked HMX, a time history of temperature and emissivity was obtained, which could be used to investigate HMX hot spot dynamics. Finally, two examples are presented showing how the spectrometer can avoid temperature determination errors in systems where thermal emission is accompanied by atomic or molecular emission lines.

  9. Multichannel emission spectrometer for high dynamic range optical pyrometry of shock-driven materials.

    PubMed

    Bassett, Will P; Dlott, Dana D

    2016-10-01

    An emission spectrometer (450-850 nm) using a high-throughput, high numerical aperture (N.A. = 0.3) prism spectrograph with stepped fiberoptic coupling, 32 fast photomultipliers and thirty-two 1.25 GHz digitizers is described. The spectrometer can capture single-shot events with a high dynamic range in amplitude and time (nanoseconds to milliseconds or longer). Methods to calibrate the spectrometer and verify its performance and accuracy are described. When a reference thermal source is used for calibration, the spectrometer can function as a fast optical pyrometer. Applications of the spectrometer are illustrated by using it to capture single-shot emission transients from energetic materials or reactive materials initiated by km·s-1 impacts with laser-driven flyer plates. A log (time) data analysis method is used to visualize multiple kinetic processes resulting from impact initiation of HMX (octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine) or a Zr/CuO nanolaminate thermite. Using a gray body algorithm to interpret the spectral radiance from shocked HMX, a time history of temperature and emissivity was obtained, which could be used to investigate HMX hot spot dynamics. Finally, two examples are presented showing how the spectrometer can avoid temperature determination errors in systems where thermal emission is accompanied by atomic or molecular emission lines.
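
    The gray-body step amounts to a two-parameter fit of the measured spectral radiance to emissivity times Planck's law at each time slice. A minimal sketch, with absolute-radiance calibration and the 32-channel instrument details omitted:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

    def gray_body(lam, T, eps):
        """Emissivity times Planck spectral radiance; lam in meters."""
        return eps * 2 * H * C**2 / lam**5 / np.expm1(H * C / (lam * KB * T))

    def fit_temperature(lam, radiance):
        """Return (temperature in K, emissivity) for one time slice."""
        (T, eps), _ = curve_fit(gray_body, lam, radiance, p0=(3000.0, 0.5))
        return T, eps
    ```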

  10. Simultaneous multi-headed imager geometry calibration method

    DOEpatents

    Tran, Vi-Hoa [Newport News, VA; Meikle, Steven Richard [Penshurst, AU; Smith, Mark Frederick [Yorktown, VA

    2008-02-19

    A method for calibrating multi-headed high sensitivity and high spatial resolution dynamic imaging systems, especially those useful in the acquisition of tomographic images of small animals. The method of the present invention comprises: simultaneously calibrating two or more detectors to the same coordinate system; and functionally correcting for unwanted detector movement due to gantry flexing.

  11. Model independent approach to the single photoelectron calibration of photomultiplier tubes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saldanha, R.; Grandi, L.; Guardincerri, Y.

    2017-08-01

    The accurate calibration of photomultiplier tubes is critical in a wide variety of applications in which it is necessary to know the absolute number of detected photons or precisely determine the resolution of the signal. Conventional calibration methods rely on fitting the photomultiplier response to a low intensity light source with analytical approximations to the single photoelectron distribution, often leading to biased estimates due to the inability to accurately model the full distribution, especially at low charge values. In this paper we present a simple statistical method to extract the relevant single photoelectron calibration parameters without making any assumptions about the underlying single photoelectron distribution. We illustrate the use of this method through the calibration of a Hamamatsu R11410 photomultiplier tube and study the accuracy and precision of the method using Monte Carlo simulations. The method is found to have significantly reduced bias compared to conventional methods and works under a wide range of light intensities, making it suitable for simultaneously calibrating large arrays of photomultiplier tubes.
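
    One way to see how a model-independent estimate can work is through moments: for a Poisson number of photoelectrons per trigger with known mean occupancy, the first two moments of the illuminated and dark charge distributions determine the mean and variance of the single photoelectron response with no assumption about its shape. This is an illustration of the statistics, not necessarily the paper's exact estimator; the occupancy is assumed measured separately.

    ```python
    import numpy as np

    def spe_moments(q_on, q_off, lam):
        """q_on, q_off: per-trigger charges (LED on/off); lam: mean PE count."""
        gain = (q_on.mean() - q_off.mean()) / lam
        # Compound Poisson: Var_on = Var_off + lam * (var_spe + gain**2)
        var_spe = (q_on.var() - q_off.var()) / lam - gain**2
        return gain, var_spe
    ```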

  12. ASTER preflight and inflight calibration and the validation of level 2 products

    USGS Publications Warehouse

    Thome, K.; Aral, K.; Hook, S.; Kieffer, H.; Lang, H.; Matsunaga, T.; Ono, A.; Palluconi, F. D.; Sakuma, H.; Slater, P.; Takashima, T.; Tonooka, H.; Tsuchida, S.; Welch, R.M.; Zalewski, E.

    1998-01-01

    This paper describes the preflight and inflight calibration approaches used for the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER). The system is a multispectral, high-spatial-resolution sensor on the Earth Observing System's (EOS) AM-1 platform. Preflight calibration of ASTER uses well-characterized sources to provide calibration, and preflight round-robin exercises to understand biases between the calibration sources of ASTER and other EOS sensors. These round-robins rely on well-characterized, ultra-stable radiometers. An experiment held in Yokohama, Japan, showed that the output from the source used for the visible and near-infrared (VNIR) subsystem of ASTER may be underestimated by 1.5%, but this is still within the 4% specification for the absolute radiometric calibration of these bands. Inflight calibration will rely on vicarious techniques and onboard blackbodies and lamps. Vicarious techniques include ground-reference methods using desert and water sites. A recent joint field campaign gives confidence that these methods currently provide absolute calibration to better than 5%, and indications are that uncertainties less than the required 4% should be achievable at launch. The EOS AM-1 platform will also provide a spacecraft maneuver that will allow ASTER to see the moon, allowing further characterization of the sensor. A method for combining these independent calibration results is presented. The paper also describes the plans for validating the Level 2 data products from ASTER. These plans rely heavily upon field campaigns using methods similar to those used for ground-reference vicarious calibration. © 1998 IEEE.

  13. Apparatus for in-situ calibration of instruments that measure fluid depth

    DOEpatents

    Campbell, Melvin D.

    1994-01-01

    The present invention provides a method and apparatus for in-situ calibration of distance measuring equipment. The method comprises obtaining a first distance measurement in a first location, then obtaining at least one other distance measurement in at least one other location at a precisely known distance from the first location, and calculating a calibration constant. The method is applied specifically to calculating a calibration constant for obtaining fluid level, and is embodied in an apparatus using a pressure transducer and a spacer of precisely known length. The calibration constant is used to calculate the depth of a fluid from subsequent single pressure measurements at any submerged position.
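
    Numerically, the idea reduces to two readings separated by the spacer: their difference divided by the spacer length gives the transducer's pressure-per-depth constant, which then converts any subsequent reading to a depth. All numbers below are made up for illustration.

    ```python
    def depth_constant(p1, p2, spacer_length):
        """Pressure readings at two positions a known distance apart."""
        return abs(p2 - p1) / spacer_length     # e.g., kPa per meter of fluid

    def depth(p, p_surface, k):
        """Depth below the fluid surface from a single pressure reading."""
        return (p - p_surface) / k

    k = depth_constant(117.8, 127.6, 1.0)       # 1.0 m spacer, made-up kPa
    print(depth(146.2, 98.4, k))                # ~4.9 m below the surface
    ```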

  14. Experimental calibration and validation of sewer/surface flow exchange equations in steady and unsteady flow conditions

    NASA Astrophysics Data System (ADS)

    Rubinato, Matteo; Martins, Ricardo; Kesserwani, Georges; Leandro, Jorge; Djordjević, Slobodan; Shucksmith, James

    2017-09-01

    The linkage between sewer pipe flow and floodplain flow is recognised to induce an important source of uncertainty within two-dimensional (2D) urban flood models. This uncertainty is often attributed to the use of empirical hydraulic formulae (the one-dimensional (1D) weir and orifice steady flow equations) to achieve data-connectivity at the linking interface, which require the determination of discharge coefficients. Because of the paucity of high resolution localised data for this type of flow, the current understanding and quantification of a suitable range for those discharge coefficients is somewhat lacking. To fill this gap, this work presents results acquired from an instrumented physical model designed to study the interaction between a pipe network flow and a floodplain flow. The full range of sewer-to-surface and surface-to-sewer flow conditions at the exchange zone is experimentally analysed in both steady and unsteady flow regimes. Steady state measured discharges are first analysed considering the relationship between the energy heads of the sewer flow and the floodplain flow; these results show that existing weir and orifice formulae are valid for describing the flow exchange in the present physical model, and yield new calibrated discharge coefficients for each of the flow conditions. The measured exchange discharges are also integrated (as a source term) within a 2D numerical flood model (a finite volume solver for the 2D Shallow Water Equations (SWE)), which is shown to reproduce the observed coefficients. This calibrated numerical model is then used to simulate a series of unsteady flow tests reproduced within the experimental facility. The results show that the numerical model overestimated the mean surcharge flow rate, which suggests the occurrence of additional head losses in unsteady conditions that are not currently accounted for within flood models calibrated under steady flow conditions.
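
    The 1D exchange formulae being calibrated can be sketched as follows: free weir flow over the opening while the sewer head is below the crest, and orifice flow once the opening is surcharged. The coefficient values are placeholders to be calibrated against data, as in the study; the geometry and names are illustrative.

    ```python
    import numpy as np

    G = 9.81  # gravitational acceleration, m/s^2

    def exchange_discharge(h_surf, h_sewer, crest, width, area,
                           c_weir=0.6, c_orif=0.6):
        """Positive = surface-to-sewer; heads in m above a common datum."""
        if h_sewer < crest:                     # free weir over the opening
            head = max(h_surf - crest, 0.0)
            return c_weir * width * np.sqrt(2.0 * G) * head**1.5
        dh = h_surf - h_sewer                   # submerged: orifice exchange
        return np.sign(dh) * c_orif * area * np.sqrt(2.0 * G * abs(dh))
    ```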

  15. A study of short test and charge retention test methods for nickel-cadmium spacecraft cells

    NASA Technical Reports Server (NTRS)

    Scott, W. R.

    1975-01-01

    Methods for testing nickel-cadmium cells for internal shorts and charge retention were studied. Included were (a) open circuit voltage decay after a brief charge, (b) open circuit voltage recovery after shorting, and (c) open circuit voltage decay and capacity loss after a full charge. The investigation included consideration of the effects of prior history, of conditioning cells prior to testing, and of various test method variables on the results of the tests. Sensitivity of the tests was calibrated in terms of equivalent external resistance. The results were correlated. It was shown that a large number of variables may affect the results of these tests. It is concluded that the voltage decay after a brief charge and the voltage recovery methods are more sensitive than the charged stand method, and can detect an internal short equivalent to a resistance of about (10,000/C) ohms, where "C" is the numerical value of the capacity of the cell in ampere-hours.

  16. Adaptive Prior Variance Calibration in the Bayesian Continual Reassessment Method

    PubMed Central

    Zhang, Jin; Braun, Thomas M.; Taylor, Jeremy M.G.

    2012-01-01

    Use of the Continual Reassessment Method (CRM) and other model-based approaches to design in Phase I clinical trials has increased due to the ability of the CRM to identify the maximum tolerated dose (MTD) better than the 3+3 method. However, the CRM can be sensitive to the variance selected for the prior distribution of the model parameter, especially when a small number of patients are enrolled. While methods have emerged to adaptively select skeletons and to calibrate the prior variance only at the beginning of a trial, there has not been any approach developed to adaptively calibrate the prior variance throughout a trial. We propose three systematic approaches to adaptively calibrate the prior variance during a trial and compare them via simulation to methods proposed to calibrate the variance at the beginning of a trial. PMID:22987660

  17. The Use of Color Sensors for Spectrographic Calibration

    NASA Astrophysics Data System (ADS)

    Thomas, Neil B.

    2018-04-01

    The wavelength calibration of spectrographs is an essential but challenging task in many disciplines. Calibration is traditionally accomplished by imaging the spectrum of a light source containing features that are known to appear at certain wavelengths and mapping them to their location on the sensor. This is typically required in conjunction with each scientific observation to account for mechanical and optical variations of the instrument over time, which may span years for certain projects. The method presented here investigates the usage of color itself instead of spectral features to calibrate a spectrograph. The primary advantage of such a calibration is that any broad-spectrum light source such as the sky or an incandescent bulb is suitable. This method allows for calibration using the full optical pathway of the instrument instead of incorporating separate calibration equipment that may introduce errors. This paper focuses on the potential for color calibration in the field of radial velocity astronomy, in which instruments must be finely calibrated for long periods of time to detect tiny Doppler wavelength shifts. This method is not restricted to radial velocity, however, and may find application in any field requiring calibrated spectrometers such as sea water analysis, cellular biology, chemistry, atmospheric studies, and so on. This paper demonstrates that color sensors have the potential to provide calibration with greatly reduced complexity.

  18. On-orbit characterization of hyperspectral imagers

    NASA Astrophysics Data System (ADS)

    McCorkel, Joel

    The Remote Sensing Group (RSG) at the University of Arizona has a long history of using ground-based test sites for the calibration of airborne and satellite-based sensors. Ground-truth measurements at these test sites are not always successful due to weather and funding availability. Therefore, RSG has also employed automated ground instrument approaches and cross-calibration methods to verify the radiometric calibration of a sensor. The goal of the cross-calibration method is to transfer the calibration of a well-known sensor to a different sensor. This dissertation presents a method for determining the radiometric calibration of a hyperspectral imager using multispectral imagery. The work relies on a multispectral sensor, the Moderate-resolution Imaging Spectroradiometer (MODIS), as a reference for the hyperspectral sensor Hyperion. The test sites used for comparisons are Railroad Valley in Nevada and a portion of the Libyan Desert in North Africa. A method to predict hyperspectral surface reflectance using a combination of MODIS data and spectral shape information is developed and applied to the characterization of Hyperion. The spectral shape information is based on RSG's historical in situ data for the Railroad Valley test site and spectral library data for the Libyan test site. Average atmospheric parameters, also based on historical measurements, are used in reflectance prediction and transfer to space. Results of several cross-calibration scenarios that differ in image acquisition coincidence, test site, and reference sensor are found for the characterization of Hyperion. These are compared with results from the reflectance-based approach of vicarious calibration, a well-documented method developed by the RSG that serves as a baseline for calibration performance for the cross-calibration method developed here. Cross-calibration provides results that are within 2% of the reflectance-based results in most spectral regions. Larger disagreements exist for the shorter wavelengths studied in this work, as well as in spectral regions that experience absorption by the atmosphere.

  19. The research on calibration methods of dual-CCD laser three-dimensional human face scanning system

    NASA Astrophysics Data System (ADS)

    Wang, Jinjiang; Chang, Tianyu; Ge, Baozhen; Tian, Qingguo; Yang, Fengting; Shi, Shendong

    2013-09-01

    In this paper, considering the performance advantages of the two-step method, we combine the stereo matching of binocular stereo vision with active laser scanning to calibrate the system. First, we select a reference camera coordinate system as the world coordinate system and unify the coordinates of the two CCD cameras. We then obtain the new perspective projection matrix (PPM) of each camera after epipolar rectification. With these, the corresponding epipolar equation of the two cameras can be defined. By utilizing the trigonometric parallax method, we can measure the spatial position of a point after distortion correction and achieve stereo matching calibration between two image points. Experiments verify that this method improves accuracy while guaranteeing system stability. The stereo matching calibration is a simple, low-cost process and simplifies regular maintenance work. It can acquire 3D coordinates with only a planar checkerboard calibration, without the need to design a specific standard target or use an electronic theodolite. It was found during the experiments that two-step calibration error and lens distortion lead to stratification of the point cloud data. The proposed calibration method, combining active line laser scanning and binocular stereo vision, has the advantages of both and offers more flexible applicability. Theoretical analysis and experiments show that the method is reasonable.

  20. Calibration of Axisymmetric and Quasi-1D Solvers for High Enthalpy Nozzles

    NASA Technical Reports Server (NTRS)

    Papadopoulos, P. E.; Gochberg, L. A.; Tokarcik-Polsky, S.; Venkatapathy, E.; Deiwert, G. S.; Edwards, Thomas A. (Technical Monitor)

    1994-01-01

    The proposed paper will present a numerical investigation of the flow characteristics and boundary layer development in the nozzles of high enthalpy shock tunnel facilities used for hypersonic propulsion testing. The computed flow will be validated against existing experimental data; pitot pressure data obtained at the entrance of the test cabin will be used to validate the numerical simulations. It is necessary to accurately model the facility nozzles in order to characterize the test article flow conditions. Initially, the axisymmetric nozzle flow will be computed using a Navier-Stokes solver for a range of reservoir conditions. The calculated solutions will be compared and calibrated against available experimental data from the DLR HEG piston-driven shock tunnel and the 16-inch shock tunnel at NASA Ames Research Center. The Reynolds number at the throat is assumed to be high enough that the boundary layer flow is turbulent from that point downstream. Real gas effects will be examined. In high Mach number facilities the boundary layer is thick, and attempts will be made to correlate the boundary layer displacement thickness. The displacement thickness correlation will be used to calibrate the quasi-1D codes NENZF and LSENS in order to provide fast and efficient tools for characterizing the facility nozzles. The calibrated quasi-1D codes will be used to study the effects of chemistry and of flow condition variations at the test section due to small variations in the driver gas conditions.

  1. A Method of Calibrating Airspeed Installations on Airplanes at Transonic and Supersonic Speeds by the Use of Accelerometer and Attitude-Angle Measurements

    NASA Technical Reports Server (NTRS)

    Zalovick, John A; Lina, Lindsay J; Trant, James P , Jr

    1953-01-01

    A method is described for calibrating airspeed installations on airplanes at transonic and supersonic speeds in vertical-plane maneuvers, making use of measurements of normal and longitudinal accelerations and attitude angle. In this method all the required instrumentation is carried within the airplane. An analytical study of the effects of various sources of error on the accuracy of an airspeed calibration by the accelerometer method indicated that the required measurements can be made accurately enough to ensure a satisfactory calibration.

  2. A variable acceleration calibration system

    NASA Astrophysics Data System (ADS)

    Johnson, Thomas H.

    2011-12-01

    A variable acceleration calibration system that applies loads using gravitational and centripetal acceleration serves as an alternative, efficient and cost-effective method for calibrating internal wind tunnel force balances. Two proof-of-concept variable acceleration calibration systems were designed, fabricated and tested. The NASA UT-36 force balance served as the test balance for the calibration experiments. The variable acceleration calibration systems are shown to be capable of performing three-component calibration experiments with an approximate applied load error on the order of 1% of the full-scale calibration loads. Sources of error are identified using experimental design methods and a propagation of uncertainty analysis. Three types of uncertainty are identified for the systems and are attributed to prediction error, calibration error and pure error. Angular velocity uncertainty is shown to be the largest identified source of prediction error. The calibration uncertainties using a production variable-acceleration-based system are shown to be potentially equivalent to those of current methods. The production quality system can be realized using lighter materials and more precise instrumentation. Further research is needed to account for balance deflection, forcing effects due to vibration, and large tare loads. A gyroscope measurement technique is shown to be capable of resolving the balance deflection angle calculation. Long term research objectives include a demonstration of a six degree of freedom calibration, and a large capacity balance calibration.

  3. Fast calibration of electromagnetically tracked oblique-viewing rigid endoscopes.

    PubMed

    Liu, Xinyang; Rice, Christina E; Shekhar, Raj

    2017-10-01

    The oblique-viewing (i.e., angled) rigid endoscope is a commonly used tool in conventional endoscopic surgeries. The relative rotation between its two movable parts, the telescope and the camera head, creates a rotation offset between an object and its projection in the camera image. A calibration method tailored to compensate for this offset is needed. We developed a fast calibration method for oblique-viewing rigid endoscopes suitable for clinical use. In contrast to prior approaches based on optical tracking, we used electromagnetic (EM) tracking as the external tracking hardware to improve compactness and practicality. Two EM sensors were mounted on the telescope and the camera head, respectively, with considerations to minimize EM tracking errors. Single-image calibration was incorporated into the method, and a sterilizable plate, laser-marked with the calibration pattern, was also developed. Furthermore, we proposed a general algorithm to estimate the rotation center in the camera image, and derived formulas for updating the camera matrix under clockwise and counterclockwise rotations. The proposed calibration method was validated using a conventional [Formula: see text], 5-mm laparoscope. Freehand calibrations were performed using the proposed method, and the calibration time averaged 2 min and 8 s. The calibration accuracy was evaluated in a simulated clinical setting with several surgical tools present in the magnetic field of the EM tracker. The root-mean-square re-projection error averaged 4.9 pixels (range 2.4-8.5 pixels, with image resolution of [Formula: see text]) for rotation angles ranging from [Formula: see text] to [Formula: see text]. We developed a method for fast and accurate calibration of oblique-viewing rigid endoscopes. The method is designed to be performed in the operating room and will therefore support clinical translation of many emerging endoscopic computer-assisted surgical systems.
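
    A camera-matrix update of the kind described can be written as an in-plane rotation of the intrinsics about the estimated rotation center. A minimal sketch of that composition (illustrative only; the paper's exact formulas may differ):

        import numpy as np

        def rotate_camera_matrix(K, theta, center):
            """Compose the 3x3 intrinsic matrix K with an in-plane image rotation
            of angle theta (radians) about a rotation center (cx, cy) in pixels.
            Counterclockwise theta > 0; pass -theta for a clockwise rotation."""
            cx, cy = center
            c, s = np.cos(theta), np.sin(theta)
            # 2D homogeneous rotation about (cx, cy): translate, rotate, translate back
            rot = np.array([
                [c, -s, cx - c * cx + s * cy],
                [s,  c, cy - s * cx - c * cy],
                [0,  0, 1.0],
            ])
            return rot @ K

        K = np.array([[800.0, 0.0, 320.0],
                      [0.0, 800.0, 240.0],
                      [0.0, 0.0, 1.0]])
        K_updated = rotate_camera_matrix(K, np.deg2rad(30), center=(315.0, 250.0))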

  4. Application of partial inversion pulse to ultrasonic time-domain correlation method to measure the flow rate in a pipe

    NASA Astrophysics Data System (ADS)

    Wada, Sanehiro; Furuichi, Noriyuki; Shimada, Takashi

    2017-11-01

    This paper proposes the application of a novel ultrasonic pulse, called a partial inversion pulse (PIP), to the measurement of the velocity profile and flow rate in a pipe using the ultrasound time-domain correlation (UTDC) method. In general, the measured flow rate depends on the velocity profile in the pipe; thus, on-site calibration is the only way to check the accuracy of on-site flow rate measurements. Flow rate calculation using UTDC is based on integrating the measured velocity profile. The advantages of this method over the ultrasonic pulse Doppler method include an effectively unlimited velocity range and applicability to flow fields without a sufficient number of reflectors. However, it has previously been reported that the measurable velocity range for UTDC is limited by false detections, an important issue when applying the method to on-site flow fields. To reduce the effect of false detections, a PIP signal, an ultrasound signal that contains a partially inverted region, was developed in this study. The PIP signal requires little additional hardware cost and no additional software cost in comparison with conventional methods. The effects of the inversion on the characteristics of the ultrasound transmission were estimated through numerical calculation. Experimental measurements were then performed at a national standard calibration facility for water flow rate in Japan. The experimental results demonstrate that measurements made using a PIP signal are more accurate and yield a higher detection ratio than measurements using a normal pulse signal.
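
    For reference, the generic UTDC estimate behind this work pairs two successive echoes, finds their relative time shift by cross-correlation, and converts the shift to a velocity; the PIP modifies the transmitted pulse, not this basic chain. A simplified sketch (function and parameter names are illustrative):

        import numpy as np

        def velocity_from_echo_pair(echo1, echo2, fs, c, prf, angle_deg):
            """Estimate a velocity sample from the arrival-time shift between two
            successive ultrasound echoes via time-domain cross-correlation.
            fs: sampling rate [Hz], c: sound speed [m/s], prf: pulse repetition
            frequency [Hz], angle_deg: angle between beam and flow direction."""
            lags = np.arange(-len(echo1) + 1, len(echo2))
            xcorr = np.correlate(echo2, echo1, mode="full")
            tau = lags[np.argmax(xcorr)] / fs      # echo-to-echo time shift [s]
            # Along-beam displacement is c*tau/2 (pulse-echo); project onto flow axis
            return (c * tau / 2.0) * prf / np.cos(np.deg2rad(angle_deg))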

  5. A New Calibration Method for Commercial RGB-D Sensors.

    PubMed

    Darwish, Walid; Tang, Shenjun; Li, Wenbin; Chen, Wu

    2017-05-24

    Commercial RGB-D sensors such as Kinect and Structure Sensors have been widely used in the game industry, where geometric fidelity is not of utmost importance. For applications in which high-quality 3D data are required, e.g., 3D building models of centimeter-level accuracy, accurate and reliable calibration of these sensors is needed. This paper presents a new model for calibrating the depth measurements of RGB-D sensors based on the structured light concept. Additionally, a new automatic method is proposed for the calibration of all RGB-D parameters, including internal calibration parameters for all cameras, the baseline between the infrared and RGB cameras, and the depth error model. Compared with traditional calibration methods, the new model shows a significant improvement in depth precision for both near and far ranges.

  6. Model Calibration with Censored Data

    DOE PAGES

    Cao, Fang; Ba, Shan; Brenneman, William A.; ...

    2017-06-28

    Here, the purpose of model calibration is to make the model predictions closer to reality. The classical Kennedy-O'Hagan approach is widely used for model calibration; it can account for the inadequacy of the computer model while simultaneously estimating the unknown calibration parameters. In many applications, censoring occurs when the exact outcome of the physical experiment is not observed but is only known to fall within a certain region. In such cases, the Kennedy-O'Hagan approach cannot be used directly, and we propose a method to incorporate the censoring information when performing model calibration. The method is applied to study the compression phenomenon of liquid inside a bottle. The results show significant improvement over the traditional calibration methods, especially when the number of censored observations is large.
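
    A standard way to fold censoring into a calibration likelihood is to replace the density of each censored observation with the probability mass of its censoring region. A generic sketch of that idea (not the authors' Kennedy-O'Hagan implementation):

        import numpy as np
        from scipy.stats import norm

        def log_likelihood(y, censored, mu, sigma):
            """Gaussian log-likelihood in which observations flagged as censored
            are only known to lie below their recorded limit y[i]."""
            ll_exact = norm.logpdf(y[~censored], mu[~censored], sigma)  # exact data
            ll_cens = norm.logcdf(y[censored], mu[censored], sigma)     # P(Y <= limit)
            return ll_exact.sum() + ll_cens.sum()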

  7. Assessment of opacimeter calibration according to International Standard Organization 10155.

    PubMed

    Gomes, J F

    2001-01-01

    This paper compares the calibration method for opacimeters issued by the International Standard Organization (ISO) 10155 with the manual reference method for determination of dust content in stack gases. ISO 10155 requires at least nine operational measurements, corresponding to three operational measurements for each dust emission range within the stack. The procedure is assessed by comparison with previous calibration methods for opacimeters using only two operational measurements, on a set of measurements made at pulp mill stacks. The results show that even though the international standard requires the calibration curve to be obtained using 3 x 3 points, a calibration curve derived using 3 points can at times be acceptable in statistical terms, provided that the amplitude of individual measurements is low.

  8. Wettability of graphitic-carbon and silicon surfaces: MD modeling and theoretical analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramos-Alvarado, Bladimir; Kumar, Satish; Peterson, G. P.

    2015-07-28

    The wettability of graphitic carbon and silicon surfaces was numerically and theoretically investigated. A multi-response method was developed for the analysis of conventional molecular dynamics (MD) simulations of droplet wettability, in which the contact angle and indicators of the quality of the computations are tracked as a function of the data sets analyzed over time. This method of analysis allows accurate calculation of the contact angle obtained from the MD simulations. Analytical models were also developed for the calculation of the work of adhesion using mean-field theory, accounting for interfacial entropy changes. A calibration method is proposed to provide better predictions of the respective contact angles under different solid-liquid interaction potentials. Estimates of the binding energy between a water monomer and graphite match those previously reported. In addition, a breakdown in the relationship between the binding energy and the contact angle was observed. The macroscopic contact angles obtained from the MD simulations were found to match those predicted by the mean-field model for graphite under different wettability conditions, as well as the contact angles of Si(100) and Si(111) surfaces. Finally, an assessment of the effect of the Lennard-Jones cutoff radius was conducted to provide guidelines for future comparisons between numerical simulations and analytical models of wettability.

  9. Patient-specific calibration of cone-beam computed tomography data sets for radiotherapy dose calculations and treatment plan assessment.

    PubMed

    MacFarlane, Michael; Wong, Daniel; Hoover, Douglas A; Wong, Eugene; Johnson, Carol; Battista, Jerry J; Chen, Jeff Z

    2018-03-01

    In this work, we propose a new method of calibrating cone beam computed tomography (CBCT) data sets for radiotherapy dose calculation and plan assessment. The motivation for this patient-specific calibration (PSC) method is to develop an efficient, robust, and accurate CBCT calibration process that is less susceptible to deformable image registration (DIR) errors. Instead of mapping the CT numbers voxel-by-voxel as traditional DIR calibration methods do, the PSC method generates correlation plots between deformably registered planning CT and CBCT voxel values for each image slice. A linear calibration curve specific to each slice is then obtained by least-squares fitting and applied to the CBCT slice's voxel values. This allows each CBCT slice to be corrected using DIR without altering the patient geometry through regional DIR errors. A retrospective study was performed on 15 head-and-neck cancer patients, each having routine CBCTs and a middle-of-treatment re-planning CT (reCT). The original treatment plan was re-calculated on each patient's reCT image set (serving as the gold standard) as well as on the image sets produced by voxel-to-voxel DIR calibration, density overriding, and the new PSC method. The dose accuracy of each calibration method was compared with the reference reCT data set using common dose-volume metrics and 3D gamma analysis. A phantom study was also performed to assess the accuracy of the DIR and PSC CBCT calibration methods against planning CT. Compared with the reCT gold standard, the average dose metric differences were ≤ 1.1% for all three methods (PSC: -0.3%; DIR: -0.7%; density-override: -1.1%). The average gamma pass rates with 3%, 3 mm criteria were also similar among the three techniques (PSC: 95.0%; DIR: 96.1%; density-override: 94.4%). An automated patient-specific calibration method was developed which yielded strong dosimetric agreement with the results obtained using a re-planning CT for head-and-neck patients. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
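
    The slice-wise correlate-and-fit step has a compact expression in code. A minimal sketch of the idea (array names hypothetical; not the authors' implementation):

        import numpy as np

        def patient_specific_calibration(cbct, ct_deformed):
            """Fit one linear CT-number mapping per axial slice by least squares
            and apply it to the CBCT slice, leaving patient geometry untouched."""
            calibrated = np.empty_like(cbct, dtype=float)
            for z in range(cbct.shape[0]):
                x = cbct[z].ravel()            # CBCT voxel values on this slice
                y = ct_deformed[z].ravel()     # deformably registered planning-CT values
                slope, intercept = np.polyfit(x, y, 1)   # least-squares linear fit
                calibrated[z] = slope * cbct[z] + intercept
            return calibrated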

  10. Research on orbit prediction for solar-based calibration proper satellite

    NASA Astrophysics Data System (ADS)

    Chen, Xuan; Qi, Wenwen; Xu, Peng

    2018-03-01

    Utilizing the mathematical model of orbit mechanics, orbit prediction forecasts a space target's orbit at a given time from the orbit at an initial epoch. The radiometric calibration of the proper satellite and the calibration orbit prediction process are introduced briefly. On the basis of research on the calibration space position design method and the radiative transfer model, an orbit prediction method for proper satellite radiometric calibration is proposed to select an appropriate calibration arc for the remote sensor and to predict the orbit information of the proper satellite and the remote sensor. By analyzing the orbit constraints of proper satellite calibration, the GF-1 sun-synchronous orbit is chosen as the proper satellite orbit in order to simulate the visible calibration duration for different satellites to be calibrated. The results of simulation and analysis provide a basis for improving the radiometric calibration accuracy of satellite remote sensors, laying the foundation for high-precision, high-frequency radiometric calibration.

  11. Evaluation of a physically based quasi-linear and a conceptually based nonlinear Muskingum methods

    NASA Astrophysics Data System (ADS)

    Perumal, Muthiah; Tayfur, Gokmen; Rao, C. Madhusudana; Gurarslan, Gurhan

    2017-03-01

    Two variants of the Muskingum flood routing method formulated to account for the nonlinearity of the channel routing process are investigated in this study. The variants are: (1) the three-parameter conceptual Nonlinear Muskingum (NLM) method advocated by Gill in 1978, and (2) the Variable Parameter McCarthy-Muskingum (VPMM) method proposed by Perumal and Price in 2013. The VPMM method does not require the rigorous calibration and validation procedures needed by the NLM method, because its parameters are related to flow and channel characteristics through established hydrodynamic relationships. The parameters of the conceptual nonlinear storage equation used in the NLM method were calibrated using Artificial Intelligence Application (AIA) techniques such as the Genetic Algorithm (GA), Differential Evolution (DE), Particle Swarm Optimization (PSO), and Harmony Search (HS). The calibration was carried out on a set of hypothetical flood events obtained by routing a given inflow hydrograph through a set of 40-km prismatic channel reaches using the Saint-Venant (SV) equations. The calibrated NLM method was then validated on a different set of hypothetical flood hydrographs obtained in the same channel reaches. Both the calibration and validation solutions of the NLM method were compared with the corresponding solutions of the VPMM method using pertinent evaluation measures. The results of the study reveal that the physically based VPMM method accounts for the nonlinear characteristics of flood wave movement better than the conceptually based NLM method, which requires tedious calibration and validation procedures.
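
    For concreteness, the nonlinear storage equation at the heart of the NLM method is S = K[xI + (1-x)O]^m, advanced in time with the continuity equation dS/dt = I - O. A simplified routing sketch (explicit Euler; the calibrated K, x, m would come from the GA/DE/PSO/HS search described above):

        import numpy as np

        def route_nlm(inflow, K, x, m, dt):
            """Route an inflow hydrograph with the three-parameter nonlinear
            Muskingum model: S = K*(x*I + (1-x)*O)**m, dS/dt = I - O."""
            inflow = np.asarray(inflow, dtype=float)
            outflow = np.empty_like(inflow)
            S = K * inflow[0] ** m          # storage for an initial steady state I = O
            for t in range(len(inflow)):
                # Invert the storage relation for outflow, then advance storage in time
                outflow[t] = ((S / K) ** (1.0 / m) - x * inflow[t]) / (1.0 - x)
                if t < len(inflow) - 1:
                    S += dt * (inflow[t] - outflow[t])
            return outflow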

  12. The use of groundwater age as a calibration target

    USGS Publications Warehouse

    Konikow, Leonard F.; Hornberger, G.Z.; Putnam, L.D.; Shapiro, A.M.; Zinn, B.A.

    2008-01-01

    Groundwater age (or residence time), as estimated on the basis of concentrations of one or more environmental tracers, can provide a useful and independent calibration target for groundwater models. However, concentrations of environmental tracers are affected by the complexities and mixing inherent in groundwater flow through heterogeneous media, especially in the presence of pumping wells. An analysis of flow and age distribution in the Madison aquifer in South Dakota, USA, illustrates the additional benefits and difficulties of using age as a calibration target. Alternative numerical approaches to estimating travel time and age with backward particle tracking are assessed, and the resulting estimates are used to refine estimates of effective porosity and to help assess the adequacy and credibility of the flow model.

  13. Computation and analysis for a constrained entropy optimization problem in finance

    NASA Astrophysics Data System (ADS)

    He, Changhong; Coleman, Thomas F.; Li, Yuying

    2008-12-01

    In [T. Coleman, C. He, Y. Li, Calibrating volatility function bounds for an uncertain volatility model, Journal of Computational Finance (2006) (submitted for publication)], an entropy minimization formulation was proposed to calibrate an uncertain volatility option pricing model (UVM) from market bid and ask prices. To avoid potential infeasibility due to numerical error, a quadratic penalty function approach is applied. In this paper, we show that the solution to the quadratic penalty problem can be obtained by minimizing an objective function that can be evaluated via solving a Hamilton-Jacobi-Bellman (HJB) equation. We prove that the implicit finite difference solution of this HJB equation converges to its viscosity solution. In addition, we provide computational examples illustrating the accuracy of the calibration.

  14. Three-dimensional fused deposition modeling of tissue-simulating phantom for biomedical optical imaging

    NASA Astrophysics Data System (ADS)

    Dong, Erbao; Zhao, Zuhua; Wang, Minjie; Xie, Yanjun; Li, Shidi; Shao, Pengfei; Cheng, Liuquan; Xu, Ronald X.

    2015-12-01

    Biomedical optical devices are widely used for clinical detection of various tissue anomalies. However, optical measurements have limited accuracy and traceability, partially owing to the lack of effective calibration methods that simulate actual tissue conditions. To facilitate standardized calibration and performance evaluation of medical optical devices, we develop a three-dimensional fused deposition modeling (FDM) technique for freeform fabrication of tissue-simulating phantoms. The FDM system uses transparent gel wax as the base material, titanium dioxide (TiO2) powder as the scattering ingredient, and graphite powder as the absorption ingredient. The ingredients are preheated, mixed, and deposited at the designated ratios layer-by-layer to simulate tissue structural and optical heterogeneities. By printing sections of a human brain model based on magnetic resonance images, we demonstrate the capability of simulating tissue structural heterogeneities. By measuring the optical properties of multilayered phantoms and comparing them with numerical simulations, we demonstrate the feasibility of simulating tissue optical properties. By creating a rat head phantom with embedded vasculature, we demonstrate the potential for mimicking physiologic processes of a living system.

  15. Development and application of a hillslope hydrologic model

    USGS Publications Warehouse

    Blain, C.A.; Milly, P.C.D.

    1991-01-01

    A vertically integrated two-dimensional lateral flow model of soil moisture has been developed. Derivation of the governing equation is based on a physical interpretation of hillslope processes. The lateral subsurface-flow model permits variability of precipitation and evapotranspiration, and allows arbitrary specification of soil-moisture retention properties. Variable slope, soil thickness, and saturation are all accommodated. The numerical solution method, a Crank-Nicolson, finite-difference, upstream-weighted scheme, is simple and robust. A small catchment in northeastern Kansas is the subject of an application of the lateral subsurface-flow model. Calibration of the model using observed discharge provides estimates of the active porosity (0.1 cm3/cm3) and of the saturated horizontal hydraulic conductivity (40 cm/hr). The latter figure is at least an order of magnitude greater than the vertical hydraulic conductivity associated with the silty clay loam soil matrix. The large value of hydraulic conductivity derived from the calibration is suggestive of macropore-dominated hillslope drainage. The corresponding value of active porosity agrees well with a published average value of the difference between total porosity and field capacity for a silty clay loam. © 1991.

  16. Calibration of Complex Subsurface Reaction Models Using a Surrogate-Model Approach

    EPA Science Inventory

    Application of model assessment techniques to complex subsurface reaction models involves numerous difficulties, including non-trivial model selection, parameter non-uniqueness, and excessive computational burden. To overcome these difficulties, this study introduces SAMM (Simult...

  17. Clusters of Monoisotopic Elements for Calibration in (TOF) Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Kolářová, Lenka; Prokeš, Lubomír; Kučera, Lukáš; Hampl, Aleš; Peña-Méndez, Eladia; Vaňhara, Petr; Havel, Josef

    2017-03-01

    Precise calibration in TOF MS requires suitable and reliable standards, which are not always available for high masses. We evaluated inorganic clusters of the monoisotopic elements gold and phosphorus (Au_n^+/Au_n^- and P_n^+/P_n^-) as an alternative to peptides or proteins for the external and internal calibration of mass spectra in various experimental and instrumental scenarios. Monoisotopic gold or phosphorus clusters can be easily generated in situ from suitable precursors by laser desorption/ionization (LDI) or matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS). Their use offers numerous advantages, including simplicity of preparation, biological inertness, and exact mass determination even at lower mass resolution. We used citrate-stabilized gold nanoparticles to generate gold calibration clusters, and red phosphorus powder to generate phosphorus clusters. Both elements can be added to samples to perform internal calibration up to mass-to-charge (m/z) 10-15,000 without significantly interfering with the analyte. We demonstrated the use of the gold and phosphorus clusters in the MS analysis of complex biological samples, including microbial standards and total extracts of mouse embryonic fibroblasts. We believe that clusters of monoisotopic elements could be used as generally applicable calibrants for complex biological samples.

  18. Transferable Calibration Standard Developed for Quantitative Raman Scattering Diagnostics in High-Pressure Flames

    NASA Technical Reports Server (NTRS)

    Nguyen, Quang-Viet; Kojima, Jun

    2005-01-01

    Researchers from NASA Glenn Research Center's Combustion Branch and the Ohio Aerospace Institute (OAI) have developed a transferable calibration standard for an optical technique called spontaneous Raman scattering (SRS) in high-pressure flames. SRS is perhaps the only technique that provides spatially and temporally resolved, simultaneous multiscalar measurements in turbulent flames. Such measurements are critical for the validation of numerical models of combustion. This study has been a combined experimental and theoretical effort to develop a spectral calibration database for multiscalar diagnostics using SRS in high-pressure flames. In the past, such measurements have relied on a one-of-a-kind experimental setup and a setup-dependent calibration procedure to empirically account for spectral interferences, or crosstalk, among the major species of interest. Such calibration procedures, being non-transferable, are prohibitively expensive to duplicate. A goal of this effort is to provide an SRS calibration database using transferable standards that can be implemented widely by other researchers for both atmospheric-pressure and high-pressure (less than 30 atm) SRS studies. A secondary goal is to provide quantitative multiscalar diagnostics in high-pressure environments to validate computational combustion codes.

  19. Measurements of Supersonic Wing Tip Vortices

    NASA Technical Reports Server (NTRS)

    Smart, Michael K.; Kalkhoran, Iraj M.; Benston, James

    1994-01-01

    An experimental survey of supersonic wing tip vortices has been conducted at Mach 2.5 using small probes positioned 2.25 chords downstream of a semi-span rectangular wing at angles of attack of 5 and 10 degrees. The main objective of the experiments was to determine the Mach number, flow angularity, and total pressure distribution in the core region of supersonic wing tip vortices. A secondary aim was to demonstrate the feasibility of using cone probes calibrated with a numerical flow solver to measure flow characteristics at supersonic speeds. Results showed that the numerically generated calibration curves can be used for 4-hole cone probes, but were not sufficiently accurate for conventional 5-hole probes due to nose bluntness effects. Combining the 4-hole cone probe measurements with independent pitot pressure measurements indicated a significant Mach number and total pressure deficit in the core regions of supersonic wing tip vortices, together with an asymmetric 'Burgers-like' swirl distribution.

  20. On-orbit calibration for star sensors without priori information.

    PubMed

    Zhang, Hao; Niu, Yanxiong; Lu, Jiazhen; Zhang, Chengfen; Yang, Yanqiang

    2017-07-24

    The star sensor is an essential navigation device for a spacecraft, and on-orbit calibration is a prerequisite for its operational performance. However, traditional calibration methods rely on ground information and are invalid without a priori information, and the uncertain on-orbit parameters will eventually degrade the performance of the guidance, navigation, and control system. In this paper, a novel calibration method without a priori information for on-orbit star sensors is proposed. First, a simplified back-propagation neural network is designed for focal length and principal point estimation along with system property evaluation, called coarse calibration. Then the unscented Kalman filter is adopted for the precise calibration of all parameters, including focal length, principal point, and distortion. The proposed method benefits from self-initialization: no attitude or preinstalled sensor parameter is required. Precise star sensor parameter estimation can be achieved without a priori information, which is a significant improvement for on-orbit devices. Simulation and experimental results demonstrate that the calibration is easy to operate with high accuracy and robustness. The proposed method can satisfy the stringent requirements of most star sensors.

  1. A Visual Servoing-Based Method for ProCam Systems Calibration

    PubMed Central

    Berry, Francois; Aider, Omar Ait; Mosnier, Jeremie

    2013-01-01

    Projector-camera systems are currently used in a wide range of applications, such as 3D reconstruction and augmented reality, and can provide accurate measurements, depending on the configuration and calibration. Frequently, the calibration task is divided into two steps: camera calibration followed by projector calibration. The latter still poses certain problems that are not easy to solve, such as the difficulty of obtaining a set of 2D-3D points to compute the projection matrix between the projector and the world. Existing methods are either not sufficiently accurate or not flexible. We propose an easy and automatic method to calibrate such systems that consists of projecting a calibration pattern and superimposing it automatically on a known printed pattern. The projected pattern is provided by a virtual camera observing a virtual pattern in an OpenGL model. The projector displays what the virtual camera visualizes; thus, the projected pattern can be controlled and superimposed on the printed one with the aid of visual servoing. Our experimental results compare favorably with those of other methods considering both usability and accuracy. PMID:24084121

  2. 75 FR 8039 - Announcement of the American Petroleum Institute's Standards Activities

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-23

    ... Provers, 3rd Ed. MPMS Ch. 4.9.3, Methods of Calibration for Displacement and Volumetric Tank Provers, Part 3--Determination of the Volume of Displacement Provers by the Master Meter Method of Calibration, 1st Ed. MPMS Ch. 4.9.4, Methods of Calibration for Displacement and Volumetric Tank Provers, Part 4...

  3. Absolute Radiometric Calibration of Narrow-Swath Imaging Sensors with Reference to Non-Coincident Wide-Swath Sensors

    NASA Technical Reports Server (NTRS)

    McCorkel, Joel; Thome, Kurtis; Lockwood, Ronald

    2012-01-01

    An inter-calibration method is developed to provide absolute radiometric calibration of narrow-swath imaging sensors with reference to non-coincident wide-swath sensors. The method predicts at-sensor radiance using non-coincident imagery from the reference sensor and knowledge of the spectral reflectance of the test site. The imagery of the reference sensor is restricted to acquisitions that provide similar view and solar illumination geometry, to reduce uncertainties due to directional reflectance effects. The spectral reflectance of the test site is found with a simple iterative radiative transfer method using radiance values from a well-understood wide-swath sensor and spectral shape information based on historical ground-based measurements. At-sensor radiance is calculated for the narrow-swath sensor using this spectral reflectance and atmospheric parameters that are also based on historical in situ measurements. Results of the inter-calibration method show agreement at the 2-5 percent level in most spectral regions with the vicarious calibration technique relying on coincident ground-based measurements, referred to as the reflectance-based approach. While the variability of the inter-calibration method based on non-coincident image pairs is significantly larger, its results are consistent with techniques relying on in situ measurements. The method is also insensitive to spectral differences between the sensors because it transfers to surface spectral reflectance prior to predicting at-sensor radiance. The utility of this inter-calibration method is made clear by its flexibility to utilize image pairings with acquisition dates differing by more than 30 days, allowing frequent absolute calibration comparisons between wide- and narrow-swath sensors.

  4. Non-orthogonal tool/flange and robot/world calibration.

    PubMed

    Ernst, Floris; Richter, Lars; Matthäus, Lars; Martens, Volker; Bruder, Ralf; Schlaefer, Alexander; Schweikard, Achim

    2012-12-01

    For many robot-assisted medical applications, it is necessary to accurately compute the relation between the robot's coordinate system and the coordinate system of a localisation or tracking device. Today, this is typically carried out using hand-eye calibration methods like those proposed by Tsai/Lenz or Daniilidis. We present a new method for simultaneous tool/flange and robot/world calibration by estimating a solution to the matrix equation AX = YB, computed using a least-squares approach. Because real robots and localisation devices are afflicted by errors, our approach allows for non-orthogonal matrices, partially compensating for imperfect calibration of the robot or localisation device. We also introduce a method in which full robot/world and partial tool/flange calibration is possible using localisation devices providing fewer than six degrees of freedom (DOFs). The methods are evaluated on simulation data and on real-world measurements from optical and magnetic tracking devices, volumetric ultrasound providing 3-DOF data, and a surface laser scanning device. We compare our methods with two classical approaches: the method by Tsai/Lenz and the method by Daniilidis. In all experiments, the new algorithms outperform the classical methods in terms of translational accuracy by up to 80% and perform similarly in terms of rotational accuracy. Additionally, the methods are shown to be stable: the number of calibration stations used has far less influence on calibration quality than for the classical methods. Our work shows that the new method can be used for estimating the relationship between the robot's and the localisation device's coordinate systems. The new method can also be used for deficient systems providing only 3-DOF data, and it can be employed in real-time scenarios because of its speed. Copyright © 2012 John Wiley & Sons, Ltd.
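
    Because AX = YB is linear in the entries of X and Y once the rotation blocks are allowed to be non-orthogonal, the calibration can be posed as one ordinary least-squares problem. A sketch of that formulation (an illustration of the idea, not the authors' implementation; it omits any orthogonalization and the handling of DOF-deficient devices):

        import numpy as np

        def calibrate_ax_yb(As, Bs):
            """Least-squares solution of A_i @ X = Y @ B_i for the unknown 4x4
            transforms X (tool/flange) and Y (robot/world), with rotation blocks
            deliberately not constrained to be orthogonal."""
            I3 = np.eye(3)
            rows, rhs = [], []
            for A, B in zip(As, Bs):
                Ra, ta = A[:3, :3], A[:3, 3]
                Rb, tb = B[:3, :3], B[:3, 3]
                # Rotation part: Ra @ Rx - Ry @ Rb = 0  (9 homogeneous equations)
                rows.append(np.hstack([np.kron(I3, Ra), np.zeros((9, 3)),
                                       -np.kron(Rb.T, I3), np.zeros((9, 3))]))
                rhs.append(np.zeros(9))
                # Translation part: Ra @ tx - Ry @ tb - ty = -ta  (3 equations)
                rows.append(np.hstack([np.zeros((3, 9)), Ra,
                                       -np.kron(tb[None, :], I3), -I3]))
                rhs.append(-ta)
            u, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)
            X, Y = np.eye(4), np.eye(4)
            X[:3, :3] = u[0:9].reshape(3, 3, order="F");   X[:3, 3] = u[9:12]
            Y[:3, :3] = u[12:21].reshape(3, 3, order="F"); Y[:3, 3] = u[21:24]
            return X, Y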

  5. Regression Analysis and Calibration Recommendations for the Characterization of Balance Temperature Effects

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; Volden, T.

    2018-01-01

    Analysis and use of temperature-dependent wind tunnel strain-gage balance calibration data are discussed in the paper. First, three different methods are presented and compared that may be used to process temperature-dependent strain-gage balance data. The first method uses an extended set of independent variables in order to process the data and predict balance loads. The second method applies an extended load iteration equation during the analysis of balance calibration data. The third method uses temperature-dependent sensitivities for the data analysis. Physical interpretations of the most important temperature-dependent regression model terms are provided that relate temperature compensation imperfections and the temperature-dependent nature of the gage factor to sets of regression model terms. Finally, balance calibration recommendations are listed so that temperature-dependent calibration data can be obtained and successfully processed using the reviewed analysis methods.
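
    The first method described above, an extended set of independent variables, amounts to adding temperature and gage-output/temperature cross terms to the load regression; the cross term captures a temperature-dependent gage factor. A toy sketch with synthetic data (all coefficients invented for illustration, not NASA balance data):

        import numpy as np

        rng = np.random.default_rng(0)
        rG = rng.uniform(-1, 1, 200)          # normalized gage outputs (hypothetical)
        T = rng.uniform(10, 40, 200)          # calibration temperatures [deg C]
        load = 50 * rG + 0.1 * T + 0.4 * rG * T + rng.normal(0, 0.05, 200)

        # Design matrix: intercept, gage output, temperature, and cross term
        X = np.column_stack([np.ones_like(rG), rG, T, rG * T])
        coef, *_ = np.linalg.lstsq(X, load, rcond=None)
        print(coef)   # recovers approximately [0, 50, 0.1, 0.4]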

  6. A Review on Microdialysis Calibration Methods: the Theory and Current Related Efforts.

    PubMed

    Kho, Chun Min; Enche Ab Rahim, Siti Kartini; Ahmad, Zainal Arifin; Abdullah, Norazharuddin Shah

    2017-07-01

    Microdialysis is a sampling technique first introduced in the late 1950s. Although this technique was originally designed to study endogenous compounds in the animal brain, it was later modified for use in other organs. Additionally, microdialysis is not only able to collect unbound compounds from tissue sites; it can also deliver exogenous compounds to a designated area. Due to its versatility, the microdialysis technique is widely employed in a number of areas, including biomedical research. However, for most in vivo studies, the concentration of a substance obtained directly from microdialysis does not accurately describe the concentration of the substance on-site. In order to relate the results collected from microdialysis to the actual in vivo condition, a calibration method is required. To date, various microdialysis calibration methods have been reported, each capable of providing valuable insights into the technique itself and its applications. This paper provides a critical review of the various calibration methods used in microdialysis applications, starting with a detailed description of the microdialysis technique itself. The various calibration methods are reviewed in detail, with examples of related work including clinical efforts, as well as the advantages and disadvantages of each method.

  7. Apparatus for in-situ calibration of instruments that measure fluid depth

    DOEpatents

    Campbell, M.D.

    1994-01-11

    The present invention provides a method and apparatus for in-situ calibration of distance measuring equipment. The method comprises obtaining a first distance measurement in a first location, then obtaining at least one other distance measurement in at least one other location at a precisely known distance from the first location, and calculating a calibration constant. The method is applied specifically to calculating a calibration constant for obtaining fluid level, and is embodied in an apparatus using a pressure transducer and a spacer of precisely known length. The calibration constant is used to calculate the depth of a fluid from subsequent single pressure measurements at any submerged position. 8 figures.
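
    In the fluid-depth embodiment, the calibration constant is simply the pressure difference between the two transducer positions divided by the spacer length; later single readings divide out this constant to give depth. A minimal sketch with hypothetical readings for a water-like fluid:

        def calibrate_depth_sensor(p1, p2, spacer_length):
            """In-situ calibration: two submerged pressure readings taken a precisely
            known vertical distance apart yield a pressure-per-depth constant, so
            later single readings can be converted to fluid depth."""
            k = (p2 - p1) / spacer_length      # calibration constant [Pa per m]
            return lambda p, p_surface: (p - p_surface) / k

        # Hypothetical readings: two points 0.5 m apart, surface pressure 101325 Pa
        to_depth = calibrate_depth_sensor(111135.0, 116040.0, 0.5)
        print(to_depth(121000.0, 101325.0))    # ~2.0 m of water above the sensor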

  8. Effects of calibration methods on quantitative material decomposition in photon-counting spectral computed tomography using a maximum a posteriori estimator.

    PubMed

    Curtis, Tyler E; Roeder, Ryan K

    2017-10-01

    Advances in photon-counting detectors have enabled quantitative material decomposition using multi-energy or spectral computed tomography (CT). Supervised methods for material decomposition utilize an estimated attenuation for each material of interest at each photon energy level, which must be calibrated based upon calculated or measured values for known compositions. Measurements using a calibration phantom can advantageously account for system-specific noise, but the effect of calibration methods on the material basis matrix and subsequent quantitative material decomposition has not been experimentally investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on the accuracy of quantitative material decomposition in the image domain. Gadolinium was chosen as a model contrast agent in imaging phantoms, which also contained bone tissue and water as negative controls. The maximum gadolinium concentration (30, 60, and 90 mM) and total number of concentrations (2, 4, and 7) were independently varied to systematically investigate effects of the material basis matrix and scaling factor calibration on the quantitative (root mean squared error, RMSE) and spatial (sensitivity and specificity) accuracy of material decomposition. Images of calibration and sample phantoms were acquired using a commercially available photon-counting spectral micro-CT system with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material decomposition of gadolinium, calcium, and water was performed for each calibration method using a maximum a posteriori estimator. Both the quantitative and spatial accuracy of material decomposition were most improved by using an increased maximum gadolinium concentration (range) in the basis matrix calibration; the effects of using a greater number of concentrations were relatively small in magnitude by comparison. The material basis matrix calibration was more sensitive to changes in the calibration methods than the scaling factor calibration. The material basis matrix calibration significantly influenced both the quantitative and spatial accuracy of material decomposition, while the scaling factor calibration influenced quantitative but not spatial accuracy. Importantly, the median RMSE of material decomposition was as low as ~1.5 mM (~0.24 mg/mL gadolinium), which was similar in magnitude to that measured by optical spectroscopy on the same samples. The accuracy of quantitative material decomposition in photon-counting spectral CT was significantly influenced by calibration methods which must therefore be carefully considered for the intended diagnostic imaging application. © 2017 American Association of Physicists in Medicine.
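
    At its core, supervised decomposition of this kind solves, voxel by voxel, a small linear system whose columns are the calibrated basis attenuations per energy bin. A toy sketch with invented numbers (the paper itself uses a maximum a posteriori estimator, which adds a prior term to this data-fit problem):

        import numpy as np

        # Hypothetical material basis matrix: rows are five energy bins, columns
        # are the calibrated attenuations of gadolinium, calcium, and water.
        M = np.array([[12.1, 3.0, 1.00],
                      [18.4, 2.6, 0.95],   # bin above the Gd k-edge boosts Gd contrast
                      [9.7,  2.2, 0.90],
                      [7.2,  1.9, 0.85],
                      [5.1,  1.6, 0.80]])

        true_conc = np.array([0.2, 0.5, 1.0])     # toy Gd, Ca, water amounts
        mu = M @ true_conc                        # noiseless per-bin attenuation

        # Least-squares decomposition of one voxel's multi-bin measurement
        concentrations, *_ = np.linalg.lstsq(M, mu, rcond=None)
        print(concentrations)                     # recovers true_conc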

  9. Simulation of temperature field for temperature-controlled radio frequency ablation using a hyperbolic bioheat equation and temperature-varied voltage calibration: a liver-mimicking phantom study.

    PubMed

    Zhang, Man; Zhou, Zhuhuang; Wu, Shuicai; Lin, Lan; Gao, Hongjian; Feng, Yusheng

    2015-12-21

    This study aims at improving the accuracy of temperature simulation for temperature-controlled radio frequency ablation (RFA). We proposed a new voltage-calibration method in the simulation and investigated the feasibility of a hyperbolic bioheat equation (HBE) in the RFA simulation with longer durations and higher power. A total of 40 RFA experiments was conducted in a liver-mimicking phantom. Four mathematical models with multipolar electrodes were developed by the finite element method in COMSOL software: HBE with/without voltage calibration, and the Pennes bioheat equation (PBE) with/without voltage calibration. The temperature-varied voltage calibration used in the simulation was calculated from an experimental power output and temperature-dependent resistance of liver tissue. We employed the HBE in simulation by considering the delay time τ of 16 s. First, for simulations by each kind of bioheat equation (PBE or HBE), we compared the differences between the temperature-varied voltage-calibration and the fixed-voltage values used in the simulations. Then, the comparisons were conducted between the PBE and the HBE in the simulations with temperature-varied voltage calibration. We verified the simulation results by experimental temperature measurements on nine specific points of the tissue phantom. The results showed that: (1) the proposed voltage-calibration method improved the simulation accuracy of temperature-controlled RFA for both the PBE and the HBE, and (2) for temperature-controlled RFA simulation with the temperature-varied voltage calibration, the HBE method was 0.55 °C more accurate than the PBE method. The proposed temperature-varied voltage calibration may be useful in temperature field simulations of temperature-controlled RFA. Besides, the HBE may be used as an alternative in the simulation of long-duration high-power RFA.

  10. Simultaneous auto-calibration and gradient delays estimation (SAGE) in non-Cartesian parallel MRI using low-rank constraints.

    PubMed

    Jiang, Wenwen; Larson, Peder E Z; Lustig, Michael

    2018-03-09

    To correct gradient timing delays in non-Cartesian MRI while simultaneously recovering corruption-free auto-calibration data for parallel imaging, without additional calibration scans. The calibration matrix constructed from multi-channel k-space data should be inherently low-rank; this property is used to construct reconstruction kernels or sensitivity maps. Delays between the gradient hardware on different axes and the RF receive chain, which are relatively benign in Cartesian MRI (excluding EPI), lead to trajectory deviations and hence data inconsistencies for non-Cartesian trajectories. These in turn lead to a higher-rank, corrupted calibration matrix, which hampers the reconstruction. Here, a method named Simultaneous Auto-calibration and Gradient delays Estimation (SAGE) is proposed that estimates the actual k-space trajectory while simultaneously recovering the uncorrupted auto-calibration data. This is done by estimating the gradient delays that result in the lowest rank of the calibration matrix. The Gauss-Newton method is used to solve the non-linear problem. The method is validated in simulations using center-out radial, projection reconstruction, and spiral trajectories. Feasibility is demonstrated on phantom and in vivo scans with center-out radial and projection reconstruction trajectories. SAGE is able to estimate gradient timing delays with high accuracy at a signal-to-noise ratio as low as 5. The method effectively removes artifacts resulting from gradient timing delays and restores image quality in center-out radial, projection reconstruction, and spiral trajectories. The low-rank based method introduced here simultaneously estimates gradient timing delays and provides accurate auto-calibration data for improved image quality, without any additional calibration scans. © 2018 International Society for Magnetic Resonance in Medicine.

  11. The Value of Hydrograph Partitioning Curves for Calibrating Hydrological Models in Glacierized Basins

    NASA Astrophysics Data System (ADS)

    He, Zhihua; Vorogushyn, Sergiy; Unger-Shayesteh, Katy; Gafurov, Abror; Kalashnikova, Olga; Omorova, Elvira; Merz, Bruno

    2018-03-01

    This study refines a method for calibrating glacio-hydrological models based on Hydrograph Partitioning Curves (HPCs) and evaluates its value in comparison to multi-data set optimization approaches that use glacier mass balance, satellite snow cover images, and discharge. The HPCs are extracted from the observed flow hydrograph using catchment precipitation and temperature gradients; they indicate the periods when the various runoff processes, such as glacier melt or snow melt, dominate the basin hydrograph. The annual cumulative curve of the difference between average daily temperature and the melt threshold temperature over the basin, as well as the annual cumulative curve of average daily snowfall on the glacierized areas, are used to identify the start and end dates of the snow and glacier ablation periods. Model parameters characterizing different runoff processes are calibrated on different HPCs in a stepwise and iterative way. Results show that the HPC-based method (1) delivers model-internal consistency comparable to the tri-data set calibration method; (2) improves the stability of calibrated parameter values across various calibration periods; and (3) estimates the contributions of runoff components similarly to the tri-data set calibration method. Our findings indicate the potential of the HPC-based approach as an alternative for hydrological model calibration in glacierized basins, where calibration data sets other than discharge are often unavailable or very costly to obtain.

  12. Thermal Pollution Mathematical Model. Volume 2; Verification of One-Dimensional Numerical Model at Lake Keowee

    NASA Technical Reports Server (NTRS)

    Lee, S. S.; Sengupta, S.; Nwadike, E. V.

    1980-01-01

    A one dimensional model for studying the thermal dynamics of cooling lakes was developed and verified. The model is essentially a set of partial differential equations which are solved by finite difference methods. The model includes the effects of variation of area with depth, surface heating due to solar radiation absorbed at the upper layer, and internal heating due to the transmission of solar radiation to the sub-surface layers. The exchange of mechanical energy between the lake and the atmosphere is included through the coupling of thermal diffusivity and wind speed. The effects of discharge and intake by power plants are also included. The numerical model was calibrated by applying it to Cayuga Lake. The model was then verified through a long term simulation using Lake Keowee data base. The comparison between measured and predicted vertical temperature profiles for the nine years is good. The physical limnology of Lake Keowee is presented through a set of graphical representations of the measured data base.

  13. Spectral CT of the extremities with a silicon strip photon counting detector

    NASA Astrophysics Data System (ADS)

    Sisniega, A.; Zbijewski, W.; Stayman, J. W.; Xu, J.; Taguchi, K.; Siewerdsen, J. H.

    2015-03-01

    Purpose: Photon counting x-ray detectors (PCXDs) are an important emerging technology for spectral imaging and material differentiation, with numerous potential applications in diagnostic imaging. We report the development of a Si-strip PCXD system originally developed for mammography, with potential application to spectral CT of musculoskeletal extremities, including the challenges associated with sparse sampling, spectral calibration, and optimization for higher-energy x-ray beams. Methods: A bench-top CT system was developed incorporating a Si-strip PCXD, a fixed-anode x-ray source, and rotational and translational motions to execute complex acquisition trajectories. Trajectories involving rotation and translation combined with iterative reconstruction were investigated, including single and multiple axial scans and longitudinal helical scans. The system was calibrated to provide accurate spectral separation in dual-energy three-material decomposition of soft tissue, bone, and iodine. Image quality and decomposition accuracy were assessed in experiments using a phantom with pairs of bone and iodine inserts (3, 5, 15, and 20 mm) and an anthropomorphic wrist. Results: The designed trajectories improved the sampling distribution from 56% minimum sampling of voxels to 75%. Use of iterative reconstruction (viz., penalized likelihood with edge-preserving regularization) in combination with such trajectories resulted in a very low level of artifacts in images of the wrist. For large bone or iodine inserts (>5 mm diameter), the error in the estimated material concentration was <16% for bone (50 mg/mL) and <8% for iodine (5 mg/mL) with strong regularization. For smaller inserts, errors of 20-40% were observed, motivating improved methods for spectral calibration and optimization of the edge-preserving regularizer. Conclusion: Use of PCXDs for three-material decomposition in joint imaging proved feasible through a combination of rotation-translation acquisition trajectories and iterative reconstruction with optimized regularization.

  14. Predictive uncertainty analysis of plume distribution for geological carbon sequestration using sparse-grid Bayesian method

    NASA Astrophysics Data System (ADS)

    Shi, X.; Zhang, G.

    2013-12-01

    Because of the extensive computational burden, parametric uncertainty analyses are rarely conducted for process-based multi-phase models of geological carbon sequestration (GCS). The difficulty of predictive uncertainty analysis for CO2 plume migration in realistic GCS models stems not only from the spatial distribution of the caprock and reservoir (i.e., heterogeneous model parameters), but also from the fact that the GCS estimation problem has multiple local minima due to the complex nonlinear multi-phase (gas and aqueous), multi-component (water, CO2, salt) transport equations. The geological model built by Doughty and Pruess (2004) for the Frio pilot site (Texas) was selected and assumed to represent the 'true' system; it is composed of seven different facies (geological units) distributed among 10 layers, and we chose to calibrate the permeabilities of these facies. Pressure and gas saturation values from this true model were extracted and used as observations for subsequent model calibration, with random noise added to approximate realistic field conditions. Each simulation of the model takes about 2 hours. In this study, we develop a new approach that improves the computational efficiency of Bayesian inference by constructing a surrogate system based on an adaptive sparse-grid stochastic collocation method. This surrogate response-surface global optimization algorithm is first used to calibrate the model parameters; the prediction uncertainty of the CO2 plume position due to the propagation of parametric uncertainty is then quantified in numerical experiments and compared to the actual plume from the 'true' model. The results show that the approach is computationally efficient for multi-modal optimization and prediction uncertainty quantification with computationally expensive simulation models. Both our inverse methodology and our findings are broadly applicable to GCS in heterogeneous storage formations.

  15. A new time calibration method for switched-capacitor-array-based waveform samplers

    NASA Astrophysics Data System (ADS)

    Kim, H.; Chen, C.-T.; Eclov, N.; Ronzhin, A.; Murat, P.; Ramberg, E.; Los, S.; Moses, W.; Choong, W.-S.; Kao, C.-M.

    2014-12-01

    We have developed a new time calibration method for the DRS4 waveform sampler that enables us to precisely measure the non-uniform sampling interval inherent in the switched-capacitor cells of the DRS4. The method uses the proportionality between the differential amplitude and sampling interval of adjacent switched-capacitor cells responding to a sawtooth-shape pulse. In the experiment, a sawtooth-shape pulse with a 40 ns period generated by a Tektronix AWG7102 is fed to a DRS4 evaluation board for calibrating the sampling intervals of all 1024 cells individually. The electronic time resolution of the DRS4 evaluation board with the new time calibration is measured to be 2.4 ps RMS by using two simultaneous Gaussian pulses with 2.35 ns full-width at half-maximum and applying a Gaussian fit. The time resolution dependencies on the time difference with the new time calibration are measured and compared to results obtained by another method. The new method could be applicable for other switched-capacitor-array technology-based waveform samplers for precise time calibration.
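
    The proportionality at the heart of the method gives a direct estimate: on the linear ramp of the sawtooth, the voltage step between adjacent cells scales with the local sampling interval, so normalizing the measured steps to the known ramp duration yields per-cell intervals. A schematic sketch of that idea (in practice the estimate would be averaged over many recorded pulses):

        import numpy as np

        def calibrate_intervals(ramp_samples, ramp_duration):
            """Per-cell sampling intervals of a switched-capacitor array from one
            recorded linear sawtooth ramp: each interval is proportional to the
            voltage step between adjacent cells, normalized so the intervals sum
            to the known ramp duration."""
            dv = np.abs(np.diff(np.asarray(ramp_samples, dtype=float)))
            return dv / dv.sum() * ramp_duration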

  16. Application of six sigma and AHP in analysis of variable lead time calibration process instrumentation

    NASA Astrophysics Data System (ADS)

    Rimantho, Dino; Rahman, Tomy Abdul; Cahyadi, Bambang; Tina Hernawati, S.

    2017-02-01

    Calibration of instrumentation in the pharmaceutical industry is an important activity for determining the true value of a measurement. Preliminary studies indicated that long calibration lead times disrupt production and laboratory activities. This study aimed to analyze the causes of calibration lead time. Several methods were used: Six Sigma was applied to determine the capability of the instrument calibration process; brainstorming, Pareto diagrams, and fishbone diagrams were used to identify and analyze the problems; and the Analytical Hierarchy Process (AHP) was then used to create a hierarchical structure and prioritize them. The results showed a DPMO value of about 40,769, equivalent to a sigma level of approximately 3.24 for the calibration process, indicating the need for improvement. Problem-solving strategies for calibration lead time were then determined, such as shortening the preventive maintenance schedule, increasing the number of instrument calibrators, and training personnel. Consistency tests on the complete pairwise comparison matrices of the hierarchy yielded CR values below 0.1.

  17. Comparison of global optimization approaches for robust calibration of hydrologic model parameters

    NASA Astrophysics Data System (ADS)

    Jung, I. W.

    2015-12-01

    Robustness of the calibrated parameters of hydrologic models is necessary to provide reliable predictions of watershed behavior under varying climate conditions. This study investigated calibration performance as a function of the length of the calibration period, the objective function, the hydrologic model structure, and the optimization method. To do this, the combination of three global optimization methods (SCE-UA, Micro-GA, and DREAM) and four hydrologic models (SAC-SMA, GR4J, HBV, and PRMS) was tested with different calibration periods and objective functions. Our results showed that the three global optimization methods provided similar calibration performance across the different calibration periods, objective functions, and hydrologic models. However, using the index of agreement, normalized root mean square error, or Nash-Sutcliffe efficiency as the objective function performed better than using the correlation coefficient or percent bias. Calibration performance for calibration periods ranging from one to seven years was hard to generalize, because the four hydrologic models differ in complexity and different years carry different information content in the hydrological observations. Acknowledgements This research was supported by a grant (14AWMP-B082564-01) from the Advanced Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
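
    For reference, three of the objective functions compared above have compact standard definitions; a short sketch (note that sign conventions for percent bias vary across the literature):

        import numpy as np

        def nse(sim, obs):
            """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the mean."""
            return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

        def nrmse(sim, obs):
            """Root mean square error normalized by the range of observations."""
            return np.sqrt(np.mean((sim - obs) ** 2)) / (obs.max() - obs.min())

        def pbias(sim, obs):
            """Percent bias: positive values here indicate overestimated total flow."""
            return 100.0 * np.sum(sim - obs) / np.sum(obs)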

  18. Influence of Ultrasonic Nonlinear Propagation on Hydrophone Calibration Using Two-Transducer Reciprocity Method

    NASA Astrophysics Data System (ADS)

    Yoshioka, Masahiro; Sato, Sojun; Kikuchi, Tsuneo; Matsuda, Yoichi

    2006-05-01

    In this study, the influence of ultrasonic nonlinear propagation on hydrophone calibration by the two-transducer reciprocity method is investigated quantitatively using the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation. It is proposed that the correction for the diffraction and attenuation of ultrasonic waves used in two-transducer reciprocity calibration can be derived using the KZK equation to remove the influence of nonlinear propagation. The validity of the correction is confirmed by comparing the sensitivities calibrated by the two-transducer reciprocity method and laser interferometry.
