NASA Technical Reports Server (NTRS)
Racette, Paul; Lang, Roger; Zhang, Zhao-Nan; Zacharias, David; Krebs, Carolyn A. (Technical Monitor)
2002-01-01
Radiometers must be periodically calibrated because the receiver response fluctuates. Many techniques exist to correct for the time-varying response of a radiometer receiver. An analytical technique has been developed that uses generalized least squares regression (LSR) to predict the performance of a wide variety of calibration algorithms. The total measurement uncertainty, including the uncertainty of the calibration, can be computed using LSR. The uncertainties of the calibration samples used in the regression are based upon treating the receiver fluctuations as non-stationary processes. Signals originating from the different sources of emission are treated as simultaneously existing random processes; the radiometer output is thus a series of samples obtained from these random processes. The samples are treated as random variables, but because the underlying processes are non-stationary, the statistics of the samples are treated as non-stationary as well. The statistics of the calibration samples depend upon the time for which the samples are to be applied: the statistics of the random variables are equated to the mean statistics of the non-stationary processes over the interval between the time a calibration sample is taken and the time it is applied. This analysis opens the opportunity for experimental investigation into the underlying properties of receiver non-stationarity through the use of multiple calibration references. In this presentation we will discuss the application of LSR to the analysis of various calibration algorithms, requirements for experimental verification of the theory, and preliminary results from analyzing experimental measurements.
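As a minimal sketch of the regression underlying this approach (not the paper's actual formulation), the following applies generalized least squares to a hypothetical two-point hot/cold calibration, with a diagonal weight matrix standing in for the non-stationary sample statistics; all names and numerical values are illustrative.

```python
import numpy as np

def gls_fit(X, y, W):
    """Generalized least squares: beta = (X^T W^-1 X)^-1 X^T W^-1 y."""
    Winv = np.linalg.inv(W)
    cov_beta = np.linalg.inv(X.T @ Winv @ X)   # parameter covariance
    beta = cov_beta @ X.T @ Winv @ y
    return beta, cov_beta

# Hypothetical two-point calibration: hot and cold reference temperatures.
T_ref = np.array([300.0, 77.0])              # K
X = np.column_stack([T_ref, np.ones(2)])     # design matrix: [gain, offset]
v = np.array([2.95, 0.82])                   # measured output voltages (made up)
# Unequal variances stand in for the non-stationary sample statistics: each
# sample's weight depends on the lag between calibration and application.
W = np.diag([1e-4, 4e-4])

beta, cov_beta = gls_fit(X, v, W)
gain, offset = beta
T_scene = (1.90 - offset) / gain             # invert the fit for a scene sample
print(gain, offset, T_scene)
```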
Geng, Zongyu; Yang, Feng; Chen, Xi; Wu, Nianqiang
2016-01-01
It remains a challenge to accurately calibrate a sensor subject to environmental drift. The calibration task for such a sensor is to quantify the relationship between the sensor's response and its exposure condition, which is specified by not only the analyte concentration but also environmental factors such as temperature and humidity. This work developed a Gaussian Process (GP)-based procedure for the efficient calibration of sensors in drifting environments. Adopted as the calibration model, GP is not only able to capture the possibly nonlinear relationship between the sensor responses and the various exposure-condition factors, but also able to provide valid statistical inference for uncertainty quantification of the target estimates (e.g., the estimated analyte concentration of an unknown environment). Built on GP's inference ability, an experimental design method was developed to achieve efficient sampling of calibration data in a batch sequential manner. The resulting calibration procedure, which integrates the GP-based modeling and experimental design, was applied to a simulated chemiresistor sensor to demonstrate its effectiveness and its efficiency over the traditional method. PMID:26924894
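A minimal sketch of this kind of GP calibration model, assuming scikit-learn as the library and synthetic exposure-condition data; the kernel choice and the drift terms are illustrative, not the authors':

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel

rng = np.random.default_rng(0)
# Exposure conditions: [analyte concentration, temperature, relative humidity]
X = rng.uniform([0, 20, 30], [10, 40, 70], size=(40, 3))
# Synthetic nonlinear response with drift terms and measurement noise
y = 2.0 * np.sqrt(X[:, 0]) + 0.05 * X[:, 1] - 0.01 * X[:, 2] + rng.normal(0, 0.05, 40)

# Anisotropic RBF kernel: one length scale per exposure-condition factor
kernel = ConstantKernel(1.0) * RBF(length_scale=[3.0, 10.0, 20.0]) \
    + WhiteKernel(noise_level=1e-3)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Predictive mean and std supply the uncertainty quantification that drives
# batch-sequential design (e.g., sample next where the predictive std is largest).
X_new = rng.uniform([0, 20, 30], [10, 40, 70], size=(5, 3))
mu, std = gp.predict(X_new, return_std=True)
print(np.round(mu, 2), np.round(std, 3))
```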
Bayesian Treed Calibration: An Application to Carbon Capture With AX Sorbent
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konomi, Bledar A.; Karagiannis, Georgios; Lai, Kevin
2017-01-02
In cases where field or experimental measurements are not available, computer models can represent real physical or engineering systems and reproduce their outcomes. They are usually calibrated in light of experimental data to create a better representation of the real system. Statistical methods based on Gaussian processes for calibration and prediction have been especially important when the computer models are expensive and experimental data are limited. In this paper, we develop the Bayesian treed calibration (BTC) as an extension of standard Gaussian process calibration methods to deal with non-stationary computer models and/or their discrepancy from the field (or experimental) data. Our proposed method partitions both the calibration and observable input space, based on a binary tree partitioning, into sub-regions where existing model calibration methods can be applied to connect a computer model with the real system. The estimation of the parameters in the proposed model is carried out using Markov chain Monte Carlo (MCMC) computational techniques. Different strategies have been applied to improve mixing. We illustrate our method in two artificial examples and a real application that concerns the capture of carbon dioxide with AX amine-based sorbents. The source code and the examples analyzed in this paper are available as part of the supplementary materials.
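For orientation, the sketch below shows the MCMC machinery referenced here in its simplest form: a random-walk Metropolis sampler inferring one calibration input of a cheap stand-in "computer model". The model, prior, and data are all invented; BTC additionally partitions the input space with a binary tree.

```python
import numpy as np

rng = np.random.default_rng(8)

def model(x, theta):                             # stand-in computer model
    return theta * x ** 2

x_obs = np.linspace(0, 1, 20)
y_obs = model(x_obs, 1.7) + rng.normal(0, 0.05, 20)   # synthetic field data
sigma = 0.05                                     # assumed observation noise

def log_post(theta):
    if not 0.0 < theta < 10.0:                   # uniform prior bounds
        return -np.inf
    r = y_obs - model(x_obs, theta)
    return -0.5 * np.sum(r ** 2) / sigma ** 2    # Gaussian log-likelihood

chain, theta = [], 5.0
lp = log_post(theta)
for _ in range(5000):
    prop = theta + rng.normal(0, 0.1)            # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:      # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta)
print(np.mean(chain[1000:]), np.std(chain[1000:]))   # posterior mean and std
```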
Experimental calibration procedures for rotating Lorentz-force flowmeters
Hvasta, M. G.; Slighton, N. T.; Kolemen, E.; ...
2017-07-14
Rotating Lorentz-force flowmeters are a novel and useful technology with a range of applications in a variety of different industries. However, calibrating these flowmeters can be challenging, time-consuming, and expensive. In this paper, simple calibration procedures for rotating Lorentz-force flowmeters are presented. These procedures eliminate the need for expensive equipment, numerical modeling, redundant flowmeters, and system down-time. Finally, the calibration processes are explained in a step-by-step manner and compared to experimental results.
Methods for Calibration of Prout-Tompkins Kinetics Parameters Using EZM Iteration and GLO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wemhoff, A P; Burnham, A K; de Supinski, B
2006-11-07
This document contains information regarding the standard procedures used to calibrate chemical kinetics parameters for the extended Prout-Tompkins model to match experimental data. Two methods for calibration are described: EZM calibration and GLO calibration. EZM calibration matches kinetics parameters to three data points, while GLO calibration slightly adjusts kinetic parameters to match multiple points. Information is provided regarding the theoretical approach and application procedure for both of these calibration algorithms. It is recommended that for the calibration process, the user begin with EZM calibration to provide a good estimate, and then fine-tune the parameters using GLO. Two examples have been provided to guide the reader through a general calibration process.
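For orientation, the sketch below integrates a Prout-Tompkins-type autocatalytic rate law with an Arrhenius rate constant; the parameter values are assumed, and the extended model in the report includes refinements beyond this basic form.

```python
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314          # J/(mol K)
Z = 1.0e12         # 1/s, pre-exponential factor (assumed)
E = 1.5e5          # J/mol, activation energy (assumed)
n, m = 1.0, 0.5    # reaction orders (assumed)

def rate(t, alpha, T):
    """d(alpha)/dt = Z exp(-E/RT) (1 - alpha)^n alpha^m."""
    k = Z * np.exp(-E / (R * T))
    # small seed keeps alpha**m from locking the reaction at alpha = 0
    a = np.clip(alpha[0], 1e-8, 1.0)
    return [k * (1.0 - a) ** n * a ** m]

sol = solve_ivp(rate, (0.0, 3600.0), [1e-6], args=(500.0,), rtol=1e-8)
print(sol.y[0, -1])   # extent of reaction after 1 h at 500 K
```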
NASA Technical Reports Server (NTRS)
Geng, Steven M.
1987-01-01
A free-piston Stirling engine performance code is being upgraded and validated at the NASA Lewis Research Center under an interagency agreement between the Department of Energy's Oak Ridge National Laboratory and NASA Lewis. Many modifications were made to the free-piston code in an attempt to decrease the calibration effort. A procedure was developed that made the code calibration process more systematic. Engine-specific calibration parameters are often used to bring predictions and experimental data into better agreement. The code was calibrated to a matrix of six experimental data points. Predictions of the calibrated free-piston code are compared with RE-1000 free-piston Stirling engine sensitivity test data taken at NASA Lewis. Reasonable agreement was obtained between the code predictions and the experimental data over a wide range of engine operating conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nabeel A. Riza
The goals of the first six months of this project were to begin laying the foundations for the SiC front-end optical chip fabrication techniques for high-pressure gas species sensing, as well as the design, assembly, and test of a portable high-pressure, high-temperature calibration test cell chamber for introducing gas species. This calibration cell will be used in the remaining months for the proposed first-stage high-pressure, high-temperature gas species sensor experimentation and data processing. All these goals have been achieved and are described in detail in the report. Both the design process and diagrams for the mechanical elements and the optical systems are provided. Photographs of the fabricated calibration test chamber cell, the optical sensor setup with the calibration cell, and the SiC sample chip holder, along with the relevant signal processing mathematics, are provided. Initial experimental data from both the optical sensor and fabricated test gas species SiC chips is provided. The design and experimentation results are summarized to give positive conclusions on the proposed novel high-temperature, high-pressure gas species detection optical sensor technology.
Fang, Cheng; Butler, David Lee
2013-05-01
In this paper, an innovative method for CMM (Coordinate Measuring Machine) self-calibration is proposed. In contrast to conventional CMM calibration, which relies heavily on a high-precision reference standard such as a laser interferometer, the proposed calibration method is based on a low-cost artefact fabricated with commercially available precision ball bearings. By optimizing the mathematical model and rearranging the data sampling positions, the experimental process and data analysis can be simplified. In mathematical terms, the number of samples can be minimized by eliminating the redundant equations among those configured by the experimental data array. The section lengths of the artefact are measured at arranged positions, with which an equation set can be configured to determine the measurement errors at the corresponding positions. With the proposed method, the equation set is short of one equation, which can be supplemented either by measuring the total length of the artefact with a higher-precision CMM or by calibrating the single-point error at the extreme position with a laser interferometer. In this paper, the latter is selected. With spline interpolation, the error compensation curve can be determined. To verify the proposed method, a simple calibration system was set up on a commercial CMM. Experimental results showed that with the error compensation curve, the measurement uncertainty can be reduced to 50% of its original value.
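A minimal sketch of the final step, assuming hypothetical section positions and error values: the error compensation curve is obtained by spline interpolation and subtracted from raw readings.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Measurement errors (mm) determined at arranged positions (mm) along one axis,
# anchored by one externally calibrated point (e.g., from a laser interferometer).
positions = np.array([0.0, 100.0, 200.0, 300.0, 400.0, 500.0])
errors = np.array([0.0, 0.8e-3, 1.5e-3, 1.1e-3, 0.4e-3, -0.6e-3])

compensation = CubicSpline(positions, errors)    # error compensation curve

def compensate(reading_mm):
    """Subtract the interpolated systematic error from a raw CMM reading."""
    return reading_mm - compensation(reading_mm)

print(compensate(250.0))
```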
Optics-Only Calibration of a Neural-Net Based Optical NDE Method for Structural Health Monitoring
NASA Technical Reports Server (NTRS)
Decker, Arthur J.
2004-01-01
A calibration process is presented that uses optical measurements alone to calibrate a neural-net based NDE method. The method itself detects small changes in the vibration mode shapes of structures. The optics-only calibration process confirms previous work showing that the sensitivity to vibration-amplitude changes can be as small as 10 nanometers; a more practical value in an NDE service laboratory is shown to be 50 nanometers. Both model-generated and experimental calibrations are demonstrated using two implementations of the calibration technique. The implementations are based on previously published demonstrations of the NDE method and an alternative calibration procedure that depends on comparing neural-net and point-sensor measurements. The optics-only calibration method, unlike the alternative method, does not require modifications of the structure being tested or the creation of calibration objects. The calibration process can be used to test improvements in the NDE process and to develop a vibration-mode independence of damage-detection sensitivity. The calibration effort was intended to support NASA's objective to promote safety in the operations of ground test facilities, and aviation safety in general, by allowing the detection of the gradual onset of structural changes and damage.
Calibrator device for the extrusion of cable coatings
NASA Astrophysics Data System (ADS)
Garbacz, Tomasz; Dulebová, Ľudmila; Spišák, Emil; Dulebová, Martina
2016-05-01
This paper presents selected results of theoretical and experimental research on a new calibration device (calibrator) used to produce coatings of electric cables. The aim of this study is to present the design of the calibration equipment and a new calibration machine, an important element of modernized extrusion lines for coating cables. As a result of the extrusion of PVC modified with blowing agents, an extrudate in the form of an electrical cable was obtained. The conditions of the extrusion process were properly selected, which made it possible to obtain a product with a solid external surface and a cellular core.
Precise calibration of few-cycle laser pulses with atomic hydrogen
NASA Astrophysics Data System (ADS)
Wallace, W. C.; Kielpinski, D.; Litvinyuk, I. V.; Sang, R. T.
2017-12-01
Interaction of atoms and molecules with strong electric fields is a fundamental process in many fields of research, particularly in the emerging field of attosecond science. Therefore, understanding the physics underpinning those interactions is of significant interest to the scientific community. One crucial step in this understanding is accurate knowledge of the few-cycle laser field driving the process. Atomic hydrogen (H), the simplest of all atomic species, plays a key role in benchmarking strong-field processes. Its widespread use as a testbed for theoretical calculations allows the comparison of approximate theoretical models against nearly perfect numerical solutions of the three-dimensional time-dependent Schrödinger equation. Until recently, relatively little experimental data in atomic H was available for comparison to these models, mostly because of the difficulty in the construction and use of atomic H sources. Here, we review our most recent experimental results from atomic H interaction with few-cycle laser pulses and how they have been used to calibrate important laser pulse parameters such as peak intensity and the carrier-envelope phase (CEP). Quantitative agreement between experimental data and theoretical predictions for atomic H has been obtained at the 10% uncertainty level, allowing for accurate laser intensity calibration at the 1% level. Using this calibration in atomic H, both accurate CEP data and an intensity calibration standard have been obtained in Ar, Kr, and Xe, gases in common use for strong-field experiments. This calibration standard can be used by any laboratory using few-cycle pulses in the 10^14 W cm-2 intensity regime centered at 800 nm wavelength to accurately calibrate their peak laser intensity to within a few percent.
Van Daele, Timothy; Gernaey, Krist V; Ringborg, Rolf H; Börner, Tim; Heintz, Søren; Van Hauwermeiren, Daan; Grey, Carl; Krühne, Ulrich; Adlercreutz, Patrick; Nopens, Ingmar
2017-09-01
The aim of model calibration is to estimate unique parameter values from available experimental data, here applied to a biocatalytic process. The traditional approach of first gathering data and then performing a model calibration is inefficient, since the information gathered during experimentation is not actively used to optimize the experimental design. By applying an iterative robust model-based optimal experimental design, the limited amount of data collected is used to design additional informative experiments. The algorithm is used here to calibrate the initial reaction rate of an ω-transaminase catalyzed reaction in a more accurate way. The parameter confidence region estimated from the Fisher Information Matrix is compared with the likelihood confidence region, which is not only more accurate but also computationally more expensive to obtain. As a result, an important deviation between both approaches is found, confirming that linearization methods should be applied with care for nonlinear models. © 2017 American Institute of Chemical Engineers. Biotechnol. Prog., 33:1278-1293, 2017.
Middleton, John; Vaks, Jeffrey E
2007-04-01
Errors of calibrator-assigned values lead to errors in the testing of patient samples. The ability to estimate the uncertainties of calibrator-assigned values and other variables minimizes errors in testing processes. International Organization for Standardization guidelines provide simple equations for the estimation of calibrator uncertainty with simple value-assignment processes, but other methods are needed to estimate uncertainty in complex processes. We estimated the assigned-value uncertainty with a Monte Carlo computer simulation of a complex value-assignment process, based on a formalized description of the process, with measurement parameters estimated experimentally. This method was applied to study the uncertainty of a multilevel calibrator value assignment for a prealbumin immunoassay. The simulation results showed that the component of the uncertainty added by the process of value transfer from the reference material CRM470 to the calibrator is smaller than that of the reference material itself (<0.8% vs 3.7%). Varying the process parameters in the simulation model allowed for optimizing the process while keeping the added uncertainty small. The patient result uncertainty caused by the calibrator uncertainty was also found to be small. This method of estimating uncertainty is a powerful tool that allows for estimation of calibrator uncertainty for optimization of various value-assignment processes, with fewer measurements and lower reagent costs, while satisfying the uncertainty requirements. The new method expands and augments existing methods to allow estimation of uncertainty in complex processes.
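A minimal sketch of this style of Monte Carlo uncertainty estimation for an assumed two-step value transfer (reference material to master calibrator to product calibrator); the distributions and coefficients of variation are illustrative stand-ins for the paper's experimentally estimated parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000
ref_value = 100.0                      # nominal concentration of the reference
u_ref = 0.037 * ref_value              # 3.7% relative standard uncertainty

ref = rng.normal(ref_value, u_ref, N)              # reference material lot
master = ref * rng.normal(1.0, 0.004, N)           # transfer step 1 (0.4% CV)
product = master * rng.normal(1.0, 0.006, N)       # transfer step 2 (0.6% CV)

# Uncertainty added by the value-transfer process, in quadrature
added = np.sqrt(np.var(product) - np.var(ref))
print(f"total: {np.std(product)/ref_value:.2%}, "
      f"added by process: {added/ref_value:.2%}")
```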
Uncertainty Analysis of Instrument Calibration and Application
NASA Technical Reports Server (NTRS)
Tripp, John S.; Tcheng, Ping
1999-01-01
Experimental aerodynamic researchers require estimated precision and bias uncertainties of measured physical quantities, typically at 95 percent confidence levels. Uncertainties of final computed aerodynamic parameters are obtained by propagation of individual measurement uncertainties through the defining functional expressions. In this paper, rigorous mathematical techniques are extended to determine precision and bias uncertainties of any instrument-sensor system. Through this analysis, instrument uncertainties determined through calibration are expressed as functions of the corresponding measurement for linear and nonlinear univariate and multivariate processes. Treatment of correlated measurement precision error is developed. During laboratory calibration, calibration standard uncertainties are assumed to be an order of magnitude less than those of the instrument being calibrated; often calibration standards do not satisfy this assumption. This paper applies rigorous statistical methods for inclusion of calibration standard uncertainty and covariance due to the order of their application. The effects of mathematical modeling error on calibration bias uncertainty are quantified. The effects of experimental design on uncertainty are analyzed. The importance of replication is emphasized, and techniques for estimation of both bias and precision uncertainties using replication are developed. Statistical tests for stationarity of calibration parameters over time are obtained.
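The propagation step referred to here is, in its standard first-order (GUM) form, the following; the covariance term corresponds to the treatment of correlated measurement precision error mentioned above. This is the standard formula, not an equation quoted from the paper:

```latex
u_c^2(f) = \sum_{i=1}^{n} \left(\frac{\partial f}{\partial x_i}\right)^2 u^2(x_i)
         + 2 \sum_{i=1}^{n-1} \sum_{j=i+1}^{n}
           \frac{\partial f}{\partial x_i}\,\frac{\partial f}{\partial x_j}\, u(x_i, x_j)
```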
Calibration and simulation of two large wastewater treatment plants operated for nutrient removal.
Ferrer, J; Morenilla, J J; Bouzas, A; García-Usach, F
2004-01-01
Control and optimisation of plant processes has become a priority for WWTP managers. The calibration and verification of a mathematical model provides an important tool for the investigation of advanced control strategies that may assist in the design or optimisation of WWTPs. This paper describes the calibration of the ASM2d model for two full-scale biological nitrogen and phosphorus removal plants in order to characterize the biological process and to upgrade the plants' performance. Results from simulation showed a good correspondence with experimental data, demonstrating that the model and the calibrated parameters were able to predict the behaviour of both WWTPs. Once the calibration and simulation process was finished, a study for each WWTP was done with the aim of improving its performance. Modifications focused on reactor configuration and operation strategies were proposed.
Calibration for single multi-mode fiber digital scanning microscopy imaging system
NASA Astrophysics Data System (ADS)
Yin, Zhe; Liu, Guodong; Liu, Bingguo; Gan, Yu; Zhuang, Zhitao; Chen, Fengdong
2015-11-01
The single multimode fiber (MMF) digital scanning imaging system represents a development trend in modern endoscopy. We concentrate on the calibration method for this imaging system. The calibration comprises two processes: forming scanning focused spots and calibrating the couple factors, which vary with position. An adaptive parallel coordinate (APC) algorithm is adopted to form the focused spots at the multimode fiber (MMF) output. Compared with other algorithms, APC has several merits: high speed, low computational cost, and no iterations. The ratio of the optical power captured by the MMF to the intensity of the focused spot is called the couple factor. We set up the calibration experimental system to form the scanning focused spots and calculate the couple factors for different object positions. The experimental results show that the couple factor is higher in the center than at the edge.
Laser's calibration of an AOTF-based spectral colorimeter
NASA Astrophysics Data System (ADS)
Emelianov, Sergey P.; Khrustalev, Vladimir N.; Kochin, Leonid B.; Polosin, Lev L.
2003-06-01
The paper is devoted to methods for calibrating AOTF-based spectral colorimeters. The spectrometric method of measuring color values with AOTF-based spectral colorimeters is surveyed, and a theoretical description of the spectrometer data processing methods is offered. A justified choice of radiation source suitable for the calibration of spectral colorimeters is made. Experimental results for different acousto-optical media and modes of interaction are presented.
Brightness checkerboard lattice method for the calibration of the coaxial reverse Hartmann test
NASA Astrophysics Data System (ADS)
Li, Xinji; Hui, Mei; Li, Ning; Hu, Shinan; Liu, Ming; Kong, Lingqin; Dong, Liquan; Zhao, Yuejin
2018-01-01
The coaxial reverse Hartmann test (RHT) is widely used in the measurement of large aspheric surfaces as an auxiliary method to interferometric measurement because of its large dynamic range, highly flexible testing of low-frequency surface errors, and low cost. The accuracy of the coaxial RHT depends on the calibration; however, the calibration process remains inefficient, and the signal-to-noise ratio limits its accuracy. In this paper, brightness checkerboard lattices are used to replace the traditional dot matrix. The brightness checkerboard method reduces the number of dot-matrix projections in the calibration process, thus improving efficiency. An LCD screen displays a brightness checkerboard lattice in which brighter and darker checkerboards are alternately arranged. From the image on the detector, the relationship between rays at certain angles and the photosensitive positions in detector coordinates can be obtained, and a differential de-noising method can effectively reduce the impact of noise on the measurement results. Simulation and experimentation proved the feasibility of the method. Theoretical analysis and experimental results show that the efficiency of the brightness checkerboard lattices is about four times that of the traditional dot matrix, and the signal-to-noise ratio of the calibration is significantly improved.
Data Assimilation - Advances and Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Brian J.
2014-07-30
This presentation provides an overview of data assimilation (model calibration) for complex computer experiments. Calibration refers to the process of probabilistically constraining uncertain physics/engineering model inputs to be consistent with observed experimental data. An initial probability distribution for these parameters is updated using the experimental information. Utilization of surrogate models and empirical adjustment for model form error in code calibration form the basis for the statistical methodology considered. The role of probabilistic code calibration in supporting code validation is discussed. Incorporation of model form uncertainty in rigorous uncertainty quantification (UQ) analyses is also addressed. Design criteria used within a batch sequential design algorithm are introduced for efficiently achieving predictive maturity and improved code calibration. Predictive maturity refers to obtaining stable predictive inference with calibrated computer codes. These approaches allow for augmentation of initial experiment designs for collecting new physical data. A standard framework for data assimilation is presented, and techniques for updating the posterior distribution of the state variables based on particle filtering and the ensemble Kalman filter are introduced.
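As a concrete sketch of one of the named update techniques, the following implements a stochastic ensemble Kalman filter analysis step for a linear observation operator; the dimensions and noise levels are arbitrary choices, not from the presentation.

```python
import numpy as np

rng = np.random.default_rng(2)
n_state, n_obs, n_ens = 4, 2, 50

ens = rng.normal(0.0, 1.0, (n_state, n_ens))     # forecast ensemble
H = np.zeros((n_obs, n_state)); H[0, 0] = H[1, 2] = 1.0   # observation operator
R = 0.1 * np.eye(n_obs)                          # observation-error covariance
y = np.array([0.5, -0.3])                        # observed data

# Sample covariance of the forecast ensemble
A = ens - ens.mean(axis=1, keepdims=True)
P = A @ A.T / (n_ens - 1)

K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
# Perturbed observations keep the analysis spread statistically consistent
y_pert = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
ens_a = ens + K @ (y_pert - H @ ens)             # analysis ensemble
print(ens_a.mean(axis=1))                        # posterior state estimate
```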
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilkey, Lindsay
This milestone presents a demonstration of the High-to-Low (Hi2Lo) process in the VVI focus area. Validation and additional calculations with the commercial computational fluid dynamics code STAR-CCM+ were performed using a 5x5 fuel assembly with non-mixing geometry and spacer grids. This geometry was based on the benchmark experiment provided by Westinghouse. Results from the simulations were compared to existing experimental data and to the subchannel thermal-hydraulics code COBRA-TF (CTF). An uncertainty quantification (UQ) process was developed for the STAR-CCM+ model, and results of the STAR UQ were communicated to CTF. Results from STAR-CCM+ simulations were used as experimental design points in CTF to calibrate the mixing parameter β and compared to results obtained using experimental data points. This demonstrated that CTF's β parameter can be calibrated to match existing experimental data more closely. The Hi2Lo process for the STAR-CCM+/CTF code coupling was documented in this milestone, which is closely linked to the L3:VVI.H2LP15.01 milestone report.
Single-Vector Calibration of Wind-Tunnel Force Balances
NASA Technical Reports Server (NTRS)
Parker, P. A.; DeLoach, R.
2003-01-01
An improved method of calibrating a wind-tunnel force balance involves the use of a unique load application system integrated with formal experimental design methodology. The Single-Vector Force Balance Calibration System (SVS) overcomes the productivity and accuracy limitations of prior calibration methods. A force balance is a complex structural spring element instrumented with strain gauges for measuring three orthogonal components of aerodynamic force (normal, axial, and side force) and three orthogonal components of aerodynamic torque (rolling, pitching, and yawing moments). Force balances remain the state-of-the-art instruments that provide these measurements on a scale model of an aircraft during wind tunnel testing. Ideally, each electrical channel of the balance would respond only to its respective component of load and would have no response to other components of load. This is not entirely possible even though balance designs are optimized to minimize these undesirable interaction effects. Ultimately, a calibration experiment is performed to obtain the necessary data to generate a mathematical model and determine the force measurement accuracy. In order to set the independent variables of applied load for the calibration experiment, a high-precision mechanical system is required. Manual deadweight systems have been in use at Langley Research Center (LaRC) since the 1940s. These simple methodologies produce high-confidence results, but the process is mechanically complex and labor-intensive, requiring three to four weeks to complete. Over the past decade, automated balance calibration systems have been developed. In general, these systems were designed to automate the tedious manual calibration process, resulting in an even more complex system that degrades load application quality. The current calibration approach relies on a one-factor-at-a-time (OFAT) methodology, where each independent variable is incremented individually throughout its full-scale range while all other variables are held at a constant magnitude. This OFAT approach has been widely accepted because of its inherent simplicity and intuitive appeal to the balance engineer. LaRC has been conducting research in a "modern design of experiments" (MDOE) approach to force balance calibration. Formal experimental design techniques provide an integrated view of the entire calibration process, covering all three major aspects of an experiment: the design of the experiment, the execution of the experiment, and the statistical analyses of the data. In order to overcome the weaknesses in the available mechanical systems and to apply formal experimental techniques, a new mechanical system was required. The SVS enables the complete calibration of a six-component force balance with a series of single force vectors.
Experimental power density distribution benchmark in the TRIGA Mark II reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snoj, L.; Stancar, Z.; Radulovic, V.
2012-07-01
In order to improve the power calibration process and to benchmark the existing computational model of the TRIGA Mark II reactor at the Jožef Stefan Institute (JSI), a bilateral project was started as part of the agreement between the French Commissariat à l'énergie atomique et aux énergies alternatives (CEA) and the Ministry of higher education, science and technology of Slovenia. One of the objectives of the project was to analyze and improve the power calibration process of the JSI TRIGA reactor (procedural improvement and uncertainty reduction) by using absolutely calibrated CEA fission chambers (FCs). This is one of the few available power density distribution benchmarks for testing not only the fission rate distribution but also the absolute values of the fission rates. Our preliminary calculations indicate that the total experimental uncertainty of the measured reaction rate is sufficiently low that the experiments could be considered as benchmark experiments. (authors)
Volumetric calibration of a plenoptic camera.
Hall, Elise Munz; Fahringer, Timothy W; Guildenbecher, Daniel R; Thurow, Brian S
2018-02-01
The volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods is examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.
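A minimal sketch of fitting a 3D polynomial mapping function of the kind described here, assuming synthetic dot-card data and a cubic monomial basis; it maps object-space points to sensor coordinates by linear least squares.

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_features(P, degree=3):
    """Monomial basis up to `degree` in the columns of P (N x 3)."""
    cols = [np.ones(len(P))]
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(P.shape[1]), d):
            cols.append(np.prod(P[:, idx], axis=1))
    return np.column_stack(cols)

rng = np.random.default_rng(3)
obj = rng.uniform(-1, 1, (500, 3))     # dot positions at a variety of depths
# Synthetic projection with mild distortion standing in for real lens effects
u = 800 * obj[:, 0] / (2 + obj[:, 2]) + 5 * obj[:, 0] ** 3
v = 800 * obj[:, 1] / (2 + obj[:, 2]) + 5 * obj[:, 1] ** 3

F = poly_features(obj)
coef_u, *_ = np.linalg.lstsq(F, u, rcond=None)   # mapping for sensor u
coef_v, *_ = np.linalg.lstsq(F, v, rcond=None)   # mapping for sensor v
resid = np.hypot(F @ coef_u - u, F @ coef_v - v)
print(resid.max())    # worst-case mapping residual in pixels
```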
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poilane, C.; Sandoz, P.; Departement d'Optique PM Duffieux, Institut FEMTO-ST, UMR CNRS 6174, Universite de Franche-Comte, 25030 Besancon, Cedex
2006-05-15
A double-side optical profilometer based on white-light interferometry was developed for thickness measurement of nontransparent films. The profile of the sample is measured simultaneously on both sides of the film. The resulting data allow the computation of the roughness, the flatness and the parallelism of the sides of the film, and the average thickness of the film. The key point is the apparatus calibration, i.e., the accurate determination of the distance between the reference mirrors of the complementary interferometers. Specific samples were processed for that calibration. The system is adaptable to various thickness scales as long as the calibration can be made accurately. A thickness accuracy better than 30 nm for films thinner than 200 µm is reported with the experimental material used. In this article, we present the principle of the method as well as the calibration methodology. Limitations and accuracy of the method are discussed. Experimental results are presented.
Research on camera on-orbit radiant calibration based on black body and infrared calibration stars
NASA Astrophysics Data System (ADS)
Wang, YuDu; Su, XiaoFeng; Zhang, WanYing; Chen, FanSheng
2018-05-01
Affected by the launching process and the space environment, the response of a space camera inevitably degrades, so an on-orbit radiometric calibration is necessary. In this paper, we propose a calibration method based on accurate infrared standard stars to increase infrared radiation measurement precision. Since stars can be considered point targets, we use them as the radiometric calibration source and establish a Taylor expansion method and an energy extrapolation model based on the WISE and 2MASS catalogs. We then update the calibration results obtained from the black body. Finally, the calibration mechanism is designed and the design is verified by an on-orbit test. The experimental calibration results show that the irradiance extrapolation error is about 3% and the accuracy of the calibration methods is about 10%, demonstrating that the methods satisfy the requirements of on-orbit calibration.
Multimodal Spatial Calibration for Accurately Registering EEG Sensor Positions
Chen, Shengyong; Xiao, Gang; Li, Xiaoli
2014-01-01
This paper proposes a fast and accurate calibration method to calibrate multiple multimodal sensors using a novel photogrammetry system for fast localization of EEG sensors. The EEG sensors are placed on the human head, and the multimodal sensors are installed around the head to simultaneously obtain all EEG sensor positions. A multiple-view calibration process is implemented to obtain the transformations between the views. We first develop an efficient local repair algorithm to improve the depth map, and then a special calibration body is designed. Based on these, accurate and robust calibration results can be achieved. We evaluate the proposed method using the corners of a chessboard calibration plate. Experimental results demonstrate that the proposed method achieves good performance, which can be further applied to EEG source localization applications on the human brain. PMID:24803954
Assessment and certification of neonatal incubator sensors through an inferential neural network.
de Araújo, José Medeiros; de Menezes, José Maria Pires; Moura de Albuquerque, Alberto Alexandre; da Mota Almeida, Otacílio; Ugulino de Araújo, Fábio Meneghetti
2013-11-15
Measurement and diagnostic systems based on electronic sensors have been increasingly essential in the standardization of hospital equipment. The technical standard IEC (International Electrotechnical Commission) 60601-2-19 establishes requirements for neonatal incubators and specifies the calibration procedure and validation tests for such devices using sensors systems. This paper proposes a new procedure based on an inferential neural network to evaluate and calibrate a neonatal incubator. The proposal presents significant advantages over the standard calibration process, i.e., the number of sensors is drastically reduced, and it runs with the incubator under operation. Since the sensors used in the new calibration process are already installed in the commercial incubator, no additional hardware is necessary; and the calibration necessity can be diagnosed in real time without the presence of technical professionals in the neonatal intensive care unit (NICU). Experimental tests involving the aforementioned calibration system are carried out in a commercial incubator in order to validate the proposal.
Monitoring and modeling of long-term settlements of an experimental landfill in Brazil.
Simões, Gustavo Ferreira; Catapreta, Cícero Antônio Antunes
2013-02-01
Settlement evaluation in sanitary landfills is a complex process, due to the waste heterogeneity, time-varying properties and influencing factors and mechanisms, such as mechanical compression due to load application and creep, and physical-chemical and biological processes caused by the wastes decomposition. Many empirical models for the analysis of long-term settlement in landfills are reported in the literature. This paper presents the results of a settlement monitoring program carried out during 6 years in Belo Horizonte experimental landfill. Different sets of field data were used to calibrate three long-term settlement prediction models (rheological, hyperbolic and composite). The parameters obtained in the calibration were used to predict the settlements and to compare with actual field data. During the monitoring period of 6 years, significant vertical strains were observed (of up to 31%) in relation to the initial height of the experimental landfill. The results for the long-term settlement prediction obtained by the hyperbolic and rheological models significantly underestimate the settlements, regardless the period of data used in the calibration. The best fits were obtained with the composite model, except when 1 year field data were used in the calibration. The results of the composite model indicate settlements stabilization at larger times and with larger final settlements when compared to the hyperbolic and rheological models. Copyright © 2012 Elsevier Ltd. All rights reserved.
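A minimal sketch of calibrating the hyperbolic model named above, taking its commonly used form S(t) = t / (a + b·t) and invented monitoring data; the ultimate settlement follows as 1/b.

```python
import numpy as np
from scipy.optimize import curve_fit

def hyperbolic(t, a, b):
    """Hyperbolic long-term settlement law: S(t) = t / (a + b*t)."""
    return t / (a + b * t)

t_obs = np.array([30, 90, 180, 365, 730, 1460, 2190])          # days
s_obs = np.array([0.21, 0.52, 0.85, 1.30, 1.72, 2.05, 2.18])   # m of settlement

(a, b), _ = curve_fit(hyperbolic, t_obs, s_obs, p0=(100.0, 0.3))
print(f"a={a:.1f}, b={b:.3f}, ultimate settlement ~ {1/b:.2f} m")  # S(inf) = 1/b
```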
NASA Astrophysics Data System (ADS)
Nielsen, Roger L.; Ustunisik, Gokce; Weinsteiger, Allison B.; Tepley, Frank J.; Johnston, A. Dana; Kent, Adam J. R.
2017-09-01
Quantitative models of petrologic processes require accurate partition coefficients. Our ability to obtain accurate partition coefficients is constrained by their dependence on pressure, temperature, and composition, and on the experimental and analytical techniques we apply. The source and magnitude of error in experimental studies of trace element partitioning may go unrecognized if one examines only the processed published data. The most important sources of error are relict crystals and analyses of more than one phase in the analytical volume. Because we have typically published averaged data, identification of compromised data is difficult if not impossible. We addressed this problem by examining unprocessed data from plagioclase/melt partitioning experiments, by comparing models based on those data with existing partitioning models, and by evaluating the degree to which the partitioning models are dependent on the calibration data. We found that partitioning models are dependent on the calibration data in ways that result in erroneous model values, and that the error will be systematic and dependent on the value of the partition coefficient. In effect, use of different calibration datasets will result in partitioning models whose results are systematically biased, and one can arrive at different and conflicting conclusions depending on how a model is calibrated, defeating the purpose of applying the models. Ultimately this is an experimental data problem, which can be solved if we publish individual analyses (not averages) or use a projection method wherein an independent compositional constraint is used to identify and estimate the uncontaminated composition of each phase.
Experimental Demonstration of In-Place Calibration for Time Domain Microwave Imaging System
NASA Astrophysics Data System (ADS)
Kwon, S.; Son, S.; Lee, K.
2018-04-01
In this study, the experimental demonstration of in-place calibration was conducted using the developed time domain measurement system. Experiments were conducted using three calibration methods: in-place calibration and two existing calibrations, namely array rotation and differential calibration. The in-place calibration uses dual receivers located at an equal distance from the transmitter. The signals received at the dual receivers contain similar unwanted components, that is, the directly received signal and antenna coupling. In contrast to the simulations, the antennas are not perfectly matched and there may be unexpected environmental errors; thus, we used the developed experimental system to demonstrate the proposed method. The possible problems of low signal-to-noise ratio and clock jitter, which may exist in time domain systems, were rectified by averaging repeatedly measured signals. The tumor was successfully detected using all three calibration methods according to the experimental results. The cross correlation was calculated against the reconstructed image of the ideal differential calibration for a quantitative comparison between the existing rotation calibration and the proposed in-place calibration. The mean cross correlation between the in-place calibration and the ideal differential calibration was 0.80, while the mean cross correlation of the rotation calibration was 0.55. Furthermore, the simulation results were compared with the experimental results to verify the in-place calibration method. A quantitative analysis was also performed, and the experimental results show a tendency similar to the simulation.
Efficient material decomposition method for dual-energy X-ray cargo inspection system
NASA Astrophysics Data System (ADS)
Lee, Donghyeon; Lee, Jiseoc; Min, Jonghwan; Lee, Byungcheol; Lee, Byeongno; Oh, Kyungmin; Kim, Jaehyun; Cho, Seungryong
2018-03-01
Dual-energy X-ray inspection systems are widely used today because they provide both X-ray attenuation contrast of the imaged object and its material information. Material decomposition capability allows a higher detection sensitivity for potential targets, for example purposely loaded impurities in agricultural product inspections and threats in security scans. Dual-energy X-ray transmission data can be transformed into two basis-material thickness data, and the accuracy of this transformation relies heavily on the calibration of the material decomposition process. The calibration process in general can be laborious and time consuming. Moreover, a conventional calibration method is often challenged by the nonuniform spectral characteristics of the X-ray beam across the entire field-of-view (FOV). In this work, we developed an efficient material decomposition calibration process for a linear accelerator (LINAC) based high-energy X-ray cargo inspection system. We also proposed a multi-spot calibration method to improve the decomposition performance throughout the entire FOV. Experimental validation of the proposed method has been demonstrated using a cargo inspection system that supports 6 MV and 9 MV dual-energy imaging.
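A minimal sketch of one way such a decomposition calibration can be posed: a polynomial map from low/high-energy log-attenuation pairs to basis-material thicknesses, fitted on step-wedge-style data. The materials, attenuation coefficients, and polynomial degree are assumptions, not the paper's.

```python
import numpy as np

def features(L, H, deg=2):
    """All monomials L^a * H^b with 1 <= a+b <= deg, plus a constant."""
    cols = [np.ones_like(L)]
    for i in range(1, deg + 1):
        for j in range(i + 1):
            cols.append(L ** (i - j) * H ** j)
    return np.column_stack(cols)

rng = np.random.default_rng(4)
# Known thickness combinations of two basis materials (e.g., steel / polyethylene)
t1 = rng.uniform(0, 5, 200)     # cm
t2 = rng.uniform(0, 30, 200)    # cm
# Synthetic log-attenuations at the two beam energies (made-up coefficients)
L = 0.60 * t1 + 0.10 * t2 + rng.normal(0, 0.01, 200)
H = 0.45 * t1 + 0.09 * t2 + rng.normal(0, 0.01, 200)

F = features(L, H)
c1, *_ = np.linalg.lstsq(F, t1, rcond=None)      # thickness-1 calibration
c2, *_ = np.linalg.lstsq(F, t2, rcond=None)      # thickness-2 calibration
# A multi-spot variant would repeat this fit for each region of the detector FOV.
print(F[:3] @ c1 - t1[:3])                       # residuals on a few points
```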
NASA Astrophysics Data System (ADS)
Gao, Dongyang; Zheng, Xiaobing; Li, Jianjun; Hu, Youbo; Xia, Maopeng; Salam, Abdul; Zhang, Peng
2018-03-01
Based on the spontaneous parametric down-conversion process, we propose a novel self-calibration radiometer scheme that can self-calibrate the degradation of its own response and ultimately monitor the fluctuation of a target radiation. The monitoring results are independent of the radiometer's degradation and are not linked to the primary standard detector scale. The principle and feasibility of the proposed scheme were verified by observing a bromine-tungsten lamp, for which a relative standard deviation of 0.39% was obtained. The results confirm the soundness of the scheme's principle. The proposed scheme could enable a significant breakthrough in self-calibration on space platforms.
Calibration of a fluxgate magnetometer array and its application in magnetic object localization
NASA Astrophysics Data System (ADS)
Pang, Hongfeng; Luo, Shitu; Zhang, Qi; Li, Ji; Chen, Dixiang; Pan, Mengchun; Luo, Feilu
2013-07-01
The magnetometer array is effective for magnetic object detection and localization, and calibration is important to improve its accuracy. A magnetic sensor array built with four three-axis DM-050 fluxgate magnetometers, connected by a cross-shaped aluminum frame, is designed. In order to improve the accuracy of the magnetometer array, a calibration process is presented. The calibration process includes magnetometer calibration, coordinate transformation, and misalignment calibration. The calibration system consists of the magnetic sensor array, a GSM-19T proton magnetometer, a two-dimensional nonmagnetic rotation platform, a 12 V dc portable power device, and two portable computers. After magnetometer calibration, the RMS error decreased from an original value of 125.559 nT to a final value of 1.711 nT (a factor of 74). After alignment, the RMS error of misalignment decreased from 1322.3 to 6.0 nT (a factor of 220). The calibrated array deployed on the nonmagnetic rotation platform was then used for ferromagnetic object localization. Experimental results show that the estimated errors of the X, Y and Z axes are -0.049 m, 0.008 m and 0.025 m, respectively. Thus, the magnetometer array is effective for magnetic object detection and localization in three dimensions.
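A minimal sketch of the per-sensor calibration stage, assuming an axis-aligned scale/offset error model fitted as an ellipsoid by linear least squares against a reference field magnitude from a proton magnetometer; the full calibration described above also estimates cross-axis and misalignment terms.

```python
import numpy as np

rng = np.random.default_rng(5)
B0 = 48000.0                                 # nT, reference field magnitude
true_scale = np.array([1.02, 0.98, 1.01])    # synthetic scale-factor errors
true_offset = np.array([150.0, -80.0, 40.0]) # synthetic zero offsets, nT

# Unit directions covering many orientations of the rotation platform
d = rng.normal(size=(400, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)
m = B0 * d * true_scale + true_offset + rng.normal(0, 2.0, (400, 3))

# Fit a*mx^2 + b*my^2 + c*mz^2 + d*mx + e*my + f*mz = 1 by linear least squares
F = np.column_stack([m ** 2, m])
p, *_ = np.linalg.lstsq(F, np.ones(len(m)), rcond=None)
offset = -p[3:] / (2 * p[:3])                        # recovered zero offsets
scale = np.sqrt((1 + np.sum(p[3:] ** 2 / (4 * p[:3]))) / p[:3]) / B0
corrected = (m - offset) / scale                     # calibrated field vectors
print(np.std(np.linalg.norm(corrected, axis=1)))     # residual spread about B0
```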
A Focusing Method in the Calibration Process of Image Sensors Based on IOFBs
Fernández, Pedro R.; Lázaro, José L.; Gardel, Alfredo; Cano, Ángel E.; Bravo, Ignacio
2010-01-01
A focusing procedure in the calibration process of image sensors based on Incoherent Optical Fiber Bundles (IOFBs) is described using the information extracted from the fibers. The procedure differs from any other currently known focusing method because of the absence of spatial in-out correspondence between fibers, which produces a natural codification of the transmitted image. Measuring focus is essential prior to carrying out calibration in order to guarantee accurate processing and decoding. Four algorithms have been developed to estimate the focus measure: two based on mean grey level and two based on variance. In this paper, a few simple focus measures are defined and compared. Experimental results concerning the focus measure and the accuracy of the developed methods are discussed in order to demonstrate their effectiveness. PMID:22315526
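A minimal sketch of the two families of focus measures named above (mean grey level and variance), computed on a 2D array of per-fiber grey levels; the exact estimators in the paper may differ from these standard forms.

```python
import numpy as np

def focus_mean_grey(img, threshold=0.5):
    """Mean grey level of bright fiber pixels; rises as spots sharpen."""
    bright = img[img > threshold * img.max()]
    return bright.mean() if bright.size else 0.0

def focus_variance(img):
    """Grey-level variance; maximal at best focus."""
    return np.var(img.astype(float))

rng = np.random.default_rng(6)
sharp = rng.random((64, 64)) ** 4            # spiky, high-contrast pattern
blurred = np.full((64, 64), sharp.mean())    # defocused limit: uniform image
print(focus_variance(sharp) > focus_variance(blurred))     # True
print(focus_mean_grey(sharp) > focus_mean_grey(blurred))   # True
```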
Autonomous calibration of single spin qubit operations
NASA Astrophysics Data System (ADS)
Frank, Florian; Unden, Thomas; Zoller, Jonathan; Said, Ressa S.; Calarco, Tommaso; Montangero, Simone; Naydenov, Boris; Jelezko, Fedor
2017-12-01
Fully autonomous precise control of qubits is crucial for quantum information processing, quantum communication, and quantum sensing applications. It requires minimal human intervention and the ability to model, predict, and anticipate the quantum dynamics, as well as to precisely control and calibrate single qubit operations. Here, we demonstrate single qubit autonomous calibrations via closed-loop optimisations of electron spin quantum operations in diamond. The operations are examined by quantum state and process tomographic measurements at room temperature, and their performances against systematic errors are iteratively rectified by an optimal pulse engineering algorithm. We achieve an autonomously calibrated fidelity of up to 1.00 on a time scale of minutes for a spin population inversion and up to 0.98 on a time scale of hours for a single qubit π/2-rotation, within the experimental error of 2%. These results manifest the full potential of autonomous calibration for versatile quantum technologies.
Ray, Jaideep; Lefantzi, Sophia; Arunajatesan, Srinivasan; ...
2017-09-07
In this paper, we demonstrate a statistical procedure for learning a high-order eddy viscosity model (EVM) from experimental data and using it to improve the predictive skill of a Reynolds-averaged Navier–Stokes (RANS) simulator. The method is tested in a three-dimensional (3D), transonic jet-in-crossflow (JIC) configuration. The process starts with a cubic eddy viscosity model (CEVM) developed for incompressible flows. It is fitted to limited experimental JIC data using shrinkage regression. The shrinkage process removes all the terms from the model except an intercept, a linear term, and a quadratic one involving the square of the vorticity. The shrunk eddy viscosity model is implemented in an RANS simulator and calibrated, using vorticity measurements, to infer three parameters. The calibration is Bayesian and is solved using a Markov chain Monte Carlo (MCMC) method. A 3D probability density distribution for the inferred parameters is constructed, thus quantifying the uncertainty in the estimate. The phenomenal cost of using a 3D flow simulator inside an MCMC loop is mitigated by using surrogate models ("curve-fits"). A support vector machine classifier (SVMC) is used to impose our prior belief regarding parameter values, specifically to exclude nonphysical parameter combinations. The calibrated model is compared, in terms of its predictive skill, to simulations using uncalibrated linear and CEVMs. We find that the calibrated model, with one quadratic term, is more accurate than the uncalibrated simulator. The model is also checked at a flow condition at which it was not calibrated.
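A minimal sketch of the shrinkage step, with a lasso standing in for the authors' shrinkage regression and placeholder features standing in for the CEVM basis terms; near-zero coefficients identify terms to drop before the Bayesian calibration.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(7)
n = 300
S = rng.normal(size=n)          # stand-in for a strain-rate invariant
W = rng.normal(size=n)          # stand-in for a vorticity invariant
X = np.column_stack([S, W, S * W, S ** 2, W ** 2, S ** 3, W ** 3])
# Synthetic "measured" eddy viscosity: linear term plus vorticity-squared term
y = 1.0 + 0.8 * S + 0.5 * W ** 2 + rng.normal(0, 0.05, n)

lasso = LassoCV(cv=5, max_iter=10000).fit(X, y)
print(np.round(lasso.coef_, 3), round(lasso.intercept_, 3))
```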
NASA Astrophysics Data System (ADS)
Zhou, Y.; Zhang, X.; Xiao, W.
2018-04-01
As the geomagnetic sensor is susceptible to interference, a pre-processing total least squares iteration method is proposed for calibration compensation. First, the error model of the geomagnetic sensor is analyzed and a correction model is proposed; the characteristics of the model are then analyzed and reduced to nine parameters. The geomagnetic data are processed by the Hilbert-Huang transform (HHT) to improve the signal-to-noise ratio, and the nine parameters are calculated using a combination of the Newton iteration method and least squares estimation. A sifting algorithm is used to filter the initial value of the iteration so that the initial error is as small as possible. The experimental results show that this method needs no additional equipment or devices, can continuously update the calibration parameters, and compensates geomagnetic sensor errors better than the two-step estimation method.
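As a concrete reference for the nine-parameter correction model, the sketch below fits a general ellipsoid to raw magnetometer samples by plain linear least squares and converts the fit into a hard-iron offset and a soft-iron matrix. This is a generic formulation assumed for illustration; the paper's total-least-squares preprocessing and Newton refinement are not reproduced.

```python
import numpy as np

def fit_magnetometer_calibration(samples):
    """Nine-parameter ellipsoid fit: returns a hard-iron offset and a
    soft-iron correction matrix such that corrected = S @ (raw - offset)
    lies on a unit sphere. Plain linear least squares; a sketch only."""
    x, y, z = samples[:, 0], samples[:, 1], samples[:, 2]
    # General quadric: a x^2 + b y^2 + c z^2 + 2d xy + 2e xz + 2f yz
    #                  + g x + h y + i z = 1
    D = np.column_stack([x*x, y*y, z*z, 2*x*y, 2*x*z, 2*y*z, x, y, z])
    a, b, c, d, e, f, g, h, i = np.linalg.lstsq(
        D, np.ones(len(samples)), rcond=None)[0]
    Q = np.array([[a, d, e], [d, b, f], [e, f, c]])
    offset = np.linalg.solve(-2.0 * Q, np.array([g, h, i]))  # hard iron
    r = 1.0 + offset @ Q @ offset
    S = np.linalg.cholesky(Q / r).T                           # soft iron
    return offset, S

# Usage on synthetic data: a sphere of radius 50 shifted by a known bias.
rng = np.random.default_rng(0)
v = rng.standard_normal((500, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
raw = 50.0 * v + np.array([10.0, -5.0, 3.0])
offset, S = fit_magnetometer_calibration(raw)
print("estimated hard-iron offset:", np.round(offset, 2))
```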
Parameter Calibration of GTN Damage Model and Formability Analysis of 22MnB5 in Hot Forming Process
NASA Astrophysics Data System (ADS)
Ying, Liang; Liu, Wenquan; Wang, Dantong; Hu, Ping
2017-11-01
Hot forming of high strength steel at elevated temperatures is an attractive technology for lightweighting vehicle bodies. The mechanical behavior of boron steel 22MnB5 strongly depends on temperature, which makes process design more difficult. In this paper, the Gurson-Tvergaard-Needleman (GTN) model is used to study the formability of 22MnB5 sheet at different temperatures. First, the rheological behavior of 22MnB5 is analyzed through a series of hot tensile tests over a temperature range of 600-800 °C. Then, a detailed procedure to calibrate the damage parameters is given, based on response surface methodology and a genetic algorithm. The GTN model, together with the calibrated damage parameters, is then implemented to simulate the deformation and damage evolution of 22MnB5 in a high-temperature Nakazima test. The capability of the GTN model as a suitable tool to evaluate sheet formability is confirmed by comparing experimental and calculated results. Finally, as a practical application, the forming limit diagram of 22MnB5 at 700 °C is constructed using the Nakazima simulation and the Marciniak-Kuczynski (M-K) model, respectively. Comparing the predictions of these two approaches with the experimental results shows that the simulation integrating the GTN model is more reliable.
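To make the calibration step concrete, here is a minimal sketch in which an evolutionary optimizer (SciPy's differential evolution, standing in for the genetic algorithm named above) searches GTN damage parameters so that a hypothetical response surface reproduces measured fracture strains. The surrogate coefficients, targets, and parameter ranges are invented for illustration.

```python
import numpy as np
from scipy.optimize import differential_evolution

temps = np.array([600.0, 700.0, 800.0])            # test temperatures, °C

def surrogate(p, T):
    """Hypothetical response surface: fracture strain vs. (f0, fc, fN)."""
    f0, fc, fn = p
    return 0.3 + 1.2 * fc - 20.0 * f0 - 2.0 * fn + 0.0003 * (T - 600.0)

measured = np.array([0.33, 0.36, 0.39])            # assumed experimental strains

def objective(p):
    # Sum of squared misfits over the three test temperatures.
    return np.sum((surrogate(p, temps) - measured) ** 2)

bounds = [(1e-4, 1e-2), (0.01, 0.15), (0.001, 0.08)]  # assumed parameter ranges
res = differential_evolution(objective, bounds, seed=0, tol=1e-12)
print("calibrated (f0, fc, fN):", np.round(res.x, 5))
```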
NASA Astrophysics Data System (ADS)
Gupta, A.; Singh, P. J.; Gaikwad, D. Y.; Udupa, D. V.; Topkar, A.; Sahoo, N. K.
2018-02-01
An experimental setup is developed for trace-level detection of heavy water (HDO) using the off-axis integrated cavity output spectroscopy technique. The absorption spectrum of water samples is recorded in the spectral range 7190.7-7191.5 cm-1 with a diode laser as the light source. From the recorded water vapor absorption spectrum, the heavy water concentration is determined from the HDO and water absorption lines. The effect of cavity gain nonlinearity with per-pass absorption is studied. A signal processing and data fitting procedure is devised to obtain linear calibration curves by including nonlinear cavity gain effects in the calculation. Initial calibration of mirror reflectivity is performed by measurements on a natural water sample. The signal processing and data fitting method has been validated by measuring the HDO concentration in water samples over a wide range, from 20 ppm to 2280 ppm, yielding a linear calibration curve. The average measurement time is about 30 s. The experimental technique presented in this paper could be applied to the development of a portable instrument for fast measurement of the water isotopic composition in heavy water plants and for the detection of heavy water leaks in pressurized heavy water reactors.
National Transonic Facility Wall Pressure Calibration Using Modern Design of Experiments (Invited)
NASA Technical Reports Server (NTRS)
Underwood, Pamela J.; Everhart, Joel L.; DeLoach, Richard
2001-01-01
The Modern Design of Experiments (MDOE) has been applied to wind tunnel testing at NASA Langley Research Center for several years. At Langley, MDOE has proven to be a useful and robust approach to aerodynamic testing that yields significant reductions in the cost and duration of experiments while still providing the highest quality research results. This paper extends its application to empty-tunnel wall pressure calibrations, which are performed in support of wall interference corrections. The experimental objectives and the theoretical design process are presented. To validate the tunnel-empty-calibration experiment design, preliminary response surface models calculated from previously acquired data are also presented. Finally, lessons learned and future wall interference applications of MDOE are discussed.
Calibrating the orientation between a microlens array and a sensor based on projective geometry
NASA Astrophysics Data System (ADS)
Su, Lijuan; Yan, Qiangqiang; Cao, Jun; Yuan, Yan
2016-07-01
We demonstrate a method for calibrating a microlens array (MLA) with a sensor component by building a plenoptic camera with a conventional prime lens. The calibration method includes a geometric model, a setup to adjust the distance (L) between the prime lens and the MLA, a calibration procedure for determining the subimage centers, and an optimization algorithm. The geometric model introduces nine unknown parameters regarding the centers of the microlenses and their images, whereas the distance adjustment setup provides an initial guess for the distance L. Simulation results verify the effectiveness and accuracy of the proposed method. The experimental results demonstrate that the calibration process can be performed with a commercial prime lens and that the proposed method can be used to quantitatively evaluate whether an MLA and a sensor are assembled properly for plenoptic systems.
Data filtering with support vector machines in geometric camera calibration.
Ergun, B; Kavzoglu, T; Colkesen, I; Sahin, C
2010-02-01
The use of non-metric digital cameras in close-range photogrammetric applications and machine vision has become a popular research agenda. Being an essential component of photogrammetric evaluation, camera calibration is a crucial stage for non-metric cameras. Therefore, accurate camera calibration and orientation procedures have become prerequisites for the extraction of precise and reliable 3D metric information from images. The lack of accurate inner orientation parameters can lead to unreliable results in the photogrammetric process. A camera can be well defined by its principal distance, principal point offset and lens distortion parameters. Different camera models have been formulated and used in close-range photogrammetry, but generally sensor orientation and calibration are performed with a perspective geometrical model by means of bundle adjustment. In this study, a support vector machine (SVM) with a radial basis function kernel is employed to model the distortions measured for the Olympus E10 camera system with an aspherical zoom lens, which are later used in the geometric calibration process. The intent is to introduce an alternative approach for the on-the-job photogrammetric calibration stage. Experimental results for the DSLR camera at three focal length settings (9, 18 and 36 mm) were estimated using bundle adjustment with additional parameters, and analyses were conducted based on object point discrepancies and standard errors. Results show the robustness of the SVM approach in correcting image coordinates by modelling the total distortions of the on-the-job calibration process using a limited number of images.
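As a minimal sketch of the distortion-modelling idea, the following Python fragment trains an RBF-kernel support vector regressor on a synthetic radial distortion field and uses it to correct an image coordinate. The data are simulated, not measurements from the Olympus E10, and scikit-learn's SVR stands in for whatever SVM implementation the authors used.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
xy = rng.uniform(-1, 1, size=(500, 2))            # normalized image coords
r2 = (xy ** 2).sum(axis=1)
# Synthetic x-distortion: radial term plus measurement noise.
dx = 0.05 * r2 * xy[:, 0] + 0.001 * rng.standard_normal(500)

model_dx = SVR(kernel="rbf", C=10.0, epsilon=1e-4).fit(xy, dx)

# Correct a measured image coordinate by subtracting the predicted distortion.
pt = np.array([[0.7, -0.4]])
corrected_x = pt[0, 0] - model_dx.predict(pt)[0]
print("corrected x coordinate:", corrected_x)
```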
A Single-Vector Force Calibration Method Featuring the Modern Design of Experiments
NASA Technical Reports Server (NTRS)
Parker, P. A.; Morton, M.; Draper, N.; Line, W.
2001-01-01
This paper proposes a new concept in force balance calibration. An overview of the state-of-the-art in force balance calibration is provided with emphasis on both the load application system and the experimental design philosophy. Limitations of current systems are detailed in the areas of data quality and productivity. A unique calibration loading system integrated with formal experimental design techniques has been developed and designated as the Single-Vector Balance Calibration System (SVS). This new concept addresses the limitations of current systems. The development of a quadratic and cubic calibration design is presented. Results from experimental testing are compared and contrasted with conventional calibration systems. Analyses of data are provided that demonstrate the feasibility of this concept and provide new insights into balance calibration.
NASA Astrophysics Data System (ADS)
Butt, Ali
Crack propagation in a solid rocket motor environment is difficult to measure directly. This experimental and analytical study evaluated the viability of real-time radiography for detecting bore regression and propellant crack propagation speed. The scope included the quantitative interpretation of crack tip velocity from simulated radiographic images of a burning, center-perforated grain and actual real-time radiographs taken on a rapid-prototyped model that dynamically produced the surface movements modeled in the simulation. The simplified motor simulation portrayed a bore crack that propagated radially at a speed 10 times the burning rate of the bore. By comparing the experimental image interpretation with the calibrated surface inputs, measurement accuracies were quantified. The average measurements of the bore radius were within 3% of the calibrated values, with a maximum error of 7%. The crack tip speed could be characterized with image processing algorithms, but not with the dynamic calibration data. The laboratory data revealed that noise in the transmitted X-ray intensity makes sensing crack tip propagation from changes in the centerline transmitted intensity level impractical with the algorithms employed.
Yang, Tao; Sezer, Hayri; Celik, Ismail B.; ...
2015-06-02
In the present paper, a physics-based procedure combining experiments and multi-physics numerical simulations is developed for the overall analysis of SOFC operational diagnostics and performance predictions. In this procedure, essential information for the fuel cell is first extracted using empirical polarization analysis in conjunction with experiments, and then refined by multi-physics numerical simulations via simultaneous analysis and calibration of the polarization curve and impedance behavior. The performance at different utilization cases and operating currents is also predicted to confirm the accuracy of the proposed model. It is demonstrated that, with the present electrochemical model, three air/fuel flow conditions are needed to produce a set of complete data for better understanding of the processes occurring within SOFCs. After calibration against button cell experiments, the methodology can be used to assess the performance of a planar cell without further calibration. The proposed methodology would accelerate the calibration process and improve the efficiency of design and diagnostics.
NASA Technical Reports Server (NTRS)
Held, D.; Werner, C.; Wall, S.
1983-01-01
The absolute amplitude calibration of the spaceborne Seasat SAR data set is presented based on previous relative calibration studies. A scale factor making it possible to express the perceived radar brightness of a scene in units of sigma-zero is established. The system components are analyzed for error contribution, and the calibration techniques are introduced for each stage. These include: A/D converter saturation tests; prevention of clipping in the processing step; and converting the digital image into the units of received power. Experimental verification was performed by screening and processing the data of the lava flow surrounding the Pisgah Crater in Southern California, for which previous C-130 airborne scatterometer data were available. The average backscatter difference between the two data sets is estimated to be 2 dB in the brighter, and 4 dB in the dimmer regions. For the SAR a calculated uncertainty of 3 dB is expected.
Wang, Mi; Fan, Chengcheng; Yang, Bo; Jin, Shuying; Pan, Jun
2016-01-01
Satellite attitude accuracy is an important factor affecting the geometric processing accuracy of high-resolution optical satellite imagery. To address the problem that the accuracy of the Yaogan-24 remote sensing satellite's on-board attitude data processing is not high enough to meet its image geometry processing requirements, we developed an approach involving on-ground attitude data processing and verification against the digital orthophoto (DOM) and digital elevation model (DEM) of a geometric calibration field. The approach comprises three modules: on-ground processing based on a bidirectional filter, overall weighted smoothing and fitting, and evaluation in the geometric calibration field. Our experimental results demonstrate that the proposed on-ground processing method is both robust and feasible, ensuring the quality of the observation data and the convergence and stability of the parameter estimation model. In addition, both Euler angles and quaternions can be used to build a mathematical fitting model, while the orthogonal polynomial fitting model is more suitable for modeling the attitude parameters. Furthermore, compared to image geometric processing based on on-board attitude data, the accuracy of uncontrolled and relative geometric positioning of the imagery can be increased by about 50%. PMID:27483287
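The two processing ingredients named above, a bidirectional (zero-phase) filter and an orthogonal polynomial fit, can be illustrated on synthetic attitude data. In this sketch the cutoff frequency, filter order, polynomial degree, and the signal itself are assumptions chosen only for demonstration.

```python
import numpy as np
from scipy.signal import butter, filtfilt

t = np.linspace(0.0, 10.0, 1000)                      # seconds
rng = np.random.default_rng(2)
roll = 0.01 * np.sin(0.5 * t) + 1e-4 * rng.standard_normal(t.size)

b, a = butter(4, 0.05)            # 4th-order low-pass, normalized cutoff
roll_smooth = filtfilt(b, a, roll)  # forward-backward ("bidirectional") pass

# Orthogonal (Chebyshev) polynomial fit of the smoothed attitude angle.
cheb = np.polynomial.chebyshev.Chebyshev.fit(t, roll_smooth, deg=8)
print("fitted roll angle at t = 5 s:", cheb(5.0))
```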
Cao, Jianping; Xiong, Jianyin; Wang, Lixin; Xu, Ying; Zhang, Yinping
2016-09-06
Solid-phase microextraction (SPME) is regarded as a nonexhaustive sampling technique with a smaller extraction volume and a shorter extraction time than traditional sampling techniques and is hence widely used. The SPME sampling process is affected by convection or diffusion along the coating surface, but this factor has seldom been studied. This paper derives an analytical model to characterize SPME sampling for semivolatile organic compounds (SVOCs) as well as for volatile organic compounds (VOCs) by considering the surface mass transfer process. Using this model, the chemical concentrations in a sample matrix can be conveniently calculated. In addition, the model can be used to determine the characteristic parameters (partition coefficient and diffusion coefficient) for typical SPME chemical samplings (SPME calibration). Experiments using SPME sampling of two typical SVOCs, dibutyl phthalate (DBP) in a sealed chamber and di(2-ethylhexyl) phthalate (DEHP) in a ventilated chamber, were performed to measure the two characteristic parameters. The experimental results demonstrated the effectiveness of the model and calibration method. Experimental data from the literature (VOCs sampled by SPME) were used to further validate the model. This study should prove useful for relatively rapid quantification of the concentrations of different chemicals in various circumstances with SPME.
Improvements in geothermometry. Final technical report. Rev
DOE Office of Scientific and Technical Information (OSTI.GOV)
Potter, J.; Dibble, W.; Parks, G.
1982-08-01
Alkali and alkaline earth geothermometers are useful for estimating geothermal reservoir temperatures, though a general theoretical basis has yet to be established and experimental calibration needs improvement. Equilibrium cation exchange between feldspars provided the original basis for the Na-K and Na-K-Ca geothermometers (Fournier and Truesdell, 1973), but theoretical, field and experimental evidence prove that neither equilibrium nor feldspars are necessary. Here, evidence is summarized in support of these observations, concluding that these geothermometers can be expected to have a surprisingly wide range of applicability, but that the reasons behind such broad applicability are not yet understood. Early experimental work proved that water-rock interactions are slow at low temperatures, so experimental calibration at temperatures below 150 °C is impractical. Theoretical methods and field data were used instead for all work at low temperatures. Experimental methods were emphasized for temperatures above 150 °C, and the simplest possible solid and solution compositions were used to permit investigation of one process or question at a time. Unexpected results in experimental work prevented complete integration of the various portions of the investigation.
Hall Probe Calibration System Design for the Mu2e Solenoid Field Mapping System
Orozco, Charles; Elementi, Luciano; Feher, Sandor; ...
2018-02-22
The goal of the Mu2e experiment at Fermilab is to search for charged-lepton flavor violation by looking for neutrino-less muon to electron conversion in the field of the nucleus. The Mu2e experimental apparatus utilizes a complex magnetic field in the muon generation and momentum and charge selection process. Precise knowledge of the magnetic field is crucial. It is planned to map the solenoid field with calibrated 3D Hall probes to 10⁻⁵ accuracy. This article describes a new design of a Hall probe calibration system that will be used to calibrate 3D Hall probes to better than 10⁻⁵ accuracy for the Mu2e Solenoid Field Mapping System.
Analysis of Sting Balance Calibration Data Using Optimized Regression Models
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Bader, Jon B.
2010-01-01
Calibration data of a wind tunnel sting balance was processed using a candidate math model search algorithm that recommends an optimized regression model for the data analysis. During the calibration, the normal force and the moment at the balance moment center were selected as independent calibration variables. The sting balance itself had two moment gages. Therefore, after analyzing the connection between calibration loads and gage outputs, it was decided to choose the difference and the sum of the gage outputs as the two responses that best describe the behavior of the balance. The math model search algorithm was applied to these two responses, and an optimized regression model was obtained for each response. Classical strain gage balance load transformations and the equations of the deflection of a cantilever beam under load are used to show that the search algorithm's two optimized regression models are supported by a theoretical analysis of the relationship between the applied calibration loads and the measured gage outputs. The analysis of the sting balance calibration data set is a rare example of a situation in which the terms of a balance regression model can be derived directly from first principles of physics. In addition, it is interesting to note that the search algorithm recommended the correct regression model term combinations using only a set of statistical quality metrics applied to the experimental data during the algorithm's term selection process.
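A toy version of such a term search is easy to write down: enumerate candidate term subsets for one response, fit each by least squares, and keep the subset with the best statistical quality metric (here the corrected AIC, one plausible choice among several). The loads, noise, and "true" model below are synthetic.

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)
N = 200
F = rng.uniform(-1, 1, N)          # normal force (normalized)
M = rng.uniform(-1, 1, N)          # moment (normalized)
# Synthetic "difference-style" response with a small interaction term.
response = 1.2 * F - 0.8 * M + 0.05 * F * M + 0.01 * rng.standard_normal(N)

candidates = {"F": F, "M": M, "F*M": F * M, "F^2": F ** 2, "M^2": M ** 2}
best = None
for k in range(1, len(candidates) + 1):
    for subset in itertools.combinations(candidates, k):
        X = np.column_stack([np.ones(N)] + [candidates[s] for s in subset])
        beta, *_ = np.linalg.lstsq(X, response, rcond=None)
        rss = np.sum((response - X @ beta) ** 2)
        p = X.shape[1]
        aicc = N * np.log(rss / N) + 2 * p + 2 * p * (p + 1) / (N - p - 1)
        if best is None or aicc < best[0]:
            best = (aicc, subset)
print("selected regression terms:", best[1])
```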
A road map for multi-way calibration models.
Escandar, Graciela M; Olivieri, Alejandro C
2017-08-07
A large number of experimental applications of multi-way calibration are known, and a variety of chemometric models are available for the processing of multi-way data. While the main focus has been directed towards three-way data, due to the availability of various instrumental matrix measurements, a growing number of reports are being produced on higher-order signals of increasing complexity. The purpose of this review is to present a general scheme for selecting the appropriate data processing model, according to the properties exhibited by the multi-way data. In spite of the complexity of the multi-way instrumental measurements, simple criteria can be proposed for model selection, based on the presence and number of the so-called multi-linearity breaking modes (instrumental modes that break the low-rank multi-linearity of the multi-way arrays), and also on the existence of mutually dependent instrumental modes. Recent literature reports on multi-way calibration are reviewed, with emphasis on the models that were selected for data processing.
Kuenze, Christopher; Eltouhky, Moataz; Thomas, Abbey; Sutherlin, Mark; Hart, Joseph
2016-05-01
Collecting torque data using a multimode dynamometer is common in sports-medicine research, but the error in torque measurements across multiple sites and dynamometers has not been established. This observational study, conducted in 3 university laboratories at separate institutions with 2 Biodex System 3 dynamometers and 1 Biodex System 4 dynamometer, assessed the validity of 2 calibration protocols across the 3 dynamometers and the error associated with torque measurement for each system. System calibration was completed using the manufacturer-recommended single-weight method and an experimental calibration method using a series of progressive weights. Both calibration methods were compared with a manually calculated theoretical torque across a range of applied weights. Relative error, absolute error, and percent error were calculated at each weight, and each outcome variable was compared between systems using 95% confidence intervals across low (0-65 Nm), moderate (66-110 Nm), and high (111-165 Nm) torque categorizations. Calibration coefficients were established for each system using both calibration protocols. Within each system, the calibration coefficients generated using the single-weight (System 4 = 2.42 [0.90], System 3a = 1.37 [1.11], System 3b = -0.96 [1.45]) and experimental calibration protocols (System 4 = 3.95 [1.08], System 3a = -0.79 [1.23], System 3b = 2.31 [1.66]) were similar and displayed acceptable mean relative error compared with calculated theoretical torque values. Overall, percent error was greatest for all 3 systems in low-torque conditions (System 4 = 11.66% [6.39], System 3a = 6.82% [11.98], System 3b = 4.35% [9.49]). The System 4 significantly overestimated torque across all 3 weight increments, and the System 3b overestimated torque over the moderate-torque increment. Conversion of raw voltage to torque values using the single-calibration-weight method is valid and comparable to a more complex multiweight calibration process; however, calibration must be done for each individual system to ensure accurate data collection.
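The contrast between the two protocols reduces to how the volts-to-torque conversion is estimated. The sketch below, with invented voltages and torques, computes a single-weight gain from one known load and a multi-weight gain (with intercept) by least squares, then reports percent error by load, mirroring the outcome measures above.

```python
import numpy as np

applied_torque = np.array([20.0, 45.0, 70.0, 95.0, 120.0, 145.0])  # Nm
raw_volts = np.array([0.41, 0.90, 1.42, 1.93, 2.40, 2.95])         # invented

# Single-weight method: one known torque fixes the gain.
gain_single = applied_torque[2] / raw_volts[2]

# Multi-weight method: least-squares fit with an intercept.
A = np.column_stack([raw_volts, np.ones_like(raw_volts)])
(gain_multi, offset), *_ = np.linalg.lstsq(A, applied_torque, rcond=None)

est = gain_multi * raw_volts + offset
pct_error = 100.0 * (est - applied_torque) / applied_torque
print(f"single-point gain {gain_single:.2f}, fitted gain {gain_multi:.2f}")
print("percent error by load:", np.round(pct_error, 2))
```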
Imaging of particles with 3D full parallax mode with two-color digital off-axis holography
NASA Astrophysics Data System (ADS)
Kara-Mohammed, Soumaya; Bouamama, Larbi; Picart, Pascal
2018-05-01
This paper proposes an approach based on two orthogonal views and two wavelengths for recording off-axis two-color holograms. The approach makes it possible to discriminate particles aligned along the sight-view axis. The experimental set-up is based on a double Mach-Zehnder architecture in which two different wavelengths provide the reference and object beams. The digital processing used to obtain images of the particles is based on convolution, so that the images have no wavelength dependence. The spatial bandwidth of the angular spectrum transfer function is adapted in order to increase the maximum reconstruction distance, which is generally limited to a few tens of millimeters. In order to obtain the images of particles in the 3D volume, a calibration process based on the modulation theorem is proposed to perfectly superimpose the two views in a common XYZ coordinate frame. The experimental set-up is applied to two-color hologram recording of moving non-calibrated opaque particles with an average diameter of about 150 μm. After processing the two-color holograms with image reconstruction and view calibration, the locations of particles in the 3D volume can be obtained. In particular, the ambiguity of close particles, which generates hidden particles in a single-view scheme, can be removed to determine the exact number of particles in the region of interest.
Calibration of a dual-PTZ camera system for stereo vision
NASA Astrophysics Data System (ADS)
Chang, Yau-Zen; Hou, Jung-Fu; Tsao, Yi Hsiang; Lee, Shih-Tseng
2010-08-01
In this paper, we propose a calibration process for the intrinsic and extrinsic parameters of dual-PTZ camera systems. The calibration is based on a complete definition of six coordinate systems fixed at the image planes and the pan and tilt rotation axes of the cameras. Misalignments between estimated and ideal coordinates of image corners are formed into cost values to be solved by the Nelder-Mead simplex optimization method. Experimental results show that the system is able to obtain 3D coordinates of objects with a consistent accuracy of 1 mm when the distance between the dual-PTZ camera set and the objects is between 0.9 and 1.1 meters.
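The cost-minimization step can be sketched in a few lines: a small pose vector is adjusted by Nelder-Mead until projected corner coordinates match the observed ones. The 2D toy projection below is an invented stand-in for the paper's full six-coordinate-system model.

```python
import numpy as np
from scipy.optimize import minimize

ideal = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])  # corners

def project(corners, params):
    # Toy pose model: in-plane rotation (pan), a cos(tilt) scale, and a
    # translation. Note tilt enters through cos() only, so its sign is
    # not identifiable in this simplified model.
    pan, tilt, tx, ty = params
    c, s = np.cos(pan), np.sin(pan)
    R = np.array([[c, -s], [s, c]]) * np.cos(tilt)
    return corners @ R.T + np.array([tx, ty])

observed = project(ideal, [0.03, 0.01, 0.12, -0.05])   # synthetic measurement

def cost(params):
    # Sum of squared misalignments between projected and observed corners.
    return np.sum((project(ideal, params) - observed) ** 2)

res = minimize(cost, x0=np.zeros(4), method="Nelder-Mead",
               options={"xatol": 1e-10, "fatol": 1e-12})
print("recovered pose parameters:", np.round(res.x, 4))
```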
How to obtain accurate resist simulations in very low-k1 era?
NASA Astrophysics Data System (ADS)
Chiou, Tsann-Bim; Park, Chan-Ha; Choi, Jae-Seung; Min, Young-Hong; Hansen, Steve; Tseng, Shih-En; Chen, Alek C.; Yim, Donggyu
2006-03-01
A procedure for calibrating a resist model iteratively adjusts appropriate parameters until the simulations of the model match the experimental data. The tunable parameters may include the shape of the illuminator, the geometry and transmittance/phase of the mask, light source and scanner-related parameters that affect imaging quality, resist process control and, most importantly, the physical/chemical factors in the resist model. The resist model can be accurately calibrated by measuring critical dimensions (CD) of a focus-exposure matrix (FEM), and the technique has been demonstrated to be very successful in predicting lithographic performance. However, resist model calibration is more challenging in the low-k1 (<0.3) regime because numerous uncertainties, such as mask and resist CD metrology errors, are becoming too large to be ignored. This study demonstrates a resist model calibration procedure for a 0.29 k1 process using a 6% halftone mask containing 2D brickwall patterns. The influence of different scanning electron microscopes (SEMs) and their wafer metrology signal analysis algorithms on the accuracy of the resist model is evaluated. As an example of resist pattern metrology issues, the treatment of the sidewall angle is demonstrated for resist line ends, where the contrast is relatively low. Additionally, mask optical proximity correction (OPC) and corner rounding are considered in the calibration procedure, based on captured SEM images. Accordingly, the average root-mean-square (RMS) error, which is the difference between simulated and experimental CDs, can be improved by considering these metrological issues. Moreover, a weighting method and a measured CD tolerance are proposed to handle the different CD variations of the various edge points of the wafer resist pattern. After the weighting method is implemented and the CD selection criteria applied, the RMS error can be further suppressed. Therefore, the resist CD and process window can be confidently evaluated using the accurately calibrated resist model. One example simulates the sensitivity to mask pattern error, which is helpful in specifying mask CD control.
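The weighting and CD-selection ideas amount to a weighted RMS over the retained measurement points. A minimal sketch, with invented CD values, variations, and tolerance:

```python
import numpy as np

cd_sim = np.array([32.1, 36.4, 40.2, 31.0])     # simulated CDs, nm
cd_meas = np.array([32.0, 36.0, 40.5, 28.0])    # measured CDs, nm
cd_sigma = np.array([0.3, 0.3, 0.5, 1.5])       # per-point CD variation, nm
tolerance = 3.0                                  # nm, assumed selection cutoff

keep = np.abs(cd_sim - cd_meas) < tolerance      # CD selection criterion
w = 1.0 / cd_sigma[keep] ** 2                    # inverse-variance weights
weighted_rms = np.sqrt(np.sum(w * (cd_sim - cd_meas)[keep] ** 2) / np.sum(w))
print(f"weighted RMS error: {weighted_rms:.2f} nm")
```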
Multiple Source DF (Direction Finding) Signal Processing: An Experimental System,
The MUltiple SIgnal Characterization (MUSIC) algorithm is an implementation of the Signal Subspace Approach to provide parameter estimates of... the signal subspace (obtained from the received data) and the array manifold (obtained via array calibration). The MUSIC algorithm has been
Design and Analysis of a Sensor System for Cutting Force Measurement in Machining Processes
Liang, Qiaokang; Zhang, Dan; Coppola, Gianmarc; Mao, Jianxu; Sun, Wei; Wang, Yaonan; Ge, Yunjian
2016-01-01
Multi-component force sensors have infiltrated a wide variety of automation products since the 1970s. However, one seldom finds full-component sensor systems available in the market for cutting force measurement in machining processes. In this paper, a new six-component sensor system with a compact monolithic elastic element (EE) is designed and developed to detect the tangential cutting forces Fx, Fy and Fz (i.e., forces along the x-, y-, and z-axes) as well as the cutting moments Mx, My and Mz (i.e., moments about the x-, y-, and z-axes) simultaneously. Optimal structural parameters of the EE are carefully designed via simulation-driven optimization. Moreover, a prototype sensor system is fabricated and applied to a 5-axis parallel kinematic machining center. Calibration experiments demonstrate that the system is capable of measuring cutting forces and moments with good linearity while minimizing coupling error. Both the Finite Element Analysis (FEA) and the calibration experiments validate the high performance of the proposed sensor system, which is expected to be adopted in machining processes. PMID:26751451
Benschop, R; Draaisma, D
2000-01-01
A prominent feature of late nineteenth-century psychology was its intense preoccupation with precision. Precision was at once an ideal and an argument: the quest for precision helped psychology to establish its status as a mature science, sharing a characteristic concern with the natural sciences. We will analyse how psychologists set out to produce precision in 'mental chronometry', the measurement of the duration of psychological processes. In his Leipzig laboratory, Wundt inaugurated an elaborate research programme on mental chronometry. We will look at the problem of calibration of experimental apparatus and will describe the intricate material, literary, and social technologies involved in the manufacture of precision. First, we shall discuss some of the technical problems involved in the measurement of ever shorter time-spans. Next, the Cattell-Berger experiments will help us to argue against the received view that all the precision went into the hardware, and practically none into the social organization of experimentation. Experimenters made deliberate efforts to bring themselves and their subjects under a regime of control and calibration similar to that which reigned over the experimental machinery. In Leipzig psychology, the particular blend of material and social technology resulted in a specific object of study: the generalized mind. We will then show that the distribution of precision in experimental psychology outside Leipzig demanded a concerted effort of instruments, texts, and people. It will appear that the forceful attempts to produce precision and uniformity had some rather paradoxical consequences.
Geometric Characterization of Multi-Axis Multi-Pinhole SPECT
DiFilippo, Frank P.
2008-01-01
A geometric model and calibration process are developed for SPECT imaging with multiple pinholes and multiple mechanical axes. Unlike the typical situation where pinhole collimators are mounted directly to rotating gamma ray detectors, this geometric model allows for independent rotation of the detectors and pinholes, for the case where the pinhole collimator is physically detached from the detectors. This geometric model is applied to a prototype small animal SPECT device with a total of 22 pinholes and which uses dual clinical SPECT detectors. All free parameters in the model are estimated from a calibration scan of point sources and without the need for a precision point source phantom. For a full calibration of this device, a scan of four point sources with 360° rotation is suitable for estimating all 95 free parameters of the geometric model. After a full calibration, a rapid calibration scan of two point sources with 180° rotation is suitable for estimating the subset of 22 parameters associated with repositioning the collimation device relative to the detectors. The high accuracy of the calibration process is validated experimentally. Residual differences between predicted and measured coordinates are normally distributed with 0.8 mm full width at half maximum and are estimated to contribute 0.12 mm root mean square to the reconstructed spatial resolution. Since this error is small compared to other contributions arising from the pinhole diameter and the detector, the accuracy of the calibration is sufficient for high resolution small animal SPECT imaging. PMID:18293574
A fundamental study of suction for Laminar Flow Control (LFC)
NASA Astrophysics Data System (ADS)
Watmuff, Jonathan H.
1992-10-01
This report covers the period forming the first year of the project. The aim is to experimentally investigate the effects of suction as a technique for Laminar Flow Control. Experiments are to be performed which require substantial modifications to the experimental facility. Considerable effort has been spent developing new high performance constant temperature hot-wire anemometers for general purpose use in the Fluid Mechanics Laboratory. Twenty instruments have been delivered. An important feature of the facility is that it is totally automated under computer control. Unprecedentedly large quantities of data can be acquired and the results examined using visualization tools developed specifically for studying the results of numerical simulations on graphics workstations. The experiment must be run for periods of up to a month at a time since the data is collected on a point-by-point basis. Several techniques were implemented to reduce the experimental run-time by a significant factor. Extra probes have been constructed and modifications have been made to the traverse hardware and to the real-time experimental code to enable multiple probes to be used, which will reduce the experimental run-time by the corresponding factor. Hot-wire calibration drift has been a frustrating problem owing to the large range of ambient temperatures experienced in the laboratory. The solution has been to repeat the calibrations at frequent intervals; however, the calibration process has consumed up to 40 percent of the run-time. A new method of correcting the drift is very nearly finalized, and when implemented it will also lead to a significant reduction in the experimental run-time.
Zhao, Yanzhi; Zhang, Caifeng; Zhang, Dan; Shi, Zhongpan; Zhao, Tieshi
2016-01-01
Improving the accuracy and enlarging the measuring range of six-axis force sensors has become an urgent objective for wider applications in aircraft landing, rocket thrust, and spacecraft docking testing experiments. However, it is still difficult to achieve high accuracy and a large measuring range with traditional parallel six-axis force sensors due to the influence of the gaps and friction of the joints. Therefore, to overcome these limitations, this paper proposes a 6-Universal-Prismatic-Universal-Revolute (UPUR) joint parallel mechanism with flexible joints to develop a large-measurement-range six-axis force sensor. The structural characteristics of the sensor are analyzed in comparison with a traditional parallel sensor based on the Stewart platform. The force transfer relation of the sensor is deduced, and the force Jacobian matrix is obtained using screw theory in two cases: the ideal state, and the state in which the flexibility of each flexible joint is considered. The prototype and loading calibration system are designed and developed. The K value method and the least squares method are used to process the experimental data, and type I and type II linearity errors are obtained. The experimental results show that the calibration error of the K value method is more than 13.4%, while the calibration error of the least squares method is 2.67%. The experimental results prove the feasibility of the sensor and the correctness of the theoretical analysis, which are expected to be adopted in practical applications. PMID:27529244
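The least-squares step above is, at its core, the recovery of a 6x6 calibration matrix from loading experiments. A minimal sketch with synthetic data (the "true" matrix, loadings, and noise are invented):

```python
import numpy as np

rng = np.random.default_rng(4)
C_true = np.eye(6) + 0.05 * rng.standard_normal((6, 6))  # unknown calibration

V = rng.uniform(-1, 1, size=(6, 100))             # 100 calibration loadings
W = C_true @ V + 1e-3 * rng.standard_normal((6, 100))  # measured wrenches

# Least squares: minimize ||W - C V||_F by solving V.T @ C.T = W.T.
C_est = np.linalg.lstsq(V.T, W.T, rcond=None)[0].T
print("max calibration coefficient error:", np.abs(C_est - C_true).max())
```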
Osthus, Dave; Godinez, Humberto C.; Rougier, Esteban; ...
2018-05-01
We present a generic method for automatically calibrating a computer code to an experiment, with uncertainty, for a given "training" set of computer code runs. The calibration technique is general and probabilistic, meaning the calibration uncertainty is represented in the form of a probability distribution. We demonstrate the calibration method by calibrating a combined Finite-Discrete Element Method (FDEM) to a Split Hopkinson Pressure Bar (SHPB) experiment with a granite sample. The probabilistic calibration method combines runs of a FDEM computer simulation for a range of "training" settings and experimental uncertainty to develop a statistical emulator. The process allows for calibration of input parameters and produces output quantities with uncertainty estimates for settings where simulation results are desired. Input calibration and FDEM fitted results are presented. We find that the maximum shear strength σ_t^max, and to a lesser extent the maximum tensile strength σ_n^max, govern the behavior of the stress-time curve before and around the peak, while the specific energy in Mode II (shear), E_t, largely governs the post-peak behavior of the stress-time curve. Good agreement is found between the calibrated FDEM and the SHPB experiment. Interestingly, we find the SHPB experiment to be rather uninformative for calibrating the softening-curve shape parameters (a, b, and c). This work stands as a successful demonstration of how a general probabilistic calibration framework can automatically calibrate FDEM parameters to an experiment.
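The emulator-based calibration loop can be miniaturized as follows: fit a Gaussian-process emulator to a handful of "training" runs over one input parameter, then pick the parameter value that maximizes the likelihood of an experimental datum, propagating both emulator and experimental variance. The training function, datum, and noise level are invented stand-ins for FDEM and the SHPB measurement, and a grid maximizer stands in for a full posterior sampler.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# "Training" runs of an expensive code over one input (a strength parameter).
theta_train = np.linspace(0.0, 1.0, 12).reshape(-1, 1)
peak_stress = np.sin(3.0 * theta_train).ravel() + 0.5 * theta_train.ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-6)
gp.fit(theta_train, peak_stress)          # the statistical emulator

y_exp, sigma_exp = 0.9, 0.05              # assumed experimental datum and noise
grid = np.linspace(0.0, 1.0, 500).reshape(-1, 1)
mu, sd = gp.predict(grid, return_std=True)
# Gaussian log-likelihood combining emulator and experimental variance.
log_like = -0.5 * (y_exp - mu) ** 2 / (sd ** 2 + sigma_exp ** 2)
print("calibrated parameter value:", grid[np.argmax(log_like), 0])
```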
Numerical Analysis of a Radiant Heat Flux Calibration System
NASA Technical Reports Server (NTRS)
Jiang, Shanjuan; Horn, Thomas J.; Dhir, V. K.
1998-01-01
A radiant heat flux gage calibration system exists in the Flight Loads Laboratory at NASA's Dryden Flight Research Center. This calibration system must be well understood if the heat flux gages calibrated in it are to provide useful data during radiant heating ground tests or flight tests of high speed aerospace vehicles. A part of the calibration system characterization process is to develop a numerical model of the flat plate heater element and heat flux gage, which will help identify errors due to convection, heater element erosion, and other factors. A 2-dimensional mathematical model of the gage-plate system has been developed to simulate the combined problem involving convection, radiation and mass loss by chemical reaction. A fourth order finite difference scheme is used to solve the steady state governing equations and determine the temperature distribution in the gage and plate, incident heat flux on the gage face, and flat plate erosion. Initial gage heat flux predictions from the model are found to be within 17% of experimental results.
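As a pointer to what such a finite-difference model involves at its simplest, the sketch below solves the steady 2-D Laplace equation on a plate cross-section by Jacobi iteration, with an assumed hot front face and cold back face. The actual characterization model additionally couples radiation, convection, and surface erosion, none of which are represented here.

```python
import numpy as np

nx, ny = 50, 50
T = np.zeros((ny, nx))
T[0, :] = 1000.0            # heated face (K), assumed boundary condition
T[-1, :] = 300.0            # back face (K), assumed

for _ in range(20000):
    T_old = T.copy()
    # Jacobi-style sweep: each interior node is the average of its neighbors.
    T[1:-1, 1:-1] = 0.25 * (T_old[:-2, 1:-1] + T_old[2:, 1:-1] +
                            T_old[1:-1, :-2] + T_old[1:-1, 2:])
    T[1:-1, 0] = T[1:-1, 1]       # insulated sides (zero-gradient)
    T[1:-1, -1] = T[1:-1, -2]
    if np.max(np.abs(T - T_old)) < 1e-6:
        break

print("center temperature:", round(T[ny // 2, nx // 2], 1), "K")
```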
A Novel Protocol for Model Calibration in Biological Wastewater Treatment
Zhu, Ao; Guo, Jianhua; Ni, Bing-Jie; Wang, Shuying; Yang, Qing; Peng, Yongzhen
2015-01-01
Activated sludge models (ASMs) have been widely used for process design, operation and optimization in wastewater treatment plants. However, it is still a challenge to achieve an efficient calibration for reliable application using conventional approaches. Here, we propose a novel calibration protocol, the Numerical Optimal Approaching Procedure (NOAP), for the systematic calibration of ASMs. The NOAP consists of three key steps in an iterative scheme: i) global factor sensitivity analysis for factor fixing; ii) pseudo-global parameter correlation analysis for detection of non-identifiable factors; and iii) formation of a parameter subset through estimation by a genetic algorithm. The validity and applicability are confirmed using experimental data obtained from two independent wastewater treatment systems, a sequencing batch reactor and a continuous stirred-tank reactor. The results indicate that the NOAP can effectively determine the optimal parameter subset and successfully perform model calibration and validation for these two different systems. The proposed NOAP is expected to be useful for the automatic calibration of ASMs and to be potentially applicable to other ordinary differential equation models. PMID:25682959
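The factor-fixing idea in step i) can be illustrated with a crude one-at-a-time sensitivity screen (a simplification of the global analysis named above): perturb each factor around its default, rank the normalized output changes, and fix factors that fall below a threshold before the genetic-algorithm estimation. The three-parameter "model" is an invented stand-in for an ASM simulation.

```python
import numpy as np

def model(p):
    # Hypothetical scalar output of an ASM-like simulation; the third
    # factor is nearly inert by construction.
    k_h, mu_max, b = p
    return 2.0 * k_h + mu_max ** 2 + 0.001 * b

nominal = np.array([0.5, 1.0, 0.2])
sensitivity = []
for i in range(3):
    hi, lo = nominal.copy(), nominal.copy()
    hi[i] *= 1.1
    lo[i] *= 0.9
    # Central-difference estimate of |d(output)/d(factor)|.
    sensitivity.append(abs(model(hi) - model(lo)) / (0.2 * nominal[i]))

print("normalized sensitivities:", np.round(sensitivity, 4))
# A factor whose sensitivity falls below a chosen threshold (here the
# third one) would be fixed at its default before parameter estimation.
```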
Research on the calibration of ultraviolet energy meters
NASA Astrophysics Data System (ADS)
Lin, Fangsheng; Yin, Dejin; Li, Tiecheng; Lai, Lei; Xia, Ming
2016-10-01
Ultraviolet (UV) radiation is a kind of non-lighting radiation with wavelengths from 100 nm to 400 nm. Ultraviolet irradiance meters are now widely used in many areas. However, with the development of science and technology, especially in the light-curing industry, more and more UV energy meters (UV integrators) need to be calibrated. Because the structure, wavelength band, and measured power intensity of UV energy meters differ from those of traditional UV irradiance meters, research on their calibration is important. With reference to JJG879-2002, we at SIMT have independently developed a UV energy calibration device and detailed operating standards and experimental methods for UV energy calibration. In the calibration process of a UV energy meter, many factors affect the final results, including different UVA-band light sources, the different spectral responses of different brands of UV energy meters, the instability and non-uniformity of the UV light source, and temperature. We therefore need to take all of these factors into consideration to improve the accuracy of UV energy calibration.
GTE blade injection moulding modeling and verification of models during process approbation
NASA Astrophysics Data System (ADS)
Stepanenko, I. S.; Khaimovich, A. I.
2017-02-01
The simulation model for filling the mould was developed using Moldex3D and experimentally verified in order to perform further optimization calculations of the moulding process conditions. The method described in the article adjusts the finite-element model by differentially changing the power supplied to the heating elements that heat the injection mould in the simulation, minimizing the difference between the simulated and experimental melt-front profiles of the airfoil. Calibrating the injection mould for the gas-turbine engine blade in this way reduced the mean difference between the simulated melt-front profile and the experimental airfoil profile to no more than 4%.
Evaluation of the Quality of Action Cameras with Wide-Angle Lenses in Uav Photogrammetry
NASA Astrophysics Data System (ADS)
Hastedt, H.; Ekkel, T.; Luhmann, T.
2016-06-01
The application of light-weight cameras in UAV photogrammetry is required due to restrictions in payload. In general, consumer cameras with a normal lens type are applied to a UAV system. The availability of action cameras, like the GoPro Hero4 Black, with a wide-angle (fish-eye) lens offers new perspectives in UAV projects. In these investigations, different calibration procedures for fish-eye lenses are evaluated to quantify their accuracy potential in UAV photogrammetry. The GoPro Hero4 is evaluated using different acquisition modes. We investigate to what extent the standard calibration approaches in OpenCV and Agisoft PhotoScan/Lens can be applied to the evaluation processes in UAV photogrammetry. Therefore, different calibration setups and processing procedures are assessed and discussed. Additionally, pre-correction of the initial distortion with GoPro Studio and its application for photogrammetric purposes are evaluated. An experimental setup with a set of control points and a prospective flight scenario is chosen to evaluate the processing results using Agisoft PhotoScan. We analyse to what extent pre-calibration and pre-correction of a GoPro Hero4 reinforce the reliability and accuracy of a flight scenario.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Chao; Xu, Zhijie; Lai, Canhai
A hierarchical model calibration and validation is proposed for quantifying the confidence level of mass transfer prediction using a computational fluid dynamics (CFD) model, where solvent-based carbon dioxide (CO2) capture is simulated and simulation results are compared to parallel bench-scale experimental data. Two unit problems of increasing complexity are proposed to break down the complex physical/chemical processes of solvent-based CO2 capture into relatively simpler problems that separate the effects of physical transport and chemical reaction. This paper focuses on the calibration and validation of the first unit problem, i.e., the CO2 mass transfer across a falling monoethanolamine (MEA) film in the absence of chemical reaction. This problem is investigated both experimentally and numerically using nitrous oxide (N2O) as a surrogate for CO2. To capture the motion of the gas-liquid interface, a volume of fluid method is employed together with a one-fluid formulation to compute the mass transfer between the two phases. Parallel bench-scale experiments are designed and conducted to validate and calibrate the CFD models using a general Bayesian calibration. Two important transport parameters, namely Henry's constant and gas diffusivity, are calibrated to produce the posterior distributions, which will be used as the input for the second unit problem to address the chemical absorption of CO2 across the MEA falling film, where both mass transfer and chemical reaction are involved.
Calibration and analysis of genome-based models for microbial ecology.
Louca, Stilianos; Doebeli, Michael
2015-10-16
Microbial ecosystem modeling is complicated by the large number of unknown parameters and the lack of appropriate calibration tools. Here we present a novel computational framework for modeling microbial ecosystems, which combines genome-based model construction with statistical analysis and calibration to experimental data. Using this framework, we examined the dynamics of a community of Escherichia coli strains that emerged in laboratory evolution experiments, during which an ancestral strain diversified into two coexisting ecotypes. We constructed a microbial community model comprising the ancestral and the evolved strains, which we calibrated using separate monoculture experiments. Simulations reproduced the successional dynamics in the evolution experiments, and pathway activation patterns observed in microarray transcript profiles. Our approach yielded detailed insights into the metabolic processes that drove bacterial diversification, involving acetate cross-feeding and competition for organic carbon and oxygen. Our framework provides a missing link towards a data-driven mechanistic microbial ecology.
Simple laser vision sensor calibration for surface profiling applications
NASA Astrophysics Data System (ADS)
Abu-Nabah, Bassam A.; ElSoussi, Adnane O.; Al Alami, Abed ElRahman K.
2016-09-01
Due to the relatively large structures in the Oil and Gas industry, original equipment manufacturers (OEMs) have been implementing custom-designed laser vision sensor (LVS) surface profiling systems as part of quality control in their manufacturing processes. The rough manufacturing environment and the continuous movement and misalignment of these custom-designed tools adversely affect the accuracy of laser-based vision surface profiling applications. Accordingly, Oil and Gas businesses have been raising the demand on OEMs to implement practical and robust LVS calibration techniques prior to running any visual inspections. This effort introduces an LVS calibration technique that is a simplified version of two known calibration techniques commonly implemented to obtain a calibrated LVS system for surface profiling applications. Both calibration techniques are implemented virtually and experimentally to scan simulated and three-dimensional (3D) printed features of known profiles, respectively. Scanned data are transformed from the camera frame to points in the world coordinate system and compared with the input profiles to validate the capability of the introduced calibration technique against the more complex approach and to preliminarily assess the measurement technique for weld profiling applications. Moreover, the sensitivity to stand-off distance is analyzed to illustrate the practicality of the presented technique.
A new method to calibrate the absolute sensitivity of a soft X-ray streak camera
NASA Astrophysics Data System (ADS)
Yu, Jian; Liu, Shenye; Li, Jin; Yang, Zhiwen; Chen, Ming; Guo, Luting; Yao, Li; Xiao, Shali
2016-12-01
In this paper, we introduce a new method to calibrate the absolute sensitivity of a soft X-ray streak camera (SXRSC). The calibrations are done in the static mode using a small laser-produced X-ray source. A calibrated X-ray CCD is used as a secondary standard detector to monitor the X-ray source intensity. In addition, two sets of holographic flat-field grating spectrometers are used as the spectral discrimination systems of the SXRSC and the X-ray CCD. The absolute sensitivity of the SXRSC is obtained by comparing the signal counts of the SXRSC to the output counts of the X-ray CCD. Results show that the calibrated spectrum covers the range from 200 eV to 1040 eV. The change in absolute sensitivity near the carbon K-edge can also be clearly seen. The experimental values agree with the calculated values to within 29% error. Compared with previous calibration methods, the proposed method has several advantages: a wide spectral range, high accuracy, and simple data processing. Our calibration results can be used to make quantitative X-ray flux measurements in laser fusion research.
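The comparison step reduces to taking the ratio of the two detectors' counts on a common photon-energy grid. A minimal sketch with invented spectra:

```python
import numpy as np

# Invented SXRSC and CCD spectra; real data would come from the two
# grating spectrometer channels.
E_sxrsc = np.linspace(200.0, 1040.0, 85)        # eV
counts_sxrsc = 1e4 * np.exp(-((E_sxrsc - 500.0) / 300.0) ** 2)

E_ccd = np.linspace(200.0, 1040.0, 60)
counts_ccd = 5e4 * np.exp(-((E_ccd - 520.0) / 320.0) ** 2)

# Interpolate both spectra onto a common grid, then take the ratio.
grid = np.linspace(220.0, 1020.0, 200)
sensitivity = (np.interp(grid, E_sxrsc, counts_sxrsc) /
               np.interp(grid, E_ccd, counts_ccd))
print("relative sensitivity at 600 eV:",
      round(float(np.interp(600.0, grid, sensitivity)), 4))
```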
Liau, Kee Fui; Yeoh, Hak Koon; Shoji, Tadashi; Chua, Adeline Seak May; Ho, Pei Yee
2017-01-01
Recently reported kinetic and stoichiometric parameters of the Activated Sludge Model No. 2d (ASM2d) for high-temperature EBPR processes suggested that the absence of glycogen in the model contributed to the underestimation of PHA accumulation at 32 °C. Here, two modified ASM2d models were used to further explore the contribution of glycogen to the process. The ASM2d-1G model incorporated glycogen metabolism by PAOs (polyphosphate-accumulating organisms), while the ASM2d-2G model further included processes mediated by GAOs (glycogen-accumulating organisms). These models were calibrated and validated using experimental data at 32 °C. The ASM2d-1G model supported the hypothesis that the excess PHA was attributable to glycogen, but remained inadequate to capture the dynamics of glycogen without considering GAO activities. The ASM2d-2G model performed better, but it was challenging to calibrate as it often led to wash-out of either PAOs or GAOs. The associated hurdles are highlighted, and additional efforts to calibrate ASM2d-2G more effectively are proposed.
Research on an autonomous vision-guided helicopter
NASA Technical Reports Server (NTRS)
Amidi, Omead; Mesaki, Yuji; Kanade, Takeo
1994-01-01
Integration of computer vision with on-board sensors to autonomously fly helicopters was researched. The key components developed were custom designed vision processing hardware and an indoor testbed. The custom designed hardware provided flexible integration of on-board sensors with real-time image processing resulting in a significant improvement in vision-based state estimation. The indoor testbed provided convenient calibrated experimentation in constructing real autonomous systems.
Ludwig, T; Kern, P; Bongards, M; Wolf, C
2011-01-01
The optimization of relaxation and filtration times of submerged microfiltration flat modules in membrane bioreactors used for municipal wastewater treatment is essential for efficient plant operation. However, the optimization and control of such plants and their filtration processes is a challenging problem due to the underlying highly nonlinear and complex processes. This paper presents the use of genetic algorithms for this optimization problem in conjunction with a fully calibrated simulation model, as computational intelligence methods are well suited to the nonconvex, multi-objective nature of the optimization problems posed by these complex systems. The simulation model is developed and calibrated using membrane modules from the wastewater simulation software GPS-X based on the Activated Sludge Model No. 1 (ASM1). Simulation results have been validated at a technical reference plant. They clearly show that filtration process costs for cleaning and energy can be reduced significantly by intelligent process optimization.
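The abstract does not give implementation details of the evolutionary search, so the following is only a minimal Python sketch of a genetic algorithm tuning two operating parameters. The cost function standing in for the calibrated GPS-X simulation, the parameter bounds, and all numeric settings are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(x):
    """Hypothetical surrogate for the calibrated simulation model: a made-up
    trade-off between energy use and membrane fouling. Placeholder only."""
    filtration, relaxation = x  # minutes
    energy = 0.8 * filtration / (filtration + relaxation)
    fouling = np.exp(0.15 * (filtration - 8.0)) / (1.0 + 0.5 * relaxation)
    return energy + fouling

def genetic_algorithm(cost, bounds, pop_size=40, generations=60,
                      mutation=0.1, elite=4):
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    for _ in range(generations):
        fitness = np.apply_along_axis(cost, 1, pop)
        order = np.argsort(fitness)
        parents = pop[order[:pop_size // 2]]      # truncation selection
        children = []
        while len(children) < pop_size - elite:
            a, b = parents[rng.integers(len(parents), size=2)]
            w = rng.random(len(lo))               # blend crossover
            child = w * a + (1 - w) * b
            child += mutation * (hi - lo) * rng.standard_normal(len(lo))
            children.append(np.clip(child, lo, hi))
        pop = np.vstack([pop[order[:elite]], children])  # elitism
    fitness = np.apply_along_axis(cost, 1, pop)
    return pop[np.argmin(fitness)], fitness.min()

best, f = genetic_algorithm(cost, bounds=[(2.0, 12.0), (0.5, 4.0)])
print(f"filtration {best[0]:.2f} min, relaxation {best[1]:.2f} min, cost {f:.3f}")
```

A real application would replace `cost` with a call to the calibrated simulator and would typically use a multi-objective variant (e.g., Pareto ranking) rather than a scalar cost.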
NASA Astrophysics Data System (ADS)
Klostermann, U. K.; Mülders, T.; Schmöller, T.; Lorusso, G. F.; Hendrickx, E.
2010-04-01
In this paper, we discuss the performance of EUV resist models in terms of predictive accuracy, and we assess the readiness of the corresponding model calibration methodology. The study is done on an extensive OPC data set collected at IMEC for the ShinEtsu resist SEVR-59 on the ASML EUV Alpha Demo Tool (ADT), with the data set including more than a thousand CD values. We address practical aspects such as the speed of calibration and the selection of calibration patterns. The model is calibrated on 12 process window data series varying in pattern width (32, 36, 40 nm), orientation (H, V) and pitch (dense, isolated). The minimum measured feature size at nominal process condition is a 32 nm CD at a dense pitch of 64 nm. Mask metrology is applied to verify and eventually correct the nominal width of the drawn CD. Cross-sectional SEM information is included in the calibration to tune the simulated resist loss and sidewall angle. The achieved calibration RMS is ~ 1.0 nm. We show which elements are important to obtain a well calibrated model. We discuss the impact of 3D mask effects on the Bossung tilt. We demonstrate that a correct representation of the flare level during the calibration is important to achieve high predictability at various flare conditions. Although the model calibration is performed on a limited subset of the measurement data (one-dimensional structures only), its accuracy is validated based on a large number of OPC patterns (at nominal dose and focus conditions) not included in the calibration; validation RMS results as small as 1 nm can be reached. Furthermore, we study the model's extendibility to two-dimensional end of line (EOL) structures. Finally, we correlate the experimentally observed fingerprint of the CD uniformity to a model, where EUV tool specific signatures are taken into account.
1982-07-15
Bergeron-Findeisen process, the dominant mid- and high-latitude precipitation forming mechanism. 1.1.2 Ice Supersaturation Measurements Lala (1969), in an...experimentally verified in strong cumulus updrafts, they would constitute a new mechanism of precipitation formation to augment the Bergeron-Findeisen and
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Zhijie; Lai, Canhai; Marcy, Peter William
2017-05-01
A challenging problem in designing pilot-scale carbon capture systems is to predict, with uncertainty, the adsorber performance and capture efficiency under various operating conditions where no direct experimental data exist. Motivated by this challenge, we previously proposed a hierarchical framework in which relevant parameters of physical models were sequentially calibrated from different laboratory-scale carbon capture unit (C2U) experiments. Specifically, three models of increasing complexity were identified based on the fundamental physical and chemical processes of the sorbent-based carbon capture technology. Results from the corresponding laboratory experiments were used to statistically calibrate the physical model parameters while quantifying some of their inherent uncertainty. The parameter distributions obtained from laboratory-scale C2U calibration runs are used in this study to facilitate prediction at a larger scale where no corresponding experimental results are available. In this paper, we first describe the multiphase reactive flow model for a sorbent-based 1-MW carbon capture system and then analyze results from an ensemble of simulations with the upscaled model. The simulation results are used to quantify uncertainty regarding the design's predicted efficiency in carbon capture. In particular, we determine the minimum gas flow rate necessary to achieve 90% capture efficiency with 95% confidence.
Low energy nuclear recoils study in noble liquids for low-mass WIMPs
NASA Astrophysics Data System (ADS)
Wang, Lu; Mei, Dongming
2014-03-01
Detector response to low-energy nuclear recoils is critical to the detection of low-mass dark matter particles, WIMPs (weakly interacting massive particles). Although the detector response to low-energy nuclear recoil processes is subtle and direct experimental calibration is rather difficult, many studies have been performed for noble liquids; NEST is a good example. However, the response to low-energy nuclear recoils, as a critical issue, needs more experimental data, in particular in the presence of an electric field. We present a new design that uses time of flight to calibrate the energy scale for low-energy nuclear recoils in large-volume xenon detectors such as LUX-Zeplin (LZ) and Xenon1T. The calculation and physics models will be discussed based on the available data to predict the performance of the calibration device and to set up criteria for its design. A small test bench has been built at The University of South Dakota to verify the concepts. This work is supported by DOE grant DE-FG02-10ER46709 and the state of South Dakota.
Novel crystal timing calibration method based on total variation
NASA Astrophysics Data System (ADS)
Yu, Xingjian; Isobe, Takashi; Watanabe, Mitsuo; Liu, Huafeng
2016-11-01
A novel crystal timing calibration method based on total variation (TV), abbreviated as 'TV merge', has been developed for a high-resolution positron emission tomography (PET) system. The proposed method was developed for a system with a large number of crystals and can provide timing calibration at the crystal level. In the proposed method, the timing calibration process is formulated as a linear problem. To robustly optimize the timing resolution, a TV constraint is added to the linear equation. Moreover, to solve the computer memory problem associated with the calculation of timing calibration factors for systems with a large number of crystals, a merge component is used to obtain the crystal-level timing calibration values. In contrast to conventional methods, data measured from a standard cylindrical phantom filled with a radioisotope solution were sufficient for performing a high-precision crystal-level timing calibration. In this paper, both simulation and experimental studies were performed to demonstrate the effectiveness and robustness of the TV merge method. We compare the timing resolutions of a 22Na point source, located in the field of view (FOV) of the brain PET system, for various calibration techniques. After implementing the TV merge method, the timing resolution improved from 3.34 ns full width at half maximum (FWHM) to 2.31 ns FWHM.
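The core idea, a linear problem with a TV penalty, can be sketched briefly. The following Python example is an illustration only: the pairwise-offset measurement model, the regularization weight, and the gauge fixing are assumptions, not the authors' actual 'TV merge' implementation (which additionally uses a merge step to keep memory bounded).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

n_crystals = 50
true_offsets = np.cumsum(rng.normal(0, 0.05, n_crystals))  # ns, smooth drift

# Simulated coincidence measurements: each row relates a crystal pair (i, j)
# through the measured timing difference t_i - t_j plus noise.
pairs = rng.integers(0, n_crystals, size=(2000, 2))
pairs = pairs[pairs[:, 0] != pairs[:, 1]]
A = np.zeros((len(pairs), n_crystals))
A[np.arange(len(pairs)), pairs[:, 0]] = 1.0
A[np.arange(len(pairs)), pairs[:, 1]] = -1.0
b = A @ true_offsets + rng.normal(0, 0.1, len(pairs))

def objective(t, lam=0.5):
    resid = A @ t - b
    tv = np.abs(np.diff(t)).sum()   # total-variation penalty on the offsets
    return resid @ resid + lam * tv

# Fix the gauge (an arbitrary global time shift) by anchoring crystal 0 at 0.
res = minimize(lambda t: objective(np.concatenate([[0.0], t])),
               x0=np.zeros(n_crystals - 1), method="L-BFGS-B")
est = np.concatenate([[0.0], res.x])
rms = np.sqrt(np.mean((est - (true_offsets - true_offsets[0])) ** 2))
print(f"RMS calibration error: {rms:.3f} ns")
```

Note that the TV term is non-smooth, so a production implementation would use a proximal or smoothed solver rather than plain L-BFGS-B, which only works acceptably here because the problem is small.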
Chen, Runlin; Wei, Yangyang; Shi, Zhaoyang; Yuan, Xiaoyang
2016-01-01
The identification accuracy of dynamic characteristics coefficients is difficult to guarantee because of the errors of the measurement system itself. A novel dynamic calibration method for the measurement system of dynamic characteristics coefficients is proposed in this paper to eliminate these errors. Unlike the suspension-mass calibration method, this method uses a spring-mass system as the verification device, which can simulate the dynamic characteristics of a sliding bearing. The verification device is built, and the calibration experiment is implemented over a wide frequency range, with the bearing stiffness simulated by disc springs. The experimental results show that the amplitude errors of this measurement system are small in the frequency range of 10 Hz–100 Hz, and the phase errors increase with frequency. A simulated experiment of dynamic characteristics coefficients identification in the frequency range of 10 Hz–30 Hz preliminarily verified that the calibration data in this range can well support dynamic characteristics tests of sliding bearings. Bearing experiments over greater frequency ranges require higher manufacturing and installation precision of the calibration device. Besides, the processes of the calibration experiments should be improved. PMID:27483283
NASA Astrophysics Data System (ADS)
Mu, Nan; Wang, Kun; Xie, Zexiao; Ren, Ping
2017-05-01
To realize online rapid measurement of complex workpieces, a flexible measurement system based on an articulated industrial robot with a structured light sensor mounted on the end-effector is developed. A method for calibrating the system parameters is proposed in which the hand-eye transformation parameters and the robot kinematic parameters are synthesized in the calibration process. An initial hand-eye calibration is first performed using a standard sphere as the calibration target. By applying the modified complete and parametrically continuous method, we establish a synthesized kinematic model that combines the initial hand-eye transformation and distal link parameters as a whole, with the sensor coordinate system as the tool frame. According to the synthesized kinematic model, an error model is constructed based on the spheres' center-to-center distance errors. Consequently, the error model parameters can be identified in a calibration experiment using a three-standard-sphere target. Furthermore, the redundancy of the error model parameters is eliminated to ensure the accuracy and robustness of the parameter identification. Calibration and measurement experiments are carried out on an ER3A-C60 robot. The experimental results show that the proposed calibration method achieves high measurement accuracy, and this efficient and flexible system is suitable for online measurement in industrial settings.
Calibration and accuracy analysis of a focused plenoptic camera
NASA Astrophysics Data System (ADS)
Zeller, N.; Quint, F.; Stilla, U.
2014-08-01
In this article we introduce new methods for the calibration of depth images from focused plenoptic cameras and validate the results. We start with a brief description of the concept of a focused plenoptic camera and how a depth map can be estimated from the recorded raw image. For this camera, an analytical expression for the depth accuracy is derived for the first time. In the main part of the paper, methods to calibrate a focused plenoptic camera are developed and evaluated. The optical imaging process is calibrated using a method already known from the calibration of traditional cameras. For the calibration of the depth map, two new model-based methods, which make use of the projection concept of the camera, are developed. These new methods are compared to a common curve-fitting approach based on Taylor series approximation. Both model-based methods show significant advantages compared to the curve-fitting method: they need fewer reference points for calibration and, moreover, supply a function that is valid beyond the range of calibration. In addition, the depth map accuracy of the plenoptic camera was experimentally investigated for different focal lengths of the main lens and is compared to the analytical evaluation.
Calibration of mass spectrometric measurements of gas phase reactions on steel surfaces
NASA Astrophysics Data System (ADS)
Falk, H.; Falk, M.; Wuttke, T.
2015-03-01
The sampling of the surface-near gas composition using a mass spectrometer (MS-Probe) is a valuable tool within a hot dip process simulator. Since reference samples with well-characterized surface coverage are usually not available, steel samples can deliver quantifiable amounts of the process-relevant species H2O, CO and H2 via the decarburization reaction with water vapor. Such "artificial calibration samples" (ACS) can be used for the calibration of the MS-Probe measurements. The carbon release rate, which is governed by the diffusion law, was determined by GDOES, since the diffusion coefficients of carbon in steel samples are usually not known. The carbon concentration profiles measured in the ACS after the thermal treatment confirmed the validity of the diffusion model described in this paper. A carbon bulk concentration > 100 ppm is sufficient for the use of a steel material as an ACS. The experimental results reported in this paper reveal that, with the MS-Probe, an LOQ of less than one monolayer of iron oxide can be achieved.
Calibration-free gaze tracking for automatic measurement of visual acuity in human infants.
Xiong, Chunshui; Huang, Lei; Liu, Changping
2014-01-01
Most existing vision-based methods for gaze tracking need a tedious calibration process, in which subjects are required to fixate on one or several specific points in space. However, such cooperation is hard to obtain, especially from children and human infants. In this paper, a new calibration-free gaze tracking system and method is presented for the automatic measurement of visual acuity in human infants. To the best of our knowledge, this is the first application of vision-based gaze tracking to the measurement of visual acuity. First, a polynomial of the pupil center-cornea reflection (PCCR) vector is used as the gaze feature. Then, a Gaussian mixture model (GMM) is employed for gaze behavior classification, trained offline using labeled data from subjects with healthy eyes. Experimental results on several subjects show that the proposed method is accurate, robust and sufficient for the measurement of visual acuity in human infants.
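The classification step can be sketched compactly. The Python example below is an illustration under stated assumptions: the PCCR distributions, the two behaviour classes, and the per-class GMM classification rule are hypothetical stand-ins for the authors' labeled training data and model structure.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(2)

# Hypothetical PCCR vectors (pupil center minus corneal reflection, pixels)
# for two gaze behaviours: fixating the stimulus vs. looking away.
fixate = rng.normal([4.0, -2.0], 0.6, size=(200, 2))
away = rng.normal([-3.0, 3.0], 2.0, size=(200, 2))

poly = PolynomialFeatures(degree=2, include_bias=False)
X_fix = poly.fit_transform(fixate)
X_away = poly.transform(away)

# One GMM per labelled behaviour, trained offline; classification picks the
# model with the higher log-likelihood for a new PCCR sample.
gmm_fix = GaussianMixture(n_components=2, random_state=0).fit(X_fix)
gmm_away = GaussianMixture(n_components=2, random_state=0).fit(X_away)

def classify(pccr):
    x = poly.transform(np.atleast_2d(pccr))
    return "fixating" if gmm_fix.score(x) > gmm_away.score(x) else "looking away"

print(classify([4.1, -1.8]))   # expected: fixating
```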
Calibrators measurement system for headlamp tester of motor vehicle base on machine vision
NASA Astrophysics Data System (ADS)
Pan, Yue; Zhang, Fan; Xu, Xi-ping; Zheng, Zhe
2014-09-01
With the development of photoelectric detection technology, machine vision has found wider use in industry. This paper introduces a calibrator measurement system for automotive headlamp testers, of which a CCD image sampling system is the core. It presents the measuring principle of the optical axis angle and light intensity, and proves the linear relationship between the calibrator's facula illuminance and the image-plane illuminance. The paper provides an important specification of the CCD imaging system. Image processing in MATLAB extracts the light spot's geometric center and average gray level. By fitting the statistics with the method of least squares, a regression equation relating illuminance and gray level is obtained. The errors of the experimental results of the measurement system are analyzed, and the combined standard uncertainty and the uncertainty sources of the optical axis angle are given. The average measuring accuracy of the optical axis angle is controlled within 40''. The whole testing process uses digital means instead of relying on manual judgment, giving higher accuracy and better repeatability than manual measuring systems.
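The least-squares regression between gray level and illuminance is a one-line fit. The following Python sketch shows the idea with hypothetical calibration pairs; the original work performs this fit in MATLAB with its own measured data.

```python
import numpy as np

# Hypothetical calibration pairs: average gray level from the CCD image and
# the reference illuminance (lx) measured at the calibrator screen.
gray = np.array([32.0, 58.0, 91.0, 120.0, 151.0, 183.0, 214.0])
lux = np.array([5.1, 9.8, 15.2, 20.3, 25.4, 30.9, 36.0])

# First-order least-squares fit, reflecting the linear relationship between
# facula illuminance and image-plane gray level reported in the paper.
slope, intercept = np.polyfit(gray, lux, deg=1)
pred = slope * gray + intercept
rmse = np.sqrt(np.mean((pred - lux) ** 2))
print(f"lux = {slope:.4f} * gray + {intercept:.4f}, RMSE = {rmse:.3f} lx")
```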
NASA Technical Reports Server (NTRS)
Zhai, Chengxing; Milman, Mark H.; Regehr, Martin W.; Best, Paul K.
2007-01-01
In the companion paper, [Appl. Opt. 46, 5853 (2007)] a highly accurate white light interference model was developed from just a few key parameters characterized in terms of various moments of the source and instrument transmission function. We develop and implement the end-to-end process of calibrating these moment parameters together with the differential dispersion of the instrument and applying them to the algorithms developed in the companion paper. The calibration procedure developed herein is based on first obtaining the standard monochromatic parameters at the pixel level: wavenumber, phase, intensity, and visibility parameters via a nonlinear least-squares procedure that exploits the structure of the model. The pixel level parameters are then combined to obtain the required 'global' moment and dispersion parameters. The process is applied to both simulated scenarios of astrometric observations and to data from the microarcsecond metrology testbed (MAM), an interferometer testbed that has played a prominent role in the development of this technology.
NASA Technical Reports Server (NTRS)
Prasad, C. B.; Prabhakaran, R.; Tompkins, S.
1987-01-01
The first step in the extension of the semidestructive hole-drilling technique for residual stress measurement to orthotropic composite materials is the determination of the three calibration constants. Attention is presently given to an experimental determination of these calibration constants for a highly orthotropic, unidirectionally-reinforced graphite fiber-reinforced polyimide composite. A comparison of the measured values with theoretically obtained ones shows agreement to be good, in view of the many possible sources of experimental variation.
Numerical simulation of damage evolution for ductile materials and mechanical properties study
NASA Astrophysics Data System (ADS)
El Amri, A.; Hanafi, I.; Haddou, M. E. Y.; Khamlichi, A.
2015-12-01
This paper presents the results of numerical modelling of ductile fracture and failure of elements made of 5182H111 aluminium alloy subjected to dynamic traction. The analysis was performed using the Johnson-Cook model in the ABAQUS software. The difficulty of predicting ductile fracture mainly arises because there is a tremendous span of length scales from the structural problem to the micro-mechanics problem governing the material separation process. This study used experimental results to calibrate a simple crack propagation criterion for shell elements, one that has often been used in practical analyses. The performance of the proposed model is in general good, and it is believed that the presented results and the experimental-numerical calibration procedure can be of use in practical finite-element simulations.
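The Johnson-Cook flow stress has the standard form sigma = (A + B*eps^n) * (1 + C*ln(eps_rate*)) * (1 - T*^m). The Python sketch below evaluates it; the material constants are illustrative placeholders of a plausible order for a 5xxx aluminium alloy, not the calibrated values from the paper.

```python
import numpy as np

def johnson_cook_stress(eps_p, eps_rate, T,
                        A=310e6, B=420e6, n=0.34, C=0.015, m=1.0,
                        eps_rate0=1.0, T_ref=293.0, T_melt=893.0):
    """Johnson-Cook flow stress in Pa. Constants are hypothetical
    placeholders, not the paper's calibrated 5182H111 values."""
    # Homologous temperature, clipped to [0, 1] so stress vanishes at melt.
    T_star = np.clip((T - T_ref) / (T_melt - T_ref), 0.0, 1.0)
    return ((A + B * eps_p**n)                                   # strain hardening
            * (1.0 + C * np.log(np.maximum(eps_rate / eps_rate0, 1e-12)))  # rate
            * (1.0 - T_star**m))                                 # thermal softening

# Flow stress at 5% plastic strain, 100 /s strain rate, room temperature:
print(f"{johnson_cook_stress(0.05, 100.0, 293.0) / 1e6:.1f} MPa")
```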
Wolski, Witold E; Lalowski, Maciej; Jungblut, Peter; Reinert, Knut
2005-01-01
Background Peptide Mass Fingerprinting (PMF) is a widely used mass spectrometry (MS) method for the analysis of proteins and peptides. It relies on the comparison between experimentally determined and theoretical mass spectra. The PMF process requires calibration, usually performed with external or internal calibrants of known molecular masses. Results We have introduced two novel MS calibration methods. The first method utilises the local similarity of peptide maps generated after separation of complex protein samples by two-dimensional gel electrophoresis. It computes a multiple peak-list alignment of the data set using a modified Minimum Spanning Tree (MST) algorithm. The second method exploits the idea that hundreds of MS samples are measured in parallel on one sample support. It improves the calibration coefficients by applying a two-dimensional Thin Plate Splines (TPS) smoothing algorithm. We studied the novel calibration methods utilising data generated by three different MALDI-TOF-MS instruments. We demonstrate that a PMF data set can be calibrated without resorting to external or relying on widely occurring internal calibrants. The methods developed here were implemented in R as part of the BioConductor package mscalib. Conclusion The MST calibration algorithm is well suited to calibrate MS spectra of protein samples resulting from two-dimensional gel electrophoretic separation. The TPS-based calibration algorithm can be used to correct systematic mass measurement errors observed for large MS sample supports. Compared to other methods, our combined MS spectra calibration strategy increases the peptide/protein identification rate by an additional 5-15%. PMID:16102175
Model Calibration Efforts for the International Space Station's Solar Array Mast
NASA Technical Reports Server (NTRS)
Elliott, Kenny B.; Horta, Lucas G.; Templeton, Justin D.; Knight, Norman F., Jr.
2012-01-01
The International Space Station (ISS) relies on sixteen solar-voltaic blankets to provide electrical power to the station. Each pair of blankets is supported by a deployable boom called the Folding Articulated Square Truss Mast (FAST Mast). At certain ISS attitudes, the solar arrays can be positioned in such a way that shadowing of either one or three longerons causes an unexpected asymmetric thermal loading that, if unchecked, can exceed the operational stability limits of the mast. Work in this paper documents part of an independent NASA Engineering and Safety Center effort to assess the existing operational limits. Because of the complexity of the system, the problem is being worked using a building-block progression from components (longerons), to units (single or multiple bays), to assembly (full mast). The paper presents results from efforts to calibrate the longeron components. The work includes experimental testing of two types of longerons (straight and tapered), development of Finite Element (FE) models, development of parameter uncertainty models, and the establishment of a calibration and validation process to demonstrate the adequacy of the models. Models, in the context of this paper, refer to both the FE models and the probabilistic parameter models. Results from model calibration of the straight longerons show that the model is capable of predicting the mean load, axial strain, and bending strain. For validation, parameter values obtained from calibration of the straight longerons are used to validate experimental results for the tapered longerons.
Calibration of a universal indicated turbulence system
NASA Technical Reports Server (NTRS)
Chapin, W. G.
1977-01-01
Theoretical and experimental work on a Universal Indicated Turbulence Meter is described. A mathematical transfer function from turbulence input to output indication was developed. A random ergodic process and a Gaussian turbulence distribution were assumed. A calibration technique based on this transfer function was developed. The computer contains a variable gain amplifier to make the system output independent of average velocity. The range over which this independence holds was determined. An optimum dynamic response was obtained for the tubulation between the system pitot tube and pressure transducer by making dynamic response measurements for orifices of various lengths and diameters at the source end.
Statistical analysis on experimental calibration data for flowmeters in pressure pipes
NASA Astrophysics Data System (ADS)
Lazzarin, Alessandro; Orsi, Enrico; Sanfilippo, Umberto
2017-08-01
This paper presents a statistical analysis of experimental calibration data for flowmeters (i.e., electromagnetic, ultrasonic and turbine flowmeters) in pressure pipes. The experimental calibration data set consists of the whole archive of calibration tests carried out on 246 flowmeters from January 2001 to October 2015 at Settore Portate of Laboratorio di Idraulica "G. Fantoli" of Politecnico di Milano, which is accredited as LAT 104 for a flow range between 3 l/s and 80 l/s, with a certified Calibration and Measurement Capability (CMC) - formerly known as Best Measurement Capability (BMC) - equal to 0.2%. The data set is split into three subsets, consisting of 94 electromagnetic, 83 ultrasonic and 69 turbine flowmeters; each subset is analysed separately, and a final comparison is then carried out. In particular, the main focus of the statistical analysis is the correction C, which is the difference between the flow rate Q measured by the calibration facility (through the accredited procedures and the certified reference specimen) and the flow rate QM simultaneously recorded by the flowmeter under calibration, expressed as a percentage of the same QM.
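The correction C is straightforward to compute per calibration point and to summarize per meter. A minimal Python sketch follows; the calibration points are hypothetical, not taken from the Politecnico di Milano archive.

```python
import numpy as np

def correction_percent(Q_ref, Q_meter):
    """Correction C = (Q - QM) / QM * 100, with Q from the accredited
    facility and QM from the flowmeter under calibration."""
    Q_ref, Q_meter = np.asarray(Q_ref), np.asarray(Q_meter)
    return (Q_ref - Q_meter) / Q_meter * 100.0

# Hypothetical calibration points for one electromagnetic flowmeter (l/s).
Q_ref = np.array([3.02, 10.11, 25.05, 50.4, 79.6])
Q_meter = np.array([3.00, 10.05, 25.00, 50.0, 80.0])

C = correction_percent(Q_ref, Q_meter)
print("C per point (%):", np.round(C, 3))
print(f"mean = {C.mean():.3f} %, std = {C.std(ddof=1):.3f} %")
```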
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cárdenas-García, D.; Méndez-Lango, E.
Flat Calibrators (FC) are an option for the calibration of infrared thermometers (IT) with a fixed large target. FCs are neither blackbodies nor gray-bodies; their spectral emissivity is lower than one and depends on wavelength. Nevertheless, they are used as gray-bodies with a nominal emissivity value. FCs can be calibrated radiometrically using a calibrated IR thermometer (RT) as reference. If an FC will be used to calibrate ITs that work in the same spectral range as the RT, then its calibration is straightforward: the actual FC spectral emissivity is not required. This result is valid for any given fixed emissivity assessed to the FC. On the other hand, when the RT working spectral range does not match that of the ITs to be calibrated with the FC, then the FC spectral emissivity must be known as part of the calibration process. For this purpose, at CENAM, we developed an experimental setup to measure spectral emissivity in the infrared spectral range, based on a Fourier transform infrared spectrometer. Not all laboratories have emissivity measurement capability in the appropriate wavelength and temperature ranges to obtain the spectral emissivity. Thus, we present an estimation of the error introduced when the spectral range of the RT used to calibrate an FC and the spectral ranges of the ITs to be calibrated with the FC do not match. Some examples are developed for the cases when the RT and IT spectral ranges are [8,13] μm and [8,14] μm, respectively.
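The band-mismatch effect can be illustrated by integrating Planck's law over the two spectral ranges with a wavelength-dependent emissivity. In the Python sketch below, the emissivity curve, nominal gray-body value, and calibrator temperature are hypothetical; only the two bands come from the abstract.

```python
import numpy as np
from scipy.integrate import quad

C1 = 1.191042e8   # W um^4 m^-2 sr^-1, first radiation constant for radiance
C2 = 1.438777e4   # um K, second radiation constant

def planck(lam_um, T):
    """Spectral radiance in W m^-2 sr^-1 um^-1, wavelength in micrometres."""
    return C1 / lam_um**5 / (np.exp(C2 / (lam_um * T)) - 1.0)

def band_radiance(T, band, emissivity=lambda lam: 1.0):
    val, _ = quad(lambda lam: emissivity(lam) * planck(lam, T), *band)
    return val

# Hypothetical FC spectral emissivity, mildly decreasing with wavelength.
eps = lambda lam: 0.96 - 0.004 * (lam - 8.0)

T = 373.15  # flat calibrator at 100 degrees C (hypothetical)
for band in [(8.0, 13.0), (8.0, 14.0)]:
    L_real = band_radiance(T, band, eps)
    L_gray = 0.95 * band_radiance(T, band)   # nominal gray-body emissivity
    print(band, f"relative radiance error: {100 * (L_real - L_gray) / L_gray:.2f} %")
```

Because the two bands weight the emissivity curve differently, an FC that appears correct to the RT band can be in error for an IT that senses the wider band, which is precisely the error the paper estimates.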
Geist, Rebecca E; DuBois, Chase H; Nichols, Timothy C; Caughey, Melissa C; Merricks, Elizabeth P; Raymer, Robin; Gallippi, Caterina M
2016-09-01
Acoustic radiation force impulse (ARFI) Surveillance of Subcutaneous Hemorrhage (ASSH) has been previously demonstrated to differentiate bleeding phenotype and responses to therapy in dogs and humans, but to date, the method has lacked experimental validation. This work explores experimental validation of ASSH in a poroelastic tissue-mimic and in vivo in dogs. The experimental design exploits calibrated flow rates and infusion durations of evaporated milk in tofu or heparinized autologous blood in dogs. The validation approach enables controlled comparisons of ASSH-derived bleeding rate (BR) and time to hemostasis (TTH) metrics. In tissue-mimicking experiments, halving the calibrated flow rate yielded ASSH-derived BRs that decreased by 44% to 48%. Furthermore, for calibrated flow durations of 5.0 minutes and 7.0 minutes, average ASSH-derived TTH was 5.2 minutes and 7.0 minutes, respectively, with ASSH predicting the correct TTH in 78% of trials. In dogs undergoing calibrated autologous blood infusion, ASSH measured a 3-minute increase in TTH, corresponding to the same increase in the calibrated flow duration. For a measured 5% decrease in autologous infusion flow rate, ASSH detected a 7% decrease in BR. These tissue-mimicking and in vivo preclinical experimental validation studies suggest the ASSH BR and TTH measures reflect bleeding dynamics. © The Author(s) 2015.
Kinect based real-time position calibration for nasal endoscopic surgical navigation system
NASA Astrophysics Data System (ADS)
Fan, Jingfan; Yang, Jian; Chu, Yakui; Ma, Shaodong; Wang, Yongtian
2016-03-01
Unanticipated, reactive patient motion during skull-base tumor resection forces recalibration of the nasal endoscopic tracking system. To accommodate the calibration process to patient movement, this paper develops a Kinect-based real-time position calibration method for a nasal endoscopic surgical navigation system. In this method, a Kinect scanner is employed to acquire the point cloud for volumetric reconstruction of the patient's head during surgery. Then, a convex hull based registration algorithm aligns the real-time image of the patient's head with a model built from CT scans acquired during preoperative preparation, dynamically recalibrating the tracking system whenever movement is detected. Experimental results confirmed the robustness of the proposed method, showing a total tracking error within 1 mm even under relatively violent motion. These results indicate that tracking accuracy can be retained stably, that calibration of the tracking system can be expedited under strong interfering conditions, and that the method is suitable for a wide range of surgical applications.
Boudaoud, Mokrane; Haddab, Yassine; Le Gorrec, Yann; Lutz, Philippe
2012-01-01
The atomic force microscope (AFM) is a powerful tool for the measurement of forces at the micro/nano scale when calibrated cantilevers are used. Among the many existing calibration techniques, thermal calibration is one of the simplest and fastest methods for the dynamic characterization of an AFM cantilever. This method is efficient provided that the Brownian motion (thermal noise) is the most important source of excitation during the calibration process; otherwise, the spring constant is underestimated. This paper investigates noise interference ranges in low-stiffness AFM cantilevers, taking into account thermal fluctuations and acoustic pressures as the two main sources of noise. As a result, preliminary knowledge of the conditions under which thermal fluctuations and acoustic pressures have closely the same effect on the AFM cantilever (noise interference) is provided, with both theoretical and experimental arguments. Consequently, beyond the noise interference range, commercial low-stiffness AFM cantilevers are calibrated in two ways: using the thermal noise (over a wide temperature range) and using acoustic pressures generated by a loudspeaker. We then demonstrate that acoustic noise can also be used for an efficient characterization and calibration of low-stiffness AFM cantilevers. The accuracy of the acoustic characterization is evaluated by comparison with results from the thermal calibration.
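Thermal calibration rests on the equipartition theorem, k = kB*T / <x^2>. The Python sketch below applies it to a synthetic deflection record; real implementations instead integrate the power spectral density around the first resonance and apply mode-shape and sensitivity corrections, all of which are omitted here, and the lever stiffness and temperature are hypothetical.

```python
import numpy as np

kB = 1.380649e-23  # Boltzmann constant, J/K
rng = np.random.default_rng(3)

# Hypothetical thermal-noise record of the cantilever deflection (m). In a
# real calibration this comes from the AFM photodiode after the deflection
# sensitivity has been determined.
k_true = 0.05        # N/m, a soft contact-mode lever (placeholder)
T = 295.0            # K
x = rng.normal(0.0, np.sqrt(kB * T / k_true), size=200_000)

# Equipartition: (1/2) k <x^2> = (1/2) kB T  =>  k = kB T / <x^2>.
k_est = kB * T / np.mean(x**2)
print(f"estimated spring constant: {k_est * 1e3:.2f} mN/m (true {k_true * 1e3:.1f})")
```

The paper's point is visible in this formula: any non-thermal excitation (e.g., acoustic pressure) inflates <x^2> and therefore biases k low unless it is accounted for.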
Dynamic calibration of pan-tilt-zoom cameras for traffic monitoring.
Song, Kai-Tai; Tai, Jen-Chao
2006-10-01
Pan-tilt-zoom (PTZ) cameras have been widely used in recent years for monitoring and surveillance applications. These cameras provide flexible view selection as well as a wider observation range. This makes them suitable for vision-based traffic monitoring and enforcement systems. To employ PTZ cameras for image measurement applications, one first needs to calibrate the camera to obtain meaningful results. For instance, the accuracy of estimating vehicle speed depends on the accuracy of camera calibration and that of vehicle tracking results. This paper presents a novel calibration method for a PTZ camera overlooking a traffic scene. The proposed approach requires no manual operation to select the positions of special features. It automatically uses a set of parallel lane markings and the lane width to compute the camera parameters, namely, focal length, tilt angle, and pan angle. Image processing procedures have been developed for automatically finding parallel lane markings. Interesting experimental results are presented to validate the robustness and accuracy of the proposed method.
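One common formulation from the traffic-camera calibration literature recovers tilt and pan from the vanishing point of the lane direction; the Python sketch below illustrates it. The lane-marking endpoints and the focal length are hypothetical (the paper instead derives the focal length from the known lane width), and the closed-form angle expressions are a standard textbook variant rather than the authors' exact equations.

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points (principal point at origin)."""
    return np.cross([*p, 1.0], [*q, 1.0])

# Hypothetical endpoints (pixels) of two parallel lane markings in the image.
l1 = line_through((-180.0, 400.0), (-40.0, 50.0))
l2 = line_through((220.0, 400.0), (60.0, 50.0))

vp = np.cross(l1, l2)                    # intersection = vanishing point
u, v = vp[0] / vp[2], vp[1] / vp[2]

f = 900.0  # focal length in pixels; assumed here, derived from lane width in the paper
tilt = np.arctan2(-v, f)                 # camera tilt below the horizon
pan = np.arctan2(u * np.cos(tilt), f)    # pan relative to the road direction
print(f"vp = ({u:.1f}, {v:.1f}) px, tilt = {np.degrees(tilt):.2f} deg, "
      f"pan = {np.degrees(pan):.2f} deg")
```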
Ricci, L; Formica, D; Tamilia, E; Taffoni, F; Sparaci, L; Capirci, O; Guglielmelli, E
2013-01-01
Motion capture based on magneto-inertial sensors is a technology enabling data collection in unstructured environments, allowing "out of the lab" motion analysis. This technology is a good candidate for motion analysis of children thanks to its reduced weight and size, as well as the use of wireless communication, which has improved its wearability and reduced its obtrusiveness. A key issue in the application of such technology for motion analysis is its calibration, i.e. a process that allows mapping orientation information from each sensor to a physiological reference frame. To date, although several calibration procedures are available for adults, no specific calibration procedures have been developed for children. This work addresses this specific issue by presenting a calibration procedure for motion capture of the thorax and upper limbs in healthy children. Reported results suggest performance comparable with similar studies on adults and emphasize some critical issues, opening the way to further improvements.
Domingo-Félez, Carlos; Pellicer-Nàcher, Carles; Petersen, Morten S; Jensen, Marlene M; Plósz, Benedek G; Smets, Barth F
2017-01-01
Nitrous oxide (N2O), a by-product of biological nitrogen removal during wastewater treatment, is produced by ammonia-oxidizing bacteria (AOB) and heterotrophic denitrifying bacteria (HB). Mathematical models are used to predict N2O emissions, often including AOB as the main N2O producer. Several model structures have been proposed without consensus calibration procedures. Here, we present a new experimental design that was used to calibrate AOB-driven N2O dynamics of a mixed culture. Even though AOB activity was favoured with respect to HB, oxygen uptake rates indicated HB activity. Hence, a rigorous experimental design for the calibration of autotrophic N2O production from mixed cultures is essential. The proposed N2O production pathways were examined using five alternative process models confronted with the experimental data. Individually, both the autotrophic and the heterotrophic denitrification pathway could describe the observed data. In the best-fit model, which combined the two denitrification pathways, the heterotrophic contribution to N2O production was stronger than the autotrophic one. Importantly, the individual contributions of autotrophs and heterotrophs to the total N2O pool could not be unambiguously elucidated solely on the basis of bulk N2O measurements. Data on NO would increase the practical identifiability of N2O production pathways. Biotechnol. Bioeng. 2017;114: 132-140. © 2016 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Tripp, John S.; Tcheng, Ping
1999-01-01
Statistical tools, previously developed for nonlinear least-squares estimation of multivariate sensor calibration parameters and the associated calibration uncertainty analysis, have been applied to single- and multiple-axis inertial model attitude sensors used in wind tunnel testing to measure angle of attack and roll angle. The analysis provides confidence and prediction intervals of calibrated sensor measurement uncertainty as functions of applied input pitch and roll angles. A comparative performance study of various experimental designs for inertial sensor calibration is presented along with corroborating experimental data. The importance of replicated calibrations over extended time periods has been emphasized; replication provides independent estimates of calibration precision and bias uncertainties, statistical tests for calibration or modeling bias uncertainty, and statistical tests for sensor parameter drift over time. A set of recommendations for a new standardized model attitude sensor calibration method and usage procedures is included. The statistical information provided by these procedures is necessary for the uncertainty analysis of aerospace test results now required by users of industrial wind tunnel test facilities.
On the use of video projectors for three-dimensional scanning
NASA Astrophysics Data System (ADS)
Juarez-Salazar, Rigoberto; Diaz-Ramirez, Victor H.; Robledo-Sanchez, Carlos; Diaz-Gonzalez, Gerardo
2017-08-01
Structured light projection is one of the most useful methods for accurate three-dimensional scanning. Video projectors are typically used as the illumination source. However, because video projectors are not designed for structured light systems, some considerations such as gamma calibration must be taken into account. In this work, we present a simple method for gamma calibration of video projectors. First, the experimental fringe patterns are normalized. Then, the samples of the fringe patterns are sorted in ascending order. The sample sorting leads to a simple three-parameter sine curve that is fitted using the Gauss-Newton algorithm. The novelty of this method is that the sorting process removes the effect of the unknown phase. Thus, the resulting gamma calibration algorithm is significantly simplified. The feasibility of the proposed method is illustrated in a three-dimensional scanning experiment.
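The sorting-then-fitting idea can be reproduced in a few lines. The Python sketch below assumes an ideal normalized fringe I = (1 + cos(phi))/2 with uniform phase and a pure power-law (gamma) distortion, so the sorted samples follow the closed-form quantile of the ideal fringe raised to gamma; scipy's Levenberg-Marquardt fit stands in for the Gauss-Newton step described in the paper, and the three-parameter model here is an assumption.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)

# Simulated normalized fringe pattern distorted by an unknown gamma.
gamma_true = 2.2
phase = rng.uniform(0.0, 2.0 * np.pi, 20_000)
ideal = 0.5 * (1.0 + np.cos(phase))
observed = ideal**gamma_true + rng.normal(0.0, 0.005, phase.size)

# Sorting removes the unknown phase: for uniform phase, the p-quantile of
# the ideal fringe is 0.5*(1 - cos(pi*p)), and x**gamma is monotonic, so
# the sorted observations approximate that quantile raised to gamma.
s = np.sort(observed)
p = (np.arange(s.size) + 0.5) / s.size

def model(p, a, b, gamma):
    q = np.clip(0.5 * (1.0 - np.cos(np.pi * p)), 1e-9, 1.0)
    return a * q**gamma + b

popt, _ = curve_fit(model, p, s, p0=(1.0, 0.0, 1.0))  # LM, a Gauss-Newton variant
print(f"estimated gamma: {popt[2]:.3f} (true {gamma_true})")
```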
Development and Characterization of a Low-Pressure Calibration System for Hypersonic Wind Tunnels
NASA Technical Reports Server (NTRS)
Green, Del L.; Everhart, Joel L.; Rhode, Matthew N.
2004-01-01
Minimization of uncertainty is essential for accurate ESP measurements at the very low free-stream static pressures found in hypersonic wind tunnels. Statistical characterization of environmental error sources requires a well-defined and controlled calibration method. A calibration system has been constructed, and environmental-control software has been developed to automate experimentation and eliminate human-induced error sources. An initial stability study of the calibration system shows a high degree of measurement accuracy and precision in temperature and pressure control. Control manometer drift and reference pressure instabilities introduce uncertainty into the repeatability of voltage responses measured from the PSI System 8400 between calibrations. Methods of improving repeatability are possible through software programming and further experimentation.
Ràfols, Clara; Bosch, Elisabeth; Barbas, Rafael; Prohens, Rafel
2016-07-01
A study of the suitability of the chelation reaction of Ca(2+) with ethylenediaminetetraacetic acid (EDTA) as a validation standard for isothermal titration calorimeter measurements has been performed, exploring the common experimental variables (buffer, pH, ionic strength and temperature). Results obtained under a variety of experimental conditions have been corrected for the side reactions involved in the main process and for the experimental ionic strength and, finally, validated by comparison with the potentiometric reference values. It is demonstrated that the chelation reaction performed in 0.1 M acetate buffer at 25 °C gives accurate and precise results and is robust enough to be adopted as a standard calibration process. Copyright © 2016 Elsevier B.V. All rights reserved.
RANS Based Methodology for Predicting the Influence of Leading Edge Erosion on Airfoil Performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Langel, Christopher M.; Chow, Raymond C.; van Dam, C. P.
The impact of surface roughness on flows over aerodynamically designed surfaces is of interest in a number of different fields. It has long been known that surface roughness will likely accelerate the laminar-turbulent transition process by creating additional disturbances in the boundary layer. However, there are very few tools available to predict the effects surface roughness will have on boundary layer flow. There are numerous implications of the premature appearance of a turbulent boundary layer: increases in local skin friction, boundary layer thickness, and turbulent mixing can impact global flow properties, compounding the effects of surface roughness. With this motivation, an investigation into the effects of surface roughness on boundary layer transition has been conducted. The effort involved both an extensive experimental campaign and the development of a high-fidelity roughness model implemented in a RANS solver. Vast amounts of experimental data were generated at the Texas A&M Oran W. Nicks Low Speed Wind Tunnel for the calibration and validation of the roughness model described in this work, as well as for future efforts. The present work focuses on the development of the computational model, including a description of the calibration process. The primary methodology introduces a scalar field variable and associated transport equation that interacts with a correlation-based transition model. The additional equation allows non-local effects of surface roughness to be accounted for downstream of rough wall sections while maintaining a "local" formulation. The scalar field is determined through a boundary condition function that has been calibrated on flat plate cases with sand grain roughness. The model was initially tested on a NACA 0012 airfoil with roughness strips applied to the leading edge. Further calibration of the roughness model was performed using results from the companion experimental study on a NACA 633-418 airfoil. The refined model demonstrates favorable agreement in predicting changes to the transition location, as well as drag, for a number of different leading edge roughness configurations on the NACA 633-418 airfoil. Additional tests were conducted on a thicker S814 airfoil, with roughness configurations similar to those of the NACA 633-418. Simulations run with the roughness model compare favorably with the results obtained in the experimental study for both airfoils.
NASA Astrophysics Data System (ADS)
Percoco, Gianluca; Sánchez Salmerón, Antonio J.
2015-09-01
The measurement of millimetre- and micro-scale features is performed by high-cost systems based on technologies with narrow working ranges to accurately control the position of the sensors. Photogrammetry would lower the costs of 3D inspection of micro-features and would also be applicable to the inspection of non-removable micro parts of large objects. Unfortunately, the behaviour of photogrammetry when applied to micro-features is not known. In this paper, the authors address these issues towards the application of digital close-range photogrammetry (DCRP) at the micro-scale, taking into account that research papers in the literature state that an angle of view (AOV) around 10° is the lower limit for the application of the traditional pinhole close-range calibration model (CRCM), which is the basis of DCRP. First, a general calibration procedure is introduced, with the aid of an open-source software library, to calibrate narrow-AOV cameras with the CRCM. Subsequently, the procedure is validated using a reflex camera with a 60 mm macro lens equipped with extension tubes (20 and 32 mm), achieving magnifications up to approximately 2x, to verify the literature findings with experimental photogrammetric 3D measurements of millimetre-sized objects with micro-features. The limitation of the laser printing technology used to produce the two-dimensional pattern on common paper has been overcome using an accurate pattern manufactured with a photolithographic process. The results of the experimental activity prove that the CRCM is valid for AOVs down to 3.4° and that DCRP results are comparable with those of existing, more expensive commercial techniques.
Bridging across OECD 308 and 309 Data in Search of a Robust Biotransformation Indicator.
Honti, Mark; Hahn, Stefan; Hennecke, Dieter; Junker, Thomas; Shrestha, Prasit; Fenner, Kathrin
2016-07-05
The OECD guidelines 308 and 309 define simulation tests aimed at assessing biotransformation of chemicals in water-sediment systems. They should serve the estimation of persistence indicators for hazard assessment and half-lives for exposure modeling. Although dissipation half-lives of the parent compound are directly extractable from OECD 308 data, they are system-specific and mix up phase transfer with biotransformation. In contrast, aerobic biotransformation half-lives should be easier to extract from OECD 309 experiments with suspended sediments. Therefore, there is scope for OECD 309 tests with suspended sediment to serve as a proxy for degradation in the aerobic phase of the more complicated OECD 308 test, but that correspondence has remained untested so far. Our aim was to find a way to extract biotransformation rate constants that are universally valid across variants of water-sediment systems and, hence, provide a more general description of the compound's behavior in the environment. We developed a unified model that was able to simulate four experimental types (two variants of OECD 308 and two variants of OECD 309) for three compounds by using a biomass-corrected, generalized aerobic biotransformation parameter (k'bio). We used Bayesian calibration and uncertainty assessment to calibrate the models for individual experimental types separately and for combinations of experimental types. The results suggested that k'bio was a generally valid parameter for quantifying biotransformation across systems. However, its uncertainty remained significant when calibrated on individual systems alone. Using at least two different experimental types for the calibration of k'bio increased its robustness by clearly separating degradation from the phase-transfer processes taking place in the individual systems. Overall, k'bio has the potential to serve as a system-independent descriptor of aerobic biotransformation at the water-sediment interface that is equally and consistently applicable for both persistence and exposure assessment purposes.
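A much-reduced sketch of the biomass-corrected rate idea follows: a single k'bio is fitted jointly to parent-compound time series from two hypothetical systems that differ only in biomass. All data, the relative biomass proxy X, and the simple first-order model are illustrative; the paper's actual approach is a Bayesian calibration of full water-sediment system models.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical parent-compound time series (% of applied) from two systems.
t = np.array([0.0, 2.0, 5.0, 10.0, 20.0, 40.0, 60.0])           # days
c_308 = np.array([100.0, 92.0, 81.0, 66.0, 44.0, 20.0, 9.0])    # OECD 308-like
c_309 = np.array([100.0, 86.0, 69.0, 48.0, 23.0, 5.5, 1.3])     # OECD 309-like
X = {"308": 1.0, "309": 1.5}                                    # relative biomass

# Biomass-corrected first-order model: C(t) = C0 * exp(-k_bio * X * t),
# so one system-independent k_bio should describe both data sets.
t_all = np.concatenate([t, t])
x_all = np.concatenate([np.full(t.size, X["308"]), np.full(t.size, X["309"])])
c_all = np.concatenate([c_308, c_309])

def decay(inputs, k_bio):
    tt, xx = inputs
    return 100.0 * np.exp(-k_bio * xx * tt)

(k_bio,), _ = curve_fit(decay, (t_all, x_all), c_all, p0=(0.05,))
print(f"biomass-corrected k'bio = {k_bio:.4f} per day per unit biomass")
```

The joint fit is the point: calibrating on two experimental types at once, as the paper recommends, separates degradation from system-specific phase transfer and tightens the estimate of k'bio.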
Lee, Jeong Wan
2008-01-01
This paper proposes a field calibration technique for aligning a wind direction sensor to true north. The proposed technique uses synchronized measurements of images captured by a camera and the output voltage of the wind direction sensor. The true wind direction was evaluated through image processing of the captured picture of the sensor in the least-squares sense. The evaluated true value was then compared with the measured output voltage of the sensor. This technique solves the misalignment problem of the wind direction sensor that arises when installing a meteorological mast. Uncertainty analyses for the proposed technique are presented and the calibration accuracy is discussed. Finally, the technique was applied to the real meteorological mast at the Daegwanryung test site, and statistical analysis of the experimental testing estimated the stable misalignment value and the uncertainty level. It is confirmed that the misalignment error from exact north can be expected to remain within the stated credibility level. PMID:27873957
Effects of experimental design on calibration curve precision in routine analysis
Pimentel, Maria Fernanda; Neto, Benício de Barros; Saldanha, Teresa Cristina B.
1998-01-01
A computational program which compares the efficiencies of different experimental designs with those of maximum precision (D-optimized designs) is described. The program produces confidence interval plots for a calibration curve and provides information about the number of standard solutions, concentration levels and suitable concentration ranges to achieve an optimum calibration. Some examples of the application of this novel computational program are given, using both simulated and real data. PMID:18924816
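The confidence-interval plots such a program produces follow the standard band for a straight-line calibration, y(x0) +/- t * s * sqrt(1/n + (x0 - xbar)^2 / Sxx). A minimal Python sketch with hypothetical calibration data follows; the widening of the band away from xbar is exactly what D-optimized designs counteract by concentrating standards near the extremes of the range.

```python
import numpy as np
from scipy import stats

# Hypothetical calibration data: concentration (mg/L) vs. instrument response.
x = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
y = np.array([0.11, 0.20, 0.41, 0.59, 0.82, 1.01])

n = x.size
slope, intercept = np.polyfit(x, y, 1)
resid = y - (slope * x + intercept)
s = np.sqrt(resid @ resid / (n - 2))           # residual standard deviation
Sxx = np.sum((x - x.mean()) ** 2)
t_val = stats.t.ppf(0.975, n - 2)              # 95% two-sided

# 95% confidence band of the fitted calibration line at new points x0:
for x0 in np.linspace(x.min(), x.max(), 5):
    y0 = slope * x0 + intercept
    hw = t_val * s * np.sqrt(1.0 / n + (x0 - x.mean()) ** 2 / Sxx)
    print(f"x = {x0:5.2f}: y = {y0:.3f} +/- {hw:.3f}")
```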
NASA Astrophysics Data System (ADS)
Torregrosa, A. J.; Broatch, A.; Margot, X.; García-Tíscar, J.
2016-08-01
An experimental methodology is proposed to assess the noise emission of centrifugal turbocompressors like those of automotive turbochargers. A step-by-step procedure is detailed, starting from the theoretical considerations of sound measurement in flow ducts and examining specific experimental setup guidelines and signal processing routines. Special care is taken regarding some limiting factors that adversely affect the measuring of sound intensity in ducts, namely calibration, sensor placement and frequency ranges and restrictions. In order to provide illustrative examples of the proposed techniques and results, the methodology has been applied to the acoustic evaluation of a small automotive turbocharger in a flow bench. Samples of raw pressure spectra, decomposed pressure waves, calibration results, accurate surge characterization and final compressor noise maps and estimated spectrograms are provided. The analysis of selected frequency bands successfully shows how different, known noise phenomena of particular interest such as mid-frequency "whoosh noise" and low-frequency surge onset are correlated with operating conditions of the turbocharger. Comparison against external inlet orifice intensity measurements shows good correlation and improvement with respect to alternative wave decomposition techniques.
NASA Astrophysics Data System (ADS)
Dantec-Nédélec, S.; Ottlé, C.; Wang, T.; Guglielmo, F.; Maignan, F.; Delbart, N.; Valdayskikh, V.; Radchenko, T.; Nekrasova, O.; Zakharov, V.; Jouzel, J.
2017-06-01
The ORCHIDEE land surface model has recently been updated to improve the representation of high-latitude environments. The model now includes improved soil thermodynamics and the representation of permafrost physical processes (soil thawing and freezing), as well as a new snow model to improve the representation of the seasonal evolution of the snow pack and the resulting insulation effects. The model was evaluated against data from the experimental sites of the WSibIso-Megagrant project (www.wsibiso.ru). ORCHIDEE was applied in stand-alone mode at two experimental sites located in the Yamal Peninsula in northwestern Siberia. These sites are representative of circumpolar-Arctic tundra environments and differ in their respective fractions of shrub/tree cover and soil type. After performing a global sensitivity analysis to identify the parameters that most influence the simulation of energy and water transfers, the model was calibrated at the local scale and evaluated against in situ measurements (vertical profiles of soil temperature and moisture, as well as active layer thickness) acquired during summer 2012. The results show how sensitivity analysis can identify the dominant processes and thereby reduce the parameter space for the calibration process. We also discuss the model's performance at simulating soil temperature and water content (i.e., energy and water transfers in the soil-vegetation-atmosphere continuum) and the contribution of the vertical discretization of the hydrothermal properties. This work clearly shows, at least at the two sites used for validation, that the new ORCHIDEE vertical discretization can represent the water and heat transfers through complex cryogenic Arctic soils, which present multiple horizons, sometimes with peat inclusions. The improved model allows us to prescribe the vertical heterogeneity of the soil hydrothermal properties.
Evaluation of dispersivity coefficients by means of a laboratory image analysis.
Citarella, Donato; Cupola, Fausto; Tanda, Maria Giovanna; Zanini, Andrea
2015-01-01
This paper describes the application of an innovative procedure that allows the estimation of longitudinal and transverse dispersivities in an experimental plume devised in a laboratory sandbox. The phenomenon of transport in porous media is studied using sodium fluorescein as a tracer. Fluorescent excitation was achieved using blue light, and the concentration data were obtained by processing side-wall images collected with a high-resolution color digital camera. After a calibration process, the relationship between the luminosity of the emitted fluorescence and the fluorescein concentration was determined at each point of the sandbox. These relationships were used to describe the evolution of the transport process quantitatively throughout the entire domain. Some check tests were performed to verify the reliability of the experimental device. Numerical flow and transport models of the sandbox were developed and calibrated by comparing computed and observed flow rates and breakthrough curves. The estimation of the dispersivity coefficients was carried out by analyzing the concentration field deduced from the images collected during the experiments; the dispersivity coefficients were evaluated in the domain zones where the tracer affected the porous medium, under the hypothesis that the transport phenomenon is described by the advection-dispersion equation (ADE), and by computing the differential components of the concentration by means of a numerical leap-frog scheme. The values determined agree with those reported in the literature for similar media and with the coefficients obtained by calibrating the numerical model. The analysis of the performance of the methodology at different locations in the flow domain and at different phases of the plume evolution yielded particularly useful insights. Copyright © 2014 Elsevier B.V. All rights reserved.
The effect of rainfall measurement uncertainties on rainfall-runoff processes modelling.
Stransky, D; Bares, V; Fatka, P
2007-01-01
Rainfall data are a crucial input for various tasks concerning the wet weather period. Nevertheless, their measurement is affected by random and systematic errors that cause an underestimation of the rainfall volume. Therefore, the general objective of the presented work was to assess the credibility of measured rainfall data and to evaluate the effect of measurement errors on urban drainage modelling tasks. Within the project, a methodology for tipping bucket rain gauge (TBR) calibration was defined and assessed in terms of uncertainty analysis. A set of 18 TBRs was calibrated and the results were compared to the previous calibration, which made it possible to evaluate the ageing of the TBRs. The propagation of calibration and other systematic errors through a rainfall-runoff model was studied on an experimental catchment. It was found that TBR calibration is important mainly for tasks connected with the assessment of peak values and high-flow durations. The omission of calibration leads to up to 30% underestimation, and the effect of other systematic errors can add a further 15%. TBR calibration should be repeated every two years in order to keep up with the ageing of the TBR mechanics. Furthermore, the authors recommend adjusting the dynamic test duration in proportion to the generated rainfall intensity.
Bayesian calibration for electrochemical thermal model of lithium-ion cells
NASA Astrophysics Data System (ADS)
Tagade, Piyush; Hariharan, Krishnan S.; Basu, Suman; Verma, Mohan Kumar Singh; Kolake, Subramanya Mayya; Song, Taewon; Oh, Dukjin; Yeo, Taejung; Doo, Seokgwang
2016-07-01
Pseudo-two-dimensional electrochemical thermal (P2D-ECT) models contain many parameters that are difficult to evaluate experimentally. Estimation of these model parameters is challenging due to the computational cost and the transient nature of the model. Due to the lack of complete physical understanding, this issue is aggravated at extreme conditions such as low temperature (LT) operation. This paper presents a Bayesian calibration framework for the estimation of the P2D-ECT model parameters. The framework uses a matrix variate Gaussian process representation to obtain a computationally tractable formulation for the calibration of the transient model. Performance of the framework is investigated for calibration of the P2D-ECT model across a range of temperatures (333 K to 263 K) and operating protocols. In the absence of complete physical understanding, the framework also quantifies structural uncertainty in the calibrated model. This information is used by the framework to test the validity of new physical phenomena before their incorporation in the model. This capability is demonstrated by introducing a temperature dependence of Bruggeman's coefficient and lithium plating formation at LT. With the incorporation of the new physics, the calibrated P2D-ECT model accurately predicts the cell voltage with high confidence. The accurate predictions are used to obtain new insights into low-temperature lithium-ion cell behavior.
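The matrix variate Gaussian process formulation is beyond a short sketch, but the underlying Bayesian calibration loop can be illustrated with a toy scalar model and random-walk Metropolis sampling. In the Python example below, the voltage model, the prior, and the noise level are all hypothetical and bear no relation to the actual P2D-ECT parameterization.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy stand-in for the P2D-ECT model: cell voltage under constant-current
# discharge, parameterized by an effective resistance theta (to calibrate).
def cell_voltage(theta, t):
    return 3.7 - 0.3 * t - theta * 1.5     # hypothetical 1.5 A discharge

t = np.linspace(0.0, 1.0, 25)
v_obs = cell_voltage(0.08, t) + rng.normal(0.0, 0.01, t.size)  # synthetic data

def log_post(theta, sigma=0.01):
    if not 0.0 < theta < 1.0:              # uniform prior on (0, 1)
        return -np.inf
    r = v_obs - cell_voltage(theta, t)     # Gaussian measurement likelihood
    return -0.5 * np.sum(r**2) / sigma**2

# Random-walk Metropolis sampling of the posterior over theta.
theta, lp = 0.5, log_post(0.5)
samples = []
for _ in range(20_000):
    prop = theta + 0.01 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
post = np.array(samples[5_000:])           # discard burn-in
print(f"theta = {post.mean():.4f} +/- {post.std():.4f}")
```

The posterior spread is what carries the paper's "structural uncertainty" idea: if the calibrated model still misfits the data, the residual structure flags missing physics such as the LT plating term.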
Wang, Gang; Briskot, Till; Hahn, Tobias; Baumann, Pascal; Hubbuch, Jürgen
2017-03-03
Mechanistic modeling has repeatedly been applied successfully in the process development and control of protein chromatography. For each combination of adsorbate and adsorbent, the mechanistic models have to be calibrated. Some of the model parameters, such as system characteristics, can be determined reliably by applying well-established experimental methods, whereas others cannot be measured directly. In common practice of protein chromatography modeling, these parameters are identified by applying time-consuming methods such as frontal analysis combined with gradient experiments, curve fitting, or the combined Yamamoto approach. For new components in the chromatographic system, these traditional calibration approaches have to be repeated. In the presented work, a novel method for the calibration of mechanistic models based on artificial neural network (ANN) modeling was applied. An in silico screening of possible model parameter combinations was performed to generate learning material for the ANN model. Once the ANN model was trained to recognize chromatograms and to respond with the corresponding model parameter set, it was used to calibrate the mechanistic model from measured chromatograms. The ANN model's parameter estimation capability was tested by predicting gradient elution chromatograms. The time-consuming model parameter estimation process itself could be reduced to milliseconds. The functionality of the method was successfully demonstrated in a study with the calibration of the transport-dispersive model (TDM) and the stoichiometric displacement model (SDM) for a protein mixture. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
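A minimal Python sketch of the ANN calibration idea follows, with a toy single-peak forward model standing in for the TDM/SDM and an sklearn network in place of whatever architecture the authors used; the parameter ranges and noise level are hypothetical.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)
t = np.linspace(0.0, 10.0, 120)   # retention-time grid (min)

def chromatogram(params):
    """Toy forward model: one Gaussian elution peak with parameters
    (retention time, peak width, amplitude) standing in for the TDM/SDM."""
    rt, width, amp = params
    return amp * np.exp(-0.5 * ((t - rt) / width) ** 2)

# In-silico screening: simulate chromatograms across the parameter space.
params = rng.uniform([2.0, 0.2, 0.5], [8.0, 1.0, 2.0], size=(3000, 3))
curves = np.array([chromatogram(p) for p in params])

# Train the ANN to map a chromatogram back to its model parameters.
ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000,
                   random_state=0).fit(curves, params)

# "Measured" chromatogram with noise; once trained, the inverse mapping
# (i.e., the calibration itself) takes milliseconds.
true_p = np.array([5.3, 0.45, 1.2])
measured = chromatogram(true_p) + rng.normal(0.0, 0.01, t.size)
print("estimated parameters:", np.round(ann.predict([measured])[0], 3))
print("true parameters:     ", true_p)
```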
NASA Astrophysics Data System (ADS)
Nouri, N. M.; Mostafapour, K.; Kamran, M.
2018-02-01
In a closed water-tunnel circuit, multi-component strain gauge force and moment sensors (also known as balances) are generally used to measure the hydrodynamic forces and moments acting on scaled models. These balances are periodically calibrated by static loading. Their performance and accuracy depend significantly on the rig and the method of calibration. In this research, a new calibration rig was designed and constructed to calibrate multi-component internal strain gauge balances. The calibration rig has six degrees of freedom and six different component-loading structures that can be applied separately and synchronously. The system was designed around the applicability of formal experimental design techniques, using gravity for balance loading and for positioning and aligning the balance relative to gravity. To evaluate the calibration rig, a six-component internal balance developed by Iran University of Science and Technology was calibrated using response surface methodology. According to the results, the calibration rig met all design criteria. The rig provides the means by which various formal experimental design techniques can be implemented. Its simplicity saves time and money in the design of experiments and in balance calibration while simultaneously increasing the accuracy of these activities.
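As an illustration of the response-surface approach used to evaluate the rig, the sketch below fits a second-order model relating one bridge output to two applied load components by ordinary least squares; the loads, readings, and coefficients are synthetic placeholders, not data from the rig.

```python
import numpy as np

# Fit a second-order response surface
#   r = b0 + b1*Fx + b2*Fz + b3*Fx*Fz + b4*Fx^2 + b5*Fz^2
# for one bridge output as a function of two applied load components.
rng = np.random.default_rng(1)
Fx = rng.uniform(-100, 100, 40)          # applied axial load, N (synthetic)
Fz = rng.uniform(-100, 100, 40)          # applied normal load, N (synthetic)
true = 0.02 * Fx + 0.5 * Fz + 1e-4 * Fx * Fz + 2e-5 * Fz**2
reading = true + 0.05 * rng.standard_normal(40)   # bridge output, mV

X = np.column_stack([np.ones_like(Fx), Fx, Fz, Fx * Fz, Fx**2, Fz**2])
coef, *_ = np.linalg.lstsq(X, reading, rcond=None)
print(np.round(coef, 6))   # calibration coefficients b0..b5
```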
NASA Astrophysics Data System (ADS)
Ala-aho, Pertti; Soulsby, Chris; Wang, Hailong; Tetzlaff, Doerthe
2017-04-01
Understanding the role of groundwater in runoff generation in headwater catchments is a challenge in hydrology, particularly so in data-scarce areas. Fully integrated surface-subsurface modelling has shown potential for increasing process understanding of runoff generation, but high data requirements and difficulties in model calibration are typically assumed to preclude its use in catchment-scale studies. We used a fully integrated surface-subsurface hydrological simulator to enhance groundwater-related process understanding in a headwater catchment with a rich background in empirical data. To set up the model we used minimal data that could reasonably be expected to exist for any experimental catchment. A novel aspect of our approach was the use of a simplified model parameterisation and the inclusion of parameters from all model domains (surface, subsurface, evapotranspiration) in automated model calibration. Calibration aimed not only to improve model fit, but also to test the information content of the observations (streamflow, remotely sensed evapotranspiration, median groundwater level) used in the calibration objective functions. We identified sensitive parameters in all model domains, demonstrating that model calibration should include parameters from these different domains. Incorporating groundwater data in the calibration objectives improved the model fit for groundwater levels, but the simulations did not reproduce the remotely sensed evapotranspiration time series well even after calibration. Spatially explicit model output improved our understanding of how groundwater functions in maintaining streamflow generation, primarily via saturation excess overland flow. Steady groundwater inputs created saturated conditions in the valley-bottom riparian peatlands, leading to overland flow even during dry periods. Groundwater on the hillslopes was more dynamic in its response to rainfall, acting to expand the saturated area extent and thereby promoting saturation excess overland flow during rainstorms. Our work shows the potential of using integrated surface-subsurface modelling alongside rigorous model calibration to better understand and visualise the role of groundwater in runoff generation even with limited datasets.
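A minimal sketch of how several observation types can enter one automated calibration: each data source gets its own goodness-of-fit measure (Nash-Sutcliffe efficiency is used here) and the measures are combined into a single score for the optimizer. The weights and dictionary keys are illustrative assumptions, not the study's actual objective formulation.

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1.0 indicates a perfect fit."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def multi_objective(sim, obs, weights=(0.5, 0.3, 0.2)):
    """Weighted sum of (1 - NSE) over the three data sources; lower is
    better. Keys and weights are illustrative assumptions."""
    keys = ("streamflow", "evapotranspiration", "gw_level")
    return sum(w * (1.0 - nse(sim[k], obs[k])) for w, k in zip(weights, keys))

# Toy demonstration with random series standing in for model output and data.
rng = np.random.default_rng(0)
obs = {k: rng.random(100) for k in ("streamflow", "evapotranspiration", "gw_level")}
sim = {k: v + 0.05 * rng.standard_normal(100) for k, v in obs.items()}
print(round(multi_objective(sim, obs), 3))   # score an optimizer would minimize
```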
Stepwise Regression Analysis of MDOE Balance Calibration Data Acquired at DNW
NASA Technical Reports Server (NTRS)
DeLoach, Richard; Philipsen, Iwan
2007-01-01
This paper reports a comparison of two experiment design methods applied in the calibration of a strain-gage balance. The first features a 734-point test matrix in which loads are varied systematically according to a method commonly applied in aerospace research and known in the literature of experiment design as One Factor At a Time (OFAT) testing. Two variations of an alternative experiment design were also executed on the same balance, each incorporating different features of an MDOE experiment design. The Modern Design of Experiments (MDOE) is an integrated process of experiment design, execution, and analysis applied at NASA's Langley Research Center to achieve significant reductions in cycle time, direct operating cost, and experimental uncertainty in aerospace research generally and in balance calibration experiments specifically. Personnel in the Instrumentation and Controls Department of the German Dutch Wind Tunnels (DNW) applied MDOE methods in the calibration of a balance with an automated calibration machine in order to evaluate them. The data were sent to Langley Research Center for analysis and comparison, and this paper reports key findings from that analysis. The chief result is that a 100-point calibration exploiting MDOE principles delivered quality comparable to a 700+ point OFAT calibration, with significantly reduced cycle time and attendant savings in direct and indirect costs. While the DNW test matrices implemented key MDOE principles and produced excellent results, additional MDOE concepts implemented in balance calibrations at Langley Research Center are also identified and described.
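The point-count contrast between the two philosophies can be illustrated numerically. In the sketch below, the OFAT sweep sizes, the repetition factor, and the every-eighth-run subset are all hypothetical stand-ins chosen only to mimic the roughly 700 vs 100 point ratio; they are not the actual DNW or Langley test matrices.

```python
import itertools

import numpy as np

# Illustrative contrast between an OFAT sweep and a designed test matrix
# for a 6-component balance. All counts here are hypothetical.
n_components = 6
ofat_levels = 21                               # load steps per single-factor sweep
ofat_points = n_components * ofat_levels * 6   # repeated sweeps -> ~700 points

levels = (-1.0, 0.0, 1.0)                      # coded low / center / high loads
factorial = np.array(list(itertools.product(levels, repeat=n_components)))
design = factorial[::8]                        # fractional subset, ~100 points
print(ofat_points, len(design))                # e.g., 756 vs 92
```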
Programmed temperature gasification study. Final report, October 1, 1979-November 30, 1980
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spoon, M.J.; Gardner, M.P.; Starkovich, J.A.
An experimental, modeling, and conceptual engineering analysis study has been performed to assess the feasibility of TRW's Programmed Temperature Gasification (PTG) concept for carbonizing caking coals without severe agglomeration. The concept involves controlling the carbonizing heating rate to maintain the metaplast concentration at a level equal to or slightly below that which causes agglomeration. The experimental studies required the construction of a novel programmed-temperature, elevated-pressure, hot-stage video microscope for observing changes in coal particles during heating. This system was used to develop a minimum-time heating schedule capable of carbonizing the coal at elevated pressures in the presence of hydrogen without severe agglomeration. Isothermal fixed heating rate data for a series of coals were subsequently used to calibrate and verify the mathematical model for the PTG process. These results showed good correlation between experimental data and mathematical predictions. Commercial application of the PTG concept to batch, moving bed, and fluid bed processing schemes was then evaluated. Based on the calibrated model, programmed temperature gasification of the coal without severe agglomeration could be carried out in a commercial batch reactor in 4 to 12 minutes. The next step in development of the PTG concept for commercial application would require testing on a bench-scale (3-inch diameter) gasifier coupled with a full commercial assessment to determine the size and cost of various gasification units.
NASA Technical Reports Server (NTRS)
Sellers, Piers J.; Shuttleworth, W. James; Dorman, Jeff L.; Dalcher, Amnon; Roberts, John M.
1989-01-01
Using meteorological and hydrological measurements taken in and above the central-Amazon-basin tropical forest, calibration of the Sellers et al. (1986) simple biosphere (SiB) model is described. The SiB model is a one-dimensional soil-vegetation-atmosphere model designed for use within general circulation models (GCMs), representing the vegetation cover by analogy with the processes operating within a single representative plant. The experimental systems and the procedures used to obtain field data are described, together with the specification of the physiological parameterization required to provide an average description of the data. It was found that some of the existing literature on stomatal behavior for tropical species is inconsistent with the observed behavior of the complete canopy in Amazonia, and that the rainfall interception store of the canopy is considerably smaller than originally specified in the SiB model.
A review of model applications for structured soils: b) Pesticide transport.
Köhne, John Maximilian; Köhne, Sigrid; Simůnek, Jirka
2009-02-16
The past decade has seen considerable progress in the development of models simulating pesticide transport in structured soils subject to preferential flow (PF). Most PF pesticide transport models are based on the two-region concept and usually assume one (vertical) dimensional flow and transport. Stochastic parameter sets are sometimes used to account for the effects of spatial variability at the field scale. In the past decade, PF pesticide models were also coupled with Geographical Information Systems (GIS) and groundwater flow models for application at the catchment and larger regional scales. A review of PF pesticide model applications reveals that the principal difficulty of their application is still the appropriate parameterization of PF and pesticide processes. Experimental solution strategies involve improving measurement techniques and experimental designs. Model strategies aim at enhancing process descriptions, studying parameter sensitivity, uncertainty, inverse parameter identification, model calibration, and effects of spatial variability, as well as generating model emulators and databases. Model comparison studies demonstrated that, after calibration, PF pesticide models clearly outperform chromatographic models for structured soils. Considering nonlinear and kinetic sorption reactions further enhanced the pesticide transport description. However, inverse techniques combined with typically available experimental data are often limited in their ability to simultaneously identify parameters for describing PF, sorption, degradation and other processes. On the other hand, the predictive capacity of uncalibrated PF pesticide models currently allows at best an approximate (order-of-magnitude) estimation of concentrations. Moreover, models should target the entire soil-plant-atmosphere system, including often neglected above-ground processes such as pesticide volatilization, interception, sorption to plant residues, root uptake, and losses by runoff. The conclusions compile progress, problems, and future research choices for modelling pesticide displacement in structured soils.
Accuracy evaluation of optical distortion calibration by digital image correlation
NASA Astrophysics Data System (ADS)
Gao, Zeren; Zhang, Qingchuan; Su, Yong; Wu, Shangquan
2017-11-01
Due to its convenience of operation, camera calibration based on a plane template is widely used in image measurement, computer vision, and other fields. How to select a suitable distortion model, however, remains an open problem, so there is a real need for experimental evaluation of the accuracy of camera distortion calibrations. This paper presents an experimental method for evaluating camera distortion calibration accuracy which is easy to implement, has high precision, and is suitable for a variety of commonly used lenses. First, we use the digital image correlation method to calculate the in-plane rigid-body displacement field of an image displayed on a liquid crystal display before and after translation, as captured with a camera. Next, we use a calibration board to calibrate the camera and obtain calibration parameters, which are then used to correct the calculation points of the images before and after deformation. The displacement fields before and after correction are compared to analyze the distortion calibration results. Experiments were carried out to evaluate the performance of two commonly used industrial camera lenses with four commonly used distortion models.
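For readers who want to reproduce a plane-template calibration of the kind being evaluated, a minimal OpenCV sketch follows; the chessboard geometry, the image paths, and the radial-only flag are assumptions for illustration, not the paper's setup.

```python
import glob

import cv2
import numpy as np

# Minimal plane-template calibration with OpenCV, assuming a 9x6 chessboard;
# the image paths are placeholders, and fixing the tangential terms is one
# example of selecting between candidate distortion models.
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in glob.glob("calib/*.png"):       # placeholder image set
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

flags = cv2.CALIB_ZERO_TANGENT_DIST          # radial-only distortion model
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None, flags=flags)
print(rms, dist.ravel())                     # reprojection error, coefficients
```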
On-the-fly Data Reprocessing and Analysis Capabilities from the XMM-Newton Archive
NASA Astrophysics Data System (ADS)
Ibarra, A.; Sarmiento, M.; Colomo, E.; Loiseau, N.; Salgado, J.; Gabriel, C.
2017-10-01
Since its latest release, the XMM-Newton Science Archive (XSA) has included the possibility of performing on-the-fly data processing with SAS through the Remote Interface for Science Analysis (RISA) server. It enables scientists to analyse data without downloading or installing either data or software. The analysis options presently available include extraction of spectra and light curves of user-defined EPIC source regions, and full reprocessing of data whose currently archived pipeline products were produced with older SAS versions or calibration files. The current pipeline is fully aligned with the most recent SAS and calibration, while the last full reprocessing of the archive was performed in 2013. The on-the-fly data processing functionality in this release is an experimental version, and we invite the community to test it and report their results. Known issues and workarounds are described in the 'Watchouts' section of the XSA web page. Feedback on how this functionality should evolve will be highly appreciated.
Liu, Guo-hai; Jiang, Hui; Xiao, Xia-hong; Zhang, Dong-juan; Mei, Cong-li; Ding, Yu-han
2012-04-01
Fourier transform near-infrared (FT-NIR) spectroscopy was applied to determine pH, one of the key process parameters in the solid-state fermentation of crop straws. First, near-infrared spectra of 140 solid-state fermented product samples were acquired with a near-infrared spectroscopy system over the range 10 000-4 000 cm⁻¹, and reference measurements of pH were obtained with a pH meter. Thereafter, an extreme learning machine (ELM) was employed to build the calibration model. In the calibration model, the optimal number of PCs and the optimal number of hidden-layer nodes of the ELM network were determined by cross-validation. Experimental results showed that the optimal ELM model, with a 1040-1 topology, achieved Rp = 0.9618 and RMSEP = 0.1044 on the prediction set. This achievement could provide a technological basis for the on-line measurement of process parameters in solid-state fermentation.
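The ELM itself is simple enough to sketch: a random, untrained hidden layer followed by output weights solved in closed form by least squares. The NumPy illustration below uses an assumed hidden-layer size and synthetic score/pH data rather than the paper's topology or dataset.

```python
import numpy as np

# Minimal extreme learning machine (ELM) regression: a random hidden layer
# followed by output weights solved in closed form by least squares.
rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=40):
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)              # random nonlinear feature map
    beta = np.linalg.pinv(H) @ y        # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Stand-in for PCA scores of 140 NIR spectra and reference pH values.
X = rng.standard_normal((140, 10))
y = 5.0 + 0.1 * (X @ rng.standard_normal(10)) + 0.05 * rng.standard_normal(140)
W, b, beta = elm_fit(X[:100], y[:100])
pred = elm_predict(X[100:], W, b, beta)
print(round(float(np.sqrt(np.mean((pred - y[100:]) ** 2))), 4))   # RMSEP
```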
Development of a semi-adiabatic isoperibol solution calorimeter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Venkata Krishnan, R.; Jogeswararao, G.; Parthasarathy, R.
2014-12-15
A semi-adiabatic isoperibol solution calorimeter has been indigenously developed. The measurement system comprises modules for the sensitive temperature measurement probe, signal processing, data collection, and Joule calibration. The sensitivity of the temperature measurement module was enhanced by using a sensitive thermistor coupled with a lock-in-amplifier-based signal processor. A microcontroller coordinates the operation and control of these modules, and is in turn controlled through custom personal computer (PC) based software developed with LabVIEW. An innovative summing amplifier concept was used to cancel out the base resistance of the thermistor, which was placed in the dewar. The temperature calibration was carried out with a standard platinum resistance (PT100) sensor coupled with an 8½ digit multimeter. The water equivalent of this calorimeter was determined by electrical calibration with the Joule calibrator. The experimentally measured quantities of heat were validated by measuring the heats of dissolution of pure KCl (for an endotherm) and tris(hydroxymethyl)aminomethane (for an exotherm). The uncertainty in the measurements was found to be within ±3%.
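The electrical (Joule) calibration reduces to a short calculation: dissipate a known electrical energy, divide by the observed temperature rise to obtain the calorimeter's water equivalent, then use that effective heat capacity to convert sample temperature changes into heats of dissolution. The numbers below are illustrative placeholders.

```python
# Joule calibration of a solution calorimeter; heater values and temperature
# rises are illustrative, not measured values from the instrument.
V, I, t = 10.0, 0.100, 120.0          # heater volts, amps, seconds
dT_cal = 0.250                        # temperature rise during calibration, K

Q_cal = V * I * t                     # dissipated electrical energy, J
C_eff = Q_cal / dT_cal                # "water equivalent" heat capacity, J/K

dT_sample = -0.085                    # endothermic dissolution (e.g., KCl), K
Q_sample = C_eff * dT_sample          # inferred heat of dissolution, J
print(C_eff, Q_sample)
```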
Modeling of solid-state and excimer laser processes for 3D micromachining
NASA Astrophysics Data System (ADS)
Holmes, Andrew S.; Onischenko, Alexander I.; George, David S.; Pedder, James E.
2005-04-01
An efficient simulation method has recently been developed for multi-pulse ablation processes. This is based on pulse-by-pulse propagation of the machined surface according to one of several phenomenological models for the laser-material interaction. The technique allows quantitative predictions to be made about the surface shapes of complex machined parts, given only a minimal set of input data for parameter calibration. In the case of direct-write machining of polymers or glasses with ns-duration pulses, this data set can typically be limited to the surface profiles of a small number of standard test patterns. The use of phenomenological models for the laser-material interaction, calibrated by experimental feedback, allows fast simulation, and can achieve a high degree of accuracy for certain combinations of material, laser and geometry. In this paper, the capabilities and limitations of the approach are discussed, and recent results are presented for structures machined in SU8 photoresist.
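A minimal version of such a pulse-by-pulse surface propagation can be written in a few lines, here assuming a phenomenological logarithmic ablation law (depth per pulse proportional to ln(F/Fth) above threshold) and a static Gaussian beam; the material and beam constants are illustrative, not calibrated values.

```python
import numpy as np

# Pulse-by-pulse surface propagation with a phenomenological logarithmic
# ablation law: depth per pulse = delta * ln(F/Fth) above the threshold Fth.
delta, F_th = 0.1, 0.5          # ablation scale (um/pulse), threshold (J/cm^2)
F0, w0 = 2.0, 10.0              # peak fluence (J/cm^2), 1/e^2 radius (um)

x = np.linspace(-30.0, 30.0, 601)            # lateral coordinate, um
fluence = F0 * np.exp(-2.0 * (x / w0) ** 2)  # static Gaussian beam profile
depth = np.zeros_like(x)                     # machined surface, um

for _ in range(50):                          # 50 pulses at one site
    ratio = np.maximum(fluence / F_th, 1.0)  # no ablation below threshold
    depth += delta * np.log(ratio)           # advance surface by one pulse

print(round(depth.max(), 2))                 # predicted crater depth, um
```

A full simulator would also update the local fluence as the surface geometry evolves; the sketch keeps the beam static for brevity.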
Goicoechea, Héctor C; Olivieri, Alejandro C; Tauler, Romà
2010-03-01
Correlation constrained multivariate curve resolution-alternating least squares is shown to be a feasible method for processing first-order instrumental data and achieving analyte quantitation in the presence of unexpected interferences. For both simulated and experimental data sets, the proposed method could correctly retrieve the analyte and interference spectral profiles and give accurate estimations of analyte concentrations in test samples. Since no information concerning the interferences was present in the calibration samples, the proposed multivariate calibration approach with the correlation constraint facilitates achievement of the so-called second-order advantage for the analyte of interest, an advantage normally associated with more complex, richer higher-order instrumental data. The proposed method is tested using a simulated data set and two experimental data systems: one for the determination of ascorbic acid in powder juices using UV-visible absorption spectral data, and another for the determination of tetracycline in serum samples using fluorescence emission spectroscopy.
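The core loop can be sketched briefly: alternating least-squares updates of concentration and spectral profiles under non-negativity, with the analyte's resolved concentrations in the calibration samples regressed against their known values at each iteration. The NumPy sketch below makes those assumptions explicit; initialization, component count, and convergence checking are simplified.

```python
import numpy as np

# Minimal correlation-constrained MCR-ALS: D (samples x channels) ~= C @ S.T.
# Component 0 is the analyte; cal_idx indexes the calibration samples whose
# concentrations c_known are known. Initial spectra S0 and the iteration
# count are assumptions of this sketch.
def mcr_als_corr(D, S0, c_known, cal_idx, n_iter=100):
    S = S0.copy()
    for _ in range(n_iter):
        C = np.clip(D @ np.linalg.pinv(S.T), 0.0, None)    # non-negative C
        a, b = np.polyfit(c_known, C[cal_idx, 0], 1)       # resolved vs known
        C[cal_idx, 0] = a * c_known + b                    # correlation constraint
        S = np.clip((np.linalg.pinv(C) @ D).T, 0.0, None)  # non-negative spectra
    return C, S
```

After convergence, the analyte concentrations of the test samples are read from the analyte column of C via the same regression.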
Data-driven sensitivity inference for Thomson scattering electron density measurement systems.
Fujii, Keisuke; Yamada, Ichihiro; Hasuo, Masahiro
2017-01-01
We developed a method to infer the calibration parameters of multichannel measurement systems, such as channel-to-channel variations in sensitivity and noise amplitude, from experimental data. We regard such uncertainties in the calibration parameters as dependent noise. The statistical properties of the dependent noise and those of the latent functions were modeled and implemented in the Gaussian process kernel, and both sets of parameters were inferred from the data on the basis of their statistical difference. We applied this method to the Thomson scattering electron density measurement system for the Large Helical Device plasma, which is equipped with 141 spatial channels. From 210 sets of experimental data, we evaluated the correction factor for the sensitivity and the noise amplitude of each channel. The correction factor varies by ≈10%, and the random noise amplitude is ≈2%; i.e., the measurement accuracy increases by a factor of 5 after this sensitivity correction. An improvement in the certainty of the spatial derivative inference was also demonstrated.
Polarimetric SAR calibration experiment using active radar calibrators
NASA Astrophysics Data System (ADS)
Freeman, Anthony; Shen, Yuhsyen; Werner, Charles L.
1990-03-01
Active radar calibrators are used to derive both the amplitude and phase characteristics of a multichannel polarimetric SAR from the complex image data. Results are presented from an experiment carried out using the NASA/JPL DC-8 aircraft SAR over a calibration site at Goldstone, California. As part of the experiment, polarimetric active radar calibrators (PARCs) with adjustable polarization signatures were deployed. Experimental results demonstrate that the PARCs can be used to calibrate polarimetric SAR images successfully. Restrictions on the application of the PARC calibration procedure are discussed.
Polarimetric SAR calibration experiment using active radar calibrators
NASA Technical Reports Server (NTRS)
Freeman, Anthony; Shen, Yuhsyen; Werner, Charles L.
1990-01-01
Active radar calibrators are used to derive both the amplitude and phase characteristics of a multichannel polarimetric SAR from the complex image data. Results are presented from an experiment carried out using the NASA/JPL DC-8 aircraft SAR over a calibration site at Goldstone, California. As part of the experiment, polarimetric active radar calibrators (PARCs) with adjustable polarization signatures were deployed. Experimental results demonstrate that the PARCs can be used to calibrate polarimetric SAR images successfully. Restrictions on the application of the PARC calibration procedure are discussed.
Querol, Jorge; Tarongí, José Miguel; Forte, Giuseppe; Gómez, José Javier; Camps, Adriano
2017-05-10
MERITXELL is a ground-based multisensor instrument that includes a multiband dual-polarization radiometer, a GNSS reflectometer, and several optical sensors. Its main goals are twofold: to test data fusion techniques, and to develop radio-frequency interference (RFI) detection, localization, and mitigation techniques. The former is necessary to retrieve complementary data useful for developing geophysical models with improved accuracy, whereas the latter aims at solving one of the most important problems of microwave radiometry. This paper describes the hardware design, the instrument control architecture, the calibration of the radiometer, and several captures of RFI signals taken with MERITXELL in an urban environment. The multiband radiometer has a dual-linear-polarization total-power radiometer topology, and it covers the L-, S-, C-, X-, K-, Ka-, and W-bands. Its back-end stage is based on a spectrum-analyzer structure that allows real-time signal processing, while the rest of the sensors are controlled by a host computer on which the off-line processing takes place. The calibration of the radiometer is performed using the hot-cold load procedure, together with the tipping-curves technique for the five upper frequency bands. Finally, captures of RFI signals are shown for most of the radiometric bands under analysis, which evidence the problem of RFI in microwave radiometry and the limitations it imposes on external calibration.
[Optimization of end-tool parameters based on robot hand-eye calibration].
Zhang, Lilong; Cao, Tong; Liu, Da
2017-04-01
A new one-time registration method was developed in this research for the hand-eye calibration of a surgical robot, in order to simplify the operation process and reduce preparation time. A new, practical method is also introduced to optimize the end-tool parameters of the surgical robot, based on an analysis of the error sources in the registration method. In the one-time registration method, a marker on the end-tool of the robot is first recognized by a fixed binocular camera, and the orientation and position of the marker are calculated from the joint parameters of the robot. The relationship between the camera coordinate system and the robot base coordinate system can then be established, completing the hand-eye calibration. Because of manufacturing and assembly errors in the robot end-tool, an error equation was established with the transformation matrix between the robot end coordinate system and the robot end-tool coordinate system as the variable, and numerical optimization was employed to optimize the end-tool parameters of the robot. The experimental results showed that the one-time registration method can significantly improve the efficiency of robot hand-eye calibration compared with existing methods, and that the parameter optimization method significantly improves the absolute positioning accuracy of the one-time registration method, which meets the requirements of clinical surgery.
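The geometry behind the one-time registration can be sketched with homogeneous transforms: once the marker pose is known both in the camera frame (from the binocular camera) and in the robot base frame (through the kinematic chain that includes the end-tool), a single observation fixes the camera-to-base transform. The frame chain and function names below are illustrative assumptions.

```python
import numpy as np

def inv_h(T):
    """Invert a rigid 4x4 homogeneous transform."""
    Ti = np.eye(4)
    R, t = T[:3, :3], T[:3, 3]
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

def register(T_cam_marker, T_base_end, T_end_tool, T_tool_marker):
    """One observation fixes the camera-to-base transform:
    T_cam_base = T_cam_marker @ inv(T_base_marker)."""
    T_base_marker = T_base_end @ T_end_tool @ T_tool_marker
    return T_cam_marker @ inv_h(T_base_marker)
```

Because T_end_tool enters the chain directly, any manufacturing or assembly error in it maps straight into the registration, which is why optimizing the end-tool parameters improves the absolute positioning accuracy.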
Querol, Jorge; Tarongí, José Miguel; Forte, Giuseppe; Gómez, José Javier; Camps, Adriano
2017-01-01
MERITXELL is a ground-based multisensor instrument that includes a multiband dual-polarization radiometer, a GNSS reflectometer, and several optical sensors. Its main goals are twofold: to test data fusion techniques, and to develop radio-frequency interference (RFI) detection, localization, and mitigation techniques. The former is necessary to retrieve complementary data useful for developing geophysical models with improved accuracy, whereas the latter aims at solving one of the most important problems of microwave radiometry. This paper describes the hardware design, the instrument control architecture, the calibration of the radiometer, and several captures of RFI signals taken with MERITXELL in an urban environment. The multiband radiometer has a dual-linear-polarization total-power radiometer topology, and it covers the L-, S-, C-, X-, K-, Ka-, and W-bands. Its back-end stage is based on a spectrum-analyzer structure that allows real-time signal processing, while the rest of the sensors are controlled by a host computer on which the off-line processing takes place. The calibration of the radiometer is performed using the hot-cold load procedure, together with the tipping-curves technique for the five upper frequency bands. Finally, captures of RFI signals are shown for most of the radiometric bands under analysis, which evidence the problem of RFI in microwave radiometry and the limitations it imposes on external calibration. PMID:28489056
Factory-Calibrated Continuous Glucose Sensors: The Science Behind the Technology.
Hoss, Udo; Budiman, Erwin Satrya
2017-05-01
The use of commercially available continuous glucose monitors for diabetes management requires sensor calibrations, which until recently were performed exclusively by the patient. A new development is the implementation of factory calibration for subcutaneous glucose sensors, which eliminates the need for user calibrations and the associated blood glucose tests. Factory calibration means that the calibration process is part of the sensor manufacturing process and is performed under controlled laboratory conditions. The ability to move from user calibration to factory calibration is based on several technical requirements related to sensor stability and the robustness of the sensor manufacturing process. The main advantages of factory calibration over conventional user calibration are: (a) more convenience for the user, since no more fingersticks are required for calibration, and (b) elimination of use errors related to the execution of the calibration process, which can lead to sensor inaccuracies. The FreeStyle Libre™ and FreeStyle Libre Pro™ flash continuous glucose monitoring systems are the first commercially available sensor systems using factory-calibrated sensors. For these sensor systems, no user calibrations are required throughout the sensor wear duration.
Factory-Calibrated Continuous Glucose Sensors: The Science Behind the Technology
Budiman, Erwin Satrya
2017-01-01
The use of commercially available continuous glucose monitors for diabetes management requires sensor calibrations, which until recently were performed exclusively by the patient. A new development is the implementation of factory calibration for subcutaneous glucose sensors, which eliminates the need for user calibrations and the associated blood glucose tests. Factory calibration means that the calibration process is part of the sensor manufacturing process and is performed under controlled laboratory conditions. The ability to move from user calibration to factory calibration is based on several technical requirements related to sensor stability and the robustness of the sensor manufacturing process. The main advantages of factory calibration over conventional user calibration are: (a) more convenience for the user, since no more fingersticks are required for calibration, and (b) elimination of use errors related to the execution of the calibration process, which can lead to sensor inaccuracies. The FreeStyle Libre™ and FreeStyle Libre Pro™ flash continuous glucose monitoring systems are the first commercially available sensor systems using factory-calibrated sensors. For these sensor systems, no user calibrations are required throughout the sensor wear duration. PMID:28541139
Uncertainty quantification for constitutive model calibration of brain tissue.
Brewick, Patrick T; Teferra, Kirubel
2018-05-31
The results of a study comparing model calibration techniques for Ogden's constitutive model, which describes the hyperelastic behavior of brain tissue, are presented. One- and two-term Ogden models are fit to two different sets of stress-strain experimental data for brain tissue using both least-squares optimization and Bayesian estimation. For the Bayesian estimation, the joint posterior distribution of the constitutive parameters is calculated by employing Hamiltonian Monte Carlo (HMC) sampling, a type of Markov chain Monte Carlo method. The HMC method is enriched in this work to intrinsically enforce the Drucker stability criterion by formulating a nonlinear parameter constraint function, which ensures the constitutive model produces physically meaningful results. Through application of the nested sampling technique, 95% confidence bounds on the constitutive model parameters are identified, and these bounds are then propagated through the constitutive model to produce the resultant bounds on the stress-strain response. The behavior of the model calibration procedures and the effect of the characteristics of the experimental data are extensively evaluated. It is demonstrated that increasing model complexity (i.e., adding an additional term in the Ogden model) improves the accuracy of the best-fit set of parameters while also increasing the uncertainty via the widening of the confidence bounds of the calibrated parameters. Despite some similarity between the two data sets, the resulting distributions are noticeably different, highlighting the sensitivity of the calibration procedures to the characteristics of the data. For example, the amount of uncertainty reported on the experimental data plays an essential role in how data points are weighted during the calibration, and this significantly affects how the parameters are calibrated when combining experimental data sets from disparate sources. Published by Elsevier Ltd.
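For the least-squares branch of the comparison, a one-term incompressible Ogden model can be fit to uniaxial data in a few lines; under incompressibility the nominal stress is P(lam) = mu*(lam^(alpha-1) - lam^(-alpha/2-1)). The synthetic data and starting values below are illustrative placeholders, not the study's brain-tissue data.

```python
import numpy as np
from scipy.optimize import least_squares

# One-term incompressible Ogden fit to uniaxial stress-stretch data.
def ogden_stress(params, lam):
    mu, alpha = params
    return mu * (lam ** (alpha - 1.0) - lam ** (-alpha / 2.0 - 1.0))

rng = np.random.default_rng(0)
lam = np.linspace(1.0, 1.3, 15)                    # applied stretch
obs = ogden_stress((600.0, 4.5), lam)              # Pa, illustrative values
obs = obs + 5.0 * rng.standard_normal(lam.size)    # synthetic measurement noise

fit = least_squares(lambda p: ogden_stress(p, lam) - obs, x0=(1000.0, 2.0))
print(fit.x)   # calibrated (mu, alpha)
```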
Optimization of dynamic envelope measurement system for high speed train based on monocular vision
NASA Astrophysics Data System (ADS)
Wu, Bin; Liu, Changjie; Fu, Luhua; Wang, Zhong
2018-01-01
The dynamic envelope curve is defined as the maximum outline swept by a train, accounting for various adverse effects during running, and it is an important basis for establishing railway boundaries. At present, the dynamic envelope curve of high-speed vehicles is mainly measured with binocular vision. Existing measuring systems suffer from poor portability, a complicated process, and high cost. In this paper, a new measurement system based on monocular vision theory and an analysis of the test environment is designed; the system parameters, the calibration of the wide-field-of-view camera, and the calibration of the laser plane are designed and optimized. Repeated tests and analysis of the experimental data verify an accuracy of up to 2 mm. The feasibility and adaptability of the measurement system are validated. The system offers lower cost, simpler measurement and data processing, and more reliable data, and it requires no matching algorithm.
Parallel computing for automated model calibration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burke, John S.; Danielson, Gary R.; Schulz, Douglas A.
2002-07-29
Natural resources model calibration is a significant burden on computing and staff resources in modeling efforts. Most assessments must consider multiple calibration objectives (for example, the magnitude and timing of the stream flow peak). An automated calibration process is needed that allows real-time updating of data and models, freeing scientists to focus their effort on improving the models. We are in the process of building a fully featured multi-objective calibration tool capable of processing multiple models cheaply and efficiently using null cycle computing. Our parallel processing and calibration software routines have been written generically, but our focus has been on natural resources model calibration. So far, the natural resources models have been friendly to parallel calibration efforts in that they require no inter-process communication, need only a small amount of input data, and output only a small amount of statistical information for each calibration run. A typical auto-calibration run might involve running a model 10,000 times with a variety of input parameters and summary statistical output. In the past, model calibration has been done against individual models for each data set. The individual model runs are relatively fast, ranging from seconds to minutes, and the process was run on a single computer using a simple iterative procedure. We have completed two auto-calibration prototypes and are currently designing a more feature-rich tool. Our prototypes have focused on running the calibration in a distributed, cross-platform computing environment. They allow incorporation of 'smart' calibration parameter generation (using artificial intelligence processing techniques). Null cycle computing similar to SETI@Home has also been a focus of our efforts. This paper details the design of the latest prototype and discusses our plans for the next revision of the software.
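Because each calibration run is independent (no inter-process communication, small input, small statistical output), the workload parallelizes trivially. The sketch below shows the pattern with Python's multiprocessing; run_model is a hypothetical stand-in for a model run plus objective evaluation, not the tool described here.

```python
import numpy as np
from multiprocessing import Pool

def run_model(params):
    """Hypothetical stand-in for one natural-resources model run plus
    objective evaluation; returns the parameter set and its error."""
    k, s = params
    peak = 100.0 * k / (1.0 + s)          # fake simulated peak flow
    return params, abs(peak - 80.0)       # error vs an observed peak

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    candidates = rng.uniform([0.1, 0.0], [2.0, 1.0], size=(10000, 2))
    with Pool() as pool:                  # one model run per worker task
        results = pool.map(run_model, candidates)
    best = min(results, key=lambda r: r[1])
    print(best)                           # best parameter set and objective
```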
NASA Astrophysics Data System (ADS)
Gu, Tingwei; Kong, Deren; Shang, Fei; Chen, Jing
2018-04-01
This paper describes the merits and demerits of different sensors for measuring propellant gas pressure, the applicable range of the frequently used dynamic pressure calibration methods, and the working principle of absolute quasi-static pressure calibration based on the drop-weight device. The main factors affecting the accuracy of pressure calibration are analyzed with respect to both the force sensor and the piston area. To calculate the effective area of the piston rod and to evaluate the uncertainty of the relationship between the measured peak force and the corresponding peak pressure in the absolute quasi-static pressure calibration process, a method based on the least squares principle is proposed. From the relevant quasi-static pressure calibration experimental data, the least squares fitting model between the peak force and the peak pressure, the effective area of the piston rod, and its measurement uncertainty are obtained. The fitting model is tested against an additional group of experiments, with the peak pressure obtained by the existing high-precision comparison calibration method taken as the reference value. The test results show that the peak pressure obtained by the least squares fitting model is closer to the reference value than that calculated directly from the cross-sectional area of the piston rod. When the peak pressure is higher than 150 MPa, the percentage difference is less than 0.71%, which meets the requirements of practical application.
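The essence of the method is a through-origin least-squares fit of peak force against peak pressure: the slope estimates the piston rod's effective area and the residual scatter gives its uncertainty. The sketch below shows the calculation with illustrative force/pressure pairs, not the paper's data.

```python
import numpy as np

# Through-origin least-squares fit of the model F = A_eff * p; the slope in
# kN/MPa equals 1e-3 m^2. The data pairs below are illustrative placeholders.
F = np.array([120.0, 180.0, 240.0, 310.0, 370.0])    # peak force, kN
p = np.array([60.2, 90.5, 120.6, 155.4, 185.9])      # peak pressure, MPa

A_eff = np.sum(p * F) / np.sum(p * p)                # slope, kN/MPa
resid = F - A_eff * p
s = np.sqrt(np.sum(resid**2) / (len(F) - 1))         # residual std dev, kN
u_A = s / np.sqrt(np.sum(p * p))                     # slope standard uncertainty
print(A_eff * 1e-3, "m^2 +/-", u_A * 1e-3, "m^2")    # effective area estimate
```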
A detector interferometric calibration experiment for high precision astrometry
NASA Astrophysics Data System (ADS)
Crouzier, A.; Malbet, F.; Henault, F.; Léger, A.; Cara, C.; LeDuigou, J. M.; Preis, O.; Kern, P.; Delboulbe, A.; Martin, G.; Feautrier, P.; Stadler, E.; Lafrasse, S.; Rochat, S.; Ketchazo, C.; Donati, M.; Doumayrou, E.; Lagage, P. O.; Shao, M.; Goullioud, R.; Nemati, B.; Zhai, C.; Behar, E.; Potin, S.; Saint-Pe, M.; Dupont, J.
2016-11-01
Context. Exoplanet science has made staggering progress in the last two decades, due to the relentless exploration of new detection methods and refinement of existing ones. Yet astrometry offers a unique and untapped potential for the discovery of habitable-zone low-mass planets around all the solar-like stars of the solar neighborhood. To fulfill this goal, astrometry must be paired with high-precision calibration of the detector. Aims: We present a way to calibrate a detector for high-accuracy astrometry. An experimental testbed combining an astrometric simulator and an interferometric calibration system is used to validate both the hardware needed for the calibration and the signal processing methods. The objective is an accuracy of 5 × 10⁻⁶ pixel on the location of a Nyquist-sampled polychromatic point spread function. Methods: The interferometric calibration system produced modulated Young fringes on the detector. The Young fringes were parametrized as products of time- and space-dependent functions, based on various pixel parameters. The function parameters were minimized iteratively until convergence was obtained, revealing the pixel information needed for the calibration of astrometric measurements. Results: The calibration system yielded the pixel positions to an accuracy estimated at 4 × 10⁻⁴ pixel. After including the pixel position information, an astrometric accuracy of 6 × 10⁻⁵ pixel was obtained, for a PSF motion over more than five pixels. In the static mode (small jitter motion of less than 1 × 10⁻³ pixel), a photon-noise-limited precision of 3 × 10⁻⁵ pixel was reached.
Extending neutron autoradiography technique for boron concentration measurements in hard tissues.
Provenzano, Lucas; Olivera, María Silvina; Saint Martin, Gisela; Rodríguez, Luis Miguel; Fregenal, Daniel; Thorp, Silvia I; Pozzi, Emiliano C C; Curotto, Paula; Postuma, Ian; Altieri, Saverio; González, Sara J; Bortolussi, Silva; Portu, Agustina
2018-07-01
The neutron autoradiography technique using polycarbonate nuclear track detectors (NTD) has been extended to quantify the boron concentration in hard tissues, an application of special interest in Boron Neutron Capture Therapy (BNCT). Chemical and mechanical processing methods to prepare the thin tissue sections required by this technique were explored. Four different decalcification methods governed by slow and fast kinetics were tested in boron-loaded bones; owing to the significant loss of boron content, this route was discarded. In contrast, mechanical manipulation to obtain bone powder and tissue sections tens of microns thick proved reproducible and suitable, ensuring proper conservation of the boron content in the samples. A calibration curve relating the ¹⁰B concentration of a bone sample to the track density in a Lexan NTD is presented. Bone powder embedded in boric acid solutions with known boron concentrations between 0 and 100 ppm was used as a standard material. The samples, contained in slim Lexan cases, were exposed to a neutron fluence of 10¹² cm⁻² at the thermal column central facility of the RA-3 reactor (Argentina). The tracks revealed in the NTD were counted with image processing software. The effect of track overlapping was studied and corresponding corrections were implemented in the presented calibration curve. Stochastic simulations of the track densities produced by the products of the ¹⁰B thermal neutron capture reaction for different boron concentrations in bone were performed and compared with the experimental results. The remarkable agreement between the two curves supports the suitability of the experimental calibration curve. This neutron autoradiography technique was finally applied to determine the boron concentration in pulverized and compact bone samples from a sheep experimental model. The results for both types of samples agreed, within experimental uncertainties, with boron measurements carried out by ICP-OES. The fact that the histological structure of the bone sections remains preserved allows for future boron microdistribution analysis. Copyright © 2018 Elsevier Ltd. All rights reserved.
Męczykowska, Hanna; Kobylis, Paulina; Stepnowski, Piotr; Caban, Magda
2017-05-04
Passive sampling is one of the most efficient methods for monitoring pharmaceuticals in environmental water. The reliability of the process depends on a correctly performed calibration experiment and a well-defined sampling rate (Rs) for the target analytes. Therefore, this review presents the state-of-the-art methods of passive sampler calibration for the most popular pharmaceuticals: antibiotics, hormones, β-blockers and non-steroidal anti-inflammatory drugs (NSAIDs), along with the variation in sampling rates. The advantages and difficulties of laboratory and field calibration are pointed out, with attention to the need to control the exact conditions. Equations for calculating the sampling rate and all the factors affecting the Rs value (temperature, flow, pH, salinity of the donor phase, and biofouling) are discussed. Moreover, calibration parameters gathered from the literature published in the last 16 years, including the device types, are tabulated and compared. What is evident is that sampling rate values for pharmaceuticals are affected by several factors whose influence is still unclear and unpredictable, while large gaps remain in the experimental data. The calibration procedure clearly needs to be improved; for example, there is a significant deficiency of performance reference compounds (PRCs) for pharmaceuticals. One suggestion is to introduce correction factors for Rs values estimated under laboratory conditions.
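In the kinetic uptake regime that these calibrations target, the sampling rate links the accumulated mass N to the time-weighted average water concentration Cw through N = Rs * Cw * t. A minimal back-calculation with illustrative numbers:

```python
# Kinetic-regime passive sampling: Cw = N / (Rs * t). All values below are
# illustrative placeholders, not data from any cited calibration study.
Rs = 0.12          # sampling rate, L/day (from a laboratory calibration)
t = 14.0           # deployment time, days
N = 35.0           # analyte mass accumulated on the receiving phase, ng

Cw = N / (Rs * t)  # time-weighted average water concentration, ng/L
print(round(Cw, 2), "ng/L")
```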
TOGA/COARE AMMR 1992 data processing
NASA Technical Reports Server (NTRS)
Kunkee, D. B.
1994-01-01
The complete set of Tropical Ocean and Global Atmosphere (TOGA)/Coupled Ocean Atmosphere Response Experiment (COARE) flight data for the 91.65 GHz Airborne Meteorological Radiometer (AMMR92) contains data from nineteen flights: two test flights, four transit flights, and thirteen experimental flights. The flights took place between December 16, 1992 and February 28, 1993. Data collection from the AMMR92 during the first ten flights of TOGA/COARE was performed using the executable code TSK30041, part of a set of IBM PC/XT programs used by the NASA Goddard Space Flight Center (GSFC). During one flight, inconsistencies were found in the operation of the AMMR92 with the GSFC data acquisition system; consequently, the Georgia Tech (GT) data acquisition system was used during all subsequent TOGA/COARE flights. During data processing, these inconsistencies were found to affect the recorded data as well. The errors are caused by an insufficient pre- and post-calibration settling period for the splash-plate mechanism. The splash-plate operates asynchronously with the data acquisition system (there is no position feedback to the GSFC or GT data system). This condition caused both the calibration and the post-calibration scene measurement to be corrupted on a randomly occurring basis when the GSFC system was used. The problem did not occur with the GT data acquisition system, which made sufficient allowance for splash-plate settling. After TOGA/COARE it was determined that the calibration of the instrument was a function of the scene brightness temperature. The orientation error in the main antenna beam of the AMMR92 is therefore hypothesized to be caused by misalignment of the internal 'splash-plate' responsible for directing the antenna beam toward the scene or toward the calibration loads; misalignment of the splash-plate is responsible for 'scene feedthrough' during calibration. Laboratory investigation at Georgia Tech found that each polarization is affected differently by the splash-plate alignment error, which is likely to cause significant and unique errors in the absolute calibration of each channel.
TOGA/COARE AMMR 1992 data processing
NASA Astrophysics Data System (ADS)
Kunkee, D. B.
1994-05-01
The complete set of Tropical Ocean and Global Atmosphere (TOGA)/Coupled Ocean Atmosphere Response Experiment (COARE) flight data for the 91.65 GHz Airborne Meteorological Radiometer (AMMR92) contains data from nineteen flights: two test flights, four transit flights, and thirteen experimental flights. The flights took place between December 16, 1992 and February 28, 1993. Data collection from the AMMR92 during the first ten flights of TOGA/COARE was performed using the executable code TSK30041, part of a set of IBM PC/XT programs used by the NASA Goddard Space Flight Center (GSFC). During one flight, inconsistencies were found in the operation of the AMMR92 with the GSFC data acquisition system; consequently, the Georgia Tech (GT) data acquisition system was used during all subsequent TOGA/COARE flights. During data processing, these inconsistencies were found to affect the recorded data as well. The errors are caused by an insufficient pre- and post-calibration settling period for the splash-plate mechanism. The splash-plate operates asynchronously with the data acquisition system (there is no position feedback to the GSFC or GT data system). This condition caused both the calibration and the post-calibration scene measurement to be corrupted on a randomly occurring basis when the GSFC system was used. The problem did not occur with the GT data acquisition system, which made sufficient allowance for splash-plate settling. After TOGA/COARE it was determined that the calibration of the instrument was a function of the scene brightness temperature. The orientation error in the main antenna beam of the AMMR92 is therefore hypothesized to be caused by misalignment of the internal 'splash-plate' responsible for directing the antenna beam toward the scene or toward the calibration loads; misalignment of the splash-plate is responsible for 'scene feedthrough' during calibration. Laboratory investigation at Georgia Tech found that each polarization is affected differently by the splash-plate alignment error, which is likely to cause significant and unique errors in the absolute calibration of each channel.
FY 2016 Status Report on the Modeling of the M8 Calibration Series using MAMMOTH
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Benjamin Allen; Ortensi, Javier; DeHart, Mark David
2016-09-01
This report provides a summary of the progress made toward validating the multi-physics reactor analysis application MAMMOTH using data from measurements performed at the Transient Reactor Test facility, TREAT. The completed work consists of a series of comparisons of TREAT element types (standard and control rod assemblies) in small geometries, as well as slotted mini-cores, against reference Monte Carlo simulations to ascertain the accuracy of the cross-section preparation techniques. After the successful completion of these smaller problems, a full-core model of the half-slotted core used in the M8 calibration series was assembled. Full-core MAMMOTH simulations were compared to Serpent reference calculations to assess the cross-section preparation process for this larger configuration. As part of the validation process, the M8 calibration series included a steady-state wire irradiation experiment and coupling factors for the experiment region. The shape of the power distribution obtained from the MAMMOTH simulation shows excellent agreement with the experiment. Larger differences were encountered in the calculation of the coupling factors, but there is also great uncertainty in how the experimental values were obtained. Future work will focus on resolving some of these differences.
Muscle Synergies May Improve Optimization Prediction of Knee Contact Forces During Walking
Walter, Jonathan P.; Kinney, Allison L.; Banks, Scott A.; D'Lima, Darryl D.; Besier, Thor F.; Lloyd, David G.; Fregly, Benjamin J.
2014-01-01
The ability to predict patient-specific joint contact and muscle forces accurately could improve the treatment of walking-related disorders. Muscle synergy analysis, which decomposes a large number of muscle electromyographic (EMG) signals into a small number of synergy control signals, could reduce the dimensionality and thus redundancy of the muscle and contact force prediction process. This study investigated whether use of subject-specific synergy controls can improve optimization prediction of knee contact forces during walking. To generate the predictions, we performed mixed dynamic muscle force optimizations (i.e., inverse skeletal dynamics with forward muscle activation and contraction dynamics) using data collected from a subject implanted with a force-measuring knee replacement. Twelve optimization problems (three cases with four subcases each) that minimized the sum of squares of muscle excitations were formulated to investigate how synergy controls affect knee contact force predictions. The three cases were: (1) Calibrate+Match where muscle model parameter values were calibrated and experimental knee contact forces were simultaneously matched, (2) Precalibrate+Predict where experimental knee contact forces were predicted using precalibrated muscle model parameter values from the first case, and (3) Calibrate+Predict where muscle model parameter values were calibrated and experimental knee contact forces were simultaneously predicted, all while matching inverse dynamic loads at the hip, knee, and ankle. The four subcases used either 44 independent controls or five synergy controls with and without EMG shape tracking. For the Calibrate+Match case, all four subcases closely reproduced the measured medial and lateral knee contact forces (R2 ≥ 0.94, root-mean-square (RMS) error < 66 N), indicating sufficient model fidelity for contact force prediction. For the Precalibrate+Predict and Calibrate+Predict cases, synergy controls yielded better contact force predictions (0.61 < R2 < 0.90, 83 N < RMS error < 161 N) than did independent controls (-0.15 < R2 < 0.79, 124 N < RMS error < 343 N) for corresponding subcases. For independent controls, contact force predictions improved when precalibrated model parameter values or EMG shape tracking was used. For synergy controls, contact force predictions were relatively insensitive to how model parameter values were calibrated, while EMG shape tracking made lateral (but not medial) contact force predictions worse. For the subject and optimization cost function analyzed in this study, use of subject-specific synergy controls improved the accuracy of knee contact force predictions, especially for lateral contact force when EMG shape tracking was omitted, and reduced prediction sensitivity to uncertainties in muscle model parameter values. PMID:24402438
Muscle synergies may improve optimization prediction of knee contact forces during walking.
Walter, Jonathan P; Kinney, Allison L; Banks, Scott A; D'Lima, Darryl D; Besier, Thor F; Lloyd, David G; Fregly, Benjamin J
2014-02-01
The ability to predict patient-specific joint contact and muscle forces accurately could improve the treatment of walking-related disorders. Muscle synergy analysis, which decomposes a large number of muscle electromyographic (EMG) signals into a small number of synergy control signals, could reduce the dimensionality and thus redundancy of the muscle and contact force prediction process. This study investigated whether use of subject-specific synergy controls can improve optimization prediction of knee contact forces during walking. To generate the predictions, we performed mixed dynamic muscle force optimizations (i.e., inverse skeletal dynamics with forward muscle activation and contraction dynamics) using data collected from a subject implanted with a force-measuring knee replacement. Twelve optimization problems (three cases with four subcases each) that minimized the sum of squares of muscle excitations were formulated to investigate how synergy controls affect knee contact force predictions. The three cases were: (1) Calibrate+Match where muscle model parameter values were calibrated and experimental knee contact forces were simultaneously matched, (2) Precalibrate+Predict where experimental knee contact forces were predicted using precalibrated muscle model parameter values from the first case, and (3) Calibrate+Predict where muscle model parameter values were calibrated and experimental knee contact forces were simultaneously predicted, all while matching inverse dynamic loads at the hip, knee, and ankle. The four subcases used either 44 independent controls or five synergy controls with and without EMG shape tracking. For the Calibrate+Match case, all four subcases closely reproduced the measured medial and lateral knee contact forces (R2 ≥ 0.94, root-mean-square (RMS) error < 66 N), indicating sufficient model fidelity for contact force prediction. For the Precalibrate+Predict and Calibrate+Predict cases, synergy controls yielded better contact force predictions (0.61 < R2 < 0.90, 83 N < RMS error < 161 N) than did independent controls (-0.15 < R2 < 0.79, 124 N < RMS error < 343 N) for corresponding subcases. For independent controls, contact force predictions improved when precalibrated model parameter values or EMG shape tracking was used. For synergy controls, contact force predictions were relatively insensitive to how model parameter values were calibrated, while EMG shape tracking made lateral (but not medial) contact force predictions worse. For the subject and optimization cost function analyzed in this study, use of subject-specific synergy controls improved the accuracy of knee contact force predictions, especially for lateral contact force when EMG shape tracking was omitted, and reduced prediction sensitivity to uncertainties in muscle model parameter values.
A non-invasive online photoionization spectrometer for FLASH2.
Braune, Markus; Brenner, Günter; Dziarzhytski, Siarhei; Juranić, Pavle; Sorokin, Andrey; Tiedtke, Kai
2016-01-01
The stochastic nature of the self-amplified spontaneous emission (SASE) process of free-electron lasers (FELs) causes pulse-to-pulse fluctuations of the radiation properties, such as the photon energy, which are decisive for photon-matter interaction processes. Hence, SASE FEL sources pose a great challenge for scientific investigations, since experimenters need precise real-time feedback on these properties for each individual photon bunch in order to interpret their experimental data. Furthermore, any device developed to deliver this information should not significantly interfere with or degrade the FEL beam. Regarding the spectral properties, a device for online monitoring of FEL wavelengths has been developed for FLASH2, based on photoionization of gaseous targets and measurement of the corresponding electron and ion time-of-flight spectra. This paper presents experimental studies and cross-calibration measurements demonstrating the viability of this online photoionization spectrometer.
A non-invasive online photoionization spectrometer for FLASH2
Braune, Markus; Brenner, Günter; Dziarzhytski, Siarhei; Juranić, Pavle; Sorokin, Andrey; Tiedtke, Kai
2016-01-01
The stochastic nature of the self-amplified spontaneous emission (SASE) process of free-electron lasers (FELs) causes pulse-to-pulse fluctuations of the radiation properties, such as the photon energy, which are decisive for photon–matter interaction processes. Hence, SASE FEL sources pose a great challenge for scientific investigations, since experimenters need precise real-time feedback on these properties for each individual photon bunch in order to interpret their experimental data. Furthermore, any device developed to deliver this information should not significantly interfere with or degrade the FEL beam. Regarding the spectral properties, a device for online monitoring of FEL wavelengths has been developed for FLASH2, based on photoionization of gaseous targets and measurement of the corresponding electron and ion time-of-flight spectra. This paper presents experimental studies and cross-calibration measurements demonstrating the viability of this online photoionization spectrometer. PMID:26698040
Multi-sensor calibration of low-cost magnetic, angular rate and gravity systems.
Lüken, Markus; Misgeld, Berno J E; Rüschen, Daniel; Leonhardt, Steffen
2015-10-13
We present a new calibration procedure for low-cost nine degrees-of-freedom (9DOF) magnetic, angular rate and gravity (MARG) sensor systems, which relies on a calibration cube, a reference table and a body sensor network (BSN). The 9DOF MARG sensor is part of our recently-developed "Integrated Posture and Activity Network by Medit Aachen" (IPANEMA) BSN. The advantage of this new approach is the use of the calibration cube, which allows for easy integration of two sensor nodes of the IPANEMA BSN. One 9DOF MARG sensor node is thereby used for calibration; the second 9DOF MARG sensor node is used for reference measurements. A novel algorithm uses these measurements to further improve the performance of the calibration procedure by processing arbitrarily-executed motions. In addition, the calibration routine can be used in an alignment procedure to minimize errors in the orientation between the 9DOF MARG sensor system and a motion capture inertial reference system. A two-stage experimental study is conducted to underline the performance of our calibration procedure. In both stages of the proposed calibration procedure, the BSN data, as well as reference tracking data are recorded. In the first stage, the mean values of all sensor outputs are determined as the absolute measurement offset to minimize integration errors in the derived movement model of the corresponding body segment. The second stage deals with the dynamic characteristics of the measurement system where the dynamic deviation of the sensor output compared to a reference system is corrected. In practical validation experiments, this procedure showed promising results with a maximum RMS error of 3.89°.
Multi-Sensor Calibration of Low-Cost Magnetic, Angular Rate and Gravity Systems
Lüken, Markus; Misgeld, Berno J.E.; Rüschen, Daniel; Leonhardt, Steffen
2015-01-01
We present a new calibration procedure for low-cost nine degrees-of-freedom (9DOF) magnetic, angular rate and gravity (MARG) sensor systems, which relies on a calibration cube, a reference table and a body sensor network (BSN). The 9DOF MARG sensor is part of our recently-developed “Integrated Posture and Activity Network by Medit Aachen” (IPANEMA) BSN. The advantage of this new approach is the use of the calibration cube, which allows for easy integration of two sensor nodes of the IPANEMA BSN. One 9DOF MARG sensor node is thereby used for calibration; the second 9DOF MARG sensor node is used for reference measurements. A novel algorithm uses these measurements to further improve the performance of the calibration procedure by processing arbitrarily-executed motions. In addition, the calibration routine can be used in an alignment procedure to minimize errors in the orientation between the 9DOF MARG sensor system and a motion capture inertial reference system. A two-stage experimental study is conducted to underline the performance of our calibration procedure. In both stages of the proposed calibration procedure, the BSN data, as well as reference tracking data are recorded. In the first stage, the mean values of all sensor outputs are determined as the absolute measurement offset to minimize integration errors in the derived movement model of the corresponding body segment. The second stage deals with the dynamic characteristics of the measurement system where the dynamic deviation of the sensor output compared to a reference system is corrected. In practical validation experiments, this procedure showed promising results with a maximum RMS error of 3.89°. PMID:26473873
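The first calibration stage described in both records above, taking the mean of all sensor outputs at rest as the absolute measurement offset, reduces to a few lines of numpy. The sketch below follows that reading of the abstract with simulated data; it is not the authors' code.

```python
import numpy as np

def static_offsets(samples):
    """Stage 1 of the procedure described above: estimate the absolute
    measurement offset of each channel as the mean output recorded while
    the sensor node rests on the calibration cube.

    samples: (N, 9) array of raw 9DOF MARG readings
             (3 accelerometer, 3 gyroscope, 3 magnetometer channels).
    """
    return samples.mean(axis=0)

# Simulated resting data: constant per-channel biases plus sensor noise
rng = np.random.default_rng(0)
bias = rng.uniform(-0.05, 0.05, 9)
raw = bias + rng.normal(0.0, 0.01, (1000, 9))

offsets = static_offsets(raw)
calibrated = raw - offsets  # bias-free signals, so gyro integration
                            # no longer accumulates a constant drift
```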
Sub-half-micron contact window design with 3D photolithography simulator
NASA Astrophysics Data System (ADS)
Brainerd, Steve K.; Bernard, Douglas A.; Rey, Juan C.; Li, Jiangwei; Granik, Yuri; Boksha, Victor V.
1997-07-01
In state-of-the-art IC design and manufacturing, certain lithography layers have unique requirements. Latitudes and tolerances that apply to contacts and polysilicon gates are tight for such critical layers. Industry experts are discussing the most cost-effective ways to use feature-oriented equipment and materials already developed for these layers. Such requirements introduce new dimensions into the traditionally challenging task for the photolithography engineer when considering various combinations of multiple factors to optimize and control the process. In addition, he/she faces a rapidly increasing cost of experiments, limited time and scarce access to equipment to conduct them. All the reasons presented above support simulation as an ideal method to satisfy these demands. However, lithography engineers may be easily dissatisfied with a simulation tool when discovering disagreement between the simulation and experimental data. The problem is that several parameters used in photolithography simulation are very process specific. Calibration, i.e., matching experimental and simulation data using a specific set of procedures, allows one to effectively use the simulation tool. We present results of a simulation-based approach to optimize photolithography processes for sub-0.5 micron contact windows. Our approach consists of: (1) 3D simulation to explore different lithographic options, (2) calibration to a range of process conditions with extensive use of specifically developed optimization techniques. The choice of a 3D simulator is essential because of the 3D nature of the problem of contact window design. We use DEPICT 4.1. This program performs fast aerial image simulation as presented before. For 3D exposure the program uses an extension to three dimensions of the high numerical aperture model combined with Fast Fourier Transforms for maximum performance and accuracy. We use the Kim (U.C. Berkeley) model and the fast marching Level Set method, respectively, for the calculation of resist development rates and resist surface movement during the development process. Calibration efforts were aimed at matching experimental results on contact windows obtained after exposure of a binary mask. Additionally, simulation was applied to conduct quantitative analysis of PSM design capabilities, optical proximity correction, and stepper parameter optimization. Extensive experiments covered exposure (ASML 5500/100D stepper), pre- and post-exposure bake and development (2.38% TMAH, puddle process) of JSR IX725D2G and TOK iP3500 photoresist films on 200 mm test wafers. 'Aquatar' was used as the top antireflective coating. SEM pictures of developed patterns were analyzed and compared with simulation results for different values of defocus, exposure energy, numerical aperture and partial coherence.
Contributions to the problem of piezoelectric accelerometer calibration. [using lock-in voltmeter]
NASA Technical Reports Server (NTRS)
Jakab, I.; Bordas, A.
1974-01-01
After discussing the principal calibration methods for piezoelectric accelerometers, an experimental setup for accelerometer calibration by the reciprocity method is described. It is shown how the use of a lock-in voltmeter eliminates errors due to viscous damping and electrical loading.
Scheiblauer, Johannes; Scheiner, Stefan; Joksch, Martin; Kavsek, Barbara
2018-09-14
A combined experimental/theoretical approach is presented, for improving the predictability of Saccharomyces cerevisiae fermentations. In particular, a mathematical model was developed explicitly taking into account the main mechanisms of the fermentation process, allowing for continuous computation of key process variables, including the biomass concentration and the respiratory quotient (RQ). For model calibration and experimental validation, batch and fed-batch fermentations were carried out. Comparison of the model-predicted biomass concentrations and RQ developments with the corresponding experimentally recorded values shows a remarkably good agreement for both batch and fed-batch processes, confirming the adequacy of the model. Furthermore, sensitivity studies were performed, in order to identify model parameters whose variations have significant effects on the model predictions: our model responds with significant sensitivity to the variations of only six parameters. These studies provide a valuable basis for model reduction, as also demonstrated in this paper. Finally, optimization-based parametric studies demonstrate how our model can be utilized for improving the efficiency of Saccharomyces cerevisiae fermentations. Copyright © 2018 Elsevier Ltd. All rights reserved.
Metal Carbon Eutectics to Extend the Use of the Fixed-Point Technique in Precision IR Thermometry
NASA Astrophysics Data System (ADS)
Battuello, M.; Girard, F.; Florio, M.
2008-06-01
The high-temperature extension of the fixed-point technique for primary calibration of precision infrared (IR) thermometers was investigated both through mathematical simulations and laboratory investigations. Simulations were performed with Co–C (1,324°C) and Pd–C (1,492°C) eutectic fixed points, and a precision IR thermometer was calibrated from the In point (156.5985°C) up to the Co–C point. Mathematical simulations suggested the possibility of directly deriving the transition temperatures of the Co–C and Pd–C points by extrapolating the calibration derived from fixed-point measurements from In to the Cu point. Both temperatures, as a result of the low uncertainty associated with the In–Cu calibration and the high number of fixed points involved in the calibration process, can be derived with an uncertainty of 0.11°C for Co–C and 0.18°C for Pd–C. A transition temperature of 1,324.3°C for Co–C was determined from the experimental verification, a value higher than, but compatible with, the one proposed by the thermometry community for inclusion as a secondary reference point for ITS-90 dissemination, i.e., 1,324.0°C.
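The extrapolation step in this record, deriving a eutectic transition temperature from an In-to-Cu fixed-point calibration, can be illustrated with a Wien-approximation model in which the logarithm of the thermometer signal is linear in 1/T. The sketch below uses that simplified single-wavelength model with made-up signal values; the actual instrument characterization is considerably more involved.

```python
import numpy as np

C2 = 1.4388e-2  # second radiation constant, m*K

# Hypothetical fixed-point calibration data: ITS-90 temperatures (K) for
# In, Zn, Al, Ag, Cu and corresponding (made-up) pyrometer signals,
# generated from a Wien-law model S = C*exp(-c2/(lam*T)) with lam = 0.9 um.
T_fp = np.array([429.7485, 692.677, 933.473, 1234.93, 1357.77])
S_fp = 5e8 * np.exp(-C2 / (0.9e-6 * T_fp))

# In the Wien approximation, ln(S) is linear in 1/T; fit that line.
slope, intercept = np.polyfit(1.0 / T_fp, np.log(S_fp), 1)

# Extrapolate: invert the line at the signal measured on a eutectic plateau
# (a stand-in value near the Co-C transition is used here).
S_plateau = 5e8 * np.exp(-C2 / (0.9e-6 * 1597.45))
T_transition = slope / (np.log(S_plateau) - intercept)
print(f"extrapolated transition temperature: {T_transition - 273.15:.1f} C")
```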
Franck, D; de Carlan, L; Pierrat, N; Broggio, D; Lamart, S
2007-01-01
Although great efforts have been made to improve the physical phantoms used to calibrate in vivo measurement systems, these phantoms represent a single average counting geometry and usually contain a uniform distribution of the radionuclide over the tissue substitute. As a matter of fact, significant corrections must be made to phantom-based calibration factors in order to obtain absolute calibration efficiencies applicable to a given individual. The importance of these corrections is particularly crucial when considering in vivo measurements of low energy photons emitted by radionuclides deposited in the lung such as actinides. Thus, it was desirable to develop a method for calibrating in vivo measurement systems that is more sensitive to these types of variability. Previous works have demonstrated the possibility of such a calibration using the Monte Carlo technique. Our research programme extended such investigations to the reconstruction of numerical anthropomorphic phantoms based on personal physiological data obtained by computed tomography. New procedures based on a new graphical user interface (GUI) for development of computational phantoms for Monte Carlo calculations and data analysis are being developed to take advantage of recent progress in image-processing codes. This paper presents the principal features of this new GUI. Results of calculations and comparison with experimental data are also presented and discussed in this work.
Calibration method for a large-scale structured light measurement system.
Wang, Peng; Wang, Jianmei; Xu, Jing; Guan, Yong; Zhang, Guanglie; Chen, Ken
2017-05-10
The structured light method is an effective non-contact measurement approach. The calibration greatly affects the measurement precision of structured light systems. To construct a large-scale structured light system with high accuracy, a large-scale and precise calibration gauge is always required, which leads to an increased cost. To this end, in this paper, a calibration method with a planar mirror is proposed to reduce the calibration gauge size and cost. An out-of-focus camera calibration method is also proposed to overcome the defocusing problem caused by the shortened distance during the calibration procedure. The experimental results verify the accuracy of the proposed calibration method.
Tomei, M Concetta; Mosca Angelucci, Domenica; Ademollo, Nicoletta; Daugulis, Andrew J
2015-03-01
Solid phase extraction performed with commercial polymer beads to treat soil contaminated by chlorophenols (4-chlorophenol, 2,4-dichlorophenol and pentachlorophenol), as single compounds and in a mixture, has been investigated in this study. Soil-water-polymer partition tests were conducted to determine the relative affinities of single compounds in soil-water and polymer-water pairs. Subsequent soil extraction tests were performed with Hytrel 8206, the polymer showing the highest affinity for the tested chlorophenols. Factors that were examined were polymer type, moisture content, and contamination level. Increased moisture content (up to 100%) improved the extraction efficiency for all three compounds. Extraction tests at this upper level of moisture content showed removal efficiencies ≥70% for all the compounds and their ternary mixture within 24 h of contact time, in contrast to the weeks and months normally required for conventional ex situ remediation processes. A dynamic model characterizing the rate and extent of decontamination was also formulated, calibrated and validated with the experimental data. The proposed model, based on the simplified approach of "lumped parameters" for the mass transfer coefficients, provided very good predictions of the experimental data for the absorptive removal of contaminants from soil at different individual solute levels. Parameters evaluated from calibration by fitting of single-compound data have been successfully applied to predict mixture data, with differences between experimental and predicted data in all cases being ≤3%. Copyright © 2014 Elsevier Ltd. All rights reserved.
Imaging of human tooth using ultrasound based chirp-coded nonlinear time reversal acoustics.
Dos Santos, Serge; Prevorovsky, Zdenek
2011-08-01
Human tooth imaging sonography is investigated experimentally with an acousto-optic noncoupling set-up based on the chirp-coded nonlinear time reversal acoustic concept. The complexity of the tooth internal structure (enamel-dentine interface, cracks between internal tubules) is analyzed by adapting nonlinear elastic wave spectroscopy (NEWS) with the objective of tomography of damage. Optimization of excitations using intrinsic symmetries, such as time reversal (TR) invariance, reciprocity, and correlation properties, is then proposed and implemented experimentally. The proposed medical application of this TR-NEWS approach is implemented on a third molar human tooth and constitutes an alternative to noncoupling echodentography techniques. A 10 MHz bandwidth ultrasonic instrumentation has been developed, including a laser vibrometer and a 20 MHz contact piezoelectric transducer. The calibrated chirp-coded TR-NEWS imaging of the tooth is obtained using symmetrized excitations, pre- and post-signal processing, and the highly sensitive 14 bit resolution TR-NEWS instrumentation previously calibrated. A nonlinear signature arising from the symmetry properties is observed experimentally in the tooth using this bi-modal TR-NEWS imaging before and after the focusing induced by the time-compression process. The TR-NEWS polar B-scan of the tooth is described and suggested as a potential application for modern echodentography. It constitutes the basis of self-consistent harmonic imaging sonography for monitoring crack propagation in the dentine, which is responsible for human tooth structural health. Copyright © 2011 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Lambiase, F.; Genna, S.; Kant, R.
2018-01-01
The quality of the joints produced by means of Laser-Assisted Metal to Polymer direct joining (LAMP) is strongly influenced by the temperature field produced during the laser treatment. The main phenomena, including the adhesion of the plastic to the metal sheet and the development of bubbles (on the plastic surface), depend on the temperature reached by the polymer at the interface. Such a temperature should be higher than the softening temperature, but lower than the degradation temperature of the polymer. However, the temperature distribution is difficult to measure experimentally since most polymers (which are transparent to the laser radiation) are often opaque at infrared wavelengths. Thus, infrared analysis involving pyrometers and infrared cameras is not suitable for this purpose. On the other hand, thermocouples are difficult to place at the interface without influencing the temperature conditions. In this paper, an integrated approach involving both experimental measurements and a Finite Element (FE) model was used to perform such an analysis. LAMP of polycarbonate and AISI 304 stainless steel was performed by means of a high power diode laser, and the main process parameters, i.e. laser power and scanning speed, were varied. Comparing the experimental measurements and the FE model prediction of the thermal field, a good correspondence was achieved, proving that the developed model and the proposed calibration procedure are ready to be used for process design and optimization.
NASA Technical Reports Server (NTRS)
Amer, Tahani; Tripp, John; Tcheng, Ping; Burkett, Cecil; Sealey, Bradley
2004-01-01
This paper presents the calibration results and uncertainty analysis of a high-precision reference pressure measurement system currently used in wind tunnels at the NASA Langley Research Center (LaRC). Sensors, calibration standards, and measurement instruments are subject to errors due to aging, drift with time, environment effects, transportation, the mathematical model, the calibration experimental design, and other factors. Errors occur at every link in the chain of measurements and data reduction from the sensor to the final computed results. At each link of the chain, bias and precision uncertainties must be separately estimated for facility use, and are combined to produce overall calibration and prediction confidence intervals for the instrument, typically at a 95% confidence level. The uncertainty analysis and calibration experimental designs used herein, based on techniques developed at LaRC, employ replicated experimental designs for efficiency, separate estimation of bias and precision uncertainties, and detection of significant parameter drift with time. Final results are presented, including calibration confidence intervals and prediction intervals given as functions of the applied inputs rather than as a fixed percentage of the full-scale value. System uncertainties are propagated beginning with the initial reference pressure standard, to the calibrated instrument as a working standard in the facility. Among the several parameters that can affect the overall results are operating temperature, atmospheric pressure, humidity, and facility vibration. Effects of factors such as initial zeroing and temperature are investigated. The effects of the identified parameters on system performance and accuracy are discussed.
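As a rough illustration of the separate estimation and combination of bias and precision uncertainties mentioned above, the conventional root-sum-square rule with a Student-t coverage factor can be sketched as follows. This is a generic textbook formulation with hypothetical values, not the specific LaRC procedure.

```python
import numpy as np
from scipy import stats

def combined_uncertainty(bias_limits, precision_std, n, conf=0.95):
    """Combine independent bias limits with precision scatter into an
    expanded uncertainty via the conventional root-sum-square rule
    (a generic sketch, not the LaRC procedure)."""
    b = np.sqrt(np.sum(np.asarray(bias_limits) ** 2))  # RSS of bias terms
    s = precision_std / np.sqrt(n)                     # std of the mean
    k = stats.t.ppf(0.5 + conf / 2, df=n - 1)          # coverage factor
    return k * np.sqrt(b ** 2 + s ** 2)

# Hypothetical example: two bias sources (Pa) and repeat-reading scatter
U95 = combined_uncertainty(bias_limits=[0.8, 0.5], precision_std=1.2, n=20)
print(f"expanded uncertainty (95%): +/- {U95:.2f} Pa")
```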
A novel instrumentation circuit for electrochemical measurements.
Yin, Li-Te; Wang, Hung-Yu; Lin, Yang-Chiuan; Huang, Wen-Chung
2012-01-01
In this paper, a novel signal processing circuit which can be used for the measurement of H(+) ion and urea concentrations is presented. A potentiometric method is used to detect the concentrations of H(+) ions and urea by using H(+) ion-selective electrodes and urea electrodes, respectively. The experimental data show that this measuring structure has a linear pH response over the range of pH 2 to 12, and that the dynamic range for urea concentration measurement is 0.25 to 64 mg/dL. The designed instrumentation circuit possesses a calibration function and can be applied to different sensing electrodes for electrochemical analysis. It has the advantages of being multi-purpose, easy to calibrate, and low in cost.
NASA Astrophysics Data System (ADS)
Lamare, Maxim; Hedley, John; King, Martin
2016-04-01
Knowledge of the albedo in the cryosphere is essential to monitor a range of climatic processes that have an impact on a global scale. Optical Earth Observation satellites are ideal for the synoptic observation of expansive and inaccessible areas, providing large datasets used to derive essential products, such as albedo. The application of remote sensing to investigate climate processes requires the combination of data from different sensors. However, although there is significant value in the analysis of data from individual sensors, global observing systems require accurate knowledge of sensor-to-sensor biases. Therefore, the inter-calibration of sensors used for climate studies is essential to avoid inconsistencies, which may mask climate effects. CEOS (Committee on Earth Observing Satellites) has established a number of natural Earth targets to serve as international reference standards, amongst which sea ice has great potential. The reflectance of natural surfaces is not isotropic and varies with the illumination and viewing geometries, consequently impacting satellite observations. Furthermore, variations in the physical properties (sea ice type, thickness) and the light-absorbing impurities deposited in the sea ice have a strong impact on reflectance. Thus, the characterisation of the bi-directional reflectance distribution function (BRDF) of sea ice is a fundamental step toward the inter-calibration of optical satellite sensors. This study provides a characterisation of the effects of mineral aerosol and black carbon deposits on the BRDF of three different sea ice types. BRDF measurements were performed on bare sea ice grown in an experimental ice tank, using a state-of-the-art laboratory goniometer. The sea ice was "poisoned" with concentrations of mineral dust and black carbon varying between 100 and 5000 ng g-1, deposited uniformly in a 5 cm surface layer. Using measurements from the experimental facility, novel information about sea ice BRDF as a function of sea ice type, thickness and light-absorbing impurities was derived using a radiative-transfer model (PlanarRad). This extensive characterisation of the multi-angular reflectance of sea ice reveals the importance of BRDF for the validation and calibration of Earth Observation satellite sensor data.
Estimated landmark calibration of biomechanical models for inverse kinematics.
Trinler, Ursula; Baker, Richard
2018-01-01
Inverse kinematics is emerging as the optimal method in movement analysis to fit a multi-segment biomechanical model to experimental marker positions. A key part of this process is calibrating the model to the dimensions of the individual being analysed which requires scaling of the model, pose estimation and localisation of tracking markers within the relevant segment coordinate systems. The aim of this study is to propose a generic technique for this process and test a specific application to the OpenSim model Gait2392. Kinematic data from 10 healthy adult participants were captured in static position and normal walking. Results showed good average static and dynamic fitting errors between virtual and experimental markers of 0.8 cm and 0.9 cm, respectively. Highest fitting errors were found on the epicondyle (static), feet (static, dynamic) and on the thigh (dynamic). These result from inconsistencies between the model geometry and degrees of freedom and the anatomy and movement pattern of the individual participants. A particular limitation is in estimating anatomical landmarks from the bone meshes supplied with Gait2392 which do not conform with the bone morphology of the participants studied. Soft tissue artefact will also affect fitting the model to walking trials. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
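The static and dynamic fitting errors quoted above are RMS distances between virtual (model) and experimental marker positions. A minimal numpy sketch of that metric, with synthetic marker sets standing in for real capture data, is:

```python
import numpy as np

def marker_rms_error(virtual, experimental):
    """RMS distance between model (virtual) and measured marker positions,
    the fitting-error metric reported above. Arrays are (n_markers, 3),
    here in centimetres; the data below are synthetic."""
    d = np.linalg.norm(virtual - experimental, axis=1)
    return np.sqrt(np.mean(d ** 2))

rng = np.random.default_rng(2)
experimental = rng.uniform(0, 100, (20, 3))
virtual = experimental + rng.normal(0, 0.8, (20, 3))
print(f"RMS fitting error: {marker_rms_error(virtual, experimental):.2f} cm")
```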
Phillip E. Farnes; Ward W. McCaughey; Katherine J. Hansen
1999-01-01
The objectives of this Research Joint Venture Agreement (RJVA) were to install and calibrate three flumes on Tenderfoot Creek Experimental Forest (TCEF) in central Montana; check calibration of the existing seven flumes on TCEF; estimate the influence of fire on water yields over the 400-year fire history period; and estimate back records of monthly temperature,...
ERIC Educational Resources Information Center
Gutierrez de Blume, Antonio P.
2017-01-01
This study investigated the influence of strategy training instruction and an extrinsic incentive on American fourth- and fifth-grade students' (N = 35) performance, confidence in performance, and calibration accuracy. Using an experimental design, children were randomized to either an experimental group (strategy training and an extrinsic…
Calibration of streamflow gauging stations at the Tenderfoot Creek Experimental Forest
Scott W. Woods
2007-01-01
We used tracer based methods to calibrate eleven streamflow gauging stations at the Tenderfoot Creek Experimental Forest in western Montana. At six of the stations the measured flows were consistent with the existing rating curves. At Lower and Upper Stringer Creek, Upper Sun Creek and Upper Tenderfoot Creek the published flows, based on the existing rating curves,...
Interpretation of F106B and CV580 in-flight lightning data and form factor determination
NASA Technical Reports Server (NTRS)
Rudolph, T.; Horembala, J.; Eriksen, F. J.; Weigel, H. S.; Elliott, J. R.; Parker, S. L.; Perala, R. A.
1989-01-01
Two topics of in-flight aircraft/lightning interaction are addressed. The first is the analysis of measured data from the NASA F106B Thunderstorm Research Aircraft and the CV580 research program run by the FAA and Wright-Patterson Air Force Base. The CV580 data was investigated in a mostly qualitative sense, while the F106B data was subjected to both statistical and quantitative analysis using linear triggered lightning finite difference models. The second main topic is the analysis of field mill data and the calibration of the field mill systems. The calibration of the F106B field mill system was investigated using an improved finite difference model of the aircraft having a spatial resolution of one-quarter meter. The calibration was applied to measured field mill data acquired during the 1985 thunderstorm season. The experimental determination of form factors useful for field mill calibration was also investigated both experimentally and analytically. The experimental effort involved the use of conducting scale models and an electrolytic tank. An analytic technique was developed to aid in the understanding of the experimental results.
Reconstruction method for fringe projection profilometry based on light beams.
Li, Xuexing; Zhang, Zhijiang; Yang, Chen
2016-12-01
A novel reconstruction method for fringe projection profilometry, based on light beams, is proposed and verified by experiments. Commonly used calibration techniques require either projector calibration parameters or reference planes placed at many known positions. Introducing projector calibration can reduce the accuracy of the reconstruction result, and placing the reference planes at many known positions is a time-consuming process. Therefore, in this paper, a reconstruction method that does not require the projector's parameters is proposed, and only two reference planes are introduced. A series of light beams, determined by the subpixel point-to-point map on the two reference planes, combined with their reflected light beams determined by the camera model, are used to calculate the 3D coordinates of reconstruction points. Furthermore, the bundle adjustment strategy and the complementary gray-code phase-shifting method are utilized to ensure accuracy and stability. Qualitative and quantitative comparisons as well as experimental tests demonstrate the performance of our proposed approach, and the measurement accuracy can reach about 0.0454 mm.
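The geometric core of such a method, intersecting a projector light beam (defined by matched subpixel points on the two reference planes) with the camera ray, amounts to finding the closest point between two 3D lines. A hedged numpy sketch with made-up coordinates, not the paper's implementation:

```python
import numpy as np

def closest_point_between_lines(p1, d1, p2, d2):
    """Midpoint of the common perpendicular of two 3D lines p + t*d, used
    here to intersect a projector beam with a camera ray (a sketch)."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    n = np.cross(d1, d2)
    # Solve t1*d1 - t2*d2 + s*n = p2 - p1 for (t1, t2, s)
    A = np.stack([d1, -d2, n], axis=1)
    t1, t2, _ = np.linalg.solve(A, p2 - p1)
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

# Beam through matched points on reference planes z=0 and z=100 (mm);
# camera ray from the origin through the observed pixel direction.
beam_p, beam_q = np.array([10.0, 5.0, 0.0]), np.array([14.0, 7.0, 100.0])
cam_o, cam_dir = np.zeros(3), np.array([0.14, 0.07, 1.0])
X = closest_point_between_lines(beam_p, beam_q - beam_p, cam_o, cam_dir)
print(X)  # ~ [14, 7, 100] for these consistent made-up rays
```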
Taking a look at the calibration of a CCD detector with a fiber-optic taper
Alkire, R. W.; Rotella, F. J.; Duke, N. E. C.; Otwinowski, Zbyszek; Borek, Dominika
2016-01-01
At the Structural Biology Center beamline 19BM, located at the Advanced Photon Source, the operational characteristics of the equipment are routinely checked to ensure they are in proper working order. After performing a partial flat-field calibration for the ADSC Quantum 210r CCD detector, it was confirmed that the detector operates within specifications. However, as a secondary check it was decided to scan a single reflection across one-half of a detector module to validate the accuracy of the calibration. The intensities from this single reflection varied by more than 30% from the module center to the corner of the module. Redistribution of light within bent fibers of the fiber-optic taper was identified to be a source of this variation. The degree to which the diffraction intensities are corrected to account for characteristics of the fiber-optic tapers depends primarily upon the experimental strategy of data collection, approximations made by the data processing software during scaling, and crystal symmetry. PMID:27047303
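For context, a standard flat-field correction of the kind such a calibration supports divides out the per-pixel relative response; the sketch below is the generic dark/flat procedure with simulated frames, not the detector vendor's algorithm.

```python
import numpy as np

def flat_field_correct(raw, flat, dark):
    """Generic flat-field correction: remove the dark signal and divide by
    the normalized flat response, so a ~30% response roll-off toward a
    module corner (as reported above) is divided out."""
    flat_net = flat - dark
    gain = flat_net / flat_net.mean()   # per-pixel relative response
    return (raw - dark) / gain

# Simulated frames: a uniform exposure seen through a vignetted module
rng = np.random.default_rng(0)
dark = rng.normal(100, 1, (512, 512))
response = np.linspace(1.0, 0.7, 512)[None, :]   # 30% roll-off
flat = dark + 1000 * response
raw = dark + 500 * response
corrected = flat_field_correct(raw, flat, dark)  # flat across the module
```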
High accuracy broadband infrared spectropolarimetry
NASA Astrophysics Data System (ADS)
Krishnaswamy, Venkataramanan
Mueller matrix spectroscopy, or spectropolarimetry, combines conventional spectroscopy with polarimetry, providing more information than can be gleaned from spectroscopy alone. Experimental studies on infrared polarization properties of materials covering a broad spectral range have been scarce due to the lack of available instrumentation. This dissertation aims to fill the gap by the design, development, calibration and testing of a broadband Fourier Transform Infra-Red (FT-IR) spectropolarimeter. The instrument operates over the 3–12 μm waveband and offers better overall accuracy compared to previous-generation instruments. Accurate calibration of a broadband spectropolarimeter is a non-trivial task due to the inherent complexity of the measurement process. An improved calibration technique is proposed for the spectropolarimeter and numerical simulations are conducted to study the effectiveness of the proposed technique. Insights into the geometrical structure of the polarimetric measurement matrix are provided to aid further research towards global optimization of Mueller matrix polarimeters. A high performance infrared wire-grid polarizer is characterized using the spectropolarimeter. Mueller matrix spectrum measurements on penicillin and pine pollen are also presented.
Calibration of a portable HPGe detector using MCNP code for the determination of 137Cs in soils.
Gutiérrez-Villanueva, J L; Martín-Martín, A; Peña, V; Iniguez, M P; de Celis, B; de la Fuente, R
2008-10-01
In situ gamma spectrometry provides a fast method to determine (137)Cs inventories in soils. To improve the accuracy of the estimates, one can use not only the information on the photopeak count rates but also on the peak to forward-scatter ratios. Before applying this procedure to field measurements, a calibration including several experimental simulations must be carried out in the laboratory. In this paper it is shown that Monte Carlo methods are a valuable tool to minimize the number of experimental measurements needed for the calibration.
Modeling Subsurface Behavior at the System Level: Considerations and a Path Forward
NASA Astrophysics Data System (ADS)
Geesey, G.
2005-12-01
The subsurface is an obscure but essential resource to life on Earth. It is an important region for carbon production and sequestration, a source and reservoir for energy, minerals and metals, and potable water. There is a growing need to better understand subsurface processes that control the exploitation and security of these resources. Our best models often fail to predict these processes at the field scale because of limited understanding of 1) the processes and the controlling parameters, 2) how processes are coupled at the field scale, 3) geological heterogeneities that control hydrological, geochemical and microbiological processes at the field scale, and 4) a lack of data sets to calibrate and validate numerical models. There is a need for experimental data obtained at scales larger than those obtained at the laboratory bench that take into account the influence of hydrodynamics, geochemical reactions including complexation and chelation/adsorption/precipitation/ion exchange/oxidation-reduction/colloid formation and dissolution, and reactions of microbial origin. Furthermore, the coupling of each of these processes and reactions needs to be evaluated experimentally at a scale that produces data that can be used to calibrate numerical models so that they accurately describe field scale system behavior. Establishing the relevant experimental scale for collection of data from coupled processes remains a challenge and will likely be process-dependent, involving iterations of experimentation and data collection at different intermediate scales until the models calibrated with the appropriate data sets achieve an acceptable level of performance. Assuming that geophysicists will soon develop technologies to define geological heterogeneities over a wide range of scales in the subsurface, geochemists need to continue to develop techniques to remotely measure abiotic reactions, while geomicrobiologists need to continue their development of complementary technologies to remotely measure microbial community parameters that define their key functions at a scale that accurately reflects their role in large scale subsurface system behavior. The practical questions that geomicrobiologists must answer in the short term are: 1) What is known about the activities of the dominant microbial populations or those of their closest relatives? 2) Which of these activities is likely to dominate under in situ conditions? In the process of answering these questions, researchers will obtain answers to questions of a more fundamental nature, such as 1) How deep does "active" life extend below the surface of the seafloor and terrestrial subsurface? 2) How are electrons exchanged between microbial cells and solid phase minerals? 3) What is the metabolic state and mechanism of survival of "inactive" life forms in the subsurface? 4) What can genomes of life forms trapped in geological material tell us about evolution of life that current methods cannot? The subsurface environment represents a challenging environment to understand and model. As the need to understand subsurface processes increases and the technologies to characterize them become available, modeling subsurface behavior will approach the level of sophistication of the models used today to predict the behavior of other large scale systems such as the oceans.
NASA Technical Reports Server (NTRS)
Thesken, John C.; Murthy, Pappu L. N.; Phoenix, S. L.; Greene, N.; Palko, Joseph L.; Eldridge, Jeffrey; Sutter, James; Saulsberry, R.; Beeson, H.
2009-01-01
A theoretical investigation of the factors controlling the stress rupture life of the National Aeronautics and Space Administration's (NASA) composite overwrapped pressure vessels (COPVs) continues. Kevlar (DuPont) fiber overwrapped tanks are of particular concern due to their long usage and the poorly understood stress rupture process in Kevlar filaments. Existing long term data show that the rupture process is a function of stress, temperature and time. However, due to the presence of a load sharing liner, the manufacturing induced residual stresses and the complex mechanical response, the state of actual fiber stress in flight hardware and test articles is not clearly known. This paper is a companion to a previously reported experimental investigation and develops a theoretical framework necessary to design full-scale pathfinder experiments and accurately interpret the experimentally observed deformation and failure mechanisms leading up to static burst in COPVs. The fundamental mechanical response of COPVs is described using linear elasticity and thin shell theory and discussed in comparison to existing experimental observations. These comparisons reveal discrepancies between physical data and the current analytical results and suggest that the vessel's residual stress state and the spatial stress distribution as a function of pressure may be completely different from predictions based upon existing linear elastic analyses. The 3D elasticity of transversely isotropic spherical shells demonstrates that an overly compliant transverse stiffness relative to membrane stiffness can account for some of this by shifting a thin shell problem well into the realm of thick shell response. The use of calibration procedures is demonstrated, as calibrated thin shell model results and finite element results are shown to be in good agreement with the experimental results. The successes reported here have led to continuing work with full scale testing of larger NASA COPV hardware.
SAR calibration technology review
NASA Technical Reports Server (NTRS)
Walker, J. L.; Larson, R. W.
1981-01-01
Synthetic Aperture Radar (SAR) calibration technology including a general description of the primary calibration techniques and some of the factors which affect the performance of calibrated SAR systems are reviewed. The use of reference reflectors for measurement of the total system transfer function along with an on-board calibration signal generator for monitoring the temporal variations of the receiver to processor output is a practical approach for SAR calibration. However, preliminary error analysis and previous experimental measurements indicate that reflectivity measurement accuracies of better than 3 dB will be difficult to achieve. This is not adequate for many applications and, therefore, improved end-to-end SAR calibration techniques are required.
Calibration of Elasto-Magnetic Sensors on In-Service Cable-Stayed Bridges for Stress Monitoring.
Cappello, Carlo; Zonta, Daniele; Laasri, Hassan Ait; Glisic, Branko; Wang, Ming
2018-02-05
The recent developments in measurement technology have led to the installation of efficient monitoring systems on many bridges and other structures all over the world. Nowadays, more and more structures have been built and instrumented with sensors. However, calibration and installation of sensors remain challenging tasks. In this paper, we use a case study, Adige Bridge, in order to present a low-cost method for the calibration and installation of elasto-magnetic sensors on cable-stayed bridges. Elasto-magnetic sensors enable monitoring of cable stress. The sensor installation took place two years after the bridge construction. The calibration was conducted in two phases: one in the laboratory and the other one on site. In the laboratory, a sensor was built around a segment of cable that was identical to those of the cable-stayed bridge. Then, the sample was subjected to a defined tension force. The sensor response was compared with the applied load. Experimental results showed that the relationship between load and magnetic permeability does not depend on the sensor fabrication process except for an offset. The determination of this offset required in situ calibration after installation. In order to perform the in situ calibration without removing the cables from the bridge, vibration tests were carried out for the estimation of the cables' tensions. At the end of the paper, we show and discuss one year of data from the elasto-magnetic sensors. Calibration results demonstrate the simplicity of the installation of these sensors on existing bridges and new structures.
Calibration of Elasto-Magnetic Sensors on In-Service Cable-Stayed Bridges for Stress Monitoring
Ait Laasri, Hassan; Glisic, Branko; Wang, Ming
2018-01-01
The recent developments in measurement technology have led to the installation of efficient monitoring systems on many bridges and other structures all over the world. Nowadays, more and more structures have been built and instrumented with sensors. However, calibration and installation of sensors remain challenging tasks. In this paper, we use a case study, Adige Bridge, in order to present a low-cost method for the calibration and installation of elasto-magnetic sensors on cable-stayed bridges. Elasto-magnetic sensors enable monitoring of cable stress. The sensor installation took place two years after the bridge construction. The calibration was conducted in two phases: one in the laboratory and the other one on site. In the laboratory, a sensor was built around a segment of cable that was identical to those of the cable-stayed bridge. Then, the sample was subjected to a defined tension force. The sensor response was compared with the applied load. Experimental results showed that the relationship between load and magnetic permeability does not depend on the sensor fabrication process except for an offset. The determination of this offset required in situ calibration after installation. In order to perform the in situ calibration without removing the cables from the bridge, vibration tests were carried out for the estimation of the cables’ tensions. At the end of the paper, we show and discuss one year of data from the elasto-magnetic sensors. Calibration results demonstrate the simplicity of the installation of these sensors on existing bridges and new structures. PMID:29401751
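The in situ offset determination described in both records relies on vibration tests to estimate cable tension. The abstracts do not say which relation was used; the simplest widely used one is the taut-string formula T = 4 m L² f₁², sketched below with hypothetical cable properties (it neglects sag and bending stiffness).

```python
def cable_tension(mass_per_m, length_m, f1_hz):
    """Taut-string estimate of cable tension from the first natural
    frequency: T = 4 * m * L^2 * f1^2. A simplified model that neglects
    bending stiffness and sag; inputs below are hypothetical."""
    return 4.0 * mass_per_m * length_m ** 2 * f1_hz ** 2

# Hypothetical stay cable: 60 kg/m, 80 m long, first mode at 1.1 Hz
print(f"{cable_tension(60, 80, 1.1) / 1e3:.0f} kN")
```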
A Vision-Based Self-Calibration Method for Robotic Visual Inspection Systems
Yin, Shibin; Ren, Yongjie; Zhu, Jigui; Yang, Shourui; Ye, Shenghua
2013-01-01
A vision-based robot self-calibration method is proposed in this paper to evaluate the kinematic parameter errors of a robot using a visual sensor mounted on its end-effector. This approach can be performed in the industrial field without external, expensive apparatus or an elaborate setup. A robot Tool Center Point (TCP) is defined in the structural model of a line-structured laser sensor and aligned to a reference point fixed in the robot workspace. A mathematical model is established to formulate the misalignment errors with kinematic parameter errors and TCP position errors. Based on the fixed point constraints, the kinematic parameter errors and TCP position errors are identified with an iterative algorithm. Compared to conventional methods, this proposed method eliminates the need for robot base-frame and hand-to-eye calibrations, shortens the error propagation chain, and makes the calibration process more accurate and convenient. A validation experiment is performed on an ABB IRB2400 robot. An optimal configuration for the number and distribution of fixed points in the robot workspace is obtained based on the experimental results. Comparative experiments reveal that there is a significant improvement in the measuring accuracy of the robotic visual inspection system. PMID:24300597
NASA Astrophysics Data System (ADS)
Li, Baoxin; Wang, Dongmei; Lv, Jiagen; Zhang, Zhujun
2006-09-01
In this paper, a flow-injection chemiluminescence (CL) system is proposed for simultaneous determination of Co(II) and Cr(III) with partial least squares calibration. This method is based on the fact that both Co(II) and Cr(III) catalyze the luminol–H2O2 CL reaction, and that their catalytic activities are significantly different under the same reaction conditions. The CL intensity of Co(II) and Cr(III) was measured and recorded at different pH values of the reaction medium, and the obtained data were processed by the chemometric approach of partial least squares. The experimental calibration set was composed of nine sample solutions using an orthogonal calibration design for two-component mixtures. The calibration curve was linear over the concentration ranges of 2 × 10^-7 to 8 × 10^-10 and 2 × 10^-6 to 4 × 10^-9 g/ml for Co(II) and Cr(III), respectively. The proposed method offers the potential advantages of high sensitivity, simplicity and rapidity for Co(II) and Cr(III) determination, and was successfully applied to the simultaneous determination of both analytes in a real water sample.
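A scikit-learn PLSRegression model reproduces the structure of this calibration: a nine-mixture calibration set measured at several pH values, regressed against the two analyte concentrations. The response profiles and concentrations below are synthetic stand-ins, not the paper's data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)

# Synthetic calibration set: 9 mixtures (orthogonal-design-like) measured
# at 5 pH values; each analyte gets a made-up pH response profile.
conc = np.array([[a, b] for a in (1, 5, 9) for b in (1, 5, 9)], float)
profile = np.array([[1.0, 0.8, 0.5, 0.3, 0.1],    # "Co(II)" response vs pH
                    [0.2, 0.4, 0.7, 0.9, 1.0]])   # "Cr(III)" response vs pH
X = conc @ profile + rng.normal(0, 0.02, (9, 5))  # CL intensity matrix

pls = PLSRegression(n_components=2).fit(X, conc)

# Predict both analytes in an "unknown" mixture from its pH-resolved signal
unknown = np.array([3.0, 7.0]) @ profile
print(pls.predict(unknown.reshape(1, -1)))  # ~ [3, 7]
```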
NASA Technical Reports Server (NTRS)
Capone, Francis J.; Bangert, Linda S.; Asbury, Scott C.; Mills, Charles T. L.; Bare, E. Ann
1995-01-01
The Langley 16-Foot Transonic Tunnel is a closed-circuit single-return atmospheric wind tunnel that has a slotted octagonal test section with continuous air exchange. The wind tunnel speed can be varied continuously over a Mach number range from 0.1 to 1.3. Test-section plenum suction is used for speeds above a Mach number of 1.05. Over a period of some 40 years, the wind tunnel has undergone many modifications. During the modifications completed in 1990, a new model support system that increased blockage, new fan blades, a catcher screen for the first set of turning vanes, and process controllers for tunnel speed, model attitude, and jet flow for powered models were installed. This report presents a complete description of the Langley 16-Foot Transonic Tunnel and auxiliary equipment, the calibration procedures, and the results of the 1977 and the 1990 wind tunnel calibration with test section air removal. Comparisons with previous calibrations showed that the modifications made to the wind tunnel had little or no effect on the aerodynamic characteristics of the tunnel. Information required for planning experimental investigations and the use of test hardware and model support systems is also provided.
A real-time camera calibration system based on OpenCV
NASA Astrophysics Data System (ADS)
Zhang, Hui; Wang, Hua; Guo, Huinan; Ren, Long; Zhou, Zuofeng
2015-07-01
Camera calibration is one of the essential steps in computer vision research. This paper describes a real-time camera calibration system based on OpenCV, developed and implemented in the VS2008 environment. Experimental results show that the system achieves simple and fast camera calibration with higher precision than MATLAB, requires no manual intervention, and can be widely used in various computer vision systems.
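A minimal OpenCV chessboard calibration loop of the kind such a system wraps might look as follows; the board size and image names are placeholders, and the real-time system described above adds automation around this core.

```python
import cv2
import numpy as np

pattern = (9, 6)  # inner chessboard corners per row/column (placeholder)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts = [], []
for fname in ["view0.png", "view1.png", "view2.png"]:  # placeholder images
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        continue
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms, "\nintrinsics:\n", K)
```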
Bleeker, H J; Lewin, P A
2000-01-01
A new calibration technique for PVDF ultrasonic hydrophone probes is described. The current implementation of the technique allows determination of the hydrophone frequency response between 2 and 100 MHz and is based on the comparison of theoretically predicted and experimentally determined pressure-time waveforms produced by a focused, circular source. The simulation model was derived from the time domain algorithm that solves the nonlinear KZK (Khokhlov-Zabolotskaya-Kuznetsov) equation describing acoustic wave propagation. The calibration technique data were experimentally verified using independent calibration procedures in the frequency range from 2 to 40 MHz, using a combined time delay spectrometry and reciprocity approach or calibration data provided by the National Physical Laboratory (NPL), UK. The results of verification indicated good agreement between the results obtained using KZK and the above-mentioned independent calibration techniques from 2 to 40 MHz, with a maximum discrepancy of 18% at 30 MHz. The frequency responses obtained using different hydrophone designs, including several membrane and needle probes, are presented, and it is shown that the technique developed provides a desirable tool for independent verification of primary calibration techniques such as those based on optical interferometry. Fundamental limitations of the presented calibration method are also examined.
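The comparison at the heart of this technique, measured hydrophone voltage versus model-predicted pressure, yields a frequency response as a spectral ratio. A simplified sketch of that deconvolution idea (ignoring the windowing and noise weighting a real implementation would need; all waveform values are synthetic):

```python
import numpy as np

def hydrophone_response(measured_v, predicted_p, dt):
    """Estimate a hydrophone's complex frequency response as the ratio of
    the measured voltage spectrum to the model-predicted pressure spectrum
    (a sketch of the comparison step, not the authors' processing)."""
    V = np.fft.rfft(measured_v)
    P = np.fft.rfft(predicted_p)
    f = np.fft.rfftfreq(len(measured_v), dt)
    eps = 1e-12 * np.max(np.abs(P))   # guard against division by ~0
    return f, V / (P + eps)           # response in V/Pa vs frequency

# Hypothetical waveforms sampled at 500 MHz
dt = 2e-9
t = np.arange(4096) * dt
predicted_p = np.sin(2 * np.pi * 5e6 * t) * np.exp(-((t - 4e-6) / 1e-6) ** 2)
measured_v = 40e-9 * predicted_p      # an ideal 40 nV/Pa hydrophone
f, H = hydrophone_response(measured_v, predicted_p, dt)
```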
Experimental Results of Site Calibration and Sensitivity Measurements in OTR for UWB Systems
NASA Astrophysics Data System (ADS)
Viswanadham, Chandana; Rao, P. Mallikrajuna
2017-06-01
System calibration and parameter accuracy measurement of electronic support measures (ESM) systems is a major activity carried out by electronic warfare (EW) engineers. These activities are very critical and need a good understanding of the microwave, antenna, wave propagation, digital and communication domains. EW systems are broadband, built with state-of-the-art electronic hardware, and installed on many varieties of military platforms to guard a country's security from time to time. EW systems operate over wide frequency ranges, typically of the order of thousands of MHz; hence these are ultra wide band (UWB) systems. A few calibration activities are carried out within the system and at the test sites to meet the accuracies of the final specifications. After calibration, parameters are measured for their accuracies either in feed mode, by injecting RF signals into the front end, or in radiation mode, by transmitting RF signals onto the system antenna. To carry out these activities in radiation mode, a calibrated open test range (OTR) is necessary in the frequency band of interest. Thus, site calibration of the OTR must be carried out before taking up system calibration and parameter measurements. This paper presents the experimental results of OTR site calibration and sensitivity measurements of UWB systems in radiation mode.
Jaramillo, Hector E; Gómez, Lessby; García, Jose J
2015-01-01
With the aim to study disc degeneration and the risk of injury during occupational activities, a new finite element (FE) model of the L4-L5-S1 segment of the human spine was developed based on the anthropometry of a typical Colombian worker. Beginning with medical images, the programs CATIA and SOLIDWORKS were used to generate and assemble the vertebrae and create the soft structures of the segment. The software ABAQUS was used to run the analyses, which included a detailed model calibration using the experimental step-wise reduction data for the L4-L5 component, while the L5-S1 segment was calibrated in the intact condition. The range of motion curves, the intradiscal pressure and the lateral bulging under pure moments were considered for the calibration. As opposed to other FE models that include the L5-S1 disc, the model developed in this study considered the regional variations and anisotropy of the annulus as well as a realistic description of the nucleus geometry, which allowed an improved representation of experimental data during the validation process. Hence, the model can be used to analyze the stress and strain distributions in the L4-L5 and L5-S1 discs of workers performing activities such as lifting and carrying tasks.
Thermographic Microstructure Monitoring in Electron Beam Additive Manufacturing.
Raplee, J; Plotkowski, A; Kirka, M M; Dinwiddie, R; Okello, A; Dehoff, R R; Babu, S S
2017-03-03
To reduce the uncertainty of build performance in metal additive manufacturing, robust process monitoring systems that can detect imperfections and improve repeatability are desired. One of the most promising methods for in situ monitoring is thermographic imaging. However, there is a challenge in using this technology due to the difference in surface emittance between the metal powder and solidified part being observed that affects the accuracy of the temperature data collected. The purpose of the present study was to develop a method for properly calibrating temperature profiles from thermographic data to account for this emittance change and to determine important characteristics of the build through additional processing. The thermographic data was analyzed to identify the transition of material from metal powder to a solid as-printed part. A corrected temperature profile was then assembled for each point using calibrations for these surface conditions. Using this data, the thermal gradient and solid-liquid interface velocity were approximated and correlated to experimentally observed microstructural variation within the part. This work shows that by using a method of process monitoring, repeatability of a build could be monitored specifically in relation to microstructure control.
Thermographic Microstructure Monitoring in Electron Beam Additive Manufacturing
Raplee, Jake B.; Plotkowski, Alex J.; Kirka, Michael M.; ...
2017-03-03
To reduce the uncertainty of build performance in metal additive manufacturing, robust process monitoring systems that can detect imperfections and improve repeatability are desired. One of the most promising methods for in-situ monitoring is thermographic imaging. However, there is a challenge in using this technology due to the difference in surface emittance between the metal powder and the solidified part being observed, which affects the accuracy of the temperature data collected. This work developed a method for properly calibrating temperature profiles from thermographic data and then determining important characteristics of the build through additional processing. The thermographic data was analyzed to determine the transition of material from metal powder to a solid as-printed part. A corrected temperature profile was then assembled for each point using calibrations for these surface conditions. Using this data, we calculated the thermal gradient and solid-liquid interface velocity and correlated them to experimentally observed microstructural variation within the part. This work shows that by using this method of process monitoring, the repeatability of a build can be monitored, specifically in relation to microstructure control.
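The emittance-dependent correction described in both records can be illustrated with a single-band Wien-law radiometric model: pixels classified as powder or solid get different emittance values, and apparent (blackbody-equivalent) temperatures are converted to surface temperatures accordingly. The emittances and wavelength below are placeholders, not calibrated values from the study.

```python
import numpy as np

C2 = 1.4388e-2  # second radiation constant, m*K

def true_temperature(T_apparent, emissivity, lam=4.0e-6):
    """Single-band Wien-law emissivity correction:
    1/T = 1/T_app + (lam/c2) * ln(emissivity).
    A generic radiometric sketch; values are placeholders."""
    return 1.0 / (1.0 / T_apparent + (lam / C2) * np.log(emissivity))

T_app = np.full((4, 4), 1500.0)        # apparent temperatures, K
is_solid = np.zeros((4, 4), bool)
is_solid[:, 2:] = True                 # right half already consolidated

eps = np.where(is_solid, 0.25, 0.60)   # placeholder emittances
T = true_temperature(T_app, eps)       # higher where emittance is lower
```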
NASA Astrophysics Data System (ADS)
Hashimoto, Atsushi; Suehara, Ken-Ichiro; Kameoka, Takaharu
To measure the quantitative surface color information of agricultural products, together with ambient information, during cultivation, a color calibration method for digital camera images and a remote monitoring system for color imaging using the Web were developed. Single-lens reflex and web digital cameras were used for the image acquisitions. Images of tomatoes through the post-ripening process were taken by the digital camera both in the standard image acquisition system and under field conditions from morning to evening. Several kinds of images were acquired with the standard RGB color chart set up just behind the tomato fruit on a black matte, and a color calibration was carried out. The influence of sunlight could be experimentally eliminated, and the calibrated color information consistently agreed with the standard values acquired in the system through the post-ripening process. Furthermore, the surface color change of tomatoes on the tree in a greenhouse was remotely monitored during maturation using the digital cameras equipped with the Field Server. The acquired digital color images were sent from the Farm Station to the BIFE Laboratory of Mie University via VPN. The time behavior of the tomato surface color change during the maturing process could be measured using the color parameter calculated from the obtained and calibrated color images, along with the ambient atmospheric record. This study is an important step both in developing surface color analysis for simple and rapid evaluation of crop vigor in the field and in constructing an ambient, networked remote monitoring system for food security, precision agriculture, and agricultural research.
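Chart-based color calibration of this kind is commonly implemented as a least-squares fit of a correction matrix mapping measured patch colors to reference chart values. The sketch below is that generic approach with made-up patch data, not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(3)

# Reference chart patch colors (sRGB) and simulated field measurements of
# the same patches under uncorrected illumination (all values made up).
ref = np.array([[52, 52, 52], [243, 243, 242], [175, 54, 60],
                [70, 148, 73], [56, 61, 150], [231, 199, 31]], float)
meas = ref * [0.8, 0.9, 1.1] + [10, 5, -8] + rng.normal(0, 2, ref.shape)

# Least-squares fit of a 4x3 affine correction: [R G B 1] @ M ~= ref
A = np.hstack([meas, np.ones((len(meas), 1))])
M, *_ = np.linalg.lstsq(A, ref, rcond=None)

corrected = A @ M   # corrected patches now match the chart references
```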
NASA Astrophysics Data System (ADS)
Lim, Sungsoo; Lee, Seohyung; Kim, Jun-geon; Lee, Daeho
2018-01-01
The around-view monitoring (AVM) system is one of the major applications of advanced driver assistance systems and intelligent transportation systems. We propose an on-line calibration method that can compensate for misalignments in AVM systems. Most AVM systems use fisheye undistortion, inverse perspective transformation, and geometrical registration methods. To perform these procedures, the parameters for each process must be known; the procedure by which the parameters are estimated is referred to as the initial calibration. However, when only using the initial calibration data, we cannot compensate for misalignments caused by changes in a car's equilibrium. Even small changes, such as tire pressure levels, passenger weight, or road conditions, can affect a car's equilibrium. Therefore, to compensate for this misalignment, an additional technique is necessary, specifically an on-line calibration method. On-line calibration recalculates the homographies, which can correct any degree of misalignment, using the unique features of ordinary parking lanes. To extract features from the parking lanes, this method uses corner detection and a pattern matching algorithm. From the extracted features, homographies are estimated using random sample consensus (RANSAC) and parameter estimation. Finally, the misaligned epipolar geometries are compensated for via the estimated homographies. Thus, the proposed method can render image planes parallel to the ground. This method does not require any designated patterns and can be used whenever cars are placed in a parking lot. The experimental results show the robustness and efficiency of the method.
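The homography re-estimation step described above maps to a few OpenCV calls: matched parking-lane corner features feed cv2.findHomography with RANSAC, and the result warps the camera view onto the ground plane. All coordinates and matrices below are hypothetical.

```python
import cv2
import numpy as np

# Hypothetical ground-plane coordinates of parking-lane corners (cm) and a
# synthetic "true" homography used to generate their image projections; in
# the real system these matches come from corner detection and pattern
# matching on the parking lanes.
gnd_pts = np.array([[0, 0], [200, 0], [0, 100], [200, 100],
                    [100, 50], [150, 20]], np.float32)
H_true = np.array([[1.2, 0.1, 100], [0.05, 1.4, 400], [1e-4, 2e-4, 1]])
proj = cv2.perspectiveTransform(gnd_pts.reshape(-1, 1, 2).astype(np.float64),
                                np.linalg.inv(H_true))
img_pts = proj.reshape(-1, 2).astype(np.float32)

# RANSAC rejects outlier matches while re-estimating the homography
H, inliers = cv2.findHomography(img_pts, gnd_pts, cv2.RANSAC, 3.0)

# Warp a (placeholder) fisheye-undistorted frame to the top-down view
frame = np.zeros((600, 800, 3), np.uint8)
top_down = cv2.warpPerspective(frame, H, (200, 100))
```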
Zhang, Jiarui; Zhang, Yingjie; Chen, Bo
2017-12-20
The three-dimensional measurement system with a binary defocusing technique is widely applied in diverse fields. The measurement accuracy is mainly determined by the out-of-focus projector calibration accuracy. In this paper, a high-precision out-of-focus projector calibration method is proposed that is based on distortion correction on the projection plane and a nonlinear optimization algorithm. To this end, the paper experimentally establishes that the projector exhibits noticeable distortions outside its focus plane. Based on this principle, the proposed method uses a high-order radial and tangential lens distortion representation on the projection plane to correct the calibration residuals caused by projection distortion. Accurate final parameters of the out-of-focus projector were obtained using a nonlinear optimization algorithm with good initial values, which were provided by coarsely calibrating the parameters of the out-of-focus projector on the focal and projection planes. Finally, the experimental results demonstrate that the proposed method can accurately calibrate an out-of-focus projector, regardless of the amount of defocusing.
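The distortion family fitted on the projection plane is the standard high-order radial plus tangential (Brown-Conrady) model; a sketch of the forward model, with invented coefficients, is below. In a full calibration these coefficients would be refined by nonlinear least squares (e.g., scipy.optimize.least_squares) against the observed residuals.

```python
import numpy as np

def distort(xy, k1, k2, k3, p1, p2):
    """Apply a high-order radial + tangential (Brown-Conrady) distortion
    model to normalized projection-plane coordinates. The coefficients
    here are made up; a calibration would estimate them from residuals."""
    x, y = xy[..., 0], xy[..., 1]
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return np.stack([x_d, y_d], axis=-1)

grid = np.stack(np.meshgrid(np.linspace(-0.5, 0.5, 5),
                            np.linspace(-0.5, 0.5, 5)), axis=-1)
distorted = distort(grid, k1=-0.12, k2=0.03, k3=0.0, p1=1e-3, p2=-5e-4)
```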
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fat’yanov, O. V., E-mail: fatyan1@gps.caltech.edu; Asimow, P. D., E-mail: asimow@gps.caltech.edu
2015-10-15
We describe an accurate and precise calibration procedure for multichannel optical pyrometers such as the 6-channel, 3-ns temporal resolution instrument used in the Caltech experimental geophysics laboratory. We begin with a review of calibration sources for shock temperatures in the 3,000-30,000 K range. High-power, coiled tungsten halogen standards of spectral irradiance appear to be the only practical alternative to NIST-traceable tungsten ribbon lamps, which are no longer available with a large enough calibrated area. However, non-uniform radiance complicates the use of such coiled lamps for reliable and reproducible calibration of pyrometers that employ imaging or relay optics. Careful analysis of documented methods of shock pyrometer calibration against coiled irradiance standard lamps shows that only one technique, not directly applicable in our case, is free of major radiometric errors. We provide a detailed description of the modified Caltech pyrometer instrument and a procedure for its absolute spectral radiance calibration, accurate to ±5%. We employ a designated central area of a 0.7× demagnified image of a coiled-coil tungsten halogen lamp filament, cross-calibrated against a NIST-traceable tungsten ribbon lamp. We give the results of the cross-calibration along with descriptions of the optical arrangement, data acquisition, and processing. We describe a procedure to characterize the difference between the static and dynamic response of amplified photodetectors, allowing time-dependent photodiode correction factors for spectral radiance histories from shock experiments. We validate correct operation of the modified Caltech pyrometer with actual shock temperature experiments on single-crystal NaCl and MgO and obtain very good agreement with the literature data for these substances. We conclude with a summary of the most essential requirements for error-free calibration of a fiber-optic shock-temperature pyrometer using a high-power coiled tungsten halogen irradiance standard lamp.
Esquinas, Pedro L; Tanguay, Jesse; Gonzalez, Marjorie; Vuckovic, Milan; Rodríguez-Rodríguez, Cristina; Häfeli, Urs O; Celler, Anna
2016-12-01
In the nuclear medicine department, the activity of radiopharmaceuticals is measured using dose calibrators (DCs) prior to patient injection. The DC consists of an ionization chamber that measures the current generated by ionizing radiation (emitted from the radiotracer). In order to obtain an activity reading, the current is converted into units of activity by applying an appropriate calibration factor (also referred to as the DC dial setting). Accurate determination of DC dial settings is crucial to ensure that patients receive the appropriate dose in diagnostic scans or radionuclide therapies. The goals of this study were (1) to describe a practical method to experimentally determine dose calibrator settings using a thyroid probe (TP) and (2) to investigate the accuracy, reproducibility, and uncertainties of the method. As an illustration, the TP method was applied to determine 188Re dial settings for two dose calibrator models: Atomlab 100plus and Capintec CRC-55tR. Using the TP to determine dose calibrator settings involved three measurements. First, the energy-dependent efficiency of the TP was determined from energy spectra measurements of two calibration sources (152Eu and 22Na). Second, the gamma emissions from the investigated isotope (188Re) were measured using the TP and its activity was determined using γ-ray spectroscopy methods. Ambient background, scatter, and source-geometry corrections were applied during the efficiency and activity determination steps. Third, the TP-based 188Re activity was used to determine the dose calibrator settings following the calibration curve method [B. E. Zimmerman et al., J. Nucl. Med. 40, 1508-1516 (1999)]. The interobserver reproducibility of TP measurements was determined by the coefficient of variation (COV), and the uncertainties associated with each step of the measuring process were estimated. The accuracy of activity measurements using the proposed method was evaluated by comparing the TP activity estimates of 99mTc, 188Re, 131I, and 57Co samples to high-purity Ge (HPGe) γ-ray spectroscopy measurements. The experimental 188Re dial settings determined with the TP were 76.5 ± 4.8 and 646 ± 43 for the Atomlab 100plus and Capintec CRC-55tR, respectively. In the case of the Atomlab 100plus, the TP-based dial settings improved the accuracy of 188Re activity measurements (confirmed by HPGe measurements) compared with the manufacturer-recommended settings. For the Capintec CRC-55tR, the TP-based settings were in agreement with previous results [B. E. Zimmerman et al., J. Nucl. Med. 40, 1508-1516 (1999)], which demonstrated that manufacturer-recommended settings overestimate 188Re activity by more than 20%. The largest source of uncertainty in the experimentally determined dial settings was the application of a geometry correction factor, followed by the uncertainty of the scatter-corrected photopeak counts and the uncertainty of the TP efficiency calibration experiment. When using the most intense photopeak of the sample's emissions, the TP method yielded accurate (within 5% error) and reproducible (COV = 2%) measurements of the sample's activity. The relative uncertainties associated with such measurements ranged from 6% to 8% (expanded uncertainty at the 95% confidence interval, k = 2). Accurate determination and verification of dose calibrator dial settings can be performed using a thyroid probe in the nuclear medicine department.
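A hedged sketch of the γ-ray spectroscopy activity step described above: activity follows from the background- and scatter-corrected photopeak count rate divided by the detection efficiency, emission probability, and a geometry correction factor. All numbers below are illustrative, not values from the study.

```python
# Sketch: source activity from a scatter-corrected photopeak measurement.
def activity_bq(net_peak_counts, live_time_s, efficiency,
                emission_prob, geometry_corr=1.0):
    """Activity in Bq = corrected count rate / (efficiency * emission
    probability * geometry correction)."""
    rate = net_peak_counts / live_time_s
    return rate / (efficiency * emission_prob * geometry_corr)

# Example with made-up values for the most intense photopeak of a sample:
A = activity_bq(net_peak_counts=5.2e4, live_time_s=300.0,
                efficiency=2.1e-3, emission_prob=0.155, geometry_corr=0.97)
print(f"{A / 1e6:.2f} MBq")
```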
NASA Astrophysics Data System (ADS)
Wimer, N. T.; Mackoweicki, A. S.; Poludnenko, A. Y.; Hoffman, C.; Daily, J. W.; Rieker, G. B.; Hamlington, P.
2017-12-01
Results are presented from a joint computational and experimental research effort focused on understanding and characterizing wildland fire spread at small scales (roughly 1 m to 1 mm) using direct numerical simulations (DNS) with chemical kinetics mechanisms that have been calibrated using data from high-speed laser diagnostics. The simulations are intended to directly resolve, with high physical accuracy, all small-scale fluid dynamic and chemical processes relevant to wildland fire spread. The high fidelity of the simulations is enabled by the calibration and validation of DNS sub-models using data from high-speed laser diagnostics. These diagnostics have the capability to measure temperature and chemical species concentrations, and are used here to characterize evaporation and pyrolysis processes in wildland fuels subjected to an external radiation source. The chemical kinetics code CHEMKIN-PRO is used to study and reduce complex reaction mechanisms for water removal, pyrolysis, and gas phase combustion during solid biomass burning. Simulations are then presented for a gaseous pool fire coupled with the resulting multi-step chemical reaction mechanisms, and the results are connected to the fundamental structure and spread of wildland fires. It is anticipated that the combined computational and experimental approach of this research effort will provide unprecedented access to information about chemical species, temperature, and turbulence during the entire pyrolysis, evaporation, ignition, and combustion process, thereby permitting more complete understanding of the physics that must be represented by coarse-scale numerical models of wildland fire spread.
NASA Astrophysics Data System (ADS)
Houchin, J. S.
2014-09-01
A common problem for the off-line validation of calibration algorithms and algorithm coefficients is being able to run science data through the exact same software used for on-line calibration of that data. The Joint Polar Satellite System (JPSS) program solved part of this problem by making the Algorithm Development Library (ADL) available, which allows the operational algorithm code to be compiled and run on a desktop Linux workstation using flat file input and output. However, this solved only part of the problem, as the toolkit and methods to initiate the processing of data through the algorithms were geared specifically toward the algorithm developer, not the calibration analyst. In algorithm development mode, a limited number of test data sets are staged for the algorithm once and then run through the algorithm over and over as the software is developed and debugged. In calibration analyst mode, we are continually running new data sets through the algorithm, and without additional tools, staging each of those data sets requires significant effort. AeroADL solves this second problem by providing a set of scripts that wrap the ADL tools, offering an efficient means to stage and process an input data set, to override static calibration coefficient look-up tables (LUTs) with experimental versions of those tables, and to manage a library containing multiple versions of each static LUT file in such a way that the correct set of LUTs required for each algorithm is automatically provided without analyst effort. Using AeroADL, The Aerospace Corporation's analyst team has demonstrated the ability to quickly and efficiently perform analysis tasks for both the VIIRS and OMPS sensors with minimal training on the software tools.
A laboratory experiment simulating the dynamics of topographic relief: methodology and results
NASA Astrophysics Data System (ADS)
Crave, A.; Lague, D.; Davy, P.; Bonnet, S.; Laguionie, P.
2002-12-01
Theoretical analysis and numerical models of landscape evolution have advanced several scenarios for the long-term evolution of terrestrial topography. These scenarios require quantitative evaluation. Analyses of topography, sediment fluxes, and the physical mechanisms of erosion and sediment transport can provide some constraints on the range of plausible models. But in natural systems the boundary conditions (tectonic uplift, climate, base level) are often not well constrained and the spatial heterogeneity of substrate, climate, vegetation, and prevalent processes commonly confounds attempts at extrapolation of observations to longer timescales. In the laboratory, boundary conditions are known and heterogeneity and complexity can be controlled. An experimental approach can thus provide valuable constraints on the dynamics of geomorphic systems, provided that (1) the elementary processes are well calibrated and (2) the topography and sediment fluxes are sufficiently well documented. We have built an experimental setup of decimeter scale that is designed to develop a complete drainage network by the growth and propagation of erosion instabilities in response to tectonic and climatic perturbations. Uplift and precipitation rates can be changed over an order of magnitude. Telemetric lasers and 3D stereo-photography allow the precise quantification of the topographic evolution of the experimental surface. In order to calibrate the principal processes of erosion and transport we have used three approaches: (1) theoretical derivation of erosion laws deduced from the geometrical properties of experimental surfaces at steady-state under different rates of tectonic uplift; (2) comparison of the experimental transient dynamics with a numerical simulation model to test the validity of the predicted erosion laws; and (3) detailed analysis of particle detachment and transport in a millimeter sheet flow on a two-meter long flume under precisely controlled water discharge, slope and flow width. The analogy with real geomorphic systems is limited by the imperfect downscaling in both time and space of the experiments. However, these simple experiments have allowed us to probe (1) the importance of a threshold for particle mobilization to the relationship between steady-state elevation and uplift rate, (2) the role of initial drainage network organization in the transient dynamics of tectonically perturbed systems and (3) the sediment flux dynamics of climatically perturbed systems.
ON THE CALIBRATION OF DK-02 AND KID DOSIMETERS (in Estonian)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ehvaert, H.
1963-01-01
For the periodic calibration of the DK-02 and KID dosimeters, the rotating-stand method, which is more advantageous than the usual method, is recommended. The calibration can be accomplished in a strong gamma field, considerably reducing the time necessary for calibration. Using a point source, the dose becomes a simple function of time and geometrical parameters. The experimental values are in good agreement with theoretical values. (tr-auth)
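A minimal sketch of the point-source relation the abstract relies on: for a point gamma source the dose is a simple function of exposure time and geometry (inverse-square distance). The gamma-ray dose constant below is an illustrative value, not one from the original report.

```python
# Sketch: dose from a point gamma source via the inverse-square law.
def dose_mGy(activity_GBq, time_h, distance_m, gamma_const=0.08):
    """Absorbed dose ~ Gamma * A * t / r^2.

    gamma_const is the gamma-ray dose constant in mGy*m^2/(GBq*h);
    the default is illustrative and nuclide-dependent in practice.
    """
    return gamma_const * activity_GBq * time_h / distance_m**2

# Dosimeters on a rotating stand at 0.5 m from a 10 GBq source for 2 hours:
print(dose_mGy(10.0, 2.0, 0.5))
```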
A Dynamic Calibration Method for Experimental and Analytical Hub Load Comparison
NASA Technical Reports Server (NTRS)
Kreshock, Andrew R.; Thornburgh, Robert P.; Wilbur, Matthew L.
2017-01-01
This paper presents the results from an ongoing effort to produce improved correlation between analytical hub force and moment prediction and those measured during wind-tunnel testing on the Aeroelastic Rotor Experimental System (ARES), a conventional rotor testbed commonly used at the Langley Transonic Dynamics Tunnel (TDT). A frequency-dependent transformation between loads at the rotor hub and outputs of the testbed balance is produced from frequency response functions measured during vibration testing of the system. The resulting transformation is used as a dynamic calibration of the balance to transform hub loads predicted by comprehensive analysis into predicted balance outputs. In addition to detailing the transformation process, this paper also presents a set of wind-tunnel test cases, with comparisons between the measured balance outputs and transformed predictions from the comprehensive analysis code CAMRAD II. The modal response of the testbed is discussed and compared to a detailed finite-element model. Results reveal that the modal response of the testbed exhibits a number of characteristics that make accurate dynamic balance predictions challenging, even with the use of the balance transformation.
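A sketch of how such a frequency-dependent hub-to-balance transformation might be applied: predicted hub loads are taken to the frequency domain, multiplied bin-by-bin by the measured transfer matrix, and returned to the time domain. The 6×6 transfer matrix here is a placeholder for the FRFs measured during vibration testing; the sampling rate and load signals are synthetic.

```python
# Sketch: applying a frequency-dependent balance transformation T(f).
import numpy as np

fs = 1000.0                                   # sampling rate, Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)
# Predicted hub loads: 3 forces + 3 moments (synthetic stand-ins).
hub_loads = np.random.default_rng(1).normal(size=(6, t.size))

HUB = np.fft.rfft(hub_loads, axis=1)          # loads per frequency bin
nbin = HUB.shape[1]
# Placeholder FRF matrix; in practice each bin holds the measured 6x6 FRFs.
T = np.tile(np.eye(6, dtype=complex)[..., None], (1, 1, nbin))

BAL = np.einsum('ijk,jk->ik', T, HUB)         # balance output = T(f) @ hub(f)
balance_pred = np.fft.irfft(BAL, n=t.size, axis=1)
print(balance_pred.shape)                     # predicted balance time histories
```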
Estimating Divergence Dates and Substitution Rates in the Drosophila Phylogeny
Obbard, Darren J.; Maclennan, John; Kim, Kang-Wook; Rambaut, Andrew; O’Grady, Patrick M.; Jiggins, Francis M.
2012-01-01
An absolute timescale for evolution is essential if we are to associate evolutionary phenomena, such as adaptation or speciation, with potential causes, such as geological activity or climatic change. Timescales in most phylogenetic studies use geologically dated fossils or phylogeographic events as calibration points, but more recently, it has also become possible to use experimentally derived estimates of the mutation rate as a proxy for substitution rates. The large radiation of drosophilid taxa endemic to the Hawaiian islands has provided multiple calibration points for the Drosophila phylogeny, thanks to the "conveyor belt" process by which this archipelago forms and is colonized by species. However, published date estimates for key nodes in the Drosophila phylogeny vary widely, and many are based on simplistic models of colonization and coalescence or on estimates of island age that are not current. In this study, we use new sequence data from seven species of Hawaiian Drosophila to examine a range of explicit coalescent models and estimate substitution rates. We use these rates, along with a published experimentally determined mutation rate, to date key events in drosophilid evolution. Surprisingly, our estimate for the date for the most recent common ancestor of the genus Drosophila based on mutation rate (25–40 Ma) is closer to being compatible with independent fossil-derived dates (20–50 Ma) than are most of the Hawaiian-calibration models and also has smaller uncertainty. We find that Hawaiian-calibrated dates are extremely sensitive to model choice and give rise to point estimates that range between 26 and 192 Ma, depending on the details of the model. Potential problems with the Hawaiian calibration may arise from systematic variation in the molecular clock due to the long generation time of Hawaiian Drosophila compared with other Drosophila and/or uncertainty in linking island formation dates with colonization dates. As either source of error will bias estimates of divergence time, we suggest mutation rate estimates be used until better models are available. PMID:22683811
van Steenbergen, Henk; Bocanegra, Bruno R
2016-12-01
In a recent letter, Plant (2015) reminded us that proper calibration of our laboratory experiments is important for the progress of psychological science. Therefore, carefully controlled laboratory studies are argued to be preferred over Web-based experimentation, in which timing is usually more imprecise. Here we argue that there are many situations in which the timing of Web-based experimentation is acceptable and that online experimentation provides a very useful and promising complementary toolbox to available lab-based approaches. We discuss examples in which stimulus calibration or calibration against response criteria is necessary and situations in which this is not critical. We also discuss how online labor markets, such as Amazon's Mechanical Turk, allow researchers to acquire data in more diverse populations and to test theories along more psychological dimensions. Recent methodological advances that have produced more accurate browser-based stimulus presentation are also discussed. In our view, online experimentation is one of the most promising avenues to advance replicable psychological science in the near future.
High-temperature sensor instrumentation with a thin-film-based sapphire fiber.
Guo, Yuqing; Xia, Wei; Hu, Zhangzhong; Wang, Ming
2017-03-10
A novel sapphire fiber-optic high-temperature sensor has been designed and fabricated based on blackbody radiation theory. Metallic molybdenum has been used as the film material to develop the blackbody cavity, owing to its relatively high melting point compared to that of sapphire. More importantly, the fabrication process for the blackbody cavity is simple, efficient, and economical. Thermal radiation emitted from such a blackbody cavity is transmitted via optical fiber to a remote place for detection. The operating principle, the sensor structure, and the fabrication process are described here in detail. The developed high-temperature sensor was calibrated against a calibration blackbody furnace at temperatures from 900°C to 1200°C and tested in a sapphire crystal growth furnace up to 1880°C. The experimental results of our system agree well with those from a commercial Rayteck MR1SCCF infrared pyrometer, with a maximum residual of approximately 5°C, paving the way for high-accuracy temperature measurement, especially in extremely harsh environments.
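A sketch of the blackbody-radiation principle behind such a sensor: inverting Planck's law to recover the cavity temperature from the spectral radiance measured at the detection wavelength. The wavelength and radiance values are illustrative assumptions, not the sensor's actual design parameters.

```python
# Sketch: brightness temperature by inverting Planck's law.
import numpy as np

h, c, k = 6.62607e-34, 2.99792458e8, 1.380649e-23   # SI constants

def planck_radiance(T, lam):
    """Spectral radiance (W sr^-1 m^-3) of a blackbody at T (K), lam (m)."""
    return 2 * h * c**2 / lam**5 / np.expm1(h * c / (lam * k * T))

def brightness_temperature(L, lam):
    """Invert Planck's law: temperature from measured spectral radiance."""
    return (h * c / (lam * k)) / np.log(1.0 + 2 * h * c**2 / (lam**5 * L))

lam = 0.85e-6                                       # detection wavelength (assumed)
L = planck_radiance(1473.15, lam)                   # cavity at 1200 degC
print(brightness_temperature(L, lam) - 273.15)      # recovers ~1200 degC
```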
Wavelength calibration of x-ray imaging crystal spectrometer on Joint Texas Experimental Tokamak
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, W.; Chen, Z. Y., E-mail: zychen@hust.edu.cn; Jin, W.
2014-11-15
The wavelength calibration of an x-ray imaging crystal spectrometer is a key issue for measurements of plasma rotation. Because no standard radiation source is available near 3.95 Å and no other diagnostic can measure the core rotation for inter-calibration, an indirect method using the tokamak plasma itself has been applied on the Joint Texas Experimental Tokamak. It is found that the core toroidal rotation velocity is not zero during the locked-mode phase. This is consistent with the observation of small oscillations on soft x-ray signals and electron cyclotron emission during the locked-mode phase.
Conical Probe Calibration and Wind Tunnel Data Analysis of the Channeled Centerbody Inlet Experiment
NASA Technical Reports Server (NTRS)
Truong, Samson Siu
2011-01-01
For a multi-hole test probe undergoing wind tunnel tests, the resulting data need to be analyzed for any significant trends. These trends include relating the pressure distributions, the geometric orientation, and the local velocity vector to one another. However, experimental runs always involve some error. As a result, a calibration procedure is required to compensate for it; in this case, the error consists of misalignment bias angles resulting from the distortion associated with the angularity of the test probe or the local velocity vector. Through the series of calibration steps presented here, the angular biases are determined and removed from the data sets. By removing the misalignment, smoother pressure distributions contribute to more accurate experimental results, which in turn can be compared to theoretical and actual in-flight results to identify any similarities. Error analyses are also performed to verify the accuracy of the calibration error reduction. The resulting calibrated data will be implemented into an in-flight RTF script that will output critical flight parameters during future CCIE experimental test runs. All of these tasks are associated with, and contribute to, the NASA Dryden Flight Research Center's F-15B Research Testbed's Small Business Innovation Research project on the Channeled Centerbody Inlet Experiment.
Primary standardization of 57Co.
Koskinas, Marina F; Moreira, Denise S; Yamazaki, Ione M; de Toledo, Fábio; Brancaccio, Franco; Dias, Mauro S
2010-01-01
This work describes the method developed by the Nuclear Metrology Laboratory (LMN) at IPEN, São Paulo, Brazil, for the standardization of a 57Co radioactive solution. Cobalt-57 is a radionuclide used for calibrating gamma-ray and X-ray spectrometers, as well as a gamma reference source for dose calibrators used in nuclear medicine services. Two 4πβ-γ coincidence systems were used to perform the standardization: the first used a 4π(PC) counter coupled to a pair of 76 mm × 76 mm NaI(Tl) scintillators for detecting gamma-rays; the other used an HPGe spectrometer for gamma detection. The measurements were performed by selecting a gamma-ray window comprising the (122 keV + 136 keV) total absorption energy peaks in the NaI(Tl) and selecting the total absorption peak of 122 keV in the germanium detector. The electronic system used the TAC method developed at the LMN for registering the observed events. The methodology recently developed by the LMN for simulating all detection processes in a 4πβ-γ coincidence system by means of the Monte Carlo technique was applied, and the behavior of the extrapolation curve was compared to experimental data. The final activity obtained by the Monte Carlo calculation agrees with the experimental results within the experimental uncertainty. Copyright 2009 Elsevier Ltd. All rights reserved.
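A hedged sketch of the efficiency-extrapolation idea behind 4πβ-γ coincidence standardization: the observed combined rate Nβ·Nγ/Nc is fitted against the inefficiency parameter (1-εβ)/εβ, with εβ = Nc/Nγ, and extrapolated to εβ → 1 to give the activity. The count rates and slope below are synthetic illustrations, not LMN data.

```python
# Sketch: linear extrapolation of a 4π β-γ coincidence efficiency curve.
import numpy as np

N0 = 5000.0                                # synthetic "true" activity, s^-1
eff_b = np.linspace(0.75, 0.95, 8)         # beta efficiencies varied in the experiment
x = (1 - eff_b) / eff_b                    # inefficiency parameter (1-εβ)/εβ
y = N0 * (1 + 0.05 * x)                    # observed Nβ·Nγ/Nc; the slope mimics
                                           # decay-scheme effects (ideal case is flat)

slope, intercept = np.polyfit(x, y, 1)     # the extrapolation curve
print(intercept)                           # activity at x = 0, i.e. εβ -> 1
```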
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wendt, Fabian F; Robertson, Amy N; Jonkman, Jason
During the course of the Offshore Code Comparison Collaboration, Continued, with Correlation (OC5) project, which focused on the validation of numerical methods through comparison against tank test data, the authors created a numerical FAST model of the 1:50-scale DeepCwind semisubmersible system that was tested at the Maritime Research Institute Netherlands ocean basin in 2013. This paper discusses several model calibration studies that were conducted to identify model adjustments that improve the agreement between the numerical simulations and the experimental test data. These calibration studies cover wind-field-specific parameters (coherence, turbulence), hydrodynamic and aerodynamic modeling approaches, as well as rotor model (blade-pitch and blade-mass imbalances) and tower model (structural tower damping coefficient) adjustments. These calibration studies were conducted based on relatively simple calibration load cases (wave only/wind only). The agreement between the final FAST model and experimental measurements is then assessed based on more-complex combined wind and wave validation cases.
High-efficiency non-uniformity correction for wide dynamic linear infrared radiometry system
NASA Astrophysics Data System (ADS)
Li, Zhou; Yu, Yi; Tian, Qi-Jie; Chang, Song-Tao; He, Feng-Yun; Yin, Yan-He; Qiao, Yan-Feng
2017-09-01
Several different integration times are always set for a wide-dynamic-range, linear, continuously variable integration time infrared radiometry system; therefore, traditional calibration-based non-uniformity correction (NUC) is usually conducted one integration time at a time and requires several calibration sources, which makes calibration and the NUC process time-consuming. In this paper, the difference in NUC coefficients between different integration times is discussed, and a novel NUC method called high-efficiency NUC, which builds on traditional calibration-based non-uniformity correction, is proposed. It obtains the correction coefficients for all integration times over the whole linear dynamic range only by recording three different images of a standard blackbody. First, the mathematical procedure of the proposed non-uniformity correction method is validated, and then its performance is demonstrated on a 400 mm diameter ground-based infrared radiometry system. Experimental results show that the mean value of the Normalized Root Mean Square (NRMS) error is reduced from 3.78% to 0.24% by the proposed method. In addition, the results at 4 ms and 70 °C prove that this method has higher accuracy than traditional calibration-based NUC, while at other integration times and temperatures a good correction effect is still achieved. Moreover, it greatly reduces the number of correction integration-time and temperature sampling points, has good real-time performance, and is suitable for field measurement.
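A minimal sketch of the classical two-point calibration-based NUC that the proposed high-efficiency method builds on: per-pixel gain and offset are derived from two uniform blackbody images and applied to raw frames. The detector model and numbers are synthetic.

```python
# Sketch: two-point non-uniformity correction from two blackbody images.
import numpy as np

rng = np.random.default_rng(2)
shape = (256, 320)
gain_true = 1 + 0.05 * rng.normal(size=shape)       # synthetic pixel responsivity
offset_true = 20 * rng.normal(size=shape)           # synthetic pixel offsets

def raw(flux):
    """Detector model: counts = per-pixel gain * flux + per-pixel offset."""
    return gain_true * flux + offset_true

low, high = raw(1000.0), raw(4000.0)                # two uniform blackbody references
gain = (4000.0 - 1000.0) / (high - low)             # per-pixel correction gain
offset = 1000.0 - gain * low                        # per-pixel correction offset

corrected = gain * raw(2500.0) + offset             # a uniform scene after NUC
print(corrected.std())                              # ~0: non-uniformity removed
```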
Apparatus and method for monitoring the intensities of charged particle beams
Varma, Matesh N.; Baum, John W.
1982-11-02
Charged particle beam monitoring means (40) are disposed in the path of a charged particle beam (44) in an experimental device (10). The monitoring means comprise a beam monitoring component (42) which is operable to prevent passage of a portion of beam (44), while concomitantly permitting passage of another portion thereof (46) for incidence in an experimental chamber (18), and providing a signal (I.sub.m) indicative of the intensity of the beam portion which is not passed. Calibration means (36) are disposed in the experimental chamber in the path of the said another beam portion and are operable to provide a signal (I.sub.f) indicative of the intensity thereof. Means (41 and 43) are provided to determine the ratio (R) between said signals whereby, after suitable calibration, the calibration means may be removed from the experimental chamber and the intensity of the said another beam portion determined by monitoring of the monitoring means signal, per se.
NASA Astrophysics Data System (ADS)
Mikhailovna Smolenskaya, Natalia; Vladimirovich Smolenskii, Victor; Vladimirovich Korneev, Nicholas
2018-02-01
The work is devoted to the substantiation and practical implementation of a new approach for estimating the change in internal energy from pressure and volume. The pressure is measured with a calibrated sensor. The change in volume inside the cylinder is determined from the position of the piston, which is precisely determined by the angle of rotation of the crankshaft. On the basis of the proposed approach, the thermodynamic efficiency of the working process of spark-ignition engines running on natural gas with the addition of hydrogen was estimated. Experimental studies were carried out on a single-cylinder UIT-85 unit. Their analysis showed an increase in the thermodynamic efficiency of the working process when hydrogen is added to compressed natural gas (CNG). The results obtained make it possible to determine the heat-release characteristic from the analysis of experimental data. The effect of hydrogen addition on the CNG combustion process is estimated.
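A hedged sketch of the pressure-volume approach described above: for an ideal gas the change in internal energy can be estimated as dU = d(pV)/(γ-1), and the apparent heat release then follows from the first law, dQ = dU + p dV. The crank-angle kinematics, pressure trace, and γ below are illustrative, not UIT-85 data.

```python
# Sketch: internal energy change and apparent heat release from p-V data.
import numpy as np

theta = np.linspace(-60, 60, 241) * np.pi / 180     # crank angle, rad
V = 5e-5 * (1 + 4 * (1 - np.cos(theta)))            # toy cylinder volume, m^3
p = 2e6 * (V / V[0]) ** -1.3                        # toy polytropic pressure, Pa
gamma = 1.3                                         # assumed specific-heat ratio

dU = np.gradient(p * V, theta) / (gamma - 1)        # dU/dtheta, J/rad
dQ = dU + p * np.gradient(V, theta)                 # apparent heat release, J/rad
# For this purely polytropic trace dQ ~ 0; measured combustion data would
# show the heat-release characteristic instead.
print(np.abs(dQ).max())
```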
Calibration of micromechanical parameters for DEM simulations by using the particle filter
NASA Astrophysics Data System (ADS)
Cheng, Hongyang; Shuku, Takayuki; Thoeni, Klaus; Yamamoto, Haruyuki
2017-06-01
The calibration of DEM models is typically accomplished by trial and error. However, this procedure lacks objectivity and involves several uncertainties. To deal with these issues, the particle filter is employed as a novel approach to calibrate DEM models of granular soils. The posterior probability distribution of the micro-parameters that give numerical results in good agreement with the experimental response of a Toyoura sand specimen is approximated by independent model trajectories, referred to as 'particles', based on Monte Carlo sampling. The soil specimen is modeled by polydisperse packings with different numbers of spherical grains. Prepared in 'stress-free' states, the packings are subjected to triaxial quasistatic loading. Given the experimental data, the posterior probability distribution is incrementally updated until convergence is reached. The resulting 'particles' with higher weights are identified as the calibration results. The evolution of the weighted averages and the posterior probability distribution of the micro-parameters is plotted to show the advantage of using a particle filter, i.e., multiple solutions are identified for each parameter with known probabilities of reproducing the experimental response.
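A minimal sketch of one particle-filter weighting step under stated assumptions: micro-parameters are sampled, each 'particle' runs the model (a mock stand-in here, since a real DEM run is expensive), and weights are updated from the Gaussian misfit to the experimental stress-strain response. Parameter names and the mock response are hypothetical.

```python
# Sketch: importance weighting of sampled micro-parameters against data.
import numpy as np

rng = np.random.default_rng(3)
n_particles = 500
# Hypothetical micro-parameters: contact stiffness and friction coefficient.
particles = np.column_stack([rng.uniform(1e8, 1e9, n_particles),
                             rng.uniform(0.1, 0.7, n_particles)])

def mock_dem(params, strain):
    """Stand-in for a DEM triaxial simulation returning deviatoric stress, Pa."""
    k, mu = params
    return (k / 1e9) * mu * 200e3 * (1 - np.exp(-50 * strain))

strain = np.linspace(0, 0.05, 20)
observed = mock_dem([5e8, 0.45], strain) + rng.normal(0, 1e3, strain.size)

sigma_obs = 2e3                                     # assumed measurement noise, Pa
weights = np.ones(n_particles)
for i, prm in enumerate(particles):                 # Gaussian likelihood update
    misfit = mock_dem(prm, strain) - observed
    weights[i] = np.exp(-0.5 * np.sum((misfit / sigma_obs) ** 2))
weights /= weights.sum()

# Highest-weight 'particle' and weighted (posterior-mean) parameters:
print(particles[np.argmax(weights)], weights @ particles)
```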
Landsat-7 Enhanced Thematic Mapper plus radiometric calibration
Markham, B.L.; Boncyk, Wayne C.; Helder, D.L.; Barker, J.L.
1997-01-01
Landsat-7 is currently being built and tested for launch in 1998. The Enhanced Thematic Mapper Plus (ETM+) sensor for Landsat-7, a derivative of the highly successful Thematic Mapper (TM) sensors on Landsats 4 and 5, and the Landsat-7 ground system are being built to provide enhanced radiometric calibration performance. In addition, regular vicarious calibration campaigns are being planned to provide additional information for calibration of the ETM+ instrument. The primary upgrades to the instrument include the addition of two solar calibrators: the full aperture solar calibrator, a deployable diffuser, and the partial aperture solar calibrator, a passive device that allows the ETM+ to image the sun. The ground processing incorporates for the first time an off-line facility, the Image Assessment System (IAS), to perform calibration, evaluation and analysis. Within the IAS, processing capabilities include radiometric artifact characterization and correction, radiometric calibration from the multiple calibrator sources, inclusion of results from vicarious calibration and statistical trending of calibration data to improve calibration estimation. The Landsat Product Generation System, the portion of the ground system responsible for producing calibrated products, will incorporate the radiometric artifact correction algorithms and will use the calibration information generated by the IAS. This calibration information will also be supplied to ground processing systems throughout the world.
Pacilio, M; Basile, C; Shcherbinin, S; Caselli, F; Ventroni, G; Aragno, D; Mango, L; Santini, E
2011-06-01
Positron emission tomography (PET) and single-photon emission computed tomography (SPECT) imaging play an important role in the segmentation of functioning parts of organs or tumours, but accurate and reproducible delineation is still a challenging task. In this work, an innovative iterative thresholding method for tumour segmentation has been proposed and implemented for a SPECT system. This method, which is based on experimental threshold-volume calibrations, also implements the recovery coefficients (RC) of the imaging system, so it has been called the recovering iterative thresholding method (RIThM). The possibility of employing Monte Carlo (MC) simulations for system calibration was also investigated. The RIThM is an iterative algorithm coded in MATLAB: after an initial rough estimate of the volume of interest, the following calculations are repeated: (i) the corresponding source-to-background ratio (SBR) is measured and corrected by means of the RC curve; (ii) the threshold corresponding to the amended SBR value and the volume estimate is found using threshold-volume data; (iii) a new volume estimate is obtained by image thresholding. The process continues until convergence. The RIThM was implemented for an Infinia Hawkeye 4 (GE Healthcare) SPECT/CT system, using a Jaszczak phantom and several test objects. Two MC codes were tested to simulate the calibration images: SIMIND and SimSet. For validation, test images consisting of hot spheres and some anatomical structures of the Zubal head phantom were simulated with the SIMIND code. Additional test objects (flasks and vials) were also imaged experimentally. Finally, the RIThM was applied to evaluate three cases of brain metastases and two cases of high-grade gliomas. Comparing experimental thresholds and those obtained by MC simulations, a maximum difference of about 4% was found, within the errors (±2% and ±5%, for volumes ≥ 5 ml and < 5 ml, respectively). Also for the RC data, the comparison showed differences (up to 8%) within the assigned error (±6%). An ANOVA test demonstrated that the calibration results (in terms of thresholds or RCs at various volumes) obtained by MC simulations were indistinguishable from those obtained experimentally. The accuracy in volume determination for the simulated hot spheres was between -9% and 15% in the range 4-270 ml, whereas for volumes less than 4 ml (in the range 1-3 ml) the difference increased abruptly, reaching values greater than 100%. For the Zubal head phantom, errors ranged between 9% and 18%. For the experimental test images, the accuracy was within ±10% for volumes in the range 20-110 ml. A preliminary test on patients demonstrated the suitability of the method in a clinical setting. MC-guided delineation of tumour volume may reduce the acquisition time required for the experimental calibration. Analysis of images of several simulated and experimental test objects, the Zubal head phantom, and clinical cases demonstrated the robustness, suitability, accuracy, and speed of the proposed method. Nevertheless, studies concerning tumours of irregular shape and/or nonuniform distribution of the background activity are still in progress.
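A schematic of the iterative loop (i)-(iii) described above, written in Python rather than the authors' MATLAB. The threshold-volume and recovery-coefficient curves are hypothetical stand-ins for the experimental SPECT calibrations; only the control flow reflects the abstract.

```python
# Sketch: recovering iterative thresholding on a toy 3D image.
import numpy as np

def rc(volume_ml):
    """Hypothetical recovery-coefficient curve vs. object volume."""
    return volume_ml / (volume_ml + 2.0)

def threshold_frac(sbr, volume_ml):
    """Hypothetical threshold-volume calibration (fraction of max counts)."""
    return 0.35 + 0.25 / sbr + 0.5 / volume_ml

def rithm(image, background, voxel_ml, v0=10.0, tol=0.01, max_iter=50):
    volume = v0                                  # (0) rough initial volume, ml
    for _ in range(max_iter):
        sbr = image.max() / background / rc(volume)       # (i) RC-corrected SBR
        thr = threshold_frac(sbr, volume) * image.max()   # (ii) threshold lookup
        new_volume = np.count_nonzero(image >= thr) * voxel_ml  # (iii) re-segment
        if abs(new_volume - volume) / volume < tol:       # convergence check
            return new_volume, thr
        volume = new_volume
    return volume, thr

# Toy image: a hot sphere over a uniform background.
x = np.linspace(-1, 1, 64)
X, Y, Z = np.meshgrid(x, x, x)
img = 10.0 * (X**2 + Y**2 + Z**2 < 0.25) + 1.0
print(rithm(img, background=1.0, voxel_ml=0.05))
```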
NASA Astrophysics Data System (ADS)
Abdo Yassin, Fuad; Wheater, Howard; Razavi, Saman; Sapriza, Gonzalo; Davison, Bruce; Pietroniro, Alain
2015-04-01
The credible identification of vertical and horizontal hydrological components and their associated parameters is very challenging (if not impossible) when the model is constrained only by streamflow data, especially in regions where vertical processes significantly dominate horizontal processes. The prairie areas of the Saskatchewan River basin, a major water system in Canada, demonstrate such behavior: hydrologic connectivity and vertical fluxes there are mainly controlled by the amount of surface and sub-surface water storage. In this study, we develop a framework for distributed hydrologic model identification and calibration that jointly constrains the model response (i.e., streamflows) and a set of model state variables (i.e., water storages) to observations. This framework is set up as a multi-objective optimization, where multiple performance criteria are defined and used to simultaneously evaluate the fidelity of the model to streamflow observations and to observed (estimated) changes of water storage in the gridded landscape over daily and monthly time scales. The time series of estimated changes in total water storage (including soil, canopy, snow, and pond storage) used in this study were derived from an experimental study enhanced by information obtained from the GRACE satellite. We test this framework on the calibration of a land surface scheme-hydrology model, called MESH (Modélisation Environmentale Communautaire - Surface and Hydrology), for the Saskatchewan River basin. Pareto Archived Dynamically Dimensioned Search (PA-DDS) is used as the multi-objective optimization engine. The significance of the developed framework is demonstrated by comparison with results obtained through a conventional calibration approach against streamflow observations. Incorporating water storage data into the model identification process can constrain the posterior parameter space more tightly, evaluate the model fidelity more comprehensively, and yield more credible predictions.
Calibration of cathode strip gains in multiwire drift chambers of the GlueX experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berdnikov, V. V.; Somov, S. V.; Pentchev, L.
A technique for calibrating cathode strip gains in multiwire drift chambers of the GlueX experiment is described. The accuracy of the technique is estimated based on Monte Carlo generated data with known gain coefficients in the strip signal channels. One of the four detector sections has been calibrated using cosmic rays. Results of drift chamber calibration on the accelerator beam upon inclusion in the GlueX experimental setup are presented.
Royo Sánchez, Ana Cristina; Aguilar Martín, Juan José; Santolaria Mazo, Jorge
2014-12-01
Motion capture systems are often used for checking and analyzing human motion in biomechanical applications. It is important, in this context, that the systems provide the best possible accuracy; among existing capture systems, optical systems are those with the highest accuracy. In this paper, the development of a new calibration procedure for optical human motion capture systems is presented, and its performance and effectiveness are checked by experimental validation. The new calibration procedure consists of two stages. In the first stage, initial estimates of the intrinsic and extrinsic parameters are sought using the camera calibration method proposed by Tsai; these parameters are determined from the camera characteristics, the spatial position of the camera, and the center of the capture volume. In the second stage, a simultaneous nonlinear optimization of all parameters is performed to identify the optimal values, which minimize the objective function. The objective function, in this case, combines two errors: the distance error between two markers placed on a wand, and the position and orientation error of the retroreflective markers of a static calibration object. The true coordinates of the two objects are calibrated on a coordinate measuring machine (CMM). The OrthoBio system is used to validate the new calibration procedure. The resulting errors are 90% lower than those from the previous calibration software and broadly comparable with results from a similarly configured Vicon system.
Modelling mono-digestion of grass silage in a 2-stage CSTR anaerobic digester using ADM1.
Thamsiriroj, T; Murphy, J D
2011-01-01
This paper examines 174 days of experimental data and modelling of mono-digestion of grass silage in a two-stage wet process with recirculation of liquor; the two vessels have an effective volume of 312 L each. The organic loading rate is initiated at 0.5 kg VS m⁻³ d⁻¹ (first 74 days) and subsequently increased to 1 kg VS m⁻³ d⁻¹. The experimental data were used to generate a mathematical model (ADM1), which was calibrated over the first 74 days of operation. Good agreement with experimental data was found for the subsequent 100 days. Results of the model suggest starting the process without recirculation, thus building up the solids content of the liquor. As the level of VFA increases, recirculation should be employed to control VFA; recirculation also controls solids content and pH. Methane production was estimated at 88% of maximum theoretical production. Copyright © 2010 Elsevier Ltd. All rights reserved.
Bernard J. Wood Receives 2013 Harry H. Hess Medal: Citation
NASA Astrophysics Data System (ADS)
Hofmann, Albrecht W.
2014-01-01
As Harry Hess recognized over 50 years ago, mantle melting is the fundamental motor for planetary evolution and differentiation. Melting generates the major divisions of the crust, mantle, and core. The distribution of chemical elements between solids, melts, and gaseous phases is fundamental to understanding these differentiation processes. Bernie Wood, together with Jon Blundy, has combined experimental petrology and physicochemical theory to revolutionize the understanding of the distribution of trace elements between melts and solids in the Earth. Knowledge of these distribution laws allows the reconstruction of the source compositions of the melts (deep in Earth's interior) from their abundances in volcanic rocks. Bernie's theoretical treatment relates the elastic strain of the lattice caused by the substitution of a trace element in a crystal to the ionic radius and charge of this element. This theory, and its experimental calibrations, brought order to a literature of badly scattered, rather chaotic experimental data that had allowed no satisfactory quantitative modeling of melting processes in the mantle.
[A plane-based hand-eye calibration method for surgical robots].
Zeng, Bowei; Meng, Fanle; Ding, Hui; Liu, Wenbo; Wu, Di; Wang, Guangzhi
2017-04-01
In order to calibrate the hand-eye transformation between a surgical robot and a laser range finder (LRF), a calibration algorithm based on a planar template was designed. A mathematical model of the planar template is given and the approach to solving the equations is derived. To address measurement errors in a practical system, we propose a new algorithm for selecting coplanar data, which can effectively eliminate data with considerable measurement error and thereby improve the calibration accuracy. Furthermore, three orthogonal planes were used to improve the calibration accuracy, with a nonlinear optimization applied to the hand-eye calibration. To verify the calibration precision, we used the LRF to measure fixed points from different directions and a cuboid's surfaces. Experimental results indicated that the precision of the single planar template method was (1.37 ± 0.24) mm, and that of the three orthogonal planes method was (0.37 ± 0.05) mm. Moreover, the mean fiducial registration error (FRE) of three-dimensional (3D) points was 0.24 mm and the mean target registration error (TRE) was 0.26 mm. The maximum angle measurement error was 0.4°. Experimental results show that the method presented in this paper is effective, achieves high accuracy, and can meet the requirements of precise surgical robot localization.
Tomographic methods in flow diagnostics
NASA Technical Reports Server (NTRS)
Decker, Arthur J.
1993-01-01
This report presents a viewpoint of tomography that should be well adapted to currently available optical measurement technology as well as the needs of computational and experimental fluid dynamicists. The goals in mind are to record data with the fastest optical array sensors; to process the data with the fastest parallel processing technology available for small computers; and to generate results for both experimental and theoretical data. An in-depth example treats interferometric data as it might be recorded in an aeronautics test facility, but the results are applicable whenever fluid properties are to be measured or applied from projections of those properties. The paper discusses both computed and neural-net calibration tomography. The report also contains an overview of key definitions and computational methods, key references, computational problems such as ill-posedness, artifacts, and missing data, and some possible and current research topics.
Generic System for Remote Testing and Calibration of Measuring Instruments: Security Architecture
NASA Astrophysics Data System (ADS)
Jurčević, M.; Hegeduš, H.; Golub, M.
2010-01-01
Testing and calibration of laboratory instruments and reference standards is a routine activity and a resource- and time-consuming process. Since many modern instruments include communication interfaces, it is possible to create a remote calibration system. This approach addresses a wide range of possible applications and permits driving a number of different devices. On the other hand, the remote calibration process involves a number of security issues arising from the recommendations specified in the ISO/IEC 17025 standard, since the process is not under the total control of the calibration laboratory personnel who will sign the calibration certificate. This approach implies that the traceability and integrity of the calibration process depend directly on the collected measurement data. Reliable and secure remote control and monitoring of instruments is therefore a crucial aspect of an internet-enabled calibration procedure.
Möltgen, C-V; Puchert, T; Menezes, J C; Lochmann, D; Reich, G
2012-04-15
Film coating of tablets is a multivariate pharmaceutical unit operation. In this study an innovative in-line Fourier-Transform Near-Infrared Spectroscopy (FT-NIRS) application is described which enables real-time monitoring of a full industrial-scale pan coating process of heart-shaped tablets. The tablets were coated with a thin hydroxypropyl methylcellulose (HPMC) film of up to approximately 28 μm on the tablet face, as determined by SEM, corresponding to a weight gain of 2.26%. For a better understanding of the aqueous coating process, the NIR probe was positioned inside the rotating tablet bed. Five full-scale experimental runs were performed to evaluate the impact of process variables such as pan rotation, exhaust air temperature, spray rate, and pan load, and to develop robust and selective quantitative calibration models for the real-time determination of both coating growth and tablet moisture content. Principal Component (PC) score plots allowed each coating step, namely preheating, spraying, and drying, to be distinguished and the dominating factors and their spectral effects to be identified (e.g., temperature, moisture, coating growth, change of tablet bed density, and core/coat interactions). The distinct separation of HPMC coating growth and tablet moisture in different PCs enabled real-time in-line monitoring of both attributes. A PLS calibration model based on Karl Fischer reference values allowed the tablet moisture trajectory to be determined throughout the entire coating process. A one-latent-variable iPLS weight gain calibration model, with calibration samples from process stages dominated by the coating growth (i.e., ≥30% of the theoretically applied amount of coating), was sufficiently selective and accurate to predict the progress of the thin HPMC coating layer. At-line NIR Chemical Imaging (NIR-CI) in combination with PLS Discriminant Analysis (PLSDA) verified the HPMC coating growth and physical changes at the core/coat interface during the initial stages of the coating process. In addition, inter- and intra-tablet coating variability throughout the process could be assessed. These results clearly demonstrate that in-line NIRS and at-line NIR-CI can be applied as complementary PAT tools to monitor a challenging pan coating process. Copyright © 2012 Elsevier B.V. All rights reserved.
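A minimal sketch of the kind of PLS calibration used here: spectra (X) are regressed against a reference property (y), as with the Karl Fischer moisture values above. The data are synthetic and scikit-learn's PLSRegression stands in for whatever chemometrics package was actually used.

```python
# Sketch: PLS calibration of NIR spectra against a reference property.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(4)
wavelengths = np.linspace(1100, 2500, 350)              # nm, typical NIR range
moisture = rng.uniform(1.0, 5.0, 40)                    # reference values, % w/w
water_band = np.exp(-((wavelengths - 1940) / 30) ** 2)  # toy water absorption band
X = moisture[:, None] * water_band + 0.01 * rng.normal(size=(40, 350))

pls = PLSRegression(n_components=2)                     # few latent variables
pls.fit(X, moisture)
print(float(pls.predict(X[:1]).ravel()[0]), moisture[0])  # fitted vs. reference
```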
NASA Astrophysics Data System (ADS)
Bussweiler, Y.; Brey, G. P.; Pearson, D. G.; Stachel, T.; Stern, R. A.; Hardman, M. F.; Kjarsgaard, B. A.; Jackson, S. E.
2017-02-01
This study provides an experimental calibration of the empirical Al-in-olivine thermometer for mantle peridotites proposed by De Hoog et al. (2010). We report Al concentrations measured by secondary ion mass spectrometry (SIMS) in olivines produced in the original high-pressure, high-temperature, four-phase lherzolite experiments of Brey et al. (1990). These reversed experiments were used for the calibration of the two-pyroxene thermometer and Al-in-orthopyroxene barometer by Brey and Köhler (1990). The experimental conditions of the runs investigated here range from 28 to 60 kbar and 1000 to 1300 °C. Olivine compositions from this range of experiments have Al concentrations that are consistent, within analytical uncertainties, with those predicted by the empirical calibration of the Al-in-olivine thermometer for mantle peridotites. Fitting the experimental data to a thermometer equation using the least squares method results in an updated calibration expression [equation not reproduced in this abstract]. This version of the Al-in-olivine thermometer appears to be applicable to garnet peridotites (lherzolites and harzburgites) well outside the range of experimental conditions investigated here. However, the thermometer is not applicable to spinel-bearing peridotites. We provide new trace element criteria to distinguish between olivine from garnet-, garnet-spinel-, and spinel-facies peridotites. The estimated accuracy of the thermometer is ±20 °C. Thus, the thermometer could serve as a useful tool in settings where two-pyroxene thermometry cannot be applied, such as garnet harzburgites and single inclusions in diamond.
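Illustrative only: the published expression is not reproduced in this abstract, so the functional form below (ln[Al] linear in 1/T, an Arrhenius-type partitioning relation) and all numbers are assumptions chosen to show the least-squares fitting step, not the actual calibration.

```python
# Sketch: fitting and inverting a thermometer relation by least squares.
import numpy as np

T_K = np.array([1273.0, 1373.0, 1473.0, 1573.0])   # runs spanning 1000-1300 degC
al_ppm = np.array([18.0, 45.0, 100.0, 200.0])      # synthetic Al-in-olivine contents

# Assumed form: ln(Al) = slope * (1/T) + intercept
slope, intercept = np.polyfit(1.0 / T_K, np.log(al_ppm), 1)

def t_celsius(al):
    """Invert the fitted relation: temperature from measured Al content."""
    return slope / (np.log(al) - intercept) - 273.15

print(t_celsius(100.0))   # recovers ~1200 degC for the synthetic data
```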
NASA Astrophysics Data System (ADS)
Dhooghe, Frederik; De Keyser, Johan; Altwegg, Kathrin; Calmonte, Ursina; Fuselier, Stephen; Hässig, Myrtha; Berthelier, Jean-Jacques; Mall, Urs; Gombosi, Tamas; Fiethe, Björn
2014-05-01
Rosetta will rendezvous with comet 67P/Churyumov-Gerasimenko in May 2014. The Rosetta Orbiter Spectrometer for Ion and Neutral Analysis (ROSINA) instrument comprises three sensors: the pressure sensor (COPS) and two mass spectrometers (RTOF and DFMS). The double focusing mass spectrometer DFMS is optimized for mass resolution and consists of an ion source, a mass analyser and a detector package operated in analogue mode. The magnetic sector of the analyser provides the mass dispersion needed for use with the position-sensitive microchannel plate (MCP) detector. Ions that hit the MCP release electrons that are recorded digitally using a linear electron detector array with 512 pixels. Raw data for a given commanded mass are obtained as ADC counts as a function of pixel number. We have developed a computer-assisted approach to address the problem of calibrating such raw data. Mass calibration: Ion identification is based on their mass-over-charge (m/Z) ratio and requires an accurate correlation of pixel number and m/Z. The m/Z scale depends on the commanded mass and the magnetic field and can be described by an offset of the pixel associated with the commanded mass from the centre of the detector array and a scaling factor. Mass calibration is aided by the built-in gas calibration unit (GCU), which allows one to inject a known gas mixture into the instrument. In a first, fully automatic step of the mass calibration procedure, the calibration uses all GCU spectra and extracts information about the mass peak closest to the centre pixel, since those peaks can be identified unambiguously. This preliminary mass-calibration relation can then be applied to all spectra. Human-assisted identification of additional mass peaks further improves the mass calibration. Ion flux calibration: ADC counts per pixel are converted to ion counts per second using the overall gain, the individual pixel gain, and the total data accumulation time. DFMS can perform an internal scan to determine the pixel gain and related detector aging. The software automatically corrects for these effects to calibrate the fluxes. The COPS sensor can be used for an a posteriori calibration of the fluxes. Neutral gas number densities: Neutrals are ionized in the ion source before they are transferred to the mass analyser, but during this process fragmentation may occur. Our software allows one to identify which neutrals entered the instrument, given the ion fragments that are detected. First, multiple spectra with a limited mass range are combined to provide an overview of as many ion fragments as possible. We then exploit a fragmentation database to assist in figuring out the relation between entering species and recorded fragments. Finally, using experimentally determined sensitivities, gas number densities are obtained. The instrument characterisation (experimental determination of sensitivities, fragmentation patterns for the most common neutral species, etc.) has been conducted by the consortium using an instrument copy in the University of Bern test facilities during the cruise phase of the mission.
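A sketch of the pixel-to-m/Z relation described in the mass-calibration step: the m/Z assigned to each pixel is tied to the commanded mass through a centre-pixel offset and a scaling factor. The exponential dispersion form and all constants are assumptions for illustration, not the DFMS flight calibration.

```python
# Sketch: mapping detector pixels to m/Z for one commanded mass.
import numpy as np

N_PIXELS, CENTER = 512, 255.5     # linear detector array geometry

def mz_of_pixel(pixel, commanded_mass, offset, scale):
    """m/Z across the array; offset and scale come from the calibration."""
    return commanded_mass * np.exp((pixel - CENTER - offset) / scale)

pixels = np.arange(N_PIXELS)
# Hypothetical calibration values for a spectrum commanded at m/Z = 18 (H2O):
mz = mz_of_pixel(pixels, commanded_mass=18.0, offset=3.2, scale=3300.0)
print(mz[[0, 255, 511]])          # m/Z at the array edges and centre
```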
Oceanic Whitecaps and Associated, Bubble-Mediated, Air-Sea Exchange Processes
1992-10-01
experiments performed in laboratory conditions using the Air-Sea Exchange Monitoring System (A-SEMS). EXPERIMENTAL SET-UP In a first look, the Air-Sea Exchange... Model 225, equipped with a Model 519 plug-in module. Other complementary information on A-SEMS along with results from first tests and calibration... between 9.5°C and 22.4°C within the first 24 hours after transferring the water sample into laboratory conditions. The results show an enhancement of
NASA Technical Reports Server (NTRS)
Nguyen, Quang-Viet; Kojima, Jun
2005-01-01
Researchers from NASA Glenn Research Center's Combustion Branch and the Ohio Aerospace Institute (OAI) have developed a transferable calibration standard for an optical technique called spontaneous Raman scattering (SRS) in high-pressure flames. SRS is perhaps the only technique that provides spatially and temporally resolved, simultaneous multiscalar measurements in turbulent flames. Such measurements are critical for the validation of numerical models of combustion. This study has been a combined experimental and theoretical effort to develop a spectral calibration database for multiscalar diagnostics using SRS in high-pressure flames. In the past, such measurements have used a one-of-a-kind experimental setup and a setup-dependent calibration procedure to empirically account for spectral interferences, or crosstalk, among the major species of interest. Such calibration procedures, being non-transferable, are prohibitively expensive to duplicate. A goal of this effort is to provide an SRS calibration database using transferable standards that can be implemented widely by other researchers for both atmospheric-pressure and high-pressure (less than 30 atm) SRS studies. A secondary goal is to provide quantitative multiscalar diagnostics in high-pressure environments to validate computational combustion codes.
NASA Astrophysics Data System (ADS)
Sarabandi, Kamal; Oh, Yisok; Ulaby, Fawwaz T.
1992-10-01
Three aspects of a polarimetric active radar calibrator (PARC) are treated: (1) experimental measurements of the magnitudes and phases of the scattering-matrix elements of a pair of PARCs operating at 1.25 and 5.3 GHz; (2) the design, construction, and performance evaluation of a PARC; and (3) the extension of the single-target-calibration technique (STCT) to a PARC. STCT has heretofore been limited to the use of reciprocal passive calibration devices, such as spheres and trihedral corner reflectors.
NASA Astrophysics Data System (ADS)
Kluge, Tobias; John, Cédric M.; Jourdan, Anne-Lise; Davis, Simon; Crawshaw, John
2015-05-01
Many fields of Earth sciences benefit from the knowledge of mineral formation temperatures. For example, carbonates are extensively used for reconstruction of the Earth's past climatic variations by determining ocean, lake, and soil paleotemperatures. Furthermore, diagenetic minerals and their formation or alteration temperature may provide information about the burial history of important geological units and can have practical applications, for instance, for reconstructing the geochemical and thermal histories of hydrocarbon reservoirs. Carbonate clumped isotope thermometry is a relatively new technique that can provide the formation temperature of carbonate minerals without requiring a priori knowledge of the isotopic composition of the initial solution. It is based on the temperature-dependent abundance of the rare 13C-18O bonds in carbonate minerals, specified as a Δ47 value. The clumped isotope thermometer has been calibrated experimentally from 1 °C to 70 °C. However, higher temperatures that are relevant to geological processes have so far not been directly calibrated in the laboratory. In order to close this calibration gap and to provide a robust basis for the application of clumped isotopes to high-temperature geological processes we precipitated CaCO3 (mainly calcite) in the laboratory between 23 and 250 °C. We used two different precipitation techniques: first, minerals were precipitated from a CaCO3 supersaturated solution at atmospheric pressure (23-91 °C), and, second, from a solution resulting from the mixing of CaCl2 and NaHCO3 in a pressurized reaction vessel at a pressure of up to 80 bar (25-250 °C).
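A sketch of how a clumped-isotope calibration is typically built from such experiments: Δ47 is commonly regressed against 1/T² (T in kelvin) and the fit is then inverted to give formation temperatures. The Δ47 values below are synthetic; only the fitting and inversion procedure is illustrated, not this study's calibration.

```python
# Sketch: fitting and inverting a Delta47-temperature calibration.
import numpy as np

T_C = np.array([23.0, 50.0, 91.0, 150.0, 200.0, 250.0])  # precipitation temps
T_K = T_C + 273.15
delta47 = 0.040e6 / T_K**2 + 0.26          # synthetic Delta47 values, permil

# Assumed calibration form: Delta47 = a * 10^6 / T^2 + b
a, b = np.polyfit(1e6 / T_K**2, delta47, 1)

def temperature_c(d47):
    """Formation temperature (degC) from a measured Delta47 value."""
    return np.sqrt(1e6 * a / (d47 - b)) - 273.15

print(temperature_c(0.60))                 # ~70 degC for the synthetic fit
```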
Some advances in experimentation supporting development of viscoplastic constitutive models
NASA Technical Reports Server (NTRS)
Ellis, J. R.; Robinson, D. N.
1985-01-01
The development of a biaxial extensometer capable of measuring axial, torsion, and diametral strains to near-microstrain resolution at elevated temperatures is discussed. An instrument with this capability was needed to provide experimental support to the development of viscoplastic constitutive models. The advantages gained when torsional loading is used to investigate inelastic material response at elevated temperatures are highlighted. The development of the biaxial extensometer was conducted in two stages. The first involved a series of bench calibration experiments performed at room temperature. The second stage involved a series of in-place calibration experiments conducted at room and elevated temperature. A review of the calibration data indicated that all performance requirements regarding resolution, range, stability, and crosstalk had been met by the subject instrument over the temperature range of interest, 21 C to 651 C. The scope of the in-place calibration experiments was expanded to investigate the feasibility of generating stress relaxation data under torsional loading.
[New method of mixed gas infrared spectrum analysis based on SVM].
Bai, Peng; Xie, Wen-Jun; Liu, Jun-Hua
2007-07-01
A new method of infrared spectrum analysis based on the support vector machine (SVM) was proposed for mixed-gas analysis. The kernel function in SVM maps the heavily overlapping absorption spectra into a high-dimensional space in which the transformed data can still be processed with computations carried out in the original space, so a regression calibration model was established and then applied to estimate the concentration of each component gas. It was also shown that the SVM regression model can be used for component recognition of the gas mixture. The method was applied to the analysis of different data samples, and factors that affect the model, such as the scan interval, wavelength range, kernel function, and penalty coefficient C, were discussed. Experimental results show that the maximum mean absolute error of the component concentrations is 0.132% and that the component recognition accuracy is higher than 94%. The method addresses the problems of overlapping absorption spectra, of using a single method for both qualitative and quantitative analysis, and of a limited number of training samples, and it promises to be applicable to other mixed-gas infrared spectrum analyses, in both theory and practice.
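A minimal sketch of this kind of SVM regression calibration, using scikit-learn's SVR with an RBF kernel on synthetic spectra (the data, band shape, and parameter values are hypothetical stand-ins, not the paper's):

```python
# Sketch: SVM regression calibration of gas concentration from overlapping
# absorption spectra. Spectra and concentrations are synthetic stand-ins.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_channels = 200, 64
concentrations = rng.uniform(0.0, 5.0, n_samples)            # component gas concentration (%)
band = np.exp(-0.5 * ((np.arange(n_channels) - 32) / 6.0) ** 2)   # synthetic absorption band
spectra = np.outer(concentrations, band) + 0.01 * rng.standard_normal((n_samples, n_channels))

X_train, X_test, y_train, y_test = train_test_split(spectra, concentrations, random_state=0)

# The RBF kernel maps the overlapping spectra into a high-dimensional feature
# space; C is the penalty coefficient discussed in the abstract.
model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X_train, y_train)
mae = np.mean(np.abs(model.predict(X_test) - y_test))
print(f"mean absolute error: {mae:.3f} %")
```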
Taking a look at the calibration of a CCD detector with a fiber-optic taper
Alkire, R. W.; Rotella, F. J.; Duke, Norma E. C.; ...
2016-02-16
At the Structural Biology Center beamline 19BM, located at the Advanced Photon Source, the operational characteristics of the equipment are routinely checked to ensure they are in proper working order. After performing a partial flat-field calibration for the ADSC Quantum 210r CCD detector, it was confirmed that the detector operates within specifications. However, as a secondary check it was decided to scan a single reflection across one-half of a detector module to validate the accuracy of the calibration. The intensities from this single reflection varied by more than 30% from the module center to the corner of the module. Redistribution of light within bent fibers of the fiber-optic taper was identified to be a source of this variation. As a result, the degree to which the diffraction intensities are corrected to account for characteristics of the fiber-optic tapers depends primarily upon the experimental strategy of data collection, approximations made by the data processing software during scaling, and crystal symmetry.
NASA Astrophysics Data System (ADS)
Fuochi, P. G.; Onori, S.; Casali, F.; Chirco, P.
1993-10-01
A 12 MeV linear accelerator is currently used for electron beam processing of power semiconductor devices for lifetime control and, on an experimental basis, for food irradiation, sludge treatment, etc. In order to control the irradiation process, a simple, quick and reliable method for direct evaluation of dose and fluence in a broad electron beam has been developed. This paper presents the results obtained using a "charge collector" which measures the charge absorbed in a graphite target exposed in air. Calibration of the system with a super-Fricke dosimeter and comparison of absorbed-dose results obtained with plastic dosimeters and alanine pellets are discussed.
NASA Astrophysics Data System (ADS)
Rimantho, Dino; Rahman, Tomy Abdul; Cahyadi, Bambang; Tina Hernawati, S.
2017-02-01
Calibration of instrumentation equipment in the pharmaceutical industry is an important activity for determining the true value of a measurement. Preliminary studies indicated that calibration lead times disrupted production and laboratory activities. This study aimed to analyze the causes of the calibration lead time. Several methods were used: Six Sigma, to determine the capability of the equipment calibration process; brainstorming, Pareto diagrams, and fishbone diagrams, to identify and analyze the problems; and the Analytic Hierarchy Process (AHP), to create a hierarchical structure and prioritize the problems. The results showed a DPMO value of around 40769.23, equivalent to a sigma level of approximately 3.24σ for the equipment calibration process, indicating the need for improvements. Problem-solving strategies for reducing the calibration lead time were then determined, such as shortening the preventive maintenance schedule, increasing the number of calibrator instruments, and training personnel. Consistency tests on all pairwise-comparison matrices in the hierarchy yielded consistency ratio (CR) values below 0.1.
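For reference, the reported sigma level follows from the DPMO under the conventional 1.5σ long-term shift; a quick check using the value from the abstract:

```python
# Sketch: converting defects per million opportunities (DPMO) to a sigma level,
# using the conventional 1.5-sigma long-term shift.
from scipy.stats import norm

dpmo = 40769.23
sigma_level = norm.ppf(1.0 - dpmo / 1e6) + 1.5
print(f"sigma level = {sigma_level:.2f}")   # ~3.24, matching the abstract
```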
NASA Technical Reports Server (NTRS)
Groot, J. S.
1990-01-01
In August 1989 the NASA/JPL airborne P/L/C-band DC-8 SAR participated in several remote sensing campaigns in Europe. Amongst other test sites, data were obtained over the Flevopolder test site in the Netherlands on 16 August. The Dutch X-band SLAR was flown on the same date and imaged parts of the same area as the SAR. To calibrate the two imaging radars, a set of 33 calibration devices was deployed; 16 trihedrals were used to calibrate part of the SLAR data. This short paper outlines the X-band SLAR characteristics, the experimental set-up and the calibration method used to calibrate the SLAR data. Finally, some preliminary results are given.
NASA Astrophysics Data System (ADS)
Liu, Hai-Zheng; Shi, Ze-Lin; Feng, Bin; Hui, Bin; Zhao, Yao-Hong
2016-03-01
Integrating microgrid polarimeters on the focal plane array (FPA) of an infrared detector causes non-uniformity of the polarization response. In order to reduce the effect of polarization non-uniformity, this paper constructs an experimental setup for capturing raw flat-field images and proposes a procedure for acquiring a non-uniformity calibration (NUC) matrix and calibrating raw polarization images. The proposed procedure treats the incident radiation as a polarization vector and provides a calibration matrix for each pixel. Both our matrix calibration and two-point calibration are applied to our mid-wavelength infrared (MWIR) polarization imaging system with integrated microgrid polarimeters. Compared with two-point calibration, our matrix calibration reduces non-uniformity by 30-40% in flat-field tests with polarized illumination. An outdoor scene observation experiment indicates that our calibration can effectively reduce polarization non-uniformity and improve the image quality of our MWIR polarization imaging system.
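A minimal sketch of the per-pixel calibration-matrix idea for one 2x2 microgrid superpixel, restricted to the linear Stokes components; the gain values and reference polarization states are hypothetical, not the paper's:

```python
# Sketch: per-pixel polarimetric non-uniformity calibration for a 2x2 microgrid
# superpixel. Each pixel's measured intensity is modeled as a dot product of an
# analysis vector with the linear Stokes vector (S0, S1, S2). Values are hypothetical.
import numpy as np

theta = np.deg2rad([0.0, 45.0, 90.0, 135.0])
A_ideal = 0.5 * np.stack([np.ones(4), np.cos(2 * theta), np.sin(2 * theta)], axis=1)

# Calibration: illuminate with known flat-field Stokes vectors and solve for the
# superpixel's actual analysis matrix from the measured intensities (least squares).
S_known = np.array([[1, 0, 0], [1, 0.9, 0], [1, 0, 0.9], [1, -0.9, 0]], float)  # reference states
gains = np.diag([1.05, 0.97, 1.02, 0.95])            # hypothetical pixel gain non-uniformity
I_meas = S_known @ (gains @ A_ideal).T               # simulated flat-field measurements
A_cal, *_ = np.linalg.lstsq(S_known, I_meas, rcond=None)
A_cal = A_cal.T                                      # rows: pixels, cols: Stokes components

# Reconstruction: invert the calibrated matrix (pseudo-inverse) to recover Stokes vectors.
S_hat = np.linalg.pinv(A_cal) @ I_meas[1]            # recover the second reference state
print(np.round(S_hat, 3))                            # ~[1, 0.9, 0]
```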
Technical Note: Procedure for the calibration and validation of kilo-voltage cone-beam CT models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vilches-Freixas, Gloria; Létang, Jean Michel; Rit,
2016-09-15
Purpose: The aim of this work is to propose a general and simple procedure for the calibration and validation of kilo-voltage cone-beam CT (kV CBCT) models against experimental data. Methods: The calibration and validation of the CT model is a two-step procedure: the source model then the detector model. The source is described by the direction dependent photon energy spectrum at each voltage while the detector is described by the pixel intensity value as a function of the direction and the energy of incident photons. The measurements for the source consist of a series of dose measurements in air performed at each voltage with varying filter thicknesses and materials in front of the x-ray tube. The measurements for the detector are acquisitions of projection images using the same filters and several tube voltages. The proposed procedure has been applied to calibrate and assess the accuracy of simple models of the source and the detector of three commercial kV CBCT units. If the CBCT system models had been calibrated differently, the current procedure would have been exclusively used to validate the models. Several high-purity attenuation filters of aluminum, copper, and silver combined with a dosimeter which is sensitive to the range of voltages of interest were used. A sensitivity analysis of the model has also been conducted for each parameter of the source and the detector models. Results: Average deviations between experimental and theoretical dose values are below 1.5% after calibration for the three x-ray sources. The predicted energy deposited in the detector agrees with experimental data within 4% for all imaging systems. Conclusions: The authors developed and applied an experimental procedure to calibrate and validate any model of the source and the detector of a CBCT unit. The present protocol has been successfully applied to three x-ray imaging systems. The minimum requirements in terms of material and equipment would make its implementation suitable in most clinical environments.
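The source-model calibration can be read as recovering spectrum weights consistent with the dose measured behind filters of known attenuation; a minimal sketch under a Beer-Lambert model (energy bins, attenuation coefficients, and dose values are hypothetical):

```python
# Sketch: calibrating a discrete x-ray spectrum model from dose measurements
# behind attenuation filters, assuming Beer-Lambert attenuation. All values
# (energy bins, attenuation coefficients, doses) are hypothetical placeholders.
import numpy as np
from scipy.optimize import nnls

energies = np.array([40.0, 60.0, 80.0, 100.0])        # keV bins of the spectrum model
mu_al = np.array([0.15, 0.08, 0.05, 0.04])            # Al attenuation (1/mm), illustrative
thicknesses = np.array([0.0, 1.0, 2.0, 4.0, 8.0])     # Al filter thicknesses (mm)

# Each row: relative dose contribution of each energy bin behind one filter.
M = np.exp(-np.outer(thicknesses, mu_al))
w_true = np.array([0.2, 0.4, 0.3, 0.1])               # "unknown" spectrum weights
doses = M @ w_true                                    # simulated dose readings

# Non-negative least squares keeps the recovered spectrum physical.
w_fit, residual = nnls(M, doses)
print(np.round(w_fit / w_fit.sum(), 3))               # ~[0.2, 0.4, 0.3, 0.1]
```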
A Novel Multi-Camera Calibration Method based on Flat Refractive Geometry
NASA Astrophysics Data System (ADS)
Huang, S.; Feng, M. C.; Zheng, T. X.; Li, F.; Wang, J. Q.; Xiao, L. F.
2018-03-01
Multi-camera calibration plays an important role in many fields. In this paper, we present a novel multi-camera calibration method based on flat refractive geometry. All cameras acquire calibration images of a transparent glass calibration board (TGCB) at the same time. The use of a TGCB introduces refraction, which generates calibration error; the theory of flat refractive geometry is employed to eliminate this error. Moreover, the bundle adjustment method is used to minimize the reprojection error and obtain optimized calibration results. Finally, four-camera calibration results on real data show that the mean value and standard deviation of the reprojection error of our method are 4.3411e-05 and 0.4553 pixel, respectively. The experimental results show that the proposed method is accurate and reliable.
Zhang, Man; Zhou, Zhuhuang; Wu, Shuicai; Lin, Lan; Gao, Hongjian; Feng, Yusheng
2015-12-21
This study aims at improving the accuracy of temperature simulation for temperature-controlled radio frequency ablation (RFA). We proposed a new voltage-calibration method in the simulation and investigated the feasibility of a hyperbolic bioheat equation (HBE) in RFA simulations with longer durations and higher power. A total of 40 RFA experiments were conducted in a liver-mimicking phantom. Four mathematical models with multipolar electrodes were developed by the finite element method in COMSOL software: the HBE with/without voltage calibration, and the Pennes bioheat equation (PBE) with/without voltage calibration. The temperature-varied voltage calibration used in the simulation was calculated from the experimental power output and the temperature-dependent resistance of liver tissue. We employed the HBE in the simulation with a delay time τ of 16 s. First, for simulations with each kind of bioheat equation (PBE or HBE), we compared the temperature-varied voltage calibration with the fixed-voltage values used in the simulations. Then, comparisons were conducted between the PBE and the HBE in simulations with temperature-varied voltage calibration. We verified the simulation results by experimental temperature measurements at nine specific points of the tissue phantom. The results showed that: (1) the proposed voltage-calibration method improved the simulation accuracy of temperature-controlled RFA for both the PBE and the HBE, and (2) for temperature-controlled RFA simulation with the temperature-varied voltage calibration, the HBE method was 0.55 °C more accurate than the PBE method. The proposed temperature-varied voltage calibration may be useful in temperature field simulations of temperature-controlled RFA. Besides, the HBE may be used as an alternative in the simulation of long-duration, high-power RFA.
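A minimal sketch of the voltage-calibration idea, reading it as deriving the applied voltage from the recorded power output and a temperature-dependent resistance model, V(T) = sqrt(P·R(T)); the resistance model and all values are hypothetical:

```python
# Sketch: temperature-varied voltage calibration for an RFA simulation.
# The applied voltage is derived from the measured power output and a
# temperature-dependent tissue resistance model. All values are hypothetical.
import numpy as np

def resistance(T_celsius):
    # Hypothetical linear model of liver-phantom resistance vs. temperature.
    return 120.0 * (1.0 - 0.002 * (T_celsius - 20.0))   # ohms

temps = np.array([37.0, 50.0, 70.0, 90.0])              # simulated tissue temperatures (C)
power = np.array([25.0, 22.0, 18.0, 12.0])              # recorded power output (W)

voltage = np.sqrt(power * resistance(temps))            # V = sqrt(P * R(T))
for T, V in zip(temps, voltage):
    print(f"T = {T:5.1f} C -> calibrated voltage {V:5.1f} V")
```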
Requirements for Calibration in Noninvasive Glucose Monitoring by Raman Spectroscopy
Lipson, Jan; Bernhardt, Jeff; Block, Ueyn; Freeman, William R.; Hofmeister, Rudy; Hristakeva, Maya; Lenosky, Thomas; McNamara, Robert; Petrasek, Danny; Veltkamp, David; Waydo, Stephen
2009-01-01
Background In the development of noninvasive glucose monitoring technology, it is highly desirable to derive a calibration that relies on neither person-dependent calibration information nor supplementary calibration points furnished by an existing invasive measurement technique (universal calibration). Method By appropriate experimental design and associated analytical methods, we establish the sufficiency of multiple factors required to permit such a calibration. Factors considered are the discrimination of the measurement technique, stabilization of the experimental apparatus, physics–physiology-based measurement techniques for normalization, the sufficiency of the size of the data set, and appropriate exit criteria to establish the predictive value of the algorithm. Results For noninvasive glucose measurements, using Raman spectroscopy, the sufficiency of the scale of data was demonstrated by adding new data into an existing calibration algorithm and requiring that (a) the prediction error should be preserved or improved without significant re-optimization, (b) the complexity of the model for optimum estimation not rise with the addition of subjects, and (c) the estimation for persons whose data were removed entirely from the training set should be no worse than the estimates on the remainder of the population. Using these criteria, we established guidelines empirically for the number of subjects (30) and skin sites (387) for a preliminary universal calibration. We obtained a median absolute relative difference for our entire data set of 30 mg/dl, with 92% of the data in the Clarke A and B ranges. Conclusions Because Raman spectroscopy has high discrimination for glucose, a data set of practical dimensions appears to be sufficient for universal calibration. Improvements based on reducing the variance of blood perfusion are expected to reduce the prediction errors substantially, and the inclusion of supplementary calibration points for the wearable device under development will be permissible and beneficial. PMID:20144354
NASA Astrophysics Data System (ADS)
Joiner, N.; Esser, B.; Fertig, M.; Gülhan, A.; Herdrich, G.; Massuti-Ballester, B.
2016-12-01
This paper summarises the final synthesis of an ESA technology research programme entitled "Development of an Innovative Validation Strategy of Gas Surface Interaction Modelling for Re-entry Applications". The focus of the project was to demonstrate the correct pressure dependency of catalytic surface recombination, with an emphasis on Low Earth Orbit (LEO) re-entry conditions and thermal protection system materials. A physics-based model describing the prevalent recombination mechanisms was proposed for implementation into two CFD codes, TINA and TAU. A dedicated experimental campaign was performed to calibrate and validate the CFD model on TPS materials pertinent to the EXPERT space vehicle at a wide range of temperatures and pressures relevant to LEO. A new set of catalytic recombination data was produced that was able to improve the chosen model calibration for CVD-SiC and provide the first model calibration for the nickel-chromium super-alloy PM1000. The experimentally observed pressure dependency of catalytic recombination can only be reproduced by the Langmuir-Hinshelwood recombination mechanism. Due to the decreasing enthalpy, and hence degree of dissociation, with facility stagnation pressure, it was not possible to obtain catalytic recombination coefficients from the measurements at high experimental stagnation pressures. Therefore, the CFD model calibration has been improved by this activity based on the low-pressure results. The results of the model calibration were applied to the existing EXPERT mission profile to examine the impact of the experimentally calibrated model at flight-relevant conditions. The heat flux overshoot at the CVD-SiC/PM1000 junction on EXPERT is confirmed to produce radiative equilibrium temperatures in close proximity to the PM1000 melt temperature. This was anticipated within the margins of the vehicle design; however, due to the measurements made here for the first time at relevant temperatures for the junction, increased confidence in this finding is placed on the computations.
NASA Astrophysics Data System (ADS)
Wang, Qingquan; Yu, Yingjie; Mou, Kebing
2017-10-01
This paper presents a method of testing the effect of computer-generated hologram (CGH) fabrication error in a cylindrical interferometry system. An experimental system is developed for calibrating the effect of this error. In the calibrating system, a mirror with high surface accuracy is placed at the focal axis of the cylindrical wave. After transmitting through the CGH, the reflected cylindrical wave can be transformed into a plane wave again, and then the plane wave interferes with the reference plane wave. Finally, the double-pass transmitted wavefront of the CGH, representing the effect of the CGH fabrication error in the experimental system, is obtained by analyzing the interferogram. The mathematical model of misalignment aberration removal in the calibration system is described, and the feasibility is demonstrated via the simulation system established in Zemax. With the mathematical polynomial, most of the possible misalignment errors can be estimated with the least-squares fitting algorithm, and then the double-pass transmitted wavefront of the CGH can be obtained by subtracting the misalignment errors from the result extracted from the real experimental system. Compared to the standard double-pass transmitted wavefront given by Diffraction International Ltd., which manufactured the CGH used in the experimental system, the result is desirable. We conclude that the proposed method is effective in calibrating the effect of the CGH error in the cylindrical interferometry system for the measurement of cylindricity error.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wendt, Fabian F; Robertson, Amy N; Jonkman, Jason
During the course of the Offshore Code Comparison Collaboration, Continued, with Correlation (OC5) project, which focused on the validation of numerical methods through comparison against tank test data, the authors created a numerical FAST model of the 1:50-scale DeepCwind semisubmersible system that was tested at the Maritime Research Institute Netherlands ocean basin in 2013. This paper discusses several model calibration studies that were conducted to identify model adjustments that improve the agreement between the numerical simulations and the experimental test data. These calibration studies cover wind-field-specific parameters (coherence, turbulence), hydrodynamic and aerodynamic modeling approaches, as well as rotor model (blade-pitch and blade-mass imbalances) and tower model (structural tower damping coefficient) adjustments. These calibration studies were conducted based on relatively simple calibration load cases (wave only/wind only). The agreement between the final FAST model and experimental measurements is then assessed based on more-complex combined wind and wave validation cases.
NASA Astrophysics Data System (ADS)
Venable, Demetrius D.; Whiteman, David N.; Calhoun, Monique N.; Dirisu, Afusat O.; Connell, Rasheen M.; Landulfo, Eduardo
2011-08-01
We have investigated a technique that allows for the independent determination of the water vapor mixing ratio calibration factor for a Raman lidar system. This technique utilizes a procedure whereby a light source of known spectral characteristics is scanned across the aperture of the lidar system's telescope and the overall optical efficiency of the system is determined. Direct analysis of the temperature-dependent differential scattering cross sections for vibration and vibration-rotation transitions (convolved with narrowband filters) along with the measured efficiency of the system, leads to a theoretical determination of the water vapor mixing ratio calibration factor. A calibration factor was also obtained experimentally from lidar measurements and radiosonde data. A comparison of the theoretical and experimentally determined values agrees within 5%. We report on the sensitivity of the water vapor mixing ratio calibration factor to uncertainties in parameters that characterize the narrowband transmission filters, the temperature-dependent differential scattering cross section, and the variability of the system efficiency ratios as the lamp is scanned across the aperture of the telescope used in the Howard University Raman Lidar system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beyer, Christopher; Rosenthal, Anja; Myhill, Robert
We have performed an experimental cross calibration of a suite of mineral equilibria within mantle rock bulk compositions that are commonly used in geobarometry to determine the equilibration depths of upper mantle assemblages. Multiple barometers were compared simultaneously in experimental runs, where the pressure was determined using in-situ measurements of the unit cell volumes of MgO, NaCl, Re and h-BN between 3.6 and 10.4 GPa, and 1250 and 1500 °C. The experiments were performed in a large volume press (LVP) in combination with synchrotron X-ray diffraction. Noble metal capsules drilled with multiple sample chambers were loaded with a range of bulk compositions representative of peridotite, eclogite and pyroxenite lithologies. By this approach, we simultaneously calibrated the geobarometers applicable to different mantle lithologies under identical and well determined pressure and temperature conditions. We identified discrepancies between the calculated and experimental pressures, for which we propose simple linear or constant correction factors to some of the previously published barometric equations. As a result, we establish internally-consistent cross-calibrations for a number of garnet-orthopyroxene, garnet-clinopyroxene, Ca-Tschermaks-in-clinopyroxene and majorite geobarometers.
e-Calibrations: using the Internet to deliver calibration services in real time at lower cost
NASA Astrophysics Data System (ADS)
Desrosiers, Marc; Nagy, Vitaly; Puhl, James; Glenn, Robert; Densock, Robert; Stieren, David; Lang, Brian; Kamlowski, Andreas; Maier, Diether; Heiss, Arthur
2002-03-01
The National Institute of Standards and Technology (NIST) is expanding into a new frontier in the delivery of measurement services. The Internet will be employed to provide industry with electronic traceability to national standards. This is a radical departure from the traditional modes of traceability and presents many new challenges. The traditional mail-based calibration service relies on sending artifacts to the user, who then mails them back to NIST for evaluation. The new service will deliver calibration results to the industry customer on-demand, in real-time, at a lower cost. The calibration results can be incorporated rapidly into the production process to ensure the highest quality manufacturing. The service would provide the US radiation processing industry with a direct link to the NIST calibration facilities and its expertise, and provide an interactive feedback process between industrial processing and the national measurement standard. Moreover, an Internet calibration system should contribute to the removal of measurement-related trade barriers.
Moretti, Paul; Choubert, Jean-Marc; Canler, Jean-Pierre; Buffière, Pierre; Pétrimaux, Olivier; Lessard, Paul
2018-02-01
The integrated fixed-film activated sludge (IFAS) process is being increasingly used to enhance nitrogen removal in former activated sludge systems. The aim of this work is to evaluate a numerical model of a new nitrifying/denitrifying IFAS configuration. It consists of two carrier-free reactors (anoxic and aerobic) and one IFAS reactor with a filling ratio of 43% of carriers, followed by a clarifier. Simulations were carried out with GPS-X involving the nitrification reaction combined with a 1D heterogeneous biofilm model, including attachment/detachment processes. An original iterative calibration protocol was created comprising four steps and nine actions. Experimental campaigns were carried out to collect data on the pilot in operation, specifically for modelling purposes. The model used was able to properly predict the variations of the activated sludge (bulk) and biofilm masses, the nitrification rates of both the activated sludge and the biofilm, and the nitrogen concentration in the effluent for short (4-10 days) and long (300 days) simulation runs. A calibrated parameter set is proposed (biokinetics, detachment, diffusion) related to the activated sludge, the biofilm and the effluent variables to enhance the model prediction on hourly and daily data sets.
Thermographic Microstructure Monitoring in Electron Beam Additive Manufacturing
Raplee, J.; Plotkowski, A.; Kirka, M. M.; Dinwiddie, R.; Okello, A.; Dehoff, R. R.; Babu, S. S.
2017-01-01
To reduce the uncertainty of build performance in metal additive manufacturing, robust process monitoring systems that can detect imperfections and improve repeatability are desired. One of the most promising methods for in situ monitoring is thermographic imaging. However, there is a challenge in using this technology due to the difference in surface emittance between the metal powder and solidified part being observed that affects the accuracy of the temperature data collected. The purpose of the present study was to develop a method for properly calibrating temperature profiles from thermographic data to account for this emittance change and to determine important characteristics of the build through additional processing. The thermographic data was analyzed to identify the transition of material from metal powder to a solid as-printed part. A corrected temperature profile was then assembled for each point using calibrations for these surface conditions. Using this data, the thermal gradient and solid-liquid interface velocity were approximated and correlated to experimentally observed microstructural variation within the part. This work shows that by using a method of process monitoring, repeatability of a build could be monitored specifically in relation to microstructure control. PMID:28256595
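A minimal sketch of the emittance-aware correction described here: once the powder-to-solid transition is identified for a pixel, a separate calibration curve is applied on each side of it (both curves and the transition frame are hypothetical placeholders):

```python
# Sketch: assembling a corrected temperature profile from thermographic data
# when a pixel's surface changes from powder to solidified metal. The two
# emittance-based calibration curves and the transition frame are hypothetical.
import numpy as np

def temp_powder(signal):            # calibration valid for the powder surface
    return 300.0 + 0.8 * signal     # hypothetical linear signal-to-kelvin curve

def temp_solid(signal):             # calibration valid for the solidified surface
    return 300.0 + 1.1 * signal

signal = np.array([900.0, 1200.0, 1150.0, 700.0, 400.0])  # raw IR signal for one pixel
transition_frame = 2                # frame where the powder-to-solid transition is detected

corrected = np.where(np.arange(signal.size) < transition_frame,
                     temp_powder(signal), temp_solid(signal))
print(corrected)
```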
Zhang, Da; Mihai, Georgeta; Barbaras, Larry G; Brook, Olga R; Palmer, Matthew R
2018-05-10
Water equivalent diameter (Dw) reflects a patient's attenuation, is a sound descriptor of patient size, and is used to determine the size-specific dose estimate from a CT examination. Calculating Dw from CT localizer radiographs makes it possible to utilize Dw before actual scans and minimizes truncation errors due to limited reconstructed fields of view. One obstacle preventing the user community from implementing this useful tool is the necessity to calibrate localizer pixel values so as to represent water equivalent attenuation. We report a practical method to ease this calibration process. Dw is calculated from the water equivalent area (Aw), which is deduced from the average localizer pixel value (LPV) of the line(s) in the localizer radiograph that correspond(s) to the axial image. The calibration process is conducted to establish the relationship between Aw and LPV. Localizer and axial images were acquired from phantoms of different total attenuation. We developed a program that automates the geometrical association between axial images and localizer lines and manages the measurements of Dw and average pixel values. We tested the calibration method on three CT scanners: a GE CT750HD, a Siemens Definition AS, and a Toshiba Acquilion Prime80, for both posterior-anterior (PA) and lateral (LAT) localizer directions (for all CTs) and with different localizer filters (for the Toshiba CT). The computer program was able to correctly perform the geometrical association between corresponding axial images and localizer lines. Linear relationships between Aw and LPV were observed (with R² all greater than 0.998) under all tested conditions, regardless of the direction and image filters used on the localizer radiographs. When comparing LAT and PA directions with the same image filter and for the same scanner, the slope values were close (maximum difference of 0.02 mm), and the intercept values showed larger deviations (maximum difference of 2.8 mm). Water equivalent diameter estimation on phantoms and patients demonstrated the high accuracy of the calibration: the percentage difference between Dw from axial images and localizers was below 2%. With five clinical chest examinations and five abdominal-pelvic examinations of varying patient sizes, the maximum percentage difference was approximately 5%. Our study showed that Aw and LPV are highly correlated, providing enough evidence to allow Dw determination once the experimental calibration process is established. © 2018 American Association of Physicists in Medicine.
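A minimal sketch of the calibration and its use, assuming the linear Aw-LPV relationship reported here and the standard definition Dw = 2·sqrt(Aw/π); the phantom data points are hypothetical:

```python
# Sketch: calibrating water-equivalent area (Aw) against localizer pixel value
# (LPV) and using the fit to estimate water-equivalent diameter (Dw) before
# the scan. Phantom data points are hypothetical placeholders.
import numpy as np

# Calibration phantoms: average LPV of a localizer line vs. Aw from axial images.
lpv = np.array([180.0, 260.0, 340.0, 430.0])       # average localizer pixel values
aw_mm2 = np.array([30e3, 50e3, 70e3, 95e3])        # water-equivalent areas (mm^2)

slope, intercept = np.polyfit(lpv, aw_mm2, 1)      # linear fit: Aw = slope*LPV + intercept

def dw_from_lpv(lpv_line):
    aw = slope * lpv_line + intercept
    return 2.0 * np.sqrt(aw / np.pi)               # Dw in mm

print(f"Dw for LPV=300: {dw_from_lpv(300.0):.1f} mm")
```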
Linearization of Positional Response Curve of a Fiber-optic Displacement Sensor
NASA Astrophysics Data System (ADS)
Babaev, O. G.; Matyunin, S. A.; Paranin, V. D.
2018-01-01
Currently, the creation of optical measuring instruments and sensors for measuring linear displacement is one of the most relevant problems in the area of instrumentation. Fiber-optic contactless sensors based on the magneto-optical effect are of special interest. They are essentially contactless, non-electrical and have a closed optical channel not subject to contamination. The main problem of this type of sensor is the non-linearity of its positional response curve due to the hyperbolic nature of the variation of the magnetic field intensity induced by moving the magnetic source, mounted on the controlled object, relative to the sensing element. This paper discusses an algorithmic method of linearizing the positional response curve of fiber-optic displacement sensors in any selected range of the displacements to be measured. The method is divided into two stages: 1 - definition of the calibration function, 2 - measurement and linearization of the positional response curve (including its temperature stabilization). The algorithm under consideration significantly reduces the number of points of the calibration function, which is essential for the calibration of the temperature dependence, by using points that deviate randomly from a uniformly spaced grid. Subsequent interpolation of the deviating points and piecewise linear-plane approximation of the calibration function reduce the microcontroller storage capacity needed to store the calibration function and the time required to process the measurement results. The paper also presents experimental results of testing real samples of fiber-optic displacement sensors.
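A minimal sketch of the two-stage scheme: record a calibration function at known displacements, then linearize raw readings by piecewise-linear interpolation of the stored table (the hyperbolic response model is a hypothetical stand-in):

```python
# Sketch: linearizing a sensor's positional response with a stored calibration
# function and piecewise-linear interpolation. The response model is hypothetical.
import numpy as np

# Stage 1: calibration - record sensor output at known displacements.
displacement_mm = np.linspace(0.0, 10.0, 11)              # reference positions
sensor_out = 1.0 / (0.5 + 0.3 * displacement_mm)          # hypothetical hyperbolic response

# Store the inverse mapping (sensor output -> displacement), sorted by output
# because np.interp requires increasing sample points.
order = np.argsort(sensor_out)
cal_out, cal_disp = sensor_out[order], displacement_mm[order]

# Stage 2: measurement - linearize a raw reading by interpolating the table.
def linearize(raw):
    return np.interp(raw, cal_out, cal_disp)

raw_reading = 0.85
print(f"displacement = {linearize(raw_reading):.3f} mm")
```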
Assessment of uncertainty in ROLO lunar irradiance for on-orbit calibration
Stone, T.C.; Kieffer, H.H.; Barnes, W.L.; Butler, J.J.
2004-01-01
A system to provide radiometric calibration of remote sensing imaging instruments on-orbit using the Moon has been developed by the US Geological Survey RObotic Lunar Observatory (ROLO) project. ROLO has developed a model for lunar irradiance which treats the primary geometric variables of phase and libration explicitly. The model fits hundreds of data points in each of 23 VNIR and 9 SWIR bands; input data are derived from lunar radiance images acquired by the project's on-site telescopes, calibrated to exoatmospheric radiance and converted to disk-equivalent reflectance. Experimental uncertainties are tracked through all stages of the data processing and modeling. Model fit residuals are approximately 1% in each band over the full range of observed phase and libration angles. Application of ROLO lunar calibration to SeaWiFS has demonstrated the capability for long-term instrument response trending with precision approaching 0.1% per year. Current work involves assessing the error in absolute responsivity and relative spectral response of the ROLO imaging systems, and propagation of error through the data reduction and modeling software systems with the goal of reducing the uncertainty in the absolute scale, now estimated at 5-10%. This level is similar to the scatter seen in ROLO lunar irradiance comparisons of multiple spacecraft instruments that have viewed the Moon. A field calibration campaign involving NASA and NIST has been initiated that ties the ROLO lunar measurements to the NIST (SI) radiometric scale.
NASA Astrophysics Data System (ADS)
Kimpe, Tom; Rostang, Johan; Avanaki, Ali; Espig, Kathryn; Xthona, Albert; Cocuranu, Ioan; Parwani, Anil V.; Pantanowitz, Liron
2014-03-01
Digital pathology systems typically consist of a slide scanner, processing software, visualization software, and finally a workstation with display for visualization of the digital slide images. This paper studies whether digital pathology images can look different when presenting them on different display systems, and whether these visual differences can result in different perceived contrast of clinically relevant features. By analyzing a set of four digital pathology images of different subspecialties on three different display systems, it was concluded that pathology images look different when visualized on different display systems. The importance of these visual differences is elucidated when they are located in areas of the digital slide that contain clinically relevant features. Based on a calculation of dE2000 differences between background and clinically relevant features, it was clear that perceived contrast of clinically relevant features is influenced by the choice of display system. Furthermore, it seems that the specific calibration target chosen for the display system has an important effect on the perceived contrast of clinically relevant features. Preliminary results suggest that calibrating to DICOM GSDF calibration performed slightly worse than sRGB, while a new experimental calibration target CSDF performed better than both DICOM GSDF and sRGB. This result is promising as it suggests that further research work could lead to better definition of an optimized calibration target for digital pathology images resulting in a positive effect on clinical performance.
True logarithmic amplification of frequency clock in SS-OCT for calibration
Liu, Bin; Azimi, Ehsan; Brezinski, Mark E.
2011-01-01
With swept source optical coherence tomography (SS-OCT), imprecise signal calibration prevents optimal imaging of biological tissues such as the coronary artery. This work demonstrates an approach using a true logarithmic amplifier to precondition the clock signal, in an effort to minimize noise and phase errors for optimal calibration. The method was validated and tested with a high-speed SS-OCT system. The experimental results demonstrate its superior ability to optimize the calibration and improve imaging performance. In particular, this hardware-based approach is suitable for real-time calibration in a high-speed system where computation time is constrained. PMID:21698036
Hijazi, Bilal; Cool, Simon; Vangeyte, Jürgen; Mertens, Koen C; Cointault, Frédéric; Paindavoine, Michel; Pieters, Jan G
2014-11-13
A 3D imaging technique using a high speed binocular stereovision system was developed, in combination with corresponding image processing algorithms, for accurate determination of the parameters of particles leaving the spinning disks of centrifugal fertilizer spreaders. Validation of the stereo-matching algorithm using a virtual 3D stereovision simulator indicated an error of less than 2 pixels for 90% of the particles. The setup was validated using the cylindrical spread pattern of an experimental spreader. A 2D correlation coefficient of 90% and a relative error of 27% were found between the experimental results and the (simulated) spread pattern obtained with the developed setup. In combination with a ballistic flight model, the developed image acquisition and processing algorithms enable fast determination and evaluation of the spread pattern, which can be used as a tool for spreader design and precise machine calibration.
NASA Astrophysics Data System (ADS)
Wang, Qingquan; Yu, Yingjie; Mou, Kebing
2016-10-01
This paper presents a method of absolutely calibrating the fabrication error of the CGH in a cylindrical interferometry system for the measurement of cylindricity error. First, a simulated experimental system is set up in ZEMAX. On one hand, the simulated experimental system demonstrates the feasibility of the proposed method. On the other hand, by changing the positions of the mirror in the simulated experimental system, a misalignment aberration map, consisting of the interferograms at different positions, is acquired; it can act as a reference for experimental adjustment in the real system. Second, the mathematical polynomial, which describes the relationship between the misalignment aberrations and the possible misalignment errors, is discussed.
Automated Attitude Sensor Calibration: Progress and Plans
NASA Technical Reports Server (NTRS)
Sedlak, Joseph; Hashmall, Joseph
2004-01-01
This paper describes ongoing work at NASA/Goddard Space Flight Center to improve the quality of spacecraft attitude sensor calibration and reduce costs by automating parts of the calibration process. The new calibration software can autonomously preview data quality over a given time span, select a subset of the data for processing, perform the requested calibration, and output a report. This level of automation is currently being implemented for two specific applications: inertial reference unit (IRU) calibration and sensor alignment calibration. The IRU calibration utility makes use of a sequential version of the Davenport algorithm. This utility has been successfully tested with simulated and actual flight data. The alignment calibration is still in the early testing stage. Both utilities will be incorporated into the institutional attitude ground support system.
Comparison of magnetic probe calibration at nano and millitesla magnitudes
NASA Astrophysics Data System (ADS)
Pahl, Ryan A.; Rovey, Joshua L.; Pommerenke, David J.
2014-01-01
Magnetic field probes are invaluable diagnostics for pulsed inductive plasma devices where field magnitudes on the order of tenths of tesla or larger are common. Typical methods of providing a broadband calibration of B-dot probes involve either a Helmholtz coil driven by a function generator or a network analyzer. Both calibration methods typically produce field magnitudes of tens of microtesla or less, at least three and as many as six orders of magnitude lower than their intended use. This calibration factor is then assumed constant regardless of magnetic field magnitude and the effects of experimental setup are ignored. This work quantifies the variation in calibration factor observed when calibrating magnetic field probes in low field magnitudes. Calibration of two B-dot probe designs as functions of frequency and field magnitude are presented. The first B-dot probe design is the most commonly used design and is constructed from two hand-wound inductors in a differential configuration. The second probe uses surface mounted inductors in a differential configuration with balanced shielding to further reduce common mode noise. Calibration factors are determined experimentally using an 80.4 mm radius Helmholtz coil in two separate configurations over a frequency range of 100-1000 kHz. A conventional low-magnitude calibration using a vector network analyzer produced a field magnitude of 158 nT and yielded calibration factors of 15,663 ± 1.7% and 4920 ± 0.6% T/(V s) at 457 kHz for the surface mounted and hand-wound probes, respectively. A relevant-magnitude calibration using a pulsed-power setup with field magnitudes of 8.7-354 mT yielded calibration factors of 14,615 ± 0.3% and 4507 ± 0.4% T/(V s) at 457 kHz for the surface mounted inductor and hand-wound probe, respectively. Low-magnitude calibration resulted in a larger calibration factor, with an average difference of 9.7% for the surface mounted probe and 12.0% for the hand-wound probe. The maximum difference between relevant and low magnitude tests was 21.5%.
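A minimal sketch of extracting such a calibration factor from a Helmholtz-coil test, using the standard on-axis Helmholtz field formula and the relation B = k·∫V dt; the coil parameters and signal traces are hypothetical, not the paper's:

```python
# Sketch: B-dot probe calibration against a Helmholtz coil. The calibration
# factor k relates field to the time-integrated probe voltage, B = k * int(V dt).
# Coil drive current and probe voltage traces are hypothetical.
import numpy as np

MU0 = 4e-7 * np.pi
N_TURNS, RADIUS = 5, 0.0804                  # Helmholtz coil turns per coil and radius (m)

def helmholtz_field(current_a):
    # On-axis center field of a Helmholtz pair: B = (4/5)^(3/2) * mu0 * N * I / R
    return (4.0 / 5.0) ** 1.5 * MU0 * N_TURNS * current_a / RADIUS

t = np.linspace(0.0, 2e-6, 2001)             # 2 us record, 1 ns steps
f = 457e3                                    # drive frequency (Hz)
I_coil = 10.0 * np.sin(2 * np.pi * f * t)    # coil current (A), hypothetical
B_ref = helmholtz_field(I_coil)              # reference field at the probe

V_probe = np.gradient(B_ref, t) / 4500.0     # simulated probe output for k ~ 4500 T/(V s)

# Calibration factor from peak reference field over peak integrated voltage.
V_int = np.cumsum(V_probe) * (t[1] - t[0])
k = np.max(np.abs(B_ref)) / np.max(np.abs(V_int))
print(f"calibration factor k = {k:.0f} T/(V s)")
```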
Modeling Improvements and Users Manual for Axial-flow Turbine Off-design Computer Code AXOD
NASA Technical Reports Server (NTRS)
Glassman, Arthur J.
1994-01-01
An axial-flow turbine off-design performance computer code used for preliminary studies of gas turbine systems was modified and calibrated based on the experimental performance of large aircraft-type turbines. The flow- and loss-model modifications and calibrations are presented in this report. Comparisons are made between computed performances and experimental data for seven turbines over wide ranges of speed and pressure ratio. This report also serves as the users manual for the revised code, which is named AXOD.
A theoretical/experimental program to develop active optical pollution sensors, part 2
NASA Technical Reports Server (NTRS)
Poultney, S. K.
1975-01-01
Progress is reported on experimental investigations of Lidar and the application of Lidar to environmental and atmospheric science. Specifically the following programs are considered: calibration and application of the LaRC 48-inch Lidar; efficient and certain detection of SO2 and other gases in the calibration tank using the Raman Stack Monitor Lidar; the potential of Lidar remote sensing from the space shuttle; and the planning and mounting of efforts to realize the promise of backscatter differential absorption Lidar.
NASA Astrophysics Data System (ADS)
Hirsh, T. Y.; Pérez Gálvan, A.; Burkey, M. T.; Aprahamian, A.; Buchinger, F.; Caldwell, S.; Clark, J. A.; Gallant, A. T.; Heckmaier, E.; Levand, A. F.; Marley, S. T.; Morgan, G. E.; Nystrom, A.; Orford, R.; Savard, G.; Scielzo, N. D.; Segel, R.; Sharma, K. S.; Siegl, K.; Wang, B. S.
2018-04-01
This article presents an approach to calibrate the energy response of double-sided silicon strip detectors (DSSDs) for low-energy nuclear-science experiments by utilizing cosmic-ray muons. For the 1-mm-thick detectors used with the Beta-decay Paul Trap, the minimum-ionizing peak from these muons provides a stable and time-independent in situ calibration point at around 300 keV, which supplements the calibration data obtained above 3 MeV from α sources. The muon-data calibration is achieved by comparing experimental spectra with detailed Monte Carlo simulations performed using GEANT4 and CRY codes. This additional information constrains the calibration at lower energies, resulting in improvements in quality and accuracy.
The Landsat Data Continuity Mission Operational Land Imager (OLI) Radiometric Calibration
NASA Technical Reports Server (NTRS)
Markham, Brian L.; Dabney, Philip W.; Murphy-Morris, Jeanine E.; Knight, Edward J.; Kvaran, Geir; Barsi, Julia A.
2010-01-01
The Operational Land Imager (OLI) on the Landsat Data Continuity Mission (LDCM) has a comprehensive radiometric characterization and calibration program beginning with the instrument design, and extending through integration and test, on-orbit operations and science data processing. Key instrument design features for radiometric calibration include dual solar diffusers and multi-lamped on-board calibrators. The radiometric calibration transfer procedure from NIST standards has multiple checks on the radiometric scale throughout the process and uses a heliostat as part of the transfer to orbit of the radiometric calibration. On-orbit lunar imaging will be used to track the instrument's stability, and side-slither maneuvers will be used, in addition to the solar diffuser, to flat-field across the thousands of detectors per band. A Calibration Validation Team is continuously involved in the process from design to operations. This team uses an Image Assessment System (IAS), part of the ground system, to characterize and calibrate the on-orbit data.
Meng, Hu; Li, Jiang-Yuan; Tang, Yong-Huai
2009-01-01
A virtual instrument system based on LabVIEW 8.0 for an ion analyzer, which can measure and analyze ion concentrations in solution, was developed; it comprises a homemade conditioning circuit, a data acquisition board, and a computer. It can calibrate slope, temperature, and positioning automatically. When applied to determining the reaction rate constant by pX, it achieved live acquisition, real-time display, automatic processing of test data, generation of result reports, and other functions. This method greatly simplifies the experimental operation, avoids the complicated procedures and personal error of manual data processing, and improves the accuracy and repeatability of the experimental results.
ARTIP: Automated Radio Telescope Image Processing Pipeline
NASA Astrophysics Data System (ADS)
Sharma, Ravi; Gyanchandani, Dolly; Kulkarni, Sarang; Gupta, Neeraj; Pathak, Vineet; Pande, Arti; Joshi, Unmesh
2018-02-01
The Automated Radio Telescope Image Processing Pipeline (ARTIP) automates the entire process of flagging, calibrating, and imaging for radio-interferometric data. ARTIP starts with raw data, i.e. a measurement set and goes through multiple stages, such as flux calibration, bandpass calibration, phase calibration, and imaging to generate continuum and spectral line images. Each stage can also be run independently. The pipeline provides continuous feedback to the user through various messages, charts and logs. It is written using standard python libraries and the CASA package. The pipeline can deal with datasets with multiple spectral windows and also multiple target sources which may have arbitrary combinations of flux/bandpass/phase calibrators.
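A minimal sketch of the orchestration pattern described here, where each stage can be run independently; the stage bodies are hypothetical placeholders, not ARTIP code:

```python
# Sketch: staged pipeline orchestration in the spirit of ARTIP, where each
# stage (flagging, flux/bandpass/phase calibration, imaging) can run on its
# own. Stage implementations are hypothetical placeholders.
from typing import Callable, Dict

def flag(ms: str): print(f"flagging {ms}")
def flux_cal(ms: str): print(f"flux calibration on {ms}")
def bandpass_cal(ms: str): print(f"bandpass calibration on {ms}")
def phase_cal(ms: str): print(f"phase calibration on {ms}")
def image(ms: str): print(f"imaging {ms}")

STAGES: Dict[str, Callable[[str], None]] = {
    "flag": flag, "flux": flux_cal, "bandpass": bandpass_cal,
    "phase": phase_cal, "image": image,
}

def run(ms: str, stages=("flag", "flux", "bandpass", "phase", "image")):
    for name in stages:
        STAGES[name](ms)              # each stage is independently invokable

run("target.ms")                      # full pipeline on a measurement set
run("target.ms", stages=("phase",))   # or a single stage on its own
```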
In the making: SA-PIV applied to swimming practice
NASA Astrophysics Data System (ADS)
van Houwelingen, Josje; van de Water, Willem; Kunnen, Rudie; van Heijst, Gertjan; Clercx, Herman
2017-11-01
To understand and optimize propulsion in human swimming, a deep understanding of the hydrodynamics of swimming is required. This is usually based on experiments and numerical simulations under laboratory conditions. In this study, we bring basic fluid mechanics knowledge and experimental measurement techniques to the analysis of flow in swimming practice itself. A flow visualization setup was built and placed in a regular swimming pool. The measurement volume contains five homogeneous air bubble curtains illuminated by ambient light. The bubbles in these curtains act as tracer particles. The bubble motion is captured by six cameras placed in the side wall of the pool. It is intended to apply SA-PIV (synthetic aperture PIV) to analyze the flow structures on multiple planes in the measurement volume. The system has been calibrated, and the calibration data are used to refocus on the planes of interest. Multiple preprocessing steps need to be executed to obtain images of the proper quality before applying PIV. With a specially programmed video card to process and analyze the images in real time, feedback about swimming performance will become possible. We report on the first experimental data obtained by this system.
Calibration Method of an Ultrasonic System for Temperature Measurement
Zhou, Chao; Wang, Yueke; Qiao, Chunjie; Dai, Weihua
2016-01-01
System calibration is fundamental to the overall accuracy of ultrasonic temperature measurement, and it essentially involves accurately measuring the path length and the system latency of the ultrasonic system. This paper proposes a method of high-accuracy system calibration. By estimating the time delay between the transmitted signal and the received signal at several different temperatures, the calibration equations are constructed, and the calibrated results are determined with the use of the least squares algorithm. Formulas are derived for calculating the calibration uncertainties, and the possible influential factors are analyzed. The experimental results in distilled water show that the calibrated path length and system latency can achieve uncertainties of 0.058 mm and 0.038 μs, respectively, and the temperature accuracy is significantly improved by using the calibrated results. The temperature error remains within ±0.04°C consistently, and the percentage error is less than 0.15%. PMID:27788252
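The calibration can be posed as a linear least-squares problem: each measured delay obeys t = L/c(T) + τ, which is linear in the unknown path length L and system latency τ once the sound speed c(T) in water is known. A sketch with hypothetical measurements:

```python
# Sketch: ultrasonic system calibration by least squares. Measured time delays
# at several temperatures obey t = L / c(T) + tau, linear in the unknown path
# length L and system latency tau. Sound speeds and delays are hypothetical.
import numpy as np

c = np.array([1482.3, 1509.1, 1528.9, 1542.6])            # sound speed in water (m/s) at test temps
t_meas = np.array([68.47, 67.26, 66.41, 65.83]) * 1e-6    # measured delays (s), hypothetical

# Design matrix: columns [1/c, 1] multiply the unknowns [L, tau].
A = np.column_stack([1.0 / c, np.ones_like(c)])
(L, tau), *_ = np.linalg.lstsq(A, t_meas, rcond=None)
print(f"path length L = {L*1000:.2f} mm, system latency tau = {tau*1e6:.3f} us")
```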
Optical Interferometric Micrometrology
NASA Technical Reports Server (NTRS)
Abel, Phillip B.; Lauer, James R.
1989-01-01
Resolutions in angstrom and subangstrom range sought for atomic-scale surface probes. Experimental optical micrometrological system built to demonstrate calibration of piezoelectric transducer to displacement sensitivity of few angstroms. Objective to develop relatively simple system producing and measuring translation, across surface of specimen, of stylus in atomic-force or scanning tunneling microscope. Laser interferometer used to calibrate piezoelectric transducer used in atomic-force microscope. Electronic portion of calibration system made of commercially available components.
Kwon, Young-Hoo; Casebolt, Jeffrey B
2006-07-01
One of the most serious obstacles to accurate quantification of the underwater motion of a swimmer's body is image deformation caused by refraction. Refraction occurs at the water-air interface plane (glass) owing to the density difference. Camera calibration-reconstruction algorithms commonly used in aquatic research do not have the capability to correct this refraction-induced nonlinear image deformation and produce large reconstruction errors. The aim of this paper is to provide a thorough review of: the nature of the refraction-induced image deformation and its behaviour in underwater object-space plane reconstruction; the intrinsic shortcomings of the Direct Linear Transformation (DLT) method in underwater motion analysis; experimental conditions that interact with refraction; and alternative algorithms and strategies that can be used to improve the calibration-reconstruction accuracy. Although it is impossible to remove the refraction error completely in conventional camera calibration-reconstruction methods, it is possible to improve the accuracy to some extent by manipulating experimental conditions or calibration frame characteristics. Alternative algorithms, such as the localized DLT and the double-plane method are also available for error reduction. The ultimate solution for the refraction problem is to develop underwater camera calibration and reconstruction algorithms that have the capability to correct refraction.
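For reference, the conventional 11-parameter DLT that the review critiques maps object coordinates (x, y, z) to image coordinates (u, v) through a projective ratio; a minimal calibration sketch (control points and image observations are hypothetical):

```python
# Sketch: conventional Direct Linear Transformation (DLT) calibration from
# control points, the method whose refraction limitations the review discusses.
# Control-point coordinates and image observations are hypothetical.
import numpy as np

def dlt_calibrate(xyz, uv):
    # Solve for the 11 DLT parameters:
    #   u = (L1 x + L2 y + L3 z + L4) / (L9 x + L10 y + L11 z + 1)
    #   v = (L5 x + L6 y + L7 z + L8) / (L9 x + L10 y + L11 z + 1)
    rows, rhs = [], []
    for (x, y, z), (u, v) in zip(xyz, uv):
        rows.append([x, y, z, 1, 0, 0, 0, 0, -u * x, -u * y, -u * z]); rhs.append(u)
        rows.append([0, 0, 0, 0, x, y, z, 1, -v * x, -v * y, -v * z]); rhs.append(v)
    L, *_ = np.linalg.lstsq(np.asarray(rows, float), np.asarray(rhs, float), rcond=None)
    return L

def dlt_project(L, p):
    x, y, z = p
    den = L[8] * x + L[9] * y + L[10] * z + 1.0
    return ((L[0] * x + L[1] * y + L[2] * z + L[3]) / den,
            (L[4] * x + L[5] * y + L[6] * z + L[7]) / den)

# Hypothetical calibration frame (>= 6 non-coplanar points) and its image points.
xyz = [(0,0,0), (1,0,0), (0,1,0), (0,0,1), (1,1,0), (1,0,1), (0,1,1), (1,1,1)]
uv = [(320,240), (420,238), (322,140), (300,260), (424,138), (398,258), (302,158), (400,156)]
L = dlt_calibrate(xyz, uv)
print(np.round(dlt_project(L, (0.5, 0.5, 0.5)), 1))
```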
NASA Astrophysics Data System (ADS)
Chatzidimitriou-Dreismann, C. A.; Gray, E. MacA.; Blach, T. P.
2012-06-01
The "standard" procedure for calibrating the Vesuvio eV neutron spectrometer at the ISIS neutron source, forming the basis for data analysis over at least the last decade, was recently documented in considerable detail by the instrument's scientists. Additionally, we recently derived analytic expressions of the sensitivity of recoil peak positions with respect to fight-path parameters and presented neutron-proton scattering results that together called into question the validity of the "standard" calibration. These investigations should contribute significantly to the assessment of the experimental results obtained with Vesuvio. Here we present new results of neutron-deuteron scattering from D2 in the backscattering angular range (θ>90°) which are accompanied by a striking energy increase that violates the Impulse Approximation, thus leading unequivocally the following dilemma: (A) either the "standard" calibration is correct and then the experimental results represent a novel quantum dynamical effect of D which stands in blatant contradiction of conventional theoretical expectations; (B) or the present "standard" calibration procedure is seriously deficient and leads to artificial outcomes. For Case (A), we allude to the topic of attosecond quantum dynamical phenomena and our recent neutron scattering experiments from H2 molecules. For Case (B), some suggestions as to how the "standard" calibration could be considerably improved are made.
NASA Astrophysics Data System (ADS)
Čufar, Aljaž; Batistoni, Paola; Conroy, Sean; Ghani, Zamir; Lengar, Igor; Milocco, Alberto; Packer, Lee; Pillon, Mario; Popovichev, Sergey; Snoj, Luka; JET Contributors
2017-03-01
At the Joint European Torus (JET), the ex-vessel fission chambers and in-vessel activation detectors are used as the neutron production rate and neutron yield monitors, respectively. In order to ensure that these detectors produce accurate measurements, they need to be experimentally calibrated. A new calibration of neutron detectors to 14 MeV neutrons, resulting from deuterium-tritium (DT) plasmas, is planned at JET using a compact accelerator-based neutron generator (NG) in which a D/T beam impinges on a solid target containing T/D, producing neutrons by DT fusion reactions. This paper presents the analysis that was performed to model the neutron source characteristics in terms of energy spectrum, angle-energy distribution and the effect of the neutron generator geometry. Different codes capable of simulating the accelerator-based DT neutron sources are compared and sensitivities to uncertainties in the generator's internal structure analysed. The analysis was performed to support preparations for the experimental measurements that characterize the NG as a calibration source. Further extensive neutronics analyses, performed with this model of the NG, will be needed to support the neutron calibration experiments and take into account various differences between the calibration experiment and experiments using the plasma as a source of neutrons.
High Performance Input/Output for Parallel Computer Systems
NASA Technical Reports Server (NTRS)
Ligon, W. B.
1996-01-01
The goal of our project is to study the I/O characteristics of parallel applications used in Earth Science data processing systems such as Regional Data Centers (RDCs) or EOSDIS. Our approach is to study the runtime behavior of typical programs and the effect of key parameters of the I/O subsystem, both under simulation and with direct experimentation on parallel systems. Our three-year activity has focused on two items: developing a test bed that facilitates experimentation with parallel I/O, and studying representative programs from the Earth science data processing application domain. The Parallel Virtual File System (PVFS) has been developed for use on a number of platforms including the Tiger Parallel Architecture Workbench (TPAW) simulator, the Intel Paragon, a cluster of DEC Alpha workstations, and the Beowulf system (at CESDIS). PVFS provides considerable flexibility in configuring I/O in a UNIX-like environment. Access to key performance parameters facilitates experimentation. We have studied several key applications from levels 1, 2 and 3 of the typical RDC processing scenario, including instrument calibration and navigation, image classification, and numerical modeling codes. We have also considered large-scale scientific database codes used to organize image data.
The algorithm for automatic detection of the calibration object
NASA Astrophysics Data System (ADS)
Artem, Kruglov; Irina, Ugfeld
2017-06-01
The problem of automatic image calibration is considered in this paper. The most challenging task in automatic calibration is proper detection of the calibration object. Solving this problem required applying methods and algorithms of digital image processing such as morphology, filtering, edge detection, and shape approximation. The step-by-step development of the algorithm and its adaptation to the specific conditions of log cuts in the image background is presented. Testing of the automatic calibration module was carried out under the production conditions of a logging enterprise. In these tests the average probability of automatically isolating the calibration object was 86.1%, in the absence of type 1 errors. The algorithm was implemented in the automatic calibration module within mobile software for log deck volume measurement.
NASA Technical Reports Server (NTRS)
Greene, N.; Thesken, J. C.; Murthy, P. L. N.; Phoenix, S. L.; Palko, J.; Eldridge, J.; Sutter, J.; Saulsberry, R.; Beeson, H.
2006-01-01
A theoretical investigation of the factors controlling the stress rupture life of the National Aeronautics and Space Administration's (NASA) composite overwrapped pressure vessels (COPVs) continues. Kevlar(TradeMark) fiber overwrapped tanks are of particular concern due to their long usage and the poorly understood stress rupture process in Kevlar(TradeMark) filaments. Existing long term data show that the rupture process is a function of stress, temperature and time. However, due to the presence of a load sharing liner, the manufacturing induced residual stresses and the complex mechanical response, the state of actual fiber stress in flight hardware and test articles is not clearly known. This paper is a companion to the experimental investigation reported in [1] and develops a theoretical framework necessary to design full-scale pathfinder experiments and accurately interpret the experimentally observed deformation and failure mechanisms leading up to static burst in COPVs. The fundamental mechanical response of COPVs is described using linear elasticity and thin shell theory and discussed in comparison to existing experimental observations. These comparisons reveal discrepancies between physical data and the current analytical results and suggest that the vessel's residual stress state and the spatial stress distribution as a function of pressure may be completely different from predictions based upon existing linear elastic analyses. The 3D elasticity of transversely isotropic spherical shells demonstrates that an overly compliant transverse stiffness relative to membrane stiffness can account for some of this by shifting a thin shell problem well into the realm of thick shell response. The use of calibration procedures is demonstrated, as calibrated thin-shell model results and finite element results are shown to be in good agreement with the experimental results. The successes reported here have led to continuing work with full scale testing of larger NASA COPV hardware.
A Single-Block TRL Test Fixture for the Cryogenic Characterization of Planar Microwave Components
NASA Technical Reports Server (NTRS)
Mejia, M.; Creason, A. S.; Toncich, S. S.; Ebihara, B. T.; Miranda, F. A.
1996-01-01
The High-Temperature-Superconductivity (HTS) group of the RF Technology Branch, Space Electronics Division, is actively involved in the fabrication and cryogenic characterization of planar microwave components for space applications. This process requires fast, reliable, and accurate measurement techniques not readily available. A new calibration standard/test fixture that enhances the integrity and reliability of the component characterization process has been developed. The fixture consists of 50 omega thru, reflect, delay, and device under test gold lines etched onto a 254 microns (0.010 in) thick alumina substrate. The Thru-Reflect-Line (TRL) fixture was tested at room temperature using a 30 omega, 7.62 mm (300 mil) long, gold line as a known standard. Good agreement between the experimental data and the data modelled using Sonnet's em(C) software was obtained for both the return (S(sub 11)) and insertion (S(sub 21)) losses. A gold two-pole bandpass filter with a 7.3 GHz center frequency was used as our Device Under Test (DUT), and the results were compared with those obtained using a Short-Open-Load-Thru (SOLT) calibration technique.
Newton, Adam J H; Wall, Mark J; Richardson, Magnus J E
2017-03-01
Microelectrode amperometric biosensors are widely used to measure concentrations of analytes in solution and tissue including acetylcholine, adenosine, glucose, and glutamate. A great deal of experimental and modeling effort has been directed at quantifying the response of the biosensors themselves; however, the influence that the macroscopic tissue environment has on biosensor response has not been subjected to the same level of scrutiny. Here we identify an important issue in the way microelectrode biosensors are calibrated that is likely to have led to underestimations of analyte tissue concentrations. Concentration in tissue is typically determined by comparing the biosensor signal to that measured in free-flow calibration conditions. In a free-flow environment the concentration of the analyte at the outer surface of the biosensor can be considered constant. However, in tissue the analyte reaches the biosensor surface by diffusion through the extracellular space. Because the enzymes in the biosensor break down the analyte, a density gradient is set up resulting in a significantly lower concentration of analyte near the biosensor surface. This effect is compounded by the diminished volume fraction (porosity) and reduction in the diffusion coefficient due to obstructions (tortuosity) in tissue. We demonstrate this effect through modeling and experimentally verify our predictions in diffusive environments. NEW & NOTEWORTHY Microelectrode biosensors are typically calibrated in a free-flow environment where the concentrations at the biosensor surface are constant. However, when in tissue, the analyte reaches the biosensor via diffusion and so analyte breakdown by the biosensor results in a concentration gradient and consequently a lower concentration around the biosensor. This effect means that naive free-flow calibration will underestimate tissue concentration. We develop mathematical models to better quantify the discrepancy between the calibration and tissue environment and experimentally verify our key predictions. Copyright © 2017 the American Physiological Society.
NASA Astrophysics Data System (ADS)
Belfort, Benjamin; Weill, Sylvain; Lehmann, François
2017-07-01
A novel, non-invasive imaging technique is proposed that determines 2D maps of water content in unsaturated porous media. The method directly relates digitally measured intensities to the water content of the porous medium and requires the classical image analysis steps, i.e., normalization, filtering, background subtraction, scaling, and calibration. The main advantages of this approach are that no separate calibration experiment is needed, because the calibration curve relating water content to reflected light intensity is established during the main monitoring phase of each experiment, and that no tracer or dye is injected into the flow tank. The procedure enables effective processing of a large number of photographs and thus produces 2D water content maps at high temporal resolution. A drainage/imbibition experiment in a 2D flow tank with inner dimensions of 40 cm × 14 cm × 6 cm (L × W × D) was carried out to validate the methodology. The accuracy of the proposed approach is assessed using a statistical framework to perform an error analysis and numerical simulations with a state-of-the-art computational code that solves the Richards equation. Comparison of the cumulative mass leaving and entering the flow tank and of the water content maps produced by the photographic measurement technique and the numerical simulations demonstrates the efficiency and high accuracy of the proposed method for investigating vadose zone flow processes. Finally, the photometric procedure has been developed expressly with its extension to heterogeneous media in mind. Other processes may be investigated through different laboratory experiments, which will serve as benchmarks for validating numerical codes.
Modeling of block copolymer dry etching for directed self-assembly lithography
NASA Astrophysics Data System (ADS)
Belete, Zelalem; Baer, Eberhard; Erdmann, Andreas
2018-03-01
Directed self-assembly (DSA) of block copolymers (BCP) is a promising alternative technology to overcome the limits of patterning for the semiconductor industry. DSA exploits the self-assembling property of BCPs for nano-scale manufacturing and to repair defects in patterns created during photolithography. After self-assembly of BCPs, to transfer the created pattern to the underlying substrate, selective etching of PMMA (poly(methyl methacrylate)) with respect to PS (polystyrene) is required. However, the etch process to transfer the self-assembled "fingerprint" DSA patterns to the underlying layer is still a challenge. Combined experimental and modelling studies increase understanding of plasma interaction with BCP materials during the etch process and support the development of selective processes that form well-defined patterns. In this paper, a simple model based on a generic surface model has been developed, and an investigation to understand the etch behavior of PS-b-PMMA for Ar and Ar/O2 plasma chemistries has been conducted. The implemented model is calibrated for etch rates and etch profiles with literature data to extract parameters and conduct simulations. In order to understand the effect of the plasma on the block copolymers, the etch model was first calibrated for polystyrene (PS) and poly(methyl methacrylate) (PMMA) homopolymers. After calibration of the model with the homopolymer etch rates, a full Monte-Carlo simulation was conducted, and simulation results are compared with the critical dimension (CD) and selectivity of the measured etch profiles. In addition, etch simulations for the lamellar pattern have been demonstrated using the implemented model.
NASA Astrophysics Data System (ADS)
Fer, I.; Kelly, R.; Andrews, T.; Dietze, M.; Richardson, A. D.
2016-12-01
Our ability to forecast ecosystems is limited by how well we parameterize ecosystem models. Direct measurements for all model parameters are not always possible, and inverse estimation of these parameters through Bayesian methods is computationally costly. A solution to the computational challenges of Bayesian calibration is to approximate the posterior probability surface using a Gaussian Process that emulates the complex process-based model. Here we report the integration of this method within an ecoinformatics toolbox, the Predictive Ecosystem Analyzer (PEcAn), and its application with two ecosystem models: SIPNET and ED2.1. SIPNET is a simple model, allowing application of MCMC methods both to the model itself and to its emulator. We used both approaches to assimilate flux (CO2 and latent heat), soil respiration, and soil carbon data from Bartlett Experimental Forest. This comparison showed that the emulator is reliable in terms of convergence to the posterior distribution. A 10000-iteration MCMC analysis with SIPNET itself required more than two orders of magnitude greater computation time than an MCMC run of the same length with its emulator. This difference would be greater for a more computationally demanding model. Validation of the emulator-calibrated SIPNET against both the assimilated data and out-of-sample data showed improved fit and reduced uncertainty around model predictions. We next applied the validated emulator method to ED2, whose complexity precludes standard Bayesian data assimilation. We used the ED2 emulator to assimilate demographic data from a network of inventory plots. For validation of the calibrated ED2, we compared the model to results from Empirical Succession Mapping (ESM), a novel synthesis of successional patterns in Forest Inventory and Analysis data. Our results revealed that while the pre-assimilation ED2 formulation cannot capture the emergent demographic patterns from the ESM analysis, constraining the model parameters controlling demographic processes increased the agreement considerably.
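To make the emulator idea concrete, here is a minimal sketch (Python, assuming scikit-learn and a hypothetical one-parameter toy model, not PEcAn's actual implementation): an expensive model's log-likelihood is evaluated at a few design points, a Gaussian Process is fitted to those values, and a cheap Metropolis MCMC then runs on the emulated surface instead of the model.

```python
# Minimal sketch of GP-emulated Bayesian calibration. `expensive_log_lik` is a
# hypothetical stand-in for a costly process-based model scored against data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

def expensive_log_lik(theta):
    return -0.5 * ((theta - 0.3) / 0.1) ** 2   # placeholder for the real model

# A handful of expensive model evaluations at design points.
X = np.linspace(0.0, 1.0, 12).reshape(-1, 1)
y = np.array([expensive_log_lik(x[0]) for x in X])
gp = GaussianProcessRegressor(ConstantKernel() * RBF(0.2), normalize_y=True).fit(X, y)

# Metropolis sampling on the emulated log-likelihood (uniform prior on [0, 1]).
theta, samples = 0.5, []
for _ in range(5000):
    prop = theta + 0.05 * rng.standard_normal()
    if 0.0 <= prop <= 1.0:
        cur, new = gp.predict([[theta]])[0], gp.predict([[prop]])[0]
        if np.log(rng.uniform()) < new - cur:
            theta = prop
    samples.append(theta)

# Each MCMC step costs a GP lookup rather than a full model run.
print("posterior mean ~", np.mean(samples[1000:]))
```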
Electronic transport in VO 2 —Experimentally calibrated Boltzmann transport modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kinaci, Alper; Kado, Motohisa; Rosenmann, Daniel
2015-12-28
Materials that undergo metal-insulator transitions (MITs) are under intense study because the transition is scientifically fascinating and technologically promising for various applications. Among these materials, VO2 has served as a prototype due to its favorable transition temperature. While the physical underpinnings of the transition have been heavily investigated experimentally and computationally, quantitative modeling of electronic transport in the two phases has yet to be undertaken. In this work, we establish a density-functional-theory (DFT)-based approach to model electronic transport properties in VO2 in the semiconducting and metallic regimes, focusing on band transport using the Boltzmann transport equations. We synthesized high quality VO2 films and measured the transport quantities across the transition, in order to calibrate the free parameters in the model. We find that the experimental calibration of the Hubbard correction term can efficiently and adequately model the metallic and semiconducting phases, allowing for further computational design of MIT materials for desirable transport properties.
GIFTS SM EDU Radiometric and Spectral Calibrations
NASA Technical Reports Server (NTRS)
Tian, J.; Reisse, R. A.; Johnson, D. G.; Gazarik, J. J.
2007-01-01
The Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) Sensor Module (SM) Engineering Demonstration Unit (EDU) is a high resolution spectral imager designed to measure infrared (IR) radiance using a Fourier transform spectrometer (FTS). The GIFTS instrument gathers measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The raw interferogram measurements are radiometrically and spectrally calibrated to produce radiance spectra, which are further processed to obtain atmospheric profiles via retrieval algorithms. This paper describes the processing algorithms involved in the calibration. The calibration procedures can be subdivided into three stages: pre-calibration, calibration, and post-calibration. Detailed derivations for each stage are presented in this paper.
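As a hedged illustration of the principle behind radiometric calibration of an FTS, the sketch below shows the generic two-point hot/cold blackbody scheme (not the GIFTS-specific algorithm, and ignoring the complex-valued spectra an actual interferometer produces):

```python
# Generic two-point radiometric calibration: raw spectra of two blackbody
# references at known temperatures fix a gain and offset per spectral channel.
import numpy as np

C1 = 1.191042e-5   # first radiation constant, mW / (m^2 sr cm^-4)
C2 = 1.4387752     # second radiation constant, cm K

def planck(wn, T):
    """Blackbody spectral radiance at wavenumber wn (cm^-1), temperature T (K)."""
    return C1 * wn**3 / np.expm1(C2 * wn / T)

def calibrate(raw_scene, raw_hot, raw_cold, wn, T_hot, T_cold):
    """Map raw scene spectra to radiance using hot/cold blackbody views."""
    L_hot, L_cold = planck(wn, T_hot), planck(wn, T_cold)
    gain = (L_hot - L_cold) / (raw_hot - raw_cold)
    return L_cold + gain * (raw_scene - raw_cold)
```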
A one dimensional moving bed biofilm reactor model for nitrification of municipal wastewaters.
Barry, Ugo; Choubert, Jean-Marc; Canler, Jean-Pierre; Pétrimaux, Olivier; Héduit, Alain; Lessard, Paul
2017-08-01
This work presents a one-dimensional model of a moving bed biofilm reactor (MBBR) process designed for the removal of nitrogen from raw wastewaters. A comprehensive experimental strategy was deployed at a semi-industrial pilot-scale plant fed with a municipal wastewater, operated at 10-12 °C and surface loading rates of 1-2 g filtered COD/(m²·d) and 0.4-0.55 g NH4-N/(m²·d). Data were collected on influent/effluent composition and on measurement of key variables or parameters (biofilm mass and maximal thickness, thickness of the limit liquid layer, maximal nitrification rate, oxygen mass transfer coefficient). Based on time-course variations in these variables, the MBBR model was calibrated at two time-scales and magnitudes of dynamic conditions, i.e., short-term (4 days) calibration under dynamic conditions and long-term (33 days) calibration, and for three types of carriers. A set of parameters suitable for the conditions was proposed, and the calibrated parameter set is able to simulate the time-course change of nitrogen forms in the effluent of the MBBR tanks under the tested operating conditions. Parameters linked to diffusion had a strong influence on how robustly the model is able to accurately reproduce time-course changes in effluent quality. The model was then used to optimize the operation of the MBBR layout. It was shown that the main optimization route is to limit the aeration supply without changing the overall performance of the process. Further work would investigate the influence of the hydrodynamic conditions on the thickness of the limit liquid layer and the "apparent" diffusion coefficient in the biofilm parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Xuesong
2012-12-17
Precipitation is an important input variable for hydrologic and ecological modeling and analysis. Next Generation Radar (NEXRAD) can provide precipitation products that cover most of the continental United States at a high spatial resolution of approximately 4 × 4 km². Two major issues concerning the applications of NEXRAD data are (1) the lack of a NEXRAD geo-processing and geo-referencing program and (2) bias correction of NEXRAD estimates. In this chapter, a geographic information system (GIS) based software that can automatically support processing of NEXRAD data for hydrologic and ecological models is presented. Some geostatistical approaches to calibrating NEXRAD data using rain gauge data are introduced, and two case studies on evaluating the accuracy of the NEXRAD Multisensor Precipitation Estimator (MPE) and calibrating MPE with rain-gauge data are presented. The first case study examines the performance of MPE in a mountainous region versus southern plains, and in the cold season versus the warm season, as well as the effect of sub-grid variability and temporal scale on NEXRAD performance. From the results of the first case study, the performance of MPE was found to be influenced by complex terrain, frozen precipitation, sub-grid variability, and temporal scale. Overall, the assessment of MPE indicates the importance of removing the bias of the MPE precipitation product before its application, especially in complex mountainous regions. The second case study examines the performance of three MPE calibration methods using rain gauge observations in the Little River Experimental Watershed in Georgia. The comparison results show that no one method performs better than the others in terms of all evaluation coefficients and for all time steps. For practical estimation of precipitation distribution, implementation of multiple methods to predict spatial precipitation is suggested.
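A minimal sketch of the simplest gauge-based adjustment, a single mean-field bias factor applied to the radar field (illustrative only; the geostatistical calibrators discussed in the chapter are more elaborate, and the numbers below are hypothetical):

```python
# Mean-field bias correction of radar precipitation using rain gauges.
import numpy as np

gauge = np.array([5.2, 3.1, 8.4, 0.9])   # gauge accumulations, mm (hypothetical)
radar = np.array([4.0, 2.5, 7.1, 1.2])   # collocated MPE pixel values, mm

bias = gauge.sum() / radar.sum()          # one multiplicative factor for the field
radar_corrected = radar * bias
print(f"field bias = {bias:.3f}")
```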
Camera calibration based on the back projection process
NASA Astrophysics Data System (ADS)
Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui
2015-12-01
Camera calibration plays a crucial role in 3D measurement tasks in machine vision. In typical calibration processes, camera parameters are iteratively optimized in the forward imaging process (FIP). However, the results can only guarantee the minimum of 2D projection errors on the image plane, not the minimum of 3D reconstruction errors. In this paper, we propose a universal method for camera calibration which uses the back projection process (BPP). In our method, a forward projection model is used to obtain initial intrinsic and extrinsic parameters with a popular planar checkerboard pattern. Then, the extracted image points are projected back into 3D space and compared with the ideal point coordinates. Finally, the estimate of the camera parameters is refined by a non-linear function minimization process. The proposed method obtains a more accurate calibration result, which is also more physically meaningful. Simulated and real data are presented to demonstrate the accuracy of the proposed method.
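The core of the BPP idea can be sketched as follows (a simplified single-view version without lens distortion, with hypothetical helper names, not the paper's full method): image points are cast back through the camera model onto the checkerboard plane (Z = 0 in the board frame), and the parameters are refined by minimizing the 3D residuals against the known corner coordinates.

```python
# Refine camera parameters by minimizing 3D back-projection error on the
# calibration plane. params = [fx, fy, cx, cy, rotvec (3), t (3)].
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def back_project(params, img_pts):
    fx, fy, cx, cy = params[:4]
    R = Rotation.from_rotvec(params[4:7]).as_matrix()
    t = params[7:10]
    rays = np.column_stack([(img_pts[:, 0] - cx) / fx,
                            (img_pts[:, 1] - cy) / fy,
                            np.ones(len(img_pts))])     # rays in camera frame
    C = -R.T @ t                                        # camera center, board frame
    dirs = rays @ R                                     # ray directions, board frame
    s = -C[2] / dirs[:, 2]                              # intersect the plane Z = 0
    return C + s[:, None] * dirs                        # 3D points on the board

def residuals(params, img_pts, board_pts):
    return (back_project(params, img_pts)[:, :2] - board_pts).ravel()

# usage: refined = least_squares(residuals, x0, args=(img_pts, board_pts)).x
```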
NASA Astrophysics Data System (ADS)
Yuren, Wang; Fang, Shao; Weiping, Sun; Xioujuan, Li; Suning, Tian; Hongyan, Li
1989-06-01
When a heavy-calibre gun is fired and a projectile is flying near the gun muzzle, the velocity of the projectile is very high and the firing process is accompanied by a strong muzzle flash, so photographing the attitude of the flying projectile at the gun muzzle is very difficult. The "YDS High-speed Photography System" developed by our group can take framing pictures of the attitude of the projectile and prevent the muzzle flash from confusing them. Since framing depends on sequential pulses of the laser and the width of each pulse is very narrow, the exposure time is very short and the photographs taken of the high-velocity flying body are very clear. This paper introduces the configuration and operating principle of the "YDS Laser High-speed Photography System" and the functions of the devices in this system. In addition, some experimental results are briefly introduced.
A Self-Calibrating Radar Sensor System for Measuring Vital Signs.
Huang, Ming-Chun; Liu, Jason J; Xu, Wenyao; Gu, Changzhan; Li, Changzhi; Sarrafzadeh, Majid
2016-04-01
Vital signs (i.e., heartbeat and respiration) are crucial physiological signals that are useful in numerous medical applications. The process of measuring these signals should be simple, reliable, and comfortable for patients. In this paper, a noncontact self-calibrating vital signs monitoring system based on the Doppler radar is presented. The system hardware and software were designed with a four-tiered layer structure. To enable accurate vital signs measurement, baseband signals in the radar sensor were modeled and a framework for signal demodulation was proposed. Specifically, a signal model identification method was formulated into a quadratically constrained l1 minimization problem and solved using the upper bound and linear matrix inequality (LMI) relaxations. The performance of the proposed system was comprehensively evaluated using three experimental sets, and the results indicated that this system can be used to effectively measure human vital signs.
Liu, Shu-Yu; Hu, Chang-Qin
2007-10-17
This study introduces the general method of quantitative nuclear magnetic resonance (qNMR) for the calibration of reference standards of macrolide antibiotics. Several qNMR experimental conditions were optimized, including the relaxation delay, an important parameter for quantification. Three kinds of macrolide antibiotics were used to validate the accuracy of the qNMR method by comparison with the results obtained by the high performance liquid chromatography (HPLC) method. The purities of five common reference standards of macrolide antibiotics were measured by the 1H qNMR method and the mass balance method, respectively, and the analysis results of the two methods were compared. qNMR is quick and simple to use. In new drug research and development, qNMR provides a new and reliable method for purity analysis of reference standards.
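The purity computation underlying 1H qNMR with an internal standard is compact enough to state directly; the sketch below shows the standard relative-method equation with illustrative numbers (a hypothetical erythromycin assay against a maleic acid standard, not data from the study):

```python
# Standard 1H qNMR purity equation (internal-standard relative method).
def qnmr_purity(I_s, N_s, M_s, m_s, I_ref, N_ref, M_ref, m_ref, P_ref):
    """Analyte purity from integrals I, proton counts N, molar masses M,
    weighed masses m, and the internal standard's certified purity P_ref."""
    return (I_s / I_ref) * (N_ref / N_s) * (M_s / M_ref) * (m_ref / m_s) * P_ref

# Hypothetical numbers, giving roughly 95% purity.
print(qnmr_purity(I_s=0.95, N_s=1, M_s=733.9, m_s=10.1e-3,
                  I_ref=2.00, N_ref=2, M_ref=116.07, m_ref=1.6e-3, P_ref=0.999))
```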
Bondi, Robert W; Igne, Benoît; Drennen, James K; Anderson, Carl A
2012-12-01
Near-infrared spectroscopy (NIRS) is a valuable tool in the pharmaceutical industry, presenting opportunities for online analyses to achieve real-time assessment of intermediates and finished dosage forms. The purpose of this work was to investigate the effect of experimental designs on the prediction performance of quantitative models based on NIRS, using a five-component formulation as a model system. The following experimental designs were evaluated: five-level, full factorial (5-L FF); three-level, full factorial (3-L FF); central composite; I-optimal; and D-optimal. The factors for all designs were acetaminophen content and the ratio of microcrystalline cellulose to lactose monohydrate. Other constituents included croscarmellose sodium and magnesium stearate (content remained constant). Partial least squares-based models were generated using data from individual experimental designs that related acetaminophen content to spectral data. The effect of each experimental design was evaluated by determining the statistical significance of the difference in bias and standard error of that model's prediction performance. The calibration model derived from the I-optimal design had prediction performance similar to that of the model derived from the 5-L FF design, despite containing 16 fewer design points. It also outperformed all other models estimated from designs with similar or fewer numbers of samples. This suggests that experimental-design selection for calibration-model development is critical, and optimum performance can be achieved with efficient experimental designs (i.e., optimal designs).
Lattice Calibration with Turn-By-Turn BPM Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Xiaobiao; Sebek, James
2012-07-02
Turn-by-turn beam position monitor (BPM) data from multiple BPMs are fitted with a tracking code to calibrate magnet strengths in a manner similar to the well-known LOCO code. Simulation shows that this turn-by-turn method can be a quick and efficient way to calibrate optics. The method is applicable to both linacs and ring accelerators. Experimental results for a section of the SPEAR3 ring are also shown.
Peña-Perez, Luis Manuel; Pedraza-Ortega, Jesus Carlos; Ramos-Arreguin, Juan Manuel; Arriaga, Saul Tovar; Fernandez, Marco Antonio Aceves; Becerra, Luis Omar; Hurtado, Efren Gorrostieta; Vargas-Soto, Jose Emilio
2013-10-24
The present work presents an improved method to align the measurement scale mark in the immersion hydrometer calibration system of CENAM, the National Metrology Institute (NMI) of Mexico. The proposed method uses a vision system to align the scale mark of the hydrometer to the surface of the liquid in which it is immersed by implementing image processing algorithms. This approach reduces the variability in the apparent mass determination during hydrostatic weighing in the calibration process, therefore decreasing the relative uncertainty of calibration.
Pollington, Anthony D.; Kozdon, Reinhard; Anovitz, Lawrence M.; ...
2015-12-01
The interpretation of silicon isotope data for quartz is hampered by the lack of experimentally determined fractionation factors between quartz and fluid. Further, there is a large spread in published oxygen isotope fractionation factors at low temperatures, primarily due to extrapolation from experimental calibrations at high temperature. We report the first measurements of silicon isotope ratios from experimentally precipitated quartz and estimate the equilibrium fractionation vs. dissolved silica using a novel in situ analysis technique applying secondary ion mass spectrometry to directly analyze experimental products. These experiments also yield a new value for oxygen isotope fractionation. Quartz overgrowths up to 235 μm thick were precipitated in silica–H2O–NaOH–NaCl fluids, at pH 12–13 and 250 °C. At this temperature, 1000lnα(30Si)Qtz–fluid = 0.55 ± 0.10‰ and 1000lnα(18O)Qtz–fluid = 10.62 ± 0.13‰, yielding the relations 1000lnα(30Si)Qtz–fluid = (0.15 ± 0.03) × 10^6/T^2 and 1000lnα(18O)Qtz–fluid = (2.91 ± 0.04) × 10^6/T^2 when extended to zero fractionation at infinite temperature. Values of δ30Si(Qtz) from diagenetic cement in sandstones from the basal Cambrian Mt. Simon Formation in central North America range from 0 to −5.4‰. Paired δ18O and δ30Si values from individual overgrowths preserve a record of Precambrian weathering and fluid transport. In conclusion, the application of the experimental quartz growth results to observations from natural sandstone samples suggests that precipitation of quartz at low temperatures in nature is dominated by kinetic, rather than equilibrium, processes.
Research on the calibration methods of the luminance parameter of radiation luminance meters
NASA Astrophysics Data System (ADS)
Cheng, Weihai; Huang, Biyong; Lin, Fangsheng; Li, Tiecheng; Yin, Dejin; Lai, Lei
2017-10-01
This paper introduces the standard diffuse-reflection white plate method and the integrating sphere standard luminance source method for calibrating the luminance parameter. The paper compares the calibration results of these two methods through theoretical analysis and experimental verification. After the same radiation luminance meter was calibrated with both methods, the data obtained verify that the test results of the two methods are both reliable. The results show that the displayed value using the standard white plate method has smaller errors and better reproducibility, whereas the standard luminance source method is more convenient and suitable for on-site calibration. Moreover, the standard luminance source method has a wider range and can test the linear performance of the instruments.
NASA Astrophysics Data System (ADS)
Gómez-Bombarelli, Rafael; Aguilera-Iparraguirre, Jorge; Hirzel, Timothy D.; Ha, Dong-Gwang; Einzinger, Markus; Wu, Tony; Baldo, Marc A.; Aspuru-Guzik, Alán.
2016-09-01
Discovering new OLED emitters requires many experiments to synthesize candidates and test their performance in devices. Large-scale computer simulation can greatly speed this search process, but the problem remains challenging enough that brute-force application of massive computing power is not enough to successfully identify novel structures. We report a successful High Throughput Virtual Screening study that leveraged a range of methods to optimize the search process. The generation of candidate structures was constrained to contain the combinatorial explosion. Simulations were tuned to the specific problem and calibrated with experimental results. Experimentalists and theorists actively collaborated such that experimental feedback was regularly utilized to update and shape the computational search. Supervised machine learning methods prioritized candidate structures prior to quantum chemistry simulation to avoid wasting compute on likely poor performers. With this combination of techniques, each multiplying the strength of the search, this effort managed to navigate an area of molecular space and identify hundreds of promising OLED candidate structures. An experimentally validated selection of this set shows emitters with external quantum efficiencies as high as 22%.
Microtomography imaging of an isolated plant fiber: a digital holographic approach.
Malek, Mokrane; Khelfa, Haithem; Picart, Pascal; Mounier, Denis; Poilâne, Christophe
2016-01-20
This paper describes a method of optical projection tomography for the 3D in situ characterization of micrometric plant fibers. The proposed approach is based on digital holographic microscopy, the holographic capability being convenient for compensating the runout of the fiber during rotation. The setup requires telecentric alignment to prevent changes in the optical magnification, and calibration results show very good experimental adjustment. Amplitude images are obtained from the set of recorded and digitally processed holograms. Refocusing of blurred images and correction of both runout and jitter are carried out to obtain appropriate amplitude images. The 3D data related to the plant fiber are computed from the set of images using dedicated numerical processing. Experimental results exhibit the internal and external shapes of the plant fiber. These results constitute the first attempt to obtain 3D data of a flax fiber, about 12 μm × 17 μm in apparent diameter, with a full-field optical tomography approach using light in the visible range.
NASA Technical Reports Server (NTRS)
Navard, Sharon E.
1989-01-01
In recent years there has been a push within NASA to use statistical techniques to improve the quality of production. Two areas where statistics are used are in establishing product and process quality control of flight hardware and in evaluating the uncertainty of calibration of instruments. The Flight Systems Quality Engineering branch is responsible for developing and assuring the quality of all flight hardware; the statistical process control methods employed are reviewed and evaluated. The Measurement Standards and Calibration Laboratory performs the calibration of all instruments used on-site at JSC as well as those used by all off-site contractors. These calibrations must be performed in such a way as to be traceable to national standards maintained by the National Institute of Standards and Technology, and they must meet a four-to-one ratio of the instrument specifications to calibrating standard uncertainty. In some instances this ratio is not met, and in these cases it is desirable to compute the exact uncertainty of the calibration and determine ways of reducing it. A particular example where this problem is encountered is with a machine which does automatic calibrations of force. The process of force calibration using the United Force Machine is described in detail. The sources of error are identified and quantified when possible. Suggestions for improvement are made.
Goicoechea, H C; Olivieri, A C
2001-07-01
A newly developed multivariate method involving net analyte preprocessing (NAP) was tested using central composite calibration designs of progressively decreasing size for the multivariate simultaneous spectrophotometric determination of three active components (phenylephrine, diphenhydramine and naphazoline) and one excipient (methylparaben) in nasal solutions. Its performance was evaluated and compared with that of partial least-squares (PLS-1). Minimisation of the calibration predicted residual error sum of squares (PRESS) as a function of a moving spectral window helped to select appropriate working spectral ranges for both methods. The comparison of NAP and PLS results was carried out using two tests: (1) the elliptical joint confidence region for the slope and intercept of a predicted versus actual concentrations plot for a large validation set of samples and (2) the D-optimality criterion concerning the information content of the calibration data matrix. Extensive simulations and experimental validation showed that, unlike PLS, the NAP method is able to furnish highly satisfactory results when the calibration set is reduced from a full four-component central composite to a fractional central composite, as expected from the modelling requirements of net analyte based methods.
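The moving-window idea, selecting the spectral range that minimizes cross-validated PRESS, can be sketched as follows (assuming scikit-learn and an ordinary PLS model as the inner regressor; the NAP method itself preprocesses the data differently, and X, y here are hypothetical):

```python
# Choose a working spectral window by minimizing leave-one-out PRESS.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut

def window_press(X, y, start, width, n_components=3):
    Xw = X[:, start:start + width]          # candidate spectral window
    press = 0.0
    for train, test in LeaveOneOut().split(Xw):
        pls = PLSRegression(n_components=n_components).fit(Xw[train], y[train])
        press += (pls.predict(Xw[test]).item() - y[test].item()) ** 2
    return press

def best_window(X, y, width):
    scores = [window_press(X, y, s, width) for s in range(X.shape[1] - width)]
    return int(np.argmin(scores))           # starting index of the best window
```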
A numerical identifiability test for state-space models--application to optimal experimental design.
Hidalgo, M E; Ayesa, E
2001-01-01
This paper describes a mathematical tool for identifiability analysis, easily applicable to high-order non-linear systems modelled in state-space and implementable in simulators with a time-discrete approach. This procedure also permits a rigorous analysis of the expected estimation errors (average and maximum) in calibration experiments. The methodology is based on the recursive numerical evaluation of the information matrix during the simulation of a calibration experiment and on the setting-up of a group of information parameters based on geometric interpretations of this matrix. As an example of the utility of the proposed test, the paper presents its application to an optimal experimental design for calibration of ASM Model No. 1, in order to estimate the maximum specific heterotrophic growth rate μH and the concentration of heterotrophic biomass XBH.
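A minimal numerical version of the idea accumulates finite-difference output sensitivities into a Fisher information matrix whose conditioning flags poorly identifiable parameter combinations (the exponential-growth model below is a hypothetical stand-in for a state-space simulator such as ASM1):

```python
# Numerical identifiability test via the Fisher information matrix (FIM).
import numpy as np

def simulate(theta, times):
    mu, X0 = theta                     # growth rate and initial biomass (illustrative)
    return X0 * np.exp(mu * times)     # model output at the sampling times

def fisher_information(theta, times, sigma=0.05, eps=1e-6):
    y0 = simulate(theta, times)
    S = np.empty((len(times), len(theta)))
    for j in range(len(theta)):        # finite-difference sensitivities dy/dtheta_j
        tp = np.array(theta, dtype=float)
        tp[j] += eps
        S[:, j] = (simulate(tp, times) - y0) / eps
    return S.T @ S / sigma**2          # FIM for iid Gaussian measurement noise

FIM = fisher_information([0.3, 1.0], np.linspace(0.0, 5.0, 20))
print("condition number:", np.linalg.cond(FIM))   # large => weak identifiability
```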
From Heavy-Ion Collisions to Quark Matter (2/3)
Lourenco, C.
2018-05-23
The art of experimental (high-energy heavy-ion) physics: 1) Many experimental issues are crucial to properly understand the measurements and derive a correct physics interpretation: acceptance and phase space windows; efficiencies (of track reconstruction, vertexing, track matching, trigger, etc.); resolutions (of mass, momenta, energies, etc.); backgrounds, feed-downs and "expected sources"; data selection; Monte Carlo adjustments, calibrations and smearing; luminosity and trigger conditions; evaluation of systematic uncertainties; and several others. 2) "New Physics" often appears as excesses or suppressions with respect to "normal baselines", which must be very carefully established, on the basis of "reference" physics processes and collision systems. If we misunderstand these issues we can miss an important discovery... or we can "discover" non-existent "new physics."
NASA Astrophysics Data System (ADS)
Zhang, Hua; Zeng, Luan
2017-11-01
Binocular stereoscopic vision can be used for close-range observation of space targets from space-based platforms. To address the problem that a traditional binocular vision system cannot work normally after being disturbed, an online calibration method for a binocular stereo measuring camera with a self-reference is proposed. The method uses an auxiliary optical imaging device to insert the image of a standard reference object into the edge of the main optical path, imaging it together with the target on the same focal plane; this is equivalent to placing a standard reference inside the binocular imaging optical system. When the position of the system or the imaging device parameters are disturbed, the image of the standard reference changes accordingly in the imaging plane, while the position of the standard reference object itself does not change. The camera's external parameters can then be re-calibrated from the visual relationship of the standard reference object. The experimental results show that the maximum mean square error for the same object can be reduced from the original 72.88 mm to 1.65 mm when the right camera is deflected by 0.4° and the left camera is rotated by 0.2° in pitch. This method can realize online calibration of a binocular stereoscopic vision measurement system, which effectively improves the anti-jamming ability of the system.
Omnidirectional Underwater Camera Design and Calibration
Bosch, Josep; Gracias, Nuno; Ridao, Pere; Ribas, David
2015-01-01
This paper presents the development of an underwater omnidirectional multi-camera system (OMS) based on a commercially available six-camera system, originally designed for land applications. A full calibration method is presented for the estimation of both the intrinsic and extrinsic parameters, which is able to cope with wide-angle lenses and non-overlapping cameras simultaneously. This method is valid for any OMS in both land and water applications. For underwater use, a customized housing is required, which often leads to strong image distortion due to refraction among the different media. This phenomenon makes the basic pinhole camera model invalid for underwater cameras, especially when using wide-angle lenses, and requires the explicit modeling of the individual optical rays. To address this problem, a ray tracing approach has been adopted to create a field-of-view (FOV) simulator for underwater cameras. The simulator allows for the testing of different housing geometries and optics for the cameras to ensure complete hemisphere coverage in underwater operation. This paper describes the design and testing of a compact custom housing for a commercial off-the-shelf OMS camera (Ladybug 3) and presents the first results of its use. A proposed three-stage calibration process allows for the estimation of all of the relevant camera parameters. Experimental results are presented, which illustrate the performance of the calibration method and validate the approach. PMID:25774707
Radiometric recalibration procedure for Landsat-5 Thematic Mapper data
Chander, G.; Micijevic, E.; Hayes, R.W.; Barsi, J.A.
2008-01-01
The Landsat-5 (L5) satellite was launched on March 1, 1984, with a design life of three years. Incredibly, the L5 Thematic Mapper (TM) has collected data for 23 years. Over this time, the detectors have aged, and the instrument's radiometric characteristics have changed since launch. The calibration procedures and parameters have also changed with time. Revised radiometric calibrations have improved the radiometric accuracy of recently processed data; however, users with data that were processed prior to the calibration update do not benefit from the revisions. A procedure has been developed to give users the ability to recalibrate their existing Level 1 (L1) products without having to purchase reprocessed data from the U.S. Geological Survey (USGS). The accuracy of the recalibration depends on knowledge of the prior calibration applied to the data. The "Work Order" file, included with standard National Landsat Archive Production System (NLAPS) data products, gives the parameters that define the applied calibration. These are the Internal Calibrator (IC) calibration parameters or, if there were problems with the IC calibration, the default prelaunch calibration. This paper details the recalibration procedure for data processed using the IC, for which users have the Work Order file.
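The generic recalibration flow is simple to express: use the Work Order parameters to undo the calibration originally applied to the Level 1 product, recovering the instrument response, then apply the revised coefficients. The sketch below illustrates the idea only; the gains and biases are placeholders, not actual Landsat-5 TM coefficients.

```python
# Generic recalibration of a Level 1 product given old and new gain/bias.
import numpy as np

def recalibrate(radiance_l1, gain_old, bias_old, gain_new, bias_new):
    raw_dn = gain_old * radiance_l1 + bias_old   # back to detector response
    return (raw_dn - bias_new) / gain_new        # radiance under the revised cal

band = np.array([61.0, 74.5, 93.2])              # L1 radiances (hypothetical)
print(recalibrate(band, gain_old=1.18, bias_old=-3.4,
                  gain_new=1.21, bias_new=-3.1))
```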
Monti, S.; Cooper, G. F.
1998-01-01
We present a new Bayesian classifier for computer-aided diagnosis. The new classifier builds upon the naive-Bayes classifier, and models the dependencies among patient findings in an attempt to improve its performance, both in terms of classification accuracy and in terms of calibration of the estimated probabilities. This work finds motivation in the argument that highly calibrated probabilities are necessary for the clinician to be able to rely on the model's recommendations. Experimental results are presented, supporting the conclusion that modeling the dependencies among findings improves calibration. PMID:9929288
Calibration Designs for Non-Monolithic Wind Tunnel Force Balances
NASA Technical Reports Server (NTRS)
Johnson, Thomas H.; Parker, Peter A.; Landman, Drew
2010-01-01
This research paper investigates current experimental designs and regression models for calibrating internal wind tunnel force balances of non-monolithic design. Such calibration methods are necessary for this class of balance because it has an electrical response that depends upon the sign of the applied forces and moments. This dependency gives rise to discontinuities in the response surfaces that are not easily modeled using traditional response surface methodologies. An analysis shows that currently recommended calibration models lead to correlated response-model terms. Alternative modeling methods are explored which feature orthogonal or near-orthogonal terms.
Coluccelli, Nicola
2010-08-01
Modeling a real laser diode stack based on Zemax ray tracing software that operates in a nonsequential mode is reported. The implementation of the model is presented together with the geometric and optical parameters to be adjusted to calibrate the model and to match the simulated intensity irradiance profiles with the experimental profiles. The calibration of the model is based on a near-field and a far-field measurement. The validation of the model has been accomplished by comparing the simulated and experimental transverse irradiance profiles at different positions along the caustic formed by a lens. Spot sizes and waist location are predicted with a maximum error below 6%.
A Nonlinear Calibration Algorithm Based on Harmonic Decomposition for Two-Axis Fluxgate Sensors
Liu, Shibin
2018-01-01
Nonlinearity is a prominent limitation to the calibration performance of two-axis fluxgate sensors. In this paper, a novel nonlinear calibration algorithm taking into account the nonlinearity of the errors is proposed. In order to establish the nonlinear calibration model, the combined effect of all time-invariant errors is analyzed in detail, and then a harmonic decomposition method is utilized to estimate the compensation coefficients. The proposed nonlinear calibration algorithm is validated and compared with a classical calibration algorithm by experiments. The experimental results show that, after the nonlinear calibration, the maximum deviation of the magnetic field magnitude is decreased from 1302 nT to 30 nT, which is smaller than the 81 nT obtained after the classical calibration. Furthermore, for the two-axis fluxgate sensor used as a magnetic compass, the maximum heading error is corrected from 1.86° to 0.07°, approximately 11% of the 0.62° remaining after the classical calibration. The results suggest an effective way to improve the calibration performance of two-axis fluxgate sensors. PMID:29789448
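For context, the classical baseline against which such algorithms are compared can be sketched as an ellipse-style correction: rotating the sensor in a uniform horizontal field, fitting offsets, scale factors, and a non-orthogonality angle so that corrected outputs lie on a circle, and taking the heading from atan2. This is a generic illustration under a normalized-field assumption, not the paper's harmonic-decomposition estimator.

```python
# Classical two-axis fluxgate calibration: offsets, scales, non-orthogonality.
import numpy as np
from scipy.optimize import least_squares

def correct(raw, ox, oy, sx, sy, psi):
    """Invert offsets o, scale factors s, and non-orthogonality angle psi."""
    x = (raw[:, 0] - ox) / sx
    y = ((raw[:, 1] - oy) / sy - np.sin(psi) * x) / np.cos(psi)
    return np.column_stack([x, y])

def fit_params(raw):
    # Corrected points should lie on the unit circle (field magnitude normalized).
    res = lambda p: np.sum(correct(raw, *p) ** 2, axis=1) - 1.0
    return least_squares(res, x0=[0.0, 0.0, 1.0, 1.0, 0.0]).x

def heading_deg(raw, params):
    xy = correct(raw, *params)
    return np.degrees(np.arctan2(xy[:, 1], xy[:, 0]))
```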
Calibration Method to Eliminate Zeroth Order Effect in Lateral Shearing Interferometry
NASA Astrophysics Data System (ADS)
Fang, Chao; Xiang, Yang; Qi, Keqi; Chen, Dawei
2018-04-01
In this paper, a calibration method is proposed which eliminates the zeroth order effect in lateral shearing interferometry. An analytical expression of the calibration error function is deduced, and the relationship between the phase-restoration error and calibration error is established. The analytical results show that the phase-restoration error introduced by the calibration error is proportional to the phase shifting error and zeroth order effect. The calibration method is verified using simulations and experiments. The simulation results show that the phase-restoration error is approximately proportional to the phase shift error and zeroth order effect, when the phase shifting error is less than 2° and the zeroth order effect is less than 0.2. The experimental result shows that compared with the conventional method with 9-frame interferograms, the calibration method with 5-frame interferograms achieves nearly the same restoration accuracy.
Finding trap stiffness of optical tweezers using digital filters.
Almendarez-Rangel, Pedro; Morales-Cruzado, Beatriz; Sarmiento-Gómez, Erick; Pérez-Gutiérrez, Francisco G
2018-02-01
Obtaining the trap stiffness and calibrating the position detection system are the basis of a force measurement using optical tweezers. Both calibration quantities can be calculated using several experimental methods available in the literature. In most cases, stiffness determination and detection system calibration are performed separately, often requiring procedures in very different conditions, and thus the confidence of calibration methods is not assured due to possible changes in the environment. In this work, a new method to simultaneously obtain both the detection system calibration and the trap stiffness is presented. The method is based on the calculation of the power spectral density of positions through digital filters to obtain the harmonic contributions of the position signal. This method has the advantage of calculating both the trap stiffness and the photodetector calibration factor from the same dataset in situ. It also provides a direct method to avoid unwanted frequencies that could greatly affect the calibration procedure, such as electrical noise.
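For comparison with the digital-filter approach above, the conventional power-spectrum route fits a Lorentzian, S(f) = D / (π² (fc² + f²)), to the position PSD of the trapped bead; the corner frequency fc then gives the stiffness k = 2π·γ·fc. The sketch below assumes SciPy, a known drag coefficient γ, and a separately determined detector factor β (volts per meter), which is exactly the separate step the paper's method avoids.

```python
# Conventional trap-stiffness calibration from the position power spectrum.
import numpy as np
from scipy.optimize import curve_fit
from scipy.signal import welch

def lorentzian(f, D, fc):
    return D / (np.pi**2 * (fc**2 + f**2))

def trap_stiffness(x_volts, fs, beta, gamma):
    """x_volts: detector signal; beta: V/m conversion; gamma: drag (kg/s)."""
    f, psd = welch(x_volts / beta, fs=fs, nperseg=4096)  # position PSD, m^2/Hz
    keep = (f > 10.0) & (f < fs / 4.0)                   # skip drift and aliasing
    (D, fc), _ = curve_fit(lorentzian, f[keep], psd[keep], p0=[1e-12, 500.0])
    return 2.0 * np.pi * gamma * fc                      # stiffness, N/m
```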
NASA Technical Reports Server (NTRS)
Everhart, Joel L.
1996-01-01
Orifice-to-orifice inconsistencies in data acquired with an electronically-scanned pressure system at the beginning of a wind tunnel experiment forced modifications to the standard instrument calibration procedures. These modifications included a large increase in the number of calibration points, which allowed a critical examination of the calibration curve-fit process and a subsequent post-test reduction of the pressure data. Evaluation of these data has resulted in an improved functional representation of the pressure-voltage signature for electronically-scanned pressure sensors, which can reduce the errors due to calibration curve fit to under 0.10 percent of reading, compared to the manufacturer-specified 0.10 percent of full scale. Application of the improved calibration function allows a more rational selection of the calibration set-point pressures: these pressures should be adjusted to achieve a voltage output which matches the physical shape of the pressure-voltage signature of the sensor. This process is conducted in lieu of the more traditional approach where a calibration pressure is specified and the resulting sensor voltage is recorded. The fifteen calibrations acquired over the two-week duration of the wind tunnel test were further used to perform a preliminary statistical assessment of the variation in the calibration process. The results allowed the estimation of the bias uncertainty for a single instrument calibration, and they form the precursor for more extensive and more controlled studies in the laboratory.
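A small sketch of the curve-fit evaluation described here: fit pressure as a polynomial in sensor voltage and judge the fit by percent of reading rather than percent of full scale (the data values below are hypothetical):

```python
# Calibration curve fit for an electronically-scanned pressure sensor,
# with residuals expressed as percent of reading.
import numpy as np

volts = np.array([-4.8, -3.1, -1.2, 0.4, 1.9, 3.3, 4.7])      # sensor output, V
press = np.array([2.0, 8.5, 16.0, 22.4, 28.6, 34.1, 39.8])    # applied pressure, kPa

coeffs = np.polyfit(volts, press, deg=4)            # calibration curve fit
fit = np.polyval(coeffs, volts)
pct_of_reading = 100.0 * (fit - press) / press      # error metric favored above
print("max error, % of reading:", np.max(np.abs(pct_of_reading)))
```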
NASA Astrophysics Data System (ADS)
Tan, Xihe; Mester, Achim; von Hebel, Christian; van der Kruk, Jan; Zimmermann, Egon; Vereecken, Harry; van Waasen, Stefan
2017-04-01
Electromagnetic induction (EMI) systems offer great potential to obtain highly resolved layered electrical conductivity models of the shallow subsurface. State-of-the-art inversion procedures require quantitative calibration of EMI data, especially for short-offset EMI systems, where significant data shifts are often observed. These shifts are caused by external influences such as the presence of the operator, zero-leveling procedures, the field setup used to move the EMI system, and/or nearby cables. Calibrations can be performed using collocated electrical resistivity measurements or soil samples, but both methods require considerable time in the field. To make the calibration fast and concise, we introduce a novel on-site calibration method using a series of apparent electrical conductivity (ECa) values acquired at multiple elevations with a multi-configuration EMI system. No additional instrument or prior knowledge of the subsurface is needed to acquire quantitative ECa data. Using this calibration method, we correct each coil configuration, i.e., each combination of transmitter-receiver coil separation and horizontal or vertical coplanar (HCP or VCP) coil orientation, with a unique set of calibration parameters. A multi-layer soil structure at the corresponding measurement location is inverted together with the calibration parameters, using full-solution Maxwell equations for the forward modelling within the shuffled complex evolution (SCE) algorithm to find the optimum solution within a user-defined parameter space. Synthetic data verified the feasibility of calibrating HCP and VCP measurements of a custom-made six-coil EMI system with coil offsets between 0.35 m and 1.8 m for quantitative data inversions. As a next step, we applied the calibration approach to experimental data acquired with this EMI system on a bare-soil test field (Selhausen, Germany). The obtained calibration parameters were applied to measurements over a 30 m transect line that covers a range of conductivities between 5 and 40 mS/m. Inverted calibrated EMI data of the transect line showed electrical conductivity distributions and layer interfaces of the subsurface very similar to reference data obtained from vertical electrical sounding (VES) measurements. These results show that a combined calibration and inversion of multi-configuration EMI data is possible when measurements at different elevations are included, which speeds up the measurement process for obtaining quantitative EMI data, since labor-intensive electrical resistivity measurements or soil coring are no longer necessary.
Bayesian Calibration of Thermodynamic Databases and the Role of Kinetics
NASA Astrophysics Data System (ADS)
Wolf, A. S.; Ghiorso, M. S.
2017-12-01
Self-consistent thermodynamic databases of geologically relevant materials (e.g., Berman, 1988; Holland and Powell, 1998; Stixrude and Lithgow-Bertelloni, 2011) are crucial for simulating geological processes as well as interpreting rock samples from the field. These databases form the backbone of our understanding of how fluids and rocks interact at extreme planetary conditions. Considerable work is involved in their construction from experimental phase reaction data, as they must self-consistently describe the free energy surfaces (including relative offsets) of potentially hundreds of interacting phases. Standard database calibration methods typically utilize either linear programming or least squares regression. While both produce a viable model, they suffer from strong limitations on the training data (which must be filtered by hand), along with general ignorance of many of the sources of experimental uncertainty. We develop a new method for calibrating high P-T thermodynamic databases for use in geologic applications. The model is designed to handle pure solid endmember and free fluid phases and can be extended to include mixed solid solutions and melt phases. This new calibration effort utilizes Bayesian techniques to obtain optimal parameter values together with a full family of statistically acceptable models, summarized by the posterior. Unlike previous efforts, the Bayesian Logistic Uncertain Reaction (BLUR) model directly accounts for both measurement uncertainties and disequilibrium effects by employing a kinetic reaction model whose parameters are empirically determined from the experiments themselves. Thus, along with the equilibrium free energy surfaces, we also provide rough estimates of the activation energies, entropies, and volumes for each reaction. As a first application, we demonstrate this new method on the three-phase aluminosilicate system, illustrating how it can produce superior estimates of the phase boundaries by incorporating constraints from all available data, while automatically handling variable data quality due to a combination of measurement errors and kinetic effects.
Halo current diagnostic system of experimental advanced superconducting tokamak
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, D. L.; Shen, B.; Sun, Y.
2015-10-15
The design, calibration, and installation of disruption halo current sensors for the Experimental Advanced Superconducting Tokamak are described in this article. All the sensors are Rogowski coils that surround conducting structures, and all the signals are analog integrated. Coils with two different cross-section sizes have been fabricated, and their mutual inductances are calibrated. Sensors have been installed to measure halo currents in several different parts of both the upper divertor (tungsten) and lower divertor (graphite) at several toroidal locations. Initial measurements from disruptions show that the halo current diagnostics are working well.
NASA Astrophysics Data System (ADS)
Jackson-Blake, Leah; Helliwell, Rachel
2015-04-01
Process-based catchment water quality models are increasingly used as tools to inform land management. However, for such models to be reliable they need to be well calibrated and shown to reproduce key catchment processes. Calibration can be challenging for process-based models, which tend to be complex and highly parameterised. Calibrating a large number of parameters generally requires a large amount of monitoring data, spanning all hydrochemical conditions. However, regulatory agencies and research organisations generally only sample at a fortnightly or monthly frequency, even in well-studied catchments, often missing peak flow events. The primary aim of this study was therefore to investigate how the quality and uncertainty of model simulations produced by a process-based, semi-distributed catchment model, INCA-P (the INtegrated CAtchment model of Phosphorus dynamics), were improved by calibration to higher frequency water chemistry data. Two model calibrations were carried out for a small rural Scottish catchment: one using 18 months of daily total dissolved phosphorus (TDP) concentration data, another using a fortnightly dataset derived from the daily data. To aid comparability, calibrations were carried out automatically using the Markov Chain Monte Carlo - DiffeRential Evolution Adaptive Metropolis (MCMC-DREAM) algorithm. Calibration to daily data resulted in improved simulation of peak TDP concentrations and improved model performance statistics. Parameter-related uncertainty in simulated TDP was large when fortnightly data were used for calibration, with a 95% credible interval of 26 μg/l. This uncertainty is comparable in size to the difference between Water Framework Directive (WFD) chemical status classes, and would therefore make it difficult to use this calibration to predict shifts in WFD status. The 95% credible interval narrowed markedly with the higher frequency monitoring data, to 6 μg/l. The number of parameters that could be reliably auto-calibrated was lower for the fortnightly data, with a physically unrealistic TDP simulation being produced when too many parameters were allowed to vary during model calibration. Parameters should not therefore be varied spatially for models such as INCA-P unless there is solid evidence that this is appropriate, or there is a real need to do so for the model to fulfil its purpose. This study highlights the potential pitfalls of using low-frequency time series of observed water quality to calibrate complex process-based models. For reliable model calibrations to be produced, monitoring programmes need to be designed which capture system variability, in particular nutrient dynamics during high flow events. In addition, there is a need for simpler models, so that all model parameters can be included in auto-calibration and uncertainty analysis, and to reduce the data needs during calibration.
Estimation of k-ε parameters using surrogate models and jet-in-crossflow data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lefantzi, Sophia; Ray, Jaideep; Arunajatesan, Srinivasan
2014-11-01
We demonstrate a Bayesian method that can be used to calibrate computationally expensive 3D RANS (Reynolds Averaged Navier Stokes) models with complex response surfaces. Such calibrations, conditioned on experimental data, can yield turbulence model parameters as probability density functions (PDF), concisely capturing the uncertainty in the parameter estimates. Methods such as Markov chain Monte Carlo (MCMC) estimate the PDF by sampling, with each sample requiring a run of the RANS model. Consequently, a quick-running surrogate is used in place of the RANS simulator. The surrogate can be very difficult to design if the model's response, i.e., the dependence of the calibration variable (the observable) on the parameter being estimated, is complex. We show how the training data used to construct the surrogate can be employed to isolate a promising and physically realistic part of the parameter space, within which the response is well-behaved and easily modeled. We design a classifier, based on treed linear models, to model the "well-behaved region". This classifier serves as a prior in a Bayesian calibration study aimed at estimating three k-ε parameters (C_μ, C_ε2, C_ε1) from experimental data of a transonic jet-in-crossflow interaction. The robustness of the calibration is investigated by checking its predictions of variables not included in the calibration data. We also check the limit of applicability of the calibration by testing at off-calibration flow regimes. We find that calibration yields turbulence model parameters which predict the flowfield far better than when the nominal values of the parameters are used. Substantial improvements are still obtained when we use the calibrated RANS model to predict jet-in-crossflow at Mach numbers and jet strengths quite different from those used to generate the experimental (calibration) data. Thus the primary reason for poor predictive skill of RANS, when using nominal values of the turbulence model parameters, was parametric uncertainty, which was rectified by calibration. Post-calibration, the dominant contribution to model inaccuracies is due to the structural errors in RANS.
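The mechanics of using a classifier as a prior can be sketched compactly. In the sketch below, a decision tree stands in for the paper's treed-linear-model classifier, a one-parameter polynomial stands in for the RANS surrogate, and the "well-behaved region" is invented; none of it reproduces the paper's actual models.

```python
# Hedged sketch of a classifier-gated Bayesian calibration: a tree classifier
# trained on surrogate training points (labeled well/ill-behaved) acts as a
# 0/1 prior inside a Metropolis sampler that evaluates a cheap surrogate
# instead of the RANS code. All quantities are illustrative.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)

# training design over one illustrative turbulence parameter
x_train = rng.uniform(0.02, 0.20, 200)
well_behaved = (x_train > 0.05) & (x_train < 0.15)   # hypothetical region
clf = DecisionTreeClassifier(max_depth=3).fit(x_train[:, None], well_behaved)

surrogate = lambda x: 2.0 + 15.0 * (x - 0.09) ** 2   # cheap response model
obs, sigma = 2.01, 0.05                              # "experimental" datum

def log_post(x):
    if not clf.predict([[x]])[0]:                    # classifier is the prior
        return -np.inf
    return -0.5 * ((surrogate(x) - obs) / sigma) ** 2

cur, cur_lp, chain = 0.09, log_post(0.09), []
for _ in range(20000):
    prop = cur + rng.normal(0, 0.01)
    lp = log_post(prop)
    if np.log(rng.random()) < lp - cur_lp:
        cur, cur_lp = prop, lp
    chain.append(cur)
print("posterior mean, sd:", np.mean(chain), np.std(chain))
```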
Leistra, Minze; Wolters, André; van den Berg, Frederik
2008-06-01
Volatilisation of pesticides from crop canopies can be an important emission pathway. In addition to pesticide properties, competing processes in the canopy and environmental conditions play a part. A computation model is being developed to simulate the processes, but only some of the input data can be obtained directly from the literature. Three well-defined experiments on the volatilisation of radiolabelled parathion-methyl (as example compound) from plants in a wind tunnel system were simulated with the computation model. Missing parameter values were estimated by calibration against the experimental results. The resulting thickness of the air boundary layer, rate of plant penetration and rate of phototransformation were compared with a diversity of literature data. The sequence of importance of the canopy processes was: volatilisation > plant penetration > phototransformation. Computer simulation of wind tunnel experiments, with radiolabelled pesticide sprayed on plants, yields values for the rate coefficients of processes at the plant surface. As some input data for simulations are not required in the framework of registration procedures, attempts to estimate missing parameter values on the basis of divergent experimental results have to be continued. Copyright (c) 2008 Society of Chemical Industry.
Li, Jing; Mahmoodi, Alireza; Joseph, Dileepan
2015-10-16
An important class of complementary metal-oxide-semiconductor (CMOS) image sensors are those where pixel responses are monotonic nonlinear functions of light stimuli. This class includes various logarithmic architectures, which are easily capable of wide dynamic range imaging, at video rates, but which are vulnerable to image quality issues. To minimize fixed pattern noise (FPN) and maximize photometric accuracy, pixel responses must be calibrated and corrected due to mismatch and process variation during fabrication. Unlike literature approaches, which employ circuit-based models of varying complexity, this paper introduces a novel approach based on low-degree polynomials. Although each pixel may have a highly nonlinear response, an approximately linear FPN calibration is possible by exploiting the monotonic nature of imaging. Moreover, FPN correction requires only arithmetic, and an optimal fixed-point implementation is readily derived, subject to a user-specified number of bits per pixel. Using a monotonic spline, involving cubic polynomials, photometric calibration is also possible without a circuit-based model, and fixed-point photometric correction requires only a look-up table. The approach is experimentally validated with a logarithmic CMOS image sensor and is compared to a leading approach from the literature. The novel approach proves effective and efficient.
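The "approximately linear despite nonlinear pixels" idea can be sketched: because all pixels respond monotonically to the same stimulus, each pixel can be regressed linearly against a reference response (here the array median) over a stack of flat-field frames, and the correction then needs only arithmetic. The simulated logarithmic pixels and mismatch model below are assumptions for illustration.

```python
# Hedged sketch of degree-1 polynomial FPN calibration: regress each pixel
# against the array-median response across flat-field frames, then invert
# the per-pixel line. The log-response pixel model is synthetic.
import numpy as np

rng = np.random.default_rng(7)
n_pix, n_frames = 1000, 16
lum = np.logspace(0, 4, n_frames)                    # flat-field stimuli

# hypothetical logarithmic pixels with gain/offset mismatch (the FPN)
gain = 1.0 + 0.05 * rng.normal(size=n_pix)
offset = 0.2 * rng.normal(size=n_pix)
resp = gain[:, None] * np.log(lum)[None, :] + offset[:, None]

ref = np.median(resp, axis=0)                        # reference response

# per-pixel linear fit: resp ~= a * ref + b
a = np.empty(n_pix); b = np.empty(n_pix)
for i in range(n_pix):
    a[i], b[i] = np.polyfit(ref, resp[i], 1)

corrected = (resp - b[:, None]) / a[:, None]         # arithmetic only
print("pixel-to-pixel spread before/after:",
      resp.std(axis=0).mean(), corrected.std(axis=0).mean())
```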
Research on self-calibration biaxial autocollimator based on ZYNQ
NASA Astrophysics Data System (ADS)
Guo, Pan; Liu, Bingguo; Liu, Guodong; Zhong, Yao; Lu, Binghui
2018-01-01
Existing autocollimators are mainly based on computers or internet-connected electronic devices; their precision, measurement range, and resolution are limited, and external displays are needed to show images in real time. Moreover, no autocollimator on the market offers real-time calibration. In this paper, we propose a biaxial autocollimator based on the ZYNQ embedded platform to solve these problems. Firstly, the traditional optical system is improved and an optical path is added for real-time calibration. Then, in order to improve measurement speed, an embedded platform based on ZYNQ that combines the Linux operating system with the autocollimator is designed. In this part, image acquisition, image processing, image display, and a Qt-based man-machine interface are implemented. Finally, the system realizes two-dimensional small-angle measurement. Experimental results showed that the proposed method can improve the angle measurement accuracy. At close range (1.5 m), the standard deviation is 0.15" in the horizontal direction of the image and 0.24" in the vertical direction; at long range (10 m), the repeatability of measurement is improved by 0.12" in the horizontal direction and 0.3" in the vertical direction.
Using an Automated 3D-tracking System to Record Individual and Shoals of Adult Zebrafish
Maaswinkel, Hans; Zhu, Liqun; Weng, Wei
2013-01-01
Like many aquatic animals, zebrafish (Danio rerio) moves in a 3D space. It is thus preferable to use a 3D recording system to study its behavior. The presented automatic video tracking system accomplishes this by using a mirror system and a calibration procedure that corrects for the considerable error introduced by the transition of light from water to air. With this system it is possible to record both single and groups of adult zebrafish. Before use, the system has to be calibrated. The system consists of three modules: Recording, Path Reconstruction, and Data Processing. The step-by-step protocols for calibration and using the three modules are presented. Depending on the experimental setup, the system can be used for testing neophobia, white aversion, social cohesion, motor impairments, novel object exploration etc. It is especially promising as a first-step tool to study the effects of drugs or mutations on basic behavioral patterns. The system provides information about vertical and horizontal distribution of the zebrafish, about the xyz-components of kinematic parameters (such as locomotion, velocity, acceleration, and turning angle) and it provides the data necessary to calculate parameters for social cohesions when testing shoals. PMID:24336189
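The water-to-air correction that the calibration must handle follows from Snell's law at a flat interface. A minimal sketch, with an invented camera geometry and a flat horizontal water surface assumed at z = 0, is shown below; the published procedure calibrates this correction empirically rather than computing it from first principles.

```python
# Hedged sketch of refraction correction: refract a camera ray at a flat
# air-water interface (surface normal = +z) and intersect the refracted ray
# with the horizontal plane at the fish's depth. Geometry is illustrative.
import numpy as np

def refract(d, n_air=1.0, n_water=1.33):
    """Refract unit direction d (pointing down, z < 0) at the z = 0 surface."""
    n = np.array([0.0, 0.0, 1.0])
    cos_i = -np.dot(d, n)
    r = n_air / n_water
    cos_t = np.sqrt(1.0 - r**2 * (1.0 - cos_i**2))   # Snell's law
    return r * d + (r * cos_i - cos_t) * n

cam = np.array([0.0, 0.0, 0.5])            # camera 0.5 m above the water
surface_hit = np.array([0.10, 0.05, 0.0])  # where the ray meets the surface
d = surface_hit - cam
d /= np.linalg.norm(d)
t = refract(d)
depth = -0.20                              # known plane of the fish (m)
point = surface_hit + t * (depth - surface_hit[2]) / t[2]
print("true xyz:", point)
```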
MAGIC with formaldehyde applied to dosimetry of HDR brachytherapy source
NASA Astrophysics Data System (ADS)
Marques, T.; Fernandes, J.; Barbi, G.; Nicolucci, P.; Baffa, O.
2009-05-01
The use of polymer gel dosimeters in brachytherapy can allow the determination of three-dimensional dose distributions in large volumes and with high spatial resolution if an adequate calibration process is performed. One of the major issues in these experiments is the polymer gel response dependence on dose rate when high dose rate sources are used and the doses in the vicinity of the sources are to be determined. In this study, the response of a modified MAGIC polymer gel with formaldehyde around an Iridium-192 HDR brachytherapy source is presented. Experimental results obtained with this polymer gel were compared with ionization chamber measurements and with Monte Carlo simulation with PENELOPE. A maximum difference of 3.10% was found between gel dose measurements and Monte Carlo simulation at a radial distance of 18 mm from the source. The results obtained show that the gel's response is strongly influenced by dose rate and that a different calibration should be used for the vicinity of the source and for regions of lower dose rates. The results obtained in this study show that, provided the proper calibration is performed, MAGIC with formaldehyde can be successfully used to accurately determine dose distributions from high-dose-rate brachytherapy sources.
Temperature uniformity in the CERN CLOUD chamber
NASA Astrophysics Data System (ADS)
Dias, António; Ehrhart, Sebastian; Vogel, Alexander; Williamson, Christina; Almeida, João; Kirkby, Jasper; Mathot, Serge; Mumford, Samuel; Onnela, Antti
2017-12-01
The CLOUD (Cosmics Leaving OUtdoor Droplets) experiment at CERN (European Council for Nuclear Research) investigates the nucleation and growth of aerosol particles under atmospheric conditions and their activation into cloud droplets. A key feature of the CLOUD experiment is precise control of the experimental parameters. Temperature uniformity and stability in the chamber are important since many of the processes under study are sensitive to temperature and also to contaminants that can be released from the stainless steel walls by upward temperature fluctuations. The 26 m3 CLOUD chamber is equipped with several arrays ("strings") of high-precision, fast-response thermometers to measure the temperature of the enclosed air. Here we present a study of the air temperature uniformity inside the CLOUD chamber under various experimental conditions. Measurements were performed under calibration conditions and run conditions, which are distinguished by the flow rate of fresh air and trace gases entering the chamber at 20 and up to 210 L min-1, respectively. During steady-state calibration runs between -70 and +20 °C, the air temperature uniformity is better than ±0.06 °C in the radial direction and ±0.1 °C in the vertical direction. Larger non-uniformities are present during experimental runs, depending on the temperature control of the make-up air and trace gases (since some trace gases require elevated temperatures until injection into the chamber). The temperature stability is ±0.04 °C over periods of several hours during either calibration or steady-state run conditions. During rapid adiabatic expansions to activate cloud droplets and ice particles, the chamber walls are up to 10 °C warmer than the enclosed air. This results in temperature differences of ±1.5 °C in the vertical direction and ±1 °C in the horizontal direction, while the air returns to its equilibrium temperature with a time constant of about 200 s.
NASA Technical Reports Server (NTRS)
Delaney, J. S.; Sutton, S. R.; Newville, M.; Jones, J. H.; Hanson, B.; Dyar, M. D.; Schreiber, H.
2000-01-01
Oxidation state microanalyses for V in glass have been made by calibrating XANES spectral features with optical spectroscopic measurements. The oxidation state change with fugacity of O2 will strongly influence partitioning results.
NASA Astrophysics Data System (ADS)
Ostrikov, V. N.; Plakhotnikov, O. V.
2014-12-01
Drawing on extensive experimental material, we examine whether the initial data of an airborne hyperspectral survey can be recalculated into spectral radiance factors (SRF). The external calibration errors are estimated for various observation conditions and different data-receiving instruments.
MCMEG: Simulations of both PDD and TPR for 6 MV LINAC photon beam using different MC codes
NASA Astrophysics Data System (ADS)
Fonseca, T. C. F.; Mendes, B. M.; Lacerda, M. A. S.; Silva, L. A. C.; Paixão, L.; Bastos, F. M.; Ramirez, J. V.; Junior, J. P. R.
2017-11-01
The Monte Carlo Modelling Expert Group (MCMEG) is an expert network specializing in Monte Carlo radiation transport and the modelling and simulation applied to the radiation protection and dosimetry research field. For the first inter-comparison task the group launched an exercise to model and simulate a 6 MV LINAC photon beam using the Monte Carlo codes available within their laboratories and validate their simulated results by comparing them with experimental measurements carried out at the National Cancer Institute (INCA) in Rio de Janeiro, Brazil. The experimental measurements were performed using an ionization chamber with calibration traceable to a Secondary Standard Dosimetry Laboratory (SSDL). The detector was immersed in a water phantom at different depths and was irradiated with a radiation field size of 10×10 cm2. This exposure setup was used to determine the dosimetric parameters Percentage Depth Dose (PDD) and Tissue Phantom Ratio (TPR). The validation process compares the MC-calculated results to the experimentally measured PDD20,10 and TPR20,10. Simulations were performed reproducing the experimental TPR20,10 quality index, which provides a satisfactory description of both the PDD curve and the transverse profiles at the two depths measured. This paper reports in detail the modelling process using the MCNPx, MCNP6, EGSnrc and Penelope Monte Carlo codes, the source and tally descriptions, the validation processes and the results.
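The two beam-quality quantities compared in the exercise are simple ratios of depth-dose readings, and can be sketched directly. The depth-dose curve below is a synthetic stand-in; the empirical PDD-to-TPR conversion is the one given in IAEA TRS-398 for a 10×10 cm2 field.

```python
# Hedged sketch: PDD(20,10) is the ratio of percentage depth doses at 20 and
# 10 cm depth; TPR(20,10) is estimated from it via the TRS-398 relation.
# The depth-dose curve here is a toy model, not measured data.
import numpy as np

depth = np.linspace(0, 30, 301)                      # cm
pdd = 100 * np.exp(-(depth - 1.5) * 0.045)           # toy 6 MV fall-off
pdd[depth < 1.5] = 100 * (depth[depth < 1.5] / 1.5)  # crude build-up region

d10 = np.interp(10.0, depth, pdd)
d20 = np.interp(20.0, depth, pdd)
pdd_20_10 = d20 / d10
tpr_20_10 = 1.2661 * pdd_20_10 - 0.0595              # TRS-398 relation
print(f"PDD(20,10) = {pdd_20_10:.3f}, TPR(20,10) = {tpr_20_10:.3f}")
```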
Fermentation process tracking through enhanced spectral calibration modeling.
Triadaphillou, Sophia; Martin, Elaine; Montague, Gary; Norden, Alison; Jeffkins, Paul; Stimpson, Sarah
2007-06-15
The FDA process analytical technology (PAT) initiative will materialize in a significant increase in the number of installations of spectroscopic instrumentation. However, to attain the greatest benefit from the data generated, there is a need for calibration procedures that extract the maximum information content. For example, in fermentation processes, the interpretation of the resulting spectra is challenging as a consequence of the large number of wavelengths recorded, the underlying correlation structure that is evident between the wavelengths and the impact of the measurement environment. Approaches to the development of calibration models have been based on the application of partial least squares (PLS) either to the full spectral signature or to a subset of wavelengths. This paper presents a new approach to calibration modeling that combines a wavelength selection procedure, spectral window selection (SWS), in which windows of wavelengths are automatically selected and subsequently used as the basis of the calibration model. However, due to the non-uniqueness of the windows selected when the algorithm is executed repeatedly, multiple models are constructed and these are then combined using stacking, thereby increasing the robustness of the final calibration model. The methodology is applied to data generated during the monitoring of broth concentrations in an industrial fermentation process from on-line near-infrared (NIR) and mid-infrared (MIR) spectrometers. It is shown that the proposed calibration modeling procedure outperforms traditional calibration procedures, as well as enabling the identification of the critical regions of the spectra with regard to the fermentation process.
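A minimal sketch of the window-selection-plus-stacking pattern follows. It is not the paper's algorithm: the window scoring, the number of retained windows, and the inverse-error stacking weights are all invented simplifications, and the "spectra" are synthetic.

```python
# Hedged sketch of SWS + stacking: score candidate wavelength windows by
# cross-validated PLS error, build one PLS model per retained window, and
# combine predictions with weights inversely proportional to CV error.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n, p = 120, 200
X = rng.normal(size=(n, p)).cumsum(axis=1)    # smooth, correlated "spectra"
y = 2.0 * X[:, 60:70].mean(axis=1) + 0.1 * rng.normal(size=n)

win, step = 20, 10
windows = [(s, s + win) for s in range(0, p - win + 1, step)]
scores = []
for lo, hi in windows:
    mse = -cross_val_score(PLSRegression(n_components=3), X[:, lo:hi], y,
                           cv=5, scoring="neg_mean_squared_error").mean()
    scores.append(mse)

best = np.argsort(scores)[:3]                 # keep the 3 best windows
models, weights = [], []
for i in best:
    lo, hi = windows[i]
    models.append((lo, hi, PLSRegression(n_components=3).fit(X[:, lo:hi], y)))
    weights.append(1.0 / scores[i])
weights = np.array(weights) / np.sum(weights)

def predict(Xnew):
    preds = [m.predict(Xnew[:, lo:hi]).ravel() for lo, hi, m in models]
    return np.dot(weights, preds)             # stacked calibration model

print("stacked in-sample RMSE:", np.sqrt(np.mean((predict(X) - y) ** 2)))
```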
An experimental tool to look in a magma chamber
NASA Astrophysics Data System (ADS)
Gonde, C.; Massare, D.; Bureau, H.; Martel, C.; Pichavant, M.; Clocchiatti, R.
2005-12-01
Understanding the physical and geochemical processes occurring in the roots of volcanoes is one of the fundamental tasks of research in the experimental petrology community. This requires experimental tools able to create confining conditions appropriate for magma chambers and conduits. However, some natural magmatic processes cannot be rigorously characterized by snapshot experiments alone. In some cases, the in situ approach is the only option, because it permits the direct observation of processes (crystallization of mineral phases, bubble growth, etc.) and the study of their kinetics. Here we present a powerful tool, a transparent internally heated autoclave. With this apparatus, pressures (up to 0.3 GPa) and temperatures (up to 900°C) appropriate for subvolcanic magma reservoirs can be obtained. Because it is equipped with transparent sapphire windows, either images or movies can be recorded during an experiment. The pressure medium is argon, and heating is achieved by a tungsten (W) winding placed inside the pressure vessel. Pressure and temperature are calibrated using both well-known melting points (e.g., salts, metals) and phase transitions (AgI), at room temperature as well as at medium and high temperatures. During an experiment, the experimental charge is held between two thick diamond windows placed in the furnace cylinder. The experimental volume is about 1 mm3. Observation and digital recording are made along the horizontal axis, through the windows. This apparatus is currently used for studies of nucleation and growth of gas bubbles in a silicate melt. The first results will be presented at the meeting.
Indirect Correspondence-Based Robust Extrinsic Calibration of LiDAR and Camera
Sim, Sungdae; Sock, Juil; Kwak, Kiho
2016-01-01
LiDAR and cameras have been broadly utilized in computer vision and autonomous vehicle applications. However, in order to convert data between the local coordinate systems, we must estimate the rigid body transformation between the sensors. In this paper, we propose a robust extrinsic calibration algorithm that can be implemented easily and has small calibration error. The extrinsic calibration parameters are estimated by minimizing the distance between corresponding features projected onto the image plane. The features are edge and centerline features on a v-shaped calibration target. The proposed algorithm improves the calibration accuracy in two ways. First, we weight the distance between a point and a line feature according to the correspondence accuracy of the features. Second, we apply a penalizing function to exclude the influence of outliers in the calibration datasets. Additionally, based on our robust calibration approach for a single LiDAR-camera pair, we introduce a joint calibration that estimates the extrinsic parameters of multiple sensors at once by minimizing one objective function with loop closing constraints. We conduct several experiments to evaluate the performance of our extrinsic calibration algorithm. The experimental results show that our calibration method has better performance than the other approaches. PMID:27338416
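The optimization core, projecting LiDAR points with candidate extrinsics and minimizing weighted point-to-line distances in the image, can be sketched as follows. The intrinsics, target geometry, and uniform weights are invented, and a single edge is used purely for illustration; the published method stacks many edge and centerline correspondences and adds outlier penalization.

```python
# Hedged sketch: minimize weighted point-to-line reprojection distances over
# a 6-DoF extrinsic (rotation vector + translation). All data synthetic.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])                  # assumed camera intrinsics

def project(pts, rvec, t):
    """Project LiDAR points into the image with candidate extrinsics."""
    cam = Rotation.from_rotvec(rvec).apply(pts) + t
    uvw = cam @ K.T
    return uvw[:, :2] / uvw[:, 2:3]

# synthetic data: LiDAR points along one target edge and its image line
edge_pts = np.c_[np.linspace(-0.5, 0.5, 30), np.zeros(30), np.full(30, 3.0)]
true_rvec = np.array([0.02, -0.01, 0.03])
true_t = np.array([0.10, -0.05, 0.00])
uv = project(edge_pts, true_rvec, true_t)
d = uv[-1] - uv[0]; d /= np.linalg.norm(d)
n = np.array([-d[1], d[0]])                      # unit normal of image line
line = np.array([n[0], n[1], -n @ uv[0]])        # a*u + b*v + c = 0
w = np.ones(len(edge_pts))                       # correspondence weights

def residuals(x):
    uv_hat = project(edge_pts, x[:3], x[3:])
    return w * (uv_hat @ line[:2] + line[2])     # weighted point-line dist

# one edge alone is degenerate; the real method uses many features
sol = least_squares(residuals, np.zeros(6))
print("residual norm:", np.linalg.norm(sol.fun))
```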
Das, Arya; Ali, Sk Musharaf
2018-02-21
Tri-isoamyl phosphate (TiAP) has been proposed to be an alternative for tri-butyl phosphate (TBP) in the Plutonium Uranium Extraction (PUREX) process. Recently, we have successfully calibrated and tested all-atom optimized potentials for liquid simulations using Mulliken partial charges for pure TiAP, TBP, and dodecane by performing molecular dynamics (MD) simulation. It is of immense importance to extend this potential to the various molecular properties of TiAP and TiAP/n-dodecane binary mixtures using MD simulation. Earlier efforts were devoted to finding a suitable force field that can explain both structural and dynamical properties by empirical parameterization. Therefore, the present MD study reports the structural, dynamical, and thermodynamical properties of TiAP-dodecane mixtures over the entire mole-fraction range of 0-1, employing our calibrated Mulliken-embedded optimized potentials for liquid simulation (OPLS) force field. The calculated electric dipole moment of TiAP was seen to be almost unaffected by the TiAP concentration in the dodecane diluent. The calculated liquid densities of the TiAP-dodecane mixture are in good agreement with the experimental data. The mixture densities at different temperatures were also studied and, as expected, were found to decrease with temperature. The plot of diffusivities for TiAP and dodecane against mole fraction in the binary mixture intersects at a composition in the range of 25%-30% of TiAP in dodecane, which is very close to the TBP/n-dodecane composition used in the PUREX process. The excess volume of mixing was found to be positive over the entire range of mole fraction, and the excess enthalpy of mixing was shown to be endothermic for both the TBP/n-dodecane and TiAP/n-dodecane mixtures, as reported experimentally. The spatial pair correlation functions are evaluated between TiAP-TiAP and TiAP-dodecane molecules. Further, shear viscosity has been computed by performing non-equilibrium molecular dynamics employing the periodic perturbation method. The calculated shear viscosity of the binary mixture is found to be in excellent agreement with the experimental values. The use of the newly calibrated OPLS force field embedding Mulliken charges is shown to be equally reliable in predicting the structural and dynamical properties of the mixture without incorporating any arbitrary scaling in the force field or Lennard-Jones parameters. Further, the present MD simulation results demonstrate that the Stokes-Einstein relation breaks down at the molecular level. The present methodology might be adopted to evaluate the liquid state properties of an aqueous-organic biphasic system, which is of great significance in interfacial science and technology.
NASA Astrophysics Data System (ADS)
Ghiorso, M. S.
2013-12-01
Internally consistent thermodynamic databases are critical resources that facilitate the calculation of heterogeneous phase equilibria and thereby support geochemical, petrological, and geodynamical modeling. These 'databases' are actually derived data/model systems that depend on a diverse suite of physical property measurements, calorimetric data, and experimental phase equilibrium brackets. In addition, such databases are calibrated with the adoption of various models for extrapolation of heat capacities and volumetric equations of state to elevated temperature and pressure conditions. Finally, these databases require specification of thermochemical models for the mixing properties of solid, liquid, and fluid solutions, which are often rooted in physical theory and, in turn, depend on additional experimental observations. The process of 'calibrating' a thermochemical database involves considerable effort and an extensive computational infrastructure. Because of these complexities, the community tends to rely on a small number of thermochemical databases, generated by a few researchers; these databases often have limited longevity and are universally difficult to maintain. ThermoFit is a software framework and user interface whose aim is to provide a modeling environment that facilitates creation, maintenance and distribution of thermodynamic data/model collections. Underlying ThermoFit are data archives of fundamental physical property, calorimetric, crystallographic, and phase equilibrium constraints that provide the essential experimental information from which thermodynamic databases are traditionally calibrated. ThermoFit standardizes schema for accessing these data archives and provides web services for data mining these collections. Beyond simple data management and interoperability, ThermoFit provides a collection of visualization and software modeling tools that streamline the model/database generation process. Most notably, ThermoFit facilitates the rapid visualization of predicted model outcomes and permits the user to modify these outcomes using tactile- or mouse-based GUI interaction, permitting real-time updates that reflect users' choices, preferences, and priorities involving derived model results. This ability permits some resolution of the problem of correlated model parameters in the common situation where thermodynamic models must be calibrated from inadequate data resources. The ability also allows modeling constraints to be imposed using natural data and observations (i.e., petrologic or geochemical intuition). Once formulated, ThermoFit facilitates deployment of data/model collections by automated creation of web services. Users consume these services via web-, Excel-, or desktop-based clients. ThermoFit is currently under active development and not yet generally available; a limited capability prototype system has been coded for Macintosh computers and utilized to construct thermochemical models for H2O-CO2 mixed fluid saturation in silicate liquids. The longer term goal is to release ThermoFit as a web portal application client with server-based cloud computations supporting the modeling environment.
Numerical simulation of asphalt mixtures fracture using continuum models
NASA Astrophysics Data System (ADS)
Szydłowski, Cezary; Górski, Jarosław; Stienss, Marcin; Smakosz, Łukasz
2018-01-01
The paper considers numerical models of fracture processes of semi-circular asphalt mixture specimens subjected to three-point bending. Parameter calibration of the asphalt mixture constitutive models requires advanced, complex experimental test procedures. The highly non-homogeneous material is numerically modelled by a quasi-continuum model. The computational parameters are averaged data of the components, i.e. asphalt, aggregate and the air voids composing the material. The model directly captures random nature of material parameters and aggregate distribution in specimens. Initial results of the analysis are presented here.
Digital phase demodulation for low-coherence interferometry-based fiber-optic sensors
NASA Astrophysics Data System (ADS)
Liu, Y.; Strum, R.; Stiles, D.; Long, C.; Rakhman, A.; Blokland, W.; Winder, D.; Riemer, B.; Wendel, M.
2018-03-01
We describe a digital phase demodulation scheme for low-coherence interferometry-based fiber-optic sensors by employing a simple generation of phase-shifted signals at the interrogation interferometer. The scheme allows a real-time calibration process and offers the capability of measuring large variations (up to the coherence length of the light source) at a bandwidth limited only by the data acquisition system. The proposed phase demodulation method is analytically derived and its validity and performance are experimentally verified using fiber-optic Fabry-Perot sensors for measurement of strains and vibrations.
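One standard way to demodulate phase from phase-shifted interferometer outputs, offered here only as an illustration of the class of arctangent schemes the abstract alludes to, uses three signals shifted by 120 degrees. The signal model below is synthetic and the specific combination is the classic three-phase formula, not necessarily the paper's.

```python
# Hedged sketch: arctangent demodulation from three interference signals
# shifted by 120 degrees, followed by phase unwrapping. Signals synthetic.
import numpy as np

t = np.linspace(0, 1, 5000)
phi = 6 * np.pi * np.sin(2 * np.pi * 3 * t)      # "measurand" phase
A, B = 1.0, 0.8                                  # bias and fringe visibility
I1 = A + B * np.cos(phi)
I2 = A + B * np.cos(phi - 2 * np.pi / 3)
I3 = A + B * np.cos(phi - 4 * np.pi / 3)

# 2*I1 - I2 - I3 = 3B*cos(phi); sqrt(3)*(I2 - I3) = 3B*sin(phi)
num = np.sqrt(3) * (I2 - I3)
den = 2 * I1 - I2 - I3
phi_hat = np.unwrap(np.arctan2(num, den))
phi_hat -= phi_hat[0] - phi[0]                   # remove constant offset
print("max demodulation error (rad):", np.max(np.abs(phi_hat - phi)))
```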
A new polarimetric active radar calibrator and calibration technique
NASA Astrophysics Data System (ADS)
Tang, Jianguo; Xu, Xiaojian
2015-10-01
Polarimetric active radar calibrator (PARC) is one of the most important calibrators with high radar cross section (RCS) for polarimetric measurements. In this paper, a new double-antenna polarimetric active radar calibrator (DPARC) is proposed, which consists of two rotatable antennas with wideband electromagnetic polarization filters (EMPF) to achieve lower cross-polarization for transmission and reception. With two antennas which are rotatable around the radar line of sight (LOS), the DPARC provides a variety of standard polarimetric scattering matrices (PSM) through the rotation combination of receiving and transmitting polarization, which are useful for polarimetric calibration in different applications. In addition, a technique based on Fourier analysis is proposed for calibration processing. Numerical simulation results are presented to demonstrate the superior performance of the proposed DPARC and processing technique.
NASA Astrophysics Data System (ADS)
Shao, Xinxing; Zhu, Feipeng; Su, Zhilong; Dai, Xiangjun; Chen, Zhenning; He, Xiaoyuan
2018-03-01
The strain errors in stereo-digital image correlation (DIC) due to camera calibration were investigated using precisely controlled numerical experiments and real experiments. Three-dimensional rigid body motion tests were conducted to examine the effects of camera calibration on the measured results. For a fully accurate calibration, rigid body motion causes negligible strain errors. However, for inaccurately calibrated camera parameters and a short working distance, rigid body motion will lead to strain errors of more than 50 με, which significantly affect the measurement. In practical measurements, it is impossible to obtain a fully accurate calibration; therefore, considerable attention should be paid to avoiding these errors, especially for high-accuracy strain measurements. It is necessary to avoid large rigid body motions in both two-dimensional DIC and stereo-DIC.
BPM System for Electron Cooling in the Fermilab Recycler Ring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joireman, Paul W.; Cai, Jerry; Chase, Brian E.
2004-11-10
We report a VXI based system used to acquire and process BPM data for the electron cooling system in the Fermilab Recycler ring. The BPM system supports acquisition of data from 19 BPM locations in five different sections of the electron cooling apparatus. Beam positions for both electrons and anti-protons can be detected simultaneously with a resolution of ±50 μm. We calibrate the system independently for each beam type at each BPM location. We describe the system components, signal processing and modes of operation used in support of the electron-cooling project and present experimental results of system performance for the developmental electron cooling installation at Fermilab.
Virtual Instrument for Determining Rate Constant of Second-Order Reaction by pX Based on LabVIEW 8.0
Meng, Hu; Li, Jiang-Yuan; Tang, Yong-Huai
2009-01-01
A virtual instrument system for an ion analyzer, based on LabVIEW 8.0, which can measure and analyze ion concentrations in solution, was developed; it comprises a homemade conditioning circuit, a data acquisition board, and a computer. It can calibrate slope, temperature, and positioning automatically. When applied to determining the reaction rate constant by pX, it achieves live acquisition, real-time display, automatic processing of test data, report generation, and other functions. This method greatly simplifies the experimental operation, avoids the complicated procedures and personal error of manual data processing, and improves the accuracy and repeatability of the experimental results. PMID:19730752
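The data processing behind such a measurement is short enough to sketch: for a second-order reaction the integrated rate law makes 1/[A] linear in time with slope k, and an ion-selective electrode reports pX = −log10[A]. The numbers below are synthetic placeholders.

```python
# Hedged sketch: recover a second-order rate constant from pX readings via
# the integrated rate law 1/[A] = 1/[A]0 + k*t. Synthetic data.
import numpy as np

k_true, A0 = 0.05, 0.01            # L/(mol*s), mol/L
t = np.linspace(0, 600, 40)        # s
A = 1.0 / (1.0 / A0 + k_true * t)  # integrated second-order rate law
pX = -np.log10(A) + 0.002 * np.random.default_rng(0).normal(size=t.size)

conc = 10.0 ** (-pX)               # back from the electrode reading
k_est, inv_A0 = np.polyfit(t, 1.0 / conc, 1)
print(f"k = {k_est:.4f} L/(mol*s), 1/[A]0 = {inv_A0:.1f}")
```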
A High Performance Torque Sensor for Milling Based on a Piezoresistive MEMS Strain Gauge
Qin, Yafei; Zhao, Yulong; Li, Yingxue; Zhao, You; Wang, Peng
2016-01-01
In high speed and high precision machining applications, it is important to monitor the machining process in order to ensure high product quality. For this purpose, it is essential to develop a dynamometer with high sensitivity and high natural frequency which is suited to these conditions. This paper describes the design, calibration and performance of a milling torque sensor based on a piezoresistive MEMS strain gauge. A detailed design study is carried out to optimize the two mutually contradictory indicators, sensitivity and natural frequency. The developed torque sensor principally consists of a thin-walled cylinder, and a piezoresistive MEMS strain gauge bonded on the surface of the sensing element where the shear strain is maximum. The strain gauge includes eight piezoresistors, four of which are connected in a full Wheatstone bridge, which is used to measure the applied torque during machining procedures. Experimental static calibration results show that the sensitivity of the torque sensor has been improved to 0.13 mV/Nm. A modal impact test indicates that the natural frequency of the torque sensor reaches 1216 Hz, which is suitable for high speed machining processes. The dynamic test results indicate that the developed torque sensor is stable and practical for monitoring the milling process. PMID:27070620
Yurko, Joseph P.; Buongiorno, Jacopo; Youngblood, Robert
2015-05-28
System codes for simulation of safety performance of nuclear plants may contain parameters whose values are not known very accurately. New information from tests or operating experience is incorporated into safety codes by a process known as calibration, which reduces uncertainty in the output of the code and thereby improves its support for decision-making. The work reported here implements several improvements on classic calibration techniques afforded by modern analysis techniques. The key innovation has come from development of code surrogate model (or code emulator) construction and prediction algorithms. Use of a fast emulator makes the calibration processes used here with Markov Chain Monte Carlo (MCMC) sampling feasible. This study uses Gaussian Process (GP) based emulators, which have been used previously to emulate computer codes in the nuclear field. The present work describes the formulation of an emulator that incorporates GPs into a factor analysis-type or pattern recognition-type model. This "function factorization" Gaussian Process (FFGP) model overcomes limitations present in standard GP emulators, thereby improving both accuracy and speed of the emulator-based calibration process. Calibration of a friction-factor example using a Method of Manufactured Solution is performed to illustrate key properties of the FFGP based process.
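The emulator-plus-MCMC pattern can be sketched in a few lines. The sketch below uses a plain scikit-learn Gaussian process in place of the paper's FFGP construction, and a one-parameter toy friction-factor model in place of a system code; all settings are illustrative assumptions.

```python
# Hedged sketch of emulator-based calibration: train a GP on a few runs of
# an "expensive" code, then let a Metropolis sampler query only the GP.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def expensive_code(theta):
    """Toy 'system code': friction factor vs. a single model parameter."""
    return 0.02 + 0.3 / (100.0 * theta)

rng = np.random.default_rng(2)
theta_train = np.linspace(0.05, 0.5, 12)[:, None]
y_train = expensive_code(theta_train).ravel()

gp = GaussianProcessRegressor(kernel=RBF(0.2) + WhiteKernel(1e-8),
                              normalize_y=True).fit(theta_train, y_train)

obs, sigma = expensive_code(0.2) + 1e-4, 5e-4      # noisy "measurement"

def log_post(theta):
    if not 0.05 < theta < 0.5:
        return -np.inf
    mu = gp.predict(np.array([[theta]]))[0]        # emulator, not the code
    return -0.5 * ((mu - obs) / sigma) ** 2

cur, cur_lp, chain = 0.3, log_post(0.3), []
for _ in range(10000):
    prop = cur + rng.normal(0, 0.02)
    lp = log_post(prop)
    if np.log(rng.random()) < lp - cur_lp:
        cur, cur_lp = prop, lp
    chain.append(cur)
print("posterior mean of theta:", np.mean(chain[2000:]))
```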
Data processing and in-flight calibration systems for OMI-EOS-Aura
NASA Astrophysics Data System (ADS)
van den Oord, G. H. J.; Dobber, M.; van de Vegte, J.; van der Neut, I.; Som de Cerff, W.; Rozemeijer, N. C.; Schenkelaars, V.; ter Linden, M.
2006-08-01
The OMI instrument that flies on the EOS Aura mission was launched in July 2004. OMI is a UV-VIS imaging spectrometer that measures in the 270 - 500 nm wavelength range. OMI provides daily global coverage with high spatial resolution. During every 100-minute orbit, OMI generates about 0.5 GB of Level 0 data and 1.2 GB of Level 1 data. About half of the Level 1 data consists of in-flight calibration measurements. These data rates make it necessary to automate the process of in-flight calibration. For that purpose two facilities have been developed at KNMI in the Netherlands: the OMI Dutch Processing System (ODPS) and the Trend Monitoring and In-flight Calibration Facility (TMCF). A description of these systems is provided, with emphasis on their use for radiometric, spectral and detector calibration and characterization. With the advance of detector technology and the need for higher spatial resolution, data rates will become even higher for future missions. To make effective use of automated systems like the TMCF, it is of paramount importance to integrate the instrument operations concept, the information contained in the Level 1 (meta-)data products and the in-flight calibration software and system databases. In this way a robust but also flexible end-to-end system can be developed that serves the needs of the calibration staff, the scientific data users and the processing staff. The way this has been implemented for OMI may serve as an example of a cost-effective and user-friendly solution for future missions. The basic system requirements for in-flight calibration are discussed and examples are given of how these requirements have been implemented for OMI. Special attention is paid to supporting the Level 0-1 processing with timely and accurate calibration constants.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodiac, F.; Hudelot, JP.; Lecerf, J.
CABRI is an experimental pulse reactor operated by CEA at the Cadarache research center. Since 1978 the experimental programs have aimed at studying fuel behavior under Reactivity Initiated Accident (RIA) conditions. Since 2003, it has been refurbished in order to be able to provide RIA and LOCA (Loss Of Coolant Accident) experiments in prototypical PWR conditions (155 bar, 300 deg. C). This project is part of a broader scope including an overall facility refurbishment and a safety review. The global modification is conducted by the CEA project team. It is funded by IRSN, which is conducting the CIP experimental program in the framework of the OECD/NEA project CIP, and is financed within an international collaboration. During the reactor restart, commissioning tests are realized for all equipment, systems and circuits of the reactor. In particular, neutronics and power commissioning tests will be performed in 2015 and 2016, respectively. This paper focuses on the design of a complete and original dosimetry program that was built in support of the CABRI core characterization and the power calibration. Each of the above experimental goals will be fully described, as well as the target uncertainties and the planned experimental techniques and data treatment.
Laser Calibration of an Impact Disdrometer
NASA Technical Reports Server (NTRS)
Lane, John E.; Kasparis, Takis; Metzger, Philip T.; Jones, W. Linwood
2014-01-01
A practical approach to developing an operational low-cost disdrometer hinges on implementing an effective in situ adaptive calibration strategy. This calibration strategy lowers the cost of the device and provides a method to guarantee continued automatic calibration. In previous work, a collocated tipping bucket rain gauge was utilized to provide a calibration signal to the disdrometer's digital signal processing software. Rainfall rate is proportional to the 11/3 moment of the drop size distribution (a 7/2 moment can also be assumed, depending on the choice of terminal velocity relationship). In the previous case, the disdrometer calibration was characterized and weighted to the 11/3 moment of the drop size distribution (DSD). Optical extinction by rainfall is proportional to the 2nd moment of the DSD. Using visible laser light as a means to focus and generate an auxiliary calibration signal, the adaptive calibration processing is significantly improved.
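The moment bookkeeping behind this dual-signal calibration is simple enough to show directly: rainfall rate tracks the 11/3 moment of the drop size distribution, while optical extinction tracks the 2nd moment, so a tipping bucket and a laser provide calibration signals weighted toward different parts of the spectrum. The binned DSD below is a synthetic Marshall-Palmer-like placeholder.

```python
# Hedged sketch: compute the DSD moments that the rain-gauge (11/3) and
# laser-extinction (2nd) calibration signals are proportional to.
import numpy as np

D = np.linspace(0.2, 5.0, 25)          # drop diameter bins (mm)
dD = D[1] - D[0]
N0, Lam = 8000.0, 2.0                  # Marshall-Palmer-like parameters
N = N0 * np.exp(-Lam * D)              # drops m^-3 mm^-1

def moment(order):
    """Numerical moment of the binned drop size distribution."""
    return np.sum(N * D ** order) * dD

M2 = moment(2.0)                       # ~ optical extinction
M11_3 = moment(11.0 / 3.0)             # ~ rainfall rate (v(D) ~ D^(2/3))
print(f"M2 = {M2:.1f}, M(11/3) = {M11_3:.1f}")
```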
Constitutive Model Calibration via Autonomous Multiaxial Experimentation (Postprint)
2016-09-17
test machine. Experimental data is reduced and finite element simulations are conducted in parallel with the test based on experimental strain conditions. Optimization methods ... be used directly in finite element simulations of more complex geometries. Keywords: Axial/torsional experimentation; Plasticity; Constitutive model
Mengoni, Marlène; Kayode, Oluwasegun; Sikora, Sebastien N F; Zapata-Cornelio, Fernando Y; Gregory, Diane E; Wilcox, Ruth K
2017-08-01
The development of current surgical treatments for intervertebral disc damage could benefit from virtual environments accounting for population variations. For such models to be reliable, a relevant description of the mechanical properties of the different tissues and their role in the functional mechanics of the disc is of major importance. The aims of this work were first to assess the physiological hoop strain in the annulus fibrosus in fresh conditions (n = 5) in order to extract a functional behaviour of the extrafibrillar matrix; then to reverse-engineer the annulus fibrosus fibrillar behaviour (n = 6). This was achieved by performing both direct and global controlled calibration of material parameters, accounting for the whole process of experimental design and in silico model methodology. Direct-controlled models are specimen-specific models representing controlled experimental conditions that can be replicated, allowing direct comparison of measurements. Validation was performed on another six specimens and a sensitivity study was performed. Hoop strains were measured as 17 ± 3% after 10 min relaxation and 21 ± 4% after 20-25 min relaxation, with no significant difference between the two measurements. The extrafibrillar matrix functional moduli were measured as 1.5 ± 0.7 MPa. Fibre-related material parameters showed large variability, with a variance above 0.28. Direct-controlled calibration and validation provides confidence that the model development methodology can capture the measurable variation within the population of tested specimens.
NASA Technical Reports Server (NTRS)
Li, Tao; Hasegawa, Toshihiro; Yin, Xinyou; Zhu, Yan; Boote, Kenneth; Adam, Myriam; Bregaglio, Simone; Buis, Samuel; Confalonieri, Roberto; Fumoto, Tamon;
2014-01-01
Predicting rice (Oryza sativa) productivity under future climates is important for global food security. Ecophysiological crop models in combination with climate model outputs are commonly used in yield prediction, but uncertainties associated with crop models remain largely unquantified. We evaluated 13 rice models against multi-year experimental yield data at four sites with diverse climatic conditions in Asia and examined whether different modeling approaches to major physiological processes contribute to the uncertainty in predicting field-measured yields and in the sensitivity to changes in temperature and CO2 concentration [CO2]. We also examined whether use of an ensemble of crop models can reduce the uncertainties. Individual models did not consistently reproduce both experimental and regional yields well, and uncertainty was larger at the warmest and coolest sites. The variation in yield projections was larger among crop models than the variation resulting from 16 global climate model-based scenarios. However, the mean of the predictions of all crop models reproduced the experimental data, with an uncertainty of less than 10 percent of measured yields. Using an ensemble of eight models calibrated only for phenology, or five models calibrated in detail, resulted in an uncertainty equivalent to that of the measured yield in well-controlled agronomic field experiments. Sensitivity analysis indicates the need to improve the accuracy of predicting both biomass and harvest index in response to increasing [CO2] and temperature.
Utilization of Expert Knowledge in a Multi-Objective Hydrologic Model Automatic Calibration Process
NASA Astrophysics Data System (ADS)
Quebbeman, J.; Park, G. H.; Carney, S.; Day, G. N.; Micheletty, P. D.
2016-12-01
Spatially distributed continuous simulation hydrologic models have a large number of parameters for potential adjustment during the calibration process. Traditional manual calibration of such a modeling system is extremely laborious, which has historically motivated the use of automatic calibration procedures. With a large selection of model parameters, high degrees of objective-space fitness - measured with typical metrics such as Nash-Sutcliffe, Kling-Gupta, RMSE, etc. - can easily be achieved using a range of evolutionary algorithms. A concern with this approach is the high degree of compensatory calibration, with many similarly performing solutions and yet grossly varying parameter-set solutions. To help alleviate this concern, and to mimic manual calibration processes, expert knowledge is proposed for inclusion within the multi-objective functions that evaluate the parameter decision space. As a result, Pareto solutions are identified with high degrees of fitness, but with parameter sets that maintain and utilize available expert knowledge, resulting in more realistic and consistent solutions. This process was tested using the joint SNOW-17 and Sacramento Soil Moisture Accounting (SAC-SMA) method within the Animas River basin in Colorado. Three different elevation zones, each with a range of parameters, resulted in over 35 model parameters being simultaneously calibrated. As a result, high degrees of fitness were achieved, in addition to the development of more realistic and consistent parameter sets such as those typically achieved during manual calibration procedures.
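One way to encode expert knowledge as an extra objective, offered only as a sketch since the abstract does not give the actual rules used, is to penalize parameter sets that violate expected orderings across elevation zones alongside a conventional fit metric. The model, data, and the specific melt-factor rule below are placeholders.

```python
# Hedged sketch: a two-element objective vector for a multi-objective
# calibration -- (1 - NSE) plus an expert-knowledge penalty. Illustrative.
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency (to be maximized)."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def expert_penalty(params):
    """Penalty for violating ordering rules across three elevation zones."""
    mf_low, mf_mid, mf_high = params["melt_factor"]
    pen = 0.0
    pen += max(0.0, mf_low - mf_mid)     # melt factor assumed to rise ...
    pen += max(0.0, mf_mid - mf_high)    # ... with zone altitude (example)
    return pen

def objectives(params, sim, obs):
    """Objective vector handed to a multi-objective evolutionary search."""
    return np.array([1.0 - nse(sim, obs), expert_penalty(params)])

obs = np.array([1.0, 3.0, 7.0, 4.0, 2.0])
sim = np.array([1.1, 2.7, 6.5, 4.4, 2.2])
params = {"melt_factor": (1.2, 1.1, 1.5)}    # violates the first rule
print("objectives (minimize both):", objectives(params, sim, obs))
```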
Pozhitkov, Alex E; Noble, Peter A; Bryk, Jarosław; Tautz, Diethard
2014-01-01
Although microarrays are standard analysis tools in biomedical research, they are known to yield noisy output that usually requires experimental confirmation. To tackle this problem, many studies have developed rules for optimizing probe design and devised complex statistical tools to analyze the output. However, less emphasis has been placed on systematically identifying the noise component as part of the experimental procedure. One source of noise is the variance in probe binding, which can be assessed by replicating array probes. The second source is poor probe performance, which can be assessed by calibrating the array based on a dilution series of target molecules. Using model experiments for copy number variation and gene expression measurements, we investigate here a revised design for microarray experiments that addresses both of these sources of variance. Two custom arrays were used to evaluate the revised design: one based on 25 mer probes from an Affymetrix design and the other based on 60 mer probes from an Agilent design. To assess experimental variance in probe binding, all probes were replicated ten times. To assess probe performance, the probes were calibrated using a dilution series of target molecules and the signal response was fitted to an adsorption model. We found that significant variance of the signal could be controlled by averaging across probes and removing probes that are nonresponsive or poorly responsive in the calibration experiment. Taking this into account, one can obtain a more reliable signal with the added option of obtaining absolute rather than relative measurements. The assessment of technical variance within the experiments, combined with the calibration of probes, allows removal of poorly responding probes and yields more reliable signals for the remaining ones. Once an array is properly calibrated, absolute quantification of signals becomes straightforward, alleviating the need for normalization and reference hybridizations.
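The probe-calibration step can be sketched with a Langmuir-type saturating fit, a common choice of adsorption model; the exact model, thresholds, and data below are assumptions for illustration, not the paper's values.

```python
# Hedged sketch: fit each probe's dilution-series signal to a Langmuir-type
# adsorption model and drop probes that are flat or fit poorly. Synthetic.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, smax, K, bg):
    """Saturating signal: smax * K*c / (1 + K*c) plus background."""
    return smax * K * c / (1.0 + K * c) + bg

rng = np.random.default_rng(4)
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])   # dilution series

def calibrate_probe(signal):
    try:
        popt, _ = curve_fit(langmuir, conc, signal,
                            p0=[signal.max(), 1.0, signal.min()],
                            maxfev=5000)
    except RuntimeError:
        return None                                 # no convergence
    resid = signal - langmuir(conc, *popt)
    r2 = 1 - resid.var() / signal.var()
    span = signal.max() - signal.min()
    if r2 < 0.9 or span < 10 * resid.std():         # poor or flat response
        return None
    return popt                                     # usable calibration

good = langmuir(conc, 100.0, 0.5, 2.0) + rng.normal(0, 1.0, conc.size)
dead = 3.0 + rng.normal(0, 1.0, conc.size)          # nonresponsive probe
for name, sig in [("responsive", good), ("dead", dead)]:
    print(name, "->", calibrate_probe(sig))
```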
Variability in Students' Evaluating Processes in Peer Assessment with Calibrated Peer Review
ERIC Educational Resources Information Center
Russell, J.; Van Horne, S.; Ward, A. S.; Bettis, E. A., III; Gikonyo, J.
2017-01-01
This study investigated students' evaluating process and their perceptions of peer assessment when they engaged in peer assessment using Calibrated Peer Review. Calibrated Peer Review is a web-based application that facilitates peer assessment of writing. One hundred and thirty-two students in an introductory environmental science course…
Detonation Shock Dynamics Calibration for Non-Ideal HE: ANFO
NASA Astrophysics Data System (ADS)
Short, Mark; Salyer, Terry
2009-06-01
The detonation of ammonium nitrate (AN) and fuel-oil (FO) mixtures (ANFO) is significantly influenced by the properties of the AN (porosity, particle size, coating) and fuel-oil stoichiometry. We report on a new series of rate-stick experiments in cardboard confinement that highlight detonation front speed and curvature dependence on AN/FO stoichiometry and AN particle properties. Standard detonation velocity-curvature calibrations to the experimental data will be presented, as well as higher-order time-dependent detonation shock dynamics calibrations.
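The "standard calibration" named in the abstract pairs front speeds with curvatures extracted from the rate-stick shots. A minimal sketch, assuming a linear velocity-curvature law Dn(κ) = D_CJ(1 − Bκ) and entirely synthetic data, follows; published DSD calibrations typically use richer functional forms.

```python
# Hedged sketch: fit a linear detonation-velocity/curvature relation to
# (curvature, normal speed) pairs from rate-stick data. Values synthetic.
import numpy as np

kappa = np.array([0.02, 0.05, 0.10, 0.18, 0.30])   # curvature (1/mm)
Dn = np.array([4.35, 4.22, 4.05, 3.72, 3.30])      # normal speed (mm/us)

slope, D_cj = np.polyfit(kappa, Dn, 1)             # Dn = D_cj + slope*kappa
B = -slope / D_cj                                  # Dn = D_cj * (1 - B*kappa)
print(f"D_CJ = {D_cj:.2f} mm/us, B = {B:.2f} mm")
```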
Novel Principle of Contactless Gauge Block Calibration
Buchta, Zdeněk; Řeřucha, Šimon; Mikel, Břetislav; Čížek, Martin; Lazar, Josef; Číp, Ondřej
2012-01-01
In this paper, a novel principle of contactless gauge block calibration is presented. The principle combines low-coherence interferometry and laser interferometry. An experimental setup combines a Dowell interferometer and a Michelson interferometer to ensure gauge block length determination with direct traceability to the primary length standard. By monitoring both gauge block sides with a digital camera, gauge block 3D surface measurements are possible too. The principle presented is protected by Czech national patent No. 302948. PMID:22737012
Velocity precision measurements using laser Doppler anemometry
NASA Astrophysics Data System (ADS)
Dopheide, D.; Taux, G.; Narjes, L.
1985-07-01
A laser Doppler anemometer (LDA) was calibrated to determine its applicability to high-pressure measurements (up to 10 bars) for industrial purposes. The LDA measurement procedure and the computerized experimental setup are presented. The calibration procedure is based on absolute measurement of the Doppler frequency and on calibration of the interference fringe spacing. A four-quadrant detector allows comparison of the fringe-spacing measurements with computed profiles. Further development of the LDA is recommended to increase accuracy (to 0.1% inaccuracy) and to apply the method industrially.
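As background, the LDA relation that makes these two calibrated quantities sufficient is simply velocity = Doppler frequency times fringe spacing; a minimal sketch with illustrative values:

```python
# Minimal sketch of the LDA measurement relation: velocity is the product of
# the measured Doppler frequency and the fringe spacing, which follows from
# the laser wavelength and the beam half-angle. Values are illustrative.
import math

wavelength = 632.8e-9           # He-Ne laser wavelength [m]
half_angle = math.radians(5.0)  # half-angle between the crossing beams [rad]

fringe_spacing = wavelength / (2.0 * math.sin(half_angle))  # [m]
f_doppler = 1.1e6                                           # measured [Hz]

velocity = f_doppler * fringe_spacing                       # [m/s]
print(f"d_f = {fringe_spacing*1e6:.3f} um, v = {velocity:.3f} m/s")
```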
Teillet, P.M.; Helder, D.L.; Ruggles, T.A.; Landry, R.; Ahern, F.J.; Higgs, N.J.; Barsi, J.; Chander, G.; Markham, B.L.; Barker, J.L.; Thome, K.J.; Schott, J.R.; Palluconi, Frank Don
2004-01-01
A coordinated effort on the part of several agencies has led to the specification of a definitive radiometric calibration record for the Landsat-5 thematic mapper (TM) for its lifetime since launch in 1984. The time-dependent calibration record for Landsat-5 TM has been placed on the same radiometric scale as the Landsat-7 enhanced thematic mapper plus (ETM+). It has been implemented in the National Landsat Archive Production Systems (NLAPS) in use in North America. This paper documents the results of this collaborative effort and the specifications for the related calibration processing algorithms. The specifications include (i) anchoring of the Landsat-5 TM calibration record to the Landsat-7 ETM+ absolute radiometric calibration, (ii) new time-dependent calibration processing equations and procedures applicable to raw Landsat-5 TM data, and (iii) algorithms for recalibration computations applicable to some of the existing processed datasets in the North American context. The cross-calibration between Landsat-5 TM and Landsat-7 ETM+ was achieved using image pairs from the tandem-orbit configuration period that was programmed early in the Landsat-7 mission. The time-dependent calibration for Landsat-5 TM is based on a detailed trend analysis of data from the on-board internal calibrator. The new lifetime radiometric calibration record for Landsat-5 will overcome problems with earlier product generation owing to inadequate maintenance and documentation of the calibration over time and will facilitate the quantitative examination of a continuous, near-global dataset at 30-m scale that spans almost two decades.
NASA Astrophysics Data System (ADS)
Rausch, Kameron; Houchin, Scott; Cardema, Jason; Moy, Gabriel; Haas, Evan; De Luccia, Frank J.
2013-12-01
Suomi National Polar-orbiting Partnership (S-NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) reflective bands are currently calibrated via weekly updates to look-up tables (LUTs) utilized by operational ground processing in the Joint Polar Satellite System Interface Data Processing Segment (IDPS). The parameters in these LUTs must be predicted ahead 2 weeks and cannot adequately track the dynamically varying response characteristics of the instrument. As a result, spurious "predict-ahead" calibration errors of the order of 0.1% or greater are routinely introduced into the calibrated reflectances and radiances produced by IDPS in sensor data records (SDRs). Spurious calibration errors of this magnitude adversely impact the quality of downstream environmental data records (EDRs) derived from VIIRS SDRs, such as Ocean Color/Chlorophyll, and cause increased striping and band-to-band radiometric calibration uncertainty of SDR products. A novel algorithm that fully automates reflective band calibration has been developed for implementation in IDPS in late 2013. Automating the reflective solar band (RSB) calibration is extremely challenging and represents a significant advancement over the manner in which RSB calibration has traditionally been performed in heritage instruments such as the Moderate Resolution Imaging Spectroradiometer. The automated algorithm applies calibration data almost immediately after their acquisition by the instrument from views of space and onboard calibration sources, thereby eliminating the predict-ahead errors associated with the current offline calibration process. This new algorithm, when implemented, will significantly improve the quality of VIIRS reflective band SDRs and consequently the quality of EDRs produced from these SDRs.
NASA Technical Reports Server (NTRS)
Bubsey, R. T.; Pierce, W. S.; Shannon, J. L., Jr.; Munz, D.
1982-01-01
The short rod chevron-notch specimen has the advantages of (1) crack development at the chevron tip during the early stage of test loading, and (2) convenient calculation of plane-strain fracture toughness from the maximum test load and from a calibration factor which depends only on the specimen geometry and manner of loading. For generalized application, calibration of the specimen over a range of specimen proportions and chevron-notch configurations is necessary. Such was the objective of this investigation, wherein calibration of the short rod specimen was made by means of experimental compliance measurements converted into dimensionless stress intensity factor coefficients.
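As background to how such a calibration is used, the sketch below computes plane-strain fracture toughness from a maximum test load and a dimensionless stress-intensity factor coefficient of the kind this calibration provides; the numbers are illustrative, not from the paper.

```python
# Hedged sketch: plane-strain fracture toughness of a short rod chevron-notch
# specimen from the maximum load and the dimensionless coefficient Y*_min
# obtained by compliance calibration. All values are illustrative.
import math

y_star_min = 28.0    # dimensionless coefficient for a given geometry/loading
p_max = 2.4e3        # maximum test load [N]
b = 25.4e-3          # specimen diameter [m]
w = 38.1e-3          # specimen length [m]

k_ic = y_star_min * p_max / (b * math.sqrt(w))  # [Pa*sqrt(m)]
print(f"K_Ic ~ {k_ic/1e6:.1f} MPa*m^0.5")
```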
Research yields precise uncertainty equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, E.H.; Ferguson, K.R.
1987-08-03
Results of a study of orifice-meter accuracy by Chevron Oil Field Research Co. at its Venice, La., calibration facility have important implications for natural gas custody-transfer measurement. The calibration facility, data collection, and equipment calibration were described elsewhere. This article explains the derivation of uncertainty factors and details the study's findings. The results were based on calibration of two 16-in. orifice-meter runs. The experimental data cover a beta-ratio range of 0.27 to 0.71 and a Reynolds number range of 4,000,000 to 35,000,000. Discharge coefficients were determined by comparing the orifice flow to the flow from critical-flow nozzles.
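A hedged sketch of the comparison in that last sentence: the discharge coefficient is the ratio of the reference mass flow (here, from the critical-flow nozzles) to the theoretical orifice flow implied by the measured differential pressure. The geometry and readings below are illustrative, and gas expansibility is neglected for brevity.

```python
# Sketch of extracting a discharge coefficient in an orifice calibration,
# ignoring the expansibility factor. All values are illustrative.
import math

d = 0.2032            # orifice bore [m]
D = 0.4064            # pipe diameter [m] (16 in.)
beta = d / D          # beta ratio
rho = 45.0            # gas density at the meter [kg/m^3]
dp = 2.5e4            # differential pressure across the orifice [Pa]
m_dot_ref = 30.0      # mass flow measured by the critical-flow nozzles [kg/s]

area = math.pi / 4.0 * d**2
m_dot_theory = area * math.sqrt(2.0 * rho * dp) / math.sqrt(1.0 - beta**4)
c_d = m_dot_ref / m_dot_theory
print(f"beta = {beta:.2f}, Cd = {c_d:.3f}")
```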
NASA Astrophysics Data System (ADS)
Valente, T.; Bartuli, C.; Sebastiani, M.; Loreto, A.
2005-12-01
The experimental measurement of residual stresses originating within thick coatings deposited by thermal spray on solid substrates plays a fundamentally important role in the preliminary stages of coating design and process parameter optimization. The hole-drilling method is a versatile and widely used technique for the experimental determination of residual stress in the most superficial layers of a solid body. The consolidated procedure, however, can only be applied to metallic bulk materials or to homogeneous, linear elastic, and isotropic materials. The main objective of the present investigation was to adapt the experimental method to the measurement of stress fields built up in ceramic coating/metallic bond layer structures manufactured by plasma spray deposition. A finite element calculation procedure was implemented to identify the calibration coefficients necessary to take into account the elastic modulus discontinuities that characterize the layered structure through its thickness. Experimental adjustments were then proposed to overcome problems related to the low thermal conductivity of the coatings. The numbers of calculation steps and experimental drilling steps were finally optimized.
Thermal regulation in multiple-source arc welding involving material transformations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doumanidis, C.C.
1995-06-01
This article addresses regulation of the thermal field generated during arc welding, as the cause of solidification, heat-affected zone, and cooling-rate-related metallurgical transformations affecting the final microstructure and mechanical properties of various welded materials. This temperature field is described by a dynamic real-time process model, consisting of an analytical composite conduction expression for the solid region and a lumped-state, double-stream circulation model in the weld pool, integrated with a Gaussian heat input and calibrated experimentally through butt joint GMAW tests on plain steel plates. This model serves as the basis of an in-process thermal control system employing feedback of part surface temperatures measured by infrared pyrometry and real-time identification of the model parameters with a multivariable adaptive control strategy. Multiple heat inputs and continuous power distributions are implemented by a single time-multiplexed torch, scanning the weld surface to ensure independent, decoupled control of several thermal characteristics. Their regulation is experimentally obtained in longitudinal GTAW of stainless steel pipes, despite the presence of several geometrical, thermal, and process condition disturbances of arc welding.
Knowing What You Know: Improving Metacomprehension and Calibration Accuracy in Digital Text
ERIC Educational Resources Information Center
Reid, Alan J.; Morrison, Gary R.; Bol, Linda
2017-01-01
This paper presents results from an experimental study that examined embedded strategy prompts in digital text and their effects on calibration and metacomprehension accuracies. A sample population of 80 college undergraduates read a digital expository text on the basics of photography. The most robust treatment (mixed) read the text, generated a…
Assessing applicability of SWAT calibrated at multiple spatial scales from field to stream
USDA-ARS?s Scientific Manuscript database
The capability of SWAT for simulating long-term hydrology and water quality was evaluated using data collected in subwatershed K of the Little River Experimental watershed located in South Atlantic Coastal Plain of the USA. The SWAT model was calibrated to measurements made at various spatial scales...
An information theoretic approach to use high-fidelity codes to calibrate low-fidelity codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, Allison, E-mail: lewis.allison10@gmail.com; Smith, Ralph; Williams, Brian
For many simulation models, it can be prohibitively expensive or physically infeasible to obtain a complete set of experimental data to calibrate model parameters. In such cases, one can alternatively employ validated higher-fidelity codes to generate simulated data, which can be used to calibrate the lower-fidelity code. In this paper, we employ an information-theoretic framework to determine the reduction in parameter uncertainty that is obtained by evaluating the high-fidelity code at a specific set of design conditions. These conditions are chosen sequentially, based on the amount of information that they contribute to the low-fidelity model parameters. The goal is to employ Bayesian experimental design techniques to minimize the number of high-fidelity code evaluations required to accurately calibrate the low-fidelity model. We illustrate the performance of this framework using heat and diffusion examples, a 1-D kinetic neutron diffusion equation, and a particle transport model, and include initial results from the integration of the high-fidelity thermal-hydraulics code Hydra-TH with a low-fidelity exponential model for the friction correlation factor.
Derivation and calibration of a gas metal arc welding (GMAW) dynamic droplet model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reutzel, E.W.; Einerson, C.J.; Johnson, J.A.
1996-12-31
A rudimentary, existing dynamic model for droplet growth and detachment in gas metal arc welding (GMAW) was improved and calibrated to match experimental data. The model simulates droplets growing at the end of an imaginary spring. Mass is added to the drop as the electrode melts, the droplet grows, and the spring is displaced. Detachment occurs when one of two criteria is met, and the amount of mass that is detached is a function of the droplet velocity at the time of detachment. Improvements to the model include the addition of a second criterion for drop detachment, a more sophisticated model of the power supply and secondary electric circuit, and the incorporation of a variable electrode resistance. Relevant physical parameters in the model were adjusted during model calibration. The average current, droplet frequency, and parameter-space location of globular-to-streaming mode transition were used as criteria for tuning the model. The average current predicted by the calibrated model matched the experimental average current to within 5% over a wide range of operating conditions.
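For orientation, below is a heavily simplified, illustrative rendering of such a spring-type droplet model: mass grows at a constant melting rate, a spring-damper opposes the driving forces, and a single displacement criterion triggers detachment. The paper's calibrated model uses two detachment criteria and a velocity-dependent detached mass; all parameter values here are made up.

```python
# Heavily simplified, illustrative spring-type droplet model (not the paper's
# calibrated model): the molten drop grows at a constant melting rate, a
# spring-damper opposes gravity plus a constant electromagnetic force, and
# detachment fires on a displacement criterion.
dt, t_end = 1e-6, 0.05        # time step and simulated interval [s]
k, b = 4.0, 5e-5              # effective spring [N/m] and damping [N*s/m]
melt_rate = 3e-4              # electrode melting rate [kg/s]
g, f_em = 9.81, 2e-3          # gravity [m/s^2], electromagnetic force [N]
x_crit, m0 = 0.6e-3, 2e-7     # detachment displacement [m], seed mass [kg]

m, x, v, t, n_drops = m0, 0.0, 0.0, 0.0, 0
while t < t_end:
    m += melt_rate * dt                       # melting grows the drop
    a = (m * g + f_em - k * x - b * v) / m    # force balance on the drop
    v += a * dt
    x += v * dt
    t += dt
    if x >= x_crit:                           # displacement criterion met
        n_drops += 1
        m, x, v = m0, 0.0, 0.0                # drop detaches; a new one forms

print(f"droplet frequency ~ {n_drops / t_end:.0f} Hz")
```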
Stochastic isotropic hyperelastic materials: constitutive calibration and model selection
NASA Astrophysics Data System (ADS)
Mihai, L. Angela; Woolley, Thomas E.; Goriely, Alain
2018-03-01
Biological and synthetic materials often exhibit intrinsic variability in their elastic responses under large strains, owing to microstructural inhomogeneity or when elastic data are extracted from viscoelastic mechanical tests. For these materials, although hyperelastic models calibrated to mean data are useful, stochastic representations accounting also for data dispersion carry extra information about the variability of material properties found in practical applications. We combine finite elasticity and information theories to construct homogeneous isotropic hyperelastic models with random field parameters calibrated to discrete mean values and standard deviations of either the stress-strain function or the nonlinear shear modulus, which is a function of the deformation, estimated from experimental tests. These quantities can take on different values, corresponding to possible outcomes of the experiments. As multiple models can be derived that adequately represent the observed phenomena, we apply Occam's razor by providing an explicit criterion for model selection based on Bayesian statistics. We then employ this criterion to select a model among competing models calibrated to experimental data for rubber and brain tissue under single or multiaxial loads.
Experimental validation of a self-calibrating cryogenic mass flowmeter
NASA Astrophysics Data System (ADS)
Janzen, A.; Boersch, M.; Burger, B.; Drache, J.; Ebersoldt, A.; Erni, P.; Feldbusch, F.; Oertig, D.; Grohmann, S.
2017-12-01
The Karlsruhe Institute of Technology (KIT) and WEKA AG are jointly developing a commercial flowmeter for application in helium cryostats. The flowmeter functions according to a new thermal measurement principle that eliminates all systematic uncertainties and enables self-calibration during real operation. Ideally, the resulting uncertainty of the measured flow rate depends only on signal noise, which is typically very small relative to the measured value. Under real operating conditions, cryoplant-dependent flow rate fluctuations induce an additional uncertainty, which follows from the sensitivity of the method. This paper presents experimental results with helium at temperatures between 30 and 70 K and flow rates in the range of 4 to 12 g/s. The experiments were carried out in a control cryostat of the 2 kW helium refrigerator of the TOSKA test facility at KIT. Inside the cryostat, the new flowmeter was installed in series with a Venturi tube that was used for reference measurements. The measurement results demonstrate the self-calibration capability during real cryoplant operation. The influences of temperature and flow rate fluctuations on the self-calibration uncertainty are discussed.
The simple procedure for the fluxgate magnetometers calibration
NASA Astrophysics Data System (ADS)
Marusenkov, Andriy
2014-05-01
Fluxgate magnetometers are widely used in geophysical investigations, including geomagnetic field monitoring at the global network of geomagnetic observatories as well as electromagnetic sounding of the Earth's crust conductivity. For these tasks the magnetometers have to be calibrated with an appropriate level of accuracy. As a particular case, ways to satisfy the recent requirements on the scaling and orientation errors of 1-second INTERMAGNET magnetometers are considered in this work. The goal of the present study was to choose a simple and reliable calibration method for estimating the scale factors and angular errors of three-axis magnetometers in the field. There are a large number of scalar calibration methods that use a free rotation of the sensor in the calibration field, followed by complicated data processing procedures for numerical solution of a high-order equation set. The chosen approach also exploits the Earth's magnetic field as a calibrating signal but, in contrast to other methods, the sensor is oriented in particular positions with respect to the total field vector instead of being rotated freely. This allows the use of very simple and straightforward linear computation formulas and, as a result, more reliable estimates of the calibrated parameters. The scale factors are estimated by sequentially aligning each component of the sensor in two positions: parallel and anti-parallel to the Earth's magnetic field vector. The non-orthogonality angles between each pair of components are estimated after sequentially aligning the components at angles of +/- 45 and +/- 135 degrees of arc with respect to the total field vector. Owing to this four-position approach, the estimates of the non-orthogonality angles are invariant to the zero offsets and to nonlinearity of the components' transfer functions. Experimental justification of the proposed method by means of a coil calibration system reveals that the achieved accuracy (<0.04% for scale factors and 0.03 degrees of arc for angle errors) is sufficient for many applications, particularly for satisfying the INTERMAGNET requirements for 1-second instruments.
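A minimal sketch of the two-position scale-factor estimate described above; the readings and field value are illustrative:

```python
# Two-position estimate for one component: with the axis aligned parallel and
# then anti-parallel to the Earth's field F, scale factor and zero offset
# follow from two linear equations. Readings are illustrative.
F = 49_850.0                # local total field magnitude [nT], from a reference

v_par = 49_912.3            # component reading, axis parallel to F [nT]
v_anti = -49_787.1          # component reading, axis anti-parallel to F [nT]

scale = (v_par - v_anti) / (2.0 * F)   # dimensionless scale factor
offset = (v_par + v_anti) / 2.0        # zero offset [nT]
print(f"scale = {scale:.5f}, offset = {offset:+.1f} nT")

# The non-orthogonality angle of an axis pair follows analogously from the
# four orientations at +/-45 and +/-135 degrees to F; as the text notes,
# those estimates are insensitive to the offsets and to transfer-function
# nonlinearity.
```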
In-situ electrochemical transmission electron microscopy for battery research.
Mehdi, B Layla; Gu, Meng; Parent, Lucas R; Xu, Wu; Nasybulin, Eduard N; Chen, Xilin; Unocic, Raymond R; Xu, Pinghong; Welch, David A; Abellan, Patricia; Zhang, Ji-Guang; Liu, Jun; Wang, Chong-Min; Arslan, Ilke; Evans, James; Browning, Nigel D
2014-04-01
The recent development of in-situ liquid stages for (scanning) transmission electron microscopes now makes it possible to study the details of electrochemical processes under operando conditions. As electrochemical processes are complex, care must be taken to calibrate the system before any in-situ/operando observations. In addition, because the electron beam can cause effects that look similar to electrochemical processes at the electrolyte/electrode interface, the role of the electron beam in modifying the operando observations must also be understood. In this paper we describe the design, assembly, and operation of an in-situ electrochemical cell, paying particular attention to the method for controlling and quantifying the experimental parameters. The use of this system is then demonstrated for the lithiation/delithiation of silicon nanowires.
Taylor, Alexander J; Granwehr, Josef; Lesbats, Clémentine; Krupa, James L; Six, Joseph S; Pavlovskaya, Galina E; Thomas, Neil R; Auer, Dorothee P; Meersmann, Thomas; Faas, Henryk M
2016-01-01
Due to the low fluorine background signal in vivo, 19F is a good marker to study the fate of exogenous molecules by magnetic resonance imaging (MRI) using equilibrium nuclear spin polarization schemes. Since 19F MRI applications require high sensitivity, it can be important to assess experimental feasibility already at the design stage by estimating the minimum detectable fluorine concentration. Here we propose a simple method for the calibration of MRI hardware, providing sensitivity estimates for a given scanner and coil configuration. An experimental "calibration factor" to account for variations in coil configuration and hardware set-up is specified. Once it has been determined in a calibration experiment, the sensitivity of an experiment or, alternatively, the minimum number of required spins or the minimum marker concentration can be estimated without the need for a pilot experiment. The definition of this calibration factor is derived from standard equations for the sensitivity in magnetic resonance, yet the method is not restricted by the limited validity of these equations, since additional instrument-dependent factors are implicitly included during calibration. The method is demonstrated using MR spectroscopy and imaging experiments with different 19F samples, both paramagnetically and susceptibility broadened, to approximate a range of realistic environments.
Hydrophone area-averaging correction factors in nonlinearly generated ultrasonic beams
NASA Astrophysics Data System (ADS)
Cooling, M. P.; Humphrey, V. F.; Wilkens, V.
2011-02-01
The nonlinear propagation of an ultrasonic wave can be used to produce a wavefield rich in higher frequency components that is ideally suited to the calibration, or inter-calibration, of hydrophones. These techniques usually use a tone-burst signal, limiting the measurements to harmonics of the fundamental calibration frequency. Alternatively, using a short pulse enables calibration at a continuous spectrum of frequencies. Such a technique is used at PTB in conjunction with an optical measurement technique to calibrate devices. Experimental findings indicate that the area-averaging correction factor for a hydrophone in such a field demonstrates a complex behaviour, most notably varying periodically between frequencies that are harmonics of the centre frequency of the original pulse and frequencies that lie midway between these harmonics. The beam characteristics of such nonlinearly generated fields have been investigated using a finite difference solution to the nonlinear Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation for a focused field. The simulation results are used to calculate the hydrophone area-averaging correction factors for 0.2 mm and 0.5 mm devices. The results clearly demonstrate a number of significant features observed in the experimental investigations, including the variation with frequency, drive level and hydrophone element size. An explanation for these effects is also proposed.
Ghirlando, Rodolfo; Balbo, Andrea; Piszczek, Grzegorz; Brown, Patrick H.; Lewis, Marc S.; Brautigam, Chad A.; Schuck, Peter; Zhao, Huaying
2013-01-01
Sedimentation velocity (SV) is a method based on first-principles that provides a precise hydrodynamic characterization of macromolecules in solution. Due to recent improvements in data analysis, the accuracy of experimental SV data emerges as a limiting factor in its interpretation. Our goal was to unravel the sources of experimental error and develop improved calibration procedures. We implemented the use of a Thermochron iButton® temperature logger to directly measure the temperature of a spinning rotor, and detected deviations that can translate into an error of as much as 10% in the sedimentation coefficient. We further designed a precision mask with equidistant markers to correct for instrumental errors in the radial calibration, which were observed to span a range of 8.6%. The need for an independent time calibration emerged with use of the current data acquisition software (Zhao et al., doi 10.1016/j.ab.2013.02.011) and we now show that smaller but significant time errors of up to 2% also occur with earlier versions. After application of these calibration corrections, the sedimentation coefficients obtained from eleven instruments displayed a significantly reduced standard deviation of ∼ 0.7 %. This study demonstrates the need for external calibration procedures and regular control experiments with a sedimentation coefficient standard. PMID:23711724
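For context on why these calibrations matter quantitatively, here is a minimal sketch of how a sedimentation coefficient follows from boundary positions over time, so that time-base and radial-scale errors propagate directly into s; the data are synthetic.

```python
# Minimal sketch: the sedimentation coefficient s is the slope of ln(r)
# versus omega^2 * t for the moving boundary, so a 2% time error or an
# 8.6% radial-scale error biases s by directly comparable amounts (rotor
# temperature acts separately through the solvent viscosity). Synthetic data.
import numpy as np

rpm = 50_000.0
omega = rpm * 2.0 * np.pi / 60.0                     # rotor speed [rad/s]

t = np.array([0.0, 600.0, 1200.0, 1800.0])           # scan times [s]
r = np.array([6.000, 6.083, 6.168, 6.254])           # boundary radius [cm]

s = np.polyfit(omega**2 * t, np.log(r), 1)[0]        # slope = s [s]
print(f"s ~ {s * 1e13:.2f} S")
```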
A practical approach for the scale-up of roller compaction process.
Shi, Weixian; Sprockel, Omar L
2016-09-01
An alternative approach for the scale-up of ribbon formation during roller compaction was investigated, which required only one batch at the commercial scale to set the operational conditions. The scale-up of ribbon formation was based on a probability method, which was sufficient to describe the mechanism of ribbon formation at both scales. In this method, a statistical relationship between roller compaction parameters and ribbon attributes (thickness and density) was first defined with a DoE using a pilot Alexanderwerk WP120 roller compactor. While the milling speed was included in the design, it had no practical effect on granule properties within the study range despite its statistical significance. The statistical relationship was then adapted to a commercial Alexanderwerk WP200 roller compactor with one experimental run. The experimental run served as a calibration of the statistical model parameters. The proposed transfer method was then confirmed by conducting a mapping study on the Alexanderwerk WP200 using a factorial DoE, which showed a match between the predictions and the verification experiments. The study demonstrates the applicability of the roller compaction transfer method using the statistical model from the development scale calibrated with one experimental point at the commercial scale. Copyright © 2016 Elsevier B.V. All rights reserved.
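One way to picture the one-batch calibration is as a re-estimation of the pilot model's intercept at the larger scale, with the slopes carried over; this is a sketch under that assumption, with an invented model form and numbers.

```python
# Sketch of the transfer idea: keep the pilot-scale model's structure
# (slopes) and recalibrate only its intercept with a single commercial-scale
# batch. Model form, units, and all numbers are illustrative.
def ribbon_density_pilot(roll_force, roll_gap):
    # DoE-fitted pilot-scale model (WP120); coefficients are invented.
    return 0.92 + 0.021 * roll_force - 0.035 * roll_gap   # [g/cm^3]

# One calibration batch on the commercial unit (WP200):
force_c, gap_c, measured_density = 9.0, 2.5, 1.08
bias = measured_density - ribbon_density_pilot(force_c, gap_c)

def ribbon_density_commercial(roll_force, roll_gap):
    # Transferred model: pilot slopes plus the one-batch intercept shift.
    return ribbon_density_pilot(roll_force, roll_gap) + bias

print(f"intercept shift = {bias:+.3f} g/cm^3")
```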
Design and Theoretical Analysis of a Resonant Sensor for Liquid Density Measurement
Zheng, Dezhi; Shi, Jiying; Fan, Shangchun
2012-01-01
In order to increase the accuracy of on-line liquid density measurements, a sensor equipped with a tuning fork as the resonant sensitive component is designed in this paper. It is a quasi-digital sensor with a simple structure and high precision. The sensor is based on resonance theory and composed of a sensitive unit and a closed-loop control unit, where the sensitive unit consists of the actuator, the resonant tuning fork, and the detector, and the closed-loop control unit comprises a preconditioning circuit, a digital signal processing and control unit, an analog-to-digital converter, and a digital-to-analog converter. An approximate parameter model of the tuning fork is established, and the impacts of liquid density, position of the tuning fork, temperature, and structural parameters on the natural frequency of the tuning fork are analyzed. On this basis, a tuning fork liquid density measurement sensor is developed. In addition, experimental testing of the sensor has been carried out on standard calibration facilities at a constant 20 °C, and the sensor coefficients have been calibrated. The experimental results show that the repeatability error is about 0.03% and the accuracy is about 0.4 kg/m3. The results also confirm that the method to increase the accuracy of liquid density measurement is feasible. PMID:22969378
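As generic background (not the paper's specific model), resonant liquid-density sensors are often calibrated with a two-coefficient relation between density and the measured oscillation period; a sketch with invented numbers:

```python
# Generic two-coefficient calibration commonly used for resonant density
# sensors: the liquid loads the fork, lowering its natural frequency, and
# density is recovered from the oscillation period tau. This form and the
# numbers are illustrative, not taken from the paper.
import numpy as np

# Calibration: measure the resonance period in two reference liquids.
tau = np.array([1.052e-3, 1.118e-3])        # measured periods [s]
rho = np.array([780.0, 1000.0])             # reference densities [kg/m^3]

# rho = k0 + k1 * tau^2  ->  solve the 2x2 linear system for (k0, k1).
A = np.vstack([np.ones_like(tau), tau**2]).T
k0, k1 = np.linalg.solve(A, rho)

# Measurement: convert an observed period to density.
tau_meas = 1.090e-3
rho_meas = k0 + k1 * tau_meas**2
print(f"rho = {rho_meas:.1f} kg/m^3")
```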
LIBS analysis of artificial calcified tissues matrices.
Kasem, M A; Gonzalez, J J; Russo, R E; Harith, M A
2013-04-15
In most laser-based analytical methods, the reproducibility of quantitative measurements strongly depends on maintaining uniform and stable experimental conditions. For LIBS analysis this means that for accurate estimation of elemental concentration, using the calibration curves obtained from reference samples, the plasma parameters have to be kept as constant as possible. In addition, calcified tissues such as bone are normally less "tough" in their texture than many samples, especially metals. Thus, the ablation process could change the sample morphological features rapidly, and result in poor reproducibility statistics. In the present work, three artificial reference sample sets have been fabricated. These samples represent three different calcium based matrices, CaCO3 matrix, bone ash matrix and Ca hydroxyapatite matrix. A comparative study of UV (266 nm) and IR (1064 nm) LIBS for these three sets of samples has been performed under similar experimental conditions for the two systems (laser energy, spot size, repetition rate, irradiance, etc.) to examine the wavelength effect. The analytical results demonstrated that UV-LIBS has improved reproducibility, precision, stable plasma conditions, better linear fitting, and the reduction of matrix effects. Bone ash could be used as a suitable standard reference material for calcified tissue calibration using LIBS with a 266 nm excitation wavelength. Copyright © 2013 Elsevier B.V. All rights reserved.
Development of a calibration equipment for spectrometer qualification
NASA Astrophysics Data System (ADS)
Michel, C.; Borguet, B.; Boueé, A.; Blain, P.; Deep, A.; Moreau, V.; François, M.; Maresi, L.; Myszkowiak, A.; Taccola, M.; Versluys, J.; Stockman, Y.
2017-09-01
With the development of new spectrometer concepts, calibration facilities must be adapted to characterize their performance correctly. The spectro-imaging performance parameters are mainly modulation transfer function, spectral response, resolution and registration, polarization, straylight, and radiometric calibration. The challenge of this calibration development is to achieve better performance than the item under test using mostly standard components. Because only the spectrometer subsystem needs to be calibrated, the calibration facility must simulate the geometrical behaviour of the imaging system. A trade-off study indicated that no commercial devices are able to fulfil all the requirements completely, so it was necessary to opt for an in-house telecentric achromatic design. The proposed concept is based on an Offner design, which mainly allows the use of simple spherical mirrors and coverage of the spectral range. The spectral range is covered with a monochromator. Because of the large number of parameters to record, the calibration facility is fully automated. The performance of the calibration system has been verified by analysis and experimentally. Results achieved recently on a free-form grating Offner spectrometer demonstrate the capabilities of this new calibration facility. In this paper, a full calibration facility is described, developed specifically for a new free-form spectro-imager.
Presas, Alexandre; Valentin, David; Egusquiza, Eduard; Valero, Carme; Egusquiza, Mònica; Bossio, Matias
2017-03-22
Accurately determining the dynamic response of a structure is of interest in many engineering applications. In particular, it is of paramount importance to determine the Frequency Response Function (FRF) of structures subjected to dynamic loads in order to avoid resonance and fatigue problems that can drastically reduce their useful life. One challenging case is the experimental determination of the FRF of submerged and confined structures, such as hydraulic turbines, which are greatly affected by dynamic problems, as reported in many past cases. The use of classical calibrated exciters such as instrumented hammers or shakers to determine the FRF of such structures can be very complex due to the confinement of the structure, and because their use can disturb the boundary conditions and thereby affect the experimental results. For such cases, Piezoelectric Patches (PZTs), which are very light, thin, and small, can be a very good option. Nevertheless, the main drawback of these exciters is that their calibration as dynamic force transducers (the voltage/force relationship) has not been successfully obtained in the past. Therefore, in this paper, a method to accurately determine the FRF of submerged and confined structures by using PZTs is developed and validated. The method consists of experimentally determining certain characteristic parameters that define the FRF with an uncalibrated PZT exciting the structure. These experimentally determined parameters are then introduced into a validated numerical model of the tested structure. In this way, the FRF of the structure can be estimated with good accuracy. With respect to previous studies, where only the natural frequencies and mode shapes were considered, this paper discusses and experimentally establishes the best excitation characteristics for also obtaining the damping ratios, and proposes a procedure to fully determine the FRF. The proposed method has been validated for the structure vibrating in air by comparing the FRF obtained experimentally with a calibrated exciter (impact hammer) and the FRF obtained with the described method. Finally, the same methodology has been applied to the structure submerged and close to a rigid wall, where it is extremely important not to modify the boundary conditions for an accurate determination of the FRF. As experimentally shown in this paper, in such cases the use of PZTs combined with the proposed methodology gives much more accurate estimates of the FRF than other calibrated exciters typically used for the same purpose. Therefore, the validated methodology proposed in this paper can be used to obtain the FRF of a generic submerged and confined structure without a previous calibration of the PZT.
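As generic background to FRF estimation (this is not the paper's combined experimental-numerical procedure), the standard H1 estimator computes the FRF from the cross- and auto-spectra of the excitation and response records; a self-contained sketch with a synthetic single-degree-of-freedom system:

```python
# H1 FRF estimator, H1(f) = Sxy(f) / Sxx(f), demonstrated on a synthetic
# single-degree-of-freedom resonance near 120 Hz. All signals are synthetic.
import numpy as np
from scipy.signal import TransferFunction, csd, lsim, welch

fs = 2048.0
t = np.arange(0, 16, 1 / fs)
rng = np.random.default_rng(0)
x = rng.standard_normal(t.size)          # broadband excitation record

wn, zeta = 2 * np.pi * 120.0, 0.02
sdof = TransferFunction([wn**2], [1.0, 2 * zeta * wn, wn**2])
_, y, _ = lsim(sdof, x, t)               # simulated structural response
y += 0.01 * rng.standard_normal(t.size)  # measurement noise

f, Sxy = csd(x, y, fs=fs, nperseg=4096)  # cross-spectrum excitation/response
_, Sxx = welch(x, fs=fs, nperseg=4096)   # auto-spectrum of the excitation
H1 = Sxy / Sxx
print(f"estimated resonance ~ {f[np.argmax(np.abs(H1))]:.1f} Hz")
```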
NASA Astrophysics Data System (ADS)
Zhang, Fangkun; Liu, Tao; Wang, Xue Z.; Liu, Jingxiang; Jiang, Xiaobin
2017-02-01
In this paper, calibration model building based on ATR-FTIR spectroscopy is investigated for in-situ measurement of the solution concentration during a cooling crystallization process. The cooling crystallization of L-glutamic acid (LGA) is studied as a case. It was found that using metastable zone (MSZ) data for model calibration can guarantee the prediction accuracy for monitoring the operating window of cooling crystallization, compared to the usage of undersaturated zone (USZ) spectra for model building as traditionally practiced. Calibration experiments were made for LGA solutions of different concentrations. Four candidate calibration models were established using data from different zones for comparison, by applying a multivariate partial least-squares (PLS) regression algorithm to the collected spectra together with the corresponding temperature values. Experiments under different process conditions, including changes of solution concentration and operating temperature, were conducted. The results indicate that using the MSZ spectra for model calibration gives more accurate prediction of the solution concentration during the crystallization process, while maintaining accuracy under changes of the operating temperature. The primary source of prediction error was identified as spectral nonlinearity between USZ and MSZ for in-situ measurement. In addition, an LGA cooling crystallization experiment was performed to verify the sensitivity of these calibration models for monitoring the crystal growth process.
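A minimal sketch of this kind of PLS calibration, assuming each spectrum is a row of a matrix augmented with its temperature; the shapes, names, and random stand-in data are illustrative and carry no real spectral signal.

```python
# Sketch of a PLS calibration mapping (spectrum + temperature) -> solution
# concentration. The random arrays are stand-ins to show shapes and API use.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 40, 600
spectra = rng.random((n_samples, n_wavenumbers))     # stand-in for MSZ spectra
temperature = rng.uniform(20.0, 60.0, n_samples)     # [deg C]
concentration = rng.uniform(10.0, 40.0, n_samples)   # calibration targets

# Augment each spectrum with its temperature, since the model uses both.
X = np.hstack([spectra, temperature[:, None]])
model = PLSRegression(n_components=5).fit(X, concentration)

# In-situ use: predict concentration from a newly collected spectrum + T.
c_hat = float(model.predict(X[:1]).ravel()[0])
```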
USDA-ARS?s Scientific Manuscript database
Calibration of process-based hydrologic models is a challenging task in data-poor basins, where monitored hydrologic data are scarce. In this study, we present a novel approach that benefits from remotely sensed evapotranspiration (ET) data to calibrate a complex watershed model, namely the Soil and...
Blocquet, Marion; Schoemaecker, Coralie; Amedro, Damien; Herbinet, Olivier; Battin-Leclerc, Frédérique; Fittschen, Christa
2013-01-01
•OH and •HO2 radicals are known to be the key species in the development of ignition. A direct measurement of these radicals under low-temperature oxidation conditions (T = 550–1,000 K) has been achieved by coupling fluorescence assay by gas expansion, a technique designed for the quantification of these radicals in the free atmosphere, to a jet-stirred reactor, an experimental device designed for the study of low-temperature combustion chemistry. Calibration allows conversion of relative fluorescence signals to absolute mole fractions. Such radical mole fraction profiles will serve as a benchmark for testing chemical models developed to improve the understanding of combustion processes. PMID:24277836
NASA Astrophysics Data System (ADS)
Martín-Doménech, R.; Manzano-Santamaría, J.; Muñoz Caro, G. M.; Cruz-Díaz, G. A.; Chen, Y.-J.; Herrero, V. J.; Tanarro, I.
2015-12-01
Context. Ice mantles formed on top of dust grains are photoprocessed by the secondary ultraviolet (UV) field in cold and dense molecular clouds. UV photons induce photochemistry and desorption of ice molecules. Experimental simulations dedicated to ice analogs under astrophysically relevant conditions are needed to understand these processes. Aims: We present UV-irradiation experiments of a pure CO2 ice analog. Calibration of the quadrupole mass spectrometer allowed us to quantify the photodesorption of molecules to the gas phase. This information was added to the data provided by the Fourier transform infrared spectrometer on the solid phase to obtain a complete quantitative study of the UV photoprocessing of an ice analog. Methods: Experimental simulations were performed in an ultra-high vacuum chamber. Ice samples were deposited onto an infrared-transparent window at 8 K and were subsequently irradiated with a microwave-discharged hydrogen flow lamp. After irradiation, ice samples were warmed up until complete sublimation was attained. Results: Photolysis of CO2 molecules initiates a network of photon-induced chemical reactions leading to the formation of CO, CO3, O2, and O3. During irradiation, photon-induced desorption of CO and, to a lesser extent, O2 and CO2 took place through a process called indirect desorption induced by electronic transitions, with maximum photodesorption yields (Ypd) of ~1.2 × 10^-2, ~9.3 × 10^-4, and ~1.1 × 10^-4 molecules per incident photon, respectively. Conclusions: Calibration of mass spectrometers allows a direct quantification of photodesorption yields instead of the indirect values that were obtained from infrared spectra in most previous works. Supplementary information provided by infrared spectroscopy leads to a complete quantification, and therefore a better understanding, of the processes taking place in UV-irradiated ice mantles. Appendix A is available in electronic form at http://www.aanda.org
Implementing fluid dynamics obtained from GeoPET in reactive transport models
NASA Astrophysics Data System (ADS)
Lippmann-Pipke, Johanna; Eichelbaum, Sebastian; Kulenkampff, Johannes
2016-04-01
Flow and transport simulations in geomaterials are commonly conducted on high-resolution tomograms (μCT) of the pore structure or stochastic models that are calibrated with measured integral quantities, like break through curves (BTC). Yet, there existed virtually no method for experimental verification of the simulated velocity distribution results. Positron emission tomography (PET) has unrivaled sensitivity and robustness for non-destructive, quantitative, spatio-temporal measurement of tracer concentrations in body tissue. In the past decade, we empowered PET for its applicability in opaque/geological media - GeoPET (Kulenkampff et al.; Kulenkampff et al., 2008; Zakhnini et al., 2013) and have developed detailed correction schemes to bring the images into sharp focus. Thereby it is the appropriate method for experimental verification and calibration of computer simulations of pore-scale transport by means of the observed propagation of a tracer pulse, c_PET(x,y,z,t). In parallel, we aimed at deriving velocity and porosity distributions directly from our concentration time series of fluid flow processes in geomaterials. This would allow us to directly benefit from lab scale observations and to parameterize respective numerical transport models. For this we have developed a robust spatiotemporal (3D+t) parameter extraction algorithm. Here, we will present its functionality, and demonstrate the use of obtained velocity distributions in finite element simulations of reactive transport processes on drill core scale. Kulenkampff, J., Gruendig, M., Zakhnini, A., Gerasch, R., and Lippmann-Pipke, J.: Process tomography of diffusion with PET for evaluating anisotropy and heterogeneity, Clay Minerals, in press. Kulenkampff, J., Gründig, M., Richter, M., and Enzmann, F.: Evaluation of positron emission tomography for visualisation of migration processes in geomaterials, Physics and Chemistry of the Earth, 33, 937-942, 2008. Zakhnini, A., Kulenkampff, J., Sauerzapf, S., Pietrzyk, U., and Lippmann-Pipke, J.: Monte Carlo simulations of GeoPET experiments: 3D images of tracer distributions (18-F, 124-I and 58-Co) in Opalinus Clay, anhydrite and quartz, Computers and Geosciences, 57 183-196, 2013.
Mallari, K J B; Kim, H; Pak, G; Aksoy, H; Yoon, J
2015-01-01
At the hillslope scale, where the rill-interrill configuration plays a significant role, infiltration is one of the major hydrologic processes affecting the generation of overland flow. As such, it is important to achieve a good understanding and accurate modelling of this process. Horton's infiltration model has been widely used in many hydrologic models, though it has occasionally been found limited in adequately handling the antecedent moisture conditions (AMC) of soil. Holtan's model, conversely, is thought to provide better estimates of infiltration rates, as it can directly account for initial soil water content in its formulation. In this study, the Holtan model is coupled to an existing overland flow model, which originally used Horton's model to account for infiltration, in an attempt to improve the prediction of runoff. For calibration and validation, experimental data from a two-dimensional flume incorporating a hillslope configuration have been used. Calibration and validation results showed that Holtan's model was able to improve the modelling results, with better performance statistics than the Horton-coupled model. Holtan's infiltration equation, which accounts for AMC, provided an advantage and resulted in better runoff prediction by the model.
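To make the contrast concrete, here is a sketch of the two model forms. One common statement of Holtan's equation is f = GI·a·SA^1.4 + fc, with SA the available storage in the surface layer, so antecedent moisture enters directly, whereas Horton's capacity depends only on elapsed time. All parameter values are illustrative and expressed in inch-based units, as in Holtan's original formulation.

```python
# Illustrative comparison of the Horton and Holtan infiltration forms.
# Parameter values are invented; units follow the inch-based convention.
import math

def horton(t, f0=2.0, fc=0.3, k=2.0):
    """Infiltration capacity [in/h] as a function of elapsed time t [h]."""
    return fc + (f0 - fc) * math.exp(-k * t)

def holtan(sa, gi=1.0, a=0.8, fc=0.3):
    """Infiltration capacity [in/h] as a function of available storage sa [in]."""
    return gi * a * sa**1.4 + fc

# Wet antecedent conditions simply mean a smaller initial available storage:
sa_dry, sa_wet = 2.4, 0.8   # available storage in the control depth [in]
print(f"dry: {holtan(sa_dry):.2f} in/h, wet: {holtan(sa_wet):.2f} in/h")
```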
Lozano, Valeria A; Ibañez, Gabriela A; Olivieri, Alejandro C
2009-10-05
In the presence of analyte-background interactions and a significant background signal, both second-order multivariate calibration and standard addition are required for successful analyte quantitation achieving the second-order advantage. This report discusses a modified second-order standard addition method, in which the test data matrix is subtracted from the standard addition matrices, and quantitation proceeds via the classical external calibration procedure. It is shown that this novel data processing method allows one to apply not only parallel factor analysis (PARAFAC) and multivariate curve resolution-alternating least-squares (MCR-ALS), but also the recently introduced and more flexible partial least-squares (PLS) models coupled to residual bilinearization (RBL). In particular, the multidimensional variant N-PLS/RBL is shown to produce the best analytical results. The comparison is carried out with the aid of a set of simulated data, as well as two experimental data sets: one aimed at the determination of salicylate in human serum in the presence of naproxen as an additional interferent, and the second one devoted to the analysis of danofloxacin in human serum in the presence of salicylate.
NASA Astrophysics Data System (ADS)
Shu, Di; Guo, Lei; Yin, Liang; Chen, Zhaoyang; Chen, Juan; Qi, Xin
2015-11-01
The average volume of magnetic Barkhausen jump (AVMBJ) v̄ generated by irreversible magnetic domain wall displacement under an excitation magnetic field H in ferromagnetic materials, together with the relationship between the irreversible magnetic susceptibility χirr and stress σ, is adopted in this paper to study the theoretical relationship between the AVMBJ v̄ (magneto-elastic noise) and the excitation magnetic field H. The numerical relationship among the AVMBJ v̄, the stress σ, and the excitation magnetic field H is then deduced. Using this numerical relationship, the displacement process of the magnetic domain wall for a single crystal is analyzed, and the effects of the excitation magnetic field H and the stress σ on the AVMBJ v̄ (magneto-elastic noise) are explained from experimental and theoretical perspectives. The saturation velocity of the Barkhausen jump characteristic value curve differs when tensile or compressive stress is applied to ferromagnetic materials, because the resistance to magnetic domain wall displacement differs. The concept of a critical magnetic field in the process of magnetic domain wall displacement is introduced in this paper, which solves the supersaturated calibration problem of the AVMBJ-σ calibration curve.
Modeling and Calibration of a Novel One-Mirror Galvanometric Laser Scanner
Yu, Chengyi; Chen, Xiaobo; Xi, Juntong
2017-01-01
A laser stripe sensor has limited application when a point cloud of geometric samples on the surface of the object needs to be collected, so a galvanometric laser scanner is designed by using a one-mirror galvanometer element as its mechanical device to drive the laser stripe to sweep along the object. A novel mathematical model is derived for the proposed galvanometer laser scanner without any position assumptions and then a model-driven calibration procedure is proposed. Compared with available model-driven approaches, the influence of machining and assembly errors is considered in the proposed model. Meanwhile, a plane-constraint-based approach is proposed to extract a large number of calibration points effectively and accurately to calibrate the galvanometric laser scanner. Repeatability and accuracy of the galvanometric laser scanner are evaluated on the automobile production line to verify the efficiency and accuracy of the proposed calibration method. Experimental results show that the proposed calibration approach yields similar measurement performance compared with a look-up table calibration method. PMID:28098844
Calibration of the NASA GRC 16 In. Mass-Flow Plug
NASA Technical Reports Server (NTRS)
Davis, David O.; Friedlander, David J.; Saunders, J. David; Frate, Franco C.; Foster, Lancert E.
2012-01-01
The results of an experimental calibration of the NASA Glenn Research Center 16 in. Mass-Flow Plug (MFP) are presented and compared to a previously obtained calibration of a 15 in. Mass-Flow Plug. An ASME low-beta, long-radius nozzle was used as the calibration reference. The discharge coefficient for the ASME nozzle was obtained by numerically simulating the flow through the nozzle with the WIND-US code. The results showed agreement between the 15 in. and 16 in. MFPs for area ratios (MFP to pipe area ratio) greater than 0.6 but deviate at area ratios below this value for reasons that are not fully understood. A general uncertainty analysis was also performed and indicates that large uncertainties in the calibration are present for low MFP area ratios.
Calibration of X-Ray diffractometer by the experimental comparison method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dudka, A. P., E-mail: dudka@ns.crys.ras.ru
2015-07-15
A software package for calibrating an X-ray diffractometer with an area detector has been developed. It is proposed to search for detector and goniometer calibration models whose parameters are reproduced in a series of measurements on a reference crystal. Reference (standard) crystals are prepared during the investigation; they should provide agreement of structural models in repeated analyses. The technique developed has been used to calibrate Xcalibur Sapphire and Eos, Gemini Ruby (Agilent) and Apex x8 and Apex Duo (Bruker) diffractometers. The main conclusions are as follows: the calibration maps are stable for several years and can be used to improve structural results; verified CCD detectors exhibit significant inhomogeneity of the efficiency (response) function; and a Bruker goniometer introduces smaller distortions than an Agilent goniometer.
One-calibrant kinetic calibration for on-site water sampling with solid-phase microextraction.
Ouyang, Gangfeng; Cui, Shufen; Qin, Zhipei; Pawliszyn, Janusz
2009-07-15
The existing solid-phase microextraction (SPME) kinetic calibration technique, which uses the desorption of preloaded standards to calibrate the extraction of the analytes, requires that the physicochemical properties of the standard be similar to those of the analyte, which has limited the application of the technique. In this study, a new method, termed the one-calibrant kinetic calibration technique, which can use the desorption of a single standard to calibrate all extracted analytes, was proposed. The theoretical considerations were validated by passive water sampling in the laboratory and rapid water sampling in the field. To mimic the variability of the environment, such as temperature, turbulence, and the concentration of the analytes, the flow-through system for the generation of standard aqueous polycyclic aromatic hydrocarbon (PAH) solutions was modified. The experimental results of the passive samplings in the flow-through system illustrated that the effect of the environmental variables was successfully compensated for by the kinetic calibration technique, and all extracted analytes could be calibrated through the desorption of a single calibrant. On-site water sampling with rotated SPME fibers also illustrated the feasibility of the new technique for rapid on-site sampling of hydrophobic organic pollutants in water. This technique will accelerate the application of the kinetic calibration method and will also be useful for other microextraction techniques.
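A sketch of the underlying kinetic-calibration relation (standard in-fiber standardization), under the usual assumption that desorption of a preloaded standard mirrors absorption of the analyte, so n/ne = 1 − q/q0; the paper's contribution is letting one calibrant stand in for all analytes. The values below are illustrative.

```python
# In-fiber standardization sketch: the fraction of the preloaded standard
# desorbed gives the fraction of equilibrium the analyte has reached, and
# with ne = Kfs*Vf*C0 the water concentration follows. Illustrative values.
q0 = 10.0          # preloaded standard on the fiber before sampling [ng]
q = 6.2            # standard remaining after the sampling interval [ng]
n = 4.1            # analyte mass extracted in the same interval [ng]

Kfs_Vf = 0.85      # fiber/water distribution constant times fiber volume [mL]

frac_equilibrium = 1.0 - q / q0          # fraction of equilibrium reached
C0 = n / (Kfs_Vf * frac_equilibrium)     # analyte concentration [ng/mL]
print(f"C0 ~ {C0:.2f} ng/mL")
```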
New and improved apparatus and method for monitoring the intensities of charged-particle beams
Varma, M.N.; Baum, J.W.
1981-01-16
Charged particle beam monitoring means are disposed in the path of a charged particle beam in an experimental device. The monitoring means comprise a beam monitoring component which is operable to prevent passage of a portion of the beam, while concomitantly permitting passage of another portion thereof for incidence in an experimental chamber, and providing a signal (I_m) indicative of the intensity of the beam portion which is not passed. Calibration means are disposed in the experimental chamber in the path of the said another beam portion and are operable to provide a signal (I_f) indicative of the intensity thereof. Means are provided to determine the ratio (R) between said signals whereby, after suitable calibration, the calibration means may be removed from the experimental chamber and the intensity of the said another beam portion determined by monitoring of the monitoring means signal, per se.
Fukuda, Ikuma; Hayashi, Hiroaki; Takegami, Kazuki; Konishi, Yuki
2013-09-01
Diagnostic X-ray equipment was used to develop an experimental apparatus for calibrating a CdTe detector. Powder-type samples were irradiated with collimated X-rays. On excitation of the atoms, characteristic X-rays were emitted. We prepared Nb2O5, SnO2, La2O3, Gd2O3, and WO3 metal oxide samples. Experiments using the diagnostic X-ray equipment were carried out to verify the practicality of our apparatus. First, we verified that the collimators in the apparatus worked well. Second, the X-ray spectra were measured using the prepared samples. Finally, we analyzed the spectra, which indicated that the energy calibration curve had been obtained with an accuracy of ±0.06 keV. The developed apparatus is convenient to use, suggesting that it will be useful for the practical training of beginners and researchers.
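A sketch of the energy-calibration step itself: the known K-alpha lines of the metals in the oxide samples give (channel, energy) pairs, and a straight line maps channel number to keV. The channel numbers below are illustrative; the energies are standard K-alpha1 values.

```python
# Linear energy calibration of a CdTe spectrum from characteristic K-alpha1
# lines of Nb, Sn, La, Gd, and W. Channel positions are illustrative.
import numpy as np

channels = np.array([335.0, 510.0, 675.0, 868.0, 1198.0])   # fitted peak centers
energies = np.array([16.62, 25.27, 33.44, 43.00, 59.32])    # K-alpha1 [keV]

gain, offset = np.polyfit(channels, energies, 1)
residuals = energies - (gain * channels + offset)
print(f"E[keV] = {gain:.5f}*ch + {offset:.3f}; "
      f"max |residual| = {np.abs(residuals).max():.3f} keV")
```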
Note: Ultrasonic gas flowmeter based on optimized time-of-flight algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, X. F.; Tang, Z. A.
2011-04-15
A new digital signal processor based single-path ultrasonic gas flowmeter is designed, constructed, and experimentally tested. To achieve high-accuracy measurements, an optimized ultrasound drive method incorporating amplitude modulation and phase modulation of the transmit-receive technique is used to stimulate the transmitter. Based on the regularities among the received envelope zero-crossings, different signal-to-noise ratio situations of the received signal are discriminated and the optimal time-of-flight algorithms are applied for the flow rate calculations. Experimental results from the dry calibration indicate that the designed flowmeter prototype can meet the zero-flow verification test requirements of the American Gas Association Report No. 9. Furthermore, the results derived from the flow calibration prove that the proposed flowmeter prototype can measure flow rate accurately in practical experiments, and the nominal accuracies after FWME adjustment are below 0.8% throughout the calibration range.
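For context, the transit-time relation behind a single-path meter converts the up/downstream time-of-flight difference into axial velocity without needing the sound speed; a sketch with illustrative geometry and timings:

```python
# Transit-time relation for a single-path ultrasonic flowmeter: with path
# length L at angle theta to the pipe axis, t_down = L/(c + v*cos(theta)) and
# t_up = L/(c - v*cos(theta)) combine so that c cancels exactly.
# All values are illustrative.
import math

L = 0.122                    # acoustic path length [m]
theta = math.radians(45.0)   # angle between path and pipe axis
t_down = 354.23e-6           # downstream transit time [s]
t_up = 357.16e-6             # upstream transit time [s]

v = L * (t_up - t_down) / (2.0 * math.cos(theta) * t_up * t_down)
print(f"axial velocity = {v:.3f} m/s")
```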
Heterodyne interferometry method for calibration of a Soleil-Babinet compensator.
Zhang, Wenjing; Zhang, Zhiwei
2016-05-20
A method based on the common-path heterodyne interferometer system is proposed for the calibration of a Soleil-Babinet compensator. In this heterodyne interferometer system, which consists of two acousto-optic modulators, the compensator being calibrated is inserted into the signal path. By using the reference beam as the benchmark and a lock-in amplifier (SR844) as the phase retardation collector, retardations of 0 and λ (one wavelength) can be located accurately, and an arbitrary retardation between 0 and λ can also be measured accurately and continuously. By fitting a straight line to the experimental data, we obtained a linear correlation coefficient (R) of 0.995, which indicates that this system is capable of linear phase detection. The experimental results demonstrate determination accuracies of 0.212° and 0.26° and measurement precisions of 0.054° and 0.608° for retardations of 0 and λ, respectively.
Design of experiments and data analysis challenges in calibration for forensics applications
Anderson-Cook, Christine M.; Burr, Thomas L.; Hamada, Michael S.; ...
2015-07-15
Forensic science aims to infer characteristics of source terms using measured observables. Our focus is on statistical design of experiments and data analysis challenges arising in nuclear forensics. More specifically, we focus on inferring aspects of experimental conditions (of a process to produce product Pu oxide powder), such as temperature, nitric acid concentration, and Pu concentration, using measured features of the product Pu oxide powder. The measured features, Y, include trace chemical concentrations and particle morphology such as particle size and shape of the produced Pu oxide powder particles. Making inferences about the nature of inputs X that were used to create nuclear materials having particular characteristics, Y, is an inverse problem. Therefore, statistical analysis can be used to identify the best set (or sets) of Xs for a new set of observed responses Y. One can fit a model (or models) such as Y = f(X) + error, for each of the responses, based on a calibration experiment and then "invert" to solve for the best set of Xs for a new set of Ys. This perspectives paper uses archived experimental data to consider aspects of data collection and experiment design for the calibration data to maximize the quality of the predicted Ys in the forward models; that is, we assume that well-estimated forward models are effective in the inverse problem. In addition, we consider how to identify a best solution for the inferred X, and evaluate the quality of the result and its robustness to a variety of initial assumptions and different correlation structures between the responses. Finally, we briefly review recent advances in metrology issues related to characterizing the particle morphology measurements used in the response vector, Y.
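The fit-then-invert workflow can be illustrated with a toy forward model: regress Y on quadratic features of X from a calibration experiment, then search for the X whose predicted Y best matches a new observation. Everything below (the synthetic data, the feature set, the optimizer) is an illustrative assumption, not the authors' models or data.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic calibration experiment: inputs X (e.g., temperature, acid
# concentration) and responses Y (e.g., trace concentrations, morphology).
def true_f(X):
    return np.column_stack([2*X[:, 0] + X[:, 1]**2,
                            X[:, 0]*X[:, 1] + 0.5*X[:, 1]])

X = rng.uniform(0.0, 1.0, size=(50, 2))
Y = true_f(X) + 0.02 * rng.standard_normal((50, 2))

# Forward models: linear regression on quadratic features of X.
def features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1*x2, x1**2, x2**2])

coef, *_ = np.linalg.lstsq(features(X), Y, rcond=None)
predict = lambda x: features(np.atleast_2d(x)) @ coef

# Inverse problem: find the X whose predicted Y best matches a new Y.
y_new = np.array([1.1, 0.45])
obj = lambda x: np.sum((predict(x)[0] - y_new) ** 2)
res = minimize(obj, x0=[0.5, 0.5], bounds=[(0, 1), (0, 1)])
print("inferred X:", np.round(res.x, 3))
```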
NASA Astrophysics Data System (ADS)
Sobrino, J. A.; Skokovic, D.; Jimenez-Munoz, J. C.; Soria, G.; Julien, Y.
2016-08-01
The Global Change Unit (GCU) at the University of Valencia has been involved in several calibration/validation (cal/val) activities carried out in dedicated field campaigns organized by ESA and other organizations. However, permanent stations are required in order to ensure long-term and continuous calibration of on-orbit sensors. In the framework of the CEOS-Spain project, the GCU has managed the setting-up and launch of experimental sites in Spain for the calibration of thermal infrared sensors and the validation of Land Surface Temperature (LST) products derived from those data. Currently, three sites have been identified and equipped: the agricultural area of Barrax (39.05N, 2.1W), the marshland area in the National Park of Doñana (36.99N, 6.44W), and the semi-arid area of the National Park of Cabo de Gata (36.83N, 2.25W). The activities of the CEOS-Spain project also included the implementation of an operational processing chain in order to provide different remote sensing products, including LST, in near-real time. This work presents the performance of the permanent stations installed over the different test areas, as well as the cal/val results obtained for a number of Earth Observation sensors: SEVIRI, MODIS and the Landsat series. We also show the results obtained in the validation of LST products derived from AATSR, with discussion of the implications for the forthcoming Sentinel-3/SLSTR.
A fast calibration method for 3-D tracking of ultrasound images using a spatial localizer.
Pagoulatos, N; Haynor, D R; Kim, Y
2001-09-01
We have developed a fast calibration method for computing the position and orientation of 2-D ultrasound (US) images in 3-D space where a position sensor is mounted on the US probe. This calibration is required in the fields of 3-D ultrasound and registration of ultrasound with other imaging modalities. Most of the existing calibration methods require a complex and tedious experimental procedure. Our method is simple and it is based on a custom-built phantom. Thirty N-fiducials (markers in the shape of the letter "N") embedded in the phantom provide the basis for our calibration procedure. We calibrated a 3.5-MHz sector phased-array probe with a magnetic position sensor, and we studied the accuracy and precision of our method. A typical calibration procedure requires approximately 2 min. We conclude that we can achieve accurate and precise calibration using a single US image, provided that a large number (approximately ten) of N-fiducials are captured within the US image, enabling a representative sampling of the imaging plane.
Automatic Calibration Method for Driver’s Head Orientation in Natural Driving Environment
Fu, Xianping; Guan, Xiao; Peli, Eli; Liu, Hongbo; Luo, Gang
2013-01-01
Gaze tracking is crucial for studying driver's attention, detecting fatigue, and improving driver assistance systems, but it is difficult in natural driving environments due to nonuniform and highly variable illumination and large head movements. Traditional calibrations that require subjects to follow calibrators are very cumbersome to implement in daily driving situations. A new automatic calibration method, based on a single camera for determining the head orientation and which utilizes the side mirrors, the rear-view mirror, the instrument board, and different zones in the windshield as calibration points, is presented in this paper. Supported by a self-learning algorithm, the system tracks the head and categorizes the head pose into 12 gaze zones based on facial features. A particle filter is used to estimate the head pose and obtain an accurate gaze zone by updating the calibration parameters. Experimental results show that, after several hours of driving, the automatic calibration method can achieve the same accuracy as a manual calibration method without the driver's cooperation. The mean error of estimated eye gazes was less than 5° in day and night driving. PMID:24639620
Dinç, Erdal; Ozdemir, Abdil
2005-01-01
A multivariate chromatographic calibration technique was developed for the quantitative analysis of binary mixtures of enalapril maleate (EA) and hydrochlorothiazide (HCT) in tablets in the presence of losartan potassium (LST). The mathematical algorithm of the multivariate chromatographic calibration technique is based on linear regression equations constructed from the relationship between concentration and peak area at a five-wavelength set. The algorithm of this calibration model, which has a simple mathematical content, is briefly described. This approach is a powerful mathematical tool for optimum chromatographic multivariate calibration and for eliminating fluctuations arising from instrumental and experimental conditions. The multivariate chromatographic calibration involves the reduction of multivariate linear regression functions to a univariate data set. The validation of the model was carried out by analyzing various synthetic binary mixtures and by using the standard addition technique. The developed calibration technique was applied to the analysis of real pharmaceutical tablets containing EA and HCT. The obtained results were compared with those obtained by a classical HPLC method; the proposed multivariate chromatographic calibration was observed to give better results than classical HPLC.
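The calibration reduces to one univariate regression of peak area on concentration per wavelength, with the five univariate estimates then combined into a single result. A minimal numpy sketch on invented peak areas follows; the standards, areas, and wavelength set are illustrative assumptions.

```python
import numpy as np

# Assumed calibration standards (ug/mL) and peak areas recorded at a
# five-wavelength set; purely synthetic numbers for illustration.
conc = np.array([5.0, 10.0, 15.0, 20.0, 25.0])
areas = np.array([[ 52., 104., 155., 209., 258.],   # wavelength 1
                  [ 41.,  83., 124., 167., 206.],   # wavelength 2
                  [ 60., 119., 181., 240., 301.],   # wavelength 3
                  [ 35.,  71., 106., 141., 177.],   # wavelength 4
                  [ 48.,  95., 144., 190., 240.]])  # wavelength 5

# One univariate regression (area = b*conc + a) per wavelength.
fits = [np.polyfit(conc, areas[i], 1) for i in range(5)]

# For an unknown sample, invert each regression and average the five
# univariate estimates into a single multivariate result.
unknown_areas = np.array([125., 100., 145., 85., 115.])
estimates = [(A - a) / b for (b, a), A in zip(fits, unknown_areas)]
print("per-wavelength estimates:", np.round(estimates, 2))
print("combined concentration:", round(float(np.mean(estimates)), 2))
```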
Global Sensitivity Analysis and Parameter Calibration for an Ecosystem Carbon Model
NASA Astrophysics Data System (ADS)
Safta, C.; Ricciuto, D. M.; Sargsyan, K.; Najm, H. N.; Debusschere, B.; Thornton, P. E.
2013-12-01
We present uncertainty quantification results for a process-based ecosystem carbon model. The model employs 18 parameters and is driven by meteorological data corresponding to years 1992-2006 at the Harvard Forest site. Daily Net Ecosystem Exchange (NEE) observations were available to calibrate the model parameters and test the performance of the model. Posterior distributions show good predictive capabilities for the calibrated model. A global sensitivity analysis was first performed to determine the important model parameters based on their contribution to the variance of NEE. We then calibrated the model parameters in a Bayesian framework. The daily discrepancies between measured and predicted NEE values were modeled as independent and identically distributed Gaussians with prescribed daily variance according to the recorded instrument error. All model parameters were assumed to have uninformative priors with bounds set according to expert opinion. The global sensitivity results show that the rate of leaf fall (LEAFALL) is responsible for approximately 25% of the total variance in the average NEE for 1992-2005. A set of 4 other parameters, nitrogen use efficiency (NUE), base rate for maintenance respiration (BR_MR), growth respiration fraction (RG_FRAC), and allocation to plant stem pool (ASTEM), contribute between 5% and 12% to the variance in average NEE, while the rest of the parameters have smaller contributions. The posterior distributions, sampled with a Markov chain Monte Carlo algorithm, exhibit significant correlations between model parameters. However, LEAFALL, the most important parameter for the average NEE, is not informed by the observational data, while less important parameters show significant updates between their prior and posterior densities. The Fisher information matrix values, indicating which parameters are most informed by the experimental observations, are examined to augment the comparison between the calibration and global sensitivity analysis results.
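The calibration recipe in this abstract (an i.i.d. Gaussian likelihood built from daily NEE residuals with instrument-error variance, uniform expert-bounded priors, and Markov chain Monte Carlo sampling) can be sketched with a generic random-walk Metropolis loop. The toy two-parameter model below merely stands in for the ecosystem simulator; all names, bounds, and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the process model: maps parameters to daily NEE predictions.
days = np.arange(365)
def model(theta):
    leaffall, nue = theta
    return nue * np.sin(2*np.pi*days/365.0) - leaffall * 0.01

# Synthetic observations with known instrument noise (sigma per day).
theta_true = np.array([5.0, 2.0])
sigma = 0.3
obs = model(theta_true) + sigma * rng.standard_normal(days.size)

lo, hi = np.array([0.0, 0.0]), np.array([20.0, 10.0])  # uniform prior bounds
def log_post(theta):
    if np.any(theta < lo) or np.any(theta > hi):
        return -np.inf
    r = obs - model(theta)
    return -0.5 * np.sum((r / sigma) ** 2)

# Random-walk Metropolis sampler.
theta = np.array([10.0, 5.0])
lp = log_post(theta)
chain = []
for _ in range(20000):
    prop = theta + 0.1 * rng.standard_normal(2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
chain = np.array(chain)[5000:]  # discard burn-in
print("posterior mean:", np.round(chain.mean(axis=0), 2))
```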
Calibration and prediction of removal function in magnetorheological finishing.
Dai, Yifan; Song, Ci; Peng, Xiaoqiang; Shi, Feng
2010-01-20
A calibrated and predictive model of the removal function has been established based on the analysis of a magnetorheological finishing (MRF) process. By introducing an efficiency coefficient of the removal function, the model can be used to calibrate the removal function in a MRF figuring process and to accurately predict the removal function of a workpiece to be polished whose material is different from the spot part. Its correctness and feasibility have been validated by simulations. Furthermore, applying this model to the MRF figuring experiments, the efficiency coefficient of the removal function can be identified accurately to make the MRF figuring process deterministic and controllable. Therefore, all the results indicate that the calibrated and predictive model of the removal function can improve the finishing determinacy and increase the model applicability in a MRF process.
Experimental evidence of nitrous acid formation in the electron beam treatment of flue gas
NASA Astrophysics Data System (ADS)
Mätzing, H.; Namba, H.; Tokunaga, O.
1994-03-01
In the Electron Beam Dry Scrubbing (EBDS) process, flue gas from fossil fuel burning power plants is irradiated with accelerated (300-800 keV) electrons. Thereby, nitrogen oxide (NOx) and sulfur dioxide (SO2) traces are transformed into nitric and sulfuric acids, respectively, which are converted into particulate ammonium nitrate and sulfate upon the addition of ammonia. The powder can be filtered from the main gas stream and sold as agricultural fertilizer. Many experimental investigations of the EBDS process have been performed, and computer models have been developed to interpret the experimental results and to predict economic improvements. According to the model calculations, substantial amounts of intermediate nitrous acid (HNO2) are formed in the electron beam treatment of flue gas. However, no corresponding experimental information has been available so far. Therefore, we have undertaken the first experimental investigation of the formation of nitrous acid in an irradiated mixture of NO in synthetic air. Under these conditions, aerosol formation is avoided. UV spectra of the irradiated gas were recorded in the wavelength range λ = 345-375 nm. Both NO2 and HNO2 have characteristic absorption bands in this wavelength range. Calibration spectra of NO2 were subtracted from the sample spectra. The remaining absorption bands can clearly be assigned to nitrous acid. The concentration of nitrous acid was determined by differential optical absorption and was found to be lower than the model prediction. The importance of nitrous acid formation in the EBDS process needs to be clarified.
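The concentration retrieval by differential optical absorption rests on the Beer-Lambert law: once the scaled NO2 calibration spectrum is subtracted, the residual band depth A' = σ'·n·L yields the HNO2 number density n. The sketch below shows that arithmetic; the differential cross section, path length, and absorbance are assumed values, not those of the experiment.

```python
# Differential optical absorption: A' = sigma' * n * L, solved for the
# number density n of the absorber (here HNO2).
SIGMA_DIFF = 5.0e-19   # assumed differential cross section, cm^2/molecule
PATH_CM = 800.0        # assumed optical path length, cm

def number_density(diff_abs: float) -> float:
    """HNO2 number density (molecules/cm^3) from differential absorbance."""
    return diff_abs / (SIGMA_DIFF * PATH_CM)

# Residual differential absorbance after subtracting the scaled NO2
# calibration spectrum (illustrative value).
n = number_density(4.0e-4)
# ~2.46e19 molecules/cm^3 per atm at 298 K, i.e. 2.46e13 per ppm.
ppm = n / 2.46e13
print(f"n = {n:.3e} cm^-3  (~{ppm:.2f} ppm)")
```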
Abusam, A; Keesman, K J; van Straten, G; Spanjers, H; Meinema, K
2001-01-01
When applied to large simulation models, the process of parameter estimation is also called calibration. Calibration of complex non-linear systems, such as activated sludge plants, is often not an easy task. On the one hand, manual calibration of such complex systems is usually time-consuming, and its results are often not reproducible. On the other hand, conventional automatic calibration methods are not always straightforward and are often hampered by local minima problems. In this paper a new straightforward and automatic procedure, based on the response surface method (RSM), is proposed for selecting the best identifiable parameters. In RSM, the process response (output) is related to the levels of the input variables in terms of a first- or second-order regression model. Usually, RSM is used to relate measured process output quantities to process conditions. However, in this paper RSM is used for selecting the dominant parameters, by evaluating parameter sensitivity in a predefined region. Good results obtained in the calibration of ASM No. 1 for N-removal in a full-scale oxidation ditch prove that the proposed procedure is successful and reliable.
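The selection step can be illustrated by fitting a first-order response surface to outputs from a two-level factorial design in coded parameter units and ranking parameters by effect magnitude. The toy response below stands in for the activated sludge model; all parameters and numbers are assumptions.

```python
import itertools
import numpy as np

# Two-level full factorial design in coded units (-1/+1) for 4 candidate
# parameters of a stand-in process model.
levels = np.array(list(itertools.product([-1.0, 1.0], repeat=4)))

def process_output(x):
    # Toy response: parameters 0 and 2 dominate; 1 and 3 are weak.
    return 3.0*x[0] - 0.2*x[1] + 2.0*x[2] + 0.05*x[3]

y = np.array([process_output(x) for x in levels])

# First-order response surface y = b0 + sum(b_i * x_i); effects = |b_i|.
X = np.column_stack([np.ones(len(levels)), levels])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
effects = np.abs(b[1:])
ranking = np.argsort(effects)[::-1]
print("parameter ranking (most identifiable first):", ranking)
print("effect magnitudes:", np.round(effects[ranking], 3))
```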
Space-based infrared scanning sensor LOS determination and calibration using star observation
NASA Astrophysics Data System (ADS)
Chen, Jun; Xu, Zhan; An, Wei; Deng, Xin-Pu; Yang, Jun-Gang
2015-10-01
This paper provides a novel methodology for removing sensor bias from a space-based infrared (IR) system (SBIRS) through the use of stars detected in the background field of the sensor. A space-based IR system uses the line of sight (LOS) to a target for target location. LOS determination and calibration is therefore the key precondition for accurate location and tracking of targets, and the LOS calibration of a scanning sensor is one of the difficulties. Subsequent changes of the sensor bias are not taken into account in the conventional LOS determination and calibration process. Based on an analysis of the imaging process of the scanning sensor, a theoretical model for estimating the bias angles from star observations is proposed. The method establishes a process model for the bias angles and an observation model for the stars, uses an extended Kalman filter (EKF) to estimate the bias angles, and then calibrates the sensor LOS. Time-domain simulation results indicate that the proposed method has high precision and smooth performance for sensor LOS determination and calibration. The timeliness and precision requirements of the target tracking process in a space-based IR tracking system can be met with the proposed algorithm.
NASA Astrophysics Data System (ADS)
Raj, Rahul; van der Tol, Christiaan; Hamm, Nicholas Alexander Samuel; Stein, Alfred
2018-01-01
Parameters of a process-based forest growth simulator are difficult or impossible to obtain from field observations. Reliable estimates can be obtained using calibration against observations of output and state variables. In this study, we present a Bayesian framework to calibrate the widely used process-based simulator Biome-BGC against estimates of gross primary production (GPP) data. We used GPP partitioned from flux tower measurements of a net ecosystem exchange over a 55-year-old Douglas fir stand as an example. The uncertainties of both the Biome-BGC parameters and the simulated GPP values were estimated. The calibrated parameters leaf and fine root turnover (LFRT), ratio of fine root carbon to leaf carbon (FRC : LC), ratio of carbon to nitrogen in leaf (C : Nleaf), canopy water interception coefficient (Wint), fraction of leaf nitrogen in RuBisCO (FLNR), and effective soil rooting depth (SD) characterize the photosynthesis and carbon and nitrogen allocation in the forest. The calibration improved the root mean square error and enhanced Nash-Sutcliffe efficiency between simulated and flux tower daily GPP compared to the uncalibrated Biome-BGC. Nevertheless, the seasonal cycle for flux tower GPP was not reproduced exactly and some overestimation in spring and underestimation in summer remained after calibration. We hypothesized that the phenology exhibited a seasonal cycle that was not accurately reproduced by the simulator. We investigated this by calibrating the Biome-BGC to each month's flux tower GPP separately. As expected, the simulated GPP improved, but the calibrated parameter values suggested that the seasonal cycle of state variables in the simulator could be improved. It was concluded that the Bayesian framework for calibration can reveal features of the modelled physical processes and identify aspects of the process simulator that are too rigid.
NASA Astrophysics Data System (ADS)
Wiandt, T. J.
2008-06-01
The Hart Scientific Division of the Fluke Corporation operates two accredited standard platinum resistance thermometer (SPRT) calibration facilities, one at the Hart Scientific factory in Utah, USA, and the other at a service facility in Norwich, UK. The US facility is accredited through National Voluntary Laboratory Accreditation Program (NVLAP), and the UK facility is accredited through UKAS. Both provide SPRT calibrations using similar equipment and procedures, and at similar levels of uncertainty. These uncertainties are among the lowest available commercially. To achieve and maintain low uncertainties, it is required that the calibration procedures be thorough and optimized. However, to minimize customer downtime, it is also important that the instruments be calibrated in a timely manner and returned to the customer. Consequently, subjecting the instrument to repeated calibrations or extensive repeated measurements is not a viable approach. Additionally, these laboratories provide SPRT calibration services involving a wide variety of SPRT designs. These designs behave differently, yet predictably, when subjected to calibration measurements. To this end, an evaluation strategy involving both statistical process control and internal consistency measures is utilized to provide confidence in both the instrument calibration and the calibration process. This article describes the calibration facilities, procedure, uncertainty analysis, and internal quality assurance measures employed in the calibration of SPRTs. Data will be reviewed and generalities will be presented. Finally, challenges and considerations for future improvements will be discussed.
HYDICE postflight data processing
NASA Astrophysics Data System (ADS)
Aldrich, William S.; Kappus, Mary E.; Resmini, Ronald G.; Mitchell, Peter A.
1996-06-01
The hyperspectral digital imagery collection experiment (HYDICE) sensor records instrument counts for scene data, in-flight spectral and radiometric calibration sequences, and dark current levels onto an AMPEX DCRsi data tape. Following flight, the HYDICE ground data processing subsystem (GDPS) transforms selected scene data from digital numbers (DN) to calibrated radiance levels at the sensor aperture. This processing includes dark current correction, spectral and radiometric calibration, conversion to radiance, and replacement of bad detector elements. A description of the algorithms for post-flight data processing is presented. A brief analysis of the original radiometric calibration procedure is given, along with a description of the development of the modified procedure currently used. Example data collected during the 1995 flight season, shown both uncorrected and processed, demonstrate the removal of apparent sensor artifacts (e.g., non-uniformities in detector response over the array) as a result of this transformation.
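The core radiometric step, dark current correction followed by gain-based conversion of DN to at-aperture radiance plus replacement of bad detector elements, reduces to per-element array arithmetic. The sketch below illustrates it on a synthetic frame; the gain value, dark frame, and bad-element map are assumptions, and this is not the GDPS code.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed frame geometry: 320 cross-track samples x 210 spectral bands.
dn = rng.integers(900, 3000, size=(320, 210)).astype(float)  # scene counts
dark = 850.0 + rng.standard_normal((320, 210))               # dark current frame
gain = 0.012 * np.ones((320, 210))                           # radiance per count

# Dark correction and conversion to at-aperture radiance.
radiance = gain * (dn - dark)

# Bad detector elements are replaced by the mean of spectral neighbours.
bad = np.zeros_like(dn, dtype=bool)
bad[100, 50] = True  # assumed bad-element map entry
i, j = np.nonzero(bad)
radiance[i, j] = 0.5 * (radiance[i, j - 1] + radiance[i, j + 1])
print("radiance range:", radiance.min().round(2), "-", radiance.max().round(2))
```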
Calibration of neutron detectors on the Joint European Torus.
Batistoni, Paola; Popovichev, S; Conroy, S; Lengar, I; Čufar, A; Abhangi, M; Snoj, L; Horton, L
2017-10-01
The present paper describes the findings of the calibration of the neutron yield monitors on the Joint European Torus (JET) performed in 2013 using a 252Cf source deployed inside the torus by the remote handling system, with particular regard to the calibration of the fission chambers which provide the time-resolved neutron yield from JET plasmas. The experimental data obtained in toroidal, radial, and vertical scans are presented. These data are first analysed following an analytical approach adopted in the previous neutron calibrations at JET. In this way, a calibration function for the volumetric plasma source is derived which allows us to understand the importance of the different plasma regions and of different spatial profiles of neutron emissivity on the fission chamber response. Neutronics analyses have also been performed to calculate the correction factors needed to derive the plasma calibration factors, taking into account the different energy spectrum and angular emission distribution of the calibrating (point) 252Cf source, its discrete positions compared to the plasma volumetric source, and the calibration circumstances. All correction factors are presented and discussed. We also discuss the lessons learnt, which are the basis for the on-going 14 MeV neutron calibration at JET and for ITER.
Calibration of BAS-TR image plate response to high energy (3-300 MeV) carbon ions
NASA Astrophysics Data System (ADS)
Doria, D.; Kar, S.; Ahmed, H.; Alejo, A.; Fernandez, J.; Cerchez, M.; Gray, R. J.; Hanton, F.; MacLellan, D. A.; McKenna, P.; Najmudin, Z.; Neely, D.; Romagnani, L.; Ruiz, J. A.; Sarri, G.; Scullion, C.; Streeter, M.; Swantusch, M.; Willi, O.; Zepf, M.; Borghesi, M.
2015-12-01
The paper presents the calibration of Fuji BAS-TR image plate (IP) response to high energy carbon ions of different charge states by employing an intense laser-driven ion source, which allowed access to carbon energies up to 270 MeV. The calibration method consists of employing a Thomson parabola spectrometer to separate and spectrally resolve different ion species, and a slotted CR-39 solid state detector overlayed onto an image plate for an absolute calibration of the IP signal. An empirical response function was obtained which can be reasonably extrapolated to higher ion energies. The experimental data also show that the IP response is independent of ion charge states.
Energy calibration of organic scintillation detectors for γ rays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gu Jiahui; Xiao Genlai; Liu Jingyi
1988-10-01
An experimental method for calibrating organic scintillation detectors is described. A NaI(Tl) detector has the advantages of high detection efficiency, good energy resolution, and a definite position of the back-scattering peak. The precise position of the Compton edge can be determined by coincidence measurement between the pulse of an organic scintillation detector and the pulse of the back-scattering peak from the NaI(Tl) detector. The method can be used to calibrate organic scintillation detectors of various sizes and shapes simply and reliably. Home-made plastic and organic liquid scintillation detectors are calibrated, and positions of the Compton edge as a function of γ-ray energies are obtained.
A variable acceleration calibration system
NASA Astrophysics Data System (ADS)
Johnson, Thomas H.
2011-12-01
A variable acceleration calibration system that applies loads using gravitational and centripetal acceleration serves as an alternative, efficient and cost-effective method for calibrating internal wind tunnel force balances. Two proof-of-concept variable acceleration calibration systems are designed, fabricated and tested. The NASA UT-36 force balance served as the test balance for the calibration experiments. The variable acceleration calibration systems are shown to be capable of performing three-component calibration experiments with an approximate applied load error on the order of 1% of the full-scale calibration loads. Sources of error are identified using experimental design methods and a propagation of uncertainty analysis. Three types of uncertainty are identified for the systems and are attributed to prediction error, calibration error and pure error. Angular velocity uncertainty is shown to be the largest identified source of prediction error. The calibration uncertainties using a production variable-acceleration-based system are shown to be potentially equivalent to current methods. The production-quality system can be realized using lighter materials and more precise instrumentation. Further research is needed to account for balance deflection, forcing effects due to vibration, and large tare loads. A gyroscope measurement technique is shown to be capable of resolving the balance deflection angle calculation. Long-term research objectives include a demonstration of a six-degree-of-freedom calibration and a large-capacity balance calibration.
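The principle of the system is that a calibration mass on a rotating arm applies a known centripetal load F = m·ω²·r, so first-order uncertainty propagation immediately shows why angular velocity dominates the prediction error: its relative uncertainty enters twice. A small sketch with assumed values follows.

```python
import math

# Assumed calibration configuration.
m = 2.0        # calibration mass, kg
r = 0.75       # radius to mass centre, m
omega = 12.0   # angular velocity, rad/s

# Centripetal load applied to the balance.
F = m * omega**2 * r

# First-order uncertainty propagation: F depends on omega squared, so the
# relative contribution of the omega uncertainty is doubled.
u_m, u_r, u_omega = 0.001, 0.0005, 0.05  # assumed standard uncertainties
rel_u = math.sqrt((u_m/m)**2 + (u_r/r)**2 + (2*u_omega/omega)**2)
print(f"F = {F:.2f} N, relative uncertainty = {100*rel_u:.2f} %")
```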
Statistical behavior of ten million experimental detection limits
NASA Astrophysics Data System (ADS)
Voigtman, Edward; Abraham, Kevin T.
2011-02-01
Using a lab-constructed laser-excited fluorimeter, together with bootstrapping methodology, the authors have generated many millions of experimental linear calibration curves for the detection of rhodamine 6G tetrafluoroborate in ethanol solutions. The detection limits computed from them are in excellent agreement with both previously published theory and with comprehensive Monte Carlo computer simulations. Currie decision levels and Currie detection limits, each in the theoretical, chemical content domain, were found to be simply scaled reciprocals of the non-centrality parameter of the non-central t distribution that characterizes univariate linear calibration curves that have homoscedastic, additive Gaussian white noise. Accurate and precise estimates of the theoretical, content domain Currie detection limit for the experimental system, with 5% (each) probabilities of false positives and false negatives, are presented.
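For a univariate linear calibration with homoscedastic additive Gaussian white noise, Currie decision levels and detection limits in the content domain can be estimated from the calibration fit itself. The sketch below implements a common closed-form variant with 5% false-positive and false-negative probabilities on synthetic data; the exact expressions in the paper, which work through the non-centrality parameter of the non-central t distribution, differ in detail.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Synthetic calibration: n standards, homoscedastic Gaussian white noise.
x = np.linspace(0.0, 10.0, 11)
y = 0.8 * x + 0.1 + 0.05 * rng.standard_normal(x.size)

n = x.size
b, a = np.polyfit(x, y, 1)                 # slope, intercept
s = np.sqrt(np.sum((y - (b*x + a))**2) / (n - 2))
sxx = np.sum((x - x.mean())**2)

# Net-signal standard deviation of a future single measurement at x = 0.
s0 = s * np.sqrt(1.0 + 1.0/n + x.mean()**2 / sxx)

alpha = 0.05
t = stats.t.ppf(1 - alpha, df=n - 2)
Lc = t * s0 / b            # Currie decision level (content domain)
Ld = 2 * t * s0 / b        # approximate detection limit for alpha = beta
print(f"decision level ~ {Lc:.3f}, detection limit ~ {Ld:.3f}")
```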
Calibration Of Partial-Pressure-Of-Oxygen Sensors
NASA Technical Reports Server (NTRS)
Yount, David W.; Heronimus, Kevin
1995-01-01
Report released on analysis of, and improvements in, procedure for calibrating partial-pressure-of-oxygen sensors to satisfy Spacelab calibration requirements. Sensors exhibit fast drift, which results in short calibration period not suitable for Spacelab. By assessing complete process of determining total available drift range, calibration procedure modified to eliminate errors and still satisfy requirements without compromising integrity of system.
NASA Astrophysics Data System (ADS)
Bastola, S.; Dialynas, Y. G.; Bras, R. L.; Noto, L. V.; Istanbulluoglu, E.
2018-05-01
Gully erosion left widespread evidence of land degradation in the southern Piedmont, site of the Calhoun Critical Zone Observatory (CCZO), during the cotton farming era. An understanding of the underlying gully erosion processes is essential to develop gully erosion models that could be useful in assessing the effectiveness of remedial and soil erosion control measures such as gully backfilling, revegetation, and terracing. Development and validation of process-based gully erosion models is difficult because observations of the formation and progression of gullies are limited. In this study, analytic formulations of the two dominant gullying processes, namely plunge pool erosion and slab failure, are utilized to simulate the gullying processes in the 4-km² Holcombe's Branch watershed. In order to calibrate the parameters of the gully erosion model, gully features (e.g., depth and area) extracted from a high-resolution LiDAR map are used. After the calibration, the gully model is able to delineate the spatial extent of gullies whose statistics are in close agreement with the gullies extracted from the LiDAR DEM. Several simulations with the calibrated model are explored to evaluate the effectiveness of various gully remedial measures, such as backfilling and revegetation. The results show that in the short term, reshaping the topographical surface by backfilling and compacting gullies is effective in slowing down the growth of gullies (e.g., backfilling decreased the spatial extent of gullies by 21-46% and decreased the average depth of gullies by up to 9%). Revegetation, however, is a more effective approach to stabilizing gullies that would otherwise expand if no remedial measures were implemented. Analyses of our simulations show that the gully stabilization effect of revegetation varies over a wide range, leading to a 23-69% reduction of the spatial extent of gullies and up to a 45% reduction in the depth of gullies, depending on the selection of plant species and management practices.
Jiménez, Roberto; Torralba, Marta; Yagüe-Fabra, José A.; Ontiveros, Sinué; Tosello, Guido
2017-01-01
The dimensional verification of miniaturized components with 3D complex geometries is particularly challenging. Computed Tomography (CT) can represent a suitable alternative solution to micro metrology tools based on optical and tactile techniques. However, the establishment of CT systems’ traceability when measuring 3D complex geometries is still an open issue. In this work, an alternative method for the measurement uncertainty assessment of 3D complex geometries by using CT is presented. The method is based on the micro-CT system Maximum Permissible Error (MPE) estimation, determined experimentally by using several calibrated reference artefacts. The main advantage of the presented method is that a previous calibration of the component by a more accurate Coordinate Measuring System (CMS) is not needed. In fact, such CMS would still hold all the typical limitations of optical and tactile techniques, particularly when measuring miniaturized components with complex 3D geometries and their inability to measure inner parts. To validate the presented method, the most accepted standard currently available for CT sensors, the Verein Deutscher Ingenieure/Verband Deutscher Elektrotechniker (VDI/VDE) guideline 2630-2.1 is applied. Considering the high number of influence factors in CT and their impact on the measuring result, two different techniques for surface extraction are also considered to obtain a realistic determination of the influence of data processing on uncertainty. The uncertainty assessment of a workpiece used for micro mechanical material testing is firstly used to confirm the method, due to its feasible calibration by an optical CMS. Secondly, the measurement of a miniaturized dental file with 3D complex geometry is carried out. The estimated uncertainties are eventually compared with the component’s calibration and the micro manufacturing tolerances to demonstrate the suitability of the presented CT calibration procedure. The 2U/T ratios resulting from the validation workpiece are, respectively, 0.27 (VDI) and 0.35 (MPE), by assuring tolerances in the range of ± 20–30 µm. For the dental file, the EN < 1 value analysis is favorable in the majority of the cases (70.4%) and 2U/T is equal to 0.31 for sub-mm measurands (L < 1 mm and tolerance intervals of ± 40–80 µm). PMID:28509869
Ehrhardt, Fiona; Soussana, Jean-François; Bellocchi, Gianni; Grace, Peter; McAuliffe, Russel; Recous, Sylvie; Sándor, Renáta; Smith, Pete; Snow, Val; de Antoni Migliorati, Massimiliano; Basso, Bruno; Bhatia, Arti; Brilli, Lorenzo; Doltra, Jordi; Dorich, Christopher D; Doro, Luca; Fitton, Nuala; Giacomini, Sandro J; Grant, Brian; Harrison, Matthew T; Jones, Stephanie K; Kirschbaum, Miko U F; Klumpp, Katja; Laville, Patricia; Léonard, Joël; Liebig, Mark; Lieffering, Mark; Martin, Raphaël; Massad, Raia S; Meier, Elizabeth; Merbold, Lutz; Moore, Andrew D; Myrgiotis, Vasileios; Newton, Paul; Pattey, Elizabeth; Rolinski, Susanne; Sharp, Joanna; Smith, Ward N; Wu, Lianhai; Zhang, Qing
2018-02-01
Simulation models are extensively used to predict agricultural productivity and greenhouse gas emissions. However, the uncertainties of (reduced) model ensemble simulations have not been assessed systematically for variables affecting food security and climate change mitigation, within multi-species agricultural contexts. We report an international model comparison and benchmarking exercise, showing the potential of multi-model ensembles to predict productivity and nitrous oxide (N2O) emissions for wheat, maize, rice and temperate grasslands. Using a multi-stage modelling protocol, from blind simulations (stage 1) to partial (stages 2-4) and full calibration (stage 5), 24 process-based biogeochemical models were assessed individually or as an ensemble against long-term experimental data from four temperate grassland and five arable crop rotation sites spanning four continents. Comparisons were performed by reference to the experimental uncertainties of observed yields and N2O emissions. Results showed that across sites and crop/grassland types, 23%-40% of the uncalibrated individual models were within two standard deviations (SD) of observed yields, while 42% (rice) to 96% (grasslands) of the models were within 1 SD of observed N2O emissions. At stage 1, ensembles formed by the three lowest prediction model errors predicted both yields and N2O emissions within experimental uncertainties for 44% and 33% of the crop and grassland growth cycles, respectively. Partial model calibration (stages 2-4) markedly reduced prediction errors of the full model ensemble E-median for crop grain yields (from 36% at stage 1 down to 4% on average) and grassland productivity (from 44% to 27%) and to a lesser and more variable extent for N2O emissions. Yield-scaled N2O emissions (N2O emissions divided by crop yields) were ranked accurately by three-model ensembles across crop species and field sites. The potential of using process-based model ensembles to predict jointly productivity and N2O emissions at field scale is discussed. © 2017 John Wiley & Sons Ltd.
Gradient-based model calibration with proxy-model assistance
NASA Astrophysics Data System (ADS)
Burrows, Wesley; Doherty, John
2016-02-01
Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with large run times and problematic numerical behaviour is described. The methodology is general and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purposes of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on the calculation of local gradients is mitigated, thus allowing access to the benefits of gradient-based analysis where a lack of integrity in finite-difference derivative calculation would otherwise have impeded such access. Construction of a proxy model, and its subsequent use in calibrating a complex model and in analysing the uncertainties of predictions made by that model, is implemented in the PEST suite.
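The division of labour described, in which a cheap analytic proxy populates the Jacobian while the expensive model only evaluates candidate parameter upgrades, can be sketched as a damped Gauss-Newton loop. Below, closed-form functions stand in for both the proxy and the (in reality expensive) model; this is an illustration of the idea, not the PEST implementation.

```python
import numpy as np

# "Complex" model (stand-in): in practice this is the expensive simulator.
def full_model(p):
    return np.array([np.exp(0.3*p[0]) + p[1],
                     p[0]*p[1],
                     p[0] + np.sin(p[1])])

# Analytic proxy: used only to populate the Jacobian cheaply.
def proxy_jacobian(p):
    return np.array([[0.3*np.exp(0.3*p[0]), 1.0],
                     [p[1],                 p[0]],
                     [1.0,                  np.cos(p[1])]])

obs = full_model(np.array([1.2, 0.7]))  # synthetic observations
p = np.array([0.5, 0.2])                # initial parameters

for it in range(30):
    r = obs - full_model(p)             # full model run on current parameters
    J = proxy_jacobian(p)               # Jacobian comes from the proxy
    step, *_ = np.linalg.lstsq(J, r, rcond=None)
    # Test parameter upgrades with the full model, halving on failure.
    lam = 1.0
    for _ in range(8):
        r_new = obs - full_model(p + lam * step)
        if r_new @ r_new < r @ r:
            p = p + lam * step
            break
        lam *= 0.5
print("calibrated parameters:", np.round(p, 3))
```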
Dimensional accuracy of aluminium extrusions in mechanical calibration
NASA Astrophysics Data System (ADS)
Raknes, Christian Arne; Welo, Torgeir; Paulsen, Frode
2018-05-01
Reducing dimensional variations in the extrusion process without increasing cost is challenging due to the nature of the process itself. An alternative approach—also from a cost perspective—is to use extruded profiles with standard tolerances and utilize downstream processes to calibrate the part within tolerance limits that are not achievable directly from the extrusion process. In this paper, two mechanical calibration strategies for the extruded product are investigated, utilizing the forming lines of the manufacturer. The first calibration strategy is based on global, longitudinal stretching in combination with local bending, while the second strategy utilizes the principle of transversal stretching and local bending of the cross-section. An extruded U-profile is used to compare the two methods using numerical analyses. To provide the response surfaces, the FEA program ABAQUS is used in combination with Design of Experiments (DOE). DOE is conducted with a two-level fractional factorial design to collect the appropriate data. The aim is to find the main factors affecting the dimensional accuracy of the final part obtained by the two calibration methods. The results show that both calibration strategies effectively reduce cross-sectional variations from standard extrusion tolerances. It is concluded that mechanical calibration is a viable, low-cost alternative for aluminium parts that demand high dimensional accuracy, e.g. due to fit-up or welding requirements.
Continuous Odour Measurement with Chemosensor Systems
NASA Astrophysics Data System (ADS)
Boeker, Peter; Haas, T.; Diekmann, B.; Lammer, P. Schulze
2009-05-01
Continuous odour measurement is a challenging task for chemosensor systems. Firstly, a long-term, stable measurement mode must be guaranteed in order to preserve the validity of the time-consuming and expensive olfactometric calibration data. Secondly, a method is needed to deal with the incoming sensor data: the continuous online detection of signal patterns, the correlated gas emission and the assigned odour data is essential for continuous odour measurement. Thirdly, there is a severe danger of over-fitting in the odour calibration process because of the high measurement uncertainty of olfactometry. In this contribution we present a technical solution for continuous measurements comprising a hybrid QMB sensor array and electrochemical cells. A set of software tools enables efficient data processing and calibration and computes the calibration parameters. The internal software of the measurement system's microcontroller processes the calibration parameters online for the output of the desired odour information.
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Volden, T.
2018-01-01
Analysis and use of temperature-dependent wind tunnel strain-gage balance calibration data are discussed in the paper. First, three different methods are presented and compared that may be used to process temperature-dependent strain-gage balance data. The first method uses an extended set of independent variables in order to process the data and predict balance loads. The second method applies an extended load iteration equation during the analysis of balance calibration data. The third method uses temperature-dependent sensitivities for the data analysis. Physical interpretations of the most important temperature-dependent regression model terms are provided that relate temperature compensation imperfections and the temperature-dependent nature of the gage factor to sets of regression model terms. Finally, balance calibration recommendations are listed so that temperature-dependent calibration data can be obtained and successfully processed using the reviewed analysis methods.
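The first analysis method mentioned, processing the calibration data with an extended set of independent variables, amounts to augmenting the usual load regressors with temperature and temperature-load cross terms, which is where temperature-compensation imperfections and the temperature dependence of the gage factor show up. A minimal sketch with one gage output, two load components, and synthetic data follows; all names and coefficients are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic calibration points: two applied loads and a temperature.
N = rng.uniform(-100, 100, 200)   # normal force
A = rng.uniform(-50, 50, 200)     # axial force
T = rng.uniform(10, 40, 200)      # temperature, deg C

# Synthetic gage output with a temperature-dependent sensitivity (gage
# factor drift, the T*N term) and a temperature-dependent zero shift.
rG = 1.0*N + 0.4*A + 0.02*T + 0.001*T*N + 0.05*rng.standard_normal(200)

# Extended regressor set: loads, temperature, and temperature cross terms.
X = np.column_stack([np.ones_like(N), N, A, T, T*N, T*A])
c, *_ = np.linalg.lstsq(X, rG, rcond=None)
for name, ci in zip(["1", "N", "A", "T", "T*N", "T*A"], c):
    print(f"{name:>4}: {ci:+.4f}")
```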
Reactive flow calibration for diaminoazoxyfurazan (DAAF) and comparison with experiment
NASA Astrophysics Data System (ADS)
Johnson, Carl; Francois, Elizabeth Green; Morris, John
2012-03-01
Diaminoazoxyfurazan (DAAF) has a number of desirable properties; it is sensitive to shock while being insensitive to initiation by low level impact or friction, it has a small failure diameter, and its manufacturing process is inexpensive with minimal environmental impact. In light of its unique properties, DAAF based materials have gained interest for possible applications in insensitive munitions. In order to facilitate hydrocode modeling of DAAF and DAAF based formulations, we have developed a set of reactive flow parameters which were calibrated using published experimental data as well as recent experiments at LANL. Hydrocode calculations using the DAAF reactive flow parameters developed in the course of this work were compared to rate stick experiments, small scale gap tests, as well as the Onionskin experiment. Hydrocode calculations were compared directly to streak image results using numerous tracer points in conjunction with an external algorithm to match the data sets. The calculations display a reasonable agreement with experiment with the exception of effects related to shock desensitization of explosive.
NASA Astrophysics Data System (ADS)
Liu, W.; Wang, H.; Liu, D.; Miu, Y.
2018-05-01
Precise geometric parameters are essential to ensure the positioning accuracy of space optical cameras. However, state-of-the-art on-orbit calibration methods inevitably suffer from long update cycles and poor timeliness. In this paper, we exploit the optical auto-collimation principle and propose a real-time onboard calibration scheme for monitoring key geometric parameters. Specifically, in the proposed scheme, auto-collimation devices are first designed by installing collimated light sources, area-array CCDs, and prisms inside the satellite payload system. Through those devices, changes in the geometric parameters are elegantly converted into changes in the spot image positions; the variation of the geometric parameters can be derived by extracting and processing the spot images. An experimental platform is then set up to verify the feasibility and analyze the precision of the proposed scheme. The experimental results demonstrate that it is feasible to apply the optical auto-collimation principle to real-time onboard monitoring.
Method for traceable measurement of LTE signals
NASA Astrophysics Data System (ADS)
Sunder Dash, Soumya; Pythoud, Frederic; Leuchtmann, Pascal; Leuthold, Juerg
2018-04-01
This contribution presents a reference setup to measure the power of the cell-specific resource elements present in downlink long term evolution (LTE) signals in a way that the measurements are traceable to the international system of units. This setup can be used to calibrate the LTE code-selective field probes that are used to measure the radiation of base stations for mobile telephony. It can also be used to calibrate LTE signal generators and receivers. The method is based on traceable scope measurements performed directly at the output of a measuring antenna. It implements offline digital signal processing demodulation algorithms that consider the digital down-conversion, timing synchronization, frequency synchronization, phase synchronization and robust LTE cell identification to produce the downlink time-frequency LTE grid. Experimental results on conducted test scenarios, both single-input-single-output and multiple-input-multiple-output antenna configuration, show promising results confirming measurement uncertainties of the order of 0.05 dB with a coverage factor of 2.
ACCELERATORS: Beam based alignment of the SSRF storage ring
NASA Astrophysics Data System (ADS)
Zhang, Man-Zhou; Li, Hao-Hu; Jiang, Bo-Cheng; Liu, Gui-Min; Li, De-Ming
2009-04-01
There are 140 beam position monitors (BPMs) in the Shanghai Synchrotron Radiation Facility (SSRF) storage ring used for measuring the closed orbit. As the BPM pickup electrodes are assembled directly on the vacuum chamber, it is important to calibrate the electrical center offset of each BPM with respect to the magnetic center of an adjacent quadrupole. A beam based alignment (BBA) method, which varies the strength of an individual quadrupole magnet and observes the effect on the orbit, is used to measure the BPM offsets in both the horizontal and vertical planes. It is a completely automated technique with various data processing methods. Several parameters, such as the strength changes of the correctors and the quadrupoles, should be chosen carefully in a real measurement. After several rounds of BBA measurement and closed orbit correction, these offsets were determined to an accuracy better than 10 μm. In this paper we present the method of beam based calibration of the BPMs, the experimental results for the SSRF storage ring, and the error analysis.
Kinetic modelling of anaerobic hydrolysis of solid wastes, including disintegration processes.
García-Gen, Santiago; Sousbie, Philippe; Rangaraj, Ganesh; Lema, Juan M; Rodríguez, Jorge; Steyer, Jean-Philippe; Torrijos, Michel
2015-01-01
A methodology to estimate disintegration and hydrolysis kinetic parameters of solid wastes and validate an ADM1-based anaerobic co-digestion model is presented. Kinetic parameters of the model were calibrated from batch reactor experiments treating individually fruit and vegetable wastes (among other residues) following a new protocol for batch tests. In addition, decoupled disintegration kinetics for readily and slowly biodegradable fractions of solid wastes was considered. Calibrated parameters from batch assays of individual substrates were used to validate the model for a semi-continuous co-digestion operation treating simultaneously 5 fruit and vegetable wastes. The semi-continuous experiment was carried out in a lab-scale CSTR reactor for 15 weeks at organic loading rate ranging between 2.0 and 4.7 gVS/Ld. The model (built in Matlab/Simulink) fit to a large extent the experimental results in both batch and semi-continuous mode and served as a powerful tool to simulate the digestion or co-digestion of solid wastes. Copyright © 2014 Elsevier Ltd. All rights reserved.
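Decoupled disintegration kinetics for readily and slowly biodegradable fractions can be written as two parallel first-order rate equations whose hydrolysed COD then feeds the downstream ADM1 reactions. A minimal sketch of that two-fraction kinetics is given below; the rate constants, fraction split, and load are illustrative assumptions, not the calibrated values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed two-fraction disintegration kinetics for a fruit/vegetable waste:
# a readily degradable fraction (fast k) and a slowly degradable one.
k_readily, k_slowly = 1.2, 0.15          # 1/day, illustrative
f_readily = 0.6                          # assumed readily degradable fraction
total_cod = 20.0                         # gCOD/L fed to the batch

def rhs(t, x):
    xr, xs = x
    return [-k_readily * xr, -k_slowly * xs]

x0 = [f_readily * total_cod, (1 - f_readily) * total_cod]
sol = solve_ivp(rhs, (0.0, 30.0), x0, t_eval=np.linspace(0, 30, 7))

# COD hydrolysed so far = initial substrate minus what remains.
hydrolysed = total_cod - sol.y.sum(axis=0)
for t, h in zip(sol.t, hydrolysed):
    print(f"day {t:4.1f}: hydrolysed {h:5.2f} gCOD/L")
```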
NASA Technical Reports Server (NTRS)
1993-01-01
Under a NASA Small Business Innovation Research (SBIR) contract, Axiomatics Corporation developed a shunting Dielectric Sensor to determine the nutrient level and analyze plant nutrient solutions in the CELSS, NASA's space life support program. (CELSS is an experimental facility investigating closed-cycle plant growth and food processing for long duration manned missions.) The DiComp system incorporates a shunt electrode and is especially sensitive to changes in dielectric property changes in materials at measurements much lower than conventional sensors. The analyzer has exceptional capabilities for predicting composition of liquid streams or reactions. It measures concentrations and solids content up to 100 percent in applications like agricultural products, petrochemicals, food and beverages. The sensor is easily installed; maintenance is low, and it can be calibrated on line. The software automates data collection and analysis.
Quantitative Thermochemical Measurements in High-Pressure Gaseous Combustion
NASA Technical Reports Server (NTRS)
Kojima, Jun J.; Fischer, David G.
2012-01-01
We present our strategic experiment and thermochemical analyses of combustion flow using subframe burst gating (SBG) Raman spectroscopy. This unconventional laser diagnostic technique has promising ability to enhance the accuracy of quantitative scalar measurements in a point-wise, single-shot fashion. In the presentation, we briefly describe an experimental methodology that generates a transferable calibration standard for the routine implementation of the diagnostics in hydrocarbon flames. The diagnostic technology was applied to simultaneous measurements of temperature and chemical species in a swirl-stabilized turbulent flame with gaseous methane fuel at elevated pressure (17 atm). Statistical analyses of the space-/time-resolved thermochemical data provide insights into the nature of the mixing process and its impact on the subsequent combustion process in the model combustor.
Extrinsic Calibration of Camera and 2D Laser Sensors without Overlap
Extrinsic Calibration of Camera and 2D Laser Sensors without Overlap.
Ahmad Yousef, Khalil M; Mohd, Bassam J; Al-Widyan, Khalid; Hayajneh, Thaier
2017-10-14
Extrinsic calibration of a camera and a 2D laser range finder (lidar) is crucial in sensor data fusion applications, for example in the SLAM algorithms used on mobile robot platforms. The fundamental challenge of extrinsic calibration arises when the camera and lidar sensors do not overlap or share the same field of view. In this paper we propose a novel and flexible approach for the extrinsic calibration of a camera-lidar system without overlap, which can be used for robotic platform self-calibration. The approach is based on the robot-world hand-eye calibration (RWHE) problem, which is proven to have efficient and accurate solutions. First, the system was mapped to the RWHE calibration problem modeled as the linear relationship AX = ZB, where X and Z are unknown calibration matrices. Then, we computed the transformation matrix B, which was the main challenge in the above mapping. The computation is based on reasonable assumptions about the geometric structure of the calibration environment. The reliability and accuracy of the proposed approach is compared to a state-of-the-art method in extrinsic 2D lidar-to-camera calibration. Experimental results from real datasets indicate that the proposed approach provides better results, with L2-norm translational and rotational deviations of 314 mm and 0.12°, respectively.
Wu, Defeng; Chen, Tianfei; Li, Aiguo
2016-08-30
A robot-based three-dimensional (3D) measurement system is presented. In the presented system, a structured light vision sensor is mounted on the arm of an industrial robot. Measurement accuracy is one of the most important aspects of any 3D measurement system. To improve the measuring accuracy of the structured light vision sensor, a novel sensor calibration approach with improved calibration accuracy is proposed. The approach is based on a number of fixed concentric circles manufactured into a calibration target; the concentric circles are employed to determine the real projected centres of the circles. Then, a calibration point generation procedure is used with the help of the calibrated robot. When enough calibration points are ready, the radial alignment constraint (RAC) method is adopted to calibrate the camera model. A multilayer perceptron neural network (MLPNN) is then employed to identify the calibration residuals left after the application of the RAC method; the hybrid of the pinhole model and the MLPNN therefore represents the real camera model. Using a standard ball to validate the effectiveness of the presented technique, the experimental results demonstrate that the proposed calibration approach can achieve a highly accurate model of the structured light vision sensor.
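The hybrid camera model described here, a pinhole/RAC calibration whose leftover residuals are identified by an MLPNN, can be sketched in two stages: fit the parametric model first, then train a small network on what it fails to explain. The sketch below uses synthetic data and scikit-learn's MLPRegressor; the network size and data are arbitrary assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)

# Synthetic image-plane points and "true" targets from a distorted camera.
uv = rng.uniform(-1.0, 1.0, size=(500, 2))
target = 1.05 * uv + 0.08 * uv**3 + 0.005 * rng.standard_normal((500, 2))

# Stage 1: linear (pinhole-like) fit, standing in for the RAC calibration.
A = np.column_stack([uv, np.ones(len(uv))])
lin, *_ = np.linalg.lstsq(A, target, rcond=None)
pred_lin = A @ lin

# Stage 2: the MLP learns the residuals the parametric model cannot explain.
residual = target - pred_lin
mlp = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
mlp.fit(uv, residual)

# Hybrid prediction = parametric model + learned residual correction.
pred = pred_lin + mlp.predict(uv)
rms_before = np.sqrt(np.mean(np.sum((target - pred_lin)**2, axis=1)))
rms_after = np.sqrt(np.mean(np.sum((target - pred)**2, axis=1)))
print(f"RMS error: {rms_before:.4f} -> {rms_after:.4f}")
```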
Rivera, José; Carrillo, Mariano; Chacón, Mario; Herrera, Gilberto; Bojorquez, Gilberto
2007-01-01
The development of smart sensors involves the design of reconfigurable systems capable of working with different input sensors. Reconfigurable systems ideally should spend the least possible amount of time on their calibration. An autocalibration algorithm for intelligent sensors should be able to fix major problems such as offset, gain variation, and lack of linearity as accurately as possible. This paper describes a new autocalibration methodology for nonlinear intelligent sensors based on artificial neural networks (ANN). The methodology involves analysis of several network topologies and training algorithms. The proposed method was compared against the piecewise and polynomial linearization methods. Method comparison was carried out using different numbers of calibration points and several nonlinearity levels of the input signal. This paper also shows that the proposed method turned out to have better overall accuracy than the other two methods. Besides the experimental results and analysis of the complete study, the paper describes the implementation of the ANN in a microcontroller unit (MCU). In order to illustrate the method's capability to build autocalibrated and reconfigurable systems, a temperature measurement system was designed and tested. The proposed method is an improvement over classic autocalibration methodologies because it impacts the design process of intelligent sensors, autocalibration methodologies, and their associated factors, such as time and cost.
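For contrast with the ANN method, the polynomial linearization baseline mentioned above can be sketched in a few lines: fit the inverse sensor characteristic through the calibration points and evaluate it on new raw readings. The sensor response below is invented for illustration.

```python
import numpy as np

# Invented nonlinear sensor: voltage as a function of temperature, with
# the offset, gain, and linearity errors autocalibration must correct.
t_cal = np.linspace(0.0, 100.0, 9)                    # calibration points
v_cal = 0.10 + 0.045 * t_cal + 2e-4 * t_cal**2        # measured output (V)

# Polynomial linearization: fit the inverse characteristic t = f(v), so
# raw readings can be mapped back to the measurand.
inv_coeff = np.polyfit(v_cal, t_cal, deg=3)

v_raw = 0.10 + 0.045 * 37.0 + 2e-4 * 37.0**2          # a new raw reading
print("estimated temperature:", np.polyval(inv_coeff, v_raw))
```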
Direct Sensor Orientation of a Land-Based Mobile Mapping System
Rau, Jiann-Yeou; Habib, Ayman F.; Kersting, Ana P.; Chiang, Kai-Wei; Bang, Ki-In; Tseng, Yi-Hsing; Li, Yu-Hua
2011-01-01
A land-based mobile mapping system (MMS) is flexible and useful for the acquisition of road environment geospatial information. It integrates a set of imaging sensors and a position and orientation system (POS). The positioning quality of such systems is highly dependent on the accuracy of the utilized POS. This limitation is the major drawback due to the elevated cost associated with high-end GPS/INS units, particularly the inertial system. The potential accuracy of the direct sensor orientation depends on the architecture and quality of the GPS/INS integration process as well as the validity of the system calibration (i.e., calibration of the individual sensors as well as the system mounting parameters). In this paper, a novel single-step procedure using integrated sensor orientation with relative orientation constraint for the estimation of the mounting parameters is introduced. A comparative analysis between the proposed single-step and the traditional two-step procedure is carried out. Moreover, the estimated mounting parameters using the different methods are used in a direct geo-referencing procedure to evaluate their performance and the feasibility of the implemented system. Experimental results show that the proposed system using single-step system calibration method can achieve high 3D positioning accuracy. PMID:22164015
Li, Jing; Mahmoodi, Alireza; Joseph, Dileepan
2015-01-01
An important class of complementary metal-oxide-semiconductor (CMOS) image sensors are those where pixel responses are monotonic nonlinear functions of light stimuli. This class includes various logarithmic architectures, which are easily capable of wide dynamic range imaging, at video rates, but which are vulnerable to image quality issues. To minimize fixed pattern noise (FPN) and maximize photometric accuracy, pixel responses must be calibrated and corrected due to mismatch and process variation during fabrication. Unlike literature approaches, which employ circuit-based models of varying complexity, this paper introduces a novel approach based on low-degree polynomials. Although each pixel may have a highly nonlinear response, an approximately-linear FPN calibration is possible by exploiting the monotonic nature of imaging. Moreover, FPN correction requires only arithmetic, and an optimal fixed-point implementation is readily derived, subject to a user-specified number of bits per pixel. Using a monotonic spline, involving cubic polynomials, photometric calibration is also possible without a circuit-based model, and fixed-point photometric correction requires only a look-up table. The approach is experimentally validated with a logarithmic CMOS image sensor and is compared to a leading approach from the literature. The novel approach proves effective and efficient. PMID:26501287
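The approximately-linear FPN calibration described above can be illustrated with a toy model: each pixel's monotonic response is regressed against a common reference response, giving per-pixel gain and offset, and correction is plain arithmetic. The logarithmic pixel model and all numbers below are invented, and the fixed-point aspect of the paper is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
H, W, K = 4, 4, 12                      # tiny sensor, K uniform stimuli

# Invented logarithmic pixel responses y = a + b*log(x), with per-pixel
# mismatch in offset a and gain b (the source of fixed pattern noise).
x = np.logspace(0, 3, K)
a = rng.normal(1.0, 0.05, (H, W, 1))
b = rng.normal(0.5, 0.02, (H, W, 1))
y = a + b * np.log(x)                   # responses, shape (H, W, K)

# Approximately-linear FPN calibration: regress each pixel's response on
# the spatial-mean response, which serves as the common monotonic signal.
ref = y.mean(axis=(0, 1))               # reference response, shape (K,)
A = np.vstack([ref, np.ones(K)]).T
coef, *_ = np.linalg.lstsq(A, y.reshape(-1, K).T, rcond=None)
gain, offset = coef[0].reshape(H, W), coef[1].reshape(H, W)

# FPN correction maps each pixel back onto the reference response.
y_corr = (y - offset[..., None]) / gain[..., None]
print("residual FPN (max std across pixels):", y_corr.std(axis=(0, 1)).max())
```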
NASA Astrophysics Data System (ADS)
Pang, Hongfeng; Zhu, XueJun; Pan, Mengchun; Zhang, Qi; Wan, Chengbiao; Luo, Shitu; Chen, Dixiang; Chen, Jinfei; Li, Ji; Lv, Yunxiao
2016-12-01
Misalignment error is a key factor influencing the measurement accuracy of a geomagnetic vector measurement system. It is difficult to calibrate because the sensors measure different physical quantities and their coordinate frames are not directly observable. A new misalignment calibration method based on rotating a parallelepiped frame is proposed. Simulation and experimental results show the effectiveness of the calibration method. The experimental system mainly comprises a DM-050 three-axis fluxgate magnetometer, an INS (inertial navigation system), an aluminium parallelepiped frame, and an aluminium plane base. Misalignment angles are calculated from data measured by the magnetometer and INS after rotating the aluminium parallelepiped frame on the aluminium plane base. After calibration, the RMS errors of the geomagnetic north, vertical, and east components are reduced from 349.441 nT, 392.530 nT, and 562.316 nT to 40.130 nT, 91.586 nT, and 141.989 nT, respectively.
Precision process calibration and CD predictions for low-k1 lithography
NASA Astrophysics Data System (ADS)
Chen, Ting; Park, Sangbong; Berger, Gabriel; Coskun, Tamer H.; de Vocht, Joep; Chen, Fung; Yu, Linda; Hsu, Stephen; van den Broeke, Doug; Socha, Robert; Park, Jungchul; Gronlund, Keith; Davis, Todd; Plachecki, Vince; Harris, Tom; Hansen, Steve; Lambson, Chuck
2005-06-01
Leading resist calibration for sub-0.3 k1 lithography demands accuracy of <2 nm for CD through pitch. An accurately calibrated resist process is the prerequisite for establishing production-worthy manufacturing under extreme low k1. From an integrated imaging point of view, the following key components must be simultaneously considered during the calibration: high numerical aperture (NA>0.8) imaging characteristics, customized illuminations (measured vs. modeled pupil profiles), resolution enhancement technology (RET) mask with OPC, reticle metrology, and the resist thin film substrate. For imaging at NA approaching unity, polarized illumination can significantly impact the contrast formation in the resist film stack, and therefore it is an important factor to consider in the CD-based resist calibration. For aggressive DRAM memory core designs at k1<0.3, pattern-specific illumination optimization has proven to be critical for achieving the required imaging performance. Various optimization techniques, from source profile optimization with a fixed mask design to combined source and mask optimization, have been considered for customer designs and available imaging capabilities. For successful low-k1 process development, verification of the optimization results can only be made with a sufficiently tunable resist model that can predict the wafer printing accurately under various optimized process settings. We have developed, for resist patterning under aggressive low-k1 conditions, a novel 3D diffusion model equipped with double-Gaussian convolution in each dimension. Resist calibration with the new diffusion model has demonstrated a fit and CD prediction accuracy that rivals or outperforms the traditional 3D physical resist models. In this work, we describe our empirical approach to achieving nm-scale precision for advanced lithography process calibrations, using either measured 1D CD through-pitch or 2D memory core patterns. We show that for ArF imaging, the current resist development and diffusion modeling can readily achieve ~1-2 nm max CD errors for common 1D through-pitch and aggressive 2D memory core resist patterns. Sensitivities of the calibrated models to various process parameters are analyzed, including the comparison between the measured and modeled (Gaussian or GRAIL) pupil profiles. We also report our preliminary calibration results under selected polarized illumination conditions.
A Hierarchical Bayesian Model for Calibrating Estimates of Species Divergence Times
Heath, Tracy A.
2012-01-01
In Bayesian divergence time estimation methods, incorporating calibrating information from the fossil record is commonly done by assigning prior densities to ancestral nodes in the tree. Calibration prior densities are typically parametric distributions offset by minimum age estimates provided by the fossil record. Specification of the parameters of calibration densities requires the user to quantify his or her prior knowledge of the age of the ancestral node relative to the age of its calibrating fossil. The values of these parameters can, potentially, result in biased estimates of node ages if they lead to overly informative prior distributions. Accordingly, determining parameter values that lead to adequate prior densities is not straightforward. In this study, I present a hierarchical Bayesian model for calibrating divergence time analyses with multiple fossil age constraints. This approach applies a Dirichlet process prior as a hyperprior on the parameters of calibration prior densities. Specifically, this model assumes that the rate parameters of exponential prior distributions on calibrated nodes are distributed according to a Dirichlet process, whereby the rate parameters are clustered into distinct parameter categories. Both simulated and biological data are analyzed to evaluate the performance of the Dirichlet process hyperprior. Compared with fixed exponential prior densities, the hierarchical Bayesian approach results in more accurate and precise estimates of internal node ages. When this hyperprior is applied using Markov chain Monte Carlo methods, the ages of calibrated nodes are sampled from mixtures of exponential distributions and uncertainty in the values of calibration density parameters is taken into account. PMID:22334343
Bayesian inference of Calibration curves: application to archaeomagnetism
NASA Astrophysics Data System (ADS)
Lanos, P.
2003-04-01
The range of errors that occur at different stages of the archaeomagnetic calibration process are modelled using a Bayesian hierarchical model. The archaeomagnetic data obtained from archaeological structures such as hearths, kilns or sets of bricks and tiles, exhibit considerable experimental errors and are typically more or less well dated by archaeological context, history or chronometric methods (14C, TL, dendrochronology, etc.). They can also be associated with stratigraphic observations which provide prior relative chronological information. The modelling we describe in this paper allows all these observations, on materials from a given period, to be linked together, and the use of penalized maximum likelihood for smoothing univariate, spherical or three-dimensional time series data allows representation of the secular variation of the geomagnetic field over time. The smooth curve we obtain (which takes the form of a penalized natural cubic spline) provides an adaptation to the effects of variability in the density of reference points over time. Since our model takes account of all the known errors in the archaeomagnetic calibration process, we are able to obtain a functional highest-posterior-density envelope on the new curve. With this new posterior estimate of the curve available to us, the Bayesian statistical framework then allows us to estimate the calendar dates of undated archaeological features (such as kilns) based on one, two or three geomagnetic parameters (inclination, declination and/or intensity). Date estimates are presented in much the same way as those that arise from radiocarbon dating. In order to illustrate the model and inference methods used, we will present results based on German archaeomagnetic data recently published by a German team.
NASA Astrophysics Data System (ADS)
Liu, Meng-Wei; Chang, Hao-Jung; Lee, Shu-sheng; Lee, Chih-Kung
2016-03-01
Tuberculosis is a highly contagious disease; the global latent-infection rate may be as high as one third of the world population. Currently, latent tuberculosis is diagnosed by stimulating T cells to produce the biomarker of tuberculosis, i.e., interferon-γ. In this paper, we developed a paraboloidal mirror enabled surface plasmon resonance (SPR) interferometer that has the potential to also integrate ellipsometry to analyze antibody and antigen reactions. To examine the feasibility of developing a platform for cross-calibrating the performance and detection limit of various bio-detection techniques, the electrochemical impedance spectroscopy (EIS) method was also implemented on a biochip that can be incorporated into this newly developed platform. The microfluidic channel of the biochip was functionalized by coating it with the interferon-γ antibody so as to enhance detection specificity. To facilitate the processing steps needed for using the biochip to detect various antigens of vastly different concentrations, a kinematic mount was also developed to guarantee the biochip re-positioning accuracy whenever the biochip was removed and placed back for another round of detection. With EIS being utilized, SPR was also adopted to observe real-time signals on the computer in order to verify the success of each biochip processing step, such as functionalization and washing. Finally, the EIS results and the optical signals obtained from the newly developed optical detection platform were cross-calibrated. Preliminary experimental results demonstrate the accuracy and performance of SPR and EIS measurements performed on the newly integrated platform.
NASA Astrophysics Data System (ADS)
Jackson-Blake, L.
2014-12-01
Process-based catchment water quality models are increasingly used as tools to inform land management. However, for such models to be reliable they need to be well calibrated and shown to reproduce key catchment processes. Calibration can be challenging for process-based models, which tend to be complex and highly parameterised. Calibrating a large number of parameters generally requires a large amount of monitoring data, but even in well-studied catchments, streams are often only sampled at a fortnightly or monthly frequency. The primary aim of this study was therefore to investigate how the quality and uncertainty of model simulations produced by one process-based catchment model, INCA-P (the INtegrated CAtchment model of Phosphorus dynamics), were improved by calibration to higher frequency water chemistry data. Two model calibrations were carried out for a small rural Scottish catchment: one using 18 months of daily total dissolved phosphorus (TDP) concentration data, another using a fortnightly dataset derived from the daily data. To aid comparability, calibrations were carried out automatically using the MCMC-DREAM algorithm. Using daily rather than fortnightly data resulted in improved simulation of the magnitude of peak TDP concentrations, in turn resulting in improved model performance statistics. Marginal posteriors were better constrained by the higher frequency data, resulting in a large reduction in parameter-related uncertainty in simulated TDP (the 95% credible interval decreased from 26 to 6 μg/l). The number of parameters that could be reliably auto-calibrated was lower for the fortnightly data, leading to the recommendation that parameters should not be varied spatially for models such as INCA-P unless there is solid evidence that this is appropriate, or there is a real need to do so for the model to fulfil its purpose. Secondary study aims were to highlight the subjective elements involved in auto-calibration and suggest practical improvements that could make models such as INCA-P more suited to auto-calibration and uncertainty analyses. Two key improvements include model simplification, so that all model parameters can be included in an analysis of this kind, and better documenting of recommended ranges for each parameter, to help in choosing sensible priors.
An open source platform for multi-scale spatially distributed simulations of microbial ecosystems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Segre, Daniel
2014-08-14
The goal of this project was to develop a tool for facilitating simulation, validation and discovery of multiscale dynamical processes in microbial ecosystems. This led to the development of an open-source software platform for Computation Of Microbial Ecosystems in Time and Space (COMETS). COMETS performs spatially distributed time-dependent flux balance based simulations of microbial metabolism. Our plan involved building the software platform itself, calibrating and testing it through comparison with experimental data, and integrating simulations and experiments to address important open questions on the evolution and dynamics of cross-feeding interactions between microbial species.
Digital phase demodulation for low-coherence interferometry-based fiber-optic sensors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Y.; Strum, R.; Stiles, D.
In this paper, we describe a digital phase demodulation scheme for low-coherence interferometry-based fiber-optic sensors employing a simple generation of phase-shifted signals at the interrogation interferometer. The scheme allows a real-time calibration process and offers the capability of measuring large variations (up to the coherence length of the light source) at a bandwidth limited only by the data acquisition system. Finally, the proposed phase demodulation method is analytically derived, and its validity and performance are experimentally verified using fiber-optic Fabry–Perot sensors for measurement of strains and vibrations.
The Spectrum of Single Bubble Sonoluminescence.
NASA Astrophysics Data System (ADS)
Hiller, Robert Anthony
1995-01-01
An acoustically levitated bubble in a liquid may be driven to produce short flashes of light synchronous with the sound field, in a process called sonoluminescence. The spectrum of the emitted light is measured with a grating monochromator and calibrated for absolute spectral radiance. The spectrum has been measured for various gases dissolved in pure water and heavy water, and in alcohols and other hydrocarbon liquids. At a bandpass of 10 nm FWHM the spectra are broad-band, showing no sign of lines or absorptions, with a peak in the ultraviolet. The experimental apparatus, including a system for producing sonoluminescence in a sealed container, is described.
NASA Technical Reports Server (NTRS)
Schnell, W. C.
1982-01-01
The jet-induced effects of several exhaust nozzle configurations (axisymmetric, and vectoring/modulating variants) on the aeropropulsive performance of a twin-engine V/STOL fighter design were determined. A 1/8-scale model was tested in an 11 ft transonic tunnel at static conditions and over a range of Mach numbers from 0.4 to 1.4. The experimental aspects of the static and wind-on programs are discussed. Jet-effects test techniques in general, flow-through balance calibrations and tare force corrections, ASME nozzle thrust and mass flow calibrations, and test problems and solutions are emphasized.
NASA Astrophysics Data System (ADS)
Perini, Ana P.; Neves, Lucio P.; Maia, Ana F.; Caldas, Linda V. E.
2013-12-01
In this work, a new extended-length parallel-plate ionization chamber was tested in the standard radiation qualities for computed tomography, established according to the half-value layers defined in the IEC 61267 standard, at the Calibration Laboratory of the Instituto de Pesquisas Energéticas e Nucleares (IPEN). The experimental characterization was performed following the IEC 61674 standard recommendations. The experimental results obtained with the ionization chamber studied in this work were compared to those obtained with a commercial pencil ionization chamber, showing good agreement. With the use of the PENELOPE Monte Carlo code, simulations were undertaken to evaluate the influence of the cables, insulator, PMMA body, collecting electrode, guard ring, and screws, as well as different materials and geometrical arrangements, on the energy deposited in the ionization chamber's sensitive volume. The maximum influence observed was 13.3% for the collecting electrode, and regarding the use of different materials and designs, the substitutions showed that the original project presented the most suitable configuration. The experimental and simulated results obtained in this work show that this ionization chamber has appropriate characteristics to be used at calibration laboratories for dosimetry in standard computed tomography and diagnostic radiology quality beams.
Bayesian analysis of physiologically based toxicokinetic and toxicodynamic models.
Hack, C Eric
2006-04-17
Physiologically based toxicokinetic (PBTK) and toxicodynamic (TD) models of bromate in animals and humans would improve our ability to accurately estimate the toxic doses in humans based on available animal studies. These mathematical models are often highly parameterized and must be calibrated in order for the model predictions of internal dose to adequately fit the experimentally measured doses. Highly parameterized models are difficult to calibrate, and it is difficult to obtain accurate estimates of uncertainty or variability in model parameters with commonly used frequentist calibration methods, such as maximum likelihood estimation (MLE) or least-squares error approaches. The Bayesian approach called Markov chain Monte Carlo (MCMC) analysis can be used to successfully calibrate these complex models. Prior knowledge about the biological system and associated model parameters is easily incorporated in this approach in the form of prior parameter distributions, and the distributions are refined or updated using experimental data to generate posterior distributions of parameter estimates. The goal of this paper is to give the non-mathematician a brief description of the Bayesian approach and Markov chain Monte Carlo analysis, how this technique is used in risk assessment, and the issues associated with this approach.
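A minimal random-walk Metropolis sketch of the MCMC calibration idea described above is given below, with a one-parameter toy model standing in for a PBTK model; the prior, likelihood, and data are all invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "model": predicted internal dose depends on a clearance parameter k.
def model(k, dose):
    return dose / (1.0 + k)

doses = np.array([1.0, 2.0, 5.0, 10.0])
data = model(0.8, doses) + rng.normal(0, 0.05, doses.size)  # synthetic obs

def log_post(k, sigma=0.05):
    if k <= 0:
        return -np.inf
    log_prior = -0.5 * (np.log(k) / 1.0) ** 2          # lognormal prior
    resid = data - model(k, doses)
    log_lik = -0.5 * np.sum((resid / sigma) ** 2)
    return log_prior + log_lik

# Random-walk Metropolis: propose, accept with probability min(1, ratio).
k, chain = 1.0, []
for _ in range(20000):
    k_new = k + rng.normal(0, 0.1)
    if np.log(rng.uniform()) < log_post(k_new) - log_post(k):
        k = k_new
    chain.append(k)

post = np.array(chain[5000:])                          # drop burn-in
print("posterior mean and 95% interval:",
      post.mean(), np.percentile(post, [2.5, 97.5]))
```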
Fragmentation modeling of a resin bonded sand
NASA Astrophysics Data System (ADS)
Hilth, William; Ryckelynck, David
2017-06-01
Cemented sands exhibit a complex mechanical behavior that can lead to sophisticated models with numerous parameters lacking real physical meaning. However, using a rather simple generalized critical-state bonded soil model has proven to be a relevant compromise between easy calibration and good results. The constitutive model formulation considers a non-associated elasto-plastic formulation within the critical-state framework. The calibration procedure, using standard laboratory tests, is complemented by the study of a uniaxial compression test observed by tomography. Using finite element simulations, this test is simulated considering a non-homogeneous 3D medium. The tomography of the compression sample gives access to 3D displacement fields via image correlation techniques. Unfortunately, these fields have missing experimental data because of the low resolution of the correlations at low displacement magnitudes. We propose a recovery method that reconstructs full 3D displacement fields and 2D boundary displacement fields. These fields are mandatory for the calibration of the constitutive parameters using 3D finite element simulations. The proposed recovery technique is based on a singular value decomposition of the available experimental data. This calibration protocol enables an accurate prediction of the fragmentation of the specimen.
NASA Astrophysics Data System (ADS)
Kim, Ji-hyun; Han, Jae-Ho; Jeong, Jichai
2016-05-01
The commonly employed calibration methods for laboratory-made spectrometers have several disadvantages, including poor calibration when the number of characteristic spectral peaks is low. Therefore, we present a wavelength calibration method using the relative k-space distribution obtained with a low-coherence interferometer. The proposed method utilizes an interferogram with a perfect sinusoidal pattern in k-space for calibration. Zero-crossing detection extracts the k-space distribution of a spectrometer from the interferogram in the wavelength domain, and a calibration lamp provides information about absolute wavenumbers. To assign wavenumbers, wavelength-to-k-space conversion is required for the characteristic spectrum of the calibration lamp with the extracted k-space distribution. Then, the wavelength calibration is completed by inverse conversion from k-space into the wavelength domain. The calibration performance of the proposed method was demonstrated under two experimental conditions, with four and with eight characteristic spectral peaks. The proposed method produced reliable calibration results in both cases, whereas the conventional method of third-order polynomial curve fitting failed to determine wavelengths in the case of four characteristic peaks. Moreover, for optical coherence tomography imaging, the proposed method could improve axial resolution due to higher suppression of sidelobes in the point spread function than the conventional method. We believe that our findings can improve not only wavelength calibration accuracy but also resolution for optical coherence tomography.
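The zero-crossing step described above can be sketched as follows: in a single-reflector interferogram the zero crossings are equidistant in k, so their sub-pixel positions define the relative k-space distribution of the spectrometer. The pixel-to-wavelength map and reflector depth below are invented for illustration.

```python
import numpy as np

# Synthetic spectrometer: pixels sample wavelength nonuniformly, so a
# single-reflector interferogram is chirped in pixel index but perfectly
# sinusoidal in k = 2*pi/lambda.
pix = np.arange(2048)
lam = 800e-9 + 0.04e-9 * pix + 2e-15 * pix**2    # invented pixel->wavelength
k = 2 * np.pi / lam
z = 0.5e-3                                       # reflector depth (m)
fringe = np.cos(2 * z * k)

# Zero-crossing detection with linear interpolation between samples.
s = np.signbit(fringe)
idx = np.flatnonzero(s[:-1] != s[1:])
frac = fringe[idx] / (fringe[idx] - fringe[idx + 1])
crossings = idx + frac                           # sub-pixel positions

# Successive crossings are separated by a constant pi phase step, i.e.
# they are equidistant in k: this is the *relative* k-space distribution.
rel_k = np.arange(crossings.size)                # k up to offset and scale
k_of_pixel = np.interp(pix, crossings, rel_k)    # relative k at every pixel

# Check linearity against the true k (up to an affine transform), away
# from the edge pixels that lie outside the first/last crossing.
inner = (pix > crossings[0]) & (pix < crossings[-1])
fit = np.polyfit(k[inner], k_of_pixel[inner], 1)
print("max deviation from affine map:",
      np.abs(np.polyval(fit, k[inner]) - k_of_pixel[inner]).max())
```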
LANDSAT-D Investigations Workshop
NASA Technical Reports Server (NTRS)
1982-01-01
Viewgraphs are presented which highlight LANDSAT-D project status and ground segment; early access TM processing; LANDSAT-D data acquisition and availability; LANDSAT-D performance characterization; MSS pre-NOAA characterization; MSS radiometric sensor performance (spectral information, absolute calibration, and ground processing); MSS geometric sensor performance; and MSS geometric processing and calibration.
Mendikute, Alberto; Yagüe-Fabra, José A; Zatarain, Mikel; Bertelsen, Álvaro; Leizea, Ibai
2017-09-09
Photogrammetry methods are being used more and more as a 3D technique for large scale metrology applications in industry. Optical targets are placed on an object and images are taken around it, where measuring traceability is provided by precise off-process pre-calibrated digital cameras and scale bars. According to the 2D target image coordinates, target 3D coordinates and camera views are jointly computed. One of the applications of photogrammetry is the measurement of raw part surfaces prior to its machining. For this application, post-process bundle adjustment has usually been adopted for computing the 3D scene. With that approach, a high computation time is observed, leading in practice to time consuming and user dependent iterative review and re-processing procedures until an adequate set of images is taken, limiting its potential for fast, easy-to-use, and precise measurements. In this paper, a new efficient procedure is presented for solving the bundle adjustment problem in portable photogrammetry. In-process bundle computing capability is demonstrated on a consumer grade desktop PC, enabling quasi real time 2D image and 3D scene computing. Additionally, a method for the self-calibration of camera and lens distortion has been integrated into the in-process approach due to its potential for highest precision when using low cost non-specialized digital cameras. Measurement traceability is set only by scale bars available in the measuring scene, avoiding the uncertainty contribution of off-process camera calibration procedures or the use of special purpose calibration artifacts. The developed self-calibrated in-process photogrammetry has been evaluated both in a pilot case scenario and in industrial scenarios for raw part measurement, showing a total in-process computing time typically below 1 s per image up to a maximum of 2 s during the last stages of the computed industrial scenes, along with a relative precision of 1/10,000 (e.g. 0.1 mm error in 1 m) with an error RMS below 0.2 pixels at image plane, ranging at the same performance reported for portable photogrammetry with precise off-process pre-calibrated cameras.
Dinç, Erdal; Büker, Eda
2012-01-01
A new application of continuous wavelet transform (CWT) to overlapping peaks in a chromatogram was developed for the quantitative analysis of amiloride hydrochloride (AML) and hydrochlorothiazide (HCT) in tablets. Chromatographic analysis was done using an ACQUITY ultra-performance LC (UPLC) BEH C18 column (50 x 2.1 mm i.d., 1.7 μm particle size) and a mobile phase consisting of methanol-0.1 M acetic acid (21 + 79, v/v) at a constant flow rate of 0.3 mL/min with diode array detection at 274 nm. The overlapping chromatographic peaks of the calibration set consisting of AML and HCT mixtures were recorded rapidly using an ACQUITY UPLC H-Class system. The overlapping UPLC data vectors of the AML and HCT drugs and their samples were processed by CWT signal processing methods. The calibration graphs for AML and HCT were computed from the relationship between concentration and the areas of the chromatographic CWT peaks. The applicability and validity of the improved UPLC-CWT approaches were confirmed by recovery studies and the standard addition technique. The proposed UPLC-CWT methods were applied to the determination of AML and HCT in tablets. The experimental results indicated that the suggested UPLC-CWT signal processing provides accurate and precise results for industrial QC and quantitative evaluation of AML-HCT tablets.
NASA Astrophysics Data System (ADS)
Minunno, Francesco; Peltoniemi, Mikko; Launiainen, Samuli; Mäkelä, Annikki
2014-05-01
Biogeochemical models quantify the material and energy flux exchanges between the biosphere, atmosphere and soil; however, there is still considerable uncertainty underpinning model structure and parametrization. The increasing availability of data from multiple sources provides useful information for model calibration and validation at different space and time scales. We calibrated the simplified ecosystem process model PRELES to data from multiple sites. In this work we had the following objectives: to compare a multi-site calibration with site-specific calibrations, in order to test whether PRELES is a model of general applicability, and to test how well one parameterization can predict ecosystem fluxes. Model calibration and evaluation were carried out by means of Bayesian methods; Bayesian calibration (BC) and Bayesian model comparison (BMC) were used to quantify the uncertainty in model parameters and model structure. Evapotranspiration (ET) and gross primary production (GPP) measurements collected at 9 sites in Finland and Sweden were used in the study; half of the dataset was used for model calibration and half for the comparative analyses. 10 BCs were performed; the model was independently calibrated for each of the nine sites (site-specific calibrations), and a multi-site calibration was achieved using the data from all the sites in one BC. Then 9 BMCs were carried out, one for each site, using output from the multi-site and site-specific versions of PRELES. Similar estimates were obtained for the parameters to which model outputs are most sensitive. Not surprisingly, the joint posterior distribution achieved through the multi-site calibration was characterized by lower uncertainty, because more data were involved in the calibration process. No significant differences were encountered between the predictions of the multi-site and site-specific versions of PRELES, and after BMC we concluded that the model can be reliably used at regional scale to simulate the carbon and water fluxes of boreal forests. Despite being a simple model, PRELES provided good estimates of GPP and ET; only for one site did the multi-site version of PRELES underestimate water fluxes. Our study implies convergence of GPP and water processes in the boreal zone to the extent that their plausible prediction is possible with a simple model using a global parameterization.
Modeling Long-Term Corn Yield Response to Nitrogen Rate and Crop Rotation
Puntel, Laila A.; Sawyer, John E.; Barker, Daniel W.; Dietzel, Ranae; Poffenbarger, Hanna; Castellano, Michael J.; Moore, Kenneth J.; Thorburn, Peter; Archontoulis, Sotirios V.
2016-01-01
Improved prediction of optimal N fertilizer rates for corn (Zea mays L.) can reduce N losses and increase profits. We tested the ability of the Agricultural Production Systems sIMulator (APSIM) to simulate corn and soybean (Glycine max L.) yields and the economic optimum N rate (EONR), using a 16-year field-experiment dataset from central Iowa, USA that included two crop sequences (continuous corn and soybean-corn) and five N fertilizer rates (0, 67, 134, 201, and 268 kg N ha-1) applied to corn. Our objectives were to: (a) quantify model prediction accuracy before and after calibration, and report calibration steps; (b) compare crop model-based techniques in estimating optimal N rate for corn; and (c) utilize the calibrated model to explain factors causing year-to-year variability in yield and optimal N. Results indicated that the model simulated long-term crop yield response to N well (relative root mean square error, RRMSE, of 19.6% before and 12.3% after calibration), which provided strong evidence that important soil and crop processes were accounted for in the model. The prediction of EONR was more complex and had greater uncertainty than the prediction of crop yield (RRMSE of 44.5% before and 36.6% after calibration). For long-term site mean EONR predictions, both calibrated and uncalibrated versions can be used, as the 16-year mean differences in EONRs were within the historical N rate error range (40-50 kg N ha-1). However, for accurate year-by-year simulation of EONR the calibrated version should be used. Model analysis revealed that higher EONR values in years with above-normal spring precipitation were caused by an exponential increase in N loss (denitrification and leaching) with precipitation. We concluded that long-term experimental data were valuable in testing and refining APSIM predictions. The model can be used as a tool to assist N management guidelines in the US Midwest, and we identified five avenues by which the model can add value toward agronomic, economic, and environmental sustainability. PMID:27891133
Development of landsat-5 thematic mapper internal calibrator gain and offset table
Barsi, J.A.; Chander, G.; Micijevic, E.; Markham, B.L.; Haque, Md. O.
2008-01-01
The National Landsat Archive Production System (NLAPS) has been the primary processing system for Landsat data since the U.S. Geological Survey (USGS) Earth Resources Observation and Science Center (EROS) started archiving Landsat data. NLAPS converts raw satellite data into radiometrically and geometrically calibrated products. NLAPS has historically used the Internal Calibrator (IC) to calibrate the reflective bands of the Landsat-5 Thematic Mapper (TM), even though the lamps in the IC were less stable than the TM detectors, as evidenced by vicarious calibration results. In 2003, a major effort was made to model the actual TM gain change and to update NLAPS to use this model rather than the unstable IC data for radiometric calibration. The model coefficients were revised in 2007 to reflect greater understanding of the changes in the TM responsivity. While the calibration updates are important to users with recently processed data, the processing system no longer calculates the original IC gain or offset. For specific applications, it is useful to have a record of the gain and offset actually applied to the older data. Thus, the NLAPS calibration database was used to generate estimated daily values for the radiometric gain and offset that might have been applied to TM data. This paper discusses the need for and generation of the NLAPS IC gain and offset tables. A companion paper covers the application of and errors associated with using these tables.
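A gain/offset table of this kind is applied as a simple linear map from digital numbers (DN) to radiance, L = gain x DN + offset. The sketch below shows the lookup-and-apply pattern with invented table values; the real NLAPS IC values are not reproduced here.

```python
import numpy as np

# Hypothetical excerpt of a per-date gain/offset table for one TM band;
# the numbers are invented, not the published NLAPS IC values.
table = {
    "1990-07-01": (0.602, 1.17),   # (gain in W m-2 sr-1 um-1 per DN, offset)
    "1995-07-01": (0.575, 1.17),
    "2000-07-01": (0.551, 1.17),
}

def dn_to_radiance(dn, acquisition_date):
    """Apply the table entry for the scene date: L = gain * DN + offset."""
    gain, offset = table[acquisition_date]
    return gain * np.asarray(dn, dtype=float) + offset

print(dn_to_radiance([10, 128, 255], "1995-07-01"))
```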
NASA Astrophysics Data System (ADS)
Borisov, A. A.; Deryabina, N. A.; Markovskij, D. V.
2017-12-01
Instantaneous power is a key parameter of the ITER. Its monitoring with an accuracy of a few percent is an urgent and challenging aspect of neutron diagnostics. In a series of works published in Problems of Atomic Science and Technology, Series: Thermonuclear Fusion under a common title, a step-by-step neutronics analysis was given to substantiate a calibration technique for the DT and DD modes of the ITER. A Gauss quadrature scheme, optimal for processing "expensive" experiments, is used for numerical integration of 235U and 238U detector responses to point sources of 14-MeV neutrons. This approach allows controlling the integration accuracy in relation to the number of coordinate mesh points and thus minimizing the number of irradiations at a given uncertainty of the full monitor response. In previous works, responses of the divertor and blanket monitors to isotropic point sources of DT and DD neutrons in the plasma profile and to models of real sources were calculated within the ITER model using the MCNP code. The neutronics analyses have allowed formulating the basic principles of calibration that are optimal for obtaining the maximum accuracy with the minimum duration of in situ experiments at the reactor. In this work, scenarios of the preliminary and basic experimental ITER runs are suggested on the basis of those principles. It is proposed to calibrate the monitors only with DT neutrons and to use correction factors to the DT-mode calibration for the DD mode. It is reasonable to perform full calibration only with 235U chambers and to calibrate 238U chambers against the responses of the 235U chambers during reactor operation (cross-calibration). The divertor monitor can be calibrated using both direct measurement of responses at the Gauss positions of a point source and simplified techniques based on the concepts of equivalent ring sources and inverse response distributions, which will considerably reduce the number of measurements. It is shown that the monitor based on the averaged responses of the horizontal and vertical neutron chambers remains spatially stable as the source moves and can be used in addition to the standard monitor at neutron fluxes in the detectors four orders of magnitude lower than on the first wall, where the standard detectors are located. Owing to the low background, detectors of the neutron chambers do not need calibration in the reactor, because such calibration is actually a determination of the absolute detector efficiency for 14-MeV neutrons, which is a routine out-of-reactor procedure.
Presas, Alexandre; Valentin, David; Egusquiza, Eduard; Valero, Carme; Egusquiza, Mònica; Bossio, Matias
2017-01-01
To accurately determine the dynamic response of a structure is of relevant interest in many engineering applications. Particularly, it is of paramount importance to determine the Frequency Response Function (FRF) of structures subjected to dynamic loads, in order to avoid resonance and fatigue problems that can drastically reduce their useful life. One challenging case is the experimental determination of the FRF of submerged and confined structures, such as hydraulic turbines, which are greatly affected by dynamic problems, as reported in many cases in the past. The utilization of classical calibrated exciters such as instrumented hammers or shakers to determine the FRF in such structures can be very complex due to the confinement of the structure, and because their use can disturb the boundary conditions, affecting the experimental results. For such cases, Piezoelectric Patches (PZTs), which are very light, thin and small, can be a very good option. Nevertheless, the main drawback of these exciters is that their calibration as dynamic force transducers (the voltage/force relationship) has not been successfully obtained in the past. Therefore, in this paper, a method to accurately determine the FRF of submerged and confined structures by using PZTs is developed and validated. The method consists of experimentally determining some characteristic parameters that define the FRF, with an uncalibrated PZT exciting the structure. These parameters, which have been experimentally determined, are then introduced into a validated numerical model of the tested structure. In this way, the FRF of the structure can be estimated with good accuracy. With respect to previous studies, where only the natural frequencies and mode shapes were considered, this paper discusses and experimentally identifies the best excitation characteristics for also obtaining the damping ratios, and proposes a procedure to fully determine the FRF. The method proposed here has been validated for the structure vibrating in air by comparing the FRF experimentally obtained with a calibrated exciter (impact hammer) and the FRF obtained with the described method. Finally, the same methodology has been applied to the structure submerged and close to a rigid wall, where it is extremely important not to modify the boundary conditions for an accurate determination of the FRF. As experimentally shown in this paper, in such cases, the use of PZTs combined with the proposed methodology gives much more accurate estimations of the FRF than other calibrated exciters typically used for the same purpose. Therefore, the validated methodology proposed in this paper can be used to obtain the FRF of a generic submerged and confined structure, without a previous calibration of the PZT. PMID:28327501
Biomechanical Modeling of the Human Head
2017-10-03
This report details model calibration for all materials identified in models of a human head, comparing model predictions with experimental data. [List-of-figures residue: stress-strain data for the pia mater and dura mater (human subject), with hyperelastic model fits to experimental data from the cited literature.]
Comparison of GLIMPS and HFAST Stirling engine code predictions with experimental data
NASA Technical Reports Server (NTRS)
Geng, Steven M.; Tew, Roy C.
1992-01-01
Predictions from GLIMPS and HFAST design codes are compared with experimental data for the RE-1000 and SPRE free piston Stirling engines. Engine performance and available power loss predictions are compared. Differences exist between GLIMPS and HFAST loss predictions. Both codes require engine specific calibration to bring predictions and experimental data into agreement.
Multiple-Objective Stepwise Calibration Using Luca
Hay, Lauren E.; Umemoto, Makiko
2007-01-01
This report documents Luca (Let us calibrate), a multiple-objective, stepwise, automated procedure for hydrologic model calibration and the associated graphical user interface (GUI). Luca is a wizard-style user-friendly GUI that provides an easy systematic way of building and executing a calibration procedure. The calibration procedure uses the Shuffled Complex Evolution global search algorithm to calibrate any model compiled with the U.S. Geological Survey's Modular Modeling System. This process assures that intermediate and final states of the model are simulated consistently with measured values.
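The stepwise pattern Luca automates can be sketched as follows: calibrate one parameter subset against one objective, fix it, then move to the next step. The toy model and data below are invented, and scipy's differential_evolution is used as a stand-in global optimizer rather than the Shuffled Complex Evolution algorithm itself.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(4)

# Toy hydrologic model: SWE depends on a snow parameter only, while
# streamflow depends on both parameters; observations are synthetic.
t = np.arange(100.0)
def model(snow_p, soil_p):
    swe = np.maximum(0.0, 50.0 - snow_p * t)
    flow = soil_p * np.exp(-t / 30.0) + 0.1 * (swe[0] - swe) / 50.0
    return swe, flow

swe_true, flow_true = model(0.7, 2.0)
swe_obs = swe_true + rng.normal(0, 1.0, t.size)
flow_obs = flow_true + rng.normal(0, 0.05, t.size)

params = {"snow_p": 1.0, "soil_p": 1.0}   # initial values

# Step 1: calibrate the snow parameter against the SWE objective only.
def obj_swe(x):
    swe, _ = model(x[0], params["soil_p"])
    return np.mean((swe - swe_obs) ** 2)
params["snow_p"] = differential_evolution(obj_swe, [(0.1, 2.0)], seed=1).x[0]

# Step 2: with snow_p fixed, calibrate the soil parameter on streamflow.
def obj_flow(x):
    _, flow = model(params["snow_p"], x[0])
    return np.mean((flow - flow_obs) ** 2)
params["soil_p"] = differential_evolution(obj_flow, [(0.1, 5.0)], seed=1).x[0]

print(params)
```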
NASA Astrophysics Data System (ADS)
Islam, Siraj Ul; Déry, Stephen J.
2017-03-01
This study evaluates predictive uncertainties in the snow hydrology of the Fraser River Basin (FRB) of British Columbia (BC), Canada, using the Variable Infiltration Capacity (VIC) model forced with several high-resolution gridded climate datasets. These datasets include the Canadian Precipitation Analysis and the thin-plate smoothing splines (ANUSPLIN), North American Regional Reanalysis (NARR), University of Washington (UW) and Pacific Climate Impacts Consortium (PCIC) gridded products. Uncertainties are evaluated at different stages of the VIC implementation, starting with the driving datasets, optimization of model parameters, and model calibration during cool and warm phases of the Pacific Decadal Oscillation (PDO). The inter-comparison of the forcing datasets (precipitation and air temperature) and their VIC simulations (snow water equivalent - SWE - and runoff) reveals widespread differences over the FRB, especially in mountainous regions. The ANUSPLIN precipitation shows a considerable dry bias in the Rocky Mountains, whereas the NARR winter air temperature is 2 °C warmer than the other datasets over most of the FRB. In the VIC simulations, the elevation-dependent changes in the maximum SWE (maxSWE) are more prominent at higher elevations of the Rocky Mountains, where the PCIC-VIC simulation accumulates too much SWE and ANUSPLIN-VIC yields an underestimation. Additionally, at each elevation range, the day of maxSWE varies by 10 to 20 days among the VIC simulations. The snowmelt season begins early in the NARR-VIC simulation, whereas the PCIC-VIC simulation delays the melting, indicating seasonal uncertainty in SWE simulations. When compared with the observed runoff for the Fraser River main stem at Hope, BC, the ANUSPLIN-VIC simulation shows considerable underestimation of runoff throughout the water year, owing to reduced precipitation in the ANUSPLIN forcing dataset. The NARR-VIC simulation yields more winter and spring runoff and an earlier decline of flows in summer due to a nearly 15-day earlier onset of the FRB springtime snowmelt. Analysis of the parametric uncertainty in the VIC calibration process shows that the choice of the initial parameter range plays a crucial role in defining the model hydrological response for the FRB. Furthermore, the VIC calibration process is biased toward cool and warm phases of the PDO, and the choice of proper calibration and validation time periods is important for the experimental setup. Overall, the VIC hydrological response is more prominently influenced by the uncertainties in the forcing datasets than by those in its parameter optimization and experimental setups.
A new approach on JPSS VIIRS BCS and SVS PRT calibration
NASA Astrophysics Data System (ADS)
Wang, Tung R.; Marschke, Steve; Borroto, Michael; Jones, Christopher M.; Chovit, Christopher
2015-05-01
A set of calibrated platinum resistance thermometers (PRTs) was used to monitor the temperature of a Blackbody Calibration Source (BCS) and a Space View Source (SVS). The BCS is Ground Support Equipment (GSE) used to validate the emissive band calibration of the Visible Infrared Imaging Radiometer Suite (VIIRS) of the Joint Polar Satellite System (JPSS). Another GSE, the SVS, was used as an optical simulator to provide zero-radiance sources for all VIIRS bands. The required PRT temperature uncertainty is less than 0.030 K. A process was developed to calibrate the PRTs in their thermal block by selecting a single thermal bath fluid that is compatible with spaceflight, is easy to clean, and supports the entire temperature range. The process involves thermally cycling the PRTs, which are installed in an aluminum housing using RTV566A, prior to calibration. The PRTs were calibrated, thermally cycled again, and then calibrated once more to verify repeatability. Once completed, these PRTs were installed on both the BCS and SVS. The PRT calibration uncertainty was estimated and deemed sufficient to support the effective temperature requirements over the operating temperature range of the BCS and SVS.
Evaluation of space shuttle main engine fluid dynamic frequency response characteristics
NASA Technical Reports Server (NTRS)
Gardner, T. G.
1980-01-01
In order to determine the POGO stability characteristics of the space shuttle main engine (SSME) liquid oxygen (LOX) system, the fluid dynamic frequency response functions between elements in the SSME LOX system were evaluated, both analytically and experimentally. For the experimental data evaluation, a software package was written for the Hewlett-Packard 5451C Fourier analyzer. The POGO analysis software is documented and consists of five separate segments. Each segment is stored on the 5451C disc as an individual program and performs its own unique function. The package includes two separate data reduction methods, a signal calibration, frequency response function blanking based on coherence or the pulser signal, and automatic plotting features. The 5451C allows variable parameter transfer from program to program. This feature is used to advantage and requires only minimal user interaction during the data reduction process. Experimental results are included and compared with the analytical predictions in order to adjust the general model and arrive at a realistic simulation of the POGO characteristics.
Online C-arm calibration using a marked guide wire for 3D reconstruction of pulmonary arteries
NASA Astrophysics Data System (ADS)
Vachon, Étienne; Miró, Joaquim; Duong, Luc
2017-03-01
3D reconstruction of vessels from 2D X-ray angiography is highly relevant to improve the visualization and assessment of vascular structures such as pulmonary arteries by interventional cardiologists. However, to ensure a robust and accurate reconstruction, the C-arm gantry parameters must be properly calibrated to provide clinically acceptable results. Calibration procedures often rely on calibration objects and complex protocols, which are not adapted to an interventional context. In this study, a novel calibration algorithm for the C-arm gantry is presented that uses instrumentation such as catheters and guide wires. This ensures the availability of a minimum set of correspondences and implies minimal changes to the clinical workflow. The method was evaluated on simulated data and on retrospective patient datasets. Experimental results on simulated datasets demonstrate a calibration that allows a 3D reconstruction of the guide wire up to a geometric transformation. Experiments with patient datasets show a significant decrease of the reprojection error to 0.17 mm 2D RMS. Consequently, such a procedure might help identify any calibration drift during the intervention.
Liu, Wanli
2017-03-08
The time delay calibration between Light Detection and Ranging (LiDAR) and Inertial Measurement Units (IMUs) is an essential prerequisite for their fused applications. However, the correspondences between LiDAR and IMU measurements are usually unknown, and thus cannot be computed directly for the time delay calibration. In order to solve the problem of LiDAR-IMU time delay calibration, this paper presents a fusion method based on iterative closest point (ICP) and an iterated sigma point Kalman filter (ISPKF), which combines the advantages of both. The ICP algorithm can precisely determine the unknown transformation between LiDAR and IMU, and the ISPKF algorithm can optimally estimate the time delay calibration parameters. First, the coordinate transformation from the LiDAR frame to the IMU frame is realized. Second, the measurement model and the time delay error model of the LiDAR and IMU are established. Third, the methodology of the ICP and ISPKF procedure is presented for LiDAR-IMU time delay calibration. Experimental results are presented that validate the proposed method and demonstrate that the time delay error can be accurately calibrated.
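The ICP building block named above can be sketched independently of the filtering stage: alternate nearest-neighbor correspondence with a closed-form (Procrustes) rigid alignment. The 2D point clouds below are synthetic, and the ISPKF stage is not reproduced.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_2d(src, dst, iters=30):
    """Minimal 2D ICP: rigidly align src to dst.

    Sketch of the scan-matching step only; the ISPKF stage of the paper
    is not reproduced here.
    """
    R, t = np.eye(2), np.zeros(2)
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, nn = tree.query(cur)              # nearest-neighbor matches
        p, q = cur, dst[nn]
        pc, qc = p - p.mean(0), q - q.mean(0)
        U, _, Vt = np.linalg.svd(pc.T @ qc)  # Procrustes rotation
        if np.linalg.det(U @ Vt) < 0:        # guard against reflections
            Vt[-1] *= -1
        Ri = (U @ Vt).T
        ti = q.mean(0) - Ri @ p.mean(0)
        cur = cur @ Ri.T + ti
        R, t = Ri @ R, Ri @ t + ti           # accumulate the transform
    return R, t

rng = np.random.default_rng(5)
scan = rng.uniform(-5.0, 5.0, (200, 2))
theta = 0.2
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
moved = scan @ R_true.T + np.array([0.3, -0.1])
R_est, t_est = icp_2d(scan, moved)
print("rotation error:", np.abs(R_est - R_true).max())
```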
A Visual Servoing-Based Method for ProCam Systems Calibration
Berry, Francois; Aider, Omar Ait; Mosnier, Jeremie
2013-01-01
Projector-camera systems are currently used in a wide range of applications, such as 3D reconstruction and augmented reality, and can provide accurate measurements depending on the configuration and calibration. Frequently, the calibration task is divided into two steps: camera calibration followed by projector calibration. The latter still poses certain problems that are not easy to solve, such as the difficulty of obtaining a set of 2D–3D points to compute the projection matrix between the projector and the world. Existing methods are either not sufficiently accurate or not flexible. We propose an easy and automatic method to calibrate such systems, which consists in projecting a calibration pattern and superimposing it automatically on a known printed pattern. The projected pattern is provided by a virtual camera observing a virtual pattern in an OpenGL model. The projector displays what the virtual camera visualizes. Thus, the projected pattern can be controlled and superimposed on the printed one with the aid of visual servoing. Our experimental results compare favorably with those of other methods in terms of both usability and accuracy. PMID:24084121
Efficient Reduction and Analysis of Model Predictive Error
NASA Astrophysics Data System (ADS)
Doherty, J.
2006-12-01
Most groundwater models are calibrated against historical measurements of head and other system states before being used to make predictions in a real-world context. Through the calibration process, parameter values are estimated or refined such that the model is able to reproduce historical behaviour of the system at pertinent observation points reasonably well. Predictions made by the model are deemed to have greater integrity because of this. Unfortunately, predictive integrity is not as easy to achieve as many groundwater practitioners would like to think. The level of parameterisation detail estimable through the calibration process (especially where estimation takes place on the basis of heads alone) is strictly limited, even where full use is made of modern mathematical regularisation techniques such as those encapsulated in the PEST calibration package. (Use of these mechanisms allows more information to be extracted from a calibration dataset than is possible using simpler regularisation devices such as zones of piecewise constancy.) Where a prediction depends on aspects of parameterisation detail that are simply not inferable through the calibration process (which is often the case for predictions related to contaminant movement, and/or many aspects of groundwater/surface water interaction), that prediction may be just as much in error as it would have been if the model had not been calibrated at all. Model predictive error arises from two sources. These are (a) the presence of measurement noise within the calibration dataset through which linear combinations of parameters spanning the "calibration solution space" are inferred, and (b) the sensitivity of the prediction to members of the "calibration null space" spanned by linear combinations of parameters which are not inferable through the calibration process. The magnitude of the former contribution depends on the level of measurement noise. The magnitude of the latter contribution (which often dominates the former) depends on the "innate variability" of hydraulic properties within the model domain. Knowledge of both of these is a prerequisite for characterisation of the magnitude of possible model predictive error. Unfortunately, in most cases, such knowledge is incomplete and subjective. Nevertheless, useful analysis of model predictive error can still take place. The present paper briefly discusses the means by which mathematical regularisation can be employed in the model calibration process in order to extract as much information as possible on hydraulic property heterogeneity prevailing within the model domain, thereby reducing predictive error to the lowest that can be achieved on the basis of the calibration dataset. It then demonstrates the means by which predictive error variance can be quantified based on information supplied by the regularised inversion process. Both linear and nonlinear predictive error variance analyses are demonstrated using a number of real-world and synthetic examples.
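In the linear analysis this abstract summarises, the predictive error variance is commonly written as the sum of the two contributions just described. A hedged sketch of that decomposition follows; the notation is assumed here rather than taken from the paper:

```latex
\sigma^{2}_{s-\hat{s}}
  = \underbrace{\mathbf{y}^{T}(\mathbf{I}-\mathbf{R})\,C(\mathbf{p})\,(\mathbf{I}-\mathbf{R})^{T}\mathbf{y}}_{\text{null-space (innate variability) term}}
  + \underbrace{\mathbf{y}^{T}\mathbf{G}\,C(\boldsymbol{\varepsilon})\,\mathbf{G}^{T}\mathbf{y}}_{\text{solution-space (measurement noise) term}}
```

Here y is the sensitivity of the prediction s to the parameters, R and G are the resolution matrix and inversion operator of the regularised inversion, C(p) is the innate parameter variability, and C(ε) is the measurement noise covariance.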
An Accurate Projector Calibration Method Based on Polynomial Distortion Representation
Liu, Miao; Sun, Changku; Huang, Shujun; Zhang, Zonghua
2015-01-01
In structured light measurement systems or 3D printing systems, the errors caused by optical distortion of a digital projector always affect the precision performance and cannot be ignored. Existing methods to calibrate the projection distortion rely on a calibration plate and photogrammetry, so the calibration performance is largely affected by the quality of the plate and the imaging system. This paper proposes a new projector calibration approach that makes use of photodiodes to directly detect the light emitted from a digital projector. By analyzing the output sequence of the photoelectric module, the pixel coordinates can be accurately obtained by a curve fitting method. A polynomial distortion representation is employed to reduce the residuals of the traditional distortion representation model. Experimental results and performance evaluation show that the proposed calibration method is able to avoid most of the disadvantages of traditional methods and achieves a higher accuracy. The proposed method is also practically applicable to evaluating the geometric optical performance of other optical projection systems. PMID:26492247
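A polynomial distortion representation lends itself to a simple linear least squares fit. The sketch below uses bivariate offset polynomials mapping ideal to observed pixel coordinates; the degree, conventions, and function names are illustrative assumptions, not the paper's exact model.

```python
# Hedged sketch: fitting a bivariate polynomial distortion map
# (ideal -> observed pixel offsets) by linear least squares.
import numpy as np

def poly_terms(x, y, deg=3):
    """All monomials x^i * y^j with i + j <= deg, stacked as columns."""
    return np.column_stack([x**i * y**j
                            for i in range(deg + 1)
                            for j in range(deg + 1 - i)])

def fit_distortion(ideal_xy, observed_xy, deg=3):
    """Return coefficient vectors of the x- and y-offset polynomials."""
    A = poly_terms(ideal_xy[:, 0], ideal_xy[:, 1], deg)
    cx, *_ = np.linalg.lstsq(A, observed_xy[:, 0] - ideal_xy[:, 0], rcond=None)
    cy, *_ = np.linalg.lstsq(A, observed_xy[:, 1] - ideal_xy[:, 1], rcond=None)
    return cx, cy
```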
Simulated workplace neutron fields
NASA Astrophysics Data System (ADS)
Lacoste, V.; Taylor, G.; Röttger, S.
2011-12-01
The use of simulated workplace neutron fields, which aim to replicate radiation fields at practical workplaces, is an alternative solution for the calibration of neutron dosemeters. They offer more appropriate calibration coefficients when the mean fluence-to-dose-equivalent conversion coefficients of the simulated and practical fields are comparable. Intensive Monte Carlo modelling has become indispensable for the design and/or the characterization of the produced mixed neutron/photon fields, and the use of Bonner sphere systems and proton recoil spectrometers is also mandatory for a reliable experimental determination of the neutron fluence energy distribution over the whole energy range. The establishment of a calibration capability with a simulated workplace neutron field is not an easy task; to date, only a few facilities are available as standard calibration fields.
Correction of amplitude-phase distortion for polarimetric active radar calibrator
NASA Astrophysics Data System (ADS)
Lin, Jianzhi; Li, Weixing; Zhang, Yue; Chen, Zengping
2015-01-01
The polarimetric active radar calibrator (PARC) is extensively used as an external test target for system distortion compensation and polarimetric calibration of high-resolution polarimetric radars. However, the signal undergoes distortion within the PARC itself, affecting the effectiveness of the compensation and the calibration. The effect of amplitude and phase distortion in the PARC on system distortion compensation was analyzed based on the "method of paired echoes." A correction method is then proposed that separates the ideal signals from the distorted signals. Experiments were carried out on real radar data, and the experimental results were in good agreement with the theoretical analysis. After the correction, the PARC can be better used as an external test target for system distortion compensation.
Calibration procedure for a laser triangulation scanner with uncertainty evaluation
NASA Astrophysics Data System (ADS)
Genta, Gianfranco; Minetola, Paolo; Barbato, Giulio
2016-11-01
Most low-cost 3D scanning devices available on the market today are sold without a user calibration procedure to correct measurement errors related to changes in environmental conditions. In addition, there is no specific international standard defining a procedure to check the performance of a 3D scanner over time. This paper details a thorough methodology to calibrate a 3D scanner and assess its measurement uncertainty. The proposed procedure is based on the use of a reference ball plate and is applied to a triangulation laser scanner. Experimental results show that the metrological performance of the instrument can be greatly improved by the application of the calibration procedure, which corrects systematic errors and reduces the device's measurement uncertainty.
Calibrating ion density profile measurements in ion thruster beam plasma
NASA Astrophysics Data System (ADS)
Zhang, Zun; Tang, Haibin; Ren, Junxue; Zhang, Zhe; Wang, Joseph
2016-11-01
The ion thruster beam plasma is characterized by a high directed ion velocity (10⁴ m/s) and a low plasma density (10¹⁵ m⁻³). Interpretation of measurements of such a plasma based on classical Langmuir probe theory can yield a large experimental error. This paper presents an indirect method to calibrate ion density determination in an ion thruster beam plasma using a Faraday probe, a retarding potential analyzer, and a Langmuir probe. The new method is applied to the beam plasma emitted from a 20-cm-diameter Kaufman ion thruster. The results show that the ion density calibrated by the new method can be as much as 40% less than the value obtained without any ion current density and ion velocity calibration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Narlesky, Joshua Edward; Kelly, Elizabeth J.
2015-09-10
This report documents the new PG calibration regression equations, which incorporate data that have become available since revision 1 of “A Calibration to Predict the Concentrations of Impurities in Plutonium Oxide by Prompt Gamma Analysis” was issued [3]. The calibration equations are based on a weighted least squares (WLS) approach for the regression. The WLS method gives each data point its proper amount of influence over the parameter estimates, which yields two significant advantages: more precise parameter estimates and better, more defensible estimates of uncertainties. The WLS approach makes sense both statistically and experimentally because the variances increase with concentration, and there are physical reasons why the higher measurements are less reliable and should be less influential. The new magnesium calibration includes a correction for sodium and separate calibration equations for items with and without chlorine. These additional calibration equations allow for better predictions and smaller uncertainties for sodium in materials with and without chlorine. Chlorine and sodium have separate equations for RICH materials; again, these equations give better predictions and smaller uncertainties for chlorine and sodium in RICH materials.
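A minimal sketch of the WLS idea described above, using synthetic data and an assumed variance-grows-with-concentration weighting (statsmodels' WLS takes weights proportional to inverse variance). None of the numbers reflect the report's actual calibration data.

```python
# Hedged sketch: weighted least squares with weights 1/variance,
# where variance is assumed to grow with concentration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
conc = np.linspace(10.0, 500.0, 25)                    # synthetic known concentrations
counts = 0.02 * conc + rng.normal(0.0, 0.001 * conc)   # noise grows with level

X = sm.add_constant(counts)                            # intercept + slope design
weights = 1.0 / conc**2                                # assumed var ~ concentration^2
fit = sm.WLS(conc, X, weights=weights).fit()
print(fit.params)                                      # intercept, slope estimates
print(fit.bse)                                         # their standard errors
```

The weighting downweights the high-concentration points, exactly the behaviour the report motivates: the noisier high measurements exert less influence on the fitted calibration line.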
A holistic calibration method with iterative distortion compensation for stereo deflectometry
NASA Astrophysics Data System (ADS)
Xu, Yongjia; Gao, Feng; Zhang, Zonghua; Jiang, Xiangqian
2018-07-01
This paper presents a novel holistic calibration method for a stereo deflectometry system to improve the system measurement accuracy. The reconstruction result of stereo deflectometry is obtained by integrating the calculated normal data of the measured surface. The calculation accuracy of the normal data is strongly influenced by the calibration accuracy of the geometrical relationship of the stereo deflectometry system. Conventional calibration approaches introduce form error into the system due to an inaccurate imaging model and incomplete distortion elimination. The proposed calibration method compensates system distortion with an iterative algorithm instead of the conventional mathematical distortion model. The initial values of the system parameters are calculated from the fringe patterns displayed on the systemic LCD screen through a reflection off a markerless flat mirror. An iterative algorithm is proposed to compensate system distortion and to optimize the camera imaging parameters and system geometrical relation parameters based on a cost function. Both simulation work and experimental results show that the proposed calibration method can significantly improve the calibration and measurement accuracy of a stereo deflectometry system. The PV (peak-to-valley) measurement error for a flat mirror can be reduced from 282 nm, obtained with the conventional calibration approach, to 69.7 nm by applying the proposed method.
Advanced Mathematical Tools in Metrology III
NASA Astrophysics Data System (ADS)
Ciarlini, P.
The Table of Contents for the book is as follows: * Foreword * Invited Papers * The ISO Guide to the Expression of Uncertainty in Measurement: A Bridge between Statistics and Metrology * Bootstrap Algorithms and Applications * The TTRSs: 13 Oriented Constraints for Dimensioning, Tolerancing & Inspection * Graded Reference Data Sets and Performance Profiles for Testing Software Used in Metrology * Uncertainty in Chemical Measurement * Mathematical Methods for Data Analysis in Medical Applications * High-Dimensional Empirical Linear Prediction * Wavelet Methods in Signal Processing * Software Problems in Calibration Services: A Case Study * Robust Alternatives to Least Squares * Gaining Information from Biomagnetic Measurements * Full Papers * Increase of Information in the Course of Measurement * A Framework for Model Validation and Software Testing in Regression * Certification of Algorithms for Determination of Signal Extreme Values during Measurement * A Method for Evaluating Trends in Ozone-Concentration Data and Its Application to Data from the UK Rural Ozone Monitoring Network * Identification of Signal Components by Stochastic Modelling in Measurements of Evoked Magnetic Fields from Peripheral Nerves * High Precision 3D-Calibration of Cylindrical Standards * Magnetic Dipole Estimations for MCG-Data * Transfer Functions of Discrete Spline Filters * An Approximation Method for the Linearization of Tridimensional Metrology Problems * Regularization Algorithms for Image Reconstruction from Projections * Quality of Experimental Data in Hydrodynamic Research * Stochastic Drift Models for the Determination of Calibration Intervals * Short Communications * Projection Method for Lidar Measurement * Photon Flux Measurements by Regularised Solution of Integral Equations * Correct Solutions of Fit Problems in Different Experimental Situations * An Algorithm for the Nonlinear TLS Problem in Polynomial Fitting * Designing Axially Symmetric Electromechanical Systems of Superconducting Magnetic Levitation in Matlab Environment * Data Flow Evaluation in Metrology * A Generalized Data Model for Integrating Clinical Data and Biosignal Records of Patients * Assessment of Three-Dimensional Structures in Clinical Dentistry * Maximum Entropy and Bayesian Approaches to Parameter Estimation in Mass Metrology * Amplitude and Phase Determination of Sinusoidal Vibration in the Nanometer Range using Quadrature Signals * A Class of Symmetric Compactly Supported Wavelets and Associated Dual Bases * Analysis of Surface Topography by Maximum Entropy Power Spectrum Estimation * Influence of Different Kinds of Errors on Imaging Results in Optical Tomography * Application of the Laser Interferometry for Automatic Calibration of Height Setting Micrometer * Author Index
CTF (Subchannel) Calculations and Validation L3:VVI.H2L.P15.01
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gordon, Natalie
The goal of the Verification and Validation Implementation (VVI) High to Low (Hi2Lo) process is to utilize a validated model in a high-resolution code to generate synthetic data for improvement of the same model in a lower-resolution code. This process is useful in circumstances where experimental data do not exist or are not sufficient in quantity or resolution. Data from the high-fidelity code are treated as calibration data (with appropriate uncertainties and error bounds) which can be used to train parameters that affect solution accuracy in the lower-fidelity code model, thereby reducing uncertainty. This milestone presents a demonstration of the Hi2Lo process derived in the VVI focus area. The majority of the work performed herein describes the steps of the low-fidelity code used in the process, with references to the work detailed in the companion high-fidelity code milestone (Reference 1). The CASL low-fidelity code used to perform this work was Cobra Thermal Fluid (CTF) and the high-fidelity code was STAR-CCM+ (STAR). The master branch version of CTF (pulled May 5, 2017 – Reference 2) was utilized for all CTF analyses performed as part of this milestone. The statistical and VVUQ components of the Hi2Lo framework were performed using Dakota version 6.6 (release date May 15, 2017 – Reference 3). Experimental data from Westinghouse Electric Company (WEC – Reference 4) were used throughout the demonstrated process to compare with the high-fidelity STAR results. A CTF parameter called Beta was chosen as the calibration parameter for this work. By default, Beta is defined as a constant mixing coefficient in CTF and is essentially a tuning parameter for mixing between subchannels. Since CTF does not have turbulence models like STAR, Beta is the parameter that performs the function most similar to the turbulence models in STAR. The purpose of the work performed in this milestone is to tune Beta to an optimal value that brings the CTF results closer to those measured in the WEC experiments.
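The tuning step can be pictured as a one-parameter optimisation. The toy sketch below calibrates a stand-in mixing coefficient Beta so a crude low-fidelity profile best matches a synthetic "high-fidelity" one; both models are invented surrogates, not CTF or STAR, and the profile shapes are placeholders.

```python
# Hedged toy illustration of the Hi2Lo calibration idea: fit a single
# tuning parameter Beta of a low-fidelity model to high-fidelity data.
import numpy as np
from scipy.optimize import minimize_scalar

z = np.linspace(0.0, 1.0, 50)                   # axial positions (arbitrary)
t_hifi = 1.0 - np.exp(-4.0 * z)                 # synthetic "STAR-like" profile

def t_lofi(beta):
    """Stand-in low-fidelity profile parameterised by a mixing coefficient."""
    return 1.0 - np.exp(-beta * z)

res = minimize_scalar(lambda b: np.sum((t_lofi(b) - t_hifi)**2),
                      bounds=(0.1, 10.0), method="bounded")
print("calibrated Beta:", res.x)                # ~4.0 by construction
```

In the actual milestone this role is played by Dakota's calibration machinery with uncertainties and error bounds attached to the high-fidelity data, rather than a bare least-squares fit.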
NASA Astrophysics Data System (ADS)
Feng, Zhixin
2018-02-01
Projector calibration is crucial for a camera-projector three-dimensional (3-D) structured light measurement system, which has one camera and one projector. In this paper, a novel projector calibration method is proposed based on digital image correlation. In the method, the projector is viewed as an inverse camera, and a plane calibration board with feature points is used to calibrate the projector. During the calibration process, a random speckle pattern is projected onto the calibration board at different orientations to establish the correspondences between projector images and camera images. Thereby, a dataset for projector calibration is generated, and the projector can be calibrated using a well-established camera calibration algorithm. The experimental results confirm that the proposed method is accurate and reliable for projector calibration.
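Once the board-to-projector-image correspondences are established, the final step is a standard camera calibration. A hedged sketch using OpenCV's calibrateCamera on synthetic correspondences follows; the grid, poses, and intrinsics are invented for illustration and the speckle-correlation step is not shown.

```python
# Hedged sketch: calibrating a device "as a camera" from planar-board
# correspondences, the well-established algorithm the paper refers to.
import numpy as np
import cv2

K_true = np.array([[1000., 0., 512.], [0., 1000., 384.], [0., 0., 1.]])
board = np.array([[x, y, 0.] for y in range(5) for x in range(7)],
                 np.float32) * 30.0              # 7x5 grid, 30 mm pitch

obj_pts, img_pts = [], []
poses = [((0.10, 0.00, 0.0), (-90., -60., 400.)),   # three distinct board poses
         ((-0.15, 0.20, 0.0), (-80., -70., 450.)),
         ((0.20, -0.20, 0.1), (-100., -50., 500.))]
for r, t in poses:
    proj, _ = cv2.projectPoints(board, np.array(r), np.array(t), K_true, None)
    obj_pts.append(board)
    img_pts.append(proj.astype(np.float32))

rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, (1024, 768),
                                         None, None)
print(rms)
print(K)          # recovered intrinsics should approach K_true
```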
Experimental confirmation of the atomic force microscope cantilever stiffness tilt correction
NASA Astrophysics Data System (ADS)
Gates, Richard S.
2017-12-01
The tilt angle (angle of repose) of an AFM cantilever relative to the surface it is interrogating affects the effective stiffness of the cantilever as it analyzes the surface. For typical AFMs, with cantilevers inclined at 10° to 15°, this is thought to be a 3%-7% stiffness increase. While the theoretical geometric analysis of this effect may have reached a consensus that it varies with cos⁻²θ, there is very little experimental evidence to confirm this using AFM cantilevers. Recently, the laser Doppler vibrometry thermal calibration method utilized at NIST has demonstrated sufficient stiffness calibration accuracy and precision to allow a definitive experimental confirmation of this particular trigonometric form of the tilt effect, using a commercial microfabricated AFM cantilever specially modified to allow strongly tilted (up to 15°) effective cantilever stiffness measurements.
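A quick numerical check of the cos⁻²θ form quoted above (the function name and values are illustrative):

```python
# Minimal sketch of the cos^-2(theta) tilt correction: a stiffness
# calibrated with the cantilever level is scaled up at a tilt angle.
import numpy as np

def effective_stiffness(k_level, tilt_deg):
    """Effective on-surface stiffness of a cantilever tilted by tilt_deg."""
    return k_level / np.cos(np.radians(tilt_deg)) ** 2

# ~3% at 10 deg and ~7% at 15 deg, matching the 3%-7% range cited above
for angle in (10.0, 12.0, 15.0):
    print(angle, effective_stiffness(1.0, angle))
```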
NASA Astrophysics Data System (ADS)
Khrustalev, K.
2016-12-01
The current process for the calibration of the beta-gamma detectors used for radioxenon isotope measurements for CTBT purposes is laborious and time consuming. It uses a combination of point sources and gaseous sources, resulting in differences between the energy and resolution calibrations. The emergence of high resolution SiPIN-based electron detectors allows improvements in the calibration and analysis process to be made. Thanks to the high electron resolution of SiPIN detectors (~8-9 keV at 129 keV) compared to plastic scintillators (~35 keV at 129 keV), many more conversion electron (CE) peaks (from radioxenon and radon progenies) can be resolved and used for energy and resolution calibration in the energy range of the CTBT-relevant radioxenon isotopes. The long term stability of the SiPIN energy calibration allows one to significantly reduce the time of the QC measurements needed for checking the stability of the E/R calibration. The second order polynomials currently used for the E/R calibration fitting are unphysical and shall be replaced by a linear energy calibration for NaI and SiPIN, owing to the high linearity and dynamic range of modern digital DAQ systems; the resolution calibration functions shall be modified to reflect the underlying physical processes. Alternatively, one can completely abandon the use of fitting functions and use only point values of E/R (similar to the efficiency calibration currently used) at the energies relevant for the isotopes of interest (ROIs - regions of interest). The current analysis treats the detector as a set of single channel analysers, with an established set of coefficients relating the positions of the ROIs to the positions of the QC peaks. The analysis of the spectra can be made more robust by using peak and background fitting in the ROIs, with a single free parameter (peak area) for the potential peaks from the known isotopes and a fixed set of E/R calibration values.
Process for producing laser-formed video calibration markers.
Franck, J B; Keller, P N; Swing, R A; Silberberg, G G
1983-08-15
A process for producing calibration markers directly on the photoconductive surface of video camera tubes has been developed. This process uses a Nd:YAG laser operating at 1.06 μm with a 9.5-ns pulse width (full width at half-maximum). The laser was constrained to operate in the TEM₀₀ spatial mode by intracavity aperturing. The use of this technology has produced up to a 50-fold increase in the accuracy of geometric measurement, accomplished through a decrease in geometric distortion and an increase in geometric scaling. The process by which these laser-formed video calibration markers are made is discussed.
Data multiplexing in radio interferometric calibration
NASA Astrophysics Data System (ADS)
Yatawatta, Sarod; Diblen, Faruk; Spreeuw, Hanno; Koopmans, L. V. E.
2018-03-01
New and upcoming radio interferometers will produce unprecedented amounts of data that demand extremely powerful computers for processing. This is a limiting factor due to the large computational power and energy costs involved. Such limitations restrict several key data processing steps in radio interferometry. One such step is calibration, where systematic errors in the data are determined and corrected. Accurate calibration is an essential component in reaching many scientific goals in radio astronomy, and the use of consensus optimization that exploits the continuity of systematic errors across frequency significantly improves calibration accuracy. In order to reach full consensus, data at all frequencies need to be calibrated simultaneously. In the SKA regime, this can become intractable if the available compute agents do not have the resources to process data from all frequency channels simultaneously. In this paper, we propose a multiplexing scheme that is based on the alternating direction method of multipliers with cyclic updates. With this scheme, it is possible to simultaneously calibrate the full data set using far fewer compute agents than the number of frequencies at which data are available. We give simulation results to show the feasibility of the proposed multiplexing scheme in simultaneously calibrating a full data set when a limited number of compute agents are available.
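A loose sketch of the consensus idea follows: per-channel solutions are pulled toward a shared consensus variable via ADMM, and only a block of channels (the available "compute agents") is refreshed per sweep. The scalar quadratic losses and update schedule are invented stand-ins for the actual calibration cost and the paper's cyclic-update scheme.

```python
# Hedged sketch of consensus ADMM across frequency channels, with a
# cyclic-update twist: only `agents` channels are refreshed per pass.
import numpy as np

rng = np.random.default_rng(1)
b = 3.0 + 0.1 * rng.standard_normal(8)   # per-channel noisy local estimates
x = b.copy()
u = np.zeros_like(b)                     # scaled dual variables
z = x.mean()                             # consensus variable
rho, agents = 1.0, 2                     # only 2 "agents" available per sweep

for sweep in range(40):
    for f in range(0, len(b), agents):   # cyclic block of channels
        idx = slice(f, f + agents)
        # closed-form local update of min (x - b)^2 + rho/2 (x - z + u)^2
        x[idx] = (2 * b[idx] + rho * (z - u[idx])) / (2 + rho)
        z = np.mean(x + u)               # consensus update
        u[idx] += x[idx] - z             # dual update for the refreshed block

print(z)   # all channels are driven to a common (consensus) solution
```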
Calibration of the highway safety manual for Missouri.
DOT National Transportation Integrated Search
2013-12-01
The new Highway Safety Manual (HSM) contains predictive models that need to be calibrated to local conditions. This calibration process requires detailed data types, such as crash frequencies, traffic volumes, geometrics, and land-use. The HSM do...
Torralba, Marta; Díaz-Pérez, Lucía C.
2017-01-01
This article presents a self-calibration procedure and the experimental results for the geometrical characterisation of a 2D laser system operating along a large working range (50 mm × 50 mm) with submicrometre uncertainty. Its purpose is to correct the geometric errors of the 2D laser system setup generated when positioning the two laser heads and the plane mirrors used as reflectors. The non-calibrated artefact used in this procedure is a commercial grid encoder that is also a measuring instrument. Therefore, the self-calibration procedure also allows the determination of the geometrical errors of the grid encoder, including its squareness error. The precision of the proposed algorithm is tested using virtual data. Actual measurements are subsequently registered, and the algorithm is applied. Once the laser system is characterised, the error of the grid encoder is calculated along the working range, resulting in an expanded submicrometre calibration uncertainty (k = 2) for the X and Y axes. The results of the grid encoder calibration are comparable to the errors provided by the calibration certificate for its main central axes. It is, therefore, possible to confirm the suitability of the self-calibration methodology proposed in this article. PMID:28858239
NASA Astrophysics Data System (ADS)
Jiang, Sanyuan; Jomaa, Seifeddine; Büttner, Olaf; Rode, Michael
2014-05-01
Hydrological water quality modeling is increasingly used for investigating runoff and nutrient transport processes as well as for watershed management, but it is mostly unclear how data availability determines model identification. In this study, the HYPE (HYdrological Predictions for the Environment) model, which is a process-based, semi-distributed hydrological water quality model, was applied in two different mesoscale catchments (Selke (463 km²) and Weida (99 km²)) located in central Germany to simulate discharge and inorganic nitrogen (IN) transport. PEST and DREAM(ZS) were combined with the HYPE model to conduct parameter calibration and uncertainty analysis. A split-sample test was used for model calibration (1994-1999) and validation (1999-2004). IN concentration and daily IN load were found to be highly correlated with discharge, indicating that IN leaching is mainly controlled by runoff. Both the dynamics and the balances of water and IN load were well captured, with NSE greater than 0.83 during the validation period. Multi-objective calibration (calibrating hydrological and water quality parameters simultaneously) was found to outperform step-wise calibration in terms of model robustness. Multi-site calibration was able to improve model performance at internal sites and to decrease parameter posterior uncertainty and prediction uncertainty. Nitrogen-process parameters calibrated using continuous daily averages of nitrate-N concentration observations produced better and more robust simulations of IN concentration and load, lower posterior parameter uncertainty, and lower IN concentration prediction uncertainty compared to calibration against discontinuous biweekly nitrate-N concentration measurements. Both PEST and DREAM(ZS) are efficient in parameter calibration; however, DREAM(ZS) is more sound in terms of parameter identification and uncertainty analysis than PEST because of its capability to evolve parameter posterior distributions and estimate prediction uncertainty based on global search and Bayesian inference schemes.
NASA Astrophysics Data System (ADS)
Hu, Taiyang; Lv, Rongchuan; Jin, Xu; Li, Hao; Chen, Wenxin
2018-01-01
The nonlinear bias analysis and correction of receiving channels in the Chinese FY-3C meteorological satellite Microwave Temperature Sounder (MWTS) is a key technology for the assimilation of satellite radiance data. The thermal-vacuum chamber calibration data acquired from the MWTS can be analyzed to evaluate the instrument performance, including radiometric temperature sensitivity, channel nonlinearity and calibration accuracy. In particular, the nonlinearity parameters due to imperfect square-law detectors are calculated from the calibration data and further used to correct the nonlinear bias contributions of the microwave receiving channels. Based upon the operational principles and thermal-vacuum chamber calibration procedures of the MWTS, this paper focuses on nonlinear bias analysis and correction methods for improving the calibration accuracy of this important instrument onboard the FY-3C meteorological satellite, from the perspective of theoretical and experimental studies. Furthermore, a series of original results are presented to demonstrate the feasibility and significance of the methods.
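As a rough illustration of the kind of correction involved, the sketch below fits a quadratic count-to-temperature relation from three reference points and uses the quadratic coefficient as the nonlinearity term. The temperatures and counts are invented placeholders, not MWTS thermal-vacuum data, and the actual MWTS processing differs in detail.

```python
# Hedged sketch of a quadratic (square-law-detector) calibration:
# solve T = a0 + a1*C + a2*C^2 from three reference targets, then
# apply it to convert scene counts to brightness temperature.
import numpy as np

T_ref = np.array([80.0, 180.0, 300.0])      # reference temperatures (K), invented
C_ref = np.array([1200., 2900., 5000.])     # measured counts, invented
a2, a1, a0 = np.polyfit(C_ref, T_ref, 2)    # quadratic fit; a2 is the nonlinearity

def counts_to_tb(c):
    """Brightness temperature from raw counts via the quadratic calibration."""
    return a0 + a1 * c + a2 * c**2

print(counts_to_tb(4000.0))
```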
Absolute calibration of neutron detectors on the C-2U advanced beam-driven FRC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Magee, R. M., E-mail: rmagee@trialphaenergy.com; Clary, R.; Korepanov, S.
2016-11-15
In the C-2U fusion energy experiment, high power neutral beam injection creates a large fast ion population that sustains a field-reversed configuration (FRC) plasma. The diagnosis of the fast ion pressure in these high-performance plasmas is therefore critical, and the measurement of the flux of neutrons from the deuterium-deuterium (D-D) fusion reaction is well suited to the task. Here we describe the absolute, in situ calibration of scintillation neutron detectors via two independent methods: firing deuterium beams into a high density gas target, and calibration with a 2 × 10⁷ n/s AmBe source. The practical issues of each method are discussed and the resulting calibration factors are shown to be in good agreement. Finally, the calibration factor is applied to C-2U experimental data, where the measured neutron rate is found to exceed the classical expectation.
Multisensory visual servoing by a neural network.
Wei, G Q; Hirzinger, G
1999-01-01
Conventional computer vision methods for determining a robot's end-effector motion based on sensory data need sensor calibration (e.g., camera calibration) and sensor-to-hand calibration (e.g., hand-eye calibration). This involves many computations and even some difficulties, especially when different kinds of sensors are involved. In this correspondence, we present a neural network approach to the motion determination problem without any calibration. Two kinds of sensory data, namely camera images and laser range data, are used as the input to a multilayer feedforward network to learn the direct transformation from the sensory data to the required motions. This provides a practical sensor fusion method. Using a recursive motion strategy together with a network correction, we relax the requirement for exactness of the learned transformation. Another important feature of our work is that the goal position can be changed without retraining the network. Experimental results show the effectiveness of our method.
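A minimal sketch of the kind of network described: a multilayer feedforward net mapping fused image and range features directly to a 6-DOF motion command. The layer sizes and feature dimensions are invented; the original work used its own architecture and training data.

```python
# Hedged sketch: calibration-free sensor fusion via a feedforward net
# that maps concatenated image and range features to an end-effector
# motion, one step of which would be applied per recursive-servoing cycle.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(16 + 4, 32),   # e.g. 16 image features + 4 range readings
    nn.Tanh(),
    nn.Linear(32, 6),        # 6-DOF motion (translation + rotation)
)

features = torch.randn(1, 20)   # placeholder fused sensor vector
motion = net(features)          # one servoing step toward the goal
print(motion.shape)             # torch.Size([1, 6])
```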
Broadband interferometric characterisation of nano-positioning stages with sub-10 pm resolution
NASA Astrophysics Data System (ADS)
Li, Zhi; Brand, Uwe; Wolff, Helmut; Koenders, Ludger; Yacoot, Andrew; Puranto, Prabowo
2017-06-01
A traceable calibration setup for investigating the quasi-static and dynamic performance of nano-positioning stages is detailed, which utilizes a differential plane-mirror interferometer in double-pass configuration from the National Physical Laboratory (NPL). An NPL-developed FPGA-based interferometric data acquisition and decoding system has been used to enable traceable quasi-static calibration of nano-positioning stages with high resolution. A lock-in based modulation technique is further introduced to quantitatively calibrate the dynamic response of moving stages with a bandwidth up to 100 kHz and picometer resolution. First experimental results have shown that, under nearly open-air conditions, the calibration setup can achieve a noise floor lower than 10 pm/√Hz. A pico-positioning stage, used for nanoindentation with indentation depths down to a few picometers, has been characterized with this calibration setup.
Readiness of the ATLAS Tile Calorimeter for LHC collisions
Aad, G.; Abbott, B.; Abdallah, J.; ...
2010-12-08
The Tile hadronic calorimeter of the ATLAS detector has undergone extensive testing in the experimental hall since its installation in late 2005. The readout, control and calibration systems have been fully operational since 2007, and the detector has successfully collected data from the LHC single beams in 2008 and first collisions in 2009. This paper gives an overview of the Tile Calorimeter performance as measured using random triggers, calibration data, data from cosmic ray muons and single beam data. The detector operation status, noise characteristics and performance of the calibration systems are presented, as well as the validation of the timing and energy calibration carried out with minimum ionising cosmic ray muon data. The calibration systems' precision is well below the design value of 1%. The determination of the global energy scale was performed with an uncertainty of 4%. © 2010 CERN for the benefit of the ATLAS collaboration.